An Unsupervised Word Sense Disambiguation System for Under-Resourced Languages

Dmitry Ustalov, Denis Teslenko, Alexander Panchenko, Mikhail Chernoskutov, Chris Biemann, Simone Paolo Ponzetto
Data and Web Science Group, University of Mannheim, Germany
Ural Federal University, Russia
Universität Hamburg, Department of Informatics, Language Technology Group, Germany
{dmitry,simone}@informatik.uni-mannheim.de, teslenkoden@gmail.com, mikhail.chernoskutov@urfu.ru, {panchenko,biemann}@informatik.uni-hamburg.de

Abstract

In this paper, we present Watasense, an unsupervised system for word sense disambiguation. Given a sentence, the system chooses the most relevant sense of each input word with respect to the semantic similarity between the given sentence and the synset constituting the sense of the target word. Watasense has two modes of operation. The sparse mode uses the traditional vector space model to estimate the most similar word sense corresponding to its context. The dense mode, instead, uses synset embeddings to cope with the sparsity problem. We describe the architecture of the present system and conduct an evaluation on three different lexical semantic resources for Russian. We find that the dense mode substantially outperforms the sparse one on all datasets according to the adjusted Rand index.

Keywords: word sense disambiguation, system, synset induction

1. Introduction

Word sense disambiguation (WSD) is a natural language processing task of identifying the particular senses of polysemous words used in a sentence. Recently, much attention has been paid to the problem of WSD for the Russian language [Lopukhin and Lopukhina, 2016, Lopukhin et al., 2017, Ustalov et al., 2017]. This problem is especially difficult because of both linguistic issues – namely, the rich morphology of Russian and of Slavic languages in general – and technical challenges, such as the lack of software and language resources required for addressing the problem.

To address these issues, we present Watasense, an unsupervised system for word sense disambiguation. We describe its architecture and conduct an evaluation on three datasets for Russian. The choice of an unsupervised system is motivated by the absence of resources that would enable a supervised system for under-resourced languages. Watasense is not strictly tied to the Russian language and can be applied to any language for which a tokenizer, part-of-speech tagger, lemmatizer, and a sense inventory are available.

The rest of the paper is organized as follows. Section 2 reviews related work. Section 3 presents the Watasense word sense disambiguation system, describes its architecture, and introduces the unsupervised word sense disambiguation methods bundled with it. Section 4 evaluates the system on a gold standard for Russian. Section 5 concludes with final remarks.

2. Related Work

Although the problem of WSD has been addressed in many SemEval campaigns [Navigli et al., 2007, Agirre et al., 2010, Manandhar et al., 2010, inter alia], we focus here on word sense disambiguation systems rather than on the research methodologies.

Among the freely available systems, IMS (“It Makes Sense”) is a supervised WSD system designed initially for the English language [Zhong and Ng, 2010]. The system uses a support vector machine classifier to infer the particular sense of a word in a sentence given its contextual sentence-level features. Pywsd is a library for the Python programming language that implements several popular WSD algorithms (https://github.com/alvations/pywsd). It offers both the classical Lesk algorithm for WSD and path-based algorithms that rely heavily on WordNet and similar lexical ontologies. DKPro WSD [Miller et al., 2013] is a general-purpose framework for WSD that uses a lexical ontology as the sense inventory and offers a variety of WordNet-based algorithms. Babelfy [Moro et al., 2014] is a WSD system that uses BabelNet, a large-scale multilingual lexical ontology available for most natural languages. Due to the broad coverage of BabelNet, Babelfy offers entity linking as part of its WSD functionality.

Panchenko et al. (2017b) present an unsupervised WSD system that is also knowledge-free: its sense inventory is induced based on the JoBimText framework, and disambiguation is performed by computing the semantic similarity between the context and the candidate senses [Biemann and Riedl, 2013]. Pelevina et al. (2016) proposed a similar approach to WSD, but based on dense vector representations (word embeddings), called SenseGram. Similarly to SenseGram, our WSD system is based on averaging word embeddings on the basis of an automatically induced sense inventory. A crucial difference, however, is that we induce our sense inventory from synonymy dictionaries and not from distributional word vectors. While this requires more manually created resources, a potential advantage of our approach is that the resulting inventory contains less noise.

Figure 1: A snapshot of the online demo, which is available at http://watasense.nlpub.org/ (in Russian).

3. Watasense, an Unsupervised System for Word Sense Disambiguation

Watasense is implemented in the Python programming language using the scikit-learn [Pedregosa and others, 2011] and Gensim [Řehuřek and Sojka, 2010] libraries. Watasense offers a Web interface (Figure 1), a command-line tool, and an application programming interface (API) for deployment within other applications.

3.1. System Architecture

A sentence is represented as a list of spans. A span is a quadruple (w, p, l, i), where w is the word or token, p is its part-of-speech tag, l is the lemma, and i is the position of the word in the sentence. These data are provided by a tokenizer, a part-of-speech tagger, and a lemmatizer specific to the given language. The WSD results are represented as a map of spans to the corresponding word sense identifiers.
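As an illustration, a span can be modeled as a simple immutable record. The following is a minimal sketch in Python; the field names are our own assumptions, not necessarily those used in the Watasense codebase.

```python
from typing import NamedTuple

class Span(NamedTuple):
    """Illustrative span record; the field names are assumptions."""
    token: str  # the word or token as it occurs in the text
    pos: str    # part-of-speech tag
    lemma: str  # normalized (dictionary) form
    index: int  # position of the word in the sentence

span = Span(token="experiments", pos="NOUN", lemma="experiment", index=3)
```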

The sense inventory is a list of synsets. A synset is represented by three bags of words: the synonyms, the hypernyms, and the union of the two former, which we call the bag. For performance reasons, an inverted index that maps a word to the set of synsets it is included into is constructed on initialization.

Each word sense disambiguation method extends the BaseWSD class. This class provides the end user with a generic interface for WSD and also encapsulates common routines for data pre-processing. The inherited classes like SparseWSD and DenseWSD should implement the disambiguate_word(…) method that disambiguates the given word in the given sentence. Both classes use the bag representation of synsets on initialization. As a result, not only the synonyms but also the hypernyms corresponding to the synsets are used for WSD. The UML class diagram is presented in Figure 2.

Figure 2: The UML class diagram of Watasense.
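For concreteness, the class hierarchy can be sketched as follows; apart from disambiguate_word(…), all names and the exact data layout are assumptions rather than the actual Watasense implementation.

```python
from abc import ABC, abstractmethod

class BaseWSD(ABC):
    """A minimal sketch of the shared WSD interface; everything except
    disambiguate_word(...) is an assumption, not the actual Watasense code."""

    def __init__(self, inventory):
        # Each synset is stored as its bag: the union of synonyms and hypernyms.
        self.inventory = inventory
        # Inverted index mapping a word to the ids of the synsets containing it.
        self.index = {}
        for sid, bag in enumerate(inventory):
            for word in bag:
                self.index.setdefault(word, set()).add(sid)

    def disambiguate(self, lemmas):
        """Map the position of every known lemma to a synset identifier."""
        return {i: self.disambiguate_word(lemmas, word)
                for i, word in enumerate(lemmas) if word in self.index}

    @abstractmethod
    def disambiguate_word(self, lemmas, word):
        """Disambiguate the given word in the given sentence."""
```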

Watasense supports two sources of word vectors: it can either read a word vector dataset in the binary Word2Vec format or use Word2Vec-Pyro4, a general-purpose word vector server (https://github.com/nlpub/word2vec-pyro4). The use of a remote word vector server is recommended as it reduces the memory footprint of each Watasense process.

3.2. User Interface

Figure 1 shows the Web interface of Watasense. It is composed of two primary activities. The first is the text input and the method selection (Figure 1). The second is the display of the disambiguation results with part-of-speech highlighting (Figure 3). Words with resolved polysemy are underlined; tooltips with the details appear on hover.

Figure 3: The word sense disambiguation results with the word “experiments” selected. The tooltip shows its lemma “experiment”, the synset identifier (36055), and the words forming the synset “experiment”, “experimenting” as well as its hypernyms “attempt”, “reproduction”, “research”, “method”.

3.3. Word Sense Disambiguation

We use two different unsupervised approaches for word sense disambiguation. The first, called ‘sparse model’, uses a straightforward sparse vector space model, as widely used in Information Retrieval, to represent contexts and synsets. The second, called ‘dense model’, represents synsets and contexts in a dense, low-dimensional space by averaging word embeddings.

Sparse Model.

In the vector space model approach, we follow the sparse context-based disambiguation method [Faralli et al., 2016, Panchenko et al., 2017b]. To estimate the sense of a word w in a sentence T, we search for the synset Ŝ that maximizes the cosine similarity to the sentence vector:

\hat{S} = \arg\max_{S \ni w} \cos(\operatorname{vec}(S), \operatorname{vec}(T)),   (1)

where S is the set of words forming the synset and T is the set of words forming the sentence. On initialization, the synsets represented in the sense inventory are transformed into a tf–idf-weighted word–synset sparse matrix, efficiently represented in memory using the compressed sparse row (CSR) format. Given a sentence, a similar transformation is applied to obtain its sparse vector representation in the same space as the word–synset matrix. Then, for each word to disambiguate, we retrieve the synset containing this word that maximizes the cosine similarity between the sparse sentence vector and the sparse synset vector. Let m be the maximal number of synsets containing a word and n be the maximal size of a synset; disambiguation of the whole sentence T then requires O(|T| · m · n) operations using the efficient sparse matrix representation.
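A minimal sketch of this sparse ranking step is shown below, using scikit-learn's TfidfVectorizer and CSR matrices; the toy synsets and the helper function name are illustrative, not taken from the actual inventory or codebase.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy inventory: each synset is the bag of its synonyms and hypernyms.
synsets = [
    ["bank", "river", "shore", "waterside"],
    ["bank", "finance", "credit", "institution"],
]

# On initialization: a tf-idf weighted word-synset matrix in the CSR format.
vectorizer = TfidfVectorizer()
synset_matrix = vectorizer.fit_transform(" ".join(bag) for bag in synsets)

def disambiguate_word(word, sentence_lemmas):
    """Pick the synset containing `word` that is closest to the sentence vector."""
    candidates = [i for i, bag in enumerate(synsets) if word in bag]
    if not candidates:
        return None
    sentence_vector = vectorizer.transform([" ".join(sentence_lemmas)])
    similarities = cosine_similarity(synset_matrix[candidates], sentence_vector)
    return candidates[int(np.argmax(similarities))]

print(disambiguate_word("bank", ["deposit", "money", "credit", "bank"]))  # -> 1
```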

Dense Model.

In the synset embeddings model approach, we follow SenseGram [Pelevina et al., 2016] and apply it to the synsets induced from a graph of synonyms. We transform every synset into its dense vector representation by averaging the word embeddings corresponding to each constituent word:

\operatorname{vec}(S) = \frac{1}{|S|} \sum_{w \in S} \vec{w},   (2)

where \vec{w} denotes the word embedding of w. We apply the same transformation to obtain sentence vectors. Then, given a word w and a sentence T, we find the synset Ŝ that maximizes the cosine similarity to the sentence:

\hat{S} = \arg\max_{S \ni w} \cos(\operatorname{vec}(S), \operatorname{vec}(T)).   (3)

On initialization, we pre-compute the dense synset vectors by averaging the corresponding word embeddings. Given a sentence, we similarly compute the dense sentence vector by averaging the vectors of the words belonging to non-auxiliary parts of speech, i.e., nouns, adjectives, adverbs, verbs, etc. Then, given a word to disambiguate, we retrieve the synset that maximizes the cosine similarity between the dense sentence vector and the dense synset vector. Thus, given the number of dimensions d, disambiguation of the whole sentence T requires O(|T| · m · d) operations, where m is, as above, the maximal number of synsets containing a word.
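The dense ranking step can be sketched as follows with Gensim; the vector file path, the toy synsets, and the function names are placeholders, and a real implementation would cache the pre-computed synset vectors as described above.

```python
import numpy as np
from gensim.models import KeyedVectors

# Placeholder path: any binary word2vec-format file, e.g. the RDT vectors.
wv = KeyedVectors.load_word2vec_format("vectors.bin", binary=True)

def embed_bag(words):
    """Average the embeddings of the in-vocabulary words (Equation 2)."""
    vectors = [wv[w] for w in words if w in wv]
    return np.mean(vectors, axis=0) if vectors else None

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# On initialization: pre-compute the dense synset vectors once.
synsets = [["bank", "river", "shore"], ["bank", "finance", "credit"]]  # toy data
synset_vectors = [embed_bag(bag) for bag in synsets]

def disambiguate_word(word, sentence_lemmas):
    """Pick the synset containing `word` closest to the sentence (Equation 3)."""
    sentence_vector = embed_bag(sentence_lemmas)
    candidates = [i for i, bag in enumerate(synsets)
                  if word in bag and synset_vectors[i] is not None]
    if sentence_vector is None or not candidates:
        return None
    return max(candidates, key=lambda i: cosine(synset_vectors[i], sentence_vector))
```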

4. Evaluation

We conduct our experiments using the evaluation methodology of SemEval 2010 Task 14: Word Sense Induction & Disambiguation [Manandhar et al., 2010]. In the gold standard, each word is provided with a set of instances, i.e., the sentences containing the word. Each instance is manually annotated with a single sense identifier according to a pre-defined sense inventory. Each participating system estimates the sense labels for these ambiguous words, which can be viewed as a clustering of instances according to sense labels. The system's clustering is compared to the gold-standard clustering for evaluation.

4.1. Quality Measure

The original SemEval 2010 Task 14 used the V-Measure external clustering measure [Manandhar et al., 2010]. However, this measure is maximized by clustering each sentence into its own distinct cluster, i.e., the ‘dummy’ singleton baseline, which a system achieves by deciding that every ambiguous word in every sentence corresponds to a different word sense. To cope with this issue, we follow a similar study [Lopukhin et al., 2017] and instead use the adjusted Rand index (ARI) proposed by Hubert and Arabie (1985) as an evaluation measure.

In order to provide the overall value of ARI, we follow the aggregation approach used in [Lopukhin et al., 2017]. Since the quality measure is computed for each lemma individually, the total value is a weighted sum, namely

\operatorname{ARI} = \frac{\sum_{l \in L} |I_l| \cdot \operatorname{ARI}_l}{\sum_{l \in L} |I_l|},   (4)

where l ∈ L is a lemma, I_l is the set of instances for the lemma l, and \operatorname{ARI}_l is the adjusted Rand index computed for the lemma l. Thus, the contribution of each lemma to the total score is proportional to the number of instances of this lemma.
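The aggregation in Equation (4) is straightforward to compute with scikit-learn; the sketch below assumes per-lemma gold and predicted labelings and uses made-up data for illustration.

```python
from sklearn.metrics import adjusted_rand_score

def weighted_ari(per_lemma):
    """Aggregate per-lemma ARI weighted by the number of instances (Equation 4).

    `per_lemma` maps each lemma to a pair (gold_labels, predicted_labels)
    over its instances; both labelings must have the same length.
    """
    total = sum(len(gold) * adjusted_rand_score(gold, pred)
                for gold, pred in per_lemma.values())
    count = sum(len(gold) for gold, _ in per_lemma.values())
    return total / count if count else 0.0

# Toy usage with made-up labelings:
print(weighted_ari({
    "bank": ([0, 0, 1, 1], [0, 1, 1, 1]),
    "plant": ([0, 1], [0, 1]),
}))
```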

4.2. Dataset

We evaluate the word sense disambiguation methods in Watasense against three baselines: an unsupervised approach for learning multi-prototype word embeddings called AdaGram [Bartunov et al., 2016], assigning the same sense to all instances of a lemma (One), and assigning one sense per instance (Singletons). The AdaGram model is trained on the combination of RuWac, Lib.Ru, and the Russian Wikipedia, with an overall size of 2 billion tokens [Lopukhin et al., 2017].

As the gold-standard dataset, we use the WSD training dataset for Russian created during RUSSE’2018: A Shared Task on Word Sense Induction and Disambiguation for the Russian Language [Panchenko et al., 2018]. The dataset comprises two subsets, bts-rnc and wiki-wiki, each providing a set of ambiguous words with their annotated instances (http://russe.nlpub.org/2018/wsi/).

Three different sense inventories have been used during the evaluation:

  • Watlink, a sense inventory induced automatically from a graph of synonyms using the Watset method and expanded with hypernyms [Ustalov et al., 2017, Ustalov, 2017];
  • RuThes, a large-scale lexical ontology for Russian created by expert lexicographers [Loukachevitch, 2011];
  • RuWordNet, a WordNet-like resource obtained by conversion from RuThes [Loukachevitch et al., 2016].

Since the dense model requires word embeddings, we used the 500-dimensional word vectors from the Russian Distributional Thesaurus [Panchenko et al., 2017a] (https://doi.org/10.5281/zenodo.400631). These vectors were obtained using the Skip-gram approach trained on the lib.rus.ec text corpus.

4.3. Results

We compare the evaluation results obtained for the Sparse and Dense approaches with the three baselines described above: the AdaGram model (AdaGram), the same sense for all instances of a lemma (One), and one sense per instance (Singletons). The evaluation results are presented in Table 1. The columns bts-rnc and wiki-wiki report the overall value of ARI according to Equation (4). The column Avg. contains the weighted average over the two datasets w.r.t. the number of instances.

Method              bts-rnc  wiki-wiki  Avg.
AdaGram
Watlink (Sparse)
Watlink (Dense)
RuThes (Sparse)
RuThes (Dense)
RuWordNet (Sparse)
RuWordNet (Dense)
One
Singletons
Table 1: Results on RUSSE’2018 (Adjusted Rand Index).

We observe that the SenseGram-based dense approach yields substantially better results than the sparse one in every case (Table 1). The primary reason for that is the implicit handling of similar words due to the averaging of dense word vectors for semantically related words. Thus, we recommend using the dense approach in further studies. Although the AdaGram approach trained on a large text corpus shows better results according to the weighted average, this result does not transfer to languages for which corpora of comparable size are unavailable.

5. Conclusion

In this paper, we presented Watasense (https://github.com/nlpub/watasense), an open-source unsupervised word sense disambiguation system that is parameterized only by a word sense inventory. It supports both sparse and dense sense representations. We showed that the dense approach substantially outperforms the sparse one on three different sense inventories for Russian. We recommend using the dense approach in further studies due to its smoothing capabilities that reduce sparseness. In future work, we will look at the problem of phrase neighbors that influence the sentence vector representations.

Finally, we would like to emphasize that Watasense has a simple API for integrating different algorithms for WSD. At the same time, it requires only a basic set of language processing tools to be available: a tokenizer, a part-of-speech tagger, a lemmatizer, and a sense inventory, which means that under-resourced languages can benefit from its usage.

6. Acknowledgements

We acknowledge the support of the Deutsche Forschungsgemeinschaft (DFG) under the project “Joining Ontologies and Semantics Induced from Text” (JOIN-T), the RFBR under the projects no. 16-37-00203 mol_a and no. 16-37-00354 mol_a, and the RFH under the project no. 16-04-12019. The research was supported by the Ministry of Education and Science of the Russian Federation Agreement no. 02.A03.21.0006. The calculations were carried out using the supercomputer “Uran” at the Krasovskii Institute of Mathematics and Mechanics.

7. Bibliographical References

References

  • Agirre et al., 2010 Agirre, E., de Lacalle, O. L., Fellbaum, C., Hsieh, S.-K., Tesconi, M., Monachini, M., Vossen, P., and Segers, R. (2010). SemEval-2010 Task 17: All-words Word Sense Disambiguation on a Specific Domain. In Proceedings of the 5th International Workshop on Semantic Evaluation, SemEval ’10, pages 75–80, Los Angeles, CA, USA. Association for Computational Linguistics.
  • Bartunov et al., 2016 Bartunov, S., Kondrashkin, D., Osokin, A., and Vetrov, D. P. (2016). Breaking Sticks and Ambiguities with Adaptive Skip-gram. Journal of Machine Learning Research, 51:130–138.
  • Biemann and Riedl, 2013 Biemann, C. and Riedl, M. (2013). Text: now in 2D! A framework for lexical expansion with contextual similarity. Journal of Language Modelling, 1(1):55–95.
  • Faralli et al., 2016 Faralli, S., Panchenko, A., Biemann, C., and Ponzetto, S. P. (2016). Linked Disambiguated Distributional Semantic Networks. In The Semantic Web – ISWC 2016: 15th International Semantic Web Conference, Kobe, Japan, October 17–21, 2016, Proceedings, Part II, pages 56–64, Cham, Germany. Springer International Publishing.
  • Hubert and Arabie, 1985 Hubert, L. and Arabie, P. (1985). Comparing partitions. Journal of Classification, 2(1):193–218.
  • Lopukhin and Lopukhina, 2016 Lopukhin, K. A. and Lopukhina, A. A. (2016). Word Sense Disambiguation for Russian Verbs Using Semantic Vectors and Dictionary Entries. In Computational Linguistics and Intellectual Technologies: Papers from the Annual conference “Dialogue”, pages 393–404, Moscow, Russia. RSUH.
  • Lopukhin et al., 2017 Lopukhin, K. A., Iomdin, B. L., and Lopukhina, A. A. (2017). Word Sense Induction for Russian: Deep Study and Comparison with Dictionaries. In Computational Linguistics and Intellectual Technologies: Papers from the Annual conference “Dialogue”. Volume 1 of 2. Computational Linguistics: Practical Applications, pages 121–134, Moscow, Russia. RSUH.
  • Loukachevitch et al., 2016 Loukachevitch, N. V., Lashevich, G., Gerasimova, A. A., Ivanov, V. V., and Dobrov, B. V. (2016). Creating Russian WordNet by Conversion. In Computational Linguistics and Intellectual Technologies: papers from the Annual conference “Dialogue”, pages 405–415, Moscow, Russia. RSUH.
  • Loukachevitch, 2011 Loukachevitch, N. V. (2011). Thesauri in information retrieval tasks. Moscow University Press, Moscow, Russia. In Russian.
  • Manandhar et al., 2010 Manandhar, S., Klapaftis, I., Dligach, D., and Pradhan, S. (2010). SemEval-2010 Task 14: Word Sense Induction & Disambiguation. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 63–68, Uppsala, Sweden. Association for Computational Linguistics.
  • Miller et al., 2013 Miller, T., Erbs, N., Zorn, H.-P., Zesch, T., and Gurevych, I. (2013). DKPro WSD: A Generalized UIMA-based Framework for Word Sense Disambiguation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 37–42, Sofia, Bulgaria.
  • Moro et al., 2014 Moro, A., Raganato, A., and Navigli, R. (2014). Entity Linking meets Word Sense Disambiguation: A Unified Approach. Transactions of the Association for Computational Linguistics, 2:231–244.
  • Navigli et al., 2007 Navigli, R., Litkowski, K. C., and Hargraves, O. (2007). SemEval-2007 Task 07: Coarse-Grained English All-Words Task. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 30–35, Prague, Czech Republic. Association for Computational Linguistics.
  • Panchenko et al., 2017a Panchenko, A., Ustalov, D., Arefyev, N., Paperno, D., Konstantinova, N., Loukachevitch, N., and Biemann, C., (2017a). Human and Machine Judgements for Russian Semantic Relatedness, pages 221–235. Springer International Publishing, Cham, Germany.
  • Panchenko et al., 2017b Panchenko, A., Marten, F., Ruppert, E., Faralli, S., Ustalov, D., Ponzetto, S. P., and Biemann, C. (2017b). Unsupervised, Knowledge-Free, and Interpretable Word Sense Disambiguation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 91–96, Copenhagen, Denmark. Association for Computational Linguistics.
  • Panchenko et al., 2018 Panchenko, A., Lopukhina, A., Ustalov, D., Lopukhin, K., Leontyev, A., Arefyev, N., and Loukachevitch, N. (2018). RUSSE’2018: A Shared Task on Word Sense Induction and Disambiguation for the Russian Language. In Computational Linguistics and Intellectual Technologies: Papers from the Annual conference “Dialogue”, Moscow, Russia. RSUH.
  • Pedregosa and others, 2011 Pedregosa, F. et al. (2011). Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, 12:2825–2830.
  • Pelevina et al., 2016 Pelevina, M., Arefiev, N., Biemann, C., and Panchenko, A. (2016). Making Sense of Word Embeddings. In Proceedings of the 1st Workshop on Representation Learning for NLP, pages 174–183, Berlin, Germany. Association for Computational Linguistics.
  • Ustalov et al., 2017 Ustalov, D., Panchenko, A., and Biemann, C. (2017). Watset: Automatic Induction of Synsets from a Graph of Synonyms. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1579–1590, Vancouver, Canada. Association for Computational Linguistics.
  • Ustalov, 2017 Ustalov, D. (2017). Expanding Hierarchical Contexts for Constructing a Semantic Word Network. In Computational Linguistics and Intellectual Technologies: Papers from the Annual conference “Dialogue”. Volume 1 of 2. Computational Linguistics: Practical Applications, pages 369–381, Moscow, Russia. RSUH.
  • Řehuřek and Sojka, 2010 Řehuřek, R. and Sojka, P. (2010). Software Framework for Topic Modelling with Large Corpora. In New Challenges for NLP Frameworks Programme: A workshop at LREC 2010, pages 51–55, Valetta, Malta. European Language Resources Association (ELRA).
  • Zhong and Ng, 2010 Zhong, Z. and Ng, H. T. (2010). It Makes Sense: A Wide-Coverage Word Sense Disambiguation System for Free Text. In Proceedings of the ACL 2010 System Demonstrations, pages 78–83, Uppsala, Sweden. Association for Computational Linguistics.