Multi-channel Reverse Dictionary Model
A reverse dictionary takes the description of a target word as input and outputs the target word together with other words that match the description. Existing reverse dictionary methods cannot deal with highly variable input queries and low-frequency target words successfully. Inspired by the description-to-word inference process of humans, we propose the multi-channel reverse dictionary model, which can mitigate the two problems simultaneously. Our model comprises a sentence encoder and multiple predictors. The predictors are expected to identify different characteristics of the target word from the input query. We evaluate our model on English and Chinese datasets including both dictionary definitions and human-written descriptions. Experimental results show that our model achieves the state-of-the-art performance, and even outperforms the most popular commercial reverse dictionary system on the human-written description dataset. We also conduct quantitative analyses and a case study to demonstrate the effectiveness and robustness of our model. All the code and data of this work can be obtained on https://github.com/thunlp/MultiRD.
A regular (forward) dictionary maps words to definitions while a reverse dictionary  does the opposite and maps descriptions to corresponding words. In Figure 1, for example, a regular dictionary tells you that “expressway” is “a wide road that allows traffic to travel fast”, and when you input “a road where cars go very quickly without stopping” to a reverse dictionary, it might return “expressway” together with other semantically similar words like “freeway”.
Reverse dictionaries have great practical value. First and foremost, they can effectively address the tip-of-the-tongue problem, which severely afflicts many people, especially those who write a lot, such as researchers, writers and students. Additionally, reverse dictionaries can assist new language learners who know a limited number of words. Moreover, reverse dictionaries are believed to be helpful to word selection (or word dictionary) anomia patients, people who can recognize and describe an object but fail to name it due to a neurological disorder. In terms of natural language processing (NLP), reverse dictionaries can be used to evaluate the quality of sentence representations. They are also beneficial to tasks involving text-to-entity mapping, including question answering and information retrieval.
There have been some successful commercial reverse dictionary systems, such as OneLook.
\citeauthorHill2016LearningTU \shortciteHill2016LearningTU propose a new method based on neural language models (NLMs). They employ an NLM as the sentence encoder to learn the representation of the input query, and return the words whose embeddings are closest to the input query’s representation. The NLM based reverse dictionary model alleviates the above-mentioned problem of variable input queries, but its performance is heavily dependent on the quality of word embeddings. According to Zipf’s law, however, quite a few words are low-frequency and usually have poor embeddings, which undermines the overall performance of ordinary NLM based models.
To tackle the issue, we propose the multi-channel reverse dictionary model, which is inspired by the description-to-word inference process of humans. Taking “expressway” as an example, when we forget what word means “a road where cars go very quickly”, it may occur to us that the part-of-speech tag of the target word should be “noun” and it belongs to the category of “entity”. We might also guess that the target word probably contains the morpheme “way”. When having knowledge of these characteristics, it is much easier for us to search the target word out. Correspondingly, in our multi-channel reverse dictionary model, we employ multiple predictors to identify different characteristics of target words from input queries. By doing this, the target words with poor embeddings can still be picked out by their characteristics and, moreover, the words which have close embeddings to the correct target word but contradictory characteristics to the given description will be filtered out.
We view each characteristic predictor as an information channel of searching the target word. Two types of channels involving internal and external channels are taken into consideration. The internal channels correspond to the characteristics of words themselves including the part-of-speech (POS) tag and morpheme. The external channels reflect characteristics of target words related to external knowledge bases. We take account of two external characteristics including the word category and sememe. The word category information can be obtained from word taxonomy systems and it usually corresponds to the genus words of definitions. A sememe is defined as the minimum semantic unit of human languages , which is similar to the concept of semantic primitive . Sememes of a word depict the meaning of the word atomically, which can be also predicted from the description of the word.
More specifically, we adopt the well-established bi-directional LSTM (BiLSTM)  with attention  as the basic framework and add four feature-specific characteristic predictors to it. In experiments, we evaluate our model on English and Chinese datasets including both dictionary definitions and human-written descriptions, finding that our model achieves the state-of-the-art performance. It is especially worth mentioning that for the first time OneLook is outperformed when input queries are human-written descriptions. In addition, to test our model under other real application scenarios like crossword game, we provide our model with prior knowledge about the target word such as the initial letter, and find it yields substantial performance enhancement. We also conduct detailed quantitative analyses and a case study to demonstrate the effectiveness of our model as well as its robustness in handling polysemous and low-frequency words.
Reverse Dictionary Models
Most existing reverse dictionary models are based on sentence-sentence matching, i.e., comparing the input query with stored word definitions and returning the word whose definition is most similar to the input query [37, 4]. They usually use hand-engineered features, e.g., tf-idf, to measure sentence similarity, and leverage well-established information retrieval techniques to search for the target word. Some of them utilize external knowledge bases like WordNet to enhance sentence similarity measurement by finding synonyms or other pairs of related words between the input query and stored definitions [16, 13, 28].
Recent years have witnessed a growing number of reverse dictionary models which conduct sentence-word matching. \citeauthorThorat2016ImplementingAR \shortciteThorat2016ImplementingAR present a node-graph architecture which can directly measure the similarity between the input query and any word in a word graph. However, it works on a small lexicon only. \citeauthorHill2016LearningTU \shortciteHill2016LearningTU propose an NLM based reverse dictionary model, which uses a bag-of-words (BOW) model or an LSTM to embed the input query into the semantic space of word embeddings, and returns the words whose embeddings are closest to the representation of the input query.
Following the NLM model, \citeauthorMorinagaY18 \shortciteMorinagaY18 incorporate category inference to eliminate irrelevant results and achieve better performance; \citeauthorkartsaklis2018mapping \shortcitekartsaklis2018mapping employ a graph of WordNet synsets and words in definitions to learn target word representations together with a multi-sense LSTM to encode input queries, and they claim to deliver state-of-the-art results; \citeauthorhedderich2019using \shortcitehedderich2019using use multi-sense embeddings when encoding the queries, aiming to improve sentence representations of input queries; \citeauthorpilehvar2019importance \shortcitepilehvar2019importance adopt sense embeddings to disambiguate senses of polysemous target words.
Our multi-channel model also uses an NLM to embed input queries. Compared with previous work, our model employs multiple predictors to identify characteristics of target words, which is consistent with the inference process of humans, and achieves significantly better performance.
Applications of Dictionary Definitions
Dictionary definitions are handy resources for NLP research. Many studies utilize dictionary definitions to improve word embeddings [20, 31, 1, 6, 26]. In addition, dictionary definitions are utilized in various applications including word sense disambiguation , knowledge representation learning , reading comprehension  and knowledge graph generation [30, 22].
In this section, we first introduce some notations. Then we describe our basic framework, i.e., BiLSTM with attention. Next we detail our multi-channel model and its two internal and two external predictors. The architecture of our model is illustrated in Figure 2.
We define $W$ as the vocabulary, $M$ as the whole morpheme set and $P$ as the whole POS tag set. For a given word $w \in W$, its morpheme set is $M_w = \{m^w_1, \ldots, m^w_{|M_w|}\}$, where each of its morphemes $m^w_i \in M$ and $|\cdot|$ denotes the cardinality of a set. A word may have multiple senses and each sense corresponds to a POS tag. Supposing $w$ has $n_w$ senses, all the POS tags of its senses form its POS tag set $P_w = \{p^w_1, \ldots, p^w_{n_w}\}$, where each POS tag $p^w_i \in P$. In subsequent sections, we use lowercase boldface symbols to stand for vectors and uppercase boldface symbols for matrices. For instance, $\mathbf{w}$ is the word vector of $w$ and $\mathbf{W}$ is a weight matrix.
The basic framework of our model is essentially a sentence classification model, composed of a sentence encoder and a classifier. We select the bidirectional LSTM (BiLSTM) as the sentence encoder, which encodes an input query into a vector. Different words in a sentence contribute differently to the representation of the sentence, e.g., the genus words are more important than the modifiers in a definition. Therefore, we integrate the attention mechanism into the BiLSTM to learn better sentence representations.
Formally, for an input query $Q = q_1 q_2 \cdots q_{|Q|}$, we first pass the pre-trained word embeddings $\mathbf{q}_1, \ldots, \mathbf{q}_{|Q|} \in \mathbb{R}^{d}$ of its words to the BiLSTM, where $d$ is the dimension of word embeddings, and obtain two sequences of directional hidden states:

$$\overrightarrow{\mathbf{h}}_i = \overrightarrow{\mathrm{LSTM}}(\mathbf{q}_i, \overrightarrow{\mathbf{h}}_{i-1}), \qquad \overleftarrow{\mathbf{h}}_i = \overleftarrow{\mathrm{LSTM}}(\mathbf{q}_i, \overleftarrow{\mathbf{h}}_{i+1}),$$

where $\overrightarrow{\mathbf{h}}_i, \overleftarrow{\mathbf{h}}_i \in \mathbb{R}^{d_h}$ and $d_h$ is the dimension of directional hidden states. Then we concatenate bi-directional hidden states to obtain non-directional hidden states:

$$\mathbf{h}_i = [\overrightarrow{\mathbf{h}}_i ; \overleftarrow{\mathbf{h}}_i].$$
The final sentence representation $\mathbf{v}$ is the weighted sum of non-directional hidden states:

$$\mathbf{v} = \sum_{i=1}^{|Q|} \alpha_i \mathbf{h}_i,$$

where $\alpha_i$ is the attention item serving as the weight:

$$\alpha_i = \frac{\exp(\mathbf{h}_i^\top \mathbf{u})}{\sum_{j=1}^{|Q|} \exp(\mathbf{h}_j^\top \mathbf{u})},$$

in which $\mathbf{u}$ is a learnable attention vector.
Next we map $\mathbf{v}$, the sentence vector of the input query, into the space of word embeddings, and calculate the confidence score of each word using dot product:

$$sc_{w,word} = \mathbf{w}^\top (\mathbf{W}_{word}\mathbf{v} + \mathbf{b}_{word}),$$

where $sc_{w,word}$ indicates the confidence score of $w$, $\mathbf{W}_{word}$ is a weight matrix, and $\mathbf{b}_{word}$ is a bias vector.
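The attention pooling and word-scoring steps above can be sketched in plain Python. This is a toy illustration under our own assumptions, not the released implementation: the dot-product attention with a learnable vector `attn_u`, and the names `W_map`/`b_map`, are hypothetical.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention_pool(hidden_states, attn_u):
    # Attention weights over the non-directional hidden states h_i,
    # then their weighted sum as the sentence vector v
    weights = softmax([dot(h, attn_u) for h in hidden_states])
    dim = len(hidden_states[0])
    return [sum(a * h[j] for a, h in zip(weights, hidden_states))
            for j in range(dim)]

def word_scores(sentence_vec, word_embeddings, W_map, b_map):
    # Map v into the word embedding space, then score every word
    # by the dot product with its embedding
    mapped = [dot(row, sentence_vec) + b for row, b in zip(W_map, b_map)]
    return {w: dot(emb, mapped) for w, emb in word_embeddings.items()}

# Toy run: 3 hidden states of dimension 4, mapped to 2-d word embeddings
hs = [[0.1, 0.2, 0.0, 0.3], [0.4, 0.1, 0.2, 0.0], [0.0, 0.3, 0.1, 0.1]]
v = attention_pool(hs, attn_u=[1.0, 0.0, 0.0, 0.0])
scores = word_scores(v, {"expressway": [1.0, 0.0], "cat": [0.0, 1.0]},
                     W_map=[[1, 0, 0, 0], [0, 1, 0, 0]], b_map=[0.0, 0.0])
```

In practice the hidden states would come from a trained BiLSTM and all parameters would be learned; here they are fixed toy values.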
Internal Channel: POS Tag Predictor
A dictionary definition or human-written description of a word is usually able to reflect the POS tag of the corresponding sense of the word. We believe that predicting the POS tag of the target word can alleviate the problem of returning words with POS tags contradictory to the input query in existing reverse dictionary models.
We simply pass the sentence vector of the input query to a single-layer perceptron:

$$\mathbf{sc}_{pos} = \mathbf{W}_{pos}\mathbf{v} + \mathbf{b}_{pos},$$

where $\mathbf{sc}_{pos} \in \mathbb{R}^{|P|}$ records the prediction score of each POS tag, $\mathbf{W}_{pos}$ is a weight matrix, and $\mathbf{b}_{pos}$ is a bias vector.
The confidence score of $w$ from the POS tag channel is the sum of the prediction scores of $w$'s POS tags:

$$sc_{w,pos} = \sum_{p \in P_w} [\mathbf{sc}_{pos}]_{idx(p)},$$

where $[\cdot]_k$ denotes the $k$-th element of a vector, and $idx(p)$ returns the POS tag index of $p$.
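As a minimal sketch (the parameter names `W_pos`/`b_pos` are hypothetical and would normally be trained), the POS channel reduces to a linear layer over the sentence vector followed by a gather-and-sum over the target word's tag indices:

```python
def pos_channel_score(sentence_vec, W_pos, b_pos, word_pos_indices):
    # Single-layer perceptron: one prediction score per POS tag
    tag_scores = [sum(w * x for w, x in zip(row, sentence_vec)) + b
                  for row, b in zip(W_pos, b_pos)]
    # Confidence score of the word: sum over the indices of its POS tags
    return sum(tag_scores[i] for i in word_pos_indices)

# Toy run: 2-d sentence vector, 3 POS tags; the target word has tags 0 and 2
score = pos_channel_score([1.0, 2.0],
                          W_pos=[[1, 0], [0, 1], [1, 1]],
                          b_pos=[0.0, 0.0, 0.0],
                          word_pos_indices=[0, 2])
# score == 4.0  (tag scores are [1, 2, 3]; 1 + 3 = 4)
```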
Internal Channel: Morpheme Predictor
Most words are complex words consisting of more than one morpheme. We find there exists a kind of local semantic correspondence between the morphemes of a word and its definition or description. For instance, the word “expressway” has two morphemes “express” and “way”, and its dictionary definition is “a wide road in a city on which cars can travel very quickly”. We can observe that the two words “road” and “quickly” semantically correspond to the two morphemes “way” and “express” respectively. By predicting morphemes of the target word from the input query, a reverse dictionary can capture compositional information of the target word, which is complementary to the contextual information of word embeddings.
We design a special morpheme predictor. Different from the POS tag predictor, we allow each hidden state to be involved in morpheme prediction directly, and do max-pooling to obtain final morpheme prediction scores. Specifically, we feed each non-directional hidden state to a single-layer perceptron and obtain local morpheme prediction scores:

$$\mathbf{sc}^i_{mor} = \mathbf{W}_{mor}\mathbf{h}_i + \mathbf{b}_{mor},$$

where $\mathbf{sc}^i_{mor} \in \mathbb{R}^{|M|}$ measures the semantic correspondence between the $i$-th word in the input query and each morpheme, $\mathbf{W}_{mor}$ is a weight matrix, and $\mathbf{b}_{mor}$ is a bias vector. Then we do max-pooling over all the local morpheme prediction scores to obtain global morpheme prediction scores:

$$[\mathbf{sc}_{mor}]_j = \max_{1 \le i \le |Q|} [\mathbf{sc}^i_{mor}]_j.$$
And the confidence score of $w$ from the morpheme channel is:

$$sc_{w,mor} = \sum_{m \in M_w} [\mathbf{sc}_{mor}]_{idx(m)},$$

where $idx(m)$ returns the morpheme index of $m$.
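The morpheme channel can be sketched the same way; the key difference from the POS channel is that a shared perceptron is applied at every position and then max-pooled over positions. Toy values and hypothetical parameter names:

```python
def morpheme_channel_score(hidden_states, W_mor, b_mor, word_mor_indices):
    # Local scores: the same single-layer perceptron applied to every hidden state
    local = [[sum(w * x for w, x in zip(row, h)) + b
              for row, b in zip(W_mor, b_mor)]
             for h in hidden_states]
    # Max-pooling over positions -> one global score per morpheme
    global_scores = [max(l[j] for l in local) for j in range(len(W_mor))]
    # Confidence score of the word: sum over the indices of its morphemes
    return sum(global_scores[j] for j in word_mor_indices)

# Toy run: 2 hidden states of dim 2, 3 morphemes; target word has morphemes 0 and 1
score = morpheme_channel_score(
    [[1.0, 0.0], [0.0, 1.0]],
    W_mor=[[2, 0], [0, 3], [1, 1]],
    b_mor=[0.0, 0.0, 0.0],
    word_mor_indices=[0, 1])
# score == 5.0  (global scores are [2, 3, 1]; 2 + 3 = 5)
```

The max-pooling realizes the local-correspondence intuition: each morpheme's score is driven by the single query word that matches it best.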
External Channel: Word Category Predictor
Semantically related words often belong to different categories even though they have close word embeddings, e.g., “car” and “road”. Word category information is helpful in eliminating semantically related but not similar words from the results of reverse dictionaries. There are many available word taxonomy systems which can provide hierarchical word category information, e.g., WordNet. Some of them provide POS tag information as well, in which case the POS tag predictor can be removed.
We design a hierarchical predictor to calculate prediction scores of word categories. Specifically, each word belongs to a certain category in each layer of the word hierarchy. We first compute the word category prediction score of each layer:

$$\mathbf{sc}^k_{cat} = \mathbf{W}^k_{cat}\mathbf{v} + \mathbf{b}^k_{cat},$$

where $\mathbf{sc}^k_{cat} \in \mathbb{R}^{C_k}$ is the word category prediction score distribution of the $k$-th layer, $\mathbf{W}^k_{cat}$ is a weight matrix, $\mathbf{b}^k_{cat}$ is a bias vector, and $C_k$ is the category number of the $k$-th layer. Then the final confidence score of $w$ from the word category channel is the weighted sum of its category prediction scores of all the layers:

$$sc_{w,cat} = \sum_{k=1}^{K} \mu^{k} \, [\mathbf{sc}^k_{cat}]_{idx_k(w)},$$

where $K$ is the total layer number of the word hierarchy, $\mu$ is a hyper-parameter controlling the relative weights of the layers, and $idx_k(w)$ returns the category index of $w$ in the $k$-th layer.
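A sketch of the hierarchical category channel. The geometric layer weighting `mu ** k` is only one plausible reading of a single relative-weight hyper-parameter, and the parameter names are hypothetical:

```python
def category_channel_score(sentence_vec, layer_params, word_layer_indices, mu):
    """layer_params: one (W_k, b_k) pair per hierarchy layer;
    word_layer_indices: the word's category index at each layer;
    mu: hyper-parameter weighting the layers (assumed geometric here)."""
    total = 0.0
    for k, ((W, b), idx) in enumerate(zip(layer_params, word_layer_indices), 1):
        # Per-layer perceptron: one score per category of this layer
        layer_scores = [sum(w * x for w, x in zip(row, sentence_vec)) + bb
                        for row, bb in zip(W, b)]
        # Gather the score of the word's category at this layer
        total += (mu ** k) * layer_scores[idx]
    return total

# Toy run: 2 layers with 2 categories each, 2-d sentence vector
score = category_channel_score(
    [1.0, 1.0],
    layer_params=[([[1, 0], [0, 1]], [0.0, 0.0]),
                  ([[1, 1], [0, 0]], [0.0, 0.0])],
    word_layer_indices=[0, 0],
    mu=0.5)
# score == 1.0  (0.5 * 1 + 0.25 * 2)
```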
External Channel: Sememe Predictor
In linguistics, a sememe is the minimum semantic unit of natural languages. Sememes of a word can accurately depict the meaning of the word. HowNet is the most famous sememe knowledge base. It defines a closed set of sememes and uses them to manually annotate Chinese and English words. HowNet and its sememe knowledge have been widely applied to various NLP tasks including sentiment analysis, word representation learning, semantic composition, sequence modeling and textual adversarial attacks.
Sememe annotation of a word in HowNet includes hierarchical sememe structures as well as relations between sememes. For simplicity, we extract a set of unstructured sememes for each word, in which case sememes of a word can be regarded as multiple semantic labels of the word. We find there also exists local semantic correspondence between the sememes of a word and its description. Still taking “expressway” as an example, its annotated sememes in HowNet are route and fast, which semantically correspond to the words in its definition “road” and “quickly” respectively.
Therefore, we design a sememe predictor similar to the morpheme predictor. Formally, we use $S$ to represent the set of all sememes; the sememe set of a word $w$ is $S_w \subseteq S$. We pass each hidden state to a single-layer perceptron to calculate local sememe prediction scores:

$$\mathbf{sc}^i_{sem} = \mathbf{W}_{sem}\mathbf{h}_i + \mathbf{b}_{sem},$$

where $\mathbf{sc}^i_{sem} \in \mathbb{R}^{|S|}$ indicates the degree of correspondence between the $i$-th word in the input query and each sememe, $\mathbf{W}_{sem}$ is a weight matrix, and $\mathbf{b}_{sem}$ is a bias vector. Final sememe prediction scores are computed by doing max-pooling:

$$[\mathbf{sc}_{sem}]_j = \max_{1 \le i \le |Q|} [\mathbf{sc}^i_{sem}]_j.$$

The confidence score of $w$ from the sememe channel is:

$$sc_{w,sem} = \sum_{s \in S_w} [\mathbf{sc}_{sem}]_{idx(s)},$$

where $idx(s)$ returns the sememe index of $s$.
Multi-channel Reverse Dictionary Model
By combining the confidence scores of direct word prediction and indirect characteristic prediction, we obtain the final confidence score of a given word $w$ in our multi-channel reverse dictionary model:

$$sc_w = \lambda_{word} \, sc_{w,word} + \sum_{c \in C} \lambda_c \, sc_{w,c},$$

where $C$ is the channel set, and $\lambda_{word}$ and $\lambda_c$ are the hyper-parameters controlling relative weights of corresponding terms.
As for training loss, we simply adopt the one-versus-all cross-entropy loss inspired by the sentence classification models.
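Putting the channels together, the final score of a candidate is the direct word-prediction score plus weighted channel scores; ranking candidates by this score yields the output list. A minimal sketch (channel names and weight values are illustrative only):

```python
def final_score(word, direct_scores, channel_scores, channel_weights):
    """Combine the direct word-prediction score with weighted channel scores.
    channel_scores: {channel_name: {word: score}}
    channel_weights: {channel_name: lambda}"""
    score = direct_scores[word]
    for name, scores in channel_scores.items():
        score += channel_weights[name] * scores[word]
    return score

# Toy run with two candidates and two channels
direct = {"expressway": 2.0, "freeway": 1.5}
channels = {"morpheme": {"expressway": 1.0, "freeway": 0.2},
            "sememe":   {"expressway": 0.8, "freeway": 0.9}}
weights = {"morpheme": 0.5, "sememe": 0.5}
ranked = sorted(direct, key=lambda w: -final_score(w, direct, channels, weights))
```

Here the channels reinforce the direct prediction: a candidate whose morphemes and sememes match the query well is pushed up the ranking even if its embedding score alone is not decisive.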
|Model||Seen Definition||Unseen Definition||Description|
|median rank||accuracy@1/10/100||rank variance|
In this section, we evaluate the performance of our multi-channel reverse dictionary model. We also conduct detailed quantitative analyses as well as a case study to explore the influencing factors in the reverse dictionary task and demonstrate the strength and weakness of our model. We carry out experiments on both English and Chinese datasets. But due to limited space, we present our experiments on the Chinese dataset in the appendix.
We use the English dictionary definition dataset created by \citeauthorHill2016LearningTU \shortciteHill2016LearningTU.
To obtain the morpheme information our model needs, we use Morfessor to segment all the words into morphemes. As for the word category information, we use the lexical names from WordNet as the hierarchical word categories. Since the lexical names already include POS tags, e.g., noun.animal, we remove the POS tag predictor from our model. We use HowNet as the source of sememes; it contains English words manually annotated with sememes. We employ OpenHowNet, the open data accessing API of HowNet, to obtain the sememes of words.
We choose the following models as the baseline methods: (1) OneLook, the most popular commercial reverse dictionary system, whose 2.0 version is used; (2) BOW and RNN with rank loss , both of which are NLM based and the former uses a bag-of-words model while the latter uses an LSTM; (3) RDWECI , which incorporates category inference and is an improved version of BOW; (4) SuperSense , an improved version of BOW which uses pretrained sense embeddings to substitute target word embeddings; (5) MS-LSTM , an improved version of RNN which uses graph-based WordNet synset embeddings together with a multi-sense LSTM to predict synsets from descriptions and claims to produce state-of-the-art performance; and (6) BiLSTM, the basic framework of our multi-channel model.
Hyper-parameters and Training
For our model, the dimension of non-directional hidden states is , the weights of different channels are equally set to , and the dropout rate is 0.5.
For all the models except MS-LSTM, we use the 300-dimensional word embeddings pretrained on GoogleNews with word2vec.
Following previous work, we use three evaluation metrics: the median rank of target words (lower better), the accuracy that target words appear in top 1/10/100 (acc@1/10/100, higher better) and the standard deviation of target words’ ranks (rank variance, lower better). Notice that MS-LSTM can only predict WordNet synsets. Thus, we map the target words to corresponding WordNet synsets (target synsets) and calculate the accuracy and rank variance of the target synsets.
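The three metrics are straightforward to compute from the target words' ranks; a small sketch using only the standard library (we take 0-based ranks, so acc@k counts ranks strictly below k):

```python
import statistics

def evaluate(ranks, ks=(1, 10, 100)):
    """ranks: 0-based rank of each target word among all candidates."""
    median_rank = statistics.median(ranks)
    # acc@k: fraction of target words ranked in the top k
    acc_at_k = {k: sum(r < k for r in ranks) / len(ranks) for k in ks}
    # "rank variance" reported as the standard deviation of the ranks
    rank_std = statistics.pstdev(ranks)
    return median_rank, acc_at_k, rank_std

median, acc, std = evaluate([0, 3, 57, 200, 9])
# median == 9, acc == {1: 0.2, 10: 0.6, 100: 0.8}
```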
|Prior Knowlege||Seen Definition||Unseen Definition||Description|
|median rank||accuracy@1/10/100||rank variance|
Overall Experimental Results
Table 1 exhibits reverse dictionary performance of all the models on the three test sets, where “Mor”, “Cat” and “Sem” represent the morpheme, word category and sememe predictors respectively. Notice that the performance of OneLook on the unseen test set is meaningless because we cannot exclude any definitions from its definition bank, hence we do not list corresponding results. From the table, we can see:
(1) Compared with all the baseline methods other than OneLook, our multi-channel model achieves substantially better performance on the unseen definition set and the description set, which verifies the absolute superiority of our model in generalizing to the novel and unseen input queries.
(2) OneLook significantly outperforms our model when the input queries are dictionary definitions. This result is expected because the input dictionary definitions are already stored in the database of OneLook, and even simple text matching can easily handle this situation. However, the input queries of a reverse dictionary are rarely exact dictionary definitions in reality. On the description test set, our multi-channel model achieves better overall performance than OneLook. Although OneLook yields slightly higher acc@1, this has limited value in practical application, because people usually need to pick the proper word from several candidates rather than rely on the top-1 result alone, and the acc@1 of OneLook is itself low.
(3) MS-LSTM performs very well on the seen definition set but badly on the description set, which manifests its limited generalization ability and practical value. Notice that when testing MS-LSTM, the search space is the whole synset list rather than the synset list of the test set, which explains the difference between the unseen-definition performance we measure and that reported in the original work.
(4) All the BiLSTM variants enhanced with different information channels (+Mor, +Cat and +Sem) perform better than vanilla BiLSTM. These results prove the effectiveness of predicting characteristics of target words in the reverse dictionary task. Moreover, our multi-channel model achieves further performance enhancement as compared with the single-channel models, which demonstrates the potency of characteristic fusion and also verifies the efficacy of our multi-channel model.
(5) BOW performs better than RNN, which is consistent with the findings from \citeauthorHill2016LearningTU \shortciteHill2016LearningTU. However, BiLSTM far surpasses BOW as well as RNN. This verifies the necessity for bi-directional encoding in RNN models, and also shows the potential of RNNs.
Performance with Prior Knowledge
In practical applications of reverse dictionaries, extra information about target words may be known in addition to their descriptions. For example, we may remember the initial letter of the word we forget, or the length of the target word may be known in a crossword game. In this subsection, we evaluate the performance of our model with prior knowledge of target words, including the POS tag, initial letter and word length. More specifically, we extract the words satisfying the given prior knowledge from the top results of our model, and then reevaluate the performance. The results are shown in Table 2.
We can find that any prior knowledge improves the performance of our model to a greater or lesser extent, which is expected. However, the performance boost brought by the initial letter and word length information is much bigger than that brought by the POS tag information. The possible reasons are as follows. The POS tag is already predicted in our multi-channel model, hence the improvement it brings is limited, which also demonstrates that our model does well in POS tag prediction. The initial letter and word length, in contrast, are hard to predict from a definition or description and are not considered in our model. Therefore, they filter out many candidates and markedly increase performance.
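Filtering by prior knowledge can be sketched as a simple pass over the model's ranked candidate list; the keyword interface below is our own illustration, not an interface the paper specifies:

```python
def filter_by_prior(ranked_words, initial=None, length=None,
                    pos_tags=None, word_pos=None):
    """Keep only candidates consistent with the known prior knowledge;
    the relative order of the model's ranking is preserved."""
    kept = []
    for w in ranked_words:
        if initial is not None and not w.startswith(initial):
            continue
        if length is not None and len(w) != length:
            continue
        # POS filter: the word must have at least one of the known tags
        if pos_tags is not None and word_pos is not None \
                and not (word_pos.get(w, set()) & pos_tags):
            continue
        kept.append(w)
    return kept

# Toy run: keep 10-letter candidates starting with "e"
ranked = ["freeway", "motorway", "expressway", "highway"]
filtered = filter_by_prior(ranked, initial="e", length=10)
# filtered == ["expressway"]
```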
Analyses of Influencing Factors
In this subsection, we conduct quantitative analyses of the influencing factors in reverse dictionary performance. To make results more accurate, we use a larger test set consisting of words and seen pairs of words and WordNet definitions. Since we are interested in the features of target words, we exclude MS-LSTM that predicts WordNet synsets.
Figure 3 exhibits the acc@10 of all the models on words with different numbers of senses. It is obvious that the performance of all the models declines as the sense number increases, which indicates that polysemy is a difficulty in the reverse dictionary task. But our model displays outstanding robustness and its performance hardly deteriorates even on the words with the most senses.
Figure 4 displays all the models’ performance on words within different ranges of word frequency ranking. We can find that the most frequent and the most infrequent words are harder to predict for all the reverse dictionary models. The most infrequent words usually have poor embeddings, which may damage the performance of NLM based models. The most frequent words, on the other hand, have better embeddings but usually more senses. We count the average sense number of each range and find that the first range has a much larger average sense number, which explains its bad performance. Moreover, our model again demonstrates remarkable robustness.
The effect of query length on reverse dictionary performance is illustrated in Figure 5. When the input query has only one word, the system performance is strikingly poor, especially our multi-channel model. This is easy to explain because the information extracted from the input query is too limited. In this case, outputting the synonyms of the query word is likely to be a better choice.
In this subsection, we give two cases in Table 3 to display the strength and weakness of our reverse dictionary model. For the first word “postnuptial”, our model correctly predicts its morpheme “post” and sememe “GetMarried” from the words “after” and “marriage” in the input query. Therefore, our model easily finds the correct answer. For the second case, the input query describes a rare sense of the word “takeaway”. HowNet has no sememe annotation for this sense, and the morphemes of the word are not semantically related to any words in the query either. Our model cannot solve such cases, which are in fact hard for all NLM based models. In this situation, text matching methods, which return the words whose stored definitions are most similar to the input query, may help.
|postnuptial||relating to events after a marriage.|
|takeaway||concession made by a labor union to a company.|
Conclusion and Future Work
In this paper, we propose a multi-channel reverse dictionary model, which incorporates multiple predictors to predict characteristics of target words from given input queries. Experimental results and analyses show that our model achieves the state-of-the-art performance and also possesses outstanding robustness.
In the future, we will try to combine our model with text matching methods to better tackle extreme cases, e.g., single-word input query. In addition, we are considering extending our model to the cross-lingual reverse dictionary task. Moreover, we will explore the feasibility of transferring our model to related tasks such as question answering.
This work is funded by the Natural Science Foundation of China (NSFC) and the German Research Foundation (DFG) in Project Crossmodal Learning, NSFC 61621136008 / DFG TRR-169. Furthermore, we thank the anonymous reviewers for their valuable comments and suggestions.
Appendix A Appendix: Experiments on Chinese Dataset
In this section, we evaluate our multi-channel reverse dictionary model on the Chinese dataset.
For Chinese, we build a dictionary definition dataset as the training set. It contains words and word-definition pairs, and the definitions are extracted from Modern Chinese Dictionary (6th Edition).
For the morpheme information, we simply cut each word into Chinese characters as morphemes.
As for the word category information, we use HIT-IR Tongyici Cilin.
We choose the same baseline methods as English except OneLook, SuperSense and MS-LSTM. We exclude OneLook because it only supports English reverse dictionary search and there are no Chinese reverse dictionary systems. In addition, SuperSense and MS-LSTM rely on WordNet but the Chinese version of WordNet contains too few words. So we do not make comparison with them, either. More specifically, the baselines are (1) BOW and RNN with rank loss , both of which are NLM based and the former uses a bag-of-words model while the latter uses an LSTM; (2) RDWECI , which incorporates category inference and is an improved version of BOW; and (3) BiLSTM, the basic framework of our multi-channel model.
Hyper-parameters and Training
For our model on the Chinese dataset, the dimension of non-directional hidden states is , which is different from the model of English.
The weights of different channels are equally set to .
For the baseline methods, we use their recommended hyper-parameters.
For all the models, we use the 200-dimensional word embeddings pretrained on the SogouT corpus.
Same as in the English experiments, we utilize three metrics: (1) the median rank of the target words; (2) the accuracy that the target words appear in top 1/10/100; and (3) the standard deviation of the target words’ ranks.
Overall Experimental Results
Table 4 exhibits reverse dictionary performance of all the models on the four test sets, where “+POS”,“+Mor”, “+Cat” and “+Sem” represent the POS tag, morpheme, word category and sememe predictors respectively. From the table, we can see:
|Model||Seen Definition||Unseen Definition||Description||Question|
|median rank||accuracy@1/10/100||rank variance|
|Prior Knowledge||Seen Definition||Unseen Definition||Description||Question|
|median rank||accuracy@1/10/100||rank variance|
(1) Our multi-channel model achieves substantially better performance than all the baseline methods on all the four test sets, which demonstrates the superiority of our model. In addition, similar to the results of the English experiments, our model can also generalize well to the novel, unseen input queries.
(2) All the BiLSTM variants enhanced with different information channels (+POS, +Mor, +Cat and +Sem) perform better than vanilla BiLSTM except the evaluation for BiLSTM+POS on the Question test set. That is because words in the Question test set are all idioms and most of them have no POS tags. Basically, the results prove the effectiveness of all the four information channels.
(3) BiLSTM’s better performance than BOW and RNN demonstrates the necessity of bi-directional encoding in RNN models, although BOW also performs better than RNN here.
(4) The results on the Question test set show that our model is also good at question-answer exercise problems in real-world exams.
Performance with Prior Knowledge
Similar to the English experiments, we use the prior knowledge of the target word to evaluate the performance of our model on the Chinese dataset in the same way.
The results are shown in Table 5. We can also find that any prior knowledge improves our model’s performance, especially the initial character information. That is presumably because Chinese words contain far fewer characters than English words contain letters, so the initial character carries more information and shrinks the search space more. Similar to English, the performance improvement of our model given POS tag information is insignificant, which also demonstrates that our model does well in POS tag prediction.
- The definitions are extracted from five electronic resources: WordNet, The American Heritage Dictionary, The Collaborative International Dictionary of English, Wiktionary and Webster’s.
- (2017) Learning to compute word embeddings on the fly. arXiv preprint arXiv:1706.00286.
- (2015) Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR.
- (1979) Neurologic correlates of anomia. In Studies in Neurolinguistics, pp. 293–328.
- (2004) Dictionary search based on the target word description. In Proceedings of NLP.
- (1926) A set of postulates for the science of language. Language 2(3), pp. 153–164.
- (2018) Auto-encoding dictionary definitions into consistent word embeddings. In Proceedings of EMNLP.
- (1966) The “tip of the tongue” phenomenon. Journal of Verbal Learning and Verbal Behavior 5(4), pp. 325–337.
- (2003) HowNet: a hybrid language and knowledge resource. In Proceedings of NLP-KE.
- (2013) Multi-aspect sentiment analysis for Chinese online social reviews based on topic modeling and HowNet lexicon. Knowledge-Based Systems 37, pp. 186–195.
- (2016) Learning to understand phrases by embedding the dictionary. TKDE 4, pp. 17–30.
- (1997) Long short-term memory. Neural Computation 9(8), pp. 1735–1780.
- (2018) Mapping text to knowledge graph entities using multi-sense LSTMs. In Proceedings of EMNLP.
- (2013) Creating reverse bilingual dictionaries. In Proceedings of HLT-NAACL.
- (2017) World knowledge for reading comprehension: rare entity prediction with hierarchical LSTMs using external descriptions. In Proceedings of EMNLP.
- (2018) Incorporating glosses into neural word sense disambiguation. In Proceedings of ACL.
- (2013) A reverse dictionary based on semantic analysis using WordNet. In Proceedings of MICAI 2013.
- (1995) WordNet: a lexical database for English. Communications of the ACM 38(11), pp. 39–41.
- (2018) Improvement of reverse dictionary by tuning word vectors and category inference. In Proceedings of ICIST.
- (2017) Improved word representation learning with sememes. In Proceedings of ACL.
- (2017) Definition modeling: learning to define word embeddings in natural language. In Proceedings of AAAI.
- (2019) On the importance of distinguishing word meaning representations: a case study on reverse dictionary mapping. In Proceedings of NAACL-HLT.
- (2019) Generating knowledge graph paths from textual definitions using sequence-to-sequence models. In Proceedings of NAACL-HLT.
- (2019) Modeling semantic compositionality with sememe knowledge. In Proceedings of ACL.
- (2019) OpenHowNet: an open sememe-based lexical knowledge base. arXiv preprint arXiv:1901.09957.
- (2019) Enhancing recurrent neural networks with sememes. arXiv preprint arXiv:1910.08910.
- (2018) Improving word embedding compositionality using lexicographic definitions. In Proceedings of WWW.
- (1997) Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing 45(11), pp. 2673–2681.
- (2013) Building a scalable database-driven reverse dictionary. TKDE 25, pp. 528–540.
- (2000) The onomasiological dictionary: a gap in lexicography. In Proceedings of the Ninth EURALEX International Congress.
- (2018) Building a knowledge graph from natural language definitions for interpretable text entailment recognition. In Proceedings of LREC.
- (2017) Dict2vec: learning word embeddings using lexical dictionaries. In Proceedings of EMNLP.
- (2013) Morfessor 2.0: Python implementation and extensions for Morfessor baseline. Aalto University publication.
- (1996) Semantics: primes and universals. Oxford University Press, UK.
- (2016) Representation learning of knowledge graphs with entity descriptions. In Proceedings of AAAI.
- (2019) Textual adversarial attack as combinatorial optimization. arXiv preprint arXiv:1910.12196.
- (1949) Human behavior and the principle of least effort. Addison-Wesley.
- (2004) Word lookup on the basis of associations: from an idea to a roadmap. In Proceedings of the Workshop on Enhancing and Using Electronic Dictionaries.