Learning Multi-Sense Word Distributions using Approximate Kullback-Leibler Divergence


P. Jayashree, Ballijepalli Shreya, and P.K. Srijith
Department of Computer Science and Engineering, Indian Institute of Technology,
Hyderabad, India
{cs16resch11002,cs15btech11009,srijith}@iith.ac.in
Abstract

Learning word representations has garnered increasing attention in the recent past due to its diverse text applications. Word embeddings encapsulate the syntactic and semantic regularities of sentences. Modelling word embeddings as multi-sense Gaussian mixture distributions additionally captures uncertainty and polysemy of words. We propose to learn the Gaussian mixture representation of words using a Kullback-Leibler (KL) divergence based objective function. The KL divergence based energy function provides a better distance metric, which can effectively capture entailment and distributional similarity among words. Since the KL divergence between Gaussian mixtures is intractable, we use an approximation of the KL divergence between Gaussian mixtures. We perform qualitative and quantitative experiments on benchmark word similarity and entailment datasets which demonstrate the effectiveness of the proposed approach.


1 Introduction

Language modelling in its inception relied on one-hot vector encodings of words. However, such encodings capture only the (alphabetical) ordering of the vocabulary and not the semantic similarity between words. Vector space models learn word representations in a lower-dimensional space and also capture semantic similarity. Learning word embeddings aids natural language processing tasks such as question answering and reasoning (Choi et al., 2018), stance detection (Augenstein et al., 2016), and claim verification (Hanselowski et al., 2018).

Recent models (Mikolov et al., 2013a; Bengio et al., 2003) work on the premise that words with similar contexts share semantic similarity. Bengio et al. (2003) propose a neural probabilistic model which models the target word probability conditioned on the previous words using a neural network. Word2Vec models (Mikolov et al., 2013a) include continuous bag-of-words (CBOW), which predicts the target word given its context, and skip-gram, which works in reverse, predicting the context given the target word. GloVe embeddings, in contrast, are based on global matrix factorization over local contexts (Pennington et al., 2014). However, the aforementioned models do not handle words with multiple meanings (polysemy).

Huang et al. (2012) propose a neural network approach considering both local and global contexts for learning word embeddings (point estimates). Their multiple-prototype model handles polysemous words by providing a priori heuristics about word senses in the dataset. Tian et al. (2014) propose an alternative that handles polysemous words through a modified skip-gram model and the EM algorithm. Neelakantan et al. (2015) present a non-parametric alternative for handling polysemy. However, these approaches fail to consider entailment relations among words.

Vilnis and McCallum (2014) learn a Gaussian distribution per word using the expected likelihood kernel. However, for polysemous words, this may lead to word distributions with large variances, as a single Gaussian may have to cover several senses.

Athiwaratkun and Wilson (2017) propose a multimodal word distribution approach which captures polysemy. However, their energy based objective function fails to consider asymmetry, and hence entailment. Recognizing textual entailment is necessary to capture lexical inference relations such as causality (for example, mosquito entails malaria) and hypernymy (for example, dog entails animal).

In this paper, we propose to obtain multi-sense word embedding distributions using a variant of the max-margin objective based on an asymmetric KL divergence energy function, in order to capture textual entailment. Multi-sense distributions are advantageous in capturing the polysemous nature of words and in reducing the uncertainty per word by distributing it across senses. However, computing the KL divergence between mixtures of Gaussians is intractable, so we use a KL divergence approximation based on stricter upper and lower bounds. While capturing textual entailment (asymmetry), we do not compromise on capturing symmetric similarity between words (for example, funny and hilarious), as elucidated in Section 3.1. We also show the effectiveness of the proposed approach on benchmark word similarity and entailment datasets in the experimental section.

2 Methodology

2.1 Word Representation

A probabilistic representation of words helps model both the uncertainty in word representations and polysemy. Given a corpus $V$ containing a list of words, each represented as $w$, the probability density of a word $w$ can be represented as a mixture of $K$ Gaussians (Athiwaratkun and Wilson, 2017).

$f_w(\vec{x}) = \sum_{i=1}^{K} \theta_{w,i}\, \mathcal{N}\!\left(\vec{x};\ \vec{\mu}_{w,i}, \Sigma_{w,i}\right)$   (1)

Here, $\theta_{w,i}$ represents the probability of word $w$ belonging to component $i$, $\vec{\mu}_{w,i}$ represents the $D$-dimensional word representation corresponding to the $i^{th}$ component sense of word $w$, and $\Sigma_{w,i}$ represents the uncertainty in the representation of word $w$ for component $i$.
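To make this concrete, a density of the form in equation 1 can be evaluated as in the short NumPy sketch below; this is a minimal illustration that assumes diagonal covariances (as used later in our experiments) and hypothetical arrays weights, means, and variances holding the parameters of a single word.

import numpy as np

def word_density(x, weights, means, variances):
    # Evaluate the Gaussian mixture density of one word at a point x (equation 1).
    # weights:   (K,)   mixture probabilities theta_{w,i}, summing to 1
    # means:     (K, D) component means mu_{w,i}
    # variances: (K, D) diagonal covariances Sigma_{w,i}
    K, D = means.shape
    density = 0.0
    for i in range(K):
        diff = x - means[i]
        log_norm = -0.5 * (D * np.log(2 * np.pi) + np.sum(np.log(variances[i])))
        density += weights[i] * np.exp(log_norm - 0.5 * np.sum(diff ** 2 / variances[i]))
    return density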

3 Objective function

The model parameters (means, covariances, and mixture weights) can be learnt using a variant of the max-margin objective (Joachims, 2002).

$L_\theta(w, c_p, c_n) = \max\left(0,\ m - E_\theta(w, c_p) + E_\theta(w, c_n)\right)$   (2)

Here, $E_\theta(\cdot, \cdot)$ represents an energy function which assigns a score to a pair of words, $w$ is the word under consideration, $c_p$ its positive context (a word from the same context), and $c_n$ a negative context. The objective aims to push the energy of a word with its positive context above the energy with its negative context by a margin of at least $m$. Thus, word pairs occurring in the same context get a higher energy than word pairs from dissimilar contexts. Athiwaratkun and Wilson (2017) consider the energy function to be the expected likelihood kernel, which is defined as follows.

$E_\theta(w_1, w_2) = \log \int f_{w_1}(\vec{x})\, f_{w_2}(\vec{x})\, d\vec{x}$   (3)

This is similar to the cosine similarity metric over vectors: the energy between two words is maximal when they have similar distributions. However, the expected likelihood kernel is a symmetric metric, which makes it unsuitable for capturing ordering among words, and hence entailment.
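For illustration, the sketch below implements the hinge objective of equation 2 and the expected likelihood kernel energy of equation 3 for diagonal-covariance mixtures. It is a minimal sketch, not the training code used in the paper; the dictionary representation of a word (keys 'weights', 'means', 'variances') and the default margin value are assumptions made for this example.

import numpy as np

def max_margin_loss(energy_fn, w, pos_context, neg_context, margin=1.0):
    # Equation 2: push E(w, c_p) above E(w, c_n) by at least `margin`.
    return max(0.0, margin - energy_fn(w, pos_context) + energy_fn(w, neg_context))

def log_gauss_overlap(mu1, var1, mu2, var2):
    # log of the Gaussian integral: log \int N(x; mu1, var1) N(x; mu2, var2) dx
    var = var1 + var2
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (mu1 - mu2) ** 2 / var)

def elk_energy(w1, w2):
    # Equation 3: log of the expected likelihood kernel between the two mixtures.
    total = 0.0
    for p_i, mu_i, var_i in zip(w1["weights"], w1["means"], w1["variances"]):
        for q_j, mu_j, var_j in zip(w2["weights"], w2["means"], w2["variances"]):
            total += p_i * q_j * np.exp(log_gauss_overlap(mu_i, var_i, mu_j, var_j))
    return np.log(total)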

3.1 Proposed Energy function

As each word is represented by a mixture of Gaussian distributions, the KL divergence is a natural choice of energy function for capturing the distance between distributions. Since the KL divergence is minimal when the distributions are similar and large when they are dissimilar, the energy function is taken to be the exponentiated negative KL divergence.

$E_\theta(w_1, w_2) = \exp\left(-D_{KL}\!\left(f_{w_1} \,\|\, f_{w_2}\right)\right)$   (4)

However, computing the KL divergence between Gaussian mixtures is intractable, and obtaining the exact KL value is not possible. One way of approximating the KL divergence is Monte-Carlo sampling, but it requires a large number of samples to obtain a good approximation and is computationally expensive in a high-dimensional embedding space.
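For reference, a Monte-Carlo estimate for diagonal-covariance mixtures could be sketched as below; the dictionary representation of a word and the sample count are assumptions made for this sketch, and the need to draw and score many high-dimensional samples per word pair is what makes this route expensive during training.

import numpy as np

def monte_carlo_kl(f, g, n_samples=10000, rng=None):
    # Monte-Carlo estimate of KL(f || g) between two diagonal Gaussian mixtures.
    # f, g: dicts with 'weights' (K,), 'means' (K, D), 'variances' (K, D).
    rng = rng or np.random.default_rng(0)
    K, D = f["means"].shape
    # Sample component indices, then points from the chosen diagonal Gaussians.
    idx = rng.choice(K, size=n_samples, p=f["weights"])
    x = f["means"][idx] + rng.standard_normal((n_samples, D)) * np.sqrt(f["variances"][idx])
    return np.mean(log_density(x, f) - log_density(x, g))

def log_density(x, mix):
    # Log-density of a diagonal Gaussian mixture at each row of x.
    diff = x[:, None, :] - mix["means"][None, :, :]                       # (n, K, D)
    log_comp = -0.5 * (np.sum(np.log(2 * np.pi * mix["variances"]), axis=1)[None, :]
                       + np.sum(diff ** 2 / mix["variances"][None, :, :], axis=2))
    return weighted_logsumexp(log_comp, mix["weights"])

def weighted_logsumexp(log_comp, weights):
    # Numerically stable log(sum_k weights_k * exp(log_comp_k)) per row.
    m = np.max(log_comp, axis=1, keepdims=True)
    return (m + np.log(np.sum(weights[None, :] * np.exp(log_comp - m),
                              axis=1, keepdims=True))).ravel()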

Alternatively, Hershey and Olsen (2007) present approximations of the KL divergence between Gaussian mixtures, obtaining an upper bound through the product of Gaussians approximation method and a lower bound through the variational approximation method. Durrieu et al. (2012) combine the lower and upper bounds from the approximation methods of Hershey and Olsen (2007) to provide a stricter bound on the KL divergence between Gaussian mixtures. Let the words $w_1$ and $w_2$ be represented by the Gaussian mixtures $f$ and $g$, with components $f_i = \mathcal{N}(\vec{x};\ \vec{\mu}_{w_1,i}, \Sigma_{w_1,i})$ and $g_j = \mathcal{N}(\vec{x};\ \vec{\mu}_{w_2,j}, \Sigma_{w_2,j})$ and mixture weights $\theta_{w_1,i}$ and $\theta_{w_2,j}$, respectively.

The approximate KL divergence between the Gaussian mixture representations of the words $w_1$ and $w_2$ is shown in equation 5. More details on the approximation are included in the Supplementary Material.

$D_{KL}^{approx}(f \,\|\, g) = \frac{1}{2} \sum_{i=1}^{K} \theta_{w_1,i} \left[ \log \frac{\sum_{j} \theta_{w_1,j}\, e^{-D_{KL}(f_i \| f_j)}}{\sum_{j} \theta_{w_2,j}\, t_{ij}} + \log \frac{\sum_{j} \theta_{w_1,j}\, z_{ij}}{\sum_{j} \theta_{w_2,j}\, e^{-D_{KL}(f_i \| g_j)}} \right]$   (5)

where $z_{ij} = \int f_i(\vec{x})\, f_j(\vec{x})\, d\vec{x}$ and $t_{ij} = \int f_i(\vec{x})\, g_j(\vec{x})\, d\vec{x}$. Note that the expected likelihood kernel appears component-wise inside the approximate KL divergence.
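A sketch of this approximation for diagonal-covariance mixtures is given below. It averages the product-of-Gaussians and variational approximations of Hershey and Olsen (2007), which is one way of realizing the combined estimate of Durrieu et al. (2012); the dictionary representation of a word and the assumption that both words use the same number of components are illustrative choices, not the paper's implementation.

import numpy as np

def gaussian_kl_diag(mu1, var1, mu2, var2):
    # Closed-form KL(N(mu1, var1) || N(mu2, var2)) for diagonal covariances.
    return 0.5 * np.sum(np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

def approx_kl(f, g):
    # Approximate KL(f || g) between diagonal Gaussian mixtures by averaging the
    # product-of-Gaussians and variational approximations of Hershey and Olsen (2007).
    K = len(f["weights"])                      # assumes f and g share the same K

    def log_overlap(a, b, i, j):               # log \int N_i^a(x) N_j^b(x) dx
        var = a["variances"][i] + b["variances"][j]
        diff = a["means"][i] - b["means"][j]
        return -0.5 * np.sum(np.log(2 * np.pi * var) + diff ** 2 / var)

    prod_term, var_term = 0.0, 0.0
    for i in range(K):
        num_p = sum(f["weights"][k] * np.exp(log_overlap(f, f, i, k)) for k in range(K))
        den_p = sum(g["weights"][j] * np.exp(log_overlap(f, g, i, j)) for j in range(K))
        num_v = sum(f["weights"][k] * np.exp(-gaussian_kl_diag(
            f["means"][i], f["variances"][i], f["means"][k], f["variances"][k])) for k in range(K))
        den_v = sum(g["weights"][j] * np.exp(-gaussian_kl_diag(
            f["means"][i], f["variances"][i], g["means"][j], g["variances"][j])) for j in range(K))
        prod_term += f["weights"][i] * np.log(num_p / den_p)
        var_term += f["weights"][i] * np.log(num_v / den_v)
    return 0.5 * (prod_term + var_term)

def energy_kl(f, g):
    # Equation 4: exponentiated negative approximate KL divergence.
    return np.exp(-approx_kl(f, g))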

One advantage of using KL as the energy function is that it enables capturing the asymmetry present in entailment datasets. For example, let us consider the word 'chair' with two senses, 'bench' and 'sling', and the word 'wood' with two senses, 'trees' and 'furniture'. The word chair ($f$) is entailed within wood ($g$), i.e., chair entails wood. Now, minimizing the KL divergence necessitates maximizing the term $\sum_{j}\theta_{w_2,j}\,e^{-D_{KL}(f_i\|g_j)}$, which in turn minimizes $D_{KL}(f_i\|g_j)$. This results in the support of a component of $f$ lying within that of a component of $g$; since this holds for all component pairs, it leads to the entailment of $f$ within $g$. Consequently, we can see that bench entails trees, bench entails furniture, sling entails trees, and sling entails furniture. Thus, it introduces lexical relationships between the senses of the child word and those of the parent word. Minimizing the KL also necessitates maximizing the term $\sum_{j}\theta_{w_2,j}\,t_{ij}$ for all component pairs of $f$ and $g$. This is similar to maximizing the expected likelihood kernel, which brings the means of $f$ and $g$ closer (weighted by their covariances), as discussed in (Athiwaratkun and Wilson, 2017). Hence, the proposed approach captures the best of both worlds, catering to both word similarity and entailment.

We also note that minimizing the KL divergence necessitates minimizing the term $\sum_{j}\theta_{w_1,j}\,e^{-D_{KL}(f_i\|f_j)}$, which in turn maximizes $D_{KL}(f_i\|f_j)$ between the components of the same word. This prevents the different mixture components of a word from converging to a single Gaussian and encourages capturing the different possible senses of the word. The same effect is also achieved by minimizing the self-overlap term $\sum_{j}\theta_{w_1,j}\,z_{ij}$; both act as regularization terms that promote diversity in the learnt senses of a word.

4 Experimentation and Results

We train our proposed model GMKL (Gaussian Mixture using KL divergence) on the Text8 dataset (Mikolov et al., 2014), a pre-processed corpus of words from Wikipedia, from which unique and frequent words are chosen using the subsampling trick of Mikolov et al. (2013b). We compare GMKL with the previous approaches w2g (Vilnis and McCallum, 2014), a single-Gaussian model, and w2gm (Athiwaratkun and Wilson, 2017), a Gaussian mixture model trained with the expected likelihood kernel. For all the models used for experimentation, the embedding size () was set to , the number of mixtures to , the context window length to , and the batch size to . The word embeddings were initialized using a uniform distribution in the range of , such that the expectation of the variance is and the mean is (Cun et al., 1998). One could also initialize the word embeddings in the proposed approach with contextual representations such as BERT (Devlin et al., 2018) or ELMo (Peters et al., 2018); in order to analyze the performance of GMKL in isolation from such choices, we use uniform initialization in our experiments. For computational efficiency, diagonal covariances are used, as in (Athiwaratkun and Wilson, 2017). Each mixture probability is constrained to lie in [0, 1] and sum to 1 by optimizing over unconstrained scores in (−∞, ∞) and converting the scores to probabilities using the softmax function. The mixture scores are initialized to equal values to ensure fairness among all the components. The threshold for negative sampling was set to , as recommended in Mikolov et al. (2013a). Mini-batch gradient descent with the Adagrad optimizer (Duchi et al., 2011) was used, with the initial learning rate set to .
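As an illustration of the softmax parameterization of the mixture weights described above (the variable names and the two-component setting are ours, chosen only for this sketch):

import numpy as np

def mixture_weights(scores):
    # Map unconstrained component scores to probabilities in [0, 1] that sum to 1.
    shifted = scores - np.max(scores)   # subtract the max for numerical stability
    expd = np.exp(shifted)
    return expd / np.sum(expd)

# Initializing every score to the same constant gives equal weight to each sense.
init_scores = np.zeros(2)               # two components, purely for illustration
print(mixture_weights(init_scores))     # -> [0.5 0.5]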

Table 1 shows the qualitative results of GMKL. Given a query word and component id, the set of nearest neighbours along with their respective component ids is listed. For example, the word 'plane' in its component 0 captures the 'geometry' sense, as do its neighbours, while its component 1 captures the 'vehicle' sense, as do its corresponding neighbours. Other words such as 'rock' capture both the 'metal' and 'music' senses, 'star' captures the 'celebrity' and 'astronomical' senses, and 'phone' captures the 'telephony' and 'internet' senses.

Word   Co.   Nearest Neighbours
rock   0     rock:0, sedimentary:0, molten:1, granite:0, felsic:0, carvings:1, kiln:1
rock   1     rock:1, albums:0, rap:0, album:0, bambaataa:0, jazzy:0, remix:0
star   0     star:0, hulk:0, sequel:0, godzilla:0, ishiro:0, finale:1, cameo:1
star   1     star:1, galactic:0, stars:1, galaxy:1, galaxy:0, sun:1, brightest:1
phone  0     phone:0, dialing:0, voip:1, channels:0, cable:1, telephone:1, caller:0
phone  1     phone:1, gsm:1, ethernet:1, wireless:1, telephony:0, transceiver:0, gprs:0
plane  0     plane:0, ellipse:0, hyperbola:0, tangent:0, axis:0, torus:0, convex:0
plane  1     plane:1, hijacked:1, sidewinder:0, takeoff:1, crashed:0, cockpit:1, pilot:1
Table 1: Qualitative results of GMKL

We quantitatively compare the performance of the GMKL, w2g, and w2gm approaches on the SCWS dataset (Huang et al., 2012). The dataset consists of word pairs of polysemous and homonymous words, with labels obtained by averaging human scores. The Spearman correlation between the human scores and the model scores is computed. To obtain the model score, the following metrics are used (a code sketch of these metrics follows the list):

  1. MaxCos: Maximum cosine similarity among all component pairs of the words $w_1$ and $w_2$: $\mathrm{MaxCos}(w_1, w_2) = \max_{i,j}\ \cos\!\left(\vec{\mu}_{w_1,i}, \vec{\mu}_{w_2,j}\right)$.

  2. AvgCos: Average component-wise cosine similarity between the words $w_1$ and $w_2$.

  3. KLapprox: Formulated as shown in equation 5 between the words $w_1$ and $w_2$.

  4. KLcomp: Maximum component-wise negative KL between the words $w_1$ and $w_2$: $\mathrm{KLcomp}(w_1, w_2) = \max_{i,j}\ \left(-D_{KL}\!\left(f_i \,\|\, g_j\right)\right)$.
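The sketch below shows how these metrics could be computed from the learnt parameters, assuming the same dictionary representation of a word as in the earlier sketches and reading AvgCos as an unweighted average over component pairs; KLapprox reuses the approximate KL sketch of equation 5.

import numpy as np

def gaussian_kl_diag(mu1, var1, mu2, var2):
    # Closed-form KL(N(mu1, var1) || N(mu2, var2)) for diagonal covariances.
    return 0.5 * np.sum(np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def max_cos(w1, w2):
    # MaxCos: maximum cosine similarity over all component mean pairs.
    return max(cosine(m1, m2) for m1 in w1["means"] for m2 in w2["means"])

def avg_cos(w1, w2):
    # AvgCos: (unweighted) average cosine similarity over all component mean pairs.
    sims = [cosine(m1, m2) for m1 in w1["means"] for m2 in w2["means"]]
    return sum(sims) / len(sims)

def kl_comp(w1, w2):
    # KLcomp: maximum component-wise negative KL divergence.
    return max(-gaussian_kl_diag(m1, v1, m2, v2)
               for m1, v1 in zip(w1["means"], w1["variances"])
               for m2, v2 in zip(w2["means"], w2["variances"]))

# KLapprox corresponds to approx_kl(w1, w2) from the sketch after equation 5.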

Table 2 compares the performance of the approaches on the SCWS dataset. It is evident that GMKL achieves better correlation than the existing approaches across the various metrics.

Metric     w2g     w2gm    GMKL
MaxCos     45.48   54.95   55.09
AvgCos     45.48   54.78   57.48
KLapprox   39.16   37.42   48.06
KLcomp     26.81   30.20   35.62
Table 2: Spearman correlation (× 100) on the SCWS dataset.

Table 3 shows the Spearman correlation values of the GMKL model evaluated on the benchmark word similarity datasets: SL (Hill et al., 2015), WS, WS-R, WS-S (Finkelstein et al., 2002), MEN (Bruni et al., 2014), MC (Miller and Charles, 1991), RG (Rubenstein and Goodenough, 1965), YP (Yang and Powers, 2006), MTurk-287 and MTurk-771 (Radinsky et al., 2011; Halawi et al., 2012), and RW (Luong et al., 2013). The metric used for comparison is 'AvgCos'. It can be seen that for most of the datasets, GMKL achieves a significantly better correlation score than the w2g and w2gm approaches. Datasets such as MC and RW consist of words with only a single sense, and hence the w2g model performs better there, with GMKL achieving the next-best performance. The YP dataset has multiple senses but does not contain entailed data, and hence cannot make use of the entailment benefits of GMKL.

Dataset   w2g     w2gm    GMKL (Ours)
SL        14.29   19.77   22.96
WS        47.63   58.35   64.79
WS-S      49.43   59.22   65.48
WS-R      47.85   56.90   64.67
MEN       42.61   55.96   57.04
MC        43.01   39.21   41.05
RG        27.13   49.68   51.87
YP        12.05   28.74   21.50
MT-287    51.41   61.25   64.00
MT-771    41.38   50.58   51.68
RW        18.43   12.65   12.96
Table 3: Spearman correlation results on word similarity datasets.

Table 4 shows the evaluation results of the GMKL model on entailment datasets: an entailment pairs dataset (Baroni et al., 2012) created from WordNet with both positive and negative labels, a crowdsourced dataset (Turney and Mohammad, 2015) of semantic relations labelled as entailed or not, and a dataset of annotated distributionally similar nouns (Kotlerman et al., 2010). The 'MaxCos' similarity metric is used for evaluation, and the best precision and best F1-score, obtained by picking the optimal threshold, are shown. Overall, GMKL performs better than both the w2g and w2gm approaches.

Dataset                      Metric      w2g     w2gm    GMKL (Ours)
Turney and Mohammad (2015)   Precision   51.69   53.47   54.25
                             F1          65.41   66.27   66.32
Baroni et al. (2012)         Precision   57.18   66.42   67.55
                             F1          63.72   70.72   71.49
Kotlerman et al. (2010)      Precision   66.12   69.89   70.00
                             F1          46.07   46.40   47.48
Table 4: Results on entailment datasets

5 Conclusion

We proposed a KL divergence based energy function for learning multi-sense word embedding distributions modelled as Gaussian mixtures. Since the KL divergence between Gaussian mixtures is intractable, we used an approximate KL divergence function. We also demonstrated that the proposed GMKL approach performs better than existing approaches on benchmark word similarity and entailment datasets.

References

  • B. Athiwaratkun and A. G. Wilson (2017) Multimodal word distributions. arXiv preprint arXiv:1704.08424.
  • I. Augenstein, T. Rocktäschel, A. Vlachos, and K. Bontcheva (2016) Stance detection with bidirectional conditional encoding. arXiv preprint arXiv:1606.05464.
  • M. Baroni, R. Bernardi, N. Do, and C. Shan (2012) Entailment above the word level in distributional semantics. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pp. 23–32.
  • Y. Bengio, R. Ducharme, P. Vincent, and C. Jauvin (2003) A neural probabilistic language model. Journal of Machine Learning Research 3 (Feb), pp. 1137–1155.
  • E. Bruni, N. Tran, and M. Baroni (2014) Multimodal distributional semantics. Journal of Artificial Intelligence Research 49, pp. 1–47.
  • E. Choi, H. He, M. Iyyer, M. Yatskar, W. Yih, Y. Choi, P. Liang, and L. Zettlemoyer (2018) QuAC: question answering in context. arXiv preprint arXiv:1808.07036.
  • Y. Cun, L. Bottou, G. Orr, and K. Muller (1998) Efficient backprop, neural networks: tricks of the trade. Lecture Notes in Computer Science 1524, pp. 5–50.
  • J. Devlin, M. Chang, K. Lee, and K. Toutanova (2018) BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
  • J. Duchi, E. Hazan, and Y. Singer (2011) Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12 (Jul), pp. 2121–2159.
  • J. Durrieu, J. Thiran, and F. Kelly (2012) Lower and upper bounds for approximation of the Kullback-Leibler divergence between Gaussian mixture models. In 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4833–4836.
  • L. Finkelstein, E. Gabrilovich, Y. Matias, E. Rivlin, Z. Solan, G. Wolfman, and E. Ruppin (2002) Placing search in context: the concept revisited. ACM Transactions on Information Systems 20 (1), pp. 116–131.
  • G. Halawi, G. Dror, E. Gabrilovich, and Y. Koren (2012) Large-scale learning of word relatedness with constraints. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1406–1414.
  • A. Hanselowski, H. Zhang, Z. Li, D. Sorokin, B. Schiller, C. Schulz, and I. Gurevych (2018) UKP-Athene: multi-sentence textual entailment for claim verification. arXiv preprint arXiv:1809.01479.
  • J. R. Hershey and P. A. Olsen (2007) Approximating the Kullback Leibler divergence between Gaussian mixture models. In 2007 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vol. 4, pp. IV–317.
  • F. Hill, R. Reichart, and A. Korhonen (2015) SimLex-999: evaluating semantic models with (genuine) similarity estimation. Computational Linguistics 41 (4), pp. 665–695.
  • E. H. Huang, R. Socher, C. D. Manning, and A. Y. Ng (2012) Improving word representations via global context and multiple word prototypes. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1, pp. 873–882.
  • T. Joachims (2002) Optimizing search engines using clickthrough data. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 133–142.
  • L. Kotlerman, I. Dagan, I. Szpektor, and M. Zhitomirsky-Geffet (2010) Directional distributional similarity for lexical inference. Natural Language Engineering 16 (4), pp. 359–389.
  • T. Luong, R. Socher, and C. Manning (2013) Better word representations with recursive neural networks for morphology. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pp. 104–113.
  • T. Mikolov, K. Chen, G. Corrado, and J. Dean (2013a) Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
  • T. Mikolov, A. Joulin, S. Chopra, M. Mathieu, and M. Ranzato (2014) Learning longer memory in recurrent neural networks. arXiv preprint arXiv:1412.7753.
  • T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean (2013b) Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pp. 3111–3119.
  • G. A. Miller and W. G. Charles (1991) Contextual correlates of semantic similarity. Language and Cognitive Processes 6 (1), pp. 1–28.
  • A. Neelakantan, J. Shankar, A. Passos, and A. McCallum (2015) Efficient non-parametric estimation of multiple embeddings per word in vector space. arXiv preprint arXiv:1504.06654.
  • J. Pennington, R. Socher, and C. Manning (2014) GloVe: global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543.
  • M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer (2018) Deep contextualized word representations. arXiv preprint arXiv:1802.05365.
  • K. Radinsky, E. Agichtein, E. Gabrilovich, and S. Markovitch (2011) A word at a time: computing word relatedness using temporal semantic analysis. In Proceedings of the 20th International Conference on World Wide Web, pp. 337–346.
  • H. Rubenstein and J. B. Goodenough (1965) Contextual correlates of synonymy. Communications of the ACM 8 (10), pp. 627–633.
  • F. Tian, H. Dai, J. Bian, B. Gao, R. Zhang, E. Chen, and T. Liu (2014) A probabilistic model for learning multi-prototype word embeddings. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pp. 151–160.
  • P. D. Turney and S. M. Mohammad (2015) Experiments with three approaches to recognizing lexical entailment. Natural Language Engineering 21 (3), pp. 437–476.
  • L. Vilnis and A. McCallum (2014) Word representations via Gaussian embedding. arXiv preprint arXiv:1412.6623.
  • D. Yang and D. M. Powers (2006) Verb similarity on the taxonomy of WordNet. Masaryk University.

Supplementary Material

Appendix A Approximation of the KL divergence between mixtures of Gaussians

The KL divergence between the Gaussian mixtures $f$ and $g$ (representing the words $w_1$ and $w_2$, as in Section 3.1) can be decomposed as:

$D_{KL}(f \,\|\, g) = \mathbb{E}_{f}\left[\log f(\vec{x})\right] - \mathbb{E}_{f}\left[\log g(\vec{x})\right]$   (6)

Hershey and Olsen (2007) present approximations of the KL divergence between Gaussian mixtures using:

  1. the product of Gaussians approximation method, where the KL is approximated using products of the component Gaussians, and

  2. the variational approximation method, where the KL is approximated by introducing variational parameters.

The product of component Gaussians approximation method, using Jensen's inequality, provides upper bounds on the two terms of equation 6, as shown in equations 7 and 8.

$\mathbb{E}_{f}\left[\log g(\vec{x})\right] \leq \sum_{i} \theta_{w_1,i} \log \sum_{j} \theta_{w_2,j}\, t_{ij}$   (7)
$\mathbb{E}_{f}\left[\log f(\vec{x})\right] \leq \sum_{i} \theta_{w_1,i} \log \sum_{j} \theta_{w_1,j}\, z_{ij}$   (8)

The variational approximation method provides lower bounds on the two terms of equation 6, as shown in equations 9 and 10,

$\mathbb{E}_{f}\left[\log g(\vec{x})\right] \geq \sum_{i} \theta_{w_1,i} \left[\log \sum_{j} \theta_{w_2,j}\, e^{-D_{KL}(f_i \| g_j)} - H(f_i)\right]$   (9)
$\mathbb{E}_{f}\left[\log f(\vec{x})\right] \geq \sum_{i} \theta_{w_1,i} \left[\log \sum_{j} \theta_{w_1,j}\, e^{-D_{KL}(f_i \| f_j)} - H(f_i)\right]$   (10)

where $H(f_i)$ represents the entropy term; the entropy of component $i$ of word $w_1$ with dimension $D$ is given as $H(f_i) = \frac{D}{2}\left(1 + \log 2\pi\right) + \frac{1}{2}\log \left|\Sigma_{w_1,i}\right|$.

Durrieu et al. (2012) combine the lower and upper bounds from the approximation methods of Hershey and Olsen (2007) to formulate stricter bounds on the KL divergence between Gaussian mixtures.

From equations 7 and 10, a stricter lower bound on the KL divergence between Gaussian mixtures is obtained, as shown in equation 11.

$D_{KL}(f \,\|\, g) \geq \sum_{i} \theta_{w_1,i} \left[\log \frac{\sum_{j} \theta_{w_1,j}\, e^{-D_{KL}(f_i \| f_j)}}{\sum_{j} \theta_{w_2,j}\, t_{ij}} - H(f_i)\right]$   (11)

From equations 8 and 9, a stricter upper bound on the KL divergence between Gaussian mixtures is obtained, as shown in equation 12.

$D_{KL}(f \,\|\, g) \leq \sum_{i} \theta_{w_1,i} \left[\log \frac{\sum_{j} \theta_{w_1,j}\, z_{ij}}{\sum_{j} \theta_{w_2,j}\, e^{-D_{KL}(f_i \| g_j)}} + H(f_i)\right]$   (12)


Finally, the approximate KL divergence between Gaussian mixtures is taken as the mean of the upper and lower bounds, as shown in equation 13. When this mean is expanded, the entropy terms cancel, giving the expression in equation 5.

$D_{KL}^{approx}(f \,\|\, g) = \frac{1}{2}\left[D_{KL}^{lower}(f \,\|\, g) + D_{KL}^{upper}(f \,\|\, g)\right]$   (13)
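To make the combination concrete, the sketch below computes the lower and upper bounds of equations 11 and 12 (as reconstructed above) and their mean for diagonal-covariance mixtures. The dictionary representation of a word, the helper names, and the assumption that both words share the same number of components are illustrative choices, not a reference implementation.

import numpy as np

def gaussian_kl_diag(mu1, var1, mu2, var2):
    # Closed-form KL(N(mu1, var1) || N(mu2, var2)) for diagonal covariances.
    return 0.5 * np.sum(np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

def gaussian_entropy_diag(var):
    # Entropy of a Gaussian with diagonal covariance vector `var`.
    D = var.shape[0]
    return 0.5 * D * (1.0 + np.log(2 * np.pi)) + 0.5 * np.sum(np.log(var))

def log_overlap(mu1, var1, mu2, var2):
    # log \int N(x; mu1, var1) N(x; mu2, var2) dx
    var = var1 + var2
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (mu1 - mu2) ** 2 / var)

def kl_bounds(f, g):
    # Lower and upper bounds on KL(f || g) following equations 11 and 12.
    K = len(f["weights"])                      # assumes f and g share the same K
    lower, upper = 0.0, 0.0
    for i in range(K):
        H_i = gaussian_entropy_diag(f["variances"][i])
        self_var = sum(f["weights"][j] * np.exp(-gaussian_kl_diag(
            f["means"][i], f["variances"][i], f["means"][j], f["variances"][j])) for j in range(K))
        self_prod = sum(f["weights"][j] * np.exp(log_overlap(
            f["means"][i], f["variances"][i], f["means"][j], f["variances"][j])) for j in range(K))
        cross_var = sum(g["weights"][j] * np.exp(-gaussian_kl_diag(
            f["means"][i], f["variances"][i], g["means"][j], g["variances"][j])) for j in range(K))
        cross_prod = sum(g["weights"][j] * np.exp(log_overlap(
            f["means"][i], f["variances"][i], g["means"][j], g["variances"][j])) for j in range(K))
        lower += f["weights"][i] * (np.log(self_var / cross_prod) - H_i)   # equation 11
        upper += f["weights"][i] * (np.log(self_prod / cross_var) + H_i)   # equation 12
    return lower, upper

def approx_kl_from_bounds(f, g):
    # Equation 13: the mean of the two bounds; the entropy terms cancel,
    # recovering the expression of equation 5.
    lower, upper = kl_bounds(f, g)
    return 0.5 * (lower + upper)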