Deconstructing and reconstructing word embedding algorithms
Abstract
Uncontextualized word embeddings are reliable feature representations of words used to obtain high quality results for various NLP applications. Given the historical success of word embeddings in NLP, we propose a retrospective on some of the most well-known word embedding algorithms. In this work, we deconstruct Word2vec, GloVe, and others, into a common form, unveiling some of the necessary and sufficient conditions required for making performant word embeddings. We find that each algorithm: (1) fits vector-covector dot products to approximate pointwise mutual information (PMI); and, (2) modulates the loss gradient to balance weak and strong signals. We demonstrate that these two algorithmic features are sufficient conditions to construct a novel word embedding algorithm, HilbertMLE. We find that its embeddings obtain performance equivalent to or better than that of other algorithms across 17 intrinsic and extrinsic datasets.
1 Introduction
Word embeddings have been established as standard feature representations for words in most contemporary NLP tasks Kim (2014); Huang et al. (2015); Goldberg (2016). Their incorporation into larger models – from CNNs and LSTMs for sentiment analysis Zhang et al. (2018), to sequence-to-sequence models for machine translation Qi et al. (2018), to the input layer of deep contextualized embedders Peters et al. (2018) – enables high quality performance across a wide variety of problems.
Being the building blocks for many modern NLP applications, we argue that it is worthwhile to subject word embedding algorithms to close theoretical inspection. This work can be considered a retrospective analysis of the groundbreaking word embedding algorithms of the past, which simultaneously offers theoretical insights for how future, deeper models can be developed and understood. Indeed, analogous to a watchmaker who curiously scrutinizes the mechanical components comprising her watches’ oscillators, so too do we aim to uncover what makes word embeddings “tick”.
It is well-known that word embedding algorithms train two sets of embeddings: the vectors ("input" vectors) and the covectors ("output", or "context", vectors). However, the covectors tend to be regarded as an afterthought when used by NLP practitioners, either being thrown away Mikolov et al. (2013b), or averaged into the vectors Pennington et al. (2014); Levy et al. (2015).
Nonetheless, recent work has found that separately incorporating pretrained covectors into downstream models can improve performance in specific tasks. This includes lexical substitution Melamud et al. (2015); Roller and Erk (2016), information retrieval Nalisnick et al. (2016), state-of-the-art metaphor detection Mao et al. (2018) and generation Yu and Wan (2019), and more Press and Wolf (2017); Asr et al. (2018); Değirmenci et al. (2019). In this work, we contribute an engaged theoretical treatment of covectors, and later elucidate the different relationships learned separately by vectors and covectors (§6.5).
Training these vectors and covectors can be done by a variety of high-performing algorithms: the sampling-based shallow neural network of SGNS Mikolov et al. (2013b), GloVe's weighted least squares over global corpus statistics Pennington et al. (2014), and matrix factorization methods Levy and Goldberg (2014b); Levy et al. (2015); Shazeer et al. (2016). In this work, we propose a framework for understanding these algorithms from a common vantage point. We deconstruct each algorithm into its constituent parts, and find that, despite their many different hyperparameters, the algorithms collectively intersect upon the following two key design features:

(1) vector-covector dot products are learned to approximate pointwise mutual information (PMI) statistics in the corpus; and,

(2) modulation of the loss gradient, directly or indirectly, to balance weak and strong signals arising from the highly imbalanced distribution of corpus statistics.
Finding these commonalities across algorithms raises the question of whether or not these features are sufficient for reconstructing a new word embedding algorithm. Indeed, we derive and implement a novel embedding algorithm, HilbertMLE,^1 by following these two principles to derive the corresponding global matrix factorization loss function based on the maximum likelihood estimate of the multinomial distribution of corpus statistics. (^1: The name is inspired by the intuitions behind this work concerning Hilbert spaces, and the maximum likelihood estimate that defines the model's loss function.)
However, due to the infeasibility of matrix factorization objectives for large vocabulary sizes, we further derive a local sampling-based formulation by algebraically deconstructing HilbertMLE's global objective function. As we abstractly depict in Figure 1, this derivation can be seen as a mirror of the one presented by Levy and Goldberg (2014b), who derived the global matrix factorization form of SGNS from the original local sampling formulation Mikolov et al. (2013b).
We find that HilbertMLE produces word embeddings that obtain performance equivalent to or better than SGNS and GloVe across 17 intrinsic and extrinsic datasets, therefore demonstrating the sufficiency of the two principles for designing a word embedding algorithm.
To summarize, this work offers the following contributions: (1) a unified framework, low rank embedders, that deconstructs existing word embedding algorithms into a common form; (2) the identification of two key design features shared by all of them, namely fitting vector-covector dot products to PMI and modulating the loss gradient to balance weak and strong signals; and (3) HilbertMLE, a novel embedding algorithm derived from these two principles alone, validated across 17 intrinsic and extrinsic datasets.
2 Fundamental concepts
In this section, we introduce notation and concepts that we will draw upon throughout this paper. This includes formally defining embeddings, their vectors and covectors, and pointwise mutual information (PMI).
Embedding.
In general topology, an embedding is understood as an injective "structure preserving map", $f : X \to Y$, between two mathematical structures $X$ and $Y$. A word embedding algorithm learns an inner-product space, $V$, that preserves a linguistic structure within a reference corpus of text, $\mathcal{C}$, based on the words in a vocabulary, $\mathcal{V}$. The structure in $\mathcal{C}$ is analyzed in terms of the relationships between words induced by their appearances in the corpus. In such an analysis, each word figures dually: (1) as a focal element inducing a local context; and (2) as elements of the local contexts induced by focal elements. To make these dual roles explicit, we distinguish two copies of the vocabulary: the focal words (or, terms), $\mathcal{V}_T$, and the context words, $\mathcal{V}_C$.
An embedding consists of two maps: one sending each context word to a covector, $\mathcal{V}_C \to V^*,\ i \mapsto \langle i|$, and one sending each focal word to a vector, $\mathcal{V}_T \to V,\ j \mapsto |j\rangle$.
We use Dirac notation to distinguish vectors $|j\rangle$, associated to focal words, from covectors $\langle i|$, associated to context words. In matrix notation, $|j\rangle$ corresponds to a column vector and $\langle i|$ to a row vector. Their inner product is $\langle i|j\rangle$; this inner product completely characterizes the learned vector space. We will later demonstrate that many word embedding algorithms, intentionally or not, learn a vector space where the inner product between a focal word and context word aims to approximate their PMI in the reference corpus: $\langle i|j\rangle \approx \mathrm{PMI}(i,j)$.
Pointwise mutual information (PMI).
PMI is a commonly used measure of association in computational linguistics, and has been shown to be consistent and reliable for many tasks Terra and Clarke (2003). It measures the deviation of the cooccurrence probability between two words $i$ and $j$ from the product of their marginal probabilities:

$\mathrm{PMI}(i,j) = \log \dfrac{p(i,j)}{p(i)\,p(j)},$  (1)

where $p(i,j)$ is the probability of word $i$ and word $j$ cooccurring (for some notion of cooccurrence), and where $p(i)$ and $p(j)$ are the marginal probabilities of words $i$ and $j$ occurring. The empirical PMI can be found by replacing probabilities with corpus statistics. Words are typically considered to cooccur if they are separated by no more than $w$ words; $N_{ij}$ is the number of counted cooccurrences between a context $i$ and a term $j$; $N_i$, $N_j$, and $N$ are computed by marginalizing over the $N_{ij}$ statistics.
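To make the empirical quantities concrete, the PMI statistics can be computed from raw text as sketched below (a minimal Python illustration in our own notation, not an optimized implementation):

```python
from collections import Counter
from math import log

def pmi_matrix(corpus, window=2):
    """Empirical PMI from cooccurrence counts N_ij within a +/- window."""
    N = Counter()                                 # N_ij
    for k, term in enumerate(corpus):
        lo, hi = max(0, k - window), min(len(corpus), k + window + 1)
        for context in corpus[lo:k] + corpus[k + 1:hi]:
            N[(context, term)] += 1
    total = sum(N.values())                       # N
    Nc, Nt = Counter(), Counter()                 # marginals N_i, N_j
    for (i, j), n in N.items():
        Nc[i] += n
        Nt[j] += n
    # PMI(i, j) = log(N_ij * N / (N_i * N_j))
    return {(i, j): log(n * total / (Nc[i] * Nt[j]))
            for (i, j), n in N.items()}
```

Only observed pairs are returned; unobserved pairs have undefined (negatively infinite) empirical PMI, which is precisely the sparsity issue the algorithms below must contend with.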
3 Word embedding algorithms
We will now introduce the low rank embedder framework for deconstructing word embedding algorithms, inspired by the theory of generalized low rank models Udell et al. (2016). We unify several word embedding algorithms by observing them all from the common vantage point of their global loss function. Note that this framework is only used for theoretical analysis, not practical implementation.
The global loss function for a low rank embedder takes the form:

$\mathcal{L} = \sum_{i \in \mathcal{V}_C} \sum_{j \in \mathcal{V}_T} f_{ij}\big(\psi(\langle i|, |j\rangle)\big)$  (2)

and satisfies

$\dfrac{\partial f_{ij}}{\partial \psi_{ij}}\Big|_{\psi_{ij} = \phi_{ij}} = 0,$  (3)

where $\psi$ is a kernel function of the model parameters, and $\phi_{ij} = \phi(\mathcal{C}, i, j)$ is some scalar function (such as a measure of association based on how $i$ and $j$ appear in the corpus); $\psi_{ij}$ and $\phi_{ij}$ are abbreviations for $\psi(\langle i|, |j\rangle)$ and $\phi(\mathcal{C}, i, j)$, respectively.
The design variable $\phi_{ij}$ is some function of corpus statistics, and its purpose is to quantitatively measure some relationship between words $i$ and $j$. In apposition, the design variable $\psi_{ij}$ is a function of model parameters, and its purpose is to learn a succinct approximation:

$\psi_{ij} \approx \phi_{ij},$

and so represent the relationship measured by $\phi_{ij}$. Intuitively, we can think of a low rank embedder as trying to directly fit a kernel function of model parameters to some (statistical) relationship between words of our choosing. For example, SGNS takes $\psi_{ij} = \langle i|j\rangle$ and $\phi_{ij} = \mathrm{PMI}(i,j) - \ln k$, and then learns parameter values that approximate $\phi_{ij}$.
Though the specific choice of $\phi_{ij}$ varies slightly, existing low rank embedders generally base $\phi_{ij}$ on cooccurrence of words within a linear window $w$ words wide. But it is worth pointing out that $\phi_{ij}$ can in principle be any pairwise relationship encoded as a scalar function of corpus statistics.
As for the kernel function $\psi$, one simple choice is to take $\psi_{ij} = \langle i|j\rangle$. But the framework allows any function of the model parameters that is symmetric and positive definite. This allows the framework to include the use of bias parameters (e.g. in GloVe) and subword parameterization (e.g. in FastText).
To understand the range of models encompassed, it is helpful to see how the framework relates (but is not limited) to matrix factorization. We can think of $\psi_{ij}$ and $\phi_{ij}$ as providing the entries of two matrices: $\hat{M}_{ij} = \psi_{ij}$ and $M_{ij} = \phi_{ij}$. For models that take $\psi_{ij} = \langle i|j\rangle$, we can write $\hat{M} = WV$, where $W$ is defined as having row $i$ equal to $\langle i|$, and $V$ as having column $j$ equal to $|j\rangle$. Then, the loss function can be rewritten as:

$\mathcal{L} = \sum_{i,j} f_{ij}\big( (WV)_{ij} \big).$

This loss function can be interpreted as matrix reconstruction error, because the constraint in Eq. 3 means that the gradient goes to zero as $WV \to M$.
Selecting a particular low rank embedder instance requires key design choices to be made: we must choose the embedding dimension $d$, the form of the loss terms $f_{ij}$, the kernel function $\psi$, and the association function $\phi$. Only the gradient of $f_{ij}$ actually affects the algorithm. The derivative of $f_{ij}$ with respect to $\psi_{ij}$, which we call the characteristic gradient, helps compare models because it exhibits the action of the gradient yet is symmetric in the parameters. Thus, we address a specific embedder by the tuple $\big(\frac{\partial f_{ij}}{\partial \psi_{ij}}, \psi_{ij}, \phi_{ij}\big)$.
In the following subsections, we present the derivations of $\frac{\partial f_{ij}}{\partial \psi_{ij}}$, $\psi_{ij}$, and $\phi_{ij}$ for each algorithm. We later (§4) compare the algorithms and summarize their derivations in Table 1.
3.1 SGNS as a low rank embedder
Levy and Goldberg (2014b) provided the important result that skip-gram with negative sampling (SGNS) Mikolov et al. (2013b) implicitly factorizes the $\mathrm{PMI}(i,j) - \ln k$ matrix. However, Levy and Goldberg did not derive the loss function needed to explicitly pose SGNS as matrix factorization, and required additional assumptions for their derivation to hold. Moreover, empirically, they used SVD on the related (but different) positive-PMI matrix, and this did not reproduce the results of SGNS. In other work, Li et al. (2015) provided an explicit MF formulation of SGNS from a "representation learning" perspective. This derivation diverges from Levy and Goldberg's result, and masks the connection between SGNS and other low rank embedders. In this work, we derive the complete global loss function for SGNS, free of additional assumptions.
The loss function of SGNS is as follows:

$\mathcal{L}_{\text{SGNS}} = -\sum_{(i,j) \in \mathcal{D}} \Big( \log \sigma(\langle i|j\rangle) + k\, \mathbb{E}_{i' \sim P_n}\big[\log \sigma(-\langle i'|j\rangle)\big] \Big),$

where $\sigma$ is the logistic sigmoid function, $\mathcal{D}$ is a list containing each cooccurrence of a context-word $i$ with a focal-word $j$ in the corpus, and the expectation is taken by drawing $i'$ from the (smoothed) unigram distribution $P_n$ to generate $k$ "negative samples" for a given focal-word Mikolov et al. (2013b).
We rewrite this by counting the number of times each pair occurs in the corpus, $N_{ij}$, and the number of times each pair is drawn as a negative sample, $N^-_{ij}$, while indexing the sum over the set $\mathcal{V}_C \times \mathcal{V}_T$:

$\mathcal{L}_{\text{SGNS}} = -\sum_{(i,j) \in \mathcal{V}_C \times \mathcal{V}_T} \Big( N_{ij} \log \sigma(\langle i|j\rangle) + N^-_{ij} \log \sigma(-\langle i|j\rangle) \Big).$
We can now observe that the global loss is almost in the required form for a low rank embedder (Eq. 2), and that the appropriate setting for the model approximation function is $\psi_{ij} = \langle i|j\rangle$. Using the identity $1 - \sigma(x) = \sigma(-x)$, the characteristic gradient is derived as:

$\frac{\partial f_{ij}}{\partial \psi_{ij}} = N^-_{ij}\,\sigma(\psi_{ij}) - N_{ij}\,\sigma(-\psi_{ij}).$
This provides that the association function for SGNS is $\phi_{ij} = \ln \big(N_{ij} / N^-_{ij}\big)$, since the derivative will be equal to zero at that point (Eq. 3). However, recall that negative samples are drawn according to the unigram distribution (or a smoothed variant Levy et al. (2015)). This means that $N^-_{ij} = k\,N_i N_j / N$. Therefore, in agreement with Levy and Goldberg (2014b), we find that:

$\phi_{ij} = \ln \frac{N_{ij}\,N}{N_i\,N_j} - \ln k = \mathrm{PMI}(i,j) - \ln k.$  (4)
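The zero of the characteristic gradient can be checked numerically; the sketch below (in our own notation) confirms that the gradient vanishes exactly at $\psi = \ln(N_{ij}/N^-_{ij})$, i.e. at the shifted PMI of Eq. 4:

```python
from math import exp, log

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

def sgns_characteristic_gradient(psi, n_pos, n_neg):
    """df_ij/dpsi_ij for one (i, j) pair: N^-_ij * sigma(psi) - N_ij * sigma(-psi).
    Positive cooccurrences pull psi down the loss toward larger values;
    negative samples push it back down."""
    return n_neg * sigmoid(psi) - n_pos * sigmoid(-psi)

# the zero of the gradient sits at psi = ln(N_ij / N^-_ij)
psi_star = log(40.0 / 10.0)
```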
3.2 GloVe as a low rank embedder
GloVe (global vectors) was proposed as a method that strikes a halfway point between local sampling and global matrix factorization, taking the best parts from both solution methods Pennington et al. (2014). Its efficiency comes from the fact that it only performs a partial factorization of the cooccurrence matrix, only considering samples where $N_{ij} > 0$. We will demonstrate that GloVe is not so different from SGNS, and that it too implicitly factorizes the PMI matrix.
GloVe's loss function is defined as follows:

$\mathcal{L}_{\text{GloVe}} = \sum_{i,j \,:\, N_{ij} > 0} h(N_{ij})\,\big( \langle i|j\rangle + b_i + \tilde{b}_j - \ln N_{ij} \big)^2,$  (5)

where $b_i$ and $\tilde{b}_j$ are learned bias parameters; $x_{\max}$ and $\alpha$ are empirically tuned hyperparameters for the weighting function $h(N_{ij}) = \min\big(1, (N_{ij}/x_{\max})^{\alpha}\big)$, which has $h(N_{ij}) = 0$ when $N_{ij} = 0$.
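The weighting function can be written directly; the defaults $x_{\max} = 100$ and $\alpha = 0.75$ below are those reported by Pennington et al. (2014):

```python
def glove_weight(n_ij, x_max=100.0, alpha=0.75):
    """GloVe's weighting function h(N_ij) = min(1, (N_ij / x_max) ** alpha):
    it grows sublinearly with the cooccurrence count and is capped at 1,
    so very frequent pairs do not dominate the loss."""
    return min(1.0, (n_ij / x_max) ** alpha)
```

This sublinear, capped growth is exactly the "tempering" of the gradient multiplier discussed in §4.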
GloVe can be cast as a low rank embedder by using the model approximation function $\psi_{ij} = \langle i|j\rangle + b_i + \tilde{b}_j$ as a kernel with bias parameters, and setting the association measure to simply be the objective of the loss function: $\phi_{ij} = \ln N_{ij}$.
Let us observe the optimal solution to the loss function, when $h(N_{ij}) > 0$:

$\langle i|j\rangle + b_i + \tilde{b}_j = \ln N_{ij}.$

Multiplying the log operand by $\frac{N_i N_j N}{N_i N_j N}$:

$\langle i|j\rangle + b_i + \tilde{b}_j = \ln \frac{N_{ij}\,N}{N_i\,N_j} + \ln \frac{N_i}{\sqrt{N}} + \ln \frac{N_j}{\sqrt{N}}$  (6)

$\langle i|j\rangle = \mathrm{PMI}(i,j) + \Big( \ln \frac{N_i}{\sqrt{N}} - b_i \Big) + \Big( \ln \frac{N_j}{\sqrt{N}} - \tilde{b}_j \Big).$  (7)

On the right side, we have two terms that depend respectively only on $i$ and $j$, which are candidates for the bias terms. Based on this equation alone, we cannot draw any conclusions. However, empirically the bias terms are in fact very near $\ln (N_i/\sqrt{N})$ and $\ln (N_j/\sqrt{N})$, and PMI is nearly centered at zero, as can be seen in Fig. 2. This means that Eq. 7 provides $\langle i|j\rangle \approx \mathrm{PMI}(i,j)$.
Analyzing the optimum of GloVe’s loss function yields important insights. First, GloVe can be added to the list of low rank embedders that learn a bilinear parameterization of PMI. Second, we can see why such a parameterization is advantageous. Generally, it helps to standardize features of low rank models Udell et al. (2016), and this is essentially what transforming cooccurrence counts into PMI achieves. Thus, PMI can be viewed as a parameterization trick, providing an approximately normal target association to be modelled.
3.3 Other algorithms as low rank embedders
We now present additional algorithms that can be cast as low rank embedders: LDS Arora et al. (2016) and FastText Joulin et al. (2017). The derivations for SVD Levy and Goldberg (2014b); Levy et al. (2015) and Swivel Shazeer et al. (2016) as low rank embedders are trivial, as both are already posed as matrix factorizations of PMI statistics.
Lds.
Arora et al. (2016) introduced an embedding perspective based on generative modelling with random walks through a latent discourse space (LDS). While their only experiments were on analogy completion tasks (which do not correlate well with downstream performance Linzen (2016); Faruqui et al. (2016); Rogers et al. (2017)), LDS provided a theoretical basis for the surprisingly well-performing SIF document embedding algorithm soon afterwards Arora et al. (2017). We now demonstrate that LDS is also a low rank embedder.
The low rank learning objective for LDS follows directly from Corollary 2.3 in Arora et al. (2016):

$\mathcal{L}_{\text{LDS}} = \sum_{i,j} h(N_{ij})\,\Big( \|\langle i| + |j\rangle\|^2 + C - \ln N_{ij} \Big)^2.$

The characteristic gradient, $\frac{\partial f_{ij}}{\partial \psi_{ij}} = 2\,h(N_{ij})\,(\psi_{ij} - \phi_{ij})$, can be found by straightforward differentiation of LDS's loss function, where $h$ is as defined by GloVe. The quadratic term is a valid kernel function because:

$\|\langle i| + |j\rangle\|^2 + C = \langle i'|j'\rangle,$

where $\langle i'| = \big[\, 2\langle i|,\; \|\langle i|\|^2 + C,\; 1 \,\big]$ and $|j'\rangle = \big[\, |j\rangle;\; 1;\; \||j\rangle\|^2 \,\big]$ are augmented parameter vectors.
FastText.
Proposed by Joulin et al. (2017), FastText's motivation is orthogonal to the present work. Its purpose is to provide subword-based representation of words to improve vocabulary coverage and generalizability of word embeddings. Nonetheless, it can also be understood as a low rank embedder.
In FastText, the vector for each word is taken as the sum of embeddings for its character $n$-grams. That is, the vector is given by the feature function $|j\rangle = \sum_{g \in \mathcal{G}_j} |g\rangle$, where $|g\rangle$ is the vector for $n$-gram $g$, and $\mathcal{G}_j$ is the set of $n$-grams in word $j$. Meanwhile, covectors are accorded to words directly, rather than using $n$-gram covector embeddings. This provides $\psi_{ij} = \langle i|\big(\sum_{g \in \mathcal{G}_j} |g\rangle\big)$, and, by virtue of using the SGNS loss function, $\phi_{ij} = \mathrm{PMI}(i,j) - \ln k$.
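The subword composition can be sketched as follows (a toy illustration; `gram_vectors` is a hypothetical lookup from $n$-grams to learned embeddings, and the boundary-marker convention follows FastText):

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    """Character n-grams of a word, with '<' and '>' boundary markers
    as in FastText."""
    w = f"<{word}>"
    return [w[k:k + n] for n in range(n_min, n_max + 1)
            for k in range(len(w) - n + 1)]

def word_vector(word, gram_vectors, dim=4):
    """|j> as the sum of its n-gram embeddings; n-grams missing from the
    (hypothetical) gram_vectors lookup contribute zeros."""
    zero = np.zeros(dim)
    return sum((gram_vectors.get(g, zero) for g in char_ngrams(word)), zero)
```

Because $|j\rangle$ is a sum of shared subword vectors, out-of-vocabulary words still receive nontrivial representations.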
Table 1: Summary of SGNS, GloVe, LDS, Swivel, and HilbertMLE as low rank embedders, each characterized by its tuple $\big(\frac{\partial f_{ij}}{\partial \psi_{ij}}, \psi_{ij}, \phi_{ij}\big)$.
4 Deconstructing the algorithms
Table 1 presents a summary of our derivations of existing algorithms as low rank embedders.
We observe several common features between each of the algorithms. In each case, the characteristic gradient takes the form of a multiplier applied to a difference between (a function of) $\psi_{ij}$ and (a function of) $\phi_{ij}$. The multiplier is always a "tempered" version of $N_{ij}$ (or $p_{ij}$), by which we mean that it increases sublinearly with $N_{ij}$ (or $p_{ij}$).^2 (^2: In SGNS, $N_{ij}$ and $N_i$ are tempered by undersampling and unigram smoothing.)

Furthermore, for each algorithm, $\phi_{ij}$ is equal to PMI or a scaled $\ln N_{ij}$. Yet, the choice of $\psi_{ij}$ in combination with $\phi_{ij}$ provides that every model is optimized when $\langle i|j\rangle$ tends toward $\mathrm{PMI}(i,j)$ (with or without a constant shift or scaling). We have already seen that the optimum for SGNS is equivalent to the shifted PMI (§3.1). For GloVe, we theoretically and empirically showed that incorporation of the bias terms captures the unigram counts needed for PMI (§3.2). We observe this property similarly with regards to LDS's incorporation of the L2 norm into its learning objective, where we suspect that the unigram probability is implicitly captured in the norms of the respective vectors and covectors (§3.3).

Therefore, we observe that these embedders converge on two key points: (1) an optimum in which model parameters are bilinearly related to PMI; and (2) the weighting of the gradient by some tempered form of $N_{ij}$ (or $p_{ij}$). In the next section, we introduce HilbertMLE, which is derived based on the shared principles observed between the algorithms in Table 1.
5 Reconstructing an algorithm
If the two basic principles that we have identified are sufficient, then the simplest low rank embedder should be one that derives from them without any other assumptions.
We begin with principle (1), which prescribes a bilinear parameterization of PMI. The definition of PMI (Eq. 1) provides a log-bilinear parameterization of cooccurrence probability if we presuppose that the aim of our model is to approximate the PMI with vector-covector dot products:

$\hat{p}_{ij} = p_i\,p_j\,e^{\langle i|j\rangle}.$  (8)

In the expression above, $\hat{p}_{ij}$ represents the model's estimate of the cooccurrence probability $p(i,j)$, provided by the parameterization which includes the unigram probabilities $p_i$ and $p_j$.
Accordingly, given the matrix of covectors $W$ and vectors $V$, the likelihood of the observed cooccurrence statistics, $\{N_{ij}\}$, is distributed like the multinomial:

$p\big(\{N_{ij}\} \,\big|\, W, V\big) \propto \prod_{i,j} \hat{p}_{ij}^{\,N_{ij}},$

where $\hat{p}_{ij}$ depends on $W$ and $V$ (whose rows and columns are respectively $\langle i|$ and $|j\rangle$) through Eq. 8. Taking the negative log likelihood as the loss:

$\mathcal{L} = -\sum_{i,j} N_{ij} \ln \hat{p}_{ij},$  (9)

where we have dropped constant terms that do not affect the gradient.
The unitarity axiom of probability requires that $\sum_{i,j} \hat{p}_{ij} = 1$. Including this constraint with a Lagrange multiplier $\lambda$, we obtain:

$\mathcal{L} = -\sum_{i,j} N_{ij} \ln \hat{p}_{ij} + \lambda \Big( \sum_{i,j} \hat{p}_{ij} - 1 \Big).$  (10)

At the feasible optimum, the original loss and constraint gradients should balance:

$\frac{\partial}{\partial \hat{p}_{ij}} \sum_{i'j'} N_{i'j'} \ln \hat{p}_{i'j'} = \lambda\,\frac{\partial}{\partial \hat{p}_{ij}} \sum_{i'j'} \hat{p}_{i'j'}$  (11)

$\frac{N_{ij}}{\hat{p}_{ij}} = \lambda.$  (12)

Eq. 12 represents $|\mathcal{V}_C| \times |\mathcal{V}_T|$ equations, one for each pair $(i,j)$. Rearranging each as $N_{ij} = \lambda\,\hat{p}_{ij}$ and summing these equations together, the constraint yields:

$\lambda = \sum_{i,j} N_{ij} = N.$  (13)

The constrained loss function is therefore,

$\mathcal{L} = -\sum_{i,j} N_{ij} \ln \hat{p}_{ij} + N \sum_{i,j} \hat{p}_{ij}.$  (14)
Reintroducing the bilinear parameterization (Eq. 8), and dividing through by $N$ to eliminate dependence on corpus size:

$\mathcal{L}_{\text{MLE}} = \sum_{i,j} \Big( p_i\,p_j\,e^{\langle i|j\rangle} - p_{ij}\,\langle i|j\rangle \Big),$  (15)

where, again, we have dropped constant terms that do not affect the gradient, and $p_{ij} = N_{ij}/N$. Finally, by differentiating, we obtain the characteristic gradient:

$\frac{\partial f_{ij}}{\partial \psi_{ij}} = p_i\,p_j\,\big(e^{\psi_{ij}} - e^{\phi_{ij}}\big), \quad \text{with } \psi_{ij} = \langle i|j\rangle \text{ and } \phi_{ij} = \mathrm{PMI}(i,j).$  (16)
This yields a loss gradient closely resembling those of the other low rank embedders. Empirically, its performance is on par with theirs (see §6).
The multiplier, $p_i\,p_j$, determines how errors in fitting individual $(i,j)$ pairs trade off. While it appropriately favors fitting statistics with lower standard error, the signal from rarer pairs will be weak for any non-divergent learning rate, because $p_i\,p_j$ spans many orders of magnitude. This slows down training. So, we apply a gradient conditioning measure as is done for the other low rank embedders: we apply a temperature parameter, $\tau$, that reduces differences in magnitude of the multiplier:

$\frac{\partial f_{ij}}{\partial \psi_{ij}} = (p_i\,p_j)^{1/\tau}\,\big(e^{\psi_{ij}} - e^{\phi_{ij}}\big).$  (17)
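As a sanity check, the characteristic gradient can be sketched numerically (assuming, as in our reading of Eq. 17, that the tempered multiplier takes the form $(p_i\,p_j)^{1/\tau}$):

```python
import numpy as np

def hilbert_mle_gradient(psi, pmi, p_i, p_j, temperature=2.0):
    """Characteristic gradient of HilbertMLE for one (i, j) pair: a tempered
    unigram-product multiplier times (e^psi - e^PMI). The tempered form
    (p_i * p_j) ** (1 / temperature) is our assumption about Eq. 17."""
    multiplier = (p_i * p_j) ** (1.0 / temperature)
    return multiplier * (np.exp(psi) - np.exp(pmi))
```

The gradient vanishes exactly when $\psi_{ij}$ equals the PMI, and the temperature shrinks the dynamic range of the multiplier across frequent and rare pairs.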
5.1 Solving the objective function
The objective function presented in Equation 15 is most straightforwardly solved via dense matrix factorization. This can be done relatively efficiently by using the sharding method presented by Shazeer et al. (2016) for matrix factorization. Such a solution is acceptable given a small vocabulary size, but does not scale to large vocabularies, due to the quadratic dependency of the cooccurrence matrix on vocabulary size. GloVe Pennington et al. (2014) handled this problem by only training on statistics where $N_{ij} > 0$. Levy and Goldberg (2014b); Levy et al. (2015) avoided the quadratic dependency by implementing sparse SVD on the positive-PMI matrix. However, both of these solutions may be missing out on important information that can be gained by "noticing what's missing" Shazeer et al. (2016).
Yet, SGNS Mikolov et al. (2013b) never confronted the vocabulary size problem, because it uses local sampling over the corpus. While this yields time complexity linear in the corpus size, that is generally preferable to memory complexity quadratic in the vocabulary size. Levy and Goldberg (2014b) derived the global matrix factorization formulation of SGNS by moving in the algorithmic direction of local to global. Conversely, we will now move in the direction of global to local and derive the local sampling formulation of the HilbertMLE loss function.
Locally sampling HilbertMLE.
Note how, if we differentiate the loss function of HilbertMLE (Equation 15) with respect to an arbitrary model parameter $\theta$, we obtain a difference between two expectations:

$\frac{\partial \mathcal{L}_{\text{MLE}}}{\partial \theta} = \mathbb{E}_{\hat{p}}\left[ \frac{\partial \psi_{ij}}{\partial \theta} \right] - \mathbb{E}_{p}\left[ \frac{\partial \psi_{ij}}{\partial \theta} \right].$  (18)

In words, the derivative of the loss function is the difference between the expected value of $\partial\psi_{ij}/\partial\theta$ when taken under the model distribution on one hand and under the corpus distribution on the other.
This leads to a remarkably simple training algorithm. Draw a sample of word pairs from the corpus (using a local sampling approach as in SGNS), and draw a sample of pairs from the model distribution. Compute the average of $\psi_{ij}$ over each sample, and take the difference of the averages. The gradient of the result estimates the gradient of $\mathcal{L}_{\text{MLE}}$.
In the context of autodifferentiation libraries such as PyTorch Paszke et al. (2017), it is adequate to use a simplified loss function, $\tilde{\mathcal{L}}$:

$\tilde{\mathcal{L}} = \mathbb{E}_{\hat{p}}\big[\psi_{ij}\big] - \mathbb{E}_{p}\big[\psi_{ij}\big],$  (19)

because, in the first term, the autodifferential operator will ignore the appearance of model parameters in the distribution according to which the expectation is taken, but will recognize model parameters in the expectation's operand $\psi_{ij}$. Thus $\partial\tilde{\mathcal{L}}/\partial\theta = \partial\mathcal{L}_{\text{MLE}}/\partial\theta$.
Like SGNS, this uses positive samples drawn from the corpus, balanced against negative samples. But unlike SGNS, which draws negative samples according to the (distorted) unigram distribution, here we draw negative samples from the model distribution $\hat{p}$. This can be done efficiently using Gibbs sampling, making this a form of contrastive divergence Hinton (2002); Carreira-Perpinan and Hinton (2005). To approximately sample a cooccurring pair from the model distribution, we start from a corpus-derived pair, and repeatedly perform Gibbs sampling steps: randomly fix either $i$ or $j$ and resample the other from the model distribution conditioned on the fixed variable; e.g., if we fix $i$, then we draw a new $j$ from $\hat{p}(j \mid i)$. Sampling from the conditional distribution can be done in constant time using an adaptive softmax Grave et al. (2017). In theory, the model distribution is approximated after taking many Gibbs sampling steps, but consistent with Hinton's findings for contrastive divergence Hinton (2002); Carreira-Perpinan and Hinton (2005), we find that a single Gibbs sampling step supports efficient training.
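The local sampling procedure can be sketched as a toy contrastive-divergence loop. This is a hedged illustration only: it assumes uniform unigram probabilities, a single Gibbs step, a full softmax rather than an adaptive one, and plain SGD on $\psi_{ij} = \langle i|j\rangle$, unlike the released implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 50, 8                            # toy vocabulary size and dimension
W = 0.1 * rng.standard_normal((V, d))   # covectors <i|
U = 0.1 * rng.standard_normal((V, d))   # vectors |j>

def conditional(i):
    """Model conditional p-hat(j | i): softmax of <i|j> over the vocabulary
    (uniform unigram probabilities assumed, so they cancel out)."""
    logits = W[i] @ U.T
    e = np.exp(logits - logits.max())
    return e / e.sum()

def cd_step(pos_pairs, lr=0.05):
    """One contrastive-divergence update: positive pairs come from the
    'corpus'; negatives from a single Gibbs resample of j under the model."""
    for i, j in pos_pairs:
        j_neg = rng.choice(V, p=conditional(i))
        # the gradient of psi = <i|j> raises psi for positive pairs and
        # lowers it for model-sampled pairs, mirroring Eq. 18
        W[i] += lr * (U[j] - U[j_neg])
        U[j] += lr * W[i]
        U[j_neg] -= lr * W[i]
```

Repeated positive updates for a pair drive its dot product up until the model distribution itself starts generating that pair as a negative sample, at which point the two expectations in Eq. 18 balance.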
6 Experiments
We provide a simple set of experiments comparing SGNS and GloVe, the two characteristic models for word embeddings, against HilbertMLE. Our aim in these experiments is simply to verify the sufficiency of the principles we used to derive HilbertMLE (§5). In other words, we are testing the following hypothesis: if the principles we have proposed are sufficient for designing a word embedding algorithm, then HilbertMLE should perform equivalently to or better than SGNS and GloVe, which were proposed with different motivating principles Mikolov et al. (2013b); Pennington et al. (2014).
In our experiments, we use a matrix factorization implementation of HilbertMLE as the characteristic form of the model. During experimentation, we found that the Gibbs sampling implementation of HilbertMLE (§5.1) performed equivalently, as expected. We present results on word similarity (§6.1), analogical reasoning (§6.2), text classification (§6.3), and sequence labelling (§6.4).
Our reference corpus combines Gigaword 3 Graff et al. (2007) with a Wikipedia 2018 dump, lowercased, yielding 5.4 billion tokens. We limit the focal and context vocabularies, $\mathcal{V}_T$ and $\mathcal{V}_C$, to the 50,000 most frequent tokens in the corpus, and use a fixed-width token context window. We use the released implementations and hyperparameter choices of SGNS and GloVe. Our implementation of HilbertMLE uses PyTorch to take advantage of GPU acceleration, automatic differentiation Paszke et al. (2017), and the Adam gradient descent optimizer Kingma and Ba (2015). Practically, HilbertMLE was implemented using sharding Shazeer et al. (2016): we use a single 12GB GPU, and load shards of the cooccurrence matrix to calculate each update. Training embeddings took less than 3 hours for a 50,000 word vocabulary.
Intrinsic eval.  B143  MENd/t  RMT  RARE  SE17  S999  WSR/S  Y130  Analogy 
SGNS  .453  .753/.763  .680  .515  .656  .364  .589/.760  .514  .763/.312 
GloVe  .347  .760/.771  .663  .509  .662  .391  .602/.727  .541  .739/.310 
HilbertMLE  .397  .751/.761  .684  .579  .680  .462  .593/.765  .514  .705/.343 
6.1 Word similarity
A word similarity task involves interpreting the cosine similarity between embeddings as a measure of similarity or relatedness between words. Performance is computed with the Spearman rank correlation coefficient between the model’s scoring of all pairs of words versus the gold standard human scoring. These tasks can reflect the degree of linear structure captured in the embeddings, which can provide useful insights into differences between models. However, they do not always correlate with performance in downstream tasks Chiu et al. (2016); Faruqui et al. (2016).
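The evaluation protocol just described reduces to a few lines; the following is a self-contained sketch (a full implementation, such as `scipy.stats.spearmanr`, additionally averages tied ranks, which this simple ranking skips):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Ties are broken arbitrarily here, unlike a full implementation."""
    def ranks(a):
        order = np.argsort(a)
        r = np.empty(len(a))
        r[order] = np.arange(len(a))
        return r
    rx = ranks(np.asarray(xs, dtype=float))
    ry = ranks(np.asarray(ys, dtype=float))
    return float(np.corrcoef(rx, ry)[0, 1])
```

For a dataset, `xs` would hold the model's cosine similarities for each word pair and `ys` the corresponding gold-standard human scores.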
We used the following word similarity datasets: SimLex-999 (S999) Hill et al. (2015); WordSim-353 Finkelstein et al. (2002), divided into similarity (WSS) and relatedness (WSR) Agirre et al. (2009); the SemEval 2017 task (SE17) Camacho-Collados et al. (2017); Radinsky Mechanical Turk (RMT) Radinsky et al. (2011); Baker Verbs 143 (B143) Baker et al. (2014); Yang Powers Verbs 130 (Y130) Yang and Powers (2006); MEN, divided into a 2000-sample development set (MENd) and a 1000-sample test set (MENt) Bruni et al. (2012); and Rare Words (RARE) Luong et al. (2013). We had an average of 96% coverage over all word pairs in each dataset, excepting RARE; we had 31% coverage over RARE, yielding 620 word pairs (i.e., more samples than SE17, RMT, WSS/R, and Y130). Results are computed on these covered word pairs.
Word similarity results.
Table 2 presents results across the 10 word similarity tasks. We observe that HilbertMLE obtains the best performance in 5 out of 10 tasks. In particular, HilbertMLE obtains substantially better scores on S999 than the other models, earning a Spearman correlation coefficient of 0.462, a substantial relative improvement over the next best model. Note that S999 has been shown to have a high correlation with performance in extrinsic tasks such as named entity recognition and NP-chunking, unlike the other word similarity datasets Chiu et al. (2016). On the tasks with worse performance, we observe that the differences between the three algorithms are relatively marginal. Based on these experiments, and the ones that follow, we can therefore conclude that our hypothesis (§6) is valid.
6.2 Analogical reasoning
We performed intrinsic evaluation of our embeddings using standard analogy tasks (e.g., "man" is to "woman" as "king" is to ?). We evaluated on the Google Analogy dataset (Google) Mikolov et al. (2013a) and the Balanced Analogy Test Set (BATS) Gladkova et al. (2016); results are computed over the words covered in each dataset. Preliminary experiments using 3CosAdd and 3CosMul Levy and Goldberg (2014a) as selection rules showed that 3CosMul was always superior, consistent with the findings of Levy and Goldberg.
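The 3CosMul selection rule can be sketched as follows (queries are row indices into an embedding matrix; the shift of cosines into $[0, 1]$ and the exclusion of the query words follow Levy and Goldberg (2014a)):

```python
import numpy as np

def three_cos_mul(a, a_star, b, M, eps=1e-3):
    """3CosMul for the analogy 'a : a* :: b : ?': return the row index
    maximizing cos(x, a*) * cos(x, b) / (cos(x, a) + eps), with cosines
    shifted to [0, 1] and the query words excluded, as is standard."""
    Mn = M / np.linalg.norm(M, axis=1, keepdims=True)
    cos = Mn @ Mn.T                  # all pairwise cosine similarities
    s = (cos + 1.0) / 2.0            # shift from [-1, 1] to [0, 1]
    scores = s[:, a_star] * s[:, b] / (s[:, a] + eps)
    scores[[a, a_star, b]] = -np.inf # exclude the query words themselves
    return int(np.argmax(scores))
```

The multiplicative combination amplifies small differences between large similarities, which is why it tends to outperform the additive 3CosAdd rule.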
Analogy results.
Table 2 presents results on the two analogy datasets in the final column. HilbertMLE performs somewhat worse than the other models on the Google Analogy dataset. However, there has been a considerable amount of work finding that performance on these tasks does not necessarily provide a reliable judgment for embedding quality (Faruqui et al., 2016; Linzen, 2016; Rogers et al., 2017). Indeed, we can see that performance on the Google Analogy dataset does not correspond with performance on the other larger analogy dataset (BATS), where HilbertMLE gets the best performance.
Classification  IMDB  AGNews 
SGNS  .910 ± .001  .812 ± .003 
GloVe  .905 ± .001  .807 ± .003 
HilbertMLE  .911 ± .002  .812 ± .003 
Seq. labelling  Semcor  WSJ  Brown 
Baseline  .6126  .8905  .9349 
SGNS  .6638  .9615  .9762 
GloVe  .6550  .9609  .9750 
HilbertMLE  .6663  .9617  .9767 
6.3 Text classification
We performed extrinsic evaluation on two benchmark NLP classification datasets. First, the IMDB movie reviews dataset for sentiment analysis Maas et al. (2011), divided into train and test sets of 25,000 samples each. Second, the AGNews news classification dataset, divided into eight classes of approximately 12,000 samples each (such as Sports, Health, and Business) by Kenyon-Dean et al. (2019); here, we separate 30% of the samples as the final test set. On each, we separate 10% of the training set for validation tuning.
We use a standard BiLSTM-max sequence encoder for these tasks Conneau et al. (2017). This model produces a sequence representation by max-pooling over the forward and backward hidden states produced by a bidirectional LSTM. This representation is then passed through an MLP before final prediction. We found that validation performance was optimized with a 1-layer BiLSTM, followed by an MLP using a ReLU activation, a minibatch size of 64, dropout, and normalizing the embeddings before input. We use Adam, and divide the learning rate when validation performance does not improve in 3 epochs, similar to Conneau et al. (2017); we schedule the learning rate in the same way for sequence labelling (§6.4).
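The encoder just described can be sketched in PyTorch roughly as follows (dimensions are illustrative placeholders, not the tuned values from our experiments):

```python
import torch
import torch.nn as nn

class BiLSTMMax(nn.Module):
    """BiLSTM-max sentence encoder (Conneau et al., 2017): max-pool the
    bidirectional hidden states over time, then classify with a small MLP.
    Dimensions here are illustrative, not tuned values."""
    def __init__(self, emb_dim=300, hidden=256, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, num_layers=1,
                            bidirectional=True, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes))

    def forward(self, embeddings):       # (batch, seq_len, emb_dim)
        h, _ = self.lstm(embeddings)     # (batch, seq_len, 2 * hidden)
        pooled, _ = h.max(dim=1)         # max-pool over the time dimension
        return self.mlp(pooled)
```

The pretrained word embeddings under comparison enter as the `embeddings` input; only the encoder and MLP weights are trained on the downstream task.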
Classification results.
In Table 3 we present the test set results from our classification experiments. We trained each BiLSTMmax 10 times with different random seeds for weight initialization and present the mean test accuracy plus/minus the standard deviation. These results show that each embedding model is similar, although GloVe is slightly worse than the others. Meanwhile, SGNS and HilbertMLE perform approximately the same, obtaining high quality results on both tasks.
6.4 Sequence labelling
Our final extrinsic evaluations are sequence labelling tasks on three datasets. The first task is supersense tagging (SST) Ciaramita and Altun (2006) on the Semcor 3.0 corpus Miller et al. (1993). SST is a coarse-grained semantic sequence labelling problem with 83 unique labels; we report results using the micro-F score without the O-tag score, due to the skew of the label distribution, as is standard (Alonso and Plank, 2017; Changpinyo et al., 2018). We divide Semcor into a 70/30% train-test split, and use 10% of the training set for validation tuning. The second task is syntactic part-of-speech tagging (POS); we use the Penn TreeBank Wall Street Journal corpus (WSJ) Marcus et al. (1993) and the Brown corpus (as distributed by NLTK: https://www.nltk.org/book/ch02.html). For the WSJ, we use the given 44-tag tagset, and for Brown we map the original tags to the 12-tag "universal tagset" Petrov et al. (2012). We use sections 22, 23, and 24 of the WSJ corpus as its test set, and separate out 30% of the sentences in Brown as its test set.
On each dataset, we train a standard sequence labelling model inspired by Huang et al. (2015): a 2-layer, d bidirectional LSTM with a mini-batch size of 16 and a dropout rate of 0.5. Interestingly, we found that normalizing the embeddings substantially reduced validation performance, so we keep them in their original form.
Sequence labelling results.
To accompany our results in Table 4, we include results from a trivial most-frequent-tag baseline. For each token, this baseline predicts the tag that occurs most frequently for that token within the training set. In SST it is standard to include results from a most-frequent-supersense baseline, inspired by the tradition in word sense disambiguation of reporting the most-frequent-sense baseline.
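The baseline itself is trivial to implement. A minimal sketch follows; the handling of unseen tokens (falling back to the corpus-wide most frequent tag) is our assumption, and the tiny training set is invented:

```python
from collections import Counter, defaultdict

def train_most_frequent_tag(tagged_tokens):
    """Build a most-frequent-tag lookup from (token, tag) training pairs.
    Returns a per-token table and a global fallback tag for unseen tokens
    (the fallback strategy is an illustrative assumption)."""
    per_token = defaultdict(Counter)
    overall = Counter()
    for token, tag in tagged_tokens:
        per_token[token][tag] += 1
        overall[tag] += 1
    table = {tok: c.most_common(1)[0][0] for tok, c in per_token.items()}
    return table, overall.most_common(1)[0][0]

# Tiny hypothetical training set: "run" is tagged VERB more often than NOUN.
train = [("the", "DET"), ("dog", "NOUN"), ("run", "NOUN"),
         ("run", "VERB"), ("run", "VERB"), ("barks", "VERB")]
table, fallback = train_most_frequent_tag(train)
predict = lambda token: table.get(token, fallback)
print(predict("run"), predict("unseen-token"))  # VERB VERB
```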
The results for the BiLSTMs are the mean test set scores across 10 runs with different random seeds for weight initialization; standard deviations were low and approximately the same for each model. As in the classification tasks, we find that the embeddings produced by each model obtain very similar results. Nonetheless, we observe that HilbertMLE offers marginal improvements over the others. Note that our performance on WSJ and Brown is as expected, since we use vanilla BiLSTMs that do not include any hand-engineered character- or context-based features. Indeed, Huang et al. report results of 96.04% on the WSJ with their vanilla BiLSTM, which suggests that our embeddings possess strong syntactic properties.
6.5 Qualitative analysis
| Argmax | Top most similar embeddings |
| argmax_w ⟨v_cat, v_w⟩ | kittens, cats, kitten, poodle |
| argmax_w ⟨v_cat, u_w⟩ | burglar, siamese, schrödinger |
| argmax_w ⟨v_money, v_w⟩ | funds, monies, billions, cash |
| argmax_w ⟨v_money, u_w⟩ | laundering, launder, extort |
| argmax_w ⟨v_cuba, v_w⟩ | cubans, cuban, anti-castro |
| argmax_w ⟨v_cuba, u_w⟩ | gooding, guantanamo, havana |
We provide a final set of qualitative results in Table 5. Here, we use the vectors and covectors trained by the HilbertMLE model used in our experiments. These results elucidate the difference between vector-vector similarity and vector-covector dot product similarity (results are practically the same when using cosine similarity). Vector-vector similarity is well known as a way to measure the semantic similarity between two concepts captured in word embeddings. As expected, we see recoveries such as "cat" being similar to "kitten", "money" to "funds", etc.
However, when we instead retrieve the most similar covector to a given vector, the results are dramatically different. We see that the vector for "cat" is most similar to the covectors for words with which it forms multiword expressions: "cat burglar", "siamese cat", "schrödinger's cat". We see that "cuba" is most similar to the covector for "gooding" because Cuba Gooding Jr. is a famous American actor whose Wikipedia page appears in our corpus. Indeed, the vector-covector recoveries are directly correlated with the PMIs between the terms in the corpus.
Overall, we see that vector-vector dot products recover semantic similarity, while vector-covector dot products recover co-occurrence similarity. Melamud et al. (2015) and Asr and Jones (2017) discuss these different statistical recoveries as paradigmatic (target-to-target) and syntagmatic (target-to-context) recoveries, respectively. However, to our knowledge, previous work has not explicitly explained the reason for these two different types of recoveries: namely, that the learning objective for word embeddings is to approximate PMI. These results therefore qualitatively demonstrate exactly what our hypothesis anticipates: the dot product between trained vectors and covectors approximates the PMI between their corresponding words in the original corpus.
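The two query types can be contrasted with a toy example. The 2-dimensional vectors (V) and covectors (U) below are invented for illustration, whereas the trained embeddings are higher-dimensional and fit so that v_i · u_j approximates PMI(i, j):

```python
# Toy contrast between the two nearest-neighbour queries discussed above.
# V holds vectors and U holds covectors; all values here are hypothetical.
V = {"cat": [1.0, 0.2], "kitten": [0.9, 0.3], "burglar": [-0.5, 1.0]}
U = {"kitten": [0.2, -0.1], "burglar": [1.0, 2.0]}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Paradigmatic query: which other *vector* is closest to the vector for "cat"?
vv_neighbour = max((w for w in V if w != "cat"), key=lambda w: dot(V["cat"], V[w]))
# Syntagmatic query: which *covector* gives the largest dot product with it?
vc_neighbour = max(U, key=lambda w: dot(V["cat"], U[w]))
print(vv_neighbour, vc_neighbour)  # kitten burglar
```

In this sketch the vector neighbour recovers a semantically similar word, while the covector neighbour recovers a frequent co-occurrence partner, mirroring the pattern in Table 5.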
7 Discussion
In the past, probabilistic distributional semantic models for creating word embeddings surpassed the traditional count-based models Turney and Pantel (2010) that preceded them, as was well established by Baroni et al. (2014). At the same time, models like Word2vec (SGNS), GloVe, and SVD of PPMI Mikolov et al. (2013a, b); Pennington et al. (2014); Levy et al. (2015) offered strong improvements (in terms of both performance and efficiency) over other probabilistically motivated embedding models Collobert and Weston (2008); Mnih and Hinton (2009); Turian et al. (2010); Mnih and Kavukcuoglu (2013).
Today, NLP seems to be orienting toward deep contextualized models Peters et al. (2018); Devlin et al. (2018). Nonetheless, pretrained word embeddings remain highly relevant. Indeed, they have recently been used to assist in solving problems in materials science Tshitoyan et al. (2019), biomedical text mining Zhang et al. (2019), and law Chalkidis and Kampas (2019). Moreover, word embeddings are used in dynamic meta-embeddings to obtain state-of-the-art results Kiela et al. (2018), serve as inputs to ELMo Peters et al. (2018), and are crucial in memory-constrained NLP contexts (such as mobile devices, which cannot store large deep neural networks Shu and Nakayama (2017)).
We believe a robust understanding of the "shallow" (that is, non-deep), uncontextualized embedding models presented in this work is a prerequisite for the informed development of deeper models. In this work, we advanced the theoretical understanding of word embeddings by proposing the low rank embedder framework. Cast under this framework, the similarities between many existing algorithms become apparent. After isolating two key principles shared by the low rank embedders (a probabilistically informed bilinear parameterization of PMI and a "tempered" gradient conditioning measure), we demonstrated that these ideas are sufficient to derive HilbertMLE, a simpler model whose performance is equivalent to or better than that of the other models. This provides a parsimonious explanation for the success of the low rank embedders, demonstrating that many idiosyncratic features of other embedders are unnecessary.
The design parameters of our framework have not yet been fully explored. The framework could be used to model more complex linguistic phenomena by following the blueprint provided by our derivation of HilbertMLE. Moreover, based on our findings concerning the importance of covectors in parameterizing the model's approximation of PMI, we believe that methods such as retrofitting Faruqui et al. (2015), subword-based embedding decomposition Stratos (2017), and dynamic meta-embedding Kiela et al. (2018) could benefit from incorporating covectors into their modelling designs. As well, just as the theoretical basis of LDS Arora et al. (2016) grounded the widely used SIF document embedding algorithm Arora et al. (2017), we believe that the theoretical basis provided in this work can inform the future development of document embedding techniques.
References
A study on similarity and relatedness using distributional and WordNet-based approaches. In Proceedings of the 2009 Annual Conference of NAACL-HLT, pp. 19–27.
When is multitask learning effective? Semantic sequence prediction under varying data conditions. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL), Vol. 1, pp. 44–53.
A latent variable model approach to PMI-based word embeddings. Transactions of the Association for Computational Linguistics 4, pp. 385–399.
A simple but tough-to-beat baseline for sentence embeddings. International Conference on Learning Representations.
An artificial language evaluation of distributional semantic models. CoNLL 2017, pp. 134.
Querying word embeddings for similarity and relatedness. In Proceedings of the 2018 Conference of NAACL-HLT, Volume 1 (Long Papers), pp. 675–684.
An unsupervised model for instance level subcategorization acquisition. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pp. 278–289.
Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 238–247.
Distributional semantics in technicolor. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 136–145.
SemEval-2017 task 2: Multilingual and cross-lingual semantic word similarity. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pp. 15–26.
On contrastive divergence learning. In AISTATS, Vol. 10, pp. 33–40.
Deep learning in law: early adaptation and legal word embeddings trained on large corpora. Artificial Intelligence and Law 27 (2), pp. 171–198.
Multi-task learning for sequence tagging: an empirical study. In Proceedings of the 27th International Conference on Computational Linguistics, pp. 2965–2977.
Intrinsic evaluation of word vectors fails to predict extrinsic performance. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pp. 1–6.
Broad-coverage sense disambiguation and information extraction with a supersense sequence tagger. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 594–602.
A unified architecture for natural language processing: deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, pp. 160–167.
Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 670–680.
Waste not: meta-embedding of word and context vectors. In International Conference on Applications of Natural Language to Information Systems, pp. 393–401.
BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Retrofitting word vectors to semantic lexicons. In Proceedings of the 2015 Conference of NAACL-HLT, pp. 1606–1615.
Problems with evaluation of word embeddings using word similarity tasks. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pp. 30–35.
Placing search in context: the concept revisited. ACM Transactions on Information Systems 20 (1), pp. 116–131.
Analogy-based detection of morphological and semantic relations with word embeddings: what works and what doesn't. In Proceedings of the NAACL-HLT Student Research Workshop, pp. 8–15.
A primer on neural network models for natural language processing. Journal of Artificial Intelligence Research 57, pp. 345–420.
English Gigaword third edition LDC2007T07. Web download. Philadelphia: Linguistic Data Consortium.
Efficient softmax approximation for GPUs. In Proceedings of the 34th International Conference on Machine Learning, Volume 70, pp. 1302–1310.
SimLex-999: evaluating semantic models with (genuine) similarity estimation. Computational Linguistics 41 (4), pp. 665–695.
Training products of experts by minimizing contrastive divergence. Neural Computation 14 (8), pp. 1771–1800.
Bidirectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991.
Bag of tricks for efficient text classification. European Association for Computational Linguistics 2017, pp. 427.
Clustering-oriented representation learning with attractive-repulsive loss. AAAI Workshop: Network Interpretability for Deep Learning.
Dynamic meta-embeddings for improved sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 1466–1477.
Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pp. 1746–1751.
Adam: a method for stochastic optimization. In Proceedings of the 2015 International Conference on Learning Representations.
Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics 3, pp. 211–225.
Linguistic regularities in sparse and explicit word representations. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, pp. 171–180.
Neural word embedding as implicit matrix factorization. In Advances in Neural Information Processing Systems, pp. 2177–2185.
Word embedding revisited: a new representation learning and explicit matrix factorization perspective. In Proceedings of the 24th International Conference on Artificial Intelligence, pp. 3650–3656.
Issues in evaluating semantic spaces using word analogies. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pp. 13–18.
Better word representations with recursive neural networks for morphology. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pp. 104–113.
Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pp. 142–150.
Word embedding and WordNet based metaphor identification and interpretation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics.
Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics 19 (2), pp. 313–330.
A simple word embedding model for lexical substitution. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pp. 1–7.
Efficient estimation of word representations in vector space. Proceedings of the 2013 International Conference on Learning Representations.
Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pp. 3111–3119.
A semantic concordance. In Proceedings of the Workshop on Human Language Technology, pp. 303–308.
A scalable hierarchical distributed language model. In Advances in Neural Information Processing Systems, pp. 1081–1088.
Learning word embeddings efficiently with noise-contrastive estimation. In Advances in Neural Information Processing Systems, pp. 2265–2273.
Improving document ranking with dual word embeddings. In Proceedings of the 25th International Conference Companion on World Wide Web, pp. 83–84.
Automatic differentiation in PyTorch. In Neural Information Processing Systems 2017 Workshop on Automatic Differentiation.
GloVe: global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pp. 1532–1543.
Deep contextualized word representations. In Proceedings of the 2018 Conference of NAACL-HLT, Volume 1 (Long Papers), pp. 2227–2237.
A universal part-of-speech tagset. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012).
Using the output embedding to improve language models. EACL 2017, pp. 157.
When and why are pre-trained word embeddings useful for neural machine translation? In Proceedings of the 2018 Conference of NAACL-HLT, Volume 2 (Short Papers), pp. 529–535.
A word at a time: computing word relatedness using temporal semantic analysis. In Proceedings of the 20th International Conference on World Wide Web, pp. 337–346.
The (too many) problems of analogical reasoning with word vectors. In Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017), pp. 135–148.
PIC a different word: a simple model for lexical substitution in context. In Proceedings of the 2016 Conference of NAACL-HLT, pp. 1121–1126.
Swivel: improving embeddings by noticing what's missing. arXiv preprint arXiv:1602.02215.
Compressing word embeddings via deep compositional code learning. arXiv preprint arXiv:1711.01068.
Reconstruction of word embeddings from sub-word parameters. In Proceedings of the First Workshop on Subword and Character Level Models in NLP, pp. 130–135.
Frequency estimates for statistical word similarity measures. In Proceedings of the 2003 Conference of NAACL-HLT, Volume 1, pp. 165–172.
Unsupervised word embeddings capture latent knowledge from materials science literature. Nature 571 (7763).
Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pp. 384–394.
From frequency to meaning: vector space models of semantics. Journal of Artificial Intelligence Research 37, pp. 141–188.
Generalized low rank models. Foundations and Trends in Machine Learning 9 (1), pp. 1–118.
Verb similarity on the taxonomy of WordNet. Masaryk University.
How to avoid sentences spelling boring? Towards a neural approach to unsupervised metaphor generation. In Proceedings of the 2019 Conference of NAACL-HLT, Volume 1 (Long and Short Papers), pp. 861–871.
Deep learning for sentiment analysis: a survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 8 (4), pp. e1253.
BioWordVec, improving biomedical word embeddings with subword information and MeSH. Scientific Data 6 (1).