Are Girls Neko or Shōjo? Cross-Lingual Alignment of Non-Isomorphic Embeddings with Iterative Normalization


Mozhi Zhang Keyulu Xu Ken-ichi Kawarabayashi
Stefanie Jegelka Jordan Boyd-Graber
University of Maryland, College Park, Maryland, USA
Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
National Institute of Informatics, Tokyo, Japan
{mozhi,jbg}@umiacs.umd.edu {keyulu,stefje}@mit.edu k_keniti@nii.ac.jp
Abstract

Cross-lingual word embeddings (clwe) underlie many multilingual natural language processing systems, often through orthogonal transformations of pre-trained monolingual embeddings. However, orthogonal mapping only works on language pairs whose embeddings are naturally isomorphic. For non-isomorphic pairs, our method (Iterative Normalization) transforms monolingual embeddings to make orthogonal alignment easier by simultaneously enforcing that (1) individual word vectors are unit length, and (2) each language’s average vector is zero. Iterative Normalization consistently improves word translation accuracy of three clwe methods, with the largest improvement observed on English-Japanese (from 2% to 44% test accuracy).


1 Orthogonal Cross-Lingual Mappings

Cross-lingual word embedding (clwe) models map words from multiple languages to a shared vector space, where words with similar meanings are close, regardless of language. clwe is widely used in multilingual natural language processing (Klementiev et al., 2012; Guo et al., 2015; Zhang et al., 2016). Recent clwe methods (Ruder et al., 2017; Glavas et al., 2019) independently train two monolingual embeddings on large monolingual corpora and then align them with a linear transformation. Previous work argues that these transformations should be orthogonal (Xing et al., 2015; Smith et al., 2017; Artetxe et al., 2016): for any two words, the dot product of their representations is unchanged by the transformation. This preserves the similarities and substructure of the original monolingual embeddings while enriching them with cross-lingual connections.

Figure 1: The most similar Japanese words for 少女 (shōjo “girl”) and English words for “girl”, measured by cosine similarity on Wikipedia fastText vectors, before (left) and after (right) Iterative Normalization. In the original embedding spaces, “boy” is the nearest neighbor for both languages but with a very different cosine similarity, and “cat” in English is not close to “girl”: both violate the isomorphism assumed by an orthogonal transformation for cross-lingual representations. Iterative Normalization replaces 猫 (neko “cat”) with the more relevant 美少女 (bishōjo “pretty girl”) and brings cosine similarities closer.

Thus, many state-of-the-art mapping-based clwe methods impose an orthogonal constraint (Artetxe et al., 2017; Conneau et al., 2018; Alvarez-Melis and Jaakkola, 2018; Artetxe et al., 2018; Ruder et al., 2018; Alvarez-Melis et al., 2019). The success of orthogonal methods relies on the assumption that embedding spaces are isomorphic; i.e., they have the same inner-product structures across languages, but this does not hold for all languages (Søgaard et al., 2018; Fujinuma et al., 2019). For example, English and Japanese fastText vectors (Bojanowski et al., 2017) have different substructures around “girl” (Figure 1 left). As a result, orthogonal mapping fails on some languages—when Hoshen and Wolf (2018) align fastText embeddings with orthogonal mappings, they report 81% English–Spanish word translation accuracy but only 2% for the more distant English–Japanese.

While recent work challenges the orthogonal assumption (Doval et al., 2018; Joulin et al., 2018; Jawanpuria et al., 2019), we focus on whether simple preprocessing techniques can improve the suitability of orthogonal models. Our iterative method normalizes monolingual embeddings to make their structures more similar (Figure 1), which improves subsequent alignment.

Our method is motivated by two desired properties of monolingual embeddings that support orthogonal alignment:

1. Every word vector has the same length.

2. Each language's mean vector has the same length.

Standard preprocessing such as dimension-wise mean centering and length normalization (Artetxe et al., 2016) does not meet the two requirements at the same time. Our analysis leads to Iterative Normalization, an alternating projection algorithm that normalizes any word embedding to provably satisfy both conditions. After normalizing the monolingual embeddings, we apply mapping-based clwe algorithms to the transformed embeddings.

We empirically validate our theory by combining Iterative Normalization with three mapping-based clwe methods. Iterative Normalization improves word translation accuracy on a dictionary induction benchmark across thirty-nine language pairs.

2 Learning Orthogonal Mappings

This section reviews how orthogonal cross-lingual mappings between word embeddings are learned and, along the way, introduces our notation.

We start with pre-trained word embeddings in a source language and a target language. We assume all embeddings are $d$-dimensional and that the two languages have the same vocabulary size $n$ (word translation benchmarks use the same assumptions). Let $\mathbf{X} \in \mathbb{R}^{d \times n}$ be the word embedding matrix for the source language, where each column $\mathbf{x}_i$ is the representation of the $i$-th word from the source language, and let $\mathbf{Y} \in \mathbb{R}^{d \times n}$ be the word embedding matrix for the target language. Our goal is to learn a transformation matrix $\mathbf{W} \in \mathbb{R}^{d \times d}$ that maps the source language vectors to the target language space. While our experiments focus on the supervised case with a seed dictionary $\mathcal{D}$ of translation pairs $(i, j)$, the analysis also applies to unsupervised projection.

One straightforward way to learn $\mathbf{W}$ is to minimize the Euclidean distances between translation pairs (Mikolov et al., 2013a). Formally, we solve:

$\min_{\mathbf{W} \in \mathbb{R}^{d \times d}} \sum_{(i,j) \in \mathcal{D}} \|\mathbf{W}\mathbf{x}_i - \mathbf{y}_j\|_2^2.$   (1)

Xing et al. (2015) further restrict $\mathbf{W}$ to orthogonal transformations; i.e., $\mathbf{W}^\top \mathbf{W} = \mathbf{I}$. The orthogonal constraint significantly improves word translation accuracy (Artetxe et al., 2016). However, this method still fails for some language pairs, because word embeddings are not isomorphic across languages. To improve orthogonal alignment between non-isomorphic embedding spaces, we aim to transform monolingual embeddings in a way that helps the subsequent orthogonal mapping.
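To make the contrast concrete, here is a minimal numpy sketch (ours, not the authors' code) of both mappings: the unconstrained least-squares solution to Equation (1) and the orthogonal solution of Xing et al. (2015), computed in closed form via singular value decomposition. The matrices `X` and `Y` are assumed to hold the word vectors of seed translation pairs as columns, following the notation above.

```python
import numpy as np

def least_squares_map(X_src, Y_tgt):
    """Unconstrained linear map minimizing Eq. (1): sum_i ||W x_i - y_i||^2.

    X_src, Y_tgt: d x m matrices whose i-th columns form a translation pair.
    """
    # Solve W X = Y in the least-squares sense (W = Y X^+).
    return Y_tgt @ np.linalg.pinv(X_src)

def orthogonal_map(X_src, Y_tgt):
    """Orthogonal map (Procrustes solution): W = U V^T from the SVD of Y X^T."""
    U, _, Vt = np.linalg.svd(Y_tgt @ X_src.T)
    return U @ Vt

# Toy check with random "embeddings" (illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5000))                     # source seed vectors, one per column
W_true = np.linalg.qr(rng.normal(size=(300, 300)))[0]  # a random orthogonal matrix
Y = W_true @ X                                       # a perfectly isomorphic target space
print(np.allclose(orthogonal_map(X, Y), W_true))     # True: exact recovery in the ideal case
```

In this toy check the target space is an exact rotation of the source space, so the orthogonal solution recovers the true mapping; real embedding pairs are only approximately isomorphic, which is precisely the failure mode addressed in the following sections.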

3 When Orthogonal Mappings Work

When are two embedding spaces easily aligned? A good orthogonal mapping is more likely if word vectors have two properties: length-invariance and center-invariance.

Length-Invariance.

First, all word vectors should have the same, constant length. Length-invariance resolves inconsistencies between the monolingual word embedding and cross-lingual mapping objectives (Xing et al., 2015). During training, popular word embedding algorithms (Mikolov et al., 2013b; Pennington et al., 2014; Bojanowski et al., 2017) maximize dot products between similar words but are evaluated on cosine similarity. To make things worse, the transformation matrix minimizes a third metric, Euclidean distance (Equation 1). This inconsistency is naturally resolved when the lengths of word vectors are fixed. Suppose $\mathbf{u}$ and $\mathbf{v}$ both have length $c$; then

$\|\mathbf{u} - \mathbf{v}\|_2^2 = \|\mathbf{u}\|_2^2 + \|\mathbf{v}\|_2^2 - 2\,\mathbf{u}^\top \mathbf{v} = 2c^2 - 2c^2 \cos(\mathbf{u}, \mathbf{v}).$

Minimizing Euclidean distance is therefore equivalent to maximizing both dot product and cosine similarity when word vectors have constant length, making the objectives consistent.

Length-invariance also satisfies a prerequisite for bilingual orthogonal alignment: the embeddings of translation pairs should have the same length. If a source word vector $\mathbf{x}_i$ can be aligned to its target language translation $\mathbf{y}_j$ with an orthogonal matrix $\mathbf{W}$, i.e., $\mathbf{y}_j = \mathbf{W}\mathbf{x}_i$, then

$\|\mathbf{y}_j\|_2 = \|\mathbf{W}\mathbf{x}_i\|_2 = \|\mathbf{x}_i\|_2,$   (2)

where the second equality follows from the orthogonality of $\mathbf{W}$. Equation (2) is trivially satisfied if all vectors have the same length. In summary, length-invariance not only promotes consistency between the monolingual word embedding and cross-lingual mapping objectives but also simplifies translation pair alignment.
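A two-line numerical check (our illustration, not from the paper) of the fact used above: an orthogonal matrix preserves vector lengths and dot products, which is also the property behind the center-invariance argument in the next paragraph.

```python
import numpy as np

rng = np.random.default_rng(1)
W = np.linalg.qr(rng.normal(size=(300, 300)))[0]   # a random orthogonal matrix
x, u = rng.normal(size=(2, 300))

print(np.isclose(np.linalg.norm(W @ x), np.linalg.norm(x)))  # True: lengths preserved
print(np.isclose((W @ x) @ (W @ u), x @ u))                  # True: dot products preserved
```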

Center-Invariance.

Our second condition is that the mean vectors of the two languages should have the same length, which we prove is a prerequisite for orthogonal alignment. Suppose two embedding matrices $\mathbf{X}$ and $\mathbf{Y}$ can be aligned with an orthogonal matrix $\mathbf{W}$ such that $\mathbf{Y} = \mathbf{W}\mathbf{X}$. Let $\bar{\mathbf{x}} = \frac{1}{n}\sum_{i=1}^{n} \mathbf{x}_i$ and $\bar{\mathbf{y}} = \frac{1}{n}\sum_{i=1}^{n} \mathbf{y}_i$ be the mean vectors. Then $\bar{\mathbf{y}} = \mathbf{W}\bar{\mathbf{x}}$. Since $\mathbf{W}$ is orthogonal,

$\|\bar{\mathbf{y}}\|_2 = \|\mathbf{W}\bar{\mathbf{x}}\|_2 = \|\bar{\mathbf{x}}\|_2.$

In other words, orthogonal mappings can only align embedding spaces with equal-magnitude centers.

A stronger version of center-invariance is zero-mean, where the mean vector of each language is zero. Artetxe et al. (2016) find that centering improves dictionary induction; our analysis provides an explanation.

Method               Normalization   ja    zh    hi    tr    da    de    es
Procrustes           None            1.7   32.5  33.3  44.9  54.0  73.5  81.4
Procrustes           c+l             12.3  41.1  34.0  46.5  54.9  74.6  81.3
Procrustes           in              44.3  44.2  36.7  48.7  58.4  75.5  81.5
Procrustes + refine  None            1.7   32.5  33.6  46.3  56.8  74.3  81.9
Procrustes + refine  c+l             13.1  42.3  34.9  48.7  59.3  75.2  82.4
Procrustes + refine  in              44.3  44.2  37.7  51.7  60.9  76.0  82.5
rcsls                None            14.6  17.1  5.0   18.3  19.2  43.6  50.5
rcsls                c+l             16.1  45.1  36.2  50.7  58.3  77.5  83.6
rcsls                in              56.3  48.6  38.0  52.4  60.5  78.1  83.9
Table 1: Word translation accuracy aligning English embeddings to seven languages. We combine three normalizations—no normalization (None), mean centering and length normalization (c+l), and Iterative Normalization (in) for five rounds—with three clwes: Procrustes, Procrustes with refinement (Conneau et al., 2018), and rcsls (Joulin et al., 2018). Procrustes with c+l is equivalent to Artetxe et al. (2016). The best result for each clwe in each column is in bold. Iterative Normalization has the best accuracy of the three normalization techniques.

4 Iterative Normalization

We now develop Iterative Normalization, which transforms monolingual word embeddings to satisfy both length-invariance and center-invariance. Specifically, we normalize word embeddings to simultaneously have unit length and zero mean. Formally, we produce an embedding matrix whose columns $\mathbf{x}_i$ satisfy

$\|\mathbf{x}_i\|_2 = 1 \quad \text{for all } i,$   (3)

and

$\frac{1}{n} \sum_{i=1}^{n} \mathbf{x}_i = \mathbf{0}.$   (4)

Iterative Normalization transforms the embeddings to make them satisfy both constraints at the same time. Let $\mathbf{x}_i^{(0)}$ be the initial embedding for word $i$. We assume that all word embeddings are non-zero (for zero vectors, a small perturbation is an easy fix). In each iteration, we transform every word vector by first making the vectors unit length,

$\mathbf{x}_i \leftarrow \frac{\mathbf{x}_i}{\|\mathbf{x}_i\|_2},$   (5)

and then making them mean zero,

$\mathbf{x}_i \leftarrow \mathbf{x}_i - \frac{1}{n} \sum_{j=1}^{n} \mathbf{x}_j.$   (6)

Equations (5) and (6) project the embedding matrix onto the sets of embeddings that satisfy Equations (3) and (4), respectively. Therefore, our method is a form of alternating projection (Bauschke and Borwein, 1996), an algorithm that finds a point in the intersection of two closed sets by alternately projecting onto each set. Alternating projection converges to a point in the intersection of two convex sets at a linear rate (Gubin et al., 1967; Bauschke and Borwein, 1993). Unfortunately, the unit-length constraint is non-convex, ruling out the classic convergence proof. Nonetheless, we use recent results on alternating non-convex projections (Zhu and Li, 2018) to prove Iterative Normalization's convergence (details in Appendix A).
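The procedure is short enough to state in code. The following is a minimal numpy sketch of Iterative Normalization as described by Equations (5) and (6); the function name and defaults are ours, not the authors' released implementation.

```python
import numpy as np

def iterative_normalization(X, n_rounds=5, eps=1e-12):
    """Alternate the projections of Eq. (5) (unit length) and Eq. (6) (zero mean).

    X: d x n embedding matrix with one word vector per column.
    Returns an embedding whose columns are approximately unit length and whose
    mean column is approximately zero (exactly zero after the final centering).
    """
    X = np.array(X, dtype=np.float64, copy=True)
    for _ in range(n_rounds):
        # Eq. (5): scale every column to unit length.
        X /= np.maximum(np.linalg.norm(X, axis=0, keepdims=True), eps)
        # Eq. (6): subtract the mean column from every column.
        X -= X.mean(axis=1, keepdims=True)
    return X
```

Five rounds match the setting used in the experiments below; both constraint violations shrink quickly as the number of rounds grows.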

Theorem 1.

If the embeddings are non-zero after each iteration, i.e., $\mathbf{x}_i^{(t)} \neq \mathbf{0}$ for all $i$ and $t$, then the sequence of embeddings produced by Iterative Normalization converges.

All embeddings in our experiments satisfy the non-zero assumption; it is violated only when all words have the same embedding. In degenerate cases, the algorithm might converge to a solution that does not meet the two requirements. Empirically, our method always satisfies both constraints.

Previous approach and differences.

Artetxe et al. (2016) also study the unit-length and zero-mean constraints, but our work differs in two aspects. First, they motivate the zero-mean condition based on the heuristic argument that two randomly selected word types should not be semantically similar (or dissimilar) in expectation. While this statement is attractive at first blush, some word types have more synonyms than others, so we argue that word types might not be evenly distributed in the semantic space. We instead show that zero-mean is helpful because it satisfies center-invariance, a necessary condition for orthogonal mappings. Second, Artetxe et al. (2016) attempt to enforce the two constraints with a single round of dimension-wise mean centering and length normalization. Unfortunately, this often fails to meet both constraints at the same time: length normalization can change the mean, and mean centering can change vector lengths. In contrast, Iterative Normalization simultaneously meets both constraints and is empirically better on dictionary induction (Table 1).
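The difference is easy to see numerically. The toy data below (our construction, not from the paper) contains two groups of words with different typical vector lengths; a single pass of centering and length normalization leaves a sizable residual mean, while a few alternating rounds shrink both violations.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy embeddings: two groups of words whose vectors differ in typical
# length and direction (loosely mimicking frequent vs. rare words).
A = 5.0 * rng.normal(size=(300, 1000)) + 1.0
B = rng.normal(size=(300, 1000)) - 1.0
X = np.concatenate([A, B], axis=1)

def violations(Z):
    """Max deviation of column lengths from 1, and norm of the mean column."""
    return (np.abs(np.linalg.norm(Z, axis=0) - 1).max(),
            np.linalg.norm(Z.mean(axis=1)))

# One pass of dimension-wise mean centering then length normalization (c+l):
Z = X - X.mean(axis=1, keepdims=True)
Z = Z / np.linalg.norm(Z, axis=0, keepdims=True)
print(violations(Z))   # lengths are exactly 1, but the mean is clearly nonzero

# Five rounds of Iterative Normalization (Equations (5) and (6)):
Z = X.copy()
for _ in range(5):
    Z = Z / np.linalg.norm(Z, axis=0, keepdims=True)   # Eq. (5)
    Z = Z - Z.mean(axis=1, keepdims=True)              # Eq. (6)
print(violations(Z))   # both violations are now much smaller
```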

5 Dictionary Induction Experiments

On a dictionary induction benchmark, we combine Iterative Normalization with three clwe methods and show improvement in word translation accuracy across languages.

5.1 Dataset and Methods

We train and evaluate clwe on muse dictionaries (Conneau et al., 2018) with the default split. We align English embeddings to thirty-nine target language embeddings, pre-trained on Wikipedia with fastText (Bojanowski et al., 2017). The alignment matrices are trained from dictionaries of 5,000 source words. We report top-1 word translation accuracy for 1,500 source words, using cross-domain similarity local scaling (csls; Conneau et al., 2018). We experiment with the following clwe methods (we only report accuracy for one run, because these clwe methods are deterministic).
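For reference, csls scores a candidate translation by its cosine similarity, penalized by how hub-like both words are (their average similarity to their k nearest neighbors in the other language). Below is a compact numpy sketch of csls retrieval, ours rather than the muse implementation, written for clarity rather than memory efficiency.

```python
import numpy as np

def csls_translate(WX, Y, k=10):
    """Rank target words for each source word by CSLS (Conneau et al., 2018).

    WX: d x m mapped source vectors; Y: d x n target vectors.
    Returns, for each source word, the index of its top-1 target word.
    """
    # Work with unit-length columns so that dot products are cosines.
    WX = WX / np.linalg.norm(WX, axis=0, keepdims=True)
    Y = Y / np.linalg.norm(Y, axis=0, keepdims=True)
    cos = WX.T @ Y                                        # m x n cosine matrix
    # Mean cosine of each mapped source word to its k nearest targets.
    r_src = np.sort(cos, axis=1)[:, -k:].mean(axis=1)     # shape (m,)
    # Mean cosine of each target word to its k nearest mapped sources.
    r_tgt = np.sort(cos, axis=0)[-k:, :].mean(axis=0)     # shape (n,)
    csls = 2 * cos - r_src[:, None] - r_tgt[None, :]
    return csls.argmax(axis=1)
```

The default k = 10 matches the ten nearest neighbors mentioned in Section 5.2; real implementations compute the m x n similarity matrix in batches.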

Procrustes Analysis.

Our first algorithm uses Procrustes analysis (Schönemann, 1966) to find the orthogonal transformation that minimizes Equation (1), the total distance between translation pairs.

Post-hoc Refinement.

Orthogonal mappings can be improved with refinement steps (Artetxe et al., 2017; Conneau et al., 2018). After learning an initial mapping $\mathbf{W}$ from the seed dictionary $\mathcal{D}$, we build a synthetic dictionary $\mathcal{D}_1$ by translating each word with $\mathbf{W}$. We then use the new dictionary $\mathcal{D}_1$ to learn a new mapping and repeat the process.
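A rough sketch of this refinement loop (ours; the muse code restricts the synthetic dictionary to the most frequent words and keeps only mutual csls nearest neighbors, while this sketch uses plain nearest neighbors for brevity):

```python
import numpy as np

def procrustes(X, Y):
    """Orthogonal W minimizing sum_i ||W x_i - y_i||^2 (Schonemann, 1966)."""
    U, _, Vt = np.linalg.svd(Y @ X.T)
    return U @ Vt

def refine(X_all, Y_all, src_idx, tgt_idx, n_steps=5, top=5000):
    """Re-estimate W from synthetic dictionaries built with the current W.

    X_all, Y_all: d x n embedding matrices (columns assumed sorted by frequency).
    src_idx, tgt_idx: index arrays of the seed translation pairs.
    """
    W = procrustes(X_all[:, src_idx], Y_all[:, tgt_idx])
    Yn = Y_all[:, :top] / np.linalg.norm(Y_all[:, :top], axis=0, keepdims=True)
    for _ in range(n_steps):
        # Translate the `top` most frequent source words with the current W.
        mapped = W @ X_all[:, :top]
        mapped /= np.linalg.norm(mapped, axis=0, keepdims=True)
        synth = (mapped.T @ Yn).argmax(axis=1)        # synthetic dictionary
        W = procrustes(X_all[:, :top], Y_all[:, synth])
    return W
```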

Relaxed csls Loss (rcsls).

Joulin et al. (2018) optimize csls scores between translation pairs instead of Equation (1). rcsls has state-of-the-art supervised word translation accuracy on muse (Glavas et al., 2019). For ease of optimization, rcsls does not enforce the orthogonal constraint. Nevertheless, Iterative Normalization also improves its accuracy (Table 1), showing that it can help linear non-orthogonal mappings too.

5.2 Training Details

We use the implementation from muse for Procrustes analysis and refinement (Conneau et al., 2018), with five refinement steps. For rcsls, we follow the hyperparameter selection strategy of Joulin et al. (2018) and choose the learning rate and the number of epochs by validation. As they recommend, we turn off the spectral constraint. We use ten nearest neighbors when computing csls.

5.3 Translation Accuracy

For each method, we compare three normalization strategies: (1) no normalization, (2) dimension-wise mean centering followed by length normalization (Artetxe et al., 2016), and (3) five rounds of Iterative Normalization. Table 1 shows word translation accuracies on seven selected target languages. Results on other languages are in Appendix B.

As our theory predicts, Iterative Normalization increases translation accuracy for Procrustes analysis (with and without refinement) across languages. While mean centering and length normalization also help, the improvement is smaller, confirming that one round of normalization is insufficient. The largest margin is on English-Japanese, where Iterative Normalization increases test accuracy by more than forty percentage points. Figure 1 shows an example of how Iterative Normalization makes the substructure of an English-Japanese translation pair more similar.

Surprisingly, normalization is even more important for rcsls, a clwe method without orthogonal constraint. rcsls combined with Iterative Normalization has state-of-the-art accuracy, but rcsls is much worse than Procrustes analysis on unnormalized embeddings, suggesting that length-invariance and center-invariance are also helpful for learning linear non-orthogonal mappings.

Dataset Before After
ws-353 73.9 73.7
mc 81.2 83.9
rg 79.7 80.0
yp-130 53.3 57.6
Table 2: Correlations before and after applying Iterative Normalization on four English word similarity benchmarks: ws-353 (Finkelstein et al., 2002), mc (Miller and Charles, 1991), rg (Rubenstein and Goodenough, 1965), and yp-130 (Yang and Powers, 2006). The scores are similar, which shows that Iterative Normalization retains useful structures from the original embeddings.

5.4 Monolingual Word Similarity

Many trivial solutions satisfy both length-invariance and center-invariance; e.g., we can map half of the words to $\mathbf{v}$ and the rest to $-\mathbf{v}$, where $\mathbf{v}$ is any unit-length vector. A meaningful transformation should also preserve useful structure in the original embeddings. We confirm Iterative Normalization does not hurt scores on English word similarity benchmarks (Table 2), showing that Iterative Normalization produces meaningful representations.
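For completeness, here is a small sketch of how such a word-similarity score can be computed (ours; we assume Spearman correlation, the usual metric for these benchmarks, and a simple in-memory format for the embeddings and the benchmark pairs).

```python
import numpy as np
from scipy.stats import spearmanr

def word_similarity(emb, pairs):
    """Spearman correlation between human scores and cosine similarities.

    emb: dict mapping word -> numpy vector; pairs: iterable of (w1, w2, score).
    Pairs with out-of-vocabulary words are skipped.
    """
    gold, pred = [], []
    for w1, w2, score in pairs:
        if w1 in emb and w2 in emb:
            u, v = emb[w1], emb[w2]
            pred.append(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
            gold.append(score)
    return spearmanr(gold, pred).correlation
```

Running this on the same vocabulary before and after Iterative Normalization gives a before/after comparison in the spirit of Table 2.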

6 Conclusion

We identify two conditions that make cross-lingual orthogonal mapping easier: length-invariance and center-invariance, and provide a simple algorithm that transforms monolingual embeddings to satisfy both conditions. Our method improves word translation accuracy of different mapping-based clwe algorithms across languages. In the future, we will investigate whether our method helps other downstream tasks.

Acknowledgments

We thank the anonymous reviewers for comments. Boyd-Graber and Zhang are supported by DARPA award HR0011-15-C-0113 under subcontract to Raytheon BBN Technologies. Jegelka and Xu are supported by NSF CAREER award 1553284. Xu is also supported by a Chevron-MIT Energy Fellowship. Kawarabayashi is supported by JST ERATO JPMJER1201 and JSPS Kakenhi JP18H05291. Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsors.


Appendix A Proof for Theorem 1

Our convergence analysis is based on a recent result on alternating non-convex projections. Theorem 1 in the work of  Zhu and Li (2018) states that the convergence of alternating projection holds even if the constraint sets are non-convex, as long as the two constraint sets satisfy the following assumption:

Assumption 1.

Let $\mathcal{X}$ and $\mathcal{Y}$ be any two closed semi-algebraic sets, and let $\{(\mathbf{X}^{(t)}, \mathbf{Y}^{(t)})\}$ be the sequence of iterates generated by the alternating projection method (e.g., Iterative Normalization), where $\mathbf{Y}^{(t)} = P_{\mathcal{Y}}(\mathbf{X}^{(t)})$ and $\mathbf{X}^{(t+1)} = P_{\mathcal{X}}(\mathbf{Y}^{(t)})$. Assume the sequence is bounded and the sets $\mathcal{X}$ and $\mathcal{Y}$ obey the following properties:

  1. three-point property of $\mathcal{Y}$: there exists a nonnegative function $\delta_\alpha: \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}$ with $\alpha > 0$ such that for any $\mathbf{Y} \in \mathcal{Y}$ and any $t$, we have

     $\delta_\alpha(\mathbf{Y}, \mathbf{Y}^{(t)}) \geq \alpha \|\mathbf{Y} - \mathbf{Y}^{(t)}\|_F^2$

     and

     $\delta_\alpha(\mathbf{Y}, \mathbf{Y}^{(t)}) \leq \|\mathbf{Y} - \mathbf{X}^{(t)}\|_F^2 - \|\mathbf{Y}^{(t)} - \mathbf{X}^{(t)}\|_F^2;$

  2. local contraction property of $\mathcal{X}$: there exist $\epsilon > 0$ and $\beta > 0$ such that when $\|\mathbf{Y}^{(t)} - \mathbf{Y}^{(t-1)}\|_F \leq \epsilon$, we have

     $\|P_{\mathcal{X}}(\mathbf{Y}^{(t)}) - P_{\mathcal{X}}(\mathbf{Y}^{(t-1)})\|_F \leq \beta\, \|\mathbf{Y}^{(t)} - \mathbf{Y}^{(t-1)}\|_F,$

     where $P_{\mathcal{X}}$ is the projection onto $\mathcal{X}$.

Zhu and Li (2018) only consider sets of vectors, but our constraint sets are sets of matrices. For ease of exposition, we treat every embedding matrix $\mathbf{X} \in \mathbb{R}^{d \times n}$ as a vector by concatenating its columns: $\mathrm{vec}(\mathbf{X}) = [\mathbf{x}_1^\top, \ldots, \mathbf{x}_n^\top]^\top$. The $\ell_2$-norm of the concatenated vector equals the Frobenius norm $\|\mathbf{X}\|_F$ of the original matrix.

The two operations in Iterative Normalization, Equations (5) and (6), are projections onto two constraint sets: the unit-length set $\mathcal{Y}$ and the zero-mean set $\mathcal{X}$. To prove convergence of Iterative Normalization, we show that $\mathcal{Y}$ satisfies the three-point property and that $\mathcal{X}$ satisfies the local contraction property.

Three-point property of $\mathcal{Y}$.

For any $\mathbf{Y} \in \mathcal{Y}$ and any iterate $\mathbf{X}^{(t)}$, let $\mathbf{Y}^{(t)} = P_{\mathcal{Y}}(\mathbf{X}^{(t)})$ be the projection of $\mathbf{X}^{(t)}$ onto the constraint set $\mathcal{Y}$ with Equation (5). The columns of $\mathbf{Y}$ and $\mathbf{Y}^{(t)}$ have the same (unit) length, so we have

$\|\mathbf{Y} - \mathbf{X}^{(t)}\|_F^2 - \|\mathbf{Y}^{(t)} - \mathbf{X}^{(t)}\|_F^2 = 2\,\langle \mathbf{Y}^{(t)} - \mathbf{Y},\, \mathbf{X}^{(t)} \rangle.$   (7)

Since $\mathbf{Y}^{(t)}$ is the projection of $\mathbf{X}^{(t)}$ onto the unit-length set with Equation (5), i.e., $\mathbf{y}_i^{(t)} = \mathbf{x}_i^{(t)} / \|\mathbf{x}_i^{(t)}\|_2$, we can rewrite Equation (7):

$2\,\langle \mathbf{Y}^{(t)} - \mathbf{Y},\, \mathbf{X}^{(t)} \rangle = 2 \sum_{i=1}^{n} \|\mathbf{x}_i^{(t)}\|_2 \left(1 - \mathbf{y}_i^\top \mathbf{y}_i^{(t)}\right).$   (8)

All columns of $\mathbf{Y}$ and $\mathbf{Y}^{(t)}$ are unit-length, so $1 - \mathbf{y}_i^\top \mathbf{y}_i^{(t)} = \frac{1}{2}\|\mathbf{y}_i - \mathbf{y}_i^{(t)}\|_2^2$, and we can further rewrite Equation (8):

$2 \sum_{i=1}^{n} \|\mathbf{x}_i^{(t)}\|_2 \left(1 - \mathbf{y}_i^\top \mathbf{y}_i^{(t)}\right) = \sum_{i=1}^{n} \|\mathbf{x}_i^{(t)}\|_2\, \|\mathbf{y}_i - \mathbf{y}_i^{(t)}\|_2^2.$

Let $c_t$ be the minimum length of the columns in $\mathbf{X}^{(t)}$. We have the following inequality:

$\sum_{i=1}^{n} \|\mathbf{x}_i^{(t)}\|_2\, \|\mathbf{y}_i - \mathbf{y}_i^{(t)}\|_2^2 \geq c_t \sum_{i=1}^{n} \|\mathbf{y}_i - \mathbf{y}_i^{(t)}\|_2^2 = c_t\, \|\mathbf{Y} - \mathbf{Y}^{(t)}\|_F^2.$

From our non-zero assumption, the minimum column length $c_t$ of the embedding matrix is always positive. It follows that $\mathcal{Y}$ satisfies the three-point property with $\alpha = c_t$ and $\delta_\alpha(\mathbf{Y}, \mathbf{Y}^{(t)}) = \|\mathbf{Y} - \mathbf{X}^{(t)}\|_F^2 - \|\mathbf{Y}^{(t)} - \mathbf{X}^{(t)}\|_F^2$.

Local contraction property of $\mathcal{X}$.

The zero-mean constraint set $\mathcal{X}$ is convex and closed: if two matrices $\mathbf{X}_1$ and $\mathbf{X}_2$ both have zero mean, their linear interpolation $\lambda \mathbf{X}_1 + (1 - \lambda) \mathbf{X}_2$ also has zero mean for any $\lambda \in [0, 1]$. Projections onto convex sets in a Hilbert space are contractive (Browder, 1967), and therefore $\mathcal{X}$ satisfies the local contraction property with any positive $\epsilon$ and $\beta = 1$.

In summary, the two constraint sets that Iterative Normalization projects onto satisfy Assumption 1. Therefore, Iterative Normalization converges following the analysis of Zhu and Li (2018).

Appendix B Results on All Languages

Table 3 shows word translation accuracies on all target languages. Iterative Normalization improves accuracy on all languages.

Procrustes Procrustes + refine rcsls
Target None c+l in None c+l in None c+l in
af 26.3 28.3 29.7 27.7 28.7 30.4 9.3 28.6 29.3
ar 36.5 37.1 37.9 36.5 37.1 37.9 18.4 40.5 41.5
bs 22.3 23.5 24.4 23.3 23.9 26.6 5.4 25.5 26.6
ca 65.9 67.6 68.9 66.5 67.6 68.9 43.0 68.9 69.5
cs 54.0 54.7 55.3 54.0 54.7 55.7 29.9 57.8 58.2
da 54.0 54.9 58.4 56.8 59.3 60.9 19.2 58.3 60.5
de 73.5 74.6 75.5 74.3 75.2 76.0 43.6 77.5 78.1
el 44.0 44.9 47.5 44.6 45.9 47.9 14.0 47.1 48.5
es 81.4 81.3 81.5 81.9 82.1 82.5 50.5 83.6 83.9
et 31.9 34.5 36.1 31.9 35.3 36.4 8.1 37.3 39.4
fa 33.1 33.7 37.3 33.1 34.1 37.3 5.9 37.5 38.3
fi 47.6 48.5 50.9 47.6 50.1 51.1 20.9 52.3 53.3
fr 81.1 81.3 81.7 82.1 82.7 82.4 53.1 83.9 83.9
he 40.2 43.1 43.7 40.2 43.1 43.7 13.1 49.7 50.1
hi 33.3 34.0 36.7 33.6 34.9 37.7 5.0 36.2 38.0
hr 37.0 37.8 40.2 37.6 37.8 40.2 14.5 41.1 42.6
hu 51.8 54.1 55.5 53.3 54.1 56.1 11.7 57.3 58.2
id 65.6 65.7 67.9 67.7 68.4 70.3 24.8 68.9 70.0
it 76.2 76.6 76.6 77.5 78.1 78.1 48.4 78.8 79.1
ja 1.7 13.1 44.3 1.7 13.1 44.3 14.6 16.1 56.3
ko 31.5 32.1 33.9 31.5 32.1 33.9 6.4 37.5 37.5
lt 22.5 22.8 23.2 22.5 22.8 23.3 7.6 23.3 23.5
lv 23.6 24.9 26.1 23.6 24.9 26.1 10.1 28.3 28.7
ms 44.0 45.4 48.9 46.5 48.3 51.1 19.9 49.1 50.2
nl 72.8 73.7 74.1 73.8 75.1 75.8 46.7 75.6 75.8
pl 58.2 60.2 60.1 58.5 60.2 60.4 39.4 62.4 62.5
pt 79.5 79.7 79.9 79.9 81.0 81.2 63.1 81.1 81.7
ro 58.1 60.5 61.8 59.9 60.5 62.5 27.1 61.9 63.3
ru 51.7 52.1 52.1 51.7 52.1 52.1 26.6 57.1 57.9
sk 38.0 39.3 40.4 38.0 39.3 41.7 13.3 41.5 42.3
sl 32.5 34.3 36.7 32.5 34.4 36.7 12.3 36.0 37.9
sq 23.5 25.1 27.3 23.5 25.1 27.3 4.4 26.5 27.3
sv 58.7 59.6 60.7 60.9 61.2 62.6 35.6 63.8 63.9
ta 15.1 15.5 16.8 15.1 15.5 17.7 6.7 16.3 17.1
th 22.5 23.3 22.9 22.5 23.3 22.9 9.4 23.7 23.9
tr 44.9 46.5 48.7 46.3 48.7 51.7 18.3 50.7 52.4
uk 34.8 35.9 36.3 35.5 35.9 36.5 18.8 40.7 40.8
vi 41.3 42.1 43.7 42.1 42.7 44.2 14.2 43.3 43.9
zh 32.5 42.3 44.2 32.5 42.3 44.2 17.1 45.1 48.6
Average 44.7 46.3 48.4 45.3 47.0 49.1 21.8 49.0 50.9
Table 3: Word translation accuracy aligning English embeddings to thirty-nine languages. We combine three normalizations—no normalization (None), mean centering and length normalization (c+l), and Iterative Normalization (in) for five rounds—with three clwes: Procrustes, Procrustes with refinement (Conneau et al., 2018), and rcsls (Joulin et al., 2018). Procrustes with c+l is equivalent to Artetxe et al. (2016). The best result for each clwe in each column is in bold. Iterative Normalization has the best accuracy of the three normalization techniques.