Cross-Lingual Alignment of Contextual Word Embeddings, with Applications to Zero-shot Dependency Parsing


Tal Schuster    Ori Ram    Regina Barzilay    Amir Globerson

Computer Science and Artificial Intelligence Lab, MIT
Tel Aviv University
{tals, regina},    {ori.ram, gamir}
   Equal contribution

We introduce a novel method for multilingual transfer that utilizes deep contextual embeddings, pretrained in an unsupervised fashion. While contextual embeddings have been shown to yield richer representations of meaning compared to their static counterparts, aligning them poses a challenge due to their dynamic nature. To this end, we construct context-independent variants of the original monolingual spaces and utilize their mapping to derive an alignment for the context-dependent spaces. This mapping readily supports processing of a target language, improving transfer through context-aware embeddings. Our experimental results demonstrate the effectiveness of this approach for zero-shot and few-shot learning of dependency parsing. Specifically, our method consistently outperforms the previous state-of-the-art on 6 target languages, yielding an improvement of 6.8 LAS points on average. Code and models are publicly available.


1 Introduction

Multilingual embedding spaces have been demonstrated to be a promising means for enabling cross-lingual transfer across multiple natural language processing tasks (e.g. Ammar et al. (2016); Lample et al. (2018)). Similar to how universal part-of-speech tags enabled parsing transfer across languages (Petrov et al., 2012), multilingual word embeddings further improve transfer capacity by enriching models with lexical information. Since this lexical representation is learned in an unsupervised fashion and thus can leverage large amounts of raw data, it can capture a more nuanced representation of meaning than unlexicalized transfer. Naturally, this enrichment is translated into improved transfer accuracy, especially in low-resource scenarios (Guo et al., 2015).

In this paper, we are moving further along this line and exploring the use of contextual word embeddings for multilingual transfer. These representations have demonstrated their benefits across a range of NLP applications (Peters et al., 2018). By dynamically linking words to their various contexts, they provide a richer semantic and syntactic representation than traditional token-based word embeddings. A straightforward way to utilize this richer representation is to directly apply existing transfer algorithms on the contextual embeddings instead of their static counterparts. In this case, however, each token pair is represented by many different vectors corresponding to its specific context. Even when supervision is available in the form of a dictionary, it is still unclear how to utilize this information for multiple contextual embeddings that correspond to a word translation pair.

In this paper, we propose a simple, but effective, mechanism for constructing a multilingual space of contextual embeddings. Instead of learning the alignment in the original, complex contextual space, we drive the mapping process by context-independent embedding anchors. We obtain these anchors by factorizing the contextual embedding space into context-independent and context-dependent parts. Operating at the anchor level not only compresses the space, but also enables us to utilize a word-level bilingual dictionary as a source of supervision, if available. Once the anchor-level alignment is learned, it can be readily applied to map the original spaces with contextual embeddings.

Clearly, the value of word embeddings depends on their quality, which is determined by the amount of raw data available for their training (Jiang et al., 2018). We are interested in expanding the above approach to the truly low-resource scenario, where a language not only lacks annotations, but also has limited amounts of raw data. In this case, we can also rely on a data rich language to stabilize monolingual alignments for the resource-limited language. As above, context-independent anchors are informing this process. Specifically, we introduce an alignment component to the loss function of the language model, pushing the anchors to be closer in the joint space. While this augmentation is performed on the static anchors, the benefit extends to the contextual embeddings space in which we operate.

We evaluate our aligned contextual embeddings on the task of zero-shot cross-lingual dependency parsing. Our model consistently outperforms previous transfer methods, yielding an absolute improvement of 6.8 LAS points over the prior state-of-the-art (Ammar et al., 2016). We also perform comprehensive studies of simplified variants of our model. Even without POS tag labeling or a dictionary, our model performs on par with context-independent models that do use such supervision. Our results also demonstrate the benefits of this approach for few-shot learning, i.e. processing languages with limited data. Specifically, on the Kazakh treebank from the recent CoNLL 2018 shared task, the model yields a 5 LAS point gain over the top result (Smith et al., 2018).

2 Related work

Multilingual Embeddings

The topic of cross-lingual embedding alignment is an active area of research (Mikolov et al., 2013; Xing et al., 2015; Dinu and Baroni, 2014; Lazaridou et al., 2015; Zhang et al., 2017). Our work most closely relates to MUSE (Conneau et al., 2018a), which constructs a multilingual space by aligning monolingual embedding spaces. When a bilingual dictionary is provided, their approach is similar to those of Smith et al. (2017) and Artetxe et al. (2017). MUSE extends these methods to the unsupervised case by constructing a synthetic dictionary. The resulting alignment achieves strong performance across a range of NLP tasks, from sequence labeling (Lin et al., 2018) to natural language inference (Conneau et al., 2018b) and machine translation (Lample et al., 2018; Qi et al., 2018). Recent works have further improved the supervised (Joulin et al., 2018) and unsupervised (Grave et al., 2018; Alvarez-Melis and Jaakkola, 2018; Hoshen and Wolf, 2018) settings for token-based embeddings.

While MUSE operates over token-based embeddings, we are interested in aligning contextual embeddings which have shown their benefits in several monolingual applications (Peters et al., 2018; McCann et al., 2017; Howard and Ruder, 2018; Radford et al., 2018; Devlin et al., 2018). However, this expansion introduces new challenges which we address in this paper.

Our work also relates to prior approaches that utilize bilingual dictionaries to improve embeddings that were trained on small datasets. For instance, Xiao and Guo (2014) represent word pairs as a mutual vector, while Adams et al. (2017) jointly train cross-lingual word embeddings by replacing the predicted word with its translation. To utilize a dictionary in the contextualized case, we include a soft constraint that pushes those translations to be similar in their context-independent representation.

Multilingual Parsing

In early work on multilingual parsing, transfer was commonly implemented using delexicalized representations such as part-of-speech tags (Petrov et al., 2012; Naseem et al., 2012; Tiedemann, 2015). Advancements in multilingual word representations opened the possibility of lexicalized transfer. Some of these approaches start by aligning monolingual embedding spaces (Zhang and Barzilay, 2015; Guo et al., 2015, 2016; Ammar et al., 2016) and use the resulting word embeddings as word representations instead of universal tags. Other approaches learn customized multilingual syntactic embeddings, bootstrapping from universal POS tags (Duong et al., 2015). In all of the above cases, token-level embeddings are used. Inspired by the strong results of contextualized embeddings in monolingual parsing (Che et al., 2018; Wang et al., 2018; Clark et al., 2018), we aim to utilize them in the multilingual transfer case. Our results demonstrate that a richer representation of the lexical space does lead to significant performance gains.

3 Aligning Contextual Word Embeddings

Anchor pairs    All Words    Homonyms
0.85            0.18         0.21
Table 1: Average cosine distances between pairs of embedding anchors (left column) and between contextualized embeddings of words and their corresponding anchor. The right column includes these distances only for homonyms, whereas the center column is averaged across all words. Only alphabetic words with at least 100 occurrences were included.

In this section we describe several approaches for aligning context-dependent embeddings from a source language $s$ to a target language $t$. We address multiple scenarios, where different amounts of supervision and data are present. Our methods are motivated by interesting properties of context-dependent embeddings, which we discuss later.

We begin with some notations:

  • Context-Dependent Embeddings: Given a context $c$ and a token $w$, we denote the embedding of $w$ in the context $c$ by $e_{w,c}$.

  • Embedding Anchor: Given a token $w$, we denote the anchor of its context-dependent embeddings by $\bar{e}_w$, where:

    $\bar{e}_w = \mathbb{E}_c\left[e_{w,c}\right]$

    In practice, we calculate the average over a subset of the available unlabeled data.

  • Shift From Mean: For any embedding $e_{w,c}$ we can therefore define the shift from the average via:

    $\hat{e}_{w,c} = e_{w,c} - \bar{e}_w$

  • Embedding Alignment: Given an embedding $e^s_{w,c}$ in the source space, we want to generate an embedding $e^{s \to t}_{w,c}$ that will lie in the target language space, using a linear mapping $W^{s \to t}$. Formally, our alignment is always of the following form:

    $e^{s \to t}_{w,c} = W^{s \to t}\, e^s_{w,c}$
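The notation above can be made concrete with a small numpy sketch (the token, vectors, and function names are illustrative, not from the paper's released code): anchors are per-token means, shifts are residuals, and alignment is a single matrix multiply.

```python
import numpy as np

# Illustrative contextual embeddings: token -> one vector per observed context.
contextual = {
    "bank": [np.array([1.0, 0.0]), np.array([0.8, 0.2]), np.array([0.0, 1.0])],
}

def anchor(vectors):
    """Embedding anchor: the mean over a token's contextual embeddings."""
    return np.mean(vectors, axis=0)

def shift_from_mean(e_wc, e_bar):
    """Context-dependent shift of a single embedding from its anchor."""
    return e_wc - e_bar

def align(e_wc, W):
    """Map an embedding into the target space with the linear mapping W."""
    return W @ e_wc

e_bar = anchor(contextual["bank"])                     # context-independent part
shift = shift_from_mean(contextual["bank"][0], e_bar)  # context-dependent part
```

Note that the same matrix `W` maps anchors and full contextual vectors alike, which is what lets an anchor-level alignment transfer to the contextual space.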


3.1 The Geometry of Context-Dependent Embeddings

Each token $w$ can generate multiple vectors $e_{w,c}$, each corresponding to a different context $c$. A key question is how the point cloud $\{e_{w,c}\}_c$ is distributed for a particular token $w$. In what follows we explore this structure, and reach several conclusions that will motivate our alignment approach. All the following experiments are performed on ELMo (Peters et al., 2018).

Figure 1: A two dimensional PCA showing examples of contextual representations for four Spanish words and their corresponding anchors presented as a star in the same color. (best viewed in color)

Point Clouds are Well Separated   Intuitively, a cloud $\{e_{w,c}\}_c$ corresponds to occurrences of the word $w$ in different contexts. Thus, we would expect its points to be closer to each other than to points from $\{e_{w',c}\}_c$ for a different word $w'$. Indeed, when measuring similarity between points $e_{w,c}$ and the average $\bar{e}_w$, we find that these are much more similar than averages of different words $\bar{e}_w$ and $\bar{e}_{w'}$ are to each other (see Table 1). This observation supports our hypothesis that anchors are a good backbone for the contextual space. Still, as previous studies have shown, and as our results indicate, the context component is very useful for downstream tasks. A visualized example of the contextualized representations of four words is given in Figure 1, demonstrating the appropriateness of their anchors.
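This separation property is easy to check numerically. The following sketch uses synthetic, illustrative "clouds" standing in for real ELMo vectors, and compares a word's average cosine distance to its own anchor against the distance between two different anchors:

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine_dist(a, b):
    return 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Synthetic point clouds standing in for two words' contextual embeddings:
# each cloud is tightly clustered around its own direction.
cloud_w1 = rng.normal([5.0, 0.0], 0.1, size=(100, 2))
cloud_w2 = rng.normal([0.0, 5.0], 0.1, size=(100, 2))
anchor_w1, anchor_w2 = cloud_w1.mean(axis=0), cloud_w2.mean(axis=0)

# Average distance of a word's points to its own anchor ...
within = np.mean([cosine_dist(v, anchor_w1) for v in cloud_w1])
# ... versus the distance between the two different anchors.
across = cosine_dist(anchor_w1, anchor_w2)
```

On real embeddings the gap is smaller than in this toy setup, but the same qualitative ordering (within-cloud distances far below anchor-to-anchor distances) is what Table 1 reports.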

Homonym Point Clouds are Multi-Modal   When a word $w$ has multiple distinct senses, we might expect the embeddings for $w$ to reflect this by separating into multiple distinct clouds, one for each meaning. Figure 2 demonstrates that this indeed happens for the English word “bear”. Furthermore, it can be seen that after alignment (Section 3.3) with Spanish, the distinct point clouds are aligned with their corresponding distinct words in Spanish.

We examined the shift from mean for a list of 250 English homonyms from Wikipedia. As Table 1 shows, the shift of those words is indeed slightly higher than that of other words. However, they still remain relatively close to their per-token mean, hence these anchors can still serve as a good approximation for learning alignments.

Figure 2: Contextual embeddings for the English word “bear” and its two possible translations in Spanish — “oso” (animal) in blue and “tener” (to have) in red. The figure shows a two dimensional PCA for the aligned space of the two languages. The symbols are the anchors, the clouds represent the distribution of the contextualized Spanish words, and the black dots are for contextualized embeddings of “bear”. The gray colored triangles show the anchors of the English words “dog”, “elephant”, “cat”, from left to right respectively.

3.2 Context-Independent Alignment

We begin by briefly reviewing previous approaches for aligning context-independent embeddings, as they are generalized in this work to the contextual case. We denote the embedding of a word $w$ by $u_w$. At first, assume we are given word pairs $(u^s_w, u^t_{D(w)})$ from a source language $s$ and a target language $t$, and we look for a mapping between those. Mikolov et al. (2013) proposed to learn a linear transformation whereby $u^t_{D(w)}$ is approximated via $W u^s_w$, for a learned matrix $W$. We focus on methods that follow this linear alignment. The alignment matrix is found by solving:

$W^{s \to t} = \operatorname{argmin}_{W \in O_d} \sum_w \left\| W u^s_w - u^t_{D(w)} \right\|^2 \qquad (4)$

where $O_d$ is the space of orthogonal $d \times d$ matrices. This constraint was proposed by Xing et al. (2015) in order to preserve inter-lingual relations.
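With the orthogonality constraint, Eq. 4 is the orthogonal Procrustes problem and has a closed-form solution via SVD. A minimal numpy sketch (function name is ours, not MUSE's):

```python
import numpy as np

def orthogonal_procrustes(X, Y):
    """Solve Eq. 4 in closed form: the W minimizing ||X W^T - Y||_F over
    orthogonal matrices is U V^T, from the SVD of Y^T X (Schoenemann, 1966).
    Rows of X are source embeddings, rows of Y their dictionary translations."""
    U, _, Vt = np.linalg.svd(Y.T @ X)
    return U @ Vt

# Sanity check on synthetic data: Y is an exact rotation of X.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
theta = 0.3
R = np.eye(4)
R[:2, :2] = [[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]
Y = X @ R.T
W = orthogonal_procrustes(X, Y)  # recovers the rotation R
```

Because the solution is constrained to be orthogonal, it preserves all pairwise angles and distances within the source space, which is exactly the property the anchor-based extension below relies on.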

For the unsupervised case (i.e. when a dictionary is absent), Conneau et al. (2018a) (MUSE) suggested learning the alignment via adversarial training, in which a discriminator is trained to distinguish between target and aligned source embeddings. Thereafter, a refinement procedure is applied iteratively, as follows. First, a dictionary is built dynamically using the current alignment, such that only frequent words are considered. Using this dictionary, the alignment matrix is then re-calculated as in the supervised case.
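The refinement loop can be sketched as follows: under the current alignment, mutual nearest neighbors form a synthetic dictionary, and the Procrustes problem is re-solved on those pairs. This is a simplified stand-in for MUSE's procedure (which additionally restricts the dictionary to frequent words and ranks candidates with CSLS rather than raw cosine):

```python
import numpy as np

def procrustes(X, Y):
    U, _, Vt = np.linalg.svd(Y.T @ X)
    return U @ Vt

def refine(src, tgt, W, n_iter=3):
    """MUSE-style refinement (simplified): build a synthetic dictionary from
    mutual nearest neighbors under the current alignment W, then re-solve
    the Procrustes problem on those pairs. Rows are assumed L2-normalized."""
    for _ in range(n_iter):
        sims = (src @ W.T) @ tgt.T            # cosine similarities
        s2t = sims.argmax(axis=1)             # best target for each source
        t2s = sims.argmax(axis=0)             # best source for each target
        mutual = [i for i in range(len(src)) if t2s[s2t[i]] == i]
        W = procrustes(src[mutual], tgt[s2t[mutual]])
    return W
```

Starting from a rough alignment (even the identity, if the spaces are close), each pass tightens the dictionary and the mapping together.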

3.3 Context-Dependent Alignment

We next turn our attention to the main task of this paper, which is aligning context-dependent embeddings. We now describe our generalization of the methods described in Section 3.2 for this case. Altogether, we suggest three alignment procedures, one aimed for the supervised and two for the unsupervised cases.

Supervised Anchored Alignment

As a first step, we assume access to a dictionary for the source and target domains. For each source word $w$, denote by $D(w)$ the corresponding word in the target language (in practice, we may have multiple target words for a single source word, and the extension is straightforward).

In the context-dependent case, Eq. 4 is no longer well-defined, as there are many vectors corresponding to both the source and the target words. However, a way around this challenge is to align the anchors, of which we have exactly one per word. This is motivated by our observations in Section 3.1 that context-dependent embeddings are well clustered around their centers.

Thus, in the case where a dictionary is available, we solve Eq. 4 with the token anchors $\bar{e}^s_w$ and $\bar{e}^t_{D(w)}$ as inputs.

We emphasize that by constraining $W$ to be orthogonal, we also preserve the relations between the anchors $\bar{e}_w$ and the shifts $\hat{e}_{w,c}$ that represent the contextual information.

Unsupervised Anchored Alignment

We now turn to the case where no dictionary is present. As in the supervised case, we can naturally extend a context-independent alignment procedure to the contextual space by leveraging the anchor space $\{\bar{e}_w\}$. This can be done using the unsupervised MUSE framework proposed by Conneau et al. (2018a) and described in Section 3.2.

Unsupervised Context-based Alignment

Alternatively, the alignment could be learned directly on the contextual space. To this end, we follow the adversarial algorithm of MUSE, but instead of sampling from a static table, we use contextual embeddings as examples for the adversarial training.

This context-based alignment presents opportunities but also introduces certain challenges. On the one hand, it allows us to handle polysemous words directly during the training process. On the other hand, we found empirically that training is less stable than with unsupervised anchored alignments.


We conclude both unsupervised methods with the refinement procedure incorporated in MUSE. To synthesize a dictionary, we again use the anchor vectors, since distances between target words and aligned source words must be computed.

3.4 Learning Anchored Language Models

Thus far we assumed that embeddings for both the source and target languages are pretrained and given, and that the source is then mapped to the target in a second step via a learned mapping. However, this approach may not work well when raw data for the source language is scarce, resulting in deficient embeddings. In what follows we show how to address this problem when a dictionary is available. We focus on embeddings that are learned using a language model objective, but this can easily be generalized to other objectives as well.

Our key idea is to constrain word-level embeddings across languages to be consistent. This can serve as a regularizer for the resource-limited language model. In this case, the anchors are the model representations prior to its context-aware components.

Denote the anchor for word $w$ in the source language by $\bar{e}^s_w$. Now assume we have trained a model for the target language and similarly have anchors $\bar{e}^t_w$. We propose to train the source model with an added regularization term as follows:

$\ell_{LM} + \lambda \sum_{w} \left\| \bar{e}^s_w - \bar{e}^t_{D(w)} \right\|^2$

where $\ell_{LM}$ is the language modeling objective and the sum runs over the words in the dictionary.
This regularization has two positive outcomes. First, it reduces overfitting by reducing the effective number of parameters the model fits (e.g., if the regularizer has a large coefficient, these parameters are essentially fixed). Second, it provides a certain level of alignment between the source and target languages, since they use similar initial representations.
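The regularized objective can be sketched as follows, with the LM loss abstracted to a scalar and the dictionary given as index pairs into the two embedding tables (all names are illustrative, not the paper's implementation):

```python
import numpy as np

def anchored_lm_loss(lm_loss, src_embed, tgt_anchors, dict_pairs, lam=0.1):
    """Regularized objective (sketch): the usual LM loss plus a penalty pulling
    each source word embedding toward the frozen anchor of its dictionary
    translation. dict_pairs holds (source_row, target_row) index pairs."""
    src_idx, tgt_idx = zip(*dict_pairs)
    penalty = np.sum((src_embed[list(src_idx)] - tgt_anchors[list(tgt_idx)]) ** 2)
    return lm_loss + lam * penalty
```

In an actual training loop the penalty would be computed on the model's word-level (pre-context) embedding table, with the target anchors held fixed, so gradients flow only into the resource-limited source model.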

4 Multilingual Dependency Parsing

Now that we have presented our methods for aligning context-aware embeddings, we turn to evaluating them on the task of cross-lingual dependency parsing. We first describe our baseline model. We then show how our alignment can easily be incorporated into this architecture to obtain a multilingual parser.

Baseline Parser

Most previous cross-lingual dependency parsing models used transition-based models (Ammar et al., 2016; Guo et al., 2016). We follow Che et al. (2018); Wang et al. (2018); Clark et al. (2018) and use a first-order graph-based model. Specifically, we adopt the neural edge-scoring architecture from Dozat and Manning (2017); Dozat et al. (2017), which is based on Kiperwasser and Goldberg (2016). We now briefly review this architecture. Given a sentence $w_1, \ldots, w_n$, let $x_i$ and $p_i$ be its word and POS-tag embeddings. These are concatenated and fed into a Bi-LSTM to produce token-level contextual representations $h_i$. Four separate Multi-Layer Perceptrons (MLPs) are applied on these intermediate vectors, resulting in new representations $h^{\text{arc-head}}_i$, $h^{\text{arc-dep}}_i$, $h^{\text{rel-head}}_i$ and $h^{\text{rel-dep}}_i$ for each word $w_i$. Arc scores are then obtained by:

$s^{\text{arc}}_{ij} = h^{\text{arc-head}\top}_j \left( W^{\text{arc}} h^{\text{arc-dep}}_i + b^{\text{arc}} \right)$

Given that an edge $(j \to i)$ was predicted, the score for a specific relation type $r$ is:

$s^{\text{rel}}_{ij,r} = h^{\text{rel-head}\top}_j U^{(r)} h^{\text{rel-dep}}_i$

The relation type for this edge is then predicted by $\operatorname{argmax}_r s^{\text{rel}}_{ij,r}$. Note that $W^{\text{arc}}$ and $U^{(r)}$ are not alignment matrices, but rather model parameters.
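The two scoring functions can be sketched in numpy as follows (a simplified version of the biaffine scorer; the full Dozat–Manning parametrization adds further linear bias terms for the relation score):

```python
import numpy as np

def arc_scores(h_arc_head, h_arc_dep, W_arc, b_arc):
    """s[i, j]: score of word j being the head of word i (biaffine).
    h_arc_head, h_arc_dep: (n, d) MLP outputs; W_arc: (d, d); b_arc: (d,)."""
    return h_arc_dep @ W_arc @ h_arc_head.T + h_arc_head @ b_arc

def rel_scores(h_rel_head_j, h_rel_dep_i, U_rel):
    """s[r]: score of relation type r for a predicted edge j -> i.
    U_rel: (n_rels, d, d), one bilinear form per relation type."""
    return np.einsum('rab,a,b->r', U_rel, h_rel_head_j, h_rel_dep_i)
```

Note the bias term in `arc_scores` depends only on the head representation, so it acts as a prior on how likely each word is to take dependents at all.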

Multilingual Parsing with Alignment

We now extend this model in order to use it effectively for transfer learning. First, we include contextualized word embeddings by replacing the static embeddings with a pre-trained ELMo (Peters et al., 2018) model. Second, we share all model parameters across languages and use contextual word embeddings after they are aligned to a joint space. Formally, if $c$ is a sentence of language $\ell$, contextual word embeddings are obtained via:

$e^{\text{align}}_{w,c} = W^{\ell}\, e^{\ell}_{w,c}$

where $W^{\ell}$ is the alignment matrix from language $\ell$ to the joint space. We use the space of the source language as our joint space; in the multi-source scenario, we align all embeddings to English, so the alignment matrix corresponding to English is the identity matrix. This alignment is learned a priori and kept fixed during the parser’s training. The alignment methods are described in detail in Section 3.

In their paper, Peters et al. (2018) suggest outputting a linear combination over the internal layers of an ELMo model, learning the combination weights jointly with a downstream task. We learn a separate alignment for each of those layers and fix the combination weights across languages to ensure that the parser’s inputs come from a joint cross-lingual space.
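Putting the two choices together, per-layer alignment followed by a fixed combination can be sketched as (an illustrative function, not the released implementation):

```python
import numpy as np

def align_elmo(layers, Ws, weights=(0.0, 1.0, 0.0)):
    """Align each ELMo layer with its own matrix, then combine the layers with
    weights that are fixed across languages. layers: three (n, d) arrays
    (token layer, first LSTM, second LSTM); Ws: one (d, d) matrix per layer.
    The default (0, 1, 0) keeps only the first LSTM layer."""
    aligned = [layer @ W.T for layer, W in zip(layers, Ws)]
    return sum(w * a for w, a in zip(weights, aligned))
```

Fixing the weights, rather than learning them per language, guarantees that every language feeds the parser vectors from the same aligned combination of layers.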

All of the above modifications are at the word embedding level, making them applicable to any other model that uses word embeddings.

5 Experimental Setup

Alignment Method de es fr it pt sv average
Supervised Anchored   78   85   86   82   74   68   79
Unsupervised Anchored 63 61 70 58 35 22 52
                    + Refine 72 74 81 77 53 33 65
Unsupervised Context-based 57 68 59 57 53 * 49
                    + Refine 73 82 77 73 66 * 62

Table 2: Average word-translation-to-English precision@5 using CSLS (Conneau et al., 2018a), with a dictionary (supervised) and without one (unsupervised), for German (de), Spanish (es), French (fr), Italian (it), Portuguese (pt) and Swedish (sv). Each unsupervised result is followed by a line with the results after the anchor-based refinement steps. * stands for “failed to converge”.

Contextual Embeddings

We use the ELMo model (Peters et al., 2018) with its default parameters to generate embeddings of dimension 1024 for all languages. For each language, the training data comprises Wikipedia dumps that were tokenized using UDPipe (Straka and Straková, 2017). We randomly shuffle the sentences and, following the setting of ELMo, use 95% of them for training and 5% for evaluation.


We utilize the MUSE framework (Conneau et al., 2018a) and the dictionary tables provided with it. The anchor vectors for the alignment are generated by computing the average of representations on the evaluation set (except in the limited unlabeled data case). To evaluate our alignment, we use the anchors to produce word translations. For all experiments we use the 50k most common words in each language.
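For reference, the CSLS criterion and the precision@k metric used in Table 2 can be sketched as follows (a simplified version of the MUSE criterion, assuming row-normalized embedding matrices):

```python
import numpy as np

def csls(src, tgt, k=10):
    """CSLS similarity (Conneau et al., 2018a): cosine adjusted by each vector's
    mean similarity to its k nearest cross-lingual neighbors, which penalizes
    hub vectors. Rows of src and tgt are assumed L2-normalized."""
    cos = src @ tgt.T                                    # (n_src, n_tgt)
    r_src = np.sort(cos, axis=1)[:, -k:].mean(axis=1)    # source hubness
    r_tgt = np.sort(cos, axis=0)[-k:, :].mean(axis=0)    # target hubness
    return 2 * cos - r_src[:, None] - r_tgt[None, :]

def precision_at_k(scores, gold, k=5):
    """Fraction of source words whose gold translation is in the top k."""
    topk = np.argsort(-scores, axis=1)[:, :k]
    return np.mean([gold[i] in topk[i] for i in range(len(gold))])
```

In our setting the rows of `src` would be aligned source-language anchors and the rows of `tgt` target-language anchors, with `gold` given by the held-out dictionary.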

Model de es fr it pt sv average
Zhang and Barzilay (2015) 54.1   68.3 68.8 69.4 72.5 62.5 65.9
Guo et al. (2016) 55.9 73.1 71.0 71.2 78.6 69.5 69.9
Ammar et al. (2016)   57.1   74.6   73.9   72.5   77.0   68.1 70.5
Aligned fastText 61.5 78.2 76.9 76.5 83.0 70.1 74.4
Aligned anchors 58.0 76.7 76.7 76.1 79.2 71.9 73.1
Ours 65.2 80.0 80.8 79.8 82.7 75.4 77.3
Ours, no dictionary 64.1 77.8 79.8 79.7 79.1 69.6 75.0
Ours, no pos 61.4 77.5 77.0 77.6 73.9 71.0 73.1
Ours, no dictionary, no pos 61.7 76.6 76.3 77.1 69.1 54.2 69.2
Table 3: Zero-shot cross lingual LAS scores compared to previous methods, for German (de), Spanish (es), French (fr), Italian (it), Portuguese (pt) and Swedish (sv). Aligned fastText and context-independent models are also presented as baselines. All models, except for the bottom three, are using gold POS tags and supervised anchored alignment. The bottom three rows are models that don’t use POS tags at all and/or use an unsupervised anchored alignment. Corresponding UAS results are provided in App. A.

Dependency Parsing

We used the biaffine model implemented in AllenNLP (Gardner et al., 2018), refactored to handle our modifications as described in Section 4. During training, we randomly alternate between the source languages, i.e., at each iteration we randomly choose a source language and sample a corresponding batch. Dropout (Srivastava et al., 2014) is applied on ELMo representations, Bi-LSTM representations and outputs of MLP layers. We also apply early stopping, where validation accuracy is measured as the average LAS score on the development sets across all source languages. The model parameters are the same as in Dozat et al. (2017), except that we reduce the POS tag embedding size from 100 to 50 and increase the head/dependent MLP dimension from 400 to 500.

As discussed in Section 4, we fix the weights of the linear combination across languages. From experiments on the English treebank, we found that using the outputs of the first LSTM layer is as good as learning a combination. This agrees with Belinkov et al. (2017), who show that lower layers capture more syntactic information. Therefore, we fix the weights over the ELMo layers to $(0, 1, 0)$, i.e. we use only the representations from the first LSTM layer.

Evaluation Scenarios for Dependency Parsing

For a fair comparison, we use the same setting as previous models for each scenario. Our main model (which we refer to as Ours) uses a Supervised Anchored Alignment (Section 3.3) to align the multilingual pretrained ELMo embeddings that are fed to the parser. We compare against several variants of our model:

  • Aligned FastText: instead of ELMo, we use fastText pretrained embeddings, aligned to English with MUSE.

  • Aligned Anchors: our model, where instead of contextualized embeddings, we use the anchors themselves as fixed embeddings, aligned to English.

  • No Dictionary: we assume the absence of a dictionary and use Unsupervised Anchored Alignment.

  • No Pos: Our model without the use of POS tags.

6 Results


As mentioned above, we use the outputs of the first LSTM layer of ELMo for our parsing experiments; we therefore present the alignment accuracy for those. Table 2 summarizes the precision@5 translation accuracy from six languages to English. As expected, supervised alignments outperform unsupervised ones by a large margin. Between the two unsupervised methods, the context-based alignment achieved significantly better results for Spanish and Portuguese but failed to converge for Swedish. In both cases, the value of anchors in the refinement step is clear, substantially improving the precision for all languages.

# Sentences   LM Model        UAS / LAS (Dev)   UAS / LAS (Test)   Perplexity (Train)   Perplexity (Dev)   Align
28M           ELMo            72.3 / 62.8       72.5 / 61.3        22                   44                 85
10K           ELMo            52.9 / 38.3       50.1 / 33.1        4                    4060               4
10K           Anchored ELMo   59.2 / 47.3       57.2 / 42.2        92                   600                12
Table 4: Zero-shot, single-source results for the Spanish limited unlabeled data experiments. The parsing results are UAS/LAS scores, the perplexity is that of the ELMo model, and the alignment scores are precision@5 on the held-out set, based on CSLS. All models were aligned to English using supervised anchored alignment.
Model LAS-F1
Rosa and Mareček (2018) 26.31
Smith et al. (2018) 31.93
Aligned fastText 26.77
Ours 36.98
Table 5: Results for the Kazakh dataset from CoNLL 2018 Shared Task on Multilingual Parsing, compared to the two leading models w.r.t. this treebank.

Zero-Shot Parsing, Multiple Source Languages

Table 3 summarizes the results for our zero-shot, multi-source experiments on six languages from the Google universal dependency treebank version 2.0. For each target language, the parser was trained on all treebanks in the five other languages and English. We compare our model to the performance of previous methods in the same setting (Ammar et al., 2016). The results show that our multilingual parser outperforms all previous parsers by a large margin of 6.8 LAS points. Even with an unsupervised alignment, our model consistently improves over previous models.

To make a fair comparison to previous models, we also use gold POS tags as inputs to our parser. However, for a low-resource language, we might not have access to such labels. Training our model without any POS tags leads to a drop of 8.8 LAS points for Portuguese. For the other five languages, however, the score is less affected and is still higher than that of previous methods that do use such annotations.

To assess the value of contextual embeddings, we also evaluate our model using non-contextual embeddings produced by fastText (Bojanowski et al., 2017). While these improve over previous works, our context-aware model outperforms them for all six languages in UAS and for 5 out of 6 languages in LAS, obtaining an average LAS higher by 3 points. To further examine the impact of introducing context, we run our model with the precomputed anchors as fixed embeddings (Aligned anchors in Table 3). Unlike fastText embeddings of size 300, these anchors share the same dimension with the contextual embeddings but lack the contextual information. Indeed, the context-aware model is consistently better.

Few-Shot Parsing, Small Treebanks

In this scenario we assume a very small treebank for the target language and no POS tags. We used the Kazakh treebank from the CoNLL 2018 shared task (Zeman et al., 2018). The training set consists of only 38 trees, and no development set is provided. Segmentation and tokenization are applied using UDPipe. Turkish is utilized as a single source language. To align contextual embeddings, we used a dictionary generated and provided by Rosa and Mareček (2018); it was obtained by running FastAlign (Dyer et al., 2013) on the OpenSubtitles2018 (Lison and Tiedemann, 2016) parallel sentences dataset from OPUS (Tiedemann, 2012). Table 5 summarizes the results, showing that our algorithm outperforms the best model from the shared task by 5 LAS points and improves by over 10 points over a fastText baseline.

Zero-Shot Parsing, Limited Unlabeled Data

To evaluate our anchored language model (Section 3.4), we simulate a low-resource scenario by extracting only 10k random sentences out of the Spanish unlabeled data. We also extract 50k sentences for LM evaluation, but perform all computations, such as anchor extraction, on the 10k training sentences. For a dictionary, we used the 5k training table from Conneau et al. (2018a), filtering words with multiple translations to the most common one according to Google Translate. Another table of size 1,500 was used to evaluate the alignment. In this scenario, we assume a single source language (English) and use neither POS tags nor any labeled data for the target language.

Table 4 depicts the results. Reducing the amount of unlabeled data drastically decreases the parsing accuracy by around 20 points. The regularization introduced by our anchored LM manages to significantly improve the language model’s validation perplexity, leading to a gain of 7 UAS points and 9 LAS points.

7 Conclusion

We introduce a novel method for multilingual transfer that utilizes deep contextual embeddings of different languages, pretrained in an unsupervised fashion. At the core of our methods, we suggest to use anchors for tokens, reducing this problem to context-independent alignment. Our methods are compatible both for cases where a dictionary is present and absent, as well as for low-resource languages. The acquired alignment can be used to improve cross-lingual transfer learning, gaining from the contextual nature of the embeddings. We show that these methods lead to good word translation results, and improve significantly upon state-of-the-art zero-shot and few-shot cross-lingual dependency parsing models.


Acknowledgments

This work was supported by the US-Israel Binational Science Foundation (BSF, Grant No. 2012330), and by the Yandex Initiative in Machine Learning.


Appendix A Additional Parsing Results

In Table 6 we provide UAS results complementary to the zero-shot cross-lingual parsing results of Table 3.

Model de es fr it pt sv average
Zhang and Barzilay (2015) 62.5 78.0 78.9 79.3 78.6 75.0 75.4
Guo et al. (2016) 65.0 79.0 77.7 78.5 81.9 78.3 76.7
Aligned fastText   69.2   83.4   84.6   84.3   86.0   80.6   81.4
Aligned anchors 65.1 82.8 83.9 83.6 83.4 82.0 80.1
Ours 73.7 85.5 87.8 87.0 86.6 84.6 84.2
Ours, no dictionary 73.2 84.3 87.0 86.8 84.5 80.4 82.7
Ours, no pos 69.7 84.8 85.3 85.3 79.7 81.7 81.1
Ours, no dictionary, no pos 72.2 84.7 84.9 85.0 78.1 67.9 78.8
Table 6: Zero-shot cross lingual results compared to previous methods, measured in UAS. Aligned fastText and context-independent models are also presented as baselines. All models, except for the bottom three, are using gold POS tags and supervised anchored alignment. The bottom three rows are models that don’t use POS tags at all and/or use an unsupervised anchored alignment.
Note that Ammar et al. (2016) did not publish UAS results.