On Using Very Large Target Vocabulary for Neural Machine Translation

Abstract

Neural machine translation, a recently proposed approach to machine translation based purely on neural networks, has shown promising results compared to the existing approaches such as phrase-based statistical machine translation. Despite its recent success, neural machine translation has a limitation in handling a large target vocabulary, as both training complexity and decoding complexity increase proportionally to the number of target words. In this paper, we propose a method based on importance sampling that allows us to use a very large target vocabulary without increasing training complexity. We show that decoding can be efficiently done even with the model having a very large target vocabulary by selecting only a small subset of the whole target vocabulary. The models trained by the proposed approach are empirically found to match, and in some cases outperform, the baseline models with a small vocabulary as well as the LSTM-based neural machine translation models. Furthermore, when we use an ensemble of a few models with very large target vocabularies, we achieve performance comparable to the state of the art (measured by BLEU) on both the English→German and English→French translation tasks of WMT’14.

1 Introduction

Neural machine translation (NMT) is a recently introduced approach to solving machine translation [13]. In neural machine translation, one builds a single neural network that reads a source sentence and generates its translation. The whole neural network is jointly trained to maximize the conditional probability of a correct translation given a source sentence, using the bilingual corpus. NMT models have been shown to perform as well as the most widely used conventional translation systems [22].

Neural machine translation has a number of advantages over the existing statistical machine translation systems, specifically the phrase-based system [14]. First, NMT requires a minimal set of domain knowledge. For instance, none of the models proposed in [22], [1] or [13] assume any linguistic properties of the source and target sentences other than that they are sequences of words. Second, the whole system is jointly tuned to maximize the translation performance, unlike the existing phrase-based system, which consists of many feature functions that are tuned separately. Lastly, the memory footprint of the NMT model is often much smaller than that of the existing systems, which rely on maintaining large tables of phrase pairs.

Despite these advantages and promising results, there is a major limitation in NMT compared to the existing phrase-based approach. That is, the number of target words must be limited. This is mainly because the complexity of training and using an NMT model increases as the number of target words increases.

A usual practice is to construct a target vocabulary of the K most frequent words (a so-called shortlist), where K is often in the range of 30,000 [1] to 80,000 [22]. Any word not included in this vocabulary is mapped to a special token [UNK] representing an unknown word. This approach works well when there are only a few unknown words in the target sentence, but it has been observed that the translation performance degrades rapidly as the number of unknown words increases [6].
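
In practice, the shortlist is built by counting target-side word frequencies, and every out-of-shortlist word is mapped to the special unknown-word token. The sketch below is a minimal illustration of this conventional preprocessing step; the toy corpus, the [UNK] spelling and the helper name are ours, not the exact setup of the works cited above.

    from collections import Counter

    def build_shortlist(target_sentences, shortlist_size):
        """Keep the most frequent target words; map everything else to [UNK]."""
        counts = Counter(w for sent in target_sentences for w in sent.split())
        shortlist = {w for w, _ in counts.most_common(shortlist_size)}

        def map_sentence(sent):
            return ' '.join(w if w in shortlist else '[UNK]' for w in sent.split())

        return shortlist, [map_sentence(s) for s in target_sentences]

    # Toy usage; a realistic shortlist size would be in the tens of thousands.
    shortlist, mapped = build_shortlist(["the cat sat on the mat", "a dog chased the cat"], 5)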

In this paper, we propose an approximate training algorithm based on (biased) importance sampling that allows us to train an NMT model with a much larger target vocabulary. The proposed algorithm effectively keeps the computational complexity during training at the level of using only a small subset of the full vocabulary. Once the model with a very large target vocabulary is trained, one can choose to use either all the target words or only a subset of them.

We compare the proposed algorithm against the baseline shortlist-based approach on the English→French and English→German translation tasks using the NMT model introduced in [1]. The empirical results demonstrate that we can potentially achieve better translation performance using larger vocabularies, and that our approach does not sacrifice too much speed in either training or decoding. Furthermore, we show that the model trained with this algorithm achieves the best translation performance yet reported for single NMT models on the WMT’14 English→French translation task.

2 Neural Machine Translation and Limited Vocabulary Problem

In this section, we briefly describe an approach to neural machine translation proposed recently in [1]. Based on this description we explain the issue of limited vocabularies in neural machine translation.

2.1 Neural Machine Translation

Neural machine translation is a recently proposed approach to machine translation, which uses a single neural network trained jointly to maximize the translation performance [10].

Neural machine translation is often implemented as an encoder–decoder network. The encoder reads the source sentence x = (x_1, ..., x_T) and encodes it into a sequence of hidden states h = (h_1, ..., h_T):

    h_t = f(x_t, h_{t-1}).    (1)

Then, the decoder, another recurrent neural network, generates a corresponding translation y = (y_1, ..., y_{T'}) based on the encoded sequence of hidden states h:

    p(y_t | y_{<t}, x) ∝ exp{ q(y_{t-1}, z_t, c_t) },    (2)

where

    z_t = g(y_{t-1}, z_{t-1}, c_t),    (3)
    c_t = r(z_{t-1}, h_1, ..., h_T),    (4)

and y_{<t} = (y_1, ..., y_{t-1}).

The whole model is jointly trained to maximize the conditional log-probability of the correct translation given a source sentence, with respect to the parameters θ of the model:

    θ* = argmax_θ Σ_{n=1}^{N} Σ_{t=1}^{T_n} log p(y_t^n | y_{<t}^n, x^n),    (5)

where (x^n, y^n) is the n-th training pair of sentences, and T_n is the length of the n-th target sentence (y^n).

Detailed Description

In this paper, we use a specific implementation of neural machine translation that uses an attention mechanism, as recently proposed in [1].

In [1], the encoder in Eq. (1) is implemented by a bi-directional recurrent neural network such that

    h_t = [ h_t^(←) ; h_t^(→) ],

where

    h_t^(←) = f(x_t, h_{t+1}^(←)),    h_t^(→) = f(x_t, h_{t-1}^(→)).

They used a gated recurrent unit for f (see, e.g., [7]).

The decoder, at each time t, computes the context vector c_t as a convex sum of the hidden states (h_1, ..., h_T) with the coefficients α_{t,1}, ..., α_{t,T} computed by

    α_{t,j} = exp{ a(h_j, z_{t-1}) } / Σ_{k=1}^{T} exp{ a(h_k, z_{t-1}) },    (6)

where a is a feedforward neural network with a single hidden layer.
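
A rough sketch of this attention step is given below; it is not the exact parametrization of [1], and the scoring function is passed in as a generic callable standing in for the single-hidden-layer network.

    import numpy as np

    def attention_context(hidden_states, z_prev, score):
        """Alignment weights over the source annotations and the resulting context vector.

        hidden_states: array of shape (T, dim) holding h_1, ..., h_T
        z_prev: previous decoder state z_{t-1}
        score: callable implementing a(h_j, z_{t-1})
        """
        energies = np.array([score(h, z_prev) for h in hidden_states])
        alpha = np.exp(energies - energies.max())   # softmax over source positions
        alpha /= alpha.sum()
        context = alpha @ hidden_states             # convex sum of the annotations
        return alpha, context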

A new hidden state z_t of the decoder in Eq. (3) is computed based on the previous hidden state z_{t-1}, the previously generated symbol y_{t-1} and the computed context vector c_t. The decoder also uses a gated recurrent unit, as the encoder does.

The probability of the next target word in Eq. (2) is then computed by

    p(y_t | y_{<t}, x) = (1/Z) exp{ w_t^T φ(y_{t-1}, z_t, c_t) + b_t },    (7)

where φ is an affine transformation followed by a nonlinear activation, and w_t and b_t are respectively the target word vector and the target word bias. Z is the normalization constant computed by

    Z = Σ_{k: y_k ∈ V} exp{ w_k^T φ(y_{t-1}, z_t, c_t) + b_k },    (8)

where V is the set of all the target words.

For the detailed description of the implementation, we refer the reader to the appendix of [1].

2.2 Limited Vocabulary Issue and Conventional Solutions

One of the main difficulties in training this neural machine translation model is the computational complexity involved in computing the target word probability (Eq. (7)). More specifically, we need to compute the dot product between the feature φ(y_{t-1}, z_t, c_t) and the word vector w_k as many times as there are words in the target vocabulary in order to compute the normalization constant Z (the denominator of Eq. (7)). This has to be done for, on average, 20–30 words per sentence, which easily becomes prohibitively expensive even with a moderate number of possible target words. Furthermore, the memory requirement grows linearly with respect to the number of target words. This has been a major hurdle for neural machine translation, compared to the existing non-parametric approaches such as phrase-based translation systems.
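
To make the bottleneck concrete, the following numpy sketch evaluates Eq. (7) for a single target position; the dimensions are illustrative rather than the exact model sizes. Both the dot products and the normalization constant require touching every row of the output weight matrix, whose size grows linearly with the target vocabulary.

    import numpy as np

    vocab_size, dim = 500000, 620            # illustrative sizes; W alone is |V| x dim
    W = np.random.randn(vocab_size, dim)     # one target word vector w_k per row
    b = np.zeros(vocab_size)                 # one target word bias b_k per word
    phi = np.random.randn(dim)               # feature phi(y_{t-1}, z_t, c_t)

    energies = W @ phi + b                   # |V| dot products for one target position
    energies -= energies.max()               # numerical stability
    p = np.exp(energies)
    p /= p.sum()                             # normalization constant Z sums over all of V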

Recently proposed neural machine translation models, hence, use a shortlist of 30,000 to 80,000 most frequent words [1]. This makes training more feasible, but comes with a number of problems. First of all, the performance of the model degrades heavily if the translation of a source sentence requires many words that are not included in the shortlist [6]. This also affects the performance evaluation of the system, which is often measured by BLEU. Second, the first issue becomes more problematic with languages that have a rich vocabulary, such as German or other highly inflected languages.

There are two model-specific approaches to this issue of a large target vocabulary. The first approach is to stochastically approximate the target word probability. This has been proposed recently in [19] based on noise-contrastive estimation [12]. In the second approach, the target words are clustered into multiple classes, or hierarchical classes, and the target word probability p(y_t | y_{<t}, x) is factorized as a product of the class probability and the intra-class word probability. This reduces the number of required dot products to the sum of the number of classes and the number of words within a class. These approaches mainly aim at reducing the computational complexity during training, but do not often result in a speed-up when decoding a translation at test time (see Footnote 1).

Other than these model-specific approaches, there exist translation-specific approaches. A translation-specific approach exploits the properties of the rare target words. For instance, Luong et al. [17] proposed such an approach for neural machine translation. They replace rare words (the words that are not included in the shortlist) in both source and target sentences with corresponding placeholder tokens using a word alignment model. Once a source sentence is translated, each placeholder in the translation is replaced based on the source word marked by the corresponding placeholder.

It is important to note that the model-specific approaches and the translation-specific approaches are often complementary and can be used together to further improve the translation performance and reduce the computational complexity.

3 Approximate Learning Approach to Very Large Target Vocabulary

3.1 Description

In this paper, we propose a model-specific approach that allows us to train a neural machine translation model with a very large target vocabulary. With the proposed approach, the computational complexity of training becomes constant with respect to the size of the target vocabulary. Furthermore, the proposed approach allows us to efficiently use a fast computing device with limited memory, such as a GPU, to train a neural machine translation model with a much larger target vocabulary.

As mentioned earlier, the computational inefficiency of training a neural machine translation model arises from the normalization constant in Eq. (8). In order to avoid the growing complexity of computing the normalization constant, we propose here to use only a small subset V′ of the target vocabulary at each update. The proposed approach is based on the earlier work of [3].

Let us consider the gradient of the log-probability of the output in Eq. (7). The gradient is composed of a positive and a negative part:

    ∇ log p(y_t | y_{<t}, x) = ∇E(y_t) − Σ_{k: y_k ∈ V} p(y_k | y_{<t}, x) ∇E(y_k),    (9)

where we define the energy E as

    E(y_j) = w_j^T φ(y_{j-1}, z_j, c_j) + b_j.

The second, or negative, term of the gradient is in essence the expected gradient of the energy:

    E_P[ ∇E(y) ],    (10)

where P denotes p(y | y_{<t}, x).

The main idea of the proposed approach is to approximate this expectation, or the negative term of the gradient, by importance sampling with a small number of samples. Given a predefined proposal distribution Q and a set V′ of samples from Q, we approximate the expectation in Eq. (10) with

    E_P[ ∇E(y) ] ≈ Σ_{k: y_k ∈ V′} ( ω_k / Σ_{k′: y_{k′} ∈ V′} ω_{k′} ) ∇E(y_k),    (11)

where

    ω_k = exp{ E(y_k) − log Q(y_k) }.    (12)

This approach allows us to compute the normalization constant during training using only a small subset of the target vocabulary, resulting in much lower computational complexity for each parameter update. Intuitively, at each parameter update, we update only the vectors associated with the correct word and with the sampled words in V′. Once training is over, we can use the full target vocabulary to compute the output probability of each target word.
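
As a minimal sketch (in numpy, with illustrative names and shapes), the normalized importance weights of Eq. (11), which play the role of p(y_k | y_{<t}, x) in the negative term of the gradient, can be computed while touching only the rows of the output matrix indexed by the sampled subset V′:

    import numpy as np

    def normalized_importance_weights(W, b, phi, sample_idx, logQ):
        """Coefficients multiplying grad E(y_k) in Eq. (11), for y_k in the sampled V'.

        W, b: target word vectors and biases (only rows in sample_idx are read)
        phi: feature vector phi(y_{t-1}, z_t, c_t)
        sample_idx: integer indices of the sampled subset V'
        logQ: log-probabilities of the proposal Q for the sampled words
        """
        energies = W[sample_idx] @ phi + b[sample_idx]   # E(y_k) restricted to V'
        log_omega = energies - logQ                      # log importance weights, Eq. (12)
        log_omega -= log_omega.max()                     # numerical stability
        omega = np.exp(log_omega)
        return omega / omega.sum()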

Although the proposed approach naturally addresses the computational complexity, using it naively does not guarantee that the number of parameters being updated for each sentence pair, which includes multiple target words, is bounded, nor can it be controlled. This becomes problematic when training is done, for instance, on a GPU with limited memory.

In practice, hence, we partition the training corpus and define a subset of the target vocabulary for each partition prior to training. Before training begins, we sequentially examine each target sentence in the training corpus and accumulate unique target words until the number of unique target words reaches the predefined threshold τ. The accumulated vocabulary will be used for this partition of the corpus during training. We repeat this until the end of the training set is reached. Let us refer to the subset of target words used for the i-th partition by V′_i.
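
A sketch of this partitioning pass is shown below, assuming the target sentences are already tokenized into lists of words; the function name and the returned representation are illustrative.

    def partition_by_vocabulary(target_sentences, tau):
        """Split the corpus into partitions whose unique target words number at most tau.

        Returns a list of (sentence_indices, vocabulary) pairs; the vocabulary of the
        i-th partition plays the role of V'_i during training.
        """
        partitions, current_idx, current_vocab = [], [], set()
        for i, sent in enumerate(target_sentences):
            new_words = set(sent) - current_vocab
            if current_idx and len(current_vocab) + len(new_words) > tau:
                partitions.append((current_idx, current_vocab))
                current_idx, current_vocab = [], set()
                new_words = set(sent)
            current_idx.append(i)
            current_vocab |= new_words
        if current_idx:
            partitions.append((current_idx, current_vocab))
        return partitions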

This may be understood as having a separate proposal distribution Q_i for each partition of the training corpus. The distribution Q_i assigns equal probability mass to all the target words included in the subset V′_i, and zero probability mass to all the other words, i.e.,

    Q_i(y_k) = 1/|V′_i|  if y_k ∈ V′_i,    and    Q_i(y_k) = 0  otherwise.

This choice of proposal distribution cancels out the correction term −log Q(y_k) from the importance weight ω_k in Eqs. (11)–(12), which makes the proposed approach equivalent to approximating the exact output probability in Eq. (7) with

    p(y_t | y_{<t}, x) = exp{ w_t^T φ(y_{t-1}, z_t, c_t) + b_t } / Σ_{k: y_k ∈ V′} exp{ w_k^T φ(y_{t-1}, z_t, c_t) + b_k }.

It should be noted that this choice of proposal distribution makes the estimator biased.

The proposed procedure results in a speed-up over usual importance sampling, as it exploits the advantage of modern computers in performing matrix–matrix rather than matrix–vector multiplications.

Informal Discussion on Consequence

The parametrization of the output probability in Eq. (7) can be understood as arranging the vectors associated with the target words such that the dot product between the most likely, or correct, target word’s vector and the current hidden state is maximized. The exponentiation followed by normalization is simply a process in which the dot products are converted into proper probabilities.

As learning continues, therefore, the vectors of all the likely target words tend to align with each other but not with the others. This is achieved exactly by moving the vector of the correct word in the direction of φ(y_{t-1}, z_t, c_t), while pushing all the other vectors away, which happens when the gradient of the logarithm of the exact output probability in Eq. (7) is maximized. Our approximate approach, instead, moves the word vectors of the correct words and of only a subset of sampled target words (those included in V′).

3.2 Decoding

Once the model is trained using the proposed approximation, we can use the full target vocabulary when decoding a translation given a new source sentence. Although this is advantageous as it allows the trained model to utilize the whole vocabulary when generating a translation, doing so may be too computationally expensive, e.g., for real-time applications.

Since training arranges the target word vectors in the space so that they align well with the hidden state of the decoder only when they are likely to be correct words, we can use only a subset of candidate target words during decoding. This is similar to what we do during training, except that at test time, we do not have access to a set of correct target words.

The most naïve way to select a subset of candidate target words is to take only the top-K most frequent target words, where K can be adjusted to meet the computational requirement. This, however, effectively cancels out the whole purpose of training a model with a very large target vocabulary. Instead, we can use an existing word alignment model to align the source and target words in the training corpus and build a dictionary. With the dictionary, for each source sentence, we construct a target word set consisting of the K most frequent words (according to the estimated unigram probability) and, using the dictionary, at most K′ likely target words for each source word. K and K′ may be chosen either to meet the computational requirement or to maximize the translation performance on the development set. We call a subset constructed in either of these ways a candidate list.
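
A sketch of the candidate-list construction is given below; the dictionary is assumed to map each source word to its candidate translations sorted from most to least likely, and the helper name is ours.

    def build_candidate_list(source_sentence, top_k_frequent, dictionary, k_prime):
        """Union of the K most frequent target words and up to K' translations per source word.

        top_k_frequent: the K most frequent target words (by estimated unigram probability)
        dictionary: maps a source word to a list of target words sorted by likelihood
        k_prime: maximum number of dictionary candidates kept per source word
        """
        candidates = set(top_k_frequent)
        for word in source_sentence.split():
            candidates.update(dictionary.get(word, [])[:k_prime])
        return candidates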

3.3 Source Words for Unknown Words

In the experiments, we evaluate the proposed approach with the neural machine translation model called RNNsearch [1] (see Sec. 2.1). In this model, as a part of the decoding process, we obtain the alignments between the target words and the source locations via the alignment model in Eq. (6).

We can use this feature to infer the source word to which each target word was most aligned (indicated by the largest α_{t,j} in Eq. (6)). This is especially useful when the model generates an [UNK] token. Once a translation is generated given a source sentence, each [UNK] may be replaced using a translation-specific technique based on the aligned source word. For instance, in the experiments, we try replacing each [UNK] token with the aligned source word or its most likely translation determined by another word alignment model. Other techniques such as transliteration may also be used to further improve the performance [15].
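
A sketch of this post-processing step is shown below, assuming the decoder exposes, for every generated target word, the source position with the largest alignment weight; the lowercase test reflects the heuristic described in Section 4.1, and the dictionary is the same word-alignment-based dictionary as above.

    def replace_unknown_words(source_tokens, target_tokens, alignments, dictionary=None):
        """Replace each [UNK] with the aligned source word or its most likely translation.

        alignments: for each target position t, the source index j with the largest alpha_{t,j}
        """
        output = []
        for t, word in enumerate(target_tokens):
            if word == '[UNK]':
                src = source_tokens[alignments[t]]
                if dictionary and src in dictionary and src[0].islower():
                    word = dictionary[src][0]   # most likely translation of the source word
                else:
                    word = src                  # copy the source word directly
            output.append(word)
        return output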

4 Experiments

We evaluate the proposed approach in English→French and English→German translation tasks. We trained the neural machine translation models using only the bilingual, parallel corpora made available as a part of WMT’14. For each pair, the datasets we used are:

  • English→French (see Footnote 2):
    Europarl v7, Common Crawl, UN,
    News Commentary, Gigaword

  • English→German:
    Europarl v7, Common Crawl, News Commentary

To ensure fair comparison, the English→French corpus, which comprises approximately 12 million sentences, is identical to the one used in [13]. As for English→German, the corpus was preprocessed, in a manner similar to [21], in order to remove many poorly translated sentences.

We evaluate the models on the WMT’14 test set (news-test-2014; see Footnote 3), while the concatenation of news-test-2012 and news-test-2013 is used for model selection (development set). Table 1 presents the data coverage w.r.t. the vocabulary size, on the target side.

Unless mentioned otherwise, all reported BLEU scores [20] are computed with the multi-bleu.perl script (see Footnote 4) on the cased tokenized translations.

Table 1: Data coverage (in %) on target-side corpora for different vocabulary sizes. “All” refers to all the tokens in the training set.

              English→French          English→German
              Train      Test         Train      Test
  15k         93.5       90.8         88.5       83.8
  30k         96.0       94.6         91.8       87.9
  50k         97.3       96.3         93.7       90.4
  500k        99.5       99.3         98.4       96.1
  All         100.0      99.6         100.0      97.3

4.1 Settings

As a baseline for English→French translation, we use the RNNsearch model proposed by [1], with 30,000 source and target words (see Footnote 5). Another RNNsearch model is trained for English→German translation with 50,000 source and target words.

For each language pair, we train another set of RNNsearch models with much larger vocabularies of 500,000 source and target words, using the proposed approach. We call these models RNNsearch-LV. We vary the size of the shortlist used during training (τ in Section 3.1). We tried 15,000 and 30,000 for English→French, and 15,000 and 50,000 for English→German. We later report the results for the best performance on the development set, with models generally evaluated every twelve hours.

For both language pairs, we also trained new models, with two settings of τ, by reshuffling the dataset at the beginning of each epoch. While this causes a non-negligible amount of overhead, such a change allows words to be contrasted with different sets of other words in each epoch.

To stabilize parameters other than the word embeddings, at the end of the training stage, we freeze the word embeddings and tune only the other parameters for approximately two more days after the peak performance on the development set is observed. This helped increase BLEU scores on the development set.

We use beam search to generate a translation given a source sentence. During beam search, we keep a set of 12 hypotheses and normalize probabilities by the length of the candidate sentences, as in [6] (see Footnote 6). The candidate list is chosen to maximize the performance on the development set, over choices of K and K′. As explained in Section 3.2, we test using a bilingual dictionary to accelerate decoding and to replace unknown words in translations. The bilingual dictionary is built using fast_align [9]. We use the dictionary only if a word starts with a lowercase letter, and otherwise, we copy the source word directly. This led to better performance on the development sets.
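
The length normalization mentioned above amounts to ranking finished hypotheses by their average per-word log-probability rather than by their total log-probability; a minimal sketch, with an illustrative hypothesis representation, is:

    def rank_hypotheses(hypotheses):
        """Rank beam-search hypotheses by length-normalized log-probability.

        hypotheses: list of (token_list, total_log_prob) pairs
        """
        return sorted(hypotheses, key=lambda h: h[1] / max(len(h[0]), 1), reverse=True)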

Table 2: The translation performances in BLEU obtained by different models on (a) English→French and (b) English→German translation tasks. RNNsearch is the model proposed in [1], RNNsearch-LV is the RNNsearch trained with the approach proposed in this paper, and Google is the LSTM-based model proposed in [22]. Unless mentioned otherwise, we report single-model RNNsearch-LV scores using τ = 30,000 (English→French) and τ = 50,000 (English→German). For the experiments we have run ourselves, we show the scores on the development set as well, in brackets. The remaining results are taken from previously reported work, including a standard Moses setting.
                        RNNsearch        RNNsearch-LV     Google    Phrase-based SMT
  Basic NMT             29.97 (26.58)    32.68 (28.76)    30.6      33.3 / 37.03
  +Candidate List       –                33.36 (29.32)    –         –
  +UNK Replace          33.08 (29.08)    34.11 (29.98)    33.1      –
  +Reshuffle (τ=50k)    –                34.60 (30.53)    –         –
  +Ensemble             –                37.19 (31.98)    37.5      –

(a) English→French

Table 3: The translation performances in BLEU obtained by different models on the English→German translation task (see the caption of Table 2 for details). The Phrase-based SMT column shows the previously reported result of [5].
                        RNNsearch        RNNsearch-LV     Phrase-based SMT
  Basic NMT             16.46 (17.13)    16.95 (17.85)    20.67
  +Candidate List       –                17.46 (18.00)    –
  +UNK Replace          18.97 (19.16)    18.89 (19.03)    –
  +Reshuffle            –                19.40 (19.37)    –
  +Ensemble             –                21.59 (21.06)    –

(b) English→German

4.2 Translation Performance

In Tables 2 and 3, we present the results obtained by the models trained with very large target vocabularies, and alongside them, the previous results reported in [22], [17], [5] and [8]. Without translation-specific strategies, we can clearly see that the RNNsearch-LV outperforms the baseline RNNsearch.

In the case of the English→French task, RNNsearch-LV approached the performance level of the previous best single neural machine translation (NMT) model, even without any translation-specific techniques (Sections 3.2 and 3.3). With these, however, the RNNsearch-LV outperformed it. The performance of the RNNsearch-LV is also better than that of a standard phrase-based translation system [7]. Furthermore, by combining 8 models, we were able to achieve a translation performance comparable to the state of the art, measured in BLEU.

For English→German, the RNNsearch-LV outperformed the baseline before unknown word replacement, but after doing so, the two systems performed similarly. We could reach higher large-vocabulary single-model performance by reshuffling the dataset, but this step could potentially also help the baseline. In this case, we were able to surpass the previously reported best translation result on this task by building an ensemble of 8 models.

With the smaller choice of τ, the RNNsearch-LV performance worsened slightly, with the best BLEU scores, without reshuffling, being 33.76 and 18.59 respectively for English→French and English→German.

Table 4: The average per-word decoding time. Decoding here does not include parameter loading and unknown word replacement. The baseline uses 30,000 words. The candidate list is built with K = 30,000 and K′ = 10. CPU: Intel i7-4820K (single thread); GPU: NVIDIA GTX TITAN Black.

                        CPU        GPU
  RNNsearch             0.09 s     0.02 s
  RNNsearch-LV          0.80 s     0.25 s
  RNNsearch-LV
   +Candidate list      0.12 s     0.05 s

4.3 Note on Ensembles

For each language pair, we began by training four models, and from each of them we collected the two checkpoints with the best and second-best performance on the development set. We continued training from each of these points, while keeping the word embeddings fixed, until the best development performance was reached, and took the model at this point as a single model in an ensemble. This procedure resulted in a total of eight models, but because much of the training had been shared, the composition of the ensemble may be sub-optimal. This is supported by the fact that higher cross-model BLEU scores [11] are observed for models that were partially trained together.

4.4 Analysis

Decoding Speed

In Table 4, we present the timing information of decoding for different models. Clearly, decoding from RNNsearch-LV with the full target vocabulary is slowest. If we use a candidate list for decoding each translation, the speed of decoding substantially improves and becomes close to the baseline RNNsearch.

A potential issue with using a candidate list is that for each source sentence, we must re-build a target vocabulary and subsequently replace a part of the parameters, which may easily become time-consuming. We can address this issue, for instance, by building a common candidate list for multiple source sentences. By doing so, we were able to match the decoding speed of the baseline RNNsearch model.
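
One way to amortize this cost, sketched below, is to merge the candidate vocabularies of several source sentences and slice the output word vectors and biases only once for the whole batch; build_candidate_list is the hypothetical helper sketched in Section 3.2, and word2idx is an assumed word-to-index mapping.

    import numpy as np

    def shared_output_layer(source_batch, top_k_frequent, dictionary, k_prime, word2idx, W, b):
        """Build one common candidate list for a batch and slice the output layer once."""
        common = set()
        for sentence in source_batch:
            common |= build_candidate_list(sentence, top_k_frequent, dictionary, k_prime)
        idx = np.array(sorted(word2idx[w] for w in common if w in word2idx))
        return idx, W[idx], b[idx]   # reduced output parameters reused for the whole batch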

Decoding Target Vocabulary

For English→French, we evaluate the influence of the target vocabulary when translating the test sentences by using the union of a fixed set of common words and (at most) K′ likely candidates for each source word according to the dictionary. Results are presented in Figure 1. With K′ = 0 (not shown), the performance of the system is comparable to the baseline when not replacing the unknown words (30.12), but there is not as much improvement when doing so (31.14). As the large-vocabulary model does not predict [UNK] as much during training, it is less likely to generate it when decoding, limiting the effectiveness of the post-processing step in this case. With K′ = 1, which limits the diversity of allowed uncommon words, BLEU is not as good as with moderately larger K′, which indicates that our models can, to some degree, correctly choose between rare alternatives. If we rather use K′ = 10, as we did for testing based on validation performance, the further improvement is approximately 0.2 BLEU.

When validating the choice of K, we found it to be correlated with the τ that was chosen during training. For example, on the English→French validation set with the smaller τ, the BLEU score of 29.44 obtained with a small K drops as K is increased, whereas with the larger τ, scores increase moderately as K grows. Similar effects were observed for English→German and on the test sets. As our implementation of importance sampling does not apply the usual correction to the gradient, it seems beneficial for the test vocabularies to resemble those used during training.

Figure 1: Single-model test BLEU scores (English→French) with respect to the number of dictionary entries K′ allowed for each source word.

5 Conclusion

In this paper, we proposed a way to extend the size of the target vocabulary for neural machine translation. The proposed approach allows us to train a model with much larger target vocabulary without any substantial increase in computational complexity. It is based on the earlier work in [3] which used importance sampling to reduce the complexity of computing the normalization constant of the output word probability in neural language models.

On the English→French and English→German translation tasks, we observed that the neural machine translation models trained using the proposed method performed as well as, or better than, those using only limited sets of target words, even when replacing unknown words. As the performance of the RNNsearch-LV models increased when only a selected subset of the target vocabulary was used during decoding, the proposed learning algorithm becomes even more practical.

When measured by BLEU, our models showed translation performance comparable to the state-of-the-art translation systems on both the English→French task and English→German task. On the English→French task, a model trained with the proposed approach outperformed the best single neural machine translation (NMT) model from [17] by approximately 1 BLEU point. The performance of the ensemble of multiple models, despite its relatively less diverse composition, is approximately 0.3 BLEU points away from the best system [17]. On the English→German task, the best performance of 21.59 BLEU by our model is higher than that of the previous state of the art (20.67) reported in [5].

Acknowledgments

The authors would like to thank the developers of Theano [2, 4]. We acknowledge the support of the following agencies for research funding and computing support: NSERC, Calcul Québec, Compute Canada, the Canada Research Chairs and CIFAR.

Footnotes

  1. This is due to the fact that the beam search requires the conditional probability of every target word at each time step regardless of the parametrization of the output probability.
  2. The preprocessed data can be found and downloaded from http://www-lium.univ-lemans.fr/~schwenk/nnmt-shared-task/README.
  3. To compare with previous submissions, we use the filtered test sets.
  4. https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl
  5. The authors of [1] gave us access to their trained models. We chose the best one on the validation set and resumed training.
  6. These experimental details differ from [1].

References

  1. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. 2014.
  2. Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. Theano: new features and speed improvements. 2012.
  3. Yoshua Bengio and Jean-Sébastien Sénécal. Adaptive importance sampling to accelerate training of a neural probabilistic language model. 2008.
  4. James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: a CPU and GPU math expression compiler. 2010.
  5. Christian Buck, Kenneth Heafield, and Bas van Ooyen. N-gram counts and language models from the common crawl. 2014.
  6. Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder–Decoder approaches. 2014.
  7. Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder–decoder for statistical machine translation. 2014.
  8. Nadir Durrani, Barry Haddow, Philipp Koehn, and Kenneth Heafield. Edinburgh’s phrase-based machine translation systems for WMT-14. 2014.
  9. Chris Dyer, Victor Chahuneau, and Noah A. Smith. A simple, fast, and effective reparameterization of IBM Model 2. 2013.
  10. Mikel L. Forcada and Ramón P. Ñeco. Recursive hetero-associative memories for translation. 1997.
  11. Markus Freitag, Stephan Peitz, Joern Wuebker, Hermann Ney, Matthias Huck, Rico Sennrich, Nadir Durrani, Maria Nadejde, Philip Williams, Philipp Koehn, et al. EU-BRIDGE MT: Combined machine translation. 2014.
  12. Michael Gutmann and Aapo Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. 2010.
  13. Nal Kalchbrenner and Phil Blunsom. Recurrent continuous translation models. 2013.
  14. Philipp Koehn, Franz Josef Och, and Daniel Marcu. Statistical phrase-based translation. 2003.
  15. Philipp Koehn. Statistical Machine Translation. 2010.
  16. Liangyou Li, Xiaofeng Wu, Santiago Cortes Vaillo, Jun Xie, Andy Way, and Qun Liu. The DCU-ICTCAS MT system at WMT 2014 on German-English translation task. 2014.
  17. Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. Addressing the rare word problem in neural machine translation. 2014.
  18. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. 2013.
  19. Andriy Mnih and Koray Kavukcuoglu. Learning word embeddings efficiently with noise-contrastive estimation. 2013.
  20. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: A method for automatic evaluation of machine translation. 2002.
  21. Stephan Peitz, Joern Wuebker, Markus Freitag, and Hermann Ney. The RWTH Aachen German-English machine translation system for WMT 2014. 2014.
  22. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. 2014.