Attention-based Vocabulary Selection for NMT Decoding


Baskaran Sankaran and Markus Freitag and Yaser Al-Onaizan
IBM T. J. Watson Research Center
1101 Kitchawan Rd, Yorktown Heights, NY 10598

Neural Machine Translation (NMT) models usually use large target vocabularies to capture most of the words in the target language. The vocabulary size is a major factor when decoding new sentences, as the final softmax layer normalizes over all possible target words. To address this problem, it is common to restrict the target vocabulary with candidate lists based on the source sentence. Usually, the candidate lists are built from an external word-to-word aligner, phrase table entries, or the most frequent words. In this work, we propose a simple yet novel approach that learns candidate lists directly from the attention layer during NMT training. The candidate lists are highly optimized for the current NMT model and do not require any external computation of the candidate pool. We show significant decoding speedups compared with using the entire vocabulary, without losing any translation quality, for two language pairs.



1 Introduction

Neural Machine Translation (NMT) has become very popular in recent years (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2014), as it reaches performance comparable to or even better than statistical machine translation (SMT) models (Jean et al., 2015; Luong et al., 2015). With this success, attention has shifted towards making NMT more practical. Compared to traditional phrase-based machine translation engines, NMT decoding tends to be significantly slower. One of the most expensive parts of NMT is the softmax calculation over the full target vocabulary. Recent work shows that we can restrict the softmax to a subset of likely candidates given the source. The candidates are based on a dictionary built from Viterbi word alignments, on matching phrases in a phrase table, or on the most frequent words in the target language. In this work, we present a novel approach that extracts the candidates during training from the attention weights within the network. One advantage is that we do not need an external tool to determine the candidates, and we can also generate a reliable candidate pool for NMT systems whose vocabularies are based on subword units. The risk with Viterbi alignments is that we could miss target words that are not fully explained by a Viterbi word alignment. Since we learn the candidate list together with the model parameters, it is highly adapted to the current model, which makes it very unlikely that a high-scoring word is missed due to the candidate restriction. We show that it is sufficient to use only the top 100 candidates per source word, speeding up decoding by up to a factor of 7 without losing any translation performance.

2 Neural Machine Translation

The attention-based NMT (Bahdanau et al., 2014) is an encoder-decoder network. The encoder employs a bi-directional RNN to encode the source sentence x = (x_1, ..., x_m) into a sequence of hidden states h = (h_1, ..., h_m), where m is the length of the source sentence. Each h_i is a concatenation of a left-to-right hidden state h→_i and a right-to-left hidden state h←_i:

h_i = [h←_i ; h→_i],   h→_i = f→(x_i, h→_{i-1}),   h←_i = f←(x_i, h←_{i+1})

where f→ and f← are two gated recurrent units (GRUs) introduced by Cho et al. (2014).

Given the encoded states h, the decoder predicts the target translation y = (y_1, ..., y_n) by maximizing the conditional log-probability of the correct translation, where n is the length of the target. At each time step t, the probability of each word y_t from a target vocabulary V_y is:

p(y_t | y_1, ..., y_{t-1}, x) = softmax(W_o · g(y_{t-1}, s_t, H_t))

where g is a two-layer feed-forward network (with t_t as its intermediate state) over the embedding of the previous target word (y_{t-1}), the decoder hidden state (s_t), and the weighted sum of encoder states (H_t). A single feed-forward layer W_o then projects t_t to the target vocabulary and applies softmax to predict the probability distribution over the output vocabulary.
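To illustrate why restricting the softmax pays off, the following minimal pure-Python sketch contrasts a full-vocabulary softmax with one restricted to a candidate list: only the candidate rows of the output projection are computed and the softmax renormalizes over that subset. All names (`output_distribution`) and the toy weights are our own illustration, not the paper's implementation.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of scores.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def output_distribution(t_t, W_o, candidates=None):
    """Project the intermediate state t_t (length d) through W_o (|V_y| x d).
    If `candidates` (a list of vocabulary ids) is given, only those rows are
    projected and the softmax renormalizes over the subset; this is the cost
    saving exploited by vocabulary selection."""
    rows = range(len(W_o)) if candidates is None else candidates
    logits = [sum(w * x for w, x in zip(W_o[i], t_t)) for i in rows]
    return softmax(logits)

# Toy vocabulary of 6 words, hidden size 3 (illustrative values only).
W_o = [[0.1, 0.2, 0.3], [0.5, -0.1, 0.0], [0.2, 0.2, 0.2],
       [-0.3, 0.4, 0.1], [0.0, 0.0, 1.0], [0.7, 0.1, -0.2]]
t_t = [1.0, 0.5, -0.5]
p_full = output_distribution(t_t, W_o)          # normalizes over all 6 words
p_cand = output_distribution(t_t, W_o, [0, 4])  # normalizes over 2 candidates
```

With a real 40K-80K vocabulary the candidate projection touches only a few dozen rows per source word, which is where the decoding speedup comes from.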

We compute s_t with a two-layer GRU as:

s'_t = GRU_1(y_{t-1}, s_{t-1}),   s_t = GRU_2(s'_t, H_t)

where s'_t is an intermediate state. The two GRU units GRU_1 and GRU_2, together with the attention, constitute the conditional GRU layer. H_t is computed as:

H_t = Σ_{i=1}^{m} α_{t,i} · h_i

(This conditional GRU is the same as the decoder GRU introduced in session 2 of the dl4mt-tutorial.)

The attention model is a two-layer feed-forward network: the first layer computes an intermediate state r_{t,i} = tanh(W·s_{t-1} + U·h_i), and another layer converts it into a real number e_{t,i} = v·r_{t,i}. The alignment weights α_{t,i} are then computed as:

α_{t,i} = exp(e_{t,i}) / Σ_j exp(e_{t,j})

The α_{t,i} are soft alignment probabilities, denoting the probability of aligning the target word at timestep t to source position i.
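The attention computation above can be sketched in a few lines of pure Python; this is a minimal illustration with toy dimensions, assuming the standard Bahdanau-style scoring (all parameter values are made up, and the decoder/encoder dimensions are kept equal for brevity).

```python
import math

def attention(s_prev, enc_states, W, U, v):
    """Compute alignment weights alpha_{t,i} and context H_t.
    s_prev     : decoder state s_{t-1}, a list of floats (length d)
    enc_states : list of encoder states h_i (each length d)
    W, U       : d x d weight matrices (lists of rows)
    v          : length-d vector turning the tanh layer into a scalar e_{t,i}
    """
    def matvec(M, x):
        return [sum(m * xi for m, xi in zip(row, x)) for row in M]

    Ws = matvec(W, s_prev)
    scores = []
    for h in enc_states:
        Uh = matvec(U, h)
        r = [math.tanh(a + b) for a, b in zip(Ws, Uh)]   # intermediate r_{t,i}
        scores.append(sum(vi * ri for vi, ri in zip(v, r)))  # e_{t,i}
    m = max(scores)
    exps = [math.exp(e - m) for e in scores]
    total = sum(exps)
    alpha = [e / total for e in exps]                    # soft alignment probs
    H_t = [sum(a * h[j] for a, h in zip(alpha, enc_states))
           for j in range(len(enc_states[0]))]           # weighted context
    return alpha, H_t

# Toy example: d = 2, three encoder positions.
W = [[1.0, 0.0], [0.0, 1.0]]
U = [[0.5, 0.0], [0.0, 0.5]]
v = [1.0, -1.0]
s_prev = [0.2, -0.1]
enc_states = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
alpha, H_t = attention(s_prev, enc_states, W, U, v)
```

The α values returned here are exactly the per-position probabilities that the next section accumulates into soft alignments.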

3 Our Approach

In this section we describe our approach for learning alignments from the attention.

3.1 Learning Alignments from Attention

At each time step of the decoder, the attention mechanism determines which source word to attend to based on the previous target word y_{t-1}, the decoder hidden state s_{t-1}, and the source annotations h_i. This attention implicitly captures the alignment between the target word to be generated at this time step and the source words. We formalize this implicit notion into soft alignments by aligning the generated target word to the source word(s) attended to in the current timestep. The strength of an alignment is determined by the magnitude of the attention weight α_{t,i}. While the attention weights are probabilities, we treat them as a fractional-count distribution over source words for the current target word.

Our method simply accumulates these (normalized) attention weights into a matrix as training progresses. A naive implementation would need a dense matrix of dimensions |V_s| × |V_t| (source vocabulary × target vocabulary), which would be infeasible in the typical memory available. Instead, we maintain a sparse matrix to keep track of these raw fractional counts, where we only update the cells touched by the alignments observed in each minibatch. We further delay the accumulation of alignments until after the first epoch of training, to ensure that the network can already produce reasonably good alignments. Finally, we employ a threshold and only record alignments whose attention weights are larger than this threshold. This filters out a large number of spurious alignments, especially for frequent words, which are unlikely to be of any use.
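A minimal sketch of this accumulation step (our own illustrative code, not the authors' implementation): attention weights above a threshold are added into a sparse dictionary keyed by (source word, target word) pairs, so only cells actually touched by a minibatch are stored.

```python
from collections import defaultdict

def accumulate_alignments(counts, src_sentence, tgt_sentence, attn, threshold=0.1):
    """Accumulate attention weights into a sparse fractional-count matrix.
    counts    : dict mapping (src_word, tgt_word) -> fractional count
    attn      : attn[t][i] is the attention weight on source position i
                when producing target position t
    threshold : weights at or below this are dropped as spurious alignments
    """
    for t, tgt_word in enumerate(tgt_sentence):
        for i, src_word in enumerate(src_sentence):
            w = attn[t][i]
            if w > threshold:
                counts[(src_word, tgt_word)] += w

# Toy sentence pair with made-up attention weights.
counts = defaultdict(float)
src = ["das", "haus"]
tgt = ["the", "house"]
attn = [[0.9, 0.1],    # "the"  attends mostly to "das"
        [0.05, 0.95]]  # "house" attends mostly to "haus"
accumulate_alignments(counts, src, tgt, attn, threshold=0.1)
```

Only the two confident links survive the threshold here; the weak cross-links (0.1 and 0.05) are never stored, which keeps the matrix sparse.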

It should be noted that the idea of treating attention weights as soft alignments is already used in certain cases during decoding. For example, it is standard practice to obtain alignments for the UNK tokens in decoder post-processing, in order to replace them with appropriate target translations using external alignments such as IBM Model-1 (Jean et al., 2015; Luong et al., 2015; Mi et al., 2016; L'Hostis et al., 2016). Some of these works, notably Jean et al. (2015) and Mi et al. (2016), rely on alignments generated by external aligners to identify the candidate softmax vocabulary for each training sentence. Additionally, we also propose a way to use these alignments during training.

An attractive aspect of our approach is that the alignments could be learned even for previously trained models by continuing the training for one or two epochs. As we show later, it is usually sufficient to learn alignments by accumulating the attention weights for just one additional epoch (see Section 4.2).

3.2 Vocabulary Selection for Decoding

As mentioned earlier, vocabulary selection to speed up decoding has been widely employed in NMT (Jean et al., 2015; Mi et al., 2016, inter alia). In this work, we use the alignments that are learned during training for vocabulary selection. It should be noted that the accumulated attention weights are fractional counts and not probabilities. Secondly, these counts as learned from the attention characterize target-to-source alignments, whereas during decoding we are interested in the target vocabulary given the source words. So, we first normalize the count matrix along the target axis and then use the normalized distribution to obtain the top-K target words for each source token.
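This normalization and top-K extraction can be sketched as follows (illustrative code with toy counts; the function and variable names are our own).

```python
from collections import defaultdict

def candidate_lists(counts, k):
    """Normalize accumulated fractional counts per source word and return
    the top-k target candidates for each source word.
    counts : dict mapping (src_word, tgt_word) -> fractional count
    """
    per_source = defaultdict(dict)
    for (src, tgt), c in counts.items():
        per_source[src][tgt] = c
    cands = {}
    for src, tgts in per_source.items():
        total = sum(tgts.values())  # normalize along the target axis
        ranked = sorted(tgts, key=lambda t: tgts[t] / total, reverse=True)
        cands[src] = ranked[:k]
    return cands

# Toy accumulated counts from training.
counts = {("haus", "house"): 5.0, ("haus", "home"): 3.0,
          ("haus", "building"): 0.5, ("das", "the"): 9.0}
cands = candidate_lists(counts, k=2)
```

At decoding time, the candidate vocabulary of a sentence is simply the union of `cands[src]` over its source tokens.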

This obviates any need for external tools for generating alignments and also simplifies the decoding pipeline. Following the findings of L'Hostis et al. (2016), we rely only on the learned alignments and do not use the top-K most frequent words (it is typical to set K to 2000 (Jean et al., 2015; Mi et al., 2016)) or any other resource for decoding. Our experiments (see Section 4.2) show that the alignments learned during NMT training are sufficient and we do not lose translation performance.

4 Experiments

We test our vocabulary selection approach on two language pairs: German→English and Italian→English. The alignments from which we extract the candidate lists are learned either during the full training (from scratch) or only during a final epoch (continued training).

4.1 Setup

For the German→English translation task, we train an NMT system on the WMT 2016 training data (Bojar et al., 2016) (3.9M parallel sentences) and use newstest-2014 and newstest-2015 as our dev and test sets. For the Italian→English translation task, we train our system on a large data set of 20 million parallel sentences, drawn from varied resources such as Europarl, news-commentary, Wikipedia, and OpenSubtitles, among others. As test set, we use newstest-2009.

In all our experiments, we use our in-house attention-based NMT implementation, which is similar to Bahdanau et al. (2014). We use sub-word units extracted by byte-pair encoding (Sennrich et al., 2015) instead of words, which shrinks the vocabulary to 40K sub-word symbols for both source and target. For comparison, we also run a word model, for German→English only. We limit the word vocabularies to the top 80K most frequent words for both source and target; words not in these vocabularies are mapped to a single unknown token, and the resulting OOV rate of the 80K word-based model on the dev and test sets is low. During translation, we use the alignments (learned from the attention mechanism) to replace unknown tokens, either with potential targets (obtained from an IBM Model-1 trained on the parallel data) or with the source word itself (if no target was found). For all experiments, we use an embedding dimension of 620 and fix the recurrent GRU layers at 1000 cells each. For training, we use Adam (Kingma and Ba, 2014) to update the model parameters with a mini-batch size of 80. The training data is shuffled after each epoch.
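The UNK-replacement step described above can be sketched as follows: each UNK token is aligned to its most-attended source position, and we emit either a lexicon translation of that source word or the source word itself. Here `lexicon` stands in for the IBM Model-1 table, and all tokens and attention values are made up for illustration.

```python
def replace_unk(target_tokens, source_tokens, attn, lexicon):
    """Replace UNK tokens using the attention-derived alignment.
    attn[t][i] is the attention weight on source position i when the
    decoder produced target position t; lexicon maps source words to
    a preferred target translation (stand-in for an IBM Model-1 table).
    """
    out = []
    for t, tok in enumerate(target_tokens):
        if tok == "<unk>":
            # Most-attended source position for this target step.
            i = max(range(len(source_tokens)), key=lambda j: attn[t][j])
            src = source_tokens[i]
            out.append(lexicon.get(src, src))  # fall back to the source word
        else:
            out.append(tok)
    return out

# Toy hypothesis with one UNK and made-up attention weights.
src = ["das", "Schloss", "ist", "alt"]
hyp = ["the", "<unk>", "is", "old"]
attn = [[0.80, 0.10, 0.05, 0.05],
        [0.10, 0.80, 0.05, 0.05],
        [0.05, 0.05, 0.80, 0.10],
        [0.05, 0.05, 0.10, 0.80]]
lexicon = {"Schloss": "castle"}
fixed = replace_unk(hyp, src, attn, lexicon)  # -> ["the", "castle", "is", "old"]
```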

For evaluation, we compare the Bleu and Ter scores of the baseline decoding and the vocabulary-selection decoding. For the baseline decoding, we use full search without the candidate vocabulary from the learned alignments. We further compare candidate lists of different sizes, limiting the maximum number of target words per source word to 20, 50, 100, or 200. For the continued-training setup, we train models to convergence without learning alignments, and then train for one more epoch to learn the alignments. For decoding speed, we report relative speedup with respect to the baseline, averaged across 10 runs.

4.2 German→English

The results for the German→English translation task are shown in Table 1. Applying vocabulary selection during decoding speeds up decoding by up to 7x compared to the baseline without any vocabulary selection. In our experiments across both languages, a candidate list with the 100 best target candidates per source word seems to be a good trade-off, giving a substantial speedup (3.1x for BPE and about 7x for word models) without losing any performance. The average number of candidates per source word in this case is just a tiny fraction of the full target vocabulary used in the baseline setting.

One interesting trend we note is that continued training turns out to be extremely competitive with learning alignments throughout NMT training over several epochs. It should be stressed again that we ran the continued-training setup for only one epoch to learn alignments. This suggests that the attention weights are very stable once the model is reasonably trained. While the word-based models are slightly worse than the BPE models, we observe the same trends noted earlier in terms of speedup and number of candidates.

For both BPE and word-based models, we also compare our vocabulary selection with one generated from the traditional IBM models typically used. We employ vocabulary selection from alignments generated by fast_align (Dyer et al., 2013), a reparameterized model based on IBM Model 2. We limit the number of top-K target words to 100 to match our chosen setting. As can be seen in Table 1 (last rows of the BPE and Words blocks), the Bleu and Ter scores are similar to those of our approach for the same top-K setting. However, the average number of candidates from fast_align is larger: 42.3 for BPE and 36.7 for words, compared to 38.6 and 28.3 for our continued-training lists. We plan to compare the target candidates of both approaches in future work, and we hope this will give us insights to further improve our approach.

Model/       Alignments         Cand list  Speedup  Avg cands  Newstest-2015
Vocabulary   Learning           Size       Gain     per word   Bleu   Ter
BPE (40K)    No alignments      -          -        34,494     27.9   52.9
             From scratch       20         3.3x     11.3       27.1   53.7
                                50         3.2x     24.5       27.5   53.3
                                100        3.1x     43.0       27.8   53.0
                                200        2.9x     74.6       27.8   53.0
             Continue training  20         3.2x     10.3       27.1   53.8
                                50         3.2x     22.3       27.5   53.3
                                100        3.1x     38.6       27.7   53.1
                                200        3.0x     66.4       27.8   53.0
             Fast-align         100        3.1x     42.3       27.9   52.9
Words (80K)  No alignments      -          -        80,000     26.2   54.3
             From scratch       20         7.3x     7.3        26.1   54.2
                                50         7.2x     17.4       26.3   54.0
                                100        6.9x     31.8       26.5   53.9
                                200        6.7x     56.8       26.5   53.9
             Continue training  20         7.3x     6.9        26.2   54.2
                                50         7.2x     16.0       26.4   54.0
                                100        7.0x     28.3       26.6   53.9
                                200        6.8x     49.5       26.6   53.9
             Fast-align         100        6.9x     36.7       26.7   54.0
Table 1: Vocabulary selection for decoding: German→English test set (newstest-2015) Bleu and Ter. Speedup gain is the relative gain compared to the baseline decoding without a candidate list (computed on CPU). Average candidates refers to the average number of unique target vocabulary items per source word.
Figure 1: Effect of alignments learned for different numbers of epochs in the German→English setting (dev set: newstest-2014). Decoding is performed by fixing the NMT model and changing only the alignments, which are obtained from the training epoch given on the x-axis. Panel (a) on the left shows Bleu; panel (b) on the right shows the average number of unique target candidates (plotted in broken lines).

Figure 1 shows the effect of learning alignments for different numbers of epochs on the German→English dev set (newstest-2014). In these experiments, we fix the NMT model and change only the source-target word distribution used for vocabulary selection during decoding. The left plot shows the effect on Bleu, while the right plot shows the average number of candidates per source word. The word distribution from early epochs mainly hurts the smaller candidate lists, where Bleu increases by one point as training progresses. For larger candidate sizes, the effect on Bleu is only marginal. As for the number of candidates, the curve is flat for small candidate list sizes and starts to show some variance for 100 or more candidates.

Threshold  Avg cands  Newstest-2014   Newstest-2015   Density/
           per word   Bleu    Ter     Bleu    Ter     Size (MB)
           43.73      27.25   54.12   27.76   52.98   3.054 / 172.6
           43.68      27.25   54.12   27.77   53.00   1.928 / 109.0
           40.22      27.27   54.06   27.70   53.09   1.011 / 57.2
           39.02      27.23   54.16   27.76   53.05   0.809 / 45.7
Table 2: German→English: effect of different thresholds for accumulating alignments (with candidate list size = 100). Density refers to the percentage of non-zero entries in the full source-target vocabulary matrix, and size indicates the raw size of the non-zero entries (in MB).

Table 2 shows the effect of different thresholds for accumulating the alignments. The threshold can be used to strike a balance between the desired coverage of the source-target word distribution and avoiding spurious source-target links. We observe that small thresholds result in similar performance levels (shown for both dev and test sets), with a smaller candidate vocabulary as the threshold increases. We also noticed that at larger thresholds the accumulated count matrices lacked variety in the source-word distributions, leading to poor coverage. This is to be expected because, for such large thresholds, alignments are accumulated only when the attention exhibits a peaked distribution, i.e. when it is strongly confident about some particular source-target link. We believe small thresholds would be practical for most data sets and language pairs.

4.3 Italian→English

Empirical results for the Italian→English translation task are shown in Table 3. We can speed up decoding by a factor of 3.6x to 3.9x by using a candidate list coming from the attention of our NMT model. The sweet-spot candidate size is again 100: we speed up decoding by a factor of 3.7 while losing only 0.1 Bleu. Continued training (in which we learn the candidate list only in the final epoch) works as well as the fully trained candidate list. The average number of candidates per word is even smaller than with the fully trained candidate list, which makes decoding a little faster still.

Alignments         Cand list  Speedup  Avg cands  Newstest-2009
Learning           Size       Gain     per word   Bleu   Ter
No cand list       -          -        33,497     29.7   52.7
From scratch       20         3.9x     10.6       28.6   52.6
                   50         3.8x     23.6       29.2   53.5
                   100        3.7x     42.9       29.6   53.1
                   200        3.6x     76.8       29.6   53.0
Continue training  20         3.9x     10.1       28.6   52.6
                   50         3.8x     22.4       29.2   53.5
                   100        3.7x     40.9       29.5   53.1
                   200        3.6x     72.7       29.7   53.0
Table 3: Results for the Italian→English translation task.

4.4 Dynamic Vocabulary Selection during Training

As we accumulate the attention weights into a sparse alignment matrix, we can also exploit it to dynamically select the target vocabulary during training. This would be exactly the same as large-vocabulary NMT, but unlike other approaches we would not rely on external resources or tools such as Model-1 alignments, phrase tables, etc.

We now explain the recipe for doing this. We first normalize the sparse matrix and obtain the top-K target tokens for each source word, as explained in Section 3.2. (In order to avoid stale probabilities, we normalize and trim the alignments at the beginning of each epoch for dynamic vocabulary selection.)

We begin NMT training without any vocabulary selection and train with the entire target vocabulary during the initial stages. We switch to vocabulary-selection mode once the alignment matrix is seeded with initial alignments from at least one full sweep over the data. Given a mini-batch of source sentences, we identify the top-K target words for each source word and use the set of all unique target words as the candidate vocabulary for that batch.


The dynamic vocabulary can then be used as the target vocabulary to train the current batch. Dynamic selection for each mini-batch during training could add to the computational cost and potentially slow training down. One simple solution would be to do an offline vocabulary selection, based on the alignments, at the beginning of each epoch. We leave this for future experimentation.
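A minimal sketch of this per-batch selection (our own illustrative code, not the authors' implementation): the batch vocabulary is the union of the candidate lists of all source words in the batch, plus a few special tokens.

```python
def batch_vocabulary(batch, cands, always=("<s>", "</s>", "<unk>")):
    """Union of top-k candidate lists over all source words in a mini-batch.
    batch  : list of source sentences (each a list of tokens)
    cands  : dict mapping source word -> list of top-k target candidates
    always : special tokens that must always be in the target vocabulary
    """
    vocab = set(always)
    for sentence in batch:
        for src_word in sentence:
            vocab.update(cands.get(src_word, ()))
    return vocab

# Toy candidate lists and a mini-batch of two sentences.
cands = {"das": ["the"], "haus": ["house", "home"], "ist": ["is"]}
batch = [["das", "haus"], ["das", "ist"]]
vocab = batch_vocabulary(batch, cands)
```

In practice the reference target words of the batch would also need to be added to this set, so that the gold labels always fall inside the restricted softmax during training.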

5 Related Work

Vocabulary selection has been studied widely in the context of NMT decoding (Luong et al., 2015; Mi et al., 2016; L'Hostis et al., 2016). All of these works are inspired by the early work of Jean et al. (2015) and use some kind of external strategy (based on word alignments, phrase tables, co-occurrence counts, etc.) for vocabulary selection. In contrast, we use the alignments that are learned from the attention weights early in training to select the target vocabulary. The other difference is that under these earlier approaches the selected vocabulary remains fixed, and can become stale, throughout training, whereas in this work the alignments learned in the previous epoch can be used to select the target vocabulary for the next epoch.

Hierarchical softmax (Morin and Bengio, 2005; Mnih and Hinton, 2009) is a well-known way to reduce the cost of the softmax over a large number of target words. It uses a hierarchical binary-tree representation of the output layer, with all words as its leaves. It allows exponentially faster computation of word probabilities and their gradients, but the predictive performance of the resulting model depends heavily on the tree used, which is often constructed heuristically. By relaxing the constraint of a binary structure, Le et al. (2011) and Baltescu and Blunsom (2014) introduce a structured output layer with an arbitrary tree structure constructed from word clustering. All these methods speed up both model training and evaluation considerably, but depend heavily on the quality of the word clusters. NMT experiments with hierarchical softmax showed improvements for smaller datasets of about 2M sentence pairs (Baltescu and Blunsom, 2014).

6 Summary

We presented a simple approach for directly learning source-target alignments from the attention layer in Neural Machine Translation. We showed that these alignments can be used for vocabulary selection in decoding, without requiring any external resources such as aligners or phrase tables. We recommend using a small accumulation threshold and the top 100 candidates per source word for good performance and fast decoding for most language pairs and datasets. Our experiments showed decoding speedups of up to a factor of 7 across different settings. We also showed how this approach could be used for dynamic vocabulary selection during training.


References

  • Bahdanau et al. (2014) D. Bahdanau, K. Cho, and Y. Bengio. 2014. Neural Machine Translation by Jointly Learning to Align and Translate. ArXiv e-prints.
  • Baltescu and Blunsom (2014) Paul Baltescu and Phil Blunsom. 2014. Pragmatic neural language modelling in machine translation. arXiv preprint arXiv:1412.7119 .
  • Bojar et al. (2016) Ondrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, et al. 2016. Findings of the 2016 Conference on Machine Translation (WMT16). Proceedings of WMT.
  • Cho et al. (2014) KyungHyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. CoRR abs/1409.1259.
  • Dyer et al. (2013) Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of IBM Model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Atlanta, Georgia, pages 644–648.
  • Jean et al. (2015) Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of ACL. Beijing, China, pages 1–10.
  • Kalchbrenner and Blunsom (2013) Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Seattle.
  • Kingma and Ba (2014) Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 .
  • Le et al. (2011) Hai-Son Le, Ilya Oparin, Alexandre Allauzen, Jean-Luc Gauvain, and François Yvon. 2011. Structured output layer neural network language model. In Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on. IEEE, pages 5524–5527.
  • L’Hostis et al. (2016) Gurvan L’Hostis, David Grangier, and Michael Auli. 2016. Vocabulary selection strategies for neural machine translation. CoRR abs/1610.00072.
  • Luong et al. (2015) Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In Proceedings of ACL. Beijing, China, pages 11–19.
  • Mi et al. (2016) Haitao Mi, Zhiguo Wang, and Abe Ittycheriah. 2016. Vocabulary manipulation for neural machine translation. arXiv preprint arXiv:1605.03209 .
  • Mnih and Hinton (2009) Andriy Mnih and Geoffrey E Hinton. 2009. A scalable hierarchical distributed language model. In Advances in neural information processing systems. pages 1081–1088.
  • Morin and Bengio (2005) Frederic Morin and Yoshua Bengio. 2005. Hierarchical probabilistic neural network language model. In Aistats. Citeseer, volume 5, pages 246–252.
  • Sennrich et al. (2015) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909 .
  • Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada. pages 3104–3112.