Guided Alignment Training for Topic-Aware Neural Machine Translation
In this paper, we propose an effective way to bias the attention mechanism of a sequence-to-sequence neural machine translation (NMT) model towards the well-studied statistical word alignment models. We show that our novel guided alignment training approach improves translation quality on real-life e-commerce texts consisting of product titles and descriptions, overcoming the problems posed by many unknown words and a large type/token ratio. We also show that meta-data associated with input texts, such as topic or category information, can significantly improve translation quality when used as an additional signal to the decoder part of the network. With both novel features, the BLEU score of the NMT system on a product title set improves from 18.6% to 21.3%. Even larger MT quality gains are obtained through domain adaptation of a general-domain NMT system to e-commerce data. The developed NMT system also performs well on the IWSLT speech translation task, where an ensemble of four variant systems outperforms the phrase-based baseline by 2.1% BLEU absolute.
NMT systems were shown to reach state-of-the-art translation quality on tasks established in the MT research community, such as the IWSLT speech translation task. In this paper, we also apply the NMT approach to e-commerce data: user-generated product titles and descriptions for items put on sale. Such data are very different from newswire and other texts typically considered in the MT research community. Titles in particular are short (usually fewer than 15 words) and contain many brand names, which often do not have to be translated, but also product feature values and specific abbreviations and jargon. Also, the vocabulary size is very large due to the large variety of product types, and many words are observed in the training data only once. At the same time, these data come with additional meta-information about the item (e.g. a product category such as clothing or electronics), which can be used as context to perform topic/domain adaptation for improved translation quality.
At first glance, established phrase-based statistical MT approaches are well-suited for e-commerce data translation. In a phrase-based approach, singleton but unambiguous words and phrases are usually translated correctly. Also, since the alignment between source and target words is available, it is possible to transfer certain entities from the source sentence to the generated target sentence “in-context” without translating them. Such entities include numbers, product specifications such as “5S” or “.35XX”, and brand names such as “Samsung” or “Lenovo”. In training, these entities can be replaced with placeholders to reduce the vocabulary size.
However, NMT approaches are more powerful at capturing context beyond phrase boundaries and were shown to better exploit available training data. They can also successfully adapt to a domain for which only a limited amount of parallel training data is available. Previous research has shown that it is difficult to obtain translation quality improvements with topic adaptation in phrase-based SMT because of data sparseness and a large number of topics (e.g. corresponding to product categories), which may or may not be relevant for disambiguating between alternative translations or solving other known MT problems. In contrast, we expected NMT to better solve the topic adaptation problem by using the additional meta-information as an extra signal in the neural network. To the best of our knowledge, this is the first work in which additional information about the text topic is embedded into the vector space and used to directly influence NMT decisions.
In an NMT system, the attention mechanism is important both for decoding and for restoring placeholder content and inserting unknown words at the right positions in the target sentence. To improve the estimation of the soft alignment, we propose to use the Viterbi alignments of IBM model 4 as an additional source of knowledge during NMT training. The additional alignment information biases the attention mechanism of the system towards the Viterbi alignment.
This paper is structured as follows. After an overview of related NMT work in Section 2, we propose a novel approach in Section 3 to improve NMT translation quality by combining two worlds: phrase-based SMT with its statistical word alignment, and the neural MT attention mechanism. In Section 4, we describe in more detail how topic information can benefit NMT. Sections 5 and 6 describe our bootstrapped training and domain adaptation approaches. Experimental results are presented in Section 7. The paper concludes with a discussion and outlook in Section 8.
Neural machine translation mainly relies on recurrent neural networks to capture long-term dependencies in natural language. An NMT system is trained end-to-end to maximize the conditional probability of a correct translation given a source sentence. When using an attention mechanism, large vocabularies, and some other techniques, NMT is reported to achieve translation quality comparable to state-of-the-art phrase-based translation systems. Most NMT approaches are based on the encoder-decoder architecture, in which the input sentence is first encoded into a fixed-length representation, from which a recurrent neural network decoder generates the sequence of target words. Since a fixed-length representation cannot carry enough information for decoding, a more sophisticated approach using an attention mechanism was proposed, in which the neural network learns to attend to different parts of the source sentence to improve translation quality. Since the source and target language vocabularies of a neural network have to be limited, the rare word problem deteriorates translation quality significantly. The rare word replacement technique based on soft alignment offers a promising solution to this problem. Both the encoder-decoder architecture and the insertion of unknown words into NMT output rely heavily on the quality of the attention mechanism, which thus becomes the crucial part of NMT. Some research has been done to refine it, including global and local attention-based models, as well as biases, fertility, and symmetric bilingual structure for improving the attention mechanism.
The research on topic adaptation most closely related to our work added the proposed topic features to the log-linear model of a phrase-based system. Here, we use the topic information as part of the input to the NMT system. Another difference is that we primarily work with human-labeled topics, whereas in the related work the topic distribution is inferred automatically from data.
When translating e-commerce content, we face a situation in which only a few product titles and descriptions have been manually translated, resulting in a small in-domain parallel corpus, while a large general-domain parallel corpus is available. In such situations, domain adaptation techniques have been used both in phrase-based systems and in NMT. In addition, when diverse NMT models using different features and techniques are trained, an ensemble decoder can combine them into a more robust model. This approach was used to outperform the state-of-the-art phrase-based system in the WMT 2015 evaluation.
3 Guided Alignment Training
When using attention-based NMT, we observed that the attention mechanism sometimes fails to yield appropriate soft alignments, especially with increasing input sentence length and many out-of-vocabulary words or placeholders. In translation, this can lead to disordered output and word repetition.
In contrast to a statistical phrase-based system, the NMT decoder does not have explicit information about the candidates for the current word, so at each recurrent step the attention weights rely only on the previously generated word and the decoder/encoder states. The target word itself is not used to compute its attention weights. If the previous word is an out-of-vocabulary (OOV) word or a placeholder, then the information it provides for calculating the attention weights of the current word is neither sufficient nor reliable. This leads to incorrect target word prediction, and the error propagates to future steps due to the feedback loop. The problem is even larger for e-commerce data, where the number of OOVs and placeholders is considerably higher.
To improve the estimation of the soft alignment, we propose to use the Viterbi alignments of IBM model 4 as an additional source of knowledge during NMT training. We first extract the Viterbi alignments as trained with the GIZA++ toolkit (Och and Ney), then use them to bias the attention mechanism. Our approach is to optimize both the decoder cost and the divergence between the attention weights and the alignment connections generated by the statistical alignment. The multi-objective optimization task is then expressed as a single-objective one by means of a linear combination of two loss functions: the original decoder loss and the new alignment-guided loss.
NMT maximizes the conditional log-likelihood of the target sentence $e$ given the source sentence $f$:

$$L_{dec}(\theta) = -\frac{1}{N}\sum_{n=1}^{N} \log p_\theta(e_n \mid f_n)$$

where $(e_n, f_n)$ refers to the $n$-th training sentence pair, and $N$ denotes the total number of sentence pairs in the training corpus. In this paper, we call the negative log-likelihood the decoder cost, to distinguish it from the alignment cost. When using the encoder-decoder architecture, the conditional probability can be written as:

$$p(e \mid f) = \prod_{i=1}^{I} p(e_i \mid e_{<i}, f) = \prod_{i=1}^{I} g(e_{i-1}, s_i, c)$$

where $I$ is the length of the target sentence and $J$ is the length of the source sentence, $c$ is a fixed-length vector encoding the source sentence, $s_i$ is the hidden state of the RNN at time step $i$, and $g$ is a non-linear function that approximates the word probability. If an attention mechanism is used, the fixed-length representation $c$ is replaced by a variable-length representation $c_i$: a weighted summary over a sequence of annotations $(h_1, \ldots, h_J)$, where each $h_j$ contains information about the whole input sentence, but with a strong focus on the parts surrounding the $j$-th source word. The context vector $c_i$ can then be defined as:

$$c_i = \sum_{j=1}^{J} \alpha_{ij} h_j$$

where the weight $\alpha_{ij}$ for each annotation $h_j$ is computed by normalizing a score function with the softmax:

$$\alpha_{ij} = \frac{\exp(\mathrm{score}(s_{i-1}, h_j))}{\sum_{j'=1}^{J} \exp(\mathrm{score}(s_{i-1}, h_{j'}))}$$
Here, $\mathrm{score}(s_{i-1}, h_j)$ calculates a score for the $i$-th target word aligning to the $j$-th word in the source sentence; the alignment model measures the similarity between the previous decoder state $s_{i-1}$ and the bi-directional encoder state $h_j$. In our experiments, we took the idea of the dot global attention model, but kept the order of the original attention model: we calculate the dot product of the encoder state with the previous decoder state instead of the current decoder state, i.e. $\mathrm{score}(s_{i-1}, h_j) = h_j^\top s_{i-1}$. We observed that this dot attention model works better than concatenation in our experiments.
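As a small sketch (dimensions hypothetical, NumPy instead of the authors' Blocks implementation), the dot attention variant scores each encoder annotation against the previous decoder state and normalizes with a softmax to obtain the attention weights and context vector:

```python
import numpy as np

def dot_attention_context(h, s_prev):
    """Compute attention weights and context vector.

    h      -- encoder annotations, shape (J, d): one row per source word
    s_prev -- previous decoder state, shape (d,)
    Scores are dot products of each annotation with the *previous*
    decoder state, as in the dot attention variant described above.
    """
    scores = h @ s_prev                             # shape (J,)
    scores -= scores.max()                          # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()   # softmax over source positions
    context = alpha @ h                             # weighted sum of annotations
    return alpha, context

# toy example: 3 source positions, dimension 4
h = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.],
              [0., 0., 1., 0.]])
s_prev = np.array([2., 0., 0., 0.])
alpha, context = dot_attention_context(h, s_prev)
```

The first source position, being most similar to the previous decoder state, receives the largest attention weight.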
We introduce an alignment cost to penalize the attention mechanism when it is not consistent with the statistical word alignment. We represent the pre-trained statistical alignment by a matrix $A$, where $A_{ij}$ refers to the probability of the $i$-th word in the target sentence being aligned to the $j$-th word in the source sentence. In case multiple source words align to the same target word, we normalize each row to make sure that $\sum_{j} A_{ij} = 1$. In attention-based NMT, the matrix of attention weights $\alpha$ has the same shape and semantics as $A$. We propose to penalize NMT based on the divergence of the two matrices during training; the divergence function can, e.g., be the cross entropy

$$L_{align} = -\frac{1}{I}\sum_{i=1}^{I}\sum_{j=1}^{J} A_{ij} \log \alpha_{ij}$$

or the mean squared error. The matrix $A$ comes from the statistical alignment and is fed into our guided-alignment NMT as an additional input to penalize the attention mechanism.
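Building the matrix $A$ from hard Viterbi alignment links can be sketched as follows (function name and link format hypothetical; real links would be parsed from GIZA++ output). Rows are target positions, and a row with several aligned source words is normalized to sum to 1:

```python
import numpy as np

def alignment_matrix(links, tgt_len, src_len):
    """Build the supervision matrix A from hard alignment links.

    links -- iterable of (src_pos, tgt_pos) pairs, 0-based,
             e.g. parsed from GIZA++ Viterbi alignment output
    A[i, j] is the probability that target word i aligns to source
    word j; each target row with at least one link sums to 1.
    """
    A = np.zeros((tgt_len, src_len))
    for j, i in links:
        A[i, j] = 1.0
    row_sums = A.sum(axis=1, keepdims=True)
    # normalize only non-empty rows; unaligned target words keep a zero row
    np.divide(A, row_sums, out=A, where=row_sums > 0)
    return A

# toy example: target word 0 aligns to source words 0 and 1,
# target word 1 aligns to source word 2
A = alignment_matrix([(0, 0), (1, 0), (2, 1)], tgt_len=2, src_len=3)
```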
We combine the decoder cost and the alignment cost to build the new loss function:

$$L(\theta) = w_1 \cdot L_{dec}(\theta) + w_2 \cdot L_{align}(\theta)$$
During training, we optimize the new compound loss function with regard to the same parameters as before. The guided-alignment training influences the attention mechanism to generate alignments closer to the Viterbi alignment, and has the advantage of leaving the parameter space and model complexity unchanged. When training is done, we assume that the NMT system can generate a robust alignment by itself, so there is no need to feed an alignment matrix as input during evaluation. We set the weights $w_1$ and $w_2$ of the decoder cost and the alignment cost to balance their ratio, and performed further experiments to analyze the impact of different weight settings on translation quality.
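A minimal sketch of the compound loss (hypothetical weights; in the real system the decoder cost comes from the softmax over the vocabulary, and the gradient flows through the attention weights):

```python
import numpy as np

def guided_alignment_loss(decoder_cost, alpha, A, w1=1.0, w2=1.0, eps=1e-12):
    """Combine decoder cost with an alignment penalty.

    decoder_cost -- negative log-likelihood of the target sentence (scalar)
    alpha        -- attention weights, shape (I, J)
    A            -- statistical alignment matrix, same shape, rows sum to 1
    The penalty is the cross entropy between A and alpha, averaged over
    target positions; mean squared error would be the other variant
    mentioned in the text.
    """
    align_cost = -(A * np.log(alpha + eps)).sum() / A.shape[0]
    return w1 * decoder_cost + w2 * align_cost

# attention that roughly follows a diagonal alignment
alpha = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])
loss = guided_alignment_loss(5.0, alpha, A)
```

When the attention weights match the statistical alignment exactly, the penalty vanishes and the compound loss reduces to the decoder cost alone.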
4 Topic-aware Machine Translation
In the e-commerce domain, information on the product category (e.g., “men’s clothing”, “mobile phones”, “kitchen appliances”) often accompanies the product title and description and can be used as an additional source of information both in the training of an MT system and during translation. In particular, such meta-information can help to disambiguate between alternative translations of the same word that have different meanings; the choice of the right translation often depends on the category. For example, the word “skin” has to be translated differently in the categories “mobile phone accessories” and “make-up”. Outside of the e-commerce world, similar topic information is available in the form of, e.g., tags and keywords for a given document (on-line article, blog post, patent, etc.) and can also be used for word sense disambiguation and topic adaptation. In general, the same document can belong to multiple topics.
Here, we propose to feed such meta-information into the recurrent neural network to help generate words which are appropriate given a particular category or topic.
The idea is to represent the topic information in a $D$-dimensional vector $l$, where $D$ is the number of topics. Since one sentence can belong to multiple topics (possibly with different probabilities/weights), we normalize the topic vector so that the sum of its elements is 1. It is fed into the decoder to influence the proposed target word distribution. The conditional probability given the topic membership vector $l$ can then be written as:

$$p(e_i \mid e_{<i}, f, l) = g(e_{i-1}, s_{i-1}, c_i, l)$$

where $g$ approximates the word probability distribution as before. In our implementation, we introduce an intermediate readout layer to build the function $g$, which is a feed-forward network.
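Building the normalized topic membership vector for a sentence that belongs to several topics can be sketched as follows (weights hypothetical):

```python
import numpy as np

def topic_vector(memberships, num_topics):
    """Build the topic membership vector l described above.

    memberships -- list of (topic_index, weight) pairs for one sentence
    The vector is normalized so that its elements sum to 1.
    """
    l = np.zeros(num_topics)
    for topic, weight in memberships:
        l[topic] = weight
    return l / l.sum()

# a sentence belonging to two of four topics with equal weight
l = topic_vector([(0, 1.0), (2, 1.0)], num_topics=4)
```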
In the NMT decoder, we feed the topic membership vector $l$ into the readout layer at each recurrent step to enhance word selection. The topic membership vector enters the NMT decoder as an additional input besides the source and target sentences:

$$t_i = W \cdot [s_{i-1}; E(e_{i-1}); c_i; l] + b$$

where $[\cdot\,;\cdot]$ denotes vector concatenation, $t_i$ is the output of the readout layer, $E(e_{i-1})$ is the embedding of the last target word $e_{i-1}$, and $s_{i-1}$ refers to the last decoder state. $W$ and $b$ are the weights and bias of the linear transformation, respectively. We can rearrange the formula as:

$$t_i = W_o \cdot [s_{i-1}; E(e_{i-1}); c_i] + W_l \cdot l + b$$

where $W = [W_o; W_l]$ is the concatenation of the original transformation matrix $W_o$ and the topic transformation matrix $W_l$. Adding the topic vector to the readout layer input is thus equivalent to adding an additional topic-dependent vector $W_l \cdot l$ to the original readout layer pre-activation. Assuming $l$ is a one-hot category vector, $W_l \cdot l$ is equivalent to retrieving a specific column of the matrix $W_l$. Hence, we can call this additional vector a topic embedding, regarded as a vector representation of the topic information. It is quite similar to a word embedding; we further analyze the similarity between different topics in the experiments.
The readout layer merges information from the last decoder state $s_{i-1}$, the previous word embedding $E(e_{i-1})$ (coming from the word index $e_{i-1}$, which is sampled w.r.t. the proposed word distribution), as well as the current context $c_i$ to generate the output. It can be seen as a shallow network consisting of a max-out layer (Goodfellow et al.), a fully-connected layer, and a softmax layer.
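The equivalence between concatenating the topic vector to the readout input and adding a retrieved topic embedding column can be checked with a small NumPy sketch (all dimensions hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_topics, d_out = 6, 4, 5

W_o = rng.normal(size=(d_out, d_in))      # original readout transformation
W_l = rng.normal(size=(d_out, d_topics))  # topic transformation
b = rng.normal(size=d_out)

x = rng.normal(size=d_in)                 # concat of state, prev embedding, context
l = np.zeros(d_topics)
l[2] = 1.0                                # one-hot topic membership vector

# variant 1: concatenated input through the concatenated matrix W = [W_o; W_l]
t1 = np.concatenate([W_o, W_l], axis=1) @ np.concatenate([x, l]) + b

# variant 2: original readout plus the retrieved "topic embedding" column
t2 = W_o @ x + W_l[:, 2] + b
```

Both variants produce the same pre-activation, which is why the extra column of $W_l$ can be read as a learned embedding of the topic.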
5 Bootstrapped Training

When trained on small amounts of data, the attention-based neural network approach does not always produce reliable soft alignments. The problem gets worse as the sentence pairs available for training get longer. To address this problem, we extracted bilingual sub-sentence units from existing sentence pairs to be used as additional training data. These units are exclusively aligned to each other, i.e. all words within the source sub-sentence are aligned only to words within the corresponding target sub-sentence and vice versa. The alignment is determined with the standard approach (IBM model 4 alignment trained with the GIZA++ toolkit). As boundaries for sub-sentence units, we used punctuation marks, including period, comma, semicolon, colon, dash, etc. To simplify bilingual sentence splitting, we used the standard phrase pair extraction algorithm for phrase-based SMT, but set the minimum/maximum source phrase length to 8 and 30 tokens, respectively. From all such long phrase pairs extracted by the algorithm, we only kept those which start or end with a punctuation mark or start/end a sentence, both on the source and on the target side.
For the bootstrapped training, we merged the original training data with the extracted sub-sentence units and ran the neural training algorithm on this extended training set. Since the extracted bilingual sub-sentence units generally showed good correspondence between source and target due to the constraints described above, the expectation was that having such units repeated in the training data as stand-alone training instances would guide the attention mechanism to become more robust and make it easier for the neural training algorithm to find better correspondences between more difficult source/target sentence parts. Also, having both short and long training instances was expected to make neural translation quality less dependent on the input length.
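The filtering step can be sketched as follows (function name and the exact boundary condition are one reading of the rules above; the punctuation set is illustrative): from phrase pairs extracted with their token positions, keep only those whose source and target sides both start and end at a punctuation mark or a sentence boundary, with the stated source length limits.

```python
PUNCT = {".", ",", ";", ":", "-"}

def keep_subsentence(src_tokens, tgt_tokens, src_span, tgt_span,
                     min_len=8, max_len=30):
    """Decide whether an extracted phrase pair is a sub-sentence unit.

    src_span/tgt_span -- (start, end) token indices, end exclusive.
    The source span must respect the length limits, and both sides must
    start and end either at a sentence boundary or at a punctuation mark.
    """
    start, end = src_span
    if not (min_len <= end - start <= max_len):
        return False

    def bounded(tokens, span):
        s, e = span
        starts_ok = s == 0 or tokens[s] in PUNCT or tokens[s - 1] in PUNCT
        ends_ok = e == len(tokens) or tokens[e - 1] in PUNCT or tokens[e] in PUNCT
        return starts_ok and ends_ok

    return bounded(src_tokens, src_span) and bounded(tgt_tokens, tgt_span)

src = ["the", "quick", "brown", "fox", "jumps", "over", "dogs", "today", ",", "always"]
tgt = ["le", "renard", "brun", "rapide", "saute", "sur", "les", "chiens", ",", "toujours"]
kept = keep_subsentence(src, tgt, (0, 9), (0, 9))      # ends at the comma
rejected = keep_subsentence(src, tgt, (1, 9), (1, 8))  # starts mid-sentence
```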
6 E-commerce Domain Adaptation
For the e-commerce English-to-French translation task, we only had a limited amount of in-domain parallel training data (item titles and descriptions). To benefit from large amounts of general-domain training data, we followed the further-training method of Luong and Manning. We first trained a baseline NMT model on English-French WMT data (common-crawl, Europarl v7, and news commentary corpora) for two epochs to get the best result on a development set, and then continued training the same model on the in-domain training set for a few more epochs. In contrast to that work, however, we used vocabularies of the 52K most frequent source/target words in the in-domain data (instead of the out-of-domain data vocabularies). This allowed us to focus the NN on the translation of the most relevant in-domain words.
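The in-domain vocabulary selection can be sketched as follows (corpus and size hypothetical; the paper uses the 52K most frequent in-domain words per side):

```python
from collections import Counter

def build_vocab(corpus_tokens, size=52000):
    """Return the `size` most frequent words of a tokenized corpus,
    as used for the in-domain source/target vocabularies above."""
    counts = Counter(corpus_tokens)
    return [word for word, _ in counts.most_common(size)]

# toy corpus: "the" occurs 3 times, "cat" once, etc.
vocab = build_vocab("the cat sat on the mat the end".split(), size=2)
```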
7.1 Datasets and Preprocessing
We performed MT experiments on the German-to-English IWSLT 2015 speech translation task and on an in-house English-to-French e-commerce translation task. As part of data preprocessing, we tokenized and lowercased the corpora, and replaced numbers, product specifications, and other special symbols with placeholders such as $num. We keep only these placeholders in training, but preserve their content as XML markup in the dev/test sets; this content is inserted for the generated placeholders on the target side based on the attention mechanism. In the beam search for the best translation, we make sure that each placeholder content is used only once. Using the same mechanism, we also pass OOV words to the target side “as is” (without using any special unknown word symbol).
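The restoration step can be sketched as a greedy variant (helper and data layout hypothetical): for each generated placeholder, follow the highest attention weight to a source position whose preserved content has not been used yet.

```python
import numpy as np

def restore_placeholders(target_tokens, alpha, source_contents):
    """Replace placeholder tokens using attention weights.

    target_tokens   -- generated tokens, e.g. ["$num", "euros"]
    alpha           -- attention weights, shape (len(target), len(source))
    source_contents -- preserved content per source position, or None for
                       positions that are not placeholders/OOVs
    Each source content is used at most once, mirroring the constraint
    applied during beam search in the text.
    """
    used = set()
    out = []
    for i, tok in enumerate(target_tokens):
        if tok.startswith("$"):
            # visit source positions by decreasing attention weight
            for j in np.argsort(-alpha[i]):
                if source_contents[j] is not None and j not in used:
                    out.append(source_contents[j])
                    used.add(j)
                    break
            else:
                out.append(tok)  # no unused content found; keep placeholder
        else:
            out.append(tok)
    return out

alpha = np.array([[0.8, 0.1, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.1, 0.1, 0.8]])
tokens = ["$num", "x", "$num"]
contents = ["42", None, "7"]
restored = restore_placeholders(tokens, alpha, contents)
```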
| ||IWSLT De||IWSLT En||e-commerce En||e-commerce Fr|
|Train: running words||3 873 816||3 656 038||2 592 202||2 895 089|
|Full vocabulary||103 390||45 068||119 607||129 848|
|Dev: running words||9 812||10 695||10 339||11 283|
|Test: running words||19 019||22 895||10 817||11 016|
IWSLT TED Talk Data
For the IWSLT German-to-English task (translation of transcribed TED talks), we mapped the topic keywords of each TED talk in the 2015 training/dev/test evaluation campaign release to ten general topics such as politics, environment, education, and others. All sentences in the same talk share the same topic, and one talk can belong to several topics. Instead of using the official IWSLT dev/test data, we set aside 81/159 talks for the development/test set, respectively. Out of these talks, we used the 567 dev and 1100 test sentences which had the highest probability of relating to a particular topic (bag-of-words classification using the remaining 1365 talks as the training data). The full corpus statistics for the IWSLT data sets obtained this way are given in the table above.
For the e-commerce English-to-French task, we used the product category such as “fashion” or “electronics” as topic information (a total of the 80 most widely used categories plus the category “other”, which combined all less frequent categories). The training set contained both product titles and product descriptions, while the dev and test sets contained only product titles. Each title or description sentence was assigned to only one category. The statistics of the e-commerce data sets are given in the table above.
We implemented our neural translation model in Python using the Blocks deep learning library, based on the open-source MILA translation project. We compared our implementation of the NMT baseline system with the reference implementation on the WMT 2014 English-to-French machine translation task and obtained a similar BLEU score on the official test set. We then added the topic-aware algorithm, guided alignment training, and bootstrapped training to the NMT model. We trained separate models with various feature combinations. We also created an ensemble of different models to obtain the best NMT translation results.
In our experiments, we set the word embedding size to 620 and used a two-layer bi-directional GRU encoder and a one-layer GRU decoder, both with a cell dimension of 1000. We selected the 50K most frequent German words and the top 30K English words as vocabularies for the IWSLT task, and the 52K most frequent English/French words for the e-commerce task. The optimization of the objective function was performed using the AdaDelta algorithm (Zeiler). We set the beam size to 10 for the dev/test set beam search translation.
For training, we use stochastic gradient descent with a batch size of 100, saving the model parameters after a certain number of epochs; we saved around 30 consecutive sets of model parameters. We selected the best parameter set according to the sum of the established MT evaluation measures BLEU (Papineni et al.) and 1-TER (Snover et al.) on the development set. After model selection, we evaluated the best model on the test set and report the test set BLEU and TER scores.
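The model selection criterion can be sketched as follows (checkpoint names and scores hypothetical; real scores would come from evaluating each saved checkpoint on the development set):

```python
def select_best_checkpoint(dev_scores):
    """Pick the checkpoint maximizing BLEU + (1 - TER) on the dev set.

    dev_scores -- list of (checkpoint_name, bleu, ter), scores in [0, 1]
    """
    return max(dev_scores, key=lambda s: s[1] + (1.0 - s[2]))[0]

dev_scores = [("epoch10", 0.182, 0.70),
              ("epoch12", 0.186, 0.69),
              ("epoch14", 0.185, 0.66)]
best = select_best_checkpoint(dev_scores)
```

Note that the checkpoint with the highest BLEU alone ("epoch12") is not selected; the lower TER of "epoch14" outweighs its slightly lower BLEU under the combined criterion.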
We used TITAN X GPUs with 12 GB of RAM to run the experiments on Ubuntu Linux 14.04. Training converges in less than 24 hours on the IWSLT talk task and in around 30 hours on the e-commerce task. The beam search on the test set takes around 10 minutes for both tasks; the exact time depends on the vocabulary size and beam size.
|E-commerce EnFr||BLEU %||TER %|
|+prefixed human-labeled categs||18.3||69.3|
|+readout human-labeled categs||19.7||65.3|
|+readout LDA topics||14.5||74.9|
7.3 Effect of Topic-aware NMT
We tested different approaches to find out where topic information fits best into NMT, since topic information can affect alignment, word selection, etc. The most naive approach is to insert a pseudo topic word at the beginning of a sentence to bias its context towards a certain topic. We also tried topic vectors of different origin in the readout layer of the network: both topics predicted automatically with Latent Dirichlet Allocation (LDA) and human-labeled topics were fed into the network.
|source||ich möchte Ihnen heute Morgen gerne von meinem Projekt, Kunst Aufräumen, erzählen.|
|NMT||I want to clean you this morning, from my project, to say Art.|
|+ topics||I would like to talk to you today by my project, Art clean.|
|reference||I would like to talk to you this morning about my project, Tidying Up Art.|
|source||… unsere Kollegen an Tufts verbinden Modelle wie diese mit durch Tissue Engineering erzeugten Knochen, um zu sehen, wie Krebs sich von einem Teil des Körpers zum nächsten verbreiten könnte.|
|NMT||… our NOAA colleagues combined models of models like this with tissue generated bones from bones to see how cancer could spread from one part of the body, to the next distribution.|
|+ topics||… our colleagues at Tufts are using models like this with tissue-based engineered bones to see how cancer could spread from a part of the body to the next part.|
|reference||… our colleagues at Tufts are mixing models like these with tissue-engineered bone to see how cancer might spread from one part of the body to the next.|
|E-commerce EnFr||BLEU %||TER %|
|+squared error (1:1)||20.8||64.5|
The results on the e-commerce task in Table 2 show that category information used as a pseudo topic word does not carry enough semantic and syntactic weight, compared to real source words, to have a positive effect on the target words predicted in the decoder. The BLEU score of such a system (18.3%) is even below the baseline (18.6%). In contrast, the human-labeled categories are more reliable and positively influence word selection in the NMT decoder, significantly outperforming the baseline (19.7% BLEU).
Replacing the human-labeled topic one-hot vectors of size 80 with the LDA-predicted topic distribution vectors of the same dimension in the read-out layer of the neural network deteriorated the BLEU and TER scores significantly. We attribute this to data sparseness problems when training the LDA of dimension 80 on product titles.
On the German-to-English task, we also observed MT quality improvements when using human-labeled topic information. We extracted the topic embeddings from different experiments and compared their cosine distances. As expected, the same topic tends to have a similar representation in the continuous embedding space across different experiments. At the same time, closely related topic pairs such as “politics” and “issues” tend to have a shorter distance to each other. Examples of improved German-to-English NMT translations when human-labeled topic information is used are shown in Table 3.
|source||Vintage Ollech & Wajs Early Bird Diver watch, Excellent!|
|SMT||Vintage Ollech & Wajs début oiseau montre de plongée, excellent!|
|NMT||Montre de plongée vintage Ollech & Wajs early bird, excellent!|
|reference||Montre de Plongée Vintage Ollech & Wajs Early Bird, Excellent !|
|source||APT Holman Model 1 Audiophile Power Amplifer made in Cambridge Mass|
|SMT||APT Holman modèle 1 audiophile power fabricant d’ampli made in Cambridge Mass|
|NMT||L’amplificateur de puissance audiophile APT Holman modèle 1 fabriquée à Cambridge Mass|
|reference||Amplificateur de puissance pour audiophile APT Holman modèle 1 fabriqué à Cambridge Massachussets|
|Exp||BLEU %||TER %|
|Ensemble||NMT + topic vectors||27.8||55.4|
|NMT + topic vectors + guided alignment|
|NMT + topic vectors + bootstrapping|
|NMT + topic v. + guided alignment + bootstrapping|
7.4 Implementation of Guided Alignment
To balance the decoder cost and the attention weight cost, we experimented with different weights for these costs and analyzed the relation between the weight ratio and the final result. Besides fixing the cost ratio during training, we also applied a heuristic to adjust the ratio as training progresses: set a high value for the alignment cost weight in the beginning, then decay it by a factor of 0.9 after every epoch, eventually eliminating the influence of the alignment. This approach helps on the IWSLT task, but not on the e-commerce task. We assume that the alignment of TED talk sentences is easier for NMT to learn on its own than the alignment between product titles and their translations. We also analyzed the effect of using different loss functions for calculating the alignment divergence (see Section 3.2); the difference between the squared error and the cross entropy turned out to be small. Since the cross-entropy function has the same form as the decoder cost, we decided to use it in further experiments. Finally, we extracted the NMT attention weights and marked the connection with the highest score as the hard alignment for each word, comparing baseline NMT and alignment-guided NMT. The comparison shows that the guided alignment training truly improves the alignment correspondence.
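The decay heuristic described above can be sketched as a simple schedule (initial weight and epoch count hypothetical):

```python
def alignment_weight_schedule(w2_initial, decay=0.9, epochs=10):
    """Yield the alignment-cost weight for each epoch.

    Starts high and multiplies by `decay` after every epoch, so the
    alignment supervision fades out as training progresses.
    """
    w2 = w2_initial
    for _ in range(epochs):
        yield w2
        w2 *= decay

weights = list(alignment_weight_schedule(1.0, decay=0.9, epochs=3))
```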
The overall results on the e-commerce translation task and the IWSLT task are consistent, in the sense that a feature that improves the BLEU/TER results on one task is also beneficial for the other.
For comparison, we trained phrase-based SMT models with the Moses toolkit (Koehn et al.) on both translation tasks. We used the standard Moses features, including a 4-gram LM trained on the target side of the bilingual data, word-level and phrase-level translation probabilities, and a distortion model with a maximum distortion of 6. Our stronger phrase-based baseline included 5 additional features of a 4-gram operation sequence model (OSM; Durrani et al.).
On the e-commerce task, which is more challenging due to the high number of OOV words and placeholders, we observed that the NMT translation output had many errors related to incorrect attention weights. To improve the attention mechanism, we applied guided alignment training and bootstrapping; both boosted the translation performance. Adding topic information increased the BLEU score to 21.3%. We selected the four best model parameter sets from various experiments to build an ensemble system, which improved the BLEU score to 24.5%. For the following experiment, we pre-trained a model on WMT 15 parallel data with the guided alignment technique and then continued training on the e-commerce data for several epochs, performing domain adaptation. This approach proved to be extremely helpful, giving an increase of over 3.0% BLEU absolute. Finally, we also applied ensembling to variants of the domain-adapted models to further increase the BLEU score to 25.6%, which is 7.0% BLEU higher than the NMT baseline and only 0.6% BLEU behind the state-of-the-art phrase-based baseline at 26.2%. Table 6 shows examples where the ensemble NMT system is better than the phrase-based system despite the slightly lower corpus-level BLEU score. In fact, a more detailed analysis of the sentence-level BLEU scores showed that the NMT translation was ranked higher than the SMT translation for 386 of 910 titles, while the reverse was true for 460 titles. In particular, the word order of noun phrases was observed to be better in the NMT translations.
On the IWSLT task, the baseline NMT was not as far behind the phrase-based system as on the e-commerce task, so the obtained improvements were smaller than for product title translation. We observed that topic information is less helpful than bootstrapping and guided alignment learning. When we combined them, we reached the same BLEU score as the phrase-based system. Finally, we combined four variant systems into an ensemble, which resulted in a BLEU score of 27.8%, surpassing the phrase-based system with the OSM model by 2.1% BLEU absolute.
We have presented a novel guided alignment training for an NMT model that utilizes IBM model 4 Viterbi alignments to guide the attention mechanism. This approach was shown experimentally to bring consistent improvements in translation quality on e-commerce and spoken language translation tasks. On both tasks, the proposed novel way of utilizing topic meta-information was also shown to improve BLEU and TER scores. We further showed improvements from domain adaptation by continuing the training of an out-of-domain NMT system on in-domain parallel data. In the future, we would like to investigate how to effectively make use of the abundant monolingual data with human-labeled product category information that we have available for the envisioned e-commerce application.
- This IWSLT data set with topic labels is publicly available at https://github.com/wenhuchen/iwslt-2015-de-en-topics.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate.
Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert. L. Mercer. The mathematics of statistical machine translation: Parameter estimation.
Mauro Cettolo, Christian Girardi, and Marcello Federico. Wit: Web inventory of transcribed and translated talks.
Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches.
Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation.
Trevor Cohn, Cong Duy Vu Hoang, and Ekaterina Vymolova. Incorporating structural alignment biases into an attention neural translation model.
Nadir Durrani, Helmut Schmid, Alexander Fraser, Philipp Koehn, and Hinrich Schütze. The operation sequence model—combining n-gram-based and phrase-based statistical machine translation.
Ian J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks.
Eva Hasler, Phil Blunsom, Philipp Koehn, and Barry Haddow. Dynamic topic adaptation for phrase-based MT.
Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. On using very large target vocabulary for neural machine translation.
Philipp Koehn and Josh Schroeder. Experiments in domain adaptation for statistical machine translation.
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. Moses: Open source toolkit for statistical machine translation.
Minh-Thang Luong and Christopher D. Manning. Stanford neural machine translation systems for spoken language domain.
Minh-Thang Luong, Ilya Sutskever, Quoc V Le, Oriol Vinyals, and Wojciech Zaremba. Addressing the rare word problem in neural machine translation.
Minh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention-based neural machine translation.
Prashant Mathur, Marcello Federico, Selçuk Köprü, Sharam Khadivi, and Hassan Sawaf. Topic adaptation for machine translation of e-commerce content.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space.
Franz Josef Och and Hermann Ney. A systematic comparison of various statistical alignment models.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation.
Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. A study of translation edit rate with targeted human annotation.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks.
Bart van Merriënboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde-Farley, Jan Chorowski, and Yoshua Bengio. Blocks and fuel: Frameworks for deep learning.
Matthew D Zeiler. Adadelta: an adaptive learning rate method.