Deep Neural Machine Translation with Linear Associative Unit

Abstract

Deep Neural Networks (DNNs) have provably enhanced the state-of-the-art Neural Machine Translation (NMT) with their capability in modeling complex functions and capturing complex linguistic structures. However, NMT systems with deep architecture in their encoder or decoder RNNs often suffer from severe gradient diffusion due to the non-linear recurrent activations, which often makes the optimization much more difficult. To address this problem we propose novel linear associative units (LAU) to reduce the gradient propagation length inside the recurrent unit. Different from conventional approaches (the LSTM unit and GRU), LAUs use linear associative connections between the input and output of the recurrent unit, which allow unimpeded information flow in both the spatial and temporal directions. The model is quite simple, but it is surprisingly effective. Our empirical study on Chinese-English translation shows that our model with proper configuration can improve upon Groundhog by 11.7 BLEU and achieves the best reported results in the same setting. On the WMT14 English-German task and the larger WMT14 English-French task, our model achieves results comparable with the state-of-the-art.

1 Introduction

Neural Machine Translation (NMT) is an end-to-end learning approach to machine translation which has recently shown promising results on multiple language pairs [?]. Unlike conventional Statistical Machine Translation (SMT) systems [?], which consist of multiple separately tuned components, NMT aims at building a single large neural network that directly maps input text to the associated output text. Typical NMT models consist of two recurrent neural networks (RNNs): an encoder to read and encode the input text into a distributed representation, and a decoder to generate translated text conditioned on the input representation [?].

Driven by the breakthrough achieved in computer vision [?], research in NMT has recently turned towards studying Deep Neural Networks (DNNs). Wu et al. and Zhou et al. found that deep architectures in both the encoder and decoder are essential for capturing subtle irregularities in the source and target languages. However, training a deep neural network is not as simple as stacking layers. Optimization often becomes increasingly difficult with more layers. One reasonable explanation is the notorious problem of vanishing/exploding gradients, which was first studied in the context of vanilla RNNs [?]. Most prevalent approaches to this problem rely on short-cut connections between adjacent layers such as residual or fast-forward connections [?]. Different from previous work, we choose to reduce the gradient path inside the recurrent units and propose a novel Linear Associative Unit (LAU) which creates a fusion of both linear and non-linear transformations of the input. Through this design, information can flow across several steps both in time and in space with little attenuation. This mechanism makes it easy to train deep stacked RNNs which can efficiently capture the complex inherent structures of sentences for NMT. Based on LAUs, we also propose an NMT model, called DeepLAU, with deep architecture in both the encoder and decoder.

Although DeepLAU is fairly simple, it gives remarkable empirical results. On the NIST Chinese-English task, DeepLAU with proper settings yields the best reported result and also a 4.9 BLEU improvement over a strong NMT baseline with most known techniques (e.g., dropout) incorporated. On the WMT English-German and English-French tasks, it also achieves performance superior or comparable to the state-of-the-art.

2 Neural machine translation

A typical neural machine translation system is a single large neural network that directly models the conditional probability P(y|x) of translating a source sentence x into a target sentence y.

Attention-based NMT, with RNNsearch as its most popular representative, generalizes the conventional notion of encoder-decoder by using an array of vectors to represent the source sentence and dynamically addressing the relevant segments of it during decoding. The process can be explicitly split into an encoding part, a decoding part and an attention mechanism. The model first encodes the source sentence x = (x_1, ..., x_n) into a sequence of vectors (h_1, ..., h_n). In general, h_i is the annotation of x_i from a bi-directional RNN which contains information about the whole sentence with a strong focus on the parts surrounding x_i. Then, the RNNsearch model decodes and generates the target translation y based on the context c_t and the partially translated sequence y_{<t} by maximizing the probability p(y_t | y_{<t}, c_t). In the attention model, c_t is dynamically obtained according to the contribution each source annotation makes to the prediction of the current word. This is called automatic alignment [?] or attention mechanism [?], but it is essentially reading with content-based addressing as defined in [?]. With this addressing strategy the decoder can attend to the source representation that is most relevant to the current stage of decoding.
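For concreteness, the standard RNNsearch-style formulation that this description follows can be written as below; the notation (decoder state s_t, alignment network a) is assumed here for illustration rather than taken verbatim from this paper.

\[
p(\mathbf{y}\mid\mathbf{x}) \;=\; \prod_{t=1}^{m} p(y_t \mid y_{<t}, \mathbf{x}), \qquad
p(y_t \mid y_{<t}, \mathbf{x}) \;=\; g(y_{t-1}, s_t, c_t),
\]
\[
c_t \;=\; \sum_{i=1}^{n} \alpha_{t,i}\, h_i, \qquad
\alpha_{t,i} \;=\; \frac{\exp(e_{t,i})}{\sum_{j=1}^{n} \exp(e_{t,j})}, \qquad
e_{t,i} \;=\; a(s_{t-1}, h_i),
\]

where s_t is the decoder hidden state and a(·) is a small feed-forward alignment network trained jointly with the rest of the model.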

Deep neural models have recently achieved great success on a wide range of problems. In computer vision, models with substantially more convolutional layers have outperformed shallow ones by a big margin on a series of image tasks [?]. Following similar ideas for building deep CNNs, some promising improvements have also been achieved in building deep NMT systems. Zhou et al. proposed a new type of linear connections between adjacent layers to simplify the training of deeply stacked RNNs. Similarly, Wu et al. introduced residual connections into their deep neural machine translation system and achieved great improvements. However, the optimization of deep RNNs is still an open problem due to the massive recurrent computation which makes the gradient propagation path extremely tortuous.

3 Model Description

In this section, we first discuss the Linear Associative Unit (LAU), which eases the training of deep stacks of RNNs. Based on this unit, we further propose DeepLAU, a neural machine translation model with a deep encoder and decoder.

3.1 Recurrent Layers

A recurrent neural network [?] is a class of neural network that has recurrent connections and a state (or its more sophisticated memory-like extension). Past information is built up through the recurrent connections. This makes RNNs applicable to sequential prediction tasks of arbitrary length. Given a sequence of vectors x = (x_1, ..., x_T) as input, a standard RNN computes the sequence of hidden states (h_1, ..., h_T) by iterating the following equation from t = 1 to T:

h_t = f(h_{t-1}, x_t)

where f is usually a nonlinear function such as the composition of a logistic sigmoid with an affine transformation.

3.2 Gated Recurrent Unit

It is difficult to train RNNs to capture long-term dependencies because the gradients tend to either vanish (most of the time) or explode. The effect of long-term dependencies decays exponentially with the length of the gradient propagation path. The problem was explored in depth by [?]. A successful approach is to design a more sophisticated activation function, equipped with gating functions that control the information flow and reduce the propagation path. There is a long thread of work aiming to solve this problem, with the long short-term memory unit (LSTM) being the most salient example and the gated recurrent unit (GRU) being the most recent one [?]. RNNs employing either of these recurrent units have been shown to perform well in tasks that require capturing long-term dependencies.

GRU can be viewed as a slightly more dramatic variation on LSTM with fewer parameters. The activation function is armed with two specifically designed gates, called the update and reset gates, to control the flow of information inside each hidden unit. Each hidden state h_t at time step t is computed as follows:

h_t = (1 - z_t) ⊙ h_{t-1} + z_t ⊙ ~h_t

where ⊙ is an element-wise product, z_t is the update gate, and ~h_t is the candidate activation:

~h_t = tanh(W_xh x_t + W_hh (r_t ⊙ h_{t-1}))

where r_t is the reset gate. Both reset and update gates are computed as:

r_t = σ(W_xr x_t + W_hr h_{t-1}),    z_t = σ(W_xz x_t + W_hz h_{t-1})

This procedure of taking a linear sum between the existing state and the newly computed state is similar to the LSTM unit.
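As a concrete reference, here is a minimal NumPy sketch of a single GRU step following the equations above; the weight names (W_xz, W_hz, and so on) are illustrative and biases are omitted for brevity.

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x_t, h_prev, params):
    """One GRU step: returns h_t given input x_t and previous state h_prev."""
    W_xz, W_hz = params["W_xz"], params["W_hz"]   # update gate weights
    W_xr, W_hr = params["W_xr"], params["W_hr"]   # reset gate weights
    W_xh, W_hh = params["W_xh"], params["W_hh"]   # candidate weights

    z_t = sigmoid(W_xz @ x_t + W_hz @ h_prev)               # update gate
    r_t = sigmoid(W_xr @ x_t + W_hr @ h_prev)               # reset gate
    h_tilde = np.tanh(W_xh @ x_t + W_hh @ (r_t * h_prev))   # candidate activation
    return (1.0 - z_t) * h_prev + z_t * h_tilde             # linear interpolation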

3.3 Linear Associative Unit

GRU can actually be viewed as a non-linear activation function with a gating mechanism. Here we propose LAU, which extends GRU by having an additional linear transformation of the input in its dynamics. More formally, the state update function becomes a gated combination of three sources: 1) the direct transfer from the previous state h_{t-1}, 2) the candidate update ~h_t, and 3) a direct contribution from the input, H(x_t). More specifically, ~h_t contains the nonlinear information of the input and the previous hidden state.

where σ(W_x x_t) and σ(W_h h_{t-1}) express how much of the nonlinear abstraction is produced by the input x_t and the previous hidden state h_{t-1}, respectively. For simplicity, we set both of them to 1 in this paper and find that this works well in our experiments. The term H(x_t) is usually an affine linear transformation of the input x_t to match the dimensions of h_t, where H(x_t) = W_x x_t. The associated term g_t (the input gate) decides how much of the linear transformation of the input is carried to the hidden state and then to the output. The gating functions r_t (reset gate) and z_t (update gate) are computed in the same way as the reset and update gates of the GRU above, while the input gate g_t is computed analogously from x_t and h_{t-1}.

The term g_t ⊙ H(x_t) therefore offers a direct way for the input x_t to reach later hidden layers, which, when applied recursively, eventually leads to a path to the output layer. This mechanism is potentially very useful for translation, where the input, no matter whether it is the source word or the attentive reading (context), should sometimes be carried directly to the next stage of processing without any substantial composition or nonlinear transformation. To understand this, imagine we want to translate an English sentence containing a relatively rare entity name such as “Bahrain” into Chinese: LAU is potentially able to retain the embedding of this word in its hidden state, which would otherwise be prone to serious distortion due to the scarcity of training instances for it.
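The following sketch (reusing the sigmoid helper and NumPy conventions from the GRU sketch above) illustrates one plausible LAU step consistent with this description; because the paper's exact equations are not reproduced here, the parameterization of the input gate g_t and the way the three sources are combined should be read as assumptions rather than the authors' precise formulation.

def lau_step(x_t, h_prev, params):
    """One LAU-style step: a GRU-like update plus a gated linear path from the input.

    Combines three sources: the previous state h_prev, a candidate update
    h_tilde, and a linear transformation H(x_t) of the input (assumed form).
    """
    W_xz, W_hz = params["W_xz"], params["W_hz"]   # update gate
    W_xr, W_hr = params["W_xr"], params["W_hr"]   # reset gate
    W_xh, W_hh = params["W_xh"], params["W_hh"]   # candidate
    W_xg, W_hg = params["W_xg"], params["W_hg"]   # input gate (assumed parameterization)
    W_x = params["W_x"]                           # linear map H(x_t) = W_x x_t

    z_t = sigmoid(W_xz @ x_t + W_hz @ h_prev)
    r_t = sigmoid(W_xr @ x_t + W_hr @ h_prev)
    g_t = sigmoid(W_xg @ x_t + W_hg @ h_prev)
    h_tilde = np.tanh(W_xh @ x_t + W_hh @ (r_t * h_prev))

    gru_part = (1.0 - z_t) * h_prev + z_t * h_tilde   # non-linear GRU path
    linear_part = g_t * (W_x @ x_t)                   # gated linear path H(x_t)
    return gru_part + linear_part                     # fusion of the two paths (assumed additive)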

3.4 DeepLAU

Figure 1: DeepLAU: a neural machine translation model with deep encoder and decoder.

Graves et al. explored the advantages of deep RNNs for handwriting recognition and text generation. There are multiple ways of combining one layer of RNN with another. Pascanu et al. introduced Deep Transition RNNs with Skip connections (DT(S)-RNNs). Kalchbrenner et al. proposed to fully connect all the RNN hidden layers. In this work we employ vertical stacking, where only the output of the previous layer of RNN is fed to the current layer as input. The input at recurrent layer k (denoted as x_t^k) is exactly the output of the same time step at layer k-1 (denoted as h_t^{k-1}). Additionally, in order to learn more temporal dependencies, the sequences can be processed in different directions. More formally, given an input sequence x = (x_1, ..., x_T), the output at layer k is

h_t^k = φ^k(x_t^k, h_{t+d}^k),    with x_t^k = h_t^{k-1}

where

  • h_t^k gives the output of layer k at time step t.

  • φ^k is a recurrent transition function, for which we choose LAUs in this work.

  • The processing directions are marked by a direction term d ∈ {-1, +1}. If we fix d to -1, the input will be processed in the forward direction; otherwise it is processed in the backward direction.

The deep architecture of DeepLAU, as shown in Figure 1, consists of three parts: a stacked LAU-based encoder, a stacked LAU-based decoder and an improved attention model.

Encoder One shortcoming of conventional RNNs is that they are only able to make use of previous context. In machine translation, where whole source sentences are available at once, there is no reason not to exploit future context as well. Thus bi-directional RNNs were proposed to integrate information from both the past and the future. The typical bidirectional approach processes the raw input in the backward and forward directions with two separate layers, and then concatenates their outputs. Following Zhou et al., we choose another bidirectional approach to process the sequence in order to learn more temporal dependencies. Specifically, an RNN layer processes the input sequence in the forward direction. The output of this layer is taken by an upper RNN layer as input and processed in the reverse direction. Formally, following the stacked recurrence above, we alternate the sign of the direction term d between adjacent layers. This approach can easily build a deeper network with the same number of parameters as the classical approach. The final encoder consists of several such layers and produces the top-layer output, which is used to compute the conditional input to the decoder.
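A minimal sketch of this alternating-direction vertical stacking, assuming the hypothetical lau_step helper from the previous sketch; parameter shapes and initial states are simplified.

def stacked_encoder(inputs, layer_params):
    """Vertical stacking with alternating directions.

    inputs: list of T input vectors (source word embeddings).
    layer_params: one parameter dict per layer; layer k reads the output of
    layer k-1 and processes it forward (even k) or backward (odd k).
    """
    seq = inputs
    for k, params in enumerate(layer_params):
        order = range(len(seq)) if k % 2 == 0 else reversed(range(len(seq)))
        h = np.zeros_like(params["W_hz"][:, 0])    # initial state (simplified)
        outputs = [None] * len(seq)
        for t in order:
            h = lau_step(seq[t], h, params)        # LAU recurrence from Section 3.3
            outputs[t] = h
        seq = outputs                              # feed to the next layer
    return seq                                     # top-layer annotations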

Attention Model The alignment model scores how well the output at position t matches the inputs around position i, based on the top-most encoder layer output h_i at step i and the first-layer decoder state at the previous step. It is intuitively beneficial to exploit the information of the previously generated word y_{t-1} when reading from the source sentence representation, which is missing from the implementation of attention-based NMT in [?]. In this work, we build a more effective alignment path by feeding both the previous hidden state and the context word y_{t-1} to the attention model, inspired by the recent implementation of attention-based NMT1. The conditional input c_t is a weighted sum of the attention scores α_{t,i} and the encoder outputs h_i. Formally, c_t is computed as

c_t = Σ_i α_{t,i} h_i

where α_{t,i} is a normalized alignment score computed from the encoder output h_i and an intermediate state z_{t-1}, and z_{t-1} is a nonlinear function of the previous first-layer decoder state with the information of y_{t-1} (its word embedding) added. In our preliminary experiments, we found that a GRU works slightly better than a tanh function for computing z_{t-1}, but we chose the latter for simplicity.
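A sketch of this attention step in the same NumPy style as above; the additive scoring function and the tanh-based computation of z_{t-1} are standard choices assumed here, not necessarily the paper's exact parameterization.

def attention(h_enc, s_prev, y_prev_emb, params):
    """Compute the context vector c_t from top-layer encoder annotations.

    h_enc: list of top-layer encoder outputs h_i.
    s_prev: previous first-layer decoder state.
    y_prev_emb: embedding of the previously generated target word.
    """
    W_z, U_z = params["W_z"], params["U_z"]
    W_a, U_a, v_a = params["W_a"], params["U_a"], params["v_a"]

    # Intermediate state mixing the previous decoder state and the word embedding.
    z_prev = np.tanh(W_z @ s_prev + U_z @ y_prev_emb)

    # Additive (Bahdanau-style) alignment scores, then softmax normalization.
    scores = np.array([v_a @ np.tanh(W_a @ z_prev + U_a @ h_i) for h_i in h_enc])
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()

    # Context vector: attention-weighted sum of encoder annotations.
    c_t = sum(a_i * h_i for a_i, h_i in zip(alpha, h_enc))
    return c_t, alpha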

Decoder The decoder follows the stacked recurrence above with a fixed (forward) direction term. At the first layer, we use the following input:

x_t = [e(y_{t-1}); c_t]

where e(y_{t-1}) is the embedding of the previously generated target word and c_t is dynamically obtained by the attention model described above. There are several layers of RNNs armed with LAUs in the decoder. At the inference stage, we only utilize the top-most hidden state s_t to make the final prediction with a softmax layer:

p(y_t | y_{<t}, x) = softmax(W_o s_t).
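Putting the pieces together, one decoder time step might look like the following sketch, reusing the hypothetical attention and lau_step helpers above; the real system stacks several LAU layers and decodes with a beam rather than greedily.

def decoder_step(y_prev_emb, states, h_enc, dec_params, out_params):
    """One decoder step: attention, stacked LAU layers, softmax prediction."""
    c_t, _ = attention(h_enc, states[0], y_prev_emb, dec_params[0]["att"])
    x = np.concatenate([y_prev_emb, c_t])        # first-layer input [e(y_{t-1}); c_t]
    new_states = []
    for k, params in enumerate(dec_params):
        s_k = lau_step(x, states[k], params["rnn"])
        new_states.append(s_k)
        x = s_k                                  # feed output to the next layer
    logits = out_params["W_o"] @ new_states[-1]  # top-most state -> vocabulary scores
    probs = np.exp(logits - logits.max())
    return probs / probs.sum(), new_states, c_t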

4 Experiments

4.1 Setup

We mainly evaluated our approaches on the widely used NIST Chinese-English translation task. In order to show the usefulness of our approaches, we also provide results on two other translation tasks: English-French and English-German. The evaluation metric is BLEU2 [?].

For Chinese-English, our training data consists of M sentence pairs extracted from LDC corpora3, with M Chinese words and M English words respectively. We choose the NIST 2002 (MT02) dataset as our development set, and the NIST 2003 (MT03), 2004 (MT04), 2005 (MT05) and 2006 (MT06) datasets as our test sets.

For English-German, to compare with the results reported by previous work [?], we used the same subset of the WMT 2014 training corpus that contains 4.5M sentence pairs with 91M English words and 87M German words. The concatenation of news-test 2012 and news-test 2013 is used as the validation set and news-test 2014 as the test set.

To evaluate at scale, we also report the results of English-French. To compare with the results reported by previous work on end-to-end NMT [?], we used the same subset of the WMT 2014 training corpus that contains 12M sentence pairs with 304M English words and 348M French words. The concatenation of news-test 2012 and news-test 2013 serves as the validation set and news-test 2014 as the test set.

4.2 Training details

Table 1: Case-insensitive BLEU scores on Chinese-English translation.
SYSTEM MT03 MT04 MT05 MT06 AVE.
Moses
Groundhog
coverage
MemDec
DeepGRU
DeepLAU
DeepLAU + Ensemble + PosUnk

Our training procedure and hyperparameter choices are similar to those used by [?]. In more detail, we limit the source and target vocabularies to the most frequent words in both Chinese-English and English-French. For English-German, we set the source and target vocabulary sizes to and , respectively.

For all experiments, the dimensions of the word embeddings and recurrent hidden states are both set to the same value, and the dimension of c_t is also of the same size. Note that our network is narrower than most previous work, where hidden states of larger dimension are used. We initialize parameters by sampling each element from a Gaussian distribution with fixed mean and variance.

Parameter optimization is performed using stochastic gradient descent. Adadelta [?] is used to automatically adapt the learning rate of each parameter. To avoid gradient explosion, the gradients of the cost function whose norm exceeded a predefined threshold were rescaled to the threshold [?]. We set the threshold at the beginning of training and halve it until the BLEU score does not change much on the development set. Each SGD update uses a mini-batch of examples. We train our NMT model on sentences up to a maximum length in the training data, while for the Moses system we use the full training data. Translations are generated by beam search and log-likelihood scores are normalized by sentence length. We use a fixed beam width in all the experiments. Dropout was also applied on the output layer, with a fixed rate, to avoid over-fitting. Except when otherwise mentioned, NMT systems have 4-layer encoders and 4-layer decoders.
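Two of the details above, gradient rescaling by norm and length-normalized log-likelihood scores for beam search, can be sketched as follows; the threshold and scoring here are illustrative placeholders rather than the paper's exact settings.

def clip_gradients(grads, threshold):
    """Rescale gradients whose global norm exceeds a predefined threshold."""
    norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if norm > threshold:
        grads = [g * (threshold / norm) for g in grads]
    return grads

def normalized_score(log_probs):
    """Length-normalized log-likelihood used to rank beam-search hypotheses."""
    return sum(log_probs) / len(log_probs)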

4.3 Results on Chinese-English Translation

Table 1 shows BLEU scores on the Chinese-English datasets. Clearly DeepLAU leads to a remarkable improvement over its competitors. Compared to DeepGRU, DeepLAU achieves a higher BLEU score on average over the four test sets, showing the modeling power gained from the linear associative connections. We suggest this is because LAUs apply an adaptive gate function conditioned on the input, which makes them able to automatically decide how much linear information should be transferred to the next step.

To show the power of DeepLAU, we also make a comparison with previous work. Our best single model outperforms both a phrase-based MT system (Moses) and an open-source attention-based NMT system (Groundhog) by substantial BLEU margins on average. The result is also better than other state-of-the-art variants of attention-based NMT models by big margins. After applying PosUnk and ensembling, DeepLAU gains another notable BLEU improvement and outperforms Moses by an even larger margin.

4.4 Results on English-German Translation

Table 2: Case-sensitive BLEU scores on English-German translation.
SYSTEM Architecture Voc. BLEU
Buck et al. Winning WMT’14 system – phrase-based + large LM -
Jean et al. gated RNN with search + LV + PosUnk
Luong et al. LSTM with layers + dropout + local att. + PosUnk
Shen et al. gated RNN with search + PosUnk + MRT
Zhou et al. LSTM with layers + F-F connections
Wu et al. LSTM with layers + RL-refined Word 23.1
Wu et al. LSTM with layers + RL-refined WPM-32K - 24.6
Wu et al. LSTM with layers + RL-refined WPM-32K + Ensemble - 26.3
this work DeepLAU
this work DeepLAU + PosUnk
this work DeepLAU + PosUnk + Ensemble models

The results on English-German translation are presented in Table 2. We compare our NMT systems with various other systems including the winning system of WMT’14 [?], a phrase-based system whose language models were trained on a huge monolingual text collection, the Common Crawl corpus. For end-to-end NMT systems, to the best of our knowledge, Wu et al. is currently the SOTA system, clearly above the previously best reported results, even though Zhou et al. used a much deeper neural network4.

Following Wu et al., the reported BLEU score is the average over the 8 models we trained. Our approach achieves results comparable with the SOTA system. As can be seen from Table 2, DeepLAU performs better than the word-based model and is not much worse than the best wordpiece models of Wu et al. Note that DeepLAU is simple and easy to implement, as opposed to the previous models reported by Wu et al., which depend on several external techniques to achieve their best performance, such as length normalization, coverage penalty, fine-tuning and the RL-refined model.

4.5 Results on English-French Translation

Table 3: English-to-French task: BLEU scores of single neural models.
SYSTEM BLEU
Enc-Dec [?]
RNNsearch [?]
RNNsearch-LV [?]
Deep-Att [?]
DeepLAU

To evaluate at scale, we also show in Table 3 the results on the English-French task with 12M sentence pairs described above. Luong et al. achieve their BLEU score with a six-layer deep Encoder-Decoder model. The two attention models, RNNsearch and RNNsearch-LV, follow with their respective BLEU scores. The previous best single NMT model, Deep-Att, uses a deep encoder and decoder. For DeepLAU, the BLEU score we obtain with our stacked LAU encoder and decoder is on par with the SOTA system. Note that Zhou et al. utilize a much larger depth as well as an external alignment model and extensive regularization to achieve their best results.

4.6 Analysis

We now study the main factors that influence our results on the NIST Chinese-English translation task. We also compare our approach with two SOTA topologies that have been used in building deep NMT systems.

  • Residual Networks (ResNet) are among the pioneering works [?] that utilize extra identity connections to enhance information flow such that very deep neural networks can be effectively optimized. Sharing a similar idea, Wu et al. leveraged residual connections to train deep RNNs.

  • Fast Forward (F-F) connections were proposed to reduce the propagation path length and are pioneering work on simplifying the training of deep NMT models [?]. The approach can be viewed as a parametric ResNet with shortcut connections between adjacent layers. The procedure takes a linear sum between the input and the newly computed state (see the sketch below for how this differs from LAU).
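To make the contrast with LAU concrete, the sketch below compares the three ways of letting information bypass the non-linearity; the layer functions and weight names are placeholders, and the F-F and LAU forms are simplified from the descriptions in this paper and the cited work.

def residual_layer(x, layer):
    """ResNet-style: identity shortcut added to the layer output."""
    return layer(x) + x

def fast_forward_layer(x, layer, W):
    """F-F-style (simplified): a parametric linear path summed with the output."""
    return layer(x) + W @ x

def lau_style_update(x, gru_part, g_t, W_x):
    """LAU-style: a gated linear transformation of the input joins the GRU update."""
    return gru_part + g_t * (W_x @ x)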

LAU vs. GRU

Table 4: BLEU scores of DeepLAU and DeepGRU with different model sizes.
# SYSTEM (depth, width) AVE.
1 DeepGRU (,)
2 DeepGRU (,)
3 DeepGRU (,)
4 DeepGRU (,)
5 4+ResNet (,)
6 4+F-F (,)
7 DeepLAU (,)
8 DeepLAU (,)
9 DeepLAU (,)
10 DeepLAU (,)

Table 4 shows the effect of the novel LAU. Comparing the DeepGRU and DeepLAU rows with matching depth and width, we see that DeepLAU achieves a higher average BLEU score than DeepGRU, showing the improvement brought by LAU in terms of BLEU. After increasing the model depth, the improvement is enlarged. When DeepGRU is trained with larger depth, the training becomes more difficult and its performance falls behind its shallower counterpart. For DeepLAU, in contrast, as can be seen in the deeper configurations, increasing the depth still yields further growth in BLEU score. Compared to previous short-cut connection methods (rows 5 and 6), LAU still achieves meaningful improvements over F-F connections and residual connections.

DeepLAU introduces more parameters than DeepGRU. In order to isolate the effect of DeepLAU when comparing models of the same parameter size, we increase the hidden size of the DeepGRU model. The corresponding row shows that, even with a GRU layer twice as wide, the resulting BLEU score is still worse than that of the corresponding DeepLAU model with fewer parameters.

Depth vs. Width Next we study the model size. In Table 4, starting from a small depth and width and gradually increasing the model depth, we achieve substantial improvements in terms of BLEU. Our DeepLAU model yields its best BLEU score at the largest depth we tried. We attempted to increase the model depth further with the same hidden size but failed to see additional improvements.

We then tried to increase the hidden size. By comparing the corresponding rows, we find that the improvement from a wider hidden size is relatively small. It is also worth mentioning that a deep and thin network with fewer parameters can still achieve results comparable to its shallow counterpart. This suggests that depth plays a more important role than width in increasing the complexity of neural networks, and that our deliberately designed LAU benefits the optimization of such deep models.

About Length
Figure 2: The BLEU scores of generated translations on the merged four test sets with respect to the lengths of source sentences.

A more detailed comparison between DeepLAU (4-layer encoder and 4-layer decoder), DeepLAU (2-layer encoder and 2-layer decoder) and DeepGRU (4-layer encoder and 4-layer decoder) suggests that deep architectures are essential to the superior performance of our system. In particular, we test the BLEU scores with respect to source sentence length on the merged test set. Clearly, in all curves, performance degrades as sentence length increases. However, the DeepLAU models yield consistently higher BLEU scores than the DeepGRU model on longer sentences. These observations are consistent with our intuition that a very deep RNN model is especially good at modeling the nested latent structures of relatively complicated sentences, and that LAU plays an important role in optimizing such a complex deep model.

5 Conclusion

We propose a Linear Associative Unit (LAU) which makes a fusion of both linear and non-linear transformations inside the recurrent unit. In this way, gradients decay much more slowly than in standard deep networks, which enables us to build deep neural networks for machine translation. Our empirical study shows that it can significantly improve the performance of NMT.

6 Acknowledgements

We sincerely thank the anonymous reviewers for their thorough reviewing and valuable suggestions. Wang’s work is partially supported by National Science Foundation for Deep Semantics Based Uighur to Chinese Machine Translation (ID 61662077). Qun Liu’s work is partially supported by Science Foundation Ireland in the ADAPT Centre for Digital Content Technology (www.adaptcentre.ie) at Dublin City University funded under the SFI Research Centres Programme (Grant 13/RC/2106) co-funded under the European Regional Development Fund.

Footnotes

  1. github.com/nyu-dl/dl4mt-tutorial/tree/master/session2
  2. For Chinese-English task, we apply case-insensitive NIST BLEU. For other tasks, we tokenized the reference and evaluated the performance with multi-bleu.pl. The metrics are exactly the same as in previous work.
  3. The corpora include LDC2002E18, LDC2003E07, LDC2003E14, Hansards portion of LDC2004T07, LDC2004T08 and LDC2005T06.
  4. It is also worth mentioning that the result reported by Zhou et al. does not include PosUnk, so this comparison is not entirely fair.