A Deep Memory-based Architecture for Sequence-to-Sequence Learning



We propose DeepMemory, a novel deep architecture for sequence-to-sequence learning, which performs the task through a series of nonlinear transformations from the representation of the input sequence (e.g., a Chinese sentence) to the final output sequence (e.g., its English translation). Inspired by the recently proposed Neural Turing Machine \citep{graves2014neural}, we store the intermediate representations in stacked layers of memories, and use read-write operations on the memories to realize the nonlinear transformations between the representations. The types of transformations are designed in advance but the parameters are learned from data. Through layer-by-layer transformations, DeepMemory can model the complicated relations between sequences necessary for applications such as machine translation between distant languages. The architecture can be trained with normal back-propagation on sequence-to-sequence data, and the learning can be easily scaled up to a large corpus. DeepMemory is broad enough to subsume the state-of-the-art neural translation model in \citep{cho} as a special case, while significantly improving upon that model with its deeper architecture. Remarkably, DeepMemory, being purely neural network-based, can achieve performance comparable to the traditional phrase-based machine translation system Moses with a small vocabulary and a modest parameter size.

1 Introduction

Sequence-to-sequence learning is a fundamental problem in natural language processing, with many important applications such as machine translation \citep{cho,googleS2S}, part-of-speech tagging \citep{collobert2011natural,vinyals2014grammar} and dependency parsing \citep{chen2014fast}. Recently, there has been significant progress in the development of purely neural network-based models for the task. Without loss of generality, we consider machine translation in this paper. Previous efforts on neural machine translation generally fall into two categories:

  • Encoder-Decoder: As illustrated in the left panel of Figure 1, models of this type first summarize the source sentence into a fixed-length vector with the encoder, typically implemented with a recurrent neural network (RNN) or a convolutional neural network (CNN), and then unfold the vector into the target sentence with the decoder, typically implemented with an RNN \citep{auli2013,kalchbrenner2013,ChoEMNLP,googleS2S};

  • Attention-Model: with RNNsearch \citep{cho,luongEMNLP2015} as a representative, models of this type represent the source sentence as a sequence of vectors produced by an RNN (e.g., a bi-directional RNN \citep{schuster1997bidirectional}), and then simultaneously conduct dynamic alignment with a gating neural network and generation of the target sentence with another RNN, as illustrated in the right panel of Figure 1.

Figure 1: Two types of neural machine translators. Note that the pictorial illustrations may deviate from individual models, e.g., \citep{googleS2S}, in modeling details.

Empirical comparisons between the two approaches indicate that the attention model is more efficient than the encoder-decoder approach: it can achieve comparable results with far fewer parameters and training instances \citep{jean-EtAl:2015:ACL-IJCNLP}. This superiority in efficiency comes mainly from the mechanism of dynamic alignment, which avoids representing the entire source sentence with a single fixed-length vector \citep{googleS2S}.

1.1 Deep Memory-based Architecture

Both encoder-decoders and attention models can be reformulated in the language of Neural Turing Machines (NTM) \citep{graves2014neural}, by recasting the different forms of representation as content stored in memories, and the operations on them as basic neural-net-controlled read-write actions, as illustrated in the left panel of Figure 2. This is clear after realizing that the attention mechanism \citep{cho} is essentially a special case of reading (in particular, with content-based addressing) in an NTM, applied to the memory that contains the representation of the source sentence. More importantly, under this new view, the whole process becomes transforming the source sentence into a memory (a vector or an array of vectors), and reading from this memory to further transform it into the target sentence. This architecture is intrinsically shallow in terms of the transformations on the sequence as an object, with essentially one hidden “layer”, as illustrated in the left panel of Figure 2. Note that although the RNN (as encoder/decoder, or equivalently as the controller in an NTM) can be infinitely deep, this depth merely deals with the temporal structure within the sequence. On the other hand, many sequence-to-sequence tasks, e.g., translation, are intrinsically complex and call for a more complex and powerful transformation mechanism than that in encoder-decoder and attention models.

Figure 2: The NTM view of existing neural sequence-to-sequence models (left) and the proposed deep memory-based architecture (right).

For this reason, we propose a novel deep memory-based architecture, named DeepMemory, for sequence-to-sequence learning. As shown in the right panel of Figure 2, DeepMemory carries out the task through a series of non-linear transformations from the input sequence, to different levels of intermediate memory-based representations, and eventually to the final output sequence. DeepMemory is essentially a customized and deep version of NTM with multiple stages of operations controlled by a program, where the choices of “layers” and types of read/write operations between layers are tailored for a particular task.

Through layer-by-layer stacking of transformations on memory-based representations, DeepMemory generalizes the notion of inter-layer nonlinear mapping in neural networks, and therefore introduces a powerful new deep architecture for sequence-to-sequence learning. The aim of DeepMemory is to learn representations of sequences better suited to the task (e.g., machine translation) through layer-by-layer transformations. Just as in deep neural networks (DNNs), we expect that stacking relatively simple transformations can greatly enhance the expressive power and the efficiency of DeepMemory, especially in handling translation between languages of vastly different natures (e.g., Chinese and English) and sentences with complicated structures. DeepMemory naturally subsumes current neural machine translation models \citep{cho,googleS2S} as special cases, but more importantly it accommodates many deeper alternatives with more modeling power, which are empirically superior to the current shallow architectures on machine translation tasks.

Although DeepMemory is initially proposed for machine translation, it can be adapted for other tasks that require substantial transformations of sequences, including paraphrasing, reasoning \citep{Reasoning2015}, and semantic parsing \citep{Enquirer2015}. Also, in defining the layer-by-layer transformations, we can go beyond the read-write operations proposed in \citep{graves2014neural} and design differentiable operations for the specific structures of the task (e.g., as in \citep{Enquirer2015}).

Road Map We will first discuss in Section 2 the read-write operations as a new form of nonlinear transformation, the building block of DeepMemory. Then in Section 3, we stack the transformations together to get the full DeepMemory architecture, and discuss several architectural variations of it. In Section 4 we report our empirical study of DeepMemory on a Chinese-English translation task.

2 Read-Write as a Nonlinear Transformation

We start by discussing read-write operations between two pieces of memory as a generalized form of nonlinear transformation. As illustrated in Figure 3 (left panel), this transformation is between two non-overlapping memories, namely the R-memory and the W-memory, with the W-memory being initially blank. A controller operates the read-heads to get the values from the R-memory (“reading”), which are then sent to the write-head for modifying the values at specific locations in the W-memory (“writing”). After those operations are completed, the content in the R-memory is considered transformed and written to the W-memory. These operations therefore define a transformation from one representation (in the R-memory) to another (in the W-memory), which is pictorially denoted in the right panel of Figure 3.

Figure 3: Read-write as a nonlinear transformation.

These basic components are more formally defined below, following those in a generic NTM \citep{graves2014neural}, with however important modifications for the nesting architecture, implementation efficiency and simplicity of description.

Memory: a memory is generally defined as a matrix of potentially infinite size, while here we limit ourselves to a pre-determined (pre-claimed) matrix with $n$ memory locations and $d$-dimensional values in each location. In our implementation of DeepMemory, $n$ is always instance-dependent and is pre-determined by the algorithm¹. Memories of different layers generally have different $n$ and $d$. Now suppose that for one particular instance (index omitted for notational simplicity), the system reads from the R-memory (denoted $M^R$, with $n_R$ units) and writes to the W-memory (denoted $M^W$, with $n_W$ units),

with $M^R \in \mathbb{R}^{n_R \times d_R}$ and $M^W \in \mathbb{R}^{n_W \times d_W}$.

Read/write heads: a read-head gets the values from the corresponding memory, following the instructions of the controller; the return of the reading is in turn fed back to update the state of the controller. DeepMemory allows multiple read-heads for one controller, with potentially different addressing strategies (see Section 2.1 for more details). A write-head simply takes the instruction from the controller and modifies the values at specific locations.

Controller: The core of the controller is a state machine, implemented as an RNN with Long Short-Term Memory (LSTM) units \citep{lstm}, with the state at time $t$ denoted $s_t$ (as illustrated in Figure 3). With $s_t$, the controller determines the reading and writing at time $t$, while the return of the reading in turn takes part in updating the state. For simplicity, only one reading and one writing are allowed at each time step, but more than one read-head is allowed. The main equations of the controller are then

$r_t = f_{\text{READ}}(s_{t-1}, M^R; \Theta_r), \quad s_t = f_{\text{DYN}}(s_{t-1}, r_t; \Theta_s), \quad \tilde{M}^W_t = f_{\text{WRITE}}(\tilde{M}^W_{t-1}, s_t; \Theta_w),$

where $f_{\text{DYN}}$, $f_{\text{READ}}$ and $f_{\text{WRITE}}$ are respectively the operators for dynamics, reading and writing², parameterized by $\Theta_s$, $\Theta_r$ and $\Theta_w$. In DeepMemory, 1) it is only allowed to read from the memory of a lower layer and write to the memory of a higher layer, and 2) reading a memory can only be performed after the writing to it has finished.

The above read-write operations transform the representation in the R-memory to a new representation in the W-memory, while the design choice of the read-write specifies the inner structure of the W-memory. The transformation is therefore jointly specified by the read-write strategies (e.g., the addressing described in Section 2.1) and the parameters learned in a supervised fashion (described later in Section 3.2). Memory with a designed inner structure, in this particular case a vector array with instance-specific length, offers more flexibility than a fixed-length vector in representing sequences. This representational flexibility is particularly advantageous when combined with a proper reading-writing strategy in defining nonlinear transformations for sequences, which serve as the building block of the deep architecture.
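To make the transformation concrete, here is a minimal sketch of one read-write layer in the simplest setting: L-addressing for both reading and writing, with a plain tanh recurrence standing in for the LSTM controller. All weight names (U, V, P) and their shapes are illustrative assumptions, not the paper's actual parameterization.

```python
import numpy as np

def read_write_transform(R, params):
    """One memory-to-memory transformation (L-addressing read and write sketch).

    R: the R-memory, an array of shape (n_r, d_r).
    params: hypothetical weight matrices --
        "U": (d_s, d_s) state transition, "V": (d_s, d_r) read projection,
        "P": (d_w, d_s) write projection.
    Returns the W-memory, of shape (n_r, d_w).
    """
    n_r, _ = R.shape
    d_s = params["U"].shape[0]
    d_w = params["P"].shape[0]
    W = np.zeros((n_r, d_w))             # W-memory starts blank
    s = np.zeros(d_s)                    # controller state s_t
    for t in range(n_r):                 # clock follows the spatial structure of R
        r = R[t]                         # f_READ: L-addressing returns unit t
        s = np.tanh(params["U"] @ s + params["V"] @ r)  # f_DYN: state update
        W[t] = np.tanh(params["P"] @ s)                 # f_WRITE: write unit t
    return W
```

Stacking calls to this function, with the returned W-memory serving as the next layer's R-memory, gives the layer-by-layer picture developed in Section 3.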

2.1 Addressing

Addressing for Reading

Location-based Addressing With location-based addressing (L-addressing), the reading is simply $r_t = M^R(t)$, i.e., the read-head returns the unit at location $t$. Notice that with L-addressing, the state machine automatically runs on a clock determined by the spatial structure of the R-memory; following this clock, the write-head operates the same number of times. One important variant, as suggested in \citep{cho,googleS2S}, is to go through the R-memory backwards after the forward reading pass, where the controller RNN has the same structure but is parameterized differently.

Content-based Addressing With content-based addressing (C-addressing), the return at time $t$ is

$r_t = \sum_i \frac{e^{g(s_{t-1},\, M^R(i))}}{\sum_{i'} e^{g(s_{t-1},\, M^R(i'))}}\, M^R(i),$

where $g(s_{t-1}, M^R(i))$, implemented as a DNN, gives an un-normalized “affiliation” score for unit $i$ in the R-memory. Clearly it is related to the attention mechanism introduced in \citep{cho} for machine translation and the general attention models discussed in \citep{deepmind} for computer vision. Content-based addressing offers the following two advantages in representation learning:

  1. it can focus on the right segment of the representation, as demonstrated by the automatic alignment observed in \citep{cho}, therefore better preserving the information in lower layers;

  2. it provides a way to alter the spatial structure of the sequence representation on a large scale, for which the re-ordering in machine translation is an intuitive example.
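In its minimal form, a content-based read is softmax attention over the memory units. The sketch below uses a bilinear scorer as a stand-in for the DNN g(·); the weight matrix Wg is an assumption for illustration.

```python
import numpy as np

def content_read(s, R, Wg):
    """C-addressing read: a softmax-weighted sum over all units of the R-memory.

    s: controller state, shape (d_s,); R: R-memory, shape (n_r, d_r).
    Wg: (d_s, d_r) bilinear weights, a simple stand-in for the DNN g(.)
    that scores each unit's affiliation. Returns the read vector r_t.
    """
    scores = s @ Wg @ R.T                  # un-normalized affiliation scores
    w = np.exp(scores - scores.max())      # numerically stable softmax
    w = w / w.sum()
    return w @ R                           # convex combination of memory units
```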

Hybrid Addressing: With hybrid addressing (H-addressing) for reading, we essentially use two read-heads (easily extended to more), one with L-addressing and the other with C-addressing. At each time $t$, the controller simply concatenates the returns of the two individual read-heads as the final return: $r_t = [r_t^L;\, r_t^C]$.

It is worth noting that with H-addressing, the tempo of the state machine is determined by the L-addressing read-head, which therefore creates a W-memory with the same number of locations in writing. As shown later, H-addressing can be readily extended to allow the read-heads to work on different memories.
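A hybrid read then just concatenates the returns of the two heads, with the location head also setting the tempo. The bilinear content scorer Wg is a hypothetical stand-in for the scoring DNN:

```python
import numpy as np

def hybrid_read(t, s, R, Wg):
    """H-addressing read: concatenate an L-addressing return and a
    C-addressing return over the R-memory R (shape (n_r, d_r))."""
    r_loc = R[t]                           # location head: unit at the clock position t
    scores = s @ Wg @ R.T                  # content head: softmax attention
    w = np.exp(scores - scores.max())
    w = w / w.sum()
    r_con = w @ R
    return np.concatenate([r_loc, r_con])  # final return has dimension 2 * d_r
```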

Addressing for Writing

Location-based Addressing With L-addressing, the writing is simple. At any time $t$, only the $t$-th location in the W-memory is updated: $M^W(t) = f_{\text{WRITE}}(s_t)$, which will be kept unchanged afterwards. For both location- and content-based addressing, $f_{\text{WRITE}}$ is implemented as a DNN with weights $\Theta_w$.

Content-based Addressing In a way similar to C-addressing for reading, the units to write are determined through a gating network, and the values in the W-memory at time $t$ are given by

$\tilde{M}^W_t(i) = \big(1 - F_t\, w_t(i)\big)\, \tilde{M}^W_{t-1}(i) + w_t(i)\, f_{\text{WRITE}}(s_t),$

where $\tilde{M}^W_t(i)$ stands for the values of the $i$-th location in the W-memory at time $t$, $F_t$ is the forgetting factor (similarly defined as in \citep{graves2014neural}), and $w_t(i)$ is the normalized weight (with its unnormalized score implemented also with a DNN) given to the $i$-th location at time $t$.
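A vectorized sketch of this gated write, assuming a fixed scalar forgetting factor and simple linear stand-ins for the location scorer and the written content:

```python
import numpy as np

def content_write(s, W, Wg, Wa, forget=0.5):
    """C-addressing write with a forgetting factor (NTM-style sketch).

    W: the W-memory, shape (n_w, d_w); the updated memory is returned
    (the input array is not modified). Wg (d_w, d_s) scores locations
    against the state; Wa (d_w, d_s) maps the state to the written values.
    Both are hypothetical stand-ins for the DNNs in the text; `forget`
    plays the role of the forgetting factor F_t.
    """
    scores = W @ (Wg @ s)                  # un-normalized location scores
    w = np.exp(scores - scores.max())
    w = w / w.sum()                        # normalized location weights w_t(i)
    a = np.tanh(Wa @ s)                    # content to be written
    # each location blends its old value with the new content, gated by w
    return (1.0 - forget * w)[:, None] * W + w[:, None] * a
```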

2.2 Types of Nonlinear Transformations

As the most “conventional” special case, if we use L-addressing for both reading and writing, we actually recover the familiar structure of units found in an RNN with stacked layers \citep{pascanu2014construct}. Indeed, as illustrated in the figure to the right of the text, this read-write strategy invokes a relatively local dependency based on the original spatial order in the R-memory. It is not hard to show that we can recover some of the deep RNN models in \citep{pascanu2014construct} after stacking several layers of read-write operations like this. This deep architecture actually partially accounts for the great performance of the Google neural machine translation model \citep{googleS2S}.

C-addressing, however, be it for reading or writing, offers a means of major reordering of the units, while L-addressing can add to it the spatial structure of the lower-layer memory. In this paper, we consider four types of transformations induced by combinations of the read and write addressing strategies, listed pictorially in Figure 4. Notice that 1) we only include one combination with C-addressing for writing, since it is computationally expensive to optimize when combined with a C-addressing reading (see Section 3.2 for some analysis), and 2) for one particular read-write strategy there is still a fair amount of implementation detail to be specified, which we omit due to the space limit. One can easily design other read/write strategies, for example a particular hybrid strategy for writing.

Figure 4: Examples of read-write strategies.

3 DeepMemory: Stacking Them Together

As illustrated in Figure 5 (left panel), the stacking is straightforward: we can simply apply one transformation on top of another, with the W-memory of a lower layer serving as the R-memory of the layer above. The entire deep architecture of DeepMemory, with its diagram in Figure 5 (right panel), can therefore be defined accordingly. Basically, it starts with a symbol sequence (Layer-0), then moves to the sequence of word embeddings (Layer-1), and goes through layers of transformations to reach the final intermediate layer (Layer-$L$), which will be read by the output layer.

The operations in output layer, relying on another LSTM to generate the target sequence, are similar to a memory read-write, with the following two differences:

  • it predicts the symbols for the target sequence, and takes the “guess” as part of the input to update the state of the generating LSTM, while in a memory read-write, there is no information flow from higher layers to the controller;

  • since the target sequence in general has a different length from the top-layer memory, it uses only pure C-addressing reading and relies on the built-in mechanism of the generating LSTM to stop (i.e., after generating an End-of-Sentence token).
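The two points above can be sketched as a greedy decoding loop: attend over the top-layer memory, predict a symbol, feed the guess back, and stop at End-of-Sentence. The weight names, the tanh recurrence in place of the generating LSTM, and the start-symbol convention are all assumptions for illustration.

```python
import numpy as np

def generate(M, params, eos=0, max_len=20):
    """Greedy decoding sketch for the output layer.

    M: top-layer memory, shape (n, d). At each step the recurrent state
    attends over M (C-addressing read), predicts a symbol, and feeds the
    guess back in -- the information flow absent in a plain memory
    read-write. All weight names in `params` are hypothetical.
    """
    d_s = params["U"].shape[0]
    s = np.zeros(d_s)
    y = params["E"].shape[0] - 1          # assumption: last embedding row is a start symbol
    out = []
    for _ in range(max_len):
        scores = s @ params["Wg"] @ M.T   # C-addressing over the memory
        w = np.exp(scores - scores.max())
        w = w / w.sum()
        r = w @ M                          # read vector
        e = params["E"][y]                 # embedding of the previous guess
        s = np.tanh(params["U"] @ s + params["V"] @ r + params["B"] @ e)
        y = int(np.argmax(params["O"] @ s))  # greedy pick of the next symbol
        out.append(y)
        if y == eos:                       # built-in stopping mechanism
            break
    return out
```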

Figure 5: Illustration of stacked layers of memory (left) and the overall diagram of DeepMemory (right).

Memory of different layers could be equipped with different read-write strategies, and even for the same strategy, the configurations and learned parameters are in general different. This is in contrast to DNNs, for which the transformations of different layers are more homogeneous (mostly linear transforms with nonlinear activation function). A sensible architecture design in combining the nonlinear transformations can greatly affect the performance of the model, on which however little is known and future research is needed.

3.1 Cross-Layer Reading

In addition to the generic read-write strategies in Section 2.2, we also introduce cross-layer reading into DeepMemory for more modeling flexibility. In other words, for writing to any Layer-$\ell$, DeepMemory allows reading from more than one layer lower than $\ell$, instead of just Layer-$(\ell-1)$. More specifically, we consider the following two cases.

Figure 6: Cross-layer reading.

Memory-Bundle: A Memory-Bundle, as shown in Figure 6 (left panel), concatenates the units of two aligned memories in reading, regardless of the addressing strategy. Formally, the $i$-th location in the bundle of memory Layer-$\ell$ and Layer-$\ell'$ would be $[M^{\ell}(i);\, M^{\ell'}(i)]$. Since it requires strict alignment between the memories put together, a Memory-Bundle is usually built on layers created with spatial structure of the same origin (see Section 3.3 for examples).
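Since a Memory-Bundle is nothing but location-wise concatenation of aligned memories, it reduces to a one-liner:

```python
import numpy as np

def memory_bundle(M_a, M_b):
    """Memory-Bundle: concatenate aligned units of two memories location-wise,
    so the i-th unit of the bundle is [M_a(i); M_b(i)]."""
    assert M_a.shape[0] == M_b.shape[0], "bundled memories must be aligned"
    return np.concatenate([M_a, M_b], axis=1)
```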

Short-Cut Unlike a Memory-Bundle, a Short-Cut allows reading from layers with potentially different inner structures by using multiple read-heads, as shown in Figure 6 (right panel). For example, one can use a C-addressing read-head on memory Layer-$\ell$ and an L-addressing read-head on Layer-$\ell'$ for the writing to memory Layer-$\ell''$, with $\ell'' > \max(\ell, \ell')$.

3.2 Optimization

For any designed architecture, the parameters to be optimized include the parameters of the dynamics, reading and writing operators for each controller, the parameters of the LSTM in the output layer, and the word embeddings. Since the reading from each memory can only be done after the writing to it completes, the “feed-forward” process can be described at two scales: 1) the flow from the memory of a lower layer to the memory of a higher layer, and 2) the forming of the memory at each layer, controlled by the corresponding state machine. Accordingly, in optimization, the flow of the “correction signal” also propagates at two scales:

  • On the “cross-layer” scale: the signal starts with the output layer and propagates from higher layers to lower layers, until Layer-1 for the tuning of word embedding;

  • On the “within-layer” scale: the signal back-propagates through time (BPTT), controlled by the corresponding state machine (LSTM). In optimization, there is a correction for each reading or writing on each location in a memory, making C-addressing more expensive than L-addressing, for it in general involves all locations in the memory at each time $t$.

The optimization can be done via standard back-propagation (BP) aiming to maximize the likelihood of the target sequence. In practice, we use standard stochastic gradient descent (SGD) with mini-batches (size 80) and a learning rate controlled by AdaDelta \citep{adadelta}.
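For reference, a single AdaDelta update \citep{adadelta} takes the following standard form (not specific to DeepMemory): per-parameter step sizes come from running averages of squared gradients and squared updates.

```python
import numpy as np

def adadelta_step(theta, grad, state, rho=0.95, eps=1e-6):
    """One AdaDelta update (Zeiler, 2012).

    state holds the running averages "Eg2" (of squared gradients) and
    "Edx2" (of squared updates), both initialized to zeros.
    Returns the updated parameters and state.
    """
    state["Eg2"] = rho * state["Eg2"] + (1 - rho) * grad ** 2
    delta = -np.sqrt(state["Edx2"] + eps) / np.sqrt(state["Eg2"] + eps) * grad
    state["Edx2"] = rho * state["Edx2"] + (1 - rho) * delta ** 2
    return theta + delta, state
```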

3.3 Architectural Variations of DeepMemory

We discuss four representative special cases of DeepMemory: Arc-I, II, III and IV, as novel deep architectures for machine translation. We also show that current neural machine translation models like RNNsearch can be described in the framework of DeepMemory as a relatively shallow case.

Arc-I The first proposal, including two variants (denoted Arc-I-1 and Arc-I-2), is designed to demonstrate the effect of C-addressing reading between intermediate memory layers, with the diagram shown in the figure to the right of the text. Both variants employ an L-addressing reading from memory Layer-1 (the embedding layer) and L-addressing writing to Layer-2. After that, Arc-I-1 writes to Layer-3 (L-addressing) based on its H-addressing reading (two read-heads) on Layer-2, while Arc-I-2 uses L-addressing to read from Layer-2. Once Layer-3 is formed, it is then put together with Layer-2 as a Memory-Bundle, from which the output layer reads (C-addressing) to predict the target sequence. The Memory-Bundle, with its empirical advantage over single layers (see Section 4.2), is also used in the other three architectures for generating the target sequence or forming intermediate layers.

Arc-II As an architecture similar to Arc-I, Arc-II is designed to investigate the effect of C-addressing reading from different layers of memory (the Short-Cut in Section 3.1). It uses the same strategy as Arc-I in generating memory Layers 1 and 2, but differs in generating Layer-3, where Arc-II uses C-addressing reading on Layer-2 but L-addressing reading on Layer-1. Once Layer-3 is formed, it is then put together with Layer-2 as a Memory-Bundle, which is then read by the output layer for predicting the target sequence.

Arc-III We intend to use this design to study a deeper architecture and a more complicated addressing strategy. Arc-III follows the same way as Arc-II to generate Layer-1, Layer-2 and Layer-3. After that it uses two read-heads combined with an L-addressing write to generate Layer-4, where the two read-heads consist of an L-addressing read-head on Layer-1 and a C-addressing read-head on the memory bundle of Layer-2 and Layer-3. After the generation of Layer-4, it puts Layers 2, 3 and 4 together as a bigger Memory-Bundle for the output layer. Arc-III, with 4 intermediate layers, is the deepest among the four special cases.

Arc-IV This proposal is designed to study the efficacy of C-addressing writing in forming intermediate representations. It employs an L-addressing reading from memory Layer-1 and L-addressing writing to Layer-2. After that, it uses an L-addressing reading on Layer-2 to write to Layer-3 with C-addressing. For the C-addressing writing to Layer-3, all locations in Layer-3 are randomly initialized. Once Layer-3 is formed, it is then bundled with Layer-2 for the reading (C-addressing) of the output layer.

Relation to other neural machine translators

As pointed out earlier, RNNsearch \citep{cho}, with its automatic alignment, is a special case of DeepMemory with a shallow architecture. As pictorially illustrated in Figure 7, it employs L-addressing reading on memory Layer-1 (the embedding layer) and L-addressing writing to Layer-2, which is then read (C-addressing) by the output layer to generate the target sequence. As shown in Figure 7, Layer-2 is the only intermediate layer created by nontrivial read-write operations.

On the other hand, the connection between DeepMemory and encoder-decoder architectures is less obvious, since they usually require reading only the last cell (i.e., a fixed-length vector representation) between certain layers. More specifically, \citep{googleS2S} can be viewed as DeepMemory with stacked layers of L-addressing read-write (described in Section 2.2) for both the encoder and the decoder part, where the two are connected through the last hidden states of the LSTMs of the corresponding layers.

Figure 7: RNNsearch as a special case of DeepMemory.

4 Experiments

We report our empirical study of applying DeepMemory to Chinese-to-English translation. Our training data consist of 1.25M sentence pairs extracted from LDC corpora, with 27.9M Chinese words and 34.5M English words respectively. We choose the NIST 2002 (MT02) dataset as our development set, and the NIST 2003 (MT03), 2004 (MT04) and 2005 (MT05) datasets as our test sets. We use the case-insensitive 4-gram NIST BLEU score as our evaluation metric, and the sign-test \citep{collins2005clause} as the statistical significance test. In training the neural networks, we limit the source and target vocabularies to the most frequent 16K words in Chinese and English, covering approximately 95.8% and 98.3% of the two corpora respectively.

We compare our method with two state-of-the-art SMT and NMT³ models:

  • Moses \citep{koehn2007}: an open-source phrase-based translation system with default configuration and a 4-gram language model trained on the target portion of the training data with SRILM \citep{stolcke2002srilm};

  • RNNsearch \citep{cho}: an attention-based NMT model with the default setting (RNNsearch), as well as an optimally re-scaled version of the model (on the sizes of both embedding and hidden layers, with about 50% more parameters), denoted RNNsearch*.

For a fair comparison, 1) the output layer in each DeepMemory variant is implemented with the Gated Recurrent Units (GRU) of \citep{cho}, and 2) all the DeepMemory architectures are designed to have the same embedding size as RNNsearch, with parameter sizes less than or comparable to that of RNNsearch*.

4.1 Results

The main results of the different models are given in Table 1. RNNsearch* (the best NMT baseline) is about 1.8 BLEU points behind Moses on average, which is consistent with the observations made by other authors on different machine translation tasks \citep{cho,jean-EtAl:2015:ACL-IJCNLP}. Remarkably, some sensible designs of DeepMemory (e.g., Arc-II) can already achieve performance comparable to Moses with only 42M parameters, while RNNsearch* has 46M parameters.

Clearly, all DeepMemory architectures yield performance significantly better than (Arc-I-1, Arc-II & Arc-III) or comparable to (Arc-I-2 & Arc-IV) that of the NMT baselines. Among them, Arc-II outperforms the best NMT baseline (RNNsearch*) by about 1.5 BLEU points on average with fewer parameters.

Systems MT03 MT04 MT05 Average # Parameters
RNNsearch 29.02 31.25 28.32 29.53 31M
RNNsearch* 30.28 31.72 28.52 30.17 46M
Arc-I-2 28.98 32.02 29.53* 30.18 54M
Arc-I-1 30.14 32.70* 29.40* 30.75 54M
Arc-II 31.27* 33.02* 30.63* 31.64 42M
Arc-III 30.15 33.46* 29.49* 31.03 53M
Arc-IV 29.88 32.00 28.76 30.21 48M
Moses 31.61 33.48 30.75 31.95 –
Table 1: BLEU-4 scores (%) of the NMT baselines (RNNsearch and RNNsearch*), the DeepMemory architectures (Arc-I, II, III and IV), and the phrase-based SMT system (Moses). The “*” indicates that the result is significantly (p < 0.05) better than that of RNNsearch*.

4.2 Discussion

Figure 8: The BLEU scores of generated translations on the merged three test sets with respect to the lengths of the source sentences. The numbers on the X-axis stand for source sentences longer than the corresponding length, e.g., 30 for source sentences with more than 30 words.

About Depth: A more detailed comparison among RNNsearch (two layers), Arc-II (three layers) and Arc-III (four layers), both quantitative and qualitative, suggests that deeper architectures are essential to the superior performance of DeepMemory. Although the deepest architecture, Arc-III, is about 0.6 BLEU behind Arc-II on average, its performance on long sentences is significantly better. Figure 8 shows the BLEU scores of generated translations on the test sets with respect to the length of the source sentences. In particular, we test the BLEU scores on sentences longer than {0, 10, 20, 30, 40, 50, 60} words in the merged test set of MT03, MT04 and MT05. Clearly, on sentences longer than 30 words, Arc-III yields consistently higher BLEU scores than Arc-II. This observation is further confirmed by our inspection of translation quality (see Appendix), and is consistent with our intuition that DeepMemory, with its multiple layers of transformation, is especially good at modeling the transformations of representations essential to machine translation of relatively complicated sentences.
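The length-bucketed evaluation behind Figure 8 amounts to filtering the merged test set by source length before scoring. A sketch, assuming a corpus-level scorer (e.g., a BLEU implementation) is supplied by the caller:

```python
def score_by_min_length(sources, hyps, refs, scorer,
                        thresholds=(0, 10, 20, 30, 40, 50, 60)):
    """Score translations on subsets of sentences whose source is longer than
    each threshold (in words). `scorer` is any corpus-level metric taking
    (hypotheses, references) -- e.g. a BLEU implementation, assumed supplied."""
    results = {}
    for min_len in thresholds:
        idx = [i for i, src in enumerate(sources) if len(src.split()) > min_len]
        results[min_len] = scorer([hyps[i] for i in idx],
                                  [refs[i] for i in idx])
    return results
```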

About C-addressing Read: Further comparison between Arc-I-1 and Arc-I-2 (similar parameter sizes) suggests that C-addressing reading plays an important role in learning a powerful transformation between intermediate representations, necessary for translation between language pairs with vastly different syntactic structures. This conjecture is further verified by the good performance of Arc-II and Arc-III, both of which have C-addressing read-heads on their intermediate memory layers. However, if memory Layer-$\ell$ is formed with only C-addressing reading from memory Layer-$(\ell-1)$, and serves as the only memory going to later stages, the performance is usually less satisfying. Comparison of this design with H-addressing (results omitted here) suggests that another read-head with L-addressing can prevent the transformation from going astray by adding the tempo from a memory with a clearer temporal structure.

About C-addressing Write: The BLEU scores of Arc-IV are lower than those of Arc-II but comparable to those of RNNsearch, suggesting that writing with C-addressing alone yields a reasonable representation. A closer look shows that although Arc-IV performs poorly on very long sentences (e.g., source sentences with over 60 words), it does fairly well on sentences of normal length. More specifically, on source sentences with no more than 40 words, it outperforms RNNsearch by 0.79 BLEU points. One possible explanation is that our particular implementation of C-addressing for writing in Arc-IV (Section 3.3) relies heavily on the randomly initialized content and is hard to optimize, especially when the structure of the sentence is complex; it might need to be “guided” by another write-head or some smart initialization.

About Cross-layer Read: As another observation, cross-layer reading almost always helps: the performances of Arc-I, II, III and IV unanimously drop after removing the Memory-Bundle and Short-Cut (results omitted here), even after broadening the memory units to keep the parameter size unchanged. This might be due to the flexibility gained in mixing different addressing modes and representations from different stages.

5 Conclusion

We have proposed DeepMemory, a novel architecture for sequence-to-sequence learning, inspired by the recent work on Neural Turing Machines \citep{graves2014neural} and neural machine translation \citep{cho}. DeepMemory builds a deep architecture for processing sequence data on the basis of a series of transformations induced by read-write operations on a stack of memories. This new architecture significantly improves the expressive power of models for sequence-to-sequence learning, as verified by our empirical study on a benchmark machine translation task.

APPENDIX: Actual Translation Examples

In this appendix we give some example translations from DeepMemory, more specifically from Arc-II and Arc-III, and compare them against the reference and the translation given by RNNsearch. We focus on long sentences with relatively complicated structures.

Example Translation of Arc-II

Example Translation of Arc-III


  1. It is possible to let the controller learn to determine the length of the memory, but that does not yield better performance on our tasks and is therefore omitted here.
  2. Note that our definition of writing is slightly different from that in \citep{graves2014neural}.
  3. There has been recent progress on aggregating multiple models or enlarging the vocabulary (e.g., in \citep{jean-EtAl:2015:ACL-IJCNLP}), but here we focus on the generic models.


  1. Auli, Michael, Galley, Michel, Quirk, Chris, and Zweig, Geoffrey. Joint language and translation modeling with recurrent neural networks. In Proceedings of EMNLP, pp. 1044–1054, 2013.
  2. Bahdanau, Dzmitry, Cho, Kyunghyun, and Bengio, Yoshua. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR, 2015.
  3. Chen, Danqi and Manning, Christopher D. A fast and accurate dependency parser using neural networks. In Proceedings of EMNLP, pp. 740–750, 2014.
  4. Cho, Kyunghyun, van Merrienboer, Bart, Gulcehre, Caglar, Bougares, Fethi, Schwenk, Holger, and Bengio, Yoshua. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of EMNLP, pp. 1724–1734, 2014.
  5. Collins, Michael, Koehn, Philipp, and Kučerová, Ivona. Clause restructuring for statistical machine translation. In Proceedings of ACL, pp. 531–540, 2005.
  6. Collobert, Ronan, Weston, Jason, Bottou, Léon, Karlen, Michael, Kavukcuoglu, Koray, and Kuksa, Pavel. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493–2537, 2011.
  7. Graves, Alex, Wayne, Greg, and Danihelka, Ivo. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014.
  8. Gregor, Karol, Danihelka, Ivo, Graves, Alex, and Wierstra, Daan. DRAW: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.
  9. Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
  10. Jean, Sébastien, Cho, Kyunghyun, Memisevic, Roland, and Bengio, Yoshua. On using very large target vocabulary for neural machine translation. In Proceedings of ACL-IJCNLP, 2015.
  11. Kalchbrenner, Nal and Blunsom, Phil. Recurrent continuous translation models. In Proceedings of EMNLP, pp. 1700–1709, 2013.
  12. Koehn, Philipp, Hoang, Hieu, Birch, Alexandra, Callison-Burch, Chris, Federico, Marcello, Bertoldi, Nicola, Cowan, Brooke, Shen, Wade, Moran, Christine, Zens, Richard, Dyer, Chris, Bojar, Ondrej, Constantin, Alexandra, and Herbst, Evan. Moses: Open source toolkit for statistical machine translation. In Proceedings of ACL on interactive poster and demonstration sessions, pp. 177–180, Prague, Czech Republic, June 2007.
  13. Luong, Thang, Pham, Hieu, and Manning, Christopher D. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 1412–1421, 2015.
  14. Pascanu, Razvan, Gulcehre, Caglar, Cho, Kyunghyun, and Bengio, Yoshua. How to construct deep recurrent neural networks. In Proceedings of ICLR, 2014.
  15. Peng, Baolin, Lu, Zhengdong, Li, Hang, and Wong, Kam-Fai. Towards neural network-based reasoning. arXiv preprint arXiv:1508.05508, 2015.
  16. Schuster, Mike and Paliwal, Kuldip K. Bidirectional recurrent neural networks. Signal Processing, IEEE Transactions on, 45(11):2673–2681, 1997.
  17. Stolcke, Andreas et al. SRILM: an extensible language modeling toolkit. In Proceedings of ICSLP, volume 2, pp. 901–904, 2002.
  18. Sutskever, Ilya, Vinyals, Oriol, and Le, Quoc V. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104–3112, 2014.
  19. Vinyals, Oriol, Kaiser, Lukasz, Koo, Terry, Petrov, Slav, Sutskever, Ilya, and Hinton, Geoffrey. Grammar as a foreign language. arXiv preprint arXiv:1412.7449, 2014.
  20. Yin, Pengcheng, Lu, Zhengdong, Li, Hang, and Kao, Ben. Neural enquirer: Learning to query tables. arXiv preprint arXiv:1512.00965, 2015.
  21. Zeiler, Matthew D. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.