Recurrent Neural Networks with External Memory for Language Understanding
Recurrent Neural Networks (RNNs) have become increasingly popular for the task of language understanding. In this task, a semantic tagger is deployed to associate a semantic label with each word in an input sequence. The success of RNNs may be attributed to their ability to memorize long-term dependencies that relate the current-time semantic label prediction to observations many time instances away. However, the memory capacity of simple RNNs is limited because of the gradient vanishing and exploding problem. We propose to use an external memory to improve the memorization capability of RNNs. We conducted experiments on the ATIS dataset and observed that the proposed model was able to achieve state-of-the-art results. We compare our proposed model with alternative models and report analysis results that may provide insights for future research.
|Baolin Peng, Kaisheng Yao|
|The Chinese University of Hong Kong|
Index Terms: Recurrent Neural Network, Language Understanding, Long Short-Term Memory, Neural Turing Machine
1 Introduction
Neural network based methods have recently demonstrated promising results on many natural language processing tasks [1, 2]. Specifically, recurrent neural network (RNN) based methods have shown strong performance, for example, in language modeling [3], language understanding [4], and machine translation [5, 6].
The main task of a language understanding (LU) system is to associate words with semantic meanings [7, 8, 9]. For example, in the sentence "Please book me a ticket from Hong Kong to Seattle", an LU system should tag "Hong Kong" as the departure city of a trip and "Seattle" as its arrival city. Widely used approaches include conditional random fields (CRFs) [8, 10], support vector machines [11], and, more recently, RNNs [4, 12].
An RNN consists of an input layer, a recurrent hidden layer, and an output layer. The input layer reads each word and the output layer produces probabilities of semantic labels. The success of RNNs can be attributed to the fact that RNNs, if successfully trained, can relate the current prediction to input words that are several time steps away. However, RNNs are difficult to train because of the gradient vanishing and exploding problem [13, 14]. The problem also limits RNNs' memory capacity, because error signals may not be back-propagated far enough.
There have been two lines of research to address this problem. One is to design learning algorithms that avoid gradient exploding, e.g., using gradient clipping [14], and/or gradient vanishing, e.g., using second-order optimization methods [15]. Alternatively, researchers have proposed more advanced model architectures, in contrast to the simple RNN that uses, e.g., the Elman architecture [16]. Specifically, long short-term memory (LSTM) [17, 18] neural networks have three gates that control the flow of error signals. The recently proposed gated recurrent neural network (GRNN) [6] may be considered a simplified LSTM with fewer gates.
Along this line of research on developing more advanced architectures, this paper focuses on a novel neural network architecture. Inspired by the recent work in [19], we extend the simple RNN with the Elman architecture to use an external memory. The external memory stores past hidden layer activities, not only from the current sentence but also from past sentences. To predict outputs, the model uses the input observation together with a content vector retrieved from the external memory. The proposed model performs strongly on a common language understanding dataset and achieves new state-of-the-art results.
2 Background
2.1 Language understanding
A language understanding system predicts an output sequence of tags, such as named-entity tags, given an input sequence of words. Often, the output and input sequences are aligned. In these alignments, an input word may correspond to a null tag or a single tag. An example is given in Table 1.
Given a $T$-length input word sequence $x_1^T$, a corresponding output tag sequence $y_1^T$, and an alignment $A$, the posterior probability $p(y_1^T | x_1^T)$ is approximated by

$$p(y_1^T | x_1^T, A) \approx \prod_{t=1}^{T} p(y_t | x_{a_t - C}^{a_t + C}) \qquad (1)$$

where $C$ is the size of a context window and $a_t$ indexes the positions in the alignment.
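The context-window factorization above can be sketched in code. This is illustrative only; the padding token and the window radius `C` below are our assumptions, not details from the paper.

```python
# Sketch of the context-window factorization in Eq. (1): each tag is
# predicted from a window of C words on either side of its aligned word.
# The "<PAD>" token and C=1 are illustrative assumptions.

def context_windows(words, C=1, pad="<PAD>"):
    """Return, for each position, the (2*C + 1)-word window centred on it."""
    padded = [pad] * C + list(words) + [pad] * C
    return [padded[i:i + 2 * C + 1] for i in range(len(words))]

windows = context_windows(["book", "a", "ticket", "to", "seattle"], C=1)
# each tag y_t would be predicted from windows[t]
```

Each window is what a tagger conditioned on $x_{a_t-C}^{a_t+C}$ would observe at position $t$.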
2.2 Simple recurrent neural networks
The above posterior probability can be computed using an RNN. An RNN consists of an input layer $x_t$, a hidden layer $h_t$, and an output layer $y_t$. In the Elman architecture [16], the hidden layer activity $h_t$ depends both on the input $x_t$ and, recurrently, on the past hidden layer activity $h_{t-1}$.
Because of the recurrence, the hidden layer activity $h_t$ depends on the observation sequence from its beginning. The posterior probability is therefore computed as

$$p(y_1^T | x_1^T, A) = \prod_{t=1}^{T} p(y_t | x_1^{a_t}) \qquad (2)$$

where the output and the hidden layer activity are computed as

$$y_t = g(W_{hy} h_t) \qquad (3)$$
$$h_t = f(W_{xh} x_t + W_{hh} h_{t-1}) \qquad (4)$$

In the above equations, $g(\cdot)$ is the softmax function and $f(\cdot)$ is the sigmoid or tanh function. We denote the above model as the simple RNN, to contrast it with the more advanced recurrent neural networks described below.
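As a concrete illustration, one forward step of the simple RNN above can be sketched as follows. The dimensions, random initialization, and choice of tanh for the hidden nonlinearity are our assumptions.

```python
import numpy as np

# One Elman RNN step: hidden state from input and past hidden state,
# softmax output over semantic labels. Biases omitted, as in the text.
rng = np.random.default_rng(0)
d_x, d_h, d_y = 4, 5, 3                      # assumed toy dimensions
W_xh = rng.normal(size=(d_h, d_x)) * 0.1
W_hh = rng.normal(size=(d_h, d_h)) * 0.1
W_hy = rng.normal(size=(d_y, d_h)) * 0.1

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def rnn_step(x_t, h_prev):
    h_t = np.tanh(W_xh @ x_t + W_hh @ h_prev)   # hidden recurrence
    y_t = softmax(W_hy @ h_t)                   # label probabilities
    return h_t, y_t

h, y = rnn_step(rng.normal(size=d_x), np.zeros(d_h))
```

Running the step over a sentence, carrying `h` forward, yields the per-word label distributions the tagger needs.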
2.3 Recurrent neural networks using gating functions
The current hidden layer activity of a simple RNN is related to its past hidden layer activity via the nonlinear function in Eq. (4). The non-linearity can cause errors back-propagated from $h_t$ to explode or to vanish. This phenomenon prevents the simple RNN from learning patterns that span long time dependences [13].
To tackle this problem, the long short-term memory (LSTM) neural network was proposed in [17], introducing memory cells that depend linearly on their past values. LSTM also introduces three gating functions, namely the input gate, forget gate and output gate. We follow a variant of LSTM in [18].
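For reference, one step of a standard LSTM cell can be sketched as below. It shows the three gates and the linearly updated memory cell; this is the common textbook formulation, not necessarily the exact variant of [18], and all shapes and initializations are our assumptions.

```python
import numpy as np

# One LSTM step: input (i), forget (f) and output (o) gates, plus a
# memory cell c_t that depends *linearly* on its past value, which is
# what lets error signals survive back-propagation. Biases omitted.
rng = np.random.default_rng(1)
d_x, d_h = 4, 5
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
W = {g: rng.normal(size=(d_h, d_x + d_h)) * 0.1 for g in "ifoc"}

def lstm_step(x_t, h_prev, c_prev):
    z = np.concatenate([x_t, h_prev])
    i = sigmoid(W["i"] @ z)                      # input gate
    f = sigmoid(W["f"] @ z)                      # forget gate
    o = sigmoid(W["o"] @ z)                      # output gate
    c_t = f * c_prev + i * np.tanh(W["c"] @ z)   # cell: linear in c_prev
    h_t = o * np.tanh(c_t)
    return h_t, c_t

h, c = lstm_step(rng.normal(size=d_x), np.zeros(d_h), np.zeros(d_h))
```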
More recently, a gated recurrent neural network (GRNN) [6] was proposed. Instead of the three gating functions in LSTM, it uses two gates.
One is a reset gate $r_t$ that relates a candidate activation $\hat{h}_t$ with the past hidden layer activity $h_{t-1}$; i.e.,

$$\hat{h}_t = \tanh(W x_t + U (r_t \odot h_{t-1})) \qquad (5)$$

where $\hat{h}_t$ is the candidate activation. $W$ and $U$ are the matrices that relate the current observation and the past hidden layer activity. $\odot$ is the element-wise product.

The second gate is an update gate $z_t$ that interpolates the candidate activation and the past hidden layer activity to update the current hidden layer activity; i.e.,

$$h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \hat{h}_t \qquad (6)$$

These gates are usually computed as functions of the current observation and the past hidden layer activity; i.e.,

$$r_t = \sigma(W_r x_t + U_r h_{t-1}) \qquad (7)$$
$$z_t = \sigma(W_z x_t + U_z h_{t-1}) \qquad (8)$$

where $W_r$ and $U_r$ are the weights on the observation and on the past hidden layer activity for the reset gate; $W_z$ and $U_z$ are similarly defined for the update gate.
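One step of the GRNN described above can be sketched as follows; the toy dimensions and random initialization are our assumptions.

```python
import numpy as np

# One GRNN/GRU step: reset gate r masks the past state when forming the
# candidate, update gate z interpolates candidate and past state.
rng = np.random.default_rng(2)
d_x, d_h = 4, 5
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
W,  U  = rng.normal(size=(d_h, d_x)) * 0.1, rng.normal(size=(d_h, d_h)) * 0.1
Wr, Ur = rng.normal(size=(d_h, d_x)) * 0.1, rng.normal(size=(d_h, d_h)) * 0.1
Wz, Uz = rng.normal(size=(d_h, d_x)) * 0.1, rng.normal(size=(d_h, d_h)) * 0.1

def grnn_step(x_t, h_prev):
    r = sigmoid(Wr @ x_t + Ur @ h_prev)             # reset gate
    z = sigmoid(Wz @ x_t + Uz @ h_prev)             # update gate
    h_cand = np.tanh(W @ x_t + U @ (r * h_prev))    # candidate activation
    return (1.0 - z) * h_prev + z * h_cand          # interpolation

h = grnn_step(rng.normal(size=d_x), np.zeros(d_h))
```

Note the interpolation: with `z` near zero the state is simply carried over, which is what gives the gated network its longer memory compared with the simple RNN.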
3 The RNN-EM architecture
In this section we extend the simple RNN to use an external memory. Figure 1 illustrates the proposed model, which we denote as RNN-EM. As with the simple RNN, it consists of an input layer, a hidden layer and an output layer. However, instead of feeding the past hidden layer activity directly to the hidden layer as in the simple RNN, one input to the hidden layer is a content vector read from an external memory. RNN-EM uses a weight vector to retrieve the content from the external memory for use at the next time instance. Each element of the weight vector is proportional to the similarity of the current hidden layer activity to the corresponding content in the external memory. Therefore, content that is irrelevant to the current hidden layer activity receives a small weight. We describe RNN-EM in detail in the following sections. All of the equations described below include bias terms, which we omit for simplicity of presentation. We implemented RNN-EM using Theano [20, 21].
3.1 Model input and output
The input to the model is a dense vector $x_t$. In the context of language understanding, $x_t$ is a projection of input words, also known as a word embedding.
The hidden layer reads both the input $x_t$ and a content vector $c_t$ from the memory. The hidden layer activity is computed as follows

$$h_t = f(W_x x_t + W_c c_t) \qquad (9)$$

where $f(\cdot)$ is the tanh function, $W_x$ is the weight on the input vector, $c_t$ is the content from the read operation to be described in Eq. (15), and $W_c$ is the weight on the content vector.
The hidden layer activity is fed into the output layer as follows

$$y_t = g(W_{ho} h_t) \qquad (10)$$

where $W_{ho}$ is the weight on the hidden layer activity and $g(\cdot)$ is the softmax function.
Notice that in the case of $c_t = h_{t-1}$, the above model is the simple RNN.
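The hidden and output layers of RNN-EM described above can be sketched as follows. The toy dimensions, weight names and initialization are our assumptions; the content vector `c_t` would come from the memory read described in the next section.

```python
import numpy as np

# RNN-EM hidden and output layers: the recurrence enters only through
# the content vector c_t read from the external memory. With
# c_t = h_{t-1} this reduces to the simple Elman RNN.
rng = np.random.default_rng(3)
d_x, d_h, d_y, m = 4, 5, 3, 6          # m = memory slot width (assumed)
W_x  = rng.normal(size=(d_h, d_x)) * 0.1
W_c  = rng.normal(size=(d_h, m)) * 0.1
W_ho = rng.normal(size=(d_y, d_h)) * 0.1

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def rnn_em_step(x_t, c_t):
    h_t = np.tanh(W_x @ x_t + W_c @ c_t)   # hidden layer, reads memory content
    y_t = softmax(W_ho @ h_t)              # output layer, label probabilities
    return h_t, y_t

h, y = rnn_em_step(rng.normal(size=d_x), np.zeros(m))
```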
3.2 External memory read
RNN-EM has an external memory $M_t \in \mathbb{R}^{m \times n}$. It can be considered a memory with $n$ slots, where each slot is a vector with $m$ elements. Similar to the external memory in computers, the memory capacity of RNN-EM may be increased by using a large $n$.
The model generates a key vector $k_t$ to search for content in the external memory. Though there are many possible ways to generate the key vector, we choose a simple linear function that relates the hidden layer activity as follows

$$k_t = W_k h_t \qquad (11)$$

where $W_k$ is a linear transformation matrix. Our intuition is that the memory should be in the same space as, or affine to, the hidden layer activity.
We use cosine distance to compare this key vector with the contents in the external memory. The weight for the $i$-th slot in the memory is computed as follows

$$\tilde{w}_t(i) = \frac{\exp\big(\beta_t K(k_t, M_{t-1}(i))\big)}{\sum_j \exp\big(\beta_t K(k_t, M_{t-1}(j))\big)} \qquad (12)$$

where $K(u, v) = \frac{u \cdot v}{\|u\| \|v\|}$ is the cosine similarity, so that the weight is normalized and sums to 1.0. $\beta_t$ is a scalar larger than 0.0: it sharpens the weight vector when $\beta_t$ is larger than 1.0 and, conversely, smooths or dampens the weight vector when $\beta_t$ is between 0.0 and 1.0. We use the following function to obtain $\beta_t$; i.e.,

$$\beta_t = \log\big(1 + \exp(W_\beta h_t)\big) \qquad (13)$$

where $W_\beta$ maps the hidden layer activity to a scalar.
Importantly, we also use a scalar coefficient $g_t \in (0, 1)$ to interpolate the above weight estimate with the past weight $w_{t-1}$ as follows:

$$w_t = (1 - g_t)\, w_{t-1} + g_t\, \tilde{w}_t \qquad (14)$$

This function is similar to Eq. (6) in the gated RNN, except that we use a scalar to interpolate the weight updates, whereas the gated RNN uses a vector to update its hidden layer activity.
The memory content $c_t$ is retrieved from the external memory at time $t$ using

$$c_t = M_{t-1} w_{t-1} \qquad (15)$$
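The full read operation (key generation, sharpened cosine addressing, interpolation with the past weight, and content retrieval) can be sketched as below. The softplus form for the sharpening scalar and the fixed interpolation gate value are assumptions for illustration, and memory slots are stored as rows here for NumPy convenience.

```python
import numpy as np

# Sketch of the RNN-EM read: a key k_t is matched against every memory
# slot by cosine similarity, sharpened by beta_t, normalized, then
# interpolated with the previous weight by a scalar gate. The content
# read at time t uses the previous weight, matching the text.
rng = np.random.default_rng(4)
n, m, d_h = 8, 6, 5                      # n slots of width m (assumed)
M = rng.normal(size=(n, m))              # memory M_{t-1}, slots as rows
W_k = rng.normal(size=(m, d_h)) * 0.1
w_beta = rng.normal(size=d_h) * 0.1
h_t = rng.normal(size=d_h)
w_prev = np.full(n, 1.0 / n)             # previous read weight w_{t-1}

def cosine(a, b):
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

k_t = W_k @ h_t                                  # key vector
beta = np.log1p(np.exp(w_beta @ h_t))            # softplus -> beta > 0 (assumed form)
sim = np.array([cosine(k_t, M[i]) for i in range(n)])
scores = np.exp(beta * sim)
w_tilde = scores / scores.sum()                  # normalized slot weights
g = 0.5                                          # scalar gate (assumed value)
w_t = (1.0 - g) * w_prev + g * w_tilde           # interpolate with past weight
c_t = M.T @ w_prev                               # content for time t uses w_{t-1}
```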
3.3 External memory update
RNN-EM generates a new content vector $v_t$ to be added to its memory; i.e.,

$$v_t = W_v h_t \qquad (16)$$

where $W_v$ is a linear transformation matrix. We use the above linear function based on the same intuition as in Sec. 3.2 that the new content and the hidden layer activity are in the same space as, or affine to, each other.
RNN-EM has a forget gate $f_t$, computed as follows

$$f_t = 1 - w_t \odot e_t \qquad (17)$$

where $e_t$ is an erase vector, generated as $e_t = \sigma(W_e h_t)$. Notice that the $i$-th element of the forget gate is zero only if both the read weight $w_t$ and the erase vector $e_t$ have their $i$-th elements set to one. Therefore, memory cannot be forgotten if it is not read.
RNN-EM has an update gate $u_t$. It simply uses the weight $w_t$ as follows

$$u_t = w_t \qquad (18)$$

Therefore, memory is updated only if it is read.
With the two gates described above, the memory is updated as follows

$$M_t = M_{t-1}\, \mathrm{diag}(f_t) + v_t u_t^{\top} \qquad (19)$$

where $\mathrm{diag}(\cdot)$ transforms a vector into a diagonal matrix with diagonal elements from the vector.
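The write path described above (new content, forget gate, update gate, memory update) can be sketched as follows, with slots stored as rows and all weights randomly initialized for illustration; the read weight is a stand-in for the output of the read addressing.

```python
import numpy as np

# Sketch of the RNN-EM memory update: the new content v_t is written
# into the slots selected by the read weight w_t, while the forget gate
# f_t = 1 - w_t * e_t can only erase slots that are read.
rng = np.random.default_rng(5)
n, m, d_h = 8, 6, 5
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
M_prev = rng.normal(size=(n, m))                  # slots as rows here
h_t = rng.normal(size=d_h)
w_t = np.exp(rng.normal(size=n)); w_t /= w_t.sum()  # read weight (stand-in)
W_v = rng.normal(size=(m, d_h)) * 0.1
W_e = rng.normal(size=(n, d_h)) * 0.1

v_t = W_v @ h_t                     # new content vector
e_t = sigmoid(W_e @ h_t)            # erase vector
f_t = 1.0 - w_t * e_t               # forget gate: small only where read and erased
u_t = w_t                           # update gate equals the read weight
M_t = np.diag(f_t) @ M_prev + np.outer(u_t, v_t)   # memory update
```

A slot with a near-zero read weight keeps `f_t` near one and `u_t` near zero, so its content passes through unchanged, matching the "no forgetting without reading" property in the text.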
4 Experiments
4.1 Dataset
In order to compare the proposed model with alternative modeling techniques, we conducted experiments on a well studied language understanding dataset, the Air Travel Information System (ATIS) corpus [22, 23, 24]. The training part of this dataset consists of 4978 sentences and 56590 words. There are 893 sentences and 9198 words for test. The number of semantic labels is 127, including the common null label. We use lexicon-only features in our experiments.
4.2 Comparison with the past results
The input to RNN-EM has a window size of 3, consisting of the current input word and its two neighboring words. We use the AdaDelta method [25] to update gradients. The maximum number of training iterations was 50. Hyperparameters for tuning included the hidden layer size, the number of memory slots $n$, and the dimension $m$ of each memory slot. The best performing RNN-EM had a 100-dimensional hidden layer and 8 memory slots, each of dimension 40.
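The AdaDelta update used for training can be sketched as below. The decay rate `rho` and `eps` are the typical values from Zeiler's paper, and the toy objective is ours.

```python
import numpy as np

# AdaDelta (Zeiler, 2012): running averages of squared gradients and of
# squared updates replace a hand-tuned global learning rate.
def adadelta_step(grad, Eg2, Edx2, rho=0.95, eps=1e-6):
    Eg2 = rho * Eg2 + (1 - rho) * grad ** 2              # accumulate grad^2
    dx = -np.sqrt(Edx2 + eps) / np.sqrt(Eg2 + eps) * grad  # scaled update
    Edx2 = rho * Edx2 + (1 - rho) * dx ** 2              # accumulate dx^2
    return dx, Eg2, Edx2

# usage: minimise the toy objective f(x) = x^2 for a few steps
x, Eg2, Edx2 = np.array([3.0]), np.zeros(1), np.zeros(1)
for _ in range(100):
    dx, Eg2, Edx2 = adadelta_step(2 * x, Eg2, Edx2)      # gradient of x^2
    x = x + dx
```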
|Model|F1 score (%)|
|simple RNN|94.11|
Table 2 lists the F1 scores of RNN-EM, together with the previous best results of alternative models in the literature. Since there are no previous results for GRNN, we use our own implementation of it for this study. These results are the best in their respective systems. The previous best result was achieved using LSTM. A change of 0.38% in F1 score from the LSTM result is significant at the 90% confidence level. Results in Table 2 show that RNN-EM is significantly better than the previous best result using LSTM.
4.3 Analysis on convergence and averaged performances
|Model|hidden layer dimension|# of parameters|

The RNN-EM model in this comparison used a 100-dimensional hidden layer and 8 memory slots of dimension 40.
Results in the previous sections were obtained with models of different sizes. This section further compares the neural network models given that they have approximately the same number of parameters, as listed in Table 3. We use the AdaDelta [25] gradient update method for all of these models. Figure 2 plots their training set entropy with respect to the iteration number. To better illustrate convergence, we converted the entropy values to their logarithms. The results show that RNN-EM converges to a lower training entropy than the other models. RNN-EM also converges faster than the simple RNN and LSTM.
We further repeated the ATIS experiments 10 times with different random seeds for these neural network models and evaluated their performances after convergence. Table 4 lists their averaged F1 scores, together with their maximum and minimum F1 scores. A change of 0.12% is significant at the 90% confidence level when comparing against the LSTM result. Results in Table 4 show that RNN-EM, on average, significantly outperforms LSTM. The best performance by RNN-EM is also significantly better than that of the best performing LSTM.
4.4 Analysis on memory size
The size of the external memory is proportional to the number of memory slots $n$. We fixed the dimension of the memory slots to 40 and varied the number of slots. Table 5 lists the test set F1 scores. The best performing RNN-EM used $n = 8$. Notice that RNN-EM with $n = 1$ performed better than the simple RNN, which obtained a 94.09% F1 score in Table 4. This can be explained by the gate functions in Eqs. (17) and (18) of RNN-EM, which are absent in simple RNNs. RNN-EM with $n = 1$ also performed similarly to the gated RNN, which obtained a 94.70% F1 score in Table 4, partly because of these gate functions.
Memory capacity may be measured using training set entropy. Table 5 shows that the training set entropy initially decreases as $n$ is increased from 1 to 8, showing that the memory capacity of RNN-EM improves. However, the entropy increases as $n$ is increased further. This suggests that the memory capacity of RNN-EM cannot be increased simply by increasing the number of slots.
5 Related work
RNN-EM follows the same line of research as [19, 29], which uses external memory to improve the memory capacity of neural networks. Perhaps the closest work is the Neural Turing Machine (NTM) [19], which focuses on tasks that require simple inference and has proved effective on copy, repeat and sorting tasks. NTM requires complex models for these tasks. The proposed model is considerably simpler than NTM and can be considered an extension of the simple RNN. Importantly, we have shown, through experiments on a common language understanding dataset, promising results from using the external memory architecture.
6 Conclusions and discussions
In this paper, we have proposed a novel neural network architecture, RNN-EM, that uses external memory to improve memory capacity of simple recurrent neural networks. On a common language understanding task, RNN-EM achieves new state-of-the-art results and performs significantly better than the previous best result using long short-term memory neural networks. We have conducted experiments to analyze its convergence and memory capacity. These experiments provide insights for future research directions such as mechanisms of accessing memory contents and methods to increase memory capacity.
The authors would like to thank Shawn Tan and Kai Sheng Tai for useful discussions on NTM structure and implementation.
References
-  Y. Bengio, R. Ducharme, P. Vincent, and C. Janvin, "A neural probabilistic language model," Journal of Machine Learning Research, vol. 3, pp. 1137–1155, 2003.
-  R. Collobert and J. Weston, “A unified architecture for natural language processing: deep neural networks with multitask learning,” in ICML, 2008, pp. 160–167.
-  T. Mikolov, M. Karafiát, L. Burget, J. Cernocký, and S. Khudanpur, “Recurrent neural network based language model,” in INTERSPEECH, 2010, pp. 1045–1048.
-  K. Yao, G. Zweig, M. Hwang, Y. Shi, and D. Yu, “Recurrent neural networks for language understanding,” in INTERSPEECH, 2013, pp. 2524–2528.
-  J. Devlin, R. Zbib, Z. Huang, T. Lamar, R. M. Schwartz, and J. Makhoul, “Fast and robust neural network joint models for statistical machine translation,” in ACL, 2014, pp. 1370–1380.
-  K. Cho, B. van Merrienboer, Ç. Gülçehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio, “Learning phrase representations using RNN encoder-decoder for statistical machine translation,” in EMNLP, 2014, pp. 1724–1734.
-  W. Ward et al., “The cmu air travel information service: Understanding spontaneous speech,” in Proceedings of the DARPA Speech and Natural Language Workshop, 1990, pp. 127–129.
-  C. Raymond and G. Riccardi, “Generative and discriminative algorithms for spoken language understanding,” in INTERSPEECH, 2007, pp. 1605–1608.
-  R. de Mori, “Spoken language understanding: a survey,” in ASRU, 2007, pp. 365–376.
-  J. D. Lafferty, A. McCallum, and F. C. N. Pereira, “Conditional random fields: Probabilistic models for segmenting and labeling sequence data,” in ICML, 2001, pp. 282–289.
-  T. Kudo and Y. Matsumoto, “Chunking with support vector machines,” in NAACL, 2001.
-  G. Mesnil, Y. Dauphin, K. Yao, Y. Bengio, L. Deng, D. Hakkani-Tur, X. He, L. Heck, G. Tur, D. Yu, and G. Zweig, “Using recurrent neural networks for slot filling in spoken language understanding,” IEEE/ACM Trans. on Audio, Speech, and Language Processing, vol. 23, no. 3, pp. 530–539, 2015.
-  Y. Bengio, P. Y. Simard, and P. Frasconi, “Learning long-term dependencies with gradient descent is difficult,” IEEE Transactions on Neural Networks, vol. 5, no. 2, pp. 157–166, 1994.
-  R. Pascanu, T. Mikolov, and Y. Bengio, “On the difficulty of training recurrent neural networks,” in ICML, 2013, pp. 1310–1318.
-  J. Martens and I. Sutskever, “Training deep and recurrent networks with hessian-free optimization,” in Neural Networks: Tricks of the Trade - Second Edition, 2012, pp. 479–535.
-  J. Elman, “Finding structure in time,” Cognitive science, vol. 14, no. 2, pp. 179–211, 1990.
-  S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural computation, vol. 9, no. 8, pp. 1735–1780, 1997.
-  A. Graves, A. Mohamed, and G. E. Hinton, “Speech recognition with deep recurrent neural networks,” in ICASSP, 2013, pp. 6645–6649.
-  A. Graves, G. Wayne, and I. Danihelka, “Neural turing machines,” CoRR, vol. abs/1410.5401, 2014.
-  F. Bastien, P. Lamblin, R. Pascanu, J. Bergstra, I. J. Goodfellow, A. Bergeron, N. Bouchard, and Y. Bengio, “Theano: new features and speed improvements,” Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.
-  J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley, and Y. Bengio, “Theano: a CPU and GPU math expression compiler,” in Proceedings of the Python for Scientific Computing Conference (SciPy), Jun. 2010, oral Presentation.
-  D. Dahl, M. Bates, M. Brown, W. Fisher, K. Hunicke-Smith, D. Pallett, C. Pao, A. Rudnicky, and E. Shriberg, “Expanding the scope of the ATIS task: The ATIS-3 corpus,” in Proceedings of the workshop on Human Language Technology. Association for Computational Linguistics, 1994, pp. 43–48.
-  Y.-Y. Wang, A. Acero, M. Mahajan, and J. Lee, “Combining statistical and knowledge-based spoken language understanding in conditional models,” in COLING/ACL, 2006, pp. 882–889.
-  G. Tur, D. Hakkani-Tür, and L. Heck, "What's left to be understood in ATIS?" in IEEE Workshop on Spoken Language Technologies, 2010.
-  M. D. Zeiler, “ADADELTA: An adaptive learning rate method,” arXiv:1212.5701, 2012.
-  G. Mesnil, X. He, L. Deng, and Y. Bengio, “Investigation of recurrent-neural-network architectures and learning methods for language understanding,” in INTERSPEECH, 2013.
-  P. Xu and R. Sarikaya, “Convolutional neural network based triangular CRF for joint intent detection and slot filling,” in ASRU, 2013, pp. 78–83.
-  K. Yao, B. Peng, Y. Zhang, D. Yu, G. Zweig, and Y. Shi, “Spoken language understanding using long short-term memory neural networks,” in IEEE SLT, 2014.
-  J. Weston, S. Chopra, and A. Bordes, “Memory networks,” submitted to ICLR, vol. abs/1410.3916, 2015. [Online]. Available: http://arxiv.org/abs/1410.3916