What to talk about and how? Selective Generation using LSTMs with Coarse-to-Fine Alignment
We propose an end-to-end, domain-independent neural encoder-aligner-decoder model for selective generation, i.e., the joint task of content selection and surface realization. Our model first encodes a full set of over-determined database event records via an LSTM-based recurrent neural network, then utilizes a novel coarse-to-fine aligner to identify the small subset of salient records to talk about, and finally employs a decoder to generate free-form descriptions of the aligned, selected records. Our model achieves the best selection and generation results reported to-date (with a substantial relative improvement in generation) on the benchmark WeatherGov dataset, despite using no specialized features or linguistic resources. Using an improved k-nearest neighbor beam filter helps further. We also perform a series of ablations and visualizations to elucidate the contributions of our key model components. Lastly, we evaluate the generalizability of our model on the RoboCup dataset, and obtain results that are competitive with or better than the state-of-the-art, despite being severely data-starved.
We consider the important task of producing a natural language description of a rich world state represented as an over-determined database of event records. This task, which we refer to as selective generation, is often formulated as two subproblems: content selection, which involves choosing a subset of relevant records to talk about from the exhaustive database, and surface realization, which is concerned with generating natural language descriptions for this subset. Learning to perform these tasks jointly is challenging due to the ambiguity in deciding which records are relevant, the complex dependencies between selected records, and the multiple ways in which these records can be described.
Previous work has made significant progress on this task [\citenameChen and Mooney2008, \citenameAngeli et al.2010, \citenameKim and Mooney2010, \citenameKonstas and Lapata2012]. However, most approaches solve the two subtasks of content selection and surface realization separately, rely on manual domain-dependent resources (e.g., semantic parsers) and features, or employ template-based generation. This limits domain adaptability and reduces coherence. We take an alternative, neural encoder-aligner-decoder approach to free-form selective generation that jointly performs content selection and surface realization, without using any specialized features, resources, or generation templates. This enables our approach to generalize to new domains. Further, our memory-based model captures the long-range contextual dependencies among records and descriptions, which are integral to this task [\citenameAngeli et al.2010].
We formulate our model as an encoder-aligner-decoder framework that uses recurrent neural networks with long short-term memory units (LSTM-RNNs) [\citenameHochreiter and Schmidhuber1997] together with a coarse-to-fine aligner to select and “translate” the rich world state into a natural language description. Our model first encodes the full set of over-determined event records using a bidirectional LSTM-RNN. A novel coarse-to-fine aligner then reasons over multiple abstractions of the input to decide which of the records to discuss. The model next employs an LSTM decoder to generate natural language descriptions of the selected records.
The use of LSTMs, which have proven effective for similar long-range generation tasks [\citenameSutskever et al.2014, \citenameVinyals et al.2015b, \citenameKarpathy and Fei-Fei2015], allows our model to capture the long-range contextual dependencies that exist in selective generation. Further, the introduction of our proposed variation on alignment-based LSTMs [\citenameBahdanau et al.2014, \citenameXu et al.2015] enables our model to learn to perform content selection and surface realization jointly, by aligning each generated word to an event record during decoding. Our novel coarse-to-fine aligner avoids searching over the full set of over-determined records by employing two stages of increasing complexity: a pre-selector and a refiner acting on multiple abstractions (low- and high-level) of the record input. The end-to-end nature of our framework has the advantage that it can be trained directly on corpora of record sets paired with natural language descriptions, without the need for ground-truth content selection.
We evaluate our model on a benchmark weather forecasting dataset (WeatherGov) and achieve the best results reported to-date on both content selection (F-1) and language generation (BLEU), despite using no domain-specific resources. We also perform a series of ablations and visualizations to elucidate the contributions of the primary model components, and show further improvements with a simple k-nearest neighbor beam filter. Finally, we demonstrate the generalizability of our model by directly applying it to a benchmark sportscasting dataset (RoboCup), where we obtain results competitive with or better than the state-of-the-art, despite being extremely data-starved.
2 Related Work
Selective generation is a relatively new research area; more attention has been paid to the individual content selection and surface realization subproblems. With regard to the former, \newcitebarzilay-04 model content structure from unannotated documents and apply it to text summarization. \newcitebarzilay-05 treat content selection as a collective classification problem and simultaneously optimize the local label assignments and their pairwise relations. \newciteliang-09 address the related task of aligning a set of records to given textual description clauses. They propose a generative semi-Markov alignment model that jointly segments text sequences into utterances and associates each to the corresponding record.
Surface realization is often treated as a problem of producing text according to a given grammar. \newcitesoricut-06 propose a language generation system that uses the WIDL-representation, a formalism used to compactly represent probability distributions over finite sets of strings. \newcitewong-07 and \newcitelu-11 use synchronous context-free grammars to generate natural language sentences from formal meaning representations. Similarly, \newcitebelz-08 employs probabilistic context-free grammars to perform surface realization. Other effective approaches include the use of tree conditional random fields [\citenameLu et al.2009] and template extraction within a log-linear framework [\citenameAngeli et al.2010].
Recent work seeks to solve the full selective generation problem through a single framework. \newcitechen-08 and \newcitechen-10 learn alignments between comments and their corresponding event records using a translation model for parsing and generation. \newcitekim-10 implement a two-stage framework that decides what to discuss using a combination of the methods of \newcitelu-08 and \newciteliang-09, and then produces the text based on the generation system of \newcitewong-07.
\newciteangeli-10 propose a unified concept-to-text model that treats joint content selection and surface realization as a sequence of local decisions represented by a log-linear model. Similar to other work, they train their model using external alignments from \newciteliang-09. Generation then follows as inference over this model, where they first choose an event record, then the record’s fields (i.e., attributes), and finally a set of templates that they then fill in with words for the selected fields. Their ability to model long-range dependencies relies on their choice of features for the log-linear model, while the template-based generation further employs some domain-specific features for fluent output.
\newcitekonstas-12 propose an alternative method that simultaneously optimizes the content selection and surface realization problems. They employ a probabilistic context-free grammar that specifies the structure of the event records, and then treat generation as finding the best derivation tree according to this grammar. However, their method still selects and orders records in a local fashion via a Markovized chaining of records. \newcitekonstas-13 improve upon this approach with global document representations. However, this approach also requires alignment during training, which they estimate using the method of \newciteliang-09.
We treat the problem of selective generation as end-to-end learning via a recurrent neural network encoder-aligner-decoder model, which enables us to jointly learn content selection and surface realization directly from database-text pairs, without the need for an external aligner or ground-truth selection labels. The use of LSTM-RNNs enables our model to capture the long-range dependencies that exist among the records and natural language output. Additionally, the model does not rely on any manually-selected or domain-dependent features, templates, or parsers, and is thereby generalizable. The alignment-RNN approach has recently proven successful for generation-style tasks, e.g., machine translation [\citenameBahdanau et al.2014] and image captioning [\citenameXu et al.2015]. Since selective generation requires identifying the small number of salient records among an over-determined database, we avoid performing exhaustive search over the full record set, and instead propose a novel coarse-to-fine aligner that divides the search complexity into pre-selection and refinement stages.
3 Task Definition
We consider the problem of generating a natural language description for a rich world state specified in terms of an over-determined set of records (database). This problem requires deciding which of the records to discuss (content selection) and how to discuss them (surface realization). Training data consists of scenario pairs $(x_i, y_i)$ for $i = 1, \ldots, N$, where $x_i$ is the complete set of records and $y_i$ is the natural language description (Fig. 1). At test time, only the records are given. We evaluate our model in the context of two publicly-available benchmark selective generation datasets.
The weather forecasting dataset (see Fig. 1(a)) of \newciteliang-09 consists of scenarios that each pair a set of weather records (e.g., temperature, sky cover, etc.) with a natural language forecast.
We evaluate our model’s generalizability on the sportscasting dataset of \newcitechen-08, which consists of a small number of pairs of temporally ordered robot soccer events (e.g., pass, score) and commentary drawn from the four-game 2001–2004 RoboCup finals (see Fig. 1(b)). Each scenario pairs a set of event records with a short natural language commentary.
4 The Model
We formulate selective generation as inference over a probabilistic model $P(y \mid x)$, where $x = \{x_1, \ldots, x_T\}$ is the input set of over-determined event records and $y = (y_1, \ldots, y_{T'})$ is the natural language description.
The goal of inference is to generate a natural language description for a given set of records. An effective means of learning to perform this generation is to use an encoder-aligner-decoder architecture with a recurrent neural network, which has proven effective for related problems in machine translation [\citenameBahdanau et al.2014] and image captioning [\citenameXu et al.2015]. We propose a variation on this general model with novel components that are well-suited to the selective generation problem.
Our model (Fig. 2) first encodes each input record $x_j$ into a hidden state $h_j$ using a bidirectional recurrent neural network (RNN). Our novel coarse-to-fine aligner then acts on a concatenation of each record and its hidden state as a multi-level representation of the input to compute the selection decision at each decoding step $t$. The model then employs an RNN decoder to arrive at the word likelihood as a function of the multi-level input and the hidden state of the decoder at time step $t$. In order to model the long-range dependencies among the records and descriptions (which is integral to effectively performing selective generation [\citenameAngeli et al.2010, \citenameKonstas and Lapata2012, \citenameKonstas and Lapata2013]), our model employs LSTM units as the nonlinear encoder and decoder functions.
Our LSTM-RNN encoder (Fig. 2) takes as input the set of event records represented as a sequence $(x_1, \ldots, x_T)$ and returns a sequence of hidden annotations $(h_1, \ldots, h_T)$, where the annotation $h_j$ summarizes the record $x_j$. This results in a representation that models the dependencies that exist among the records in the database.
We adopt an encoder architecture similar to that of \newcitegraves-13
where $W$ denotes an affine transformation, $\sigma$ is the logistic sigmoid that restricts its output to $[0, 1]$, $i_t$, $f_t$, and $o_t$ are the input, forget, and output gates of the LSTM, respectively, and $c_t$ is the memory cell activation vector. The memory cell summarizes the LSTM’s previous memory and the current input, which are modulated by the forget and input gates, respectively. Our encoder operates bidirectionally, encoding the records in both the forward and backward directions, which provides a better summary of the input records. In this way, the hidden annotations concatenate forward and backward annotations, each determined using Equation (2c).
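The gating equations above can be sketched concretely in NumPy. This is an illustrative implementation only: the weight layout (one stacked affine map for all gates) and function names are our own choices, not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step. W stacks the affine maps for the three gates and the
    candidate cell update, applied to [x; h_prev] (illustrative layout)."""
    d = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b  # single affine transformation
    i = sigmoid(z[0:d])          # input gate, in [0, 1]
    f = sigmoid(z[d:2*d])        # forget gate
    o = sigmoid(z[2*d:3*d])      # output gate
    g = np.tanh(z[3*d:4*d])      # candidate cell update
    c = f * c_prev + i * g       # memory cell: old memory and new input,
                                 # modulated by forget and input gates
    h = o * np.tanh(c)           # hidden annotation
    return h, c

def encode_bidirectional(records, W, b, d):
    """Encode the record sequence forwards and backwards, concatenating the
    two hidden annotations for each record (the bidirectional summary)."""
    def run(seq):
        h, c = np.zeros(d), np.zeros(d)
        hs = []
        for x in seq:
            h, c = lstm_step(x, h, c, W, b)
            hs.append(h)
        return hs
    fwd = run(records)
    bwd = run(records[::-1])[::-1]
    return [np.concatenate([hf, hb]) for hf, hb in zip(fwd, bwd)]
```

In practice the forward and backward passes would use separate weights; a single shared `W` keeps the sketch short.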
Having encoded the input records to arrive at the hidden annotations , the model then seeks to select the content at each time step that will be used for generation. Our model performs content selection using an extension of the alignment mechanism proposed by \newcitebahdanau-14, which allows for selection and generation that is independent of the ordering of the input.
In selective generation, the given set of event records is over-determined with only a small subset of salient records being relevant to the output natural language description. Standard alignment mechanisms limit the accuracy of selection and generation by scanning the entire range of over-determined records. In order to better address the selective generation task, we propose a coarse-to-fine aligner that prevents the model from being distracted by non-salient records. Our model aligns based on multiple abstractions of the input: both the original input record as well as the hidden annotations , an approach that has previously been shown to yield better results than aligning based only on the hidden state [\citenameMei et al.2015].
Our coarse-to-fine aligner avoids searching over the full set of over-determined records by using two stages of increasing complexity: a pre-selector and a refiner (Fig. 2). The pre-selector first assigns to each record $x_j$ a probability $p_j$ of being selected, while the standard aligner computes the alignment likelihood $\alpha_{tj}$ over all the records at each time step $t$ during decoding. Next, the refiner produces the final selection decision $\beta_{tj}$ by re-weighting the aligner weights $\alpha_{tj}$ with the pre-selector probabilities $p_j$:
where the weight matrices and vectors are learned parameters. Ideally, the selection decision would be based on the highest-value alignment $\arg\max_j \beta_{tj}$. However, we use the weighted average (Eqn. 3e) as its soft approximation to maintain differentiability of the entire architecture.
The pre-selector assigns large values ($p_j$ close to 1) to a small subset of salient records and small values (close to 0) to the rest. This modulates the standard aligner, which then has to assign a large weight $\alpha_{tj}$ in order to select the $j$-th record at time $t$. In this way, the learned prior makes it difficult for the alignment (attention) to be distracted by non-salient records. Further, we can relate the output of the pre-selector to the number of records that are selected. Specifically, the output $p_j$ expresses the extent to which the $j$-th record should be selected. The summation $\sum_j p_j$ can then be regarded as a real-valued approximation to the total number of pre-selected records (denoted $\gamma$), toward which we regularize based on validation (see Eqn. 5).
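The pre-selector/refiner interaction described above can be sketched as follows. This is a minimal NumPy illustration: the function names are ours, and the learned affine maps that produce the aligner scores and pre-selector logits (Eqn. 3) are abstracted away as inputs.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def coarse_to_fine_align(scores, presel_logits):
    """scores: aligner scores for each record at the current decoding step.
    presel_logits: time-independent pre-selector logits, one per record.
    Returns the refined selection weights and pre-selector probabilities."""
    p = 1.0 / (1.0 + np.exp(-presel_logits))  # pre-selector probs in (0, 1)
    alpha = softmax(scores)                   # standard alignment weights
    beta = p * alpha                          # refiner: re-weight by prior
    beta = beta / beta.sum()                  # renormalize
    return beta, p

def context_vector(beta, records):
    """Soft selection: weighted average of record representations, the
    differentiable approximation to picking the argmax record."""
    return sum(b * r for b, r in zip(beta, records))
```

With a uniform aligner, records whose pre-selector probability is near zero receive almost no refined weight, which is exactly the "hard to distract" behavior described above.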
Our architecture uses an LSTM decoder that takes as input the current context vector $z_t$, the last word $y_{t-1}$, and the LSTM’s previous hidden state $s_{t-1}$. The decoder outputs the conditional probability distribution over the next word, represented as a deep output layer [\citenamePascanu et al.2014],
where $E$ (an embedding matrix) and the weight matrices of the deep output layer are parameters to be learned.
Training and Inference
We train the model using the database-description pairs from the training corpora so as to maximize the likelihood of the ground-truth language description (Eqn. 1). Additionally, we introduce a regularization term that encourages the total pre-selector mass $\sum_j p_j$ to stay close to $\gamma$, based on the aforementioned relationship between the output of the pre-selector and the number of selected records. Moreover, we also introduce a term that accounts for the fact that at least one record should be pre-selected. Note that when $\gamma$ is equal to the total number of records $T$, the pre-selector is forced to select all the records ($p_j \approx 1$ for all $j$), and the coarse-to-fine alignment reverts to the standard alignment introduced by \newcitebahdanau-14. Together with the negative log-likelihood of the ground-truth description $y$, these terms form our loss function.
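The exact regularizers are given in Eqn. 5; the sketch below shows one plausible instantiation of the two ideas (keep $\sum_j p_j$ near $\gamma$, and penalize selecting nothing). The specific penalty forms here are our own illustrative choices, not necessarily those of Eqn. 5.

```python
import numpy as np

def selective_gen_loss(nll, p, gamma, lam=1.0, mu=1.0):
    """nll: negative log-likelihood of the ground-truth description.
    p: pre-selector probabilities, one per record.
    gamma: target (approximate) number of pre-selected records.
    The first penalty pulls the total pre-selector mass toward gamma;
    the second (one possible choice) is large when no record is selected."""
    mass_penalty = (p.sum() - gamma) ** 2
    none_selected_penalty = -np.log(p.max() + 1e-12)
    return nll + lam * mass_penalty + mu * none_selected_penalty
```

A pre-selector that places mass near $\gamma$ on a few records incurs a much smaller penalty than one that selects nothing, which is the intended training pressure.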
Having trained the model, we generate the natural language description by finding the maximum a posteriori words under the learned model (Eqn. 1). For inference, we perform greedy search starting with the first word $y_1$. Beam search offers a way to perform approximate joint inference; however, we empirically found that beam search does not perform any better than greedy search on the datasets that we consider, an observation that is shared with previous work [\citenameAngeli et al.2010]. We later discuss an alternative k-nearest neighbor-based beam filter (see Sec. 6.2).
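Greedy decoding can be sketched generically as follows; `step_fn` stands in for one pass through the aligner and decoder (a hypothetical interface we introduce for illustration).

```python
import numpy as np

def greedy_decode(step_fn, start_token, end_token, max_len=50):
    """step_fn(prev_word, state) -> (probs over vocab, new state).
    Greedy search: at each step emit the single most likely next word,
    rather than maintaining a beam of partial hypotheses."""
    words, state = [], None
    prev = start_token
    for _ in range(max_len):
        probs, state = step_fn(prev, state)
        prev = int(np.argmax(probs))  # locally optimal word choice
        if prev == end_token:
            break
        words.append(prev)
    return words
```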
5 Experimental Setup
We analyze our model on the benchmark WeatherGov dataset, and use the data-starved RoboCup dataset to demonstrate the model’s generalizability. Following \newciteangeli-10, we use the standard WeatherGov training, development, and test splits. For RoboCup, we follow the evaluation methodology of previous work [\citenameChen and Mooney2008], performing leave-one-game-out cross-validation whereby we train on three games and test on the fourth. Within each split, we hold out a portion of the training data as the development set to tune the early-stopping criterion and $\gamma$. We then report the standard average performance (weighted by the number of scenarios) over these four splits.
On WeatherGov, we lightly tune the number of hidden units and $\gamma$ on the development set according to the generation metric (BLEU). For RoboCup, we only tune $\gamma$ on the development set; we do not retune the number of hidden units. For each iteration, we randomly sample a mini-batch of scenarios during back-propagation and use Adam [\citenameKingma and Ba2015] for optimization. Training typically converges within a few epochs. We select the model according to the BLEU score on the development set.
We consider two metrics as a means of evaluating the effectiveness of our model on the two selective generation subproblems. For content selection, we use the F-1 score of the set of selected records, defined as the harmonic mean of precision and recall with respect to the ground-truth record set. We define the set of selected records as consisting of the record with the largest selection weight $\beta_{tj}$ computed by our aligner at each decoding step $t$.
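The content-selection metric just described can be sketched directly (function names are ours; `weights_per_step` stands for the per-step aligner weights):

```python
def select_records(weights_per_step):
    """Selected set: the record with the largest selection weight at each
    decoding step, collected over all steps."""
    return {max(range(len(w)), key=lambda j: w[j]) for w in weights_per_step}

def selection_f1(predicted, gold):
    """F-1 of the selected record set: harmonic mean of precision and
    recall against the ground-truth record set."""
    predicted, gold = set(predicted), set(gold)
    if not predicted or not gold:
        return 0.0
    tp = len(predicted & gold)  # true positives
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)
```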
We evaluate the quality of surface realization using the BLEU score [\citenamePapineni et al.2001] (both the standard sBLEU and the customized cBLEU of \newciteangeli-10).
6 Results and Analysis
We analyze the effectiveness of our model on the benchmark WeatherGov (as primary) and RoboCup (as generalization) datasets. We also present several ablations to illustrate the contributions of the primary model components.
6.1 Primary Results (WeatherGov)
We report the performance of content selection and surface realization using F-1 and two BLEU scores (standard sBLEU and the customized cBLEU of \newciteangeli-10), respectively (Sec. 5). Table 1 compares our test results against previous methods, including KL12 [\citenameKonstas and Lapata2012], KL13 [\citenameKonstas and Lapata2013], and ALK10 [\citenameAngeli et al.2010]. Our method achieves the best results reported to-date on all three metrics (F-1, sBLEU, and cBLEU), with relative improvements over the previous state-of-the-art.
6.2 Beam Filter with k-Nearest Neighbors
We considered beam search as an alternative to greedy search in our primary setup (Eqn. 1), but it performs worse, consistent with what previous work found on this dataset [\citenameAngeli et al.2010]. As an alternative, we consider a beam filter based on a k-nearest neighborhood; see the Supplementary Material for details. Table 9 shows that this k-NN beam filter improves results over the primary greedy results.
6.3 Ablation Analysis (WeatherGov)
Next, we present several ablations to analyze the contributions of our primary model components.
First, we evaluate the contribution of our proposed coarse-to-fine aligner by comparing our model with the basic encoder-aligner-decoder model introduced by \newcitebahdanau-14. Table 3 reports the results demonstrating that our aligner yields superior F-1 and BLEU scores relative to a standard aligner.
Next, we consider the effectiveness of the encoder. Table 4 compares the results with and without the encoder on the development set, and demonstrates that there is a significant gain from encoding the event records using the LSTM-RNN. We attribute this improvement to the LSTM-RNN’s ability to capture the relationships that exist among the records, which is known to be essential to selective generation [\citenameBarzilay and Lapata2005, \citenameAngeli et al.2010].
6.4 Qualitative Analysis (WeatherGov)
Fig. 3 shows an example record set with its output description and record-word alignment heat map. As shown, our model learns to align records with their corresponding words (e.g., windDir and “southeast,” temperature and “71,” windSpeed and “wind 10,” and gust and “winds could gust as high as 30 mph”). It also learns the subset of salient records to talk about, matching the ground-truth description perfectly for this example. We also see some word-level mismatch, e.g., “cloudy” mis-aligns to id-0 temp and id-10 precipChance, which we attribute to the high correlation between these types of records (“garbage collection” in \newciteliang-09).
Training our decoder has the effect of learning embeddings for the words in the training set (via the embedding matrix in Eqn. 4). Here, we explore the extent to which these learned embeddings capture semantic relationships among the training words. Table 10 presents nearest neighbor words for some of the common words from the WeatherGov dataset (according to cosine similarity in the embedding space). More details of other embedding approaches that we tried are discussed in the Supplementary Material section.
6.5 Out-of-Domain Results (RoboCup)
We use the RoboCup dataset to evaluate the domain-independence of our model. The dataset is severely data-starved, with far fewer training pairs than is typically necessary to train RNNs. This results in higher variance in the trained model distributions, and we thus adopt the standard denoising method of ensembles [\citenameSutskever et al.2014, \citenameVinyals et al.2015b, \citenameZaremba et al.2014].
Following previous work, we perform two experiments on the RoboCup dataset (Table 6), the first considering full selective generation and the second assuming ground-truth content selection at test time. On the former, we obtain a standard BLEU score (sBLEU) that exceeds the best previously reported score [\citenameKonstas and Lapata2012]. Additionally, we achieve a selection F-1 score that is also the best result reported to-date. In the case of assumed (known) ground-truth content selection, our model attains an sBLEU score that is competitive with the state-of-the-art.
We presented an encoder-aligner-decoder model for selective generation that does not use any specialized features, linguistic resources, or generation templates. Our model employs a bidirectional LSTM-RNN with a novel coarse-to-fine aligner that jointly learns content selection and surface realization. We evaluate our model on the benchmark WeatherGov dataset and achieve state-of-the-art selection and generation results. We achieve further improvements via a k-nearest neighbor beam filter. We also present several model ablations and visualizations to elucidate the effects of the primary components of our model. Moreover, our model generalizes to a different, data-starved domain (RoboCup), where it achieves results competitive with or better than the state-of-the-art.
We thank Gabor Angeli, David Chen, and Ioannis Konstas for their helpful comments.
Appendix A Supplementary Material
The following provides further evaluations of our model as a supplement to our original manuscript.
A.1 Beam Filter with k-Nearest Neighbors
We perform greedy search as an approximation to full inference over the set of decision variables (Eqn. 1). We considered beam search as an alternative, but as with previous work on this dataset [\citenameAngeli et al.2010], we found that greedy search still yields better BLEU performance (Table 7).
As an alternative, we consider a beam filter based on a k-nearest neighborhood. First, we generate the n-best description candidates (i.e., a beam of width n) for a given input record set (database) using standard beam search. Next, we find the k nearest neighbor database-description pairs from the training data, based on the cosine similarity of each neighbor’s database with the given input records. We then compute the BLEU score for each of the n description candidates relative to the k nearest neighbor descriptions (as references) and select the candidate with the highest BLEU score. We tune n and k on the development set and report the results in Table 8. Table 9 presents the test results with this tuned setting, where we achieve BLEU scores better than our primary greedy results.
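The filter can be sketched as follows. This is an illustrative implementation with our own function names; for brevity, a clipped unigram-precision score stands in for full BLEU, and database vectors are assumed to be plain feature vectors.

```python
import numpy as np
from collections import Counter

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def unigram_overlap(candidate, references):
    """Stand-in for BLEU: clipped unigram precision of the candidate
    against a pool of reference descriptions."""
    cand = Counter(candidate)
    ref = Counter()
    for r in references:
        ref |= Counter(r)  # max count per word across references
    clipped = sum(min(c, ref[w]) for w, c in cand.items())
    return clipped / max(sum(cand.values()), 1)

def knn_beam_filter(candidates, query_vec, train_pairs, k):
    """candidates: n-best descriptions (token lists) from beam search.
    train_pairs: (database_vector, description) training pairs.
    Return the candidate scoring highest against the descriptions of the
    k training databases most similar to the query database."""
    neighbors = sorted(train_pairs,
                       key=lambda pair: cosine(query_vec, pair[0]),
                       reverse=True)[:k]
    refs = [desc for _, desc in neighbors]
    return max(candidates, key=lambda c: unigram_overlap(c, refs))
```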
A.2 Word Embeddings (Trained & Pretrained)
Training our decoder has the effect of learning embeddings for the words in the training set (via the embedding matrix in Eqn. 4). Here, we explore the extent to which these learned embeddings capture semantic relationships among the training words. Table 10 presents nearest neighbor words for some of the common words from the WeatherGov dataset (according to cosine similarity in the embedding space).
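The nearest-neighbor lookup behind Table 10 can be sketched as follows (function names are ours; a toy embedding dictionary stands in for the learned embedding matrix):

```python
import numpy as np

def nearest_words(word, embeddings, topn=3):
    """embeddings: dict mapping word -> vector. Return the topn words
    closest to `word` by cosine similarity in the embedding space."""
    q = embeddings[word]
    def cos(v):
        return float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-12))
    scored = [(w, cos(v)) for w, v in embeddings.items() if w != word]
    scored.sort(key=lambda t: -t[1])  # most similar first
    return [w for w, _ in scored[:topn]]
```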
We also consider different ways of using pre-trained word embeddings [\citenameMikolov et al.2013] to bootstrap the quality of our learned embeddings. One approach initializes our embedding matrix with the pre-trained vectors and then refines the embedding based on our training corpus. The second concatenates our learned embedding matrix with the pre-trained vectors in an effort to simultaneously exploit general similarities as well as those learned for the domain. As shown previously for other tasks [\citenameVinyals et al.2014, \citenameVinyals et al.2015b], we find that the use of pre-trained embeddings results in negligible improvements (on the development set).
- These records may take the form of an unordered set or have a natural ordering (e.g., temporal in the case of RoboCup). In order to make our model generalizable, we treat the set as a sequence and use the order specified by the dataset. We note that it is possible that a different ordering will yield improved performance, since ordering has been shown to be important when operating on sets [\citenameVinyals et al.2015a].
- We implement our model in Theano [\citenameBergstra et al.2010, \citenameBastien et al.2012] and will make the code publicly available.
- We compute BLEU using the publicly available evaluation provided by \newciteangeli-10.
- These results are based on our primary model of Sec. 6.1 and on the development set.
- We use an ensemble of five randomly initialized models.
- The \newcitechen-08 sBLEU result is from \newciteangeli-10.
- Gabor Angeli, Percy Liang, and Dan Klein. 2010. A simple domain-independent probabilistic approach to generation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 502–512.
- Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
- Regina Barzilay and Mirella Lapata. 2005. Collective content selection for concept-to-text generation. In Proceedings of the Human Language Technology Conference and the Conference on Empirical Methods in Natural Language Processing (HLT/EMNLP), pages 331–338.
- Regina Barzilay and Lillian Lee. 2004. Catching the drift: Probabilistic content models, with applications to generation and summarization. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics – Human Language Technologies (NAACL HLT), pages 113–120.
- Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. 2012. Theano: new features and speed improvements. NIPS Workshop on Deep Learning and Unsupervised Feature Learning.
- Anja Belz. 2008. Automatic generation of weather forecast texts using comprehensive probabilistic generation-space models. Natural Language Engineering, 14(04):431–455.
- James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. 2010. Theano: a CPU and GPU math expression compiler. In Proceedings of the Scientific Computing with Python Conference (SciPy).
- David L. Chen and Raymond J. Mooney. 2008. Learning to sportscast: a test of grounded language acquisition. In Proceedings of the International Conference on Machine Learning (ICML), pages 128–135.
- David L. Chen, Joohyun Kim, and Raymond J. Mooney. 2010. Training a multilingual sportscaster: Using perceptual context to learn language. Journal of Artificial Intelligence Research, 37:397–435.
- Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. 2013. Speech recognition with deep recurrent neural networks. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6645–6649.
- Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.
- Andrej Karpathy and Li Fei-Fei. 2015. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3128–3137.
- Joohyun Kim and Raymond J Mooney. 2010. Generative alignment and semantic parsing for learning from ambiguous supervision. In Proceedings of the International Conference on Computational Linguistics (COLING), pages 543–551.
- Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR).
- Ioannis Konstas and Mirella Lapata. 2012. Unsupervised concept-to-text generation with hypergraphs. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics – Human Language Technologies (NAACL HLT), pages 752–761.
- Ioannis Konstas and Mirella Lapata. 2013. Inducing document plans for concept-to-text generation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1503–1514.
- Percy Liang, Michael I. Jordan, and Dan Klein. 2009. Learning semantic correspondences with less supervision. In Proceedings of the Joint Conference of the Annual Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing (ACL/IJCNLP), pages 91–99.
- Wei Lu and Hwee Tou Ng. 2011. A probabilistic forest-to-string model for language generation from typed lambda calculus expressions. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1611–1622.
- Wei Lu, Hwee Tou Ng, Wee Sun Lee, and Luke S Zettlemoyer. 2008. A generative model for parsing natural language to meaning representations. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 783–792.
- Wei Lu, Hwee Tou Ng, and Wee Sun Lee. 2009. Natural language generation with tree conditional random fields. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 400–409.
- Hongyuan Mei, Mohit Bansal, and Matthew R. Walter. 2015. Listen, attend, and walk: Neural mapping of navigational instructions to action sequences. arXiv preprint arXiv:1506.04089.
- Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In Proceedings of the International Conference on Learning Representations (ICLR).
- Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2001. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), pages 311–318.
- Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2014. How to construct deep recurrent neural networks. In Proceedings of the International Conference on Learning Representations (ICLR).
- Radu Soricut and Daniel Marcu. 2006. Stochastic language generation using WIDL-expressions and its application in machine translation and summarization. In Proceedings of the International Conference on Computational Linguistics and the Annual Meeting of the Association for Computational Linguistics (COLING/ACL), pages 1105–1112.
- Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems (NIPS).
- Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2014. Grammar as a foreign language. arXiv preprint arXiv:1412.7449.
- Oriol Vinyals, Samy Bengio, and Manjunath Kudlur. 2015a. Order matters: Sequence to sequence for sets. arXiv preprint arXiv:1511.06391.
- Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015b. Show and tell: A neural image caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3156–3164.
- Yuk Wah Wong and Raymond J Mooney. 2007. Generation by inverting a semantic parser that uses statistical machine translation. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics – Human Language Technologies (NAACL HLT), pages 172–179.
- Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the International Conference on Machine Learning (ICML).
- Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2014. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329.