Text Generation with Exemplar-based Adaptive Decoding

Abstract

We propose a novel conditioned text generation model. It draws inspiration from traditional template-based text generation techniques, where the source provides the content (i.e., what to say), and the template influences how to say it. Building on the successful encoder-decoder paradigm, it first encodes the content representation from the given input text; to produce the output, it retrieves exemplar text from the training data as “soft templates,” which are then used to construct an exemplar-specific decoder. We evaluate the proposed model on abstractive text summarization and data-to-text generation. Empirical results show that this model achieves strong performance and outperforms comparable baselines.


1 Introduction

Conditioned text generation is the essence of many natural language processing (NLP) tasks, e.g., text summarization (Mani, 1999), machine translation (Koehn, 2009), and data-to-text generation (Kukich, 1983; McKeown, 1992; Reiter and Dale, 1997). In its common neural sequence-to-sequence formulation (Sutskever et al., 2014; Cho et al., 2014), an encoder-decoder architecture is used. The decoder generates the text autoregressively, token-by-token, conditioning on the feature representations encoded from the source, typically with attention (Bahdanau et al., 2015) and copy mechanisms (Gu et al., 2016; See et al., 2017). This paradigm is capable of generating fluent abstractive text, but in an uncontrolled and sometimes unreliable way, often producing degenerate outputs and favoring generic utterances (Vinyals and Le, 2015; Li et al., 2016).

The encoder-decoder approach differs considerably from earlier template-based methods (Becker, 2002; Foster and White, 2004; Reiter et al., 2005; Gatt and Reiter, 2009, inter alia), where the source content is filled into the slots of a handcrafted template. These solutions offer higher generation precision compared to neural approaches (Wiseman et al., 2017), but tend to lack the naturalness of neural systems, and are less scalable to open domain settings, where the number of required templates can be prohibitively large.

To sidestep the scalability problems with handcrafted templates, it has been proposed to use similar training samples as exemplars, to guide the decoding process (Gu et al., 2018; Guu et al., 2018; Weston et al., 2018; Pandey et al., 2018; Cao et al., 2018a, inter alia).2 In general, existing methods accomplish this by (a) using traditional information retrieval (IR) techniques for exemplar extraction (e.g., TF-IDF), and then (b) concatenating the exemplar to the source as additional inputs, allowing the decoder to attend over and copy from both.

We propose a different strategy for using exemplars. For motivation, Figure 1 shows a source-target pair together with its exemplar from the Gigaword dataset (Graff et al., 2003). The target is a summary of the source sentence, and the exemplar is retrieved from the training set (§3.2).3 There is word overlap between the exemplar and the desired output, which would be easily captured by an attention/copy mechanism (e.g. Norway and aid). Despite this, ideally, the model should also exploit the structural and stylistic aspects to produce an output with a similar sentence structure, even if the words are different.

Source: Norway said Friday it would give Zimbabwe 40 million kroner (7.02 million dollars, 4.86 million euros) in aid to help the country deal with a lack of food and clean drinking water and a cholera outbreak.
Exemplar: Norway boosts earthquake aid to Pakistan.
Target: Norway grants aid of 4.86 million euros to Zimbabwe.

Figure 1: A source-target pair from Gigaword training set, along with its exemplar.

Indeed, in traditional templates, the source is supposed to determine “what to say,” while the templates aim to address “how to say it,” reminiscent of the classical content selection and surface realization pipeline (Reiter and Dale, 1997). For instance, an ideal template for this example might look as follows:
____ grants aid of ____ to ____

In the neural formulation, the “how to say it” aspect is primarily controlled by the decoder.

Inspired by the above intuition, we propose exemplar-based adaptive decoding, where a customized decoder is constructed for each exemplar. This is achieved by letting the exemplars directly influence the decoder's parameters through a reparameterization step (§3.1). The adaptive decoder can be used as a drop-in replacement in the encoder-decoder architecture. It offers the potential to better incorporate the exemplars' structural and stylistic aspects into decoding, without an excessive increase in the number of parameters or computational overhead.

We empirically evaluate our approach on abstractive text summarization and data-to-text generation (§4), the tasks on which most recent efforts on exemplar-guided text generation have focused. On three benchmark datasets, our approach outperforms comparable baselines, and achieves performance competitive with the state of the art. The proposed method is applicable to many other conditioned text generation tasks. Our implementation is available at https://homes.cs.washington.edu/~hapeng.

2 Background

This section lays out the necessary background and notation for the technical discussion that follows. We begin with conditioned text generation and the encoder-decoder framework (Sutskever et al., 2014; Cho et al., 2014). In the interest of notational clarity, §3 will use an Elman network (Elman, 1990) as a running example for the decoder; Elman networks are briefly reviewed at the end of this section. The proposed technique generalizes to other neural network architectures (§3.3).

Conditioned text generation and the encoder-decoder architecture.

Our discussion centers around conditioned text generation, i.e., the model aims to output the target $\boldsymbol{y} = \langle y_1, \dots, y_T \rangle$ given the source input $\boldsymbol{x} = \langle x_1, \dots, x_S \rangle$, both of which are sequences of tokens. Each token $x_i$ or $y_j$ takes one value from a vocabulary $\mathcal{V}$. $\boldsymbol{x}$ and $\boldsymbol{y}$ could vary depending on the tasks, e.g., they will respectively be articles and summaries for text summarization; and for data-to-text generation, $\boldsymbol{x}$ would be structured data, which can sometimes be linearized (Lebret et al., 2016; Wiseman et al., 2018, inter alia), and $\boldsymbol{y}$ is the output text. We aim to learn a (parameterized) conditional distribution of the target text $\boldsymbol{y}$ given the source $\boldsymbol{x}$,

$p(\boldsymbol{y} \mid \boldsymbol{x}) = \prod_{t=1}^{T} p\bigl(y_t \mid \boldsymbol{y}_{1:t-1}, \boldsymbol{x}\bigr)$   (1)

where $\boldsymbol{y}_{1:t}$ is the prefix of $\boldsymbol{y}$ up to the $t$-th token $y_t$ (inclusive).

The probability of each target token is usually estimated with a softmax function:

$p\bigl(y_t = v \mid \boldsymbol{y}_{1:t-1}, \boldsymbol{x}\bigr) \propto \exp\bigl(\mathbf{w}_v^\top \mathbf{h}_t\bigr)$   (2)

$\mathbf{w}_v$ denotes a learned vector for token $v \in \mathcal{V}$. $\mathbf{h}_t$ depends on $\boldsymbol{y}_{1:t-1}$ and $\boldsymbol{x}$, and is computed by a function which we will describe soon.

A typical implementation choice for computing $\mathbf{h}_t$ is the encoder-decoder architecture (Sutskever et al., 2014). More specifically, an encoder $g$ first gathers the feature representations from the source $\boldsymbol{x}$; then a decoder $f$ is used to compute the feature vectors:

$\mathbf{h}_t = f\bigl(\boldsymbol{y}_{1:t-1},\, g(\boldsymbol{x}; \boldsymbol{\phi});\, \boldsymbol{\theta}\bigr)$   (3)

$\boldsymbol{\phi}$ and $\boldsymbol{\theta}$ are, respectively, the collections of parameters for the encoder and the decoder, both of which can be implemented as recurrent neural networks (RNNs) such as LSTMs (Hochreiter and Schmidhuber, 1997) or GRUs (Cho et al., 2014), or the transformer (Vaswani et al., 2017). In Sutskever et al. (2014), the dependence of the decoder on $\boldsymbol{x}$ is established by using the last hidden state of the encoder as the initial state of the decoder. Such dependence can be further supplemented with attention (Bahdanau et al., 2015) and copy mechanisms (Gu et al., 2016; See et al., 2017), as we do in this work.
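
To make the factorization in Equations 1-3 concrete, below is a minimal NumPy sketch of greedy decoding with a generic encoder-decoder. The names encode, decoder_step, and W_out are illustrative placeholders of ours, not parts of the released implementation, and greedy search stands in for the beam search used in the experiments.

import numpy as np

def softmax(logits):
    z = logits - logits.max()
    p = np.exp(z)
    return p / p.sum()

def decode_greedy(x_ids, encode, decoder_step, W_out, bos_id, eos_id, max_len=50):
    """Autoregressively generate target token ids y given source token ids x_ids."""
    src = encode(x_ids)                    # g(x; phi): source representations
    h = src["init_state"]                  # decoder initialized from the encoder
    y, prev = [], bos_id
    for _ in range(max_len):
        h = decoder_step(prev, h, src)     # h_t depends on y_{<t} and g(x); Eq. 3
        probs = softmax(W_out @ h)         # Eq. 2: distribution over the vocabulary
        prev = int(np.argmax(probs))       # greedy pick; beam search in practice
        if prev == eos_id:
            break
        y.append(prev)
    return y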

§3 introduces how we use exemplars to inform decoding, by dynamically constructing the decoder's parameters $\boldsymbol{\theta}$. For notational clarity, we will use the Elman network as a running example, reviewed below.

Elman networks.

Given an input sequence $\langle s_1, s_2, \dots \rangle$, an Elman network (Elman, 1990) computes the hidden state $\mathbf{h}_t$ at time step $t$ from the previous one $\mathbf{h}_{t-1}$ and the current input token $s_t$ by

$\mathbf{h}_t = \tanh\bigl(\mathbf{W} \mathbf{v}_t + \mathbf{U} \mathbf{h}_{t-1}\bigr)$   (4)

where $\mathbf{W}, \mathbf{U} \in \mathbb{R}^{d \times d}$ are learned parameter matrices (with $d$ being the hidden dimension), and $\mathbf{v}_t$ is the embedding vector for token $s_t$. We omit the bias term for clarity.
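
As a quick illustration, the recurrence in Equation 4 amounts to the following NumPy computation (a toy sketch with random parameters, not the actual model):

import numpy as np

def elman_step(W, U, h_prev, v_t):
    # Equation 4: h_t = tanh(W v_t + U h_{t-1}); the bias term is omitted as in the text.
    return np.tanh(W @ v_t + U @ h_prev)

d = 4
rng = np.random.default_rng(0)
W, U = rng.normal(size=(d, d)), rng.normal(size=(d, d))
h = np.zeros(d)
for v_t in rng.normal(size=(3, d)):   # embeddings of three dummy input tokens
    h = elman_step(W, U, h, v_t)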

3 Method

This section introduces the proposed method in detail. Our aim is to use exemplars to inform the decoding procedure (i.e., how to say it). To accomplish this, we reparameterize the decoder's parameters with weighted linear sums, where the coefficients are determined by an exemplar. The decoder is adaptive, in the sense that its parameters vary according to the exemplars. The adaptive decoder can be used as a drop-in replacement in the encoder-decoder architecture. Before going into details, let us first overview the high-level generation procedure of our model. Given source text $\boldsymbol{x}$, the model generates an output as follows:

  • Run a standard encoder to gather the content representations from the source.

  • Retrieve its exemplar $\boldsymbol{z}$, and compute exemplar-specific coefficients $\boldsymbol{\lambda}$ (§3.2).

  • Construct the adaptive decoder parameters $\boldsymbol{\theta}$ (§3.1), using the coefficients computed at step 2. Then the output is generated by applying the adaptive decoder followed by a softmax, just as in any other encoder-decoder architecture.

Aiming for a smoother transition, we will first describe step 3 in §3.1, and then go back to discuss step 2 in §3.2. For clarity, we shall assume that the decoder is implemented as an Elman network (Elman, 1990; Equation 4). The proposed technique generalizes to other neural network architectures, as we will discuss later in §3.3.

3.1 Reparameterizing the RNN Decoder

At its core, the exemplar-specific adaptive decoder involves a reparameterization step, which we now describe. We focus on the parameters of the Elman network decoder, i.e., $\mathbf{W}$ and $\mathbf{U}$ in Equation 4.

Parameter construction with linear sums.

We aim to reparameterize the pair of matrices $(\mathbf{W}, \mathbf{U})$ in a way that lets them be influenced by the exemplars.

Let us first consider an extreme case, where one assigns a different pair of parameter matrices to each exemplar, without any sharing. This leads to an unreasonably large number of parameters, which are difficult to estimate reliably.4

We instead construct $\mathbf{W}$ and $\mathbf{U}$ from a set of predefined parameter matrices. Take $\mathbf{W}$ for example: it is computed as a weighted sum of $k$ matrices:

$\mathbf{W} = \sum_{i=1}^{k} \lambda_i\, \mathbf{W}_i$   (5)

where $\mathbf{W}_i \in \mathbb{R}^{d \times d}$, with $d$ being the size of the hidden states. $k$ is a hyperparameter, determining the number of matrices to use.5 The summation is weighted by the coefficients $\lambda_i$, which are computed from the exemplar $\boldsymbol{z}$. For clarity, the dependence of both $\lambda_i$ and $\mathbf{W}$ on $\boldsymbol{z}$ is suppressed when the context is clear.

Equation 5 constructs the decoder's parameter matrix $\mathbf{W}$ using a linear combination of $\mathbf{W}_1, \dots, \mathbf{W}_k$. The exemplar informs this procedure through the coefficients $\lambda_i$, the detailed computation of which is deferred to §3.2. The other matrix can be similarly constructed by $\mathbf{U} = \sum_{i=1}^{k} \lambda_i\, \mathbf{U}_i$.

Rank-1 constraints.

In the above formulation, the number of parameters is still $k$ times that of a standard Elman network, which can lead to overfitting with a limited amount of training data. Besides, it would be more interesting to compare the adaptive decoder to a standard RNN under a comparable parameter budget. Therefore we want to further limit the number of parameters. This can be achieved by forcing the ranks of $\mathbf{W}_i$ and $\mathbf{U}_i$ to be 1, since it then takes $2d$ parameters to form each of them, instead of $d^2$. More formally, we upper-bound their ranks by construction:

$\mathbf{W}_i = \mathbf{a}_i \otimes \mathbf{b}_i = \mathbf{a}_i \mathbf{b}_i^\top$   (6)

$\otimes$ denotes the outer product of two vectors; $\mathbf{a}_i$ and $\mathbf{b}_i$ are learned $d$-dimensional vectors. Each $\mathbf{U}_i$ can be similarly constructed from a separate set of vectors $\mathbf{c}_i$ and $\mathbf{f}_i$, i.e., $\mathbf{U}_i = \mathbf{c}_i \otimes \mathbf{f}_i$.

Let $\mathbf{A}$ and $\mathbf{B}$ denote the stacks of the $\mathbf{a}_i$ and $\mathbf{b}_i$ vectors, i.e.,

$\mathbf{A} = [\mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_k] \in \mathbb{R}^{d \times k}$,   (7a)
$\mathbf{B} = [\mathbf{b}_1, \mathbf{b}_2, \dots, \mathbf{b}_k] \in \mathbb{R}^{d \times k}$.   (7b)

Equations 5 and 6 can be compactly written as

$\mathbf{W} = \mathbf{A}\, \operatorname{diag}(\boldsymbol{\lambda})\, \mathbf{B}^\top$   (8)

where $\operatorname{diag}(\boldsymbol{\lambda})$ is the diagonal matrix built from the $k$-dimensional coefficient vector $\boldsymbol{\lambda} = [\lambda_1, \dots, \lambda_k]^\top$:

$\operatorname{diag}(\boldsymbol{\lambda}) = \begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_k \end{bmatrix}$   (9)

The construction of $\mathbf{U}$ is similar, but with a different set of parameter matrices $\mathbf{C}$ and $\mathbf{F}$:6

$\mathbf{U} = \mathbf{C}\, \operatorname{diag}(\boldsymbol{\lambda})\, \mathbf{F}^\top$   (10)

Note that, despite their similarity to SVD at first glance, Equations 8 and 10 are not performing matrix factorization. Rather, we learn $\mathbf{A}$, $\mathbf{B}$, $\mathbf{C}$, and $\mathbf{F}$ directly; the rank-1 matrices $\mathbf{W}_i$ and $\mathbf{U}_i$ are never explicitly instantiated (Peng et al., 2017, 2018c).

To summarize, we reparameterize $\mathbf{W}$ and $\mathbf{U}$ as interpolations of rank-1 matrices. By the fact that $\operatorname{rank}(\mathbf{X} + \mathbf{Y}) \le \operatorname{rank}(\mathbf{X}) + \operatorname{rank}(\mathbf{Y})$, the ranks of $\mathbf{W}$ and $\mathbf{U}$ are upper-bounded by $k$. As pointed out by Krueger and Memisevic (2017), the parameter matrices of a trained RNN tend to have full rank. Therefore, in the experiments, we set $k$ equal to the hidden size $d$, aiming to allow the adaptive decoder to use full-rank matrices in the recurrent computation. Yet, if one holds a priori beliefs that the matrices should have lower ranks, using $k < d$ could be desirable. When $k = d$, an adaptive RNN constructed by the above approach has $4d^2$ parameters, which is comparable to the $2d^2$ parameters in a standard Elman network.7
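
The construction in Equations 8 and 10 can be written in a few lines. The sketch below (NumPy, for illustration only; the paper's implementation is in TensorFlow) builds $\mathbf{W}$ and $\mathbf{U}$ from the factor stacks, and also shows that the recurrent step can be computed without ever forming the two matrices.

import numpy as np

def build_adaptive_params(A, B, C, F, lam):
    W = A @ np.diag(lam) @ B.T            # Equation 8
    U = C @ np.diag(lam) @ F.T            # Equation 10
    return W, U

def adaptive_elman_step(A, B, C, F, lam, h_prev, v_t):
    # Same as tanh(W v_t + U h_{t-1}), computed as A (lam * (B^T v_t)) + C (lam * (F^T h_{t-1})),
    # so W and U never need to be instantiated explicitly.
    return np.tanh(A @ (lam * (B.T @ v_t)) + C @ (lam * (F.T @ h_prev)))

d = k = 4
rng = np.random.default_rng(0)
A, B, C, F = (rng.normal(size=(d, k)) for _ in range(4))
lam = rng.normal(size=k)                  # exemplar-specific coefficients (Equation 11)
v_t, h_prev = rng.normal(size=d), np.zeros(d)

W, U = build_adaptive_params(A, B, C, F, lam)
assert np.allclose(np.tanh(W @ v_t + U @ h_prev),
                   adaptive_elman_step(A, B, C, F, lam, h_prev, v_t))
assert np.linalg.matrix_rank(W) <= k      # rank bound from summing k rank-1 terms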

3.2 Incorporating Exemplars

We now discuss the computation of the coefficients $\boldsymbol{\lambda}$, through which the exemplars inform the decoder construction (Equations 8 and 10). Before detailing the neural network architecture, we begin by describing the exemplar retrieval procedure.

Retrieving exemplars $\boldsymbol{z}$.

Intuitively, similar source texts should hold similar targets. Therefore, given source input $\boldsymbol{x}$, we use as its exemplar $\boldsymbol{z}$ the training target whose source is most similar to $\boldsymbol{x}$.8 To compute the similarities between source texts, we use bag-of-words (BOW) features and cosine similarity. We extract the top-1 exemplar for each instance. This step is part of the pre-processing, and we do not change the exemplars as the training proceeds.
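
The retrieval step can be reproduced with standard tools. Below is a sketch using scikit-learn (our choice for illustration; any BOW/cosine implementation would do), including the restriction from footnote 8 that a training instance may not retrieve its own target. The variable names are hypothetical.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_exemplars(train_sources, train_targets, query_sources, exclude_self=False):
    """Return, for each query source, the training target whose source is most similar."""
    vectorizer = CountVectorizer()                      # bag-of-words features
    train_bow = vectorizer.fit_transform(train_sources)
    query_bow = vectorizer.transform(query_sources)
    sim = cosine_similarity(query_bow, train_bow)
    if exclude_self:                                    # footnote 8: when the queries are the
        np.fill_diagonal(sim, -np.inf)                  # training sources themselves, mask the diagonal
    return [train_targets[j] for j in sim.argmax(axis=1)]

# Hypothetical usage, retrieving exemplars for the training set itself:
# exemplars = retrieve_exemplars(train_src, train_tgt, train_src, exclude_self=True)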

There are, of course, many other strategies to obtain the exemplars, e.g., using handcrafted or heuristically created hard templates (Reiter et al., 2005; Becker, 2002; Foster and White, 2004, inter alia), randomly sampling multiple training instances (Guu et al., 2018), or learning a neural reranker (Cao et al., 2018a). Exploring more sophisticated exemplar extraction is certainly interesting, and we defer it to future work.

Computing coefficients.

Next we describe the computation of $\boldsymbol{\lambda}$, the $k$-dimensional coefficient vector, which is used to construct the adaptive decoder (Equations 8 and 10).

Intuitively, the rank-1 matrices ($\mathbf{W}_i$'s and $\mathbf{U}_i$'s in Equation 6 and thereafter) can be seen as capturing different aspects of the generated text, and $\boldsymbol{\lambda}$ determines how much each of them contributes to the adaptive decoder construction. A natural choice is to calculate $\boldsymbol{\lambda}$ from the similarities between the exemplar and each of the aspects.

To accomplish this, we run an RNN encoder over $\boldsymbol{z}$, and use the last hidden state as its vector representation $\mathbf{e}$.9 We further associate each pair $(\mathbf{W}_i, \mathbf{U}_i)$ with a learned vector $\mathbf{q}_i$; then $\lambda_i$ is computed as the similarity between $\mathbf{e}$ and $\mathbf{q}_i$, using an inner product $\lambda_i = \mathbf{q}_i^\top \mathbf{e}$. More compactly,

$\boldsymbol{\lambda} = \mathbf{Q}^\top \mathbf{e}$,   (11)

with $\mathbf{Q} = [\mathbf{q}_1, \mathbf{q}_2, \dots, \mathbf{q}_k]$.

Closing this section, Algorithm 1 summarizes the procedure to construct an adaptive decoder.

1: procedure AdaDec($\boldsymbol{x}$)
2:     Retrieve the exemplar $\boldsymbol{z}$    ▷ §3.2
3:     Compute $\boldsymbol{z}$'s representation $\mathbf{e}$    ▷ §3.2
4:     Compute coefficients $\boldsymbol{\lambda}$    ▷ Eq. 11
5:     Construct the decoder $\boldsymbol{\theta} = \{\mathbf{W}, \mathbf{U}\}$    ▷ Eqs. 8, 10
6: end procedure
Algorithm 1 Adaptive decoder construction.
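
Putting the pieces together, Algorithm 1 corresponds to the following sketch (NumPy; the exemplar encoder is stubbed out with a mean-of-embeddings placeholder of ours, whereas the paper uses an RNN's last hidden state):

import numpy as np

def construct_adaptive_decoder(exemplar_tokens, encode_exemplar, Q, A, B, C, F):
    e = encode_exemplar(exemplar_tokens)   # step 3: vector representation of the exemplar
    lam = Q.T @ e                          # step 4: Equation 11
    W = A @ np.diag(lam) @ B.T             # step 5: Equation 8
    U = C @ np.diag(lam) @ F.T             #         Equation 10
    return W, U

# Toy usage with a placeholder exemplar encoder (illustration only).
d, k, d_e = 4, 4, 8
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=d_e) for w in "norway boosts earthquake aid to pakistan".split()}
encode_exemplar = lambda toks: np.mean([emb[t] for t in toks], axis=0)
Q = rng.normal(size=(d_e, k))
A, B, C, F = (rng.normal(size=(d, k)) for _ in range(4))
W, U = construct_adaptive_decoder("norway boosts earthquake aid to pakistan".split(),
                                  encode_exemplar, Q, A, B, C, F)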

3.3 Discussion.

Although we have based our discussion on Elman networks so far, it is straightforward to apply this method to their gated variants (Hochreiter and Schmidhuber, 1997; Cho et al., 2014, inter alia), and to other quasi-/non-recurrent neural architectures (Bradbury et al., 2017; Vaswani et al., 2017; Peng et al., 2018a, inter alia). Throughout the experiments, we use an adaptive LSTM decoder (§4). As a drop-in replacement in the encoder-decoder architecture, it introduces only a modest amount of additional parameters and computational overhead, especially when one uses a small encoder for the exemplar (i.e., when the vectors in Equation 11 are low-dimensional). It can also benefit from highly-optimized GPU implementations, e.g., CuDNN, since it uses the same recurrent computation as a standard nonadaptive RNN.
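
For concreteness, one way to realize such an adaptive LSTM step is sketched below. The specific choice of giving each gate its own factor stacks while sharing the exemplar coefficients $\boldsymbol{\lambda}$, analogous to Equations 8 and 10, is our illustrative assumption rather than a detail specified in the text.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adaptive_lstm_step(factors, lam, v_t, h_prev, c_prev):
    """factors[g] = (A, B, C, F): d x k stacks for each gate g in {"i", "f", "o", "g"}."""
    def build(X, Y):                        # the Equation 8/10 pattern, with shared lam
        return X @ np.diag(lam) @ Y.T
    pre = {g: build(A, B) @ v_t + build(C, F) @ h_prev
           for g, (A, B, C, F) in factors.items()}
    i, f, o = sigmoid(pre["i"]), sigmoid(pre["f"]), sigmoid(pre["o"])
    g = np.tanh(pre["g"])                   # candidate cell state
    c = f * c_prev + i * g                  # standard LSTM cell update
    return o * np.tanh(c), c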

In addition to the neural networks, the adaptive decoder requires access to the full training set, due to the retrieval step. In this sense it is semi-parametric.10 The idea of dynamically constructing parameters is inspired by Hypernetworks (Ha et al., 2017) and the earlier works discussed therein. It has proved successful in tasks such as classification (Jia et al., 2016; Liu et al., 2017) and machine translation (Platanios et al., 2018). Many recent template-based generation models include the exemplars as additional content alongside the source, and allow the decoder to attend over and copy from both (Gu et al., 2018; Guu et al., 2018; Weston et al., 2018; Pandey et al., 2018; Cao et al., 2018a, inter alia). We compare to this approach in the experiments, and show that our model offers favorable performance, and that the two can potentially be combined for further improvements.

                     NYT      Giga     Wikibio
# inst.    Train     92K      3.8M     583K
           Dev.      9K       190K     73K
           Test      9,706    1,951    73K
Avg. len.  Src.      939.0    31.4     N/A
           Tgt.      48.6     8.2      26.0
Table 1: Number of instances and average text lengths for the datasets used in the experiments. The lengths are averaged over training instances.

4 Experiments

This section empirically evaluates the proposed model on two sets of text generation tasks: abstractive summarization (§4.2) and data-to-text generation (§4.3). Before heading into the experimental details, we first describe the architectures of the compared models in §4.1.

4.1 Compared Models

In addition to previous works, we compare to the following baselines, aiming to control for confounding factors due to detailed implementation choices.

  • Seq2seq. The encoder-decoder architecture enhanced with attention and copy mechanisms. The encoder is implemented with a bi-directional LSTM (BiLSTM; Hochreiter and Schmidhuber, 1997; Schuster and Paliwal, 1997; Graves, 2012), and the decoder with a uni-directional one. We tie the input embeddings of both the encoder and the decoder, as well as the softmax weights (Press and Wolf, 2017). We use beam search during evaluation, with length penalty (Wu et al., 2016).

  • AttExp. It is based on Seq2seq. It encodes, attends over, and copies from the exemplars, in addition to the source inputs.

Our model using the adaptive decoder (AdaDec) closely builds upon Seq2seq. It uses a dynamically constructed LSTM decoder, and does not use attention or copy mechanisms over the encoded exemplars. The extracted exemplars are the same as those used by AttExp. To ensure fair comparisons, we use comparable training procedures and regularization techniques for the above models. The readers are referred to the appendix for further details such as hyperparameters.

4.2 Text Summarization

Datasets.

We empirically evaluate our model on two benchmark text summarization datasets:

  • Annotated Gigaword corpus (Gigaword; Graff et al., 2003; Napoles et al., 2012). Gigaword contains news articles sourced from various news services over the last two decades. To produce the dataset, we follow the split and preprocessing by Rush et al. (2015), and pair the first sentences and the headlines in the news articles. It results in a 3.8M/190K/1,951 train/dev./test split. The average lengths of the source and target texts are 31.4 and 8.2, respectively.

  • New York Times Annotated Corpus (NYT; Sandhaus, 2008). It contains news articles published between 1996 and 2007 by the New York Times. We use the split and preprocessing by Durrett et al. (2016).11 Following their effort, we evaluate on a smaller portion of the test set, where the gold summaries are longer than 50 tokens. We further randomly sample 9,000 instances from the training data for validation, resulting in a 91,834/9,000/3,452 train/dev./test split. Compared to Gigaword, the inputs and targets in NYT are much longer (averaging 939.0 and 48.6 tokens, respectively).

Table 1 summarizes some statistics of the datasets. We note that some recent works use a different split of the NYT corpus (Paulus et al., 2018; Gehrmann et al., 2018), and thus are not comparable to the models in Table 3. We decide to use the one by Durrett et al. (2016) because their preprocessing script is publicly available.

For both datasets, we apply byte-pair encoding (BPE; Sennrich et al., 2016), which has been shown to improve the generation of proper nouns (Fan et al., 2018).

Model Rg-1 Rg-2 Rg-L
Open-NMT 35.0 16.6 32.4
Cao et al., 2018a (Basic) 36.0 17.1 33.2
Cao et al., 2018a (Full) 37.0 19.0 34.5
Cao et al. (2018b) 37.3 17.6 34.2
This work (Seq2seq) 35.8 17.5 33.5
This work (AttExp) 36.0 17.7 33.1
This work (AdaDec) 37.3 18.5 34.7
Table 2: Text summarization performance in Rouge F1 scores (dubbed as Rg-X) on the Gigaword test set (§4.2). The Cao et al. (2018a) models and our AttExp and AdaDec use retrieved exemplars, while Cao et al. (2018b) uses handcrafted features. Bold font indicates best performance. Open-NMT numbers are taken from Cao et al. (2018a).

Empirical results.

Table 2 compares the models on the Gigaword test set in Rouge F1 (Lin, 2004).12

By using adaptive decoders, our model (AdaDec) improves over Seq2seq by more than 1.1 Rouge scores. Cao et al. (2018b) and the Full model by Cao et al. (2018a) hold the best published results. The former uses extensive handcrafted features and relies on external information extraction and syntactic parsing systems; while the latter uses additional encoding, attention and copy mechanisms over the exemplars extracted using a novel neural reranker. AdaDec achieves better or comparable performance to the state-of-the-art models, without using any handcrafted features or reranking techniques. The Basic model by Cao et al. (2018a) ablates the reranking component from their Full model, and uses the top exemplar retrieved by the IR system. Therefore it is a more comparable baseline to ours. AdaDec outperforms it by more than 1.3 Rouge scores. Surprisingly, we do not observe interesting improvements by AttExp over the sequence-to-sequence baseline. We believe that our model can benefit from better extracted exemplars by, e.g., applying a reranking system. Such exploration is deferred to future work.

The NYT experimental results are summarized in Table 3. We follow previous works and report limited-length Rouge recall values.13 Durrett et al. (2016) is an extractive model, and Paulus et al. (2018) an abstractive approach based on reinforcement learning. Our AdaDec model outperforms both. We observe similar trends when comparing AdaDec to the Seq2seq and AttExp baselines, with the exception that AttExp does improve over Seq2seq.

Model Rouge-1 Rouge-2
Durrett et al. (2016) 42.2 24.9
Paulus et al. (2018) 42.9 26.0
This work (Seq2seq) 41.9 25.1
This work (AttExp) 42.5 25.7
This work (AdaDec) 43.2 26.4
Table 3: NYT text summarization test performance in Rouge recall values. This is a smaller portion of the original test data, after filtering out instances with summaries shorter than 50 tokens (§4.2; Durrett et al., 2016). AttExp and AdaDec use retrieved exemplars; bold font indicates best performance.

4.3 Data-to-text Generation

Data-to-text generation aims to generate textual descriptions of structured data, which can be seen as a table consisting of a collection of records (Liang et al., 2009). For a given entity, each record is an (attribute, value) tuple. Figure 2 shows an example for entity Jacques-Louis David. The table specifies the entity’s properties with tuples (born, 30 August 1748), (nationality, French), and so forth. The table is paired with a description, which the model is supposed to generate using the table as input. We refer the readers to Lebret et al. (2016) for more details about the task.

Dataset and implementation details.

We use the Wikibio dataset (Lebret et al., 2016). It is automatically constructed by pairing the tables and the opening sentences of biography articles from English Wikipedia. We follow the split and preprocessing provided along with the dataset, with around 583K/73K/73K train/dev./test instances. Following Lebret et al. (2016), we linearize the tables, such that we can conveniently train the sequence-to-sequence style models described in §4.1. Table 1 summarizes some statistics of the dataset.

In contrast to the text summarization experiment (§4.2), we do not apply BPE here. Further, the word embeddings are initialized with GloVe (Pennington et al., 2014) and fixed during training, and they are not tied with the softmax weights. In addition to the models introduced in §4.1, we compare to AdaDec+AttExp, aiming to study whether the adaptive decoder can further benefit from attention and copy mechanisms over the exemplars.

Figure 2: A training instance from the Wikibio dataset. It consists of a collection of records for Jacques-Louis David (top) and a piece of textual description (bottom).

Empirical results.

Following Liu et al. (2018), we report Rouge-4 and Bleu scores (Papineni et al., 2002).14 Table 4 summarizes the data-to-text generation results on the Wikibio test set. Overall, we observe similar trends to those in the summarization experiment (§4.2): by attending over and copying from the exemplars, AttExp improves upon the Seq2seq baseline by around 0.6 absolute points. Also utilizing exemplar information, our AdaDec model outperforms Seq2seq by a larger margin: 1.3 for Rouge-4 and 1.1 for Bleu. We further study whether we can gain by combining both. AdaDec+AttExp achieves around 0.5 absolute improvement over AdaDec, less than the improvement of AttExp over Seq2seq. This provides evidence that, to some extent, the ways AttExp and AdaDec incorporate exemplar information might be complementary. Wiseman et al. (2018) is a template-motivated model based on a semi-Markov model. Liu et al. (2018) hold the current state-of-the-art results. They encode the table structures by using (a) position and field embeddings, and (b) structure-aware attention and gating techniques. These techniques are beyond the scope of this work, which focuses mainly on the decoding end.

Model Rg-4 Bleu
Wiseman et al. (2018) 38.6 34.8
Liu et al. (2018) 41.7 44.7
This work (Seq2seq) 39.3 42.5
This work (AttExp) 40.0 43.1
This work (AdaDec) 40.6 43.6
This work (AdaDec+AttExp) 41.1 44.1
Table 4: Data-to-text generation performance in Rouge-4 and Bleu on the Wikibio test set (§4.3). AttExp, AdaDec, and AdaDec+AttExp use retrieved exemplars.

5 Analysis

We now qualitatively evaluate our model, by studying how its outputs are affected by using different exemplars. Figure 3 shows two randomly sampled Gigaword development instances. It compares the outputs by AdaDec (i.e., without attention/copy over exemplars; §4.1) when receiving different exemplars, controlling for the same source inputs. In each example, Exemplar 1 is retrieved by the system (i.e., a training target; §3.2); while the remaining ones are produced by the authors, by modifying the first one in styles and sometimes introducing distractions in the content.

In the top example, the model includes people into the subject (Three vs. Three people) under the influence by Exemplar 2; Exemplar 3 changes the tense and adds some distraction by changing the place from Britain to Canada. The model follows the tense switch, but gets confused by the distraction, and decides to let a train in southern Europe collide into North America, which it should not. Looking at the bottom example, the model in general follows the exemplar in using noun adjuncts or prepositional phrases (e.g., new home sales vs. sales of new homes), except the first one. Perhaps confused by the distraction in Exemplar 3, the model makes a judgment on the specific amount of growth, but gets it wrong.

Source: A Portuguese train derailed in the northern region of Oporto on Wednesday, killing three people…
Exemplar 1: Two die in a Britain train collision.
Output 1: Three killed in Portuguese train derailment.
Exemplar 2: Two people were killed in Britain train collision.
Output 2: Three people killed in Portuguese train derailment.
Exemplar 3: A train collision in Canada killed two people.
Output 3: Portuguese train derails in northern Mexico killing three.

Source: Sales of new homes in the U.S. increased by 11.8 percent in May, the biggest gain in 26 years…
Exemplar 1: U.S. sales of new homes up strongly in March.
Output 1: US new home sales rise 11.8 percent in May.
Exemplar 2: The sales of new homes in the U.S. grow strongly.
Output 2: Sales of new homes in US rise in May.
Exemplar 3: U.S. economic statistics: new home sales grow by 2.5 percent.
Output 3: US new home sales grow 26 percent in May.

Figure 3: Two randomly sampled Gigaword development instances used for qualitative evaluation (§5). Exemplar 1's are retrieved by the system (§3.2), while the remaining ones are produced by the authors. Notable exemplar changes are highlighted in bold purple, and output changes in italic yellow.

6 Related Work

Exemplar-based generation.

Partly inspired by traditional template-based generation (Kukich, 1983; Reiter and Dale, 1997, inter alia), many recent efforts have been devoted to augmenting text generation models with retrieved exemplars (Hodosh et al., 2013; Mason and Charniak, 2014; Song et al., 2016; Lin et al., 2017; Li et al., 2018, inter alia). Without committing to an explicit slot-filling process, a typical method is to include exemplars as additional inputs to the sequence-to-sequence models (Gu et al., 2018; Pandey et al., 2018; Guu et al., 2018, inter alia). Wiseman et al. (2018) took a different approach and used a semi-Markov model to learn templates.

Dynamic parameter construction.

The idea of using a smaller network to generate weights for a larger one dates back to Stanley et al. (2009) and Koutnik et al. (2010), mainly in the context of evolutionary computing. It was later revisited with representation learning (Moczulski et al., 2015; Fernando et al., 2016; Al-Shedivat et al., 2017, inter alia), and successfully applied to classification (Jia et al., 2016; Liu et al., 2017) and machine translation (Platanios et al., 2018). It also relates to the meta-learning setup (Thrun and Pratt, 1998).

7 Conclusion

We presented a text generation model using exemplar-informed adaptive decoding. It reparameterizes the decoder using information gathered from retrieved exemplars. We experimented with text summarization and data-to-text generation, and showed that the proposed model achieves strong performance and outperforms comparable baselines on both. The proposed model is applicable to other conditioned text generation tasks. We release our implementation at https://homes.cs.washington.edu/~hapeng.

Acknowledgments

We thank Antonios Anastasopoulos, Ming-Wei Chang, Michael Collins, Jacob Devlin, Yichen Gong, Luheng He, Kenton Lee, Dianqi Li, Zhouhan Lin, Slav Petrov, Oscar Täckström, Kristina Toutanova, and other members of the Google AI language team for the helpful discussion, and the anonymous reviewers for their valuable feedback.

Appendix A Implementation Details

Our implementation is based on TensorFlow.15 For both experiments, we use similar implementation strategies for the baselines and our model, aiming for a fair comparison.

a.1 Text Summarization

We train the models using Adam (Kingma and Ba, 2015) with a batch size of 64. We use the default values of the TensorFlow Adam implementation for the initial learning rate, $\beta_1$, $\beta_2$, and $\epsilon$. The models are trained for up to 20 epochs, with the learning rate annealed at a rate of 0.2 every 4 epochs. A weight decay proportional to the current learning rate is applied to all parameters. The $\ell_2$-norms of gradients are clipped to 1.0. Early stopping is applied based on Rouge-L performance on the development set.

The weights of the output softmax function are tied with the word embeddings, which are randomly initialized. For the encoders, we use 256-dimensional 3-layer BiLSTMs in the Gigaword experiment, and 300-dimensional 2-layer BiLSTMs in the NYT experiment, both with residual connections (He et al., 2015); the decoders are one-layer (adaptive) LSTMs with the same size as the encoders, as are the word embeddings. We apply variational dropout (Kingma et al., 2015) in the encoder RNNs, and dropout (Srivastava et al., 2014) in the embeddings and the softmax layer, with rates selected empirically. The last hidden state at the top layer of the encoder is fed through a one-layer MLP, and then used as the decoder's initial state. We use the attention function by Luong et al. (2015), and the copy mechanism by See et al. (2017). The exemplar encoder (§3.2) uses a one-layer 32- and 50-dimensional BiLSTM for the Gigaword and NYT experiments, respectively. For numerical stability, $\boldsymbol{\lambda}$ (Equation 11) is rescaled to a norm determined by the hidden size of the adaptive decoder (Peng et al., 2018b).

In the Gigaword experiment, we use 25K BPE types, and limit the maximum decoding length to be 50 subword units; while for NYT, we use 10K types with a maximum decoding length of 300, and further truncate the source to the first 1000 units. During evaluation, we apply beam search of width 5, with a 1.0 length penalty (Wu et al., 2016).

a.2 Data-to-text Generation

We in general follow the implementation details in the summarization experiment, with the following modifications:

  • We do not apply byte-pair encoding here, and use a vocabulary of size 50K.

  • The word embeddings are initialized using the 840B-token 300-dimensional GloVe vectors (Pennington et al., 2014) and fixed during training. Further, the softmax weights are not tied to the embeddings.

  • Three-layer 300-dimensional BiLSTM encoders are used, with residual connections; the exemplar encoder uses a one-layer 50-dimensional BiLSTM.

  • Early stopping is applied based on development set Rouge-4 performance.

  • A maximum decoding length of 40 tokens is used.

Footnotes

  1. Work done during internship at Google.
  2. The term exemplar indicates a training instance used to help generation. We aim to distinguish from “templates,” since here no explicit slot-filling procedure is involved.
  3. We use the training target as the exemplar, whose source is most similar to the current input. §3.2 describes the details.
  4. The number of parameters grows linearly with the number of possible exemplars, which, as we will soon discuss in §3.2, can be as large as the training set.
  5. Instead of choosing $k$ empirically, we set it equal to $d$ in the experiments. Please see the end of §3.1 for a related discussion.
  6. The bias term in the Elman network can be constructed as $\mathbf{b} = \mathbf{P} \boldsymbol{\lambda}$, with $\mathbf{P} \in \mathbb{R}^{d \times k}$ being a learned matrix.
  7. This does not include the bias term, which contributes $dk$ additional parameters to the former, and $d$ to the latter.
  8. The source of an exemplar is only used in the retrieval and never fed into the encoder-decoder model. For a training instance, we additionally disallow using its own target as the exemplar.
  9. For clarity, the dependence of $\mathbf{e}$ on the exemplar $\boldsymbol{z}$ is suppressed, just as that of $\boldsymbol{\lambda}$.
  10. Nothing prohibits adaptively constructing other components of the model, e.g., the encoder $g$. Yet, our motivation is to use exemplars to inform how to say it, which is primarily determined by the decoder (in contrast, the encoder relates more to selecting the content).
  11. https://github.com/gregdurrett/berkeley-doc-summarizer.
  12. Version 1.5.5 of the official script.
  13. Following Durrett et al. (2016) and Paulus et al. (2018), we truncate the predictions to the lengths of the gold summaries, and evaluate Rouge recall, instead of on full-length predictions.
  14. We use the script by Lin (2004) to calculate the Rouge score, and the mteval script for Bleu: https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/mteval-v13a.pl.
  15. https://www.tensorflow.org/

References

  1. Maruan Al-Shedivat, Avinava Dubey, and Eric P. Xing. 2017. Contextual explanation networks. arXiv:1705.10301.
  2. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. of ICLR.
  3. Tilman Becker. 2002. Practical, template–based natural language generation with tag. In Proceedings of the Sixth International Workshop on Tree Adjoining Grammar and Related Frameworks.
  4. James Bradbury, Stephen Merity, Caiming Xiong, and Richard Socher. 2017. Quasi-recurrent neural networks. In Proc. of ICLR.
  5. Ziqiang Cao, Wenjie Li, Sujian Li, and Furu Wei. 2018a. Retrieve, rerank and rewrite: Soft template based neural summarization. In Proc. of ACL.
  6. Ziqiang Cao, Furu Wei, Wenjie Li, and Sujian Li. 2018b. Faithful to the original: Fact aware neural abstractive summarization. In Proc. of AAAI.
  7. Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proc. of EMNLP.
  8. Greg Durrett, Taylor Berg-Kirkpatrick, and Dan Klein. 2016. Learning-based single-document summarization with compression and anaphoricity constraints. In Proc. of ACL.
  9. Jeffrey L. Elman. 1990. Finding structure in time. Cognitive science, 14(2):179–211.
  10. Angela Fan, David Grangier, and Michael Auli. 2018. Controllable abstractive summarization. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation.
  11. Chrisantha Fernando, Dylan Banarse, Malcolm Reynolds, Frederic Besse, David Pfau, Max Jaderberg, Marc Lanctot, and Daan Wierstra. 2016. Convolution by evolution: Differentiable pattern producing networks. In Proceedings of the Genetic and Evolutionary Computation Conference 2016.
  12. Mary Ellen Foster and Michael White. 2004. Techniques for text planning with xslt. In Proceeedings of the Workshop on NLP and XML: RDF/RDFS and OWL in Language Technology.
  13. Albert Gatt and Ehud Reiter. 2009. SimpleNLG: A realisation engine for practical applications. In Proceedings of the 12th European Workshop on Natural Language Generation.
  14. Sebastian Gehrmann, Yuntian Deng, and Alexander M Rush. 2018. Bottom-up abstractive summarization. In Proc. of EMNLP.
  15. David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2003. English Gigaword Second Edition.
  16. Alex Graves. 2012. Supervised Sequence Labelling with Recurrent Neural Networks, volume 385 of Studies in Computational Intelligence. Springer.
  17. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proc. of ACL.
  18. Jiatao Gu, Yong Wang, Kyunghyun Cho, and Victor OK Li. 2018. Search engine guided non-parametric neural machine translation. In Proc. of AAAI.
  19. Kelvin Guu, Tatsunori B Hashimoto, Yonatan Oren, and Percy Liang. 2018. Generating sentences by editing prototypes. TACL, 6:437–450.
  20. David Ha, Andrew Dai, and Quoc V Le. 2017. Hypernetworks. In Proc. of ICLR.
  21. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Deep residual learning for image recognition. In Proc. of CVPR.
  22. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780.
  23. Micah Hodosh, Peter Young, and Julia Hockenmaier. 2013. Framing image description as a ranking task: Data, models and evaluation metrics. JAIR, 47(1):853–899.
  24. Xu Jia, Bert De Brabandere, Tinne Tuytelaars, and Luc V Gool. 2016. Dynamic filter networks. In Proc. of NeurIPS.
  25. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proc. of ICLR.
  26. Durk P Kingma, Tim Salimans, and Max Welling. 2015. Variational dropout and the local reparameterization trick. In Proc. of NeurIPS.
  27. Philipp Koehn. 2009. Statistical machine translation. Cambridge University Press.
  28. Jan Koutnik, Faustino Gomez, and Jürgen Schmidhuber. 2010. Evolving neural networks in compressed weight space. In Proceedings of the Annual Conference on Genetic and Evolutionary Computation.
  29. David Krueger and Roland Memisevic. 2017. Regularizing rnns by stabilizing activations. In Proc. of ICLR.
  30. Karen Kukich. 1983. Design of a knowledge-based report generator. In Proc. of ACL.
  31. Rémi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with application to the biography domain. In Proc. of EMNLP.
  32. Dianqi Li, Xiaodong He, Qiuyuan Huang, Ming-Ting Sun, and Lei Zhang. 2018. Generating diverse and accurate visual captions by comparative adversarial learning.
  33. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proc. of NAACL.
  34. Percy Liang, Michael I. Jordan, and Dan Klein. 2009. Learning semantic correspondences with less supervision. In Proc. of ACL.
  35. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop.
  36. Kevin Lin, Dianqi Li, Xiaodong He, Zhengyou Zhang, and Ming-ting Sun. 2017. Adversarial ranking for language generation. In Proc. of NeurIPS.
  37. Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2017. Dynamic compositional neural networks over tree structure. In Proc. of IJCAI.
  38. Tianyu Liu, Kexiang Wang, Lei Sha, Baobao Chang, and Zhifang Sui. 2018. Table-to-text generation by structure-aware seq2seq learning. In Proc. of AAAI.
  39. Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proc. of EMNLP.
  40. Inderjeet Mani. 1999. Advances in automatic text summarization. MIT press.
  41. Rebecca Mason and Eugene Charniak. 2014. Domain-specific image captioning. In Proc. of CoNLL.
  42. Kathleen McKeown. 1992. Text generation. Cambridge University Press.
  43. Marcin Moczulski, Misha Denil, Jeremy Appleyard, and Nando de Freitas. 2015. ACDC: A structured efficient linear layer. arXiv:1511.05946.
  44. Courtney Napoles, Matthew Gormley, and Benjamin Van Durme. 2012. Annotated gigaword. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction.
  45. Gaurav Pandey, Danish Contractor, Vineet Kumar, and Sachindra Joshi. 2018. Exemplar encoder-decoder for neural conversation generation. In Proc. of ACL.
  46. Kishore Papineni, Salim Roukos, Todd Ward, and Wei jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proc. of ACL.
  47. Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In Proc. of ICLR.
  48. Hao Peng, Roy Schwartz, Sam Thomson, and Noah A. Smith. 2018a. Rational recurrences. In Proc. of EMNLP.
  49. Hao Peng, Sam Thomson, and Noah A. Smith. 2017. Deep multitask learning for semantic dependency parsing. In Proc. of ACL.
  50. Hao Peng, Sam Thomson, and Noah A. Smith. 2018b. Backpropagating through structured argmax using a spigot. In Proc. of ACL.
  51. Hao Peng, Sam Thomson, Swabha Swayamdipta, and Noah A. Smith. 2018c. Learning joint semantic parsers from disjoint data. In Proc. of NAACL.
  52. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proc. of EMNLP.
  53. Emmanouil Antonios Platanios, Mrinmaya Sachan, Graham Neubig, and Tom Mitchell. 2018. Contextual parameter generation for universal neural machine translation. In Proc. of EMNLP.
  54. Ofir Press and Lior Wolf. 2017. Using the output embedding to improve language models. In Proc. of EACL.
  55. Ehud Reiter and Robert Dale. 1997. Building applied natural language generation systems. Natural Language Engineering, 3(1):57–87.
  56. Ehud Reiter, Somayajulu Sripada, Jim Hunter, Jin Yu, and Ian Davy. 2005. Choosing words in computer-generated weather forecasts. Artificial Intelligence, 167(1-2):137–169.
  57. Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proc. of EMNLP.
  58. Evan Sandhaus. 2008. The New York Times Annotated Corpus. LDC corpora. Linguistic Data Consortium.
  59. M. Schuster and K. K. Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673–2681.
  60. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proc. of ACL.
  61. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proc. of ACL.
  62. Yiping Song, Rui Yan, Xiang Li, Dongyan Zhao, and Ming Zhang. 2016. Two are better than one: An ensemble of retrieval- and generation-based dialog systems. arXiv:1610.07149.
  63. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. JMLR, 15(1):1929–1958.
  64. K. O. Stanley, D. B. D’Ambrosio, and J. Gauci. 2009. A hypercube-based encoding for evolving large-scale neural networks. Artificial Life, 15(2):185–212.
  65. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Proc. of NeurIPS.
  66. Sebastian Thrun and Lorien Pratt, editors. 1998. Learning to Learn. Kluwer Academic Publishers.
  67. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proc. of NeurIPS.
  68. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. In Proc. of ICML.
  69. Jason Weston, Emily Dinan, and Alexander Miller. 2018. Retrieve and refine: Improved sequence generation models for dialogue. In Proceedings of the International Workshop on Search-Oriented Conversational AI.
  70. Sam Wiseman, Stuart Shieber, and Alexander Rush. 2017. Challenges in data-to-document generation. In Proc. of EMNLP.
  71. Sam Wiseman, Stuart M Shieber, and Alexander M Rush. 2018. Learning neural templates for text generation. In Proc. of EMNLP.
  72. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv:1609.08144.