A Generalized Framework of Sequence Generation with Application to Undirected Sequence Models
Abstract
Undirected neural sequence models such as BERT (Devlin et al., 2019) have received renewed interest due to their success on discriminative natural language understanding tasks such as question answering and natural language inference. The problem of generating sequences directly from these models has received relatively little attention, in part because generating from such models departs significantly from the conventional approach of monotonic generation in directed sequence models. We investigate this problem by first proposing a generalized model of sequence generation that unifies decoding in directed and undirected models. The proposed framework models the process of generation rather than a resulting sequence, and under this framework, we derive various neural sequence models as special cases, such as autoregressive, semi-autoregressive, and refinement-based non-autoregressive models. This unification enables us to adapt decoding algorithms originally developed for directed sequence models to undirected models. We demonstrate this by evaluating various decoding strategies for the recently proposed cross-lingual masked translation model (Lample and Conneau, 2019). Our experiments reveal that generation from undirected sequence models, under our framework, is competitive with the state of the art on WMT'14 English–German translation. We furthermore observe that the proposed approach enables constant-time translation while remaining within 1 BLEU point of linear-time translation from the same undirected neural sequence model.
Elman Mansimov, New York University, mansimov@cs.nyu.edu
Alex Wang, New York University, alexwang@nyu.edu
Kyunghyun Cho, New York University; Facebook AI Research; CIFAR Azrieli Global Scholar, kyunghyun.cho@nyu.edu
Preprint. Under review.
1 Introduction
Undirected neural sequence models such as BERT (Devlin et al., 2019) have recently brought significant improvements to a variety of discriminative language modeling tasks such as question answering and natural language inference. Generation of sequences from such models has received relatively little attention. Unlike in directed sequence models, each word in an undirected sequence model often depends on the full left and right context around it. Thus, a decoding algorithm for an undirected sequence model must specify both how to select positions and what symbols to place in the selected positions. In this paper we formalize this process of selecting positions and replacing symbols as a generalized framework of sequence generation, and unify decoding from both directed and undirected sequence models under this framework. This framing enables us to study generation on its own, independently of the specific parameterization of the sequence models.
Our proposed unified framework casts sequence generation as a process of first determining the length of the sequence and then repeatedly alternating between selecting sequence positions and generating symbols for those positions. A variety of sequence models can be derived under this framework by appropriately designing the length distribution, position selection distribution, and symbol replacement distribution. Specifically, we derive popular neural decoding algorithms such as monotonic autoregressive, non-autoregressive by iterative refinement, monotonic semi-autoregressive, and non-monotonic decoding as special cases of the proposed framework.
This separation of coordinate selection and symbol replacement allows us to build a diverse set of decoding algorithms agnostic to the parameterization or training procedure of the underlying model. We thus fix the symbol replacement distribution as a variant of BERT and focus on deriving novel generation procedures for undirected neural sequence models under the proposed framework. We design a coordinate selection distribution using a log-linear model and demonstrate that our model generalizes various fixed-order generation strategies, while also being capable of adapting the generation order based on the content of intermediate sequences.
We empirically validate our proposal on machine translation using a translation variant of BERT called a masked translation model (Lample and Conneau, 2019). We specifically design several generation strategies based on properties of the intermediate sequence distributions and compare them against the state-of-the-art monotonic autoregressive sequence model (Vaswani et al., 2017) on WMT'14 English–German. Our experiments reveal that generation from undirected sequence models, under our framework, is competitive with the state of the art, and that adaptive-order generation strategies generate sequences in different ways, including left-to-right, right-to-left, and mixtures of these. This suggests the potential for designing and learning a more sophisticated coordinate selection mechanism.
Due to the flexibility in specifying a coordinate selection mechanism, we design constant-time variants of the proposed generation strategies, closely following the experimental setup of Ghazvininejad et al. (2019). Our experiments reveal that we can perform constant-time translation with a budget as low as 20 iterations (equivalent to generating a sentence of length 20 in the conventional approach) while losing only around 1 BLEU point compared to linear-time translation from the same masked translation model. This again confirms the potential of the proposed framework and generation strategies. We release the implementation, preprocessed datasets, and trained models online at https://github.com/nyu-dl/dl4mt-seqgen.
2 A Generalized Framework of Sequence Generation
We propose a generalized framework of probabilistic sequence generation to unify both directed and undirected neural sequence models under a single framework. In this generalized framework, we have a generation sequence $G$ of $T+1$ pairs of an intermediate sequence $Y^t = (y_1^t, \ldots, y_L^t)$ and the corresponding coordinate sequence $Z^t = (z_1^t, \ldots, z_L^t)$, where $y_i^t \in V$, $V$ is a vocabulary, $L$ is the length of a sequence, $T$ is the number of generation steps, and $z_i^t \in \{0, 1\}$. The coordinate sequence indicates which positions of the current intermediate sequence are to be replaced. That is, consecutive pairs are related to each other by $y_i^t = (1 - z_i^t)\, y_i^{t-1} + z_i^t\, \tilde{y}_i^t$, where $\tilde{y}_i^t$ is a new symbol for position $i$. This sequence of pairs describes a procedure by which a final sequence $Y^T$ is created, starting from an empty sequence $Y^0$ and an empty coordinate sequence $Z^0$. This procedure of sequence generation is probabilistically modelled as
$$p(Y^{0:T}, Z^{1:T} \mid X) = \underbrace{p(L \mid X)}_{\text{(c)}}\; \prod_{t=1}^{T} \Big[ \underbrace{p(Z^t \mid Y^{t-1}, X)}_{\text{(a)}}\; \prod_{i=1}^{L} \underbrace{p(\tilde{y}_i^t \mid Y^{t-1}, X)^{\,z_i^t}}_{\text{(b)}} \Big] \qquad (1)$$
We condition the whole process on an input variable $X$ to indicate that the proposed model is applicable to both conditional and unconditional sequence generation. In the latter case, $X = \emptyset$.
Before starting generation, we predict the length $L$ of a target sequence according to the distribution $p(L \mid X)$, to which we refer as (c) length prediction. At each generation step $t$, we first select the next positions/coordinates $Z^t$ for which the corresponding symbols will be replaced, according to $p(Z^t \mid Y^{t-1}, X)$, to which we refer as (a) coordinate selection. Once the coordinate sequence is determined, we replace the corresponding symbols according to the distribution $p(\tilde{y}_i^t \mid Y^{t-1}, X)$, leading to the next intermediate sequence $Y^t$. Given such a sequence generation model, we recover the sequence distribution $p(Y \mid X)$ by marginalizing out all the intermediate and coordinate sequences except for the final sequence $Y^T$. In the remainder of this section, we describe several special cases of the proposed generalized framework of neural sequence generation: monotonic autoregressive, non-autoregressive, semi-autoregressive, and non-monotonic neural sequence models.
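The alternation above can be sketched as a generic decoding loop in which the three distributions are supplied as callbacks. This is a minimal illustration under our own naming, not the authors' implementation; the toy instantiation at the bottom hard-codes a length of 3 and a left-to-right order.

```python
MASK = "<mask>"

def generate(length_dist, select_coords, replace_symbols, T, x=None):
    """Generic generation loop: (c) predict the length, then alternate
    (a) coordinate selection and (b) symbol replacement for T steps."""
    L = length_dist(x)                        # (c) length prediction
    y = [MASK] * L                            # Y^0: initial sequence
    for t in range(1, T + 1):
        z = select_coords(y, t, x)            # (a) 0/1 flags: positions to replace
        proposals = replace_symbols(y, z, x)  # (b) proposed symbol per position
        y = [proposals[i] if z[i] else y[i] for i in range(L)]
    return y

# Toy instantiation: fixed length, strictly left-to-right coordinate
# selection, and a replacement rule that just writes the position index.
out = generate(
    length_dist=lambda x: 3,
    select_coords=lambda y, t, x: [1 if i == t - 1 else 0 for i in range(len(y))],
    replace_symbols=lambda y, z, x: [f"w{i}" for i in range(len(y))],
    T=3,
)
```

Different choices of `select_coords` and `replace_symbols` recover the special cases derived next.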
2.1 Special Cases
Monotonic autoregressive neural sequence models
We first consider one extreme case of the generalized sequence generation model, in which we replace one symbol at a time, monotonically moving from the leftmost position to the rightmost. In this case, we define the coordinate selection distribution (a) of the generalized sequence generation model in Eq. (1) as

$$p(z_i^t = 1 \mid Y^{t-1}, X) = \mathbb{1}(i = t), \quad i = 1, \ldots, L, \qquad (2)$$

where $\mathbb{1}(\cdot)$ is an indicator function and $T = L$. This choice of the coordinate selection distribution is equivalent to saying that we replace one symbol at a time, shifting from the leftmost symbol to the rightmost symbol, regardless of the content of intermediate sequences. We then choose the symbol replacement distribution (b) in Eq. (1) to be

$$p(\tilde{y}_t^t \mid Y^{t-1}, X) = p(\tilde{y}_t^t \mid y_{<t}^{t-1}, X) \qquad (3)$$

for $i = t$. In other words, we limit the dependency of $\tilde{y}_t^t$ to only the symbols to its left in the previous intermediate sequence, $y_{<t}^{t-1}$, and the input variable $X$. The length distribution (c) in Eq. (1) is implicitly defined by considering how often the special token $\langle\text{eos}\rangle$, which indicates the end of a sequence, appears after $L$ generation steps.

With these choices, the proposed generalized model reduces to

$$p(Y \mid X) = \prod_{t=1}^{L} p(y_t \mid y_{<t}, X),$$

which is the widely used monotonic autoregressive neural sequence model.
Non-autoregressive neural sequence modeling by iterative refinement
We next consider the other extreme, in which we replace the symbols in all positions at every single generation step (Lee et al., 2018). We design the coordinate selection distribution to be $p(z_i^t = 1 \mid Y^{t-1}, X) = 1$ for all $i$, implying that we replace the symbols in all the positions. We then choose the symbol replacement distribution (b) in Eq. (1) to be $p(\tilde{y}_i^t \mid Y^{t-1}, X)$. That is, the distribution over the symbols in position $i$ of a new intermediate sequence is conditioned on the entire current sequence $Y^{t-1}$ and the input variable $X$. We do not need to assume any relationship between the number of generation steps $T$ and the length of a sequence $L$ in this case. The length prediction distribution $p(L \mid X)$ is estimated from training data.
Semi-autoregressive neural sequence models
Wang et al. (2018) recently proposed a compromise between autoregressive and non-autoregressive sequence models by predicting a chunk of symbols in parallel at a time. This approach can also be put under the proposed generalized model. We first extend the coordinate selection distribution of the autoregressive sequence model in Eq. (2) into

$$p(z_i^t = 1 \mid Y^{t-1}, X) = \mathbb{1}\big((t-1)\,k < i \le t\,k\big),$$

where $k$ is a group size. Similarly, we modify the symbol replacement distribution from Eq. (3) to

$$p(\tilde{y}_i^t \mid Y^{t-1}, X) = p(\tilde{y}_i^t \mid y_{\le (t-1)k}^{t-1}, X)$$

for $(t-1)\,k < i \le t\,k$. This naturally implies that $T = \lceil L / k \rceil$.
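The group-wise coordinate selection above can be written as a small helper; this is a sketch with our own naming, where `k` is the group size.

```python
import math

def group_coords(L, t, k):
    """Coordinate flags for semi-autoregressive decoding: step t selects
    the t-th block of k consecutive positions (positions 0-indexed)."""
    return [1 if (t - 1) * k <= i < t * k else 0 for i in range(L)]

def num_steps(L, k):
    """T = ceil(L / k) steps cover the whole sequence."""
    return math.ceil(L / k)
```

Setting k = 1 recovers the autoregressive indicator of Eq. (2), while k = L recovers the fully parallel, non-autoregressive case with a single step.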
Non-monotonic neural sequence models
The proposed generalized framework subsumes recently proposed variants of non-monotonic generation (Welleck et al., 2019; Stern et al., 2019; Gu et al., 2019). Unlike the other special cases described above, these non-monotonic generation approaches learn not only the symbol replacement distribution but also the coordinate selection distribution, and implicitly the length distribution, from data. Because the length of a sequence is often not decided in advance, the intermediate coordinate sequence and the coordinate selection distribution are reparameterized to work with relative coordinates rather than absolute coordinates. We do not go into the details of these recent algorithms, but we emphasize that all of these approaches are special cases of the proposed framework, which further suggests other variants of non-monotonic generation.
3 Decoding from Masked Language Models
In this section, we give an overview of masked language models like BERT, cast Gibbs sampling under the proposed framework, and then use this connection to design a set of approximate, deterministic decoding algorithms for undirected sequence models.
3.1 BERT as an undirected sequence model
BERT (Devlin et al., 2019) is a masked language model: it is trained to predict a word given the word's left and right context. Because the model receives the full context, there are no directed dependencies among words, so the model is undirected. The word to be predicted is masked with a special symbol $\langle\text{mask}\rangle$, and the model is trained to predict the distribution of the masked word given the rest of the sequence. We refer to this as the conditional BERT distribution. This objective was interpreted as a stochastic approximation to the pseudo log-likelihood objective (Besag, 1977) by Wang and Cho (2019). This approach of full-context generation with pseudo log-likelihood maximization for recurrent networks was introduced earlier by Berglund et al. (2015). More recently, Sun et al. (2017) used it for image caption generation.
Recent work (Wang and Cho, 2019; Ghazvininejad et al., 2019) has demonstrated that undirected neural sequence models like BERT can learn complicated sequence distributions and generate well-formed sequences. In such models, it is relatively straightforward to collect unbiased samples using, for instance, Gibbs sampling, but due to the high variance of Gibbs sampling, a generated sequence is not guaranteed to be of high quality relative to a ground-truth sequence. Finding a good sequence in a deterministic manner is also non-trivial.
A number of papers have explored using pretrained language models like BERT to initialize sequence generation models. Ramachandran et al. (2017), Song et al. (2019), and Lample and Conneau (2019) use a pretrained undirected language model to initialize a conventional monotonic autoregressive sequence model, while Edunov et al. (2019) use a BERT-like model to initialize the lower layers of a generator, without fine-tuning. Our work differs from these in that we attempt to directly generate from the pretrained model, rather than using it as a starting point to learn another model.
3.2 Gibbs sampling in the generalized sequence generation model
Gibbs sampling: uniform coordinate selection
To cast Gibbs sampling into our framework, we first assume that the length prediction distribution $p(L \mid X)$ is estimated from training data, as in the non-autoregressive neural sequence model. In Gibbs sampling, we often select a new coordinate uniformly at random, which corresponds to $p(z_i^t = 1 \mid Y^{t-1}, X) = 1/L$ with the constraint that $\sum_{i=1}^{L} z_i^t = 1$. By using the conditional BERT distribution as the symbol replacement distribution, we end up with Gibbs sampling.
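A single step of this uniform-coordinate Gibbs sampler can be sketched as follows, with the conditional BERT distribution stood in by an arbitrary `conditional(y, i)` callback that returns a distribution over the vocabulary for position i; the names here are our own, not the authors' code.

```python
import random

def gibbs_step(y, conditional, rng):
    """One Gibbs step: pick a coordinate uniformly at random, then
    resample its symbol from the conditional distribution of that
    position given all the other positions."""
    i = rng.randrange(len(y))         # uniform coordinate selection
    dist = conditional(y, i)          # {symbol: probability}
    symbols = list(dist)
    weights = [dist[s] for s in symbols]
    y = list(y)
    y[i] = rng.choices(symbols, weights=weights)[0]
    return y
```

Iterating this step yields asymptotically unbiased samples from the model's sequence distribution, but with the high variance discussed above.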
Adaptive Gibbs sampling: non-uniform coordinate selection
Instead of selecting coordinates uniformly at random, we can base selections on the intermediate sequences. In particular, we use a log-linear model with features $\phi_m$ of the intermediate and coordinate sequences:

$$p(z_i^t = 1 \mid Y^{t-1}, X) \propto \exp\Big(\frac{1}{\tau} \sum_{m} \alpha_m\, \phi_m(i, Y^{t-1}, X)\Big), \qquad (4)$$

again with the constraint that $\sum_{i=1}^{L} z_i^t = 1$. Here $\tau$ is a temperature parameter controlling the sharpness of the coordinate selection distribution. A moderately high $\tau$ smooths the coordinate selection distribution and ensures that all the coordinates/positions are replaced in the limit $T \to \infty$, making it a valid Gibbs sampler (Levine and Casella, 2006).
We investigate three features $\phi_m$: (1) We compute how peaked the conditional distribution of each position $i$ is, given the symbols in all the other positions, by measuring its negative entropy: $\phi_1(i) = -\mathcal{H}\big(p(y_i \mid Y^{t-1}_{\setminus i}, X)\big)$. In other words, we prefer a position $i$ if we know that the change in $y_i$ has a high potential to alter the joint probability of the sequence. (2) For each position $i$, we measure how unlikely the current symbol $y_i^{t-1}$ (the actual symbol, not $\langle\text{mask}\rangle$) is under the new conditional distribution: $\phi_2(i) = -\log p(y_i = y_i^{t-1} \mid Y^{t-1}_{\setminus i}, X)$. Intuitively, we prefer to replace a symbol if it is highly incompatible with the input variable and all the other symbols in the current sequence. (3) We encode a positional preference that does not consider the content of intermediate sequences: $\phi_3(i) = -\log(i + \epsilon)$ (with positions indexed from zero), where $\epsilon$ is a small constant scalar to prevent taking the logarithm of zero. This feature encodes our preference to generate from left to right when there is no information about the input variable or any intermediate sequences.
Unlike the special cases of the proposed generalized model in §2, the coordinate at each generation step is selected based on the intermediate sequences, previous coordinate sequences, and the input variable. We mix the features using scalar coefficients $\alpha_1$, $\alpha_2$, and $\alpha_3$, which are selected or estimated to maximize a target quality measure on the validation set.
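Under our reading of the three features (the exact functional forms, in particular the positional feature, are our assumptions), the unnormalized coordinate logits of Eq. (4) can be sketched as:

```python
import math

def coordinate_logits(y, cond_dists, alpha, tau=1.0):
    """Log-linear coordinate selection logits (sketch of Eq. (4)).

    y          : current intermediate sequence
    cond_dists : cond_dists[i] = {symbol: prob}, the conditional
                 distribution of position i given all other positions
    alpha      : (a1, a2, a3) mixing the negative-entropy,
                 negative-log-probability, and positional features
    """
    a1, a2, a3 = alpha
    eps = 1e-3  # guards the logarithms below
    logits = []
    for i, (dist, sym) in enumerate(zip(cond_dists, y)):
        # (1) negative entropy: prefer peaked conditionals ("easy" positions)
        neg_ent = sum(p * math.log(p) for p in dist.values() if p > 0)
        # (2) how unlikely the current symbol is under the conditional
        neg_logp = -math.log(max(dist.get(sym, 0.0), eps))
        # (3) content-independent left-to-right positional preference
        positional = -math.log(i + eps)
        logits.append((a1 * neg_ent + a2 * neg_logp + a3 * positional) / tau)
    return logits
```

Taking a softmax over these logits gives the adaptive Gibbs proposal, while taking the argmax gives a deterministic strategy: alpha = (0, 0, 1) yields a left-to-right order, (0, 1, 0) a least-likely-first order, and (1, 0, 0) an easy-first order.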
3.3 Optimistic decoding and beam search from a masked language model
Based on the adaptive Gibbs sampler with the coordinate selection distribution in Eq. (4), we can now design an inference procedure to approximately find the most likely sequence $\arg\max_Y p(Y \mid X)$ by exploiting the corresponding model of sequence generation. A naive approach is to marginalize out the generation procedure using a Monte Carlo estimate over sampled generation paths. This approach suffers from high variance and non-deterministic behavior, and is less appropriate for practical use. We instead propose an optimistic decoding approach:
$$\hat{Y}, \hat{Z} = \operatorname*{arg\,max}_{Y^{1:T},\, Z^{1:T}} \; \log p(L \mid X) + \sum_{t=1}^{T} \Big[ \log p(Z^t \mid Y^{t-1}, X) + \sum_{i=1}^{L} z_i^t \log p(\tilde{y}_i^t \mid Y^{t-1}, X) \Big] \qquad (5)$$
The proposed procedure is optimistic in that we consider a sequence generated by following the most likely generation path to be highly likely under the sequence distribution obtained by marginalizing out the generation path. This optimism in the criterion more readily admits deterministic approximation schemes such as greedy and beam search, although solving this problem remains as intractable as the original problem, which required marginalizing out the generation path.
Lengthconditioned beam search
To solve this intractable optimization problem, we design a heuristic algorithm, called length-conditioned beam search. The algorithm operates given a target length $L$. At each step $t$ of this iterative algorithm, we start from a hypothesis set $\mathcal{H}^{t-1}$ that contains $K$ generation hypotheses, $\mathcal{H}^{t-1} = \{h_k = (Y^{1:t-1}_{(k)}, Z^{1:t-1}_{(k)})\}_{k=1}^{K}$. Each generation hypothesis has a score equal to the cumulative log-probability of its generation path, as in Eq. (5). For notational simplicity, we drop the time superscript $t$ below.

Each of the $K$ generation hypotheses is first expanded with $K'$ candidate positions according to the coordinate selection distribution, so that we have $K \times K'$ candidates; each candidate consists of a hypothesis with its coordinate sequence extended by the selected position $i$, and its score is incremented by $\log p(z_i = 1 \mid Y^{t-1}, X)$. We then expand each candidate with $K''$ candidate symbols according to the symbol replacement distribution. This results in $K \times K' \times K''$ candidates, each consisting of a hypothesis with its intermediate and coordinate sequences respectively extended by the new symbol $v$ (written into position $i$) and the position $i$; each score is further incremented by $\log p(y_i = v \mid Y^{t-1}, X)$. We use these scores to select the top $K$ candidates to form the new hypothesis set $\mathcal{H}^t$.
After iterating for a predefined number of steps, beam search terminates with the final set of $K$ generation hypotheses. We then choose one of them according to a pre-specified criterion, such as Eq. (5), and return the final symbol sequence $Y^T$.
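The whole procedure condenses into the following sketch, in which `pos_logp` and `sym_logp` are hypothetical callbacks standing in for the log-probabilities of the coordinate selection and symbol replacement distributions, and `K`, `Kp`, `Kpp` are the beam widths K, K', K'' described above.

```python
import heapq

MASK = "<mask>"

def length_conditioned_beam_search(L, T, K, Kp, Kpp, pos_logp, sym_logp):
    """Length-conditioned beam search (sketch).

    pos_logp(y)    -> {position: log p(select position | y)}
    sym_logp(y, i) -> {symbol:   log p(symbol at position i | y)}
    A hypothesis is (score, sequence); the score is the cumulative
    log-probability of the generation path, as in Eq. (5).
    """
    beam = [(0.0, tuple([MASK] * L))]
    for _ in range(T):
        candidates = []
        for score, y in beam:
            # expand each hypothesis with Kp candidate positions ...
            for i, lp in heapq.nlargest(Kp, pos_logp(y).items(), key=lambda kv: kv[1]):
                # ... and each selected position with Kpp candidate symbols
                for v, lv in heapq.nlargest(Kpp, sym_logp(y, i).items(), key=lambda kv: kv[1]):
                    y_new = list(y)
                    y_new[i] = v
                    candidates.append((score + lp + lv, tuple(y_new)))
        beam = heapq.nlargest(K, candidates)  # keep the K best hypotheses
    return max(beam)                          # highest-scoring final hypothesis
```

The toy callbacks below pick among masked positions and prefer one symbol, purely for illustration.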
Table 1: BLEU scores on WMT'14 En-De and De-En for the autoregressive baseline and for decoding from the undirected masked translation model with different coordinate selection strategies, varying the decoding budget T and beam size b.

                 Baseline         Decoding from an undirected sequence model
                 Autoregressive | Uniform | Left2Right | Least2Most | Easy-First
En-De
  T = L,  b = 1    25.33        |  21.01  |   24.27    |   23.08    |   23.73
  T = L,  b = 4    26.84        |  22.16  |   25.15    |   23.81    |   24.13
  T = 2L, b = 1      –          |  21.16  |   24.45    |   23.32    |   23.87
  T = 2L, b = 4      –          |  21.99  |   25.14    |   23.81    |   24.14
De-En
  T = L,  b = 1    29.83        |  26.01  |   28.34    |   28.85    |   29.00
  T = L,  b = 4    30.92        |  27.07  |   29.52    |   29.03    |   29.41
  T = 2L, b = 1      –          |  26.24  |   28.64    |   28.60    |   29.12
  T = 2L, b = 4      –          |  26.98  |   29.50    |   29.02    |   29.41
4 Experimental Settings
Data and preprocessing
We evaluate our framework on WMT'14 English–German translation. The dataset consists of 4.5M English–German parallel sentence pairs, and we follow the widely used protocol for preprocessing this dataset. We first tokenize each sentence using a script from Moses (Koehn et al., 2007) and then segment each word into subword units using byte pair encoding (BPE; Sennrich et al., 2016) with a joint vocabulary of 60k tokens. We use newstest2013 and newstest2014 as validation and test sets, respectively.
Sequence models
We base our models on those of Lample and Conneau (2019). Specifically, we use a Transformer (Vaswani et al., 2017) with 1024 hidden units, 6 layers, 8 heads, and Gaussian error linear units (GELU; Hendrycks and Gimpel, 2016). We use a pretrained model (available at https://dl.fbaipublicfiles.com/XLM/mlm_ende_1024.pth) trained with a masked language modeling objective (Lample and Conneau, 2019) on 5M monolingual sentences from WMT NewsCrawl 2007–2008. To distinguish between English and German sentences, a special language embedding is added as an additional input to the model.
We adapt the pretrained model to perform translation by fine-tuning it with a masked translation objective (Lample and Conneau, 2019). We concatenate parallel English and German sentences, mask out a subset of the tokens in either the English or German sentence, and predict the masked-out tokens. We mask out tokens uniformly at random, as in Ghazvininejad et al. (2019). Training the model this way more closely matches the test-time generation setting, in which the model starts from an input sequence of all masks.
Baseline model
We compare against a standard encoder-decoder autoregressive neural sequence model (Bahdanau et al., 2015) trained for left-to-right generation and initialized with the same pretrained masked language model (Lample and Conneau, 2019; Song et al., 2019). We train a separate autoregressive model to translate an English sentence into a German sentence and vice versa, with the same hyperparameters as our model.
Training details
Decoding strategies
We design four generation strategies for the masked translation model based on the log-linear coordinate selection distribution in Eq. (4):

- Uniform: $\alpha_1 = \alpha_2 = \alpha_3 = 0$, i.e., sample a position uniformly at random without replacement.
- Left2Right: $\alpha_1 = 0$, $\alpha_2 = 0$, $\alpha_3 = 1$.
- Least2Most (Ghazvininejad et al., 2019): $\alpha_1 = 0$, $\alpha_2 = 1$, $\alpha_3 = 0$.
- Easy-First: $\alpha_1 = 1$, $\alpha_2 = 0$ (we set $\alpha_2 = 1$ for De-En based on the validation set performance), $\alpha_3 = 0$.
Table 2: BLEU on the validation set as a function of the number of length candidates considered during decoding, compared with using the gold target length.

          Gold  |   1   |   2   |   3   |   4
En-De    22.50  | 22.22 | 22.76 | 23.01 | 23.22
De-En    28.05  | 26.77 | 27.32 | 27.79 | 28.15
We use the beam search described in §3.3 with $K'$ fixed to 1, i.e., we consider only one possible position for replacing a symbol per hypothesis at each step of generation. We vary the beam size $K$ between 1 (greedy) and 4. For each source sentence, we consider four length candidates according to the length distribution estimated from the training pairs, based on early experiments showing that using only four length candidates performs as well as using the ground-truth length (see Table 2). Given the four candidate translations, we choose the best one according to the pseudo log-probability of the final sequence (Wang and Cho, 2019).
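The pseudo log-probability used for this final reranking sums, over positions, the log-probability of each symbol given all the others; a minimal sketch with a stand-in `conditional` callback of our own naming:

```python
import math

def pseudo_log_prob(y, conditional):
    """Pseudo log-probability of sequence y under a masked LM:
    the sum over positions i of log p(y_i | all other positions).
    conditional(y, i) -> {symbol: prob}, computed with position i masked."""
    return sum(math.log(conditional(y, i)[y[i]]) for i in range(len(y)))
```

Among the four candidate translations (one per length candidate), the one with the highest pseudo log-probability is returned.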
Decoding scenarios
We consider two decoding scenarios: linear-time and constant-time decoding. In the linear-time scenario, the number of decoding iterations $T$ grows linearly with the length $L$ of a target sequence; we test $T = L$ and $T = 2L$. In the constant-time scenario, the number of iterations is constant with respect to the length of a translation. At the $t$-th iteration of generation, we replace $k_t$ symbols, where $k_t$ is either a constant or linearly anneals from $L$ down to 1 over the $T$ iterations, as done by Ghazvininejad et al. (2019).
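The two schedules for k_t can be sketched as follows; the exact endpoints and rounding are our assumptions, with the linear schedule decreasing from L at the first iteration to 1 at the last, in the spirit of Ghazvininejad et al. (2019).

```python
def num_replacements(t, T, L, anneal):
    """Number of symbols k_t to replace at iteration t (1-indexed) of
    constant-time decoding with budget T: either a constant ~L/T, or
    linearly annealed from L down to 1 over the T iterations."""
    if not anneal:
        return max(1, round(L / T))
    if T == 1:
        return L
    return max(1, round(L - (L - 1) * (t - 1) / (T - 1)))
```

For example, with L = 20 and T = 10, the constant schedule replaces 2 symbols per iteration, while the annealed schedule replaces 20 symbols at the first iteration and 1 at the last.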
5 Linear-Time Decoding: Result and Analysis
Main result and findings
We present translation quality measured by BLEU (Papineni et al., 2002) in Table 1. We identify a number of important trends. (1) The deterministic coordinate selection strategies (left2right, least2most, and easy-first) significantly outperform selecting coordinates uniformly at random, by up to 3 BLEU points in both directions. The success of these relatively simple handcrafted coordinate selection strategies suggests avenues for further improvement in generation from undirected sequence models. (2) The proposed beam search algorithm for undirected sequence models provides an improvement of approximately 1 BLEU point over naive greedy search, confirming the utility of the proposed framework as a way to move decoding techniques across different paradigms of sequence modeling. (3) Different generation strategies result in translations of varying quality depending on the setting. On English–German translation, left2right is the best-performing strategy, achieving up to 25.15 BLEU. Easy-first and left2right perform nearly the same in the other direction (German–English), achieving up to 29.52 BLEU. (4) We see little improvement in refining a sequence beyond the first pass, though we suspect this may be due to the simplicity of the coordinate selection schemes tested in this paper. (5) Lastly, the masked translation model lags behind the more conventional neural autoregressive model, although the difference is within 1 BLEU point when greedy search is used with the autoregressive model and approximately 2 BLEU points with beam search, which confirms the recent finding by Ghazvininejad et al. (2019).
Adaptive generation order
The least2most and easy-first generation strategies automatically adapt the generation order based on the intermediate sequences generated. We investigate the resulting generation orders on the development set by representing each as a 10-dimensional vector (downsampling as necessary), where each element corresponds to the selected position in the target sequence, normalized by the sequence length. We cluster these vectors with $k$-means clustering and visualize the cluster centers as curves, with thickness proportional to the number of sequences in each cluster, in Fig. 1.
In both strategies, we see two major trends. First, many sequences are generated largely monotonically, either left-to-right or right-to-left (see, e.g., the green, blue, and orange clusters in easy-first, En-De, and the blue, orange, and red clusters in least2most, De-En). Second, another cluster of sequences is generated from the outside in, as seen in the red and purple clusters in easy-first, En-De, and the green, orange, and purple clusters in least2most, En-De. We explain these two behaviors by the availability of contextual evidence, or the lack thereof. At the beginning of generation, the only two non-mask symbols are the beginning- and end-of-sentence symbols, making it easier to predict a symbol adjacent to either of these special symbols. As more symbols are filled in near the boundaries, more evidence accumulates for the decoding strategy to accurately predict symbols near the center. This process manifests itself either as monotonic or outside-in generation. We present sample sequences generated using these strategies in Appendix B.
Table 3: BLEU scores for constant-time decoding (De-En) with budget T and either a linearly annealed or a constant number of replacements k_t per iteration.

                       Uniform | Left2Right | Least2Most | Easy-First | Hard-First
T = 10, k_t annealed    22.38  |   22.38    |   27.14    |   22.21    |   26.66
T = 10, k_t constant    22.43  |   21.92    |   24.69    |   25.16    |   23.46
T = 20, k_t annealed    26.01  |   26.01    |   28.54    |   22.24    |   28.32
T = 20, k_t constant    24.69  |   25.94    |   27.01    |   27.49    |   25.56
6 Constant-Time Decoding: Result and Analysis
The trends in constant-time decoding differ noticeably from those in linear-time decoding. First, the left2right strategy significantly lags behind the least2most strategy, and the gap is wider (up to 4.7 BLEU points) with a tighter budget ($T = 10$). This gap suggests that a better, perhaps learned, coordinate selection scheme could further improve constant-time translation. Second, the easy-first strategy is surprisingly the worst in constant-time translation, unlike in linear-time translation. To investigate this degradation, we test another strategy in which we flip the signs of the coefficients in the log-linear model. This new hard-first strategy works on par with least2most, which again confirms that decoding strategies must be selected based on the target task and decoding setting.
Perhaps most importantly, with a fixed budget of $T = 20$, linearly annealed $k_t$, and the least2most strategy, constant-time translation can achieve translation quality within 1 BLEU point of comparable linear-time translation (28.54 vs. 29.52). Even with a tighter budget of $T = 10$ iterations, which is less than half the average sentence length, constant-time translation loses only 2.4 BLEU points (27.14 vs. 29.52). This both confirms the finding by Ghazvininejad et al. (2019) and presents us with new opportunities for developing and advancing constant-time machine translation systems.
7 Conclusion
We present a generalized framework of neural sequence generation that unifies decoding in directed and undirected neural sequence models. Under this framework, we separate position selection and symbol replacement, allowing us to apply a diverse set of generation algorithms, inspired by those for directed neural sequence models, to undirected models such as BERT and its translation variant.
We evaluate these generation strategies on WMT'14 En-De machine translation using a recently proposed masked translation model. Our experiments reveal that undirected neural sequence models achieve performance within 1–2 BLEU points of conventional, state-of-the-art autoregressive models, given an appropriate choice of decoding strategy. We further show that constant-time translation with these models comes within 1–1.5 BLEU points of linear-time translation by using one of the proposed generation strategies. Analysis of the generation order automatically determined by these adaptive decoding strategies reveals that most sequences are generated either monotonically or outside-in.
We identify two promising extensions of our work. First, we could have a model learn the coordinate selection distribution from data to maximize translation quality. Doing so would likely result in better-quality sequences as well as the discovery of more non-trivial generation orders. Second, while we apply our framework only to sequence generation here, it could also be applied to other structured data such as grids (e.g., images) and arbitrary graphs. Overall, we hope that our generalized framework opens new avenues for developing and understanding generation algorithms in a variety of settings.
References
 Bahdanau et al. [2015] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.
 Berglund et al. [2015] Mathias Berglund, Tapani Raiko, Mikko Honkala, Leo Kärkkäinen, Akos Vetek, and Juha T Karhunen. Bidirectional recurrent neural networks as generative models. In Advances in Neural Information Processing Systems, pages 856–864, 2015.
 Besag [1977] Julian Besag. Efficiency of pseudo-likelihood estimation for simple Gaussian fields. Biometrika, 64(3):616–618, 1977.
 Devlin et al. [2019] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL, 2019.
 Edunov et al. [2019] Sergey Edunov, Alexei Baevski, and Michael Auli. Pretrained language model representations for language generation, 2019.
 Ghazvininejad et al. [2019] Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. Constant-time machine translation with conditional masked language models. arXiv preprint arXiv:1904.09324, 2019.
 Gu et al. [2019] Jiatao Gu, Qi Liu, and Kyunghyun Cho. Insertionbased decoding with automatically inferred generation order. CoRR, abs/1902.01370, 2019.
 Hendrycks and Gimpel [2016] Dan Hendrycks and Kevin Gimpel. Bridging nonlinearities and stochastic regularizers with Gaussian error linear units. arXiv preprint arXiv:1606.08415, 2016.
 Kingma and Ba [2014] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
 Koehn et al. [2007] Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris CallisonBurch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. Moses: Open source toolkit for statistical machine translation. In ACL, 2007.
 Lample and Conneau [2019] Guillaume Lample and Alexis Conneau. Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291, 2019.
 Lee et al. [2018] Jason Lee, Elman Mansimov, and Kyunghyun Cho. Deterministic non-autoregressive neural sequence modeling by iterative refinement. arXiv preprint arXiv:1802.06901, 2018.
 Levine and Casella [2006] Richard A Levine and George Casella. Optimizing random scan gibbs samplers. Journal of Multivariate Analysis, 97(10):2071–2100, 2006.
 Papineni et al. [2002] Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics, 2002.
 Ramachandran et al. [2017] Prajit Ramachandran, Peter Liu, and Quoc Le. Unsupervised pretraining for sequence to sequence learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 2017. doi: 10.18653/v1/D17-1039.
 Sennrich et al. [2016] Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. In ACL, 2016.
 Song et al. [2019] Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and TieYan Liu. Mass: Masked sequence to sequence pretraining for language generation, 2019.
 Srivastava et al. [2014] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.
 Stern et al. [2019] Mitchell Stern, William Chan, Jamie Kiros, and Jakob Uszkoreit. Insertion transformer: Flexible sequence generation via insertion operations. CoRR, abs/1902.03249, 2019.
 Sun et al. [2017] Qing Sun, Stefan Lee, and Dhruv Batra. Bidirectional beam search: Forward-backward inference in neural sequence models for fill-in-the-blank image captioning. In CVPR, 2017.
 Vaswani et al. [2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, 2017.
 Wang and Cho [2019] Alex Wang and Kyunghyun Cho. BERT has a mouth, and it must speak: BERT as a Markov random field language model. arXiv preprint arXiv:1902.04094, 2019.
 Wang et al. [2018] Chunqi Wang, Ji Zhang, and Haiqing Chen. Semi-autoregressive neural machine translation. arXiv preprint arXiv:1808.08583, 2018.
 Welleck et al. [2019] Sean Welleck, Kianté Brantley, Hal Daumé III, and Kyunghyun Cho. Non-monotonic sequential text generation. CoRR, abs/1902.02192, 2019.
Appendix A Energy evolution over generation steps
While the results in Table 1 indicate that our decoding algorithms find better generations in terms of BLEU relative to uniform decoding, we also verify that these algorithms produce generations that are more likely according to the model. We do so by computing the energy (negative logit) of each intermediate sequence produced while running an algorithm, and comparing it to the average energy of intermediate sequences produced by picking positions uniformly at random. We plot this energy difference over the course of decoding in Figure 2. Overall, we find that left-to-right, least-to-most, and easy-first all find sequences with lower energy than the uniform baseline throughout the decoding process. Easy-first produces sequences with the lowest energy, followed by least-to-most, and then left-to-right.
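The comparison above can be sketched in a toy setting. This is not the paper's implementation: the fixed `logits` array stands in for the masked translation model's per-position predictions (a real model would recompute them after every step), and the energy of a partial sequence is taken, as above, to be the negative sum of the logits of the tokens placed so far.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a masked language model: fixed per-position logits
# over a small vocabulary (a real model recomputes these each step).
L, V = 6, 10
logits = rng.normal(size=(L, V))

def energy(filled):
    # Energy of a partial sequence: negative sum of the logits of the
    # tokens placed so far (lower energy = more likely under the model).
    return -sum(logits[i, t] for i, t in filled.items())

def decode(select):
    # Greedy masked decoding: `select` picks which masked position to
    # fill next; the argmax token is written there.
    filled, masked = {}, set(range(L))
    energies = []
    while masked:
        i = select(masked)
        filled[i] = int(logits[i].argmax())
        masked.discard(i)
        energies.append(energy(filled))
    return energies

# Uniform baseline: fill a masked position chosen at random.
uniform = lambda masked: int(rng.choice(sorted(masked)))
# Easy-first: fill the masked position the model is most confident about.
easy_first = lambda masked: max(masked, key=lambda i: logits[i].max())

# Easy-first commits to the most confident positions first, so its
# intermediate energies never exceed the uniform baseline's.
diff = np.array(decode(easy_first)) - np.array(decode(uniform))
```

In this simplified setup the easy-first advantage holds by construction at every step; with a real model, whose logits shift as context is filled in, it is an empirical observation (Figure 2).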
Appendix B Sample sequences and their generation orders
We present sample decoding processes using the easy-first decoding algorithm on De→En in Figures 3, 4, 5, and 6. We highlight examples decoded in right-to-left-to-right-to-left, outside-in, left-to-right, and right-to-left orders, which respectively correspond to the orange, purple, red, and blue clusters from Figure 1, bottom left. These examples demonstrate the ability of the easy-first coordinate selection algorithm to adapt the generation order based on the intermediate sequences generated. Even in the cases of largely monotonic generation order (left-to-right and right-to-left), the algorithm has the capacity to make small changes to the generation order as needed.
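A trace like the ones shown in these examples can be produced by the following sketch. The `confidence` scores here are hypothetical stand-ins for the masked model's per-position probabilities; with a monotonically increasing profile, easy-first selection degenerates into a purely right-to-left order, as in the last example.

```python
def easy_first_trace(candidates, confidence):
    # candidates: the best-guess token for each position;
    # confidence: a score per position (a stand-in for model probability).
    # At each iteration, fill the most confident still-masked position
    # and record the resulting partial sentence.
    L = len(candidates)
    tokens = ["_"] * L
    masked = set(range(L))
    trace = []
    while masked:
        i = max(masked, key=lambda j: confidence[j])  # easiest slot first
        tokens[i] = candidates[i]
        masked.discard(i)
        trace.append(" ".join(tokens))
    return trace

words = ["Another", "contentious", "point", ":", "the",
         "powers", "of", "the", "army", "."]
conf = list(range(len(words)))  # rising left to right -> fill right to left
trace = easy_first_trace(words, conf)
```

With a real model the confidences are recomputed after every step, which is what lets the order adapt mid-generation rather than stay monotonic.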
Iteration  Right-to-Left-to-Right-to-Left 
(source)  Würde es mir je gelingen , an der Universität Oxford ein normales Leben zu führen ? 
1  _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ? 
2  _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Oxford ? 
3  _ _ ever _ _ _ _ _ _ _ _ _ _ _ Oxford ? 
4  _ I ever _ _ _ _ _ _ _ _ _ _ _ Oxford ? 
5  _ I ever _ _ _ _ _ _ _ _ _ _ of Oxford ? 
6  Would I ever _ _ _ _ _ _ _ _ _ _ of Oxford ? 
7  Would I ever _ _ _ _ _ normal _ _ _ _ of Oxford ? 
8  Would I ever _ _ _ _ _ normal _ at _ _ of Oxford ? 
9  Would I ever _ _ _ _ _ normal _ at the _ of Oxford ? 
10  Would I ever _ _ _ _ _ normal _ at the University of Oxford ? 
11  Would I ever _ _ _ _ _ normal life at the University of Oxford ? 
12  Would I ever _ _ _ live _ normal life at the University of Oxford ? 
13  Would I ever _ _ _ live a normal life at the University of Oxford ? 
14  Would I ever _ able _ live a normal life at the University of Oxford ? 
15  Would I ever be able _ live a normal life at the University of Oxford ? 
16  Would I ever be able to live a normal life at the University of Oxford ? 
(target)  Would I ever be able to lead a normal life at Oxford ? 
Iteration  Outside-In 
(source)  Doch ohne zivilgesellschaftliche Organisationen könne eine Demokratie nicht funktionieren . 
1  _ _ _ _ _ _ _ _ _ _ . 
2  _ _ _ _ _ _ _ _ cannot _ . 
3  _ _ _ _ _ _ _ democracy cannot _ . 
4  _ without _ _ _ _ _ democracy cannot _ . 
5  _ without _ _ _ _ _ democracy cannot work . 
6  But without _ _ _ _ _ democracy cannot work . 
7  But without _ _ _ _ a democracy cannot work . 
8  But without _ society _ _ a democracy cannot work . 
9  But without _ society _ , a democracy cannot work . 
10  But without civil society _ , a democracy cannot work . 
11  But without civil society organisations , a democracy cannot work . 
(target)  Yet without civil society organisations , a democracy cannot function . 
Iteration  Left-to-Right 
(source)  Denken Sie , dass die Medien zu viel vom PSG erwarten ? 
1  _ _ _ _ _ _ _ _ _ _ _ ? 
2  Do _ _ _ _ _ _ _ _ _ _ ? 
3  Do you _ _ _ _ _ _ _ _ _ ? 
4  Do you think _ _ _ _ _ _ _ _ ? 
5  Do you think _ _ _ _ _ _ PS _ ? 
6  Do you think _ _ _ _ _ _ PS @G ? 
7  Do you think _ media _ _ _ _ PS @G ? 
8  Do you think the media _ _ _ _ PS @G ? 
9  Do you think the media expect _ _ _ PS @G ? 
10  Do you think the media expect _ much _ PS @G ? 
11  Do you think the media expect too much _ PS @G ? 
12  Do you think the media expect too much of PS @G ? 
(target)  Do you think the media expect too much of PS @G ? 
Iteration  Right-to-Left 
(source)  Ein weiterer Streitpunkt : die Befugnisse der Armee . 
1  _ _ _ _ _ _ _ _ _ _ . 
2  _ _ _ _ _ _ _ _ _ army . 
3  _ _ _ _ _ _ _ of _ army . 
4  _ _ _ _ _ _ _ of the army . 
5  _ _ _ _ _ _ powers of the army . 
6  _ _ _ _ _ the powers of the army . 
7  _ _ _ _ : the powers of the army . 
8  _ _ point : the powers of the army . 
9  _ contentious point : the powers of the army . 
10  Another contentious point : the powers of the army . 
(target)  Another issue : the powers conferred on the army . 