Transformer-XL: Attentive Language Models
Beyond a Fixed-Length Context
Abstract
Transformer networks have the potential to learn longer-term dependency, but are limited by a fixed-length context in the setting of language modeling. As a solution, we propose a novel neural architecture, Transformer-XL, that enables the Transformer to learn dependency beyond a fixed length without disrupting temporal coherence. Concretely, it consists of a segment-level recurrence mechanism and a novel positional encoding scheme. Our method not only enables capturing longer-term dependency, but also resolves the problem of context fragmentation. As a result, Transformer-XL learns dependency that is about 80% longer than RNNs and 450% longer than vanilla Transformers, achieves better performance on both short and long sequences, and is up to 1,800+ times faster than vanilla Transformers during evaluation. Additionally, we improve the state-of-the-art (SoTA) results of bpc/perplexity from 1.06 to 0.99 on enwik8, from 1.13 to 1.08 on text8, from 20.5 to 18.3 on WikiText-103, from 23.7 to 21.8 on One Billion Word, and from 55.3 to 54.5 on Penn Treebank (without fine-tuning). Our code, pretrained models, and hyperparameters are available in both TensorFlow and PyTorch.[1]

[1] https://github.com/kimiyoung/transformer-xl
Zihang Dai*, Zhilin Yang*, Yiming Yang, William W. Cohen, Jaime Carbonell, 

Quoc V. Le, Ruslan Salakhutdinov 
Carnegie Mellon University, Google Brain, Google AI 
{dzihang,yiming,jgc,rsalakhu}@cs.cmu.edu, {zhiliny,wcohen,qvl}@google.com 
1 Introduction
Language modeling is among the important problems that require modeling long-term dependency, with successful applications such as unsupervised pretraining (Peters et al., 2018; Devlin et al., 2018). However, it has been a challenge to equip neural networks with the capability to model long-term dependency in sequential data. Recurrent neural networks (RNNs), in particular Long Short-Term Memory (LSTM) networks (Hochreiter & Schmidhuber, 1997), have been a standard solution to language modeling and obtained strong results on multiple benchmarks. Despite the wide adoption, RNNs are difficult to optimize due to vanishing and exploding gradients (Hochreiter et al., 2001), and the introduction of gating in LSTMs and the gradient clipping technique (Graves, 2013; Pascanu et al., 2012) might not be sufficient to fully address this issue. Empirically, previous work has found that LSTM language models use 200 context words on average (Khandelwal et al., 2018), indicating room for further improvement.
On the other hand, the direct connections between long-distance word pairs baked in attention mechanisms might ease optimization and enable the learning of long-term dependency (Bahdanau et al., 2014; Vaswani et al., 2017). Recently, Al-Rfou et al. (2018) designed a set of auxiliary losses to train deep Transformer networks for character-level language modeling, which outperform LSTMs by a large margin. Despite the success, the LM training in Al-Rfou et al. (2018) is performed on separate fixed-length segments of a few hundred characters, without any information flow across segments. As a consequence of the fixed context length, the model cannot capture any longer-term dependency beyond the predefined context length. In addition, the fixed-length segments are created by selecting a consecutive chunk of symbols without respecting the sentence or any other semantic boundary. Hence, the model lacks necessary contextual information needed to well predict the first few symbols, leading to inefficient optimization and inferior performance. We refer to this problem as context fragmentation.
To address the aforementioned limitations of fixed-length contexts, we propose a new architecture called Transformer-XL (meaning extra long). We introduce the notion of recurrence into our deep self-attention network. In particular, instead of computing the hidden states from scratch for each new segment, we reuse the hidden states obtained in previous segments. The reused hidden states serve as memory for the current segment, which builds up a recurrent connection between the segments. As a result, modeling very long-term dependency becomes possible because information can be propagated through the recurrent connections. Meanwhile, passing information from the previous segment can also resolve the problem of context fragmentation. More importantly, we show the necessity of using relative positional encodings rather than absolute ones, in order to enable state reuse without causing temporal confusion. Hence, as an additional technical contribution, we introduce a simple but more effective relative positional encoding formulation that generalizes to attention lengths longer than the one observed during training.
Transformer-XL obtained strong results on five datasets, varying from word-level to character-level language modeling. Transformer-XL improves the previous state-of-the-art (SoTA) results from 1.06 to 0.99 in bpc on enwik8, from 1.13 to 1.08 in bpc on text8, from 20.5 to 18.3 in perplexity on WikiText-103, and from 23.7 to 21.8 in perplexity on One Billion Word. On small data, Transformer-XL also achieves a perplexity of 54.5 on Penn Treebank without fine-tuning, which is SoTA when comparable settings are considered.
We use two methods to quantitatively study the effective lengths of Transformer-XL and the baselines. Similar to Khandelwal et al. (2018), we gradually increase the attention length at test time until no further noticeable improvement (0.1% relative gains) can be observed. Our best model in this setting uses attention lengths of 1,600 and 3,800 on WikiText-103 and enwik8 respectively. In addition, we devise a metric called Relative Effective Context Length (RECL) that aims to perform a fair comparison of the gains brought by increasing the context lengths for different models. In this setting, Transformer-XL learns an RECL of 900 words on WikiText-103, while the numbers for recurrent networks and Transformer are only 500 and 128.
2 Related Work
In the last few years, the field of language modeling has witnessed many significant advances, including but not limited to devising novel architectures to better encode the context (Bengio et al., 2003; Mikolov et al., 2010; Zilly et al., 2016; Krause et al., 2016; Grave et al., 2016b; Dauphin et al., 2016; Chung et al., 2016; Merity et al., 2016; Kalchbrenner et al., 2016; Al-Rfou et al., 2018), improving regularization and optimization algorithms (Zaremba et al., 2014; Inan et al., 2016; Press & Wolf, 2016; Merity et al., 2017; Gal & Ghahramani, 2016), speeding up the Softmax computation (Morin & Bengio, 2005; Kuchaiev & Ginsburg, 2017; Grave et al., 2016a; Jozefowicz et al., 2016), and enriching the output distribution family (Yang et al., 2017; Kanai et al., 2018).
To capture the long-range context in language modeling, a line of work directly feeds a representation of the wider context into the network as an additional input. Existing works range from ones where context representations are manually defined (Mikolov & Zweig, 2012; Ji et al., 2015; Wang & Cho, 2015) to others that rely on document-level topics learned from data (Dieng et al., 2016; Wang et al., 2017).
More broadly, in generic sequence modeling, how to capture long-term dependency has been a long-standing research problem. From this perspective, since the ubiquitous adoption of LSTM, much effort has been devoted to relieving the vanishing gradient problem, including better initialization (Le et al., 2015), additional loss signal (Trinh et al., 2018), augmented memory structure (Ke et al., 2018) and others that modify the internal architecture of RNNs to ease the optimization (Mikolov et al., 2014; Koutnik et al., 2014; Wu et al., 2016; Li et al., 2018). Different from them, our work is based on the Transformer architecture and shows that language modeling as a real-world task benefits from the ability to learn longer-term dependency.
3 Model
Given a corpus of tokens $\mathbf{x} = (x_1, \dots, x_T)$, the task of language modeling is to estimate the joint probability $P(\mathbf{x})$, which is often auto-regressively factorized as $P(\mathbf{x}) = \prod_t P(x_t \mid \mathbf{x}_{<t})$. With the factorization, the problem reduces to estimating each conditional factor. In this work, we stick to the standard neural approach to modeling the conditional probability. Specifically, a trainable neural network is used to encode the context $\mathbf{x}_{<t}$ into a fixed-size hidden state, which is multiplied with the word embeddings to obtain the logits. The logits are then fed into the Softmax function, yielding a categorical probability distribution over the next token.
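As a minimal illustration of this pipeline (toy logits and a three-token vocabulary invented for the example; not the paper's model), the joint probability factorizes into per-token conditionals, each obtained by a Softmax over the logits:

```python
import math

def softmax(logits):
    # Numerically stable Softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def sequence_log_prob(logits_per_step, token_ids):
    # log P(x) = sum_t log P(x_t | x_<t); logits_per_step[t] are the
    # logits a model produced for position t given the prefix.
    total = 0.0
    for logits, tok in zip(logits_per_step, token_ids):
        total += math.log(softmax(logits)[tok])
    return total

# Toy vocabulary of size 3, sequence of two tokens.
logits = [[2.0, 0.5, -1.0], [0.0, 1.0, 0.0]]
lp = sequence_log_prob(logits, [0, 1])
```

The same structure holds regardless of how the context is encoded; the rest of this section is about the encoder.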
3.1 Vanilla Transformer Language Models
In order to apply Transformer or self-attention to language modeling, the central problem is how to train a Transformer to effectively encode an arbitrarily long context into a fixed-size representation. Given infinite memory and computation, a simple solution would be to process the entire context sequence using an unconditional Transformer decoder, similar to a feed-forward neural network. However, this is usually infeasible with the limited resources in practice.
One feasible but crude approximation is to split the entire corpus into shorter segments of manageable sizes, and only train the model within each segment, ignoring all contextual information from previous segments. This is the idea adopted by Al-Rfou et al. (2018). We call it the vanilla model and visualize it in Fig. 0(a). Under this training paradigm, information never flows across segments in either the forward or backward pass. There are two critical limitations of using a fixed-length context. First, the largest possible dependency length is upper bounded by the segment length, which is a few hundred characters in character-level language modeling (Al-Rfou et al., 2018). Therefore, although the self-attention mechanism is less affected by the vanishing gradient problem compared to RNNs, the vanilla model is not able to fully exploit this optimization advantage. Second, though it is possible to use padding to respect the sentence or other semantic boundaries, in practice it has been standard practice to simply chunk long text into fixed-length segments due to improved efficiency (Peters et al., 2018; Devlin et al., 2018; Al-Rfou et al., 2018). However, simply chunking a sequence into fixed-length segments will lead to the context fragmentation problem as discussed in Section 1.
During evaluation, at each step, the vanilla model also consumes a segment of the same length as in training, but only makes one prediction at the last position. Then, at the next step, the segment is shifted to the right by only one position, and the new segment has to be processed all from scratch. As shown in Fig. 0(b), this procedure ensures that each prediction utilizes the longest possible context exposed during training, and also relieves the context fragmentation issue encountered in training. However, this evaluation procedure is extremely expensive. We will show that our proposed architecture is able to substantially improve the evaluation speed.
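A back-of-the-envelope sketch of why this is expensive (corpus and segment sizes are hypothetical): the vanilla procedure re-encodes an entire segment for every single prediction, so it processes roughly `seg_len` times more positions than a scheme that could reuse previously computed representations:

```python
def vanilla_eval_positions(corpus_len, seg_len):
    # Vanilla evaluation: for every predicted token, re-encode a full
    # segment of seg_len positions from scratch (one prediction per window).
    return (corpus_len - seg_len) * seg_len

def cached_eval_positions(corpus_len, seg_len):
    # With state reuse, each new position is encoded once; earlier
    # representations are read from the cache.
    return corpus_len - seg_len

# Hypothetical numbers: 10,000 tokens, segment length 512.
v = vanilla_eval_positions(10_000, 512)
c = cached_eval_positions(10_000, 512)
ratio = v / c  # per-token recomputation factor = seg_len
```

This simple counting argument ignores constant factors, but it explains the order of magnitude of the evaluation speedups reported in Section 4.4.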
3.2 Segment-Level Recurrence with State Reuse
To address the limitations of using a fixed-length context, we propose to introduce a recurrence mechanism into the Transformer architecture. During training, the hidden state sequence computed for the previous segment is fixed and cached to be reused as an extended context when the model processes the next new segment, as shown in Fig. 1(a). Although the gradient still remains within a segment, this additional input allows the network to exploit information in the history, leading to an ability of modeling longer-term dependency and avoiding context fragmentation. Formally, let the two consecutive segments of length $L$ be $\mathbf{s}_\tau = [x_{\tau,1}, \dots, x_{\tau,L}]$ and $\mathbf{s}_{\tau+1} = [x_{\tau+1,1}, \dots, x_{\tau+1,L}]$ respectively. Denote the $n$-th layer hidden state sequence produced for the $\tau$-th segment $\mathbf{s}_\tau$ by $\mathbf{h}_\tau^n \in \mathbb{R}^{L \times d}$, where $d$ is the hidden dimension. Then, the $n$-th layer hidden state for segment $\mathbf{s}_{\tau+1}$ is produced (schematically) as follows,

$\tilde{\mathbf{h}}_{\tau+1}^{n-1} = \big[\mathrm{SG}(\mathbf{h}_{\tau}^{n-1}) \circ \mathbf{h}_{\tau+1}^{n-1}\big]$,   (extended context)
$\mathbf{q}_{\tau+1}^{n}, \mathbf{k}_{\tau+1}^{n}, \mathbf{v}_{\tau+1}^{n} = \mathbf{h}_{\tau+1}^{n-1}\mathbf{W}_q^\top, \tilde{\mathbf{h}}_{\tau+1}^{n-1}\mathbf{W}_k^\top, \tilde{\mathbf{h}}_{\tau+1}^{n-1}\mathbf{W}_v^\top$,   (query, key, value vectors)
$\mathbf{h}_{\tau+1}^{n} = \text{Transformer-Layer}\big(\mathbf{q}_{\tau+1}^{n}, \mathbf{k}_{\tau+1}^{n}, \mathbf{v}_{\tau+1}^{n}\big)$,   (self-attention + feed-forward)
where the function $\mathrm{SG}(\cdot)$ stands for stop-gradient, the notation $[\mathbf{h}_u \circ \mathbf{h}_v]$ indicates the concatenation of two hidden sequences along the length dimension, and $\mathbf{W}_{\cdot}$ denotes model parameters. Compared to the standard Transformer, the critical difference lies in that the key $\mathbf{k}_{\tau+1}^{n}$ and value $\mathbf{v}_{\tau+1}^{n}$ are conditioned on the extended context $\tilde{\mathbf{h}}_{\tau+1}^{n-1}$ and hence $\mathbf{h}_{\tau}^{n-1}$ cached from the previous segment. We emphasize this particular design by the green paths in Fig. 1(a).
With this recurrence mechanism applied to every two consecutive segments of a corpus, it essentially creates a segment-level recurrence in the hidden states. As a result, the effective context being utilized can go way beyond just two segments. However, notice that the recurrent dependency between $\mathbf{h}_{\tau+1}^{n}$ and $\mathbf{h}_{\tau}^{n-1}$ shifts one layer downwards per segment, which differs from the same-layer recurrence in conventional RNN-LMs. Consequently, the largest possible dependency length grows linearly w.r.t. the number of layers as well as the segment length, i.e., $O(N \times L)$, as visualized by the shaded area in Fig. 1(b). This is analogous to truncated BPTT (Mikolov et al., 2010), a technique developed for training RNN-LMs. However, different from truncated BPTT, our method caches a sequence of hidden states instead of the last one, and should be applied together with the relative positional encoding technique described in Section 3.3.
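A simplified single-layer, single-head sketch of the recurrence in numpy (toy shapes and random weights; the real model additionally uses relative positional terms, multiple heads, layer normalization, and feed-forward sublayers). The cached segment enters only the keys and values, and no gradient would flow into it:

```python
import numpy as np

rng = np.random.default_rng(0)
L, d = 4, 8                      # segment length, hidden dimension (toy)
W_q, W_k, W_v = (rng.standard_normal((d, d)) for _ in range(3))

def layer_with_memory(h_prev_seg, h_cur):
    # h_prev_seg: cached hidden states of the previous segment, treated as
    # a constant, i.e. SG(.) -- no gradient would flow into it.
    # h_cur: hidden states of the current segment, shape (L, d).
    h_ext = np.concatenate([h_prev_seg, h_cur], axis=0)  # extended context
    q = h_cur @ W_q                   # queries: current segment only
    k, v = h_ext @ W_k, h_ext @ W_v   # keys/values: extended context
    scores = q @ k.T / np.sqrt(d)
    # Causal mask: query i (global position L+i) may attend to all cached
    # positions and to current positions j <= i.
    mask = np.tril(np.ones((L, 2 * L)), k=L)
    scores = np.where(mask > 0, scores, -1e9)
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ v

h_tau = rng.standard_normal((L, d))     # previous segment (cached)
h_tau1 = rng.standard_normal((L, d))    # current segment input
out = layer_with_memory(h_tau, h_tau1)  # shape (L, d)
```

Stacking such layers reproduces the downward shift described above: layer $n$ of the current segment reads layer $n-1$ of the previous one.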
Besides achieving extra long context and resolving fragmentation, another benefit that comes with the recurrence scheme is significantly faster evaluation. Specifically, during evaluation, the representations from the previous segments can be reused instead of being computed from scratch as in the case of the vanilla model. In our experiments on enwik8, Transformer-XL is up to 1,800+ times faster than the vanilla model during evaluation (see Section 4).
Finally, notice that the recurrence scheme does not need to be restricted to only the previous segment. In theory, we can cache as many previous segments as the GPU memory allows, and reuse all of them as the extra context when processing the current segment. Thus, we can cache a predefined length-$M$ of old hidden states spanning (possibly) multiple segments, and refer to them as the memory $\mathbf{m}_\tau^n \in \mathbb{R}^{M \times d}$, due to a clear connection to memory-augmented neural networks (Graves et al., 2014; Weston et al., 2014). In our experiments, we set $M$ equal to the segment length during training, and increase it by multiple times during evaluation.
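The cache update itself is just a fixed-length FIFO over positions. The following toy sketch (a hypothetical `update_memory` helper, with integers standing in for hidden-state vectors) shows the bookkeeping applied per layer:

```python
def update_memory(mem, new_hidden, mem_len):
    # Append the newest segment's hidden states and keep only the most
    # recent mem_len positions (a simple FIFO cache, one per layer).
    combined = mem + new_hidden
    return combined[-mem_len:]

mem = []
for seg in [[1, 2], [3, 4], [5, 6]]:  # toy "hidden states": ints
    mem = update_memory(mem, seg, mem_len=3)
# mem now holds the last 3 positions across segment boundaries
```

Because the cache can span segment boundaries, the extended context seen by a layer is not limited to exactly one previous segment.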
3.3 Relative Positional Encodings
While we found the idea presented in the previous subsection very appealing, there is a crucial technical challenge we haven't solved in order to reuse the hidden states. That is, how can we keep the positional information coherent when we reuse the states? Recall that, in the standard Transformer, the information of sequence order is provided by a set of positional encodings, denoted as $\mathbf{U} \in \mathbb{R}^{L_{\max} \times d}$, where the $i$-th row $\mathbf{U}_i$ corresponds to the $i$-th absolute position within a segment and $L_{\max}$ prescribes the maximum possible length to be modeled. Then, the actual input to the Transformer is the element-wise addition of the word embeddings and the positional encodings. If we simply adapt this positional encoding to our recurrence mechanism introduced above, the hidden state sequence would be computed schematically by

$\mathbf{h}_{\tau+1} = f(\mathbf{h}_{\tau}, \mathbf{E}_{\mathbf{s}_{\tau+1}} + \mathbf{U}_{1:L})$
$\mathbf{h}_{\tau} = f(\mathbf{h}_{\tau-1}, \mathbf{E}_{\mathbf{s}_{\tau}} + \mathbf{U}_{1:L})$

where $\mathbf{E}_{\mathbf{s}_\tau} \in \mathbb{R}^{L \times d}$ is the word embedding sequence of $\mathbf{s}_\tau$, and $f$ represents a transformation function. Notice that both $\mathbf{E}_{\mathbf{s}_\tau}$ and $\mathbf{E}_{\mathbf{s}_{\tau+1}}$ are associated with the same positional encoding $\mathbf{U}_{1:L}$. As a result, the model has no information to distinguish the positional difference between $x_{\tau,j}$ and $x_{\tau+1,j}$ for any $j = 1, \dots, L$, resulting in a sheer performance loss.
In order to avoid this failure mode, the fundamental idea is to only encode the relative positional information in the hidden states. Conceptually, the positional encoding gives the model a temporal clue or "bias" about how information should be gathered, i.e., where to attend. For the same purpose, instead of incorporating bias statically into the initial embedding, one can inject the same information into the attention score of each layer. More importantly, it is more intuitive and generalizable to define the temporal bias in a relative manner. For instance, when a query vector $\mathbf{q}_{\tau,i}$ attends on the key vectors $\mathbf{k}_{\tau,\leq i}$, it does not need to know the absolute position of each key vector to identify the temporal order of the segment. Instead, it suffices to know the relative distance between each key vector $\mathbf{k}_{\tau,j}$ and itself $\mathbf{q}_{\tau,i}$, i.e. $i - j$. Practically, one can create a set of relative positional encodings $\mathbf{R} \in \mathbb{R}^{L_{\max} \times d}$, where the $i$-th row $\mathbf{R}_i$ indicates a relative distance of $i$ between two positions. By injecting the relative distance dynamically into the attention score, the query vector can easily distinguish the representations of $x_{\tau,j}$ and $x_{\tau+1,j}$ from their different distances, making the state reuse mechanism feasible. Meanwhile, we won't lose any temporal information, as the absolute position can be recovered recursively from relative distances.
Previously, the idea of relative positional encodings has been explored in the context of machine translation (Shaw et al., 2018) and music generation (Huang et al., 2018). Here, we offer a different derivation, arriving at a new form of relative positional encodings, which not only has a one-to-one correspondence to its absolute counterpart but also enjoys much better generalization empirically (see Section 4). Firstly, in the standard Transformer (Vaswani et al., 2017), the attention score between query $\mathbf{q}_i$ and key vector $\mathbf{k}_j$ within the same segment can be decomposed as

$\mathbf{A}_{i,j}^{\mathrm{abs}} = \underbrace{\mathbf{E}_{x_i}^\top \mathbf{W}_q^\top \mathbf{W}_k \mathbf{E}_{x_j}}_{(a)} + \underbrace{\mathbf{E}_{x_i}^\top \mathbf{W}_q^\top \mathbf{W}_k \mathbf{U}_j}_{(b)} + \underbrace{\mathbf{U}_i^\top \mathbf{W}_q^\top \mathbf{W}_k \mathbf{E}_{x_j}}_{(c)} + \underbrace{\mathbf{U}_i^\top \mathbf{W}_q^\top \mathbf{W}_k \mathbf{U}_j}_{(d)}.$
Following the idea of only relying on relative positional information, we propose to re-parameterize the four terms as follows

$\mathbf{A}_{i,j}^{\mathrm{rel}} = \underbrace{\mathbf{E}_{x_i}^\top \mathbf{W}_q^\top \mathbf{W}_{k,E} \mathbf{E}_{x_j}}_{(a)} + \underbrace{\mathbf{E}_{x_i}^\top \mathbf{W}_q^\top \mathbf{W}_{k,R} \mathbf{R}_{i-j}}_{(b)} + \underbrace{u^\top \mathbf{W}_{k,E} \mathbf{E}_{x_j}}_{(c)} + \underbrace{v^\top \mathbf{W}_{k,R} \mathbf{R}_{i-j}}_{(d)}.$
The first change we make is to replace all appearances of the absolute positional embedding $\mathbf{U}_j$ for computing key vectors in terms $(b)$ and $(d)$ with its relative counterpart $\mathbf{R}_{i-j}$. This essentially reflects the prior that only the relative distance matters for where to attend. Note that $\mathbf{R}$ is a sinusoid encoding matrix (Vaswani et al., 2017) without learnable parameters.

Secondly, we introduce a trainable parameter $u \in \mathbb{R}^d$ to replace the query $\mathbf{U}_i^\top \mathbf{W}_q^\top$ in term $(c)$. In this case, since the query vector is the same for all query positions, it suggests that the attentive bias towards different words should remain the same regardless of the query position. With a similar reasoning, a trainable parameter $v \in \mathbb{R}^d$ is added to substitute $\mathbf{U}_i^\top \mathbf{W}_q^\top$ in term $(d)$.

Finally, we deliberately separate the two weight matrices $\mathbf{W}_{k,E}$ and $\mathbf{W}_{k,R}$ for producing the content-based key vectors and location-based key vectors respectively.
Under the new parameterization, each term has an intuitive meaning: term $(a)$ represents content-based addressing, term $(b)$ captures a content-dependent positional bias, term $(c)$ governs a global content bias, and $(d)$ encodes a global positional bias.
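The four-term score can be sketched directly in numpy (toy dimensions and random weights; `sinusoid` is a hypothetical helper following the fixed Vaswani et al. (2017) formulation, so $\mathbf{R}$ carries no learnable parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # hidden dimension (toy)

def sinusoid(pos, d):
    # Fixed sinusoid encoding row for a (relative) position; no
    # learnable parameters, as in Vaswani et al. (2017).
    inv_freq = 1.0 / (10000 ** (np.arange(0, d, 2) / d))
    ang = pos * inv_freq
    return np.concatenate([np.sin(ang), np.cos(ang)])

E_i, E_j = rng.standard_normal(d), rng.standard_normal(d)  # content embeddings
W_q = rng.standard_normal((d, d))
W_kE = rng.standard_normal((d, d))   # content-based key projection
W_kR = rng.standard_normal((d, d))   # location-based key projection
u, v = rng.standard_normal(d), rng.standard_normal(d)      # global biases

def rel_score(E_i, E_j, rel_dist):
    R = sinusoid(rel_dist, d)        # relative encoding R_{i-j}
    q = W_q @ E_i
    a = q @ (W_kE @ E_j)             # (a) content-based addressing
    b = q @ (W_kR @ R)               # (b) content-dependent positional bias
    c = u @ (W_kE @ E_j)             # (c) global content bias
    d_ = v @ (W_kR @ R)              # (d) global positional bias
    return a + b + c + d_

score = rel_score(E_i, E_j, rel_dist=3)
```

Note that the score depends on the key only through its content and its distance to the query, never through an absolute position.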
In comparison, the formulation in Shaw et al. (2018) only has terms $(a)$ and $(b)$, dropping the two bias terms $(c)$ and $(d)$. Moreover, Shaw et al. (2018) merge the multiplication $\mathbf{W}_{k,R}\mathbf{R}_{i-j}$ into a single trainable matrix $\hat{\mathbf{R}}_{i-j}$, which abandons the inductive bias built into the original sinusoid positional encoding (Vaswani et al., 2017). In contrast, our relative positional embedding adapts the sinusoid formulation. As a benefit of the inductive bias, a model trained on a memory of some certain length can automatically generalize to a memory several times longer during evaluation.
Equipping the recurrence mechanism with our proposed relative positional embedding, we finally arrive at the Transformer-XL architecture. For completeness, we summarize the computational procedure for an $N$-layer Transformer-XL with a single attention head below. For $n = 1, \dots, N$:

$\tilde{\mathbf{h}}_{\tau}^{n-1} = \big[\mathrm{SG}(\mathbf{m}_{\tau}^{n-1}) \circ \mathbf{h}_{\tau}^{n-1}\big]$
$\mathbf{q}_{\tau}^{n}, \mathbf{k}_{\tau}^{n}, \mathbf{v}_{\tau}^{n} = \mathbf{h}_{\tau}^{n-1}{\mathbf{W}_q^{n}}^\top, \tilde{\mathbf{h}}_{\tau}^{n-1}{\mathbf{W}_{k,E}^{n}}^\top, \tilde{\mathbf{h}}_{\tau}^{n-1}{\mathbf{W}_v^{n}}^\top$
$\mathbf{A}_{\tau,i,j}^{n} = {\mathbf{q}_{\tau,i}^{n}}^\top \mathbf{k}_{\tau,j}^{n} + {\mathbf{q}_{\tau,i}^{n}}^\top \mathbf{W}_{k,R}^{n}\mathbf{R}_{i-j} + u^\top \mathbf{k}_{\tau,j}^{n} + v^\top \mathbf{W}_{k,R}^{n}\mathbf{R}_{i-j}$
$\mathbf{a}_{\tau}^{n} = \text{Masked-Softmax}(\mathbf{A}_{\tau}^{n})\,\mathbf{v}_{\tau}^{n}$
$\mathbf{o}_{\tau}^{n} = \text{LayerNorm}(\text{Linear}(\mathbf{a}_{\tau}^{n}) + \mathbf{h}_{\tau}^{n-1})$
$\mathbf{h}_{\tau}^{n} = \text{Positionwise-Feed-Forward}(\mathbf{o}_{\tau}^{n})$
with $\mathbf{h}_\tau^0 := \mathbf{E}_{\mathbf{s}_\tau}$ defined as the word embedding sequence. In addition, it is worth mentioning that a naive way to compute $\mathbf{A}$ requires computing $\mathbf{W}_{k,R}\mathbf{R}_{i-j}$ for all pairs $(i, j)$, whose cost is quadratic w.r.t. the sequence length. However, noticing that the value of $i - j$ only ranges from zero to the sequence length, we show a simple computation procedure in Appendix B, which reduces the cost to be linear w.r.t. the sequence length.
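One way to realize this linear-time computation, mirroring the "shift" trick described in Appendix B (this numpy version is a simplified reconstruction, not the released implementation), is to evaluate the positional term once per relative distance and then realign rows with a pad-and-reshape:

```python
import numpy as np

def rel_shift(x):
    # x[i, d] holds the positional term for relative-distance index d,
    # where column d corresponds to distance (klen - 1 - d). The pad-and-
    # reshape below realigns rows so that the result's [i, j] entry holds
    # the term for query i and key j, i.e. distance i - j, in
    # O(qlen * klen) total -- no separate encoding per (i, j) pair.
    qlen, klen = x.shape
    padded = np.concatenate([np.zeros((qlen, 1)), x], axis=1)
    reshaped = padded.reshape(klen + 1, qlen)
    return reshaped[1:].reshape(qlen, klen)

# Toy check: make entry d literally equal to the distance it encodes.
qlen = klen = 3
dist_row = np.arange(klen - 1, -1, -1, dtype=float)  # [2, 1, 0]
x = np.tile(dist_row, (qlen, 1))
shifted = rel_shift(x)
# The lower triangle now holds the true relative distance i - j; the
# upper triangle contains padding residue that the causal mask removes.
```

The trick works because the row-major reshape slides each row one position further than the row above it, exactly matching how the relative distance changes with the query index.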
4 Experiments
4.1 Main Results
Table 1: Comparison with state-of-the-art results on WikiText-103.

Model  #Params  Validation PPL  Test PPL
Grave et al. (2016b) – LSTM  –  –  48.7
Bai et al. (2018) – TCN  –  –  45.2
Dauphin et al. (2016) – GCNN-8  –  –  44.9
Grave et al. (2016b) – LSTM + Neural cache  –  –  40.8
Dauphin et al. (2016) – GCNN-14  –  –  37.2
Merity et al. (2018) – 4-layer QRNN  151M  32.0  33.0
Rae et al. (2018) – LSTM + Hebbian + Cache  –  29.7  29.9
Ours – Transformer-XL Standard  151M  23.1  24.0
Baevski & Auli (2018) – Adaptive input  247M  19.8  20.5
Ours – Transformer-XL Large  257M  17.7  18.3
Table 2: Comparison with state-of-the-art results on enwik8.

Model  #Params  Test bpc
Ha et al. (2016) – LN HyperNetworks  27M  1.34
Chung et al. (2016) – LN HM-LSTM  35M  1.32
Zilly et al. (2016) – Recurrent highway networks  46M  1.27
Mujika et al. (2017) – Large FS-LSTM-4  47M  1.25
Krause et al. (2016) – Large mLSTM  46M  1.24
Knol (2017) – cmix v13  –  1.23
Al-Rfou et al. (2018) – 12-layer Transformer  44M  1.11
Ours – 12-layer Transformer-XL  41M  1.06
Al-Rfou et al. (2018) – 64-layer Transformer  235M  1.06
Ours – 18-layer Transformer-XL  88M  1.03
Ours – 24-layer Transformer-XL  277M  0.99
Table 3: Comparison with state-of-the-art results on text8.

Model  #Params  Test bpc
Cooijmans et al. (2016) – BN-LSTM  –  1.36
Chung et al. (2016) – LN HM-LSTM  35M  1.29
Zilly et al. (2016) – Recurrent highway networks  45M  1.27
Krause et al. (2016) – Large mLSTM  45M  1.27
Al-Rfou et al. (2018) – 12-layer Transformer  44M  1.18
Al-Rfou et al. (2018) – 64-layer Transformer  235M  1.13
Ours – 24-layer Transformer-XL  277M  1.08
Table 4: Comparison with state-of-the-art results on One Billion Word.

Model  #Params  PPL
Shazeer et al. (2014) – Sparse Non-Negative  33B  52.9
Chelba et al. (2013) – RNN-1024 + 9 Gram  20B  51.3
Jozefowicz et al. (2016) – LSTM-2048-512  0.83B  43.7
Kuchaiev & Ginsburg (2017) – BIG G-LSTM-2  –  36.0
Dauphin et al. (2016) – GCNN-14 bottleneck  –  31.9
Jozefowicz et al. (2016) – LSTM-8192-1024  1.8B  30.6
Jozefowicz et al. (2016) – LSTM-8192-1024 + CNN Input  1.04B  30.0
Shazeer et al. (2017) – Low-Budget MoE  5B  34.1
Shazeer et al. (2017) – High-Budget MoE  5B  28.0
Shazeer et al. (2018) – Mesh TensorFlow  4.9B  24.0
Baevski & Auli (2018) – Adaptive Input Large  0.46B  24.1
Baevski & Auli (2018) – Adaptive Input Very Large  1.0B  23.7
Ours – Transformer-XL Base  0.46B  23.5
Ours – Transformer-XL Large  0.8B  21.8
Table 5: Comparison with state-of-the-art results on Penn Treebank (the last two rows use two-step fine-tuning).

Model  #Params  Dev PPL  Test PPL
Inan et al. (2016) – Tied Variational LSTM + augmented loss  24M  75.7  73.2
Zilly et al. (2016) – Variational RHN  23M  67.9  65.4
Zoph & Le (2016) – NAS Cell  25M  –  64.0
Merity et al. (2017) – AWD-LSTM  24M  60.7  58.8
Pham et al. (2018) – Efficient NAS  24M  60.8  58.6
Liu et al. (2018) – Differentiable NAS  23M  58.3  56.1
Yang et al. (2017) – AWD-LSTM-MoS  22M  58.08  55.97
Melis et al. (2018) – 2-layer skip-LSTM + dropout tuning  24M  57.1  55.3
Ours – Transformer-XL  24M  56.72  54.52
Merity et al. (2017) – AWD-LSTM + fine-tuning  24M  60.0  57.3
Yang et al. (2017) – AWD-LSTM-MoS + fine-tuning  22M  56.54  54.44
Table 6: Ablation study on WikiText-103 (the last two rows re-evaluate the trained Transformer-XL (151M) with shorter attention lengths).

Remark  Recurrence  Encoding  Loss  PPL init  PPL best  Attn Len
Transformer-XL (128M)  ✓  Ours  Full  27.02  26.77  500
–  ✓  Shaw et al. (2018)  Full  27.94  27.94  256
–  ✓  Ours  Half  28.69  28.33  460
–  ✗  Ours  Full  29.59  29.02  260
–  ✗  Ours  Half  30.10  30.10  120
–  ✗  Shaw et al. (2018)  Full  29.75  29.75  120
–  ✗  Shaw et al. (2018)  Half  30.50  30.50  120
–  ✗  Vaswani et al. (2017)  Half  30.97  30.97  120
Transformer (128M)  ✗  Al-Rfou et al. (2018)  Half  31.16  31.16  120
Transformer-XL (151M)  ✓  Ours  Full  23.43  23.09  640
–  ✓  Ours  Full  –  23.16  450
–  ✓  Ours  Full  –  23.35  300
Table 7: Ablation study on One Billion Word.

Method  PPL
Ours  25.2
With Shaw et al. (2018) encodings  25.7
Without recurrence  27.1
We apply Transformer-XL to a variety of datasets on both word-level and character-level language modeling to have a comparison with state-of-the-art systems, including WikiText-103 (Merity et al., 2016), enwik8 (LLC, 2009), text8 (LLC, 2009), One Billion Word (Chelba et al., 2013), and Penn Treebank (Mikolov & Zweig, 2012).
WikiText-103 is the largest available word-level language modeling benchmark with long-term dependency. It contains 103M training tokens from 28K articles, with an average length of 3.6K tokens per article, which allows testing the ability of long-term dependency modeling. We set the attention length to 384 during training and 1,600 during evaluation. We adopted adaptive softmax and input representations (Baevski & Auli, 2018; Grave et al., 2016a). As shown in Table 1, Transformer-XL reduces the previous SoTA perplexity from 20.5 to 18.3, which demonstrates the superiority of the Transformer-XL architecture.
The dataset enwik8 contains 100M bytes of unprocessed Wikipedia text. We compare our architecture with the previous results in Table 2. Under the model size constraint, the 12-layer Transformer-XL achieves a new SoTA result, outperforming the 12-layer vanilla Transformer from Al-Rfou et al. (2018) by 0.05, while both Transformer variants have a large margin over conventional RNN-based models. Notably, our 12-layer architecture achieves the same result as the 64-layer network from Al-Rfou et al. (2018), using only 17% of the parameter budget. In order to see whether better performances can be obtained by increasing the model size, we train 18-layer and 24-layer Transformer-XLs with increased model sizes. With the attention length 784 during training and 3,800 during evaluation, we obtained a new SoTA result and our method is the first to break through 1.0 on widely studied character-level benchmarks. Different from Al-Rfou et al. (2018), Transformer-XL does not need any auxiliary losses, and thus all benefits are credited to a better architecture.
Similar to but different from enwik8, text8 contains 100M processed Wikipedia characters created by lowercasing the text and removing any character other than the 26 letters a through z, and space. Due to the similarity, we simply adapt the best model and the same hyperparameters on enwik8 to text8 without further tuning. The comparison with previous methods is summarized in Table 3. Again, Transformer-XL achieves the new SoTA result with a clear margin.
One Billion Word does not preserve any long-term dependency because sentences have been shuffled. Consequently, this dataset mainly tests the ability of modeling only short-term dependency. The comparison between Transformer-XL and the other methods is shown in Table 4. Although Transformer-XL is mainly designed to better capture longer-term dependency, it dramatically improves the single-model SoTA from 23.7 to 21.8. Specifically, Transformer-XL significantly outperforms a contemporary method using vanilla Transformers (Baevski & Auli, 2018), suggesting the advantage of Transformer-XL is generalizable to modeling short sequences.
We also report the results on word-level Penn Treebank in Table 5. Similar to AWD-LSTM (Merity et al., 2017), we apply variational dropout and weight averaging to Transformer-XL. With proper regularization, Transformer-XL achieves a new SoTA result among models without two-step fine-tuning. Penn Treebank has only 1M training tokens, which implies that Transformer-XL also generalizes well even on small datasets.
4.2 Ablation Study
We conduct two sets of ablation studies to examine the effects of the two proposed techniques used in Transformer-XL: the recurrence mechanism and the new positional encoding scheme.
The first study is performed on WikiText-103, which requires modeling long-term dependency. The results are reported in Table 6. Among the compared encoding schemes, Shaw et al. (2018) is relative, while Vaswani et al. (2017) and Al-Rfou et al. (2018) are absolute. "Full" and "half" losses refer to applying a cross-entropy loss to all or only the recent half positions in the segment. We found that absolute encodings only work well with half losses because half losses exclude positions with very short attention lengths during training for better generalization. Table 6 shows that both the recurrence mechanism and our encoding scheme are necessary to achieve the best performance, as well as to generalize to longer attention sequences at evaluation time. Although the backpropagation length during training is only 128, with the two techniques the attention length can be increased to 640 at test time. In the standard setting with 151M parameters, the perplexity decreases as the attention length increases.
Since the recurrence mechanism costs additional memory, we also compare Transformer-XL with baselines under the same GPU memory constraints. As shown in Table 10 in Appendix A, despite using a shorter backpropagation length, Transformer-XL remains superior to the baselines.
The second study targets isolating the effects of resolving the context fragmentation problem from the benefit of capturing longer context length. In order to achieve this goal, we deliberately choose a dataset that does not require long-term dependency, so that any improvement from establishing the recurrence can be attributed to solving the context fragmentation. Specifically, we perform this controlled experiment on the One Billion Word dataset, which can only benefit from removing the context fragmentation. We train a 20-layer Transformer-XL with 0.3B parameters for 400K steps. As shown in Table 7, using segment-level recurrence substantially improves performance even when long-term dependency is not needed, which is consistent with our previous discussion that the recurrence mechanism resolves the context fragmentation problem. Moreover, our relative positional encodings are also superior to Shaw et al. (2018) on short sequences.
4.3 Relative Effective Context Length
Table 8: Relative effective context length (RECL) comparison, for different values of the parameter $r$.

Model  r = 0.1  r = 0.5  r = 1.0
Transformer-XL 151M  900  800  700
QRNN  500  400  300
LSTM  400  300  200
Transformer-XL 128M  700  600  500
– use Shaw et al. (2018) encoding  400  400  300
– remove recurrence  300  300  300
Transformer  128  128  128
Khandelwal et al. (2018) proposed a method to evaluate the Effective Context Length (ECL) of a sequence model. ECL is the longest length to which increasing the context span would lead to a gain of more than a threshold. However, ECL ignores the fact that it is harder to get improvement when a model already achieves a lower perplexity using only a shorter context, and thus it is not suitable for fair comparison among multiple models. We instead propose a new metric called Relative Effective Context Length (RECL). RECL is defined on a model group instead of a single model, and the gain of a long context is measured by the relative improvement over the best short-context model. As such, the model group shares the same baseline to enable fair comparison. RECL also has a parameter $r$, which means constraining the comparison on the top-$r$ hard examples. See Appendix C for more details about RECL. As shown in Table 8, Transformer-XL manages to model dependency of 900 words long on average with $r = 0.1$. The RECL of Transformer-XL is 80% and 450% longer than recurrent networks and Transformer respectively. Both the recurrence mechanism and our positional encodings contribute to a longer RECL. This further substantiates our argument that Transformer-XL is able to model longer-term dependency.
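The RECL idea can be sketched as follows (a deliberately simplified version: it keeps the shared baseline and a relative-gain threshold, but omits the top-r hard-example restriction and other details of Appendix C; model names, context lengths, and losses are all hypothetical):

```python
def recl(group_losses, threshold=0.01):
    # group_losses: {model: {context_len: loss}}; all models are evaluated
    # at the same context lengths. The shared baseline is the best loss in
    # the GROUP at the shortest context, so every model's gain at a longer
    # context is measured against the same number. A longer context
    # "counts" only while it adds more than `threshold` relative gain.
    ctxs = sorted(next(iter(group_losses.values())))
    baseline = min(m[ctxs[0]] for m in group_losses.values())
    result = {}
    for name, m in group_losses.items():
        best = ctxs[0]
        for c in ctxs[1:]:
            rel_gain = (baseline - m[c]) / baseline
            prev_gain = (baseline - m[best]) / baseline
            if rel_gain - prev_gain > threshold:
                best = c
        result[name] = best
    return result

# Toy numbers: model A keeps improving with more context; B saturates.
losses = {
    "A": {100: 30.0, 400: 27.0, 900: 25.5},
    "B": {100: 31.0, 400: 29.5, 900: 29.4},
}
r = recl(losses)  # A is credited with the long context, B is not
```

The point of the shared baseline is exactly the one made above: a model is not penalized for already being strong at short contexts.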
4.4 Evaluation Speed
Finally, we compare the evaluation speed of the proposed model with the vanilla Transformer model of Al-Rfou et al. (2018). As shown in Table 9, due to the state reuse scheme, Transformer-XL achieves an up to 1,874 times speedup during evaluation compared to the architecture in Al-Rfou et al. (2018).
Table 9: Evaluation-time speedup of Transformer-XL over Al-Rfou et al. (2018).

Attn Len  Speedup
3,800  1,874x
2,800  1,409x
1,800  773x
800  363x
5 Conclusions
We propose a novel architecture, Transformer-XL, for language modeling with self-attention architectures beyond a fixed-length context. Our main technical contributions include introducing the notion of recurrence in a purely self-attentive model and deriving a novel positional encoding scheme. These two techniques form a complete set of solutions, as any one of them alone does not address the issue of fixed-length contexts. Transformer-XL is the first self-attention model that achieves substantially better results than RNNs on both character-level and word-level language modeling. Transformer-XL is also able to model longer-term dependency than RNNs and Transformer, and achieves substantial speedup during evaluation compared to vanilla Transformers.
Acknowledgments
This work was supported in part by the Office of Naval Research, NSF grant IIS-1763562, a Google Focused Research Award, and an Nvidia Fellowship.
References
 Al-Rfou et al. (2018) Rami Al-Rfou, Dokook Choe, Noah Constant, Mandy Guo, and Llion Jones. Character-level language modeling with deeper self-attention. arXiv preprint arXiv:1808.04444, 2018.
 Baevski & Auli (2018) Alexei Baevski and Michael Auli. Adaptive input representations for neural language modeling. arXiv preprint arXiv:1809.10853, 2018.
 Bahdanau et al. (2014) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
 Bai et al. (2018) Shaojie Bai, J Zico Kolter, and Vladlen Koltun. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv preprint arXiv:1803.01271, 2018.
 Bengio et al. (2003) Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. Journal of machine learning research, 3(Feb):1137–1155, 2003.
 Chelba et al. (2013) Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005, 2013.
 Chung et al. (2016) Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. arXiv preprint arXiv:1609.01704, 2016.
 Cooijmans et al. (2016) Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, and Aaron Courville. Recurrent batch normalization. arXiv preprint arXiv:1603.09025, 2016.
 Dauphin et al. (2016) Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. Language modeling with gated convolutional networks. arXiv preprint arXiv:1612.08083, 2016.
 Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
 Dieng et al. (2016) Adji B Dieng, Chong Wang, Jianfeng Gao, and John Paisley. TopicRNN: A recurrent neural network with long-range semantic dependency. arXiv preprint arXiv:1611.01702, 2016.
 Gal & Ghahramani (2016) Yarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent neural networks. In Advances in neural information processing systems, pp. 1019–1027, 2016.
 Grave et al. (2016a) Edouard Grave, Armand Joulin, Moustapha Cissé, David Grangier, and Hervé Jégou. Efficient softmax approximation for gpus. arXiv preprint arXiv:1609.04309, 2016a.
 Grave et al. (2016b) Edouard Grave, Armand Joulin, and Nicolas Usunier. Improving neural language models with a continuous cache. arXiv preprint arXiv:1612.04426, 2016b.
 Graves (2013) Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
 Graves et al. (2014) Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014.
 Ha et al. (2016) David Ha, Andrew Dai, and Quoc V Le. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016.
 Hochreiter & Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
 Hochreiter et al. (2001) Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, Jürgen Schmidhuber, et al. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies, 2001.
 Huang et al. (2018) Cheng-Zhi Anna Huang, Ashish Vaswani, Jakob Uszkoreit, Noam Shazeer, Curtis Hawthorne, Andrew M Dai, Matthew D Hoffman, and Douglas Eck. An improved relative self-attention mechanism for transformer with application to music generation. arXiv preprint arXiv:1809.04281, 2018.
 Inan et al. (2016) Hakan Inan, Khashayar Khosravi, and Richard Socher. Tying word vectors and word classifiers: A loss framework for language modeling. arXiv preprint arXiv:1611.01462, 2016.
 Ji et al. (2015) Yangfeng Ji, Trevor Cohn, Lingpeng Kong, Chris Dyer, and Jacob Eisenstein. Document context language models. arXiv preprint arXiv:1511.03962, 2015.
 Jozefowicz et al. (2016) Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.
 Kalchbrenner et al. (2016) Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Koray Kavukcuoglu. Neural machine translation in linear time. arXiv preprint arXiv:1610.10099, 2016.
 Kanai et al. (2018) Sekitoshi Kanai, Yasuhiro Fujiwara, Yuki Yamanaka, and Shuichi Adachi. Sigsoftmax: Reanalysis of the softmax bottleneck. arXiv preprint arXiv:1805.10829, 2018.
 Ke et al. (2018) Nan Rosemary Ke, Anirudh Goyal, Olexa Bilaniuk, Jonathan Binas, Michael C Mozer, Chris Pal, and Yoshua Bengio. Sparse attentive backtracking: Temporal credit assignment through reminding. In Advances in Neural Information Processing Systems, pp. 7650–7661, 2018.
 Khandelwal et al. (2018) Urvashi Khandelwal, He He, Peng Qi, and Dan Jurafsky. Sharp nearby, fuzzy far away: How neural language models use context. arXiv preprint arXiv:1805.04623, 2018.
 Knoll (2017) Byron Knoll. cmix v13. http://www.byronknoll.com/cmix.html, 2017.
 Koutnik et al. (2014) Jan Koutnik, Klaus Greff, Faustino Gomez, and Juergen Schmidhuber. A clockwork rnn. arXiv preprint arXiv:1402.3511, 2014.
 Krause et al. (2016) Ben Krause, Liang Lu, Iain Murray, and Steve Renals. Multiplicative lstm for sequence modelling. arXiv preprint arXiv:1609.07959, 2016.
 Kuchaiev & Ginsburg (2017) Oleksii Kuchaiev and Boris Ginsburg. Factorization tricks for lstm networks. arXiv preprint arXiv:1703.10722, 2017.
 Le et al. (2015) Quoc V Le, Navdeep Jaitly, and Geoffrey E Hinton. A simple way to initialize recurrent networks of rectified linear units. arXiv preprint arXiv:1504.00941, 2015.
 Li et al. (2018) Shuai Li, Wanqing Li, Chris Cook, Ce Zhu, and Yanbo Gao. Independently recurrent neural network (indrnn): Building a longer and deeper rnn. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5457–5466, 2018.
 Liu et al. (2018) Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055, 2018.
 LLC (2009) MultiMedia LLC. Large text compression benchmark. 2009.
 Melis et al. (2018) Gábor Melis, Charles Blundell, Tomáš Kočiskỳ, Karl Moritz Hermann, Chris Dyer, and Phil Blunsom. Pushing the bounds of dropout. arXiv preprint arXiv:1805.09208, 2018.
 Merity et al. (2016) Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843, 2016.
 Merity et al. (2017) Stephen Merity, Nitish Shirish Keskar, and Richard Socher. Regularizing and optimizing lstm language models. arXiv preprint arXiv:1708.02182, 2017.
 Merity et al. (2018) Stephen Merity, Nitish Shirish Keskar, and Richard Socher. An analysis of neural language modeling at multiple scales. arXiv preprint arXiv:1803.08240, 2018.
 Mikolov & Zweig (2012) Tomas Mikolov and Geoffrey Zweig. Context dependent recurrent neural network language model. SLT, 12(234–239):8, 2012.
 Mikolov et al. (2010) Tomáš Mikolov, Martin Karafiát, Lukáš Burget, Jan Černockỳ, and Sanjeev Khudanpur. Recurrent neural network based language model. In Eleventh Annual Conference of the International Speech Communication Association, 2010.
 Mikolov et al. (2014) Tomas Mikolov, Armand Joulin, Sumit Chopra, Michael Mathieu, and Marc’Aurelio Ranzato. Learning longer memory in recurrent neural networks. arXiv preprint arXiv:1412.7753, 2014.
 Morin & Bengio (2005) Frederic Morin and Yoshua Bengio. Hierarchical probabilistic neural network language model. In Aistats, volume 5, pp. 246–252. Citeseer, 2005.
 Mujika et al. (2017) Asier Mujika, Florian Meier, and Angelika Steger. Fast-slow recurrent neural networks. In Advances in Neural Information Processing Systems, pp. 5915–5924, 2017.
 Pascanu et al. (2012) Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. Understanding the exploding gradient problem. CoRR, abs/1211.5063, 2012.
 Peters et al. (2018) Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. arXiv preprint arXiv:1802.05365, 2018.
 Pham et al. (2018) Hieu Pham, Melody Y Guan, Barret Zoph, Quoc V Le, and Jeff Dean. Efficient neural architecture search via parameter sharing. arXiv preprint arXiv:1802.03268, 2018.
 Press & Wolf (2016) Ofir Press and Lior Wolf. Using the output embedding to improve language models. arXiv preprint arXiv:1608.05859, 2016.
 Rae et al. (2018) Jack W Rae, Chris Dyer, Peter Dayan, and Timothy P Lillicrap. Fast parametric learning with activation memorization. arXiv preprint arXiv:1803.10049, 2018.
 Shaw et al. (2018) Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. Self-attention with relative position representations. arXiv preprint arXiv:1803.02155, 2018.
 Shazeer et al. (2014) Noam Shazeer, Joris Pelemans, and Ciprian Chelba. Skip-gram language modeling using sparse non-negative matrix probability estimation. arXiv preprint arXiv:1412.1454, 2014.
 Shazeer et al. (2017) Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538, 2017.
 Shazeer et al. (2018) Noam Shazeer, Youlong Cheng, Niki Parmar, Dustin Tran, Ashish Vaswani, Penporn Koanantakool, Peter Hawkins, HyoukJoong Lee, Mingsheng Hong, Cliff Young, et al. Mesh-TensorFlow: Deep learning for supercomputers. In Advances in Neural Information Processing Systems, pp. 10434–10443, 2018.
 Trinh et al. (2018) Trieu H Trinh, Andrew M Dai, Thang Luong, and Quoc V Le. Learning longer-term dependencies in RNNs with auxiliary losses. arXiv preprint arXiv:1803.00144, 2018.
 Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008, 2017.
 Wang & Cho (2015) Tian Wang and Kyunghyun Cho. Larger-context language modelling. arXiv preprint arXiv:1511.03729, 2015.
 Wang et al. (2017) Wenlin Wang, Zhe Gan, Wenqi Wang, Dinghan Shen, Jiaji Huang, Wei Ping, Sanjeev Satheesh, and Lawrence Carin. Topic compositional neural language model. arXiv preprint arXiv:1712.09783, 2017.
 Weston et al. (2014) Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. arXiv preprint arXiv:1410.3916, 2014.
 Wu et al. (2016) Yuhuai Wu, Saizheng Zhang, Ying Zhang, Yoshua Bengio, and Ruslan R Salakhutdinov. On multiplicative integration with recurrent neural networks. In Advances in neural information processing systems, pp. 2856–2864, 2016.
 Yang et al. (2017) Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, and William W Cohen. Breaking the softmax bottleneck: A high-rank RNN language model. arXiv preprint arXiv:1711.03953, 2017.
 Zaremba et al. (2014) Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.
 Zilly et al. (2016) Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutník, and Jürgen Schmidhuber. Recurrent highway networks. arXiv preprint arXiv:1607.03474, 2016.
 Zoph & Le (2016) Barret Zoph and Quoc V Le. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578, 2016.
Appendix A Ablation Study with Memory Constraints
Backprop Len  Recurrence  Encoding  Loss  pplx best  pplx init  Attn Len 

128  ✓  Ours  Full  26.77  27.02  500 
128  ✓  Ours  Partial  28.33  28.69  460 
176  ✗  Ours  Full  27.98  28.43  400 
172  ✗  Ours  Partial  28.83  28.83  120 
Table 10 compares Transformer-XL with the baseline under the same memory budget. Transformer-XL still outperforms the baseline even with a shorter backprop length.
Appendix B Efficient Computation of the Attention with Relative Positional Embedding
As we discussed in Section 3.3, the naive way of computing W_{k,R} R_{i-j} for all pairs (i, j) is subject to a quadratic cost. Here, we present a simple method with only a linear cost. Firstly, notice that the relative distance i - j can only be an integer from 0 to M + L - 1, where M and L are the memory length and segment length respectively. Hence, the rows of the matrix

Q := [W_{k,R} R_{M+L-1}, W_{k,R} R_{M+L-2}, ..., W_{k,R} R_1, W_{k,R} R_0]^T ∈ R^{(M+L) × d}

consist of all possible vector outputs of W_{k,R} R_{i-j} for any (i, j). Note that we have defined Q in a reversed order, i.e., Q_k = W_{k,R} R_{M+L-1-k}, to make further discussion easier.
Next, we collect the term (b), q_i^T W_{k,R} R_{i-j}, for all possible (i, j) into the following L × (M + L) matrix B, whose (i, j)-th entry is q_i^T W_{k,R} R_{M+i-j} (entries corresponding to a negative relative distance are masked out). Then, we further define

B̃ = q Q^T,  so that  B̃_{i,k} = q_i^T W_{k,R} R_{M+L-1-k}.

Now, it is easy to see an immediate relationship between B and B̃, where the i-th row of B is simply a left-shifted version of the i-th row of B̃. Hence, the computation of B only requires a matrix multiplication q Q^T to compute B̃ and then a set of left-shifts.
Similarly, we can collect the term (d), v^T W_{k,R} R_{i-j}, for all possible (i, j) into another L × (M + L) matrix D, defined analogously to B. Then, we can follow the same procedure to define

d̃ = [Q v]^T,  so that  d̃_k = v^T W_{k,R} R_{M+L-1-k}.

Again, each row of D is simply a left-shifted version of d̃. Hence, the main computation cost comes from the matrix-vector multiplication Q v, which is not expensive any more.
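The shift trick above is easy to check numerically. Below is an illustrative NumPy sketch (variable names are ours, and the per-row shift amount L - 1 - i is our reading of the reversed-order indexing) that verifies the shifted B̃ against the naive quadratic computation of term (b):

```python
import numpy as np

rng = np.random.default_rng(0)
M, L, d = 4, 3, 8                        # memory length, segment length, model dim (toy sizes)
q   = rng.normal(size=(L, d))            # queries q_i for the current segment
R   = rng.normal(size=(M + L, d))        # R[k] encodes relative distance k
WkR = rng.normal(size=(d, d))            # W_{k,R}

# Naive quadratic computation: B[i, j] = q_i^T W_{k,R} R_{M+i-j}, masked for j > M + i
B_naive = np.empty((L, M + L))
for i in range(L):
    for j in range(M + L):
        dist = M + i - j
        B_naive[i, j] = q[i] @ WkR @ R[dist] if dist >= 0 else 0.0

# Shift trick: one matmul, then a left-shift per row
Q = (WkR @ R[::-1].T).T                  # Q[k] = W_{k,R} R_{M+L-1-k} (reversed order)
B_tilde = q @ Q.T
B_shift = np.zeros_like(B_naive)
for i in range(L):
    shift = L - 1 - i                    # row i of B is row i of B_tilde shifted left
    B_shift[i, : M + L - shift] = B_tilde[i, shift:]

assert np.allclose(B_naive, B_shift)
```

The same pattern applies to term (d), replacing q with the single vector v, so that one matrix-vector product Q v followed by shifts suffices.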
Appendix C Details About RECL
In this section, we describe the details of the metric RECL. Let M = {m_1, m_2, ..., m_N} be a model group consisting of N models. Let l(m, c, t) denote the loss of model m on the t-th token in the corpus with a context length c. Concretely, the loss can be written as

l(m, c, t) = -log P_m(x_t | x_{t-1}, ..., x_{t-c})

where P_m is the probability distribution given by model m, and x_t is the t-th token in the corpus. Given a short context length c and a long context length c' such that c' ≥ c, we can further define a baseline for each position t,

b(c, t) = min_{m ∈ M} l(m, c, t).
The relative loss of m w.r.t. the model group M is written as

f(m, c, c') = (1 / |T|) Σ_{t ∈ T} min( b(c, t), l(m, c', t) ).

The above equation uses the minimum loss of all models on the short length c as a baseline, and only losses smaller than the baseline will be effectively counted towards the relative loss. This enables fair comparison between multiple models because all models with a long context length c' need to improve over the same baseline. Sometimes we only care about those positions t where the baseline performs poorly (which means short-term dependency with context length c is not sufficient), so given a ratio parameter r, we define the set T in the above equation as the top-r fraction of positions t with the largest baseline losses b(c, t).
The relative gain is subsequently defined as the relative perplexity reduction:

g(m, c, c') = ( exp(b̄(c)) - exp(f(m, c, c')) ) / exp(b̄(c)),  where  b̄(c) = (1 / |T|) Σ_{t ∈ T} b(c, t).
Given a step size ∆, we then use the following algorithm to find the RECL by thresholding the relative gain at 0.01:

1. Set the initial short context length c = 0 and the long context length c' = ∆.

2. Compute g(m, c, c'). If g(m, c, c') < 0.01, return RECL = c. If g(m, c, c') ≥ 0.01, set c = c', c' = c + ∆, and repeat step 2.
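The procedure above can be sketched in a few lines of Python. This is a minimal illustration under the definitions in this appendix, not the evaluation code we used: the synthetic losses at the end are fabricated for the toy check, and the 0.01 threshold default mirrors the algorithm above.

```python
import math

def relative_gain(loss, models, m, c, c2, r=1.0):
    """loss[(model, ctx)] is a list of per-token losses l(model, ctx, t).
    Returns g(m, c, c2): the relative perplexity reduction of model m with the
    long context c2 over the best short-context (c) baseline of the group."""
    n = len(loss[(m, c)])
    b = [min(loss[(mm, c)][t] for mm in models) for t in range(n)]
    # restrict to the top-r fraction of hardest positions under the baseline
    T = sorted(range(n), key=lambda t: -b[t])[: max(1, int(r * n))]
    b_bar = sum(b[t] for t in T) / len(T)
    f = sum(min(b[t], loss[(m, c2)][t]) for t in T) / len(T)
    return (math.exp(b_bar) - math.exp(f)) / math.exp(b_bar)

def recl(loss, models, m, step, threshold=0.01, r=1.0):
    # grow the context by `step` until the relative gain falls below the threshold
    c = 0
    while True:
        if relative_gain(loss, models, m, c, c + step, r) < threshold:
            return c
        c += step

# Toy check with synthetic losses: one model keeps improving up to a
# 200-token context, the other only up to 100 tokens.
loss = {}
for c in range(0, 600, 100):
    loss[('xl', c)] = [max(1.0, 3.0 - c / 100)] * 4   # improves until c = 200
    loss[('rnn', c)] = [max(2.5, 3.0 - c / 200)] * 4  # improves until c = 100
models = ['xl', 'rnn']
assert recl(loss, models, 'xl', step=100) == 200
assert recl(loss, models, 'rnn', step=100) == 100
```

Note how both models are measured against the same group-wide baseline b(c, t), which is what makes the resulting context lengths comparable.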
In Figure 3, we visualize the unnormalized relative perplexity gains exp(b̄(c)) - exp(f(m, c, c')) with various pairs of (c, c') when r = 1.0. It is clear that Transformer-XL has a longer RECL compared to RNNs and other baselines because the relative gains are substantially larger.
For reference, we also plot the perplexities with varying context lengths in Figure 4. The y-axis denotes the “normal” perplexity (not calibrated by baselines).
Appendix D Attention Visualization
In this section, we provide some visualization of the attention learned by the SoTA model on the WikiText-103 validation set. Recall that this model has 16 10-head transformer layers and relies on a memory of length 640.
The first visualization aims at revealing the overall trend of where the model is attending. Specifically, for each attention head of each layer, we average the attention distributions of all tokens in the validation set. This is shown in Fig. 5. As we can see, the overall trend is to focus more on the nearby tokens than the far-away ones. However, it is also very clear that some attention heads have a wider attention distribution over the entire memory span, notably head 8 from layer 1, head 78 from layer 8, and head 158 from layer 16.
Since we are focused on learning long-range dependency, we are especially interested in these heads with a wider attention span. Thus, in the second set of visualizations, we pick the three notable heads mentioned above and visualize their attention behavior for a randomly chosen position, as shown in Fig. 6. Here, we see three different patterns of wider attention:

For head 8 in the 1st layer, we see an almost uniform attention over the entire memory span. This is quite intuitive, as lower-level layers need to screen the entire memory span to decide where to focus for higher-level layers.

For head 78 in the 8th layer (a middle-level layer), we see a very sparse attention pattern scattered across all ranges of the memory. Again, this fits our intuition that, as information accumulates, the network may focus on particular positions of special interest.

For head 158 in the 16th layer (i.e., the last layer), each target location (corresponding to each row) has its own distinct sparse focus, differing from head 78, where target locations largely share the same attentive location in memory. Meanwhile, the pattern is also different from the case of head 8, where a few locations are clearly attended to more than others.
Finally, as we have discussed in Section 3.3, the attention score can be decomposed into four intuitive terms. Here, we want to further investigate how these terms contribute to the overall attention trend in Fig. 5. Since the term (c) represents the global content bias, i.e., the prior importance of each word regardless of the context, we will leave it out and focus on the terms (a), (b) and (d). So, for each term, we take the Softmax w.r.t. the memory span and average the resulting distribution over all tokens in the validation set. The results are visualized in Fig. 7:

Since term (a) is fully content-based addressing, when averaging over all target words, the result is essentially uniform over the entire context, except for a few very close words, which are likely to be semantically similar to the target word.

The overall trend of term (b) highly resembles that of the entire attention distribution in Fig. 5. It suggests that the global trend of focusing on the nearby context is largely contributed by this content-dependent positional bias.

The overall trend of term (d) is also to focus more on nearby words. However, compared to the trend of term (b), it is clearly flatter and biased towards a longer context.
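The per-term averaging used for Fig. 7 can be sketched as follows. This is a hypothetical helper (the function name is ours) that assumes the raw per-token scores of a single term, e.g. (b) or (d), have already been collected into an array over the memory span:

```python
import numpy as np

def average_term_distribution(scores: np.ndarray) -> np.ndarray:
    """scores: [n_tokens, span] raw values of one attention term for every
    validation token over the memory span. Returns the Softmax distribution
    of each token, averaged over all tokens."""
    shifted = scores - scores.max(axis=1, keepdims=True)  # numerically stable softmax
    probs = np.exp(shifted)
    probs /= probs.sum(axis=1, keepdims=True)
    return probs.mean(axis=0)
```

Applying this separately to the scores of terms (a), (b) and (d) yields one averaged distribution per term, which is what each panel of Fig. 7 visualizes.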