XLNet: Generalized Autoregressive Pretraining
for Language Understanding
Abstract
With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, XLNet outperforms BERT on 20 tasks, often by a large margin, and achieves state-of-the-art results on 18 tasks including question answering, natural language inference, sentiment analysis, and document ranking.[1]
[1] Pretrained models and code are available at https://github.com/zihangdai/xlnet.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le Carnegie Mellon University, Google Brain {zhiliny,dzihang,yiming,jgc,rsalakhu}@cs.cmu.edu, qvl@google.com
Preprint. Under review.
1 Introduction
Unsupervised representation learning has been highly successful in the domain of natural language processing [7, 19, 24, 25, 10]. Typically, these methods first pretrain neural networks on large-scale unlabeled text corpora, and then finetune the models or representations on downstream tasks. Under this shared high-level idea, different unsupervised pretraining objectives have been explored in the literature. Among them, autoregressive (AR) language modeling and autoencoding (AE) have been the two most successful pretraining objectives.
AR language modeling seeks to estimate the probability distribution of a text corpus with an autoregressive model [7, 24, 25]. Specifically, given a text sequence $\mathbf{x} = (x_1, \cdots, x_T)$, AR language modeling factorizes the likelihood into a forward product $p(\mathbf{x}) = \prod_{t=1}^{T} p(x_t \mid \mathbf{x}_{<t})$ or a backward one $p(\mathbf{x}) = \prod_{t=T}^{1} p(x_t \mid \mathbf{x}_{>t})$. A parametric model (e.g. a neural network) is trained to model each conditional distribution. Since an AR language model is only trained to encode a unidirectional context (either forward or backward), it is not effective at modeling deep bidirectional contexts. On the contrary, downstream language understanding tasks often require bidirectional context information. This results in a gap between AR language modeling and effective pretraining.
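To make the factorization concrete, here is a minimal sketch (our illustration, not code from the paper) of how the forward product turns a joint log-likelihood into a sum of conditional log-probabilities; the uniform conditional below is a placeholder standing in for a learned neural model:

```python
import math

# Toy sketch (not the paper's model): the chain rule turns the joint
# likelihood into a product of conditionals, i.e. a sum in log space.
def ar_log_likelihood(tokens, cond_log_prob):
    """Forward factorization: log p(x) = sum_t log p(x_t | x_1..x_{t-1})."""
    total = 0.0
    for t, tok in enumerate(tokens):
        total += cond_log_prob(tok, tokens[:t])
    return total

# Hypothetical conditional: uniform over a 4-word vocabulary, standing in
# for a learned model p(x_t | context).
vocab = ["new", "york", "is", "a"]
uniform = lambda tok, ctx: math.log(1.0 / len(vocab))

ll = ar_log_likelihood(["new", "york", "is"], uniform)
# With a uniform conditional, log p(x) equals 3 * log(1/4).
```

A backward factorization would simply pass `tokens[t+1:]` as the context instead.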
In comparison, AE based pretraining does not perform explicit density estimation but instead aims to reconstruct the original data from corrupted input. A notable example is BERT [10], which has been the state-of-the-art pretraining approach. Given the input token sequence, a certain portion of tokens are replaced by a special symbol [MASK], and the model is trained to recover the original tokens from the corrupted version. Since density estimation is not part of the objective, BERT is allowed to utilize bidirectional contexts for reconstruction. As an immediate benefit, this closes the aforementioned bidirectional information gap in AR language modeling, leading to improved performance. However, the artificial symbols like [MASK] used by BERT during pretraining are absent from real data at finetuning time, resulting in a pretrain-finetune discrepancy. Moreover, since the predicted tokens are masked in the input, BERT is not able to model the joint probability using the product rule as in AR language modeling. In other words, BERT assumes the predicted tokens are independent of each other given the unmasked tokens, which is oversimplified as high-order, long-range dependency is prevalent in natural language [9].
Faced with the pros and cons of existing language pretraining objectives, in this work, we propose XLNet, a generalized autoregressive method that leverages the best of both AR language modeling and AE while avoiding their limitations.

Firstly, instead of using a fixed forward or backward factorization order as in conventional AR models, XLNet maximizes the expected log likelihood of a sequence w.r.t. all possible permutations of the factorization order. Thanks to the permutation operation, the context for each position can consist of tokens from both left and right. In expectation, each position learns to utilize contextual information from all positions, i.e., capturing bidirectional context.

Secondly, as a generalized AR language model, XLNet does not rely on data corruption. Hence, XLNet does not suffer from the pretrain-finetune discrepancy that BERT is subject to. Meanwhile, the autoregressive objective also provides a natural way to use the product rule for factorizing the joint probability of the predicted tokens, eliminating the independence assumption made in BERT.
In addition to a novel pretraining objective, XLNet improves architectural designs for pretraining.

Inspired by the latest advancements in AR language modeling, XLNet integrates the segment recurrence mechanism and relative encoding scheme of Transformer-XL [9] into pretraining, which empirically improves performance, especially for tasks involving longer text sequences.

Naively applying a Transformer(-XL) architecture to permutation-based language modeling does not work because the factorization order is arbitrary and the target is ambiguous. As a solution, we propose to re-parameterize the Transformer(-XL) network to remove the ambiguity.
Empirically, XLNet achieves state-of-the-art results on 18 tasks, i.e., 7 GLUE language understanding tasks, 3 reading comprehension tasks including SQuAD and RACE, 7 text classification tasks including Yelp and IMDB, and the ClueWeb09-B document ranking task. Under a set of fair comparison experiments, XLNet consistently outperforms BERT [10] on multiple benchmarks.
Related Work The idea of permutation-based AR modeling has been explored in [32, 11], but there are several key differences. Previous models are orderless, while XLNet is essentially order-aware with positional encodings. This is important for language understanding because an orderless model degenerates to a bag-of-words model, lacking basic expressivity. The above difference results from the fundamental difference in motivation: previous models aim to improve density estimation by baking an "orderless" inductive bias into the model, while XLNet is motivated by enabling AR language models to learn bidirectional contexts.
2 Proposed Method
2.1 Background
In this section, we first review and compare the conventional AR language modeling and BERT for language pretraining. Given a text sequence $\mathbf{x} = [x_1, \cdots, x_T]$, AR language modeling performs pretraining by maximizing the likelihood under the forward autoregressive factorization:
$$\max_\theta \;\; \log p_\theta(\mathbf{x}) = \sum_{t=1}^{T} \log p_\theta(x_t \mid \mathbf{x}_{<t}) = \sum_{t=1}^{T} \log \frac{\exp\left(h_\theta(\mathbf{x}_{1:t-1})^\top e(x_t)\right)}{\sum_{x'} \exp\left(h_\theta(\mathbf{x}_{1:t-1})^\top e(x')\right)}, \qquad (1)$$
where $h_\theta(\mathbf{x}_{1:t-1})$ is a context representation produced by neural models, such as RNNs or Transformers, and $e(x)$ denotes the embedding of $x$. In comparison, BERT is based on denoising autoencoding. Specifically, for a text sequence $\mathbf{x}$, BERT first constructs a corrupted version $\hat{\mathbf{x}}$ by randomly setting a portion (e.g. 15%) of tokens in $\mathbf{x}$ to a special symbol [MASK]. Let the masked tokens be $\bar{\mathbf{x}}$. The training objective is to reconstruct $\bar{\mathbf{x}}$ from $\hat{\mathbf{x}}$:
$$\max_\theta \;\; \log p_\theta(\bar{\mathbf{x}} \mid \hat{\mathbf{x}}) \approx \sum_{t=1}^{T} m_t \log p_\theta(x_t \mid \hat{\mathbf{x}}) = \sum_{t=1}^{T} m_t \log \frac{\exp\left(H_\theta(\hat{\mathbf{x}})_t^\top e(x_t)\right)}{\sum_{x'} \exp\left(H_\theta(\hat{\mathbf{x}})_t^\top e(x')\right)}, \qquad (2)$$
where $m_t = 1$ indicates $x_t$ is masked, and $H_\theta$ is a Transformer that maps a length-$T$ text sequence $\hat{\mathbf{x}}$ into a sequence of hidden vectors $H_\theta(\hat{\mathbf{x}}) = [H_\theta(\hat{\mathbf{x}})_1, H_\theta(\hat{\mathbf{x}})_2, \cdots, H_\theta(\hat{\mathbf{x}})_T]$. The pros and cons of the two pretraining objectives are compared in the following aspects:

Independence Assumption: As emphasized by the $\approx$ sign in Eq. (2), BERT factorizes the joint conditional probability $p(\bar{\mathbf{x}} \mid \hat{\mathbf{x}})$ based on an independence assumption that all masked tokens $\bar{\mathbf{x}}$ are separately reconstructed. In comparison, the AR language modeling objective (1) factorizes $p_\theta(\mathbf{x})$ using the product rule that holds universally without such an independence assumption.

Input noise: The input to BERT contains artificial symbols like [MASK] that never occur in downstream tasks, which creates a pretrain-finetune discrepancy. Replacing [MASK] with original tokens as in [10] does not solve the problem because original tokens can only be used with a small probability; otherwise Eq. (2) would be trivial to optimize. In comparison, AR language modeling does not rely on any input corruption and does not suffer from this issue.

Context dependency: The AR representation $h_\theta(\mathbf{x}_{1:t-1})$ is only conditioned on the tokens up to position $t$ (i.e. tokens to the left), while the BERT representation $H_\theta(\mathbf{x})_t$ has access to the contextual information on both sides. As a result, the BERT objective allows the model to be pretrained to better capture bidirectional context.
2.2 Objective: Permutation Language Modeling
According to the comparison above, AR language modeling and BERT possess their unique advantages over the other. A natural question to ask is whether there exists a pretraining objective that brings the advantages of both while avoiding their weaknesses.
Borrowing ideas from orderless NADE [32], we propose the permutation language modeling objective that not only retains the benefits of AR models but also allows models to capture bidirectional contexts. Specifically, for a sequence $\mathbf{x}$ of length $T$, there are $T!$ different orders to perform a valid autoregressive factorization. Intuitively, if model parameters are shared across all factorization orders, in expectation, the model will learn to gather information from all positions on both sides.
To formalize the idea, let $\mathcal{Z}_T$ be the set of all possible permutations of the length-$T$ index sequence $[1, 2, \cdots, T]$. We use $z_t$ and $\mathbf{z}_{<t}$ to denote the $t$-th element and the first $t-1$ elements of a permutation $\mathbf{z} \in \mathcal{Z}_T$. Then, our proposed permutation language modeling objective can be expressed as follows:
$$\max_\theta \;\; \mathbb{E}_{\mathbf{z} \sim \mathcal{Z}_T} \left[ \sum_{t=1}^{T} \log p_\theta\left(x_{z_t} \mid \mathbf{x}_{\mathbf{z}_{<t}}\right) \right]. \qquad (3)$$
Essentially, for a text sequence $\mathbf{x}$, we sample a factorization order $\mathbf{z}$ at a time and decompose the likelihood $p_\theta(\mathbf{x})$ according to the factorization order. Since the same model parameter $\theta$ is shared across all factorization orders during training, in expectation, $x_t$ has seen every possible element $x_i \neq x_t$ in the sequence, hence being able to capture the bidirectional context. Moreover, as this objective fits into the AR framework, it naturally avoids the independence assumption and the pretrain-finetune discrepancy discussed in Section 2.1.
Remark on Permutation The proposed objective only permutes the factorization order, not the sequence order. In other words, we keep the original sequence order, use the positional encodings corresponding to the original sequence, and rely on a proper attention mask in Transformers to achieve permutation of the factorization order. Note that this choice is necessary, since the model will only encounter text sequences with the natural order during finetuning.
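As a concrete illustration of this remark, the sketch below (our own, with a hypothetical `permutation_mask` helper) builds the attention mask that realizes a factorization order while the positions, and hence the positional encodings, keep their original sequence order:

```python
def permutation_mask(z):
    """mask[q][k] is True iff original position k precedes original position q
    in the factorization order z, i.e. x_q may condition on x_k."""
    rank = {pos: t for t, pos in enumerate(z)}  # original position -> step
    T = len(z)
    return [[rank[k] < rank[q] for k in range(T)] for q in range(T)]

# Factorization order 3 -> 1 -> 2 -> 0 over a length-4 sequence:
m = permutation_mask([3, 1, 2, 0])
# Position 3 is predicted first (sees nothing); position 0 is predicted
# last (sees every other position).
```

Note that only the mask changes per sampled order; the input token sequence itself is never shuffled.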
To provide an overall picture, we show an example of predicting the token $x_3$ given the same input sequence $\mathbf{x}$ but under different factorization orders in Figure 1.
2.3 Architecture: Two-Stream Self-Attention for Target-Aware Representations
While the permutation language modeling objective has desired properties, a naive implementation with the standard Transformer parameterization may not work. To see the problem, assume we parameterize the next-token distribution $p_\theta(X_{z_t} \mid \mathbf{x}_{\mathbf{z}_{<t}})$ using the standard Softmax formulation, i.e., $p_\theta(X_{z_t} = x \mid \mathbf{x}_{\mathbf{z}_{<t}}) = \frac{\exp\left(e(x)^\top h_\theta(\mathbf{x}_{\mathbf{z}_{<t}})\right)}{\sum_{x'} \exp\left(e(x')^\top h_\theta(\mathbf{x}_{\mathbf{z}_{<t}})\right)}$, where $h_\theta(\mathbf{x}_{\mathbf{z}_{<t}})$ denotes the hidden representation of $\mathbf{x}_{\mathbf{z}_{<t}}$ produced by the shared Transformer network after proper masking. Now notice that the representation $h_\theta(\mathbf{x}_{\mathbf{z}_{<t}})$ does not depend on which position it will predict, i.e., the value of $z_t$. Consequently, the same distribution is predicted regardless of the target position, which is not able to learn useful representations (see Appendix A.1 for a concrete example). To avoid this problem, we propose to re-parameterize the next-token distribution to be target position aware:
$$p_\theta(X_{z_t} = x \mid \mathbf{x}_{\mathbf{z}_{<t}}) = \frac{\exp\left(e(x)^\top g_\theta(\mathbf{x}_{\mathbf{z}_{<t}}, z_t)\right)}{\sum_{x'} \exp\left(e(x')^\top g_\theta(\mathbf{x}_{\mathbf{z}_{<t}}, z_t)\right)}, \qquad (4)$$
where $g_\theta(\mathbf{x}_{\mathbf{z}_{<t}}, z_t)$ denotes a new type of representations which additionally take the target position $z_t$ as input.
Two-Stream Self-Attention While the idea of target-aware representations removes the ambiguity in target prediction, how to formulate $g_\theta(\mathbf{x}_{\mathbf{z}_{<t}}, z_t)$ remains a non-trivial problem. Among other possibilities, we propose to "stand" at the target position $z_t$ and rely on the position $z_t$ to gather information from the context $\mathbf{x}_{\mathbf{z}_{<t}}$ through attention. For this parameterization to work, there are two requirements that are contradictory in a standard Transformer architecture: (1) to predict the token $x_{z_t}$, $g_\theta(\mathbf{x}_{\mathbf{z}_{<t}}, z_t)$ should only use the position $z_t$ and not the content $x_{z_t}$, otherwise the objective becomes trivial; (2) to predict the other tokens $x_{z_j}$ with $j > t$, $g_\theta(\mathbf{x}_{\mathbf{z}_{<t}}, z_t)$ should also encode the content $x_{z_t}$ to provide full contextual information. To resolve such a contradiction, we propose to use two sets of hidden representations instead of one:

The content representation $h_\theta(\mathbf{x}_{\mathbf{z}_{\leq t}})$, or abbreviated as $h_{z_t}$, which serves a similar role to the standard hidden states in Transformer. This representation encodes both the context and $x_{z_t}$ itself.

The query representation $g_\theta(\mathbf{x}_{\mathbf{z}_{<t}}, z_t)$, or abbreviated as $g_{z_t}$, which only has access to the contextual information $\mathbf{x}_{\mathbf{z}_{<t}}$ and the position $z_t$, but not the content $x_{z_t}$, as discussed above.
Computationally, the first-layer query stream is initialized with a trainable vector, i.e. $g_i^{(0)} = w$, while the content stream is set to the corresponding word embedding, i.e. $h_i^{(0)} = e(x_i)$. For each self-attention layer $m = 1, \cdots, M$, the two streams of representations are schematically updated with a shared set of parameters as follows (illustrated in Figures 2 (a) and (b); to avoid clutter, we omit implementation details including multi-head attention, residual connections, layer normalization and the position-wise feed-forward layer as used in Transformer(-XL), which are included in Appendix A.2 for reference):

$$g_{z_t}^{(m)} \leftarrow \mathrm{Attention}\left(\mathrm{Q} = g_{z_t}^{(m-1)}, \; \mathrm{KV} = \mathbf{h}_{\mathbf{z}_{<t}}^{(m-1)}; \theta\right) \quad \text{(query stream: use $z_t$ but cannot see $x_{z_t}$)}$$
$$h_{z_t}^{(m)} \leftarrow \mathrm{Attention}\left(\mathrm{Q} = h_{z_t}^{(m-1)}, \; \mathrm{KV} = \mathbf{h}_{\mathbf{z}_{\leq t}}^{(m-1)}; \theta\right) \quad \text{(content stream: use both $z_t$ and $x_{z_t}$)}$$
where Q, K, V denote the query, key, and value in an attention operation [33]. The update rule of the content representations is exactly the same as standard self-attention, so during finetuning, we can simply drop the query stream and use the content stream as a normal Transformer(-XL). Finally, we can use the last-layer query representation $g_{z_t}^{(M)}$ to compute Eq. (4).
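To make the two requirements concrete, here is a minimal sketch (our simplification: single head, no memory, plain Python) of the two attention masks. The content stream includes the diagonal, so each position sees its own content; the query stream excludes it:

```python
def two_stream_masks(z):
    """Content-stream mask uses z_<=t (a position may attend to its own
    content); query-stream mask uses z_<t (position known, content hidden)."""
    rank = {pos: t for t, pos in enumerate(z)}
    T = len(z)
    content = [[rank[k] <= rank[q] for k in range(T)] for q in range(T)]
    query = [[rank[k] < rank[q] for k in range(T)] for q in range(T)]
    return content, query

content, query = two_stream_masks([3, 1, 2, 0])
# The two masks differ exactly on the diagonal.
```

Dropping the query stream at finetuning time then amounts to keeping only the `content` mask, which is an ordinary (permuted) causal mask.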
Partial Prediction While the permutation language modeling objective (3) has several benefits, it is a much more challenging optimization problem due to the permutation and causes slow convergence in preliminary experiments. To reduce the optimization difficulty, we choose to only predict the last tokens in a factorization order. Formally, we split $\mathbf{z}$ into a non-target subsequence $\mathbf{z}_{\leq c}$ and a target subsequence $\mathbf{z}_{>c}$, where $c$ is the cutting point. The objective is to maximize the log-likelihood of the target subsequence conditioned on the non-target subsequence, i.e.,
$$\max_\theta \;\; \mathbb{E}_{\mathbf{z} \sim \mathcal{Z}_T} \left[ \log p_\theta\left(\mathbf{x}_{\mathbf{z}_{>c}} \mid \mathbf{x}_{\mathbf{z}_{\leq c}}\right) \right] = \mathbb{E}_{\mathbf{z} \sim \mathcal{Z}_T} \left[ \sum_{t=c+1}^{|\mathbf{z}|} \log p_\theta\left(x_{z_t} \mid \mathbf{x}_{\mathbf{z}_{<t}}\right) \right]. \qquad (5)$$
Note that $\mathbf{z}_{>c}$ is chosen as the target because it possesses the longest context in the sequence given the current factorization order $\mathbf{z}$. A hyperparameter $K$ is used such that about $1/K$ tokens are selected for prediction; i.e., $|\mathbf{z}| / (|\mathbf{z}| - c) \approx K$. For unselected tokens, their query representations need not be computed, which saves time and memory.
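A small sketch of the target split (hypothetical helper; we assume the cutting point is derived from $K$ exactly as in the relation above):

```python
def split_targets(z, K):
    """Keep roughly the last 1/K of the factorization order as targets,
    so that |z| / (|z| - c) ≈ K at cutting point c."""
    num_targets = max(1, round(len(z) / K))
    c = len(z) - num_targets
    return z[:c], z[c:]  # (non-target z_<=c, target z_>c)

ctx, targets = split_targets(list(range(12)), K=6)
# 12 positions with K = 6 -> the last 2 steps of the order are targets.
```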
2.4 Incorporating Ideas from Transformer-XL
Since our objective function fits in the AR framework, we incorporate the state-of-the-art AR language model, Transformer-XL [9], into our pretraining framework, and name our method after it. We integrate two important techniques in Transformer-XL, namely the relative positional encoding scheme and the segment recurrence mechanism. We apply relative positional encodings based on the original sequence as discussed earlier, which is straightforward. Now we discuss how to integrate the recurrence mechanism into the proposed permutation setting and enable the model to reuse hidden states from previous segments. Without loss of generality, suppose we have two segments taken from a long sequence $\mathbf{s}$; i.e., $\tilde{\mathbf{x}} = \mathbf{s}_{1:T}$ and $\mathbf{x} = \mathbf{s}_{T+1:2T}$. Let $\tilde{\mathbf{z}}$ and $\mathbf{z}$ be permutations of $[1, \cdots, T]$ and $[T+1, \cdots, 2T]$ respectively. Then, based on the permutation $\tilde{\mathbf{z}}$, we process the first segment, and cache the obtained content representations $\tilde{\mathbf{h}}^{(m)}$ for each layer $m$. Then, for the next segment $\mathbf{x}$, the attention update with memory can be written as

$$h_{z_t}^{(m)} \leftarrow \mathrm{Attention}\left(\mathrm{Q} = h_{z_t}^{(m-1)}, \; \mathrm{KV} = \left[\tilde{\mathbf{h}}^{(m-1)}, \mathbf{h}_{\mathbf{z}_{\leq t}}^{(m-1)}\right]; \theta\right)$$
where $[\cdot, \cdot]$ denotes concatenation along the sequence dimension. Notice that positional encodings only depend on the actual positions in the original sequence. Thus, the above attention update is independent of $\tilde{\mathbf{z}}$ once the representations $\tilde{\mathbf{h}}^{(m)}$ are obtained. This allows caching and reusing the memory without knowing the factorization order of the previous segment. In expectation, the model learns to utilize the memory over all factorization orders of the last segment. The query stream can be computed in the same way. Finally, Figure 2 (c) presents an overview of the proposed permutation language modeling with two-stream attention (see Appendix A.4 for a more detailed illustration).
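The memory integration can be sketched as an attention-mask construction (our simplification, reusing the content-stream rule from Section 2.3): memory columns from the previous segment are always visible, independent of that segment's factorization order:

```python
def mask_with_memory(z, mem_len):
    """Content-stream mask extended with memory: the first mem_len key/value
    columns come from the cached previous segment and are always visible;
    the remaining columns follow the usual permutation rule."""
    rank = {pos: t for t, pos in enumerate(z)}
    T = len(z)
    return [[True] * mem_len + [rank[k] <= rank[q] for k in range(T)]
            for q in range(T)]

m = mask_with_memory([1, 0, 2], mem_len=4)
# 3 query rows, each with 4 always-visible memory columns
# plus 3 in-segment columns.
```

Because the memory columns never depend on the previous permutation, the cached states can be reused as-is, mirroring the independence property stated above.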
2.5 Modeling Multiple Segments
Many downstream tasks have multiple input segments, e.g., a question and a context paragraph in question answering. We now discuss how we pretrain XLNet to model multiple segments in the autoregressive framework. During the pretraining phase, following BERT, we randomly sample two segments (either from the same context or not) and treat the concatenation of the two segments as one sequence to perform permutation language modeling. We only reuse the memory that belongs to the same context. Specifically, the input to our model is similar to BERT: [A, SEP, B, SEP, CLS], where "SEP" and "CLS" are two special symbols and "A" and "B" are the two segments. Although we follow the two-segment data format, XLNet-Large does not use the objective of next sentence prediction [10] as it does not show consistent improvement in our ablation study (see Section 3.7).
Relative Segment Encodings Architecturally, different from BERT that adds an absolute segment embedding to the word embedding at each position, we extend the idea of relative encodings from Transformer-XL to also encode the segments. Given a pair of positions $i$ and $j$ in the sequence, if $i$ and $j$ are from the same segment, we use a segment encoding $\mathbf{s}_{ij} = \mathbf{s}_+$, or otherwise $\mathbf{s}_{ij} = \mathbf{s}_-$, where $\mathbf{s}_+$ and $\mathbf{s}_-$ are learnable model parameters for each attention head. In other words, we only consider whether the two positions are within the same segment, as opposed to considering which specific segments they are from. This is consistent with the core idea of relative encodings; i.e., only modeling the relationships between positions. When $i$ attends to $j$, the segment encoding $\mathbf{s}_{ij}$ is used to compute an attention weight $a_{ij} = (\mathbf{q}_i + \mathbf{b})^\top \mathbf{s}_{ij}$, where $\mathbf{q}_i$ is the query vector as in a standard attention operation and $\mathbf{b}$ is a learnable head-specific bias vector. Finally, the value $a_{ij}$ is added to the normal attention weight. There are two benefits of using relative segment encodings. First, the inductive bias of relative encodings improves generalization [9]. Second, it opens the possibility of finetuning on tasks that have more than two input segments, which is not possible using absolute segment encodings.
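A toy sketch of this mechanism (our illustration with scalar stand-ins: `dot_same` and `dot_diff` play the roles of $(\mathbf{q}_i + \mathbf{b})^\top \mathbf{s}_+$ and $(\mathbf{q}_i + \mathbf{b})^\top \mathbf{s}_-$; the real model uses per-head vectors):

```python
def segment_attention_bias(seg_ids, dot_same, dot_diff):
    """Additive attention bias a_ij that depends only on whether positions
    i and j share a segment, never on which segment they are in."""
    n = len(seg_ids)
    return [[dot_same if seg_ids[i] == seg_ids[j] else dot_diff
             for j in range(n)]
            for i in range(n)]

bias = segment_attention_bias([0, 0, 1, 1], dot_same=0.5, dot_diff=-0.5)
# bias[0][1] == 0.5 (same segment); bias[0][2] == -0.5 (different segments)
```

Since only "same or different" matters, the same two parameters generalize unchanged to inputs with three or more segments.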
2.6 Discussion and Analysis
2.6.1 Comparison with BERT
Comparing Eq. (2) and (5), we observe that both BERT and XLNet perform partial prediction, i.e., only predicting a subset of tokens in the sequence. This is a necessary choice for BERT because if all tokens were masked, it would be impossible to make any meaningful predictions. In addition, for both BERT and XLNet, partial prediction plays a role in reducing optimization difficulty by only predicting tokens with sufficient context. However, the independence assumption discussed in Section 2.1 prevents BERT from modeling dependency between targets.
To better understand the difference, let’s consider a concrete example [New, York, is, a, city]. Suppose both BERT and XLNet select the two tokens [New, York] as the prediction targets and maximize $\log p(\text{New York} \mid \text{is a city})$. Also suppose that XLNet samples the factorization order [is, a, city, New, York]. In this case, BERT and XLNet respectively reduce to the following objectives:

$$\mathcal{J}_{\text{BERT}} = \log p(\text{New} \mid \text{is a city}) + \log p(\text{York} \mid \text{is a city}),$$
$$\mathcal{J}_{\text{XLNet}} = \log p(\text{New} \mid \text{is a city}) + \log p(\text{York} \mid \text{New, is a city}).$$
Notice that XLNet is able to capture the dependency between the pair (New, York), which is omitted by BERT. Although in this example, BERT learns some dependency pairs such as (New, city) and (York, city), it is obvious that XLNet always learns more dependency pairs given the same target and contains “denser” effective training signals.
To prove a general point beyond one example, we now turn to more formal expressions. Inspired by previous work [38], given a sequence $\mathbf{x} = [x_1, \cdots, x_T]$, we define a set of target-context pairs of interest, $\mathcal{I} = \{(x, \mathcal{U})\}$, where $\mathcal{U}$ is a set of tokens in $\mathbf{x}$ that form a context of $x$. Intuitively, we want the model to learn the dependency of $x$ on $\mathcal{U}$ through a pretraining loss term $\log p(x \mid \mathcal{U})$. For example, given the above sentence, the pairs of interest $\mathcal{I}$ could be instantiated as:

$$\mathcal{I} = \left\{ \left(x = \text{York}, \, \mathcal{U} = \{\text{New}\}\right), \; \left(x = \text{York}, \, \mathcal{U} = \{\text{city}\}\right), \; \left(x = \text{York}, \, \mathcal{U} = \{\text{New, city}\}\right), \; \cdots \right\}.$$
Note that $\mathcal{I}$ is merely a virtual notion without unique ground truth, and our analysis will hold regardless of how $\mathcal{I}$ is instantiated.
Given a set of target tokens $\mathcal{T}$ and a set of non-target tokens $\mathcal{N} = \mathbf{x} \setminus \mathcal{T}$, BERT and XLNet both maximize $\log p(\mathcal{T} \mid \mathcal{N})$ but with different formulations:

$$\mathcal{J}_{\text{BERT}} = \sum_{x \in \mathcal{T}} \log p(x \mid \mathcal{N}); \qquad \mathcal{J}_{\text{XLNet}} = \sum_{x \in \mathcal{T}} \log p(x \mid \mathcal{N} \cup \mathcal{T}_{<x}),$$
where $\mathcal{T}_{<x}$ denotes tokens in $\mathcal{T}$ that have a factorization order prior to $x$. Both objectives consist of multiple loss terms in the form of $\log p(x \mid \mathcal{V}_x)$. Intuitively, if there exists a target-context pair $(x, \mathcal{U}) \in \mathcal{I}$ such that $\mathcal{U} \subseteq \mathcal{V}_x$, then the loss term $\log p(x \mid \mathcal{V}_x)$ provides a training signal to the dependency between $x$ and $\mathcal{U}$. For convenience, we say a target-context pair $(x, \mathcal{U}) \in \mathcal{I}$ is covered by a model (objective) if $\mathcal{U} \subseteq \mathcal{V}_x$.
Given the definition, let’s consider two cases:

If $\mathcal{U} \subseteq \mathcal{N}$, the dependency $(x, \mathcal{U})$ is covered by both BERT and XLNet.

If $\mathcal{U} \subseteq \mathcal{N} \cup \mathcal{T}_{<x}$ and $\mathcal{U} \cap \mathcal{T}_{<x} \neq \varnothing$, the dependency can only be covered by XLNet but not BERT. As a result, XLNet is able to cover more dependencies than BERT. In other words, the XLNet objective contains more effective training signals, which empirically leads to better performance in Section 3.
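The two cases can be checked mechanically on the [New, York, is, a, city] example (our own enumeration; `covered_pairs` is a hypothetical helper following the definitions above):

```python
def covered_pairs(targets, nontargets, order=None):
    """Context V_x for each target x: BERT conditions on N only; XLNet
    (given a factorization order) additionally conditions on T_<x."""
    contexts = {}
    for x in targets:
        ctx = set(nontargets)
        if order is not None:  # XLNet: add targets earlier in the order
            ctx |= {y for y in targets if order.index(y) < order.index(x)}
        contexts[x] = ctx
    return contexts

N = ["is", "a", "city"]
bert = covered_pairs(["New", "York"], N)
xlnet = covered_pairs(["New", "York"], N,
                      order=["is", "a", "city", "New", "York"])
# XLNet's context for "York" contains "New"; BERT's never does.
```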
2.6.2 Comparison with Language Modeling
Borrowing examples and notations from Section 2.6.1, a standard AR language model like GPT [25] is only able to cover the dependency $(x = \text{York}, \mathcal{U} = \{\text{New}\})$ but not $(x = \text{New}, \mathcal{U} = \{\text{York}\})$. XLNet, on the other hand, is able to cover both in expectation over all factorization orders. Such a limitation of AR language modeling can be critical in real-world applications. For example, consider a span extraction question answering task with the context “Thom Yorke is the singer of Radiohead” and the question “Who is the singer of Radiohead”. The representations of “Thom Yorke” are not dependent on “Radiohead” with AR language modeling and thus they will not be chosen as the answer by the standard approach that employs softmax over all token representations. More formally, consider a context-target pair $(x, \mathcal{U})$:

If $\mathcal{U} \not\subseteq \mathbf{x}_{<x}$, where $\mathbf{x}_{<x}$ denotes the tokens prior to $x$ in the original sequence, AR language modeling is not able to cover the dependency.

In comparison, XLNet is able to cover all dependencies in expectation.
Approaches like ELMo [24] concatenate forward and backward language models in a shallow manner, which is not sufficient for modeling deep interactions between the two directions.
2.6.3 Bridging the Gap Between Language Modeling and Pretraining
With a deep root in density estimation [4, 32, 21] (the problem of language modeling is essentially density estimation for text data), language modeling has been a rapidly-developing research area [9, 1, 3]. However, there has been a gap between language modeling and pretraining due to the lack of the capability of bidirectional context modeling, as analyzed in Section 2.6.2. Some machine learning practitioners have even questioned whether language modeling is a meaningful pursuit if it does not directly improve downstream tasks (see https://openreview.net/forum?id=HJePno0cYm). XLNet generalizes language modeling and bridges such a gap. As a result, it further “justifies” language modeling research. Moreover, it becomes possible to leverage the rapid progress of language modeling research for pretraining. As an example, we integrate Transformer-XL into XLNet to demonstrate the usefulness of the latest language modeling progress.
3 Experiments
3.1 Pretraining and Implementation
Following BERT [10], we use the BooksCorpus [41] and English Wikipedia as part of our pretraining data, which have 13GB plain text combined. In addition, we include Giga5 (16GB text) [23], ClueWeb 2012-B (extended from [5]), and Common Crawl [6] for pretraining. We use heuristics to aggressively filter out short or low-quality articles for ClueWeb 2012-B and Common Crawl, which results in 19GB and 78GB text respectively. After tokenization with SentencePiece [16], we obtain 2.78B, 1.09B, 4.75B, 4.30B, and 19.97B subword pieces for Wikipedia, BooksCorpus, Giga5, ClueWeb, and Common Crawl respectively, which are 32.89B in total.
Our largest model XLNet-Large has the same architecture hyperparameters as BERT-Large, which results in a similar model size. The sequence length and memory length are set to 512 and 384 respectively. We train XLNet-Large on 512 TPU v3 chips for 500K steps with an Adam optimizer, linear learning rate decay, and a batch size of 2048, which takes about 2.5 days. It was observed that the model still underfits the data at the end of training, but continued training did not help downstream tasks, which indicates that given the optimization algorithm, the model does not have enough capacity to fully leverage the data scale. However, in this work, we refrain from training a larger model as its practical usage for finetuning might be limited. Further, we train an XLNet-Base, analogous to BERT-Base, on BooksCorpus and Wikipedia only, for ablation study and fair comparison with BERT. Related results are presented in Section 3.7.
Since the recurrence mechanism is introduced, we use a bidirectional data input pipeline where each of the forward and backward directions takes half of the batch size. For training XLNet-Large, we set the partial prediction constant $K$ as 6 (see Section 2.3). Our finetuning procedure follows BERT [10] except otherwise specified (hyperparameters for pretraining and finetuning are in Appendix A.3). We employ an idea of span-based prediction, where we first sample a length $L \in [1, \cdots, 5]$, and then randomly select a consecutive span of $L$ tokens as prediction targets within a context of $(KL)$ tokens.
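Span-based target selection can be sketched as follows (hypothetical helper; the exact sampling scheme in the released implementation may differ):

```python
import random

def sample_span(seq_len, K, rng):
    """Sample a target span length L in [1, 5], then place the L-token span
    inside a context window of K*L tokens (clipped to the sequence)."""
    L = rng.randint(1, 5)
    ctx_len = min(K * L, seq_len)
    ctx_start = rng.randint(0, seq_len - ctx_len)
    span_start = rng.randint(ctx_start, ctx_start + ctx_len - L)
    return (ctx_start, ctx_start + ctx_len), (span_start, span_start + L)

ctx, span = sample_span(seq_len=512, K=6, rng=random.Random(0))
# The target span always lies inside its context window.
```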
Table 1: Comparison with state-of-the-art results on the RACE test set.

RACE            Accuracy  Middle  High
GPT [25]        59.0      62.9    57.4
BERT [22]       72.0      76.6    70.1
BERT+OCN [28]   73.5      78.4    71.5
BERT+DCMN [39]  74.1      79.5    71.8
XLNet           81.75     85.45   80.21
3.2 RACE Dataset
The RACE dataset [17] contains nearly 100K questions taken from English exams for Chinese middle and high school students between the ages of 12 and 18, with the answers generated by human experts. It is one of the most difficult reading comprehension datasets, involving challenging reasoning questions. Moreover, the average length of the passages in RACE is longer than 300 tokens, which is significantly longer than in other popular reading comprehension datasets such as SQuAD [26]. As a result, this dataset serves as a challenging benchmark for long text understanding. We use a sequence length of 640 during finetuning. As shown in Table 1, a single-model XLNet outperforms the best ensemble by 7.6 points in accuracy. It is also clear that XLNet substantially outperforms other pretrained models such as BERT and GPT. Since RACE contains relatively long passages, we believe one of the reasons XLNet obtains substantial gains on this dataset is that, besides the AR objective, the integration of the Transformer-XL architecture improves the capability of modeling long text. More analysis on the sequence length is presented in Section 3.7.
Table 2: Results on SQuAD1.1 and SQuAD2.0.

SQuAD1.1     EM     F1     |  SQuAD2.0                        EM     F1
Dev set results without data augmentation
BERT [10]    84.1   90.9   |  BERT [10]                       78.98  81.77
XLNet        88.95  94.52  |  XLNet                           86.12  88.79
Test set results on leaderboard, with data augmentation (as of June 19, 2019)
Human [27]   82.30  91.22  |  BERT+N-Gram+Self-Training [10]  85.15  87.72
ATB          86.94  92.64  |  SG-Net                          85.23  87.93
BERT [10]    87.43  93.16  |  BERT+DAE+AoA                    85.88  88.62
XLNet        89.90  95.08  |  XLNet                           86.35  89.13
3.3 SQuAD Dataset
SQuAD is a large-scale reading comprehension dataset with two tasks. SQuAD1.1 [27] contains questions that always have a corresponding answer in the given passages, while SQuAD2.0 [26] introduces unanswerable questions. To finetune an XLNet on SQuAD2.0, we jointly apply a logistic regression loss for answerability prediction similar to classification tasks and a standard span extraction loss for question answering [10]. Since v1.1 and v2.0 share the same answerable questions in the training set, we simply remove the answerability prediction part from the model finetuned on v2.0 for evaluation on v1.1. As the top leaderboard entries all employ some form of data augmentation, we jointly train an XLNet on SQuAD2.0 and NewsQA [31] for our leaderboard submission. As shown in Table 2, XLNet obtains the state-of-the-art single-model results on the leaderboard, outperforming a series of BERT-based methods. Notably, on v1.1, an XLNet single model outperforms human performance and the best ensemble by 7.6 and 2.5 points in EM respectively. Finally, for direct comparison with BERT that eliminates the effects of additional tricks in leaderboard submissions, we compare XLNet against BERT on the dev set. XLNet substantially outperforms BERT by 3.6 and 7.0 points in F1 for v1.1 and v2.0 respectively.
Table 3: Comparison with state-of-the-art error rates on text classification datasets.

Model               IMDB  Yelp-2  Yelp-5  DBpedia  AG    Amazon-2  Amazon-5
CNN [14]            -     2.90    32.39   0.84     6.57  3.79      36.24
DPCNN [14]          -     2.64    30.58   0.88     6.87  3.32      34.81
Mixed VAT [30, 20]  4.32  -       -       0.70     4.95  -         -
ULMFiT [13]         4.6   2.16    29.98   0.80     5.01  -         -
BERT [35]           4.51  1.89    29.32   0.64     -     2.63      34.17
XLNet               3.79  1.55    27.80   0.62     4.49  2.40      32.26
3.4 Text Classification
Following previous work on text classification [40, 20], we evaluate XLNet on the following benchmarks: IMDB, Yelp-2, Yelp-5, DBpedia, AG, Amazon-2, and Amazon-5. According to Table 3, XLNet achieves new state-of-the-art results on all the considered datasets, reducing the error rate by 16%, 18%, 5%, 9% and 5% on IMDB, Yelp-2, Yelp-5, Amazon-2, and Amazon-5 respectively compared to BERT.
Table 4: Results on GLUE.

Model         MNLI       QNLI  QQP   RTE   SST-2  MRPC  CoLA  STS-B  WNLI
Single-task single models on dev
BERT [2]      86.6/-     92.3  91.3  70.4  93.2   88.0  60.6  90.0   -
XLNet         89.8/-     93.9  91.8  83.8  95.6   89.2  63.6  91.8   -
Single-task single models on test
BERT [10]     86.7/85.9  91.1  89.3  70.1  94.9   89.3  60.5  87.6   65.1
Multi-task ensembles on test (from leaderboard as of June 19, 2019)
Snorkel [29]  87.6/87.2  93.9  89.9  80.9  96.2   91.5  63.8  90.1   65.1
ALICE         88.2/87.9  95.7  90.7  83.5  95.2   92.6  68.6  91.1   80.8
MT-DNN [18]   87.9/87.4  96.0  89.9  86.3  96.5   92.7  68.4  91.1   89.0
XLNet         90.2/89.7  98.6  90.3  86.3  96.8   93.0  67.8  91.6   90.4
3.5 GLUE Dataset
The GLUE dataset [34] is a collection of 9 natural language understanding tasks. The test set labels are removed from the publicly released version, and all practitioners must submit their predictions on the evaluation server to obtain test set results. In Table 4, we present results of multiple settings, including single-task and multi-task, as well as single models and ensembles. In the multi-task setting, we jointly train an XLNet on the four largest datasets (MNLI, SST-2, QNLI, and QQP) and finetune the network on the other datasets. Only single-task training is employed for the four large datasets. For QNLI, we employed a pairwise relevance ranking scheme as in [18] for our test set submission. However, for fair comparison with BERT, our result on the QNLI dev set is based on a standard classification paradigm. For WNLI, we use the loss described in [15]. A multi-task ensemble XLNet achieves the state-of-the-art results on 7 out of 9 tasks on the public leaderboard. On the most widely benchmarked task MNLI, XLNet improves the “matched” and “mismatched” settings by 2.0 and 1.8 points respectively. Note that the leaderboard competitors employ improved techniques over BERT such as distillation, modified multi-task losses, or meta learning, but still underperform XLNet, which does not employ additional tricks besides a standard multi-task learning method. Since the leaderboard is not intended for ablation study or hyperparameter tuning, we only evaluated our best multi-task models on the test set. To obtain a direct comparison with BERT, we run a single-task XLNet on the dev set. As shown in the uppermost rows of Table 4, XLNet consistently outperforms BERT, with improvements of 13.4 points, 3.2 points, 3.0 points, 2.4 points, and 1.8 points on RTE, MNLI, CoLA, SST-2, and STS-B respectively.
3.6 ClueWeb09-B Dataset
Following the setting in previous work [8], we use the ClueWeb09-B dataset to evaluate performance on document ranking. The queries were created by the TREC 2009–2012 Web Tracks based on 50M documents, and the task is to rerank the top 100 documents retrieved by a standard retrieval method. Since document ranking, or ad-hoc retrieval, mainly concerns low-level representations rather than high-level semantics, this dataset serves as a testbed for evaluating the quality of word embeddings. We use a pretrained XLNet to extract word embeddings for the documents and queries without fine-tuning, and employ a kernel pooling network [37] to rank the documents. According to Table 5, XLNet substantially outperforms the other methods, including a BERT model that uses the same training procedure as ours. This illustrates that XLNet learns better low-level word embeddings than BERT. Note that for fair comparison we exclude the result in [36] (19.55 in ERR@20, slightly worse than ours), as it uses additional entity-related data.
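A minimal sketch of kernel pooling in the spirit of [37], operating on frozen token embeddings. The function name, toy dimensions, and kernel settings are illustrative; the real model learns a ranking layer on top of the pooled features.

```python
import numpy as np

def kernel_pooling_features(q_emb, d_emb, mus, sigma=0.1):
    """Kernel-pooling features for one query-document pair (K-NRM style).

    q_emb: (n_q, d) query token embeddings; d_emb: (n_d, d) document
    token embeddings -- here, frozen vectors from a pretrained encoder.
    """
    # Cosine-similarity translation matrix between all query/doc token pairs.
    qn = q_emb / np.linalg.norm(q_emb, axis=1, keepdims=True)
    dn = d_emb / np.linalg.norm(d_emb, axis=1, keepdims=True)
    sim = qn @ dn.T                                            # (n_q, n_d)
    feats = []
    for mu in mus:
        # Each RBF kernel soft-counts similarities near its mean mu.
        k = np.exp(-(sim - mu) ** 2 / (2 * sigma ** 2)).sum(axis=1)  # (n_q,)
        feats.append(np.log1p(k).sum())        # log-pool over query tokens
    return np.array(feats)   # fed to a learned linear layer for the final score

rng = np.random.default_rng(0)
q, d = rng.normal(size=(3, 8)), rng.normal(size=(20, 8))
phi = kernel_pooling_features(q, d, mus=np.linspace(-0.9, 0.9, 10))
print(phi.shape)  # (10,)
```

Documents would then be reranked by the learned linear score over these pooled features, with the embeddings themselves kept fixed.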
3.7 Ablation Study
We perform an ablation study to understand the importance of each design choice based on four datasets with diverse characteristics. Specifically, there are three main aspects we hope to study:

- The effectiveness of the permutation language modeling objective, especially compared to the denoising autoencoding objective used by BERT.
- The importance of using Transformer-XL as the backbone neural architecture and employing segment-level recurrence (i.e., using memory).
- The necessity of some implementation details, including span-based prediction, the bidirectional input pipeline, and next-sentence prediction.
With these purposes in mind, in Table 6, we compare 6 XLNet-Base variants with different implementation details (rows 3–8), the original BERT-Base model (row 1), and an additional Transformer-XL baseline trained with the denoising autoencoding (DAE) objective used in BERT but with the bidirectional input pipeline (row 2). For fair comparison, all models are based on a 12-layer architecture with the same model hyperparameters as BERT-Base and are trained on only Wikipedia and the BooksCorpus. All results reported are the median of 5 runs.
#  Model  RACE  SQuAD2.0 F1  SQuAD2.0 EM  MNLI m/mm  SST-2
1  BERT-Base  64.3  76.30  73.66  84.34/84.65  92.78
2  DAE + Transformer-XL  65.03  79.56  76.80  84.88/84.45  92.60
3  XLNet-Base ($K = 7$)  66.05  81.33  78.46  85.84/85.43  92.66
4  XLNet-Base ($K = 6$)  66.66  80.98  78.18  85.63/85.12  93.35
5  - memory  65.55  80.15  77.27  85.32/85.05  92.78
6  - span-based pred  65.95  80.61  77.91  85.49/85.02  93.12
7  - bidirectional data  66.34  80.65  77.87  85.31/84.99  92.66
8  + next-sent pred  66.76  79.83  76.94  85.32/85.09  92.89
Examining rows 1–4 of Table 6, we see that the two full XLNet-Base models trained with different values of the partial prediction constant $K$ significantly outperform both BERT and the DAE-trained Transformer-XL across tasks, showing the superiority of the permutation language modeling objective. Meanwhile, it is also interesting to see that the DAE-trained Transformer-XL achieves better performance than BERT on tasks with long text such as RACE and SQuAD, suggesting that the excellence of Transformer-XL in language modeling also benefits pretraining. Next, if we remove the memory caching mechanism (row 5), the performance clearly drops, especially for RACE, which involves the longest context among the 4 tasks. In addition, rows 6–7 show that both span-based prediction and the bidirectional input pipeline play important roles in XLNet. Finally, we unexpectedly find that the next-sentence prediction objective proposed in the original BERT does not necessarily lead to an improvement in our setting. Instead, it tends to harm the performance, except on the RACE dataset. Hence, when we train XLNet-Large, we exclude the next-sentence prediction objective.
4 Conclusions
XLNet is a generalized AR pretraining method that uses a permutation language modeling objective to combine the advantages of AR and AE methods. The neural architecture of XLNet is developed to work seamlessly with the AR objective, including integrating Transformer-XL and the careful design of the two-stream attention mechanism. XLNet achieves state-of-the-art results on various tasks with substantial improvements. In the future, we envision applications of XLNet to a wider set of tasks such as vision and reinforcement learning.
Acknowledgments
The authors would like to thank Qizhe Xie and Adams Wei Yu for providing useful feedback on the project, Youlong Cheng and Yanping Huang for providing ideas to improve our TPU implementation, and Chenyan Xiong and Zhuyun Dai for clarifying the setting of the document ranking task. ZY and RS were supported by the Office of Naval Research grant N00014-18-1-2861, the National Science Foundation (NSF) grant IIS-1763562, the Nvidia fellowship, and the Siebel scholarship. ZD and YY were supported in part by NSF under the grant IIS-1546329 and by the DOE Office of Science under the grant ASCR #KJ040201.
References
 [1] Rami Al-Rfou, Dokook Choe, Noah Constant, Mandy Guo, and Llion Jones. Character-level language modeling with deeper self-attention. arXiv preprint arXiv:1808.04444, 2018.
 [2] Anonymous. BAM! Born-again multi-task networks for natural language understanding. Anonymous preprint under review, 2018.
 [3] Alexei Baevski and Michael Auli. Adaptive input representations for neural language modeling. arXiv preprint arXiv:1809.10853, 2018.
 [4] Yoshua Bengio and Samy Bengio. Modeling high-dimensional discrete data with multi-layer neural networks. In Advances in Neural Information Processing Systems, pages 400–406, 2000.
 [5] Jamie Callan, Mark Hoy, Changkuk Yoo, and Le Zhao. ClueWeb09 data set, 2009.
 [6] Common Crawl. Common Crawl. URL: http://commoncrawl.org.
 [7] Andrew M Dai and Quoc V Le. Semi-supervised sequence learning. In Advances in Neural Information Processing Systems, pages 3079–3087, 2015.
 [8] Zhuyun Dai, Chenyan Xiong, Jamie Callan, and Zhiyuan Liu. Convolutional neural networks for soft-matching n-grams in ad-hoc search. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pages 126–134. ACM, 2018.
 [9] Zihang Dai, Zhilin Yang, Yiming Yang, William W Cohen, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. Transformer-XL: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860, 2019.
 [10] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
 [11] Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. MADE: Masked autoencoder for distribution estimation. In International Conference on Machine Learning, pages 881–889, 2015.
 [12] Jiafeng Guo, Yixing Fan, Qingyao Ai, and W Bruce Croft. A deep relevance matching model for ad-hoc retrieval. In Proceedings of the 25th ACM International Conference on Information and Knowledge Management, pages 55–64. ACM, 2016.
 [13] Jeremy Howard and Sebastian Ruder. Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146, 2018.
 [14] Rie Johnson and Tong Zhang. Deep pyramid convolutional neural networks for text categorization. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 562–570, 2017.
 [15] Vid Kocijan, Ana-Maria Cretu, Oana-Maria Camburu, Yordan Yordanov, and Thomas Lukasiewicz. A surprisingly robust trick for the Winograd schema challenge. arXiv preprint arXiv:1905.06290, 2019.
 [16] Taku Kudo and John Richardson. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. arXiv preprint arXiv:1808.06226, 2018.
 [17] Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. RACE: Large-scale reading comprehension dataset from examinations. arXiv preprint arXiv:1704.04683, 2017.
 [18] Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. Multi-task deep neural networks for natural language understanding. arXiv preprint arXiv:1901.11504, 2019.
 [19] Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. Learned in translation: Contextualized word vectors. In Advances in Neural Information Processing Systems, pages 6294–6305, 2017.
 [20] Takeru Miyato, Andrew M Dai, and Ian Goodfellow. Adversarial training methods for semi-supervised text classification. arXiv preprint arXiv:1605.07725, 2016.
 [21] Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016.
 [22] Xiaoman Pan, Kai Sun, Dian Yu, Heng Ji, and Dong Yu. Improving question answering with external knowledge. arXiv preprint arXiv:1902.00993, 2019.
 [23] Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. English Gigaword fifth edition. Technical report, Linguistic Data Consortium, Philadelphia, 2011.
 [24] Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. arXiv preprint arXiv:1802.05365, 2018.
 [25] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. Technical report, OpenAI, 2018.
 [26] Pranav Rajpurkar, Robin Jia, and Percy Liang. Know what you don't know: Unanswerable questions for SQuAD. arXiv preprint arXiv:1806.03822, 2018.
 [27] Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016.
 [28] Qiu Ran, Peng Li, Weiwei Hu, and Jie Zhou. Option comparison network for multiple-choice reading comprehension. arXiv preprint arXiv:1903.03033, 2019.
 [29] Alexander Ratner, Stephen H Bach, Henry Ehrenberg, Jason Fries, Sen Wu, and Christopher Ré. Snorkel: Rapid training data creation with weak supervision. Proceedings of the VLDB Endowment, 11(3):269–282, 2017.
 [30] Devendra Singh Sachan, Manzil Zaheer, and Ruslan Salakhutdinov. Revisiting LSTM networks for semi-supervised text classification via mixed objective function. 2018.
 [31] Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. NewsQA: A machine comprehension dataset. arXiv preprint arXiv:1611.09830, 2016.
 [32] Benigno Uria, Marc-Alexandre Côté, Karol Gregor, Iain Murray, and Hugo Larochelle. Neural autoregressive distribution estimation. The Journal of Machine Learning Research, 17(1):7184–7220, 2016.
 [33] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008, 2017.
 [34] Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of ICLR, 2019.
 [35] Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, and Quoc V. Le. Unsupervised data augmentation. arXiv preprint arXiv:1904.12848, 2019.
 [36] Chenyan Xiong, Jamie Callan, and Tie-Yan Liu. Word-entity duet representations for document ranking. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 763–772. ACM, 2017.
 [37] Chenyan Xiong, Zhuyun Dai, Jamie Callan, Zhiyuan Liu, and Russell Power. End-to-end neural ad-hoc ranking with kernel pooling. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 55–64. ACM, 2017.
 [38] Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, and William W Cohen. Breaking the softmax bottleneck: A high-rank RNN language model. arXiv preprint arXiv:1711.03953, 2017.
 [39] Shuailiang Zhang, Hai Zhao, Yuwei Wu, Zhuosheng Zhang, Xi Zhou, and Xiang Zhou. Dual co-matching network for multi-choice reading comprehension. arXiv preprint arXiv:1901.09381, 2019.
 [40] Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems, pages 649–657, 2015.
 [41] Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE International Conference on Computer Vision, pages 19–27, 2015.
Appendix A Target-Aware Representation via Two-Stream Self-Attention
A.1 A Concrete Example of How Standard LM Parameterization Fails
In this section, we provide a concrete example to show how the standard language model parameterization fails under the permutation objective, as discussed in Section 2.3. Specifically, consider two different permutations $\mathbf{z}^{(1)}$ and $\mathbf{z}^{(2)}$ satisfying the following relationship:
$$\mathbf{z}^{(1)}_{<t} = \mathbf{z}^{(2)}_{<t} = \mathbf{z}_{<t} \quad \text{but} \quad z^{(1)}_t = i \neq j = z^{(2)}_t.$$
Then, substituting the two permutations respectively into the naive parameterization, we have
$$p_\theta(X_i = x \mid \mathbf{x}_{\mathbf{z}_{<t}}) = p_\theta(X_j = x \mid \mathbf{x}_{\mathbf{z}_{<t}}) = \frac{\exp\left(e(x)^\top h_\theta(\mathbf{x}_{\mathbf{z}_{<t}})\right)}{\sum_{x'} \exp\left(e(x')^\top h_\theta(\mathbf{x}_{\mathbf{z}_{<t}})\right)}.$$
Effectively, the two different target positions $i$ and $j$ share exactly the same model prediction. However, the ground-truth distributions of the two positions should certainly be different.
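This failure can be illustrated numerically. The toy vocabulary, dimensions, and the stand-in vectors for the target-aware states below are all hypothetical:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Toy setup: vocabulary of 5 tokens, hidden size 4.
rng = np.random.default_rng(0)
E = rng.normal(size=(5, 4))        # token embeddings e(x)
h_ctx = rng.normal(size=4)         # h_theta(x_{z<t}): depends only on context

# Naive (position-unaware) parameterization: the logits are the same
# regardless of the target position, so predicting position i and
# position j yields the identical distribution.
p_i = softmax(E @ h_ctx)   # "predict position i"
p_j = softmax(E @ h_ctx)   # "predict position j": same context, same dist
assert np.allclose(p_i, p_j)

# Target-aware fix: condition the hidden state on the target position z_t.
g_i = h_ctx + rng.normal(size=4)   # stands in for g_theta(x_{z<t}, z_t = i)
g_j = h_ctx + rng.normal(size=4)   # stands in for g_theta(x_{z<t}, z_t = j)
print(np.allclose(softmax(E @ g_i), softmax(E @ g_j)))  # False
```

The random offsets merely mimic a state that depends on the target position; in XLNet this role is played by the query stream $g$.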
A.2 Two-Stream Attention
Here, we provide the implementation details of the two-stream attention with a Transformer-XL backbone.

Initial representation: for $t = 1, \ldots, T$, $h_t = e(x_t)$ and $g_t = w$.

Cached layer-$m$ content representation (memory) from the previous segment: $\tilde{h}^{(m)}$.

For each Transformer-XL layer $m = 1, \ldots, M$, attention with relative positional encoding and position-wise feed-forward are consecutively employed to update the representations:
$$\hat{h}^{(m)}_{z_t} = \mathrm{LayerNorm}\left(h^{(m-1)}_{z_t} + \mathrm{RelAttn}\left(h^{(m-1)}_{z_t},\ \left[\tilde{h}^{(m-1)}, h^{(m-1)}_{\mathbf{z}_{\le t}}\right]\right)\right), \quad h^{(m)}_{z_t} = \mathrm{LayerNorm}\left(\hat{h}^{(m)}_{z_t} + \mathrm{PosFF}\left(\hat{h}^{(m)}_{z_t}\right)\right),$$
$$\hat{g}^{(m)}_{z_t} = \mathrm{LayerNorm}\left(g^{(m-1)}_{z_t} + \mathrm{RelAttn}\left(g^{(m-1)}_{z_t},\ \left[\tilde{h}^{(m-1)}, h^{(m-1)}_{\mathbf{z}_{< t}}\right]\right)\right), \quad g^{(m)}_{z_t} = \mathrm{LayerNorm}\left(\hat{g}^{(m)}_{z_t} + \mathrm{PosFF}\left(\hat{g}^{(m)}_{z_t}\right)\right).$$

Target-aware prediction distribution:
$$p_\theta(X_{z_t} = x \mid \mathbf{x}_{\mathbf{z}_{<t}}) = \frac{\exp\left(e(x)^\top g^{(M)}_{z_t}\right)}{\sum_{x'} \exp\left(e(x')^\top g^{(M)}_{z_t}\right)}.$$
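The masking logic of the two streams can be sketched in numpy as follows. This is a drastically simplified sketch, assuming a single head and omitting relative positional encoding, memory, LayerNorm, and the feed-forward sublayer; all names are illustrative:

```python
import numpy as np

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def two_stream_layer(h, g, perm, Wq, Wk, Wv):
    """One toy two-stream layer: only the masking behavior is faithful."""
    T = h.shape[0]
    rank = np.empty(T, dtype=int)
    rank[perm] = np.arange(T)  # rank[i]: step at which token i is predicted

    # Content stream attends to x_{z<=t} (the token itself included);
    # query stream attends to x_{z<t} only (its own token is hidden).
    content_mask = rank[None, :] <= rank[:, None]
    query_mask = rank[None, :] < rank[:, None]

    def attend(q_in, mask):
        scores = (q_in @ Wq) @ (h @ Wk).T / np.sqrt(Wq.shape[1])
        out = softmax(np.where(mask, scores, -1e9)) @ (h @ Wv)
        out[~mask.any(axis=1)] = 0.0  # a row with nothing visible contributes 0
        return out

    return h + attend(h, content_mask), g + attend(g, query_mask)

rng = np.random.default_rng(0)
T, d = 4, 8
h, g = rng.normal(size=(T, d)), rng.normal(size=(T, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
h1, g1 = two_stream_layer(h, g, perm=[2, 0, 3, 1], Wq=Wq, Wk=Wk, Wv=Wv)
# Token 2 is first in the factorization order: its query stream sees no
# context, so only the residual survives.
assert np.allclose(g1[2], g[2])
```

Note that both streams read their keys and values from the content representations $h$; only the queries and the masks differ, mirroring the update equations above.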
A.3 Hyperparameters
A.3.1 Pretraining Hyperparameters
Hparam  Value 
Number of layers  24 
Hidden size  1024 
Number of attention heads  16 
Attention head size  64 
FFN inner hidden size  4096 
Dropout  0.1 
Attention dropout  0.1 
Partial prediction $K$  6 
Max sequence length  512 
Memory length  384 
Batch size  2048 
Learning rate  1e-5 
Number of steps  500K 
Warmup steps  20,000 
Learning rate decay  linear 
Adam epsilon  1e-6 
Weight decay  0.01 
The hyperparameters used for pretraining XLNet are shown in Table 7.
A.3.2 Hyperparameters for Fine-tuning
Hparam  RACE  SQuAD  MNLI  Yelp-5 
Dropout  0.1  0.1  0.1  0.1 
Attention dropout  0.1  0.1  0.1  0.1 
Max sequence length  512  512  128  512 
Batch size  32  48  128  128 
Learning rate  2e-5  3e-5  3e-5  2e-5 
Number of steps  12K  8K  10K  10K 
Learning rate decay  linear  linear  linear  linear 
Weight decay  0.00  0.00  0.00  0.00 
Adam epsilon  1e-6  1e-6  1e-6  1e-6 
Layer-wise lr decay  1.0  0.75  1.0  1.0 
The hyperparameters used for fine-tuning XLNet on various tasks are shown in Table 8. "Layer-wise decay" means exponentially decaying the learning rates of individual layers in a top-down manner. For example, suppose the top layer (the $n$-th layer) uses a learning rate $l$ and the layer-wise decay rate is $\alpha$; then the learning rate of layer $m$ is $l \cdot \alpha^{n-m}$.
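As a sanity check, the decay schedule can be computed directly (the function name is illustrative):

```python
def layerwise_lrs(base_lr, alpha, n_layers):
    """Per-layer learning rates with top-down exponential decay:
    the top layer n_layers gets base_lr, and layer m gets
    base_lr * alpha ** (n_layers - m)."""
    return {m: base_lr * alpha ** (n_layers - m)
            for m in range(1, n_layers + 1)}

lrs = layerwise_lrs(3e-5, 0.75, n_layers=12)
assert lrs[12] == 3e-5           # top layer keeps the base rate
assert lrs[11] == 3e-5 * 0.75    # one layer down is decayed once
```

With $\alpha = 1.0$ (as used for RACE, MNLI, and Yelp-5 in Table 8), every layer simply receives the base learning rate.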
A.4 Visualizing Memory and Permutation
In this section, we provide a detailed visualization of the proposed permutation language modeling objective, including the mechanism of reusing memory (aka the recurrence mechanism), how we use attention masks to permute the factorization order, and the difference between the two attention streams. As shown in Figures 3 and 4, given the current position $z_t$, the attention mask is decided by the permutation (or factorization order) $\mathbf{z}$ such that only tokens that occur before $z_t$ in the permutation can be attended to; i.e., positions $z_i$ with $i < t$. Moreover, comparing Figures 3 and 4, we can see how the query stream and the content stream work differently under a specific permutation through attention masks. The main difference is that the query stream cannot do self-attention and does not have access to the token at its own position, while the content stream performs normal self-attention.
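The relation between the two masks can be verified in a few lines; this toy snippet is illustrative, not the released implementation:

```python
import numpy as np

def perm_masks(perm):
    """Build the content-stream and query-stream attention masks implied by
    a factorization order; mask[i, j] is True iff position i may attend to j."""
    T = len(perm)
    rank = np.empty(T, dtype=int)
    rank[perm] = np.arange(T)  # rank[i]: step at which token i is predicted
    content = rank[None, :] <= rank[:, None]   # sees x_{z<=t}, itself included
    query = rank[None, :] < rank[:, None]      # sees x_{z<t}, itself excluded
    return content, query

content, query = perm_masks([2, 0, 3, 1])  # factorization order 3 -> 1 -> 4 -> 2
# The two masks differ exactly on the diagonal: the query stream is denied
# access to the token at its own position, as described above.
assert np.array_equal(content ^ query, np.eye(4, dtype=bool))
print(content.astype(int))
```

Because the masks are derived from the permutation alone, sampling a new factorization order only changes these boolean matrices, never the token order fed to the model.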