FreeLB: Enhanced Adversarial Training for Natural Language Understanding
Adversarial training, which minimizes the maximal risk for label-preserving input perturbations, has proved to be effective for improving the generalization of language models. In this work, we propose a novel adversarial training algorithm, FreeLB, that promotes higher invariance in the embedding space by adding adversarial perturbations to word embeddings and minimizing the resultant adversarial risk inside different regions around input samples. To validate the effectiveness of the proposed approach, we apply it to Transformer-based models for natural language understanding and commonsense reasoning tasks. Experiments on the GLUE benchmark show that, when applied only to the finetuning stage, it is able to improve the overall test score of the BERT-base model from 78.3 to 79.4, and that of the RoBERTa-large model from 88.5 to 88.8. In addition, the proposed approach achieves state-of-the-art single-model test accuracies of 85.44% and 67.75% on ARC-Easy and ARC-Challenge. Experiments on the CommonsenseQA benchmark further demonstrate that FreeLB generalizes and boosts the performance of the RoBERTa-large model on other tasks as well.
Adversarial training is a method for creating robust neural networks. During adversarial training, mini-batches of training samples are contaminated with adversarial perturbations (alterations that are small and yet cause misclassification), and then used to update network parameters until the resulting model learns to resist such attacks. Adversarial training was originally proposed as a means to enhance the security of machine learning systems (Goodfellow et al., 2015), especially for safety-critical systems like self-driving cars (Xiao et al., 2018) and copyright detection (Saadatpanah et al., 2019).
In this paper, we turn our focus away from the security benefits of adversarial training, and instead study its effects on generalization. While adversarial training boosts robustness, it is widely accepted among computer vision researchers that it is at odds with generalization, with classification accuracy on non-corrupted images dropping substantially on CIFAR-10 and ImageNet (Madry et al., 2018; Xie et al., 2019). Surprisingly, the opposite has been observed for language models (Miyato et al., 2017; Cheng et al., 2019): adversarial training can improve both generalization and robustness.
We will show that adversarial training significantly improves the performance of state-of-the-art models for many language understanding tasks. In particular, we propose a novel adversarial training algorithm, called FreeLB (Free Large-Batch), which adds adversarial perturbations to word embeddings and minimizes the resultant adversarial loss around input samples. The method leverages recently proposed “free” training strategies (Shafahi et al., 2019; Zhang et al., 2019) to enrich the training data with diversified adversarial samples under different norm constraints at no greater cost than PGD-based (Projected Gradient Descent) adversarial training (Madry et al., 2018), which enables us to perform such diversified adversarial training on large-scale state-of-the-art models. We observe improved invariance in the embedding space for models trained with FreeLB, which is positively correlated with generalization.
We perform comprehensive experiments to evaluate the performance of a variety of adversarial training algorithms on state-of-the-art language understanding models and tasks. In comparisons with standard PGD (Madry et al., 2018), FreeAT (Shafahi et al., 2019) and YOPO (Zhang et al., 2019), FreeLB stands out as the best for the datasets and models we evaluated. With FreeLB, we achieve state-of-the-art results on several important language understanding benchmarks. On the GLUE benchmark, FreeLB pushes the performance of the BERT-base model from 78.3 to 79.4. The overall score of the RoBERTa-large model on the GLUE benchmark is also lifted from 88.5 to 88.8, achieving the best results on most of its sub-tasks. Experiments also show that FreeLB can boost the performance of RoBERTa-large on question answering tasks, such as the ARC and CommonsenseQA benchmarks. We also provide a comprehensive ablation study and analysis to demonstrate the effectiveness of our training process.
2 Related Work
2.1 Adversarial Training
To improve the robustness of neural networks against adversarial examples, many defense strategies and models have been proposed, among which PGD-based adversarial training (Madry et al., 2018) is widely considered the most effective, since it largely avoids the obfuscated gradient problem (Athalye et al., 2018). It formulates a class of adversarial training algorithms (Kurakin et al., 2017) as solving a minimax problem on the cross-entropy loss, which can be achieved reliably through multiple projected gradient ascent steps followed by an SGD (Stochastic Gradient Descent) step.
Despite being verified by Athalye et al. (2018) to avoid obfuscated gradients, Qin et al. (2019) show that PGD-based adversarial training still leads to highly convolved and non-linear loss surfaces when the perturbation budget ε is small, which could be readily broken under stronger adversaries. Thus, to be effective, the cost of PGD-based adversarial training is much higher than that of conventional training. To mitigate this cost, Shafahi et al. (2019) proposed a “free” adversarial training algorithm that simultaneously updates both model parameters and adversarial perturbations on a single backward pass. Using a similar formulation, Zhang et al. (2019) effectively reduce the total number of full forward and backward propagations needed to obtain adversarial examples by restricting most of the adversarial updates to the first layer.
2.2 Adversarial Examples in Natural Languages
Adversarial examples have been explored primarily in the image domain, and have recently received much attention in the text domain. Previous works on text adversaries have focused on heuristics for creating adversarial examples in the black-box setting, or on specific tasks. Jia and Liang (2017) propose to add distracting sentences to the input document in order to induce misclassification. Zhao et al. (2018) generate text adversaries by projecting the input data to a latent space using GANs, and searching for adversaries close to the original instance. Belinkov and Bisk (2018) manipulate every word in a sentence with synthetic or natural noise in machine translation systems. Iyyer et al. (2018) propose a neural paraphrase model based on back-translated data to produce paraphrases that have different sentence structures. Different from previous works, our goal is not to produce actual adversarial examples, but to reap the benefits of adversarial training for natural language understanding.
We are not the first to observe that robust language models may perform better on clean test data. Miyato et al. (2017) extend adversarial and virtual adversarial training (Miyato et al., 2019) to the text domain to improve the performance on semi-supervised classification tasks. Ebrahimi et al. (2018) propose a character/word replacement for crafting attacks, and show employing adversarial examples in training renders the models more robust. Ribeiro et al. (2018) show that adversarial attacks can be used as a valuable tool for debugging NLP models. Cheng et al. (2019) also find that crafting adversarial examples can help neural machine translation significantly. Notably, these studies have focused on simple models or text generation tasks. Our work explores how to efficiently use the gradients obtained in adversarial training to boost the performance of state-of-the-art transformer-based models.
3 Adversarial Training for Language Understanding
Pre-trained large-scale language models, such as BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019b), ALBERT (Lan et al., 2020) and T5 (Raffel et al., 2019), have proven to be highly effective for downstream tasks. We aim to further improve the generalization of these pre-trained language models on downstream language understanding tasks by enhancing their robustness in the embedding space during finetuning on these tasks. We achieve this goal by creating “virtual” adversarial examples in the embedding space, and then performing parameter updates on these adversarial embeddings. Creating actual adversarial examples for language is difficult; even with state-of-the-art language models as guidance (e.g., (Cheng et al., 2019)), it remains unclear how to construct label-preserving adversarial examples via word/character replacement without human evaluations, because the meaning of each word/character depends on the context (Ribeiro et al., 2018). Since we are only interested in the effects of adversarial training, rather than producing actual adversarial examples, we add norm-bounded adversarial perturbations to the embeddings of the input sentences using a gradient-based method. Note that our embedding-based adversary is strictly stronger than a more conventional text-based adversary, as our adversary can make manipulations on word embeddings that are not possible in the text domain.
For models that incorporate various input representations, including word or subword embeddings, segment embeddings and position embeddings, our adversaries only modify the concatenated word or sub-word embeddings, leaving other components of the sentence representation unchanged.
3.1 PGD for Adversarial Training
Standard adversarial training seeks parameters θ that minimize the maximum risk over any perturbation δ within a norm ball:

min_θ E_{(Z,y)∼D} [ max_{‖δ‖≤ε} L(f_θ(X + δ), y) ],   (1)

where D is the data distribution, y is the label, L is some loss function, and X denotes the word embeddings of the input Z. We use the Frobenius norm to constrain δ. For neural networks, the outer “min” is non-convex, and the inner “max” is non-concave. Nonetheless, Madry et al. (2018) demonstrated that this saddle-point problem can be solved reliably with SGD for the outer minimization and PGD (a standard method for large-scale constrained optimization; see (Combettes and Pesquet, 2011) and (Goldstein et al., 2014)) for the inner maximization. In particular, for the constraint ‖δ‖_F ≤ ε, with an additional assumption that the loss function is locally linear, PGD takes the following step (with step size α) in each iteration:

δ_{t+1} = Π_{‖δ‖_F ≤ ε} ( δ_t + α g(δ_t) / ‖g(δ_t)‖_F ),   (2)

where g(δ_t) = ∇_δ L(f_θ(X + δ_t), y) is the gradient of the loss with respect to δ, and Π_{‖δ‖_F ≤ ε} performs a projection onto the ε-ball. To achieve high-level robustness, multi-step adversarial examples are needed during training, which is computationally expensive: K-step PGD requires K forward-backward passes through the network, while the standard SGD update requires only one. As a result, the adversary-generation step in adversarial training increases run-time by an order of magnitude, a catastrophic amount when training large state-of-the-art language models.
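To make the inner maximization concrete, here is a small numpy sketch (not the paper's code) of normalized projected gradient ascent on a toy quadratic objective with a Frobenius-norm constraint; `project`, `pgd_ascent` and `target` are our own illustrative names.

```python
import numpy as np

# Illustrative sketch of the inner maximization: normalized projected
# gradient ascent on a toy concave objective, with the norm-ball projection
# described in the text.

def project(delta, eps):
    """Project delta back onto the norm ball of radius eps."""
    norm = np.linalg.norm(delta)
    return delta if norm <= eps else delta * (eps / norm)

def pgd_ascent(grad_fn, shape, eps, alpha, steps, rng):
    """Run `steps` projected gradient ascent steps with step size alpha."""
    delta = project(rng.uniform(-eps, eps, size=shape), eps)
    for _ in range(steps):
        g = grad_fn(delta)
        delta = project(delta + alpha * g / (np.linalg.norm(g) + 1e-12), eps)
    return delta

# Toy concave objective -0.5 * ||delta - target||^2: its constrained maximum
# lies on the boundary of the eps-ball, in the direction of `target`.
target = np.array([3.0, 4.0])
grad_fn = lambda d: target - d
rng = np.random.default_rng(0)
delta = pgd_ascent(grad_fn, (2,), eps=1.0, alpha=0.3, steps=100, rng=rng)
# delta converges towards eps * target / ||target|| = (0.6, 0.8)
```

With a normalized gradient step and a projection after every iteration, the iterate stays inside the ball and settles on its boundary, which is exactly the behavior the K-step PGD adversary relies on.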
3.2 Large-Batch Adversarial Training for Free
In the inner ascent steps of PGD, the gradients of the parameters can be obtained with almost no overhead when computing the gradients of the inputs. From this observation, FreeAT (Shafahi et al., 2019) and YOPO (Zhang et al., 2019) have been proposed to accelerate adversarial training. They achieve comparable robustness and generalization to standard PGD-trained models using only the same, or a slightly larger, number of forward-backward passes as natural training (i.e., SGD on clean samples). FreeAT takes one descent step on the parameters together with each of the ascent steps on the perturbation. As a result, FreeAT may suffer from the “stale gradient” problem (Dutta et al., 2018): in every step t, the perturbation δ_t does not necessarily maximize the loss for the model with parameters θ_t, since δ_t is computed from the gradient at the previous parameters θ_{t-1}; and vice versa, θ_t does not necessarily minimize the adversarial risk against the adversary δ_t, since θ_t is updated using the previous perturbation δ_{t-1}. Such a problem may be more significant when the step size is large.
Different from FreeAT, YOPO accumulates the gradient of the parameters from each of the ascent steps, and updates the parameters only once after the inner ascent steps. YOPO also advocates that after each back-propagation, one should treat the gradient at the first hidden layer as a constant and perform several additional updates on the adversary, using the product of this constant and the Jacobian of the first layer of the network, to obtain strong adversaries. However, when the first hidden layer is a linear layer, as in their implementation, such an operation is equivalent to taking a larger step size on the adversary. The analysis backing the extra update steps also assumes a twice continuously differentiable loss, which does not hold for the ReLU-based neural networks they experimented with, so the reasons for the success of such an algorithm remain obscure. We give empirical comparisons between YOPO and our approach in Sec. 4.3.
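The degeneracy just described can be checked numerically: with a linear first layer h = W(x + δ) and a frozen gradient p at that layer, each “cheap” adversary update adds α·Wᵀp, so n such updates coincide with one update using an n-times larger step. The sketch below uses our own toy names and a random linear layer, not the YOPO code.

```python
import numpy as np

# Toy check of the degeneracy discussed above (our reading, not YOPO's code).

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 2))   # hypothetical linear first layer
p = rng.standard_normal(3)        # frozen gradient of the loss wrt W @ (x+d)

alpha, n = 0.1, 4

delta_yopo = np.zeros(2)
for _ in range(n):                # n cheap adversary updates with frozen p
    delta_yopo = delta_yopo + alpha * (W.T @ p)

delta_single = n * alpha * (W.T @ p)   # one step with step size n * alpha
# the two perturbations coincide, which is the degeneracy noted in the text
```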
To obtain better solutions for the inner max and avoid fundamental limitations on the function class, we propose FreeLB, which performs multiple PGD iterations to craft adversarial examples, and simultaneously accumulates the “free” parameter gradients ∇_θ L in each iteration. After that, it updates the model parameters all at once with the accumulated gradients. The overall procedure is shown in Algorithm 1, in which δ_t is an approximation to the local maximum within the intersection of two balls, I_t = B_{X+δ_0}(αt) ∩ B_X(ε). By taking a descent step along the averaged gradients at X + δ_0, …, X + δ_{K-1}, we approximately optimize the following objective:

min_θ E_{(Z,y)∼D} [ (1/K) Σ_{t=0}^{K-1} max_{δ_t ∈ I_t} L(f_θ(X + δ_t), y) ],   (3)
which is equivalent to replacing the original batch X with a K-times larger virtual batch, consisting of samples whose embeddings are X + δ_0, …, X + δ_{K-1}. Compared with PGD-based adversarial training (Eq. 1), which minimizes the maximum risk at a single estimated point in the vicinity of each training sample, FreeLB minimizes the maximum risk at every ascent step at almost no overhead.
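A minimal sketch of this procedure on a toy linear model (our paraphrase of Algorithm 1, not the released code): K ascent steps refine the perturbation while the parameter gradient from every step is accumulated, and a single descent step is taken at the end. All function and variable names here are illustrative.

```python
import numpy as np

def freelb_step(x, y, theta, loss_grad, eps, alpha, K, lr, rng):
    """One FreeLB update; loss_grad(theta, x_adv, y) returns
    (gradient wrt theta, gradient wrt the input)."""
    delta = rng.uniform(-eps, eps, size=x.shape) / np.sqrt(x.size)
    g_theta = np.zeros_like(theta)
    for _ in range(K):
        gt, gd = loss_grad(theta, x + delta, y)
        g_theta += gt / K                        # accumulate "free" gradients
        delta = delta + alpha * gd / (np.linalg.norm(gd) + 1e-12)
        norm = np.linalg.norm(delta)
        if norm > eps:                           # project back onto eps-ball
            delta = delta * eps / norm
    return theta - lr * g_theta                  # single descent step

# Toy linear model with loss 0.5 * (theta . x - y)^2.
def loss_grad(theta, x_adv, y):
    r = theta @ x_adv - y
    return r * x_adv, r * theta

theta, x, y = np.zeros(2), np.array([1.0, 2.0]), 1.0
rng = np.random.default_rng(0)
for _ in range(200):
    theta = freelb_step(x, y, theta, loss_grad, eps=0.1, alpha=0.05, K=3,
                        lr=0.1, rng=rng)
# after training, theta @ x stays close to y even under small perturbations
```

The key point of the sketch is that each of the K gradient computations serves double duty: it drives the ascent on δ and contributes one term of the averaged descent direction, mirroring the "K-times larger virtual batch" view above.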
Intuitively, FreeLB could be a learning method with lower generalization error than PGD. Sokolic et al. (2017) proved that the generalization error of a learning method invariant to a set of T transformations may be up to √T smaller than that of a non-invariant learning method. According to their theory, FreeLB could yield a more significant improvement over natural training than PGD, since FreeLB enforces invariance to adversaries from a set of up to K different norm constraints.
Empirically, FreeLB does lead to higher robustness and invariance than PGD in the embedding space, in the sense that the maximum increase of loss in the vicinity of X for models trained with FreeLB is smaller than that with PGD. See Sec. 4.3 for details. In theory, such improved robustness can lead to better generalization (Xu and Mannor, 2012), which is consistent with our experiments. Qin et al. (2019) also demonstrated that the PGD-based method leads to highly convolved and non-linear loss surfaces in the vicinity of input samples when ε is small, indicating a lack of robustness.
3.3 When Adversarial Training Meets Dropout
Usually, adversarial training is not used together with dropout (Srivastava et al., 2014). However, for some language models like RoBERTa (Liu et al., 2019b), dropout is used during the finetuning stage. In practice, when dropout is turned on, each ascent step of Algorithm 1 optimizes a different network. Specifically, denote the dropout mask as m, with each entry m_i ∼ Bernoulli(p). Similar to our analysis for FreeAT, the ascent step from δ_{t-1} to δ_t is based on the network with effective parameters θ(m_{t-1}), so δ_t is sub-optimal for the network given by θ(m_t). Here θ(m) denotes the effective parameters under dropout mask m.
The more plausible solution is to use the same dropout mask m in every ascent step. When applying dropout to a network, the objective is to minimize the expected loss over the different sub-networks determined by the dropout masks, which is achieved by minimizing a Monte Carlo estimate of the expected loss. In our case, the objective becomes:

min_θ E_m E_{(Z,y)∼D} [ (1/K) Σ_{t=0}^{K-1} max_{δ_t ∈ I_t} L(f_{θ(m)}(X + δ_t), y) ],   (4)

whose 1-sample Monte Carlo estimate can be minimized by running FreeLB with the same dropout mask m in each ascent step. This is similar to applying Variational Dropout to RNNs, as used in Gal and Ghahramani (2016).
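The mask-reuse idea can be sketched as follows; `sample_mask` and `forward` are hypothetical helpers (not from the paper's code), and inverted dropout, which scales kept units by 1/keep_prob, is assumed.

```python
import numpy as np

def sample_mask(shape, keep_prob, rng):
    """One dropout mask, to be held fixed for all K ascent steps of a batch."""
    return (rng.random(shape) < keep_prob) / keep_prob

def forward(h, mask):
    return h * mask               # dropout applied with a frozen mask

rng = np.random.default_rng(0)
h = np.ones(4)
mask = sample_mask(h.shape, keep_prob=0.5, rng=rng)

# Reusing `mask` keeps the effective parameters theta(m) fixed across steps:
outs = [forward(h, mask) for _ in range(3)]

# Resampling a mask per step (the default behaviour) changes the sub-network
# being attacked, which is the mismatch discussed above:
outs_resampled = [forward(h, sample_mask(h.shape, 0.5, rng)) for _ in range(3)]
```

Freezing the mask makes every ascent step see the same sub-network, so the crafted perturbation is an adversary for the network that finally receives the descent step.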
4 Experiments
In this section, we provide a comprehensive analysis of FreeLB through extensive experiments on three Natural Language Understanding benchmarks: GLUE (Wang et al., 2019), ARC (Clark et al., 2018) and CommonsenseQA (Talmor et al., 2019). We also compare the robustness and generalization of FreeLB with other adversarial training algorithms to demonstrate its strength. Additional experimental details are provided in the Appendix.
4.1 Datasets
GLUE Benchmark. The GLUE benchmark is a collection of 9 natural language understanding tasks, namely the Corpus of Linguistic Acceptability (CoLA; Warstadt et al. (2018)), Stanford Sentiment Treebank (SST; Socher et al. (2013)), Microsoft Research Paraphrase Corpus (MRPC; Dolan and Brockett (2005)), Semantic Textual Similarity Benchmark (STS; Agirre et al. (2007)), Quora Question Pairs (QQP; Iyer and Csernai (2017)), Multi-Genre NLI (MNLI; Williams et al. (2018)), Question NLI (QNLI; Rajpurkar et al. (2016)), Recognizing Textual Entailment (RTE; Dagan et al. (2006); Bar Haim et al. (2006); Giampiccolo et al. (2007); Bentivogli et al. (2009)) and Winograd NLI (WNLI; Levesque et al. (2011)). Eight of the tasks are formulated as classification problems; only STS-B is formulated as regression, but FreeLB applies to all of them.
For BERT-base, we use the HuggingFace implementation.
ARC Benchmark. The ARC dataset (Clark et al., 2018) is a collection of multi-choice science questions from grade-school level exams. It is further divided into ARC-Challenge set with 2,590 question answer (QA) pairs and ARC-Easy set with 5,197 QA pairs. Questions in ARC-Challenge are more difficult and cannot be handled by simply using a retrieval and co-occurence based algorithm (Clark et al., 2018). A typical question is:
Which property of a mineral can be determined just by looking at it?
(A) luster [correct] (B) mass (C) weight (D) hardness.
CommonsenseQA Benchmark. The CommonsenseQA dataset (Talmor et al., 2019) consists of 12,102 natural language questions that require human commonsense reasoning ability to answer. A typical question is:
Where can I stand on a river to see water falling without getting wet?
(A) waterfall, (B) bridge [correct], (C) valley, (D) stream, (E) bottom.
Each question has five candidate answers from ConceptNet (Speer et al., 2017). To make the questions more difficult to solve, most answers have the same relation in ConceptNet to the key concept in the question. As shown in the above example, most answers can be connected to “river” by the “AtLocation” relation in ConceptNet. For a fair comparison with the results reported in papers and on the leaderboard, we use the official data split.
4.2 Experimental Results
Table 1: Results on GLUE dev sets (standard deviations in parentheses).

| Model | MNLI | QNLI | QQP | RTE | SST-2 | MRPC | CoLA | STS-B |
|---|---|---|---|---|---|---|---|---|
| ReImp | - | - | - | 85.61 (1.7) | 96.56 (.3) | 90.69 (.5) | 67.57 (1.3) | 92.20 (.2) |
| PGD | 90.53 (.2) | 94.87 (.2) | 92.49 (.07) | 87.41 (.9) | 96.44 (.1) | 90.93 (.2) | 69.67 (1.2) | 92.43 (.7) |
| FreeAT | 90.02 (.2) | 94.66 (.2) | 92.48 (.08) | 86.69 (1.5) | 96.10 (.2) | 90.69 (.4) | 68.80 (1.3) | 92.40 (.3) |
| FreeLB | 90.61 (.1) | 94.98 (.2) | 92.60 (.03) | 88.13 (1.2) | 96.79 (.2) | 91.42 (.7) | 71.12 (.9) | 92.67 (.08) |
GLUE We summarize results on the dev sets of GLUE in Table 1, comparing the proposed FreeLB against other adversarial training algorithms (PGD (Madry et al., 2018) and FreeAT (Shafahi et al., 2019)). We use the same step size and number of steps for PGD, FreeAT and FreeLB. FreeLB is consistently better than the two baselines. Comparisons and detailed discussions about YOPO (Zhang et al., 2019) are provided in Sec. 4.3. We have also submitted our results to the evaluation server, with results provided in Table 2. FreeLB lifts the performance of the BERT-base model from 78.3 to 79.4, and that of the RoBERTa-large model from 88.5 to 88.8 in overall score.
ARC For ARC, a corpus of 14 million related science documents (from the ARC Corpus, Wikipedia and other sources) is provided. For each QA pair, we first use a retrieval model to select the top 10 related documents. Then, the retrieved documents, the question and each answer option are concatenated and scored by the model.
Results are summarized in Table 3. Following Sun et al. (2018), we first finetune the RoBERTa model on the RACE dataset (Lai et al., 2017). The finetuned RoBERTa model achieves 85.70% and 85.24% accuracy on the development and test sets of RACE, respectively. Based on this, we further finetune the model on both the ARC-Easy and ARC-Challenge datasets with the same hyper-parameter searching strategy (for 5 epochs), which achieves 84.13%/64.44% test accuracy on ARC-Easy/ARC-Challenge. By adding FreeLB finetuning, we reach 84.81%/65.36%, a significant boost on the ARC benchmark, demonstrating the effectiveness of FreeLB.
To further improve the results, we apply a multi-task learning (MTL) strategy using additional datasets. We first finetune the model on RACE (Lai et al., 2017), and then finetune on a joint dataset of ARC-Easy, ARC-Challenge, OpenbookQA (Mihaylov et al., 2018) and Regents Living Environments. With FreeLB, this yields the state-of-the-art single-model test accuracies of 85.44% on ARC-Easy and 67.75% on ARC-Challenge reported in the abstract.
CommonsenseQA Similar to the training strategy in Liu et al. (2019b), we construct five inputs for each question by concatenating the question and each answer separately, then encode each input with the representation of the [CLS] token. A final score is calculated by applying the representation of [CLS] to a fully-connected layer. We follow the finetuning setup of the fairseq repository.
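At a shape level, the scoring pipeline just described can be sketched as below; the `encoder` here is a random stand-in for RoBERTa, and all names are ours, not fairseq's.

```python
import numpy as np

def encoder(text, d=8):
    """Stand-in for the encoder: one [CLS]-like vector per input string."""
    rng = np.random.default_rng(abs(hash(text)) % (2 ** 32))
    return rng.standard_normal(d)

def score_candidates(question, answers, w, b):
    """Concatenate the question with each answer, encode, and apply one
    fully-connected layer to get a softmax over the candidates."""
    cls_vecs = np.stack([encoder(question + " " + a) for a in answers])
    logits = cls_vecs @ w + b          # fully-connected layer: one score each
    e = np.exp(logits - logits.max())
    return e / e.sum()                 # softmax over the candidates

rng = np.random.default_rng(1)
w, b = rng.standard_normal(8), 0.0
probs = score_candidates(
    "Where can I stand on a river to see water falling without getting wet?",
    ["waterfall", "bridge", "valley", "stream", "bottom"], w, b)
# probs is a probability distribution over the five candidate answers
```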
Results are summarized in Table 3. We obtained a dev-set accuracy of 77.56% with the RoBERTa-large model. With FreeLB finetuning, we achieved 78.81%, a 1.25% absolute gain. Compared with the result reported in the fairseq repository, which reaches 78.43% accuracy on the dev set, FreeLB still achieves better performance. Our submission to the CommonsenseQA leaderboard achieves 72.2% single-model test-set accuracy, and a 20-model ensemble reaches 73.1%, ranking first among all submissions that do not make use of ConceptNet.
4.3 Ablation Study and Analysis
In this sub-section, we first show the importance of reusing the dropout mask, then conduct a thorough ablation study of FreeLB on the GLUE benchmark to analyze the robustness and generalization strength of different approaches. We observe that, in our case, it is unnecessary to perform the shallow-layer updates on the adversary that YOPO uses, and that FreeLB results in improved robustness and generalization compared with PGD.
Importance of Reusing Mask
Table 4: Ablation results on the RTE, CoLA and MRPC dev sets (standard deviations in parentheses).

| Task | Vanilla | FreeLB (w/o mask reuse) | FreeLB | YOPO-3-2 | YOPO-3-3 |
|---|---|---|---|---|---|
| RTE | 85.61 (1.67) | 87.14 (1.29) | 88.13 (1.21) | 87.05 (1.36) | 87.05 (0.20) |
| CoLA | 67.57 (1.30) | 69.31 (1.16) | 71.12 (0.90) | 70.40 (0.91) | 69.91 (1.16) |
| MRPC | 90.69 (0.54) | 90.93 (0.66) | 91.42 (0.72) | 90.44 (0.62) | 90.69 (0.37) |
Table 4 (columns 2 to 4) compares the results of FreeLB with and without reusing the same dropout mask in each ascent step, as proposed in Sec. 3.3. With reusing, FreeLB can achieve a larger improvement over the naturally trained models. Thus, we enable mask reusing for all experiments involving RoBERTa.
Comparing the Robustness Table 5 provides comparisons of the maximum increment of loss in the vicinity of each sample, defined as:

max_{‖δ‖≤ε} L(f_θ(X + δ), y) − L(f_θ(X), y),
which reflects the robustness and invariance of the model in the embedding space. In practice, we use PGD steps as in Eq. 2 to estimate this maximum increment. We found that, with a small step size and a sufficiently large number of iterations, the PGD iterations converge to almost the same value starting from 100 different random initializations of δ, for the RoBERTa models trained with or without FreeLB. This indicates that PGD reliably finds the (approximate) worst-case perturbation for these models. Therefore, we compute the increment for each sample via a 2000-step PGD.
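The measurement procedure can be mimicked on a toy loss whose maximum increment inside the ε-ball is known in closed form; everything below is an illustrative sketch under that toy assumption, not the paper's evaluation code.

```python
import numpy as np

def loss(x):
    return 0.5 * np.dot(x, x)     # toy loss around a "sample" at the origin

def max_loss_increment(eps, steps=2000, alpha=0.01, restarts=5, seed=0):
    """Estimate max_{||d|| <= eps} loss(d) - loss(0) with long PGD runs
    from several random restarts."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(restarts):
        d = rng.uniform(-eps, eps, size=2)
        if np.linalg.norm(d) > eps:
            d *= eps / np.linalg.norm(d)
        for _ in range(steps):
            g = d                                  # gradient of the toy loss
            d = d + alpha * g / (np.linalg.norm(g) + 1e-12)
            if np.linalg.norm(d) > eps:            # project onto the eps-ball
                d *= eps / np.linalg.norm(d)
        best = max(best, loss(d) - loss(np.zeros(2)))
    return best

inc = max_loss_increment(eps=1.0)
# inc matches the analytic maximum increment 0.5 * eps**2 = 0.5
```

Agreement across random restarts, as in the text, is what justifies trusting a long PGD run as an estimate of the true maximum.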
Samples with small margins exist even for models with perfect accuracy, which could give a false sense of vulnerability of the model.
To rule out the outlier effect and make the increments comparable across different samples, we only consider samples that all evaluated models can correctly classify, and search for an ε for each sample such that the reference model can correctly classify all points within the ε-ball.
Across all three datasets and different reference models, FreeLB has the smallest median increment even when starting from a larger natural loss than vanilla models. This demonstrates that FreeLB is more robust and invariant in most cases. Such results are also consistent with the models’ dev set performance (the performances for Vanilla/PGD/FreeLB models on RTE, CoLA and MRPC are 86.69/87.41/89.21, 69.91/70.84/71.40, 91.67/91.17/91.17, respectively).
Comparing with YOPO The original implementation of YOPO (Zhang et al., 2019) chooses the first convolutional layer of the ResNets as the layer for updating the adversary in the “s-loop”. As a result, each step of the “s-loop” uses exactly the same value to update the adversary, and YOPO-m-n degenerates into FreeLB with an n-times larger step size. To avoid this, we choose the layers up to the output of the first Transformer block as the first layer when implementing YOPO. To make the total amount of update on the adversary equal, we take the hyper-parameters for FreeLB-m and only divide the step size by n for YOPO-m-n. Table 4 shows that FreeLB performs consistently better than YOPO on all three datasets. Additionally, we give results comparing with YOPO-m-n without changing the step size for YOPO in Table 8. The gap between the two approaches seems to shrink, which may be caused by using a larger total step size for the YOPO adversaries. We leave exhaustive hyper-parameter search for both methods as future work.
In this work, we have developed an adversarial training approach, FreeLB, to improve natural language understanding. The proposed approach adds perturbations to continuous word embeddings using a gradient-based method, and minimizes the resultant adversarial risk in an efficient way. FreeLB is able to boost Transformer-based models (BERT and RoBERTa) on several datasets and achieve new state of the art on the GLUE and ARC benchmarks. Empirical study demonstrates that our method results in both higher robustness in the embedding space than natural training and better generalization ability. This observation is also consistent with recent findings in computer vision. However, adversarial training still incurs significant overhead compared with vanilla SGD. How to accelerate this process while improving generalization is an interesting future direction.
This work was done when Chen Zhu interned at Microsoft Dynamics 365 AI Research. Goldstein and Zhu were supported in part by the DARPA GARD, DARPA QED for RML, and AFOSR MURI programs.
Appendix A Additional Experimental Details
a.1 Problem Formulations
For tasks with a ranking loss, like ARC, CommonsenseQA, WNLI and QNLI, we add the perturbation to the concatenation of the embeddings of all question/answer pairs.
Additional tricks are required to achieve high performance on WNLI and QNLI for the GLUE benchmark. We use the same tricks as Liu et al. (2019b). For WNLI, we use the same WSC data provided by Liu et al. (2019b) for training. For testing, Liu et al. (2019b) also provided the test set with span annotations, but its order is different from the GLUE dataset, so we re-order their test set by matching. For QNLI, we follow Liu et al. (2019b) and formulate the problem as a pairwise ranking problem, the same as for CommonsenseQA. We find the matching pairs for both the training and test sets by matching the queries in the dev set. We predict “entailment” if the candidate has the higher score, and “not_entailment” otherwise.
Like other adversarial training methods, FreeLB introduces three additional hyper-parameters: the step size α, the maximum perturbation ε, and the number of ascent steps K.
For all other hyper-parameters such as the learning rate and number of iterations, we either search in the same interval as RoBERTa (on CommonsenseQA, ARC, and WNLI), or use exactly the same setting as RoBERTa (except for MRPC, where we find a different learning rate gives better results).
Appendix B Variance of Maximum Increment of Loss
Table 7 provides the complete results for the increment of loss within the ε-ball, with medians and standard deviations.
Appendix C Additional Results for Ablation Studies
Table 8: Additional comparisons with YOPO on GLUE dev sets (standard deviations in parentheses).

| Task | Vanilla | FreeLB | YOPO-3-2 | YOPO-3-3 |
|---|---|---|---|---|
| STS-B | 92.20 (.2) | 92.67 (.08) | 92.60 (.17) | 92.60 (.20) |
| SST-2 | 96.56 (.3) | 96.79 (.2) | 96.44 (.2) | 96.33 (.1) |
| QNLI | - | 94.98 (.2) | 94.96 (.1) | - |
| QQP | - | 92.60 (.03) | 92.55 (.05) | 92.50 (.02) |
| MNLI | - | 90.61 (.1) | 90.59 (.2) | 90.45 (.2) |
Here we provide additional results for the comparison with YOPO, complementary to Table 4. We will release the complete results for the comparison with YOPO and without variational dropout on each of the GLUE tasks in our next revision. From the current results, there is no need to use the extra shallow-layer updates on the adversary that YOPO advocates, since they consistently deteriorate performance while introducing extra computation.
- Code is available at https://github.com/zhuchen03/FreeLB.
- “Subword embeddings” refers to the embeddings of sub-word encodings such as the popular Byte Pair Encoding (BPE) (Sennrich et al., 2016).
- The cardinality of the set is approximately .
- We thank AristoRoBERTa team for providing retrieved documents and additional Regents Living Environments dataset.
- Equivalent to the [CLS] and [SEP] tokens in BERT.
- https://leaderboard.allenai.org/arc/submissions/public and https://leaderboard.allenai.org/arc_easy/
- For each sample, we start from a value of ε slightly larger than the norm constraint used during training, and then decrease ε linearly until the reference model can correctly classify the perturbed sample after a 2000-step PGD attack. The reference model is either trained with FreeLB or with PGD.
- Proceedings of the fourth international workshop on semantic evaluations (semeval-2007). Association for Computational Linguistics. Cited by: §4.1.
- Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples. In ICML, Cited by: §2.1, §2.1.
- The second PASCAL recognising textual entailment challenge. Cited by: §4.1.
- Synthetic and natural noise both break neural machine translation. In ICLR, Cited by: §2.2.
- The fifth PASCAL recognizing textual entailment challenge. Cited by: §4.1.
- Robust neural machine translation with doubly adversarial inputs. In ACL, Cited by: §1, §2.2, §3.
- Think you have solved question answering? try arc, the ai2 reasoning challenge. arXiv preprint arXiv:1803.05457. Cited by: §4.1, §4.
- Proximal splitting methods in signal processing. In Fixed-point algorithms for inverse problems in science and engineering, Cited by: §3.1.
- The PASCAL recognising textual entailment challenge. In Machine learning challenges. evaluating predictive uncertainty, visual object classification, and recognising tectual entailment, Cited by: §4.1.
- BERT: pre-training of deep bidirectional transformers for language understanding. In NAACL, Cited by: §3, §4.1, Table 2.
- Automatically constructing a corpus of sentential paraphrases. In Proceedings of the International Workshop on Paraphrasing, Cited by: §4.1.
- Slow and stale gradients can win the race: error-runtime trade-offs in distributed sgd. In AISTATS, pp. 803–812. Cited by: §3.2.
- HotFlip: white-box adversarial examples for text classification. In ACL, Cited by: §2.2.
- A theoretically grounded application of dropout in recurrent neural networks. In NeurIPS, Cited by: §3.3.
- The third PASCAL recognizing textual entailment challenge. In Proceedings of the ACL-PASCAL workshop on textual entailment and paraphrasing, Cited by: §4.1.
- A field guide to forward-backward splitting with a fasta implementation. arXiv preprint 1411.3406. Cited by: §3.1.
- Explaining and harnessing adversarial examples. In ICLR, Cited by: §1.
- First quora dataset release: question pairs. External Links: Cited by: §4.1.
- Adversarial example generation with syntactically controlled paraphrase networks. In NAACL, Cited by: §2.2.
- Adversarial examples for evaluating reading comprehension systems. In EMNLP, Cited by: §2.2.
- Adversarial machine learning at scale. In ICLR, Cited by: §2.1.
- Race: large-scale reading comprehension dataset from examinations. arXiv preprint arXiv:1704.04683. Cited by: §4.2, §4.2.
- Albert: a lite bert for self-supervised learning of language representations. ICLR. Cited by: §3.
- The Winograd schema challenge. In AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning, Cited by: §4.1.
- Multi-task deep neural networks for natural language understanding. In ACL, Cited by: Table 2.
- Roberta: a robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Cited by: §A.1, §3.3, §3, §4.1, §4.2, Table 2.
- Towards deep learning models resistant to adversarial attacks. In ICLR, Cited by: §1, §1, §1, §2.1, §3.1, §4.2.
- Can a suit of armor conduct electricity? a new dataset for open book question answering. arXiv preprint arXiv:1809.02789. Cited by: §4.2.
- Adversarial training methods for semi-supervised text classification. In ICLR, Cited by: §1, §2.2, §3.
- Virtual adversarial training: A regularization method for supervised and semi-supervised learning. TPAMI. Cited by: §2.2.
- Adversarial robustness through local linearization. In NeurIPS, Cited by: §2.1, §3.2.
- Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint 1910.10683. Cited by: §3.
- SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP, Cited by: §4.1.
- Semantically equivalent adversarial rules for debugging NLP models. In ACL, Cited by: §2.2, §3.
- Adversarial attacks on copyright detection systems. arXiv preprint 1906.07153. Cited by: §1.
- Neural machine translation of rare words with subword units. In ACL, Cited by: footnote 2.
- Adversarial Training for Free!. In NeurIPS, Cited by: §1, §1, §2.1, §3.2, §4.2.
- Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP, Cited by: §4.1.
- Generalization error of invariant classifiers. In AISTATS, Cited by: §3.2.
- Conceptnet 5.5: an open multilingual graph of general knowledge. In AAAI, Cited by: §4.1.
- Dropout: a simple way to prevent neural networks from overfitting. JMLR. Cited by: §3.3.
- Improving machine reading comprehension with general reading strategies. arXiv preprint arXiv:1810.13441. Cited by: §4.2.
- CommonsenseQA: a question answering challenge targeting commonsense knowledge. In NAACL, Cited by: §4.1, §4.
- Glue: a multi-task benchmark and analysis platform for natural language understanding. In ICLR, Cited by: §4.
- Neural network acceptability judgments. arXiv preprint 1805.12471. Cited by: §4.1.
- A broad-coverage challenge corpus for sentence understanding through inference. In NAACL, Cited by: §4.1.
- Characterizing adversarial examples based on spatial consistency information for semantic segmentation. In ECCV, Cited by: §1.
- Feature denoising for improving adversarial robustness. In CVPR, Cited by: §1.
- Robustness and generalization. Machine learning. Cited by: §3.2.
- XLNet: generalized autoregressive pretraining for language understanding. In NeurIPS, Cited by: Table 2.
- You only propagate once: painless adversarial training using maximal principle. In NeurIPS, Cited by: §1, §1, §2.1, §3.2, §4.2, §4.3.
- Generating natural adversarial examples. In ICLR, Cited by: §2.2.