Frustratingly Easy Natural Question Answering


Lin Pan, Rishav Chakravarti, Anthony Ferritto, Michael Glass,
Alfio Gliozzo, Salim Roukos, Radu Florian, Avirup Sil
IBM Research AI
Yorktown Heights, NY
{panl, rchakravarti, mrglass, gliozzo, roukos, raduf, avi}@us.ibm.com
aferritto@ibm.com
Equal Contribution. Corresponding author.
Abstract

Existing literature on Question Answering (QA) mostly focuses on algorithmic novelty, data augmentation, or increasingly large pre-trained language models like XLNet and RoBERTa. Additionally, many systems on the QA leaderboards have no associated research documentation that would allow their experiments to be replicated. In this paper, we outline such algorithmic components, for example Attention-over-Attention, coupled with data augmentation and ensembling strategies that have been shown to yield state-of-the-art results on benchmark datasets like SQuAD, even achieving super-human performance. Contrary to these prior results, when we evaluate on the recently proposed Natural Questions benchmark dataset, we find that an incredibly simple approach of transfer learning from BERT outperforms the previous state-of-the-art system, which was trained on 4 million more examples than ours. Adding ensembling strategies further improves the result.


Introduction

A relatively new field in the open domain question answering (QA) community is machine reading comprehension (MRC), which aims to read and comprehend a given text and then answer questions based on it. MRC is one of the key steps for natural language understanding (NLU), and it has wide applications in the domain of conversational agents and customer service support. Among the most widely studied MRC benchmark datasets are the Stanford SQuAD v1.1 [14] and v2.0 [13] datasets. Recent MRC research has explored transfer learning from large pre-trained language models like BERT [4] and XLNet [19], which have effectively solved these tasks less than a year after their inception. Hence, we argue that harder benchmark MRC challenges are needed. In addition, the SQuAD datasets suffer from observational bias: they contain questions and answers provided by annotators who read the given passage first and then created a question from that context. Other datasets like NarrativeQA [7] and HotpotQA [20] are similarly flawed.

In this paper, we focus on a new benchmark MRC dataset called Natural Questions (NQ) [8] which does not possess the above bias. The NQ queries were sampled from Google search engine logs according to a variety of handcrafted rules to filter for “natural questions” that are potentially answerable by a Wikipedia article. This is a key differentiator from past datasets where observation bias is a concern due to the questions having been generated after seeing an article or passage containing the answer [8]. Also, systems need to extract a short and a long answer (paragraphs which would contain the short answer). The dataset shows a human upper bound of 76% on the short answer and 87% on the long answer selection tasks. Since the task has been recently introduced and is bias-free, the authors claim that matching human performance on this task will require significant progress in natural language understanding.

The contributions of our paper include:

  • Algorithmic novelties: We add an Attention-over-attention (AoA) [3] layer on top of BERT during model finetuning, which gives us the best single model performance on NQ. We also perform a linear combination of BERT output layers instead of using the last layer only. Additionally, we show empirically that an incredibly simple transfer learning strategy of finetuning the pre-trained BERT model on SQuAD first and then on NQ can nearly match the performance of further adding the complex AoA layer.

  • Smarter Data Augmentation: We show that a simple but effective data augmentation strategy that shuffles the training data helps outperform the previous state-of-the-art (SOTA) system trained on 4 million additional synthetically generated QA examples.

  • Ensembling Strategies: We describe several methods that can combine the output of single MRC systems to further improve performance on a leaderboard. Most previous work that obtains “super-human” performance on a leaderboard fails to outline its ensembling techniques. (Rajpurkar et al. [13] note that human performance is likely somewhat underestimated.)

Related Work

Most recent MRC systems are predominantly BERT-based as is evident on leaderboards for SQuAD v1.1 and v2.0, HotpotQA and Natural Questions. “Super-human” results are achieved by adding additional components on top of BERT or BERT-like models such as XLNet. Among them, XLNet + SG-Net Verifier [22] adds a syntax layer, and BERT + DAE + AoA adds an AoA component as shown on the SQuAD leaderboard.

Another common technique is data augmentation: artificially generating more questions to enhance the training data. Alberti et al. [1], improving over their earlier work [2], combine models of question generation with answer extraction and filter results to ensure round-trip consistency. This technique helped them gather an additional 4 million synthetic training examples, which provided SOTA performance on the NQ task.

Top submissions on the aforementioned leaderboards are usually ensemble results of single systems, yet the underlying ensemble technique is rarely documented. Even the most popular system, BERT + N-Gram Masking + Synthetic Self-Training (ensemble) [4], does not provide their ensemble strategies. In this paper, we describe our recipe for various ensemble strategies together with algorithmic improvements and data augmentation to produce SOTA results on the NQ dataset.

Model Architecture

In this section, we first describe BERT-for-QA, the model our system is built upon, and two algorithmic improvements on top of it. (1) Attention-over-Attention (AoA) [3], as an attention mechanism, combines query-to-document and document-to-query attentions by computing a document-level attention that is weighted by the importance of query words. This technique gives SOTA performance on SQuAD. (2) Inspired by the success of ELMo [12], we use a linear combination of all the BERT encoded layers instead of only the last layer.

BERT-for-QA

Given a token sequence x_1, …, x_n, BERT, a deep Transformer [18] network, outputs a sequence of contextualized token representations h_1, …, h_n ∈ R^h.

BERT-large consists of 24 Transformer layers, each with 16 attention heads and hidden size h = 1024, while BERT-base is smaller (12 layers, each with 12 heads and h = 768). As an important preprocessing step for BERT, special markup tokens [CLS] and [SEP] are added: one to the beginning of the input sequence and the other to the end. In cases like MRC, where there are two separate input sequences, one for the question and the other for the given context, an additional [SEP] is added in between the two to form a single sequence.

BERT-for-QA adds three dense layers followed by a softmax on top of BERT for answer extraction: p^begin = softmax(H w_b), p^end = softmax(H w_e), and p^type = softmax(h_[CLS] W_t), where H ∈ R^{n×h} stacks the token representations h_1, …, h_n, w_b, w_e ∈ R^h, and W_t ∈ R^{h×c} for c answer types. p^begin_i and p^end_i denote the probability of the i-th token in the sequence being the answer beginning and end, respectively. These layers are trained during the finetuning stage. The NQ task requires not only a prediction for short answer beginning/end offsets, but also a (containing) longer span of text that provides the necessary context for that short answer. Following prior work from Alberti et al. [2], we only optimize for short answer spans and then identify the bounds of the containing HTML span as the long answer prediction (the candidate long answer HTML spans are provided as part of the preprocessed data for NQ). We use the hidden state of the [CLS] token to classify the answer type, so p^type_t denotes the probability of the answer type being t. Our loss function is the averaged cross entropy on the two answer pointers and the answer type classifier:

L = −(1/3) (y^begin · log p^begin + y^end · log p^end + y^type · log p^type), where y^begin and y^end are one-hot vectors for the ground-truth beginning and end positions, and y^type for the ground-truth answer type. During decoding, the span from the argmax of p^begin to the argmax of p^end is picked as the predicted short answer.
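The answer-extraction heads and loss above can be sketched in a few lines of NumPy (a minimal sketch; the toy dimensions and the number of answer types are illustrative, not the paper's exact configuration):

```python
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def qa_heads(H, w_b, w_e, W_t):
    """H: (n, h) BERT token representations; H[0] is the [CLS] state.
    Returns begin/end distributions over the n tokens and an answer-type
    distribution (w_b, w_e: (h,); W_t: (h, num_types))."""
    p_begin = softmax(H @ w_b)    # (n,): P(token i is answer beginning)
    p_end = softmax(H @ w_e)      # (n,): P(token i is answer end)
    p_type = softmax(H[0] @ W_t)  # (num_types,): answer-type classifier
    return p_begin, p_end, p_type

def qa_loss(p_begin, p_end, p_type, y_b, y_e, y_t):
    # Averaged cross entropy over the two pointers and the type classifier.
    return -(np.log(p_begin[y_b]) + np.log(p_end[y_e]) + np.log(p_type[y_t])) / 3.0
```

Decoding then simply takes the argmax of `p_begin` and `p_end` as the short-answer span.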

Attention-over-Attention

AoA was originally designed for cloze-style question answering, where a phrase in a short passage of text is removed to form a question. Let Q ∈ R^{m×h} be the hidden states of a sequence of question tokens q_1, …, q_m, and D ∈ R^{n×h} the hidden states of a sequence of context tokens d_1, …, d_n. AoA first computes an n × m attention matrix:

M = D Q^⊤ (1)

where D ∈ R^{n×h}, Q ∈ R^{m×h}, and M ∈ R^{n×m}. In our case, the hidden dimension is h = 1024. Next, it separately performs on M a column-wise softmax, yielding α, and a row-wise softmax, yielding β. Each column of matrix α represents the document-level attention with respect to one query word (query-to-document attention), and each row of matrix β represents the query-level attention with respect to one document word (document-to-query attention). To combine the two attentions, β is first row-wise averaged:

β̄ = (1/n) Σ_{i=1}^{n} β_i (2)

The resulting vector β̄ ∈ R^m can be viewed as the average importance of each query word with respect to the document, and is used to weigh the document-level attention α:

s = α β̄ (3)

The final attention vector s ∈ R^n represents document-level attention weighted by the importance of query words.
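The three equations above can be sketched directly in NumPy (toy dimensions for illustration; in the model h = 1024):

```python
import numpy as np

def softmax(x, axis):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_over_attention(D, Q):
    """D: (n, h) document hidden states; Q: (m, h) question hidden states.
    Returns s: (n,), document attention weighted by query-word importance."""
    M = D @ Q.T                   # Eq. (1): (n, m) pairwise matching scores
    alpha = softmax(M, axis=0)    # column-wise: query-to-document attention
    beta = softmax(M, axis=1)     # row-wise: document-to-query attention
    beta_bar = beta.mean(axis=0)  # Eq. (2): average importance of each query word
    return alpha @ beta_bar       # Eq. (3): attended document-level attention
```

Because each column of α and the averaged β̄ are both probability distributions, the resulting s is itself a valid distribution over document tokens.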

In our work, we use AoA by adding a two-headed AoA layer to the BERT-for-QA model; this layer is trained together with the answer extraction layer during the finetuning stage. Concretely, the combined question and context hidden representation from BERT (we use the last layer of the BERT output) is first separated into Q and D, followed by linear projections of Q and D for each head i ∈ {1, 2}:

Q^(i) = Q W_Q^(i) (4)
D^(i) = D W_D^(i) (5)

where W_Q^(i), W_D^(i) ∈ R^{1024×512}. Therefore, the AoA layer adds about 2.1 million parameters on top of BERT-large, which already has 340 million. Next, we feed Q^(1) and D^(1) into the AoA calculation specified in Equations (1) to (3) to get the attention vector s^(1) for head 1. The same procedure is applied to Q^(2) and D^(2) to get s^(2) for head 2. Lastly, s^(1) and s^(2) are combined with the answer-begin and answer-end scores respectively via two weighted sum operations for answer extraction.

BERT Layer Combination

So far, we have described using the last layer of the BERT output as input to the downstream layers. We also experiment with combining all the BERT output layers into one representation. Following Peters et al. [12], we create a trainable vector w ∈ R^L, where L is the number of BERT layers, and apply softmax over it, yielding weights a. The output layers H^(1), …, H^(L) are linearly combined as follows:

H̄ = Σ_{l=1}^{L} a_l H^(l). w is jointly trained with the parameters in BERT-for-QA, and H̄ is then used as input to the final answer extraction layer.
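The layer combination amounts to a softmax-weighted sum over the stacked per-layer outputs, as in this small sketch:

```python
import numpy as np

def combine_layers(layers, w):
    """layers: (L, n, h) stacked per-layer BERT outputs; w: (L,) trainable
    logits. Softmax over w yields mixing weights a; returns the (n, h)
    weighted combination fed to the answer extraction layer."""
    a = np.exp(w - w.max())
    a = a / a.sum()                         # softmax over the L layers
    return np.tensordot(a, layers, axes=1)  # sum_l a_l * H^(l)
```

With all logits equal, this reduces to a plain average of the layers; training w lets the model learn which layers matter for the task.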

Model Training

Our models follow the now common approach of starting with the pre-trained BERT language model and then finetuning on the NQ dataset with an additional QA sequence prediction layer, as described in the previous section. As mentioned in [2], we also find it helpful to run additional task-specific pre-training of the underlying BERT language model before the finetuning step on the target NQ dataset. The following two subsections discuss the different pre-training and data augmentation strategies employed to improve the overall performance of the models. Note that unless we specify otherwise, we are referring to the “large” version of BERT.

Pre-Training

We explore three types of BERT parameter pre-trainings prior to finetuning on the NQ corpus:

  1. BERT with Whole Word Masking (WWM) is one of the default BERT pre-trained models that has the same model structure as the original BERT model, but masks whole words instead of word pieces for the Masked Language Model pre-training task.

  2. BERT with Span Selection Pre-Training (SSPT) uses an unsupervised auxiliary QA-specific task proposed by Glass et al. (2019) to further train the BERT model. The task generates synthetic cloze-style queries by masking out terms (named entities or noun phrases) in a sentence. Answer-bearing passages are then extracted from the Wikipedia corpus using BM25-based information retrieval [15]. This allows us to pre-train all layers of the BERT model, including the answer extraction weights, by training the model to extract the answer term from the selected passage.

  3. BERT-for-QA with SQuAD 2.0 finetunes BERT on the supervised task of SQuAD 2.0 as initial pre-training. The intuition is that this allows the model to become more domain and task aware than vanilla BERT. Alberti et al. [2] similarly leverage SQuAD 1.1 to pre-train the network for NQ. However, we found better results using SQuAD 2.0, likely because of SQuAD 2.0's incorporation of unanswerable questions, which also exist in NQ.

In our future work, we intend to explore the effect of these pre-trainings on additional language models including RoBERTa [9] and XLNet.

Data Augmentation

As noted in a number of works such as [21] and [5], model performance in the MRC literature has benefited from finetuning the model with labeled examples, either human annotated or synthetically generated, from similar tasks (often with the final set of mini-batch updates relying exclusively on data from the target domain, as described in the transfer learning tutorial by Ruder et al. (2019)). In fact, Alberti et al. [1] achieve prior SOTA results on the NQ benchmark by adding 4 million synthetically generated QA examples. In this paper, we similarly try to introduce both synthetically generated and human labeled data from other related MRC tasks during NQ training.

Synthetic Data: Sentence Order Shuffling (SOS)

The SOS strategy shuffles the ordering of sentences in the paragraphs containing short answer annotations from the NQ training set. The strategy was attempted based on the observation that preliminary BERT-for-QA models showed a bias towards identifying candidate short answer spans from earlier in the paragraph rather than later (which may be a feature of how Wikipedia articles are written and the types of answerable questions that appear in the NQ dataset). This is similar in spirit to the types of perturbations introduced by Zhan et al. (2019) for SQuAD 2.0 based on observed biases in the SQuAD dataset. Note that this strategy is much simpler than the genuine text generation strategy employed by Alberti et al. [1] to produce the previous SOTA results for NQ, which we intend to explore further in future work.
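The core of SOS can be sketched as follows (a toy simplification: the real NQ preprocessing operates on tokenized HTML spans and must also remap the short-answer offsets after shuffling, and the `'. '` sentence splitting here is only illustrative):

```python
import random

def sentence_order_shuffle(paragraph, seed=0):
    """Shuffle the order of sentences in an answer-bearing paragraph to
    counteract the model's bias towards answers early in the paragraph."""
    sents = [s for s in paragraph.split('. ') if s]
    rng = random.Random(seed)   # seeded for reproducible augmentation
    rng.shuffle(sents)
    return '. '.join(sents)
```

The augmented paragraph contains exactly the original sentences, only in a different order, so the short answer is still present somewhere in the text.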

Data from other MRC Tasks

We attempt to leverage human annotated data from three different machine reading comprehension (MRC) datasets for data augmentation:

  1. SQuAD 2.0 - ~130,000 crowd sourced question and answer training pairs derived from Wikipedia paragraphs.

  2. NewsQA [17] - ~100,000 crowd sourced question and answer training pairs derived from news articles.

  3. TriviaQA [6] - ~78,000 questions and answers authored by trivia enthusiasts which were subsequently associated with Wikipedia passages (potentially) containing the answer.

Augmentation Data Sampling

Our simple BERT-for-QA model takes about 20 hours to train a single epoch on the roughly 300,000 NQ training examples using a system with 2 Nvidia® Tesla® P100 GPUs. Introducing augmentation data, therefore, can (1) increase training time dramatically and (2) begin to overshadow the examples from the target NQ dataset. So we try two sampling strategies for choosing human annotated MRC examples from past datasets: (1) random and (2) based on question-answer similarity to the NQ dataset.

For similarity-based sampling, we follow a strategy similar to prior work (arXiv:1809.06963). Specifically, we train a BERT-for-Sequence-Classification model using the Huggingface PyTorch implementation of BERT (https://github.com/huggingface/pytorch-transformers). The model accepts question tokens (discarding question marks, since those do not appear in NQ) as the first text segment and short answer tokens (padded or truncated to 50 tokens to limit the maximum sequence length) as the second text segment. The model is trained with cross entropy loss to predict the source dataset for the question-answer pair, using the development sets from the three augmentation candidate datasets as well as the target NQ development set.

Once trained, the predicted likelihood of an example being from the NQ dataset is calculated for all question-answer pairs from the three augmentation candidate training datasets and used to order the examples by similarity for the purposes of sampling (the BERT-for-Sequence-Classification model achieves ~90% accuracy at detecting the dataset source for a given query-answer pair). As would be expected, the most “similar” question-answer pairs were from SQuAD 2.0 (~80% of the sampled data came from SQuAD 2.0), since that task is well aligned with the NQ task, while TriviaQA question-answer pairs tended to be least “similar” (only ~9.5% of the sampled data came from TriviaQA).
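The sampling step itself is a simple ranking by the classifier's NQ-likelihood, as in this sketch (`nq_likelihood` stands in for the trained BERT-for-Sequence-Classification model, which is not reproduced here):

```python
def sample_by_nq_similarity(examples, nq_likelihood, k):
    """examples: (question, answer) pairs from an augmentation candidate
    dataset; nq_likelihood: callable returning the classifier's predicted
    probability that a pair comes from NQ. Returns the k most NQ-like pairs."""
    return sorted(examples, key=nq_likelihood, reverse=True)[:k]
```

Random sampling, the baseline strategy, would simply replace the sort with a seeded shuffle.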

Experiments

Dataset

The NQ dataset provides 307,373 queries for training, 7,830 queries for development, and 7,842 queries for testing (with the test set only being accessible through a public leaderboard submission).

For each question, crowd-sourced annotators provide start and end offsets for short answer spans within the Wikipedia article, if available, as well as long answer spans (generally the most immediate HTML paragraph, list, or table span containing the short answer span), if available [8]. (Instead of short answer spans, annotators have marked ~1% of the questions with a simple Yes/No; we leave detecting and generating answers for these types of queries as future work.)

Similar to other MRC datasets such as SQuAD 2.0, the NQ dataset forces models to make an attempt at “knowing what they don't know” by requiring a confidence score with each prediction. The evaluation script, provided by Google at https://github.com/google-research-datasets/natural-questions, then calculates the optimal threshold at which the system will “choose” to provide an answer. The resulting F1 scores for Short Answer (SA) and Long Answer (LA) predictions are used as our headline metrics.
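The threshold sweep can be illustrated with a simplified stand-in for the official evaluation script (this sketch ignores the script's handling of multi-way annotations and null predictions, and treats every question as having at most one system answer):

```python
def best_threshold_f1(preds, n_gold):
    """preds: (score, is_correct) for each question the system answered;
    n_gold: number of questions with a non-null gold answer. Answers with
    score >= t are 'attempted'; returns the best (f1, threshold)."""
    best_f1, best_t = 0.0, None
    for t in sorted({s for s, _ in preds}):     # each score is a candidate threshold
        attempted = [c for s, c in preds if s >= t]
        tp = sum(attempted)
        if tp == 0:
            continue
        p, r = tp / len(attempted), tp / n_gold
        f1 = 2 * p * r / (p + r)
        if f1 > best_f1:
            best_f1, best_t = f1, t
    return best_f1, best_t
```

Raising the threshold trades recall for precision; the script reports the F1 at the optimum of that trade-off.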

The “partial un-answerability” and “natural generation” aspects of this dataset along with the recency of the task’s publication make it an attractive dataset for evaluating model architecture and training choices (with lots of headroom between human performance and the best performing automated system).

The training itself is carried out using the Huggingface PyTorch implementation of BERT, which supports starting from either BERT-base or BERT-large.

Hyperparameter Optimization

The primary hyperparameter settings for the models discussed in the Model Architecture section are derived from [2] with the exception of the following:

  1. Stride - Following the implementation of the BERT-for-QA model in [4], we accommodate BERT's pre-trained input size constraint of 512 tokens by splitting larger sequences into multiple spans over the Wikipedia article text using a sliding window. We experiment with multiple stride lengths to control for both experiment latency (shorter strides result in a larger number of spans per article) as well as F1 performance.

  2. Negative Instance Sub-Sampling - Another consequence of splitting each Wikipedia article into multiple spans is that most spans of the article do not contain the correct short answer (only ~65% of the questions are answerable by a short span and, of these, ~90% contain a single correct answer span in the article, with an average span length of only ~4 words). As a result, there is a severe imbalance in the number of positive to negative (i.e. no answer) spans of text. The authors of [2] address the imbalance during training by sub-sampling negative instances at a rate of 2%.

    We emulate this sub-sampling behavior when generating example spans for answerable questions. However, based on the observation that our preliminary BERT models tended to be overconfident for unanswerable questions, we vary the sampling rate between answerable and unanswerable questions.

  3. Batch Size & Learning Rate - These parameters were tuned for each experiment using the approach outlined in [16] where we evaluate a number of batch sizes and learning rates on a randomly selected 20% subset of the NQ training and development data. During experimentation, we did find that slight changes in learning rate can have a couple of points impact on the final F1 scores. Further work is needed to improve robustness of learning rate selection.
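The sliding-window splitting from item 1 can be sketched as follows (the exact bookkeeping for the question segment and special tokens in the real preprocessing may differ; the reservation of three positions for [CLS] and two [SEP]s is our assumption):

```python
def make_spans(tokens, max_len=512, stride=192, query_len=20):
    """Split an article's tokens into overlapping windows so that each,
    together with the question and the [CLS]/[SEP] markers, fits BERT's
    512-token input. The window advances by `stride` tokens each step,
    so a larger stride yields fewer spans per article."""
    window = max_len - query_len - 3  # room for [CLS] and two [SEP]s
    spans, start = [], 0
    while True:
        spans.append(tokens[start:start + window])
        if start + window >= len(tokens):
            break
        start += stride
    return spans
```

With stride 192 instead of 128, each article produces noticeably fewer spans, which is what reduces both training time and the pool of negative spans.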

Ensembling

In addition to optimizing for single model performance, in this section we outline a number of strategies that we investigated for ensembling models, as is common for top-ranking leaderboard submissions in MRC (the top-ranking submissions for SQuAD 2.0, TriviaQA, and HotpotQA are all ensemble models as of this paper's writing). In order to formally compare approaches, we partition the NQ dev set into “dev-train” and “dev-test” by taking the first three dev files for the “train” set and using the last two for the “test” set (the original dev set for NQ is distributed in 5 files). This yields “train” and “test” sets of 4,653 and 3,177 examples (query-article pairs) respectively.

For each ensembling strategy considered, we search for the best k-model ensemble over the “train” set and then evaluate on the “test” set. For these experiments we use k = 4, as this is the number of models that we can decode in 24 hours on an Nvidia® Tesla® P100 GPU, which is the limit for the NQ leaderboard.

We examine two types of ensembling experiments: (i) ensembling the same model trained with different seeds and (ii) ensembling different model architectures and (pre–)training data. Ensembling the same model trained on different seeds attempts to smooth the variance to produce a stronger result. On the other hand ensembling different models attempts to find models that may not be the strongest individually but harmonize well to produce strong results.

To generate the ensembled predictions for an example, we combine the top-20 candidate long and short answers from each system in the ensemble (we empirically find that considering 20 candidates is better than considering fewer, e.g. 5 or 10). To combine systems we take the arithmetic mean of the scores for each long and short span predicted by at least one system (we experimented with other approaches such as the median, geometric mean, and harmonic mean; these are omitted here as they resulted in much lower scores than the arithmetic mean). For spans which are only predicted by a subset of models, a score of zero is imputed for the remaining models. The predicted long/short span is then the span with the greatest arithmetic mean.
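The mean-with-zero-imputation combination can be sketched as:

```python
from collections import defaultdict

def ensemble_mean(system_preds):
    """system_preds: one dict per system mapping candidate span -> score
    (its top-20 candidates). Spans missing from a system's list are imputed
    a score of zero. Returns (best_span, mean_scores)."""
    n = len(system_preds)
    totals = defaultdict(float)
    for preds in system_preds:
        for span, score in preds.items():
            totals[span] += score       # absent systems contribute 0
    means = {span: total / n for span, total in totals.items()}
    return max(means, key=means.get), means
```

Dividing by the total number of systems (rather than the number that predicted the span) is what implements the zero imputation: a span backed by only one system is penalized accordingly.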

Seed experiments

We investigate ensembling the best single model, selected as the model with greatest sum of short and long answer F1 scores, trained with unique seeds.

Multiple Model Ensembling Experiments

In our investigation of ensembling multiple models, we use greedy and exhaustive search strategies for selecting models from a pool of candidates consisting of the various configurations described in the Model Training and Model Architecture sections. The candidate pool also contains multiple instances of the same model training and architecture configuration, but with different learning rates (as mentioned in the previous section, we found that slight changes in learning rate can affect the final performance by a couple of F1 points):

Exhaustive Search During exhaustive search, we consider all ensembles of k candidates from our group of n models. After searching all possible ensembles we return two ensembles: (i) the ensemble with the highest long answer F1 score and (ii) the ensemble with the highest short answer F1 score. Given the combinatorial complexity, we limit the search to the top 20 best performing models. We select the top models using the same approach as in our seed experiments (i.e. the ones with the greatest sum of short and long answer F1 scores).

Greedy Search For the greedy approach we consider all 41 BERT models that we had trained during experimentation and greedily build an ensemble of size k from this model set, optimizing for either short or long answer performance. We refer to the resulting ensembles as the short-answer ensemble and the long-answer ensemble, respectively.

We construct the short-answer ensemble by greedily building model ensembles optimizing for short answer F1. Since adding a model can decrease short answer performance, we keep only the first greedily added models that give the highest short answer F1. The same is done for the long-answer ensemble when optimizing for long answers.

To build the long answer ensemble (when optimizing for short answer performance), we check which subset of the short-answer ensemble results in the best long answer performance. More formally, we pick the subset S of the short-answer ensemble that maximizes F1_LA(S), where F1_LA(S) is the long answer F1 for the ensemble created with the models in S. A corresponding approach is used when optimizing for long answers.
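The greedy construction can be sketched as follows (`f1` stands in for decoding the candidate ensemble and scoring it on dev-train, which is the expensive step in practice):

```python
def greedy_ensemble(models, f1, k):
    """models: candidate pool; f1: callable scoring a list of models.
    Greedily adds the model that most improves f1, then returns the best
    prefix of at most k models (since a later addition can hurt)."""
    chosen, pool = [], list(models)
    best_prefix, best_score = [], float('-inf')
    for _ in range(min(k, len(pool))):
        nxt = max(pool, key=lambda m: f1(chosen + [m]))  # best next addition
        chosen.append(nxt)
        pool.remove(nxt)
        score = f1(chosen)
        if score > best_score:                            # keep best prefix
            best_score, best_prefix = score, list(chosen)
    return best_prefix, best_score
```

Each greedy step evaluates at most |pool| ensembles, versus the combinatorial number of subsets an exhaustive search must score.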

Finally, we join the predictions for short and long answers together by taking the short answer and long answer predictions from our short-answer and long-answer ensembles respectively. If a null long answer is predicted for an example, we also predict a null short answer regardless of what the short-answer ensemble predicted, as there are no short answers for examples which do not have a long answer in NQ [8].

Duplicate Answer Span Aggregation A consequence of splitting large paragraphs into multiple overlapping spans is that, often, a single system will generate identical answer spans multiple times in its top-20 predictions for a single example. In order to produce a unique prediction score for each answer span from each system, we experiment with the following aggregation strategies on the vector of scores for a given answer span.

  • Max

  • Reciprocal Rank Sum

  • Exponential Sum, for some constant c

  • Noisy-Or

For the last three strategies (reciprocal rank sum, exponential sum, and noisy-or), using un-normalized scores causes dramatic deterioration, so we additionally experiment with score normalization using a logistic regression model trained to predict top-1 precision from the top score on the “dev-train” examples (we experimented with additional input features, such as query length and mean score across the top 20, but they did not improve over simple logistic regression). We use the scikit-learn [11] implementation of logistic regression (with stratified 5-fold cross-validation to select the L2 regularization strength).
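The four aggregation strategies can be sketched as follows. The exact forms of the rank-weighted sums and the constant `c` are our illustrative reading, not formulas stated in the source, and noisy-or assumes scores already normalized to probabilities in [0, 1] (which is what the logistic regression step provides):

```python
def aggregate(span_scores, strategy, c=2.0):
    """Collapse duplicate predictions of one answer span into a single score.
    span_scores: list of (rank, score) pairs for that span, rank starting at 1."""
    if strategy == 'max':
        return max(s for _, s in span_scores)
    if strategy == 'reciprocal_rank_sum':
        return sum(s / r for r, s in span_scores)          # down-weight low ranks
    if strategy == 'exp_sum':
        return sum(s * c ** -r for r, s in span_scores)    # exponential rank decay
    if strategy == 'noisy_or':
        p_none = 1.0
        for _, s in span_scores:
            p_none *= 1.0 - s          # P(no occurrence is a true answer)
        return 1.0 - p_none
    raise ValueError(strategy)
```

Noisy-or rewards a span that appears many times with moderate confidence, while max only cares about the single strongest occurrence.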

Results

Figure 1: Effect of stride length (in tokens) on the NQ Short Answer Dev Set F1 Performance

Stride

Rather than using a stride length of 128 tokens as was done by [4] and [2], we find that increasing the stride to 192 improves the final F1 score while also reducing the number of spans and, thus, the training time. See figure 1 for experimental results showing a ~0.9% gain from increasing the stride length to 192 on some preliminary BERT-for-QA models.

Further increases seem to deteriorate the performance which may be a function of the size of the relevant context in Wikipedia articles, though additional work is required to better explore context size selection approaches given the document text.

Negative Instance Sub-Sampling

As per table 2, performance initially improves as we sample negative instances at slightly higher rates than the 2% level used in [2], but it eventually begins to deteriorate when the sampling rate is increased too much. Performance can be improved further by sampling at a slightly lower rate of 1% for answerable questions and at a higher rate of 4% for un-answerable questions. Overall, this change provides a boost of ~0.8% in SA F1 over the setting used in [2] on preliminary BERT-for-QA models.

Pre-Training

As per table 1, pre-training on SQuAD 2.0 from the WWM model provides the best single BERT-for-QA model on the target NQ dataset. So we apply this pre-training strategy to the additional model architectures discussed earlier: AoA and Layer Combo.

Model Architecture

Given our best pre-training strategy of the WWM model on SQuAD 2.0, we show in table 1 that adding the AoA layer during the finetuning stage on our target dataset of NQ yields the best single model performance. Linearly combining the BERT output layers shows a slight improvement over BERT-for-QA for SA but a drop of the same magnitude for LA.

Data Augmentation

As seen in table 1, a naive strategy of simply shuffling the examples from the aforementioned strategies into the first 80% of mini-batches during the fine-tuning phase did not provide significant improvements in single model performance over BERT. This may indicate that the NQ dataset is sufficiently large so as to not require additional examples. Instead, pre-training the base model on a similar task like SQuAD 2.0 on top of the WWM BERT model seems to be the best strategy for maximizing single model performance, and it outperforms the previous SOTA: a BERT model trained with 4 million additional synthetic question-answer pairs. Another interesting result is that even the simpler (sentence shuffling) and less data-intensive (307,373 examples) augmentation strategy (BERT w/ SOS) outperforms the previous SOTA model trained with 4 million synthetic question-answer pairs.

Model                                            SA F1   LA F1
Prior Work
DecAtt + Doc Reader [10]                         31.4    54.8
BERT w/ SQuAD 1.1 PT [2]                         52.7    64.7
BERT w/ 4M Synthetic Data Augmentation [1]       55.1    65.9
This Work (Pre-Training)
BERT (WWM)                                       55.35   66.04
BERT (SSPT)                                      54.83   66.75
BERT + SQuAD 2 PT                                56.95   67.28
BERT + SQuAD 2 PT + Layer Combo                  57.15   67.08
BERT + SQuAD 2 PT + AoA                          57.22   68.24
This Work (Data Augmentation)
BERT w/ SOS                                      55.81   66.67
BERT w/ ~21K Random Examples from MRC Tasks      54.05   66.23
BERT w/ ~21K Similar Examples from MRC Tasks     55.18   66.34
BERT w/ ~100K Similar Examples from MRC Tasks    54.68   65.82
Table 1: Short & long answer F1 performance of BERT-for-QA models on NQ dev. We abbreviate pre-training with PT.
Neg Sampling Rate   Neg Sampling Rate     SA F1
(Answerable)        (Un-Answerable)
1%                  1%                    45.22
2%                  2%                    46.20
4%                  4%                    46.45
5%                  5%                    45.94
1%                  4%                    47.02
Table 2: Performance on NQ dev using a preliminary BERT-for-QA model with varying sub-sampling rates

Ensembling

Seed Experiments

Table 3 shows there is a benefit to ensembling multiple versions of the same model trained with different random seeds at training time. Specifically, there is a gain of roughly 2.5% in both SA and LA F1 by ensembling four models.

Multiple Model Ensembling Experiments

Ensemble                                    SA F1   LA F1
Best Single Model                           56.14   67.10
Ensemble of Best Model Trained              58.73   69.61
  with Random Seeds
Exhaustive Search (Short Answer)            59.64   69.98
Exhaustive Search (Long Answer)             59.64   70.49
Greedy (Short Answer)                       59.07   69.81
Greedy (Long Answer)                        59.71   70.84
Table 3: Ensemble performance on NQ dev-test

As shown in table 3, we find that ensembling a diverse set of models can provide an additional 1% boost in SA F1 and a 1.2% boost in LA F1 over simply ensembling the same model configuration with different random seeds during training.

Specifically, performing a greedy search and optimizing for long answer performance appears to generalize best to the dev-test set. We hypothesize that the reason for the superior generalization of the greedy approach over exhaustive search is that exhaustive search “overfits” to the examples in dev-train. Another potential cause is that greedy search can consider more candidate models due to its decreased computational complexity.

Similarly, we hypothesize that optimizing for long answer F1 generalizes better for both short and long answers because of the strict definition of correctness in Natural Questions, which requires exact span matching [8].

In our final search over all ensembles using the greedy (long answer) strategy, the algorithm selects an ensemble consisting of the following models: (1) BERT + SQuAD 2 PT + AoA, (2) BERT + SQuAD 2 PT, (3) BERT + SQuAD 2 PT, and (4) BERT. Thus only one of the chosen configurations matches the single best performing model. The remaining models, though weaker individually, provide a boost over ensembling multiple random seed variations of the best single configuration.

Aggregation Strategy    SA F1    LA F1
Max                     0.5971   0.7084
Reciprocal Rank Sum     0.5728   0.7066
Exponential Sum         0.5826   0.7040
Noisy-Or                0.5730   0.7150
Table 4: Performance on NQ dev-test for varying aggregation strategies for duplicate answer spans (using greedy long answer search).

Duplicate Answer Span Aggregation Table 4 shows further experimentation with the greedy long answer ensembling strategy where we vary the aggregation strategies for duplicate answer span predictions. We find that using max aggregation results in the best short answer F1 whereas using normalized noisy-or aggregation results in the best long answer F1. Therefore, for our final submission, we use a combination strategy of producing short answer predictions using a greedy long answer search with max score for duplicate spans and long answer predictions using a greedy long answer search with noisy-or scores for duplicate spans.
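The two aggregation strategies used in the final submission can be sketched as below. The `(start, end) -> scores` input format is an assumption for illustration, and we show plain (un-normalized) noisy-or for simplicity.

```python
import math

def aggregate_duplicate_spans(span_scores, strategy="max"):
    """Combine the scores that different ensemble members assign to the
    same answer span.

    `span_scores` maps a (start, end) span to the list of scores in
    [0, 1] that the ensemble members produced for it.
    """
    combined = {}
    for span, scores in span_scores.items():
        if strategy == "max":
            combined[span] = max(scores)
        elif strategy == "noisy_or":
            # Probability that at least one member is correct, treating
            # member scores as independent probabilities.
            combined[span] = 1.0 - math.prod(1.0 - s for s in scores)
        else:
            raise ValueError(f"unknown strategy: {strategy}")
    return combined
```

Note that noisy-or rewards spans predicted by many members even when each individual score is modest, whereas max ignores how many members agree.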

Conclusion

We outline MRC algorithms that yield SOTA performance on benchmark datasets like SQuAD and show that a very simple transfer learning approach reaches the same performance while being computationally inexpensive. The same simple approach also has strong empirical performance and yields a new SOTA on the NQ task, outperforming a QA system trained on 4 million examples while ours was trained on only 307,373 (the size of the original NQ training set). Our future work will involve adding larger pre-trained language models like RoBERTa and XLNet.

References

  • [1] C. Alberti, D. Andor, E. Pitler, J. Devlin, and M. Collins (2019) Synthetic QA corpora generation with roundtrip consistency. CoRR abs/1906.05416. External Links: Link, 1906.05416 Cited by: Table 1.
  • [2] C. Alberti, K. Lee, and M. Collins (2019) A BERT baseline for the natural questions. arXiv preprint arXiv:1901.08634, pp. 1–4. External Links: Link, 1901.08634 Cited by: Model Training, item 2, Hyperparameter Optimization, Stride, Negative Instance Sub-Sampling, Table 1.
  • [3] Y. Cui, Z. Chen, S. Wei, S. Wang, T. Liu, and G. Hu (2017-07) Attention-over-attention neural networks for reading comprehension. In Proc. of ACL (Volume 1: Long Papers), pp. 593–602. Cited by: 1st item, Model Architecture.
  • [4] J. Devlin, M. Chang, K. Lee, and K. Toutanova (2019) BERT: pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, Cited by: Introduction, Related Work, item 1, Stride.
  • [5] B. Dhingra, D. Pruthi, and D. Rajagopal (2018) Simple and effective semi-supervised question answering. CoRR abs/1804.00720. External Links: Link, 1804.00720 Cited by: Data Augmentation.
  • [6] M. Joshi, E. Choi, D. S. Weld, and L. Zettlemoyer (2017) TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. CoRR abs/1705.03551. External Links: Link, 1705.03551 Cited by: item 3.
  • [7] T. Kočiskỳ, J. Schwarz, P. Blunsom, C. Dyer, K. M. Hermann, G. Melis, and E. Grefenstette (2018) The NarrativeQA reading comprehension challenge. TACL 6, pp. 317–328. Cited by: Introduction.
  • [8] T. Kwiatkowski, J. Palomaki, O. Redfield, M. Collins, A. Parikh, C. Alberti, D. Epstein, I. Polosukhin, M. Kelcey, J. Devlin, K. Lee, K. N. Toutanova, L. Jones, M. Chang, A. Dai, J. Uszkoreit, Q. Le, and S. Petrov (2019) Natural Questions: a benchmark for question answering research. TACL. External Links: Link Cited by: Introduction, Dataset, Multiple Model Ensembling Experiments, Multiple Model Ensembling Experiments.
  • [9] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov (2019) RoBERTa: A robustly optimized BERT pretraining approach. CoRR abs/1907.11692. External Links: Link, 1907.11692 Cited by: Pre-Training.
  • [10] A. Parikh, O. Täckström, D. Das, and J. Uszkoreit (2016) A decomposable attention model for natural language inference. EMNLP. External Links: Link, Document Cited by: Table 1.
  • [11] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay (2011) Scikit-learn: machine learning in Python. Journal of Machine Learning Research 12, pp. 2825–2830. Cited by: Multiple Model Ensembling Experiments.
  • [12] M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer (2018) Deep contextualized word representations. In Proc. of NAACL, Cited by: Model Architecture.
  • [13] P. Rajpurkar, R. Jia, and P. Liang (2018) Know what you don’t know: unanswerable questions for SQuAD. arXiv preprint arXiv:1806.03822. Cited by: Introduction.
  • [14] P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang (2016) SQuAD: 100,000+ questions for machine comprehension of text. EMNLP. External Links: Link, Document Cited by: Introduction.
  • [15] S. Robertson (2009) The probabilistic relevance framework: BM25 and beyond. Foundations and Trends in IR 3, pp. 333–389. Cited by: item 2.
  • [16] L. N. Smith (2018) A disciplined approach to neural network hyper-parameters: part 1 – learning rate, batch size, momentum, and weight decay. arXiv preprint arXiv:1803.09820. External Links: 1803.09820 Cited by: item 3.
  • [17] A. Trischler, T. Wang, X. Yuan, J. Harris, A. Sordoni, P. Bachman, and K. Suleman (2016) NewsQA: A machine comprehension dataset. CoRR abs/1611.09830. External Links: Link, 1611.09830 Cited by: item 2.
  • [18] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008. Cited by: BERT-for-QA.
  • [19] Z. Yang, Z. Dai, Y. Yang, J. G. Carbonell, R. Salakhutdinov, and Q. V. Le (2019) XLNet: generalized autoregressive pretraining for language understanding. CoRR abs/1906.08237. External Links: Link, 1906.08237 Cited by: Introduction.
  • [20] Z. Yang, P. Qi, S. Zhang, Y. Bengio, W. W. Cohen, R. Salakhutdinov, and C. D. Manning (2018) HotpotQA: a dataset for diverse, explainable multi-hop question answering. arXiv preprint arXiv:1809.09600. Cited by: Introduction.
  • [21] M. Yatskar (2018) A qualitative comparison of CoQA, SQuAD 2.0 and QuAC. CoRR abs/1809.10735. External Links: Link, 1809.10735 Cited by: Data Augmentation.
  • [22] Z. Zhang, Y. Wu, J. Zhou, S. Duan, and H. Zhao (2019) SG-Net: syntax-guided machine reading comprehension. arXiv preprint arXiv:1908.05147. Cited by: Related Work.