Ask to Learn: A Study on Curiosity-driven Question Generation
We propose a novel text generation task, namely Curiosity-driven Question Generation. We start from the observation that the Question Generation task has traditionally been considered as the dual problem of Question Answering, hence tackling the problem of generating a question given the text that contains its answer. Such questions can be used to evaluate machine reading comprehension. However, in real life, and especially in conversational settings, humans tend to ask questions with the goal of enriching their knowledge and/or clarifying aspects of previously gathered information. We refer to these inquisitive questions as Curiosity-driven: these questions are generated with the goal of obtaining new information (the answer) which is not present in the input text. In this work, we experiment on this new task using a conversational Question Answering (QA) dataset; further, since the majority of QA datasets are not built in a conversational manner, we describe a methodology to derive data for this novel task from non-conversational QA data. We investigate several automated metrics to measure the different properties of curious questions, and experiment with different approaches on the Curiosity-driven Question Generation task, including model pre-training and reinforcement learning. Finally, we report a qualitative evaluation of the generated outputs.
The growing interest in Machine Reading Comprehension (MRC) has sparked significant research efforts on Question Generation (QG), the dual task to Question Answering (QA). In QA, the objective is to produce an adequate response given a query and a text; conversely, for QG, the task is generally defined as generating a relevant question given a source text, focusing on a specific answer span. To our knowledge, all works tackling QG have thus far focused exclusively on generating relevant questions which can be answered given the source text: for instance, given AAAI was founded in 1979 as input, a question likely to be automatically generated would be When was AAAI founded?, where the answer 1979 is a span of the input. Such questions are useful to evaluate reading comprehension for both machines Hermann et al. (2015); Eyal et al. (2019) and humans Mani et al. (1999).
However, the human ability of asking questions goes well beyond evaluation: asking questions is essential in education Gall (1970) and has been proven to be fundamental for children's cognitive development Chouinard et al. (2007). Curiosity is baked into the human experience. It allows one to extend one's comprehension and knowledge by asking questions that, while being relevant to the context, are not directly answerable by it, thus being inquisitive and curious. The significance of such questions is two-fold: first, they allow for gathering novel relevant information, e.g. a student asking for clarification; second, they are also tightly linked to one's understanding of the context, e.g. a teacher testing a student's knowledge by asking questions whose answers require a deeper understanding of the context and more complex reasoning.
From an applicative point of view, we deem the ability to generate curious, inquisitive questions as highly beneficial for a broad range of scenarios: i) in the context of human-machine interaction (e.g. robots, chat-bots, educational tools), where communication with users could be more natural; ii) during the learning process itself, which could be partially driven in a self-supervised manner, reminiscent of how humans learn by exploring and interacting with their environment.
To our knowledge, this is the first paper attempting to tackle Curiosity-driven neural question generation. The contributions of this paper can be summarized as follows:
we propose a new natural language generation task: curiosity-driven question generation;
we propose a method to derive data for the task from popular non-conversational QA datasets;
we experiment using language model pre-training and reinforcement learning, on two different datasets;
we report a human evaluation analysis to assess both the pertinence of the automatic metrics used and the efficacy of the proposed dataset-creation method.
2 Related Work
Deep learning models have been widely applied to text generation tasks such as machine translation Kalchbrenner and Blunsom (2013), abstractive summarization Rush et al. (2015) and dialogue Henderson et al. (2013), providing significant gains in performance. State-of-the-art approaches are based on sequence-to-sequence models Cho et al. (2014); Sutskever et al. (2014). In recent years, significant research efforts have been directed to the tasks of Machine Reading Comprehension (MRC) and Question Answering (QA) Hermann et al. (2015); Rajpurkar et al. (2016). The data used for these tasks are usually composed of (context, question, answer) triplets: given the context and the question, a model is trained to predict the answer.
Conversely, the Question Generation (QG) task introduced by Du et al. (2017); Zhou et al. (2017) can be considered as the dual task for QA Duan et al. (2017): thus, given a context and (optionally) an answer, the model is trained to generate the question. Following QA, research on QG Amidei et al. (2018) has also seen increasing interest from the community. One of the main motivations is that an effective QG model can be used to generate synthetic data in order to augment existing QA datasets Yuan et al. (2017); Alberti et al. (2019). For instance, Yuan et al. (2017) proposed a reinforcement learning setup trained using a QA-based metric as reward: given a paragraph and an answer, the model first generates questions; then, the paragraph and the corresponding generated questions are given to a pre-trained QA model which predicts an answer; finally, the reward is computed as the number of overlapping words between the ground truth answer and the predicted answer. For an extensive evaluation of models trained with different rewards we refer the reader to Hosking and Riedel (2019). Most of these works followed Ranzato et al. (2015), who applied reinforcement learning to neural machine translation. First, a sequence-to-sequence model is trained under teacher forcing Williams and Zipser (1989) to optimize cross-entropy, hence helping to reduce the action space (i.e. the vocabulary size). Then, the model is finetuned with a mix of teacher forcing and REINFORCE Williams (1992).
For automatic evaluation, all previous works on QG resort to BLEU metrics Papineni et al. (2002), originally developed and widely used in Machine Translation. However, how to evaluate text generation models remains an open research question: Nema and Khapra (2018) pointed out that, on QG tasks, the correlation between BLEU and human evaluation was poor.
A thorough investigation of the behavior of open-domain conversational agents has been recently presented by See et al. (2019). Using controllable neural text generation methods, the authors control important attributes for chit-chat dialogues, including question-asking behavior. Among the take-away messages of this work is that question-asking represents an essential component in an engaging chit-chat pipeline: the authors find, via a large-scale human validation study, that agents with higher rates of question-asking obtain qualitative improvements in terms of inquisitiveness, interestingness and engagingness.
Indeed, in a conversational setting, it can be expected that the nature of follow-up questions significantly differs from those used as target in a traditional QG training setup: as mentioned earlier, QG has so far been tackled as the dual task to QA, hence training models to generate questions whose answer is present in the input context. On the contrary, we argue that in natural conversations the questions follow the input context but are rather a means to augment one's knowledge (thus, their answer is not present in the input context). In this work, we thus define the task as Curiosity-driven Question Generation.
Question Answering datasets are usually composed of a set of questions associated with the corresponding answers and the reading passages (the context) containing the answer. The QA task is defined as finding the answer to a question given the context. Conversely, the Question Generation (QG) task is to generate the question given the input and (optionally) the answer. Most previous efforts on the QG task have resorted to the widely used Stanford Question Answering Dataset (SQuAD) Rajpurkar et al. (2016). It contains roughly 100,000 questions posed by crowd-workers on a selected sample of Wikipedia articles. Several other QA datasets have also been recently published accounting for characteristics such as requiring multi-passage or discrete reasoning Yang et al. (2018); Dua et al. (2019); further, conversational QA datasets have been made available: CoQA Reddy et al. (2019) and QuAC Choi et al. (2018) have the desirable property of being in a dialogue-like setting.
In our scenario, Curiosity-driven QG, the reading passage associated with a question should not contain the answer, but rather pave the way for asking a new question – whose answer would eventually enrich the knowledge on the matter at hand. Therefore, a natural choice to build QG data would be to rely on existing datasets for conversational QA. A detailed comparison of the above-mentioned CoQA and QuAC datasets is provided by Yatskar (2019), who reports the proportion of Topic Error (questions unlikely to be asked in the context) and Entity Salad (i.e. questions unanswerable for any context; see Section 2.1 in Yatskar (2019)): CoQA includes a significantly higher proportion of Topic Error and Entity Salad compared to QuAC. For this reason, we resort to QuAC in order to derive data for Curiosity-driven QG.
Furthermore, recognizing the fact that the great majority of available QA datasets do not account for conversational characteristics, we propose a methodology to derive data for Curiosity-driven Question Generation from standard QA datasets, applying it to the popular SQuAD Rajpurkar et al. (2016).
For both our data sources, and consistently with standard QA and QG tasks, we encode each sample as a triplet (P, q, a), where the paragraph P = {s_1, …, s_n} comprises n sentences, and a represents the answer to the question q. A canonical QG approach would thus use s_a, i.e. the sentence of P that contains the answer, as source, and q as generation target. On the contrary, for Curiosity-driven QG, any sentence s_i from P can potentially be used as the source sequence, as long as it does not contain the answer – i.e. under the necessary constraint i ≠ a. In the following subsections, we elaborate on additional constraints depending on the nature of the source data.
In general, we define samples as triplets

(x, P⁻, y),  with  P⁻ = P \ {s_a}    (1)

where x and P⁻ are, respectively, the input sentence and the paragraph modified according to the appropriate dataset-dependent constraint, and y is the reference (target) question.
3.1 Conversational QA Data
As mentioned above, we first derive our data from the QuAC dataset, which is built from Wikipedia articles by iterating over the following procedure: given a sentence, a student annotator asks a relevant question for which he does not have the answer; then, the teacher – annotator – retrieves a sentence that contains the answer. Thus, a QuAC question is curious by design, given the text that precedes it. More formally, for the question q (i.e. our target), the source x is composed of the concatenation of the sentences of P which appear before the sentence s_a that contains the answer. Therefore, our QuAC-derived dataset is built by applying the stricter constraint i < a.
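The QuAC-derived sample construction can be sketched as follows; this is a minimal illustration assuming already-segmented sentences and a known index of the answer-bearing sentence (the field names are hypothetical, and real QuAC preprocessing would additionally handle sentence segmentation):

```python
def make_quac_sample(sentences, answer_idx, question):
    """Build a Curiosity-driven QG sample from a QuAC-style record.

    The source x concatenates the sentences of the paragraph that appear
    strictly before the one containing the answer (constraint i < a);
    the target y is the question itself.
    """
    if answer_idx == 0:
        return None  # no preceding context the question could arise from
    return {"x": " ".join(sentences[:answer_idx]), "y": question}

sample = make_quac_sample(
    ["He was born in Smiljan.",
     "He emigrated in 1884.",
     "He worked at Edison Machine Works."],
    answer_idx=2,
    question="Where did he work after emigrating?",
)
# sample["x"] contains only the first two sentences; the answer sentence is excluded.
```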
Numerically, the QuAC dataset amounts to 83,568 questions (on 11,567 articles) for the train set, 7,354 for the validation set and 7,353 for the test set (1,000 articles each). Since the original test set is not public, we use the original QuAC validation set as our test set. From the training set, we randomly drop 1,000 articles (hence, 7,224 samples) which we use to derive our validation set, thus resulting in 76,345 questions for training.
3.2 Standard QA Data
Most of the available QA datasets are not conversational. Thus, we propose a simple method to obtain data for Curiosity-driven QG from standard QA datasets. For this, we use the widely popular SQuAD Rajpurkar et al. (2016), and specifically the original splits released by Du et al. (2017), which are commonly used for Question Generation.
As opposed to QuAC, the questions in SQuAD do not follow a logical ordering. Therefore, any sentence s_i from P can potentially be used as the source sequence, as long as it does not contain the answer (constraint: i ≠ a). Nonetheless, as is to be expected for factoid QA datasets, several questions are so specific to their associated sentence s_a that they would be extremely unlikely to be asked without knowing the contents of s_a itself.
To exemplify this issue, consider a SQuAD paragraph about Nikola Tesla: given "Dane was killed in a horse-riding accident when Nikola was five." as s_a, and operating under the sole constraint i ≠ a, the sentence "Tesla was the fourth of five children" would be eligible as a source for the target question "What happened to Dane?". This question can only be asked if either contextual information or background knowledge is available, since it requires knowing that Dane was among Tesla's four siblings.
To overcome this problem, we added an additional constraint based on Named Entity Recognition (NER): s_i is an acceptable input only if all the entities present in the question are also present in the input sentence s_i. In the previous example, this would thus filter out the target "What happened to Dane?" while allowing for "What was Tesla's brother's name?".
For NER, we used spaCy (https://spacy.io/usage/linguistic-features).
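A sketch of the NER-based filter is given below. Since the paper uses spaCy's NER while this snippet must stay self-contained, the `entities` function here is a crude capitalization heuristic standing in for spaCy's `doc.ents`; the filtering logic itself matches the constraint described above:

```python
def entities(text):
    """Stand-in for spaCy NER: capitalized tokens that are not common
    question/function words. A real implementation would use doc.ents."""
    stop = {"what", "who", "where", "when", "why", "how", "which",
            "the", "was", "is", "did", "does", "a", "an"}
    ents = set()
    for tok in text.split():
        tok = tok.strip("?.,!\"'").removesuffix("'s")
        if tok and tok[0].isupper() and tok.lower() not in stop:
            ents.add(tok)
    return ents

def is_acceptable(source_sentence, question):
    """NER-based constraint: every entity mentioned in the question must
    also appear in the candidate source sentence."""
    return entities(question) <= entities(source_sentence)

# The paper's example: the first question needs knowledge not in the source.
src = "Tesla was the fourth of five children"
is_acceptable(src, "What happened to Dane?")            # filtered out
is_acceptable(src, "What was Tesla's brother's name?")  # kept
```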
Table 1: Number of samples per split, before and after applying the NER-based filtering.

| | Train | Dev | Test |
|---|---|---|---|
| Learning to ask (Du et al., 2017) | 86,635 | 8,965 | 8,964 |
| After NER filtering | 25,356 | 2,076 | 2,087 |
In Table 1 we report the number of samples obtained from SQuAD before and after applying the NER filtering. After applying the above methodology to construct a dataset for Curiosity-driven QG, we obtain 25,356 samples for training, 2,076 for development, and 2,087 for testing.
Automatic evaluation of Natural Language Generation (NLG) systems is a challenging task Nema and Khapra (2018). For QG, n-gram based similarity metrics are commonly used. These measures evaluate how similar the generated text is to the corresponding reference(s). While they are known to suffer from several shortcomings Paulus et al. (2017); Liu et al. (2016), they allow us to evaluate specific properties of the developed models. In this work, we propose the metrics detailed below, and evaluate their quality through a human evaluation in subsection 6.2.
4.1 BLEU

One of the most popular metrics for QG, BLEU Papineni et al. (2002) provides a set of measures to compare automatically generated texts against one or more references. In particular, BLEU-N is based on the count of overlapping n-grams between the candidate and its corresponding reference(s).
4.2 Self-BLEU

Within the field of Computational Creativity, diversity is considered a desirable property Karampiperis et al. (2014). Indeed, always generating the same question, such as "What is the meaning of the universe?", would be an undesirable behavior, reminiscent of the "mode collapse" observed in Generative Adversarial Networks (GANs) Goodfellow et al. (2014). Therefore, we adopt Self-BLEU, originally proposed by Zhu et al. (2018), as a measure of diversity for the generated text sequences. Self-BLEU is computed as follows: for each generated sentence s_i, a BLEU score is computed using s_i as hypothesis while the other generated sentences are used as references. When averaged over all the generated sentences, it thus provides a measure of how diverse they are. Lower Self-BLEU scores indicate more diversity. We refer to these metrics as Self-B* throughout this paper.
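Self-BLEU can be sketched as follows; for brevity, this simplification uses a single clipped n-gram precision in place of full BLEU (no brevity penalty, no geometric mean over n-gram orders):

```python
from collections import Counter

def ngram_precision(candidate, references, n):
    """Clipped n-gram precision of a tokenized candidate against references."""
    cand = Counter(zip(*[candidate[i:] for i in range(n)]))
    if not cand:
        return 0.0
    max_ref = Counter()
    for ref in references:
        for gram, count in Counter(zip(*[ref[i:] for i in range(n)])).items():
            max_ref[gram] = max(max_ref[gram], count)
    clipped = sum(min(count, max_ref[gram]) for gram, count in cand.items())
    return clipped / sum(cand.values())

def self_bleu(sentences, n=1):
    """Average n-gram precision of each generated sentence against all the
    others. Lower values indicate more diverse generations."""
    scores = []
    for i, sent in enumerate(sentences):
        others = sentences[:i] + sentences[i + 1:]
        scores.append(ngram_precision(sent, others, n))
    return sum(scores) / len(scores)
```

A degenerate generator that always emits the same question scores 1.0, while fully disjoint outputs score 0.0.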
4.3 QA-based metrics
Given a text, a question can be considered curious if its answer is not contained in the input text. In our task, this implies that a question should not be answerable given its corresponding input sentence x. Thanks to the recent improvements obtained on Question Answering tasks – for instance, human-level performance has been achieved on SQuAD-v1 (https://rajpurkar.github.io/SQuAD-explorer/) – the answerability of a question can be automatically measured.
Therefore, given a question-context pair as input to a QA model, two types of metrics can be computed:
n-gram based score: measuring the average overlap between the retrieved answer and the ground truth.
probability score: the confidence of the QA model for its retrieved answer; this corresponds to the probability of being the correct answer assigned by the QA model to the retrieved answer.
Since several diverse questions can be generated for a given input, we consider the latter metric (probability score) to better fit the Curiosity-driven QG task.
Hence, given the evaluated question q and the input text x, we define a metric QA_prob(q, x) as the confidence of the QA model that its predicted answer is correct. This metric measures the answerability of q given x: therefore, the lower this score, the less likely the answer is contained in the input text.
While being non-answerable represents a necessary condition for q being a curious question with respect to its context x, we also want q to be as relevant and useful as possible. To this end, we compute the above QA_prob for question q on P⁻, which represents the source paragraph stripped of the sentence containing the answer (see Eq. 1). The higher this score, the more likely the question is relevant and useful to augment the knowledge provided by x.
Thus, the two proposed metrics are defined as

QA_source(q) = QA_prob(q, x)    (2)
QA_context(q) = QA_prob(q, P⁻)    (3)

Under our definition, Curiosity-driven questions are those that minimize QA_source while maximizing QA_context. To compute these QA-based metrics, we use the HuggingFace implementation (https://github.com/huggingface/pytorch-transformers) of BERT Devlin et al. (2018).
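The two metrics can be sketched as follows. The real setup queries a BERT QA model for its answer confidence; here `qa_confidence` is a hypothetical word-overlap proxy, used only to keep the sketch self-contained and runnable:

```python
def qa_confidence(question, context):
    """Hypothetical stand-in for QA_prob: the real metric is the confidence
    of a pretrained QA model (BERT) in its predicted answer span. Here, a
    crude content-word overlap serves as a proxy for illustration."""
    stop = {"what", "who", "where", "when", "why", "how",
            "is", "was", "the", "did", "to"}
    q_words = set(question.lower().replace("?", "").split()) - stop
    c_words = set(context.lower().split())
    return len(q_words & c_words) / max(len(q_words), 1)

def curiosity_scores(question, source_sentence, context_wo_answer_sentence):
    """QA_source should be low (the question must not be answerable from the
    source sentence alone); QA_context should be high (the answer should
    plausibly be found in the rest of the paragraph)."""
    return (qa_confidence(question, source_sentence),
            qa_confidence(question, context_wo_answer_sentence))
```

On the earlier Tesla example, a curious question scores near zero against its source sentence but higher against the rest of the paragraph.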
5.1 Baseline model
As baseline architecture we adopt the popular Transformer Vaswani et al. (2017), which has proven to perform well on a wide range of text generation tasks, including neural machine translation Ott et al. (2018b), automatic summarization Gehrmann et al. (2018), and question generation Dong et al. (2019); Scialom et al. (2019). It can be briefly described as a sequence-to-sequence model with a symmetric encoder and decoder based on a self-attention mechanism, which allows overcoming the inherent obstacles to parallelism present in recurrent models such as Long Short-Term Memory (LSTM) networks Hochreiter and Schmidhuber (1997).
The copy mechanism Gulcehre et al. (2016) has proven beneficial for QG Zhao et al. (2018); Scialom et al. (2019): indeed, the QG task is very sensitive to rare and out-of-vocabulary words such as named entities, and such a mechanism helps deal with them efficiently: more than 50% of the answers in the SQuAD dataset, for instance, correspond to named entities (see Table 2 in Rajpurkar et al. (2016)). Hence, following Gehrmann et al. (2018); Scialom et al. (2019), we include a copy mechanism in our Transformer architecture.
For our experiments, we used the following hyper-parameters for the transformer: N = 2 (number of blocks); d_model = 256 (hidden state dimension); d_ff = 512 (position-wise feed-forward networks dimension); and, h = 2 (number of attention heads).
Experiments run with the original hyper-parameters (N=6, d_model=512, d_ff=2048, h=8) as proposed by Vaswani et al. (2017) obtained consistent and numerically similar results. During training, we used mini-batches of size 64 and the Adam optimizer Kingma and Ba (2014). At generation time, decoding is performed through the beam search algorithm.
5.2 Reinforcement Learning (RL)

Reinforcement Learning (RL) is an efficient technique to maximize discrete metrics for text generation. Previously, Ranzato et al. (2015) used the REINFORCE algorithm Williams (1992) to train RNNs for several generation tasks, showing improvements over previous supervised approaches. Moreover, Paulus et al. (2017) combined supervised and reinforcement learning, demonstrating improvements over competing approaches both in terms of ROUGE and in human evaluation.
However, models trained this way often overfit the metrics used as reward, leading to numerical improvements which do not translate to increased – and, rather, contribute to degrading – output quality, thus reducing the effectiveness of the trained models for practical applications. On this matter, and with a particular focus on QG, Hosking and Riedel (2019) performed a human evaluation of RL models trained with several metrics as reward, finding them to be indeed poorly aligned with human judgments: the models appear to learn to exploit the weaknesses of the reward source.
To overcome this issue, we propose to use a balanced reward:

R(q) = QA_context(q) − QA_source(q)    (4)

thus maximizing the probability of finding an answer to the generated question within the input paragraph, but not inside the source sentence.

Following the mixed training scheme of Ranzato et al. (2015), the model is first trained under teacher forcing, where the maximum likelihood loss is defined as

L_ml = − Σ_{t=1}^{m} log p(y_t | y_1, …, y_{t−1}, x)

where x represents the source text of length n and y the corresponding reference question of length m.

Conversely, we define the reinforcement loss to be minimized according to the standard RL actor-critic scheme, where R is the reward function defined in Eq. 4:

L_rl = (R(ŷ) − R(yˢ)) Σ_{t=1}^{m} log p(yˢ_t | yˢ_1, …, yˢ_{t−1}, x)

Greedy decoding according to the conditional distribution is used to obtain the baseline sequence ŷ. The model is then sampled using its Markov property, that is, one token at a time, giving rise to the sequence yˢ.
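Numerically, the two losses can be sketched in a few lines of plain Python; the token probabilities and rewards below are dummy values, and a real implementation would backpropagate these quantities through a neural decoder:

```python
import math

def ml_loss(token_probs):
    """Teacher-forced maximum likelihood: negative log-likelihood of the
    reference question tokens under the model."""
    return -sum(math.log(p) for p in token_probs)

def rl_loss(logp_sampled, reward_greedy, reward_sampled):
    """Self-critical reinforcement loss for one sampled sequence y^s:
    (R(y_hat) - R(y^s)) * sum_t log p(y^s_t | ...). When the sample beats
    the greedy baseline, the coefficient is negative (and token log-probs
    are negative), so minimizing the loss pushes the sample's
    log-probability up."""
    return (reward_greedy - reward_sampled) * sum(logp_sampled)

# A sampled question with a higher reward than the greedy one gets reinforced:
loss = rl_loss([math.log(0.5), math.log(0.25)],
               reward_greedy=0.3, reward_sampled=0.8)
```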
5.3 Pretraining (PT)
As shown in Table 1, the constrained dataset amounts to roughly a third of the samples of both QuAC and the original SQuAD dataset it derives from. We thus investigate, for this dataset, the effect of pretraining the model under the traditional (i.e. not Curiosity-driven) QG training setup, using the training set as provided by Du et al. (2017). Then, we resume training on the final dataset obtained after applying the NER-based constraint for Curiosity-driven QG to the same training samples.
For the QuAC Curiosity-driven dataset, the amount of data is comparable to the original dataset, given the conversational nature of QuAC. Therefore, we do not use pretraining for the experiments on QuAC.
6.1 Automatic metrics
In Table 2 we report the results of our experiments on QuAC for the baseline model (base) and the RL model. In addition to the questions generated with the default beam size, we also computed the results for larger beam widths. While one would expect a slight improvement on all metrics with increasing beam size, we instead observe a strong divergence among the results: increasing beam sizes correspond to significant improvements in terms of BLEU-4 and notable drops for BLEU-1. A similar phenomenon was observed by Ott et al. (2018a) in the context of machine translation: in that work, the presence of 1 or 2% of noisy data is found to be enough to significantly degrade the beam search results. In our case, one of the most frequent generated questions is "Are there any other interesting aspects about this article?". Indeed, the frequency of this question in our training set amounts to 4.18% of the questions. On the test set, we see that roughly 80% of the generated questions start with the token "are". Generating this sequence is not very likely with a greedy search: at any time step during the generation, if any other token has a higher probability, this question will be dismissed. On the other hand, with a larger beam, it is likely to be kept and to eventually emerge as the most probable sequence among the remaining beams at the end of inference.
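This behavior is easy to reproduce with a toy beam search. The next-token probabilities below are invented for illustration: "what" narrowly wins the first step, but the continuations after "are" are nearly deterministic, so the generic sequence has the highest joint probability (the toy LM only covers prefixes reachable with beam sizes up to 2):

```python
import math

# Hypothetical next-token distributions, keyed by prefix.
LM = {
    (): {"what": 0.45, "are": 0.40, "who": 0.15},
    ("what",): {"is": 0.6, "was": 0.4},
    ("what", "is"): {"this": 0.6, "that": 0.4},
    ("are",): {"there": 0.95, "you": 0.05},
    ("are", "there"): {"other": 0.95, "any": 0.05},
}

def beam_search(lm, length, k):
    """Keep the k highest log-probability prefixes at each step and return
    the best complete sequence."""
    beams = [((), 0.0)]
    for _ in range(length):
        candidates = [
            (seq + (tok,), logp + math.log(p))
            for seq, logp in beams
            for tok, p in lm[seq].items()
        ]
        beams = sorted(candidates, key=lambda c: -c[1])[:k]
    return beams[0][0]
```

Greedy decoding (k = 1) returns the "what …" question, while k = 2 surfaces the generic "are …" sequence, mirroring how the frequent generic question overtakes with a larger beam.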
Moving to our SQuAD-based experiments, we observe that the models trained on SQuAD do not seem to suffer from this issue, since all the metrics improved when increasing the beam size. This is consistent with the results reported by Zhao et al. (2018), where increasing the beam size slightly improves all metrics. Thus, in Table 3 we only report the results for a single beam size. A possible explanation is that SQuAD, as opposed to QuAC, only contains factoid questions.
We observe that the models trained with RL obtain, as could be expected, higher scores for QA_context than those trained without RL. A higher QA_context implies that the QA model is more likely to find an answer in the near context of the source. QA_source is lower, as expected, for SQuAD-based models, while it remains comparatively higher for the models trained with RL on QuAC. We identify two possible reasons for this: first, the QA model is trained on answerable questions; second, the QuAC questions are less factoid in nature than the SQuAD ones, and non-factoid questions are arguably harder for the QA model to evaluate. This could explain why, in the RL setting, QA_context (the evaluation on answerable questions) is higher for both SQuAD and QuAC models, but only SQuAD models achieve a lower QA_source (the evaluation on non-answerable questions).
Furthermore, we see that pretraining allows achieving higher BLEU scores, at the cost of higher Self-BLEU, thus showing increased accuracy but less diversity in the generated questions. Indeed, we find that pretrained models tend to generate a higher number of questions starting with "What" compared to both the other models and the references; the distribution of the first words of the human questions appears closer to that of the non-pretrained models.
In Figure 1 we report the distribution of the first-word frequencies for the different trained models: the models without pretraining appear closer to the human references and also show more diversity.
6.2 Human Evaluation
In addition to the automatic metrics, we conducted a human evaluation. We chose to use the data from our SQuAD-based experiments in order to also measure the effectiveness of the proposed approach for deriving Curiosity-driven QG data from a standard, non-conversational, QA dataset. We randomly selected 50 samples from the test set. Three professional English speakers were asked to evaluate the questions generated by: humans (i.e. the reference questions), and models trained using pretraining (PT), reinforcement learning (RL), and combinations thereof.
Before submitting the samples for human evaluation, the questions were shuffled. Ratings were collected on a 1-to-5 Likert scale, measuring to what extent the generated questions were: answerable by looking at their context; grammatically correct; requiring external knowledge to be answered; relevant to their context; and semantically sound. The results of the human evaluation are reported in Table 4.
What is the impact of the pretraining?
We observe that for pretrained models (i.e. PT and PT+RL) the Correctness is significantly higher than for the models without pretraining (i.e. base and RL). This corroborates the higher BLEU observed for these models in Table 3. Another observation is that External Knowledge is lower for the pretrained models while Relevance is slightly higher. This could be due to the nature of the pretraining, during which the models learn to generate non-curious questions that focus on their inputs. It correlates with the significantly higher QA_source reported in Table 3 for those pretrained models.
Does Reinforcement help?
From the human assessment we conducted (see Table 4), we observe that the models trained with RL obtain higher scores for Relevance and lower Soundness compared to their non-reinforced counterparts. Further, the results reported in Table 3 show the reinforced models obtaining lower BLEU and QA_source; conversely, they score higher when it comes to QA_context. To summarize these results, we conclude that reinforcement brings improvements in terms of the diversity of the generated questions, at the price of slightly degraded formulations in the outputs.
How effective is our dataset creation methodology?
Looking at the bottom row of Table 4, which shows the results obtained by the reference (i.e. human-generated) questions, we observe the highest relative scores for all assessed dimensions, with the exception of Answerability. This indicates that the data we derived seem to fit well the task of Curiosity-driven question generation. As a side note, we remark that the models obtain even lower scores than humans in terms of Answerability, a fact we hypothesize is due to the lower quality of the generated questions: the less sound and correct a question, the less answerable it is, regardless of its context.
How well do the metrics fit human judgement?
We report the pairwise Spearman correlation and p-value among all the different metrics and human measures in Figure 2. Correlation analysis on the human assessment data shows that BLEU correlates positively with Relevance, Answerability, Soundness and Unexpectedness (to give an order of magnitude, for a standard QG task, Nema and Khapra (2018) report a Pearson correlation of 0.258 for BLEU-1 and 0.233 for BLEU-4). Self-BLEU metrics correlate significantly with Soundness and Correctness, and QA_context with Relevance. The only human measure that does not correlate significantly with any automatic metric is External Knowledge. It is indeed one of the most challenging aspects to evaluate, even for humans. However, as expected, it correlates negatively with Answerability.
The human skill of asking inquisitive questions allows people to learn from others and increase their knowledge. Curiosity-driven question generation could thus be a key component for several human-machine interaction scenarios. We therefore proposed a new task: Curiosity-driven Question Generation. In the absence of data directly usable for this task, we proposed an automatic method to derive it from conversational QA datasets. Recognizing that the great majority of QA datasets are not dialogue-based, we also extended the method to standard QA data. Our experiments, including strategies such as pretraining and reinforcement, show promising results under both automatic and human evaluation.
In future works, we plan to extend the approach to conditional generation of Curiosity-driven questions.
Appendix A Computational Costs
All our experiments were run on a single NVIDIA 2080 Ti GPU. For the SQuAD experiments, training time amounted to circa 45 minutes and 12 hours for the models built without and with reinforcement, respectively. The additional pretraining step took roughly 2 hours. For the QuAC experiments, training time amounted to circa 2 hours and 15 hours for the models built without and with reinforcement, respectively.
Appendix B Sample Outputs
From QuAC (test set):
From SQuAD (test set):
- Alberti et al. (2019) Chris Alberti, Daniel Andor, Emily Pitler, Jacob Devlin, and Michael Collins. 2019. Synthetic QA Corpora Generation with Roundtrip Consistency. arXiv:1906.05416 [cs]. ArXiv: 1906.05416.
- Amidei et al. (2018) Jacopo Amidei, Paul Piwek, and Alistair Willis. 2018. Evaluation methodologies in Automatic Question Generation 2013-2018. In Proceedings of the 11th International Conference on Natural Language Generation, pages 307–317, Tilburg University, The Netherlands. Association for Computational Linguistics.
- Cho et al. (2014) Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. arXiv:1406.1078 [cs, stat]. ArXiv: 1406.1078.
- Choi et al. (2018) Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. QuAC: Question answering in context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2174–2184, Brussels, Belgium. Association for Computational Linguistics.
- Chouinard et al. (2007) Michelle M Chouinard, Paul L Harris, and Michael P Maratsos. 2007. Children’s questions: A mechanism for cognitive development. Monographs of the Society for Research in Child Development, pages i–129.
- Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
- Dong et al. (2019) Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified Language Model Pre-training for Natural Language Understanding and Generation. arXiv preprint arXiv:1905.03197.
- Du et al. (2017) Xinya Du, Junru Shao, and Claire Cardie. 2017. Learning to Ask: Neural Question Generation for Reading Comprehension. arXiv preprint arXiv:1705.00106.
- Dua et al. (2019) Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In Proc. of NAACL.
- Duan et al. (2017) Nan Duan, Duyu Tang, Peng Chen, and Ming Zhou. 2017. Question generation for question answering. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 866–874.
- Eyal et al. (2019) Matan Eyal, Tal Baumel, and Michael Elhadad. 2019. Question Answering as an Automatic Evaluation Metric for News Article Summarization. arXiv preprint arXiv:1906.00318.
- Gall (1970) Meredith D Gall. 1970. The use of questions in teaching. Review of educational research, 40(5):707–721.
- Gehrmann et al. (2018) Sebastian Gehrmann, Yuntian Deng, and Alexander M Rush. 2018. Bottom-up abstractive summarization. arXiv preprint arXiv:1808.10792.
- Goodfellow et al. (2014) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680.
- Gulcehre et al. (2016) Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the Unknown Words. arXiv preprint arXiv:1603.08148.
- Henderson et al. (2013) Matthew Henderson, Blaise Thomson, and Steve Young. 2013. Deep Neural Network Approach for the Dialog State Tracking Challenge. In Proceedings of the SIGDIAL 2013 Conference, pages 467–471, Metz, France. Association for Computational Linguistics.
- Hermann et al. (2015) Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in neural information processing systems, pages 1693–1701.
- Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780.
- Hosking and Riedel (2019) Tom Hosking and Sebastian Riedel. 2019. Evaluating Rewards for Question Generation Models. arXiv preprint arXiv:1902.11049.
- Kalchbrenner and Blunsom (2013) Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent Continuous Translation Models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1700–1709, Seattle, Washington, USA. Association for Computational Linguistics.
- Karampiperis et al. (2014) Pythagoras Karampiperis, Antonis Koukourikos, and Evangelia Koliopoulou. 2014. Towards machines for measuring creativity: The use of computational tools in storytelling activities. In 2014 IEEE 14th International Conference on Advanced Learning Technologies, pages 508–512. IEEE.
- Kingma and Ba (2014) Diederik P. Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980.
- Liu et al. (2016) Chia-Wei Liu, Ryan Lowe, Iulian V. Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How NOT To Evaluate Your Dialogue System: An Empirical Study of Unsupervised Evaluation Metrics for Dialogue Response Generation. arXiv preprint arXiv:1603.08023.
- Mani et al. (1999) Inderjeet Mani, David House, Gary Klein, Lynette Hirschman, Therese Firmin, and Beth Sundheim. 1999. The TIPSTER SUMMAC text summarization evaluation. In Ninth Conference of the European Chapter of the Association for Computational Linguistics.
- Nema and Khapra (2018) Preksha Nema and Mitesh M. Khapra. 2018. Towards a Better Metric for Evaluating Question Generation Systems. arXiv preprint arXiv:1808.10192.
- Ott et al. (2018a) Myle Ott, Michael Auli, David Grangier, and Marc'Aurelio Ranzato. 2018a. Analyzing Uncertainty in Neural Machine Translation. arXiv preprint arXiv:1803.00047.
- Ott et al. (2018b) Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018b. Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 1–9, Belgium, Brussels. Association for Computational Linguistics.
- Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics.
- Paulus et al. (2017) Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive summarization. arXiv preprint arXiv:1705.04304.
- Rajpurkar et al. (2016) Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ Questions for Machine Comprehension of Text. arXiv preprint arXiv:1606.05250.
- Ranzato et al. (2015) Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence Level Training with Recurrent Neural Networks. arXiv preprint arXiv:1511.06732.
- Reddy et al. (2019) Siva Reddy, Danqi Chen, and Christopher D Manning. 2019. CoQA: A conversational question answering challenge. Transactions of the Association for Computational Linguistics, 7:249–266.
- Rush et al. (2015) Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A Neural Attention Model for Abstractive Sentence Summarization. arXiv preprint arXiv:1509.00685.
- Scialom et al. (2019) Thomas Scialom, Benjamin Piwowarski, and Jacopo Staiano. 2019. Self-attention architectures for answer-agnostic neural question generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6027–6032, Florence, Italy. Association for Computational Linguistics.
- See et al. (2019) Abigail See, Stephen Roller, Douwe Kiela, and Jason Weston. 2019. What makes a good conversation? How controllable attributes affect human judgments. arXiv preprint arXiv:1902.08654.
- Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to Sequence Learning with Neural Networks. arXiv preprint arXiv:1409.3215.
- Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. arXiv preprint arXiv:1706.03762.
- Williams (1992) Ronald J Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256.
- Williams and Zipser (1989) Ronald J Williams and David Zipser. 1989. A learning algorithm for continually running fully recurrent neural networks. Neural computation, 1(2):270–280.
- Yang et al. (2018) Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).
- Yatskar (2019) Mark Yatskar. 2019. A Qualitative Comparison of CoQA, SQuAD 2.0 and QuAC. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2318–2323, Minneapolis, Minnesota. Association for Computational Linguistics.
- Yuan et al. (2017) Xingdi Yuan, Tong Wang, Caglar Gulcehre, Alessandro Sordoni, Philip Bachman, Sandeep Subramanian, Saizheng Zhang, and Adam Trischler. 2017. Machine comprehension by text-to-text neural question generation. arXiv preprint arXiv:1705.02012.
- Zhao et al. (2018) Yao Zhao, Xiaochuan Ni, Yuanyuan Ding, and Qifa Ke. 2018. Paragraph-level Neural Question Generation with Maxout Pointer and Gated Self-attention Networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3901–3910, Brussels, Belgium. Association for Computational Linguistics.
- Zhou et al. (2017) Qingyu Zhou, Nan Yang, Furu Wei, Chuanqi Tan, Hangbo Bao, and Ming Zhou. 2017. Neural Question Generation from Text: A Preliminary Study. arXiv preprint arXiv:1704.01792.
- Zhu et al. (2018) Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. 2018. Texygen: A Benchmarking Platform for Text Generation Models. arXiv preprint arXiv:1802.01886.