Studying the Effects of Cognitive Biases in Evaluation of Conversational Agents


Abstract

Humans quite frequently interact with conversational agents. Rapid advances in generative language modeling through neural networks have accelerated the creation of intelligent conversational agents. Researchers typically evaluate the output of their models through crowdsourced judgments, but there are no established best practices for conducting such studies. Moreover, it is unclear whether cognitive biases in decision-making affect crowdsourced workers’ judgments when they undertake these tasks. To investigate, we conducted a between-subjects study with 77 crowdsourced workers to understand the role of cognitive biases, specifically anchoring bias, when humans are asked to evaluate the output of conversational agents. Our results provide insight into how best to evaluate conversational agents. We find that increased consistency in ratings across two experimental conditions may be a result of anchoring bias. We also determine that external factors such as time and prior experience with similar tasks affect inter-rater consistency.

Keywords: Conversational agents; Human evaluation; Anchoring bias; Experiment design


1 Introduction

Conversational agents, also commonly known as chatbots, are typically designed with the intention of generating meaningful, informative and coherent responses that keep humans engaged in conversation. Conversational agents have become extremely popular and have been heralded as one of the recent breakthrough technologies.1 Their development has evolved from simple rule-based approaches such as ELIZA [61] and PARRY [16] to more sophisticated template-based [43, 55] and data-driven approaches [35, 6]. Current approaches to building conversational agents are end-to-end systems that employ seq2seq architectures [58, 52], language modeling [11, 44] or transformer architectures [56].

Even with the rapid advancement of conversational agents through these neural approaches, there is no established set of best practices for evaluating their performance. Evaluation procedures vary from one research article to the next, leading to a fragmented view of how the field is advancing. Overall, the output generated by these models is evaluated using automated metrics and/or (crowdsourced) human judgments. With respect to automated metrics, measures including BLEU [47], METEOR [5], ROUGE [39] and word-embedding based metrics [41], which can be calculated based on word overlap, have been used. However, prior research has shown that these metrics have little to no correlation with human ratings [45, 42, 41]. Due to these limitations of automated metrics, evaluation of chatbots is increasingly conducted by obtaining qualitative judgments from crowd-sourced workers [44, 63]. This makes the design of experiments to collect crowd-sourced judgments critically important. However, research advancing best practices for experiment design for evaluating chatbot performance and for obtaining more reliable and consistent ratings from crowd-sourced workers is very limited. Our work seeks to fill this research gap.

Consider the simple choice of the type of question used to elicit human judgments. Most current experiments for evaluating conversational agent output use Likert scales; a typical question would ask humans to rate the Readability of chatbot output on a scale of 1–5. However, research by Belz and Kow [10] has shown that using Likert scales may affect rating consistency; for example, some individuals may tend to avoid the extremes of the scale while others may not. Novikova et al. [46] have shown that continuous scales, as opposed to Likert scales, help improve the consistency and reliability of human ratings across several language evaluation tasks. In their experiments, Novikova et al. [46] found that the consistency of crowd-sourced workers improved when workers were asked to rate the conversational agent output by comparing it against a given (gold) standard. A sample question in their study would ask human raters to input a number to rate the Readability of an algorithm’s output by comparing it against the provided gold standard response (with a standard response value of 100). But what if this increased consistency is a result of the very presence of the predetermined gold standard, possibly because human evaluators are anchored on that standard value of 100?

Anchoring bias is the tendency of people to focus on the first piece of information presented; it has also been defined as the “inability of people to make sufficient adjustments starting from the initial value (anchor) to yield the final answer” [29]. Decades of research have produced a robust finding that humans are prone to cognitive biases when engaged in decision-making [54, 29, 22, 15, 62, 17]; these biases are heuristics that help humans reach decisions quickly [29]. To the best of our knowledge, the impact of anchoring bias when humans evaluate conversational agent output has not been studied before, even as human evaluation has become an integral part of most current research evaluating chatbots.

To investigate the effects of cognitive biases, specifically anchoring bias, on decision-making when evaluating chatbot output, we designed a 2×2 experiment with 77 crowdsourced workers. We studied how anchors (both numerical and textual) and the presentation order of rating tasks affect the consistency of human judgments. We elicited ratings from workers on two metrics: Readability and Coherence of model output. Our key findings are listed below.

  • We find systematic effects of anchoring in the magnitude of participants’ ratings: participants who are presented with an anchor will provide a rating that is closer to the anchor value than those who are not presented with an anchor.

  • We find systematic effects of anchoring in the consistency of participants’ ratings: participants who are presented with an anchor will be (generally) more consistent in their ratings than those who are not presented with an anchor.

  • We find that interpretation of metrics affects consistency: participants were more consistent with their ratings on Readability than in their ratings on Coherence, potentially because the interpretation of Coherence is more subjective than Readability.

Our findings demonstrate the impact anchoring bias might have on the design of evaluation experiments. Along with exploring the impact of anchoring, we also show that prior experience of being involved in similar research studies, as well as the time taken to complete the task, are factors that can affect rating consistency. Our findings have the potential to advance the field of human-agent interaction by improving the reproducibility of conversational agent evaluation experiments. The findings of this paper are applicable to other areas of natural language processing, including text summarization and story generation, that also rely on human evaluation to study the quality of algorithms. More broadly, the design of experiments in this paper can be adapted to investigate the effects of cognitive biases in a range of human-computer interaction tasks, building upon prior work in explainable AI [60] and bias mitigation [51].

2 Related Work

Our work relates to three primary areas of research; we present related work in each area below.

2.1 Cognitive Biases in Decision Making

Evaluating algorithm output is an inherently subjective task. Cognitive biases, simple heuristics that are effective but may lead to suboptimal decision-making, especially under uncertainty [54], are a critical concern but surprisingly understudied in evaluating conversational agent output. Cognitive biases were first introduced by Tversky and Kahneman and have been studied extensively in the field of psychology [28, 54, 21]. One form of cognitive bias is anchoring bias, in which humans rely on a single piece of information (the “anchor”) to make a decision [30]. Tversky and Kahneman [54] found evidence that when individuals are asked to provide an estimate, their estimates gravitate toward the reference value, or anchor. Anchoring can thus affect decision-making in visual analytics [15, 62], valuations [1], and even general knowledge tasks [25, 22]. For natural language processing tasks, however, there has been little research studying the impact of anchoring bias. One prior study by Berzak et al. [12] evaluated the impact of anchoring bias in the creation of syntactic parsers. When it comes to evaluating the output of conversational agents, there has been no prior work on understanding the impact of cognitive biases. Our work is the first step in that direction.

2.2 Evaluation of Dialogue Systems

There are two main domains in which conversational agents are deployed: open-domain [19, 50, 2] and goal-oriented [40] conversational settings. Goal-oriented systems are designed to achieve a specific goal, such as restaurant booking [13] or movie ticket booking [38]. Open-domain systems, also known commonly as chit-chat systems, engage with a conversation partner towards no predetermined goal [63]. Typically, natural language generation in conversational agents is achieved by training seq2seq architectures [58, 52]. Prior research has shown that agents built using seq2seq frameworks suffer from generating dull and generic responses [58, 36]. Evaluating the quality of responses generated by these models in open-domain situations is thus an important area of research because it affects user satisfaction and engagement [59, 57].

To evaluate output automatically, researchers have adopted metrics such as BLEU [47], METEOR [5] and ROUGE [39] from machine translation and text summarization [41] tasks. BLEU, METEOR and ROUGE can be computed based on word overlap between the proposed and ground truth responses; however, they do not adequately account for the diversity of responses that are possible for a given input utterance. Experiments show that these automated metrics, along with word-embedding based metrics [41], have little to no correlation with human ratings [41, 42]. In the absence of adequate automated metrics, obtaining human ratings has become a primary method for evaluating chatbots. Even with human evaluation, a variety of metrics have been proposed, including ease of answering [37], coherence [37], information flow [37], naturalness [2], fluency [63] and engagement [57]. Our current study builds upon this prior research and seeks to investigate the use of appropriate metrics in evaluating chatbots. As an experiment design choice, we also asked crowdsourced workers which metrics they would themselves consider most important while undertaking these tasks.
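To make the word-overlap limitation concrete, the following minimal sketch (illustrative only; the example sentences and the use of NLTK’s BLEU implementation are ours, not part of the systems evaluated in this paper) shows a perfectly reasonable reply receiving a near-zero BLEU score simply because it shares no words with the single reference response:

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# One gold reference and a valid but lexically different candidate reply (invented examples)
reference = [["i", "usually", "go", "hiking", "on", "weekends"]]
candidate = ["spending", "time", "outdoors", "is", "my", "favorite", "hobby"]

# Smoothing avoids zero scores when higher-order n-grams have no overlap
smooth = SmoothingFunction().method1
print(sentence_bleu(reference, candidate, smoothing_function=smooth))  # close to 0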

2.3 Experiment Design in Language Evaluation

Our focus in this paper is on experiment design. Our motivation is based on prior research demonstrating the effectiveness of different question types (e.g., continuous scales, magnitude estimation) for obtaining human ratings instead of discrete scales (e.g., Likert scales) [46, 10, 9, 32]. Likert scales are widely used to obtain human ratings for conversational agent output [18, 63, 14]. However, Likert scales suffer from a number of limitations, such as inconsistencies in ratings by different annotators, scale region bias and fixed granularity [32, 48, 8]. Recent work by Novikova et al. [46] addresses the issue of inconsistency in ratings, although in goal-oriented systems; their work demonstrates the effectiveness of continuous scales for increasing consistency in language evaluation tasks. However, the extent to which anchoring bias may affect consistency has not been previously studied. Prior research by Novikova et al. [46] also demonstrates an increase in consistency when the rating tasks are split so that each metric is rated individually (rating Readability followed by rating Coherence). Taking inspiration from this, our experiment design has explicit conditions to investigate the effects of splitting the rating tasks.

To summarize this Related Work section: evaluation of dialogue system output relies increasingly on human evaluation, yet little research focuses on experiment design for this task, and very little work has examined the cognitive biases that might affect ratings obtained from crowd-sourced workers. Our present study seeks to fill this research gap and to propose better experiment design procedures for fellow researchers in this area.

3 Corpus and Models

To obtain ratings on conversational agent output, we trained three models from scratch to generate responses. Code for these models was made available by Dziri et al. [19] (https://github.com/nouhadziri/THRED). We first describe the corpus we used to train the models.

3.1 Corpus

We used the Reddit Conversational Corpus made available by Dziri et al. [19]. This corpus consists of conversations obtained from 95 different subreddits, curated out of 1.1M subreddits, over a 20-month period from November 2016 until August 2018. Table 1 shows overall descriptive statistics of the corpus; the average length of utterances is consistent across the Training, Validation and Test sets.

Train Valid. Test
Dialogues 9.2M 500K 400K
Avg. Length of Utterances 13.98 13.98 13.99
Table 1: Descriptive statistics of the corpus used in our experiments.

3.2 Models

All three models used in our experiments are based on seq2seq approaches that contain an encoder and a decoder component. Seq2seq approaches are commonly used in language generation tasks, such as machine translation and dialogue generation. For dialogue generation, the encoder receives the input sequence; each input token is passed through an LSTM [26] on the encoder side, which produces a hidden state representation (Eq. 1):

h_t = f(h_{t-1}, x_t)   (1)

where h_{t-1} represents the previous hidden state and f represents a non-linear activation function. The decoder uses the last hidden state of the encoder as its initial state, and output tokens are conditioned on the input (Eq. 2):

p(y_t | y_1, ..., y_{t-1}, x) = g(h_t, y_{t-1})   (2)

where y_{t-1} represents the ground truth token fed into the decoder.
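A minimal code sketch of this encoder-decoder recurrence (Eqs. 1–2) is given below; it is purely illustrative, with hypothetical vocabulary and dimension sizes, and is not the training code released with the models described next:

import torch
import torch.nn as nn

vocab_size, emb_dim, hidden_dim = 10000, 128, 256  # hypothetical sizes
embed = nn.Embedding(vocab_size, emb_dim)
encoder_cell = nn.LSTMCell(emb_dim, hidden_dim)
decoder_cell = nn.LSTMCell(emb_dim, hidden_dim)
output_proj = nn.Linear(hidden_dim, vocab_size)

def encode(src_tokens):
    # Eq. 1: h_t = f(h_{t-1}, x_t), with f realized by the LSTM transition
    h, c = torch.zeros(1, hidden_dim), torch.zeros(1, hidden_dim)
    for t in src_tokens:
        h, c = encoder_cell(embed(torch.tensor([t])), (h, c))
    return h, c  # last encoder state initializes the decoder

def decode_step(y_prev, h, c):
    # Eq. 2: distribution over the next token, conditioned on the ground-truth
    # previous token (teacher forcing) and the running decoder state
    h, c = decoder_cell(embed(torch.tensor([y_prev])), (h, c))
    return torch.softmax(output_proj(h), dim=-1), h, c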
  1. Seq2Seq: Our first model is a traditional seq2seq model with an attention mechanism, using the formulation proposed by Bahdanau et al. [4]. Attention allows the decoder to attend to different parts of the input while generating the response. At each time step, the decoder produces a context vector by attending to the encoder hidden states along with the last hidden state of the decoder (Eq. 3):

    c_t = \sum_i \alpha_{t,i} h_i   (3)

    where \alpha_{t,i} represents the relative importance of the i-th input position at decoding step t. The output from the model is produced through a softmax function (Eq. 4), where s_t denotes the decoder hidden state:

    p(y_t | y_1, ..., y_{t-1}, x) = softmax(g(y_{t-1}, s_t, c_t))   (4)

    A minimal code sketch of this attention step is given after the model list below.
  2. HRED: Our second model uses the Hierarchical Recurrent Encoder-Decoder [50] architecture. This model is an advancement over traditional seq2seq models: HRED overcomes their bottlenecks by capturing longer context from dialogue histories. The HRED model introduces a two-level hierarchy to capture long-term context. The first level, called the utterance layer, captures the meaning of each sentence, similar to traditional seq2seq models; the hidden states of the utterance layer are then further encoded by an inter-utterance layer that captures the context and input information [53].

  3. THRED: Our last model is the Topic-Augmented Hierarchical Encoder-Decoder [19]. This model uses topic words along with a hierarchical encoder-decoder to produce a response; the topic words are obtained using a pre-trained LDA model [27]. This model also makes use of an attention mechanism over the context along with the topic words from the input sequence.
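As referenced in the model list above, the following minimal sketch illustrates the additive (Bahdanau-style) attention step of Eqs. 3–4; the layer names and dimensions are hypothetical and the code is not taken from the released implementation:

import torch
import torch.nn as nn

hidden_dim, vocab_size = 256, 10000  # hypothetical sizes
W_enc = nn.Linear(hidden_dim, hidden_dim, bias=False)
W_dec = nn.Linear(hidden_dim, hidden_dim, bias=False)
v = nn.Linear(hidden_dim, 1, bias=False)
out = nn.Linear(2 * hidden_dim, vocab_size)

def attention_step(dec_hidden, enc_hiddens):
    # dec_hidden: (1, hidden_dim); enc_hiddens: (src_len, hidden_dim)
    scores = v(torch.tanh(W_enc(enc_hiddens) + W_dec(dec_hidden)))   # (src_len, 1)
    alpha = torch.softmax(scores, dim=0)                             # relative importance, Eq. 3
    context = (alpha * enc_hiddens).sum(dim=0, keepdim=True)         # context vector c_t
    logits = out(torch.cat([dec_hidden, context], dim=-1))
    return torch.softmax(logits, dim=-1), alpha                      # output distribution, Eq. 4

# e.g., probs, alpha = attention_step(torch.zeros(1, 256), torch.randn(5, 256))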

Sample Output from Models: In Figure 1 (top-left), we show a sample conversation from the Reddit corpus. It consists of two sentences, spoken by Persons A and B. The corpus also provides the target (or gold-standard) response against which the model can be trained, and against which performance can be evaluated; this is shown as the Standard Response in the Figure 1 screenshot. At the bottom of the screenshot, the output from the three generative models described in this section is shown (seq2seq, HRED and THRED output in Responses 1, 2 and 3, respectively).

Figure 1: Sample screen showing variations in the experiment conditions. (A) represents the conversational context that is shown across all conditions. (B) is the numerical and textual anchor presented to participants in anchoring conditions. (B’) shows the screenshot of conditions where no anchor is presented. (C) is used in Setup 1 where both questions of readability and coherence ratings are shown together. (C’) is used in Setup 2 where the readability and coherence are treated as individual tasks and only one is shown at a time to the participant.

4 Experiment Design

Having obtained the outputs from our three models, we built an interface to allow participants to evaluate the generated responses. We initially focus on two metrics, Readability and Coherence, which are frequently used when obtaining evaluation ratings from crowd-sourced workers [45, 24, 63, 2]. Readability measures the linguistic quality of text and helps quantify the difficulty of understanding the text by the reader [23, 45]. Coherence measures the ability to produce responses consistent with the topic or context of conversation [57]. Based on prior findings of the limitations of Likert scales [10], we instead use magnitude estimation (ME) questions to obtain ratings from crowdsourced workers. Magnitude estimation allows participants to rate the responses on a free scale without being constrained. Recently, Novikova et al. [46] demonstrated that the use of magnitude estimation helps improve consistency amongst crowd-sourced workers when evaluating responses from goal-oriented systems. We build upon this prior work but specifically focus on investigating the impact of cognitive biases in the design of our experiments.

Accordingly, we design four experiment conditions by crossing two factors: Anchor (with or without an anchor) and Presentation Order (both questions or a single question on a single screen). Table 2 shows the four experiment conditions in our design, while Figure 1 shows two sample screenshots from the study interface.

                            No Anchor   Anchor
Both Questions (Setup 1)       18         22
Single Question (Setup 2)      18         19
Table 2: Experiment design with four experiment conditions and the number of participants in each condition.

As shown in Figure 1, participants across all experiment conditions are shown the Conversation Context (A). Participants in the Anchor conditions are shown the Standard Response along with the Readability and Coherence value of the Standard Response (set to 100 in this study, following prior work by [46]); together these form the Numerical and Textual Anchor (B) (Figure 1-left). Participants in the No Anchor condition are shown neither the Standard Response nor its Readability and Coherence value (B’) (Figure 1-right). Participants in the Both Questions condition (Setup 1) are asked to input their ratings of Readability and Coherence on a single screen (C) (as shown in Figure 1-left). Participants in the Single Question condition (Setup 2) are asked to input their rating on a single metric on a single screen (as shown in Figure 1-right (C’) for Readability), and then input their rating on the Coherence metric on the next screen after clicking the next button (not shown).

Figure 2: The experiment flow for each crowd-sourced worker taking part in this study.

Figure 2 provides the flow of steps taken by workers in the experiment, beginning with the informed consent procedure and pre-questionnaire, followed by the task of evaluating 50 sets of outputs on the two metrics of Readability and Coherence, and ending with the post-questionnaire. In the pre-questionnaire, we asked two questions about the prior experience of workers: (Q1) Have you taken part in previous studies that involve evaluating conversational responses? and (Q2) Have you taken part in previous studies that involve talking to a chatbot? Our motivation behind asking these questions is to understand whether prior experience participating in similar studies affects inter-rater consistency. In the post-questionnaire, we collect participant demographics including age, gender, race, and education. We also ask whether they prefer to provide ratings as magnitude estimation questions or on Likert scales. In addition, we obtain their free-form responses on which metrics they would consider important for evaluating conversational agent output. These post-questionnaire questions are designed to obtain qualitative data to better inform our future studies.

4.1 Research Questions

Following the review of prior work in this area and our decisions on the experiment design, we developed three main research questions for our study.

  • RQ1: Which factors affect the magnitude of ratings provided by the participants? Rationale: The presence of an anchor may orient participants towards that number (100) and also towards the reference text; thus we expect that participants in the anchoring conditions will have higher ratings (closer to 100) than participants in the no anchor conditions. In addition, we investigate whether the presentation order of questions (Setup 1 vs. Setup 2) has an effect on how high participants’ ratings are on the task. We also investigate whether the time taken to complete the task has any effect on the magnitude of ratings. Finally, we use the pre-questionnaire responses about prior experience to analyze whether having taken part in similar studies or having conversed with a chatbot has any effect on the magnitude of ratings.

  • RQ2: Which factors affect the consistency of ratings provided by participants? Rationale: As in RQ1, we expect that the presence of an anchor (the value of 100 and the reference text) will orient participants, and thus that participants in the anchoring conditions will be more consistent in their ratings than participants in the no anchor conditions. In addition, we investigate whether the presentation order of questions (Setup 1 vs. Setup 2) has an effect on the consistency of ratings on the task. We also investigate whether the time taken to complete the task and prior experience affect inter-rater consistency.

  • RQ3: Are participants more consistent in their ratings of readability than coherence? Rationale: Across both setups, we expect higher consistency in readability ratings than in coherence ratings. We also expect the impact of anchoring to be more pronounced for readability than for coherence. We contend that coherence is more subjective to evaluate than readability, since humans have to judge whether the response is related to the context of the conversation [20, 45]. Readability, on the other hand, has been evaluated in other fields through automated metrics and is more well-defined [31]; one such formula is shown after this list.
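As a concrete point of reference for why readability is the better-defined of the two metrics (this formula is our illustration, drawn from [31]; it was not shown to participants), the Flesch–Kincaid grade level depends only on surface statistics of the text:

\text{FK grade} = 0.39 \left( \frac{\text{total words}}{\text{total sentences}} \right) + 11.8 \left( \frac{\text{total syllables}}{\text{total words}} \right) - 15.59

No comparable closed-form expression exists for coherence, which has to be judged relative to the conversational context.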

5 Results

We present the results of our analysis in this section. We begin by describing the pool of participants we recruited and the quality checks we put in place to ensure high-quality crowdsourced data.

5.1 Descriptive Statistics

Our study was approved under our institution’s Institutional Review Board (IRB) policies (IRB #18-0357). Table 2 provides the number of participants across the four experiment conditions. We recruited crowdsourced workers through Amazon Mechanical Turk.2 Participants were randomly assigned to experiment conditions, and each participant was allowed a maximum of 4 hours to complete the study. To ensure high-quality data, we used stringent qualifying criteria: (1) workers should have a Masters qualification;3 (2) a HIT approval rate of at least 80%; and (3) at least 500 approved HITs. Our study interface was hosted on a secure server at our institution and participant responses were saved in a MongoDB database. In our analyses, we use the aggregated value of the responses provided by each participant for the entire task. The entire anonymized data, analyses and code used in this study are available at this link: https://github.com/sashank06/ConvEvaluation_CHI2020.

A total of 77 crowdsourced workers participated in our study. The gender distribution was 67.5% male (52), 31.17% female (24) and 1.33% other (1). The age of workers was between 20 and 60 years (mean=34.85 years). A majority of the participants had an undergraduate degree (), while others indicated having a Masters degree (), Doctorate () and High School diploma (). In terms of race/ethnicity, were Indian, along with White (), Black (), East Asian (), Hispanic () and Native American () making up rest of the demographics.

In the pre-questionnaire, we also asked participants to indicate whether they had taken part in prior research studies that involve (Q1) evaluating conversational responses and (Q2) interacting with a chatbot. Table 3 provides the number of participants’ responses to the pre-questionnaire questions across both setups.

                      Question 1      Question 2
                      Yes    No       Yes    No
Setup 1  No Anchor     5     13        5     13
         Anchor        4     18        5     17
Setup 2  No Anchor     7     11        8     10
         Anchor        6     13        7     12
Table 3: Number of participants in each category: we refer to prior experience of evaluating conversational output as Question 1 and prior experience of engaging with chatbots as Question 2.

5.2 Analysis and Results for RQ1

Effects of anchor and type of setup on magnitude of ratings

We find significant differences between the magnitude of responses provided by participants across both setups (p < 0.001). Figure 3 provides the mean and bootstrapped confidence interval (95%) of the responses across the experiment conditions. In Setup 1, we find that participants with no anchor produce ratings () that are significantly lower than the ratings provided by participants in the anchor condition (). We find a similar pattern in Setup 2: the no anchor condition results in a lower mean rating (), while ratings in the anchor condition have a higher mean (). When we analyze the ratings on Readability and Coherence separately (Figure 4), the presence of numerical and textual anchors results in higher ratings on average than the absence of the anchor (statistically significant, p < 0.001).

Figure 3: Mean of the responses bootstrapped with 95% confidence intervals across Setups 1 and 2.
Figure 4: Mean of the responses bootstrapped with 95% confidence intervals across Setups 1 and 2 on the metrics of Readability and Coherence.

Figure 4 presents the ratings for the metrics of readability and coherence separately. Across both setups, the difference between the anchor and no anchor conditions is larger for readability than for coherence (statistically significant, p < 0.001). In Setup 1, readability ratings have a mean of 83.13 in the anchor condition, dropping to 64.97 in the no anchor condition. Also in Setup 1, for the coherence metric, the mean of responses is M=62.74 in the anchoring condition and M=52.89 without an anchor. We find similar trends in the responses provided in Setup 2 for both readability and coherence.
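A minimal sketch of how such a bootstrapped mean and 95% confidence interval can be computed is shown below (we assume a simple percentile bootstrap here for illustration; the ratings are made up, and our actual analysis code is available in the repository linked above):

import numpy as np

def bootstrap_mean_ci(ratings, n_boot=10000, ci=95, seed=0):
    # Mean rating with a bootstrapped (percentile) confidence interval
    rng = np.random.default_rng(seed)
    ratings = np.asarray(ratings, dtype=float)
    boot_means = np.array([
        rng.choice(ratings, size=ratings.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    lower, upper = np.percentile(boot_means, [(100 - ci) / 2, 100 - (100 - ci) / 2])
    return ratings.mean(), (lower, upper)

# Hypothetical aggregated ratings from a handful of participants
print(bootstrap_mean_ci([83.1, 64.9, 72.4, 91.0, 58.7]))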

Figure 5: Average time taken to complete the task across the four experiment conditions. The overall average (57.17 minutes) is shown as a dashed line in the graph.
Figure 6: Mean of the responses bootstrapped with 95% confidence intervals across Setups 1 and 2 based on the amount of time spent on the study.

Effect of time taken to complete task on magnitude of ratings

We analyze the effects of the time taken to complete the task on the magnitude of ratings. We find that participants who are presented with anchors spend more time on average completing the study than participants in the no anchor conditions across both setups. Across all 77 participants, the mean time taken to complete the study was 57.17 minutes (see Figure 5). In Setup 1, participants took an average of 66 minutes in the anchor condition and an average of 54.83 minutes in the no anchor condition. Similarly, in Setup 2, participants took an average of 54.94 minutes in the anchor condition and 50.94 minutes in the no anchor condition.

Next, we grouped the participants into two categories based on the amount of time spent: (1) Below Average, when participants spend less than the mean time; and (2) Above Average, when participants spend more than the mean time. Table 4 provides the number of participants in each group across the experiment conditions. Across both setups, we find that people in the above average group show significant differences in their responses. In Setup 1, within the above average group, the mean of responses was 39.65 in the no anchor condition and 72.35 in the anchor condition (cf. Table 4). We find similar evidence in Setup 2, with people in the anchor condition providing higher values (83) close to the numerical anchor (100). We note, however, that the sample sizes in the Below Average groups in Setup 2 are smaller (4 and 5 participants respectively, cf. Table 4); more experimentation is needed to further substantiate this finding.
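This split can be expressed compactly as follows (a sketch only: the per-participant table layout and column names are assumptions, not the study’s actual data format):

import pandas as pd

# Hypothetical per-participant summary: completion time and aggregated rating
df = pd.DataFrame({
    "participant": [1, 2, 3, 4, 5],
    "minutes": [42.0, 71.5, 55.0, 80.2, 39.8],
    "mean_rating": [61.2, 74.5, 58.9, 83.0, 66.1],
})

# Split at the overall mean completion time (57.17 minutes in our study)
df["time_group"] = (df["minutes"] > df["minutes"].mean()).map(
    {True: "above average", False: "below average"})
print(df.groupby("time_group")["mean_rating"].agg(["count", "mean"]))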

                      Below Average   Above Average
Setup 1  No Anchor     7 (71.19)       11 (39.65)
         Anchor       11 (73.53)       11 (72.35)
Setup 2  No Anchor     4 (61.96)       14 (58.75)
         Anchor        5 (64.02)       14 (83)
Table 4: Number of participants who spent below and above average time in each condition, with their average rating values in parentheses.
Figure 7: Mean of the responses bootstrapped with 95% confidence intervals across Setups 1 and 2 based on prior experience of being involved in studies about evaluating conversations.
Figure 8: Mean of the responses bootstrapped with 95% confidence intervals across Setups 1 and 2 based on prior experience of being involved in studies about talking to a chatbot.

Effect of prior experience on magnitude of ratings

Figure 7 shows the impact of prior experience of evaluating conversational responses (Question 1 on the pre-questionnaire) on the magnitude of ratings. We find contrasting responses across the two setups. In Setup 1, people with prior experience produce higher responses in the anchor condition (M=74.41), close to the numerical anchor (100), and lower values in the no anchor condition (M=38.36), whilst people with no prior experience give similar responses across both conditions. In contrast, in Setup 2 participants with no prior experience produce higher responses in the anchor condition (M=71.45) than in the no anchor condition (M=63.74).

Figure 8 shows the impact of prior experience of interacting with chatbots. Participants with such prior experience demonstrated signs of anchoring: in Setup 1, the mean of responses for participants with prior experience in the anchor condition (M=80.40) is significantly higher () than for participants in the no anchor condition (M=48.01).

When comparing against Setup 1, we find that people in Setup 2 with no prior experience produce higher responses (M=70.74) in the anchoring condition than in the no anchor condition (M=63.12).

These findings partially substantiate the hypothesis that people with prior experience (those who answered Yes to Questions 1 and 2) are more susceptible to the anchoring effect than those without prior experience of similar tasks; however, this effect is only seen in Setup 1, while Setup 2 demonstrates the opposite effect. We find this evidence particularly interesting and plan to further investigate eliciting ratings on different metrics as separate tasks (Setup 2) as a means of mitigating the anchoring effect.

5.3 Analysis and Results for RQ2

We measure the consistency of ratings using the intra-class correlation coefficient (ICC) [34]. Following Bard et al. [7], we perform a log normalization of the scores obtained using the magnitude estimation method across both setups.
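A minimal sketch of this computation is shown below; the choice of the pingouin library and the toy scores are ours for illustration (our actual analysis code is in the repository linked earlier), but it follows the same steps of log-normalizing the magnitude-estimation scores and computing ICC over an items-by-raters table:

import numpy as np
import pandas as pd
import pingouin as pg

# Long-format ratings: one row per (item, rater) pair with the raw ME score (toy values)
scores = pd.DataFrame({
    "item":  [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
    "rater": ["A", "B"] * 5,
    "readability": [100, 90, 40, 55, 120, 110, 70, 65, 95, 88],
})

# Log normalization of magnitude-estimation scores, following Bard et al. [7]
scores["log_readability"] = np.log(scores["readability"].astype(float))

icc = pg.intraclass_corr(data=scores, targets="item", raters="rater",
                         ratings="log_readability")
print(icc[["Type", "ICC", "pval"]])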

Effects of anchor and type of setup on consistency of ratings

Table 5 presents the ICC scores obtained across both setups on the metrics of readability and coherence. We find a significant increase in the consistency of ratings in the anchor condition of Setup 1. The consistency values obtained in Setup 2 show mixed results: the no anchor condition produces more consistent ratings on the readability metric, whilst on coherence there is extremely low consistency between raters when no anchor is presented. However, we see a significant increase in consistency for Setup 2 when participants are in the anchoring condition.

                            Readability   Coherence
Setup 1  No Anchor (n=18)      0.74          0.76
         Anchor (n=22)         0.921         0.855
Setup 2  No Anchor (n=18)      0.874         0.151
         Anchor (n=19)         0.835         0.727
Table 5: ICC scores on the metrics of readability and coherence for each experiment condition. All values are statistically significant (p < 0.001).
Condition            Time Taken             Readability   Coherence
Setup 1, No Anchor   Below Average (n=11)      0.75          0.63
                     Above Average (n=7)       0.23          0.59
Setup 1, Anchor      Below Average (n=11)      0.86          0.785
                     Above Average (n=11)      0.83          0.68
Setup 2, No Anchor   Below Average (n=14)      0.85         -0.03*
                     Above Average (n=4)       0.*           0.*
Setup 2, Anchor      Below Average (n=14)      0.726         0.76
                     Above Average (n=5)       0.556        -0.20*
Table 6: ICC scores on the metrics of readability and coherence based on the amount of time spent in the study across both conditions. All values are statistically significant at p < 0.001 except those marked with *.
Condition            Prior experience evaluating conversations?   Readability   Coherence
Setup 1, No Anchor   Yes (n=5)                                       0.44          0.71
                     No (n=13)                                       0.67          0.62
Setup 1, Anchor      Yes (n=4)                                       0.61          0.52
                     No (n=18)                                       0.91          0.84
Setup 2, No Anchor   Yes (n=7)                                       0.77         -0.88
                     No (n=11)                                       0.71          0.46
Setup 2, Anchor      Yes (n=6)                                      -0.2           0.65
                     No (n=13)                                       0.93          0.86
Table 7: ICC scores on the metrics of readability and coherence based on participants’ prior experience of taking part in research studies about evaluating conversations. All values are statistically significant at p < 0.001 except those indicated.

Effect of time taken to complete task on consistency of ratings

We next examine the role of the external factors of time and prior experience in the consistency of the ratings provided. Table 6 presents the ICC scores on the metrics of readability and coherence across both setups, with participants grouped into Above Average and Below Average groups based on the amount of time spent in the study (cf. Table 4).

Surprisingly, we find that people who spend below average time achieve higher consistency in the ratings across both setups. However, we do notice some differences between the two setups. In Setup 1, we find that amongst participants who are in the below average group, the participants in the anchor condition have a higher consistency than participants in no anchor condition. Similarly, we find that people who spend above average time on Setup 1 with anchor condition achieve higher consistency when compared to Setup 1 with no anchor condition for the above average group. However, in Setup 2 we find people who spend above average time have a poor consistency score on the metric of coherence, a possible indication that coherence is highly subjective.

Effect of prior experience on consistency of ratings

Table 7 provides an overview of consistency on the readability and coherence metrics based on participants’ prior experience of taking part in studies about evaluating conversations, across both setups. We find that participants with no prior experience of evaluating conversations tend to have higher consistency than participants with prior experience, irrespective of the experiment condition assigned. Comparing within the anchor conditions across both setups, participants with no prior experience of evaluating conversations achieve higher consistency in Setup 2, whereas participants with prior experience of evaluating conversations achieve higher consistency on the readability metric in Setup 1.

Condition            Prior experience interacting with chatbots?   Readability   Coherence
Setup 1, No Anchor   Yes (n=5)                                        0.73          0.75
                     No (n=13)                                        0.55          0.58
Setup 1, Anchor      Yes (n=5)                                        0.89          0.69
                     No (n=17)                                        0.87          0.79
Setup 2, No Anchor   Yes (n=8)                                        0.85         -0.163
                     No (n=10)                                        0.58         -0.48
Setup 2, Anchor      Yes (n=7)                                       -0.2           0.49
                     No (n=12)                                        0.91          0.82
Table 8: ICC scores on the metrics of readability and coherence based on participants’ prior experience of taking part in research studies involving talking to a chatbot. All values are statistically significant at p < 0.001 except those indicated.

Table 8 gives an overview of consistency on the readability and coherence metrics based on participants’ prior experience of taking part in studies involving engagement with a chatbot. Compared to Table 7, we find that participants with prior experience of engaging with chatbots achieve higher consistency across both setups irrespective of the experiment condition, except in the Setup 2 anchoring condition. We also find that the anchoring condition enables participants to achieve higher consistency in both Setup 1 and Setup 2: irrespective of participants’ prior experience, anchoring helps achieve higher consistency. This provides further evidence that the presence of an anchor contributes to higher consistency in this experiment design. Tables with confidence intervals for Figures 3, 4, 6, 7 and 8 are included in our GitHub repository.

5.4 Analysis and Results for RQ3

As shown in Table 5, readability has higher consistency than coherence in both setups. We also notice the significant impact anchoring has on increasing the consistency of ratings: it appears harder for raters to agree on the more subjective metric of coherence without any textual or numerical anchor. We also suspect that the instructions have an impact on consistency. In the instructions screen of our study, Readability was defined as: Is the response easy to understand, fluent and grammatical and does not have any consecutive repeating words (following [45, 46]), which provides clear indicators for evaluating a response on this metric. Coherence was defined as: Is the response relevant to the topic and context of the conversation (following [19, 57]), leaving more room for subjective interpretation.

6 Discussion and Limitations

In this section, we discuss implications of our results on anchoring effect in dialogue evaluation, and point out possible limitations related to the study design and analysis.

6.1 Implication of experiment results

Our key findings indicate that the presence of numerical and textual anchors significantly influences ratings across two different experiment setups. The effect of anchoring is more pronounced when participants are asked to provide ratings on two metrics at the same time (Both Questions/Setup 1) and slightly less pronounced when participants are asked to provide ratings for a single metric on a single screen (Single Question/Setup 2). Our findings have implications for future experiment designs geared towards evaluating the performance of dialogue systems, especially when ratings are elicited on multiple dimensions such as Readability and Coherence.

Additionally, the external factors of time taken to complete the study and participants’ prior experience of having taken part in research studies, either about evaluation or about engagement with a chatbot, were found to impact both the magnitude of responses and the consistency of ratings. We find that participants who spend more than the average time on the study (above average) get anchored and also exhibit low consistency scores on the metrics of readability and coherence.

We notice the choice of metrics to evaluate also has an impact on consistency. We see that ratings for the more subjective metric of coherence are less consistent than those for readability amongst the raters across all conditions and setups.

We also analyzed the data from the post-questionnaire questions asking participants which method of rating they preferred to work with. From the 77 participants, we find that 42 participants preferred the magnitude estimation method and 35 of them preferred the Likert scale method. Prior research has shown that continuous scale methods like magnitude estimation do offer advantages [10, 46] and they need to be explored further for the purposes of evaluation. Consistent with the prior work in this area, we also find similar advantages provided by magnitude estimation across both our setups with an increase in consistency of the ratings provided by the crowd-sourced workers. These findings and the participants’ feedback on their own preferences lead us to recommend magnitude estimation for future evaluation design of conversational agents.

Limitations

We acknowledge a few limitations of our work. First, we consider only two metrics for the evaluation of conversational agents; in reality, there may be more metrics that are better suited to evaluating their performance. Second, we acknowledge that this study is exploratory; understanding the impact of anchoring bias in the evaluation of conversational agents is in its infancy. For future studies, we plan to pre-register our study to improve the validity of our findings [33]. Third, we study the effect of anchoring, but we provide both numerical and textual anchors together; although we find an anchoring effect, we are unable to determine whether the numerical or the textual anchor is causing it. To address this, we are planning an extension study with additional experiment conditions so that we can study the impact of textual and numerical anchors separately.

Future Work

Figure 9: Participant ratings on which metrics they considered important for conversational output evaluation. Y-axis represents the % of importance.

The results of our study offer insights into the challenging task of designing experiments for the evaluation of dialogue systems and understanding their impact. To inform possible future directions, we also asked the participants in our study to rank the metrics that they considered important for evaluating conversational agent output (Figure 9). We asked them to rank, in order of importance, the following metrics: Readability, Coherence, Novelty, Diversity, Specificity and Engagement; these are among the metrics commonly used in research articles that develop and evaluate conversational agents. We notice that readability and coherence are considered very important, but other metrics such as engagement and specificity are also worth investigating. Based on this evidence, possible extensions to our work would include specificity and engagement metrics. Past research by See et al. [49] specifies metrics including specificity and engagement/interestingness and shows how these metrics can impact the training process of a model.

7 Summary

Evaluation of dialogue systems is an extremely challenging task since automated metrics do not adequately capture the nuances related to natural language and its production. However, prior research has not focused on the impact that experiment design has on qualitative dialogue evaluation.

Our findings are a step towards understanding the impact of experiment design and the possible role of cognitive biases such as anchoring bias in dialogue evaluation. Cognitive biases could be the result of System 1 thinking (Type 1 processing), which is relatively fast, low on cognitive demand, and often based on intuition; by contrast, System 2 thinking (Type 2 processing) is the result of systematic thinking and reasoning. Our results, however, indicate that participants who spent less time on the task had higher consistency of ratings than those who took longer. One possible experiment to identify the effects of Type 1 vs. Type 2 processing is to design an experiment condition that explicitly triggers intuitive responses (Type 1) by imposing a strict and challenging response deadline. Bago and De Neys [3], who explicitly triggered Type 1 vs. Type 2 processing for logic problems, observed that participants often gave correct, logical responses as their first, immediate response. Capturing the time taken per question in the interaction logs would allow us to collect the data needed to support this investigation.

We specifically investigate the impact of anchoring bias in our experiment to determine its effects on the consistency of ratings across participants. By separately analyzing the effect of the presence or absence of anchors and the presentation order of questions, we are able to make design recommendations for future experiments on dialogue evaluation. We focus on the metrics of readability and coherence, but our proposed experiment design can be extended to multiple other metrics. In addition, our study suggests that the external factors of time and prior experience of taking part in research studies about evaluating responses or engaging with chatbots have a significant impact on both the responses provided and their consistency.

Acknowledgments

This work was supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No FA8650-18-C-7881. All statements of fact, opinion or conclusions contained herein are those of the authors and should not be construed as representing the official views or policies of AFRL, DARPA, or the U.S. Government. We thank the anonymous reviewers for the helpful feedback.

Footnotes

  1. https://www.technologyreview.com/lists/technologies/2016/
  2. https://www.mturk.com/
  3. https://www.mturk.com/worker/help

References

  1. D. Ariely, G. Loewenstein and D. Prelec (2003) “Coherent arbitrariness”: stable demand curves without stable preferences. The Quarterly journal of economics 118 (1), pp. 73–106. Cited by: §2.1.
  2. N. Asghar, P. Poupart, J. Hoey, X. Jiang and L. Mou (2018) Affective neural response generation. In European Conference on Information Retrieval, pp. 154–166. Cited by: §2.2, §2.2, §4.
  3. B. Bago and W. De Neys (2017) Fast logic?: examining the time course assumption of dual process theory. Cognition 158, pp. 90–109. Cited by: §7.
  4. D. Bahdanau, K. Cho and Y. Bengio (2014) Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Cited by: item 1.
  5. S. Banerjee and A. Lavie (2005) METEOR: an automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, pp. 65–72. Cited by: §1, §2.2.
  6. S. Bangalore and O. Rambow (2000) Corpus-based lexical choice in natural language generation. In Proceedings of the 38th Annual Meeting on Association for Computational Linguistics, ACL ’00, Stroudsburg, PA, USA, pp. 464–471. External Links: Link, Document Cited by: §1.
  7. E. G. Bard, D. Robertson and A. Sorace (1996) Magnitude estimation of linguistic acceptability. Language, pp. 32–68. Cited by: §5.3.
  8. H. Baumgartner and J. E. Steenkamp (2001) Response styles in marketing research: a cross-national investigation. Journal of marketing research 38 (2), pp. 143–156. Cited by: §2.3.
  9. A. Belz and E. Kow (2010) Comparing rating scales and preference judgements in language evaluation. In Proceedings of the 6th International Natural Language Generation Conference, pp. 7–15. Cited by: §2.3.
  10. A. Belz and E. Kow (2011) Discrete vs. continuous rating scales for language evaluation in nlp. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2, pp. 230–235. Cited by: §1, §2.3, §4, §6.1.
  11. Y. Bengio, R. Ducharme, P. Vincent and C. Jauvin (2003) A neural probabilistic language model. Journal of machine learning research 3 (Feb), pp. 1137–1155. Cited by: §1.
  12. Y. Berzak, Y. Huang, A. Barbu, A. Korhonen and B. Katz (2016-11) Anchoring and agreement in syntactic annotations. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, Texas, pp. 2215–2224. External Links: Link, Document Cited by: §2.1.
  13. A. Bordes, Y. Boureau and J. Weston (2016) Learning end-to-end goal-oriented dialog. arXiv preprint arXiv:1605.07683. Cited by: §2.2.
  14. H. Chen, Z. Ren, J. Tang, Y. E. Zhao and D. Yin (2018) Hierarchical variational memory network for dialogue generation. In Proceedings of the 2018 World Wide Web Conference on World Wide Web, pp. 1653–1662. Cited by: §2.3.
  15. I. Cho, R. Wesslen, A. Karduni, S. Santhanam, S. Shaikh and W. Dou (2017) The anchoring effect in decision-making with visual analytics. In IEEE Conference on Visual Analytics Science and Technology (VAST), Cited by: §1, §2.1.
  16. K. M. Colby (1975) Artificial paranoia: a computer simulation of paranoid process. Pergamon Press. Cited by: §1.
  17. E. Dimara, S. Franconeri, C. Plaisant, A. Bezerianos and P. Dragicevic (2018) A task-based taxonomy of cognitive biases for information visualization. IEEE transactions on visualization and computer graphics. Cited by: §1.
  18. E. Dinan, S. Roller, K. Shuster, A. Fan, M. Auli and J. Weston (2018) Wizard of wikipedia: knowledge-powered conversational agents. arXiv preprint arXiv:1811.01241. Cited by: §2.3.
  19. N. Dziri, E. Kamalloo, K. W. Mathewson and O. Zaiane (2018) Augmenting neural response generation with context-aware topical attention. arXiv preprint arXiv:1811.01063. Cited by: §2.2, item 3, §3.1, §3, §5.4.
  20. N. Dziri, E. Kamalloo, K. Mathewson and O. Zaiane (2019-06) Evaluating coherence in dialogue systems using entailment. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota, pp. 3806–3812. External Links: Link, Document Cited by: 3rd item.
  21. G. Ellis (2018) Cognitive biases in visualizations. In Springer International Publishing, Cited by: §2.1.
  22. A. Furnham and H. C. Boo (2011) A literature review of the anchoring effect. The Journal of Socio-Economics 40 (1), pp. 35–42. Cited by: §1, §2.1.
  23. A. Gatt and E. Krahmer (2018-01) Survey of the state of the art in natural language generation: core tasks, applications and evaluation. J. Artif. Int. Res. 61 (1), pp. 65–170. External Links: ISSN 1076-9757, Link Cited by: §4.
  24. M. Ghazvininejad, C. Brockett, M. Chang, B. Dolan, J. Gao, W. Yih and M. Galley (2018) A knowledge-grounded neural conversation model. In Thirty-Second AAAI Conference on Artificial Intelligence, Cited by: §4.
  25. T. Gilovich, N. Robert and A. Amos (2001) Putting adjustment back into the anchoring and adjustment heuristic: differential processing of self-generated and experimenterprovided anchors. Psychological Science. Cited by: §2.1.
  26. S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural computation 9 (8), pp. 1735–1780. Cited by: §3.2.
  27. M. Hoffman, F. R. Bach and D. M. Blei (2010) Online learning for latent dirichlet allocation. In advances in neural information processing systems, pp. 856–864. Cited by: item 3.
  28. D. Kahneman and A. Tversky (1972) Subjective probability: a judgment of representativeness. Cognitive psychology 3 (3), pp. 430–454. Cited by: §2.1.
  29. D. Kahneman (2003) A perspective on judgment and choice: mapping bounded rationality.. American psychologist 58 (9), pp. 697. Cited by: §1.
  30. D. Kahneman (2016) 36 heuristics and biases. Scientists Making a Difference: One Hundred Eminent Behavioral and Brain Scientists Talk about Their Most Important Contributions, pp. 171. Cited by: §2.1.
  31. J. P. Kincaid, R. P. Fishburne Jr, R. L. Rogers and B. S. Chissom (1975) Derivation of new readability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel. Cited by: 3rd item.
  32. S. Kiritchenko and S. Mohammad (2017-07) Best-worst scaling more reliable than rating scales: a case study on sentiment intensity annotation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Vancouver, Canada, pp. 465–470. External Links: Link, Document Cited by: §2.3.
  33. R. Kosara and S. Haroz (2018) Skipping the replication crisis in visualization: threats to study validity and how to address them. Cited by: §6.1.
  34. J. R. Landis and G. G. Koch (1977) The measurement of observer agreement for categorical data. biometrics, pp. 159–174. Cited by: §5.3.
  35. I. Langkilde-Geary and K. Knight (2002) Halogen statistical sentence generator. In Proceedings of the ACL-02 Demonstrations Session, pp. 102–103. Cited by: §1.
  36. J. Li, M. Galley, C. Brockett, J. Gao and B. Dolan (2016-06) A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, California, pp. 110–119. External Links: Link, Document Cited by: §2.2.
  37. J. Li, W. Monroe, A. Ritter, D. Jurafsky, M. Galley and J. Gao (2016) Deep reinforcement learning for dialogue generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 1192–1202. External Links: Document, Link Cited by: §2.2.
  38. X. Li, Y. Chen, L. Li, J. Gao and A. Celikyilmaz (2017-11) End-to-end task-completion neural dialogue systems. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Taipei, Taiwan, pp. 733–743. External Links: Link Cited by: §2.2.
  39. C. Lin (2004) Rouge: a package for automatic evaluation of summaries. Text Summarization Branches Out. Cited by: §1, §2.2.
  40. Z. Lipton, X. Li, J. Gao, L. Li, F. Ahmed and L. Deng (2018) Bbq-networks: efficient exploration in deep reinforcement learning for task-oriented dialogue systems. In Thirty-Second AAAI Conference on Artificial Intelligence, Cited by: §2.2.
  41. C. Liu, R. Lowe, I. Serban, M. Noseworthy, L. Charlin and J. Pineau (2016) How not to evaluate your dialogue system: an empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 2122–2132. External Links: Document, Link Cited by: §1, §2.2.
  42. R. Lowe, M. Noseworthy, I. V. Serban, N. Angelard-Gontier, Y. Bengio and J. Pineau (2017-07) Towards an automatic Turing test: learning to evaluate dialogue responses. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Vancouver, Canada, pp. 1116–1126. External Links: Link, Document Cited by: §1, §2.2.
  43. S. W. McRoy, S. Channarukul and S. S. Ali (2003) An augmented template-based approach to text realization. Natural Language Engineering 9 (4), pp. 381–420. Cited by: §1.
  44. H. Mei, M. Bansal and M. R. Walter (2017-02) Coherent dialogue with attention-based language models. In Proceedings of the National Conference on Artificial Intelligence (AAAI), San Francisco, CA. Cited by: §1, §1.
  45. J. Novikova, O. Dušek, A. Cercas Curry and V. Rieser (2017-09) Why we need new evaluation metrics for NLG. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark, pp. 2241–2252. External Links: Link, Document Cited by: §1, 3rd item, §4, §5.4.
  46. J. Novikova, O. Dušek and V. Rieser (2018-06) RankME: reliable human ratings for natural language generation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), New Orleans, Louisiana, pp. 72–78. External Links: Link, Document Cited by: §1, §2.3, §4, §4, §5.4, §6.1.
  47. K. Papineni, S. Roukos, T. Ward and W. Zhu (2002) BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pp. 311–318. Cited by: §1, §2.2.
  48. H. Schuman and S. Presser (1996) Questions and answers in attitude surveys: experiments on question form, wording, and context. Sage. Cited by: §2.3.
  49. A. See, S. Roller, D. Kiela and J. Weston (2019-06) What makes a good conversation? how controllable attributes affect human judgments. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota, pp. 1702–1723. External Links: Link, Document Cited by: §6.1.
  50. I. V. Serban, A. Sordoni, Y. Bengio, A. C. Courville and J. Pineau (2016) Building end-to-end dialogue systems using generative hierarchical neural network models.. In AAAI, Vol. 16, pp. 3776–3784. Cited by: §2.2, item 2.
  51. F. Sperrle, U. Schlegel, M. El-Assady and D. Keim (2019) Human trust modeling for bias mitigation in artificial intelligence. In ACM CHI 2019 Workshop: Where is the Human? Bridging the Gap Between AI and HCI, Cited by: §1.
  52. I. Sutskever, O. Vinyals and Q. V. Le (2014) Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pp. 3104–3112. Cited by: §1, §2.2.
  53. Z. Tian, R. Yan, L. Mou, Y. Song, Y. Feng and D. Zhao (2017-07) How to make context more useful? an empirical study on context-aware neural conversational models. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Vancouver, Canada, pp. 231–236. External Links: Link, Document Cited by: item 2.
  54. A. Tversky and D. Kahneman (1974) Judgment under uncertainty: heuristics and biases. science 185 (4157), pp. 1124–1131. Cited by: §1, §2.1.
  55. K. van Deemter, M. Theune and E. Krahmer (2005) Real vs. template-based natural language generation: a false opposition. Computational Linguistics 31 (1), pp. 15–24. Cited by: §1.
  56. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser and I. Polosukhin (2017) Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008. Cited by: §1.
  57. A. Venkatesh, C. Khatri, A. Ram, F. Guo, R. Gabriel, A. Nagar, R. Prasad, M. Cheng, B. Hedayatnia and A. Metallinou (2018) On evaluating and comparing conversational agents. arXiv preprint arXiv:1801.03625. Cited by: §2.2, §2.2, §4, §5.4.
  58. O. Vinyals and Q. Le (2015) A neural conversational model. arXiv preprint arXiv:1506.05869. Cited by: §1, §2.2.
  59. M. A. Walker, D. J. Litman, C. A. Kamm and A. Abella (1997) PARADISE: a framework for evaluating spoken dialogue agents. In Proceedings of the eighth conference on European chapter of the Association for Computational Linguistics, pp. 271–280. Cited by: §2.2.
  60. D. Wang, Q. Yang, A. Abdul and B. Y. Lim (2019) Designing theory-driven user-centric explainable ai. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 601. Cited by: §1.
  61. J. Weizenbaum (1966) ELIZA—a computer program for the study of natural language communication between man and machine. Communications of the ACM 9 (1), pp. 36–45. Cited by: §1.
  62. R. Wesslen, S. Santhanam, A. Karduni, I. Cho, S. Shaikh and W. Dou (2019) Investigating effects of visual anchors on decision-making about misinformation. In Computer Graphics Forum, Vol. 38, pp. 161–171. Cited by: §1, §2.1.
  63. S. Zhang, E. Dinan, J. Urbanek, A. Szlam, D. Kiela and J. Weston (2018-07) Personalizing dialogue agents: I have a dog, do you have pets too?. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Melbourne, Australia, pp. 2204–2213. External Links: Link, Document Cited by: §1, §2.2, §2.2, §2.3, §4.