Consistent Dialogue Generation with Self-supervised Feature Learning

Yizhe Zhang   Xiang Gao   Sungjin Lee
Chris Brockett  Michel Galley  Jianfeng Gao  Bill Dolan
Microsoft Research, Redmond, WA, USA
{yizzhang,xiag,sule,chrisbkt,mgalley,jfgao,billdol}@microsoft.com
Abstract

Generating responses that are consistent with the dialogue context is one of the central challenges in building engaging conversational agents. In this paper, we propose a neural conversation model that generates consistent responses by maintaining certain features related to topics and personas throughout the conversation. Unlike past work that requires external supervision such as user identities, which are often unavailable or classified as sensitive information, our approach trains topic and persona feature extractors in a self-supervised way by utilizing the natural structure of dialogue data. Moreover, we adopt a binary feature representation and introduce a feature disentangling loss which, paired with controllable response generation techniques, allows us to promote or demote certain learned topic and persona features. Evaluation results demonstrate the model's capability of capturing meaningful topic and persona features, and incorporating the learned features brings significant improvement in the quality of generated responses on two datasets, even compared with a model that uses explicit persona information.

1 Introduction

The notion of speaker consistency is attracting growing interest in neural response generation research (Li et al., 2016b; Luan et al., 2016; Zhang et al., 2018a; Gao et al., 2018). When interacting with an open-domain neural conversation agent, users may expect the agent to develop the dialogue with consistent information, mitigating user confusion and improving engagement. Speaker consistency presents two aspects: topic consistency and persona consistency. Topic consistency reflects the model's ability to maintain dialogue topics such as sport, movie or music without getting sidetracked. Persona consistency envisions the agent as human-like, endowed with a relatively invariant individual personality, style of engagement (e.g., enthusiasm and casualness) or personal profile (e.g., place of residence).

Figure 1: Task illustration: generating responses that are consistent with dialogue history in persona, tone and topic (from our system, 2 context turns).

Generating appropriate responses with these characteristics is a major challenge (Figure 1). Li et al. (2016b); Luan et al. (2017) and Al-Rfou et al. (2016) use persona embeddings as additional input to train end-to-end conversational agents. Obtaining accurate persona embeddings as in Li et al. (2016b), however, requires many thousands of utterances per persona, and targeted test personas may not always be found in the training data. End-to-end systems are often trained from social media data in which only a small spectrum of personas (casual speakers) is represented and professional roles (e.g., customer service) may be underrepresented, thus limiting deployment. Typically, moreover, the objective is to maintain consistency of both persona and topic throughout the dialogue, rather than to inject specific personas or topics into responses. Under these scenarios, learning and leveraging persona or dialogue topic in a data-efficient and unsupervised way becomes crucial.

We present a self-supervised approach that uses the natural structure of conversational data to learn and leverage topic and persona features. Our proposals include:

1) A discriminative feature extraction mechanism that captures conversational topics and personas in a self-supervised manner, without requiring specification of speaker identity, thus allowing massive unlabeled datasets to be utilized while protecting sensitive user information.

2) Use of binary features and a disentangling loss to improve interpretability of learned features. This affords flexibility to activate or deactivate specific features when generating responses.

3) Leveraging a controllable text generation mechanism to force generated responses to adhere to high-level features such as topic and persona encoded in the controlling signal.

2 Related Work

Self-supervised learning

Self-supervised learning, a subdomain of unsupervised learning, has been applied to representation learning for images, video and audio (Denton and Vighnesh, 2017; Doersch et al., 2015; Owens and Efros, 2018). However, to the best of the authors' knowledge, the application of self-supervision to conversational agents is rare. Borrowing definitions from other domains, self-supervised approaches in NLP make use of non-textual signals that intrinsically correlate with the text to supervise text feature learning (Denton and Vighnesh, 2017).

Persona-aware response generation

Welleck et al. (2018) suggested a natural language inference (NLI) approach to improve the persona consistency, however additional labels are required. Zhang et al. (2018a); Qian et al. (2018) use explicit personal profiles as side information to guide response generation. Such information, however, may not always be available. Other work proposes injecting either emotion (Zhou et al., 2018) or functional control (Ke et al., 2018) into dialogue generation. As in Li et al. (2016b), learning to leverage the controlling signal in order to bias generation may require significant amounts of labelled data.

Topic-aware response generation

Leveraging topic modeling in response generation has been explored by several prior works (Xing et al., 2017; Wang et al., 2017; Wu et al., 2018). Our approach differs from these methods in that we focus on learning discriminative features that help distinguish one topic or persona from another. In addition, our method employs a neural sentence encoder to capture richer features than the bag-of-words features that conventional topic models opt for.

Interpretable and controllable generation

Controllable text generation (Hu et al., 2017) has been employed in text style transfer and many other tasks (Ficler and Goldberg, 2017; Asghar et al., 2018; Ghosh et al., 2017; Dong et al., 2017). This helps disentangle high-level style information from contextual information so that the style information can be independently manipulated to produce text with different styles. Related to our work, Zhao et al. (2018) considered discrete latent actions to learn a human-interpretable representation for task-oriented dialogue systems.

3 Proposed Approach

The proposed approach uses additional features, learned without supervision, to generate response utterances that reflect these features. We elaborate on the two major components of the approach: a feature extractor trained to extract topic/persona features from each utterance, and a response generator that takes the extracted features as input to generate responses accordingly.

3.1 Problem statement

Let $D_i = (u_1^{(i)}, \ldots, u_{T_i}^{(i)})$ denote the $i$-th dialogue session in dataset $\mathcal{D}$, where $u_t^{(i)}$ is the $t$-th utterance and $T_i$ is the number of turns in this dialogue session. We assume that each dialogue consists only of utterances between two speakers, interleaving with each other. Suppose the first $m$ ($m < T_i$) turns of each dialogue are revealed; our aim is to generate the remaining turns of the dialogue so that they are consistent with the observed context.

3.2 Discriminative feature extraction

Figure 2: Feature extractor design. $x_1$ and $x_2$ are two randomly shuffled sentences. An extractor network $f$ ($f$ can be either $f_t$ or $f_p$) encodes both of them to yield features $z_1$ and $z_2$, which are then used to predict the label $y$. $D$ represents the matching function/network.

Inspired by Denton and Vighnesh (2017), we adopt a self-supervised discriminative training scheme in which we design a neural model with an explicit feature extraction layer, as illustrated in Figure 2, and formulate a discriminative task to train the model. When training is done, the feature extraction layer yields relevant features for the associated task. In this section, we introduce two discriminative tasks to capture two types of sentence features, respectively: 1) topic-specific features ($z_t$) that characterize conversation topics; 2) persona-specific features ($z_p$) that reflect speaker characteristics.

Topic feature extractor

In order to build a topic feature extractor with self-supervision, we rely on the assumption that utterances from the same conversation session are likely to share similar topics. Thus, we formulate a surrogate task to identify whether two random sentences $x_1$ and $x_2$ from $\mathcal{D}$ belong to the same dialogue session. Specifically, when they come from the same dialogue session we assign 1 to the target $y$, and 0 otherwise. We optimize the cross-entropy objective:

$$\mathcal{L}_{\mathrm{disc}} = -\mathbb{E}\left[ y \log D(z_1, z_2) + (1 - y) \log \left(1 - D(z_1, z_2)\right) \right], \qquad z_i = f_t(x_i),$$

where $f_t$ denotes the topic-specific feature extractor (shared among all sentences), and $D$ represents a matching network detecting whether the two feature vectors $z_1$ and $z_2$ belong to the same dialogue session. We use a 3-layer convolutional neural network (CNN) followed by a non-linear mapping for $f_t$ to produce an $n$-dimensional vector. For the non-linear mapping, we explore two options: 1) we employ a sigmoid function to produce a soft-binary representation, i.e., $z = \sigma(h)$, where $h$ is the CNN output; 2) we compute a hard-binary representation by setting each component to 1 if it is positive and 0 otherwise, i.e., $z = \mathbb{1}(h > 0)$. This non-negative bounded representation lends itself well to interpretation and control of each component of $z$. For instance, we can activate or deactivate a certain topic or persona by simply turning the corresponding component on or off. For the matching function, we apply a sigmoid to the inner product of the two feature vectors, i.e., $D(z_1, z_2) = \sigma(z_1^{\top} z_2 / \tau)$, where $\tau$ is a hyperparameter that scales the inner product.
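As an illustration, the matching step can be sketched in a few lines of numpy. The CNN feature extractor is abstracted away here, and the scale value `tau=8.0` is an arbitrary placeholder rather than the paper's setting:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def match_prob(z1, z2, tau=8.0):
    # D(z1, z2): sigmoid of the scaled inner product of two feature vectors,
    # interpreted as the probability that both come from the same session
    return sigmoid(np.dot(z1, z2) / tau)

def cross_entropy(p, y):
    # the discriminative objective for a single (x1, x2, y) pair
    eps = 1e-12
    return -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
```

Training would minimize `cross_entropy` over balanced batches of same-session and cross-session sentence pairs.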

Persona feature extractor

Here we consider extracting persona features in the broader sense of the current speaker's status related to emotion (Zhou et al., 2018), personality, tone and function control (Ke et al., 2018). Note that we are only interested in maintaining consistency of any emerging persona, rather than characterizing a full spectrum of persona features. The only difference between the topic ($f_t$) and persona ($f_p$) feature extractors is how the positive and negative sample pairs for training are created. For the persona feature extractor, the positive pairs ($y=1$) and negative pairs ($y=0$) are utterances from the same and from different speakers within a dialogue, respectively, aiming to eliminate topic information from the persona features. Ideally, the two speakers in a dialogue are discussing the same topic. Under this assumption, since the utterances in a negative pair are also from the same dialogue, they are likely to share the same topics. Thus, the model is forced to learn features that capture the different personas of the two speakers.
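A minimal sketch of this pair construction, under a hypothetical data layout in which each session is a list of (speaker, utterance) tuples:

```python
def persona_pairs(session):
    # session: list of (speaker, utterance) tuples from ONE dialogue.
    # Positive pairs (y=1) come from the same speaker, negative pairs (y=0)
    # from the other speaker; both share the session's topic, so only
    # persona-specific signal remains discriminative.
    pairs = []
    for i in range(len(session)):
        for j in range(i + 1, len(session)):
            (s1, u1), (s2, u2) = session[i], session[j]
            pairs.append((u1, u2, 1 if s1 == s2 else 0))
    return pairs
```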

Unlike Li et al. (2016b), where each speaker is assigned a single speaker embedding vector, in our proposed method the utterances by one speaker can have different feature vectors, as manifestations of the underlying persona embedding in different contexts. Nevertheless, the discriminator objective encourages these vectors to be similar, since they refer to the same person. We believe that our approach is more data-efficient than Li et al. (2016b) because it allows borrowing information from a wider range of speakers. In Li et al. (2016b), information borrowing only happens between speakers who are close to the current speaker in persona embedding space. As a result, persona embeddings can be poor for speakers who are not represented by many dialogues. Our method, on the other hand, can leverage any speakers who share specific features with the current speaker, and is able to learn more robust representations of speakers because we aggregate personal traits across all users. That said, Li et al. (2016b) complements our method nicely in that it does not require dialogue history as the seed to initiate the first several turns.

Interpretable features

We considered two methods of making the learned features more interpretable: 1) feature vector disentanglement (Cogswell et al., 2015); 2) feature vector binarization (Zhao et al., 2018).

First, we employ a decorrelation (DeCorr) loss inspired by Cogswell et al. (2015), who introduced a DeCov loss to regularize deep neural networks. Specifically, we add an additional term to the objective function when training the topic and persona feature extractors:

$$\mathcal{L}_{\mathrm{DeCorr}} = \frac{1}{2} \left( \| C \|_F^2 - \| \mathrm{diag}(C) \|_2^2 \right),$$

where $\| \cdot \|_F$ represents the matrix Frobenius norm, and the $\mathrm{diag}(\cdot)$ operator extracts the diagonal of a matrix. $C$ is the correlation matrix of the features $f(x)$, computed from the current batch of data, where the feature extractor $f$ can be either $f_t$ or $f_p$. Note that achieving a reasonable estimate of the correlation matrix requires a relatively large mini-batch size. The resulting final objective for the discriminator is $\mathcal{L}_{\mathrm{disc}} + \alpha \mathcal{L}_{\mathrm{DeCorr}}$, where $\alpha$ is a balancing hyperparameter.
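The decorrelation penalty can be sketched in numpy as follows; this is a DeCov-style approximation, and the exact normalization details are assumptions:

```python
import numpy as np

def decorr_loss(Z):
    # Z: (batch, n) feature matrix from the current mini-batch.
    # Penalize off-diagonal entries of the feature correlation matrix
    # so that individual feature units carry non-redundant information.
    Zc = Z - Z.mean(axis=0, keepdims=True)
    cov = Zc.T @ Zc / max(len(Z) - 1, 1)
    std = np.sqrt(np.diag(cov)) + 1e-8
    corr = cov / np.outer(std, std)
    off_diag = corr - np.diag(np.diag(corr))
    return 0.5 * np.sum(off_diag ** 2)
```

Perfectly redundant feature columns drive the loss toward its maximum, while uncorrelated columns drive it toward zero.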

Second, as an alternative, we also consider binary feature vectors, where a straight-through (ST) estimator is used for the gradient calculation (Bengio et al., 2013). Suppose the binary feature $z$ is rounded from a probability vector $\tilde{z}$; the ST estimator back-propagates through the hard threshold by approximating its gradient as 1. We empirically found that setting $D$ to use the inner product of $z_1$ and $z_2$ fails. We presume the reason is that the inner product of two binary vectors can only take integer values in $\{0, \ldots, n\}$, which limits the representation power of the model. We therefore concatenate $z_1$ and $z_2$ and pass the result through a multi-layer perceptron (MLP) to predict the matching label $y$. Interchangeability is still loosely maintained, as the pair is randomly swapped when fed into the discriminator.
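The straight-through trick can be illustrated with a pair of forward/backward functions; this is a framework-free sketch, whereas in practice it would be implemented as a custom autograd op:

```python
import numpy as np

def st_forward(h):
    # exact forward pass: hard-threshold the pre-activation to binary codes
    return (h > 0).astype(np.float64)

def st_backward(grad_output):
    # straight-through estimator: pretend the threshold was the identity,
    # so the incoming gradient flows through unchanged (biased but smooth)
    return grad_output
```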

Utterance pair construction

One issue in constructing the positive/negative pairs for the feature extractor is that the numbers of positive and negative pairs need to be balanced to achieve a robust empirical result. Moreover, when constructing positive sample pairs ($y=1$), if $x_1$ and $x_2$ are adjacent or close to each other in a dialogue, we might end up capturing adjacency pairs (Sacks and Schegloff, 1973) rather than conversation topics, for example, "How are you?" and "Fine. How are you?". The similarity captured in feature space for such a pair reflects contextual appropriateness rather than topic/persona consistency. To alleviate this, we collect as positive sample pairs only those pairs that are more than 4 turns away from each other.
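This positive-pair filter can be sketched as follows; the balanced negative-sampling strategy shown here is an illustrative assumption:

```python
import random

def topic_pairs(dialogues, min_gap=4, seed=0):
    # Positive pairs (y=1): utterances from the SAME session, more than
    # `min_gap` turns apart, to avoid capturing adjacency pairs.
    # Negative pairs (y=0): one utterance sampled from a DIFFERENT session
    # per positive pair, keeping the labels balanced.
    rng = random.Random(seed)
    pairs = []
    for d_idx, dialogue in enumerate(dialogues):
        others = [d for k, d in enumerate(dialogues) if k != d_idx]
        for i in range(len(dialogue)):
            for j in range(i + min_gap + 1, len(dialogue)):
                pairs.append((dialogue[i], dialogue[j], 1))
                pairs.append((dialogue[i], rng.choice(rng.choice(others)), 0))
    return pairs
```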

We note also that the persona features may affect the topic feature extractor because the persona features can be weak signals for predicting whether two sentences are from the same dialogue. One remedy is to select utterances from different speakers within a dialogue session when constructing the positive pairs for the topic extractor to eliminate as much as possible the effect of the persona features. However, this remedy can result in fewer positive pairs. Empirically the topic extractor works well even without this remedy, presumably because the strong signal from topic overwhelms the weak signal from persona.

Figure 3: Controllable generation scheme (better in color). The feature extractors are represented as solid blue arrows. During training, the contextual sources are encoded (by the encoder) and aggregated (by the aggregator) into a context vector $c$, while the target is abstracted by the feature extractor into a feature vector $z$. The decoder then (controllably) generates a response based on both $c$ and $z$. At test time, the feature is obtained by aggregating the feature vectors of the source sentences.

3.3 Generator design

Training a response generator

The conditional multi-turn generator that produces neural responses given the $K$-turn source sentences is shown in Figure 3; it is conceptually related to Serban et al. (2016). During training (Figure 3, left panel), each source sentence is first encoded by a 3-layer CNN encoder, which shares the same architecture as the feature extractor, followed by a context aggregator layer that summarizes all sentence embedding vectors into one single context vector $c$. In this paper, the aggregator layer first concatenates the sentence embeddings and then applies a fully-connected layer to map the resulting vector to $c$.

On the other hand, the target sentence is processed by the feature extractors to produce the feature vector(s) $z$, as described in Section 3.2. The feature extractors are fixed when training the response generator, since we observed that fine-tuning them leads to suboptimal empirical results. The context vector $c$ and feature vector(s) $z$ are fed into an MLP to generate a fixed-length initial hidden state $h_0$. This is followed by a series of long short-term memory (LSTM) units as the decoder, where $z$ is employed as input at each time step.

Controllable objective during training

Our generator loss incorporates two components. The first is the vanilla teacher-forcing (Williams and Zipser, 1989) MLE loss $\mathcal{L}_{\mathrm{MLE}}$. The second is a cycle-consistency loss $\mathcal{L}_{\mathrm{cyc}}$, introduced by Hu et al. (2017) to grant the feature vector $z$ additional controlling ability over the generation. Intuitively, it encourages a self-generated response under the free-running generative mode to have the same features as the input signal $z$. Specifically, consider a response $\hat{x}$ greedily generated by conditioning on previously generated tokens, with re-extracted feature $\hat{z} = f(\hat{x})$. Then $\mathcal{L}_{\mathrm{cyc}}$ is simply the Euclidean distance between the input and re-extracted feature vectors, i.e., $\mathcal{L}_{\mathrm{cyc}} = \| z - \hat{z} \|_2$. In the case of binary features, $\hat{z}$ is taken as the network output before rounding to binary values. Note that generating tokens involves an argmax operation and is not directly differentiable, preventing the gradient signals from back-propagating to the encoder and decoder. Common remedies include Gumbel-softmax (GS) (Gumbel and Lieblein, 1954), policy gradient (PG) (Yu et al., 2017) and soft-argmax (SA) (Zhang et al., 2017). Unfortunately, GS and PG suffer from high variance in gradient estimation, while SA faces a dilemma between vanishing and inaccurate gradients. To alleviate this problem in SA, we consider what we call the Straight-Through LSTM unit (ST-LSTM), which uses ST estimation (Bengio et al., 2013; Jang et al., 2016) to obtain a biased but smooth gradient signal while keeping the forward computation exact via a temperature parameter $\tau$. Details are provided in the Appendix.
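The cycle-consistency term itself reduces to a simple distance. In this sketch the fixed feature extractor and the greedy decoding step are abstracted away, and only the distance computation is shown:

```python
import numpy as np

def cycle_loss(z_in, z_regen):
    # Euclidean distance between the controlling feature fed to the decoder
    # and the feature re-extracted from the self-generated response
    return float(np.linalg.norm(z_in - z_regen))
```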

In the experiments, we applied the slope-annealing trick (Chung et al., 2016) to schedule the temperature, which works well in practice. The final training objective for the generator is $\mathcal{L}_{\mathrm{MLE}} + \lambda \mathcal{L}_{\mathrm{cyc}}$.

Testing time

At test time, as shown in Figure 3 (right panel), feature vectors are first collected from the source sentences by applying the feature extractors. We denote the feature vectors for the $K$ source sentences as $z_1, \ldots, z_K$. We apply a feature aggregator layer to estimate the target feature vector $\hat{z}$, which is then fed into the LSTM-RNN for generation. Unlike the context layer, we consider a weighted-sum aggregation function for the feature layer,¹ i.e., $\hat{z} = \sum_{k} w_k z_k$, where $w_1, \ldots, w_K$ are linear interpolation weights learned at training time by minimizing the Euclidean distance between the predicted target feature $\hat{z}$ and the true target feature $z$. For the persona feature, we only use the source sentences of the current speaker; the weights of the other speaker's utterances are set to zero. Intuitively, $w_k$ can be perceived as the attention paid to each utterance. We note that more complicated attention mechanisms could further improve the model; however, we leave these for future work, since this paper focuses on the utilization of dialogue features rather than on improving the multi-turn S2S structure in general.

¹Other choices for this layer exist, such as mean, max or concatenation (as in the context layer). We choose the weighted sum due to its superior empirical performance compared with these alternatives.
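The weighted-sum feature aggregator amounts to a learned interpolation; in this sketch the weights are fixed example values rather than trained parameters:

```python
import numpy as np

def aggregate_features(feature_vecs, weights):
    # predicted target feature: weighted sum of the per-utterance feature
    # vectors; the weights act as a simple fixed attention over utterances
    return sum(w * z for w, z in zip(weights, feature_vecs))
```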

4 Experimental setups

We evaluate the proposed methods on two datasets. All experiments are conducted on a single Nvidia Tesla V100 GPU. The source code will be released.

4.1 Data collection

We consider two datasets. For both we use a (80%, 10%, 10%) split for training, validation and test respectively.

Twitter data

Training data were extracted from the Twitter FireHose, covering a five-year period from 2012 through 2016 (deleted tweets and closed accounts were removed). From this set, we collected a total of 6,658,385 8-turn dialogues in which two participants chatted with each other.

Maluuba data

The Maluuba dataset consists of 40,389 dialogues with 11 turns. Each dialogue is a task-oriented conversational interaction between two real speakers regarding 51 domains and 242 tasks, collected by crowd-sourcing where one crowd worker simulates a user and another simulates a chatbot.

4.2 System specifications

The dimension of the LSTM hidden layer is set at 500. We use ADAM as the optimizer with a learning rate of 0.0001. The two balancing hyperparameters are set at 0.01 and 0.1, respectively. The feature vectors have dimension 100. For the Maluuba dataset we use a 50% dropout rate in each of the CNN layers. The hyperparameters are selected to maintain the discrimination accuracy while reducing the disentangling loss as much as possible.

For evaluation, we consider three variants of our COnsistent CONversation (CoCon) models: CoCon-T, the CoCon model with topic consistency; CoCon-TP, the CoCon model with topic and persona consistency; and CoCon-TP-bin, the CoCon model using binary features with topic and persona consistency. We compare our models with two baselines: a vanilla sequence-to-sequence model (S2S) and the persona model (Persona) (Li et al., 2016b). We implement the persona model by reusing the encoder and decoder architecture of our approach. For the Twitter dataset, we map all users with fewer than 88 utterances to unknown (86% of the total training samples), and in the test set (for all compared methods) we eliminate conversation sessions with unknown users. This yields 50k users in total. We use the same number of feature dimensions for all systems compared. All modules are trained until convergence.

5 Results

Self-supervised feature learning

We used equal numbers of positive/negative examples to train each feature extractor. For Twitter dataset, the resulting accuracies for topic and persona feature extractor are around and (for both continuous and binary features), respectively. For Maluuba dataset, the discriminator accuracy for persona and topic feature extractors are and , respectively. With the disentangling loss (), the correlation between features drops from 0.25 to 0.16.

Representative n-grams for some learned feature units on the Twitter dataset are shown in Table 1. To calculate the feature vector for a specific n-gram, we average the feature vectors of the test sentences that contain that n-gram. We then select the top-ranked n-grams with more than 200 occurrences for each feature bit. We observe that when $\alpha = 0$, i.e., without the disentangling loss, the learned features exhibit heavy collinearity, which weakens the interpretability of the individual feature units.
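The per-n-gram feature used for Table 1 can be computed as follows; in this sketch, substring matching stands in for proper tokenized n-gram matching:

```python
import numpy as np

def ngram_feature(ngram, sentences, features, min_count=200):
    # average the feature vectors of all test sentences containing the
    # n-gram; ignore n-grams occurring fewer than `min_count` times
    vecs = [f for s, f in zip(sentences, features) if ngram in s]
    if len(vecs) < min_count:
        return None
    return np.mean(vecs, axis=0)
```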

| Topic | 1-gram | 2-gram | 3-gram | 4-gram |
| T-2 (electronic) | android; ios; apps | the iphone; the app; my ipad | a new phone; my phone is; are you using | is on my phone; you can use the; send it to the |
| T-59 (sport) | striker; arsenal; madrid | champions league; best player; the spurs | in the league; in the playoffs; a good game | one of the best; best of the season; the team in the |
| T-18 (movie, show) | episodes; film; netflix | of thrones; that movie; the ending | i watched it; to watch it; the first one | one of my favorite; have you seen the; i want to see |
| Persona | 1-gram | 2-gram | 3-gram | 4-gram |
| P-71 (inquiry) | what; thx; wheres | can u; do u; where is | how do i; is there a; what is the | thank you so much; i need to get; i want to see |
| P-49 (agreement) | yea; great; sure | lol yea; yea i; hell yea | that s so; i will do; yea i m | i would love to; looking forward to it; sounds like a plan |
| P-28 (abbreviation) | hav; urself; wats | u will; u got; for u | i thought u; did u get; do you have | s what s up; i m bout to; do u have any |
| P-83 (african amer.) | aint; homie; yo | n**** i; hell yea; i ain | u ain 't; cuz i am; i 'm tryna | i 'm bout to; i 'm gonna have; ain 't nothing gonna |
Table 1: Representative n-grams in topic (T) and persona (P) features
Context: A:tommorows game just became that much bigger . must win .
B: unbeliebable . mourinho has never lost at home in the league as chelsea coach and against the worst team in the league he loses
A:yup , would be insane if they could somehow avoid relegation now .
B:would be amazing.we played against them at home 3 weeks ago but barely beat them . they are always up for it against the big teams
Human: A:yeah , said they played aggainst top half of table much better then bottom half .
S2S: A:i think they ’ll be fine . . .
Persona: A:i think they ’ll be in the same position .
(Ours) CoCon-T: A:i think they play well . i ’m a big fan of the manager . they ’ve played well in the same league as well .
(Ours) CoCon-TP: A:yeah , they played well with the top three teams , but i ’d say it was . not sure what to expect .
Context: A:lucky you mate . have a super fine weekend buddy
B: i m gonna be knackered next week should be a good weekend though gonna get loads a pics
A:can n’t fault you mate nice mid night start tuesday lol
B:gonna be at the show saturday then home sunday back to work monday . where you off to tomorrow ?
Human: A:one out of cedar farm to march then not sure mate either ipswich or a hiams to march .
S2S: A:not sure . i ’m off for a few days
Persona: A:i ’m off friday night . you going to the game ?
(Ours) CoCon-T: A:i know . i am going to manchester on the weekend . what about you ?
(Ours) CoCon-TP: A:i ’m going to the beach mate . i ’m going for a couple days off . good weekend pal
Table 2: Sampled generation on Twitter.
Figure 4: t-SNE embedding of topic (left) and persona (right) features for different tasks (left) and roles (right).
Context: A:done with the game
B:what are you talking about
A:you heard me
B:if you ’re not there today i ’m going to be mad
Human: A:lol ok i aint going
B:are you trying to be a bod mon or something ?
A:oh you that ’s exactly how i ’m tryna be aha
B:so tell me why you are n’t going ? you just say you ’re not , talk about it
Plain: A:i’m on the vacation. i’m not going to play that game
B:you don’t really mean this, right? you’re n’t going to do that
A:i aint , i told you .
B:well you can n’t be a person like that , you can n’t be a loser
Sport on: A:i'm watching the school football. that's enough for now
B:that's why you're a loser
A:lol no i’m going to be the best player in the league.
B:oh well you’re not going to win man you are terrible at all this time.
Table 3: Examples of bit manipulations. Plain denotes generation without manipulation; Sport on denotes that the corresponding topic bit is activated (set to 1) when generating the response.

We further visualized the topic features on both datasets using t-SNE embeddings (Maaten and Hinton, 2008). For the Maluuba dataset, Figure 4 illustrates the learned topic and persona feature embeddings on the test set. Without any label information, the learned topic and persona features separate well. For the Twitter dataset, we observed that the persona features of utterances from different time zones form clusters, indicating that the features learned by our approach can partially reflect differences between societal groups (see Appendix B).

Sampled response generation

We evaluate our approaches by generating the next response given 4 contextual seed source sentences. Some sampled results are shown in Table 2. We observe that CoCon-T and CoCon-TP are in general able to produce informative responses that are more consistent with the theme of the given context compared with the baselines. For CoCon-TP, beyond being context-aware, the responses appear persona-aware, i.e., mimicking the tone and personal wording preferences such as mate, oh my gosh, haha, ain 't and other words associated with them.

Feature manipulation

We further manipulate feature bits that appear to be associated with certain topics. The results are shown in Table 3 (additional results are provided in the Appendix). We generate the next 4 turns consecutively; each later generation conditions on the previous 4 sentences, including previously generated utterances, as source context. This bit manipulation is based on the binary feature codes and is achieved by setting the specific bit to 1 to activate it. With the additional controllable generation objective $\mathcal{L}_{\mathrm{cyc}}$, we are able to better control the flipping of each bit. As shown in Figure 6 in Appendix B, increasing $\lambda$ leads to a fast decrease of $\mathcal{L}_{\mathrm{cyc}}$ (indicating better controlling power), but may come at the cost of harming generation quality. We select $\lambda$ by trading off between the two aspects. Bit toggling does not always succeed (based on 2,000 tested cases): in some cases where we flip a bit of the input feature from 0 to 1, the corresponding response feature remains 0. Presumably, the model has learned that, given the context, it is unnatural to toggle a certain bit, and refuses to make the change. This hypothesis needs further experimental verification. Controllable generation is still an emergent technology, and the noisy nature of dialogue data makes it even more challenging.

For the Maluuba dataset, we provide sampled responses of S2S and CoCon-TP in Table 8 in the Appendix. The context is given as 4 turns of dialogue, and the task is to generate all remaining 7 turns. During free generation, the S2S model tends to produce looping responses such as thanks - you 're welcome and is generally less informative. In contrast, our proposed CoCon-TP approach generates reasonably well by capturing, without supervision, the topics of the context and the role of each turn.

| Models | BLEU | METEOR | NIST | Greedy | Average | Extreme | Dist-1 | Dist-2 | Ent-4 |
| CoCon-T (ours) | 3.01 | 0.061 | 1.022 | 1.968 | 0.675 | 0.321 | 0.008 | 0.065 | 9.71 |
| CoCon-TP (ours) | 3.31 | 0.064 | 1.135 | 2.048 | 0.683 | 0.342 | 0.008 | 0.081 | 10.46 |
| CoCon-TP-bin (ours) | 3.04 | 0.063 | 1.061 | 2.025 | 0.677 | 0.331 | 0.009 | 0.100 | 10.59 |
| S2S | 2.83 | 0.056 | 0.945 | 1.855 | 0.640 | 0.307 | 0.004 | 0.023 | 7.51 |
| Persona model | 2.96 | 0.059 | 1.014 | 1.931 | 0.658 | 0.319 | 0.005 | 0.028 | 7.96 |
| Human | - | - | - | - | - | - | 0.078 | 0.473 | 11.75 |
Table 4: Quantitative evaluation for the Twitter dataset. BLEU, METEOR, NIST, Greedy, Average and Extreme measure relevance; Dist-1, Dist-2 and Ent-4 measure diversity.
| Models | BLEU | METEOR | NIST | Greedy | Average | Extreme | Dist-1 | Dist-2 | Ent-4 |
| CoCon-T (ours) | 5.6 | 0.076 | 1.421 | 2.105 | 0.565 | 0.358 | 0.025 | 0.142 | 8.899 |
| CoCon-TP (ours) | 5.8 | 0.077 | 1.459 | 2.175 | 0.575 | 0.365 | 0.028 | 0.16 | 8.983 |
| CoCon-TP-bin (ours) | 4.6 | 0.074 | 1.280 | 2.094 | 0.559 | 0.341 | 0.027 | 0.19 | 9.767 |
| S2S | 3.9 | 0.066 | 1.045 | 2.021 | 0.529 | 0.328 | 0.017 | 0.16 | 8.293 |
| Persona model | 4.4 | 0.073 | 1.134 | 2.042 | 0.543 | 0.319 | 0.021 | 0.177 | 8.603 |
| Human | - | - | - | - | - | - | 0.092 | 0.462 | 10.281 |
Table 5: Quantitative evaluation for the Maluuba dataset. BLEU, METEOR, NIST, Greedy, Average and Extreme measure relevance; Dist-1, Dist-2 and Ent-4 measure diversity.
Topic consistency (human judges preferred):
| CoCon-TP (ours) | Neutral | Comparison |
| 45.20% | 22.30% | 32.50% (seq2seq) |
| 40.05% | 23.10% | 36.85% (persona) |
| 21.50% | 26.85% | 51.65% (human) |

Persona consistency (human judges preferred):
| CoCon-TP (ours) | Neutral | Comparison |
| 40.95% | 29.85% | 29.20% (seq2seq) |
| 35.65% | 34.10% | 30.25% (persona) |
| 21.35% | 33.35% | 45.30% (human) |

Table 6: Results of human evaluation for topic and persona consistency, showing preferences (%) for our model (CoCon-TP) vis-a-vis baseline or other comparison systems. Distributions are skewed towards CoCon-TP, except when compared with human outputs. Numbers in bold indicate the most preferred systems. For simplicity, the 5-point Likert scale is collapsed to a 3-point scale. See the Appendix for further details.

Automatic evaluations

In our quantitative evaluations we test both relevance and diversity metrics. For relevance, we adopt BLEU (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2014), NIST (Doddington, 2002) and three embedding-based metrics, Greedy, Average and Extreme, following (Serban et al., 2017; Rus and Lintean, 2012; Mitchell and Lapata, 2008; Forgues et al., 2014). To evaluate diversity, we follow Li et al. (2016a) in using Dist-1 and Dist-2, defined as the ratio of the number of unique n-grams to the total number of n-grams in the tested sentences. We also include the entropy metric (Ent-n) (Zhang et al., 2018b; Gao et al., 2019), which does not depend on the size of the test data. The results of the automatic evaluations are shown in Table 4 (Twitter) and Table 5 (Maluuba). For both datasets, the CoCon-TP model achieves the best relevance scores, while CoCon-TP-bin outperforms the other methods in diversity.
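For concreteness, the diversity metrics can be sketched as follows; these are the standard definitions, and the exact log base and tokenization used in the paper's evaluation are assumptions:

```python
from collections import Counter
import math

def distinct_n(sentences, n):
    # Dist-n: number of unique n-grams divided by the total number of n-grams
    grams = []
    for s in sentences:
        toks = s.split()
        grams += [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]
    return len(set(grams)) / max(len(grams), 1)

def entropy_n(sentences, n):
    # Ent-n: entropy of the empirical n-gram distribution; unlike Dist-n,
    # it is less sensitive to the size of the test set
    counts = Counter()
    for s in sentences:
        toks = s.split()
        counts.update(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    total = sum(counts.values())
    return -sum(c / total * math.log(c / total) for c in counts.values())
```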

Human evaluations

We evaluated 500 randomly sampled test sources from the Twitter dataset using crowdsourcing provided by a contracting service. Systems were paired, and each pair of system outputs was randomly presented to 4 judges, who ranked them for topic consistency, persona consistency, informativeness, and relevance using a 5-point Likert scale. Overall judges' preferences for topic and persona consistency, given as percentages of total judgments, are shown in Table 6. A strong overall preference can be observed for CoCon-TP over the other systems evaluated. We also evaluated relevance and informativeness, with CoCon-TP showing similar preference gains. Further details, including the human evaluation template used, are provided in the Appendix.
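The collapse from the 5-point Likert scale to the 3-point scale reported in Table 6 can be sketched as follows (a minimal illustration; the input is the CoCon-TP vs. seq2seq topic-consistency row from Table 9):

```python
def collapse_likert(dist):
    """Collapse a 5-point preference distribution (percentages for
    ratings 5, 4, 3, 2, 1) into ours / neutral / comparison buckets."""
    five, four, three, two, one = dist
    return {"ours": five + four, "neutral": three, "comparison": two + one}

# CoCon-TP vs. seq2seq topic-consistency row from Table 9:
result = collapse_likert([12.60, 32.60, 22.30, 24.45, 8.05])
# reproduces the corresponding Table 6 entries: 45.20 / 22.30 / 32.50
```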

6 Conclusion

We present a self-supervised feature learning framework that abstracts high-level latent representations of the topic and persona information underlying the dialogue context, and leverages these representations to generate more consistent dialogue in a controllable manner. For future work, investigating variance-reduction strategies for controllable text generation could improve the controllability of the feature units. In addition, combining and aligning supervised and unsupervised features could enable better feature learning and interpretability. Our approach can also be adapted to style transfer and long-form text generation (Guo et al., 2018; Zhang et al., 2017) to improve generation consistency.

References

  • Al-Rfou et al. (2016) Rami Al-Rfou, Marc Pickett, Javier Snaider, Yun-hsuan Sung, Brian Strope, and Ray Kurzweil. 2016. Conversational contextual cues: The case of personalization and history for response ranking. arXiv.
  • Asghar et al. (2018) Nabiha Asghar, Pascal Poupart, Jesse Hoey, Xin Jiang, and Lili Mou. 2018. Affective neural response generation. In ECIR, pages 154–166. Springer.
  • Bengio et al. (2013) Yoshua Bengio, Nicholas Léonard, and Aaron Courville. 2013. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv.
  • Chung et al. (2016) Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. 2016. Hierarchical multiscale recurrent neural networks. arXiv preprint arXiv:1609.01704.
  • Cogswell et al. (2015) Michael Cogswell, Faruk Ahmed, Ross Girshick, Larry Zitnick, and Dhruv Batra. 2015. Reducing overfitting in deep networks by decorrelating representations. arXiv preprint arXiv:1511.06068.
  • Denkowski and Lavie (2014) Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In workshop on statistical machine translation, pages 376–380.
  • Denton and Vighnesh (2017) Emily L Denton and Birodkar Vighnesh. 2017. Unsupervised learning of disentangled representations from video. In NIPS, pages 4414–4423.
  • Doddington (2002) George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. In Proceedings of the second international conference on Human Language Technology Research, pages 138–145. Morgan Kaufmann Publishers Inc.
  • Doersch et al. (2015) Carl Doersch, Abhinav Gupta, and Alexei A Efros. 2015. Unsupervised visual representation learning by context prediction. In ICCV, pages 1422–1430.
  • Dong et al. (2017) Li Dong, Shaohan Huang, Furu Wei, Mirella Lapata, Ming Zhou, and Ke Xu. 2017. Learning to generate product reviews from attributes. In EACL, volume 1, pages 623–632.
  • Ficler and Goldberg (2017) Jessica Ficler and Yoav Goldberg. 2017. Controlling linguistic style aspects in neural language generation. EMNLP, page 94.
  • Forgues et al. (2014) Gabriel Forgues, Joelle Pineau, Jean-Marie Larchevêque, and Réal Tremblay. 2014. Bootstrapping dialog systems with word embeddings. In NIPS, modern machine learning and natural language processing workshop.
  • Gao et al. (2018) Jianfeng Gao, Michel Galley, and Lihong Li. 2018. Neural approaches to conversational AI. arXiv preprint arXiv:1809.08267.
  • Gao et al. (2019) Xiang Gao, Sungjin Lee, Yizhe Zhang, Chris Brockett, Michel Galley, Jianfeng Gao, and Bill Dolan. 2019. Jointly optimizing diversity and relevance in neural response generation. arXiv preprint arXiv:1902.11205.
  • Ghosh et al. (2017) Sayan Ghosh, Mathieu Chollet, Eugene Laksana, Louis-Philippe Morency, and Stefan Scherer. 2017. Affect-LM: A neural language model for customizable affective text generation. In ACL, volume 1, pages 634–642.
  • Gumbel and Lieblein (1954) Emil Julius Gumbel and Julius Lieblein. 1954. Statistical theory of extreme values and some practical applications: a series of lectures. US Government Printing Office Washington.
  • Guo et al. (2018) Jiaxian Guo, Sidi Lu, Han Cai, Weinan Zhang, Yong Yu, and Jun Wang. 2018. Long text generation via adversarial training with leaked information. In AAAI.
  • Hu et al. (2017) Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. Toward controlled generation of text. In International Conference on Machine Learning, pages 1587–1596.
  • Jang et al. (2016) Eric Jang, Shixiang Gu, and Ben Poole. 2016. Categorical reparameterization with gumbel-softmax. arXiv.
  • Ke et al. (2018) Pei Ke, Jian Guan, Minlie Huang, and Xiaoyan Zhu. 2018. Generating informative responses with controlled sentence function. In ACL, volume 1, pages 1499–1508.
  • Li et al. (2016a) Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In NAACL.
  • Li et al. (2016b) Jiwei Li, Michel Galley, Chris Brockett, Georgios P Spithourakis, Jianfeng Gao, and Bill Dolan. 2016b. A persona-based neural conversation model. ACL.
  • Luan et al. (2017) Yi Luan, Chris Brockett, Bill Dolan, Jianfeng Gao, and Michel Galley. 2017. Multi-task learning for speaker-role adaptation in neural conversation models. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 605–614.
  • Luan et al. (2016) Yi Luan, Yangfeng Ji, Hannaneh Hajishirzi, and Boyang Li. 2016. Multiplicative representations for unsupervised semantic role induction. In ACL.
  • Maaten and Hinton (2008) Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605.
  • Mitchell and Lapata (2008) Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In ACL.
  • Owens and Efros (2018) Andrew Owens and Alexei A Efros. 2018. Audio-visual scene analysis with self-supervised multisensory features. In ECCV.
  • Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL.
  • Qian et al. (2018) Qiao Qian, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. 2018. Assigning personality/identity to a chatting machine for coherent conversation generation. In IJCAI.
  • Rus and Lintean (2012) Vasile Rus and Mihai Lintean. 2012. A comparison of greedy and optimal assessment of natural language student input using word-to-word similarity metrics. In Proceedings of the Seventh Workshop on Building Educational Applications Using NLP.
  • Sacks and Schegloff (1973) Harvey Sacks and Emanuel A. Schegloff. 1973. Opening up closings. Semiotica, 8(4):289–327.
  • Serban et al. (2016) Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C Courville, and Joelle Pineau. 2016. Hierarchical neural network generative models for movie dialogues. In AAAI.
  • Serban et al. (2017) Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In AAAI.
  • Wang et al. (2017) Di Wang, Nebojsa Jojic, Chris Brockett, and Eric Nyberg. 2017. Steering output style and topic in neural response generation. In EMNLP, pages 2140–2150.
  • Welleck et al. (2018) Sean Welleck, Jason Weston, Arthur Szlam, and Kyunghyun Cho. 2018. Dialogue natural language inference. arXiv preprint arXiv:1811.00671.
  • Williams and Zipser (1989) Ronald J Williams and David Zipser. 1989. A learning algorithm for continually running fully recurrent neural networks. Neural computation, 1(2):270–280.
  • Wu et al. (2018) Yu Wu, Zhoujun Li, Wei Wu, and Ming Zhou. 2018. Response selection with topic clues for retrieval-based chatbots. Neurocomputing, 316:251–261.
  • Xing et al. (2017) Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2017. Topic aware neural response generation. In AAAI, volume 17, pages 3351–3357.
  • Yu et al. (2017) Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Seqgan: sequence generative adversarial nets with policy gradient. In AAAI.
  • Zhang et al. (2018a) Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018a. Personalizing dialogue agents: I have a dog, do you have pets too? arXiv preprint arXiv:1801.07243.
  • Zhang et al. (2018b) Yizhe Zhang, Michel Galley, Jianfeng Gao, Zhe Gan, Xiujun Li, Chris Brockett, and Bill Dolan. 2018b. Generating informative and diverse conversational responses via adversarial information maximization. In NeurIPS.
  • Zhang et al. (2017) Yizhe Zhang, Zhe Gan, Kai Fan, Zhi Chen, Ricardo Henao, Dinghan Shen, and Lawrence Carin. 2017. Adversarial feature matching for text generation. In ICML.
  • Zhao et al. (2018) Tiancheng Zhao, Kyusong Lee, and Maxine Eskenazi. 2018. Unsupervised discrete sentence representation learning for interpretable neural dialog generation. arXiv preprint arXiv:1804.08069.
  • Zhou et al. (2018) Hao Zhou, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu, and Bing Liu. 2018. Emotional chatting machine: Emotional conversation generation with internal and external memory. In AAAI.

Appendix A Straight-through LSTM (ST-LSTM)

In the forward computation, the t-th ST-LSTM unit takes the previously generated word y_{t-1}, the hidden state h_{t-1}, and the cell state c_{t-1} as input, and generates the next word y_t that maximizes the probability over the vocabulary; that is, the argmax operation is used. In the backward computation, however, the gradient of the argmax is approximated by the gradient of a temperature-scaled softmax over the vocabulary logits, where τ is the temperature parameter. Presumably, this approach delivers a biased but smooth gradient signal while keeping the forward computation exact.
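A minimal sketch of the straight-through argmax, in plain Python for illustration (in practice this lives inside an autograd framework, where the backward pass reuses the gradient of the temperature-scaled softmax):

```python
import math

def softmax(logits, tau=1.0):
    # Temperature-scaled softmax; its gradient is what the backward pass uses.
    exps = [math.exp(l / tau) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def st_argmax(logits, tau=1.0):
    """Straight-through argmax: the forward pass emits a hard one-hot vector,
    while the backward pass would flow gradients through softmax(logits / tau).
    In an autograd framework this is typically written as
        y = (y_hard - y_soft).detach() + y_soft
    so that y equals y_hard in value but has y_soft's gradient."""
    y_soft = softmax(logits, tau)
    k = max(range(len(logits)), key=lambda i: logits[i])
    y_hard = [1.0 if i == k else 0.0 for i in range(len(logits))]
    return y_hard, y_soft
```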

Appendix B Twitter additional results

Figure 5 shows the t-SNE embedding of topic features for speakers in different time zones, demonstrating that our self-supervised approach learns representations that reasonably separate different users. Here we use 2,000 test utterances from speakers in 4 different time zones (500 for each).

Figure 5: t-SNE embedding of topic features for speakers in different time zones.

Figure 6 shows that the controlling loss decreases faster when its weight is larger, indicating that the additional controlling loss can be effectively reduced using straight-through training.

Figure 6: The controlling loss decreases faster when its weight is larger.

Table 7 shows additional results on controlling the "electronic" feature bit, supplementing Table 3.

Context: A:planning on making a trip to hawaii
B:lol if only people knew what we are really going there for lolol
A:hahaha . you we might get judge a little
B:a little ? probably a lot haha
plain: A:it’s okay. we should totally go together. we can become friends
B:lol i’m not sure. i’m going to vegas instead!
A:haha. i feel you. i just want to get out of all of these.
B:let’s get on our way!!
electronic on: A:it’s okay. we should totally go together. i will friend you on fb
B:lol sounds good. i will use my phone
A:sure. let me add you to my contact
B:i’m going to do that. i will add you too.
Table 7: Examples of bit manipulations. Plain denotes generation without manipulation; on denotes that the bit is activated (set to 1) when generating the response.
Source: A:hello how may i help you ?
U:hi i want to look up a play
A:what play
U:what time does the secret miracle opening on ?
Human: A:the secret miracle is scheduled for 8 pm on saturday
U:ok book me for tickets
A:ok sure
U:how much is that ?
A:50
U:ok book it now
A:done
CoCon-TP: A:9:00 PM friday
U:are there any tickets available for saturday there ?
A:yes tickets do have a deal at 8 pm
U:ok book me for tickets
A:ok ticket
U:reserve for 3 seats please
A:okay done
S2S: A:it ’s on saturday
U:and what time ?
A:2:00 PM
U:thank you
A:you ’re welcome
U:thanks for the help
A:you ’re welcome
Source: A:hello how may i help you ?
U:i need to learn more about the rules for tag
A:i can certainly help with game rules . what do you want to learn about the game tag ?
U:how do i pick the first person to be ’ it ’
Human: A: the first person to ask who is it is considered ’ it ’.
U:hm , that makes sense . how do you know who is the winner ?
A:you could keep track of times a person has been ’ it ’
U:oh , okay thanks so much
A:can i help you with something else today ?
U:no , that will be all today
A:thanks for taking the time to ask me a question !
CoCon-TP: A:the game of tag is a person who gets tagged the least .
U:is there a different rule for tag ?
A:yes , i can help you with that . there is a lot of tag
U:okay , what is the best time to play the most ?
A:the winner is the person that gets tagged the least
U:oh , okay , i ’ll play that game . thanks for the tips .
A:you are welcome . i am always here when you need to learn .
S2S: A:the person who asks who is it is declared ’ it ’.
U:okay , thank you
A:you ’re welcome .
U:okay , thanks
A:you ’re welcome !
U:have a good day
A:you too .
Table 8: Generated responses for the Maluuba dataset (consecutive generation).
Figure 7: Questionnaire used to elicit pairwise judgments from crowdsourced annotators. Candidate responses were presented in random order.

Appendix C Maluuba results

We provide generated samples for the Maluuba dataset in Table 8. All compared models use the first 4 turns as a seed and generate the remaining 7 turns, taking the 4 previous turns as context (the previous turns are ground-truth utterances for the first 4 turns and generated utterances thereafter).
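The consecutive generation scheme above can be sketched as follows (a minimal illustration; `model` stands in for any response generator mapping a list of context turns to the next turn):

```python
def consecutive_generation(seed_turns, model, total_turns=11, window=4):
    """Seed the dialogue with ground-truth turns, then generate the rest,
    always conditioning on the `window` most recent turns, whether they are
    ground truth or previously generated."""
    history = list(seed_turns)
    while len(history) < total_turns:
        context = history[-window:]          # sliding 4-turn context
        history.append(model(context))      # append the generated turn
    return history
```

With a 4-turn seed and `total_turns=11`, this produces the 7 generated turns reported in Table 8.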

Appendix D Human evaluation

Human evaluation was conducted using the form shown in Figure 7. The two response candidates were presented in random order to the judges, who used a Likert scale to indicate their preferences. To make the questionnaire less abstract to judges, persona was evaluated in terms of which response better reflected the tone and style of Person A as observable in the prior turns. The distributions of judgments for each of the questions are shown in Tables 9 through 12.

Distribution of Pairwise Topic Consistency Preferences

| Our Method | 5 | 4 | 3 | 2 | 1 | Comparison |
|---|---|---|---|---|---|---|
| CoCon-TP | 12.60% | 32.60% | 22.30% | 24.45% | 8.05% | seq2seq |
| CoCon-TP | 10.20% | 29.85% | 23.10% | 28.75% | 8.10% | persona |
| CoCon-TP | 5.10% | 16.40% | 26.85% | 33.70% | 17.95% | human |

Table 9: Distribution of topical consistency preferences (%) for our model (CoCon-TP) compared with seq2seq, persona, and human outputs, according to a five-point Likert scale. A 5 indicates a strong preference for CoCon-TP; a 1 indicates a strong preference for the alternative system.
Distribution of Pairwise Persona Preferences

| Our Method | 5 | 4 | 3 | 2 | 1 | Comparison |
|---|---|---|---|---|---|---|
| CoCon-TP | 11.30% | 29.65% | 29.85% | 22.50% | 6.70% | seq2seq |
| CoCon-TP | 8.30% | 27.35% | 34.10% | 23.70% | 6.55% | persona |
| CoCon-TP | 4.45% | 16.90% | 33.35% | 30.95% | 14.35% | human |

Table 10: Distribution of persona consistency preferences (%) for our model (CoCon-TP) compared with seq2seq, persona, and human outputs, according to a five-point Likert scale. A 5 indicates a strong preference for CoCon-TP; a 1 indicates a strong preference for the alternative system.
Distribution of Pairwise Relevance Preferences

| Our Method | 5 | 4 | 3 | 2 | 1 | Comparison |
|---|---|---|---|---|---|---|
| CoCon-TP | 13.70% | 31.05% | 24.15% | 22.80% | 8.30% | seq2seq |
| CoCon-TP | 10.15% | 28.60% | 25.55% | 25.85% | 9.85% | persona |
| CoCon-TP | 4.75% | 15.70% | 28.10% | 31.95% | 19.50% | human |

Table 11: Distribution of relevance preferences (%) for our model (CoCon-TP) compared with seq2seq, persona, and human outputs, according to a five-point Likert scale. A 5 indicates a strong preference for CoCon-TP; a 1 indicates a strong preference for the alternative system.
Distribution of Pairwise Informativeness Preferences

| Our Method | 5 | 4 | 3 | 2 | 1 | Comparison |
|---|---|---|---|---|---|---|
| CoCon-TP | 12.85% | 29.95% | 27.90% | 22.60% | 6.70% | seq2seq |
| CoCon-TP | 10.45% | 28.20% | 30.20% | 23.55% | 7.60% | persona |
| CoCon-TP | 4.40% | 15.05% | 29.80% | 32.70% | 18.05% | human |

Table 12: Distribution of informativeness preferences (%) for our model (CoCon-TP) compared with seq2seq, persona, and human outputs, according to a five-point Likert scale. A 5 indicates a strong preference for CoCon-TP; a 1 indicates a strong preference for the alternative system.