Towards Facilitating Empathic Conversations in Online Mental Health Support: A Reinforcement Learning Approach

Abstract.

Online peer-to-peer support platforms enable conversations between millions of people who seek and provide mental health support. If successful, web-based mental health conversations could improve access to treatment and reduce the global disease burden. Psychologists have repeatedly demonstrated that empathy, the ability to understand and feel the emotions and experiences of others, is a key component leading to positive outcomes in supportive conversations. However, recent studies have shown that highly empathic conversations are rare in online mental health platforms.

In this paper, we work towards improving empathy in online mental health support conversations. We introduce a new task of empathic rewriting which aims to transform low-empathy conversational posts to higher empathy. Learning such transformations is challenging and requires a deep understanding of empathy while maintaining conversation quality through text fluency and specificity to the conversational context. Here we propose Partner, a deep reinforcement learning (RL) agent that learns to make sentence-level edits to posts in order to increase the expressed level of empathy while maintaining conversation quality. Our RL agent leverages a policy network, based on a transformer language model adapted from GPT-2, which performs the dual task of generating candidate empathic sentences and adding those sentences at appropriate positions. During training, we reward transformations that increase empathy in posts while maintaining text fluency, context specificity, and diversity. Through a combination of automatic and human evaluation, we demonstrate that Partner successfully generates more empathic, specific, and diverse responses and outperforms NLP methods from related tasks such as style transfer and empathic dialogue generation. This work has direct implications for facilitating empathic conversations on web-based platforms.

1. Introduction

Figure 1. An overview of the empathic rewriting task. Given a post from a support seeker and a low-empathy response, the task is to rewrite the response to make it more empathic, through text insertions and deletions. This task requires inferring specific feelings and experiences from the seeker's post and using them to make appropriate changes to the response through empathic mechanisms like emotional reactions, interpretations, and explorations (Sharma et al., 2020).

Online mental health support platforms such as TalkLife (talklife.co) are used by millions of users for expressing emotions, sharing stigmatized experiences, and receiving peer support. These platforms might help improve access to mental health support, as mental health care remains a global challenge with widespread workforce shortages (Olfson, 2016), limited in-person treatment options, and other barriers like stigma (White and Dorman, 2001). A key component of providing successful support is empathy, the ability to understand or feel the emotions and experiences of others (Elliott et al., 2011). Quantitative evidence shows that empathic interactions have strong associations with symptom improvement in mental health support (Elliott et al., 2018) and are instrumental in building therapeutic alliance and rapport (Bohart et al., 2002; Robert et al., 2011). Yet, highly empathic conversations are rare on online support platforms (Sharma et al., 2020).

Empowering peer supporters on online support platforms with feedback and training, for example through machine-in-the-loop writing systems (Clark et al., 2018; Tanana et al., 2019), has the potential to help supporters express higher levels of empathy and in turn improve the effectiveness of these platforms (Miner et al., 2019; Imel et al., 2015; Sharma et al., 2020). Traditional methods for training empathy (e.g., in-person counselor training) do not scale to the millions of users of online support platforms. However, computational methods that support peer supporters by suggesting ways to modify existing conversation utterances to make them more empathic may help meet this need for feedback and training, and indirectly benefit support seekers on the platform.

In this paper, we introduce Empathic Rewriting, a new task that aims to transform low-empathy conversational posts to higher empathy (Figure 1). For example, given a post from a support seeker "I can't deal with this part of my bipolar. I need help." and a corresponding low-empathy response "Don't worry! Try to relax. Anyone you can talk to?", we want to increase empathy in the response by transforming it to "Being manic is no fun. It's scary! I'm so sorry to hear this is troubling you. Try to relax. Anyone you can talk to?"; the rewritten response should communicate more empathy through understanding of feelings and experiences ("Being manic is no fun. It's scary") and display of felt emotions ("I'm so sorry to hear this is troubling you"). Performing such transformations is a challenging task: First, empathy is a complex, conceptually nuanced construct and requires understanding the feelings and experiences shared by the support seeker. In the example above, one needs to understand that being "bipolar" can be "scary", involves "manic" phases, and communicate this in the response. Second, for empathic rewriting to be purposeful, it should not undermine other conversation goals like language fluency, context specificity, and diversity. Making changes that lead to ungrammatical posts with empathic portions (e.g., "Scary it is manic being") may not be helpful and may obstruct useful feedback. Further, making the same transformation to every response (e.g., rewriting every response to "I understand how you feel") would lead to non-specific and generic responses, reducing the overall conversational quality (See et al., 2019; Li et al., 2016a). Third, the task of empathic rewriting requires changes that go beyond simple word-level transformations, often requiring multiple new sentences to be added or replaced (e.g., three sentence insertions and one sentence removal in the example in Figure 1). This differs from related style transfer tasks (Shen et al., 2017; Li et al., 2018) where changing a single word may suffice for transferring from negative to positive sentiment (e.g., replacing "bad" with "good" in the sentence "the movie was bad"). Finally, supervised methods commonly used for similar tasks such as style transfer (Shen et al., 2017; Li et al., 2018) and content debiasing (Pryzant et al., 2020; Ma et al., 2020) usually require a large parallel dataset. Such a dataset is not yet available for empathic rewriting and is hard to collect, as it would require a large number of clinical psychologists and counselors well-versed in the complex construct of empathy.

To address the challenges described above, we propose Partner (emPAthic RewriTing in meNtal hEalth suppoRt), a deep reinforcement learning (RL) model for the task of empathic rewriting (Section 5). We design an RL agent that learns to add new empathic sentences to posts or replace existing sentences in posts with more empathic ones. The agent operates on a pair of seeker post and original response post (which is rarely highly empathic (Sharma et al., 2020)) and makes edits to the response at the level of a sentence by simultaneously (a) identifying positions in the original response post where changes are required, and (b) generating empathic sentences for insertion or replacement at the identified positions (Section 5.3). We model this agent using a policy network based on a transformer decoder model adapted from GPT-2 (Radford et al., 2019). We build upon existing large-scale pre-training of GPT-2 on conversations, as done in DialoGPT (Zhang et al., 2020), and modify it to perform the two simultaneous actions of identifying positions and generating empathic sentences for empathic rewriting (Section 5.4). Through carefully constructed scoring functions, we reward transformations that increase empathy in posts while maintaining text fluency, context specificity, and diversity (Section 5.5).

Evaluating complex conversational constructs such as empathy is fundamentally challenging (Sharma et al., 2020). Therefore, we combine comprehensive automatic evaluation with expert-based human evaluation. Our experiments demonstrate that Partner can effectively increase empathy in posts in fluent, specific, and diverse ways and outperforms baselines used in related sequence-to-sequence generation tasks in empathy improvement (Section 6). Also, Partner is the only approach that consistently improves empathy and does not lead to a loss of empathy when rewriting an already highly empathic post, while all baselines tend to propose a large number of edits that only make the situation worse (Section 6.1). Lastly, through comprehensive human evaluation, we show that experts in clinical psychology prefer rewritings from Partner over baselines in terms of empathy, specificity, and fluency (Section 6.4). We view our approach and findings as a key step towards building AI systems for facilitating empathic conversations on online mental health support platforms, but these insights may generalize beyond mental health to other conversational settings on web-based platforms. We share our code publicly at https://github.com/behavioral-data/PARTNER.

2. Related Work

We build upon prior work on NLP for online mental health support, empathic dialogue generation, reinforcement learning for text rewriting and natural language generation, and AI-assisted writing.

2.1. NLP for online mental health support

Broadly, our work relates to existing research on NLP for online mental health support. These efforts have predominantly focused on analyzing techniques that are effective for seeking and providing conversational support such as adaptability to various contexts and diversity of responses (Althoff et al., 2016; Pérez-Rosas et al., 2019; Zhang and Danescu-Niculescu-Mizil, 2020; Sharma and De Choudhury, 2018; Yang et al., 2019). Researchers have also built methods for identifying therapeutic actions (Lee et al., 2019), quantifying language development of counselors (Zhang et al., 2019), and detecting cognitive restructuring (Pruksachatkun et al., 2019) in supportive conversations. Here, we focus on a particular conversation technique – empathy – which is key in counseling and mental health support (Castonguay and Hill, 2017; Elliott et al., 2011). Our work builds on previous efforts on understanding and building computational methods for identifying empathy in online health communities (Khanpour et al., 2017), face-to-face therapy (Gibson et al., 2016; Pérez-Rosas et al., 2017), and text-based peer-to-peer support (Sharma et al., 2020). We extend this work by learning to improve empathy in online mental health support conversations through a reinforcement learning method for empathic rewriting (Section 5).

2.2. Empathic dialogue generation

Our task of empathic rewriting is related to empathic dialogue generation but has a key difference: it involves making empathic changes to existing responses instead of generating new responses from scratch. While research on generating empathic dialogue has mainly focused on chit-chat, open-domain conversations (Rashkin et al., 2019; Lin et al., 2019; Majumder et al., 2020), we work on conversations in online mental health support. Moreover, most empathic dialogue generation methods enable empathic conversations through emotional grounding (Rashkin et al., 2019) or emotion mimicking (Majumder et al., 2020). In mental health support, however, communicating the cognitive aspects of empathy related to understanding the experiences and feelings of others is more valued by mental health professionals (Sharma et al., 2020; Truax and Carkhuff, 1967; Selman, 1980). We extend this work with the task of empathic rewriting (Section 4) and by leveraging both emotional and cognitive aspects of empathy, using a theoretically-grounded framework of empathy (Sharma et al., 2020) (Section 5).

2.3. Text rewriting and AI-assisted systems

Text rewriting is a broad subarea in natural language processing that includes tasks such as style transfer (Shen et al., 2017; Li et al., 2018), content debiasing (Pryzant et al., 2020; Ma et al., 2020), and controllable text generation (Hu et al., 2017). We propose empathic rewriting as a new text rewriting task in which conversational utterances are rewritten to increase their empathy (Section 4). This task presents unique challenges different from other text rewriting tasks: it requires understanding empathy in conversational contexts and leveraging that understanding to make empathic changes while ensuring high conversational quality in terms of language fluency, context specificity, and diversity.

Here, we propose a reinforcement learning (RL) model for the task of empathic rewriting (Section 5). Previous work has used RL for the task of sentiment transfer (Luo et al., 2019), using only text generations as actions. Here, we design an RL agent that simultaneously learns to (a) identify positions for making improvements and (b) generate empathic sentences for insertion or replacement at the identified positions. These actions are important because the task of empathic rewriting requires changes that go beyond the simple word-level transformations common in sentiment transfer tasks (e.g., changing "bland" to "delicious" in "the food was bland" for transferring from negative to positive sentiment).

Prior work has built systems that leverage identification of effective conversational strategies such as asking open-ended questions for training users in counseling (Huang et al., 2020). Computational methods that can perform empathic rewriting can be used for suggesting ways to make conversations more empathic in similar feedback and training systems for mental health support and counseling. In a related context, researchers have built AI tools for writing assistance in negotiations (Zhou et al., 2019), composing emails (Chen et al., 2019), language translation (Santy et al., 2019), creative writing (Clark et al., 2018), and communication of politeness (Fu and Danescu-Niculescu-Mizil, 2020).

3. Dataset Description

In this section, we describe the dataset used for the task of empathic rewriting.

3.1. The TalkLife platform

TalkLife (talklife.co) is the largest online peer-to-peer platform for mental health support. It enables conversations between people seeking support (support seekers) and people providing support (peer supporters) in a thread-like setting. We call the post authored by a support seeker the seeker post and the response by a peer supporter the response post. Table 1 describes the statistics of conversational threads on the TalkLife platform.

Curating mental health-related conversations. As noted by Sharma et al. (Sharma et al., 2020), the TalkLife platform hosts a significant number of common social media interactions (e.g., "Happy mother's day"). Here, we focus our analyses on mental health-related conversations and filter out such posts. We manually annotate 3k posts with answers to the question "Is the seeker talking about a mental health related issue or situation in his/her post?". Using this annotated dataset, we train a standard text classifier based on BERT (Devlin et al., 2019) (achieving an accuracy of 85%). We apply this classifier to the entire TalkLife dataset and create a filtered dataset of mental health-related conversations. This dataset contains 3.33M interactions from 1.48M seeker posts.
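The following is a minimal sketch of how such a filter could be applied at inference time, assuming a HuggingFace-style setup; the base checkpoint name, label scheme, and threshold are illustrative assumptions, and the classifier would first be fine-tuned on the 3k annotated posts.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# A binary classifier: mental-health-related vs. not. "bert-base-uncased" is
# an assumed base model; in practice this would be a fine-tuned checkpoint.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

def is_mental_health_related(seeker_post: str, threshold: float = 0.5) -> bool:
    """Apply the (fine-tuned) filter to a single seeker post."""
    inputs = tokenizer(seeker_post, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    prob = torch.softmax(logits, dim=-1)[0, 1].item()  # P(mental health)
    return prob >= threshold
```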

Dataset Statistics        TalkLife
# of Seeker posts         10.9M
# of Response posts       26.9M
# of Users                642K
Observation Period        May 2012 to June 2020
Table 1. Statistics of the TalkLife dataset.
Figure 2. Expression of high levels of empathy is very low in online support platforms, especially for Interpretations (IP) and Explorations (EX). Emotional reactions (ER) are slightly more common.

3.2. Creating a dataset of empathic posts

Training supervised methods would require a large parallel dataset of corresponding pairs of posts with low and high empathy, respectively. As empathy is a complex phenomenon, collecting such a dataset is challenging and would likely require psychology experts. Here, we create a large non-parallel dataset with empathy measurements for training unsupervised and self-supervised computational models and a small parallel dataset with expert empathic rewritings for conducting evaluations.

Computational labeling with empathy measurements. We computationally label our dataset of 3.33M interactions with empathy measurements using a recently proposed framework of expressed empathy in mental health support (Sharma et al., 2020). This framework consists of three empathy communication mechanisms – (1) Emotional Reactions (expressing emotions such as warmth and compassion), (2) Interpretations (communicating an understanding of feelings and experiences), and (3) Explorations (improving understanding of the seeker by exploring feelings and experiences). For each communication mechanism, the authors design a three-point scale (0 to 2). We computationally label all pairs of (seeker post, response post) in our dataset based on this empathy scale. For this, we use a classification model (RoBERTa-based, bi-encoder attention, with an accuracy of 80%) developed by Sharma et al. (Sharma et al., 2020). Figure 2 shows the resulting statistics, which indicate that high levels of expressed empathy are uncommon in online support platforms, highlighting the need for systems that improve empathy (e.g., through feedback using empathic rewriting (Section 4)). We use this dataset for supervised warm-start training in our reinforcement learning model (Section 5.6) and for training unsupervised baselines (Section 6.2).

Expert empathic rewritings. Additionally, we create a small parallel dataset of 180 pairs of corresponding low-empathy and rewritten high-empathy response posts, with rewritings from people with substantial expertise in empathy, mental health, and therapy (six graduate students in clinical psychology; none are co-authors). We showed them pairs of seeker and response posts and asked them to modify the response post to improve its empathy. This expert-based dataset is designed to represent the best possible responses, and we use it as ground truth for evaluation (Section 6.4).

3.3. Privacy, ethics, and disclosure

The dataset was sourced with license and consent from the TalkLife platform. All personally identifiable information (user and platform identifiers) in our dataset was removed. This work was approved by our Institutional Review Board. We do not make any treatment recommendations or diagnostic claims.

Towards preventing unsafe rewritings. We acknowledge that building computational models for intervention in high-stakes settings such as mental health necessitates ethical considerations. There is a risk that, in attempting to help, responses could have the opposite effect, which could be deadly in cases of self-harm. No current computational approach will identify and respond to harm-related utterances perfectly (Miner et al., 2020). Thus, risk mitigation steps are appropriate in this context. Here, in collaboration with mental health professionals, we remove all posts containing pre-defined unsafe regular expressions (e.g., "*[want to commit suicide]*") from our analyses and training. Future work testing or deploying AI systems should assess safety-related risk, as well as potential sources of bias (e.g., race, ethnicity, age, or gender bias in training data or models).

4. Problem Definition and Goals

We formulate empathic rewriting and state the associated goals.

4.1. Empathic Rewriting

We introduce empathic rewriting, a new task that aims to transform low-empathy conversational posts to higher empathy. In contrast with empathic dialogue generation (Rashkin et al., 2019; Lin et al., 2019; Majumder et al., 2020), where the objective is to generate empathic posts from scratch, this task requires making changes to existing posts in order to make them empathic. This is more consistent with realistic use-cases in difficult, high-stakes settings such as online support systems, which are likely to augment, rather than replace, humans (Miner et al., 2019).

Formally, let $S$ be a seeker post and $R$ be a corresponding response post. We aim to transform $R$ into its more empathic counterpart $\hat{R}$.

Figure 3. Partner uses a deep reinforcement learning approach for Empathic Rewriting. It leverages a transformer language model for performing the two actions of (1) selecting positions for insertion or replacement and (2) generating candidate empathic sentences. It uses four reward functions that promote increased empathy, text fluency, sentence coherence, context specificity, and diversity.

4.2. Goals

For empathic rewriting to be useful in improving mental health support conversations, the rewriting process should achieve specific goals related to empathy, conversation and natural language generation quality, and purposeful and precise feedback:

Theoretically-grounded empathy. Empathy is complex and conceptually nuanced; over time psychology research has emphasized multiple aspects of empathy (Bohart and Greenberg, 1997; Duan and Hill, 1996; Batson, 2009; Davis, 1980). For example, computational research typically defines empathy as reacting with emotions of warmth and compassion (Buechel et al., 2018). However, psychotherapy research emphasizes aspects of empathy related to communicating cognitive understanding of feelings and experiences of others  (Selman, 1980). For empathic rewriting to be useful and potentially adopted in online mental health support, we need to design methods grounded in psychology and psychotherapy research. Here, we adopt the theoretically-grounded framework of empathy designed by Sharma et al. (Sharma et al., 2020). We leverage empathy measurements based on this framework as (1) reward signals in our model for empathic rewriting (Section 5.5), and (2) an automatic evaluation metric for judging improvements in empathy from various rewriting models (Section 6.3).

Context specificity and response diversity. Consider a rewriting approach that transforms every response to a generic but empathic response (e.g., "That must have been really hard for you"). While this approach may seem to "solve" empathic rewriting, it suffers from two key issues. First, the responses generated by this approach would lack specificity to the emotions and experiences shared in the seeker post, which is important for empathy and effective mental health support (Robert et al., 2011; Majumder et al., 2020). Second, performing this same transformation to millions of responses on online platforms would dramatically reduce response diversity, which has been shown to be important for mental health support (Althoff et al., 2016) as well as in general dialogue research (See et al., 2019; Li et al., 2016a).

Thus, the task of empathic rewriting interacts with other aspects of conversation quality, natural language generation, and effective mental health support. Ensuring that the rewritten response is specific and diverse, as well as empathic, is challenging but critical for obtaining purposeful transformations. In this work, we learn rewriting actions that simultaneously achieve the goals of context specificity and response diversity using a reinforcement learning approach (Section 5.5), and we evaluate these goals using a combination of automatic and human evaluation (Sections 6.3, 6.4).

Text fluency and sentence coherence. In addition, generating empathic words or phrases alone may not be sufficient. Without appropriate measures, the rewriting process may lead to an ungrammatical, non-fluent final response (e.g., "Scary being is it manic"). Also, making changes that are incoherent with the original response may not be appropriate (e.g., changing "Sorry to hear that you lost your job. I hope you get a new job soon." to "Sorry to hear that you lost your job. Congrats on your job promotion. I hope you get a new job soon."). In this paper, we avoid such non-fluent and incoherent responses through carefully constructed reward functions (Section 5.5) and conduct both automatic and human evaluations of models on text fluency and sentence coherence (Sections 6.3, 6.4).

Rewriting for feedback and training. An important way in which the task of empathic rewriting can be used is for providing feedback and training to people through machine-in-the-loop writing systems (Clark et al., 2018; Tanana et al., 2019). For humans to adopt such feedback, however, the rewriting process should make changes that are precise and specific to the original response. This means that the number of changes should be kept minimal and that the changes themselves should be suitable to the original response. For example, adding 10 sentences to a one-sentence response may not be useful. Here, we train a reinforcement learning agent that learns when to stop making changes through a special "stopping" action (Section 5.3). We evaluate the number of transformations different models need for empathic rewriting through a standard edit-distance-based scoring metric (Section 6.3).

5. Partner: Empathic rewriting using reinforcement learning

Here, we present Partner, a reinforcement learning model for the task of empathic rewriting. We first explain the general reinforcement learning framework and its applicability to our setting. We then describe the various components of our model (states, actions, policy, and rewards) and our training strategy.

5.1. Reinforcement Learning Framework

We adopt the standard reinforcement learning framework consisting of a collection of states $\mathcal{S}$, a set of actions $\mathcal{A}$, a policy $\pi$, and rewards (Sutton and Barto, 2018). In this framework, given a state $s \in \mathcal{S}$, an agent takes an action $a \in \mathcal{A}$ according to the policy $\pi(a \mid s)$. The policy defines how the agent chooses an action $a$ in a state $s$. The goal of the reinforcement learning agent is to learn a policy that maximizes the reward.

Here, we design a reinforcement learning model for the task of empathic rewriting. Conceptually, our agent leverages context from the seeker post, which it uses for making specific empathic changes. At the same time, it operates on the response post, looks for areas where empathy could be improved, and works on those improvements in fluent, coherent, specific, and diverse ways. Moreover, it ensures that the changes are minimal and precise by learning when to stop through a special "stopping" action.

In our reinforcement learning model, we construct states based on seeker posts and fixed-length contiguous spans in the associated response posts (Section 5.2). Insertion, replacement, and deletion of sentences in response posts are defined as actions (Section 5.3). We learn a policy that uses transformer language models at its core (Section 5.4). We design a reward function that favors empathic, fluent, coherent, specific, and diverse transformations (Section 5.5).

5.2. State: seeker post & fixed-length contiguous spans of response post

Our agent simultaneously operates on seeker post and fixed-length contiguous spans of response post. The use of seeker post helps us in leveraging conversational context, thereby enabling transformations that are specific to the feelings and experiences shared in the seeker post. The response post is used for making transformations. The use of fixed-length contiguous spans enables a static action set.

Formally, let $R$ contain the sentences $r_1, \ldots, r_n$. At each step, we focus on a contiguous window of $k$ sentences starting from the $j$-th sentence, denoted $R_{j:j+k-1}$. Then, our state is the pair $(S, R_{j:j+k-1})$. Our policy uses a string containing $S$ concatenated with $R_{j:j+k-1}$, separated by a special <SPLIT> token (as commonly used in BERT-like models (Devlin et al., 2019)).
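A minimal sketch of this state construction, assuming GPT-2's tokenizer extended with a custom <SPLIT> separator; function and variable names are illustrative, not the authors' code.

```python
from transformers import GPT2Tokenizer

# Add the custom separator to GPT-2's vocabulary; any model consuming these
# inputs must then resize its embedding matrix accordingly, e.g.
# model.resize_token_embeddings(len(tokenizer)).
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.add_special_tokens({"additional_special_tokens": ["<SPLIT>"]})

def build_state(seeker_post: str, response_sentences: list, j: int, k: int) -> str:
    """Concatenate the seeker post S with the k-sentence window of the
    response starting at sentence j, separated by <SPLIT>."""
    window = " ".join(response_sentences[j : j + k])
    return f"{seeker_post} <SPLIT> {window}"
```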

5.3. Actions: sentence-level edits

Our agent takes actions at the level of a sentence, i.e., it either inserts new sentences or replaces existing sentences with new ones. A deletion operation is equivalent to replacing a sentence with an empty string. Our agent can make word-level changes by replacing the original sentence with a slightly different sentence containing only word-level edits. We focus on sentence-level edits because the task of empathic rewriting requires changes that go beyond simple word-level edits. Empathic responses typically contain multiple sentences with different goals such as emotional reactions, interpretations, and explorations (Sharma et al., 2020); generating these sentences and using them to make changes to the response is important for empathic rewriting.

In a state $(S, R_{j:j+k-1})$, our agent simultaneously takes two actions – ($a_p$) select a position in $R_{j:j+k-1}$ for insertion or replacement, and ($a_s$) generate a candidate empathic sentence $c$. The action space of $a_p$ consists of $2k+2$ actions – $k+1$ positions for insertions, $k$ positions for replacements, and one special action for no insertion or replacement, which stops the agent from making any further changes. The action space of $a_s$ consists of all arbitrary-length sentences. We denote the action taken by our agent as $a = (a_p, a_s)$.

5.4. Policy

At its core, our policy has a transformer language model consisting of a stack of masked multi-head self-attention layers, based on GPT-2 (for a detailed description, see Vaswani et al. (Vaswani et al., 2017) and Radford et al. (Radford et al., 2019)). It takes as input an encoded representation of our state $(S, R_{j:j+k-1})$ and generates the action $a = (a_p, a_s)$.

($a_p$) Selecting a position for insertion or replacement. Given $(S, R_{j:j+k-1})$ as input, we want to identify a position in $R_{j:j+k-1}$ where changes need to be made for improving empathy through insertion or replacement operations. A sentence window of length $k$ has $k+1$ positions for insertions and $k$ positions for replacement. Then, our task is to select one of these positions. We formulate this as a classification problem with $2k+2$ classes. The first $2k+1$ classes represent one of the potential positions and the last class represents the "stopping" action of not selecting any position, thereby stopping the agent from making any changes and keeping the response span unchanged.

For selecting this position, we first encode the input string "$S$ <SPLIT> $R_{j:j+k-1}$" using the transformer block of GPT-2. We then pass this encoded representation through a linear layer to get the prediction of the position for insertion or replacement.
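A sketch of this position-selection head under the notation above: encode the state with GPT-2 and classify into $2k+2$ position classes. The pooling choice (last-token hidden state) and layer names are assumptions, not necessarily the exact architecture.

```python
import torch
import torch.nn as nn
from transformers import GPT2Model

class PositionClassifier(nn.Module):
    def __init__(self, k: int):
        super().__init__()
        self.encoder = GPT2Model.from_pretrained("gpt2")
        # k+1 insertion slots + k replacement slots + 1 "stopping" action
        self.head = nn.Linear(self.encoder.config.n_embd, 2 * k + 2)

    def forward(self, input_ids: torch.LongTensor) -> torch.Tensor:
        hidden = self.encoder(input_ids).last_hidden_state  # (B, T, d)
        pooled = hidden[:, -1, :]   # representation of the final token
        return self.head(pooled)    # logits over 2k+2 position classes
```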

($a_s$) Generating a candidate sentence. Given $(S, R_{j:j+k-1})$ as input, we want to generate a candidate sentence $c$ to be used for making changes to $R_{j:j+k-1}$. We frame this task as a language modeling problem where the objective is to generate the $c$ that maximizes the conditional probability $P(c \mid S, R_{j:j+k-1})$.

Similar to the position selection action, we first encode our input string "$S$ <SPLIT> $R_{j:j+k-1}$" using the transformer block of GPT-2. We then compute a probability distribution over vocabulary tokens by transforming the encoded representation into a vocabulary-sized vector through a softmax layer. Finally, we use top-p sampling (Holtzman et al., 2019) over this probability distribution to generate the desired $c$: for every word in the sequence, top-p (nucleus) sampling chooses from the smallest set of words whose total probability exceeds $p$. The generation is terminated when the sampling process encounters a special end-of-sequence token.
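A generic sketch of top-p (nucleus) sampling over a next-token distribution; this illustrates the standard technique of Holtzman et al. (2019), not the authors' exact decoding code.

```python
import torch

def top_p_sample(logits: torch.Tensor, p: float = 0.9) -> int:
    """Sample a token id from the smallest set of tokens whose cumulative
    probability exceeds p."""
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, sorted_ids = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    # Keep the smallest prefix whose total mass exceeds p (always >= 1 token).
    cutoff = int(torch.searchsorted(cumulative, p).item()) + 1
    kept = sorted_probs[:cutoff] / sorted_probs[:cutoff].sum()  # renormalize
    choice = torch.multinomial(kept, num_samples=1).item()
    return int(sorted_ids[choice].item())
```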

5.5. Rewards

Our reward functions aim to increase empathy in posts and maintain text fluency, sentence coherence, context specificity, and diversity:

Change in empathy. The task of empathic rewriting requires transformations that can increase the empathy of posts. Thus, we want to reward actions that increase the empathy of $\hat{R}$ and penalize actions that decrease it. Let $\mathrm{Empathy}(\cdot)$ be a function that measures the empathy of a response. Then, the change in empathy reward, $F_e$, is defined as:

$$F_e = \mathrm{Empathy}(S, \hat{R}) - \mathrm{Empathy}(S, R) \qquad (1)$$

Here, we estimate $\mathrm{Empathy}$ using the empathy classification model developed by Sharma et al. (Sharma et al., 2020) for predicting empathy levels of responses. Sharma et al. (Sharma et al., 2020) leverage a theoretically-grounded framework of empathy consisting of three empathy communication mechanisms (emotional reactions, interpretations, and explorations) and devise a scale of empathy levels from 0 to 6. They train a classification model (RoBERTa (Liu et al., 2019), accuracy 80%) for predicting the empathy of response posts on this scale. We use their trained model as $\mathrm{Empathy}$, which gives us empathy scores of responses in the range 0 to 6.
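A sketch of this reward, where `empathy_model` stands in for the trained classifier of Sharma et al. (2020); its `score` interface is an assumed wrapper, not a released API.

```python
def empathy_reward(seeker_post: str, original: str, rewritten: str,
                   empathy_model) -> float:
    """Reward the increase (and penalize the decrease) in predicted empathy,
    on the 0-6 scale, after rewriting."""
    before = empathy_model.score(seeker_post, original)   # in [0, 6]
    after = empathy_model.score(seeker_post, rewritten)   # in [0, 6]
    return after - before
```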

Text fluency. We want to prevent actions that lead to outputs that are highly empathic but not fluent or grammatically correct. Therefore, we reward actions that lead to fluent outputs and penalize actions resulting in non-fluent outputs. Here, we operationalize text fluency as the inverse of the perplexity of the generated $\hat{R}$. We define the text fluency reward, $F_f$, as:

$$F_f(\hat{R}) = \frac{1}{\mathrm{PPL}_{LM}(\hat{R})} = \left( \prod_{t=1}^{N} P_{LM}(w_t \mid w_{<t}) \right)^{1/N} \qquad (2)$$

where $LM$ is a general language model for English and $N$ is the number of words in $\hat{R}$. Here, we use GPT-2 (Radford et al., 2019) as our $LM$, following previous work (Ma et al., 2020; Dai et al., 2019).
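A sketch of computing this reward as inverse perplexity under off-the-shelf GPT-2; this is the standard perplexity computation and is assumed to approximate the paper's setup.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

lm_tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def fluency_reward(text: str) -> float:
    """Return 1 / perplexity(text); higher means more fluent."""
    ids = lm_tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=ids makes the model return the mean token-level
        # negative log-likelihood as .loss.
        nll = lm(input_ids=ids, labels=ids).loss
    perplexity = torch.exp(nll).item()
    return 1.0 / perplexity
```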

Sentence coherence. A key component of our action space is the addition of the candidate sentence $c$ to the original response. While the candidate sentence might be highly empathic and fluent, it may not be well-suited for the response to which it would be added, leading to incoherent sentences in the transformed response $\hat{R}$. This may not be handled by perplexity, which tends to give high scores to posts where individual sentences are all fluent but are not coherent at the macro response level. Here, we design a reward function, $F_c$, that measures the coherence of the candidate sentence $c$ with the response span $R_{j:j+k-1}$: $F_c$ is the average sentence coherence probability between the candidate sentence and the existing sentences in the response.

First, we create a dataset of likely coherent and incoherent sentence pairs. Given two sentences $r_a$ and $r_b$ in a response $R$, we call $(r_a, r_b)$ a potential coherent sentence pair. We randomly sample a sentence $r'$ that is not part of any response posted to the current seeker post and call $(r_a, r')$ a potential incoherent sentence pair. Next, we train a text classification model, based on BERT (Devlin et al., 2019), on this dataset. We take a softmax at the last layer, which gives us the probability of a sentence pair being coherent ($P_{\mathrm{coh}}$) or incoherent. Then, our sentence coherence reward is defined as:

$$F_c = \frac{1}{k} \sum_{l=j}^{j+k-1} P_{\mathrm{coh}}(c, r_l) \qquad (3)$$
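A sketch of this reward; `coherence_model` is the BERT pair classifier described above, and its `prob_coherent` interface is an assumed wrapper.

```python
def coherence_reward(candidate: str, span_sentences: list,
                     coherence_model) -> float:
    """Average P(coherent) between the candidate sentence and each sentence
    in the response span."""
    if not span_sentences:
        return 1.0  # nothing for the candidate to clash with
    probs = [coherence_model.prob_coherent(candidate, sent)
             for sent in span_sentences]
    return sum(probs) / len(probs)
```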

Mutual information for specificity and diversity. In the process of empathic rewriting, the final rewritten response may become generic (e.g., "I understand how you feel"), thereby affecting the overall conversation quality (See et al., 2019; Li et al., 2016a). In order to ensure specificity to the seeker post and diversity of responses, we exploit the idea of maximizing mutual information between the seeker post and the rewritten response post (Li et al., 2016a, b). Our mutual information reward is defined as:

$$F_m = \log P_{\pi}(\hat{R} \mid S) + \log P_{\mathrm{rev}}(S \mid \hat{R}) \qquad (4)$$

where $P_{\pi}$ is the transformer language model used in our policy and $P_{\mathrm{rev}}$ is an identical language model for performing the reverse task of generating the seeker post from the rewritten response.
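A sketch of this reward following the MMI objective of Li et al. (2016). `forward_lm` is the policy language model and `reverse_lm` an identical model trained on the reverse task; both `log_prob` interfaces are assumptions standing in for conditional log-likelihood computations.

```python
def mutual_information_reward(seeker_post: str, rewritten: str,
                              forward_lm, reverse_lm) -> float:
    """Sum of forward log P(rewritten | seeker) and reverse
    log P(seeker | rewritten)."""
    forward = forward_lm.log_prob(rewritten, context=seeker_post)
    reverse = reverse_lm.log_prob(seeker_post, context=rewritten)
    return forward + reverse
```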

Total reward. Our total reward is the weighted combination $F = \lambda_e F_e + \lambda_f F_f + \lambda_c F_c + \lambda_m F_m$ of the four rewards, with the weights treated as hyperparameters (Section 5.6).

5.6. Optimization and training

Warm-start using supervised learning. We use the pre-trained weights of DialoGPT (Zhang et al., 2020) for initializing our transformer language model. Next, we warm-start with supervised learning on a parallel dataset of (low empathy, high empathy) pairs, following previous work in reinforcement learning for dialogue generation (Li et al., 2016b). We create this dataset using simple heuristics: we identify highly empathic sentences in our dataset of empathy-labeled interactions (Section 3.2). For a seeker post $S$ and response post $R$ containing a high-empathy sentence $r^*$, we create a training pair ("$S$ <SPLIT> $R \setminus r^*$", $r^*$), where $R \setminus r^*$ refers to the full response post with the sentence $r^*$ removed. Using this dataset, we fine-tune our DialoGPT-initialized transformer language model.
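A sketch of this pair-construction heuristic; the sentence-level `empathy_model.score` interface and the threshold are illustrative assumptions.

```python
def make_warm_start_pairs(seeker_post: str, response_sentences: list,
                          empathy_model, threshold: float):
    """For each highly empathic sentence r* in a response, pair the seeker
    post plus the response-with-r*-removed (input) with r* (target)."""
    pairs = []
    for i, sent in enumerate(response_sentences):
        if empathy_model.score(seeker_post, sent) >= threshold:
            rest = " ".join(s for j, s in enumerate(response_sentences)
                            if j != i)
            pairs.append((f"{seeker_post} <SPLIT> {rest}", sent))
    return pairs
```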

REINFORCE with a baseline value for training. We use the standard REINFORCE algorithm (Williams, 1992) for training our agent. Our loss function is defined as:

$$\mathcal{L}(\theta) = -\,(F - b)\,\log \pi_{\theta}(a \mid s) \qquad (5)$$

where $\theta$ is the set of our parameters and $b$ is a baseline estimate of the reward (a running average of the previous 100 reward values) used for stabilizing training.
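A sketch of this update with a running-average baseline; the accounting of the action log-probability (position choice plus generated tokens) is simplified relative to a full implementation.

```python
import collections
import torch

reward_history = collections.deque(maxlen=100)  # previous 100 rewards

def reinforce_loss(log_prob_action: torch.Tensor, reward: float) -> torch.Tensor:
    """log_prob_action: summed log-probability of the taken action under the
    current policy; returns the (negated) REINFORCE objective to minimize."""
    baseline = (sum(reward_history) / len(reward_history)
                if reward_history else 0.0)
    reward_history.append(reward)
    # Gradient ascent on (F - b) * log pi == descent on its negative.
    return -(reward - baseline) * log_prob_action
```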

Experimental setup. We use a batch size of 16 and train our model for 20,000 steps with a learning rate of 1e-5. The reward weights $\lambda_e, \lambda_f, \lambda_c, \lambda_m$ and the window size $k$ are selected using a grid search with three values for each hyperparameter.

6. Experiments

Model                    Change in empathy (↑)  Perplexity (↓)  Specificity (↑)  distinct-1 (↑)  distinct-2 (↑)  Coherence (↑)  Edit rate (↓)
Dialogue generation
  DialoGPT               0.4698                 8.6500          0.8921           0.0382          0.1334          0.6683         1.3520
  MIME                   1.2069                 9.0171          0.8837           0.0031          0.0198          0.3687         1.8193
Seq-to-seq generation
  Latent Seq.            0.9745                 8.7143          0.8512           0.0001          0.0002          0.9252         7.8853
  BART                  -0.0611                 7.2040          0.8878           0.0722          0.3945          0.4560         0.7496
Partner                  1.6410                 7.3641          0.9052           0.0659          0.3807          0.3030         0.9654
Table 2. Performance of Partner and comparisons with dialogue generation and other sequence-to-sequence generation baselines on the set of automatic metrics (distinct-1 and distinct-2 measure diversity; Coherence is sentence coherence). Partner outperforms all baselines in empathy improvement and generates fluent, specific, and diverse outputs with fewer edits. (↑) indicates higher is better; (↓) indicates lower is better.

In this section, we present experiments analyzing the performance of Partner on the task of empathic rewriting. We first describe automatic evaluation metrics (Section 6.1) based on the desired goals for empathic rewriting (Section 4.2) and the baseline approaches and ablations (Section 6.2), and then demonstrate results on the automatic evaluation metrics (Section 6.3). Since evaluation using automated metrics in language generation tasks is often not robust (Liu et al., 2016), we additionally present human evaluation results from people with expertise in therapy and mental health (Section 6.4). We end with a qualitative discussion of the model's performance (Section 6.5).

6.1. Automatic evaluation metrics

We use a number of automatic metrics, based on the goals associated with empathic rewriting (Section 4.2), for evaluating computational approaches:

  • Change in empathy: A key metric for successful empathic rewriting is how much the empathy changes from the original response to the rewritten response. Similar to our change-in-empathy reward function (Section 5.5), we measure this change using the empathy classification model developed by Sharma et al. (Sharma et al., 2020). This model computes empathy scores in the range 0 to 6 (so the change in empathy ranges from -6 to 6).

  • Perplexity: Similar to our text fluency reward (Section 5.5), we measure perplexity for quantifying fluency of the rewritten responses. For this, we use a pre-trained GPT-2 language model that has not been fine-tuned on our dataset, following previous work (Ma et al., 2020; Dai et al., 2019).

  • Sentence coherence: Since empathic rewriting requires changes at sentence level, ensuring coherent sentences in the final rewritten response is crucial. Here, we measure sentence coherence using the scoring mechanism developed in Section 5.5.

  • Specificity: The rewritten response should be specific to the seeker post. Following Xu et al. (Xu et al., 2018), we measure specificity using word embedding similarity between seeker post and rewritten response post (using embeddings from BERT (Devlin et al., 2019)).

  • Diversity: Since empathic rewriting has implications on millions of conversations on online mental health platforms, ensuring diversity of responses is important. Here, we measure diversity using the distinct-1 and distinct-2 metrics, following Li et al. (Li et al., 2016a). The two metrics compute the number of distinct unigrams and bigrams respectively divided by the total number of tokens.

  • Edit rate: The changes in empathic rewriting should be minimal and precise. Here, we use edit rate (Snover et al., 2006) to measure the number of changes between the original response and the rewritten response. Edit rate is defined as the Levenshtein distance between the two responses divided by the length of the original response; short sketches of the diversity and edit-rate computations follow this list.
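As referenced above, the following sketches show the distinct-n and edit-rate computations; tokenization by whitespace is a simplifying assumption.

```python
def distinct_n(texts: list, n: int) -> float:
    """distinct-n: number of unique n-grams divided by total tokens,
    computed over a collection of generated responses (Li et al., 2016a)."""
    ngrams, total_tokens = set(), 0
    for text in texts:
        tokens = text.split()
        total_tokens += len(tokens)
        ngrams.update(tuple(tokens[i:i + n])
                      for i in range(len(tokens) - n + 1))
    return len(ngrams) / max(total_tokens, 1)

def levenshtein(a: list, b: list) -> int:
    """Classic dynamic-programming edit distance over word lists."""
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        curr = [i]
        for j, wb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (wa != wb)))  # substitution
        prev = curr
    return prev[len(b)]

def edit_rate(original: str, rewritten: str) -> float:
    """Word-level Levenshtein distance normalized by original length."""
    o, r = original.split(), rewritten.split()
    return levenshtein(o, r) / max(len(o), 1)
```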

6.2. Baselines and Ablations

As the task of empathic rewriting has not been explored before, we compare against baseline approaches from the related tasks of dialogue generation and style transfer. Our baselines are:

  • DialoGPT (Zhang et al., 2020): A large-scale dialogue response generation model, based on GPT-2 (Radford et al., 2019) and pre-trained on Reddit conversations.

  • MIME (Majumder et al., 2020): An empathic dialogue generation model which exploits emotion mimicking while accounting for emotion polarity (positive or negative).

  • Deep latent sequence model (He et al., 2019): A deep generative model designed for unsupervised style transfer.

  • BART (Lewis et al., 2019): An encoder-decoder model for sequence-to-sequence language generation.

The DialoGPT and MIME baselines completely disregard the original response; their rewritten response is the response generated by the respective dialogue generation model given a seeker post. The deep latent sequence model and BART perform sequence-to-sequence generation from a (seeker post, original response post) pair to a response with higher empathy. We use publicly-available implementations of all our baselines. We further fine-tune the deep latent sequence model on the dataset of empathy-labeled interactions (Section 3.2) and BART on the heuristic-based dataset created for warm-start (Section 5.6).

Additionally, we investigate the importance of different components of our model using the following ablated baselines:

  • Warm-start only, no RL training: We analyze the performance of the model at the end of our warm-start stage, i.e. without any RL training.

  • No coherence reward: We train the model without using the sentence coherence reward.

  • No mutual information: We train the model without using the mutual information component.

(a) Partner and MIME are effective at increasing empathy in zero-empathy responses. However, Partner is more effective at increasing empathy in low but non-zero empathy responses and does not make an already empathic post worse.
(b) Partner makes fewer changes than the baselines. The changes are relatively larger for less empathic responses, which also tend to be shorter.
Figure 6. Analysis of empathic rewritings. All error bars in this paper are 95% confidence intervals.

6.3. Automatic metrics results

Baseline Results. Table 2 reports the results of Partner on the automatic evaluation metrics and comparisons with baselines. We find that empathic rewriting through Partner achieves the largest change in empathy (35% more than the next best approach, MIME) and is more specific than all baselines. MIME generates empathic outputs (+1.21 change in empathy) but the generations have low diversity (86% less than Partner) indicating similar responses for most seeker posts. BART generates outputs with lowest perplexity, highest diversity, and lowest edit rate, which is consistent with substantial improvements to language models in recent years (Brown et al., 2020). However, to our surprise, the rewritten responses through BART receive an overall drop of 0.06 in empathy, indicating that the model is unable to perform the task of empathic rewriting well and only generates non-empathic, fluent, diverse text.

Our specificity metric is somewhat hard to interpret, since its values span a narrow range (0.85 to 0.9). However, with human-based evaluation, we find that a difference of 0.05 in specificity (between Partner and Latent Seq.) translates to a 90% preference towards Partner in terms of fluency (Section 6.4). Moreover, while Partner has the lowest sentence coherence score, we observe that most other baselines generate 1-2 sentence responses, for which high coherence between sentences is expected (e.g., a one-sentence response has a coherence of 1.0).

Analysis of empathic rewritings. Adapting to different types of original responses and making appropriate changes is an important aspect of empathic rewriting. A low-empathy response needs many more improvements and edits than a highly empathic response. Figure 6(a) shows the change in empathy of responses given their original empathy levels. We find that Partner performs better than baselines at improving responses with low empathy. Importantly, only Partner succeeds at not deteriorating responses that are already highly empathic, indicating its effectiveness at adapting to responses with different empathy levels. We also analyze the number of edits made by each model on responses with different original empathy levels (Figure 6(b)). Partner not only effects a greater change in empathy than the baselines, it does so with the fewest edits for both low- and high-empathy responses.

Ablation Results. Table 3 reports results on ablated versions of Partner. Only using warm-start and no RL training is +0.2783 points better than the related off-the-shelf DialoGPT baseline on empathy improvement. However, the RL training in Partner further improves over this warm-start model by +0.8929 points. Using the coherence and mutual information rewards leads to small performance improvements, particularly in empathy (+0.03).

Model                    Change in empathy (↑)  Perplexity (↓)  Specificity (↑)  distinct-1 (↑)  distinct-2 (↑)  Coherence (↑)  Edit rate (↓)
Partner                  1.6410                 7.3641          0.9052           0.0659          0.3807          0.3030         0.9654
- no coherence           1.6127                 7.2806          0.9055           0.0663          0.3844          0.3005         1.0108
- no mutual info.        1.6132                 7.3274          0.9045           0.0674          0.3859          0.3078         1.0071
- warm-start only        0.7481                 7.1858          0.9027           0.0816          0.4238          0.2935         1.0327
Table 3. Ablation results (columns as in Table 2). Warm-start improves over DialoGPT but is still much worse than Partner in empathy improvement, highlighting the effectiveness of our RL-based training.

6.4. Human evaluation results

Since automatic evaluation in language generation is often not robust (Liu et al., 2016), we perform a human evaluation of our key metrics (empathy, fluency, and specificity) through A/B testing. We recruit six graduate students in clinical psychology with expertise in empathy and mental health support (most were PhD students in the second or later years of their degree program) and ask them to compare outputs from Partner against other baseline models, ablations, and expert empathic rewritings (Section 3.2) given the same input. Presented with a seeker post, a rewritten response post from Partner, and a rewritten response post from a baseline/ablation/expert rewrite, they choose (a) the response post that is more empathic, (b) the response post that is more fluent, and (c) the response post that is more specific. For each model, we collect evaluations on 50-100 examples.

Results: Baselines and ablations. Figure 7 shows the percentage of instances in which Partner was preferred over other baselines and ablations (values indicate preference towards Partner). We find that rewritten responses from Partner are preferred over all baselines for empathy and specificity. DialoGPT is judged more fluent (Figure 7a) but generates responses following similar templates (e.g., "I'm sorry you…. I hope you…."). Moreover, Partner has a 55% preference for empathy over the ablations where the coherence and mutual information rewards are not used.

Results: Expert rewritings. The most appropriate way of performing empathic rewriting is through human experts. However, experts with training in therapy and mental health support are limited in number (Olfson, 2016), which makes it infeasible to employ them for millions of conversations on online support platforms. We use the small dataset of 180 empathic rewritings from experts to establish what gold-standard performance for empathic rewriting in mental health support looks like. Unsurprisingly, experts are preferred 80-90% of the time over Partner in empathy, fluency, and specificity. However, there are a few cases (the remaining 10-20% of examples) in which Partner's rewritings are preferred; these are typically instances where Partner makes empathic changes to a response that the experts left unchanged.

Results: BLEU scores. We also use the dataset of expert empathic rewritings (Section 3.2) as ground truth and compare the outputs of Partner, baselines, and ablations against it using the BLEU metric (Papineni et al., 2002) (Table 4). We find that the outputs from Partner are closest to the expert rewritings (45% better than the next best baseline, BART, per Table 4).

Figure 7. Human evaluation of empathy, fluency, and specificity in rewritings from Partner vs. (a) rewritings from baseline models, and (b) rewritings from ablations. Partner is preferred over baselines and ablations in empathy and specificity and is competitive in fluency.
Model                  BLEU score
Partner                0.1391
Baselines
  DialoGPT             0.0722
  MIME                 0.0808
  Latent Seq.          0.0254
  BART                 0.0956
Ablations
  - no coherence       0.1335
  - no mutual info.    0.1297
  - warm-start only    0.1281
Table 4. BLEU scores of all models against empathic rewritings from experts. The rewritings from Partner are closest to the expert rewritings.

6.5. Qualitative examples

We present example rewritings from Partner and baselines in Table 5. Partner generates rewritings that leverage both the seeker post and the original response post. It infers the seeker's perspective from their post (e.g., from the seeker post "I feel like nobody cares about my existence", Partner infers "It's hard to find others who can relate"), and it generates empathic sentences that can be coherently inserted into the response post (e.g., Partner coherently transforms "What happened between you two?" into "What happened between you two? What caused you to break?").

Example 1
Seeker post: I feel like nobody cares about my existence.
Original response: I feel the same.
  Partner:     It's hard to find others who can relate. I feel the same.
  DialoGPT:    I care.
  MIME:        That is awful, i am sorry to hear that. I hope you feel better.
  Latent Seq.: I'm sorry to hear that.
  BART:        I think i feel the same way. I do.

Example 2
Seeker post: I overthink everything my friends say. I disregard them when they praise me but if they say something negative I take it so personally. I'm sick of being so paranoid.
Original response: I understand completely. I feel the same way.
  Partner:     It's hard when you're with such people and they are being like that to you. I understand completely. I feel the same way.
  DialoGPT:    I'm the same way. I'm not sure why, but I just don't like being told I'm being paranoid.
  MIME:        That is a shame, i am sure it will not be a bit of a bit!
  Latent Seq.: I know that, i feel the same way i'm sorry to.
  BART:        I'm sorry, I understand completely. I feel the same way. I'm here for you.

Example 3
Seeker post: I hope I didn't lose my only two best friends. They are the only ones I could relate to.
Original response: What happened between you two?
  Partner:     That's awful. What happened between you two? What caused you to break?
  DialoGPT:    I hope you didn't lose them. I don't want to lose them.
  MIME:        That is a good thing to do.
  Latent Seq.: I'm sorry to hear that.
  BART:        What happened? What happened between you two?

Table 5. Qualitative examples of empathic rewriting using Partner and baseline methods. Partner generates rewritings that leverage both the seeker post and the original response post: it infers perspective from seeker posts ("I feel like nobody cares about my existence" → "It's hard to find others who can relate") and generates empathic sentences that can be coherently inserted into response posts ("What happened between you two?" → "What happened between you two? What caused you to break?").

7. Discussion and Conclusion

The global burden of mental illness and addiction is overwhelming, and common mental disorders and addiction are among the most debilitating illnesses worldwide (Collins et al., 2011). Existing mental health resources and interventions are ill-suited to the size of the need. Online mental health support platforms that make use of peer supporters are one route to scaling up support, but the greatest challenge is to effectively train or scaffold the peer supporters. Our empathic rewriting approach represents a foundational proof-of-concept of how computational methods may help peer supporters online.

Rewriting human-generated responses may be an effective approach to balancing the benefits and risks of using artificial intelligence in mental health settings. By combining human knowledge of context and experience, our approach can both train online peer-supporters with real-time examples, and provide support seekers with more empathic responses. Importantly, this human-in-the-loop approach can help mitigate some of the risks related to ignoring or encouraging self-harm, or insensitive comments related to race/ethnicity/gender (Li et al., 2020; Luxton et al., 2012; Collings and Niederkrotenthaler, 2012).

Summary of contributions. Our work proposes a new task of empathic rewriting for transforming low-empathy conversational posts in online mental health support platforms to higher empathy. For this task, we develop and train Partner, a reinforcement learning model which makes sentence-level edits to posts for making them empathic. Through extensive experiments based on automatic and human evaluation, we show that Partner can effectively generate more empathic posts and outperforms baseline methods from related tasks.

Acknowledgments

We would like to thank TalkLife and Jamie Druitt for their support and for providing us access to a TalkLife dataset. We also thank the members of UW Behavioral Data Science group and the anonymous reviewers for their suggestions and feedback. A.S. and T.A. were supported in part by NSF grant IIS-1901386, Bill & Melinda Gates Foundation (INV-004841), the Allen Institute for Artificial Intelligence, and a Microsoft AI for Accessibility grant. A.S.M. was supported by grants from the National Institutes of Health, National Center for Advancing Translational Science, Clinical and Translational Science Award (KL2TR001083 and UL1TR001085) and the Stanford Human-Centered AI Institute. D.C.A. was supported in part by an NIAAA K award (K02 AA023814).

Conflict of Interest Disclosure. Dr. Atkins is a co-founder with equity stake in a technology company, Lyssn.io, focused on tools to support training, supervision, and quality assurance of psychotherapy and counseling.

