Flexible End-to-End Dialogue System for Knowledge Grounded Conversation

Abstract

In knowledge grounded conversation, domain knowledge plays an important role in specialized domains such as music. The response in a knowledge grounded conversation might contain multiple answer entities or no entity at all. Although existing generative question answering (QA) systems can be applied to knowledge grounded conversation, they either allow at most one entity in a response or cannot deal with out-of-vocabulary entities. We propose a fully data-driven generative dialogue system, GenDS, that is capable of generating responses based on the input message and a related knowledge base (KB). To generate an arbitrary number of answer entities, even when these entities never appear in the training set, we design a dynamic knowledge enquirer which selects different answer entities at different positions in a single response according to the local context. It does not rely on learned representations of entities, enabling our model to deal with out-of-vocabulary entities. We collect a human-human conversation dataset (MusicConvers) with knowledge annotations. The proposed method is evaluated on MusicConvers and a public question answering dataset. Our GenDS system significantly outperforms baseline methods in terms of BLEU, entity accuracy, entity recall, and human evaluation. Moreover, the experiments demonstrate that GenDS works well even on small datasets.

1 Introduction

Daily conversations generally depend on an individual's knowledge. This is known as knowledge grounded conversation [4]. In Figure 1, we show an example of knowledge grounded conversation, in which two friends are talking about music based on their own knowledge bases. To reply to "I like Jay's music. Do you have any recommendation?", user A has to know some songs of the singer.

Figure 1: An example of knowledge-grounded non-goal-driven dialogue between two users, A and B. Each user has their own private KB. In this example, the two users are talking about the singer Jay Chou.

It is necessary to emphasize that knowledge grounded conversation is different from QA [15], as the former does not limit the number of entities in responses. For example, in Figure 1, where two friends are talking about a singer named Jay, user A does not need any knowledge when replying "Great! I hope he can sing more songs", and the entities that answer the question "I like Jay's music. Do you have any recommendation?" are not unique either. We can regard QA as a special case of knowledge grounded conversation.

Han et al. [4] first built rule-based chit-chat dialogue systems with structured knowledge. Ghazvininejad et al. [3] took unstructured text as external knowledge to make traditional chit-chat dialogue systems reply more informatively. In [3], all entities are represented by distributed representations, so a large amount of data is required to learn the relations between entities. In this paper, we aim to build an end-to-end knowledge grounded conversation model with a structured KB, which represents the relations between entities more effectively.

Another category of related work is generative QA with a KB [5]. Many existing end-to-end QA models, such as GenQA [15] and COREQA [6], can generate responses with facts retrieved from a KB. However, these models are not able to deal with out-of-vocabulary entities; GenQA [15] cannot even generate multiple entities. Moreover, their decoding process relies on representations of entities learned from conversation data, where entities are sparse (see the Dataset section for details). Hence, these models require a large amount of data.

In this paper, we propose a fully data-driven generative dialogue system called GenDS, which generates responses based on the input message and a structured KB. We introduce a dynamic knowledge enquirer in order to generate an arbitrary number of entities, even when those entities never appear in the training set. Based on the local context, it can select different entities at different positions within a single response. Specifically, the dynamic knowledge enquirer updates the entity generation probability based on the preceding context. It is independent of learned entity representations, which enables our model to handle out-of-vocabulary entities. In experiments, we find that the dynamic knowledge enquirer learns to penalize repeated entities and to capture co-occurring entities.

In summary, our contributions are three-fold.

  • We propose a fully data-driven generative dialogue system GenDS that is capable of generating responses with any number of entities. A dynamic knowledge enquirer is proposed to select different answer entities according to different local contexts.

  • We have collected a real-world dataset named MusicConvers with human annotations, which will be released to the public after acceptance. To the best of our knowledge, there is no public conversation dataset with annotated knowledge.

  • We evaluate our method on two datasets, namely the collected real-world music chatting dataset (MusicConvers) and a public question answering dataset. We show that the proposed method improves over baseline models in terms of BLEU, entity accuracy, entity recall, and human evaluation.

2 Related Work

Data-driven non-goal-oriented dialogue system

Recently, there has been a trend towards developing fully data-driven dialogue systems. Seq2Seq learning [13], which predicts a target sequence given a source sequence, has been widely applied in such systems. Specifically, Shang et al. [11] first utilized the encoder-decoder framework to generate responses on micro-blogging websites. Sordoni et al. [12] extended it by conditioning the response generation on a context vector, the encoding of three past consecutive utterances. Yao et al. [14] employed an intention network to maintain the relevance of responses. Serban et al. [9] built an end-to-end dialogue system with generative hierarchical neural networks. Serban et al. [10] designed a latent variable RNN to model the complex dependencies between sub-sequences, where the latent variables represent the semantics of the sentence.
Data-driven non-goal-oriented dialogue system with external knowledge Recent studies observed that non-goal-driven dialogue systems cannot reply substantively because they are isolated from external knowledge. Therefore, researchers began to incorporate external knowledge to enhance reply generation. Han et al. [4] proposed a rule-based dialogue system that fills response templates with facts retrieved from a KB. Ghazvininejad et al. [3] utilized external text as unstructured knowledge, demonstrating that it can make responses more informative.
Data-driven QA with external knowledge Some recent work used external structured knowledge graphs to build end-to-end question answering systems. Yin et al. [15] proposed a Seq2Seq-based model where answer words are generated in two ways: by a language model, or from entities retrieved from the KB. He et al. [6] further introduced another generation mechanism, copying words from the original question, and also studied cases where questions require multiple facts.

3 Problem

In this section, we first define notations and then introduce the problem setting.

3.1 Notation

Matrices are denoted in bold upper case, column vectors in bold lower case, and scalars in lower case; f^(n)(·) denotes an n-layer neural network function. An input message is denoted by X = (x_1, …, x_m), where m is the number of words in the message. A response is denoted by Y = (y_1, …, y_n), where n is the number of words in the response. A knowledge fact is represented as a triple τ = (subject, predicate, object). Subjects and objects are also known as entities, and the predicate is the relation between the subject and the object. A knowledge base is the set of all possible facts, denoted by K = {τ_1, …, τ_N}, where N is the number of facts in the knowledge base.

3.2 Problem Definition

Given an input message, the problem is to generate an appropriate response based on the knowledge base. The system first retrieves an arbitrary number of related facts from the knowledge base, then generates a response using the relevant facts. All input messages and responses are composed of two kinds of words: common words and knowledge words. The knowledge words are entities in the knowledge base (see footnote 1), while the rest are common words. The inputs of the problem are:

  1. An input message X.

  2. A knowledge base K containing all possible facts.

  3. A list of entity types T.

The output of the problem is:

  1. A response Y, which might contain an arbitrary number of common and knowledge words.

For model training, the related facts for each message-response pair are provided as training data, denoted by D = {(X_i, Y_i, F_i)}_{i=1}^{M}, where F_i are the facts related to the i-th message and M is the number of message-response pairs in the training data. Our goal is to learn a dialogue model from D and K. Given a new message X, the model can identify related facts from K and generate a response Y.
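As a concrete illustration, one training tuple (message, response, related facts) might look like the following. The song and fact shown here are toy values of our own, not taken from the actual dataset.

```python
# One toy training example; facts are (subject, predicate, object) triples.
# The specific song and triple below are illustrative assumptions.
example = {
    "message": "I like Jay Chou's music. Do you have any recommendation?",
    "response": "You can listen to Nocturne.",
    "facts": [("Jay Chou", "sang", "Nocturne")],  # related facts for this pair
}
```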

4 Model Framework of GenDS

In this section, we introduce the components of the GenDS system. GenDS has three components, listed as follows:

  1. A candidate facts retriever first detects the possible entities E_X in the input message X, then retrieves a set of candidate facts F_c from the knowledge base K based on the detected entities E_X.

  2. A message encoder encodes the input message X into a sequence of intent vectors, one at each time step, denoted by H = (h_1, …, h_m).

  3. A reply decoder takes H and F_c as input and generates the final response Y word by word.

4.1 Candidate Facts Retriever

The candidate facts retriever identifies facts in the KB that are related to the input message X. We denote the detected entities by E_X. They can be identified by keyword matching (e.g., a singer, concert, or song name), or detected by more advanced methods such as entity linking or named entity recognition. Based on E_X, we retrieve the relevant facts from K. In a traditional QA setting such as GenQA, the assumption is that subjects only appear in input messages and objects only appear in responses. In knowledge grounded conversation, however, a subject and an object can occur together in one message or response. Thus, we retrieve both the facts whose subjects match E_X and the facts whose objects match E_X, denoted F_s and F_o respectively, and define F_c as their union: F_c = F_s ∪ F_o.

Notice that we do not restrict the number of entities or retrieved facts, since different messages may need different numbers of facts to generate a reply.
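The retrieval step above can be sketched as follows. This is a minimal illustration with simple substring matching; the function names and toy KB are our own, and the paper allows more advanced entity detection such as entity linking.

```python
# Sketch of the candidate facts retriever. Facts are (subject, predicate,
# object) triples; entity detection here is naive substring matching.

def detect_entities(message, kb):
    """Detect entities in a message by matching KB entity names."""
    kb_entities = {s for s, _, _ in kb} | {o for _, _, o in kb}
    return {e for e in kb_entities if e in message}

def retrieve_facts(message, kb):
    """Retrieve F_c = F_s ∪ F_o: facts whose subject OR object matches E_X."""
    entities = detect_entities(message, kb)
    f_s = [t for t in kb if t[0] in entities]  # subject-matched facts
    f_o = [t for t in kb if t[2] in entities]  # object-matched facts
    seen, f_c = set(), []                      # union without duplicates
    for t in f_s + f_o:
        if t not in seen:
            seen.add(t)
            f_c.append(t)
    return f_c

kb = [("Jay Chou", "sang", "Nocturne"), ("Jay Chou", "occupation", "singer")]
facts = retrieve_facts("I like Jay Chou's music. Any recommendation?", kb)
```

Note that the number of retrieved facts is unbounded, matching the requirement that different messages may need different numbers of facts.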

4.2 Message Encoder

The message encoder is designed to capture the user's intent. For this purpose, the names of entities are not essential, and we can replace each entity with its type. For example, if we replace the entity in the message "recommend me songs of JAY" with its type, the transformed message "recommend me songs of People" can still express the user's intent of asking for a song recommendation. After this transformation, we do not need to learn word embeddings for entities. Thus, we replace each entity in X with its type, and feed the transformed message into an RNN encoder word by word to obtain the hidden representations H.

4.3 Reply Decoder

The reply decoder generates the final response based on the user intent H and the candidate facts F_c. There are two categories of possible words in the generated response: common words (V_c) and knowledge words (V_k, see footnote 2). We introduce a knowledge gate to determine which kind of word to generate at each time step. In order to generate an arbitrary number of entities in a single response, we propose a dynamic knowledge enquirer which selects entities according to the local context at each position within a response. The dynamic knowledge enquirer can generate knowledge words outside the scope of the training data because it does not need to learn embedding vectors for knowledge words.

The probability of generating the answer is defined as:

P(Y | X, F_c; θ) = ∏_{t=1}^{n} P(y_t | y_{<t}, X, F_c; θ)

where θ denotes all parameters in the GenDS model. The generation probability of y_t is specified by

P(y_t | y_{<t}, X, F_c; θ) = P(y_t | s_t)

where s_t is the hidden state of the decoder and z_t is the value of the knowledge gate at time step t. Based on the value of the knowledge gate z_t, the probability can be further decomposed as:

P(y_t | s_t) = P(z_t = 0 | s_t) P_c(y_t | s_t) + P(z_t = 1 | s_t) P_k(y_t | s_t)

where P_c(y_t | s_t) is the probability of y_t generated by the common word generator and P_k(y_t | s_t) is the probability of y_t generated by the dynamic knowledge enquirer.

Common Word Generator

The common word generator generates the common word y_t ∈ V_c. First, we calculate a message context vector c_t by applying the attention mechanism [1] over the message hidden vectors H with the current generator hidden state:

c_t = ∑_{j=1}^{m} α_{tj} h_j

where the attention weight α_{tj} is computed by

α_{tj} = exp(e_{tj}) / ∑_{k=1}^{m} exp(e_{tk}),   e_{tj} = f^(1)([s_{t-1}; h_j]).

The generator hidden state s_t is updated from the previous hidden state s_{t-1}, the word embedding w_{t-1} of the previously predicted symbol y_{t-1}, and the message context vector c_t:

s_t = GRU(s_{t-1}, [w_{t-1}; c_t])

where [·; ·] denotes vector concatenation and the recurrent function is a GRU [1] in this paper.

The probability of generating the common word y_t is defined as:

P_c(y_t | s_t) = softmax(W_c s_t)
Since we do not learn word embeddings for entities, when the previously predicted symbol is an entity we replace it with its type and use the entity type's word embedding instead. The entity type alone may discard information useful for generating the following words. Thus, we fuse the word embedding of the entity's type and the word embedding of its associated predicate into the embedding of the previously predicted symbol with a one-layer neural network.

Dynamic Knowledge Enquirer

The dynamic knowledge enquirer generates the knowledge word y_t by ranking all entities in the retrieved facts F_c according to their dynamic entity score. We regard the objects in F_s and the subjects in F_o as candidate entities. In order to generate multiple entities, the dynamic entity score incorporates both the message and the local context during decoding. Specifically, we define three scores, the message matching score, the entity update score, and the entity type update score, and take their product as the dynamic entity score.

The message matching score denotes the matching degree between each candidate entity e_j in F_c and the intent vectors of the message. It is obtained by a two-layer neural network as follows:

s^m_j = f^(2)([h_m; q_j])

where h_m is the last hidden state of the message encoder, e_j are the candidate entities in the retrieved facts F_c, and q_j is the concatenation of the word embedding of entity e_j's type and that of the corresponding predicate in the retrieved fact. The message matching score is invariant during the decoding process.

To take the history context into account, we compute the entity update score s^e_{t,j} and the entity type update score s^y_{t,j} respectively:

s^e_{t,j} = f^(1)(o_{t-1})_j,   s^y_{t,j} = f^(1)([w_{t-1}; g_j])

where o_{t-1} is the one-hot vector of the last generated word and g_j is the word embedding of the entity type of e_j.

The entity update score is determined by the last generated word, and the entity type update score depends on the word embedding of the last generated word and the entity's type embedding.

The final dynamic entity score is computed as the product of the three scores:

s_{t,j} = s^m_j · s^e_{t,j} · s^y_{t,j}

and P_k(y_t = e_j | s_t) is obtained by normalizing s_{t,j} over all candidate entities.
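The product form of the score can be sketched with toy numbers as follows. This is a simplification of our own: the real component scores come from the neural networks described above, not from hand-set values.

```python
# Sketch of combining the three per-entity scores into a distribution over
# candidate entities. Toy scores are assumed; the paper computes them with
# learned networks.

def dynamic_entity_scores(msg_match, ent_update, type_update):
    """Final score per candidate: the product of the static message matching
    score and the two context-dependent update scores, normalized into a
    probability over candidate entities."""
    raw = [m * e * t for m, e, t in zip(msg_match, ent_update, type_update)]
    z = sum(raw)
    return [r / z for r in raw]

# Three candidate entities; the second was just generated, so its entity
# update score is pushed down, discouraging repetition.
probs = dynamic_entity_scores([0.5, 0.4, 0.1], [1.0, 0.05, 1.0], [1.0, 1.0, 1.0])
```

The multiplicative form means any single low score (e.g. a just-repeated entity) suppresses the candidate, which matches the observed penalization of repeated entities.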

Final Response Generation with the Knowledge Gate

To generate the final response with the common word generator and the dynamic knowledge enquirer, we introduce a binary knowledge gate z_t at each time step t. If z_t equals 0, the common word generator is used to generate a common word; if z_t equals 1, the dynamic knowledge enquirer is used to generate a knowledge word. The knowledge gate is defined as

P(z_t = 1 | s_t) = σ(f^(1)([s_t; w_{t-1}]))

where f^(1) is a one-layer MLP, and the previously generated word is replaced with its type if it is an entity.

In summary, y_t is generated as:

y_t = argmax_{y ∈ V_c} P_c(y | s_t) if z_t = 0,   y_t = argmax_{e_j} s_{t,j} if z_t = 1.
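One greedy decoding step with the gate can be sketched as below. The interfaces are hypothetical simplifications; in the model the gate value is produced by a learned MLP over the decoder state, not passed in directly.

```python
# Sketch of one gated decoding step: the knowledge gate chooses between the
# common word generator and the dynamic knowledge enquirer (greedy selection).

def decode_step(gate_prob, common_word_dist, entity_scores, vocab, entities):
    """If the gate fires (z_t = 1), emit the top-scoring candidate entity;
    otherwise emit the most likely common word."""
    if gate_prob > 0.5:  # knowledge gate z_t = 1
        best = max(range(len(entities)), key=lambda j: entity_scores[j])
        return entities[best]
    best = max(range(len(vocab)), key=lambda i: common_word_dist[i])
    return vocab[best]

word = decode_step(0.9, [0.7, 0.3], [0.2, 0.8],
                   ["the", "song"], ["Nocturne", "Hair Like Snow"])
```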

4.4 Training

For GenDS, we need to learn the parameters of the message encoder, the common word generator, and the dynamic knowledge enquirer. In experiments, we found that if we strictly require the generated entities to match the ground truth exactly, the model devotes itself to finding those entities and pays little attention to the language model, which degrades the fluency of responses. Thus, we train our system with multi-task learning [16]:

  1. In task 1, the model is trained with the ground truth as output.

  2. In task 2, the output is the ground truth with each entity replaced by its type.

We use our GenDS model for task 1 and a standard Seq2Seq model with attention [1] for task 2. Task 2 can be regarded as a simplified version of task 1, whose goal is to generate a fluent response with correct entity types; it compensates for task 1's neglect of fluency. The two tasks share the message encoder and the common word generator, and are trained with maximum likelihood estimation (MLE) as the objective function.

5 Experiment

In this section, we first describe the datasets used in the experiments. Then we describe the experimental setup and evaluation metrics. Finally, we present the results on the two datasets.

5.1 Dataset

MusicConvers

We collected a human-to-human dialogue dataset via outsourcing over four months. All outsourced annotators are employed by an IT company in China. The new dataset, named MusicConvers, is composed of knowledge grounded conversations in the music domain. Annotators were asked to generate dialogues by talking about music with their own knowledge. To simulate different individuals, speakers are given predefined music knowledge as their private knowledge, and then the two speakers start to talk based on the given knowledge. We build the knowledge base by filtering the KB collected in [15] so that it is specific to the music domain. Notice that the speakers may also draw on their own knowledge, so new facts may appear in their conversations; these are added to the given KB. To limit the range of topics, the given KB is restricted to one singer. We find that if speakers have different background knowledge, they are likely to talk more, since they can obtain unseen knowledge from each other. Hence, the KBs shown to the two speakers overlap but are not exactly the same. We label the triples that appear in each sentence; one example is shown in Figure 1. Table 1 shows the statistics of the dataset. We also find that entities are sparse in our dataset: a large fraction of entities occur fewer than three times. This entity sparsity conforms to common conversation style. In real life, individuals need only a small number of common words to talk, but as their knowledge grows, they enrich their vocabulary with more entities.

Table 1: Statistics of the MusicConvers dataset
dialogues 9993
vocabulary size for message 3256
vocabulary size for response 2976
entities 5988
knowledge triples 7612
relations in KB 66

Music Question Answering

GenQA [15] is a large open-domain QA dataset crawled from public websites, where each answer needs only one triple. In daily life, however, answers often involve multiple facts. [6] extended GenQA by adding more entities to questions. Unfortunately, that dataset has large redundancy: the same question may appear several times. This redundancy may bias the evaluation, since many questions may appear in the training and test data simultaneously. Therefore, we remove the duplicated questions. We also filter out QA pairs unrelated to music. The statistics of our music-domain question answering dataset are shown in Table 2.

Table 2: Statistics of the QA dataset
QA pairs 30312
vocabulary size for message 12576
vocabulary size for response 13807
entities 7176
knowledge triples 6238
relations in KB 25

5.2 Settings

We adopt a one-layer GRU [2] with 160 hidden units and 160-dimensional word vectors for both the message encoder and the common word generator. We use the Adam optimizer to update gradients in all experimental configurations. We train all models with a learning rate of 1.0 for 5 epochs; after that, we halve the learning rate and continue training for at least 5 more epochs. Gradients are clipped at 5 to avoid gradient explosion. We randomly split the data into training and test sets.

5.3 Baselines

We compare our model with the Seq2Seq model with attention (S2SA), which is widely used in chit-chat dialogue systems. To the best of our knowledge, there is no previous work on end-to-end knowledge grounded conversation with a structured KB. Since existing generative QA models can be applied to knowledge grounded conversation, we also use the generative QA model GenQA [15] as a baseline. To prove the effectiveness of the dynamic knowledge enquirer, we improve GenQA with a dynamic entity generation probability (GenQAD), where the entity generation probability is determined by the decoder hidden state, the intent vectors of the message, and the triple embedding; this is also the improvement COREQA [6] makes over GenQA. We design two variants of our model, GenDS-Single and GenDS-Static, to illustrate the benefits of multi-task learning and the dynamic knowledge enquirer respectively. GenDS-Single is trained on a single task whose output is the ground truth. GenDS-Static only uses the message matching score as the entity score, which is invariant during decoding.

Table 3: Automatic Evaluation on the Music dataset
Models BLEU Precision Recall
S2SA 0.11
GenQA 0.05
GenQAD 0.06
GenDS-Single 0.108
GenDS-Static 0.108
GenDS 0.122
Table 4: Human Evaluation on the MusicConvers dataset.
Models Grammar Context Relevance Correctness
S2SA 1.76 0.87 0.16
GenQA 1.28 0.95 0.41
GenQAD 1.67 1.11 0.51
GenDS-Single 2.16 1.67 1.18
GenDS-Static 1.97 1.42 0.96
GenDS 2.03 1.55 0.89
Table 5: Automatic Evaluation on the QA dataset
Models BLEU Precision Recall
S2SA 0.05
GenQA 0.12
GenQAD 0.13
GenDS-Single 0.226
GenDS-Static 0.19
GenDS 0.227

5.4 Evaluation Metrics

We compare GenDS with the baselines by both automatic and human evaluation. Following existing work, we employ BLEU [8] for automatic evaluation. However, BLEU only reflects word overlap between the ground truth and the generated response. To measure information correctness, we define entity-accuracy and entity-recall, inspired by precision and recall in classification evaluation. Entity-accuracy is the percentage of correctly generated entities among all generated entities, and entity-recall is the percentage of correctly generated entities among the ground-truth entities. For entity-accuracy and entity-recall, we only consider samples whose ground truth contains entities. We also recruit human annotators to judge the quality of the generated responses in terms of grammar, context relevance, and information correctness. All scores are on the same fixed scale, and a higher score represents better performance on each of the three metrics.
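The two entity metrics can be sketched as follows. This is our reading of the definitions above (micro-averaged over samples whose ground truth contains entities); the function name and toy entity sets are our own.

```python
# Sketch of entity-accuracy and entity-recall, computed only over samples
# whose ground truth contains at least one entity.

def entity_accuracy_recall(generated, ground_truth):
    """generated / ground_truth: parallel lists of entity sets, one per sample."""
    correct = gen_total = gt_total = 0
    for gen, gt in zip(generated, ground_truth):
        if not gt:              # skip samples without ground-truth entities
            continue
        correct += len(gen & gt)
        gen_total += len(gen)
        gt_total += len(gt)
    accuracy = correct / gen_total if gen_total else 0.0
    recall = correct / gt_total if gt_total else 0.0
    return accuracy, recall

acc, rec = entity_accuracy_recall(
    [{"Nocturne", "Hair Like Snow"}, {"Sunny Day"}],
    [{"Nocturne"}, set()],
)
```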

Table 6: Case Study For Knowledge Grounded Conversation. Entities are underlined, and true entities are in bold.
Message I forgot, what songs has Hu Ling sung?

忘记了,胡灵唱过什么歌?

S2SA There's also a brand new experience

还有全新经验

GenQA Meet in Guangzhou, Meet in Guangzhou, Meet in Guangzhou, Meet in Guangzhou and so on

广州之约 ,广州之约 ,广州之约 ,广州之约 等等

GenQAD should be I am still your lover

就是 我依然是你的情人

GenDS-Static Song of Wind, Song of Wind

听风的歌 ,听风的歌

GenDS-Single I am still your lover, Song of Wind

我依然是你的情人,听风的歌

Table 7: Case Study For Unseen Knowledge Words. Unseen Entities are underlined, and true entities are in bold.
Message Never heard of this singer; do you know Xue Zhiqian?

没听过这个歌手哎,你知道薛之谦么

S2SA I've heard her songs

听过她的歌呀

GenQA Animal World singing the song is that he wrote the lyrics of the day more than a few days

动物世界 唱的歌 都 是 他 写 填词 日 超多 日 超多 日

GenQAD Heard of these two

听 过 这两首 是 听 过 这两首 了 哦

GenDS-Single Return of the Prince is a song by Xue Zhiqian; the lyrics are all written by himself

王子归来 是 薛之谦 的 歌 , 都 是 作词 。

5.5 Evaluation Results

MusicConvers:

The BLEU scores (computed with 1-grams) are shown in Table 3. Figure ? shows the entity-accuracy and entity-recall, and the human evaluation results are shown in Table 4, where we report the mean score of each metric. For automatic evaluation, GenDS shows the best performance on BLEU and entity-accuracy, while GenDS-Single achieves the highest entity-recall. Although GenDS does not overwhelm S2SA in terms of BLEU, it improves entity accuracy and recall by large margins, which indicates that GenDS can reply to messages with more correct information. S2SA cannot respond with correct information, mainly due to its lack of grounding in external knowledge. Although GenQA and GenQAD can incorporate the KB in responses, their entity-accuracy and entity-recall still cannot compete with GenDS. For GenQA, the entity generation probability is fixed during decoding; as a consequence, GenQA cannot generate different entities. For GenQAD, its update mechanism for the entity generation probability is less effective than our dynamic knowledge enquirer. The BLEU scores of GenQA and GenQAD are the lowest among all models; we believe MusicConvers does not have enough data for them to learn reliable entity representations. This illustrates that GenDS can achieve decent performance even on a small dataset. GenDS achieves a higher BLEU than GenDS-Single, which confirms the benefit of multi-task learning for improving fluency. For human evaluation, GenDS-Single achieves the best performance in terms of grammar, context relevance, and information correctness. Although S2SA can generate fluent responses, these responses contain little correct information and are less semantically relevant to the message. The performance of GenDS is slightly worse than GenDS-Single; we infer that this is due to task 2, which makes GenDS tend toward fluent responses rather than correct information.

MusicQA

The automatic evaluation results on MusicQA are shown in Table 5 and Figure ?. GenDS improves the BLEU score, entity-accuracy, and entity-recall significantly compared with S2SA, GenQA, and GenQAD. GenQA does not reach the performance it obtained on the original QA dataset [15], which may be due to our removal of redundancy from the dataset. Unlike on MusicConvers, GenDS-Static exhibits decent entity-accuracy and entity-recall: questions in MusicQA typically contain only one entity in the answer, so most messages do not need the dynamic knowledge enquirer to generate multiple entities. However, GenDS still achieves higher entity-accuracy and entity-recall than GenDS-Static, which verifies that the proposed dynamic knowledge enquirer is useful even when only one entity is generated.

5.6 Case Studies

Table 6 compares the models on examples from the test data. Entities are underlined, and correctly generated entities are in bold. Although S2SA may generate responses with entities, it hardly generates correct ones. Without a dynamic knowledge word generation probability, GenDS-Static and GenQA cannot generate different entities within one response, while GenDS-Single can generate multiple entities; this indicates that the dynamic knowledge enquirer learns to penalize already generated words. To verify validity on unseen entities, we expand the KB with new knowledge triples, and annotators provide input messages based on the new KB. Table 7 shows the responses to these messages, where unseen entities are underlined and correctly generated entities are in bold. GenDS-Single generates decent responses with multiple correct entities even when the entities in the input message are not included in the training data. Besides, GenDS-Single can use the unseen entities in the new KB in its responses. Table 7 also shows the ability of GenDS-Single to generate different entity types in one response: in this example, the singer is generated after the song. This indicates that the dynamic knowledge enquirer can capture the co-occurrence between entities of different types.

6 Conclusion

We propose an end-to-end knowledge grounded conversation model, GenDS, which incorporates a structured KB in response generation. The model can generate responses with any number of answer entities, even when these entities never appear in the training set. It outperforms the traditional non-goal-driven dialogue system S2SA and generative QA models on the MusicConvers and MusicQA datasets. Being able to deal with unseen entities, GenDS scales to new KBs. For future work, we plan to improve GenDS with transfer learning [7], such that it can be transferred to other domains such as sports.

Footnotes

  1. We use knowledge words and entities interchangeably.
  2. Although some words may appear in both V_c and V_k, they have different meanings. For example, "love" in V_c is a verb, but in V_k it may be a song name. Thus, we treat words with the same surface form in V_c and V_k as different words; in other words, there is no overlap between V_c and V_k. We also add the entity types and relations to V_c.

References

  1. 2014.
    Bahdanau, D.; Cho, K.; and Bengio, Y. Neural machine translation by jointly learning to align and translate.
  2. 2014.
    Cho, K.; Van Merriënboer, B.; Bahdanau, D.; and Bengio, Y. On the properties of neural machine translation: Encoder-decoder approaches.
  3. 2017.
    Ghazvininejad, M.; Brockett, C.; Chang, M.-W.; Dolan, B.; Gao, J.; Yih, W.-t.; and Galley, M. A knowledge-grounded neural conversation model.
  4. 2015.
    Han, S.; Bang, J.; Ryu, S.; and Lee, G. G. Exploiting knowledge base to generate responses for natural language dialog listening agents.
  5. 2017.
    Hao, Y.; Zhang, Y.; Liu, K.; He, S.; Liu, Z.; Wu, H.; and Zhao, J. An end-to-end model for question answering over knowledge base with cross-attention combining global knowledge.
  6. 2017.
    He, S.; Liu, C.; Liu, K.; and Zhao, J. Generating natural answers by incorporating copying and retrieving mechanisms in sequence-to-sequence learning.
  7. 2010.
    Pan, S. J., and Yang, Q. A survey on transfer learning.
  8. 2002.
    Papineni, K.; Roukos, S.; Ward, T.; and Zhu, W.-J. Bleu: a method for automatic evaluation of machine translation.
  9. 2016.
    Serban, I. V.; Sordoni, A.; Bengio, Y.; Courville, A. C.; and Pineau, J. Building end-to-end dialogue systems using generative hierarchical neural network models.
  10. 2017.
    Serban, I. V.; Sordoni, A.; Lowe, R.; Charlin, L.; Pineau, J.; Courville, A. C.; and Bengio, Y. A hierarchical latent variable encoder-decoder model for generating dialogues.
  11. 2015.
    Shang, L.; Lu, Z.; and Li, H. Neural responding machine for short-text conversation.
  12. 2015.
    Sordoni, A.; Galley, M.; Auli, M.; Brockett, C.; Ji, Y.; Mitchell, M.; Nie, J.-Y.; Gao, J.; and Dolan, B. A neural network approach to context-sensitive generation of conversational responses.
  13. 2014.
    Sutskever, I.; Vinyals, O.; and Le, Q. V. Sequence to sequence learning with neural networks.
  14. 2015.
    Yao, K.; Zweig, G.; and Peng, B. Attention with intention for a neural network conversation model.
  15. 2015.
    Yin, J.; Jiang, X.; Lu, Z.; Shang, L.; Li, H.; and Li, X. Neural generative question answering.
  16. 2017.
    Zhang, Y., and Yang, Q. A survey on multi-task learning.