Knowledge Enhanced Hybrid Neural Network for Text Matching

Yu Wu, Wei Wu, Zhoujun Li, Ming Zhou
State Key Lab of Software Development Environment, Beihang University, Beijing, China
Microsoft Research, Beijing, China
{wuyu,lizj}@buaa.edu.cn  {wuwei,mingzhou}@microsoft.com
The work was done when the first author was an intern in Microsoft Research Asia.
Abstract

Long text poses a big challenge to semantic matching due to its complicated semantic and syntactic structures. To tackle the challenge, we consider using prior knowledge to help identify useful information and filter out noise in matching long text. To this end, we propose a knowledge enhanced hybrid neural network (KEHNN). The model fuses prior knowledge into word representations by knowledge gates and establishes three matching channels with words, sequential structures of sentences given by Gated Recurrent Units (GRU), and knowledge enhanced representations. The three channels are processed by a convolutional neural network to generate high level features for matching, and the features are synthesized into a matching score by a multilayer perceptron. The model extends existing methods by conducting matching on words, local structures of sentences, and global context of sentences. Evaluation results from extensive experiments on public data sets for question answering and conversation show that KEHNN can significantly outperform state-of-the-art matching models and particularly improve the performance on pairs with long text.

Introduction

Semantic matching is a fundamental problem in many NLP tasks such as question answering (QA) (?), conversation (?), and paraphrase identification (?). Take question answering as an example. Given a question and an answer passage, one can employ a matching function to measure their matching degree. The matching degree reflects how likely it is that the passage can be used as an answer to the question.

The challenge of text matching lies in the semantic gaps between natural language sentences. Existing work tackles the challenge by representing sentences or their semantic and syntactic relations at different levels of abstraction with neural networks (??). These models rely only on the text within a pair to perform matching, whereas we find that sentences in a pair can have very complicated semantic and syntactic structures, and it is difficult for state-of-the-art neural models to extract useful features from such sentences to bridge the semantic gaps in the text pair. Table 1 gives an example from community QA to illustrate the challenge. The answer is very long and contains a lot of information that compares the two schools well but is semantically far from the question (e.g., “horse riding” and “lances swords”). The information makes the answer a high quality one, but hinders the existing models from establishing the semantic relations between the question and the answer in matching. Similarly, when questions become long, matching also becomes difficult. In practice, such long text is not rare. For example, in a public QA data set, many question-answer pairs are longer than 60 words (question length plus answer length). More seriously, the state-of-the-art model achieves much lower matching accuracy on pairs longer than 60 words than on pairs shorter than 30 words. This evidence indicates that improving matching performance on pairs with long text is important but challenging, because the semantic gap is even bigger in such pairs.

Question : Which school is better Voltaire or Bonaparte?
Answer : Both are good schools but Bonaparte will teach your kids to become a good leader but they concentrate mainly on outdoor physical activities, manoeuvers, strategies. Horse riding and lances swords are their speciality…. On the other hand Voltaire will make your child more of a philosopher! They encourage independent thinking…and mainly concentrates on indoor activities! They inculcate good moral values in the child and he will surely grow up to be a thinking person!
Table 1: A difficult example from QA

We study semantic matching in text pairs, and particularly, we aim to improve matching accuracy on long text. Our idea is that since it is difficult to establish the matching relations for pairs with long text only from the text itself, we consider incorporating prior knowledge into the matching process. The prior knowledge could be topics, tags, and entities related to the text pair, and represents a kind of global context obtained elsewhere, in contrast to local context such as phrases and syntactic elements obtained within the text in the pair. In matching, the global context can help filter out noise and highlight parts that are important to matching. For instance, if we have a tag “family” indicating the category of the question in Table 1 in community QA, we can use the tag to enhance the matching between the question and the answer. “Family” reflects the global semantics of the question. It strengthens the effect of its semantically similar words like “kids”, “child” and “activity” in QA matching, and at the same time reduces the influence of “horse riding” and “lances swords” on matching. With the tag as a bridge, the semantic relation between the question and the answer can be identified, which is difficult to achieve from the text alone.

We propose a knowledge enhanced hybrid neural network (KEHNN) to leverage the prior knowledge in matching. Given a text pair, KEHNN exploits a knowledge gate to fuse the semantic information carried by the prior knowledge into the representation of words and generates a knowledge enhanced representation for each word. The knowledge gate is a non-linear unit and controls how much information from the word is kept in the new representation and how much information from the prior knowledge flows into the representation. By this means, noise from the irrelevant words is filtered out, and useful information from the relevant words is strengthened. The model then forms three channels to perform matching from multiple perspectives. Each channel models the interaction of two pieces of text in a pair by a similarity matrix. The first channel matches text pairs on words. It calculates the similarity matrix from word embeddings. The second channel conducts matching on local structures of sentences. It captures sequential structures of sentences in the pair by a Bidirectional Recurrent Neural Network with Gated units (BiGRU) (?), and constructs the similarity matrix from the hidden vectors given by the BiGRU. In the last channel, the knowledge enhanced representations, after being processed by another BiGRU to further capture the sequential structures, are utilized to construct the similarity matrix. Since the prior knowledge represents global semantics of the text pair, this channel performs matching from a global context perspective. The three channels then exploit a convolutional neural network (CNN) to extract compositional relations of the matching elements in the matrices as high level features for matching. The features are finally synthesized into a matching score by a multilayer perceptron (MLP).
The matching architecture lets the two objects meet at the beginning, and measures their matching degree from multiple perspectives, thus the interaction of the two objects is sufficiently modeled.

We conduct experiments on public data sets for QA and conversation. Evaluation results show that KEHNN can significantly outperform state-of-the-art matching methods, and particularly improve the matching accuracy on long text.

Our contributions in this paper are threefold: 1) proposal of leveraging prior knowledge to improve matching on long text; 2) proposal of a knowledge enhanced hybrid neural network which incorporates prior knowledge into matching in a general way and conducts matching on multiple levels; 3) empirical verification of the effectiveness of the proposed method on two public data sets.

Related Work

Early work on semantic matching is based on bag-of-words models (?) and employs statistical techniques like LDA (?) and translation models (?) to overcome the semantic gaps. Recently, neural networks have proven more effective at capturing semantics in text pairs. Existing methods can be categorized into two groups. The first group follows a paradigm in which matching is conducted by first representing sentences as vectors. Typical models in this group include DSSM (?), NTN (?), CDSSM (?), Arc1 (?), CNTN (?), and LSTMs (?). These methods, however, lose useful information in sentence representation, which leads to the emergence of methods in the second group. The second group matches text pairs by an interaction representation of sentences, which allows the two sentences to meet at the first step. For example, MV-LSTM (?) generates the interaction representation by LSTMs and neural tensors, and then uses k-max pooling and a multi-layer perceptron to compute a matching score. MatchPyramid (?) employs a CNN to extract features from a word similarity matrix. More effort along this line includes DeepMatch (?), MultiGranCNN (?), ABCNN (?), Arc2 (?), Match-SRNN (?), and Coupled-LSTM (?). Our method falls into the second group, and extends the existing methods by introducing prior knowledge into matching and conducting matching with multiple channels.

Approach

Problem Formalization

Suppose that we have a data set $\mathcal{D} = \{(y_i, S_{x,i}, S_{y,i})\}_{i=1}^{N}$, where $S_{x,i}$ and $S_{y,i}$ are two pieces of text, $w_{x,i,j}$ and $w_{y,i,j}$ represent the $j$-th word of $S_{x,i}$ and $S_{y,i}$ respectively, and $N$ is the number of instances. $y_i$ is a label indicating the matching degree between $S_{x,i}$ and $S_{y,i}$. In addition to $\mathcal{D}$, we have prior knowledge for $S_{x,i}$ and $S_{y,i}$, denoted as $k_{x,i}$ and $k_{y,i}$ respectively. Our goal is to learn a matching model $m(\cdot,\cdot)$ with $\mathcal{D}$ and the prior knowledge. Given a new pair $(S_x, S_y)$ with prior knowledge $(k_x, k_y)$, $m(S_x, S_y)$ predicts the matching degree between $S_x$ and $S_y$.

To learn , we need to answer two questions: 1) how to use prior knowledge in matching; 2) how to perform matching with both text pairs and prior knowledge. In the following sections, we first describe our method on incorporating prior knowledge into matching, then we show details of our model.

Knowledge Gate

Inspired by the powerful gate mechanism (??) which controls information flow in and out when processing sequential data with recurrent neural networks (RNN), we propose using knowledge gates to incorporate prior knowledge into matching. The underlying motivation is that we want to use the prior knowledge to filter out noise and highlight the information useful to matching in a piece of text. Formally, let $e_i$ denote the embedding of the $i$-th word in text $S_x$ and $k$ denote the representation of the prior knowledge of $S_x$. Knowledge gate $g_i$ is defined as

$$g_i = \sigma(W_g e_i + U_g k), \qquad (1)$$

where $\sigma(\cdot)$ is a sigmoid function, and $W_g$, $U_g$ are parameters. With $g_i$, we define a knowledge enhanced representation $\hat{e}_i$ for $e_i$ as

$$\hat{e}_i = g_i \odot e_i + (1 - g_i) \odot k, \qquad (2)$$

where $\odot$ is an element-wise multiplication operation. Equation (2) means that prior knowledge is fused into matching by a combination of the word representation and the knowledge representation. In the combination, the knowledge gate element-wisely controls how much information from the word is preserved, and how much information from the prior knowledge flows in. The advantage of the element-wise operation is that it offers a way to precisely control the contributions of prior knowledge and words in matching. Entries of $g_i$ lie in $(0, 1)$. The larger an entry of $g_i$ is, the more information from the corresponding entry of $e_i$ will be kept in $\hat{e}_i$. In contrast, the smaller an entry of $g_i$ is, the more information from the corresponding entry of $k$ will flow into $\hat{e}_i$. Since $g_i$ is determined by both $e_i$ and $k$ and learned from training data, it will keep the useful parts in the representations of the word and the prior knowledge and at the same time filter out noise from them.
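Equations (1) and (2) can be sketched in a few lines of NumPy. This is a minimal illustration with toy dimensions; the parameter names `W_g` and `U_g` are ours, and biases are omitted as in the equations.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def knowledge_gate(e, k, W_g, U_g):
    """Eq. (1): gate computed from word embedding e and knowledge vector k."""
    return sigmoid(W_g @ e + U_g @ k)

def knowledge_enhanced(e, k, W_g, U_g):
    """Eq. (2): element-wise blend of word and knowledge representations."""
    g = knowledge_gate(e, k, W_g, U_g)
    return g * e + (1.0 - g) * k

rng = np.random.default_rng(0)
d = 4                                # toy embedding dimension
e = rng.standard_normal(d)           # word embedding
k = rng.standard_normal(d)           # prior-knowledge representation
W_g = rng.standard_normal((d, d))
U_g = rng.standard_normal((d, d))
e_hat = knowledge_enhanced(e, k, W_g, U_g)
```

Because the gate entries lie in $(0,1)$, each entry of the enhanced representation is a convex combination of the corresponding entries of the word embedding and the knowledge vector.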

Matching with Multiple Channels

Figure 1: Architecture of KEHNN

With the knowledge enhanced representations, we propose a knowledge enhanced hybrid neural network (KEHNN) which conducts matching with multiple channels. Figure 1 gives the architecture of our model. Given a pair $(S_x, S_y)$, the model looks up an embedding table and represents $S_x$ and $S_y$ as $E_x = [e_{x,1}, \dots, e_{x,n}]$ and $E_y = [e_{y,1}, \dots, e_{y,n}]$ respectively, where $e_{x,i}$ and $e_{y,i}$ are the embeddings of the $i$-th word of $S_x$ and $S_y$ respectively. $E_x$ and $E_y$ are used to create three similarity matrices, each of which is regarded as an input channel of a convolutional neural network (CNN). The CNN extracts high level features from the similarity matrices. All features are finally concatenated and synthesized by a multilayer perceptron (MLP) to form a matching score.

Specifically, in channel one, $\forall i, j$, element $M_1(i,j)$ in similarity matrix $M_1$ is calculated by

$$M_1(i,j) = f\big(e_{x,i}^{\top} e_{y,j}\big), \qquad (3)$$

where $f(\cdot)$ could be ReLU or tanh. $M_1$ matches $S_x$ and $S_y$ on words.
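Equation (3) is an activation applied to all pairwise dot products, which amounts to a single matrix product. A minimal NumPy sketch with toy shapes:

```python
import numpy as np

def word_similarity_matrix(Ex, Ey, f=np.tanh):
    """Eq. (3): M1[i, j] = f(e_x_i . e_y_j) over all word pairs."""
    return f(Ex @ Ey.T)

rng = np.random.default_rng(1)
Ex = rng.standard_normal((5, 4))   # 5 words in S_x, embedding dim 4
Ey = rng.standard_normal((7, 4))   # 7 words in S_y
M1 = word_similarity_matrix(Ex, Ey)
```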

In channel two, we employ bidirectional gated recurrent units (BiGRU) (?) to encode $S_x$ and $S_y$ into hidden vectors. A BiGRU consists of a forward RNN and a backward RNN. The forward RNN processes $S_x$ as it is ordered (i.e., from $e_{x,1}$ to $e_{x,n}$), and generates a sequence of forward hidden states $[\overrightarrow{h}_{x,1}, \dots, \overrightarrow{h}_{x,n}]$. The backward RNN reads the sentence in its reverse order (i.e., from $e_{x,n}$ to $e_{x,1}$) and generates a sequence of backward hidden states $[\overleftarrow{h}_{x,1}, \dots, \overleftarrow{h}_{x,n}]$. The BiGRU then forms the hidden vectors of $S_x$ as $h_{x,i} = [\overrightarrow{h}_{x,i}; \overleftarrow{h}_{x,i}]$ by concatenating the forward and the backward hidden states. More specifically, $\overrightarrow{h}_{x,i}$ is calculated by

$$z_i = \sigma\big(W_z e_{x,i} + U_z \overrightarrow{h}_{x,i-1}\big), \qquad (4)$$
$$r_i = \sigma\big(W_r e_{x,i} + U_r \overrightarrow{h}_{x,i-1}\big), \qquad (5)$$
$$\tilde{h}_{x,i} = \tanh\big(W_h e_{x,i} + U_h (r_i \odot \overrightarrow{h}_{x,i-1})\big), \qquad (6)$$
$$\overrightarrow{h}_{x,i} = z_i \odot \tilde{h}_{x,i} + (1 - z_i) \odot \overrightarrow{h}_{x,i-1}, \qquad (7)$$

where $z_i$ and $r_i$ are an update gate and a reset gate respectively, and $W_z$, $U_z$, $W_r$, $U_r$, $W_h$, $U_h$ are parameters. The backward hidden state $\overleftarrow{h}_{x,i}$ is obtained in a similar way. Following the same procedure, we get $[h_{y,1}, \dots, h_{y,n}]$ as the hidden vectors of $S_y$. With the hidden vectors, $\forall i, j$, we calculate element $M_2(i,j)$ in similarity matrix $M_2$ by

$$M_2(i,j) = f\big(h_{x,i}^{\top} A\, h_{y,j} + b\big), \qquad (8)$$

where $A$ and $b$ are parameters. Since the BiGRU encodes sequential information of sentences into hidden vectors, $M_2$ matches $S_x$ and $S_y$ on local structures (i.e., sequential structures) of sentences.
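The forward GRU pass of Equations (4)-(7) and the bilinear similarity of Equation (8) can be sketched in NumPy as follows. This is a minimal sketch with toy dimensions; the parameter names are ours, biases inside the GRU are omitted as in the equations, and the hidden vectors fed to the similarity are random stand-ins for BiGRU outputs.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(e, h_prev, P):
    """One forward GRU step, Eqs. (4)-(7)."""
    z = sigmoid(P["Wz"] @ e + P["Uz"] @ h_prev)               # update gate, Eq. (4)
    r = sigmoid(P["Wr"] @ e + P["Ur"] @ h_prev)               # reset gate, Eq. (5)
    h_tilde = np.tanh(P["Wh"] @ e + P["Uh"] @ (r * h_prev))   # candidate, Eq. (6)
    return z * h_tilde + (1.0 - z) * h_prev                   # Eq. (7)

def bilinear_similarity(hx, hy, A, b, f=np.tanh):
    """Eq. (8): M2[i, j] = f(hx_i^T A hy_j + b) for all i, j."""
    return f(np.einsum("id,de,je->ij", hx, A, hy) + b)

rng = np.random.default_rng(2)
d, m = 4, 3                                    # toy embedding / hidden sizes
P = {name: rng.standard_normal((m, d)) for name in ["Wz", "Wr", "Wh"]}
P.update({name: rng.standard_normal((m, m)) for name in ["Uz", "Ur", "Uh"]})
h = np.zeros(m)
for e in rng.standard_normal((5, d)):          # forward pass over a 5-word sentence
    h = gru_step(e, h, P)

hx = rng.standard_normal((5, m))               # stand-ins for the hidden vectors
hy = rng.standard_normal((7, m))
A = rng.standard_normal((m, m))
M2 = bilinear_similarity(hx, hy, A, 0.1)
```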

In the last channel, we employ another BiGRU to process the sequences of knowledge enhanced representations (Equation (2)) of $S_x$ and $S_y$, and obtain the knowledge enhanced hidden states $\hat{h}_{x,i}$ and $\hat{h}_{y,j}$ for $S_x$ and $S_y$ respectively. Similar to channel two, $\forall i, j$, element $M_3(i,j)$ in similarity matrix $M_3$ is given by

$$M_3(i,j) = f\big(\hat{h}_{x,i}^{\top} A'\, \hat{h}_{y,j} + b'\big), \qquad (9)$$

where $A'$ and $b'$ are parameters. Prior knowledge represents a kind of global semantics of $S_x$ and $S_y$, and therefore $M_3$ matches $S_x$ and $S_y$ on global context of sentences.

The similarity matrices are then processed by a CNN to abstract high level features. The CNN regards each similarity matrix as an input channel, and alternates convolution and max-pooling operations. Suppose that $z^{(l,f)} = \big[z^{(l,f)}_{i,j}\big]$ denotes the output of feature map $f$ on layer $l$, where $z^{(0,f)} = M_f$, $f = 1, 2, 3$. On convolution layers, we employ a 2D convolution operation with a window size $r_w^{(l,f)} \times r_h^{(l,f)}$, and define $z^{(l,f)}_{i,j}$ as

$$z^{(l,f)}_{i,j} = \sigma\Big(\sum_{f'=0}^{F_{l-1}} \sum_{s=0}^{r_w^{(l,f)}} \sum_{t=0}^{r_h^{(l,f)}} W^{(l,f)}_{s,t} \cdot z^{(l-1,f')}_{i+s,\, j+t} + b^{(l,f)}\Big), \qquad (10)$$

where $\sigma(\cdot)$ is a ReLU, $W^{(l,f)}$ and $b^{(l,f)}$ are parameters of the $f$-th feature map on the $l$-th layer, and $F_{l-1}$ is the number of feature maps on the $(l-1)$-th layer. A max pooling operation follows a convolution operation and can be formulated as

$$z^{(l,f)}_{i,j} = \max_{p_w^{(l,f)} \cdot i \,\le\, s \,<\, p_w^{(l,f)} \cdot (i+1)} \;\; \max_{p_h^{(l,f)} \cdot j \,\le\, t \,<\, p_h^{(l,f)} \cdot (j+1)} z^{(l-1,f)}_{s,t}, \qquad (11)$$

where $p_w^{(l,f)}$ and $p_h^{(l,f)}$ are the width and the height of the 2D pooling respectively.
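Equations (10) and (11) amount to a standard multi-channel "valid" 2D convolution with ReLU, followed by non-overlapping max pooling. A loop-based NumPy sketch for clarity; shapes and window sizes are toy values of our choosing.

```python
import numpy as np

def conv2d_valid(channels, kernels, bias):
    """Eq. (10): 'valid' 2D convolution summing over input channels, then ReLU."""
    C, H, W = channels.shape
    F, _, rh, rw = kernels.shape
    out = np.zeros((F, H - rh + 1, W - rw + 1))
    for f in range(F):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                patch = channels[:, i:i + rh, j:j + rw]
                out[f, i, j] = np.sum(patch * kernels[f]) + bias[f]
    return np.maximum(out, 0.0)   # ReLU

def max_pool(feature_maps, ph, pw):
    """Eq. (11): non-overlapping 2D max pooling."""
    F, H, W = feature_maps.shape
    Hp, Wp = H // ph, W // pw
    trimmed = feature_maps[:, :Hp * ph, :Wp * pw]
    return trimmed.reshape(F, Hp, ph, Wp, pw).max(axis=(2, 4))

rng = np.random.default_rng(3)
M = rng.standard_normal((3, 8, 8))       # the three similarity matrices as channels
K = rng.standard_normal((2, 3, 3, 3))    # 2 feature maps, 3x3 window
pooled = max_pool(conv2d_valid(M, K, np.zeros(2)), 2, 2)
```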

The outputs of the final feature maps are concatenated as a vector $v$ and fed to a two-layer feed-forward neural network (i.e., MLP) to calculate a matching score $m(S_x, S_y)$:

$$m(S_x, S_y) = f_2\big(W_2\, f_1(W_1 v + b_1) + b_2\big), \qquad (12)$$

where $W_1$, $b_1$, $W_2$, and $b_2$ are parameters. $f_2$ is softmax and $f_1$ is tanh.
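Equation (12) is a plain two-layer feed-forward network. A NumPy sketch with toy dimensions; the 3-way output mirrors the three answer labels used later in the QA task.

```python
import numpy as np

def softmax(x):
    x = x - x.max()               # numerical stability
    ex = np.exp(x)
    return ex / ex.sum()

def matching_score(v, W1, b1, W2, b2):
    """Eq. (12): tanh hidden layer followed by a softmax output layer."""
    return softmax(W2 @ np.tanh(W1 @ v + b1) + b2)

rng = np.random.default_rng(4)
v = rng.standard_normal(18)                           # concatenated pooled features
W1, b1 = rng.standard_normal((8, 18)), np.zeros(8)
W2, b2 = rng.standard_normal((3, 8)), np.zeros(3)     # 3 output classes
p = matching_score(v, W1, b1, W2, b2)
```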

KEHNN inherits the advantage of 2D CNNs (??) of matching two objects by letting them meet at the beginning. Moreover, it constructs interaction matrices by considering multiple matching features. Therefore, semantic relations between the two objects can be sufficiently modeled and leveraged in building the matching function. Our model extends the existing models (?) by fusing extra knowledge into matching and conducting matching with multiple channels.

We learn $m(\cdot,\cdot)$ by minimizing cross entropy (?) with $\mathcal{D}$ and the prior knowledge. Let $\Theta$ denote the parameters of our model. Then the objective function of learning can be formulated as

$$\mathcal{L}(\Theta) = -\sum_{i=1}^{N} \sum_{j=1}^{C} \mathbb{1}\{y_i = j\} \log m_j(S_{x,i}, S_{y,i}), \qquad (13)$$

where $N$ is the number of instances in $\mathcal{D}$, and $C$ is the number of values of labels in $\mathcal{D}$. $m_j(\cdot,\cdot)$ returns the $j$-th element from the $C$-dimensional vector $m(S_{x,i}, S_{y,i})$, and $\mathbb{1}\{y_i = j\}$ is $1$ or $0$, indicating whether $y_i$ equals $j$ or not. We optimize the objective function using back-propagation, and the parameters are updated by stochastic gradient descent with the Adam algorithm (?). As regularization, we employ early-stopping (?) and dropout (?) with a rate of 0.5. The initial learning rate and the batch size were fixed across all experiments.
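Equation (13) is the standard multi-class cross entropy. A minimal NumPy sketch on a toy batch (the probabilities and labels are illustrative only); here we average over instances rather than sum, which only rescales the objective.

```python
import numpy as np

def cross_entropy(probs, labels, num_classes):
    """Eq. (13), averaged: mean negative log-likelihood of the gold labels."""
    onehot = np.eye(num_classes)[labels]     # 1{y_i = j} as a one-hot matrix
    return -np.mean(np.sum(onehot * np.log(probs), axis=1))

probs = np.array([[0.7, 0.2, 0.1],           # model outputs m(S_x, S_y)
                  [0.1, 0.8, 0.1]])
labels = np.array([0, 1])                    # gold labels y_i
loss = cross_entropy(probs, labels, 3)       # -(ln 0.7 + ln 0.8) / 2 ≈ 0.2899
```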

Prior Knowledge Acquisition

Prior knowledge plays a key role in the success of our model. As described above, in learning, we expect prior knowledge to represent the global context of the input. In practice, we can use tags, keywords, topics, or entities that are related to the input as instantiations of the prior knowledge. Such prior knowledge can be obtained either from the metadata of the input or from extra algorithms, and represents a summarization of the overall semantics of the input. Algorithms including tag recommendation (?), keyword extraction (?), topic modeling (?) and entity linking (?) can be utilized to extract the prior knowledge from multiple resources like web documents, social media and knowledge bases.

In our experiments, we use question categories as the prior knowledge in the QA task, because the categories assigned by the askers can reflect the question intention. For the conversation task, we pre-trained a Twitter LDA model (?) on external large-scale social media data, as topics learned from social media could help us group text with similar meanings in a better way. Both the categories and the topics represent a high level abstraction, from a human or an automatic algorithm, of the QA pairs or the message-response pairs, and therefore they can reflect the global semantics of the input of the two tasks. As a consequence, our knowledge gate can learn a better representation for matching with the prior knowledge.
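As described in the experiment settings below, both instantiations ultimately map a category or topic to a dense knowledge vector by averaging word embeddings. A minimal sketch with a toy, hypothetical embedding table:

```python
import numpy as np

def knowledge_vector(words, embeddings):
    """Average the embeddings of the words describing a category or topic."""
    vecs = [embeddings[w] for w in words if w in embeddings]
    if not vecs:                               # fall back to a zero vector
        dim = next(iter(embeddings.values())).shape
        return np.zeros(dim)
    return np.mean(vecs, axis=0)

# toy embedding table (hypothetical values for illustration)
emb = {"family": np.array([1.0, 0.0]),
       "kids":   np.array([0.8, 0.2]),
       "school": np.array([0.6, 0.4])}
k = knowledge_vector(["family", "kids", "school"], emb)   # -> [0.8, 0.2]
```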

Experiments

We tested our model on two matching tasks: answer selection for question answering and response selection for conversation.

Baseline

We considered the following models as baselines:

Multi-layer perceptron (MLP): each sentence is represented as a vector by averaging its word vectors. The two vectors are fed to a two-layer feedforward neural network to calculate a matching score. MLP shares the embedding tables with our model.

DeepMatch: the matching model proposed in (?) which only used topic information to perform matching.

CNNs: the Arc1 model and the Arc2 model proposed by Hu et al. (?).

CNTN: the convolution neural tensor network (?) proposed for community question answering.

MatchPyramid: the model proposed by Pang et al. (?), which matches two sentences using an image recognition approach. The model is a special case of our model with only channel one.

LSTMs: sentence vectors are generated by the last hidden state of an LSTM (?), or by the attentive pooling of all hidden states (?). We denote the two models as LSTM and LSTM-attn respectively.

MV-LSTM: the model (?) generates an interaction vector by combining hidden states of two sentences given by a shared BiLSTM. Then the interaction vector is fed to an MLP to compute the matching score.

We implemented all baselines and KEHNN with the open-source deep learning framework Theano (?). For all baselines and our model, the dimension of word embedding and the maximum text length were fixed across models, as was the dimension of hidden states in LSTMs, MV-LSTM, and the BiGRUs in our model. We only used one convolution layer and one max-pooling layer in all CNN based models, because we found that the performance of the models did not improve as the number of layers increased. For Arc2, MatchPyramid, and KEHNN, the window sizes of convolution and pooling were tuned on validation sets; the window size for Arc1 and CNTN, the numbers of feature maps, and the dimension of the hidden layer in MLP were tuned in the same way. The two sides of KEHNN shared word embeddings, knowledge embeddings, parameters of the BiGRUs, and parameters of the knowledge gates. All tuning was conducted on validation sets. The activation functions in the baselines are the same as those in our model.

 

Data       questions   answers   answers per question
Training   2600        16541     6.36
Dev        300         1645      5.48
Test       329         1976      6.00

Table 2: Statistics of the answer selection data set

Answer Selection

The goal of answer selection is to recognize high quality answers among the answer candidates of a question. We used a public data set for answer selection from SemEval 2015 (?), which collects question-answer pairs from Qatar Living Forum (http://www.qatarliving.com/forum) and requires classifying the answers into three categories (good, potential, and bad; i.e., the number of label values is three in our model). The distribution of the three categories is unbalanced. The statistics of the data set are summarized in Table 2. We used classification accuracy as the evaluation metric.

Specific Setting

In this task, we regarded question categories tagged by askers as prior knowledge (for both pieces of text in a pair). Questions in the Qatar Living data are organized into categories. The knowledge vector was initialized by averaging the embeddings of the words in the category name. For all baselines and our model, the word embeddings and the topic model (in DeepMatch) were trained on the Qatar Living raw text provided by SemEval-2015 (http://alt.qcri.org/semeval2015/task3/index.php?id=data-and-tools). We fixed the word embeddings during training, and set $f$ in Equations (3), (8), (9) as ReLU.

 

Model          ACC
MLP            0.713
DeepMatch      0.682
Arc1           0.715
Arc2           0.715
CNTN           0.735
MatchPyramid   0.717
LSTM           0.725
LSTM-attn      0.736
MV-LSTM        0.735
KEHNN          0.748
JAIST          0.725

 

Table 3: Evaluation results on answer selection

Results

JAIST, the champion of the task in SemEval-2015, used 12 features and an SVM classifier, and achieved an accuracy of 0.725. From Table 3, we can see that advanced neural networks, such as CNTN, MV-LSTM, LSTM-attn and KEHNN, outperform JAIST’s model, indicating that hand-crafted features are less powerful than deep learning methods. Models that match text pairs by interaction representations, like Arc2 and MatchPyramid, are not better than models that perform matching with sentence embeddings, like Arc1. This is because the training data is small and we fixed the word embeddings in learning. LSTM based models in general perform better than CNN based models, because they can capture sequential information in sentences. KEHNN leverages both the sequential information and the prior knowledge from categories in matching by a CNN with multiple channels. Therefore, it outperforms all other methods, and the improvement is statistically significant (t-test). It is worth noting that the gap between different methods is not big. This is because answers labeled as “potential” cover only a small portion of the data and are hard to predict.

Response Selection

Response selection is important for building retrieval-based chatbots (?). The goal of the task is to select a proper response for a message from a candidate pool to realize human-machine conversation. We used a public English conversation data set, the Ubuntu Corpus (?), to conduct the experiment. The corpus consists of a large number of human-human dialogues about Ubuntu techniques. Each dialogue contains at least 3 turns, and we only kept the last two utterances, as we study text pair matching and ignore context information. We used the data pre-processed by Xu et al. (?) (https://www.dropbox.com/s/2fdn26rj6h9bpvl/ubuntudata.zip?dl=0), in which all urls and numbers were replaced by placeholder tokens to alleviate the sparsity issue. The training set contains 1 million message-response pairs with a 1:1 ratio between positive and negative responses, and both the validation set and the test set have 0.5 million message-response pairs with a 1:9 ratio between positive and negative responses. We followed Lowe et al. (?) and employed recall at position $k$ in $n$ candidates as evaluation metrics, denoted as $R_n@k$. $R_n@k$ indicates whether the correct response is in the top $k$ results from $n$ candidates.

Specific Setting

In this task, we trained a topic model to generate topics for both messages and responses as prior knowledge. We crawled 8 million questions (question and description) from the “Computers & Internet” category in Yahoo! Answers, and utilized these data to train a Twitter LDA model (?). In order to construct the knowledge vectors, we separately assigned a topic to a message and a response by the inference algorithm of Twitter LDA. Then we transformed the topic to a vector by averaging the embeddings of the top words under the topic. Word embedding tables were initialized using the public word vectors available at http://nlp.stanford.edu/projects/glove (trained on Twitter) and updated in learning. Tanh is used as $f$ in Equations (3), (8), (9).

 

Model          R2@1    R10@1   R10@2   R10@5
MLP            0.651   0.256   0.380   0.703
DeepMatch      0.593   0.345   0.376   0.693
Arc1           0.665   0.221   0.360   0.684
Arc2           0.736   0.380   0.534   0.777
CNTN           0.743   0.349   0.512   0.797
MatchPyramid   0.743   0.420   0.554   0.786
LSTM           0.725   0.361   0.494   0.801
LSTM-attn      0.758   0.381   0.545   0.801
MV-LSTM        0.767   0.410   0.565   0.800
KEHNN          0.786   0.460   0.591   0.819

 

Table 4: Evaluation results on response selection

Results

Table 4 reports the evaluation results on response selection. Our method outperforms baseline models on all metrics, and the improvement is statistically significant (t-test). On this data set, as the training data becomes large and we updated word embeddings in learning, Arc2 and MatchPyramid are much better than Arc1. LSTM based models perform better than CNN based models, which is consistent with the results in the QA task.

Discussions

We first investigate the performance of KEHNN in terms of text length, as shown in Table 5. We compared our model with two typical matching models: LSTM and MV-LSTM. We binned the text pairs into four buckets according to the length of the concatenation of the two pieces of text. The Pair row gives the number of pairs that fall into each bucket. From the results, we can see that on relatively short text, KEHNN performs comparably with MV-LSTM, while on long text, KEHNN significantly improves the matching accuracy. The results verify our claim that matching with multiple channels and prior knowledge can enhance accuracy on long text. Note that on the Ubuntu data, all models perform worse on short text than on long text. This is because we ignored context for short message-response pairs, while long pairs are usually independent of context and have complete semantics.

Furthermore, we also report the contributions of the different channels of our model in Table 6. We can see that channel two is the most powerful one on the conversation data, while channel three is the best one on the QA data. This is because the prior knowledge in the conversation data is automatically generated rather than obtained from metadata as in the QA data. The automatically generated prior knowledge contains noise, which hurts the performance of channel three. The full model outperforms all single channels consistently, demonstrating that matching with multiple channels can leverage the three types of features and sufficiently model the semantic relations in text pairs.

 

Length bucket   1       2       3       4
Pair            203     689     579     505
LSTM            0.768   0.705   0.728   0.726
MV-LSTM         0.788   0.708   0.746   0.739
KEHNN           0.792   0.721   0.765   0.746

 

(a) QA dataset

 

Length bucket   1        2        3       4
Pair            253578   207772   33618   5032
LSTM            0.707    0.748    0.732   0.718
MV-LSTM         0.726    0.752    0.725   0.694
KEHNN           0.724    0.774    0.785   0.791

 

(b) Ubuntu dataset
Table 5: Accuracy on different length of text

 

                 Conversation                     QA
                 R2@1    R10@1   R10@2   R10@5   ACC
Channel one      0.743   0.420   0.554   0.786   0.717
Channel two      0.779   0.425   0.565   0.800   0.734
Channel three    0.750   0.360   0.531   0.791   0.738
KEHNN            0.786   0.460   0.591   0.819   0.748

 

Table 6: Comparison of different channels

Conclusion

This paper proposed KEHNN, a model that leverages prior knowledge in semantic matching. Experimental results show that our model can significantly outperform state-of-the-art matching models on two matching tasks.

References

  • [Nakov et al. 2015] Nakov, P.; Màrquez, L.; Magdy, W.; Moschitti, A.; Glass, J.; and Randeree, B. 2015. Semeval-2015 task 3: Answer selection in community question answering. In SemEval-2015, 269.
  • [Bahdanau, Cho, and Bengio 2014] Bahdanau, D.; Cho, K.; and Bengio, Y. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
  • [Blei, Ng, and Jordan 2003] Blei, D. M.; Ng, A. Y.; and Jordan, M. I. 2003. Latent dirichlet allocation. the Journal of machine Learning research 3:993–1022.
  • [Chung et al. 2014] Chung, J.; Gulcehre, C.; Cho, K.; and Bengio, Y. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555.
  • [Dolan, Quirk, and Brockett 2004] Dolan, B.; Quirk, C.; and Brockett, C. 2004. Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In Proceedings of the 20th international conference on Computational Linguistics, 350. Association for Computational Linguistics.
  • [Han, Sun, and Zhao 2011] Han, X.; Sun, L.; and Zhao, J. 2011. Collective entity linking in web text: a graph-based method. In Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval, 765–774. ACM.
  • [Hochreiter and Schmidhuber 1997] Hochreiter, S., and Schmidhuber, J. 1997. Long short-term memory. Neural computation 9(8):1735–1780.
  • [Hu et al. 2014] Hu, B.; Lu, Z.; Li, H.; and Chen, Q. 2014. Convolutional neural network architectures for matching natural language sentences. In Advances in Neural Information Processing Systems, 2042–2050.
  • [Huang et al. 2013] Huang, P.-S.; He, X.; Gao, J.; Deng, L.; Acero, A.; and Heck, L. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of the 22nd ACM international conference on Conference on information & knowledge management, 2333–2338. ACM.
  • [Kingma and Ba 2014] Kingma, D., and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  • [Koehn, Och, and Marcu 2003] Koehn, P.; Och, F. J.; and Marcu, D. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology-Volume 1, 48–54. Association for Computational Linguistics.
  • [Lawrence and Giles 2000] Lawrence, S., and Giles, C. L. 2000. Overfitting and neural networks: conjugate gradient and backpropagation. In Neural Networks, 2000. IJCNN 2000, Proceedings of the IEEE-INNS-ENNS International Joint Conference on, volume 1, 114–119. IEEE.
  • [Levin and Fleisher 1988] Levin, E., and Fleisher, M. 1988. Accelerated learning in layered neural networks. Complex systems 2:625–640.
  • [Liu, Qiu, and Huang 2016] Liu, P.; Qiu, X.; and Huang, X. 2016. Modelling interaction of sentence pair with coupled-lstms. arXiv preprint arXiv:1605.05573.
  • [Lowe et al. 2015] Lowe, R.; Pow, N.; Serban, I.; and Pineau, J. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. arXiv preprint arXiv:1506.08909.
  • [Lu and Li 2013] Lu, Z., and Li, H. 2013. A deep architecture for matching short texts. In Advances in Neural Information Processing Systems, 1367–1375.
  • [Pang et al. 2016] Pang, L.; Lan, Y.; Guo, J.; Xu, J.; Wan, S.; and Cheng, X. 2016. Text matching as image recognition. In AAAI.
  • [Qiu and Huang 2015] Qiu, X., and Huang, X. 2015. Convolutional neural tensor network architecture for community-based question answering. In Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI), 1305–1311.
  • [Ramos 2003] Ramos, J. 2003. Using tf-idf to determine word relevance in document queries. In Proceedings of the first instructional conference on machine learning.
  • [Shen et al. 2014] Shen, Y.; He, X.; Gao, J.; Deng, L.; and Mesnil, G. 2014. A latent semantic model with convolutional-pooling structure for information retrieval. In Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management, 101–110. ACM.
  • [Socher et al. 2011] Socher, R.; Huang, E. H.; Pennington, J.; Manning, C. D.; and Ng, A. Y. 2011. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Advances in Neural Information Processing Systems, 801–809.
  • [Socher et al. 2013] Socher, R.; Chen, D.; Manning, C. D.; and Ng, A. 2013. Reasoning with neural tensor networks for knowledge base completion. In Advances in Neural Information Processing Systems, 926–934.
  • [Srivastava et al. 2014] Srivastava, N.; Hinton, G. E.; Krizhevsky, A.; Sutskever, I.; and Salakhutdinov, R. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15(1):1929–1958.
  • [Tan, Xiang, and Zhou 2015] Tan, M.; Xiang, B.; and Zhou, B. 2015. Lstm-based deep learning models for non-factoid answer selection. arXiv preprint arXiv:1511.04108.
  • [Theano Development Team 2016] Theano Development Team. 2016. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints abs/1605.02688.
  • [Voorhees et al. 1999] Voorhees, E. M., et al. 1999. The TREC-8 question answering track report. In TREC, volume 99, 77–82.
  • [Wan et al. 2015] Wan, S.; Lan, Y.; Guo, J.; Xu, J.; Pang, L.; and Cheng, X. 2015. A deep architecture for semantic matching with multiple positional sentence representations. arXiv preprint arXiv:1511.08277.
  • [Wan et al. 2016] Wan, S.; Lan, Y.; Xu, J.; Guo, J.; Pang, L.; and Cheng, X. 2016. Match-srnn: Modeling the recursive matching structure with spatial rnn. arXiv preprint arXiv:1604.04378.
  • [Wang et al. 2013] Wang, H.; Lu, Z.; Li, H.; and Chen, E. 2013. A dataset for research on short-text conversations. In EMNLP, 935–945.
  • [Wu et al. 2015] Wu, Y.; Wu, W.; Li, Z.; and Zhou, M. 2015. Mining query subtopics from questions in community question answering. In AAAI, 339–345.
  • [Wu et al. 2016] Wu, Y.; Wu, W.; Li, Z.; and Zhou, M. 2016. Improving recommendation of tail tags for questions in community question answering. In Thirtieth AAAI Conference on Artificial Intelligence.
  • [Xu et al. 2016] Xu, Z.; Liu, B.; Wang, B.; Sun, C.; and Wang, X. 2016. Incorporating loose-structured knowledge into lstm with recall gate for conversation modeling. arXiv preprint arXiv:1605.05110.
  • [Yin and Schütze 2015] Yin, W., and Schütze, H. 2015. Multigrancnn: An architecture for general matching of text chunks on multiple levels of granularity. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL), 63–73.
  • [Yin et al. 2015] Yin, W.; Schütze, H.; Xiang, B.; and Zhou, B. 2015. Abcnn: Attention-based convolutional neural network for modeling sentence pairs. arXiv preprint arXiv:1512.05193.
  • [Zhao et al. 2011] Zhao, W. X.; Jiang, J.; Weng, J.; He, J.; Lim, E.-P.; Yan, H.; and Li, X. 2011. Comparing twitter and traditional media using topic models. In Advances in Information Retrieval. Springer. 338–349.