Knowledge as a Teacher: Knowledge-Guided Structural Attention Networks


Abstract

Natural language understanding (NLU) is a core component of a spoken dialogue system. Recently, recurrent neural networks (RNNs) have obtained strong results on NLU due to their superior ability to preserve sequential information over time. Traditionally, the NLU module tags semantic slots for utterances considering their flat structures, since the underlying RNN structure is a linear chain. However, natural language exhibits linguistic properties that provide rich, structured information for better understanding. This paper introduces a novel model, knowledge-guided structural attention networks (K-SAN), a generalization of RNNs that additionally incorporates non-flat network topologies guided by prior knowledge. The model has two characteristics: 1) important substructures can be captured from small amounts of training data, allowing the model to generalize to previously unseen test data; 2) the model automatically figures out which substructures are essential for predicting the semantic tags of a given sentence, so that understanding performance is improved. Experiments on the benchmark Air Travel Information System (ATIS) data show that the proposed K-SAN architecture can effectively extract salient knowledge from substructures with an attention mechanism and outperform state-of-the-art neural-network-based frameworks.

1 Introduction

Over the past decade, goal-oriented spoken dialogue systems (SDS), such as the virtual personal assistants Microsoft's Cortana and Apple's Siri, have been incorporated in various devices, allowing users to speak to systems freely in order to finish tasks more efficiently. A key component of these conversational systems is the natural language understanding (NLU) module, which refers to the targeted understanding of human speech directed at machines [39]. The goal of such "targeted" understanding is to convert the recognized user speech, at each turn, into a task-specific semantic representation of the user's intention that aligns with the back-end knowledge and action sources for task completion. The dialogue manager then interprets the semantics of the user's request and the associated back-end results, and decides the most appropriate system action by exploiting semantic context and user-specific meta-information, such as geo-location and personal preferences [25].

A typical NLU pipeline includes domain classification, intent determination, and slot filling [39]. NLU first decides the domain of the user's request given the input utterance and, based on the domain, predicts the intent and fills the associated slots corresponding to a domain-specific semantic template. For example, the figure below shows a user utterance, "show me the flights from seattle to san francisco", and its semantic frame, find_flight(origin="seattle", dest="san francisco"). The relationship between the origin city and the destination city is easy to see in this example, even though the two do not appear next to each other. Traditionally, domain detection and intent prediction are framed as utterance classification problems, for which classifiers such as support vector machines and maximum entropy models have been employed [13]. Slot filling is then framed as a word sequence tagging task, where the IOB (in-out-begin) format is applied for representing slot tags (as illustrated below), and hidden Markov models (HMM) or conditional random fields (CRF) have been employed for slot tagging [29].

An example utterance annotated with its semantic slots in the IOB format (S).
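
For concreteness, the sketch below pairs each token of the example utterance above with an IOB slot tag. It uses the simplified slot names origin and dest from the semantic frame; the actual ATIS annotation uses a finer-grained tag inventory (e.g., fromloc.city_name), so the tags here are illustrative only.

```python
# Hedged illustration of the IOB (in-out-begin) slot representation.
# Slot names follow the simplified semantic frame find_flight(origin=..., dest=...);
# the real ATIS tag set is more fine-grained.
utterance = "show me the flights from seattle to san francisco".split()
iob_tags = ["O", "O", "O", "O", "O", "B-origin", "O", "B-dest", "I-dest"]

for word, tag in zip(utterance, iob_tags):
    print(f"{word:10s} {tag}")
```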

With the advances in deep learning, deep belief networks (DBNs) with deep neural networks (DNNs) have been applied to domain and intent classification tasks [33]. More recently, RNN architectures have been proposed for intent determination. For slot filling, deep learning has been viewed as a feature generator whose neural architecture can be merged with CRFs [45], and RNNs have later been employed for sequence labeling in order to perform slot filling. However, the above studies benefit from large training data without leveraging any existing knowledge. When tagging sequences, RNNs with their underlying linear-chain structure treat the input as flat, potentially ignoring the structured information typical of natural language.

Hierarchical structures and semantic relationships capture linguistic characteristics of the input word sequences forming sentences, and such information may help interpret their meaning. Furthermore, prior knowledge can help in tagging sequences, especially when dealing with previously unseen ones [40]. Prior work exploited external web-scale knowledge graphs such as Freebase and Wikipedia for improving NLU [14], and other approaches leveraged linguistic knowledge encoded in parse trees for language understanding, where the extracted syntactic structural features and semantic dependency features enhance inference model learning and yield better language understanding performance in various domains.

Even with the emerging paradigm of integrating deep learning and linguistic knowledge for different NLP tasks [35], most previous work used such linguistic knowledge and knowledge bases as additional features fed into neural networks, and then learned models for tagging sequences. These feature-enrichment approaches have two possible limitations: 1) poor generalization and 2) error propagation. Poor generalization comes from the mismatch between knowledge bases and the input data, and features incorrectly extracted due to errors in earlier processing propagate those errors into the neural models. In order to address these issues and better learn sequence tagging models, this paper proposes knowledge-guided structural attention networks (K-SAN), a generalization of RNNs that automatically learns attention guided by external or prior knowledge and generates sentence-based representations specifically for sequence tagging. The main difference between K-SAN and previous approaches is that knowledge plays the role of a teacher that guides the network where and how much to focus attention while considering the whole linguistic structure simultaneously. Our main contributions are three-fold:

  • End-to-end learning
    To our knowledge, this is the first neural network approach that utilizes general knowledge as guidance in an end-to-end fashion, where the model automatically learns important substructures with an attention mechanism.

  • Generalization for different knowledge
    There is no required schema of knowledge, and different types of parsing results, such as dependency relations, knowledge graph-specific relations, and parsing output of hand-crafted grammars, can serve as the knowledge guidance in this model.

  • Efficiency and parallelizability
    Because the substructures from the input utterance are modeled separately, they can be processed in parallel, so the modeling time does not necessarily grow linearly with the number of words in the input sentence.

In the following sections, we empirically show the benefit of K-SAN on the targeted NLU task.

2 Related Work

Knowledge-Based Representations There is an emerging trend of learning representations at different levels, such as word embeddings [27], character embeddings [20], and sentence embeddings [19]. In addition to fully unsupervised embedding learning, knowledge bases have been widely utilized to learn entity embeddings with specific functions or relations [2]. Different from prior work, this paper focuses on learning composable substructure embeddings that are informative for understanding.

Recently, linguistic structures have been taken into account in deep learning frameworks. Several studies proposed dependency-based approaches to combine deep learning and linguistic structures, where the models used tree-based n-grams instead of surface ones to capture knowledge-guided relations for sentence modeling and classification; others utilized lexicalized dependency paths to learn embedding representations for semantic role labeling. However, the performance of these approaches depends heavily on the quality of parsing the whole sentence, and there is no control over the degree of attention on different substructures. Learning robust representations that incorporate whole structures thus remains unsolved. In this paper, we address this limitation by proposing K-SAN to learn robust representations of whole sentences, where the whole representation is composed of salient substructures in order to avoid error propagation.

Neural Attention and Memory Model One of the earliest models with a memory component applied to language processing is the memory network [43], which encodes facts into vectors and stores them in memory for question answering (QA). Following its success, dynamic memory networks (DMN) were proposed to additionally capture the position and temporality of transitive reasoning steps for different QA tasks [44]. The idea is to encode important knowledge and store it in memory for future usage with attention mechanisms, which allow neural network models to selectively pay attention to specific parts of the input. Various tasks have demonstrated the effectiveness of such attention mechanisms.

However, most previous work focused on classification or prediction tasks (predicting a single word given a question), and there are few studies on NLU tasks such as slot tagging. Based on the observation that linguistic or knowledge-based substructures can be treated as prior knowledge that benefits language understanding, this work borrows the idea of memory models to improve NLU. Unlike prior NLU work that utilized representations learned from knowledge bases to enrich features of the current sentence, this paper directly learns a sentence representation that incorporates memorized substructures with an automatically decided attention mechanism in an end-to-end manner.

3 Knowledge-Guided Structural Attention Networks (K-SAN)

For the NLU task, given an utterance with a sequence of words/tokens s = w_1, ..., w_T, our model predicts the corresponding semantic tags y_1, ..., y_T for each word/token by incorporating knowledge-guided structures. The proposed model is illustrated in Figure 1. The knowledge encoding module first leverages external knowledge to generate a linguistic structure for the utterance, where a discrete set of knowledge-guided substructures is encoded into a set of vector representations (Section 3.1). The model then learns a representation for the whole sentence by paying different amounts of attention to the substructures (Section 3.2). Finally, the learned vector encoding the knowledge-guided structure is used to improve the semantic tagger (Section 4).

Figure 1: The illustration of knowledge-guided structural attention networks (K-SAN) for NLU.

3.1 Knowledge Encoding Module

The knowledge-guided substructures of dependency parsing, x_i, on an example sentence s.

The prior knowledge obtained from external resources, such as dependency relations or knowledge bases, provides richer information that helps decide the semantic tags for a given input utterance. This paper takes dependency relations as an example for knowledge encoding; other structured relations can be applied in the same way. The input utterance is parsed by a dependency parser, and the substructures are built according to the paths from the root to all leaves [4]. For example, the dependency parsing of the utterance "show me the flights from seattle to san francisco" is shown in the figure above, where the associated substructures are obtained from the parse tree for knowledge encoding. Here we do not utilize the dependency relation labels in the experiments, for better generalization, because such labels may not always be available for different knowledge resources. Note that the number of substructures may be smaller than the number of words in the utterance, because non-leaf nodes do not have corresponding substructures, which reduces duplicated information in the model. The top-left component of Figure 1 illustrates the module for modeling knowledge-guided substructures.
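
As a rough illustration of this construction, the sketch below extracts root-to-leaf paths from a dependency tree given as head indices. The toy parse is only an approximation of what a dependency parser would produce and is not the Stanford parser's actual output format.

```python
# Build knowledge-guided substructures as root-to-leaf paths of a dependency tree.
# The parse is given as 0-based head indices (head == -1 marks the root);
# this toy parse is illustrative and may differ from the Stanford parser output.
tokens = ["show", "me", "the", "flights", "from", "seattle", "to", "san", "francisco"]
heads  = [-1,      0,    3,     0,         5,      3,         8,    8,     5]

children = {i: [] for i in range(len(tokens))}
root = None
for i, h in enumerate(heads):
    if h == -1:
        root = i
    else:
        children[h].append(i)

def root_to_leaf_paths(node, prefix):
    """Collect the token sequence along every path from the root to a leaf."""
    path = prefix + [tokens[node]]
    if not children[node]:          # leaf: emit one substructure
        return [path]
    paths = []
    for child in children[node]:
        paths.extend(root_to_leaf_paths(child, path))
    return paths

substructures = root_to_leaf_paths(root, [])
for x_i in substructures:
    print(" ".join(x_i))
# e.g. "show me", "show flights the", "show flights seattle from", ...
# Note: 5 substructures for a 9-word utterance, fewer than the number of words.
```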

3.2 Model Architecture

The model embeds all knowledge-guided substructures into a continuous space and stores the embeddings of all substructures x_i in the knowledge memory. The representation of the input utterance is then compared with the encoded knowledge representations to integrate the structure carried by the knowledge via an attention mechanism. The knowledge-guided representation of the sentence is then taken together with the word sequence for estimating the semantic tags. The four main procedures are described below.

Encoded Knowledge Representation To store the knowledge-guided structure, we convert each substructure x_i (e.g., a path from the root to a leaf in the dependency tree) into a structure vector m_i with dimension d by embedding the substructure in a continuous space through the knowledge encoding model M_kg. The input utterance s is also embedded into a vector u with the same dimension through the model M_in.

We apply three types of knowledge encoding models for M_kg and M_in in order to encode multiple words from a substructure or an input sentence into a single vector representation: 1) fully-connected neural networks (NN) with linear activation, 2) recurrent neural networks (RNN), and 3) convolutional neural networks (CNN) with a window size of 3 and a max-pooling operation. For example, one of the substructures shown above, "show flights seattle from", is encoded into a vector embedding. In the experiments, the weights of M_kg and M_in are tied together based on their consistent ability of sequence encoding.
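
The following numpy sketch illustrates the CNN variant of the encoder (window size 3 followed by max-pooling over time) on one substructure. The randomly initialized embedding and filter matrices stand in for learned parameters, and the padding and activation choices here are assumptions rather than details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {"show": 0, "flights": 1, "seattle": 2, "from": 3}
emb_dim, hid_dim, window = 100, 100, 3

E = rng.normal(scale=0.1, size=(len(vocab), emb_dim))          # word embedding table (stand-in)
W = rng.normal(scale=0.1, size=(window * emb_dim, hid_dim))    # convolution filters (stand-in)

def cnn_encode(words):
    """Encode a word sequence into a single vector: window-3 convolution + max-pooling."""
    x = np.stack([E[vocab[w]] for w in words])                 # (T, emb_dim)
    pad = np.zeros((window - 1, emb_dim))
    x = np.vstack([pad, x, pad])                               # pad so short inputs still work
    feats = [np.tanh(x[t:t + window].reshape(-1) @ W)          # one feature vector per window
             for t in range(x.shape[0] - window + 1)]
    return np.max(np.stack(feats), axis=0)                     # max over time -> (hid_dim,)

# One substructure from the example; with tied weights, the same encoder (M_in)
# would also embed the full utterance into the vector u.
m_i = cnn_encode(["show", "flights", "seattle", "from"])
print(m_i.shape)   # (100,)
```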

Knowledge Attention Distribution In the embedding space, we compute the match between the current utterance vector u and each substructure vector m_i by taking their inner product followed by a softmax:

    p_i = softmax(u^T m_i),

where softmax(z_i) = exp(z_i) / Σ_j exp(z_j), and the p_i can be viewed as an attention distribution for modeling the important substructures from external knowledge in order to understand the current utterance.

Sentence Representation In order to encode the knowledge-guided structure, a vector o is computed as a sum over the encoded knowledge embeddings weighted by the attention distribution,

    o = Σ_i p_i m_i,

which indicates that the sentence pays different amounts of attention to different substructures guided by the external knowledge. Because the function from input to output is smooth, we can easily compute gradients and back-propagate through it. The sum of the substructure vector o and the current input embedding u is then passed through a neural network model M_out to generate the output knowledge-guided representation h,

    h = M_out(o + u),    (1)

where we employ a fully-connected dense network for M_out.
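
Putting the attention and sentence-representation steps together, the sketch below computes p, o, and the knowledge-guided representation h of Equation (1). The tanh activation and the parameter names W_out and b_out for the dense layer M_out are assumptions, not details from the paper.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def knowledge_guided_representation(u, memory, W_out, b_out):
    """u: utterance vector (d,); memory: substructure vectors m_1..m_n as rows (n, d).
    Returns the knowledge-guided sentence representation h of Equation (1)."""
    p = softmax(memory @ u)                 # attention: p_i = softmax(u^T m_i)
    o = p @ memory                          # weighted sum: o = sum_i p_i m_i
    h = np.tanh(W_out @ (o + u) + b_out)    # fully-connected M_out over (o + u); tanh assumed
    return h, p

d, n = 100, 5
rng = np.random.default_rng(1)
u = rng.normal(size=d)                      # encoded utterance (stand-in)
memory = rng.normal(size=(n, d))            # encoded substructures (stand-in)
W_out, b_out = rng.normal(scale=0.1, size=(d, d)), np.zeros(d)
h, p = knowledge_guided_representation(u, memory, W_out, b_out)
print(p.round(3), h.shape)
```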

Sequence Tagging To estimate the tag sequence y corresponding to an input word sequence w, we use an RNN module to train a slot tagger, where the knowledge-guided representation h is fed into the input of the model in order to incorporate the structural information.

4 Recurrent Neural Network Tagger

4.1 Chain-Based RNN Tagger

Given the word sequence w = w_1, ..., w_T, the model predicts the tag sequence y = y_1, ..., y_T, where the tag y_t is aligned with the word w_t. We use the Elman RNN architecture, consisting of an input layer, a hidden layer, and an output layer [11]. The input, hidden, and output layers consist of sets of neurons representing the input, hidden state, and output at each time step t, namely w_t, h_t, and y_t, respectively:

    h_t = φ(W w_t + U h_{t-1}),    (2)
    ŷ_t = softmax(V h_t),

where φ is a smooth bounded function such as tanh, and ŷ_t is the probability distribution over semantic tags given the current hidden state h_t. The sequence probability can be formulated as

    p(y | w) = p(y_1, ..., y_T | w_1, ..., w_T) = Π_t p(y_t | w_1, ..., w_t).    (3)

The model can be trained using backpropagation to maximize the conditional likelihood of the training set labels.
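
A bare-bones sketch of the chain-based Elman tagger corresponding to Equations (2) and (3) is shown below, with randomly initialized parameters standing in for learned weights.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def elman_tag(word_vecs, W, U, V):
    """word_vecs: (T, d) input embeddings. Returns per-step tag distributions (T, n_tags)."""
    h = np.zeros(U.shape[0])
    outputs = []
    for w_t in word_vecs:
        h = np.tanh(W @ w_t + U @ h)        # Equation (2): hidden state update
        outputs.append(softmax(V @ h))      # distribution over semantic tags
    return np.stack(outputs)

d, hid, n_tags, T = 100, 64, 10, 9
rng = np.random.default_rng(2)
probs = elman_tag(rng.normal(size=(T, d)),
                  rng.normal(scale=0.1, size=(hid, d)),
                  rng.normal(scale=0.1, size=(hid, hid)),
                  rng.normal(scale=0.1, size=(n_tags, hid)))
print(probs.shape)   # (9, 10); the product of the chosen tags' probabilities gives Eq. (3)
```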

To overcome the vanishing gradient problem that frequently arises when modeling long-term dependencies, gated RNNs were designed to use a more sophisticated activation function than the usual affine transformation followed by a simple element-wise nonlinearity, by introducing gating units [8], as in the long short-term memory (LSTM) [15] and the gated recurrent unit (GRU) [7]. RNNs employing either of these recurrent units have been shown to perform well on tasks that require capturing long-term dependencies [26]. In this paper, we use an RNN with GRU cells, allowing each recurrent unit to adaptively capture dependencies at different time scales [7], because RNN-GRU can yield performance comparable to RNN-LSTM while requiring fewer parameters and less data to generalize [8].

A GRU has two gates, a reset gate r and an update gate z [7]. The reset gate determines how to combine the new input with the previous memory, and the update gate decides how much the unit updates its activation, or content:

    r_t = σ(W_r w_t + U_r h_{t-1}),
    z_t = σ(W_z w_t + U_z h_{t-1}),

where σ is a logistic sigmoid function.

The final activation of the GRU at time t, h_t, is a linear interpolation between the previous activation h_{t-1} and the candidate activation h̃_t:

    h_t = (1 - z_t) ⊙ h_{t-1} + z_t ⊙ h̃_t,
    h̃_t = tanh(W w_t + U (r_t ⊙ h_{t-1})),

where ⊙ denotes element-wise multiplication. When the reset gate is off, the unit effectively acts as if it is reading the first symbol of the input sequence, allowing it to forget the previously computed state. The output distribution ŷ_t is then computed from h_t as before.
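
The GRU update above can be written in a few lines of numpy; the sketch follows the standard GRU formulation, with assumed parameter names and shapes rather than any implementation detail from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(w_t, h_prev, params):
    """One GRU step: reset gate r_t, update gate z_t, candidate h_tilde, new state h_t."""
    Wr, Ur, Wz, Uz, Wh, Uh = params
    r = sigmoid(Wr @ w_t + Ur @ h_prev)                 # reset gate
    z = sigmoid(Wz @ w_t + Uz @ h_prev)                 # update gate
    h_tilde = np.tanh(Wh @ w_t + Uh @ (r * h_prev))     # candidate activation
    return (1.0 - z) * h_prev + z * h_tilde             # linear interpolation

d, hid = 100, 64
rng = np.random.default_rng(3)
params = [rng.normal(scale=0.1, size=(hid, d)) if i % 2 == 0       # Wr, Wz, Wh
          else rng.normal(scale=0.1, size=(hid, hid))              # Ur, Uz, Uh
          for i in range(6)]
h = gru_step(rng.normal(size=d), np.zeros(hid), params)
print(h.shape)   # (64,)
```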

4.2 Knowledge-Guided RNN Tagger

In order to model the encoded knowledge, at each time step t the knowledge-guided sentence representation h from (Equation 1) is fed into the RNN model together with the word w_t. For the plain RNN, the hidden layer can be formulated as

    h_t = φ(W w_t + U h_{t-1} + V_kg h),

which replaces (Equation 2), as illustrated in the right block of Figure 1. RNN-GRU can incorporate the encoded knowledge in a similar way, where h can be added into the gating mechanisms for modeling the contextual knowledge similarly.
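
For the plain-RNN case, injecting the knowledge-guided representation h amounts to adding one extra term to the hidden-state update; the projection matrix name V_kg in the sketch below is an assumed name.

```python
import numpy as np

def kg_rnn_step(w_t, h_prev, h_kg, W, U, V_kg):
    """Knowledge-guided variant of Equation (2): the sentence-level
    knowledge vector h_kg is injected at every time step."""
    return np.tanh(W @ w_t + U @ h_prev + V_kg @ h_kg)

d, hid = 100, 64
rng = np.random.default_rng(4)
h_t = kg_rnn_step(rng.normal(size=d),                      # current word embedding
                  np.zeros(hid),                           # previous hidden state
                  rng.normal(size=d),                      # knowledge-guided representation h
                  rng.normal(scale=0.1, size=(hid, d)),
                  rng.normal(scale=0.1, size=(hid, hid)),
                  rng.normal(scale=0.1, size=(hid, d)))
print(h_t.shape)   # (64,)
```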

The joint tagging model that incorporates a chain-based RNN tagger (upper block) and a knowledge-guided RNN tagger (lower block).

4.3 Joint RNN Tagger

Because the chain-based tagger and the knowledge-guided tagger carry different information, a joint RNN tagger is proposed to balance the information between the two model architectures; its architecture is shown in the figure above. The joint tagger combines the two taggers' information with a weight α that balances the chain-based and knowledge-guided sides. By jointly considering chain-based and knowledge-guided information, the joint RNN tagger is expected to achieve better generalization, and its performance should be less sensitive to poor structures from external knowledge. In the experiments, α is set to 0.5 to balance the two sides. The objective of the proposed model is to maximize the sequence probability in (Equation 3), and the model can be trained in an end-to-end manner, where the error is back-propagated through the whole architecture.
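
The sketch below shows one plausible reading of this combination, interpolating the two taggers' per-step tag distributions with weight alpha; the exact point of combination in the original architecture (outputs vs. hidden states) is an assumption here.

```python
import numpy as np

def joint_tag_distribution(p_chain, p_kg, alpha=0.5):
    """Interpolate the chain-based and knowledge-guided taggers' per-step
    tag distributions with weight alpha (one plausible combination)."""
    return alpha * p_chain + (1.0 - alpha) * p_kg

T, n_tags = 9, 10
rng = np.random.default_rng(5)
p_chain = rng.dirichlet(np.ones(n_tags), size=T)   # stand-in chain-based outputs
p_kg    = rng.dirichlet(np.ones(n_tags), size=T)   # stand-in knowledge-guided outputs
p_joint = joint_tag_distribution(p_chain, p_kg)
pred_tags = p_joint.argmax(axis=1)                 # per-word tag decisions
print(pred_tags)
```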

5 Experiments

Table 1: The F1 scores of predicted slots on different sizes of ATIS training data, where K-SAN utilizes the dependency relations parsed by the Stanford parser. Small: 1/40 of the training set; Medium: 1/10 of the training set; Large: the original set.

Approach   | Knowledge Encoder (M_kg / M_in) | Tagger | Small | Medium | Large
Baseline   | -           | CRF | 58.94 | 78.74 | 89.73
Baseline   | -           | RNN | 68.58 | 84.55 | 92.97
Baseline   | CNN         | RNN | 73.57 | 85.52 | 93.88
Structural | -           | CRF | 59.55 | 78.71 | 90.13
Structural | DCNN        | RNN | 70.24 | 83.80 | 93.25
Structural | Tree-RNN    | RNN | 73.50 | 83.92 | 92.28
Proposed   | K-SAN (NN)  | RNN | 74.11 | 85.97 | 93.98
Proposed   | K-SAN (CNN) | RNN | 74.60 | 87.99 | 94.86
Proposed   | K-SAN (RNN) | RNN | 73.13 | 86.85 | 94.97

5.1 Experimental Setup

The dataset for the experiments is the benchmark ATIS corpus, which is extensively used by the NLU community [26]. There are 4,978 training utterances selected from Class A (context-independent) of ATIS-2 and ATIS-3, and 893 test utterances selected from the ATIS-3 Nov93 and Dec94 sets. In the experiments, we only use lexical features. In order to show robustness to data scarcity, we conduct the experiments with three different sizes of training data (Small, Medium, and Large), where Small is 1/40 of the original set, Medium is 1/10, and Large is the full set. The evaluation metric for NLU is the F-measure on the predicted slots.¹

For the K-SAN experiments, we parse all data with the Stanford dependency parser [4] (pre-trained on the PTB) and represent words by embeddings trained on in-domain data. The loss function is cross-entropy, and the optimizer is Adam with its default settings [18] (learning rate 0.001, β1 = 0.9, β2 = 0.999, ε = 1e-8). The maximum number of training iterations for the K-SAN models is set to 300. The dimensionality of the input word embeddings is 100, and the hidden layer sizes and dropout rates, like all other hyperparameters, are tuned on the dev set for all experiments. All reported results are from the joint RNN tagger.
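
For reference, slot F1 is computed over exact-match slot chunks extracted from the IOB tags; the sketch below is a simplified version of the conlleval-style scoring mentioned in the footnote (the official script handles more edge cases).

```python
def iob_chunks(tags):
    """Extract (slot_type, start, end) chunks from an IOB tag sequence."""
    chunks, start, label = [], None, None
    for i, tag in enumerate(tags + ["O"]):          # sentinel flushes the last chunk
        if tag.startswith("B-") or tag == "O" or (tag.startswith("I-") and tag[2:] != label):
            if label is not None:
                chunks.append((label, start, i))
            start, label = (i, tag[2:]) if tag != "O" else (None, None)
    return set(chunks)

def slot_f1(gold_seqs, pred_seqs):
    """Micro-averaged chunk-level F1 over a list of gold/predicted tag sequences."""
    tp = fp = fn = 0
    for gold, pred in zip(gold_seqs, pred_seqs):
        g, p = iob_chunks(gold), iob_chunks(pred)
        tp += len(g & p); fp += len(p - g); fn += len(g - p)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec  = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

gold = [["O", "B-origin", "O", "B-dest", "I-dest"]]
pred = [["O", "B-origin", "O", "B-dest", "O"]]
print(round(slot_f1(gold, pred), 2))   # 0.5: origin matched, dest span mismatched
```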

5.2 Baseline

To validate the effectiveness of the proposed model, we compare the performance with the following baselines.

  • Baseline:

    • CRF Tagger [40]: predicts a semantic slot for each word with a context window (size = 5).

    • RNN Tagger [26]: predicts a semantic slot for each word.

    • CNN Encoder-Tagger [17]: tags semantic slots with consideration of sentence embeddings learned by a convolutional model.

  • Structural: The NLU models utilize linguistic information when tagging slots, where DCNN and Tree-RNN are the state-of-the-art approaches for embedding sentences with linguistic structures.

    • CRF Tagger [40]: predicts slots based on the lexical (5-word window) and syntactic (dependent head in the parsing tree) features.

    • DCNN [23]: predicts slots by incorporating sentence embeddings learned by a convolutional model with consideration of dependency tree structures.

    • Tree-RNN [38]: predicts slots with sentence embeddings learned by an RNN model based on the tree structures of sentences.

The visualization of the decoded knowledge-guided structural attention for both relations and words learned from different size of training data. Relations and words with darker color indicate higher attention weights generated by the proposed K-SAN with CNN. The slot tags are shown in the figure for reference. Note that the dependency relations are incorrectly parsed by the Stanford parser in this example, but our model is still able to benefit from the structural information.

5.3 Slot Filling Results

Table 1 shows the slot filling performance on different sizes of training data (Small, Medium, and Large use 1/40, 1/10, and the whole training set, respectively). Among the baselines (models without knowledge features), the CNN Encoder-Tagger achieves the best performance on all datasets.

Among the structural models (models with knowledge encoding), the Tree-RNN Encoder-Tagger performs better on the Small and Medium sets but worse than the DCNN Encoder-Tagger on the Large set.

The CNN Encoder-Tagger [17] performs better than DCNN [23] and Tree-RNN [38], even though it does not leverage external knowledge when encoding sentences. Comparing the NLU performance of the baselines with these state-of-the-art structural models, there is no significant difference. This suggests that encoding sentence information without distinguishing between substructures may not capture the salient semantics needed to improve understanding performance.

Among the proposed K-SAN models, the CNN encoder performs best on Small (about 75% F1) and Medium (about 88% F1), and the RNN encoder performs best on the Large set (about 95% F1). Moreover, most of the proposed models outperform all baselines, and the improvement on the small dataset is more significant, suggesting that the proposed models generalize better and are less sensitive to unseen data. For example, given the utterance "which flights leave on monday from montreal and arrive in chicago in the morning", "morning" is correctly tagged with the semantic tag B-arrive_time.period_of_day by K-SAN but incorrectly tagged as B-depart_time.period_of_day by the baselines, because knowledge guides the model to pay attention to the correct salient substructures. The proposed model also outperforms the prior state-of-the-art RNN-based taggers on the large dataset, showing the effectiveness of leveraging knowledge-guided structures for learning task-specific embeddings as well as robustness to data scarcity and mismatch.

5.4 Attention Analysis

In order to show that the performance gains come from learning correct attention even from much smaller training data, we visualize the attention over both words and relations decoded by K-SAN with the CNN encoder in the figure above. Darker blocks and lines indicate higher attention for words and relations, respectively. The words and relations with higher attention correspond to the parts most crucial for predicting the correct slots, e.g., the origin, destination, and time. Furthermore, the difference in attention distributions between the three datasets is not significant, suggesting that the proposed model is able to pay attention to the important substructures guided by the external knowledge even when the training data is scarce.

Figure 3: The constructing procedure of knowledge-guided substructures, x_i, on an example sentence s: (a) syntax, via the dependency tree; (b) semantics, via the AMR graph.

Table 2: The F1 scores of predicted slots with knowledge obtained from different resources.

Approach    | Knowledge       | Parser     | Max #substructures | Small | Medium | Large
CRF         | Dependency Tree | Stanford   | -  | 59.55 | 78.71 | 90.13
CRF         | Dependency Tree | SyntaxNet  | -  | 61.09 | 78.87 | 90.92
CRF         | AMR Graph       | Rule-Based | -  | 59.55 | 79.15 | 89.97
CRF         | AMR Graph       | JAMR       | -  | 61.12 | 78.64 | 90.25
K-SAN (CNN) | Dependency Tree | Stanford   | 53 | 74.60 | 87.99 | 94.86
K-SAN (CNN) | Dependency Tree | SyntaxNet  | 25 | 74.35 | 88.40 | 95.00
K-SAN (CNN) | AMR Graph       | JAMR       | 8  | 74.27 | 88.27 | 94.89

5.5 Knowledge Generalization

In order to show the capacity for generalization to different knowledge resources, we apply the K-SAN model to different knowledge bases. Below we compare two types of knowledge formats: dependency trees and Abstract Meaning Representation (AMR). AMR is a semantic formalism in which the meaning of a sentence is encoded as a rooted, directed, acyclic graph [1], where nodes represent concepts and labeled directed edges represent the relations between two concepts. The formalism is based on propositional logic and neo-Davidsonian event representations [28]. The semantic concepts in AMR have been leveraged to benefit multiple NLP tasks [22]. Unlike the syntactic information from dependency trees, the AMR graph contains semantic information, which may offer more specific conceptual relations. Figure 3 compares a dependency tree and an AMR graph for the same example utterance and shows how the knowledge-guided substructures are constructed.

Table 2 presents the performance of the CRF and K-SAN (CNN) taggers that utilize dependency relations or AMR edges as knowledge guidance on the same datasets, where the CRF takes the head words from either dependency trees or AMR graphs as additional features and K-SAN incorporates the knowledge-guided substructures as illustrated in Figure 3. The dependency trees are obtained from the Stanford dependency parser or the SyntaxNet parser², and the AMR graphs are generated by a rule-based AMR parser or JAMR³.

Among the four knowledge resources (different knowledge types obtained from different parsers), all results show similar performance across the three dataset sizes. The maximum number of substructures for the dependency trees is larger than for the AMR graphs (53 and 25 vs. 19 and 8), because syntax is more general and may provide richer cues for guiding attention, while semantics is more specific and may offer stronger guidance. In sum, the models applying the four different resources achieve similar performance, and all significantly outperform the state-of-the-art NLU tagger, showing the effectiveness, generalization, and robustness of the proposed K-SAN model.

6 Conclusion

This paper proposes a novel model, knowledge-guided structural attention networks (K-SAN), which leverages prior knowledge as guidance to incorporate non-flat topologies and learn suitable attention over the substructures that are salient for a specific task. The structured information can be captured from small amounts of training data, so the model has better generalization and robustness. The experiments show the benefits and effectiveness of the proposed model on the language understanding task, where knowledge-guided substructures captured from different resources all help tagging performance, and state-of-the-art performance is achieved on the ATIS benchmark dataset.

Footnotes

  1. The used evaluation script is conlleval.
  2. https://github.com/tensorflow/models/tree/master/syntaxnet
  3. https://github.com/jflanigan/jamr

References

  1. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. Abstract meaning representation for sembanking. 2013.
  2. Asli Celikyilmaz and Dilek Hakkani-Tur. Convolutional neural network based semantic tagging with entity embeddings. 2015.
  3. Ciprian Chelba, Milind Mahajan, and Alex Acero. Speech utterance classification. 2003.
  4. Danqi Chen and Christopher D. Manning. A fast and accurate dependency parser using neural networks. 2014.
  5. Yun-Nung Chen, Dilek Hakkani-Tur, and Gokhan Tur. Deriving local relational surface forms from dependency-based entity embeddings for unsupervised spoken language understanding. 2014.
  6. Yun-Nung Chen, William Yang Wang, Anatole Gershman, and Alexander I. Rudnicky. Matrix factorization with knowledge graph propagation for unsupervised spoken language understanding. 2015.
  7. Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches. 2014.
  8. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. 2014.
  9. Donald Davidson. The logical form of action sentences. 1967.
  10. Anoop Deoras and Ruhi Sarikaya. Deep belief network based semantic taggers for spoken language understanding. 2013.
  11. Jeffrey L. Elman. Finding structure in time. 1990.
  12. Alan Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. 2013.
  13. Patrick Haffner, Gokhan Tur, and Jerry H. Wright. Optimizing SVMs for complex call classification. 2003.
  14. Larry P. Heck, Dilek Hakkani-Tür, and Gokhan Tur. Leveraging knowledge graphs for web-scale unsupervised semantic parsing. 2013.
  15. Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. 1997.
  16. Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. Learning deep structured semantic models for web search using clickthrough data. 2013.
  17. Yoon Kim. Convolutional neural networks for sentence classification. 2014.
  18. Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. 2014.
  19. Quoc V. Le and Tomas Mikolov. Distributed representations of sentences and documents. 2014.
  20. Wang Ling, Tiago Luís, Luís Marujo, Ramón Fernandez Astudillo, Silvio Amir, Chris Dyer, Alan W. Black, and Isabel Trancoso. Finding function in form: Compositional character models for open vocabulary word representation. 2015.
  21. Jingjing Liu, Panupong Pasupat, Yining Wang, Scott Cyphers, and James Glass. Query understanding enhanced by hierarchical parsing structures. 2013.
  22. Fei Liu, Jeffrey Flanigan, Sam Thomson, Norman Sadeh, and Noah A. Smith. Toward abstractive summarization using semantic representations. 2015.
  23. Mingbo Ma, Liang Huang, Bing Xiang, and Bowen Zhou. Dependency-based convolutional neural networks for sentence embedding. 2015.
  24. Yi Ma, Paul A. Crook, Ruhi Sarikaya, and Eric Fosler-Lussier. Knowledge graph inference for spoken dialog systems. 2015.
  25. Michael F. McTear. Spoken dialogue technology: Toward the conversational user interface. 2004.
  26. Grégoire Mesnil, Yann Dauphin, Kaisheng Yao, Yoshua Bengio, Li Deng, Dilek Hakkani-Tur, Xiaodong He, Larry Heck, Gokhan Tur, Dong Yu, et al. Using recurrent neural networks for slot filling in spoken language understanding. 2015.
  27. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. 2013.
  28. Terence Parsons. Events in the semantics of English: A study in subatomic semantics. 1990.
  29. Roberto Pieraccini, Evelyne Tzoukermann, Zakhar Gorelov, Jean-Luc Gauvain, Esther Levin, Chin-Hui Lee, and Jay G. Wilpon. A speech understanding system based on statistical representation of semantics. 1992.
  30. Suman Ravuri and Andreas Stolcke. Recurrent neural network and LSTM models for lexical utterance classification. 2015.
  31. Michael Roth and Mirella Lapata. Neural semantic role labeling with dependency path embeddings. 2016.
  32. Alexander Rudnicky and Wei Xu. An agenda-based dialog management architecture for spoken language systems. 1999.
  33. Ruhi Sarikaya, Geoffrey E. Hinton, and Bhuvana Ramabhadran. Deep belief nets for natural language call-routing. 2011.
  34. Ruhi Sarikaya, Geoffrey E. Hinton, and Anoop Deoras. Application of deep belief networks for natural language understanding. 2014.
  35. Richard Socher, Andrej Karpathy, Quoc V. Le, Christopher D. Manning, and Andrew Y. Ng. Grounded compositional semantics for finding and describing images with sentences. 2014.
  36. Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. 2015.
  37. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. 2014.
  38. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. Improved semantic representations from tree-structured long short-term memory networks. 2015.
  39. Gokhan Tur and Renato De Mori. Spoken language understanding: Systems for extracting semantic information from speech. 2011.
  40. Gokhan Tur, Dilek Hakkani-Tür, and Larry Heck. What is left to be understood in ATIS? 2010.
  41. Gokhan Tur, Li Deng, Dilek Hakkani-Tür, and Xiaodong He. Towards deeper understanding: Deep convex networks for semantic utterance classification. 2012.
  42. Ye-Yi Wang, Li Deng, and Alex Acero. Spoken language understanding. 2005.
  43. Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. 2015.
  44. Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic memory networks for visual and textual question answering. 2016.
  45. Puyang Xu and Ruhi Sarikaya. Convolutional neural network based triangular CRF for joint intent detection and slot filling. 2013.
  46. Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. Embedding entities and relations for learning and inference in knowledge bases. 2014.
  47. Kaisheng Yao, Geoffrey Zweig, Mei-Yuh Hwang, Yangyang Shi, and Dong Yu. Recurrent neural networks for language understanding. 2013.
  48. Kaisheng Yao, Baolin Peng, Yu Zhang, Dong Yu, Geoffrey Zweig, and Yangyang Shi. Spoken language understanding using long short-term memory neural networks. 2014.