A Neural-Symbolic Approach to Natural Language Tasks

Qiuyuan Huang, Paul Smolensky, Xiaodong He, Li Deng, Dapeng Wu
{qihua,psmo,xiaohe}@microsoft.com, l.deng@ieee.org, dpwu@ufl.edu
Microsoft Research AI
Redmond, WA
This work was carried out while PS was on leave from Johns Hopkins University. LD is currently at Citadel. DW is with University of Florida, Gainesville, FL 32611.
Abstract

Deep learning (DL) has in recent years been widely used in natural language processing (NLP) applications due to its superior performance. However, while natural languages are rich in grammatical structure, DL has not been able to explicitly represent and enforce such structures. This paper proposes a new architecture to bridge this gap by exploiting tensor product representations (TPR), a structured neural-symbolic framework developed in cognitive science over the past 20 years, with the aim of integrating DL with explicit language structures and rules. We call it the Tensor Product Generation Network (TPGN), and apply it to 1) image captioning, 2) classification of the part of speech of a word, and 3) identification of the phrase structure of a sentence. The key ideas of TPGN are: 1) unsupervised learning of role-unbinding vectors of words via a TPR-based deep neural network, and 2) integration of TPR with typical DL architectures including Long Short-Term Memory (LSTM) models. The novelty of our approach lies in its ability to generate a sentence and extract partial grammatical structure of the sentence by using role-unbinding vectors, which are obtained in an unsupervised manner. Experimental results demonstrate the effectiveness of the proposed approach.

1 Introduction

In this paper we attempt to address a triple challenge:

  • to achieve good performance on difficult tasks — image captioning, classification of the part of speech (POS) of a word, and identification of the phrase structure of a sentence — by

  • producing grammatically interpretable representations that are acquired through deep learning

  • in a Deep Neural Network (DNN) architecture possessing a sound rationale based in a general theory of intelligent information processing that integrates neural and symbolic computation.

Deep learning is an important tool in many current natural language processing (NLP) applications. However, language rules or structures cannot be explicitly represented in deep learning architectures. The tensor product representation developed in [Smolensky(1990), Smolensky & Legendre(2006)] has the potential of integrating deep learning with explicit rules (such as logical rules, grammar rules, or rules that summarize real-world knowledge). This paper develops a TPR approach for deep-learning-based NLP applications, introducing the Tensor Product Generation Network (TPGN) architecture. To demonstrate the effectiveness of the proposed architecture, we apply it to three important NLP applications: 1) image captioning, 2) classification of POS of a word (a.k.a. POS tagging), and 3) identification of the phrase structure of a sentence.

A TPGN model generates natural language descriptions via learned representations. The representations learned in a crucial layer of the TPGN can be interpreted as encoding grammatical roles for the words being generated. This layer corresponds to the role-encoding component of a general, independently developed architecture for neural computation of symbolic functions, including the generation of linguistic structures. The key to this architecture is the notion of Tensor Product Representation (TPR), in which vectors embedding symbols (e.g., lives, frodo) are bound to vectors embedding structural roles (e.g., verb, subject) and combined to generate vectors embedding symbol structures ([frodo lives]). TPRs provide the representational foundations for a general computational architecture called Gradient Symbolic Computation (GSC), and applying GSC to the task of natural language generation yields the specialized architecture defining the model presented here. The generality of GSC means that the results reported here have implications well beyond the particular tasks we address.

The paper is organized as follows. Section 2 discusses related work. In Section 3, we review the basics of tensor product representation. Section 4 presents the rationale for our proposed architecture. Section 5 describes our proposed model in detail. In Section 6, we present our experimental results. Finally, Section 7 concludes the paper.

2 Related work

Deep learning plays a dominant role in many NLP applications due to its exceptional performance. Hence, we focus on recent deep-learning-based literature for NLP applications, especially three important NLP applications: 1) image captioning, 2) POS tagging, and 3) identification of the phrase structure of a sentence.

Most existing DL-based image captioning methods [Mao et al.(2015), Vinyals et al.(2015), Devlin et al.(2015), Chen & Lawrence Zitnick(2015), Donahue et al.(2015), Karpathy & Fei-Fei(2015), Kiros et al.(2014a), Kiros et al.(2014b)] involve two phases/modules: 1) image analysis, typically by a Convolutional Neural Network (CNN), and 2) a language model for caption generation ([Fang et al.(2015)]). The CNN module takes an image as input and outputs an image feature vector or a list of detected words with their probabilities. The language model is used to create a sentence (caption) out of the detected words or the image feature vector produced by the CNN.

There are two main approaches to natural language generation in image captioning. The first takes the words detected by a CNN as input and uses a probabilistic model, such as a maximum entropy (ME) language model, to arrange the detected words into a sentence. The second takes the penultimate activation layer of the CNN as input to a Recurrent Neural Network (RNN), which generates a sequence of words (the caption) [Vinyals et al.(2015)].

The work reported here follows the latter approach, adopting a CNN + RNN-generator architecture. Specifically, instead of using a conventional RNN, we propose a recurrent network that has substructure derived from the general GSC architecture: one recurrent subnetwork holds an encoding $\mathbf{S}$ — which is treated as an approximation of a TPR — of the words yet to be produced, while another recurrent subnetwork generates a sequence of vectors that is treated as a sequence of roles to be unbound from $\mathbf{S}$, in effect reading out a word at a time from $\mathbf{S}$. Examining how the model deploys these roles allows us to interpret them in terms of grammatical categories; roughly speaking, a sequence of categories is generated and the words stored in $\mathbf{S}$ are retrieved and spelled out via their categories.

The second task we consider is POS tagging. Methods for automatic POS tagging include unigram tagging, bigram tagging, tagging using Hidden Markov Models (generative sequence models), maximum entropy Markov models (discriminative sequence models), rule-based tagging, and tagging using bidirectional maximum entropy Markov models [Jurafsky & Martin(2017)]. The celebrated Stanford POS tagger [Manning(2017)] uses a bidirectional version of the maximum entropy Markov model, called a cyclic dependency network, which achieves 97.24% accuracy on the Penn Treebank WSJ data set [Toutanova et al.(2003)].

Methods for automatic phrase detection/classification, our third task, include supervised learning of a classifier over features extracted from a context window surrounding the word to be classified [Jurafsky & Martin(2017)]. The input to the classifier consists of features extracted from the context window, including the words themselves, their parts of speech, and the phrase types of the preceding inputs in the window. The output of the classifier is the type of the phrase containing the word.

3 Review of tensor product representation

Tensor product representation (TPR) is a general framework for embedding a space of symbol structures into a vector space. This embedding enables neural network operations to perform symbolic computation, including computations that provide considerable power to symbolic NLP systems [Smolensky & Legendre(2006), Smolensky(2012)]. Motivated by these successful examples, we extend TPR to the challenging task of image captioning. As a by-product, the symbolic character of TPRs makes them amenable to conceptual interpretation in a way that standard learned neural network representations are not.

A particular TPR embedding is based in a filler/role decomposition of a symbol space $S$. A relevant example is when $S$ is the set of strings over an alphabet $\{a, b, \ldots\}$. One filler/role decomposition deploys the positional roles $\{r_k\}$, $k = 1, 2, \ldots$, where the filler/role binding $a/r_k$ assigns the 'filler' (symbol) $a$ to the $k$-th position in the string. A string such as $abc$ is uniquely determined by its filler/role bindings, which comprise the (unordered) set $\{a/r_1, b/r_2, c/r_3\}$. Reifying the notion of role in this way is key to TPR's ability to encode complex symbol structures.

Given a selected filler/role decomposition of the symbol space, a particular TPR is determined by an embedding that assigns to each filler a vector in a vector space $V_F = \mathbb{R}^{d_F}$, and a second embedding that assigns to each role a vector in a space $V_R = \mathbb{R}^{d_R}$. The vector embedding a symbol $a$ is denoted $\mathbf{f}_a$ and is called a filler vector; the vector embedding a role $r_k$ is $\mathbf{r}_k$ and is called a role vector. The TPR for $abc$ is then the following 2-index tensor in $V_F \otimes V_R = \mathbb{R}^{d_F \times d_R}$:

$$\mathbf{S} = \mathbf{f}_a \otimes \mathbf{r}_1 + \mathbf{f}_b \otimes \mathbf{r}_2 + \mathbf{f}_c \otimes \mathbf{r}_3 \qquad (1)$$

where $\otimes$ denotes the tensor product. The tensor product is a generalization of the vector outer product that is recursive; recursion is exploited in TPRs for, e.g., the distributed representation of trees, the neural encoding of formal grammars in connection weights, and the theory of neural computation of recursive symbolic functions. Here, however, it suffices to use the outer product; using matrix notation we can write (1) as:

$$\mathbf{S} = \mathbf{f}_a \mathbf{r}_1^\top + \mathbf{f}_b \mathbf{r}_2^\top + \mathbf{f}_c \mathbf{r}_3^\top \qquad (2)$$

Generally, the TPR embedding of a symbol structure with filler/role bindings $\{f_i / r_i\}$ is $\sum_i \mathbf{f}_i \otimes \mathbf{r}_i$; in matrix notation, $\sum_i \mathbf{f}_i \mathbf{r}_i^\top$ [Smolensky(1990), Smolensky & Legendre(2006)].

A key operation on TPRs, central to the work presented here, is unbinding, which undoes binding. Given the TPR $\mathbf{S}$ in (2), for example, we can unbind $r_2$ to get $\mathbf{f}_b$; this is achieved simply by $\mathbf{f}_b = \mathbf{S} \mathbf{u}_2$. Here $\mathbf{u}_2$ is the unbinding vector dual to the binding vector $\mathbf{r}_2$. To make such exact unbinding possible, the role vectors should be chosen to be linearly independent. (In that case the unbinding vectors are the rows of the inverse of the matrix containing the binding vectors as columns, so that $\mathbf{r}_2^\top \mathbf{u}_2 = 1$ while $\mathbf{r}_k^\top \mathbf{u}_2 = 0$ for all other role vectors $\mathbf{r}_k \neq \mathbf{r}_2$; this entails that $\mathbf{S}\mathbf{u}_2 = \mathbf{f}_b$, the filler vector bound to $\mathbf{r}_2$. Replacing the matrix inverse with the pseudo-inverse allows approximate unbinding when the role vectors are not linearly independent.)
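To make binding and unbinding concrete, the following minimal numpy sketch builds the TPR of the string $abc$ as in Eq. (2) and recovers a filler with its dual unbinding vector. The dimensions and random role vectors are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d_F, d_R = 8, 3                    # illustrative filler/role dimensions

F = rng.standard_normal((d_F, 3))  # columns: filler vectors f_a, f_b, f_c
R = rng.standard_normal((d_R, 3))  # columns: role vectors r_1, r_2, r_3

S = F @ R.T                        # Eq. (2): S = f_a r_1' + f_b r_2' + f_c r_3'

# The unbinding vectors are the rows of R^{-1}; with three random role
# vectors in R^3, R is square and (almost surely) invertible.
U = np.linalg.inv(R).T             # columns: unbinding vectors u_1, u_2, u_3

f_b = S @ U[:, 1]                  # unbind role 2: exactly recovers f_b
assert np.allclose(f_b, F[:, 1])
```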

Figure 1: Architecture of TPGN, a TPR-capable generation network. "$\otimes$" denotes the matrix-vector product.

4 A TPR-capable generation architecture

In this work we propose an approach to network architecture design we call the TPR-capable method. The architecture we use (see Fig. 1) is designed so that TPRs could, in theory, be used within the architecture to perform the target task — here, generating a caption one word at a time. Unlike previous work where TPRs are hand-crafted, in our work, end-to-end deep learning will induce representations which the architecture can use to generate captions effectively.

In this section, we consider the problem of image captioning. As shown in Fig. 1, our proposed system is denoted by $\mathcal{N}$. The input of $\mathcal{N}$ is an image feature vector $v$ and the output of $\mathcal{N}$ is a caption. The image feature vector $v$ is extracted from a given image by a pre-trained CNN. The first part of our system is a sentence-encoding subnetwork $\mathcal{S}$ which maps $v$ to a representation $\mathbf{S}$ which will drive the entire caption-generation process; $\mathbf{S}$ contains all the image-specific information for producing the caption. (We will call a caption a "sentence" even though it may in fact be just a noun phrase.)

If $\mathbf{S}$ were a TPR of the caption itself, it would be a matrix (or 2-index tensor) which is a sum of matrices, each of which encodes the binding of one word to its role in the sentence constituting the caption. To serially read out the words encoded in $\mathbf{S}$, in iteration 1 we would unbind the first word from $\mathbf{S}$, then in iteration 2 the second, and so on. As each word is generated, $\mathbf{S}$ could update itself, for example, by subtracting out the contribution made to it by the word just generated; $\mathbf{S}_t$ denotes the value of $\mathbf{S}$ when word $w_t$ is generated. At time step $t$ we would unbind the role occupied by word $t$ of the caption. So the second part of our system — the unbinding subnetwork $\mathcal{U}$ — would generate, at iteration $t$, the unbinding vector $\mathbf{u}_t$. Once $\mathcal{U}$ produces the unbinding vector $\mathbf{u}_t$, this vector would then be applied to $\mathbf{S}$ to extract the symbol $\mathbf{f}_t$ that occupies word $t$'s role; the symbol represented by $\mathbf{f}_t$ would then be decoded into word $w_t$ by the third part of $\mathcal{N}$, i.e., the lexical decoding subnetwork $\mathcal{L}$, which outputs $x_t$, the 1-hot-vector encoding of $w_t$.

Recalling that unbinding in TPR is achieved by the matrix-vector product, the key operation in generating $w_t$ is thus the unbinding of $\mathbf{u}_t$ within $\mathbf{S}$, which amounts to simply:

$$\mathbf{f}_t = \mathbf{S}_t \mathbf{u}_t \qquad (3)$$

This matrix-vector product is denoted "$\otimes$" in Fig. 1.
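The word-by-word readout just described can be sketched in numpy, under the idealization that $\mathbf{S}$ is an exact TPR and updates by subtracting out each binding once read; in the actual TPGN, $\mathbf{S}_t$ and $\mathbf{u}_t$ are produced by learned LSTMs rather than computed in closed form, and the word list and vectors below are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
words = ["a", "man", "standing"]
n, d = len(words), 6
F = rng.standard_normal((d, n))       # filler vectors, one per word
R = rng.standard_normal((d, n))       # role vectors, independent w.h.p.
U = np.linalg.pinv(R).T               # pseudo-inverse: exact duals when
                                      # the role vectors are independent

S = F @ R.T                           # ideal TPR of the whole caption
caption = []
for t in range(n):
    f_t = S @ U[:, t]                 # Eq. (3): unbind the role for step t
    # "Lexical decoding" by nearest filler; exact here, so distance is 0.
    w_idx = int(np.argmin(np.linalg.norm(F - f_t[:, None], axis=0)))
    caption.append(words[w_idx])
    S -= np.outer(f_t, R[:, t])       # subtract the just-read binding

print(" ".join(caption))              # -> "a man standing"
```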

Thus the system of Fig. 1 is TPR-capable. This is what we propose as the Tensor-Product Generation Network (TPGN) architecture. The learned representation $\mathbf{S}$ will not be proven to literally be a TPR, but by analyzing the unbinding vectors $\mathbf{u}_t$ the network learns, we will gain insight into the process by which the learned matrix $\mathbf{S}$ gives rise to the generated caption.

What type of roles might the unbinding vectors be unbinding? A TPR for a caption could in principle be built upon positional roles, syntactic/semantic roles, or some combination of the two. In the caption a man standing in a room with a suitcase, the initial a and man might respectively occupy the positional roles of Pos(ition)$_1$ and Pos$_2$; standing might occupy the syntactic role of Verb; in might occupy the role of Spatial-P(reposition); while a room with a suitcase might fill a 5-role schema Det(erminer)$_1$ N(oun)$_1$ P Det$_2$ N$_2$. In fact we will see evidence below that our network learns just this kind of hybrid role decomposition.

What form of information does the sentence-encoding subnetwork $\mathcal{S}$ need to encode in $\mathbf{S}$? Continuing with the example of the previous paragraph, $\mathbf{S}$ needs to be some approximation to the TPR summing several filler/role binding matrices. In one of these bindings, a filler vector $\mathbf{f}_a$ — which the lexical subnetwork $\mathcal{L}$ will map to the article a — is bound (via the outer product) to a role vector $\mathbf{r}_{\text{Pos}_1}$ which is the dual of the first unbinding vector produced by the unbinding subnetwork $\mathcal{U}$: $\mathbf{u}_{\text{Pos}_1}$. In the first iteration of generation the model computes $\mathbf{S}_1 \mathbf{u}_{\text{Pos}_1} = \mathbf{f}_a$, which $\mathcal{L}$ then maps to a. Analogously, another binding approximately contained in $\mathbf{S}_2$ is $\mathbf{f}_{man} \mathbf{r}_{\text{Pos}_2}^\top$. There are corresponding bindings for the remaining words of the caption; these employ syntactic/semantic roles. One example is $\mathbf{f}_{standing} \mathbf{r}_{\text{V}}^\top$. At iteration 3, $\mathcal{U}$ decides the next word should be a verb, so it generates the unbinding vector $\mathbf{u}_{\text{V}}$ which, when multiplied by the current output of $\mathcal{S}$, the matrix $\mathbf{S}_3$, yields a filler vector $\mathbf{f}_{standing}$ which $\mathcal{L}$ maps to the output standing. $\mathcal{S}$ decided the caption should deploy standing as a verb and included in $\mathbf{S}$ the binding $\mathbf{f}_{standing} \mathbf{r}_{\text{V}}^\top$. It similarly decided the caption should deploy in as a spatial preposition, including in $\mathbf{S}$ the binding $\mathbf{f}_{in} \mathbf{r}_{\text{Spatial-P}}^\top$; and so on for the other words in their respective roles in the caption.

5 System Description

Figure 2: The sentence-encoding subnet $\mathcal{S}$ and the unbinding subnet $\mathcal{U}$ are inter-connected LSTMs; $v$ encodes the visual input while the $x_t$ encode the words of the output caption.

The unbinding subnetwork $\mathcal{U}$ and the sentence-encoding network $\mathcal{S}$ of Fig. 1 are each implemented as (1-layer, 1-directional) LSTMs (see Fig. 2); the lexical subnetwork $\mathcal{L}$ is implemented as a linear transformation followed by a softmax operation. In the equations below, the LSTM variables internal to the $\mathcal{S}$ subnet are indexed by 1 (e.g., the forget-, input-, and output-gates are respectively $\hat{\mathbf{f}}_1, \hat{\mathbf{g}}_1, \hat{\mathbf{o}}_1$) while those of the unbinding subnet $\mathcal{U}$ are indexed by 2.

Thus the state updating equations for $\mathcal{S}$ are, for $t = 1, \ldots, T$ = caption length:

$$\hat{\mathbf{f}}_{1,t} = \sigma_g(W_{1,f}\, \mathbf{p}_{t-1} - D_{1,f}\, W_e x_{t-1} + U_{1,f}\, \hat{\mathbf{S}}_{t-1}) \qquad (4)$$
$$\hat{\mathbf{g}}_{1,t} = \sigma_g(W_{1,g}\, \mathbf{p}_{t-1} - D_{1,g}\, W_e x_{t-1} + U_{1,g}\, \hat{\mathbf{S}}_{t-1}) \qquad (5)$$
$$\hat{\mathbf{o}}_{1,t} = \sigma_g(W_{1,o}\, \mathbf{p}_{t-1} - D_{1,o}\, W_e x_{t-1} + U_{1,o}\, \hat{\mathbf{S}}_{t-1}) \qquad (6)$$
$$\mathbf{g}_{1,t} = \sigma_h(W_{1,c}\, \mathbf{p}_{t-1} - D_{1,c}\, W_e x_{t-1} + U_{1,c}\, \hat{\mathbf{S}}_{t-1}) \qquad (7)$$
$$\mathbf{c}_{1,t} = \hat{\mathbf{f}}_{1,t} \odot \mathbf{c}_{1,t-1} + \hat{\mathbf{g}}_{1,t} \odot \mathbf{g}_{1,t} \qquad (8)$$
$$\hat{\mathbf{S}}_t = \hat{\mathbf{o}}_{1,t} \odot \sigma_h(\mathbf{c}_{1,t}) \qquad (9)$$

where $\hat{\mathbf{f}}_{1,t}, \hat{\mathbf{g}}_{1,t}, \hat{\mathbf{o}}_{1,t}, \mathbf{g}_{1,t}, \mathbf{c}_{1,t}, \hat{\mathbf{S}}_t \in \mathbb{R}^{d \times d}$; $\mathbf{p}_{t-1} \in \mathbb{R}^{d^2}$ is the filler vector produced at the previous step; $\sigma_g(\cdot)$ is the (element-wise) logistic sigmoid function; $\sigma_h(\cdot)$ is the hyperbolic tangent function; the operator $\odot$ denotes the Hadamard (element-wise) product; $W_{1,f}, W_{1,g}, W_{1,o}, W_{1,c} \in \mathbb{R}^{(d \times d) \times d^2}$, $D_{1,f}, D_{1,g}, D_{1,o}, D_{1,c} \in \mathbb{R}^{(d \times d) \times d^2}$, $U_{1,f}, U_{1,g}, U_{1,o}, U_{1,c} \in \mathbb{R}^{(d \times d) \times (d \times d)}$. For clarity, biases — included throughout the model — are omitted from all equations in this paper. The initial state $\hat{\mathbf{S}}_0$ is initialized by:

$$\hat{\mathbf{S}}_0 = C_s (v - \bar{v}) \qquad (10)$$

where $v \in \mathbb{R}^{2048}$ is the vector of visual features extracted from the current image by ResNet [Gan et al.(2017)] and $\bar{v}$ is the mean of all such vectors; $C_s \in \mathbb{R}^{(d \times d) \times 2048}$. On the output side, $x_t \in \mathbb{R}^{V}$ is a 1-hot vector with dimension equal to the size of the caption vocabulary, $V$, and $W_e \in \mathbb{R}^{d^2 \times V}$ is a word embedding matrix, the $i$-th column of which is the embedding vector of the $i$-th word in the vocabulary; it is obtained by the Stanford GLoVe algorithm with zero mean [Pennington et al.(2017)]. $x_0$ is initialized as the one-hot vector corresponding to a "start-of-sentence" symbol.
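As a concreteness check, here is a minimal numpy sketch of one $\mathcal{S}$-subnet update mirroring Eqs. (4)-(10) as reconstructed above. The $d \times d$ matrix state is flattened to a $d^2$-vector, and all weights are random stand-ins with simplified shapes (the paper's $W_{1,\cdot}$, $D_{1,\cdot}$, $U_{1,\cdot}$ are higher-order tensors); names such as s_subnet_step are ours, not the paper's.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
d, V = 25, 8791
D2 = d * d                                      # flattened size of the d x d state

W_e = 0.01 * rng.standard_normal((D2, V))       # word-embedding matrix
W1 = {k: 0.01 * rng.standard_normal((D2, D2)) for k in "fgoc"}  # acts on p_{t-1}
D1 = {k: 0.01 * rng.standard_normal((D2, D2)) for k in "fgoc"}  # acts on W_e x_{t-1}
U1 = {k: 0.01 * rng.standard_normal((D2, D2)) for k in "fgoc"}  # acts on S_{t-1}
C_s = 0.01 * rng.standard_normal((D2, 2048))    # Eq. (10) projection

def s_subnet_step(S_prev, c_prev, p_prev, x_prev):
    e = W_e @ x_prev                                                  # embed previous word
    f = sigmoid(W1["f"] @ p_prev - D1["f"] @ e + U1["f"] @ S_prev)    # Eq. (4)
    g = sigmoid(W1["g"] @ p_prev - D1["g"] @ e + U1["g"] @ S_prev)    # Eq. (5)
    o = sigmoid(W1["o"] @ p_prev - D1["o"] @ e + U1["o"] @ S_prev)    # Eq. (6)
    h = np.tanh(W1["c"] @ p_prev - D1["c"] @ e + U1["c"] @ S_prev)    # Eq. (7)
    c = f * c_prev + g * h                                            # Eq. (8)
    return o * np.tanh(c), c                                          # Eq. (9)

v, v_bar = rng.standard_normal(2048), np.zeros(2048)
S0 = C_s @ (v - v_bar)                          # Eq. (10)
x0 = np.zeros(V); x0[0] = 1.0                   # start-of-sentence one-hot
S1, c1 = s_subnet_step(S0, np.zeros(D2), np.zeros(D2), x0)
```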

For the unbinding subnetwork $\mathcal{U}$ in Fig. 1, the state updating equations are:

$$\hat{\mathbf{f}}_{2,t} = \sigma_g(\hat{\mathbf{S}}_t \mathbf{w}_{2,f} - D_{2,f}\, \mathbf{p}_{t-1} + U_{2,f}\, \hat{\mathbf{u}}_{t-1}) \qquad (11)$$
$$\hat{\mathbf{g}}_{2,t} = \sigma_g(\hat{\mathbf{S}}_t \mathbf{w}_{2,g} - D_{2,g}\, \mathbf{p}_{t-1} + U_{2,g}\, \hat{\mathbf{u}}_{t-1}) \qquad (12)$$
$$\hat{\mathbf{o}}_{2,t} = \sigma_g(\hat{\mathbf{S}}_t \mathbf{w}_{2,o} - D_{2,o}\, \mathbf{p}_{t-1} + U_{2,o}\, \hat{\mathbf{u}}_{t-1}) \qquad (13)$$
$$\mathbf{g}_{2,t} = \sigma_h(\hat{\mathbf{S}}_t \mathbf{w}_{2,c} - D_{2,c}\, \mathbf{p}_{t-1} + U_{2,c}\, \hat{\mathbf{u}}_{t-1}) \qquad (14)$$
$$\mathbf{c}_{2,t} = \hat{\mathbf{f}}_{2,t} \odot \mathbf{c}_{2,t-1} + \hat{\mathbf{g}}_{2,t} \odot \mathbf{g}_{2,t} \qquad (15)$$
$$\hat{\mathbf{u}}_t = \hat{\mathbf{o}}_{2,t} \odot \sigma_h(\mathbf{c}_{2,t}) \qquad (16)$$

where $\hat{\mathbf{f}}_{2,t}, \hat{\mathbf{g}}_{2,t}, \hat{\mathbf{o}}_{2,t}, \mathbf{g}_{2,t}, \mathbf{c}_{2,t}, \hat{\mathbf{u}}_t \in \mathbb{R}^{d}$; $\mathbf{w}_{2,f}, \mathbf{w}_{2,g}, \mathbf{w}_{2,o}, \mathbf{w}_{2,c} \in \mathbb{R}^{d}$; $D_{2,f}, D_{2,g}, D_{2,o}, D_{2,c} \in \mathbb{R}^{d \times d^2}$; and $U_{2,f}, U_{2,g}, U_{2,o}, U_{2,c} \in \mathbb{R}^{d \times d}$. The initial state $\hat{\mathbf{u}}_0$ is the zero vector.
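A companion sketch of one $\mathcal{U}$-subnet update, mirroring Eqs. (11)-(16) under the same reconstruction; here S_hat is the $d \times d$ matrix state (the flattened state of the previous sketch, reshaped), and the weights are again random stand-ins.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(3)
d = 25
w2 = {k: 0.01 * rng.standard_normal(d) for k in "fgoc"}           # multiply S_hat
D2w = {k: 0.01 * rng.standard_normal((d, d * d)) for k in "fgoc"} # act on p_{t-1}
U2 = {k: 0.01 * rng.standard_normal((d, d)) for k in "fgoc"}      # act on u_hat

def u_subnet_step(S_hat, c_prev, p_prev, u_prev):
    f = sigmoid(S_hat @ w2["f"] - D2w["f"] @ p_prev + U2["f"] @ u_prev)  # Eq. (11)
    g = sigmoid(S_hat @ w2["g"] - D2w["g"] @ p_prev + U2["g"] @ u_prev)  # Eq. (12)
    o = sigmoid(S_hat @ w2["o"] - D2w["o"] @ p_prev + U2["o"] @ u_prev)  # Eq. (13)
    h = np.tanh(S_hat @ w2["c"] - D2w["c"] @ p_prev + U2["c"] @ u_prev)  # Eq. (14)
    c = f * c_prev + g * h                                               # Eq. (15)
    return o * np.tanh(c), c                                             # Eq. (16)

u1, c1 = u_subnet_step(rng.standard_normal((d, d)),
                       np.zeros(d), np.zeros(d * d), np.zeros(d))
```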

The dimensionality of the crucial vectors shown in Fig. 1, $\mathbf{u}_t$ and $\mathbf{f}_t$, is increased from $d$ to $d^2$ as follows. A block-diagonal $d^2 \times d^2$ matrix $\mathbf{S}_t$ is created by placing $d$ copies of the $d \times d$ matrix $\hat{\mathbf{S}}_t$ as blocks along the principal diagonal. This matrix is the output of the sentence-encoding subnetwork $\mathcal{S}$. Now, following Eq. (3), the 'filler vector' $\mathbf{f}_t \in \mathbb{R}^{d^2}$ — 'unbound' from the sentence representation $\mathbf{S}_t$ with the 'unbinding vector' $\mathbf{u}_t$ — is obtained by Eq. (17):

$$\mathbf{f}_t = \mathbf{S}_t \mathbf{u}_t \qquad (17)$$

Here $\mathbf{u}_t \in \mathbb{R}^{d^2}$, the output of the unbinding subnetwork $\mathcal{U}$, is computed as in Eq. (18), where $W_u \in \mathbb{R}^{d^2 \times d}$ is $\mathcal{U}$'s output weight matrix:

$$\mathbf{u}_t = \sigma_h(W_u \hat{\mathbf{u}}_t) \qquad (18)$$

Finally, the lexical subnetwork $\mathcal{L}$ produces a decoded word $x_t \in \mathbb{R}^{V}$ by

$$x_t = \sigma_s(W_x \mathbf{f}_t) \qquad (19)$$

where $\sigma_s(\cdot)$ is the softmax function and $W_x \in \mathbb{R}^{V \times d^2}$ is the overall output weight matrix. Since $W_x$ plays the role of a word de-embedding matrix, we can set

$$W_x = (W_e)^\top \qquad (20)$$

where $W_e$ is the word-embedding matrix. Since $W_e$ is pre-defined, we directly set $W_x$ by Eq. (20) without training it through Eq. (19). Note that $\mathcal{S}$ and $\mathcal{U}$ are learned jointly through end-to-end training.
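Putting Eqs. (17)-(20) together, a short numpy sketch of one decoding step: the $d \times d$ encoding is lifted to a block-diagonal $d^2 \times d^2$ matrix, an unbinding vector extracts a filler, and the transposed embedding decodes it. All inputs here are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)
d, V = 25, 8791

S_hat = rng.standard_normal((d, d))      # current output of the S LSTM
u_hat = rng.standard_normal(d)           # current output of the U LSTM
W_u = rng.standard_normal((d * d, d))    # U's output weight matrix
W_e = rng.standard_normal((d * d, V))    # word-embedding matrix (GloVe)

S_t = np.kron(np.eye(d), S_hat)          # d copies of S_hat on the block diagonal
u_t = np.tanh(W_u @ u_hat)               # Eq. (18)
f_t = S_t @ u_t                          # Eq. (17): unbind the filler vector
W_x = W_e.T                              # Eq. (20): de-embedding = embedding^T
logits = W_x @ f_t                       # Eq. (19), pre-softmax
x_t = np.exp(logits - logits.max())
x_t /= x_t.sum()                         # softmax over the V-word vocabulary
word_index = int(np.argmax(x_t))         # decoded word
```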

Figure 3: Pre-training of TPGN.

Fig. 3 shows a pre-training method for initializing TPGN. During the pre-training phase, there is no image input, i.e., the image feature vector $v = 0$. In Fig. 3, starting at time $t = 0$, the LSTM module takes a sentence of length $T$ as input and outputs a vector $\hat{\mathbf{S}}_0$ at time $T$. That is, the LSTM converts a sentence into $\hat{\mathbf{S}}_0$, which is the input of TPGN. We use end-to-end training to train the whole system shown in Fig. 3. After finishing pre-training, we let $v \neq 0$ and use images as input to train the TPGN in Fig. 1, initialized by the pre-trained parameter values.

The architecture in Fig. 3 allows us to design a POS tagger and a phrase detector/classifier.

A POS tagger is designed in the following way. First, a given sentence $s$ is applied to the input of the trained system shown in Fig. 3. The resulting sequence of unbinding vectors $\mathbf{u}_1, \ldots, \mathbf{u}_T$ is taken as the feature representation of sentence $s$. A POS tagger takes the vectors $\mathbf{u}_1, \ldots, \mathbf{u}_T$ as input and produces a POS tag for each word in the sentence $s$; such a tagger can be realized by a support vector machine or a bidirectional LSTM. To train the POS tagger in a supervised manner, we use the vectors $\mathbf{u}_t$ as input features and the POS tag of each word $w_t$ ($t = 1, \ldots, T$) as the target output. The POS tag of each word can be obtained by the Stanford tagger [Manning(2017)].

A phrase classifier is designed in an analogous way: we change only the outputs of the classifier, replacing the POS tag of each word with its phrase type, which can be obtained from the Stanford parser [Manning(2017)]. The phrase classifier can likewise be realized by a bidirectional LSTM.

6 Experimental results

6.1 Dataset

To evaluate the performance of our proposed architecture, we use the COCO dataset [COCO(2017)]. The COCO dataset contains 123,287 images, each of which is annotated with at least 5 captions. We use the same pre-defined splits as [Karpathy & Fei-Fei(2015), Gan et al.(2017)]: 113,287 images for training, 5,000 images for validation, and 5,000 images for testing. We use the same vocabulary as that employed in [Gan et al.(2017)], which consists of 8,791 words.

6.2 Evaluation of image captioning system

For the CNN of Fig. 1, we used ResNet-152 [He et al.(2016)], pretrained on the ImageNet dataset. The feature vector $v$ has 2048 dimensions. Word embedding vectors in $W_e$ are downloaded from the web [Pennington et al.(2017)]. The model is implemented in TensorFlow [Abadi et al.(2015)] with the default settings for random initialization and optimization by backpropagation.

In our experiments, we choose $d = 25$ (where $d$ is the dimension of the vector $\hat{\mathbf{u}}_t$). The dimension of $\mathbf{S}_t$ is $625 \times 625$ (while $\hat{\mathbf{S}}_t$ is $25 \times 25$); the vocabulary size $V = 8{,}791$; the dimension of $\mathbf{u}_t$ and $\mathbf{f}_t$ is $d^2 = 625$.

Methods | METEOR | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | CIDEr
NIC [Vinyals et al.(2015)] | – | 0.666 | 0.451 | 0.304 | 0.203 | –
CNN-LSTM | 0.238 | 0.698 | 0.525 | 0.390 | 0.292 | 0.889
TPGN | 0.243 | 0.709 | 0.539 | 0.406 | 0.305 | 0.909
Table 1: Performance of the proposed TPGN model on the COCO dataset. A dash indicates a metric not reported for that method.

The main evaluation results on the MS COCO dataset are reported in Table 1. We report the widely used BLEU [Papineni et al.(2002)], METEOR [Banerjee & Lavie(2005)], and CIDEr [Vedantam et al.(2015)] metrics in our quantitative evaluation. Our baseline is the widely used CNN-LSTM captioning method originally proposed in [Vinyals et al.(2015)]. For comparison, we include the results reported in that paper in the first line of Table 1. We also re-implemented the model using the latest ResNet features and report the results in the second line of Table 1. Our re-implementation of the CNN-LSTM method matches the performance reported in [Gan et al.(2017)], showing that the baseline is a state-of-the-art implementation. As shown in Table 1, the proposed TPGN outperforms the CNN-LSTM baseline on all metrics. The improvement in BLEU-$n$ is greater for greater $n$: TPGN particularly improves the generation of longer subsequences. The results clearly attest to the effectiveness of the TPGN architecture.

6.3 Evaluation of the POS tagger

We run the system shown in Fig. 3 with 5,000 sentences from the COCO test set as input, and obtain the unbinding vector $\mathbf{u}_t$ of each word $w_t$ in each sentence produced by the TPGN system.

We design two POS taggers for classifying the POS of each word $w_t$. The first POS tagger is realized by a kernel support vector machine trained with stochastic gradient descent, using a radial basis function kernel. The input to the classifier is the unbinding vectors corresponding to a window of $2m+1$ words centered on the word to be classified. For example, if $w_t$ is the word to be classified, the unbinding vectors $\mathbf{u}_{t-m}, \ldots, \mathbf{u}_t, \ldots, \mathbf{u}_{t+m}$ are supplied as input to the classifier. Note that, to classify a word near the beginning or end of a sentence, we need to pad the window to its full $2m+1$ length: since words $w_{t'}$ with $t' < 1$ or $t' > T$ do not exist, we assign a 625-dimensional unbinding vector (each dimension of which equals 0.5) to each such position. The output of the classifier is the POS of the word in the center of the window, as shown in the sketch below. We use the unbinding vectors and POS tags of 4,000 sentences for training, and the unbinding vectors of 1,000 sentences for testing. We use the Stanford parser [Manning(2017)] to identify the POS of each word in the 5,000 sentences.
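A sketch of the windowed-feature construction and SVM tagger just described; scikit-learn's SVC with an RBF kernel stands in for the paper's SGD-trained kernel SVM, and the unbinding vectors and POS tags are assumed precomputed.

```python
import numpy as np
from sklearn.svm import SVC

D, M = 625, 3                  # unbinding-vector dimension; window = 2M + 1 words
PAD = np.full(D, 0.5)          # vector assigned to positions outside the sentence

def window_features(u_vectors, m=M):
    """u_vectors: the T unbinding vectors of one sentence, in order."""
    padded = [PAD] * m + list(u_vectors) + [PAD] * m
    return [np.concatenate(padded[t:t + 2 * m + 1])  # (2m+1)*D features per word
            for t in range(len(u_vectors))]

def fit_pos_tagger(sentences):
    """sentences: list of (unbinding-vector list, POS-tag list) pairs."""
    X = [feat for u, _ in sentences for feat in window_features(u)]
    y = [tag for _, tags in sentences for tag in tags]
    return SVC(kernel="rbf").fit(np.asarray(X), y)
```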

Window size | 1 | 3 | 5 | 7 | 9 | 11 | 13
Precision | 0.757 | 0.944 | 0.937 | 0.946 | 0.934 | 0.929 | 0.919
Recall | 0.763 | 0.929 | 0.936 | 0.942 | 0.928 | 0.927 | 0.922
F-measure | 0.760 | 0.937 | 0.936 | 0.944 | 0.931 | 0.928 | 0.921
Table 2: Performance of the POS tagger for various window sizes.

Table 2 shows the results of classifying the POS of the words in a sentence. Using the unbinding vector of a single word (window size 1) classifies the POS of that word with an accuracy of 76.3%, which means that one unbinding vector contains important, but partial, grammatical information about the corresponding word. When the unbinding vectors of neighboring words are also used, the accuracy of POS classification rises significantly, to over 92%. The highest accuracy is achieved with a window size of 7, where the F-measure reaches 94.4%.

The second POS tagger is realized by a bidirectional LSTM (B-LSTM) with a hidden dimension of 625. The input of the B-LSTM is a sequence of unbinding vectors of a sentence; the output of the B-LSTM is a sequence of POS tags, each of which corresponds to one word in the sentence. We use 4,000 sentences for training, and 1,000 sentences for testing. The accuracy of classification is 97.7%, comparable to that of the state-of-the-art POS taggers [Toutanova et al.(2003)].
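The bidirectional-LSTM tagger admits an equally compact sketch; the Keras layers below reflect the stated hidden dimension of 625 and per-word softmax output, while the tag-set size and the (padded) batch format are our assumptions.

```python
import tensorflow as tf

NUM_TAGS, D = 45, 625          # assumed tag-set size; unbinding-vector dimension

model = tf.keras.Sequential([
    tf.keras.Input(shape=(None, D)),                  # variable-length sentences
    tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(625, return_sequences=True)),
    tf.keras.layers.Dense(NUM_TAGS, activation="softmax"),  # one tag per word
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# Training would pad sentences to a common length, e.g.:
# model.fit(padded_u_sequences, padded_tag_ids, epochs=..., batch_size=...)
```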

6.4 Evaluation of the phrase classifier

As for POS tagging, we run the system shown in Fig. 3 with 5,000 sentences (from the COCO test set) as input, and obtain the unbinding vector $\mathbf{u}_t$ of each word $w_t$ in each sentence produced by the TPGN system.

We design a phrase classifier using a B-LSTM with a hidden dimension of 625. The input of the B-LSTM is the sequence of unbinding vectors of a sentence; the output is a sequence of phrase types (e.g., noun phrase, verb phrase), each of which corresponds to one word in the sentence. We use 4,000 sentences for training and 1,000 sentences for testing. The accuracy of classification is 84%. Phrase classifiers are also evaluated by precision, recall, and the F-measure [Jurafsky & Martin(2017)]. Precision measures the percentage of system-provided phrases that are correct, where "correct" means that both the boundaries and the type of the phrase are right. The precision, recall, and F-measure of our phrase classifier are 82.5%, 79.4%, and 80.9%, respectively, which is comparable to the state-of-the-art phrase classifier of [Zhu et al.(2013)].

Figure 4: A generated parse tree for a sentence, where NP, WHNP, VP, ADVP, DT, NN, WDT, VBZ, VBG, PRP$, IN, and TO denote noun phrase, Wh-noun phrase, verb phrase, adverb phrase, determiner, noun, Wh-determiner, verb (3rd person singular present), verb (gerund or present participle), possessive pronoun, preposition, and "to", respectively.

Combining the results in Sections 6.3 and 6.4, we are able to create the incomplete four-level parse tree shown in Fig. 4. In future work, we will design a system to create a complete parse tree of a sentence, given the unbinding vectors of the sentence.

7 Conclusion

In this paper, we proposed a new Tensor Product Generation Network (TPGN) for natural language generation and related tasks. The model has a novel architecture based on a rationale derived from the use of Tensor Product Representations for encoding and processing symbolic structure through neural network computation. In evaluation, we tested the proposed model on captioning with the MS COCO dataset, a large-scale image captioning benchmark. Compared to widely adopted LSTM-based models, the proposed TPGN gives significant improvements on all major metrics, including METEOR, BLEU, and CIDEr. Moreover, we observed that the unbinding vectors contain important grammatical information, which allowed us to design an effective POS tagger and phrase classifier using unbinding vectors as input. Our findings show the great promise of TPRs. In the future, we will explore extending TPR to a variety of other NLP tasks.

References

  • [Abadi et al.(2015)] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL https://www.tensorflow.org/. Software available from tensorflow.org.
  • [Banerjee & Lavie(2005)] Satanjeev Banerjee and Alon Lavie. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the ACL workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, pp. 65–72. Association for Computational Linguistics, 2005.
  • [Chen & Lawrence Zitnick(2015)] Xinlei Chen and C Lawrence Zitnick. Mind’s eye: A recurrent visual representation for image caption generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2422–2431, 2015.
  • [COCO(2017)] COCO. Coco dataset for image captioning. http://mscoco.org/dataset/#download, 2017.
  • [Devlin et al.(2015)] Jacob Devlin, Hao Cheng, Hao Fang, Saurabh Gupta, Li Deng, Xiaodong He, Geoffrey Zweig, and Margaret Mitchell. Language models for image captioning: The quirks and what works. arXiv preprint arXiv:1505.01809, 2015.
  • [Donahue et al.(2015)] Jeffrey Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2625–2634, 2015.
  • [Fang et al.(2015)] Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh K Srivastava, Li Deng, Piotr Dollár, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C Platt, et al. From captions to visual concepts and back. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pp. 1473–1482, 2015.
  • [Gan et al.(2017)] Zhe Gan, Chuang Gan, Xiaodong He, Yunchen Pu, Kenneth Tran, Jianfeng Gao, Lawrence Carin, and Li Deng. Semantic compositional networks for visual captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  • [He et al.(2016)] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
  • [Jurafsky & Martin(2017)] Daniel Jurafsky and James H Martin. Speech and Language Processing. 3rd ed. draft edition edition, 2017.
  • [Karpathy & Fei-Fei(2015)] Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3128–3137, 2015.
  • [Kiros et al.(2014a)] Ryan Kiros, Ruslan Salakhutdinov, and Rich Zemel. Multimodal neural language models. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pp. 595–603, 2014a.
  • [Kiros et al.(2014b)] Ryan Kiros, Ruslan Salakhutdinov, and Richard S Zemel. Unifying visual-semantic embeddings with multimodal neural language models. arXiv preprint arXiv:1411.2539, 2014b.
  • [Manning(2017)] Christopher Manning. Stanford parser. https://nlp.stanford.edu/software/lex-parser.shtml, 2017.
  • [Mao et al.(2015)] Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, and Alan Yuille. Deep captioning with multimodal recurrent neural networks (m-rnn). In Proceedings of International Conference on Learning Representations, 2015.
  • [Papineni et al.(2002)] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pp. 311–318. Association for Computational Linguistics, 2002.
  • [Pennington et al.(2017)] Jeffrey Pennington, Richard Socher, and Christopher Manning. Stanford glove: Global vectors for word representation. https://nlp.stanford.edu/projects/glove/, 2017.
  • [Smolensky(1990)] Paul Smolensky. Tensor product variable binding and the representation of symbolic structures in connectionist systems. Artificial intelligence, 46(1-2):159–216, 1990.
  • [Smolensky(2012)] Paul Smolensky. Symbolic functions from neural computation. Philosophical Transactions of the Royal Society — A: Mathematical, Physical and Engineering Sciences, 370:3543 – 3569, 2012.
  • [Smolensky & Legendre(2006)] Paul Smolensky and Géraldine Legendre. The harmonic mind: From neural computation to optimality-theoretic grammar. Volume 1: Cognitive architecture. MIT Press, 2006.
  • [Toutanova et al.(2003)] Kristina Toutanova, Dan Klein, Christopher D Manning, and Yoram Singer. Feature-rich part-of-speech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology-Volume 1, pp. 173–180. Association for Computational Linguistics, 2003.
  • [Vedantam et al.(2015)] Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. Cider: Consensus-based image description evaluation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4566–4575, 2015.
  • [Vinyals et al.(2015)] Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3156–3164, 2015.
  • [Zhu et al.(2013)] Muhua Zhu, Yue Zhang, Wenliang Chen, Min Zhang, and Jingbo Zhu. Fast and accurate shift-reduce constituent parsing. In Proceedings of Annual Meeting of the Association for Computational Linguistics (ACL), pp. 434–443, 2013.