Neural Self Talk: Image Understanding via
Continuous Questioning and Answering
In this paper we consider the problem of continuously discovering image contents by actively asking image-based questions and subsequently answering the questions being asked. The key components are a Visual Question Generation (VQG) module and a Visual Question Answering (VQA) module, both built from Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs). Given a dataset that contains images, questions, and their answers, both modules are trained at the same time, the difference being that VQG uses the images as input and the corresponding questions as output, while VQA uses images and questions as input and the corresponding answers as output. We evaluate the self talk process subjectively using Amazon Mechanical Turk, which shows the effectiveness of the proposed method.
Yezhou Yang, Yi Li, Cornelia Fermuller, and Yiannis Aloimonos University of Maryland, College Park, MD 20742 Toyota Research Institute of North America, Ann Arbor, MI 48105
1 Introduction

Acclaimed as “one of the last cognitive tasks to be performed well by computers” [?], exploring and analyzing novel visual scenes is a journey of continuous discovery, which requires not just passively detecting objects and segmenting the images, but arguably more importantly, actively asking the right questions and subsequently closing the semantic loop by answering the questions being asked.
This paper proposes a framework that can continuously discover novel questions on an image, and then provide legitimate answers. This “self talk” approach for image understanding goes beyond visual classification by introducing a theoretically infinite interaction between a natural language question generation module and a visual question answering module. Under this architecture, the “thought process” for image understanding can be revealed by a sequence of consecutive question and answer pairs (Fig. 1).
Our “self talk” framework has two “executives” that take their roles iteratively: 1) question generation, which is responsible for asking the right questions, and 2) question answering, which accepts the questions and generates potential answers. With the rapid development in computer vision and machine learning [?; ?; ?; ?; ?; ?] there are a few tools developed for this seemingly intuitive philosophy in Artificial Intelligence, but self talk is certainly more than an aggregation of tools, because it is fundamentally a challenging chicken-and-egg problem.
1) Questions from a single image can be highly diverse. Researchers have attempted a few approaches that mostly centered on asking limited questions such as “what” (e.g., object and action recognition) and “where” (e.g., place recognition). Unfortunately, questions can be anything related or unrelated to the given picture. This puzzling issue of unconstrained questions can be traced back to the original Turing test (“Would the questions have to be sums, or could I ask it what it had had for breakfast?” — Turing: “Oh yes, anything.” [?]), and the solution is still elusive.
Luckily, researchers have advanced the viewpoint that if we are able to develop a semantic understanding of a visual scene, we should be able to produce natural language descriptions of such semantics. This “image captioning” perspective has produced exciting achievements, but it is limited to generating descriptive captions; we therefore propose to consider the question “Can we generate questions, based on images?”.
2) Evaluating the correctness of automatic question answering is in the realm of the Turing test. The “Visual Question Answering” [?] problem has recently become an important area in computer vision and machine learning, and it is sometimes referred to as the Visual Turing challenge [?]. A few approaches [?; ?] have shown that deep neural nets can again be trained to answer a related question about an arbitrary scene with promising success.
3) The semantic loop between the above two “executives” must be closed. While the above two “executives” are very interesting entities, they cannot achieve the “self talk” on their own. On the one hand, the image captioning task neglects the importance of the thought process behind the appearance; moreover, the amount of information covered by a finite language description is limited. These limitations have been pointed out by several recent works [?; ?; ?] and have been partially addressed by introducing middle-layer knowledge representations. On the other hand, the setting of the visual question answering task requires as input a related question given by human beings. These questions themselves inevitably contain information about the image, which is recognized by human beings and only available through human intervention. Several recent results on VQA benchmarks indicate that language-only information seems to contribute most of the good performance, and how important the role of visual recognition is remains unclear.
In our formalism, the input to the final trained system is solely an image; both questions and answers are generated from the trained models. We also argue that the capability to actively raise relevant and reasonable questions is key to intelligent machinery. Thus, the main contributions of this paper are twofold: 1) we propose to automatically generate “self talk” for arbitrary image understanding, a conceptually intuitive yet AI-challenging task; 2) we propose an image question generation module based on deep learning methods.
Fig. 2 illustrates the flow chart of our approach (Sec. 3). In Sec. 4, we report experiments on two publicly available datasets (DAQUAR for the indoor domain [?] and COCO for the arbitrary domain [?]). Specifically, we 1) evaluate the quality of the generated questions using standard language-based metrics similar to those of image captioning, and 2) use Amazon Mechanical Turk (AMT)-based evaluations of the generated question-answer pairs. We further discuss the insights from the experimental results and the challenges beyond them.
2 Related Work
Our work is related mainly to three lines of research of natural image understanding: 1) question generation, 2) image captioning and 3) visual question answering.
Question Generation is one of the key challenges in natural language processing. Previous approaches generate questions from natural language sentences mainly through template matching in a conservative manner [?; ?; ?]. [?] proposed a parsing-based approach to synthetically create question-answer pairs from image annotations. In this paper, we propose a visual question generation module directly adapted from an image captioning system [?]. It is data-driven, its potential output question space is significantly larger than that of previous parsing- or template-based approaches, and the trained module takes only an image as input.
In Image Captioning, in addition to the deep-neural-net-based approaches mentioned in Sec. 1, we also share roots with the works on generating textual descriptions. These include works that retrieve and rank sentences from training sets given an image, such as [?], [?], [?], [?]. [?], [?], [?], [?], [?] are some of the works that generate descriptions by stitching together annotations or by applying templates to detected image content.
In the field of Visual Question Answering, researchers have very recently spent a significant amount of effort on both creating datasets and proposing new models [?; ?; ?; ?]. Interestingly, both [?] and [?] adapted MS-COCO [?] images and created open-domain datasets with human-generated questions and answers. The creation of these visual question answering testbeds cost more than 20 person-years of effort on the Amazon Mechanical Turk platform, and some questions are very challenging, actually requiring logical reasoning to answer correctly. Both [?] and [?] use recurrent networks to encode the sentence and output the answer. Specifically, [?] applies a single network to handle both encoding and decoding, while [?] divides the task between an encoder network and a decoder network. More recently, [?] reported state-of-the-art VQA performance on multiple benchmarks. The progress is mainly due to formulating the task as a classification problem and focusing on the domain of questions that can be answered with one word. Our visual question answering module adopts this approach.
3 Self talk: Theory and Practice
3.1 Theory and Motivation
The phenomenon of “self talk” has been studied in psychology for hundreds of years. The term denotes a special form of intrapersonal communication: a communicator’s internal use of language or thought. In the terms of computer science and engineering, intrapersonal communication can be envisioned as occurring in the mind of the individual in a model that contains a sender, a receiver, and a potential feedback loop. This process happens consciously or subconsciously in our minds. The capability to raise questions and answer them oneself is also crucial for learning: question raising and answering facilitate the learning process. For example, in the field of education, reciprocal questioning has been studied as a strategy in which students take on the role of the teacher by formulating their own list of questions about reading material. In this paper, we regard this as another challenge for computers, and we believe that one key to intelligence is raising the right questions.
The benefits of modeling the scene understanding task as revealing the “self talk” of an intelligent agent are mainly twofold: 1) the understanding of the scene is revealed step by step, and failure cases can be traced to specific question-answer pairs; in other words, the process is more transparent; 2) theoretically the number of questions can be infinite and the question-and-answer loop can be never-ending. This is especially crucial for active agents, such as mobile robots, whose view of the scene keeps changing as they move around, making the “self talk” in this scenario never-ending. For a specific task, such as scene category recognition, this formulation has been proven efficient [?].
From a practical point of view, revealing the “self talk” makes computers more human-like, and the presented system has application potential in creating robotic companions [?]. Note that as human beings we make mistakes, and some of them are “cute” mistakes. In Sec. 4, we show that our system makes many “cute” mistakes too, which actually makes it more human-like.
3.2 Our Approach
We have two hypotheses to validate in this work: 1) with the current progress in image captioning, a system can be trained to generate reasonable and relevant questions, and 2) by coupling it with a visual question answering system, a system can be trained to generate human-like “self talk” with promising success.
In this section, we introduce a frustratingly straightforward policy to generate a sequence of questions for the purpose of “self talk”. We repeat the sampling process N times (typically five times in our experiments). For each generated question q_i and the accompanying original image I, we pass the pair through the VQA module to obtain an answer a_i. In this manner we obtain the “self talk” question-answer pairs {(q_i, a_i)}. The “self talk” is further evaluated by an Amazon Mechanical Turk based human evaluation.
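The sampling policy above can be sketched as a simple loop. This is a minimal illustration, not the authors' implementation; `vqg_sample` and `vqa_answer` are hypothetical stand-ins for the trained VQG and VQA modules.

```python
def self_talk(image, vqg_sample, vqa_answer, n_rounds=5):
    """Sample n_rounds questions about `image` and answer each one.

    vqg_sample: image -> question (the VQG module)
    vqa_answer: (image, question) -> answer (the VQA module)
    Returns the list of "self talk" (question, answer) pairs.
    """
    pairs = []
    for _ in range(n_rounds):
        question = vqg_sample(image)           # sample one question
        answer = vqa_answer(image, question)   # close the loop with an answer
        pairs.append((question, answer))
    return pairs
```

Because the loop only depends on the two module interfaces, either module can be swapped out without changing the policy.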
Visual Question Generation (VQG) Module

For this module, we assume an input set of images and the questions raised about them by human annotators; in our scenario, these are full images and their question sets. We adopted the method from [?], which introduces a simple but effective extension of previously developed Recurrent Neural Network (RNN) based language models to train an image captioning model effectively. To keep this work self-contained, we briefly go over the method here.
Specifically, during the training of our image question generation module, the multimodal RNN takes the image pixels I and a sequence of input word vectors (x_1, ..., x_T). It then computes a sequence of hidden states (h_1, ..., h_T) and a sequence of outputs (y_1, ..., y_T) by iterating the following recurrence relation from t = 1 to T:

    b_v = W_hi [CNN_θc(I)]
    h_t = f(W_hx x_t + W_hh h_{t-1} + b_h + 1(t = 1) ⊙ b_v)
    y_t = W_oh h_t + b_o

In the equations above, W_hi, W_hx, W_hh, W_oh and b_h, b_o are learnable parameters, and CNN_θc(I) is the last layer of a pre-trained Convolutional Neural Network (CNN). The output vector y_t holds the (unnormalized) log probabilities of words in the dictionary and one additional dimension for a special END token. In this approach, the image context vector b_v is given to the RNN only at the first iteration. A typical size of the hidden layer of the RNN is 512 neurons.
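A toy-sized forward step of this recurrence can be sketched as follows. The dimensions, the tanh choice for f, and the random initialization are illustrative assumptions (the paper's hidden size is 512 and f is a generic elementwise nonlinearity).

```python
import numpy as np

rng = np.random.default_rng(0)
V, D, H = 10, 8, 16        # vocab size, word-vector size, hidden size (toy values)

W_hi = rng.normal(size=(H, 32)) * 0.01    # maps CNN features into hidden space
W_hx = rng.normal(size=(H, D)) * 0.01
W_hh = rng.normal(size=(H, H)) * 0.01
W_oh = rng.normal(size=(V + 1, H)) * 0.01  # +1 output dimension for the END token
b_h = np.zeros(H)
b_o = np.zeros(V + 1)

cnn_feat = rng.normal(size=32)             # stands in for CNN_{theta_c}(I)
b_v = W_hi @ cnn_feat                      # image context vector

def step(x_t, h_prev, first_step):
    """One iteration of the recurrence; b_v enters only at t = 1."""
    h_t = np.tanh(W_hx @ x_t + W_hh @ h_prev + b_h + (b_v if first_step else 0))
    y_t = W_oh @ h_t + b_o                 # unnormalized log probabilities
    return h_t, y_t

h, y = step(rng.normal(size=D), np.zeros(H), first_step=True)
```

Training would backpropagate a Softmax loss on y_t through these parameters; only the single forward step is shown here.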
The RNN is trained to combine a word x_t and the previous context h_{t-1} to predict the next word y_t. We condition the RNN’s predictions on the image information b_v via bias interactions on the first step. The training proceeds as follows (refer to Fig. 3a): first set h_0 = 0 and x_1 to a special START vector, with the desired label being the first word in the training question. Then set x_2 to the word vector of the first word and expect the network to predict the second word, and so on. Finally, on the last step, when x_T represents the last word, the target label is set to a special END token. The cost function maximizes the log probability assigned to the target labels (here, a Softmax classifier).
At test time, to generate one question, we first compute the image representation b_v, then set h_0 = 0 and x_1 to the START vector, and compute the distribution over the first word y_1. We sample each word in the question from the distribution, set its embedding vector as the next input, and repeat this process until the END token is generated. In the rest of the paper, we denote this question generation process as VQG(I).
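The test-time sampling loop can be sketched as below. This is an illustrative sketch under assumptions: `step`, `embed`, the START vector, and the token ids are hypothetical stand-ins for the trained model's components.

```python
import numpy as np

def sample_question(step, embed, start_vec, end_id, hidden_size,
                    max_len=20, rng=None):
    """Sample one question, word by word, until END (or max_len).

    step:  (x_t, h_prev, first_step) -> (h_t, logits) of the trained RNN
    embed: word id -> embedding vector fed as the next input
    """
    rng = rng or np.random.default_rng(0)
    h = np.zeros(hidden_size)
    x = start_vec
    words, first = [], True
    for _ in range(max_len):
        h, logits = step(x, h, first)
        first = False
        p = np.exp(logits - logits.max())   # numerically stable softmax
        p /= p.sum()
        w = rng.choice(len(p), p=p)         # sample the next word
        if w == end_id:
            break
        words.append(w)
        x = embed(w)                        # sampled word feeds the next step
    return words
```

Sampling (rather than always taking the argmax) is what lets repeated calls produce different questions for the same image.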
Visual Question Answering (VQA) Module

For this module, we assume an input set of images and their annotated question-answer pairs from human labelers. We adopted the approach from [?], which introduced a model built directly on top of the long short-term memory (LSTM) [?] sentence model, called the “VIS+LSTM” model. It treats the image as one word of the question, as shown in Fig. 3b.
The model uses the last hidden layer of the 19-layer Oxford VGG ConvNet [?] trained on the ImageNet 2014 Challenge as the visual embeddings. The CNN part of the model is kept frozen during training. The model also uses a general-purpose skip-gram word embedding model [?]. In our experiments, the word embeddings are kept dynamic (trained with the rest of the model). Please refer to [?] for the details. In the rest of the paper, we denote this trained VQA module as VQA(I, q).
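The "image as one word" idea amounts to prepending a projected CNN feature to the question's word embeddings before running the LSTM. A minimal sketch of that input construction, with illustrative names and dimensions (4096-d VGG feature, 300-d word embeddings):

```python
import numpy as np

def vis_lstm_inputs(cnn_feature, word_embeddings, W_img):
    """Build the VIS+LSTM input sequence: image token first, then words.

    W_img projects the frozen CNN feature into word-embedding space,
    so the image can be consumed as if it were the first word.
    """
    img_token = W_img @ cnn_feature
    return np.vstack([img_token] + list(word_embeddings))

rng = np.random.default_rng(0)
seq = vis_lstm_inputs(
    rng.normal(size=4096),                        # stand-in VGG feature
    [rng.normal(size=300) for _ in range(5)],     # five word embeddings
    rng.normal(size=(300, 4096)) * 0.01)          # learnable projection
# seq has 6 rows: one image token followed by five word tokens
```

The LSTM then reads this sequence and the final state is classified into a one-word answer, following the classification formulation mentioned in Sec. 2.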
Amazon Mechanical Turk based Evaluation
For the generated question-answer (“self talk”) pairs, since there are no ground-truth annotations that could be used for automatic evaluation, we designed an Amazon Mechanical Turk (AMT) based human evaluation metric.
We ask the Turkers to imagine they have a companion robot named “self talker”. Once they bring the robot to the place shown in the given image, the robot starts to generate questions and then answer them itself, as if it were talking to itself. Specifically, we ask the Turkers to evaluate three metrics: 1) Readability: how readable the “self talker”’s “self talk” is. Scores range from 1 (not readable at all) to 5 (no grammatical errors); grammatically sound “self talk” receives better readability ratings. 2) Correctness: how correct the “self talk” is; content that describes the image with higher precision receives better correctness ratings (range from 1 to 5). 3) Human likeness: how human-like the robot’s performance is (range from 1 to 5).
4 Experiments

We test the presented approach on two visual question answering (VQA) datasets: DAQUAR [?] and MSCOCO-VQA [?]. In the experiments on these two datasets, we first report the question generation performance using standard language-based evaluation metrics from image captioning. Then, to evaluate the performance of the “self talk”, we report the AMT results and provide further discussion.
4.1 Datasets

We first briefly describe the two test beds we use for the experiments.
DAQUAR: Indoor Scenes: The DAQUAR [?] VQA dataset contains 12,468 human question-answer pairs on 1,449 images of indoor scenes. The training set contains 795 images and 6,793 question-answer pairs, and the testing set contains 654 images and 5,675 question-answer pairs. We run experiments on the full dataset with all classes, instead of the “reduced set” where the output space is restricted to only 37 object categories and 25 test images in total, because the full dataset is much more challenging and the results are statistically more meaningful.
COCO: General Domain: MSCOCO-VQA [?] is the latest VQA dataset, containing open-ended questions about arbitrary images collected from the Internet. It contains 369,861 questions and 3,698,610 ground-truth answers based on 123,287 MSCOCO images. These questions and answers are sentence-based and open-ended. The training and testing split follows the official MSCOCO-VQA split; specifically, we use the 82,783 training images for training and the 40,504 validation images for testing. The variation among the images in this dataset is large, and to date it is the largest general-domain VQA dataset. Collecting it cost over 20 person-years of working time using the Amazon Mechanical Turk interface.
4.2 Question Generation Evaluation
We now evaluate the ability of our RNN model to raise questions about a given image. We first trained our multimodal RNN to generate questions on full images, with the goal of verifying that the model is rich enough to support the mapping from image data to sequences of words. We report the BLEU [?], METEOR [?], ROUGE [?] and CIDEr [?] scores computed with the coco-caption code [?]. Each method evaluates a candidate generated question by measuring how well it matches a set of several reference questions (on average eight questions for the DAQUAR dataset, and three for MSCOCO-VQA) written by humans.
Table 1: Question generation performance on DAQUAR and MSCOCO-VQA.

| Method | CIDEr | METEOR | ROUGE_L | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 |
|---|---|---|---|---|---|---|---|
| DAQUAR question MAX | .512 | .361 | .761 | .81 | .735 | .635 | .361 |
| DAQUAR question SAMPLE | .143 | .256 | .631 | .685 | .561 | .428 | .337 |
| coco-VQA question MAX | .331 | .178 | .493 | .594 | .422 | .291 | .193 |
| coco-VQA question SAMPLE | .133 | .127 | .342 | .388 | .220 | .117 | .064 |
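To make the matching idea concrete, the core of BLEU-style scoring is clipped n-gram precision against multiple references. The sketch below implements the unigram case only, with made-up example sentences; the actual evaluation used the full coco-caption toolkit.

```python
from collections import Counter

def clipped_unigram_precision(candidate, references):
    """Fraction of candidate words matched, with each word's count
    clipped by its maximum count across the reference sentences."""
    cand = Counter(candidate.split())
    max_ref = Counter()
    for ref in references:
        for w, c in Counter(ref.split()).items():
            max_ref[w] = max(max_ref[w], c)
    clipped = sum(min(c, max_ref[w]) for w, c in cand.items())
    return clipped / sum(cand.values())

p = clipped_unigram_precision(
    "what color is the table",
    ["what is on the desk", "what objects are on the table"])
# 4 of the 5 candidate words appear in some reference, so p = 0.8
```

Full BLEU additionally combines precisions for n-grams up to length 4 with a brevity penalty; the clipping step shown here is what prevents a candidate from gaming the score by repeating a common word.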
To further validate the performance of question generation, we also list the performance metrics reported in the state-of-the-art image captioning work [?]. From Table 1, except for the CIDEr score, the question generation performance is comparable with state-of-the-art image captioning performance. Note that CIDEr is a consensus-based metric. Two facts make a high CIDEr score hard to achieve for the question generation task: 1) coco-VQA has three reference ground-truth questions per image while coco-Caption has five, and 2) human-annotated questions by their nature vary more than captions.
4.3 “Self talk” Evaluation
In Table 2 we report the average score and its standard deviation for each metric. We randomly drew 100 and 1000 testing samples from the DAQUAR and MSCOCO-VQA testing sets, respectively, for the human evaluation reported here. The human evaluation shows that the generated questions achieve close to human readability. The correctness scores indicate that the generated “self talk” on average has some relevance to the image, and according to the Turkers, the imagined companion robot on average rates above “a bit like a human being” but below the “half human, half machine” category.
We also asked the Turkers to choose among five immediate feelings about their companion robot’s performance. Fig. 7 and Fig. 8 depict the feedback we got from the users. Given that the performance of the “self talker” robot is still far from human performance, most of the Turkers said they would like such a robot or found it amusing. Only very few users felt scared, which indicates that our image understanding performance is far from falling into the so-called “uncanny valley” [?] of machine intelligence. At the end of our evaluation, we also asked the Turkers to comment on the robot’s performance. Some example comments can be found in Fig. 6.
5 Conclusion and Future Work
In this paper, we consider the image understanding problem as a self-questioning and answering process, and we present a primitive “self talk” generation method based on two deep neural network modules. The experimental evaluation of both the question generation performance and the final “self talk” pairs shows that the presented method achieves a decent amount of success. There are still several potential pathways toward more intelligent “self talk”.
The role of common-sense knowledge. Common-sense knowledge plays a crucial role in the question raising and answering process for human beings [?]. Our experimental results show that, by learning from a large set of annotated question-answer pairs, our system implicitly encodes a certain level of common sense. The real challenge is dealing with situations where the visual input conflicts with the common sense learned from contextual data; in our experiments, the model seems biased toward trusting its common sense more than the visual input. How to incorporate either logical or numerical forms of common sense into an end-to-end image understanding system remains an open problem.
Creating a story-line. When human beings perform intrapersonal communication, we tend to follow a logical flow, or so-called story-line. This requires a question generation module that takes the answers to previous questions into consideration. This implies a more sophisticated dialogue generation process (such as a cognitive dialogue [?]), which could also prevent the self-contradictions observed in this paper’s generated results (see the last comment in Fig. 6).
As indicated by the AMT feedback, human users found it endearing to have a robot companion that moves around and talks. Another open avenue is to integrate the current trained model onto a robot platform and continuously refine the model through interaction with users.
- [Aditya et al., 2015] Somak Aditya, Yezhou Yang, Chitta Baral, Cornelia Fermuller, and Yiannis Aloimonos. From images to sentences through scene description graphs using commonsense reasoning and knowledge. arXiv preprint arXiv:1511.03292, 2015.
- [Ali et al., 2010] Husam Ali, Yllias Chali, and Sadid A Hasan. Automation of question generation from sentences. In Proceedings of QG2010: The Third Workshop on Question Generation, pages 58–67, 2010.
- [Aloimonos and Fermüller, 2015] Yiannis Aloimonos and Cornelia Fermüller. The cognitive dialogue: A new model for vision implementing common sense reasoning. Image and Vision Computing, 34:42–44, 2015.
- [Antol et al., 2015] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. Vqa: Visual question answering. In International Conference on Computer Vision (ICCV), 2015.
- [Brown et al., 2005] Jonathan C Brown, Gwen A Frishkoff, and Maxine Eskenazi. Automatic question generation for vocabulary assessment. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 819–826. Association for Computational Linguistics, 2005.
- [Chen and Zitnick, 2014] Xinlei Chen and C Lawrence Zitnick. Learning a recurrent visual representation for image caption generation. arXiv preprint arXiv:1411.5654, 2014.
- [Chen et al., 2015] Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, and C Lawrence Zitnick. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325, 2015.
- [Donahue et al., 2014] Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. Long-term recurrent convolutional networks for visual recognition and description. arXiv preprint arXiv:1411.4389, 2014.
- [Elliott and Keller, 2013] Desmond Elliott and Frank Keller. Image description using visual dependency representations. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013, 18-21 October 2013, Grand Hyatt Seattle, Seattle, Washington, USA, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1292–1302, 2013.
- [Farhadi et al., 2010] Ali Farhadi, Mohsen Hejrati, Mohammad Amin Sadeghi, Peter Young, Cyrus Rashtchian, Julia Hockenmaier, and David Forsyth. Every picture tells a story: Generating sentences from images. In Proceedings of the 11th European Conference on Computer Vision: Part IV, ECCV’10, pages 15–29, Berlin, Heidelberg, 2010. Springer-Verlag.
- [Gao et al., 2015] Haoyuan Gao, Junhua Mao, Jie Zhou, Zhiheng Huang, Lei Wang, and Wei Xu. Are you talking to a machine? dataset and methods for multilingual image question answering. arXiv preprint arXiv:1505.05612, 2015.
- [Heilman and Smith, 2010] Michael Heilman and Noah A Smith. Good question! statistical ranking for question generation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 609–617. Association for Computational Linguistics, 2010.
- [Hochreiter and Schmidhuber, 1997] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
- [Hodosh et al., 2013] Micah Hodosh, Peter Young, and Julia Hockenmaier. Framing image description as a ranking task: Data, models and evaluation metrics. Journal of Artificial Intelligence Research, pages 853–899, 2013.
- [Johnson et al., 2015] Justin Johnson, Ranjay Krishna, Michael Stark, Jia Li, Michael Bernstein, and Li Fei-Fei. Image retrieval using scene graphs. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.
- [Karpathy and Li, 2014] Andrej Karpathy and Fei-Fei Li. Deep visual-semantic alignments for generating image descriptions. arXiv preprint arXiv:1412.2306, 2014.
- [Kiros et al., 2014] Ryan Kiros, Ruslan Salakhutdinov, and Richard S Zemel. Unifying visual-semantic embeddings with multimodal neural language models. arXiv preprint arXiv:1411.2539, 2014.
- [Kulkarni et al., 2011] Girish Kulkarni, Visruth Premraj, Sagnik Dhar, Siming Li, Yejin Choi, Alexander C Berg, and Tamara L Berg. Baby talk: Understanding and generating image descriptions. In Proceedings of the 24th CVPR, 2011.
- [Kuznetsova et al., 2012] Polina Kuznetsova, Vicente Ordonez, Alexander C. Berg, Tamara L. Berg, and Yejin Choi. Collective generation of natural image descriptions. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1, ACL ’12, pages 359–368, Stroudsburg, PA, USA, 2012. Association for Computational Linguistics.
- [Denkowski and Lavie, 2014] Michael Denkowski and Alon Lavie. Meteor universal: Language specific translation evaluation for any target language. ACL 2014, page 376, 2014.
- [Lin et al., 2014] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Computer Vision–ECCV 2014, pages 740–755. Springer, 2014.
- [Lin, 2004] Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out: Proceedings of the ACL-04 workshop, volume 8, 2004.
- [Ma et al., 2015] Lin Ma, Zhengdong Lu, and Hang Li. Learning to answer questions from image using convolutional neural network. arXiv preprint arXiv:1506.00333, 2015.
- [Malinowski and Fritz, 2014] Mateusz Malinowski and Mario Fritz. Towards a visual turing challenge. arXiv preprint arXiv:1410.8027, 2014.
- [Malinowski et al., 2015] Mateusz Malinowski, Marcus Rohrbach, and Mario Fritz. Ask your neurons: A neural-based approach to answering questions about images. arXiv preprint arXiv:1505.01121, 2015.
- [Mao et al., 2014] Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, and Alan L Yuille. Explain images with multimodal recurrent neural networks. arXiv preprint arXiv:1410.1090, 2014.
- [Mori et al., 2012] Masahiro Mori, Karl F MacDorman, and Norri Kageki. The uncanny valley [from the field]. Robotics & Automation Magazine, IEEE, 19(2):98–100, 2012.
- [Ordonez et al., 2011] Vicente Ordonez, Girish Kulkarni, and Tamara L. Berg. Im2text: Describing images using 1 million captioned photographs. In John Shawe-Taylor, Richard S. Zemel, Peter L. Bartlett, Fernando C. N. Pereira, and Kilian Q. Weinberger, editors, NIPS, pages 1143–1151, 2011.
- [Papineni et al., 2002] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics, 2002.
- [Pennington et al., 2014] Jeffrey Pennington, Richard Socher, and Christopher D. Manning. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), pages 1532–1543, 2014.
- [Rapaport, 2005] William J. Rapaport. The turing test: Verbal behavior as the hallmark of intelligence edited by stuart shieber. Computational Linguistics, 31(3):407–412, September 2005.
- [Ren et al., 2015] Mengye Ren, Ryan Kiros, and Richard Zemel. Exploring models and data for image question answering. arXiv preprint arXiv:1505.02074, 2015.
- [Schuster et al., 2015] Sebastian Schuster, Ranjay Krishna, Angel Chang, Li Fei-Fei, and Christopher D. Manning. Generating semantically precise scene graphs from textual descriptions for improved image retrieval. In Proceedings of the Fourth Workshop on Vision and Language, pages 70–80, Lisbon, Portugal, September 2015. Association for Computational Linguistics.
- [Simonyan and Zisserman, 2014] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
- [Socher et al., 2014] Richard Socher, Andrej Karpathy, Quoc V. Le, Christopher D. Manning, and Andrew Y. Ng. Grounded compositional semantics for finding and describing images with sentences. TACL, 2:207–218, 2014.
- [Stork, 1998] David G Stork. HAL’s Legacy: 2001’s Computer as Dream and Reality. MIT Press, 1998.
- [Vedantam et al., 2014] Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. Cider: Consensus-based image description evaluation. arXiv preprint arXiv:1411.5726, 2014.
- [Vinyals et al., 2014] Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. arXiv preprint arXiv:1411.4555, 2014.
- [Yang et al., 2011] Yezhou Yang, Ching Lik Teo, Hal Daumé, III, and Yiannis Aloimonos. Corpus-guided sentence generation of natural images. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’11, pages 444–454, Stroudsburg, PA, USA, 2011. Association for Computational Linguistics.
- [Yao et al., 2010] Benjamin Z. Yao, Xiong Yang, Liang Lin, Mun Wai Lee, and Song Chun Zhu. I2t: Image parsing to text description. Proceedings of the IEEE, 98(8):1485–1508, 2010.
- [Yu et al., 2011] Xiaodong Yu, Cornelia Fermuller, Ching Lik Teo, Yezhou Yang, and Yiannis Aloimonos. Active scene recognition with vision and language. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 810–817. IEEE, 2011.