Leveraging Visual Question Answering for Image-Caption Ranking

Xiao Lin, Devi Parikh
Bradley Department of Electrical and Computer Engineering, Virginia Tech
Email: {linxiao,parikh}@vt.edu
Abstract

Visual Question Answering (VQA) is the task of taking as input an image and a free-form natural language question about the image, and producing an accurate answer. In this work we view VQA as a “feature extraction” module to extract image and caption representations. We employ these representations for the task of image-caption ranking. Each feature dimension captures (imagines) whether a fact (question-answer pair) could plausibly be true for the image and caption. This allows the model to interpret images and captions from a wide variety of perspectives. We propose score-level and representation-level fusion models to incorporate VQA knowledge in an existing state-of-the-art VQA-agnostic image-caption ranking model. We find that incorporating and reasoning about consistency between images and captions significantly improves performance. Concretely, our model improves state-of-the-art on caption retrieval by 7.1% and on image retrieval by 4.4% on the MSCOCO dataset.

Keywords:
Visual question answering, image-caption ranking, mid-level concepts

1 Introduction

Visual Question Answering (VQA) is an “AI-complete” problem that requires knowledge from multiple disciplines such as computer vision, natural language processing and knowledge base reasoning. A VQA system takes as input an image and a free-form open-ended question about the image and outputs the natural language answer to the question. A VQA system needs to not only recognize objects and scenes but also reason beyond low-level recognition about aspects such as intention, future, physics, material and commonsense knowledge. For example (Q: Who is the person in charge in this picture? A: Chef) reveals the most important person and occupation in the image. Moreover, answers to multiple questions about the same image can be correlated and may reveal more complex interactions. For example (Q: What is this person riding? A: Motorcycle) and (Q: What is the man wearing on his head? A: Helmet) might reveal correlations observable in the visual world due to safety regulations.

Figure 1: Aligning images and captions requires high-level reasoning e.g. “a batter up at the plate” would imply that a player is holding a bat, posing to hit the baseball and there might be another player nearby waiting to catch the ball. There is rich knowledge in Visual Question Answering (VQA) corpora containing human-provided answers to a variety of questions one could ask about images. We propose to leverage knowledge in VQA by using VQA models learned on images and captions as “feature extraction” modules for image-caption ranking.

Today’s VQA models, while far from perfect, may already be picking up on these semantic correlations of the world. If so, they may serve as an implicit knowledge resource to help other tasks. Just like we do not need to fully understand the theory behind an equation to use it, can we already use VQA knowledge captured by existing VQA models to improve other tasks?

In this work we study the problem of using VQA knowledge to improve image-caption ranking. Consider the image and its caption in Figure 1. Aligning them not only requires recognizing the batter and that it is a baseball game (mentioned in the caption), but also realizing that a batter up at the plate would imply that a player is holding a bat, posing to hit the baseball and there might be another player nearby waiting to catch the ball (seen in the image). Image captions tend to be generic. As a result, image captioning corpora may not capture sufficient details for models to infer this knowledge.

Fortunately VQA models try to explicitly learn such knowledge from a corpus of images, each with associated questions and answers. Questions about images tend to be much more specific and detailed than captions. The VQA dataset of [1] in particular has a collection of free-form open-ended questions and answers provided by humans. These images also have associated captions [32].

We propose to leverage VQA knowledge captured by such corpora for image-caption ranking by using VQA models learned on images and captions as “feature extraction” schemes to represent images and captions. Given an image and a caption, we choose a set of free-form open-ended questions and use VQA models learned on images and captions to assess probabilities of their answers. We use these probabilities as image and caption features respectively. In other words, we embed images and captions into the space of VQA questions and answers using VQA models. Such VQA-grounded representations interpret images and captions from a variety of different perspectives and imagine beyond low-level recognition to better understand images and captions.

We propose two approaches that incorporate these VQA-grounded representations into an existing state-of-the-art VQA-agnostic image-caption ranking model [24]: fusing their predictions and fusing their representations. (To the best of our knowledge, on MSCOCO [32], [24] has the state-of-the-art caption retrieval performance and [34] has the state-of-the-art image retrieval performance.) We show that such VQA-aware models significantly outperform the VQA-agnostic model and set state-of-the-art performance on MSCOCO image-caption ranking. Specifically, we improve caption retrieval by 7.1% and image retrieval by 4.4%.

This paper is organized as follows: Section 2 introduces related works. We first introduce VQA and image-caption ranking tasks as our building blocks in Section 3, then detail our VQA-based image-caption ranking models in Section 4. Experiments and results are reported in Section 5. We conclude in Section 6.

2 Related Work

Visual Question Answering. Visual Question Answering (VQA) [1] is the task of taking an image and a free-form open-ended question about the image and automatically predicting the natural language answer to the question. VQA may require fine-grained recognition, object detection, activity recognition, multi-modal and commonsense knowledge. Large datasets [36, 43, 59, 17, 1] have been made available to cover the diversity of knowledge required for VQA. Most notably the VQA dataset [1] contains 614,163 questions and ground truth answers on 204,721 images of the MSCOCO [32] dataset.

Recent VQA models [37, 43, 17, 63, 1, 34] explore state-of-the-art deep learning techniques combining Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs).  [1] also explores a slight variant of VQA that answers a question about the image by reading a caption describing the image instead of looking at the image itself. We call this variant VQA-Caption.

VQA is a challenging task that is still in its early stages. In this work we propose to use both VQA and VQA-Caption models as implicit knowledge resources. We show that current VQA models, while far from perfect, can already be used to improve other multi-modal AI tasks; specifically, image-caption ranking.

Semantic mid-level visual representations. Previous works have explored the use of attributes [15, 5, 56], parts [3, 60], poselets [4, 61], objects [31], actions [44] and contextual information [18, 51, 9] as semantic mid-level representations for visual recognition. Benefits of using such semantic mid-level visual representations include improving fine-grained visual recognition, learning models of visual concepts without example images (zero-shot learning [30, 39]) and improving human-machine communication where a user can explain the target concept during image search [29, 26], or give a classifier an explanation of labels [10, 40]. Recent works also explore using word embeddings [47] and free-form text [12] as representations for zero-shot learning of new object categories. [22] proposes scene graphs for image retrieval. [2] proposes using abstract scenes as an intermediate representation for zero-shot action recognition. Closest to our work is the use of objects, actions, scenes [14], attributes and object interactions [28] for generating and ranking image captions. In this work we propose to use free-form open-ended questions and answers as mid-level representations and we show that they provide rich interpretations of images and captions.

Commonsense knowledge for visual reasoning. Recently there has been a surge of interest in visual reasoning tasks that require high-level reasoning such as physical reasoning [19, 62], future prediction [16, 55, 41], object affordance prediction [64] and textual tasks that require visual knowledge [33, 52, 45]. Such tasks can often benefit from reasoning with external commonsense knowledge resources. [65] uses a knowledge base learned on object categories, attributes, actions and object affordances for query-based image retrieval. [54] learns to anticipate future scenes from watching videos for action and object forecasting. [33] learns to imagine abstract scenes from text for textual tasks that need visual understanding. [52, 45] evaluate the plausibility of commonsense assertions by verifying them on collections of abstract scenes and real images, respectively, to leverage the visual common sense in those collections. Our work explores the use of VQA corpora which have both visual (image) and textual (captions) commonsense knowledge for image-caption ranking.

Images and captions. Recent works [23, 6, 24, 57, 38, 35] have made significant progress on automatic image caption generation and ranking by applying deep learning techniques for image recognition [27, 46, 50] and language modeling [7, 49] on large datasets [8, 32]. Algorithms can now often generate accurate, human-like natural-language captions for images. However, evaluating the quality of such automatically generated open-ended image captions is still an open research problem [13, 53].

On the other hand, ranking images given captions and ranking captions given images require a similar level of image and language understanding, but are amenable to automatic evaluation metrics. Recent works on image-caption ranking mainly focus on improving model architectures. [24, 38] study different architectures for projecting CNN image representations and RNN caption representations into a common multi-modal space. [35] uses multi-modal CNNs for image-caption ranking. [23] aligns image and caption fragments using CNNs and RNNs. Our work takes an orthogonal approach to previous works. We propose to leverage knowledge in VQA corpora containing questions about images and associated answers for image-caption ranking. Our proposed VQA-based image and caption representations provide complementary information to those learned using previous approaches on a large image-caption ranking dataset.

3 Building Blocks: Image-Caption Ranking and VQA

In this section we present image-caption ranking and VQA modules that we build on top of.

3.1 Image-caption ranking

The image-caption ranking task is to retrieve relevant images given a query caption, and relevant captions given a query image. During training we are given image-caption pairs $(I_i, C_i)$, each of which corresponds to an image $I_i$ and its caption $C_i$. For each pair we sample $m$ other images in addition to $I_i$, so the image retrieval task becomes retrieving $I_i$ from $m+1$ images given caption $C_i$. We also sample $m$ random captions in addition to $C_i$, so the caption retrieval task becomes retrieving $C_i$ from $m+1$ captions given image $I_i$.

Our image-caption ranking models learn a ranking scoring function $s(I, C)$ such that the corresponding retrieval probabilities

$$P(I_i \mid C_i) = \frac{\exp\big(s(I_i, C_i)\big)}{\sum_{j} \exp\big(s(I_j, C_i)\big)}, \qquad P(C_i \mid I_i) = \frac{\exp\big(s(I_i, C_i)\big)}{\sum_{j} \exp\big(s(I_i, C_j)\big)} \qquad (1)$$

are maximized, where the sums range over the sampled sets of images and captions, respectively. Let $s(I, C; \theta)$ be parameterized by $\theta$ (to be learnt). We formulate an objective function for $\theta$ as the sum of expected negative log-likelihoods of image and caption retrieval over all image-caption pairs $(I_i, C_i)$:

$$\min_{\theta} \; \sum_{i} \Big[ -\log P(I_i \mid C_i; \theta) - \log P(C_i \mid I_i; \theta) \Big] \qquad (2)$$
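
To make Eqs. 1-2 concrete, the following is a minimal PyTorch sketch of the ranking objective, assuming the distractor sets are simply the other items in a mini-batch; the function name and this in-batch sampling choice are our own illustrative assumptions, not necessarily the exact implementation used here.

```python
import torch
import torch.nn.functional as F

def ranking_nll_loss(scores):
    """Negative log-likelihood ranking objective of Eqs. 1-2.

    scores: (B, B) tensor with scores[i, j] = s(I_i, C_j) for a batch of B
    image-caption pairs; for pair i, the other B - 1 images / captions in the
    batch act as the sampled distractors.
    """
    targets = torch.arange(scores.size(0), device=scores.device)
    loss_caption = F.cross_entropy(scores, targets)    # -log P(C_i | I_i), averaged over i
    loss_image = F.cross_entropy(scores.t(), targets)  # -log P(I_i | C_i), averaged over i
    return loss_caption + loss_image                   # Eq. 2 (batch average)
```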

Recent works on image-caption ranking often construct $s(I, C)$ by combining a vectorized image representation $x_I$, usually the hidden layer activations of a CNN pretrained for image classification, with a vectorized caption representation $x_C$, usually a sentence encoding computed using an RNN, in a multi-modal space. Such scoring functions rely on large image-caption ranking datasets to learn the knowledge necessary for image-caption ranking and do not leverage knowledge in VQA corpora. We call such models VQA-agnostic models.

In this work we use the publicly available state-of-the-art image-caption ranking model of [24] as our baseline VQA-agnostic model. [24] projects the $d_I$-dimensional CNN activation $x_I$ for image $I$ and the $d_C$-dimensional RNN latent encoding $x_C$ for caption $C$ into the same $d$-dimensional common multi-modal embedding space as unit-norm vectors $v_I$ and $v_C$:

$$v_I = \frac{W_I x_I}{\lVert W_I x_I \rVert}, \qquad v_C = \frac{W_C x_C}{\lVert W_C x_C \rVert} \qquad (3)$$

The multi-modal scoring function is defined as their dot product $s_{agnostic}(I, C) = v_I \cdot v_C$.

The VQA-agnostic model of [24] uses the 19-layer VGGNet [46] for the image encoding $x_I$ and an RNN with Gated Recurrent Units [7] for the caption encoding $x_C$. The RNN and the projection parameters $W_I$ and $W_C$ are jointly learned on the image-caption ranking training set using a margin-based objective function.
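
Below is a minimal PyTorch sketch of the VQA-agnostic embedding of Eq. 3. The module name and the default caption and joint dimensions are illustrative assumptions (only the 4,096-d VGGNet image encoding is stated above).

```python
import torch.nn as nn
import torch.nn.functional as F

class VQAAgnosticRanker(nn.Module):
    """Eq. 3: project image and caption encodings to unit-norm vectors, score by dot product."""

    def __init__(self, img_dim=4096, cap_dim=1024, joint_dim=1024):
        super().__init__()
        self.W_I = nn.Linear(img_dim, joint_dim, bias=False)   # image projection W_I
        self.W_C = nn.Linear(cap_dim, joint_dim, bias=False)   # caption projection W_C

    def forward(self, x_I, x_C):
        v_I = F.normalize(self.W_I(x_I), dim=-1)   # unit-norm image embedding v_I
        v_C = F.normalize(self.W_C(x_C), dim=-1)   # unit-norm caption embedding v_C
        return (v_I * v_C).sum(dim=-1)             # s_agnostic(I, C) = v_I . v_C
```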

3.2 VQA

VQA is the task of, given an image $I$ and a free-form open-ended question $Q$ about $I$, generating a natural language answer $A$ to that question. Similarly, the VQA-Caption task proposed by [1] takes a caption $C$ of an image and a question $Q$ about the image, then generates an answer $A$. In [1] the generated answers are evaluated using $\mathrm{Acc}(A) = \min\big(\tfrac{\#\text{humans that provided } A}{3}, 1\big)$. That is, $A$ is 100% correct if at least 3 humans (out of 10) provided the answer $A$.

We closely follow [1] and formulate VQA as a classification task over the top 1,000 most frequent answers from the training set. The oracle accuracies of picking the best answer for each question within this set of answers are 89.37% on training and 88.83% on validation. During training, given triplets of image $I$, question $Q$ and ground truth answer $A_{gt}$, we optimize the negative log-likelihood (NLL) loss to maximize the probability $P(A_{gt} \mid Q, I)$ given by the VQA model. Similarly, given triplets of caption $C$, question $Q$ and ground truth answer $A_{gt}$, we optimize the NLL loss to maximize the VQA-Caption model probability $P(A_{gt} \mid Q, C)$.

Following [1], for a VQA question $Q$ we first encode the input image $I$ using the 19-layer VGGNet [46] as a 4,096-dimensional image encoding $f_I$, and encode the question using a 2-layer RNN with 512 Long Short-Term Memory (LSTM) units [20] per layer as a 2,048-dimensional question encoding $f_Q$. We then project $f_I$ and $f_Q$ into a common 1,024-dimensional multi-modal space as $h_I$ and $h_Q$:

$$h_I = \tanh(W^{q}_I f_I + b^{q}_I), \qquad h_Q = \tanh(W^{q}_Q f_Q + b^{q}_Q) \qquad (4)$$

As in [1] we then compute the representation $h_{IQ}$ for the image-question pair by element-wise multiplying $h_I$ and $h_Q$: $h_{IQ} = h_I \odot h_Q$. The scores $z$ over the 1,000 answers are given by:

$$z = W_z h_{IQ} + b_z, \qquad P(A \mid Q, I) = \mathrm{softmax}(z) \qquad (5)$$

We jointly learn the question encoding RNN and the projection parameters $W^{q}_I$, $W^{q}_Q$, $W_z$ during training.

For the VQA-Caption task, given caption $C$ and question $Q$, we use the same network architecture and learning procedure as above, but use the most frequent 1,000 words in the training captions as the dictionary to construct a 1,000-dimensional bag-of-words encoding $f_C$ for caption $C$. This encoding replaces the image encoding $f_I$, and the representations $h_C$ and $h_{CQ}$ are computed analogously.
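
The following PyTorch sketch shows the answer classifier (Eqs. 4-5) shared by the VQA and VQA-Caption models: `x` stands for the 4,096-d VGGNet image encoding (or the 1,000-d bag-of-words caption encoding) and `q` for the 2,048-d LSTM question encoding. The tanh non-linearity and the layer names are our assumptions based on the description of [1] above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VQAAnswerHead(nn.Module):
    """Project the image/caption encoding and the question encoding into a common
    1,024-d space, fuse by element-wise multiplication, and score the top 1,000
    answers (Eqs. 4-5)."""

    def __init__(self, x_dim, q_dim=2048, joint_dim=1024, num_answers=1000):
        super().__init__()
        self.x_proj = nn.Linear(x_dim, joint_dim)
        self.q_proj = nn.Linear(q_dim, joint_dim)
        self.classifier = nn.Linear(joint_dim, num_answers)

    def forward(self, x, q):
        h_x = torch.tanh(self.x_proj(x))                        # Eq. 4
        h_q = torch.tanh(self.q_proj(q))
        fused = h_x * h_q                                       # element-wise fusion
        return F.log_softmax(self.classifier(fused), dim=-1)    # log P(A | Q, .), Eq. 5

# Training minimizes the NLL of the ground truth answer:
#   loss = F.nll_loss(vqa_head(x, q), answer_index)
```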

The VQA and VQA-Caption models are learned on the train split of the VQA dataset [1] using 82,783 images, 413,915 captions and 248,349 questions. These models achieve VQA validation set accuracies of 54.42% (VQA) and 56.28% (VQA-Caption), respectively. Next, they are used as sub-modules in our image-caption ranking approach.

Figure 2: Images and captions sorted by the probabilities $P(A \mid Q, I)$ and $P(A \mid Q, C)$ assessed by our VQA (top) and VQA-Caption (bottom) models, respectively, for selected question-answer pairs. Indeed, images and captions that are more plausible for the pairs are scored higher.

4 Approach

To leverage knowledge in VQA for image-caption ranking, we propose to represent the images and the captions in the VQA space using VQA and VQA-Caption models. We call such representations VQA-grounded representations.

4.1 VQA-grounded representations

Let’s say we have a VQA model $P_{vqa}(A \mid Q, I)$, a VQA-Caption model $P_{cap}(A \mid Q, C)$ and a set of $N$ questions $\{Q_k\}$ with their plausible answers (one for each question) $\{A_k\}$, $k = 1, \ldots, N$. Then given an image $I$ and a caption $C$, we first extract the $N$-dimensional VQA-grounded activation vectors $a_I$ for $I$ and $a_C$ for $C$, such that the $k$-th dimension of $a_I$ and $a_C$ is the log probability of the answer $A_k$ given the question $Q_k$:

$$a_I^{(k)} = \log P_{vqa}(A_k \mid Q_k, I), \qquad a_C^{(k)} = \log P_{cap}(A_k \mid Q_k, C) \qquad (6)$$

For example, if the pairs are ($Q_1$: What is the person riding? $A_1$: Motorcycle) and ($Q_2$: What is the man wearing on his head? $A_2$: Helmet), then $a_I^{(1)}$ and $a_C^{(1)}$ verify whether the person in image $I$ and caption $C$, respectively, is riding a motorcycle. At the same time, $a_I^{(2)}$ and $a_C^{(2)}$ verify whether the man in $I$ and $C$ is wearing a helmet. Figure 1 shows another example.

In cases where there is no man in the image or the caption, i.e. the premise of $Q_2$ is not met, $a_I^{(2)}$ and $a_C^{(2)}$ may still reflect whether, if there were a man, i.e. if the premise of $Q_2$ were fulfilled, he could plausibly be wearing a helmet. In other words, even if there is no person present in the image or mentioned in the caption, the model may still assess the plausibility of a man wearing a helmet or a motorcycle being present. This imagination beyond what is depicted in the image or caption can be helpful in providing additional information when reasoning about the compatibility between an image and a caption. We show qualitative examples of this imagination or plausibility assessment for selected $(Q_k, A_k)$ pairs in Figure 2, where we sort images and captions based on $P_{vqa}(A_k \mid Q_k, I)$ and $P_{cap}(A_k \mid Q_k, C)$. Indeed, scenes where the corresponding fact (e.g., man is wearing a helmet) is more likely to be plausible are scored higher. (Nonetheless, checking whether a question applies to the target image and caption is also desirable. Contemporary work [42] has looked at modeling question relevance, which can be incorporated in our approach as an additional feature.)

Based on the activation vectors $a_I$ and $a_C$, we then compute the VQA-grounded vector representations $u_I$ and $u_C$ for $I$ and $C$ by projecting $a_I$ and $a_C$ into a $D$-dimensional vector embedding space:

$$u_I = \sigma(W_u a_I + b_u), \qquad u_C = \sigma(W_c a_C + b_c) \qquad (7)$$

Here $\sigma(\cdot)$ is a non-linear activation function.
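
As a concrete illustration, the following sketch computes the VQA-grounded activation vectors (Eq. 6) and their projections (Eq. 7), assuming `log_prob_fn` wraps a trained VQA or VQA-Caption model and returns log answer probabilities (as in the earlier sketch); the function and module names, batching, and default sizes are our own.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def vqa_grounded_activations(log_prob_fn, x, question_encodings, answer_ids):
    """Eq. 6: one log-probability per (Q_k, A_k) fact for an image or caption encoding x."""
    dims = []
    for q, a in zip(question_encodings, answer_ids):
        log_probs = log_prob_fn(x, q)        # log P(. | Q_k, x) over the answer vocabulary
        dims.append(log_probs[a])            # keep log P(A_k | Q_k, x)
    return torch.stack(dims)                 # N-dimensional activation vector (a_I or a_C)

class VQAGroundedProjection(nn.Module):
    """Eq. 7: project the N-d activation vector into a D-d embedding space."""

    def __init__(self, num_facts=3000, embed_dim=4096):
        super().__init__()
        self.proj = nn.Linear(num_facts, embed_dim)

    def forward(self, a):
        return F.relu(self.proj(a))          # sigma is a non-linearity (ReLU here)
```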

By verifying question-answer pairs on image $I$ and caption $C$ and computing vector representations on top of them, the VQA-grounded representations $u_I$ and $u_C$ explicitly project the image and caption into VQA space to utilize knowledge in the VQA corpora. However, this comes at the cost of losing information such as the sentence structure of the caption and image saliency, which can also be important for image-caption ranking. As a result, we find that VQA-grounded representations are most effective when they are combined with baseline VQA-agnostic models. We therefore propose two strategies for fusing VQA-grounded representations with baseline VQA-agnostic models: combining their prediction scores, i.e. score-level fusion (Figure 3, left), and combining their representations, i.e. representation-level fusion (Figure 3, right).

4.2 Score-level fusion

A simple strategy to combine our VQA-grounded model with a VQA-agnostic image-ranking model is to combine them at the score level. Given image $I$ and caption $C$, we first compute the VQA-grounded score $s_{vqa}(I, C) = u_I \cdot u_C$ as the dot product between the VQA-grounded representations of the image and the caption. We then combine it with the VQA-agnostic scoring function $s_{agnostic}(I, C)$ to get the final scoring function $s(I, C)$:

$$s(I, C) = \beta_1\, s_{vqa}(I, C) + \beta_2\, s_{agnostic}(I, C) \qquad (8)$$

Figure 3: We propose score-level fusion (left) and representation-level fusion (right) to utilize VQA for image-caption ranking. They use VQA and VQA-Caption models as “feature extraction” schemes for images and captions and use those features to construct VQA-grounded representations. The score-level fusion approach combines the scoring functions of a VQA-grounded model and a baseline VQA-agnostic model. The representation-level fusion approach combines VQA-grounded representations and VQA-agnostic representations to produce a VQA-aware scoring function.

We first learn the VQA-grounded model parameters ($W_u$, $W_c$ in Eq. 7) on the image-caption ranking training set, and then learn the combination weights $\beta_1$ and $\beta_2$ on a held-out validation set to avoid overfitting.
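
A minimal sketch of the score-level fusion of Eq. 8 follows; the two-weight linear combination mirrors the description above, and the weight values are placeholders to be fit on the validation set.

```python
def score_level_fusion(u_I, u_C, v_I, v_C, beta_vqa, beta_agnostic):
    """Eq. 8: combine the VQA-grounded and VQA-agnostic scores.

    u_I, u_C: VQA-grounded representations (Eq. 7).
    v_I, v_C: unit-norm VQA-agnostic representations (Eq. 3).
    beta_vqa, beta_agnostic: scalar weights learned on a held-out validation set.
    """
    s_vqa = (u_I * u_C).sum(dim=-1)
    s_agnostic = (v_I * v_C).sum(dim=-1)
    return beta_vqa * s_vqa + beta_agnostic * s_agnostic

# Usage (tensors from the sketches above):
#   s = score_level_fusion(u_I, u_C, v_I, v_C, beta_vqa=0.5, beta_agnostic=1.0)
```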

4.3 Representation-level fusion

An alternative to combining the VQA-agnostic and VQA-grounded representations at the score level is to inject the VQA grounding at the representation level. Given the VQA-agnostic $d$-dimensional image and caption representations $v_I$ and $v_C$ used by the baseline model, we first compute the VQA-grounded representations $u_I$ for image $I$ and $u_C$ for caption $C$ introduced in Section 4.1. They are then combined with the VQA-agnostic representations to produce VQA-aware representations $r_I$ for the image and $r_C$ for the caption, by projecting them into a $D'$-dimensional multi-modal embedding space as follows:

$$r_I = \sigma\big(W_{rI}\,[u_I; v_I] + b_{rI}\big), \qquad r_C = \sigma\big(W_{rC}\,[u_C; v_C] + b_{rC}\big) \qquad (9)$$

The final image-caption ranking score is then

$$s(I, C) = r_I \cdot r_C \qquad (10)$$

In experiments, we jointly learn $W_u$, $W_c$ (for projecting $a_I$ and $a_C$ to the VQA-grounded representations $u_I$, $u_C$) with $W_{rI}$, $W_{rC}$ (for computing the combined VQA-aware representations $r_I$ and $r_C$) on the image-caption ranking training set by optimizing Eq. 2.

Our score-level fusion and representation-level fusion models are implemented as multi-layer neural networks. All activation functions $\sigma$ are ReLU (for speed) and dropout layers [48] are inserted after all layers to avoid overfitting. We set the dimensions of the multi-modal embedding spaces ($D$ for the VQA-grounded representations and $D'$ for the VQA-aware representations) to 4,096 so they are large enough to capture the concepts necessary for image-caption ranking. Optimization hyperparameters are selected on the validation set. We optimize both models using RMSProp with batch size 1,000, at learning rate 1e-5 for score-level fusion and 1e-4 for representation-level fusion. Optimization runs for 100,000 iterations with learning rate decay every 50,000 iterations.
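
For reference, here is a sketch of the representation-level fusion model (Eqs. 9-10), assuming the VQA-grounded and VQA-agnostic representations are concatenated before the projection; the concatenation, default dimensions, and dropout rate are assumptions based on the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RepresentationLevelFusion(nn.Module):
    """Eqs. 9-10: fuse VQA-grounded (u) and VQA-agnostic (v) representations into
    VQA-aware representations r, and score image-caption pairs by dot product."""

    def __init__(self, grounded_dim=4096, agnostic_dim=1024, joint_dim=4096, p_drop=0.5):
        super().__init__()
        self.img_proj = nn.Linear(grounded_dim + agnostic_dim, joint_dim)   # W_rI
        self.cap_proj = nn.Linear(grounded_dim + agnostic_dim, joint_dim)   # W_rC
        self.dropout = nn.Dropout(p_drop)

    def forward(self, u_I, v_I, u_C, v_C):
        r_I = self.dropout(F.relu(self.img_proj(torch.cat([u_I, v_I], dim=-1))))  # Eq. 9
        r_C = self.dropout(F.relu(self.cap_proj(torch.cat([u_C, v_C], dim=-1))))
        return (r_I * r_C).sum(dim=-1)                                            # Eq. 10
```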

Our main results in Section 5.1 use $N = 3{,}000$ question-answer pairs, sampled as 3 questions per image (with their ground truth answers with respect to their original images) from 1,000 random VQA training images. We discuss using different numbers of question-answer pairs and different strategies for selecting the question-answer pairs in Section 5.4.
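
For concreteness, a sketch of how such a fact set could be sampled is shown below; the annotation data structure is a hypothetical stand-in for the VQA training annotations.

```python
import random

def sample_facts(vqa_annotations, num_images=1000, questions_per_image=3, seed=0):
    """Sample N = num_images * questions_per_image (question, answer) facts.

    vqa_annotations: dict mapping image_id -> list of (question, ground_truth_answer)
    pairs from the VQA training set. The returned facts define the dimensions of
    the VQA-grounded activation vectors (Eq. 6).
    """
    rng = random.Random(seed)
    image_ids = rng.sample(sorted(vqa_annotations), num_images)
    facts = []
    for image_id in image_ids:
        facts.extend(rng.sample(vqa_annotations[image_id], questions_per_image))
    return facts   # e.g. 1,000 images x 3 questions per image = 3,000 facts
```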

5 Experiments and Results

We report results on MSCOCO [32], which is the largest available image-caption ranking dataset. Following the splits of [23, 24] we use all 82,783 MSCOCO train images with 5 captions per image as our train set, 413,915 image-caption pairs in total. Note that this is the same split as the train split in the VQA dataset [1] that we used to train our VQA and VQA-Caption models. The validation set consists of 1,000 images sampled from the original MSCOCO validation images. The test set consists of 5,000 images sampled from the original MSCOCO validation images that were not in the image-caption ranking validation set. As in the train set, there are 5 captions available for each validation and test image.

We follow the evaluation metric of [23] and report caption and image retrieval performances on the first 1,000 test images following [23, 25, 38, 34, 24]. Given a test image, the caption retrieval task is to find any 1 out of its 5 captions from all 5,000 test captions. Given a test caption, the image retrieval task is to find its original image from all 1,000 test images. We report recall@(1, 5, 10): the fraction of times a correct item was found among the top (1, 5, 10) predictions.
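
As an illustration of this protocol, the following NumPy sketch computes recall@K for image retrieval, assuming a score matrix with images on the rows and the 5 captions of image i occupying columns 5i..5i+4; this layout is our assumption.

```python
import numpy as np

def image_retrieval_recall(scores, ks=(1, 5, 10)):
    """scores[i, j] = s(image_i, caption_j) for 1,000 images and 5,000 captions,
    with caption j belonging to image j // 5. Returns recall@K: the fraction of
    query captions whose true image is ranked within the top K."""
    num_images, num_captions = scores.shape
    gt_image = np.arange(num_captions) // 5
    ranks = np.empty(num_captions, dtype=int)
    for j in range(num_captions):
        order = np.argsort(-scores[:, j])                   # best-scoring image first
        ranks[j] = np.nonzero(order == gt_image[j])[0][0]   # 0-based rank of the true image
    return {k: float(np.mean(ranks < k)) for k in ks}
```

Caption retrieval is scored analogously, except a query image is counted as correct at rank K if any of its 5 ground truth captions appears among the top K retrieved captions.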

5.1 Image-caption ranking results

Table 1 shows our main results on MSCOCO. Our score-level fusion VQA-aware model using $N = 3{,}000$ question-answer pairs (“N=3000 score-level fusion VQA-aware”) achieves 46.9% caption retrieval recall@1 and 35.8% image retrieval recall@1. This model shows an improvement of 3.5% caption and 4.8% image retrieval recall@1 over the state-of-the-art VQA-agnostic model of [24].

MSCOCO
Approach Caption Retrieval Image Retrieval
R@1 R@5 R@10 R@1 R@5 R@10
Random 0.1 0.5 1.0 0.1 0.5 1.0
DVSA [23] 38.4 69.9 80.5 27.4 60.2 74.8
FV (GMM+HGLMM) [25] 39.4 67.9 80.9 25.1 59.8 76.6
m-RNN-vgg [38] 41.0 73.0 83.5 29.0 42.2 77.0
[34] 42.8 73.1 84.1 32.6 68.6 82.8
Kiros et al. [24] (VQA-agnostic) 43.4 75.7 85.8 31.0 66.7 79.9
N=3000 score-level fusion VQA-grounded only 37.0 67.9 79.4 26.2 60.1 74.3
N=3000 score-level fusion VQA-aware 46.9 78.6 88.9 35.8 70.3 83.6
N=0 representation-level fusion VQA-agnostic 45.8 76.8 86.1 33.6 67.8 81.0
N=3000 representation-level fusion VQA-aware 50.5 80.1 89.7 37.0 70.9 82.9
Table 1: Caption retrieval and image retrieval performances of our models compared to baseline models on MSCOCO image-caption ranking test set. Powered by knowledge in VQA corpora, both our score-level fusion and representation-level fusion VQA-aware approaches outperform state-of-the-art VQA-agnostic models by a large margin

Our representation-level fusion approach adds an additional layer on top of the VQA-agnostic representations, resulting in a deeper model, so we experiment with adding an additional layer to the VQA-agnostic model for a fair comparison. That is equivalent to representation-level fusion using $N = 0$ question-answer pairs (“N=0 representation-level fusion”, i.e. a deeper VQA-agnostic model). Compared with the VQA-agnostic model of [24], adding this additional layer improves performance by 2.4% caption and 2.6% image retrieval recall@1.

By leveraging VQA knowledge, our “N=3000 representation-level fusion VQA-aware” model achieves 50.5% caption retrieval recall@1 and 37.0% image retrieval recall@1, which further improves over the VQA-agnostic representation-level fusion model by 4.7% and 3.4%, respectively. These improvements are consistent with our score-level fusion approach, showing that the VQA corpora consistently provide complementary information for image-caption ranking.

To the best of our knowledge, the representation-level fusion VQA-aware result is the best result to date on MSCOCO image-caption ranking, and it significantly surpasses previous best results by as much as 7.1% in caption retrieval and 4.4% in image retrieval recall@1.

Our VQA-grounded model alone (“N=3000 score-level fusion VQA-grounded only”) achieves 37.0% caption and 26.2% image retrieval recall@1. This indicates that the VQA activations $a_I$ and $a_C$, which evaluate the plausibility of facts (question-answer pairs) in images and captions, are informative representations.

Figure 4 shows qualitative results on image retrieval comparing our approach (N=3000 score-level fusion) with the VQA-agnostic model. By looking at several top retrieved images from our model for the failure case (last column), we find that our model seems to have picked up on a correlation between bats and helmets. It seems to be looking for helmets in retrieved images, while the ground truth image does not have one.

Figure 4: Qualitative image retrieval results of our score-level fusion VQA-aware model (middle) and the VQA-agnostic model (bottom). The true target image is highlighted (green if VQA-aware found it, red if VQA-agnostic found it but VQA-aware did not).

We also experiment with using the hidden activations available in the VQA and VQA-Caption models ($h_{IQ}$ and $h_{CQ}$ in Section 3.2) as image and caption encodings in place of the VQA activations ($a_I$ and $a_C$ in Section 4.1). Using these hidden activations of the VQA models is conceptually similar to using the hidden activations of CNNs pretrained on ImageNet as features [11]. These features achieve 46.8% caption retrieval recall@1 and 35.2% image retrieval recall@1 for score-level fusion, and 49.3% caption retrieval recall@1 and 37.9% image retrieval recall@1 for representation-level fusion, which is on par with our semantic features. This shows that our semantically meaningful features $a_I$ and $a_C$ perform as well as the corresponding non-semantic representations $h_{IQ}$ and $h_{CQ}$ under both score-level fusion and representation-level fusion. Note that such hidden activations may not always be available in different VQA models, and the semantic features have the added benefit of being interpretable (e.g., Figure 2).

5.2 Ablation study

As an ablation study, we compare the following four models: 1) full representation-level fusion: our full N = 3000 representation-level fusion model that includes both image and caption VQA representations; 2) caption-only representation-level fusion: the same representation-level fusion model but using the VQA representation only for the caption ($u_C$), and not for the image; 3) image-only representation-level fusion: the same model but using the VQA representation only for the image ($u_I$), and not for the caption; 4) deeper VQA-agnostic: the N = 0 representation-level fusion model described earlier that does not use VQA representations for either the image or the caption.

Table 2 summarizes the results. We see that incrementally adding more VQA-knowledge improves performance. Both caption-only and image-only models outperform the deeper VQA-agnostic baseline. The full representation-level fusion model which combines both representations yields the best performance.

5.3 The role of VQA and caption annotations

MSCOCO
Approach Caption Retrieval Image Retrieval
R@1 R@5 R@10 R@1 R@5 R@10
Deeper VQA-agnostic 45.8 76.8 86.1 33.6 67.8 81.0
Caption-only representation-level fusion 47.3 77.3 86.6 35.5 69.3 81.9
Image-only representation-level fusion 47.0 80.0 89.6 36.4 70.1 82.3
Full representation-level fusion 50.5 80.1 89.7 37.0 70.9 82.9
Table 2: Ablation study evaluating the gain in performance as more VQA-knowledge is incorporated in the model

In this work we transfer knowledge from one vision-language task (i.e. VQA) to another (i.e. image-caption ranking). However, VQA annotations and caption annotations serve different purposes.

The target language to be retrieved is caption language, and not VQA language. [1] showed qualitatively and quantitatively that the two languages are statistically quite different (in terms of the information contained, and in terms of the nouns, adjectives, verbs, etc. used). As a result, VQA cannot be thought of as providing additional “annotations” for the captioning task. Instead, VQA provides different perspectives/views of the images (and captions). It provides an additional feature representation. To better utilize this representation for an image-caption ranking task, one would still require sufficient ground truth caption annotations for images. In fact, with varying amounts of ground truth (caption) annotations, the VQA-aware representations show improvements in performance across the board. See Figure 5 (left).

A better analogy of our VQA representation is hidden activations (e.g., fc7) from a CNN trained on ImageNet. Having additional ImageNet annotations would improve the fc7 feature. But to map this fc7 feature to captions, one would still require sufficient caption annotations. Conceptually, caption annotations and category labels in ImageNet play two different roles. The former provides ground truth for the target task at hand (image-caption ranking), and having additional annotations for the target application typically helps. The latter helps learn a better image representation (which may provide improvements in a variety of tasks).

5.4 Number of question-answer pairs

Our VQA-grounded representations extract image and caption features based on $N$ question-answer pairs. It is important for there to be enough question-answer pairs to cover the aspects necessary for image-caption ranking. We experiment with varying the number of question-answer pairs (or facts) $N$ for both score-level and representation-level fusion. Figure 5 (right) shows caption and image retrieval performances of our approaches with varying $N$. Performance of both score-level and representation-level fusion approaches improves quickly for small $N$, and then starts to level off as $N$ grows further.

Figure 5: Left: caption retrieval and image retrieval performances of the VQA-agnostic model compared with our score-level fusion VQA-aware model trained using 1 to 5 captions per image. The VQA representations in the VQA-aware model provide consistent performance gains. Right: caption retrieval and image retrieval performances of our score-level fusion and representation-level fusion approaches with varying numbers of question-answer pairs $N$ used for feature extraction.

An alternative to sampling 3 question-answer pairs per image from 1,000 images to get $N = 3{,}000$ pairs is to sample 1 question-answer pair per image from 3,000 images. Sampling multiple pairs from the same image provides correlated pairs, for example (Q: What are these animals? A: Giraffes) and (Q: Would this animal fit in a house? A: No). Using such correlated pairs, the model could potentially better predict whether there is a giraffe in the image by jointly reasoning about whether the animal looks like a giraffe and whether the animal would fit in a house, if the VQA and VQA-Caption models have not already picked up such correlations. In experiments, sampling 3 question-answer pairs per image (correlated pairs) does not significantly outperform sampling 1 question-answer pair per image, which achieves (47.7%, 35.4%) (caption, image) recall@1 using score-level fusion, so we hypothesize that our VQA and VQA-Caption models have already captured such correlations.

6 Conclusion

VQA corpora provide rich multi-modal information that is complementary to knowledge stored in image captioning corpora. In this work we take the novel perspective of viewing VQA as a “feature extraction” module that captures VQA knowledge. We propose two approaches – score-level and representation-level fusion – to integrate this knowledge into an existing image-caption ranking model. We set new state-of-the-art by improving caption retrieval by 7.1% and image retrieval by 4.4% on MSCOCO.

Improved individual modules, i.e., VQA models and VQA-agnostic image-caption ranking models, end-to-end training, and an attention mechanism that selects question-answer pairs (facts) in an image-specific manner may further improve the performance of our approach.

7 Acknowledgment

This work was supported in part by the Allen Distinguished Investigator awards by the Paul G. Allen Family Foundation, a Google Faculty Research Award, a Junior Faculty award by the Institute for Critical Technology and Applied Science (ICTAS) at Virginia Tech, a National Science Foundation CAREER award, an Army Research Office YIP award, and an Office of Naval Research YIP award to D. P. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government or any sponsor.

Appendix

Appendix A VQA Models

Figure 6 illustrates the network architectures of our VQA and VQA-Caption models.

Figure 6: Our VQA and VQA-Caption network architectures. Details of the VQA and VQA-Caption models can be found in our paper.

Appendix B Results on MSCOCO–5K, Flickr8k and Flickr30k

Table 3 shows results on MSCOCO using all 5,000 test images following the protocol of [23]. Retrieving from 5,000 test images is more challenging than retrieving from 1,000 test images so the performances of all models are lower. However, the trends are consistent with results on 1,000 test images reported in the main paper. Our score-level fusion model achieves 22.8% caption retrieval R@1 and 15.5% image retrieval R@1, outperforming the VQA-agnostic model by 4.7% and 2.8%. Our representation-level fusion model achieves 23.5% caption retrieval R@1 and 16.7% image retrieval R@1.

Flickr8k [21] and Flickr30k [58] consist of 8,000 and 30,000 images, respectively, collected from Flickr. Each image in Flickr8k and Flickr30k is annotated with 5 image captions. Following the evaluation protocol of [23] we use 1,000 images for validation, 1,000 images for testing, the rest for training and report recall@(1, 5, 10) for caption retrieval and image retrieval on test.

Table 4 and Table 5 show results on Flickr8k and Flickr30k dataset, respectively. Our VQA-aware model shows consistent improvements over the VQA-agnostic model on both datasets. On Flickr8k our score-level fusion approach achieves 24.3% caption retrieval R@1 and 17.2% image retrieval R@1, which outperforms the VQA-agnostic model by 2.0% and 2.3%. On Flickr30k our score-level fusion approach achieves 33.9% caption retrieval R@1 and 24.9% image retrieval R@1, which outperforms the VQA-agnostic model by 4.1% and 2.9%.

Note that the VQA and VQA-Caption models are trained on MSCOCO, which is a different dataset. Yet, they consistently improve image-caption ranking on Flickr8k and Flickr30k. This shows that our VQA-grounded image and caption representations generalize across datasets. Fine-tuning on these datasets, and incorporating our approach on top of state-of-the-art captioning approaches on these datasets (instead of [24], which is state-of-the-art on MSCOCO but not on Flickr), may further improve our performance.

Both Flickr8k and Flickr30k are smaller compared with the MSCOCO dataset. Our representation-level fusion model overfits to the training sets despite using dropout.

Appendix C Qualitative examples

Fig. 7 shows additional qualitative examples of image retrieval and caption retrieval using our score-level fusion model (VQA-aware) and the baseline VQA-agnostic model (VQA-agnostic).

MSCOCO 5K test images
Approach Caption Retrieval Image Retrieval
R@1 R@5 R@10 R@1 R@5 R@10
Random 0.1 0.5 1.0 0.1 0.5 1.0
DVSA [23] 16.5 39.2 52.0 10.7 29.6 42.2
FV (GMM+HGLMM) [25] 17.3 39.0 50.2 10.8 28.3 40.1
Kiros et al. [24] (VQA-agnostic) 18.1 43.5 56.8 12.7 34.0 47.3
N=3000 score-level fusion VQA-grounded only 15.7 37.9 50.3 11.0 29.5 42.0
N=3000 score-level fusion VQA-aware 22.8 49.8 63.0 15.5 39.1 52.6
N=0 representation-level fusion VQA-agnostic 20.6 47.1 60.3 14.9 37.8 50.9
N=3000 representation-level fusion VQA-aware 23.5 50.7 63.6 16.7 40.5 53.8
Table 3: Results on MSCOCO using all 5,000 test images
Flickr8k
Approach Caption Retrieval Image Retrieval
R@1 R@5 R@10 R@1 R@5 R@10
Random 0.1 0.5 1.0 0.1 0.5 1.0
DVSA [23] 16.5 40.6 54.2 11.8 32.1 43.8
FV (GMM+HGLMM) [25] 31.0 59.3 73.7 21.3 50.0 64.8
m-RNN-AlexNet [38] 14.5 37.2 48.5 11.5 31.0 42.4
[34] 24.8 53.7 67.1 20.3 47.6 61.7
Kiros et al. [24] (VQA-agnostic) 22.3 48.7 59.8 14.9 38.3 51.6
N=3000 score-level fusion VQA-grounded only 10.5 31.5 42.7 7.6 22.8 33.5
N=3000 score-level fusion VQA-aware 24.3 52.2 65.2 17.2 42.8 57.2
Table 4: Results on Flickr8k dataset
Flickr30k
Approach Caption Retrieval Image Retrieval
R@1 R@5 R@10 R@1 R@5 R@10
Random 0.1 0.5 1.0 0.1 0.5 1.0
DVSA [23] 22.2 48.2 61.4 15.2 37.7 50.5
FV (GMM+HGLMM) [25] 35.0 62.0 73.8 25.0 52.7 66.0
RTP (weighted distance) [Plummer et al. 2015] 37.4 63.1 74.3 26.0 56.0 69.3
m-RNN-vgg [38] 35.4 63.8 73.7 22.8 50.7 63.1
[34] 33.6 64.1 74.9 26.2 56.3 69.6
Kiros et al. [24] (VQA-agnostic) 29.8 58.4 70.5 22.0 47.9 59.3
N=3000 score-level fusion VQA-grounded only 17.6 40.5 51.2 12.7 31.9 42.5
N=3000 score-level fusion VQA-aware 33.9 62.5 74.5 24.9 52.6 64.8
Table 5: Results on Flickr30k dataset
Figure 7: Qualitative results of image retrieval and caption retrieval at rank 1, 2 and 3 using our score-level fusion VQA-aware model and the baseline VQA-agnostic model. The true target images and captions are highlighted.

References

  • [1] Antol, S., Agrawal, A., Lu, J., Mitchell, M., Batra, D., Zitnick, C.L., Parikh, D.: VQA: Visual question answering. In: ICCV (2015)
  • [2] Antol, S., Zitnick, C.L., Parikh, D.: Zero-shot learning via visual abstraction. In: ECCV (2014)
  • [3] Berg, T., Belhumeur, P.N.: Poof: Part-based one-vs.-one features for fine-grained categorization, face verification, and attribute estimation. In: CVPR (2013)
  • [4] Bourdev, L., Maji, S., Brox, T., Malik, J.: Detecting people using mutually consistent poselet activations. In: ECCV (2010)
  • [5] Branson, S., Wah, C., Schroff, F., Babenko, B., Welinder, P., Perona, P., Belongie, S.: Visual recognition with humans in the loop. In: ECCV (2010)
  • [6] Chen, X., Lawrence Zitnick, C.: Mind’s eye: A recurrent visual representation for image caption generation. In: CVPR (2015)
  • [7] Cho, K., van Merriënboer, B., Bahdanau, D., Bengio, Y.: On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259 (2014)
  • [8] Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: CVPR (2009)
  • [9] Doersch, C., Gupta, A., Efros, A.A.: Unsupervised visual representation learning by context prediction. In: ICCV (2015)
  • [10] Donahue, J., Grauman, K.: Annotator rationales for visual recognition. In: ICCV (2011)
  • [11] Donahue, J., Jia, Y., Vinyals, O., Hoffman, J., Zhang, N., Tzeng, E., Darrell, T.: Decaf: A deep convolutional activation feature for generic visual recognition. arXiv preprint arXiv:1310.1531 (2013)
  • [12] Elhoseiny, M., Saleh, B., Elgammal, A.: Write a classifier: Zero-shot learning using purely textual descriptions. In: ICCV (2013)
  • [13] Elliott, D., Keller, F.: Comparing automatic evaluation measures for image description. In: Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. pp. 452–457 (2014)
  • [14] Farhadi, A., Hejrati, M., Sadeghi, M.A., Young, P., Rashtchian, C., Hockenmaier, J., Forsyth, D.: Every picture tells a story: Generating sentences from images. In: ECCV (2010)
  • [15] Farhadi, A., Endres, I., Hoiem, D., Forsyth, D.: Describing objects by their attributes. In: CVPR (2009)
  • [16] Fouhey, D.F., Zitnick, C.L.: Predicting object dynamics in scenes. In: CVPR (2014)
  • [17] Gao, H., Mao, J., Zhou, J., Huang, Z., Wang, L., Xu, W.: Are you talking to a machine? dataset and methods for multilingual image question answering. In: NIPS (2015)
  • [18] Gupta, A., Davis, L.S.: Beyond nouns: Exploiting prepositions and comparative adjectives for learning visual classifiers. In: ECCV (2008)
  • [19] Hamrick, J., Battaglia, P., Tenenbaum, J.B.: Internal physics models guide probabilistic judgments about object dynamics. In: Proceedings of the 33rd Annual Meeting of the Cognitive Science Society, Boston, MA (2011)
  • [20] Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural computation 9(8), 1735–1780 (1997)
  • [21] Hodosh, M., Young, P., Hockenmaier, J.: Framing image description as a ranking task: Data, models and evaluation metrics. In: JAIR. pp. 853–899 (2013)
  • [22] Johnson, J., Krishna, R., Stark, M., Li, L.J., Shamma, D., Bernstein, M., Fei-Fei, L.: Image retrieval using scene graphs. In: CVPR (2015)
  • [23] Karpathy, A., Fei-Fei, L.: Deep visual-semantic alignments for generating image descriptions. In: CVPR (2015)
  • [24] Kiros, R., Salakhutdinov, R., Zemel, R.: Unifying visual-semantic embeddings with multimodal neural language models. In: TACL (2015)
  • [25] Klein, B., Lev, G., Sadeh, G., Wolf, L.: Associating neural word embeddings with deep image representations using fisher vectors. In: CVPR (2015)
  • [26] Kovashka, A., Parikh, D., Grauman, K.: Whittlesearch: Image Search with Relative Attribute Feedback. In: CVPR (2012)
  • [27] Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: NIPS (2012)
  • [28] Kulkarni, G., Premraj, V., Dhar, S., Li, S., Choi, Y., Berg, A.C., Berg, T.L.: Baby talk: Understanding and generating simple image descriptions. In: CVPR (2011)
  • [29] Kumar, N., Berg, A.C., Belhumeur, P.N., Nayar, S.K.: Describable visual attributes for face verification and image search. In: IEEE TPAMI (2011)
  • [30] Lampert, C.H., Nickisch, H., Harmeling, S.: Learning to detect unseen object classes by between-class attribute transfer. In: CVPR (2009)
  • [31] Li, L.J., Su, H., Fei-Fei, L., Xing, E.P.: Object bank: A high-level image representation for scene classification & semantic feature sparsification. In: NIPS (2010)
  • [32] Lin, T.Y., Maire, M., Belongie, S., Bourdev, L., Girshick, R., Hays, J., Perona, P., Ramanan, D., Zitnick, C.L., Dollár, P.: Microsoft coco: Common objects in context. In: ECCV (2014)
  • [33] Lin, X., Parikh, D.: Don’t just listen, use your imagination: Leveraging visual common sense for non-visual tasks. In: CVPR (2015)
  • [34] Ma, L., Lu, Z., Li, H.: Learning to answer questions from image using convolutional neural network. arXiv preprint arXiv:1506.00333 (2015)
  • [35] Ma, L., Lu, Z., Shang, L., Li, H.: Multimodal convolutional neural networks for matching image and sentence. In: ICCV (2015)
  • [36] Malinowski, M., Fritz, M.: A multi-world approach to question answering about real-world scenes based on uncertain input. In: NIPS (2014)
  • [37] Malinowski, M., Rohrbach, M., Fritz, M.: Ask your neurons: A neural-based approach to answering questions about images. In: ICCV (2015)
  • [38] Mao, J., Xu, W., Yang, Y., Wang, J., Huang, Z., Yuille, A.: Deep captioning with multimodal recurrent neural networks (m-rnn). In: ICLR (2015)
  • [39] Parikh, D., Grauman, K.: Relative attributes. In: ICCV (2011)
  • [40] Parkash, A., Parikh, D.: Attributes for classifier feedback. In: ECCV (2012)
  • [41] Pirsiavash, H., Vondrick, C., Torralba, A.: Inferring the why in images. CoRR abs/1406.5472 (2014), http://arxiv.org/abs/1406.5472
  • [42] Ray, A., Christie, G., Bansal, M., Batra, D., Parikh, D.: Question relevance in VQA: identifying non-visual and false-premise questions. arXiv preprint arXiv:1606.06622 (2016)
  • [43] Ren, M., Kiros, R., Zemel, R.: Exploring models and data for image question answering. In: NIPS (2015)
  • [44] Sadanand, S., Corso, J.J.: Action bank: A high-level representation of activity in video. In: CVPR (2012)
  • [45] Sadeghi, F., Divvala, S.K., Farhadi, A.: Viske: Visual knowledge extraction and question answering by visual verification of relation phrases. In: CVPR (2015)
  • [46] Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. CoRR abs/1409.1556 (2014)
  • [47] Socher, R., Ganjoo, M., Manning, C.D., Ng, A.: Zero-shot learning through cross-modal transfer. In: NIPS (2013)
  • [48] Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: A simple way to prevent neural networks from overfitting. JMLR (2014)
  • [49] Sutskever, I., Vinyals, O., Le, Q.V.: Sequence to sequence learning with neural networks. In: NIPS (2014)
  • [50] Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: CVPR (2015)
  • [51] Tang, K., Paluri, M., Fei-fei, L., Fergus, R., Bourdev, L.: Improving image classification with location context. In: ICCV (2015)
  • [52] Vedantam, R., Lin, X., Batra, T., Zitnick, C.L., Parikh, D.: Learning common sense through visual abstraction. In: ICCV (2015)
  • [53] Vedantam, R., Zitnick, C.L., Parikh, D.: Cider: Consensus-based image description evaluation. In: CVPR (2015)
  • [54] Vondrick, C., Pirsiavash, H., Torralba, A.: Anticipating the future by watching unlabeled video. arXiv preprint arXiv:1504.08023 (2015)
  • [55] Walker, J., Gupta, A., Hebert, M.: Patch to the future: Unsupervised visual prediction. In: CVPR (2014)
  • [56] Wang, Y., Mori, G.: A discriminative latent model of object classes and attributes. In: ECCV (2010)
  • [57] Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhutdinov, R., Zemel, R., Bengio, Y.: Show, attend and tell: Neural image caption generation with visual attention. In: ICML (2015)
  • [58] Young, P., Lai, A., Hodosh, M., Hockenmaier, J.: From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. In: TACL. vol. 2, pp. 67–78 (2014)
  • [59] Yu, L., Park, E., Berg, A.C., Berg, T.L.: Visual madlibs: Fill in the blank image generation and question answering. arXiv preprint arXiv:1506.00278 (2015)
  • [60] Zhang, N., Farrell, R., Iandola, F., Darrell, T.: Deformable part descriptors for fine-grained recognition and attribute prediction. In: ICCV (2013)
  • [61] Zhang, N., Paluri, M., Ranzato, M., Darrell, T., Bourdev, L.: Panda: Pose aligned networks for deep attribute modeling. In: CVPR (2014)
  • [62] Zheng, B., Zhao, Y., Yu, J., Ikeuchi, K., Zhu, S.C.: Beyond point clouds: Scene understanding by reasoning geometry and physics. In: CVPR (2013)
  • [63] Zhou, B., Tian, Y., Sukhbaatar, S., Szlam, A., Fergus, R.: Simple baseline for visual question answering. arXiv preprint arXiv:1512.02167 (2015)
  • [64] Zhu, Y., Fathi, A., Fei-Fei, L.: Reasoning about object affordances in a knowledge base representation. In: ECCV (2014)
  • [65] Zhu, Y., Zhang, C., Ré, C., Fei-Fei, L.: Building a large-scale multimodal knowledge base for visual question answering. arXiv preprint arXiv:1310.1531 (2013)