Describing Natural Images Containing Novel Objects with Knowledge Guided Assistance

Aditya Mogadala (Karlsruhe Institute of Technology, Karlsruhe, Germany) aditya.mogadala@kit.edu, Umanga Bista (Australian National University, Canberra, Australia) umanga.bista@anu.edu, Lexing Xie (Australian National University, Canberra, Australia) lexing.xie@anu.edu, and Achim Rettinger (Karlsruhe Institute of Technology, Karlsruhe, Germany) rettinger@kit.edu
Abstract.

Images in the wild encapsulate rich knowledge about varied abstract concepts and cannot be sufficiently described with models built only from image-caption pairs containing selected objects. We propose to handle such a task with the guidance of a knowledge base that incorporates many abstract concepts. Our method is a two-step process: we first build a multi-entity-label image recognition model to predict abstract concepts as image labels, and then leverage these labels in the second step as external semantic attention and constrained inference in the caption generation model for describing images that depict unseen/novel objects. Evaluations show that our models outperform most of the prior work for out-of-domain captioning on MSCOCO and are useful for the integration of knowledge and vision in general.

Knowledge Base Semantic Attention, Image Caption Generation, Deep Neural Networks, Images with Novel Objects
copyright: rights retained. CCS concepts: Computing methodologies → Natural language generation; Computing methodologies → Computer vision tasks; Computing methodologies → Neural networks

1. Introduction

Content on the Web is highly heterogeneous and consists mostly of visual and textual modalities. In most cases, these modalities complement each other to illustrate the semantics of concepts and objects. Many approaches have leveraged such multi-view content for grounding textual with visual information or vice versa. Natural language processing (NLP) tasks like monolingual word similarity (Lazaridou et al., 2015) and language learning (Chrupała et al., 2015) were improved by grounding textual with visual information, while grounding visual with textual information has improved existing computer vision (CV) tasks such as image annotation (Srivastava and Salakhutdinov, 2012). These tasks can be further subdivided into two broad categories: the first combines the semantics of multiple modalities into a common representation for representational tasks such as the generation of descriptions for images or videos (Venugopalan et al., 2014; Vinyals et al., 2017) and visual question answering (Antol et al., 2015), while the second leverages cross-modal semantics to identify the relationship between visual and textual content for attaining referential grounding (Boleda et al., 2017).

Several approaches have been proposed to solve varied tasks separately with methods from both categories. However, procedures from both categories can benefit by complementing each other. For example, many approaches for the representational task of generating captions for images are inspired by the encoder-decoder architecture or its variations with an attention mechanism (Bahdanau et al., 2014), and learn their models using only (visual) image-(textual) caption parallel corpora. This causes the models to fail in describing images which contain unseen/novel objects and concepts that are not part of the parallel captions. Also, the vocabulary is limited to frequent words in the captions and often fails to incorporate rare or infrequent words. Given such challenges, we strive towards a solution that addresses this bottleneck in the aforementioned representational task.

Usage of structured information provided by a knowledge base (KB) (Lehmann et al., 2015) has been shown to assist several textual tasks such as question answering over structured data (Bordes et al., 2014), language modeling (Ahn et al., 2016), and generation of factoid questions (Serban et al., 2016). Our hypothesis is that caption generation for images containing unseen/novel objects can significantly benefit from employing structured information (henceforth called knowledge) provided by a KB.

Thus, in this paper, we aim to address the task of generating captions that include unseen visual objects in images with our proposed solution, termed knowledge guided assistance (KGA). The aim of KGA is to operate as referential grounding by providing external semantic attention during training and also to work as a dynamic constraint during inference of a caption generation model. In particular, KGA assists in generating captions for unseen/novel objects in images which lack parallel captions. This makes KGA differ from earlier approaches that perform a similar task, such as deep compositional captioning (DCC) (Hendricks et al., 2016) and the novel object captioner (NOC) (Venugopalan et al., 2017), by not depending solely on corpus-specific word semantics for next-word prediction in a caption generation model. Also, when compared with constrained beam search (CBS) (Anderson et al., 2017), KGA incorporates more information from the textual caption data by not constraining solely on image tags. In this respect, KGA is closer to LSTM-C (Yao et al., 2017), which uses object classifiers and a copying mechanism, but KGA diverges by using an attention mechanism and dynamic weight transfer as opposed to word copying. Figure 1 presents an overview, and the main contributions of this paper are summarized as follows:

Figure 1. Intention of KGA for the representational task of caption generation for images with unseen/novel objects.
  • We designed a novel approach to improve the representational task of caption generation including unseen/novel visual objects with the assistance from a knowledge base.

  • We created a multi-label image classifier for grounding depicted visual objects to knowledge base entities.

  • We conducted an extensive experimental evaluation showing the effectiveness of KGA.

2. Related Work

Our related work can be drawn from many closely aligned areas.

2.1. Grounding Natural Language in Images

The grounding of natural language in images is employed to comprehend objects and their relationships. Flickr30k Entities (Plummer et al., 2015) is one such approach; it augments the Flickr30k dataset images with bounding boxes using all noun phrases present in their parallel textual descriptions. We also leveraged textual descriptions for grounding in images, but to explicitly relate objects to their knowledge base entities. Other approaches (Fereshteh et al., 2015; Johnson et al., 2015) also tried to relate knowledge to images, but not by explicitly linking it to a KB; rather, they extract visual knowledge to support tasks such as question answering and image retrieval.

2.2. Attention Mechanism in Caption Generation

Initially, the attention mechanism was applied to tasks such as image caption generation (Xu et al., 2015) in two different forms, i.e. soft and hard attention. A recent improvement is adaptive attention with a visual sentinel (Lu et al., 2016), which identifies when to look inside an image for cues rather than doing so at every step. As both of these approaches look at the entire image to add global context, region-based attention (Jin et al., 2015) was proposed to mimic visual perception, where attention shifting among visual regions imposes a thread of visual ordering. Deviating from these visual-feature-centric approaches, attribute-based attention (You et al., 2016) extracts attributes from an image and uses them as input vectors, which resembles our approach.

2.3. Dealing with Rare/OoV Words

Usage of an external vocabulary or structured data is becoming prominent in many neural network models. The goal of these approaches is to copy information from external sources whenever the neural network fails to predict with sufficient confidence. Recently, a neural knowledge language model (Ahn et al., 2016) was proposed to improve language modeling with external structured data for tasks that depend on entities, and was also extended to text generation (Lebret et al., 2016). Some approaches (Gu et al., 2016) have used a copying mechanism to deal with out-of-vocabulary (OoV) words, while others adopted a pointing mechanism (Merity et al., 2016). Our approach for constrained inference falls in line with such approaches, but prefers to enhance neural model weights rather than copy directly.

3. Describing Images with Novel Objects Using Knowledge Guided Assistance (KGA)

In this section, we present our model for generating captions for images containing unseen/novel objects with support from KGA. The core goal of KGA is to introduce external semantic attention (ESA) into the learning of caption generation models; KGA also works as a constraint for transferring learned weights between seen and unseen semantic and word image labels during inference.

3.1. Caption Generation Model

Our image caption generation model (henceforth, KGA-CGM) combines three important components: a language model pre-trained on unpaired textual corpora, external semantic attention (ESA) and image features with a textual (T), semantic (S) and visual (V) layer (i.e. TSV layer) for predicting the next word in the sequence when learned using image-caption pairs. In the following, we present each of these components separately while Figure 2 presents the overall architecture of KGA-CGM.

Figure 2. Our caption generation model (KGA-CGM) built with three components: a language model implemented with a 2-layer forward LSTM, where L1-F and L2-F represent layer-1 and layer-2 respectively; a multi-word-label classifier to generate image visual features; and a multi-entity-label classifier to support ESA. $w_{t}$ represents the input caption word, $c_{t}$ the semantic attention, $P_{t}$ the output probability distribution over all words and $\hat{w}_{t}$ the predicted word at each time step $t$. BOS and EOS represent the special beginning and end of sentence tokens respectively.

3.1.1. Language Model

This component is crucial for transferring sentence structure to unseen visual objects. The language model is implemented with two long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) layers to predict the next word given the previous words in a sentence. If $w_{t}$ represents the input to the forward LSTM of layer-1 for capturing forward input sequences into hidden sequence vectors ($h^{1}_{t}$, $t = 1, \ldots, T$), where $T$ is the final time step, then the encoding of input word sequences into hidden layer-1 sequences at each time step $t$ is achieved as follows:

$h^{1}_{t} = \mathrm{LSTM}_{1}(w_{t}, h^{1}_{t-1})$ (1)

Similarly, layer-1 hidden sequences supplied as input to layer-2 are encoded as follows:

$h^{2}_{t} = \mathrm{LSTM}_{2}(h^{1}_{t}, h^{2}_{t-1})$ (2)

where $\mathrm{LSTM}_{1}$ and $\mathrm{LSTM}_{2}$ denote the LSTM transition functions with their respective hidden layer parameters. Finally, the encoded hidden sequence ($h^{2}_{t}$) at time step $t$ is then used for predicting the probability distribution of the next word, given by Equation 3.

$P(w_{t+1} \mid w_{1:t}) = \mathrm{softmax}(W_{o} h^{2}_{t} + b_{o})$ (3)

The softmax layer is only used while training with unpaired textual corpora and is not used when learning with image captions.
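To make the structure concrete, the following is a minimal PyTorch sketch of such a two-layer forward LSTM language model; it is an illustrative re-implementation under our notational assumptions, not the authors' code, and names such as `LanguageModel` and `vocab_size` are hypothetical.

```python
import torch
import torch.nn as nn

class LanguageModel(nn.Module):
    """Two-layer forward LSTM that predicts the next word from the previous words."""
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm1 = nn.LSTM(embed_dim, hidden_dim, batch_first=True)   # layer-1 (L1-F)
        self.lstm2 = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)  # layer-2 (L2-F)
        self.out = nn.Linear(hidden_dim, vocab_size)  # softmax layer, used only for unpaired-text pre-training

    def forward(self, words):
        x = self.embed(words)        # (batch, T, embed_dim)
        h1, _ = self.lstm1(x)        # Eq. (1): hidden layer-1 sequences
        h2, _ = self.lstm2(h1)       # Eq. (2): hidden layer-2 sequences
        logits = self.out(h2)        # Eq. (3): next-word scores before the softmax
        return h2, logits

# Pre-training on unpaired text: minimise the cross-entropy of the next word.
lm = LanguageModel(vocab_size=10000)
words = torch.randint(0, 10000, (4, 12))   # toy batch of token ids
h2, logits = lm(words)
loss = nn.CrossEntropyLoss()(logits[:, :-1].reshape(-1, 10000), words[:, 1:].reshape(-1))
```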

3.1.2. External Semantic Attention (ESA)

Attention mechanism was first introduced by Bahdanau et al. (2014) in an encoder-decoder architecture. It was particularly useful for dynamically changing context while decoding. Later, the attention mechanism was extended to other tasks such as speech recognition (Chorowski et al., 2015) and multimodal representational tasks. A feasible case of multimodal representational task is image caption generation, where attention is perceived from two different aspects i.e. visual attention (Xu et al., 2015) and attribute attention (You et al., 2016). Visual attention leveraged spatial image features, while attribute attention leveraged attributes obtained from an image.

Our objective with ESA is similar to attribute attention, but we leverage entity annotations as semantic labels obtained using a multi-entity-label image classifier (discussed in a later section). The entity annotations obtained are analogous to image patches and attributes.

In formal terms, if $e_{i}$ is an entity annotation label, $a_{i}$ its entity annotation vector among the set of entity annotation vectors $A = \{a_{1}, \ldots, a_{L}\}$, and $\beta_{ti}$ the attention weight of $a_{i}$ at time step $t$, then $\beta_{ti}$ is calculated at each time step using Equation 4.

$\beta_{ti} = \frac{\exp(O_{ti})}{\sum_{j=1}^{L}\exp(O_{tj})}$ (4)
$O_{ti} = \mathrm{score}(h^{2}_{t}, a_{i})$ (5)

where $O_{ti}$ is given by Equation 5 and $\mathrm{score}(\cdot)$ represents the scoring function, which conditions on the hidden state ($h^{2}_{t}$) of the caption language model. It can be observed that the scoring function is crucial for deciding the attention weights. The relevance of the hidden state with respect to each entity annotation is calculated using Equation 6.

$\mathrm{score}(h^{2}_{t}, a_{i}) = h^{2\top}_{t} W_{b}\, a_{i}$ (6)

where $W_{b}$ is a bilinear parameter matrix. Once the attention weights are calculated, the soft-attention-weighted annotation vector $c_{t}$, which is a dynamic representation of the caption context at time step $t$, is given by Equation 7:

$c_{t} = \sum_{i=1}^{L} \beta_{ti}\, a_{i}$ (7)

Here, $c_{t}$ shares the dimensionality of the entity annotation vectors and $L$ represents the cardinality of entity class annotations per image-caption pair instance.
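A numpy sketch of Equations 4-7 for a single time step is given below; the variable names and the softmax stabilization are illustrative choices, not the authors' code.

```python
import numpy as np

def esa_context(h_t, A, W_b):
    """External semantic attention for one time step.
    h_t : (H,)   hidden state of the caption language model
    A   : (L, E) entity-label annotation vectors (one row per predicted entity)
    W_b : (H, E) bilinear parameter matrix
    Returns the attention weights (Eq. 4-6) and the context vector c_t (Eq. 7)."""
    scores = A @ W_b.T @ h_t              # Eq. (6): bilinear relevance of h_t to each annotation
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()              # Eq. (4)-(5): softmax over the L entity annotations
    c_t = weights @ A                     # Eq. (7): soft-attention weighted annotation vector
    return weights, c_t

# Toy usage with hidden size H = 512, 500-d entity-label embeddings (E = 500) and L = 3 labels.
rng = np.random.default_rng(0)
beta, c_t = esa_context(rng.normal(size=512), rng.normal(size=(3, 500)), rng.normal(size=(512, 500)))
```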

3.1.3. Image Features

The images aligned with captions are used to extract visual features using the multi-word-label image classifier (discussed in later sections). To be consistent with other approaches (Hendricks et al., 2016; Venugopalan et al., 2017) and for a fair comparison, our visual feature vector ($v$) also has each index corresponding to the probability of a word-label annotation occurring in the image.

3.1.4. TSV layer

Once the output from all components is acquired, the TSV layer is employed to integrate their features, i.e. the textual ($h^{2}_{t}$), semantic ($c_{t}$) and visual ($v$) features yielded by the language model, ESA and the image respectively. Thus, TSV acts as a transformation layer that molds three different feature spaces into a single common space for predicting the next word in the sequence.

If $h^{2}_{t}$, $c_{t}$ and $v$ represent the vectors acquired at each time step from the language model, ESA and the image respectively, then the integration at the TSV layer of KGA-CGM is given by Equation 8.

$p_{t} = W_{T} h^{2}_{t} + W_{S} c_{t} + W_{V} v$ (8)

where $W_{T}$, $W_{S}$ and $W_{V}$ are linear conversion matrices whose output dimension is $|V|$, the vocabulary size of the image-caption pair training dataset.
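A small numpy sketch of Equation 8 follows, using the dimensions reported later in the paper (|V| = 8802 caption words, 471 word-labels, 500-dimensional entity embeddings, 512 hidden units); the function name and random inputs are illustrative.

```python
import numpy as np

def tsv_layer(h_t, c_t, v, W_T, W_S, W_V):
    """Eq. (8): merge Textual, Semantic and Visual features into one score vector p_t
    over the caption vocabulary of size |V|.
    h_t : (H,) language-model hidden state      W_T : (|V|, H)
    c_t : (E,) ESA context vector               W_S : (|V|, E)
    v   : (K,) per-word-label probabilities     W_V : (|V|, K)"""
    return W_T @ h_t + W_S @ c_t + W_V @ v

# Toy shapes: |V| = 8802, H = 512, E = 500, K = 471.
rng = np.random.default_rng(0)
p_t = tsv_layer(rng.normal(size=512), rng.normal(size=500), rng.normal(size=471),
                rng.normal(size=(8802, 512)), rng.normal(size=(8802, 500)), rng.normal(size=(8802, 471)))
```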

3.1.5. Word Prediction

The output $p_{t}$ from the TSV layer at each time step $t$ is further used for predicting the next word in the sequence using a softmax layer, given by Equation 9.

$P_{t} = \mathrm{softmax}(p_{t})$ (9)

3.2. KGA-CGM Training

To learn the parameters of KGA-CGM, we first freeze the parameters of the language model trained on unpaired textual corpora, thus enabling only those parameters emerging from the ESA and TSV layers, such as $W_{b}$, $W_{T}$, $W_{S}$ and $W_{V}$, to be learned with image-caption pairs. KGA-CGM is then trained to optimize the cost function that minimizes the sum of the negative log-likelihoods of the appropriate word at each time step, given by Equation 10.

$-\sum_{n=1}^{N}\sum_{t=1}^{T^{(n)}} \log P_{t}\big(w^{(n)}_{t}\big)$ (10)

where $T^{(n)}$ represents the length of the sentence (i.e. caption), including the beginning of sentence (BOS) and end of sentence (EOS) tokens, of the $n$-th training sample, and $N$ is the number of samples used for training.
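The training regime can be sketched as follows in PyTorch: the language-model weights are frozen so only the remaining parameters receive gradients, and the per-caption loss follows Equation 10. The LSTM stand-in and function names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Stand-in for the pre-trained language model; its weights stay frozen so that only
# the ESA and TSV parameters (e.g. W_b, W_T, W_S, W_V) are updated on image-caption pairs.
lm = nn.LSTM(256, 512, num_layers=2, batch_first=True)
for p in lm.parameters():
    p.requires_grad = False

def caption_nll(log_probs, caption):
    """Eq. (10) for a single sample: sum over time steps of the negative
    log-likelihood of the reference word.
    log_probs : (T, |V|) log-softmax outputs of KGA-CGM for one caption
    caption   : (T,)     reference word ids, including BOS and EOS"""
    return -log_probs[torch.arange(caption.size(0)), caption].sum()
```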

3.3. KGA-CGM Constrained Inference

Inference is not as straightforward as in earlier image caption generation approaches (Vinyals et al., 2017). Unseen/novel image objects have no parallel captions throughout training, hence they will never be generated in a caption during inference. Thus, unseen/novel objects require guidance from similar objects or external sources during inference. Earlier approaches such as DCC (Hendricks et al., 2016) have leveraged similar concepts (i.e. image word-labels) to transfer weights between seen and unseen word-labels. However, similar labels are found only using word embeddings of textual corpora and are not constrained on images. This obstructs the view from an image, leading to spurious results.

We resolve such issues by constraining the weight transfer between seen and unseen image labels (i.e. both semantic and word labels) with help from KGA. As a first step, we identify the closest similar word-label of each unseen object using their Glove embeddings (Pennington et al., 2014) learned on unpaired textual corpora. Then, for transferring weights between seen and unseen image labels, we perform dynamic weight transfer during test-image caption generation with the help of the entity annotation semantic labels provided by the multi-entity-label image classifier. Whenever the word predicted by our KGA-CGM model is the closest similar word of an unseen object, the unseen object is checked for its presence among the image's semantic labels. If the unseen object is present, a direct transfer of weights is performed between seen and unseen labels with $W_{S}[\text{unseen},:] = W_{S}[\text{closest},:]$ and $W_{V}[\text{unseen},:] = W_{V}[\text{closest},:]$, and then the $[\text{unseen},\text{closest}]$ and $[\text{closest},\text{unseen}]$ entries of both matrices are set to 0 to remove mutual dependencies of their presence in an image. For the next test image, the weights are set back to their initial states. Beam search is used to consider the $b$ best sentences at time step $t$ to identify the sentence at the next time step. In this research, we perform experiments with $b=1$ and $b=3$.
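A compact sketch of this weight-transfer rule is given below; it assumes the rows of $W_{S}$ and $W_{V}$ are indexed by caption-vocabulary words and the columns by image labels, with integer indices for the unseen object and its closest seen label. This is our reading of the procedure rather than the authors' code.

```python
import numpy as np

def constrained_transfer(W_S, W_V, unseen, closest, predicted, image_labels):
    """Dynamic weight transfer at inference (sketch). Copies of the matrices are
    returned so the original weights can be reused unchanged for the next test image.
    unseen, closest : integer indices of the unseen object and its closest seen label
    predicted       : index of the word just predicted by KGA-CGM
    image_labels    : set of label indices predicted by the multi-entity-label classifier"""
    W_S, W_V = W_S.copy(), W_V.copy()
    if predicted == closest and unseen in image_labels:
        for W in (W_S, W_V):
            W[unseen, :] = W[closest, :]                   # transfer seen -> unseen weights
            W[unseen, closest] = W[closest, unseen] = 0.0  # drop mutual dependencies
    return W_S, W_V
```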

4. Learning Multi-Label Image Classifiers

It can be perceived from the earlier sections that the important constituents influencing KGA are the image semantic labels and the image features. Image features embody the objects/actions/scenes identified in an image, while semantic labels provide the entity class labels used for external semantic attention. In this section, we present the approach used to extract both image features and semantic labels.

4.1. Multi-Word-label Image Classifier

To extract image features, emulating Hendricks et al. (2016), a multi-word-label classifier is built using the captions aligned to an image by extracting part-of-speech (POS) tags such as nouns, verbs and adjectives for each word. For example, the caption "A young child brushes his teeth at the sink" contains word-labels such as "young (JJ)", "child (NN)", "teeth (NN)" etc., which represent abstract concepts in the image. An image classifier is then trained with multiple word-labels using a sigmoid cross-entropy loss by fine-tuning VGG-16 (Simonyan and Zisserman, 2014) pre-trained on the training part of ILSVRC-2012.
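As an illustration of this labelling step, the POS-based filtering could be done with NLTK as sketched below; the function name and the retained tag prefixes are assumptions for illustration, since the paper does not specify the tagger.

```python
import nltk  # requires the 'punkt' and 'averaged_perceptron_tagger' resources

def word_labels(caption):
    """Keep nouns, verbs and adjectives of a caption as word-labels
    for the multi-word-label classifier."""
    tags = nltk.pos_tag(nltk.word_tokenize(caption.lower()))
    keep = ("NN", "VB", "JJ")   # noun / verb / adjective tag prefixes
    return sorted({word for word, tag in tags if tag.startswith(keep)})

print(word_labels("A young child brushes his teeth at the sink"))
# e.g. ['brushes', 'child', 'sink', 'teeth', 'young']
```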

4.2. Multi-Entity-label Image Classifier

To extract semantic labels, which are analogous to the word-labels, a multi-entity-label classifier is built with entity labels attained from a knowledge base annotation tool such as DBpedia Spotlight (https://github.com/dbpedia-spotlight/). Considering the caption presented in the previous section, the entities extracted from it are "Brush" (http://dbpedia.org/resource/Brush) and "Tooth" (http://dbpedia.org/resource/Tooth), which are treated as semantic labels. An image classifier is then trained with multiple entity-labels using a sigmoid cross-entropy loss by fine-tuning VGG-16 (Simonyan and Zisserman, 2014) pre-trained on the training part of ILSVRC-2012.
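The entity-label extraction step can be sketched with a call to the public DBpedia Spotlight REST endpoint; the endpoint URL, confidence threshold and response handling below are assumptions for illustration rather than the exact configuration used in the paper.

```python
import requests

def entity_labels(caption, confidence=0.5):
    """Query DBpedia Spotlight for entity annotations of a caption and keep the
    resource name (e.g. 'Brush', 'Tooth') as the semantic label."""
    resp = requests.get(
        "https://api.dbpedia-spotlight.org/en/annotate",
        params={"text": caption, "confidence": confidence},
        headers={"Accept": "application/json"},
        timeout=10,
    )
    resp.raise_for_status()
    resources = resp.json().get("Resources", [])
    return sorted({r["@URI"].rsplit("/", 1)[-1] for r in resources})

print(entity_labels("A young child brushes his teeth at the sink"))
```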

5. Experimental Setup

In this section, we describe the experimental setup employed for our experiments.

5.1. Resources and Datasets

Our approaches depend on several resources such as toolkits, while datasets are utilized to conduct the evaluations. In the following, the resources, datasets and evaluation measures are presented.

5.1.1. Knowledge Bases (KBs)

There are several openly available KBs such as DBpedia (http://wiki.dbpedia.org/), Wikidata (https://www.wikidata.org/wiki/Wikidata:Main_Page), and YAGO (http://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/yago-naga/yago/downloads/) which cover general knowledge about entities and their relationships. We choose DBpedia as our KB for entity annotation, as it is one of the most extensively used resources for semantic annotation and disambiguation (Lehmann et al., 2015).

5.1.2. Unseen Objects in Image-Caption Dataset

To evaluate KGA-CGM, we use the subset of the MSCOCO (Lin et al., 2014) dataset proposed by Hendricks et al. (2016). The subset is obtained by clustering the 80 image object category labels into 8 clusters and then selecting one object from each cluster to be held out from the training set. The training set therefore contains no images or sentences of the 8 held-out objects bottle, bus, couch, microwave, pizza, racket, suitcase and zebra, leaving 70,194 image-caption pairs for training. The validation set of 40,504 image-caption pairs is split into 20,252 pairs each for testing and validation. The goal of KGA-CGM is then to generate captions for those test images which contain these 8 unseen object categories.
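For illustration, the held-out training split can be approximated by filtering out MSCOCO training images whose reference captions mention any of the eight objects; the filtering criterion sketched below is an assumption, whereas Hendricks et al. (2016) define the exact protocol.

```python
# Drop every training image whose captions mention one of the eight held-out objects.
HELD_OUT = {"bottle", "bus", "couch", "microwave", "pizza", "racket", "suitcase", "zebra"}

def keep_for_training(captions):
    """captions: list of reference caption strings for one image."""
    words = {w.strip(".,").lower() for caption in captions for w in caption.split()}
    return not (words & HELD_OUT)
```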

5.2. Evaluation Measures

To evaluate the representational task of caption generation, we use the same evaluation metrics as earlier approaches (Hendricks et al., 2016; Venugopalan et al., 2017; Yao et al., 2017), namely METEOR, and also SPICE (Anderson et al., 2017), to check the effectiveness of the generated captions. The CIDEr (Vedantam et al., 2015) metric is not used, as it requires calculating inverse document frequency across the entire test set rather than just the unseen-object subsets. The F1-score is also calculated to measure the presence of unseen objects in the generated captions when compared against reference captions.
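The F1 computation can be approximated as follows; this sketch assumes an image counts as containing the object when the object word appears in its reference captions, while the exact protocol follows Hendricks et al. (2016).

```python
def unseen_object_f1(generated, references, obj):
    """F1 for one held-out object (sketch): a prediction is positive when the object
    word appears in the generated caption, and the ground truth is positive when it
    appears in any reference caption of that image."""
    tp = fp = fn = 0
    for gen, refs in zip(generated, references):
        in_gen = obj in gen.lower().split()
        in_ref = any(obj in r.lower().split() for r in refs)
        tp += in_gen and in_ref
        fp += in_gen and not in_ref
        fn += in_ref and not in_gen
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```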

6. Experiments

The experiments are conducted to evaluate the efficacy of the proposed approaches and their dependencies.

6.1. Implementing Multi-Label Classifiers

The key component of caption generation for unseen image objects is comprehending visual information. To realize this, we re-use an existing image multi-label classifier and also build our own.

6.1.1. Multi-Word-Label Classifier

The principal role of the multi-word-label classifier is to provide image features for caption generation. We use the pre-trained model of Hendricks et al. (2016), trained on the subset of MSCOCO using the approach presented earlier. The extracted image features represent the probabilities of 471 image labels occurring in a given image.

6.1.2. Multi-Entity-label Classifier

The goal of the multi-entity-label classifier is to recognize multiple semantic labels per image. To build the classifier, the MSCOCO training set of 82,783 image-caption pairs is used to extract around 812 unique labels, with an average of 3.2 labels annotated per image. Additionally, features for the images are extracted using different layers, namely pool5, fc6 and fc7, of VGG-16 (Simonyan and Zisserman, 2014) pre-trained on ILSVRC-2012, for fine-tuning with our semantic labels. To understand the contribution of these layers to the accuracy of the classifier, we thoroughly analyzed classifiers built separately using pool5, fc6 and fc7 as the initialization layers before fine-tuning. Our analysis revealed that pool5 features overfit even with regularization.

To address this challenge, we trained classifiers with Caffe (http://caffe.berkeleyvision.org/) by fine-tuning the layers above fc6 and fc7, which gave an improvement in accuracy, as observed in Table 1.

Model                pool5    fc6      fc7
Hyper Parameters
  weight_decay       0.05     0.03     0.01
  base_lr            0.001    0.0003   0.003
  gamma              0.5      0.5      0.33
  stepsize           7.5K     10K      8K
  maxiter            60K      50K      40K
  momentum           0.9      0.9      0.9
  batch_size         256      256      256
Results
  Validation Loss    11.0035  10.1152  10.3372
  Accuracy@12        0.6572   0.7018   0.6868
  Accuracy@K         0.4526   0.4892   0.4778
Table 1. Validation results of different VGG-16 layers. Hyper parameters are used to fine-tune the Caffe VGG-16 model. Accuracy@K is calculated by predicting as many labels as in the ground truth for each image.

The classifier fine-tuned on fc6 features constitutes two fully connected layers of 4096 and 812 neurons above fc6, with the first layer having 50% dropout and a ReLU activation, while the output layer uses a sigmoid activation. Similarly, the classifier fine-tuned on fc7 features has an output layer of 812 neurons with a sigmoid activation. The loss function used during training is sigmoid cross-entropy, while only the sigmoid is used during prediction to yield label probabilities. Figure 3 shows the predictions on the test dataset. It can be observed that fc6 gave the best result, with an accuracy of around 70% for the top-12 and 74.4% for the top-16 predicted labels.

Figure 3. Accuracy of the predicted labels on the test set by Multi Entity-Label Classifier.
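The fc6-based classification head described above can be sketched as follows; the paper uses Caffe, so this PyTorch re-expression is purely illustrative, and the exact placement of dropout relative to ReLU is an assumption.

```python
import torch
import torch.nn as nn

# Head fine-tuned on top of frozen 4096-d VGG-16 fc6 features: one hidden layer with
# ReLU and 50% dropout, then 812 outputs (one per entity label) trained with
# sigmoid cross-entropy.
fc6_head = nn.Sequential(
    nn.Linear(4096, 4096),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(4096, 812),
)
criterion = nn.BCEWithLogitsLoss()   # sigmoid cross-entropy loss used during training

# At prediction time, apply the sigmoid to obtain per-label probabilities.
probs = torch.sigmoid(fc6_head(torch.randn(1, 4096)))
```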

6.2. Entity-Label Embeddings

We described how the labels for the multi-entity-label classifier were obtained with the DBpedia Spotlight entity annotation and disambiguation tool. These labels (i.e. entities) are expected to encapsulate general knowledge (e.g. encyclopedic knowledge) which is interlinked. Earlier approaches (Socher et al., 2013) have transformed such knowledge base entities into embeddings to capture their relational information for tasks such as knowledge base completion. In our research as well, we examine the efficacy of these embeddings for caption generation. Entity embeddings leverage external semantic information to be used for attention. To obtain entity-label embeddings, we adopted the RDF2Vec (Ristoski and Paulheim, 2016) approach and generated 500-dimensional vector representations for all 812 labels used to represent images in the entire MSCOCO dataset.

We further qualitatively evaluate these entity-label embeddings to check their effect on caption generation. Most images are represented with more than one entity-label, thus providing multi-label information for each image. However, directly using their embeddings for ESA can hurt caption generation if the label embeddings are not closely related. To check their relatedness, we compute entity similarity. Table 2 shows the results for the unseen/novel MSCOCO objects.

Unseen Object: Top-5 Closely Related Entities
Bottle: Wine_bottle, Wine_glass, Table_setting, Nap_(textile), Tablecloth
Bus: Truck, Double-decker_bus, Transit_bus, Cargo, Tram
Couch: Pillow, Cupboard, Bathtub, Hair_dryer, Living_room
Microwave: Blender, Oven, Paper_bag, Dishwasher, Refrigerator
Pizza: Pasta, Pepperoni, Salad, Sauce, Grilling
Racket: Ball, Flying_disc, Snowboard, Glove, Cricket_ball
Suitcase: Baggage, Backpack, Hair_dryer, Apron, Bathtub
Zebra: Giraffe, Elephant, Horn_(anatomy), Calf, Ox
Table 2. Top-5 closely related entities of unseen MSCOCO objects

It can be perceived from Table 2 that most of the closely related entities tend to co-occur in an image, as shown with a few examples in the paper. Thus, enhancing the caption generation model with ESA proves to be effective.
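For illustration, nearest-neighbour lists like those in Table 2 can be obtained by ranking the RDF2Vec label vectors by cosine similarity; the function and variable names below are hypothetical, not the tooling used in the paper.

```python
import numpy as np

def top_k_entities(query, embeddings, k=5):
    """Return the k entity labels whose vectors are closest (by cosine similarity)
    to the query label; `embeddings` maps label -> 500-d RDF2Vec vector."""
    q = embeddings[query]
    q = q / np.linalg.norm(q)
    scores = {
        label: float(vec @ q / np.linalg.norm(vec))
        for label, vec in embeddings.items() if label != query
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Toy usage with random vectors standing in for the real RDF2Vec embeddings.
rng = np.random.default_rng(0)
embeddings = {name: rng.normal(size=500) for name in ["Zebra", "Giraffe", "Elephant", "Ox"]}
print(top_k_entities("Zebra", embeddings, k=3))
```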

We also performed a t-SNE visualization of all entity-labels to check how they cluster together. It can be seen from the Figure 4 visualization that closely related objects which occur in the same context cluster close to each other.

Figure 4. t-SNE visualization of the entity-label embeddings.

6.3. Novel Objects Description

In this section, we evaluate KGA-CGM on caption generation for images with unseen/novel objects.

6.3.1. Implementation

One of the important components of the KGA-CGM model is the language model. Even though its weights are fixed during learning, the words in a caption are initially set with pre-trained Glove (Pennington et al., 2014) embeddings of 256 dimensions, while the hidden layer dimensions are set to 512. Information about the other components, i.e. image features and semantic labels, has already been given in earlier sections. KGA-CGM is then trained with the Adam optimizer with gradient clipping (maximum norm of 1.0) for about 1550 epochs. Validation data is used for fine-tuning parameters and model selection.
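A minimal sketch of the described optimizer setup (Adam with gradient-norm clipping at 1.0) follows; the model stand-in, dimensions and function names are illustrative assumptions.

```python
import torch
import torch.nn as nn

model = nn.Linear(512, 8802)   # stands in for the trainable KGA-CGM parameters
optimizer = torch.optim.Adam(model.parameters())

def training_step(loss):
    """One Adam update with gradients clipped to a maximum norm of 1.0."""
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()

# Toy usage with random features and targets.
x, target = torch.randn(8, 512), torch.randint(0, 8802, (8,))
training_step(nn.CrossEntropyLoss()(model(x), target))
```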

6.3.2. Ablation Study

To understand how different aspects of training KGA-CGM influence caption generation for unseen image objects, we perform an ablation study by removing different components of KGA-CGM. Table 3 presents the results obtained.

Model     Beam  METEOR  SPICE  F1
None      1     19.7    11.7   0
Only ESA  1     20.5    12.8   0
Only CI   1     20.1    12.3   39.8
ESA+CI    1     22.2    14.6   54.5
ESA+CI    >1    21.5    13.9   48.9
Table 3. Effect of ESA and constrained inference on KGA-CGM (average over the 8 unseen objects)

It can be noticed that None, which refers to using neither ESA nor constrained inference (CI) in the KGA-CGM model, has an F1 measure of 0. Enabling ESA in this basic caption generation model increases METEOR and SPICE, as observed for Only ESA. However, the F1 measure remains 0 because no weights are transferred between seen and unseen image objects. Alternatively, enabling CI yields a jump in the F1 measure, as seen for Only CI. However, both METEOR and SPICE are lower than for Only ESA due to the missing ESA weights. Enabling both ESA and CI equips our complete KGA-CGM model with semantic attention from the image as well as constrained transfer of weights, providing the highest METEOR and SPICE scores of 22.2 and 14.6 respectively, as observed for ESA+CI. It also increases the F1 measure compared to Only CI. This shows that a coherent and accurately generated caption is important for the presence of an object in the caption. We also analyzed the effect of beam size on KGA-CGM and observed that increasing the beam size during inference causes a drop in all measures. This can be attributed to the usage of terms which are outside the unseen objects' caption vocabulary and are more general to the entire caption dataset.

6.3.3. Quantitative Analysis

We compared our complete KGA-CGM model with other existing models that generate captions for the unseen MSCOCO image objects. For a fair comparison, only those results are compared that used VGG-16 to generate image features. Table 4 shows the comparison of average METEOR, SPICE and F1 scores on all 8 unseen image objects with beam size 1 and greater than 1 (>1). It can be noticed that KGA-CGM with beam size 1 is comparable to other approaches even though it uses a fixed vocabulary and fixed image tags. For example, CBS (Anderson et al., 2017) uses an expanded vocabulary of 21,689 words compared to our 8,802. Also, our word-labels per image are fixed, while CBS uses a varying number of predicted image tags (T1-4). This makes it nondeterministic and can increase uncertainty, as varying tags will either increase or decrease the performance. In Table 5, we also present the individual scores for all 8 unseen objects. Though our average scores are comparable to other approaches, for some of the unseen objects we attain state-of-the-art results. We add more analysis about the important components used in the KGA-CGM model in the Appendix.

Model                            Beam  METEOR  SPICE  F1-score
DCC (Hendricks et al., 2016)     1     21.0    13.4   39.7
NOC (Venugopalan et al., 2017)   1     20.7    -      50.5
KGA-CGM                          1     22.2    14.6   54.5
NOC (Venugopalan et al., 2017)   >1    21.3    -      48.8
LSTM-C (Yao et al., 2017)        >1    23.0    -      55.6
CBS(T4) (Anderson et al., 2017)  >1    23.3    15.9   54.0
KGA-CGM                          >1    21.5    13.9   48.9
Table 4. Average measures over all 8 unseen MSCOCO objects with beam size 1 and greater than 1 (>1)
Metric: F1
Model                            Beam  bottle  bus   couch  microwave  pizza  racket  suitcase  zebra
DCC (Hendricks et al., 2016)     1     4.6     29.8  45.9   28.1       64.6   52.2    13.2      79.9
KGA-CGM                          1     26.4    54.2  42.1   50.9       70.8   75.3    25.6      90.7
NOC (Venugopalan et al., 2017)   >1    17.7    68.7  25.5   24.7       69.3   55.3    39.8      89.0
CBS(T4) (Anderson et al., 2017)  >1    16.3    67.8  48.2   29.7       77.2   57.1    49.9      85.7
LSTM-C (Yao et al., 2017)        >1    29.6    74.4  38.7   27.8       68.1   70.2    44.7      91.4
KGA-CGM                          >1    22.2    42.5  34.4   48.1       69.6   63.1    22.6      88.5
Metric: METEOR
DCC (Hendricks et al., 2016)     1     18.1    21.6  23.1   22.1       22.2   20.3    18.3      22.3
KGA-CGM                          1     21.5    20.3  23.0   22.6       21.4   27.0    18.7      22.8
NOC (Venugopalan et al., 2017)   >1    21.2    20.4  21.4   21.5       21.8   24.6    18.0      21.8
KGA-CGM                          >1    21.3    19.2  23.5   23.2       21.7   22.5    18.0      22.5
Metric: SPICE
KGA-CGM                          1     13.1    12.6  14.9   13.3       13.2   19.8    10.6      19.6
KGA-CGM                          >1    12.6    11.6  14.6   13.6       13.1   16.7    10.3      18.6
Table 5. Individual measures for all 8 unseen objects with beam size 1 and >1

6.3.4. Qualitative Analysis

Table 6 shows sample predictions of our best KGA-CGM model. It can be observed that the entity-labels influence caption generation. Since the entities used as image labels are already disambiguated, they attain high similarity in the prediction of a word, thus adding useful semantics.

Unseen Object | NOC Prediction | Our Predicted Caption | Predicted Entity-Labels (Top-3)
bottle | A wine bottle sitting on a table next to a wine bottle | A bottle of wine sitting on top of a table | Wine_glass, Wine_bottle, Bottle
bus | Bus driving down a street next to a bus stop. | A white bus is parked on the street | Bus, Public_Transport, Transit_Bus
couch | A woman sitting on a chair with a large piece of cake on her arm | A woman sitting on a couch with a remote | Cake, Couch, Glass
microwave | A kitchen with a refrigerator, refrigerator, and refrigerator. | A kitchen with a microwave, oven and a refrigerator | Refrigerator, Oven, Microwave_Oven
pizza | A man standing next to a table with a pizza in front of it. | A man is holding a pizza in his hands | Pizza, Restaurant, Hat
racket | A woman court holding a tennis racket on a court | A woman playing tennis on a tennis court with a racket | Tennis, Racket_(sports_equipment), Court
suitcase | A cat laying on a suitcase on a bed. | A cat laying inside of a suitcase on a bed | Cat, Baggage, Black_Cat
zebra | Zebras standing together in a field with zebras | A group of zebras standing in a line | Zebra, Enclosure, Zoo
Table 6. Predictions of KGA-CGM compared to NOC (Venugopalan et al., 2017) on MSCOCO with beam size 1.

7. Conclusion and Future Work

In this paper, we presented an approach to generate captions for images containing objects that lack parallel captions during training. Experimental results on captioning unseen/novel image objects show that the use of structured information encapsulated in the form of relational knowledge (i.e. a KB) opens a way to build connections between real-world objects and their visual information. In the future, we plan to expand our models to build multimedia knowledge bases that can automatically be queried based on relational information between images.

Appendix A Appendix

This appendix provides quantitative and qualitative results for individual components of KGA-CGM. Also, more qualitative results of the generated captions on the held-out MSCOCO objects are presented.

a.1. Language Model Hidden Layers

As presented in the paper, the language model in our caption generation model (i.e. KGA-CGM) is a 2-layer forward LSTM. For learning KGA-CGM with image-caption pairs, the input caption word embeddings are chosen to be 256-dimensional, while the LSTM hidden layer dimensions for both layer-1 and layer-2 are set to 512. However, varying the hidden layer dimensions can influence the caption generation results. In this section, we vary the hidden layer dimensions and analyze the consequences. Table 7 shows the average METEOR, SPICE and F1 measures on the 8 unseen MSCOCO objects.

Layer-1  Layer-2  Beam  METEOR  SPICE  F1-score
256      256      1     20.9    13.5   50.8
256      512      1     21.1    13.6   48.2
512      512      1     22.2    14.6   54.5
256      256      >1    20.2    13.2   42.9
256      512      >1    20.2    13.1   41.8
512      512      >1    21.5    13.9   48.9
Table 7. Effect on KGA-CGM of varying the LSTM hidden layer dimensions in the language model.

a.2. KGA-CGM More Qualitative Results

The attention weights ($\beta$) of ESA contribute to the caption generation. Figure 5 visualizes the attention weights for the captions presented in Table 6.

(a) Bottle
(b) Bus
(c) Couch
(d) Microwave
(e) Pizza
(f) Racket
(g) Suitcase
(h) Zebra
Figure 5. Visualization of ESA attention weights of the captions presented in Table 6. X-axis labels represent generated captions and Y-axis labels are predicted entity-labels. Scores are normalized between 0 and 1.

References

  • Ahn et al. (2016) Sungjin Ahn, Heeyoul Choi, Tanel Pärnamaa, and Yoshua Bengio. 2016. A neural knowledge language model. arXiv preprint arXiv:1608.00318 (2016).
  • Anderson et al. (2017) Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2017. Guided Open Vocabulary Image Captioning with Constrained Beam Search. In EMNLP.
  • Antol et al. (2015) Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question answering. In ICCV. 2425–2433.
  • Bahdanau et al. (2014) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 (2014).
  • Boleda et al. (2017) Gemma Boleda, Sebastian Padó, Nghia The Pham, and Marco Baroni. 2017. Living a discrete life in a continuous world: Reference with distributed representations. arXiv preprint arXiv:1702.01815 (2017).
  • Bordes et al. (2014) Antoine Bordes, Sumit Chopra, and Jason Weston. 2014. Question answering with subgraph embeddings. arXiv preprint arXiv:1406.3676 (2014).
  • Chorowski et al. (2015) Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. 2015. Attention-based models for speech recognition.. In NIPS. 577–585.
  • Chrupała et al. (2015) G. Chrupała, A. Kádár, and A. Alishahi. 2015. Learning language through pictures. arXiv preprint arXiv:1506.03694 (2015).
  • Fereshteh et al. (2015) Sadeghi Fereshteh, Santosh K. Kumar Divvala, and Ali Farhadi. 2015. VisKE: Visual knowledge extraction and question answering by visual verification of relation phrases. In CVPR. 1456–1464.
  • Gu et al. (2016) Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. arXiv preprint arXiv:1603.06393 (2016).
  • Hendricks et al. (2016) Lisa Anne Hendricks, Subhashini Venugopalan, Marcus Rohrbach, Raymond Mooney, Kate Saenko, and Trevor Darrell. 2016. Deep compositional captioning: Describing novel object categories without paired training data. In CVPR. 1–10.
  • Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation 9, 8 (1997), 1735–1780.
  • Jin et al. (2015) Junqi Jin, Kun Fu, Runpeng Cui, Fei Sha, and Changshui Zhang. 2015. Aligning where to see and what to tell: image caption with region-based attention and scene factorization. arXiv preprint arXiv:1506.06272 (2015).
  • Johnson et al. (2015) Justin Johnson, Ranjay Krishna, Michael Stark, Li-Jia Li, David A. Shamma, Michael Bernstein, and Li Fei-Fei. 2015. Image retrieval using scene graphs. In CVPR. 3668–3678.
  • Lazaridou et al. (2015) A. Lazaridou, N.T. Pham, and M. Baroni. 2015. Combining language and vision with a multimodal skip-gram model. arXiv preprint arXiv:1501.02598 (2015).
  • Lebret et al. (2016) Rémi Lebret, David Grangier, and Michael Auli. 2016. Generating text from structured data with application to the biography domain. ArXiv e-prints, March (2016).
  • Lehmann et al. (2015) Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N Mendes, et al. 2015. DBpedia–a large-scale, multilingual knowledge base extracted from Wikipedia. Semantic Web (2015).
  • Lin et al. (2014) Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In ECCV. Springer, 740–755.
  • Lu et al. (2016) Jiasen Lu, Caiming Xiong, Devi Parikh, and Richard Socher. 2016. Knowing When to Look: Adaptive Attention via A Visual Sentinel for Image Captioning. arXiv preprint arXiv:1612.01887 (2016).
  • Merity et al. (2016) Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer Sentinel Mixture Models. arXiv preprint arXiv:1609.07843 (2016).
  • Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation.. In EMNLP. 1532–1543.
  • Plummer et al. (2015) Bryan A. Plummer, Liwei Wang, Chris M. Cervantes, Juan C. Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. 2015. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models.. In ICCV. 2641–2649.
  • Ristoski and Paulheim (2016) Petar Ristoski and Heiko Paulheim. 2016. Rdf2vec: Rdf graph embeddings for data mining. In International Semantic Web Conference. Springer, 498–514.
  • Serban et al. (2016) Iulian Vlad Serban, Alberto García-Durán, Caglar Gulcehre, Sungjin Ahn, Sarath Chandar, Aaron Courville, and Yoshua Bengio. 2016. Generating factoid questions with recurrent neural networks: The 30m factoid question-answer corpus. arXiv preprint arXiv:1603.06807 (2016).
  • Simonyan and Zisserman (2014) Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014).
  • Socher et al. (2013) Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In Advances in neural information processing systems. 926–934.
  • Srivastava and Salakhutdinov (2012) N. Srivastava and R.R. Salakhutdinov. 2012. Multimodal learning with deep boltzmann machines.. In NIPS. 2222–2230.
  • Vedantam et al. (2015) Ramakrishna Vedantam, Lawrence C. Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In CVPR. 4566–4575.
  • Venugopalan et al. (2017) Subhashini Venugopalan, Lisa Anne Hendricks, Marcus Rohrbach, Raymond Mooney, Trevor Darrell, and Kate Saenko. 2017. Captioning images with diverse objects. In CVPR.
  • Venugopalan et al. (2014) Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, and Kate Saenko. 2014. Translating videos to natural language using deep recurrent neural networks. arXiv preprint arXiv:1412.4729 (2014).
  • Vinyals et al. (2017) Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2017. Show and tell: Lessons learned from the 2015 mscoco image captioning challenge. IEEE transactions on pattern analysis and machine intelligence 39, 4 (2017), 652–663.
  • Xu et al. (2015) Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C Courville, Ruslan Salakhutdinov, Richard S Zemel, and Yoshua Bengio. 2015. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. In ICML, Vol. 14. 77–81.
  • Yao et al. (2017) Ting Yao, Yingwei Pan, Yehao Li, and Tao Mei. 2017. Incorporating Copying Mechanism in Image Captioning for Learning Novel Objects. In CVPR.
  • You et al. (2016) Quanzeng You, Hailin Jin, Zhaowen Wang, Chen Fang, and Jiebo Luo. 2016. Image captioning with semantic attention. In CVPR. 4651–4659.