CNN+CNN: Convolutional Decoders for Image Captioning

Qingzhong Wang and Antoni B. Chan
Department of Computer Science, City University of Hong Kong

Abstract

Image captioning is a challenging task that combines the fields of computer vision and natural language processing. A variety of approaches have been proposed to automatically describe an image, and models based on recurrent neural networks (RNNs) or long short-term memory (LSTM) dominate this field. However, RNNs and LSTMs cannot be computed in parallel and ignore the underlying hierarchical structure of a sentence. In this paper, we propose a framework that employs only convolutional neural networks (CNNs) to generate captions. Owing to parallel computing, our basic model trains substantially faster than NIC (an LSTM-based model), while also providing better results. We conduct extensive experiments on MSCOCO and investigate the influence of the model width and depth. Compared with LSTM-based models that apply similar attention mechanisms, our proposed model achieves comparable BLEU-1,2,3,4 and METEOR scores, and higher CIDEr scores. We also test our model on the paragraph annotation dataset [15], and obtain a higher CIDEr score than hierarchical LSTMs.

1 Introduction

Figure 1: Language models: (a) RNN-based model, which takes the current word and the previous hidden state as input [31]; (b) the tree structure of a sentence, where {S, NP, VP, PP} denote sentence, noun phrase, verb phrase, and prepositional phrase [26, 25]; (c) CNN-based model, which is much denser than a tree structure, but is able to learn a tree structure [20]. The padding symbol denotes zero padding.

Human beings are able to describe what they see, and image captioning is the task of giving a computer this ability. Achieving this goal requires at least three components: (1) a vision model to extract visual features from images; (2) a language model to generate captions; and (3) a connection between the vision and language models. Image captioning thus combines two fields, computer vision and natural language processing (NLP), to address the challenge of understanding both images and their descriptions.

Recently, increasing research has been devoted to image captioning, and a variety of methods have been proposed [16, 8, 21, 31, 34, 36, 33, 2, 19, 9, 35]. Almost all current methods follow the CNN+RNN framework, in which a CNN serves as the vision model and an RNN generates sentences. There are different ways to connect the CNN and RNN. A naive way is to directly feed the output of the CNN into the RNN [21, 31]. However, this approach treats all objects in an image the same and ignores the salient objects when generating each word. To imitate the visual attention mechanism of humans, attention modules have been introduced into the CNN+RNN framework [34, 36, 19].

Although the CNN+RNN framework is popular and provides satisfying results, sequential RNNs have disadvantages: (1) RNNs must be computed step-by-step, which is not amenable to parallel computing during training; (2) there is a long path between the start and end of a sentence in an RNN (see Fig. 1a), so the model easily forgets long-range information. Tree structures provide a shorter path between the start and end words (see Fig. 1b), but trees require special processing that is not easily parallelizable. Moreover, trees are defined by NLP conventions (e.g., noun phrases, verb phrases), a hand-crafted structure that may not be optimal for the captioning task.

An alternative language model to RNNs and trees is a CNN applied to the sentence to merge words layer by layer [22, 20], thus learning a tree-like structure of the sentence (see Fig. 1c). CNNs can be implemented in parallel, and achieve a larger receptive field (can see more words) with fewer layers. E.g., given a sentence of 10 words, an RNN needs to iterate 10 times to obtain a representation of the sentence (equivalent to 10 layers), while a CNN with kernel size 3 only needs 5 layers. Using wider kernels, CNNs can tackle longer sentences with fewer layers. CNNs have been widely applied in NLP, leading to improved performance in many tasks, such as language modeling, machine translation and text classification [5, 10, 11, 13, 4].

Inspired by the applications of CNNs in the field of NLP, we develop a framework that employs only CNNs for image captioning. The main contributions of this paper are:

  • We propose a CNN+CNN framework for image captioning, which is faster to train than LSTM-based models and outperforms them on some metrics.

  • We propose a hierarchical attention module to connect the vision CNN with the language CNN, which significantly improves the performance.

  • We investigate the influence of the hyper-parameters, including the number of layers and the kernel width of the language CNN. The receptive field of the language CNN can be increased by stacking more layers or increasing the kernel width, and our experiments show that increasing the kernel width is better.

2 Related work

CNN+RNN models

Many methods have been proposed to automatically generate captions of images, and the most popular framework is CNN+RNN. Most work endeavors to modify the language model and the connection between the vision and language models to improve the performance.

The m-RNN model [21] uses a vanilla RNN combined with different CNNs (AlexNet or VGG-net): the RNN hidden states and the CNN output are fed into a multimodal block to fuse the image and language features at each time step, and a softmax layer predicts the next word. However, vanilla RNNs suffer from the vanishing gradient problem. Therefore, the neural image captioning (NIC) model employs an LSTM as the decoder [31]. At the beginning, the NIC model takes the image feature vector as input, and the visual information is then passed along the recurrent path.

In both m-RNN and NIC, an image is represented by a single vector, which ignores different areas and objects in the image. A spatial attention mechanism is introduced into image captioning model in [34], which allows the model to pay attention to different areas at each time step.

Most recently, [36, 33, 35] have incorporated semantics, i.e., image annotations produced by an image classifier, into image captioning. A semantic attention model is proposed in [36], which applies a fully convolutional network (FCN) to detect semantics first and then computes a weight for each semantic concept at each time step. Although semantic attention significantly improves the performance, to some extent it suffers from prediction-error accumulation along the generated sequences [35]. Alternatively, [9] uses semantics differently, by generating the parameters of an RNN or LSTM from the output of the semantic detector. This model can be trained in an end-to-end manner.

LSTMs have also been used hierarchically to model a shallow tree structure. [28] proposes a phrase LSTM model, which has two levels of LSTMs, one to model the sentence composed of phrases, and another to generate words in a phrase. Similarly, [32] applies a skeleton LSTM to generate the skeleton words of the sentence, and an attribute LSTM to generate adjective words that modify the skeleton words, which is a coarse-to-fine model. Hierarchical LSTMs have also been used to generate a paragraph description consisting of multiple sentences to describe an image [15].

To our knowledge, our work is the first that applies CNNs to completely replace RNNs for image captioning. Inspired by [20], we propose a hierarchical attention module, which aims to learn the relationship between concepts at each level and image areas. Furthermore, our attention module uses the dot-product, which has fewer parameters and can be computed faster than the MLP-based attention in [34, 19].


CNNs for NLP

Deep learning methods have dominated the field of NLP, and CNNs are an important tool for solving NLP problems. In [3], CNNs are applied to several NLP tasks, such as chunking, part-of-speech tagging, named entity recognition, and semantic role labeling. This CNN-based model provides accurate results at high speed, and is capable of learning representations instead of relying on hand-crafted features.

[13] develops a CNN model with a max-over-time pooling layer for sentence classification. [4] experiments with very deep CNNs for text classification, and the results suggest that depth and max-pooling improve the performance.

In terms of language modeling, [5] introduces a new activation function called gated linear units (GLU), which applies an output gate to each unit, and can be trained faster than ReLUs. In [10], GLUs are adopted for machine translation, where a convolutional sequence-to-sequence model outperforms the popular LSTM-based sequence-to-sequence model [27, 1]. In [11], dilated convolution is employed to increase the receptive field of the CNN.

In this paper, our proposed model adopts language CNNs without pooling layers, which is different from [13, 4]. Furthermore, we use causal convolution (similar to [29]), so that our model can generate captions word-by-word.

3 Model

CNNs have shown relatively strong abilities to tackle very long sequences [29]. Inspired by CNNs used for NLP, we propose a CNN+CNN framework for image captioning. There are four modules in our framework: (1) a vision module, which is adopted to “watch” images; (2) a language module, which models sentences; (3) an attention module, which connects the vision module with the language module; and (4) a prediction module, which takes the visual features from the attention module and the concepts from the language module as input and predicts the next word. Fig. 2 illustrates the proposed framework.

Figure 2: The CNN+CNN model for image captioning. The vision CNN extracts features from the image, and the language CNN is applied to model the sentence. The attention module and prediction module fuse the information from the image and word contexts. The convolutions in the language CNN use causal filters (depending only on the present and past words), to enable feed-forward sentence generation. For each convolutional layer in the language module, we pad the sentence matrix with k − 1 zero vectors along the length axis, where k denotes the kernel size.

The vision module

The vision module is a CNN without fully connected layers, whose output is a feature map. At each position of the feature map, a D-dimensional feature vector represents part of the image. Define V = {v_1, …, v_N} to be the list of image feature vectors, where v_i ∈ R^D, i is the position index in the feature map, and N is the number of positions. In this paper, we use VGG-16 [24] as the CNN for the vision module.

The language module

The language model is based on a CNN without pooling, which is very different from the typical RNN-based framework (e.g., [21, 31, 7, 34]). RNNs adopt a recurrent path to memorize the context, while CNNs use kernels and stack multiple layers to model the context.

Let S = {w_1, …, w_T} be a sentence with T words. We first use a look-up table to project each word into an embedding space with d dimensions, and calculate the embeddings E = [e_1, …, e_T], where e_t ∈ R^d. In this stage, a sentence is represented by a d × T matrix. A stack of convolutional layers with gated linear units (GLUs) follows the embedding layer, which is calculated using the following equations:

h^l = (W_a^l ∗ h^(l−1) + b_a^l) ⊗ σ(W_b^l ∗ h^(l−1) + b_b^l),    h^0 = E,

where W_a^l and W_b^l denote the kernels of the l-th layer, b_a^l and b_b^l are biases, ∗ denotes the convolution operator, ⊗ denotes element-wise multiplication, and σ is the sigmoid function. σ(W_b^l ∗ h^(l−1) + b_b^l) plays the role of a gate and W_a^l ∗ h^(l−1) + b_a^l is a linear transformation. In our basic framework we stack six such layers (other depths are studied in Sec. 4.4). Note that the convolution filters are causal filters, only depending on the current and past inputs. During inference, this structure allows a sentence to be generated using a feed-forward process where the predicted word at the output layer is used as the next word input.

For modeling sentences, the length of the output is required to be the same as the input sentence. Since there are no pooling or fully connected layers, we add zero-padding at the beginning of the input and hidden layers: if the convolutional kernel width is k, so that it considers k concepts (the bottom level represents words, whereas hidden layers represent different concepts) in each step, the word/concept matrices need to be padded with k − 1 zero vectors before convolution. The output of the CNN is a set of concepts C = [c_1, …, c_T], where c_t ∈ R^d.
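As a concrete illustration, a causal convolution with a GLU, padded so that the output has the same length as the input, can be sketched in NumPy. This is a toy reimplementation with made-up layer sizes, not the paper's code:

```python
import numpy as np

def causal_conv_glu(x, Wa, ba, Wb, bb):
    """One causal convolution layer with a gated linear unit (GLU).

    x  : (T, d)    input word/concept matrix (T positions, d channels)
    Wa : (k, d, d) kernel of the linear path; Wb: gate kernel, same shape
    Returns (T, d): each output position depends only on itself and the
    k - 1 previous positions (causality via left zero-padding).
    """
    k = Wa.shape[0]
    # Pad with k - 1 zero vectors at the *beginning* only (causal filter).
    xp = np.vstack([np.zeros((k - 1, x.shape[1])), x])
    T = x.shape[0]
    out = np.empty((T, Wa.shape[2]))
    for t in range(T):
        window = xp[t:t + k]                          # current + past words
        lin = np.einsum('kd,kde->e', window, Wa) + ba # linear path
        gate = np.einsum('kd,kde->e', window, Wb) + bb
        out[t] = lin * (1.0 / (1.0 + np.exp(-gate)))  # GLU: linear ⊙ σ(gate)
    return out

rng = np.random.default_rng(0)
T, d, k = 6, 4, 3
x = rng.normal(size=(T, d))
Wa, Wb = rng.normal(size=(k, d, d)), rng.normal(size=(k, d, d))
h = causal_conv_glu(x, Wa, np.zeros(d), Wb, np.zeros(d))
print(h.shape)  # (6, 4): same length as the input sentence
```

Because of the left-only padding, perturbing word t can only change outputs at positions t and later, which is what allows training over all positions in parallel while still generating word-by-word at test time.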

Figure 3: The attention module. We apply a dot-product and softmax to calculate the weight for each position and concept; the module has only one parameter matrix W (a linear transformation).

The attention module

Intuitively, to predict different words, different objects in the image should be attended to and input into the prediction module. The attention module takes the visual features V and the top-level concepts C as input, as shown in Fig. 3. For each concept c_t and visual feature vector v_i, we calculate a score as follows:

s_{t,i} = c_t^T W v_i,

where W is a parameter matrix. Therefore each concept c_t corresponds to a score vector s_t = [s_{t,1}, …, s_{t,N}], indicating matches with the image features. Then s_t is fed into a softmax layer, which assigns a weight α_{t,i} to the visual feature vector v_i,

α_{t,i} = exp(s_{t,i}) / Σ_{j=1}^{N} exp(s_{t,j}).

We use the weighted sum operation to calculate the final attention vector as follows,

a_t = Σ_{i=1}^{N} α_{t,i} v_i.

The output of our attention module is the attention features A = [a_1, …, a_T], where a_t ∈ R^D, corresponding to each word in the sentence.
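The three attention equations (score, softmax, weighted sum) amount to a few matrix products for all sentence positions at once. A minimal NumPy sketch, with illustrative dimensions (a 14×14 VGG-16 feature map gives N = 196, D = 512; the concept size d here is arbitrary):

```python
import numpy as np

def dot_product_attention(C, V, W):
    """Attention features for every sentence position at once.

    C : (T, d)  concepts from the language CNN
    V : (N, D)  image feature vectors from the vision CNN
    W : (d, D)  the single parameter matrix of the module
    Returns A : (T, D), one attended image vector per word position.
    """
    S = C @ W @ V.T                           # scores s_{t,i} = c_t^T W v_i
    S = S - S.max(axis=1, keepdims=True)      # for numerical stability
    alpha = np.exp(S) / np.exp(S).sum(axis=1, keepdims=True)  # softmax over i
    return alpha @ V                          # weighted sum of image features

rng = np.random.default_rng(1)
T, d, N, D = 5, 8, 196, 512
C = rng.normal(size=(T, d))
V = rng.normal(size=(N, D))
W = rng.normal(size=(d, D)) * 0.01
A = dot_product_attention(C, V, W)
print(A.shape)  # (5, 512)
```

Since each a_t is a convex combination of the v_i, every attended vector stays inside the range of the image features, and the whole module costs a single d × D parameter matrix, versus the extra hidden layer of MLP-based attention.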

The prediction module

Figure 4: The hierarchical attention model. The structure of hierarchical attention moves up the layers, not to the right, so as to prevent sideways connections in the same layer (as in an RNN).

As shown in Fig. 2, our proposed prediction module takes the attention features A and concepts C as input. Our prediction module is a one-hidden-layer neural network. At the t-th sentence position, a_t and c_t are fed into the network, and the output is the prediction probability of the next word w_{t+1},

p_{t+1} = softmax(W_p φ(W_h [a_t; c_t] + b_h) + b_p),

where W_h ∈ R^{n_h × (D+d)} and W_p ∈ R^{|V| × n_h} are the parameters, and n_h denotes the number of hidden units in the prediction module. φ is a leaky ReLU, and p_{t+1} ∈ R^{|V|}, where |V| represents the vocabulary size. During training, the input image-sentence pair is given, therefore the prediction module can be implemented as a convolutional kernel, which is faster than RNN-based models.
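A minimal sketch of this one-hidden-layer prediction network in NumPy; the layer sizes below are illustrative placeholders, not the paper's settings:

```python
import numpy as np

def leaky_relu(z, slope=0.01):
    # phi in the prediction equation.
    return np.where(z > 0, z, slope * z)

def predict_word_probs(a_t, c_t, Wh, bh, Wp, bp):
    """One-hidden-layer prediction network for one sentence position.

    a_t : (D,) attention vector, c_t : (d,) top-level concept.
    Returns a probability vector over the vocabulary.
    """
    h = leaky_relu(Wh @ np.concatenate([a_t, c_t]) + bh)  # hidden layer
    logits = Wp @ h + bp
    e = np.exp(logits - logits.max())                     # stable softmax
    return e / e.sum()

rng = np.random.default_rng(2)
D, d, nh, vocab = 512, 8, 32, 100   # toy sizes
p = predict_word_probs(
    rng.normal(size=D), rng.normal(size=d),
    rng.normal(size=(nh, D + d)) * 0.01, np.zeros(nh),
    rng.normal(size=(vocab, nh)) * 0.01, np.zeros(vocab))
print(p.shape)  # (100,): a distribution over the vocabulary
```

Because the same weights are applied independently at every position, during training this is a 1×1 convolution over the T positions, with no recurrence to serialize.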

3.1 Hierarchical attention

The basic CNN+CNN model in the previous subsection extracts attention features from the top level of the language model. However, different levels of the language CNN represent different concept levels that could benefit from visual inputs. Hence, we also consider a hierarchical attention (hier-att) module (see Fig. 4), in which an attention vector a_t^l is computed at each level l of the language model, by applying the score and softmax equations of the attention module to the concepts at that level, and is fed into the next level together with those concepts.

In contrast to RNN-based models that calculate attention maps in a left-right (word-by-word) manner, our attention maps are calculated in a bottom-up (layer-by-layer) manner, so as to prevent sideways connections in the same layer. This allows our model to be trained in parallel over all words in the sentence, rather than word-by-word. Note that the model still sees attended features of previous words (from the lower layer) through the causal convolution layer.

3.2 Training and inference

During training, as image-caption pairs are given, the convolution structures are applied in the normal way, and the loss function for each sentence is the cross-entropy with l2 regularization,

L = − Σ_{t=1}^{T} log p_t(w_t) + λ Σ_W ||W||_2^2,

where w_t denotes the ground-truth word at position t and p_t(w_t) is its predicted probability. The first term is the same as the loss function in [31], and the second term is the regularization term, where W ranges over the weights in our framework and λ is the regularization weight.
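The loss is ordinary per-word cross-entropy plus weight decay; a NumPy sketch with toy shapes (not the paper's code):

```python
import numpy as np

def caption_loss(probs, target_ids, weights, lam=1e-5):
    """Cross-entropy over a sentence plus l2 regularization.

    probs      : (T, V) predicted word distributions, one row per position
    target_ids : (T,)   indices of the ground-truth words
    weights    : list of parameter arrays to regularize
    """
    T = len(target_ids)
    nll = -np.log(probs[np.arange(T), target_ids]).sum()  # cross-entropy term
    reg = lam * sum((W ** 2).sum() for W in weights)      # l2 penalty
    return nll + reg

# Toy check: a perfect one-hot prediction gives zero cross-entropy.
probs = np.eye(4)[[0, 2, 3]]            # T = 3, V = 4
probs = np.clip(probs, 1e-12, 1.0)      # avoid log(0) off the target
loss = caption_loss(probs, np.array([0, 2, 3]), weights=[])
print(loss == 0.0)  # True
```

Because all T positions are scored in one shot (teacher forcing with the ground-truth sentence as input), the loss for the whole caption is computed in a single forward pass.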

During inference, the caption is generated from the image using a feed-forward process. The caption is initialized as zero padding and a start token <S>, and is fed as the input sentence to the model to predict the probability of the next word. The predicted word is appended to the caption, and the process is repeated until the end token </S> is predicted or the maximum length is reached. The predicted words are selected using a naive greedy method (most probable word), which is equivalent to a beam search with beam width 1.
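The greedy decoding loop can be sketched as follows; `predict_next` is a hypothetical stand-in for the trained vision+language CNN (here it just emits a canned caption), so only the loop structure reflects the method:

```python
# Greedy feed-forward decoding. `predict_next` stands in for the full model:
# it returns the most probable next word given the words generated so far.
VOCAB = ["<S>", "</S>", "a", "dog", "runs"]

def predict_next(words):
    # Hypothetical model output: emits "a dog runs" then the end token.
    canned = ["a", "dog", "runs", "</S>"]
    step = len(words) - 1            # exclude the start token
    return canned[step] if step < len(canned) else "</S>"

def generate_caption(max_len=70):
    caption = ["<S>"]                # start token; zero padding is implicit
    while len(caption) < max_len + 1:
        word = predict_next(caption) # greedy = beam search with width 1
        if word == "</S>":           # stop at the end token
            break
        caption.append(word)
    return caption[1:]               # strip the start token

print(generate_caption())  # ['a', 'dog', 'runs']
```

Each iteration re-runs the causal convolutions on the partial caption; since the filters only look backward, previously computed positions are unchanged when a word is appended.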

4 Experiments

We present experiments using our proposed CNN+CNN model on three datasets.

4.1 Dataset and experimental setup


MSCOCO

MSCOCO is the most popular dataset for image captioning, comprising 82,783 training and 40,504 validation images. Each image has 5 human-annotated captions. As in [12, 34], we split the images into 113,287 training, 5,000 validation, and 5,000 testing images. Our vocabulary contains 10,000 words.


Flickr30k

Flickr30k contains 31,783 images, each with 5 captions. Following [12], we use 1,000 images for validation, 1,000 for testing, and the remainder for training. The vocabulary size is 10,000.

Paragraph annotation dataset (PAD)

PAD contains 19,551 images from MSCOCO and Visual Genome (VG). Each image is described by a paragraph comprising several sentences. The average paragraph length is 67.5 words, while the average caption length in MSCOCO is 11.3 words; generating such long captions is much more challenging. We use the same split as [15], and our vocabulary size is 5,000.

Implementation details

For all datasets, we use the same hyper-parameter settings. In our framework, the vision module is the VGG-16 network without fully connected layers, pre-trained on ImageNet. We use the 14×14 feature map to compute attention features (N = 196 positions, D = 512). The regularization weight is λ = 1e-5. We apply the Adam optimizer [14] with mini-batch size 10 to train our model. For VGG-16, we set the learning rate to 1e-5; for the other parameters, the learning rate starts at 1e-3 and decays every 50k steps (every 10k steps for PAD). The stopping criterion is based on the validation loss. The weights are initialized with a truncated normal initializer. The model was implemented in TensorFlow.

During training, the images are resized and then randomly cropped to 224×224 patches as inputs. During testing, the images are resized to 224×224 without cropping; we employ publicly available preprocessing code. The maximum caption length is 70 for MSCOCO and Flickr30k, and 150 for PAD.

Compared models

We compare our proposed model with and without hierarchical attention. We also compare other popular models: DeepVS [12], m-RNN [21], Google NIC [31], LRCN [7], hard-ATT and soft-ATT [34] on MSCOCO and Flickr30k datasets, and Sentence-Concat, DenseCap-Concat, Image-Flat, Regions-Flat-Scratch, Regions-Flat-Pretrained, and Regions-Hierarchical [15] on PAD.


Evaluation metrics

We report the following metrics: BLEU-1,2,3,4 [23], METEOR [6], ROUGE-L [17] and CIDEr [30], which are computed using the MSCOCO toolkit [18]. For all metrics, higher values indicate better performance.

MSCOCO:

Model B-1 B-2 B-3 B-4 M R C
DeepVS [12] 0.625 0.450 0.321 0.230 0.195 - 0.660
m-RNN [21] 0.670 0.490 0.350 0.250 - - -
NIC [31] 0.666 0.461 0.329 0.246 - - -
LRCN [7] 0.697 0.519 0.380 0.278 0.229 0.508 0.837
Hard-ATT [34] 0.718 0.504 0.357 0.250 0.230 - -
Soft-ATT [34] 0.707 0.492 0.344 0.243 0.239 - -
Ours (w/o hier-att) 0.688 0.513 0.370 0.265 0.234 0.507 0.839
Ours (w/ hier-att) 0.685 0.511 0.369 0.267 0.234 0.510 0.844

Flickr30k:

Model B-1 B-2 B-3 B-4 M R C
DeepVS [12] 0.573 0.369 0.240 0.157 0.153 - 0.247
m-RNN [21] 0.600 0.410 0.280 0.190 - - -
NIC [31] 0.663 0.423 0.277 0.183 - - -
LRCN [7] 0.587 0.390 0.250 0.165 - - -
Hard-ATT [34] 0.669 0.439 0.296 0.199 0.185 - -
Soft-ATT [34] 0.667 0.434 0.288 0.191 0.185 - -
Ours (w/o hier-att) 0.577 0.401 0.276 0.190 0.184 0.425 0.352
Ours (w/ hier-att) 0.607 0.425 0.292 0.199 0.191 0.442 0.395

Table 1: Performance on MSCOCO and Flickr30k dataset. B, M, R, C stand for BLEU, Meteor, Rouge-L and CIDEr, respectively. The bold, red, and blue numbers are the highest value, 2nd highest and 3rd highest values, respectively.

4.2 Results on MSCOCO and Flickr30k

Figure 5: Examples of generated descriptions. Black, blue and red text correspond to captions from NIC, our model, and our model with hier-att.

Table 1 shows the results on MSCOCO and Flickr30k. Note that all models use only the image information, without semantics or attribute boosting. In terms of the B-1 score, which only considers unigrams, our model performs slightly worse than the Hard- and Soft-ATT models, but better than NIC and m-RNN on MSCOCO. In terms of the B-2,3,4 and M metrics, our model ranks 2nd, with values marginally lower than the LRCN model, which applies multilayer LSTMs to generate captions. This suggests that adopting CNNs as the decoder is competitive with, or can be better than, using LSTMs. When we apply our proposed hierarchical attention, the B-4, M, R and CIDEr scores improve: hier-att raises the CIDEr score from 0.839 to 0.844 and the ROUGE-L score from 0.507 to 0.510.

On Flickr30k, our proposed model also provides comparable results, and slightly improves the M score compared with LSTM-based models. Moreover, our proposed hierarchical attention significantly improves all scores compared with the model without hierarchical attention. The CIDEr score improves from 0.352 to 0.395 (a 12% increase), the B-1 score improves by 3 points from 0.577 to 0.607, and the other scores improve by around 1 point.

An advantage of our proposed CNN+CNN model is the training speed. We compare our 6-layer model without hierarchical attention with NIC. With batch size 32, training on the same machine with a GTX 1080 GPU takes 0.24 seconds per batch for our model, and 0.64 seconds for NIC; our model thus trains about 2.7× faster than NIC (NIC is the simplest among the LSTM-based models). After training for 50k steps (each step uses one batch), the loss decreases by around 70%, and the model is able to output reasonable descriptions. In the inference stage, both our CNN+CNN model and LSTM-based models generate one word at a time, and thus the inference speed is almost the same.

Note that the NIC model contains about 6.7M parameters, while our proposed model contains more than 16.0M parameters. Although our model is more complex, it can be trained faster because it can better take advantage of parallelization since there are no recurrent (left-right) connections between words within the same layer. Examples of generated descriptions are shown in Fig. 5.

4.3 Results on paragraph generation

Model B-1 B-2 B-3 B-4 M R C
Sentence-Concat 0.311 0.151 0.076 0.040 0.120 - 0.068
DenseCap-Concat 0.332 0.169 0.085 0.045 0.127 - 0.125
Image-Flat 0.340 0.200 0.122 0.077 0.128 - 0.111
Regions-Flat-Scratch 0.373 0.217 0.131 0.081 0.135 - 0.111
Regions-Flat-Pretrained 0.383 0.229 0.142 0.090 0.142 - 0.121
Regions-Hierarchical 0.419 0.241 0.142 0.087 0.160 - 0.135
Ours, 20-layer, k=3 0.350 0.202 0.118 0.068 0.140 0.263 0.106
Ours, 20-layer, k=5 0.311 0.171 0.099 0.057 0.127 0.263 0.120
Ours, 20-layer, k=7 0.350 0.194 0.107 0.059 0.133 0.259 0.152

Table 2: Results on the paragraph annotation dataset. The bold numbers represent the highest score. We use a 20-layer CNN with skip connections every 3 layers, and we do not use hierarchical attention for this task. The results of the compared models are from [15].
Figure 6: Examples of generated paragraph description.

We evaluate our model on the paragraph annotation dataset, in which each image has a much longer caption than in MSCOCO and Flickr30k. Table 2 shows the results. Our proposed model is comparable with Image-Flat on B-1,2,3,4, and lower than Regions-Hierarchical. Our model with kernel width 7 significantly improves the CIDEr score, a metric specifically designed for evaluating image captions. Some examples generated by our 20-layer model with kernel width 7 are shown in Fig. 6.

4.4 Wider or Deeper?

MSCOCO:

Model k #para B-1 B-2 B-3 B-4 M R C
6-layer, w/o hier-att 2 16.0M 0.679 0.504 0.363 0.262 0.231 0.505 0.827
6-layer, w/o hier-att 3 16.9M 0.683 0.509 0.367 0.264 0.234 0.507 0.829
6-layer, w/o hier-att 5 18.7M 0.687 0.511 0.368 0.264 0.233 0.509 0.838
6-layer, w/o hier-att 7 20.5M 0.688 0.513 0.370 0.265 0.234 0.507 0.839
6-layer, w/ hier-att 2 16.0M 0.682 0.508 0.368 0.266 0.233 0.508 0.836
6-layer, w/ hier-att 3 16.9M 0.684 0.514 0.372 0.269 0.235 0.511 0.842
6-layer, w/ hier-att 5 18.7M 0.685 0.511 0.369 0.267 0.234 0.510 0.844
6-layer, w/ hier-att 7 20.5M 0.686 0.510 0.367 0.264 0.234 0.510 0.834
20-layer, w/o hier-att 3 24.5M 0.683 0.505 0.365 0.264 0.234 0.505 0.838

Flickr30k:

Model k #para B-1 B-2 B-3 B-4 M R C
6-layer, w/o hier-att 2 16.0M 0.552 0.378 0.257 0.175 0.179 0.421 0.342
6-layer, w/o hier-att 3 16.9M 0.577 0.396 0.267 0.181 0.180 0.426 0.350
6-layer, w/o hier-att 5 18.7M 0.577 0.396 0.268 0.183 0.183 0.424 0.340
6-layer, w/o hier-att 7 20.5M 0.577 0.401 0.276 0.190 0.184 0.425 0.352
6-layer, w/ hier-att 2 16.0M 0.581 0.403 0.277 0.190 0.182 0.433 0.368
6-layer, w/ hier-att 3 16.9M 0.607 0.425 0.292 0.199 0.191 0.442 0.395
6-layer, w/ hier-att 5 18.7M 0.588 0.402 0.270 0.185 0.183 0.428 0.352
6-layer, w/ hier-att 7 20.5M 0.593 0.412 0.282 0.193 0.187 0.437 0.386
20-layer, w/o hier-att 3 24.5M 0.589 0.411 0.284 0.198 0.175 0.425 0.320

Table 3: The influence of the kernel width and number of layers. k denotes the kernel width, and #para is the number of parameters of the model, which reflects model complexity.

In this experiment, we compare the performance of our model with different kernel widths and depths; results are shown in Table 3. Designing and training deeper networks is popular, yet our experiments suggest that deeper networks are unnecessary on these captioning datasets. The results suggest that for MSCOCO and Flickr30k, a 6-layer network with kernel width 3 and hierarchical attention is the best choice.

For our proposed model without hierarchical attention, increasing the kernel width is the better choice on both MSCOCO and Flickr30k. When the kernel width increases from 2 to 7, the CIDEr score increases from 0.827 to 0.839 on MSCOCO, and from 0.342 to 0.352 on Flickr30k, which is better than the 20-layer network.

Note that the average length of the captions in MSCOCO is 11.6 words, and the receptive field of a 6-layer CNN with kernel width 3 is 13, which is longer than many sentences in MSCOCO. Increasing the kernel width or the network depth increases the complexity of the model, which makes it more difficult to train and easier to overfit. Based on our experiments, we suggest using k = 3 and setting the depth such that the receptive field covers the average sentence length, which gives a relatively good balance between complexity and performance.
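This design rule is simple arithmetic: a stack of L causal convolutions with kernel width k has a receptive field of (k − 1)L + 1 words. A small sketch, with the dataset statistics from the sections above:

```python
import math

def receptive_field(depth, k):
    # A stack of `depth` causal convolutions with kernel width k
    # sees (k - 1) * depth + 1 positions.
    return (k - 1) * depth + 1

def depth_for(avg_len, k):
    # Smallest depth whose receptive field covers a sentence of
    # the given average length.
    return math.ceil((avg_len - 1) / (k - 1))

print(receptive_field(6, 3))  # 13: slightly above MSCOCO's ~11.6-word average
print(depth_for(11.6, 3))     # 6 layers suffice for k = 3 on MSCOCO
print(depth_for(67.5, 7))     # paragraph captions need wider/deeper stacks
```

Inverting the formula this way makes the suggested recipe concrete: pick k, then choose the shallowest depth whose receptive field covers the average sentence.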

4.5 Attention module analysis

Figure 7: Visualization of attention maps learned by our model. The left-most images are the input images, and each bold-underline word corresponds to one attention map. Here we show nouns, verbs, prepositions and numbers.
Figure 8: Example failure cases. Scenes with many objects are challenging for the visual attention model.
Figure 9: Visualization of the attention maps at the top-level of our hierarchical attention. The hierarchical attention can handle many-object scenes.

We use the MSCOCO test images to visualize the attention maps and descriptions. The attention maps are visualized by upsampling the 14×14 attention maps with bilinear interpolation and Gaussian smoothing (see Fig. 7). Our proposed model is able to pay attention to the correct area when predicting the corresponding words. However, it is difficult to quantify the attention module; e.g., when the model predicts the word “man” (second row in Fig. 7), it also pays attention to the woman, and vice versa. For prepositions and verbs, our model attends to the correct area, e.g., when the word “next” is predicted, the areas around the elephant are highlighted.

When there are many objects in an image, although the model can pay attention to the areas that contain objects, it predicts the wrong words or repeats a word several times, as seen in Fig. 8. The top row shows that the model predicts the word “chair” twice, and the attention maps are similar. The bottom row shows that although the objects are attended, the model predicts the wrong words “stove” and “sink”, which are likely influenced by the preceding context “a kitchen with a”.

We also visualize our model with hierarchical attention in Fig. 9, which shows the attention maps at the top-level for the predicted words. The model with hierarchical attention tends to pay attention to small areas at the top level, and for the scenes with many objects it is able to focus on the relatively correct areas, which is similar to the coarse-to-fine procedure. Compared with the failure cases in Fig. 8, using hierarchical attention provides more details, which could benefit caption generation.

5 Conclusions

We have developed a CNN+CNN framework for image captioning and explored the influence of the kernel width and depth of the language CNN. We have shown that the CNN-based framework is competitive with LSTM-based models while training faster. We also visualized the learned attention maps to show that the model learns concepts and attends to the corresponding image areas in a meaningful way.


  • [1] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2015.
  • [2] L. Chen, H. Zhang, J. Xiao, L. Nie, J. Shao, W. Liu, and T.-S. Chua. Sca-cnn: spatial and channel-wise attention in convolutional networks for image captioning. CVPR, 2017.
  • [3] R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. Natural language processing (almost) from scratch. JMLR, 12:2493–2537, 2011.
  • [4] A. Conneau, H. Schwenk, L. Barrault, and Y. Lecun. Very deep convolutional networks for text classification. EACL, 2017.
  • [5] Y. N. Dauphin, A. Fan, M. Auli, and D. Grangier. Language modeling with gated convolutional networks. arXiv, 2016.
  • [6] M. Denkowski and A. Lavie. Meteor universal: Language specific translation evaluation for any target language. EACL 2014 Workshop on Statistical Machine Translation, 2014.
  • [7] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. CVPR, 2015.
  • [8] H. Fang, S. Gupta, F. Iandola, R. K. Srivastava, L. Deng, P. Dollar, J. Gao, X. He, M. Mitchell, J. C. Platt, C. L. Zitnick, and G. Zweig. From captions to visual concepts and back. CVPR, 2015.
  • [9] Z. Gan, C. Gan, X. He, Y. Pu, K. Tran, J. Gao, L. Carin, and L. Deng. Semantic compositional networks for visual captioning. CVPR, 2017.
  • [10] J. Gehring, M. Auli, D. Grangier, D. Yarats, and Y. N. Dauphin. Convolutional sequence to sequence learning. arXiv, 2017.
  • [11] N. Kalchbrenner, L. Espeholt, K. Simonyan, A. van den Oord, A. Graves, and K. Kavukcuoglu. Neural machine translation in linear time. arXiv, 2016.
  • [12] A. Karpathy and L. Fei-Fei. Deep visual-semantic alignments for generating image descriptions. CVPR, 2015.
  • [13] Y. Kim. Convolutional neural networks for sentence classification. EMNLP, 2014.
  • [14] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. ICLR, 2015.
  • [15] J. Krause, J. Johnson, R. Krishna, and L. Fei-Fei. A hierarchical approach for generating descriptive image paragraphs. CVPR, 2017.
  • [16] G. Kulkarni, V. Premraj, V. Ordonez, S. Dhar, S. Li, Y. Choi, A. C. Berg, and T. L. Berg. Babytalk: understanding and generating simple image descriptions. IEEE Trans. on PAMI, 35(12):2891–2903, 2013.
  • [17] C.-Y. Lin. Rouge: A package for automatic evaluation of summaries. ACL 2004 Workshop, 2004.
  • [18] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollar, and C. L. Zitnick. Microsoft coco: Common objects in context. ECCV, 2014.
  • [19] J. Lu, C. Xiong, D. Parikh, and R. Socher. Knowing when to look: adaptive attention via a visual sentinel for image captioning. CVPR, 2017.
  • [20] L. Ma, Z. Lu, L. Shang, and H. Li. Multimodal convolutional neural networks for matching image and sentence. ICCV, 2015.
  • [21] J. Mao, W. Xu, Y. Yang, J. Wang, Z. Huang, and A. Yuille. Deep captioning with multimodal recurrent neural networks (m-rnn). ICLR, 2015.
  • [22] Z. Niu, M. Zhou, L. Wang, X. Gao, and G. Hua. Hierarchical multimodal lstm for dense visual-semantic embedding. ICCV, 2017.
  • [23] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. Bleu: a method for automatic evaluation of machine translation. ACL, 2002.
  • [24] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. ICLR, 2015.
  • [25] R. Socher, J. Bauer, C. D. Manning, and A. Y. Ng. Parsing with compositional vector grammars. ACL, 2013.
  • [26] R. Socher, C. Lin, A. Y. Ng, and C. D. Manning. Parsing natural scenes and natural language with recursive neural networks. ICML, 2011.
  • [27] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. NIPS, 2014.
  • [28] Y. H. Tan and C. S. Chan. phi-lstm: A phrase-based hierarchical lstm model for image captioning. ACCV, 2016.
  • [29] A. van den Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior, and K. Kavukcuoglu. Wavenet: A generative model for raw audio. arXiv, 2016.
  • [30] R. Vedantam, C. L. Zitnick, and D. Parikh. Cider: Consensus-based image description evaluation. CVPR, 2015.
  • [31] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. CVPR, 2015.
  • [32] Y. Wang, Z. Lin, X. Shen, S. Cohen, and G. W. Cottrell. Skeleton key: Image captioning by skeleton-attribute decomposition. CVPR, 2017.
  • [33] Q. Wu, C. Shen, L. Liu, A. Dick, and A. v. d. Hengel. What value do explicit high level concepts have in vision to language problems? CVPR, 2016.
  • [34] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhudinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. ICML, 2015.
  • [35] T. Yao, Y. Pan, Y. Li, Z. Qiu, and T. Mei. Boosting image captioning with attributes. ICCV, 2017.
  • [36] Q. You, H. Jin, Z. Wang, C. Fang, and J. Luo. Image captioning with semantic attention. CVPR, 2016.