Multilingual Image Description
with Neural Sequence Models
We introduce multilingual image description, the task of generating descriptions of images given data in multiple languages. This can be viewed as visually-grounded machine translation, allowing the image to play a role in disambiguating language. We present models for this task that are inspired by neural models for image description and machine translation. Our multilingual image description models generate target-language sentences using features transferred from separate models: multimodal features from a monolingual source-language image description model and visual features from an object recognition model. In experiments on a dataset of images paired with English and German sentences, evaluated with BLEU and Meteor, our models substantially improve upon existing monolingual image description models.
Desmond Elliott* (ILLC, University of Amsterdam; Centrum Wiskunde & Informatica)
Stella Frank* (ILLC, University of Amsterdam)
Eva Hasler (Department of Engineering, University of Cambridge)

*Authors contributed equally to this paper.
Automatic image description, the task of generating natural language sentences for an image, has thus far been performed exclusively in English, due to the availability of English datasets. However, the applications of automatic image description, such as text-based image search or providing image alt-texts on the Web for the visually impaired, are also relevant for other languages. Current image description models are not inherently English-specific, so a simple approach to generating descriptions in another language would be to collect new annotations and then train a model for that language. Nonetheless, the wealth of image description resources for English suggests a cross-language resource transfer approach, which is what we explore here. In other words: how can we best use resources for Language A when generating descriptions for Language B?
We introduce multilingual image description and present a multilingual multimodal image description model for this task. Multilingual image description is a form of visually-grounded machine translation, in which parallel sentences are grounded against features from an image. This grounding can be particularly useful when the source sentence contains ambiguities that need to be resolved in the target sentence. For example, in the German sentence “Ein Rad steht neben dem Haus”, “Rad” could refer to either “bicycle” or “wheel”, but with visual context the intended meaning can be more easily translated into English. In other cases, source language features can be more precise than noisy image features, e.g. in identifying the difference between a river and a harbour.
Our multilingual image description model adds source language features to a monolingual neural image description model (Karpathy & Fei-Fei, 2015; Vinyals et al., 2015, inter alia). Figure 1 depicts the overall approach, illustrating the way we transfer feature representations between models. Image description models generally use a fixed representation of the visual input taken from an object recognition model (e.g., a CNN). In this work we add fixed features extracted from a source language model (which may itself be a multimodal image description model) to our image description model. This is distinct from neural machine translation models, which train source language feature representations specifically for target decoding in a joint model (Cho et al., 2014; Sutskever et al., 2014). Our composite model pipeline is more flexible than a joint model, allowing the reuse of models for other tasks (e.g., monolingual image description, object recognition) and not requiring retraining for each different language pair. We show that the representations extracted from source language models, despite not being trained to translate between languages, are nevertheless highly successful in transferring additional informative features to the target language image description model.
In a series of experiments on the IAPR-TC12 dataset of images described in English and German, we find that models that incorporate source language features substantially outperform target monolingual image description models. The best English-language model improves upon the state-of-the-art by 2.3 bleu4 points for this dataset. In the first results reported on German image description, our model achieves an 8.8 Meteor point improvement compared to a monolingual image description baseline. The implication is that linguistic and visual features offer orthogonal improvements in multimodal modelling (a point also made by Silberer & Lapata (2014) and Kiela & Bottou (2014)). The models that include visual features also improve over our translation baselines, although to a lesser extent; we attribute this to the dataset being exact translations rather than independently elicited descriptions, leading to high performance for the translation baseline. Our analyses show that the additional features improve mainly lower-quality sentences, indicating that our best models successfully combine multiple noisy input modalities.
Our multilingual image description models are neural sequence generation models, with additional inputs from either visual or linguistic modalities, or both. We present a family of models in order of increasing complexity to make their compositional character clear, beginning with a neural sequence model over words and concluding with the full model using both image and source features. See Figure 2 for a depiction of the model architecture.
2.1 Recurrent Language Model (LM)
The core of our model is a Recurrent Neural Network model over word sequences, i.e., a neural language model (lm) (Mikolov et al., 2010). The model is trained to predict the next word in the sequence, given the sequence seen so far. At each timestep $t$ of the input sequence $w_1, \ldots, w_N$, the input word $w_t$, represented as a one-hot vector over the vocabulary, is embedded into a high-dimensional continuous vector $x_t$ using the learned embedding matrix $E$ (Eqn 1). A nonlinear function $\phi$ is applied to the embedding combined with the previous hidden state $h_{t-1}$ to generate the hidden state $h_t$ (Eqn 2). At the output layer, the next word is predicted via the softmax function over the vocabulary (Eqn 3).

$$x_t = E\,w_t \quad (1)$$
$$h_t = \phi(W_{hx}\,x_t + W_{hh}\,h_{t-1}) \quad (2)$$
$$p(w_{t+1} \mid w_{1:t}) = \mathrm{softmax}(W_{oh}\,h_t) \quad (3)$$
In simple RNNs, $\phi$ in Eqn 2 can be the tanh or sigmoid function. Here, we use an LSTM[1] to avoid problems with longer sequences (Hochreiter & Schmidhuber, 1997). Sentences are buffered with a special beginning-of-sentence marker at timestep 0 and an end-of-sequence marker at the final timestep. The initial hidden state values $h_0$ are learned, together with the weight matrices $W$.

[1] The LSTM produced better validation performance than a Gated Recurrent Unit (Cho et al., 2014).
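To make Eqns 1-3 concrete, here is a minimal numpy sketch of one timestep of the recurrent language model. It uses a plain tanh cell rather than the LSTM used in our experiments, and all dimensions and weight-matrix names (`E`, `W_hx`, `W_hh`, `W_oh`) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
V, D, H = 10, 8, 8                          # vocabulary, embedding, hidden sizes

E = rng.normal(scale=0.1, size=(D, V))      # embedding matrix (Eqn 1)
W_hx = rng.normal(scale=0.1, size=(H, D))   # input-to-hidden weights
W_hh = rng.normal(scale=0.1, size=(H, H))   # hidden-to-hidden weights
W_oh = rng.normal(scale=0.1, size=(V, H))   # hidden-to-output weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def lm_step(word_id, h):
    """One timestep: embed the current word, update the state, predict."""
    x = E[:, word_id]                        # Eqn 1: embed a one-hot word
    h = np.tanh(W_hx @ x + W_hh @ h)         # Eqn 2: new hidden state
    p_next = softmax(W_oh @ h)               # Eqn 3: next-word distribution
    return h, p_next

h0 = np.zeros(H)                             # a learned parameter in the real model
h, p = lm_step(3, h0)
```

In the full model the tanh update in `lm_step` is replaced by an LSTM cell, and $h_0$ is learned rather than fixed to zeros.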
2.2 Multimodal Language Model (mlm)
The Recurrent Language Model (lm) generates sequences of words conditioned only on the previously seen words (and the hidden layer), and thus cannot use visual input for image description. In the multimodal language model (mlm), however, sequence generation is additionally conditioned on image features, resulting in a model that generates word sequences corresponding to the image. The image features $v$ (for visual) are input to the model at the first timestep:[2]

$$h_1 = \phi(W_{hx}\,x_1 + W_{hh}\,h_0 + W_{hv}\,v) \quad (4)$$

[2] Adding the image features at every timestep reportedly results in overfitting (Karpathy & Fei-Fei, 2015; Vinyals et al., 2015), with the exception of the m-RNN (Mao et al., 2015).
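The mlm's first-timestep conditioning might be sketched as follows; the tanh cell, toy sizes, and the projection name `W_hv` are assumptions for illustration (the real model uses an LSTM and 4096-dimensional CNN features):

```python
import numpy as np

rng = np.random.default_rng(1)
D, H, DV = 8, 8, 16                          # embedding, hidden, image dims

W_hx = rng.normal(scale=0.1, size=(H, D))
W_hh = rng.normal(scale=0.1, size=(H, H))
W_hv = rng.normal(scale=0.1, size=(H, DV))   # image-to-hidden projection

def first_step(x1, h0, v):
    # Image features v enter only at t=1 ...
    return np.tanh(W_hx @ x1 + W_hh @ h0 + W_hv @ v)

def later_step(x, h):
    # ... and are omitted at every subsequent timestep.
    return np.tanh(W_hx @ x + W_hh @ h)

x1, v = rng.normal(size=D), rng.normal(size=DV)
h1 = first_step(x1, np.zeros(H), v)
h2 = later_step(rng.normal(size=D), h1)
```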
2.3 Translation Model (source-lm target-lm)
Our translation model is analogous to the multimodal language model above: instead of adding image features to our target language model, we add features from a source language model. This feature vector $s$ is the final hidden state extracted from a sequence model over the source language, the source-lm. The initial state for the target-lm is thus defined as:

$$h_1 = \phi(W_{hx}\,x_1 + W_{hh}\,h_0 + W_{hs}\,s) \quad (5)$$
We follow recent work on sequence-to-sequence architectures for neural machine translation (Cho et al., 2014; Sutskever et al., 2014) in calling the source language model the ‘encoder’ and the target language model the ‘decoder’. However, it is important to note that the source encoder is a viable model in its own right, rather than only learning features for the target decoder. We suspect this is what allows our translation model to learn on a very small dataset: instead of learning based on long distance gradients pushed from target to source (as in the sequence-to-sequence architecture), the source model weights are updated based on very local LM gradients. Despite not being optimised for translation, the source features turn out to be very effective for initialising the target language model, indicating that useful semantic information is captured in the final hidden state.
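The feature transfer can be sketched as follows: the source-lm is run over the (embedded) source sentence, and its final hidden state `s` conditions the target-lm's first timestep. The tanh cells, toy sizes, and the name `W_hs` are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
D, H = 8, 8

def run_rnn(embedded_words, W_hx, W_hh, h):
    """Run a simple recurrent encoder and return its final hidden state."""
    for x in embedded_words:
        h = np.tanh(W_hx @ x + W_hh @ h)
    return h

# Separate weights for the source and target language models.
Ws_hx = rng.normal(scale=0.1, size=(H, D))
Ws_hh = rng.normal(scale=0.1, size=(H, H))
Wt_hx = rng.normal(scale=0.1, size=(H, D))
Wt_hh = rng.normal(scale=0.1, size=(H, H))
W_hs = rng.normal(scale=0.1, size=(H, H))    # source-to-target projection

source_sentence = rng.normal(size=(5, D))    # 5 embedded source words
s = run_rnn(source_sentence, Ws_hx, Ws_hh, np.zeros(H))

# The first target timestep conditions on the transferred source features s.
x1 = rng.normal(size=D)
h1 = np.tanh(Wt_hx @ x1 + Wt_hh @ np.zeros(H) + W_hs @ s)
```

Because the source model is a language model in its own right, its weights receive local LM gradients rather than only gradients pushed back from the target decoder.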
2.4 Multilingual Multimodal Model (source-mlm target-mlm)
Finally, we can use both the image and the source language features in a combined multimodal translation model. If the image features are input on both the source and the target side, this results in a doubly multimodal multilingual model (source-mlm target-mlm). There are two alternative formulations: image features are input only to the source (source-mlm target-lm) or only to the target model (source-lm target-mlm). The initial state of the target-mlm, regardless of source model type, is:

$$h_1 = \phi(W_{hx}\,x_1 + W_{hh}\,h_0 + W_{hs}\,s + W_{hv}\,v) \quad (6)$$
2.5 Generating Sentences
We use the same description generation process for each model. First, a model is initialised with the special beginning-of-sentence token and any image or source features. At each timestep, the generated output is the maximum-probability word at the softmax layer, $o_t$, which is subsequently used as the input token at timestep $t+1$. This process continues until the model generates the end-of-sentence token, or until a pre-defined number of timesteps is reached (30 in our experiments, which is slightly more than the average sentence length in the training data).
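The generation procedure can be sketched as a greedy decoding loop. The toy model below is randomly initialised, so its output is meaningless; the marker token ids and the simplified hidden-state update are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
V, H = 12, 8
BOS, EOS, MAX_LEN = 0, 1, 30                 # marker ids are assumptions

E = rng.normal(scale=0.1, size=(H, V))       # embeddings (sized to H here)
W_hh = rng.normal(scale=0.1, size=(H, H))
W_oh = rng.normal(scale=0.5, size=(V, H))

def generate(h):
    words, w = [], BOS
    for _ in range(MAX_LEN):
        h = np.tanh(E[:, w] + W_hh @ h)
        w = int(np.argmax(W_oh @ h))         # maximum-probability word o_t
        if w == EOS:                         # stop at end-of-sentence...
            break
        words.append(w)                      # ...else feed w back in at t+1
    return words

sentence = generate(np.zeros(H))             # h would hold image/source features
```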
We use the IAPR-TC12 dataset, originally introduced in the ImageCLEF shared task for object segmentation and later expanded with complete image descriptions (Grubinger et al., 2006). This dataset contains 20,000 images with multiple descriptions in both English and German. Each sentence corresponds to a different aspect of the image, with the most salient objects likely being described in the first description (annotators were asked to describe parts of the image that hadn’t been covered in previous descriptions). We use only the first description of each image. Note that the English descriptions are the originals; the German data was professionally translated from English. Figure 3 shows an example image-bitext tuple from the dataset. We perform experiments using the standard splits of 17,665 images for training, from which we reserve 10% for hyperparameter estimation, and 1,962 for evaluation.
The descriptions are lowercased and tokenised using the ptbtokenizer.py script from the MS COCO evaluation tools.[3] We discarded words in the training data observed fewer than 3 times. This leaves a total of 272,172 training tokens for English over a vocabulary of 1,763 types, and 223,137 tokens for German over 2,374 types. Compared to the Flickr8K, Flickr30K, or MS COCO datasets, the English descriptions in the IAPR-TC12 dataset are long, with an average length of 23 words.[4]

[3] https://github.com/tylin/coco-caption
[4] This difference in length resulted in difficulties in initial experiments with pre- or co-training using other datasets. We plan to pursue this further in future work, since the independence of the source encoder in our model makes this kind of transfer learning very natural.
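The vocabulary thresholding step can be sketched as follows, with whitespace splitting standing in for ptbtokenizer.py and the `<unk>` token name an assumption:

```python
from collections import Counter

MIN_COUNT = 3  # discard words observed fewer than 3 times in training

def build_vocab(sentences):
    """Count lowercased tokens and keep only frequent-enough types."""
    counts = Counter(w for s in sentences for w in s.lower().split())
    return {w for w, c in counts.items() if c >= MIN_COUNT}

def apply_vocab(sentence, vocab):
    """Map out-of-vocabulary words to a placeholder token."""
    return [w if w in vocab else "<unk>" for w in sentence.lower().split()]

train = ["a brown dog", "a black dog", "a white dog", "one red bus"]
vocab = build_vocab(train)                       # {"a", "dog"}
tokens = apply_vocab("A red dog", vocab)         # ["a", "<unk>", "dog"]
```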
We extract the image features from the pre-trained VGG-16 CNN object recognition model (Simonyan & Zisserman, 2015). Specifically, our image features are extracted as fixed representations from the penultimate layer of the CNN, in line with recent work in this area.
mlm: the first baseline is a monolingual image description model, i.e. a multimodal language model for the target language with no source language features, but with image features.
source-lm target-lm: the second baseline is our translation model trained on only source and target descriptions without visual features. The final hidden state of the source-lm, after it has generated the source sentence, is input to the target-lm.
3.3 Multilingual Multimodal Model Variants
source-mlm target-mlm: In this model, both lms in the translation baseline are replaced with multimodal language models. The source features input to the target model are thus multimodal, i.e. they are word and image features captured over the source-language sentence. The target decoder is also conditioned on the image features directly. Note that the source and target W matrices are parameterised separately.
source-lm target-mlm: The source language features are generated by a lm; visual features are input only in the target model.
source-mlm target-lm: Visual input is given only to the source-mlm and the target-lm uses a single input vector from the source-mlm. This source encoder combines both linguistic and visual cues, to the extent that the visual features are represented in the source-mlm feature vector.
We use an LSTM (Hochreiter & Schmidhuber, 1997) as in the recurrent language model. The hidden layer size is set to 256 dimensions. The word embeddings are 256-dimensional and learned along with the other model parameters. We also experimented with larger hidden layers (as well as with deeper architectures); while these did result in improvements, they also took longer to train. The image features are the 4096-dimensional penultimate layer of the VGG-16 object recognition network (Simonyan & Zisserman, 2015) applied to the image.
3.5 Training and optimisation
The models are trained with mini-batches of 100 examples to minimise the objective function (cross-entropy of the predicted words) using the ADAM optimiser (Kingma & Ba, 2014). We use early stopping for model selection based on bleu4: if validation bleu4 has not increased for 10 epochs, and validation language model perplexity has stopped decreasing, training is halted.
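This early-stopping rule can be sketched as follows; the validation histories below are illustrative, and in practice bleu4 and perplexity are computed on held-out data after each epoch:

```python
PATIENCE = 10  # epochs without a validation BLEU4 improvement

def should_stop(bleu_history, pplx_history, patience=PATIENCE):
    """Halt once BLEU4 has stalled for `patience` epochs
    and perplexity is no longer decreasing."""
    if len(bleu_history) <= patience:
        return False
    best_epoch = max(range(len(bleu_history)), key=bleu_history.__getitem__)
    bleu_stalled = len(bleu_history) - 1 - best_epoch >= patience
    pplx_stalled = pplx_history[-1] >= min(pplx_history[:-1])
    return bleu_stalled and pplx_stalled

bleu = [10.0, 12.0] + [11.5] * 10       # no BLEU4 gain for 10 epochs
pplx = [9.0, 8.0] + [8.1] * 10          # perplexity no longer falling
stop = should_stop(bleu, pplx)          # True: both criteria met
```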
We apply dropout over the image features, source features, and word representations to discourage overfitting (Srivastava et al., 2014). The objective function also includes an L2 regularisation term.
All results reported are averages over three runs with different Glorot-style uniform weight initialisations (Glorot & Bengio, 2010). We report image description quality using bleu4 (Papineni et al., 2002), Meteor (Denkowski & Lavie, 2014), and language-model perplexity. Meteor has been shown to correlate better with human judgements than bleu4 for image description (Elliott & Keller, 2014). The bleu4 and Meteor scores are calculated using MultEval (Clark et al., 2011).
Table 1: English image description results (mean ± standard deviation over three runs).

| Model | bleu4 | Meteor | PPLX |
|---|---|---|---|
| En mlm | 14.2 ± 0.3 | 15.4 ± 0.2 | 6.7 ± 0.0 |
| De lm → En lm | 21.3 ± 0.5 | 19.6 ± 0.2 | 6.0 ± 0.1 |
| Mao et al. (2015) | 20.8 | — | 6.92 |
| De mlm → En mlm | 18.0 ± 0.3 | 18.0 ± 0.2 | 6.3 ± 0.1 |
| De lm → En mlm | 17.3 ± 0.5 | 17.6 ± 0.5 | 6.3 ± 0.0 |
| De mlm → En lm | 23.1 ± 0.1 | 20.9 ± 0.0 | 5.7 ± 0.1 |
The results for image description in both German and English are presented in Tables 1 and 2; generation examples can be seen in Figures 6, 7, 8 in Appendix B.[5] To our knowledge, these are the first published results for German image description. Overall, we found that English image description is easier than German description, as measured by bleu4 and Meteor scores. This may be caused by the more complex German morphology, which results in a larger vocabulary and hence more model parameters.

[5] Visit https://staff.fnwi.uva.nl/d.elliott/GroundedTranslation/ to see 1,766 examples generated by each model for the validation data.
The English monolingual image description model (En-mlm) is comparable with state-of-the-art models, which typically report results on the Flickr8K / Flickr30K datasets. En-mlm achieves a bleu4 score of 15.8 on the Flickr8K dataset, nearly matching the score from Karpathy & Fei-Fei (2015) (16.0), which uses an ensemble of models and beam search decoding. On the IAPR-TC12 dataset, the En-mlm baseline outperforms Kiros et al. (2014).[6] Mao et al. (2015) report higher performance, but evaluate on all reference descriptions, making the figures incomparable.

[6] Kiros et al. (2014) report bleu1-2-3; their best model is reported at 9.8 bleu3.
All multilingual models beat the monolingual image description baseline, by up to 8.9 bleu4 and 8.8 Meteor points for the best models. Clearly the features transferred from the source models are useful for the target-lm or target-mlm description generator, despite the switch in languages.
Table 2: German image description results (mean ± standard deviation over three runs).

| Model | bleu4 | Meteor | PPLX |
|---|---|---|---|
| De mlm | 9.5 ± 0.2 | 20.4 ± 0.2 | 10.35 ± 0.1 |
| En lm → De lm | 17.8 ± 0.7 | 29.9 ± 0.5 | 8.95 ± 0.4 |
| En mlm → De mlm | 11.4 ± 0.7 | 23.2 ± 0.9 | 9.69 ± 0.1 |
| En lm → De mlm | 12.1 ± 0.5 | 24.0 ± 0.3 | 10.2 ± 0.7 |
| En mlm → De lm | 17.0 ± 0.3 | 29.2 ± 0.2 | 8.84 ± 0.3 |
The translation baseline without visual features performs very well.[7] This indicates the effectiveness of our translation model, even without joint training, but is also an artifact of the dataset. A different dataset with independently elicited descriptions (rather than translations of English descriptions) may result in worse performance for a translation system that is not visually grounded, because the target descriptions would only be comparable to the source descriptions.

[7] The bleu4 and Meteor scores in Table 2 for En lm → De lm and En mlm → De lm are not significantly different according to the MultEval approximate randomization significance test.
Overall, the multilingual models that encode the source using an mlm outperform the source-lm models. On the target side, simple lm decoders perform better than mlm decoders. This can be explained to some extent by the smaller number of parameters in models that do not input the visual features twice. Incorporating the image features on the source side seems to be more effective, possibly because the source is constrained to the gold description at test time, leading to a more coherent match between visual and linguistic features. Conversely, the target-mlm variants tend to be worse sentence generators than the lm models, indicating that while visual features lead to useful hidden state values, there is room for improving their role during generation.
What do source features add beyond image features? Source features are most useful when the baseline mlm does not successfully separate related images. The image description models have to compress the image feature vector into the same number of dimensions as the hidden layer in the recurrent network, effectively distilling the image down to the features that correspond to the words in the description. If this step of the model is prone to mistakes, the resulting descriptions will be of poor quality. However, our best multilingual models are initialised with features transferred from image description models in a different language. In these cases, the source language features have already compressed the image features for the source language image description task.
Qualitatively, we can illustrate this effect using Barnes-Hut t-SNE projections of the initial hidden representations of our models (van der Maaten, 2014). Figure 4 shows the t-SNE projection of the example from Figure 7 using the initial hidden state of an En mlm (left) and the target side of the De mlm En mlm (right). In the monolingual example, the nearest neighbours of the target image are desert scenes with groups of people. Adding the transferred source features results in a representation that places importance on the background, due to the fact that it is consistently mentioned in the descriptions. Now the nearest neighbours are images of mountainous snow regions with groups of people.
Which descriptions are improved by source or image features? Figure 5 shows the distribution of sentence-level Meteor scores of the baseline models (monolingual mlm and monomodal lm lm) and the average per-sentence change when moving to our best performing multilingual multimodal model (source-mlm target-lm). The additional source language features (compared to mlm) or additional modality (compared to lm lm) result in similar patterns: low quality descriptions are improved, while the (far less common) high quality descriptions deteriorate.
Adding image features seems to be riskier than adding source language features, which is unsurprising given the larger distance between visual and linguistic space, versus moving from one language to another. This is also consistent with the lower performance of the mlm baseline models compared to the lm → lm models.
An analysis of the lm → mlm model (not shown here) shows similar behaviour to the mlm → lm model above. However, for this model the decreasing performance starts earlier: the lm → mlm model improves over the lm → lm baseline only in the lowest score bin. Adding the image features at the source side, rather than the target side, seems to filter out some of the noise and complexity of the image features, while the essential source language features are retained. Conversely, merging the source language features with image features on the target side, in the target-mlm models, leads to a less helpful entangling of linguistic and noisier image input, maybe because too many sources of information are combined at the same time (see Eqn 6).
6 Related Work
The past few years have seen numerous results showing how relatively standard neural network model architectures can be applied to a variety of tasks. The flexibility of the application of these architectures can be seen as a strong point, indicating that the representations learned in these general models are sufficiently powerful to lead to good performance. Another advantage, which we have exploited in the work presented here, is that it becomes relatively straightforward to make connections between models for different tasks, in this case image description and machine translation.
Automatic image description has received a great deal of attention in recent years (see Bernardi et al. (2016) for a more detailed overview of the task, datasets, models, and evaluation issues). Deep neural networks for image description typically estimate a joint image-sentence representation in a multimodal recurrent neural network (RNN) (Kiros et al., 2014; Donahue et al., 2014; Vinyals et al., 2015; Karpathy & Fei-Fei, 2015; Mao et al., 2015). The main difference between these models and discrete tuple-based representations for image description (Farhadi et al., 2010; Yang et al., 2011; Li et al., 2011; Mitchell et al., 2012; Elliott & Keller, 2013; Yatskar et al., 2014; Elliott & de Vries, 2015) is that it is not necessary to explicitly define the joint representation; the structure of the neural network can be used to estimate the optimal joint representation for the description task. As in our mlm, the image–sentence representation in the multimodal RNN is initialised with image features from the final fully-connected layer of a convolutional neural network trained for multi-class object recognition (Krizhevsky et al., 2012). Alternative formulations input the image features into the model at each timestep (Mao et al., 2015), or first detect words in an image and generate sentences using a maximum-entropy language model (Fang et al., 2015).
In the domain of machine translation, a greater variety of neural models have been used for subtasks within the MT pipeline, such as neural network language models (Schwenk, 2012) and joint translation and language models for re-ranking in phrase-based translation models (Le et al., 2012; Auli et al., 2013) or directly during decoding (Devlin et al., 2014). More recently, end-to-end neural MT systems using Long Short-Term Memory Networks and Gated Recurrent Units have been proposed as Encoder-Decoder models for translation (Sutskever et al., 2014; Bahdanau et al., 2015), and have proven to be highly effective (Bojar et al., 2015; Jean et al., 2015).
In the multimodal modelling literature, there are related approaches using visual and textual information to build representations for word similarity and categorization tasks (Silberer & Lapata, 2014; Kiela & Bottou, 2014; Kiela et al., 2015). Silberer & Lapata combine textual and visual modalities by jointly training stacked autoencoders, while Kiela & Bottou construct multi-modal representations by concatenating distributed linguistic and visual feature vectors. More recently, Kiela et al. (2015) induced a bilingual lexicon by grounding the lexical entries in CNN features. In all cases, the results show that the bimodal representations are superior to their unimodal counterparts.
We introduced multilingual image description, the task of generating descriptions of an image given a corpus of descriptions in multiple languages. This new task not only expands the range of output languages for image description, but also raises new questions about how to integrate features from multiple languages, as well as multiple modalities, into an effective generation model.
Our multilingual multimodal model is loosely inspired by the encoder-decoder approach to neural machine translation. Our encoder captures a multimodal representation of the image and the source-language words, which is used as an additional conditioning vector for the decoder, which produces descriptions in the target language. Each conditioning vector is originally trained towards its own objective: the CNN image features are transferred from an object recognition model, and the source features are transferred from a source-language image description model. Our model substantially improves the quality of the descriptions in both directions compared to monolingual baselines.
The dataset used in this paper consists of translated descriptions, leading to high performance for the translation baseline. However, we believe that multilingual image description should be based on independently elicited descriptions in multiple languages, rather than literal translations. Linguistic and cultural differences may lead to very different descriptions being appropriate for different languages (for example, a polder is highly salient to a Dutch speaker but not to an English speaker; an image of a polder would likely lead to different descriptions, beyond simply lexical choice). In such cases image features will be essential.
A further open question is whether the benefits of multiple monolingual references extend to multiple multilingual references. Image description datasets typically include multiple reference sentences, which are essential for capturing linguistic diversity within a single language (Rashtchian et al., 2010; Elliott & Keller, 2013; Hodosh et al., 2013; Chen et al., 2015). In our experiments, we found that useful image description diversity can also be found in other languages instead of in multiple monolingual references.
In the future, we would like to explore attention-based recurrent neural networks, which have been used for machine translation (Bahdanau et al., 2015; Jean et al., 2015) and image description (Xu et al., 2015). We also plan to apply these models to other language pairs, such as the recently released PASCAL 1K Japanese Translations dataset (Funaki & Nakayama, 2015). Lastly, we aim to apply these types of models to a multilingual video description dataset (Chen & Dolan, 2011).
D. Elliott was supported by an Alain Bensoussan Career Development Fellowship. S. Frank is supported by funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement Nr. 645452.
We thank Philip Schulz, Khalil Sima’an, Arjen P. de Vries, Lynda Hardman, Richard Glassey, Wilker Aziz, Joost Bastings, and Ákos Kádár for discussions and feedback on the work. We built the models using the Keras library, which is built on top of Theano. We are grateful to the Database Architectures Group at CWI for access to their K20x GPUs.
- Auli et al. (2013) Auli, Michael, Galley, Michel, Quirk, Chris, and Zweig, Geoffrey. Joint language and translation modeling with recurrent neural networks. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 1044–1054. Association for Computational Linguistics, 2013.
- Bahdanau et al. (2015) Bahdanau, Dzmitry, Cho, Kyunghyun, and Bengio, Yoshua. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR, 2015.
- Bernardi et al. (2016) Bernardi, Raffaella, Cakici, Ruken, Elliott, Desmond, Erdem, Aykut, Erdem, Erkut, Ikizler-Cinbis, Nazli, Keller, Frank, Muscat, Adrian, and Plank, Barbara. Automatic description generation from images: A survey of models, datasets, and evaluation measures. To appear in JAIR, 2016.
- Bojar et al. (2015) Bojar, Ondřej, Chatterjee, Rajen, Federmann, Christian, Haddow, Barry, Huck, Matthias, Hokamp, Chris, Koehn, Philipp, Logacheva, Varvara, Monz, Christof, Negri, Matteo, Post, Matt, Scarton, Carolina, Specia, Lucia, and Turchi, Marco. Findings of the 2015 workshop on statistical machine translation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pp. 1–46, Lisbon, Portugal, September 2015. Association for Computational Linguistics.
- Chen & Dolan (2011) Chen, David L. and Dolan, William B. Building a persistent workforce on Mechanical Turk for multilingual data collection. In Proceedings of The 3rd Human Computation Workshop (HCOMP 2011), August 2011.
- Chen et al. (2015) Chen, Xinlei, Fang, Hao, Lin, Tsung-Yi, Vedantam, Ramakrishna, Gupta, Saurabh, Dollár, Piotr, and Zitnick, C. Lawrence. Microsoft COCO captions: Data collection and evaluation server. CoRR, abs/1504.00325, 2015.
- Cho et al. (2014) Cho, Kyunghyun, van Merrienboer, Bart, Gulcehre, Caglar, Bahdanau, Dzmitry, Bougares, Fethi, Schwenk, Holger, and Bengio, Yoshua. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1724–1734. Association for Computational Linguistics, 2014.
- Clark et al. (2011) Clark, Jonathan H., Dyer, Chris, Lavie, Alon, and Smith, Noah A. Better hypothesis testing for statistical machine translation: Controlling for optimizer instability. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers - Volume 2, pp. 176–181, 2011.
- Denkowski & Lavie (2014) Denkowski, M. and Lavie, A. Meteor Universal: Language Specific Translation Evaluation for Any Target Language. In EACL Workshop on Statistical Machine Translation, 2014.
- Devlin et al. (2014) Devlin, Jacob, Zbib, Rabih, Huang, Zhongqiang, Lamar, Thomas, Schwartz, Richard, and Makhoul, John. Fast and robust neural network joint models for statistical machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1370–1380. Association for Computational Linguistics, 2014.
- Donahue et al. (2014) Donahue, J., Hendricks, L. A., Guadarrama, S., Rohrbach, M., Venugopalan, S., Saenko, K., and Darrell, T. Long-term recurrent convolutional networks for visual recognition and description. CoRR, abs/1411.4389, 2014.
- Elliott & de Vries (2015) Elliott, Desmond and de Vries, Arjen P. Describing images using inferred visual dependency representations. In ACL, pp. 42–52, 2015.
- Elliott & Keller (2013) Elliott, Desmond and Keller, Frank. Image Description using Visual Dependency Representations. In EMNLP, pp. 1292–1302, 2013.
- Elliott & Keller (2014) Elliott, Desmond and Keller, Frank. Comparing Automatic Evaluation Measures for Image Description. In ACL, pp. 452–457, 2014.
- Fang et al. (2015) Fang, Hao, Gupta, Saurabh, Iandola, Forrest, Srivastava, Rupesh K., Deng, Li, Dollar, Piotr, Gao, Jianfeng, He, Xiaodong, Mitchell, Margaret, Platt, John C., Lawrence Zitnick, C., and Zweig, Geoffrey. From captions to visual concepts and back. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.
- Farhadi et al. (2010) Farhadi, Ali, Hejrati, M, Sadeghi, Mohammad Amin, Young, P, Rashtchian, C, Hockenmaier, J, and Forsyth, David. Every picture tells a story: Generating sentences from images. In ECCV, 2010.
- Funaki & Nakayama (2015) Funaki, Ruka and Nakayama, Hideki. Image-mediated learning for zero-shot cross-lingual document retrieval. In Empirical Methods in Natural Language Processing 2015, pp. 585–590, 2015.
- Glorot & Bengio (2010) Glorot, Xavier and Bengio, Yoshua. Understanding the difficulty of training deep feedforward neural networks. In International Conference on Artificial Intelligence and Statistics, pp. 249–256, 2010.
- Grubinger et al. (2006) Grubinger, M., Clough, P. D., Muller, H., and Thomas, D. The IAPR TC-12 benchmark: A new evaluation resource for visual information systems. In LREC, 2006.
- Hochreiter & Schmidhuber (1997) Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural Comput., 9(8):1735–1780, November 1997. ISSN 0899-7667.
- Hodosh et al. (2013) Hodosh, Micah, Young, P, and Hockenmaier, J. Framing Image Description as a Ranking Task: Data, Models and Evaluation Metrics. Journal of Artificial Intelligence Research, 47:853–899, 2013.
- Jean et al. (2015) Jean, Sébastien, Firat, Orhan, Cho, Kyunghyun, Memisevic, Roland, and Bengio, Yoshua. Montreal neural machine translation systems for WMT'15. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pp. 134–140, Lisbon, Portugal, September 2015. Association for Computational Linguistics.
- Karpathy & Fei-Fei (2015) Karpathy, Andrej and Fei-Fei, Li. Deep visual-semantic alignments for generating image descriptions. arXiv, 2015.
- Kiela & Bottou (2014) Kiela, Douwe and Bottou, Léon. Learning Image Embeddings using Convolutional Neural Networks for Improved Multi-Modal Semantics. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP-14), 2014.
- Kiela et al. (2015) Kiela, Douwe, Vulić, Ivan, and Clark, Stephen. Visual bilingual lexicon induction with transferred convnet features. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 148–158, 2015.
- Kingma & Ba (2014) Kingma, Diederik P. and Ba, Jimmy. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
- Kiros et al. (2014) Kiros, Ryan, Salakhutdinov, Ruslan, and Zemel, Rich. Multimodal neural language models. In ICML, pp. 595–603, 2014.
- Krizhevsky et al. (2012) Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. Imagenet classification with deep convolutional neural networks. In Pereira, F., Burges, C.J.C., Bottou, L., and Weinberger, K.Q. (eds.), Advances in Neural Information Processing Systems 25, pp. 1097–1105. Curran Associates, Inc., 2012.
- Le et al. (2012) Le, Hai-Son, Allauzen, Alexandre, and Yvon, Francois. Continuous space translation models with neural networks. In Proceedings of NAACL HLT, pp. 39–48, 2012.
- Li et al. (2011) Li, S., Kulkarni, G., Berg, T. L., Berg, A. C., and Choi, Y. Composing simple image descriptions using web-scale n-grams. In CoNLL, 2011.
- Mao et al. (2015) Mao, Junhua, Xu, Wei, Yang, Yi, Wang, Jiang, Huang, Zhiheng, and Yuille, Alan. Deep captioning with multimodal recurrent neural networks (m-RNN). ICLR, 2015.
- Mikolov et al. (2010) Mikolov, Tomas, Karafiát, Martin, Burget, Lukas, Cernocký, Jan, and Khudanpur, Sanjeev. Recurrent neural network based language model. In Proceedings of INTERSPEECH, 2010.
- Mitchell et al. (2012) Mitchell, Margaret, Han, Xufeng, Dodge, Jesse, Mensch, Alyssa, Goyal, Amit, Berg, A. C., Yamaguchi, Kota, Berg, T. L., Stratos, Karl, and Daumé III, Hal. Midge: Generating image descriptions from computer vision detections. In EACL, 2012.
- Papineni et al. (2002) Papineni, K., Roukos, S., Ward, T., and Zhu, W.-J. BLEU: A method for automatic evaluation of machine translation. In ACL, 2002.
- Rashtchian et al. (2010) Rashtchian, C., Young, P., Hodosh, M., and Hockenmaier, J. Collecting image annotations using Amazon’s Mechanical Turk. In NAACL-HLT Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk, 2010.
- Schwenk (2012) Schwenk, Holger. Continuous space translation models for phrase-based statistical machine translation. In Proceedings of COLING (Posters), pp. 1071–1080, 2012.
- Silberer & Lapata (2014) Silberer, Carina and Lapata, Mirella. Learning grounded meaning representations with autoencoders. In Proceedings of ACL, 2014.
- Simonyan & Zisserman (2015) Simonyan, Karen and Zisserman, Andrew. Very deep convolutional networks for large-scale image recognition. In ICLR ’15, 2015.
- Srivastava et al. (2014) Srivastava, Nitish, Hinton, Geoffrey, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958, 2014.
- Sutskever et al. (2014) Sutskever, Ilya, Vinyals, Oriol, and Le, Quoc V. Sequence to sequence learning with neural networks. In Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N.D., and Weinberger, K.Q. (eds.), Advances in Neural Information Processing Systems 27, pp. 3104–3112. Curran Associates, Inc., 2014.
- van der Maaten (2014) van der Maaten, L.J.P. Accelerating t-SNE using tree-based algorithms. Journal of Machine Learning Research, 15:3221–3245, 2014.
- Vinyals et al. (2015) Vinyals, Oriol, Toshev, Alexander, Bengio, Samy, and Erhan, Dumitru. Show and tell: A neural image caption generator. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.
- Xu et al. (2015) Xu, Kelvin, Ba, Jimmy, Kiros, Ryan, Cho, Kyunghyun, Courville, Aaron C., Salakhutdinov, Ruslan, Zemel, Richard S., and Bengio, Yoshua. Show, attend and tell: Neural image caption generation with visual attention. CoRR, abs/1502.03044, 2015.
- Yang et al. (2011) Yang, Y, Teo, C L, Daume, III, Hal, and Aloimonos, Y. Corpus-guided sentence generation of natural images. In EMNLP, 2011.
- Yatskar et al. (2014) Yatskar, M., Vanderwende, L., and Zettlemoyer, L. See no evil, say no evil: Description generation from densely labeled images. In *SEM, 2014.
Appendix A Validation results
| English (hidden units = 256) | BLEU4 |
| --- | --- |
| En mlm | 15.99 ± 0.38 |
| De mlm → En mlm | 20.63 ± 0.07 |
| De mlm → En lm | 27.55 ± 0.41 |
| De lm → En mlm | 19.44 ± 0.65 |
| De lm → En lm | 23.78 ± 0.71 |

| German (hidden units = 256) | BLEU4 |
| --- | --- |
| De mlm | 11.87 ± 0.37 |
| En mlm → De mlm | 16.03 ± 0.35 |
| En mlm → De lm | 21.88 ± 0.13 |
| En lm → De mlm | 15.42 ± 0.26 |
| En lm → De lm | 21.22 ± 0.74 |
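The BLEU4 scores above are corpus-level BLEU with n-grams up to length four (Papineni et al., 2002), reported as mean ± standard deviation over runs. For reference, the metric can be sketched in pure Python as below; this is an illustrative reimplementation, not the evaluation script used to produce the numbers in the table.

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def corpus_bleu4(candidates, references):
    """Corpus-level BLEU4.

    candidates: list of token lists (one per sentence).
    references: list of lists of token lists (multiple references per sentence).
    """
    p_num, p_den = [0] * 4, [0] * 4
    cand_len, ref_len = 0, 0
    for cand, refs in zip(candidates, references):
        cand_len += len(cand)
        # Closest reference length, for the brevity penalty.
        ref_len += min((abs(len(r) - len(cand)), len(r)) for r in refs)[1]
        for n in range(1, 5):
            cand_counts = Counter(ngrams(cand, n))
            max_ref = Counter()
            for r in refs:
                for gram, c in Counter(ngrams(r, n)).items():
                    max_ref[gram] = max(max_ref[gram], c)
            # Clipped n-gram matches.
            p_num[n - 1] += sum(min(c, max_ref[g]) for g, c in cand_counts.items())
            p_den[n - 1] += max(len(cand) - n + 1, 0)
    if min(p_num) == 0:
        return 0.0  # unsmoothed BLEU is zero if any precision is zero
    log_prec = sum(math.log(p_num[i] / p_den[i]) for i in range(4)) / 4
    bp = 1.0 if cand_len > ref_len else math.exp(1 - ref_len / max(cand_len, 1))
    return bp * math.exp(log_prec)
```

A candidate identical to its reference scores 1.0; partial overlaps score strictly between 0 and 1, discounted by the brevity penalty when the candidate is shorter than the closest reference.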
Appendix B Example Descriptions
We present examples of the descriptions generated by the models studied in this paper. In Figure 6, the monolingual mlm generates the best descriptions. However, in Figures 7 and 8, the best descriptions are generated by transferring source mlm features into a target mlm or a target lm.
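The transfer these examples illustrate — conditioning a target-language model on features from a trained source-language (m)lm — can be sketched conceptually as follows. All names, dimensions, and the learned projection here are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

# Conceptual sketch: features from a source-language model are projected
# into the initial hidden state of the target-language decoder. The random
# weights stand in for parameters that would be learned during training.
rng = np.random.default_rng(0)
H_SRC, H_TGT = 256, 256  # hidden sizes matching the validation tables

def source_features(src_tokens):
    """Stand-in for a trained source (m)lm: returns its final hidden state."""
    return rng.standard_normal(H_SRC)

def init_target_state(src_feats):
    """Project transferred features into the target decoder's initial state."""
    W = rng.standard_normal((H_TGT, H_SRC)) * 0.01  # placeholder weights
    return np.tanh(W @ src_feats)

h0 = init_target_state(source_features("ein mann auf einem rad".split()))
```

The target decoder would then generate its sentence from `h0` as usual, so the transferred multimodal or linguistic signal influences every generated word.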