Hindi Visual Genome: A Dataset for Multimodal English-to-Hindi Machine Translation

Shantipriya Parida             Ondřej Bojar
Charles University, Faculty of Mathematics and Physics,
Institute of Formal and Applied Linguistics,
Malostranské naměstí 25, 118 00
Prague, Czech Republic
{parida,bojar}@ufal.mff.cuni.cz
Satya Ranjan Dash
School of Computer Application,
KIIT University, Bhubaneswar-24,
Odisha, India
sdashfca@kiit.ac.in
Corresponding author
Abstract

Visual Genome is a dataset connecting structured image information with the English language. We present “Hindi Visual Genome”, a multimodal dataset consisting of text and images suitable for the English-Hindi multimodal machine translation task and for multimodal research. We selected short English segments (captions) from Visual Genome along with the associated images and automatically translated them to Hindi, followed by manual post-editing which took the associated images into account. We prepared a set of 31,525 segments, accompanied by a challenge test set of 1,400 segments. This challenge test set was created by searching for particularly ambiguous English words based on embedding similarity and manually selecting those where the image helps to resolve the ambiguity.

Our dataset is the first for multimodal English-Hindi machine translation and is freely available for non-commercial research purposes. Our Hindi version of Visual Genome also makes it possible to create Hindi image labelers or other practical tools.

Hindi Visual Genome also serves in the Workshop on Asian Translation (WAT) 2019 Multi-Modal Translation Task.

 

A Preprint

July 27, 2019

Keywords: Visual Genome · Multimodal Corpus · Parallel Corpus · Word Embedding · Neural Machine Translation (NMT) · Image Captioning

1 Introduction

Multimodal content is gaining popularity in the machine translation (MT) community due to its promise of improved translation quality and its usage in commercial applications such as image caption translation for online news articles or machine translation for e-commerce product listings [1, 2, 3, 4]. Although the general performance of neural machine translation (NMT) models is very good given large amounts of parallel texts, some inputs can remain genuinely ambiguous, especially if the input context is limited. One example is the word “mouse” in English (source), which can be translated into different forms in Hindi based on the context (e.g. either a computer mouse or a small rodent).

Data Set Items
Training Set 28,932
Development Test Set (D-Test) 998
Evaluation Test Set (E-Test) 1,595
Challenge Test Set (C-Test) 1,400
Table 1: Hindi Visual Genome corpus details. One item consists of an English source segment, its Hindi translation, the image and a rectangular region in the image.

There is a limited number of multimodal datasets available and even fewer of them are also multilingual. Our aim is to extend the set of languages available for multimodal experiments by adding a Hindi variant of a subset of Visual Genome.

Visual Genome (http://visualgenome.org/, [5]) is a large set of real-world images, each equipped with annotations of various regions in the image. The annotations include a plain text description of the region (usually sentence parts or short sentences, e.g. “a red ball in the air”) and also several other formally captured types of information (objects, attributes, relationships, region graphs, scene graphs, and question-answer pairs). We focus only on the textual descriptions of image regions and provide their translations into Hindi.

The main portion of our Hindi Visual Genome is intended for training purposes of tools like multimodal translation systems or Hindi image labelers. Every item consists of an image, a rectangular region in the image, the original English caption from Visual Genome and finally our Hindi translation. Additionally, we create a challenge test set with the same structure but a different sampling that promotes the presence of ambiguous words in the English captions with respect to their meaning and thus their Hindi translation. The final corpus statistics of the “Hindi Visual Genome” are in Table 1.
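For illustration, one corpus item can be represented and parsed as follows. This is a minimal sketch: the tab-separated column layout and the field names are our assumptions for illustration, not the official distribution format.

```python
from dataclasses import dataclass

@dataclass
class HVGItem:
    """One Hindi Visual Genome item: an image region with bilingual captions."""
    image_id: str   # identifier of the Visual Genome image
    x: int          # rectangular region: top-left corner and size
    y: int
    width: int
    height: int
    english: str    # original English caption from Visual Genome
    hindi: str      # post-edited Hindi translation

def parse_line(line: str) -> HVGItem:
    """Parse one tab-separated line of the corpus (hypothetical layout)."""
    image_id, x, y, w, h, en, hi = line.rstrip("\n").split("\t")
    return HVGItem(image_id, int(x), int(y), int(w), int(h), en, hi)
```

A line such as `2377357\t10\t20\t100\t80\ta red ball in the air\t…` would then yield one `HVGItem` carrying both the region geometry and the caption pair.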

The paper is organized as follows: In Section 2, we survey related multimodal multilingual datasets. Section 3 describes the way we selected and prepared the training set. Section 4 is devoted to the challenge test set: the method to find ambiguous words and the steps taken when constructing the test set, its final statistics and a brief discussion of our observations. We conclude in Section 5.

Creating such a dataset enables multimodal experiments with Hindi for various applications, and it can also facilitate the exploration of how language is grounded in vision.

2 Related Work

Multimodal neural machine translation is an emerging area where translation takes more than text as input. It also uses features from image or sound for generating the translated text. Combining visual features with language modeling has shown better results for image captioning and question answering [6, 7, 8].

Many experiments were carried out using images to improve machine translation, i.a. for resolving ambiguity due to different senses of words in different contexts. One of the starting points is “Flickr30k” [9]; a multilingual (English-German, English-French, and English-Czech) shared task based on multimodal translation was part of WMT 2018 [10]. [11] proposed a multimodal NMT system using image features for the Hindi-English language pair. Due to the lack of English-Hindi multimodal data, they used a synthetic training dataset and manually curated development and test sets for Hindi derived from the English part of the Flickr30k corpus [12]. [13] proposed a probabilistic method using pictures for word prediction constrained to a narrow set of choices, such as possible word senses. Their results suggest that images can help word sense disambiguation.

Different techniques then followed, using various neural network architectures for extracting and using the contextual information. One of the approaches was proposed by [1] for multimodal translation by replacing image embedding with an estimated posterior probability prediction for image categories.

3 Training Set Preparations

To produce the main part of our corpus, we have automatically translated and manually post-edited the English captions of the “Visual Genome” corpus into Hindi.

The starting point was 31,525 randomly selected images from Visual Genome. Of all the English-captioned regions available for each of the images, we randomly selected one. To obtain the Hindi translation, we followed these steps:

  1. We translated all 31,525 captions into Hindi using an NMT model (Tensor2Tensor, [14]) specifically trained for this purpose, as described in [15].

  2. We uploaded the image, the source English caption and its Hindi machine translation into a “Translation Validation Website” (http://ufallab.ms.mff.cuni.cz/~parida/index.html), which we designed as a simple interface for post-editing the translations. One important feature was the use of a Hindi on-screen keyboard (https://hinkhoj.com/api/) to enable proper text input even for users with limited operating systems.

  3. Our volunteers post-edited all the Hindi translations. The volunteers were selected based on their Hindi language proficiency.

  4. We manually verified and finalized the post-edited files to obtain the training and test data.

The split of the 31,525 items into the training, development and test sets as listed in Table 1 was again random.
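Such a random split can be sketched as follows; this is a minimal illustration, where the fixed seed and the function name are our additions and the set sizes are taken from Table 1.

```python
import random

def split_items(items, sizes=(28932, 998, 1595), seed=0):
    """Randomly partition the corpus items into the training, D-Test and
    E-Test sets with the sizes listed in Table 1 (28,932 + 998 + 1,595 = 31,525)."""
    assert sum(sizes) == len(items)
    shuffled = items[:]                      # leave the original list untouched
    random.Random(seed).shuffle(shuffled)    # fixed seed for reproducibility
    train = shuffled[:sizes[0]]
    dev = shuffled[sizes[0]:sizes[0] + sizes[1]]
    test = shuffled[sizes[0] + sizes[1]:]
    return train, dev, test
```

Note that the three main set sizes indeed sum to the 31,525 selected items; the 1,400-item challenge test set is sampled separately (Section 4).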

4 Challenge Test Set Preparations

In addition to the randomly selected 31,525 items described above, we prepared a challenge test set of 1,400 segments which need images for word sense disambiguation. To achieve this targeted selection, we first found the most ambiguous words in the whole “Visual Genome” corpus and then extracted segments containing them. The overall steps for obtaining the ambiguous words are shown in Figure 1.

Figure 1: Overall pipeline for ambiguous word finding from input corpus.

The detailed sequence of processing steps was as follows:

  1. Translate all English captions from the Visual Genome dataset (3.15 million unique strings) into Hindi using a baseline machine translation system, obtaining a synthetic parallel corpus. In this step, we used Google Translate.

  2. Apply word alignment on the synthetic parallel corpus using GIZA++ [16], in a wrapper (https://github.com/ufal/qtleap/blob/master/cuni_train/bin/gizawrapper.pl) that automatically symmetrizes two bidirectional alignments; we used the intersection alignment.

  3. Extract all pairs of aligned words in the form of a “translation dictionary”. The dictionary contains key/value pairs of the English word e and the set of all its Hindi translations {h1, …, hn}, i.e. it has the form of the mapping e → {h1, …, hn}.

  4. Train Hindi word2vec (W2V) [17] word embeddings. We used the gensim implementation (https://radimrehurek.com/gensim/tut1.html) [18] and trained it on the IITB Hindi Monolingual Corpus (http://www.cfilt.iitb.ac.in/iitb_parallel/iitb_corpus_download/), which contains about 45 million Hindi sentences. Using such a large collection of Hindi text improves the quality of the obtained embeddings.

  5. For each English word from the translation dictionary (see Step 3), get all Hindi translation words and their embeddings (Step 4).

  6. Apply the k-means clustering algorithm to the embedded Hindi words to organize them according to their word similarity.

    If we followed a solid definition of word senses and if we knew how many there are for a given source English word and how they match the meanings of the Hindi words, the k would correspond to the number of Hindi senses that the original English word expresses. We take the pragmatic approach and apply k-means for a range of values (k from 2 to 6).

  7. Evaluate the obtained clusters with the Silhouette Score, Davies-Bouldin Index (DBI), and Calinski-Harabasz Index (CHI) [19, 20]. Each of the selected scores reflects, in one way or another, the cleanliness of the clusters, i.e. their separation. For the final sorting (Step 8), we mix these scores using a simple average function.

    The rationale behind using these scores is that if the word embeddings of the Hindi translations can be clearly clustered into 2 or more senses, then the meaning distinctions are big enough to indicate that the original English word was ambiguous. The exact number of different meanings is not too important for our purpose.

  8. Sort the list in descending order to get the most ambiguous words (as approximated by the mean of clustering measures) at the top of the list.

  9. Manually check the list to validate that the selected ambiguous words indeed potentially need an image to disambiguate them. Select a cutoff and extract the most ambiguous English words.
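Steps 6-8 above can be sketched with scikit-learn as follows. This is a minimal illustration under our assumptions: the vectors are assumed to come from the trained word2vec model, the Davies-Bouldin score is negated so that higher always means cleaner clustering, and the three scores are averaged on their raw scales (the paper specifies only a simple average); in practice one may want to normalize each metric across words first, since the Calinski-Harabasz values are typically much larger than the other two.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import (silhouette_score,
                             davies_bouldin_score,
                             calinski_harabasz_score)

def ambiguity_score(vectors: np.ndarray, k_range=range(2, 7)) -> float:
    """Score how clearly the Hindi translations of one English word separate
    into clusters; a higher score suggests a more ambiguous source word."""
    best = -np.inf
    for k in k_range:
        if k >= len(vectors):            # need more points than clusters
            continue
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=0).fit_predict(vectors)
        score = np.mean([
            silhouette_score(vectors, labels),        # higher is better
            -davies_bouldin_score(vectors, labels),   # lower is better, negated
            calinski_harabasz_score(vectors, labels), # higher is better
        ])
        best = max(best, score)          # keep the best k in the range 2..6
    return best

def rank_ambiguous(dictionary, embeddings):
    """dictionary: English word -> list of Hindi translations;
    embeddings: Hindi word -> vector. Returns (word, score) pairs,
    most ambiguous first, for the manual validation step."""
    scored = []
    for en_word, hi_words in dictionary.items():
        vecs = np.array([embeddings[h] for h in hi_words if h in embeddings])
        if len(vecs) > 2:
            scored.append((en_word, ambiguity_score(vecs)))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

The sorted list would then be inspected manually (Step 9) to pick the cutoff and the final set of ambiguous English words.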

The result of this semi-automatic search and manual validation of the most ambiguous words was a list of 19 English words. For each of these words, we selected and extracted a number of items available in the original Visual Genome and applied the same manual validation of the Hindi translation as for the training and regular test sets. Incidentally, 7 images and English captions occur in both the training set and the challenge test set. (The English segments appearing in both the training data and the challenge test set are: A round concert block, Man stand in crane, Street sign on a pole in english and chinese, a fast moving train, a professional tennis court, bird characters on top of a brown cake, players name on his shirt.) The overlap in images (but using different regions) is larger: 359.

Table 2 lists the selected most ambiguous English words and the number of items in the final challenge test set with the given word on the English side. We tried to balance the selection so that the frequencies of the ambiguous words in the challenge test set roughly correspond to their original frequencies in Visual Genome.

Word Segment Count
1 Stand 180
2 Court 179
3 Players 137
4 Cross 137
5 Second 117
6 Block 116
7 Fast 73
8 Date 56
9 Characters 70
10 Stamp 60
11 English 42
12 Fair 41
13 Fine 45
14 Press 35
15 Forms 44
16 Springs 30
17 Models 25
18 Forces 9
19 Penalty 4
Total 1400
Table 2: Challenge test set: distribution of the ambiguous words.
(a) Street sign advising of penalty.
(b) The penalty box is white lined.
Figure 2: An illustration of two meanings of the word “penalty” exemplified with two images.

Figure 2 illustrates two sample items selected for the word “penalty” (Hindi translation omitted here). We see that for humans, the images clearly disambiguate the meaning of the word: the fine to be paid for honking vs. the kick in a soccer match.

Arguably, the surrounding English words in the source segments (e.g. “street” vs. “white lined”) can be used by machine translation systems to pick the correct translation even without access to the image. The size of the original dataset of images with captions, however, did not allow us to further limit the selection to segments where the text alone is not sufficient for the disambiguation.

5 Conclusion and Future Work

We presented a multimodal English-to-Hindi dataset. To the best of our knowledge, this is the first such dataset that includes an Indian language. The dataset can serve e.g. in Hindi image captioning but our primary intended use case was research into the employment of images as additional input to improve machine translation quality.

To this end, we also created a dedicated challenge test set with text segments containing ambiguous words where the image can help with the disambiguation. With this goal, the dataset also serves in the WAT 2019 (http://lotus.kuee.kyoto-u.ac.jp/WAT/WAT2019/index.html) shared task on multi-modal translation (https://ufal.mff.cuni.cz/hindi-visual-genome/wat-2019-multimodal-task).

We illustrated that the text-only information in the surrounding words could be sufficient for the disambiguation. One interesting research direction would thus be to ignore all the surrounding words and simply ask: given the image, what is the correct Hindi translation of this ambiguous English word? Another option we would like to pursue is to search larger datasets for cases where even the whole segment does not give a clear indication of the meaning of an ambiguous word.

Our “Hindi Visual Genome” is available for research and non-commercial use under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License (https://creativecommons.org/licenses/by-nc-sa/4.0/) at http://hdl.handle.net/11234/1-2997.

6 Acknowledgments

We are grateful to Vighnesh Chenthil Kumar, a summer intern from IIIT Hyderabad at Charles University for his help with the semi-automatic search for the most ambiguous words. The work was carried out during Shantipriya Parida’s post-doc funded by Charles University.

This work has been supported by the grants 19-26934X (NEUREM3) of the Czech Science Foundation and “Progress” Q18+Q48 of Charles University, and using language resources distributed by the LINDAT/CLARIN project of the Ministry of Education, Youth and Sports of the Czech Republic (projects LM2015071 and OP VVV VI CZ.02.1.01/0.0/0.0/16013/0001781).

References

  • [1] Chiraag Lala, Pranava Madhyastha, Josiah Wang, and Lucia Specia. Unraveling the contribution of image captioning and neural machine translation for multimodal machine translation. The Prague Bulletin of Mathematical Linguistics, 108(1):197–208, 2017.
  • [2] Anya Belz, Erkut Erdem, Katerina Pastra, and Krystian Mikolajczyk, editors. Proceedings of the Sixth Workshop on Vision and Language, VL@EACL 2017, Valencia, Spain, April 4, 2017. Association for Computational Linguistics, 2017.
  • [3] Desmond Elliott and Ákos Kádár. Imagination improves multimodal translation. CoRR, abs/1705.04350, 2017.
  • [4] Mingyang Zhou, Runxiang Cheng, Yong Jae Lee, and Zhou Yu. A visual attention grounding neural model for multimodal machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3643–3653, 2018.
  • [5] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123(1):32–73, 2017.
  • [6] Nasrin Mostafazadeh, Chris Brockett, Bill Dolan, Michel Galley, Jianfeng Gao, Georgios P. Spithourakis, and Lucy Vanderwende. Image-grounded conversations: Multimodal context for natural question and response generation. CoRR, abs/1701.08251, 2017.
  • [7] Linjie Yang, Kevin D. Tang, Jianchao Yang, and Li-Jia Li. Dense captioning with joint inference and visual context. CoRR, abs/1611.06949, 2016.
  • [8] Chang Liu, Fuchun Sun, Changhu Wang, Feng Wang, and Alan L. Yuille. MAT: A multimodal attentive translator for image captioning. CoRR, abs/1702.05658, 2017.
  • [9] Desmond Elliott, Stella Frank, Khalil Sima’an, and Lucia Specia. Multi30k: Multilingual English-German image descriptions. arXiv preprint arXiv:1605.00459, 2016.
  • [10] Loïc Barrault, Fethi Bougares, Lucia Specia, Chiraag Lala, Desmond Elliott, and Stella Frank. Findings of the third shared task on multimodal machine translation. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 304–323, 2018.
  • [11] Koel Dutta Chowdhury, Mohammed Hasanuzzaman, and Qun Liu. Multimodal neural machine translation for low-resource language pairs using synthetic data. ACL 2018, page 33, 2018.
  • [12] Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In Proceedings of the IEEE international conference on computer vision, pages 2641–2649, 2015.
  • [13] Kobus Barnard and Matthew Johnson. Word sense disambiguation with pictures. Artificial Intelligence, 167(1-2):13–30, 2005.
  • [14] Ashish Vaswani, Samy Bengio, Eugene Brevdo, Francois Chollet, Aidan N. Gomez, Stephan Gouws, Llion Jones, Łukasz Kaiser, Nal Kalchbrenner, Niki Parmar, Ryan Sepassi, Noam Shazeer, and Jakob Uszkoreit. Tensor2tensor for neural machine translation. CoRR, abs/1803.07416, 2018.
  • [15] Shantipriya Parida and Ondřej Bojar. Translating Short Segments with NMT: A Case Study in English-to-Hindi. In Proceedings of EAMT 2018, 2018.
  • [16] Franz Josef Och and Hermann Ney. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51, 2003.
  • [17] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781, 2013.
  • [18] Radim Řehůřek and Petr Sojka. Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45–50, Valletta, Malta, May 2010. ELRA. http://is.muni.cz/publication/884893/en.
  • [19] Renato Cordeiro de Amorim and Christian Hennig. Recovering the number of clusters in data sets with noise features using feature rescaling factors. Information Sciences, 324:126–145, 2015.
  • [20] D. L. Davies and D. W. Bouldin. A cluster separation measure. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-1(2):224–227, April 1979.