Understanding Visual Ads by Aligning Symbols and Objects using Co-Attention

Karuna Ahuja  Karan Sikka  Anirban Roy  Ajay Divakaran
(karuna.ahuja, karan.sikka, anirban.roy, ajay.divakaran)@sri.com
SRI International, Princeton, NJ
Abstract

We tackle the problem of understanding visual ads: given an ad image, our goal is to rank appropriate human-generated statements describing the purpose of the ad. This problem is generally addressed by jointly embedding images and candidate statements to establish correspondence. Decoding a visual ad requires inference of both the semantic and symbolic nuances referenced in an image, and prior methods may fail to capture such associations, especially with weakly annotated symbols. In order to create better embeddings, we leverage an attention mechanism to associate image proposals with symbols and thus effectively aggregate information from the aligned multimodal representations. We propose a multihop co-attention mechanism that iteratively refines the attention map to ensure accurate attention estimation. Our attention-based embedding model is learned end-to-end, guided by a max-margin loss function. We show that our model outperforms other baselines on the benchmark Ads dataset and present qualitative results to highlight the advantages of using multihop co-attention.

1 Introduction

We address the problem of understanding visual advertisements, which is a special case of visual content analysis [9]. While current vision approaches can successfully address object [13, 12, 14] and scene [25, 7, 3] centric interpretations of an image, deeper subjective interpretations such as rhetoric, symbolism, etc. remain challenging and have drawn limited attention from the vision community.

Figure 1: Figure shows the image and symbol attention on a PSA ad about the importance of following safety laws. Our algorithm learns to iteratively infer the reference of the symbol ‘death’ with the relevant image content.

Recently, visual ad-understanding has been addressed by Hussain et al. [9], who also introduced a dataset to evaluate ad-understanding approaches. The authors introduce several sub-problems, such as identifying the underlying topics in the ad, predicting the sentiments and symbolism referenced in the ads, and associating an action and corresponding reason in response to an ad. In this work, we target understanding of ad images by formulating the task as multimodal matching of an image and its corresponding human-generated statements [24]. These statements were obtained by asking annotators to answer prompts such as "I should do this because" [9]. Note that for interpreting the rhetoric conveyed by an ad image, we need to exploit the semantic, symbolic, and sentimental references made in the image. Moreover, these alternate references are correlated and influence each other to convey a specific message. For example, an ad associated with the symbols ‘humor’ and ‘fun’ is likely to evoke an ‘amused’ sentiment rather than a ‘sad’ one. Thus, our proposed approach considers the interactions between these references for interpreting ads.

Currently, the amount of labeled data for the task of understanding ads is limited as annotating images with symbols and sentiments is both subjective and ambiguous. To tackle these challenges, we propose a novel weakly supervised learning (WSL) algorithm that learns to effectively combine multiple references present in an image by using an iterative co-attention mechanism. In this work, we focus only on semantic references (made via visual content) and symbolic references, and later discuss ideas for including sentiments and object information within our model.

We first obtain scores for symbolic and other references made in an image by using a model pretrained on labeled data [9]. These scores describe symbols at the image level instead of at a region-level granularity, owing to the difficulty of labeling region-symbol associations. This is often referred to as a WSL setting [18, 5, 10] and poses specific challenges for understanding ads, as different regions are generally associated with different symbols. As previously mentioned, we pose decoding ads as a multimodal matching problem and use Visual Semantic Embeddings (VSE) [12] to jointly embed an image and a sentence into a common vector space for establishing correspondence. However, due to the ambiguity of region-label associations, VSE may fail to correctly align visual regions with symbolic references and is thus unable to fuse information in an optimal manner for decoding the ad. This motivates us to leverage an attention-driven approach [2, 22], where the prediction task is used to guide the alignment between the input modalities by predicting attention maps. For example, in Visual Question Answering (VQA) the task of predicting answers is used to train the question-to-image attention module [15, 1, 4].

Attention has been shown to improve tasks such as VQA [15], object and scene understanding [18, 5], and action recognition [10, 19, 6]. Commonly used top-down attention identifies the discriminative regions of an image based on the final task [24, 20]. In ad understanding, image regions may be associated with multiple symbols; thus standard top-down attention may get confused by the many-to-many mappings between image regions and image labels. To address this issue, we consider co-attention [15, 16] to implement an alternating attention from a set of image-level symbols to image regions and vice versa. Moreover, recent works demonstrate that attention maps can be refined in subsequent iterations [16, 21, 23]. Thus, we consider multihop attention (fig. 1) between image regions and symbols, where the attention is computed iteratively and the attention estimate at the current step depends on the attention from the previous step. We finally fuse information from the attended image and symbols from different iterations for multimodal embedding. We also leverage bottom-up and top-down attention [24, 1] by estimating attention on object proposals, leading to improved alignments. Our work differs from that of Ye et al. [24], which uses (top-down) attention to attend to image regions and combines additional information from other references through simple fusion and additional learning constraints. In contrast, our model is principled in using attention to fuse information from the visual and other modalities via co-attention with multiple hops. Our initial experiments show that adopting co-attention with multiple hops outperforms standard top-down and bottom-up attention in terms of overall ad-understanding performance.

2 Approach

Figure 2: Figure (best seen in color) shows a block diagram of our algorithm (VSE-CoAtt-2) that uses co-attention with multiple hops between the visual and symbol modalities. The blocks in red and blue compute attention for the image and symbols, respectively, using the attended vector from the other modality.

We propose a co-attention based VSE approach, referred to as VSE-CoAtt, for jointly embedding advertisements and their corresponding ad messages. VSE-CoAtt estimates attention for image regions by using symbol information and subsequently uses the attended image to predict attention over the image symbols. This co-attention formulation allows our method to align the visual and symbol modalities and fuse them effectively. We also propose a multihop version of our algorithm, referred to as VSE-CoAtt-2, that iteratively refines the attention masks for the two modalities (visual and symbolic) and summarizes the information to compute similarity with an ad statement (fig. 2). We denote an ad image as $I$ and its ground-truth statements as $\{y_j\}$. We use an embedding to represent each word and an LSTM [8] to encode a statement, denoted as $t$. We use $N$ object proposals, denoted as $\{b_n\}_{n=1}^{N}$, to attend to salient image regions [1]. We use the curated list of $M$ symbols, denoted as $\{s_m\}_{m=1}^{M}$, as provided by the authors of the dataset [9], and encode them using GloVe vectors [17]. We also assume that we have scores ($p_m$ for symbol $s_m$) either provided by a human annotator or predicted using another CNN. We use a CNN to extract features from each bounding box and denote them as $\{v_n\}_{n=1}^{N}$. We begin the iterations of our model by initializing the attended vector (also referred to as the summary vector) for symbols, denoted as $\hat{s}_0$, by averaging the symbol embeddings:

$\hat{s}_0 = \frac{1}{M} \sum_{m=1}^{M} s_m$   (1)

We compute the raw attention scores $a^{I}_{n,k}$ for the object proposals by using the attended symbol vector as shown below. We use a softmax to normalize the attention scores, denoted as $\alpha^{I}_{n,k}$, and finally compute the summary vector $\hat{v}_k$ for the image:

$a^{I}_{n,k} = \hat{s}_{k-1}^{\top} W v_n, \quad \alpha^{I}_{n,k} = \mathrm{softmax}_n(a^{I}_{n,k})$   (2)
$\hat{v}_k = \sum_{n=1}^{N} \alpha^{I}_{n,k} v_n$   (3)

where $W$ is used to project visual features to the symbol space. The subscript $k$ in $\hat{v}_k$ and $\hat{s}_k$ denotes the iteration index for the multihop version. We use a similar operation to compute the raw attention scores $a^{S}_{m,k}$ for the symbols using the previously attended image vector as shown below:

$a^{S}_{m,k} = s_m^{\top} W \hat{v}_k$   (4)

We use the given symbol probabilities to weigh the attention scores so as to focus on symbols present in the image, and compute the attended symbol vector as shown below:

$\alpha^{S}_{m,k} = \mathrm{softmax}_m(p_m \, a^{S}_{m,k}), \quad \hat{s}_k = \sum_{m=1}^{M} \alpha^{S}_{m,k} s_m$   (5)
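As an illustration, the alternating co-attention above can be sketched in NumPy; the dimensions, the random projection standing in for the learned $W$, and the symbol scores below are hypothetical stand-ins for the learned quantities:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def coattention_hop(V, S, p, W, s_hat):
    """One co-attention hop: attend image proposals using the current
    symbol summary, then re-attend the symbols using the attended image."""
    a_img = (V @ W.T) @ s_hat       # raw score per proposal (eq. 2)
    alpha_img = softmax(a_img)      # normalized image attention
    v_hat = alpha_img @ V           # image summary vector (eq. 3)
    a_sym = S @ (W @ v_hat)         # raw score per symbol (eq. 4)
    alpha_sym = softmax(p * a_sym)  # probability-weighted symbol attention (eq. 5)
    s_hat_new = alpha_sym @ S       # symbol summary vector
    return v_hat, s_hat_new

rng = np.random.default_rng(0)
N, M, dv, ds = 5, 4, 8, 6           # hypothetical sizes
V = rng.normal(size=(N, dv))        # proposal CNN features
S = rng.normal(size=(M, ds))        # GloVe symbol embeddings
p = softmax(rng.normal(size=M))     # per-symbol scores
W = rng.normal(size=(ds, dv))       # visual-to-symbol projection (learned in practice)

s_hat0 = S.mean(axis=0)                        # initialization (eq. 1)
v1, s1 = coattention_hop(V, S, p, W, s_hat0)   # hop 1
v2, s2 = coattention_hop(V, S, p, W, s1)       # hop 2
f_v, f_s = v1 + v2, s1 + s2                    # additive fusion across hops
```

The second hop re-uses the same parameters but starts from the refined symbol summary, which is what lets the attention maps sharpen across iterations.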

We consider co-attention with only two hops to avoid overfitting. We obtain the final features for the visual and symbol modalities by fusing the attended vectors from the different iterations using an addition operation, i.e. $f^{v} = \hat{v}_1 + \hat{v}_2$ and $f^{s} = \hat{s}_1 + \hat{s}_2$. Similar to Kiros et al. [12], we first linearly project the fused features to obtain the joint embedding $f$ and then use cosine similarity $S(\cdot, \cdot)$ to compute the similarity between $f$ and the ad statements $t$. In order to learn the model parameters, we use a max-margin ranking loss which enforces the matching score of an image-symbol pair to be higher with its true statements than with contrastive ones, and vice-versa. We define the loss for a training pair $(f, t)$ with margin $\Delta$, contrastive statements $t'$, and contrastive image-symbol embeddings $f'$ as:

$L = \sum_{t'} \max(0, \Delta - S(f, t) + S(f, t')) + \sum_{f'} \max(0, \Delta - S(f, t) + S(f', t))$   (6)
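A minimal sketch of a max-margin ranking loss of this form, assuming cosine similarity and a margin hyperparameter (the value 0.2 and the helper names are illustrative); only the direction that samples contrastive statements is shown:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def ranking_loss(f, t_pos, t_negs, margin=0.2):
    """Hinge ranking loss: the score of the fused image-symbol embedding f
    with its true statement t_pos must exceed its score with each
    contrastive statement by at least `margin`."""
    s_pos = cosine(f, t_pos)
    return sum(max(0.0, margin - s_pos + cosine(f, t_neg))
               for t_neg in t_negs)

rng = np.random.default_rng(1)
f = rng.normal(size=16)                           # fused image-symbol embedding
t_pos = f + 0.1 * rng.normal(size=16)             # statement embedding close to f
t_negs = [rng.normal(size=16) for _ in range(3)]  # contrastive statements
loss = ranking_loss(f, t_pos, t_negs)             # non-negative scalar
```

The full loss is symmetric: a second term of the same shape samples contrastive image-symbol pairs for a fixed statement.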

3 Experiments

Dataset: Following Ye et al. [24], we evaluate the task of visual ad understanding by matching ad images to their corresponding human-generated statements on the Ads dataset [9]. We follow the data splits and experimental protocol used by Ye et al. [24] and rank 50 statements (3 related and 47 unrelated, drawn from the same topic). Since the proposed model explores the possibility of including additional knowledge, we evaluate our approach on the subset of the Ads dataset whose images have at least one symbol annotation belonging to one of the clusters defined in [9]. Different from Ye et al., we make no distinction between public service announcements (PSAs) and product ads and combine them during evaluation. During evaluation, we rank the statements for each image based on their similarity score and report the mean of the top rank of the ground-truth statements (mean rank metric). Our dataset consists of the images partitioned into the cross-validation splits provided by [24].
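As a concrete illustration of the mean rank metric, the following sketch (with a toy similarity matrix; all names are ours) ranks the candidate statements for each image by similarity and averages the best ground-truth rank:

```python
import numpy as np

def mean_rank(sim, gt_mask):
    """sim: (images, statements) similarity scores; gt_mask: boolean mask
    of ground-truth statements. For each image, statements are ranked by
    descending similarity and the best (1-indexed) rank among its
    ground-truth statements is kept; the metric is the mean of these
    ranks (lower is better)."""
    order = np.argsort(-sim, axis=1)   # highest-similarity statement first
    ranks = []
    for i, idx in enumerate(order):
        ranks.append(np.where(gt_mask[i, idx])[0].min() + 1)
    return float(np.mean(ranks))

# Toy example: 2 images, 4 candidate statements each.
sim = np.array([[0.9, 0.1, 0.8, 0.2],
                [0.3, 0.7, 0.2, 0.6]])
gt = np.array([[True, False, True, False],
               [False, False, True, False]])
# Image 0: a ground-truth statement is ranked 1st; image 1: ranked 4th.
```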

Implementation details: We extract features from images (and boxes) using ResNet-101 and consider the top-scoring object proposals [26]. We experimented with region proposals trained on the symbol bounding boxes, as in [24], but found the performance to be lower. For learning, we use the Adam [11] optimizer. We implement several baselines, as shown in tab. 1, and use the same features and learning settings for a fair comparison. The baseline VSE model [12] with attention (VSE-Att) and without attention (VSE) uses features before and after the average pooling layer, respectively. Since our model builds on the bottom-up and top-down attention framework [1], which uses object proposals instead of feature maps for attention, we also implement two variants of VSE with object proposals: 1) using average pooling (VSE-P) and 2) using attention over the proposals (VSE-P-Att), similar to Ye et al. [24]. We implement four variants of our algorithm: a single-hop co-attention (VSE-CoAtt), a two-hop co-attention (VSE-CoAtt-2), and two similar implementations (VSE-CoAtt-wt and VSE-CoAtt-2-wt) that weigh the symbol initialization (eq. 1) with the symbol probabilities.

3.1 Results and Ablation Studies

As shown in tab. 1, the proposed VSE-CoAtt-2 outperforms all baselines (mean rank 6.58), which justifies the importance of multihop co-attention for ad-understanding. For example, considering attention cues, VSE-CoAtt-2 achieves a lower mean rank than VSE, which does not consider attention. The advantage of using co-attention is evident in the performance of VSE-CoAtt versus VSE-P-Att, which uses a fixed attention template for the visual modality. We also observe the benefit of using multiple hops, which aggregate information from multiple steps of visual and symbol attention, when comparing the mean rank of VSE-CoAtt with that of VSE-CoAtt-2. The results when using per-symbol probabilities to initialize the attention iterations are lower both with and without multihop attention. This could be due to overfitting, since we only have a small number of images.

METHOD           Box Proposals   Att.   Co-Att.   Mean Rank
VSE                    –           –       –
VSE-Att                –           ✓       –
VSE-P                  ✓           –       –
VSE-P-Att              ✓           ✓       –
VSE-CoAtt              ✓           ✓       ✓
VSE-CoAtt-2            ✓           ✓       ✓         6.58
VSE-CoAtt-wt           ✓           ✓       ✓
VSE-CoAtt-2-wt         ✓           ✓       ✓
Table 1: Comparison of our method (VSE-CoAtt and VSE-CoAtt-2) with different baselines.

4 Conclusion and Future Work

We propose a novel approach leveraging multihop co-attention for understanding visual ads. Our model uses multiple iterations of attention to summarize visual and symbolic cues for an ad image. We perform multimodal embedding of images and statements to establish their correspondence. Our experiments show the advantages of multihop co-attention over vanilla attention for ad-understanding.

Beyond the presented work, we are currently incorporating additional cues such as sentiments and objects into our model. To resolve the problem of limited training data for subjective references, we are using weakly labeled data from the internet to train models and form associations between objects and symbols. We plan to use these associations to regularize the attention predicted by our model.

References

  • [1] P. Anderson, X. He, C. Buehler, D. Teney, M. Johnson, S. Gould, and L. Zhang. Bottom-up and top-down attention for image captioning and vqa. arXiv preprint arXiv:1707.07998, 2017.
  • [2] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
  • [3] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3213–3223, 2016.
  • [4] A. Das, H. Agrawal, L. Zitnick, D. Parikh, and D. Batra. Human attention in visual question answering: Do humans and deep networks look at the same regions? Computer Vision and Image Understanding, 163:90–100, 2017.
  • [5] T. Durand, N. Thome, and M. Cord. Weldon: Weakly supervised learning of deep convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4743–4752, 2016.
  • [6] R. Girdhar and D. Ramanan. Attentional pooling for action recognition. In Advances in Neural Information Processing Systems, pages 33–44, 2017.
  • [7] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 580–587, 2014.
  • [8] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
  • [9] Z. Hussain, M. Zhang, X. Zhang, K. Ye, C. Thomas, Z. Agha, N. Ong, and A. Kovashka. Automatic understanding of image and video advertisements. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1100–1110. IEEE, 2017.
  • [10] A. Kar, N. Rai, K. Sikka, and G. Sharma. Adascan: Adaptive scan pooling in deep convolutional neural networks for human action recognition in videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3376–3385, 2017.
  • [11] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • [12] R. Kiros, R. Salakhutdinov, and R. S. Zemel. Unifying visual-semantic embeddings with multimodal neural language models. arXiv preprint arXiv:1411.2539, 2014.
  • [13] R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L.-J. Li, D. A. Shamma, et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123(1):32–73, 2017.
  • [14] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer, 2014.
  • [15] J. Lu, J. Yang, D. Batra, and D. Parikh. Hierarchical question-image co-attention for visual question answering. In Advances In Neural Information Processing Systems, pages 289–297, 2016.
  • [16] H. Nam, J.-W. Ha, and J. Kim. Dual attention networks for multimodal reasoning and matching. arXiv preprint arXiv:1611.00471, 2016.
  • [17] J. Pennington, R. Socher, and C. Manning. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543, 2014.
  • [18] A. Roy and S. Todorovic. Combining bottom-up, top-down, and smoothness cues for weakly supervised image segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3529–3538, 2017.
  • [19] S. Sharma, R. Kiros, and R. Salakhutdinov. Action recognition using visual attention. arXiv preprint arXiv:1511.04119, 2015.
  • [20] E. W. Teh, M. Rochan, and Y. Wang. Attention networks for weakly supervised object localization. In BMVC, 2016.
  • [21] H. Xu and K. Saenko. Ask, attend and answer: Exploring question-guided spatial attention for visual question answering. In European Conference on Computer Vision, pages 451–466. Springer, 2016.
  • [22] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhudinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning, pages 2048–2057, 2015.
  • [23] Z. Yang, X. He, J. Gao, L. Deng, and A. Smola. Stacked attention networks for image question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 21–29, 2016.
  • [24] K. Ye and A. Kovashka. Advise: Symbolism and external knowledge for decoding advertisements. arXiv preprint arXiv:1711.06666, 2017.
  • [25] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning deep features for scene recognition using places database. In Advances in neural information processing systems, pages 487–495, 2014.
  • [26] C. L. Zitnick and P. Dollár. Edge boxes: Locating object proposals from edges. In European Conference on Computer Vision, pages 391–405. Springer, 2014.

Appendix A Appendix

In this section we show additional results and provide more details about the algorithm. We display results for four visual ads corresponding to fast food, road violence, gun violence, and road safety, respectively, as shown in fig. 3. We observe that our algorithm is able to both identify and refine symbol and image attention over multiple iterations. For example, in the advertisement on road safety (last row), the algorithm refines the symbol attention in the second hop by shifting attention from unrelated symbols like ‘power’ and ‘hunger’ to relevant symbols such as ‘danger’ and ‘safety’ over the course of the multihop iterations.

Weakly supervised algorithms often suffer from the problem of latching onto only the most discriminative region(s) in an image. Hence, they may fail to cover all possible salient regions and may overfit. To mitigate this problem, we apply a heuristic wherein, for a given iteration, we suppress the image attention scores (prior to the softmax operation) of the regions that received scores greater than or equal to a fixed fraction of the highest score in the previous iteration, by manually setting those scores to a low value. Although this simple step improved the results, we need to further investigate the use of additional constraints (e.g. spatial and semantic) to discover other salient regions.
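A minimal sketch of this suppression heuristic, in which the fraction of the top score (frac) and the fill value (low) are hypothetical since the exact constants are not given here:

```python
import numpy as np

def suppress(raw_scores, prev_scores, frac=0.9, low=-1e4):
    """Before the softmax of the current iteration, push down the regions
    whose previous-iteration score reached frac * the previous maximum,
    so that attention can spread to other salient regions."""
    out = raw_scores.copy()
    out[prev_scores >= frac * prev_scores.max()] = low
    return out

prev = np.array([0.1, 0.9, 0.85, 0.2])   # previous-iteration raw scores
cur = np.array([0.3, 0.8, 0.7, 0.4])     # current-iteration raw scores
masked = suppress(cur, prev)             # regions 1 and 2 are suppressed
```

Setting the masked scores to a large negative value before the softmax drives their attention weights toward zero without changing the normalization of the remaining regions' relative scores.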

Figure 3: Figure shows examples of attention scores generated by our algorithm.