Multimodal Transformer with Pointer Network for the DSTC8 AVSD Challenge

Abstract

Audio-Visual Scene-Aware Dialog (AVSD) is an extension of Video Question Answering (QA) in which the dialogue agent is required to generate natural language responses that address user queries and carry on a conversation. This is a challenging task as it involves video features of multiple modalities, including text, visual, and audio features. The agent also needs to learn semantic dependencies among user utterances and system responses to hold coherent conversations with humans. In this work, we describe our submission to the AVSD track of the 8th Dialogue System Technology Challenge. We adopt dot-product attention to combine the text and non-text features of the input video. We further enhance the generation capability of the dialogue agent by adopting pointer networks that point to tokens from multiple source sequences at each generation step. Our systems achieve high performance on automatic metrics and obtain 5th and 6th place in human evaluation among all submissions.

Introduction

Figure 1: Examples from the AVSD benchmark [2]. Each line includes an utterance from the human (H) and the corresponding response from the agent (A).

AVSD is a challenging task as it involves complex dependencies among features of multiple modalities. First, the video input typically involves both visual and audio features, each of which contains information related to the current dialogue context and user utterance. For example, in Figure 1, certain questions from the user concern visual information, audio information, or both. The two types of features can complement each other to support the dialogue agent in generating responses. Second, dialogues also involve complex semantic dependencies among dialogue turns, each consisting of a pair of user utterance and system response. For example, in the second turn of the second (lower) example in Figure 1, the dialogue agent needs to refer to the previous user utterance and system response to understand what the user is asking. We are motivated to address these two challenges by adopting the Multimodal Transformer Network (MTN) [14]. The model adopts an attention mechanism that focuses on the interaction between each token position in the text sequences and each temporal step of the video's visual and audio features. The multi-head structure allows the model to project feature representations into different feature spaces and detect different types of dependencies. In addition, while previous work has achieved promising results through complex reasoning networks that select important video features [19] [10], we further investigate the generation capability of the models by using a pointer network that can point to tokens from multiple source sequences at each generation step. Pointer networks are widely used in summarization, where they copy tokens from the source text to generate summary sentences. We are motivated by this strategy and adapt it to the video dialogue task to enhance the quality of the generated system responses. We experiment with various model variants and observe that adopting pointer-network generation can boost performance significantly. This can be explained by the enhanced generation capability of the models, which are able to copy tokens from the relevant text input. We present comprehensive experiments and report results on the validation and test sets that lead to these findings.

Related Work

The AVSD benchmark in DSTC7 and DSTC8 can be considered an extension of two major research directions: Video QA and dialogues. Video QA models [25] [5] [6] aim to improve text and vision reasoning to answer user questions about a given video. Compared to Video QA, video dialogues such as the AVSD benchmark pose two major challenges. (1) First, in video dialogues, the model is required to have strong language understanding over the text input, including not only user queries but also past dialogue turns. A user query may refer to information mentioned in previous dialogue turns, and such references must be resolved to answer the query correctly. (2) Second, most video QA models [11] [15] [12] are more suitable for the retrieval-based setting, in which the model is given a list of response candidates and has to select one of them as the output. For AVSD, this retrieval-based setting is less appropriate, as dialogue agents need to converse with human users by generating natural responses rather than selecting from a predefined list of sentences. Motivated by these two challenges, we propose to improve the language modeling component through pointer networks [24]. Adapting them to a video dialogue task, we enhance our generative network with pointer distributions over source sequences and construct multiple vocabulary distributions at each generation step.

Models

The input includes a video $V$ and a dialogue history of $t-1$ turns, where each turn consists of a pair of (human utterance, system response), together with the current human utterance. The output is a system response. The input video can contain features in different modalities, including vision, audio, and text (such as a video caption or subtitle). Given the current dialogue turn $t$, we can denote each text input as a sequence of tokens, where each token is represented by a unique index from a vocabulary set $\mathbb{V}$. We denote the dialogue history as $H$, the user utterance as $U_t$, the text input of the video as $C$, and the output response as $Y = (y_1, \dots, y_n)$.

Video Features

Following similar video-based NLP tasks such as video captioning [1] and video QA [11], we assume access to a pretrained model to extract the visual or audio features of the input video. We extract the visual features from a pretrained 3D-CNN-based ResNext [8], similarly to [19]. The 3D-CNN model extracts video features based on clips rather than frames; clip-based information is expected to be more consistent and less noisy than frame-based information. To sample clips, we use a window size of 16 video frames and a stride of 16 frames. We denote the extracted visual features as $Z^{vis}$, where the number of rows equals the number of resulting video clips and 2048 is the output dimension of ResNext. We use the ResNext101 model pretrained on the Kinetics human action video benchmark. For audio features, following [10], we use a pretrained VGGish model [9]. This model is based on the image CNN model VGG and extracts the temporal variation of the video sound; its output is a 128-dimensional representation. We denote the extracted audio features as $Z^{aud}$.
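As an illustration of the clip-based sampling described above, the following sketch chunks a frame sequence into non-overlapping 16-frame clips and encodes each clip with a generic 3D CNN. The callable `feature_extractor_3d` is a hypothetical stand-in for the pretrained ResNext101 extractor, not the authors' code.

```python
import torch

def extract_clip_features(frames, feature_extractor_3d, window=16, stride=16):
    """Sample non-overlapping 16-frame clips and encode each with a 3D CNN.

    frames: float tensor of shape (num_frames, 3, H, W)
    feature_extractor_3d: callable mapping a (1, 3, window, H, W) clip
        to a (1, d) feature vector (e.g. a pretrained 3D ResNeXt-101, d = 2048).
    Returns a tensor of shape (num_clips, d).
    """
    clips = []
    for start in range(0, frames.size(0) - window + 1, stride):
        clip = frames[start:start + window]           # (window, 3, H, W)
        clip = clip.permute(1, 0, 2, 3).unsqueeze(0)  # (1, 3, window, H, W)
        clips.append(feature_extractor_3d(clip))      # (1, d)
    return torch.cat(clips, dim=0)                    # (num_clips, d)
```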

Baseline

The baseline for the AVSD benchmark [2] [10] was provided by the organizers and is based on the feature fusion approach proposed by [26]. Video features of multiple modalities, including visual and audio, are combined by passing them through a linear transformation to a common target dimension. The projected representation is used as input to a softmax layer that combines scores over the temporal steps of the visual or audio features.
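A minimal sketch of this style of fusion (our own simplification under assumed dimensions, not the official baseline code): each modality is projected to a common dimension and a softmax over temporal steps produces attention weights for pooling.

```python
import torch
import torch.nn as nn

class SimpleAttentionFusion(nn.Module):
    """Projection + softmax-attention fusion over temporal video features."""
    def __init__(self, vis_dim=2048, aud_dim=128, target_dim=512):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, target_dim)
        self.aud_proj = nn.Linear(aud_dim, target_dim)
        self.score = nn.Linear(target_dim, 1)

    def attend(self, feats):
        # feats: (T, target_dim); softmax over the temporal dimension
        weights = torch.softmax(self.score(feats), dim=0)  # (T, 1)
        return (weights * feats).sum(dim=0)                # (target_dim,)

    def forward(self, vis_feats, aud_feats):
        # vis_feats: (T_v, 2048), aud_feats: (T_a, 128)
        vis = self.attend(self.vis_proj(vis_feats))
        aud = self.attend(self.aud_proj(aud_feats))
        return vis + aud  # combined multimodal context vector
```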

Multimodal Transformer Network (MTN)

We adopt the MTN model proposed by [14] for the AVSD benchmark in DSTC8. To improve the performance, we enhance the generation capability of the model and investigate an ensemble approach. We summarize the MTN model and our changes below.

Multi-head Attention. The MTN model adopts the multi-head dot-product attention mechanism proposed by [22] to obtain dependencies between each token in the text sequences and the temporal variation of the video features. Specifically, the model computes attention from the user query to the other video feature modalities, i.e., the visual and audio features. The output of this attention network is used as input to the decoder. The decoder adopts a similar attention mechanism, but the attention direction is from the target system response to the other information. We denote the attention operation from a sequence representation $Z_q$ (queries) to a sequence representation $Z_{kv}$ (keys and values), as defined by [22], as:

$S = \mathrm{softmax}\left(\frac{Z_q Z_{kv}^\top}{\sqrt{d}}\right)$ (1)
$\mathrm{Attention}(Z_q, Z_{kv}) = S Z_{kv}$ (2)

where $d$ is the embedding dimension. The attention operation is combined with a feed-forward network and skip connections to combine the information of the original $Z_q$ with the attended $Z_{kv}$. The attention is performed over multiple rounds, and in each round the output is used as input to the next attention step. This technique allows progressive feature learning to detect complex dependencies between different sources of information. MTN adopts Equations 1 and 2 in query-guided and target-response-guided attention layers to obtain dependencies between user queries/target responses and the other inputs. First, the user query/utterance is used to select important visual and audio features of the video. For each feature type, the embeddings of the user query are passed through a self-attention layer and another attention layer that attends over the video information. The query features first attend over the temporal visual features, producing a visually attended query representation $Z_q^{vis}$.

Each output has the same dimension as the query embedding. Similarly, the query features attend over the temporal audio features, producing an audio-attended query representation $Z_q^{aud}$.

The self-attention is applied separately for each feature type to allow the model to independently select different information from the user query for different types of video features. The two representations $Z_q^{vis}$ and $Z_q^{aud}$ contain the temporally attended visual and audio features from the video. They are passed to the decoder network, which processes information from the text input (user queries, dialogue history) as well as the video input. Specifically, the target response is embedded into a representation and passed through four text-to-text attention layers: self-attention, response-to-dialogue-history attention, response-to-query attention, and response-to-caption attention.

The last output is then used to attend over the video-attended features $Z_q^{vis}$ and $Z_q^{aud}$ obtained from the query-guided attention layers.

The MTN architecture allows the information from the different text inputs and the video information from different modalities to be incorporated sequentially into the target response representation. By adopting the skip-connection technique, the MTN network can progressively learn and refine the signals obtained at each attention step. For the query-guided attention layers, this progressive learning is done by feeding the attended output of one round back as the query input to the next round of attention. Similarly, in the decoder layers, the signals are further refined by using the output of one round as the input to the next round of attention.
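A minimal sketch of one such attention block and its progressive (multi-round) application, built from standard PyTorch modules; the class and function names are our own illustration rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class AttentionBlock(nn.Module):
    """One round of attention from a query sequence to a key/value sequence,
    with skip connections and a feed-forward layer (Transformer style)."""
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model),
                                 nn.ReLU(),
                                 nn.Linear(4 * d_model, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, z_q, z_kv):
        # z_q: (B, L_q, d_model), z_kv: (B, L_kv, d_model)
        attended, _ = self.attn(z_q, z_kv, z_kv)
        z = self.norm1(z_q + attended)        # skip connection around attention
        return self.norm2(z + self.ffn(z))    # skip connection around feed-forward

def progressive_attention(z_q, z_kv, blocks):
    """Repeat attention over several rounds, feeding each output back as the query."""
    for block in blocks:
        z_q = block(z_q, z_kv)
    return z_q
```

For example, a query-guided visual attention layer could be realized as `progressive_attention(query_emb, visual_feats, blocks)`, assuming the 2048-dimensional visual features have first been projected to `d_model`.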

Pointer Generator. We examine an extension of MTN that adopts a pointer network [24] to generate system responses. We propose to use pointer networks to point to tokens from the different input text sequences and to construct vocabulary distributions over a predefined vocabulary set $\mathbb{V}$ built from the words in the training set. Given an input text sequence with embedding representation $Z_{src}$ and the output of the last attention layer of the decoder $Z_{dec}$, we construct the pointer distribution with dot-product attention:

$P^{ptr} = \mathrm{softmax}\left(\frac{Z_{dec} Z_{src}^\top}{\sqrt{d}}\right)$ (3)

For each position in the target response, the pointer distribution is used to construct a distribution over the vocabulary set $\mathbb{V}$, where the probability of each token is accumulated from the pointer probabilities of the source positions at which that token occurs. Given a position $i$ in the target response, the vocabulary distribution at this position is defined from the pointer distribution as:

$P^{vocab,ptr}_i(w) = \sum_{j:\, x_j = w} P^{ptr}_{i,j}$ (4)

where $P^{ptr}_{i,j}$ denotes entry $j$ of row $i$ of the probability matrix $P^{ptr}$ and $x_j$ is the source token at position $j$. We concatenate the probabilities at all positions to obtain $P^{vocab,ptr}$. For each text input sequence, we obtain a pointer distribution and the corresponding vocabulary distribution: $P^{vocab,his}$ for the dialogue history $H$, $P^{vocab,que}$ for the user query $U_t$, and $P^{vocab,cap}$ for the video text input $C$. Besides these pointer-based vocabulary distributions, we adopt a linear transformation layer with softmax to allow the model to generate tokens not included in any of the input text sequences.

$P^{gen} = \mathrm{softmax}\left(Z_{dec} W_{gen}\right)$ (5)

where $W_{gen} \in \mathbb{R}^{d \times |\mathbb{V}|}$. To combine the vocabulary distributions, we compute importance scores based on a contextual vector concatenated from the component input text representations and the output of the decoder.

$Z_{ctx} = [\bar{Z}_{his}; \bar{Z}_{que}; \bar{Z}_{cap}; Z_{dec}]$ (6)
$\alpha = \mathrm{softmax}\left(Z_{ctx} W_{\alpha}\right)$ (7)

where $\bar{Z}_{his}$, $\bar{Z}_{que}$, and $\bar{Z}_{cap}$ are expanded versions of the source representations that match the dimensions of $Z_{dec}$. The final vocabulary distribution is computed as the weighted sum of the pointer-based distributions and the generation-based distribution, weighted by the score matrix $\alpha$. The resulting distribution is denoted $P^{vocab}$.
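The pointer mechanism of Equations 3-7 can be sketched as follows. The scatter-add realizes the accumulation in Equation 4; the mean-pooled context vector used for the mixing weights and all names are our own simplifying assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pointer_vocab_distribution(z_dec, z_src, src_ids, vocab_size):
    """Pointer distribution over one source sequence, scattered into vocabulary space.

    z_dec:   (B, L_tgt, d)  decoder outputs
    z_src:   (B, L_src, d)  embeddings of one source text sequence
    src_ids: (B, L_src)     LongTensor of source token indices
    Returns (B, L_tgt, vocab_size).
    """
    d = z_dec.size(-1)
    scores = torch.matmul(z_dec, z_src.transpose(1, 2)) / d ** 0.5  # (B, L_tgt, L_src)
    p_ptr = F.softmax(scores, dim=-1)
    p_vocab = torch.zeros(z_dec.size(0), z_dec.size(1), vocab_size, device=z_dec.device)
    index = src_ids.unsqueeze(1).expand(-1, z_dec.size(1), -1)      # (B, L_tgt, L_src)
    return p_vocab.scatter_add_(-1, index, p_ptr)                   # accumulate by token id

class PointerGenerator(nn.Module):
    """Combine pointer-based distributions with a standard generation distribution."""
    def __init__(self, d_model, vocab_size, num_sources=2):
        super().__init__()
        self.vocab_size = vocab_size
        self.gen = nn.Linear(d_model, vocab_size)
        # one mixing weight per source sequence plus one for the generation head
        self.mix = nn.Linear(d_model * (num_sources + 1), num_sources + 1)

    def forward(self, z_dec, sources):
        # sources: list of (z_src, src_ids) pairs, e.g. summary and query sequences
        dists = [pointer_vocab_distribution(z_dec, z, ids, self.vocab_size)
                 for z, ids in sources]
        dists.append(F.softmax(self.gen(z_dec), dim=-1))
        # context vector: pooled source embeddings expanded to the target length, plus z_dec
        pooled = [z.mean(dim=1, keepdim=True).expand_as(z_dec) for z, _ in sources]
        ctx = torch.cat(pooled + [z_dec], dim=-1)
        alpha = F.softmax(self.mix(ctx), dim=-1)  # (B, L_tgt, num_sources + 1)
        return sum(alpha[..., k:k + 1] * dists[k] for k in range(len(dists)))
```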

Optimization. We optimize the model by training it to minimize the generation loss:

$\mathcal{L}_{gen} = -\sum_{i} \log P^{vocab}_i(y_i)$ (8)

In addition, a key component of the MTN model is the auxiliary loss applied to the output of the query-guided attention. This technique was proposed by [14] to make training more stable by using the attended features as representations for re-generating the user query. This auto-encoder technique was motivated by the multi-task learning approach in neural machine translation (NMT) [17]. The difference is that MTN extracts the intermediate representations of the (auto-)encoder as video signals for decoding responses, rather than just using the hidden states of an LSTM encoder of the source sequence as in the NMT setting. To re-generate the user query, the output of the query-guided attention is passed through a linear transformation. We share the weights of this linear layer with $W_{gen}$ in Equation 5.

$P^{ae,vis} = \mathrm{softmax}\left(Z_q^{vis} W_{gen}\right)$ (9)
$P^{ae,aud} = \mathrm{softmax}\left(Z_q^{aud} W_{gen}\right)$ (10)

The auto-encoding loss is defined as:

$\mathcal{L}_{ae}^{vis} = -\sum_{i} \log P^{ae,vis}_i(u_i)$ (11)
$\mathcal{L}_{ae}^{aud} = -\sum_{i} \log P^{ae,aud}_i(u_i)$ (12)

where $u_i$ is the $i$-th token of the user query $U_t$.

The model is jointly trained with all losses.

$\mathcal{L} = \mathcal{L}_{gen} + \lambda_{vis} \mathcal{L}_{ae}^{vis} + \lambda_{aud} \mathcal{L}_{ae}^{aud}$ (13)

We simply set $\lambda_{vis}$ and $\lambda_{aud}$ to 1 for joint training.
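A sketch of the joint objective in Equation 13, assuming the distributions defined above (label smoothing, applied during training, is omitted here for brevity):

```python
import torch

def joint_loss(p_vocab, target_ids, p_ae_vis, p_ae_aud, query_ids,
               lambda_vis=1.0, lambda_aud=1.0, eps=1e-9):
    """Generation loss plus query auto-encoding losses (negative log-likelihoods).

    p_vocab:   (B, L_tgt, V) final vocabulary distribution for the response
    p_ae_*:    (B, L_q, V)   distributions for re-generating the user query
    *_ids:     LongTensors of gold token indices
    """
    def nll(p, ids):
        # gather the probability of the gold token at every position
        gold = p.gather(-1, ids.unsqueeze(-1)).squeeze(-1)
        return -(gold + eps).log().mean()

    l_gen = nll(p_vocab, target_ids)
    l_ae = lambda_vis * nll(p_ae_vis, query_ids) + lambda_aud * nll(p_ae_aud, query_ids)
    return l_gen + l_ae
```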

Ensemble Models. A popular technique to improve performance is to ensemble models trained in different settings. In our submission, we ensemble models trained independently with different video feature types and different pretrained feature extractors. From each model $m$, we obtain an output vocabulary distribution $P^{vocab}_m$. The ensembled vocabulary distribution is simply the sum of the vocabulary distributions of the component model variants; the resulting sum is passed through a normalization layer that rescales all values to the range 0 to 1.

$P^{vocab}_{ens} = \mathrm{Norm}\left(\sum_{m} P^{vocab}_m\right)$ (14)
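A sketch of this ensembling step, where the normalization simply rescales the summed distribution so that it sums to one:

```python
import torch

def ensemble_distributions(dists):
    """Sum per-model vocabulary distributions and renormalize.

    dists: list of tensors of shape (B, L_tgt, V), one per trained model variant.
    """
    total = torch.stack(dists, dim=0).sum(dim=0)
    return total / total.sum(dim=-1, keepdim=True)  # renormalize over the vocabulary
```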

Experiments

Dataset

We use the AVSD dataset provided in DSTC8 [2] [10], which contains dialogues grounded on the Charades videos [20]. Following the same track in the DSTC7 challenge, the DSTC8 organizers provided crowd-sourced video-grounded dialogue data, including user questions and system responses constructed as dialogues, video captions, and video summaries. We present a summary of the training, validation, and test sets in Table 1. The statistics of the official DSTC8 test set are comparable to those of the DSTC7 challenge: 1,710 dialogues and more than 6,700 dialogue turns. More details on the data collection are described in [2]. We construct the vocabulary set from the unique tokens in the training set (a small sketch of this step follows Table 1). In our experiments, we use the provided video summary annotation as the video-dependent text input.

# Train Val. DSTC7 Test
Dialogs 7,659 1,787 1,710
Turns 153,180 35,740 13,490
Words 1,450,754 339,006 110,252
Table 1: Summary of DSTC8 AVSD benchmark.
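As referenced above, a minimal sketch of the vocabulary construction step; the function and special-token names are our own illustration.

```python
from collections import Counter

def build_vocab(token_lists, specials=("<pad>", "<unk>", "<sos>", "<eos>")):
    """Map every unique training-set token to an integer index."""
    counts = Counter(tok for tokens in token_lists for tok in tokens)
    vocab = {tok: i for i, tok in enumerate(specials)}
    for tok in counts:
        vocab.setdefault(tok, len(vocab))
    return vocab
```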

Training Procedure

We adopt the Adam optimizer [13]. We adopt a learning-rate strategy similar to [22], with a warm-up period of 13,000 training steps, and train models up to a fixed maximum number of epochs. We initialize all model parameters with the uniform distribution of [7]. We select the best models based on the average loss per epoch on the validation set. We experiment with the following model hyper-parameters: the embedding dimension, the number of attention rounds, and the number of attention heads, tuned by grid search over the validation set. We allow the pointer generator to point to tokens of the video summary and the last user query; results with other combinations of input text sequences for the pointer generator are reported in the Ablation Analysis. In all experiments with more than one feature type, we adopt the ensemble strategy described above. We use a batch size of 32 and a dropout rate of 0.5; dropout is applied to all layers except the generator network layers. We train our models with label smoothing [21] on the target system responses. During inference, we adopt beam search with a fixed beam size and length penalty.
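Since we follow the learning-rate strategy of [22], the warm-up schedule can be sketched as below; apart from the 13,000 warm-up steps, the constants (e.g. the model dimension) are assumptions for illustration.

```python
def transformer_lr(step, d_model=512, warmup_steps=13000):
    """Learning-rate schedule of [22]: linear warm-up, then inverse-sqrt decay."""
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

# Possible usage with a PyTorch optimizer whose base learning rate is 1.0 (sketch):
# scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lambda s: transformer_lr(s + 1))
```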

Results

We report objective scores, including BLEU [18], METEOR [3], ROUGE-L [16], and CIDEr [23]. These metrics compute the word overlap between the predicted and ground-truth responses.
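The challenge provides official evaluation scripts for these metrics; purely as an illustration of corpus-level n-gram overlap, BLEU-4 can be approximated with NLTK as follows.

```python
from nltk.translate.bleu_score import corpus_bleu

def bleu4(references, hypotheses):
    """Corpus-level BLEU-4 over tokenized responses.

    references: list of reference token lists, one per example
    hypotheses: list of predicted token lists, one per example
    """
    # corpus_bleu expects a list of reference sets (one or more references) per hypothesis
    return corpus_bleu([[ref] for ref in references], hypotheses,
                       weights=(0.25, 0.25, 0.25, 0.25))

print(bleu4([["the", "man", "is", "cooking"]], [["the", "man", "is", "cooking"]]))
```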

Results on DSTC8 Test

Visual Audio BLEU1 BLEU2 BLEU3 BLEU4 METEOR ROUGE-L CIDEr Human
ResNext - 0.724 0.599 0.496 0.414 0.269 0.570 1.101 -
I3D(RGB) - 0.729 0.602 0.500 0.417 0.273 0.573 1.108 -
I3D(Flow) - 0.724 0.597 0.496 0.413 0.270 0.566 1.110 -
- VGGish 0.730 0.603 0.500 0.417 0.274 0.576 1.113 -
ResNext+I3D(RGB) - 0.695 0.583 0.491 0.416 0.259 0.559 1.087 -
ResNext+I3D(Flow) - 0.696 0.585 0.495 0.421 0.261 0.561 1.098 3.609
ResNext VGGish 0.701 0.587 0.494 0.419 0.263 0.564 1.097 3.612
- - 0.735 0.603 0.497 0.410 0.274 0.573 1.108 -
Table 2: Result summary on the test dataset in the AVSD benchmark for DSTC8.
Visual Audio BLEU1 BLEU2 BLEU3 BLEU4 METEOR ROUGE-L CIDEr
ResNext - 0.750 0.619 0.514 0.427 0.280 0.580 1.189
I3D(RGB) - 0.750 0.617 0.510 0.424 0.282 0.579 1.185
I3D(Flow) - 0.750 0.616 0.511 0.427 0.280 0.579 1.188
- VGGish 0.751 0.618 0.511 0.426 0.278 0.580 1.186
ResNext+I3D(RGB) - 0.734 0.615 0.517 0.439 0.277 0.574 1.177
ResNext+I3D(Flow) - 0.735 0.616 0.519 0.441 0.277 0.573 1.177
ResNext VGGish 0.727 0.609 0.515 0.439 0.275 0.574 1.167
- - 0.752 0.614 0.507 0.421 0.283 0.577 1.185
Table 3: Result summary on the AVSD benchmark using the test data from DSTC7.
Pointer Source Sequence BLEU1 BLEU2 BLEU3 BLEU4 METEOR ROUGE-L CIDEr
Summary+Query 0.750 0.619 0.514 0.427 0.280 0.580 1.189
History+Query 0.738 0.602 0.494 0.408 0.274 0.568 1.140
Summary+History+Query 0.739 0.607 0.495 0.407 0.272 0.567 1.142
Summary 0.744 0.612 0.505 0.422 0.276 0.576 1.155
Query 0.748 0.609 0.499 0.412 0.279 0.573 1.143
History 0.738 0.603 0.492 0.401 0.279 0.560 1.061
None 0.733 0.597 0.489 0.405 0.269 0.562 1.120
Table 4: Result summary of different variants of the pointer network generator, evaluated on the DSTC7 test data of the AVSD benchmark.

We first report the results on the DSTC8 test set. The results were released by the competition organizers, as the ground-truth labels are not publicly accessible. We submitted different model variants based on the input settings: (1) text only and (2) text and video. In the text-only setting, we remove all visual and audio features and only use text input (including the video caption) as input to our model. In the text-and-video setting, we submitted different versions of our models that use visual features, audio features, or both. For visual features, besides ResNext101 as our main visual features, we also utilize the I3D features provided by the organizers. These features are extracted from an I3D [4] model pretrained on the Kinetics dataset and have a dimension of 2048, the same as the ResNext101 features.

From Table 2, we note that the performance of the different visual features, i.e., ResNext, I3D(RGB), and I3D(Flow), is comparable; between I3D(RGB) and I3D(Flow) in particular, the differences in objective metrics are minor. When using only audio features extracted from VGGish, performance improves slightly, but not significantly, compared to using only visual features. Compared to the original MTN approach [14], the difference in performance between models that use either visual or audio features is substantially reduced. We note similar observations when comparing models that use only text features with models that also use visual features. We expect these performance gains come from the pointer generator, which can point to tokens in the source sequences, i.e., user queries and video summaries. Since AVSD is formulated as a generation task with evaluation metrics based on the similarity between generated sentences and the ground truth, we can substantially improve performance by focusing on the language component of the model. We also observe that a simple ensemble technique can improve performance, mainly on the BLEU-based metrics; in this case, the ensemble strategy acts as a regularization factor on the output vocabulary distribution, resulting in more semantically correct output sentences. However, other metrics do not improve, or even decrease, with model ensembling. We obtained human evaluation scores from the organizers for two of our models. Our models achieve human scores of more than 3.6 on a scale of 4 and were ranked 5th and 6th among all submissions in the AVSD track.

Results on DSTC7 Test

We also report the results of the submitted models described above when tested on the DSTC7 test set. We note observations similar to those on the DSTC8 test data. The overall performance is, however, higher on DSTC7 than on DSTC8. This suggests that the new test data in DSTC8 is more challenging and that the current approach can be further improved.

Ablation Analysis

We evaluate our models with different variants of the pointer network by allowing the models to point to tokens of different combinations of the input text sequences. In these experiments, we choose the text-and-video setting and only use visual features extracted from the pretrained ResNext101. From Table 4, we make the following observations. First, most of the MTN models with our proposed pointer network show improvement over the one that only uses a linear transformation to generate tokens. The performance gain is substantial when we allow the models to point to the source sequences of video summaries and user queries. However, performance degrades slightly when we use the pointer network to point to tokens in the dialogue history, likely because user queries and video summaries typically contain more useful information than the dialogue history for generating system responses. Second, when combining different input sequences with multiple pointer networks, the best-performing model is the one that contains pointers to both video summaries and user queries. By extending MTN with the pointer network and adopting a dynamic combination of vocabulary distributions among pointers, we can boost the language generation capability of the models and generate better responses.

Conclusion

In this paper, we present our submission to the AVSD track of the DSTC8 challenge. Our submissions achieve competitive performance in both human evaluation and automatic metrics, including BLEU, ROUGE, METEOR, and CIDEr. The task is challenging because it involves video information of multiple modalities, including visual and audio information, and it requires strong language modeling capability to generate natural dialogue responses. In this work, we focus on the second aspect by adopting pointer networks in the generative components. Our experimental results show that adopting this technique in video dialogues can improve the quality of the responses. In the future, we will focus on the first aspect by improving the multimodal reasoning capability between language, visual, and audio features.

References

  1. N. Aafaq, N. Akhtar, W. Liu, S. Z. Gilani and A. Mian (2019) Spatio-temporal dynamics and semantic attribute enriched visual encoding for video captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 12487–12496.
  2. H. Alamri, C. Hori, T. K. Marks, D. Batra and D. Parikh (2018) Audio visual scene-aware dialog (AVSD) track for natural language generation in DSTC7. In DSTC7 at AAAI2019 Workshop, Vol. 2.
  3. S. Banerjee and A. Lavie (2005) METEOR: an automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pp. 65–72.
  4. J. Carreira and A. Zisserman (2017) Quo vadis, action recognition? A new model and the Kinetics dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6299–6308.
  5. C. Fan, X. Zhang, S. Zhang, W. Wang, C. Zhang and H. Huang (2019) Heterogeneous memory enhanced multimodal attention model for video question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1999–2007.
  6. J. Gao, R. Ge, K. Chen and R. Nevatia (2018) Motion-appearance co-memory networks for video question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6576–6585.
  7. X. Glorot and Y. Bengio (2010) Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 249–256.
  8. K. Hara, H. Kataoka and Y. Satoh (2018) Can spatiotemporal 3D CNNs retrace the history of 2D CNNs and ImageNet? In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6546–6555.
  9. S. Hershey, S. Chaudhuri, D. P. Ellis, J. F. Gemmeke, A. Jansen, R. C. Moore, M. Plakal, D. Platt, R. A. Saurous and B. Seybold (2017) CNN architectures for large-scale audio classification. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 131–135.
  10. C. Hori, H. Alamri, J. Wang, G. Wichern, T. Hori, A. Cherian, T. K. Marks, V. Cartillier, R. G. Lopes and A. Das (2019) End-to-end audio visual scene-aware dialog using multimodal attention-based video features. In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2352–2356.
  11. Y. Jang, Y. Song, Y. Yu, Y. Kim and G. Kim (2017) TGIF-QA: toward spatio-temporal reasoning in visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2758–2766.
  12. J. Kim, M. Ma, K. Kim, S. Kim and C. D. Yoo (2019) Progressive attention memory network for movie story question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8337–8346.
  13. D. P. Kingma and J. Ba (2015) Adam: a method for stochastic optimization. In International Conference on Learning Representations (ICLR).
  14. H. Le, D. Sahoo, N. Chen and S. Hoi (2019) Multimodal transformer networks for end-to-end video-grounded dialogue systems. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, pp. 5612–5623.
  15. J. Lei, L. Yu, M. Bansal and T. Berg (2018) TVQA: localized, compositional video question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, pp. 1369–1379.
  16. C. Lin (2004) ROUGE: a package for automatic evaluation of summaries. In Text Summarization Branches Out.
  17. M. Luong, Q. V. Le, I. Sutskever, O. Vinyals and L. Kaiser (2016) Multi-task sequence to sequence learning. In International Conference on Learning Representations (ICLR), San Juan, Puerto Rico.
  18. K. Papineni, S. Roukos, T. Ward and W. Zhu (2002) BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pp. 311–318.
  19. R. Sanabria, S. Palaskar and F. Metze (2019) CMU Sinbad's submission for the DSTC7 AVSD challenge. In DSTC7 at AAAI2019 Workshop, Vol. 6.
  20. G. A. Sigurdsson, G. Varol, X. Wang, A. Farhadi, I. Laptev and A. Gupta (2016) Hollywood in homes: crowdsourcing data collection for activity understanding. In European Conference on Computer Vision, pp. 510–526.
  21. C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens and Z. Wojna (2016) Rethinking the Inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818–2826.
  22. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser and I. Polosukhin (2017) Attention is all you need. In Advances in Neural Information Processing Systems 30, pp. 5998–6008.
  23. R. Vedantam, C. Lawrence Zitnick and D. Parikh (2015) CIDEr: consensus-based image description evaluation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4566–4575.
  24. O. Vinyals, M. Fortunato and N. Jaitly (2015) Pointer networks. In Advances in Neural Information Processing Systems 28, pp. 2692–2700.
  25. D. Xu, Z. Zhao, J. Xiao, F. Wu, H. Zhang, X. He and Y. Zhuang (2017) Video question answering via gradually refined attention over appearance and motion. In Proceedings of the 25th ACM International Conference on Multimedia, pp. 1645–1653.
  26. H. Yu, J. Wang, Z. Huang, Y. Yang and W. Xu (2016) Video paragraph captioning using hierarchical recurrent neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4584–4593.