
In this supplementary material we provide additional quantitative (see \secref{sec:quan}) and qualitative (see \secref{sec:qual}) results.

\section{Quantitative Evaluation}
\label{sec:quan}

\textbf{Ensemble model for VisDial v0.9:} For VisDial v1.0, a simple ensemble technique significantly improved the results, as discussed in the main paper. We observe a similar effect for VisDial v0.9, pushing the current state of the art for MRR from 0.6525 to 0.6892, as summarized in \tabref{tab:abl-atten}. We achieve this result with an ensemble of 9 models which differ only in the initial seed. For VisDial v1.0 we report a 5-model ensemble score. Due to restrictions on the number of submissions to the evaluation server, we could not evaluate a larger ensemble. The results in \tabref{tab:abl-atten} suggest that the VisDial v1.0 score reported in the paper can be further improved with a larger ensemble.
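The ensemble itself is straightforward. The following is a minimal sketch, assuming the ensemble simply averages the per-candidate answer scores produced by models trained with different seeds before ranking; the function and variable names are illustrative, not the actual implementation:
\begin{verbatim}
import numpy as np

def ensemble_ranks(per_model_scores):
    # per_model_scores: list of arrays, one per seed, each of shape
    # (num_candidates,), holding the score assigned to every answer candidate.
    # Average the scores over the ensemble members, then rank the candidates
    # by the averaged score (higher is better).
    mean_scores = np.mean(np.stack(per_model_scores), axis=0)
    return np.argsort(-mean_scores)  # candidate indices, best first
\end{verbatim}
Ranking metrics such as MRR and R@$k$ are then computed from the position of the ground-truth answer in the averaged ranking.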

\textbf{Analysis of Factor Graph Attention weights:} To infer the attention belief for a utility $i$, we aggregate marginalized joint and local interactions, as well as local-information and prior terms. To calibrate each cue, we use scalar weights. To obtain a better understanding of the reasoning process and to analyze attention, we suggest an importance score:
\be
S(\gamma) = \frac{\left|m_\gamma \cdot \gamma\right|}{\sum_{\delta\in\{\hat{w}_i,\, w_i,\, (w_{i,j})_{j\in\mathcal{U}}\}} \left|m_\delta \cdot \delta\right|},
\label{eq:score}
\ee
where $\gamma$ is the weight of a cue and $m_\gamma$ is the mean term of the corresponding cue, calculated over the entire validation set. Note that $\hat{w}_i$, $w_i$ and $(w_{i,j})_{j\in\mathcal{U}}$ are the scalar weights. $S(w_{i,j})$ captures the importance of the $j$-th cue for utility $i$; a high score means the $i$-th utility attention belief heavily relies on cue $j$. Similarly, the scores of the local-interaction, local-information and prior weights capture the importance of these cues for the $i$-th utility. We report the scores in \tabref{tab:weights}. We observe that the answer utility relies mostly on local-interactions. The question heavily relies on the prior, but also makes use of history-answer and history-question cues. The caption ignores all utilities other than the prior. For the image utility, the question is the most important cue. Interestingly, we observe the importance of priors. Image attention also relies on the caption, while the caption ignores all other cues and preserves the prior behavior. The history questions and answers rely on the question and the local factors.
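For concreteness, the importance score in \equref{eq:score} can be computed directly from the learned scalar weights and the mean cue terms. The following is a minimal sketch; the dictionary keys are illustrative placeholders for the cues of a single utility:
\begin{verbatim}
def importance_scores(weights, mean_terms):
    # weights: dict mapping a cue name (e.g. 'prior', 'local-info',
    # 'local-interaction', 'Q', 'C', ...) to its learned scalar weight.
    # mean_terms: dict with the mean value of the corresponding cue term,
    # averaged over the validation set.
    contrib = {cue: abs(mean_terms[cue] * weights[cue]) for cue in weights}
    total = sum(contrib.values())
    # Normalized importance: each score is the fraction of the aggregated
    # attention belief that the cue contributes for this utility.
    return {cue: value / total for cue, value in contrib.items()}
\end{verbatim}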

\textbf{Computation and insignificant interactions:} After training, some interactions may be found to be unnecessary. Our model can easily be optimized: 1) The score in \equref{eq:score} can be used to omit less significant interactions. Previous multi-modal attention approaches do not model pairwise interaction scores, which makes it hard to eliminate computations. 2) For the same image but a different question, we can re-use already-computed joint interactions, such as the local-interaction, image-caption, \etc (a caching sketch is given below). This is impossible for approaches that pool cues, since the question changes. 3) As mentioned in Sec.~4.2, it is possible to share weights between similar utilities, \eg, different history questions/answers.
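As an illustration of option 2), question-independent factors can be cached per image and reused across dialog rounds. The following is a minimal sketch, assuming a user-supplied routine that computes the static (question-independent) factors; the class and function names are hypothetical:
\begin{verbatim}
class StaticFactorCache:
    # Caches joint interactions that do not depend on the question
    # (e.g. image local-interactions, image-caption), keyed by image id,
    # so they are computed once per image rather than once per question.
    def __init__(self, compute_static_factors):
        self.compute_static_factors = compute_static_factors  # assumed callable
        self.cache = {}

    def get(self, image_id, image_feats, caption_feats):
        if image_id not in self.cache:
            self.cache[image_id] = self.compute_static_factors(
                image_feats, caption_feats)
        return self.cache[image_id]
\end{verbatim}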

Currently, we do not pursue most of these options, as the model trains quickly (8 hours vs.\ 33 hours for the previous state of the art) and fits on a single 12GB GPU.

\begin{figure}
\includegraphics[width=\linewidth]{figs/potentials.pdf}
\caption{Two images, each with two questions. We illustrate scores obtained from different types of factors. Local-info denotes `Image-Local-Information,' Question refers to `Image-Question,' \etc. We observe `Image-Question' to have the highest variance between different questions, since its heat map differs the most. `Image-Question' also correlates the most with the final attention.}
\label{fig:onecol}
\end{figure}
\begin{table}
\resizebox{\linewidth}{!}{
\begin{tabular}{lccccc}
\Xhline{2\arrayrulewidth}
Model & MRR & R@1 & R@5 & R@10 & Mean \\
\Xhline{2\arrayrulewidth}
FGA & 0.6525 & 51.43 & 82.08 & 89.56 & 4.35 \\
Ensemble of 2 FGA & 0.6711 & 53.56 & 83.83 & 90.97 & 3.92 \\
Ensemble of 3 FGA & 0.6786 & 54.28 & 84.71 & 91.69 & 3.73 \\
Ensemble of 4 FGA & 0.6819 & 54.56 & 85.19 & 92.10 & 3.62 \\
Ensemble of 5 FGA & 0.6848 & 54.82 & 85.57 & 92.38 & 3.55 \\
Ensemble of 6 FGA & 0.6860 & 54.95 & 85.71 & 92.52 & 3.50 \\
Ensemble of 7 FGA & 0.6869 & 54.97 & 85.91 & 92.67 & 3.47 \\
Ensemble of 8 FGA & 0.6881 & 55.10 & 86.04 & 92.77 & 3.44 \\
Ensemble of 9 FGA & 0.6892 & 55.16 & 86.26 & 92.95 & 3.39 \\
\Xhline{2\arrayrulewidth}
\end{tabular}}
\caption{Analysis of ensemble models for VisDial v0.9. With an ensemble of 9 models we observe an improvement of more than 3\% over the single model.}
\label{tab:abl-atten}
\end{table}

\begin{table}
\resizebox{\linewidth}{!}{
\begin{tabular}{ccccc}
\Xhline{2\arrayrulewidth}
A & Q & C & I & \\
\Xhline{2\arrayrulewidth}
(0.125) & (0.988) & (0.593) & (0.205) & (0.607) \\
(0.122) & (0.004) & (0.186) & (0.121) & (0.304) \\
(0.075) & (0.001) & (0.123) & (0.085) & (0.017) \\
\Xhline{2\arrayrulewidth}
\end{tabular}}
\caption{For each utility (column) we show the three most related cues based on the score given in \equref{eq:score}; the score is provided in parentheses. Prior, local-information and local-interaction entries refer to the utility of the corresponding column.}
\label{tab:weights}
\end{table}

\section{Qualitative Evaluation}
\label{sec:qual}

\textbf{Factors visualization:} We provide additional visualizations in \figref{fig:onecol}. We visualize the scores for each image region obtained from different types of factors. `Image-Local-Information,' `Image-Caption' and `Image-Local-Interaction' are constant for different questions, while `Image-Question,' `Image-Answer,' `Image-History-Q' and `Image-History-A' change for every question. We calculated the variance of the interactions and observe that `Image-Question' has the highest variance, while `Image-Answer,' `Image-History-Q' and `Image-History-A' have a considerably lower variance (a sketch of this computation is given below). Beyond the importance score, this high variance also suggests that the `Image-Question' cue is the most important one.

\textbf{Attention over dialogs:} In \figref{fig:res}, we present a randomly-picked set of 50 images along with their corresponding dialogs. An automatic script is used to generate the figures. We highlight that image attention is aware of the scene in the question context and is able to attend to the correct foreground or background regions. Question attention attends to informative words, and answer attention frequently correlates with the predicted answer. History attention emphasizes nuances.
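The variance comparison mentioned above can be reproduced with a simple computation over the stored attention maps. A minimal sketch, assuming each factor's attention over image regions is collected for every question of a dialog; array shapes are illustrative:
\begin{verbatim}
import numpy as np

def attention_variance(attention_maps):
    # attention_maps: array of shape (num_questions, num_regions) holding the
    # normalized attention a single factor type (e.g. 'Image-Question')
    # assigns to each image region, one row per question of the dialog.
    # Returns the variance across questions, averaged over regions; a higher
    # value means the factor reacts more strongly to the question.
    maps = np.asarray(attention_maps)
    return np.var(maps, axis=0).mean()
\end{verbatim}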

\begin{figure}
\subfloat{\includegraphics[width=0.87\linewidth]{figs/supp/final73}}
\caption{50 dialogs along with question, answer and history attention. The predicted answer (\ie, A) and the ground-truth answer (\ie, GT) are also provided.}
\label{fig:res}
\end{figure}

\begin{figure}\ContinuedFloat
\subfloat{\includegraphics[width=0.87\linewidth]{figs/supp/final71}}
\end{figure}
\begin{figure}\ContinuedFloat
\subfloat{\includegraphics[width=0.87\linewidth]{figs/supp/final87}}
\end{figure}
\begin{figure}\ContinuedFloat
\subfloat{\includegraphics[width=0.87\linewidth]{figs/supp/final72}}
\end{figure}
\begin{figure}\ContinuedFloat
\subfloat{\includegraphics[width=0.87\linewidth]{figs/supp/final160}}
\end{figure}
\begin{figure}\ContinuedFloat
\subfloat{\includegraphics[width=0.87\linewidth]{figs/supp/final150}}
\end{figure}
\begin{figure}\ContinuedFloat
\subfloat{\includegraphics[width=0.87\linewidth]{figs/supp/final162}}
\end{figure}
\begin{figure}\ContinuedFloat
\subfloat{\includegraphics[width=0.87\linewidth]{figs/supp/final148}}
\end{figure}
\begin{figure}\ContinuedFloat
\subfloat{\includegraphics[width=0.87\linewidth]{figs/supp/final6}}
\end{figure}
\begin{figure}\ContinuedFloat
\subfloat{\includegraphics[width=0.87\linewidth]{figs/supp/final124}}
\end{figure}
\begin{figure}\ContinuedFloat
\subfloat{\includegraphics[width=0.87\linewidth]{figs/supp/final125}}
\end{figure}
\begin{figure}\ContinuedFloat
\subfloat{\includegraphics[width=0.87\linewidth]{figs/supp/final151}}
\end{figure}
\begin{figure}\ContinuedFloat
\subfloat{\includegraphics[width=0.87\linewidth]{figs/supp/final153}}
\end{figure}
