MediaEval 2018: Predicting Media Memorability
In this paper, we present the Predicting Media Memorability task, which is proposed as part of the MediaEval 2018 Benchmarking Initiative for Multimedia Evaluation. Participants are expected to design systems that automatically predict memorability scores for videos, reflecting the probability of a video being remembered. In contrast to previous work on image memorability prediction, where memorability was measured a few minutes after memorization, the proposed dataset comes with "short-term" and "long-term" memorability annotations. All task characteristics are described, namely: the task's challenges and expected breakthroughs, the released dataset and ground truth, the required participant runs and the evaluation metrics.
Following the rapid expansion of the research field of image memorability prediction [1, 2, 3, 4], the challenge has recently been extended to videos [5, 6, 7]. An important motivation for video memorability (VM) prediction derives from the need for new techniques that can help organize and retrieve digital content and make it more useful in our daily lives. The problem is a pressing one, since media platforms such as social networks, search engines, and recommender systems deal with growing amounts of content day after day. Like other cues of video importance, such as aesthetics or interestingness, memorability can be regarded as useful for choosing between otherwise comparable videos. Consequently, a large number of applications, e.g., education and learning, content retrieval and search, content summarization, storytelling, targeted advertising, and content recommendation and filtering, would benefit from models capable of ranking videos according to their memorability.
Despite its potential to become an active area of research in the computer vision community, VM prediction suffers from two main obstacles, described in previous work. Firstly, among the previous attempts at predicting VM [5, 6, 7], no clear definition of VM arises, nor does a common and unified protocol for its measurement exist, contrary to what can be found in the literature for image memorability. Secondly, no large-scale dataset is available for the community to build its models on. The purpose of this task is therefore to propose a public benchmark to assess the memorability of videos, based on a publicly released large-scale dataset and on an objective and clear measurement protocol.
II Task description
The Predicting Media Memorability Task requires participants to build systems that are capable of predicting how memorable a video is, by computing a memorability score for each video. Participants will be provided with an extensive dataset of videos with memorability annotations. The ground truth has been collected through recognition tests and, for this reason, reflects objective measures of memory performance. In contrast to previous work on image memorability prediction [1, 2], where memorability was measured a few minutes after memorization, the dataset comes with both "short-term" and "long-term" memorability annotations. Because memories continue to evolve in long-term memory, in particular during the first day following memorization (see, e.g., the forgetting curve in the seminal work of Ebbinghaus), we expect long-term memorability annotations to be more representative of long-term memory performance, which is more relevant in many applications.
Participants will be required to train computational models capable of inferring video memorability from visual content. Optionally, descriptive titles attached to the videos may be used. Two subtasks will be offered to participants:
Short-term Memorability Subtask: the task involves predicting a "short-term" memorability score for a given video.
Long-term Memorability Subtask: the task involves predicting a "long-term" memorability score for a given video.
For the two subtasks, depending on the runs, participants will be allowed to use external data.
III Data description
The dataset is composed of 10,000 short soundless videos shared under a license that allows their use and redistribution in the context of MediaEval 2018. These 10,000 videos were split into 8,000 videos for the development set and 2,000 videos for the test set. They were extracted from raw footage used by professionals when creating content. Each video lasts 7 seconds; the clips are varied and contain different scene types. Each video also comes with its original title. These titles can often be seen as a list of tags (textual metadata) that might be useful for inferring the memorability of the videos. Participants are free to use them or not.
To facilitate participation from various communities, we also provide some pre-computed content descriptors. Two of them are video-dedicated features: C3D spatio-temporal visual features, obtained by extracting the output of the final classification layer of the C3D model, a 3-dimensional convolutional network proposed for generic video analysis, and HMP, the histogram of motion patterns of each video. Additional frame-based features are provided that were extracted on three key frames (first, middle and last frame) of each video: HoG descriptors (Histograms of Oriented Gradients) are calculated on 32x32 windows on a grey-scale version of each frame; LBP (Local Binary Patterns) are calculated for patches of 8x15 pixels; InceptionV3 features correspond to the output of the fc7 layer of the InceptionV3 deep network; ORB features result from a fusion of the FAST keypoint detector and the BRIEF descriptor; and color histograms are computed in the HSV space. Additionally, following previous work, a set of aesthetic visual features, composed of color, texture and object-based descriptors aggregated through the computation of their mean and median values, is extracted every 10 frames of each video.
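As a rough illustration of how such frame-based descriptors are produced (a minimal sketch, not the released extraction code; the helper names and the toy grey-level histogram are our own), the following selects the three key frames of a clip and computes one simple normalised histogram per frame:

```python
import numpy as np

def key_frames(video):
    """Select the three key frames used for the frame-based
    descriptors: first, middle and last frame of the clip."""
    n = len(video)
    return [video[0], video[n // 2], video[-1]]

def grey_histogram(frame, bins=32):
    """Toy stand-in for a descriptor: a normalised grey-level
    histogram of one RGB frame (pixel values in [0, 255])."""
    grey = frame.mean(axis=2)                     # naive RGB -> grey
    hist, _ = np.histogram(grey, bins=bins, range=(0, 255))
    return hist / hist.sum()

# a fake 7-second clip: 175 frames of 64x64 RGB noise (assuming 25 fps)
video = np.random.randint(0, 256, size=(175, 64, 64, 3))
features = [grey_histogram(f) for f in key_frames(video)]   # 3 x 32 values
```

The real HoG, LBP, InceptionV3 and ORB descriptors would be computed in the same per-key-frame fashion, just with richer extractors.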
IV Ground truth
IV-A Protocol to measure video memorability
To collect memorability annotations, we introduced a new protocol to measure human long-term memory performance for videos, partly inspired by previous work. The protocol consists of two parts and is based on recognition tests, so that our memorability scores reflect objective measures of memory performance. The first part consists of interlaced viewing and recognition tasks. Participants viewed a sequence of 180 videos, 40 of which were target videos, i.e., repeated videos, the others being fillers, i.e., videos that occurred only once. Their task was to press the space bar whenever they detected a repetition. After 24 to 72 hours, participants viewed a new sequence of videos consisting of 40 targets, which were videos randomly chosen from the fillers of the first part, and 120 new fillers. In contrast to previous work on image memorability prediction [1, 2], where memorability was measured a few minutes after memorization, memory performance was therefore measured twice: a few minutes after memorization and again (on different items) 24-72 hours later. Thus, the dataset comes with both short-term and long-term memorability annotations. These two scores will allow a comparison of the participants' systems for both short-term and long-term memorability prediction. However, because of the difficulty of collecting data after a long delay through crowdsourcing, the number of annotations is larger for short-term memorability scores than for long-term ones. On average, each video received 38 annotations in the short-term recognition task and 13 annotations in the long-term recognition task. For each video in the development set, we provide the number of annotations for both the short-term and long-term recognition tasks.
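The repetition structure of such a presentation sequence can be inspected mechanically. A small sketch (the function name and toy sequence are ours) that computes, for each video shown twice, the number of other presentations between its two occurrences:

```python
def repeat_intervals(sequence):
    """For every video id that appears twice in a presentation
    sequence, return the number of other presentations between
    its two occurrences (the 'repetition interval')."""
    first_seen = {}
    intervals = {}
    for position, video_id in enumerate(sequence):
        if video_id in first_seen:
            intervals[video_id] = position - first_seen[video_id] - 1
        else:
            first_seen[video_id] = position
    return intervals

# toy sequence: target 't1' repeats after 3 intervening fillers
sequence = ["t1", "f1", "f2", "f3", "t1", "f4"]
print(repeat_intervals(sequence))   # {'t1': 3}
```

In the actual protocol, these intervals vary per target, which motivates the score correction described below for the short-term scores.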
IV-B Memorability scores calculation
We assigned an initial memorability score to each video, defined as the percentage of correct detections by participants, for both short-term and long-term memory performances.
The short-term raw scores are further refined by applying a linear transformation that takes the memory retention duration into account to correct/normalize the scores. Indeed, in our measurement protocol, the second occurrence (i.e., repetition) of a video happens after a variable time interval (each video is repeated after a variable number of other videos, randomly chosen in the range of [45;100] videos). Using a similar approach for images, previous work has shown that memorability scores evolve as a function of the time interval between repeats, while memorability ranks are largely conserved. We were able to establish the same relation for videos. Thus, following that approach, we use this information to apply a correction to our raw memorability scores that explicitly accounts for the difference in interval lengths, so that our short-term memorability scores are representative of the typical memory performance after the maximum interval (i.e., 100 videos). Because we observed that memorability decreases linearly when the retention duration increases, we decided to apply a linear correction. Nevertheless, note that the applied correction has only a small effect on the memorability scores, in terms of both absolute and relative values.
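The two-step scoring can be sketched as follows; the slope of the linear correction is an assumed illustrative value, not the one actually estimated from the collected data:

```python
def raw_score(hits, n_annotations):
    """Raw memorability: the fraction of annotators who correctly
    detected the video's repetition."""
    return hits / n_annotations

def linear_correction(score, interval, slope, ref_interval=100):
    """Project a raw score to the score expected at the maximum
    repetition interval of 100 videos, assuming memorability
    decreases linearly with the interval. `slope` (change in score
    per intervening video, negative) is an assumed parameter."""
    return score + slope * (ref_interval - interval)

# e.g., 34 of 38 annotators detected a repeat shown after 45 videos
raw = raw_score(34, 38)
corrected = linear_correction(raw, interval=45, slope=-0.001)
```

A video repeated after a short interval is thus adjusted slightly downward, since it would have been harder to recognize at the reference interval of 100 videos; a video already shown at interval 100 is left unchanged.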
In contrast, we applied no correction to the long-term memorability scores. Indeed, we observed no specific relationship between retention duration and long-term memorability in our collected scores. This was expected from what can be found in the literature: according to our protocol, the second measure was carried out 24 to 72 hours after the first. After such a long retention duration, the memory performance is expected to no longer be subject to a substantial decrease due to the retention duration.
V Run description
Every team can submit up to 10 runs, 5 per subtask. For each subtask, a required run is defined:
Short-term memorability subtask – required run: any information extracted from the content, the provided features, the short-term memorability scores or external data may be used to build the systems; the use of the long-term memorability scores is not allowed.
Long-term memorability subtask – required run: any information extracted from the content, the provided features, the long-term memorability scores or external data may be used to build the systems; the use of the short-term memorability scores is not allowed.
Apart from these required runs, any additional run for each subtask will be considered a general run, i.e., anything is allowed, both in terms of methods and information sources.
For both subtasks, the official evaluation metric will be Spearman's rank correlation between the predicted memorability scores and the ground-truth memorability scores, computed over all test videos. Although the task remains a prediction task, only the ranking of the different videos will be evaluated by the official metric. The choice of Spearman's rank correlation as the official measure reflects a desire to normalize the outputs of the different systems and make comparisons easier. Participants are nonetheless encouraged to genuinely treat the task as a prediction task. Other classic metrics will also be computed and provided to the participants for the sake of comparison between the different runs and systems.
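Since only the ranking matters for the official metric, any monotone rescaling of a system's outputs leaves its score unchanged. A self-contained sketch of Spearman's rank correlation (tie handling omitted for brevity; in practice a library routine such as scipy.stats.spearmanr would be used):

```python
import numpy as np

def spearman(pred, truth):
    """Spearman's rank correlation: the Pearson correlation of the
    rank-transformed scores (ties not handled, for simplicity)."""
    def ranks(x):
        order = np.argsort(x)
        r = np.empty(len(x))
        r[order] = np.arange(len(x))
        return r
    rp, rt = ranks(np.asarray(pred)), ranks(np.asarray(truth))
    rp, rt = rp - rp.mean(), rt - rt.mean()
    return float((rp * rt).sum() / np.sqrt((rp ** 2).sum() * (rt ** 2).sum()))

truth = [0.90, 0.75, 0.60, 0.85]
rho = spearman([9.0, 7.5, 6.0, 8.5], truth)   # same ordering as truth
```

Because the metric depends only on ranks, `rho` here is 1.0 even though the predicted values are on a completely different scale than the ground truth.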
A complete and comparative framework for the evaluation of video memorability is proposed. Details on the methods and results of each individual participant team can be found in the working note papers of the MediaEval 2018 workshop proceedings.
We would like to thank Ricardo Manhaes Savii (Federal University of São Paulo) and Mihai Gabriel Constantin (University Politehnica of Bucharest) for providing the features that accompany the released data. Bogdan Ionescu's work was partially supported by the Romanian Ministry of Innovation and Research, UEFISCDI, project SPIA-VA, agreement 2SOL/2017, grant PN-III-P2-2.1-SOL-2016-02-0002. Mats Sjöberg's work has been supported by the Academy of Finland in project 313988 and the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement No 780069.
-  P. Isola, J. Xiao, A. Torralba, and A. Oliva, “What makes an image memorable?” in Proc. IEEE Int. Conf. on Computer Vision and Pattern Recognition (CVPR). IEEE, 2011, pp. 145–152.
-  A. Khosla, A. S. Raju, A. Torralba, and A. Oliva, “Understanding and predicting image memorability at a large scale,” in Proc. IEEE Int. Conf. on Computer Vision (ICCV), 2015, pp. 2390–2398.
-  Y. Baveye, R. Cohendet, M. Perreira Da Silva, and P. Le Callet, “Deep learning for image memorability prediction: the emotional bias,” in Proc. ACM Int. Conf. on Multimedia (ACMM), 2016, pp. 491–495.
-  J. Fajtl, V. Argyriou, D. Monekosso, and P. Remagnino, “Amnet: Memorability estimation with attention,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 6363–6372.
-  J. Han, C. Chen, L. Shao, X. Hu, J. Han, and T. Liu, “Learning computational models of video memorability from fmri brain imaging,” IEEE transactions on cybernetics, vol. 45, no. 8, pp. 1692–1703, 2015.
-  S. Shekhar, D. Singal, H. Singh, M. Kedia, and A. Shetty, “Show and recall: Learning what makes videos memorable,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 2730–2739.
-  R. Cohendet, K. Yadati, N. Q. Duong, and C.-H. Demarty, “Annotating, understanding, and predicting long-term video memorability,” in Proc. of the ICMR 2018 Workshop, Yokohama, Japan, June 11-14, 2018.
-  J. L. McGaugh, “Memory–a century of consolidation,” Science, vol. 287, no. 5451, pp. 248–251, 2000.
-  J. M. Murre and J. Dros, “Replication and analysis of Ebbinghaus’ forgetting curve,” PLoS ONE, vol. 10, no. 7, p. e0120644, 2015.
-  D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri, “Learning spatiotemporal features with 3d convolutional networks,” in Proc. IEEE Int. Conf. on Computer Vision (ICCV), 2015, pp. 4489–4497.
-  J. Almeida, N. J. Leite, and R. d. S. Torres, “Comparison of video sequences with histograms of motion patterns,” in Image Processing (ICIP), 2011 18th IEEE International Conference on. IEEE, 2011, pp. 3673–3676.
-  N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, vol. 1. IEEE, 2005, pp. 886–893.
-  D.-C. He and L. Wang, “Texture unit, texture spectrum, and texture analysis,” IEEE transactions on Geoscience and Remote Sensing, vol. 28, no. 4, pp. 509–512, 1990.
-  C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2818–2826.
-  E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “Orb: An efficient alternative to sift or surf,” in Computer Vision (ICCV), 2011 IEEE international conference on. IEEE, 2011, pp. 2564–2571.
-  A. F. Haas, M. Guibert, A. Foerschner, S. Calhoun, E. George, M. Hatay, E. Dinsdale, S. A. Sandin, J. E. Smith, M. J. Vermeij et al., “Can we measure beauty? computational evaluation of coral reef aesthetics,” PeerJ, vol. 3, p. e1390, 2015.
-  P. Isola, J. Xiao, D. Parikh, A. Torralba, and A. Oliva, “What makes a photograph memorable?” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 7, pp. 1469–1482, 2014.