Multimodal Content Representation and Similarity Ranking of Movies

29 January 2016
Abstract

In this paper we examine the existence of correlations between movie similarity and low-level features extracted from the respective movie content. In particular, we demonstrate the extraction of multimodal representation models of movies based on subtitles, audio and metadata mining. We focus our research on topic modeling of movies based on their subtitles. In order to demonstrate the proposed content representation approach, we built a small dataset of 160 widely known movies. We assess movie similarities, as propagated by the singular modalities and fusion models, in the form of recommendation rankings. We showcase a novel topic-model browser for movies that allows for exploration of the different aspects of similarity between movies, and an information retrieval system for movie similarity based on multimodal content.

Konstantinos Bougiatiotis
Software and Knowledge Engineering Lab,
Institute of Informatics and Telecommunications,
National Center of Scientific Research Demokritos, Greece,

bogas.ko@gmail.com
Theodoros Giannakopoulos
Computational Intelligence Lab
Institute of Informatics and Telecommunications,
National Center of Scientific Research Demokritos, Greece,

tyiannak@gmail.com


CCS Concepts: Information systems → Multimedia and multimodal retrieval; Multimedia information systems; Computing methodologies → Machine learning

Keywords: Topic Modeling; Latent Dirichlet Allocation; Movie Audio Analysis; Multimodal Fusion; Information Retrieval

In order to cope with the overwhelming amount of available data, we need recommendation systems to browse through item collections. This is also the case for motion pictures in particular. Many systems provide movie recommendation services, and most can be classified as collaborative filtering systems, such as MovieLens (https://movielens.org/), content-based systems, like Jinni (http://www.jinni.com/), or hybrid systems, such as IMDB (http://www.imdb.com/). However, all these systems rely on human-generated information to create a corresponding representation and assess movie-to-movie similarity; they do not take into account the raw content of the movie itself but build solely upon annotations made by users.

This paper introduces the more ambitious objective of representing each movie directly from its content. We strive to find correlations between low-level content similarity from different movie information modules and the high-level association of those movies. This leads to innovative ways of defining movie similarity, exploring latent semantic knowledge through topic models, and boosting traditional information retrieval systems with information from heterogeneous content sources. The use of the multimedia signal of movies has so far been limited to particular applications such as emotion extraction [6] or violent content detection [5, 8]. An interesting application of topic modeling in the movie domain aims at creating movie summaries containing those scenes that best embody the gist of the main topic the movie belongs to [9].

The rest of this paper is organized as follows. First, the general workflow and details of the proposed method are explained. We then present our data collection and ground-truth generation methodology, followed by the experimental results. We close by drawing conclusions and outlining topics for further research.

The overall scheme of the methodology described in this work is presented in Figure 1. In summary, the following steps are carried out with regard to the different modalities:

• Text Analysis: Preprocessing of the subtitles of each movie, followed by the training of a topic model (through Latent Dirichlet Allocation), in order to represent each movie as a vector of topic weights.

• Audio Analysis: Training two supervised audio classification models that result in a music-related and an audio-event-related representation.

• Metadata Analysis: Parsing metadata about each movie's cast, director and genre into categorical feature vectors.

• Data Fusion and Content Similarity: Fusing the similarity matrices generated in the previous steps to yield recommendations for each movie.

Figure 1: Workflow diagram of the proposed method

We start by applying essential preprocessing steps to each subtitle document. The main steps involved in this preprocessing stage are: (a) regular-expression removal (filtering out timestamps and non-textual characters), (b) tokenization and case folding (tokenizing and reducing letters to lower case), and (c) lemmatization based on the WordNet database [3] (unifying variations of the same term due to inflectional morphology).
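The paper does not name its text-processing toolkit; the following is a minimal sketch of this preprocessing chain using NLTK, where the regular expressions and the `movie.srt` filename are illustrative assumptions.

```python
import re
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize

lemmatizer = WordNetLemmatizer()

def preprocess_subtitles(raw_srt):
    # (a) regular-expression removal: strip cue indices, timestamps and tags
    text = re.sub(r"^\d+\s*$", " ", raw_srt, flags=re.MULTILINE)
    text = re.sub(r"\d{2}:\d{2}:\d{2},\d{3}\s*-->\s*\d{2}:\d{2}:\d{2},\d{3}", " ", text)
    text = re.sub(r"<[^>]+>", " ", text)      # HTML-like formatting tags
    text = re.sub(r"[^A-Za-z\s]", " ", text)  # remaining non-textual characters
    # (b) tokenization and case folding
    tokens = word_tokenize(text.lower())
    # (c) lemmatization based on WordNet
    return [lemmatizer.lemmatize(tok) for tok in tokens]

tokens = preprocess_subtitles(open("movie.srt").read())  # placeholder file
```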

As a second preprocessing stage, we apply word filtering. First, common and movie-domain-specific stopwords are removed. Then, we remove words that provide little information for each document, i.e. words with low intra-document and high inter-document frequency. At the end of this preprocessing procedure, we acquire the bag-of-words (BoW) representation of the documents, which leads to a multi-dimensional vector of term frequencies for each movie.
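A sketch of this filtering stage, assuming gensim; the stopword list and frequency cut-offs below are illustrative, as the paper's exact thresholds are not reported.

```python
from gensim.corpora import Dictionary

# Token lists per movie, e.g. from preprocess_subtitles(); toy data here.
docs = [["father", "son", "school", "war"],
        ["colonel", "war", "men", "general"]]
stopwords = {"the", "a", "uh"}  # common + movie-domain stopwords (illustrative)

docs = [[t for t in d if t not in stopwords] for d in docs]
dictionary = Dictionary(docs)
# Drop low-information terms; these cut-offs are illustrative only.
dictionary.filter_extremes(no_below=1, no_above=0.9)
corpus = [dictionary.doc2bow(d) for d in docs]  # BoW term-frequency vectors
```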

The BoW representation is far too sparse and high-dimensional to be used efficiently, so we deploy a topic modeling algorithm, namely Latent Dirichlet Allocation (LDA) [1], a probabilistic topic model whose fundamental idea is that all the documents (movies) in the collection share the same set of topics, but each document exhibits those topics in different proportions. We used a Collapsed Gibbs Sampling version of the algorithm [7] and, after numerous empirical evaluations, settled on the number of topics that performed best.
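The authors used a Collapsed Gibbs Sampling implementation (MALLET [7]); the sketch below, continuing from the `corpus` and `dictionary` above, substitutes gensim's variational-Bayes LdaModel as a stand-in, with a placeholder topic count since the optimal value is not stated in this text.

```python
from gensim.models import LdaModel

NUM_TOPICS = 50  # placeholder: the authors' optimal value is not stated here

# Variational-Bayes LDA as a stand-in for the Collapsed Gibbs sampler
lda = LdaModel(corpus, id2word=dictionary, num_topics=NUM_TOPICS,
               passes=10, random_state=0)

def topic_vector(bow):
    """Dense vector of topic weights for one movie."""
    vec = [0.0] * NUM_TOPICS
    for topic_id, weight in lda.get_document_topics(bow, minimum_probability=0.0):
        vec[topic_id] = weight
    return vec

movie_vectors = [topic_vector(bow) for bow in corpus]
```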

In Figure 2 we depict topics generated from our movie collection, presented as word clouds, where the size of each word is proportional to its importance for the topic. Observing the resulting topics, we can see that they are well formulated and coherent. For example, the top-left topic is highlighted by words such as dad, father, mom, son and school, defining a family-related topic, while the bottom-right one mainly exhibits words like men, colonel, war and general, defining a war-related topic.

Figure 2: Word-cloud examples for four topics

Let it be noted that tf-idf weighting [10] and Latent Semantic Indexing (LSI) [2] have also been applied, for benchmarking purposes and comparison against the proposed LDA model.
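For completeness, a sketch of these two baselines with gensim, continuing from the same `corpus` and `dictionary`; the latent-dimension count is a placeholder, as the paper's value is not preserved in this text.

```python
from gensim.models import LsiModel, TfidfModel

NUM_DIMS = 50  # placeholder latent-dimension count

tfidf = TfidfModel(corpus)          # tf-idf weighting baseline
tfidf_corpus = tfidf[corpus]
# LSI baseline, trained on the tf-idf-weighted corpus
lsi = LsiModel(tfidf_corpus, id2word=dictionary, num_topics=NUM_DIMS)
lsi_vectors = [[weight for _, weight in lsi[bow]] for bow in tfidf_corpus]
```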

The audio signal is a very important channel of information with regard to a movie's content: music tracks, musical background themes, sound effects, speech and sound events all play a vital role in forming the movie's "style". Therefore, a content representation approach should also take these aspects of information into account. Towards this end, we have extracted two types of information: (a) music-genre statistics and (b) audio event statistics.

In particular, we have trained two separate supervised models using Support Vector Machines, in order to classify all movie segments into a set of predefined classes related either to audio events or to musical genres. Towards this end, the pyAudioAnalysis library [4] has been used to extract audio features on both a short-term and a long-term basis. Then, each long-term segment (represented by a vector of long-term audio features, as described in [4]) is fed as input to both the musical-genre and the audio-event classifier. The result of this process is a sequence of music genres and a sequence of audio events. The final representation that corresponds to the whole movie is provided by two vectors that represent the percentage of each musical-genre or audio-event class.

Note that, in order to train the two classifiers, a separate and independent dataset has been annotated. The adopted classes for musical genres are: blues, classical, country, electronic, jazz, rap, reggae and rock. The audio event classes are: music, speech, three types of environmental sounds (low-energy background noise, abrupt sounds and constant high-energy sounds), gunshots, screams and fights.
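The paper does not publish its training code; the following sketch illustrates the segment-classification idea with scikit-learn's SVC on placeholder feature matrices (the feature dimensionality and random data are purely illustrative stand-ins for the annotated dataset and pyAudioAnalysis features).

```python
import numpy as np
from sklearn.svm import SVC

GENRES = ["blues", "classical", "country", "electronic",
          "jazz", "rap", "reggae", "rock"]

# Placeholder training data standing in for the separately annotated dataset:
# rows are long-term segment feature vectors (e.g. pyAudioAnalysis statistics).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(160, 68))
y_train = rng.choice(GENRES, size=160)

genre_clf = SVC(kernel="rbf").fit(X_train, y_train)

def class_percentages(segment_features, clf, classes):
    """Classify every long-term segment of a movie and return the
    fraction of segments assigned to each class."""
    preds = clf.predict(segment_features)
    return np.array([np.mean(preds == c) for c in classes])

movie_segments = rng.normal(size=(40, 68))  # one movie's segment features
genre_vector = class_percentages(movie_segments, genre_clf, GENRES)
```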

Feature extraction from metadata is much more straightforward. Utilizing publicly available information from IMDB regarding the cast, directors and genres of the movies in our collection, we create a categorical vector for each movie, where each cell contains a binary value, 0 or 1, denoting the relation between the movie and the corresponding categorical value.
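A minimal sketch of this binarization, assuming scikit-learn's MultiLabelBinarizer and toy metadata records in place of the parsed IMDB data.

```python
import numpy as np
from sklearn.preprocessing import MultiLabelBinarizer

# Toy metadata records; real values would be parsed from IMDB.
movies = [
    {"cast": ["Toshiro Mifune"], "director": ["Akira Kurosawa"],
     "genre": ["Drama"]},
    {"cast": ["Tim Robbins", "Morgan Freeman"], "director": ["Frank Darabont"],
     "genre": ["Drama", "Crime"]},
]

blocks = []
for field in ("cast", "director", "genre"):
    mlb = MultiLabelBinarizer()
    blocks.append(mlb.fit_transform([m[field] for m in movies]))

# One 0/1 categorical vector per movie, concatenating all three fields
metadata_matrix = np.hstack(blocks)
```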

Having represented the movies as feature vectors, we can define the similarity between these vectors to correspond to the similarity of their respective movies. We compute the cosine similarity between all movie pairs $(i, j)$ in the different representation spaces:

$$sim(i, j) = \frac{\mathbf{v}_i \cdot \mathbf{v}_j}{\lVert \mathbf{v}_i \rVert \, \lVert \mathbf{v}_j \rVert}$$

where $\mathbf{v}_i$ denotes the feature vector of movie $i$ in the given representation space.

This results in a similarity matrix between movies for each modality. In order to combine these content-specific similarities, we adopted a simple weighting scheme between the similarity matrices, where the optimal weights for each modality are pinpointed after extensive experimentation, as sketched below.
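A sketch of the similarity computation and the weighting scheme; the weights below are illustrative, since the optimal per-modality weights found through experimentation are not reported here.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def fuse_similarities(sim_matrices, weights):
    """Weighted combination of per-modality movie-similarity matrices."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize modality weights
    return sum(wi * S for wi, S in zip(w, sim_matrices))

# Per-modality cosine-similarity matrices (inputs: movie-by-feature matrices)
rng = np.random.default_rng(0)
sims = [cosine_similarity(X) for X in (rng.random((4, 6)), rng.random((4, 3)))]
fused = fuse_similarities(sims, weights=[0.7, 0.3])  # illustrative weights
```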

In order to demonstrate the usefulness of low-level content representation of movies for similarity purposes, as well as for browsing and exploration of content, we have compiled a real-world dataset of 160 movies. These movies have been selected from the IMDB Top 250 chart (http://www.imdb.com/chart/top). Our purpose was to use movies that are widely known, so that the quality of the results can be easily assessed. Moreover, the dataset is populated with different types of movies to avoid metadata-specific bias, such as genre or cast. The subtitles were downloaded from an open-source database (http://www.opensubtitles.org/en/search) and were hand-checked for mistakes.

However, to evaluate the similarity rankings generated by the different modalities, we need a ground-truth similarity between the movies of the dataset against which we can pitch our results. Towards this end, we used the Tag-Genome dataset [12] to create a ground-truth similarity matrix between the movies. Every movie is represented as a vector in the space of the dataset's unique tags. The tags can be a wide variety of words and phrases, such as adjectives ("funny", "dark", "adapted from book"), nouns ("plane", "fight") or metadata ("tarantino", "oscar"), that act as descriptors for the movies. Having this representation for each movie, we obtained the ground-truth movie similarity matrix as before.

Moving on, in order to appraise the quality of the similarities of the individual content representation models, we utilized the similarity rankings created by the aforementioned matrices. Specifically, for each movie we are interested in the similarity ranking of the first recommendation generated by each model. We calculate the median position of the first recommendations, over all movies, as ranked in the ground-truth similarity matrix. This information-retrieval measure conveys the similarity-ranking accuracy of each model. Let us note that we used the median of the rankings because it is more robust in skewed collections, such as the rankings of each model.

As a supplementary measure, we calculate the percentage of first recommendations that belong to the top 10 recommendations for each movie according to the ground-truth similarity. This serves as a recall-type measure, indicating the sensitivity of each model.
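Both measures can be computed from a model's similarity matrix and the ground-truth matrix; a minimal sketch follows (the tie-breaking behavior of argsort is an implementation detail not specified in the paper).

```python
import numpy as np

def evaluate(model_sim, gt_sim):
    """Median ground-truth rank of each model's first recommendation,
    plus the percentage of first recommendations inside the ground-truth top 10."""
    n = model_sim.shape[0]
    ranks = []
    for i in range(n):
        order = np.argsort(-model_sim[i])
        order = order[order != i]            # never recommend the query movie
        first_rec = order[0]
        gt_order = np.argsort(-gt_sim[i])
        gt_order = gt_order[gt_order != i]
        rank = int(np.where(gt_order == first_rec)[0][0]) + 1  # 1-based rank
        ranks.append(rank)
    ranks = np.array(ranks)
    return np.median(ranks), 100.0 * np.mean(ranks <= 10)
```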

Tables 1 and 2 present the results for each individual model, as well as the best data-fusion models over the possible combinations of models. The best combinations arose after lengthy experimentation.

Model            | Median Ranking of 1st Rec. | 1st Rec. in Top 10 (%)
Tf-idf           | 18                         | 42.5
LSI              | 15.5                       | 41.8
LDA              | 15.5                       | 44.3
Audio (A)        | 51                         | 13.1
Music (M)        | 57                         | 14.3
Metadata (MD)    | 8                          | 57.5
Table 1: Singular modalities
Model            | Median Ranking of 1st Rec. | 1st Rec. in Top 10 (%)
MD + LSI         | 4                          | 65.0
MD + T + A       | 3                          | 65.6
MD + LSI + A + M | 3                          | 68.7
Table 2: Fusion models

It can be clearly seen that, as far as individual modalities are concerned (Table 1), the metadata model outperforms the rest. That is to be expected, as metadata are high-level features, attributed to movies by humans and, to some extent, correlated with the tag representation of the ground-truth similarity. Examining the content-representation models, we can see that among the textual models, LDA marginally outperforms LSI in the percentage measure, while tying in similarity ranking. On the other hand, the audio modalities as standalone models are not suitable for recommendations.

However, inspecting Table 2, we can see that expanding the fusion with more models enhances the performance of the resulting fusion models. Overall, we see a clear improvement over the results of the best singular model (namely metadata) with the addition of the content models. This is a really promising outcome, since it shows that low-level movie information can boost the performance of a content-based recommendation system.

In this short paper, we outlined a multimodal similarity model for movies based on raw content. In particular, we focused on topic modeling techniques applied to subtitle content. The basic outcomes of our research, as shown above, are the following:

      1. A complete methodology for similarity extraction and retrieval for movies based on low-level features.

2. The most important and promising outcome of the experimentation is that low-level feature models exploit latent information that boosts the performance of human-generated information models. They can therefore be adopted in the context of a multimodal content-based recommendation system.

3. Experimentation has shown that the LDA and LSI latent spaces offer good representations for the movies, with negligible differences in results. LSI is by far more efficient than LDA in terms of memory, time and complexity; however, LDA offers a much more coherent topic mapping of the movies, suitable for topic browsing and similarity discovery. This is portrayed in the qualitative example in Figure 3.

Figure 3: Topics-movies association diagram

We demonstrate how our topic model has clustered certain movies together based on their relevance through specific topics. At the top of the chain we have the most striking example, where Princess Mononoke and Seven Samurai are grouped together. These movies are not similar according to conventional recommendation systems, because one is an animation film and the other an epic war drama. However, both are set in Japanese rural villages during feudal times, with striking Japanese cultural elements such as strong religious beliefs, contact with nature and even the consumption of rice. All these connecting details are captured in a topic whose main words are fight, god, rice, forest, lord and village, as shown in the figure. The same holds for the rest of the illustrated examples.

      These results verify the core ideas of this work and inspire many future directions for our research. In particular:

• Implement scalable and efficient methods by adding more movies to our database and testing different topic models, such as Hierarchical Dirichlet Processes [11].

• Experiment with visual features from the movies, augmenting this hybrid fusion system with rich visual cues.

      • Examine more sophisticated fusion schemes and add user preferences (collaborative filtering) towards a complete recommendation system.

References

• [1] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. J. Mach. Learn. Res., 3:993–1022, Mar. 2003.
• [2] S. Deerwester, S. T. Dumais, G. W. Furnas, T. K. Landauer, and R. Harshman. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391–407, 1990.
• [3] C. Fellbaum. WordNet: An Electronic Lexical Database. Bradford Books, 1998.
• [4] T. Giannakopoulos. pyAudioAnalysis: An open-source Python library for audio signal analysis. PLoS ONE, 10(12):e0144610, 2015.
• [5] T. Giannakopoulos, D. Kosmopoulos, A. Aristidou, and S. Theodoridis. Violence content classification using audio features. In Proceedings of the 4th Hellenic Conference on Advances in Artificial Intelligence, SETN'06, pages 502–507, Berlin, Heidelberg, 2006. Springer-Verlag.
• [6] N. Malandrakis, A. Potamianos, G. Evangelopoulos, and A. Zlatintsi. A supervised approach to movie emotion tracking. In ICASSP, pages 2376–2379. IEEE, 2011.
• [7] A. K. McCallum. MALLET: A machine learning for language toolkit. 2002. http://mallet.cs.umass.edu.
• [8] J. Nam, M. Alghoniemy, and A. H. Tewfik. Audio-visual content-based violent scene characterization. In ICIP (1), pages 353–357, 1998.
• [9] R. Ren, H. Misra, and J. Jose. Semantic based adaptive movie summarisation. In S. Boll, Q. Tian, L. Zhang, Z. Zhang, and Y.-P. Chen, editors, Advances in Multimedia Modeling, volume 5916 of Lecture Notes in Computer Science, pages 389–399. Springer Berlin Heidelberg, 2010.
• [10] G. Salton and M. J. McGill. Introduction to Modern Information Retrieval. McGraw-Hill, 1983.
• [11] Y. Teh, M. Jordan, M. Beal, and D. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581, 2006.
• [12] J. Vig, S. Sen, and J. Riedl. The tag genome: Encoding community knowledge to support novel interaction. TiiS, 2(3):13, 2012.