From visual words to a visual grammar: using language modelling for image classification


Antonio Foncubierta-Rodríguez    Henning Müller    Adrien Depeursinge

The Bag–of–Visual–Words (BoVW) is a visual description technique that aims at shortening the semantic gap by partitioning a low–level feature space into regions that potentially correspond to visual concepts and by assigning semantic value to this space. In this paper we present a conceptual analysis of three major properties of language grammar and how they can be adapted to the computer vision and image understanding domain based on the bag of visual words paradigm. Evaluation of the visual grammar shows that a positive impact on classification accuracy and/or descriptor size is obtained when the proposed techniques are applied.

1 Introduction

Image retrieval and classification have been extremely active research domains, with hundreds of publications in the past 20 years [1, 2, 3]. Initiatives like the PASCAL Visual Object Classes (VOC) challenge [4] and ImageCLEF [5] have attracted many research groups to compare their methods on retrieval and classification tasks.

The Bag–of–Visual–Words (BoVW) is a visual description technique that aims at shortening the semantic gap by partitioning a low–level feature space into regions of the feature space that potentially correspond to visual concepts. These regions are called visual words in an analogy to text–based retrieval and the bag of words approach. An image can be described by assigning a visual word to each of the feature vectors that describe local regions of the image (obtained either via dense grid sampling or interest points), and then representing the set of feature vectors by a histogram of the visual words. One of the most interesting characteristics of the BoVW is that the set of visual words is created based on the actual data, and therefore only concepts present in the data will be part of the visual vocabulary [6]. The creation of the vocabulary is normally based on a clustering method (e.g. k–means, DENCLUE) that identifies local clusters in the feature space and assigns a visual word to each of the cluster centers. This has been investigated previously, either by searching for the optimal number of visual words [7], by using clustering algorithms other than k–means [8], or by selecting interest points to obtain the features [9]. Although the BoVW is widely used in the literature [10, 11], there is a strong performance variation within similar experiments when considering different vocabulary sizes [7].

Fisher Vectors [12, 13] have been proposed to overcome some of the limitations of the BoVW, improving the classification accuracy or retrieval precision. In [14] Chatfield et al. performed exhaustive comparisons between various feature encoding methods and histogram–based BoVW. In the reported results, Fisher Vectors perform better in terms of precision and accuracy than the baseline BoVW. However, the improvements in performance were obtained at the cost of descriptors that are 64% to 412% longer than those used by the baseline.

On the other hand, some language modelling concepts have already been transferred from text to BoVW–based techniques, such as stop–words [15, 16, 17, 18]. These methods have shown that classification accuracy can be improved by removing noisy words rather than by increasing the dimensionality of the descriptor. However, the use of language modelling techniques has so far been limited to a small set of situations and, even though results are promising, no generic model has been proposed to take advantage of this analogy.

In this article a novel, generic model based on language modelling is proposed. The Visual Grammar model provides tools that improve image understanding by computer–based techniques without increasing the size of the descriptor, using linguistic concepts that are easily understood by humans: topics, meaningfulness, synonymy and polysemy.

The rest of the article is organized as follows: section 2 sets the notation used in the rest of the article, section 3 introduces the Visual Grammar Model and its three transformations (meaningfulness in section 3.2, synonymy in section 3.3 and polysemy in section 3.4), experimental evaluation using the PASCAL VOC 2007 and the ImageCLEF 2013 datasets is covered in section 4 and discussion of results and conclusions are left for sections 5 and 6.

2 Notation

In this paper we use terms from a variety of research domains, including image analysis, text retrieval, linguistics and machine learning. In this section we present the notation that we use, and define terms from a conceptual point of view.

  • A visual instance is the basic unit of visual information that we are interested in describing. It may correspond to a 2D image, a video, a volumetric image or a region of interest in any of them. Formally, we will refer to visual instances using the letter $d$, indexed as $d_i$.

  • A collection or corpus is the set containing all visual instances: $\mathcal{D} = \{d_1, \ldots, d_N\}$.

  • A feature is a measurable value of the visual properties of the visual instance, e.g. a filter response. Formally we refer to features using the letter $f$. Features can be grouped into feature vectors $\mathbf{f}$ in a feature space $\mathcal{F}$.

  • A visual word $w_j$ is a specific region of the feature space $\mathcal{F}$, created by the clustering of the feature space. A visual word is defined to be an item from a vocabulary $\mathcal{W} = \{w_1, \ldots, w_T\}$ of $T$ visual words.

  • The bag of visual words of a visual instance $d_i$ is a multiset of elements where each item belongs to the vocabulary $\mathcal{W}$. Each $d_i$ can be represented by the histogram of visual words, a $T$–dimensional vector $\mathbf{h}(d_i)$ where the $j$–th component is the multiplicity of the word $w_j$ in the visual instance $d_i$: $h_j = n(w_j, d_i)$, with $j = 1, \ldots, T$.
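The histogram construction just described can be sketched in a few lines. This is a minimal, illustrative sketch: the function name and the toy centroids and features are not from the paper.

```python
import numpy as np

def bovw_histogram(features, centroids):
    """Assign each local feature vector to its nearest visual word
    (cluster centre) and return the T-dimensional histogram of counts."""
    # Euclidean distance from every feature to every centroid
    dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    words = dists.argmin(axis=1)            # nearest visual word per feature
    T = centroids.shape[0]
    return np.bincount(words, minlength=T)  # multiplicities n(w_j, d_i)

# Toy example: 4 local descriptors, vocabulary of 3 visual words
centroids = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
features = np.array([[0.1, 0.0], [0.9, 1.1], [1.2, 0.8], [4.9, 5.2]])
h = bovw_histogram(features, centroids)     # -> array([1, 2, 1])
```

In practice the centroids would come from k–means (or another clustering method) run on training features, as described above.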

3 The visual grammar model

Representing visual information using a histogram of visual words poses the obvious question of how visual words are chosen in order to convey a meaningful description of the visual instance. It also requires optimizing the relative weights of words according to their importance, meaningfulness, ambiguity, etc. Weighting of word importance in text retrieval is a challenging area where various models have been proposed [19, 20], with tf–idf (a word weighting scheme that combines the term frequency (tf) and the inverse document frequency (idf)) and BM25 being among the most popular ones [21, 22, 23].

Visual words are often generated using a clustering method in a feature space populated with training data. Experimental results show that there is no optimal number of visual words for all image description tasks [7, 11]. Larger vocabularies can produce smaller, more compact clusters that are able to model subtle differences among neighboring visual words. But they can also split meaningful clusters into various words with a similar meaning (synonyms), with a smaller weight in the histogram. On the other hand, smaller vocabularies merge words into a single large cluster that contains a mixture of all meanings (polysemy).

The cornerstone of this paper is to identify relations among visual words to improve image understanding. Identifying the topics present in a collection and quantifying word relevance for each of the topics is a first approximation to understanding word–level relations. Later, these relations are further analyzed in terms of the synonymy and polysemy concepts.

3.1 Visual topics

In spoken or written language, not all words contain the same amount of information. Specifically, the grammatical class of a word is tightly linked to the amount of meaning it conveys. E.g. nouns and adjectives (open grammatical classes) can be considered more informative than prepositions and pronouns (closed grammatical classes).

Similarly, in a vocabulary of visual words generated by clustering a feature space populated with training data, not all words are useful to describe the appearance of the visual instances.

From an information theoretical point of view, a bag of (visual) words containing $n$ elements can be seen as $n$ observations of a random variable $W$ taking values in the vocabulary $\mathcal{W}$. The unpredictability or information content of the observation corresponding to the visual word $w_j$ is

$$I(w_j) = -\log P(w_j) \quad (1)$$
This explains why nouns or adjectives contain, in general, more information than prepositions or pronouns. Those words belonging to a closed class are more probable than those belonging to a much richer class. According to Equation 1, information is related to unlikelihood of a word.
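A toy calculation, with made-up probabilities, illustrates why rare, specific words carry more information under Equation 1:

```python
import math

# Empirical word probabilities in a toy corpus: a frequent "closed-class"
# word versus a rare, specific word (numbers are illustrative only).
p_frequent = 0.20
p_rare = 0.001

# Information content I(w) = -log2 P(w), in bits
info_frequent = -math.log2(p_frequent)   # ~2.32 bits
info_rare = -math.log2(p_rare)           # ~9.97 bits

# The rarer (more specific) word carries more information
assert info_rare > info_frequent
```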

In a bag of visual words scheme for visual understanding it is important to use very specific words with high discriminative power. On the other hand, using very specific words alone does not always allow to establish and recognize similarities. This can be done by establishing a concept that generalizes very specific words that share similar meanings into a less specific visual topic, as shown in Figure 1.


Figure 1: Conceptual model of visual topics, words and features. Whereas continuous features are the most informative descriptors from an information theoretical point of view, visual words generalize feature points that are close in the feature space. We propose visual topics as a higher generalization level, modelling partially shared meanings among words.

In the definition of Probabilistic Latent Semantic Analysis (PLSA) [25], Hofmann defines a generative model that states that the observed probability of a word or term occurring in a given document is linked to a latent or unobserved set of topics (also called aspects) in the text. PLSA is an extension of Latent Semantic Analysis (LSA) [24], a language modelling technique that maps documents to a vector space of reduced dimensionality, called latent semantic space, based on a Singular Value Decomposition (SVD) of the term–document occurrence matrix.

Since it does not set any requirements on the nature of the low level features that yield these co–occurrence matrices (other than being discrete), the extension to visual words is straightforward. PLSA in combination with visual words for classification and retrieval purposes was also applied in [26, 27]. In [15] PLSA is proposed to remove noisy visual words. This approach is further extended with the concept of meaningfulness in [17], obtaining reductions of up to 92% of the vocabulary size without significant effect on image classification accuracy.

Definition 1 (PLSA–based visual topic)

A visual topic $z_k$ is an unobserved or latent variable such that the probability of observing the word $w_j$ in the visual instance $d_i$ is:

$$P(w_j | d_i) = \sum_{k=1}^{K} P(w_j | z_k) P(z_k | d_i)$$
The model is fit via the EM (Expectation–Maximization) algorithm. For the expectation step:

$$P(z_k | d_i, w_j) = \frac{P(w_j | z_k) P(z_k | d_i)}{\sum_{l=1}^{K} P(w_j | z_l) P(z_l | d_i)}$$

and for the maximization step:

$$P(w_j | z_k) = \frac{\sum_{i=1}^{N} n(w_j, d_i) P(z_k | d_i, w_j)}{\sum_{m=1}^{T} \sum_{i=1}^{N} n(w_m, d_i) P(z_k | d_i, w_m)}, \qquad P(z_k | d_i) = \frac{\sum_{j=1}^{T} n(w_j, d_i) P(z_k | d_i, w_j)}{n(d_i)}$$

where $n(w_j, d_i)$ denotes the number of times the term $w_j$ occurred in the visual instance $d_i$, and $n(d_i) = \sum_j n(w_j, d_i)$ refers to the total number of visual words in the visual instance $d_i$.

These steps are repeated until convergence or until a termination condition is met. As a result, two probability matrices are obtained: the word–concept probability matrix $P(w|z)$ and the concept–visual instance probability matrix $P(z|d)$.
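The EM iteration above can be sketched as follows. This is a minimal, unoptimized implementation assuming a dense word–document count matrix; all variable names are illustrative:

```python
import numpy as np

def plsa(n, K, iters=50, seed=0):
    """Fit PLSA by EM on a word-document count matrix n (W x D).
    Returns P(w|z) as a W x K matrix and P(z|d) as a K x D matrix."""
    rng = np.random.default_rng(seed)
    W, D = n.shape
    p_w_z = rng.random((W, K)); p_w_z /= p_w_z.sum(axis=0)
    p_z_d = rng.random((K, D)); p_z_d /= p_z_d.sum(axis=0)
    for _ in range(iters):
        # E-step: P(z|d,w) proportional to P(w|z) P(z|d)
        p_z_dw = p_w_z[:, :, None] * p_z_d[None, :, :]        # W x K x D
        p_z_dw /= p_z_dw.sum(axis=1, keepdims=True) + 1e-12
        # M-step: re-estimate P(w|z) and P(z|d) from expected counts
        expected = n[:, None, :] * p_z_dw                      # W x K x D
        p_w_z = expected.sum(axis=2)
        p_w_z /= p_w_z.sum(axis=0, keepdims=True) + 1e-12
        p_z_d = expected.sum(axis=0)
        p_z_d /= n.sum(axis=0, keepdims=True) + 1e-12
    return p_w_z, p_z_d

# Toy corpus with two clearly separated "topics":
# words 0-1 co-occur in documents 0-1, words 2-3 co-occur in documents 2-3
n = np.array([[9, 8, 0, 0],
              [8, 9, 0, 0],
              [0, 0, 9, 8],
              [0, 0, 8, 9]], dtype=float)
p_w_z, p_z_d = plsa(n, K=2)
```

On this toy data each recovered topic concentrates its probability mass on one pair of co-occurring words.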

3.2 Meaningfulness transformation

Arguably, the most obvious transformation is to weight words according to their meaningfulness. As a first approximation to topic–based word weighting, visual significance for each visual word/topic pair can be quantified. Following the ideas from [15, 17] we define the visual significance of a word for a given topic. This quantifies how much a word belongs to a given topic.

Definition 2 (Topic–based significance)

Given a visual topic $z_k$ and the set of probabilities $\{P(w_m | z_k)\}_{m=1}^{T}$, the significance $\xi(w_j, z_k)$ of a word $w_j$ for the visual topic $z_k$ is defined as the ratio of elements in $\mathcal{W}$ with a lower conditional probability than $w_j$:

$$\xi(w_j, z_k) = \frac{\left|\{ w_m \in \mathcal{W} : P(w_m | z_k) < P(w_j | z_k) \}\right|}{T}$$

Definition 3 (Visual meaningfulness)

The visual meaningfulness $\mu(w_j)$ of a visual word is its maximum topic–based significance level:

$$\mu(w_j) = \max_{k} \xi(w_j, z_k)$$

Given a meaningfulness threshold $\mu_0$, words that are not meaningful for any concept at this level can be removed from the visual word space, producing a meaningfulness–truncated feature space. This approach was tested in [17], achieving reduction ratios of up to 92% of the feature space with a limited cost in classification accuracy and retrieval precision.

Instead of using a hard decision based on a meaningfulness threshold, a transformation can be defined to weight visual words according to their meaningfulness.

Definition 4 (Meaningfulness–transformed visual word space)

Let $\mathbf{h}$ be a histogram vector where each component represents the multiplicity of a visual word, and $M$ a meaningfulness transformation matrix:

$$M = \operatorname{diag}\left(\mu(w_1), \ldots, \mu(w_T)\right)$$

Then, the vector $\mathbf{h}_M = M\mathbf{h}$ is the histogram vector of visual words in the meaningfulness–transformed space.
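A sketch of the topic-based significance, meaningfulness, and the resulting diagonal weighting, assuming the word–topic probabilities P(w|z) are already available (e.g. from PLSA). The toy numbers are illustrative:

```python
import numpy as np

def topic_significance(p_w_z):
    """xi(w_j, z_k): fraction of vocabulary words with a strictly lower
    conditional probability P(w|z_k) than w_j.  Returns a W x K matrix."""
    return (p_w_z[:, None, :] > p_w_z[None, :, :]).mean(axis=1)

def meaningfulness_transform(h, p_w_z):
    """Weight each histogram bin by the word's meaningfulness, i.e. its
    maximum topic-based significance, via a diagonal matrix M: h' = M h."""
    mu = topic_significance(p_w_z).max(axis=1)  # meaningfulness per word
    return np.diag(mu) @ h

# Toy word-topic probabilities for 4 words and 2 topics
p_w_z = np.array([[0.7, 0.05],
                  [0.1, 0.15],
                  [0.1, 0.10],
                  [0.1, 0.70]])
h = np.array([3.0, 2.0, 4.0, 1.0])
h_m = meaningfulness_transform(h, p_w_z)
```

Words that dominate some topic (words 0 and 3 here) keep most of their weight, while words that are unremarkable for every topic are down-weighted.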


3.3 Synonymy transformation

As stated above, one of the key aspects of the bag of visual words approach is that the visual words are learnt from a training data set. If the visual word creation were controlled, words would be produced at the desired level of specificity: one word for each visual pattern to be distinguished. However, supervising the creation of visual words with class–based ground truth goes against the notion of learning the visual patterns present in the data independently from the classes. Furthermore, the number of visual patterns might be unknown and/or independent of the number of classes. E.g.: in a multi–class situation, two or more classes might partially share a visual pattern, or a single class might have several visual appearances. Figure 2(a) illustrates such a situation.

Synonymy is the property of two words that have the same meaning, although it can be argued that absolute synonymy does not exist, since the choice of one word over its equivalent already conveys meaning. This concept is known as a paradigmatic relation: a word belongs to a paradigm, or group of words with similar meaning, and the choice of one over the other words from the same group is as informative as the shared meaning of the group. In Figure 2(b), these relations are expressed using a graph: words that partially belong to the same paradigm are linked. Since words are not merged, information is preserved, and synonymy relations provide additional information.


(a) Visual words alone are not able to model partially shared appearances or bimodal classes.

(b) Synonymy graph of visual word modelling partially shared appearance and bimodal classes.
Figure 2: A multi–class situation where classes can have various visual appearances (green squares distributed into two clusters) or partially share a visual pattern (purple and red stars and circles belonging to the same cluster). A graph can represent the synonymy relations among words when meaning is shared.

From a text analysis point of view, synonymy relations can be inferred from the distribution and association of words, topics and documents.

The distributional hypothesis [28, 24, 29] states that words with similar meanings occur in similar contexts in the corpus and therefore have a similar contextual distribution. In the bag of visual words approach, a context might be a complete visual instance (image, video, etc.) or a subregion of the visual instance. The use of the context as a subregion of a visual instance mimics the use of n–grams, where word occurrences are studied in contiguous groups. Choosing the size of the context is equivalent to choosing the length of the n–grams.

The distributional hypothesis is one of the most widely used hypotheses for discovering semantic relations among words. However, synonymy goes one step beyond the semantic relation, and introduces the notion of equivalence or complementarity. Therefore, if two visual words are synonyms or equivalent, it is very unlikely that they will be used together in the same context. Instead, they will probably have a complementary distribution.

In [30] an information theoretic measure is defined for analyzing word associations in a document corpus. In a bag of visual words approach, we can use a similar definition to measure the associatedness of two words.

Definition 5 (Point–wise mutual information)

The point–wise mutual information or association ratio of a pair of visual words $(w_i, w_j)$ is:

$$PMI(w_i, w_j) = \log \frac{P(w_i, w_j)}{P(w_i) P(w_j)}$$

where $P(w_i, w_j)$ is estimated counting the number of occurrences of both $w_i$ and $w_j$ in the same visual instance (or subregion of it), and $P(w_i)$, $P(w_j)$ are estimated counting the number of occurrences of $w_i$ and $w_j$ in the whole corpus.
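A minimal estimate of the PMI from per-instance occurrences might look like this; the helper name and toy corpus are illustrative:

```python
import numpy as np

def pmi(docs, wi, wj):
    """Point-wise mutual information of two visual words, estimated from
    per-instance occurrence: log2 P(wi,wj) / (P(wi) P(wj))."""
    N = len(docs)
    p_i = sum(wi in d for d in docs) / N
    p_j = sum(wj in d for d in docs) / N
    p_ij = sum(wi in d and wj in d for d in docs) / N
    if p_ij == 0:
        return float("-inf")        # the words never co-occur
    return float(np.log2(p_ij / (p_i * p_j)))

# Toy corpus: each "visual instance" is the set of words occurring in it
docs = [{0, 1}, {0, 1}, {0, 2}, {3}, {2, 3}, {1, 3}]
# Words 0 and 1 co-occur often  -> positive PMI (strong association)
# Words 0 and 3 never co-occur  -> -inf (complementary distribution)
```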

Using the point–wise mutual information (PMI) as a measure of associatedness, we can propose a PMI–based definition of the two other requirements for pairwise synonymy between visual words: complementary distribution and similar contextual distribution.

Definition 6 (Complementary distribution)

A pair of visual words $(w_i, w_j)$ have a complementary distribution in the collection when co–occurrence of the two words in the same context is less probable than occurrence of the two words separately. In such a case, the PMI satisfies:

$$PMI(w_i, w_j) < 0$$

Definition 7 (Contextual distribution)

The similarity of the contextual distribution of a pair of words $(w_i, w_j)$ can be measured by the angle of the vectors $\mathbf{a}_i, \mathbf{a}_j$, where $\mathbf{a}_i$ is a $T$–dimensional vector whose $m$–th component is the associatedness of the word $w_i$ with the word $w_m$, with $m = 1, \ldots, T$. Two words have a similar contextual distribution if

$$\cos(\mathbf{a}_i, \mathbf{a}_j) = \frac{\mathbf{a}_i \cdot \mathbf{a}_j}{\|\mathbf{a}_i\|\,\|\mathbf{a}_j\|} \geq \theta$$

where $\theta$ is the synonymy threshold.
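The contextual-distribution test can be sketched as a cosine between association vectors. In this sketch, raw co-occurrence counts stand in for PMI values, and the threshold value is an arbitrary illustrative choice:

```python
import numpy as np

def cosine(u, v):
    """Cosine of the angle between two association vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Association vectors a_i over a vocabulary of 4 words: component m is the
# associatedness of word i with word m (co-occurrence counts used here as a
# simple stand-in for PMI).  Rows shown for words 0, 1 and 2.
assoc = np.array([
    [0, 1, 8, 7],   # word 0
    [1, 0, 7, 8],   # word 1: similar context to word 0, rarely with it
    [8, 7, 0, 1],   # word 2: a very different context
], dtype=float)

theta = 0.9  # synonymy threshold (illustrative value)
# Words 0 and 1 occur in similar contexts, word 2 does not
```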

If the conditions of complementary distribution and contextual distribution are met, then the amount of synonymy of two visual words can be quantified.

Definition 8 (Synonymy value)

The synonymy value of two words is the maximum significance value for which both words are significant for the same visual topic:

$$\sigma(w_i, w_j) = \max_{k} \min\left(\xi(w_i, z_k), \xi(w_j, z_k)\right)$$


The synonymy value enables a transformation of the visual word space considering words with similar meaning but also preserving the choice of one word over the synonyms, since it is informative. We propose a synonymy–based transformation of the visual word space, where each transformed word is a linear combination of all its synonyms.

Definition 9 (Synonymy–transformed visual word space)

Let $\mathbf{h}$ be a histogram vector where each component represents the multiplicity of a visual word, and $S$ a symmetric synonymy transformation matrix:

$$S_{ij} = \begin{cases} 1 & \text{if } i = j \\ \sigma(w_i, w_j) & \text{if } i \neq j \end{cases}$$

where $\sigma(w_i, w_j)$ measures the synonymy of the visual words $w_i$ and $w_j$.

Then, the vector $\mathbf{h}_S = S\mathbf{h}$ is the histogram vector of visual words in the synonymy–transformed space.


Synonymy is a symmetric relation, and so is the transformation matrix. However, by allowing several relations for the same word, two transformed visual words will remain distinguishable as long as they do not share the same synonymy relations at the same levels. This preserves the information contained in the paradigmatic relations.
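A small sketch of the synonymy transformation: the synonymy values in S below are made up for illustration. It shows how two instances that use synonyms interchangeably become more similar after the transformation while remaining distinguishable:

```python
import numpy as np

# Symmetric synonymy matrix for 3 visual words: words 0 and 1 are strong
# synonyms (illustrative value 0.8), word 2 is unrelated.
S = np.array([
    [1.0, 0.8, 0.0],
    [0.8, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])

# Two instances that use the two synonyms interchangeably
h_a = np.array([4.0, 0.0, 1.0])
h_b = np.array([0.0, 4.0, 1.0])

h_a_s, h_b_s = S @ h_a, S @ h_b   # synonymy-transformed histograms

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# The transformed histograms are far more similar than the originals,
# yet not identical: the choice of one synonym over the other survives.
```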

3.4 Polysemy transformation

As explained in section 3, visual words generated using clustering might produce ambiguous words that are linked to various visual appearances. This behaviour is shown by word 5 in Figure 2. Polysemy is the property of a word that has two or more different meanings. Polysemic visual words are sources of ambiguity in the description of the visual instances, since they can refer to different visual appearances. A visual word is polysemic in a wide sense if there are at least two visual topics to which the visual word belongs. Based on the topic–based significance defined in section 3.2, a polysemy threshold for each word can be defined.

Definition 10 (Polysemy threshold)

The polysemy threshold $\rho(w_j)$ of a visual word $w_j$ is the largest value for which there are at least two topics where the word is significant above the threshold:

$$\rho(w_j) = \max \left\{ \rho : \exists\, k \neq k',\; \xi(w_j, z_k) \geq \rho \,\wedge\, \xi(w_j, z_{k'}) \geq \rho \right\}$$

Words with various meanings are ambiguous, which is not a desirable property for a descriptive feature.

Therefore a transformation of the visual word space should reduce the weight of ambiguous, polysemic words while relatively increasing the weight of specific words with clear meanings.

Definition 11 (Polysemy–transformed visual word space)

Let $\mathbf{h}$ be a histogram vector where each component represents the multiplicity of a visual word, and $P$ a polysemy transformation matrix:

$$P = \operatorname{diag}\left(\pi(w_1), \ldots, \pi(w_T)\right)$$

where $\pi(w_j)$ is the polysemy weight of the visual word $w_j$:

$$\pi(w_j) = 1 - \rho(w_j)$$

Then, the vector $\mathbf{h}_P = P\mathbf{h}$ is the histogram vector of visual words in the polysemy–transformed space.
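A sketch of the polysemy threshold and down-weighting. The second-highest significance across topics realizes the threshold (the largest level at which at least two topics remain significant), and the weight 1 − ρ used below is one plausible down-weighting choice, not a formula fixed by the text:

```python
import numpy as np

def polysemy_threshold(sig_row):
    """rho(w): the largest level at which the word is still significant for
    at least two topics, i.e. its second-highest topic significance."""
    return float(np.sort(sig_row)[-2])

# Topic-based significance of 3 words over 3 topics (illustrative values)
sig = np.array([
    [0.9, 0.1, 0.0],   # specific word: one clear topic
    [0.8, 0.7, 0.1],   # polysemic word: significant for two topics
    [0.5, 0.5, 0.5],   # very ambiguous word
])

rho = np.array([polysemy_threshold(s) for s in sig])   # [0.1, 0.7, 0.5]
P = np.diag(1.0 - rho)       # one plausible polysemy weighting
h = np.array([2.0, 2.0, 2.0])
h_p = P @ h                  # the polysemic word loses most weight
```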


3.5 Grammatical similarity

The bag of visual words defines a feature space where each dimension is a word, and the components of each feature vector are the occurrences of the word in a visual instance. One of the most frequently used approaches for comparing the similarity of two vectors in a feature space is computing the distance between two points: if two points are separated by a small distance, they are considered similar. Therefore, distances are considered dissimilarity measures. The Euclidean distance is the simplest choice but many other distances have been proposed: Manhattan, Bhattacharya, Mahalanobis, etc., each with their own properties and conditions under which they are optimal [31]. Instead of measuring dissimilarity by using a distance, it is possible to use a similarity measure, where high values correspond to higher similarities. Bullinaria and Levy [32] compared various distances and the cosine similarity measure on word similarity tasks. The cosine was found to be the best measure of similarity between two bag–of–words vectors. It was also found to be computationally efficient [33].

Formally, the cosine similarity of two visual instances is the cosine of the angle that their feature vectors define in the bag of visual words space. Let $\mathbf{h}_a$ and $\mathbf{h}_b$ be the two histogram vectors of the bags of visual words of the images $d_a$ and $d_b$, each with $T$ elements. The cosine similarity between the visual instances $d_a$ and $d_b$ is:

$$\cos(\mathbf{h}_a, \mathbf{h}_b) = \frac{\mathbf{h}_a \cdot \mathbf{h}_b}{\|\mathbf{h}_a\|\,\|\mathbf{h}_b\|}$$
The use of the cosine similarity has two important advantages:

  • It is a bounded measure of similarity, which takes the value 1 for histograms pointing in the same direction and 0 for orthogonal directions. The value $-1$, meaning vectors with opposite directions, does not occur in the pure bag of visual words model, where vector components are positive by definition.

  • It is not biased by the absolute number of visual words in a visual instance, since only the relative direction is considered.
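Both properties are easy to verify in code; a minimal sketch with toy histograms:

```python
import numpy as np

def cosine_similarity(h_a, h_b):
    """Cosine of the angle between two bag-of-visual-words histograms."""
    return float(h_a @ h_b / (np.linalg.norm(h_a) * np.linalg.norm(h_b)))

h_a = np.array([2.0, 0.0, 4.0])
h_b = np.array([1.0, 0.0, 2.0])   # same direction, different total count
h_c = np.array([0.0, 3.0, 0.0])   # orthogonal histogram

# Same direction -> similarity 1; orthogonal -> 0; and doubling the number
# of visual words in an instance leaves the similarity unchanged.
```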

Based on the use of the cosine similarity and the various transformations to the feature space, we can define the grammatical similarity of two visual instances.

Definition 12

(Grammatical similarity) The grammatical similarity between two visual instances $d_a$ and $d_b$, represented by the histogram vectors $\mathbf{h}_a$ and $\mathbf{h}_b$, is the cosine similarity of their transformed histograms:

$$sim_g(d_a, d_b) = \cos\left(PSM\,\mathbf{h}_a,\; PSM\,\mathbf{h}_b\right)$$
This similarity measure has the advantages of the cosine measure, but also considers the relative properties of visual words based on their behaviour in the training data.
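A sketch combining the three transformations with the cosine. The composition order P·S·M and all matrix values below are illustrative assumptions, not fixed by the text:

```python
import numpy as np

def grammatical_similarity(h_a, h_b, M, S, P):
    """Cosine similarity after applying the meaningfulness (M), synonymy (S)
    and polysemy (P) transformations to both histograms."""
    T = P @ S @ M                 # one plausible composition order
    u, v = T @ h_a, T @ h_b
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy diagonal meaningfulness and polysemy weights; synonymy between words 0-1
M = np.diag([0.9, 0.8, 0.1])      # word 2 is nearly meaningless...
P = np.diag([1.0, 1.0, 0.2])      # ...and highly polysemic
S = np.array([[1.0, 0.6, 0.0],
              [0.6, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

# Two instances that use the synonym pair interchangeably
h_a = np.array([5.0, 0.0, 3.0])
h_b = np.array([0.0, 5.0, 3.0])
sim = grammatical_similarity(h_a, h_b, M, S, P)
```

On this toy pair the grammatical similarity exceeds the plain cosine of the raw histograms: the synonymy relation raises their similarity while the meaningless, polysemic word 2 stops inflating it.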

4 Evaluation

In this section we describe the results of experimental evaluation of the visual grammar approach.

4.1 Pascal VOC challenge

According to the challenge website, the goal of the Visual Object Classes (VOC) challenge is to recognize objects from 20 visual object classes in realistic scenes (i.e. not pre–segmented objects). It is fundamentally a supervised learning problem in that a training set of labelled images is provided.

The results on the Pascal data set were obtained using the feature encoding evaluation framework provided by Chatfield et al. [14], since it allows the comparison with other methods. Table 2 shows the results obtained using the visual grammar transformation without adding the cosine similarity metric (which is not supported by the toolkit).

Method           Descriptor length   MAP (%)
VG (0.9)                     4824      46.93
VG (0.8)                    16504      52.57
VG (0.7)                    21703      53.49
VG (0.6)                    24029      53.61
VG (0.5)                    24804      53.46
VG (0.4)                    24964      53.60
VG (0.3)                    24983      53.83
VG (0.2)                    24990      53.97
VG (0.1)                    24995      53.85
BoVW baseline               25000      53.85
Fisher*                     41000      61.69
Super Vector*              132000      58.16
Table 2: Mean average precision on the VOC2007 dataset, compared to other methods. The results from the Fisher and Super Vector encodings are those reported by Chatfield et al. [14], whereas the BoVW was computed using the toolkit without enabling fine tuning features

4.2 ImageCLEF modality classification

In order to extend the evaluation including the cosine similarity, the ImageCLEF modality classification dataset was used.

Image modality is one of the characteristics of medical image retrieval that practitioners would like to see included in existing systems [34]. Medical image search engines such as GoldMiner and Yottalook contain modality filters to improve retrieval results. Whereas DICOM headers often contain metadata that can be used to filter modalities, this information is lost when images are exported for publication in journals or conferences, where they are stored as JPG, GIF or PNG files. In this case, either the visual appearance is key to identifying modalities, or the caption text can be analyzed for respective keywords. The ImageCLEFmed evaluation campaign contains a modality classification task that is regarded as an essential part of image retrieval systems. In 2013, the modality classification training set contained 2,896 images from the medical literature organized in a hierarchy of 31 image types [35].

Images were described with a BoVW based on SIFT [36] descriptors. This representation has been commonly used for image retrieval because it can be computed efficiently [11, 37, 38]. The SIFT descriptor is invariant to translations, rotations and scaling transformations and robust to moderate perspective transformations and illumination variations. SIFT encodes the salient aspects of the grey–level image gradient in a local neighborhood around each interest point.

Evaluation with separate training and test sets was performed using all combinations of the following parameters:

  1. Two SIFT–based visual vocabularies with 100 and 500 visual words.

  2. A varying number of visual topics from 25 to 350 in steps of 25.

  3. A varying meaningfulness threshold from 50% to 100%.

  4. A k–NN classifier with k values of 1, 5 and 10.

Figures 3 and 4 show the results obtained with all configurations for each vocabulary using a 1–NN classifier.

Figure 3: Classification accuracy versus effective vocabulary size, compared to the baseline approach using a 1–nearest neighbor classifier on an initial vocabulary of 100 words.
Figure 4: Classification accuracy versus effective vocabulary size, compared to the baseline approach using a 1–nearest neighbor classifier on an initial vocabulary of 500 words.

5 Discussion

The impact of the visual grammar approach on classification and retrieval tasks lies in two areas: first, an increase in classification accuracy and, second, a reduction of the descriptor size.

According to the relative accuracy with respect to the baseline, three effects can be discussed in terms of the descriptor size.

  • Stable, statistically significant improvements in accuracy were obtained for small reductions of the vocabulary size when using the cosine similarity together with the visual grammar transformation.

  • Similar accuracies to the baseline approach were obtained while maintaining a moderate to large reduction of the vocabulary size regardless of the similarity metric used.

  • Extreme reductions, to below 10–20% of the initial vocabulary size, significantly reduced accuracy along with the vocabulary size.

6 Conclusions

In this paper we present a conceptual analysis of three major properties of language grammar and how they can be adapted to the computer vision and image understanding domain based on the bag of visual words paradigm. Meaningfulness of visual words is quantified for dimensionality reduction or feature weighting. Synonymy is modelled according to a set of criteria that enable defining relations between pairs of visual words and quantifying them. Polysemy of visual words is also quantified and identified as a source of ambiguity.

These properties define three transformations of the standard bag of visual words space, and a similarity measure based on the cosine is proposed that incorporates all the transformations.

Evaluation of the visual grammar shows that a positive impact on classification accuracy and/or descriptor size is obtained when the proposed techniques are applied.

The visual grammar transformation outperforms recent methods such as the Fisher vectors and super vector encoding only in descriptor size, and cannot provide better accuracy unless it is used in combination with the cosine similarity. It also provides a framework that can identify relations between features of different types while reducing the dimensionality of the descriptors.

Future work includes combining various types of features to identify relationships among vocabularies of different nature and exploring in depth the interactions among the various proposed transformations.


This work was partially supported by the Swiss National Science Foundation (FNS) in the MANY2 project (205320–141300), the EU Framework Program under grant agreements 257528 (KHRESMOI) and 258191 (PROMISE).


  • [1] Henning Müller, Nicolas Michoux, David Bandon, and Antoine Geissbuhler. A review of content–based image retrieval systems in medicine–clinical benefits and future directions. International Journal of Medical Informatics, 73(1):1–23, 2004.
  • [2] Ceyhun Akgül, Daniel Rubin, Sandy Napel, Christopher Beaulieu, Hayit Greenspan, and Burak Acar. Content–Based Image Retrieval in Radiology: Current Status and Future Directions. Journal of Digital Imaging, 24(2):208–222, 2011.
  • [3] Lilian H Y Tang, R Hanka, and H H S Ip. A review of intelligent content–based indexing and browsing of medical images. HIJ, 5:40–49, 1999.
  • [4] M. Everingham, S. M. A. Eslami, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The pascal visual object classes challenge: A retrospective. International Journal of Computer Vision, 111(1):98–136, January 2015.
  • [5] Henning Müller, Paul Clough, Thomas Deselaers, and Barbara Caputo, editors. ImageCLEF – Experimental Evaluation in Visual Information Retrieval, volume 32 of The Springer International Series On Information Retrieval. Springer, Berlin Heidelberg, 2010.
  • [6] Kristen Grauman and Bastian Leibe. Visual Object Recognition. Morgan & Claypool Publishers, 2011.
  • [7] Antonio Foncubierta-Rodríguez, Adrien Depeursinge, and Henning Müller. Using Multiscale Visual Words for Lung Texture Classification and Retrieval. In Hayit Greenspan, Henning Müller, and Tanveer Syeda Mahmood, editors, Medical Content–based Retrieval for Clinical Decision Support, volume 7075 of MCBR–CDS 2011, pages 69–79. Lecture Notes in Computer Sciences (LNCS), 2012.
  • [8] Alexander Hinneburg and Hans-Henning Gabriel. DENCLUE 2.0: Fast Clustering Based on Kernel Density Estimation. Advances in Intelligent Data Analysis VII, 4723/2007:70–80, 2007.
  • [9] Sebastian Haas, Rene Donner, Andreas Burner, Markus Holzer, and Georg Langs. Superpixel–based Interest Points for Effective Bags of Visual Words Medical Image Retrieval. In Hayit Greenspan, Henning Müller, and Tanveer Syeda-Mahmood, editors, Medical Content-based Retrieval for Clinical Decision Support, volume 7075 of MCBR-CDS 2011. Lecture Notes in Computer Sciences (LNCS), 2011.
  • [10] Uri Avni, Hayit Greenspan, Eli Konen, Michal Sharon, and Jacob Goldberger. X–ray Categorization and Retrieval on the Organ and Pathology Level, Using Patch–Based Visual Words. IEEE Transactions on Medical Imaging, 30(3):733–746, 2011.
  • [11] Dimitrios Markonis, Alba de Herrera, Ivan Eggel, and Henning Müller. Multi–scale Visual Words for Hierarchical Medical Image Categorization. In SPIE Medical Imaging 2012: Advanced PACS–based Imaging Informatics and Therapeutic Applications, volume 8319, pages 83190F–11, 2012.
  • [12] Josip Krapac, Jakob Verbeek, and Frederic Jurie. Modeling Spatial Layout with Fisher Vectors for Image Categorization. In International Conference on Computer Vision, 2011.
  • [13] Jorge Sánchez, Florent Perronnin, Thomas Mensink, and Jakob Verbeek. Image classification with the Fisher vector: Theory and practice. International Journal of Computer Vision, 105(3):222–245, 2013.
  • [14] Ken Chatfield, Victor Lempitsky, Andrea Vedaldi, and Andrew Zisserman. The devil is in the details: an evaluation of recent feature encoding methods. In British Machine Vision Conference, 2011.
  • [15] Pierre Tirilly, Vincent Claveau, and Patrick Gros. Language modeling for bag-of-visual words image categorization. In Proceedings of the 2008 international conference on Content-based image and video retrieval, pages 249–258. ACM, 2008.
  • [16] Qi Tian, Shiliang Zhang, Wengang Zhou, Rongrong Ji, Bingbing Ni, and Nicu Sebe. Building descriptive and discriminative visual codebook for large-scale image applications. Multimedia Tools and Applications, 51(2):441–477, 2011.
  • [17] Antonio Foncubierta-Rodríguez, Alba de Herrera, and Henning Müller. Medical Image Retrieval using Bag of Meaningful Visual Words: Unsupervised visual vocabulary pruning with PLSA. In ACM Multimedia Workshop on Multimedia Information Indexing and Retrieval for Healthcare, MIIRH ’13, pages 75–82. ACM, 2013.
  • [18] Antonio Foncubierta-Rodríguez, Alba de Herrera, and Henning Müller. Meaningful Bags of Words for Medical Image Classification and Retrieval. Springer, 2014.
  • [19] Yiming Yang and Jan O Pedersen. A comparative study on feature selection in text categorization. In International Conference on Machine Learning, volume 97, pages 412–420, 1997.
  • [20] Warren R Greiff. A Theory of Term Weighting Based on Exploratory Data Analysis. In Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 11–19, 1998.
  • [21] Gerard Salton and Michael J McGill. Introduction to Modern Information Retrieval. McGraw-Hill, 1983.
  • [22] Stephen E Robertson, Cornelis Joost van Rijsbergen, and Martin F Porter. Probabilistic models of indexing and searching. In Proceedings of the 3rd annual ACM conference on Research and development in information retrieval, pages 35–56, 1980.
  • [23] Stephen E Robertson and Steve Walker. Some simple effective approximations to the 2-poisson model for probabilistic weighted retrieval. In Proceedings of the 17th annual international ACM SIGIR conference on Research and development in information retrieval, pages 232–241, 1994.
  • [24] Scott Deerwester, Susan T Dumais, George W Furnas, Thomas K Landauer, and Richard Harshman. Indexing by latent semantic analysis. Journal of the American society for information science, 41(6):391–407, 1990.
  • [25] Thomas Hofmann. Unsupervised learning by probabilistic latent semantic analysis. Machine Learning, 42(1–2):177–196, 2001.
  • [26] Anna Bosch, Andrew Zisserman, and Xavier Muñoz. Scene classification via pLSA. In Computer Vision–ECCV 2006, pages 517–530. Springer, 2006.
  • [27] Ismail El Sayad, Jean Martinet, Thierry Urruty, and Chabane Djeraba. Toward a higher–level visual representation for content–based image retrieval. Multimedia Tools and Applications, 60(2):455–482, 2012.
  • [28] Zellig Harris. Distributional Structure. Word, 10:146–162, 1954. Reprinted in J. Fodor and J. Katz, editors, The Structure of Language: Readings in the Philosophy of Language, pages 775–794.
  • [29] Peter D Turney and Patrick Pantel. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37(1):141–188, 2010.
  • [30] Kenneth Ward Church and Patrick Hanks. Word association norms, mutual information, and lexicography. Computational linguistics, 16(1):22–29, 1990.
  • [31] Nuno Vasconcelos and Andrew Lippmann. A Unifying View of Image Similarity. In A Sanfeliu, J J Villanueva, M Vanrell, R Alcézar, J.-O. Eklundh, and Y Aloimonos, editors, Proceedings of the 15th International Conference on Pattern Recognition (ICPR 2000), pages 1–4, Barcelona, Spain, 2000. IEEE.
  • [32] John A Bullinaria and Joseph P Levy. Extracting semantic representations from word co-occurrence statistics: A computational study. Behavior Research Methods, 39(3):510–526, 2007.
  • [33] Dominic Widdows. Geometry and meaning. CSLI publications Stanford, 2004.
  • [34] Dimitrios Markonis, Markus Holzer, Sebastian Dungs, Alejandro Vargas, Georg Langs, Sascha Kriewel, and Henning Müller. A Survey on Visual Information Search Behavior and Requirements of Radiologists. Methods of Information in Medicine, 51(6):539–548, 2012.
  • [35] Henning Müller, Jayashree Kalpathy-Cramer, Dina Demner-Fushman, and Sameer Antani. Creating a Classification of Image Types in the Medical Literature for Visual Categorization. In SPIE Medical Imaging, 2012.
  • [36] David G Lowe. Distinctive Image Features from Scale–Invariant Keypoints. International Journal of Computer Vision, 60(2):91–110, 2004.
  • [37] Yi Yang and Shawn Newsam. Bag–of–visual–words and spatial extensions for land–use classification. In Proceedings of the 18th SIGSPATIAL International Conference on Advances in Geographic Information Systems, GIS ’10, pages 270–279, New York, NY, USA, 2010. ACM.
  • [38] Yan Ke and Rahul Sukthankar. PCA-SIFT: A More Distinctive Representation for Local Image Descriptors. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), volume 2, pages 506–513, Washington, DC, USA, 2004.