Semantic Sort: A Supervised Approach to Personalized Semantic Relatedness


Ran El-Yaniv
David Yanay
Department of Computer Science,
Technion - Israel Institute of Technology

We propose and study a novel supervised approach to learning statistical semantic relatedness models from subjectively annotated training examples. The proposed semantic model consists of parameterized co-occurrence statistics associated with textual units of a large background knowledge corpus. We present an efficient algorithm for learning such semantic models from a training sample of relatedness preferences. Our method is corpus independent and can essentially rely on any sufficiently large (unstructured) collection of coherent texts. Moreover, the approach facilitates the fitting of semantic models for specific users or groups of users. We present the results of an extensive range of experiments, from small to large scale, indicating that the proposed method is effective and competitive with the state-of-the-art.



1 Introduction

In recent years the problem of automatically determining semantic relatedness has been steadily gaining attention among statistical NLP and AI researchers. This surge in semantic relatedness research has been reinforced by the emergence of applications that can greatly benefit from semantic relatedness capabilities. Among these applications we mention targeted advertising \shortciteBroder2007,Ribeiro-Neto2005, information retrieval and web search \shortciteEgozi2008,Varelas2005,Guha2003,Richardson95,Srihari2000, automatic tagging and linking \shortciteKorenLMS11,Schilder2001,Miller1993,Green1999, and text categorization \shortciteGabrilovich2006,Sebastiani2002,Gabrilovich2005.

To motivate the need for semantic analysis capabilities, consider, for example, the difficult task of categorizing short text units (e.g., ads, tweets, search queries, reviews, etc.) using supervised learning. Each unit contains very few words, so we can find many units expressing the same idea (and belonging to the same category) that share only function words (e.g., stop-words) but no content words. The pedestrian approach, based on a bag-of-words representation, might not be effective in this task because short text units to be categorized often do not share many words with the training examples in their category. It is now clear that it is necessary to represent such texts using semantic features \shortcite[see, e.g.,]Sriram2010,MihalceaCS06,Sun2011,Turney02,Yuhua2006. In general, many other applications require some form of deep semantic processing and cannot rely only on shallow syntactical considerations.

In semantic relatedness the goal is to quantify the degree to which two terms are related to each other. The relatedness task considers all relation types between two target terms. These relations can be among the known formal linguistic ones, which have a name (such as synonyms, antonyms, hypernyms, meronyms, related nouns, etc.), but in general, such relations can be informal in the sense that they do not have a given name and they express some (perhaps complex) relation between the two terms that has not been studied. For example, consider the following three term pairs

Michael Jordan : Basketball
Madonna : Pop
Marilyn Monroe

All three pairs, X : Y, are strongly related via a common relation. What would be your assessment of an underlying relation for these three pairs? (The common relation we had in mind is "X is an all-time Y star.")

To summarize, the semantic relatedness task involves all possible relations, whose number is in principle unbounded. We note that the NLP literature also considers the task of semantic similarity, in which the (rather limited) goal is to quantify the synonymy relation between two terms. Indeed, as argued by \citeABudanitskyH06, semantic relatedness is considered more general than semantic similarity. In this sense the general semantic relatedness task is more difficult.

In this work we consider the semantic relatedness task and aim at quantifying the relatedness of two given terms, where the underlying relation can be formal or informal. However, we do not aim at identifying or characterizing the underlying relation. (Such a relation characterization task is a very interesting problem in and of itself, but is far beyond the scope of our work.) Note also that in the standard semantic relatedness setting we consider here (see definitions in Section 3), the terms to be evaluated for relatedness are provided without a context, unlike standard disambiguation settings \shortcite[see, e.g.,]Lesk1986,Cowie1992,Yarowsky1995,Agirre1996,Schutze1998,LeacockMC1998,Ide1998,Navigli2010. Thus, like most existing works on semantic relatedness focusing on our or an equivalent setup, we do not aim at directly solving the disambiguation problem along the way. (Nevertheless, we believe that it is possible to extend our techniques to disambiguate a term within a context.)

Semantic relatedness is an elusive concept. While a rigorous mathematical definition of semantic relatedness is currently beyond grasp, the concept is intuitively clear. Moreover, humans exhibit remarkable capabilities in processing and understanding textual information, which is partly related to their ability to assess semantic relatedness of terms. Even without precise understanding of this intelligent ability, it is still intuitively clear that deep semantic processing of terms and text fragments should heavily rely on background knowledge and experience.

The statistical NLP and AI communities have adopted a pragmatic modus operandi regarding these questions: even if we do not know how to define semantic relatedness, we can still create computer programs that emulate it. Indeed, a number of useful heuristic approaches to semantic relatedness have been proposed, and this line of work has proven to be rewarding \shortcite[see, e.g.,]Resnik95,DekangLin657297,Bloehdorn2007,GabrilovichM07,Cowie1992. In particular, it has been shown that useful semantic relatedness scores can be systematically extracted from large lexical databases or electronic repositories of common-sense and domain-specific background knowledge.

With the exception of a few papers, most of the algorithms proposed for semantic relatedness valuation have been following unsupervised learning or knowledge engineering procedures. Such semantic relatedness valuation functions have been generated, for the most part, using some hand-crafted formulas applied to semantic information extracted from a (structured) background knowledge corpus. The proposed methods have employed a number of interesting techniques, some of which are discussed in Section 2.

One motivation for the present work is the realization that semantic relatedness assessments are relative and subjective rather than absolute and objective. While we can expect some kind of consensus among people on the (relative) relatedness valuations of basic terms, the relatedness assessments of most terms depend on many subjective and personal factors such as literacy, intelligence, context, time and location. For example, the name Michael Jordan is generally strongly related to Basketball, but some people in the machine learning community may consider it more related to Machine Learning. As another example, consider WordSim353 \shortciteFinkelsteinGMRSWR01i, the standard benchmark dataset for evaluating and comparing semantic relatedness measures (see Section 2.1). This benchmark contains some controversial relative preferences between word pairs such as

Arafat-Peace Arafat-Terror

Can you tell which pair is more related in each instance? Obviously, the answer must be personal/subjective. As a final example of the subjective nature of semantic relatedness, let us consider the Miller and Charles dataset [\BCAYMiller \BBA CharlesMiller \BBA Charles1991], which is a subset of the Rubenstein and Goodenough dataset [\BCAYRubenstein \BBA GoodenoughRubenstein \BBA Goodenough1965]. Both datasets were annotated using the same score scale by (probably different) human raters. This double rating resulted in different semantic scores and, more importantly, in different pair rankings. (The Spearman correlation between the rankings of these datasets is 0.947.) It is evident that each dataset expresses the subjective semantics of its human raters.

This sensitivity of semantic relatedness to subjective factors should make it very hard, if not impossible, to satisfy all semantic relatedness needs using an unsupervised or a hand-crafted method. Moreover, fitting to a particular test benchmark in an unsupervised manner is not necessarily entirely meaningful in certain scenarios. Indeed, some published semantic relatedness measures outperform others on certain benchmark tests and underperform on others. For example, \citeAPonzettoS07 mentioned that the WordNet-based measures perform better than the Wikipedia-based measures on the Rubenstein and Goodenough benchmark, but the WordNet methods are inferior on WordSim353.

In this work we propose a novel supervised approach to learning semantic relatedness from examples. Following \shortciteA1620758 we model semantic relatedness learning as a binary classification problem where each instance encodes the relative relatedness of two term pairs. Given a labeled training set, our goal is to learn a semantic relatedness function capable of determining the labels of unobserved instances. We present an empirical risk minimization (ERM) algorithm that learns by inducing a weighted measure of term co-occurrence defined over a background knowledge corpus of free-text documents. The labeled examples are used to fit this model to the training data. The resulting algorithm is relatively simple, has only a few hyper-parameters, and is corpus independent. Our experiments show that the algorithm achieves notable generalization performance. This is observed over a wide range of experiments on a number of benchmarks. We examine and demonstrate the effectiveness of our algorithm using two radically different background knowledge corpora: an old version of Wikipedia and the books in Project Gutenberg.

2 Related Work

The literature survey in this section attempts to encompass techniques and algorithms for assessing semantic relatedness. As the class of such techniques is quite large, the discussion here is limited to ideas and works in close vicinity of the present work. Semantic relatedness techniques typically rely on some kind of world or expert knowledge, which we term here background knowledge (BK). The BK is a key element in many methods, and we categorize semantic relatedness techniques into three main types according to the type and structure of their BK. Lexical methods rely on lexical databases such as WordNet [\BCAYFellbaumFellbaum1998] or Roget's Thesaurus [\BCAYRogetRoget1852]. Wiki methods rely on structured BK corpora like Wikipedia or the Open Directory Project (DMOZ). The structure in Wiki BKs can be manifested in various ways, the most important of which are semantic coherency of documents and titles, meaningful interlinks (often accompanied by meaningful anchor texts), and hierarchical categorization. Finally, semantic relatedness techniques that rely on unstructured text collections are referred to as structure-free methods. Before delving into these three BK types, we divert the discussion in the next subsection and elaborate on standard benchmark datasets for evaluating semantic relatedness techniques.

2.1 Standard Benchmark Datasets

A key contributing element that greatly influenced semantic relatedness research is the presence of benchmark test collections. While the currently available datasets are quite small, they are considered "representative" and meaningful because they were annotated by human raters. Each of these datasets consists of a list of word pairs, along with their numerical relatedness scores. In the semantic relatedness literature it is common to evaluate a relatedness ranking, $\sigma$, against the corresponding ground-truth ranking (conveyed by such datasets), $\sigma^*$, using the Spearman correlation, defined (for tie-free rankings of $n$ pairs) as
$$\rho(\sigma, \sigma^*) = 1 - \frac{6 \sum_{i=1}^{n} \left(\sigma(i) - \sigma^*(i)\right)^2}{n(n^2 - 1)}.$$
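For concreteness, this rank-based evaluation can be computed directly; a minimal sketch (assuming tie-free rankings):

```python
def spearman(rank_a, rank_b):
    """Spearman correlation of two tie-free rankings:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    assert len(rank_a) == len(rank_b)
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1.0 - 6.0 * d2 / (n * (n ** 2 - 1))

# Identical rankings give 1.0; fully reversed rankings give -1.0.
print(spearman([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0
print(spearman([1, 2, 3, 4], [4, 3, 2, 1]))  # -1.0
```

When ties are present, standard implementations first assign fractional ranks; the tie-free formula above suffices for illustrating how the benchmark scores reported below are obtained.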

Rubenstein and Goodenough (R&G) [\BCAYRubenstein \BBA GoodenoughRubenstein \BBA Goodenough1965] were perhaps the first to assemble an annotated semantic dataset. Their dataset consists of 65 word pairs associated with their similarity scores, where a mark of 4 is assigned to the most similar pairs (often synonyms), and a mark of 0 to the least similar ones. Miller and Charles (M&C) [\BCAYMiller \BBA CharlesMiller \BBA Charles1991] selected a particular subset of the R&G set consisting of 30 word pairs, which were then ranked using the same 0–4 score scale.

WordSim353 is the most recent of these semantic benchmark datasets [\BCAYFinkelstein, Gabrilovich, Matias, Rivlin, Solan, Wolfman, \BBA RuppinFinkelstein et al.2001]. This dataset, while still small, is substantially larger than its predecessors and consists of a list of 353 word pairs along with their human-evaluated semantic relatedness scores, from 0 (the least related) to 10 (the most related). While the R&G and M&C datasets are used for evaluating semantic similarity measures (i.e., synonymy relations), WordSim353 involves a variety of semantic relations and in the past years has been providing a focal point for practical semantic relatedness research. In the discussion below we will mention WordSim353 Spearman correlation scores in cases where they were reported.

2.2 Lexical Methods

Many of the lexical methods rely on the WordNet database [\BCAYFellbaumFellbaum1998] as their BK corpus. WordNet is a lexical database for the English language that was created and is being maintained at the Cognitive Science Laboratory at Princeton University. WordNet organizes English words in groups called synsets, which are sets of synonyms. The lexical relations between synsets are categorized into types such as hypernyms, meronyms, related nouns, "similar to", etc. (Y is a hypernym of X if every X is a (kind of) Y.) In addition to these semantic relations, WordNet also provides a polysemy count (the number of synsets that contain the term) for disambiguation. WordNet is intended to be used both manually and automatically to serve applications.

Another lexical database is Roget's Thesaurus [\BCAYRogetRoget1852]. Despite what its name might imply, this database is not a dictionary of synonyms, and as stated by Kirkpatrick [\BCAYKirkpatrickKirkpatrick1998]: "it is hardly possible to find two words having in all respect the same meaning, and being therefore interchangeable." Similarly to WordNet, Roget's Thesaurus contains groups of terms, called semicolon groups, which are linked. However, those links are not lexically annotated as in WordNet.

Lexical semantic relatedness methods typically view the lexical database as a graph whose nodes are terms and edges are lexical relations. Semantic relatedness scores are extracted using certain statistics defined on this graph.


\citeAResnik95,Resnik99 generated semantic relatedness scores based on a combination of IS-A (hyponym) relations in WordNet and a structure-free corpus. Each synset $s$ in WordNet is assigned a probability, $p(s)$, according to the frequency of its descendants (including itself) in a corpus. The information content (ic) of two synsets $s_1$ and $s_2$ is then defined as
$$ic(s_1, s_2) = \max_{s \in S(s_1, s_2)} \left[-\log p(s)\right],$$
where $S(s_1, s_2)$ is the set of synsets that are connected by an IS-A directed path to both $s_1$ and $s_2$; that is, $S(s_1, s_2)$ is the set of all the ancestors of both $s_1$ and $s_2$. The semantic relatedness (sr) of two terms, $t_1$ and $t_2$, is defined as
$$sr(t_1, t_2) = \max_{s_1 \in syn(t_1),\ s_2 \in syn(t_2)} ic(s_1, s_2),$$
where $syn(t)$ is the set of synsets in WordNet that contain $t$.
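A minimal sketch of this information-content computation on a toy IS-A hierarchy; the taxonomy and corpus counts are invented for illustration:

```python
import math

# Invented toy IS-A hierarchy (node -> list of parents) and corpus counts.
parents = {"dog": ["canine"], "cat": ["feline"],
           "canine": ["animal"], "feline": ["animal"], "animal": []}
count = {"dog": 3, "cat": 3, "canine": 1, "feline": 1, "animal": 2}
total = sum(count.values())

def ancestors(s):
    """All IS-A ancestors of synset s, including s itself."""
    result = {s}
    for p in parents[s]:
        result |= ancestors(p)
    return result

def prob(s):
    """p(s): relative frequency of the descendants of s (incl. itself)."""
    return sum(count[x] for x in count if s in ancestors(x)) / total

def ic(s1, s2):
    """Resnik-style information content: max of -log p over common ancestors."""
    common = ancestors(s1) & ancestors(s2)
    return max(-math.log(prob(s)) for s in common)

# "dog" and "cat" share only the root, so their ic is lower than ic("dog", "dog").
print(round(ic("dog", "cat"), 3), round(ic("dog", "dog"), 3))
```

The root synset covers the whole corpus, so its information content is zero; deeper shared ancestors yield higher relatedness, mirroring Resnik's intuition.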

Another attempt to combine WordNet IS-A relations with a structure-free corpus was made by \citeAjournals/corr/cmp-lg-9709008. They weighted a link (lexical relation) between a child node and a parent node according to the difference in their information content (as proposed by Resnik), their depth in the hierarchy, their degree, and the average degree in the whole hierarchy. The semantic relatedness of two terms, $t_1$ and $t_2$, is evaluated by summing up the weights along the shortest path between a synset that contains $t_1$ and a synset containing $t_2$. Utilizing their measure, Jiang and Conrath managed to improve upon the Resnik measure.


\citeATKDE2003 defined and calculated the semantic similarity between two words, $w_1$ and $w_2$, as a function of: (i) the length of the shortest path between $w_1$ and $w_2$; (ii) the depth of the first concept in the IS-A hierarchy that subsumes both $w_1$ and $w_2$; and (iii) the semantic density of $w_1$ and $w_2$, which is based on their information content. Li et al. assumed that these three information sources are independent and used several nonlinear functions to combine them.


\citeABanerjeeP03 extended the glosses (definitions in WordNet) overlap measure defined by \citeALesk1986. Given two synsets in WordNet, they enriched their glosses with the glosses of their related synsets, according to WordNet link structure, and calculated semantic relatedness as a function of the overlap between these "enriched glosses." Banerjee and Pedersen also weighted the terms in the overlap according to the number of words in those terms. \citeAPedersen06 combined co-occurrences in raw text with WordNet definitions to build gloss vectors.


\citeAJarmasz03 calculated the semantic relatedness between two terms as the number of edges in all the paths between the two terms in a Roget's Thesaurus graph, achieving a 0.54 correlation with the WordSim353 dataset [\BCAYJarmaszJarmasz2003].


\citeAHughesR07 calculated the Personalized PageRank vector \shortcitePersonalizedPageRank for each node (term) under some representation of WordNet as a graph. They considered three node types: (i) synsets; (ii) TokenPOS, for a word coupled with its part-of-speech tag; and (iii) Token, for a word without its part-of-speech tag. In addition to WordNet's links, each synset is connected to all the tokens in it or in its gloss. Moreover, they proposed three models to compute the stationary distribution: (i) MarkovLink, which contains WordNet's links and links from tokens to synsets that contain them; (ii) MarkovGloss, containing only links between tokens and synsets that contain them in their gloss; and (iii) MarkovJoined, containing all the edges in both MarkovLink and MarkovGloss. In order to estimate the similarity between two PageRank vectors, they used the cosine similarity measure, as well as a newly proposed zero-KL Divergence measure, based on the Kullback-Leibler (KL) divergence of information theory. Hughes and Ramage obtained their best result, a 0.552 Spearman correlation with WordSim353, when using the MarkovLink model and the zero-KL Divergence.
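The Personalized PageRank computation underlying this method can be sketched with plain power iteration; the graph and restart vector below are hypothetical:

```python
import numpy as np

def personalized_pagerank(adj, restart, alpha=0.85, iters=200):
    """Power iteration: r <- alpha * r P + (1 - alpha) * restart,
    where P is the row-normalized adjacency matrix."""
    P = adj / adj.sum(axis=1, keepdims=True)
    v = restart / restart.sum()
    r = np.full(adj.shape[0], 1.0 / adj.shape[0])
    for _ in range(iters):
        r = alpha * (r @ P) + (1 - alpha) * v
    return r

# Invented 4-node graph; all restart mass placed on node 0.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
ppr = personalized_pagerank(adj, np.array([1.0, 0.0, 0.0, 0.0]))
print(np.round(ppr, 3))  # a probability vector biased toward node 0
```

Hughes and Ramage then compare two such stationary vectors, for instance with cosine similarity or their zero-KL Divergence measure.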


\citeATsatsaronis2009,TsatsaronisVV10 proposed the Omiotis measure. They weighted the relations between synsets in WordNet according to their frequency. Given a WordNet path, $P$, between two synsets, $s_1$ and $s_2$, they defined its semantic compactness measure (SCM) as the product of the edge weights in $P$. In addition, they defined the semantic path elaboration (SPE) of $P$ as $\prod_i \frac{2 d_i d_{i+1}}{d_i + d_{i+1}} \cdot \frac{1}{d_{\max}}$, where $d_i$ is the depth in WordNet of the $i$th synset in $P$, and $d_{\max}$ is the maximum depth. The SPE of $P$ is thus the product of the harmonic means of depths of consecutive nodes, normalized by the maximum depth. The semantic relatedness between $s_1$ and $s_2$ according to $P$ is $SCM(P) \cdot SPE(P)$. Finally, they defined the semantic relatedness between $s_1$ and $s_2$ as $\max_{P \in \mathcal{P}} SCM(P) \cdot SPE(P)$, where $\mathcal{P}$ is the set of all paths between $s_1$ and $s_2$. Omiotis achieved a 0.61 Spearman correlation with WordSim353.
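A minimal sketch of these path statistics, under the reading that SPE multiplies, per consecutive node pair, the harmonic mean of their depths normalized by the maximum depth (the weights and depths below are invented):

```python
def scm(weights):
    """Semantic compactness: product of the edge weights along a path."""
    out = 1.0
    for w in weights:
        out *= w
    return out

def spe(depths, d_max):
    """Semantic path elaboration: product over consecutive nodes of the
    harmonic mean of their depths, normalized by the maximum depth."""
    out = 1.0
    for d1, d2 in zip(depths, depths[1:]):
        out *= (2.0 * d1 * d2 / (d1 + d2)) / d_max
    return out

def path_relatedness(weights, depths, d_max):
    """Relatedness contributed by one path: SCM(P) * SPE(P)."""
    return scm(weights) * spe(depths, d_max)

# Invented path: two edges with weights 0.5 and 0.8, node depths 2, 3, 3.
print(round(path_relatedness([0.5, 0.8], [2, 3, 3], d_max=3), 3))
```

With edge weights in $(0, 1]$ and depths bounded by $d_{\max}$, each factor is at most 1, so deeper, more strongly weighted paths score higher; the full measure maximizes this quantity over all paths.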


\citeAMorris1991 introduced the concept of lexical chains between words as a means of representing and detecting text structure. They argued that coherent text is assembled from textual units (sentences and phrases) that convey similar meaning, and termed such sequences of textual units lexical chains. Using these chains they defined text cohesion and determined its meaning. \citeAHirstStOnge1998 constructed these chains from the links between WordNet synsets.

The reader is referred to [\BCAYBudanitsky \BBA HirstBudanitsky \BBA Hirst2001, \BCAYBudanitsky \BBA HirstBudanitsky \BBA Hirst2006] for a study of various other lexical methods. Refer also to [\BCAYPedersen, Patwardhan, \BBA MichelizziPedersen et al.2004] for a freely available software that implements six semantic measures: three information content based measures [\BCAYResnikResnik1995, \BCAYLinLin1998, \BCAYJiang \BBA ConrathJiang \BBA Conrath1997], two path length based measures [\BCAYLeacock \BBA ChodorowLeacock \BBA Chodorow1998, \BCAYWu \BBA PalmerWu \BBA Palmer1994], and a baseline measure that is the inverse of the length of the shortest path between two concepts.

2.3 Wiki Methods


\citeAWikiRelate2006,PonzettoS07 are perhaps the first to consider Wikipedia as the source for semantic relatedness information. The relatedness between two terms $t_1$ and $t_2$ is computed by identifying representative Wiki articles $a_1$ and $a_2$ containing those terms in their titles, respectively. (In cases of multiple representative articles, several heuristics were proposed to resolve ambiguity.) The semantic relatedness is then derived in several ways using several distance measures between $a_1$ and $a_2$, such as normalized path-length in the category hierarchy [\BCAYLeacock \BBA ChodorowLeacock \BBA Chodorow1998], information content [\BCAYResnikResnik1995], text overlap (the number of common terms, proposed by \citeALesk1986 and \citeABanerjeeP03), etc. The best result of this method (called Wikirelate!) achieved a 0.49 Spearman correlation with WordSim353.


\citeAGabrilovichM07 introduced the celebrated Explicit Semantic Analysis (ESA) method, where each term has a distributional representation over all Wikipedia articles. The components of the vector are the TF-IDF scores [\BCAYSalton \BBA BuckleySalton \BBA Buckley1988] of the term in all articles. The semantic relatedness value of two terms is defined as the cosine of their vectors. Various enhancements and extensions to this basic ESA method were discussed in [\BCAYGabrilovich \BBA MarkovitchGabrilovich \BBA Markovitch2009]; for example, a filter based on link analysis was introduced to obtain more meaningful distributional term representations. ESA achieved a Spearman correlation of 0.75 with WordSim353, and is currently widely recognized as a top performing semantic relatedness method. Moreover, ESA is frequently used as a subroutine in many applications \shortcite[see, e.g.,]Yeh:2009:WRW:1708124.1708133,RadinskyAGM11,Haralambous2011.
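The core ESA computation, TF-IDF concept vectors compared by cosine, can be sketched on a toy, invented "encyclopedia" of three articles:

```python
import math

# Invented mini "encyclopedia": article title -> article text.
articles = {
    "Basketball": "jordan played basketball basketball in the nba",
    "Machine learning": "jordan wrote papers on machine learning",
    "Pop music": "madonna recorded pop music albums",
}
docs = [text.split() for text in articles.values()]

def esa_vector(term):
    """TF-IDF weight of `term` in every article: the ESA concept vector."""
    df = sum(1 for words in docs if term in words)
    idf = math.log(len(docs) / df) if df else 0.0
    return [words.count(term) / len(words) * idf for words in docs]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# "jordan" and "basketball" share a supporting concept; unrelated terms do not.
print(round(cosine(esa_vector("jordan"), esa_vector("basketball")), 3))
print(cosine(esa_vector("madonna"), esa_vector("basketball")))  # 0.0
```

In the real method the vectors range over millions of Wikipedia articles and are pruned and filtered, but the relatedness score is this same cosine of two concept vectors.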


\citeAMilneWitten2008 proposed the Wikipedia Link-based Measure (WLM), which utilizes the interlinks between Wikipedia's articles. They proposed two methods to calculate the relatedness between two articles. The first calculates a weighted vector of the links of each article and returns the cosine of these vectors; the link weighting function is inspired by the TF-IDF measure. The second method utilizes the Normalized Google Distance of \citeAGoogleSimilarityDistance (discussed in Section 4), applied to interlink counts. Given two terms, WLM selects two articles representing these terms and returns the average of the above methods. (Milne and Witten also proposed several ways to choose representative articles for a given pair of terms.) WLM achieved a Spearman correlation of 0.69 with WordSim353.


\citeAYeh:2009:WRW:1708124.1708133 proposed a method called WikiWalk that utilizes Wikipedia as a graph whose nodes are articles and the interlinks are the edges. Given a text fragment, WikiWalk maps it to a distribution over the nodes and calculates its Personalized PageRank in the graph according to this distribution. Yeh et al. proposed two methods to map the given text to a distribution over nodes: dictionary based, and ESA based. The semantic relatedness of two terms is defined as the cosine similarity between their Personalized PageRank vectors. WikiWalk achieved a Spearman correlation of 0.634 with WordSim353.


\citeARadinskyAGM11 proposed the Temporal Semantic Analysis (TSA), which expands the ESA method mentioned above by adding a temporal dimension to the representation of a concept. As in ESA, TSA represents terms as a weighted concept vector generated from a corpus. However, for each concept, TSA extracts in addition a temporal representation using another corpus whose documents are divided into epochs (e.g., days, weeks, months, etc.). With this extra corpus TSA calculates for each concept its "temporal dynamics," which is its frequency in each epoch. Given two terms, TSA computes their semantic relatedness by measuring the similarity between the temporal representations of their ESA concepts. TSA obtained a 0.82 correlation score with WordSim353, which is the best known result for WordSim353 using an unsupervised learning method.

2.4 Structure-Free Methods

Motivated by Kolmogorov complexity arguments, \citeAGoogleSimilarityDistance introduced a novel structure-free semantic relatedness method, which is essentially a normalized co-occurrence measure. This method, called the “Google similarity distance,” originally used the entire web as the unstructured corpus and relied on a search engine to provide rough assessments of co-occurrence counts. This method is extensively used in our work (without reliance on the entire web and search engines) and is described in Section 4.
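The normalized co-occurrence formula behind this distance can be sketched directly from occurrence counts (the counts below are hypothetical):

```python
import math

def ngd(fx, fy, fxy, N):
    """Normalized Google Distance from co-occurrence counts:
    fx, fy = number of contexts containing each term,
    fxy    = number of contexts containing both,
    N      = total number of contexts."""
    if fxy == 0:
        return float("inf")
    lx, ly, lxy = math.log(fx), math.log(fy), math.log(fxy)
    return (max(lx, ly) - lxy) / (math.log(N) - min(lx, ly))

# Terms that always co-occur are at distance 0; rarer co-occurrence
# yields a larger distance.
print(ngd(100, 100, 100, 10_000))  # 0.0
print(round(ngd(100, 100, 5, 10_000), 3))
```

In the original formulation the counts came from web search hit estimates; in this work the same statistic is computed from co-occurrence counts within a fixed background corpus.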

Using a term-document occurrence-count matrix, \shortciteADeerwester90 used Singular Value Decomposition (SVD) to compare the meanings of terms. Applying this measure, \shortciteAFinkelsteinGMRSWR01i achieved a 0.56 correlation with WordSim353.
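The SVD-based (latent semantic) comparison can be sketched on a toy, invented term-document count matrix in which "car" and "automobile" co-occur in the same documents:

```python
import numpy as np

# Invented term-document count matrix (rows: terms, columns: documents).
terms = ["car", "automobile", "flower"]
X = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])

# Truncated SVD: keep the k strongest latent dimensions.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
term_vecs = U[:, :k] * s[:k]  # term representations in latent space

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# "car" and "automobile" share documents, so they are close in the latent
# space; "flower" remains orthogonal to both.
print(round(cos(term_vecs[0], term_vecs[1]), 3))
print(round(abs(cos(term_vecs[0], term_vecs[2])), 3))
```

Truncating the weak singular directions is what lets the method conflate terms with similar document co-occurrence patterns even when they never appear together.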


\citeADekangLin657297 proposed information-based methods to define and quantify term similarity. \shortciteADagan+Lee+Pereira:99a and \citeATerra2003 experimented with various statistical and information-theoretic co-occurrence measures, such as mutual information, the likelihood ratio, the $L_1$ norm, and the KL and Jensen-Shannon divergences, for estimating semantic relatedness from structure-free corpora.


\citeAReisingerMooney10 generated, for a term $t$, a number of feature vectors, one for each context in which $t$ appears. The feature vector of a certain context contains weights for all terms appearing with $t$ in this context; the weights are calculated based on TF-IDF and $\chi^2$ scores. These feature vectors were then clustered, and the cluster centroids were taken to represent the meaning of $t$. Using this representation they considered various methods to calculate the semantic relatedness of two terms according to the similarity of their respective centroids. This method obtained a correlation of 0.77 with WordSim353 by using combined centroids from clusterings of different resolutions. This impressive performance is among the best known.

2.5 Supervised Methods

All the semantic relatedness methods described in previous subsections, as well as many other published results not covered here, can be framed as unsupervised learning techniques, whereby the semantic relatedness scores emerge from the BK corpus, using some hand-crafted techniques without further human supervision. There have been a few successful attempts to utilize supervised learning techniques as well. To the best of our knowledge, all of these works follow a similar methodology whereby the features of a learning instance are assembled from scores obtained by various unsupervised methods (such as those discussed above). Using this feature generation approach one then resorts to known inductive learning techniques such as support vector machines (SVMs) [\BCAYVapnikVapnik1995] to learn a classifier or a regressor.


\citeAWikiRelate2006,PonzettoS07 used SVM regression applied to instances whose features were constructed as a hybrid of all the unsupervised techniques described above, which are based on WordNet or Wikipedia. In addition, Strube and Ponzetto used the Jaccard measure [\BCAYSalton \BBA McGillSalton \BBA McGill1983] applied to Google search result counts. Overall, their learning instances comprised 12 features (six Wikipedia-based scores, five WordNet-based scores and one Google-based score). They employed a feature selection technique using a genetic algorithm \shortciteMierswa2006, and applied a standard model selection approach using grid search to identify useful hyper-parameters. Overall, they obtained a 0.66 correlation with the WordSim353 ground truth.


\citeA2007:MSS:1242572.1242675 considered the semantic similarity problem mentioned in Section 1. They constructed a feature vector for a given pair of terms by calculating four well-known co-occurrence measures (Jaccard, the Overlap/Simpson coefficient, the Dice coefficient and mutual information) and lexico-syntactic templates (e.g., 'X of Y', 'X and Y are'), which were derived from page counts and snippets retrieved using a web search engine. Bollegala et al. employed an SVM to classify whether two terms are synonyms or not. The SVM was trained using examples that were taken from WordNet, considering terms from the same (resp., different) synset as positive (resp., negative) examples. The similarity between two terms was computed as a function of their feature vector's location relative to the SVM decision boundary.


\shortciteA1620758 considered the binary classification problem of determining which pair among two term pairs is more related. In their method, each instance, consisting of two pairs, $(a, b)$ and $(c, d)$, is represented as a feature vector constructed using semantic relatedness scores and ranks from other (unsupervised) relatedness methods. Specifically, they considered three structure-free semantic relatedness methods and one lexical semantic relatedness method, so that the overall feature vector for an instance is a 16-dimensional vector (four scores and four ranks for each term pair). Using an SVM classifier they obtained a 0.78 correlation with WordSim353. The structure-free BK used for achieving this result consisted of four billion web documents. They reported that the overall computation utilized 2000 CPU cores for 15 minutes (approximately 20 days on one core).
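The general feature-based supervised methodology can be sketched as follows; for self-containment a plain perceptron stands in for the SVM used in the cited works, and the 16-dimensional features and labels are entirely synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic instance: concatenate per-pair feature vectors (in the cited
# works these would be scores/ranks from unsupervised relatedness methods).
def make_instance():
    while True:
        f1, f2 = rng.random(8), rng.random(8)
        if abs(f1.mean() - f2.mean()) > 0.05:  # keep a margin in the toy data
            break
    x = np.concatenate([f1, f2])                # 16 features per instance
    y = 1.0 if f1.mean() > f2.mean() else -1.0  # which pair is "more related"
    return x, y

data = [make_instance() for _ in range(500)]
X = np.array([x for x, _ in data])
y = np.array([label for _, label in data])

# A plain perceptron stands in for the SVM used in the cited works.
w = np.zeros(16)
for _ in range(100):
    for xi, yi in zip(X, y):
        if yi * (w @ xi) <= 0:
            w += yi * xi

accuracy = float(np.mean(np.sign(X @ w) == y))
print(round(accuracy, 3))
```

The point of the sketch is the data layout: one training instance per preference, built by concatenating the per-pair feature vectors, with a binary label indicating the preferred pair.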

Another attempt to utilize SVMs, where features are constructed using unsupervised scores, is reported by \citeAHaralambous2011. They considered the following unsupervised measures: (i) ESA [\BCAYGabrilovich \BBA MarkovitchGabrilovich \BBA Markovitch2007]; (ii) a weighted shortest path measure based on WordNet; and (iii) another co-occurrence measure, which is a variant of Jaccard’s measure. Using some combination of these three scores, they managed to achieve 0.7996 correlation with WordSim353. By training an SVM over a training set extracted from WordSim353 term pairs (represented by these features) they achieved 0.8654 correlation with WordSim353. This is the best correlation score that was ever reported. Haralambous and Klyuev noted that this impressive result relies on optimizations of the ESA hyper-parameters but the precise details of this optimization were not reported.

Both \shortciteA1620758 and \citeAHaralambous2011 achieved their reported results using 10-fold cross validation, thus utilizing 90% of the available labeled preferences for training.

To summarize, among these works the Agirre et al. approach is the closest to ours, mainly in its formulation of the learning problem. However, our solution methodology is fundamentally different.

3 Problem Setup

We consider a fixed corpus, $\mathcal{C}$, defined to be a set of contexts. Each context $c \in \mathcal{C}$ is a textual unit conveying some information in free text. In this work we consider contexts that are sentences, paragraphs or whole documents. Let $\mathcal{D}$ be a dictionary consisting of all the terms appearing in the corpus. A term may be any frequent phrase (unigram, bigram, trigram, etc.) in the corpus, e.g., "book", "New York", "The Holy Land." Ultimately, our goal is to automatically construct a function $f$ that correctly ranks the relatedness of the terms in $\mathcal{D}$ in accordance with the subjective semantics of a given labeler. We emphasize that we do not require $f$ to provide absolute scores but rather relative values inducing a complete order over the relatedness of all term pairs.

We note that in reality this total order assumption does not hold, since the comparison between two term pairs not sharing any term might be meaningless. Furthermore, human preferences may contain cycles, perhaps due to comparisons made using different features (as in the rock-paper-scissors game), or due to noise/confusion. However, we impose a total order both for simplicity and because it reduces the VC-dimension of our hypothesis class (see Section 8).

3.1 Learning Model

Our goal is to construct the function using supervised learning. Specifically, the user is presented with a training set to be labeled, where each is a quadruple of terms. The binary label, , of the instance should be if the terms in the first pair are more related to each other than the terms in the second pair , and otherwise. Each quadruple, along with its label, is also called a preference.

Among all possible quadruples, we restrict our attention only to quadruples in the set,


The reason to focus only on these preferences is that any other quadruple encodes a meaningless preference, since the semantic relatedness of term pairs such as , and preferences such as , are trivial.

Denote by , a set of labeled training examples received from the user. We assume that if then . A binary classifier in our context is a function satisfying, for all , the “anti-symmetry” condition


The 0/1 training error of is,

The standard underlying assumption in supervised learning is that (labeled) instances are drawn i.i.d. from some unknown distribution defined over . The classifier is chosen from some hypothesis class . In this work we focus on the realizable setting whereby labels are defined by some unknown target hypothesis . Thus, the underlying distribution reduces to . The performance of a classifier is quantified by its true or (0/1) test error,
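To make the setup concrete, here is a minimal sketch (ours, not the paper’s implementation) of an anti-symmetric preference classifier induced by an absolute scoring function, together with its 0/1 training error; `make_classifier` and `training_error` are hypothetical names:

```python
# Sketch: a labeled preference is ((a, b, c, d), y), with y = +1 when
# pair (a, b) is judged more related than pair (c, d), and y = -1 otherwise.

def make_classifier(score):
    """Turn an absolute relatedness score into a preference classifier
    h(a, b, c, d) in {-1, +1}; swapping the two pairs flips the sign
    (whenever the two pair scores differ), giving anti-symmetry."""
    def h(a, b, c, d):
        return 1 if score(a, b) > score(c, d) else -1
    return h

def training_error(h, sample):
    """0/1 training error: the fraction of labeled quadruples that the
    classifier gets wrong."""
    errors = sum(1 for (a, b, c, d), y in sample if h(a, b, c, d) != y)
    return errors / len(sample)
```

Any absolute scoring function can be plugged in as `score`, which is exactly why the co-occurrence measures discussed in the next section can serve as classifiers here.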

3.2 Learning from Preferences vs. Absolute Scores

Why do we ask the user for pairwise preferences rather than an absolute relatedness score for a single pair of terms? Our choice is strongly motivated by recent work showing that answers to such comparative questions are more accurate than answers to questions about absolute quality. In order to produce an absolute score, a user must rely on some implicit global scale, which may or may not exist. We mention \shortciteCarterette08hereor,1414238,1718491 as a small sample of studies that justify this general approach both theoretically and empirically.

4 Adaptive Co-occurrence Model

Recognizing the widely accepted idea that the intensity of semantic relatedness between two terms is a function of their co-occurrence pattern in textual documents, we would like to measure co-occurrence using a BK corpus where such patterns are manifested. Therefore, a major component of the proposed algorithm is an appropriate co-occurrence measure. However, we also require adaptivity to a specific user’s subjective relatedness preferences. Our observation is that such adaptivity can be accomplished by learning, from examples, user-specific weights to be assigned to contexts, as described below. Overall, our approach is to construct a reasonable initial model, derived only from the BK corpus (without supervision), which fits a rough general consensus on the relatedness of basic terms. This initial model is the starting point of a learning process that refines the model to fit specific user preferences.

In a preliminary study we examined various co-occurrence indices, such as the Jaccard measure, pointwise mutual information, KL- and Jensen-Shannon divergences, and latent semantic analysis. Based on this study and some published results [\BCAYDagan, Lee, \BBA PereiraDagan et al.1999, \BCAYTerra \BBA ClarkeTerra \BBA Clarke2003, \BCAYRecchia \BBA JonesRecchia \BBA Jones2009], we selected the normalized semantic distance measure of \citeAGoogleSimilarityDistance. (Note that Cilibrasi and Vitanyi termed this function “Google similarity distance” and applied it by relying on Google to retrieve proxies for co-occurrence statistics; in our discussion co-occurrence statistics can be obtained in any desirable manner.) Specifically, we observed that by itself it can achieve a high 0.745 Spearman correlation with WordSim353 (via our implementation using Wikipedia as the BK corpus), thus providing a very effective starting point. We note that information measures are also effective, but not quite as good (pointwise mutual information achieved a correlation of 0.73 with WordSim353 [\BCAYRecchia \BBA JonesRecchia \BBA Jones2009]). We also find it appealing that this measure was derived from solid algorithmic complexity principles.

Cilibrasi and Vitanyi defined the semantics of the terms , as the set of all contexts in which they appear together. Then they defined the normalized semantic distance () between them to be


where .
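As an illustration, the distance can be computed from raw co-occurrence statistics as in the following sketch, which implements the standard Cilibrasi-Vitanyi formula with corpus contexts supplying the frequencies (the function name is ours; contexts are given as sets of indices):

```python
from math import log

def nsd(ctx_x, ctx_y, n_contexts):
    """Normalized semantic distance between two terms, given the set of
    contexts each term occurs in and the total number of contexts.
    Identical context sets yield 0; terms that never co-occur are
    treated as maximally distant."""
    fx, fy = len(ctx_x), len(ctx_y)
    fxy = len(ctx_x & ctx_y)
    if fxy == 0:
        return float("inf")
    num = max(log(fx), log(fy)) - log(fxy)
    den = log(n_contexts) - min(log(fx), log(fy))
    return num / den
```

For example, two terms sharing all of their contexts receive distance 0, while terms with disjoint context sets are infinitely distant.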

The function, like any other absolute scoring function for pairs, induces a permutation over all term pairs and can therefore be utilized as a classifier for semantic relatedness preferences, as required. However, this classifier is constructed blindly, without any consideration of the user’s subjective preferences. To incorporate user subjective preferences we introduce a novel extension of that allows for assigning weights to contexts. Define the weighted semantics of the terms as

where is a weight assigned to the context , where we impose the normalization constraint


Thus, given a BK corpus, , and a set of weights,

we define the weighted normalized semantic distance () between and to be,


where is a normalization constant,

We call the set of weights a semantic model and our goal is to learn an appropriate model from labeled examples.
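A sketch of the weighted variant follows, under the assumption (consistent with the definitions above) that the weighted “size” of a term’s semantics is the total weight of its contexts; `weighted_nsd` and the `norm` argument are our names:

```python
from math import log

def weighted_nsd(ctx_x, ctx_y, weights, norm):
    """Weighted normalized semantic distance: each context i carries a
    weight weights[i], so a term's semantics is measured by the total
    weight of its contexts rather than their count. norm plays the
    role of the normalization constant."""
    gx = sum(weights[i] for i in ctx_x)
    gy = sum(weights[i] for i in ctx_y)
    gxy = sum(weights[i] for i in ctx_x & ctx_y)
    if gxy == 0:
        return float("inf")
    num = max(log(gx), log(gy)) - log(gxy)
    den = log(norm) - min(log(gx), log(gy))
    return num / den
```

With uniform weights this reduces to the unweighted measure; increasing the weight of a shared context strictly decreases the distance, which is exactly the handle the learning algorithm exploits.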

Recall that our objective is to quantify the relatedness of two terms in a “universal” manner, namely, regardless of the types of relations that link these terms. Is it really possible to learn a single model that will encode coherent semantics universally for all terms and all relations?

At the outset, this objective might appear hard or even impossible to achieve. An additional special obstacle is the modeling of synonymy relations. The common wisdom is that synonymous terms, which exhibit a very high degree of relatedness, are unlikely to occur in the same context \shortcite¡see, e.g.,¿DekangLin657297,BudanitskyH06, especially if the context unit is very small (e.g., a sentence). Can our model capture similarity relations? We empirically investigate these questions in Sections 6.4, 6.5, 6.6 and 6.7, where we evaluate the performance of our model via datasets that encompass term pairs with various relations. In addition, we further investigate similarity relations via an ad-hoc experiment in Section 6.8.

5 The SemanticSort Algorithm

Let be any adaptive co-occurrence measure satisfying the following properties: (i) each context has an associated weight in ; (ii) monotonically increases with increasing weight(s) of context(s) in ; and (iii) monotonically decreases with (increasing) weight(s) of context(s) in or .

We now present a learning algorithm that can utilize any such function. We later apply this algorithm while instantiating this function to , which clearly satisfies the required properties. Note, however, that many known co-occurrence measures can be extended (to include weights) and be applied as well.

Relying on , we would like to utilize empirical risk minimization (ERM) to learn an appropriate model of context weights that is consistent with the training set . To this end we designed the following algorithm, called , which minimizes the training error over by fitting appropriate weights to . Pseudocode is provided in Algorithm 1.

The inputs to are , a learning rate factor , a learning rate factor threshold , a decrease threshold , and a learning rate function . When a training example is not satisfied (e.g., and ), we would like to increase the semantic relatedness score of and and decrease the semantic relatedness score of and . achieves this by multiplicatively promoting/demoting the weights of the “good”/“bad” contexts in which and co-occur. The weight increase (resp., decrease) depends on (resp., ), which are defined as follows.

uses to update context weights in accordance with the error magnitude incurred for example , defined as

Thus, we require that is a monotonically decreasing function so that the greater is, the more aggressive and will be. The learning speed of the algorithm depends on these rates, and overly aggressive rates might prevent convergence due to oscillating semantic relatedness scores. Hence, gradually refines the learning rates as follows. Define

as the total sum of the differences over unsatisfied examples. We observe that if decreases by at least in each iteration, then converges and the learning rates remain the same. Otherwise, updates the learning rate to be less aggressive by doubling . Therefore, we require that . Note that the decrease of is only used to control convergence; we test using the 0/1 loss function as described in Section 6.3. iterates over the examples until its hypothesis satisfies all of them, or until exceeds the threshold. Thus, empirical risk minimization in our context is manifested by minimizing .
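The promote/demote step on an unsatisfied example can be sketched as follows. This is a deliberate simplification: fixed promotion and demotion factors stand in for the error-dependent rates described above, and the function name is ours:

```python
def update_on_error(weights, ctx, quad, y, promote=1.1, demote=0.9):
    """One multiplicative update: for an unsatisfied preference
    ((a, b, c, d), y), promote the weights of the contexts in which the
    pair that should win co-occurs, demote those of the other pair,
    then renormalize so the weights sum to 1."""
    a, b, c, d = quad
    good = ctx[a] & ctx[b] if y == 1 else ctx[c] & ctx[d]
    bad = ctx[c] & ctx[d] if y == 1 else ctx[a] & ctx[b]
    for i in good:
        weights[i] *= promote
    for i in bad:
        weights[i] *= demote
    total = sum(weights)
    return [w / total for w in weights]
```

By the monotonicity properties required of the adaptive measure, promoting mutual contexts raises the relatedness score of the winning pair, and demoting lowers that of the losing pair.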

1:  Initialize:
4:  repeat
6:     for all  do
7:        if (then
9:        end if
12:        if ( then
13:           {This is an unsatisfied example.}
17:           for all  do
19:           end for
20:           for all  do
22:           end for
23:           Normalize weights s.t.
24:        end if
25:     end for
26:     if (then
28:        if (then
29:           return  
30:        end if
31:     end if
33:  until 
Algorithm 1

The computational complexity of is as follows. The model requires memory space, since each context is associated with a weight. In addition, saves a mapping from any to its . Thus, every occurrence of a term, , in the corpus is represented in this mapping by the index of the relevant context in . Let be the number of occurrences of in the corpus, and define . Hence, this mapping requires space. Overall, the required space is

for learning and classifying. Our experiments in 64-bit Java with 1.3GB of filtered Wikipedia (using mainly hash tables) required 8GB of RAM. Due to the normalization constraint (4), when we update a single context’s weight, we influence the weights of all contexts. Therefore, each update due to an unsatisfied example requires time complexity. In the worst case, each iteration requires . If we denote by (resp., ) the maximum (resp., minimum) semantic relatedness score of on any example in , then the maximum value of is . In addition, with the exception of at most iterations, decreases in every iteration by at least . It follows that the maximum number of iterations is

Thus, the worst case time complexity of is

If we implement using , then the normalization constraint (4) is not necessary. Let us denote by the division factor of a certain normalization, by the semantic relatedness score before the normalization, and by the semantic relatedness score after the normalization,


We thus have,

Hence, the worst case time complexity of using is

We emphasize that this is a worst case analysis. In practice, the precise time complexity depends mainly on the number of training errors. Assuming that computing depends only on and , this computation requires time complexity. Thus, classifying an instance requires time. If we denote by the total number of unsatisfied examples encountered by during training, and assume that in our BK corpus for every term, then the overall time complexity of the learning process is ( using ), since the overall time required to process satisfied instances is negligible. Finally, we note that in all our experiments the total number of iterations was at most 100, and it was always the case that .
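The term-to-context mapping used in the space analysis is essentially an inverted index. A minimal sketch (with naive whitespace tokenization, unlike the stemmed and stopword-filtered corpus used in the experiments; the function name is ours):

```python
from collections import defaultdict

def build_index(contexts):
    """Map each term to the set of indices of the contexts it occurs
    in; the total size of this mapping is the sum of the per-term
    occurrence counts, matching the space analysis above."""
    index = defaultdict(set)
    for i, text in enumerate(contexts):
        for term in text.split():
            index[term].add(i)
    return index
```

Given such an index, the contexts shared by two terms (the ones whose weights an update touches) are obtained by a single set intersection.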

6 Empirical Evaluation

To evaluate the effectiveness of we conducted several experiments. One of the barriers in designing these experiments is the lack of a labeled dataset of term quadruples as required by our model. The common benchmark datasets are attractive because they were labeled by human annotators, but these datasets are rather small. When considering even a small real world application involving 500 vocabulary terms, we need to be able to compare the relatedness of many of the involved pairs. However, the largest available dataset, WordSim353, contains only pairs (in effect there are 351 pairs, since each of the pairs money -- bank and money -- cash appears twice, with two different scores; in our experiments we simply merged them and used average scores) over its unique vocabulary terms.

6.1 The GSS Dataset

Although we utilized all available datasets in our experiments (see below), we sought a benchmark of significantly larger scale in order to approach real world scenarios. In such scenarios, where the vocabulary is large, our resources limit us to training on only a negligible fraction of the available preferences (the largest fraction is about ). As opposed to the available humanly annotated datasets, where we examined the ability to learn human preferences, the larger dataset has a different objective: to verify whether learning can be achieved while leveraging such a tiny statistical fraction of the dataset. Furthermore, as a sanity check, we want this large dataset to still be positively correlated with human semantic preferences.

Without access to a humanly annotated dataset of large scale, we synthesized a labeled dataset as follows. Noting that a vocabulary of 1000-2000 words covers about 72%-80% of written English texts [\BCAYFrancis \BBA KuceraFrancis \BBA Kucera1982], we can envision practical applications involving vocabularies of such sizes. We therefore selected a dictionary consisting of the most frequent English words (). For each of the term pairs over we used an independent corpus of English texts, namely the Gutenberg Project, to define the semantic relatedness score of pairs, using the method, applied with sentence based contexts. We call this scoring method the Gutenberg Semantics Score (GSS).

Project Gutenberg is a growing repository of high quality and classic literature that is freely available on the web. For example, among the books one can find Alice’s Adventures in Wonderland, The Art of War, The Time Machine, Gulliver’s Travels, and many other well known fiction ebooks. Currently, Project Gutenberg offers over 36,000 ebooks, which appear in many formats such as HTML, EPUB, Kindle, PDF, Plucker, free text, etc. In this work we used a complete older version of Project Gutenberg from February 1999, containing only 1533 texts, bundled by Walnut Creek CDROM. We did not try any other version; we used this old and small version merely because it was in our possession and it served the purpose of our experiments as mentioned above. We believe that any other version, or indeed any other textual corpus, could be utilized instead.

While GSS is certainly not as reliable as a human generated score (for the purpose of predicting human scores), we show below that GSS is positively correlated with human annotation, achieving 0.58 Spearman correlation with the WordSim353 benchmark. Given a set of term pairs together with their semantic relatedness scores (such as those generated by GSS), we construct a labeled set of preferences according to the semantic relatedness scores (see definitions in Section 3).
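The conversion from absolute scores (such as GSS) to labeled preferences can be sketched as follows; ties are skipped since they encode no preference (the function name is ours):

```python
from itertools import combinations

def scores_to_preferences(scores):
    """Given a dict mapping term pairs to relatedness scores, emit
    labeled quadruples: label +1 if the first pair scores strictly
    higher than the second, -1 if strictly lower; ties are skipped."""
    prefs = []
    for (pair1, s1), (pair2, s2) in combinations(scores.items(), 2):
        if s1 == s2:
            continue
        prefs.append((pair1 + pair2, 1 if s1 > s2 else -1))
    return prefs
```

Note that the number of preferences grows quadratically in the number of scored pairs, which is why only a tiny fraction of them can be used for training at large scale.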

We emphasize that the texts of Project Gutenberg were taken in their entirety and as is, without any modifications, to avoid any selection bias. (The GSS dataset will be made publicly available.) Nevertheless, despite its statistical correlation with human annotation, our main objective is not to evaluate absolute performance scores, but rather to see whether generalization can be accomplished at this scale, and in particular, with an extremely small fraction of the available training examples.

6.2 Background Knowledge Corpora

An integral part of the model is its BK corpus. We conducted experiments using two corpora. The first corpus is the snapshot of Wikipedia from 05/11/05, preprocessed using Wikiprep, an XML preprocessor for Wikipedia. (Wikipedia’s dump contains many macros that need to be processed in order to obtain the raw text.) We used this old version of Wikipedia only because it was already available preprocessed and, as mentioned in the previous section, we saw no importance in choosing one version over another. Following [\BCAYGabrilovich \BBA MarkovitchGabrilovich \BBA Markovitch2007], in order to remove small and overly specific articles, we filtered out articles containing either less than 100 non-stopword terms and/or less than 5 incoming links and/or less than 5 outgoing links. The second corpus we used is the Project Gutenberg corpus mentioned above. We emphasize that in all experiments involving GSS scores only Wikipedia was used as the BK corpus. Also, in each experiment we used either Wikipedia or Gutenberg as the BK corpus, never both. In all the experiments we ignored stopwords and stemmed the terms using Porter’s stemmer. Finally, we considered three types of contexts: sentences, paragraphs and whole documents. Sentences are parsed using ‘.’ as a separator without any further syntax considerations; paragraphs are parsed using an empty line as a separator. No other preprocessing, filtering or optimizations were conducted. After some tuning, we applied with the following hyper-parameters, which gave us the best results: , , , and . (Our brief attempts with various continuous functions, linear or exponential, were not as successful.)
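The context extraction conventions just described can be sketched as follows (Porter stemming, applied in our experiments, is omitted here to keep the sketch dependency-free; the function name is ours):

```python
def split_contexts(text, unit="paragraph", stopwords=frozenset()):
    """Split raw text into context units: sentences on '.' with no
    further syntax considerations, paragraphs on empty lines, or the
    whole document as one context. Stopwords are dropped."""
    if unit == "sentence":
        chunks = text.split(".")
    elif unit == "paragraph":
        chunks = text.split("\n\n")
    else:  # whole document
        chunks = [text]
    contexts = []
    for chunk in chunks:
        terms = [t for t in chunk.lower().split() if t not in stopwords]
        if terms:
            contexts.append(terms)
    return contexts
```

This deliberately crude tokenization mirrors the paper’s policy of avoiding any further preprocessing, filtering or optimization.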

6.3 Evaluation Methodology

Consider a collection of preferences, where each preference is a quadruple, as defined in Section 3. When we evaluate the performance of the algorithm w.r.t. a training set of size , we choose an -subset uniformly at random. The rest of the preferences in are taken as the test set. (Formally speaking, this type of sampling without replacement of the training set is within a standard transductive learning model [\BCAYVapnikVapnik1998, Sec. 8.1, Setting 1].) However, if remains very large, only 1,000,000 preferences, chosen uniformly at random from , are taken for testing. The training set is fed to . The output of the algorithm is a hypothesis , consisting of a weight vector that includes a component for each context in . Then we apply the hypothesis to the test set and calculate the resulting accuracy (using the 0/1 loss function). This quantity provides a relatively accurate estimate of (one minus) the true error . In order to obtain a learning curve we repeat this evaluation procedure for a monotonically increasing sequence of training set sizes. The popular performance measure in semantic relatedness research is the Spearman correlation coefficient between the ranking obtained by the method and the ground truth ranking; therefore, we calculated and reported it as well. In addition, in order to gain statistical confidence in our results, we repeated the experiments for each training set size multiple times and report the average results. For each estimated average quantity along the learning curve we also calculated its standard error of the mean (SEM), and depicted the resulting SEM values as error bars.
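Two ingredients of this evaluation loop are easy to state in code: sampling a training subset without replacement, and the Spearman correlation between the induced ranking and the ground truth ranking. A sketch (assuming no tied scores, so simple integer ranks suffice; names are ours):

```python
import random

def split_sample(prefs, m, seed=0):
    """Choose an m-subset of the preferences uniformly at random for
    training; the remaining preferences form the test set."""
    rng = random.Random(seed)
    shuffled = list(prefs)
    rng.shuffle(shuffled)
    return shuffled[:m], shuffled[m:]

def spearman(xs, ys):
    """Spearman rank correlation coefficient (no tie handling)."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

Repeating the split with different seeds and averaging the resulting accuracies and correlations yields the learning curves (and SEM error bars) reported below.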

6.4 Experiment 1: large scale

Figure 1: Experiment 1 (large scale) - Learning curves for test accuracy (solid) and test correlation (dashed), with standard error bars. Lower horizontal line at 0.415 marks the performance of ESA [\BCAYGabrilovich \BBA MarkovitchGabrilovich \BBA Markovitch2005]. Upper horizontal line at 0.476 marks the performance of [\BCAYCilibrasi \BBA VitanyiCilibrasi \BBA Vitanyi2007].

In order to evaluate in an ambitious, large scale and quite realistic scenario, we conducted the following experiment. Taking (the top 1000 most frequent terms in Wikipedia), we considered all possible preferences. Note that the number of preferences associated with is huge, containing about quadruples. We labeled the preferences according to GSS as described above. In generating the learning curve we were only able to reach training examples, thus utilizing an extremely small fraction of the available preferences (the largest fraction is about ). Figure 1 presents the 0/1 test accuracy and Spearman correlation learning curves. In this figure we also mark the results obtained by two unsupervised methods: (i) using Wikipedia as BK corpus with paragraph level contexts; (ii) the well known ESA method using the same filtered Wikipedia snapshot mentioned above. Both of these unsupervised performance scores were calculated by us using our own implementations of these methods. It is evident that successfully generalized from the training sample and accomplished a notable improvement over its starting point. We believe that these results can serve as a proof of concept and confirm its ability to handle real world challenges.

6.5 Experiment 2: medium scale

We repeated the previous experiment, now with taken to be a subset of 500 terms from chosen uniformly at random. All other experiment parameters were precisely as in Experiment 1. The resulting learning curves are shown in Figure 2. Clearly, this medium scale problem gave rise to significantly higher absolute performance scores. We believe that the main reason for this improvement (with respect to the large scale experiment) is that with we were able to utilize a larger fraction of the preferences in training.

Figure 2: Experiment 2 (medium scale) - Learning curves for test accuracy (solid) and test correlation (dashed), with standard error bars. Lower horizontal line at 0.416 marks the performance of ESA [\BCAYGabrilovich \BBA MarkovitchGabrilovich \BBA Markovitch2005]. Upper horizontal line at 0.482 marks the performance of [\BCAYCilibrasi \BBA VitanyiCilibrasi \BBA Vitanyi2007].

6.6 Experiment 3: small scale

As mentioned in Section 2, most of the top performing known techniques, including the reported supervised methods, evaluated performance with respect to the WordSim353 benchmark. In order to link the proposed approach to the current literature, we also conducted an experiment using WordSim353 as a source of labeled preferences. This experiment serves three purposes. First, it can be viewed as a sanity check for our method, now challenging it with humanly annotated scores. Second, it is interesting to examine the performance advantage of our supervised approach vs. no systematic supervision, as obtained by the unsupervised methods (we already observed in Experiments 1&2 that our supervised method can improve upon the scores obtained by ESA and ). Finally, using this experiment we are able to compare with the other known supervised methods, which so far have relied on SVMs.

Figure 3: Experiment 3 (small scale) - Learning curves for test correlation and test accuracy, with standard error bars using either Wikipedia or Gutenberg. Lower horizontal line at 0.82 marks the best known unsupervised result for WordSim353 [\BCAYRadinsky, Agichtein, Gabrilovich, \BBA MarkovitchRadinsky et al.2011]. Upper horizontal line at 0.8654 marks the best known supervised result for WordSim353 [\BCAYHaralambous \BBA KlyuevHaralambous \BBA Klyuev2011].

Figure 3 shows the learning curves obtained by applied with paragraph contexts, using either Wikipedia or Gutenberg (but not both together) as the BK corpus. The lower horizontal line, at the 0.82 level, marks the best known unsupervised result obtained for WordSim353 [\BCAYRadinsky, Agichtein, Gabrilovich, \BBA MarkovitchRadinsky et al.2011]. The upper horizontal line, at the 0.8654 level, marks the best known supervised result [\BCAYHaralambous \BBA KlyuevHaralambous \BBA Klyuev2011]. It is evident that quite rapid learning is accomplished using either the Wikipedia or the Gutenberg model, but Wikipedia enables significantly faster learning and smaller sample complexity at each error level. The curves in the internal panel show the corresponding test accuracies (0/1 loss) for the same experiments. Note that meaningful comparisons between and the other (SVM based) supervised methods (described in Section 2.5) can only be made when considering the same train/test partition sizes. Unlike our experimental setting, both \shortciteA1620758 and \citeAHaralambous2011 achieved their reported results (0.78 and 0.8654 correlation with WordSim353, respectively) using 10-fold cross validation, thus utilizing 90% of the available labeled preferences for training. When considering only the best results obtained at the top of the learning curve, outperforms the best reported supervised performance after consuming 1.5% of all the available WordSim353 preferences using the Wikipedia model, and after consuming 3% of the preferences using the Project Gutenberg model.

Figure 4: Experiment 3 (small scale) - Learning curves for test correlation with standard error bars using Project Gutenberg applied with sentences, paragraphs and whole document as context types. Lower horizontal line at 0.82 marks the best known unsupervised result for WordSim353 [\BCAYRadinsky, Agichtein, Gabrilovich, \BBA MarkovitchRadinsky et al.2011]. Upper horizontal line at 0.8654 marks the best known supervised result for WordSim353 [\BCAYHaralambous \BBA KlyuevHaralambous \BBA Klyuev2011]. The internal panel zooms into the same curves of sentence- and paragraph-based semantic relatedness, now with logarithmic -axis.

Figure 4 depicts three Gutenberg learning curves, one for each context type. The internal panel zooms into the curves of sentence- and paragraph-based contexts, now with a logarithmically scaled -axis to emphasize their differences. As before, the lower (resp., upper) horizontal line at 0.82 (resp., 0.8654) marks the best known unsupervised (resp., supervised) result for WordSim353 [\BCAYRadinsky, Agichtein, Gabrilovich, \BBA MarkovitchRadinsky et al.2011] [<]resp.,¿Haralambous2011. Clearly, paragraph contexts exhibit the best test performance for almost all training set sizes. In contrast, contexts consisting of whole documents perform poorly, to the extent that even after utilizing the largest training set size, they are still way behind sentences and paragraphs (even when the latter use no labeled examples at all). A similar comparison (not presented) for Wikipedia contexts showed an entirely different picture, with all context types exhibiting very similar (and almost indistinguishable) performance, as shown for paragraphs in Figure 3.

6.7 Experiment 4: subjective semantic relatedness

To examine the ability of to adapt to subjective semantic relatedness ranking, we created two new synthetic sets of semantic relatedness scores to all WordSim353 pairs:

  1. A Wikipedia set of scores that was calculated using paragraph-based over Wikipedia;

  2. A Gutenberg set that was generated using paragraph-based over the Gutenberg corpus.

We consider these two sets as proxies for two different “subjective” semantic relatedness preferences. (Indeed, these two sets exhibited numerous significantly different semantic relatedness valuations. For example, nature and environment received a high score in Wikipedia but a very low score in Gutenberg, and psychologist and fear were much more similar in Gutenberg than in Wikipedia.) Table 1 outlines two learning curves: the first corresponds to learning the Gutenberg preferences using Wikipedia as the BK corpus, and the second to learning the Wikipedia preferences using Gutenberg as the BK corpus. It is evident that in both cases successfully adapted to these subjective preferences, achieving excellent test performance.

Training set size (%) 0 0.5 1 2 4 8
Wiki learns Gutenberg 0.65 0.77 0.85 0.93 0.97 0.99
Gutenberg learns Wiki 0.62 0.73 0.82 0.89 0.94 0.96
Table 1: Experiment 4 (subjective semantic relatedness) - Spearman Correlation.

6.8 Experiment 5: semantic similarity

Synonymous relations are considered among the most prominent semantic relations. Semantic similarity is a sub-domain of semantic relatedness in which one attempts to assess the strength of synonymous relations. A widely accepted approach to handling synonyms (and antonyms) is via distributional similarity [\BCAYLinLin1998, \BCAYBudanitsky \BBA HirstBudanitsky \BBA Hirst2006]. In this approach, to determine the similarity of terms and we consider and , the “typical” distributions of terms in close proximity to and , respectively. It is well known that these distributions tend to resemble each other whenever the two terms are similar, and vice versa. In contrast, computes its similarity scores based on co-occurrence counts, and the conventional wisdom is that synonyms tend not to co-occur. A natural question, then, is how well, and in what way, can handle synonymous relations.
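For contrast with the co-occurrence counting used by our model, the distributional side of this comparison can be sketched as a cosine similarity between the neighboring-term distributions of the two terms (represented as dicts mapping neighbor terms to counts; the function name is ours):

```python
from math import sqrt

def cosine(p, q):
    """Cosine similarity between two sparse neighbor-term
    distributions: similar terms tend to have resembling
    distributions, hence a cosine close to 1."""
    dot = sum(c * q.get(t, 0) for t, c in p.items())
    norm_p = sqrt(sum(c * c for c in p.values()))
    norm_q = sqrt(sum(c * c for c in q.values()))
    return dot / (norm_p * norm_q) if norm_p and norm_q else 0.0
```

Distributional similarity compares the company each term keeps, whereas our measure asks how often the two terms keep company with each other.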

In this section we examine and analyze the behavior of on a specialized semantic similarity task. To this end, we use the semantic similarity datasets R&G and M&C, introduced and described in Section 2.1.

Figure 5: Experiment 5 (semantic similarity with Miller & Charles dataset) - Learning curves for test correlation (solid) and test accuracy (dashed) with standard error bars. Lower horizontal line at 0.9 marks the best known unsupervised results [\BCAYLi, Bandar, \BBA McLeanLi et al.2003, \BCAYHughes \BBA RamageHughes \BBA Ramage2007]. Upper horizontal line at 0.92 marks the best known supervised result [\BCAYAgirre, Alfonseca, Hall, Kravalova, Pasca, \BBA SoroaAgirre et al.2009].

Figure 6: Experiment 5 (semantic similarity with Rubenstein and Goodenough dataset) - Learning curves for test correlation (solid) and test accuracy (dashed) with standard error bars. Lower horizontal line at 0.8614 marks the best known unsupervised result [\BCAYTsatsaronis, Varlamis, \BBA VazirgiannisTsatsaronis et al.2010]. Upper horizontal line at 0.96 marks the best known supervised result [\BCAYAgirre, Alfonseca, Hall, Kravalova, Pasca, \BBA SoroaAgirre et al.2009].

Figure 5 depicts the results obtained for the M&C dataset. The lower horizontal line, at the 0.9 level, marks the best known unsupervised results obtained for M&C dataset [\BCAYLi, Bandar, \BBA McLeanLi et al.2003, \BCAYHughes \BBA RamageHughes \BBA Ramage2007]. The upper horizontal line, at the 0.92 level, marks the best known supervised result obtained for M&C dataset [\BCAYAgirre, Alfonseca, Hall, Kravalova, Pasca, \BBA SoroaAgirre et al.2009]. Figure 6 depicts the results obtained for the R&G dataset. The lower horizontal line, at the 0.8614 level, marks the best known unsupervised results obtained for R&G dataset [\BCAYTsatsaronis, Varlamis, \BBA VazirgiannisTsatsaronis et al.2010]. The upper horizontal line, at the 0.96 level, marks the best known supervised result obtained for R&G dataset [\BCAYAgirre, Alfonseca, Hall, Kravalova, Pasca, \BBA SoroaAgirre et al.2009]. The learning curves depicted in both figures clearly indicate that learning synonyms using our method is an achievable task, and in fact, can improve upon the distributional similarity methods. While synonyms and antonyms co-occur infrequently, they still do co-occur. It is a nice property of our model that it can leverage these sparse co-occurrence counts and accurately detect synonyms by sufficiently increasing the weights of their mutual contexts.

7 Model Interpretability

The semantic model learned by is encoded in its weight vector . In this section we summarize our initial study to explore the model and gain some insight into its structure. Are the weights in “arbitrarily” optimized to reduce the training error, or is it the case that they are organized in a meaningful and interpretable manner? Can we learn from something about the human rater(s) who tagged the training set? Can we say something about their world knowledge and/or intellectual interests?

Trying to answer the above questions we conducted the following preliminary study. While the results we obtained are not sufficient for fully answering the above questions, they are indicative and suggest that the semantic model contains useful information that can be interpreted and perhaps even be utilized in applications. In our experiments, due to the absence of human annotating resources, we again synthesized a “human rater” whose knowledge is focused on a specific topic.

Given a specific topic in Wikipedia (e.g., sports), we extracted the set of documents pertaining to that topic (using the Wikipedia topic tags) and partitioned this set uniformly at random into two subsets: a labeling subset and a background subset. The labeling subset was used for labeling, and the background subset was used as part of the BK corpus together with the rest of the Wikipedia corpus. Our synthetic rater annotated preferences by applying the base relatedness measure over the labeling subset, whose articles were partitioned into paragraph units. We call the resulting semantic preferences the topic semantics.

Taking the terms of the labeling subset as a dictionary, we generated a training set by sampling preferences uniformly at random, which were tagged using the topic semantics. We then applied our algorithm to learn the topic semantics from this training set while utilizing the background subset (as well as the rest of Wikipedia) as a BK corpus, whose documents were parsed to the paragraph level as well. We then examined the resulting model.
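The synthetic-rater protocol above can be sketched as follows. This is a minimal illustration under our own assumptions: the labeling subset is modeled as a list of paragraph "contexts" (term sets), and the paper's base relatedness measure is stubbed by a plain co-occurrence count; the function names are ours, not the paper's.

```python
import random

# Hypothetical stub for the base relatedness measure: the number of
# labeling contexts in which two terms co-occur.
def cooccurrence_score(contexts, a, b):
    return sum(1 for ctx in contexts if a in ctx and b in ctx)

def sample_training_preferences(contexts, dictionary, n, seed=0):
    """Sample n preferences (a, b, c, d), meaning 'a:b is more related
    than c:d', tagged by the synthetic rater's scores over the
    labeling contexts."""
    rng = random.Random(seed)
    prefs = []
    while len(prefs) < n:
        a, b, c, d = rng.sample(dictionary, 4)
        s1 = cooccurrence_score(contexts, a, b)
        s2 = cooccurrence_score(contexts, c, d)
        if s1 == s2:  # ties carry no preference; resample
            continue
        prefs.append((a, b, c, d) if s1 > s2 else (c, d, a, b))
    return prefs
```

Every sampled quadruple is oriented so that its first pair is strictly preferred, matching the preference semantics used throughout the paper.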

         play                player                  record                 club
  #   Music      Sports     Music       Sports       Music       Sports    Music    Sports
  1   band       game       instrument  play         release     set       dance    football
  2   guitar     team       play        league       album       season    night    league
  3   instrument season     replace     game         label       win       heart    cup
  4   perform    player     join        season       band        career    fan      play
  5   time       football   guitar      born         song        finish    local    divis
  6   year       first      technique   team         first       run       house    season
  7   role       score      key         football     new         game      London   manage
  8   tour       club       example     professional studio      won       scene    success
  9   two        year       football    baseball     production  score     mix      found
 10   new        career     hand        major        sign        second    radio    player
Table 2: Model Interpretability - Top 10 related terms according to Music and Sports semantics.

Two topics were considered, Music and Sports, resulting in two models, one per topic. In order to observe and understand the differences between these two models, we identified and selected, before the experiment, a few target terms that have ambiguous meanings with respect to Music and Sports. The target terms are:

play,   player,   record,   club.

Table 2 exhibits the top 10 terms most related to each of the target terms according to either model. It is evident that the semantics portrayed by these lists are quite different and nicely represent their topics, as we may intuitively expect. The table also emphasizes the inherent subjectivity in semantic relatedness analyses, which should be accounted for when generating semantic models.

Given a topical category in Wikipedia and a hypothesis, we define the aggregate weight of that category, according to the hypothesis, to be the sum of the weights of all contexts that belong to an article categorized under the category or one of its Wikipedia sub-categories. For each topic, we consider its initial hypothesis and its final hypothesis (after learning). (The initial hypotheses vary between topics if their respective BK corpora are different.) In order to evaluate the influence of the labeling semantics, we calculated, for each topic, the difference between each category's aggregate weight according to the final hypothesis and according to the initial one.
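The aggregate-weight computation described above can be sketched as follows. This is an illustrative sketch under our own assumptions: a hypothesis is a dict mapping context ids to weights, and each context is mapped to the Wikipedia categories of its enclosing article, with sub-categories assumed pre-expanded; all names here are ours.

```python
def aggregate_weight(weights, context_categories, category):
    """Sum of the weights of all contexts whose article falls under
    `category` (or one of its sub-categories, assumed pre-expanded
    into context_categories)."""
    return sum(w for ctx, w in weights.items()
               if category in context_categories.get(ctx, ()))

def weight_change(initial, final, context_categories, category):
    """Increase (positive) or decrease (negative) of the aggregate
    weight of `category` between the initial and final hypotheses."""
    return (aggregate_weight(final, context_categories, category)
            - aggregate_weight(initial, context_categories, category))
```

Applying `weight_change` to every major Wikipedia category yields the per-category deltas plotted in Figures 7 and 8.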

Figure 7: Model Interpretability - Weight increases (upper/green) and decreases (lower/red) of Wikipedia's major categories according to the Music hypotheses.


Figure 8: Model Interpretability - Weight increases (upper/green) and decreases (lower/red) of Wikipedia's major categories according to the Sports hypotheses.

Figures 7 and 8 present the increase/decrease in these aggregate weights for Wikipedia's major categories. For both labeling topics, Music and Sports, it is easy to see that, by and large, the aggregate weights of categories related to the labeling topic were increased, while the weights of unrelated categories were decreased. Surprisingly, when considering the Music topic, many mathematical categories dramatically increased their weight. (Indeed, Music and Mathematics share a large vocabulary. Furthermore, it is common wisdom that successful mathematicians are often also accomplished musicians, and vice versa.) To summarize, it is clear that our algorithm successfully identified the intellectual affiliation of the synthesized labeler.

While these results are not conclusive (and can be viewed as merely anecdotal), we believe that they do indicate that the automatically emerging weights in the model are organized in a meaningful and interpretable manner, encoding the labeling semantics as a particular weight distribution over the corpus topics. In addition, not only did the model identify the labeler's background knowledge, it also unexpectedly revealed related topics.

8 A Learning-Theoretic Perspective

Here we would like to present some initial thoughts on the learnability of semantic relatedness. Classic learning-theoretic considerations ensure that generalization will be achieved if the hypothesis class is sufficiently expressive to allow fitting of the training set, but still appropriately restricted to avoid overfitting. Appropriate fitting is of course a function of the hypothesis class expressiveness and the training sample size. Assuming a realizable (noise-free) setting, a classical result in statistical learning theory is that any consistent learning algorithm (one that perfectly fits the training set) requires a sample of size

m = O( (1/ε) ( d ln(1/ε) + ln(1/δ) ) )

to achieve error at most ε with probability at least 1 − δ over random choices of the training set. Here d is the VC-dimension of the hypothesis class (see, e.g., Anthony & Bartlett, 1999). Conversely, it has been shown (for a particular worst-case distribution and hypothesis class) that

m = Ω( (d + ln(1/δ)) / ε )

examples are necessary. Thus, the VC-dimension is the dominating factor, both necessary and sufficient, that determines the required training sample size if we seek a distribution-free bound.

We now show that in our context a completely unrestricted hypothesis class, whose hypotheses only satisfy the "anti-symmetry" condition (2), is completely useless: its VC-dimension equals the total number of quadruples. Therefore, using this class is of course a triviality, since nothing can be gained by sampling (the proof is provided in Appendix A).

We now consider the hypothesis class of permutations over term pairs. Each hypothesis in this class is in essence a full order over the pairs. The VC-dimension of this class is not hard to derive (see the proof in Appendix B).

The set of permutations, with its smaller VC-dimension, provides a substantial improvement over the set of all (anti-symmetric) hypotheses. However, this dimension is still quite large and would require huge resources for gathering sufficiently large training sets. In contrast, we already observed the ability of our algorithm to learn semantic relatedness preferences quite well with relatively small training sets. Can this be explained using VC-dimension arguments?

While we currently do not know how to explicitly evaluate the capacity of the hypothesis space induced by our algorithm, we observe that the use of a BK corpus through the relatedness measure provides further capacity reductions by placing many constraints on the set of allowable permutations. For example, observe that the algorithm only updates the weights of contexts that include both terms of a given pair, so it cannot change the semantic relatedness score of terms that do not co-occur; hence, the relative order of unrelated terms is predetermined. Moreover, considering the structure of the measure, a lower bound can be derived on the semantic relatedness score of any two terms (recall that the higher the score, the less the terms are related). It follows that the algorithm has limited freedom in reducing semantic relatedness scores.

Finally, our algorithm regularizes context weights by normalizing their total sum. Therefore, an update of one context's weight influences the weights of all contexts, which, in turn, influences the semantic relatedness scores of all term pairs in general, and specifically of all term pairs containing terms within this context. This mutual dependency was especially evident in the large and medium experiments (Sections 6.4 and 6.5), where the learning complexity was higher.
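The coupling induced by this normalization can be illustrated with a minimal sketch. The normalization scheme here (rescaling so the total weight is preserved after a single-context update) is our assumption for illustration, not necessarily the paper's exact scheme.

```python
def update_and_normalize(weights, ctx, delta):
    """Add delta to one context's weight, then rescale all weights so
    the total sum is preserved. Note that every context's weight
    changes, not just the updated one."""
    new = dict(weights)
    new[ctx] += delta
    total_before = sum(weights.values())
    total_after = sum(new.values())
    scale = total_before / total_after
    return {c: w * scale for c, w in new.items()}
```

After a single positive update to one context, every other context's weight shrinks slightly, so every relatedness score in the model is perturbed, which is the mutual dependency described above.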

Such considerations, together with other statistical and graph-theoretic properties of a particular BK corpus (viewed as a weighted graph whose nodes are terms or term pairs), can in principle be used to estimate the effective VC-dimension implied by the model. We believe that such considerations and analyses are important, as they can lead to a better understanding and improvement of the learning process, and perhaps even help characterize the role and usefulness of particular BK corpora.

9 Concluding Remarks

Building on successful and interesting ideas, we presented in this paper a novel supervised method for learning semantic relatedness. The proposed algorithm exhibits interesting performance on large- and medium-scale problems and excellent performance on small-scale problems. In particular, it significantly outperforms the best supervised semantic relatedness method. Perhaps expectedly, our test scores are also distinctly superior to scores obtained by a plethora of unsupervised semantic relatedness methods, but of course this comparison is unfair because our method utilizes labeled examples that must be paid for.

Our research leaves many questions and issues that we find interesting and worthy of further study. Let us now mention a few.

The making of a good BK corpus. Our results indicate that high-quality semantic relatedness can be learned with markedly different types of BK corpora. In particular, we showed that semantic relatedness can be learned from a random and relatively small collection of ordinary fiction literature (ebooks from Project Gutenberg). However, we observe that the corpus "quality" affects both the starting performance and the learning rate. Specifically, the starting performance, before even a single labeled example is introduced, is significantly higher when using Wikipedia as a BK corpus. In fact, this initial performance (obtained by the base measure alone) is by itself among the top-performing unsupervised methods. Moreover, the learning rate obtained when using Wikipedia as a BK corpus, rather than Project Gutenberg, is clearly faster.

An interesting question here is what makes a BK corpus useful for learning semantic relatedness? Our speculative answer (yet to be investigated) is that a good corpus should consist of semantically coherent contexts that span a wide scope of meanings. For example, when generating the set of contexts from a fiction book, we can dissect the book into sentences, paragraphs, sections, etc. Large contexts (say, sections) will include many more co-occurrence relations than small contexts (say, sentences), but among these relations we expect to see entirely unrelated terms. On the other hand, if we only have very small contexts, we will observe only a subset of the related terms. Thus, the context size directly affects the precision and recall of observed "meanings" in a set of contexts. The learning curves of Figure 4 hint at such a tradeoff when using the books in Project Gutenberg.
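The precision/recall tradeoff of context granularity can be made concrete with a toy sketch (our own illustration: the "sentences" and "section" below are invented data, and co-occurrence here simply means appearing in the same context).

```python
from itertools import combinations

def cooccurring_pairs(contexts):
    """Set of unordered term pairs that co-occur in some context."""
    pairs = set()
    for ctx in contexts:
        pairs.update(frozenset(p) for p in combinations(sorted(set(ctx)), 2))
    return pairs

# One coarse "section" context versus its two fine "sentence" contexts.
sentences = [["guitar", "band"], ["team", "game"]]
section = [sum(sentences, [])]

small = cooccurring_pairs(sentences)  # high precision, low recall
large = cooccurring_pairs(section)    # high recall, low precision
```

The coarse context recovers every pair the fine contexts do, plus cross-sentence pairs such as guitar-game that are likely unrelated, which is exactly the tradeoff discussed above.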

Semantic relatedness between text fragments. Many of the interesting applications mentioned in the introduction can be solved given the ability to evaluate the relatedness between text fragments (in this paper we only considered relatedness between individual terms). One can think of many ways to extend any semantic relatedness measure from terms to text fragments, and many such methods have already been proposed in the literature (see, e.g., Tsatsaronis et al., 2010; Broder et al., 2007; Varelas et al., 2005). However, an interesting challenge would be to extract a semantic relatedness model using supervised learning where the training examples are relatedness preferences over text fragments. Such a model could be optimized for particular semantic tasks.

Disambiguated semantic relatedness. To the best of our knowledge, all the term-based semantic relatedness methods discussed in the literature follow a similar problem setup where relatedness is evaluated regardless of particular context(s). However, in many applications such contexts exist and can and should be utilized. For example, it is often the case that we have a target term along with its current context and we need to rank the terms in our dictionary according to relatedness to this target term. In such cases, the textual environment of the target term can be utilized to disambiguate it and help achieve better and more accurate contextual relatedness evaluations. It would be interesting to extend our model and methods to accommodate such contexts.

Active learning. In this work we proposed a passive learning algorithm that utilizes a uniformly sampled training set of preferences. It would be very interesting to consider active learning techniques to cleverly sample training preferences and expedite the learning process. Assuming a realizable setting, and that preferences satisfy transitivity, a straightforward approach would be to use a sorting algorithm to perfectly order the term pairs using O(N log N) comparisons (training examples), where N is the number of term pairs. It is easy to argue that this is also an information-theoretic lower bound on the sample complexity. Thus, several questions arise. First, is it possible to approach this bound within an agnostic setting? Second, is it possible to use some underlying structure (e.g., as exhibited in the BK corpus) to achieve an even smaller sample complexity? Finally, in many applications of interest we can do with ranking only the top most similar terms to the target term. What would be the best theoretically and practically achievable sample complexities in this case? We note that a general active learning algorithm for preferences in the agnostic setting, with near-optimal sample complexity guarantees, was very recently proposed by Ailon (2011) and Ailon, Begleiter, and Ezra (2012).
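The straightforward sorting approach above can be sketched as follows: in a realizable, transitive setting, a comparison oracle (standing in for the human rater) lets a standard O(N log N) sort recover the full order over term pairs. The oracle interface and names are ours, for illustration.

```python
import functools

def sort_by_oracle(pairs, prefer):
    """Order term pairs from most to least related using the oracle
    `prefer(p, q)`, which returns True iff pair p is strictly more
    related than pair q. Returns the order and the number of oracle
    queries issued (each query corresponds to one training example)."""
    calls = [0]

    def cmp(p, q):
        calls[0] += 1
        return -1 if prefer(p, q) else 1

    ordered = sorted(pairs, key=functools.cmp_to_key(cmp))
    return ordered, calls[0]
```

Since Python's sort performs O(N log N) comparisons, the number of oracle queries matches the information-theoretic bound discussed above (assuming a strict, transitive preference with no ties).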

Convergence and error bounds. Regarding convergence, it is quite easy to see that the learning process always converges: the training error can only decrease or stay the same between iterations, and it is bounded below by the stopping threshold. This threshold was introduced to handle noisy (non-realizable) realistic scenarios. The question is whether the threshold is really necessary when the problem is realizable. We conjecture that the answer to this question is "yes" because the algorithm only updates contexts in which both of the terms in question co-occur. Error analysis is another direction that may shed light on the learning process and perhaps improve the algorithm. It is interesting to address this question within both a statistical learning setting (see the discussion in Section 8) and under worst-case considerations in the spirit of online learning.

Benchmark datasets for semantic relatedness. When considering problems involving preferences over thousands of terms, as perhaps required in large-scale commercial applications, millions of humanly annotated preferences are required. In contrast, academic semantic relatedness research unfortunately relies solely on small annotated benchmark datasets, such as WordSim353, which leaves much to be desired. Considering that the typical vocabulary of an English-speaking adult consists of several thousand words, a desired benchmark dataset should be at least one or even two orders of magnitude larger than WordSim353. While acquiring a sufficiently large semantic dataset can be quite costly, we believe that semantic relatedness research will greatly benefit once such a dataset is introduced.

While a formal understanding of meaning still seems to be beyond reach, we may be closer to a point where computer programs are able to exhibit artificial understanding of meaning. Will large computational resources to process huge corpora, together with a very large set of labeled training examples, be sufficient?


  • [\BCAYAgirre, Alfonseca, Hall, Kravalova, Pasca, \BBA SoroaAgirre et al.2009] Agirre, E., Alfonseca, E., Hall, K., Kravalova, J., Pasca, M., \BBA Soroa, A. \BBOP2009\BBCP. \BBOQA study on similarity and relatedness using distributional and wordnet-based approaches\BBCQ  In \BemNAACL, \BPGS 19–27.
  • [\BCAYAgirre \BBA RigauAgirre \BBA Rigau1996] Agirre, E.\BBACOMMA \BBA Rigau, G. \BBOP1996\BBCP. \BBOQWord sense disambiguation using conceptual density\BBCQ  In \BemProceedings of the 16th conference on Computational linguistics - Volume 1, COLING, \BPGS 16–22.
  • [\BCAYAilonAilon2011] Ailon, N. \BBOP2011\BBCP. \BBOQActive Learning Ranking from Pairwise Preferences with Almost Optimal Query Complexity\BBCQ  In \BemNeural Information Processing Systems.
  • [\BCAYAilon, Begleiter, \BBA EzraAilon et al.2012] Ailon, N., Begleiter, R., \BBA Ezra, E. \BBOP2012\BBCP. \BBOQActive learning using smooth relative regret approximations with applications\BBCQ  \BemJournal of Machine Learning Research - Proceedings Track, \Bem23, 19.1–19.20.
  • [\BCAYAnthony \BBA BartlettAnthony \BBA Bartlett1999] Anthony, M.\BBACOMMA \BBA Bartlett, P. \BBOP1999\BBCP. \BemNeural Network Learning; Theoretical Foundations. Cambridge University Press.
  • [\BCAYBanerjee \BBA PedersenBanerjee \BBA Pedersen2003] Banerjee, S.\BBACOMMA \BBA Pedersen, T. \BBOP2003\BBCP. \BBOQExtended gloss overlaps as a measure of semantic relatedness\BBCQ  In \BemIJCAI, \BPGS 805–810.
  • [\BCAYBloehdorn \BBA MoschittiBloehdorn \BBA Moschitti2007] Bloehdorn, S.\BBACOMMA \BBA Moschitti, A. \BBOP2007\BBCP. \BBOQStructure and semantics for expressive text kernels\BBCQ  In \BemProceedings of the sixteenth ACM conference on Conference on information and knowledge management, CIKM, \BPGS 861–864.
  • [\BCAYBollegala, Matsuo, \BBA IshizukaBollegala et al.2007] Bollegala, D., Matsuo, Y., \BBA Ishizuka, M. \BBOP2007\BBCP. \BBOQMeasuring semantic similarity between words using web search engines\BBCQ  In \BemWWW, \BPGS 757–766.
  • [\BCAYBroder, Fontoura, Josifovski, \BBA RiedelBroder et al.2007] Broder, A., Fontoura, M., Josifovski, V., \BBA Riedel, L. \BBOP2007\BBCP. \BBOQA semantic approach to contextual advertising\BBCQ  In \BemProceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval, SIGIR, \BPGS 559–566.
  • [\BCAYBudanitsky \BBA HirstBudanitsky \BBA Hirst2001] Budanitsky, A.\BBACOMMA \BBA Hirst, G. \BBOP2001\BBCP. \BBOQSemantic distance in wordnet: An experimental, application-oriented evaluation of five measures\BBCQ  In \BemWorkshop on WordNet and Other Lexical Resources, Second meeting of the North American Chapter of the Association for Computational Linguistics, \BPGS 29–34.
  • [\BCAYBudanitsky \BBA HirstBudanitsky \BBA Hirst2006] Budanitsky, A.\BBACOMMA \BBA Hirst, G. \BBOP2006\BBCP. \BBOQEvaluating wordnet-based measures of lexical semantic relatedness\BBCQ  \BemComputational Linguistics, \Bem32(1), 13–47.
  • [\BCAYCarterette, Bennett, Chickering, \BBA DumaisCarterette et al.2008] Carterette, B., Bennett, P., Chickering, D., \BBA Dumais, S. \BBOP2008\BBCP. \BBOQHere or there: Preference judgments for relevance\BBCQ  In \BemECIR.
  • [\BCAYCilibrasi \BBA VitanyiCilibrasi \BBA Vitanyi2007] Cilibrasi, R.\BBACOMMA \BBA Vitanyi, P. \BBOP2007\BBCP. \BBOQThe google similarity distance\BBCQ  \BemIEEE Transactions on Knowledge and Data Engineering, \Bem19, 370–383.
  • [\BCAYCowie, Guthrie, \BBA GuthrieCowie et al.1992] Cowie, J., Guthrie, J., \BBA Guthrie, L. \BBOP1992\BBCP. \BBOQLexical disambiguation using simulated annealing\BBCQ  In \BemProceedings of the 14th conference on Computational linguistics - Volume 1, \BPGS 359–365.
  • [\BCAYDagan, Lee, \BBA PereiraDagan et al.1999] Dagan, I., Lee, L., \BBA Pereira, F. \BBOP1999\BBCP. \BBOQSimilarity-based models of cooccurrence probabilities\BBCQ  \BemMachine Learning, \Bem34(1-3), 43–69.
  • [\BCAYDas Sarma, Gollapudi, \BBA PanigrahyDas Sarma et al.2010] Das Sarma, A., Gollapudi, S., \BBA Panigrahy, R. \BBOP2010\BBCP. \BBOQRanking mechanisms in twitter-like forums\BBCQ  In \BemWSDM, \BPGS 21–30.
  • [\BCAYDeerwester, Dumais, Furnas, Landauer, \BBA HarshmanDeerwester et al.1990] Deerwester, S., Dumais, S., Furnas, G., Landauer, T., \BBA Harshman, R. \BBOP1990\BBCP. \BBOQIndexing by latent semantic analysis\BBCQ  \BemJournal of the American Society for Information Science, \Bem41(6), 391–407.
  • [\BCAYEgozi, Gabrilovich, \BBA MarkovitchEgozi et al.2008] Egozi, O., Gabrilovich, E., \BBA Markovitch, S. \BBOP2008\BBCP. \BBOQConcept-based feature generation and selection for information retrieval\BBCQ  In \BemAAAI.
  • [\BCAYEyke, Johannes, Weiwei, \BBA KlausEyke et al.2008] Eyke, H., Johannes, F., Weiwei, C., \BBA Klaus, B. \BBOP2008\BBCP. \BBOQLabel ranking by learning pairwise preferences\BBCQ  \BemAI, \Bem172(16-17), 1897–1916.
  • [\BCAYFellbaumFellbaum1998] Fellbaum \BBOP1998\BBCP. \BemWordNet: An Electronic Lexical Database (Language, Speech, and Communication).
  • [\BCAYFinkelstein, Gabrilovich, Matias, Rivlin, Solan, Wolfman, \BBA RuppinFinkelstein et al.2001] Finkelstein, L., Gabrilovich, E., Matias, Y., Rivlin, E., Solan, Z., Wolfman, G., \BBA Ruppin, E. \BBOP2001\BBCP. \BBOQPlacing search in context: the concept revisited\BBCQ  In \BemWWW, \BPGS 406–414.
  • [\BCAYFrancis \BBA KuceraFrancis \BBA Kucera1982] Francis, W.\BBACOMMA \BBA Kucera, H. \BBOP1982\BBCP. \BemFrequency analysis of English usage: Lexicon and grammer. Houghton Mifflin.
  • [\BCAYGabrilovich \BBA MarkovitchGabrilovich \BBA Markovitch2005] Gabrilovich, E.\BBACOMMA \BBA Markovitch, S. \BBOP2005\BBCP. \BBOQFeature generation for text categorization using world knowledge\BBCQ  In \BemIJCAI, \BPGS 1048–1053.
  • [\BCAYGabrilovich \BBA MarkovitchGabrilovich \BBA Markovitch2006] Gabrilovich, E.\BBACOMMA \BBA Markovitch, S. \BBOP2006\BBCP. \BBOQOvercoming the brittleness bottleneck using wikipedia: Enhancing text categorization with encyclopedic knowledge\BBCQ  In \BemAAAI, \BPGS 1301–1306.
  • [\BCAYGabrilovich \BBA MarkovitchGabrilovich \BBA Markovitch2007] Gabrilovich, E.\BBACOMMA \BBA Markovitch, S. \BBOP2007\BBCP. \BBOQComputing semantic relatedness using wikipedia-based explicit semantic analysis\BBCQ  In \BemIJCAI, \BPGS 1606–1611.
  • [\BCAYGabrilovich \BBA MarkovitchGabrilovich \BBA Markovitch2009] Gabrilovich, E.\BBACOMMA \BBA Markovitch, S. \BBOP2009\BBCP. \BBOQWikipedia-based semantic interpretation for natural language processing\BBCQ  \BemAI Research, \Bem34, 443–498.
  • [\BCAYGreenGreen1999] Green, S. \BBOP1999\BBCP. \BBOQBuilding hypertext links by computing semantic similarity\BBCQ  \BemIEEE Transactions on Knowledge and Data Engineering.
  • [\BCAYGuha, McCool, \BBA MillerGuha et al.2003] Guha, R., McCool, R., \BBA Miller, E. \BBOP2003\BBCP. \BBOQSemantic search\BBCQ  In \BemProceedings of the 12th international conference on World Wide Web, WWW, \BPGS 700–709.
  • [\BCAYHaralambous \BBA KlyuevHaralambous \BBA Klyuev2011] Haralambous, Y.\BBACOMMA \BBA Klyuev, V. \BBOP2011\BBCP. \BBOQA Semantic Relatedness Measure Based on Combined Encyclopedic, Ontological and Collocational Knowledge\BBCQ  \BemArXiv e-prints.
  • [\BCAYHirst \BBA St-OngeHirst \BBA St-Onge1998] Hirst, G.\BBACOMMA \BBA St-Onge, D. \BBOP1998\BBCP. \BBOQLexical chains as representations of context for the detection and correction of malapropisms\BBCQ  In \BemWordNet: an electronic lexical database, \BPGS 305–332. The MIT Press.
  • [\BCAYHughes \BBA RamageHughes \BBA Ramage2007] Hughes, T.\BBACOMMA \BBA Ramage, D. \BBOP2007\BBCP. \BBOQLexical semantic relatedness with random graph walks\BBCQ  In \BemEMNLP-CoNLL, \BPGS 581–589.
  • [\BCAYIde \BBA VéronisIde \BBA Véronis1998] Ide, N.\BBACOMMA \BBA Véronis, J. \BBOP1998\BBCP. \BBOQIntroduction to the special issue on word sense disambiguation: the state of the art\BBCQ  \BemComput. Linguist., \Bem24, 2–40.
  • [\BCAYJarmaszJarmasz2003] Jarmasz, M. \BBOP2003\BBCP. \BemRoget’s thesaurus as a lexical resource for natural language processing. Master’s thesis, University of Ottawa.
  • [\BCAYJarmasz \BBA SzpakowiczJarmasz \BBA Szpakowicz2003] Jarmasz, M.\BBACOMMA \BBA Szpakowicz, S. \BBOP2003\BBCP. \BBOQS.: Roget’s thesaurus and semantic similarity\BBCQ  In \BemIn: Proceedings of the RANLP-2003, \BPGS 212–219.
  • [\BCAYJiang \BBA ConrathJiang \BBA Conrath1997] Jiang, J.\BBACOMMA \BBA Conrath, D. \BBOP1997\BBCP. \BBOQSemantic similarity based on corpus statistics and lexical taxonomy\BBCQ  \BemCoRR, \Bemcmp-lg/9709008.
  • [\BCAYKirkpatrickKirkpatrick1998] Kirkpatrick, B. \BBOP1998\BBCP. \BemRoget’s thesaurus of English words and phrases; 1998 ed.
  • [\BCAYKoren, Liberty, Maarek, \BBA SandlerKoren et al.2011] Koren, Y., Liberty, E., Maarek, Y., \BBA Sandler, R. \BBOP2011\BBCP. \BBOQAutomatically tagging email by leveraging other users’ folders\BBCQ  In \BemKDD, \BPGS 913–921.
  • [\BCAYLeacock \BBA ChodorowLeacock \BBA Chodorow1998] Leacock, C.\BBACOMMA \BBA Chodorow, M. \BBOP1998\BBCP. \BBOQCombining Local Context and WordNet Similarity for Word Sense Identification\BBCQ  \BemAn Electronic Lexical Database.
  • [\BCAYLeacock, Miller, \BBA ChodorowLeacock et al.1998] Leacock, C., Miller, G., \BBA Chodorow, M. \BBOP1998\BBCP. \BBOQUsing corpus statistics and wordnet relations for sense identification\BBCQ  \BemComput. Linguist., \Bem24, 147–165.
  • [\BCAYLeskLesk1986] Lesk, M. \BBOP1986\BBCP. \BBOQAutomatic sense disambiguation using machine readable dictionaries: how to tell a pine cone from an ice cream cone\BBCQ  In \BemProceedings of the 5th annual international conference on Systems documentation, \BPGS 24–26.
  • [\BCAYLi, Bandar, \BBA McLeanLi et al.2003] Li, Y., Bandar, Z., \BBA McLean, D. \BBOP2003\BBCP. \BBOQAn Approach for Measuring Semantic Similarity between Words Using Multiple Information Sources\BBCQ  \BemIEEE Transactions on Knowledge and Data Engineering.
  • [\BCAYLi, McLean, Bandar, O’Shea, \BBA CrockettLi et al.2006] Li, Y., McLean, D., Bandar, Z., O’Shea, J., \BBA Crockett, K. \BBOP2006\BBCP. \BBOQSentence similarity based on semantic nets and corpus statistics\BBCQ  \BemIEEE Transactions on Knowledge and Data Engineering, \Bem18, 1138–1150.
  • [\BCAYLinLin1998] Lin, D. \BBOP1998\BBCP. \BBOQAn information-theoretic definition of similarity\BBCQ  In \BemICML, \BPGS 296–304.
  • [\BCAYMierswa, Wurst, Klinkenberg, Scholz, \BBA EulerMierswa et al.2006] Mierswa, I., Wurst, M., Klinkenberg, R., Scholz, M., \BBA Euler, T. \BBOP2006\BBCP. \BBOQYale: rapid prototyping for complex data mining tasks\BBCQ  In \BemProceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, KDD, \BPGS 935–940.
  • [\BCAYMihalcea, Corley, \BBA StrapparavaMihalcea et al.2006] Mihalcea, R., Corley, C., \BBA Strapparava, C. \BBOP2006\BBCP. \BBOQCorpus-based and knowledge-based measures of text semantic similarity\BBCQ  In \BemAAAI.
  • [\BCAYMiller \BBA CharlesMiller \BBA Charles1991] Miller, G.\BBACOMMA \BBA Charles, W. \BBOP1991\BBCP. \BBOQContextual correlates of semantic similarity\BBCQ  \BemLanguage and Cognitive Processes, \Bem6(1), 1–28.
  • [\BCAYMiller, Leacock, Tengi, \BBA BunkerMiller et al.1993] Miller, G., Leacock, C., Tengi, R., \BBA Bunker, R. \BBOP1993\BBCP. \BBOQA semantic concordance\BBCQ  In \BemProceedings of the workshop on Human Language Technology, HLT, \BPGS 303–308.
  • [\BCAYMilne \BBA WittenMilne \BBA Witten2008] Milne, D.\BBACOMMA \BBA Witten, I. \BBOP2008\BBCP. \BBOQAn effective, low-cost measure of semantic relatedness obtained from wikipedia links\BBCQ  In \BemWikipedia and AI: An Evolving Synergy.
  • [\BCAYMorris \BBA HirstMorris \BBA Hirst1991] Morris, J.\BBACOMMA \BBA Hirst, G. \BBOP1991\BBCP. \BBOQLexical cohesion computed by thesaural relations as an indicator of the structure of text\BBCQ  \BemComput. Linguist., \Bem17, 21–48.
  • [\BCAYNavigli \BBA LapataNavigli \BBA Lapata2010] Navigli, R.\BBACOMMA \BBA Lapata, M. \BBOP2010\BBCP. \BBOQAn experimental study of graph connectivity for unsupervised word sense disambiguation\BBCQ  \BemIEEE Transactions on Pattern Analysis and Machine Intelligence, \Bem32, 678–692.
  • [\BCAYPage, Brin, Motwani, \BBA WinogradPage et al.1999] Page, L., Brin, S., Motwani, R., \BBA Winograd, T. \BBOP1999\BBCP. \BBOQThe pagerank citation ranking: Bringing order to the web.\BBCQ  Technical report 1999-66, Stanford InfoLab. Previous number = SIDL-WP-1999-0120.
  • [\BCAYPatwardhan \BBA PedersenPatwardhan \BBA Pedersen2006] Patwardhan, S.\BBACOMMA \BBA Pedersen, T. \BBOP2006\BBCP. \BBOQUsing wordnet based context vectors to estimate the semantic relatedness of concepts\BBCQ  In \BemProceedings of the EACL 2006 Workshop Making Sense of Sense - Bringing Computational Linguistics and Psycholinguistics Together, \BPGS 1–8.
  • [\BCAYPedersen, Patwardhan, \BBA MichelizziPedersen et al.2004] Pedersen, T., Patwardhan, S., \BBA Michelizzi, J. \BBOP2004\BBCP. \BBOQWordnet: Similarity - measuring the relatedness of concepts\BBCQ  In \BemAAAI, \BPGS 1024–1025.
  • [\BCAYPonzetto \BBA StrubePonzetto \BBA Strube2007] Ponzetto, S. P.\BBACOMMA \BBA Strube, M. \BBOP2007\BBCP. \BBOQKnowledge derived from wikipedia for computing semantic relatedness\BBCQ  \BemJAIR, \Bem30, 181–212.
  • [\BCAYRadinsky, Agichtein, Gabrilovich, \BBA MarkovitchRadinsky et al.2011] Radinsky, K., Agichtein, E., Gabrilovich, E., \BBA Markovitch, S. \BBOP2011\BBCP. \BBOQA word at a time: computing word relatedness using temporal semantic analysis\BBCQ  In \BemWWW, \BPGS 337–346.
  • [\BCAYRadinsky \BBA AilonRadinsky \BBA Ailon2011] Radinsky, K.\BBACOMMA \BBA Ailon, N. \BBOP2011\BBCP. \BBOQRanking from pairs and triplets: information quality, evaluation methods and query complexity\BBCQ  In \BemWeb Search and Data Mining.
  • [\BCAYRecchia \BBA JonesRecchia \BBA Jones2009] Recchia, G.\BBACOMMA \BBA Jones, M. \BBOP2009\BBCP. \BBOQMore data trumps smarter algorithms: Comparing pointwise mutual information with latent semantic analysis\BBCQ  \BemBehavior Research Methods.
  • [\BCAYReisinger \BBA MooneyReisinger \BBA Mooney2010] Reisinger, J.\BBACOMMA \BBA Mooney, R. \BBOP2010\BBCP. \BBOQMulti-prototype vector-space models of word meaning\BBCQ  In \BemNAACL), \BPGS 109–117.
  • [\BCAYResnikResnik1995] Resnik, P. \BBOP1995\BBCP. \BBOQUsing information content to evaluate semantic similarity in a taxonomy\BBCQ  In \BemIJCAI, \BPGS 448–453.
  • [\BCAYResnikResnik1999] Resnik, P. \BBOP1999\BBCP. \BBOQSemantic similarity in a taxonomy: An information-based measure and its application to problems of ambiguity in natural language\BBCQ  \BemJournal of Artificial Intelligence Research, \Bem11, 95–130.
  • [\BCAYRibeiro-Neto, Cristo, Golgher, \BBA Silva de MouraRibeiro-Neto et al.2005] Ribeiro-Neto, B., Cristo, M., Golgher, P., \BBA Silva de Moura, E. \BBOP2005\BBCP. \BBOQImpedance coupling in content-targeted advertising\BBCQ. SIGIR, \BPGS 496–503.
  • [\BCAYRichardson \BBA SmeatonRichardson \BBA Smeaton1995] Richardson, R.\BBACOMMA \BBA Smeaton, A. \BBOP1995\BBCP. \BBOQUsing wordnet in a knowledge-based approach to information retrieval\BBCQ  \BTR.
  • [\BCAYRogetRoget1852] Roget, P. \BBOP1852\BBCP. \BBOQRoget’s thesaurus of english words and phrases\BBCQ  \BemLongman Group Ltd.
  • [\BCAYRubenstein \BBA GoodenoughRubenstein \BBA Goodenough1965] Rubenstein, H.\BBACOMMA \BBA Goodenough, J. \BBOP1965\BBCP. \BBOQContextual correlates of synonymy\BBCQ  \BemACM, \Bem8, 627–633.
  • [\BCAYSalton \BBA BuckleySalton \BBA Buckley1988] Salton, G.\BBACOMMA \BBA Buckley, C. \BBOP1988\BBCP. \BBOQTerm-weighting approaches in automatic text retrieval\BBCQ  \BemInformation Processing & Management, \Bem24(5), 513 – 523.
  • [\BCAYSalton \BBA McGillSalton \BBA McGill1983] Salton, G.\BBACOMMA \BBA McGill, M. \BBOP1983\BBCP. \BemIntroduction to Modern Information Retrieval.
  • [\BCAYSchilder \BBA HabelSchilder \BBA Habel2001] Schilder, F.\BBACOMMA \BBA Habel, C. \BBOP2001\BBCP. \BBOQFrom temporal expressions to temporal information: semantic tagging of news messages\BBCQ  In \BemProceedings of the workshop on Temporal and spatial information processing - Volume 13, TASIP, \BPGS 1–8.
  • [\BCAYSchützeSchütze1998] Schütze, H. \BBOP1998\BBCP. \BBOQAutomatic word sense discrimination\BBCQ  \BemComput. Linguist., 97–123.
  • [\BCAYSebastianiSebastiani2002] Sebastiani, F. \BBOP2002\BBCP. \BBOQMachine learning in automated text categorization\BBCQ  \BemACM Comput. Surv., \Bem34, 1–47.
  • [\BCAYSrihari, Zhang, \BBA RaoSrihari et al.2000] Srihari, R., Zhang, Z., \BBA Rao, A. \BBOP2000\BBCP. \BBOQIntelligent Indexing and Semantic Retrieval of Multimodal Documents\BBCQ  \BemInformation Retrieval, \Bem2, 245–275.
  • [\BCAYSriram, Fuhry, Demir, Ferhatosmanoglu, \BBA DemirbasSriram et al.2010] Sriram, B., Fuhry, D., Demir, E., Ferhatosmanoglu, H., \BBA Demirbas, M. \BBOP2010\BBCP. \BBOQShort text classification in twitter to improve information filtering\BBCQ  In \BemProceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval, SIGIR, \BPGS 841–842.
  • [\BCAYStrube \BBA PonzettoStrube \BBA Ponzetto2006] Strube, M.\BBACOMMA \BBA Ponzetto, S. \BBOP2006\BBCP. \BBOQWikiRelate! Computing semantic relatedness using Wikipedia\BBCQ  In \BemAAAI.
  • [\BCAYSun, Wang, \BBA YuSun et al.2011] Sun, X., Wang, H., \BBA Yu, Y. \BBOP2011\BBCP. \BBOQTowards effective short text deep classification\BBCQ  In \BemProceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval, SIGIR, \BPGS 1143–1144.
  • [\BCAYTerra \BBA ClarkeTerra \BBA Clarke2003] Terra, E.\BBACOMMA \BBA Clarke, C. \BBOP2003\BBCP. \BBOQFrequency estimates for statistical word similarity measures\BBCQ  In \BemNAACL, \BPGS 165–172.
  • [\BCAYTsatsaronis, Varlamis, \BBA VazirgiannisTsatsaronis et al.2010] Tsatsaronis, G., Varlamis, I., \BBA Vazirgiannis, M. \BBOP2010\BBCP. \BBOQText relatedness based on a word thesaurus\BBCQ  \BemJournal of Artificial Intelligence Research, \Bem37, 1–39.
  • [\BCAYTsatsaronis, Varlamis, Vazirgiannis, \BBA NørvågTsatsaronis et al.2009] Tsatsaronis, G., Varlamis, I., Vazirgiannis, M., \BBA Nørvåg, K. \BBOP2009\BBCP. \BBOQOmiotis: A thesaurus-based measure of text relatedness\BBCQ  In \BemProceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases: Part II, ECML PKDD, \BPGS 742–745.
  • [\BCAYTurneyTurney2002] Turney, P. \BBOP2002\BBCP. \BBOQThumbs up or thumbs down? semantic orientation applied to unsupervised classification of reviews\BBCQ  In \BemACL, \BPGS 417–424.
  • [\BCAYVapnikVapnik1995] Vapnik, V. \BBOP1995\BBCP. \BemThe Nature of Statistical Learning Theory. Springer-Verlag, New York.
  • [\BCAYVapnikVapnik1998] Vapnik, V. \BBOP1998\BBCP. \BemStatistical Learning Theory. Wiley Interscience, New York.
  • [\BCAYVarelas, Voutsakis, Raftopoulou, Petrakis, \BBA MiliosVarelas et al.2005] Varelas, G., Voutsakis, E., Raftopoulou, P., Petrakis, E., \BBA Milios, E. \BBOP2005\BBCP. \BBOQSemantic similarity methods in wordnet and their application to information retrieval on the web\BBCQ  In \BemACM international workshop on Web information and data management, WIDM, \BPGS 10–16.
  • [\BCAYWu \BBA PalmerWu \BBA Palmer1994] Wu, Z.\BBACOMMA \BBA Palmer, M. \BBOP1994\BBCP. \BBOQVerb semantics and lexical selection\BBCQ  In \BemACL, \BPGS 133–138.
  • [\BCAYYarowskyYarowsky1995] Yarowsky, D. \BBOP1995\BBCP. \BBOQUnsupervised word sense disambiguation rivaling supervised methods\BBCQ  In \BemProceedings of the 33rd annual meeting on Association for Computational Linguistics, ACL, \BPGS 189–196.
  • [\BCAYYeh, Ramage, Manning, Agirre, \BBA SoroaYeh et al.2009] Yeh, E., Ramage, D., Manning, C., Agirre, E., \BBA Soroa, A. \BBOP2009\BBCP. \BBOQWikiwalk: random walks on wikipedia for semantic relatedness\BBCQ  In \BemWorkshop on Graph-based Methods for Natural Language Processing, \BPGS 41–49.

Appendix A. The VC-dimension of an unrestricted hypothesis class

Lemma 1.



We say that if

where the comparison is the lexicographic order. Thus, this relation induces a complete order over term pairs, because the lexicographic order itself induces a complete order. Also, given a quadruple, we define its inverse preference,

Recall the definition of (1), and let , be the set of quadruples satisfying (it is not hard to see that is unique),

As , we have that either


It follows that

It is easy to see that this holds because in each of the two term pairs the order between the different terms is fixed, and the order between the different pairs themselves is fixed as well.
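As a concrete illustration of the ordering used here, the following minimal Python sketch (our own illustrative notation — `pair_key` and `precedes` are not names from the text) shows how canonicalizing each term pair and then comparing the results lexicographically fixes both the order within each pair and the order between pairs:

```python
def pair_key(pair):
    """Canonical key for an unordered term pair: place the two terms
    in a fixed (sorted) order, so each pair has a unique representative."""
    a, b = pair
    return (min(a, b), max(a, b))

def precedes(pq, rs):
    """True iff pair pq strictly precedes pair rs under the order
    induced by lexicographic comparison of canonical keys."""
    return pair_key(pq) < pair_key(rs)
```

For example, `precedes(("cat", "dog"), ("dog", "mouse"))` holds, and for any two pairs with distinct canonical keys exactly one direction of `precedes` holds — which is what makes the induced order over term pairs complete.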

Consider any , and set . Now define

as follows


To show that the hypothesis defined above is proper, satisfying condition (2), we consider the following mutually exclusive cases.
Case A: . In this case we have,

Case B: . We now have,

Case C: Now and ,

Case D: In this case and ,