Learning to Rank Query Recommendations by Semantic Similarity


Abstract

The web logs of people's interactions with a search engine show that users often reformulate their queries. Examining these reformulations shows that recommendations that sharpen the focus of a query are helpful, like those based on expansions of the original query. But it also shows that queries expressing some topical shift with respect to the original query can help users reach the information they need more rapidly.

We propose a method to identify, from search engine query logs, candidate queries that can be recommended either to focus or to shift a topic. The method combines click-based, topic-based and session-based ranking strategies and uses supervised learning to maximize the semantic similarity between the query and the recommendations, while at the same time diversifying them.

We evaluate our method using the query and click logs of a Japanese web search engine and show that the combination of the three proposed methods is significantly better than any of them taken individually.



Sumio Fujita
Yahoo! JAPAN Research
Midtown Tower, Akasaka
Tokyo 107-6211, Japan

sufujita@yahoo-corp.jp
Georges Dupret
Yahoo! Labs
701 First Avenue, Sunnyvale
CA, 94089-0703, USA

gdupret@yahoo-inc.com
Ricardo Baeza-Yates
Yahoo! Research
Diagonal 177, 9th floor
Barcelona, Spain

rbaeza@acm.org



Categories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval-Search Process; H.3.5 [Information Storage and Retrieval]: Online Information Services-Web based services

General Terms: Algorithms, Experimentation, Performance

Keywords: Web search, query logs, click logs, query recommendation

Problem Statement

Information retrieval is often an interactive process in which a user successively refines his or her original search query, switches focus, and approaches the goal in several steps. Assisting users in this process makes it less cumbersome. Query suggestions are particularly useful on mobile devices and for Asian languages with complex character sets, where typing queries is especially inconvenient and time consuming.

Query recommendation engines should not limit themselves to proposing more focused queries, but should also suggest queries that represent a reasonable switch in focus. This is confirmed by examining search engine query log data. For example, the most frequent queries after “toyota” are “honda”, “nissan” and “lexus”, none of which is a direct refinement of the original query. As another example, the most frequent query after “driver's license renewal” is “minor violations of traffic laws”, violations which may prevent drivers from renewing their driver's license.

Search engines sometimes suggest queries with additional modifiers, focusing on a particular aspect of the previous query. According to Jansen et al. [13], queries that initiate a new session are followed in 31% of cases by query reformulations of the type ‘specialization’ or ‘specialization with reformulation’. Such drill-down operations are not necessarily observed more frequently than topic shifting. Topic shifting occurs especially when users engage in complex tasks, like researching a new vehicle and comparing competing candidate models, or looking for information on how to renew a driver's license including ancillary tasks, such as discovering office hours, finding the required forms, the office address, etc. Boldi et al. [5] pointed out that the typically useful recommendations are either specializations or topic shifts, which they refer to as “parallel moves”.

Unlike pre-retrieval query suggestion, which frequently proposes automatic query completion right in the query box, query recommendation provides semantically related queries and excludes trivially synonymous ones, since state-of-the-art commercial search engines are good enough to cover minor spelling variations and even some misspellings. Nevertheless, diversifying query recommendations helps for polysemous queries.

Methodology

Query recommendations are often based on clustering methods, with the inconvenience that queries falling in the same cluster are sometimes more ambiguous and less helpful than the original query. Instead, we formulate in this work three distinct methods for extracting query recommendations from a search engine's click-through logs. These methods induce directed links between queries existing in the logs and hence have the potential to overcome the limitations of clustering methods. The first method is based on the positions of the clicked URLs in the search rankings of the original query and of its potential recommendations. The second is based on reformulations of the original query that can easily be detected in the logs using the query surface forms. Users reformulate queries for a variety of reasons: because the original formulation is too ambiguous or carries other meanings they did not intend, or because the results returned by the engine are not adequate. The third method is also based on query reformulations, but relies on the co-occurrence of queries within sessions. We show that each method has its own advantages and drawbacks. The first method sometimes leads to recommendations that are more difficult to understand because it tends to include Web jargon, but it is sometimes more useful than the simple reformulation method because it leverages the topical knowledge of other users. By construction, the second method rarely drifts from the original search topic and tends to be limited to specializations of the original query. This results in safer recommendations with less coverage. Variants of this method are used by many commercial search engines because it is safer and more predictable. The last method is better suited for shifting topics because it provides more diverse recommendations, such as parallel move reformulations [5]. On the other hand, trivial variants or completely unrelated queries are not useful. Since each method has distinct capabilities and shortcomings, it is worthwhile to develop a method that chooses the best candidates and offers the user an improved set of recommendations. This is the objective of this work.

Assumptions

Since most useful recommendations are either specializations or parallel moves, it is better to use distinct methods to cover both types. It is also necessary to exclude trivial synonyms and unrelated queries. We make the following assumption: in the semantic hierarchy of information needs, with the original user query at the center, generalization queries reside in the upper part of the hierarchy and specialization queries in the lower part. The neighbouring queries in the semantic hierarchy are generally useful. In order to identify such queries, we combine the recommendation candidates from the three methods and learn a ranking function according to the semantic similarities reflected in the topological relations of the semantic hierarchy. We schematize these relations in Figure 1, where queries that are too close are not useful as recommendations. On the other hand, both specializations and parallel moves are useful to help the searcher with drill-down or shift operations, respectively.

Figure 1: Schematic view of semantic relations of related queries.

Contribution

The problem we address in this paper is how to combine candidate recommendations with different characteristics in order to diversify them. One possible solution would be to infer the intention of the user: does she or he intend to drill down into the topic, or to quit the current subtopic and move to the next one? This would undoubtedly be a very hard task. A priori, any query may be followed by the user drilling down for more precise information or shifting the intention. This depends, among other things, on the quality of the results the user finds on the result page. User decisions and consecutive search actions are not only query dependent but also user and context dependent. Instead of attempting to predict the user's state of mind, we propose to minimize the risk of dissatisfying the user by carefully proposing a variety of recommendations. Expressed in the terms of our prior assumption, we try to maximize the semantic similarity between the original query and the recommended queries by combining different types of recommendations, making them more diverse. We do not use lexical features such as semantic categories of query terms, since such lexical knowledge usually has fairly limited coverage.

Organization

We first review related work that makes use of query and click logs. We then present the methods used to extract the different types of recommendations and show empirically that the session-based method is good at identifying shifting queries, whereas the other two methods favor focused, faceted queries. Next, we explain how we combine these three methods to maximize the semantic similarity measure and describe the supervised learning algorithm we use. Finally, we report the results of an empirical study based on the click logs of a popular Japanese search engine and conclude the paper.

Click Log Analysis

Click logs typically contain information such as the search query string, time stamp, browser identifier, clicked URLs, and rank positions. Although correctly interpreting clicks is not straightforward [8, 14], click information is often used as implicit feedback on URL relevance.

Beeferman and Berger studied Web search query logs and clustered click-through data by iterating two steps: (a) combining the two most similar queries and (b) combining the two most similar URLs [4]. The generated clusters were used to enhance query recommendations. Baeza-Yates et al. proposed a query recommendation technique using query clustering based on the similarity of clicked URLs [3]. Dupret and Mendoza also addressed query recommendation using click-through data but focused on document ranking [9]. Xue et al. [18] used click-through data to create metadata for Web pages. They estimated document-to-document similarities on the basis of co-clicked queries, with query strings used as metadata or tags. These estimates were then spread over similar documents. Craswell and Szummer, who used click-through data for image retrieval, experimented with backward random walks [7]. Their method is based on query-to-document transition probabilities on a click graph. Baeza-Yates and Tiberi extracted semantic relations between queries on the basis of set relations between clicked URLs [2]. Antonellis et al. proposed the Simrank++ method [1], in which query similarity is propagated through click bipartite graphs. They used the query similarity measure to rewrite queries in order to extend advertisement matching. Such a measure of structural-context similarity might be adequate for tasks such as query rewriting for sponsored search, where rewriting to a practically synonymous query is effective.

Term Expansion Based

Jones et al. extracted query substitutions from user sessions by identifying correlated term pairs and substituting phrases [15]. Their work addressed query rewrites in sponsored search contexts, where a “precise rewriting” such as “automobile insurance” into “automotive insurance” is mostly preferred. However, current state-of-the-art search engines return very similar results for these two queries.

Query Session Based

Spink et al. surveyed information related to successive Web searches [16] and found that it involves changes and shifts in search terms, search strategies, and relevance judgments. Jansen et al. analyzed successive queries in large Web search query logs [13], and He et al. tried to detect session boundaries on the basis of search patterns and time intervals in query logs [12]. Fonseca et al. extracted query relations by applying association rules to user sessions [10]. Boldi et al. analyzed search user sessions, classified query reformulation types [5], and derived query-flow graphs for the extracted query recommendations. They pointed out that the typically useful recommendations are either specializations or parallel moves, while trivial variants or completely unrelated queries are not useful. Cao et al. applied click-based clustering to session-based query suggestion [6] and claim that context awareness helps to better understand the user's search intent and to make more meaningful suggestions. However, they do not adequately evaluate whether context awareness really improves suggestion utility, due to the lack of an adequate baseline.

In this work we focus on the generation of query recommendations through the use of inter-query relations in Web search logs. As we have seen in the previous section, log-based query recommendation techniques fall into one of three approaches, namely click-based, term-expansion-based and session-based. Each approach captures patterns of different user activities in the query logs. Queries are related by a co-click relation when users click on the same URL in response to them. Queries are also related to their possible expansions obtained by adding facet modifiers, i.e. co-topic relations. Finally, queries are related by their co-occurrence in a user session.

The following three methods extract these three types of inter-query relations, each representing a user behavior in the logs: either a specialization/refinement of the information need or a parallel move from the original search intent. The methods are simple, but they are designed to extract candidates thoroughly, so that the candidates can be adequately combined and re-ranked by a supervised learning algorithm that maximizes the semantic similarity measure.

Best Rank Co-Click Queries

This method compares the positions, in the search results, of the documents clicked during a query session. If a query q' different from query q orders the clicked documents better (according to a suitable measure) in a significant number of sessions of q, then q' is a candidate query for recommendation. There is a fundamental reason for considering the clicked document rankings rather than the simple similarity of clicked page sets. Take for example the multi-faceted query “curry”. The documents that a user selects can help identify a posteriori his or her information need: a user interested in how to cook curry will select pages related to cooking rather than those related to the origin of curry in Indian culinary history. The assumption is that savvier users with the same information need will probably express the query less ambiguously and enter, for example, “curry recipe”. The hypothesis we wish to investigate here is whether documents clicked by a previous user are ranked higher in the “curry recipe” results than in the “curry” results. If they are, we can retrieve the “curry recipe” query from the logs and recommend it.

More formally, suppose that u is a clicked URL (we use “document”, “page”, and “URL” interchangeably) in the results for query q. For each such clicked URL u, we assume the existence of a set of queries for which URL u is ranked higher in the results; this set might be empty. We hypothesize that such queries are potential recommendations for q.

We first define the URL cover U(q) of a query q as the set of URLs clicked in response to q, and the query cover Q(u) of a URL u as the set of queries for which u is clicked. We define r(u, q) as the rank position of URL u in the results for query q. The set of best rank co-click queries for query q, BRCCQ(q), is as follows:

$$\mathrm{BRCCQ}(q) = \{\, q' \in Q(u) \mid u \in U(q),\; r(u, q') < r(u, q) \,\}$$

We estimate the strength of the relation between a query and its candidate recommendations with the following weighting scheme. We define n(q, u) as the number of clicks on u in response to query q, n(q) as the total number of clicks in response to query q, n(u) as the total number of clicks on u regardless of the query, and $\mathcal{Q}$ as the set of all queries. We define the probability $P_{coclick}(q' \mid q)$ as follows:

$$P_{coclick}(q' \mid q) = \sum_{u \in U(q)} P(q' \mid u)\, P(u \mid q)$$

with

$$P(u \mid q) = \frac{n(q, u)}{n(q)}, \qquad P(q' \mid u) = \frac{n(q', u)}{n(u)}.$$

This approach can be regarded as a special case of the session-based recommendation proposed by Dupret and Mendoza [9], in which each single click is considered a single session. It is clearly distinct from query clustering methods because it explicitly uses the positions of the documents in the result lists.
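The following sketch illustrates, under simplifying assumptions, how BRCCQ(q) and $P_{coclick}$ can be computed from an aggregated click log. The log records and query strings are hypothetical, and the log is assumed to fit in memory; a production implementation would aggregate these counts with a distributed job.

```python
from collections import defaultdict

# Hypothetical click-log records: (query, url, rank_of_url, click_count).
log = [
    ("curry", "cooking.example/recipe", 4, 120),
    ("curry recipe", "cooking.example/recipe", 1, 300),
    ("curry", "history.example/curry", 2, 80),
]

clicks = defaultdict(int)       # n(q, u): clicks on u in response to q
rank = {}                       # r(u, q): rank of u in the results of q
url_cover = defaultdict(set)    # U(q): URLs clicked in response to q
query_cover = defaultdict(set)  # Q(u): queries for which u was clicked

for q, u, r, n in log:
    clicks[(q, u)] += n
    rank[(u, q)] = r
    url_cover[q].add(u)
    query_cover[u].add(q)

def brccq(q):
    """Best rank co-click queries: queries ranking a co-clicked URL higher."""
    return {q2 for u in url_cover[q] for q2 in query_cover[u]
            if q2 != q and rank[(u, q2)] < rank[(u, q)]}

def p_coclick(q2, q):
    """P(q2|q) = sum over u in U(q) of P(q2|u) * P(u|q)."""
    n_q = sum(clicks[(q, u)] for u in url_cover[q])
    total = 0.0
    for u in url_cover[q]:
        n_u = sum(clicks[(q3, u)] for q3 in query_cover[u])
        total += (clicks[(q2, u)] / n_u) * (clicks[(q, u)] / n_q)
    return total

print(brccq("curry"))                      # {'curry recipe'}
print(p_coclick("curry recipe", "curry"))  # ~0.43 on this toy log
```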

Co-topic Queries

Commercial search engines commonly use expansions of the input query string found in the logs as recommendations. Here, we introduce a variation that takes advantage of a characteristic of the Japanese language. The agglutinative nature of Japanese makes it comparatively easy to detect the topic-facet structure of queries. In practice, a facet directive in Japanese is easily identified as a word that appears as the last term of a significant number of distinct queries. In our experiments, if a word is the last term of at least five distinct query strings, it can safely be regarded as a facet word, as long as queries appearing fewer than ten times are eliminated from the logs. Thus, from the topic-part-only query “curry”, we may induce “curry recipe”, “curry restaurant”, and other queries with different directives.

We define a co-topic query as a query expanded by the addition of a facet directive. As for co-click relations, we define a weighting scheme that captures the strength of the relation between the original query q and a co-topic recommendation q'. We first define CTQ(q) as the set of co-topic queries formed over q; the similarity is then expressed as:

$$P_{cotopic}(q' \mid q) = \frac{n(q')}{\sum_{q'' \in CTQ(q)} n(q'')}$$

      This relation normally represents a specialization of the original concept by adding a facet directive which restrictively modifies the original concept.
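A minimal sketch of the co-topic extraction follows. It assumes whitespace-segmented queries for readability (the paper works on segmented Japanese text), toy thresholds, and a plausible frequency-based normalization for $P_{cotopic}$; the query strings and counts are hypothetical.

```python
from collections import Counter, defaultdict

# Hypothetical log of (query, frequency); queries are whitespace-chunked here.
query_freq = Counter({
    "curry": 500, "curry recipe": 300, "curry restaurant": 120,
    "udon recipe": 90, "ramen recipe": 60, "tokyo restaurant": 40,
})

MIN_QUERY_FREQ = 10  # queries rarer than this are dropped from the logs
MIN_DISTINCT = 2     # the paper uses 5 distinct queries; lowered for toy data

# A facet word is the last term of many distinct, sufficiently frequent queries.
last_term_queries = defaultdict(set)
for q, f in query_freq.items():
    terms = q.split()
    if f >= MIN_QUERY_FREQ and len(terms) > 1:
        last_term_queries[terms[-1]].add(q)

facets = {w for w, qs in last_term_queries.items() if len(qs) >= MIN_DISTINCT}

def ctq(q):
    """Co-topic queries: q expanded with a facet directive."""
    return {q2 for q2 in query_freq
            if q2.startswith(q + " ") and q2.split()[-1] in facets}

def p_cotopic(q2, q):
    """Frequency of q2 among all co-topic expansions of q (one plausible
    normalization by co-topic frequency mass)."""
    cands = ctq(q)
    return query_freq[q2] / sum(query_freq[c] for c in cands) if q2 in cands else 0.0

print(ctq("curry"))                        # {'curry recipe', 'curry restaurant'}
print(p_cotopic("curry recipe", "curry"))  # 300 / 420
```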

Co-session Queries

This last method identifies the query reformulations observed a significant number of times in user sessions. Co-session queries are queries submitted consecutively by the same user within a time interval typically no longer than 5 minutes. Co-session queries include not only reformulations or rewritings of queries, as in the co-topic relation, but also queries that reflect a shift in information needs. (A more complete nomenclature of the relations extracted this way can be found in [5].)

We define CSQ(q) as the set of queries sharing a co-session relation with q. The strength of the co-session relation between q and q' is estimated as a probability:

$$P_{cosession}(q' \mid q) = \frac{f(q, q')}{\sum_{q'' \in CSQ(q)} f(q, q'')}$$

where f(q, q') denotes the number of times query q' is preceded by query q in the same user session.

This method is relatively robust to mistakes in the segmentation of user activities into sessions: if q and q' do not actually belong to the same session, f(q, q') will be small, leading to a relation with low strength.
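A minimal sketch of the co-session counting, assuming a hypothetical log of (user, timestamp, query) records already sorted by user and time, and using the 5-minute gap mentioned above as the session boundary.

```python
from collections import defaultdict
from datetime import datetime, timedelta

SESSION_GAP = timedelta(minutes=5)

# Hypothetical log: (user_id, timestamp, query), sorted by user and time.
log = [
    ("u1", datetime(2011, 5, 1, 10, 0), "ana"),
    ("u1", datetime(2011, 5, 1, 10, 2), "jal"),
    ("u1", datetime(2011, 5, 1, 10, 3), "skymark"),
    ("u1", datetime(2011, 5, 1, 11, 0), "ana"),   # new session: gap > 5 min
    ("u1", datetime(2011, 5, 1, 11, 1), "jtb"),
    ("u2", datetime(2011, 5, 1, 10, 0), "ana"),
    ("u2", datetime(2011, 5, 1, 10, 1), "jal"),
]

follow = defaultdict(int)  # f(q, q2): q2 directly follows q in a session
prev = {}
for user, ts, q in log:
    p = prev.get(user)
    if p is not None and ts - p[1] <= SESSION_GAP and q != p[0]:
        follow[(p[0], q)] += 1
    prev[user] = (q, ts)

def p_cosession(q2, q):
    """Share of in-session transitions out of q that go to q2."""
    total = sum(n for (a, _), n in follow.items() if a == q)
    return follow[(q, q2)] / total if total else 0.0

print(p_cosession("jal", "ana"))   # 2 / 3 on this toy log
```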

Semantic Similarity Measure

It is not straightforward to assess the quality of query recommendations. To evaluate the three methods presented above, we use the semantic similarity of the queries after they are mapped into a category hierarchy. We adopt the similarity measure between query pairs of Baeza-Yates and Tiberi [2], who evaluated semantic relations between queries connected by an edge of their click cover graph. For this purpose, they use the Open Directory Project (http://www.dmoz.org/), where queries are matched against the directory content to find the categories to which they belong. We apply the same methodology but using the Yahoo! JAPAN directory (http://dir.yahoo.co.jp/) because it has a more complete coverage of Japanese queries.

Baeza-Yates and Tiberi use the following similarity function on the categories matching two queries q and q':

$$Sim(q, q') = \frac{|LCP(c(q), c(q'))|}{\max(|c(q)|, |c(q')|)}$$

where LCP(c(q), c(q')) is the longest common prefix of the category paths c(q) and c(q') in which the queries q and q' were found, respectively, and |·| counts path components. This is intuitively reasonable: consider for example the query “Spain”. The query term is found in “Regional / Countries / Spain”, while “Barcelona” is found in “Regional / Countries / Spain / Autonomous Communities / Catalonia / Cities / Barcelona”. The longest common prefix has three components and the longer path has seven, so the similarity is 3/7.

However, we needed to make an adjustment because in the Yahoo! directory a subcategory like “Spain” might appear below diverse top categories such as “Maps / By region / Countries”, “Arts / By region / Countries”, or “Recreation / Travel / By region / Countries”. We therefore use the following similarity function:

$$Sim'(q, q') = \frac{m(c(q), c(q'))}{\max(|c(q)|, |c(q')|)}$$

where m(c(q), c(q')) is the number of common subparts of the two category paths that match the queries. The previous similarity function measures the ratio of the hyper-concepts that the two categories share, whereas this new function considers the facet similarity of subcategories.
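Both similarity functions are easy to state in code. The sketch below treats a category path as a list of components and reproduces the “Spain”/“Barcelona” example; the second function reflects one reading of the adjusted measure, counting common path components regardless of their position.

```python
def sim_prefix(path_a, path_b):
    """Baeza-Yates and Tiberi: |longest common prefix| / max path length."""
    lcp = 0
    for x, y in zip(path_a, path_b):
        if x != y:
            break
        lcp += 1
    return lcp / max(len(path_a), len(path_b))

def sim_common(path_a, path_b):
    """Adjusted variant: common subparts anywhere in the paths,
    not only in a shared prefix (one reading of the adjustment above)."""
    common = len(set(path_a) & set(path_b))
    return common / max(len(path_a), len(path_b))

spain = ["Regional", "Countries", "Spain"]
barcelona = ["Regional", "Countries", "Spain", "Autonomous Communities",
             "Catalonia", "Cities", "Barcelona"]
print(sim_prefix(spain, barcelona))  # 3/7 ~= 0.43
print(sim_common(spain, barcelona))  # also 3/7 here; differs when top categories vary
```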

To associate Yahoo! categories with each query, we used the directory search application programming interface (API), which returns a list of categorized sites retrieved by “AND” boolean queries. This presumably favors co-topic relations over co-click relations, because registered sites retrieved by the expanded query are also retrieved by the original query due to the “AND” operation. Each retrieved categorized site votes for its category, and the category with the maximum number of votes is assigned to the query. Inter-query similarity depends on the similarities of category pairs, and the maximum similarity over all category pairs is selected as the final score.

For query recommendation, queries that are virtually the same are useless, so we excluded queries falling into groups of trivial variants. Queries were grouped according to their clicked URL sets by an online single-pass clustering over a vectorial representation of each URL set, where each component is the click frequency of a URL in response to the query.
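A minimal sketch of such a single-pass grouping over click-frequency vectors; the similarity threshold and the click vectors are hypothetical, and the centroid update is one simple choice among several.

```python
import math

THRESHOLD = 0.9  # hypothetical cut-off for "virtually the same" queries

def cosine(a, b):
    dot = sum(a[u] * b.get(u, 0) for u in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def single_pass(queries, click_vectors):
    """Online single-pass clustering: each query joins the first cluster
    whose centroid is similar enough, otherwise it starts a new cluster."""
    clusters = []  # list of (centroid, members)
    for q in queries:
        v = click_vectors[q]
        for centroid, members in clusters:
            if cosine(v, centroid) >= THRESHOLD:
                members.append(q)
                for u, f in v.items():           # accumulate into the centroid
                    centroid[u] = centroid.get(u, 0) + f
                break
        else:
            clusters.append((dict(v), [q]))
    return [members for _, members in clusters]

vectors = {
    "yafoo": {"yahoo.co.jp": 50, "news.yahoo.co.jp": 5},
    "yahoo": {"yahoo.co.jp": 900, "news.yahoo.co.jp": 100},
    "curry": {"cooking.example/recipe": 40},
}
print(single_pass(["yahoo", "yafoo", "curry"], vectors))
# [['yahoo', 'yafoo'], ['curry']]
```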

Combining the Recommendation Methods

Identifying the user intention from contextual information is a very difficult task and is not guaranteed to be effective. Instead, we take a more conservative approach and combine the three methods described above. We attempt to take advantage of each method's strengths but also to hedge against bad recommendations by providing some conservative specialization queries, some serendipitous queries, and some “topic shifting” queries. In other words, we diversify the set of recommended queries.

We formulate the problem as a “learning to rank” task that uses the similarity measure defined in the previous section. We use gradient boosting decision trees (GBDT), described in [11], because of this method's robustness to overfitting, its scalability, and its ability to handle highly non-linear problems.

Training Data

For the training and test pairs, we calculated the similarity measure described in the previous section as the target attribute. We cleaned the data and added random query pairs to augment the number of negative examples and balance the training set. The details are given in the experiments below.

Feature Set

We defined the quantities $P_{coclick}$, $P_{cotopic}$ and $P_{cosession}$ above. On top of these, we defined 24 features, as described in Table 1. Facet extraction features are extracted from the query logs. We adopted the query textual features used in [15]. Cosine similarities are computed on bags of character bigrams and on chunks, i.e. contiguous character strings split by white space. With the result click features, we measure how multi-faceted the queries are with respect to user behavior on the result sets. Click entropy is used to reflect query ambiguity, as in Teevan et al. [17]. For query session co-occurrence, we derive features from pairs of queries directly following one another in a user session. The log likelihood ratio (LLR) is adopted from Jones et al. [15]. We introduced it to identify significant query pairs in sessions: a high LLR value means a strong dependency between two adjacent queries in a session.

Facet extraction features
P_coclick(q'|q): co-click query probability
P_cotopic(q'|q): co-topic query probability
P_cosession(q'|q): co-session query probability
Click frequency of q
Click frequency of q'
Total topic frequency of q
Query textual features
Character length of q
Character length of q'
Chunk length of q
Chunk length of q'
Levenshtein distance between q and q' on a multi-byte character basis
Levenshtein distance between q and q' on a single-byte basis
Cosine similarity between bag-of-chunks (keyword) representations of q and q'
Cosine similarity between bag-of-character-bigrams representations of q and q'
Result click features
Search result click entropy of q
Search result click entropy of q'
Query session co-occurrence features
Entropy of the query following q
Log likelihood ratio (LLR) of observing q' after q in the same session
Target attribute
Category similarity between q and q'
Table 1: Features used for the supervised learning.
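For the session LLR feature in Table 1, a standard Dunning-style computation over the 2x2 contingency table of query adjacency can be used, as sketched below; the counts in the example are hypothetical.

```python
import math

def _ll(k, n, p):
    """Log likelihood of k successes in n Bernoulli trials with rate p."""
    eps = 1e-12
    return k * math.log(p + eps) + (n - k) * math.log(1 - p + eps)

def llr(k11, k12, k21, k22):
    """Dunning-style log likelihood ratio for a 2x2 contingency table:
    k11 = #(q followed by q'), k12 = #(q followed by other queries),
    k21 = #(other queries followed by q'), k22 = the rest."""
    n1, n2 = k11 + k12, k21 + k22
    p1, p2 = k11 / n1, k21 / n2
    p = (k11 + k21) / (n1 + n2)
    return 2 * (_ll(k11, n1, p1) + _ll(k21, n2, p2)
                - _ll(k11, n1, p) - _ll(k21, n2, p))

# Hypothetical counts: "jal" follows "ana" in 120 of 400 transitions out of
# "ana", and follows other queries 300 times out of 99,600 adjacent pairs.
print(llr(120, 280, 300, 99300))   # large value: strong adjacency dependence
```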
Learning Models

As mentioned above, we use gradient boosting decision trees (GBDT) [11]. This is an additive regression model over an ensemble of shallow regression trees.

It iteratively fits an additive model:

$$F_m(x) = F_{m-1}(x) + \beta_m T(x; \Theta_m)$$

where $T(x; \Theta_m)$ is a regression tree at iteration m, weighted by the parameter $\beta_m$, with a finite number of parameters $\Theta_m$, consisting of split regions and corresponding weights, which are optimized such that a certain loss function L is minimized:

$$(\beta_m, \Theta_m) = \arg\min_{\beta, \Theta} \sum_{i=1}^{N} L\big(y_i,\; F_{m-1}(x_i) + \beta T(x_i; \Theta)\big)$$

At iteration m, the tree $T(x; \Theta_m)$ is induced to fit the negative gradient by least squares:

$$\Theta_m = \arg\min_{\Theta} \sum_{i=1}^{N} \big({-g_m(x_i)} - T(x_i; \Theta)\big)^2$$

where $g_m(x_i)$ is the gradient over the current prediction function:

$$g_m(x_i) = \left[ \frac{\partial L(y_i, F(x_i))}{\partial F(x_i)} \right]_{F = F_{m-1}}$$

Each non-terminal node in the tree represents the condition of a split on the feature space and each terminal node represents a region. The improvement criterion used to evaluate a split of a current terminal region into left and right subregions is:

$$i^2 = \frac{w_l w_r}{w_l + w_r}\, (\bar{y}_l - \bar{y}_r)^2$$

where $\bar{y}_l$ and $\bar{y}_r$ are the mean responses of the left and right subregions, respectively, and $w_l$ and $w_r$ are the corresponding sums of weights. We evaluate the relative importance of each feature by the normalized sum of $i^2$ over all the nodes that split on that feature.
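The paper's learner is the gradient boosting machine of [11]; as an illustration, the sketch below uses scikit-learn's GradientBoostingRegressor, which implements the same least-squares boosting of shallow regression trees. The feature matrix, target and hyper-parameter values are synthetic stand-ins, not the paper's settings.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical feature matrix: one row per (query, candidate) pair with the
# 24 features of Table 1; the target is the category similarity of the pair.
X = rng.random((5000, 24))
y = (0.6 * X[:, 0] + 0.3 * X[:, 1] * X[:, 2]   # synthetic stand-in target
     + 0.1 * rng.random(5000))

model = GradientBoostingRegressor(
    loss="squared_error",  # least-squares loss, as in the regression setting
    n_estimators=300,      # number of boosting iterations m
    learning_rate=0.05,    # shrinkage on each tree's contribution
    max_depth=3,           # shallow trees, as the additive model assumes
)
model.fit(X, y)

# Candidates for one query are ranked by predicted semantic similarity.
candidates = rng.random((10, 24))
order = np.argsort(-model.predict(candidates))

# Relative feature importance, normalized so the top feature is 100.
imp = model.feature_importances_
print(order, 100 * imp / imp.max())
```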

Experimental Setup

To evaluate our proposed combined method, we used a sample of the query log of a Japanese commercial search engine. First, query-clicked URL pairs that appear only once were removed. Second, identical query-URL pairs with the same browser cookie (i.e., queries from the same client) were counted only once to improve robustness against spam. Third, we selected from the log the 4,544 queries that contain one of the seven most frequent facet directives appearing in Japanese web search. Table 2 shows the statistics of our evaluation data. On the basis of these initial queries, we extracted 188,737 query pairs, among which 70,041 pairs are in a best rank co-click relation, 77,991 pairs in a co-topic relation, and 66,612 pairs in a co-session relation. From these, we excluded pairs where either query failed to be assigned to any category. In the end, we obtained 86,544 query pairs, which we split into two sets to carry out a two-fold cross-validation. We supplemented the training pairs with 82,212 randomly combined pairs of queries and recommended queries, which act as negative examples. Notice that the average semantic similarity is highest for co-click pairs.

Data type Number Avg. sim.
Original queries 4,544 (–)
Co-click pairs 70,041 (25,114) 0.8075
Co-topic pairs 77,991 (28,454) 0.7954
Co-session pairs 66,612 (41,179) 0.6837
Combined pairs 188,737 (86,544) 0.7326
Random pairs 188,737 (82,212) 0.3215
Table 2: Statistics of the evaluation data. The numbers of categorized pairs are given in parentheses.

Evaluation Measures

Given a ranked query list Q, the discounted cumulative gain (DCG) at the rank threshold T is defined as follows:

$$DCG(Q, T) = \sum_{i=1}^{T} \frac{s_i}{\log_2(i + 1)}$$

where $s_i$ is the score according to the judgement of the recommendation at rank i in Q.

We assigned five grades to the similarity of each query pair according to its value: “perfect” (above 0.75), “excellent” (between 0.5 and 0.75), “good” (between 0.25 and 0.5), “fair” (above 0.0 but at most 0.25), and “poor” (0). We assigned scores of 10, 7, 3, 0.5 and 0.0 to these five grade labels, respectively.

The ideal ranked query list $Q_{ideal}$ is obtained by ranking the recommendations in decreasing order of their label values. It is used to define the normalized DCG. In particular, we use the normalized DCG at 5 (NDCG5), defined as follows:

$$NDCG_5(Q) = \frac{DCG(Q, 5)}{DCG(Q_{ideal}, 5)}$$

The average precision (AP) of a ranked list is defined as usual:

$$AP(Q) = \frac{\sum_{i=1}^{|Q|} rel_i \cdot P@i}{\sum_{i=1}^{|Q|} rel_i}$$

where $rel_i$ is the binary relevance judgement of the item at rank i in the list and $P@i$ is the precision at rank i. We set $rel_i$ to 1 if the grade is “excellent” or better and to 0 otherwise. The mean average precision (MAP) of a set of test queries is the mean AP over this set.
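The evaluation pipeline, from similarity grading to NDCG5 and AP, can be sketched as follows; the discount in dcg() follows one common convention, the grade thresholds and scores are those given above, and the ranked similarities are hypothetical.

```python
import math

def grade(sim):
    """Map a category similarity to the five grade scores given above."""
    if sim > 0.75: return 10.0   # perfect
    if sim > 0.5:  return 7.0    # excellent
    if sim > 0.25: return 3.0    # good
    if sim > 0.0:  return 0.5    # fair
    return 0.0                   # poor

def dcg(scores, t):
    return sum(s / math.log2(i + 2) for i, s in enumerate(scores[:t]))

def ndcg(scores, t=5):
    ideal = dcg(sorted(scores, reverse=True), t)
    return dcg(scores, t) / ideal if ideal else 0.0

def average_precision(scores):
    rel = [1 if s >= 7.0 else 0 for s in scores]  # "excellent" or better
    hits, total = 0, 0.0
    for i, r in enumerate(rel, start=1):
        if r:
            hits += 1
            total += hits / i
    return total / hits if hits else 0.0

ranked_sims = [0.8, 0.6, 0.3, 0.9, 0.1]   # similarities in ranked order
scores = [grade(s) for s in ranked_sims]
print(ndcg(scores), average_precision(scores))
```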

Table 3 compares the NDCG5 and MAP values of the single methods and of the machine-learned combined method. Also included are the results of simply taking a linear combination of the query scores of each method computed separately.

Co-click Relations

The BRCCQs typically represent a drill down from the original query. They do not necessarily share any lexical part with the original query, but they share at least one clicked document with it. They often represent specializations, but sometimes parallel moves (“ipod” → “itune”) or generalizations (“ANA” → “airplane”).

Co-topic Relations

The CTQs also represent a drill down from the original query. They necessarily share some lexical part with the original query but do not necessarily share any clicked document with it. As expected from the higher evaluation measures, they seem homogeneous because they share the left substring. But their coverage is limited, especially for longer queries that are already specific enough. The method provides conservative recommendations, but they are strictly limited to specialization queries.

Co-session Relations

The CSQs might represent a drill down from the original query, but they also include topic shifts. They do not necessarily share any lexical part or any clicked document with the original query. As noted above, parallel move queries are characteristic of this method. For example, for the original query “ANA” (All Nippon Airways, which offers domestic flights in Japan), all of the top five recommendations are either competing transportation companies, such as the airlines “JAL” and “Skymark” and the railway company “JR”, or travel agencies such as “JTB” and “HIS”. This is useful to a searcher who is arranging a travel plan. In the case of the query “JR”, the names of three of the six JR regional railway companies appear, as well as “ANA”.

We used half of the pairs for training and the rest for evaluation. For training, the similarity measure is used as the target function to learn. After convergence is achieved, we use the model to rank the queries.

Because the combined ranking uses many features besides $P_{coclick}$, $P_{cotopic}$ and $P_{cosession}$, it is very different from a simple mixture of the three basic rankings.

As shown in Table 3, the combined ranking learned by GBDT achieves the best scores. The improvements over the single methods range from +1.8% to +4.1% in NDCG5, all statistically significant according to a Wilcoxon test. With MAP, the conclusions are similar. In general, the combination of two methods is better than any single method, and combining the three methods improves the performance further, especially in terms of MAP.

A visual inspection of Figure 2, which shows the precision-recall curves, confirms these results. The combined ranking outperforms any single method over the whole recall range. As seen in the graph, the improvement is not trivial, whereas the differences between the three single methods are small.

Ranking method NDCG5 MAP
Co-click (P_coclick) 0.9134 0.8570
Co-topic (P_cotopic) 0.9238 0.8602
Co-session (P_cosession) 0.9036 0.8538
Linear: co-click + co-topic 0.9308 0.8716
Linear: co-click + co-session 0.9153 0.8622
Linear: co-topic + co-session 0.9202 0.8660
Linear: all three methods 0.9271 0.8720
Combined by GBDT 0.9405 0.8978
Table 3: Recommendation ranking evaluated by NDCG5 and MAP.
Figure 2: Precision-recall curves of the co-click, co-topic, co-session and combined rankings of recommendations.

Finally, Table 4 shows the relative importance of the features listed in Table 1. Although the cosine similarity between the bag-of-character-bigrams representations of the two queries is the most important feature, partially because of the evaluation bias mentioned earlier, nine other features account for more than 10% of its importance. We understand from this that the proposed feature set is very effective for this task. This feature, like the other textual features, tends to promote queries sharing lexical items with the original, i.e. those typically found in the CTQ sets. On the other hand, the next most important features, the log likelihood ratio of observing q' after q in the same session and the entropy of the query following q, are related to CSQs. The click frequency features are related to the popularity of the queries, while the click entropy features are related to the click variance. This confirms our initial hypothesis that the three different methods of identifying potential query recommendations are complementary and that combining them is beneficial.

Rank Feature Importance
1 Cosine similarity of character bigrams of q and q' 100.00
2 LLR of observing q' after q in the same session 68.72
3 Entropy of the query following q 51.36
4 30.29
5 27.80
6 22.23
7 19.46
8 17.26
9 17.25
10 11.91
11 9.65
12 7.76
13 7.63
14 6.31
15 5.02
16 5.01
17 2.97
18 2.62
19 2.03
20 1.32
21 1.30
Table 4: Relative importance of features, averaged over the two-fold training sets.

Conclusions

We used three methods of extracting recommendations from search logs to improve the quality of the suggested queries. The first method exploits the positions of the clicked documents in the rankings and selects as candidate recommendations queries existing in the logs that rank the clicked documents higher. The second method is based on the observation that users often refine their query by adding terms. The third method uses the query sequences in search sessions and recommends typical topic shifts from the query.

We carried out experiments on a sample query log of a commercial search engine in Japan to compare the three methods. We observed that each method has its own advantages and drawbacks: the first one, based on the positions of the clicked documents, is sometimes more difficult to understand at first glance, but its recommendations may turn out to be more useful than those extracted from query reformulations; the second tends to be limited to specializations of the original query, which usually offer safer recommendations but less coverage; the last one is good in the case of a topic shift or mission change. The preliminary experiments conducted on the Yahoo! directory revealed a good semantic similarity between the extracted query pairs. By construction, the second method of adding a facet to a query (CTQ) rarely drifts from the original search topic. On the other hand, the first method (BRCCQ), which consists in identifying queries that rank the clicked documents higher, tends to surface more specific, sometimes jargon-like queries. This occasionally leads to incomprehensible recommendations, at least to our understanding (although they might make sense for the users who issued them). CTQ and variations of this method are used by many commercial search engines owing to their more conservative nature, but BRCCQ might be a more effective way of recommending totally new, eye-opening queries in a more exploratory fashion, despite the risk of recommending over-specific or over-generic queries. Queries extracted from user sessions (CSQ) provide more diverse recommendations, such as parallel move reformulations or even topic changes if those happen frequently in the logs (e.g. searching for an image after having looked for some film star).

In conclusion, each recommendation method has its own merits and drawbacks, which is why we combined them. Adopting semantic similarities as the target attribute, we learned to combine the recommendations from the three different methods into a new ranking according to their similarity to the original query. We showed that the resulting ranking outperforms any of the individual rankings, as well as their linear combinations, in terms of NDCG5 and MAP.

As the next step of this study, we will try to select recommendations so as to maximize facet diversity; consequently, we will need to evaluate the diversity of recommendation rankings. The evaluation of query recommendations is an important issue in this research area and has been investigated less than that of document search, as evaluating diversified results is problematic even in the latter case.

References

[1] I. Antonellis, H. Garcia-Molina, and C.-C. Chang. Simrank++: query rewriting through link analysis of the click graph. PVLDB, 1(1):408–421, 2008.
[2] R. Baeza-Yates and A. Tiberi. Extracting semantic relations from query logs. In Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 76–85, San Jose, CA, USA, 2007.
[3] R. A. Baeza-Yates, C. A. Hurtado, and M. Mendoza. Improving search engines by query clustering. JASIST, 58(12):1793–1804, 2007.
[4] D. Beeferman and A. Berger. Agglomerative clustering of a search engine query log. In Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 407–416, 2000.
[5] P. Boldi, F. Bonchi, C. Castillo, and S. Vigna. From "dango" to "japanese cakes": Query reformulation models and patterns. In WI-IAT '09, pages 183–190, 2009.
[6] H. Cao, D. Jiang, J. Pei, Q. He, Z. Liao, E. Chen, and H. Li. Context-aware query suggestion by mining click-through and session data. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 875–883, 2008.
[7] N. Craswell and M. Szummer. Random walks on the click graph. In Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 239–246, 2007.
[8] G. Dupret and C. Liao. A model to estimate intrinsic document relevance from the clickthrough logs of a web search engine. In Proceedings of the Third ACM International Conference on Web Search and Data Mining, WSDM 2010, New York City, USA, pages 181–190, 2010.
[9] G. Dupret and M. Mendoza. Recommending better queries from click-through data. In Proceedings of the 12th International Symposium on String Processing and Information Retrieval (SPIRE 2005), LNCS 3246, pages 41–44. Springer, 2005.
[10] B. M. Fonseca, P. B. Golgher, B. Pôssas, B. A. Ribeiro-Neto, and N. Ziviani. Concept-based interactive query expansion. In Proceedings of the 14th ACM International Conference on Information and Knowledge Management, pages 696–703, 2005.
[11] J. H. Friedman. Greedy function approximation: A gradient boosting machine. Annals of Statistics, 29:1189–1232, 2000.
[12] D. He, A. Göker, and D. J. Harper. Combining evidence for automatic web session identification. Information Processing & Management, 38(5):727–742, 2002.
[13] B. J. Jansen, D. L. Booth, and A. Spink. Patterns of query reformulation during web searching. JASIST, 60(7):1358–1371, 2009.
[14] T. Joachims, L. Granka, B. Pan, H. Hembrooke, and G. Gay. Accurately interpreting clickthrough data as implicit feedback. In Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 154–161, 2005.
[15] R. Jones, B. Rey, O. Madani, and W. Greiner. Generating query substitutions. In Proceedings of the 15th International Conference on World Wide Web, pages 387–396, 2006.
[16] A. Spink, J. Bateman, and B. J. Jansen. Searching heterogeneous collections on the web: behaviour of excite users. Information Research, 4(2), 1998.
[17] J. Teevan, S. T. Dumais, and D. J. Liebling. To personalize or not to personalize: modeling queries with variation in user intent. In Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 163–170, New York, NY, USA, 2008. ACM.
[18] G.-R. Xue, H.-J. Zeng, Z. Chen, Y. Yu, W.-Y. Ma, W. Xi, and W. Fan. Optimizing web search using web click-through data. In Proceedings of the 13th ACM International Conference on Information and Knowledge Management, pages 118–126, Washington, D.C., USA, 2004.