Solving the apparent diversity-accuracy dilemma of recommender systems

Tao Zhou^{1,2,3}, Zoltán Kuscsik^{1,4}, Jian-Guo Liu^{1,2,3}, Matúš Medo^{1}, Joseph R. Wakeling^{1}, Yi-Cheng Zhang^{1}

^{1}Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700 Fribourg, Switzerland; ^{2}Department of Modern Physics and Nonlinear Science Center, University of Science and Technology of China, Hefei 230026, China; ^{3}Research Center for Complex Systems Science, University of Shanghai for Science and Technology, Shanghai 200093, China; ^{4}Department of Theoretical Physics and Astrophysics, P. J. Šafárik University, Park Angelinum 9, Košice 04001, Slovak Republic
Abstract

Recommender systems use data on past user preferences to predict possible future likes and interests. A key challenge is that while the most useful individual recommendations are to be found among diverse niche objects, the most reliably accurate results are obtained by methods that recommend objects based on user or object similarity. In this paper we introduce a new algorithm specifically to address the challenge of diversity and show how it can be used to resolve this apparent dilemma when combined in an elegant hybrid with an accuracy-focused algorithm. By tuning the hybrid appropriately we are able to obtain, without relying on any semantic or context-specific information, simultaneous gains in both accuracy and diversity of recommendations.

information filtering — recommender systems — hybrid algorithms

Submitted to Proceedings of the National Academy of Sciences of the United States of America


Getting what you want, as the saying goes, is easy: the hard part is working out what it is that you want in the first place [1]. Whereas information filtering tools like search engines typically require the user to specify in advance what they are looking for [2, 3, 4, 5], this challenge of identifying user needs is the domain of recommender systems [5, 6, 7, 8], which attempt to anticipate future likes and interests by mining data on past user activities.

Many diverse recommendation techniques have been developed, including collaborative filtering [6, 9], content-based analysis [10], spectral analysis [11, 12], latent semantic models and Dirichlet allocation [13, 14], and iterative self-consistent refinement [15, 16, 17]. What most have in common is that they are based on similarity, either of users or objects or both: for example, e-commerce sites such as Amazon.com use the overlap between customers’ past purchases and browsing activity to recommend products [18, 19], while the TiVo digital video system recommends TV shows and movies on the basis of correlations in users’ viewing patterns and ratings [20]. The risk of such an approach is that, with recommendations based on overlap rather than difference, more and more users will be exposed to a narrowing band of popular objects, while niche items that might be very relevant will be overlooked.

The focus on similarity is compounded by the metrics used to assess recommendation performance. A typical method of comparison is to consider an algorithm’s accuracy in reproducing known user opinions that have been removed from a test data set. An accurate recommendation, however, is not necessarily a useful one: real value is found in the ability to suggest objects users would not readily discover for themselves, that is, in the novelty and diversity of recommendation [21]. Despite this, most studies of recommender systems focus overwhelmingly on accuracy as the only important factor (for example, the Netflix Prize [22] challenged researchers to increase accuracy without any reference to novelty or personalization of results). Where diversification is addressed, it is typically as an adjunct to the main recommendation process, based on restrictive features such as semantic or other context-specific information [23, 24].

The clear concern is that an algorithm that focuses too strongly on diversity rather than similarity is putting accuracy at risk. Our main focus in this paper is to show that this apparent dilemma can in fact be resolved by an appropriate combination of accuracy- and diversity-focused methods. We begin by introducing a “heat-spreading” algorithm designed specifically to address the challenge of diversity, with high success both at seeking out novel items and at enhancing the personalization of individual user recommendations. We show how this algorithm can be coupled in a highly efficient hybrid with a diffusion-based recommendation method recently introduced by our group [25]. Using three different datasets from three distinct communities, we employ a combination of accuracy- and diversity-related metrics to perform a detailed study of recommendation performance and a comparison to well-known methods. We show that not only does the hybrid algorithm outperform other methods but that, without relying on any semantic or context-specific information, it can be tuned to obtain significant and simultaneous gains in both accuracy and diversity of recommendations.

1 Methods

1.1 Recommendation procedure

Since explicit ratings are not always available [26], the algorithms studied in this paper are selected to work with very simple input data: users, objects, and a set of links between the two corresponding to the objects collected by particular users (more explicit preference indicators can easily be mapped to this "unary" form, albeit losing information in the process, whereas the converse is not so). These links can be represented by an adjacency matrix $A$ where $a_{\alpha i} = 1$ if object $\alpha$ is collected by user $i$ and $a_{\alpha i} = 0$ otherwise (throughout this paper we use Greek and Latin letters respectively for object- and user-related indices). Alternatively we can visualize the data as a bipartite user-object network with $O + U$ nodes, where the degrees of object and user nodes, $k_\alpha$ and $k_i$, represent respectively the number of users who have collected object $\alpha$ and the number of objects collected by user $i$.
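As an illustration, the unary data can be stored directly as such a 0/1 adjacency matrix. The following minimal sketch is illustrative only and not the implementation used for the results reported here; it assumes a numpy array with objects as rows and users as columns, and the helper name build_adjacency is chosen purely for this example.

```python
import numpy as np

def build_adjacency(links, n_objects, n_users):
    """links: iterable of (object_index, user_index) pairs, one per collected object."""
    A = np.zeros((n_objects, n_users))
    for alpha, i in links:
        A[alpha, i] = 1.0          # a_{alpha i} = 1 iff user i has collected object alpha
    return A

# toy example: 3 objects, 3 users, 5 links
A = build_adjacency([(0, 0), (1, 0), (1, 1), (2, 1), (2, 2)], n_objects=3, n_users=3)
k_obj = A.sum(axis=1)   # object degrees k_alpha (number of collectors)
k_user = A.sum(axis=0)  # user degrees k_i (number of collected objects)
```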

Recommendation scores are calculated for each user and each of their uncollected objects, enabling the construction of a sorted recommendation list with the most-recommended items at the top. Different algorithms generate different object scores and thus different rankings.

1.2 Algorithms

The heat spreading (HeatS) algorithm introduced here employs a process analogous to heat diffusion across the user-object network. This can be related to earlier work using a “heat conduction” algorithm to generate recommendations [27, 28], but with some key differences. The earlier algorithm operates on an object-object network derived from an explicit ratings structure, which washes out information about novelty or popularity of objects and consequently limits the algorithm to considering questions of accuracy and not diversity. The algorithm also requires multiple iterations to converge to a steady state. By contrast HeatS requires no more than unary data, and generates effective recommendations in a single pass.

HeatS works by assigning objects an initial level of "resource" denoted by the vector $\mathbf{f}$ (where $f_\beta$ is the resource possessed by object $\beta$), and then redistributing it via the transformation $\tilde{\mathbf{f}} = W^H \mathbf{f}$, where

$$W^H_{\alpha\beta} = \frac{1}{k_\alpha} \sum_{j=1}^{U} \frac{a_{\alpha j} a_{\beta j}}{k_j} \qquad (1)$$

is a row-normalized matrix representing a discrete analogue of a heat diffusion process. Recommendations for a given user $i$ are obtained by setting the initial resource vector $\mathbf{f}^i$ in accordance with the objects the user has already collected, that is, by setting $f^i_\beta = a_{\beta i}$. The resulting recommendation list of uncollected objects is then sorted according to $\tilde{f}^i_\alpha$ in descending order.

HeatS is a variant on an earlier probabilistic spreading (ProbS) algorithm introduced by our group [25], which redistributes resource in a manner akin to a random-walk process. Whereas HeatS employs a row-normalized transition matrix, that of ProbS is column-normalized,

$$W^P_{\alpha\beta} = \frac{1}{k_\beta} \sum_{j=1}^{U} \frac{a_{\alpha j} a_{\beta j}}{k_j} \qquad (2)$$

with the resource redistribution and resulting object scores then being given by $\tilde{\mathbf{f}} = W^P \mathbf{f}$.

A visual representation of the resource spreading processes of ProbS and HeatS is given in Fig. 1: in ProbS (a–c) the initial resource placed on objects is first evenly distributed among neighboring users, and then evenly redistributed back to those users' neighboring objects. By contrast, HeatS (d–f) redistributes resource via an averaging procedure, with users receiving a level of resource equal to the mean amount possessed by their neighboring objects, and objects then receiving back the mean of their neighboring users' resource levels. (Note that in ProbS total resource levels remain constant, whereas in HeatS this is not so.) Due to the sparsity of real datasets, these "physical" descriptions of the algorithms turn out to be more computationally efficient in practice than constructing and using the transition matrices $W^H$ and $W^P$.
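The two-pass spreading just described translates directly into code. The sketch below is illustrative rather than the implementation used for the results here; it assumes the objects-by-users adjacency matrix of the previous sketch, and the helper names are chosen purely for this example.

```python
import numpy as np

def _safe_div(x, y):
    """Element-wise x / y, returning 0 where y == 0 (isolated nodes)."""
    return np.divide(x, y, out=np.zeros_like(x, dtype=float), where=y > 0)

def probs_scores(A, user):
    """ProbS: resource is split equally among neighbours at each step."""
    k_obj, k_user = A.sum(axis=1), A.sum(axis=0)
    f = A[:, user].astype(float)               # initial resource on the user's collected objects
    u = A.T @ _safe_div(f, k_obj)              # objects -> users: each object shares f_beta / k_beta
    return A @ _safe_div(u, k_user)            # users -> objects: each user shares u_j / k_j

def heats_scores(A, user):
    """HeatS: each node receives the mean resource of its neighbours."""
    k_obj, k_user = A.sum(axis=1), A.sum(axis=0)
    f = A[:, user].astype(float)
    u = _safe_div(A.T @ f, k_user)             # users take the average of their objects' resource
    return _safe_div(A @ u, k_obj)             # objects take the average of their users' resource

def recommend(scores, A, user, top=20):
    """Rank uncollected objects by score, highest first."""
    s = scores.copy()
    s[A[:, user] > 0] = -np.inf                # exclude objects the user has already collected
    return np.argsort(-s)[:top]
```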

To provide a point of comparison we also employ two methods well known in the recommender systems literature. Global ranking (GRank) recommends objects according to their overall popularity, sorting them by their degree $k_\alpha$ in descending order. While computationally cheap, GRank is not personalized (apart from the exclusion of each user's already-collected objects) and in most cases it performs poorly.

A much more effective method is user similarity (USim), a well known and widely used technique that recommends items frequently collected by a given user's "taste mates" [8]. The taste overlap between users $i$ and $j$ is measured by the cosine similarity,

$$s_{ij} = \frac{\sum_{\alpha=1}^{O} a_{\alpha i} a_{\alpha j}}{\sqrt{k_i k_j}} \qquad (3)$$

and if user $i$ has not yet collected object $\alpha$, its recommendation score is given by

$$v_{\alpha i} = \frac{\sum_{j \neq i} s_{ij} a_{\alpha j}}{\sum_{j \neq i} s_{ij}} \qquad (4)$$

with the final recommendation list for user $i$ being sorted according to $v_{\alpha i}$ in descending order.
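A corresponding sketch of USim under the same assumed data layout (again illustrative; the $j \neq i$ restriction of Eq. 4 is imposed by zeroing the self-similarity):

```python
import numpy as np

def usim_scores(A, user):
    """Cosine-similarity-weighted scores v_{alpha i} for one target user."""
    k_user = A.sum(axis=0)
    overlap = A.T @ A                                  # co-collection counts between user pairs
    denom = np.sqrt(np.outer(k_user, k_user))
    s = np.divide(overlap, denom,
                  out=np.zeros_like(overlap, dtype=float), where=denom > 0)
    s_i = s[user].copy()
    s_i[user] = 0.0                                    # enforce j != i
    total = s_i.sum()
    if total == 0.0:
        return np.zeros(A.shape[0])                    # isolated user: no basis for recommendation
    return (A @ s_i) / total                           # sum_j s_ij a_{alpha j} / sum_j s_ij
```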

1.3 Hybrid methods

A basic but very general means of creating hybrid algorithms is to use weighted linear aggregation [23]: if methods X and Y report scores of $x_\alpha$ and $y_\alpha$ respectively, then a hybrid score for object $\alpha$ can be given by

$$v_\alpha = (1 - \lambda)\,\frac{x_\alpha}{\max_\beta x_\beta} + \lambda\,\frac{y_\alpha}{\max_\beta y_\beta} \qquad (5)$$

where the normalizations address the fact that different methods may produce scores on very different scales. By varying the parameter $\lambda \in [0, 1]$, we can tune the hybrid X+Y to favor the characteristics of one method or the other.
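A minimal sketch of this weighted aggregation, assuming each score vector is rescaled by its maximum so that the two methods contribute on comparable scales:

```python
import numpy as np

def linear_hybrid(x, y, lam):
    """Weighted aggregation of two score vectors; lam = 0 is pure X, lam = 1 is pure Y."""
    xn = x / x.max() if x.max() > 0 else x     # rescale so both methods span [0, 1]
    yn = y / y.max() if y.max() > 0 else y
    return (1.0 - lam) * xn + lam * yn
```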

Though easy to implement, this approach has the disadvantage of requiring two independent recommendation calculations, thus increasing computational cost. HeatS and ProbS, however, are already fundamentally linked, with their recommendation processes being determined by different normalizations of the same underlying matrix (in fact, their transition matrices are the transpose of each other). A much more elegant hybrid can thus be achieved by incorporating the hybridization parameter $\lambda$ directly into the transition matrix normalization:

$$W^{H+P}_{\alpha\beta} = \frac{1}{k_\alpha^{1-\lambda}\, k_\beta^{\lambda}} \sum_{j=1}^{U} \frac{a_{\alpha j} a_{\beta j}}{k_j} \qquad (6)$$

where $\lambda = 0$ gives us the pure HeatS algorithm, and $\lambda = 1$ gives us pure ProbS (other hybrid forms are possible but give inferior performance: Fig. S1 of supporting information [SI] provides a comparison of the different alternatives). In contrast to Eq. 5, this HeatS+ProbS hybrid has a computational complexity of order no greater than ProbS or HeatS alone. Note that while in the present work $\lambda$ takes a universal value, there is no reason in principle why we cannot use a different value of $\lambda$ for each individual target user.
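The hybrid of Eq. 6 can be computed with the same two-pass spreading used for ProbS and HeatS, simply by splitting the object-degree normalization between the start and the end of the walk. A sketch under the same assumptions as before (the _safe_div helper is repeated so the snippet runs on its own); sweeping lam from 0 to 1 reproduces both limiting algorithms and everything in between:

```python
import numpy as np

def _safe_div(x, y):
    return np.divide(x, y, out=np.zeros_like(x, dtype=float), where=y > 0)

def hybrid_scores(A, user, lam):
    """HeatS+ProbS hybrid of Eq. 6: lam = 0 is pure HeatS, lam = 1 is pure ProbS."""
    k_obj, k_user = A.sum(axis=1), A.sum(axis=0)
    f = A[:, user].astype(float)                               # initial resource on collected objects
    u = _safe_div(A.T @ _safe_div(f, k_obj ** lam), k_user)    # objects -> users
    return _safe_div(A @ u, k_obj ** (1.0 - lam))              # users -> objects
```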

1.4 Datasets

Three different datasets (Table 1) were used to test the above algorithms, differing both in subject matter (movies, music and internet bookmarks) and in quantitative aspects such as user/object ratios and link sparsity. The first (Netflix) is a randomly-selected subset of the huge dataset provided for the Netflix Prize [22], while the other two (RYM and Delicious) were obtained by downloading publicly-available data from the music ratings website RateYourMusic.com and the social bookmarking website Delicious.com (taking care to anonymize user identity in the process).

While the Delicious data is inherently unary (a user has either collected a web link or not), the raw Netflix and RYM data contain explicit ratings on a 5-star scale. A coarse-graining procedure was therefore used to transform these into unary form: an object is considered to be collected by a user only if the given rating is 3 or more. Sparseness of the datasets (defined as the number of links divided by the total number of possible user-object pairs) is measured relative to these coarse-grained connections.
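The coarse-graining step amounts to a simple threshold, as in the following toy sketch (the rating values are illustrative, not taken from the datasets):

```python
import numpy as np

# rows: objects, columns: users, 0 = no rating given
ratings = np.array([[5, 0, 2],
                    [3, 4, 0],
                    [1, 5, 4]])
A = (ratings >= 3).astype(float)        # a_{alpha i} = 1 iff the rating is 3 stars or more
sparsity = A.sum() / A.size             # links divided by all possible user-object pairs
```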

1.5 Recommendation performance metrics

To test a recommendation method on a dataset we remove at random 10% of the links and apply the algorithm to the remainder to produce a recommendation list for each user. We then employ four different metrics, two to measure accuracy in recovery of deleted links (A) and two to measure recommendation diversity (D):

(A1) Recovery of deleted links, $r$. An accurate method will clearly rank preferable objects more highly than disliked ones. Assuming that users' collected objects are indeed preferred, deleted links should be ranked higher on average than the other uncollected objects. So, if uncollected object $\alpha$ is listed in place $p$ for user $i$, the relative rank $r_{\alpha i} = p/(O - k_i)$ should be smaller if $\alpha$ is a deleted link (where objects from places $p$ to $p + n$ have the same score, which happens often in practice, we give them all the same relative ranking, $r_{\alpha i} = (p + n/2)/(O - k_i)$). Averaging over all deleted links we obtain a quantity, $r$, such that the smaller its value, the higher the method's ability to recover deleted links.
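A sketch of the relative-rank computation for a single user, assuming boolean masks collected and deleted over all objects and using mean places for tied scores (function and variable names are illustrative):

```python
import numpy as np
from scipy.stats import rankdata

def deleted_link_ranks(scores, collected, deleted):
    """Relative ranks r_{alpha i} of a user's deleted links among uncollected objects."""
    uncollected = ~collected
    places = rankdata(-scores[uncollected], method="average")   # tied scores get their mean place
    relative = places / uncollected.sum()                       # r_{alpha i} = p / (O - k_i)
    return relative[deleted[uncollected]]

# the metric r is then the mean of these values over every deleted link of every user
```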

(A2) Precision and recall enhancement, $e_P$ and $e_R$. Since real users usually consider only the top part of the recommendation list, a more practical measure may be to consider $D_i(L)$, the number of user $i$'s deleted links contained in the top $L$ places. Depending on our concerns, we may be interested either in how many of these top $L$ places are occupied by deleted links, or how many of the user's $D_i$ deleted links have been recovered in this way. Averaging the ratios $D_i(L)/L$ and $D_i(L)/D_i$ over all users with at least one deleted link, we obtain the mean precision and recall, $P(L)$ and $R(L)$, of the recommendation process [21, 29].

A still better perspective may be given by considering these values relative to the precision and recall of random recommendations, $P_{\mathrm{rand}}(L)$ and $R_{\mathrm{rand}}(L)$. If user $i$ has a total of $D_i$ deleted links, then $P^{\mathrm{rand}}_i(L) = D_i/(O - k_i) \approx D_i/O$ (since in general $O \gg k_i$) and hence, averaging over all users, $P_{\mathrm{rand}}(L) \approx D/(UO)$, where $D$ is the total number of deleted links. By contrast, the mean number of deleted links in the top $L$ places is $L D_i/(O - k_i) \approx L D_i/O$ and so $R_{\mathrm{rand}}(L) \approx L/O$. From this we can define the precision and recall enhancement,

$$e_P(L) = \frac{P(L)}{P_{\mathrm{rand}}(L)}, \qquad (7a)$$
$$e_R(L) = \frac{R(L)}{R_{\mathrm{rand}}(L)}. \qquad (7b)$$

Results for recall are given in SI (Figs. S2 and S3), but are similar in character to those shown here for precision.
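A per-user sketch of the enhancement measures; note that the definitions above average precision and recall over users before taking the ratio, whereas here the ratio is formed per user purely for illustration:

```python
def precision_recall_enhancement(top_L, deleted, n_objects):
    """top_L: recommended object indices; deleted: this user's deleted-link indices (non-empty)."""
    L = len(top_L)
    hits = len(set(top_L) & set(deleted))        # D_i(L)
    precision, recall = hits / L, hits / len(deleted)
    p_rand = len(deleted) / n_objects            # random-recommendation baseline, ~ D_i / O
    r_rand = L / n_objects                       # ~ L / O
    return precision / p_rand, recall / r_rand   # e_P(L), e_R(L)
```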

(D1) Personalization, $h$. Our first measure of diversity considers the uniqueness of different users' recommendation lists, that is, inter-user diversity. Given two users $i$ and $j$, the difference between their recommendation lists can be measured by the inter-list distance,

$$h_{ij}(L) = 1 - \frac{Q_{ij}(L)}{L}, \qquad (8)$$

where $Q_{ij}(L)$ is the number of common items in the top $L$ places of both lists: identical lists thus have $h_{ij} = 0$ whereas completely different lists have $h_{ij} = 1$. Averaging over all pairs of users with at least one deleted link we obtain the mean distance $h(L)$, for which greater or lesser values mean respectively greater or lesser personalization of users' recommendation lists.
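The inter-list distance is simply one minus the fractional overlap of two top-$L$ lists, as in this sketch:

```python
def interlist_distance(top_L_i, top_L_j):
    """h_ij(L): 0 for identical top-L lists, 1 for completely different ones."""
    L = len(top_L_i)
    common = len(set(top_L_i) & set(top_L_j))    # Q_ij(L)
    return 1.0 - common / L
```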

(D2) Surprisal/novelty, $I$. The second type of diversity concerns the capacity of the recommender system to generate novel and unexpected results, that is, to suggest objects a user is unlikely to already know about. To measure this we use the self-information or "surprisal" [30] of recommended objects, which measures the unexpectedness of an object relative to its global popularity. Given an object $\alpha$, the chance that a randomly-selected user has collected it is $k_\alpha/U$, and thus its self-information is $I_\alpha = \log_2(U/k_\alpha)$. From this we can calculate the mean self-information of each user's top $L$ objects, and averaging over all users with at least one deleted link we obtain the mean top-$L$ surprisal $I(L)$.
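A sketch of the surprisal computation, assuming the object-degree vector of the earlier snippets:

```python
import numpy as np

def mean_surprisal(top_L, k_obj, n_users):
    """Mean self-information I_alpha = log2(U / k_alpha) of the objects in a top-L list."""
    return float(np.mean(np.log2(n_users / k_obj[top_L])))
```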

Note that unlike the metrics for accuracy, the diversity-related measures could be averaged over all users regardless of whether they have deleted links or not, but the final results do not differ significantly. Where metrics depend on $L$, different choices of $L$ result in shifts in the precise numbers, but the relative performance differences between methods remain unchanged so long as $L \ll O$. Extended results are available in SI (Figs. S4 and S5); a value of $L = 20$ was chosen for the results displayed here in order to reflect the likely length of a practical recommendation list.

2 Results

2.1 Individual algorithms

A summary of the principal results for all algorithms, metrics and datasets is given in Table 2.

ProbS is consistently the strongest performer with respect to accuracy, with USim a close second, while both GRank and HeatS perform significantly worse (the latter reporting particularly bad performance with respect to precision enhancement). By contrast with respect to the diversity metrics HeatS is by far the strongest performer: ProbS has some success with respect to personalization, but along with USim and GRank performs weakly where surprisal (novelty) is concerned.

That GRank has any personalization at all ($h(20) > 0$) stems only from the fact that it does not recommend items already collected, and different users have collected different items. The difference in GRank's performance between Netflix, RYM and Delicious can be ascribed to the "blockbuster" phenomenon common in movies, far less so with music and web links: the 20 most popular objects in Netflix are each collected by on average 31.7% of users, while for RYM the figure is 7.2% and for Delicious only 5.6%.

The opposing performances of ProbS and HeatS—the former favoring accuracy, the latter personalization and novelty—can be related to their different treatment of popular objects. The random-walk procedure of ProbS favors highly-connected objects, whereas the averaging process of HeatS favors objects with few links: for example, in the Delicious dataset the average degree of users’ top 20 objects as returned by ProbS is 346, while with HeatS it is only 2.2. Obviously the latter will result in high surprisal values, and also greater personalization, as low-degree objects are more numerous and a method that favors them has a better chance of producing different recommendation lists for different users. On the other hand randomly-deleted links are clearly more likely to point to popular objects, and methods that favor low-degree objects will therefore do worse; hence the indiscriminate but populist GRank is able to outperform the novelty-favoring HeatS.

If we deliberately delete only links to low-degree objects, the situation is reversed, with HeatS providing better accuracy, although overall performance of all algorithms deteriorates (Table 3 and Fig. S6). Hence, while populism can be a cheap and easy way to get superficially accurate results, it is limited in scope: the most appropriate method can be determined only in the context of a given task or user need. The result also highlights the very distinct and unusual character of HeatS compared to other recommendation methods.

2.2 Hybrid methods

Given that different algorithms serve different purposes and needs, is it possible to combine two (or more) in such a way as to obtain the best features of both? With HeatS favoring diversity and ProbS accuracy, their hybrid combination (Eq. 6) might be expected to provide a smooth transition from one to the other. In fact, the situation is even more favorable: while pure HeatS represents the optimum for novelty, it is possible to obtain performance improvements with respect to all the other metrics by tuning the hybridization parameter $\lambda$ appropriately (Fig. 2). The accuracy of ProbS can thus be maintained and even improved while simultaneously attaining diversity close to or even exceeding that of HeatS. Alternatively, diversity can be favored while minimizing the cost in terms of accuracy.

Depending on the particular needs of a system and its users, one can define an arbitrary utility function of the metrics and choose $\lambda$ to optimize it: Table 4 gives as an example the percentage improvements that can be made, relative to pure ProbS ($\lambda = 1$), if we choose $\lambda$ to minimize the ranking score $r$. Shared improvements are obtained for all metrics except $e_P(20)$ with the Delicious dataset, where minimizing $r$ has a negative effect on $e_P(20)$. However, from Fig. 2 we can see that even in this case it is possible to choose a value of $\lambda$ that simultaneously improves all metrics relative to ProbS.
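Operationally, this tuning is a one-dimensional search over $\lambda$. The sketch below assumes a caller-supplied evaluate function that deletes a random 10% of links, runs the hybrid for the given $\lambda$, and returns the utility to be minimized (here the ranking score $r$); the loop itself is generic.

```python
import numpy as np

def tune_lambda(evaluate, grid=np.linspace(0.0, 1.0, 21)):
    """Scan the hybridization parameter and return (best_lambda, best_utility).

    evaluate(lam) -> utility to be minimized (e.g. the mean ranking score r)."""
    results = [(lam, evaluate(lam)) for lam in grid]
    return min(results, key=lambda pair: pair[1])
```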

Although HeatS+ProbS provides the best performance when taking into account all the metrics, other hybrids (constructed using the more general method of Eq. 5) can provide some valuable individual contributions (Fig. S7). HeatS+USim behaves similarly to HeatS+ProbS, but with generally smaller performance improvements. A more interesting hybrid is to combine the poorly-performing GRank with either HeatS or ProbS. These combinations can have a dramatic effect on link recovery: for RYM either can be tuned to produce an improvement in $r$ of almost 30% (relative to pure ProbS), compared to only 6.8% for the HeatS+ProbS hybrid (Table 4).

The explanation for these improvements stems from the way in which ProbS and HeatS interact with sparse datasets. Coverage of uncollected objects is limited to those sharing at least one user in common with an object collected by the target user (Fig. 1): all others receive a score of zero and so share a common (and large) relative rank of approximately $1 - n_0/[2(O - k_i)]$, where $n_0$ is the number of objects with zero score. GRank, with its universal coverage, is able to differentially rank these objects and so lower their contributions to $r$. Consequently, while incorporating it too strongly has a deleterious effect on the other metrics, a small GRank contribution can provide a useful enhancement to recommendation coverage, notably in "cold start" cases where little or nothing is known about a user.

3 Discussion

Recommender systems have at their heart some very simple and natural social processes. Each one of us looks to others for advice and opinions, learning over time who to trust and whose suggestions to discount. The paradox is that many of the most valuable contributions come not from close friends but from people with whom we have only a limited connection—“weak ties” who alert us to possibilities outside our regular experience [31].

The technical challenges facing recommender systems involve similar paradoxes. The most reliably accurate algorithms are those based on similarity and popularity of users and objects, yet the most valuable recommendations are those of niche items users are unlikely to find for themselves [21]. In this paper we have shown how this apparent dilemma can be resolved by an appropriate combination of diversity- and accuracy-focused methods, using a hybrid algorithm that joins a method with proven high accuracy with a new algorithm dedicated specifically to the production of novel and personalized recommendations. Their combination allows not merely a compromise between the two imperatives but allows us to simultaneously increase both accuracy and diversity of recommendations. By tuning the degree of hybridization the algorithms can be tailored to many custom situations and requirements.

We expect these results to be general: while we have presented a particular set of algorithms and datasets here, other recommender systems must face the same apparent dilemma and we expect them to benefit from a similar hybrid approach. It is interesting to note that while the Netflix Prize focused solely on accuracy, the winning entry in fact took a diversification approach, in this case based on tracking the changes in user opinions over time [32].

The algorithms presented here rely on no more than unary data and can thus place diversity at the heart of the recommendation process while still being applicable to virtually any dataset. More detailed sources of information can nevertheless be used to extend the recommendation process. Topical information and other measures of item-item similarity can be used to further diversify recommendation lists [24]: user-generated classifications such as tags [33, 34, 35] may be useful here. The HeatS and ProbS algorithms, and hence their hybrid, can be further customized by modifying the initial allocation of resource [36] to increase or decrease the influence of selected objects on the recommendation process. The hybridization process itself can be extended by incorporating techniques such as content-based or semantic analyses [23].

The ultimate measure of success for any recommender system is of course in the appreciation of its users, and in particular the ability of the system to serve their often very distinct needs. While in this paper we have optimized the hybrid from a global perspective, there is no reason why it cannot be tuned differently for each individual user—either by the system provider or by users themselves. This last consideration opens the door to extensive future theoretical and empirical research, bringing diversity and personalization not just to the contents of recommendation lists, but to the recommendation process itself.

Acknowledgements.
We are grateful to Yi-Kuo Yu for useful comments and conversations, and to two anonymous referees for their valuable feedback. This work was supported by Swiss National Science Foundation grant 200020-121848, Swiss State Ministry for Education and Research grant C05.0148 (Physics of Risk), and National Natural Science Foundation of China grants 10635040 and 60744003. We also acknowledge funding from the Liquid Publications and QLectives projects (EU FET-Open grants 213360 and 231200) during the final stages of this work.

References

  • [1] Masum H, Zhang Y-C (2004) Manifesto for the reputation society. First Monday 9: 7.
  • [2] Hanani U, Shapira B, Shoval P (2001) Information filtering: overview of issues, research and systems. User Model User-Adap Interact 11:203–259.
  • [3] Brin S, Page L (1998) The anatomy of a large-scale hypertextual Web search engine. Comput Netw ISDN Syst 30:107–117.
  • [4] Kleinberg JM (1999) Authoritative sources in a hyperlinked environment. J ACM 46:604–632.
  • [5] Belkin NJ (2000) Helping people find what they don’t know. Commun ACM 43:58–61.
  • [6] Goldberg D, Nichols D, Oki BM, Terry D (1992) Using collaborative filtering to weave an information tapestry. Commun ACM 35:61–70.
  • [7] Resnick P, Varian HR (1997) Recommender systems. Commun ACM 40:56–58.
  • [8] Adomavicius G, Tuzhilin A (2005) Toward the next generation of recommender systems: a survey of the state-of-the-art and possible extensions. IEEE Trans Knowl Data Eng 17:734–749.
  • [9] Schafer JB, Frankowski D, Herlocker J, Sen S (2007) Collaborative filtering recommender systems. Lect Notes Comput Sci 4321:291–324.
  • [10] Pazzani MJ, Billsus D (2007) Content-based recommendation systems. Lect Notes Comput Sci 4321:325–341.
  • [11] Goldberg K, Roeder T, Gupta D, Perkins C (2001) Eigentaste: a constant time collaborative filtering algorithm. Inf Retr 4:133–151.
  • [12] Maslov S, Zhang Y-C (2001) Extracting hidden information from knowledge networks. Phys Rev Lett 87:248701.
  • [13] Hofmann T (2004) Latent semantic models for collaborative filtering. ACM Trans Inf Syst 22:89–115.
  • [14] Blei DM, Ng AY, Jordan MI (2003) Latent Dirichlet allocation. J Mach Learn Res 3:993–1022.
  • [15] Laureti P, Moret L, Zhang Y-C, Yu Y-K (2006) Information filtering via iterative refinement. Europhys Lett 75:1006–1012.
  • [16] de Kerchove C, Van Dooren P (2008) Reputation systems and optimization. SIAM News 41 (March).
  • [17] Ren J, Zhou T, Zhang Y-C (2008) Information filtering via self-consistent refinement. Europhys Lett 82:58007.
  • [18] Schafer JB, Konstan JA, Riedl J (2001) E-commerce recommendation applications. Data Min Knowl Disc 5:115–153.
  • [19] Linden G, Smith B, York J (2003) Amazon.com recommendations: item-to-item collaborative filtering. IEEE Internet Comput 7:76–80.
  • [20] Ali K, van Stam W (2004) TiVo: making show recommendations using a distributed collaborative filtering architecture. Proc 10th ACM SIGKDD Int Conf Knowl Disc Data Min 394–401.
  • [21] Herlocker JL, Konstan JA, Terveen LG, Riedl JT (2004) Evaluating collaborative filtering recommender systems. ACM Trans Inf Syst 22:5–53.
  • [22] Bennett J, Lanning S (2007) The Netflix prize. Proc KDD Cup Workshop 2007 3–6.
  • [23] Burke R (2002) Hybrid recommender systems: survey and experiments. User Model User-Adap Interact 12:331–370.
  • [24] Ziegler C-N, McNee SM, Konstan JA, Lausen G (2005) Improving recommendation lists through topic diversification. Proc 14th Int World Wide Web Conf 22–32.
  • [25] Zhou T, Ren J, Medo M, Zhang Y-C (2007) Bipartite network projection and personal recommendation. Phys Rev E 76:046115.
  • [26] Claypool M, Brown D, Le P, Waseda M (2001) Inferring user interest. IEEE Internet Comput 5:32–39.
  • [27] Zhang Y-C, Blattner M, Yu Y-K (2007) Heat conduction process on community networks as a recommendation model. Phys Rev Lett 99:154301.
  • [28] Stojmirović A, Yu Y-K (2007) Information flow in interaction networks. J Comput Biol 14:1115–1143.
  • [29] Swets JA (1963) Information retrieval systems. Science 141:245–250.
  • [30] Tribus M (1961) Thermostatics and Thermodynamics (Van Nostrand, Princeton, NJ).
  • [31] Granovetter M (1973) The strength of weak ties. Am J Sociol 78:1360–1380.
  • [32] Koren Y (2009) The BellKor solution to the Netflix Grand Prize; Töscher A, Jahrer M, Bell R (2009) The BigChaos solution to the Netflix Grand Prize; Piotte M, Chabbert M (2009) The Pragmatic Theory solution to the Netflix Grand Prize. Technical reports submitted to the Netflix Grand Prize.
  • [33] Hotho A, Jäschke R, Schmitz C, Stumme G (2006) Information retrieval in folksonomies: search and ranking. Lect Notes Comput Sci 4011:411–426.
  • [34] Cattuto C, Loreto V, Pietronero L (2007) Semiotic dynamics and collaborative tagging. Proc Natl Acad Sci USA 104:1461–1464.
  • [35] Zhang Z-K, Zhou T, Zhang Y-C (2010) Personalized recommendation via integrated diffusion on user-item-tag tripartite graphs. Physica A 389:179–186.
  • [36] Zhou T, Jiang L-L, Su R-Q, Zhang Y-C (2008) Effect of initial configuration on network-based recommendation. Europhys Lett 81:58004.
Figure 1: The ProbS (a–c) and HeatS (d–f) algorithms (Eqs. 1 and 2) at work on the bipartite user-object network. Objects are shown as squares, users as circles, with the target user indicated by the shaded circle. While the ProbS algorithm works by an equal distribution of resource among nearest neighbours, the HeatS algorithm redistributes resource via a nearest-neighbour averaging process.
Figure 2: Performance of the HeatS+ProbS hybrid algorithm (Eq. 6) on the three different datasets. By varying the hybridization parameter $\lambda$ between pure HeatS ($\lambda = 0$) and pure ProbS ($\lambda = 1$) it is possible to gain simultaneous performance enhancements with respect to both accuracy ($r$ and $e_P(20)$) and diversity ($h(20)$ and $I(20)$) of recommendations. Tuning $\lambda$ in this fashion allows the algorithm to be customized and optimized for different user or community needs.
dataset     users    objects    links       sparsity
Netflix     10 000   6 000      701 947     1.17 × 10^-2
RYM         33 786   5 381      613 387     3.37 × 10^-3
Delicious   10 000   232 657    1 233 997   5.30 × 10^-4
Table 1: Properties of the tested datasets. Sparsity is the number of links divided by the total number of possible user-object pairs.
            Netflix                         RYM                             Delicious
method      r      e_P(20)  h(20)  I(20)   r      e_P(20)  h(20)  I(20)   r      e_P(20)  h(20)  I(20)
GRank       0.057  58.7     0.450  1.79    0.119  57.3     0.178  4.64    0.314  147      0.097  4.23
USim        0.051  68.5     0.516  1.82    0.087  150      0.721  5.17    0.223  249      0.522  4.49
ProbS       0.045  70.4     0.557  1.85    0.071  185      0.758  5.32    0.210  254      0.783  5.81
HeatS       0.102  0.11     0.821  12.9    0.085  121      0.939  10.1    0.271  30.8     0.975  12.6
Table 2: Performance of the recommendation algorithms according to each of the four metrics: recovery of deleted links ($r$), precision enhancement ($e_P(20)$), personalization ($h(20)$), and surprisal ($I(20)$). Lower values of $r$ and higher values of the other metrics indicate better performance.
method      r      e_P(20)  h(20)  I(20)
GRank       0.327  0.000    0.525  1.68
USim        0.308  0.000    0.579  1.72
ProbS       0.279  0.014    0.610  1.74
HeatS       0.262  0.679    0.848  13.1
Table 3: Performance of the individual recommendation algorithms for a probe set consisting only of links to low-degree objects.
dataset     λ_opt   r       e_P(20)   h(20)    I(20)
Netflix     0.23    10.6%   16.5%     28.5%    28.8%
RYM         0.41    6.8%    10.8%     20.1%    17.2%
Delicious   0.66    1.2%    -6.0%     22.5%    61.7%
Table 4: Tuning the HeatS+ProbS hybridization parameter $\lambda$ to minimize the ranking score $r$ produces simultaneous improvements in the other metrics. The relative changes are given in percentage terms against the pure ProbS algorithm ($\lambda = 1$); positive values denote improvement.

Supporting information
Zhou et al., “Solving the apparent diversity-accuracy dilemma
of recommender systems”

Figure S1. Elegant hybrids of the HeatS and ProbS algorithms can be created in several ways besides that given in Eq. 6 of the paper, for example by other joint normalizations of the underlying transition matrix. While one of the alternatives tested performs well only with respect to $r$, Eq. 6 and the other both have their advantages. However, Eq. 6 is somewhat easier to tune to different requirements since it varies more slowly and smoothly with $\lambda$. The results shown here are for the RateYourMusic dataset.

Figure S2. Precision and recall provide complementary but contrasting measures of accuracy: the former considers what proportion of selected objects (in our case, objects in the top $L$ places of the recommendation list) are relevant, the latter measures what proportion of relevant objects (deleted links) are selected. Consequently, recall (red) grows with $L$, whereas precision (blue) decreases. Here we compare precision and recall for the HeatS+ProbS hybrid algorithm on the Delicious and Netflix datasets. While quantitatively different, the qualitative performance is very similar for both measures.

Figure S3. A more elegant comparison can be obtained by considering precision and recall enhancement, that is, their values relative to those of randomly-sorted recommendations: $e_P(L) = P(L)/P_{\mathrm{rand}}(L)$ and $e_R(L) = R(L)/R_{\mathrm{rand}}(L)$ (Eqs. 7a and 7b in the paper). Again, the qualitative performance is close, and both of these measures decrease with increasing $L$, reflecting the inherent difficulty of improving either measure given a long recommendation list.

Figure S4. Comparison of the diversity-related metrics $h(L)$ and $I(L)$ when two different averaging procedures are used: averaging only over users with at least one deleted link (as displayed in the paper) and averaging over all users. The different procedures do not alter the results qualitatively and make little quantitative difference. The results shown are for the RateYourMusic dataset.

Figure S5. Comparison of the performance metrics for three different lengths $L$ of the recommendation list (shown in red, green and blue). Strong quantitative differences are observed for precision enhancement $e_P(L)$ and personalization $h(L)$, but their qualitative behaviour remains unchanged. Much smaller differences are observed for surprisal $I(L)$.

Figure S6. Our accuracy-based metrics all measure in one way or another the recovery of links deleted from the dataset. Purely random deletion will inevitably favor high-degree (popular) objects, with their greater proportion of links, and consequently methods that favor popular items will appear to provide higher accuracy. To study this effect, we created two special probe sets consisting of links only to objects whose degree was less than some threshold (either 100 or 200): links to these objects were deleted with a fixed probability, while links to higher-degree objects were left untouched. The result is a general decrease in accuracy for all algorithms (unsurprisingly, since rarer links are inherently harder to recover), but also a reversal of performance, with the low-degree-favoring HeatS now providing much higher accuracy than the high-degree-oriented ProbS, USim and GRank. The results shown here are for the Netflix dataset.

Figure S7. In addition to HeatS+ProbS, various other hybrids were created and tested using the method of Eq. 5 in the paper, where for a hybrid X+Y, $\lambda = 0$ corresponds to pure X and $\lambda = 1$ to pure Y. The results shown here are for the Netflix dataset. The HeatS+USim hybrid offers similar but weaker performance compared to HeatS+ProbS; combinations of GRank with other methods produce significant improvements in $r$, the recovery of deleted links, but show little or no improvement in precision enhancement and poor results in the diversity-related metrics. We can conclude that the proposed HeatS+ProbS hybrid is not only computationally convenient but also performs better than combinations of the other methods studied.
