An Integrated Recommender Algorithm for Rating Prediction

Abstract

Recommender systems are currently widely used in many e-commerce systems, such as Amazon and eBay. They aim to help users find items they may be interested in. In the literature, neighborhood-based collaborative filtering and matrix factorization are two common methods used in recommender systems. In this paper, we combine these two methods with personalized weights on them: rather than using fixed weights for the two methods, we assume each user has her/his own preference over them. Our results show that our algorithm outperforms the neighborhood-based collaborative filtering algorithm, the matrix factorization algorithm, and their combination with fixed weights.

I Introduction

Recommender systems are now widely deployed in many e-commerce platforms, like Amazon, eBay, and Epinions, as these platforms become more and more popular. The main purpose of recommender systems is to provide users a list of items that they may be interested in, ranked based on metrics like similarity and relevance. Such recommender systems are often referred to as top-N recommendation [1] [2] [3]. Besides ranked lists, in some cases researchers are also interested in predicting the ratings that users will give to items. This is often called the rating prediction problem, and many works belong to this category [4] [5] [6] [7] [8] [9].

Recommendation approaches can be basically divided into two categories: content-based and collaborative filtering based approaches [10]. Among them, collaborative filtering based approaches are widely used by many works [4] [11] [9] [6]. Collaborative filtering can be further divided into neighborhood-based (or memory-based) and model-based approaches [10]. Classical neighborhood-based collaborative filtering approaches assume similar users (neighbors) have similar tastes on items, such that their purchase or rating behaviors are also very similar [12]. In traditional neighborhood-based collaborative filtering, user-user (item-item) similarities are calculated from two users' previous purchase or rating behaviors (two items' common buyers). Model-based collaborative filtering approaches, like matrix factorization [13], model both users and items with latent factors, which are learned in the training stage. In this paper, we propose a new algorithm that integrates neighborhood-based collaborative filtering (CF) and matrix factorization (MF). When considering these two methods together, rather than assigning them fixed weights for all users, we assume that each user has her/his own preference over them.

In this paper, the Twitter-based movie rating dataset MovieTweetings [14] is chosen for recommender system development. The rating data are extracted from Twitter: users rate a movie on IMDb and post the score on their Twitter timeline. The organizers of MovieTweetings extract the ratings as well as relevant information from Twitter and post the dataset on the Internet, inviting the public to develop customized recommender systems based on it.

The interesting part of the dataset is the access to real social behavior for each user, via the real Twitter IDs provided in the dataset. Meanwhile, the dataset also identifies the rated movies by their IMDb movie IDs. Unlike a usual recommender dataset, which only contains rating values associated with anonymous users and items, the MovieTweetings dataset provides linkages to the real world, enabling developers to explore potentially useful side data.

In the remainder of this paper, we first introduce the background and related works in Section II. We analyze the social connections among users in Section III; our investigation shows that the constructed social network is too sparse to be useful. We then review two classical recommender system algorithms, neighborhood-based collaborative filtering and matrix factorization, in Section IV. In Section V, we integrate these two methods together with a baseline method and incorporate them into a single model. Furthermore, we propose an improved version, in which we assume that each user has her/his own preference over these methods; we call it integrated algorithm 2.0. In Section VI, experimental results show that the integrated algorithm performs much better than neighborhood-based collaborative filtering and matrix factorization individually, and that taking users' different preferences over these methods into account achieves even better results. We conclude this paper in Section VII.

II Background and Related Works

II-A Collaborative Filtering

Collaborative filtering is the most widely used approach and has acceptable accuracy in many cases [10]. It can be divided into neighborhood-based and model-based approaches; in this paper we mainly focus on the neighborhood-based approach. To recommend items to a user, it assumes that similar users have similar tastes, such that items purchased by similar users may also be of interest to her. Similarly, based on a user's previously purchased items, similar items might also be attractive to her. Based on which type of similarity is measured, it can be further divided into user-based and item-based approaches. Collaborative filtering tries to find similar users or items using users' previous purchase behaviors. Therefore, it takes the user-item rating or purchase matrix R as input, in which each row represents a user and each column represents an item; entry R_{ui} represents user u's rating or purchase behavior on item i.

Besides the neighborhood-based approach, matrix factorization is another popular approach used in recommender systems. It assumes that users and items can be described by the same number of hidden features. Therefore, both users and items can be represented by matrices whose rows correspond to users (or items) and whose columns correspond to hidden features. Rating prediction then becomes a matrix factorization problem.

II-B Related Works

As observed by many researchers, the traditional collaborative filtering approach suffers from data sparsity and does not solve the cold start problem very well; many works have been proposed to address these issues.

TaRS [4] uses the collaborative filtering approach along with social trust information to produce recommendations. It uses trust propagation, MoleTrust [15], to infer indirect trust among users, so that more users can be connected and coverage increases. Based on TaRS, [7] proposed a model that also takes distrust into account. [12] also uses trust metrics as weights, but at the same time keeps similarity: it filters out links in the trust network if two users' similarity is below a threshold.

Using similarity and social trust information, [9] clusters users into groups and mines groups' behavior patterns instead of single users' behavior patterns. [16] exploits the small-world property of social trust networks and also clusters users together to make better predictions.

The work most similar to our new algorithm is [17], which combines the neighborhood-based and model-based approaches and can use either explicit or implicit social information in recommender systems.

III Social Trust Network

The decisions people make are usually influenced by others, especially the ones they trust. This idea underlies the so-called social trust network [4]. It was difficult to construct such a network a couple of decades ago, since most information was not quantified, but such network construction becomes feasible nowadays because the trust between people can be observed from their behavior on social networking websites.

Given the real Twitter IDs provided in the dataset, we are able to connect the movie ratings to the raters' real lives. On Twitter, a well-known social networking website, a user can retweet another user's tweet. The retweeting behavior makes trust between people observable: if user A retweets user B's tweet, we can assume user A tends to trust user B. This could help us predict a possible movie rating that is not given by user A but is given by user B, who is trusted by user A.

Thus, a trust indicator between a pair of users can be formulated as Equation 1,

t_{ij} = n_{ij}   (1)

where n_{ij} represents the number of tweets posted by user j that user i has retweeted. The more of user j's posts user i has retweeted, the higher the trust of user i toward user j. If the trust network is found to be valid, the resulting trust factor can later be integrated into the overall mathematical model.
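As a concrete illustration, the trust counts of Equation 1 can be accumulated from (retweeter, original author) pairs. The flat pair format below is an assumption about how data like that in Table I might be preprocessed, not the paper's actual pipeline:

```python
from collections import Counter

def build_trust(retweets):
    """Count t_ij: how many times user i retweeted user j.

    `retweets` is an iterable of (retweeter, original_author) pairs,
    a hypothetical flattened form of the retweet records.
    """
    return Counter(retweets)

# Example: user "a" retweets "b" twice and "c" once.
trust = build_trust([("a", "b"), ("a", "b"), ("a", "c")])
assert trust[("a", "b")] == 2
```

A `Counter` returns 0 for unseen pairs, so absent trust edges need no special handling.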

Attempt at Network Construction

We first extracted the social content posted on Twitter by each user in the training set. A clip of the retweeting data is shown in Table I.

Username | Location | Retweeted Username | Retweet Content | Timestamp
GreatBritain_GB | Great Britain | sonianitiwadee | Cooler than he : Adidas… | Tue Oct 15 00:49:16 EDT 2014
kvakke | Oslo, Norway | netliferesearch | i dag. Veldig lærerikt. :) | Mon Nov 03 05:28:35 EST 2014
luisferreras | Dime | FernandoSued1 | Dios mio, asi quedo el veh de Oscar | Sat Oct 25 21:44:40 EDT 2014
TABLE I: A Clip of Retweeting Data

The first column contains the users provided in the MovieTweetings dataset, while the third column contains the users who are retweeted by the users in the first column. We organized the data and input it to Gephi, an open-source network analysis tool, to plot the network diagram using the Fruchterman–Reingold algorithm and observe the behavior of the network, as shown in Figure 1.

Fig. 1: A Glimpse of the Trust Network

Note that each user we focus on for Twitter content extraction is from the MovieTweetings dataset. All the users in the first column did provide rating information (marked green). However, the users in the third column, i.e., the people being retweeted (marked grey), do not necessarily exist in the MovieTweetings dataset. In other words, the people being trusted in the network are not guaranteed to have rated any movie. We therefore only care about those who are both retweeted and provide rating information, since only those trusted users can help us predict their followers' movie preferences.

After investigation, only 0.1% of those being trusted provide movie ratings (i.e., also exist in the training dataset), and only 0.5% of training set users are influenced by the trust network. The network is too sparse to have a meaningful impact; thus, this approach is discarded from the overall model.

IV Collaborative Filtering

IV-A Neighborhood-based Collaborative Filtering

Neighborhood-based collaborative filtering (CF) [12] is one of the most classical recommender system algorithms. Basically, it assumes that similar users have similar tastes on items, or that similar items will attract the same users. To measure similarity among users or items, we use cosine similarity, as shown in Equations 2 and 3.

In calculating cosine similarity, we subtract the user's or item's average rating (r̄_u or r̄_i) from r_{ui}. This is because deviations from average ratings are more useful for inferring users' preferences: raw ratings cannot directly reflect preference, as they are affected by each user's baseline favor (average rating). For example, suppose user u rates item i as 4, and his average rating is also 4, while user v rates item i as 3, but his average rating is 2. In this example, user v shows more favor toward item i than user u does. Therefore, it is necessary to remove average ratings when measuring user or item similarity.

Besides this, we also take into account the number of common items rated by two users (or the number of common users rating two items). Given the same deviations, the more items two users rate in common, the more similar they are. We therefore introduce the second term in Equations 2 and 3: I_{uv} is the set of items rated by both users u and v, and U_{ij} is the set of users who rate both items i and j.

sim(u,v) = \frac{\sum_{i \in I_{uv}} (r_{ui} - \bar{r}_u)(r_{vi} - \bar{r}_v)}{\sqrt{\sum_{i \in I_{uv}} (r_{ui} - \bar{r}_u)^2} \sqrt{\sum_{i \in I_{uv}} (r_{vi} - \bar{r}_v)^2}} \cdot \frac{|I_{uv}|}{|I_{uv}| + \alpha}   (2)
sim(i,j) = \frac{\sum_{u \in U_{ij}} (r_{ui} - \bar{r}_i)(r_{uj} - \bar{r}_j)}{\sqrt{\sum_{u \in U_{ij}} (r_{ui} - \bar{r}_i)^2} \sqrt{\sum_{u \in U_{ij}} (r_{uj} - \bar{r}_j)^2}} \cdot \frac{|U_{ij}|}{|U_{ij}| + \alpha}   (3)

Intuitively, the more similar a user or item is, the more important the corresponding rating. Based on which similarity metric is used, CF can be divided into two categories: user-based and item-based. Equations 4 and 5 show the prediction functions of the user-based and item-based approaches, respectively. Here b_{ui} is the baseline predictor from average ratings, as shown in Equation 6.

\hat{r}_{ui} = b_{ui} + \frac{\sum_{v \in N(u)} sim(u,v) (r_{vi} - b_{vi})}{\sum_{v \in N(u)} |sim(u,v)|}   (4)
\hat{r}_{ui} = b_{ui} + \frac{\sum_{j \in N(i)} sim(i,j) (r_{uj} - b_{uj})}{\sum_{j \in N(i)} |sim(i,j)|}   (5)
b_{ui} = \mu + (\bar{r}_u - \mu) + (\bar{r}_i - \mu)   (6)

In real applications, we only consider the top-N most similar users or items when making a prediction. We will see how N affects CF's performance on our dataset.
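To make the similarity computation concrete, here is a minimal sketch of mean-centered cosine similarity between two users, assuming the shrinkage form |I_uv|/(|I_uv| + α) for the common-item term; the dictionary-based rating layout and the name `user_similarity` are illustrative assumptions, not the paper's implementation:

```python
import math

def user_similarity(ratings, u, v, alpha=100):
    """Mean-centered cosine similarity between users u and v.

    `ratings` maps user -> {item: rating}; `alpha` is the shrinkage
    constant that discounts similarities based on few common items.
    """
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    # Subtract each user's average rating, so deviations are compared.
    mu_u = sum(ratings[u].values()) / len(ratings[u])
    mu_v = sum(ratings[v].values()) / len(ratings[v])
    num = sum((ratings[u][i] - mu_u) * (ratings[v][i] - mu_v) for i in common)
    den = math.sqrt(sum((ratings[u][i] - mu_u) ** 2 for i in common)) * \
          math.sqrt(sum((ratings[v][i] - mu_v) ** 2 for i in common))
    if den == 0:
        return 0.0
    # Shrink by the number of common items (the assumed second term).
    return (num / den) * len(common) / (len(common) + alpha)

ratings = {"u": {"i1": 4, "i2": 2},
           "v": {"i1": 5, "i2": 1},
           "w": {"i1": 1, "i2": 5}}
```

Here users "u" and "v" deviate from their means in the same direction, so their similarity is positive, while "w" deviates oppositely and gets a negative score.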

IV-B Matrix Factorization

Matrix factorization (MF) [18] is another popular algorithm for recommender systems. Unlike neighborhood-based collaborative filtering, it requires no semantic explanation and no domain expert. Although it also assumes that there exist user preference factors and item feature factors, they are not directly available. Instead, it assumes there is a certain number of hidden factors that capture users' and items' features.

In MF, we assume there are K hidden factors to model users' preferences. Then users can be represented by an m × K matrix P, where m is the number of users. Each row of P represents a user and each column of P represents one latent feature. In order to relate users with items, items are also represented by a matrix Q, which has n rows (one per item) and also K columns. In this way, the predicted rating can be written as Equation 7.

\hat{r}_{ui} = p_u q_i^T   (7)

We show the objective function in Equation 8. It includes two parts: an error term and a regularization term; the regularization term is used to avoid overfitting. To solve this problem, we use alternating least squares over P and Q: in each pass over the training data, we update p_u and q_i according to Equations 9 and 10.

\min_{P,Q} \sum_{(u,i) \in R} (r_{ui} - p_u q_i^T)^2 + \lambda (\|p_u\|^2 + \|q_i\|^2)   (8)
p_u = \Big(\sum_{i \in R_u} q_i^T q_i + \lambda I\Big)^{-1} \sum_{i \in R_u} r_{ui} q_i   (9)
q_i = \Big(\sum_{u \in R_i} p_u^T p_u + \lambda I\Big)^{-1} \sum_{u \in R_i} r_{ui} p_u   (10)

To find the optimal solution, we iterate until convergence. Note that in each iteration all rating pairs are visited. After each iteration ends, we compare the objective function with that of the previous iteration; if it does not change much, we consider the algorithm converged.

For all iterations, we record their MAEs and select the minimum one as our result. We also know that the number of latent factors K affects both accuracy and time complexity; we will see how K affects the results later.
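The alternating least squares procedure described above can be sketched as follows. This is a minimal illustration assuming a small dense rating matrix with NaN marking missing entries (real rating data is sparse, so this layout is only for demonstration), and the hyperparameter values are arbitrary:

```python
import numpy as np

def als(R, K=2, lam=0.1, iters=50, seed=0):
    """Minimal ALS sketch for the objective in Equation 8.

    R: dense m x n matrix with np.nan for unobserved ratings.
    Returns factor matrices P (m x K) and Q (n x K).
    """
    rng = np.random.default_rng(seed)
    m, n = R.shape
    P = rng.normal(scale=0.1, size=(m, K))
    Q = rng.normal(scale=0.1, size=(n, K))
    mask = ~np.isnan(R)  # observed entries
    for _ in range(iters):
        # Fix Q, solve the regularized least squares for each user row.
        for u in range(m):
            idx = mask[u]
            if idx.any():
                A = Q[idx].T @ Q[idx] + lam * np.eye(K)
                P[u] = np.linalg.solve(A, Q[idx].T @ R[u, idx])
        # Fix P, solve for each item row.
        for i in range(n):
            idx = mask[:, i]
            if idx.any():
                A = P[idx].T @ P[idx] + lam * np.eye(K)
                Q[i] = np.linalg.solve(A, P[idx].T @ R[idx, i])
    return P, Q

R = np.array([[5.0, 3.0, np.nan],
              [4.0, np.nan, 1.0],
              [np.nan, 2.0, 1.0]])
P, Q = als(R)
pred = P @ Q.T  # predicted ratings for all user-item pairs
```

After training, `pred` reproduces the observed entries closely and fills in the missing ones.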

V Integrated Algorithm

V-A Integrated CF and MF 1.0

Neighborhood-based collaborative filtering and matrix factorization are two widely used algorithms in recommender systems. However, neither of them is perfect by itself. When calculating user or item similarities, the neighborhood-based collaborative filtering algorithm only considers two users' or two items' common ratings; all other rating information is not used at all. On the other hand, matrix factorization leverages all users' and items' ratings to model their features, but it does not take user-user or item-item relationships into account. To overcome this problem, as in [17], we integrate these two algorithms into a single model. The prediction function in Equation 11 contains three terms: the bias baseline, the matrix factorization predicted deviation from the bias, and the neighborhood-based collaborative filtering predicted deviation from the bias.

\hat{r}_{ui} = b_{ui} + p_u q_i^T + \sum_{j \in N(i;u)} w_{ij} (r_{uj} - b_{uj})   (11)

To incorporate the two algorithms, the matrix factorization part is straightforward, but we modify the neighborhood-based collaborative filtering part a little. In the traditional neighborhood-based collaborative filtering algorithm, the weights are user dependent: how much item j affects user u's rating on item i depends not only on sim(i, j), but also on i's similarities with other items. As stated in [19], however, it is helpful to make these weights global and user independent. In this way, each w_{ij} is treated as a variable, and we can learn it in the training stage. Here j ranges over the top-N similar items of item i rated by user u, denoted N(i; u), for each rating pair in the training dataset; selection is based on the similarity metric in Equation 3 for pairs of items. We also treat b_u and b_i as variables, which means users' and items' biases are adjusted during training. Therefore, given the prediction function, we can write our objective function as in Equation 12.

\min \sum_{(u,i) \in R} (r_{ui} - \hat{r}_{ui})^2 + \lambda_1 (b_u^2 + b_i^2) + \lambda_2 (\|p_u\|^2 + \|q_i\|^2) + \lambda_3 \sum_{j \in N(i;u)} w_{ij}^2   (12)

Again, λ₁, λ₂ and λ₃ are regularization parameters, and our goal is to minimize the objective function. To solve this optimization problem, we use stochastic gradient descent: instead of calculating gradients over the whole training dataset, we approximate them at single examples and update the variables for each training pair. The learning rates are controlled by parameters η₁, η₂ and η₃. If we denote the error between the predicted and actual rating as e_{ui} for a user-item pair (u, i) in the training dataset, the updating process can be written as follows:

b_u ← b_u + η₁ (e_{ui} − λ₁ b_u)
b_i ← b_i + η₁ (e_{ui} − λ₁ b_i)
p_u ← p_u + η₂ (e_{ui} q_i − λ₂ p_u)
q_i ← q_i + η₂ (e_{ui} p_u − λ₂ q_i)
w_{ij} ← w_{ij} + η₃ (e_{ui} (r_{uj} − b_{uj}) − λ₃ w_{ij}), for each j ∈ N(i; u)

We continue to update these variables until convergence, i.e., until the objective function remains stable.
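One stochastic gradient step of the integrated model might look like the sketch below. The function signature, variable layout, and default learning rates and regularization strengths are all assumptions for illustration; the updates follow the standard gradient of the objective in Equation 12:

```python
def sgd_step(r_ui, b_u, b_i, p_u, q_i, w, resid, mu,
             eta=(0.005, 0.005, 0.005), lam=(0.02, 0.02, 0.02)):
    """One hypothetical SGD update for the integrated model.

    `w` maps neighbor item j -> weight w_ij, and `resid` maps
    j -> (r_uj - b_uj), the user's residual on neighbor j.
    Returns updated copies of all variables.
    """
    # Prediction: bias baseline + MF term + neighborhood term.
    pred = mu + b_u + b_i
    pred += sum(x * y for x, y in zip(p_u, q_i))
    pred += sum(w[j] * resid[j] for j in w)
    e = r_ui - pred  # prediction error e_ui

    # Gradient updates with regularization.
    b_u_new = b_u + eta[0] * (e - lam[0] * b_u)
    b_i_new = b_i + eta[0] * (e - lam[0] * b_i)
    p_new = [p + eta[1] * (e * q - lam[1] * p) for p, q in zip(p_u, q_i)]
    q_new = [q + eta[1] * (e * p - lam[1] * q) for p, q in zip(p_u, q_i)]
    w_new = {j: w[j] + eta[2] * (e * resid[j] - lam[2] * w[j]) for j in w}
    return b_u_new, b_i_new, p_new, q_new, w_new
```

A single step moves the prediction toward the observed rating, which is easy to check by recomputing the prediction with the updated variables.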

V-B Integrated CF and MF 2.0

In the above integrated algorithm, we assume the bias baseline, neighborhood-based collaborative filtering, and matrix factorization are equally important, so we simply add them together. But it can be the case that different users favor different methods among the three. For example, user u's behavior may be very similar to the bias baseline, so neighborhood-based collaborative filtering and matrix factorization should not affect the prediction much; for user v, it is possible that his rating behavior is better captured by matrix factorization than by the other two. We realize that it is necessary to model users' preferences over the three methods. Therefore we put user-based weights (α_u, β_u and γ_u) on the three methods. The prediction function is in Equation 13.

\hat{r}_{ui} = \alpha_u b_{ui} + \beta_u p_u q_i^T + \gamma_u \sum_{j \in N(i;u)} w_{ij} (r_{uj} - b_{uj})   (13)

Correspondingly, its objective function can be written as Equation 14.

\min \sum_{(u,i) \in R} (r_{ui} - \hat{r}_{ui})^2 + \lambda_1 (b_u^2 + b_i^2) + \lambda_2 (\|p_u\|^2 + \|q_i\|^2) + \lambda_3 \sum_{j \in N(i;u)} w_{ij}^2 + \lambda_4 (\alpha_u^2 + \beta_u^2 + \gamma_u^2)   (14)

Similarly, we use stochastic gradient descent to solve this optimization problem; the updating process is analogous to that of version 1.0, with additional updates for the per-user weights. We show this method in Algorithm 1.

Input: K: latent dimension, N: top N similar items, α: constant parameter (100) in similarity calculation, maxIter: maximum number of iterations, ε: convergence condition, λ: regularization parameters, η: learning rates, R: training data, T: testing data
Output: MAE
1 Calculate average ratings , , ;
2 Initialize P and Q with ;
3 Initialize , and with ;
4 for  to  do
5       for  to  do
6             Calculate ;
7            
8       end for
9      
10 end for
11 for  to  do
12       Sort and select top N similar items ;
13       Initialize with similarity score;
14      
15 end for
16 t = 0;
17 while  do
18       ++t;
19       for all  do
20             ;
21             ;
22             ;
23             ;
24             ;
25             ;
26             for each  do
27                   ;
28                  
29             end for
30            ;
31             ;
32             ;
33            
34       end for
35      Calculate MAE on T;
36       if  then
37             break;
38            
39       end if
40      
41 end while
42return MAE;
Algorithm 1 Integrated CF and MF algorithm
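The prediction function of version 2.0 is easy to express in code. The sketch below assumes the same hypothetical variable layout as before, with the per-user weights made explicit; when α_u = β_u = γ_u = 1 it reduces to the version 1.0 prediction:

```python
def predict_2_0(mu, b_u, b_i, p_u, q_i, w, resid, alpha, beta, gamma):
    """Equation 13 sketch: per-user weights over the three components.

    `w` maps neighbor item j -> weight w_ij; `resid` maps
    j -> (r_uj - b_uj). alpha/beta/gamma are the user's learned
    preferences over baseline, MF, and CF respectively.
    """
    baseline = mu + b_u + b_i
    mf = sum(x * y for x, y in zip(p_u, q_i))
    cf = sum(w[j] * resid[j] for j in w)
    return alpha * baseline + beta * mf + gamma * cf
```

For a user whose ratings track the global baseline, training would drive β_u and γ_u toward zero, leaving the prediction dominated by the baseline term.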

VI Results and Conclusions

VI-A Dataset Description

We use the same dataset, MovieTweetings, as [14] [20]. It is an up-to-date dataset; we downloaded it on Nov 7, 2014, effectively taking a snapshot of the dataset at that time. MovieTweetings collects all tweets from Twitter having the format “*I rated #IMDB*”. From such tweets, it extracts user ids and movie ids, associated with ratings. Therefore, it can be seen as a user-item purchase matrix. It is also the dataset used in the ACM RecSys Challenge 2014.

Originally, the training dataset contains 22,079 users and 13,618 items. We find that some of them do not appear in the testing dataset; conversely, some users and items in the testing dataset never appear in the training dataset. Therefore, we remove such users and items from the dataset. After pruning, details of the dataset can be seen in Table II.

# of total users 24,924
# of total items 15,142
# of users in R 22,079
# of items in R 13,618
# of pairs in R 170,285
# of users in T 5,717
# of items in T 4,226
# of pairs in T 16,848
TABLE II: Dataset details

VI-B Performance Comparison

In this section, we compare our integrated algorithms with neighborhood-based collaborative filtering and matrix factorization methods.

Parameters

For the MF method, we set the regularization parameter in Equation 8 to 10, as it gives us the best performance. For the algorithms using stochastic gradient descent, we fix the learning rates η and regularization parameters λ for both the integrated algorithm 1.0 and the integrated algorithm 2.0.

Comparisons

We compare the integrated methods with the neighborhood-based collaborative filtering and matrix factorization methods separately. In these two comparisons, we vary N and K. First, we compare the CF, CF_MF1.0 and CF_MF2.0 algorithms' performance as N increases from 5 to 50, with K fixed at 20 for CF_MF1.0 and CF_MF2.0. Similarly, we increase K from 5 to 100 to compare the MF, CF_MF1.0 and CF_MF2.0 algorithms' performance with N fixed at 10. The prediction errors (MAE) are shown in Figures 2 and 3, respectively.

Fig. 2: Comparison of CF, CF_MF1.0 and CF_MF2.0 integrated algorithms
Fig. 3: Comparison of MF, CF_MF1.0 and CF_MF2.0 integrated algorithms
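The MAE metric used throughout these comparisons is simply the average absolute deviation between predicted and actual ratings, which can be computed as:

```python
def mae(preds, truths):
    """Mean absolute error over paired predictions and actual ratings."""
    return sum(abs(p - t) for p, t in zip(preds, truths)) / len(preds)

assert mae([3.0, 5.0], [4.0, 4.0]) == 1.0
```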

From Figure 2 we can see that our integrated methods achieve a substantial improvement over neighborhood-based collaborative filtering. We also note that as N increases, the results do not change much. From Figure 3 we can see that the CF_MF1.0 algorithm improves accuracy over the traditional matrix factorization method, and CF_MF2.0 improves even further over CF_MF1.0. This means that modeling users' different preferences over the three methods achieves more improvement than treating them equally.

In order to illustrate K's influence on the algorithms' performance more clearly, we show CF_MF1.0 and CF_MF2.0's performance again in Figure 4, along with their running times. We can see that increasing K helps reduce the error, but at the same time the running time also increases; there is thus a trade-off between prediction accuracy and running time. We list the MAEs of CF_MF1.0 and CF_MF2.0 in Table III.

Fig. 4: CF_MF1.0 and CF_MF2.0 MAE and running time
K CF_MF1.0 CF_MF2.0
5 2.18379 2.13825
10 2.18109 2.13351
15 2.17971 2.12938
20 2.17875 2.12625
25 2.17801 2.12418
30 2.17739 2.12281
35 2.17688 2.12189
40 2.17643 2.12127
45 2.17604 2.12084
50 2.17568 2.12053
60 2.17507 2.12010
70 2.17455 2.11974
80 2.17411 2.11937
90 2.17372 2.11893
100 2.17337 2.11884
TABLE III: CF_MF1.0 and CF_MF2.0’s MAE

VII Conclusions and Future Works

In this paper, we proposed a new algorithm that integrates neighborhood-based collaborative filtering (CF) and matrix factorization (MF). When considering these two methods together, rather than assigning them fixed weights for all users, we assume that each user has her/his own preference over them. Our results on the MovieTweetings dataset show that our algorithm outperforms the neighborhood-based collaborative filtering algorithm, the matrix factorization algorithm, and their combination with fixed weights.

For the integrated algorithms, we can still tune parameters on a validation dataset. We may also consider constraints on the variables. The integrated algorithms are flexible; it is easy to add other terms, such as social side information.

Future work will focus on social network analysis. Since pairwise user relations are too sparse to be useful, attention should be paid to individual background information, such as age, location, gender, and education level.

VIII Work Distribution

This work is based on a course project (Recommender systems, IUPUI, Fall, 2014).

Yefeng Ruan extracts users’ tweets from Twitter, implements CF and MF algorithms separately, proposes and implements CF and MF integrated algorithms 1.0 and 2.0, also compares and analyzes results.

Tzu-Chun Lin analyzes the social relationships among users and also implemented an SVD algorithm; as it is the same as the MF algorithm, we do not present it here.

References

  1. M. Jamali and M. Ester, “Using a trust network to improve top-n recommendation,” in Proceedings of the Third ACM Conference on Recommender Systems, RecSys ’09, (New York, NY, USA), pp. 181–188, ACM, 2009.
  2. P. Cremonesi, Y. Koren, and R. Turrin, “Performance of recommender algorithms on top-n recommendation tasks,” in Proceedings of the Fourth ACM Conference on Recommender Systems, RecSys ’10, (New York, NY, USA), pp. 39–46, ACM, 2010.
  3. T. Zhao, J. McAuley, and I. King, “Leveraging social connections to improve personalized ranking for collaborative filtering,” in Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management, CIKM ’14, (New York, NY, USA), pp. 261–270, ACM, 2014.
  4. P. Massa and P. Avesani, “Trust-aware recommender systems,” in Proceedings of the 2007 ACM Conference on Recommender Systems, RecSys ’07, (New York, NY, USA), pp. 17–24, ACM, 2007.
  5. X. Liu and K. Aberer, “Soco: A social network aided context-aware recommender system,” in Proceedings of the 22Nd International Conference on World Wide Web, WWW ’13, (Republic and Canton of Geneva, Switzerland), pp. 781–802, International World Wide Web Conferences Steering Committee, 2013.
  6. H. Ma, H. Yang, M. R. Lyu, and I. King, “Sorec: Social recommendation using probabilistic matrix factorization,” in Proceedings of the 17th ACM Conference on Information and Knowledge Management, CIKM ’08, (New York, NY, USA), pp. 931–940, ACM, 2008.
  7. A. Nazemian, H. Gholami, and F. Taghiyareh, “An improved model of trust-aware recommender systems using distrust metric,” in Proceedings of the 2012 International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2012), ASONAM ’12, (Washington, DC, USA), pp. 1079–1084, IEEE Computer Society, 2012.
  8. S. Rendle, “Learning recommender systems with adaptive regularization,” in Proceedings of the Fifth ACM International Conference on Web Search and Data Mining, WSDM ’12, (New York, NY, USA), pp. 133–142, ACM, 2012.
  9. X. Ma, H. Lu, and Z. Gan, “Improving recommendation accuracy by combining trust communities and collaborative filtering,” in Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management, CIKM ’14, (New York, NY, USA), pp. 1951–1954, ACM, 2014.
  10. X. Yang, Y. Guo, Y. Liu, and H. Steck, “A survey of collaborative filtering based social recommender systems,” Computer Communications, vol. 41, pp. 1–10, 2014.
  11. Y. Koren, “Collaborative filtering with temporal dynamics,” Commun. ACM, vol. 53, pp. 89–97, Apr. 2010.
  12. J. S. Breese, D. Heckerman, and C. Kadie, “Empirical analysis of predictive algorithms for collaborative filtering,” in Proceedings of the Fourteenth conference on Uncertainty in artificial intelligence, pp. 43–52, Morgan Kaufmann Publishers Inc., 1998.
  13. Y. Koren, R. Bell, and C. Volinsky, “Matrix factorization techniques for recommender systems,” Computer, vol. 42, no. 8, pp. 30–37, 2009.
  14. S. Dooms, T. De Pessemier, and L. Martens, “Movietweetings: a movie rating dataset collected from twitter,” in Workshop on Crowdsourcing and Human Computation for Recommender Systems, CrowdRec at RecSys, vol. 2013, 2013.
  15. P. Massa and P. Avesani, “Trust metrics on controversial users: Balancing between tyranny of the majority,” International Journal on Semantic Web and Information Systems (IJSWIS), vol. 3, no. 1, pp. 39–64, 2007.
  16. W. Yuan, D. Guan, Y.-K. Lee, S. Lee, and S. J. Hur, “Improved trust-aware recommender system using small-worldness of trust networks,” Knowledge-Based Systems, vol. 23, no. 3, pp. 232–238, 2010.
  17. Y. Koren, “Factorization meets the neighborhood: A multifaceted collaborative filtering model,” in Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’08, (New York, NY, USA), pp. 426–434, ACM, 2008.
  18. D. Billsus and M. J. Pazzani, “Learning collaborative information filters.,” in ICML, vol. 98, pp. 46–54, 1998.
  19. R. Bell and Y. Koren, “Scalable collaborative filtering with jointly derived neighborhood interpolation weights,” in Data Mining, 2007. ICDM 2007. Seventh IEEE International Conference on, pp. 43–52, Oct 2007.
  20. A. Said, S. Dooms, B. Loni, and D. Tikk, “Recommender systems challenge 2014,” in Proceedings of the 8th ACM Conference on Recommender Systems, RecSys ’14, (New York, NY, USA), pp. 387–388, ACM, 2014.