Abstract

Most accurate recommender systems are black-box models that hide the reasoning behind their recommendations. Yet explanations have been shown to increase the user’s trust in the system, in addition to providing other benefits such as scrutability, meaning the ability to verify the validity of recommendations. This gap between accuracy and transparency or explainability has generated an interest in automated explanation generation methods. Restricted Boltzmann Machines (RBM) are accurate models for collaborative filtering (CF) that likewise lack interpretability. In this paper, we focus on RBM-based collaborative filtering recommendations, and further assume the absence of any additional data source, such as item content or user attributes. We thus propose a new Explainable RBM technique that computes the top-n recommendation list from items that are explainable. Experimental results show that our method is effective in generating accurate and explainable recommendations.

 

Explainable Restricted Boltzmann Machines for Collaborative Filtering

 

Behnoush Abdollahi b.abdollahi@louisville.edu

Dept. of Computer Engineering & Computer Science, Knowledge Discovery and Web Mining Lab, University of Louisville, Louisville, KY 40222, USA

Olfa Nasraoui olfa.nasraoui@louisville.edu

Dept. of Computer Engineering & Computer Science, Knowledge Discovery and Web Mining Lab, University of Louisville, Louisville, KY 40222, USA


Introduction

Explanations for recommendations can have several benefits, such as: helping the user make a good decision (effectiveness), helping the user make a faster decision (efficiency), and revealing the reasoning behind the system’s recommendation (transparency) (Tintarev & Masthoff, 2011; Zanker, 2012). As a result, users are more likely to follow the recommendation and use the system in better ways (Tintarev & Masthoff, 2007; Herlocker et al., 2000). For instance, the Netflix recommender system justifies its movie suggestions by listing similar movies, obtained from the user’s social network. Amazon’s recommender system shows similar items to the ones that the user (or other similar users) have bought or viewed, when recommending a new item using neighborhood based Collaborative Filtering (CF).

CF approaches provide recommendations to users based on their collective recorded interests in items, typically relying on the similarity between users or items, giving rise to neighborhood-based CF approaches, which can be user-based or item-based. Neighborhood-based CF methods are white-box approaches that can be explained based on the ratings of similar users or items.

Most accurate recommender systems are model-based methods that are black-boxes. Among model-based approaches are Restricted Boltzmann Machines (RBM) (Hinton, 2010), which can assign a low-dimensional set of features to items in a latent space. The newly obtained features capture the user’s interests and different item groups; however, it is very difficult to interpret these automatically learned features. Therefore, the justification of a recommendation, or the reasoning behind the recommended item, is not clear in these models. RBM approaches have recently proved to be powerful for designing deep learning techniques to learn and predict patterns in large datasets because they can provide very accurate results (Hinton & Salakhutdinov, 2006). However, they suffer from the lack of interpretability of their results, especially in recommender systems. A lack of explanations can result in users not trusting the suggestions made by the recommender system, leaving users with only one way to assess the quality of a recommendation: following it. This, however, is contrary to one of the goals of a recommender system, which is reducing the time that users spend exploring items. It would be very desirable and beneficial to design recommender systems that give accurate suggestions and, at the same time, convey the reasoning behind the recommendations to the user. A main challenge in creating a recommender system is thus choosing between an interpretable technique with moderate prediction accuracy and a more accurate technique, such as RBM, that does not give explainable recommendations.

Research Question

Our research question is: can we design an RBM model for a CF recommender engine that suggests items that are explainable, while recommendations remain accurate? Our current scope is limited to CF recommendations where no additional source of data is used in explanations, and where explanations for recommended items can be generated from the ratings given to these items, by the active user’s neighbors only (user-based neighbor style explanation), as shown in Figure 1.

top-3 movies rated by the test user | movie recommended by Explainable RBM | explanation (ratings out of 5)
Annie Hall (Comedy) | Miller’s Crossing (Thriller) | 8 out of 10 people with similar interests to you have rated this movie 4 or higher.
Carrie (Drama) | Singin’ in the Rain (Comedy) | 9 out of 10 people with similar interests to you have rated this movie 3 or higher.
Jaws (Thriller) | Psycho (Horror) | 4 out of 10 people with similar interests to you have rated this movie 5.
Table 1: Top-3 recommendations and explanations for a test user.
Related Work

There are different ways of classifying explanation styles. Generally, explanations can be user-based neighbor-style (Figure 1), item-based neighbor-style (also known as influence-style), or keyword-style (Bilgic & Mooney, 2005). A user-based neighbor-style explanation is based on similar users, and is generally used when the CF method is also a user-based neighborhood method. An item-based neighbor-style explanation is generally used in item-based CF methods by presenting the items that had the highest impact on the recommender system’s decision. A keyword-style explanation is based on the items’ features or users’ demographic data available as content data, and is mostly used in content-based recommender systems (Bilgic & Mooney, 2005).

In all styles, the explanation may or may not reflect the underlying algorithm used by the recommender system. Also, the data sources employed in the recommendation task may be different from the data sources used in generating the explanation (Vig et al., 2009; Herlocker et al., 2000; Symeonidis et al., 2008; Bilgic & Mooney, 2005; Billsus & Pazzani, 1999). Hence, explanation generation is typically a separate module from the recommender system. However, performing the recommendation task based on the items’ explainability (thus integrating recommendation and explanation) may improve the transparent suggestion of interpretable items to the user, while enjoying the powerful prediction of a model-based CF approach.

Zhang et al. (Zhang et al., 2014) proposed a model-based CF approach to generate explainable recommendations based on item features and sentiment analysis of user reviews, used as data sources in addition to the ratings data. Their approach is similar to our proposed method in that the recommender model suggests highly explainable items as recommendations, and in that the recommendation and explanation engines are not separate. They further expanded their feature-level explanations by considering different forms such as word clouds (Zhang, 2015). Also, (Abdollahi & Nasraoui, 2016) proposed an Explainable MF (EMF) method for explainable CF. In contrast to Zhang et al. (Zhang et al., 2014), the EMF approach does not require any additional data, such as reviews, for explanation generation. Herlocker et al. (Herlocker et al., 2000) performed a detailed study on 21 different styles of explanation generation for neighbor-based CF methods, including content-based explanations that present a list of features from the recommended item to the user. Symeonidis et al. (Symeonidis et al., 2008) also proposed an explanation approach based on the content features of the items. Their recommender system and their explanation approach are both content-based, but they use different algorithms. Bilgic and Mooney (Bilgic & Mooney, 2005) proposed three forms of explanation: keyword style, neighbor style, and influence style, as explanation approaches separate from their recommender system, a hybrid approach called LIBRA that recommends books (Mooney & Roy, 2000). Billsus and Pazzani (Billsus & Pazzani, 1999) presented a keyword-style and influence-style explanation approach for their news recommendation system, which generates explanations and adapts its recommendations to the user’s preferences and interests.
In all the reviewed existing approaches except (Herlocker et al., 2000), the explanations resort to external data such as item content or reviews. While useful when available, such data are outside the scope of this paper, which primarily aims at explainability of pure CF without resorting to external data.

Figure 1: An example of a user-based neighbor style explanation for a recommended item.
Restricted Boltzmann Machines

RBM is a two-layer stochastic neural network consisting of visible and hidden units. Each visible unit is connected to all the hidden units by undirected edges, while no two visible units and no two hidden units are connected to each other. The stochastic, binary visible units encode user preferences on the items from the training data, so the state of every visible unit is known. The hidden units are also stochastic, binary variables, and capture the latent features. A probability is assigned to each pair of a hidden and a visible vector:

p(v, h) = \frac{1}{Z} e^{-E(v, h)} \qquad (1)

where E(v, h) is the energy of the system and Z is a normalizing factor (the partition function), as defined in (Hinton, 2002). To train the weights, a Contrastive Divergence method was proposed by Hinton (Hinton, 2002). Salakhutdinov et al. (Salakhutdinov et al., 2007) proposed an RBM framework for CF. Their model assumes one RBM for each user and takes only rated items into consideration when learning the weights. They presented the results of their approach on the Netflix data and showed that their technique was more accurate than Netflix’s own system. The focus of this RBM approach was only on accuracy and prediction error, not explanation generation.
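To make the RBM structure and the Contrastive Divergence training concrete, the following minimal sketch trains a small binary RBM with one-step CD (CD-1) on toy binary preference vectors. The layer sizes, learning rate, epoch count, and data are illustrative assumptions, not the settings used in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden = 6, 3          # illustrative layer sizes
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
a = np.zeros(n_visible)             # visible biases
b = np.zeros(n_hidden)              # hidden biases

# Toy binary "preferences": each row is one user's visible vector.
V = rng.integers(0, 2, size=(20, n_visible)).astype(float)

lr = 0.1
for epoch in range(50):
    # Positive phase: hidden probabilities given the data.
    ph_data = sigmoid(V @ W + b)
    h_sample = (rng.random(ph_data.shape) < ph_data).astype(float)
    # Negative phase: one Gibbs step (CD-1 reconstruction).
    pv_recon = sigmoid(h_sample @ W.T + a)
    ph_recon = sigmoid(pv_recon @ W + b)
    # CD-1 gradient approximation: <v h>_data - <v h>_recon.
    W += lr * (V.T @ ph_data - pv_recon.T @ ph_recon) / len(V)
    a += lr * (V - pv_recon).mean(axis=0)
    b += lr * (ph_data - ph_recon).mean(axis=0)

# Mean-field reconstruction error after training.
recon_error = np.mean((V - sigmoid(sigmoid(V @ W + b) @ W.T + a)) ** 2)
```

After training, the reconstruction error drops below the 0.25 baseline of an untrained network (whose sigmoid outputs are all 0.5 against 0/1 targets).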

Proposed Explainable RBM

In this section, we present our explainable RBM framework. First, we present our approach for measuring explainability of each item for each user using explicit rating data. Next, we propose the explainable RBM framework.

Figure 2: Conditional RBM for explainability.
Explainability Score

Explainability can be formulated based on the rating distribution within the active user’s neighborhood. The main idea is that if many neighbors have rated the recommended item, then this could provide a basis upon which to explain the recommendation, using neighborhood-style explanation mechanisms. For user-based neighbor-style explanations, such as the ones shown in Figure 1, we can therefore define the Explainability Score of item i for user u as follows:

E_{u,i} = \frac{\sum_{v \in N(u)} r_{v,i}}{|N(u)| \cdot R_{max}} \qquad (2)

where N(u) is the set of user u’s neighbors, r_{v,i} is the rating of neighbor v on item i, and R_{max} is the maximum rating value. Neighbors are determined based on the cosine similarity. Without loss of information, r_{v,i} is considered to be 0 for missing ratings, indicating that user v does not contribute to the user-based neighbor-style explanation of item i for user u. Given this definition, the Explainability Score clearly lies between zero and one. Item i is explainable for user u only if its explainability score is larger than zero; when no explanation can be made, the score is zero.
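Equation (2) can be computed directly from the rating matrix. The sketch below uses cosine-similarity neighborhoods over a toy matrix; the matrix values, neighborhood size k, and function name are illustrative assumptions.

```python
import numpy as np

def explainability_scores(R, k=2, r_max=5.0):
    """E[u, i] = sum of neighbors' ratings of item i / (|N(u)| * r_max).

    R is a users x items matrix with 0 for missing ratings (missing
    ratings contribute 0, as in Eq. (2)); a user's neighborhood is the
    k most cosine-similar other users.
    """
    norms = np.linalg.norm(R, axis=1, keepdims=True)
    norms[norms == 0] = 1.0                  # guard against all-zero rows
    unit = R / norms
    sim = unit @ unit.T                      # pairwise cosine similarity
    np.fill_diagonal(sim, -np.inf)           # a user is not its own neighbor
    E = np.zeros_like(R, dtype=float)
    for u in range(R.shape[0]):
        neighbors = np.argsort(sim[u])[-k:]  # indices of top-k similar users
        E[u] = R[neighbors].sum(axis=0) / (k * r_max)
    return E

# Toy example: 4 users x 3 items, ratings on a 1-5 scale, 0 = missing.
R = np.array([[5, 0, 3],
              [4, 2, 0],
              [5, 1, 4],
              [0, 5, 4]], dtype=float)
E = explainability_scores(R, k=2)
```

By construction every score falls in [0, 1], matching the range stated above.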

Conditional RBM with Explainability

The conditional RBM model takes explainability into account through an additional visible layer, m, with n nodes, where n is the number of items. Each node has a value between 0 and 1, indicating the explainability score of the corresponding item for the current user in the iteration, calculated as explained above. The idea is to define a joint distribution over (v, h), conditional on the explainability scores, m. Figure 2 presents the conditional RBM model with explainability. Based on (Hinton, 2010), the conditional activation probabilities and the energy are defined as:

p(h_j = 1 \mid v, m) = \sigma\Big(b_j + \sum_{i=1}^{n} v_i W_{ij} + \sum_{i=1}^{n} m_i D_{ij}\Big) \qquad (3)
p(v_i = 1 \mid h) = \sigma\Big(a_i + \sum_{j=1}^{F} h_j W_{ij}\Big) \qquad (4)
E(v, h, m) = -\sum_{i=1}^{n} a_i v_i - \sum_{j=1}^{F} b_j h_j - \sum_{i=1}^{n}\sum_{j=1}^{F} v_i h_j W_{ij} - \sum_{i=1}^{n}\sum_{j=1}^{F} m_i h_j D_{ij} \qquad (5)

where a_i and b_j are the visible and hidden biases, F and n are the numbers of hidden and visible units, respectively, and W and D are the weights of the entire network. \sigma is the logistic function \sigma(x) = 1/(1 + e^{-x}).

To avoid computing the intractable model expectation, we follow an approximation to the gradient of a different objective function, called “Contrastive Divergence” (CD) (Hinton, 2002):

\Delta W_{ij} = \epsilon \left( \langle v_i h_j \rangle_{data} - \langle v_i h_j \rangle_{recon} \right) \qquad (6)

where W_{ij} is an element of the learned matrix W that models the effect of the ratings on the hidden units. Learning D_{ij}, which models the effect of explainability on the hidden units, using CD is similar and takes the form:

\Delta D_{ij} = \epsilon \left( \langle m_i h_j \rangle_{data} - \langle m_i h_j \rangle_{recon} \right) \qquad (7)
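The two CD updates in Eqs. (6) and (7) differ only in which visible quantity is paired with the hidden units: the ratings v for W, and the explainability scores m for D. A single-step sketch of both updates for one user, with illustrative sizes and a fixed explainability vector m, might look like:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_items, n_hidden = 5, 4                              # illustrative sizes
W = 0.01 * rng.standard_normal((n_items, n_hidden))   # ratings -> hidden weights
D = 0.01 * rng.standard_normal((n_items, n_hidden))   # explainability -> hidden weights
b = np.zeros(n_hidden)                                # hidden biases

v = rng.integers(0, 2, size=n_items).astype(float)    # one user's binary preferences
m = rng.random(n_items)                               # explainability scores in [0, 1]
lr = 0.05

# Hidden units are conditioned on both the ratings v and the scores m.
ph_data = sigmoid(v @ W + m @ D + b)
h = (rng.random(n_hidden) < ph_data).astype(float)
v_recon = sigmoid(h @ W.T)                            # one Gibbs step; m stays fixed
ph_recon = sigmoid(v_recon @ W + m @ D + b)

# Eq. (6): pair the ratings with the hidden units.
W += lr * (np.outer(v, ph_data) - np.outer(v_recon, ph_recon))
# Eq. (7): the analogous update pairs m with the hidden units.
D += lr * (np.outer(m, ph_data) - np.outer(m, ph_recon))
```

Because m is an input condition rather than a modeled variable, it is held fixed during the Gibbs step and only its pairing with the hidden activities changes between the data and reconstruction phases.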
Experimental Results

We tested our approach on the MovieLens 111http://www.grouplens.org/node/12 ratings data, which consists of ratings, on a scale of 1 to 5, given by users to movies. The data is split into training and test sets such that the latest ratings from each user are selected for the test set and the remaining ratings are used in the training set. Ratings are normalized between 0 and 1 to be used as RBM input. We compare our results with RBM, Explainable Matrix Factorization (Abdollahi & Nasraoui, 2016), user-based top-n recommendations (Herlocker et al., 1999), and non-personalized most-popular-item recommendations. Each experiment is run multiple times and the average results are reported.

To assess the rating prediction accuracy, we used the Root Mean Squared Error (RMSE) metric: RMSE = \sqrt{\frac{1}{|T|} \sum_{(u,i) \in T} (r_{u,i} - \hat{r}_{u,i})^2}, where T is the set of test ratings. Note that RMSE can only be calculated for the prediction-based methods and not the top-n recommendation techniques. To evaluate the top-n recommendation task, we used the nDCG@10 metric following (Herlocker et al., 2004). The results for RMSE and nDCG, when varying the number of hidden units, F, are presented in Figure 3, top row. It can be observed that Explainable RBM (E-RBM) outperforms the other approaches over a range of values of F, for both RMSE and nDCG.

To evaluate the explainability of the proposed approach, we used the Mean Explainability Precision (MEP) and Mean Explainability Recall (MER) metrics (Abdollahi & Nasraoui, 2016), which are higher when the recommended items are explainable as per Eq. (2). The results are shown in Figure 3, bottom row. It can be observed that Explainable RBM outperforms the other approaches in terms of explainability. For a test user, the top-3 movies rated by the user with their genres, in addition to the top-3 recommended movies and the explanations generated using the proposed method, are presented in Table 1. Explanations can be presented to the users using the format shown in Figure 1.
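As a sketch of the explainability evaluation, the following computes MEP and MER on a toy example, assuming (per the cited EMF paper) that precision averages, per user, the fraction of recommended items that are explainable and recall the fraction of explainable items that were recommended; the function name, threshold theta, and data are illustrative assumptions.

```python
import numpy as np

def mep_mer(recommended, E, theta=0.0):
    """Mean Explainability Precision and Recall (illustrative sketch).

    An item is 'explainable' for a user when its explainability score
    in E exceeds theta. recommended[u] is user u's top-n list.
    """
    meps, mers = [], []
    for u, recs in enumerate(recommended):
        explainable = set(np.flatnonzero(E[u] > theta))
        hits = len(explainable & set(recs))
        meps.append(hits / len(recs) if recs else 0.0)
        mers.append(hits / len(explainable) if explainable else 0.0)
    return float(np.mean(meps)), float(np.mean(mers))

# Toy check: 2 users, 4 items, top-2 recommendation lists.
E = np.array([[0.8, 0.0, 0.5, 0.0],
              [0.0, 0.9, 0.0, 0.4]])
recommended = [[0, 1], [1, 3]]
mep, mer = mep_mer(recommended, E)  # -> (0.75, 0.75)
```

Here user 0’s list contains one explainable item out of two recommended (and out of two explainable), while user 1’s list covers both of its explainable items, giving MEP = MER = 0.75.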

Figure 3: RMSE, nDCG (top row) vs. F (number of hidden units), and MEP, MER (bottom row) vs. k (number of neighbors).
Conclusion

We presented an explainable RBM approach for CF recommendations that achieves both accuracy and interpretability by learning an RBM network that tries to estimate accurate user ratings while also taking into account the explainability of an item to a user. Both rating prediction and explainability are integrated within one learning goal, allowing the model to prioritize the recommendation of items that are explainable.

References

  • Abdollahi & Nasraoui (2016) Abdollahi, Behnoush and Nasraoui, Olfa. Explainable matrix factorization for collaborative filtering. In Proceedings of the 25th International Conference Companion on World Wide Web, pp. 5–6. International World Wide Web Conferences Steering Committee, 2016.
  • Bilgic & Mooney (2005) Bilgic, Mustafa and Mooney, Raymond J. Explaining recommendations: Satisfaction vs. promotion. In Beyond Personalization Workshop, IUI, volume 5, 2005.
  • Billsus & Pazzani (1999) Billsus, Daniel and Pazzani, Michael J. A personal news agent that talks, learns and explains. In Proceedings of the third annual conference on Autonomous Agents, pp. 268–275. ACM, 1999.
  • Herlocker et al. (1999) Herlocker, Jonathan L, Konstan, Joseph A, Borchers, Al, and Riedl, John. An algorithmic framework for performing collaborative filtering. In Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval, pp. 230–237. ACM, 1999.
  • Herlocker et al. (2000) Herlocker, Jonathan L, Konstan, Joseph A, and Riedl, John. Explaining collaborative filtering recommendations. In Proceedings of the 2000 ACM conference on Computer supported cooperative work, pp. 241–250. ACM, 2000.
  • Herlocker et al. (2004) Herlocker, Jonathan L, Konstan, Joseph A, Terveen, Loren G, and Riedl, John T. Evaluating collaborative filtering recommender systems. ACM Transactions on Information Systems (TOIS), 22(1):5–53, 2004.
  • Hinton (2010) Hinton, Geoffrey. A practical guide to training restricted boltzmann machines. Momentum, 9(1):926, 2010.
  • Hinton (2002) Hinton, Geoffrey E. Training products of experts by minimizing contrastive divergence. Neural Comput., 14(8):1771–1800, August 2002. ISSN 0899-7667. doi: 10.1162/089976602760128018. URL http://dx.doi.org/10.1162/089976602760128018.
  • Hinton & Salakhutdinov (2006) Hinton, Geoffrey E and Salakhutdinov, Ruslan R. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
  • Mooney & Roy (2000) Mooney, Raymond J and Roy, Loriene. Content-based book recommending using learning for text categorization. In Proceedings of the fifth ACM conference on Digital libraries, pp. 195–204. ACM, 2000.
  • Salakhutdinov et al. (2007) Salakhutdinov, Ruslan, Mnih, Andriy, and Hinton, Geoffrey. Restricted boltzmann machines for collaborative filtering. In Proceedings of the 24th International Conference on Machine Learning, ICML ’07, pp. 791–798, New York, NY, USA, 2007. ACM. ISBN 978-1-59593-793-3. doi: 10.1145/1273496.1273596. URL http://doi.acm.org/10.1145/1273496.1273596.
  • Symeonidis et al. (2008) Symeonidis, Panagiotis, Nanopoulos, Alexandros, and Manolopoulos, Yannis. Justified recommendations based on content and rating data. In WebKDD Workshop on Web Mining and Web Usage Analysis, 2008.
  • Tintarev & Masthoff (2007) Tintarev, Nava and Masthoff, Judith. A survey of explanations in recommender systems. In Data Engineering Workshop, 2007 IEEE 23rd International Conference on, pp. 801–810. IEEE, 2007.
  • Tintarev & Masthoff (2011) Tintarev, Nava and Masthoff, Judith. Designing and evaluating explanations for recommender systems. In Recommender Systems Handbook, pp. 479–510. Springer, 2011.
  • Vig et al. (2009) Vig, Jesse, Sen, Shilad, and Riedl, John. Tagsplanations: explaining recommendations using tags. In Proceedings of the 14th international conference on Intelligent user interfaces, pp. 47–56. ACM, 2009.
  • Zanker (2012) Zanker, Markus. The influence of knowledgeable explanations on users’ perception of a recommender system. In Proceedings of the Sixth ACM Conference on Recommender Systems, RecSys ’12, pp. 269–272, New York, NY, USA, 2012. ACM. ISBN 978-1-4503-1270-7. doi: 10.1145/2365952.2366011. URL http://doi.acm.org/10.1145/2365952.2366011.
  • Zhang (2015) Zhang, Yongfeng. Incorporating phrase-level sentiment analysis on textual reviews for personalized recommendation. In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining, pp. 435–440. ACM, 2015.
  • Zhang et al. (2014) Zhang, Yongfeng, Lai, Guokun, Zhang, Min, Zhang, Yi, Liu, Yiqun, and Ma, Shaoping. Explicit factor models for explainable recommendation based on phrase-level sentiment analysis. In Proceedings of the 37th international ACM SIGIR conference on Research & development in information retrieval, pp. 83–92. ACM, 2014.