Privacy-Adversarial User Representations in Recommender Systems

Abstract.

Latent factor models for recommender systems represent users and items as low-dimensional vectors. Privacy risks have previously been studied mostly in the context of recovering personal information, in the form of usage records, from the training data. However, the user representations themselves may be used together with external data to recover private user information such as gender and age. In this paper we show that user vectors calculated by a common recommender system can be exploited in this way. We propose the privacy-adversarial framework to eliminate such leakage, and study the trade-off between recommender performance and leakage both theoretically and empirically, using a benchmark dataset. We briefly discuss further applications of this method towards the generation of deeper and more insightful recommendations.

Keywords: Privacy, Representations

1. Introduction

With the increasing popularity of digital content consumption, recommender systems have become a major influence on online behavior, from what news we read and what movies we watch, to what products we buy. Recommender systems have revolutionized the way items are chosen across multiple domains, moving from the previous active-search scenario to the more passive selection of presented content.

A recommender system needs to fulfill two criteria in order to supply relevant recommendations that will be helpful to users: it has to accurately model both the users and the items. The first condition implies that recommenders aim at revealing users' preferences and desires, and even at learning when to suggest different items (Adomavicius and Tuzhilin, 2015) and how many times to recommend a given item.

Indeed, in recent years we have seen a plethora of advanced methods applied to the personalized recommendation problem. Furthermore, modern collaborative filtering approaches are becoming increasingly complex, not only in algorithmic terms but also in their ability to process and use additional data. Arguably, demographic information is the most valuable source of information for user modeling. As such, it has been used since the early days of recommender systems (Pazzani, 1999). Recent deep-learning-based state-of-the-art methods also utilize demographic information (Covington et al., 2016; Zhao et al., 2014) in order to generate better and more relevant predictions.

While this in-depth modeling of users holds great value, it may also pose severe privacy threats. One major privacy concern, especially when private information is explicitly used during training, is the recovery of records from the training data by an attacker. This aspect of privacy has previously been studied in the context of recommender systems using the framework of differential privacy (see Section 2 below). A related but different threat, which to the best of our knowledge has not yet been addressed in this context, is the recovery of private information that was not present in the training data. In this implicit private information attack, either the recommendations or the user representations within a recommender system are used together with external information to uncover private information about the users. For example, an attacker with access to the gender of a subset of the users could use their representations and a supervised learning approach to infer the gender of all users.

In this paper we introduce the threat of implicit private information leakage in recommender systems. We show the existence of this leakage both theoretically and experimentally, using a standard recommender and a benchmark dataset. Finally, we propose the privacy-adversarial method of constructing a recommender from which the target private information cannot be read out, and demonstrate both its ability to conceal the private information and the trade-off between recommender performance and private information leakage.

2. Related Work

Since the release of the Netflix Prize dataset (Bennett et al., 2007), a large body of work has shown that users' raw historical usage may reveal private information about them (e.g., (Weinsberg et al., 2012; Narayanan and Shmatikov, 2008)). That is, given the historical usage of an anonymized user, one can infer the user's demographics or even their identity. Here we argue that the user representations themselves may reveal private information, even without access to the training data.

Privacy has also been studied recently in the context of recommender systems (Berlioz et al., 2015; Nikolaenko et al., 2013; McSherry and Mironov, 2009; Friedman et al., 2016; Liu et al., 2015; Shen and Jin, 2014). This growing body of work has been concerned for the most part with differentially private recommender systems, always with the aim of guaranteeing that the actual records used to train the system are not recoverable. Unlike these works, we are concerned with the leakage of private information (such as demographics: age, gender, etc.) that was not directly present during training, but was implicitly learned by the system in the process of generating a useful user representation. These two ideas are not mutually exclusive, and may be combined to achieve a better privacy-preserving recommender.

The problem of implicit private information studied here is closely related to the one studied in (Zemel et al., 2013). In that work, the authors look for representations that achieve both group fairness and individual fairness in classification tasks. Individual fairness means that two people with similar representations should be treated similarly. Group fairness means that, given a group we wish to protect (some subset of the population), the proportion of people positively classified within the protected group equals the proportion in the entire population. They achieve this goal by solving an optimization problem that learns an intermediate representation obfuscating membership in the protected group. In both cases the aim is to achieve good results on the respective predictive tasks, while using a representation that is agnostic to some aspects of the implicit structure of the domain.

Several works apply adversarial training for the purpose of creating a representation free of a specific attribute (Beutel et al., 2017; Xie et al., 2017; Zhang et al., 2018) (in fact, the original purpose of the method can also be seen as such). To the best of our knowledge, the current paper represents the first application of these ideas in the domain of recommender systems. Furthermore, while all the aforementioned applications experiment with only one feature at a time, in this work we aim to create a representation free of multiple demographic features.

3. Privacy and User Representations

User representations in recommendation systems are designed to encode the relevant information to determine user-item affinity. As such, to the extent that item affinity is dependent on certain user characteristics (such as demographics), the optimal user representations must include this information as well in order to have the necessary predictive power with respect to recommendations. We formalize this intuition using an information theoretic approach:

Theorem 3.1.

Let $\hat{v}$ be an estimator of an outcome variable $v$ associated with a pair of user and item representations, and let $d$ be any variable associated with users. If $I(\hat{v}; v) > H(v \mid d)$ then $I(\hat{v}; d) > 0$.

Proof.

By assumption $I(\hat{v}; v) > H(v \mid d)$; rearranging and using $H(v \mid d) = H(v) - I(v; d)$ we have:

$I(\hat{v}; v) + I(v; d) > H(v)$

and therefore:

$I(\hat{v}; d) > 0.$

To see how the final step follows, suppose on the contrary:

$I(\hat{v}; d) = 0$

then from the above and by the chain rule for information we have:

$I(\hat{v}, d; v) = I(\hat{v}; v) + I(d; v \mid \hat{v}) \geq I(\hat{v}; v) + I(d; v) > H(v)$

(the middle inequality holds because $\hat{v}$ and $d$ are assumed independent), in contradiction to the relation of entropy and mutual information, $I(\hat{v}, d; v) \leq H(v)$. ∎
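To make the threshold concrete, the following is a minimal numerical sketch (the toy joint distribution and all numbers are purely illustrative, not taken from any dataset): it computes $H(v)$, $H(v \mid d)$, and $I(d; v)$ for a small discrete example in which an age group $d$ strongly determines the chosen movie $v$, so that any estimator with $I(\hat{v}; v)$ above the computed $H(v \mid d)$ must carry information about age.

```python
import numpy as np

# Toy joint distribution p(d, v): rows = age group d, columns = movie v.
# Purely illustrative numbers: each age group strongly prefers one movie.
p_dv = np.array([
    [0.30, 0.03, 0.02],   # d = young
    [0.02, 0.30, 0.03],   # d = middle
    [0.03, 0.02, 0.25],   # d = older
])

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

p_d = p_dv.sum(axis=1)                        # marginal over age groups
p_v = p_dv.sum(axis=0)                        # marginal over movie choice
H_v = entropy(p_v)
H_v_given_d = entropy(p_dv) - entropy(p_d)    # chain rule: H(v|d) = H(d,v) - H(d)
I_dv = H_v - H_v_given_d                      # mutual information I(d; v)

print(f"H(v)    = {H_v:.3f} bits")
print(f"H(v|d)  = {H_v_given_d:.3f} bits")    # the threshold in Theorem 3.1
print(f"I(d; v) = {I_dv:.3f} bits")
# Any estimator v_hat with I(v_hat; v) > H(v|d) must satisfy I(v_hat; d) > 0,
# i.e. it necessarily leaks some information about the age group.
```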

Theorem 3.1 asserts that the predictions must include information about any relevant variable associated with users, whenever they are better than a certain threshold determined by the relevance of the variable. For example, if age is a strong determinant of the movies a user is likely to want to watch, then by looking at the recommendations for a user we should be able to extract some information about said user's age. Next, we show that the same is true of the user representations used by the recommender:

Theorem 3.2.

Let $\hat{v}$ be an estimator of outcome variable $v$ computed from a user representation $x_u$ and an item representation, and let $d$ be a variable associated with users. Then:

$I(x_u; d) \geq I(\hat{v}; d).$

Proof.

For any fixed item, $\hat{v}$ is a function of $x_u$ alone, and so by the data processing inequality we have $I(x_u; d) \geq I(\hat{v}; d)$. ∎

Corollary 3.3.

As a result of Theorems 3.1 - 3.2, for any meaningful recommendation system and user characteristic there will be information leakage between the user representation and the characteristic. Specifically, for any system whose predictions satisfy $I(\hat{v}; v) > H(v \mid d)$ for a user characteristic $d$, we have that:

$I(x_u; d) > 0.$

This assertion, that we cannot have both perfect privacy and performance in our setting, naturally leads to the question of the trade-off between the extent of information leakage, and the performance of the recommendation system. While the precise point selected on this trade-off curve is likely to be determined by the use-case, it would be reasonable to assume that in any case we will not want to sacrifice privacy unless we gain in performance. This can be understood as a Pareto optimality requirement on the multi-objective defined by the system and privacy objectives:

Definition 3.4.

The privacy-acceptable subset of a family $\mathcal{F}$ of recommendation systems of the form $f : (u, i) \mapsto \hat{v}$, with respect to a recommendation loss $L_{rec}$ and a privacy target $L_{priv}$, is the Pareto front of $\mathcal{F}$ in the multi-objective $(L_{rec}, L_{priv})$.
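For illustration only, the following is a minimal sketch of how one might extract the Pareto front of Definition 3.4 from measured (recommendation loss, privacy leakage) pairs; the candidate systems and their numbers below are hypothetical and are not measurements reported in this paper.

```python
from typing import List, Tuple

Candidate = Tuple[str, float, float]  # (name, recommendation loss, privacy leakage)

def pareto_front(candidates: List[Candidate]) -> List[Candidate]:
    """Return the Pareto-optimal systems under the bi-objective
    (recommendation loss, privacy leakage), both to be minimized."""
    front = []
    for name, loss, leak in candidates:
        dominated = any(
            (l2 <= loss and k2 <= leak) and (l2 < loss or k2 < leak)
            for _, l2, k2 in candidates
        )
        if not dominated:
            front.append((name, loss, leak))
    return front

# Hypothetical sweep over the adversarial weight of the same recommender.
systems = [
    ("lambda=0",      0.970, 0.061),  # best accuracy, most leakage
    ("lambda=0.1",    0.971, 0.004),
    ("lambda=10",     0.973, 0.001),
    ("misconfigured", 0.980, 0.050),  # dominated: worse on both objectives
]
print(pareto_front(systems))
```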

While the method described in the rest of this paper does not directly address the issue of selecting a privacy-acceptable system, we show that it is able to dramatically reduce information leakage while maintaining the majority of system performance. Future work will focus on methods and analytical tools, in the spirit of Theorem 3.1, for asserting that for a given system with a certain performance there does not exist a system (in the family under consideration) with at least equal performance and better privacy.

3.1. Privacy-Adversarial Recommendation Systems

In the previous section we showed that for any recommendation system in which the user representations capture enough information to make good predictions, these representations must reveal information about any pertinent user characteristic. In this section we describe a method to remove such information from the user representations, in a way that allows selecting a point along the trade-off curve between performance and information leakage.

The method we use borrows the key idea from domain-adversarial training (Ganin et al., 2016), where the aim is to learn a representation that is agnostic to the domain from which an example is drawn. Adapted to the problem at hand, this method enables us to construct a user representation from which the private information cannot be read out.

We start with an arbitrary latent factor recommendation system (which we assume is trained using a gradient method). An additional readout construct is then appended to the user vectors, the output of which is the private field(s) we wish to censor. During training we pursue two goals: (a) we would like to update the recommender parameters to optimize the original system objective, and (b) we would like to update the user vectors only, in order to harm the readout of the private information, while still optimizing the readout parameters themselves with respect to the private-information readout target. This is achieved by applying the gradient reversal trick introduced in (Ganin et al., 2016), leading to the following update rule for the user representation $x_u$:

(1)   $x_u \leftarrow x_u - \eta \frac{\partial L_{rec}}{\partial x_u} + \eta \lambda \frac{\partial L_{dem}}{\partial x_u}$

where $\eta$ and $\eta \lambda$ are the general learning rate and the adversarial training learning rate, respectively; $L_{rec}$ is the recommendation system loss, and $L_{dem}$ is the loss for the demographic field prediction task.

Two special cases are noteworthy. First, for $\lambda = 0$ this formulation reduces back to the regular recommendation system. Second, setting $\lambda = -1$ we recover the multi-task setting, where we try to achieve both the recommendation and the demographic prediction tasks simultaneously.

The gradient descent update for the rest of the recommendation system parameters (namely, the item representation and biases) is done by the regular update rule. Likewise, the parameters for the demographic field readouts are optimized in the same way with respect to their respective classification objectives.
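As an illustration of how the update rule in Eq. (1) can be realized in practice, the following is a minimal PyTorch sketch of the gradient reversal trick (Ganin et al., 2016); the class and function names are our own and are not taken from the authors' implementation.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the incoming gradient by
    -lambda in the backward pass (the gradient reversal trick)."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# If the demographic readout receives grad_reverse(user_vecs, lam), a single
# optimizer step realizes Eq. (1): the user vectors descend the recommendation
# loss and ascend the demographic loss, while the readout's own parameters
# still descend the demographic loss.
```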

4. Results

Data

The experiments in this section were conducted on the MovieLens 1M dataset (Harper and Konstan, 2016). This extensively studied dataset (see for example (Miller et al., 2003; Chen et al., 2010; Jung, 2012; Peralta, 2007)) includes 1,000,209 ratings given by 6,040 users to 3,706 movies. In addition, demographic information in the form of age and gender is provided for each user. Gender (male/female) is skewed towards male, with 71.7% of users in the male category. Age is divided into seven groups (under 18, 18-24, 25-34, 35-44, 45-49, 50-55, and 56+), with 34.7% of users in the most popular group, 25-34. This means that when absolutely no personal data is given about an arbitrary user, the prediction accuracy for gender and age group cannot exceed 71.7% and 34.7%, respectively.
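As a quick sanity check of these baselines, the following sketch (assuming the standard ml-1m users.dat layout of UserID::Gender::Age::Occupation::Zip-code) computes the majority-class accuracies directly from the demographic file:

```python
import pandas as pd

# Assumes the standard MovieLens 1M users.dat file, '::'-separated:
# UserID::Gender::Age::Occupation::Zip-code
users = pd.read_csv(
    "ml-1m/users.dat", sep="::", engine="python",
    names=["user_id", "gender", "age", "occupation", "zip"],
)

gender_baseline = users["gender"].value_counts(normalize=True).max()
age_baseline = users["age"].value_counts(normalize=True).max()
print(f"majority-gender baseline: {gender_baseline:.1%}")  # ~71.7% (male)
print(f"majority-age baseline:    {age_baseline:.1%}")     # ~34.7% (25-34 group)
```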

Recommendation System

We use the Bayesian Personalized Ranking (BPR) recommendation system (Rendle et al., 2009), a natural choice for ranking tasks. The model is modified for adversarial demographic prediction by appending a linear readout structure from the user vectors to each of the demographic targets (binary gender and 7-category age). The so-called gradient reversal trick is applied between the user vectors and each of the demographic readout structures, so that during training the user vectors are effectively optimized with respect to the recommendation task, but de-optimized with respect to the demographic prediction. At the same time, the demographic readouts are optimized to use the current user representation to predict gender and age. The result of this scheme is a user representation that is good for recommendation but not for demographic prediction (i.e., one purged of the information we do not want it to contain). We note that the same method could be applied to any type of recommendation system that includes user representations and is trained using a gradient-based method.
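A rough sketch of such a privacy-adversarial BPR-style model is given below, reusing the grad_reverse helper sketched earlier; the module structure, names, and hyper-parameters are our own simplified assumptions and not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrivacyAdversarialBPR(nn.Module):
    def __init__(self, n_users, n_items, dim=10, lam=1.0):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.item_bias = nn.Embedding(n_items, 1)
        self.gender_readout = nn.Linear(dim, 2)   # binary gender
        self.age_readout = nn.Linear(dim, 7)      # 7 age brackets
        self.lam = lam

    def forward(self, users, pos_items, neg_items):
        u = self.user_emb(users)
        pos = (u * self.item_emb(pos_items)).sum(-1) + self.item_bias(pos_items).squeeze(-1)
        neg = (u * self.item_emb(neg_items)).sum(-1) + self.item_bias(neg_items).squeeze(-1)
        bpr_loss = -F.logsigmoid(pos - neg).mean()
        # Demographic readouts see the user vectors through gradient reversal.
        u_rev = grad_reverse(u, self.lam)
        return bpr_loss, self.gender_readout(u_rev), self.age_readout(u_rev)

# Training sketch: total loss = bpr_loss
#   + F.cross_entropy(gender_logits, gender) + F.cross_entropy(age_logits, age);
# one optimizer step then trains the readouts to predict demographics while
# pushing the user embeddings to make that prediction impossible.
```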

Evaluation

Recommendation systems were evaluated using a held-out set. For each user in the MovieLens 1M dataset, the final movie that they watched was set aside for testing and never seen during training. The fraction of users for whom this held-out movie appears in the top-k recommendations (Koren, 2008) is reported as model accuracy (we use $k = 10$). Private information in the user representations was evaluated using both a neural-net predictor of the same form used during adversarial training, and a host of standard classifiers (SVMs with various parameters and kernels, Decision Trees, Random Forests, and Gradient Boosting; see Table 1). The rest of the results are reported for the original neural classifier with a cross-validation procedure.
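The two evaluation steps described above might look roughly as follows (a sketch under our own assumptions about data layout; the use of scikit-learn mirrors the standard classifiers of Table 1 but is not taken from the paper):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def topk_hit_rate(user_vecs, item_vecs, item_bias, held_out, seen, k=10):
    """Fraction of users whose single held-out movie appears in their top-k
    list; `held_out` maps user -> held-out item, `seen` maps user -> set of
    training items (masked so they are never re-recommended)."""
    scores = user_vecs @ item_vecs.T + item_bias        # (n_users, n_items)
    hits = 0
    for u, target in held_out.items():
        s = scores[u].copy()
        s[list(seen[u])] = -np.inf
        topk = np.argpartition(-s, k)[:k]
        hits += int(target in topk)
    return hits / len(held_out)

def demographic_leakage(user_vecs, labels):
    """Cross-validated accuracy of predicting a demographic label (gender or
    age bracket) from user representations; compare against the majority-class
    baseline to quantify leakage."""
    clf = SVC(kernel="linear", C=1.0)
    return cross_val_score(clf, user_vecs, labels, cv=5).mean()
```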

classifier | gender | age
largest class baseline | 71.70 | 34.70
softmax neural net readout | 71.70 | 34.47
SVM (linear; C=0.1) | 71.71 | 34.62
SVM (linear; C=1) | 71.71 | 34.59
SVM (linear; C=10) | 71.71 | 34.59
SVM (RBF kernel) | 71.71 | 29.20
Decision Tree | 64.00 | 25.44
Random Forest | 69.80 | 29.20
Gradient Boosting | 71.79 | 33.48
Table 1. Verification of the inability to predict demographic fields (accuracy, %) from user representations trained with the privacy-adversarial method. Results in this table are given for representations of size 10 with $\lambda = 1$.
size / $\lambda$ | 0 | .01 | .1 | 1 | 10
10 | 76.97 | 74.21 | 71.33 | 71.70 | 71.60
20 | 77.55 | 74.26 | 71.50 | 72.30 | 71.03
50 | 77.80 | 74.34 | 72.24 | 86.00 | 74.26
naïve | 71.70%
Table 2. Gender prediction accuracy (%) from user representations. The first column ($\lambda = 0$) corresponds to the regular recommendation system, and the following columns to privacy-adversarial training with the prescribed value of $\lambda$. Rows correspond to the size of the user and item representations. The final row contains the naïve baseline of predicting the largest class.

Results: Privacy Leakage

The results show that private demographic information does indeed exist in the user representations of the standard recommendation system. Gender prediction accuracy (Table 2, $\lambda = 0$ column) increases with the size of the user representation and reaches 77.8% (recall that 71.7% of users are male). Likewise, age-bracket prediction accuracy also increases with the size of the user representation and reaches 44.90% (the largest category holds 34.7% of users). These results serve as the baseline against which the adversarially trained models are tested. Our aim in the privacy-adversarial setting is to reduce the classification results down to this baseline, reflecting user representations purged of the private information.

size / $\lambda$ | 0 | .01 | .1 | 1 | 10
10 | 41.29 | 36.28 | 34.29 | 34.47 | 34.31
20 | 44.16 | 36.34 | 33.97 | 35.70 | 38.01
50 | 44.90 | 36.46 | 34.62 | 70.27 | 52.7
naïve | 34.70%
Table 3. Age prediction accuracy (%) from user representations. The first column ($\lambda = 0$) corresponds to the regular recommendation system, and the following columns to privacy-adversarial training with the prescribed value of $\lambda$. Rows correspond to the size of the user and item representations. The final row contains the naïve baseline of predicting the largest class.

Results: Privacy-Adversarial Training

In the privacy-adversarial setting, the overall prediction results for both gender and age are diminished to the desired level of the largest-class baseline. With $\lambda = .1$, for example, age prediction is eliminated completely (reducing effectively to the 34.7% baseline) for all sizes of representation, and likewise for gender with representations of size 10 and 20. For size 50 we see some residual predictive power, though it is highly reduced relative to the regular recommendation system.

For the large representation (size 50) and large values of $\lambda$ (in the range of 1 to 10) we see an interesting reversal of the effect, with the demographic readout sometimes far above the regular recommendation system (e.g., when the embedding size is 50 and $\lambda = 1$, gender prediction reaches 86.0%). We suspect this happens because of the relatively high effective learning rate of the adversarial component, which causes training to diverge.

With respect to the trade-off between system performance and privacy, the results indicate (Table 4) that smaller user representations (size 10) are preferable for this small dataset. As expected, we see some degradation with adversarial training, but nevertheless we are able to eliminate the private information almost entirely with representations of size 10 and $\lambda = .1$, while sacrificing only a small proportion of performance (accuracy@10 of 2.88% instead of 3.05% for the regular system, a gender information gap of 0.37% and an age information gap of 0.41% from the majority-class baseline).

Together, these results show the existence of the privacy leakage in a typical recommendation system, and the ability to eliminate it with privacy-adversarial training while harming the overall system performance only marginally.

size / $\lambda$ | 0 | .01 | .1 | 1 | 10
10 | 3.05 | 2.76 | 2.88 | 2.43 | 2.67
20 | 3.00 | 2.68 | 2.04 | 2.20 | 2.07
50 | 2.65 | 2.38 | 2.22 | 2.22 | 2.15
Table 4. Recommendation system performance (accuracy@10, %) with privacy-adversarial training. The first column ($\lambda = 0$) corresponds to the regular recommendation system, and the following columns to privacy-adversarial training with the prescribed value of $\lambda$. Rows correspond to the size of the user and item representations.

5. Conclusions and Future Work

Latent factor models for recommender systems represent users and items by low-dimensional vectors. In this paper we discuss information leakage from user representations, and show that private demographic information can be read out even when it is not present in the training data. We propose the privacy-adversarial framework in the context of recommender systems: an adversarial component is appended to the model for each of the demographic variables we want to obfuscate, so that the learned user representations are optimized in a way that precludes predicting these variables. We show that the proposed framework has the desired privacy-preserving effect, while having only a minimal adverse effect on recommender performance when using a well-chosen value of the trade-off parameter $\lambda$. Our experiments show that this value should be tuned per dataset, since values that are too large lead to instability of the adversarial component.

The proposed method can be used to obfuscate any private variable known during training (in this paper we discuss categorical variables, but the generalization to the continuous case is trivial). While the need to know the protected variables in advance may at first be seen as a shortcoming of the approach, it is interesting to note that it would be inherently infeasible to force the representation to exclude every factor implicitly associated with item choice; clearly, in such a case there would be no information left to drive recommendations. The intended use of the method is rather to hide a small set of protected variables known during training, while using the rest of the implicit information in the usage data to drive recommendations.

An interesting topic for further research is the amount of private information that is available in the top-k recommendations themselves. Since the sole reason private demographic information is present in the user representations is to help drive recommendations, it stands to reason that it would be possible to design a method of reverse-engineering in the form of a readout from the actual recommendations. Such a leakage, to the extent that it indeed exists, would have much further reaching practical implications for privacy and security of individuals.

Another topic for further research is the use of privacy-adversarial training to boost the personalization and specificity of recommendations. By eliminating the demographic (or other profile related) information, suggested items are coerced out of stereotypical templates related to coarse profiling. It is our hope that user testing will confirm that this leads to deeper and more meaningful user models, and overall higher quality recommendations.


References

  1. Gediminas Adomavicius and Alexander Tuzhilin. 2015. Context-aware recommender systems. In Recommender systems handbook. Springer, 191–226.
  2. James Bennett, Stan Lanning, and others. 2007. The netflix prize. In Proceedings of KDD cup and workshop, Vol. 2007. New York, NY, USA, 35.
  3. Arnaud Berlioz, Arik Friedman, Mohamed Ali Kaafar, Roksana Boreli, and Shlomo Berkovsky. 2015. Applying differential privacy to matrix factorization. In Proceedings of the 9th ACM Conference on Recommender Systems. ACM, 107–114.
  4. Alex Beutel, Jilin Chen, Zhe Zhao, and Ed H Chi. 2017. Data decisions and theoretical implications when adversarially learning fair representations. arXiv preprint arXiv:1707.00075 (2017).
  5. Yan Chen, F Maxwell Harper, Joseph Konstan, and Sherry Xin Li. 2010. Social comparisons and contributions to online communities: A field experiment on movielens. American Economic Review 100, 4 (2010), 1358–98.
  6. Paul Covington, Jay Adams, and Emre Sargin. 2016. Deep neural networks for youtube recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems. ACM, 191–198.
  7. Arik Friedman, Shlomo Berkovsky, and Mohamed Ali Kaafar. 2016. A differential privacy framework for matrix factorization recommender systems. User Modeling and User-Adapted Interaction 26, 5 (2016), 425–458.
  8. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. The Journal of Machine Learning Research 17, 1 (2016), 2096–2030.
  9. F Maxwell Harper and Joseph A Konstan. 2016. The movielens datasets: History and context. ACM Transactions on Interactive Intelligent Systems (TiiS) 5, 4 (2016), 19.
  10. Jason J Jung. 2012. Attribute selection-based recommendation framework for short-head user group: An empirical study by MovieLens and IMDB. Expert Systems with Applications 39, 4 (2012), 4049–4054.
  11. Yehuda Koren. 2008. Factorization meets the neighborhood: a multifaceted collaborative filtering model. In Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 426–434.
  12. Ziqi Liu, Yu-Xiang Wang, and Alexander Smola. 2015. Fast differentially private matrix factorization. In Proceedings of the 9th ACM Conference on Recommender Systems. ACM, 171–178.
  13. Frank McSherry and Ilya Mironov. 2009. Differentially private recommender systems: Building privacy into the netflix prize contenders. In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 627–636.
  14. Bradley N Miller, Istvan Albert, Shyong K Lam, Joseph A Konstan, and John Riedl. 2003. MovieLens unplugged: experiences with an occasionally connected recommender system. In Proceedings of the 8th international conference on Intelligent user interfaces. ACM, 263–266.
  15. Arvind Narayanan and Vitaly Shmatikov. 2008. Robust de-anonymization of large sparse datasets. In Security and Privacy, 2008. SP 2008. IEEE Symposium on. IEEE, 111–125.
  16. Valeria Nikolaenko, Stratis Ioannidis, Udi Weinsberg, Marc Joye, Nina Taft, and Dan Boneh. 2013. Privacy-preserving matrix factorization. In Proceedings of the 2013 ACM SIGSAC conference on Computer & communications security. ACM, 801–812.
  17. Michael J Pazzani. 1999. A framework for collaborative, content-based and demographic filtering. Artificial intelligence review 13, 5-6 (1999), 393–408.
  18. Verónika Peralta. 2007. Extraction and integration of movielens and imdb data. Laboratoire Prisme, Université de Versailles, Versailles, France (2007).
  19. Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. 2009. BPR: Bayesian personalized ranking from implicit feedback. In Proceedings of the twenty-fifth conference on uncertainty in artificial intelligence. AUAI Press, 452–461.
  20. Yilin Shen and Hongxia Jin. 2014. Privacy-preserving personalized recommendation: An instance-based approach via differential privacy. In Data Mining (ICDM), 2014 IEEE International Conference on. IEEE, 540–549.
  21. Udi Weinsberg, Smriti Bhagat, Stratis Ioannidis, and Nina Taft. 2012. BlurMe: Inferring and obfuscating user gender based on ratings. In Proceedings of the sixth ACM conference on Recommender systems. ACM, 195–202.
  22. Qizhe Xie, Zihang Dai, Yulun Du, Eduard Hovy, and Graham Neubig. 2017. Controllable Invariance through Adversarial Feature Learning. In Advances in Neural Information Processing Systems. 585–596.
  23. Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. 2013. Learning fair representations. In International Conference on Machine Learning. 325–333.
  24. Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. 2018. Mitigating Unwanted Biases with Adversarial Learning. arXiv preprint arXiv:1801.07593 (2018).
  25. Xin Wayne Zhao, Yanwei Guo, Yulan He, Han Jiang, Yuexin Wu, and Xiaoming Li. 2014. We know what you want to buy: a demographic-based system for product recommendation on microblogs. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 1935–1944.