Model-Based Reinforcement Learning with Adversarial Training for Online Recommendation

Xueying Bai, Jian Guan , Hongning Wang
Department of Computer Science, Stony Brook University
Department of Computer Science and Technology, Tsinghua University
Department of Computer Science, University of Virginia
Both authors contributed equally.

Reinforcement learning is effective in optimizing policies for recommender systems. Current solutions mostly focus on model-free approaches, which require frequent interactions with a real environment, and thus are expensive in model learning. Offline evaluation methods, such as importance sampling, can alleviate such limitations, but usually require a large amount of logged data and do not work well when the action space is large. In this work, we propose a model-based reinforcement learning solution which models the user-agent interaction for offline policy learning via a generative adversarial network. To reduce bias in the learnt policy, we use the discriminator to evaluate the quality of generated sequences and rescale the generated rewards. Our theoretical analysis and empirical evaluations demonstrate the effectiveness of our solution in identifying patterns from given offline data and learning policies based on the offline and generated data.

1 Introduction

Recommender systems are widely used to recommend items that users would be interested in among the huge amount of content on the internet. However, because of users’ different interests and behaviors, only a small fraction of items are viewed by each user, with even less feedback recorded. This provides relatively little information on user-recommender interactions over such a large state and action space Chen et al. (2019a), and thus makes it challenging to serve users with their favored items at the appropriate time based on historical interactions. Therefore, it is important to develop approaches that learn users’ preferences from sparse user feedback such as clicks and purchases He et al. (2016), and explore unobserved interactions Koren et al. (2009) to further improve recommenders.

Users’ interests can be short-term or long-term and are reflected by different types of feedback Wu et al. (2017). For example, clicks are generally considered as short-term feedback reflecting users’ immediate interests during the interaction, while purchases reveal users’ long-term interests and usually happen after several clicks. Considering both users’ short-term and long-term interests, we frame the recommender system as a reinforcement learning (RL) agent, which aims to maximize users’ overall long-term satisfaction without sacrificing the recommendations’ short-term utility Shani et al. (2005).

Classical model-free RL methods require collecting large quantities of data by interacting with the environment, e.g., a population of users. Therefore, without interacting with real users, a recommender cannot easily probe for reward in previously unexplored regions in the state and action space. However, it is prohibitively expensive for a recommender to interact with users for reward and model updates, because bad recommendations (e.g., for exploration) hurt user satisfaction and increase the risk of user drop out. In this case, it is preferred for a recommender to learn a policy by fully utilizing the logged data that is acquired from other policies instead of direct interactions with users. For this purpose, in this work we take a model-based learning approach, in which we simultaneously estimate a model of user behavior from the offline data and use it to interact with our learning agent to obtain an improved policy.

Model-based RL has a strong advantage of being sample efficient and helping reduce noise in offline data. However, such an advantage can easily diminish due to the inherent bias in its model approximation of the real environment. Moreover, dramatic changes in subsequent policy updates impose the risk of decreased user satisfaction, i.e., inconsistent recommendations across model updates. To address these issues, we introduce adversarial training into a recommender’s policy learning from offline data. The discriminator is trained to differentiate simulated interaction trajectories from real ones so as to debias the user behavior model and improve policy learning. To the best of our knowledge, this is the first work to explore adversarial training over a model-based RL framework for recommendation. We theoretically and empirically demonstrate the value of our proposed solution in policy evaluation. Together, the main contributions of our work are as follows:

  • To avoid the high interaction cost, we propose a unified solution to more effectively utilize the logged offline data with model-based RL algorithms, integrated via adversarial training. It enables robust recommendation policy learning.

  • The proposed model is verified through theoretical analysis and extensive empirical evaluations. Experiment results demonstrate our solution’s better sample efficiency over the state-of-the-art baselines.¹

¹ Our implementation is available at

2 Related Work

Deep RL for recommendation There have been studies utilizing deep RL solutions in news, music and video recommendations Lu and Yang (2016); Liebman et al. (2015); Zheng et al. (2018). However, most of the existing solutions are model-free methods and thus do not explicitly model the agent-user interactions. Among these methods, value-based approaches, such as deep Q-learning Mnih et al. (2015), present unique advantages such as seamless off-policy learning, but are prone to instability with function approximation Sutton et al. (2000); Mnih et al. (2013), and the convergence of their learnt policies is not well-studied. In contrast, policy-based methods such as policy gradient Learning (1998) remain stable but suffer from data bias without real-time interactive control due to learning and infrastructure constraints. Oftentimes, importance sampling Munos et al. (2016) is adopted to address the bias but instead results in huge variance Chen et al. (2019a). In this work, we rely on a policy gradient based RL approach, in particular REINFORCE Williams (1992); but we simultaneously estimate a user behavior model to provide a reliable environment estimate so as to update our agent on-policy.

Model-based RL Model-based RL algorithms incorporate a model of the environment to predict rewards for unseen state-action pairs. They are known in general to outperform model-free solutions in terms of sample complexity Deisenroth et al. (2013), and have been applied successfully to control robotic systems both in simulation and the real world Deisenroth and Rasmussen (2011); Meger et al. (2015); Morimoto and Atkeson (2003); Deisenroth et al. (2011). Furthermore, Dyna-Q Sutton (1990); Peng et al. (2018) integrates model-free and model-based RL to generate samples for learning in addition to the real interaction data. Gu et al. (2016) extended these ideas to neural network models, and Peng et al. (2018) further applied the method to task-completion dialogue policy learning. However, the most efficient model-based algorithms have used relatively simple function approximations, which have difficulties in high-dimensional spaces with nonlinear dynamics and thus lead to large approximation bias.

Offline evaluation The problems of off-policy learning Munos et al. (2016); Precup (2000); Precup et al. (2001) and offline policy evaluation are generally pervasive and challenging in RL, and in recommender systems in particular. As a policy evolves, so does the distribution under which the expectation of gradient is computed. Especially in the scenario of recommender systems, where item catalogues and user behavior change rapidly, substantial policy changes are required; and therefore it is not feasible to take the classic approaches Schulman et al. (2015); Achiam et al. (2017) to constrain the policy updates before new data is collected under an updated policy. Multiple off-policy estimators leveraging inverse-propensity scores, capped inverse-propensity scores and various variance control measures have been developed Thomas and Brunskill (2016); Swaminathan and Joachims (2015b, a); Gilotte et al. (2018) for this purpose.

RL with adversarial training Yu et al. (2017) propose SeqGAN to extend GANs with an RL-like generator for the sequence generation problem, where the reward signal is provided by the discriminator at the end of each episode via a Monte Carlo sampling approach. The generator takes sequential actions and learns the policy using estimated cumulative rewards. In our solution, the generator consists of two components, i.e., our recommendation agent and the user behavior model, and we model the interactive process via adversarial training and policy gradient. Different from the sequence generation task which only aims to generate sequences similar to the given observations, we leverage adversarial training to help reduce bias in the user model and further reduce the variance in training our agent. The agent learns from both the interactions with the user behavior model and those stored in the logged offline data. To the best of our knowledge, this is the first work that utilizes adversarial training for improving both model approximation and policy learning on offline data.

3 Problem Statement

Our goal is to learn a recommender that recommends items to maximize the cumulative reward of an online recommendation system by efficiently utilizing offline data. We address this problem with a model-based reinforcement learning solution, which also needs to capture users’ behavior patterns.

Problem A recommender is formed as a learning agent to generate actions under a policy, where each action $a_t$ gives a recommendation list of $k$ items. Every time through interactions between the agent and the environment (i.e., users of the system), a set of sequences $\{\tau^{(i)}\}_{i=1}^{N}$ is recorded, where $\tau^{(i)}$ is the $i$-th sequence containing agent actions, user behaviors and rewards: $\tau^{(i)} = \{(a^{(i)}_t, c^{(i)}_t, r^{(i)}_t)\}_{t=1}^{T_i}$, $r^{(i)}_t$ represents the reward on $c^{(i)}_t$ (e.g., making a purchase), and $c^{(i)}_t$ is the associated user behavior corresponding to the agent’s action $a^{(i)}_t$ (e.g., a click on a recommended item). For simplicity, in the rest of the paper, we drop the superscript to represent a general sequence $\tau = \{(a_t, c_t, r_t)\}_{t=1}^{T}$. Based on the observed sequences, a policy is learnt to maximize the expected cumulative reward $\mathbb{E}\big[\sum_{t=1}^{T} r_t\big]$, where $T$ is the end time of $\tau$.

Assumption To narrow the scope of our discussion, we study a typical type of user behavior, i.e., clicks, and make the following assumptions: 1) at each time a user must click on one item from the recommendation list; 2) items not clicked in the recommendation list will not influence the user’s future behaviors; 3) rewards only relate to clicked items. For example, when taking the user’s purchase as reward, purchases can only happen on the clicked items.

Learning framework In a Markov Decision Process, an environment consists of a state set $S$, an action set $A$, a state transition distribution $P(s_{t+1} \mid s_t, a_t)$, and a reward function $f: S \times A \to \mathbb{R}$, which maps a state-action pair to a real-valued scalar. In this paper, the environment is modeled as a user behavior model $\mathcal{U}$, and learnt from offline log data. The state $s^u_t$ is reflected by the interaction history before time $t$, and $P(s^u_{t+1} \mid s^u_t, a_t)$ captures the transition of user behaviors. In the meanwhile, based on the assumptions mentioned above, at each time $t$, the environment generates the user’s click $c_t$ on items recommended by an agent in $a_t$ based on his/her click probabilities under the current state; and the reward function generates the reward $r_t$ for the clicked item $c_t$.

Our recommendation policy is learnt from both offline data and data sampled from the learnt user behavior model, i.e., a model-based RL solution. We incorporate adversarial training in our model-based policy learning to: 1) improve the user model to ensure the sampled data is close to the true data distribution; 2) utilize the discriminator to rescale rewards from generated sequences to further reduce bias in value estimation. Our proposed solution contains an interactive model constructed by $\mathcal{U}$ and $\mathcal{A}$, and an adversarial policy learning approach. We name the solution InteractiveRecGAN, or IRecGAN in short. The overview of our proposed solution is shown in Figure 1.

Figure 1: Model overview of IRecGAN. $\mathcal{A}$, $\mathcal{U}$ and $D$ denote the agent model, user behavior model, and discriminator, respectively. In IRecGAN, $\mathcal{U}$ and $\mathcal{A}$ interact with each other to generate recommendation sequences that are close to the true data distribution, so as to jointly reduce bias in $\mathcal{U}$ and improve the recommendation quality of $\mathcal{A}$.

4 Interactive Modeling for Recommendation

We present our interactive model for recommendation, which consists of two components: 1) the user behavior model $\mathcal{U}$ that generates user clicks over the recommended items with corresponding rewards; and 2) the agent $\mathcal{A}$ which generates recommendations according to its policy. $\mathcal{U}$ and $\mathcal{A}$ interact with each other to generate user behavior sequences for adversarial policy learning.

User behavior model Given users’ click observations $\{c_1, \ldots, c_{t-1}\}$, the user behavior model $\mathcal{U}$ first projects each clicked item $c_t$ into an embedding vector $e^u_{c_t}$ (as we can use different embeddings on the user side and agent side, we use the superscripts $u$ and $a$ to denote this difference accordingly). The state $s^u_t$ can be represented as a summary of the click history. We use a recurrent neural network to model the state transition on the user side, thus for the state we have,

$$s^u_t = h^u\big(s^u_{t-1}, e^u_{c_{t-1}}\big),$$

where $h^u$ can be functions in the RNN family like GRU Chung et al. (2014) and LSTM Hochreiter and Schmidhuber (1997) cells. Given the action $a_t$, i.e., the top-$k$ recommendations at time $t$, we compute the probability of click among the recommended items via a softmax function,

$$P(c_t \mid s^u_t, a_t) = \mathrm{softmax}(v_t), \quad v_t = V_{a_t}\big(W^c s^u_t + b^c\big), \qquad (1)$$
where $v_t$ is a transformed vector indicating the evaluated quality of each recommended item under state $s^u_t$, $V_{a_t}$ is the embedding matrix of recommended items, $W^c$ is the click weight matrix, and $b^c$ is the corresponding bias term. Under the assumption that target rewards only relate to clicked items, the reward for $(s^u_t, c_t)$ is calculated by:

$$r_t = f\big(e^{u\top}_{c_t}\big(W^r s^u_t + b^r\big)\big), \qquad (2)$$
where $W^r$ is the reward weight matrix, $b^r$ is the corresponding bias term, and $f(\cdot)$ is the reward mapping function, which can be set according to the reward definition in specific recommender systems. For example, if we make $r_t$ the purchase of a clicked item $c_t$, where $r_t = 1$ if it is purchased and $r_t = 0$ otherwise, $f$ can be realized by a Sigmoid function with binary output.
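To make the user-side computation concrete, here is a minimal numpy sketch of $\mathcal{U}$'s forward pass. It assumes a plain tanh RNN in place of the GRU/LSTM cell, and all class and weight names are illustrative, not the paper's implementation:

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

class UserBehaviorModel:
    """Minimal sketch of the user behavior model U (Eqs. 1-2).

    A plain tanh RNN stands in for the GRU/LSTM cell; weight names
    (W_c, W_r, ...) mirror the text but are illustrative only.
    """
    def __init__(self, n_items, d_emb=8, d_state=16, seed=0):
        rng = np.random.default_rng(seed)
        self.E = rng.normal(0, 0.1, (n_items, d_emb))      # item embeddings
        self.W_h = rng.normal(0, 0.1, (d_state, d_state))  # recurrent weights
        self.W_x = rng.normal(0, 0.1, (d_state, d_emb))    # input weights
        self.W_c = rng.normal(0, 0.1, (d_emb, d_state))    # click weight matrix
        self.b_c = np.zeros(d_emb)
        self.W_r = rng.normal(0, 0.1, (d_state,))          # reward weights
        self.b_r = 0.0
        self.d_state = d_state

    def step_state(self, s, clicked_item):
        """State transition: next user state from previous state and click."""
        return np.tanh(self.W_h @ s + self.W_x @ self.E[clicked_item])

    def click_probs(self, s, rec_list):
        """Softmax click probabilities over the k recommended items (Eq. 1)."""
        v = self.E[rec_list] @ (self.W_c @ s + self.b_c)
        return softmax(v)

    def reward(self, s, clicked_item):
        """Sigmoid reward head for the clicked item (Eq. 2, purchase case)."""
        z = self.W_r @ self.step_state(s, clicked_item) + self.b_r
        return 1.0 / (1.0 + np.exp(-z))

u = UserBehaviorModel(n_items=50)
s = np.zeros(u.d_state)
p = u.click_probs(s, rec_list=[3, 7, 11, 42])
```

The click head scores only the $k$ recommended items, matching the assumption that users evaluate the presented list rather than the full catalogue.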

Based on Eq (1) and (2), taking the categorical reward, the user behavior model $\mathcal{U}$ can be estimated from the offline data via maximum likelihood estimation:

$$\min_{\mathcal{U}}\; -\sum_{\tau}\sum_{t=1}^{|\tau|} \Big[\log P(c_t \mid s^u_t, a_t) + \lambda \log P(r_t \mid s^u_t, c_t)\Big], \qquad (3)$$
where $\lambda$ is a parameter balancing the loss between click prediction and reward prediction, and $|\tau|$ is the length of the observation sequence $\tau$. With a learnt user behavior model, user clicks and rewards on the recommendation list can be sampled from Eq (1) and (2) accordingly.
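The joint objective in Eq (3) is a click cross-entropy term plus a $\lambda$-weighted reward term; a minimal sketch for the binary (purchase) reward case, with illustrative argument names:

```python
import numpy as np

def user_model_loss(click_probs, clicked_idx, reward_probs, rewards, lam=1.0):
    """MLE loss for U: click prediction + lam * reward prediction (Eq. 3).

    click_probs : per-step softmax vectors over the k recommended items
    clicked_idx : index of the clicked item within each recommendation list
    reward_probs: predicted probability of reward (e.g. purchase) per step
    rewards     : observed binary rewards per step
    """
    eps = 1e-12
    click_nll = -sum(np.log(p[i] + eps)
                     for p, i in zip(click_probs, clicked_idx))
    reward_nll = -sum(r * np.log(q + eps) + (1 - r) * np.log(1 - q + eps)
                      for q, r in zip(reward_probs, rewards))
    return click_nll + lam * reward_nll
```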

Agent The agent should take actions based on the states provided by the environment. However, in practice, users’ states are not observable in a recommender system. Besides, as discussed in Oh et al. (2017), the states for the agent to take actions may be different from those for users to generate clicks and rewards. As a result, we build a different state model on the agent side in $\mathcal{A}$ to learn its states. Similar to that on the user side, given the projected click vectors $\{e^a_{c_1}, \ldots, e^a_{c_{t-1}}\}$, we model states on the agent side by $s^a_t = h^a(s^a_{t-1}, e^a_{c_{t-1}})$, where $s^a_t$ denotes the state maintained by the agent at time $t$ and $h^a$ is the chosen RNN cell. The start state for the first recommendation is drawn from a learnt distribution; we simply denote it as $s^a_1$ in the rest of our paper. We should note that although the agent also models states based on users’ click history, it might create different state sequences than those on the user side.

Based on the current state $s^a_t$, the agent generates a size-$k$ recommendation list out of the entire set of items as its action $a_t$. The probability of item $i$ to be included in $a_t$ under the policy $\pi$ is:

$$\pi(i \mid s^a_t) = \frac{\exp\big(W^a_i s^a_t + b^a_i\big)}{\sum_{j \in \mathcal{I}} \exp\big(W^a_j s^a_t + b^a_j\big)}, \qquad (4)$$
where $W^a_i$ is the $i$-th row of the action weight matrix $W^a$, $\mathcal{I}$ is the entire set of recommendation candidates, and $b^a_i$ is the corresponding bias term. Following Chen et al. (2019a), we generate $a_t$ by sampling without replacement according to Eq (4). Unlike Chen et al. (2019b), we do not consider the combinatorial effect among the items, by simply assuming the users will evaluate them independently (as indicated in Eq (1)).
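Sampling a size-$k$ list without replacement from the policy softmax can be sketched as sequential draws from the renormalized distribution over the remaining candidates (function and variable names are illustrative):

```python
import numpy as np

def recommend(logits, k, rng):
    """Sample a size-k recommendation list without replacement (Eq. 4).

    Items are drawn one at a time from the softmax over the remaining
    candidates, renormalizing after each draw.
    """
    probs = np.exp(logits - logits.max())
    probs = probs / probs.sum()
    items = np.arange(len(logits))
    chosen = []
    for _ in range(k):
        p = probs / probs.sum()          # renormalize remaining mass
        i = rng.choice(len(items), p=p)
        chosen.append(int(items[i]))
        items = np.delete(items, i)
        probs = np.delete(probs, i)
    return chosen

rng = np.random.default_rng(0)
rec = recommend(np.zeros(20), k=5, rng=rng)
```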

5 Adversarial Policy Learning

We use the policy gradient method REINFORCE Williams (1992) for the agent’s policy learning, based on both generated and offline data. When generating a sequence $\tau$, we obtain $a_t$ by Eq (4), $r_t$ by Eq (2), and $c_t$ by Eq (1). $c_{1:T}$ represents the clicks in the sequence and is generated by $\mathcal{U}$ and $\mathcal{A}$. The generation of a sequence ends at time $T$ if $c_T$ is a stopping symbol. The distributions of generated and offline data are denoted as $p_g$ and $p_{data}$ respectively. In the following discussions, we do not explicitly differentiate generated and offline sequences when their distribution is specified. Since we start the training of $\mathcal{U}$ from offline data, it introduces inherent bias from the observations and our specific modeling choices. The bias affects the sequence generation and thus may cause biased value estimation. To reduce the effect of bias, we apply adversarial training to control the training of both $\mathcal{U}$ and $\mathcal{A}$. The discriminator scores are also used to rescale the generated rewards for policy learning. Therefore, the learning of the agent considers both sequence generation and target rewards.

5.1 Adversarial training

We leverage adversarial training to encourage our IRecGAN model to generate high-quality sequences that capture intrinsic patterns in the real data distribution. A discriminator $D$ is used to evaluate a given sequence $\tau$, where $D(\tau)$ represents the probability that $\tau$ is generated from the real recommendation environment. The discriminator can be estimated by minimizing the objective function:

$$\min_{D}\; -\mathbb{E}_{\tau \sim p_{data}}\big[\log D(\tau)\big] - \mathbb{E}_{\tau \sim p_g}\big[\log\big(1 - D(\tau)\big)\big]. \qquad (5)$$
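The discriminator objective above is a standard binary cross-entropy over offline (real) and generated sequences; a minimal sketch given batches of discriminator scores:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy objective for D: reward log D(tau) on offline
    sequences and log(1 - D(tau)) on generated ones."""
    eps = 1e-12
    real = -np.mean(np.log(np.asarray(d_real) + eps))
    fake = -np.mean(np.log(1.0 - np.asarray(d_fake) + eps))
    return real + fake
```

A well-trained discriminator drives this loss toward 0; an uninformed one that outputs 0.5 everywhere incurs a loss of about 1.39.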
However, $D$ only evaluates a completed sequence, while it cannot directly evaluate a partially generated sequence $\tau_{1:t}$ at a particular time step $t$. Inspired by Yu et al. (2017), we utilize the Monte-Carlo tree search algorithm with the roll-out policy constructed by $\mathcal{U}$ and $\mathcal{A}$ to get the sequence generation score at each time. At time $t$, the sequence generation score $q_t$ of $\tau_{1:t}$ is defined as:

$$q_t = \begin{cases} \frac{1}{N}\sum_{n=1}^{N} D(\tau^n), \ \tau^n \in \mathrm{MC}^{\mathcal{U},\mathcal{A}}(\tau_{1:t}; N), & t < T, \\ D(\tau_{1:t}), & t = T, \end{cases} \qquad (6)$$
where $\mathrm{MC}^{\mathcal{U},\mathcal{A}}(\tau_{1:t}; N)$ is the set of $N$ sequences sampled from the interaction between $\mathcal{U}$ and $\mathcal{A}$, with $\tau_{1:t}$ as the prefix.

Given the observations in offline data, $\mathcal{U}$ should generate clicks and rewards that reflect intrinsic patterns of the real data distribution. Therefore, $\mathcal{U}$ should maximize the sequence generation objective, i.e., the expected discriminator score for generating a sequence from the start state. $\mathcal{U}$ may not generate clicks and rewards exactly the same as those in offline data, but the closeness of its generated data to offline data is still an informative signal to evaluate its sequence generation quality. By setting $q_t = 1$ at any time $t$ for offline data, we extend this objective to include offline data (it becomes the data likelihood function on offline data). Following Yu et al. (2017), based on Eq (1) and Eq (2), the gradient of $\mathcal{U}$’s objective can be derived as,

$$\nabla_{\theta_u} J = \mathbb{E}_{\tau}\Big[\sum_{t=1}^{T} q_t\, \nabla_{\theta_u}\big(\log P(c_t \mid s^u_t, a_t) + \lambda \log P(r_t \mid s^u_t, c_t)\big)\Big], \qquad (7)$$
where $\theta_u$ denotes the parameters of $\mathcal{U}$ and $\theta_a$ denotes those of $\mathcal{A}$. Based on our assumption, even when $\mathcal{U}$ can already capture users’ true behavior patterns, it still depends on $\mathcal{A}$ to provide appropriate recommendations to generate clicks and rewards that the discriminator will treat as authentic. Hence, $\mathcal{U}$ and $\mathcal{A}$ are coupled in this adversarial training. To encourage $\mathcal{A}$ to provide needed recommendations, we include $q_t$ as a sequence generation reward for $\mathcal{A}$ at time $t$ as well. As $q_t$ evaluates the overall generation quality of $\tau_{1:t}$, it ignores sequence generations after $t$. To evaluate the quality of a whole sequence, we require $\mathcal{A}$ to maximize the cumulative sequence generation reward $\sum_{t'=t}^{T} q_{t'}$. Because $\mathcal{A}$ does not directly generate the observations in the interaction sequence, we approximate the corresponding gradient term as 0 when calculating the gradients. Putting these together, the gradient derived from sequence generations for $\mathcal{A}$ is estimated as,

$$\nabla_{\theta_a} J \approx \mathbb{E}_{\tau \sim p_g}\Big[\sum_{t=1}^{T} \Big(\sum_{t'=t}^{T} q_{t'}\Big)\, \nabla_{\theta_a} \log \pi(c_t \mid s^a_t)\Big], \qquad (8)$$
Based on our assumption that only the clicked items influence user behaviors, and $\mathcal{U}$ only generates rewards on clicked items, we use $\pi(c_t \mid s^a_t)$ as an estimation of the action probability, i.e., $\mathcal{A}$ should promote $c_t$ in its recommendation at time $t$. In practice, we add a discount factor $\gamma$ when calculating the cumulative rewards to reduce estimation variance Chen et al. (2019a).
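The discounted cumulative reward used here is the standard REINFORCE return; a minimal helper:

```python
def discounted_returns(rewards, gamma=0.9):
    """R_t = sum over t' >= t of gamma^(t'-t) * r_{t'}, computed by a
    backward pass; R_t weights log pi(c_t | s_t) in the policy gradient."""
    R, out = 0.0, []
    for r in reversed(rewards):
        R = r + gamma * R
        out.append(R)
    return out[::-1]
```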

5.2 Policy learning

Because our adversarial training encourages IRecGAN to generate clicks and rewards with similar patterns as offline data, and we assume rewards only relate to the clicked items, we use offline data as well as generated data for policy learning and treat the offline rewards as an estimation of the rewards under $\pi$. Given data $\tau$, including both offline and generated data, the objective of the agent is to maximize the expected cumulative reward $\mathbb{E}_{\tau}\big[\sum_{t=1}^{T} R_t\big]$, where $R_t = \sum_{t'=t}^{T} \gamma^{t'-t} r_{t'}$. In the generated data, due to the difference in distributions of the generated and offline sequences, the generated reward calculated by Eq (2) might be biased. To reduce such bias, we utilize the sequence generation score $q_t$ in Eq (6) to rescale generated rewards: $\hat r_t = q_t \cdot r_t$, and treat it as the reward for generated data. The gradient of the objective is thus estimated by:

$$\nabla_{\theta_a} J^r = \mathbb{E}_{\tau}\Big[\sum_{t=1}^{T} R_t\, \nabla_{\theta_a} \log \pi(c_t \mid s^a_t)\Big], \qquad (9)$$
$R_t$ is an approximation of the expected future reward from time $t$, with $\gamma$ as the discount factor. Overall, the user behavior model $\mathcal{U}$ is updated only by the sequence generation objective defined in Eq (7) on both offline and generated data; but the agent $\mathcal{A}$ is updated by both sequence generation and target rewards. Hence, the overall reward for $\mathcal{A}$ at time $t$ is $q_t + \alpha R_t$, where $\alpha$ is the weight for cumulative target rewards. The overall gradient for $\mathcal{A}$ is thus:

$$\nabla_{\theta_a} J^{\mathcal{A}} = \mathbb{E}_{\tau}\Big[\sum_{t=1}^{T} \big(q_t + \alpha R_t\big)\, \nabla_{\theta_a} \log \pi(c_t \mid s^a_t)\Big]. \qquad (10)$$
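As a sketch of how these pieces combine on generated data, the per-step reward for the agent can be computed as below. The exact composition is our reading of the text, not the paper's implementation; `seq_scores` are the discriminator-based scores $q_t$ and `target_rewards` the generated target rewards:

```python
def agent_step_rewards(seq_scores, target_rewards, alpha=1.0, gamma=0.9):
    """Overall per-step reward for agent A on generated data (sketch).

    Generated target rewards are first rescaled by the sequence scores q_t
    (bias control), turned into discounted cumulative returns, and then
    combined with the sequence generation reward weighted by alpha.
    """
    rescaled = [q * r for q, r in zip(seq_scores, target_rewards)]
    returns, R = [], 0.0
    for x in reversed(rescaled):          # discounted cumulative reward
        R = x + gamma * R
        returns.append(R)
    returns.reverse()
    return [q + alpha * R for q, R in zip(seq_scores, returns)]
```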
6 Theoretical Analysis

For one iteration of policy learning in IRecGAN, we first train the discriminator with the offline data, which follows $p_{data}$ and was generated by an unknown logging policy, and with the data generated by IRecGAN under $\pi$ with distribution $p_g$. When $\mathcal{U}$ and $\mathcal{A}$ are learnt, for a given sequence $\tau$, by Proposition 1 in Goodfellow et al. (2014), the optimal discriminator is $D^*(\tau) = \frac{p_{data}(\tau)}{p_{data}(\tau) + p_g(\tau)}$.

Sequence generation Both $\mathcal{U}$ and $\mathcal{A}$ contribute to the sequence generation in IRecGAN. $\mathcal{U}$ is updated by the gradient in Eq (7) to maximize the sequence generation objective. At time $t$, the expected sequence generation reward on the generated data is $\mathbb{E}_{\tau_{1:t} \sim p_g}[q_t]$, and the expected sequence generation value is the sum of these rewards over time. Given the optimal discriminator $D^*$, the sequence generation value can be written as:

$$V^{seq} = \sum_{t=1}^{T} \mathbb{E}_{\tau_{1:t} \sim p_g}\left[\frac{p_{data}(\tau_{1:t})}{p_{data}(\tau_{1:t}) + p_g(\tau_{1:t})}\right]. \qquad (11)$$
Maximizing each term in the summation of Eq (11) is an objective for the generator at time $t$ in a GAN. According to Goodfellow et al. (2014), the optimal solution for all such terms is $p_g = p_{data}$. It means $\mathcal{U}$ can maximize the sequence generation value when it helps to generate sequences with the same distribution as $p_{data}$. Besides the global optimum, Eq (11) also encourages $\mathcal{U}$ to reward each prefix $\tau_{1:t}$, even if $\tau_{1:t}$ is less likely to be generated from $p_{data}$. This prevents IRecGAN from recommending items only considering users’ immediate preferences.

Value estimation The agent should also be updated to maximize the expected value of target rewards. To achieve this, we use the discriminator to rescale the estimation of rewards on the generated sequences, and we also combine offline data to evaluate the expected value for policy $\pi$:

$$\hat V(\pi) = \eta_g\, \mathbb{E}_{\tau \sim p_g}\Big[\sum_{t=1}^{T} \gamma^{t-1} D(\tau_{1:t})\, \hat r_t\Big] + \eta_{data}\, \mathbb{E}_{\tau \sim p_{data}}\Big[\sum_{t=1}^{T} \gamma^{t-1} r_t\Big], \qquad (12)$$
where $\hat r_t$ is the generated reward by $\mathcal{U}$ at time $t$ and $r_t$ is the true reward. $\eta_g$ and $\eta_{data}$ represent the ratio of generated data and offline data during model training, and we require $\eta_g + \eta_{data} = 1$. As a result, there are three sources of bias in this value estimation: the bias of the user behavior model $\mathcal{U}$ in generating clicks and rewards, the bias from the sampling ratio between generated and offline data, and the bias from the mismatch between the generated distribution $p_g$ and $p_{data}$.

Among these, the first two error terms in the expected value estimation of Eq (12) come from the bias of the user behavior model $\mathcal{U}$. Because the adversarial training helps to improve $\mathcal{U}$ to capture real data patterns, it decreases both of them. Because we can adjust the sampling ratio $\eta_g$ to reduce the corresponding term, that bias can also be made small. The sequence generation rewards for the agent encourage the distribution $p_g$ to be close to $p_{data}$; hence, the remaining bias can be reduced as well. This shows that our method has a bias controlling effect.

7 Experiments

Our theoretical analysis shows that reducing the model bias improves value estimation, and therefore improves policy learning. In this section, we conduct empirical evaluations on both real-world and synthetic datasets to demonstrate that our solution can effectively model the pattern of data for better recommendations, compared with state-of-the-art baselines.

7.1 Simulated Online Test

Given the complexity and difficulty of deploying a recommender system with real users for online evaluation, we use simulation-based studies to first investigate the effectiveness of our approach, following Zhao et al. (2019); Chen et al. (2019b).

Simulated Environment We synthesize an MDP to simulate an online recommendation environment. It has $m$ states and $n$ items for recommendation, with a randomly initialized transition probability matrix. Under each state, an item’s reward is uniformly sampled from the range of 0 to 1. During the interaction, given a recommendation list including $k$ items selected from the whole item set by an agent, the simulator first samples an item proportional to its ground-truth reward under the current state as the click candidate. A Bernoulli experiment is then performed on the sampled item with its ground-truth reward as the success probability; the simulator then moves to the next state according to the state transition probability. A special state is used to initialize all the sessions, which do not stop until the Bernoulli experiment fails. The immediate reward is 1 if the session continues to the next step; otherwise 0. In our experiment, $m$, $n$ and $k$ are set to 10, 50 and 10 respectively.
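The simulator described above can be sketched directly from this specification (class and attribute names are illustrative):

```python
import numpy as np

class RecSimulator:
    """Synthetic MDP per Sec. 7.1: m states, n items, size-k lists.

    Rewards r(s, i) ~ U(0, 1). On each step a click candidate is sampled
    proportional to its reward under the current state; a Bernoulli trial
    with that reward decides whether the session continues (immediate
    reward 1) or stops (reward 0).
    """
    def __init__(self, m=10, n=50, seed=0):
        self.rng = np.random.default_rng(seed)
        P = self.rng.random((m, m))
        self.P = P / P.sum(axis=1, keepdims=True)   # state transitions
        self.R = self.rng.random((m, n))            # ground-truth rewards
        self.state = 0                              # special start state

    def step(self, rec_list):
        r = self.R[self.state, rec_list]
        click = rec_list[self.rng.choice(len(rec_list), p=r / r.sum())]
        alive = self.rng.random() < self.R[self.state, click]
        self.state = self.rng.choice(len(self.P), p=self.P[self.state])
        return click, int(alive)
```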

Offline Data Generation We generate offline user logs, denoted by $\mathcal{D}$, with the simulator. We especially control the bias and variance in $\mathcal{D}$ by changing the logging policy and the size of $\mathcal{D}$ to compare the performance of different models. We adopt three different logging policies: 1) a uniformly random policy, 2) a maximum reward policy, and 3) a mixed reward policy. Specifically, the maximum reward policy recommends the top $k$ items with the highest ground-truth reward under the current simulator state at each step, while the mixed reward policy randomly selects $k$ items with either the top 20%-50% ground-truth reward or the highest ground-truth reward under a given state. In the meanwhile, we vary the size of $\mathcal{D}$ from 200 to 10,000.

Figure 2: Online evaluation results of coverage@r and cumulative rewards, under different logging policies and sizes of simulated offline data.
Figure 3: Online learning results of coverage@1 and coverage@10.

Baselines We compared our IRecGAN with the following baselines: 1). LSTM: only the user behavior model trained on offline data; 2). PG: only the agent model trained by policy gradient on offline data; 3). LSTMD: the user behavior model in IRecGAN, updated by adversarial training.

Experiment Settings The hyper-parameters in all models are set as follows: the item embedding dimension is set to 50, the discount factor $\gamma$ in value calculation is set to 0.9, and the scale factors are set to 3 and 1. We apply 2-layer LSTM units with 512-dimension hidden states. The ratio of generated training samples to offline data for each training epoch is set to 1:10. We use an RNN based discriminator in all experiments, with details provided in the appendix.

Online Evaluation After training our model and the baselines on the offline data, we deploy the learnt policies to interact with the simulator for online evaluation. We calculate coverage@r to measure the average proportion of the top $r$ relevant items in the ground-truth that are actually recommended by an agent (i.e., in its top $r$ recommendations) across all the time steps. The results of coverage@r under different configurations of offline data generation are reported in Figure 2. LSTM outperforms policy gradient under the uniformly random logging policy: because under this policy every item has an equal chance to be observed (i.e., full exploration), LSTM better recognizes each item’s true reward via maximum likelihood estimation. A similar comparison is observed between our user model and agent model: LSTMD can capture the true reward even with much less data. Under the maximum and mixed reward policies, it is easy for all models to recognize items with the highest reward. But the low coverage@10 suggests that they fail to capture the overall preference, as the items with lower reward are less likely to be clicked. This becomes more serious under the mixed reward policy, which requires a model to differentiate top relevant items from those with moderate reward. By generating more training data via adversarial training, our model performs better than the baselines.
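The coverage@r metric as described can be sketched as follows (argument names are illustrative):

```python
def coverage_at_r(recommended, ground_truth_top, r):
    """coverage@r: fraction of the r top ground-truth items that appear in
    the agent's top-r recommendations, averaged over time steps.

    recommended      : per-step ranked lists of recommended item ids
    ground_truth_top : per-step lists of the r highest-reward item ids
    """
    hits = [len(set(rec[:r]) & set(gt[:r])) / r
            for rec, gt in zip(recommended, ground_truth_top)]
    return sum(hits) / len(hits)
```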

After offline training, the average cumulative rewards for all methods are also evaluated and reported in the rightmost bars of Figure 2. The cumulative rewards are calculated by generating 1000 sequences with the environment and taking the average of their cumulative rewards. IRecGAN has a larger average cumulative reward than the other methods under all configurations, except the one with only 200 offline sequences. However, in that configuration it has lower variance than the other methods, which indicates the robustness of IRecGAN when the volume of offline data is limited.

Online Learning To evaluate our model’s effectiveness in reducing the bias of value estimation in a more practical setting, we execute online and offline learning alternately. Specifically, we separate the learning into two stages: first, the agents can directly interact with the simulator to update their policies, and we only allow them to generate 200 sequences in this stage; then they turn to the offline stage to reuse their generated data for offline learning. We iterate these two stages and record the recommendation performance during the online learning stage. We compare with the following baselines: 1) PG-online with only online learning, 2) PG-online&offline with online learning and reusing the generated data via policy gradient for offline learning, and 3) LSTM-offline with only offline learning. We train all the models from scratch and report the performance of coverage@1 and coverage@10 over 20 iterations in Figure 3. We can observe that LSTM-offline performs worse than all RL methods, especially in the later stage, due to its lack of exploration. PG-online improves slowly with high variance, as it does not reuse the generated data. Compared with PG-online&offline, IRecGAN has better convergence and coverage because of its reduced value estimation bias. We also find that coverage@10 is harder to improve. The key reason is that as a model identifies the items with high rewards, it tends to recommend them more often; this gives the less relevant items less chance to be explored, similar to our online evaluation experiments under the maximum and mixed reward policies. Our model-based RL training alleviates this bias to a certain extent by generating more training sequences, but it cannot eliminate it entirely. This motivates us to focus on the explore-exploit trade-off in model-based RL in our future work.

7.2 Real-world Offline Test

We also use a large-scale real-world recommendation dataset from CIKM Cup 2016 to evaluate the effectiveness of our proposed solution for offline reranking. We filtered out sessions of length 1 or longer than 40 and items that have never been clicked. We selected the top 40,000 most popular items to construct our recommendation candidate set. We randomly selected 65,284/1,718/1,820 sessions for training/validation/testing purposes, where the average length of sessions is 2.81/2.80/2.77 respectively. The percentage of recorded recommendations that lead to a purchase is 2.31%/2.46%/2.45%. We followed the same setting as in our simulation-based study in this experiment.

Baselines In addition to the baselines we compared in our simulation based study, we also include the following state-of-the-art solutions for recommendation: 1). PGIS: the agent model estimated with importance sampling on offline data to reduce bias; 2). AC: an LSTM model whose setting is the same as our agent model but trained with actor-critic algorithm Lillicrap et al. (2015) to reduce variance; 3). PGU: the agent model trained using offline and generated data, without adversarial training; 4). ACU: AC model trained with both offline and generated data, without adversarial training.

Evaluation Metrics All the models were applied to rerank the given recommendation list at each step of testing sessions in offline data. We used precision@k (p@1 and p@10) to compare different models’ recommendation performance, where we define the clicked items as relevant. In addition, because the logged recommendation list was not ordered, we cannot assess the original logging policy’s performance in the offline data.

Metric     LSTM        LSTMD       PG          PGIS        AC          PGU         ACU         IRecGAN
P@10 (%)   32.89±0.50  33.42±0.40  33.28±0.71  28.13±0.45  31.93±0.17  34.12±0.52  32.43±0.22  35.06±0.48
P@1 (%)    8.20±0.65   8.55±0.63   6.25±0.14   4.61±0.73   6.54±0.19   6.44±0.56   6.63±0.29   6.79±0.44
Table 1: Rerank evaluation on the real-world recommendation dataset (mean ± standard deviation).


The results of the offline rerank evaluation are reported in Table 1. With the help of adversarial training, IRecGAN achieved encouraging improvements over all baselines, verifying the effectiveness of our model-based reinforcement learning, especially its adversarial training strategy for utilizing offline data with reduced bias. Specifically, we compare the results of PG, PGIS, PGU, and IRecGAN. PGIS did not perform as well as PG, partially because of the high variance introduced by importance sampling. PGU was able to fit the given data more closely than PG, since there are many candidate items for recommendation and the collected data is limited. However, PGU performed worse than IRecGAN because of its biased user behavior model. With the help of the discriminator, IRecGAN reduces this bias, improving value estimation and thus policy learning. This is also reflected in its improved user behavior model: LSTMD outperformed LSTM, both of which are used for user behavior modeling.

8 Conclusion

In this work, we developed a practical solution for utilizing offline data to build a model-based reinforcement learning agent for recommendation with reduced model bias. We introduced adversarial training for joint user behavior model learning and policy updates. Our theoretical analysis shows our solution’s promise in reducing bias, and our empirical evaluations on both synthetic and real-world recommendation datasets verify its effectiveness. Several directions are left open, including balancing explore and exploit in policy learning with offline data, incorporating richer structures into user behavior modeling, and exploring the applicability of our solution in other off-policy learning scenarios, such as conversational systems.


  • J. Achiam, D. Held, A. Tamar, and P. Abbeel (2017) Constrained policy optimization. In Proceedings of the 34th International Conference on Machine Learning, pp. 22–31.
  • M. Chen, A. Beutel, P. Covington, S. Jain, F. Belletti, and E. H. Chi (2019a) Top-k off-policy correction for a REINFORCE recommender system. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pp. 456–464.
  • X. Chen, S. Li, H. Li, S. Jiang, Y. Qi, and L. Song (2019b) Generative adversarial user model for reinforcement learning based recommendation system. In Proceedings of the 36th International Conference on Machine Learning, Vol. 97, pp. 1052–1061.
  • J. Chung, C. Gulcehre, K. Cho, and Y. Bengio (2014) Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555.
  • M. P. Deisenroth, G. Neumann, J. Peters, et al. (2013) A survey on policy search for robotics. Foundations and Trends in Robotics 2 (1–2), pp. 1–142.
  • M. P. Deisenroth, C. E. Rasmussen, and D. Fox (2011) Learning to control a low-cost manipulator using data-efficient reinforcement learning.
  • M. Deisenroth and C. E. Rasmussen (2011) PILCO: a model-based and data-efficient approach to policy search. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 465–472.
  • A. Gilotte, C. Calauzènes, T. Nedelec, A. Abraham, and S. Dollé (2018) Offline A/B testing for recommender systems. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pp. 198–206.
  • I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680.
  • S. Gu, T. Lillicrap, I. Sutskever, and S. Levine (2016) Continuous deep Q-learning with model-based acceleration. In International Conference on Machine Learning, pp. 2829–2838.
  • X. He, H. Zhang, M. Kan, and T. Chua (2016) Fast matrix factorization for online recommendation with implicit feedback. In Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 549–558.
  • S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural Computation 9 (8), pp. 1735–1780.
  • Y. Koren, R. Bell, and C. Volinsky (2009) Matrix factorization techniques for recommender systems. Computer (8), pp. 30–37.
  • R. S. Sutton and A. G. Barto (1998) Reinforcement learning: an introduction. MIT Press.
  • E. Liebman, M. Saar-Tsechansky, and P. Stone (2015) DJ-MC: a reinforcement-learning agent for music playlist recommendation. In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, pp. 591–599.
  • T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra (2015) Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971.
  • Z. Lu and Q. Yang (2016) Partially observable Markov decision process for recommender systems. arXiv preprint arXiv:1608.07793.
  • D. Meger, J. C. G. Higuera, A. Xu, P. Giguere, and G. Dudek (2015) Learning legged swimming gaits from experience. In 2015 IEEE International Conference on Robotics and Automation (ICRA), pp. 2332–2338.
  • V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller (2013) Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602.
  • V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. (2015) Human-level control through deep reinforcement learning. Nature 518 (7540), pp. 529.
  • J. Morimoto and C. G. Atkeson (2003) Minimax differential dynamic programming: an application to robust biped walking. In Advances in Neural Information Processing Systems, pp. 1563–1570.
  • R. Munos, T. Stepleton, A. Harutyunyan, and M. Bellemare (2016) Safe and efficient off-policy reinforcement learning. In Advances in Neural Information Processing Systems, pp. 1054–1062.
  • J. Oh, S. Singh, and H. Lee (2017) Value prediction network. In Advances in Neural Information Processing Systems, pp. 6118–6128.
  • B. Peng, X. Li, J. Gao, J. Liu, K. Wong, and S. Su (2018) Deep Dyna-Q: integrating planning for task-completion dialogue policy learning. arXiv preprint arXiv:1801.06176.
  • D. Precup, R. S. Sutton, and S. Dasgupta (2001) Off-policy temporal-difference learning with function approximation. In ICML, pp. 417–424.
  • D. Precup (2000) Eligibility traces for off-policy policy evaluation. Computer Science Department Faculty Publication Series, pp. 80.
  • J. Schulman, S. Levine, P. Abbeel, M. Jordan, and P. Moritz (2015) Trust region policy optimization. In International Conference on Machine Learning, pp. 1889–1897.
  • G. Shani, D. Heckerman, and R. I. Brafman (2005) An MDP-based recommender system. Journal of Machine Learning Research 6 (Sep), pp. 1265–1295.
  • R. S. Sutton, D. A. McAllester, S. P. Singh, and Y. Mansour (2000) Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems, pp. 1057–1063.
  • R. S. Sutton (1990) Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Machine Learning Proceedings 1990, pp. 216–224.
  • A. Swaminathan and T. Joachims (2015a) Batch learning from logged bandit feedback through counterfactual risk minimization. Journal of Machine Learning Research 16 (1), pp. 1731–1755.
  • A. Swaminathan and T. Joachims (2015b) The self-normalized estimator for counterfactual learning. In Advances in Neural Information Processing Systems, pp. 3231–3239.
  • P. Thomas and E. Brunskill (2016) Data-efficient off-policy policy evaluation for reinforcement learning. In International Conference on Machine Learning, pp. 2139–2148.
  • R. J. Williams (1992) Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning 8 (3-4), pp. 229–256.
  • Q. Wu, H. Wang, L. Hong, and Y. Shi (2017) Returning is believing: optimizing long-term user engagement in recommender systems. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pp. 1927–1936.
  • L. Yu, W. Zhang, J. Wang, and Y. Yu (2017) SeqGAN: sequence generative adversarial nets with policy gradient. In Thirty-First AAAI Conference on Artificial Intelligence.
  • X. Zhao, L. Xia, Y. Zhao, D. Yin, and J. Tang (2019) Model-based reinforcement learning for whole-chain recommendations. arXiv preprint arXiv:1902.03987.
  • G. Zheng, F. Zhang, Z. Zheng, Y. Xiang, N. J. Yuan, X. Xie, and Z. Li (2018) DRN: a deep reinforcement learning framework for news recommendation. In Proceedings of the 2018 World Wide Web Conference on World Wide Web, pp. 167–176.


1   Details of Discriminator Model

We adopt an RNN-based discriminator for our IRecGAN framework and model its hidden states by , where denotes the hidden states maintained by the discriminator at time and is the embedding used on the discriminator side. We then add a multi-layer perceptron that takes the hidden states as input and computes a score through a Sigmoid layer, indicating whether the trajectory looks like it was generated by real users:

where can be seen as the user’s favorite item given , and should be as close to as possible for a real user. To keep the gradient backpropagation feasible, we use Softmax with a temperature of 0.1 to approximate the argmax function. Other hyperparameters are set the same as in the experiment setting described in Section 7.1. The optimization target of is formulated as in Eq (5).
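The low-temperature Softmax surrogate for argmax can be sketched as follows; the function name and input values are illustrative, and the real discriminator applies this over item logits inside the network.

```python
import math

def soft_argmax(scores, temperature=0.1):
    """Softmax with a low temperature as a differentiable surrogate for
    argmax: as the temperature shrinks, the output approaches a one-hot
    vector on the highest-scoring item."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp((s - m) / temperature) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]
```

Unlike a hard argmax, this mapping is smooth in the input scores, so gradients can flow from the discriminator’s score back into the generator.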

2   Algorithm

Input: Offline data; an agent model ; a user behavior model ; a discriminator .
Initialize an empty simulated-sequence set and a real-sequence set . Initialize and with random parameters. Pre-train by maximizing Eq (3). Pre-train via the policy gradient of Eq (10) using only the offline data. and simulate sequences and add them to . Add trajectories to the real-sequence set . Pre-train according to Eq (5) using and .
for to do
      for do
            Empty , then generate simulated sequences and add them to . Compute at each step by Eq (6). Extract sequences into . Update U via the policy gradient of Eq (7) with . Update A via the policy gradient of Eq (10) with .
      end for
      for do
            Empty , then generate simulated sequences with the current , and add them to . Empty and add sequences from the offline data. Update according to Eq (5) for epochs using and .
      end for
end for
Algorithm 1 IRecGAN