Deep Hierarchical Reinforcement Learning Based Recommendations via Multi-goals Abstraction


Abstract.

The recommender system is an important form of intelligent application, which helps users cope with information overload. Among the metrics used to evaluate a recommender system, conversion has become increasingly important. The majority of existing recommender systems perform poorly on the metric of conversion due to its extremely sparse feedback signal. To tackle this challenge, we propose a deep hierarchical reinforcement learning based recommendation framework, which consists of two components, i.e., a high-level agent and a low-level agent. The high-level agent catches long-term sparse conversion signals and automatically sets abstract goals for the low-level agent, while the low-level agent follows the abstract goals and interacts with the real-time environment. To solve the inherent problems in hierarchical reinforcement learning, we propose a novel deep hierarchical reinforcement learning algorithm via multi-goals abstraction (HRL-MG). Our proposed algorithm has three characteristics: 1) the high-level agent generates multiple goals to guide the low-level agent in different stages, which reduces the difficulty of approaching the high-level goals; 2) different goals share the same state encoder parameters, which increases the update frequency of the high-level agent and thus accelerates the convergence of our proposed algorithm; 3) an appropriate benefit assignment function is designed to allocate rewards to each goal so as to coordinate different goals in a consistent direction. We evaluate our proposed algorithm on a real-world e-commerce dataset and validate its effectiveness.

Recommender Systems, Deep Hierarchical Reinforcement Learning, Conversion, Multi-goals

1. Introduction

In this information era, end users/consumers usually suffer from a heavy burden of content and product choices when browsing the Internet. The recommender system is an important form of intelligent application, which helps users cope with such information overload and saves the time of picking out what they want from lots of irrelevant content and products. More specifically, recommender agents discover users' short-term and long-term interests/preferences from their browsing histories on the Internet, e.g., products, news, movies and music, as well as various types of services (Resnick and Varian, 1997; Ricci et al., 2011). They build user models based on these interests/preferences and automatically recommend personalized items so as to satisfy users' information needs. As a result, recommender systems have become increasingly popular, and have been applied to a variety of domains on the Internet, e.g., e-commerce, news, movies, etc.

To improve the performance of recommender systems, many works have been proposed, evolving from traditional shallow models like the collaborative filtering model (Breese et al., 1998), to the mainstream deep models like the wide&deep model (Cheng et al., 2016), and finally to the trend of deep reinforcement learning based methods (Zhao et al., 2018b). Deep neural networks have shown excellent performance due to their powerful capabilities of extracting features and relationships. For instance, DIEN (Zhou et al., 2018) designed an interest extractor layer to capture temporal interests from historical behavior sequences. However, most of these deep methods are static and can hardly follow the dynamic changes of users' preferences. Deep reinforcement learning (DRL) based methods overcome this problem by interacting with users in real time and dynamically adjusting the recommendation strategies. For instance, DEERS (Zhao et al., 2018b) adopted a Deep Q-Network framework and integrated both positive and negative feedback simultaneously. Furthermore, DRL based recommendations maximize the long-term cumulative expected return, instead of just the immediate (short-term) reward as traditional deep models do, which can bring more benefits in the future.

At present, the majority of works on recommender systems focus on optimizing the metric of click and have already achieved great improvements. As the competition becomes fiercer, recommender agents gradually pay more attention to the metric of conversion, especially in e-commerce recommender systems. On the one hand, the metric of conversion is more realistic, as counterfeiting conversions is more difficult. On the other hand, e-commerce recommender systems usually recommend natural items and display ads together. The advertisers care more about direct conversions than indirect clicks, so as to guarantee their return on investment. Only a few works consider the metric of conversion. For instance, Yang et al. (Yang et al., 2016) combined natural language processing and dynamic transfer learning into a unified framework for conversion rate (CVR) prediction. These works optimize either the metric of click or the metric of conversion, but not both. Click and conversion are highly correlated, but not necessarily positively correlated. An item that is more likely to be clicked may result in a lower probability of conversion, e.g., an item with a relatively cheap price but poor product quality.

In this paper, we adopt deep reinforcement learning based methods to optimize the metrics of click and conversion jointly. User behaviors can be treated as a sequential pattern, i.e., from impression, to click, and finally to conversion. More specifically, when a list of recommended items is exposed to users, users may click some items in which they are interested, and then buy their favorite items. This pattern reflects users' hierarchical interests. The click signals on part of the exposed items reflect various superficial interests, such as curiosity about new items, return clicks on some previously purchased items, initial purchase willingness, etc., while the conversion signals on part of the clicked items show pure and deep purchase interests. As a result, the conversion signals are much sparser than the click signals. Existing deep reinforcement learning based methods in recommendations usually treat the conversion signals in the same way as clicks, except for assigning them larger weights. For instance, Hu et al. (Hu et al., 2018) assigned the conversion weight according to the price of each product item. The large weights can partially alleviate the sparsity problem of conversion signals. Yet such methods require deep reinforcement learning techniques to track the conversion signals directly from impressions, just as they track click signals from impressions. This makes the sparse conversion signals more likely to be drowned out by click signals.

To solve this sparsity problem, we propose a deep hierarchical reinforcement learning based recommendation framework, which consists of two components, i.e., a high-level agent and a low-level agent. More specifically, the high-level agent tries to catch the long-term sparse conversion signals based on users' click and conversion histories. The actor of the high-level agent automatically sets goals for the low-level agent. On the other hand, the low-level agent captures the short-term click signals based on users' impression and click histories. The actor of the low-level agent interacts with the real-time environment by making actual recommendations and receiving feedback from users. This framework differentiates the hierarchical interests in users' behavior patterns via hierarchical agents. Several problems exist in this hierarchical reinforcement learning framework. Firstly, how does the high-level agent automatically generate goals for the low-level agent? The high-level goals affect the performance of the framework significantly, but there exist no explicit goals for the high-level agent in recommender systems. Secondly, how do the high-level goals influence the low-level agent? An appropriate way to guide the low-level agent can reduce the difficulty of approaching the high-level goals. Thirdly, how can we increase the update frequency of the high-level agent so as to accelerate its convergence? The feedback frequency of the high-level agent is far lower than that of the low-level agent.

To tackle these challenges, we further propose a novel deep hierarchical reinforcement learning algorithm (HRL-MG), in which the high-level agent guides the low-level agent via multi-goals abstraction. In the interaction between recommender agents and users, the high-level agent first generates a set of abstract goals based on users' click and conversion histories, and conveys them to the low-level agent. Each abstract goal has the same form as the action of the low-level agent. Furthermore, different abstract goals guide the low-level agent in different interaction stages. All of these make the high-level goals easier to follow and approach. Then, the low-level agent generates actual recommendation items based on users' browsing and click histories, and collects users' feedback as external rewards. The low-level agent also receives an internal reward, which is generated from the difference between its action and the corresponding goal. Finally, the low-level agent conveys the users' feedback to the high-level agent to improve the quality of the goals. To enhance the cooperation among the goals, we design the same state encoder structure for each goal, whose parameters are shared by all goals. These shared parameters are updated whenever each goal updates its own parameters. In addition, we design an appropriate reward mechanism based on users' feedback, called the benefit assignment function, to coordinate the goals in a consistent direction.

In summary, this paper has the following contributions:

  • To the best of our knowledge, we are the first to propose a DHRL based recommendation framework. The high-level agent catches the long-term sparse conversion signals, while the low-level agent captures the short-term click signals.

  • We propose a novel deep hierarchical reinforcement learning algorithm (HRL-MG), in which the high-level agent guides the low-level agent via multi-goals abstraction. The multiple high-level goals reduce the difficulty for the low-level agent to approach the high-level goals.

  • We design a shared state encoder for all goals so as to accelerate the update frequency, and an appropriate benefit assignment function to allocate rewards to each goal so as to coordinate different goals correctly.

  • We carry out the offline and online evaluation based on the real-world e-commerce dataset from JD.com. The experimental results demonstrate the effectiveness of our proposed algorithm.

In this paper, we first introduce the details of our proposed framework in Section 2. Then, we present our training procedure in Section 3. After that, we demonstrate our experiments in Section 4. The related work is discussed in Section 5. At last, we conclude this paper in Section 6.

2. The proposed framework

This section begins with an overview of the proposed recommendation framework based on hierarchical reinforcement learning. Then we introduce the technical details of the high-level agent and the low-level agent.

2.1. Framework Overview

As mentioned above, we model the recommendation task as a Markov Decision Process (MDP) and leverage reinforcement learning techniques to automatically learn the optimal recommendation strategy. Users are regarded as the environment, and the recommender system is regarded as the agent. Users' preferences constitute the environment state in which the agent is located. According to the current state, the agent selects an action (recommending a corresponding item), and then the environment gives feedback: skip, click, order (convert), leave, etc. The recommendation agent obtains the corresponding reward, the state of the environment is updated, and the next interaction begins.

Based on the above settings, we further consider the sparsity problem of conversion signals. We propose a recommendation framework based on deep hierarchical reinforcement learning, including a high-level agent (HRA) and a low-level agent (LRA). Both agents adopt adapted Actor-Critic architectures. In order to express our ideas clearly, we first define the required notation.

  • High-level state space: A high-level state s^h is defined as the user's current long-term preference, which is generated based on the user's click and conversion histories, i.e., the items that the user clicked or ordered recently.

  • Low-level state space: A low-level state s^l is defined as the user's current short-term preference, which is generated based on the user's browsing and click histories, i.e., the items that the user browsed or clicked recently.

  • Goal space: A goal g is a signal generated by the HRA based on the current high-level state and conveyed to the LRA to guide its behavior.

  • Action space: An action a is an actual recommendation item generated by the LRA based on the current low-level state s^l.

  • Internal reward r^{in}: After the LRA receives a goal g from the HRA and takes an action a, the LRA receives an internal reward r^{in}. The internal reward is used to evaluate whether the LRA's action follows the goal well.

  • External reward r^{ex}: After the LRA takes an action a at the low-level state s^l, i.e., recommending an item to a user, the user browses the item and provides feedback. The user can skip, click, or order this item, and the LRA receives an immediate external reward r^{ex} according to the user's feedback.

  • High-level transition: A high-level transition defines the high-level state transition from s^h to s^h' when the HRA takes goal g.

  • Low-level transition: A low-level transition defines the low-level state transition from s^l to s^l' when the LRA takes action a.

  • Discount factor γ: γ ∈ [0, 1] defines the discount factor when we measure the present value of future rewards. In particular, when γ = 0, the agents only consider the immediate reward; in contrast, when γ = 1, all future rewards are counted fully into that of the current action.

Specifically, we model the recommendation task as an MDP in which the recommendation system (including the HRA and the LRA) interacts with the environment (or users) over a sequence of time steps. The HRA operates at a lower temporal resolution and sets abstract goals, which are conveyed to and enacted by the LRA. The LRA generates primitive actions at each time step.

As shown in Figure 1, the environment provides a high-level observation state s^h_t and a low-level observation state s^l_t at each time step t. The HRA observes the high-level state and produces a set of goals {g_1, …, g_m} when t mod c = 0. This provides temporal abstraction, since the HRA produces goals only every c steps, i.e., these goals are used to guide the LRA over the entire next c steps. The LRA observes the low-level state s^l_t and the set of goals, and produces a low-level atomic action a_t based on them, which is applied to the environment. Then the LRA receives an internal reward r^{in}_t and an external reward r^{ex}_t. The internal reward is sampled from the internal reward function, which indicates how well the LRA follows the goals. The external reward is provided by the environment and represents the user's actual feedback. As the consequence of action a_t, the environment updates the high-level state to s^h_{t+1} with the high-level transition and updates the low-level state to s^l_{t+1} with the low-level transition. After c time steps, the LRA collects the recent external rewards and conveys them to the HRA to improve its performance.

Figure 1. The interaction procedure.

In the interaction procedure mentioned above, the LRA stores the low-level transitions as experience for off-policy training, while the HRA stores the high-level transitions as experience for off-policy training. The goal of hierarchical reinforcement learning is to find a high-level policy π^h and a low-level policy π^l that maximize the cumulative external rewards for the recommendation system.
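To make the interaction procedure concrete, the loop below sketches one c-step period with hypothetical stub agents (the stub policies, the toy transition, and all names here are our own illustration, not the paper's implementation): the HRA emits m goals once per period, the LRA acts at every step under the currently active goal, and the externally collected rewards are handed back to the HRA at the end of the period.

```python
import numpy as np

rng = np.random.default_rng(0)
c, m, dim = 6, 3, 4  # period length, number of goals, embedding size (illustrative)

def hra_goals(high_state):
    # Stub HActor: m abstract goals, one per stage of the c-step period.
    return [np.tanh(rng.standard_normal(dim)) for _ in range(m)]

def lra_action(low_state, goal):
    # Stub LActor: an action nudged toward the currently active goal.
    return np.tanh(low_state + goal)

high_state = rng.standard_normal(dim)
low_state = rng.standard_normal(dim)
low_buffer, external_rewards = [], []

goals = hra_goals(high_state)          # HRA acts only when t mod c == 0
for t in range(c):
    g = goals[t // (c // m)]           # stage t uses goal g_{t // (c/m)}
    a = lra_action(low_state, g)
    r_ext = float(rng.integers(0, 2))  # simulated user feedback (skip/click)
    low_buffer.append((low_state.copy(), g, a, r_ext))
    external_rewards.append(r_ext)
    low_state = 0.9 * low_state + 0.1 * a  # toy low-level transition

# After c steps, the collected external rewards are conveyed to the HRA.
assert len(low_buffer) == c and len(external_rewards) == c
```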

Both HRA and LRA adopt adapted Actor-Critic architectures. The Actor of the HRA takes a high-level state s^h as input and aims to produce a set of abstract goals {g_1, …, g_m}. The Critic of the HRA takes the state s^h and the set of goals as input, and tries to evaluate the expected return achievable by the high-level policy as follows:

Q^h_i(s^h, g_i) = E[ R_i + γ Q^h_i(s^h', g_i') ],   (1)

with i = 1, …, m, where s^h' and g_i' denote the next high-level state and goal. All Q^h_i share the same high-level state s^h and evaluate different Q-values of different state-goal pairs, and R_i represents the reward obtained under goal g_i's guidance. The Actor of the LRA takes a low-level state s^l as input and aims to output a deterministic action a. The Critic of the LRA takes this state-action pair (s^l, a) as input, and tries to evaluate the expected return achievable by the low-level policy as follows:

Q^l(s^l, a) = E[ r + γ Q^l(s^l', a') ],   (2)

where

r = r^{ex} + α r^{in}   (3)

represents the total reward that the LRA receives after taking action a, and the hyper-parameter α regulates the influence of the internal reward.

Next, we elaborate on the HRA and LRA architectures of the proposed framework.

2.2. Architecture of High-Level Agent

The high-level agent HRA is designed to generate a set of abstract goals according to the user's long-term preference; thus we propose an adapted Actor-Critic architecture for the HRA. We first introduce the shared encoder structure, and then describe the Actor and Critic architectures of the HRA in detail.

Encoder for High-Level State Generation

We introduce an RNN with Gated Recurrent Units (GRU) to capture a user's sequential behaviors as the user's long-term preference. The inputs of the GRU are the user's last clicked items or last ordered items (sorted in chronological order) before the current time step, while the output is a vector representation of the user's long-term preference. The inputs are dense, low-dimensional vector representations of the items.

We leverage GRU rather than Long Short-Term Memory (LSTM) because GRU outperforms LSTM in capturing users' sequential preferences in the recommendation task (Hidasi et al., 2015). We use the final hidden state as the output of the RNN layer. In our framework, two such GRU networks are used separately. One receives the user's last clicked items as input and outputs the final hidden state h^c, while the other receives the user's last ordered items as input and outputs the final hidden state h^o. Finally, a linear layer is used to merge the two states and produce the user's long-term preference:

s^h = W^h [h^c; h^o] + b^h,   (4)

where [·;·] denotes concatenation, and W^h and b^h are the weight matrix and bias of the linear layer.
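As a minimal sketch of this encoder (a hand-rolled GRU cell and random stand-in weights; all parameter names here are illustrative assumptions, not the paper's implementation), two GRUs consume the two behavior sequences and a linear layer merges their final hidden states as in Eq. (4):

```python
import numpy as np

def gru_final_state(items, Wz, Uz, Wr, Ur, Wh, Uh):
    """Run a minimal GRU over an item-embedding sequence; return the final hidden state."""
    h = np.zeros(Uz.shape[0])
    for x in items:
        z = 1 / (1 + np.exp(-(Wz @ x + Uz @ h)))  # update gate
        r = 1 / (1 + np.exp(-(Wr @ x + Ur @ h)))  # reset gate
        h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
        h = (1 - z) * h + z * h_tilde
    return h

rng = np.random.default_rng(1)
d, hdim = 4, 5  # item-embedding and hidden sizes (illustrative)

def rand_params():
    shapes = [(hdim, d), (hdim, hdim)] * 3  # (Wz, Uz, Wr, Ur, Wh, Uh)
    return [rng.standard_normal(s) for s in shapes]

clicked = [rng.standard_normal(d) for _ in range(3)]  # last clicked items
ordered = [rng.standard_normal(d) for _ in range(2)]  # last ordered items
h_click = gru_final_state(clicked, *rand_params())
h_order = gru_final_state(ordered, *rand_params())

# Eq.(4): a linear layer merges the two final hidden states.
W, b = rng.standard_normal((hdim, 2 * hdim)), rng.standard_normal(hdim)
s_high = W @ np.concatenate([h_click, h_order]) + b
assert s_high.shape == (hdim,)
```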

Actor Framework of HRA

The Actor framework of HRA, denoted by HActor (shown in Figure 2), is used to generate the multi-goals abstraction based on the high-level state. The encoder structure mentioned above is first used to generate the abstract high-level state s^h. Next, in the framework of HActor, m parallel separated fully connected layers are placed behind the encoder layers as the goal generation layers:

g_i = K · tanh(W^g_i s^h + b^g_i),   i = 1, …, m,   (5)

where the parameter K represents the bound of the goals, and the "tanh" activation function is used since g_i ∈ [−K, K].

Figure 2. The architecture of high-level actor.

In the framework of HActor, all goals share the same encoder structure, but their generation layers are different. That means they obtain information from the same long-term preference, generate a set of different goals to guide different stages, and improve the encoder and generation layers according to their different feedback.

Due to this sharing mechanism, in the learning procedure, whenever a goal receives feedback and updates its related parameters, both its generation layer and the shared encoder layers are updated once. Consequently, the update frequency of the parameters in the encoder layers is m times that of the generation layers. This has two advantages: 1) the update frequency of HActor is greatly improved; 2) HActor obtains information from multiple perspectives, which improves its stability.
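The multi-head generation layer of Eq. (5) can be sketched as follows (a toy numpy stand-in with random weights; the shared encoder output is mocked here, and all names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
hdim, m, K = 5, 3, 1.0  # state size, number of goals, goal bound K (illustrative)

# Shared encoder output s^h (e.g. from the GRU encoder); here a random stand-in.
s_high = rng.standard_normal(hdim)

# m parallel generation heads behind the shared encoder (Eq. 5): g_i = K * tanh(W_i s + b_i)
heads = [(rng.standard_normal((hdim, hdim)), rng.standard_normal(hdim)) for _ in range(m)]
goals = [K * np.tanh(W @ s_high + b) for W, b in heads]

assert len(goals) == m
assert all(np.all(np.abs(g) <= K) for g in goals)  # tanh keeps each goal in [-K, K]
```

Because every head reads the same shared encoder output, a gradient step on any one head also updates the encoder, which is the source of the m-fold update-frequency gain described above.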

Critic Framework of HRA

The Critic framework of HRA, denoted by HCritic (shown in Figure 3), is designed to leverage an approximator to learn multiple goal-value functions Q^h_i(s^h, g_i), which judge whether the goals generated by HActor match the current high-level state s^h. Then, according to these Q-values, HActor updates its related parameters in a direction that improves performance, so as to generate proper goals in the following iterations.

Figure 3. The architecture of high-level critic.

Thus we need to feed user’s current high-level state and a set of goals into the HCritic. The same strategy in Eq.(4) is followed to capture user’s long-term preference. And then, for each , there are 2 fully connected layers used behind the encoder layers as the state-goal pair’s evaluation layers:

(6)
(7)

where and we use the activation function ”Relu” since .

In the framework of HCritic, m parallel separated evaluation layers are placed behind the same encoder layers, estimating the expected returns of the goals according to their benefit assignment functions R_i. The benefit assignment function is mainly related to the rewards in the stage in which the goal is used, and the compensation when the low-level strategy has not yet converged is also considered. We will discuss the benefit assignment function in Section 3 in more detail. Similarly, due to the sharing mechanism, the update speed and convergence stability of HCritic are also improved.
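A minimal sketch of the parallel evaluation heads of Eqs. (6)–(7) (random stand-in weights and a mocked encoder output; layer sizes and names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
hdim, m = 5, 3  # state/goal size and number of goals (illustrative)
s_high = rng.standard_normal(hdim)
goals = [rng.standard_normal(hdim) for _ in range(m)]

def q_value(s, g, W1, b1, W2, b2):
    # Eqs.(6)-(7): two fully connected layers over the state-goal pair, ReLU in between.
    h = np.maximum(0.0, W1 @ np.concatenate([s, g]) + b1)
    return float(W2 @ h + b2)

# m parallel evaluation heads behind the shared encoder, one Q-value per goal.
heads = [(rng.standard_normal((8, 2 * hdim)), rng.standard_normal(8),
          rng.standard_normal(8), rng.standard_normal()) for _ in range(m)]
qs = [q_value(s_high, g, *h) for g, h in zip(goals, heads)]

assert len(qs) == m and all(isinstance(q, float) for q in qs)
```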

2.3. Architecture of Low-Level Agent

The low-level agent LRA is designed to generate actual recommendation items according to the user's short-term preference; thus we propose an adapted Actor-Critic architecture for the LRA. We first introduce the shared encoder structure, and then describe the Actor and Critic architectures of the LRA in detail.

Encoder for Low-Level State Generation

In our framework, two GRU networks similar to those mentioned in Section 2.2 are used separately. One receives the user's last browsed items as input and outputs the final hidden state h^b, while the other receives the user's last clicked items as input and outputs the final hidden state h^c. Finally, a linear layer is used to merge the two states and produce the user's short-term preference:

s^l = W^l [h^b; h^c] + b^l.   (8)

Actor Framework of LRA

The Actor framework of LRA, denoted by LActor (shown in Figure 4), is used to generate actual recommendation items based on the low-level state. The encoder structure mentioned above is first used to generate the abstract low-level state s^l. Next, in the framework of LActor, a fully connected layer is placed behind the encoder layers as the action generation layer:

a = K · tanh(W^a s^l + b^a),   (9)

where the parameter K represents the bound of the action, and the "tanh" activation function is used since a ∈ [−K, K].

Figure 4. The architecture of low-level actor.

Notice that the generated item embedding may not be in the real item embedding set, so we need to map it to a valid item embedding; the mapping will be described in Section 3.

Critic Framework of LRA

The Critic framework of LRA, denoted by LCritic (shown in Figure 5), is designed to leverage an approximator to learn the action-value function Q^l(s^l, a), which judges whether the action generated by LActor matches the current low-level state s^l and follows the guidance of the goals well. Then, according to this Q-value, LActor updates its parameters in a direction that improves performance, so as to generate proper actions in the following iterations.

Thus we need to feed user’s current low-level state and action into the LCritic. The same encoder layers as LActor’s are used to capture user’s short-term preference. And then, there are 2 fully connected layers used behind the encoder layers as the state-action pair’s evaluation layers:

(10)
(11)

where the activation function ”Relu” is used since .

Figure 5. The architecture of low-level critic.

As shown in Eqs. (2) and (3), the update direction of LCritic is affected by both the external reward r^{ex} and the internal reward r^{in}. The form of the internal reward function determines the way the goals guide the LRA. Thus a reasonable internal reward function is needed to make the goals play different roles at different stages. In this work, we cut a period of c steps into m equal parts, and use goal g_i in the i-th stage of c/m steps. Cosine similarity is used to measure the gap between the action and the corresponding goal and to produce the internal reward:

r^{in}_t = cos(a_t, g_i) = (a_t · g_i) / (||a_t|| ||g_i||),   (12)

where g_i is the goal active at time step t. Notice that only one goal is used at each time step, so the internal reward function can be simplified accordingly. The responsibility of each goal is clearly defined and the time consumption is reduced. Other reasonable designs that promote the diversity of the goals are also encouraged.
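The cosine-similarity internal reward of Eq. (12) is a one-liner; a reward of 1 means the action points exactly along the active goal, 0 means it ignores the goal entirely:

```python
import numpy as np

def internal_reward(action, goal):
    """Eq.(12): cosine similarity between the action and the goal active at this step."""
    return float(action @ goal / (np.linalg.norm(action) * np.linalg.norm(goal)))

a = np.array([1.0, 0.0])
assert abs(internal_reward(a, np.array([2.0, 0.0])) - 1.0) < 1e-9  # aligned with goal
assert abs(internal_reward(a, np.array([0.0, 3.0]))) < 1e-9        # orthogonal to goal
```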

3. Training procedure

With the proposed recommendation framework based on hierarchical reinforcement learning, we discuss parameter training in this section. We first propose an online training algorithm, and then test the framework in the online environment and on offline history logs, respectively. The details of the test procedure are shown in Appendix C.

3.1. Actual Action Mapping

As mentioned in Section 2.3, we generate a recommendation item embedding ã from the user's short-term preference s^l. But ã is a virtual action because it may not be in the real item embedding set I. So we have to map this virtual action into a real action a (a real item embedding). Under this setting, we choose the item embedding most similar to ã as the real action. In this work, we use cosine similarity as the metric:

a = argmax_{e ∈ I} cos(ã, e) = argmax_{e ∈ I} (ã · e) / (||ã|| ||e||).   (13)

To reduce the amount of computation, we pre-compute the normalized item embeddings for all items in I and use the item recall mechanism to eliminate irrelevant and redundant items. The details of the Mapping Algorithm are shown in Appendix B.1.
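The mapping of Eq. (13) amounts to a cosine-similarity argmax over the catalog; a vectorized sketch (function name and the toy catalog are illustrative):

```python
import numpy as np

def map_to_real_item(virtual_action, item_embeddings):
    """Eq.(13): pick the catalog item whose embedding has the highest cosine
    similarity with the virtual action produced by LActor."""
    v = virtual_action / np.linalg.norm(virtual_action)
    E = item_embeddings / np.linalg.norm(item_embeddings, axis=1, keepdims=True)
    return int(np.argmax(E @ v))

items = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
assert map_to_real_item(np.array([0.9, 0.1]), items) == 0
assert map_to_real_item(np.array([0.5, 0.6]), items) == 2
```

Normalizing the catalog once up front is exactly the pre-computation mentioned above; the per-query cost is then a single matrix-vector product over the recalled candidate set.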

Note that when the item embedding set is large, the above method faces the challenge of insufficient computation time and storage space. A nearest neighbor search method based on hashing can map high-dimensional data into a series of compact binary codes (Norouzi and Fleet, 2011), so that the similarity relation between the original high-dimensional data points is approximated by the distance between the binary codes. It achieves high calculation speed and reduces storage consumption at the expense of an acceptable error, and can be used as an alternative to the Mapping Algorithm.

3.2. Benefit Assignment Function

As mentioned in Section 2.2, the benefit assignment function assigns the external rewards of the recent c steps collected by the LRA to each goal. There are two main factors to consider: 1) how does the LRA perform under the guidance of each goal? 2) how can different goals be coordinated in a consistent direction?

As described in Section 2.3, when determining the way the goals guide the LRA, we cut a period of c steps into m equal parts and use goal g_i in the i-th stage of c/m steps. Thus a natural idea is to collect the rewards in each stage and assign them to the corresponding goal:

R_i = Σ_{t ∈ stage i} r^{ex}_t.   (14)

However, this assignment method does not consider the latter factor. To deal with this problem, we propose an extended benefit assignment function based on Eq. (14):

R_i = Σ_{j=0}^{i} η^{i−j} Σ_{t ∈ stage j} r^{ex}_t,   (15)

where the parameter η ∈ [0, 1] is the high-level benefit discount factor. In Eq. (15), each goal is assigned the cumulative discounted external rewards from the beginning of the current period of c steps to the stage in which it is used, forcing the subsequent goals to improve the overall performance of the entire period. When η = 0, it is equivalent to Eq. (14); when η = 1, all related rewards are considered equally.
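Under one consistent reading of Eq. (15) (goal i receives the discounted sum of stage rewards from the period start up to its own stage; the function name and the toy reward sequence are our own illustration), the assignment can be sketched and checked at both extremes of the discount factor:

```python
import numpy as np

def benefit_assignment(rewards, m, eta):
    """Split a c-step external-reward sequence into m equal stages; goal i receives
    R_i = sum_{j<=i} eta**(i-j) * stage_reward_j  (Eq. 15).
    eta = 0 recovers the per-stage assignment of Eq.(14); eta = 1 counts all
    preceding stage rewards equally."""
    stages = np.array_split(np.asarray(rewards, dtype=float), m)
    stage_sums = [s.sum() for s in stages]
    return [sum(eta ** (i - j) * stage_sums[j] for j in range(i + 1)) for i in range(m)]

rewards = [1, 0, 2, 1, 0, 5]  # c = 6 external rewards, m = 3 goals
assert benefit_assignment(rewards, 3, 0.0) == [1.0, 3.0, 5.0]   # Eq.(14): own stage only
assert benefit_assignment(rewards, 3, 1.0) == [1.0, 4.0, 9.0]   # all prior stages equally
assert benefit_assignment(rewards, 3, 0.5) == [1.0, 3.5, 6.75]  # in between
```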

3.3. Training Algorithm

In the proposed recommendation framework based on hierarchical reinforcement learning, both the high-level agent and the low-level agent adopt adapted Actor-Critic architectures. We utilize the DDPG algorithm to train the parameters of both agents. The details of the Online Training Algorithm are shown in Appendix B.2.

In the high-level agent HRA, the HCritic can be trained by minimizing a series of loss functions:

L(θ_i) = E[ (y_i − Q^h_i(s^h, g_i; θ_i))^2 ],   (16)

where θ_i represents all parameters used to generate the Q-value Q^h_i, including the parameters in the shared encoder layers and the i-th evaluation layers of HCritic, and y_i = R_i + γ Q^h_i(s^h', g_i'; θ_i^−) is the target for the current period of c steps. The HCritic is trained from samples stored in a high-level replay buffer.

The target parameters θ_i^− from the previous period are fixed when optimizing the loss function L(θ_i). In practice, it is often computationally efficient to optimize the loss function by stochastic gradient descent, rather than computing the full expectations over the experience space. The derivative of the loss function with respect to the parameters θ_i is:

∇_{θ_i} L(θ_i) = E[ (y_i − Q^h_i(s^h, g_i; θ_i)) ∇_{θ_i} Q^h_i(s^h, g_i; θ_i) ].   (17)

The HActor is updated with the policy gradient:

∇_{θ^μ} J ≈ E[ Σ_{i=1}^{m} ∇_{g_i} Q^h_i(s^h, g_i) ∇_{θ^μ} μ_i(s^h; θ^μ) ],   (18)

where g_i = μ_i(s^h; θ^μ) is the i-th goal produced by HActor with parameters θ^μ.
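The scalar core of the HCritic update in Eqs. (16)–(17) is the standard TD target and TD error (a sketch under the stated notation; function names are our own):

```python
def hcritic_target(r_i, gamma, q_next):
    """TD target in Eq.(16): y_i = R_i + gamma * Q_i(s', g'), with the
    target-network value q_next held fixed during the update."""
    return r_i + gamma * q_next

def hcritic_td_error(q_pred, r_i, gamma, q_next):
    # Eq.(17): the gradient of the squared loss is proportional to this TD error
    # times the gradient of the Q-network output.
    return hcritic_target(r_i, gamma, q_next) - q_pred

assert hcritic_target(3.0, 0.9, 10.0) == 12.0
assert hcritic_td_error(11.0, 3.0, 0.9, 10.0) == 1.0
```

The LCritic update in Eqs. (19)–(20) below has exactly the same shape, with the goal-specific benefit R_i replaced by the combined per-step reward r of Eq. (3).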

Similarly, in the low-level agent LRA, the LCritic can be trained by minimizing the loss function:

L(θ^Q) = E[ (y − Q^l(s^l, a; θ^Q))^2 ],   with y = r + γ Q^l(s^l', a'; θ^{Q−}),   (19)

where θ^Q represents all parameters in LCritic. The LCritic is trained from samples stored in a low-level replay buffer. Actions stored in the low-level replay buffer are the valid actions a obtained by the mapping in Eq. (13). This allows the learning algorithm to leverage the information of which action was actually executed to train the LCritic (Dulac-Arnold et al., 2015). The derivative of the loss function with respect to the parameters θ^Q is:

∇_{θ^Q} L(θ^Q) = E[ (y − Q^l(s^l, a; θ^Q)) ∇_{θ^Q} Q^l(s^l, a; θ^Q) ].   (20)

The LActor is updated with the policy gradient:

∇_{θ^π} J ≈ E[ ∇_{ã} Q^l(s^l, ã) ∇_{θ^π} π^l(s^l; θ^π) ],   (21)

where ã = π^l(s^l; θ^π), i.e., the gradient is taken at the virtual action. Note that the virtual action ã is the actual output of LActor, which guarantees that the policy gradient is taken at the actual output of the policy (Dulac-Arnold et al., 2015).

4. Experiments

In this section, we conduct extensive experiments with a dataset from a real e-commerce company to evaluate the effectiveness of the proposed framework. We mainly focus on two questions: 1) how does the proposed framework perform compared to representative baselines; and 2) how do the components of the framework contribute to the performance. We first introduce the experimental settings. Then we seek answers to the above two questions. Finally, we discuss the impact of important parameters.

4.1. Experiment Settings

We evaluate our method on a dataset of August, 2018 from a real e-commerce company. The statistics about the dataset are shown in Appendix D.

We do online training and testing in a simulated online environment. The simulated online environment is trained on users' logs. The simulator has a similar architecture to LCritic, except that the output layer is a softmax layer that predicts the immediate feedback according to the current low-level state and a recommended item. We test the simulator on users' logs, and the experimental results demonstrate that the simulated online environment achieves an overall precision of 90% on the immediate feedback prediction task. This result suggests that the simulator can accurately simulate the real online environment and predict online rewards, which enables us to train and test our model on it.

For a new session, the initial high-level and low-level states are collected from the user's previous sessions. In this work, we leverage previously browsed/clicked/ordered items to generate the high-level and low-level states. The external rewards for skipped/clicked/ordered items are empirically set to 0, 1, and 5, respectively. The dimension of the item embeddings is 50. The parameters of the proposed framework, including the discount factor γ, are selected via cross-validation. Correspondingly, we also do parameter tuning for the baselines for a fair comparison.

For the online test, we use the average sum of all rewards in one recommendation session as the metric. For the offline test, we select MAP (Turpin and Scholer, 2006) and NDCG@20(40) (Järvelin and Kekäläinen, 2002) as the metrics to measure performance. The difference between our setting and traditional Learn-to-Rank methods is that we rank both clicked and ordered items together and assign them different rewards, rather than ranking only clicked items as in the Learn-to-Rank setting.

Figure 6. Training procedure.

4.2. Performance Comparison

First, we train the proposed framework HRL-MG to convergence in the simulated online environment and then test its performance both online and offline, comparing our framework with DNN, DDPG, and HRL.

  • DNN: This is a deep neural network similar to LCritic, with similar encoder layers to capture the user's abstract state, which tries to evaluate the immediate reward of the current state-action pair. It always recommends the items with the highest immediate reward.

  • DDPG: Only the low-level agent, without the guidance of goals. It always recommends the item with the highest discounted cumulative return as evaluated by LCritic.

  • HRL: The proposed framework with the number of goals set to 1, i.e., its high-level agent guides the low-level agent with a single goal over the goal horizon.

We train DDPG and HRL with the online training strategy (similar to the method described in Section 3.3). DNN can also be trained via the rewards generated by the simulated online environment.

We perform the offline test by re-ranking users’ offline logs, and the online test on the simulated online environment described above. Since the online test is based on the simulator, we can control the length of recommendation sessions to study performance in short and long sessions. We define short sessions as those with 50 recommended items and long sessions as those with 300. The results are shown in Figures 6-8. We observe the following:

Figure 7. Performance comparison for offline test.
Figure 8. Performance comparison for online test. (a) Performance in long sessions. (b) Performance in short sessions.
  • Figures 6(a) and 6(b) illustrate the training processes of the high-level and low-level agents in HRL and our HRL-MG. In Figure 6(a), the high-level agent in HRL initially grows in the wrong direction and eventually falls back to the convergence point, while the HRA of HRL-MG grows steadily. This is because the multiple goals and the sharing mechanism improve the update speed and stability of the high-level agent. In Figure 6(b), the low-level agent in HRL-MG converges much faster than that in HRL. Notice that the low-level agent in HRL does not begin to improve until the high-level agent converges, whereas that in HRL-MG has no such dependency. This is because multiple goals greatly reduce the difficulty for the low-level agent to achieve its goal.

  • Figures 7 and 8 show that DDPG, HRL, and HRL-MG outperform DNN in both offline and online tests. This is because DNN only considers the immediate reward, while the other three are based on reinforcement learning and take long-term cumulative returns into account, achieving higher performance.

  • Figures 7 and 8 show that HRL and HRL-MG outperform DDPG in both offline and online tests. DDPG acts at a high time resolution, generating each specific recommendation from the current state, and cannot effectively handle the sparse conversion signal. In contrast, HRL and HRL-MG have hierarchical structures that observe over a wider time range, capture the sparse reward signal, and improve the performance of the low-level agent through the guidance of goals.

  • Figure 7 shows that HRL-MG outperforms HRL in the offline test. This is because multiple goals convey more of the sparse conversion information, forcing the low-level agent to focus more on improving conversions.

  • Figure 8 shows that in the online test, the cumulative total rewards and orders of HRL-MG are significantly higher than HRL’s, while the cumulative clicks are slightly lower. This indicates that HRL-MG is better at improving conversions and overall revenue. There is a trade-off between click and conversion enhancements because the two are not completely positively correlated.

4.3. Parameter Sensitivity

Our method has two key parameters: one controls the influence of the internal reward, and the other controls the number of goals. To study their impact, we investigate how the proposed framework behaves as one parameter changes while the others are fixed.

Figure 9. Parameter sensitivity of the internal-reward weight.

Figure 9 shows the sensitivity of the internal-reward weight in the online recommendation task (long sessions). Recommendation performance peaks at a nonzero value of this weight; in other words, the high-level goals indeed improve the performance of the framework. As the weight increases, the cumulative clicks gradually decrease, indicating that conversion information can negatively impact clicks since the two are not completely positively correlated. Choosing a suitable weight significantly improves the cumulative orders and total rewards.

Figure 10 shows the sensitivity of the number of goals in the online recommendation task (long sessions). Performance peaks at an intermediate number of goals: too many goals cause the cumulative clicks to decrease and hurt the overall performance of the framework. However, even at larger goal counts where the cumulative clicks are greatly reduced, the cumulative orders remain higher than those of a single goal, which fully illustrates how multiple goals promote conversions.

Figure 10. Parameter sensitivity of the number of goals.

5. Related Work

Recommendation algorithms can be roughly divided into three categories: traditional, deep learning based, and reinforcement learning based. First, traditional recommendation algorithms consist of collaborative filtering (Breese et al., 1998), content-based filtering (Mooney and Roy, 2000), and hybrid methods (Burke, 2002). Second, deep learning based recommendation algorithms have become the current mainstream. Deep learning can help learn item embeddings from sequence, image, or graph information (Covington et al., 2016), extract users’ latent tastes (Wu et al., 2016), or directly improve traditional methods (Zhang et al., 2017).

Third, reinforcement learning based recommendation algorithms differ substantially from the above two categories. They model the recommendation procedure as an interaction sequence between users (the environment) and the recommendation agent, and leverage reinforcement learning to automatically learn the optimal recommendation strategy. For instance, Li et al. (Li et al., 2010) presented a contextual-bandit approach for personalized news article recommendation, in which a portion of new items is exposed to balance exploration and exploitation. Zhao et al. (Zhao et al., 2018c, a) proposed a novel page-wise recommendation framework based on reinforcement learning, which can optimize a page of items with proper display based on real-time feedback from users.

Deep hierarchical reinforcement learning is dedicated to extending and combining existing reinforcement learning methods to solve more complex and difficult problems (S. Sutton et al., 1999; Barto and Mahadevan, 2003), and recommendation is exactly such a problem. Recently, goal-based hierarchical reinforcement learning frameworks (Vezhnevets et al., 2017; Nachum et al., 2018) have emerged, in which the high level and low level communicate through goals. However, to our knowledge, there is no existing hierarchical reinforcement learning method for recommender systems.

6. Conclusion

In this paper, we propose a novel hierarchical reinforcement learning based recommendation framework, which consists of two components, i.e., a high-level agent and a low-level agent. The high-level agent catches long-term sparse conversion signals and automatically sets abstract multi-goals for the low-level agent, while the low-level agent follows different goals in different stages and interacts with the real-time environment. The multiple high-level goals reduce the difficulty for the low-level agent to approach the high-level goals and accelerate the convergence rate of our proposed algorithm. Experimental results based on a real-world e-commerce dataset demonstrate the effectiveness of the proposed framework. There are several interesting future research directions. First, the low-level agent could be guided in other ways, such as with a hidden state representing long-term preference. Second, the framework is general, and more specific information could be used to improve performance on specific tasks, such as item category information, user profiles, etc.

Appendix A Discussions on Framework

In practice, there are three basic difficulties in recommendation tasks: a) the number of users a recommendation platform has to serve is up to several hundred million, and their preferences vary greatly; b) the number of items to be recommended grows rapidly and changes dynamically over time, i.e., some items are deleted while others are added; c) it is time-consuming to select the optimal item from the set of alternatives.

In reinforcement learning terms, a) means a huge state space and b) means a huge and dynamic action space. In addition, the action-value function is usually highly nonlinear, and many state-action pairs may never appear in real traces, making it hard to update their values. Traditional reinforcement learning methods such as POMDP (Shani et al., 2005) and Q-learning (Taghipour and Kardan, 2008) are therefore not suitable, because they cannot store massive data or handle such complex relationships. Deep Q-Network (DQN) (Mnih et al., 2013) is also not applicable, because the huge action space would greatly reduce its update speed. Therefore, we must leverage deep reinforcement learning (Lillicrap et al., 2015), using deep neural networks as nonlinear function approximators for the policy and the Q-value function simultaneously; thus an Actor-Critic architecture (Sutton and Barto, 1998) is fundamentally needed. In practice, it is also not enough to represent items with discrete indexes only, because such representations carry no semantics and do not capture relationships between items. A common practice is to extract the information of each item, such as text or images, and embed it into a continuous abstract action space (Levy and Goldberg, 2014).

Appendix B Algorithm

b.1. Mapping Algorithm

We present the mapping algorithm in Algorithm 1. The LActor generates a proto-action (line 1) and selects the most similar item based on cosine similarity (line 2). Finally, this item is removed from the item embedding set (line 3), which prevents the same item from being recommended repeatedly within a session. The LActor then recommends this item to the user and receives the immediate reward.

1:User’s low-level state , item embedding set .
2:Valid recommendation item .
3:Generate proto-action according to Eq. (13).
4:Select the most similar item according to Eq. (17).
5:Remove item from
6:return
Algorithm 1 Mapping Algorithm.
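Algorithm 1 can be sketched as follows. The function name, the dict-based item store, and the toy 2-dimensional embeddings are illustrative assumptions; only the proto-action, cosine-similarity selection, and removal steps come from the algorithm above.

```python
import numpy as np

def map_to_item(proto_action, item_embeddings):
    """Sketch of Algorithm 1: pick the candidate item whose embedding is
    most cosine-similar to the proto-action, then remove it from the
    candidate set so it is not recommended again in this session.
    `item_embeddings` is a dict {item_id: embedding vector}."""
    def cosine(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    best = max(item_embeddings,
               key=lambda i: cosine(proto_action, item_embeddings[i]))
    item_embeddings.pop(best)   # line 3: remove to prevent repetition
    return best

# Toy example with 2-dimensional embeddings:
items = {"a": np.array([1.0, 0.0]), "b": np.array([0.0, 1.0])}
picked = map_to_item(np.array([0.9, 0.1]), items)
print(picked)        # 'a' (closest in cosine similarity)
print(list(items))   # ['b'] ('a' has been removed)
```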

b.2. Online Training Algorithm

1:Initialize HActor , HCritic , LActor , LCritic with random weights
2:Initialize target network , , , with weights
3:Initialize the capacity of high-level and low-level replay buffer
4:for  do
5:     Initialize clock
6:     Receive initial high-level and low-level state
7:     while  do
8:         Stage 1. Transition Generating Stage
9:         if  then
10:              Generate a set of goals according to Eq.(5)
11:         else
12:              
13:         end if
14:         Select an action according to Alg.1
15:         Execute action and observe external reward
16:         New high-level and low-level state
17:         Store low-level transition in
18:         
19:         if  then
20:              Collect the recent external rewards and store high-level transition in
21:         end if
22:         Stage 2. Parameter updating stage
23:         Sample mini-batch of high-level transitions from
24:         Update HCritic, HActor according to Eq.(17)(18)
25:         Sample mini-batch of low-level transitions from
26:         Update LCritic, LActor according to Eq.(20)(21)
27:         Update the target networks:
28:     end while
29:end for
Algorithm 2 Online Training Algorithm.
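The control flow of Algorithm 2 can be sketched as follows. The goal-generation, environment, and network-update steps are stubbed out with callables, and `k` denotes the goal horizon (the symbol is omitted in the extracted text); all names here are illustrative.

```python
from collections import deque

def run_episode(env_step, generate_goals, update_networks,
                session_len, k, buf_high, buf_low):
    """Stage-1/Stage-2 skeleton of Algorithm 2. `env_step`,
    `generate_goals`, and `update_networks` stand in for the simulator,
    the HActor, and the mini-batch DDPG updates respectively."""
    s_high, s_low = 0, 0
    goals, recent = None, []
    for t in range(session_len):
        # --- Stage 1: transition generating ---
        if t % k == 0:                          # HRA emits goals every k steps
            goals = generate_goals(s_high)
        action, reward, s_high2, s_low2 = env_step(s_low, goals)
        buf_low.append((s_low, goals, action, reward, s_low2))   # line 17
        recent.append(reward)
        if (t + 1) % k == 0:                    # lines 19-21: hand rewards to HRA
            buf_high.append((s_high, goals, sum(recent), s_high2))
            recent = []
        # --- Stage 2: parameter updating (stubbed) ---
        update_networks(buf_high, buf_low)
        s_high, s_low = s_high2, s_low2

# Toy run: 10 steps with goal horizon k=5 yields 10 low-level and
# 2 high-level transitions.
bh, bl = deque(maxlen=10000), deque(maxlen=10000)
run_episode(env_step=lambda s, g: ("item", 1.0, s + 1, s + 1),
            generate_goals=lambda s: ["g1", "g2"],
            update_networks=lambda bh, bl: None,
            session_len=10, k=5, buf_high=bh, buf_low=bl)
print(len(bl), len(bh))  # 10 2
```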

The online training algorithm for the proposed hierarchical reinforcement learning based recommendation framework is presented in Algorithm 2. Each iteration has two stages: 1) the transition generating stage (lines 8-21), and 2) the parameter updating stage (lines 22-27). In the transition generating stage: given the current high-level and low-level states, the HRA first generates a set of goals at the start of each goal horizon and conveys them to the LRA (line 10); the LRA recommends an item according to Algorithm 1 (line 14); next, the RA observes the external reward (line 15) and updates the high-level and low-level states (line 16); then, the LRA stores the transition in the low-level replay buffer (line 17); finally, after the goal horizon elapses, the LRA collects the recent external rewards and conveys them to the HRA, which stores the transition in the high-level replay buffer (line 20). In the parameter updating stage: the HRA samples a mini-batch of transitions from the high-level buffer and updates the parameters of HActor and HCritic, while the LRA samples a mini-batch from the low-level buffer and updates the parameters of LActor and LCritic (lines 23-27), following the standard DDPG procedure (Lillicrap et al., 2015).

In the algorithm, we use widely adopted training techniques. For instance, we use experience replay (Lin, 1993) (lines 23, 25) and separate evaluation and target networks (Mnih et al., 2013) (lines 2, 27), which help smooth learning and avoid the divergence of parameters. For the soft updates of the target networks (line 27), we set the soft-update coefficient to a small value.
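The soft target update in line 27 can be written as a one-liner. The value 0.001 below is a common DDPG choice used only for illustration, since the extracted text omits the coefficient's actual value.

```python
# Soft target-network update: theta_target <- tau*theta + (1-tau)*theta_target.
def soft_update(target_params, eval_params, tau=0.001):
    return [tau * p + (1.0 - tau) * tp
            for tp, p in zip(target_params, eval_params)]

# The target parameter drifts slowly toward the evaluation parameter:
target = [0.0]
for _ in range(3):
    target = soft_update(target, [1.0])
print(round(target[0], 9))  # 0.002997001
```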

Appendix C The Test Procedure

After the training procedure, the proposed recommendation framework has learned the parameters of HActor, HCritic, LActor, and LCritic. Here we formally present the test procedure of the proposed framework. We design two methods: 1) online test, which tests the framework in an online environment where the agents interact with users and receive real-time feedback on the recommended items; and 2) offline test, which tests the framework on users’ historical logs.

c.1. Online Test

The online test algorithm for one recommendation session is presented in Algorithm 3. The online test procedure is similar to the transition generating stage of Algorithm 2. In each iteration of the recommendation session, given the current low-level state, the LRA recommends an item to the user following the learned policy (line 4). The LRA then observes the external reward from the user (line 5) and updates the low-level state (line 6).

1:Initialize LActor the trained parameters
2:Receive initial low-level state
3:for  do
4:     Select an action according to Alg.1
5:     Execute action and observe external reward
6:     New low-level state
7:end for
Algorithm 3 Online Test Algorithm.

c.2. Offline Test

The intuition behind the offline test method is that, for a given recommendation session (offline data), the LRA re-ranks the items in that session. If the proposed framework works well, the clicked/ordered items in the session will be ranked at the top of the new list. The reason the LRA only re-ranks items within the session, rather than items in the whole item space, is that for the offline dataset we only have the ground-truth rewards of the items that actually appeared in the session. The offline test algorithm for one recommendation session is presented in Algorithm 4. In each iteration of an offline test session, given the low-level state (line 2), the LRA recommends an item following the learned policy (line 4). We then add the item into the new recommendation list (line 5) and record its external reward from the user’s historical data (line 6). Then we update the low-level state (line 7). Finally, we remove the item from the item set of the current session (line 8).

1:Item embedding set and corresponding external reward set .
2:Recommendation list with new order.
3:Initialize LActor the trained parameters
4:Receive initial low-level state
5:while  do
6:     Select an action according to Alg.1
7:     Add action into the end of
8:     Record external reward from user’s historical data
9:     New low-level state
10:     Remove from
11:end while
Algorithm 4 Offline Test Algorithm.
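Algorithm 4 can be sketched as a greedy re-ranking loop over one logged session. `policy` stands in for the trained LActor combined with Algorithm 1, and the toy scalar "embeddings" are illustrative; only the select/append/record/remove structure comes from the algorithm above.

```python
def offline_rerank(items, rewards, policy, init_state, next_state):
    """Sketch of Algorithm 4: re-rank the items of one logged session.
    items:   item_id -> embedding (candidate set of this session)
    rewards: item_id -> ground-truth external reward from the logs"""
    remaining = dict(items)
    state, ranked, gained = init_state, [], []
    while remaining:                      # line 5
        item = policy(state, remaining)   # line 6: select via Alg. 1
        ranked.append(item)               # line 7
        gained.append(rewards[item])      # line 8: record logged reward
        state = next_state(state, item)   # line 9
        remaining.pop(item)               # line 10
    return ranked, gained

# Toy policy: always pick the remaining item with the largest "embedding".
ranked, gained = offline_rerank(
    items={"x": 3, "y": 1, "z": 2},
    rewards={"x": 5, "y": 0, "z": 1},
    policy=lambda s, rem: max(rem, key=rem.get),
    init_state=0, next_state=lambda s, i: s + 1)
print(ranked, gained)  # ['x', 'z', 'y'] [5, 1, 0]
```

A good policy places the ordered item (reward 5) first, which is exactly what the offline NDCG/MAP metrics then reward.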

Appendix D Statistics on the dataset

Long-tail data has been filtered from this dataset:

Dataset    Date       Samples    SKU      Clicks   Orders
Train_set  Aug. 11th  8,596,852  553,156  843,249  46,022
Test_set   Aug. 12th  2,231,651  287,689  218,053  10,552
Table 1. Statistics on the dataset (Year: 2018)

Appendix E Parameter Sensitivity in Short Session

The parameter sensitivity of the internal-reward weight and of the number of goals in the online recommendation task (short sessions) is shown in Figures 11 and 12.

Figure 11. Parameter sensitivity of the internal-reward weight in short sessions.
Figure 12. Parameter sensitivity of the number of goals in short sessions.

Footnotes

  1. journalyear: 2019
  2. copyright: acmlicensed
  3. conference: KDD ’19: The 25th ACM SIGKDD Conference on Knowledge Discovery & Data Mining; August 04–08, 2019; Anchorage, Alaska USA
  4. booktitle: KDD ’19: The 25th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, August 04–08, 2019, Anchorage, Alaska USA
  5. price: 15.00
  6. doi: 10.1145/nnnnnnn.nnnnnnn
  7. isbn: 978-x-xxxx-xxx-x/YY/MM

References

  1. Andrew G. Barto and Sridhar Mahadevan. 2003. Recent Advances in Hierarchical Reinforcement Learning. Discrete Event Dynamic Systems 13, 1-2 (2003), 41–77.
  2. John S Breese, David Heckerman, and Carl Kadie. 1998. Empirical analysis of predictive algorithms for collaborative filtering. In Proceedings of the 14th conference on Uncertainty in artificial intelligence. Morgan Kaufmann Publishers Inc., 43–52.
  3. Robin Burke. 2002. Hybrid recommender systems: Survey and experiments. User modeling and user-adapted interaction 12, 4 (2002), 331–370.
  4. Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, Rohan Anil, Zakaria Haque, Lichan Hong, Vihan Jain, Xiaobing Liu Google, and Hemal Shah. 2016. Wide & Deep Learning for Recommender Systems. In Proceedings of the 1st Workshop on Deep Learning for Recommender Systems. ACM, 7–10.
  5. Paul Covington, Jay Adams, and Emre Sargin. 2016. Deep Neural Networks for YouTube Recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems. ACM, 191–198.
  6. Gabriel Dulac-Arnold, Richard Evans, Hado van Hasselt, Peter Sunehag, Timothy Lillicrap, Jonathan Hunt, Timothy Mann, Theophane Weber, Thomas Degris, and Ben Coppin. 2015. Deep reinforcement learning in large discrete action spaces. arXiv preprint arXiv:1512.07679 (2015).
  7. Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, and Domonkos Tikk. 2015. Session-based recommendations with recurrent neural networks. arXiv preprint arXiv:1511.06939 (2015).
  8. Yujing Hu, Qing Da, Anxiang Zeng, Yang Yu, and Yinghui Xu. 2018. Reinforcement Learning to Rank in E-Commerce Search Engine: Formalization. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM.
  9. Kalervo Järvelin and Jaana Kekäläinen. 2002. Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems (TOIS) 20, 4 (2002), 422–446.
  10. Omer Levy and Yoav Goldberg. 2014. Neural word embedding as implicit matrix factorization. Advances in neural information processing systems (2014), 2177–2185.
  11. Lihong Li, Wei Chu, John Langford, and Robert E. Schapire. 2010. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th international conference on World Wide Web. ACM, 661–670.
  12. Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. 2015. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971 (2015).
  13. Long-Ji Lin. 1993. Reinforcement learning for robots using neural networks. Technical Report, Carnegie-Mellon Univ Pittsburgh PA School of Computer Science.
  14. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. 2013. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602 (2013).
  15. Raymond J Mooney and Loriene Roy. 2000. Content-based book recommending using learning for text categorization. In Proceedings of the 5th ACM conference on Digital libraries. ACM, 195–204.
  16. Ofir Nachum, Shixiang Gu, Honglak Lee, and Sergey Levine. 2018. Data-Efficient Hierarchical Reinforcement Learning. In Advances in neural information processing systems.
  17. Mohammad Norouzi and David Fleet. 2011. Minimal Loss Hashing for Compact Binary Codes. In Proceedings of the 28th International Conference on Machine Learning. 353–360.
  18. Paul Resnick and Hal R Varian. 1997. Recommender systems. Commun. ACM 40, 3 (1997), 56–58.
  19. Francesco Ricci, Lior Rokach, and Bracha Shapira. 2011. Introduction to recommender systems handbook. In Recommender systems handbook. Springer, 1–35.
  20. Guy Shani, David Heckerman, and Ronen I Brafman. 2005. An MDP-based recommender system. Journal of Machine Learning Research 6, Sep (2005), 1265–1295.
  21. Richard S. Sutton, Doina Precup, and Satinder Singh. 1999. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence 112, 1-2 (1999), 181–211.
  22. Richard S Sutton and Andrew G Barto. 1998. Reinforcement learning: An introduction. Vol. 1. MIT press, Cambridge.
  23. Nima Taghipour and Ahmad Kardan. 2008. A hybrid web recommender system based on q-learning. In Proceedings of the 2008 ACM symposium on Applied computing. ACM, 1164–1168.
  24. Andrew Turpin and Falk Scholer. 2006. User performance versus precision measures for simple search tasks. In Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval. ACM, 11–18.
  25. Alexander Sasha Vezhnevets, Simon Osindero, Tom Schaul, Nicolas Heess, Max Jaderberg, David Silver, and Koray Kavukcuoglu. 2017. FeUdal Networks for Hierarchical Reinforcement Learning. arXiv preprint arXiv:1703.01161 (2017).
  26. Sai Wu, Weichao Ren, Chengchao Yu, Gang Chen, Dongxiang Zhang, and Jingbo Zhu. 2016. Personal recommendation using deep recurrent neural networks in NetEase. In Data Engineering (ICDE), 2016 IEEE 32nd International Conference on Data Engineering. IEEE, 1218–1229.
  27. Hongxia Yang, Quan Lu, Angus Xianen Qiu, and Chun Han. 2016. Large Scale CVR Prediction through Dynamic Transfer Learning of Global and Local Features. In Proceedings of the 5th International Workshop on Big Data, Streams and Heterogeneous Source Mining: Algorithms, Systems, Programming Models and Applications at KDD 2016, Vol. 53. PMLR, 103–119.
  28. Shuai Zhang, Lina Yao, and Aixin Sun. 2017. Deep Learning based Recommender System: A Survey and New Perspectives. arXiv preprint arXiv:1707.07435 (2017).
  29. Xiangyu Zhao, Long Xia, Liang Zhang, Zhuoye Ding, Dawei Yin, and Jiliang Tang. 2018a. Deep Reinforcement Learning for Page-wise Recommendations. arXiv preprint arXiv:1805.02343 (2018).
  30. Xiangyu Zhao, Liang Zhang, Zhuoye Ding, Long Xia, Jiliang Tang, and Dawei Yin. 2018b. Recommendations with Negative Feedback via Pairwise Deep Reinforcement Learning. In KDD’18: The 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM.
  31. Xiangyu Zhao, Liang Zhang, Zhuoye Ding, Dawei Yin, Yihong Zhao, and Jiliang Tang. 2018c. Deep Reinforcement Learning for List-wise Recommendations. arXiv preprint arXiv:1801.00209 (2018).
  32. Guorui Zhou, Na Mou, Ying Fan, Qi Pi, Weijie Bian, Chang Zhou, Xiaoqiang Zhu, and Kun Gai. 2018. Deep Interest Evolution Network for Click-Through Rate Prediction. arXiv preprint arXiv:1809.03672 (2018).