Learning Representations in Reinforcement Learning: An Information Bottleneck Approach

Yingjun Pei¹ and Xinwen Hou²

¹Beijing University of Posts and Telecommunications, peiyingjun4@gmail.com
²Institute of Automation, Chinese Academy of Sciences, xwhou@nlpr.ia.ac.cn
Abstract

The information bottleneck principle of [25] is an elegant and useful approach to representation learning. In this paper, we investigate the problem of representation learning in the context of reinforcement learning using the information bottleneck framework, aiming at improving the sample efficiency of the learning algorithms. We analytically derive the optimal conditional distribution of the representation, and provide a variational lower bound. Then, we maximize this lower bound with the Stein variational (SV) gradient method (originally developed in [14, 15]). We incorporate this framework into the advantage actor-critic algorithm (A2C) [16] and the proximal policy optimization algorithm (PPO) [21]. Our experimental results show that our framework can significantly improve the sample efficiency of vanilla A2C and PPO. Finally, we study the information bottleneck (IB) perspective in deep RL with the algorithm called mutual information neural estimation (MINE) [3]. We experimentally verify that the information extraction-compression process also exists in deep RL, and that our framework is capable of accelerating this process. We also analyze the relationship between MINE and our method; through this relationship, we theoretically derive an algorithm to optimize our IB framework without constructing the lower bound.

1 Introduction

In training a reinforcement learning algorithm, an agent interacts with the environment, explores the (possibly unknown) state space, and learns a policy from the exploration sample data. In many cases, such samples are quite expensive to obtain (e.g., they require interactions with a physical environment). Hence, improving the sample efficiency of the learning algorithm is a key problem in RL and has been studied extensively in the literature. Popular techniques include experience reuse/replay, which leads to powerful off-policy algorithms (e.g., [17, 23, 26, 18, 7]), and model-based algorithms (e.g., [11, 12]). Moreover, it is known that effective representations can greatly reduce the sample complexity in RL. This can be seen from the following motivating example: in the classical Atari game Seaquest, it may take tens of millions of samples to converge to an optimal policy when the input states are raw images (more than 28,000 dimensions), while it takes far fewer samples when the inputs are the 128-dimensional pre-defined RAM data [24]. Clearly, the RAM data contain much less information irrelevant to the learning process than the raw images. Thus, we argue that an efficient representation is crucial to sample efficiency.

In this paper, we try to improve the sample efficiency in RL from the perspective of representation learning using the celebrated information bottleneck framework [25]. In standard deep learning, the experiments in [22] show that during the training process, the neural network first "remembers" the inputs by increasing the mutual information between the inputs and the representation variables, and then compresses the inputs into an efficient representation related to the learning task by discarding redundant information from the inputs (decreasing the mutual information between inputs and representation variables). We call this phenomenon the "information extraction-compression process" (information E-C process). Our experiments show that, similar to the results in [22], the information extraction-compression phenomenon also occurs in deep RL, which we observe for the first time to the best of our knowledge (we use MINE [3] for estimating the mutual information). This observation motivates us to adopt the information bottleneck (IB) framework in reinforcement learning in order to accelerate the extraction-compression process. The IB framework explicitly enforces RL agents to learn an efficient representation by discarding irrelevant information from the raw input data, hence improving the sample efficiency. Our technical contributions can be summarized as follows:

  1. We observe that the "information extraction-compression process" also exists in the context of deep RL (using MINE [3] to estimate the mutual information).

  2. We derive the optimization problem of our information bottleneck framework in RL. In order to solve the optimization problem, we construct a lower bound and use the Stein variational gradient method developed in [15] to optimize the lower bound.

  3. We show that our framework can accelerate the information extraction-compression process. Our experimental results also show that combining actor-critic algorithms (such as A2C, PPO) with our framework is more sample-efficient than their original versions.

  4. We analyze the relationship between our framework and MINE; through this relationship, we theoretically derive an algorithm to optimize our IB framework without constructing the lower bound.

Finally, we note that our IB method is orthogonal to other methods for improving the sample efficiency, and it is an interesting direction for future work to incorporate it into off-policy and model-based algorithms.

2 Related Work

The information bottleneck framework was first introduced in [25]. They solve the framework with the iterative Blahut-Arimoto algorithm, which is infeasible to apply to deep neural networks. [22] tries to open the black box of deep learning from the perspective of the information bottleneck, though the method they use to compute the mutual information is not precise. [2] derives a variational information bottleneck framework; yet apart from introducing a prior target distribution $r(z)$ for the marginal of the representation, they also assume that the representation distribution $p(z|x)$ itself must be Gaussian, which limits the capability of the representation function. [20] extends this framework to the variational discriminator bottleneck to improve GANs [9], imitation learning and inverse RL.

As for improving sample efficiency, [17, 26, 18] mainly utilize experience reuse. Besides experience reuse, [23, 8] learn deterministic policies, and [7] seeks to mitigate the policy lag in distributed off-policy learning. [11, 12] learn a model of the environment. Some other powerful techniques can be found in [5].

State representation learning has been studied extensively; readers can find some classic works in the overview [13]. Apart from this overview, [19] provides a theoretical foundation for maintaining the optimality of the representation space. [4] proposes a new perspective on representation learning in RL based on geometric properties of the space of value functions. [1] learns representations via the information bottleneck (IB) in imitation/apprenticeship learning. To the best of our knowledge, there is no prior work that directly applies IB to basic RL algorithms.

3 Preliminaries

A Markov decision process (MDP) is a tuple $(\mathcal{S}, \mathcal{A}, R, P, \rho_0)$, where $\mathcal{S}$ is the set of states, $\mathcal{A}$ is the set of actions, $R: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}$ is the reward function, $P: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to [0, 1]$ is the transition probability function (where $P(s'|s, a)$ is the probability of transitioning to state $s'$ given that the previous state is $s$ and the agent took action $a$ in $s$), and $\rho_0: \mathcal{S} \to [0, 1]$ is the starting state distribution. A policy $\pi: \mathcal{S} \to \mathcal{P}(\mathcal{A})$ is a map from states to probability distributions over actions, with $\pi(a|s)$ denoting the probability of choosing action $a$ in state $s$.

In reinforcement learning, we aim to select a policy $\pi$ which maximizes the expected return $J(\pi) = \mathbb{E}_{\tau \sim \pi}\big[\sum_{t=0}^{\infty} \gamma^{t} r(s_t, a_t)\big]$; with a slight abuse of notation we denote $r(s_t, a_t, s_{t+1})$ by $r(s_t, a_t)$. Here $\gamma \in [0, 1)$ is a discount factor and $\tau$ denotes a trajectory $(s_0, a_0, s_1, a_1, \ldots)$. Define the state value function as $V^{\pi}(s) = \mathbb{E}_{\tau \sim \pi}\big[\sum_{t=0}^{\infty} \gamma^{t} r(s_t, a_t) \mid s_0 = s\big]$, which is the expected return of policy $\pi$ starting from state $s$, and the state-action value function $Q^{\pi}(s, a) = \mathbb{E}_{\tau \sim \pi}\big[\sum_{t=0}^{\infty} \gamma^{t} r(s_t, a_t) \mid s_0 = s, a_0 = a\big]$ as the expected return of policy $\pi$ after taking action $a$ in state $s$.
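As a concrete illustration of these definitions, the following sketch (our own illustrative code, not from the paper) computes discounted returns and the $n$-step bootstrapped value targets that will be used as supervising signals later in Section 4:

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """R_t = sum_{i >= t} gamma^(i-t) * r_i, computed backwards over one episode."""
    returns = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

def n_step_targets(rewards, values, gamma=0.99, n=5):
    """y_t = r_t + ... + gamma^(n-1) * r_{t+n-1} + gamma^n * V(s_{t+n});
    near the end of the episode it falls back to the Monte Carlo return."""
    T = len(rewards)
    targets = np.zeros(T)
    for t in range(T):
        horizon = min(n, T - t)
        y, discount = 0.0, 1.0
        for k in range(horizon):
            y += discount * rewards[t + k]
            discount *= gamma
        if t + horizon < T:            # bootstrap with the critic's value estimate
            y += discount * values[t + horizon]
        targets[t] = y
    return targets

# Example: rewards [1, 0, 1] with gamma = 0.99 give returns [1.9801, 0.99, 1.0].
print(discounted_returns(np.array([1.0, 0.0, 1.0])))
```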

Actor-critic algorithms combine the advantages of policy gradient methods and value-function-based methods; a well-known example is A2C [16]. Specifically, when the policy $\pi_{\theta}$ is parameterized by $\theta$, A2C uses the following equation to approximate the real policy gradient $\nabla_{\theta} J(\pi_{\theta})$:

$$\nabla_{\theta} J(\pi_{\theta}) \;\approx\; \mathbb{E}_{\pi_{\theta}}\Big[\nabla_{\theta} \log \pi_{\theta}(a_t|s_t)\,\big(R_t - b(s_t)\big) + \nabla_{\theta} H\big(\pi_{\theta}(\cdot|s_t)\big)\Big] \qquad (1)$$

where $R_t = \sum_{i \ge t} \gamma^{i-t} r(s_i, a_i)$ is the accumulated return from time step $t$, $H(\pi_{\theta}(\cdot|s_t))$ is the entropy of the distribution $\pi_{\theta}(\cdot|s_t)$, and $b(s_t)$ is a baseline function, which is commonly replaced by $V^{\pi}(s_t)$.

A2C also includes the minimization of the mean squared error between the return $R_t$ and the value function $V_{\theta}(s_t)$. Thus in practice, the total objective function of A2C can be written as:

$$J(\theta) \;=\; \mathbb{E}_{\pi_{\theta}}\Big[\log \pi_{\theta}(a_t|s_t)\,\big(R_t - b(s_t)\big) - c_1\big(R_t - V_{\theta}(s_t)\big)^2 + c_2\, H\big(\pi_{\theta}(\cdot|s_t)\big)\Big] \qquad (2)$$

where $c_1, c_2$ are two coefficients.
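As a minimal sketch of how (2) is typically implemented (the sign is flipped because optimizers minimize a loss), assuming a PyTorch setting; the function name and the default values of `c1` and `c2` are our illustrative choices, not taken from the paper:

```python
import torch

def a2c_loss(dist, values, actions, returns, c1=0.5, c2=0.01):
    """Negative of objective (2): policy-gradient term with an advantage
    baseline, value regression toward the return, and an entropy bonus.
    `dist` is e.g. a torch.distributions.Categorical over actions."""
    advantages = returns - values.detach()                # R_t - b(s_t)
    policy_loss = -(dist.log_prob(actions) * advantages).mean()
    value_loss = (returns - values).pow(2).mean()         # (R_t - V(s_t))^2
    entropy = dist.entropy().mean()                       # H(pi(.|s_t))
    return policy_loss + c1 * value_loss - c2 * entropy
```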

In the context of representation learning in RL, the state $s$ (including in $\pi(a|s)$ and $V(s)$) can be replaced by $z$, where $z$ is a learnable low-dimensional representation of the state $s$. For example, given a representation function $f$ with parameter $\theta_1$, define $z = f_{\theta_1}(s)$. For simplicity, we write $\pi(a|f_{\theta_1}(s))$ as $\pi(a|z)$.
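Concretely, the split between the representation function and the policy-value heads could look like the sketch below; the class names and layer sizes are hypothetical (a stochastic, noise-conditioned variant of the encoder is described in Section 5):

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Representation function z = f_{theta_1}(s)."""
    def __init__(self, state_dim, z_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                                 nn.Linear(128, z_dim))

    def forward(self, s):
        return self.net(s)

class PolicyValueHead(nn.Module):
    """Policy pi(a|z) and value V(z), both computed from the representation z."""
    def __init__(self, z_dim, n_actions):
        super().__init__()
        self.pi = nn.Linear(z_dim, n_actions)
        self.v = nn.Linear(z_dim, 1)

    def forward(self, z):
        dist = torch.distributions.Categorical(logits=self.pi(z))
        return dist, self.v(z).squeeze(-1)
```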

4 Framework

4.1 Information Bottleneck in Reinforcement Learning

The information bottleneck framework is an information-theoretic framework for extracting the relevant information, or yielding a representation $Z$, that an input $X$ contains about an output $Y$. An optimal representation of $X$ would capture the relevant factors and compress $X$ by diminishing the irrelevant parts which do not contribute to the prediction of $Y$. In a Markovian structure $Y \to X \to Z$, where $X$ is the input, $Z$ is the representation of $X$ and $Y$ is the label of $X$, IB seeks an embedding distribution $p(z|x)$ that solves:

$$\min_{p(z|x)} \;\; \mathbb{E}_{x, y}\,\mathbb{E}_{z \sim p(z|x)}\big[-\log p(y|z)\big] \;+\; \beta\, I(X; Z) \qquad (3)$$

which appears as the standard cross-entropy loss¹ in supervised learning with an MI regularizer; $\beta$ is a coefficient that controls the magnitude of the regularizer.

¹Mutual information is defined as $I(X; Z) = \int p(x, z) \log \frac{p(x, z)}{p(x)\, p(z)}\, dx\, dz$, and conditional entropy is defined as $H(Y|Z) = -\int p(y, z) \log p(y|z)\, dy\, dz$. In a binary-classification problem, minimizing $\mathbb{E}\big[-\log p(y|z)\big]$ is equivalent to minimizing the cross-entropy loss.

Next we derive an information bottleneck framework in reinforcement learning. Just like the label $Y$ in the context of supervised learning shown in (3), we take the supervising signal in RL to be the accurate value $y(s)$ of a specific state $s$ for a fixed policy $\pi$, which in practice can be approximated by an $n$-step bootstrapping function. Let $P_{\theta_2}(y|z)$ be the following distribution:

$$P_{\theta_2}(y|z) \;\propto\; \exp\!\Big(-\big(y - V_{\theta_2}(z)\big)^2\Big) \qquad (4)$$

This assumption is heuristic but reasonable: if we have an input $s$ with label $y(s)$, and we have $s$'s representation $z$, then naturally we want to train our decision function $V_{\theta_2}(z)$ to approximate the true label $y(s)$. If we set our target distribution to be $P_{\theta_2}(y|z)$, the probability decreases as $V_{\theta_2}(z)$ gets far from $y(s)$ and increases as it gets close to $y(s)$.
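A short algebraic check, in the notation above, of why this choice of $P_{\theta_2}(y|z)$ turns the cross-entropy term of (3) into a squared error (this is the step behind (5) below):

$$-\log P_{\theta_2}(y|z) \;=\; \big(y - V_{\theta_2}(z)\big)^2 + \log C, \qquad C = \int \exp\!\big(-(y' - V_{\theta_2}(z))^2\big)\, dy' = \sqrt{\pi},$$

so the normalization constant $C$ does not depend on $V_{\theta_2}(z)$ and can be dropped from the objective.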

For simplicity, we just write $P(y|z)$ instead of $P_{\theta_2}(y|z)$ in the following context.

With this assumption, equation (3) can be written as:

$$\min_{p_{\theta_1}(z|s)} \;\; \mathbb{E}_{s}\,\mathbb{E}_{z \sim p_{\theta_1}(z|s)}\big[\big(y(s) - V_{\theta_2}(z)\big)^2\big] \;+\; \beta\, I(S; Z) \qquad (5)$$

The first term is the familiar mean squared error from supervised learning. In a network with representation parameter $\theta_1$ and policy-value parameter $\theta_2$, the policy loss in equation (1) and the IB loss in (5) can be jointly written as:

$$J(\theta_1, \theta_2) \;=\; \mathbb{E}_{s}\,\mathbb{E}_{z \sim p_{\theta_1}(z|s)}\big[\mathcal{L}^{\mathrm{RL}}(s, z; \theta_2)\big] \;+\; \beta\, I_{\theta_1}(S; Z) \qquad (6)$$

where $\mathcal{L}^{\mathrm{RL}}(s, z; \theta_2) = -\log \pi_{\theta_2}(a|z)\big(R_t - b(s)\big) + c_1\big(y(s) - V_{\theta_2}(z)\big)^2 - c_2\, H\big(\pi_{\theta_2}(\cdot|z)\big)$ (with $a$ and $R_t$ coming from the sampled trajectory), and $I_{\theta_1}(S; Z)$ denotes the MI between $S$ and $Z$ under $p_{\theta_1}(z|s)$. Notice that $J^{\mathrm{RL}}(\theta_1, \theta_2) := \mathbb{E}_{s}\,\mathbb{E}_{z \sim p_{\theta_1}(z|s)}\big[\mathcal{L}^{\mathrm{RL}}(s, z; \theta_2)\big]$ is itself a standard loss function in RL, as shown in (2). Finally we get the ultimate formalization of the IB framework in reinforcement learning:

$$\min_{\theta_1, \theta_2} \;\; J(\theta_1, \theta_2) \;=\; J^{\mathrm{RL}}(\theta_1, \theta_2) \;+\; \beta\, I_{\theta_1}(S; Z) \qquad (7)$$

The following theorem shows that if the mutual information terms of our framework and of the common RL framework are close, then our framework is near-optimal.

Theorem (Near-optimality theorem)

The policy $\tilde{\pi}$ with representation parameter $\tilde{\theta}_1$ (our framework), and the optimal policy $\pi^*$ with its relevant representation parameter $\theta_1^*$ (the common RL framework), are defined as follows:

$$(\tilde{\theta}_1, \tilde{\theta}_2) \;=\; \arg\min_{\theta_1, \theta_2}\; J^{\mathrm{RL}}(\theta_1, \theta_2) + \beta\, I_{\theta_1}(S; Z), \qquad \tilde{\pi} = \pi_{\tilde{\theta}_2} \qquad (8)$$
$$(\theta_1^*, \theta_2^*) \;=\; \arg\min_{\theta_1, \theta_2}\; J^{\mathrm{RL}}(\theta_1, \theta_2), \qquad \pi^* = \pi_{\theta_2^*} \qquad (9)$$

Define $\tilde{J}$ as $J^{\mathrm{RL}}(\tilde{\theta}_1, \tilde{\theta}_2)$ and $J^*$ as $J^{\mathrm{RL}}(\theta_1^*, \theta_2^*)$. Assume that $\big|I_{\theta_1^*}(S; Z) - I_{\tilde{\theta}_1}(S; Z)\big| \le \epsilon$; then we have $\tilde{J} - J^* \le \beta\epsilon$.

4.2 Target Distribution Derivation and Variational Lower Bound Construction

In this section we first derive the target distribution in (7) and then seek to optimize it by constructing a variational lower bound.

We would like to solve the optimization problem in (7):

$$\min_{p(z|s)} \;\; \mathbb{E}_{s}\,\mathbb{E}_{z \sim p(z|s)}\big[\mathcal{L}^{\mathrm{RL}}(s, z; \theta_2)\big] \;+\; \beta\, I(S; Z) \qquad (10)$$

Combining the functional derivatives of the two terms with respect to $p(z|s)$ and setting their summation to 0, we can get that

$$p(z|s) \;\propto\; p(z)\,\exp\!\Big(-\tfrac{1}{\beta}\,\mathcal{L}^{\mathrm{RL}}(s, z; \theta_2)\Big), \qquad p(z) = \mathbb{E}_{s'}\big[p(z|s')\big] \qquad (11)$$

We provide a rigorous derivation of (11) in the appendix (0.A.2). We note that though our derivation is over the representation space instead of the whole network parameter space, the optimization problem (10) and the resulting distribution (11) are quite similar to the one studied in [15] in the context of Bayesian inference. However, we stress that our formulation follows from the information bottleneck framework, and is mathematically different from that in [15]. In particular, the difference lies in the term $p(z)$, which depends on the distribution we want to optimize (while in [15], the corresponding term is a fixed prior).

The following theorem shows that the distribution in (11) is an optimal target distribution (with respect to the IB objective $J(\theta_1, \theta_2)$). The proof can be found in the appendix (0.A.3).

Theorem

(Representation Improvement Theorem) Consider the objective function $J(\theta_1, \theta_2)$ in (6). Given a fixed policy-value parameter $\theta_2$, a representation distribution $p_{\theta_1}(z|s)$ and a state distribution $\mu(s)$, define a new representation distribution $\tilde{p}(z|s) \propto p_{\theta_1}(z)\,\exp\!\big(-\tfrac{1}{\beta}\,\mathcal{L}^{\mathrm{RL}}(s, z; \theta_2)\big)$, where $p_{\theta_1}(z) = \mathbb{E}_{s}\big[p_{\theta_1}(z|s)\big]$. We have $J(\tilde{p}, \theta_2) \le J(p_{\theta_1}, \theta_2)$.

Though we have derived the optimal target distribution, it is still difficult to compute the marginal $p_{\theta_1}(z) = \mathbb{E}_{s'}\big[p_{\theta_1}(z|s')\big]$. In order to resolve this problem, we construct a variational lower bound with a distribution $q(z)$ which is independent of $\theta_1$. Notice that $\mathbb{E}_{z \sim p_{\theta_1}(z)}\big[\log p_{\theta_1}(z)\big] \ge \mathbb{E}_{z \sim p_{\theta_1}(z)}\big[\log q(z)\big]$. Now, we can derive a lower bound of $-J(\theta_1, \theta_2)$ in (6) as follows:

$$-J(\theta_1, \theta_2) \;\ge\; \mathbb{E}_{s}\,\mathbb{E}_{z \sim p_{\theta_1}(z|s)}\Big[-\mathcal{L}^{\mathrm{RL}}(s, z; \theta_2) - \beta \log p_{\theta_1}(z|s) + \beta \log q(z)\Big] \;=:\; \hat{J}(\theta_1, \theta_2) \qquad (12)$$

Naturally the target distribution for maximizing the lower bound is:

$$p(z|s) \;\propto\; q(z)\,\exp\!\Big(-\tfrac{1}{\beta}\,\mathcal{L}^{\mathrm{RL}}(s, z; \theta_2)\Big) \qquad (13)$$
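For completeness, here is a short justification, in our notation, of the inequality used above: replacing the marginal $p_{\theta_1}(z)$ with any fixed $q(z)$ can only over-estimate the mutual information, so subtracting $\beta$ times the surrogate gives a lower bound as in (12):

$$\mathbb{E}_{z \sim p_{\theta_1}(z)}\big[\log p_{\theta_1}(z) - \log q(z)\big] \;=\; \mathrm{KL}\big(p_{\theta_1}(z)\,\big\|\,q(z)\big) \;\ge\; 0,$$

hence

$$I_{\theta_1}(S; Z) \;=\; \mathbb{E}_{s,\, z \sim p_{\theta_1}(z|s)}\big[\log p_{\theta_1}(z|s) - \log p_{\theta_1}(z)\big] \;\le\; \mathbb{E}_{s,\, z \sim p_{\theta_1}(z|s)}\big[\log p_{\theta_1}(z|s) - \log q(z)\big],$$

with equality when $q(z) = p_{\theta_1}(z)$.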

4.3 Optimization by Stein Variational Gradient Descent

Next we utilize the methods in [14, 15, 10] to optimize the lower bound.

Stein variational gradient descent (SVGD) is a non-parametric variational inference algorithm that leverages efficient deterministic dynamics to transport a set of particles $\{z_i\}_{i=1}^{m}$ to approximate a given target distribution $p(z)$. We choose SVGD to optimize the lower bound because of its ability to handle unnormalized target distributions such as (13).

Briefly, SVGD iteratively updates the "particles" $\{z_i\}$ via a direction function $\phi^*(\cdot)$ in the unit ball of a reproducing kernel Hilbert space (RKHS) $\mathcal{H}$:

$$z_i \;\leftarrow\; z_i + \epsilon\, \phi^*(z_i) \qquad (14)$$

where $\phi^*$ is chosen as the direction that maximally decreases² the KL divergence between the particles' distribution $q$ and the target distribution $p = \tilde{p}/Z$ ($\tilde{p}$ is the unnormalized distribution and $Z$ is the normalization coefficient), in the sense that

$$\phi^* \;=\; \arg\max_{\phi \in \mathcal{H},\, \|\phi\|_{\mathcal{H}} \le 1} \Big\{ -\frac{d}{d\epsilon}\, \mathrm{KL}\big(q_{[\epsilon\phi]} \,\big\|\, p\big)\Big|_{\epsilon=0} \Big\} \qquad (15)$$

where $q_{[\epsilon\phi]}$ is the distribution of the updated particles $z + \epsilon\,\phi(z)$ and $q$ is the distribution of the current particles $z$. [14] showed a closed form of this direction:

$$\phi^*(z_i) \;=\; \mathbb{E}_{z_j \sim q}\Big[k(z_j, z_i)\,\nabla_{z_j} \log \tilde{p}(z_j) + \nabla_{z_j} k(z_j, z_i)\Big] \qquad (16)$$

where $k(\cdot, \cdot)$ is a kernel function (typically an RBF kernel). Notice that the normalization coefficient $Z$ has been omitted.

²In fact, $\phi^*$ is chosen to maximize the directional derivative of $-\mathrm{KL}(q \,\|\, p)$, which can be viewed as the "gradient" of the KL divergence.
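The following is a generic, self-contained PyTorch sketch of the closed-form update (14)/(16) for a set of particles, with the RBF bandwidth chosen by the median heuristic of [14]; the function names are ours, and `score_fn` stands for any unnormalized log-density $\log \tilde{p}(z)$:

```python
import torch

def rbf_kernel(z, h=None):
    """K_ij = exp(-||z_i - z_j||^2 / h) and the summed kernel gradients
    sum_j grad_{z_j} k(z_j, z_i); h defaults to the median heuristic."""
    sq_dists = torch.cdist(z, z, p=2.0) ** 2                       # (m, m)
    if h is None:
        h = sq_dists.flatten().median() / torch.log(torch.tensor(float(z.shape[0]) + 1.0))
        h = h.clamp_min(1e-8)
    K = torch.exp(-sq_dists / h)
    grad_K = 2.0 * (K.sum(dim=1, keepdim=True) * z - K @ z) / h    # (m, d)
    return K, grad_K

def svgd_step(z, score_fn, step_size=1e-2):
    """One application of (14): z_i <- z_i + eps * phi*(z_i), with phi* as in (16)."""
    z = z.detach().requires_grad_(True)
    log_p = score_fn(z)                                            # shape (m,)
    grad_log_p = torch.autograd.grad(log_p.sum(), z)[0]            # (m, d)
    K, grad_K = rbf_kernel(z.detach())
    phi = (K @ grad_log_p + grad_K) / z.shape[0]
    return (z + step_size * phi).detach()

# Example: move 64 particles one step toward a standard Gaussian target.
particles = svgd_step(torch.randn(64, 2), lambda z: -0.5 * (z ** 2).sum(dim=1))
```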

In our case, we seek to minimize $J(\theta_1, \theta_2)$, which is equivalent to maximizing the lower bound $\hat{J}(\theta_1, \theta_2)$; the greedy direction yields:

$$\Delta z_i \;=\; \mathbb{E}_{z_j \sim p_{\theta_1}(z|s)}\Big[k(z_j, z_i)\,\nabla_{z_j}\Big(\log q(z_j) - \tfrac{1}{\beta}\,\mathcal{L}^{\mathrm{RL}}(s, z_j; \theta_2)\Big) + \nabla_{z_j} k(z_j, z_i)\Big] \qquad (17)$$

In practice we replace $\tfrac{1}{\beta}$ with a coefficient $\alpha$ that controls the magnitude of $\nabla_{z_j}\mathcal{L}^{\mathrm{RL}}$. Notice that $\Delta z_i$ is the greedy direction that moves $z_i$ towards its target distribution as shown in (13) (the distribution that maximizes the lower bound). This means $\Delta z_i$ plays the role of the gradient of the lower bound $\hat{J}$ with respect to $z_i$.

Since our ultimate purpose is to update the representation parameter $\theta_1$, by the chain rule $\frac{\partial \hat{J}}{\partial \theta_1} = \mathbb{E}\big[\frac{\partial \hat{J}}{\partial z}\,\frac{\partial z}{\partial \theta_1}\big]$. Then for $\theta_1$:

$$\frac{\partial \hat{J}}{\partial \theta_1} \;\simeq\; \mathbb{E}_{s}\,\mathbb{E}_{z_i \sim p_{\theta_1}(z|s)}\Big[\frac{\partial z_i}{\partial \theta_1}^{\!\top} \Delta z_i\Big] \qquad (18)$$

where $\Delta z_i$ is given in equation (17). In practice we update the policy-value parameter $\theta_2$ by a common policy gradient algorithm, since

$$\frac{\partial \hat{J}}{\partial \theta_2} \;=\; -\,\mathbb{E}_{s}\,\mathbb{E}_{z \sim p_{\theta_1}(z|s)}\Big[\frac{\partial \mathcal{L}^{\mathrm{RL}}(s, z; \theta_2)}{\partial \theta_2}\Big] \qquad (19)$$

and update the representation parameter $\theta_1$ by (18).
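In an autograd framework, one common way to implement (18) is a surrogate loss whose gradient with respect to $\theta_1$ equals $-(\partial z/\partial \theta_1)^{\top} \Delta z$ (the minus sign appears because $\Delta z$ ascends the lower bound while the optimizer descends its loss). The sketch below is ours and assumes `z` is the encoder output with its computation graph intact and `delta_z` is the direction from (17):

```python
import torch

def update_representation(encoder_optimizer, z, delta_z):
    """Chain rule of (18): push theta_1 so that the encoder outputs z move
    along the (detached) SVGD direction delta_z."""
    encoder_optimizer.zero_grad()
    # d(surrogate)/d(theta_1) = -(dz/dtheta_1)^T * delta_z
    surrogate = -(z * delta_z.detach()).sum()
    surrogate.backward()
    encoder_optimizer.step()
```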

4.4 Verifying the Information E-C Process with MINE

In this section we verify with MINE that the information E-C process exists in deep RL and that our framework accelerates this process.

Mutual information neural estimation (MINE) is an algorithm that can compute the mutual information (MI) between two high-dimensional random variables more accurately and efficiently. Specifically, for random variables $X$ and $Z$, let $T$ be a function of $X$ and $Z$; the calculation of $I(X, Z)$ can be transformed into the following optimization problem:

$$I(X, Z) \;=\; \sup_{T}\; \mathbb{E}_{p(x, z)}\big[T(x, z)\big] \;-\; \log\Big(\mathbb{E}_{p(x)\,p(z)}\big[e^{T(x, z)}\big]\Big) \qquad (20)$$

The optimal function $T^*(x, z)$ can be approximated by updating a neural network $T_{\omega}(x, z)$.

With the aid of this powerful tool, we would like to visualize the mutual information between the input state $S$ and its representation $Z$: every few update steps, we sample a batch of inputs and their representations and compute their MI with MINE. Every time we train MINE (i.e., update $T_{\omega}$), we simply shuffle the representations across the batch and roughly assume the shuffled representations $\bar{Z}$ to be independent of $S$:

$$I(S, Z) \;\approx\; \sup_{\omega}\; \mathbb{E}_{p(s, z)}\big[T_{\omega}(s, z)\big] \;-\; \log\Big(\mathbb{E}_{p(s)\,p(\bar{z})}\big[e^{T_{\omega}(s, \bar{z})}\big]\Big) \qquad (21)$$
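A compact sketch of the estimator used for this visualization: a statistics network $T_{\omega}(s, z)$ trained on the Donsker-Varadhan bound (20), with the shuffled-batch approximation of the product of marginals from (21). Network sizes, names and the number of inner training steps are our own illustrative choices; `s` and `z` should be detached from the agent's computation graph.

```python
import torch
import torch.nn as nn

class StatisticsNet(nn.Module):
    """T_omega(s, z) for the Donsker-Varadhan bound in (20)."""
    def __init__(self, s_dim, z_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(s_dim + z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, s, z):
        return self.net(torch.cat([s, z], dim=-1)).squeeze(-1)

def dv_bound(T, s, z):
    """E_{p(s,z)}[T] - log E_{p(s)p(z)}[exp(T)]; the marginal term uses
    representations shuffled across the batch, as in (21)."""
    joint = T(s, z).mean()
    z_shuffled = z[torch.randperm(z.shape[0])]
    log_marginal = torch.logsumexp(T(s, z_shuffled), dim=0) \
        - torch.log(torch.tensor(float(z.shape[0])))
    return joint - log_marginal

def estimate_mi(T, optimizer, s, z, inner_steps=50):
    """Fit T on the current batch by maximizing (21), then report the estimate."""
    for _ in range(inner_steps):
        optimizer.zero_grad()
        (-dv_bound(T, s, z)).backward()
        optimizer.step()
    with torch.no_grad():
        return dv_bound(T, s, z).item()
```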

Figure 1 shows the TensorBoard graph of the mutual information estimate between $S$ and $Z$ in the Atari game Pong; the x-axis is the number of update steps and the y-axis is the MI estimate. More details and results can be found in appendices (0.A.6) and (0.A.7). As we can see, in both A2C with our framework and common A2C, the MI first increases to encode more information from the inputs ("remember" the inputs), then decreases to drop irrelevant information from the inputs ("forget" the useless information). And clearly, our framework extracts and compresses faster than common A2C, as shown in Figure 1(b).

(a) MI in A2C
(b) MI in A2C with our framework
Figure 1: Mutual information visualization in Pong

After completing the visualization of MI with MINE, we analyze the relationship between our framework and MINE. According to [3], the optimal function $T^*$ in (20) goes as follows:

$$T^*(s, z) \;=\; \log \frac{p(z|s)}{p(z)} \;+\; C \qquad (22)$$

where $C$ is a constant. Combining this result with Theorem 4.2 (the Representation Improvement Theorem), under which the improved representation distribution satisfies $p(z|s) \propto p(z)\,\exp\!\big(-\tfrac{1}{\beta}\,\mathcal{L}^{\mathrm{RL}}(s, z; \theta_2)\big)$, we get:

$$T^*(s, z) \;=\; -\tfrac{1}{\beta}\,\mathcal{L}^{\mathrm{RL}}(s, z; \theta_2) \;-\; \log Z(s) \;+\; C \qquad (23)$$

where $Z(s)$ is the normalization constant of the target distribution.

Through this relationship, we theoretically derive an algorithm that can directly optimize our framework without constructing the lower bound; we put this derivation in the appendix (0.A.5).

5 Experiments

In the experiments we show that our framework can improve the sample efficiency of basic RL algorithms (specifically A2C and PPO). Other results can be found in the last two appendices.

In A2C with our framework, we sample $z$ by a network $z = f_{\theta_1}(s, \epsilon)$ where $\epsilon \sim \mathcal{N}(0, I)$, and draw a fixed number of samples from each state; readers are encouraged to take more samples if the computational resources are sufficient. We set the IB coefficient $\beta$ to a fixed constant. We choose two prior distributions $q(z)$ for our framework. The first one is the uniform distribution; apparently when $q(z)$ is the uniform distribution, the $\log q(z)$ term can be omitted. The second one is a Gaussian distribution, which is defined as follows: for a given state $s$, sample a batch of representations $\{z_i\}$, then fit $q(z)$ as a Gaussian with the empirical mean and variance of this batch.
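The sketch below shows one way to realize the noise-conditioned sampler and the batch-fitted Gaussian prior described above; the architecture, `noise_dim`, the default number of particles and the diagonal-Gaussian fit are our assumptions for illustration rather than the paper's exact choices:

```python
import torch
import torch.nn as nn

class NoisyEncoder(nn.Module):
    """Stochastic representation z = f_{theta_1}(s, eps) with eps ~ N(0, I)."""
    def __init__(self, s_dim, z_dim, noise_dim=32):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(nn.Linear(s_dim + noise_dim, 128), nn.ReLU(),
                                 nn.Linear(128, z_dim))

    def forward(self, s, m=32):
        """Draw m representation particles per state: output (batch, m, z_dim)."""
        s_rep = s.unsqueeze(1).expand(-1, m, -1)
        eps = torch.randn(s.shape[0], m, self.noise_dim, device=s.device)
        return self.net(torch.cat([s_rep, eps], dim=-1))

def gaussian_prior_log_prob(z):
    """log q(z) under a diagonal Gaussian fitted to each state's particles."""
    mu = z.mean(dim=1, keepdim=True).detach()
    std = z.std(dim=1, keepdim=True).clamp_min(1e-5).detach()
    return torch.distributions.Normal(mu, std).log_prob(z).sum(dim=-1)
```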

We also set the coefficient $\alpha$ (replacing $\tfrac{1}{\beta}$ in (17)) to control the magnitude of $\nabla_{z}\mathcal{L}^{\mathrm{RL}}$. Following [15], the kernel function used in (17) is the Gaussian RBF kernel $k(z_j, z_i) = \exp\big(-\|z_j - z_i\|_2^2 / h\big)$ with the bandwidth $h$ set by the median heuristic, i.e., based on $\mathrm{med}$, the median of pairwise distances between the particles $\{z_i\}$. As for the hyper-parameters in RL, we simply choose the default parameters of A2C in OpenAI Baselines. In summary, we implement the following four algorithms:

A2C with uniform SVIB: use $f_{\theta_1}(s, \epsilon)$ as the embedding function, optimized by our framework (algorithm (0.A.4)) with $q(z)$ being the uniform distribution.

A2C with Gaussian SVIB: use $f_{\theta_1}(s, \epsilon)$ as the embedding function, optimized by our framework (algorithm (0.A.4)) with $q(z)$ being the Gaussian distribution described above.

A2C: regular A2C in OpenAI Baselines with $f_{\theta_1}(s)$ as the embedding function.

A2C with noise (for fairness): A2C with the same noise-conditioned embedding function $f_{\theta_1}(s, \epsilon)$ as A2C with our framework.

Figure 2(a)-(e) shows the performance of the four A2C-based algorithms in Gym Atari games. We can see that A2C with our framework is more sample-efficient than both A2C and A2C with noise in nearly all five games.

(a) A2C-Pong
(b) A2C-AirRaid
(c) A2C-BeamRider
(d) A2C-Qbert
(e) A2C-SpaceInvaders
(f) PPO-Pong
(g) PPO-Qbert
(h) PPO-Breakout
Figure 2: (a)-(e) show the performance of the four A2C-based algorithms; the x-axis is time steps (2,000 update steps per time step) and the y-axis is the average reward over episodes. (f)-(h) show the performance of the four PPO-based algorithms; the x-axis is time steps (300 update steps per time step). We apply an exponential moving average to each curve to smooth it (in PPO-Pong, we add a constant offset to all four curves in order to apply the exponential moving average). We can see that our framework improves the sample efficiency of basic A2C and PPO.

Notice that in SpaceInvaders, A2C with Gaussian SVIB performs worse. We suspect that this is because the agent drops information from the inputs so aggressively that it misses some information relevant to the learning process. There is a more detailed experimental discussion of this phenomenon in appendix (0.A.7). We also implement four PPO-based algorithms whose experimental settings are the same as for A2C, except that we use a smaller number of samples for the sake of computational efficiency. Results can be found in Figure 2(f)-(h).

6 Conclusion

We study the information bottleneck principle in RL: we propose an optimization problem for learning the representation in RL based on the information bottleneck framework and derive the optimal form of the target distribution. We construct a lower bound and utilize the Stein variational gradient method to optimize it. Finally, we verify that the information extraction-compression process also exists in deep RL, and that our framework can accelerate this process. We also theoretically derive an algorithm based on MINE that can directly optimize our framework, and we plan to study it experimentally in future work.

7 Acknowledgement

We thank Professor Jian Li for his corrections to the writing of this paper.

References

  • [1] D. Abel, D. Arumugam, K. Asadi, Y. Jinnai, M. L. Littman, and L. L. Wong (2019) State abstraction as compression in apprenticeship learning. In Proceedings of the AAAI Conference on Artificial Intelligence, pp. 3134–3142.
  • [2] A. A. Alemi, I. Fischer, J. V. Dillon, and K. Murphy (2016) Deep variational information bottleneck. In Proceedings of the International Conference on Learning Representations.
  • [3] M. I. Belghazi, A. Baratin, S. Rajeswar, S. Ozair, Y. Bengio, A. Courville, and R. D. Hjelm (2018) MINE: mutual information neural estimation. arXiv preprint arXiv:1801.04062.
  • [4] M. G. Bellemare, W. Dabney, R. Dadashi, A. A. Taiga, P. S. Castro, N. L. Roux, D. Schuurmans, T. Lattimore, and C. Lyle (2019) A geometric perspective on optimal representations for reinforcement learning. In International Conference on Learning Representations.
  • [5] M. Botvinick, S. Ritter, J. X. Wang, Z. Kurth-Nelson, C. Blundell, and D. Hassabis (2019) Reinforcement learning, fast and slow. Trends in Cognitive Sciences 23 (5), pp. 408–422.
  • [6] T. Chen, D. J. Mankowitz, and S. Mannor (2018) Reward constrained policy optimization. arXiv preprint arXiv:1805.11074.
  • [7] L. Espeholt, H. Soyer, R. Munos, K. Simonyan, V. Mnih, T. Ward, Y. Doron, V. Firoiu, T. Harley, and I. Dunning (2018) IMPALA: scalable distributed deep-RL with importance weighted actor-learner architectures. arXiv preprint arXiv:1802.01561.
  • [8] S. Fujimoto, H. van Hoof, and D. Meger (2018) Addressing function approximation error in actor-critic methods. arXiv preprint arXiv:1802.09477.
  • [9] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680.
  • [10] T. Haarnoja, H. Tang, P. Abbeel, and S. Levine (2017) Reinforcement learning with deep energy-based policies. In Proceedings of the 34th International Conference on Machine Learning, pp. 1352–1361.
  • [11] D. Hafner, T. Lillicrap, I. Fischer, R. Villegas, and J. Davidson (2019) Learning latent dynamics for planning from pixels. In International Conference on Machine Learning, pp. 2555–2565.
  • [12] L. Kaiser, M. Babaeizadeh, P. Milos, B. Osinski, R. H. Campbell, K. Czechowski, D. Erhan, C. Finn, P. Kozakowski, and S. Levine (2019) Model-based reinforcement learning for Atari. arXiv preprint arXiv:1903.00374.
  • [13] T. Lesort, N. Díaz-Rodríguez, J.-F. Goudou, and D. Filliat (2018) State representation learning for control: an overview. Neural Networks 108.
  • [14] Q. Liu and D. Wang (2016) Stein variational gradient descent: a general purpose Bayesian inference algorithm. In Advances in Neural Information Processing Systems 29, pp. 2378–2386.
  • [15] Y. Liu, P. Ramachandran, Q. Liu, and J. Peng (2017) Stein variational policy gradient. arXiv preprint arXiv:1704.02399.
  • [16] V. Mnih, A. P. Badia, M. Mirza, A. Graves, and K. Kavukcuoglu (2016) Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, pp. 1928–1937.
  • [17] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller (2013) Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602.
  • [18] O. Nachum, S. Gu, H. Lee, and S. Levine (2018) Data-efficient hierarchical reinforcement learning. In Advances in Neural Information Processing Systems, pp. 3307–3317.
  • [19] O. Nachum, S. Gu, H. Lee, and S. Levine (2018) Near-optimal representation learning for hierarchical reinforcement learning. In International Conference on Learning Representations.
  • [20] X. B. Peng, A. Kanazawa, S. Toyer, P. Abbeel, and S. Levine (2018) Variational discriminator bottleneck: improving imitation learning, inverse RL, and GANs by constraining information flow. In International Conference on Learning Representations.
  • [21] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov (2017) Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.
  • [22] R. Shwartz-Ziv and N. Tishby (2017) Opening the black box of deep neural networks via information. arXiv preprint arXiv:1703.00810.
  • [23] D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller (2014) Deterministic policy gradient algorithms. In International Conference on Machine Learning, pp. 387–395.
  • [24] J. Sygnowski and H. Michalewski (2016) Learning from the memory of Atari 2600. arXiv preprint arXiv:1605.01335.
  • [25] N. Tishby, F. C. Pereira, and W. Bialek (2000) The information bottleneck method. arXiv preprint physics/0004057.
  • [26] H. van Hasselt, A. Guez, and D. Silver (2016) Deep reinforcement learning with double Q-learning. In Proceedings of the AAAI Conference on Artificial Intelligence, pp. 2094–2100.

Appendix 0.A Appendix

0.A.1 Proof of Theorem 1

Theorem

(Theorem 1 restated) The policy $\tilde{\pi}$ with representation parameter $\tilde{\theta}_1$ (our framework), and the optimal policy $\pi^*$ with its relevant representation parameter $\theta_1^*$ (the common RL framework), are defined as follows:

$$(\tilde{\theta}_1, \tilde{\theta}_2) \;=\; \arg\min_{\theta_1, \theta_2}\; J^{\mathrm{RL}}(\theta_1, \theta_2) + \beta\, I_{\theta_1}(S; Z), \qquad \tilde{\pi} = \pi_{\tilde{\theta}_2} \qquad (24)$$
$$(\theta_1^*, \theta_2^*) \;=\; \arg\min_{\theta_1, \theta_2}\; J^{\mathrm{RL}}(\theta_1, \theta_2), \qquad \pi^* = \pi_{\theta_2^*} \qquad (25)$$

Define $\tilde{J}$ as $J^{\mathrm{RL}}(\tilde{\theta}_1, \tilde{\theta}_2)$ and $J^*$ as $J^{\mathrm{RL}}(\theta_1^*, \theta_2^*)$. Assume that $\big|I_{\theta_1^*}(S; Z) - I_{\tilde{\theta}_1}(S; Z)\big| \le \epsilon$; then we have $\tilde{J} - J^* \le \beta\epsilon$. Specifically, in value-based algorithms, this theorem also holds for the expectation of the two value functions.

Proof

From equation (24) we can get:

$$\tilde{J} + \beta\, I_{\tilde{\theta}_1}(S; Z) \;\le\; J^* + \beta\, I_{\theta_1^*}(S; Z) \qquad (26)$$

From equation (25) we can get:

$$J^* \;\le\; \tilde{J} \qquad (27)$$

These two equations give us the following inequality:

$$0 \;\le\; \tilde{J} - J^* \;\le\; \beta\,\big(I_{\theta_1^*}(S; Z) - I_{\tilde{\theta}_1}(S; Z)\big) \qquad (28)$$

According to the assumption, naturally we have:

$$\tilde{J} - J^* \;\le\; \beta\epsilon \qquad (29)$$

Notice that if we use our IB framework in a value-based algorithm, then the objective function can be defined as:

$$J(\theta_1, \theta_2) \;=\; \mathbb{E}_{s \sim d^{\pi}}\,\mathbb{E}_{z \sim p_{\theta_1}(z|s)}\big[\big(y(s) - V_{\theta_2}(z)\big)^2\big] \;+\; \beta\, I_{\theta_1}(S; Z) \qquad (30)$$

where $y(s)$ is the bootstrapped value target and $d^{\pi}$ is the discounted future state distribution; readers can find the detailed definition of $d^{\pi}$ in the appendix of [6]. By the same argument as above, we can get:

$$\mathbb{E}_{s \sim d^{\pi},\, z}\Big[\big(y(s) - V_{\tilde{\theta}_2}(z)\big)^2\Big] - \mathbb{E}_{s \sim d^{\pi},\, z}\Big[\big(y(s) - V_{\theta_2^*}(z)\big)^2\Big] \;\le\; \beta\epsilon \qquad (31)$$

0.A.2 Target Distribution Derivation

We show the rigorous derivation of the target distribution in (11).

Denote $\mu(s)$ as the distribution of $s$ and $p(z)$ as the marginal distribution of $z$. We use $p(z|s)$ as the shorthand notation for the conditional distribution $p_{\theta_1}(z|s)$, and we write $\mathcal{L}(s, z)$ for $\mathcal{L}^{\mathrm{RL}}(s, z; \theta_2)$. Moreover, the objective in (10) decomposes as $J\big[p(z|s)\big] = J_1 + \beta J_2$ with $J_1 = \int \mu(s)\, p(z|s)\, \mathcal{L}(s, z)\, dz\, ds$ and $J_2 = \int \mu(s)\, p(z|s)\, \log \frac{p(z|s)}{p(z)}\, dz\, ds$. Notice that $p(z) = \int \mu(s)\, p(z|s)\, ds$, so $\frac{\delta p(z)}{\delta p(z|s)} = \mu(s)$. Take the functional derivative with respect to $p(z|s)$ of the first term $J_1$:

$$\frac{\delta J_1}{\delta p(z|s)} \;=\; \mu(s)\, \mathcal{L}(s, z)$$

Hence, we can see that the first term contributes the distortion $\mathcal{L}(s, z)$ to the stationarity condition. Then we consider the second term. By the chain rule of functional derivatives, writing $J_2 = \int \mu(s)\, p(z|s)\, \log p(z|s)\, dz\, ds - \int p(z)\, \log p(z)\, dz$, we have that

$$\frac{\delta J_2}{\delta p(z|s)} \;=\; \mu(s)\,\big(\log p(z|s) + 1\big) - \mu(s)\,\big(\log p(z) + 1\big) \;=\; \mu(s)\, \log \frac{p(z|s)}{p(z)} \qquad (32)$$

Combining the derivatives of $J_1$ and $\beta J_2$, adding a Lagrange multiplier enforcing $\int p(z|s)\, dz = 1$, and setting their summation to 0, we can get that

$$p(z|s) \;=\; \frac{p(z)\, \exp\!\big(-\tfrac{1}{\beta}\, \mathcal{L}(s, z)\big)}{Z(s)}, \qquad Z(s) = \int p(z)\, \exp\!\big(-\tfrac{1}{\beta}\, \mathcal{L}(s, z)\big)\, dz \qquad (33)$$

0.A.3 Proof of Theorem 2

Theorem

(Theorem 2 restated) For the objective $J(\theta_1, \theta_2)$ in (6), given a fixed policy-value parameter $\theta_2$, a representation distribution $p_{\theta_1}(z|s)$ and a state distribution $\mu(s)$, define a new representation distribution $\tilde{p}(z|s) \propto p_{\theta_1}(z)\, \exp\!\big(-\tfrac{1}{\beta}\, \mathcal{L}^{\mathrm{RL}}(s, z; \theta_2)\big)$; we have $J(\tilde{p}, \theta_2) \le J(p_{\theta_1}, \theta_2)$.

Proof

Define $Z(s)$ as the normalization constant of $\tilde{p}(z|s)$:

$$Z(s) \;=\; \int p_{\theta_1}(z)\, \exp\!\big(-\tfrac{1}{\beta}\, \mathcal{L}(s, z)\big)\, dz \qquad (34)$$

so that $\mathcal{L}(s, z) = -\beta\big(\log \tilde{p}(z|s) + \log Z(s) - \log p_{\theta_1}(z)\big)$. Substituting this expression into $J(p_{\theta_1}, \theta_2)$ gives

$$J(p_{\theta_1}, \theta_2) \;=\; \beta\, \mathbb{E}_{s}\Big[\mathrm{KL}\big(p_{\theta_1}(\cdot|s)\,\big\|\,\tilde{p}(\cdot|s)\big)\Big] - \beta\, \mathbb{E}_{s}\big[\log Z(s)\big] \qquad (35)$$

and, by the same substitution with the new marginal $\tilde{p}(z) = \mathbb{E}_{s}\big[\tilde{p}(z|s)\big]$,

$$J(\tilde{p}, \theta_2) \;=\; -\beta\, \mathrm{KL}\big(\tilde{p}(z)\,\big\|\,p_{\theta_1}(z)\big) - \beta\, \mathbb{E}_{s}\big[\log Z(s)\big] \qquad (36)$$

Subtracting (36) from (35),

$$J(p_{\theta_1}, \theta_2) - J(\tilde{p}, \theta_2) \;=\; \beta\, \mathbb{E}_{s}\Big[\mathrm{KL}\big(p_{\theta_1}(\cdot|s)\,\big\|\,\tilde{p}(\cdot|s)\big)\Big] + \beta\, \mathrm{KL}\big(\tilde{p}(z)\,\big\|\,p_{\theta_1}(z)\big) \qquad (37)$$

According to the positivity of the KL divergence, we have $J(\tilde{p}, \theta_2) \le J(p_{\theta_1}, \theta_2)$.

0.A.4 Algorithm

   Initialize network parameters $\theta_1$, $\theta_2$
   Initialize the hyper-parameters ($\alpha$ and the kernel $k(\cdot,\cdot)$) in (17)
   Learning rate $\eta$
   Number of samples $m$ drawn from $p_{\theta_1}(z|s)$ for each state
  repeat
     Draw a batch of data from the environment
     for each