Tutorial and Survey on Probabilistic Graphical Model and Variational Inference in Deep Reinforcement Learning
Abstract
Probabilistic Graphical Modeling and Variational Inference play an important role in recent advances in Deep Reinforcement Learning. Aiming at a self-consistent tutorial survey, this article illustrates basic concepts of reinforcement learning with Probabilistic Graphical Models, and derives some basic formulas as a recap. We review and compare recent advances in deep reinforcement learning across different research directions and from various aspects. We offer Probabilistic Graphical Models, detailed explanations, and derivations for several use cases of Variational Inference, which serve as complementary material on top of the original contributions.
I Introduction
Deep Reinforcement Learning has gained increasing attention recently due to its great success in complicated tasks [mnih2015human], and it has developed rapidly. For a brief overview, see [arulkumaran2017brief]. Beyond the existing surveys, this paper focuses on Probabilistic Graphical Models and Variational Inference, especially Amortized Variational Inference [doersch2016tutorial], and their applications in Deep Reinforcement Learning.
Specifically,

We start from the basics of Reinforcement Learning with probabilistic graphical model [koller2009probabilistic] explanations and extend the discussion to complicated models using variational inference [blei2017variational] in order to have a comprehensive yet brief summary of the topic.

We provide Probabilistic Graphical Models [koller2009probabilistic] for many basic concepts of Reinforcement Learning, as well as for recent works on Deep Reinforcement Learning. To the best of our knowledge, such a comprehensive inclusion of Probabilistic Graphical Models in (Deep) Reinforcement Learning does not yet exist in the literature.

We introduce a taxonomy of the different Graphical Models and Variational Inference methods used in Deep Reinforcement Learning, which is also a first to the best of our knowledge.

We give detailed derivations for some of the critical results, which are either not explicitly stated in the original contributions like [sutton1998introduction, levine2018reinforcement, houthooft2016vime] or stated in a slightly different way. This makes the paper relatively standalone and self-contained, which is another contribution.
I-A Organization of the paper
Since the paper serves as both a tutorial and a survey, we keep the detailed derivations in the main text instead of moving them to an appendix. In Section I-B, we first introduce the fundamentals of Graphical Models and Variational Inference; then we review the basics of reinforcement learning through their connection to probabilistic graphical models (PGM) in Section I-C, as well as the basics and an incomplete overview of advances in deep reinforcement learning, accompanied by a comparison of different methods, in Section II. In Section III, we discuss how undirected graphs could be used to model both the value function and the policy, which works well in high dimensional discrete state and action spaces. In Section IV, we introduce the directed acyclic graph framework that treats the policy as a posterior over actions, adding many proofs that do not exist in the original contributions. In Section V, we introduce works that use variational inference to approximate the environment model, again adding graphical models and proofs that do not exist in the original contributions.
I-B Prerequisites on Probabilistic Graphical Models and Variational Inference, Terminologies and Conventions
Directed Acyclic Graphs (DAG) [bishop2006pattern] as PGMs offer an intuitive way of defining factorized joint distributions of Random Variables (RV) by assuming conditional independence [bishop2006pattern] across the RVs through d-separation [bishop2006pattern]. In this paper, we use a capital letter such as $X$ to denote a RV, and the corresponding lowercase letter $x$ to represent a realization of that RV. Since $A$ commonly denotes the advantage in the RL literature, we distinguish the action random variable from the advantage explicitly wherever a symbol collision could arise. For simplicity, we use $p(x)$ to represent $p(X = x)$, the probability of RV $X$ taking value $x$, as well as $p(x \mid y)$ to represent $p(X = x \mid Y = y)$. We write $X \perp\!\!\!\perp Y \mid Z$ to represent that RV $X$ is conditionally independent of RV $Y$ given an observation of RV $Z$, which is equivalent to writing $p(x, y \mid z) = p(x \mid z)\, p(y \mid z)$ or $p(x \mid y, z) = p(x \mid z)$.
Variational Inference (VI) approximates an intractable posterior distribution, usually specified through a probabilistic graphical model of a neural network, with a variational proposal posterior distribution, by optimizing the Evidence Lower Bound (ELBO) [blei2017variational], which assigns values to the latent unobservables at the same time. Variational Inference is widely used in the Deep Learning community [sun2019resampling], including approximating the posterior distribution over the weights [blundell2015weight] of neural networks, as well as over the activations [doersch2016tutorial]. VI on the activations of neural networks underlies the Variational AutoEncoder [kingma2013auto], while VI on the weights of the neural network has led to Bayesian Neural Networks [blundell2015weight]. Weight Uncertainty in Neural Networks [blundell2015weight] has been used for tackling the exploration-exploitation trade-off in bandit problems using Thompson sampling, and has also been shown to lead to systematic exploration driven by weights with higher variance [blundell2015weight].
I-C Basics about Reinforcement Learning with graphical models
I-C1 RL Concepts, Terminology and Convention
As shown in Figure 1, Reinforcement Learning (RL) involves optimizing the behavior of an agent via interaction with the environment. At time $t$, the agent is in state $s_t$. By executing an action $a_t$ according to a policy [sutton1998introduction] $\pi(a_t \mid s_t)$, the agent jumps to another state $s_{t+1}$ while receiving a reward $r_t$. The discount factor $\gamma \in [0, 1]$ decides how much the immediate reward is favored over longer-term return; it also ensures tractability in infinite horizon reinforcement learning [sutton1998introduction], as well as reducing variance in the Monte Carlo setting [levine2018reinforcement]. The goal is to maximize the accumulated reward, which is usually termed the return in the RL literature.
For simplicity, we interchangeably use two conventions whenever convenient: suppose an episode lasts from $t = 0$ to $T$, with $T = \infty$ corresponding to continuing, non-episodic reinforcement learning. We use the other convention of finite $T$ by assuming that when the episode ends, the agent stays in a self-absorbing state with a null action, while receiving null reward.
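As a concrete illustration of the return and the self-absorbing convention, the following minimal sketch (the function name is ours, purely illustrative) computes the discounted return of an episode and shows that padding with null rewards leaves it unchanged:

```python
def discounted_return(rewards, gamma):
    """Return G_0 = sum_t gamma^t * r_t for one episode.

    Padding the reward list with zeros (the self-absorbing null-reward
    convention described above) leaves the return unchanged.
    """
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

G = discounted_return([1.0, 0.0, 2.0], gamma=0.9)           # 1 + 0 + 0.81 * 2
G_padded = discounted_return([1.0, 0.0, 2.0, 0.0, 0.0], gamma=0.9)
```

Both calls yield the same value, so the finite-episode and absorbing-state conventions are interchangeable for the return.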
By unrolling Figure 1, we get a sequence of state, action and reward tuples in an episode, which is coined a trajectory [zhao2019maximum, co2018self]. Figure 2 illustrates part of a trajectory in one rollout. The state space $\mathcal{S}$ and action space $\mathcal{A}$, which can each be either discrete or continuous and multidimensional, are each represented with one continuous dimension in Figure 2 and plotted orthogonally with different colors, while the thickness of the plate represents the reward space $\mathcal{R}$.
I-C2 DAGs for (Partially Observed) Markov Decision Process
Reinforcement Learning is a stochastic decision process, which usually comes with three sources of uncertainty. That is, under a particular stochastic policy characterized by $\pi(a_t \mid s_t)$, within a particular environment characterized by the state transition probability $p(s_{t+1} \mid s_t, a_t)$ and the reward distribution $p(r_t \mid s_t, a_t)$, a learning agent could observe different trajectories as different unrolled realizations. This is usually modeled as a Markov Decision Process [sutton1998introduction], with its graphical model shown in Figure 3, where we could define a joint probability distribution over the trajectory of state, action and reward RVs. In Figure 3, we use dashed arrows connecting state and action to represent the policy; given a fixed policy, we have the trajectory likelihood in Equation (1)
$p(\tau) = p(s_0) \prod_{t=0}^{T} \pi(a_t \mid s_t)\, p(s_{t+1} \mid s_t, a_t)\, p(r_t \mid s_t, a_t)$  (1)
Upon observation of a state $s_t$ in Figure 3, the action $a_t$ at the time step in question is conditionally independent of the state and action history $(s_{0:t-1}, a_{0:t-1})$, which could be denoted as $a_t \perp\!\!\!\perp (s_{0:t-1}, a_{0:t-1}) \mid s_t$.
A more realistic model, however, is the Partially Observable Markov Decision Process [kaelbling1998planning], with its Directed Acyclic Graph [bishop2006pattern] representation shown in Figure 4, where the agent only partially observes the state through an observation $o_t$, obtained via a non-invertible function of the latent state $s_t$ and the previous action $a_{t-1}$, as indicated in the figure by $p(o_t \mid s_t, a_{t-1})$; the distributions on the other edges are omitted since they are the same as in Figure 3. Under the graph specification of Figure 4, the observable $o_t$ is no longer Markov, but depends on the whole history. However, Markovianity can be recovered by introducing a probability distribution $b_t(s) = p(s_t = s \mid o_{1:t}, a_{1:t-1})$ over the hidden state, termed the belief state [kaelbling1998planning], which summarizes the history of observations and actions.
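The recursive belief-state update implied by the graph can be sketched as follows; the array layout and function name are our own illustrative choices, not notation from [kaelbling1998planning]:

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """One POMDP belief-state update:
    b'(s') ~ O[a][s'][o] * sum_s T[a][s][s'] * b(s), then normalize.

    b: current belief over states, shape (S,)
    T: T[a][s][s'] transition probabilities, shape (A, S, S)
    O: O[a][s'][o] observation probabilities, shape (A, S, n_obs)
    """
    predicted = b @ T[a]                 # sum_s b(s) * T[a][s][s']
    unnormalized = O[a][:, o] * predicted
    return unnormalized / unnormalized.sum()
```

The normalized belief is itself a Markov state, so planning can in principle be done over beliefs instead of raw observation histories.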
I-D Value Function, Bellman Equation, Policy Iteration
Define the state value function $V^{\pi}(s)$ of state $s$ in Equation (2), from which the corresponding Bellman Equation is derived in Equation (3).
$V^{\pi}(s) = \mathbb{E}_{\pi, \mathcal{E}}\left[ \sum_{k=0}^{\infty} \gamma^{k} r_{t+k} \,\middle|\, s_t = s \right]$  (2)

$V^{\pi}(s) = \sum_{a} \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a) \left[ r + \gamma V^{\pi}(s') \right]$  (3)
, where $s'$ takes values in $\mathcal{S}$, $a$ takes values in $\mathcal{A}$, and we have used $\pi$ and $\mathcal{E}$ in the subscript of the expectation operator to represent the probability distributions of the policy and the environment (including the transition probability and reward probability), respectively. The state-action value function [sutton1998introduction] is defined in Equation (4),
$Q^{\pi}(s, a) = \mathbb{E}_{\pi, \mathcal{E}}\left[ \sum_{k=0}^{\infty} \gamma^{k} r_{t+k} \,\middle|\, s_t = s, a_t = a \right]$  (4)

$V^{\pi}(s) = \mathbb{E}_{a \sim \pi(\cdot \mid s)}\left[ Q^{\pi}(s, a) \right]$  (5)
, where Equation (5) states its relationship to the state value function.
Combining Equation (3) and Equation (4), we have
$Q^{\pi}(s, a) = \sum_{s', r} p(s', r \mid s, a) \left[ r + \gamma V^{\pi}(s') \right]$  (6)
Define optimal policy [sutton1998introduction] to be
$\pi^{*} = \arg\max_{\pi} V^{\pi}(s), \quad \forall s \in \mathcal{S}$  (7)
Substituting the optimal policy into the Bellman Equation in Equation (3), we have
$V^{*}(s) = \max_{a} \sum_{s', r} p(s', r \mid s, a) \left[ r + \gamma V^{*}(s') \right]$  (8)
Substituting the optimal policy into Equation (4), we have
$Q^{*}(s, a) = \sum_{s', r} p(s', r \mid s, a) \left[ r + \gamma \max_{a'} Q^{*}(s', a') \right]$  (9)
Based on Equation (9) and Equation (8), we get
$V^{*}(s) = \max_{a} Q^{*}(s, a)$  (10)
and
$\pi^{*}(s) = \arg\max_{a} Q^{*}(s, a)$  (11)
For learning the optimal policy and value function, General Policy Iteration [sutton1998introduction] can be conducted, as shown in Figure 5, where a contracting process [sutton1998introduction] is drawn. Starting from an initial policy $\pi_0$, the corresponding value function $V^{\pi_0}$ could be estimated, which could in turn yield an improved policy by greedy maximization over actions. The contracting process is supposed to converge to the optimal policy $\pi^{*}$.
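This alternation can be sketched for a small tabular MDP as follows; the sketch assumes full knowledge of the transition and reward tables (unlike the model-free methods discussed below), and all names are illustrative:

```python
import numpy as np

def policy_iteration(P, R, gamma=0.9):
    """Tabular General Policy Iteration: alternate exact policy evaluation
    and greedy improvement until the policy is stable.

    P: transition probabilities P[a][s][s'], shape (A, S, S)
    R: expected rewards R[a][s], shape (A, S)
    Returns the greedy policy (one action per state) and its value function.
    """
    n_actions, n_states, _ = P.shape
    pi = np.zeros(n_states, dtype=int)
    while True:
        # policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly
        P_pi = P[pi, np.arange(n_states)]
        R_pi = R[pi, np.arange(n_states)]
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
        # policy improvement: greedy over Q(s, a) = R + gamma * P V
        Q = R + gamma * P @ V
        new_pi = Q.argmax(axis=0)
        if np.array_equal(new_pi, pi):
            return pi, V
        pi = new_pi
```

Solving the linear system replaces iterative evaluation here, which is only feasible for small state spaces.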
As theoretical foundations of learning algorithms, dynamic programming and Monte Carlo learning serve as the two extremes of complete knowledge of the environment and complete model-freeness [sutton1998introduction], while temporal difference learning [sutton1998introduction] is more ubiquitously used, acting like a bridge connecting the two extremes. Temporal difference learning is based on the Bellman update error in Equation (12).
$\delta_t = r_t + \gamma V(s_{t+1}) - V(s_t)$  (12)
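A minimal sketch of a single TD(0) step built on this error, with an assumed tabular value estimate (names are illustrative):

```python
import numpy as np

def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.99, done=False):
    """One TD(0) step on a tabular value estimate V.

    delta = r + gamma * V[s'] - V[s]  (the Bellman update error of Eq. (12));
    done marks the terminal/absorbing state, whose value is fixed at 0.
    """
    target = r if done else r + gamma * V[s_next]
    delta = target - V[s]
    V[s] += alpha * delta   # move V[s] a step toward the bootstrapped target
    return delta
```

Unlike Monte Carlo updates, the target bootstraps from the current estimate of the next state's value, so learning can proceed online within an episode.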
I-E Policy Gradient and Actor-Critic
Reinforcement Learning could be viewed as a functional optimization process. We could define an objective function $J(\pi_\theta)$ over a policy $\pi_\theta$, as a functional, characterized by the parameter $\theta$, which could correspond to neural network weights, for example.
Suppose all episodes start from an auxiliary initial state $s_0$, which, with probability $h(s)$, jumps to state $s$ without reward; $h(s)$ characterizes the initial state distribution, which depends only on the environment. Let $\eta(s)$ represent the expected (discounted) number of steps spent in state $s$, which can be calculated by summing up the discounted probabilities of reaching state $s$ in $k$ steps from the auxiliary state $s_0$, as stated in Equation (13).
$\eta(s) = \sum_{k=0}^{\infty} \gamma^{k} \Pr(s_0 \to s, k, \pi)$  (13)

$\eta(s) = h(s) + \sum_{\bar{s}} \eta(\bar{s}) \sum_{a} \pi(a \mid \bar{s})\, \gamma\, p(s \mid \bar{s}, a)$  (14)
In Equation (14), the quantity $\eta(s)$ is obtained either by starting directly in state $s$, corresponding to the $k = 0$ term of Equation (13), or by entering $s$ from a predecessor state $\bar{s}$ in one step, corresponding to the $k \geq 1$ terms of Equation (13).
For an arbitrary state $s$, using $s'$ and $s''$ to represent subsequent states as dummy indices, we have
$\nabla V^{\pi}(s) = \nabla \left[ \sum_{a} \pi(a \mid s)\, Q^{\pi}(s, a) \right]$  (15)

$= \sum_{a} \left[ \nabla \pi(a \mid s)\, Q^{\pi}(s, a) + \pi(a \mid s)\, \nabla Q^{\pi}(s, a) \right]$  (16)

$= \sum_{a} \left[ \nabla \pi(a \mid s)\, Q^{\pi}(s, a) + \pi(a \mid s)\, \gamma \sum_{s'} p(s' \mid s, a)\, \nabla V^{\pi}(s') \right]$  (17)

$= \sum_{a} \left[ \nabla \pi(a \mid s)\, Q^{\pi}(s, a) + \pi(a \mid s)\, \gamma \sum_{s'} p(s' \mid s, a) \sum_{a'} \left[ \nabla \pi(a' \mid s')\, Q^{\pi}(s', a') + \pi(a' \mid s')\, \gamma \sum_{s''} p(s'' \mid s', a')\, \nabla V^{\pi}(s'') \right] \right]$  (18)
The terms in square brackets in Equation (18) are simply Equation (17) with $s$ and $s'$ replaced by $s'$ and $s''$. Unrolling this recursion indefinitely, Equation (18) could be written as Equation (19),
$\nabla V^{\pi}(s) = \sum_{x \in \mathcal{S}} \sum_{k=0}^{\infty} \gamma^{k} \Pr(s \to x, k, \pi) \sum_{a} \nabla \pi(a \mid x)\, Q^{\pi}(x, a)$  (19)
, where $\Pr(s \to x, k, \pi)$ represents the probability of reaching state $x$ in $k$ steps from $s$, and already includes integration over the intermediate states visited before reaching state $x$.
Let the objective function with respect to the policy be defined as the value function starting from the auxiliary state $s_0$, as in Equation (20).
$J(\theta) = V^{\pi_\theta}(s_0)$  (20)
The optimal policy could be obtained by gradient ascent optimization, leading to the policy gradient algorithm [sutton1998introduction], as in Equation (25).
$\nabla J(\theta) = \nabla V^{\pi_\theta}(s_0)$  (21)

$= \sum_{s} \sum_{k=0}^{\infty} \gamma^{k} \Pr(s_0 \to s, k, \pi) \sum_{a} \nabla \pi(a \mid s)\, Q^{\pi}(s, a)$  (22)

$= \sum_{s} \eta(s) \sum_{a} \nabla \pi(a \mid s)\, Q^{\pi}(s, a)$  (23)

$= \sum_{s'} \eta(s') \sum_{s} \frac{\eta(s)}{\sum_{s'} \eta(s')} \sum_{a} \nabla \pi(a \mid s)\, Q^{\pi}(s, a) \propto \sum_{s} \mu(s) \sum_{a} \nabla \pi(a \mid s)\, Q^{\pi}(s, a)$  (24)

$\nabla J(\theta) \propto \mathbb{E}_{\pi}\left[ G_t\, \nabla \ln \pi(a_t \mid s_t; \theta) \right]$  (25)
, where $\mu(s) = \eta(s) / \sum_{s'} \eta(s')$ is the relative occupancy of state $s$. The summation over states with respect to $\mu$ and, in the numerator, over actions with respect to $\pi$ in Equation (24) is replaced with an expectation under interaction with the environment in Equation (25), and $Q^{\pi}(s_t, a_t)$ is replaced by an estimator, which is usually the sampled return $G_t$.
The policy gradient could be augmented to include a zero-gradient baseline $b(s)$ with respect to the objective function in Equation (24). As a function of the state only, the baseline does not involve the policy parameters $\theta$, and since $\sum_a \nabla \pi(a \mid s)\, b(s) = b(s)\, \nabla \sum_a \pi(a \mid s) = 0$, it leaves the gradient unbiased. To reduce the variance of the gradient, the baseline is usually chosen to be a state value function estimator $\hat{V}(s)$, which smooths out the variation of the return at each state and is itself updated in a Monte Carlo way by comparison with $G_t$.
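The resulting Monte Carlo policy gradient with a baseline can be sketched for a tabular softmax policy as follows; all names are our own illustrative choices, and the gradient of $\ln \pi$ uses the standard softmax identity (one-hot minus probabilities):

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def reinforce_grad(theta, trajectory, gamma=0.99, baseline=None):
    """Monte Carlo policy gradient with an optional state-dependent baseline.

    theta: logits table of shape (n_states, n_actions) defining a softmax policy.
    trajectory: list of (s, a, r) tuples from one rollout.
    baseline: optional per-state values b(s); subtracting it leaves the
    gradient unbiased because sum_a grad pi(a|s) = 0.
    """
    grad = np.zeros_like(theta)
    G = 0.0
    # iterate backwards to accumulate the returns G_t
    for s, a, r in reversed(trajectory):
        G = r + gamma * G
        adv = G - (baseline[s] if baseline is not None else 0.0)
        pi = softmax(theta[s])
        glog = -pi            # grad of log softmax w.r.t. logits: one_hot(a) - pi
        glog[a] += 1.0
        grad[s] += adv * glog
    return grad
```

Ascending this gradient increases the log-probability of actions that led to above-baseline returns.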
The actor-critic algorithm [sutton1998introduction] decomposes $G_t$ into $r_t + \gamma \hat{V}(s_{t+1})$, so that bootstrapping is used instead of Monte Carlo estimation.
II Recent advances in Deep Reinforcement Learning
II-A Basics of Deep Reinforcement Learning
Deep Q-learning [mnih2015human] made a breakthrough in using neural networks as function approximators on complicated tasks. It addresses the experience correlation problem with a replay memory, and the instability of the target with a frozen target network. Specifically, reinforcement learning is transformed into a supervised learning task by fitting targets computed from the replay memory, with the state as input. However, the target can drift easily, which leads to unstable learning. In [mnih2015human], a target network is used to provide a stable target for the updating network, and is itself only updated occasionally. Double Deep Q-learning [van2016deep], in contrast, addresses the problem by having two Q networks and updating their parameters in an alternating way.
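The target computation described above can be sketched as follows, with a lookup table standing in for the frozen target network (an illustrative simplification of [mnih2015human], not their implementation):

```python
import numpy as np

def dqn_targets(batch, q_target, gamma=0.99):
    """Compute regression targets y = r + gamma * max_a' Q_target(s', a')
    from a minibatch of replayed transitions, using a frozen target network.

    batch: list of (s, a, r, s_next, done); integer states in this sketch.
    q_target: array (n_states, n_actions) standing in for the frozen net.
    """
    ys = []
    for s, a, r, s_next, done in batch:
        # terminal transitions bootstrap nothing beyond the reward
        y = r if done else r + gamma * q_target[s_next].max()
        ys.append(y)
    return np.array(ys)
```

The updating network is then regressed toward these fixed targets, which stay constant until the target table (network) is refreshed.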
II-B Taxonomy
While it is difficult to cover all aspects of recent advances in deep reinforcement learning, we pick some interesting research directions and list contributions in these directions below.
II-B1 On-Policy methods
A3C [mnih2016asynchronous] stands out among the asynchronous methods in deep learning [mnih2016asynchronous], as it can be run in parallel on a single multicore CPU. Trust Region Policy Optimization [schulman2015trust] and Proximal Policy Optimization [schulman2017proximal] build on the natural policy gradient, using a local approximation to the expected return. The local approximation could serve as a lower bound for the expected return, which can be optimized safely subject to a KL divergence constraint between two subsequent policies; in practice, the constraint is relaxed into a regularization term.
II-B2 Off-Policy methods
Besides Deep Q-learning [mnih2015human] mentioned above, DDPG [lillicrap2015continuous] extends the Deterministic Policy Gradient (DPG) [silver2014deterministic] with deep neural network function approximators; it is an actor-critic algorithm and works well in continuous action spaces.
II-B3 Goal-based Reinforcement Learning
In robot manipulation tasks, the goal could in some cases be represented as a state [zhao2019maximum]. The Universal Value Function Approximator (UVFA) [schaul2015universal] incorporates the goal into the deep neural network, letting the neural network function approximator also generalize to goal changes in tasks. Works in this direction include [andrychowicz2017hindsight, zhao2019maximum], for example.
II-B4 Exploration with sparse reward
In complicated real environments, an agent has to explore along a long trajectory before it gets any reward as feedback. Due to the lack of sufficient reward signal, traditional Reinforcement Learning methods perform poorly, which has led to many contributions on efficient exploration. The methods using graphical models and variational inference that we introduce later each use different mechanisms to explore their environments.
II-B5 Replay Memory Manipulation based Methods
Replay memory is a critical component in Deep Reinforcement Learning, which solves the problem of correlated transitions within one episode. Beyond the uniform sampling of replay memory in the Deep Q Network [mnih2015human], Prioritized Experience Replay [schaul2015prioritized] improves performance by giving priority to transitions with larger TD error, while Hindsight Experience Replay (HER) [andrychowicz2017hindsight] manipulates the replay memory by relabeling the goals of stored transitions, thereby changing the rewards to promote exploration. Maximum entropy regularized multi-goal reinforcement learning [zhao2019maximum] gives priority in sampling to rarely occurring trajectories, which has been shown to improve over HER [zhao2019maximum].
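A minimal sketch of TD-error-proportional sampling in the spirit of Prioritized Experience Replay (the importance-sampling corrections of [schaul2015prioritized] are omitted for brevity; names are illustrative):

```python
import numpy as np

def prioritized_sample(td_errors, batch_size, alpha=0.6, rng=None):
    """Sample replay indices with probability proportional to |TD error|^alpha,
    as in proportional Prioritized Experience Replay.

    alpha=0 recovers uniform sampling; the small epsilon keeps every
    transition sampleable even when its TD error is zero.
    """
    rng = rng or np.random.default_rng(0)
    p = (np.abs(td_errors) + 1e-6) ** alpha
    p /= p.sum()
    idx = rng.choice(len(td_errors), size=batch_size, p=p)
    return idx, p
```

Transitions with surprising (large-error) targets are replayed more often, concentrating learning where the value estimate is worst.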
II-C Comparison
In the following sections, we give detailed explanations of how graphical models and variational inference could be used to model and optimize the reinforcement learning process, with one section per category. Together with the methods mentioned above, we compare them in Table I, where "S" means state and "A" means action, "c" means continuous and "d" means discrete. "standalone" indicates whether the algorithm is a standalone method or needs to be combined with another algorithm to work. "var" indicates which probability distribution the variational inference is approximating, and "p" indicates whether the method is on-policy or off-policy. "na" means not applicable.
Algorithm | S | A   | standalone | var | p
----------|---|-----|------------|-----|----
Deep Q    | c | d   | y          | na  | off
A3C       | c | c/d | y          | na  | on
TRPO/PPO  | c | d   | y          | na  | on
DDPG      | c | c   | y          | na  | off
Boltzmann | d | d   | y          | na  | on
VIME      | c | c   | n          | na  |
VAST      | c | d   | n          | na  |
SoftQ     | c | c/d | y          |     | on
III Policy and value function with undirected graphs
We first discuss the application of undirected graphs in deep reinforcement learning, in the form of a deep belief network. Rather than modeling conditional distributions, as in directed acyclic graphs, undirected graphs model the joint distribution of the variables in question and focus on cliques [bishop2006pattern] with an associated free energy, which could be used to model the value function in reinforcement learning. The Restricted Boltzmann Machine has the nice property of a tractable, factorized posterior distribution over the latent variables conditioned on the observables, instead of requiring Gibbs sampling as in the general Boltzmann Machine.
In [sallans2004reinforcement], the authors use a Restricted Boltzmann Machine to deal with MDPs with large state and action spaces, by modeling the state-action value function with the negative free energy of the graph, where the free energy could be easily calculated through the product of experts [sallans2004reinforcement]. Specifically, the visible units of the Restricted Boltzmann Machine [sallans2004reinforcement] consist of both state and action binary variables, as shown in Figure 6, where the hidden nodes consist of binary variables; the state variables are dark colored to indicate that they are observed, while the actions are light colored to indicate that they need to be sampled. Together with the auxiliary hidden variables, the undirected graph defines a joint probability distribution over state and action pairs, which defines a stochastic policy network that could sample actions for on-policy learning. Since it is straightforward to calculate the derivative of the free energy with respect to the coefficients of the network, one could use temporal difference learning to update the coefficients. Thanks to the properties of the Boltzmann Machine, the conditional distribution of actions given a state is still Boltzmann distributed, governed by the free energy; by adjusting the temperature, one could also vary the exploration strength.
The conditional distribution of actions given the state could serve as the policy, which is

$\pi(a \mid s) = \frac{\exp(-F(s, a)/T)}{Z(s)}, \quad Z(s) = \sum_{a'} \exp(-F(s, a')/T)$  (26)
, where $Z(s)$ is the partition function [bishop2006pattern] and the negative free energy $-F(s, a)$ approximates the state-action value function. Once the state-action value function in Equation (26) is learned as a critic [sutton1998introduction], so that its associated policy is defined, MCMC sampling [bishop2006pattern] could be used to sample actions, as an actor [sutton1998introduction]. With the sampled actions, a temporal difference learning method like SARSA [sutton1998introduction] could be carried out to update the state-action value function estimate. Such an on-policy process has been shown to be empirically effective in large state and action spaces [sallans2004reinforcement].
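The free-energy computation and the resulting Boltzmann policy can be sketched as follows; for readability we enumerate candidate actions instead of running MCMC, and all names, shapes, and parameter values are our own illustrative choices, not those of [sallans2004reinforcement]:

```python
import numpy as np

def free_energy(v, W, b_visible, b_hidden):
    """Free energy of a binary RBM with visible units v = [state; action]:
    F(v) = -b_v.v - sum_k log(1 + exp(b_h_k + W_k.v)).

    The hidden units factorize given v (product of experts), so F is cheap
    to evaluate; -F(s, a) approximates Q(s, a).
    """
    pre = b_hidden + W @ v                     # hidden pre-activations
    return -(b_visible @ v + np.sum(np.log1p(np.exp(pre))))

def boltzmann_policy(s, W, b_v, b_h, actions, temperature=1.0):
    """Sample distribution pi(a|s) ~ exp(-F(s, a)/T) over enumerated candidate
    action vectors; enumeration stands in for MCMC in this sketch."""
    q = np.array([-free_energy(np.concatenate([s, a]), W, b_v, b_h)
                  for a in actions])
    logits = q / temperature
    p = np.exp(logits - logits.max())          # stable softmax
    return p / p.sum()
```

Raising the temperature flattens the distribution toward uniform exploration; lowering it concentrates probability on the lowest-free-energy (highest approximate Q) actions.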
IV Variational Inference on Policies
IV-A Policy as "optimal" posterior
The Boltzmann-Machine-defined Product of Experts model in [sallans2004reinforcement] works well for large state and action spaces, but is limited to discrete, specifically binary, state and action variables. For continuous state and action spaces, in [haarnoja2017reinforcement] the author proposed deep energy-based models with Directed Acyclic Graphs (DAG) [bishop2006pattern], which we reorganize in a different form in Figure 7 with annotations added. The difference with respect to Figure 3 is that in Figure 7 the reward is not explicitly expressed in the directed graphical model. Instead, an auxiliary binary observable $\mathcal{O}_t$ is used to indicate whether the corresponding action at the current step is optimal or not. The conditional probability of the action being optimal is $p(\mathcal{O}_t = 1 \mid s_t, a_t) = \exp(r(s_t, a_t))$, which connects conditional optimality with the amount of reward received by encouraging the agent to take highly rewarded actions in an exponential manner. Note that the reward here must be negative to ensure the validity of the probability, which does not hurt generality since the reward range can be translated [levine2018reinforcement].
The Graphical Model in Figure 7 in total defines the trajectory likelihood, or the evidence, in Equation (27):

$p(\tau, \mathcal{O}_{0:T}) = p(s_0) \prod_{t=0}^{T} p(a_t)\, p(s_{t+1} \mid s_t, a_t)\, \exp(r(s_t, a_t))$  (27)
By doing so, the author forces a functional form on top of the conditional independence structure of the graph by assigning this likelihood. In this way, calculating the optimal distribution over actions becomes an inference problem of computing the posterior $p(a_t \mid s_t, \mathcal{O}_{t:T})$, which reads as: conditional on optimality from the current time step until the end of the episode, and on the current state being $s_t$, the distribution of action $a_t$; this posterior corresponds to the optimal policy. Observing the d-separation in Figure 7, $a_t$ is conditionally independent of the past optimality variables $\mathcal{O}_{0:t-1}$ given $s_t$, so $p(a_t \mid s_t, \mathcal{O}_{0:T}) = p(a_t \mid s_t, \mathcal{O}_{t:T})$.
IV-B Message passing for exact inference on the posterior
In this section, we give a detailed derivation of exact inference on the policy posterior, which is not given in [levine2018reinforcement]. Although the resulting backup is not used in practice due to its unexpected behavior, there are theoretical insights worth noting.
The graph in Figure 7 is similar to a Hidden Markov Model (HMM) [bishop2006pattern] if we treat the tuple $(s_t, a_t)$ as the latent variable counterpart of the HMM, with emission probability $p(\mathcal{O}_t \mid s_t, a_t)$, while the transition probability $p(s_{t+1} \mid s_t, a_t)$ maps the variable tuple $(s_t, a_t)$ to a subcomponent $s_{t+1}$ of the next "latent" variable tuple $(s_{t+1}, a_{t+1})$.
Similar to the forward-backward message passing algorithm [bishop2006pattern] in Hidden Markov Models [bishop2006pattern], the posterior could also be calculated by passing messages. We offer a detailed derivation of the decomposition of the posterior in Equation (28), which is not available in [levine2018reinforcement].
$p(a_t \mid s_t, \mathcal{O}_{t:T}) = \frac{p(\mathcal{O}_{t:T} \mid s_t, a_t)\, p(a_t \mid s_t)}{p(\mathcal{O}_{t:T} \mid s_t)} = \frac{\beta(s_t, a_t)}{\beta(s_t)}\, p(a_t \mid s_t)$  (28)
In Equation (28), we define the message $\beta(s_t, a_t) = p(\mathcal{O}_{t:T} \mid s_t, a_t)$ and the message $\beta(s_t) = p(\mathcal{O}_{t:T} \mid s_t)$. If we consider $p(a_t \mid s_t)$ as a prior with a trivial uniform form [levine2018reinforcement], the only policy-related term becomes the ratio $\beta(s_t, a_t) / \beta(s_t)$.
In Hidden Markov Models (HMM) [bishop2006pattern], if we use $x_t$ to represent the visible observed state and $z_t$ to represent the hidden latent state, with $T$ the series length, then it is essential to calculate the posteriors $p(z_t \mid x_{1:T})$ and $p(z_t, z_{t+1} \mid x_{1:T})$, which are marginals of the complete posterior $p(z_{1:T} \mid x_{1:T})$. The posterior marginal could be computed from the forward message $\alpha(z_t) = p(x_{1:t}, z_t)$ and the backward message $\beta(z_t)$, which is the probability distribution of the observables from the next time step until the end of the sequence, conditional on the current latent state.
In contrast, here only the backward messages are relevant. Additionally, the backward message here is not a probability distribution as in an HMM; instead, it is just a probability. In Figure 7, the backward message could be decomposed recursively. Since in [levine2018reinforcement] the author only gives the conclusion without derivation, we give a detailed derivation of this recursion in Equation (29).
$\beta(s_t, a_t) = p(\mathcal{O}_{t:T} \mid s_t, a_t) = p(\mathcal{O}_t \mid s_t, a_t) \int \beta(s_{t+1})\, p(s_{t+1} \mid s_t, a_t)\, \mathrm{d}s_{t+1}$

$= p(\mathcal{O}_t \mid s_t, a_t)\, \mathbb{E}_{s_{t+1} \sim p(\cdot \mid s_t, a_t)}\left[ \beta(s_{t+1}) \right], \quad \beta(s_{t+1}) = \mathbb{E}_{a_{t+1} \sim p(a_{t+1} \mid s_{t+1})}\left[ \beta(s_{t+1}, a_{t+1}) \right]$  (29)
The recursion in Equation (29) starts from the last time step of an episode.
IV-C Connection between Message Passing and Bellman equation
Taking the logarithm of Equation (29), with $Q(s_t, a_t) = \log \beta(s_t, a_t)$ and $V(s_t) = \log \beta(s_t)$, we get Equation (33)

$Q(s_t, a_t) = r(s_t, a_t) + \log \mathbb{E}_{s_{t+1}}\left[ \exp\left( V(s_{t+1}) \right) \right]$  (33)
which reduces to the risk-seeking backup in Equation (34) as mentioned in [levine2018reinforcement], since the log-expected-exponential is dominated by the luckiest transitions:

$Q(s_t, a_t) \approx r(s_t, a_t) + \max_{s_{t+1}} V(s_{t+1})$  (34)
The mathematical insight here is that if we define the messages passed on the Directed Acyclic Graph in Figure 7, then message passing corresponds to a peculiar Bellman-equation-like backup, which leads to an unwanted risk-seeking behavior [levine2018reinforcement].
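The soft-maximum structure behind these backups can be illustrated numerically: the log-sum-exp of a set of values behaves like a smoothed max, approaching the hard maximum as the gaps between values grow (a numerically stable sketch of ours, not code from [levine2018reinforcement]):

```python
import numpy as np

def soft_value(values):
    """Soft maximum log sum exp(values), computed stably by shifting
    by the maximum before exponentiating.

    For nearly equal inputs it exceeds the max by up to log(n);
    as one value dominates, it converges to the hard max."""
    m = values.max()
    return m + np.log(np.sum(np.exp(values - m)))
```

For example, two equal Q values of 1.0 give a soft value of 1.0 + log 2, while values of 10.0 and 0.0 give essentially 10.0, mirroring how the log-expected-exponential backup overweights the best outcomes.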
IV-D Variational approximation to the "optimal" policy
Since exact inference leads to this unwanted behavior, approximate inference could be used instead. The optimization of the policy could be considered as a variational inference problem: we use the variational policy $q(a_t \mid s_t)$, approximating the action posterior distribution and typically represented by a neural network, to compose the proposal variational likelihood of the trajectory as in Equation (35):
$\hat{p}(\tau) = p(s_0) \prod_{t=0}^{T} q(a_t \mid s_t)\, p(s_{t+1} \mid s_t, a_t)$  (35)
, where the initial state distribution $p(s_0)$ and the environment dynamics of state transition $p(s_{t+1} \mid s_t, a_t)$ are kept intact. Using the proposal trajectory distribution as a pivot, we could derive the Evidence Lower Bound (ELBO) of the optimal trajectory as in Equation (36), which corresponds to an interesting objective function of reward-plus-entropy return, as in Equation (37).
$\log p(\mathcal{O}_{0:T}) \geq \mathbb{E}_{\tau \sim \hat{p}}\left[ \log p(\tau, \mathcal{O}_{0:T}) - \log \hat{p}(\tau) \right]$  (36)

(taking the action prior $p(a_t)$ to be uniform)

$= \mathbb{E}_{\tau \sim \hat{p}}\left[ \sum_{t=0}^{T} r(s_t, a_t) - \log q(a_t \mid s_t) \right] = \sum_{t=0}^{T} \mathbb{E}\left[ r(s_t, a_t) + \mathcal{H}\left( q(\cdot \mid s_t) \right) \right]$  (37)
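The reward-plus-entropy objective of Equation (37) can be estimated from a single rollout as in the following sketch (function names are ours, purely illustrative):

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete distribution, skipping zero entries."""
    p = np.asarray(p)
    nz = p[p > 0]
    return -np.sum(nz * np.log(nz))

def max_entropy_return(rewards, action_probs):
    """Single-rollout estimate of sum_t E[r_t + H(q(.|s_t))], where
    action_probs[t] is the variational policy's distribution at step t."""
    return sum(r + entropy(p) for r, p in zip(rewards, action_probs))
```

A deterministic policy contributes zero entropy at a step, so the objective explicitly rewards keeping the policy stochastic wherever the reward landscape allows it.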
IV-E Connection between policy gradient and Q-learning
A representative method belonging to the above-mentioned framework is Soft Q-learning [haarnoja2017reinforcement], where the soft state-action value function is defined to be

$Q_{\mathrm{soft}}(s_t, a_t) = r(s_t, a_t) + \mathbb{E}_{\tau \sim \pi}\left[ \sum_{l=1}^{\infty} \gamma^{l} \left( r(s_{t+l}, a_{t+l}) + \alpha \mathcal{H}\left( \pi(\cdot \mid s_{t+l}) \right) \right) \right]$  (38)
Soft Q-learning carries out a soft version of the Bellman update, similar to Q-learning [sutton1998introduction], which leads to policy improvement with respect to the corresponding maximum entropy objective in Equation (39).

$J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_{\pi}}\left[ r(s_t, a_t) + \alpha \mathcal{H}\left( \pi(\cdot \mid s_t) \right) \right]$  (39)
Setting the policy as in Equation (32) leads to policy improvement. We offer a detailed proof for a key formula in Equation (40), which is stated as Equation (19) of [haarnoja2017reinforcement] without proof. In Equation (40), we use a distinct symbol for the next state to avoid symbol aliasing where necessary.