Stochastic Latent Actor-Critic:
Deep Reinforcement Learning
with a Latent Variable Model
Abstract
Deep reinforcement learning (RL) algorithms can use high-capacity deep networks to learn directly from image observations. However, these kinds of observation spaces present a number of challenges in practice, since the policy must now solve two problems: a representation learning problem, and a task learning problem. In this paper, we aim to explicitly learn representations that can accelerate reinforcement learning from images. We propose the stochastic latent actor-critic (SLAC) algorithm: a sample-efficient and high-performing RL algorithm for learning policies for complex continuous control tasks directly from high-dimensional image inputs. SLAC learns a compact latent representation space using a stochastic sequential latent variable model, and then learns a critic model within this latent space. By learning a critic within a compact state space, SLAC can learn much more efficiently than standard RL methods. The proposed model also improves performance substantially over alternative representations, such as variational autoencoders. In fact, our experimental evaluation demonstrates that the sample efficiency of our resulting method is comparable to that of model-based RL methods that directly use a similar type of model for control. Furthermore, our method outperforms both model-free and model-based alternatives in terms of final performance and sample efficiency, on a range of difficult image-based control tasks. Our code and videos of our results are available at our website: https://alexleegk.github.io/slac/
1 Introduction
Deep reinforcement learning (RL) algorithms can automatically learn to solve certain tasks from raw, low-level observations such as images. However, these kinds of observation spaces present a number of challenges in practice: on one hand, it is difficult to directly learn from these high-dimensional inputs, but on the other hand, it is also difficult to tease out a compact representation of the underlying task-relevant information from which to learn instead. For these reasons, deep RL directly from low-level observations such as images remains a challenging problem. Particularly in continuous domains governed by complex dynamics, such as robotic control, standard approaches still require separate sensor setups to monitor details of interest in the environment, such as the joint positions of a robot or pose information of objects of interest. To instead be able to learn directly from the more general and rich modality of vision would greatly advance the current state of our learning systems, so we aim to study precisely this. Standard model-free deep RL aims to use direct end-to-end training to explicitly unify these tasks of representation learning and task learning. However, solving both problems together is difficult, since an effective policy requires an effective representation, but in order for an effective representation to emerge, the policy or value function must provide meaningful gradient information using only the model-free supervision signal (i.e., the reward function). In practice, learning directly from images with standard RL algorithms can be slow, sensitive to hyperparameters, and inefficient. In contrast to end-to-end learning with RL, predictive learning can benefit from a rich and informative supervision signal before the agent has even made progress on the task or received any rewards.
This leads us to ask: can we explicitly learn a latent representation from raw low-level observations that makes deep RL easier, through learning a predictive latent variable model?
Predictive models are commonly used in model-based RL for the purpose of planning (Deisenroth and Rasmussen, 2011; Finn and Levine, 2017; Nagabandi et al., 2018; Chua et al., 2018; Zhang et al., 2019) or generating cheap synthetic experience for RL to reduce the required amount of interaction with the real environment (Sutton, 1991; Gu et al., 2016). However, in this work, we are primarily concerned with their potential to alleviate the representation learning challenge in RL. We devise a stochastic predictive model by modeling the high-dimensional observations as the consequence of a latent process, with a Gaussian prior and latent dynamics, as illustrated in Figure 1. A model with an entirely stochastic latent state has the appealing interpretation of being able to properly represent uncertainty about any of the state variables, given its past observations. We demonstrate in our work that fully stochastic state space models can in fact be learned effectively: with a well-designed stochastic network, such models outperform fully deterministic models, and contrary to the observations in prior work (Hafner et al., 2019; Buesing et al., 2018), are actually comparable to partially stochastic models. Finally, we note that this explicit representation learning, even on low-reward data, allows an agent with such a model to make progress on representation learning even before it makes progress on task learning.
Equipped with this model, we can then perform RL in the learned latent space of the predictive model. We posit—and confirm experimentally—that our latent variable model provides a useful representation for RL. Our model represents a partially observed Markov decision process (POMDP), and solving such a POMDP exactly would be computationally intractable (Astrom, 1965; Kaelbling et al., 1998; Igl et al., 2018). We instead propose a simple approximation that trains a Markovian critic on the (stochastic) latent state and trains an actor on a history of observations and actions. The resulting stochastic latent actor-critic (SLAC) algorithm loses some of the benefits of full POMDP solvers, but it is easy and stable to train. It also produces good results, in practice, on a range of challenging problems, making it an appealing alternative to more complex POMDP solution methods.
The main contributions of our SLAC algorithm are useful representations learned from our stochastic sequential latent variable model, as well as effective RL in this learned latent space. We show experimentally that our approach substantially improves on both model-free and model-based RL algorithms on a range of image-based continuous control benchmark tasks, attaining better final performance and learning more quickly than algorithms based on (a) end-to-end deep RL from images, (b) learning in a latent space produced by various alternative latent variable models, such as a variational autoencoder (VAE) (Kingma and Welling, 2014), and (c) model-based RL based on latent state-space models with partially stochastic variables (Hafner et al., 2019).
2 Related Work
Representation learning in RL. End-to-end deep RL can in principle learn representations directly as part of the RL process (Mnih et al., 2013). However, prior work has observed that RL has a "representation learning bottleneck": a considerable portion of the learning period must be spent acquiring good representations of the observation space (Shelhamer et al., 2016). This motivates the use of a distinct representation learning procedure to acquire these representations before the agent has even learned to solve the task. The use of auxiliary supervision in RL to learn such representations has been explored in a number of prior works (Lange and Riedmiller, 2010; Finn et al., 2016; Jaderberg et al., 2017; Higgins et al., 2017; Ha and Schmidhuber, 2018; Nair et al., 2018; Oord et al., 2018; Gelada et al., 2019; Dadashi et al., 2019). In contrast to this class of representation learning algorithms, we explicitly learn a latent variable model of the POMDP, in which the latent representation and latent-space dynamics are jointly learned. By modeling covariances between consecutive latent states, we make it feasible for our proposed algorithm to perform Bellman backups directly in the latent space of the learned model.
Partial observability in RL. Our work is also related to prior research on RL under partial observability. Prior work has studied exact and approximate solutions to POMDPs, but these require explicit models of the POMDP and are only practical for simpler domains (Kaelbling et al., 1998). Recent work has proposed end-to-end RL methods that use recurrent neural networks to process histories of observations and (sometimes) actions, but without constructing a model of the POMDP (Hausknecht and Stone, 2015; Foerster et al., 2016; Zhu et al., 2018). Other works, however, learn latent-space dynamical system models and then use them to solve the POMDP with model-based RL (Watter et al., 2015; Wahlström et al., 2015; Karl et al., 2017; Zhang et al., 2019; Hafner et al., 2019). Although some of these works learn latent variable models that are similar to ours, these model-based methods are often limited by compounding model errors and finite-horizon optimization. In contrast to these works, our approach does not use the model for prediction and performs infinite-horizon policy optimization. Our approach benefits from the good asymptotic performance of model-free RL, while at the same time leveraging the improved latent space representation for sample efficiency. Other works have also trained latent variable models and used their representations as the inputs to model-free RL algorithms. They use representations encoded from latent states sampled from the forward model (Buesing et al., 2018), belief representations obtained from particle filtering (Igl et al., 2018), or belief representations obtained directly from a learned belief-space forward model (Gregor et al., 2019). Our approach is closely related to these prior methods, in that we also use model-free RL with a latent state representation that is learned via prediction. However, instead of using belief representations, our method learns a critic directly on latent state samples.
Sequential latent variable models. Several previous works have explored various modeling choices to learn stochastic sequential models (Krishnan et al., 2015; Archer et al., 2015; Karl et al., 2016; Fraccaro et al., 2016, 2017; Doerr et al., 2018a). In the context of using sequential models for RL, previous works have typically observed that partially stochastic state space models are more effective than fully stochastic ones (Buesing et al., 2018; Igl et al., 2018; Hafner et al., 2019). In these models, the state of the underlying MDP is modeled with the deterministic state of a recurrent network (e.g., LSTM (Hochreiter and Schmidhuber, 1997) or GRU (Cho et al., 2014)), and optionally with some stochastic random variables. As mentioned earlier, a model with a latent state that is entirely stochastic has the appealing interpretation of learning a representation that can properly represent uncertainty about any of the state variables, given past observations. We demonstrate in our work that fully stochastic state space models can in fact be learned effectively and that, with a well-designed stochastic network, such models perform on par with partially stochastic models and outperform fully deterministic models.
3 Reinforcement Learning and Modeling
This work addresses the problem of learning maximum entropy policies from high-dimensional observations in POMDPs, by simultaneously learning a latent representation of the underlying MDP state using variational inference and learning the policy in a maximum entropy RL framework. In this section, we describe maximum entropy RL (Ziebart, 2010; Haarnoja et al., 2018a; Levine, 2018) in fully observable MDPs, as well as variational methods for training latent state space models for POMDPs.
3.1 Maximum Entropy RL in Fully Observable MDPs
In a Markov decision process (MDP), an agent at time $t$ takes an action $\mathbf{a}_t$ from state $\mathbf{s}_t$ and reaches the next state $\mathbf{s}_{t+1}$ according to some stochastic transition dynamics $p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t)$. The initial state $\mathbf{s}_1$ comes from a distribution $p(\mathbf{s}_1)$, and the agent receives a reward $r_t$ on each of the transitions. Standard RL aims to learn the parameters $\phi$ of some policy $\pi_\phi(\mathbf{a}_t \mid \mathbf{s}_t)$ such that the expected sum of rewards is maximized under the induced trajectory distribution $\rho_\pi$. This objective can be modified to incorporate an entropy term, such that the policy also aims to maximize the expected entropy $\mathcal{H}(\pi_\phi(\cdot \mid \mathbf{s}_t))$ under the induced trajectory distribution $\rho_\pi$. This formulation has a close connection to variational inference (Ziebart, 2010; Haarnoja et al., 2018a; Levine, 2018), and we build on this in our work. The resulting maximum entropy objective is
$$\sum_{t=1}^{T} \mathbb{E}_{(\mathbf{s}_t, \mathbf{a}_t) \sim \rho_\pi} \big[ r(\mathbf{s}_t, \mathbf{a}_t) + \alpha \mathcal{H}(\pi(\cdot \mid \mathbf{s}_t)) \big] \tag{1}$$
where $r$ is the reward function, and $\alpha$ is a temperature parameter that controls the trade-off between optimizing for the reward and for the entropy (i.e., stochasticity) of the policy. Soft actor-critic (SAC) (Haarnoja et al., 2018a) uses this maximum entropy RL framework to derive soft policy iteration, which alternates between policy evaluation and policy improvement within the described maximum entropy framework. SAC then extends this soft policy iteration to handle continuous action spaces by using parameterized function approximators to represent both the Q-function (critic) and the policy (actor). The soft Q-function parameters $\theta$ are optimized to minimize the soft Bellman residual,
$$J_Q(\theta) = \mathbb{E}_{(\mathbf{s}_t, \mathbf{a}_t) \sim \mathcal{D}} \Big[ \tfrac{1}{2} \big( Q_\theta(\mathbf{s}_t, \mathbf{a}_t) - \big( r(\mathbf{s}_t, \mathbf{a}_t) + \gamma \, \mathbb{E}_{\mathbf{s}_{t+1} \sim p} [ V_{\bar\theta}(\mathbf{s}_{t+1}) ] \big) \big)^2 \Big] \tag{2}$$

$$V_{\bar\theta}(\mathbf{s}_{t+1}) = \mathbb{E}_{\mathbf{a}_{t+1} \sim \pi_\phi} \big[ Q_{\bar\theta}(\mathbf{s}_{t+1}, \mathbf{a}_{t+1}) - \alpha \log \pi_\phi(\mathbf{a}_{t+1} \mid \mathbf{s}_{t+1}) \big] \tag{3}$$
where $\mathcal{D}$ is the replay buffer, $\gamma$ is the discount factor, and $\bar\theta$ are delayed parameters. The policy parameters $\phi$ are optimized to update the policy towards the exponential of the soft Q-function,
$$J_\pi(\phi) = \mathbb{E}_{\mathbf{s}_t \sim \mathcal{D}} \Big[ \mathbb{E}_{\mathbf{a}_t \sim \pi_\phi} \big[ \alpha \log \pi_\phi(\mathbf{a}_t \mid \mathbf{s}_t) - Q_\theta(\mathbf{s}_t, \mathbf{a}_t) \big] \Big] \tag{4}$$
Results of this stochastic, entropy-maximizing RL framework demonstrate improved robustness and stability. SAC also shows the sample efficiency benefits of an off-policy learning algorithm, in conjunction with the high performance benefits of a long-horizon planning algorithm. Precisely for these reasons, we choose to extend the SAC algorithm in this work to formulate our SLAC algorithm.
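To make the SAC objectives above concrete, the following is a minimal Monte Carlo sketch of the soft value, critic loss, and actor loss in Equations (2)–(4). The scalar states and actions, and the callables `q` and `log_pi`, are illustrative stand-ins for the neural network function approximators, not the paper's implementation.

```python
# Hypothetical toy sketch of the SAC losses, with Q-functions and the
# policy log-density passed in as simple callables. Actions are supplied
# as a list of policy samples for the Monte Carlo expectations.

def soft_value(q, log_pi, alpha, s, actions):
    # V(s) = E_{a~pi}[Q(s, a) - alpha * log pi(a|s)], estimated by averaging
    # over the provided action samples (Eq. 3).
    return sum(q(s, a) - alpha * log_pi(a, s) for a in actions) / len(actions)

def critic_loss(q, q_target, log_pi, alpha, gamma, batch, actions):
    # J_Q = E[(1/2) * (Q(s, a) - (r + gamma * V_target(s')))^2] over a batch
    # of (s, a, r, s') transitions from the replay buffer (Eq. 2).
    total = 0.0
    for s, a, r, s_next in batch:
        target = r + gamma * soft_value(q_target, log_pi, alpha, s_next, actions)
        total += 0.5 * (q(s, a) - target) ** 2
    return total / len(batch)

def actor_loss(q, log_pi, alpha, states, actions):
    # J_pi = E_s E_{a~pi}[alpha * log pi(a|s) - Q(s, a)] (Eq. 4); minimizing
    # this pushes the policy towards the exponential of the soft Q-function.
    total = 0.0
    for s in states:
        total += sum(alpha * log_pi(a, s) - q(s, a) for a in actions) / len(actions)
    return total / len(states)
```

In practice the expectations over actions are estimated with reparameterized samples from the policy so that gradients flow into the policy parameters; the averaging structure is the same.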
3.2 Sequential Latent Variable Models and Amortized Variational Inference in POMDPs
To learn representations for RL, we use latent variable models trained with amortized variational inference. The learned model must be able to process the large number of pixels present in an entangled image observation $\mathbf{x}$, and it must tease out the relevant information into a compact and disentangled representation $\mathbf{z}$. To learn such a model, we can consider maximizing the probability of each observed datapoint $\mathbf{x}$ from some training set under the entire generative process $p(\mathbf{x}) = \int p(\mathbf{x} \mid \mathbf{z}) p(\mathbf{z}) \, d\mathbf{z}$. This objective is intractable to compute in general due to the marginalization of the latent variables $\mathbf{z}$. In amortized variational inference, we utilize the following bound on the log-likelihood (Kingma and Welling, 2014),
$$\log p(\mathbf{x}) \geq \mathbb{E}_{\mathbf{z} \sim q(\mathbf{z} \mid \mathbf{x})} \big[ \log p(\mathbf{x} \mid \mathbf{z}) \big] - D_{\mathrm{KL}} \big( q(\mathbf{z} \mid \mathbf{x}) \,\|\, p(\mathbf{z}) \big) \tag{5}$$
We can maximize the probability of the observed datapoints (i.e., the left-hand side of Equation (5)) by learning an encoder $q(\mathbf{z} \mid \mathbf{x})$ and a decoder $p(\mathbf{x} \mid \mathbf{z})$, and then directly performing gradient ascent on the right-hand side of the equation. In this setup, the distributions of interest are the prior $p(\mathbf{z})$, the observation model $p(\mathbf{x} \mid \mathbf{z})$, and the posterior $q(\mathbf{z} \mid \mathbf{x})$.
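A minimal sketch of this bound, assuming a one-dimensional Gaussian encoder $q(z \mid x) = \mathcal{N}(\mu, \sigma^2)$, a standard normal prior, and a unit-variance Gaussian decoder (the function names are ours, for illustration only):

```python
import math
import random

def kl_to_standard_normal(mu, sigma):
    # Closed-form KL(N(mu, sigma^2) || N(0, 1)), the second term of Eq. (5).
    return 0.5 * (mu ** 2 + sigma ** 2 - 1.0) - math.log(sigma)

def elbo_estimate(x, mu, sigma, decode, rng):
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, 1), so
    # gradients could flow through mu and sigma in a real implementation.
    z = mu + sigma * rng.gauss(0.0, 1.0)
    # log p(x|z) for a unit-variance Gaussian decoder, up to a constant.
    log_likelihood = -0.5 * (x - decode(z)) ** 2
    return log_likelihood - kl_to_standard_normal(mu, sigma)
```

Gradient ascent on this single-sample estimate, averaged over the dataset, trains the encoder and decoder jointly.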
Although such generative models have been shown to successfully model various types of complex distributions (Kingma and Welling, 2014) by embedding knowledge of the distribution into an informative latent space, they do not have a built-in mechanism for the use of temporal information when performing inference. In the case of partially observable environments, as we discuss below, the representative latent state corresponding to a given non-Markovian observation needs to be informed by past observations.
Consider a partially observable MDP (POMDP), where an action $\mathbf{a}_t$ from latent state $\mathbf{z}_t$ results in latent state $\mathbf{z}_{t+1}$ and emits a corresponding observation $\mathbf{x}_{t+1}$. We make an explicit distinction between an observation $\mathbf{x}_t$ and the underlying latent state $\mathbf{z}_t$, to emphasize that the latter is unobserved and its distribution is not known a priori. Analogous to the fully observable MDP, the initial state distribution is $p(\mathbf{z}_1)$, the transition probability distribution is $p(\mathbf{z}_{t+1} \mid \mathbf{z}_t, \mathbf{a}_t)$, and the reward is $r_t$. In addition, the observation model is given by $p(\mathbf{x}_t \mid \mathbf{z}_t)$.
As in the case of VAEs, a generative model of these observations can be learned by maximizing the log-likelihood. In the POMDP setting, however, we note that $\mathbf{x}_t$ alone does not provide all necessary information to infer $\mathbf{z}_t$, and thus, prior temporal information must be taken into account. This brings us to the discussion of sequential latent variable models. The distributions of interest are the priors $p(\mathbf{z}_1)$ and $p(\mathbf{z}_{t+1} \mid \mathbf{z}_t, \mathbf{a}_t)$, the observation model $p(\mathbf{x}_t \mid \mathbf{z}_t)$, and the approximate posteriors $q(\mathbf{z}_1 \mid \mathbf{x}_1)$ and $q(\mathbf{z}_{t+1} \mid \mathbf{x}_{t+1}, \mathbf{z}_t, \mathbf{a}_t)$. The log-likelihood of the observations can then be bounded, similarly to the VAE bound in Equation (5), as
$$\log p(\mathbf{x}_{1:\tau+1} \mid \mathbf{a}_{1:\tau}) \geq \mathbb{E}_{\mathbf{z}_{1:\tau+1} \sim q} \Big[ \sum_{t=0}^{\tau} \log p(\mathbf{x}_{t+1} \mid \mathbf{z}_{t+1}) - D_{\mathrm{KL}} \big( q(\mathbf{z}_{t+1} \mid \mathbf{x}_{t+1}, \mathbf{z}_t, \mathbf{a}_t) \,\|\, p(\mathbf{z}_{t+1} \mid \mathbf{z}_t, \mathbf{a}_t) \big) \Big] \tag{6}$$
Prior work (Hafner et al., 2019; Buesing et al., 2018; Doerr et al., 2018b) has explored modeling such non-Markovian observation sequences, using methods such as recurrent neural networks with deterministic hidden state, as well as probabilistic state-space models. In this work, we enable the effective training of a fully stochastic sequential latent variable model, and bring it together with a maximum entropy actor-critic RL algorithm to create SLAC: a sample-efficient and high-performing RL algorithm for learning policies for complex continuous control tasks directly from high-dimensional image inputs.
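The sequential bound decomposes into a per-step reconstruction term and a per-step KL term, with the posterior at each step conditioned on the new observation, the previous latent, and the action. A schematic evaluation of that decomposition, with one-dimensional Gaussians standing in for the learned distributions (this toy interface is ours, not the paper's):

```python
import math

def gaussian_kl(mu_q, sigma_q, mu_p, sigma_p):
    # Closed-form KL(N(mu_q, sigma_q^2) || N(mu_p, sigma_p^2)).
    return (math.log(sigma_p / sigma_q)
            + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2.0 * sigma_p ** 2)
            - 0.5)

def sequential_elbo(steps):
    # Each step supplies the reconstruction log-likelihood log p(x_{t+1}|z_{t+1})
    # plus (mean, std) for the posterior q(z_{t+1}|x_{t+1}, z_t, a_t) and for
    # the learned prior p(z_{t+1}|z_t, a_t); the bound is the sum over steps.
    bound = 0.0
    for log_lik, (mu_q, sigma_q), (mu_p, sigma_p) in steps:
        bound += log_lik - gaussian_kl(mu_q, sigma_q, mu_p, sigma_p)
    return bound
```

In the full model each latent is a diagonal Gaussian whose parameters come from neural networks, so the KL term is a sum of such one-dimensional divergences.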
4 Joint Modeling and Control as Inference
Our method aims to learn maximum entropy policies from high-dimensional, non-Markovian observations in a POMDP, while also learning a model of that POMDP. The model alleviates the representation learning problem, which in turn helps with the policy learning problem. We formulate the control problem as inference in a probabilistic graphical model with latent variables, as shown in Figure 1.
For a fully observable MDP, the control problem can be embedded into a graphical model by introducing a binary random variable $\mathcal{O}_t$, which indicates if time step $t$ is optimal. When its distribution is chosen to be $p(\mathcal{O}_t = 1 \mid \mathbf{s}_t, \mathbf{a}_t) = \exp(r(\mathbf{s}_t, \mathbf{a}_t))$, then maximization of $p(\mathcal{O}_{1:T})$ via approximate inference in that model yields the optimal policy for the maximum entropy objective (Levine, 2018).
In a POMDP setting, the distribution can analogously be given by $p(\mathcal{O}_t = 1 \mid \mathbf{z}_t, \mathbf{a}_t) = \exp(r(\mathbf{z}_t, \mathbf{a}_t))$. Instead of maximizing the likelihood of the optimality variables alone, we jointly model the observations (including the observed rewards of the past time steps) and learn maximum entropy policies by maximizing the marginal likelihood $p(\mathbf{x}_{1:\tau+1}, \mathcal{O}_{\tau+1:T} \mid \mathbf{a}_{1:\tau})$. This objective represents both the likelihood of the observed data from the past $\tau+1$ steps, as well as the optimality of the agent's actions for future steps. We factorize our variational distribution into a product of recognition terms $q(\mathbf{z}_1 \mid \mathbf{x}_1)$ and $q(\mathbf{z}_{t+1} \mid \mathbf{x}_{t+1}, \mathbf{z}_t, \mathbf{a}_t)$, dynamics terms $p(\mathbf{z}_{t+1} \mid \mathbf{z}_t, \mathbf{a}_t)$, and policy terms $\pi(\mathbf{a}_t \mid \mathbf{z}_t)$:
$$q(\mathbf{z}_{1:T+1}, \mathbf{a}_{\tau+1:T} \mid \mathbf{x}_{1:\tau+1}, \mathbf{a}_{1:\tau}) = \prod_{t=0}^{\tau} q(\mathbf{z}_{t+1} \mid \mathbf{x}_{t+1}, \mathbf{z}_t, \mathbf{a}_t) \prod_{t=\tau+1}^{T} p(\mathbf{z}_{t+1} \mid \mathbf{z}_t, \mathbf{a}_t) \, \pi(\mathbf{a}_t \mid \mathbf{z}_t) \tag{7}$$
The variational distribution uses the dynamics for future time steps to prevent the agent from controlling the transitions and from choosing optimistic actions (Levine, 2018). The posterior over the actions represents the agent's policy $\pi(\mathbf{a}_t \mid \mathbf{z}_t)$. Although this derivation uses a policy that is conditioned on the latent state, our algorithm, which will be described in the next section, learns a parametric policy that is directly conditioned on observations and actions. This approximation allows us to directly execute the policy without having to perform inference on the latent state at run time.
We use the posterior from Equation (7) to obtain the evidence lower bound (ELBO) of the marginal likelihood,
$$\log p(\mathbf{x}_{1:\tau+1}, \mathcal{O}_{\tau+1:T} \mid \mathbf{a}_{1:\tau}) \geq \mathbb{E}_{(\mathbf{z}_{1:T+1}, \mathbf{a}_{\tau+1:T}) \sim q} \Big[ \sum_{t=0}^{\tau} \big( \log p(\mathbf{x}_{t+1} \mid \mathbf{z}_{t+1}) - D_{\mathrm{KL}} \big( q(\mathbf{z}_{t+1} \mid \mathbf{x}_{t+1}, \mathbf{z}_t, \mathbf{a}_t) \,\|\, p(\mathbf{z}_{t+1} \mid \mathbf{z}_t, \mathbf{a}_t) \big) \big) + \sum_{t=\tau+1}^{T} \big( r(\mathbf{z}_t, \mathbf{a}_t) + \log p(\mathbf{a}_t) - \log \pi(\mathbf{a}_t \mid \mathbf{z}_t) \big) \Big] \tag{8}$$
where $p(\mathbf{a}_t)$ is the action prior. The full derivation of the ELBO is given in Appendix A. This derivation assumes that the reward function, which determines $p(\mathcal{O}_t \mid \mathbf{z}_t, \mathbf{a}_t)$, is known. However, in many RL problems, this is not the case. In that situation, we can simply append the reward to the observation, and learn to model the reward along with the observations. This requires no modification to our method other than changing the observation space, and we use this approach in all of our experiments. We do this to learn latent representations that are more relevant to the task, but we do not use reward predictions from the model. Instead, the RL objective uses rewards from the agent's experience, as in model-free RL.
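The reward-appending trick above amounts to treating the reward as one more observed channel, so the latent state is shaped to carry task-relevant information. A minimal sketch of that augmentation (the function name is ours, for illustration):

```python
def augment_observation(image_features, reward):
    # Concatenate the scalar reward onto the flattened image observation;
    # the latent variable model then reconstructs the reward alongside the
    # image, with no other change to the training objective.
    return list(image_features) + [float(reward)]
```

At training time every stored transition's observation is augmented this way; at execution time the policy still acts on raw observations, and the RL objective uses the experienced rewards directly.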
5 Stochastic Latent Actor-Critic
We now describe our stochastic latent actor-critic (SLAC) algorithm, which approximately maximizes the ELBO using function approximators to model the prior and posterior distributions. The ELBO objective in Equation (8) can be split into a model objective and a maximum entropy RL objective. The model objective can be optimized directly, while the maximum entropy RL objective can be solved via message passing. We can learn Q-functions for the messages, and then we can rewrite the RL objective to express it in terms of these messages. Additional details of the derivation of the SLAC objectives are given in Appendix A.
Latent Variable Model: The first part of the ELBO corresponds to training the latent variable model to maximize the likelihood of the observations, analogous to the ELBO in Equation (6) for the sequential latent variable model. The distributions of the latent variable model are diagonal Gaussian distributions, where the means and variances are outputs of neural networks. The parameters $\psi$ of this model are optimized to maximize the first part of the ELBO. The model loss is
$$J_M(\psi) = \mathbb{E}_{\mathbf{z}_{1:\tau+1} \sim q_\psi} \Big[ \sum_{t=0}^{\tau} - \log p_\psi(\mathbf{x}_{t+1} \mid \mathbf{z}_{t+1}) + D_{\mathrm{KL}} \big( q_\psi(\mathbf{z}_{t+1} \mid \mathbf{x}_{t+1}, \mathbf{z}_t, \mathbf{a}_t) \,\|\, p_\psi(\mathbf{z}_{t+1} \mid \mathbf{z}_t, \mathbf{a}_t) \big) \Big] \tag{9}$$
We use the reparameterization trick to sample from the filtering distribution $q_\psi(\mathbf{z}_{t+1} \mid \mathbf{x}_{t+1}, \mathbf{z}_t, \mathbf{a}_t)$.
Critic and Actor: The second part of the ELBO corresponds to the maximum entropy RL objective. As in the fully observable case from Section 3.1 and as described by Levine (2018), this optimization can be solved via message passing of soft Q-values, except that we use the latent states $\mathbf{z}_t$ rather than the true states $\mathbf{s}_t$. For continuous state and action spaces, this message passing is approximated by minimizing the soft Bellman residual, which we use to train our soft Q-function parameters $\theta$,
$$J_Q(\theta) = \mathbb{E}_{(\mathbf{x}_{1:\tau+1}, \mathbf{a}_{1:\tau}) \sim \mathcal{D}, \, \mathbf{z}_{1:\tau+1} \sim q_\psi} \Big[ \tfrac{1}{2} \big( Q_\theta(\mathbf{z}_\tau, \mathbf{a}_\tau) - \big( r_\tau + \gamma \, V_{\bar\theta}(\mathbf{z}_{\tau+1}) \big) \big)^2 \Big], \quad V_{\bar\theta}(\mathbf{z}_{\tau+1}) = \mathbb{E}_{\mathbf{a}_{\tau+1} \sim \pi_\phi} \big[ Q_{\bar\theta}(\mathbf{z}_{\tau+1}, \mathbf{a}_{\tau+1}) - \alpha \log \pi_\phi(\mathbf{a}_{\tau+1} \mid \mathbf{x}_{1:\tau+1}, \mathbf{a}_{1:\tau}) \big] \tag{10}$$
where $\bar\theta$ are delayed parameters, obtained as exponential moving averages of $\theta$. Notice that the latents $\mathbf{z}_\tau$ and $\mathbf{z}_{\tau+1}$, which are used in the Bellman backup, are sampled from the same joint posterior $q_\psi(\mathbf{z}_{1:\tau+1} \mid \mathbf{x}_{1:\tau+1}, \mathbf{a}_{1:\tau})$. The RL objective, which corresponds to the second part of the ELBO, can be rewritten in terms of the soft Q-function. The policy parameters $\phi$ are optimized to maximize this objective, analogously to soft actor-critic (Haarnoja et al., 2018a). The policy loss is then
$$J_\pi(\phi) = \mathbb{E}_{(\mathbf{x}_{1:\tau+1}, \mathbf{a}_{1:\tau}) \sim \mathcal{D}, \, \mathbf{z}_{\tau+1} \sim q_\psi} \Big[ \mathbb{E}_{\mathbf{a}_{\tau+1} \sim \pi_\phi} \big[ \alpha \log \pi_\phi(\mathbf{a}_{\tau+1} \mid \mathbf{x}_{1:\tau+1}, \mathbf{a}_{1:\tau}) - Q_\theta(\mathbf{z}_{\tau+1}, \mathbf{a}_{\tau+1}) \big] \Big] \tag{11}$$
We assume a uniform action prior, so $\log p(\mathbf{a}_t)$ is a constant term that we omit from the policy loss. We use the reparameterization trick to sample from the policy, and the policy loss only uses the last latent sample $\mathbf{z}_{\tau+1}$ of the sequence for the critic. Although the policy used in our derivation is conditioned on the latent state, our learned parametric policy is conditioned directly on the past observations and actions, so that the learned policy can be executed at run time without requiring inference of the latent state. Finally, we note that for the expectation over latent states in the Bellman residual in Equation (10), rather than sampling latent states from the prior, we sample latent states from the filtering distribution $q_\psi$. This design choice allows us to minimize the critic loss for the samples that are most relevant to the current policy, while also allowing the critic loss to use the Q-function in the same way as implied by the policy loss in Equation (11).
SLAC is outlined in Algorithm 1. The actor-critic component follows prior work, with automatic tuning of the temperature $\alpha$ and two Q-functions to mitigate overestimation (Fujimoto et al., 2018; Haarnoja et al., 2018a, b). SLAC can be viewed as a variant of SAC (Haarnoja et al., 2018a) in which the critic is trained on the stochastic latent state of our sequential latent variable model. The backup for the critic is performed on a tuple $(\mathbf{z}_\tau, \mathbf{a}_\tau, r_\tau, \mathbf{z}_{\tau+1})$, sampled from the posterior. The critic can, in principle, take advantage of perfect knowledge of the state $\mathbf{z}$, which makes learning easier. However, the parametric policy does not have access to $\mathbf{z}$, and must make decisions based on a history of observations and actions. SLAC is not a model-based algorithm, in that it does not use the model for prediction, but we see in our experiments that SLAC can achieve sample efficiency similar to that of a model-based algorithm.
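The alternation described in this section can be sketched as a single training iteration. This is a schematic of the control flow only, with opaque update callables standing in for the model loss (Eq. 9), the critic loss (Eq. 10), and the actor loss (Eq. 11); the names and interface are ours, not the authors' implementation.

```python
import random

def slac_iteration(replay_buffer, update_model, update_critics, update_actor,
                   sample_posterior, batch_size, rng):
    # 1. Sample a batch of stored (observation, action, reward) subsequences.
    batch = rng.sample(replay_buffer, min(batch_size, len(replay_buffer)))
    # 2. Train the latent variable model on the raw subsequences.
    model_loss = update_model(batch)
    # 3. Infer latents with the filtering posterior q, then train the critics
    #    on latent states and the actor on observation-action histories.
    latents = [sample_posterior(sequence) for sequence in batch]
    critic_loss = update_critics(batch, latents)
    actor_loss = update_actor(batch, latents)
    return model_loss, critic_loss, actor_loss
```

Environment interaction (executing the history-conditioned policy and appending transitions to the buffer) is interleaved with these updates, as in other off-policy actor-critic methods.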
6 Latent Variable Model
We briefly summarize our full model architecture here, with full details in Appendix B. Motivated by the recent success of autoregressive latent variables in VAEs (Razavi et al., 2019; Maaloe et al., 2019), we factorize the latent variable $\mathbf{z}_t$ into two stochastic layers, $\mathbf{z}^1_t$ and $\mathbf{z}^2_t$, as shown in Figure 2. This factorization results in latent distributions that are more expressive, and it allows some parts of the prior and posterior distributions to be shared. We found this design to produce high-quality reconstructions and samples, and we utilize it in all of our experiments. The generative model and the inference model are given by

$$\mathbf{z}^1_1 \sim p(\mathbf{z}^1_1), \quad \mathbf{z}^2_1 \sim p_\psi(\mathbf{z}^2_1 \mid \mathbf{z}^1_1), \quad \mathbf{z}^1_{t+1} \sim p_\psi(\mathbf{z}^1_{t+1} \mid \mathbf{z}^2_t, \mathbf{a}_t), \quad \mathbf{z}^2_{t+1} \sim p_\psi(\mathbf{z}^2_{t+1} \mid \mathbf{z}^1_{t+1}, \mathbf{z}^2_t, \mathbf{a}_t), \quad \mathbf{x}_t \sim p_\psi(\mathbf{x}_t \mid \mathbf{z}^1_t, \mathbf{z}^2_t)$$

and

$$\mathbf{z}^1_1 \sim q_\psi(\mathbf{z}^1_1 \mid \mathbf{x}_1), \quad \mathbf{z}^1_{t+1} \sim q_\psi(\mathbf{z}^1_{t+1} \mid \mathbf{x}_{t+1}, \mathbf{z}^2_t, \mathbf{a}_t), \quad \mathbf{z}^2_1 \sim p_\psi(\mathbf{z}^2_1 \mid \mathbf{z}^1_1), \quad \mathbf{z}^2_{t+1} \sim p_\psi(\mathbf{z}^2_{t+1} \mid \mathbf{z}^1_{t+1}, \mathbf{z}^2_t, \mathbf{a}_t).$$
Note that we choose the variational distribution over $\mathbf{z}^2$ to be the same as the model, i.e. the inference model reuses $p_\psi(\mathbf{z}^2_{t+1} \mid \mathbf{z}^1_{t+1}, \mathbf{z}^2_t, \mathbf{a}_t)$. Thus, the KL divergence in the model loss simplifies to the divergence between $q_\psi$ and $p_\psi$ over $\mathbf{z}^1$ only. We use a multivariate standard normal distribution for the initial prior over the first layer, since it is not conditioned on any variables, i.e. $p(\mathbf{z}^1_1) = \mathcal{N}(\mathbf{0}, \mathbf{I})$. The conditional distributions of our model are diagonal Gaussian, with means and variances given by neural networks. Unlike models from prior work (Hafner et al., 2019; Buesing et al., 2018; Doerr et al., 2018b), which have deterministic and stochastic paths and use recurrent neural networks, ours is fully stochastic, i.e. our latent state is a Markovian latent random variable formed by the concatenation of $\mathbf{z}^1_t$ and $\mathbf{z}^2_t$. Further details are discussed in Appendix B.
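Ancestral sampling through this two-layer latent state can be sketched as follows. The constant means and unit scales are placeholders for the neural networks that parameterize each conditional in the real model, and the sketch assumes the second layer is at least as wide as the first:

```python
import random

def sample_gaussian(mean, std, rng):
    # Draw one sample from a diagonal Gaussian given per-dimension mean/std.
    return [m + s * rng.gauss(0.0, 1.0) for m, s in zip(mean, std)]

def generative_rollout(actions, dims, rng):
    d1, d2 = dims  # widths of the two stochastic layers (assume d2 >= d1)
    # z1_1 ~ N(0, I): the first-layer initial prior is a fixed unit Gaussian.
    z1 = sample_gaussian([0.0] * d1, [1.0] * d1, rng)
    # z2_1 ~ p(z2_1 | z1_1): placeholder network (mean = z1, padded/truncated).
    z2 = sample_gaussian((z1 + [0.0] * d2)[:d2], [1.0] * d2, rng)
    states = [z1 + z2]  # the latent state is the concatenation of z1 and z2
    for a in actions:
        # z1_{t+1} ~ p(z1 | z2_t, a_t), then z2_{t+1} ~ p(z2 | z1_{t+1}, z2_t, a_t);
        # the shift by the action stands in for a learned conditional mean.
        z1 = sample_gaussian([v + a for v in z2[:d1]], [1.0] * d1, rng)
        z2 = sample_gaussian([v + a for v in z2], [1.0] * d2, rng)
        states.append(z1 + z2)
    return states
```

In the inference model, the first-layer samples would instead come from the encoder given each observation, while the second layer reuses the same conditionals as shown in the loop.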
7 Experimental Evaluation
We evaluate SLAC on numerous image-based continuous control tasks from both the DeepMind Control Suite (Tassa et al., 2018) and OpenAI Gym (Brockman et al., 2016), as illustrated in Figure 3. Full details of SLAC's network architecture are described in Appendix B. Aside from the value of action repeats (i.e., control frequency) for the tasks, we kept all of SLAC's hyperparameters constant across all tasks in all domains. Training and evaluation details are given in Appendix C, and image samples from our model for all tasks are shown in Appendix E. Additionally, visualizations of our results and code are available on the project website: https://alexleegk.github.io/slac/
7.1 Comparative Evaluation on Continuous Control Benchmark Tasks
To provide a comparative evaluation against prior methods, we evaluate SLAC on four tasks (cheetah run, walker walk, ball-in-cup catch, finger spin) from the DeepMind Control Suite (Tassa et al., 2018), and four tasks (cheetah, walker, ant, hopper) from OpenAI Gym (Brockman et al., 2016). Note that the Gym tasks are typically used with low-dimensional state observations, while we evaluate on them with raw image observations. We compare our method to the following state-of-the-art model-based and model-free algorithms:
SAC (Haarnoja et al., 2018a): This is an off-policy actor-critic algorithm, which represents a comparison to state-of-the-art model-free learning. We include experiments showing the performance of SAC based on the true state (as an upper bound on performance) as well as directly from raw images.
D4PG (Barth-Maron et al., 2018): This is also an off-policy actor-critic algorithm, learning directly from raw images. The results reported in the plots below are the final performance after training, as stated in the benchmarks from Tassa et al. (2018).
MPO (Abdolmaleki et al., 2018b, a): This is an off-policy actor-critic algorithm that performs an expectation-maximization form of policy iteration, learning directly from raw images.
PlaNet (Hafner et al., 2019): This is a model-based RL method for learning from images, which uses a partially stochastic sequential latent variable model, but without explicit policy learning. Instead, the model is used for planning with model predictive control (MPC), where each plan is optimized with the cross-entropy method (CEM).
DVRL (Igl et al., 2018): This is an on-policy model-free RL algorithm that also trains a partially stochastic latent-variable POMDP model. DVRL uses the full belief over the latent state as input to both the actor and the critic, as opposed to our method, which trains the critic with the latent state and the actor with a history of actions and observations.
Our experiments on the DeepMind Control Suite in Figure 5 show that the sample efficiency of SLAC is comparable to or better than that of both model-based and model-free alternatives. This indicates that overcoming the representation learning bottleneck, coupled with efficient off-policy RL, provides for fast learning similar to model-based methods, while attaining final performance comparable to fully model-free techniques that learn from state. SLAC also substantially outperforms DVRL. This difference can be explained in part by the use of an efficient off-policy RL algorithm, which can better take advantage of the learned representation.
We also evaluate SLAC on continuous control benchmark tasks from OpenAI Gym in Figure 5. We note that these tasks are much more challenging than the DeepMind Control Suite tasks, because the rewards are less shaped and not bounded between 0 and 1, the dynamics are different, and the episodes terminate on failure (e.g., when the hopper or walker falls over). PlaNet is unable to solve the last three tasks, while for the cheetah task, it learns a suboptimal policy that involves flipping the cheetah over and pushing forward while on its back. To better understand the performance of fixed-horizon MPC on these tasks, we also evaluated it with the ground-truth dynamics (i.e., the true simulator), and found that even in this case, MPC did not achieve good final performance, suggesting that infinite-horizon policy optimization, of the sort performed by SLAC and model-free algorithms, is important to attain good results on these tasks.
Our experiments show that SLAC successfully learns complex continuous control benchmark tasks from raw image inputs. On the DeepMind Control Suite, SLAC exceeds the performance of PlaNet on three of the tasks, and matches its performance on the walker task. On the harder image-based OpenAI Gym tasks, SLAC outperforms PlaNet by a large margin. In both domains, SLAC substantially outperforms all prior model-free methods. We note that the prior methods that we tested generally performed poorly on the image-based OpenAI Gym tasks, despite considerable hyperparameter tuning.
7.2 Evaluating the Latent Variable Model
We next study the trade-offs between different design choices for the latent variable model. We compare our fully stochastic model, as described in Section 6, to a standard non-sequential VAE model (Kingma and Welling, 2014), which has been used in multiple prior works for representation learning in RL (Higgins et al., 2017; Ha and Schmidhuber, 2018; Nair et al., 2018), the partially stochastic model used by PlaNet (Hafner et al., 2019), as well as three variants of our model: a simple filtering model that does not factorize the latent variable into two layers of stochastic units, a fully deterministic model that removes all stochasticity from the hidden state dynamics, and a partially stochastic model that has stochastic transitions for the first layer and deterministic transitions for the second layer, similar to the PlaNet model, but with our architecture. Both the fully deterministic and partially stochastic models use the same architecture as our fully stochastic model, including the same two-level factorization of the latent variable. In all cases, we use the RL framework of SLAC and only vary the choice of model for representation learning. As shown in the comparison in Figure 6, our fully stochastic model outperforms prior models as well as the deterministic and simple variants of our own model. The partially stochastic variant of our model matches the performance of our fully stochastic model: contrary to the conclusions in prior work (Hafner et al., 2019; Buesing et al., 2018), full stochasticity incurs no performance penalty, while retaining the appealing interpretation of a stochastic state space model. We hypothesize that these prior works benefit from the deterministic paths (realized as an LSTM or GRU) because they use multi-step samples from the prior. In contrast, our method uses samples from the posterior, which are conditioned on same-step observations, and thus these latent samples are less sensitive to the propagation of the latent states through time.
7.3 Qualitative Predictions from the Latent Variable Model
We show example image samples from our learned sequential latent variable model for the cheetah task in Figure 7, and we include the other tasks in Appendix E. Samples from the posterior show the images as constructed by the decoder $p(x_t \mid z_t)$, using a sequence of latents that are encoded and sampled from the posteriors $q(z_1 \mid x_1)$ and $q(z_{t+1} \mid x_{t+1}, z_t, a_t)$. Samples from the prior, on the other hand, use a sequence of latents where $z_1$ is sampled from $p(z_1)$ and all remaining latents are obtained by propagating the previous latent state through the latent dynamics $p(z_{t+1} \mid z_t, a_t)$. Note that these prior samples do not use any image frames as inputs, and thus they do not correspond to any ground truth sequence. We also show samples from the conditional prior, which is conditioned on the first image of the true sequence: the sampling procedure is the same as for the prior, except that $z_1$ is encoded and sampled from the posterior $q(z_1 \mid x_1)$ rather than from $p(z_1)$. We note that the generated image samples can be made sharper and more realistic by using a smaller variance for $p(x_t \mid z_t)$ when training the model, but at the expense of a representation that leads to lower returns. Finally, note that we do not actually use the samples from the prior for training.
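The three sampling procedures differ only in where $z_1$ comes from and whether later frames are used. The sketch below is a toy illustration with stand-in networks (the names, sizes, and dynamics are hypothetical, not our trained model):

```python
import numpy as np

rng = np.random.default_rng(1)
Z, A = 8, 2  # toy latent and action sizes

# Stand-ins for the trained networks (hypothetical toy versions).
def posterior_sample(x, z_prev, a_prev):   # q(z_{t+1} | x_{t+1}, z_t, a_t)
    return 0.5 * z_prev + 0.1 * x[:Z] + rng.normal(scale=0.1, size=Z)

def prior_sample(z_prev, a_prev):          # p(z_{t+1} | z_t, a_t)
    return 0.5 * z_prev + rng.normal(scale=0.1, size=Z)

def decode(z):                             # mean of the decoder p(x_t | z_t)
    return np.tile(z, 4)

def generate(images, actions, conditional=False):
    """Prior sampling: z_1 ~ p(z_1) (or q(z_1 | x_1) for the conditional
    prior), then propagate through the latent dynamics only -- no image
    frames beyond (optionally) the first are ever used."""
    if conditional:
        z = posterior_sample(images[0], np.zeros(Z), None)  # z_1 ~ q(z_1|x_1)
    else:
        z = rng.normal(size=Z)                              # z_1 ~ p(z_1)
    frames = [decode(z)]
    for a in actions:
        z = prior_sample(z, a)  # propagate previous latent through dynamics
        frames.append(decode(z))
    return np.stack(frames)
```

Posterior sampling would instead call `posterior_sample` at every step, conditioning each latent on the corresponding ground-truth frame.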
Figure 7: Cheetah run. Rows show the ground truth frames, posterior samples, conditional prior samples, and prior samples.
8 Discussion
We presented SLAC, an efficient RL algorithm for learning from high-dimensional image inputs that combines efficient off-policy model-free RL with representation learning via a sequential stochastic state space model. Through representation learning in conjunction with effective task learning in the learned latent space, our method achieves improved sample efficiency and final task performance as compared to both prior model-based and model-free RL methods.
While our current SLAC algorithm is fully model-free, in that predictions from the model are not utilized to speed up training, a natural extension of our approach would be to use the model predictions themselves to generate synthetic samples. Incorporating this additional synthetic model-based data into a mixed model-based/model-free method could further improve sample efficiency and performance. More broadly, the use of explicit representation learning with RL has the potential not only to accelerate training time and increase the complexity of achievable tasks, but also to enable reuse and transfer of the learned representation across tasks.
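As a rough sketch of what such an extension might look like (this is not part of SLAC as published, and `model.encode` / `model.step` are hypothetical interfaces), Dyna-style synthetic rollouts could be generated by branching short imagined trajectories from real states:

```python
import random

def synthetic_rollouts(model, policy, replay_buffer, n_starts=32, horizon=5):
    """Dyna-style data augmentation sketch: start from observations drawn
    from the real replay buffer, roll the learned latent model forward for a
    few steps under the current policy, and return the imagined transitions
    so they can be mixed into critic training."""
    synthetic = []
    for _ in range(n_starts):
        z = model.encode(random.choice(replay_buffer))  # real starting state
        for _ in range(horizon):
            a = policy(z)
            z_next, r = model.step(z, a)  # imagined transition and reward
            synthetic.append((z, a, r, z_next))
            z = z_next
    return synthetic
```

Keeping the imagined horizon short limits the compounding of model error, which is the usual failure mode of such mixed methods.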
Acknowledgments
We thank Marvin Zhang, Abhishek Gupta, and Chelsea Finn for useful discussions and feedback, Danijar Hafner for providing timely assistance with PlaNet, and Maximilian Igl for providing timely assistance with DVRL. We also thank Deirdre Quillen, Tianhe Yu, and Chelsea Finn for providing us with their suite of Sawyer manipulation tasks. This research was supported by the National Science Foundation through IIS-1651843 and IIS-1700697, as well as ARL DCIST CRA W911NF-17-2-0181 and the Office of Naval Research. Compute support was provided by NVIDIA.
References
A. Abdolmaleki et al. (2018). Relative entropy regularized policy iteration. arXiv preprint arXiv:1812.02256.
A. Abdolmaleki et al. (2018). Maximum a posteriori policy optimisation. In International Conference on Learning Representations (ICLR).
E. Archer et al. (2015). Black box variational inference for state space models. arXiv preprint arXiv:1511.07367.
K. J. Åström (1965). Optimal control of Markov processes with incomplete state information. Journal of Mathematical Analysis and Applications.
G. Barth-Maron et al. (2018). Distributed distributional deterministic policy gradients. In International Conference on Learning Representations (ICLR).
G. Brockman et al. (2016). OpenAI Gym. arXiv preprint arXiv:1606.01540.
L. Buesing et al. (2018). Learning and querying fast generative models for reinforcement learning. arXiv preprint arXiv:1802.03006.
K. Cho et al. (2014). Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Conference on Empirical Methods in Natural Language Processing (EMNLP).
K. Chua et al. (2018). Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In Neural Information Processing Systems (NeurIPS).
R. Dadashi et al. (2019). The value function polytope in reinforcement learning. In International Conference on Machine Learning (ICML).
M. P. Deisenroth and C. E. Rasmussen (2011). PILCO: a model-based and data-efficient approach to policy search. In International Conference on Machine Learning (ICML).
A. Doerr et al. (2018). Probabilistic recurrent state-space models. In International Conference on Machine Learning (ICML).
C. Finn and S. Levine (2017). Deep visual foresight for planning robot motion. In International Conference on Robotics and Automation (ICRA).
C. Finn et al. (2016). Deep spatial autoencoders for visuomotor learning. In International Conference on Robotics and Automation (ICRA).
J. Foerster et al. (2016). Learning to communicate with deep multi-agent reinforcement learning. In Neural Information Processing Systems (NIPS).
M. Fraccaro et al. (2017). A disentangled recognition and nonlinear dynamics model for unsupervised learning. In Neural Information Processing Systems (NIPS).
M. Fraccaro et al. (2016). Sequential neural models with stochastic layers. In Neural Information Processing Systems (NIPS).
S. Fujimoto, H. van Hoof, and D. Meger (2018). Addressing function approximation error in actor-critic methods. In International Conference on Machine Learning (ICML).
C. Gelada et al. (2019). DeepMDP: learning continuous latent space models for representation learning. In International Conference on Machine Learning (ICML).
K. Gregor et al. (2019). Shaping belief states with generative environment models for RL. In Neural Information Processing Systems (NeurIPS).
S. Gu et al. (2016). Continuous deep Q-learning with model-based acceleration. In International Conference on Machine Learning (ICML).
D. Ha and J. Schmidhuber (2018). World models. arXiv preprint arXiv:1803.10122.
T. Haarnoja et al. (2018a). Soft actor-critic: off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International Conference on Machine Learning (ICML).
T. Haarnoja et al. (2018b). Soft actor-critic algorithms and applications. arXiv preprint arXiv:1812.05905.
D. Hafner et al. (2019). Learning latent dynamics for planning from pixels. In International Conference on Machine Learning (ICML).
M. Hausknecht and P. Stone (2015). Deep recurrent Q-learning for partially observable MDPs. In AAAI Fall Symposium on Sequential Decision Making for Intelligent Agents.
I. Higgins et al. (2017). DARLA: improving zero-shot transfer in reinforcement learning. In International Conference on Machine Learning (ICML).
S. Hochreiter and J. Schmidhuber (1997). Long short-term memory. Neural Computation.
M. Igl et al. (2018). Deep variational reinforcement learning for POMDPs. In International Conference on Machine Learning (ICML).
M. Jaderberg et al. (2017). Reinforcement learning with unsupervised auxiliary tasks. In International Conference on Learning Representations (ICLR).
L. P. Kaelbling, M. L. Littman, and A. R. Cassandra (1998). Planning and acting in partially observable stochastic domains. Artificial Intelligence 101(1-2), pp. 99–134.
M. Karl et al. (2017). Deep variational Bayes filters: unsupervised learning of state space models from raw data. In International Conference on Learning Representations (ICLR).
D. P. Kingma and J. Ba (2015). Adam: a method for stochastic optimization. In International Conference on Learning Representations (ICLR).
D. P. Kingma and M. Welling (2014). Auto-encoding variational Bayes. In International Conference on Learning Representations (ICLR).
R. G. Krishnan, U. Shalit, and D. Sontag (2015). Deep Kalman filters. arXiv preprint arXiv:1511.05121.
S. Lange and M. Riedmiller (2010). Deep auto-encoder neural networks in reinforcement learning. In International Joint Conference on Neural Networks (IJCNN).
S. Levine (2018). Reinforcement learning and control as probabilistic inference: tutorial and review. arXiv preprint arXiv:1805.00909.
L. Maaløe et al. (2019). BIVA: a very deep hierarchy of latent variables for generative modeling. In Neural Information Processing Systems (NeurIPS).
V. Mnih et al. (2013). Playing Atari with deep reinforcement learning. In NIPS Deep Learning Workshop.
A. Nagabandi et al. (2018). Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning. In International Conference on Robotics and Automation (ICRA).
A. Nair et al. (2018). Visual reinforcement learning with imagined goals. In Neural Information Processing Systems (NeurIPS).
A. van den Oord, Y. Li, and O. Vinyals (2018). Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748.
A. Razavi, A. van den Oord, and O. Vinyals (2019). Generating diverse high-fidelity images with VQ-VAE-2. In Neural Information Processing Systems (NeurIPS).
E. Shelhamer et al. (2016). Loss is its own reward: self-supervision for reinforcement learning. arXiv preprint arXiv:1612.07307.
R. S. Sutton (1991). Dyna, an integrated architecture for learning, planning, and reacting. ACM SIGART Bulletin 2(4), pp. 160–163.
Y. Tassa et al. (2018). DeepMind Control Suite. arXiv preprint arXiv:1801.00690.
N. Wahlström, T. B. Schön, and M. P. Deisenroth (2015). From pixels to torques: policy learning with deep dynamical models. arXiv preprint arXiv:1502.02251.
M. Watter et al. (2015). Embed to control: a locally linear latent dynamics model for control from raw images. In Neural Information Processing Systems (NIPS).
M. Zhang et al. (2019). SOLAR: deep structured latent representations for model-based reinforcement learning. In International Conference on Machine Learning (ICML).
H. Zhu et al. (2019). Dexterous manipulation with deep reinforcement learning: efficient, general, and low-cost. In International Conference on Robotics and Automation (ICRA).
P. Zhu et al. (2018). On improving deep reinforcement learning for POMDPs. arXiv preprint arXiv:1804.06309.
B. D. Ziebart (2010). Modeling purposeful adaptive behavior with the principle of maximum causal entropy. PhD thesis, Carnegie Mellon University.
Appendix A Derivation of the Evidence Lower Bound and SLAC Objectives
In this appendix, we discuss how the SLAC objectives can be derived by applying a variational inference scheme to the control as inference framework for reinforcement learning (Levine, 2018). In this framework, the problem of finding the optimal policy is cast as an inference problem, conditioned on the evidence that the agent is behaving optimally. While Levine (2018) derives this in the fully observed case, we present a derivation for the POMDP setting.
We aim to maximize the marginal likelihood $p(x_{1:\tau+1}, \mathcal{O}_{\tau+1:T} \mid a_{1:\tau})$, where $\tau$ is the number of steps that the agent has already taken. This likelihood reflects that the agent cannot modify its past actions, which might not have been optimal, but it can choose the future actions up to the end of the episode such that those future actions are optimal. Notice that unlike the standard control as inference framework, in this work we maximize the likelihood not only of the optimality variables but also of the observations, which provides additional supervision for the latent representation. This does not come up in the MDP setting, since the state representation is fixed, and learning a dynamics model of the state would not change the model-free equations derived from the maximum entropy RL objective.
For reference, we restate the factorization of our variational distribution:
$$q(z_{1:T+1}, a_{\tau+1:T} \mid x_{1:\tau+1}, a_{1:\tau}) = \prod_{t=0}^{\tau} q(z_{t+1} \mid x_{t+1}, z_t, a_t) \prod_{t=\tau+1}^{T} p(z_{t+1} \mid z_t, a_t) \prod_{t=\tau+1}^{T} \pi(a_t \mid x_{1:t}, a_{1:t-1}) \quad (12)$$
As discussed by Levine (2018), the agent does not have control over the stochastic dynamics, so we use the dynamics $p(z_{t+1} \mid z_t, a_t)$ for $t > \tau$ in the variational distribution in order to prevent the agent from choosing optimistic actions.
The joint likelihood is
$$p(x_{1:\tau+1}, \mathcal{O}_{\tau+1:T}, z_{1:T+1}, a_{\tau+1:T} \mid a_{1:\tau}) = \prod_{t=0}^{T} p(z_{t+1} \mid z_t, a_t) \prod_{t=0}^{\tau} p(x_{t+1} \mid z_{t+1}) \prod_{t=\tau+1}^{T} p(\mathcal{O}_t \mid z_t, a_t)\, p(a_t) \quad (13)$$
We use the posterior from Equation (12), the likelihood from Equation (13), and Jensen’s inequality to obtain the ELBO of the marginal likelihood,
$$\log p(x_{1:\tau+1}, \mathcal{O}_{\tau+1:T} \mid a_{1:\tau}) = \log \iint p(x_{1:\tau+1}, \mathcal{O}_{\tau+1:T}, z_{1:T+1}, a_{\tau+1:T} \mid a_{1:\tau}) \, \mathrm{d}z_{1:T+1} \, \mathrm{d}a_{\tau+1:T} \quad (14)$$
$$\geq \mathbb{E}_{(z_{1:T+1}, a_{\tau+1:T}) \sim q} \big[ \log p(x_{1:\tau+1}, \mathcal{O}_{\tau+1:T}, z_{1:T+1}, a_{\tau+1:T} \mid a_{1:\tau}) - \log q(z_{1:T+1}, a_{\tau+1:T} \mid x_{1:\tau+1}, a_{1:\tau}) \big] \quad (15)$$
$$= \mathbb{E}_{(z_{1:T+1}, a_{\tau+1:T}) \sim q} \Big[ \sum_{t=0}^{\tau} \big( \log p(x_{t+1} \mid z_{t+1}) - D_{\mathrm{KL}}\big( q(z_{t+1} \mid x_{t+1}, z_t, a_t) \,\|\, p(z_{t+1} \mid z_t, a_t) \big) \big) + \sum_{t=\tau+1}^{T} \big( r(z_t, a_t) + \log p(a_t) - \log \pi(a_t \mid x_{1:t}, a_{1:t-1}) \big) \Big] \quad (16)$$
We are interested in the likelihood of optimal trajectories, so we use $\mathcal{O}_t = 1$ for $t = \tau+1, \dots, T$, and its distribution is given by $p(\mathcal{O}_t = 1 \mid z_t, a_t) = \exp(r(z_t, a_t))$ in the control as inference framework. Notice that the dynamics terms $p(z_{t+1} \mid z_t, a_t)$ for $t > \tau$ from the posterior and the prior cancel each other out in the ELBO.
The first part of the ELBO corresponds to the model objective. When using the parametric function approximators, its negative corresponds directly to the model loss defined in the main paper.
The second part of the ELBO corresponds to the maximum entropy RL objective. We assume a uniform action prior, so the $\log p(a_t)$ term is a constant that can be omitted when optimizing this objective. We use message passing to optimize this objective, with messages defined as
$$Q(z_t, a_t) = r(z_t, a_t) + \mathbb{E}_{z_{t+1} \sim p(\cdot \mid z_t, a_t)} \left[ V(z_{t+1}) \right] \quad (17)$$
$$V(z_t) = \log \int_{\mathcal{A}} \exp\left( Q(z_t, a_t) \right) \mathrm{d}a_t \quad (18)$$
Then, the maximum entropy RL objective can be expressed in terms of the messages as
$$\mathbb{E}\Big[ \sum_{t=\tau+1}^{T} r(z_t, a_t) - \log \pi(a_t \mid x_{1:t}, a_{1:t-1}) \Big] = \mathbb{E}_{z_{\tau+1}} \Big[ \mathbb{E}_{a_{\tau+1} \sim \pi} \big[ Q(z_{\tau+1}, a_{\tau+1}) - \log \pi(a_{\tau+1} \mid x_{1:\tau+1}, a_{1:\tau}) \big] \Big] \quad (19)$$
$$= \mathbb{E}_{z_{\tau+1}} \Big[ -D_{\mathrm{KL}}\Big( \pi(\cdot \mid x_{1:\tau+1}, a_{1:\tau}) \,\Big\|\, \frac{\exp\left(Q(z_{\tau+1}, \cdot)\right)}{\exp\left(V(z_{\tau+1})\right)} \Big) + V(z_{\tau+1}) \Big] \quad (20)$$
where the first equality is obtained from dynamic programming (see Levine (2018) for details), the second equality holds from the definition of the KL divergence, and $\exp(V(z_t))$ is the normalization factor for $\exp(Q(z_t, a_t))$ with respect to $a_t$. Since the KL divergence term is minimized when its two arguments represent the same distribution, the optimal policy is given by
$$\pi^*(a_t \mid z_t) = \exp\left( Q(z_t, a_t) - V(z_t) \right) \quad (21)$$
Noting that the KL divergence term is zero for the optimal policy, the equality from Equation (20) can be used in Equation (17) to obtain
$$Q(z_t, a_t) = r(z_t, a_t) + \mathbb{E}_{z_{t+1} \sim p(\cdot \mid z_t, a_t)} \Big[ \mathbb{E}_{a_{t+1} \sim \pi} \big[ Q(z_{t+1}, a_{t+1}) - \log \pi(a_{t+1} \mid x_{1:t+1}, a_{1:t}) \big] \Big] \quad (22)$$
This equation corresponds to the standard Bellman backup with a soft maximization for the value function.
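For a discrete action space, the relationship between the soft value function, the optimal policy, and the entropy-augmented backup can be checked numerically (with the integral in Equation (18) replaced by a sum over actions):

```python
import numpy as np

# Toy check of the soft value function and optimal max-entropy policy for a
# single latent state z with three discrete actions:
#   V(z) = log sum_a exp(Q(z, a))    and    pi*(a|z) = exp(Q(z, a) - V(z)).
Q = np.array([1.0, 2.0, 0.5])          # arbitrary Q(z, a) values
V = np.log(np.sum(np.exp(Q)))          # soft maximum over actions
pi = np.exp(Q - V)                     # optimal policy (normalized by e^V)

assert np.isclose(pi.sum(), 1.0)       # pi* is a proper distribution
# For pi*, the KL term vanishes, so E_a[Q - log pi*] recovers V exactly,
# which is what lets Equation (20) be substituted into Equation (17):
assert np.isclose(np.sum(pi * (Q - np.log(pi))), V)
```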
As mentioned in Section 5, our algorithm conditions the parametric policy on the history of observations and actions, which allows us to directly execute the policy without having to perform inference on the latent state at run time. When using the parametric function approximators, the negative of the maximum entropy RL objective, written as in Equation (20), corresponds to the policy loss of the main paper. Lastly, the Bellman backup of Equation (22) corresponds to the Bellman residual of the critic loss when approximated by a regression objective.
We showed that the SLAC objectives can be derived by applying variational inference in the control as inference framework in the POMDP setting. This leads to the joint likelihood of the past observations and future optimality variables, which we optimize by maximizing the ELBO of the log-likelihood. We decompose the ELBO into the model objective and the maximum entropy RL objective. We express the latter in terms of messages of Q-functions, which in turn are learned by minimizing the Bellman residual. These objectives lead to the model, policy, and critic losses.
Appendix B Network Architectures
Recall that our full sequential latent variable model has two layers of latent variables, which we denote as $z_t^1$ and $z_t^2$. We found this design to provide a good balance between ease of training and expressivity, producing good reconstructions and generations and, crucially, providing good representations for reinforcement learning. For reference, we reproduce the model diagram from the main paper in Figure 8. Note that this diagram represents the Bayes net corresponding to our full model. However, since all of the latent variables are stochastic, this visualization also presents the design of the computation graph. Inference over the latent variables is performed using amortized variational inference, with all training done via reparameterization. Hence, the computation graph can be deduced from the diagram by treating all solid arrows as part of the generative model and all dashed arrows as part of the approximate posterior. The generative model consists of the following probability distributions, as described in the main paper:
$$z_1^1 \sim p(z_1^1), \qquad z_1^2 \sim p_\psi(z_1^2 \mid z_1^1),$$
$$z_{t+1}^1 \sim p_\psi(z_{t+1}^1 \mid z_t^2, a_t), \qquad z_{t+1}^2 \sim p_\psi(z_{t+1}^2 \mid z_{t+1}^1, z_t^2, a_t),$$
$$x_t \sim p_\psi(x_t \mid z_t^1, z_t^2), \qquad r_t \sim p_\psi(r_t \mid z_t^1, z_t^2, a_t, z_{t+1}^1, z_{t+1}^2).$$
The initial distribution $p(z_1^1)$ is a multivariate standard normal distribution $\mathcal{N}(0, I)$. All of the other distributions are conditional and parameterized by neural networks with parameters $\psi$. The networks for $p_\psi(z_1^2 \mid z_1^1)$, $p_\psi(z_{t+1}^1 \mid z_t^2, a_t)$, $p_\psi(z_{t+1}^2 \mid z_{t+1}^1, z_t^2, a_t)$, and the reward model each consist of two fully connected layers, with 256 hidden units each, and a Gaussian output layer. The Gaussian layer is defined such that it outputs a multivariate normal distribution with diagonal covariance, where the mean is the output of a linear layer and the diagonal standard deviation is the output of a fully connected layer with a softplus nonlinearity. The observation model consists of 5 transposed convolutional layers (256, 128, 64, 32, and 3 filters, respectively, each with stride 2, except for the first layer). The output variance for each image pixel is fixed to a constant.
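A minimal sketch of such a Gaussian output layer, written in plain numpy for illustration (the real networks are trained with a deep learning framework; the class name and initialization are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(x):
    return np.logaddexp(x, 0.0)  # numerically stable log(1 + exp(x))

class GaussianLayer:
    """Diagonal Gaussian output head: the mean comes from a linear layer and
    the standard deviation from a fully connected layer with softplus, so
    that it is always strictly positive."""
    def __init__(self, in_dim, out_dim):
        self.W_mu = rng.normal(scale=0.1, size=(in_dim, out_dim))
        self.b_mu = np.zeros(out_dim)
        self.W_sig = rng.normal(scale=0.1, size=(in_dim, out_dim))
        self.b_sig = np.zeros(out_dim)

    def __call__(self, h):
        mean = h @ self.W_mu + self.b_mu
        std = softplus(h @ self.W_sig + self.b_sig)
        return mean, std

    def rsample(self, h):
        # Reparameterization trick: mean + std * eps is differentiable in
        # the layer parameters, which is what enables training by backprop.
        mean, std = self(h)
        return mean + std * rng.normal(size=mean.shape)
```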
The variational distribution, also referred to as the inference model or the posterior, is represented by the following factorization:
$$q(z_{1:\tau+1} \mid x_{1:\tau+1}, a_{1:\tau}) = q(z_1^1 \mid x_1)\, p_\psi(z_1^2 \mid z_1^1) \prod_{t=1}^{\tau} q(z_{t+1}^1 \mid x_{t+1}, z_t^2, a_t)\, p_\psi(z_{t+1}^2 \mid z_{t+1}^1, z_t^2, a_t)$$
Note that the variational distribution over $z_1^2$ and $z_{t+1}^2$ is intentionally chosen to exactly match the generative model, such that these terms do not appear in the KL divergence within the ELBO, and a separate variational distribution is learned only over $z_1^1$ and $z_{t+1}^1$. This intentional design decision simplifies the inference process. The networks representing the distributions $q(z_1^1 \mid x_1)$ and $q(z_{t+1}^1 \mid x_{t+1}, z_t^2, a_t)$ both consist of 5 convolutional layers (32, 64, 128, 256, and 256 filters, respectively, each with stride 2, except for the last layer), 2 fully connected layers (256 units each), and a Gaussian output layer. The parameters of the convolutional layers are shared between the two distributions.
The latent variables $z^1$ and $z^2$ have 32 and 256 dimensions, respectively, i.e. $z_t^1 \in \mathbb{R}^{32}$ and $z_t^2 \in \mathbb{R}^{256}$. All of the layers, except for the output layers, use leaky ReLU nonlinearities. Note that there are no deterministic recurrent connections in the network: all networks are feedforward, and the temporal dependencies all flow through the stochastic units $z^1$ and $z^2$.
For the reinforcement learning process, we use a critic network consisting of 2 fully connected layers (256 units each) and a linear output layer. The actor network consists of 5 convolutional layers, 2 fully connected layers (256 units each), a Gaussian layer, and a tanh bijector, which constrains the actions to lie within the bounded action space. The convolutional layers are the same as the ones from the latent variable model, but the parameters of these layers are not updated by the actor objective. The exact same network architecture is used for every experiment in the paper.
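The tanh bijector works by squashing a Gaussian sample and correcting its log-density with the change-of-variables formula, as in SAC. A minimal numpy sketch (illustrative, not our exact implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def tanh_gaussian_sample(mean, std):
    """Sample from a diagonal Gaussian, squash through tanh so the action
    lies in (-1, 1), and apply the change-of-variables correction to the
    log-density: log pi(a) = log p(u) - sum_i log(1 - tanh(u_i)^2)."""
    u = mean + std * rng.normal(size=mean.shape)  # pre-squash Gaussian sample
    log_prob_u = -0.5 * (((u - mean) / std) ** 2
                         + 2.0 * np.log(std) + np.log(2.0 * np.pi))
    a = np.tanh(u)                                # squashed, bounded action
    # Jacobian of tanh is diag(1 - a^2); the epsilon guards against log(0).
    log_prob_a = log_prob_u - np.log(1.0 - a ** 2 + 1e-6)
    return a, log_prob_a.sum()
```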
Appendix C Training and Evaluation Details
The control portion of our algorithm uses the same hyperparameters as SAC (Haarnoja et al., 2018a), except for a smaller replay buffer size of 100000 environment steps (instead of a million) due to the high memory usage of image observations. All of the parameters are trained with the Adam optimizer (Kingma and Ba, 2015), and we perform one gradient step per environment step. The Q-function and policy parameters are trained with a learning rate of 0.0003 and a batch size of 256. The model parameters are trained with a learning rate of 0.0001 and a batch size of 32. We use fixed-length training sequences for all of the tasks; note that the sequences can be shorter at the first steps of each episode.
We use action repeats for all of the methods, except for D4PG, for which we use the reported results from prior work (Tassa et al., 2018). The number of environment steps reported in our plots corresponds to the unmodified steps of the benchmarks. Note that the methods that use action repeats only use a fraction of the environment steps reported in our plots. For example, 3 million environment steps of the cheetah task correspond to 750000 samples when using an action repeat of 4. The action repeats used in our experiments are given in Table 1.
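An action-repeat wrapper is straightforward to implement; a minimal sketch (the three-tuple `step` signature is a simplification of the actual environment API):

```python
class ActionRepeat:
    """Minimal action-repeat wrapper: each agent step applies the same
    action for `repeat` (>= 1) underlying environment steps and accumulates
    the rewards, stopping early if the episode terminates."""
    def __init__(self, env, repeat):
        self.env, self.repeat = env, repeat

    def reset(self):
        return self.env.reset()

    def step(self, action):
        total_reward, done = 0.0, False
        for _ in range(self.repeat):
            obs, reward, done = self.env.step(action)
            total_reward += reward
            if done:
                break
        return obs, total_reward, done
```

With an action repeat of 4, the agent thus sees one transition for every four environment steps, which is why the plotted environment-step counts overstate the number of agent samples.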
Unlike in prior work (Haarnoja et al., 2018a, b), we use the same stochastic policy as both the behavioral and evaluation policy, since we found the deterministic greedy policy to perform comparably to or worse than the stochastic policy.
Table 1: Action repeats and the corresponding original and effective (per-agent-step) time steps, in seconds, for each task.

| Benchmark               | Task              | Action repeat | Original time step | Effective time step |
|-------------------------|-------------------|---------------|--------------------|---------------------|
| DeepMind Control Suite  | cheetah run       | 4             | 0.01               | 0.04                |
|                         | walker walk       | 2             | 0.025              | 0.05                |
|                         | ball-in-cup catch | 4             | 0.02               | 0.08                |
|                         | finger spin       | 2             | 0.02               | 0.04                |
| OpenAI Gym              | HalfCheetah-v2    | 1             | 0.05               | 0.05                |
|                         | Walker2d-v2       | 4             | 0.008              | 0.032               |
|                         | Hopper-v2         | 2             | 0.008              | 0.016               |
|                         | Ant-v2            | 4             | 0.05               | 0.2                 |
Appendix D Additional Experiments on Simulated Robotic Manipulation Tasks
Beyond standard benchmark tasks, we also aim to illustrate the flexibility of our method by demonstrating it on a variety of image-based robotic manipulation skills. The reward functions for these tasks are the following:
Sawyer Door Open
(23) 
Sawyer Drawer Close
(24) 
Sawyer Pickup
(25) 
In Figure 9, we show illustrations of SLAC’s execution of these manipulation tasks, using a simulated Sawyer robotic arm to push open a door, close a drawer, and reach out and pick up an object. Our method is able to learn these contact-rich manipulation tasks from raw images, succeeding even when the object of interest occupies only a small portion of the image.
In our next set of manipulation experiments, we use the 9-DoF 3-fingered DClaw robot to rotate a valve (Zhu et al., 2019) from various starting positions to various desired goal locations, where the goal is illustrated as a green dot in the image. In all of our experiments, the starting position of the valve is selected randomly within a fixed range, and we test three different settings for the goal location (see Figure 10). First, we prescribe the goal position to always be fixed. In this task setting, we see that SLAC, SAC from images, and SAC from state all perform similarly in terms of both sample efficiency and final performance. However, when we allow the goal location to be selected randomly from a set of 3 options, we see that SLAC and SAC from images actually outperform SAC from state. This interesting result can perhaps be explained by the fact that, when learning from state, the goal is specified with just a single number within the state vector, rather than with the redundancy of numerous green pixels in the image. Finally, when we allow the goal position of the valve to be selected randomly from a continuous range, we see that SLAC’s explicit representation learning improves substantially over image-based SAC, performing comparably to the oracle baseline that receives the true state observation.
Appendix E Additional Predictions from the Latent Variable Model
We show additional samples from our model in Figures 11, 12, and 13. Samples from the posterior show the images as constructed by the decoder $p(x_t \mid z_t)$, using a sequence of latents that are encoded and sampled from the posteriors $q(z_1 \mid x_1)$ and $q(z_{t+1} \mid x_{t+1}, z_t, a_t)$. Samples from the prior, on the other hand, use a sequence of latents where $z_1$ is sampled from $p(z_1)$ and all remaining latents are obtained by propagating the previous latent state through the latent dynamics $p(z_{t+1} \mid z_t, a_t)$. These samples do not use any image frames as inputs, and thus they do not correspond to any ground truth sequence. We also show samples from the conditional prior, which is conditioned on the first image of the true sequence: the sampling procedure is the same as for the prior, except that $z_1$ is encoded and sampled from the posterior $q(z_1 \mid x_1)$ rather than from $p(z_1)$.
Figures 11–13: samples for Walker walk, Ball-in-cup catch, Finger spin, HalfCheetah-v2, and Walker2d-v2. For each task, rows show the ground truth frames, posterior samples, conditional prior samples, and prior samples.