Deep Reinforcement Learning in a Handful of Trials
using Probabilistic Dynamics Models
Abstract
Model-based reinforcement learning (RL) algorithms can attain excellent sample efficiency, but often lag behind the best model-free algorithms in terms of asymptotic performance, especially those with high-capacity parametric function approximators, such as deep networks. In this paper, we study how to bridge this gap by employing uncertainty-aware dynamics models. We propose a new algorithm called probabilistic ensembles with trajectory sampling (PETS) that combines uncertainty-aware deep network dynamics models with sampling-based uncertainty propagation. Our comparison to state-of-the-art model-based and model-free deep RL algorithms shows that our approach matches the asymptotic performance of model-free algorithms on several challenging benchmark tasks, while requiring significantly fewer samples (e.g. 25 and 125 times fewer samples than Soft Actor-Critic and Proximal Policy Optimization respectively on the half-cheetah task).
1 Introduction
Reinforcement learning (RL) algorithms provide an automated framework for decision making and control: by specifying a high-level objective function, an RL algorithm can, in principle, automatically learn a control policy that satisfies this objective. This has the potential to automate a range of applications, such as autonomous vehicles and interactive conversational agents. However, current model-free reinforcement learning algorithms are quite expensive to train, which often limits their application to simulated domains (Mnih et al., 2015; Lillicrap et al., 2015; Schulman et al., 2017), with a few exceptions (Kober and Peters, 2009; Levine et al., 2016). A promising direction for reducing sample complexity is to explore model-based reinforcement learning (MBRL) methods, which proceed by first acquiring a predictive model of the world, and then using that model to make decisions (Atkeson and Santamaría, 1997; Kocijan et al., 2004; Deisenroth et al., 2014). MBRL is appealing because the dynamics model is reward-independent and can therefore generalize to new tasks in the same environment, and it can easily benefit from all of the advances in deep supervised learning to utilize high-capacity models. However, the asymptotic reward of MBRL methods on common benchmark tasks generally lags behind model-free methods. That is, although MBRL methods tend to learn more quickly, they also tend to converge to more suboptimal solutions.
In this paper, we take a step toward narrowing the gap between model-based and model-free RL methods. Our approach is based on several observations that, though relatively simple, are critical for good performance. We first observe that model capacity is a critical ingredient in the success of MBRL methods: while efficient models such as Gaussian processes can learn extremely quickly, they struggle to represent very complex and discontinuous dynamical systems (Calandra et al., 2016). Neural network (NN) models can scale to large datasets with high-dimensional inputs, and can represent such systems more effectively. However, they struggle with the opposite problem: in the low-data regime in which MBRL always starts, they tend to overfit and make poor predictions far into the future. For this reason, MBRL with NNs has proven exceptionally challenging.
Our second observation is that this issue can, to a large extent, be mitigated by properly incorporating uncertainty into the dynamics model. While a number of prior works have explored uncertainty-aware deep neural network models (Neal, 1995; Lakshminarayanan et al., 2017), including in the context of RL (Gal et al., 2016; Depeweg et al., 2016), our work is, to our knowledge, the first to bring these components together in an MBRL framework that approaches the asymptotic performance of state-of-the-art model-free RL methods, at a fraction of the sample complexity.
Our main contribution is an MBRL algorithm called probabilistic ensembles with trajectory sampling (PETS), summarized in Figure 1, with high-capacity NN models that incorporate uncertainty via an ensemble of bootstrapped models, where each model encodes distributions (as opposed to point predictions), rivaling the performance of model-free methods on standard benchmark tasks at a fraction of the sample complexity. An additional advantage of PETS over prior probabilistic MBRL algorithms is the ability to isolate two distinct classes of uncertainty: aleatoric (inherent system stochasticity) and epistemic (subjective uncertainty, due to limited data). Isolating epistemic uncertainty is especially useful for directing exploration (Thrun, 1992), although we leave this for future work. Finally, we present a systematic analysis of how incorporating uncertainty into MBRL with NNs affects performance, in both model training and planning. We show that PETS's particular treatment of uncertainty significantly reduces the amount of data required to learn a task, e.g. 25 times less data on half-cheetah compared to the model-free Soft Actor-Critic algorithm (Haarnoja et al., 2018).
2 Related work
Model choice in MBRL is delicate: we desire effective learning in both low-data regimes (at the beginning) and high-data regimes (in the later stages of the learning process). For this reason, Bayesian nonparametric models, such as GPs, are often the model of choice in MBRL (Kocijan et al., 2004; Ko et al., 2007; Nguyen-Tuong et al., 2008; Grancharova et al., 2008; Deisenroth et al., 2014; Kamthe and Deisenroth, 2017). However, such models typically induce additional assumptions on the system, such as the smoothness assumption inherent in GPs with squared-exponential kernels (Rasmussen et al., 2003). Parametric function approximators have also been used extensively in MBRL (Hernandaz and Arkun, 1990; Miller et al., 1990; Lin, 1992; Draeger et al., 1995), but were largely supplanted by Bayesian models in recent years. Methods based on local models, such as guided policy search algorithms (Levine et al., 2016; Finn et al., 2016; Chebotar et al., 2017), can efficiently train NN policies, but rely on time-varying linear models, which only locally model the system dynamics. Recent improvements in parametric function approximators, such as NNs, suggest that such methods are worth revisiting (Baranes and Oudeyer, 2013; Fu et al., 2015; Punjani and Abbeel, 2015; Lenz et al., 2015; Agrawal et al., 2016; Gal et al., 2016; Depeweg et al., 2016; Williams et al., 2017; Nagabandi et al., 2017). Unlike Gaussian processes, NNs have constant-time inference and tractable training in the large-data regime, and have the potential to represent more complex functions, including the non-smooth dynamics often present in robotics (Fu et al., 2015; Mordatch et al., 2016; Nagabandi et al., 2017). However, most works that use NNs focus on deterministic models, and consequently suffer from overfitting in the early stages of learning. For this reason, our approach is able to achieve even higher data-efficiency than prior deterministic MBRL methods such as Nagabandi et al. (2017).
Constructing good Bayesian NN models remains an open problem (MacKay, 1992; Neal, 1995; Osband, 2016; Guo et al., 2017), although promising recent work exists on incorporating dropout (Gal et al., 2017), ensembles (Osband et al., 2016; Lakshminarayanan et al., 2017), and α-divergence (Hernández-Lobato et al., 2016). Such probabilistic NNs have previously been used for control, including with dropout (Gal et al., 2016) and α-divergence (Depeweg et al., 2016). In contrast to these prior methods, our experiments focus on more complex tasks with challenging dynamics, including contact discontinuities, and we compare directly to prior model-based and model-free methods on standard benchmark problems, where our method exhibits asymptotic performance comparable to model-free approaches.
3 Modelbased reinforcement learning
We now detail the MBRL framework and the notation used. Adhering to the Markov decision process formulation (Bellman, 1957), we denote the state s and the actions a of the system, the reward function r(s_t, a_t), and we consider dynamical systems governed by a transition function f such that, given the current state s_t and current input a_t, the next state is given by s_{t+1} = f(s_t, a_t). For probabilistic dynamics, we represent the conditional distribution of the next state given the current state and action as some parameterized distribution family f̃(s_{t+1} | s_t, a_t), overloading notation. Learning forward dynamics is thus the task of fitting an approximation f̃ of the true transition function f, given measurements from the real system.
Once a dynamics model is learned, we use it to predict the distribution over trajectories resulting from applying a given policy (e.g., a sequence of actions). We can hence compute the distribution of the reward for a given policy, and use it to optimize the policy. In Section 4 we discuss multiple methods to model the dynamics, while in Section 5 we detail how to compute the distribution over trajectories and how to parameterize the controller.
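The alternation described above, between fitting a dynamics model and acting under it, can be sketched as follows. This is an illustrative sketch, not the paper's implementation: `train_model`, `plan_action`, and the `env` interface are placeholder callbacks we assume for the example.

```python
# Generic MBRL loop: fit a dynamics model to all data gathered so far,
# then act under that model for one trial, recording new transitions.
# `train_model`, `plan_action`, and `env` are placeholder assumptions.

def mbrl_loop(env, train_model, plan_action, n_trials, horizon):
    """Alternate between model fitting and model-based action selection."""
    dataset = []                         # recorded transitions (s, a, s')
    for _ in range(n_trials):
        model = train_model(dataset)     # fit the dynamics model f~
        s = env.reset()
        for _ in range(horizon):
            a = plan_action(model, s)    # e.g. MPC over the learned model
            s_next, done = env.step(a)
            dataset.append((s, a, s_next))
            s = s_next
            if done:
                break
    return dataset
```

Any concrete model class (Section 4) and planner (Section 5) can be plugged into the two callbacks.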
4 Uncertaintyaware neural network dynamics models
This section describes a number of ways to model uncertain dynamics, including our method: an ensemble of bootstrapped probabilistic neural networks. Whilst uncertainty-aware dynamics models have been explored in a number of prior works (Gal et al., 2016; Depeweg et al., 2016), the particular implementation details and design decisions regarding the incorporation of uncertainty have not been rigorously analyzed empirically. As a result, prior work has generally found that expressive parametric models, such as deep neural networks, do not produce model-based RL algorithms that are competitive with their model-free counterparts in terms of asymptotic performance (Nagabandi et al., 2017), and has often even found that simpler time-varying linear models can outperform expressive neural network models (Levine et al., 2016; Gu et al., 2016).
Any MBRL algorithm must select a class of model to predict the dynamics. This choice is often crucial, as even a small model bias can significantly influence the quality of the corresponding controller (Atkeson and Santamaría, 1997; Abbeel et al., 2006). A major challenge is building a model that performs well in both low- and high-data regimes: in the early stages of training, data is scarce, and highly expressive function approximators are liable to overfit. In the later stages of training, data is plentiful, but for systems with complex dynamics, simple function approximators might underfit. While Bayesian models such as GPs perform well in low-data regimes, they do not scale favorably with dimensionality and often use kernels ill-suited to discontinuous dynamics (Calandra et al., 2016), which are typical of robots interacting through contacts.
Table 1: Uncertainty captured by each dynamics model class.

Model                             Aleatoric uncertainty   Epistemic uncertainty
Baseline models
  Deterministic NN (D)            No                      No
  Probabilistic NN (P)            Yes                     No
  Deterministic ensemble NN (DE)  No                      Yes
  Gaussian process baseline (GP)  Homoscedastic           Yes
Our model
  Probabilistic ensemble NN (PE)  Yes                     Yes
In this paper, we study how expressive NNs can be incorporated into MBRL. To account for uncertainty, we study NNs that model two types of uncertainty. The first type, aleatoric uncertainty, arises from inherent stochasticities of a system, e.g. observation noise and process noise. Aleatoric uncertainty can be captured by outputting the parameters of a parameterized distribution, while still training the network discriminatively. The second type – epistemic uncertainty – corresponds to subjective uncertainty about the dynamics function, due to a lack of sufficient data to uniquely determine the underlying system exactly. In the limit of infinite data, epistemic uncertainty should vanish, but for datasets of finite size, subjective uncertainty remains when predicting transitions. It is precisely this subjective epistemic uncertainty that Bayesian modeling excels at capturing, which helps mitigate overfitting. Below, we describe how we use combinations of ‘probabilistic networks’ to capture aleatoric uncertainty and ‘ensembles’ to capture epistemic uncertainty. Each combination is summarized in Table 1.
Probabilistic neural networks (P)
We define a probabilistic NN as a network whose output neurons encode a parametric distribution, capturing aleatoric uncertainty. We use the negative log prediction probability as our loss function, loss_P(θ) = −Σ_n log f̃_θ(s_{n+1} | s_n, a_n). For example, we might define our predictive model to output a Gaussian distribution with diagonal covariance, parameterized by θ and conditioned on s_t and a_t, i.e. f̃ = Pr(s_{t+1} | s_t, a_t) = N(μ_θ(s_t, a_t), Σ_θ(s_t, a_t)). Then the loss becomes

loss_Gauss(θ) = Σ_{n=1}^{N} [μ_θ(s_n, a_n) − s_{n+1}]^T Σ_θ^{−1}(s_n, a_n) [μ_θ(s_n, a_n) − s_{n+1}] + log det Σ_θ(s_n, a_n).    (1)
Such a network output, which parameterizes a Gaussian distribution, models heteroscedastic aleatoric uncertainty (heteroscedasticity meaning that the random output variability is a function of the input). However, it does not model epistemic uncertainty, which cannot be captured with purely discriminative training. A Gaussian distribution is a common choice for continuous-valued states, and reasonable if we assume that any stochasticity in the system is unimodal; in general, however, any tractable distribution class can be used. To provide an expressive dynamics model, we can represent the parameters of this distribution (e.g., the mean and covariance of a Gaussian) as nonlinear, parametric functions of the current state and action, which can be arbitrarily complex but deterministic. This makes it feasible to incorporate NNs into a probabilistic dynamics model even for high-dimensional and continuous states and actions. Finally, an underappreciated detail of probabilistic networks is that their variance can take arbitrary values for out-of-distribution inputs, which can disrupt planning; we discuss how to mitigate this issue in Appendix A.1.
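As a concrete sketch, the Gaussian negative log-likelihood of Eq. (1) for a diagonal covariance reduces to an inverse-variance-weighted squared error plus a log-variance penalty. The function below is an illustrative NumPy version (dropping constants); parameterizing the log-variance is an assumption we make here for numerical convenience, not a detail from the paper.

```python
import numpy as np

def gaussian_nll(mu, log_var, target):
    """Negative log-likelihood of `target` under a diagonal Gaussian
    N(mu, diag(exp(log_var))), summed over dimensions, constants dropped.
    Term-by-term this mirrors Eq. (1): a squared error weighted by the
    inverse variance, plus a log-determinant (here: sum of log-variances)."""
    inv_var = np.exp(-log_var)
    return np.sum((mu - target) ** 2 * inv_var + log_var)
```

Note that the loss rewards the network both for accurate means and for calibrated variances: inflating the variance reduces the weighted error but pays the log-variance penalty.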
Deterministic neural networks (D)
For comparison, we define a deterministic NN as a special-case probabilistic network that outputs delta distributions centered around point predictions f̂_θ(s_t, a_t): Pr(s_{t+1} | s_t, a_t) = δ(s_{t+1} − f̂_θ(s_t, a_t)), trained using the MSE loss loss_D(θ) = Σ_n ||s_{n+1} − f̂_θ(s_n, a_n)||². Although MSE can be interpreted as the negative log-likelihood of a Gaussian model with fixed unit variance, in practice this variance cannot be used for uncertainty-aware propagation, since it does not correspond to any notion of uncertainty (e.g. a deterministic model with infinite data would be adding variance to particles for no good reason).
Ensembles (DE and PE)
A principled means to capture epistemic uncertainty is Bayesian inference. Whilst accurate Bayesian NN inference is possible given sufficient compute (Neal, 1995), approximate inference methods (Blundell et al., 2015; Gal et al., 2017; Hernández-Lobato and Adams, 2015) have enjoyed recent popularity given their simpler implementation and faster training times. Ensembles of bootstrapped models are simpler still: given a base model, no additional (hyper)parameters need be tuned, whilst still providing reasonable uncertainty estimates (Efron and Tibshirani, 1994; Osband, 2016). We consider ensembles of B bootstrap models, using θ_b to refer to the parameters of our b-th model f̃_{θ_b}. Ensembles can be composed of deterministic models (DE) or probabilistic models (PE) – as done by Lakshminarayanan et al. (2017) – both of which define predictive probability distributions: f̃_θ = (1/B) Σ_{b=1}^{B} f̃_{θ_b}. A visual example is provided in Appendix A.2. Each of our bootstrap models has its own unique dataset D_b, generated by sampling (with replacement) N times from the dynamics dataset D recorded so far, where N is the size of D. We found B = 5 sufficient for all our experiments. To validate the number of layers and neurons of our models, we can visualize one-step predictions (e.g. Appendix A.3).
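The bootstrap construction described above is simple to implement: each ensemble member trains on indices drawn with replacement from the shared dataset. A minimal NumPy sketch (the function name and interface are our own, for illustration):

```python
import numpy as np

def bootstrap_indices(n_data, n_models, rng):
    """For each of `n_models` ensemble members, draw `n_data` dataset
    indices uniformly with replacement, so each member sees its own
    resampled copy D_b of the shared dataset D."""
    return rng.integers(0, n_data, size=(n_models, n_data))
```

Each row of the returned array indexes one member's training set; on average a member sees roughly 63% of the distinct transitions, which is what makes the members disagree where data is scarce.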
5 Planning and control with learned dynamics
This section describes different ways uncertainty can be incorporated into planning with probabilistic dynamics models. Once a model is learned, we can use it for control by predicting the future outcomes of candidate policies or actions and then selecting the candidate that is predicted to result in the highest reward. MBRL planning in discrete time over long horizons is generally performed by using the dynamics model to recursively predict how an estimated Markov state will evolve from one time step to the next, e.g. s_{t+1} ∼ f̃(s_{t+1} | s_t, a_t). When planning, the choice of action can depend on the state, forming a policy a_t = π(s_t). Otherwise, planning with actions independent of the state is typically framed as model predictive control (MPC) (Camacho and Alba, 2013). MPC can be considered a special-case policy, trained as a function of the dynamics model, and thereafter only dependent on time, not state. We use MPC in our own experiments for several reasons: implementation simplicity, lower computational burden (no gradients), and no requirement to specify the task horizon in advance, whilst achieving the same data-efficiency as Gal et al. (2016), who used a Bayesian NN with a policy to learn the cartpole task in 2000 time steps. Our full algorithm is summarized in Section 6.
Given the state of the system s_t at time t, the prediction horizon T of the MPC controller, and an action sequence a_{t:t+T} = {a_t, ..., a_{t+T}}, the probabilistic dynamics model f̃ induces a distribution over the resulting trajectories s_{t:t+T}. At each time step t, the MPC controller applies the first action a_t of the optimized action sequence. A common technique to compute the optimal action sequence is the random-sampling shooting method, due to its parallelizability and ease of implementation. Nagabandi et al. (2017) use deterministic NN models and MPC with random shooting to achieve data-efficient control in higher-dimensional tasks than are feasible for GPs to model. Our work improves upon Nagabandi et al. (2017)'s data efficiency in two ways: first, we capture uncertainty in modeling and planning, to prevent overfitting in the low-data regime; second, we use CEM (Botev et al., 2013) instead of random shooting, which samples actions from a distribution closer to previous action samples that yielded high reward.
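The cross-entropy method mentioned above can be sketched as follows: sample a population of candidates from a Gaussian, keep the top-scoring elites, and refit the Gaussian to them. Population size, elite count, iteration budget, and the initial distribution below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def cem(objective, dim, n_iters=10, pop=100, n_elites=10, rng=None):
    """Cross-entropy method for maximization: iteratively refit a diagonal
    Gaussian sampling distribution to the elite fraction of candidates."""
    rng = rng or np.random.default_rng()
    mu, sigma = np.zeros(dim), np.ones(dim)      # initial search distribution
    for _ in range(n_iters):
        samples = mu + sigma * rng.standard_normal((pop, dim))
        scores = np.array([objective(x) for x in samples])
        elites = samples[np.argsort(scores)[-n_elites:]]   # highest reward
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mu
```

For MPC, `objective` would score a flattened action sequence by its predicted trajectory reward under the model; in practice the variance is often smoothed across iterations to avoid premature collapse of the search distribution.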
Evaluating the exact expected trajectory reward using recursive state prediction is generally intractable. Multiple approaches to approximate uncertainty propagation can be found in the literature (Girard et al., 2002; Quiñonero-Candela et al., 2003). These approaches can be categorized by how they represent the state distribution: deterministic, particle, and parametric methods. Deterministic methods use the mean prediction and ignore the uncertainty, particle methods propagate a set of Monte Carlo samples, and parametric methods fit distributions such as Gaussians or Gaussian mixture models. Although parametric distributions have been successfully used in MBRL (Deisenroth et al., 2014), experimental results (Kupcsik et al., 2013) suggest that particle approaches can be competitive both computationally and in terms of accuracy, without making strong assumptions about the distribution used. Hence, we use particle-based propagation, specifically suited to our PE dynamics model, which distinguishes two types of uncertainty, as detailed in Section 5.1. Unfortunately, little prior work has empirically compared the design decisions involved in choosing the particular propagation method. Thus, we compare against several baselines in Section 5.2. Visual examples are provided in Appendix A.4.
5.1 Our state propagation method: trajectory sampling (TS)
Our method to predict plausible state trajectories begins by creating P particles from the current state, s_t^p = s_t. We found P = 20 particles sufficient in all our experiments. With each particle p we associate a bootstrap index b(p, t) ∈ {1, ..., B}, sampled uniformly, where B is the number of bootstrap models in the ensemble. A particle's bootstrap index can potentially change as a function of time t. We consider two TS variants:

TS1 refers to particles uniformly resampling a bootstrap index per time step. If we were to consider an ensemble as a Bayesian model, the particles would effectively be continually resampling from the approximate marginal posterior over plausible dynamics. We consider TS1's bootstrap resampling to place a soft restriction on trajectory multimodality: particle separation cannot be attributed to the compounding effects of differing bootstraps when using TS1.

TS∞ refers to particle bootstrap indices never changing during a trial. An ensemble is a collection of plausible models that together represent the subjective uncertainty, in function space, about the true dynamics function f, which we assume is time-invariant. TS∞ captures this time invariance because each particle's bootstrap index is consistent over time. An advantage of using TS∞ is that aleatoric and epistemic uncertainties are separable (e.g. aleatoric state variance is the average variance of particles of the same bootstrap, whilst epistemic state variance is the variance of the averages of particles of the same bootstrap index). Epistemic uncertainty is the ‘learnable’ type of uncertainty, useful for directed exploration (Thrun, 1992). Without a way to distinguish epistemic uncertainty from aleatoric, an exploration algorithm (e.g. Bayesian optimization) might mistakenly choose actions with high predicted reward-variance ‘hoping to learn something’, when in fact such variance is caused by persistent and irreducible system stochasticity offering zero exploration value.
In both variants, we then propagate the particles by sampling s_{t+1}^p ∼ f̃_{θ_{b(p,t)}}(s_{t+1}^p | s_t^p, a_t). Note that TS can capture multimodal distributions and can be used with any probabilistic model.
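The particle propagation just described, together with the TS∞ uncertainty decomposition, can be sketched as below. This is an illustrative NumPy sketch under our own assumptions: `predict(b, s, a)` stands in for sampling a next state from bootstrap model b, and the decomposition helper uses a scalar state for simplicity.

```python
import numpy as np

def propagate_ts(predict, s0, actions, n_particles, n_models,
                 variant="TSinf", rng=None):
    """Roll particles through an ensemble. `predict(b, s, a)` samples s'
    from bootstrap model b (placeholder). TS1 resamples each particle's
    bootstrap index every step; TSinf fixes it for the whole trajectory."""
    rng = rng or np.random.default_rng()
    particles = np.repeat(s0[None, :], n_particles, axis=0)
    idx = rng.integers(0, n_models, size=n_particles)   # b(p) at t = 0
    for a in actions:
        if variant == "TS1":
            idx = rng.integers(0, n_models, size=n_particles)
        particles = np.stack([predict(b, s, a)
                              for b, s in zip(idx, particles)])
    return particles

def decompose_variance(preds):
    """preds: array (n_models, n_particles_per_model) of sampled scalar
    next states under TSinf. Aleatoric = average within-bootstrap variance;
    epistemic = variance of the per-bootstrap means, as described above."""
    aleatoric = preds.var(axis=1).mean()
    epistemic = preds.mean(axis=1).var()
    return aleatoric, epistemic
```

With a deterministic `predict`, the within-bootstrap variance vanishes and all remaining particle spread is epistemic, i.e. attributable to ensemble disagreement.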
5.2 Baseline state propagation methods for comparison
To validate our state propagation method, in the experiments of Section 7.2 we compare against the following alternative state propagation methods.
Expectation (E)
To judge the importance of TS's use of multiple particles to represent a distribution, we compare against the aforementioned deterministic propagation technique. The simplest way to plan is to iteratively propagate the expected prediction at each time step, ignoring uncertainty: s_{t+1} = E[f̃(s_{t+1} | s_t, a_t)]. An advantage of this approach over TS is reduced computation and simple implementation: only a single particle is propagated. The main disadvantage of choosing E over TS is that small model biases can compound quickly over time, with no way to tell the quality of the state estimate.
Moment matching (MM)
Whilst TS's particles can represent multimodal distributions, forcing a unimodal distribution via moment matching (MM) can (in some cases) benefit MBRL data efficiency (Gal et al., 2016). Although it is unclear why, Gal et al. (2016) (who use Gaussian MM) hypothesize this effect may be caused by smoothing of the loss surface and an implicit penalty on multimodal distributions (which often only occur with uncontrolled systems). To test this hypothesis, we use Gaussian MM as a baseline, assuming independence between bootstraps and particles for simplicity: each particle is resampled as s_{t+1}^p ∼ N(E_p[s_{t+1}^p], Var_p[s_{t+1}^p]), with the moments computed across all particles. Future work might consider other distributions too, such as the Laplace distribution.
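Gaussian moment matching over a particle set amounts to fitting the empirical mean and variance and resampling. A minimal sketch under our assumptions (diagonal covariance, one MM step applied per time step to the propagated particles):

```python
import numpy as np

def moment_match(particles, rng=None):
    """Recast a particle set (n_particles, state_dim) as a diagonal
    Gaussian: fit the empirical mean and per-dimension std, then resample
    the same number of particles from that Gaussian."""
    rng = rng or np.random.default_rng()
    mu = particles.mean(axis=0)
    std = particles.std(axis=0)
    return mu + std * rng.standard_normal(particles.shape)
```

Applying this after every propagation step discards any multimodal structure in the particle set, which is exactly the restriction the MM baseline is meant to test.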
Distribution sampling (DS)
The previous MM approach makes a strong unimodal assumption about state distributions: the state distribution at each time step is recast as a Gaussian. A softer restriction on multimodality – between MM and TS – is to moment match with respect to the bootstraps only (noting the particles are otherwise independent). This means that we effectively smooth the loss function with respect to epistemic uncertainty only (the uncertainty relevant to learning), whilst the aleatoric uncertainty remains free to be multimodal. We call this method distribution sampling (DS): each particle's next state is resampled from a Gaussian whose moments are matched across the bootstrapped predictions.
6 Algorithm summary
Here we summarize our MBRL method PETS in Algorithm 1. We use the PE model to capture both heteroscedastic aleatoric uncertainty and epistemic uncertainty, which the TS planning method is best able to exploit. To guide the random shooting method of our MPC algorithm, we found that the CEM method learned faster than random shooting (as discussed in Appendix A.7).
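The core of the planner is scoring a candidate MPC action sequence by its expected reward under TS rollouts. A sketch under our assumptions: `sample_next(b, s, a)` stands in for sampling from bootstrap model b, `reward(s, a)` for the task reward, and bootstrap indices are held fixed per rollout as in TS∞.

```python
import numpy as np

def score_action_sequence(s0, actions, sample_next, reward,
                          n_particles, n_models, rng=None):
    """Score a candidate action sequence: roll particles through the
    ensemble with fixed bootstrap indices (TSinf) and average the
    accumulated reward over particles. Placeholders: `sample_next`,
    `reward`."""
    rng = rng or np.random.default_rng()
    idx = rng.integers(0, n_models, size=n_particles)   # fixed per rollout
    particles = [s0] * n_particles
    total = np.zeros(n_particles)
    for a in actions:
        total += np.array([reward(s, a) for s in particles])
        particles = [sample_next(b, s, a) for b, s in zip(idx, particles)]
    return total.mean()
```

Plugging this scoring function into the CEM optimizer, and executing only the first action of the winning sequence before re-planning, yields the MPC loop of Algorithm 1.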
7 Experimental results
We now evaluate the performance of our proposed MBRL algorithm PETS, which uses deep probabilistic dynamics models. First, we compare our approach on standard benchmark tasks against state-of-the-art model-free and model-based approaches in Section 7.1. Then, in Section 7.2, we provide a detailed evaluation of the individual design decisions in the model and uncertainty propagation method and analyze their effect on performance. Additional considerations of horizon length, action sampling distribution, and stochastic systems are discussed in Appendix A.6. The experimental setup is shown in Figure 2, and NN architecture details are discussed in the supplementary materials, in Appendix A.5. Videos of the experiments can be found at https://sites.google.com/view/drl-in-a-handful-of-trials.
7.1 Comparisons to prior reinforcement learning algorithms
We compare our Algorithm 1 against the following reinforcement learning algorithms for continuous state-action control:

Soft Actor-Critic (SAC): (Haarnoja et al., 2018) is a model-free deep actor-critic algorithm, which reports better data-efficiency than DDPG on MuJoCo benchmarks (we obtained the authors' data).

Model-based model-free hybrid (MBMF): (Nagabandi et al., 2017) is a recent deterministic deep model-based RL algorithm, which we reimplement.

Gaussian process dynamics model (GP): we compare against three MBRL algorithms based on GPs. GP-E learns a GP model but propagates only the expectation. GP-DS uses the propagation method DS. GP-MM is the algorithm proposed by Kamthe and Deisenroth (2017), except that we do not update the dynamics model after each transition, but only at the end of each trial.
The results of the comparison are presented in Figure 3. Our method reaches performance similar to the asymptotic performance of the state-of-the-art model-free baseline PPO. However, PPO requires several orders of magnitude more samples to reach this point. We reach PPO's asymptotic performance in fewer than 100 trials on all four tasks, faster than any prior model-free algorithm, and our asymptotic performance substantially exceeds that of the prior MBRL algorithm of Nagabandi et al. (2017), which corresponds to the deterministic variant of our approach (DE). This result highlights the value of uncertainty estimation. Whilst the probabilistic baseline GP-MM slightly outperformed our method on cartpole, GP-MM scales cubically in time and quadratically in state dimensionality, so it was infeasible to run on the remaining, higher-dimensional tasks. It is worth noting that model-based deep RL algorithms have typically been considered efficient but incapable of achieving asymptotic performance similar to their model-free counterparts. Our results demonstrate that a purely model-based deep RL algorithm that only learns a dynamics model, omitting even a parameterized policy, can achieve comparable performance when properly incorporating uncertainty estimation during modeling and planning. In the next section, we study which specific design decisions and components of our approach are important for achieving this level of performance.
7.2 Analyzing dynamics modeling and uncertainty propagation
In this section, we compare the different choices for the dynamics model from Section 4 and the uncertainty propagation techniques from Section 5. The results in Figure 4 first show that, with respect to model choice, the model should consider both uncertainty types: the probabilistic ensembles (PE-XX, where 'X' stands for any character) perform best in all tasks except cartpole. Close seconds are the single-uncertainty-type models: the probabilistic network (P-XX) and the ensemble of deterministic networks (DE-XX). Worst is the deterministic network (D).
These observations shed some light on the role of uncertainty in MBRL, particularly as it relates to discriminatively trained, expressive parametric models such as NNs. Our results suggest that the quality of the model and the use of uncertainty at learning time significantly affect the performance of the MBRL algorithms tested, while the use of more advanced uncertainty propagation techniques seems to offer only minor improvements. We reconfirm that moment matching (MM) is competitive in low-dimensional tasks (consistent with Gal et al. (2016)); however, it is not a reliable MBRL choice in higher dimensions, e.g. the half-cheetah.
The analysis provided in this section summarizes the experiments we conducted to design our algorithm. It is worth noting that the individual components of our method – ensembles, probabilistic networks, and various approximate uncertainty propagation techniques – have existed in various forms in supervised learning and RL. However, as our experiments here and in the previous section show, the particular choice of these components in our algorithm achieves substantially improved results over previous state-of-the-art model-based and model-free methods, experimentally confirming both the importance of uncertainty estimation in MBRL and the potential for MBRL to achieve asymptotic performance comparable to the best model-free methods at a fraction of the sample complexity.
8 Discussion & conclusion
Our experiments suggest several conclusions that are relevant for further investigation in model-based reinforcement learning. First, our results show that model-based reinforcement learning with neural network dynamics models can achieve results that are competitive not only with Bayesian nonparametric models such as GPs, but also on par with model-free algorithms such as PPO and SAC in terms of asymptotic performance, while attaining substantially more efficient convergence. Although the individual components of our model-based reinforcement learning algorithm are not individually new – prior works have suggested both ensembling and outputting Gaussian distribution parameters (Lakshminarayanan et al., 2017), as well as the use of MPC for model-based RL (Nagabandi et al., 2017) – the particular combination of these components into a model-based reinforcement learning algorithm is, to our knowledge, novel, and the results provide a new state-of-the-art for model-based reinforcement learning algorithms based on high-capacity parametric models such as neural networks. The systematic investigation in our experiments was a critical ingredient in determining the precise combination of these components that attains the best performance.
Our results indicate that the gap in asymptotic performance between model-based and model-free reinforcement learning can, at least in part, be bridged by incorporating uncertainty estimation into the model learning process. Our experiments further indicate that both epistemic and aleatoric uncertainty play a crucial role in this process. Our analysis considers model-based algorithms based on dynamics estimation and planning. A compelling alternative class of methods uses the model to train a parameterized policy (Ko et al., 2007; Deisenroth et al., 2014; McAllister and Rasmussen, 2017). While the choice of using the model for planning versus policy learning is largely orthogonal to the other design choices, a promising direction for future work is to investigate how policy learning can be incorporated into our framework to amortize the cost of planning at test time. Our initial experiments with policy learning, by directly propagating gradients through our uncertainty-aware models, did not yield an effective algorithm, though future work could consider alternative methods for policy learning. Finally, the observation that model-based RL can match the performance of model-free algorithms suggests that substantial further investigation of such methods is in order, as a potential avenue for effective, sample-efficient, and practical general-purpose reinforcement learning.
References
P. Abbeel, M. Quigley, and A. Y. Ng. Using inaccurate models in reinforcement learning. In International Conference on Machine Learning (ICML), pages 1–8, 2006. ISBN 1595933832. doi: 10.1145/1143844.1143845.
P. Agrawal, A. Nair, P. Abbeel, J. Malik, and S. Levine. Learning to poke by poking: Experiential learning of intuitive physics. arXiv preprint arXiv:1606.07419, 2016.
C. G. Atkeson and J. C. Santamaría. A comparison of direct and model-based reinforcement learning. In Proceedings of the International Conference on Robotics and Automation (ICRA), 1997.
A. Baranes and P.-Y. Oudeyer. Active learning of inverse models with intrinsically motivated goal exploration in robots. Robotics and Autonomous Systems, 61(1):49–73, 2013. ISSN 0921-8890. doi: 10.1016/j.robot.2012.05.008.
R. Bellman. A Markovian decision process. Journal of Mathematics and Mechanics, pages 679–684, 1957.
C. Blundell, J. Cornebise, K. Kavukcuoglu, and D. Wierstra. Weight uncertainty in neural networks. arXiv preprint arXiv:1505.05424, 2015.
Z. I. Botev, D. P. Kroese, R. Y. Rubinstein, and P. L'Ecuyer. The cross-entropy method for optimization. In Handbook of Statistics, volume 31, pages 35–59. Elsevier, 2013.
R. Calandra, J. Peters, C. E. Rasmussen, and M. P. Deisenroth. Manifold Gaussian processes for regression. In International Joint Conference on Neural Networks (IJCNN), pages 3338–3345, 2016. doi: 10.1109/IJCNN.2016.7727626.
E. F. Camacho and C. B. Alba. Model Predictive Control. Springer Science & Business Media, 2013.
Y. Chebotar, K. Hausman, M. Zhang, G. Sukhatme, S. Schaal, and S. Levine. Combining model-based and model-free updates for trajectory-centric reinforcement learning. In International Conference on Machine Learning (ICML), 2017.
M. Deisenroth, D. Fox, and C. Rasmussen. Gaussian processes for data-efficient learning in robotics and control. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 37(2):408–423, 2014. ISSN 0162-8828. doi: 10.1109/TPAMI.2013.218.
S. Depeweg, J. M. Hernández-Lobato, F. Doshi-Velez, and S. Udluft. Learning and policy search in stochastic dynamical systems with Bayesian neural networks. arXiv e-prints, May 2016.
P. Dhariwal, C. Hesse, O. Klimov, A. Nichol, M. Plappert, A. Radford, J. Schulman, S. Sidor, and Y. Wu. OpenAI Baselines. https://github.com/openai/baselines, 2017.
A. Draeger, S. Engell, and H. Ranke. Model predictive control using neural networks. IEEE Control Systems, 15(5):61–66, Oct 1995. ISSN 1066-033X. doi: 10.1109/37.466261.
B. Efron and R. Tibshirani. An Introduction to the Bootstrap. CRC Press, 1994.
C. Finn, X. Tan, Y. Duan, T. Darrell, S. Levine, and P. Abbeel. Deep spatial autoencoders for visuomotor learning. In International Conference on Robotics and Automation (ICRA), 2016.
J. Fu, S. Levine, and P. Abbeel. One-shot learning of manipulation skills with online dynamics adaptation and neural network priors. arXiv preprint arXiv:1509.06841, 2015.
Y. Gal, R. McAllister, and C. Rasmussen. Improving PILCO with Bayesian neural network dynamics models. 2016.
Y. Gal, J. Hron, and A. Kendall. Concrete dropout. In Advances in Neural Information Processing Systems, pages 3584–3593, 2017.
A. Girard, C. E. Rasmussen, J. Quiñonero-Candela, R. Murray-Smith, O. Winther, and J. Larsen. Multiple-step ahead prediction for nonlinear dynamic systems – a Gaussian process treatment with propagation of the uncertainty. Neural Information Processing Systems (NIPS), 15:529–536, 2002.
A. Grancharova, J. Kocijan, and T. A. Johansen. Explicit stochastic predictive control of combustion plants based on Gaussian process models. Automatica, 44(6):1621–1631, 2008.
S. Gu, T. Lillicrap, I. Sutskever, and S. Levine. Continuous deep Q-learning with model-based acceleration. In International Conference on Machine Learning (ICML), pages 2829–2838, 2016.
C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger. On calibration of modern neural networks. arXiv preprint arXiv:1706.04599, 2017.
T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. arXiv preprint arXiv:1801.01290, 2018.
E. Hernandaz and Y. Arkun. Neural network modeling and an extended DMC algorithm to control nonlinear systems. In 1990 American Control Conference, pages 2454–2459, May 1990.
J. M. Hernández-Lobato and R. Adams. Probabilistic backpropagation for scalable learning of Bayesian neural networks. In International Conference on Machine Learning, pages 1861–1869, 2015.
J. M. Hernández-Lobato, Y. Li, M. Rowland, D. Hernández-Lobato, T. Bui, and R. E. Turner. Black-box alpha divergence minimization. 2016.
S. Kamthe and M. P. Deisenroth. Data-efficient reinforcement learning with probabilistic model predictive control. CoRR, abs/1706.06491, 2017. URL http://arxiv.org/abs/1706.06491.
J. Ko, D. J. Klein, D. Fox, and D. Haehnel. Gaussian processes and reinforcement learning for identification and control of an autonomous blimp. In IEEE International Conference on Robotics and Automation (ICRA), pages 742–747. IEEE, 2007.
J. Kober and J. Peters. Policy search for motor primitives in robotics. In Advances in Neural Information Processing Systems (NIPS), pages 849–856, 2009.
J. Kocijan, R. Murray-Smith, C. E. Rasmussen, and A. Girard. Gaussian process model based predictive control. In American Control Conference, volume 3, pages 2214–2219. IEEE, 2004.
A. G. Kupcsik, M. P. Deisenroth, J. Peters, G. Neumann, et al. Data-efficient generalization of robot skills with contextual policy search. In Proceedings of the 27th AAAI Conference on Artificial Intelligence, pages 1401–1407, 2013.
B. Lakshminarayanan, A. Pritzel, and C. Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In Neural Information Processing Systems (NIPS), pages 6405–6416, 2017.
I. Lenz, R. Knepper, and A. Saxena. DeepMPC: Learning deep latent features for model predictive control. In Robotics: Science and Systems (RSS), 2015.
S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research, 17(1):1334–1373, Jan. 2016. ISSN 1532-4435.
T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
L.-J. Lin. Reinforcement Learning for Robots Using Neural Networks. PhD thesis, Carnegie Mellon University, 1992.
D. J. MacKay. A practical Bayesian framework for backpropagation networks. Neural Computation, 4(3):448–472, 1992.
R. McAllister and C. E. Rasmussen. Data-efficient reinforcement learning in continuous state-action Gaussian-POMDPs. In Neural Information Processing Systems (NIPS), pages 2037–2046, 2017.
W. T. Miller, R. P. Hewes, F. H. Glanz, and L. G. Kraft. Real-time dynamic control of an industrial manipulator using a neural-network-based learning controller. IEEE Transactions on Robotics and Automation, 6(1):1–9, Feb 1990. ISSN 1042-296X. doi: 10.1109/70.88112.
V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
I. Mordatch, N. Mishra, C. Eppner, and P. Abbeel. Combining model-based policy search with online model learning for control of physical humanoids. In IEEE International Conference on Robotics and Automation (ICRA), pages 242–248, May 2016. doi: 10.1109/ICRA.2016.7487140.
A. Nagabandi, G. Kahn, R. S. Fearing, and S. Levine. Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning. arXiv e-prints, Aug. 2017.
R. Neal. Bayesian Learning for Neural Networks. PhD thesis, University of Toronto, 1995.
D. Nguyen-Tuong, J. Peters, and M. Seeger. Local Gaussian process regression for real time online model learning. In Neural Information Processing Systems (NIPS), pages 1193–1200, 2008.
I. Osband. Risk versus uncertainty in deep learning: Bayes, bootstrap and the dangers of dropout. NIPS Workshop on Bayesian Deep Learning, 2016.
I. Osband, C. Blundell, A. Pritzel, and B. Van Roy. Deep exploration via bootstrapped DQN. In Neural Information Processing Systems (NIPS), pages 4026–4034, 2016.
A. Punjani and P. Abbeel. Deep learning helicopter dynamics models. In IEEE International Conference on Robotics and Automation (ICRA), pages 3223–3230, May 2015. doi: 10.1109/ICRA.2015.7139643.
J. Quiñonero-Candela, A. Girard, J. Larsen, and C. E. Rasmussen. Propagation of uncertainty in Bayesian kernel models – application to multiple-step ahead forecasting. In IEEE International Conference on Acoustics, Speech and Signal Processing, volume 2, pages 701–704, April 2003. doi: 10.1109/ICASSP.2003.1202463.
P. Ramachandran, B. Zoph, and Q. V. Le. Searching for activation functions. CoRR, abs/1710.05941, 2017. URL http://arxiv.org/abs/1710.05941.
C. E. Rasmussen, M. Kuss, et al. Gaussian processes in reinforcement learning. In Neural Information Processing Systems (NIPS), volume 4, page 1, 2003.
J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
S. Thrun. Efficient exploration in reinforcement learning. 1992.
E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 5026–5033, 2012.
G. Williams, N. Wagener, B. Goldfain, P. Drews, J. M. Rehg, B. Boots, and E. A. Theodorou. Information theoretic MPC for model-based reinforcement learning. In International Conference on Robotics and Automation (ICRA), 2017.
Appendix A
A.1 Well-behaved probabilistic networks
An underappreciated detail of probabilistic networks is how the variance output is implemented with automatic differentiation. Often the real-valued output is treated as a log variance (or similar) and transformed through an exponential function (or similar) to produce the non-negative value necessary for it to be interpreted as a variance. However, while this variance output is well behaved at points within the training distribution, its value is undefined outside of it. Thus, when the model is evaluated at previously unseen states, as is often the case during the MBRL learning process, the output variance can either collapse to zero or explode toward infinity. As a remedy, we found that bounding the output so that it could not fall below the lowest or rise above the highest variance observed in the training data helped significantly.
To bound the variance output of a probabilistic network between the upper and lower bounds found during training on the training data, we used the following code with automatic differentiation:
logvar = max_logvar - tf.nn.softplus(max_logvar - logvar)
logvar = min_logvar + tf.nn.softplus(logvar - min_logvar)
var = tf.exp(logvar)
with a small regularization penalty on max_logvar so that it does not grow beyond the training distribution’s maximum output variance, and on the negative of min_logvar so that it does not drop below the training distribution’s minimum output variance.
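The softplus-based clamping above can be checked in isolation; the following is a minimal NumPy sketch (the function names are ours, not from the paper's code), showing that the transformation softly saturates toward the two bounds while leaving values far inside the bounds nearly unchanged:

```python
import numpy as np

def softplus(x):
    # Numerically stable softplus: log(1 + exp(x)).
    return np.logaddexp(0.0, x)

def bound_logvar(logvar, min_logvar, max_logvar):
    # Smoothly clamp a raw log-variance into (approximately)
    # [min_logvar, max_logvar]; mirrors the TensorFlow snippet above.
    logvar = max_logvar - softplus(max_logvar - logvar)
    logvar = min_logvar + softplus(logvar - min_logvar)
    return logvar
```

Note the clamp is soft: outputs can overshoot the upper bound by a small amount (on the order of softplus's tail), which is harmless since the bounds themselves are only learned approximations of the training distribution's variance range.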
A.2 Fitting the PE model to a toy function
As an initial test, we evaluated all previously described models by fitting them to a dataset of points from a sine function, with inputs sampled uniformly. Before fitting, we introduced heteroscedastic noise by performing the transformation
(2) 
The model fit to (2) was shown in Figure 1, and is reproduced here for convenience as Figure A.5.
A.3 One-step predictions of learned models
To visualize and verify the accuracy of our PE model, we took all training data from the experiments and visualized the model’s one-step predictions. Since the states are high-dimensional, we plot the output dimensions individually, sorted by the ground-truth value in each dimension, as seen in Figure A.6.




A.4 Uncertainty propagation methods




A.5 Experimental setting
For our experiments, we used four continuous-control benchmark tasks simulated via MuJoCo [Todorov et al., 2012] that vary in complexity, dimensionality, and the presence of contact forces (pictured in Figure 2). The simplest is the classical cartpole swing-up benchmark. To evaluate our model with higher-dimensional dynamics and frictional contacts, we use a simulated PR2 robot in a reaching task and a pushing task, as well as the half-cheetah. Each experiment is repeated with different random seeds, and the mean and standard deviation of the cost are reported for each condition. Each neural network dynamics model consists of three fully connected layers with 500 neurons per layer (250 for the half-cheetah) and swish activation functions [Ramachandran et al., 2017]. The weights of the networks were initially sampled from a truncated Gaussian with variance equal to the reciprocal of the number of fan-in neurons.
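The weight initialization described above can be sketched as follows; this is a NumPy illustration under our own assumptions (the function names and the truncation at two standard deviations are not specified by the paper):

```python
import numpy as np

def truncated_normal(shape, std, rng, bound=2.0):
    # Rejection-sample N(0, std^2) truncated to [-bound*std, bound*std]
    # (assumed truncation point; conventional for this initializer).
    out = rng.standard_normal(shape) * std
    mask = np.abs(out) > bound * std
    while mask.any():
        out[mask] = rng.standard_normal(int(mask.sum())) * std
        mask = np.abs(out) > bound * std
    return out

def init_dense_layer(fan_in, fan_out, rng):
    # Weight variance equal to the reciprocal of fan-in, as in the text.
    std = np.sqrt(1.0 / fan_in)
    weights = truncated_normal((fan_in, fan_out), std, rng)
    bias = np.zeros(fan_out)
    return weights, bias
```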
A.6 Additional considerations

MPC horizon length: Choosing the MPC horizon is nontrivial: if it is too short, MPC suffers from bias; if it is too long, from variance. Probabilistic propagation methods are robust to horizons set too long. This effect is due to particle separation over time (e.g., Figure A.7), which reduces the dependence of the expected cost on actions further in the future. The action selection procedure with our method then effectively ignores the unpredictable distant future. Deterministic methods have no such mechanism to avoid model bias [Deisenroth et al., 2014].
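The particle-separation effect can be illustrated with a toy simulation (assumed one-dimensional stochastic dynamics of our own choosing, not the paper's learned models): the predictive spread of a particle set grows with the horizon, so far-future outcomes contribute mostly noise, rather than actionable signal, to the expected cost.

```python
import numpy as np

def propagate_particles(x0, steps, noise_std, rng, n_particles=20):
    # Propagate a set of particles through a toy stochastic transition
    # x_{t+1} = x_t + 0.1 + eps, eps ~ N(0, noise_std^2), and record the
    # particle spread (sample std) at each step of the horizon.
    particles = np.full(n_particles, float(x0))
    spreads = []
    for _ in range(steps):
        particles = particles + 0.1 + noise_std * rng.standard_normal(n_particles)
        spreads.append(float(particles.std()))
    return spreads
```

Since the spread grows roughly as sqrt(t), expected costs computed over these particles become increasingly insensitive to any single action late in the horizon, which is the robustness mechanism described above.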

MPC action sampling: We hypothesized that the higher the state or action dimensionality, the more important it is that MPC action selection be guided (as opposed to the uniform random shooting method used by Nagabandi et al. [2017]). We therefore tested the cross-entropy method (CEM) and random shooting on various tasks, confirming this hypothesis (details in Appendix A.7).
A.7 MPC action selection
We study the impact of the particular choice of action optimization technique. An important criterion when selecting the optimizer is not only the optimality of the selected actions, but also the speed with which the actions can be obtained, which is especially critical for real-world control tasks that must proceed in real time (such as robotics, where control frequencies below 20 Hz are undesirable, meaning that a decision needs to be made in under 50 ms). Simple random search techniques have been proposed in prior work due to their simplicity and ease of parallelism [Nagabandi et al., 2017]. However, uniform random search suffers in high-dimensional spaces. In addition to random search, we compare to the cross-entropy method (CEM) [Botev et al., 2013], which iteratively samples solutions from a candidate distribution that is adjusted based on the best sampled solutions. Hence, we compare some of the models presented in Sections 4–5 by running the model-based RL algorithm for 100 trials using different policy optimizers and reporting the performance of the final policy. The results (Figure A.10) show that using CEM significantly outperforms random search on the half-cheetah task, while keeping computation fast enough for real-time action selection in our implementation. We use CEM in all of the remaining experiments. It might seem obvious that better optimization techniques will result in better performance, but this observation is worth considering carefully in the context of complex nonconvex models such as neural networks: while the accuracy of the model likely has substantial impact on algorithm performance, so does the algorithm’s ability to actually discover good actions under that model.
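The CEM loop referenced above can be sketched in a few lines; this is a generic implementation of the method (our own parameter names and population sizes, not the paper's tuned values), shown on an abstract cost function standing in for the model-predicted trajectory cost:

```python
import numpy as np

def cem_plan(cost_fn, dim, iters=5, pop=400, n_elites=40, rng=None):
    # Cross-entropy method: repeatedly sample candidate action vectors
    # from a Gaussian, then refit the Gaussian to the lowest-cost
    # ("elite") candidates, concentrating the search on good regions.
    rng = rng if rng is not None else np.random.default_rng(0)
    mean, std = np.zeros(dim), np.ones(dim)
    for _ in range(iters):
        samples = mean + std * rng.standard_normal((pop, dim))
        costs = np.array([cost_fn(s) for s in samples])
        elites = samples[np.argsort(costs)[:n_elites]]
        mean = elites.mean(axis=0)
        std = elites.std(axis=0) + 1e-6  # floor to avoid collapse
    return mean
```

In MBRL, `cost_fn` would evaluate a candidate action sequence by rolling it out through the learned dynamics model; here any function of the candidate vector serves to illustrate the optimizer.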
A.8 Stochastic systems
In Figure A.10(f) we compare the effect of stochastic action noise under different MBRL modeling decisions. Note that methods that propagate uncertainty, such as our PE method, are generally required for consistent performance.






A.9 Ablation study



