MBMF: Model-Based Priors for Model-Free Reinforcement Learning

Somil Bansal Roberto Calandra Kurtland Chua Sergey Levine Claire Tomlin
Department of Electrical Engineering and Computer Sciences
University of California, Berkeley, United States
{somil, roberto.calandra, kchua, svlevine, tomlin}@berkeley.edu
Abstract

Reinforcement learning is divided into two main paradigms: model-free and model-based. Each of these paradigms has strengths and limitations, and each has been successfully applied to real-world domains suited to its strengths. In this paper, we present a new approach aimed at bridging the gap between these two paradigms that is at the same time data-efficient and cost-savvy. We do so by learning a probabilistic dynamics model and leveraging it as a prior for the intertwined model-free optimization. As a result, our approach can exploit the generality and structure of the dynamics model, but is also capable of ignoring its inevitable inaccuracies by directly incorporating the evidence provided by observations of the cost. Preliminary results demonstrate that our approach outperforms purely model-based and model-free approaches, as well as the approach of simply switching from a model-based to a model-free setting.

1 Introduction

Reinforcement learning (RL) methods can generally be divided into Model-Free (MF) approaches, in which the cost is directly optimized, and Model-Based (MB) approaches, which additionally employ and/or learn a model of the environment. These two classes of approaches have different strengths and limitations [1]. Typically, MF approaches are very effective at learning complex policies, but convergence might require millions of trials and lead to (globally sub-optimal) local minima. On the other hand, MB approaches have the theoretical benefit of being able to better generalize to new tasks and environments, and in practice they can drastically reduce the number of trials required [2, 3]. However, this generalization requires an accurate model (either engineered or learned), which can itself be challenging to acquire. This issue is crucial, since bias in the model does not translate to a proportional bias in the policy: a weakly biased model might result in a strongly biased policy. As a consequence, MB approaches have often been limited to low-dimensional spaces, and often require a significant degree of engineering to perform well. It is thus desirable to design approaches that leverage the respective advantages and overcome the challenges of each approach.

A second motivation for understanding and bridging the gap between MB and MF approaches is provided by neuroscience. Evidence suggests that humans employ both MF and MB approaches for learning new skills, and switch between the two during the learning process [4]. From the machine learning perspective, one possible explanation for this behavior can be found in the concept of bounded rationality [5]. The switch from an MB approach to an MF approach after learning sufficiently good policies can be motivated by the need to devote limited computational resources to new challenging tasks, while still being able to solve the previous tasks (with sufficient generalization capability). However, in the reinforcement learning community, there are few coherent frameworks that combine these two approaches.

In this paper, we propose a probabilistic framework that integrates MB and MF approaches. This bridging is achieved by using the cost estimated by the MB component as the prior for the intertwined probabilistic MF component. In particular, we learn a dynamics model from scratch, use it to compute the trajectory distribution corresponding to a given policy, and in turn use this distribution to estimate the cost of the policy. This estimate is used by a Bayesian Optimization-based MF policy search to guide the policy exploration. In essence, this probabilistic framework allows us to model and combine the uncertainty in the cost estimates of the two methods. The advantage of doing so is that we can exploit the structure and generality of the dynamics model throughout the entire state-action space, while at the same time the evidence provided by the observation of the actual cost can be integrated into the estimation of the posterior.

We demonstrate our method on a 2D navigation task and a more complex simulated manipulation task that requires pushing an object to a goal position. Our results show that the proposed approach can overcome the model bias and inaccuracies in the dynamics model to learn a well-performing policy, and yet retain and improve upon the fast convergence rate of MB approaches.

2 Related Work

Gaussian processes have been widely used in the literature to learn dynamics models [2, 6, 7, 8, 9] and for control purposes [1, 10, 11]. By learning a forward dynamics model, it is possible to predict the sequence of states (e.g., a trajectory) generated by a given policy. The main hypothesis here is that learning the dynamics provides a structured proxy that allows the behavior of a given policy to be predicted without evaluating it on the real system. This structure is particularly valuable, as the alternative would be to directly predict the cost from the policy parameters (e.g., in Bayesian optimization (BO)), which can be challenging, especially for high-dimensional policies with thousands or millions of parameters. However, MB approaches do not usually incorporate evidence from the cost. Hence, if the model is inaccurate (e.g., due to intrinsic limitations of the model class or the compounding of inaccuracies over trajectory propagation), the expected cost might be wrong, even if the cost has been measured for the considered policy. This issue is often referred to as model bias [12].

To overcome the model bias, [13] proposed to optimize not only the expected cost, but to also incorporate the predicted variance. This modification results in an exploration-exploitation trade-off that closely connects to the acquisition functions used in BO. However, unlike our work, [13] does not try to make use of MF approaches, and this approach can therefore be considered a specific case of our framework where, once again, the evidence derived from directly observing the cost is disregarded.

Recently, it has been proposed to overcome the model bias by training the dynamics directly in a goal-oriented manner [14, 15] using the cost observations. However, this approach has the drawback that the generality of the dynamics model is lost. Moreover, directly optimizing high-dimensional dynamics in a goal-oriented fashion can be very challenging. In contrast, we learn a general dynamics model and yet take the cost observations into account, which allows us to overcome the limitations of both pure MB and MF methods.

Several prior works have sought to combine MB and MF reinforcement learning, typically with the aim of accelerating MF learning while minimizing the effects of model bias on the final policy. [16] proposed a method that generates synthetic experience for MF learning using a learned model, and explored the types of models that might be suitable for continuous control tasks. This work follows a long line of research into using learned models for generating synthetic experience, including the classic Dyna framework [17]. The authors of [18] use models for improving the accuracy of MF value function backups. In other works, models have been used to produce a good initialization for the MF component [19, 20]. In contrast, our method directly combines MB and MF approaches into a single RL method without using synthetic samples whose quality degrades with modeling errors.

[21] also proposed to combine MB and MF methods, with the MF algorithm learning the residuals from the MB return estimates. Our approach also uses the MB return estimates as a bias for MF learning, but in contrast to [21], the MB component is incorporated as a prior mean in a Bayesian model-free update, which allows our approach to reason about the confidence of the MF estimates across the entire policy space. Our approach is perhaps most similar to [22], wherein a linear model, learned on hand-picked features, is used to provide a prior mean for the MF updates. In contrast, we employ general dynamics models that are learned from scratch.

3 Problem Formulation

The goal of reinforcement learning is to learn a policy that maximizes the sum of future rewards (or, equivalently, minimizes the sum of future costs). In particular, we consider a discrete-time, potentially stochastic and non-linear dynamical system

$x_{t+1} = f(x_t, u_t)$ ,    (1)

where $x_t$ and $u_t$ denote the state and the action of the system respectively at time-step $t$, and $f$ is the state transition map. Our objective is to find the parameters $\theta$ of the policy $\pi_{\theta}$ that minimize a given cost function subject to the system dynamics. In the finite-horizon case, we aim to minimize the cost function

$J(\theta) = \sum_{t=0}^{T} c_t(x_t, u_t)$ ,    (2)

where $T$ is the time horizon and $c_t$ is the cost function at time-step $t$. One of the key challenges in designing the policy is that the system dynamics of Equation (1) are typically unknown. In this work, we propose a novel approach that combines MF and MB methods to learn the optimal policy parameters $\theta^*$.

4 Background

Our general approach will be to learn the dynamics model of the system, and use Bayesian Optimization (BO) to find the optimal policy parameters. In particular, we use a Gaussian Process (GP) to model the underlying objective function in BO. In this section, we provide a brief overview of GPs and BO. In the next section, we combine the learned dynamics model with BO to overcome some of the challenges that pure MB and pure MF methods face.

4.1 Gaussian Process (GP)

GPs are a state-of-the-art probabilistic regression method [23]. In general, a GP can be used to model a nonlinear map, $g: \mathcal{Z} \to \mathbb{R}$, from an input vector $z \in \mathcal{Z}$ to the function value $g(z)$. Hence, we assume that the function values $g(z)$, associated with different values of $z$, are random variables and that any finite number of these random variables have a joint Gaussian distribution [23]. For GPs, we define a prior mean function, $m(z)$, and a covariance function (or kernel), $k(z, z')$, which defines the covariance between any two function values, $g(z)$ and $g(z')$. The choice of kernel is problem-dependent and encodes general assumptions such as smoothness of the unknown function. In this work, we employ the squared exponential kernel, whose hyperparameters are optimized by maximizing the marginal likelihood [23].

The GP framework can be used to predict the distribution of the function value $g(z_*)$ at an arbitrary input $z_*$ based on the past observations, $\mathcal{D} = \{(z_i, g(z_i))\}_{i=1}^{n}$. Conditioned on $\mathcal{D}$, the prediction of the GP for the input $z_*$ is a Gaussian distribution with posterior mean and variance given by

$\mu(z_*) = m(z_*) + k_*^{\top} K^{-1} (g - m)$ , \qquad $\sigma^2(z_*) = k(z_*, z_*) - k_*^{\top} K^{-1} k_*$ ,    (3)

where $K$ is the kernel matrix with entries $K_{ij} = k(z_i, z_j)$, $m$ is the prior mean function evaluated at the training inputs, $g$ is the vector of observed function values, and $k_* = [k(z_1, z_*), \ldots, k(z_n, z_*)]^{\top}$. Thus, the GP provides both the expected value of the function at any arbitrary point $z_*$ as well as a notion of the uncertainty of this estimate. In this paper, we use GPs within BO (discussed in Section 4.2) to map policy parameters to predicted cost. In some of our simulations, we also use GPs to learn the unknown dynamics model in Equation (1), where $z$ represents the state-action pair $(x_t, u_t)$ and $g(z)$ represents the next state $x_{t+1}$. Central to our choice of employing GPs is their capability of explicitly modeling the uncertainty in the underlying function. This uncertainty allows us to account for the model bias in the dynamics model, and to deal with the exploration/exploitation trade-off in a principled manner in BO.
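As a concrete illustration of Equation (3), the following minimal numpy sketch computes the GP posterior mean and variance under a squared exponential kernel and a user-supplied prior mean function. The fixed hyperparameters and the small noise term (added for numerical stability) are illustrative assumptions; in practice the hyperparameters are optimized by maximizing the marginal likelihood.

```python
import numpy as np

def sq_exp_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared exponential kernel k(a, b) = variance * exp(-||a - b||^2 / (2 lengthscale^2))."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(Z, y, Z_star, prior_mean, noise_var=1e-6, **kern_args):
    """Posterior mean and variance of a GP at test inputs Z_star, conditioned on
    observations (Z, y) and a prior mean function m(.) (Equation (3))."""
    K = sq_exp_kernel(Z, Z, **kern_args) + noise_var * np.eye(len(Z))
    k_star = sq_exp_kernel(Z, Z_star, **kern_args)          # k(z_i, z_*), shape (n, n_star)
    alpha = np.linalg.solve(K, y - prior_mean(Z))           # K^{-1} (g - m)
    mu = prior_mean(Z_star) + k_star.T @ alpha
    v = np.linalg.solve(K, k_star)
    var = np.diag(sq_exp_kernel(Z_star, Z_star, **kern_args)) - np.sum(k_star * v, axis=0)
    return mu, var
```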

4.2 Bayesian Optimization (BO)

BO is a gradient-free optimization procedure that aims to find the global minimum of an unknown function [24, 25, 26]. At each iteration, BO uses the past observations to model the objective function with a GP. BO uses this model to determine the next informative sample location by optimizing the so-called acquisition function. Different acquisition functions are used in the literature to trade off exploration and exploitation during the optimization process [26]. In this work, we use the expected improvement (EI) acquisition function [27]. Intuitively, EI selects the next parameter point where the expected improvement in performance is maximal. In this paper, we use BO as our MF method, i.e., we use BO to find the optimal policy parameters that minimize the cost function in Equation (2) directly based on the observed cost on the system, as we detail in the next section.
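For reference, the expected improvement acquisition has a closed form in terms of the GP posterior mean and standard deviation. The sketch below evaluates it for a minimization problem; the optional exploration offset xi is a commonly used variant rather than something prescribed by our method.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best, xi=0.0):
    """Expected improvement (for minimization) at points with GP posterior mean mu
    and standard deviation sigma, relative to the best observed cost f_best."""
    sigma = np.maximum(sigma, 1e-12)          # avoid division by zero
    z = (f_best - mu - xi) / sigma
    return (f_best - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)
```

The next candidate is then the point that maximizes this quantity over the policy-parameter space.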

5 Using Model-based Prior for Model-free RL

1:  init: Sample initial policy parameters
2:  Apply the sampled policies on the system and record the resultant state-input trajectory and cost data
3:  Initialize the trajectory dataset $\mathcal{D}_{\text{dyn}}$ and the cost dataset $\mathcal{D}_{\text{cost}}$
4:  Train the dynamics model $\hat{f}$ using $\mathcal{D}_{\text{dyn}}$
5:  Define $\hat{J}(\theta)$: computed by evaluating the trajectory distribution corresponding to $\pi_{\theta}$ using Monte-Carlo on $\hat{f}$ and computing the expected cost in Equation (2)
6:  repeat
7:     Train the GP-based response surface using $\mathcal{D}_{\text{cost}}$ and $\hat{J}(\theta)$ as the prior mean
8:     Minimize the acquisition function: $\theta_* = \arg\min_{\theta} \alpha(\theta)$
9:     Evaluate $\pi_{\theta_*}$ on the real system
10:     Collect the trajectory data and the observed cost $J(\theta_*)$
11:     Add $(\theta_*, J(\theta_*))$ to $\mathcal{D}_{\text{cost}}$ and the trajectory data to $\mathcal{D}_{\text{dyn}}$
12:     Every $K$ iterations:
13:        Update the dynamics model $\hat{f}$ based on $\mathcal{D}_{\text{dyn}}$
14:        Redefine $\hat{J}(\theta)$ based on the updated GP dynamics model
15:  until converged
Algorithm 1 MBMF Algorithm

We now present our novel approach to incorporating an MB prior in MF RL, which we term Model-Based Model-Free (MBMF). As with most MB approaches, our algorithm starts with training a forward dynamics model from single-step state transition data $(x_t, u_t, x_{t+1})$. This model can be linear or non-linear and can be learned in a variety of ways, e.g., using linear regression, GP regression, etc. Once the dynamics model is trained, for any given policy parameterization $\theta$, we can predict the corresponding trajectory distribution by iteratively computing the distribution of states for $t = 1, \ldots, T$. Given the trajectory distribution, we compute the predicted distribution of the cost as a function of the policy parameters using Equation (2). We denote the expected value of this predicted cost function as $\hat{J}(\theta)$.
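As an illustration of this step, the sketch below estimates the model-based cost by Monte-Carlo: trajectories are sampled by rolling the policy through a learned probabilistic dynamics model and the accumulated costs are averaged. The names predict_next_state, policy, and cost are placeholders for the learned dynamics model, the policy class, and the task cost, respectively.

```python
import numpy as np

def mc_expected_cost(theta, predict_next_state, policy, cost, x0, horizon,
                     n_samples=50, rng=None):
    """Monte-Carlo estimate of the expected cost of policy pi_theta under a
    learned probabilistic dynamics model.

    predict_next_state(x, u, rng) returns one sample of the next state (e.g.,
    drawn from the GP posterior over the dynamics), policy(theta, x) returns
    the action, and cost(x, u) is the per-step cost from Equation (2)."""
    rng = rng or np.random.default_rng()
    total = 0.0
    for _ in range(n_samples):
        x = np.array(x0, dtype=float)
        for _ in range(horizon):
            u = policy(theta, x)
            total += cost(x, u)
            x = predict_next_state(x, u, rng)   # sample a successor state
    return total / n_samples                    # Monte-Carlo estimate of the expected cost
```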

At the same time, similarly to BO, we train a GP-based response surface that predicts the cost $J(\theta)$ given the measured tuples $(\theta_i, J(\theta_i))$ in $\mathcal{D}_{\text{cost}}$. Here, $J(\theta_i)$ denotes the observed cost corresponding to the policy $\pi_{\theta_i}$ for the given horizon, as defined in Equation (2). However, unlike plain BO, we employ the model-based cost estimate $\hat{J}(\theta)$ as the prior mean of the response surface (a more correct, but computationally harder, approach would be to treat the full cost distribution as a prior for the response surface). This modified response surface is then used to optimize the acquisition function and compute the next policy parameters to evaluate on the real system. The policy is then rolled out on the actual system. The observed state-input trajectories and the realized cost are next added to $\mathcal{D}_{\text{dyn}}$ and $\mathcal{D}_{\text{cost}}$, respectively, and the entire process is repeated. A summary of our algorithm is provided in Algorithm 1.
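Putting the pieces together, the sketch below (reusing the gp_posterior and expected_improvement helpers sketched in Section 4) proposes the next policy parameters: it fits a GP response surface over policy parameters whose prior mean is the model-based cost estimate, and maximizes expected improvement over random candidates. The random candidate search is a simplification standing in for the DIRECT optimizer used in our experiments.

```python
import numpy as np

def propose_next_policy(Theta, costs, j_hat, bounds, n_candidates=2000, rng=None, **kern_args):
    """Select the next policy parameters to evaluate on the real system.

    Theta: previously evaluated policy parameters, shape (n, d).
    costs: the corresponding observed costs, shape (n,).
    j_hat: model-based cost estimate, mapping a parameter vector to a scalar.
    bounds: (lower, upper) bounds on the policy parameters."""
    rng = rng or np.random.default_rng()
    lo, hi = np.asarray(bounds[0]), np.asarray(bounds[1])
    candidates = rng.uniform(lo, hi, size=(n_candidates, len(lo)))

    prior_mean = lambda Z: np.array([j_hat(z) for z in Z])   # MB prior over the cost
    mu, var = gp_posterior(Theta, costs, candidates, prior_mean, **kern_args)
    ei = expected_improvement(mu, np.sqrt(np.maximum(var, 0.0)), f_best=np.min(costs))
    return candidates[np.argmax(ei)]
```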

Intuitively, the learned dynamics model has the capability to estimate the cost corresponding to a particular policy; however, it suffers from the model bias which translates into a bias in the estimated cost. The BO response surface, on the other hand, can predict the true cost of a policy in the regime where it has observed the training samples, as it was trained directly on the observed performances. However, it can have a huge uncertainty in the cost estimates in the unobserved regime. Incorporating the model-based cost estimates as the prior allows it to leverage the structure of the dynamics model to guide its exploration in this unobserved regime. Thus, using the model-based prior in BO leads to a sample-efficient exploration of the policy space, and at the same time overcomes the biases in the model-based cost estimates, leading to the optimal performance on the actual system.

Note that we collect trajectory data at each iteration, so, in theory, we can update the dynamics model, and hence the response surface prior, at each iteration. However, it might be desirable to update the prior only every $K$ iterations instead, as the dynamics model might change significantly between consecutive iterations, especially when the dataset is small. We will demonstrate the effect of $K$ on the learning progress in Section 6.

It should also be noted that algorithms like PILCO [2] can be thought of as a special case of our approach, where the response surface consists exclusively of the prior mean provided by a GP-based dynamics model, without any consideration of the evidence (i.e., the measured costs). In other words, PILCO does not take the cost dataset $\mathcal{D}_{\text{cost}}$ into account. Leveraging $\mathcal{D}_{\text{cost}}$ allows the BO to learn an accurate response surface by accounting for the differences between the "belief" cost based on the dynamics model and the actual cost observed on the system.

Remark 1

It is important to note that we do not explicitly compute the function $\hat{J}(\theta)$ over the entire policy space. The function is only computed for the specific $\theta$ that are queried by the optimization algorithm during the optimization of the acquisition function (Line 8 of Algorithm 1).

Remark 2

The proposed approach is agnostic to the function approximator used to learn the dynamics model; thus, different dynamics models, e.g., linear models, neural networks, GPs, Bayesian neural networks, etc. can easily be used in the proposed framework.

6 Experimental Results

In this section, we compare the performance of MBMF with a pure MB method, a pure MF method, and a combination of the two in which the model is used to "warm start" the MF method.

6.1 Experimental Setting

Task details

We apply the proposed approach as well as the baseline approaches on two different tasks. In the first task, a 2D point mass is moving in the presence of obstacles. The setup of the task is shown in Figure 2. The agent has no information about the position and the type of the obstacles (the Grey cylinders). The goal is to reach the goal position (the Green circle) from the starting position (the Red circle). For the cost function, we penalize the squared distance from the goal position.

In the second task, an under-actuated three degree-of-freedom (DoF) robotic arm (only two of the three joints can be controlled) is trying to push an object from one position to another. The setup of the task is shown in Figure 4. The Red box represents the object which needs to be moved to the goal position, denoted by the Green box. As before, the squared distance from the goal position of the object is used as the cost function.
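In both tasks, the per-step cost is the squared Euclidean distance of the relevant point (the point mass, or the pushed object) from its goal; a minimal sketch, with the extraction of the object position from the observation left to the environment-specific code:

```python
import numpy as np

def squared_distance_cost(obj_pos, goal_pos):
    """Per-step cost: squared Euclidean distance of the object from the goal."""
    return float(np.sum((np.asarray(obj_pos) - np.asarray(goal_pos)) ** 2))
```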

These tasks pose challenging learning problems because they are under-observed, under-actuated, and have both contact and non-contact modes, which result in discontinuous dynamics.

Implementation details

For the GP regression, we use the GPy package [28]. We use the Dividing Rectangles (DIRECT) algorithm [29] for all policy searches in this paper. For simulating the tasks, we use OpenAI Gym [30] environments and the MuJoCo [31] physics engine. In our experiments, we employ linear policies, but more complex policies can be easily incorporated as well.

Baseline details

For the MB method, we learn a dynamics model and use this dynamics model to perform policy search. Given the dynamics model and the cost function, we learn a linear policy using DIRECT. The resultant policy is then executed on the real system and the corresponding state and action trajectories, as well as the resultant cost are obtained. The observed trajectories are then added to the training set, and the entire process is repeated again. We denote this baseline as MB in our plots.

For the MF method, we use BO to directly find the optimal policy parameters; we denote it as MF in the plots. In the final variant, we use the MB method above to optimize the policy for a given number of iterations, after which we switch to BO and continue the optimization. The cost observations obtained during the execution of the MB method are used to initialize the BO. We denote it as MB+MF in the plots, and simulate this baseline for different switching points, i.e., the number of iterations after which we switch from the MB to the MF approach. Finally, we denote our approach as MBMF and also simulate it for multiple prior update intervals $K$.

6.2 2D Point Mass

The goal of this experiment is to demonstrate how leveraging the MB prior in the MF method can reduce the model bias and yet maintain data-efficiency. We use a GP-based dynamics model for this simulation, where we learn a separate GP for every dimension of the state. We use Monte-Carlo simulation to find the trajectory distribution, which is highly parallelizable and known to be very effective for GPs [32]. Nevertheless, other schemes can be used to compute a good approximation of this distribution [2].
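A minimal sketch of such a dynamics model using the GPy package is given below: one GPRegression model is trained per state dimension on (state, action) inputs, and successor states are sampled from the per-dimension posteriors for the Monte-Carlo rollouts. Predicting the change in state rather than the next state, and the ARD squared exponential kernel, are illustrative modeling choices rather than details fixed by our setup.

```python
import GPy
import numpy as np

class GPDynamicsModel:
    """One GP per state dimension, trained on (state, action) -> state change."""

    def __init__(self, states, actions, next_states):
        X = np.hstack([states, actions])              # inputs: (x_t, u_t)
        deltas = next_states - states                 # targets: x_{t+1} - x_t
        self.models = []
        for d in range(states.shape[1]):
            kern = GPy.kern.RBF(input_dim=X.shape[1], ARD=True)
            m = GPy.models.GPRegression(X, deltas[:, d:d + 1], kern)
            m.optimize()                              # maximize the marginal likelihood
            self.models.append(m)

    def sample_next_state(self, x, u, rng):
        """Draw one sample of x_{t+1} from the per-dimension GP posteriors."""
        z = np.hstack([x, u])[None, :]
        preds = [m.predict(z) for m in self.models]   # per-dimension (mean, variance)
        mu = np.array([p[0][0, 0] for p in preds])
        std = np.sqrt(np.array([max(p[1][0, 0], 0.0) for p in preds]))
        return x + mu + std * rng.standard_normal(len(mu))
```

The sample_next_state method can be passed as the predict_next_state argument of the Monte-Carlo cost estimate sketched in Section 5.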

Figure 1: The mean (curves) and the standard deviation (shaded regions) of the cost obtained for different approaches for the 2D point mass system. A pure MF approach is unable to perform well. A pure MB approach continues to improve, but is outperformed by the MBMF, indicating the utility of blending MB and MF approaches.
Panels of Figure 2: (a) Trial 1, (b) Trial 2, (c) Trial 3, (d) Trial 4.
Figure 2: Trajectories obtained by executing the learned controller for the point mass system after 25 iterations. Each trial corresponds to different initial data, which was the same across all approaches. The optimal trajectory requires the system to overcome the obstacles (the Grey cylinders) to reach the goal position (the Green circle) from the initial position (the Red circle). The MB and MF approaches behave differently across trials and often get stuck in the obstacles. MBMF, on the other hand, is able to learn how to overcome the obstacles and consistently reaches the goal position.

The optimal mean cost (curve) and the standard deviation (shaded area) obtained for the different approaches (across thirty trials) as learning progresses are shown in Figure 1, where each iteration corresponds to one execution on the real system. The MF approach (the dot-dashed Blue curve) improves as the learning progresses, but is still significantly outperformed by all other approaches, indicating the data-inefficiency of a pure MF approach. The pure MB approach (the Green curve) continues to improve as learning progresses; however, it is outperformed by MBMF very early on. Interestingly, in this case, using the MB method to warm start the MF method (with a switching point of 5) does not improve the performance, as evident from the dotted Orange curve, indicating that using the MB component to initialize the MF component may not be sufficient for policy improvement. In contrast, using the model information as a prior for the MF method (with $K = 10$) outperforms the other approaches and is able to learn a good policy within roughly 15 iterations, indicating the utility of systematically incorporating the model information during policy exploration. We note that MBMF also has a smaller variance compared to all the other baselines, indicating the consistency of its performance.

We also simulated MB+MF and MBMF for different switching points and prior update intervals, respectively. A naive switch from MB to MF fails to improve the policy even for different switching points, and is thus outperformed by the pure MB approach. The interval at which the prior is updated in the MBMF approach, however, affects the learning process. We found that updating the model prior too frequently or too infrequently can both lead to suboptimal performance. Updating too often makes MBMF too sensitive to the changes in the dynamics model, which can change significantly especially early on in the learning process, and can "mis-guide" the policy exploration. On the other hand, updating too infrequently may strip MBMF of the full potential of the dynamics model. In this particular case, the optimal choice turns out to be $K = 10$, i.e., the MB prior is updated every 10 iterations. It is also interesting to note that the MBMF approach is at least as good as the best baseline for all values of $K$. Nevertheless, systematically finding the optimal update interval is an interesting future direction. The mean and the standard deviation of the costs obtained by different approaches, as well as additional learning curves, can be found in Appendix A.1.

We also plot the trajectories obtained by executing the learned controllers on the actual system for the MB, MF, and MBMF approaches in Figure 2. The initial and the goal positions are denoted by the Red and the Green circles respectively. For comparison purposes, the globally optimal trajectory (the dotted Red curve) was also computed using the actual system dynamics obtained through MuJoCo; these dynamics, however, are unknown to all of the learning methods. We plot the trajectories for different trials, which correspond to different (but identical across all methods) initial data. As evident from the figure, the MBMF approach is consistently able to reach the goal state, whereas the MB and MF approaches fail to achieve consistently good performance. In particular, the optimal trajectory requires the system to overcome the obstacle next to the starting position. A pure MB approach is unable to consistently learn this behavior, potentially because it requires learning a discontinuous dynamics model. Consequently, it is often unable to reach the goal position and gets stuck in the obstacles (Figure 2). Similarly, a pure MF approach is unable to learn to overcome the obstacles within 25 iterations. The MBMF approach, however, can take the evidence into account and is able to overcome this challenge to reach the goal state consistently, demonstrating its robustness to the training data, which is also evident from the lower variance in the performance of MBMF.

6.3 Three DoF Robotic Arm

We again employ a GP-based dynamics model in this simulation. As evident from Figure 3, MBMF (with $K = 1$) outperforms the other approaches and continues to improve the policy over iterations; however, due to the computational complexity of a GP-based dynamics model, we stop the learning process after 20 iterations. The MB+MF approach (with a switching point of 15) continues to improve after switching from MB to MF; however, it is still outperformed by the pure MB approach. We also note that MBMF has a significantly smaller variance compared to all other baselines, indicating that the MBMF approach is robust to the initial training data.

We also simulate the MB+MF and MBMF approaches for different switching points and prior update intervals. For brevity, we only plot the curves corresponding to the optimal switching point and update interval in Figure 3, but additional learning curves can be found in Appendix A.2. Interestingly, in this case, if the prior is updated too infrequently ($K$ is large), then MBMF lags behind the pure MB approach, as it is not fully leveraging the dynamics model information. However, if the right update interval is chosen, then MBMF can leverage the advantages of both MB and MF approaches and outperforms the two.

Figure 3: The mean (curves) and the standard deviation (shaded regions) of the cost obtained for different approaches for the three DoF robotic arm. MBMF leverages the advantages of both MB and MF approaches to design a better policy, indicating the data-efficiency of the MBMF approach, as well as its ability to overcome the model bias.

The corresponding trajectory comparison between the MB and MBMF approaches in Figure 4 also highlights the efficacy of MBMF in leveraging the advantages of both the MB and MF components to quickly learn the optimal policy. A pure MB approach struggles with learning to move the object vertically in a straight line, potentially due to the complexity of the dynamics given the contact-rich nature of the task. The MBMF approach, on the other hand, has the capability to trade off the observed costs and the predicted cost. As a result, it is able to move the object closer to the goal position within a small number of iterations (20 in this case).

(a) MBMF

(b) MB
Figure 4: (a) Trajectory obtained by executing the learned controller for the MBMF approach. The Red box represents the object, which needs to be moved to the Green box. MBMF is able to push the object fairly close to the goal position. (b) Trajectory obtained by executing the learned controller for the pure model-based approach. A pure MB approach struggles with accomplishing this task, with the final position of the object ending up very far from the goal position.

7 Conclusion

We propose MBMF, a novel probabilistic framework to combine model-based and model-free RL methods. This bridging is achieved by using the cost estimated by the model-based component as the prior for the model-free component. Our results show that the proposed approach can overcome the model bias and inaccuracies in the dynamics model, and yet retain the fast convergence rate of model-based approaches. There are several interesting future directions that emerge from this work. First, it would be interesting to investigate how this approach performs on more complex tasks. Moreover, the computational cost of Gaussian process inference scales cubically with the number of training samples [23], which makes the proposed approach prohibitive for higher-dimensional systems or policies; exploring more scalable versions of the proposed approach is an interesting future direction. Finally, a natural direction of research is the inclusion of other intermediate representations, such as value functions and trajectories, in the proposed approach.

Acknowledgments

This research is supported by NSF under the CPS Frontiers VehiCal project (1545126), by the UC-Philippine-California Advanced Research Institute under project IIID-2016-005, and by the ONR MURI Embedded Humans (N00014-16-1-2206).

References

  • Deisenroth et al. [2013] M. P. Deisenroth, G. Neumann, and J. Peters. A survey on policy search for robotics. Foundations and Trends in Robotics, 2(1-2):1–142, 2013.
  • Deisenroth et al. [2015] M. P. Deisenroth, D. Fox, and C. E. Rasmussen. Gaussian processes for data-efficient learning in robotics and control. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(2):408–423, 2015.
  • Levine and Abbeel [2014] S. Levine and P. Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In Advances in Neural Information Processing Systems, pages 1071–1079, 2014.
  • Gläscher et al. [2010] J. Gläscher, N. Daw, P. Dayan, and J. P. O’Doherty. States versus rewards: dissociable neural prediction error signals underlying model-based and model-free reinforcement learning. Neuron, 66(4):585–595, 2010.
  • Simon [1982] H. A. Simon. Models of bounded rationality: Empirically grounded economic reason, volume 3. MIT press, 1982.
  • Nguyen-Tuong et al. [2009] D. Nguyen-Tuong, M. Seeger, and J. Peters. Model learning with local Gaussian process regression. Advanced Robotics, 23(15):2015–2034, 2009.
  • Nguyen-Tuong and Peters [2011] D. Nguyen-Tuong and J. Peters. Model learning for robot control: a survey. Cognitive processing, 12(4):319–340, 2011.
  • Schreiter et al. [2015] J. Schreiter, P. Englert, D. Nguyen-Tuong, and M. Toussaint. Sparse Gaussian process regression for compliant, real-time robot control. In International Conference on Robotics and Automation, 2015.
  • Pan and Theodorou [2014] Y. Pan and E. Theodorou. Probabilistic differential dynamic programming. In Advances in Neural Information Processing Systems, pages 1907–1915, 2014.
  • Kocijan et al. [2004] J. Kocijan, R. Murray-Smith, C. E. Rasmussen, and A. Girard. Gaussian process model based predictive control. In American Control Conference, pages 2214–2219. IEEE, 2004.
  • Calandra et al. [2015] R. Calandra, A. Seyfarth, J. Peters, and M. P. Deisenroth. Bayesian optimization for learning gaits under uncertainty. Annals of Mathematics and Artificial Intelligence, 76(1):5–23, 2015.
  • Deisenroth and Rasmussen [2011] M. Deisenroth and C. E. Rasmussen. PILCO: A model-based and data-efficient approach to policy search. In International Conference on Machine Learning, pages 465–472, 2011.
  • McHutchon [2014] A. McHutchon. Nonlinear modelling and control using Gaussian processes. PhD thesis, Department of Engineering, University of Cambridge, 2014.
  • Bansal et al. [2017] S. Bansal, R. Calandra, T. Xiao, S. Levine, and C. J. Tomlin. Goal-driven dynamics learning via Bayesian optimization. arXiv preprint arXiv:1703.09260, 2017.
  • Donti et al. [2017] P. L. Donti, B. Amos, and J. Z. Kolter. Task-based end-to-end model learning. arXiv preprint arXiv:1703.04529, 2017.
  • Gu et al. [2016] S. Gu, T. Lillicrap, I. Sutskever, and S. Levine. Continuous deep Q-learning with model-based acceleration. arXiv preprint arXiv:1603.00748, 2016.
  • Sutton [1991] R. S. Sutton. Dyna, an integrated architecture for learning, planning, and reacting. ACM SIGART Bulletin, 2(4):160–163, 1991.
  • Heess et al. [2015] N. Heess, G. Wayne, D. Silver, T. Lillicrap, T. Erez, and Y. Tassa. Learning continuous control policies by stochastic value gradients. In Advances in Neural Information Processing Systems, pages 2944–2952, 2015.
  • Farshidian et al. [2014] F. Farshidian, M. Neunert, and J. Buchli. Learning of closed-loop motion control. In Intelligent Robots and Systems Conference, pages 1441–1446. IEEE, 2014.
  • Nagabandi et al. [2017] A. Nagabandi, G. Kahn, R. S. Fearing, and S. Levine. Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning. arXiv preprint arXiv:1708.02596, 2017.
  • Chebotar et al. [2017] Y. Chebotar, K. Hausman, M. Zhang, G. Sukhatme, S. Schaal, and S. Levine. Combining model-based and model-free updates for trajectory-centric reinforcement learning. arXiv preprint arXiv:1703.03078, 2017.
  • Wilson et al. [2014] A. Wilson, A. Fern, and P. Tadepalli. Using trajectory data to improve bayesian optimization for reinforcement learning. The Journal of Machine Learning Research, 15(1):253–282, 2014.
  • Rasmussen and Williams [2006] C. E. Rasmussen and C. K. I. Williams. Gaussian processes for machine learning. The MIT Press, 2006.
  • Kushner [1964] H. J. Kushner. A new method of locating the maximum point of an arbitrary multipeak curve in the presence of noise. Journal of Basic Engineering, 86:97, 1964.
  • Osborne et al. [2009] M. A. Osborne, R. Garnett, and S. J. Roberts. Gaussian processes for global optimization. In International Conference on Learning and Intelligent OptimizatioN, pages 1–15, 2009.
  • Shahriari et al. [2016] B. Shahriari, K. Swersky, Z. Wang, R. P. Adams, and N. de Freitas. Taking the human out of the loop: A review of Bayesian optimization. Proceedings of the IEEE, 104(1):148–175, 2016.
  • Močkus [1975] J. Močkus. On Bayesian methods for seeking the extremum. In Optimization Techniques IFIP Technical Conference, 1975.
  • GPy [since 2012] GPy. GPy: A Gaussian process framework in python. http://github.com/SheffieldML/GPy, since 2012.
  • Gablonsky et al. [2001] J. M. Gablonsky et al. Modifications of the DIRECT algorithm. 2001.
  • Brockman et al. [2016] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. OpenAI Gym, 2016.
  • Todorov et al. [2012] E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In Intelligent Robots and Systems Conference, pages 5026–5033. IEEE, 2012.
  • Kupcsik et al. [2014] A. Kupcsik, M. P. Deisenroth, J. Peters, A. P. Loh, P. Vadakkepat, and G. Neumann. Model-based contextual policy search for data-efficient generalization of robot skills. Artificial Intelligence, 2014.

Appendix A

A.1 Point Mass System

Figure 5: The mean cost obtained for different switching points for the MB+MF approach for the 2D point mass system. Switching from MB to MF results in a flat learning curve in this case, indicating that naively switching between the two may not be sufficient for policy improvement.
Figure 6: The mean cost obtained for different prior update intervals for the MBMF approach for the 2D point mass system. The learning efficiency of MBMF depends on the choice of the prior update interval. Updating too often makes MBMF too sensitive to the changes in the dynamics model, which can "mis-guide" the policy exploration. On the other hand, if the prior is updated too infrequently ($K$ is large), then MBMF lags behind the pure model-based approach, as it is not fully leveraging the dynamics model information. In this case, the optimal choice turns out to be an update every 10 iterations; however, MBMF is at least as good as the best baseline for all update intervals.
Approach | Obtained cost (mean ± std)
Model-free (MF) | 22.83 ± 10.81
Model-based (MB) | 9.86 ± 4.10
MB+MF () | 11.91 ± 4.02
MB+MF () | 11.16 ± 3.87
MB+MF () | 10.24 ± 3.31
MB+MF () | 10.08 ± 4.02
MBMF () | 9.80 ± 2.62
MBMF () | 8.34 ± 3.38
MBMF () | 7.50 ± 2.93
Table 1: Point mass system. The mean and the standard deviation of the cost obtained by executing the learned controller (after 25 iterations) on the actual system for different approaches. The results are computed over 30 trials.

A.2 Three DoF Robotic Arm

Figure 7: The mean cost obtained for different switching points for the MB+MF approach for the robotic arm. Switching from MB to MF results in slower learning compared to a pure MB approach.
Figure 8: The mean cost obtained for different prior update intervals for the MBMF approach for the robotic arm. If the prior is updated too infrequently ($K$ is large), then MBMF lags behind the pure MB approach, as it is not fully leveraging the dynamics model information. In this case, the optimal choice turns out to be an update every iteration. Nevertheless, systematically finding the optimal prior update interval is an important future direction.
Approach | Obtained cost (mean ± std)
Model-free (MF) | 54.60 ± 9.85
Model-based (MB) | 49.54 ± 12.13
MB+MF () | 53.70 ± 9.58
MB+MF () | 53.19 ± 10.68
MB+MF () | 49.70 ± 10.64
MBMF () | 46.38 ± 7.54
MBMF () | 49.25 ± 10.50
MBMF () | 51.36 ± 10.44
Table 2: Three DoF robotic arm system. The mean and the standard deviation of the cost obtained by executing the learned controller on the actual system for different approaches. The reported numbers are at the end of iteration 20 and are computed over 30 trials.