Variational Inference MPC for Bayesian Model-based Reinforcement Learning

Masashi Okada
Panasonic Corp., Japan
okada.masashi001@jp.panasonic.com
&Tadahiro Taniguchi
Ritsumeikan Univ. & Panasonic Corp., Japan
taniguchi@em.ci.ritsumei.ac.jp
Abstract

In recent studies on model-based reinforcement learning (MBRL), incorporating uncertainty in forward dynamics is a state-of-the-art strategy to enhance learning performance, making MBRL competitive with cutting-edge model-free methods, especially in simulated robotics tasks. Probabilistic ensembles with trajectory sampling (PETS) is a leading MBRL method, which applies Bayesian inference to dynamics modeling and performs model predictive control (MPC) with stochastic optimization via the cross entropy method (CEM). In this paper, we propose a novel extension to this uncertainty-aware MBRL. Our main contributions are twofold: first, we introduce variational inference MPC (VI-MPC), which reformulates various stochastic methods, including CEM, in a Bayesian fashion. Second, we propose a novel instance of this framework, called probabilistic action ensembles with trajectory sampling (PaETS). As a result, our Bayesian MBRL can involve multimodal uncertainties both in dynamics and in optimal trajectories. In comparison to PETS, our method consistently improves asymptotic performance on several challenging locomotion tasks.

\keywords

model predictive control, variational inference, model-based reinforcement learning

1 Introduction

Model predictive control (MPC) is a powerful and widely accepted technology for advanced control systems such as manufacturing processes [1], HVAC systems [2], power electronics [3], autonomous vehicles [4], and humanoids [5]. MPC utilizes specified models of the system dynamics to predict future states and rewards (or costs) and to plan future actions that maximize the total reward over the predicted trajectories. Especially for industrial applications, the clear explainability of such a decision-making process is advantageous. Furthermore, in some tasks (e.g., games) [6], planning-based policies of this nature can outperform reactive policies (e.g., purely neural-network policies).

Model-based reinforcement learning (MBRL) methods that employ expressive function approximators (e.g., deep neural networks: DNNs) [7, 8, 9] present appealing approaches for MPC. The main difficulty in introducing MPC to practical systems is specifying the forward-dynamics models of the target systems, and accurate system identification is challenging in many advanced applications. Take robotics for example: robots make contact with floors, walls, and the objects they manipulate, which makes the dynamics highly non-linear. The main objective of MBRL is to train approximators of such complex dynamics through experiences in the real system. The general procedure of MBRL is summarized as follows: (1) training step: train the approximate model on the training dataset collected so far; (2) test step: execute the actions (or policies) optimized with the dynamics model in the real environment and augment the dataset with the observed results. These training and test steps are conducted iteratively to collect sufficiently diverse data and achieve the desired performance.
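As a concrete illustration, the following Python sketch implements this generic train/test loop; `train_dynamics_model`, `plan_actions`, and the Gym-style `env` are hypothetical placeholders, not components defined in this paper:

```python
# Minimal sketch of the generic MBRL loop described above.
# `train_dynamics_model`, `plan_actions`, and `env` are hypothetical placeholders.

def mbrl_loop(env, num_iterations, episode_length):
    dataset = []  # list of (state, action, next_state) transitions

    # Seed the dataset with a random policy so the first model fit is possible.
    state = env.reset()
    for _ in range(episode_length):
        action = env.action_space.sample()
        next_state, reward, done, info = env.step(action)
        dataset.append((state, action, next_state))
        state = env.reset() if done else next_state

    for _ in range(num_iterations):
        # (1) Training step: fit the approximate dynamics model to the data.
        model = train_dynamics_model(dataset)

        # (2) Test step: run MPC with the learned model and log new transitions.
        state = env.reset()
        for _ in range(episode_length):
            action = plan_actions(model, state)      # MPC optimization
            next_state, reward, done, info = env.step(action)
            dataset.append((state, action, next_state))
            if done:
                break
            state = next_state
    return model, dataset
```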

One feature of MBRL is its considerable sample efficiency compared to model-free reinforcement learning (MFRL), which directly trains policies through experiences. In other words, MBRL requires much less test time in real environments. In addition, MBRL benefits from the generalizability of the trained model, which can easily be applied to new tasks on the same system. However, the asymptotic performance of MBRL is generally inferior to that of model-free methods. This discrepancy is primarily due to the overfitting of dynamics models to the limited data available during the initial MBRL iterations, which is called the model-bias problem [7]. Several studies have demonstrated that incorporating uncertainty in dynamics models can alleviate this issue. Such uncertainty-aware modeling is realized by Bayesian inference employing Gaussian processes [7], dropout as variational inference [10, 11, 12], or neural network ensembles [13, 14, 15].

Probabilistic ensembles with trajectory sampling (PETS) [13] is one type of uncertainty-aware MBRL. As an MPC-oriented MBRL method, PETS conducts trajectory optimization via the cross entropy method (CEM) [16], using trajectories probabilistically sampled from the ensemble networks. Experiments have demonstrated that PETS achieves performance competitive with state-of-the-art MFRL methods such as Soft Actor-Critic (SAC) [17], while yielding much higher sample efficiency. Since our primary interest is MPC and its application to practical systems, this paper mainly focuses on PETS and treats this method as a strong baseline.

Considering the success of probabilistic dynamics modeling, incorporating uncertainty in optimal trajectories appears very promising for MBRL. However, an optimization scheme that can utilize uncertainty has not yet been discussed. Although several stochastic approaches, including CEM, model predictive path integral (MPPI) [18, 8], covariance matrix adaptation evolution strategy (CMA-ES) [19], and proportional CEM (Prop-CEM) [20], have been proposed, they are not uncertainty-aware and tend to underestimate uncertainty. In addition, although their optimization procedures are very similar, they have been independently derived. Consequently, theoretical relations among these methods are unclear, preventing us from systematically understanding and reformulating them to be uncertainty-aware in a Bayesian fashion.

Motivated by these observations, in this paper, we propose a novel MPC concept for Bayesian MBRL. The organization and contributions of this paper are summarized as follows. (1) In Sec. 3, we introduce a novel MPC framework, variational inference MPC (VI-MPC), which generalizes and reformulates various stochastic MPC methods in a Bayesian fashion. The key observations for deriving this framework are organized in Sec. 2, where we point out that general stochastic optimization methods can be regarded as moment matching of the optimal-trajectory posterior, which appears in a Bayesian MBRL formulation. (2) In Sec. 4, we propose a novel instance of the framework, called probabilistic action ensembles with trajectory sampling (PaETS). Toy task examples and the concept of our method are exhibited in Fig. 1. (3) In Sec. 5, we demonstrate that our method consistently outperforms PETS via experiments with challenging locomotion tasks in the MuJoCo physics simulator [21].

(a) Vanilla CEM used in PETS [13]: VIMPC(‘CEM’, ‘Gaussian’, False). (b) PaETS (ours): VIMPC(‘CEM’, ‘GMM(M=5)’, True).
Figure 1: Toy task examples that illustrate the concept of our method. The objective of this task is to navigate a point mass on a 2D plane, by actuating it with a bounded force, while avoiding obstacles. The task is designed to have multiple (sub-)optimal trajectories. (a) A trajectory found by vanilla CEM. (b) Multiple trajectories found by PaETS, which approximates the trajectory posterior via variational inference with a Gaussian mixture model; the line width indicates the magnitude of the mixture coefficients. Exploiting diverse plans encourages active exploration in state-action space, improving both the optimization performance and the training-dataset diversity. The notation VIMPC(·) is introduced in Sec. 3.

2 Model-based Reinforcement Learning as Bayesian Inference

In this section, we describe MBRL as a Bayesian inference problem using the control-as-inference framework [22]. Fig. 2 displays the graphical model for the formulation, with which the MBRL procedure can be rewritten in a Bayesian fashion: (1) training step: infer the dynamics posterior p(θ|D); (2) test step: infer the action posterior p(a_{1:T}|O_{1:T}, D), then sample actions from this posterior and execute them in the real environment. We denote a trajectory as τ = (s_{1:T}, a_{1:T}), where s_t and a_t respectively represent the state and action at time t. Given a state-action pair at time t, the next state can be predicted by a forward-dynamics model p(s_{t+1}|s_t, a_t, θ) parameterized with θ. The posterior of θ is inferred from the training dataset D, which consists of the states and actions observed during the test steps. To formulate optimal control as inference, we auxiliarily introduce a binary random variable O_t to represent the optimality of (s_t, a_t). Given O_{1:T} = 1, trajectory optimization can be expressed as an inference problem:

p(a_{1:T} | O_{1:T}, D) ∝ ∬ p(O_{1:T} | τ) p(s_{1:T} | a_{1:T}, θ) p(a_{1:T}) p(θ | D) ds_{1:T} dθ,   (1)

where a uniform (uninformative) action prior p(a_{1:T}) is assumed. For readability, the optimality variables O_{1:T} = 1 are simply denoted as O. For the same reason, we omit the subscripts of the sequences a_{1:T} and s_{1:T}; this simplified notation is employed in the remainder of the paper. In Secs. 2.1–2.2, we review how these inference problems have been approximately handled in previous works.

2.1 Inference of Forward-dynamics Posterior

Figure 2: Graphical model for Bayesian MBRL.

Given a sufficiently expressive parameterized model, i.e., DNNs, one of the most practical and promising schemes for approximating the posterior p(θ|D) is to utilize neural network ensembles [13, 14, 15]. This scheme approximates the posterior as a set of particles, p(θ|D) ≈ (1/K) Σ_{k=1}^{K} δ(θ − θ_k), where δ(·) is the Dirac delta function and K is the number of networks. Each particle θ_k is trained independently by stochastic gradient descent so as to (sub-)optimize the likelihood of D. Although this approximation is not fully Bayesian, the scheme has several useful features. First, it can be implemented simply in standard deep learning frameworks. Furthermore, the ensemble model successfully captures multimodal uncertainty in the exact posterior.
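A minimal PyTorch sketch of this particle approximation, in which each network is trained independently (the architecture, bootstrap resampling, and optimizer settings here are illustrative assumptions, not the exact configuration of PETS or this paper):

```python
import torch
import torch.nn as nn

# Sketch: approximate p(theta | D) with K independently trained networks
# (a set of particles).  Architecture and training details are illustrative.

def make_dynamics_net(state_dim, action_dim, hidden=200):
    return nn.Sequential(
        nn.Linear(state_dim + action_dim, hidden), nn.SiLU(),
        nn.Linear(hidden, hidden), nn.SiLU(),
        nn.Linear(hidden, state_dim),  # predicts the next state (or its mean)
    )

def train_ensemble(transitions, state_dim, action_dim, K=5, epochs=50):
    states = torch.tensor([t[0] for t in transitions], dtype=torch.float32)
    actions = torch.tensor([t[1] for t in transitions], dtype=torch.float32)
    next_states = torch.tensor([t[2] for t in transitions], dtype=torch.float32)

    ensemble = [make_dynamics_net(state_dim, action_dim) for _ in range(K)]
    for net in ensemble:  # each particle theta_k is (sub-)optimized independently
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)
        for _ in range(epochs):
            # Bootstrap-style resampling gives each member a different view of D.
            idx = torch.randint(0, len(states), (len(states),))
            pred = net(torch.cat([states[idx], actions[idx]], dim=-1))
            loss = ((pred - next_states[idx]) ** 2).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return ensemble  # the particle set {theta_k} approximating p(theta | D)
```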

Another possible way to infer p(θ|D) is dropout as variational inference [10, 11, 12], which approximates the posterior as a Gaussian distribution q_φ(θ). It has been proven that the variational inference problem min_φ KL(q_φ(θ) || p(θ|D)) is approximately equivalent to training networks with dropout, where KL denotes the Kullback-Leibler divergence. Although this scheme is also simple and theoretically supported, approximation by a single Gaussian distribution tends to underestimate uncertainty (or multimodality) in the posterior. To remedy this problem, α-divergence dropout has been proposed [23], which replaces the KL-divergence with the α-divergence so as to prevent q_φ from overfitting to a single mode. However, as long as q_φ is Gaussian, the multimodality cannot be managed well.

In our preliminary MBRL experiments, we tested the above two schemes and observed that the ensemble performs much better than (α-)dropout (this result is summarized in Sec. A). This provides the insight that capturing multimodality in the posterior has a crucial effect in the MBRL setting. Therefore, in this paper, we also employ the ensemble scheme to approximate p(θ|D), in the same way as our baseline PETS [13]. In Sec. 4, we further attempt to incorporate multimodality in the posterior of optimal actions.

2.2 Moment Matching of Trajectory Posterior

This section clarifies the connection between trajectory optimization and the posterior approximation problem. The key observation delineated here is that several MPC methods, including the CEM used in PETS and MPPI, can be regarded as moment matching of the posterior p(a|O, D).

Given an inferred model posterior p(θ|D), we can sample trajectories τ from (1) (trajectory sampling methods with p(θ|D) have been discussed and evaluated in [13]; in this paper we employ the TS1 method suggested there, see lines 3–6 in Alg. 1). Let us approximate the action posterior p(a|O, D) with a Gaussian distribution q(a) = N(a; μ, Σ). The mean μ of the posterior action sequence can be estimated by moment matching:

μ = E_{p(a|O,D)}[a] = E_{p(a)}[ w(a) · a ],   (2)

where

w(a) := E_{p(θ|D) p(s|a,θ)}[ p(O|τ) ] / E_{p(a) p(θ|D) p(s|a,θ)}[ p(O|τ) ].   (3)

Eq. (2) can be viewed as a weighted average in which each sampled action is weighted by its likelihood of optimality p(O|τ). In the same way, we can also estimate the variance Σ of the posterior.

In practice, sampling from the uniform prior p(a) is quite inefficient and would require an almost infinite number of samples. Hence, let us consider iteratively estimating the parameters by incorporating importance sampling. Let μ_i, Σ_i be the estimated parameters at iteration i and q_i(a) = N(a; μ_i, Σ_i); we can rearrange (2) as

μ_{i+1} = E_{q_i(a)}[ w_i(a) · a ],   w_i(a) := E_{p(θ|D) p(s|a,θ)}[ p(O|τ) ] / E_{q_i(a) p(θ|D) p(s|a,θ)}[ p(O|τ) ].   (4)

It is worth noting that a similar iterative law can also be derived by solving the optimization problem with mirror descent [24, 25]. To connect this inference problem to trajectory optimization, we define the optimality likelihood with the trajectory reward r(τ) and a monotonically increasing function f as p(O|τ) ∝ f(r(τ)). If we define f in the same way as [22, 26], i.e., f(r) = exp(r), an optimization algorithm similar to MPPI [18, 8, 25] is recovered. As summarized in Table 1, other similarities to well-known optimization algorithms, including CEM, can be observed with different optimality definitions. (We implicitly assume the existence of a step-wise likelihood p(O_t|s_t, a_t) corresponding to each definition. Since another graphical model with a single unified optimality can be defined, this existence is not critical.)

MPPI [18]: f(r) = exp(r)   CEM [16]: f(r) = 1(r ≥ r_th)   Prop-CEM [20]: f(r) ∝ r   CMA-ES [19]: f(r) = g(r)
Table 1: Optimization algorithms derived by moment matching of p(a|O, D) with different optimality definitions f; 1(·) indicates an indicator function with threshold r_th, and g denotes a rank-preserving transformation.

There is a discrepancy between (4) and the CEM implementation in [13], in which f(E[r(τ)]) is used instead of E[f(r(τ))], where the expectation is taken over the trajectories sampled from the dynamics posterior. Since f is a convex function, Jensen's inequality holds in this case: f(E[r]) ≤ E[f(r)]. The equality holds when r(τ) is constant, implying that f(E[r]) ≃ E[f(r)] for low-variance r(τ) and f(E[r]) < E[f(r)] for high-variance (i.e., more uncertain) r(τ). Namely, f(E[r]) underestimates the optimality likelihood if an action a generates uncertain trajectories. Since we experimentally observed that this uncertainty-avoiding behavior of f(E[r]) yields higher optimization performance than E[f(r)] (see Sec. B), this paper heuristically employs f(E[r]).
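As a quick numerical illustration of this gap for the exponential optimality f(r) = exp(r) (the numbers below are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Returns of one action under an uncertain dynamics posterior (illustrative numbers).
r_uncertain = rng.normal(loc=1.0, scale=2.0, size=10000)
r_certain = rng.normal(loc=1.0, scale=0.1, size=10000)

for name, r in [("high-variance", r_uncertain), ("low-variance", r_certain)]:
    f_of_mean = np.exp(r.mean())          # f(E[r]) as used in [13]
    mean_of_f = np.exp(r).mean()          # E[f(r)] as suggested by (4)
    print(f"{name:14s}  f(E[r])={f_of_mean:8.2f}   E[f(r)]={mean_of_f:8.2f}")
# Jensen's inequality: f(E[r]) <= E[f(r)], with a large gap only when r is uncertain.
```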

In practice, the expectation operators must be implemented on digital computers through Monte Carlo integration, with N sampled actions and P sampled trajectories for each action.
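Below is a small numpy sketch of one such Monte Carlo moment-matching iteration in the spirit of (4); `rollout_rewards`, which should return the expected return E[r(τ)] of each candidate action sequence (e.g., via ensemble trajectory sampling), is a hypothetical placeholder rather than part of the paper's code.

```python
import numpy as np

def moment_matching_step(mu, sigma, rollout_rewards, optimality="CEM",
                         n_actions=500, elite_frac=0.1, temperature=1.0, seed=0):
    """One iteration in the spirit of (4): sample actions from q_i(a) = N(mu, sigma^2),
    weight them by an optimality likelihood f(E[r]), and refit the Gaussian."""
    rng = np.random.default_rng(seed)
    # Sample candidate action sequences from the current Gaussian.
    actions = rng.normal(mu, sigma, size=(n_actions,) + mu.shape)

    # Expected return E[r(tau)] for each candidate, estimated through the
    # dynamics posterior (e.g., TS1 rollouts of the ensemble).
    mean_returns = rollout_rewards(actions)            # shape: (n_actions,)

    if optimality == "CEM":        # f(r) = indicator(r >= adaptive threshold)
        threshold = np.quantile(mean_returns, 1.0 - elite_frac)
        weights = (mean_returns >= threshold).astype(float)
    else:                          # MPPI-like: f(r) = exp(r / temperature)
        weights = np.exp((mean_returns - mean_returns.max()) / temperature)
    weights /= weights.sum()

    # Weighted-average (moment-matching) estimates of the new mean and variance.
    w = weights.reshape(-1, *([1] * mu.ndim))
    new_mu = (w * actions).sum(axis=0)
    new_sigma = np.sqrt((w * (actions - new_mu) ** 2).sum(axis=0))
    return new_mu, new_sigma
```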

3 Variational Inference MPC: From Moment Matching to Inference

Given uncertainty in the dynamics model, it is natural to suppose that optimal trajectories are also uncertain. However, as exhibited in the previous section, PETS employs moment matching of the trajectory posterior, ignoring almost all uncertainty in the optimal trajectories. In this section, we introduce the variational inference MPC (VI-MPC) framework, which formulates MBRL as fully Bayesian and involves uncertainty both in the dynamics and in the optimalities.

Let us consider the variational inference problem min_q KL(q(τ) || p(τ|O, D)). We assume that the variational distribution decomposes as q(τ) = q(a) p(s|a, θ) p(θ|D); hence, we introduce q(a) as the approximate action posterior, so that q(τ) takes a similar decomposable form to p(τ|O, D). This assumption forces optimal state transitions to be controlled only by the actions [22]. As shown in Sec. C.1, this inference problem can be transformed into the maximization problem max_q E_{q(τ)}[log p(O|τ)] + H[q(a)]. A notable property is that this objective has an entropy regularization term H[q(a)], which encourages q(a) to take a broader shape and thereby capture more uncertainty. For convenience, we introduce a tunable temperature hyperparameter into the optimality likelihood; the objective can then be rescaled to the form E_{q(τ)}[log p(O|τ)] + λ H[q(a)]. By applying mirror descent [27] to this optimization problem, we can derive an update law for q(a) (see Sec. C.2 for the detailed derivation):

q_{i+1}(a) ∝ E_{p(θ|D) p(s|a,θ)}[ p(O|τ) ]^{1/κ} · q_i(a)^{1 − λ/κ},   (5)

where κ and λ are hyperparameters into which the mirror-descent step size and the temperature are absorbed; κ is an inverted step-size that controls the optimization speed, and λ is the weight of the entropy regularization term H[q(a)].
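For reference, a compressed sketch of how (5) follows from a single mirror-descent step, using the f(E[r]) convention of Sec. 2.2 with our reconstructed shorthand W(a) := E_{p(θ|D) p(s|a,θ)}[p(O|τ)] (the full derivation is in Sec. C.2):

```latex
\begin{align*}
q_{i+1} &= \operatorname*{arg\,max}_{q} \;
  \mathbb{E}_{q(a)}\!\left[\log \mathcal{W}(a) - \lambda \log q_i(a)\right]
  - \kappa\, \mathrm{KL}\!\left(q \,\|\, q_i\right)
  \quad \text{s.t. } \textstyle\int q(a)\,\mathrm{d}a = 1 \\
\Rightarrow\quad
q_{i+1}(a) &\propto q_i(a)\,
  \exp\!\left(\tfrac{1}{\kappa}\bigl(\log \mathcal{W}(a) - \lambda \log q_i(a)\bigr)\right)
  = \mathcal{W}(a)^{1/\kappa}\, q_i(a)^{\,1-\lambda/\kappa}.
\end{align*}
```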

Eq. (5) suggests a novel and general MPC framework, which we call variational inference MPC (VI-MPC). To realize a specific VI-MPC method, we specify the following: (1) the optimality definition f (see Table 1), (2) the variational distribution model q(a), and (3) whether entropy regularization is enabled (λ > 0) or not (λ = 0). We did not include κ in the specifications since its effect is highly dependent on the optimality definition (see Sec. G). In this paper, we describe these specifications as VIMPC(<optimality_def>, <variational_dist>, <max_ent>). For example, we express vanilla CEM and MPPI as VIMPC(‘CEM’, ‘Gaussian’, False) and VIMPC(‘MPPI’, ‘Gaussian’, False), respectively. In Sec. 4, we propose a new instance of VI-MPC that incorporates multimodal uncertainty in the posterior.
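To make the specification concrete, here is a hypothetical sketch (our own naming, not an official API) of how VIMPC(<optimality_def>, <variational_dist>, <max_ent>) could map to the particle weights implied by (5); the choice of variational distribution is handled elsewhere (e.g., the GMM update of Sec. 4):

```python
import numpy as np

def make_vimpc_weight_fn(optimality_def="CEM", max_ent=True,
                         kappa=1.0, lam=0.5, elite_frac=0.1):
    """Return a function mapping (expected returns, log q_i(a)) to normalized
    particle weights, following our reading of update (5).  A sketch only."""
    def weight_fn(mean_returns, log_q):
        if optimality_def == "CEM":          # indicator optimality
            thr = np.quantile(mean_returns, 1.0 - elite_frac)
            log_w = np.where(mean_returns >= thr, 0.0, -np.inf)
        elif optimality_def == "MPPI":       # exponentiated-reward optimality
            log_w = mean_returns - mean_returns.max()
        else:
            raise ValueError(optimality_def)
        if max_ent:                          # entropy bonus: -lambda * log q_i(a)
            log_w = log_w - lam * log_q
        log_w = log_w / kappa                # kappa acts as an inverted step-size
        w = np.exp(log_w - np.max(log_w[np.isfinite(log_w)]))
        return w / w.sum()
    return weight_fn

# Example: PaETS-style weights vs. vanilla-CEM weights.
paets_weights = make_vimpc_weight_fn("CEM", max_ent=True)
cem_weights = make_vimpc_weight_fn("CEM", max_ent=False)
```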

4 Probabilistic Action Ensembles with Trajectory Sampling

As reviewed in Sec. 2.1, previous methods have successfully captured multimodality in p(θ|D) with network ensembles. Given this multimodality in p(θ|D), other distributions depending on θ, including the action posterior p(a|O, D), will also be multimodal. In other words, there are various possible optimal trajectories (or actions), as in Fig. 1. It is obvious that VIMPC(*, ‘Gaussian’, *) will still easily fail to capture multimodality because of overfitting to a single mode. Inspired by the success of the ensemble approach for dynamics modeling, we propose a novel VI-MPC method that introduces action ensembles with a Gaussian mixture model (GMM), i.e., VIMPC(*, ‘GMM(M=*)’, *), which we call PaETS (Probabilistic Action Ensembles with Trajectory Sampling).

PaETS defines the variational distribution as

q(a) = Σ_{m=1}^{M} π_m N(a; μ_m, Σ_m),   Σ_{m=1}^{M} π_m = 1,   (6)

where φ := {π_m, μ_m, Σ_m}_{m=1}^{M} denotes the GMM parameters and M is the number of mixture components. Now, we derive the iteration scheme to update the parameters of the GMM. First, drawing N samples {a^(n)} from q_i(a), we approximate q_i as a discretized distribution (i.e., a set of particles):

q_i(a) ≈ Σ_{n=1}^{N} w_n δ(a − a^(n)),   a^(n) ~ q_i(a),   (7)

where w_n are the particle weights. Just after sampling, the weight of each particle is uniform: w_n = 1/N. By substituting this approximated distribution into (5), the update law for the particle weights is derived as

w_n ∝ ( E_{p(θ|D) p(s|a^(n),θ)}[ p(O|τ) ] · q_i(a^(n))^{−λ} )^{1/κ},   Σ_{n=1}^{N} w_n = 1.   (8)

Then we estimate φ_{i+1}, the GMM parameters that maximize the (weighted) observation probability of the particles:

φ_{i+1} = argmax_φ Σ_{n=1}^{N} w_n log q_φ(a^(n)).   (9)

By taking the derivative and borrowing the concept of the EM algorithm [28], we obtain update laws for φ that take the same weighted-average form as (4); for example, the mean of each mixture component is updated as (see Sec. D for the complete definition):

μ_m ← Σ_{n=1}^{N} w_n γ_{n,m} a^(n) / Σ_{n=1}^{N} w_n γ_{n,m},   (10)
where γ_{n,m} is the responsibility of mixture component m for particle a^(n).
Figure 3: Evaluated locomotion tasks simulated in MuJoCo.

Fig. 8 in Sec. E illustrates how this method works in a toy optimization task.
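The following numpy sketch shows a single weighted-EM step corresponding to (9)–(10), under our reconstruction of those updates and with a diagonal covariance for simplicity (an assumption; the covariance structure is not specified here):

```python
import numpy as np

def weighted_em_step(actions, weights, pis, mus, sigmas, eps=1e-8):
    """One weighted EM step for a diagonal-covariance GMM q(a).
    actions: (N, D) flattened action sequences, weights: (N,) from (8),
    pis: (M,), mus: (M, D), sigmas: (M, D) standard deviations."""
    # E-step: responsibilities gamma[n, m] under the current GMM.
    diff = actions[:, None, :] - mus[None, :, :]                  # (N, M, D)
    log_comp = (-0.5 * np.sum((diff / sigmas) ** 2 + 2 * np.log(sigmas), axis=-1)
                + np.log(pis + eps))                              # up to a constant
    log_comp -= log_comp.max(axis=1, keepdims=True)
    gamma = np.exp(log_comp)
    gamma /= gamma.sum(axis=1, keepdims=True)

    # M-step: weight-average updates using eta[n, m] = w_n * gamma[n, m].
    eta = weights[:, None] * gamma                                # (N, M)
    eta_sum = eta.sum(axis=0) + eps                               # (M,)
    new_pis = eta_sum / eta_sum.sum()
    new_mus = (eta[:, :, None] * actions[:, None, :]).sum(axis=0) / eta_sum[:, None]
    var = (eta[:, :, None] * (actions[:, None, :] - new_mus[None]) ** 2).sum(axis=0)
    new_sigmas = np.sqrt(var / eta_sum[:, None] + eps)
    return new_pis, new_mus, new_sigmas
```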

In summary, PaETS and the MBRL procedure utilizing it are described in Algs. 1 and 2, respectively, where I is the number of optimization iterations and the inner loop of Alg. 2 runs over the task episode. At the beginning of each episode in Alg. 2, the mixture means μ_m are initialized independently at random. At each subsequent time step, the means are warm-started (shifted by one step) while the mixture coefficients π_m and covariances Σ_m are reset to their initial values, encouraging exploration at the next time step and preventing q(a) from degenerating to a single mode. If we set M = 1, these procedures are almost equivalent to those of PETS. The use of a GMM (M > 1) does not significantly increase computational complexity (see Sec. F).

Input: state s_t, GMM parameters φ = {π_m, μ_m, Σ_m}
Output: optimized GMM parameters φ
1: for i = 1 to I do
2:     Sample N action sequences a^(n) ~ q_φ(a)
3:     Sample P state trajectories for each a^(n) using the ensemble {θ_k}   // TS1 method
4:     Evaluate the expected reward of each a^(n)
5:     Calculate the weights w_n by (8)
6:     Update φ by (10)
Algorithm 1 PaETS

Data: initial variance Σ_init and initial mixture coefficients π_init
1: Initialize D with a random controller for one trial
2: repeat
3:     Infer p(θ|D)   // train ensemble DNNs
4:     Initialize the means μ_m at random   // rand. init.
5:     for t = 1 to episode length do
6:         Execute Alg. 1 to optimize φ for the current state s_t
7:         Sample a_t ~ q_φ(a), send a_t to the actuators, and observe s_{t+1}
8:         Shift μ_m one step ahead; reset π_m and Σ_m   // warm-start
9:     Augment D with the observed transitions
10: until the MPC-policy performs well
Algorithm 2 MBRL with PaETS
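For completeness, a numpy sketch of the TS1-style trajectory sampling referenced in line 3 of Alg. 1, where the ensemble member is re-drawn for every particle at every time step, following the description of TS1 in [13]; the `ensemble` interface (a list of callables s_{t+1} = f_k(s_t, a_t)) and `reward_fn` are assumptions for illustration:

```python
import numpy as np

def ts1_rollout(ensemble, init_state, action_seq, reward_fn, n_particles=20, seed=0):
    """Estimate E[r(tau)] for one candidate action sequence.
    TS1: for every particle, the ensemble member is re-drawn at each time step.
    ensemble: list of callables next_state = f_k(state, action)  (assumed API)."""
    rng = np.random.default_rng(seed)
    horizon = len(action_seq)
    states = np.tile(init_state, (n_particles, 1))
    total_reward = np.zeros(n_particles)
    for t in range(horizon):
        # Draw a (possibly different) bootstrap model for every particle.
        model_idx = rng.integers(0, len(ensemble), size=n_particles)
        next_states = np.stack([ensemble[k](states[p], action_seq[t])
                                for p, k in enumerate(model_idx)])
        total_reward += reward_fn(states, action_seq[t], next_states)
        states = next_states
    # Monte Carlo estimate of the expectation over p(theta|D) and p(s|a,theta).
    return total_reward.mean()
```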

5 Experiments

5.1 Comparison to State-of-the-art Methods

The main objective of this experiment is to demonstrate that PaETS has advantages over the state-of-the-art MBRL baseline PETS [13]. PaETS and PETS (i.e., vanilla CEM) were implemented in the same codebase with different parameters: VIMPC(‘CEM’, ‘GMM(M=5)’, True) for PaETS and VIMPC(‘CEM’, ‘GMM(M=1)’, False) for PETS. We also evaluated another MBRL baseline with MPPI [8], realized as VIMPC(‘MPPI’, ‘GMM(M=1)’, False). These methods share the settings for inference (training of the network ensembles). The state-of-the-art MFRL method SAC [17] was also evaluated to compare asymptotic performance (we used the open-source implementation at https://github.com/pranz24/pytorch-soft-actor-critic). Fig. 3 illustrates the simulated locomotion tasks evaluated in this experiment, which are complex and challenging due to their highly non-linear dynamics. Other details of our implementation and experimental settings are described in Sec. G and Sec. H. Fig. 4 presents the experimental results, in which PaETS consistently exhibits better asymptotic performance than the MBRL baselines. In addition, PaETS outperforms or is comparable to SAC while requiring significantly fewer samples (roughly 10x more sample-efficient).

Figure 4: Learning curves for different tasks and algorithms. These are averaged results of 8 (for MBRL) and 20 (for SAC) trials with different random seeds. We stopped training when convergence was observed or after reaching the specified number of test steps for each method. The asymptotic performances (averages over the last 10 test steps) are depicted as dashed lines.

5.2 Ablation Study

This experiment clarifies which components of PaETS (the GMM and the entropy regularization) contribute to the overall improvement. Fig. 5 presents the results of this ablation study together with Welch's t-test for selected representative pairs. From this figure, one can observe that the use of the GMM (M > 1) significantly improves performance. The effect of the regularization (λ > 0) is relatively small, but not negligible: in certain tasks, setting λ to particular values further improves performance. With λ > 0, the regularization sheds light on actions sampled from low-probability regions of q(a), thus encouraging q(a) to remain multimodal. In tasks that require rather delicate control (e.g., Hopper, Walker2d), the effect of λ seems less significant. Fig. 6 examines sensitivity to the number of mixture components M. If an infinite or sufficiently large number of samples were available, it would be reasonable to set M large enough to capture the multimodality. In practice, however, N is finite and may be small due to computational constraints; in this case, a larger M makes it more difficult to approximate q(a) with a set of N particles, degrading the optimization performance.

Figure 5: Asymptotic performance comparison with varying M and λ. These are averaged results over 8 different MBRL trials and the last 10 test steps. The error bars denote confidence intervals (95%). The symbols ‘**’ and ‘n.s.’ respectively indicate statistically significant and non-significant differences under Welch's t-test.
Figure 6: Asymptotic performance comparison with varying M. Only the HalfCheetah task is evaluated in this test.

6 Related Work

Dynamics Posterior Inference. The recent MBRL methods MB-MPO (Model-Based Meta-Policy-Optimization) [15] and ME-TRPO (Model-Ensemble Trust-Region Policy Optimization) [14] also employ network ensembles to model dynamics, but they utilize the ensembles differently from us: to train policy networks rather than for MPC.

Trajectory Optimization. Sequential Monte Carlo based MPC, which can be described as VIMPC(*, ‘Particles’, False), has been introduced in [29], but it requires a well-designed proposal distribution to sample particles for the next iteration. Another particle-based method has been derived in [26] by utilizing the control-as-inference framework. However, this method relies not only on a dynamics model but also on policy and value functions to manage the particles, so MFRL methods must be incorporated.

Recent studies have demonstrated that entropy regularization is a promising strategy in policy training [30, 31, 32, 17]. However, to the best of our knowledge, the introduction of entropy regularization to MPC, combined with an explicit multimodal representation that realizes their synergistic effect, is novel.

Ref. [33] also systematically organizes stochastic MPC methods, from the perspective of online learning, but does not conduct uncertainty-aware discussions from a Bayesian viewpoint.

Bayesian Reformulation. Ref. [34] proposes a novel approach to generative adversarial imitation learning (GAIL) [35], which reformulates general GAIL in a Bayesian fashion and utilizes ensembles to infer discriminator posteriors. Another Bayesian reformulation of GAIL integrates imitation and reinforcement learning by introducing an additional optimality variable (i.e., imitation optimality) [36].

7 Conclusion & Discussions

This paper introduces a novel VI-MPC framework that systematically generalizes and reformulates various stochastic MPC methods in a Bayesian fashion. We also devise a novel instance of this framework, called PaETS, which successfully incorporates multimodal uncertainty in optimal trajectories. By combining our method with recent uncertainty-aware dynamics modeling based on neural network ensembles, our Bayesian MBRL can involve multimodality both in the dynamics and in the optimalities. In addition, our method is a quite simple extension of general stochastic methods and requires no significant additional computational complexity. Our experiments demonstrate that PaETS improves asymptotic performance over the leading MBRL baseline PETS, and thus substantially enhances the potential of MBRL to be more competitive with state-of-the-art MFRL.

Considering the simplicity and generalizability of VI-MPC and PaETS, we expect our concept to be applicable to a variety of settings, such as traditional MPC with deterministic dynamics and advanced MPC with latent dynamics learned from pixels by Deep Planning Network [37]. By introducing a categorical mixture model as the variational distribution, application to combinatorial optimization is also feasible. In fact, our ongoing work includes experiments with discrete MPC for a practical system.

A question that remains is how to determine the VI-MPC specifications. As implied in Fig. 4, the best optimality definition may be task-dependent (e.g., MPPI outperformed vanilla CEM on Ant but not on the other tasks). The regularization weight λ also exhibits task dependency, as shown in Fig. 5. It would be challenging but interesting future work to add these parameters to the graphical model in Fig. 2 as latent variables, so as to infer promising parameters along with the optimal trajectories, in the spirit of the infinite GMM [38]. Another appealing direction for future work is to introduce the concept of parallel tempering [39] from Markov chain Monte Carlo. By adaptively varying temperatures across the ensemble of actions, we can expect the ensemble diversity to improve.

\acknowledgments

We thank Vishwajeet Singh, Hiroki Nakamura and Akira Kinose for their cooperation in this study during their student-internship periods. Most of the experiments were conducted in ABCI (AI Bridging Cloud Infrastructure), built by the National Institute of Advanced Industrial Science and Technology, Japan.

References

  • Vargas-Villamil and Rivera [2000] F. D. Vargas-Villamil and D. E. Rivera. Multilayer optimization and scheduling using model predictive control: application to reentrant semiconductor manufacturing lines. Computers & Chemical Engineering, 24(8):2009–2021, 2000.
  • Afram and Janabi-Sharifi [2014] A. Afram and F. Janabi-Sharifi. Theory and applications of HVAC control systems–a review of model predictive control (MPC). Building and Environment, 72:343–355, 2014.
  • Vazquez et al. [2014] S. Vazquez, J. Leon, L. Franquelo, J. Rodriguez, H. A. Young, A. Marquez, and P. Zanchetta. Model predictive control: A review of its applications in power electronics. IEEE Ind. Electron. Mag., 8(1):16–31, 2014.
  • Paden et al. [2016] B. Paden, M. Čáp, S. Z. Yong, D. Yershov, and E. Frazzoli. A survey of motion planning and control techniques for self-driving urban vehicles. IEEE Trans. Intell. Veh., 1(1):33–55, 2016.
  • Kuindersma et al. [2016] S. Kuindersma, R. Deits, M. Fallon, A. Valenzuela, H. Dai, F. Permenter, T. Koolen, P. Marion, and R. Tedrake. Optimization-based locomotion planning, estimation, and control design for the Atlas humanoid robot. Autonomous Robots, 40(3):429–455, 2016.
  • Silver et al. [2016] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484, 2016.
  • Deisenroth and Rasmussen [2011] M. Deisenroth and C. E. Rasmussen. PILCO: A model-based and data-efficient approach to policy search. In ICML, 2011.
  • Williams et al. [2017] G. Williams, N. Wagener, B. Goldfain, P. Drews, J. M. Rehg, B. Boots, and E. A. Theodorou. Information theoretic MPC for model-based reinforcement learning. In ICRA, 2017.
  • Nagabandi et al. [2018] A. Nagabandi, G. Kahn, R. S. Fearing, and S. Levine. Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning. In ICRA, 2018.
  • Gal and Ghahramani [2016] Y. Gal and Z. Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In ICML, 2016.
  • Gal et al. [2017] Y. Gal, J. Hron, and A. Kendall. Concrete dropout. In NeurIPS, 2017.
  • Kahn et al. [2017] G. Kahn, A. Villaflor, V. Pong, P. Abbeel, and S. Levine. Uncertainty-aware reinforcement learning for collision avoidance. arXiv preprint arXiv:1702.01182, 2017.
  • Chua et al. [2018] K. Chua, R. Calandra, R. McAllister, and S. Levine. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In NeurIPS, 2018.
  • Kurutach et al. [2018] T. Kurutach, I. Clavera, Y. Duan, A. Tamar, and P. Abbeel. Model-ensemble trust-region policy optimization. In ICLR, 2018.
  • Clavera et al. [2018] I. Clavera, J. Rothfuss, J. Schulman, Y. Fujita, T. Asfour, and P. Abbeel. Model-based reinforcement learning via meta-policy optimization. In CoRL, 2018.
  • Botev et al. [2013] Z. I. Botev, D. P. Kroese, R. Y. Rubinstein, and P. L’Ecuyer. The cross-entropy method for optimization. In Handbook of statistics, volume 31, pages 35–59. Elsevier, 2013.
  • Haarnoja et al. [2018] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In ICML, 2018.
  • Williams et al. [2016] G. Williams, P. Drews, B. Goldfain, J. M. Rehg, and E. A. Theodorou. Aggressive driving with model predictive path integral control. In ICRA, 2016.
  • Hansen et al. [2003] N. Hansen, S. D. Müller, and P. Koumoutsakos. Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evolutionary computation, 11(1):1–18, 2003.
  • Goschin et al. [2013] S. Goschin, A. Weinstein, and M. Littman. The cross-entropy method optimizes for quantiles. In ICML, 2013.
  • Todorov et al. [2012] E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In IROS, 2012.
  • Levine [2018] S. Levine. Reinforcement learning and control as probabilistic inference: Tutorial and review. arXiv preprint arXiv:1805.00909, 2018.
  • Li and Gal [2017] Y. Li and Y. Gal. Dropout inference in bayesian neural networks with alpha-divergences. In ICML, 2017.
  • Miyashita et al. [2018] M. Miyashita, S. Yano, and T. Kondo. Mirror descent search and its acceleration. Robotics and Autonomous Systems, 106:107–116, 2018.
  • Okada and Taniguchi [2018] M. Okada and T. Taniguchi. Acceleration of gradient-based path integral method for efficient optimal and inverse optimal control. In ICRA, 2018.
  • Piche et al. [2018] A. Piche, V. Thomas, C. Ibrahim, Y. Bengio, and C. Pal. Probabilistic planning with sequential monte carlo methods. In ICLR, 2018.
  • Bubeck et al. [2015] S. Bubeck et al. Convex optimization: Algorithms and complexity, volume 8, chapter 4. Now Publishers, Inc., 2015.
  • Bilmes et al. [1998] J. A. Bilmes et al. A gentle tutorial of the EM algorithm and its application to parameter estimation for gaussian mixture and hidden markov models. International Computer Science Institute, 4(510):126, 1998.
  • Kantas et al. [2009] N. Kantas, J. Maciejowski, and A. Lecchini-Visintini. Sequential monte carlo for model predictive control. In Nonlinear model predictive control, pages 263–273. Springer, 2009.
  • Abdolmaleki et al. [2015] A. Abdolmaleki, R. Lioutikov, J. R. Peters, N. Lau, L. P. Reis, and G. Neumann. Model-based relative entropy stochastic search. In NeurIPS, 2015.
  • Abdolmaleki et al. [2017] A. Abdolmaleki, B. Price, N. Lau, L. P. Reis, and G. Neumann. Deriving and improving CMA-ES with information geometric trust regions. In GECCO, 2017.
  • Haarnoja et al. [2017] T. Haarnoja, H. Tang, P. Abbeel, and S. Levine. Reinforcement learning with deep energy-based policies. In ICML, 2017.
  • Wagener et al. [2019] N. Wagener, C.-A. Cheng, J. Sacks, and B. Boots. An online learning approach to model predictive control. In Robotics: Science and Systems, 2019.
  • Jeon et al. [2018] W. Jeon, S. Seo, and K.-E. Kim. A bayesian approach to generative adversarial imitation learning. In NeurIPS, 2018.
  • Ho and Ermon [2016] J. Ho and S. Ermon. Generative adversarial imitation learning. In NeurIPS, 2016.
  • Kinose and Taniguchi [2019] A. Kinose and T. Taniguchi. Integration of imitation learning using GAIL and reinforcement learning using task-achievement rewards via probabilistic generative model. arXiv preprint arXiv:1907.02140, 2019.
  • Hafner et al. [2019] D. Hafner, T. Lillicrap, I. Fischer, R. Villegas, D. Ha, H. Lee, and J. Davidson. Learning latent dynamics for planning from pixels. In ICML, 2019.
  • Rasmussen [2000] C. E. Rasmussen. The infinite gaussian mixture model. In NeurIPS, 2000.
  • Brooks et al. [2011] S. Brooks, A. Gelman, G. Jones, and X.-L. Meng. Handbook of Markov Chain Monte Carlo, chapter 11. CRC press, 2011.
  • Theodorou et al. [2010] E. Theodorou, J. Buchli, and S. Schaal. A generalized path integral control approach to reinforcement learning. Journal of Machine Learning Research, 11:3137–3181, 2010.

Appendix A Preliminary Experiment for Uncertainty Modeling

Fig. 7 shows the result of a preliminary experiment in which different uncertainty-modeling approaches were evaluated on the HalfCheetah task. For all trials, vanilla CEM was used for trajectory optimization. This result suggests that (α-)dropout is insufficient to capture the uncertainty in the dynamics, resulting in convergence to worse local optima.

Figure 7: Comparison of uncertainty modeling approaches: ensemble and (α-)dropout.

Appendix B Comparison Between f(E[r]) and E[f(r)]

We evaluated the impact of using f(E[r]) versus E[f(r)] on the optimization performance of (vanilla) CEM and MPPI. The results are summarized in Table 2, where f(E[r]) gained much higher rewards than E[f(r)].

CEM MPPI
Table 2: Episode rewards on the HalfCheetah task with f(E[r]) and E[f(r)]. A common dynamics model (an ensemble of neural networks sufficiently trained by MBRL) was employed for this test. Ten different trials were conducted and the results were averaged.

Appendix C Derivations

C.1 Derivation of the Variational Inference Objective

By using the decomposability assumption on q(τ), the KL divergence can be transformed as

KL(q(τ) || p(τ|O, D)) = E_{q(τ)}[ log q(τ) − log p(τ|O, D) ]   (11)
  = E_{q(τ)}[ log q(a) − log p(O|τ) − log p(a) ] + const.   (12)
  = −E_{q(τ)}[ log p(O|τ) ] − H[q(a)] + const.   (13)

C.2 Derivation of (5)

In this section, we use simplified notation for q(a) and for the expected optimality likelihood for readability. Let us consider the optimization problem:

(14)

By applying mirror descent [27], the iterative update law of q(a) is given as

(15)

where ⟨·, ·⟩ denotes the inner product, one hyperparameter controls the step size, and a Lagrange multiplier enforces the normalization constraint ∫ q(a) da = 1. The arguments inside the operator can be rearranged as

(16)

where we used the relations:

(17)
(18)

The integrand of (16) can be organized as

(19)
(20)
(21)

Integrating the above equation yields,

(22)

By minimizing this equation, we get:

(23)

The Lagrange multiplier can be removed using the normalization constraint ∫ q(a) da = 1:

(24)
(25)

Considering the discussion in Secs. 2.2 and B, we compute the expected optimality likelihood using the f(E[r]) form:

(26)

Substituting (25) into (23) results in:

(27)

Marginalizing over the states and dynamics parameters, we finally obtain:

(28)

In (5), the remaining hyperparameters are renamed to κ and λ.

Appendix D Complete Definition of PaETS

The weighted-EM update laws used in (10) are:
γ_{n,m} = π_m N(a^(n); μ_m, Σ_m) / Σ_{m'=1}^{M} π_{m'} N(a^(n); μ_{m'}, Σ_{m'}),   (29)
η_{n,m} = w_n γ_{n,m},   (30)
π_m ← Σ_{n=1}^{N} η_{n,m} / Σ_{m'=1}^{M} Σ_{n=1}^{N} η_{n,m'},   (31)
μ_m ← Σ_{n=1}^{N} η_{n,m} a^(n) / Σ_{n=1}^{N} η_{n,m},   (32)
Σ_m ← Σ_{n=1}^{N} η_{n,m} (a^(n) − μ_m)(a^(n) − μ_m)^T / Σ_{n=1}^{N} η_{n,m}.   (33)

Appendix E Optimization of Toy Objective Function by PaETS

Fig. 8 illustrates how PaETS optimizes in a toy multimodal objective function.

Figure 8: The optimization process of a 2D multimodal objective function by PaETS (VIMPC(‘MPPI’, ‘GMM(M=2)’, True)), in which the two mixture components are successfully optimized to fit the two modes. The dots depict the particles that approximate q(a).

Appendix F Computational Complexity

The main computational bottleneck of PaETS (and PETS) is the execution of lines 3–6 in Alg. 1, in which N × P trajectories must be sampled in total. In our experiments, N and P were respectively set to 500 and 20, as in [13]. Compared to PETS, PaETS requires additional procedures in Alg. 1, namely action sampling from the GMM and the GMM parameter update. However, these additional procedures are easily parallelizable on GPUs, and their computation times are much shorter than the above-mentioned bottleneck. In experiments with our early TensorFlow prototype, one iteration of the for-loop in Alg. 1 took about 57 ms for M = 5 and 55 ms for M = 1 (equivalent to PETS) on a single NVIDIA RTX 2080 GPU. This execution time does not yet meet real-time constraints (e.g., 30 Hz). However, considering the success of real-time implementations of MPPI in [18, 8], we believe a real-time implementation of our method is feasible with optimized code using a compiled language, low-level GPU APIs, and thorough tuning of hyperparameters (e.g., N, P, and DNN complexity).

Appendix G Implementation Notes

Cross Entropy Method

It is a general technique to adaptively determine the threshold r_th in Table 1 so that only the top samples (a fixed eliteness ratio) satisfy the threshold condition. We employ this technique with a fixed eliteness ratio. The hyperparameter κ has no effect on CEM optimization since the optimality likelihood takes binary values.

MPPI

A reward-normalization heuristic, as suggested in [40], was also introduced in our MPPI implementation:

p(O|τ) ∝ exp( h · (r(τ) − r_min) / (r_max − r_min) ),   (34)

where r_min and r_max respectively denote the minimum and maximum rewards among the sampled trajectories, and the scale parameter h was set as also suggested in [40].
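A small numpy sketch of this heuristic as we reconstruct it (the exact form of (34) and the value of the scale parameter h are not fully recoverable from the text):

```python
import numpy as np

def mppi_normalized_weights(returns, h):
    """Reward-normalization heuristic for MPPI-style weights, in the spirit of [40]:
    returns are min-max normalized before exponentiation (our reconstruction)."""
    r = np.asarray(returns, dtype=float)
    r_norm = (r - r.min()) / (r.max() - r.min() + 1e-12)
    w = np.exp(h * r_norm)
    return w / w.sum()
```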

Entropy Regularization

The appropriate value of λ is very sensitive to the task settings, especially to the dimensionality of the action space. To make λ less sensitive, we introduced the following normalization trick inspired by the above heuristic. First, we rearrange (8) as

w_n ∝ exp( (1/κ) ( log E_{p(θ|D) p(s|a^(n),θ)}[ p(O|τ) ] + λ b_n ) ),   b_n := −log q_i(a^(n)).   (35)

Then, we replace the entropy bonus b_n with a normalized one:

b̃_n = ( b_n − min_{n'} b_{n'} ) / ( max_{n'} b_{n'} − min_{n'} b_{n'} ).   (36)

By applying this heuristic, the range of the entropy bonus is limited to [0, 1], where the action with the lowest probability among the samples gains the highest entropy bonus of 1.
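A minimal numpy sketch of this normalization, under our reconstruction of (35)–(36):

```python
import numpy as np

def normalized_entropy_bonus(log_q):
    """Min-max normalize the entropy bonus -log q_i(a^(n)) over the N samples,
    so the least likely sampled action receives a bonus of exactly 1."""
    bonus = -np.asarray(log_q)
    return (bonus - bonus.min()) / (bonus.max() - bonus.min() + 1e-12)

def entropy_regularized_weights(optimality_weights, log_q, lam, kappa=1.0):
    """Combine optimality weights with the normalized entropy bonus as in (35)."""
    log_w = np.log(optimality_weights + 1e-12) + lam * normalized_entropy_bonus(log_q)
    log_w = log_w / kappa
    w = np.exp(log_w - log_w.max())
    return w / w.sum()
```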

Appendix H Experimental Setup

We used MuJoCo tasks modified from the standard OpenAI Gym tasks (https://github.com/openai/gym). Table 3 summarizes the task settings, which involve the velocity, orientation angle, and height of the agents. Penalty functions are newly introduced to encourage the agents to move forward in the proper form; instead, the done flags originally used for early episode termination are removed. The penalty functions are defined as

(37)
(38)

We also widened the range of actions (i.e., torques) relative to the original tasks to exaggerate the uncertainties in the optimal trajectory posteriors.

Task Reward Function Misc.
HalfCheetah
Ant
Hopper
Walker2d
Table 3: MuJoCo task settings.

Table 4 summarizes the shared parameter settings for MBRL (PaETS, PETS, and MPPI). For SAC, we used the default parameters from the original codebase.

HalfCheetah / Ant / Hopper / Walker2d
prediction horizon T: 30 / 30 / 60 / 45
weight of entropy regularizer λ: 0.5 / 0.25 / 0.5 / 0.5
# sampled actions N: 500
# trajectories for each action P: 20
# optimization iterations I: 5
# episode length: 1000
# neural networks K: 5
hidden nodes: (200, 200, 200, 200)
activation function: Swish
optimizer: Adam
learning rate:
batch-size: 160
Table 4: MBRL parameters.

Appendix I Diversity Analysis of

In this section, we analyze the diversity of the training data D collected by different MPC policies. The distributions (histograms) of the data samples are illustrated in Fig. 9, in which the dimension of each sample (s_t, a_t) was reduced by t-SNE. This figure suggests that incorporating uncertainty both in the dynamics and in the optimalities improves the diversity of D (i.e., the coverage of the state-action space).

Figure 9: Comparison of training data distributions collected by different MPC-policies. CR (cover ratio) indicates the ratio of non-zero bins in each 2D histogram.