Sufficiently Accurate Model Learning


Abstract

Modeling how a robot interacts with the environment around it is an important prerequisite for designing control and planning algorithms. In fact, the performance of controllers and planners is highly dependent on the quality of the model. One popular approach is to learn data-driven models in order to compensate for inaccurate physical measurements and to adapt to systems that evolve over time. In this paper, we investigate a method to regularize model learning techniques to provide better error characteristics for traditional control and planning algorithms. This work proposes learning “Sufficiently Accurate” models of dynamics using a primal-dual method that can explicitly enforce constraints on the error in pre-defined parts of the state space. As a result, the error characteristics of the learned model are more predictable and can be better utilized by planning and control algorithms. The characteristics of Sufficiently Accurate models are analyzed through experiments on a simulated ball paddle system.

Model Learning, Planning, Control

I Introduction

One of the fundamental problems in robotics is the design of controllers and planners for complex dynamical systems. These algorithms rely on models of robots that are derived from physical laws using measured physical constants. These measurements may not be accurate, since the robot and its environment may change over time, which degrades the performance of the control and planning algorithms. Recent works address these estimation errors by using data-driven models to adapt an initial analytic model [18, 34].

One popular method of using data in control and planning systems is to learn the control inputs directly, whether through reinforcement learning algorithms [21, 23], imitation learning [28, 15], or other means. This may work for specific tasks, but it can have difficulty adapting to different tasks or task parameters, such as different control constraints. Learning a model of the system is therefore more flexible and can be used with a variety of existing algorithms.

Many modern controllers and planners rely on solving optimization problems, such as iLQR [32], CHOMP [26], and TrajOpt [29]. These methods require differentiable forward dynamics models that have well-behaved gradients in addition to being accurate. This paper formulates the model learning problem as a constrained optimization problem that seeks to provide such algorithms with predictable errors, enabling them to perform well in a variety of scenarios.

Fig. 1: Optimizing a sufficiently accurate model: The solid line represents the true trajectory of the ball, while the dotted lines represent predicted trajectories. The paddle actions, $u$, are optimized with gradient descent on a defined loss function. For the task of bouncing the ball consistently, a model that guarantees prediction error within some bound may be sufficient.

Learning a model is fundamentally different from learning a controller in that the controller is an end in itself, whereas a model is useful as an intermediate step toward learning a controller. Therefore, learning a model with arbitrary accuracy is not necessary. Rather, we want to learn a model that is sufficiently accurate for controller design. This paper formulates the problem of learning a model as a constrained optimization problem in which the required accuracy of the model is imposed as a set of constraints (Section III). The constraints imposed on the model accuracy are intended to ensure that the model is sufficiently accurate for a variety of control tasks. An additional advantage of our problem formulation is that the accuracy constraints imposed during model learning allow for trading off the accuracy of the model in different parts of the state space. For example, in Fig. 1 we consider the problem of determining the dynamical trajectories followed by a ball that is hit by a paddle, in order to design a controller that keeps the ball in the air by repeatedly hitting it when its vertical position crosses a certain threshold. We argue that for this problem it is advantageous to learn models whose prediction accuracy depends on the velocity of the ball.

I-A Contributions

This paper proposes a constraint-based formulation for learning and controlling dynamical systems. The contributions of this paper are: (i) a novel constrained objective function for model learning with neural networks, together with an adjoining constrained optimization problem for learning the controller, and (ii) a primal-dual method to solve both problems that has a small duality gap. The method is evaluated on a simulated ball bouncing task with varying task parameters and injected errors.

II Related Work

The idea of learning a model from data and using it to control systems is not new. PILCO [6] learns a probabilistic forward model with Gaussian Process regression and was later extended to Bayesian neural networks [10]. Guided Policy Search [19, 20] uses a Gaussian mixture model as a probabilistic dynamics model. Both formulate an optimization problem with the task of maximizing an expected reward. This allows a policy to be trained by backpropagating through the forward model. [13] and [11] both learn neural network models; the latter then formulates a convex optimization problem by linearizing the neural network. However, it has been observed that linearizing highly nonlinear systems often performs poorly [7]. The aforementioned methods learn models with an objective function similar to

$\min_{\theta} \; \mathbb{E}_{(x,u)\sim\mathcal{D}}\Big[ \big\| f(x,u) - \hat{f}_\theta(x,u) \big\|^2 \Big]$   (1)

where $f$ represents the true model dynamics and $\hat{f}_\theta$ is the learned model. [4] designs a specific neural network architecture for its forward model which uses a normalized objective; however, slack terms are introduced to avoid the numerical instability caused by dividing by small numbers. [2] presents a way to differentiate through the controller so that a model can be learned end to end. This method requires that the policy has converged to a fixed point, which can be hard to achieve in complicated systems.

Learning models is also of great interest in reinforcement learning where a forward model can increase sample efficiency [30, 19]. In addition, it has been found that learning forward and inverse models can provide additional rewards to help train a reinforcement learning agent [24]. [1] introduced a policy search method for reinforcement learning that can impose expectation constraints on states and actions.

There has also been some work in multi-task learning to obtain task-agnostic policies. One way to do this is with meta-learning algorithms such as MAML [9] or [5], which learn policies that can be adapted to different task parameters. [25, 23] learn policies that include a goal as an input to encourage the policy to generalize across different goals. These methods work for small numbers of task parameters but have difficulty scaling up, as each additional parameter added to a policy decreases the sample efficiency of the algorithm.

III Constrained Model Learning

In this work, we consider a discrete forward dynamics model. Formally, let $\mathcal{X}$ and $\mathcal{U}$ denote the state and action spaces, respectively. Then, the dynamics model is defined by a function $f : \mathcal{X} \times \mathcal{U} \to \mathcal{X}$ whose inputs are the state and action at time $t$, denoted by $x_t$ and $u_t$, and whose output is the state at time $t+1$

$x_{t+1} = f(x_t, u_t).$   (2)

We denote by $\hat{f}_\theta$ the neural network approximation of the true model $f$, where $\theta$ represents the network parameters. The classical approach to model learning consists of finding the parameters that minimize the expectation of a loss function $\ell$, that is

$\theta^\star = \operatorname*{arg\,min}_{\theta} \; \mathbb{E}_{(x,u)\sim\mathcal{D}}\Big[ \ell\big( f(x,u), \hat{f}_\theta(x,u) \big) \Big]$   (3)

where $\mathcal{D}$ denotes the sampling distribution over the state-action space. Note that $\mathcal{D}$ is not influenced by $\theta$ and is simply a training distribution. This simple objective does not allow any control over how errors are distributed. To address this limitation, we formulate the problem of learning a model as a constrained optimization problem in which the required accuracy of the model is imposed as a set of constraints. The constraints that are imposed on the model accuracy are intended to ensure that the model is sufficiently accurate, making it suitable for a variety of control tasks. To that end, we consider subsets $\Omega_i$, with $i = 0, \ldots, m$, of the state-action space, together with loss functions $\ell_i$ and constraints $c \in \mathbb{R}^m$, where each component of the constraint is imposed on a different region of the state-action space. With these definitions, we propose the following optimization problem

$\min_{\theta} \; \mathbb{E}_{(x,u)\sim\mathcal{D}}\Big[ \ell_0\big( f(x,u), \hat{f}_\theta(x,u) \big)\, \mathbb{1}_{\Omega_0}(x,u) \Big]$
$\text{s.t.} \;\; \mathbb{E}_{(x,u)\sim\mathcal{D}}\Big[ \ell_i\big( f(x,u), \hat{f}_\theta(x,u) \big)\, \mathbb{1}_{\Omega_i}(x,u) \Big] \le c_i, \quad i = 1, \ldots, m$   (4)

where $\mathbb{1}_{\Omega_i}$, with $i = 0, \ldots, m$, are indicator functions taking the value $1$ if $(x,u) \in \Omega_i$ and $0$ otherwise. For simplicity we define $L_i(\theta) = \mathbb{E}_{(x,u)\sim\mathcal{D}}\big[ \ell_i\big( f(x,u), \hat{f}_\theta(x,u) \big)\, \mathbb{1}_{\Omega_i}(x,u) \big]$ for all $i = 0, \ldots, m$. In the next section we present a primal-dual algorithm to solve the optimization problem (4). Before doing so, we consider a specific case to illustrate the sufficiently accurate learning framework.

Example: Normalized Error Model. As a specific case of the previous formulation, we consider the minimization of a normalized error. The error tolerance in a forward model is directly related to the magnitude of the quantity being estimated. For example, 1 unit of error when the output is 100 is different from 1 unit of error when the output is 1. A natural way to mitigate these differences is to consider a normalized error, $\| f(x,u) - \hat{f}_\theta(x,u) \| / \| f(x,u) \|$. It is often the case that small values of $\| f(x,u) \|$ are hard to even measure accurately; thus, it matters only that the error is bounded rather than minimized. The set of non-small values will be defined as $\Omega = \{ (x,u) : \| f(x,u) \| \ge \beta \}$, where $\beta > 0$. One can then pose the following model learning objective

$\min_{\theta} \; \mathbb{E}\bigg[ \frac{\big\| f(x,u) - \hat{f}_\theta(x,u) \big\|}{\| f(x,u) \|}\, \mathbb{1}_{\Omega}(x,u) \bigg]$
$\text{s.t.} \;\; \mathbb{E}\Big[ \big\| f(x,u) - \hat{f}_\theta(x,u) \big\|\, \mathbb{1}_{\Omega^c}(x,u) \Big] \le \epsilon$   (5)

This objective simply states that for all large enough $\| f(x,u) \|$, the normalized error should be minimized, and that for all small values the error should be bounded by $\epsilon$. This constraint can be seen as a regularization placed on the model learning objective to avoid overfitting the model to small-valued labels.
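To make the example concrete, the objective and constraint in (5) can be estimated from a minibatch by masking per-sample errors with the indicator of $\Omega$. The sketch below is illustrative rather than the implementation used for the experiments; `f_true` and `f_pred` are assumed to be batches of true and predicted model outputs, and the default `beta` mirrors the value used in Section V.

```python
import torch

def sufficiently_accurate_terms(f_true, f_pred, beta=0.1):
    """Minibatch estimates of the objective and constraint in Eq. (5).

    f_true, f_pred: tensors of shape (batch, output_dim) with the true and
    predicted outputs of the forward model for a batch of (x, u) samples.
    Returns (objective, constraint): the normalized error averaged over
    Omega = {||f(x,u)|| >= beta} and the absolute error averaged over its
    complement.
    """
    err = torch.norm(f_true - f_pred, dim=1)      # ||f - f_hat|| per sample
    mag = torch.norm(f_true, dim=1)               # ||f|| per sample
    in_omega = (mag >= beta).float()              # indicator of Omega

    objective = (err / mag.clamp_min(1e-8) * in_omega).mean()
    constraint = (err * (1.0 - in_omega)).mean()
    return objective, constraint
```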

Fig. 2: Model training. The orange curve shows the value of the objective function, the teal curve the value of the constraint function, and the dark blue curve the dual variable $\lambda$. The objective and constraint curves are smoothed and shown in a darker color. The dotted red line marks the constraint level $\epsilon$; the constraint curve must go below it for the problem to have a feasible solution. The curves have different scales and are normalized so that they can be displayed on one graph.

III-A Primal-Dual Algorithm

The problem of sufficiently accurate learning can be formulated as the constrained optimization problem (4). A possible approach to solving this problem is through primal-dual methods. Let us start by defining a vector of multipliers $\lambda \in \mathbb{R}^m_{+}$ and the Lagrangian associated with problem (4)

$\mathcal{L}(\theta, \lambda) = L_0(\theta) + \sum_{i=1}^{m} \lambda_i \big( L_i(\theta) - c_i \big)$   (6)

where, to simplify notation, we have used the shorthand $L_i(\theta)$ defined after (4) and dropped the distribution $\mathcal{D}$ from the expectations. The Lagrangian allows us to define the dual problem as

$D^\star = \max_{\lambda \ge 0} \; \min_{\theta} \; \mathcal{L}(\theta, \lambda)$   (7)

The duality gap is defined by the difference $P^\star - D^\star$, where $P^\star$ is the optimal value of (4). When an optimization problem has zero duality gap (we show in Section III-B that the problem of learning sufficiently accurate models has close to zero duality gap), the solutions of the optimization problems in (4) and (7) are the same. The optimal primal variable, $\theta^\star$, must necessarily minimize the Lagrangian given the optimal dual variable, $\lambda^\star$. Likewise, the optimal dual variable must maximize the Lagrangian given the optimal primal variable. This leads to the widely used primal-dual method, where the Lagrangian is iteratively minimized with respect to the primal variable and maximized with respect to the dual. This minimization/maximization can be solved by computing gradient descent steps with respect to $\theta$ and gradient ascent steps with respect to $\lambda$. The gradient of the Lagrangian with respect to $\theta$ takes the form

$\nabla_\theta \mathcal{L}(\theta, \lambda) = \nabla_\theta L_0(\theta) + \sum_{i=1}^{m} \lambda_i \nabla_\theta L_i(\theta)$   (8)

and the gradient of the Lagrangian with respect to the multiplier $\lambda_i$ yields

$\nabla_{\lambda_i} \mathcal{L}(\theta, \lambda) = L_i(\theta) - c_i$   (9)

Each iteration must be followed by a projection of $\lambda$ onto the positive orthant to ensure it remains non-negative. Given a static distribution of states and actions with which to optimize the model, the procedure is summarized in Algorithm 1. Note that the gradient ascent and descent steps can be modified to include momentum or more sophisticated update rules such as ADAM [16]. In the next section we show that the sufficiently accurate learning problem formulated in (4) has an almost zero duality gap, which motivates the use of the primal-dual algorithm to solve it. One modification that many model learning methods use is to run the model while it is training in order to gather new data that is added to the training set. This is a DAgger-like [28] approach used by many methods [33, 22].
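A minimal primal-dual loop for problem (5), in the spirit of Algorithm 1, is sketched below using the `sufficiently_accurate_terms` helper sketched above. The model, data loader, and hyperparameters are placeholders rather than the settings used in the experiments; the primal step descends the Lagrangian in $\theta$, the dual step ascends in $\lambda$, and $\lambda$ is projected back onto the positive orthant after every update.

```python
import torch

def train_primal_dual(model, dataloader, eps=0.1, beta=0.1,
                      lr_theta=1e-3, lr_lambda=1e-2, epochs=100):
    """Primal-dual training of a forward model under the constraint of Eq. (5)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr_theta)
    lam = torch.tensor(0.0)                   # dual variable, lambda >= 0

    for _ in range(epochs):
        for x, u, x_next in dataloader:       # (state, action, next state) batches
            f_pred = model(x, u)
            obj, con = sufficiently_accurate_terms(x_next, f_pred, beta=beta)

            # Primal step: gradient descent on L(theta, lambda) with lambda fixed.
            lagrangian = obj + lam * (con - eps)
            opt.zero_grad()
            lagrangian.backward()
            opt.step()

            # Dual step: gradient ascent on lambda, then projection onto R_+.
            with torch.no_grad():
                lam += lr_lambda * (con - eps)
                lam.clamp_(min=0.0)
    return model, lam
```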

III-B Almost Zero Duality Gap

By definition of the dual problem, it follows that the dual solution $D^\star$ is always a lower bound for the primal solution $P^\star$, that is, $D^\star \le P^\star$ [3]. The converse is however not true, but we can show that in the case of sufficiently accurate learning the duality gap is small. To that end we consider the following generalization of problem (4)

$\min_{\phi \in \mathcal{F}} \; \mathbb{E}\Big[ \ell_0\big( f(x,u), \phi(x,u) \big)\, \mathbb{1}_{\Omega_0}(x,u) \Big]$
$\text{s.t.} \;\; \mathbb{E}\Big[ \ell_i\big( f(x,u), \phi(x,u) \big)\, \mathbb{1}_{\Omega_i}(x,u) \Big] \le c_i, \quad i = 1, \ldots, m$   (10)

where instead of optimizing the weights of a function approximator, the optimization is done over the space $\mathcal{F}$ of all possible integrable functions $\phi : \mathcal{X} \times \mathcal{U} \to \mathcal{X}$. We now have the following result, shown in [27].

Theorem 1.

Given the optimization problem in (10), if (i) the distribution $\mathcal{D}$ is non-atomic, (ii) the inequality constraints define a compact region, and (iii) there exists a strictly feasible solution (Slater’s condition), then the duality gap is zero.

With this result, a natural question to ask is how the parameterization of functions affects the duality gap. The following theorem from [8] describes sufficient conditions under which a proxy of the duality gap is bounded.

Theorem 2.

For the optimization problems (7) and (10), if,

  • there exists a strictly feasible solution to the primal problem (10);

  • the parameterization of the function space is a universal approximator within error $\epsilon$, i.e., for some $\epsilon > 0$ and for every $\phi \in \mathcal{F}$ there exists a $\theta$ such that $\mathbb{E}\,\big\| \phi(x,u) - \hat{f}_\theta(x,u) \big\| \le \epsilon$;

  • the loss function $\ell$ is expectation-wise Lipschitz continuous, i.e., there exists an $L_\ell > 0$ such that $\mathbb{E}\,\big| \ell\big( f(x,u), \phi_1(x,u) \big) - \ell\big( f(x,u), \phi_2(x,u) \big) \big| \le L_\ell\, \mathbb{E}\,\big\| \phi_1(x,u) - \phi_2(x,u) \big\|$ for all $\phi_1, \phi_2 \in \mathcal{F}$;

then the optimal parameterized dual value is bounded by

(11)

where $D^\star$ is the solution of (7).

The following proposition formalizes that the problem of sufficiently accurate learning in (4) has a small duality gap.

Proposition 1. Sufficiently Accurate Learning and its dual, defined in (5), satisfy the assumptions of Theorem 2. Hence (11) holds for this case.

Proof.

First we look at the Lipschitz condition.
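A sketch of the omitted bound (a reconstruction, assuming the losses in (5) and the threshold $\beta$ that defines $\Omega$): for any two candidate models $\phi_1, \phi_2$ and any $(x,u) \in \Omega$, the reverse triangle inequality together with $\| f(x,u) \| \ge \beta$ gives

$\left| \dfrac{\| f - \phi_1 \|}{\| f \|} - \dfrac{\| f - \phi_2 \|}{\| f \|} \right| \le \dfrac{\| \phi_1 - \phi_2 \|}{\| f \|} \le \dfrac{1}{\beta}\, \| \phi_1 - \phi_2 \|,$

while for $(x,u) \in \Omega^c$ the unnormalized error satisfies $\big|\, \| f - \phi_1 \| - \| f - \phi_2 \| \,\big| \le \| \phi_1 - \phi_2 \|$. Taking expectations yields a Lipschitz constant of $\max\{1, 1/\beta\}$.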

Thus, the loss is expectation-wise Lipschitz continuous. A strictly feasible solution exists since the ground truth model $f$ is representable in $\mathcal{F}$ and attains zero error, which satisfies the constraints strictly. Lastly, the parameterization used is the class of neural networks, which are universal function approximators. Therefore the conditions for Theorem 2 are fulfilled. ∎

This result states that for a large enough neural network, the gap between the optimal solution with no function approximation and the optimal solution to the parameterized dual problem scales with $\epsilon$. While this does not mean that the optimal parameterized dual problem can be solved exactly, it does motivate the use of the primal-dual method. A sample training curve for solving Eq. 5 is shown in Fig. 2. More details about the specific training parameters are given in Section V.

IV Control with Learned Models

The previous section describes a method for learning a sufficiently accurate model for controller design. In this section we describe how the learned model can be used to that end. For many planning algorithms, such as A* or RRT [17], the model can simply be used to generate more accurate motion primitives. For optimization-based planners, there are a variety of ways to use the model. One such method is to write out costs that explicitly include the model.

A deterministic policy is a function from the state space $\mathcal{X}$ to the action space $\mathcal{U}$. A specific way to describe a desired policy is to minimize some cost associated with the performance of the system. This could be, for instance, the difference between the predicted state of the model and some desired state. The selected action typically has to satisfy constraints imposed by the system, e.g., the action is bounded by the maximum torque of the motors in a robotic system, or by obstacles in the environment. Denote by $g(x,u) \le 0$ the constraints imposed on the system and define the following policy

$\pi(x_0) = \operatorname*{arg\,min}_{u_0, \ldots, u_{T-1}} \; J_T(\hat{x}_T) + \sum_{t=0}^{T-1} J(\hat{x}_t, u_t)$
$\text{s.t.} \;\; \hat{x}_{t+1} = \hat{f}_\theta(\hat{x}_t, u_t), \quad \hat{x}_0 = x_0, \quad g(\hat{x}_t, u_t) \le 0$   (12)

where $J_T$ is the cost on the final state and $J$ is the cost on each step. In addition, $\hat{x}_{t+1} = \hat{f}_\theta(\hat{x}_t, u_t)$ with $\hat{x}_0 = x_0$, so that $\hat{x}_T$ is the model applied $T$ times to $x_0$. Observe that the policy depends on the learned model, and thus it is also a function of the parameters $\theta$. In particular, if the residual error of the learned model dynamics is low, we can expect good performance from such policies.

Since the model is a neural network, it is easy to obtain gradients, which can be used with the same primal-dual method described in Sec. III-A. The only difference is that instead of optimizing the model weights, the solver optimizes the inputs (the action sequence). This procedure is shown in Algorithm 2, where the Lagrangian is defined as

$\mathcal{L}(u_{0:T-1}, \lambda) = J_T(\hat{x}_T) + \sum_{t=0}^{T-1} J(\hat{x}_t, u_t) + \sum_{t=0}^{T-1} \lambda_t^\top g(\hat{x}_t, u_t)$   (13)

Learning a model and using such an optimization problem as the policy allows different controllers to be designed for different goals and constraints. This optimization problem can be solved repeatedly, taking only one action each time, in a Model Predictive Control framework. An alternative is to formulate optimization problems in which the dynamics appear as constraints, as in direct collocation [14].
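As an illustration of how the learned model enters the controller, the sketch below optimizes an action sequence by gradient descent through $\hat{f}_\theta$ on the cost of (12). For brevity it handles the constraint $g(x,u) \le 0$ with a fixed penalty weight instead of the primal-dual update on the multipliers in (13); all function and parameter names are placeholders.

```python
import torch

def optimize_actions(model, x0, step_cost, final_cost, constraint,
                     action_dim, horizon=5, iters=100, lr=1e-2, lam=10.0):
    """Gradient-based action selection through a learned forward model.

    model(x, u) -> next state; step_cost(x, u) and final_cost(x) return scalar
    costs; constraint(x, u) <= 0 encodes g(x, u) <= 0 from Eq. (12).  The
    constraint is handled here with a fixed penalty weight `lam` instead of a
    primal-dual update on the multipliers.
    """
    actions = torch.zeros(horizon, action_dim, requires_grad=True)
    opt = torch.optim.Adam([actions], lr=lr)

    for _ in range(iters):
        x, cost = x0, 0.0
        for t in range(horizon):
            cost = cost + step_cost(x, actions[t]) \
                        + lam * torch.relu(constraint(x, actions[t]))
            x = model(x, actions[t])              # roll the learned model forward
        cost = cost + final_cost(x)

        opt.zero_grad()
        cost.backward()
        opt.step()

    # Model predictive control: apply only the first action, then re-plan.
    return actions[0].detach()
```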

V Experiments

We apply the sufficiently accurate model learning framework to a robot arm bouncing a ball on a paddle in order to demonstrate its effects. The paddle has 5 degrees of freedom: the position in three-dimensional space and the pitch and roll angles. We model the impact of the ball on the paddle: the model takes as input the velocities of the paddle and the ball before they collide, as well as the orientation of the paddle, and outputs the velocity of the ball after the collision. The controller is then tasked with solving the optimization problem

$\min_{u} \; \big\| h\big( \hat{f}_\theta(x, u) \big) - p^{\text{goal}} \big\|^2$
$\text{s.t.} \;\; v_{\min} \le \| v_{\text{rel}} \| \le v_{\max}$   (14)

where $v_{\text{rel}}$ is the relative velocity of the ball with respect to the paddle, $h(\cdot)$ is a function that outputs where on the xy plane the ball will hit next, and $p^{\text{goal}}$ is the desired bounce location. The actions $u$ consist of a chosen roll and pitch angle as well as a desired paddle velocity. The goal of the controller is to bounce the ball at some specific xy location while obeying action constraints and not hitting any obstacles. The simulation was created using libraries from the DeepMind Control Suite [31].

While simulation has drawbacks, such as its inability to accurately capture the full noise characteristics of a system, it is a useful tool for testing model learning. We can inject known errors into a model and compare how our method performs relative to the real model. This is difficult or impossible to do in real life.

We will first present experiments in which we learn a full model of the system; that is, the neural network is tasked to output the full model $\hat{f}_\theta(x,u)$. Next, we will present experiments in which a residual model is learned. In the residual model, the neural network is tasked to output the residual $r_\theta(x,u)$ such that $\hat{f}_\theta(x,u) = \bar{f}(x,u) + r_\theta(x,u)$, where $\bar{f}$ is a (possibly wrong) analytic model. This is the situation where we have an initial guess of the system model from physical measurements, which can then be fine-tuned with data.
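A residual model of this form can be implemented as a thin wrapper that adds a learned correction to the analytic prediction. The sketch below is illustrative; `analytic_model` stands in for the (possibly wrong) analytic impact model and `residual_net` for the network trained with the constrained objective.

```python
import torch
import torch.nn as nn

class ResidualModel(nn.Module):
    """Learned correction r_theta added to a fixed analytic model."""

    def __init__(self, analytic_model, residual_net):
        super().__init__()
        self.analytic_model = analytic_model   # fixed, possibly inaccurate
        self.residual_net = residual_net       # trained with the constrained loss

    def forward(self, x, u):
        xu = torch.cat([x, u], dim=-1)
        return self.analytic_model(x, u) + self.residual_net(xu)
```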

Fig. 3: Analytic vs. Residual Model. The (x, y, z) trajectory of the ball is plotted for both a wrong analytic model and the residual model learned on top of it. The analytic model has an error of 0.2 radians in its observation of the roll angle of the paddle. This leads to poor performance by the optimizing controller. The residual model corrects these errors and can track the desired (x, y) location quite well.
Fig. 4: Model errors vs. state magnitude. The left side shows data from a model trained with Eq. 1; the right side shows data from a model trained with Eq. 5. The scatter plots show the errors of running each model on a validation set. The unconstrained problem achieves a good loss in expectation, but the errors are distributed poorly: there can be large errors for states with a small magnitude. The normalized loss on the right allows large states to have larger errors while keeping the errors on small states small.

V-A Full Model Learning

To train the full model, we first collect a dataset by running the controller described by Eq. 14 on the system, where we replace $\hat{f}_\theta$ with a faulty analytic model. The faulty analytic model is mistaken about the roll angle at which the robot is holding the paddle (off by 0.1 radians). While this level of angular error can be hard to measure, it has a large effect on how the ball bounces. We collect data while simulating the system for the equivalent of 42 real-world minutes. The analytic model used for the velocity of the ball after impact is given as

$v_{\text{rel}}^{+} = v_{\text{rel}} - (1 + \gamma)\, \big( v_{\text{rel}}^{\top} n \big)\, n$   (15)

where $v_{\text{rel}}$ is the relative velocity of the ball with respect to the paddle, $\gamma$ is a coefficient of restitution, and $n$ is the normal vector representing the orientation of the paddle. The normal vector is observed with an error.
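For concreteness, (15) amounts to reflecting the relative velocity about the observed paddle normal and scaling by the coefficient of restitution. The sketch below is illustrative; the restitution value and the axis used to perturb the normal are assumptions, with the perturbation emulating the faulty roll-angle observation described above.

```python
import math
import torch

def analytic_bounce(v_rel, normal, gamma=0.8, roll_error=0.0):
    """Relative ball velocity after impact, as in Eq. (15).

    v_rel:  relative velocity of the ball with respect to the paddle, shape (3,)
    normal: unit normal of the paddle, shape (3,)
    gamma:  coefficient of restitution (illustrative value)
    roll_error: optional roll-angle offset (radians) applied to the normal to
                emulate the faulty analytic model.
    """
    if roll_error != 0.0:
        c, s = math.cos(roll_error), math.sin(roll_error)
        rot = torch.tensor([[1.0, 0.0, 0.0],
                            [0.0,   c,  -s],
                            [0.0,   s,   c]])      # rotation about the x axis
        normal = rot @ normal
    return v_rel - (1.0 + gamma) * torch.dot(v_rel, normal) * normal
```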

First, we examine the distribution of errors, comparing a simple model learning approach, in which just the loss in Eq. 1 is minimized, with the objective defined in Eq. 5. They will be denoted as the unconstrained and constrained models, respectively. The neural network model that is used is a simple fully connected network with 2 hidden layers of 128 neurons each and parametric rectified linear unit activations [12]. The network was trained using the ADAM optimizer with an initial learning rate of 1e-3. For the constrained example, both $\beta$ and $\epsilon$ were chosen to be 0.1. The results in Fig. 4 show that the model trained with the sufficiently accurate objective has a different error distribution.
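For reference, the architecture described above can be written as follows; the input and output dimensions are placeholders, and this mirrors the textual description rather than released code.

```python
import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    """Two hidden layers of 128 units with PReLU activations, as described above."""

    def __init__(self, state_dim, action_dim, output_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 128), nn.PReLU(),
            nn.Linear(128, 128), nn.PReLU(),
            nn.Linear(128, output_dim),
        )

    def forward(self, x, u):
        return self.net(torch.cat([x, u], dim=-1))

# Trained with torch.optim.Adam(model.parameters(), lr=1e-3) under the
# primal-dual loop, with beta = eps = 0.1 in Eq. (5).
```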

One hypothesis that can be drawn from this figure is that the gradients of the model may be less noisy, as there are no sudden jumps in error. For optimization-based controllers and planners, this is a big benefit. To see what effect this has on the controller, the controller is run with each model 500 times with different goal locations as well as different velocity constraints $v_{\min}$ and $v_{\max}$. The goal location is uniformly distributed over a fixed region, $v_{\min}$ is uniformly sampled from an interval, and $v_{\max}$ is selected to lie above $v_{\min}$. Of these 500 experiments, we consider it a failure if the ball falls off the paddle or is hit far away and cannot recover. The “Full Model” rows of Table I show that the failure rate of the sufficiently accurate model is much lower, and that its mean error when it does not fail is also lower. The mean error is the mean position error (to the desired location) over time. It is a crude measure of both how fast the controller gets to the goal and how well it stays on it.

                              Unconstrained    Constrained
Full Model       Failure           –                –
                 Mean error      0.2124             –
Residual Model   Failure           –                –
                 Mean error      0.156              –
TABLE I: Controller performance with learned models

V-B Residual Model Learning

In many scenarios, we will have a base analytic model that may be wrong but that we would like to improve upon, rather than learning a full model from scratch. Using the same dataset described in the previous section, we can train a residual model using the unconstrained form in Eq. 1 and a constrained residual model using Eq. 5. All architecture details and hyperparameters used for the residual model are the same as in the full model training. With the residual models there are no failures, as the base analytic model performs well enough to prevent them. The mean error rates are shown in Table I. Trajectories of the ball using the analytic model and the constrained residual model are shown in Fig. 3. We can see that with a wrong analytic model, the controller does not track the desired position well. As expected, it has a consistent bias to overshoot when trying to correct its position and ends up with a jagged trajectory. The constrained residual model tracks the position much better and does not deviate as the analytic model does.

We compare the constrained residual model with analytic models that have varying levels of error in Fig. 5. The error is computed in the same way as in Table I. The residual model has both a lower mean error and a much tighter variance, which means it has more consistent performance.

Fig. 5: Analytic vs Residual Model. A residual model is trained using the same dataset for the same number of iterations with different base analytic models of increasing error. Both the analytic and constrained residual models are then evaluated by running them for 50 different task goals and constraints and their mean errors are computed. The transparent regions show half of a standard deviation above and below the mean.

VI Discussion and Conclusion

In looking at model learning, we have seen that the Sufficiently Accurate formulation can bring better results simply by changing the optimization objective. We believe this is because the formulation can smooth out error characteristics and provide better gradients for the controller to work with. This methodology is orthogonal to most other model learning algorithms, as it simply suggests using constraints as a way to control errors and gradients.

There are several drawbacks to this method. One is that computing derivatives of the model through a neural network can be computationally expensive. This makes it more difficult to deploy on systems that require fast control loops. This can possibly be alleviated by training a fast policy for specific tasks that imitates the more expensive model-based solution. Another drawback is that the constraint levels $c$ in Eq. 4 are determined by hand. They may need to be adjusted if the problem is infeasible or the constraints are not tight enough.

There are several branches of future exploration. One is to implement this method on a robot arm to test how it handles other types of errors that can occur. Another is to explore different types of constraints on different portions of the state space, or an automated way to choose constraints. Testing how different controllers and planners interact with these models can inform us of other characteristics of models that may be important to study.

Acknowledgments

The authors would like to thank Pratik Chaudari for valuable conversations as well as funding by NSF Grant No. DGE-1321851 and the Intel Science and Technology Center for Wireless Autonomous Systems. Any opinions, findings, and conclusions do not necessarily reflect the views of the NSF.

References

  1. J. Achiam, D. Held, A. Tamar and P. Abbeel (2017) Constrained policy optimization. arXiv preprint arXiv:1705.10528.
  2. B. Amos, I. Jimenez, J. Sacks, B. Boots and J. Z. Kolter (2018) Differentiable MPC for end-to-end planning and control. In Advances in Neural Information Processing Systems, pp. 8299–8310.
  3. S. Boyd and L. Vandenberghe (2004) Convex optimization. Cambridge University Press.
  4. A. Byravan, F. Leeb, F. Meier and D. Fox (2017) SE3-Pose-Nets: structured deep dynamics models for visuomotor planning and control. arXiv preprint arXiv:1710.00489.
  5. J. Cerviño, J. A. Bazerque, M. Calvo-Fullana and A. Ribeiro (2019) Meta-learning through coupled optimization in reproducing kernel Hilbert spaces. In 2019 American Control Conference (ACC), pp. 4840–4846.
  6. M. Deisenroth and C. E. Rasmussen (2011) PILCO: a model-based and data-efficient approach to policy search. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 465–472.
  7. M. Diehl, H. G. Bock and J. P. Schlöder (2005) A real-time iteration scheme for nonlinear optimization in optimal feedback control. SIAM Journal on Control and Optimization 43 (5), pp. 1714–1736.
  8. M. Eisen, C. Zhang, L. F. Chamon, D. D. Lee and A. Ribeiro (2018) Learning optimal resource allocations in wireless systems. arXiv preprint arXiv:1807.08088.
  9. C. Finn, P. Abbeel and S. Levine (2017) Model-agnostic meta-learning for fast adaptation of deep networks. arXiv preprint arXiv:1703.03400.
  10. Y. Gal, R. McAllister and C. E. Rasmussen. Improving PILCO with Bayesian neural network dynamics models.
  11. M. T. Gillespie, C. M. Best, E. C. Townsend, D. Wingate and M. D. Killpack (2018) Learning nonlinear dynamic models of soft robots for model predictive control with neural networks. In 2018 IEEE International Conference on Soft Robotics (RoboSoft), pp. 39–45.
  12. K. He, X. Zhang, S. Ren and J. Sun (2015) Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026–1034.
  13. M. I. Jordan and D. E. Rumelhart (1992) Forward models: supervised learning with a distal teacher. Cognitive Science 16 (3), pp. 307–354.
  14. M. Kelly (2017) An introduction to trajectory optimization: how to do your own direct collocation. SIAM Review 59 (4), pp. 849–904.
  15. A. Khan, C. Zhang, N. Atanasov, K. Karydis, V. Kumar and D. D. Lee (2017) Memory augmented control networks. arXiv preprint arXiv:1709.05706.
  16. D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  17. S. M. LaValle and J. J. Kuffner Jr (2001) Randomized kinodynamic planning. The International Journal of Robotics Research 20 (5), pp. 378–400.
  18. G. Lee, S. S. Srinivasa and M. T. Mason (2017) GP-iLQG: data-driven robust optimal control for uncertain nonlinear dynamical systems. arXiv preprint arXiv:1705.05344.
  19. S. Levine and P. Abbeel (2014) Learning neural network policies with guided policy search under unknown dynamics. In Advances in Neural Information Processing Systems, pp. 1071–1079.
  20. S. Levine, C. Finn, T. Darrell and P. Abbeel (2016) End-to-end training of deep visuomotor policies. The Journal of Machine Learning Research 17 (1), pp. 1334–1373.
  21. S. Levine and V. Koltun (2013) Guided policy search. In International Conference on Machine Learning, pp. 1–9.
  22. A. Nagabandi, G. Kahn, R. S. Fearing and S. Levine (2018) Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 7559–7566.
  23. A. V. Nair, V. Pong, M. Dalal, S. Bahl, S. Lin and S. Levine (2018) Visual reinforcement learning with imagined goals. In Advances in Neural Information Processing Systems, pp. 9209–9220.
  24. D. Pathak, P. Agrawal, A. A. Efros and T. Darrell. Curiosity-driven exploration by self-supervised prediction.
  25. D. Pathak, P. Mahmoudieh, G. Luo, P. Agrawal, D. Chen, Y. Shentu, E. Shelhamer, J. Malik, A. A. Efros and T. Darrell. Zero-shot visual imitation.
  26. N. Ratliff, M. Zucker, J. A. Bagnell and S. Srinivasa (2009) CHOMP: gradient optimization techniques for efficient motion planning.
  27. A. Ribeiro (2012) Optimal resource allocation in wireless communication and networking. EURASIP Journal on Wireless Communications and Networking 2012 (1), pp. 272.
  28. S. Ross, G. Gordon and D. Bagnell (2011) A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pp. 627–635.
  29. J. Schulman, J. Ho, A. X. Lee, I. Awwal, H. Bradlow and P. Abbeel (2013) Finding locally optimal, collision-free trajectories with sequential convex optimization. In Robotics: Science and Systems, Vol. 9, pp. 1–10.
  30. R. S. Sutton (1990) Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Machine Learning Proceedings 1990, pp. 216–224.
  31. Y. Tassa, Y. Doron, A. Muldal, T. Erez, Y. Li, D. d. L. Casas, D. Budden, A. Abdolmaleki, J. Merel and A. Lefrancq (2018) DeepMind Control Suite. arXiv preprint arXiv:1801.00690.
  32. Y. Tassa, N. Mansard and E. Todorov (2014) Control-limited differential dynamic programming. In 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 1168–1175.
  33. G. Williams, N. Wagener, B. Goldfain, P. Drews, J. Rehg, B. Boots and E. Theodorou. Information theoretic MPC for model-based reinforcement learning. In Proceedings of the 2017 IEEE Conference on Robotics and Automation (ICRA).
  34. G. Williams, N. Wagener, B. Goldfain, P. Drews, J. M. Rehg, B. Boots and E. A. Theodorou. Information theoretic MPC using neural network dynamics.