# Towards Planning and Control of Hybrid Systems with Limit Cycle using LQR Trees

## Abstract

We present a multi-query recovery policy for a hybrid system with a goal limit cycle. The sample trajectories and the hybrid limit cycle of the dynamical system are stabilized using locally valid Time-Varying LQR (TVLQR) controller policies that probabilistically cover a bounded region of state space. The original LQR-tree algorithm builds such trees for nonlinear, non-hybrid systems stabilized about a fixed point, such as a pendulum or a cart-pole. We leverage the idea of LQR trees to plan with a continuous control set, unlike methods that rely on discretization, such as dynamic programming, which struggle to capture the exact event of a discrete transition in hybrid dynamical systems. We test the algorithm on a compass gait model by stabilizing a dynamic walking hybrid limit cycle with point-foot contact from random initial conditions. We show simulation results in which the system returns to stable behavior under initial position or velocity perturbations and noise.

## I Introduction

Bipedal walking robots can easily access man-made environments and enable friendly interaction with humans, but such systems are highly nonlinear, with hybrid dynamics that pose a challenge to conventional control design. Controllers designed to track a trajectory of a highly nonlinear system provide only a reflex policy and can succeed at most in a local region of attraction around the tracked trajectory. To find a policy from any given initial state, one can discretize the state space and use dynamic programming. However, to stabilize a hybrid system, which has continuous dynamics punctuated by discrete transitions, one must discretize at a fine resolution to capture the jump event with reasonable tolerance. An approach to controlling a compass gait, in which many approximations are made to adapt existing methods to hybrid systems, is described in [10].

Also, there is a large variety of robots that consist of a logical discrete-event decision-making system interacting with a continuous-time process. The action capabilities of such robots cannot be captured with a single controller. These types of hybrid systems have multiple system dynamics governing their behavior and can therefore have multiple discontinuities in any of their trajectories in state space. With the development of increasingly sophisticated robots, the need for a hybrid controller is imperative. Our work focuses on extending a relatively new, continuous method of controller design to such hybrid systems.

This approach is inspired by randomized feedback motion planning methods, which cover a bounded region of the state space with locally valid linear quadratic regulator (LQR) policies [11] that lead to a target trajectory. Tedrake [11] proposed LQR trees as a scalable algorithm that avoids the pitfalls of discretization in dynamic programming and uses continuous control actions. However, the original algorithm did not consider hybrid systems in its formulation and was tested only on a relatively simple platform, a pendulum. Moreover, the original method only builds the region of attraction of an LQR controller around a stabilizable fixed point. We focus on extending LQR trees to stabilize a periodic limit cycle of a hybrid system. That is, we cover a bounded region of state space with funnels around sample trajectories that lead to the region of attraction of the goal limit cycle of a hybrid system. A funnel is the region of attraction around a trajectory created by a controller; it acts like a vacuum tube, capturing any state that enters it and driving that state toward the end of the trajectory without letting it leave. These regions of attraction can be found using Lyapunov function verification. Tedrake [17] shows that recent methods using Sums-of-Squares (SoS) optimization to verify Lyapunov functions allow very fast determination of these regions of attraction.

Reist et al. [16] provide a very similar algorithm that verifies the funnels empirically through simulation instead of SoS optimization. This method mainly aims at simplifying the implementation of the verification process at the cost of time complexity. It is tested on a more complex, 2-DoF cart-pole system. Here the size of a funnel is initialized to a very large value and is trimmed for every sample that fails to reach the goal under the time-varying LQR policy of the nominal trajectory. Hence the method requires a huge number of samples, both to build the tree and to verify funnels. However, the number of nodes in the tree and several other results would be unchanged even with formal verification, so we compare our results to this implementation.^{1}

The original LQR-tree algorithm ensures that any sequence of such funnels ends at the region of attraction of the final controller, which stabilizes the system at a fixed point. In this paper, we design sequences of funnels that end at the goal limit cycle. For this, we ensure that the size of each funnel that directly connects to the limit cycle is, at its end, smaller than the funnel of the limit cycle itself. Also, we use trajectory optimization to generate hybrid trajectories instead of a single continuous trajectory, which enables our algorithm to work for hybrid systems.

## II Related Work

Stabilizing bipedal gait has been of great interest among robotics researchers, and approaches fall into two broad classes: 1) those based on the Zero Moment Point (ZMP) principle, and 2) passive dynamic and limit cycle walkers. Robots like Honda's ASIMO use the ZMP technique [7] to keep the dynamical degrees of freedom fully actuated. The ZMP criterion is to keep the center of pressure within the support polygon of the stance foot. However, the motions resulting from these techniques are very unnatural and inefficient.

There is a large body of work that aims to find the most stable limit cycle. Dai et al. [18] formulate an optimization problem to find the limit cycle that maximizes robustness against external disturbances. They do this by minimizing the mean controller cost of limit cycle states encountering collision, so that the post-impact reset states lie close to the limit cycle. Our method, in contrast, is indifferent to the choice of goal limit cycle and brings the system back to a given goal limit cycle from far outside its region of attraction.

To deal with states that are far outside this region of attraction, [5] and [12] plan trajectories beforehand that act as a look-up table providing a policy for any given initial state. The recent trend of verifying Lyapunov functions using convex optimization gave birth to several effective methods, such as [11] and [16], that did not exist before due to the time overhead of Lyapunov verification. Numerical methods for computing regions of finite-time invariance [15] (“verification of funnels”) around solutions of polynomial differential equations are used extensively in this paper. In fact, the idea is to cover the state space with funnels around sample trajectories that lead to the region of attraction of the goal trajectory. These funnels have non-zero volume in state space and hence can effectively cover a bounded region filled with nominal trajectories. Moreover, it takes relatively few funnels to cover a given region of state space compared to the number of nodes required by methods like the Probabilistic Road Map (PRM) [2].

Several previous works describe the use of sample paths as the fundamental representation of policies. Initial attempts to use sampling-based planners to control nonlinear hybrid systems use the Rapidly-exploring Random Tree (RRT) [5]. Here, the state in the tree nearest to a randomly sampled state is forward-simulated with a random control input. Because every edge is generated by forward simulation, every edge in the tree is a feasible trajectory, giving a policy from the start node to any other node in the tree (which may include the goal if a path is feasible). Most of the work following [5] can be categorized as improving 1) the sampling distribution [12], 2) the distance metric [11, 16, 21], or 3) the extend operation [11, 16]. The LQR-tree algorithm [11] uses the controller cost function as the distance metric, which improves the success rate of finding a trajectory via direct collocation [1] from the nearest point on the tree to the sampled point (the extend operation). Our work focuses on extending the capability of LQR trees to stabilize a hybrid trajectory.

The remainder of this paper is outlined as follows. Section III sets the mathematical premise for the proposed extension. Section IV explains the proposed method in detail: the basic principle of the Time-Varying Linear Quadratic Regulator (TVLQR) and how it is used to stabilize a goal limit cycle, the estimation of the limit cycle itself, the details of direct collocation trajectory optimization for hybrid trajectories, and how these pieces combine to cover a bounded region of state space with recovery policies. The testbed and the collision dynamics responsible for the discrete jumps in state space are described in Section V. The algorithm is experimentally evaluated in simulation on the compass gait in Section VI. Sections VII and VIII offer a discussion and concluding remarks.

## III Problem Statement

Consider a hybrid system with continuous dynamics $\dot{x} = f_i(x, u)$ in each mode $i$ and discrete transitions at guard surfaces, with a stabilizable goal limit cycle given by $(x_G(t), u_G(t))$, where $x_G(t)$ is the limit cycle trajectory and $u_G(t)$ is the open-loop control law. Let $X$ be the entire state space of this system and $X_s \subseteq X$ be the set of stabilizable states of this space. The LQR-tree algorithm has a number of feedback-stabilized sample trajectories, which we denote $x_j(t)$ for trajectory $j$ in the tree. Let each trajectory start at time $t_0^j$ and end at time $t_f^j$. We use a controller $u_j(t, x)$ to stabilize the system around every trajectory, which results in a region of attraction $B_j(t)$ around this trajectory such that $B_j(t) \subseteq X_s$. From the LQR trees, for any $x(t_0^j) \in B_j(t_0^j)$ we have $x(t_f^j) \in B_k(t)$ for some parent trajectory $k$. We design the system such that the same holds for every trajectory, so that the system always ends in the respective parent's funnel at the end of each child's trajectory, ultimately leading to the funnel of the goal limit cycle, $B_G$. Our primary objective is that, as the number of sample trajectories increases, the union of all the funnels covers the entire stabilizable state space, also accounting for the discrete state transitions, i.e., $\bigcup_j B_j \to X_s$.

## IV Method

### IV-A TVLQR feedback stabilization for hybrid systems

We use a TVLQR to stabilize the system around a given trajectory. We first explain the theory behind the TVLQR controller, which is necessary to estimate the funnels. Consider the sub-problem of designing a time-varying LQR feedback based on a time-varying linearization along a nominal trajectory. Consider a smoothly differentiable, nonlinear system $\dot{x} = f(x, u)$ with stabilizable limit cycle trajectory $x_0(t)$ and $u_0(t)$. Let $A(t) = \partial f / \partial x$ and $B(t) = \partial f / \partial u$ be the linearizations of the system dynamics with respect to state and input, evaluated along the nominal trajectory. For now, assume we have the optimal cost-to-go matrix $S_G(t)$ of the TVLQR controller (around the limit cycle) for the limit cycle trajectory. Determination of $S_G(t)$ is discussed later in this subsection.

The optimal cost-to-go for any nominal trajectory of the controller is given by [11] as,

$$J^*(\bar{x}, t) = \bar{x}^T S(t)\, \bar{x} \qquad \text{(1)}$$

where $S(t)$ is the solution to the Riccati equation

$$-\dot{S}(t) = Q - S(t) B(t) R^{-1} B(t)^T S(t) + S(t) A(t) + A(t)^T S(t) \qquad \text{(2)}$$

and the optimal feedback policy of the TVLQR controller is

$$\bar{u}^*(t) = -R^{-1} B(t)^T S(t)\, \bar{x}(t) = -K(t)\, \bar{x}(t) \qquad \text{(3)}$$

where $\bar{x} = x - x_0(t)$ and $\bar{u} = u - u_0(t)$ are the state and input deviations from the nominal values, and Q and R penalize $\bar{x}$ and $\bar{u}$ respectively in the LQR cost function. From the above description of obtaining $S(t)$ for a trajectory, we can conclude: 1) we need the cost-to-go matrix at the final time step, $S(t_f)$, from which we can integrate backwards to obtain $S(t)$; and 2) the controller thus obtained promises only to put the system finally within an ellipsoid $\bar{x}^T S(t_f)\, \bar{x} \leq \rho_f$ for some threshold $\rho_f$ (intuitively, the final cost-to-go is less than $\rho_f$). The nominal trajectory $x_1(t)$, stabilized by a controller $u_1$, terminates at a new nominal trajectory $x_2(t)$, stabilized by a controller $u_2$. Condition 2) requires that $u_2$ must be able to stabilize any state in the ellipsoid resulting from applying $u_1$. The same holds for trajectories following $x_2(t)$. As we keep transitioning the system from one trajectory to another, the system eventually must find itself in a stabilizable state or on the limit cycle.
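As a concrete, hedged illustration of points 1) and 2) above, the sketch below integrates the Riccati equation (Eq. 2) backward from a terminal cost-to-go using scipy. The callables `A_of_t` and `B_of_t` are assumptions standing in for the linearization along a nominal trajectory; this is a minimal sketch, not the paper's implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def tvlqr_gains(A_of_t, B_of_t, Q, R, S_f, t0, tf, n):
    """Integrate the Riccati ODE of Eq. 2 backward in time from the
    terminal cost-to-go S(tf) = S_f, and return the time-varying gain
    K(t) = R^{-1} B(t)^T S(t) of Eq. 3."""
    R_inv = np.linalg.inv(R)

    def riccati_rhs(t, s_flat):
        S = s_flat.reshape(n, n)
        A, B = A_of_t(t), B_of_t(t)
        S_dot = -(Q - S @ B @ R_inv @ B.T @ S + S @ A + A.T @ S)
        return S_dot.ravel()

    # solve_ivp integrates from tf down to t0 when the span is reversed
    sol = solve_ivp(riccati_rhs, (tf, t0), S_f.ravel(), dense_output=True)

    def K(t):
        S = sol.sol(t).reshape(n, n)
        return R_inv @ B_of_t(t).T @ S

    return K
```

A convenient sanity check: for a time-invariant system initialized at the solution of the algebraic Riccati equation, the backward integration simply holds $S(t)$ constant.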

Consider a limit cycle that starts at $t_0$ and ends at $t_f$. Since the limit cycle ends at its start, i.e., $x(t_0) = x(t_f)$, we also have $S(t_0) = S(t_f)$. One method to find $S_G(t)$ is to initialize $S(t_f)$ to some arbitrary positive definite matrix and repeatedly integrate the Riccati equation backwards around the cycle until $S(t_0)$ and $S(t_f)$ converge to each other.^{2}^{3}

For a hybrid system, such as the compass gait, we have continuous dynamics and discrete transitions. Hence we cannot obtain $S(t)$ from a single Riccati equation, since a solution to a first-order quadratic differential equation must be smoothly differentiable. Therefore we have different Riccati equations for the different modes the system operates in, and a discrete 'jump' event where we switch from one equation to another. This is called the jump Riccati equation [14]. The collision dynamics of the system is a function $\Delta$ that maps states in one mode just before the collision event (guards that cause discrete transitions) to states in the next mode just after the collision. That is, $x(t_c^+) = \Delta(x(t_c^-))$, where $t_c^-$ and $t_c^+$ are the instants just before and after the collision event. We linearize the collision dynamics of the system to get $\Gamma = \partial \Delta / \partial x$. The cost-to-go matrix during the jump event is given by [14] as,
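A minimal sketch of how the jump update can be applied between the per-mode Riccati solutions. Here `Delta` is a hypothetical Python callable for the reset map, and its Jacobian is approximated by finite differences (an assumption; an analytic Jacobian would normally be used):

```python
import numpy as np

def jump_riccati(S_plus, Gamma):
    """Map the cost-to-go across a discrete transition:
    S(t_c^-) = Gamma^T S(t_c^+) Gamma, where Gamma is the Jacobian
    of the reset map Delta at the nominal impact state."""
    return Gamma.T @ S_plus @ Gamma

def linearize_reset(Delta, x_nominal, eps=1e-6):
    """Finite-difference Jacobian of the reset map (assumption: Delta
    maps the pre-impact state to the post-impact state)."""
    n = x_nominal.size
    x0 = Delta(x_nominal)
    Gamma = np.zeros((x0.size, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        Gamma[:, j] = (Delta(x_nominal + dx) - x0) / eps
    return Gamma
```

The backward integration of the next mode's Riccati equation would then start from the `jump_riccati` output instead of an arbitrary terminal condition.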

$$S(t_c^-) = \Gamma^T S(t_c^+)\, \Gamma \qquad \text{(4)}$$

### IV-B Funnel around a trajectory

Consider a system with a closed-loop limit cycle whose region of attraction is given by $B_G = \{x \mid J_G^*(\bar{x}, t) \leq \rho_G\}$, where $J_G^*$ is the cost-to-go of the limit cycle's TVLQR controller. $J_G^*$ can be determined empirically as described in subsection IV-A. Let $x_i(t)$ be a trajectory that takes the system from an arbitrary start state to a state on the limit cycle. The closed-loop limit cycle requires $x_i(t)$ to end at a state $x(t_f^i)$ such that $x(t_f^i) \in B_G$. We can view this as the allowed uncertainty at the tail end of the trajectory $x_i(t)$. Hence, for a time-varying system, instead of defining a discrete region of attraction at every time step, we define a funnel^{4} over the whole trajectory.

Mathematically, we can define the funnel as the time-varying region $B_i(t)$, where

$$B_i(t) = \{x \mid \phi(x, t, t_f^i) \in B_G\} \qquad \text{(5)}$$

where $\phi(x, t, t_f^i)$ is the function that forward-simulates the closed-loop trajectory from $t$ to $t_f^i$, and $B_G$ is the region of attraction around the goal trajectory.

For any such trajectory $x_i(t)$, we use the cost-to-go, which is time-varying here, as the Lyapunov candidate and find the largest $\rho_i(t)$ in the interval $[t_0^i, t_f^i]$ using SoS programming and binary search for $\rho_i$ as in [11], which gives us the region

$$B_i(t) = \{x \mid V(x, t) \leq \rho_i(t)\} \qquad \text{(6)}$$

where $V(x, t)$, the value function of the closed-loop system, is nothing but $J^*(\bar{x}, t)$, the optimal cost-to-go from Eq. 1.

This region must satisfy Eq. 5. Similarly, for the goal, the region of attraction is

$$B_G = \{x \mid J_G^*(\bar{x}, t) \leq \rho_G\} \qquad \text{(7)}$$

where $\rho_G$ represents a constraint on the final value, such that $V(x, t_f^i) \leq \rho_G$. For a time-varying system, it is not reasonable to talk about asymptotic stability, as this is defined only as time goes to infinity. However, we can still say that the cost-to-go function is decreasing and the system is converging to the trajectory over the duration of the trajectory. The bounded final-value condition can be verified by proving that $B_i(t)$ is closed over $[t_0^i, t_f^i]$. The set is closed if we have,

$$V(x, t) > 0 \quad \forall\, x \in B_i(t),\; x \neq x_i(t) \qquad \text{(8)}$$

$$\dot{V}(x, t) \leq \dot{\rho}_i(t) \quad \forall\, x \in \partial B_i(t) \qquad \text{(9)}$$

where $\partial B_i(t)$ is the boundary of the funnel $B_i(t)$,

$$\partial B_i(t) = \{x \mid V(x, t) = \rho_i(t)\} \qquad \text{(10)}$$

The first condition (Eq. 8) is satisfied by the LQR derivation, which ensures that $S(t)$ is positive definite. The time derivative of the Lyapunov function is given by,

$$\dot{V}(x, t) = \frac{\partial V}{\partial t} + \frac{\partial V}{\partial x}\, \dot{x} \qquad \text{(11)}$$

Tedrake verifies the second condition (Eq. 9) by formulating a series of sums-of-squares feasibility programs, just as in the original LQR-tree algorithm [11].
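The outer binary search for the largest verified funnel level can be sketched independently of the verification back-end. In the sketch below, `feasible(rho)` is a placeholder (an assumption) for either the SoS feasibility program of [11] or the simulation-based check of [16]:

```python
def largest_rho(feasible, rho_max=1e3, tol=1e-3):
    """Binary search for the largest funnel level rho such that the
    Lyapunov conditions hold on {x : V(x, t) <= rho}. `feasible(rho)`
    stands in for the verification step and is assumed to return True
    iff the conditions are certified at level rho."""
    lo, hi = 0.0, rho_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid   # verified: try a larger funnel
        else:
            hi = mid   # verification failed: shrink the funnel
    return lo
```

Because `feasible` is monotone in practice (a smaller sublevel set is easier to certify), the search converges to the boundary level within `tol`.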

Building a funnel around the required goal trajectory gives the system a little breathing room. It is practically impossible for a system to exactly track the trajectory obtained from direct collocation, as it is composed of cubic splines. It is, however, reasonable to ask that the dynamics of the system evolve such that the state stays within the volume defined by the LQR funnels.

### IV-C Estimation of the nominal limit cycle

The hybrid system under consideration has a stabilizable limit cycle executing the trajectory $x_G(t)$, $u_G(t)$. That is, for some bounded perturbation, the system stays inside a bounded region around the limit cycle trajectory. The dynamics of a hybrid system can be factored into modes, and a discrete transition merely resets the state governed by the dynamics of the current mode to an initial state governed by the dynamics of the next mode. That is, a guard event only resets the system in the current mode (in the dynamics sense) with another configuration.

The time instants just before and after this transition or collision event are $t_c^-$ and $t_c^+$, and the corresponding states are $x(t_c^-)$ and $x(t_c^+)$ respectively. Let the collision dynamics function that takes the system from one mode to the next be $\Delta$, such that $x(t_c^+) = \Delta(x(t_c^-))$ (Eqs. 17 & 18). To estimate the periodic limit cycle trajectory of the hybrid system, we perform point-to-point trajectory optimization with knot points $x_1, \ldots, x_N$, with the periodicity constraint $x_1 = \Delta(x_N)$, which gives a set of continuous trajectories.

A hybrid system might have more than one stable limit cycle, owing to the range of different possible inputs, all of which achieve limit cycle stability. We choose the limit cycle that is a local optimum of the given cost function. Although we can design an LQR-tree algorithm that finds a policy from any initial state to the region of attraction of this open-loop limit cycle, it is important to design a TVLQR controller that stabilizes the system around the limit cycle itself, to enlarge the region of attraction beyond that of the open-loop system. This region of attraction is defined by $B_G$ as discussed in sections IV-A and IV-B.

### IV-D Direct collocation for hybrid systems

We use direct collocation to generate locally optimal trajectories from a given pose to the goal. This is required as we need a trajectory that connects a newly sampled state to the nearest funnel in the TVLQR cost-function sense. For hybrid trajectory optimization, we add the mode transition constraint to the optimization problem. The transition can be described using the collision event of the hybrid dynamical system; we search for this event between knot points so that the system dynamics at the event are accounted for. Once this transition is added to the dynamic model, normal use of collocation gives the required nominal trajectory $x_0(t)$ and $u_0(t)$. Another issue to address is the mode sequence given to the optimizer. We assume that we know the mode sequence of our hybrid system and provide it along with an initial guess for the transition times. However, for systems with several modes, the number of possible mode sequences becomes enormous. For such systems, we can let the solver determine the order of contacts using the implicit trajectory optimization method proposed by Posa et al. [19].

We cannot directly use the trajectory generated by the optimizer (line 14 in Alg. 2) as a sample policy, since the probability of a given initial state being one of the states on this nominal trajectory, $x_0(t)$ and $u_0(t)$, is zero (a trajectory in state space has zero volume). The TVLQR controller, however, stabilizes the system around this trajectory within a funnel of attraction that has non-zero volume in state space (and hence a non-zero probability of occurrence). For a hybrid trajectory, we have piecewise-continuous sub-trajectories punctuated by discrete transitions. We apply a TVLQR controller to stabilize the system around every continuous piece and find the funnel around each piece separately. The jump Riccati equation finds a cost-to-go such that every state inside the pre-impact funnel lands in the post-impact funnel. This ensures that the system is stable throughout the hybrid trajectory.

### IV-E Growing the LQR Trees

The LQR tree is initialized with the goal funnel $B_G$ of the nominal limit cycle. Next, a state is sampled uniformly at random from the state space. If the newly sampled point falls inside an existing funnel, it is discarded (Alg. 2, line 11). Otherwise, we find the nearest point in the existing tree using the LQR cost-to-go distance metric [11]; intuitively, the controller requires minimum effort to bring the sampled point to this nearest point. Once the closest node in the tree is identified, we perform trajectory optimization (direct collocation) connecting the sampled point and the nearest point (Alg. 2, line 14). Then we design the TVLQR controller for the trajectory and verify the funnel around it under TVLQR stabilization as described in section IV-B. We add the newly verified funnel to the existing tree. Our algorithm terminates when a predetermined number of consecutive sample points either return failure or fall within existing funnels.
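The tree-growing loop just described can be sketched as below. All callbacks are placeholders (assumptions) for the components of the preceding sections: sampling, funnel membership, the LQR-metric nearest node, direct collocation (which may fail), and TVLQR design plus funnel verification.

```python
import random

def grow_lqr_tree(goal_funnel, sample_state, in_tree_funnel, nearest_node,
                  trajopt, tvlqr_and_verify, max_consecutive=500):
    """Sketch of the tree-growing loop (Alg. 2 in the text). Assumed
    callback contracts:
      sample_state()            -> random state in the bounded state space
      in_tree_funnel(x, tree)   -> True if x lies in an existing funnel
      nearest_node(x, tree)     -> closest tree state under the LQR metric
      trajopt(x, x_near)        -> nominal trajectory, or None on failure
      tvlqr_and_verify(traj)    -> verified funnel for that trajectory
    """
    tree = [goal_funnel]          # initialized with the goal funnel
    consecutive = 0
    while consecutive < max_consecutive:
        x = sample_state()
        if in_tree_funnel(x, tree):       # already covered: discard
            consecutive += 1
            continue
        x_near = nearest_node(x, tree)
        traj = trajopt(x, x_near)
        if traj is None:                  # extend operation failed
            consecutive += 1
            continue
        tree.append(tvlqr_and_verify(traj))
        consecutive = 0                   # progress made: reset counter
    return tree
```

The termination counter mirrors the text: coverage is declared once `max_consecutive` samples in a row are either already covered or unreachable by the optimizer.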

One can find the TVLQR controller for a given trajectory using Eq. 3. We split the hybrid trajectory into its piecewise-continuous segments and design separate TVLQR controllers for each mode. Then we verify every controller separately and determine the Lyapunov function $V$ and the funnel size $\rho(t)$. Hence the region of attraction is given by Eq. 6, with which we check whether a sampled point falls inside a funnel.

## V Experimental Setup

The dynamical system on which we investigate our algorithm is a minimalistic version of the compass gait. The compass gait is modeled as a two-link manipulator with the leg masses concentrated at the centers of the links (Fig. 3).

We define the state of the system by the swing ($\theta_{sw}$) and stance ($\theta_{st}$) leg angles and their derivatives with respect to a world frame attached to the ground, as shown in Fig. 3. The state of the system is given by $x = [\theta_{sw}, \theta_{st}, \dot{\theta}_{sw}, \dot{\theta}_{st}]^T$. The dynamics governing the compass gait model can be described by the standard manipulator equation, where the matrices $M$, $C$ and $G$ contain the inertial, Coriolis and gravity terms as given in [13]:

$$M(q)\, \ddot{q} + C(q, \dot{q})\, \dot{q} + G(q) = Bu \qquad \text{(12)}$$

where the entries of the inertia matrix $M(q)$ (Eq. 13), the Coriolis matrix $C(q, \dot{q})$ (Eq. 14), and the gravity vector $G(q)$ (Eq. 15) for the compass gait are given in [13].

The compass gait is placed on flat terrain, i.e., gravity acts vertically downward. During the walking cycle, when the swing leg collides with the ground, we assume conservation of angular momentum^{5}, which yields the velocity mapping

$$\dot{\theta}^+ = [Q^+(\alpha)]^{-1}\, Q^-(\alpha)\, \dot{\theta}^- \qquad \text{(16)}$$

where $\dot{\theta}^-$ and $\dot{\theta}^+$ are the joint angular velocities just before and after the collision, and $Q^-(\alpha)$ (Eq. 17) and $Q^+(\alpha)$ (Eq. 18) are the transition matrices that contain the coefficients of conservation of angular momentum; their entries are given in [3]. The $Q^-$ and $Q^+$ matrices are of size $2 \times 2$ because angular momentum is conserved about two points and for two angular velocities. During the collision event, the mapping between the joint angles before and after the collision is obtained by merely interchanging them, i.e., $\theta_{sw}^+ = \theta_{st}^-$ and $\theta_{st}^+ = \theta_{sw}^-$. The complete collision dynamics $\Delta$ is found by combining this angle interchange with the velocity mapping in Eq. 16.
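Putting the angle interchange and the velocity mapping together, the full reset map can be sketched as follows. The state ordering `[theta_sw, theta_st, dtheta_sw, dtheta_st]` and the precomputed velocity-mapping matrix `H` (standing in for $[Q^+]^{-1} Q^-$) are assumptions for illustration:

```python
import numpy as np

def compass_reset(x_minus, H):
    """Assemble the compass-gait collision map. Joint angles are
    interchanged at impact (swing and stance legs swap roles), and
    the velocities map through the 2x2 matrix H derived from the
    angular-momentum conservation of Eq. 16."""
    theta, dtheta = x_minus[:2], x_minus[2:]
    theta_plus = theta[::-1]      # interchange swing and stance angles
    dtheta_plus = H @ dtheta      # velocity mapping
    return np.concatenate([theta_plus, dtheta_plus])
```

This is the function that would be linearized (as $\Gamma$) for the jump Riccati update of Eq. 4.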

## VI Experiments and Results

### VI-A Compass Gait Model

The compass gait model whose dynamics are described above is simulated with a point mass at the hip, equal link masses, and equal link lengths, as shown in Fig. 3. The joints are assumed to be frictionless and the collisions are inelastic.

### VI-B Direct Collocation and LQR Trees

The direct collocation procedure is a nonlinear trajectory optimization problem requiring initial guesses for $x(t)$ and $u(t)$. The initial guess for $x(t)$ is a straight line connecting the random sample and the nearest point in the existing tree under the TVLQR cost function; the initial guess for the input trajectory is random. The mode transition constraint is given by the collision dynamics (Eq. 16), which occur in the compass gait model when the distance between the swing leg and the ground is zero. The direct collocation problem has very sparse gradients, since each constraint depends only on the values at a knot point or adjacent knot points. Solvers such as SNOPT [8] can solve such sparse nonlinear programs very efficiently. An iteration of direct collocation is terminated and a new state is sampled if there is no result after 40 seconds or if the solver fails.

The parameters of the LQR tree algorithm are the cost matrices Q and R. We terminate after 500 consecutive samples (MAXITER in Alg. 2) either fall in the existing tree or fail to find a trajectory to the tree; at that point, the LQR tree is considered to have reasonably covered the stabilizable state space of the compass gait.

TABLE I: Comparison of the two methods for generating LQR trees: SoS-verified trees ([11]; ours for the compass gait) versus the simulation-based trees of [16].

| | Pendulum | Cartpole | Compass gait |
|---|---|---|---|
| No. of samples (SoS [11] / ours) | 146 | - | 252 |
| No. of samples (simulation [16]) | 6068 | 5821 | - |
| No. of nodes (SoS [11] / ours) | 146 | - | 192 |
| No. of nodes (simulation [16]) | 477 | 881 | - |
| Planning time (SoS [11] / ours) | 0h 2m | - | 6h 21m |
| Planning time (simulation [16]) | 0.5h | 2h | - |

The goal limit cycle trajectory $x_G(t)$ and $u_G(t)$ are obtained directly from the PeriodicDirCollocation() method. A TVLQR controller for a given nominal trajectory can be found using tvlqr(). The sampledFiniteTimeVerification() (aka FTV) method finds the ratio of the value of the Lyapunov function to the size of the funnel, $V(x, t) / \rho(t)$. So to check if a point $x$ is in a given funnel, we just need to check $V(x, t) / \rho(t) \leq 1$.
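The membership test just described can be sketched as follows; `x_nom`, `S`, and `rho` are assumed to be callables interpolating the nominal state, cost-to-go matrix, and funnel level over the trajectory's time span:

```python
import numpy as np

def in_funnel(x, t, x_nom, S, rho):
    """Check whether state x at time t lies inside the funnel, i.e.
    V(x, t) = (x - x_nom(t))^T S(t) (x - x_nom(t)) <= rho(t)."""
    xbar = x - x_nom(t)
    return float(xbar @ S(t) @ xbar) <= rho(t)
```

This is the check used both to discard covered samples during tree growth and to select the funnel whose policy is executed at run time.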

### VI-C Experiments with Initial Perturbation and Noise

Once the state space is filled with LQR trees, the compass gait has a recovery policy to return to the limit cycle's region of attraction from any arbitrary stabilizable initial condition. Note that not all states of the compass gait are stabilizable, no matter what input we apply; for example, a compass gait lying down cannot pump energy in any way to get back up on two legs. We tested our algorithm by perturbing both the position and velocity states of the system. Figure 4 shows the phase portrait for the case with an initial velocity perturbation and for the case with both velocity and position perturbations. Both cases are simulated with white Gaussian noise of standard deviation 0.05 added in the forward simulation.

### VI-D Inferences

TABLE I provides a comparison of the two methods for generating LQR trees and the dynamical systems they were applied to. It took around 381 minutes to completely cover the state space for the compass gait.^{6}

## VII Future Work

An interesting direction for future work is to extend the algorithm to a higher-DoF model, the compass gait with knees [9] (3 DoFs). Walking with knees yields a more efficient limit cycle than keeping the knees straight. To find a policy for the kneed gait using the LQR trees of the kneeless gait, a set of approximations could map a given state of the kneed gait to a state of the simple gait and follow the policy given by the LQR tree. Also, the number of possible mode sequences becomes intractable for high-DoF systems; for such systems (e.g., the kneed compass gait), mode sequence specification could be circumvented using the algorithm of [19].

When a perturbation pushes the kneed compass gait out of the region of attraction of its limit cycle, we can approximate the instantaneous configuration by 1) moving the links to a configuration with equal knee angles and locking the knees, or 2) straightening the knees to obtain an equivalent low-DoF configuration. In both cases, we reach the described configuration using direct collocation with the constraint that the knee angles are equal (or zero for straight knees). Once the system is stabilized with the knees straight or locked, it exhibits a stable kneeless limit cycle. We then need only one trajectory from a point on this limit cycle to a point on the more efficient kneed limit cycle. Moreover, trajectories obtained from such mappings can be used as seed trajectories to build trees in the state space of the kneed compass gait.

## VIII Conclusions

We presented a method that uses LQR trees to control hybrid systems such as the compass gait. We also devised a method (Algorithm 1) to find the region of attraction of a TVLQR controller around a dynamic limit cycle, which is essential for finding the funnel around every other trajectory in the LQR tree. The method effectively covers the stabilizable regions of a bounded state space with a series of feedback-stabilized sample trajectories that lead to the goal trajectory. The algorithm was tested on a compass gait modeled as a two-link manipulator. We also discussed how an LQR tree built for a lower-DoF robot may be reused on a higher-DoF system. While we probabilistically cover stabilizable regions of state space to provide a policy beyond a single controller's region of attraction, doing so takes more than six hours for one goal trajectory. For a new goal trajectory, the tree can be reused if a trajectory can be found between any two points, one on the old and one on the new goal trajectory; otherwise, it could take another six hours of planning.

## Acknowledgment

We especially thank Prof. Russ Tedrake for his quick responses to our queries regarding the DRAKE simulator [20]. We also thank Prof. Dmitry Berenson for motivating our exploration of this area of research.

### Footnotes

- The original implementation shows results only for a pendulum. The 2-DoF cart-pole system is a better point of comparison, since the compass gait is also a 2-DoF system.
- Here the superscript denotes the iteration number of the backward integration. We can initialize $S(t_f)$ to penalize the state dimensions we care about; a good choice is the Q from the TVLQR cost function.
- In practice we would want them to be close under some tolerance.
- We use the term 'funnel' for the region of attraction of a trajectory, analogous to the basin of attraction of a stabilizable fixed point.
- We do not consider the change in the mechanical energy of the system during this collision. In [3], it is proven that this energy change is always negative.
- It was evaluated on an Intel Core i7-6700K 4 GHz Quad-Core Processor with 32 GB RAM.

### References

- Von Stryk, Oskar. ”Numerical solution of optimal control problems by direct collocation.” Optimal Control. Birkhäuser Basel, 1993. 129-143.
- Kavraki, Lydia E., et al. ”Probabilistic roadmaps for path planning in high-dimensional configuration spaces.” IEEE transactions on Robotics and Automation 12.4 (1996): 566-580.
- Goswami, Ambarish, Benoit Thuilot, and Bernard Espiau. ”Compass-like biped robot part I: Stability and bifurcation of passive gaits.” (1996).
- Lavalle, Steven M. ”Rapidly-Exploring Random Trees: A New Tool for Path Planning.” (1998).
- Branicky, Michael S., and Michael M. Curtiss. ”Nonlinear and hybrid control via RRTs.” Proc. Intl. Symp. on Mathematical Theory of Networks and Systems. Vol. 750. 2002.
- S. Prajna, A. Papachristodoulou, P. Seiler, and P. A. Parrilo, SOSTOOLS: Sum of squares optimization toolbox for MATLAB, 2004.
- Vukobratović, Miomir, and Branislav Borovac. ”Zero-moment point–thirty five years of its life.” International Journal of Humanoid Robotics 1.01 (2004): 157-173.
- Philip E. Gill, Walter Murray, and Michael A. Saunders. SNOPT: An SQP algorithm for large-scale constrained optimization. SIAM Review, 47(1):99-131, 2005.
- Chen, Vanessa F. Hsu. Passive dynamic walking with knees: A point foot model. Diss. Massachusetts Institute of Technology, 2007.
- Byl, Katie, and Russ Tedrake. ”Approximate optimal control of the compass gait on rough terrain.” Robotics and Automation, 2008. ICRA 2008. IEEE International Conference on. IEEE, 2008.
- Tedrake, Russ. ”LQR-Trees: Feedback motion planning on sparse randomized trees,” in Proc. of Robotics: Science and Systems, Seattle, WA, June 2009.
- Shkolnik, Alexander, Matthew Walter, and Russ Tedrake. ”Reachability-guided sampling for planning under differential constraints.” Robotics and Automation, 2009. ICRA’09. IEEE International Conference on. IEEE, 2009.
- Tedrake, Russ. ”Underactuated robotics: Learning, planning, and control for efficient and agile machines course notes for MIT 6.832.” Working draft edition (2009).
- Manchester, Ian R., et al. ”Regions of attraction for hybrid limit cycles of walking robots.” arXiv preprint arXiv:1010.2247 (2010).
- Tobenkin, Mark M., Ian R. Manchester, and Russ Tedrake. ”Invariant funnels around trajectories using sum-of-squares programming.” arXiv preprint arXiv:1010.3013 (2010).
- Reist, Philipp, and Russ Tedrake. ”Simulation-based LQR-trees with input and state constraints.” Robotics and Automation (ICRA), 2010 IEEE International Conference on. IEEE, 2010.
- Tedrake, Russ, et al. ”LQR-trees: Feedback motion planning via sums-of-squares verification.” The International Journal of Robotics Research (IJRR), 2010.
- Dai, Hongkai, and Russ Tedrake. ”Optimizing robust limit cycles for legged locomotion on unknown terrain.” Decision and Control (CDC), 2012 IEEE 51st Annual Conference on. IEEE, 2012.
- Posa, Michael, and Russ Tedrake. ”Direct trajectory optimization of rigid body dynamical systems through contact.” Algorithmic foundations of robotics X. Springer Berlin Heidelberg, 2013. 527-542.
- Russ Tedrake. Drake: A planning, control, and analysis toolbox for nonlinear dynamical systems. http://drake.mit.edu, 2014
- Bharatheesha, Mukunda, et al. ”Distance metric approximation for state-space rrts using supervised learning.” Intelligent Robots and Systems (IROS 2014), 2014 IEEE/RSJ International Conference on. IEEE, 2014.