# Memory of Motion for Warm-starting Trajectory Optimization

## Abstract

Trajectory optimization for motion planning requires good initial guesses to achieve good performance. In our proposed approach, we build a memory of motion based on a database of robot paths to provide good initial guesses. The memory of motion relies on function approximators and dimensionality reduction techniques to learn the mapping between the tasks and the robot paths. Three function approximators are compared: k-Nearest Neighbor, Gaussian Process Regression, and Bayesian Gaussian Mixture Regression. In addition, we show that the memory can be used as a metric to choose between several possible goals, and that using an ensemble method to combine different function approximators results in significantly improved warm-starting performance. We demonstrate the proposed approach with motion planning examples on the dual-arm robot PR2 and the humanoid robot Atlas.

Learning and Adaptive Systems; Motion and Path Planning

## 1 Introduction

Motion planning for robots with many Degrees of Freedom (DoFs) presents many challenges, especially in the presence of constraints such as obstacle avoidance, joint limits, etc. To handle the high dimensionality and the various constraints, many works [24] [29] [12] focus on trajectory optimization methods that attempt to find a locally optimal solution. In this approach, the motion planning problem is formulated as an optimization problem

$$\min_{\boldsymbol{x}} \; c(\boldsymbol{x}) \quad \text{s.t.} \quad g(\boldsymbol{x}) \leq 0, \quad h(\boldsymbol{x}) = 0, \qquad \text{(1)}$$

where $\boldsymbol{x} = (\boldsymbol{q}_1, \dots, \boldsymbol{q}_T)$ denotes the robot’s configurations from time step $1$ to $T$, and $c$, $g$ and $h$ are the cost, the inequality constraints and the equality constraints. The solution of (1) is the path $\boldsymbol{x} \in \mathbb{R}^{nT}$, with $n$ the dimension of $\boldsymbol{q}$. When the path is parameterized by time, it is called a trajectory.

As an example, consider the planning problem depicted in Fig. 4, where the PR2 robot has to move its base around an object or to perform a dual-arm motion to pick items from the shelves. If the task is to move from an initial configuration $\boldsymbol{q}_0$ to a goal configuration $\boldsymbol{q}_{\text{goal}}$ while minimizing the total joint velocity, the optimization problem can be written as

$$\min_{\boldsymbol{x}} \sum_{t=1}^{T-1} \|\boldsymbol{q}_{t+1} - \boldsymbol{q}_t\|^2 \quad \text{s.t.} \quad \boldsymbol{q}_1 = \boldsymbol{q}_0, \quad \boldsymbol{q}_T = \boldsymbol{q}_{\text{goal}}. \qquad \text{(2)}$$

Other constraints can also be added, e.g. to avoid collisions, to comply with joint limits, etc.

Such optimization problems are in general non-convex, especially due to the collision constraints, which makes finding the global optimum very difficult. Trajectory optimization methods such as TrajOpt [24], CHOMP [29], or STOMP [12] solve the non-convex problem by iteratively optimizing around the current solution. While this approach is very popular and yields good practical results, the convergence and the quality of the solution are very sensitive to the choice of the initial guess. If the initial guess is far from the optimal solution, the method can get stuck at a poor local optimum.

To overcome this problem, our approach builds a memory of motion that learns how to provide good initializations (i.e., a warm-start) to the solver based on previously solved problems. Functionally, the memory of motion is expected to learn the mapping $f : \boldsymbol{w} \mapsto \boldsymbol{x}$ from each task descriptor $\boldsymbol{w}$ to the robot path $\boldsymbol{x}$. Such a mapping can be highly nonlinear and multimodal (i.e., one task $\boldsymbol{w}$ can be associated with several robot paths $\boldsymbol{x}$), and the dimension of $\boldsymbol{x}$ is typically very high. Our proposed method relies on machine learning techniques such as function approximation and dimensionality reduction to learn this mapping effectively. We use the term memory of motion to include both the database of motions and the algorithms that query the warm-starts from the database.

We point out that while other techniques such as sampling-based motion planners can also be used to warm-start the solver (e.g. in [19]), such methods typically require considerable computation time (on the order of seconds), comparable to the solver’s convergence time itself, given the very high dimensional problems considered here. In contrast, querying the memory of motion can be done very fast, on the order of milliseconds. Additionally, our proposed method produces initial guesses that are close to the optimal solutions, reducing the convergence time.

The contributions of this paper are the following. First, we propose the use of function approximation methods to learn the mapping $f$. We consider three methods: k-Nearest Neighbor (k-NN), Gaussian Process Regression (GPR) and Bayesian Gaussian Mixture Regression (BGMR), and discuss their different characteristics on various planning problems. We show in particular that BGMR handles multimodal outputs very well. Furthermore, we show that the memory of motion can also be used as a metric for choosing optimally between several possible goals. Finally, we demonstrate that using an ensemble of function approximators to provide warm-starts boosts the success rate significantly.

The paper is organized as follows. In Section 2 we discuss related work that uses the concept of a memory of motion for various problems. Section 3 explains the methods for constructing and using the memory of motion. The experimental results are presented and discussed in Sections 4 and 5. Finally, Section 6 concludes the paper.

## 2 Related Work

The idea of using a memory of motion to warm-start an optimization solver has previously been explored in the context of optimal control and motion planning. In [25] a trajectory library is constructed to learn a control policy. A set of trajectories is planned offline and stored as a library, then k-NN is used online to determine the action to perform at each state. In [16], a similar approach is used to predict an initial guess for balancing a two-link robot, which is then optimized by Differential Dynamic Programming. An iterative method to build a memory of motion to initialize an optimal control solver is proposed in [17]. They use neural networks to approximate the mapping from the task descriptors (initial and goal states) to the state and control trajectories. Another neural network is trained to approximate the value function, which is then used as a metric to determine how close two states are dynamically. In [7] GPR is used to predict new trajectories based on a library of demonstrated movements encoded as Dynamic Movement Primitives (DMP) [10]. GPR is used to map the task descriptors to the DMP parameters.

In robot motion planning, the Probabilistic Roadmap (PRM) [13] can be seen as the construction of a memory of motion by precomputing a graph connecting robot configurations. Some works exploit Rapidly-exploring Random Trees (RRT) [15], another popular sampling-based method. For example, in [18] an offline computation is used to speed up the online planning in the form of an additional bias when sampling the configurations. In [20], an Experience Graph is built from previously planned solutions. During the online planning, the search is biased towards this graph. The Lightning framework is proposed in [1] to plan paths in high-dimensional spaces by learning from experience. The path library is constructed incrementally. Given the current path library and a task to be executed, the algorithm runs two versions of the planner online, one that plans from scratch and one initialized by the library. In [11] a high-dimensional (791) task descriptor is constructed, and the metric between the task descriptors is refined to minimize the necessary refinement of the initial trajectory using a sparsity-inducing norm, resulting in a sparse metric and hence sparse descriptors. In [19], subindexing is used to reduce the amount of memory storage and to reuse the subtrajectories of the original solutions. In robot locomotion [27], a mapping from the task space to the optimal trajectory for cyclic walking is learned using various machine learning algorithms, but the prediction is not re-optimized online. In [14], the initial trajectories for real-time catching are predicted using k-NN, Support Vector Regression, and GPR.

As compared to the above works, our proposed method differs in the following ways: (i) none of the above methods attempts to handle multimodal outputs, and we show that BGMR handles such cases well; (ii) we show that the memory of motion can be used as a metric for choosing optimally between several possible goals; and (iii) we show that using an ensemble of methods to provide the warm-start significantly outperforms the individual methods.

## 3 Method

Section 3.1 discusses the main idea of building the memory of motion using function approximation and dimensionality reduction techniques to learn the mapping between the task and the associated robot path. Section 3.2 explains how the memory of motion can be used as a metric for choosing between different goals. Finally, Section 3.3 explains how the warm-starting performance can be improved significantly using an ensemble method.

### 3.1 Building a Memory of Motion

To learn the mapping $f$, we first generate a set of tasks $\{\boldsymbol{w}_i\}$ and the corresponding robot paths $\{\boldsymbol{x}_i\}$. This is done by sampling $\boldsymbol{w}$ from a uniform distribution covering the space of possible tasks and running the trajectory optimizer to obtain the corresponding robot paths, until we obtain $N$ samples $(\boldsymbol{w}_i, \boldsymbol{x}_i)$. Let $\boldsymbol{W} = \{\boldsymbol{w}_i\}_{i=1}^{N}$ and $\boldsymbol{X} = \{\boldsymbol{x}_i\}_{i=1}^{N}$. The mapping can be learned by training function approximators on the database $D = (\boldsymbol{W}, \boldsymbol{X})$. In this paper we consider three function approximators: k-NN, GPR, and BGMR.
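As an illustration, the database construction described above can be sketched as follows. This is a minimal sketch: `sample_task` and `solve` are hypothetical callables standing in for the task sampler and the trajectory optimizer (e.g. a TrajOpt wrapper); neither name comes from the paper.

```python
import numpy as np

def build_database(sample_task, solve, n_samples=1000):
    """Build a memory of motion: sample tasks uniformly, solve each with
    the trajectory optimizer, and keep only the successful pairs.

    sample_task() -> task descriptor w (1-D array)
    solve(w)      -> (path, success), path: (T, n) array of configurations
    """
    W, X = [], []
    while len(W) < n_samples:
        w = sample_task()
        path, success = solve(w)
        if success:
            W.append(w)
            X.append(path.ravel())  # flatten to a single vector of dim n*T
    return np.array(W), np.array(X)
```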

#### k-Nearest Neighbor (k-NN)

k-NN is a very simple non-parametric method. Given a task $\boldsymbol{w}$, the algorithm finds the $k$ samples in the database whose task descriptors $\boldsymbol{w}_i$ are nearest to $\boldsymbol{w}$ according to a chosen metric (in this paper the Euclidean metric is used). It then predicts the corresponding robot path by taking the average of the $k$ associated paths. The method is very simple to implement and works well if the dataset is sufficiently dense, but it suffers from the curse of dimensionality: as the dimension of $\boldsymbol{w}$ increases, the amount of data that needs to be stored increases exponentially. This method is mainly considered as the baseline against the next two methods.
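The k-NN query above can be sketched in a few lines; a minimal version with a brute-force Euclidean search (a k-d tree or `sklearn.neighbors` would be used for large databases):

```python
import numpy as np

def knn_warmstart(w_query, W, X, k=1):
    """k-NN prediction of a warm-start path.

    W: (N, d) task descriptors, X: (N, nT) flattened paths.
    Returns the average of the paths of the k nearest tasks (Euclidean
    metric). With k=1 no averaging occurs, which avoids mixing modes
    when the database is multimodal.
    """
    d2 = np.sum((W - w_query) ** 2, axis=1)  # squared Euclidean distances
    idx = np.argsort(d2)[:k]                 # indices of the k nearest tasks
    return X[idx].mean(axis=0)
```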

#### Gaussian Process Regressor (GPR)

Like k-NN, GPR [23] is a non-parametric method which improves its accuracy as the number of data points increases. While having a higher computational complexity than k-NN, GPR tends to interpolate better, resulting in higher approximation accuracy. Given the database $D$, GPR assigns a Gaussian prior to the joint probability of the outputs, i.e., $p(\boldsymbol{X}) = \mathcal{N}(\boldsymbol{m}, \boldsymbol{K})$, where $\boldsymbol{m}$ is the mean function and $\boldsymbol{K}$ is the covariance matrix constructed with elements $K_{ij} = k(\boldsymbol{w}_i, \boldsymbol{w}_j)$, with $k(\cdot, \cdot)$ the kernel function that measures the similarity between the inputs $\boldsymbol{w}_i$ and $\boldsymbol{w}_j$. In this paper we use the Radial Basis Function (RBF) as the kernel function, and the mean function is set to zero, as usually done in GPR.

To predict the output $\boldsymbol{x}^*$ given a new input $\boldsymbol{w}^*$, GPR constructs the joint probability distribution of the training data and the prediction, and then conditions on the training data to obtain the predictive distribution of the output, $p(\boldsymbol{x}^* \mid \boldsymbol{w}^*, D) = \mathcal{N}(\boldsymbol{\mu}^*, \boldsymbol{\Sigma}^*)$, where $\boldsymbol{\mu}^*$ is the posterior mean computed as

$$\boldsymbol{\mu}^* = \boldsymbol{k}_*^{\top} \boldsymbol{K}^{-1} \boldsymbol{X}, \qquad \text{(3)}$$

with $\boldsymbol{k}_*$ the vector of kernel values between $\boldsymbol{w}^*$ and the training inputs, and $\boldsymbol{\Sigma}^*$ is the posterior covariance, which provides a measure of uncertainty on the output. In this work we simply use the posterior mean as the output, i.e., $\hat{\boldsymbol{x}} = \boldsymbol{\mu}^*$.

While having good approximation accuracy, one major limitation with GPR is that it does not scale well with very large datasets. There are variants of GPR that attempt to overcome this problem, e.g., sparse GPR [22] or using Stochastic Variational Inference (SVI) [9]. More details on GPR can be found in [23] and [2].
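A minimal sketch of GPR-based warm-start prediction, using scikit-learn’s `GaussianProcessRegressor` as a stand-in (the paper does not name a library, so this choice is an assumption). Multi-output targets let all path dimensions be regressed jointly:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def fit_gpr(W, X):
    """Fit a GP with an RBF kernel and zero mean (after normalization)
    to map task descriptors W (N, d) to flattened paths X (N, nT)."""
    gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                   alpha=1e-6,       # small jitter for stability
                                   normalize_y=True)
    gpr.fit(W, X)
    return gpr
```

A new task is then queried as `path = gpr.predict(w_query[None, :])[0]`, which returns the posterior mean of (3).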

#### Bayesian Gaussian Mixture Regression (BGMR)

When using the RBF kernel as the covariance function, GPR assumes that the mapping from $\boldsymbol{w}$ to $\boldsymbol{x}$ is smooth and continuous. When this assumption is met, it performs very well, but otherwise it yields poor results. For example, when there is a discontinuity in the mapping or there are multimodal outputs, GPR tends to average the solutions from both sides of the discontinuity or from both modes. This characteristic is shared by many other function approximators. To handle discontinuity and multimodality, using local models is one possible solution: each local model can be fit to one side of the discontinuity or to one mode.

Gaussian Mixture Regression (GMR) is an example of such local model approaches [4]. It can be seen as a probabilistic mixture of linear regressions. Given the database $D$, it can be used to construct the joint probability of $(\boldsymbol{w}, \boldsymbol{x})$ as a mixture of Gaussians

$$p(\boldsymbol{w}, \boldsymbol{x}) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}(\boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k), \qquad \text{(4)}$$

where $\pi_k$, $\boldsymbol{\mu}_k$, and $\boldsymbol{\Sigma}_k$ are the $k$-th component’s mixing coefficient, mean, and covariance, respectively. Given a query $\boldsymbol{w}$, the conditional probability of the output $p(\boldsymbol{x} \mid \boldsymbol{w})$ is also a mixture of Gaussians.

In GMR, the parameters $\pi_k$, $\boldsymbol{\mu}_k$ and $\boldsymbol{\Sigma}_k$ are determined from the data by the Expectation-Maximization algorithm, while the number of Gaussians $K$ is usually determined by the user. Bayesian GMR (BGMR) [21] is a Bayesian extension of GMR that allows us to estimate the posterior distribution of the mixture parameters (instead of relying on a single point estimate as in GMR). The number of components can also be determined automatically from the data. As a Bayesian model, BGMR assigns priors to the parameters $\pi_k$, $\boldsymbol{\mu}_k$ and $\boldsymbol{\Sigma}_k$, and computes the posterior distribution of those parameters given the data. In high dimensional problems, the prior reduces the overfitting that commonly occurs with GMR. The prediction $\hat{\boldsymbol{x}}$, given the input $\boldsymbol{w}$, is then computed by marginalizing over the posterior distribution and conditioning on $\boldsymbol{w}$. The resulting predictive distribution of $\boldsymbol{x}$ is a mixture of t-distributions,

$$p(\boldsymbol{x} \mid \boldsymbol{w}) = \sum_{k=1}^{K} \alpha_k(\boldsymbol{w}) \, t_{\nu_k}\!\big(\boldsymbol{x} \mid \boldsymbol{m}_k(\boldsymbol{w}), \boldsymbol{S}_k\big), \qquad \text{(5)}$$

where $\alpha_k(\boldsymbol{w})$ is the probability of $\boldsymbol{w}$ belonging to the $k$-th component of the mixture, and $t_{\nu_k}$ is a multivariate t-distribution whose mean $\boldsymbol{m}_k(\boldsymbol{w})$ is linear in $\boldsymbol{w}$. We can interpret (5) as $K$ probabilistic linear regression models, each of which has the probability $\alpha_k(\boldsymbol{w})$. More details about BGMR can be found in [21].

To obtain a point prediction from (5), there are several approaches. One of the most used is to take the mean of the predictive distribution in (5) using moment matching. While this approach can provide smooth estimates (as required in many applications), the same problems as in GPR appear in the case of discontinuity and multimodality; taking the average in those cases gives poor results. Instead, we propose to take, as the point prediction, the mean of the mixture component with the highest probability $\alpha_k(\boldsymbol{w})$.
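The mode-seeking conditioning step can be sketched as follows. This is a simplified stand-in: scikit-learn’s `BayesianGaussianMixture` (an assumed substitute, not the paper’s implementation) gives variational point estimates of the mixture parameters rather than the full posterior of BGMR, and the conditional of each Gaussian component is used in place of the t-distribution in (5); the key idea, returning the conditional mean of the most probable component instead of averaging across modes, is the same.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def bgmr_predict(w, bgm, d):
    """Condition a mixture fit on joint [w, x] data on the input w.

    d: dimension of the input w. Returns the conditional mean of the
    component with the highest responsibility for w (mode-seeking
    point prediction, avoiding cross-mode averaging).
    """
    best_logp, best_mean = -np.inf, None
    for pi, mu, S in zip(bgm.weights_, bgm.means_, bgm.covariances_):
        mu_w, mu_x = mu[:d], mu[d:]
        S_ww, S_xw = S[:d, :d], S[d:, :d]
        diff = w - mu_w
        sol = np.linalg.solve(S_ww, diff)
        # log of (weight * marginal Gaussian likelihood of w), up to constants
        logp = np.log(pi) - 0.5 * (diff @ sol + np.linalg.slogdet(S_ww)[1])
        if logp > best_logp:
            best_logp = logp
            best_mean = mu_x + S_xw @ sol  # conditional mean, linear in w
    return best_mean
```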

#### Dimensionality reduction

In our problem, the path $\boldsymbol{x}$ is a vector consisting of the sequence of configurations of dimension $n$ over $T$ time steps, so its dimension $nT$ can be very high. This motivates us to use dimensionality reduction techniques to reduce the dimension of $\boldsymbol{x}$. For example, when $T$ is large and the time interval is small, RBFs can be used to represent the evolution of each variable as weights of basis functions. Techniques such as Principal Component Analysis (PCA), Independent Component Analysis, Factor Analysis, and Variational Autoencoders [2] [6] can also be used. The mapping to be learned then becomes the mapping from $\boldsymbol{w}$ to $\boldsymbol{z}$, where $\boldsymbol{z}$ is the projection of $\boldsymbol{x}$ onto the lower dimensional subspace. The advantage is that the memory required to store the data is reduced significantly, while the approximation performance is maintained or even improved, because the important correlations between the variables are preserved. In this work, since the number of time steps is not large, we use PCA to reduce the dimension of $\boldsymbol{x}$.
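The PCA step can be sketched as follows, assuming scikit-learn’s `PCA` (the specific implementation and the number of components are heuristic choices, as noted above):

```python
import numpy as np
from sklearn.decomposition import PCA

def compress_paths(X, n_components=30):
    """Project flattened paths X (N, n*T) onto a low-dimensional
    subspace; the function approximators are then trained to map
    tasks w to the codes z instead of the full paths."""
    pca = PCA(n_components=n_components)
    Z = pca.fit_transform(X)  # low-dimensional representation of the paths
    return pca, Z

# After predicting a code z for a new task, the warm-start path is
# recovered with pca.inverse_transform(z).
```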

### 3.2 Using the Memory as a Metric

In some planning problems, there can be several alternative goals to achieve. For example, in a robot drilling task [26], the orientation around the drilling axis is free (the number of possible goals is infinite). A naive way is to choose one of the goals randomly, plan the motion, and if it fails, select another goal. While this is simple to implement, it does not exploit the benefit of having multiple goals. Another method is to plan paths to all the goals and select the one with the smallest cost, but this is computationally expensive. It would therefore be useful to have a metric that estimates the cost of reaching a given goal. Our idea is to use the memory of motion as this metric.

In Section 3.1, function approximators were trained to predict an initial guess $\hat{\boldsymbol{x}}$ for a task $\boldsymbol{w}$. The possible goals can then be formulated as multiple tasks $\boldsymbol{w}_1, \dots, \boldsymbol{w}_M$. For each task $\boldsymbol{w}_j$, the function approximator predicts the initial guess $\hat{\boldsymbol{x}}_j$ corresponding to the task, and its cost $c(\hat{\boldsymbol{x}}_j)$ can be computed. The initial guess and the corresponding task with the lowest cost are then taken as the chosen goal to be given to the trajectory optimizer. Since the cost computation (the total discrete velocity in (2)) is fast relative to the optimization time, this approach can yield significant improvements to the trajectory optimizer performance.
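The goal-selection step above can be sketched as follows; `predict` stands in for any of the trained function approximators of Section 3.1, and the cost is the total discrete velocity of (2):

```python
import numpy as np

def path_cost(path):
    """Total discrete velocity cost, as in Eq. (2); path: (T, n)."""
    return np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1) ** 2)

def choose_goal(candidate_tasks, predict, T, n):
    """Pick the candidate goal whose predicted warm-start is cheapest.

    predict(w) -> flattened path of shape (T*n,); the chosen task and
    its reshaped warm-start path are returned for the optimizer.
    """
    best_w, best_path, best_c = None, None, np.inf
    for w in candidate_tasks:
        path = predict(w).reshape(T, n)
        c = path_cost(path)
        if c < best_c:
            best_w, best_path, best_c = w, path, c
    return best_w, best_path
```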

### 3.3 Using Ensemble Method to Provide Warm-Start

In machine learning, methods such as AdaBoost [8] and Random Forests [3] have shown that using an ensemble of methods often yields improved performance as compared to choosing a single method. We propose an ensemble method in which we run multiple trajectory optimizations in parallel, each one warm-started by one of the function approximators in Section 3.1; once one of them finds a successful path, the others are terminated. Since each function approximator has different learning characteristics, combining them in this way can significantly improve the motion planning performance. The method in Section 3.2 can also be used as one of the ensemble’s components.
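A minimal thread-based sketch of this race between warm-starts follows. With a real optimizer one would run separate solver processes and kill the slower ones (Python threads cannot be forcibly stopped); `solve` is a hypothetical wrapper around the trajectory optimizer returning `(path, success)`:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def ensemble_solve(warmstarts, solve):
    """Launch one solver call per warm-start and return the first
    feasible path, cancelling the runs that have not started yet."""
    with ThreadPoolExecutor(max_workers=len(warmstarts)) as ex:
        futures = [ex.submit(solve, w) for w in warmstarts]
        for fut in as_completed(futures):
            path, success = fut.result()
            if success:
                for f in futures:
                    f.cancel()  # skip pending runs; running ones finish
                return path
    return None  # no warm-start led to a feasible solution
```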

## 4 Experiments

To evaluate the proposed method, we consider several examples of motion planning for the PR2 and Atlas robots. TrajOpt [24] is used as the trajectory optimizer to be warm-started. The output is the robot path that accomplishes the given task. In this paper we only work with robot paths as the output, but the method can also be applied to robot trajectories.

We consider five motion planning cases, presented in ascending order of complexity. Each case is chosen to demonstrate certain characteristics of the proposed method. For each case, we follow the same procedure. First, we generate the dataset by randomly sampling tasks from a uniform distribution and running TrajOpt to find the paths achieving the tasks. The number of time steps $T$ is kept fixed, with a different value used for Atlas. In all cases, the cost is defined as the discrete velocity of the states, as in (2). The number of samples $N$ is different for each case, depending on the complexity of the task. The function approximators are then trained, with or without PCA, on the dataset. We heuristically set the number of PCA components; for k-NN, we use $k = 1$.

To validate the performance, we sample a new set of random tasks and use the various methods to warm-start TrajOpt. The solutions are compared in terms of convergence time, success rate and cost. The planning is considered successful if the solution is feasible. The comparison results are presented in Tables 1-6. The values are averaged over the test tasks, and the standard deviation is also given for the convergence time and the cost. In the presented results, we use the label ‘STD’ to refer to the solution obtained by warm-starting the solver with a straight-line path (via a waypoint, if any), and the names of the function approximators for the rest. The suffix ‘(PCA)’ is added when PCA is used. The query time for predicting the warm-starts by each method is negligible w.r.t. the convergence time, i.e. less than 5 ms for most methods, except for BGMR without PCA (around 20 ms), so it is not included in the comparison. The code to run the experiments is provided at https://github.com/teguhSL/memmo_for_trajopt_codes, and the videos are submitted as a supplementary file.

**Table 1.** Base motion planning, single waypoint (unimodal database).

Method | Success (%) | Conv. time (s) | Cost (rad/s)
---|---|---|---
STD | 80.0 | 0.55±0.29 | 1.37±0.37
k-NN | 93.0 | 0.35±0.20 | 1.45±0.47
GPR | 96.0 | 0.37±0.15 | 1.32±0.36
BGMR | 97.0 | 0.32±0.14 | 1.34±0.35

**Table 2.** Base motion planning, two waypoints (multimodal database).

Method | Success (%) | Conv. time (s) | Cost (rad/s)
---|---|---|---
STD | 79.0 | 0.53±0.23 | 1.43±0.37
k-NN | 95.0 | 0.32±0.16 | 1.53±0.62
GPR | 0.0 | – | –
BGMR | 94.0 | 0.31±0.15 | 1.33±0.40

### 4.1 Base motion planning

The task is to plan the motion of the PR2 mobile base from a random pose in front of the kitchen to another random pose behind the kitchen (Fig. (a)). In this case, the state $\boldsymbol{q}$ is the 3-DoF planar pose of the base, and the task descriptor is $\boldsymbol{w} = (\boldsymbol{q}_0, \boldsymbol{q}_{\text{goal}})$. Although this is an easy problem, TrajOpt actually finds it difficult to solve without a proper initialization. For example, initializing TrajOpt with a straight-line interpolation from $\boldsymbol{q}_0$ to $\boldsymbol{q}_{\text{goal}}$ never manages to find a feasible solution, because the straight line moves the robot through the kitchen while colliding, and the solver gets stuck in poor local optima due to the conflicting gradients. To obtain better initializations for building the database, we initialize TrajOpt with two manually chosen waypoints, one on the left and one on the right of the kitchen.

We consider two cases for building the database: in the first, we only use the right waypoint, while in the second we use both waypoints. We initialize TrajOpt with the straight-line motion from $\boldsymbol{q}_0$ to the waypoint and from the waypoint to $\boldsymbol{q}_{\text{goal}}$. With this setting we build the database, train the function approximators, and obtain the results shown in Tables 1 and 2.

In the first case, the mapping from $\boldsymbol{w}$ to $\boldsymbol{x}$ is unimodal because all movements go through the right. Table 1 shows that the performances of k-NN, GPR and BGMR are quite similar. In the second case, however, the output is multimodal because the database contains two possible ways (modes) of accomplishing the same task. This affects GPR significantly (see Table 2), as GPR averages both modes and outputs a path that goes through the kitchen, while k-NN and BGMR are not affected. k-NN does not average the modes because we use $k = 1$, while BGMR overcomes the multimodality by automatically constructing local models for each mode.

Fig. 9 shows examples of the warm-starts produced by each method in the second case. As expected, GPR provides a warm-start that goes through the kitchen (hence the 0% success rate). With BGMR, if we retrieve the components with the two highest probabilities, both possible solutions are obtained.

**Table 3.** Planning from a fixed initial configuration to random goal configurations.

Method | Success (%) | Conv. time (s) | Cost (rad/s)
---|---|---|---
STD | 80.0 | 0.77±0.37 | 1.83±0.61
k-NN | 91.2 | 0.58±0.29 | 1.93±0.69
GPR | 92.4 | 0.65±0.25 | 1.84±0.57
GPR (PCA) | 92.8 | 0.66±0.26 | 1.83±0.57
BGMR | 88.8 | 0.64±0.26 | 1.85±0.56
BGMR (PCA) | 92.0 | 0.67±0.26 | 1.84±0.58

**Table 4.** Planning from random initial configurations to random goal configurations.

Method | Success (%) | Conv. time (s) | Cost (rad/s)
---|---|---|---
STD | 75.2 | 0.82±0.43 | 1.31±0.74
k-NN | 65.6 | 1.16±0.58 | 1.55±0.88
GPR | 85.6 | 0.85±0.39 | 1.32±0.74
GPR (PCA) | 88.0 | 0.81±0.36 | 1.33±0.73
BGMR | 84.0 | 0.81±0.40 | 1.34±0.76
BGMR (PCA) | 78.3 | 0.88±0.42 | 1.39±0.78
Waypoints | 94.0 | 1.52±0.67 | 1.83±1.34
Ensemble | 97.2 | 1.06±0.41 | 1.42±0.82

### 4.2 Planning from a fixed initial configuration to a random goal configuration

Here $\boldsymbol{q}$ consists of the joint angles of the two arms of the PR2. The task is to move from a fixed initial configuration $\boldsymbol{q}_0$ to a random goal configuration (i.e. $\boldsymbol{w} = \boldsymbol{q}_{\text{goal}}$). The evaluation results are presented in Table 3.

Since each PR2 arm is redundant, the path from $\boldsymbol{q}_0$ to $\boldsymbol{q}_{\text{goal}}$ can be multimodal, which may pose a problem for GPR. However, Table 3 shows that GPR and BGMR perform similarly. This is due to the fact that although redundant robots can achieve a goal configuration in many different ways, planning using optimization here results in similar motions for similar goal configurations. The use of PCA does not improve the performance significantly, but it still helps to reduce the size of the data: for each path, it reduces the number of variables from $nT$ to the number of PCA components, a more than 8-fold reduction, while maintaining the performance.

### 4.3 Planning from a random initial configuration to a random goal configuration

To proceed to a more complex case, the task here is to plan a path from a random initial configuration $\boldsymbol{q}_0$ to a random target $\boldsymbol{q}_{\text{goal}}$. The task descriptor consists of the initial and goal configurations, $\boldsymbol{w} = (\boldsymbol{q}_0, \boldsymbol{q}_{\text{goal}})$. The results are presented in Table 4.

k-NN performs poorly here, even worse than STD, because the dimension of the input space is much larger than in Section 4.2. To achieve good performance, k-NN requires a much denser dataset. GPR outperforms BGMR here.

The last row of Table 4 shows the result of the ensemble method described in Section 3.3. Given an input $\boldsymbol{w}$, the method uses all the function approximators to provide different warm-starts, each of which is used to initialize an instance of TrajOpt in parallel. Once a valid solution is obtained, the other instances of TrajOpt are terminated. This method results in a large boost of the success rate, with convergence time and cost comparable to the other methods. As a comparison, we also include the standard multiple initializations suggested by TrajOpt (labeled ‘Waypoints’). Each such initialization is created by interpolating through a manually defined waypoint. While the success rate is high, the convergence time and the cost increase significantly. In contrast, each initialization in the ensemble method has a good probability of being close to the optimal solution, resulting in a lower cost and convergence time.

**Table 5.** Planning to Cartesian goals from a fixed initial configuration.

Method | Success (%) | Conv. time (s) | Cost (rad/s)
---|---|---|---
STD | 65.2 | 1.10±0.62 | 1.86±0.86
k-NN | 73.6 | 1.28±0.96 | 1.84±0.81
GPR | 66.4 | 1.81±0.96 | 1.87±0.87
GPR (PCA) | 66.8 | 1.68±0.98 | 1.78±0.83
BGMR | 74.4 | 1.37±0.82 | 1.82±0.86
BGMR (PCA) | 77.2 | 1.33±0.75 | 1.84±0.80
METRIC GPR | 86.8 | 0.70±0.30 | 1.49±0.56
Ensemble | 98.0 | 1.50±0.60 | 1.60±0.68

**Table 6.** Whole-body motion planning for the Atlas robot.

Method | Success (%) | Conv. time (s) | Cost (rad/s)
---|---|---|---
STD | 50.8 | 6.31±3.90 | 0.12±0.07
k-NN | 58.8 | 1.48±1.39 | 0.11±0.06
GPR | 54.4 | 1.29±1.09 | 0.10±0.05
GPR (PCA) | 60.0 | 1.54±1.46 | 0.11±0.05
BGMR | 56.4 | 1.32±1.57 | 0.10±0.05
BGMR (PCA) | 58.0 | 1.36±1.16 | 0.11±0.06
Ensemble | 71.2 | 1.46±1.40 | 0.12±0.06

### 4.4 Planning to Cartesian goals from a fixed initial configuration

In Sections 4.2 and 4.3, we used TrajOpt to plan to goals in configuration space. In practical situations, however, the task is often to reach a certain Cartesian pose with the end-effector (e.g., to pick an object on the shelf), instead of planning to a specific joint configuration. One way to solve this problem is to first compute a configuration that achieves the Cartesian pose using an inverse kinematics solver and plan to this configuration, but this does not exploit the flexibility inherent in the task. TrajOpt has an option to plan directly to a Cartesian goal, but this typically requires a longer convergence time and results in a lower success rate than planning to a joint configuration goal.

We present two approaches to use the memory of motion in this problem. In the first approach, we follow a similar procedure as in the previous cases: we formulate the task as $\boldsymbol{w} = (\boldsymbol{p}_r, \boldsymbol{p}_l)$, where $\boldsymbol{p}_r$ and $\boldsymbol{p}_l$ are the Cartesian positions of the right and left hand of the PR2. The database is then constructed and the function approximators are trained. In this approach, TrajOpt plans to a Cartesian goal directly. The second approach relies on the fact that a Cartesian goal corresponds to multiple goals in configuration space. In Section 4.2 we have already constructed several function approximators that can predict an initial guess $\hat{\boldsymbol{x}}$, given a goal in configuration space. The second approach uses one of them as a metric (Sect. 3.2) to choose between the different goals in configuration space. First, given a Cartesian goal, we run an inverse kinematics solver to find joint configurations that satisfy this pose. For each joint configuration, we use the function approximator to predict the initial guess of the robot path to reach that configuration, and we compute the cost of that path. Finally, the goal configuration and the path with the lowest cost are chosen, and TrajOpt is run to reach this goal configuration with the given path as the warm-start. Note that in this second approach, TrajOpt plans to a joint configuration instead of a Cartesian goal. For this approach we choose the GPR method from Section 4.2, and use the label ‘METRIC GPR’ to differentiate it from the first approach.

We present the results in Table 5. Among the methods using the first approach, we note that BGMR yields better results than GPR, because the mapping from the Cartesian goal to the robot path is here multimodal: planning to a Cartesian pose has more redundancy than planning to a joint configuration. This again demonstrates that BGMR handles multimodal outputs better than GPR. However, the second approach outperforms even BGMR, and the improvement over the first approach is very significant on all three criteria. This demonstrates that using the memory as a metric to choose the optimal goal results in large improvements. We point out that the additional computational time required to find the IK solutions and the corresponding warm-starts is negligible compared to the convergence time. Finally, we use the ensemble method that runs all the function approximators in parallel, including METRIC GPR. This boosts the success rate to 98%.

### 4.5 Planning whole-body motion for an Atlas robot

Finally, we also apply our method to planning the motion of the 34-DoF Atlas robot (28 joint DoFs and a 6-DoF root pose). We consider the same task as in Section 4.4, i.e. planning from a fixed initial configuration to a random Cartesian pose, in this case chosen as the location of Atlas’ right hand. The task descriptor $\boldsymbol{w}$ corresponds to the target position of Atlas’ right hand, while the orientation is not constrained. The feet locations are fixed, while the Zero Moment Point (ZMP) is constrained to lie between the two feet. We use here the first approach explained in Section 4.4, i.e. treating the problem as a regression problem where the input is the Cartesian goal and the output is the trajectory, and using the various function approximators to predict the initial guesses. The results are presented in Table 6.

k-NN performs quite well, as the input dimension of $\boldsymbol{w}$ is small (the position of the hand is constrained to be inside the shelf). Unlike in Section 4.4, the performances of GPR and BGMR are quite similar, although the goals are also in Cartesian space. This is due to a difference in implementation: in Section 4.4, given a Cartesian goal, we use an inverse kinematics solver to calculate a joint configuration that satisfies this goal, and compute the initial guess as a straight-line interpolation from the fixed initial configuration to the goal configuration. This initial guess is used when building the database. Due to the redundancy of the PR2 dual arm, similar Cartesian goals can correspond to very different joint configurations, resulting in the multimodality of the solutions in the database. In this Atlas experiment, however, we do not provide initial guesses to TrajOpt when building the database, so TrajOpt always tries to solve the problem with a zero initialization. This results in more uniform solutions, and hence GPR can still perform quite well. Finally, using the ensemble method again shows superior results, increasing the success rate by more than 10%.

Planning for such a high-DoF problem with many constraints (feet locations, ZMP constraint, kinematic constraints) requires a lot of computation time (around 6.3 s on average without warm-start). Using the memory of motion in this complex task further exemplifies the benefit of the approach, as our method speeds up the computation by more than a factor of four. We note that the tasks are sampled randomly, and there is no guarantee that a task is indeed feasible. This explains why even the best method (i.e. the ensemble method) only achieves around a 70% success rate.

## 5 Discussions

### 5.1 Choice of function approximators

In Section 4, we compared the performance of k-NN, GPR and BGMR on different tasks, and showed that they have different characteristics. When the dataset is quite dense or the input space is small, k-NN usually manages to obtain good performance (as shown in Sections 4.1, 4.2, and 4.4), while for a larger input space (Section 4.3) it does not yield good results. GPR performs best when the output is unimodal (Sections 4.2 and 4.3), while for multimodal outputs BGMR performs better than GPR (Section 4.4). This comparison can guide us in selecting the best method for each task. However, it may not be obvious whether a given task (and its solution) is unimodal or multimodal (e.g. compare Sections 4.4 and 4.5). A better way is to combine the different methods via an ensemble method, as we have shown in this paper.

### 5.2 Data requirement

In Fig. 13, we plot the performance of the various methods against the number of training samples, with STD given as the baseline. We choose the task in Section 4.3, since it has the largest input space among the tasks. Interestingly, when the training set is small, GPR performs quite well, while k-NN and BGMR are even worse than STD. As the training size increases, k-NN and BGMR start to approach the performance of GPR. In contrast, the performance of the ensemble method is quite stable even when the training set is small: its success rate is already high, and its convergence time decreases as the training size grows.
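The evaluation behind such a plot is a standard learning-curve loop: fit each method on growing subsets of the database and score it on a fixed test set. A minimal sketch, with `fit` and `evaluate` as hypothetical stand-ins for building the memory and measuring the success rate:

```python
def learning_curve(fit, evaluate, database, sizes):
    """Evaluate a warm-starting method on growing subsets of the database."""
    curve = {}
    for n in sizes:
        model = fit(database[:n])       # build the memory from n samples
        curve[n] = evaluate(model)      # e.g. success rate on a fixed test set
    return curve

# Toy instantiation: "fit" memorizes samples, "evaluate" scores coverage.
database = list(range(100))
fit = lambda subset: set(subset)
evaluate = lambda model: len(model) / 100.0
curve = learning_curve(fit, evaluate, database, sizes=[10, 50, 100])
# curve == {10: 0.1, 50: 0.5, 100: 1.0}
```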

### 5.3 Ensemble method

Using an ensemble method for motion planning has been explored in [5], which uses an ensemble of motion planners. While such an approach also manages to boost the performance successfully, it is not easy to design and set up several motion planners for a given task. In contrast, many function approximators are available and can be used easily, since our problem is formulated as a standard regression problem. We only need to configure one motion planner (in this work, TrajOpt, but other optimization frameworks can also be used) for a given task, unlike in [5]. Another benefit of our ensemble method is that each of the ensemble's components starts from an initial guess that has a good probability of being close to the optimal solution. This reduces the average computational time, as we have shown by comparing it against the multiple-waypoints initialization in Table 4.
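The control flow of the ensemble is simple: launch one solver instance per initial guess and return the first run that succeeds. The sketch below assumes this structure; `solve_from` is a hypothetical stand-in for a TrajOpt call, with success mocked by a `quality` field purely for illustration.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def solve_from(initial_guess):
    """Stand-in for one trajectory-optimization run (e.g. a TrajOpt call).
    Returns (success, result); success is mocked here for illustration."""
    success = initial_guess["quality"] > 0.5
    return success, initial_guess["name"]

def ensemble_solve(initial_guesses):
    """Warm-start one solver instance per guess; return the first success."""
    with ThreadPoolExecutor(max_workers=len(initial_guesses)) as pool:
        futures = [pool.submit(solve_from, g) for g in initial_guesses]
        for fut in as_completed(futures):
            ok, result = fut.result()
            if ok:
                return result
    return None  # all ensemble components failed

# Usage: one guess per function approximator (k-NN, GPR, BGMR predictions).
guesses = [{"name": "knn", "quality": 0.3},
           {"name": "gpr", "quality": 0.9},
           {"name": "bgmr", "quality": 0.7}]
result = ensemble_solve(guesses)
```

Because each component already starts near a plausible solution, the first success typically arrives much earlier than with cold-started parallel runs.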

### 5.4 Dynamic environment

In this work we assume that the environment is static, so that the previously planned trajectories remain valid. When the environment changes, a new memory of motion has to be built. For the simple example in Section 4.1, building the memory takes only 3 minutes of computational time, but the complex example in Section 4.5 takes 3 hours. While parallelization can be used to speed up the building process, more effective strategies would be interesting to explore. In [28], an efficient way of updating a dynamic roadmap when the environment changes is presented. Such a method could possibly be used to modify the existing memory of motion, so that we do not have to rebuild it from scratch but only modify the affected parts. Alternatively, when the environment largely remains the same but a few obstacles are moving (as in many real tasks), we can include these obstacles' locations as additional inputs to the regression problem, at the expense of a larger input size. We will explore these ideas in our future work.
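The last idea amounts to augmenting the task descriptor before regression. A minimal sketch, with hypothetical dimensions (a 3-D Cartesian goal and two moving obstacles):

```python
import numpy as np

def build_input(task_descriptor, obstacle_positions):
    """Concatenate the task descriptor with the moving obstacles' positions,
    so the regressor conditions on the current environment state."""
    return np.concatenate([task_descriptor, np.ravel(obstacle_positions)])

goal = np.array([0.4, -0.2, 0.9])           # hypothetical Cartesian goal
obstacles = np.array([[0.1, 0.0, 0.5],      # two moving obstacles (x, y, z)
                      [-0.3, 0.2, 0.7]])
x = build_input(goal, obstacles)            # 3 + 2*3 = 9-dimensional input
```

The trade-off named in the text is visible here: each tracked obstacle adds dimensions to the input space, so the database must cover obstacle placements as well as goals.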

## 6 Conclusion

We have presented an approach to build a memory of motion to warm-start a trajectory optimization solver, and demonstrated through experiments with the PR2 and Atlas robots that the warm-start can improve the solver's performance. Function approximators and dimensionality reduction are used to learn the mapping between the task descriptor and the corresponding robot path. Three function approximators are considered: k-NN as a baseline, GPR, and BGMR, and their different characteristics have been discussed. The use of PCA also improves the solution, although not very significantly, while reducing the memory storage. We have also shown that the memory of motion can be used as a metric to choose optimally between several alternative goals, which results in a significantly improved performance for the case of Cartesian goal planning. Finally, the different function approximators can be combined via an ensemble method, which boosts the success rate significantly.


### Footnotes

- As in a Gaussian distribution, the mean of a multivariate t-distribution is also its mode.

### References

- (2012) A robot path planning framework that learns from experience. In Proc. IEEE ICRA, pp. 3671–3678. Cited by: §2.
- (2006) Pattern recognition and machine learning. Springer. Cited by: §3.1.2, §3.1.4.
- (2001) Random forests. Machine Learning 45 (1), pp. 5–32. Cited by: §3.3.
- (2016) A tutorial on task-parameterized movement learning and retrieval. Intelligent Service Robotics 9 (1), pp. 1–29. Cited by: §3.1.3.
- (2015) The planner ensemble: motion planning by executing diverse algorithms. In Proc. IEEE ICRA, pp. 2389–2395. Cited by: §5.3.
- (2016) Tutorial on variational autoencoders. arXiv preprint arXiv:1606.05908. Cited by: §3.1.4.
- (2012) On-line motion synthesis and adaptation using a trajectory database. Robotics and Autonomous Systems 60 (10), pp. 1327–1339. Cited by: §2.
- (1997) A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences 55 (1), pp. 119–139. Cited by: §3.3.
- (2013) Gaussian processes for big data. In Conference on Uncertainty in Artificial Intelligence, pp. 282–290. Cited by: §3.1.2.
- (2002) Learning rhythmic movements by demonstration using nonlinear oscillators. In Proc. IEEE/RSJ IROS, pp. 958–963. Cited by: §2.
- (2009) Trajectory prediction: learning to map situations to robot trajectories. In Proc. ICML, pp. 449–456. Cited by: §2.
- (2011) STOMP: stochastic trajectory optimization for motion planning. In Proc. IEEE ICRA, pp. 4569–4574. Cited by: §1, §1.
- (1996) Probabilistic roadmaps for robot path planning. IEEE Transactions on Robotics and Automation 12 (4), pp. 566–580. Cited by: §2.
- (2011) Trajectory planning for optimal robot catching in real-time. In Proc. IEEE ICRA, pp. 3719–3726. Cited by: §2.
- (1998) Rapidly-exploring random trees: a new tool for path planning. Technical report Technical Report TR 98-11, Iowa State University, Computer Science Department. Cited by: §2.
- (2009) Standing balance control using a trajectory library. In Proc. IEEE/RSJ IROS, pp. 3031–3036. Cited by: §2.
- (2018) Using a memory of motion to efficiently warm-start a nonlinear predictive controller. In Proc. IEEE ICRA, pp. 2986–2993. Cited by: §2.
- (2007) Offline and online evolutionary bi-directional rrt algorithms for efficient re-planning in dynamic environments. In Proc. IEEE CASE, pp. 1131–1136. Cited by: §2.
- (2018) Leveraging precomputation with problem encoding for warm-starting trajectory optimization in complex environments. In Proc. IEEE/RSJ IROS, pp. 5877–5884. Cited by: §1, §2.
- (2012) E-graphs: bootstrapping planning with experience graphs.. In Proc. R:SS, Vol. 5, pp. 110. Cited by: §2.
- (2019) Bayesian Gaussian mixture model for robotic policy imitation. IEEE Robotics and Automation Letters (RA-L), pp. 1–7. External Links: Document Cited by: §3.1.3.
- (2005) A unifying view of sparse approximate Gaussian process regression. Journal of Machine Learning Research 6 (Dec), pp. 1939–1959. Cited by: §3.1.2.
- (2006) Gaussian processes for machine learning. MIT Press, Cambridge, MA, USA. Cited by: §3.1.2, §3.1.2.
- (2013) Finding locally optimal, collision-free trajectories with sequential convex optimization. In Proc. R:SS, Vol. 9, pp. 1–10. Cited by: §1, §1, §4.
- (2006) Policies based on trajectory libraries.. In Proc. IEEE ICRA, pp. 3344–3349. Cited by: §2.
- (2018) RoboTSP–a fast solution to the robotic task sequencing problem. In Proc. IEEE ICRA, pp. 1611–1616. Cited by: §3.2.
- (2015) Generalization of optimal motion trajectories for bipedal walking. In Proc. IEEE/RSJ IROS, pp. 1571–1577. Cited by: §2.
- (2017) HDRM: a resolution complete dynamic roadmap for real-time motion planning in complex scenes. IEEE RA-L 3 (1). Cited by: §5.4.
- (2013) CHOMP: covariant Hamiltonian optimization for motion planning. International Journal of Robotics Research 32 (9-10), pp. 1164–1193. Cited by: §1, §1.