Stable, Concurrent Controller Composition for Multi-Objective Robotic Tasks


Anqi Li, Ching-An Cheng, Byron Boots, and Magnus Egerstedt

*This work was sponsored in part by Grant No. W911NF-17-2-0181 from the U.S. Army Research Laboratory DCIST CRA. The authors are with the Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, Atlanta, GA 30332, USA. Email: {anqi.li, cacheng, magnus}@gatech.edu, bboots@cc.gatech.edu
Abstract

Robotic systems often need to consider multiple tasks concurrently. This challenge calls for controller synthesis algorithms that fulfill multiple control specifications while maintaining the stability of the overall system. In this paper, we decompose multi-objective tasks into subtasks, where individual subtask controllers are designed independently and then combined to generate the overall control policy. In particular, we adopt Riemannian Motion Policies (RMPs), a recently proposed controller structure in robotics, and RMPflow, its associated computational framework for combining RMP controllers. We re-establish and extend the stability results of RMPflow through a rigorous Control Lyapunov Function (CLF) treatment. We then show that RMPflow can stably combine individually designed subtask controllers that satisfy certain CLF constraints. This new insight leads to an efficient CLF-based computational framework to generate stable controllers that consider all the subtasks simultaneously. Compared with the original usage of RMPflow, our framework provides users the flexibility to incorporate design heuristics through nominal controllers for the subtasks. We validate the proposed computational framework through numerical simulation and robotic implementation.

I Introduction

Multi-objective tasks are often involved in the control of robotic systems [1, 2, 3, 4]. For example, a group of robots may be tasked with achieving a certain formation and moving toward a goal region, while avoiding collisions with each other and with obstacles [2]. These types of problems call for algorithms that can systematically generate a stable controller capable of fulfilling multiple control specifications simultaneously.

A classic strategy is to first design a controller for each individual control specification, and then provide a high-level rule to switch among them. This idea has been frequently exploited in robotics [5]. For example, it is common practice to switch to a collision avoidance controller when the robot risks colliding with obstacles [5]. The stability of switching systems has been thoroughly investigated, e.g. by finding a common or switched Lyapunov function for the systems among all designed controllers [6, 7, 8, 9]. However, a fundamental limitation shared by these switching approaches is that only a single controller is active at a time and hence only a subset of the control specifications is considered. If not designed properly, some controllers for secondary tasks might take over the operation for most of the time. For example, when a robot navigates in a cluttered environment, the collision avoidance controller can dominate other controllers and the primary tasks may never be considered [2]. Therefore, it may be more desirable to blend controllers rather than impose a hard switch between them, so that all tasks can be considered simultaneously.

In robotics, the strategy of weighting controllers for different tasks has been explored in potential field methods [10, 5]. While such schemes are easy to implement, it can be difficult to provide formal stability guarantees for the overall “blended” system, especially when the weights are state-dependent. In some cases, the stability of the overall system has been shown through a common Lyapunov function [7, 8], but the existence of a common Lyapunov function is not guaranteed. Finding a common Lyapunov function can be particularly challenging for robotics applications because the tasks can potentially conflict, e.g. the robot may need to move through a cluttered environment to reach the goal.

The framework of null-space or hierarchical control handles this problem by assigning priorities to the tasks, and hence to the controllers [1, 11]. The performance of the high-priority tasks can be guaranteed by forcing the lower-priority controllers to act in the null space of the high-priority tasks. However, several problems surface as the number of tasks increases. One problem is the algorithmic singularities introduced by the use of multiple levels of projections [1, 11]. Most algorithms are designed under the assumption of singularity-free conditions, but this assumption is unlikely to hold in practice, especially when there is a large number of tasks, and the system can easily become unstable when algorithmic singularities occur. In addition, similar to the switching scheme, it is possible that secondary controllers, e.g. collision avoidance controllers, become the ones with high priority and the primary task cannot be achieved. While several heuristics [12, 13] have been proposed to shift the control priorities dynamically, whether such systems can be globally stabilized in the presence of algorithmic singularities is still an open question [14].

Control Lyapunov functions (CLFs) and control barrier functions (CBFs) constitute another class of methods to encode multiple control specifications [4, 15, 2]. In the CLF and CBF frameworks, the control specifications are encoded as constraints on the time derivatives of Lyapunov or barrier function candidates, and a control input that satisfies all the constraints is solved through a constrained optimization problem. However, in the case of conflicting specifications, the CLF and CBF frameworks suffer from feasibility problems [16], i.e. there does not exist any controller that satisfies all the control specifications. Although the CLF constraints can be relaxed through slack variables [15], they also add a new set of hyperparameters to trade off the importance of different specifications; care must be taken in tuning these hyperparameters in order to achieve desired performance properties and maintain stability. Finally, it can be hard to encode certain high-dimensional control specifications, such as damping behaviors, as CLF or CBF constraints.

In this paper, we focus on weighting individual controllers. We aim to address two interrelated questions:

  • How can controllers be composed while guaranteeing system stability?

  • How should individual controllers be designed so that they can be easily combined?

Although ensuring stability is challenging for arbitrary blending schemes, we design a systematic process to combine controllers so that the stability of the overall system is guaranteed. Our framework considers all control specifications simultaneously, while providing the flexibility to vary the importance of different controllers based on the robot state. Moreover, instead of considering specifications in the configuration space, we allow for controllers defined directly on the different spaces or manifolds associated with different specifications. (Specifications defined on non-Euclidean manifolds are common in robotics; for example, in obstacle avoidance, obstacles become holes in the space and the geodesics flow around them [3].)

This separation can largely simplify the design and computation of each individual controller, because each controller only concerns a possibly lower-dimensional manifold that is directly relevant to a particular control specification. For example, controllers for different links of a robot manipulator can be designed in their corresponding (possibly non-Euclidean) workspaces. We leverage a recent approach to controller synthesis in robotics, Riemannian Motion Policies (RMPs) [3] and RMPflow [17], which have been successfully deployed on robot manipulators [17, 3] and multi-robot systems [18]. An RMP is a mathematical object designed to describe a controller on a manifold, and RMPflow is a computational framework for combining RMPs designed on different task manifolds into a controller for the entire system. A particular feature of RMPflow is the use of state-dependent importance weightings of controllers based on the properties of the corresponding manifolds. It is shown in [17] that when the RMPs are generated from Geometric Dynamical Systems (GDSs), the combined controller is Lyapunov-stable.

RMPs and RMPflow were initially studied in terms of the geometric structure of second-order differential equations [17], where Riemannian metrics on manifolds (of GDSs) naturally provide a geometrically-consistent notion of task importance and hence a mechanism to combine controllers (i.e. RMPflow). While differential geometry provides a mathematical interpretation of RMPflow, in practice, the restriction to GDSs for control specifications could limit performance and make controller design difficult.

To overcome this limitation, we revisit RMPflow with a rigorous CLF treatment and show that the existing computational framework of RMPflow actually ensures stability for a larger class of systems than GDSs. This discovery is made possible by an alternative stability analysis of RMPflow and an induction lemma that characterizes how the stability of individual controllers is propagated to the combined controller in terms of CLF constraints. Hence, we can reuse RMPflow to stably combine a range of controllers, not limited to the ones consistent with GDSs. To demonstrate, we introduce a computational framework called RMPflow–CLF, where we augment RMPflow with CLF constraints to generate a stable controller given user-specified nominal controllers for each of the control specifications. This allows users to incorporate additional design knowledge given by, e.g. heuristics, motion planners, and human demonstrations, without worrying about the geometric properties of the associated manifolds. RMPflow–CLF can be viewed as a soft version of the QP–CLF framework [4] that guarantees the stability of the overall system, while ensuring feasibility even when control specifications are conflicting.

II Background

We first review RMPflow [3, 17] and CLFs [15, 4], which are different ways to combine control specifications.

II-A Riemannian Motion Policies (RMPs) and RMPflow

Consider a robot with configuration space $\mathcal{C}$, which is a smooth $d$-dimensional manifold. We assume that $\mathcal{C}$ admits a global generalized coordinate $q \in \mathbb{R}^d$ and follow the assumption in [17] that the system can be feedback linearized in such a way that it can be controlled directly through the generalized acceleration (this setup can be extended to torque control as in [1]), i.e. $\ddot{q} = u(q, \dot{q})$. We call $u$ a control policy or a controller, and $(q, \dot{q})$ the state.

The task is often defined on a manifold different from $\mathcal{C}$, called the task space, denoted $\mathcal{T}$. A task may admit further structure as a composition of subtasks (e.g. reaching a goal, avoiding collision with obstacles, etc.). In this case, we can treat the task space as a collection of multiple lower-dimensional subtask spaces, each of which is a manifold. In other words, each subtask space is associated with a control specification, and together the task space describes the overall multi-objective control problem.

Figure 1: An example of an RMP-tree. See text for details.

Ratliff et al. [3] propose Riemannian Motion Policies (RMPs) to represent control policies on manifolds. Consider an $m$-dimensional manifold $\mathcal{M}$ with a global coordinate $x \in \mathbb{R}^m$. An RMP on $\mathcal{M}$ can be represented in two forms, its canonical form $(a, \mathbf{M})^{\mathcal{M}}$ and its natural form $[f, \mathbf{M}]^{\mathcal{M}}$, where $a$ is the desired acceleration, $\mathbf{M} \in \mathbb{R}^{m \times m}$ is the inertial matrix, and $f = \mathbf{M} a$ is the desired force. It is important to note that $\mathbf{M}$ and $f$ do not necessarily correspond to physical quantities; $\mathbf{M}$ defines the importance of an RMP when combined with other RMPs, and the natural form is proposed for computational efficiency.

RMPflow [17] is a recursive algorithm to generate control policies on the configuration space given the RMPs of subtasks. It introduces: 1) a data structure, the RMP-tree, for computational efficiency; and 2) a set of operators, the RMP-algebra, to propagate information across the RMP-tree.

An RMP-tree is a directed tree which encodes the computational structure of the task map from $\mathcal{C}$ to $\mathcal{T}$ (see Fig. 1). In the RMP-tree, a node is associated with the state and an RMP on a manifold, and an edge is augmented with a smooth map from the parent-node manifold to the child-node manifold. In particular, the root node $r$ is associated with the state $(q, \dot{q})$ of the robot and its control policy $[f_r, \mathbf{M}_r]^{\mathcal{C}}$ on the configuration space $\mathcal{C}$, and each leaf node $l_k$ is associated with the RMP $[f_{l_k}, \mathbf{M}_{l_k}]^{\mathcal{T}_k}$ for a subtask, where $\mathcal{T}_k$ is a subtask manifold. Recall that the collection $\{\mathcal{T}_k\}_{k=1}^{K}$ is the task space $\mathcal{T}$, where $K$ is the number of subtasks.

To illustrate how the RMP-algebra operates, consider a node $u$ with $K$ child nodes $v_1, \dots, v_K$. Let $e_j$ denote the edge from $u$ to $v_j$ and let $\psi_{e_j}$ be the associated smooth map. Suppose that $u$ is associated with an RMP $[f_u, \mathbf{M}_u]^{\mathcal{M}}$ on a manifold $\mathcal{M}$ with coordinate $x$, and $v_j$ is associated with an RMP $[f_{v_j}, \mathbf{M}_{v_j}]^{\mathcal{N}_j}$ on a manifold $\mathcal{N}_j$ with coordinate $y_j$. (Note that here we represent the RMPs in their natural form.) The RMP-algebra consists of the following three operators:

  1. pushforward is the operator that forward propagates the state from the parent node $u$ to its child nodes $\{v_j\}_{j=1}^{K}$. Given the state $(x, \dot{x})$ from $u$, it computes $(y_j, \dot{y}_j) = (\psi_{e_j}(x), \mathbf{J}_{e_j}(x)\,\dot{x})$ for each child node $v_j$, where $\mathbf{J}_{e_j} = \partial_x \psi_{e_j}$ is the Jacobian matrix of $\psi_{e_j}$.

  2. pullback is the operator that backward propagates the RMPs from the child nodes to the parent node. Given $\{[f_{v_j}, \mathbf{M}_{v_j}]^{\mathcal{N}_j}\}_{j=1}^{K}$ from the child nodes, the RMP $[f_u, \mathbf{M}_u]^{\mathcal{M}}$ of the parent node is computed as $f_u = \sum_{j=1}^{K} \mathbf{J}_{e_j}^\top \big(f_{v_j} - \mathbf{M}_{v_j} \dot{\mathbf{J}}_{e_j} \dot{x}\big)$ and $\mathbf{M}_u = \sum_{j=1}^{K} \mathbf{J}_{e_j}^\top \mathbf{M}_{v_j} \mathbf{J}_{e_j}$.

  3. resolve maps an RMP from its natural form to its canonical form. Given $[f_u, \mathbf{M}_u]^{\mathcal{M}}$, it outputs $(a_u, \mathbf{M}_u)^{\mathcal{M}}$ with $a_u = \mathbf{M}_u^{\dagger} f_u$, where $\dagger$ denotes the Moore–Penrose inverse.

RMPflow performs control policy generation by running the RMP-algebra on the RMP-tree. It first performs a forward pass, recursively calling pushforward from the root node to the leaf nodes to update the state associated with each node in the RMP-tree. Second, every leaf node evaluates its natural-form RMP given its associated state. Then, RMPflow performs a backward pass, recursively calling pullback from the leaf nodes to the root node to back-propagate the RMPs in natural form. Finally, resolve is applied at the root node to transform the RMP into its canonical form $(a_r, \mathbf{M}_r)^{\mathcal{C}}$, and the control policy is set as $u = a_r$.
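To make the data flow concrete, the following minimal Python sketch implements the three operators on a tree of nodes. The class layout and names are our own illustration (neither [3] nor [17] prescribes an implementation), and leaf nodes are assumed to have evaluated their natural-form RMPs before pullback is called.

```python
import numpy as np

class RMPNode:
    """Minimal sketch of an RMP-tree node (illustrative, not the API of
    any released RMPflow implementation)."""
    def __init__(self, psi=None, J=None, J_dot=None):
        self.psi = psi          # task map from the parent manifold, y = psi(x)
        self.J = J              # Jacobian of psi, as a function of x
        self.J_dot = J_dot      # time derivative of the Jacobian, as a function of (x, x_dot)
        self.children = []
        self.x = self.x_dot = None   # state on this node's manifold
        self.f = self.M = None       # natural-form RMP [f, M]

def pushforward(node):
    """Forward pass: propagate the state from a node to all of its children."""
    for c in node.children:
        c.x = c.psi(node.x)
        c.x_dot = c.J(node.x) @ node.x_dot
        pushforward(c)

def pullback(node):
    """Backward pass: combine the children's natural-form RMPs into the parent's RMP.
    Leaf nodes are assumed to have already evaluated their own (f, M)."""
    for c in node.children:
        pullback(c)
    if node.children:
        n = len(node.x)
        f, M = np.zeros(n), np.zeros((n, n))
        for c in node.children:
            J = c.J(node.x)
            J_dot = c.J_dot(node.x, node.x_dot)
            f += J.T @ (c.f - c.M @ (J_dot @ node.x_dot))
            M += J.T @ c.M @ J
        node.f, node.M = f, M

def resolve(node):
    """Map the root RMP from natural form [f, M] to canonical form (a, M)."""
    return np.linalg.pinv(node.M) @ node.f
```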

RMPflow was originally analyzed based on a differential geometric interpretation. Cheng et al. [17] consider inertial matrices generated by a Riemannian metric on the tangent bundle of the manifold (denoted $T\mathcal{M}$). Let $\mathbf{G}(x, \dot{x})$ be a (projected) Riemannian metric and define the curvature terms

$\Xi_{\mathbf{G}}(x, \dot{x}) := \tfrac{1}{2} \sum_{i=1}^{m} \dot{x}_i\, \partial_{\dot{x}} \mathbf{g}_i(x, \dot{x}), \qquad \xi_{\mathbf{G}}(x, \dot{x}) := \mathbf{G}^{x}(x, \dot{x})\, \dot{x} - \tfrac{1}{2} \nabla_x \big( \dot{x}^\top \mathbf{G}(x, \dot{x})\, \dot{x} \big), \qquad (1)$

where $\mathbf{G}^{x}(x, \dot{x}) := [\partial_x \mathbf{g}_i(x, \dot{x})\, \dot{x}]_{i=1}^{m}$, $\mathbf{g}_i$ is the $i$th column of $\mathbf{G}$, and $\dot{x}_i$ is the $i$th component of $\dot{x}$. The inertial matrix $\mathbf{M}(x, \dot{x})$ is then related to $\mathbf{G}(x, \dot{x})$ through

$\mathbf{M}(x, \dot{x}) = \mathbf{G}(x, \dot{x}) + \Xi_{\mathbf{G}}(x, \dot{x}). \qquad (2)$

Under this geometric interpretation, RMPs on a manifold $\mathcal{M}$ can be (but are not necessarily) generated from a class of systems called Geometric Dynamical Systems (GDSs) [17], whose dynamics are of the form

$\mathbf{M}(x, \dot{x})\, \ddot{x} + \xi_{\mathbf{G}}(x, \dot{x}) = -\nabla_x \Phi(x) - \mathbf{B}(x, \dot{x})\, \dot{x}, \qquad (3)$

where $\mathbf{B}(x, \dot{x})$ is the damping matrix and $\Phi(x)$ is the potential function. When the metric $\mathbf{G}$ does not depend on the velocity $\dot{x}$, GDSs reduce to the widely studied Simple Mechanical Systems [19].
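As a small illustration of how (2) and (3) fit together, the sketch below solves the GDS dynamics for the acceleration, assuming the metric, curvature terms, damping, and potential gradient have already been evaluated at the current state (the function and argument names are ours).

```python
import numpy as np

def gds_acceleration(x_dot, G, Xi_G, xi_G, B, grad_Phi):
    """Solve the GDS dynamics (3) for the acceleration:
        (G + Xi_G) xdd = -grad_Phi - B x_dot - xi_G,
    where M = G + Xi_G is the inertial matrix in (2). All arguments are
    assumed to be evaluated at the current state (x, x_dot)."""
    M = G + Xi_G
    return np.linalg.solve(M, -grad_Phi - B @ x_dot - xi_G)
```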

The stability properties of RMPflow are analyzed in [17] under the assumption that every leaf-node RMP is specified by a GDS (3). Before stating the stability theorem, let us define the metric, damping matrix, and potential function for every node in the RMP-tree. For a leaf node, its metric, damping matrix, and potential are defined naturally by its underlying GDS. For a non-leaf node $u$ with children $\{v_j\}_{j=1}^{K}$, these terms are defined recursively by the relationship

$\mathbf{G}_u = \sum_{j=1}^{K} \mathbf{J}_{e_j}^\top \mathbf{G}_{v_j} \mathbf{J}_{e_j}, \qquad \mathbf{B}_u = \sum_{j=1}^{K} \mathbf{J}_{e_j}^\top \mathbf{B}_{v_j} \mathbf{J}_{e_j}, \qquad \Phi_u = \sum_{j=1}^{K} \Phi_{v_j} \circ \psi_{e_j}, \qquad (4)$

where $\mathbf{G}_{v_j}$, $\mathbf{B}_{v_j}$, and $\Phi_{v_j}$ are the metric, damping matrix, and potential function of the $j$th child. The stability results for RMPflow are stated below.

Theorem II.1.

[17] Let $\mathbf{G}_r$, $\mathbf{B}_r$, and $\Phi_r$ be the metric, damping matrix, and potential function of the root node defined in (4). If each leaf node is given by a GDS, $\mathbf{B}_r \succ 0$, and $\mathbf{M}_r = \mathbf{G}_r + \Xi_{\mathbf{G}_r}$ is nonsingular, then the system converges to the forward invariant set $\mathcal{C}_\infty := \{(q, \dot{q}) : \nabla_q \Phi_r(q) = 0,\ \dot{q} = 0\}$.

II-B Control Lyapunov Functions (CLFs)

Control Lyapunov Function (CLF) methods [15, 4, 20] encode control specifications as Lyapunov function candidates. In these methods, controllers are designed to satisfy the inequality constraints on the time derivative of the Lyapunov function candidates.

Consider a dynamical system in control-affine form,

$\dot{s} = f(s) + g(s)\, u, \qquad (5)$

where $s$ and $u$ are the state and control input of the system. We assume that $f$ and $g$ are locally Lipschitz continuous and that the system (5) is forward complete, i.e. $s(t)$ is defined for all $t \ge 0$. For the second-order systems considered by RMPflow, we have $s = (q, \dot{q})$ and

$f(s) = \begin{bmatrix} \dot{q} \\ 0 \end{bmatrix}, \qquad g(s) = \begin{bmatrix} 0 \\ I \end{bmatrix}. \qquad (6)$

Suppose that a Lyapunov function candidate $V(s)$ is designed for a control specification. The control input is then required to satisfy a CLF constraint, e.g. $\dot{V}(s, u) \le -\alpha(V(s))$, where $\alpha$ is a locally Lipschitz class-$\mathcal{K}$ function [21] (i.e. $\alpha$ is strictly increasing and $\alpha(0) = 0$). In the case of a control-affine system, the CLF constraint becomes a linear inequality constraint on the control input $u$ given the state $s$,

$L_f V(s) + L_g V(s)\, u \le -\alpha(V(s)), \qquad (7)$

where $L_f V$ and $L_g V$ are the Lie derivatives of $V$ along $f$ and $g$, respectively.

When there are multiple control specifications, one can design Lyapunov function candidates $V_1, \dots, V_n$ separately, one for each specification. The controller synthesis problem then becomes finding a controller that satisfies all the linear inequalities given by the Lyapunov function candidates. Morris et al. [4] propose a computational framework, QP–CLF, that solves for the controller through a quadratic programming (QP) problem that augments the constraints with a quadratic objective:

$u^*(s) = \arg\min_{u}\ \tfrac{1}{2}\, u^\top H(s)\, u + F(s)^\top u \qquad (8)$
s.t. $\ L_f V_i(s) + L_g V_i(s)\, u \le -\alpha_i(V_i(s)), \qquad i = 1, \dots, n.$

However, when the specifications are conflicting, it may not be possible to enforce the CLF constraints for all $i$, since the optimization problem (8) can become infeasible [4]. In [15], Ames et al. introduce slack variables so that the optimization problem is always feasible. Let $\bar{u} = (u, \delta_1, \dots, \delta_n)$ denote all the decision variables. Then the relaxed optimization problem becomes,

$\bar{u}^*(s) = \arg\min_{\bar{u}}\ \tfrac{1}{2}\, \bar{u}^\top \bar{H}(s)\, \bar{u} + \bar{F}(s)^\top \bar{u} \qquad (9)$
s.t. $\ L_f V_i(s) + L_g V_i(s)\, u \le -\alpha_i(V_i(s)) + \delta_i, \qquad i = 1, \dots, n,$

where $\bar{H}$ and $\bar{F}$ encode how the original objective function and the CLF constraints are balanced. However, care must be taken in tuning $\bar{H}$ and $\bar{F}$ to achieve the desired performance properties and maintain stability.
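For concreteness, the following sketch poses both QPs with CVXPY. The Lie derivatives and class-K terms are assumed to be precomputed by the user, and the quadratic objective is simplified to a squared distance from a reference input, which is only one of the possible choices for the objective in (8) and (9).

```python
import cvxpy as cp
import numpy as np

def qp_clf(u_ref, LfV, LgV, alphaV, slack_weight=None):
    """Solve the QP-CLF problem (8), or its relaxed version (9) when
    slack_weight is given. LfV[i], LgV[i], alphaV[i] are the precomputed
    values of L_f V_i(s), L_g V_i(s), and alpha_i(V_i(s)) at the current
    state (hypothetical inputs left to the user)."""
    u = cp.Variable(len(u_ref))
    cost = cp.sum_squares(u - u_ref)          # simplified quadratic objective
    constraints = []
    if slack_weight is None:
        # Hard CLF constraints: L_f V_i + L_g V_i u <= -alpha_i(V_i)
        for Lf, Lg, a in zip(LfV, LgV, alphaV):
            constraints.append(Lf + Lg @ u <= -a)
    else:
        # Relaxed constraints with nonnegative slack variables delta_i
        delta = cp.Variable(len(LfV), nonneg=True)
        for i, (Lf, Lg, a) in enumerate(zip(LfV, LgV, alphaV)):
            constraints.append(Lf + Lg @ u <= -a + delta[i])
        cost = cost + slack_weight * cp.sum_squares(delta)
    cp.Problem(cp.Minimize(cost), constraints).solve()
    return u.value
```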

III The CLF Interpretation of RMPflow

The goal of this paper is to combine control policies specified for subtask manifolds into a control policy for the robot with stability guarantees. RMPflow provides a favorable computational framework but its original analysis is limited to subtask control policies generated by GDSs. This assumption is rather unsatisfying, as the users need to encode the control specifications as GDS behaviors. Further, this restriction can potentially limit the performance of the subtasks and result in unnecessary energy consumption by the system. Although empirically RMPflow has been shown to work with non-GDS leaf policies [17], it is unclear if the overall system is still stable.

In this section, we show that the RMP-algebra actually preserves the stability of a wider range of leaf-node control policies than GDSs. We relax the original GDS assumption in [17] to a more general CLF constraint on each leaf node, and provide a novel stability analysis of RMPflow. These results allow us to reuse RMPflow for combining a more general class of control policies, which we will demonstrate by combining controllers based on CLF constraints.

III-A An Induction Lemma

In order to establish CLF constraints on leaf nodes that guarantee stability, we first need to understand how the RMP-algebra, especially the pullback operator, connects the stability results of the child nodes to the parent node.

Again, let us consider a node $u$ with child nodes $v_1, \dots, v_K$, in which $u$ is associated with a manifold $\mathcal{M}$ with coordinate $x$, and $v_j$ is associated with a manifold $\mathcal{N}_j$ with coordinate $y_j$. In addition, let $\psi_{e_j}$ be the smooth map between the manifolds $\mathcal{M}$ and $\mathcal{N}_j$. We further assume that $\psi_{e_j}$ is surjective, i.e., $\psi_{e_j}(\mathcal{M}) = \mathcal{N}_j$.

Let us associate each child node $v_j$ with a proper, continuously differentiable, and lower-bounded potential $\Phi_{v_j}(y_j)$ on its manifold, along with a continuously differentiable Riemannian metric $\mathbf{G}_{v_j}(y_j, \dot{y}_j)$ on its tangent bundle $T\mathcal{N}_j$. Then, for node $v_j$, there is a natural Lyapunov function candidate,

$V_{v_j}(y_j, \dot{y}_j) = \Phi_{v_j}(y_j) + \tfrac{1}{2}\, \dot{y}_j^\top \mathbf{G}_{v_j}(y_j, \dot{y}_j)\, \dot{y}_j, \qquad (10)$

and an associated natural-form RMP $[f_{v_j}, \mathbf{M}_{v_j}]^{\mathcal{N}_j}$, where $f_{v_j}$ is the force policy and $\mathbf{M}_{v_j}$ is defined by $\mathbf{G}_{v_j}$ as in (2). We shall further assume that $f_{v_j}$ is locally Lipschitz continuous for the ease of later analysis. By construction of the RMP-algebra, these Lyapunov function candidates and RMPs of the child nodes define, for the parent node $u$, a Lyapunov function candidate

$V_u(x, \dot{x}) = \Phi_u(x) + \tfrac{1}{2}\, \dot{x}^\top \mathbf{G}_u(x, \dot{x})\, \dot{x}, \qquad (11)$

where $\Phi_u$ and $\mathbf{G}_u$ are given in (4).

The following lemma states how the decay rate of $V_u$ is connected to the decay rates of $V_{v_1}, \dots, V_{v_K}$ via pullback.

Lemma III.1.

For each child node $v_j$, assume that $f_{v_j}$ renders $\dot{V}_{v_j}(y_j, \dot{y}_j) \le 0$ for $V_{v_j}$ in (10). If the parent node $u$ follows the dynamics $\mathbf{M}_u(x, \dot{x})\, \ddot{x} = f_u(x, \dot{x})$, where $f_u$ and $\mathbf{M}_u$ are given by pullback, then $\dot{V}_u(x, \dot{x}) \le 0$ for $V_u$ in (11).

Proof.

For notational convenience, we suppress the arguments of functions. First, note that,

$\dot{V}_u = \dot{x}^\top \nabla_x \Phi_u + \dot{x}^\top \mathbf{G}_u \ddot{x} + \tfrac{1}{2}\, \dot{x}^\top \dot{\mathbf{G}}_u \dot{x}. \qquad (12)$

As $\mathbf{G}_u$ is a function of both $x$ and $\dot{x}$, following a similar derivation as in [17], we can show

$\dot{V}_u = \dot{x}^\top \big( \mathbf{M}_u \ddot{x} + \nabla_x \Phi_u + \xi_{\mathbf{G}_u} \big), \qquad (13)$

where $\mathbf{M}_u$ and $\xi_{\mathbf{G}_u}$ are the inertial matrix and the curvature term defined in Section II-A.

Note that $\mathbf{M}_u \ddot{x} = f_u$, where $f_u$ is given by the RMP $[f_u, \mathbf{M}_u]^{\mathcal{M}}$ from pullback. Hence, the first term can be rewritten as

$\dot{x}^\top \mathbf{M}_u \ddot{x} = \dot{x}^\top f_u = \sum_{j=1}^{K} \dot{x}^\top \mathbf{J}_{e_j}^\top \big( f_{v_j} - \mathbf{M}_{v_j} \dot{\mathbf{J}}_{e_j} \dot{x} \big) = \sum_{j=1}^{K} \dot{y}_j^\top f_{v_j} - \dot{y}_j^\top \mathbf{M}_{v_j} \dot{\mathbf{J}}_{e_j} \dot{x}.$

Combining this with the relationships $\nabla_x \Phi_u = \sum_{j=1}^{K} \mathbf{J}_{e_j}^\top \nabla_{y_j} \Phi_{v_j}$ and $\xi_{\mathbf{G}_u} = \sum_{j=1}^{K} \mathbf{J}_{e_j}^\top \big( \xi_{\mathbf{G}_{v_j}} + \mathbf{M}_{v_j} \dot{\mathbf{J}}_{e_j} \dot{x} \big)$ shown in [17], the time derivative of $V_u$ can then be simplified as

$\dot{V}_u = \sum_{j=1}^{K} \dot{y}_j^\top \big( f_{v_j} + \nabla_{y_j} \Phi_{v_j} + \xi_{\mathbf{G}_{v_j}} \big). \qquad (14)$

By applying (13) to child node $v_j$ under the assumption on $f_{v_j}$, we also have

$\dot{y}_j^\top \big( f_{v_j} + \nabla_{y_j} \Phi_{v_j} + \xi_{\mathbf{G}_{v_j}} \big) = \dot{V}_{v_j} \le 0. \qquad (15)$

The statement then follows from the two equalities. ∎

We can use Lemma III.1 to infer the overall stability of RMPflow. For an RMP-tree with $K$ leaf nodes, let leaf node $l_k$ be defined on the subtask space $\mathcal{T}_k$ with coordinate $x_k$ and have a natural Lyapunov function candidate

$V_k(x_k, \dot{x}_k) = \Phi_k(x_k) + \tfrac{1}{2}\, \dot{x}_k^\top \mathbf{G}_k(x_k, \dot{x}_k)\, \dot{x}_k, \qquad (16)$

for some potential function $\Phi_k$ and positive-definite Riemannian metric $\mathbf{G}_k$ defined as above. By Lemma III.1, if each leaf node satisfies a CLF constraint,

$\dot{V}_k(x_k, \dot{x}_k) \le 0, \qquad (17)$

then a similar constraint is satisfied by the root node. This observation is summarized below without proof.

Proposition III.2.

For each leaf node $l_k$, assume that $f_k$ renders (17) for $V_k$ in (16). Consider the Lyapunov function candidate $V_r$ at the root node defined through (11). Then, for the root-node control policy of RMPflow, it holds that $\dot{V}_r(q, \dot{q}) \le 0$, where $V_r$ is evaluated with $x_k = \psi_k(q)$ and $\dot{x}_k = \mathbf{J}_k(q)\, \dot{q}$, $\psi_k$ is the map from $\mathcal{C}$ to $\mathcal{T}_k$, which can be obtained by composing the maps from the root node to the leaf node on the RMP-tree, and $\mathbf{J}_k = \partial_q \psi_k$.

Note that Proposition III.2 provides an alternative way to show the stability results of RMPflow.

Corollary III.2.1.

For each leaf node $l_k$, assume that the leaf-node policy is given by a GDS on $\mathcal{T}_k$ with metric $\mathbf{G}_k$, damping matrix $\mathbf{B}_k$, and potential $\Phi_k$. Consider the Lyapunov function candidate $V_r$ at the root node defined recursively through (11). Then we have, under the resulting control policy from RMPflow, $\dot{V}_r(q, \dot{q}) = -\dot{q}^\top \mathbf{B}_r(q, \dot{q})\, \dot{q} \le 0$, where $\mathbf{B}_r$ is defined recursively through (4).

With Corollary III.2.1, we can then show Theorem II.1 by invoking LaSalle’s invariance principle [21].

III-B Global Stability Properties

More importantly, by Proposition III.2, we can find how CLF constraints are propagated from the leaf nodes to the root node through pullback.

Proposition III.3.

For each leaf node $l_k$, assume that $f_k$ renders, for $V_k$ in (16),

$\dot{V}_k(x_k, \dot{x}_k) \le -\alpha_k(\|\dot{x}_k\|), \qquad (18)$

where $\alpha_k$ is a locally Lipschitz continuous class-$\mathcal{K}$ function [21]. Consider the Lyapunov function candidate $V_r$ at the root node defined recursively through (11). Then

$\dot{V}_r(q, \dot{q}) \le -\sum_{k=1}^{K} \alpha_k\big(\|\mathbf{J}_k(q)\, \dot{q}\|\big) \qquad (19)$

under the resulting control policy from RMPflow.

With this insight, we state a new stability theorem for RMPflow by applying LaSalle’s invariance principle. For simplicity, we assume that the inertial matrix $\mathbf{M}_r$ at the root node is nonsingular so that the actual control input can be solved for through the resolve operation; a sufficient condition for $\mathbf{M}_r$ to be nonsingular is provided in [17].

Theorem III.4.

For each leaf node $l_k$, assume that $f_k$ renders (18). Suppose that $\mathbf{M}_r$ is nonsingular and that the task space $\mathcal{T}$ is an immersion of the configuration space $\mathcal{C}$. Then the control policy generated by RMPflow renders the system convergent to the forward invariant set

$\mathcal{S}_1 := \Big\{ (q, \dot{q}) : \dot{q} = 0,\ \textstyle\sum_{k=1}^{K} \mathbf{J}_k^\top f_k = 0 \Big\}. \qquad (20)$

Further, if for all leaf nodes $f_k = -\nabla_{x_k} \Phi_k$ when $\dot{x}_k = 0$, then the system converges to the forward invariant set

$\mathcal{S}_2 := \big\{ (q, \dot{q}) : \dot{q} = 0,\ \nabla_q \Phi_r(q) = 0 \big\}, \qquad (21)$

where $\Phi_r$ is the potential in $V_r$ defined recursively in (4).

Proof.

By assumption, $V_r$ is proper, continuously differentiable, and lower bounded. Hence, the system converges to the largest invariant set in $\{(q, \dot{q}) : \dot{V}_r(q, \dot{q}) = 0\}$ by LaSalle’s invariance principle [21]. By (19) in Proposition III.3, $\dot{V}_r = 0$ if and only if $\mathbf{J}_k \dot{q} = 0$ for all $k$. Since $\mathcal{C}$ is immersed in $\mathcal{T}$, we have $\dot{q} = 0$. Hence, the system converges to a forward invariant set in $\{(q, \dot{q}) : \dot{q} = 0\}$. Any forward invariant set in $\{(q, \dot{q}) : \dot{q} = 0\}$ must have $\ddot{q} = 0$, which implies $f_r = 0$ as $\mathbf{M}_r$ is nonsingular. Note that $f_r$ is given by the pullback operation recursively, hence,

$f_r = \sum_{k=1}^{K} \mathbf{J}_k^\top \big( f_k - \mathbf{M}_k \dot{\mathbf{J}}_k \dot{q} \big) = \sum_{k=1}^{K} \mathbf{J}_k^\top f_k,$

where the last equality follows from $\dot{q} = 0$. Thus, the system converges to the forward invariant set in (20).

Now, assume that $f_k = -\nabla_{x_k} \Phi_k$ when $\dot{x}_k = 0$ (which is implied by $\dot{q} = 0$). Notice that, by the definition of $\Phi_r$ in (4), $\Phi_r = \sum_{k=1}^{K} \Phi_k \circ \psi_k$. By the chain rule,

$\nabla_q \Phi_r = \sum_{k=1}^{K} \mathbf{J}_k^\top \nabla_{x_k} \Phi_k.$

Hence $\sum_{k=1}^{K} \mathbf{J}_k^\top f_k = 0$ implies $\nabla_q \Phi_r = 0$. Thus, the system converges forwardly to (21). ∎

Theorem III.4 implies that subtask controllers satisfying CLF constraints (18) can be stably combined by RMPflow.

IV A Computational Framework for RMPflow with CLF Constraints

We introduce a computational framework for controller synthesis based on the stability results presented in Section III. The main idea is to leverage Proposition III.3, which says that RMPflow is capable of preserving CLF constraints of a certain form. Recall that for leaf node $l_k$, the constraint on the time derivative of the Lyapunov function in Proposition III.3 is $\dot{V}_k(x_k, \dot{x}_k) \le -\alpha_k(\|\dot{x}_k\|)$. Combined with the particular choice of leaf-node Lyapunov function candidate in (16), this yields a CLF constraint (this is a linear constraint with respect to $f_k$; when $\dot{x}_k = 0$, the constraint (22) holds trivially because both sides equal zero)

$\dot{x}_k^\top \big( f_k + \nabla_{x_k} \Phi_k(x_k) + \xi_{\mathbf{G}_k}(x_k, \dot{x}_k) \big) \le -\alpha_k(\|\dot{x}_k\|), \qquad (22)$

where $\xi_{\mathbf{G}_k}$ is defined in (1). Proposition III.3 shows that, when the leaf-node control policies satisfy (22), RMPflow yields a stable controller under suitable conditions. This provides a constructive principle for synthesizing controllers.

IV-A Algorithm Details

Assume that a nominal controller $\hat{a}_k$ is provided by the specification of subtask $k$. We design the leaf-node controller as a minimally invasive controller that modifies the nominal controller as little as possible while satisfying the CLF constraint (22):

$f_k = \arg\min_{f}\ \tfrac{1}{2}\, (f - \hat{f}_k)^\top \mathbf{P}_k\, (f - \hat{f}_k) \quad \text{s.t.} \quad \dot{x}_k^\top \big( f + \nabla_{x_k} \Phi_k(x_k) + \xi_{\mathbf{G}_k}(x_k, \dot{x}_k) \big) \le -\alpha_k(\|\dot{x}_k\|), \qquad (23)$

where $\mathbf{P}_k \succ 0$, $\hat{f}_k = \mathbf{M}_k \hat{a}_k$, and $\mathbf{M}_k$ is given by $\mathbf{G}_k$ through (2). Possible choices of $\mathbf{P}_k$ include the identity matrix and the inverse of the inertial matrix $\mathbf{M}_k^{-1}$. In particular, $\mathbf{P}_k = \mathbf{M}_k^{-1}$ yields an objective function equivalent to $\tfrac{1}{2}\, \|a_k - \hat{a}_k\|_{\mathbf{M}_k}^2$, where $a_k = \mathbf{M}_k^{-1} f_k$ is the acceleration policy of the node.

We combine this minimally invasive controller design with RMPflow to obtain a new computational framework for controller synthesis, called RMPflow–CLF. RMPflow–CLF follows the same procedure as the original RMPflow [17], as discussed in Section II-A. The difference is that the leaf nodes solve for their RMPs through the optimization problem (23) during the evaluation step. Note that (23) is a QP with a single linear constraint, so it can be solved analytically by projecting the nominal controller onto the half-plane given by the constraint, as sketched below.
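A minimal sketch of this analytic solution is given below, assuming the CLF constraint (22) has been rearranged into a single half-plane constraint $c^\top f \le b$ on the decision variable (here $c = \dot{x}_k$ and $b$ collects the remaining terms of (22)); the function and variable names are illustrative.

```python
import numpy as np

def minimally_invasive(f_nom, c, b, P=None):
    """Solve  min_f 0.5 (f - f_nom)^T P (f - f_nom)  s.t.  c^T f <= b,
    i.e. the leaf-node problem (23) with the CLF constraint written as a
    single half-plane constraint. P defaults to the identity, giving a
    Euclidean projection of the nominal command onto the half-plane."""
    if P is None:
        P = np.eye(len(f_nom))
    violation = c @ f_nom - b
    if violation <= 0.0:
        return f_nom                      # nominal command already satisfies the constraint
    # KKT conditions with the constraint active: f = f_nom - lambda * P^{-1} c
    Pinv_c = np.linalg.solve(P, c)
    return f_nom - (violation / (c @ Pinv_c)) * Pinv_c
```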

IV-B Stability Properties

The form of (23) together with Theorem III.4 and the results of [4] yields the following theorem:

Theorem IV.1.

Under the assumptions in Theorem III.4, if the nominal controllers $\hat{a}_k$, the functions $\alpha_k$, and all edge Jacobians and their derivatives are locally Lipschitz continuous, then the control policy generated by RMPflow–CLF is locally Lipschitz continuous and renders the system forward convergent to (20).

Proof.

By Theorem III.4, the system converges to (20). By [4], for all $k$, the solution $f_k$ of (23) is locally Lipschitz continuous. Since, under the assumptions, pullback and resolve preserve Lipschitz continuity, the statement follows. ∎

Note that RMPflow–CLF can be interpreted as a soft version of the QP–CLF formulation [4], which enforces the decay rates of all Lyapunov function candidates through (8). Meanwhile, compared with the QP–CLF framework with slack variables [15], which requires the users to design an objective function that trades off between control specifications, RMPflow–CLF provides a structured way to implicitly generate such an objective function so that the system is always stable.

It should be noted that the system can also be stabilized by directly enforcing a single constraint on the time derivative of the combined Lyapunov function candidate (19) at the root node, rather than enforcing the CLF constraint at every leaf node (18). However, this can be less desirable: although the stability can be guaranteed for the resulting controller, the behavior of each individual subtask is no longer explicitly regulated. By contrast, the approach with leaf-node CLF constraints allows the users to design and test the controllers from (23) independently. This allows for designing controllers that can be applied to robots with different kinematic structures, which is a significant feature of RMPflow [17].

V Experimental Results

In this section, we compare the proposed RMPflow–CLF framework with the original RMPflow framework [17]. A video of the experimental results can be found at https://youtu.be/eU_x8Yklv-4. The original RMPflow framework [17] is referred to as RMPflow–GDS to differentiate it from RMPflow–CLF.

V-A Simulation Results

We present two simulated examples: a 2D goal reaching task and a multi-robot goal reaching example.

V-A1 2D Goal Reaching

We first consider the 2D goal reaching task presented in [17]. In this example, a planar robot with double-integrator dynamics is expected to move to a goal position without colliding with a circular obstacle. As in [17], the RMP-tree has a collision avoidance leaf-node RMP and a goal attractor leaf-node RMP. For the RMPflow–CLF framework, we use the collision avoidance RMP in [17] and keep the choice of metrics and potential functions for the goal attractor RMP consistent with [17]. For the goal attractor RMP, we present several nominal controllers: (i) a pure potential-based nominal controller; (ii) a spiral nominal controller, given by the potential-based controller rotated by a fixed angle $\theta$, i.e. $\hat{a}_{\mathrm{spiral}} = \mathbf{R}(\theta)\, \hat{a}_{\mathrm{pot}}$ with $\mathbf{R}(\theta)$ being the rotation matrix; and (iii) a sinusoidal nominal controller. For the minimally invasive controller, we choose $\mathbf{P}_k$ so as to minimize the Euclidean distance between the nominal controller and the solution to the optimization problem (23). We implement the RMPflow–GDS framework with the same choice of parameters as [17]. The trajectories under the different nominal controllers are shown in Fig. 2. Although similar behaviors could possibly be realized within the RMPflow–GDS framework by carefully redesigning the metric and potential function, the RMPflow–CLF framework can produce a rich class of behaviors without concern for the geometric properties of the subtask manifold.
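For illustration, a spiral nominal controller of the kind described above can be written as a rotated potential-based attractor; the gain and rotation angle below are placeholder values, not the parameters used in the experiments.

```python
import numpy as np

def spiral_nominal(x, x_goal, k=1.0, theta=np.pi / 6):
    """Hypothetical spiral goal attractor: the potential-based command
    -k (x - x_goal) rotated by a fixed angle theta. Both k and theta are
    illustrative placeholders."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R @ (-k * (x - x_goal))
```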

(a) RMPflow–CLF
(b) RMPflow–GDS [17]
Figure 2: 2D goal reaching task with a circular obstacle (grey). (a) RMPflow–CLF with three choices of nominal controllers, resulting in different goal reaching behaviors. (b) RMPflow–GDS with the goal attractor given by a GDS. The behavior is limited by the choice of the metric and the potential function.

V-A2 Multi-Robot Goal Reaching

RMPflow–CLF guarantees system stability even when the nominal controllers are not inherently stable or asymptotically stable. Therefore, the user can incorporate design knowledge given by, e.g. motion planners, human demonstrations or even heuristics, into the nominal controllers. To illustrate this, we consider a multi-robot goal reaching task, where the robots are tasked with moving to the opposite side of the arena without colliding. If the robots move in straight lines, their trajectories would intersect near the center of the arena. Due to the symmetric configuration, the system can easily deadlock with robots moving very slowly or stopping near the center to avoid collisions. This problem can be fixed if the symmetry is broken. One possible solution is to design nominal controllers for the goal attractors so that the robots move along curves.

We compare the spiral goal attractor RMP with the GDS goal attractor RMP presented in [18]. In both cases, an RMP-tree structure similar to the centralized RMP-tree structure in [18] is used, and collision avoidance RMPs are defined for pairs of robots with the same choice of parameters. The trajectories of the robots under the spiral nominal controllers are shown in Fig. 3(a). The spiral controllers produce smooth motion, whereas the GDS goal attractors produce jerky motion when the robots are near the center due to the deadlock caused by the symmetric configuration (Fig. 3(b)).

(a) RMPflow–CLF
(b) RMPflow–GDS [17]
Figure 3: Multi-robot goal reaching task. (a) RMPflow–CLF with spiral nominal controllers. The robots move to their goal smoothly. (b) RMPflow–GDS with the goal attractor given by a GDS. Due to the symmetry of the configuration, the system suffers from deadlock when the robots are near the center: the robots oscillate around the deadlock configuration.

V-B Robotic Implementation

We present an experiment conducted on the Robotarium [22], a remotely accessible swarm robotics platform. In the experiment, five robots are tasked with preserving a regular pentagon formation while the leader has the additional task of reaching a goal. We use the same RMP-tree structure and parameters for most leaf-node RMPs as described in the formation preservation experiment in [18]. The only difference is that we replace the GDS goal attractor in [18] with the spiral nominal controller augmented with the CLF constraint through (23). Fig. 4 presents snapshots from the formation preservation experiment. In Fig. 4(a)-(c), we see that the leader approaches the goal along a spiral trajectory specified by the nominal controller, while the other subtask controllers preserve the inter-robot distances and avoid collisions. This shows the efficacy of our controller synthesis framework. By contrast, the leader moves in a straight line under the goal attractor given by the GDS (see Fig. 4(d)-(f)). Although it could be possible to redesign the subtask manifold so that there exists a GDS that produces similar behaviors, the RMPflow–CLF framework provides the user additional flexibility to shape the behaviors without worrying about the geometric properties of the subtask manifolds.

(a) Spiral CLF controller
(b) Spiral CLF controller
(c) Spiral CLF controller
(d) GDS controller
(e) GDS controller
(f) GDS controller
Figure 4: Multi-robot formation preservation task. The robots are tasked with preserving a regular pentagon formation while the leader has an additional task of reaching a goal position. (a)-(c) RMPflow–CLF with a spiral nominal controller. (d)-(f) RMPflow–GDS. The goal (red star) and the trajectories (blue curves) of the leader robot are projected onto the environment through an overhead projector. RMPflow–CLF shapes the goal-reaching behavior through the spiral nominal controller.

VI Conclusions

We consider robot control with multiple control specifications by adopting Riemannian Motion Policies (RMPs), a recent concept in robotics for describing control policies on manifolds, and RMPflow, the computational structure for combining these controllers. The stability results of RMPflow are re-established and extended through a rigorous CLF treatment. This new analysis shows that any subtask controllers satisfying certain CLF constraints can be stably combined by RMPflow, while the original analysis in [17] only provides stability guarantees for a limited class of controllers. Based on this analysis, we propose a new computational framework, RMPflow–CLF, to stably combine individually designed subtask controllers. This formulation provides users the flexibility to shape the behaviors of subtasks through nominal subtask controllers given by, e.g. heuristics, human demonstrations, and motion planners. The proposed RMPflow–CLF framework is validated through numerical simulation and deployment on real robots.

References

  • [1] J. Peters, M. Mistry, F. Udwadia, J. Nakanishi, and S. Schaal, “A unifying framework for robot control with redundant dofs,” Autonomous Robots, vol. 24, no. 1, pp. 1–12, 2008.
  • [2] L. Wang, A. D. Ames, and M. Egerstedt, “Multi-objective compositions for collision-free connectivity maintenance in teams of mobile robots,” in IEEE 55th Conference on Decision and Control, pp. 2659–2664, IEEE, 2016.
  • [3] N. D. Ratliff, J. Issac, D. Kappler, S. Birchfield, and D. Fox, “Riemannian motion policies,” arXiv preprint arXiv:1801.02854, 2018.
  • [4] B. Morris, M. J. Powell, and A. D. Ames, “Sufficient conditions for the lipschitz continuity of qp-based multi-objective control of humanoid robots,” in 52nd IEEE Conference on Decision and Control, pp. 2920–2926, IEEE, 2013.
  • [5] R. C. Arkin et al., Behavior-based robotics. MIT press, 1998.
  • [6] D. Liberzon, J. P. Hespanha, and A. S. Morse, “Stability of switched systems: a lie-algebraic condition,” Systems & Control Letters, vol. 37, no. 3, pp. 117–122, 1999.
  • [7] L. Vu and D. Liberzon, “Common Lyapunov functions for families of commuting nonlinear systems,” Systems & control letters, vol. 54, no. 5, pp. 405–416, 2005.
  • [8] K. S. Narendra and J. Balakrishnan, “A common Lyapunov function for stable lti systems with commuting a-matrices,” IEEE Transactions on Automatic Control, vol. 39, no. 12, pp. 2469–2471, 1994.
  • [9] J. Daafouz, P. Riedinger, and C. Iung, “Stability analysis and control synthesis for switched systems: a switched Lyapunov function approach,” IEEE Transactions on Automatic Control, vol. 47, no. 11, pp. 1883–1887, 2002.
  • [10] O. Khatib, “Real-time obstacle avoidance for manipulators and mobile robots,” in IEEE International Conference on Robotics and Automation, vol. 2, pp. 500–505, Mar 1985.
  • [11] A. Escande, N. Mansard, and P.-B. Wieber, “Hierarchical quadratic programming: Fast online humanoid-robot motion generation,” The International Journal of Robotics Research, vol. 33, no. 7, pp. 1006–1028, 2014.
  • [12] A. Dietrich, A. Albu-Schäffer, and G. Hirzinger, “On continuous null space projections for torque-based, hierarchical, multi-objective manipulation,” in IEEE International Conference on Robotics and Automation, pp. 2978–2985, IEEE, 2012.
  • [13] J. Lee, N. Mansard, and J. Park, “Intermediate desired value approach for task transition of robots in kinematic control,” IEEE Transactions on Robotics, vol. 28, no. 6, pp. 1260–1277, 2012.
  • [14] A. Dietrich, C. Ott, and J. Park, “The hierarchical operational space formulation: stability analysis for the regulation case,” IEEE Robotics and Automation Letters, vol. 3, no. 2, pp. 1120–1127, 2018.
  • [15] A. D. Ames, J. W. Grizzle, and P. Tabuada, “Control barrier function based quadratic programs with application to adaptive cruise control,” in IEEE 53rd Annual Conference on Decision and Control, pp. 6271–6278, IEEE, 2014.
  • [16] E. Squires, P. Pierpaoli, and M. Egerstedt, “Constructive barrier certificates with applications to fixed-wing aircraft collision avoidance,” in 2018 IEEE Conference on Control Technology and Applications, pp. 1656–1661, IEEE, 2018.
  • [17] C.-A. Cheng, M. Mukadam, J. Issac, S. Birchfield, D. Fox, B. Boots, and N. Ratliff, “RMPflow: A computational graph for automatic motion policy generation,” in The 13th International Workshop on the Algorithmic Foundations of Robotics, 2018.
  • [18] A. Li, M. Mukadam, M. Egerstedt, and B. Boots, “Multi-objective policy generation for multi-robot systems using Riemannian motion policies,” in The 19th International Symposium on Robotics Research, 2019.
  • [19] F. Bullo and A. D. Lewis, Geometric control of mechanical systems: modeling, analysis, and design for simple mechanical control systems, vol. 49. Springer Science & Business Media, 2004.
  • [20] E. D. Sontag, “A Lyapunov-like characterization of asymptotic controllability,” SIAM Journal on Control and Optimization, vol. 21, no. 3, pp. 462–471, 1983.
  • [21] H. K. Khalil, Nonlinear Systems. Prentice-Hall, New Jersey, 1996.
  • [22] D. Pickem, P. Glotfelter, L. Wang, M. Mote, A. Ames, E. Feron, and M. Egerstedt, “The robotarium: A remotely accessible swarm robotics research testbed,” in IEEE International Conference on Robotics and Automation, pp. 1699–1706, IEEE, 2017.