Constraint-Driven Coordinated Control of Multi-Robot Systems


Gennaro Notomista and Magnus Egerstedt

*This work was sponsored by the U.S. Office of Naval Research through Grant No. N00014-15-2115.

G. Notomista is with the School of Mechanical Engineering, Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, Atlanta, GA, USA (g.notomista@gatech.edu).

M. Egerstedt is with the School of Electrical and Computer Engineering, Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, Atlanta, GA, USA.

In this paper we present a reformulation, framed as a constrained optimization problem, of multi-robot tasks which are encoded through a cost function that is to be minimized. The advantages of this approach are multiple. The constraint-based formulation provides a natural way of enabling long-term robot autonomy applications, where resilience and adaptability to changing environmental conditions are essential. Moreover, under certain assumptions on the cost function, the resulting controller is guaranteed to be decentralized. Furthermore, finite-time convergence can be achieved while using only local information, thus preserving the decentralized nature of the algorithm. The developed control framework has been tested on a team of ground mobile robots performing long-term environmental monitoring.

I Introduction

Robotic swarms are gradually leaving academic laboratories, e.g., [1, 2], in favor of industrial settings, as in the case of [3], to reach even less structured and more dynamic environments, like agricultural lands and construction sites [4]. This presents new challenges that robots have to face, which come either from unexpected and/or unmodeled phenomena or from changing environmental conditions. These issues become even more pronounced when the robots are deployed in the field for long-term applications, like persistent environmental surveillance [5] or plant growth monitoring [6]. Therefore, a way of encoding survivability [7], i.e., the ability to remain alive (in a robotic sense), is needed now more than ever. In this paper, we introduce a method which can be used to ensure survivability, and which makes use of optimization tools that allow us to only minimally influence the task which the robots are asked to perform.

Several solutions have been proposed in order to make robots robust to unknown or changing environmental conditions and to ensure their applicability to unstructured or even hazardous environments [8]. Moreover, in order to let the robots survive for as long as possible, some of the proposed methods entail scheduled periodic maintenance [9], or path optimization with the aim of maximizing the time spent in the field and minimizing the consumed energy [10]. Other methods employ a power-dependent multi-objective optimization to ensure that the robots execute their task while maintaining a desired energy reserve, as done in [11]. In both cases, careful parameter tuning is required in order to prevent situations in which the robots trade survivability for shorter-term rewards.

As a matter of fact, goal-oriented control strategies may not be ideal for long-term applications, where robustness to changing environmental conditions is required. Indeed, control policies obtained using optimal control strategies are characterized by a fragility related to the precise model assumptions [12]. These are likely to be violated during the long time horizons over which the robots are deployed in the field. For this reason, in this paper, we consider a constraint-oriented approach for multi-robot systems, where the survivability of the swarm is enforced as a constraint on the robots' task, encoded by the nominal input û to the robots. The control input is then synthesized, at each point in time, by solving the optimization problem

\[
\begin{aligned}
\underset{u}{\text{minimize}} \quad & \|\hat u - u\|^2 \\
\text{subject to} \quad & b_{\mathrm{surv}}(x, u) \ge 0, 
\end{aligned} \tag{1}
\]

where b_surv(x, u) ≥ 0 is the survivability constraint, which is also a function of the robots' state x [7].

It is informative to note that ecological studies have shown that the constraints imposed by an environment strongly determine the behaviors developed by the animals living in it [13]. Inspired by this concept, we ask whether robotic swarms can be controlled using constraints only. What this entails is that the robots are programmed to do nothing, subject to task and survivability constraints. This is formalized by the following optimization problem:

\[
\begin{aligned}
\underset{u}{\text{minimize}} \quad & \|u\|^2 \\
\text{subject to} \quad & b_{\mathrm{task}}(x, u) \ge 0 \\
& b_{\mathrm{surv}}(x, u) \ge 0, 
\end{aligned} \tag{3}
\]

where b_task(x, u) ≥ 0 encodes the task constraint, which is equivalent to executing the nominal input û in (1). Thus, what we could call a robot-ecological formulation [7] naturally lends itself to be implemented using optimization-based control techniques.
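To make the structure of problems like (1) and (3) concrete, the following sketch computes the minimum-energy input subject to a single affine constraint aᵀu ≥ b, which admits a closed-form solution. The function name and the single-scalar-constraint simplification are ours, for illustration only.

```python
import numpy as np

def min_norm_input(a, b):
    """Closed-form solution of  min ||u||^2  s.t.  a^T u >= b.

    If doing nothing already satisfies the constraint (b <= 0), the optimal
    input is u = 0; otherwise the optimum lies on the constraint boundary,
    along the direction of a.
    """
    a = np.asarray(a, dtype=float)
    if b <= 0.0 or not np.any(a):
        return np.zeros_like(a)
    return (b / (a @ a)) * a

u_active = min_norm_input([1.0, 0.0], 2.0)   # constraint active at the optimum
u_idle = min_norm_input([1.0, 0.0], -1.0)    # constraint already satisfied
```

The "do nothing unless a constraint forces you to act" behavior of (3) is visible here: the input is identically zero whenever the constraints are inactive.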

In this paper, we first give sufficient conditions for turning certain classes of multi-robot tasks into constraints within an optimization problem. Then, we present a systematic way of doing so. Finally, we propose an effective task prioritization technique obtained by combining hard and soft constraints, such as survivability and task execution in (3).

The remainder of the paper is organized as follows. In Section II, we briefly recall the control techniques that will be used to synthesize the constraint-driven coordinated control policies for multi-robot systems. Then, we introduce an optimization program for executing minimum-energy gradient flow, which will be used, in Section III, to formalize the constraint-based control of multi-robot systems. Moreover, we show how to achieve decentralized finite-time minimization algorithms using the presented approach. Two applications are presented in Section IV, namely formation control and coverage control. Section V reports the results of experiments executed with a team of ground mobile robots implementing the proposed controller to execute a long-term environmental monitoring task.

II Constraint-Based Control Design

In this section, we review the concepts related to control Lyapunov functions and control barrier functions which will then be used to formulate optimization problems whose solution corresponds to the execution of decentralized coordinated controllers for multi-robot systems.

II-A Control Lyapunov and Control Barrier Functions

In order to design controllers that allow the execution of multi-robot tasks, we make use of control Lyapunov functions and, in particular, we resort to methods from finite-time stability theory of dynamical systems.

Consider the dynamical system in control affine form

\[
\dot x = f(x) + g(x) u, \tag{6}
\]

with x ∈ ℝⁿ, u ∈ ℝᵐ, and f and g locally Lipschitz continuous vector fields. One of the results we will use is given by the following theorem.

Theorem 1 (Based on Theorem 4.3 in [14]).

Given the dynamical system (6) and a continuous, positive definite function V, a continuous controller u such that

\[
L_f V(x) + L_g V(x)\, u \le -\gamma V(x)^\theta,
\]

where L_f V and L_g V denote the Lie derivatives of V in the directions of f and g, respectively, and γ > 0, θ ∈ (0, 1), renders the origin finite-time stable.

Moreover, an upper bound for the settling time is given by

\[
T \le \frac{V(x_0)^{1-\theta}}{\gamma (1-\theta)},
\]

where V(x_0) is the value of V at time 0.


Proof. Similar to the proof of Theorem 4.3 in [14]. ∎
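The settling-time bound of Theorem 1 can be checked numerically on the scalar comparison system V̇ = −γV^θ: integrating by forward Euler, V reaches (numerically) zero no later than V(x₀)^{1−θ}/(γ(1−θ)). The parameter values below are illustrative, not from the paper.

```python
import numpy as np

# Numerical check of the settling-time bound in Theorem 1 for the scalar
# comparison system Vdot = -gamma * V**theta (illustrative values).
gamma, theta, V0, dt = 1.0, 0.5, 1.0, 1e-4
bound = V0 ** (1 - theta) / (gamma * (1 - theta))  # settling-time upper bound

V, t = V0, 0.0
while V > 1e-6:
    V = max(V - dt * gamma * V ** theta, 0.0)  # forward-Euler step
    t += dt
```

With γ = 1 and θ = 1/2 the bound evaluates to exactly 2, and the simulated trajectory indeed settles just before it.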

To enforce constraints, such as survivability or task execution for multi-robot systems, we employ control barrier functions. These, as will be shown, are suitable for synthesizing constraints that can be encoded in terms of set-membership. Conceptually similar to control Lyapunov functions, control barrier functions were introduced in [15] with the objective of ensuring safety in a provably correct way. In this context, ensuring safety means ensuring the forward invariance of a set C, in which we want to confine the state x. Without loss of generality, here we consider forward complete systems, for which the solution x(t) of (6) exists for all t ≥ 0.

Suppose we can find a continuously differentiable function h : ℝⁿ → ℝ, such that the safe set C can be defined as the zero-superlevel set of h, i.e.,

\[
\begin{aligned}
C &= \{x \in \mathbb{R}^n : h(x) \ge 0\}, \\
\partial C &= \{x \in \mathbb{R}^n : h(x) = 0\}, \\
\mathrm{int}(C) &= \{x \in \mathbb{R}^n : h(x) > 0\}, 
\end{aligned} \tag{7}
\]

where ∂C and int(C) denote the boundary and the interior of C, respectively. If the following condition is satisfied

\[
\sup_{u} \left[ L_f h(x) + L_g h(x)\, u + \alpha(h(x)) \right] \ge 0, \tag{10}
\]

with α an extended class-K function [16], then h is called a (zeroing) control barrier function. Conditions ensuring forward invariance of the set C are given in the following theorem.

Theorem 2 (Safe set forward invariance [17]).

Given the dynamical system (6) and a set C defined by a continuously differentiable function h as in (7), any Lipschitz continuous controller u such that (10) holds renders the set C forward invariant.

Proof. See [17]. ∎

The existence of the control barrier function h also ensures the asymptotic stability of the set C, as shown in the following theorem.

Theorem 3 (Safe set asymptotic stability).

Under the same hypotheses as in Theorem 2, any Lipschitz continuous controller u such that (10) holds renders the set C asymptotically stable, meaning that x(t) → C as t → ∞.

Proof (based on [17]).

Let

\[
V(x) = \max\{0, -h(x)\}
\]

be a control Lyapunov candidate function. Thus, V(x) = 0 for x ∈ C and V(x) > 0 for x ∉ C. Moreover,

\[
\dot V(x) = -\dot h(x) = -\left( L_f h(x) + L_g h(x)\, u \right) \quad \text{for } x \notin C.
\]

Furthermore, since h is continuously differentiable, V is continuously differentiable as well for x ∉ C. Then, by hypothesis (10), V̇ < 0 for x ∉ C and V̇ = 0 for x ∈ C. By Theorem 2, C is forward invariant. Moreover, C is closed, since it is the inverse image of the closed set [0, ∞) under the continuous map h. Therefore, by Theorem 2.8 in [18], the system (6) is uniformly globally asymptotically stable with respect to the set C. Thus, there exists a class-KL function β [16] such that, given any initial state x(0), the solution satisfies ‖x(t)‖_C ≤ β(‖x(0)‖_C, t), where ‖x‖_C denotes the distance of x from the set C. Hence, as t → ∞, x(t) → C. ∎
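As a concrete illustration of Theorems 2 and 3, the sketch below filters a nominal single-integrator input through a constraint of the form (10) for the disk-shaped safe set h(x) = r² − ‖x‖², using the closed form of the resulting single-constraint quadratic program with the linear choice α(h) = 5h. All names, gains, and the specific safe set are our illustrative choices.

```python
import numpy as np

# CBF quadratic program for the single integrator xdot = u, keeping the
# state in the disk C = {x : h(x) >= 0}, h(x) = r^2 - ||x||^2.  The QP
# min ||u - u_nom||^2  s.t.  dh/dt >= -alpha*h  has a closed form, since
# there is a single affine constraint (illustrative sketch).
def cbf_filter(x, u_nom, r=1.0, alpha=5.0):
    h = r ** 2 - x @ x
    a = -2.0 * x                # gradient of h, so dh/dt = a^T u
    b = -alpha * h              # enforce a^T u >= b
    slack = b - a @ u_nom
    if slack <= 0.0 or not np.any(a):
        return u_nom            # nominal input is already safe
    return u_nom + (slack / (a @ a)) * a

# Drive toward a goal outside the safe set; the filter keeps ||x|| <= r.
x = np.array([0.2, 0.0])
goal = np.array([2.0, 0.0])
dt = 1e-3
for _ in range(5000):
    u = cbf_filter(x, goal - x)
    x = x + dt * u
```

The state converges to the boundary point of C closest to the goal instead of leaving the safe set, which is exactly the forward-invariance-plus-attractivity behavior of Theorems 2 and 3.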

II-B Minimum-Energy Gradient Flow

In this section we consider the problem of minimizing a cost function c. We present a method to reformulate the classic gradient flow algorithm using the tools introduced in Section II-A. This allows us to synthesize a constrained optimization program that is equivalent to the minimization of the cost c.

Consider the single integrator dynamical system ẋ = u, where x, u ∈ ℝⁿ are the state and the control input, respectively. Assume that the objective consists in minimizing a cost c(x), where c : ℝⁿ → ℝ is a continuously differentiable function. Applying gradient flow algorithms, the problem

\[
\underset{x}{\text{minimize}} \ c(x) \tag{14}
\]

can be directly minimized by choosing

\[
u = -\nabla c(x). \tag{15}
\]

In fact, with this choice of input, applying the chain rule leads to:

\[
\dot c = \nabla c(x)^\top \dot x = -\|\nabla c(x)\|^2 \le 0. \tag{16}
\]
We now show that the minimization problem (14) can be formulated as a minimum-energy problem that achieves the same objective of minimizing the cost .

To this end, let us define the barrier function

\[
h(x) = -c(x) \tag{17}
\]

and its zero-superlevel set, i.e. the safe set,

\[
C = \{x \in \mathbb{R}^n : h(x) \ge 0\} = \{x \in \mathbb{R}^n : c(x) \le 0\}. \tag{18}
\]

By Theorems 2 and 3, the differential constraint

\[
\dot h(x, u) = -\nabla c(x)^\top u \ge -\alpha(h(x)), \tag{19}
\]

where α is an extended class-K function, ensures that the set C is forward invariant when x(0) ∈ C and asymptotically stable when x(0) ∉ C, x(0) being the value of the state at time 0.

Observation 4.

Theorem 3 shows the existence of the control Lyapunov function

\[
V(x) = \max\{0, -h(x)\} = \max\{0, c(x)\}. \tag{20}
\]

Indeed, from (16), since ċ ≤ 0, one can see that V is a control Lyapunov function. In fact, if x(0) belongs to a compact set, LaSalle's Invariance Principle ensures that the state will converge to a stationary point of c, namely x̄, with ∇c(x̄) = 0.

We can now introduce the following optimization problem:

\[
\begin{aligned}
\underset{u}{\text{minimize}} \quad & \|u\|^2 \\
\text{subject to} \quad & -\nabla c(x)^\top u \ge -\alpha(h(x)), 
\end{aligned} \tag{21}
\]

whose solution u* solves the problem in (14), as shown in the following proposition.

Proposition 5.

The solution u* of the optimization problem (21), where h is given by (17) and α is an extended class-K function, solves (14), driving the state x of the dynamical system ẋ = u to a stationary point of the cost c.

Proof. The KKT conditions for the problem in (21) are

\[
\begin{aligned}
2 u^* + \lambda^* \nabla c(x) &= 0 \\
-\nabla c(x)^\top u^* + \alpha(h(x)) &\ge 0 \\
\lambda^* &\ge 0 \\
\lambda^* \left( -\nabla c(x)^\top u^* + \alpha(h(x)) \right) &= 0, 
\end{aligned} \tag{22}
\]

where u* and λ* are primal and dual optimal points [19]. First of all, we note that, if the constraint is inactive, then λ* = 0 by the fourth equation in (22). Therefore, from the first equation in (22), u* = 0. In case λ* > 0, from the third and fourth equations in (22), one has −∇c(x)ᵀu* = −α(h(x)), and therefore, using the first equation, λ* = −2α(h(x))/‖∇c(x)‖². Since λ* ≥ 0, this case corresponds to α(h(x)) ≤ 0. Thus, we can unify the two cases, λ* = 0 and λ* > 0, and write the expression of the optimal u* as follows:

\[
u^* = \begin{cases} -\max\left\{ 0, \dfrac{-\alpha(-c(x))}{\|\nabla c(x)\|^2} \right\} \nabla c(x) & \text{if } \nabla c(x) \ne 0 \\ 0 & \text{otherwise.} \end{cases} \tag{23}
\]

With this expression of the input u*, the evolution in time of the cost is given by

\[
\dot c = \nabla c(x)^\top u^* = -\max\{0, -\alpha(-c(x))\} = \min\{0, \alpha(-c(x))\} \le 0.
\]

Hence, as t → ∞, x → x̄ such that ∇c(x̄) = 0. ∎
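The closed-form input of Proposition 5 can be exercised numerically. Below, a sketch with the illustrative choices c(x) = ½‖x‖² and α(z) = z (our example, not the paper's), checking that the cost decreases monotonically to a stationary point.

```python
import numpy as np

# Minimum-energy gradient-flow input (our reconstruction of the closed
# form in Proposition 5): u* = -max(0, -alpha(-c)/||grad c||^2) * grad c,
# and u* = 0 at stationary points.
def min_energy_gradient_input(c, grad_c, x, alpha=lambda z: z):
    g = grad_c(x)
    gg = g @ g
    if gg == 0.0:
        return np.zeros_like(x)
    lam = max(0.0, -alpha(-c(x)) / gg)   # nonnegative KKT multiplier
    return -lam * g

c = lambda x: 0.5 * x @ x                # illustrative quadratic cost
grad_c = lambda x: x
x, dt = np.array([2.0, -1.0]), 1e-2
costs = []
for _ in range(2000):
    costs.append(c(x))
    x = x + dt * min_energy_gradient_input(c, grad_c, x)
```

The recorded costs are nonincreasing and the state approaches the minimizer at the origin, mirroring the ċ ≤ 0 argument in the proof.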

Corollary 6.

Under the same hypotheses as in Proposition 5, and with c such that min_x c(x) = 0, the solution of the optimization program

\[
\begin{aligned}
\underset{u}{\text{minimize}} \quad & \|u\|^2 \\
\text{subject to} \quad & -\nabla c(x)^\top u \ge -\alpha(-c(x)) 
\end{aligned} \tag{26}
\]

solves the problem in (14).

Proof. Proceeding similarly to the proof of Proposition 5, the solution to (26) evaluates to (23), and, therefore, ċ = min{0, α(−c(x))}. Thus, c → 0 as t → ∞ [20]. Hence, x → x̄ as t → ∞, such that c(x̄) = 0, and so ∇c(x̄) = 0. ∎

In summary, we saw that the expression for u* given in (23) solves the initial optimization problem (14), which can be equivalently solved by following the gradient flow of the cost c using (15).

We now illustrate that, besides the advantages related to long-term autonomy applications discussed in Section I, the formulation in (21) can be used to design decentralized cost minimization algorithms that are faster than gradient descent. In the optimization literature, there are plenty of methods that can be employed to improve the convergence speed of gradient flow algorithms (see, e.g., [19]). Nevertheless, these second-order methods, such as Newton's method or conjugate gradient, suffer from their centralized nature. Only in some cases can this issue be partially mitigated by resorting to distributed optimization techniques, such as [21, 22]. The above-mentioned methods are all suitable for minimizing a cost function. However, we insist on having a constrained optimization formulation, where we encode cost minimization as a constraint, because of the flexibility and robustness properties discussed in Section I, useful for long-term robot autonomy applications.

The following proposition shows that, using the formulation in (21), it is possible to minimize the cost not just faster than gradient descent, but to actually reach a stationary point in finite time.

Proposition 7.

Given the dynamical system ẋ = u and the objective of minimizing the cost function c, the solution of the optimization problem

\[
\begin{aligned}
\underset{u}{\text{minimize}} \quad & \|u\|^2 \\
\text{subject to} \quad & -\nabla c(x)^\top u \ge -\gamma\, \mathrm{sign}(h(x)) |h(x)|^\theta, 
\end{aligned} \tag{27}
\]

where h is given by (17), γ > 0, and θ ∈ (0, 1), will drive the state x to a stationary point of the cost c in finite time.

Proof. Similarly to what has been done in Proposition 5, it can be shown that

\[
\dot c = \min\{0, \gamma\, \mathrm{sign}(h(x)) |h(x)|^\theta\} = -\gamma\, c(x)^\theta \quad \text{for } c(x) > 0.
\]

Thus, by Theorem 1, we conclude that

\[
c(x(t)) \to 0 \quad \text{in finite time } T \le \frac{c(x(0))^{1-\theta}}{\gamma (1-\theta)}.
\]

Hence, x → x̄, with ∇c(x̄) = 0, in finite time. Indeed, as shown in [23], h satisfying the constraint in (27) is a finite-time convergence control barrier function for the system characterized by single integrator dynamics, ẋ = u. ∎
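The finite-time behavior can be checked numerically. The sketch below uses our illustrative choices c(x) = ½‖x‖², γ = 1, θ = ½: on the active constraint, ċ = −γ c^θ, so c drops below a small threshold within the Theorem 1 bound c(x₀)^{1−θ}/(γ(1−θ)).

```python
import numpy as np

# Finite-time cost minimization sketch (illustrative values, not the
# paper's code): the KKT multiplier of the active constraint yields
# cdot = -gamma * c**theta along trajectories.
gamma, theta, dt = 1.0, 0.5, 1e-4
c = lambda x: 0.5 * x @ x
x = np.array([2.0, 0.0])                   # c(x0) = 2
bound = c(x) ** (1 - theta) / (gamma * (1 - theta))

t = 0.0
while c(x) > 1e-4:
    g = x                                  # gradient of c
    lam = gamma * c(x) ** theta / (g @ g)  # multiplier of the active constraint
    x = x - dt * lam * g
    t += dt
```

In contrast to plain gradient flow, which only converges asymptotically for this cost, the settling time here stays below the explicit bound 2√2.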

Using the results derived in this section, the next section presents a procedure to synthesize decentralized optimization problems whose solutions result in coordinated control of multi-robot systems.

III Constraint-Based Control of Multi-Robot Systems

Local, scalable, safe, and emergent are four essential features that decentralized multi-robot coordinated control algorithms should possess [24]. Many algorithms that satisfy these properties have been developed for applications ranging from social behavior mimicking [25] to formation assembly [26] and area patrolling [27]. In [24], the authors analyze the common features among these algorithms and discuss their decentralized implementations in robotic applications. In this section, we apply the results derived in Section II with the aim of obtaining constrained optimization problems equivalent to the decentralized execution of multi-robot tasks.

Consider a collection of N robots, whose positions are denoted by x_i ∈ ℝ^d, i ∈ {1, …, N}, where d = 2 for planar robots and d = 3 in the case of aerial robots. Assume each robot is equipped with an omni-directional range sensor that allows it to measure the relative position of neighboring robots, namely robot i is able to measure x_j − x_i when robot j is within its sensing range. These interactions among the robots are described by a graph G = (V, E), where V = {1, …, N} is the set of vertices of the graph, representing the robots, and E ⊆ V × V is the set of edges between the robots, encoding adjacency relationships. If (i, j) ∈ E, then robot i can measure robot j's relative position. For the purposes of this paper, we assume that the graph is undirected, namely (i, j) ∈ E ⇔ (j, i) ∈ E. In order to obtain decentralized algorithms, we want each robot to act based only on local information, by which we mean the relative positions of its neighbors. By construction, this leads to inherently scalable coordinated control algorithms.

Denoting the ensemble state of the robotic swarm by x = [x₁ᵀ, …, x_Nᵀ]ᵀ, a general expression for the cost that leads to decentralized control laws is given by

\[
c(x) = \sum_{i=1}^{N} \sum_{j \in \mathcal N_i} c_{ij}(\|x_i - x_j\|), \tag{31}
\]

where N_i = {j : (i, j) ∈ E} is the neighborhood set of robot i, and c_{ij}, with c_{ij} = c_{ji}, is a symmetric, pairwise cost between robots i and j. We assume that c_{ij} ≥ 0, so that c(x) ≥ 0. Assuming we can directly control the velocity of robot i, ẋ_i = u_i, we can employ a gradient descent flow policy like (15) to minimize c, obtaining

\[
u_i = -\left( \frac{\partial c}{\partial x_i} \right)^{\!\top} = -\sum_{j \in \mathcal N_i} w_{ij}(x) (x_i - x_j), \tag{32}
\]

where the weights w_{ij} collect the derivatives of the pairwise costs c_{ij}. This is nothing but a weighted consensus protocol, and it is decentralized insofar as the input u_i only depends on robot i's neighbors. The construction shown in Section II can then be applied to minimize the cost given in (31) by formulating the following minimum-energy problem:

\[
\begin{aligned}
\underset{u}{\text{minimize}} \quad & \|u\|^2 \\
\text{subject to} \quad & -\nabla c(x)^\top u \ge -\alpha(-c(x)), 
\end{aligned} \tag{33}
\]

where u = [u₁ᵀ, …, u_Nᵀ]ᵀ is the vector of robots' inputs, and single integrator dynamics, ẋ_i = u_i, are assumed for each robot. Solving (33) leads to the accomplishment of the task, by which we mean that a stationary point of the cost c has been reached. As explicitly shown by (23) in Proposition 5, a minimum-energy formulation, initially introduced in [15], allows the robots to move towards lower values of the cost until the task is accomplished (u* = 0).
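The weighted consensus structure of (32) can be sketched in a few lines. The example below uses unit weights on an undirected line graph (our illustrative choice): positions converge to a common point while, with symmetric weights, the average position is preserved.

```python
import numpy as np

# Weighted consensus protocol: each robot moves according to the relative
# positions of its neighbors only (unit weights, illustrative sketch).
def consensus_step(x, neighbors, dt=0.05):
    u = np.zeros_like(x)
    for i, Ni in enumerate(neighbors):
        for j in Ni:
            u[i] -= x[i] - x[j]          # minus the sum of relative displacements
    return x + dt * u

x = np.array([[0.0, 0.0], [1.0, 2.0], [4.0, -1.0]])
neighbors = [[1], [0, 2], [1]]           # undirected line graph 0-1-2
for _ in range(500):
    x = consensus_step(x, neighbors)
```

Only neighbor-relative information enters the update, which is exactly the locality property the decentralized formulation aims to preserve.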

The following proposition gives the expression of the optimization problems whose solutions lead to a decentralized minimization of the cost in (31).

Proposition 8 (Constraint-driven decentralized task execution).

Given the pairwise cost function c defined in (31), a collection of N robots, characterized by single integrator dynamics ẋ_i = u_i, minimizes c in a decentralized fashion, if each robot executes the control input u_i*, solution of the following optimization problem:

\[
\begin{aligned}
\underset{u_i}{\text{minimize}} \quad & \|u_i\|^2 \\
\text{subject to} \quad & -\frac{\partial c}{\partial x_i} u_i \ge -\alpha(h_i(x)), 
\end{aligned} \tag{34}
\]

where h_i(x) = −Σ_{j∈N_i} c_{ij}(‖x_i − x_j‖) and α is an extended class-K function, superadditive for negative arguments, i.e., α(z₁) + α(z₂) ≤ α(z₁ + z₂) for z₁, z₂ ≤ 0. If α(z) = γ sign(z)|z|^θ, γ > 0, θ ∈ (0, 1), a stationary point of the cost c is reached in finite time, with the upper bound on the settling time given in Theorem 1.

Proof. Proposition 5 ensures that, by imposing the global constraint −∇c(x)ᵀu ≥ −α(h(x)), with h = −c constructed using the whole state vector x, the cost is decreasing towards a stationary point. We want to show that, by imposing only local constraints (i.e., such that robot i only needs information about its neighbors), the multi-robot system is able to enforce the global constraint and, hence, to minimize the cost in a decentralized fashion.

We proceed by summing up the constraints for each robot, obtaining:

\[
-\nabla c(x)^\top u = -\sum_{i=1}^{N} \frac{\partial c}{\partial x_i} u_i \ge -\sum_{i=1}^{N} \alpha(h_i(x)) \ge -\alpha\!\left( \sum_{i=1}^{N} h_i(x) \right), 
\]

where we used the superadditivity property of α for negative arguments (h_i(x) ≤ 0 since c_{ij} ≥ 0), and we set h(x) = Σ_i h_i(x). Moreover, since the graph G, which encodes the neighboring relations between the robots, is undirected, we have that Σ_i h_i(x) = −Σ_i Σ_{j∈N_i} c_{ij}(‖x_i − x_j‖) = −c(x). Thus,

\[
-\nabla c(x)^\top u \ge -\alpha(h(x)), 
\]

where h(x) = −c(x), and α is an extended class-K function. Hence, by Proposition 5, x will converge to a stationary point of c.

Finally, we note that a class-K function α(z) = γ sign(z)|z|^θ, with θ ∈ (0, 1), is convex, and hence superadditive, for z ≤ 0. Applying Proposition 7, the statement holds. ∎
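The superadditivity property invoked in the proof can be verified numerically for α(z) = sign(z)|z|^θ with θ ∈ (0, 1) (our reconstruction of the hypothesis): for nonpositive arguments, α(z₁) + α(z₂) ≤ α(z₁ + z₂).

```python
import numpy as np

def alpha(z, theta=0.5):
    # Extended class-K function, convex (hence superadditive) for z <= 0.
    return np.sign(z) * np.abs(z) ** theta

rng = np.random.default_rng(0)
z1, z2 = -rng.random(1000), -rng.random(1000)   # nonpositive samples
superadditive = bool(np.all(alpha(z1) + alpha(z2) <= alpha(z1 + z2) + 1e-12))
```

For example, α(−1) + α(−1) = −2 while α(−2) = −√2 ≈ −1.41, so the inequality holds with room to spare; this is what lets the sum of the local constraints imply the global one.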

The structure of the cost function c, even though quite specific, allows us to encode a rich set of multi-robot tasks by carefully choosing the weights w_{ij} as a function of the state x. The following section shows two variations of the cost function which allow a multi-robot system to perform formation control, i.e., assembling particular shapes, and coverage control, consisting in spreading the robotic swarm out over the environment in an optimal way.

IV Applications

In this section we recall the expression of the cost c for two specific multi-robot tasks: formation control and coverage control.

IV-A Formation Control

In formation control applications, the robots are asked to assemble a predefined shape, specified in terms of inter-agent distances. In order to frame this problem as a cost minimization problem, let c be the formation error

\[
c(x) = \sum_{i=1}^{N} \sum_{j \in \mathcal N_i} \left( \|x_i - x_j\|^2 - d_{ij}^2 \right)^2, \tag{38}
\]

where d_{ij} is the desired distance between robots i and j. The cost c measures how far the robots are from assembling the desired formation characterized by the relative distances d_{ij}, and c(x) = 0 corresponds to the robots forming the desired shape. Note that, as c is a sum of squares, c_{ij} ≥ 0, required as a hypothesis in order for Proposition 8 to hold.

The gradient of c then evaluates to

\[
\frac{\partial c}{\partial x_i} = \sum_{j \in \mathcal N_i} w_{ij}(x) (x_i - x_j)^\top, \qquad w_{ij}(x) = 8 \left( \|x_i - x_j\|^2 - d_{ij}^2 \right). \tag{39}
\]

This can be interpreted as follows: if the distance between robots i and j is smaller than d_{ij}, then the weight w_{ij} is negative, and the robots experience a repelling effect. Conversely, if the two robots are further than d_{ij} apart, the positive weight will attract one towards the other. The special case in which d_{ij} = 0 corresponds to the well-known consensus problem.

The expression of the gradient in (39) is decentralized, as robot i has to compute relative distances only with respect to its neighboring robots.
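A minimal simulation of this formation-control gradient flow (our reconstruction of the weights, up to a positive scale factor that only rescales time): three robots with all pairwise desired distances equal to 1 assemble an equilateral triangle.

```python
import numpy as np

# Formation control via distance-error weights: w_ij < 0 repels robots that
# are too close, w_ij > 0 attracts robots that are too far (illustrative).
def formation_step(x, d, dt=0.02):
    n = len(x)
    u = np.zeros_like(x)
    for i in range(n):
        for j in range(n):
            if i != j:
                w = np.sum((x[i] - x[j]) ** 2) - d[i][j] ** 2
                u[i] -= w * (x[i] - x[j])
    return x + dt * u

d = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]     # desired pairwise distances
x = np.array([[0.0, 0.0], [1.3, 0.1], [0.4, 0.9]])
for _ in range(3000):
    x = formation_step(x, d)
```

Starting from a non-degenerate configuration, all three inter-robot distances converge to the desired value.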

IV-B Coverage Control

In coverage control, the task given to the robots is that of covering a domain D ⊂ ℝ². Given a coverage performance measure, the robots should spread over the domain in an optimal way. As shown in [28], each robot should be in charge only of a subset of the domain that, more specifically, is its Voronoi cell, defined as V_i = {p ∈ D : ‖p − x_i‖ ≤ ‖p − x_j‖ ∀ j ≠ i}.

Let us introduce the following measure of how badly the domain is being covered:

\[
c(x) = \sum_{i=1}^{N} \|x_i - \rho_i(x)\|^2, \tag{40}
\]

where ρ_i(x) denotes the centroid of the Voronoi cell V_i. This form is just a reformulation of the locational cost originally introduced in [27]. Taking the derivative of c with respect to x_i, required in the optimization problem (34), one obtains:

\[
\frac{\partial c}{\partial x_i} = 2 (x_i - \rho_i(x))^\top \left( I - \frac{\partial \rho_i}{\partial x_i}(x) \right) - 2 \sum_{j \ne i} (x_j - \rho_j(x))^\top \frac{\partial \rho_j}{\partial x_i}(x), \tag{41}
\]

where I is the identity matrix. Note that, even if ρ_i virtually depends on the entire ensemble state x of the robotic swarm, in order to compute it, robot i only requires information from the robots with which it shares part of the boundary of its Voronoi cell.
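A discretized illustration of this coverage behavior (our grid-based approximation, not the paper's implementation): assign sample points of the domain to the closest robot, which yields a discrete Voronoi partition, then step each robot toward the centroid of its cell, a damped Lloyd-type update that descends the locational cost with uniform density.

```python
import numpy as np

# Grid-based coverage sketch: discrete Voronoi assignment + step toward
# the cell centroid (illustrative Lloyd-type iteration).
def lloyd_step(x, pts, step=0.5):
    d2 = ((pts[:, None, :] - x[None, :, :]) ** 2).sum(-1)  # squared distances
    owner = d2.argmin(axis=1)                              # Voronoi assignment
    x_new = x.copy()
    for i in range(len(x)):
        cell = pts[owner == i]
        if len(cell):
            x_new[i] += step * (cell.mean(axis=0) - x[i])  # toward centroid
    return x_new

g = np.linspace(0.05, 0.95, 20)
pts = np.array([[a, b] for a in g for b in g])              # unit-square grid
x = np.array([[0.1, 0.1], [0.2, 0.15], [0.15, 0.8], [0.8, 0.2]])
for _ in range(100):
    x = lloyd_step(x, pts)
```

At convergence each robot sits (approximately) at the centroid of its own Voronoi cell, i.e., at a centroidal Voronoi configuration, which is a stationary point of the locational cost.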

In the Appendix we show how the formulation presented in this paper also allows an exact decentralized implementation of the coverage control with time-varying density functions introduced in [29].

We deployed the optimization-based control algorithms with the expressions of the costs derived in Sections IV-A and IV-B on a real multi-robot system and, in the next section, we show the experimental results. Moreover, the constraint-driven formulation of Section III is used to achieve long-term environmental monitoring, where the robots are tasked with covering a domain over a time horizon which is much longer than their battery life, and during which the robots also have to avoid collisions with obstacles moving around in the domain.

V Experiments

The coordinated control approach presented in this paper has been tested on the Robotarium [1], a remotely accessible swarm robotics testbed. The Robotarium is populated by small-scale differential-drive robots which can be programmed by uploading code scripts via a web interface.

Throughout the paper, we assumed we can directly control the velocity of the robots, by modeling them using single integrator dynamics. However, a differential-drive robot can be more accurately modeled using unicycle dynamics:

\[
\begin{aligned}
\dot p_i &= v_i \begin{bmatrix} \cos\theta_i \\ \sin\theta_i \end{bmatrix} \\
\dot \theta_i &= \omega_i, 
\end{aligned} \tag{42}
\]

where p_i ∈ ℝ² and θ_i are the robot's position and orientation in the plane, respectively, and v_i and ω_i are the linear and angular velocity inputs, respectively. Nevertheless, in [30], it is shown that it is possible to derive a near-identity diffeomorphism that can be used to partially feedback linearize the system (42). This way, the unicycle can be abstracted as a single integrator. This is realized through the invertible map

\[
\begin{bmatrix} v_i \\ \omega_i \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1/l \end{bmatrix} R(\theta_i)^\top \dot q_i, \tag{43}
\]

where R(θ) is the matrix that rotates vectors in ℝ² counterclockwise by an angle θ, and q̇_i is the velocity in the plane of a point located in front of the unicycle at a distance l from its center. This method is used to control the robots on the Robotarium, by calculating the linear and angular velocities of robot i, v_i and ω_i, from the control input u_i = q̇_i, obtained by solving the optimization problems derived in Section III.
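The map (43) reduces to two inner products. The sketch below implements it (function name and the value of l are our illustrative choices) and verifies it by reconstructing the velocity of the off-axle point from the computed (v, ω).

```python
import numpy as np

# Single-integrator-to-unicycle mapping via the near-identity
# diffeomorphism: given the desired planar velocity u of a point a
# distance l ahead of the wheel axle, recover (v, omega).
def si_to_uni(u, theta, l=0.05):
    c, s = np.cos(theta), np.sin(theta)
    v = c * u[0] + s * u[1]          # linear velocity
    w = (-s * u[0] + c * u[1]) / l   # angular velocity
    return v, w
```

Heading-aligned commands map to pure forward motion, while lateral commands map to rotation scaled by 1/l, so small l makes the abstraction tighter but the angular velocities larger.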

Regarding the implementation of the optimization program (34) needed for the execution of the tasks presented in the previous section, the function α has been chosen to be an extended class-K function which is convex for negative arguments. This implies it is also superadditive for negative arguments, as required by the hypotheses in Proposition 8.

V-A Formation and Coverage Control

Fig. 1: A team of six small-scale differential-drive robots on the Robotarium executes formation control using (38) and (39) in the optimization program (34). The edges encoding maintained distances between robots are projected onto the testbed.
Fig. 2: A team of six small-scale differential-drive robots performs coverage control of a rectangular area on the Robotarium using (40) and (41) in (34). The Voronoi cells of the robots are projected onto the testbed, together with their centroids, depicted as gray circles.

The optimization problem (34) has been implemented with the specific expressions of c and ∂c/∂x_i given in (38) and (39) in order to achieve formation control, as explained in Section IV-A. A sequence of snapshots recorded during the experiments on the Robotarium is shown in Fig. 1: six robots are asked to assemble a hexagon specified through the inter-agent distances d_{ij} in (38). The edges corresponding to distances that are maintained are projected down onto the testbed and depicted as black lines in Figs. 1a to 1d.

Similarly, coverage control has been implemented using the constraint-based optimization (34) and the expressions of c and ∂c/∂x_i in (40) and (41). The results are shown in Fig. 2. Six robots are asked to spread over a rectangular domain. The Voronoi partition of the domain is projected onto the testbed. As a result of the optimization program, the robots move towards the centroids of their respective Voronoi cells, represented as gray circles in Figs. 2a to 2d.

V-B Combining and Prioritizing Tasks

In this section, we present the application of the proposed constraint-driven coordinated control to long-term environmental monitoring.

The setup of the experiment is as follows. Six robots are asked to monitor an area by performing coverage control. While executing this task, the robots must not run out of energy and must not collide with two dynamic obstacles, embodied by two additional robots moving in the environment. In order to do so, we define constraints that allow the robots to always keep enough residual energy in their batteries and to be always a minimum distance apart from the obstacles.

To accomplish the first goal, we use a method similar to the one developed in [31]. Assuming that the domain is endowed with charging stations, i.e. locations where the robots can recharge their batteries, let us define the following barrier function:

\[
h_{e,i}(x_i, E_i) = E_i - E_{\min} - k \left( \|x_i - x_{c,i}\| - \delta_c \right), \tag{44}
\]

where x_i is the position of robot i, E_i is the energy in its battery, E_min is the minimum residual energy we want the robots to keep, x_{c,i} is the location of the charging station dedicated to robot i, δ_c is the minimum distance from the charging station at which the robots can recharge their batteries (typical behavior of wireless charging technologies), and k is a constant such that k(‖x_i − x_{c,i}‖ − δ_c) upper-bounds the energy required to reach a charging station. We refer to [31] for a rigorous analysis.

As far as obstacle avoidance is concerned, we define, for each obstacle, the following barrier function, which ensures collision-free operations in multi-robot systems [32]:

\[
h_{o,i}(x_i) = \|x_i - x_o\|^2 - \delta_o^2, \tag{45}
\]

where x_o is the position of the obstacle and δ_o is the minimum distance we want the robots to maintain from the obstacle.

Combining the energy constraint and the obstacle constraint together with the coverage task constraint, the following optimization problem can be formulated for each robot i:

\[
\begin{aligned}
\underset{u_i,\,\delta_i}{\text{minimize}} \quad & \|u_i\|^2 + \kappa \delta_i^2 \\
\text{subject to} \quad & -\frac{\partial c}{\partial x_i} u_i \ge -\alpha(h_i(x)) - \delta_i \\
& \dot h_{e,i}(x_i, E_i, u_i) \ge -\alpha(h_{e,i}(x_i, E_i)) \\
& \dot h_{o,i}(x_i, u_i) \ge -\alpha(h_{o,i}(x_i)).
\end{aligned} \tag{46}
\]

The variable δ_i in the coverage constraint acts as a relaxation parameter, which allows the constraints related to energy and collisions to always be fulfilled. This translates to trading task execution for survivability. Consequently, this formulation allows task prioritization obtained by combining hard and soft constraints.
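The hard/soft prioritization mechanism can be isolated in a toy problem with one soft constraint: minimize ‖u‖² + kδ² subject to aᵀu ≥ b − δ. This admits a closed form (our simplification; in (46) a numerical QP solver would be used), and the weight k tunes how much the task may slip.

```python
import numpy as np

# Soft-constraint relaxation sketch: min ||u||^2 + k*d^2  s.t.  a^T u >= b - d.
# Large k enforces the task tightly (d ~ 0); small k lets the task relax,
# which is how hard survivability constraints can override task execution.
def relaxed_min_norm(a, b, k):
    a = np.asarray(a, dtype=float)
    if b <= 0.0:
        return np.zeros_like(a), 0.0       # constraint inactive
    lam = 2.0 * b / (a @ a + 1.0 / k)      # KKT multiplier, active case
    u = 0.5 * lam * a
    d = 0.5 * lam / k
    return u, d

u_hard, d_hard = relaxed_min_norm([1.0, 0.0], 1.0, 1e6)   # task enforced
u_soft, d_soft = relaxed_min_norm([1.0, 0.0], 1.0, 1e-2)  # task relaxed
```

In both regimes the relaxed constraint aᵀu + δ = b holds exactly; only the split between control effort and slack changes with k.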

Fig. 3: A team of six robots is tasked with monitoring a rectangular domain on the Robotarium, by performing coverage control as in Fig. 2. This time, however, the robots are asked to perform this task over a time horizon which is much longer than their (simulated) battery life. Additionally, two more robots (circled in red) act as obstacles which have to be avoided by the remaining six robots. These execute (46) to avoid the obstacles, go and recharge their batteries at the dedicated charging stations (blue circles on the left of the figures that turn yellow when the robots are charging), while always covering the given domain. A video of the experiments is available online [33].

The results of the long-term environmental monitoring experiment are shown in Fig. 3. The Voronoi partition generated by the robots is projected down onto the testbed, as in Fig. 2. Arranged vertically along the left edge of the domain, there are six charging stations depicted as blue circles that turn yellow when the robots are charging. The two robots circled in red move in the environment acting solely as obstacles. The sequence of snapshots shows the robots starting to perform coverage (Fig. 3a), two robots avoiding the obstacles (top right and bottom left in Fig. 3b), and two robots recharging their batteries (Fig. 3c). In Fig. 3d, the robots have reached a configuration corresponding to a local minimum of the locational cost (40). A video of the experiments can be found online [33].

Fig. 4: Simulated energy levels of the robots tasked with performing persistent coverage (Fig. 3). The residual energy is kept above a minimum desired value using (44). With the simulated energy dynamics, each robot experiences two charging cycles during the course of the experiment.

Due to the limited amount of time that each experiment submitted to the Robotarium is allowed to last, we simulate the battery dynamics in such a way that the robots experience multiple charging cycles during the course of a single experiment. Fig. 4 shows the energy levels of the robots employed to perform coverage. The minimum desired energy level, E_min, and the value corresponding to a fully charged battery are depicted as thick black lines. Enforcing the energy constraints using (44) allows the robots to always keep their energy level above E_min.

We have shown how the constraint-driven control formulation can be used to build a minimum-energy optimization problem, whose constraints encode both the task that the robots are asked to perform and the survivability specifications, thus enabling the robust deployment of robots for long-term applications.

VI Conclusions

In this paper we presented a reformulation of optimization-based multi-robot tasks in terms of constrained optimization. Identifying a multi-robot task with a cost function that needs to be minimized, we leverage control barrier functions to synthesize decentralized optimization-based controllers that achieve the desired goal. The advantages of this approach include its flexibility in encoding several multi-robot tasks and the ease of combining them with different types of constraints. We showed how this flexibility can be used to enforce robot survivability and achieve long-term robot autonomy, where robustness and resilience are indispensable properties that robots have to possess. A systematic way of formulating the optimization problems for each agent of a robotic swarm was derived, and its effectiveness was demonstrated through a series of experiments using a team of ground mobile robots, culminating in a long-term environmental monitoring application.


Appendix

The formulation presented in this paper also allows an exact decentralized implementation of the coverage control with time-varying density functions. In [29], the authors show that the control law

\[
u = \left( I - \frac{\partial \rho}{\partial x} \right)^{-1} \left( \kappa (\rho - x) + \frac{\partial \rho}{\partial t} \right) \tag{47}
\]

minimizes the locational cost

\[
c(x, t) = \sum_{i=1}^{N} \int_{V_i} \|p - x_i\|^2\, \phi(p, t)\, dp, \tag{48}
\]

where φ(p, t) is a time-varying density function, which specifies the importance of the point p at time t. The cost in (48) is equivalent to the one defined in (40) when the centroids are calculated by weighting the points in the domain according to the value of the density function associated to them, as shown in [27].

However, inverting the matrix I − ∂ρ/∂x in (47) cannot be done in a decentralized fashion. For this reason, in [29], the inverse is approximated by a truncated Neumann series as

\[
\left( I - \frac{\partial \rho}{\partial x} \right)^{-1} \approx \sum_{k=0}^{K} \left( \frac{\partial \rho}{\partial x} \right)^{k}, 
\]

which, on the contrary, can be evaluated based only on information about neighboring robots.
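The quality of such a truncated Neumann series is easy to check numerically: for any matrix M with spectral norm below 1, the partial sums of M^k converge geometrically to (I − M)⁻¹. The matrix below is random and illustrative, not the centroid Jacobian of the paper.

```python
import numpy as np

# Truncated Neumann series check: for ||M||_2 < 1,
# (I - M)^{-1} = sum_{k>=0} M^k, with geometric truncation error.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
M = 0.5 * A / np.linalg.norm(A, 2)      # rescale to spectral norm 1/2
I = np.eye(4)
exact = np.linalg.inv(I - M)
approx = np.zeros_like(M)
P = np.eye(4)
for _ in range(21):                      # partial sum of M^0 ... M^20
    approx += P
    P = P @ M
err = np.linalg.norm(approx - exact, 2)
```

With spectral norm 1/2 the truncation error after 21 terms is bounded by 0.5²¹/(1 − 0.5) ≈ 10⁻⁶, which illustrates why a short truncation, involving only few hops of neighbor information, can already be accurate.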

With the formulation presented in this paper, instead, by implementing the optimization problem (34) in Proposition 8, each robot has to solve

\[
\begin{aligned}
\underset{u_i}{\text{minimize}} \quad & \|u_i\|^2 \\
\text{subject to} \quad & -\frac{\partial c}{\partial x_i} u_i \ge -\alpha(h_i(x)), 
\end{aligned}
\]

with ∂c/∂x_i given by (41), which is both exact and decentralized.


Acknowledgments

The authors would like to thank Professor Jonathan N. Pauli for helpful discussions about ecology.


References

  • [1] D. Pickem, P. Glotfelter, L. Wang, M. Mote, A. Ames, E. Feron, and M. Egerstedt, “The robotarium: A remotely accessible swarm robotics research testbed,” in 2017 IEEE International Conference on Robotics and Automation (ICRA).   IEEE, 2017, pp. 1699–1706.
  • [2] M. Rubenstein, C. Ahler, and R. Nagpal, “Kilobot: A low cost scalable robot system for collective behaviors,” in 2012 IEEE International Conference on Robotics and Automation (ICRA).   IEEE, 2012, pp. 3293–3298.
  • [3] R. D’Andrea, “Guest editorial: A revolution in the warehouse: A retrospective on kiva systems and the grand challenges ahead,” IEEE Transactions on Automation Science and Engineering, vol. 9, no. 4, pp. 638–639, 2012.
  • [4] L. E. Parker, “Multiple mobile robot systems,” in Springer Handbook of Robotics.   Springer, 2008, pp. 921–941.
  • [5] E. Stump and N. Michael, “Multi-robot persistent surveillance planning as a vehicle routing problem,” in 2011 IEEE International Conference on Automation Science and Engineering.   IEEE, 2011, pp. 569–575.
  • [6] A. English, P. Ross, D. Ball, and P. Corke, “Vision based guidance for robot navigation in agriculture,” in 2014 IEEE International Conference on Robotics and Automation (ICRA).   IEEE, 2014, pp. 1693–1698.
  • [7] M. Egerstedt, J. N. Pauli, G. Notomista, and S. Hutchinson, “Robot ecology: Constraint-based control design for long duration autonomy,” Annual Reviews in Control, 2018, to appear.
  • [8] R. C. Arkin and G. Vachtsevanos, “Techniques for robot survivability,” in Proc. 3rd International Symposium on Robotics and Manufacturing, Vancouver, BC, 1990, pp. 383–388.
  • [9] S. Mishra, S. Rodriguez, M. Morales, and N. M. Amato, “Battery-constrained coverage,” in Automation Science and Engineering (CASE), 2016 IEEE International Conference on.   IEEE, 2016, pp. 695–700.
  • [10] S. Martin and P. Corke, “Long-term exploration & tours for energy constrained robots with online proprioceptive traversability estimation,” in 2014 IEEE International Conference on Robotics and Automation (ICRA).   IEEE, 2014, pp. 5778–5785.
  • [11] J. Derenick, N. Michael, and V. Kumar, “Energy-aware coverage control with docking for robot teams,” in 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).   IEEE, 2011, pp. 3667–3672.
  • [12] M. E. Csete and J. C. Doyle, “Reverse engineering of biological complexity,” Science, vol. 295, no. 5560, pp. 1664–1669, 2002.
  • [13] R. E. Ricklefs, The economy of nature.   Macmillan, 2008.
  • [14] S. P. Bhat and D. S. Bernstein, “Finite-time stability of continuous autonomous systems,” SIAM Journal on Control and Optimization, vol. 38, no. 3, pp. 751–766, 2000.
  • [15] A. D. Ames, J. W. Grizzle, and P. Tabuada, “Control barrier function based quadratic programs with application to adaptive cruise control,” in Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on.   IEEE, 2014, pp. 6271–6278.
  • [16] C. M. Kellett, “A compendium of comparison function results,” Mathematics of Control, Signals, and Systems, vol. 26, no. 3, pp. 339–374, 2014.
  • [17] X. Xu, P. Tabuada, J. W. Grizzle, and A. D. Ames, “Robustness of control barrier functions for safety critical control,” IFAC-PapersOnLine, vol. 48, no. 27, pp. 54–61, 2015.
  • [18] Y. Lin, E. D. Sontag, and Y. Wang, “A smooth converse lyapunov theorem for robust stability,” SIAM Journal on Control and Optimization, vol. 34, no. 1, pp. 124–160, 1996.
  • [19] S. Boyd and L. Vandenberghe, Convex optimization.   Cambridge university press, 2004.
  • [20] H. K. Khalil, Nonlinear Systems.   Prentice-Hall, New Jersey, 1996.
  • [21] E. Wei, A. Ozdaglar, and A. Jadbabaie, “A distributed newton method for network utility maximization–i: Algorithm,” IEEE Transactions on Automatic Control, vol. 58, no. 9, pp. 2162–2175, 2013.
  • [22] S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, et al., “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends® in Machine learning, vol. 3, no. 1, pp. 1–122, 2011.
  • [23] A. Li, L. Wang, P. Pierpaoli, and M. Egerstedt, “Formally correct composition of coordinated behaviors using control barrier certificates,” in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).   IEEE, 2018, to appear.
  • [24] J. Cortés and M. Egerstedt, “Coordinated control of multi-robot systems: A survey,” SICE Journal of Control, Measurement, and System Integration, vol. 10, no. 6, pp. 495–503, 2017.
  • [25] C. W. Reynolds, “Flocks, herds and schools: A distributed behavioral model,” in ACM SIGGRAPH computer graphics, vol. 21, no. 4.   ACM, 1987, pp. 25–34.
  • [26] M. Egerstedt and X. Hu, “Formation constrained multi-agent control,” IEEE transactions on robotics and automation, vol. 17, no. 6, pp. 947–951, 2001.
  • [27] J. Cortes, S. Martinez, T. Karatas, and F. Bullo, “Coverage control for mobile sensing networks,” IEEE Transactions on robotics and Automation, vol. 20, no. 2, pp. 243–255, 2004.
  • [28] S. Lloyd, “Least squares quantization in PCM,” IEEE Transactions on Information Theory, vol. 28, no. 2, pp. 129–137, 1982.
  • [29] S. G. Lee, Y. Diaz-Mercado, and M. Egerstedt, “Multirobot control using time-varying density functions,” IEEE Transactions on Robotics, vol. 31, no. 2, pp. 489–493, 2015.
  • [30] R. Olfati-Saber, “Near-identity diffeomorphisms and exponential ε-tracking and ε-stabilization of first-order nonholonomic SE(2) vehicles,” in Proceedings of the 2002 American Control Conference, vol. 6.   IEEE, 2002, pp. 4690–4695.
  • [31] G. Notomista, S. F. Ruf, and M. Egerstedt, “Persistification of robotic tasks using control barrier functions,” IEEE Robotics and Automation Letters, vol. 3, no. 2, pp. 758–763, 2018.
  • [32] L. Wang, A. D. Ames, and M. Egerstedt, “Safety barrier certificates for collisions-free multirobot systems,” IEEE Transactions on Robotics, vol. 33, no. 3, pp. 661–674, 2017.
  • [33] G. Notomista. Notomista, Egerstedt - Constraint Driven Coordinated Control of Multi Robot Systems. YouTube. [Online]. Available: