Multiagent Systems with Compasses
Abstract
This paper investigates agreement protocols over cooperative and cooperative–antagonistic multiagent networks with coupled continuous-time nonlinear dynamics. To guarantee convergence for such systems, it is common in the literature to assume that the vector field of each agent points inside the convex hull formed by the states of the agent and its neighbors, given that the relative states between each agent and its neighbors are available. This convexity condition is relaxed in this paper, as we show that it is enough that the vector field belongs to a strict tangent cone based on a local supporting hyperrectangle. The new condition has the natural physical interpretation of requiring shared reference directions in addition to the available local relative states. Such shared reference directions can be further interpreted as if each agent holds a magnetic compass indicating the orientations of a global frame. It is proven that the cooperative multiagent system achieves exponential state agreement if and only if the time-varying interaction graph is uniformly jointly quasi-strongly connected. Cooperative–antagonistic multiagent systems are also considered. For these systems, the relation has a negative sign for arcs corresponding to antagonistic interactions. State agreement may not be achieved, but instead it is shown that all the agents' states asymptotically converge, and their limits agree componentwise in absolute value if, and in general only if, the time-varying interaction graph is uniformly jointly strongly connected.
Keywords: shared reference direction, nonlinear systems, cooperative–antagonistic network
1 Introduction
In the last decade, coordinated control of multiagent systems has attracted extensive attention due to its broad applications in engineering, physics, biology and social sciences, e.g., [6, 15, 22, 26, 36]. A fundamental idea is that by carefully implementing distributed control protocols for each agent, collective tasks can be reached for the overall system using only neighboring information exchange. Several important results have been established, e.g., in the area of mobile systems including spacecraft formation flying, rendezvous of multiple robots, and animal flocking [18, 8, 34].
Agreement protocols, where the goal is to drive the states of the agents to a common value using local interactions, play a basic role in the coordination of multiagent systems. The state agreement protocol and its fundamental convergence properties were established for linear systems in the classical work [35]. The convergence of the linear agreement protocol has since been widely studied for both continuous-time and discrete-time models, e.g., [5, 15, 29]. Much understanding has been established, such as the explicit convergence rate in many cases [7, 24, 27, 28]. A major challenge is how to quantitatively characterize the influence of a time-varying communication graph on the agreement convergence. Agreement protocols with nonlinear dynamics have also drawn attention in the literature, e.g., [4, 14, 19, 25, 30, 31]. Due to the complexity of nonlinear dynamics, it is in general difficult to obtain explicit convergence rates for these systems. All the above studies on linear or nonlinear multiagent dynamics rest on the standing assumption that agents in the network are cooperative. Recently, motivated by opinion dynamics evolving over social networks [10, 37], state agreement problems over cooperative–antagonistic networks were introduced [1, 2]. In such networks, antagonistic neighbors exchange their states with opposite signs compared to cooperative neighbors.
In most of the work discussed above, a convexity assumption plays an essential role in the local interaction rule for reaching state agreement. For discrete-time models, it is usually assumed that each agent updates its state as a convex combination of its neighbors' states [5, 15]. A precise characterization of this convexity condition guaranteeing asymptotic agreement was established in [25]. For continuous-time models, an interpretation of this assumption is that the vector field for each agent must fall into the relative interior of a tangent cone formed by the convex hull of the relative state vectors in its neighborhood [19]. The recent work [21] generalized agreement protocols to convex metric spaces, but a convexity assumption for the local dynamics continued to play an important role in ensuring agreement convergence.
In this paper, we show that the convexity condition for agreement seeking of multiagent systems can be relaxed at the cost of shared reference directions. Such shared reference directions can be easily obtained by a magnetic compass, with the help of which the direction of each axis of a prescribed global coordinate system can be observed. Using the relative state information and the shared reference direction information, each agent can derive a strict tangent cone from a local supporting hyperrectangle. This cone defines the feasible set of local control actions for each agent to guarantee convergence to state agreement. In fact, the agents just need to determine, through sensing or communication, the relative orthant of each of their neighbors' states. The vector field of an agent can be outside of the convex hull formed by the states of the agent and its neighbors, so this new condition relaxes the conditions for agreement seeking. We remark that a compass is naturally present in many systems. For instance, the classical Vicsek model [36] inherently uses "compass"-like directional information, and the calculation of each agent's heading relies on knowing where the common east is. In addition, scientists have observed that European robins can detect and navigate by the Earth's magnetic field, providing them with a biological compass in addition to their normal vision [33]. Engineering systems, such as multi-robot networks, can be equipped with magnetic compasses at low cost [13, 32].
Under a general definition of nonlinear multiagent systems with shared reference directions, we establish two main results:

For cooperative networks, we show that the underlying graph associated with the nonlinear interactions being uniformly jointly quasi-strongly connected is necessary and sufficient for exponential agreement. The convergence rate is explicitly given. This improves the existing results based on convex hull conditions [25, 19].

For cooperative–antagonistic networks, we propose a general model following the sign-flipping interpretation along an antagonistic arc introduced in [2]. We show that when the underlying graph is uniformly jointly strongly connected, irrespective of the signs of the arcs, all the agents' states asymptotically converge, and their limits agree componentwise in absolute value.
The remainder of the paper is organized as follows. In Section 2, we give some mathematical preliminaries on convex sets, graph theory, and Dini derivatives. The nonlinear multiagent dynamics, the interaction graph, the shared reference direction, and the agreement metrics are given in Section 3. The main results and discussions are presented in Section 4. The proofs of the results are presented in Sections 5 and 6, respectively, for cooperative and cooperative–antagonistic networks. A brief concluding remark is given in Section 7.
2 Preliminaries
In this section, we introduce some mathematical preliminaries on convex analysis [3], graph theory [12], and Dini derivatives [11].
2.1 Convex analysis
For any nonempty set $S \subseteq \mathbb{R}^m$, $|x|_S := \inf_{y \in S} \|x - y\|$ is called the distance between $x \in \mathbb{R}^m$ and $S$, where $\|\cdot\|$ denotes the Euclidean norm. A set $K \subseteq \mathbb{R}^m$ is called convex if $\lambda x + (1 - \lambda) y \in K$ when $x \in K$, $y \in K$, and $0 \le \lambda \le 1$. A convex set $K$ is called a convex cone if $\lambda x \in K$ when $x \in K$ and $\lambda > 0$. The convex hull of $S$ is denoted $\mathrm{co}(S)$, and the convex hull of a finite set of points $x_1, \dots, x_k$ is denoted $\mathrm{co}\{x_1, \dots, x_k\}$.
Let $K \subseteq \mathbb{R}^m$ be a closed convex set. Then, associated to any $x \in \mathbb{R}^m$, there is a unique element $P_K(x) \in K$, called the convex projection of $x$ onto $K$, satisfying $\|x - P_K(x)\| = |x|_K$. It is also known that $|x|_K^2$ is continuously differentiable for all $x$, and its gradient can be explicitly computed [3]:
(1) $\nabla |x|_K^2 = 2\big(x - P_K(x)\big).$
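As a numerical illustration (the function names below are ours, not from the paper), the gradient formula (1) can be verified for an axis-aligned box, for which the convex projection reduces to a componentwise clamp:

```python
import numpy as np

def project_box(x, lo, hi):
    """Convex projection of x onto the axis-aligned box [lo, hi]:
    a componentwise clamp."""
    return np.minimum(np.maximum(x, lo), hi)

def sq_dist_box(x, lo, hi):
    """Squared Euclidean distance from x to the box."""
    return np.sum((x - project_box(x, lo, hi)) ** 2)

# Check grad |x|_K^2 = 2 (x - P_K(x)) against central finite
# differences at a point partly outside the box.
lo, hi = np.zeros(3), np.ones(3)
x = np.array([1.5, -0.3, 0.4])
analytic = 2.0 * (x - project_box(x, lo, hi))
eps = 1e-6
numeric = np.array([
    (sq_dist_box(x + eps * e, lo, hi) - sq_dist_box(x - eps * e, lo, hi)) / (2 * eps)
    for e in np.eye(3)
])
print(np.allclose(analytic, numeric, atol=1e-5))  # → True
```

Note that the third component of the gradient is zero, since the point lies strictly inside the box along that axis.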
Let $K$ be convex and closed. The interior and boundary of $K$ are denoted by $\mathrm{int}(K)$ and $\partial K$, respectively. If $K$ contains the origin, the smallest subspace containing $K$ is the carrier subspace, denoted by $\mathrm{span}(K)$. The relative interior of $K$, denoted by $\mathrm{ri}(K)$, is the interior of $K$ with respect to the subspace $\mathrm{span}(K)$ and the relative topology used. If $K$ does not contain the origin, $\mathrm{span}(K - x_0)$ denotes the smallest subspace containing $K - x_0$, where $x_0$ is any point in $K$. Then, $\mathrm{ri}(K)$ is the interior of $K$ with respect to the affine set $x_0 + \mathrm{span}(K - x_0)$. Similarly, we can define the relative boundary $\mathrm{rb}(K)$.
Let $K$ be a closed convex set and $x \in K$. The tangent cone to $K$ at $x$ is defined as the set $\mathcal{T}(x, K) := \{ z \in \mathbb{R}^m : \liminf_{\lambda \to 0^+} |x + \lambda z|_K / \lambda = 0 \}$. Note that if $x \in \mathrm{int}(K)$, then $\mathcal{T}(x, K) = \mathbb{R}^m$. Therefore, the definition of $\mathcal{T}(x, K)$ is essential only when $x \in \partial K$. The following lemma can be found in [3] and will be used.
Lemma 1
Let $K_1 \subseteq K_2 \subseteq \mathbb{R}^m$ be convex sets. If $x \in K_1$, then $\mathcal{T}(x, K_1) \subseteq \mathcal{T}(x, K_2)$.
2.2 Graph theory
A directed graph $\mathcal{G}$ consists of a pair $(\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ is a finite, nonempty set of nodes and $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is a set of ordered pairs of nodes, called arcs. The set of neighbors of node $i$ is denoted $\mathcal{N}_i$. A directed path in a directed graph is a sequence of arcs of the form $(i_1, i_2), (i_2, i_3), \dots, (i_{k-1}, i_k)$. If there exists a path from node $i$ to node $j$, then node $j$ is said to be reachable from node $i$. If for node $i$ there exists a path from $i$ to any other node, then $i$ is called a root of $\mathcal{G}$. $\mathcal{G}$ is said to be strongly connected if each node is reachable from any other node. $\mathcal{G}$ is said to be quasi-strongly connected if $\mathcal{G}$ has a root.
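For illustration, quasi-strong connectivity can be checked computationally by testing whether some node reaches all others; the following sketch (our own helper, not part of the paper) does this by graph search:

```python
from collections import defaultdict

def is_quasi_strongly_connected(nodes, arcs):
    """A directed graph is quasi-strongly connected iff it has a root:
    a node from which every other node is reachable."""
    adj = defaultdict(list)
    for (i, j) in arcs:
        adj[i].append(j)

    def reachable_from(r):
        seen, stack = {r}, [r]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen

    return any(reachable_from(r) == set(nodes) for r in nodes)

# A directed path 1 -> 2 -> 3 is quasi-strongly connected (root 1)
# but not strongly connected (node 3 cannot reach node 1).
print(is_quasi_strongly_connected([1, 2, 3], [(1, 2), (2, 3)]))  # → True
print(is_quasi_strongly_connected([1, 2, 3], [(2, 3)]))          # → False
```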
2.3 Dini derivatives
Let $D^+ f(t)$ denote the upper Dini derivative of a function $f : \mathbb{R} \to \mathbb{R}$ with respect to $t$, i.e., $D^+ f(t) := \limsup_{s \to 0^+} \frac{f(t + s) - f(t)}{s}$. The following lemma [9] will be used for our analysis.
Lemma 2
Suppose that for each $i = 1, \dots, n$, $V_i : \mathbb{R} \times \mathbb{R}^m \to \mathbb{R}$ is continuously differentiable. Let $V(t, x) := \max_{i = 1, \dots, n} V_i(t, x)$, and let $\mathcal{I}(t)$ be the set of indices where the maximum is reached at time $t$. Then $D^+ V(t, x(t)) = \max_{i \in \mathcal{I}(t)} \dot{V}_i(t, x(t))$.
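A quick numerical sanity check of Lemma 2 (our own sketch, with hypothetical functions V1 and V2): at a point where several functions attain the maximum, the upper Dini derivative of the max picks out the largest derivative among them.

```python
# Two smooth functions that both attain the maximum at t = 0.
def V1(t): return t          # dV1/dt = 1
def V2(t): return 2.0 * t    # dV2/dt = 2

def V(t): return max(V1(t), V2(t))

# Both indices are in the argmax set at t = 0, so the lemma predicts
# D+ V(0) = max(1, 2) = 2.
h = 1e-8
dini = (V(0.0 + h) - V(0.0)) / h
print(abs(dini - 2.0) < 1e-6)  # → True
```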
3 Multiagent Network Model
In this section, we present the model of the considered multiagent systems, introduce the corresponding interaction graph, and define some useful geometric concepts used in the control laws.
Consider a multiagent system with agent set $\mathcal{V} = \{1, \dots, n\}$. Let $x_i \in \mathbb{R}^m$ denote the state of agent $i$. Let $x := (x_1^\top, \dots, x_n^\top)^\top$ denote the stacked state of the network.
3.1 Nonlinear multiagent dynamics
Let $\mathcal{P}$ be a given (finite or infinite) set of indices. An element in $\mathcal{P}$ is denoted by $p$. For any $p \in \mathcal{P}$, we define a function $f^p$ associated with $p$, where $f^p = \big((f_1^p)^\top, \dots, (f_n^p)^\top\big)^\top$ with $f_i^p : \mathbb{R}_{\ge 0} \times \mathbb{R}^{mn} \to \mathbb{R}^m$, $i = 1, \dots, n$.
Let $\sigma : \mathbb{R}_{\ge 0} \to \mathcal{P}$ be a piecewise constant function; that is, there exists a sequence of increasing time instants $t_0 < t_1 < t_2 < \cdots$ such that $\sigma(t)$ remains constant for $t \in [t_s, t_{s+1})$ and switches at $t = t_{s+1}$.
The dynamics of the multiagent system is described by the switched nonlinear system
(2) $\dot{x}_i(t) = f_i^{\sigma(t)}(t, x(t)), \quad i = 1, \dots, n.$
We place some mild assumptions on this system.
Assumption 1
There exists a lower bound $\tau_D > 0$ on the time between consecutive switches, such that $t_{s+1} - t_s \ge \tau_D$ for all $s \ge 0$.
Assumption 2
Each $f^p$, $p \in \mathcal{P}$, is uniformly locally Lipschitz on $\mathbb{R}^{mn}$, i.e., for every $x \in \mathbb{R}^{mn}$ we can find a neighborhood $\mathcal{B}(x, r)$ for some $r > 0$ such that there exists a real number $L_x > 0$ with $\|f^p(t, y) - f^p(t, z)\| \le L_x \|y - z\|$ for any $t \ge 0$ and $y, z \in \mathcal{B}(x, r)$.
Under Assumptions 1 and 2, the Carathéodory solutions of (2) exist for arbitrary initial conditions; they are absolutely continuous functions satisfying (2) for almost all $t$ on the maximal interval of existence [11]. All our further discussions will concern the Carathéodory solutions of (2) without specific mention.
3.2 Interaction graph
Having the dynamics defined for the considered multiagent system, similar to [19], we introduce next its interaction graph.
Definition 1
The graph associated with $f^p$ is the directed graph $\mathcal{G}_p = (\mathcal{V}, \mathcal{E}_p)$ on node set $\mathcal{V}$ and arc set $\mathcal{E}_p$ such that $(j, i) \in \mathcal{E}_p$ if and only if $f_i^p$ depends on $x_j$, i.e., there exist $t \ge 0$ and states $x, x'$ with $x_k = x'_k$ for all $k \ne j$ such that $f_i^p(t, x) \ne f_i^p(t, x')$.
The set of neighbors of node $i$ in $\mathcal{G}_p$ is denoted by $\mathcal{N}_i(p)$. The dynamic interaction graph associated with system (2) is denoted by $\mathcal{G}_{\sigma(t)} = (\mathcal{V}, \mathcal{E}_{\sigma(t)})$. The joint graph of $\mathcal{G}_{\sigma(t)}$ during a time interval $[t_1, t_2)$ is defined by $\mathcal{G}([t_1, t_2)) = (\mathcal{V}, \bigcup_{t \in [t_1, t_2)} \mathcal{E}_{\sigma(t)})$. We impose the following definition on the connectivity of $\mathcal{G}_{\sigma(t)}$, cf. [31].
Definition 2
$\mathcal{G}_{\sigma(t)}$ is uniformly jointly quasi-strongly (respectively, strongly) connected if there exists a constant $T > 0$ such that the joint graph $\mathcal{G}([t, t + T))$ is quasi-strongly (respectively, strongly) connected for any $t \ge 0$.
For each $p \in \mathcal{P}$, the node relation along an interaction arc may be cooperative or antagonistic. We assume that there is a sign, "$+1$" or "$-1$", associated with each arc. To be precise, the sign is $+1$ if $j$ is cooperative to $i$, and $-1$ if $j$ is antagonistic to $i$.
Definition 3
If every arc carries the sign $+1$, for all indices $p$ and all arcs, the considered multiagent network is called a cooperative network. Otherwise, it is called a cooperative–antagonistic network.
3.3 Shared reference direction, hyperrectangle, and tangent cone
We assume that each agent has access to shared reference directions with respect to a common Cartesian coordinate system. We use $\{e_1, \dots, e_m\}$ to represent the basis of that Cartesian coordinate system. Here $e_k$ denotes the unit vector in the direction of axis $k$.
A hyperrectangle is the generalization of a rectangle to higher dimensions. An axisaligned hyperrectangle is a hyperrectangle subject to the constraint that the edges of the hyperrectangle are parallel to the Cartesian coordinate axes.
Definition 4
Let $S \subset \mathbb{R}^m$ be a bounded set. The supporting hyperrectangle of $S$ is the axis-aligned hyperrectangle $[\underline{b}_1, \bar{b}_1] \times \cdots \times [\underline{b}_m, \bar{b}_m]$, where by definition $\underline{b}_k = \inf_{y \in S} y_{(k)}$, $\bar{b}_k = \sup_{y \in S} y_{(k)}$, and $y_{(k)}$ denotes the $k$th entry of $y$.
In other words, a supporting hyperrectangle of a bounded set is an axisaligned minimum bounding hyperrectangle.
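Computationally, the supporting hyperrectangle of a finite point set is just the componentwise minimum and maximum; a minimal sketch (the helper name is ours):

```python
import numpy as np

def supporting_hyperrectangle(points):
    """Axis-aligned minimum bounding hyperrectangle of a finite point set:
    per Definition 4, the k-th side spans the min and max of the k-th
    coordinates over the points."""
    pts = np.asarray(points, dtype=float)
    return pts.min(axis=0), pts.max(axis=0)

lo, hi = supporting_hyperrectangle([[0.0, 1.0], [2.0, -1.0], [1.0, 0.5]])
print(lo.tolist(), hi.tolist())  # → [0.0, -1.0] [2.0, 1.0]
```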
Definition 5
Let be an axisaligned hyperrectangle and a constant. The strict tangent cone to at is the set
(3) 
where , denotes the two facets of perpendicular to the axis , and denotes the side length parallel to the axis .
Figure 1 gives an example of the strict tangent cone to at .
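Since the explicit inequality in (3) is not reproduced above, the following sketch checks membership in the ordinary (non-strict) tangent cone of an axis-aligned hyperrectangle; the strict cone of Definition 5 additionally tightens each active-facet inequality by a margin, mimicked here by an optional parameter (our own reading, for illustration only):

```python
def in_box_tangent_cone(z, x, lo, hi, margin=0.0):
    """Tangent cone membership for an axis-aligned hyperrectangle
    [lo, hi]: along each axis k, if x sits on the lower (resp. upper)
    facet, z must point inward, z[k] >= margin (resp. z[k] <= -margin).
    margin = 0 gives the ordinary tangent cone; a positive margin mimics
    the strictness in Definition 5 (an illustrative reading of (3))."""
    for k in range(len(x)):
        if x[k] == lo[k] and z[k] < margin:
            return False
        if x[k] == hi[k] and z[k] > -margin:
            return False
    return True

lo, hi = [0.0, 0.0], [2.0, 1.0]
x = [0.0, 0.5]                                        # on the left facet only
print(in_box_tangent_cone([1.0, -3.0], x, lo, hi))    # → True
print(in_box_tangent_cone([-0.1, 0.0], x, lo, hi))    # → False
```

Note that the first test vector points outside the convex hull of any neighbor configuration producing this box, yet it is feasible here, which is the relaxation the paper exploits.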
3.4 Agreement metrics
We next define uniformly asymptotic agreement and exponential agreement in this section.
Definition 6
The multiagent system (2) is said to achieve uniformly asymptotic agreement on if

pointwise uniform agreement can be achieved, i.e., for all , and , there exists such that for all , where , and the agreement manifold is defined as and denotes ; and

uniform agreement attraction can be achieved, i.e., for all , there exist and such that for all ,
Definition 7
The multiagent system (2) is said to achieve exponential state agreement on if

pointwise uniform agreement can be achieved; and

exponential agreement attraction can be achieved, i.e., there exist and , , such that for all ,
4 Main Results
In this section, we state the main results of the paper.
4.1 Cooperative networks
We first study the convergence property of the nonlinear switched system (2) over a cooperative network defined by an interaction graph. Introduce the local convex hull . In order to achieve exponential agreement, we propose the following strict tangent cone condition for the feasible vector field.
Assumption 3
For all , , and , it holds that .
In Assumption 3, the vector can be chosen freely from the set . Hence, the assumption specifies constraints on the feasible controls for the multiagent system. Here denotes the convex hull formed by agent and its neighbors, denotes the local supporting hyperrectangle of the set , and denotes the strict tangent cone to at . Figure 2 gives an example of the convex hull and the supporting hyperrectangle formed by agent and its neighbors. Two feasible vectors are also presented.
In order to implement a controller compatible with Assumption 3, the agents need to determine, through local sensing or communication, the relative orthant of each of their neighbors’ states. This can be realized, for instance, if each agent is capable of measuring the relative states with respect to its neighbors and is aware of the direction of each axis of a prescribed global coordinate system. More specifically, when the agent is in the interior of the hyperrectangle, the vector field for the agent can be chosen arbitrarily. When the agent is on the boundary of its supporting hyperrectangle, the feasible control is any direction pointing inside the tangent cone of its supporting hyperrectangle. Note that the absolute state of the agents is not needed, but each agent needs to identify absolute directions such that it can identify the direction of its neighbors with respect to itself. For example, for the planar case , in addition to the relative state measurements with respect to its neighbors, each agent just needs to be equipped with a compass. The compass together with relative state measurements provide the quadrant location information of the neighbors.
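The orthant-detection step described above admits a one-line computational interpretation: with relative measurements expressed in the shared compass frame, the sign pattern of the relative state identifies the neighbor's orthant. A sketch (the function name is ours):

```python
import numpy as np

def relative_orthant(rel_state):
    """Given the relative state x_j - x_i expressed in the shared global
    frame (available thanks to the compass), return the sign pattern
    identifying which orthant neighbor j lies in relative to agent i.
    0 marks coordinates in which the two agents coincide."""
    return np.sign(rel_state).astype(int)

# Planar case (m = 2): the compass turns a relative measurement into
# quadrant information. A neighbor at (+, -) is in the fourth quadrant.
print(relative_orthant(np.array([0.7, -1.2])).tolist())  # → [1, -1]
```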
We state an exponential agreement result for the cooperative multiagent systems.
Theorem 1
Let Assumptions 1, 2 and 3 hold. Then the cooperative multiagent system (2) achieves exponential state agreement if and only if the interaction graph is uniformly jointly quasi-strongly connected.
In order to compare the proposed “supporting hyperrectangle condition” with respect to the usual convex hull condition [25, 19], we introduce the following assumption, which is a weaker condition than Assumption 3.
Assumption 4
For all , , and , it holds that .
We next present a uniformly asymptotic agreement result based on the relative interior condition of a tangent cone formed by the supporting hyperrectangle.
Proposition 1
Let Assumptions 1, 2 and 4 hold. Then the cooperative multiagent system (2) achieves uniformly asymptotic agreement if the interaction graph is uniformly jointly quasi-strongly connected.
Figure 3 illustrates the relative interior of a tangent cone of the convex hull (Assumption A2 of [19]), the relative interior of a tangent cone of the supporting hyperrectangle (Assumption 4), and the strict tangent cone of the supporting hyperrectangle (Assumption 3). It is obvious that the vector fields can be chosen more freely under Assumption 4 than under Assumption A2 of [19]. On the other hand, the strict tangent cone condition is stricter than the relative interior condition of a tangent cone. However, exponential agreement can be achieved under the strict tangent cone condition, while only uniformly asymptotic agreement is achieved under the relative interior condition of a tangent cone.
Remark 1
Theorem 1 and Proposition 1 are consistent with the main results in [19, 21, 25]. Our analysis relies on some critical techniques developed in [17, 19]. Proposition 1 allows the vector field to belong to a larger set compared with the convex hull condition proposed in [19, 21, 25]. In addition, we allow the agent dynamics to switch over a possibly infinite set, and in the proof of Theorem 1 we show exponential agreement and derive the explicit exponential convergence rate. It follows that, by sharing reference directions in addition to the available local information, agreement seeking for multiagent systems admits an enlarged set of feasible interactions and a faster convergence speed compared with the case of using only local information.
Example 1. Let us first consider Vicsek’s model [36]. In particular, consider agent moving in the plane with position , the same absolute velocity , and the heading at discrete time . The position and angle updates are described by
(4)  
(5)  
(6) 
for all , where by convention it is assumed that . From (6), we see that Vicsek's model inherently uses "compass"-like directional information. Then, similar to the analysis of Theorem 1, we can easily show that the first quadrant is an invariant set for (4) and (5). This can be verified by the fact that when for all . Figure 4 illustrates this point for three agents. At time , the vector fields of all the agents point inside the first quadrant, so the agents construct an "unbounded" hyperrectangle (both the upper and right bounds are at infinity). This "unbounded" hyperrectangle is the invariant set for the positions of all the agents. The existence of the left and lower bounds of the hyperrectangle guarantees that agents 1 and 2 satisfy Assumption 3. However, it is easy to verify that agent 3 does not satisfy Assumption 3, since the upper and right bounds of the hyperrectangle do not exist. Therefore, position agreement cannot be achieved in general for Vicsek's model.
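The discussion above can be reproduced numerically with the standard Vicsek update (the explicit forms of (4)–(6) are not restated here, so the sketch below follows the usual model; all parameter values are illustrative):

```python
import numpy as np

# Each agent moves with constant speed v along its heading, and the new
# heading is the average of the neighbors' headings (all-to-all here).
rng = np.random.default_rng(0)
n, v, steps = 3, 0.1, 50
pos = rng.uniform(0.5, 1.5, size=(n, 2))       # start in the first quadrant
theta = rng.uniform(0.1, np.pi / 2 - 0.1, n)   # headings in (0, pi/2)

for _ in range(steps):
    pos = pos + v * np.column_stack((np.cos(theta), np.sin(theta)))
    theta = np.full(n, theta.mean())           # heading averaging

# With all headings in (0, pi/2), every velocity component is positive,
# so the first quadrant is invariant while positions keep growing:
# heading agreement is reached, but position agreement is not.
print(bool(np.all(pos > 0)))  # → True
```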
Example 2. Consider the following dynamics for each agent :
(7) 
where is a continuous function representing the weight of arc , and is a state-dependent rotation matrix which is continuous in for any fixed . Certainly the dynamics described in (7) go beyond the convex hull agreement protocols [19, 21, 25]. With the results in Theorem 1 and Proposition 1, it becomes evident that the presence of the rotation may still guarantee agreement, as long as it rotates the convex hull vector field within the proposed tangent cones given by the local supporting hyperrectangle. Certainly this does not mean that the rotation angle should be sufficiently small, since from Figure 3 this angle can be large under certain interaction rules. This can also be viewed as a structural robustness of the proposed "compass"-based framework.
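A linear planar instance of (7) illustrates this robustness: rotating a consensus vector field by a moderate fixed angle still yields agreement (the weights, angle, and step size below are illustrative choices of ours, not from the paper):

```python
import numpy as np

# The convex hull vector field sum_j a (x_j - x_i) is rotated by R(phi).
# For a moderate angle, the rotated field still points into the tangent
# cone of the local supporting hyperrectangle, and agreement is reached.
phi = 0.3
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])
x = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 1.0]])  # three planar agents
dt, a = 0.01, 1.0

for _ in range(5000):
    field = np.array([sum(a * (x[j] - x[i]) for j in range(3) if j != i)
                      for i in range(3)])
    x = x + dt * (field @ R.T)       # each row rotated by R

spread = np.max(x, axis=0) - np.min(x, axis=0)
print(bool(np.all(spread < 1e-3)))  # → True
```

Here the complete-graph Laplacian eigenvalues are rotated in the complex plane by the angle phi; agreement persists as long as they keep negative real parts, consistent with the tangent cone picture.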
4.2 Cooperativeantagonistic networks
Next, we study the convergence property of the cooperative–antagonistic networks. Define . We impose the following assumption.
Assumption 5
For all , and , it holds that .
Assumption 5 follows the model for antagonistic interactions introduced in [2], where simple examples show that state agreement cannot always be achieved for cooperative–antagonistic networks. Instead, it is possible that agents converge to values with opposite signs, which is known as bipartite consensus [2]. We present the following result for cooperative–antagonistic networks.
Theorem 2
Let Assumptions 1, 2 and 5 hold. Then, if (and in general only if) the interaction graph is uniformly jointly strongly connected, all the agents' trajectories asymptotically converge for the cooperative–antagonistic multiagent system (2), and their limits agree componentwise in absolute value for every initial time and initial state.
Here by "in general only if," we mean that we can always construct simple examples with a fixed interaction rule for which strong connectivity is necessary for the result in Theorem 2 to hold. The proof of Theorem 2 will be presented in Section 6. Compared with the results given in [2], Theorem 2 requires no conditions on the structural balance of the network. Theorem 2 shows that every positive or negative arc contributes to the convergence of the absolute values of the nodes' states, even for general nonlinear multiagent dynamics.
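A linear scalar special case illustrates the conclusion of Theorem 2 (the signed ring below is our own toy example, not from the paper): on a strongly connected ring with two antagonistic arcs, the states converge and the limits agree in absolute value.

```python
import numpy as np

# Signed ring on three scalar agents. Along an antagonistic arc the
# neighbor's state enters with flipped sign, as in Assumption 5:
#   dx_i/dt = s_ij * x_j - x_i  for the single arc j -> i.
# Arcs (keyed (i, j) with the sign of arc j -> i):
signs = {(0, 1): 1.0, (1, 2): -1.0, (2, 0): -1.0}
x = np.array([1.0, -2.0, 0.5])
dt = 0.01

for _ in range(20000):                 # forward Euler integration
    dx = np.zeros(3)
    for (i, j), s in signs.items():
        dx[i] += s * x[j] - x[i]       # sign-flipped neighbor state
    x = x + dt * dx

# The ring is structurally balanced (sign product +1), so the states
# split into two camps agreeing in absolute value (bipartite consensus).
print(bool(np.allclose(np.abs(x), np.abs(x[0]), atol=1e-4)))  # → True
```

For this particular system the limit is proportional to the sign pattern (1, 1, -1), so the common absolute value is nonzero for generic initial states.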
The exponential agreement and uniformly asymptotic agreement results given in Theorem 1 and Proposition 1 rely on uniformly jointly quasi-strong connectivity, while the result in Theorem 2 needs uniformly jointly strong connectivity. For cooperative networks, we establish the exponential convergence rate in the proof of Theorem 1. In contrast, for cooperative–antagonistic networks in Theorem 2, the convergence speed is unclear. We conjecture that exponential convergence might not hold in general under the conditions of Theorem 2, since Lemmas 5 and 7 given in Section 5 cannot be recovered for cooperative–antagonistic networks.
5 Cooperative Multiagent Systems
In this section, we focus on the case of cooperative multiagent systems. We will prove Theorem 1 and Proposition 1 by analyzing a contraction property of (2), with the help of a series of preliminary lemmas.
5.1 Invariant set
We introduce the following definition.
Definition 8
A set is an invariant set for the system (2) if for all ,
For all , define where denotes entry of . In addition, define the supporting hyperrectangle by the initial states of all agents as , where .
In the following lemma, we show that the supporting hyperrectangle formed by the initial states of all agents is an invariant set for system (2).
Lemma 3
Let Assumption 3 or Assumption 4 hold. Then the supporting hyperrectangle formed by the initial states of all agents is an invariant set for system (2).
Proof. We first show that , . Let be the set of indices where the maximum is reached at . It then follows from Lemma 2 that for all , where denotes entry of the vector . Consider any initial state and any initial time . It follows from Definition 5 and Lemma 1 that for Assumption 3 and for Assumption 4. It follows from the definition of the tangent cone that for all satisfying . It follows that for all and any , We can similarly show that for all , .
Therefore, it follows that , , , . Then, based on the definition of , we have shown that is an invariant set.
5.2 Interior agents
In this subsection, we study the state evolution of the agents whose states are interior points of . In the following lemma, we show that the projection of the state on any coordinate axis is strictly less than an explicit upper bound as long as it is initially strictly less than this upper bound. Figure 5 illustrates the following Lemma 4.
The proof follows from an argument similar to that used in the proof of Lemma 4.9 in [17], and the following lemma holds separately for any .
Lemma 4
Proof. Fix and any . Denote and where , and The rest of the proof is divided into three steps.
(Step I). Define the following nonlinear function
(8) 
where denotes the entry of the vector , denotes the entry of the vector and denotes all the other components of except . The nonlinear function is used as an upper bound of and the argument is used to describe the state . In this step, we establish some useful properties of based on Lemmas 11 and 12 in the Appendices. We make the following claim.
Claim A: (i) if ; (ii) if ; (iii) is Lipschitz continuous with respect to on .
It follows from Definition 5, Lemma 1 and the similar analysis of Lemma 3 (by replacing with ) that , . Then, it follows from Definition 5 that when . This implies that when based on the definition of . We next show that actually when . Since is uniformly jointly quasistrongly connected, there must exist a such that has a nonempty arc set . We can then choose and such that agent has at least one neighbor agent, i.e., is not empty since is nonempty. We next choose , for all , where . In such a case, is the singleton and it follows from Assumption 3 (or 4) that . Therefore, based on the definition of , we know that if . This proves (i).
Next, for any , we still use the same and as those in the proof of Claim A(i). We choose , and , , for all . Note that . In such a case, is a line from point to . It then follows from Assumption 3 that or from Assumption 4 that . This verifies that , . This proves (ii).
Finally, it follows from Lemma 12 that , is locally Lipschitz with respect to , , and . Then, it follows from Theorem 1.14 of [20] that is (globally) Lipschitz continuous with respect to on . From the first property of , it follows that . Therefore, based on Lemma 11, it follows that is Lipschitz continuous with respect to on . This proves (iii) and the claim holds.
(Step II). In this step, we construct and investigate the nonlinear function , which is derived by with the argument measuring the difference between and the upper boundary . Define
(9) 
where and are constants determined by . Obviously, is continuous. We make the following claim.
Claim B: (i) is Lipschitz continuous with respect to on , where the Lipschitz constant is denoted by and is related to the initial bounded set ; (ii) if ; (iii) if .
Note that is compact on the compact set . It follows that is Lipschitz continuous with respect to on the compact set . This shows that (i) holds and properties (ii) and (iii) follow directly from the definition of .
(Step III). In this step, we take advantage of and to show that will always be strictly less than the upper bound as long as it is initially strictly less than .
Suppose at some and let . Based on the definition of , it follows that Let be the solution of with initial condition . Based on the Comparison Lemma (Lemma 3.4 of [16]), it follows that , .
Note that and . It follows from the first property of that , . This shows that based on the second and the third properties of . Thus, the solution of satisfies , based on the Comparison Lemma.
Therefore,