Distributed Robust Consensus Control of Multi-agent Systems with Heterogeneous Matching Uncertainties

Abstract

This paper considers the distributed consensus problem of linear multi-agent systems subject to different matching uncertainties for both the cases without and with a leader of bounded unknown control input. Due to the existence of nonidentical uncertainties, the multi-agent systems discussed in this paper are essentially heterogeneous. For the case where the communication graph is undirected and connected, a distributed continuous static consensus protocol based on the relative state information is first designed, under which the consensus error is uniformly ultimately bounded and exponentially converges to a small adjustable residual set. A fully distributed adaptive consensus protocol is then designed, which, contrary to the static protocol, relies on neither the eigenvalues of the Laplacian matrix nor the upper bounds of the uncertainties. For the case where there exists a leader whose control input is unknown and bounded, distributed static and adaptive consensus protocols are proposed to ensure the boundedness of the consensus error. It is also shown that the proposed protocols can be redesigned so as to ensure the boundedness of the consensus error in the presence of bounded external disturbances which do not satisfy the matching condition. A sufficient condition for the existence of the proposed protocols is that each agent is stabilizable.

Zhongkui Li, Zhisheng Duan, Frank L. Lewis

State Key Laboratory for Turbulence and Complex Systems, Department of Mechanics and Aerospace Engineering, College of Engineering, Peking University, Beijing 100871, China 

The Automation and Robotics Research Institute, The University of Texas at Arlington, Fort Worth, TX 76118-7115, USA 


Key words:  Multi-agent systems; uncertain systems; consensus; distributed tracking; adaptive control.

 

1 Introduction

Cooperative control of a network of autonomous agents has emerged as an important research direction and has attracted considerable attention from many scientific communities, especially the systems and control community. A group of autonomous agents, by coordinating with each other via communication or sensing networks, can perform certain challenging tasks which cannot be well accomplished by a single agent. Cooperative control of multi-agent systems has potential applications in broad areas including spacecraft formation flying, sensor networks, and cooperative surveillance [1, 2]. In the area of cooperative control, consensus is an important and fundamental problem, which requires developing distributed control policies, based only on local information, that ensure the agents reach an agreement on certain quantities of interest.

Two pioneering works on consensus are [3] and [4]. A theoretical explanation is provided in [3] for the alignment behavior observed in the Vicsek model [5] and a general framework of the consensus problem for networks of integrators is proposed in [4]. Since then, the consensus problem has been extensively studied by various scholars from different perspectives; see [1, 2, 6, 7, 8, 9, 10, 11, 12] and references therein. Existing consensus algorithms can be roughly categorized into two classes, namely, consensus without a leader (i.e., leaderless consensus) and consensus with a leader. The latter is also called leader-follower consensus or distributed tracking. In [6], a sufficient condition is derived to achieve consensus for multi-agent systems with jointly connected communication graphs. The authors in [7] design a distributed neighbor-based estimator to track an active leader. Distributed tracking algorithms are proposed in [13] and [14] for a network of agents with first-order dynamics. Consensus of networks of double- and high-order integrators is studied in [15, 16]. Consensus algorithms are designed in [8, 17] for multi-agent systems with quantized communication links. The authors in [18] address a distributed tracking problem for multiple Euler-Lagrange systems with a dynamic leader. The consensus problem of multi-agent systems with general discrete- and continuous-time linear dynamics is studied in [9, 10, 11, 12, 19, 20, 21]. It is worth noting that the design of the consensus protocols in [9, 10, 11, 20, 21] requires the knowledge of the eigenvalues of the Laplacian matrix of the communication graph, which is actually global information. To overcome this limitation, distributed adaptive consensus protocols are proposed in [22, 23]. For the case where there exists a leader with possibly nonzero control input, distributed controllers are proposed in [24, 23] to solve the leader-follower consensus problem. A common assumption in [9, 10, 11, 12, 19, 20, 21, 24, 23] is that the dynamics of the agents are identical and precisely known, which might be restrictive and not practical in many circumstances. In practical applications, the agents may be subject to certain parameter uncertainties or unknown external disturbances.

This paper considers the distributed consensus problem of multi-agent systems with identical nominal linear dynamics but subject to different matching uncertainties. A typical example belonging to this scenario is a network of mass-spring systems with different masses or unknown spring constants. Due to the existence of the nonidentical uncertainties which may be time-varying, nonlinear and unknown, the multi-agent systems discussed in this paper are essentially heterogeneous. The heterogeneous multi-agent systems in this paper contain the homogeneous linear multi-agent systems studied in [9, 10, 11, 12, 19, 20, 21] as a special case where the uncertainties do not exist. Note that because of the existence of the uncertainties, the consensus problem in this case becomes quite challenging to solve and the consensus algorithms given in [9, 10, 11, 12, 19, 20, 21] are not applicable any more.

In this paper, we present a systematic procedure to address the distributed robust consensus problem of multi-agent systems with matching uncertainties for both the cases without and with a leader of possibly nonzero control input. First, we consider the case where the communication graph is undirected and connected. A distributed continuous static consensus protocol based on the relative states of neighboring agents is designed, under which the consensus error is uniformly ultimately bounded and exponentially converges to a small residual set. Note that the design of this protocol relies on the eigenvalues of the Laplacian matrix and the upper bounds of the matching uncertainties. In order to remove these requirements, a fully distributed adaptive protocol is further designed, under which the residual set of the consensus error is also given. One desirable feature is that for both the static and adaptive protocols, the residual sets of the consensus error can be made reasonably small by properly selecting the design parameters of the protocols, and the convergence rates of the consensus error are explicitly given. Next, we extend the results to the case where there exists a leader with nonzero control input. Here we study the general case where the leader's control input is not available to any follower, which imposes additional difficulty. Distributed static and adaptive consensus protocols based on the relative state information are proposed and designed to ensure that the consensus error converges to residual sets which are explicitly given and adjustable. The case where the external disturbances associated with the agent dynamics are bounded and do not satisfy the matching condition is also examined. The proposed consensus protocols are redesigned to guarantee the boundedness of the consensus error. The existence conditions of the consensus protocols proposed in this paper are discussed. It is pointed out that a sufficient condition for the existence of the protocols is that each agent is stabilizable.

It is worth mentioning that in the related works [25, 26], the distributed tracking problem of multi-agent systems with unknown nonlinear dynamics is discussed. Compared to [25, 26], the contribution of this paper is at least three-fold. First, the agents in [25, 26] are restricted to be first-order and special high-order systems. It is far from trivial to extend the results in [25, 26] to solve the consensus problem of the general high-order multi-agent systems with matching uncertainties considered in this paper. Second, contrary to [25, 26], which consider only the case with a leader, consensus for both the cases with and without a leader is addressed in this paper. Third, the design of the protocols in [25, 26] depends on global information of the communication graph. In contrast, the adaptive consensus protocols proposed in this paper are fully distributed and do not require any global information.

The rest of this paper is organized as follows. Some useful results of graph theory are reviewed in Section 2. The distributed robust leaderless consensus problem is discussed in Section 3 for the case with an undirected graph. The robust leader-follower consensus problem is addressed in Section 4 for the case where there exists a leader with unknown control input. The robustness of the proposed consensus protocols with respect to external disturbances which do not satisfy the matching condition is discussed in Section 5. Simulation examples are presented for illustration in Section 6. Conclusions are drawn in Section 7.

2 Notation and Graph Theory

$I_n$ represents the identity matrix of dimension $n$. $\mathbf{1}$ denotes a column vector with all entries equal to one. $\mathrm{diag}(A_1,\ldots,A_N)$ represents a block-diagonal matrix with matrices $A_i$, $i=1,\ldots,N$, on its diagonal. $A\otimes B$ denotes the Kronecker product of matrices $A$ and $B$. For a vector $x$, $\|x\|$ denotes its 2-norm. For a symmetric matrix $W$, $\lambda_{\min}(W)$ and $\lambda_{\max}(W)$ denote, respectively, the minimum and maximum eigenvalues of $W$.

A directed graph $\mathcal{G}$ is a pair $(\mathcal{V},\mathcal{E})$, where $\mathcal{V}=\{v_1,\ldots,v_N\}$ is a nonempty finite set of nodes and $\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}$ is a set of edges, in which an edge is represented by an ordered pair of distinct nodes. For an edge $(v_i,v_j)$, node $v_i$ is called the parent node, node $v_j$ the child node, and $v_i$ is a neighbor of $v_j$. A graph with the property that $(v_i,v_j)\in\mathcal{E}$ implies $(v_j,v_i)\in\mathcal{E}$ for any $v_i,v_j\in\mathcal{V}$ is said to be undirected. A path from node $v_{i_1}$ to node $v_{i_l}$ is a sequence of ordered edges of the form $(v_{i_k},v_{i_{k+1}})$, $k=1,\ldots,l-1$. A subgraph $(\mathcal{V}',\mathcal{E}')$ of $\mathcal{G}$ is a graph such that $\mathcal{V}'\subseteq\mathcal{V}$ and $\mathcal{E}'\subseteq\mathcal{E}\cap(\mathcal{V}'\times\mathcal{V}')$. A directed graph contains a directed spanning tree if there exists a node called the root, which has no parent node, such that the node has directed paths to all other nodes in the graph.

The adjacency matrix $\mathcal{A}=[a_{ij}]\in\mathbb{R}^{N\times N}$ associated with the directed graph $\mathcal{G}$ is defined by $a_{ii}=0$, $a_{ij}=1$ if $(v_j,v_i)\in\mathcal{E}$ and $a_{ij}=0$ otherwise. The Laplacian matrix $\mathcal{L}=[\mathcal{L}_{ij}]\in\mathbb{R}^{N\times N}$ is defined as $\mathcal{L}_{ii}=\sum_{j\neq i}a_{ij}$ and $\mathcal{L}_{ij}=-a_{ij}$, $i\neq j$. For undirected graphs, both $\mathcal{A}$ and $\mathcal{L}$ are symmetric.

Lemma 1 [6]  Zero is an eigenvalue of $\mathcal{L}$ with $\mathbf{1}$ as a right eigenvector and all nonzero eigenvalues have positive real parts. Furthermore, zero is a simple eigenvalue of $\mathcal{L}$ if and only if $\mathcal{G}$ has a directed spanning tree.
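As a quick numerical illustration of these definitions and of Lemma 1, the following Python/NumPy sketch builds the Laplacian of a small undirected connected graph (the four-node graph is an arbitrary example, not one taken from the paper), verifies that the all-ones vector is a right eigenvector associated with the zero eigenvalue, and extracts the smallest nonzero eigenvalue, which reappears in the protocol design of Section 3.1.

import numpy as np

# Illustrative undirected connected graph on four nodes: a path 1-2-3-4 plus the edge 1-3.
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
N = 4

Adj = np.zeros((N, N))                    # adjacency matrix: a_ij = a_ji = 1 for each edge
for i, j in edges:
    Adj[i, j] = Adj[j, i] = 1.0

L = np.diag(Adj.sum(axis=1)) - Adj        # Laplacian: L_ii = sum_j a_ij, L_ij = -a_ij (i != j)

print(np.allclose(L @ np.ones(N), 0))     # True: zero eigenvalue with 1 as right eigenvector
eigs = np.sort(np.linalg.eigvalsh(L))     # L is symmetric for undirected graphs
print(eigs[1] > 0)                        # True: connectivity gives a positive smallest nonzero eigenvalue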

3 Distributed Robust Leaderless Consensus

In this paper, we consider a network of $N$ autonomous agents with identical nominal linear dynamics but subject to heterogeneous uncertainties. The dynamics of the $i$-th agent are described by

$\dot{x}_i = Ax_i + Bu_i + \Delta_i(x_i,t) + \omega_i(t), \quad i=1,\ldots,N,$    (1)

where $x_i\in\mathbb{R}^n$ is the state, $u_i\in\mathbb{R}^m$ is the control input, $A$ and $B$ are constant known matrices with compatible dimensions, and $\Delta_i(x_i,t)$ and $\omega_i(t)$ denote, respectively, the parameter uncertainties and external disturbances associated with the $i$-th agent, which are assumed to satisfy the following standard matching condition [27, 28].

Assumption 1 There exist functions $\tilde{f}_i(x_i,t)$ and $\tilde{d}_i(t)$ such that $\Delta_i(x_i,t)=B\tilde{f}_i(x_i,t)$ and $\omega_i(t)=B\tilde{d}_i(t)$, $i=1,\ldots,N$.

By letting $f_i(x_i,t)\triangleq\tilde{f}_i(x_i,t)+\tilde{d}_i(t)$ represent the lumped uncertainty of the $i$-th agent, (1) can be rewritten into

$\dot{x}_i = Ax_i + B\bigl[u_i + f_i(x_i,t)\bigr], \quad i=1,\ldots,N.$    (2)
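As a concrete instance of (2), consider the mass-spring example mentioned in the introduction. The following worked sketch assumes unit masses and unknown spring constants $k_i$ around a nominal value $k_0$ (the symbols $k_0$, $k_i$, $p_i$ are introduced here for illustration only and are not taken from the paper):

% Agent i: unit mass, unknown spring constant k_i, nominal spring constant k_0,
% state x_i = (p_i, \dot{p}_i) consisting of position and velocity.
\dot{x}_i
  = \underbrace{\begin{bmatrix} 0 & 1 \\ -k_0 & 0 \end{bmatrix}}_{A} x_i
  + \underbrace{\begin{bmatrix} 0 \\ 1 \end{bmatrix}}_{B}
    \Bigl[\, u_i + \underbrace{(k_0 - k_i)\, p_i}_{f_i(x_i,t)} \,\Bigr],
  \qquad i = 1,\ldots,N.

The spring-constant mismatch enters through the input matrix $B$, so the matching condition of Assumption 1 holds, and Assumption 2 below is satisfied with, e.g., $\rho_i(x_i,t)=|k_0-k_i|\,\|x_i\|$.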

In the previous related works [9, 10, 29, 20, 19, 11, 22, 23], the agents are identical linear systems and free of uncertainties. In contrast, the agents (1) considered in this paper are subject to nonidentical uncertainties, which makes the resulting multi-agent systems essentially heterogeneous. The agents (1) reduce to the nominal linear agents in [9, 10, 29, 20, 19, 11, 22, 23] when the uncertainties do not exist. Note that the existence of the uncertainties associated with the agents makes the consensus problem quite challenging to solve, as detailed in the sequel.

Regarding the bounds of the uncertainties $f_i(x_i,t)$, we introduce the following assumption.

Assumption 2 There exist continuous scalar-valued functions $\rho_i(x_i,t)$, $i=1,\ldots,N$, such that $\|f_i(x_i,t)\|\leq\rho_i(x_i,t)$, $i=1,\ldots,N$, for all $t\geq 0$ and $x_i\in\mathbb{R}^n$.

The communication graph among the $N$ agents is represented by an undirected graph $\mathcal{G}$, which is assumed to be connected throughout this section. The objective of this section is to solve the consensus problem for the agents in (1), i.e., to design distributed consensus protocols such that $\lim_{t\to\infty}\|x_i(t)-x_j(t)\|=0$, $\forall\, i,j=1,\ldots,N$.

3.1 Distributed Static Consensus Protocol

Based on the relative states of neighboring agents, the following distributed static consensus protocol is proposed:

(3)

where $c>0$ is the constant coupling gain, $K\in\mathbb{R}^{m\times n}$ is the feedback gain matrix, $a_{ij}$ is the $(i,j)$-th entry of the adjacency matrix associated with $\mathcal{G}$, and the nonlinear function $\hat{g}(\cdot)$ is defined as follows: for $w\in\mathbb{R}^m$,

(4)

where $\epsilon$ is a small positive value.
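As a hedged illustration of the boundary-layer idea behind (4), the Python sketch below implements one common continuous approximation of the unit-vector nonlinearity $w/\|w\|$ with layer width $\epsilon$; the function name and this particular saturation form are illustrative assumptions and need not coincide with the exact definition in (4).

import numpy as np

def g_hat(w, eps):
    # Continuous boundary-layer approximation of the discontinuous function w / ||w||:
    # outside the layer (||w|| > eps) it equals the unit vector w / ||w||; inside the
    # layer it is the linear interpolation w / eps, so the function is continuous and
    # recovers the discontinuous one in the limit eps -> 0.
    norm = np.linalg.norm(w)
    return w / norm if norm > eps else w / eps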

Let $x=[x_1^T,\ldots,x_N^T]^T$ and $F(x,t)=[f_1(x_1,t)^T,\ldots,f_N(x_N,t)^T]^T$. Substituting the protocol (3) into (2), we can obtain the closed-loop network dynamics as

(5)

where $\mathcal{L}$ denotes the Laplacian matrix of $\mathcal{G}$, and

(6)

Let $\xi=(M\otimes I_n)x$, where $\xi=[\xi_1^T,\ldots,\xi_N^T]^T$ and $M=I_N-\frac{1}{N}\mathbf{1}\mathbf{1}^T$. It is easy to see that 0 is a simple eigenvalue of $M$ with $\mathbf{1}$ as a corresponding right eigenvector and 1 is the other eigenvalue with multiplicity $N-1$. Then, it follows that $\xi=0$ if and only if $x_1=\cdots=x_N$. Therefore, the consensus problem under the protocol (3) is solved if and only if $\xi$ asymptotically converges to zero. Hereafter, we refer to $\xi$ as the consensus error. By noting that $\mathcal{L}M=\mathcal{L}$, it is not difficult to obtain from (5) that the consensus error satisfies

(7)
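The role of the disagreement projection used above to define the consensus error can be checked numerically. The short Python sketch below assumes the standard projection $M=I_N-\frac{1}{N}\mathbf{1}\mathbf{1}^T$ described in the preceding paragraph and verifies its eigenvalue structure and the equivalence between $\xi=0$ and state agreement.

import numpy as np

N, n = 4, 2
M = np.eye(N) - np.ones((N, N)) / N               # disagreement projection used to define xi

print(np.sort(np.linalg.eigvalsh(M)))             # approximately [0, 1, 1, 1]: simple zero eigenvalue, 1 with multiplicity N-1

rng = np.random.default_rng(1)
xi = lambda x: np.kron(M, np.eye(n)) @ x          # xi = (M kron I_n) x
x_equal = np.tile(rng.normal(size=n), N)          # stacked states with x_1 = ... = x_N
x_mixed = rng.normal(size=N * n)                  # generic disagreeing states
print(np.allclose(xi(x_equal), 0), np.allclose(xi(x_mixed), 0))   # True False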

The following result provides a sufficient condition to design the consensus protocol (3).

Theorem 1  Suppose that the communication graph $\mathcal{G}$ is undirected and connected and Assumption 2 holds. The parameters in the distributed protocol (3) are designed such that the coupling gain $c$ is chosen based on $\lambda_2$, the smallest nonzero eigenvalue of $\mathcal{L}$, and the feedback gain matrix $K$ is constructed from a solution $P>0$ to the following linear matrix inequality (LMI):

(8)

Then, the consensus error $\xi$ in (7) is uniformly ultimately bounded and exponentially converges to the residual set

(9)

with a convergence rate faster than , where

(10)

Proof  Consider the following Lyapunov function candidate $V$:

By the definition of $\xi$, it is easy to see that $(\mathbf{1}^T\otimes I_n)\xi=0$. For a connected graph $\mathcal{G}$, it then follows from Lemma 1 that

(11)

The time derivative of $V$ along the trajectory of (7) is given by

(12)

By using Assumption 2, we can obtain that

(13)

Next, consider the following three cases.

i) , .

In this case, it follows from (4) and (6) that

(14)

Substituting (14) and (13) into (12) yields , where .

ii) , .

In this case, we can get from (4) and (6) that

(15)

Substituting (14), (13), and (15) into (12) gives

(16)

iii) satisfies neither case i) nor case ii).

Without loss of generality, assume that some of the agents satisfy the condition of case i) and the rest satisfy that of case ii). By combining (14) and (15), in this case we can get that

(17)

Then, it follows from (12), (14), (17), and (13) that

Therefore, by analyzing the above three cases, we get that $\dot{V}$ satisfies (16). Note that (16) can be rewritten as

(18)

where .

Because $\mathcal{G}$ is connected, it follows from Lemma 1 that zero is a simple eigenvalue of $\mathcal{L}$ and all the other eigenvalues are positive. Let $U=[\frac{\mathbf{1}}{\sqrt{N}}~~Y_1]$ and $U^{T}=[\frac{\mathbf{1}}{\sqrt{N}}~~Y_2^{T}]^{T}$, with $Y_1\in\mathbb{R}^{N\times(N-1)}$ and $Y_2\in\mathbb{R}^{(N-1)\times N}$, be such unitary matrices that $U^{T}\mathcal{L}U=\Lambda\triangleq\mathrm{diag}(0,\lambda_2,\ldots,\lambda_N)$, where $\lambda_2\leq\cdots\leq\lambda_N$ are the nonzero eigenvalues of $\mathcal{L}$. Let $\tilde{\xi}\triangleq[\tilde{\xi}_1^{T},\ldots,\tilde{\xi}_N^{T}]^{T}=(U^{T}\otimes I_n)\xi$. By the definitions of $\xi$ and $\tilde{\xi}$, it is easy to see that $\tilde{\xi}_1=(\frac{\mathbf{1}^{T}}{\sqrt{N}}\otimes I_n)\xi=0$. Then, it follows that

(19)

Because , we can see from (19) that . Then, we can get from (18) that

(20)

By using the well-known Comparison lemma (Lemma 3.4 in [30]), we can obtain from (20) that

(21)

which, by (11), implies that $\xi$ exponentially converges to the residual set (9) with the convergence rate stated in the theorem.

Remark 1  The distributed consensus protocol (3) consists of a linear part and a nonlinear part, where the nonlinear term is used to suppress the effect of the uncertainties $f_i(x_i,t)$. For the case where the uncertainties do not exist, we can accordingly remove the nonlinear term from (3), which recovers the static consensus protocols in [9, 29, 11]. As shown in Proposition 2 of [9], a necessary and sufficient condition for the existence of a $P>0$ satisfying the LMI (8) is that $(A,B)$ is stabilizable. Therefore, a sufficient condition for the existence of a protocol (3) satisfying Theorem 1 is that $(A,B)$ is stabilizable. Note that in Theorem 1 the parameters $c$ and $K$ of (3) are designed independently of each other.
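As a numerical companion to Remark 1, the Python sketch below produces a feasible solution of an LMI of this family for a stabilizable pair $(A,B)$. The specific inequality $AP+PA^T-2BB^T<0$, the Riccati-based construction, and the gain structure $K=-B^TP^{-1}$ are assumptions borrowed from related consensus designs; they are not claimed to be the exact LMI (8) and gain of Theorem 1.

import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative stabilizable pair (A, B); A is not Hurwitz.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
n = A.shape[0]

# One way to obtain P > 0 with A P + P A^T - 2 B B^T < 0: solve the Riccati equation
# A^T X + X A - X B B^T X + Q = 0 with Q > 0 and set P = X^{-1}.  Multiplying the
# Riccati equation by P on both sides gives A P + P A^T - B B^T = -P Q P < 0, so the
# LMI holds with margin; feasibility of this construction only requires (A, B) to be
# stabilizable.
Q = np.eye(n)
X = solve_continuous_are(A, B, Q, np.eye(B.shape[1]))
P = np.linalg.inv(X)

lmi = A @ P + P @ A.T - 2 * B @ B.T
print(np.max(np.linalg.eigvalsh(lmi)) < 0)        # True: the LMI is satisfied
print(np.min(np.linalg.eigvalsh(P)) > 0)          # True: P is positive definite

K = -B.T @ X                                      # assumed gain structure K = -B^T P^{-1}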

Note that the nonlinear component $\hat{g}(\cdot)$ in (3) is continuous; it is actually a continuous approximation, via the boundary layer concept [28, 30], of the discontinuous function $\frac{w}{\|w\|}$. The value of $\epsilon$ in (4) defines the size of the boundary layer. As $\epsilon\rightarrow 0$, the continuous function $\hat{g}(w)$ approaches the discontinuous function $\frac{w}{\|w\|}$.

Corollary 1  Assume that $\mathcal{G}$ is connected and Assumption 2 holds. The consensus error $\xi$ converges to zero under the discontinuous consensus protocol:

(22)

where $c$ and $K$ are chosen as in Theorem 1.

Remark 2  An inherent drawback of the discontinuous protocol (22) is that it will result in the undesirable chattering effect in real implementation, due to imperfections in switching devices [31, 28]. The chattering effect is avoided by using the continuous protocol (3). The cost is that the protocol (3) does not guarantee asymptotic stability but rather uniform ultimate boundedness of the consensus error $\xi$. Note that the residual set (9) of $\xi$ depends on the smallest nonzero eigenvalue of $\mathcal{L}$, the number of agents, the eigenvalues of the matrices appearing in (10), and the size of the boundary layer. By choosing a sufficiently small $\epsilon$, the consensus error under the protocol (3) can converge to an arbitrarily small neighborhood of zero, which is acceptable in most applications.
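To tie the pieces of Section 3.1 together, the following Python sketch simulates a small mass-spring network under a protocol of the described form, i.e., a linear relative-state term plus a $\rho_i$-weighted boundary-layer term. The graph, the spring constants, the coupling gain choice, the gain construction $K=-B^TP^{-1}$, and the saturation form of the nonlinearity are illustrative assumptions rather than the exact design of Theorem 1; the sketch is only meant to show the qualitative closed-loop behavior.

import numpy as np
from scipy.linalg import solve_continuous_are

# Nominal mass-spring agent (unit mass, nominal spring constant k0); the individual
# spring constants k_i are unknown to the protocol and enter as matched uncertainties.
k0 = 1.0
A = np.array([[0.0, 1.0], [-k0, 0.0]])
B = np.array([[0.0], [1.0]])
k = np.array([0.6, 0.9, 1.3, 1.6])                 # illustrative unknown spring constants
N, n = len(k), A.shape[0]

# Undirected connected communication graph and its Laplacian.
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
Adj = np.zeros((N, N))
for i, j in edges:
    Adj[i, j] = Adj[j, i] = 1.0
L = np.diag(Adj.sum(axis=1)) - Adj
lambda2 = np.sort(np.linalg.eigvalsh(L))[1]

# Assumed gain design: P from a Riccati equation so that A P + P A^T - 2 B B^T < 0,
# K = -B^T P^{-1}, and coupling gain c slightly larger than 1 / lambda2.
X = solve_continuous_are(A, B, np.eye(n), np.eye(1))
K = -B.T @ X
c = 1.2 / lambda2
eps = 0.01                                         # boundary-layer width

def g_hat(w):
    nw = np.linalg.norm(w)
    return w / nw if nw > eps else w / eps         # continuous approximation of w / ||w||

dt, steps = 1e-3, 20000
x = np.random.default_rng(0).normal(size=(N, n))   # initial states
for _ in range(steps):
    u = np.zeros(N)
    for i in range(N):
        zeta = sum(Adj[i, j] * (x[i] - x[j]) for j in range(N))   # relative state sum
        rho = abs(k0 - k[i]) * np.linalg.norm(x[i])               # bound of Assumption 2
        v = K @ zeta
        u[i] = (c * v + rho * g_hat(v)).item()                    # linear term + robust term
    for i in range(N):
        f_i = (k0 - k[i]) * x[i, 0]                               # lumped matching uncertainty
        x[i] = x[i] + dt * (A @ x[i] + B.ravel() * (u[i] + f_i))

print(max(np.linalg.norm(x[i] - x[j]) for i in range(N) for j in range(N)))

Under these assumed choices the printed maximum pairwise state distance should settle to a small residual value, in line with the uniform ultimate boundedness asserted by Theorem 1.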

3.2 Distributed Adaptive Consensus Protocol

In the last subsection, the design of the distributed protocol (3) relies on $\lambda_2$, the smallest nonzero eigenvalue of $\mathcal{L}$, and the upper bounds $\rho_i(x_i,t)$ of the matching uncertainties $f_i(x_i,t)$. However, $\lambda_2$ is global information in the sense that each agent has to know the entire communication graph to compute it. Besides, the bounds of the uncertainties might not be easily obtained in some cases, e.g., when $f_i(x_i,t)$ contains certain unknown external disturbances. In this subsection, we will implement some adaptive control ideas to compensate for the lack of $\lambda_2$ and $\rho_i(x_i,t)$ and thereby solve the consensus problem using only the local information available to each agent.

Before moving forward, we introduce a modified assumption regarding the bounds of the lumped uncertainties $f_i(x_i,t)$, $i=1,\ldots,N$.

Assumption 3  There exist positive constants $\bar{\rho}_i$, $i=1,\ldots,N$, such that $\|f_i(x_i,t)\|\leq\bar{\rho}_i$ for all $t\geq 0$ and $x_i\in\mathbb{R}^n$.

Based on the local state information of neighboring agents, we propose the following distributed adaptive protocol for each agent:

(23)

where $c_i(t)$ and $\hat{\rho}_i(t)$ are the adaptive gains associated with the $i$-th agent, $K$ is the feedback gain matrix, the adaptation rates are positive scalars, the remaining design constants are small positive constants chosen by the designer, and the nonlinear function is defined as follows: for $w\in\mathbb{R}^m$,

(24)

and the rest of the variables are defined as in (3).
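The update laws of the adaptive gains are those given in (23); since only their qualitative shape is described here, the Python sketch below shows the general pattern of such node-level σ-modification adaptive laws, namely a growth term driven by locally measurable relative-state quantities plus a small leakage term that keeps the gains bounded. All names, arguments, and the specific right-hand sides are illustrative assumptions rather than the actual protocol (23).

import numpy as np

def adaptive_gain_step(c_i, rho_hat_i, zeta_i, K, nu_i, eps_i, dt):
    # One Euler step of illustrative sigma-modification adaptive laws for agent i.
    #   c_i       : adaptive coupling gain
    #   rho_hat_i : adaptive estimate related to the uncertainty bound of agent i
    #   zeta_i    : locally available relative state  sum_j a_ij (x_i - x_j)
    # The quadratic / norm growth terms increase the gains while disagreement persists;
    # the -nu_i * c_i and -eps_i * rho_hat_i leakage terms keep the gains bounded.
    v = K @ zeta_i
    c_i_dot = -nu_i * c_i + float(v @ v)
    rho_hat_i_dot = -eps_i * rho_hat_i + float(np.linalg.norm(v))
    return c_i + dt * c_i_dot, rho_hat_i + dt * rho_hat_i_dot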

Let the consensus error $\xi$ be defined as in Section 3.1. Then, it is not difficult to get from (2) and (23) that the closed-loop network dynamics can be written as

(25)

where

(26)

and the rest of the variables are defined as in (5).

To establish the ultimate boundedness of the consensus error and the adaptive gains in (25), we use the following Lyapunov function candidate

(27)

where , , , and .

Theorem 3  Suppose that $\mathcal{G}$ is connected and Assumption 3 holds. The feedback gain matrices of the distributed adaptive protocol (23) are designed based on a solution $P>0$ to the LMI (8). Then, both the consensus error $\xi$ and the adaptive gains in (25) are uniformly ultimately bounded and the following statements hold.

  • For any positive values of the design constants, the consensus error and the adaptive gains exponentially converge to the residual set

    (28)

    with a convergence rate faster than , where the remaining quantities are defined as in (10).

  • If the small positive design constants satisfy , then in addition to i), the consensus error exponentially converges to the residual set

    (29)

with a convergence rate faster than .

Proof  The time derivative of the Lyapunov function candidate (27) along (25) can be obtained as

(30)

where .

By noting that , it is easy to get that

(31)

In light of Assumption 3, we can obtain that

(32)

In what follows, we consider three cases.

i) , .

In this case, we can get from (24) and (26) that

(33)

Substituting (31), (32), and (33) into (30) yields

where and we have used the facts that and .

ii) , .

In this case, we can get from (24) and (26) that

(34)

Then, it follows from (31), (32), (34), and (30) that

(35)

where we have used the fact that for , .

iii)