General formation control for multi-agent systems with double-integrator dynamics


Chen Wang, Weiguo Xia, Jinan Sun, Ruifeng Fan and Guangming Xie

This work was supported in part by grants from the National Natural Science Foundation of China (NSFC, No. 61503008, 61603071, 91648120, 61633002, 51575005), and the China Postdoctoral Science Foundation (No. 2016T90016, 2015M570013).

C. Wang, R. Fan and G. Xie are with the State Key Laboratory of Turbulence and Complex Systems, Intelligent Biomimetic Design Lab, College of Engineering, Peking University, Beijing 100871, China. {wangchen, frf, xiegming}@pku.edu.cn

W. Xia is with the School of Control Science and Engineering, Dalian University of Technology, Dalian 116024, China. wgxiaseu@dlut.edu.cn

J. Sun is with the National Engineering Research Center for Software Engineering, Peking University, Beijing 100871, China. sjn@pku.edu.cn
Abstract

We study the general formation problem for a group of mobile agents in a plane, in which the agents are required to maintain a distribution pattern, as well as to rotate around or remain static relative to a static/moving target. The prescribed distribution pattern is a class of general formations in which the distances between neighboring agents and the distances from each agent to the target need not be equal. Each agent is modeled as a double integrator and can only perceive the relative information of the target and its neighbors. A distributed control law is designed using a limit-cycle-based idea to solve the problem. One merit of the controller is that it can be implemented by each agent in its Frenet-Serret frame, so that only local information is used and no global information is required. Theoretical analysis is provided of the equilibria of the n-agent system and of the convergence of the converging part of the controller. Numerical simulations are given to show the effectiveness and performance of the proposed controller.

I Introduction

In recent years, control of multi-agent systems has attracted increasing attention, due both to its wide practical potential in applications such as exploration [1], environmental monitoring [2], pursuit and evasion [3, 4, 5], and surveillance [6], and to the theoretical challenges arising from the restrictions imposed by practical implementations.

Formation control is one of the most actively studied topics within the realm of control and coordination of multi-agent systems, since in such cooperative tasks the robots can benefit from forming clusters or moving in a desired formation with certain geometric shapes [7, 8, 9]. In particular, by forming desired patterns, the robots are able to successfully complete the tasks [8] and even to improve their performance, such as the quality of the collected data, and the robustness of group motion against random environmental disturbances [9]. One theoretical challenge of such formation control problems for multi-agent systems arises from the fact that the robots can use only local information to implement their distributed control strategies without centralized coordination.

Intensive research efforts have been devoted to distributed formation control for multi-agent systems in the systems and control community [8, 10, 11]. A considerable number of studies have focused on consensus-based formation control, where the formation control problem is converted by a proper transformation into a state consensus problem. Specifically, the dynamics of the agents have been modeled as single integrators [12, 13], double integrators [14], and unicycles [15, 16, 17], and various constrained conditions have been considered, including input saturation [12], agents' locomotion constraints [18], finite-time control [19], and limited communication [20]. With the aid of limit-cycle oscillators, collision avoidance has been guaranteed when controlling a group of agents to form a circle around a prescribed target [21]. Using nonlinear bifurcation dynamics, including limit cycles, [22] proposed swarm control laws to realize formation configurations of large-scale swarms. These studies demonstrate the potential of limit-cycle oscillators for formation controller design, which greatly inspires the work in this paper.

The goal of this paper is to design a distributed controller that can guide a group of mobile agents with double-integrator dynamics to form any given general formation in a plane. The control objective comprises two specific sub-objectives. The first is target circling: each agent rotates around, or remains fixed relative to, a static/moving target as required, while keeping a desired distance to the target. The second is distribution adjustment: each agent maintains the desired spacing from its neighbors. It is worth emphasizing that general formations allow the distances between neighboring agents to differ from one another and the distances from the agents to the target to differ as well. In addition, the agents can only sense local information, namely the relative information of the target and of their two neighbors.

To realize the general formation, a limit-cycle-based design is developed in this paper. We propose a controller composed of two parts to deal with the two sub-objectives of target circling and distribution adjustment. The key idea is to first design a limit-cycle oscillator as the converging part, which makes each agent keep a desired distance to the static/moving target while rotating counterclockwise/clockwise around, or remaining static relative to, the target as required. A layout part is then added to the limit-cycle oscillator to make each agent further maintain the desired spacing from its two neighbors. Combining the two parts yields an integrated controller that solves the general formation problem. The proposed controller can be implemented by the agents in their Frenet-Serret frames, so that only local information is used and no global information is required.

The rest of the paper is organized as follows. In Section II, we formulate the general formation problem. In Section III, we design a distributed controller and provide theoretical analysis of its performance. Simulation results are given in Section IV. Finally, Section V concludes the paper.

II Problem formulation

We consider a group of n agents, labeled 1 to n, and a static/moving target, labeled 0, to be circled around in a plane (see Fig. 1(a)). The agents' initial positions are not required to be distinct from each other, but no agent occupies the same position as the target. For ease of expression, we label the agents based on their initial positions according to the following three rules: i) the labels are sorted first in ascending order in a counterclockwise manner around the target; ii) for agents lying on the same ray extending from the target, their labels are sorted in ascending order by the distance to the target; and iii) for agents occupying the same position, their labels are chosen randomly. We then consider the case where the agents' neighbor relationships are described by an undirected ring graph over the labels 1, …, n, in which consecutive labels (and the pair n, 1) are adjacent. In this way, each agent has exactly two neighbors, the ones immediately in front of and behind itself. We denote the set of agent i's two neighbors by N_i = {i^-, i^+}, where

i^- = i - 1 for i = 2, …, n, and i^- = n for i = 1,

and

i^+ = i + 1 for i = 1, …, n - 1, and i^+ = 1 for i = n.    (1)
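To make the labeling rules and the ring neighbor relation concrete, a minimal sketch in Python (not taken from the paper; the function and variable names are ours) sorts the agents counterclockwise around the target, breaks ties by distance to the target, and returns each relabeled agent's two ring neighbors.

```python
import numpy as np

def label_and_neighbors(positions, target):
    """Relabel agents counterclockwise around the target (ties broken by
    distance to the target) and return each new label's two ring neighbors.

    positions : (n, 2) array of initial agent positions
    target    : (2,) array, position of the target (label 0)
    """
    rel = positions - target                            # relative positions
    angles = np.mod(np.arctan2(rel[:, 1], rel[:, 0]), 2 * np.pi)
    dists = np.linalg.norm(rel, axis=1)
    # Rules i)-ii): ascending counterclockwise angle, then ascending distance
    # (rule iii): remaining ties are resolved arbitrarily by the sort).
    order = np.lexsort((dists, angles))
    n = len(order)
    # Undirected ring 1-2-...-n-1 (0-based indices here).
    neighbors = {i: ((i - 1) % n, (i + 1) % n) for i in range(n)}
    return order, neighbors

# Example: four agents around a target at the origin.
order, nbrs = label_and_neighbors(
    np.array([[1.0, 0.0], [0.0, 2.0], [-1.5, 0.5], [0.5, -1.0]]),
    np.array([0.0, 0.0]))
```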

Let p_i(t), v_i(t), and u_i(t) denote the position, velocity, and control input of agent i, respectively, all taking values in the plane. Each agent is described by the double-integrator dynamics model

\dot{p}_i(t) = v_i(t), \quad \dot{v}_i(t) = u_i(t), \quad i = 1, \ldots, n.    (2)

The dynamics of the static/moving target are described as follows

\dot{p}_0(t) = v_0(t), \quad \dot{v}_0(t) = a_0(t),    (3)

where p_0(t), v_0(t), and a_0(t) denote the position, velocity, and acceleration of the target, respectively.
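As a sanity check of models (2) and (3), the following sketch integrates the double-integrator agents and the target with a forward-Euler scheme; the time step, the zero control input, and the constant target velocity are illustrative assumptions.

```python
import numpy as np

def euler_step(p, v, u, dt):
    """One forward-Euler step of a double integrator: p' = v, v' = u."""
    return p + dt * v, v + dt * u

n, dt = 4, 0.01
p = np.random.rand(n, 2)                      # agent positions
v = np.zeros((n, 2))                          # agent velocities
p0, v0 = np.zeros(2), np.array([0.1, 0.0])    # target position and velocity
a0 = np.zeros(2)                              # target acceleration (assumed zero)

for _ in range(1000):
    u = np.zeros((n, 2))                      # placeholder for the control law (10)
    p, v = euler_step(p, v, u, dt)            # agent dynamics (2)
    p0, v0 = euler_step(p0, v0, a0, dt)       # target dynamics (3)
```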

(a) Initial states
(b) Frenet-Serret frame
Fig. 1: General formation in a plane. (a) The agents are initially located in a plane. (b) The proposed controller can be implemented in the Frenet-Serret frame of each agent i.

In this paper, the General Formation problem in a plane is formalized as designing local controllers for all agents, using only the relative information between each agent and the target and between each agent and its two neighbors, such that all the agents asymptotically form a desired formation with the static/moving target as a reference point. The general formation is required to rotate clockwise/counterclockwise around the target, or to remain static relative to the target, and to maintain a prescribed distribution pattern, without requiring that all the desired distances between neighboring agents be equal or that the desired distances between each agent and the target be equal.

To formulate the problem mathematically, the following variables are introduced. Let x_i(t) denote the relative position between agent i and target 0, measured by agent i at time t,

x_i(t) = p_i(t) - p_0(t), \quad i = 1, \ldots, n.    (4)

Denote by φ_i(t) the angle of the vector x_i(t) for agent i. The relative velocity between agent i and the target can be derived as

\dot{x}_i(t) = v_i(t) - v_0(t).    (5)

We further introduce, for each agent i, the angular distance from agent i to its neighbor i^+, which is formed by counterclockwise rotating the ray extending from the target to agent i until it reaches agent i^+. The angular distance from agent i^- to agent i is defined similarly.
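Assuming the angular distance is measured counterclockwise and wrapped to [0, 2π), a small helper (names are ours) might look as follows.

```python
import numpy as np

def angular_distance(phi_from, phi_to):
    """Counterclockwise angle swept from the ray target->agent at angle
    phi_from to the ray target->agent at angle phi_to, wrapped to [0, 2*pi)."""
    return np.mod(phi_to - phi_from, 2 * np.pi)

# From an agent at 350 degrees to one at 10 degrees the distance is 20 degrees.
assert np.isclose(angular_distance(np.deg2rad(350), np.deg2rad(10)),
                  np.deg2rad(20))
```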

Let d_i denote the desired angular spacing from agent i to agent i^+, and let ρ_i^* denote the desired distance from agent i to the target. Then the desired distribution pattern of the agents is determined by the two vectors

d = [d_1, d_2, …, d_n]^T    (6)

and

ρ^* = [ρ_1^*, ρ_2^*, …, ρ_n^*]^T.    (7)

Let ω^* denote each agent's desired angular velocity relative to the target. For ω^* > 0 (resp. ω^* < 0), the desired formation is required to rotate counterclockwise (resp. clockwise) around the target. For ω^* = 0, the desired formation is required to remain static relative to the target. Note that only the entries of d and ρ^* associated with agent i and its neighbors are available to each agent i. We say a prescribed general formation is admissible if d_i > 0 and ρ_i^* > 0 for every i, and the angular spacings close the ring, i.e., d_1 + d_2 + … + d_n = 2π.
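As an illustration only, and under the interpretation that admissibility amounts to positive desired radii together with positive angular spacings that sum to 2π (the precise conditions are not reproduced above), a prescribed pattern could be checked as follows.

```python
import numpy as np

def is_admissible(d, rho_star, tol=1e-9):
    """Check a prescribed pattern under the assumed conditions: positive
    angular spacings that sum to 2*pi and positive desired radii."""
    d, rho_star = np.asarray(d), np.asarray(rho_star)
    return bool(np.all(d > 0) and np.all(rho_star > 0)
                and abs(d.sum() - 2 * np.pi) < tol)

# Example: four agents with unequal spacings and unequal radii.
d = [np.pi / 2, np.pi / 3, 2 * np.pi / 3, np.pi / 2]
rho_star = [1.0, 1.5, 2.0, 1.2]
print(is_admissible(d, rho_star))   # True
```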

With the above preparation, we are ready to formulate the General Formation Problem of interest.

Definition 1 (General Formation Problem)

Given an admissible general formation characterized by d and ρ^* in a plane, with a desired angular velocity ω^* relative to a static/moving target 0, design distributed control laws u_i, i = 1, …, n, such that for any initial states, the solution to system (2) converges to some equilibrium point satisfying

(8)

and

(9)

where denotes the state at the equilibrium point in this paper.

III Main results

In this section, we propose a control law to solve the General Formation Problem and then give a theoretical analysis.

III-A Limit-cycle-based control design

The proposed control law takes the following form:

(10)

where

(11)

and are constants.

Note that the controller is designed in the form of a limit-cycle oscillator as the converging part, corresponding to the first sub-objective (target circling), while a layout part is introduced to deal with the second sub-objective (distribution adjustment).
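The explicit expressions (10) and (11) are not reproduced here, so the sketch below only illustrates the limit-cycle idea behind the converging part: a planar vector field whose trajectories spiral onto a circle of prescribed radius around the target and then circulate at a prescribed angular rate. It is a standard limit-cycle oscillator of the same flavor, not the paper's controller.

```python
import numpy as np

def limit_cycle_velocity(x, rho_star, omega_star, k=1.0):
    """Desired relative velocity at relative position x: a radial term that
    drives ||x|| to rho_star plus a tangential term rotating at rate
    omega_star. The gain k and the specific form are illustrative."""
    rho = np.linalg.norm(x)
    radial = -k * (rho - rho_star) * (x / rho)          # converge to the circle
    tangential = omega_star * np.array([-x[1], x[0]])   # circulate about the target
    return radial + tangential

# Trajectories approach the circle of radius rho_star and then circulate
# counterclockwise for omega_star > 0 (clockwise for omega_star < 0).
x = np.array([2.0, 0.5])
for _ in range(2000):
    x = x + 0.01 * limit_cycle_velocity(x, rho_star=1.0, omega_star=0.5)
print(np.linalg.norm(x))   # approximately 1.0
```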

Let (ρ_i, φ_i) denote the polar coordinates of x_i, where φ_i is the angle of the vector x_i. Then the system (2) under the control laws (10) can be represented in polar coordinates as

(12)

and

(13)

where and are given by (11).
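Since the polar-form equations (12) and (13) are not reproduced here, the following generic sketch merely shows how a planar relative state is converted into its polar coordinates and their rates, which is the conversion such a representation relies on; the names are ours.

```python
import numpy as np

def polar_state(x, xdot):
    """Convert a planar relative position x and its rate xdot into
    (rho, phi, rho_dot, phi_dot): distance, angle, and their time derivatives."""
    rho = np.linalg.norm(x)
    phi = np.arctan2(x[1], x[0])
    rho_dot = (x @ xdot) / rho                              # radial rate
    phi_dot = (x[0] * xdot[1] - x[1] * xdot[0]) / rho**2    # angular rate
    return rho, phi, rho_dot, phi_dot

# A point on the unit circle moving with unit angular rate:
print(polar_state(np.array([1.0, 0.0]), np.array([0.0, 1.0])))
# -> (1.0, 0.0, 0.0, 1.0)
```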

Now we have the overall closed-loop system in polar coordinates, with the states described by equations (12). It is worth emphasizing that the variables introduced here can be treated as additional states, which are used only for analysis purposes and are not known to the agents (see Fig. 1(b)).

Furthermore, for each agent i, we construct a moving frame, the Frenet-Serret frame, which is fixed on the agent with its origin at the agent's position and its x-axis coincident with the orientation of the vector. Agent i's Frenet-Serret frame is shown in Fig. 1(b). One can check that the proposed control laws (10) can be implemented by the agents in their Frenet-Serret frames without knowledge of the global coordinate frame.
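Implementation in a body-fixed frame only requires expressing relative measurements in the agent's own frame. The sketch below uses a heading angle as a stand-in for the orientation of the Frenet-Serret frame; this choice and the names are assumptions.

```python
import numpy as np

def to_body_frame(vec_global, heading):
    """Express a relative vector, measured in the global frame, in a
    body-fixed frame whose x-axis points along the agent's heading."""
    c, s = np.cos(heading), np.sin(heading)
    R = np.array([[c, s], [-s, c]])    # rotation: global frame -> body frame
    return R @ vec_global

# Relative position of the target as the agent would sense it on board.
x_rel_body = to_body_frame(np.array([1.0, 1.0]), heading=np.pi / 4)
print(x_rel_body)   # approximately [1.414, 0.0]: target straight ahead
```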

III-B Analysis of Equilibrium

We now analyze the equilibria of the n-agent system (2) under the control law (10). For this purpose, we consider both the closed-loop system (12) in polar coordinates and the dynamics of the additional states described by (13). The equilibrium points can then be calculated by solving

(14)

It is known from the definition of the angular distance that

(15)

Together with (13), one arrives at a subsystem with states

We first analyze the states at the equilibrium point of system (12).

Proposition 1

Any equilibrium point of the n-agent system (12) is also an equilibrium of the following system

(17)
Proof:

At any equilibrium point of system (12), one has or since . When , we have . When and , we have . When and , it follows that from their definitions, and , and thus . Now, one can conclude that always hold at any equilibrium point of system (12). It implies that . This completes the proof.

We further rewrite the system (17) into a compact form

(18)

where

and is given by (19).

(19)

Note that system (18), which contains only the variables introduced above, is helpful when calculating the equilibria of the n-agent system, especially the layout part. Next, we give some useful results on system (18) to facilitate the discussion of the equilibria of the n-agent system (2) under the control law (10).

Let . Then . For analysis purposes, we introduce a pair of variables

Then we have

(20)

where

Suppose and are eigenvalues of and , respectively.

Lemma 1 (Lemma 5 of [19])

It holds that
i) is diagonalizable and , ;

ii) is a single eigenvalue;

iii) When n is even, is an eigenvalue, while when n is odd, it is not.

In view of Lemma 1, without loss of generality, we now assume . Then we analyze the eigenvalues of .

Lemma 2

Matrix has exactly a zero eigenvalue of algebraic multiplicity and all the other eigenvalues have negative real parts.

Proof:

Let be an eigenvalue of the matrix . Then, one has . Note that

Hence,

(21)

From (21) and Lemma 1, it is easy to see that has a zero eigenvalue of algebraic multiplicity and all the other eigenvalues have negative real parts.
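The matrix in (19) is not reproduced above, so the numerical check below uses, as an assumption, a standard second-order consensus matrix built from the Laplacian of the undirected ring graph; it reports how many eigenvalues are numerically zero and whether the remaining eigenvalues have negative real parts.

```python
import numpy as np

def ring_laplacian(n):
    """Laplacian of the undirected ring graph on n nodes."""
    L = 2 * np.eye(n)
    for i in range(n):
        L[i, (i + 1) % n] -= 1
        L[i, (i - 1) % n] -= 1
    return L

n, gamma, tol = 6, 1.0, 1e-6
L = ring_laplacian(n)
# Assumed second-order consensus form (a stand-in for the matrix in (19)).
E = np.block([[np.zeros((n, n)), np.eye(n)],
              [-L, -gamma * L]])
eig = np.linalg.eigvals(E)
n_zero = int(np.sum(np.abs(eig) < tol))
others_stable = bool(np.all(eig[np.abs(eig) >= tol].real < 0))
print(n_zero, others_stable)
```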

Lemma 3

System (20) achieves consensus asymptotically and , and , as t goes to infinity, where is the non-negative left eigenvector of associated with the zero eigenvalue and .

Proof:

In view of Lemma 2, one can check that eigenvalue zero of has geometric multiplicity equal to one. Note that can be written in Jordan canonical form as

where can be chosen to be the right eigenvectors or generalized eigenvectors of , can be chosen to be the left eigenvectors or generalized eigenvectors of , and is the Jordan upper diagonal block matrix corresponding to non-zero eigenvalues.

Without loss of generality, choose , where and are -dimensional all-one and all-zero vectors, respectively. It can be verified that is a right eigenvector of associated with the eigenvalue . Let be the non-negative vector such that and . It can be verified that is a left eigenvector of associated with eigenvalue , where .

Noting that all eigenvalues of except a simple zero eigenvalue have negative real parts, we see that

which converges to as . Noting that

we see that , and as . As a result, we know that and as . That is, system (20) achieves consensus asymptotically.
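The role of the left eigenvector in Lemma 3 can be checked numerically on a simple surrogate: a first-order consensus system x' = -Lx on the ring graph (an assumption, since the paper's matrix is not shown). For the symmetric ring Laplacian the normalized non-negative left eigenvector is uniform, and the state converges to (w^T x(0)) times the all-one vector.

```python
import numpy as np
from scipy.linalg import expm

# Laplacian of the undirected ring graph on 4 nodes.
L = np.array([[ 2., -1.,  0., -1.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [-1.,  0., -1.,  2.]])
w = np.ones(4) / 4                      # normalized left eigenvector for eigenvalue 0
x0 = np.array([3.0, -1.0, 0.5, 2.0])
x_final = expm(-50.0 * L) @ x0          # long-horizon state of x' = -L x
print(np.allclose(x_final, (w @ x0) * np.ones(4)))   # True: consensus at w^T x(0)
```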

Lemma 4

System (18) achieves consensus asymptotically. Specifically, and as .

Proof:

From Lemma 2 and Lemma 3, one can see that system (20) achieves consensus asymptotically, which further implies that system (18) achieves consensus asymptotically. Moreover, one can check that there exists such that and . Since and hold at all times, we have

Thus, we have and for sufficiently large t.

With the above preparation, we are ready to calculate the equilibria of the n-agent system (2) under the control law (10) (i.e., the closed-loop system (12) in polar coordinates) by solving (14). All the equilibrium points can be classified into the following three cases:

  • Case I: ;

  • Case II: ;

  • Case III: and , where , .

Proposition 2 (Equilibrium Case I)

The equilibrium of the n-agent system (12) is (22) when and is (23) when , if it satisfies .

(22)
(23)
Proof:

In this case, we need to consider three subcases.

Subcase I-a: . From (14), one can have due to , thus , and holds. Together with the definition of and , one can check and thus and . It follows that , and thus . From (13), it holds that . From Proposition 1 and Lemma 4, one can have , . It follows that . Since , the equilibrium in Case I-a only exists when . From (14), one can have . It follows that . Together with , one can have . Moreover, since , i.e., , one can check that for and for .

To sum up, for Subcase I-a, an equilibrium (22) exists when .

Subcase I-b: . From (13), we get . Since , one can check that . It follows that . Together with the definition of , one can have . Thus, considering system (III-B), the equilibrium in this case satisfies

Thus, . It holds that from the definition. In view of Lemma 1, one can check that . Then we calculate by (11), and get . It follows . Since , the equilibrium in Case I-b only exists when . From (12), we have . It follows . Together with , one can check that .

To sum up, for Subcase I-b, an equilibrium (23) exists when .

Subcase I-c: and , where . Using an idea similar to the calculations in Subcases I-a and I-b, one can obtain

It follows that

where are constants. In this case, both -agent and -agent exist in the system. It implies that there exists at least one -agent (labeled as ) who has one or two -agent as its neighbor. One can check by (11) that, for such an agent , its is a function of . Thus is also a function of . Comparing , we arrive at a contradiction.

To sum up, for Subcase I-c, no equilibrium exists.

Proposition 3 (Equilibrium Case II)

The equilibrium of the n-agent system (12) is (24) for any , if it satisfies .

(24)
Proof:

It holds that , since . Combining with the definition of , we have . Then one can check (12) and (13) and derive that and thus . This completes the proof.

Proposition 4 (Equilibrium Case III)

The equilibria of the n-agent system (12) are (25) and (27) when , and are (26) and (27) when , if it satisfies and , where and .

(25)
(26)
(27)

where , , and are constants whose values depend on the initial states.

Proof:

For ease of expression, we denote the agent satisfying by -agent, the one satisfying by -agent, the one satisfying by -agent, and the one satisfying by -agent