Sparse Stabilization and Control of Alignment Models

Marco Caponigro¹, Massimo Fornasier², Benedetto Piccoli³, Emmanuel Trélat⁴

¹ Conservatoire National des Arts et Métiers, Équipe M2N, 292 rue Saint-Martin, 75003 Paris, France (marco.caponigro@cnam.fr)
² Technische Universität München, Fakultät für Mathematik, Boltzmannstrasse 3, D-85748 Garching bei München, Germany (massimo.fornasier@ma.tum.de)
³ Rutgers University, Department of Mathematics, Business Science Building Room 325, Camden, NJ 08102, USA (piccoli@camden.rutgers.edu)
⁴ Université Pierre et Marie Curie (Univ. Paris 6) and Institut Universitaire de France, CNRS UMR 7598, Laboratoire Jacques-Louis Lions, F-75005 Paris, France (emmanuel.trelat@upmc.fr)
Abstract

Starting with the seminal papers of Reynolds (1987), Vicsek et al. (1995), and Cucker–Smale (2007), there has been a flood of recent works on models of self-alignment and consensus dynamics. Self-organization has so far been the main driving concept of this research direction. However, the evidence that in practice self-organization does not necessarily occur (for instance, the achievement of unanimous consensus in government decisions) leads to the natural question of whether it is possible to externally influence the dynamics in order to promote the formation of certain desired patterns. Once this fundamental question is posed, one is also faced with the issue of defining the best way of obtaining the result, seeking the most “economical” way to achieve a certain outcome. Our paper precisely addresses the issue of finding the sparsest control strategy in order to lead the system optimally towards a given outcome, in this case the achievement of a state where the group will be able, by self-organization, to reach an alignment consensus. As a consequence we provide a mathematical justification of the general principle according to which “sparse is better”, in the sense that a policy maker, who is not allowed to predict future developments, should always consider it more favorable to intervene with stronger action on the fewest possible instantaneous optimal leaders rather than trying to control more agents with minor strength in order to achieve group consensus. We then establish local and global sparse controllability properties to consensus. Finally, we analyze the sparsity of solutions of the finite time optimal control problem where the minimization criterion is a combination of the distance from consensus and of the ℓ1-norm of the control. Such an optimization models the situation where the policy maker is actually allowed to observe future developments.
We show that the lacunarity of sparsity is related to the codimension of certain manifolds in the space of cotangent vectors.

Keywords: Cucker–Smale model, consensus emergence, ℓ1-norm minimization, optimal complexity, sparse stabilization, sparse optimal control.

MSC 2010: 34D45, 35B36, 49J15, 65K10, 93D15, 93B05

1 Introduction

1.1 Self-organization vs. organization via intervention

In recent years there has been a very fast growing interest in defining and analyzing mathematical models of multiple interacting agents in social dynamics. Usually individual based models, described by suitable dynamical systems, constitute the basis for developing continuum descriptions of the agent distribution, governed by suitable partial differential equations. There are many inspiring applications, such as animal behavior, where the coordinated movement of groups, such as birds (starlings, geese, etc.), fishes (tuna, capelin, etc.), insects (locusts, ants, bees, termites, etc.) or certain mammals (wildebeests, sheep, etc.) is considered, see, e.g., [2, 8, 21, 22, 54, 59, 60, 67, 74, 76] or the review chapter [11], and the numerous references therein. Models in microbiology, such as the Patlak-Keller-Segel model [45, 61], describing the chemotactical aggregation of cells and multicellular micro-organisms, inspired a very rich mathematical literature [40, 41, 63], see also the very recent work [4] and references therein. Human motion, including pedestrian and crowd modeling [24, 25, 48, 52], for instance in evacuation process simulations, has been a matter of intensive research, connecting also with new developments such as mean field games, see [46] and the overview in its Section 2. Certain aspects of human social behavior, as in language evolution [27, 29, 44] or even criminal activities [69], are also the subject of intensive study by means of dynamical systems and kinetic models. Moreover, relevant results appeared in the economic realm with the theoretical derivation of wealth distributions [31] and, again in connection with game theory, the description of formation of volatility in financial markets [47].
Besides applications where biological agents, animals and micro-(multi)cellular organisms, or humans are involved, more abstract models of interacting automatic units, for instance simple robots, are also of high practical interest [14, 43, 72, 49, 62, 68].
One of the leading concepts behind the modeling of multiagent interaction in the past few years has been self-organization [8, 54, 59, 60, 74], which, from a mathematical point of view, can be described as the formation of patterns, to which the systems tend naturally to be attracted. The fascinating mechanism to be revealed by such a modeling is how to connect the microscopic and usually binary rules or social forces of interaction between individuals with the eventual global behavior or group pattern, forming as a superposition in time of the different microscopic effects. Hence, one of the interesting issues of such socio-dynamical models is the global convergence to stable patterns or, as happens more often and more realistically, the instabilities and local convergence [63].
While the description of pattern formation can explain some relevant real-life behaviors, it is also of paramount interest how one may enforce and stabilize pattern formation in those situations where global and stable convergence cannot be ensured, especially in the presence of noise [80], or, vice versa, how one can prevent certain rare and dangerous patterns from forming, even though the system may suddenly tend stably towards them. The latter situations may refer, for instance, to the so-called “black swans”, a term usually referring to critical (financial or social) events [3, 73]. In all those situations where the independent behavior of the system, despite its natural tendencies, does not realize the desired result, the active intervention of an external policy maker is essential. This naturally raises the question of which optimal policy should be considered.
In information theory, the best possible way of representing data is usually the most economical for reliably or robustly storing and communicating data. One of the modern ways of describing an economical representation of data is their sparse representation with respect to an adapted dictionary [50, Chapter 1]. In this paper we shall translate these concepts to realize best policies in the stabilization and control of dynamical systems modeling multiagent interactions. Besides stabilization strategies in collective behavior already considered in the recent literature, see e.g. [66, 68], the conceptually closest work to our approach is perhaps the seminal paper [49], where externally driven “virtual leaders” are inserted into a collective motion dynamics in order to enforce a certain behavior. Nevertheless our modeling still differs significantly from this literature, because we allow ourselves to control the individuals of the group directly, externally, and instantaneously, with no need of introducing predetermined virtual leaders, and we shall specifically seek the most economical (sparsest) way of leading the group to a certain behavior. In particular, we will mathematically model sparse controls, designed to promote the minimal amount of intervention of an external policy maker, in order to enforce nevertheless the formation of certain interesting patterns. In other words we shall activate in time the minimal amount of parameters, potentially limited to certain admissible classes, which can provide a certain verifiable outcome of our system. The relationship between parameter choices and results will usually be highly nonlinear, especially for several known dynamical systems modeling social dynamics.
Were this relationship linear instead, then a rather well-established theory would predict how many degrees of freedom are minimally necessary to achieve the expected outcome, and, depending on certain spectral properties of the linear model, would also allow for efficient algorithms to compute them. This theory is known in mathematical signal processing under the name of compressed sensing, see the seminal works [9] and [30], see also the review chapter [34]. The major contribution of these papers was to realize that one can combine the power of convex optimization, in particular ℓ1-norm minimization, and spectral properties of random linear models in order to show optimal results on the ability of ℓ1-norm minimization to robustly recover sparsest solutions. Borrowing a leaf from compressed sensing, we will model sparse stabilization and control strategies by penalizing the class of vector-valued controls u = (u_1, …, u_N) by means of a mixed ℓ1–ℓ2 norm, i.e.,

\sum_{i=1}^{N} \|u_i\|,

where here ‖·‖ is the ℓ2-Euclidean norm on ℝ^d. This mixed norm has been used for instance in [33] as a joint sparsity constraint and it has the effect of optimally sparsifying multivariate vectors in compressed sensing problems [32]. The use of (scalar) ℓ1-norms to penalize controls dates back to the 60’s with the models of linear fuel consumption [23]. More recent work in dynamical systems [78] takes up ℓ1-minimization again, emphasizing its sparsifying power. Also in optimal control with partial differential equation constraints it has become rather popular to use ℓ1-minimization to enforce sparsity of controls, for instance in the modeling of optimal placement of actuators or sensors [12, 16, 17, 39, 64, 71, 79].
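As a small concrete illustration (our own, since the norm's symbols were partly lost in extraction, we assume the mixed norm is the sum over agents of the Euclidean norms of their d-dimensional control vectors, and the function name is ours):

```python
import math

def mixed_l1_l2_norm(u):
    # Mixed l1-l2 norm: sum over agents of the Euclidean (l2) norm of each
    # agent's d-dimensional control vector.  Penalizing this quantity drives
    # whole agent components u_i to zero at once (joint sparsity).
    return sum(math.sqrt(sum(c * c for c in ui)) for ui in u)

u = [[3.0, 4.0], [0.0, 0.0], [0.0, 5.0]]
print(mixed_l1_l2_norm(u))  # 5 + 0 + 5 = 10.0
```

Note how the middle agent contributes nothing: sparsity is counted agent by agent, not entry by entry, which is exactly the joint-sparsity effect described above.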
In contrast to the previously mentioned work, we will investigate in this paper optimally sparse stabilization and control to enforce pattern formation or, more precisely, convergence to attractors in dynamical systems modeling multiagent interaction. A simple, but still rather interesting and prototypical situation is given by the individual based particle system we are considering here as a particular case:

(1)    \dot{x}_i(t) = v_i(t), \qquad \dot{v}_i(t) = \frac{1}{N} \sum_{j=1}^{N} \frac{v_j(t) - v_i(t)}{\bigl(1 + \|x_j(t) - x_i(t)\|^2\bigr)^{\beta}},

for i = 1, …, N, where x_i ∈ ℝ^d and v_i ∈ ℝ^d are the state and consensus parameters respectively. Here one may want to imagine that the v_i's actually represent abstract quantities such as words of a communication language, opinions, invested capitals, preferences, but also more classical physical quantities such as the velocities in a collective motion dynamics. This model describes the emergence of consensus in a group of N interacting agents, described by 2d degrees of freedom each, trying to align their consensus parameters (also in terms of abstract consensus) with those of their social neighbors. One of the motivations of this model, proposed by Cucker and Smale, was in fact to describe the formation and evolution of languages [27, Section 6], although, due to its simplicity, it has eventually been related mainly to the description of the emergence of consensus in a group of moving agents, for instance flocking in a swarm of birds [28]. One of the interesting features of this simple system is its rather complete analytical description in terms of its ability to converge to attractors according to the parameter β ≥ 0, which rules the communication rate between far distant agents. For β ≤ 1/2, corresponding to a still rather strong long (social) distance interaction, for every initial condition the system will converge to a consensus pattern, characterized by the fact that all the parameters v_i will tend, as t → +∞, to the mean v̄ = (1/N) ∑_i v_i, which is actually an invariant of the dynamics. For β > 1/2, the emergence of consensus happens only under certain configurations of state variables and consensus parameters, i.e., when the group is sufficiently close to its state center of mass or when the consensus parameters are sufficiently close to their mean. Nothing instead can be said a priori when at the same time one has β > 1/2 and the mentioned conditions on the initial data are not fulfilled. Actually one can easily construct counterexamples to the formation of consensus, see our Example 1 below.
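For intuition, the prototypical system (1) can be simulated directly. The following sketch is our own illustration (forward-Euler discretization, scalar states, and the rate a(r) = 1/(1 + r²)^β assumed from (1)); it shows the velocity spread of three agents contracting in the strong-interaction regime:

```python
def simulate(beta, x0, v0, dt=0.01, steps=20000):
    # Forward-Euler integration of the alignment system
    #   dx_i/dt = v_i,
    #   dv_i/dt = (1/N) sum_j a(|x_j - x_i|) (v_j - v_i),
    # with the (assumed) communication rate a(r) = 1 / (1 + r^2)**beta.
    x, v = list(x0), list(v0)
    N = len(x)
    for _ in range(steps):
        dv = [sum((1.0 + (x[j] - x[i]) ** 2) ** -beta * (v[j] - v[i])
                  for j in range(N)) / N for i in range(N)]
        x = [x[i] + dt * v[i] for i in range(N)]
        v = [v[i] + dt * dv[i] for i in range(N)]
    return x, v

# beta <= 1/2: the long-distance interaction is strong enough that the
# velocity spread contracts for every initial condition.
_, v = simulate(0.3, [0.0, 1.0, 2.0], [-1.0, 0.0, 1.0])
print(max(v) - min(v))  # far below the initial spread of 2
```

The mean of the v_i is conserved along the run, in agreement with the invariance of v̄ noted above.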
In this situation, it is interesting to consider external control strategies which will facilitate the formation of consensus, which is precisely the scope of this work.

1.2 The general Cucker–Smale model and introduction to its control

Let us introduce the more general Cucker–Smale model under consideration in this article.

The model.

We consider a system of N interacting agents. The state of each agent is described by a pair (x_i, v_i) of vectors of the Euclidean space ℝ^d, where x_i represents the main state of an agent and v_i its consensus parameter. The main state of the group of N agents is given by the N-uple x = (x_1, …, x_N). Similarly for the consensus parameters v = (v_1, …, v_N). The space of main states and the space of consensus parameters is (ℝ^d)^N for both, the product N-times of the Euclidean space ℝ^d endowed with the induced inner product structure.

The time evolution of the state (x_i(t), v_i(t)) of the i-th agent is governed by the equations

(2)    \dot{x}_i(t) = v_i(t), \qquad \dot{v}_i(t) = \frac{1}{N} \sum_{j=1}^{N} a\bigl(\|x_j(t) - x_i(t)\|\bigr)\,\bigl(v_j(t) - v_i(t)\bigr),

for every i = 1, …, N, where a is a nonincreasing positive function. Here, ‖·‖ denotes again the ℓ2-Euclidean norm in ℝ^d. The meaning of the second equation is that each agent adjusts its consensus parameter with those of the other agents in relation to a weighted average of the differences. The influence of the j-th agent on the dynamics of the i-th one is a function of the (social) distance between the two agents. Note that the mean consensus parameter v̄ = (1/N) ∑_{i=1}^N v_i is an invariant of the dynamics, hence it is constant in time.

As mentioned previously, an example of a system of the form (2) is the influential model of Cucker and Smale [27], in which the function a is of the form

(3)    a\bigl(\|x_j - x_i\|\bigr) = \frac{K}{\bigl(\sigma^2 + \|x_j - x_i\|^2\bigr)^{\beta}},

where K > 0, σ > 0, and β ≥ 0 are constants accounting for the social properties of the group of agents.
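In code, the rate (3) is a one-liner (a sketch of ours; the constant values below are placeholders, not values taken from the paper):

```python
def rate(r, K=1.0, sigma=1.0, beta=0.45):
    # Cucker-Smale communication rate (3): a(r) = K / (sigma^2 + r^2)^beta.
    # Positive, and nonincreasing in the social distance r.
    return K / (sigma ** 2 + r ** 2) ** beta

print(rate(0.0))                           # 1.0 with these constants
print(rate(10.0) < rate(1.0) < rate(0.0))  # True: influence decays with distance
```

The exponent β governs how fast the mutual influence decays for far distant agents, which is precisely the quantity ruling consensus emergence in the discussion above.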

In matrix notation, System (2) can be written as

(4)    \dot{x}(t) = v(t), \qquad \dot{v}(t) = -L_{x(t)} v(t),

where L_x is the Laplacian⁵ of the N × N matrix \bigl(a(\|x_j - x_i\|)/N\bigr)_{i,j=1}^{N} and depends on x. The Laplacian L_x is an N × N matrix acting on (ℝ^d)^N, and satisfies L_x(v, …, v) = 0 for every v ∈ ℝ^d. Notice that the operator L_x is always positive semidefinite.

⁵ Given a real N × N matrix A = (a_{ij}) and v ∈ (ℝ^d)^N we denote by Av the action of A on (ℝ^d)^N given by mapping v to (a_{i1} v_1 + ⋯ + a_{iN} v_N)_{i=1,…,N}. Given a nonnegative symmetric N × N matrix A = (a_{ij}), the Laplacian L of A is defined by L = D − A, with D = diag(d_1, …, d_N) and d_i = a_{i1} + ⋯ + a_{iN}.
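The footnote's construction can be checked numerically. The sketch below (ours, using plain scalar states for simplicity) builds L = D − A for a nonnegative symmetric A and verifies the two properties claimed in the text: L annihilates constant vectors, and L is positive semidefinite:

```python
def laplacian(A):
    # Graph Laplacian L = D - A, with D = diag(row sums of A), as in the footnote.
    n = len(A)
    return [[(sum(A[i]) if i == j else 0.0) - A[i][j] for j in range(n)]
            for i in range(n)]

def quad_form(L, v):
    # <L v, v> for scalar states v in R^n.
    n = len(L)
    return sum(L[i][j] * v[j] * v[i] for i in range(n) for j in range(n))

A = [[0.0, 0.5, 0.2],
     [0.5, 0.0, 0.8],
     [0.2, 0.8, 0.0]]   # nonnegative and symmetric, as required
L = laplacian(A)
print(all(abs(sum(row)) < 1e-12 for row in L))  # True: L kills constant vectors
print(quad_form(L, [1.0, -2.0, 3.0]) >= 0.0)    # True: positive semidefinite
```

The nonnegativity of the quadratic form follows from the identity ⟨Lv, v⟩ = (1/2) ∑_{i,j} a_{ij}(v_i − v_j)², which is why the alignment dynamics (4) dissipates disagreement.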

Consensus.

For every v ∈ (ℝ^d)^N, we define the mean vector v̄ = (1/N) ∑_{i=1}^N v_i and the symmetric bilinear form B on (ℝ^d)^N × (ℝ^d)^N by

B(u, v) = \frac{1}{2N^2} \sum_{i,j=1}^{N} \langle u_i - u_j, v_i - v_j \rangle,

where ⟨·,·⟩ denotes the scalar product in ℝ^d. We set

(5)    V_f = \{ (v_1, …, v_N) ∈ (ℝ^d)^N : v_1 = ⋯ = v_N \},
(6)    V_⊥ = \{ (v_1, …, v_N) ∈ (ℝ^d)^N : v_1 + ⋯ + v_N = 0 \}.

These are two orthogonal subspaces of (ℝ^d)^N. Every v ∈ (ℝ^d)^N can be written as v = v_f + v_⊥ with v_f ∈ V_f and v_⊥ ∈ V_⊥.

Note that B restricted to V_⊥ × V_⊥ coincides, up to the factor 1/N, with the scalar product on (ℝ^d)^N. Moreover B(v, v) = B(v_⊥, v_⊥). Indeed B(v_f, w) = 0 for every w ∈ (ℝ^d)^N.

Given a solution (x(t), v(t)) of (2) we define the quantities

X(t) := B(x(t), x(t))

and

V(t) := B(v(t), v(t)).

Consensus is a state in which all agents have the same consensus parameter.

Definition 1 (Consensus point).

A steady configuration (x̃, ṽ) ∈ (ℝ^d)^N × V_f of System (2) is called a consensus point, in the sense that the dynamics originating from (x̃, ṽ) is simply given by the rigid translation x(t) = x̃ + t ṽ. We call (ℝ^d)^N × V_f the consensus manifold.

Definition 2 (Consensus).

We say that a solution (x(t), v(t)) of System (2) tends to consensus if the consensus parameter vectors tend to the mean v̄ = (1/N) ∑_i v_i, namely if lim_{t→∞} v_i(t) = v̄ for every i = 1, …, N.

Remark 1.

Because of uniqueness, a solution of (2) cannot reach consensus within finite time, unless the initial datum is already a consensus point. The consensus manifold is invariant for (2).

Remark 2.

The following definitions of consensus are equivalent:

  • lim_{t→∞} v_i(t) = v̄ for every i = 1, …, N;

  • lim_{t→∞} v_⊥(t) = 0;

  • lim_{t→∞} V(t) = 0.

The following lemma, whose proof is given in the Appendix, shows that V(t) is actually a Lyapunov functional for the dynamics of (2).

Lemma 1.

For every t ≥ 0,

\frac{d}{dt} V(t) \le -2\, a\bigl(\sqrt{2N\, X(t)}\bigr)\, V(t).

In particular, if X(t) is uniformly bounded in time, then V(t) → 0 as t → +∞.
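The Lyapunov property can be illustrated numerically. In this sketch (ours), we take for V the scaled velocity disagreement V(v) = (1/(2N²)) ∑_{i,j} ‖v_i − v_j‖², the quadratic form suggested by the definitions above (the exact normalization is our assumption, and so is the rate a(r) = 1/(1 + r²)^β), and check that it is nonincreasing along a discretized trajectory of (2):

```python
def V(v):
    # Scaled disagreement (assumed normalization):
    #   V(v) = (1/(2 N^2)) * sum_{i,j} |v_i - v_j|^2
    N = len(v)
    return sum((v[i] - v[j]) ** 2 for i in range(N) for j in range(N)) / (2.0 * N * N)

def step(x, v, beta=0.6, dt=0.01):
    # One Euler step of the uncontrolled dynamics (2) with a(r) = 1/(1+r^2)**beta.
    N = len(x)
    dv = [sum((1.0 + (x[j] - x[i]) ** 2) ** -beta * (v[j] - v[i])
              for j in range(N)) / N for i in range(N)]
    return ([x[i] + dt * v[i] for i in range(N)],
            [v[i] + dt * dv[i] for i in range(N)])

x, v = [0.0, 1.0, 3.0], [1.0, -1.0, 0.5]
vals = []
for _ in range(1000):
    vals.append(V(v))
    x, v = step(x, v)
print(all(b <= a + 1e-12 for a, b in zip(vals, vals[1:])))  # True: V is nonincreasing
```

Note that monotonicity holds here even though β > 1/2: the Lyapunov inequality itself does not require the strong-interaction regime, only the boundedness of X(t) does.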

For multi-agent systems of the form (2), sufficient conditions for consensus emergence are a particular case of the main result of [37] and are summarized in the following proposition, whose proof is recalled in the Appendix for self-containedness and the reader's convenience.

Proposition 1.

Let (x(0), v(0)) ∈ (ℝ^d)^N × (ℝ^d)^N be such that X(0) and V(0) satisfy

(7)

Then the solution of (2) with initial data (x(0), v(0)) tends to consensus.

The meaning of (7) is that as soon as V(0) and X(0) are sufficiently small, the system tends to consensus. In other words, if the disagreement of the consensus parameters is sufficiently small and the initial main states are sufficiently close, then the system tends to consensus.

Definition 3 (Consensus Region).

We call the set of points (x(0), v(0)) ∈ (ℝ^d)^N × (ℝ^d)^N satisfying (7) the consensus region.

The consensus region represents an estimate on the basin of attraction of the consensus manifold. This estimate is, in some simple cases, sharp, as shown in Example 1 below.

Although consensus forms a rigidly translating stable pattern for the system and represents in some sense a “convenient” choice for the group, there are initial conditions for which the system does not tend to consensus, as the following example shows.

Example 1 (Cucker–Smale system: two agents on the line).

Consider the Cucker–Smale system (2)-(3) in the case of two agents moving on ℝ, with positions x_1(t), x_2(t) and velocities v_1(t), v_2(t). Assume for simplicity that the constants in (3) are normalized so that the resulting interaction rate is a(r) = 1/(1 + r²). Let x(t) = x_1(t) − x_2(t) be the relative main state and v(t) = v_1(t) − v_2(t) be the relative consensus parameter. Equation (2) then reads

\dot{x}(t) = v(t), \qquad \dot{v}(t) = -\frac{v(t)}{1 + x(t)^2},
with initial conditions x(0) = x_0 ≥ 0 and v(0) = v_0 > 0. The solution of this system can be found by direct integration: from dv/dx = −1/(1 + x²) we have

v(t) = v_0 + \arctan x_0 - \arctan x(t).
If the initial conditions satisfy v_0 + arctan x_0 < π/2 then, as a consequence of Remark 1, the relative main state x(t) is bounded uniformly by tan(v_0 + arctan x_0): otherwise we would have v(t̄) = 0 for a finite t̄. The boundedness of x(t) fulfills the sufficient condition on the states in Lemma 1 for consensus. If v_0 + arctan x_0 = π/2 then the system tends to consensus as well, since v(t) = π/2 − arctan x(t): if x(t) were unbounded then arctan x(t) → π/2, hence v(t) → 0, and necessarily we would converge to consensus; if x(t) were bounded then again by Lemma 1 we would converge to consensus.
On the other hand, whenever v_0 + arctan x_0 > π/2, so that v_0 + arctan x_0 ≥ π/2 + ε for some ε > 0, the consensus parameter v(t) remains bounded away from 0 for every time, since

v(t) = v_0 + \arctan x_0 - \arctan x(t) \ge \varepsilon > 0

for every t > 0. In other words, the system does not tend to consensus.
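The dichotomy of Example 1 is easy to reproduce numerically. The sketch below (ours) integrates the relative dynamics with a(r) = 1/(1 + r²), for which the integral of a over [0, ∞) is π/2: starting from x₀ = 0, a relative velocity v₀ above π/2 stays bounded away from zero (no consensus), while v₀ below π/2 decays to zero:

```python
def relative_dynamics(x0, v0, beta=1.0, dt=0.001, steps=200000):
    # Two agents on the line in relative coordinates x = x1 - x2, v = v1 - v2:
    #   dx/dt = v,  dv/dt = -a(|x|) v,  a(r) = 1 / (1 + r^2)**beta.
    # Along trajectories dv/dx = -a(|x|), so v(t) = v0 - int_{x0}^{x(t)} a.
    x, v = x0, v0
    for _ in range(steps):
        a = (1.0 + x * x) ** -beta
        x, v = x + dt * v, v - dt * a * v
    return x, v

_, v_big = relative_dynamics(0.0, 2.0)    # v0 = 2 > pi/2: no consensus
_, v_small = relative_dynamics(0.0, 1.0)  # v0 = 1 < pi/2: consensus
print(v_big > 0.3, abs(v_small) < 0.05)   # True True
```

In the first run v(t) approaches the strictly positive limit v₀ − π/2 ≈ 0.43, exactly the obstruction to consensus computed above.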

Let us mention that by now there are several extensions of Cucker–Smale models of consensus addressing the presence of noise, collision-avoidance forces, non-symmetric communication, informed agents, and tolerance to faults. For a state-of-the-art review of the current developments on such generalizations we refer to [77, Section 4.4.1]. We mention in particular the recent work of Cucker and Dong [26], which modifies the original model by considering cohesion and avoidance forces. We shall return to this model in Section 6, where we deal with extensions of our work.

Control of the Cucker–Smale model.

When consensus in a group of agents is not achieved by self-organization of the group, as in Example 1 in the case of initial conditions outside the consensus region, it is natural to ask whether it is possible to induce the group to reach it by means of an external action. In this sense we introduce the notion of organization via intervention. We consider the system (2) of N interacting agents, in which the dynamics of every agent is additionally subject to the action of an external field. Admissible controls, accounting for the external field, are measurable functions u = (u_1, …, u_N) : [0, +∞) → (ℝ^d)^N satisfying the mixed ℓ1–ℓ2-norm constraint

(8)    \sum_{i=1}^{N} \|u_i(t)\| \le M,

for every t ≥ 0, for a given positive constant M. The time evolution of the state is governed by

(9)    \dot{x}_i(t) = v_i(t), \qquad \dot{v}_i(t) = \frac{1}{N} \sum_{j=1}^{N} a\bigl(\|x_j(t) - x_i(t)\|\bigr)\,\bigl(v_j(t) - v_i(t)\bigr) + u_i(t),

for i = 1, …, N and almost every t > 0. In matrix notation, the above system can be written as

(10)    \dot{x}(t) = v(t), \qquad \dot{v}(t) = -L_{x(t)} v(t) + u(t),

where L_x is the Laplacian defined in Section 1.2.

Our aim is then to find admissible controls steering the system to the consensus region in finite time. In particular, we shall address the question of quantifying the minimal amount of intervention one external policy maker should use on the system in order to lead it to consensus, and we formulate a practical strategy to approach optimal interventions. Let us mention another conceptually similar approach to our consensus control, i.e., mean-field games, introduced by Lasry and Lions [47], and independently in the optimal control community under the name of Nash Certainty Equivalence (NCE) within the work [42], later greatly popularized within consensus problems, for instance in [57, 58]. The first fundamental difference with our work is that in (mean-field) games each individual agent is competing freely with the others towards the optimization of its individual goal, as for instance in the financial market, and the emphasis is on the characterization of Nash equilibria. In our model, by contrast, we are concerned with the optimization of the intervention of an external policy maker or coordinator endowed with rather limited resources, helping the system to form a pattern when self-organization does not realize it autonomously, as is the case, e.g., in modeling economic policies and government strategies. Let us stress that in our model we are particularly interested in sparsifying the control towards most effective results, and that such an economical concept does not appear anywhere in the literature dealing with large particle systems.

Our first approach towards sparse control will be a greedy one, in the sense that we will design a feedback control which will optimize instantaneously three fundamental quantities:

  (i) it has the minimal amount of components active at each time;

  (ii) it has the minimal amount of switchings, equispaced in time, within the finite time interval needed to reach the consensus region;

  (iii) it maximizes at each switching time the rate of decay of the Lyapunov functional measuring the distance to the consensus region.

This approach models the situation where the external policy maker is actually not allowed to predict future developments and has to make optimal decisions based on instantaneous configurations. Note that a componentwise sparse feedback control as in (i) is more realistic and convenient in practice than a control simultaneously active on more or even all agents, because it requires acting on at most one agent at every instant of time. The adaptive and instantaneous rule of choice of the controls is based on a variational criterion involving ℓ1-norm penalization terms. Since however such componentwise sparse controls are likely to be chattering (see, for instance, Example 2), i.e., requiring an infinite number of changes of the active control component over a bounded interval of time, we will also have to pay attention to deriving control strategies with property (ii), which are sparse in time as well, and which we therefore call time sparse controls.
Our second approach is based on a finite time optimal control problem where the minimization criterion is a combination of the distance from consensus and of the ℓ1-norm of the control. Such an optimization models the situation where the policy maker is actually allowed to make deterministic future predictions of the development. We show that componentwise sparse solutions are again likely to be the most favorable.

The rest of the paper is organized as follows: Section 2 is devoted to establishing sparse feedback controls stabilizing System (9) to consensus. We investigate the construction of componentwise and time sparse controls. In Section 3 we discuss in which sense the proposed sparse feedback controls actually have optimality properties, and we address a general notion of complexity for consensus problems. In Section 4 we combine the results of the previous sections with a local controllability result near the consensus manifold in order to prove global sparse controllability of Cucker–Smale consensus models. We study the sparsity features of solutions of a finite time optimal control of Cucker–Smale consensus models in Section 5, where we establish that the lacunarity of their sparsity is related to the codimension of certain manifolds. The paper is concluded by an Appendix collecting some of the more technical results of the paper.

2 Sparse Feedback Control of the Cucker–Smale Model

2.1 A first result of stabilization

Note first that if the integral of the communication rate a diverges, then every pair (x(0), v(0)) satisfies (7); in other words, the interaction between the agents is so strong that the system will reach consensus no matter what the initial conditions are. In this section we are interested in the case where consensus does not arise autonomously; therefore, throughout this section we will assume that the integral above is finite.

As already clarified in Lemma 1, the quantity V(t) is a Lyapunov functional for the uncontrolled System (2). For the controlled System (9) this quantity becomes dependent on the choice of the control, which can nevertheless be properly optimized. As a first relevant and instructive observation, we prove that an appropriate choice of the control law can always stabilize the system to consensus.

Proposition 2.

For every M > 0 and every initial condition (x(0), v(0)) ∈ (ℝ^d)^N × (ℝ^d)^N, the feedback control defined pointwise in time by u_i(t) = −α(v_i(t) − v̄) for every i, with α > 0 small enough (depending on M and on the initial condition), satisfies the constraint (8) for every t ≥ 0 and stabilizes the system (9) to consensus in infinite time.

Proof.

Consider the solution of (9) with initial data (x(0), v(0)) associated with the feedback control u_i = −α(v_i − v̄), with α > 0. Then, by non-negativity of the quadratic form v ↦ ⟨L_x v, v⟩,

\frac{d}{dt} V(t) \le -2\alpha V(t).

Therefore V(t) ≤ e^{−2αt} V(0), and V(t) tends to 0 exponentially fast as t → +∞. Moreover

and thus the control is admissible. ∎

In other words, the system (8)-(9) is semi-globally feedback stabilizable. Nevertheless this result has a merely theoretical value: the feedback stabilizer is not convenient for practical purposes, since it requires acting at every instant of time on all the agents in order to steer the system to consensus, which may require a large amount of instantaneous communications. In what follows we address the design of more economical and practical feedback controls which can be both componentwise and time sparse.
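The dense stabilizing feedback of Proposition 2 is straightforward to test numerically. In this sketch (ours; the gain alpha is a placeholder chosen small rather than from the proposition's admissibility bound, and the ℓ1 constraint (8) is not enforced), every agent is steered towards the running mean:

```python
def stabilize(x0, v0, alpha=0.5, beta=1.0, dt=0.01, steps=5000):
    # Controlled system (9) with the dense feedback u_i = -alpha (v_i - vbar)
    # of Proposition 2, integrated with forward Euler, scalar states,
    # and the (assumed) rate a(r) = 1 / (1 + r^2)**beta.
    x, v = list(x0), list(v0)
    N = len(x)
    for _ in range(steps):
        vbar = sum(v) / N
        dv = [sum((1.0 + (x[j] - x[i]) ** 2) ** -beta * (v[j] - v[i])
                  for j in range(N)) / N
              - alpha * (v[i] - vbar) for i in range(N)]
        x = [x[i] + dt * v[i] for i in range(N)]
        v = [v[i] + dt * dv[i] for i in range(N)]
    return v

v = stabilize([0.0, 5.0, 10.0], [-2.0, 0.0, 2.0])
print(max(v) - min(v) < 1e-3)  # True: exponential convergence to consensus
```

The drawback highlighted in the text is visible in the code: at every time step the controller must read and actuate all N agents simultaneously.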

2.2 Componentwise sparse feedback stabilization

We introduce here a variational principle leading to a componentwise sparse stabilizing feedback law.

Definition 4.

For every M > 0 and every (x, v) ∈ (ℝ^d)^N × (ℝ^d)^N, let U(x, v) be defined as the set of solutions of the variational problem

(11)

where

(12)

The meaning of (11) is the following. Minimizing the first term of the functional in (11) means that, at every instant of time, the control is of the form u_i = −λ_i (v_i − v̄), for some sequence of nonnegative scalars λ_i. Hence it acts as an additional force which pulls the particles towards having the same mean consensus parameter, as highlighted by the proof of Proposition 2. Imposing additional ℓ1-norm constraints has the function of enforcing sparsity, i.e., most of the λ_i will turn out to be zero, as we will clarify in more detail below. Eventually, the threshold in (12) is chosen in such a way that, when the control switches off, the criterion (7) is fulfilled. Let us stress that the choice of ℓ1-norm minimization has the relevant advantage, with respect to other potentially sparsifying constraints, of reducing the variational principle (11) to a very simple separable scalar optimization.

The componentwise sparsity feature of feedback controls u(x, v) ∈ U(x, v) is analyzed in the next remark, where we make the set U(x, v) explicit according to the value of (x, v) in a partition of the space (ℝ^d)^N × (ℝ^d)^N.

Remark 3.

First of all, it is easy to see that, for every and every element there exist nonnegative real numbers ’s such that

(13)

where .
The componentwise sparsity of u(x, v) depends on the possible values that the λ_i may take as a function of (x, v). Actually, the space (ℝ^d)^N × (ℝ^d)^N can be partitioned into the union of the four disjoint subsets , and defined by

  • ,

  • ,

  • and there exists a unique such that for every ,

  • and there exist and such that and for every .

The subsets and are open, and the complement has Lebesgue measure zero. Moreover, for every , the set is single-valued and its value is a sparse vector with at most one nonzero component. More precisely, one has and for some unique .
If then a control in may not be sparse: indeed, in these cases the set consists of all such that for every , where the ’s are nonnegative real numbers such that whenever , and whenever .
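A maximally sparse selection in the spirit of this section can be sketched as follows (our own illustration: it acts with full strength M on the first agent whose deviation from the mean is maximal, and the switch-off threshold of (12) is replaced by a placeholder value):

```python
import math

def sparse_control(v, M=1.0, threshold=0.0):
    # Componentwise sparse feedback: full strength M on the smallest-index
    # agent with maximal deviation from the mean, pushing it toward the mean;
    # all other controls are zero.  `threshold` stands in for the switch-off
    # criterion of (12), which is not reproduced here.
    N, d = len(v), len(v[0])
    vbar = [sum(vi[k] for vi in v) / N for k in range(d)]
    dev = [math.sqrt(sum((vi[k] - vbar[k]) ** 2 for k in range(d))) for vi in v]
    u = [[0.0] * d for _ in range(N)]
    i = dev.index(max(dev))             # first index attaining the maximum
    if dev[i] > threshold:
        u[i] = [-M * (v[i][k] - vbar[k]) / dev[i] for k in range(d)]
    return u

u = sparse_control([[2.0, 0.0], [0.0, 0.0], [-1.0, 0.0]])
print(u)  # only agent 0, the farthest from the mean, is controlled
```

At each evaluation the control has at most one nonzero component and saturates the budget (8) on it, which is the "stronger action on the fewest agents" policy advocated in the introduction.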

By showing that the choice of feedback controls as in Definition 4 optimizes the Lyapunov functional V(t), we can again prove convergence to consensus.

Theorem 1.

For every , and , set , where is as in Definition 4. Then for every initial pair , the differential inclusion

(14)

with initial condition (x(0), v(0)) is well-posed and its solutions converge to consensus as t tends to +∞.

Remark 4.

By definition of the feedback controls u(x, v) ∈ U(x, v), and from Remark 3, it follows that, along a closed-loop trajectory, as soon as V(t) is small enough with respect to the threshold, the trajectory has entered the consensus region defined by (7). From this point in time no further action is needed to stabilize the system, since Proposition 1 ensures that the system is then naturally stable to consensus. Notice that this sub-region is strictly contained in the consensus region and, moreover, every trajectory of the uncontrolled system (2) originating in it remains in it (see Lemma 5 in the Appendix). Hence when the system enters the region in which there is no longer need to control, the control switches off automatically and is set to 0 forever. It follows from the proof of Theorem 1 below that the time needed to steer the system to the consensus region is not larger than a finite bound depending on the threshold defined by (12) and on the initial data.

Proof of Theorem 1.

First of all we prove that (14) is well-posed, by using general existence results of the theory of differential inclusions (see e.g. [1, Theorem 2.1.3]). For that we address the following steps:

  • since the set U(x, v) is non-empty, closed, and convex for every (x, v) (see Remark 3), we show that the map (x, v) ↦ U(x, v) is upper semi-continuous; this will imply local existence of solutions of (14);

  • we will then argue the global extension of these solutions for every t ≥ 0 by the classical theory of ODEs, since the right-hand side of (14) has at most linear growth.

Let us address the upper semi-continuity of the map (x, v) ↦ U(x, v), that is, for every (x, v) and for every ε > 0 there exists δ > 0 such that

U(B_δ(x, v)) ⊆ B_ε(U(x, v)),

where B_δ(x, v) and B_ε(U(x, v)) are the balls centered at (x, v) and U(x, v) with radius δ and ε respectively. As the composition of upper semi-continuous functions is upper semi-continuous (see [1, Proposition 1.1.1]), it is sufficient to prove that is upper semi-continuous. For every , is single-valued and continuous. If then there exist such that and for every . If then hence, in particular, it is upper semi-continuous. With a similar argument it is possible to prove that is upper semi-continuous for every . This establishes the well-posedness of (14).

Now, let (x(t), v(t)) be a solution of (14). Let T denote the minimal time needed to reach consensus, that is, the smallest time at which the trajectory enters the consensus region, with the convention that T = +∞ if the system does not reach consensus. For almost every t < T we then have . Thus the trajectory is in or and there exist indices in such that and for every . Hence if then

where . Then,

(15)

For clarity, notice that in the last inequality we used the maximality of for which

or

and eventually

Let and . It follows from Lemma 6 in the Appendix, or simply by direct integration, that

(16)

and

Note that the time needed to steer the system into the consensus region is not larger than

(17)

and in particular it is finite. Indeed, for almost every we have

and Proposition 1, in particular (7), implies that the system is in the consensus region. Finally, for t large enough, by Lemma 1 we infer that V(t) tends to 0. ∎

Within the set U(x, v) of Definition 4, which in general does not contain only sparse solutions, there are actually selections with maximal sparsity.

Definition 5.

We select the componentwise sparse feedback control according to the following criterion:

  • if , then ,

  • if let be the smallest index such that

    then

The control can be, in general, highly irregular in time. If we consider for instance a system in which there are two agents with maximal disagreement, then the control switches at every instant from one agent to the other, and it is everywhere discontinuous. The natural definition of solution associated with the feedback control is therefore the notion of sampling solution, as introduced in [15].

Definition 6 (Sampling solution).

Let , be continuous and locally Lipschitz in uniformly on compact subsets of .

as the continuous (actually piecewise smooth) function solving recursively for

using as initial value the endpoint of the solution on the preceding interval, and starting with the given initial datum. We call the discretization step the sampling time.

This notion of solution is of particular interest for applications in which a minimal interval of time between two switchings of the control law is demanded. As the sampling time becomes smaller and smaller, the sampling solution of (9) associated with our componentwise sparse control as defined in Definition 5 approaches uniformly a Filippov solution of (14), i.e., an absolutely continuous function satisfying (14) for almost every t. In particular we shall prove in Section 2.4 the following statement.
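Definition 6 amounts to a sample-and-hold loop. The sketch below (ours: scalar states, Euler integration, and a simple single-agent feedback in the spirit of Definition 5; all names and parameter values are placeholders) holds the feedback constant over each sampling interval, which rules out switching faster than the sampling time:

```python
def one_agent_feedback(v, M=1.0):
    # Full strength M on the (first) agent farthest from the mean.
    vbar = sum(v) / len(v)
    dev = [abs(vi - vbar) for vi in v]
    i = dev.index(max(dev))
    u = [0.0] * len(v)
    if dev[i] > 1e-9:
        u[i] = -M if v[i] > vbar else M   # -M * sign(v_i - vbar)
    return u

def sampling_solution(x0, v0, feedback, tau=0.05, T=10.0, dt=0.01, beta=1.0):
    # Sample-and-hold closed loop (Definition 6): the feedback is evaluated
    # only at the sampling times k*tau and held constant in between.
    x, v = list(x0), list(v0)
    N, hold = len(x0), max(1, round(tau / dt))
    u = feedback(v)
    for n in range(int(T / dt)):
        if n % hold == 0:
            u = feedback(v)              # new sampling interval: re-evaluate
        dv = [sum((1.0 + (x[j] - x[i]) ** 2) ** -beta * (v[j] - v[i])
                  for j in range(N)) / N + u[i] for i in range(N)]
        x = [x[i] + dt * v[i] for i in range(N)]
        v = [v[i] + dt * dv[i] for i in range(N)]
    return v

v = sampling_solution([0.0, 1.0, 2.0], [-3.0, 0.0, 3.0], one_agent_feedback)
print(max(v) - min(v))  # disagreement shrinks to a small chattering band
```

The residual oscillation is of the order of the control strength times the sampling time: shrinking the sampling time recovers, in the limit, a Filippov solution of (14) as stated above.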

Theorem 2.

Let u be the componentwise sparse control defined in Definition 5. For every initial datum, let be the sampling solution of (9) associated with u. Every closure point of the sequence of trajectories, as the sampling time tends to zero, is a Filippov solution of (14).

Let us stress that, as a byproduct of our analysis, we shall eventually construct practical feedback controls which are both componentwise and time sparse.

2.3 Time sparse feedback stabilization

Theorem 1 gives the existence of a feedback control whose behavior may be, in principle, very complicated and which may be nonsparse. In this section we are going to exploit the variational principle (11) to give an explicit construction of a piecewise constant and componentwise sparse control steering the system to consensus. The idea is to take a selection of the feedback in U(x, v) which has at most one nonzero component for every (x, v), as in Definition 5, and then sample it to avoid chattering phenomena (see, e.g., [81]).

Theorem 3.

Fix M > 0 and consider the control law given by Definition 5. Then for every initial condition there exists a sampling time small enough such that, for all smaller sampling times, the sampling solution of (9) associated with this control law and initial pair reaches the consensus region in finite time.

Remark 5.

The maximal sampling time depends on the number of agents N, the ℓ1-norm bound M on the control, the initial conditions, and the communication rate function a. The precise bounding condition (18) is given in the proof below. Moreover, as in Remark 4, the sampled control is switched off as soon as the sampled trajectory enters the region where control is no longer needed. In particular the system reaches the consensus region defined by (7) within finite time. The control is then set to zero in a time that is not larger than