# Distributed Convex Optimization for Continuous-Time Dynamics with Time-Varying Cost Functions

###### Abstract

In this paper, a time-varying distributed convex optimization problem is studied for continuous-time multi-agent systems. The objective is to minimize the sum of local time-varying cost functions, each of which is known to only an individual agent, through local interaction. Here the optimal point is time varying and traces an optimal trajectory. Control algorithms are designed for the cases of single-integrator and double-integrator dynamics. In both cases, a centralized approach is first introduced to solve the optimization problem. Then this problem is solved in a distributed manner and a discontinuous algorithm based on the signum function is proposed in each case. In the case of single-integrator (respectively, double-integrator) dynamics, each agent relies only on its own position and the relative positions (respectively, positions and velocities) between itself and its neighbors. A gain adaptation scheme is introduced in both algorithms to eliminate certain global information requirements. To relax the restrictive assumption imposed on feasible cost functions, an estimator-based algorithm using the signum function is proposed, where each agent uses dynamic average tracking as a tool to estimate the centralized control input. As a trade-off, the estimator-based algorithm necessitates communication between neighbors. Then in the case of double-integrator dynamics, the proposed algorithms are further extended. Two continuous algorithms based on, respectively, a time-varying and a fixed boundary layer are proposed as continuous approximations of the signum function. To account for inter-agent collision among physical agents, a distributed convex optimization problem with swarm tracking behavior is introduced for both single-integrator and double-integrator dynamics. It is shown that the center of the agents tracks the optimal trajectory, the connectivity of the agents is maintained, and inter-agent collision is avoided.
Finally, numerical examples are included for illustration.

## I Introduction

The distributed optimization problem has attracted significant attention recently. It arises in many applications of multi-agent systems, where agents cooperate in order to accomplish various tasks as a team in a distributed and optimal fashion. We are interested in a class of distributed convex optimization problems, where the goal is to minimize the sum of local cost functions, each of which is known to only an individual agent.

The incremental subgradient algorithm is introduced as one of the earlier approaches addressing this problem [1, 2]. In this algorithm, an estimate of the optimal point is passed through the network while each agent makes a small adjustment to it. Recently, significant results based on the combination of consensus and subgradient algorithms have been published [3, 4, 5]. For example, this combination is used in [4] for solving coupled optimization problems with a fixed undirected graph. A projected subgradient algorithm is proposed in [5], where each agent is required to lie in its own convex set. It is shown that all agents can reach an optimal point in the intersection of all agents’ convex sets even for a time-varying communication graph with doubly stochastic edge weight matrices.

However, all the aforementioned works are based on discrete-time algorithms. Recently, new research has been conducted on distributed optimization problems for multi-agent systems with continuous-time dynamics. Such a scheme has applications in motion coordination of multi-agent systems. For example, multiple physical vehicles modelled by continuous-time dynamics might need to rendezvous at a team-optimal location. In [6], a generalized class of zero-gradient-sum controllers is introduced for twice differentiable strongly convex functions under an undirected graph. In [7], a continuous-time version of [5] for directed and undirected graphs is studied, where it is assumed that each agent is aware of the convex optimal solution set of its own cost function and that the intersection of all these sets is nonempty. Article [8] derives an explicit expression for the convergence rate and ultimate error bounds of a continuous-time distributed optimization algorithm. In [9], a general approach is given to address the problem of distributed convex optimization with equality and inequality constraints. A proportional-integral algorithm is introduced in [10, 11, 12], where [11] considers strongly connected weight-balanced directed graphs and [12] extends these results using discrete-time communication updates. A distributed optimization problem is studied in [13] with adaptivity and finite-time convergence properties.

In continuous-time optimization problems, the agents are usually assumed to have single-integrator dynamics. However, a broad class of vehicles requires double-integrator dynamic models. In addition, having time-invariant cost functions is a common assumption in the literature. However, in many applications the local cost functions are time varying, reflecting the fact that the optimal point could change over time and hence trace a trajectory. There are only a few works in the literature addressing the distributed optimization problem with time-varying cost functions [14, 15, 16]. In those works, the agents converge to the optimal trajectory with bounded errors. For example, the economic dispatch problem for a network of power generating units is studied in [14], where it is proved that the algorithm is robust to slowly time-varying loads. In particular, it is shown that for time-varying loads with bounded first and second derivatives the optimization error remains bounded. In [15], a distributed time-varying stochastic optimization problem is considered, where it is assumed that the cost functions are strongly convex with Lipschitz continuous gradients. It is proved that under a persistent excitation assumption, a bounded error in expectation is achieved asymptotically. In [16], a distributed discrete-time algorithm based on the alternating direction method of multipliers (ADMM) is introduced to optimize a time-varying cost function. It is proved that for strongly convex cost functions with Lipschitz continuous gradients, if the primal optimal solutions drift slowly enough with time, the primal and dual variables remain close to their optimal values.

Furthermore, in all the articles on distributed optimization mentioned above, the agents eventually approach a common optimal point, while in some applications it is desirable to achieve swarm behavior. The goal of flocking or swarming with a leader is for a group of agents to track a leader with only local interaction while maintaining connectivity and avoiding inter-agent collision [17, 18, 19, 20]. Swarm tracking algorithms are studied in [18] and [19], where it is assumed that the leader is a neighbor of all followers and has, respectively, a constant and a time-varying velocity. In [20], swarm tracking algorithms via a variable-structure approach are introduced, where the leader is a neighbor of only a subset of the followers. In the aforementioned studies, the leader plans the trajectory for the team and the agents are not directly assigned to complete a task cooperatively. In [21], the agents are assigned a task to estimate a stationary field while exhibiting cohesive motions. Although optimizing a team criterion while performing swarm behavior is a well-motivated task in many multi-agent applications, it has not been addressed in the literature.

The introduced framework, distributed continuous-time time-varying optimization, is of great significance in motion coordination. Here, multiple agents cooperatively achieve motion coordination while optimizing a time-varying team objective function with only local information and interaction. For example, multiple spacecraft might need to dock at a moving location distributively with only local information and interaction such that the total team performance is optimized. Multiple agents moving in a formation or swarm with local information and interaction might need to cooperatively figure out what optimal trajectory the virtual leader or center of the team should follow, and that knowledge would help the individual agents specify their motions. Furthermore, there is a significant need for distributed optimization in various applications such as economic dispatch, internet congestion control, and home automation with smart electrical devices. While the studies in the aforementioned applications would be simplified by assuming that the changing rate of the cost functions or the constraints is small, treating them as invariant in each time interval, it might be more realistic and relevant to explicitly take into account the time-varying nature of the cost functions or constraints. As a result, distributed continuous-time optimization algorithms with time-varying cost functions or constraints might serve as continuous-time solvers to figure out the optimal trajectory in these applications.

In this paper, we face several challenges: 1) Having time-varying cost functions, which generally changes the problem from finding a fixed optimal point to tracking the optimal trajectory. 2) Solving the problem in a distributed manner using only local information and local interaction. 3) Solving the problem for continuous-time single-integrator and double-integrator dynamics, where in the latter case there is only direct control on agents’ accelerations. 4) In our algorithms, the signum function is employed to compensate for the effect of the inconsistent internal time-varying optimization signals among the agents so that the agents can reach consensus. As the signum function might cause chattering in some applications, it is replaced with continuous approximations in some algorithms, but additional challenges in analysis result from the replacement. 5) Providing analysis of optimization error bounds in scenarios where the agents’ states cannot reach consensus. 6) The coexistence of the optimization objective and the inherent nonlinearity of the swarm tracking behavior. Our preliminary attempts at solving the distributed convex optimization problem with time-varying cost functions have been presented in [22, 23].

The remainder of this paper is organized as follows: In Section II, the notation and preliminaries used throughout this paper are introduced. In Section III, the case of single-integrator dynamics is studied. In Subsection III-A, a centralized approach is introduced. Then, in Subsections III-B and III-C, two discontinuous algorithms are proposed to solve the problem in a distributed manner. In Section IV, the case of double-integrator dynamics is studied. In Subsection IV-A, a centralized algorithm is introduced. Then in Subsections IV-B and IV-C two discontinuous algorithms are defined to solve the problem in a distributed manner. Subsections IV-D and IV-E are devoted to extending the proposed discontinuous control algorithms. In the discontinuous algorithms, the signum function is used, but it might cause chattering in some applications. Two continuous algorithms are proposed to avoid the chattering effect, where a time-varying and a time-invariant approximation of the signum function are employed in Subsections IV-D and IV-E, respectively. In Section V, the distributed convex optimization problem with swarm tracking behavior is studied, where two algorithms for single-integrator and double-integrator dynamics are designed in Subsections V-A and V-B, respectively. Finally in Section VI, numerical examples are given for illustration.

## II Notation and Preliminaries

The following notation is adopted throughout this paper. $\mathbb{R}^+$ denotes the set of positive real numbers. The cardinality of a set $S$ is denoted by $|S|$. $\mathcal{I}$ denotes the index set $\{1,\ldots,n\}$. The transpose of a matrix $A$ and a vector $x$ are denoted by $A^\top$ and $x^\top$, respectively. $\|x\|_p$ denotes the $p$-norm of the vector $x$. We define $\operatorname{sgn}(x) \triangleq [\operatorname{sgn}(x_1),\ldots,\operatorname{sgn}(x_m)]^\top$, where $x = [x_1,\ldots,x_m]^\top$ and $\operatorname{sgn}(\cdot)$ is the signum function. Let $\mathbf{1}_n$ and $\mathbf{0}_n$ denote the $n \times 1$ column vectors of ones and zeros, respectively. $I_n$ denotes the $n \times n$ identity matrix. For matrices $A$ and $B$, the Kronecker product is denoted by $A \otimes B$. The gradient and Hessian of a function $f$ are denoted by $\nabla f$ and $H$, respectively. The matrix inequalities $A > B$ and $A \geq B$ mean that $A - B$ is positive definite and positive semidefinite, respectively. Let $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ denote, respectively, the smallest and the largest eigenvalue of the matrix $A$.

Let a triplet $\mathcal{G} \triangleq (\mathcal{V}, \mathcal{E}, \mathcal{A})$ be an undirected graph, where $\mathcal{V} \triangleq \{1,\ldots,n\}$ is the node set, $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is the edge set, and $\mathcal{A} \triangleq [a_{ij}] \in \mathbb{R}^{n \times n}$ is the adjacency matrix. An edge between agents $i$ and $j$, denoted by $(i,j)$, means that they can obtain information from each other. In an undirected graph, the edges $(i,j)$ and $(j,i)$ are equivalent. We assume $(i,i) \notin \mathcal{E}$. The adjacency matrix is defined as $a_{ij} = 1$ if $(i,j) \in \mathcal{E}$ and $a_{ij} = 0$ otherwise. The set of neighbors of agent $i$ is denoted by $N_i \triangleq \{j \in \mathcal{V} : (i,j) \in \mathcal{E}\}$. A sequence of edges of the form $(i_1,i_2), (i_2,i_3), \ldots$, where $i_k \in \mathcal{V}$, is a path. The graph $\mathcal{G}$ is connected if there is a path from every node to every other node. By arbitrarily assigning an orientation for the edges in $\mathcal{G}$, let $D \triangleq [d_{ij}] \in \mathbb{R}^{n \times |\mathcal{E}|}$ be the incidence matrix associated with $\mathcal{G}$, where $d_{ij} = -1$ if the edge $e_j$ leaves node $i$, $d_{ij} = 1$ if it enters node $i$, and $d_{ij} = 0$ otherwise. Let the Laplacian matrix $L \triangleq [l_{ij}] \in \mathbb{R}^{n \times n}$ associated with the graph $\mathcal{G}$ be defined as $l_{ii} \triangleq \sum_{j \neq i} a_{ij}$ and $l_{ij} \triangleq -a_{ij}$ for $i \neq j$. Note that $L = DD^\top$. The Laplacian matrix $L$ is symmetric positive semidefinite. The undirected graph $\mathcal{G}$ is connected if and only if $L$ has a simple zero eigenvalue with the corresponding eigenvector $\mathbf{1}_n$ and all other eigenvalues are positive [24]. When the graph $\mathcal{G}$ is connected, we order the eigenvalues of $L$ as $0 = \lambda_1 < \lambda_2 \leq \cdots \leq \lambda_n$. In particular, $\lambda_2$ is the second smallest eigenvalue of the Laplacian matrix $L$. The above notation can also be adopted for time-varying graphs, where $\mathcal{G}(t)$, $\mathcal{A}(t)$, $D(t)$ and $L(t)$ are, respectively, the undirected graph, the adjacency matrix, the incidence matrix and the Laplacian matrix at time $t$. For the time-varying graph $\mathcal{G}(t)$, $\lambda_2(t)$ is a function of $t$. As long as $\mathcal{G}(t)$ is connected, $\lambda_2(t)$ is uniformly lower bounded away from zero because there is only a finite number of possible Laplacian matrices associated with $\mathcal{G}(t)$.
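As a concrete check of these graph facts, the following sketch builds the adjacency, Laplacian, and an arbitrarily oriented incidence matrix for a hypothetical 4-node path graph (an illustration, not an example from the paper), and verifies $L = DD^\top$ and the eigenvalue characterization of connectivity:

```python
import numpy as np

# Hypothetical 4-node path graph 1-2-3-4 (0-indexed here).
edges = [(0, 1), (1, 2), (2, 3)]
n = 4

# Adjacency matrix: a_ij = 1 if (i, j) is an edge, 0 otherwise.
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1

# Laplacian: l_ii = sum_j a_ij, and l_ij = -a_ij for i != j.
L = np.diag(A.sum(axis=1)) - A

# Incidence matrix with an arbitrary orientation per edge.
D = np.zeros((n, len(edges)))
for k, (i, j) in enumerate(edges):
    D[i, k], D[j, k] = 1, -1

assert np.allclose(L, D @ D.T)       # L = D D^T

eigs = np.sort(np.linalg.eigvalsh(L))
assert abs(eigs[0]) < 1e-12          # simple zero eigenvalue
assert eigs[1] > 0                   # lambda_2 > 0 since the path is connected
print(round(eigs[1], 4))             # → 0.5858, i.e., 2 - sqrt(2) for this path
```

The orientation chosen for $D$ is immaterial: flipping any column's sign leaves $DD^\top$ unchanged.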

###### Lemma II.1

[25] The second smallest eigenvalue $\lambda_2$ of the Laplacian matrix $L$ associated with the undirected connected graph $\mathcal{G}$ satisfies $\lambda_2 = \min_{x \neq \mathbf{0}_n,\, \mathbf{1}_n^\top x = 0} \frac{x^\top L x}{x^\top x}$; in particular, $x^\top L x \geq \lambda_2 x^\top x$ for any $x$ with $\mathbf{1}_n^\top x = 0$.

###### Lemma II.2

Suppose that the function $f(x)$ is continuously differentiable and convex in $x$. Then $x^*$ is a global minimizer of $f$ if and only if $\nabla f(x^*) = 0$.

###### Lemma II.3

[28] The symmetric real matrix $\begin{bmatrix} A & B \\ B^\top & C \end{bmatrix}$ is positive definite if and only if one of the following conditions holds: (i) $A > 0$ and $C - B^\top A^{-1} B > 0$; or (ii) $C > 0$ and $A - B C^{-1} B^\top > 0$.

## III Time-Varying Convex Optimization For Single-Integrator Dynamics

Consider a multi-agent system consisting of $n$ physical agents with an interaction topology described by the undirected graph $\mathcal{G}$. It is common to adopt single-integrator or double-integrator models. Here, suppose that the agents satisfy the continuous-time single-integrator dynamics

$$\dot{x}_i(t) = u_i(t), \quad i = 1, \ldots, n, \tag{1}$$

where $x_i(t) \in \mathbb{R}^m$ is the position, and $u_i(t) \in \mathbb{R}^m$ is the control input of agent $i$. Note that $x_i(t)$ and $u_i(t)$ are functions of time. Later, for ease of notation, we will write them as $x_i$ and $u_i$. A time-varying local cost function $f_i(t, x_i)$ is assigned to agent $i$, which is known to only agent $i$. The team cost function is denoted by $\sum_{i=1}^n f_i(t, x)$ and assumed to be convex. Note that here only $\sum_{i=1}^n f_i(t, x)$ is required to be convex but not necessarily each $f_i(t, x)$. Our objective is to design $u_i$ for (1) using only local information and local interaction with neighbors such that all agents track the optimal state $x^*(t)$, where $x^*(t)$ is the minimizer of the time-varying convex optimization problem

$$\min_{x \in \mathbb{R}^m} \; \sum_{i=1}^n f_i(t, x). \tag{2}$$

###### Assumption III.1

There exists a continuous $x^*(t)$ that minimizes the team cost function $\sum_{i=1}^n f_i(t, x)$.

Because the inverse of the Hessian will be used in our algorithm, we need one of the following assumptions to guarantee its existence.

###### Assumption III.2

The function $\sum_{i=1}^n f_i(t, x)$ is twice continuously differentiable with respect to $x$, with invertible Hessian $\sum_{i=1}^n H_i(t, x)$.

###### Assumption III.3

Each function $f_i(t, x)$ is twice continuously differentiable with respect to $x$, with invertible Hessian $H_i(t, x)$.

### III-A Centralized Time-Varying Convex Optimization

As a first step in this subsection, we focus on the time-varying convex optimization problem of

$$\min_{x \in \mathbb{R}^m} f(t, x), \tag{3}$$

where $f(t, x)$ is convex in $x$, for single-integrator dynamics

$$\dot{x}(t) = u(t), \tag{4}$$

where $x(t) \in \mathbb{R}^m$ and $u(t) \in \mathbb{R}^m$ are the system’s state and control input, respectively. Next, an algorithm adapted from [29] will be proposed to solve the problem defined by (3) for the system (4). The control input is proposed for (4) as

$$u = -H^{-1}(t, x)\Big(\alpha \nabla f(t, x) + \frac{\partial}{\partial t}\nabla f(t, x)\Big), \tag{5}$$

where $\alpha$ is a positive coefficient; $\nabla f(t, x)$ and $H(t, x)$ are, respectively, the first and the second derivative of the cost function with respect to $x$, namely, the gradient and the Hessian.

###### Theorem III.4

Suppose that Assumption III.1 holds and the cost function $f(t, x)$ in (3) is twice continuously differentiable in $x$ with invertible Hessian $H(t, x)$. For the system (4) with the control input (5), $x(t)$ converges to the optimal trajectory that minimizes $f(t, x)$.

Proof: Define the positive-definite Lyapunov function candidate $V = \frac{1}{2}\nabla f(t, x)^\top \nabla f(t, x)$. The derivative of $\nabla f$ along the system (4) with the control input (5) is $\frac{d}{dt}\nabla f = H u + \frac{\partial}{\partial t}\nabla f = -\alpha \nabla f$. Therefore, $\dot{V} = -\alpha \nabla f^\top \nabla f < 0$ for $\nabla f \neq \mathbf{0}_m$. This guarantees that $\nabla f(t, x)$ will asymptotically converge to zero as $t \to \infty$. Then by using Lemma II.2 and under Assumption III.1, it is easy to see that $x(t)$ converges to $x^*(t)$, and $f(t, x)$ will be minimized.

###### Remark III.5

There exist other choices for the control input instead of the one proposed in (5). For example, $u = -\alpha \nabla f(t, x) - H^{-1}(t, x)\frac{\partial}{\partial t}\nabla f(t, x)$ might be used. In this alternative control input, it can be seen that for a time-invariant cost function, $u = -\alpha \nabla f(x)$. Hence we will have the well-known gradient descent algorithm. For a time-invariant cost function, the proposed algorithm (5) will become a Newton algorithm, which is generally much faster than the gradient descent algorithm.
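To illustrate the mechanics of the centralized controller (5), which combines a Newton-like correction with a feedforward prediction term, the following sketch simulates the closed loop under an assumed quadratic cost $f(t,x) = \frac{1}{2}\|x - c(t)\|^2$ whose minimizer $c(t)$ moves on a circle; the gain $\alpha$, the horizon, and the cost itself are illustrative choices, and forward-Euler integration stands in for the continuous dynamics (4):

```python
import numpy as np

alpha, dt, T = 5.0, 1e-3, 20.0

def c(t):        # moving minimizer of the assumed cost f(t,x) = 0.5*||x - c(t)||^2
    return np.array([np.sin(t), np.cos(t)])

def c_dot(t):
    return np.array([np.cos(t), -np.sin(t)])

x = np.array([3.0, -2.0])
for k in range(int(T / dt)):
    t = k * dt
    grad = x - c(t)            # gradient of f w.r.t. x
    H = np.eye(2)              # Hessian of f w.r.t. x
    dgrad_dt = -c_dot(t)       # partial derivative of the gradient w.r.t. t
    u = -np.linalg.inv(H) @ (alpha * grad + dgrad_dt)   # control input (5)
    x = x + dt * u             # Euler step of the single-integrator dynamics (4)

err = np.linalg.norm(x - c(T))
print(err)   # small residual: the gradient-flow error decays like exp(-alpha*t)
```

Without the prediction term $-H^{-1}\frac{\partial}{\partial t}\nabla f$, the same simulation exhibits a persistent lag of order $\|\dot{c}\|/\alpha$ behind the moving minimizer.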

The results from Theorem III.4 can be extended to minimize the convex team cost function $\sum_{i=1}^n f_i(t, x)$. If Assumptions III.1 and III.2 hold, with the control input

$$u = -\Big(\sum_{i=1}^n H_i(t, x)\Big)^{-1}\Big(\alpha \sum_{i=1}^n \nabla f_i(t, x) + \sum_{i=1}^n \frac{\partial}{\partial t}\nabla f_i(t, x)\Big) \tag{6}$$

for (4), the function $\sum_{i=1}^n f_i(t, x)$ is minimized. Unfortunately, (6) is a centralized solution for agents with single-integrator dynamics, relying on the knowledge of all $f_i(t, x)$. In Subsections III-B and III-C, (6) will be exploited to propose two algorithms for solving the time-varying convex optimization problem for single-integrator dynamics in a distributed manner.

### III-B Distributed Time-Varying Convex Optimization Using Neighbors’ Positions

In this subsection, we focus on solving the distributed time-varying convex optimization problem (2) for agents with single-integrator dynamics (1). Each agent has access to only its own position and the relative positions between itself and its neighbors. In some applications, the relative positions can be obtained by using only agents’ local sensing capabilities, which might in turn eliminate the need for communication between agents. The problem defined in (2) is equivalent to

$$\min_{x_1, \ldots, x_n} \; \sum_{i=1}^n f_i(t, x_i), \quad \text{subject to } x_i = x_j, \;\; \forall i, j \in \mathcal{I}. \tag{7}$$

Intuitively, the problem is decomposed into a consensus problem and a minimization problem on the team cost function $\sum_{i=1}^n f_i(t, x_i)$. Here the goal is that the states converge to the optimal trajectory $x^*(t)$, i.e.,

$$\lim_{t \to \infty} \|x_i(t) - x^*(t)\| = 0, \quad \forall i \in \mathcal{I}. \tag{8}$$

The control input is proposed for (1) as

$$u_i = -\beta(t)\sum_{j \in N_i} \operatorname{sgn}(x_i - x_j) + \phi_i, \qquad \phi_i \triangleq -H_i^{-1}(t, x_i)\Big(\alpha \nabla f_i(t, x_i) + \frac{\partial}{\partial t}\nabla f_i(t, x_i)\Big), \tag{9}$$

where $\phi_i$ is an internal signal, $\beta(t)$ is a varying gain with $\beta(t) > 0$, and $\operatorname{sgn}(\cdot)$ is the signum function defined componentwise. Note that $\phi_i$ depends on only agent $i$’s position. Here (9) is a discontinuous controller. It is worth mentioning that unlike continuous or smooth systems, the equilibrium concept of setting the right-hand side equal to zero to find the equilibrium point might not be valid for discontinuous systems. Let $x \triangleq [x_1^\top, \ldots, x_n^\top]^\top$ and $\phi \triangleq [\phi_1^\top, \ldots, \phi_n^\top]^\top$ denote, respectively, the aggregated states and the aggregated internal signals of the agents. We also define $\bar{x} \triangleq \frac{1}{n}\sum_{i=1}^n x_i$. Define agent $i$’s consensus error as $e_i \triangleq x_i - \bar{x}$. Define the consensus error vector $e \triangleq [e_1^\top, \ldots, e_n^\top]^\top = \big[(I_n - \frac{1}{n}\mathbf{1}_n \mathbf{1}_n^\top) \otimes I_m\big] x$. Note that $I_n - \frac{1}{n}\mathbf{1}_n \mathbf{1}_n^\top$ has one simple zero eigenvalue with $\mathbf{1}_n$ as its right eigenvector and has $1$ as its other eigenvalue with the multiplicity $n-1$. Then it is easy to see that $e = \mathbf{0}_{nm}$ if and only if $x_1 = \cdots = x_n$.
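A minimal simulation sketch of the controller just described — a local internal optimization signal plus a signum consensus term on relative positions — is given below. The quadratic local costs (with identical Hessians $I_m$), the path graph, and a constant gain $\beta$ are illustrative assumptions rather than the paper's exact setup; with these costs the team minimizer is the average of the moving points $c_i(t)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, alpha, beta, dt, T = 4, 2, 1.0, 6.0, 1e-3, 30.0
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # assumed path graph

offsets = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])

def c(i, t):      # minimizer of the assumed local cost f_i(t,x) = 0.5*||x - c_i(t)||^2
    return offsets[i] + np.array([np.sin(t), np.cos(t)])

def c_dot(i, t):
    return np.array([np.cos(t), -np.sin(t)])

x = rng.uniform(-2, 2, (n, m))
for k in range(int(T / dt)):
    t = k * dt
    u = np.zeros((n, m))
    for i in range(n):
        grad = x[i] - c(i, t)                 # gradient of f_i; Hessian H_i = I
        phi = -(alpha * grad - c_dot(i, t))   # internal signal -H_i^{-1}(a*grad + d/dt grad)
        cons = sum(np.sign(x[i] - x[j]) for j in neighbors[i])
        u[i] = phi - beta * cons              # signum consensus + local optimization
    x = x + dt * u

xstar = np.mean([c(i, T) for i in range(n)], axis=0)   # team minimizer: average of c_i
spread = max(np.linalg.norm(x[i] - x.mean(axis=0)) for i in range(n))
err = np.linalg.norm(x.mean(axis=0) - xstar)
print(spread, err)
```

Because the signum terms are antisymmetric in $(i, j)$, they cancel in the centroid dynamics, so the agents' centroid follows the centralized flow while the signum term forces the agents together; the final spread is a small chattering band set by the Euler step.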

###### Remark III.6

With the signum function in the proposed algorithms in this paper, the right-hand sides of the closed-loop systems are discontinuous. Thus, the solution should be investigated in terms of differential inclusions by using nonsmooth analysis [30, 31]. However, since the signum function is measurable and locally essentially bounded, the Filippov solutions of the closed-loop dynamics always exist. Also the Lyapunov function candidates adopted in the proofs hereafter are continuously differentiable. Therefore, the set-valued Lie derivative of them is a singleton at the discontinuous points and the proofs still hold. To avoid symbol redundancy, we do not use the differential inclusions in the proofs. Furthermore, Filippov solutions are absolutely continuous curves [30], which means that the agents’ states are continuous functions.

The remainder of this subsection is devoted to the verification of the algorithm (9). In Proposition 1, we will show that the agents reach consensus using (9). Then this result will be used in Theorem III.9 to prove that the agents minimize the team cost function as $t \to \infty$.

###### Definition III.7

Defining , a new Laplacian matrix is introduced, where and for . Since , the matrix is symmetric. Similar to the definition of , is the incidence matrix associated with , where if the edge leaves node , if it enters node , and otherwise. Thus, can be given by .

###### Assumption III.8

With defined in (III-B), there exists a positive constant such that and .

###### Proposition 1

Suppose that the graph $\mathcal{G}$ is connected and Assumption III.8 holds. Using the algorithm (9) for the system (1), the agents’ positions reach consensus asymptotically.

Proof: Using Definition III.7, the closed-loop system (1) with the control input (9) can be recast into a compact form as

(10) |

where and are defined in Section II and Definition III.7, respectively. We can rewrite (10) as

(11) |

where we have used the fact that . Define the Lyapunov function candidate

where is to be selected. The time derivative of along (11) can be obtained as

(12) | ||||

where the last inequality holds under Assumption III.8. Because is connected, we have

Selecting such that we have

(13) | ||||

where in the last inequality the fact that and Lemma II.1 have been used. Therefore, having and , we can conclude that . By integrating both sides of (13), we can see that . Now, applying Barbalat’s Lemma [32], we obtain that will converge to zero asymptotically, and hence the agents’ positions reach consensus, i.e., as .

###### Theorem III.9

Suppose that the graph $\mathcal{G}$ is connected, Assumptions III.1, III.3 and III.8 hold, and the agents’ Hessians are identical. Using the algorithm (9) for the system (1), all agents track the optimal trajectory, i.e., the goal (8) is achieved.

Proof: Define the Lyapunov function candidate

(14) |

where is positive definite with respect to . The time derivative of can be obtained as Under the assumption of identical Hessians, we will have

(15) |

On the other hand, by using (9) for the system (1) and summing up both sides for , we know that . Then we can rewrite (15) as . Therefore, for . This guarantees that will asymptotically converge to zero. Now, under the assumption that is convex, using Proposition 1 and Lemma II.2, it is easy to see that under Assumption III.1 as the team cost function will be minimized, where .

###### Remark III.10

In (9) each agent is required to know $\frac{\partial}{\partial t}\nabla f_i(t, x_i)$, which might be restrictive. However, there are applications where each agent knows the closed form of its own local cost function (e.g., motion control with an optimization objective) or at least knows how the cost function varies with respect to time (e.g., home automation). For example, in motion control with an optimization objective, it is possible that each agent knows the closed form of its local cost function. In home automation, smart electrical devices need to agree on the total amount of energy consumption that maximizes an overall utility function formed by the sum of the utility functions of the devices; a varying price rate for electricity during a day makes the optimization problem time varying. Although the price rate of the electricity varies during the day, it is known to the agents beforehand. Hence, calculating $\frac{\partial}{\partial t}\nabla f_i$ might not be an issue in this application. Furthermore, there are algorithms to estimate the derivative of a function by knowing only the value of the function at each time $t$. How to apply this idea to distributed continuous-time time-varying optimization is a possible direction for our future studies.
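The derivative-estimation idea mentioned above can be sketched with a simple backward difference: holding $x$ fixed, $\frac{\partial}{\partial t}\nabla f(t,x) \approx [\nabla f(t,x) - \nabla f(t-\delta,x)]/\delta$. The cost below is an assumed example, not one from the paper:

```python
import numpy as np

def grad(t, x):    # gradient of an assumed example cost f(t,x) = 0.5*||x - c(t)||^2
    return x - np.array([np.sin(t), np.cos(t)])

delta = 1e-3
x = np.array([0.5, 0.5])       # held fixed: we estimate the partial w.r.t. t only
t = 2.0
fd = (grad(t, x) - grad(t - delta, x)) / delta   # backward-difference estimate
exact = -np.array([np.cos(t), -np.sin(t)])       # here d/dt grad = -c'(t)
fd_err = np.linalg.norm(fd - exact)
print(fd_err)                                    # O(delta) discretization error
```

In practice a noise-robust differentiator (e.g., a high-gain or sliding-mode observer) would replace the raw difference, since finite differencing amplifies measurement noise.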

###### Remark III.11

Assumption III.8 intuitively places a bound on the Hessians and the changing rates of the gradients of the cost functions with respect to . In Appendix A, we will show that Assumption III.8 holds if the cost functions with identical Hessians satisfy certain conditions such that the boundedness of for all guarantees the boundedness of and for all . For example, consider the cost functions commonly used for energy minimization, e.g., where is a positive constant and is a time-varying function particularly for agent . For these cost functions, the boundedness of for all guarantees the boundedness of and for all , if and are bounded. Hence to satisfy Assumption III.8 for it is sufficient to have a bound on and .

In Subsection III-C an estimator-based algorithm is introduced, where the assumption on identical Hessians is relaxed.

### III-C Estimator-Based Distributed Time-Varying Convex Optimization

In this subsection, an estimator-based algorithm is designed such that each agent calculates (6) in a distributed manner. To achieve this goal, distributed average tracking is used as a tool. Each agent generates an estimate of (6). Then a controller is designed such that each agent tracks its own generated signal while guaranteeing that the agents reach consensus.

The proposed algorithm for the system (1) has two separate parts, the estimator and controller. The estimator part is given by

(16) |

(17) |

(18) |

(19) |

where , and are positive coefficients to be selected and is the set of agent ’s neighbors at time . The controller part is given by

(20) |

where $\operatorname{sgn}(\cdot)$ is defined componentwise.
In implementing (19), the estimate can be projected onto the space of positive-definite matrices, which ensures that it remains nonsingular. Also, the internal states of the distributed average tracking estimators are initialized such that (as a special case, the initial values can be chosen as zero)

(21) |

The estimator part (16)-(19) generates the internal signal for each agent, and the controller part (20) guarantees consensus. Here the separation principle can be applied because the estimator part converges in finite time.
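The distributed average tracking building block can be sketched as follows, in the spirit of signum-based estimators such as [33]: each agent keeps an internal state $q_i$ with $\sum_i q_i(0) = 0$ (cf. the initialization condition (21)) and outputs $z_i = q_i + r_i$, which tracks the network-wide average of the local time-varying signals $r_i(t)$. The path graph, the gain, the filter structure, and the signals are illustrative assumptions, not the paper's exact estimator:

```python
import numpy as np

n, beta, dt, T = 4, 4.0, 1e-3, 15.0
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # assumed path graph

def r(i, t):            # local time-varying reference signals with bounded derivatives
    return np.sin(t + i)

q = np.zeros(n)         # internal filter states, sum q_i(0) = 0
for k in range(int(T / dt)):
    t = k * dt
    z = q + np.array([r(i, t) for i in range(n)])
    dq = np.array([beta * sum(np.sign(z[j] - z[i]) for j in neighbors[i])
                   for i in range(n)])
    q = q + dt * dq     # Euler step of the signum consensus filter

z = q + np.array([r(i, T) for i in range(n)])
avg = np.mean([r(i, T) for i in range(n)])
dat_err = np.max(np.abs(z - avg))
print(dat_err)          # each z_i tracks the network average of the r_i
```

Because the signum terms are antisymmetric, $\sum_i z_i(t) = \sum_i r_i(t)$ is invariant whenever $\sum_i q_i(0) = 0$, which is why the initialization condition matters: consensus on the $z_i$ then forces each of them to the exact average.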

###### Assumption III.12

The estimators’ coefficients and satisfy the following inequalities: and .

Assumption III.12 can be satisfied if the partial derivatives of the Hessians and the first- and second-order partial derivatives of the gradients are bounded.

###### Theorem III.13

Suppose that the graph $\mathcal{G}(t)$ is connected for all $t$ and that Assumptions III.1, III.2 and III.12 hold. Using the algorithm (16)-(20) for the system (1), all agents track the optimal trajectory, i.e., the goal (8) is achieved.

Proof: Estimator: It follows from Theorem 2 in [33] that if Assumption III.12 holds, then there exists a time $T$ such that for all , and . Now it follows from (19) that for all , . Note that for , is nonsingular without projection due to Assumption III.2, and hence the projection operation simply returns itself. So far we have shown that all agents generate the internal signal , where , in finite time.

Controller: Note that denoted as . For using (20) for (1), we have

(22) |

For , rewriting (22) using new variables , we have

(23) |

It is proved in [34] that using (23), there exists a time such that . As a result we have and . Now, it is easy to see that according to (6) the optimization goal (8) is achieved.

###### Remark III.14

Satisfying the conditions mentioned in Assumption III.12 might be restrictive but they hold for an important class of cost functions. For example, if the agents’ cost functions are in the form of , where the Hessians are not equal, the above conditions are equivalent to the conditions that and are bounded. This is applicable to a vast class of time-varying functions, , such as and .

###### Remark III.15

The algorithm introduced in (16)-(20) requires only that Assumptions III.2 and III.12 hold. Note that Assumption III.2 does not require each agent’s cost function to have an invertible Hessian, only their sum, which is weaker than Assumption III.3. In contrast, for the algorithm (9), not only must Assumption III.3 and the conditions mentioned in Remark III.11 be satisfied for each individual function, but the agents’ Hessians are also required to be equal. However, in the algorithm (9) the agents need only their own positions and the relative positions between themselves and their neighbors. In some applications, these pieces of information can be obtained by sensing, and hence the need for communication might be eliminated. In contrast, in the algorithm (16)-(20) each agent must communicate three internal variables with its neighbors, which necessitates communication.

## IV Time-Varying Convex Optimization For Double-Integrator Dynamics

In this section, we study the convex optimization problem with time-varying cost functions for double-integrator dynamics. In some applications, it might be more realistic to model the equations of motion of the agents with double-integrator dynamics, i.e., a mass-force model, to take into account the effect of inertia. Unlike the case of single-integrator dynamics, in the case of double-integrator dynamics the agents’ positions and velocities at each time must both be determined properly such that the team cost function is minimized. However, there is only direct control on each agent’s acceleration, and hence new challenges arise. As a first step, in Subsection IV-A, a centralized algorithm will be introduced.

### IV-A Centralized Time-Varying Convex Optimization

In this subsection, we focus on the time-varying convex optimization problem of (3) for double-integrator dynamics

$$\dot{x}(t) = v(t), \qquad \dot{v}(t) = u(t), \tag{24}$$

where $x(t), v(t), u(t) \in \mathbb{R}^m$ are the position, velocity, and control input, respectively. Our goal is to design the control input $u(t)$ to minimize the cost function $f(t, x)$. In Theorem IV.1, an algorithm will be proposed to solve the problem defined by (3) and (24). The control input is proposed for (24) as

(25) |