A Proximal Dual Consensus ADMM Method for Multi-Agent Constrained Optimization
Abstract
This paper studies efficient distributed optimization methods for multi-agent networks. Specifically, we consider a convex optimization problem with a globally coupled linear equality constraint and local polyhedral constraints, and develop distributed optimization methods based on the alternating direction method of multipliers (ADMM).
The considered problem arises in many applications, including machine learning and smart grid control.
Due to the presence of the polyhedral constraints, agents in the existing methods have to deal with polyhedrally constrained subproblems at each iteration. One of the key issues is that projection onto a polyhedral set is nontrivial, which precludes closed-form solutions and the use of simple algorithms for solving these subproblems. In this paper, by judiciously integrating the proximal minimization method with ADMM, we propose
a new distributed optimization method in which the polyhedral constraints are handled softly as penalty terms in the subproblems. This makes the subproblems efficiently solvable and consequently reduces the overall computation time. Furthermore, we propose a randomized counterpart that is robust against randomly ON/OFF agents and imperfect communication links. We analytically show that both proposed methods have a worst-case O(1/k) convergence rate, where k is the iteration number. Numerical results show that the proposed methods offer considerably lower computation time than the existing distributed ADMM method.
Keywords: Distributed optimization, ADMM, Consensus
EDICS: OPT-DOPT, MLR-DIST, NET-DISP, SP-CAPPL.
I Introduction
Multi-agent distributed optimization [1] has been of great interest due to applications in sensor networks [2] and cloud computing networks [3], and due to recent needs for distributed large-scale signal processing and machine learning tasks [4]. Distributed optimization methods are appealing because the agents access and process local data and communicate only with neighboring agents [1], making them particularly suitable for applications where the local data size is large and the network structure is complex. Many such problems can be formulated as the following optimization problem
(1a)  
(1b)  
(1c) 
In (1), each agent i owns a local control variable and a local cost function, together with locally known data matrices (vectors) and a local constraint set. The constraint (1b) is a global constraint that couples the variables of all agents, while each local constraint set in (1c) consists of a simple constraint set (in the sense that projection onto it is easy to implement) and a polyhedral constraint. It is assumed that each agent i knows only its own cost function and data, and the agents collaborate to solve the coupled problem (P). Examples of (P) include the basis pursuit (BP) [5] and LASSO problems [6] in machine learning, the power flow and load control problems in smart grids [7], the network flow problem [8] and the coordinated transmission design problem in communication networks [9], to name a few.
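For concreteness, problems of this class take the following generic form (the symbol names f_i, E_i, q, A_i, c_i and X_i below are illustrative placeholders rather than the paper's original notation):

```latex
\begin{align}
\min_{x_1,\ldots,x_N}\ & \sum_{i=1}^{N} f_i(x_i) \\
\text{s.t.}\ & \sum_{i=1}^{N} E_i x_i = q, \\
& x_i \in \mathcal{S}_i \triangleq \bigl\{ x_i \in \mathcal{X}_i \;\big|\; A_i x_i \leq c_i \bigr\},
\quad i = 1, \ldots, N,
\end{align}
```

where the equality constraint couples all agents as in (1b), and each local set combines a simple set with a polyhedral constraint as in (1c).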
Various distributed optimization methods have been proposed in the literature for solving problems of the form of (P). For example, the consensus subgradient methods [10, 11, 12, 13] can be employed to handle (P) by solving its Lagrange dual problem [1]. The consensus subgradient methods are simple to implement, but their convergence is slow. In view of this, the alternating direction method of multipliers (ADMM) [14, 15] has been used for fast distributed consensus optimization [16, 17, 18, 19, 20, 21]. Specifically, the work [16] proposed a consensus ADMM (C-ADMM) method for solving a distributed LASSO problem. The linear convergence rate of C-ADMM is further analyzed in [20], and later, in [21], C-ADMM is extended to allow asynchronous updates. Assuming that a certain coloring scheme is available for the network graph, the works [17, 18] proposed several distributed ADMM (D-ADMM) methods for solving problems of the same form as (P). The D-ADMM methods require each agent either to update the variables sequentially (not in parallel) or to solve a min-max (saddle point) subproblem at each iteration. In the recent work [19], the authors proposed a distributed optimization method, called dual consensus ADMM (DC-ADMM), which solves (P) in a fully parallel manner over any network as long as the graph is connected. An inexact counterpart of DC-ADMM was also proposed in [19] to achieve a low per-iteration complexity when the local subproblem is complex.
In this paper, we improve upon the work in [19] by presenting new computationally efficient distributed optimization methods for solving (P). Specifically, due to the presence of the polyhedral constraints in (1c), the agents in the existing methods have to solve a polyhedrally constrained subproblem at each iteration. Since projection onto a polyhedral set is not trivial, closed-form solutions are not available and, moreover, simple algorithms such as the gradient projection method [22] cannot handle this constrained subproblem efficiently. To overcome this issue, we propose a proximal DC-ADMM (PDC-ADMM) method in which each agent deals with a subproblem with simple constraints only, and which is therefore more efficiently implementable than DC-ADMM. This is made possible by the use of the proximal minimization method [14, Sec. 3.4.3] to deal with the dual variables associated with the polyhedral constraints, so that these constraints can be softly handled as penalty terms in the subproblems. Our contributions are summarized as follows.

We propose a new PDC-ADMM method, and show that the proposed method converges to an optimal solution of (P) with a worst-case O(1/k) convergence rate, where k is the iteration number. Numerical results will show that the proposed PDC-ADMM method exhibits a significantly lower computation time than DC-ADMM in [19].

We further our study by presenting a randomized PDC-ADMM method that is tolerant of randomly ON/OFF agents and robust against imperfect communication links. We show that the proposed randomized PDC-ADMM method converges to an optimal solution of (P) in the mean, with a worst-case O(1/k) convergence rate.
The rest of this paper is organized as follows. Section II presents the applications, network model and assumptions of (P). The PDC-ADMM method and the randomized PDC-ADMM method are presented in Sections III and IV, respectively. Numerical results are presented in Section V and conclusions are drawn in Section VI.
Notations: A ⪰ 0 (A ≻ 0) means that the matrix A is positive semidefinite (positive definite); a ≥ b indicates that a_i ≥ b_i for all i, where a_i denotes the ith element of vector a. I_n is the n × n identity matrix; 1_n is the n-dimensional all-one vector. ‖a‖ denotes the Euclidean norm of vector a, ‖a‖₁ represents the 1-norm, and ‖a‖²_W ≜ aᵀWa for some W ⪰ 0; diag{a} is a diagonal matrix with the ith diagonal element being a_i. The notation ⊗ denotes the Kronecker product, and λ_max(A) denotes the maximum eigenvalue of the symmetric matrix A.
II Applications, Network Model and Assumptions
II-A Applications
Problem (P) has applications in machine learning [6, 4], data communications [9, 8] and the emerging smart grid systems [13, 7, 23, 24], to name a few. For example, when each local cost is the squared Euclidean norm, (P) is the least-norm solution problem of the linear system in (1b); when each local cost is the 1-norm, (P) is the well-known basis pursuit (BP) problem [5, 17]; and if each local cost is the (unsquared) Euclidean norm, then (P) is the BP problem with group sparsity [6]. The LASSO problem can also be recast in the form of (P). Specifically, consider a LASSO problem [6] with a column-partitioned data model [17, Fig. 1], [25],
(2) 
where the local data matrices contain the training data vectors, together with a given response signal and a penalty parameter. By introducing an appropriate slack variable, one can equivalently write (2) as
(3a)  
(3b)  
(3c) 
which is exactly an instance of (P). A polyhedral constraint can arise, for example, in the monotone curve fitting problem [26]. Specifically, suppose that one wishes to fit a signal vector over some fine grid of points using a set of monotone vectors, each modeled as a linear combination of basis vectors weighted by a fitting parameter vector. To impose monotonicity, one requires the consecutive entries of each fitted vector to be nonincreasing (or nondecreasing), which constitutes a polyhedral constraint on the fitting parameters. Readers may refer to [26] for more about constrained LASSO problems.
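As a concrete illustration of the column-partitioned data model behind (2), the following sketch evaluates a LASSO objective when the data matrix is split column-wise among agents; the function name and the l1 penalty form here are illustrative, not the paper's notation.

```python
import numpy as np

def partitioned_lasso_objective(A_blocks, x_blocks, b, lam):
    """LASSO objective under a column-partitioned data model:
    each agent i holds a column block A_i and a variable block x_i;
    the residual couples all agents through sum_i A_i @ x_i - b."""
    residual = sum(A @ x for A, x in zip(A_blocks, x_blocks)) - b
    return 0.5 * residual @ residual + lam * sum(np.abs(x).sum() for x in x_blocks)

# Tiny example: two agents, each owning two columns of a 4x4 system.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
x = rng.standard_normal(4)
b = A @ x  # consistent right-hand side, so the residual vanishes at x
full = partitioned_lasso_objective([A[:, :2], A[:, 2:]], [x[:2], x[2:]], b, lam=0.1)
```

Since the residual is zero at the generating point, only the l1 penalty term contributes to the objective value.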
On the other hand, the load control problems [13, 7, 23] and microgrid control problems [24] in smart grid systems also take the same form as (P). Specifically, consider a utility company that manages the electricity consumption of customers to maintain power balance, given a power supply vector and a power consumption vector for each customer's load, the latter determined by a local load control variable. For many types of electricity loads (e.g., electrical vehicles (EVs) and batteries), the load consumption can be expressed as a linear function of the control variable through a mapping matrix [23, 24]. Besides, the control variables are often subject to control constraints (e.g., maximum/minimum charging rate and maximum capacity), which can be represented by a polyhedral constraint. Then, the load control problem can be formulated as
(4a)  
(4b)  
(4c) 
where a slack variable captures the power imbalance and the cost function penalizes it. Problem (4) is again an instance of (P).
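The linear load model above can be sketched as follows; the dimensions, the uniform random data, and the quadratic imbalance cost are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 4                                    # number of time slots in the horizon
supply = rng.uniform(2.0, 3.0, size=T)   # power supply profile

# Each customer's consumption is a linear map of its control variable:
# load_i = C_i @ x_i, with C_i a (hypothetical) mapping matrix.
C = [rng.uniform(0.0, 1.0, size=(T, 2)) for _ in range(3)]
x = [rng.uniform(0.0, 1.0, size=2) for _ in range(3)]

total_load = sum(Ci @ xi for Ci, xi in zip(C, x))
imbalance = total_load - supply          # the slack variable in (4) plays this role
cost = 0.5 * imbalance @ imbalance       # e.g., a quadratic power-imbalance cost
```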
II-B Network Model and Assumptions
We model the multi-agent network as an undirected graph G = (V, E), where V = {1, …, N} is the set of nodes (i.e., agents) and E is the set of edges. In particular, an edge (i, j) ∈ E if and only if agents i and j are neighbors; that is, they can communicate and exchange messages with each other. Thus, for each agent i, one can define the index subset of its neighbors as N_i = {j ∈ V | (i, j) ∈ E}. Besides, the adjacency matrix W of the graph is defined by W_ij = 1 if (i, j) ∈ E and W_ij = 0 otherwise. The degree matrix of G is the diagonal matrix D = diag{|N_1|, …, |N_N|}. We assume that
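The graph quantities just defined can be sketched in code; the 4-node path graph below is only an example, and the Fiedler-value test is one standard way to check the connectedness required by Assumption 1.

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 3)]   # undirected edges of an example 4-node path graph
N = 4
W = np.zeros((N, N), dtype=int)    # adjacency matrix: W[i, j] = 1 iff (i, j) is an edge
for i, j in edges:
    W[i, j] = W[j, i] = 1
D = np.diag(W.sum(axis=1))         # degree matrix
neighbors = {i: set(np.flatnonzero(W[i])) for i in range(N)}

# Connectedness check: the graph is connected iff the second-smallest
# eigenvalue of the Laplacian L = D - W is strictly positive.
L = D - W
fiedler = np.sort(np.linalg.eigvalsh(L))[1]
connected = fiedler > 1e-9
```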
Assumption 1
The undirected graph is connected.
Assumption 1 is essential for consensus optimization since it implies that any two agents in the network can always influence each other in the long run. We also have the following assumption on the convexity of (P).
Assumption 2
(P) is a convex problem, i.e., ’s are proper closed convex functions (possibly nonsmooth), and ’s are closed convex sets; there is no duality gap between (P) and its Lagrange dual; moreover, the minimum of (P) is attained and so is its optimal dual value.
III Proposed Proximal Dual Consensus ADMM Method
In this section, we propose a distributed optimization method for solving (P), referred to as the proximal dual consensus ADMM (PDC-ADMM) method. We will compare the proposed PDC-ADMM method with the existing DC-ADMM method in [19], and discuss the potential computational merits of the proposed PDC-ADMM.
The proposed PDC-ADMM method considers the Lagrange dual of (P). Let us write (P) as follows
(5a)  
(5b)  
(5c) 
where additional slack variables are introduced. Let a Lagrange dual variable be associated with constraint (5b), and one with each of the constraints in (5c). The Lagrange dual problem of (5) is then equivalent to the following problem
(6) 
where
(7) 
for all agents. To enable multi-agent distributed optimization, we allow each agent to keep a local copy of the dual variable, while enforcing the distributed copies to be equal across the network through proper consensus constraints. This is equivalent to reformulating (6) as the following problem
(8a)  
(8b)  
(8c)  
(8d) 
where additional slack variables are introduced. Constraints (8b) and (8c) are equivalent to the neighbor-wise consensus constraints, i.e., each agent's local copy must equal those of its neighbors. Under Assumption 1, neighbor-wise consensus is equivalent to global consensus; thus (8) is equivalent to (6). It is worth noting that, while constraint (8d) looks redundant at this stage, it is a key ingredient of the proposed method, as will become clear shortly.
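The neighbor-wise consensus constraints behind (8b) and (8c) follow a standard pattern: each edge (i, j) carries a slack variable so that (in illustrative notation, with y_i denoting agent i's local copy and t_{ij} the edge slack)

```latex
y_i = t_{ij}, \qquad y_j = t_{ij}, \qquad \forall j \in \mathcal{N}_i,\ i = 1, \ldots, N,
```

which forces y_i = y_j for every pair of neighbors and hence, by connectedness (Assumption 1), y_1 = ⋯ = y_N.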
Let us employ the ADMM method [14, 15] to solve (8). ADMM concerns an augmented Lagrangian function of (8)
(9) 
where Lagrange dual variables are associated with each of the constraints in (8b), (8c) and (8d), respectively, and two positive penalty parameters are introduced. Then, by applying the standard ADMM steps [14, 15] to solve problem (8), we obtain: for iteration k,
(10)  
(11)  
(12)  
(13)  
(14)  
(15) 
Equations (10), (11) and (12) update the primal variables of (8) in a one-round Gauss-Seidel fashion, while equations (13), (14) and (15) update the dual variables.
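The Gauss-Seidel-then-dual-ascent structure of (10)-(15) mirrors the standard scaled-form consensus ADMM loop. The toy instance below, with quadratic costs f_i(x) = (x - a_i)^2/2, is not the paper's subproblem; it is only meant to show the update order (primal blocks first, then dual ascent).

```python
import numpy as np

a = np.array([1.0, 2.0, 6.0])    # private data of 3 agents; f_i(x) = 0.5 * (x - a_i)^2
rho = 1.0                         # ADMM penalty parameter
x = np.zeros(3)                   # local primal variables
z = 0.0                           # consensus variable
u = np.zeros(3)                   # scaled dual variables

for _ in range(200):
    # primal update: x_i = argmin f_i(x) + (rho/2)(x - z + u_i)^2  (closed form here)
    x = (a + rho * (z - u)) / (1.0 + rho)
    z = np.mean(x + u)            # consensus (second primal block) update
    u = u + x - z                 # dual ascent

# The consensus minimizer of sum_i f_i is the average of the a_i's (here 3.0).
```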
It is shown in Appendix A that
(16) 
for all agents and all iterations. By (16), equations (10) to (15) can be simplified to the following steps
(17)  
(18)  
(19) 
By letting
(20) 
(21) 
On the other hand, note that the subproblem in (17) is strongly convex. However, it is not easy to handle, since subproblem (17) is in fact a min-max (saddle point) problem (see the definition in (7)). Fortunately, by applying the minimax theorem [27, Proposition 2.6.2] and exploiting the strong convexity of (17), one may avoid solving the min-max problem (17) directly. As we show in Appendix B, the solution of subproblem (17) can be conveniently obtained in closed form as follows
(22a)  
(22b) 
where the remaining variable is given by a solution to the following quadratic program (QP)
(23) 
As also shown in Appendix B, the dummy constraint in (8d) and the associated augmented term are essential for arriving at (22) and (23). Since they are equivalent to applying the proximal minimization method [14, Sec. 3.4.3] to the dual variables in (8), we name the developed method the proximal DC-ADMM (PDC-ADMM) method. In Algorithm 1, we summarize the proposed PDC-ADMM method. Note that the PDC-ADMM method in Algorithm 1 is fully parallel and distributed, except that, in (29), each agent needs to exchange its local variable with its neighbors.
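For reference, the proximal minimization method [14, Sec. 3.4.3] invoked here takes the generic form: given the previous iterate z^k and a coefficient c > 0 (generic symbols, not the paper's notation), the next iterate solves

```latex
z^{k+1} = \arg\min_{z}\ g(z) + \frac{1}{2c}\,\bigl\| z - z^{k} \bigr\|^2 .
```

The quadratic proximal term keeps z^{k+1} close to z^k and makes the subproblem strongly convex; applying this to the dual variables of the polyhedral constraints in (8) is what lets those constraints enter the subproblems as soft penalty terms.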
The PDC-ADMM method in Algorithm 1 is provably convergent, as stated in the following theorem.
Theorem 1
Suppose that Assumptions 1 and 2 hold. Let a pair of optimal primal-dual solutions of (5) (i.e., (P)) be given, and let an optimal dual variable of problem (8) (stacking the dual variables of all agents) be given. Moreover, let
(24) 
and the corresponding running averages, where the iterates are generated by (3). Then, it holds that
(25) 
where the constants on the right-hand side are independent of the iteration number k.
The proof is presented in Appendix C. Theorem 1 implies that the proposed PDC-ADMM method asymptotically converges to an optimal solution of (P) with a worst-case O(1/k) convergence rate.
(26)  
(27)  
(28)  
(29) 
(30)  
(31)  
(32) 