Accelerated Gradient Methods
for Networked Optimization
Abstract
We develop multistep gradient methods for network-constrained optimization of strongly convex functions with Lipschitz-continuous gradients. Given the topology of the underlying network and bounds on the Hessian of the objective function, we determine the algorithm parameters that guarantee the fastest convergence and characterize situations when significant speedups can be obtained over the standard gradient method. Furthermore, we quantify how the performance of the gradient method and its accelerated counterpart are affected by uncertainty in the problem data, and conclude that in most cases our proposed method outperforms gradient descent. Finally, we apply the proposed technique to three engineering problems: resource allocation under network-wide budget constraints, distributed averaging, and Internet congestion control. In all cases, we demonstrate that our algorithm converges more rapidly than alternative algorithms reported in the literature.
I Introduction
Distributed optimization has recently attracted significant attention from several research communities. Examples include the work on network utility maximization for resource allocation in communication networks [1], distributed coordination of multi-agent systems [2], collaborative estimation in wireless sensor networks [3], distributed machine learning [4], and many others. The majority of these approaches apply gradient or subgradient methods to the dual formulation of the decision problem. Although gradient methods are easy to implement and require modest computations, they suffer from slow convergence. In some cases, such as the development of distributed power control algorithms for cellular phones [5], one can replace gradient methods by fixed-point iterations and achieve improved convergence rates. For other problems, such as average consensus [6], a number of heuristic methods have been proposed that improve the convergence time of the standard method [7, 8]. However, we are not interested in tailoring techniques to individual problems; our aim is to develop general-purpose schemes that retain the simplicity of the gradient method, yet improve the convergence factors.
Even if the optimization problem is convex and the subgradient method is guaranteed to converge to an optimal solution, the rate of convergence is very modest. The convergence rate of the gradient method is improved if the objective function is differentiable with Lipschitz-continuous gradient, and even more so if the function is also strongly convex. However, for smooth optimization problems several techniques allow for even better convergence rates. One such technique is higher-order methods, such as Newton’s method [9], which use both the gradient and the Hessian of the objective function. Although distributed Newton methods have recently been developed for special problem classes (e.g., [10, 11]), they impose large communication overhead to collect global Hessian information. Another technique is the augmented Lagrangian dual method [12]. This method was originally developed to cope with robustness issues of the dual ascent method, but it turns out that different variations of this technique, such as the method of multipliers [4], tend to converge in fewer iterations than gradient descent. Recently a few applications of these algorithms to distributed optimization have been proposed [4, 13], but convergence rate estimates and optimal algorithm parameters are still unaddressed. A third way to obtain faster convergence is to use multistep methods [14, 9]. These methods rely only on gradient information but use a history of the past iterates when computing the future ones. This paper explores the latter approach for distributed optimization, and addresses the design, convergence properties, optimal stepsize selection, and robustness of networked multistep methods. Moreover, we also apply the developed techniques to three important classes of distributed optimization problems.
This paper makes the following contributions. First, we develop a multistep weighted gradient method that maintains a network-wide constraint on the decision variables throughout the iterations. The accelerated algorithm is based on the heavy-ball method by Polyak [14] extended to the networked setting. We derive optimal algorithm parameters, show that the method has a linear convergence rate, and quantify the improvement in convergence factor over the gradient method. Our analysis shows that the method is particularly advantageous when the eigenvalues of the Hessian of the objective function and/or the eigenvalues of the graph Laplacian of the underlying network have a large spread. Second, we investigate how similar techniques can be used to accelerate dual decomposition across a network of decision-makers. In particular, given smoothness parameters of the objective function, we present closed-form expressions for the optimal parameters of an accelerated gradient method for the dual. Third, we quantify how the convergence properties of the algorithm are affected when the algorithm is tuned using misestimated problem parameters. This robustness analysis shows that the accelerated algorithm tolerates parameter errors well and in most cases outperforms its non-accelerated counterpart. Finally, we apply the developed algorithms to three case studies: networked resource allocation, consensus, and network flow control. In each application we demonstrate superior performance compared to alternatives from the literature.
The paper is organized as follows. In Section II, we introduce our networked optimization problem. Section III reviews multistep gradient techniques. Section IV proposes a multistep weighted gradient algorithm, establishes conditions for its convergence and derives optimal stepsize parameters. Section V develops a technique for accelerating the dual problem based on parameters for the (smooth) primal. Section VI presents a robustness analysis of the multistep algorithm in the presence of uncertainty. Section VII applies the proposed techniques to three engineering problems: resource allocation, consensus and network flow control; numerical results and performance comparisons are presented for each case study. Finally, concluding remarks are given in Section VIII.
II Assumptions and problem formulation
This paper is concerned with collaborative optimization by a network of decision-makers. Each decision-maker is endowed with a loss function , has control of one decision variable , and collaborates with the others to solve the optimization problem
(1) 
for given matrices and . We will assume that lies in the range space of , i.e. that there exists at least one decision vector that satisfies the constraints.
The information exchange between decision-makers is represented by a graph with vertex set and edge set . Specifically, at each time , we will assume that decision-maker has access to for all its neighbors .
Most acceleration techniques in the literature (e.g. [15, 16, 17]) require that the loss functions are smooth and convex. Similarly, we will make the following assumptions:
Assumption 1
Each loss function is convex and twice continuously differentiable with
(2) 
for some positive real constants with .
Some remarks are in order. Let , and define . Then, Assumption 1 ensures that is strongly convex with modulus :
and that its gradient is Lipschitzcontinuous with constant :
See, e.g., [15, Lemma 1.2.2 and Theorem 2.1.11] for details. Similarly, the Hessian of satisfies
(3) 
Furthermore, Assumption 1 guarantees that (1) is a convex optimization problem whose unique optimizer satisfies
(4) 
where is the (unique) optimal Lagrange multiplier for the linear constraints.
III Background on multistep methods
The basic gradient method for unconstrained minimization of a convex function takes the form
(5) 
where is a fixed stepsize parameter. Assume that is strongly convex with modulus and has Lipschitz-continuous gradient with constant . Then if , the sequence generated by (5) converges to at a linear rate, i.e. there exists a convergence factor such that
for all . The smallest convergence factor is obtained for the stepsize (see, e.g., [14]).
While the convergence rate cannot be improved unless higher-order information is considered [14], the convergence factor can be reduced by accounting for the history of iterates when computing the ones to come. Methods in which the next iterate depends not only on the current iterate but also on the preceding ones are called multistep methods. The simplest multistep extension of the gradient method is
(6) 
for fixed stepsize parameters and . This technique, originally proposed by Polyak, is sometimes called the heavy-ball method based on the physical interpretation of the added “momentum term”. For a centralized setup, Polyak derived the optimal stepsize parameters and showed that these guaranteed a convergence factor of , which is always smaller than the convergence factor for the gradient method, and significantly so when is large.
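The speedup can be illustrated on the same kind of quadratic test problem. The sketch below assumes the standard closed forms of Polyak's parameters, α = 4/(√L + √μ)² and β = ((√L − √μ)/(√L + √μ))², with illustrative problem data:

```python
# Heavy-ball iteration (6) on a quadratic f(x) = sum_i q_i * x_i**2 / 2 with
# minimizer x* = 0, using Polyak's parameters
#   alpha = 4 / (sqrt(Lip) + sqrt(mu))**2,
#   beta  = ((sqrt(Lip) - sqrt(mu)) / (sqrt(Lip) + sqrt(mu)))**2,
# compared with plain gradient descent at its own optimal stepsize.
import math

q = [1.0, 25.0, 100.0]                     # Hessian eigenvalues, kappa = 100
mu, Lip = min(q), max(q)
sm, sL = math.sqrt(mu), math.sqrt(Lip)
alpha = 4.0 / (sL + sm) ** 2
beta = ((sL - sm) / (sL + sm)) ** 2

x0 = [1.0, 1.0, 1.0]

# plain gradient descent with stepsize 2/(mu + Lip)
g = list(x0)
for _ in range(200):
    g = [gi - 2.0 / (mu + Lip) * qi * gi for gi, qi in zip(g, q)]

# heavy-ball: keep the previous iterate around for the momentum term
x, x_prev = list(x0), list(x0)
for _ in range(200):
    x, x_prev = [xi - alpha * qi * xi + beta * (xi - pi)
                 for xi, qi, pi in zip(x, q, x_prev)], x

err_hb = max(abs(v) for v in x)
err_gd = max(abs(v) for v in g)
assert err_hb < err_gd       # momentum leaves a much smaller error
```

With condition number 100, the guaranteed factors are roughly 0.98 for the gradient method and 0.82 for heavy-ball, so after 200 iterations the gap in accuracy is many orders of magnitude.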
In the following sections, we will develop multistep gradient methods for network-constrained optimization, analyze their convergence properties, and develop techniques for finding the optimal algorithm parameters.
IV A multistep weighted gradient method
In the absence of constraints, (1) is trivial to solve since the objective function is separable and each decision-maker could simply minimize its loss independently of the others. Hence, it is the existence of constraints that makes (1) challenging. In the optimization literature, there are essentially two ways of dealing with constraints. One way is to project the iterates onto the constraint set to maintain feasibility at all times; such a method will be developed in this section. The other way is to use dual decomposition to eliminate couplings between decision-makers and solve the associated dual problem; we will return to such techniques in Section V.
Computing the Euclidean projection onto the constraint of (1) typically requires the full decision vector , which is not available to the decision-makers in our setting. An alternative, explored e.g. in [18], is to consider weighted gradient methods which use a linear combination of the information available to nodes to ensure that iterates remain feasible. For our problem (1) the weighted gradient method takes the form
(7) 
where is a weight matrix that satisfies the sparsity constraint that if and . In this way, the iterations (7) read
and can be executed by individual decisionmakers based on the information that they have access to. If satisfies
(8) 
then any initially feasible will always remain feasible. While the constraints on might appear restrictive, it is possible to construct appropriate weight matrices for many applications. The following examples describe two such cases.
Example 1
When the decision-makers are only constrained by a total resource budget, (1) reduces to
A distributed gradient method for this problem was developed in [19]. Later, [18] interpreted these iterations as a weighted gradient method and developed techniques for computing the weight matrix that minimizes the guaranteed convergence factor.
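As an illustration of this setting, the sketch below runs the weighted gradient iteration (7) on a hypothetical three-node instance with quadratic losses. The weight choice W = δL, with L the graph Laplacian and δ a small stepsize, is just one feasible option satisfying (8), not the optimized weights of [18]:

```python
# Weighted gradient method (7) for resource allocation under a total budget:
#   minimize sum_i f_i(x_i)   subject to   sum_i x_i = const,
# with f_i(x_i) = q_i * (x_i - a_i)**2 / 2 on a 3-node path graph. Choosing
# W = delta * L_graph gives 1^T W = 0, so the budget sum(x) is invariant.
# At the optimum all marginal losses f_i'(x_i) are equal.
q = [1.0, 1.5, 2.0]             # curvatures (illustrative)
a = [1.0, 2.0, 3.0]             # per-node targets (illustrative)
L = [[1.0, -1.0, 0.0],          # Laplacian of the path 1-2-3
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 1.0]]
delta = 0.1

x = [0.0, 0.0, 0.0]             # feasible start; budget = 0
for _ in range(3000):
    grad = [qi * (xi - ai) for qi, xi, ai in zip(q, x, a)]
    Wg = [sum(L[i][j] * grad[j] for j in range(3)) for i in range(3)]
    x = [xi - delta * wi for xi, wi in zip(x, Wg)]

grad = [qi * (xi - ai) for qi, xi, ai in zip(q, x, a)]
assert abs(sum(x)) < 1e-9                   # budget preserved
assert max(grad) - min(grad) < 1e-6         # marginal losses equalized
```

Each node only needs the gradients of its graph neighbors, since W inherits the sparsity of L.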
Example 2
Consider a scenario where the decision-makers have to find a common decision that minimizes the total cost
We can rewrite this problem in our standard form (1) by introducing local decision variables :
(9) 
Note that in vector form, the constraint of (9) reads where is the incidence matrix of the graph . Next, we will show that the gradient iterations for the dual problem of (9) have the structure of a weighted gradient method in the primal variables. To this end, we form the Lagrangian and the dual function
Under Assumption 1, the Lagrangian has a unique minimizer and the dual function is continuously differentiable with . Hence, the iterations
will converge to a primal-dual optimal pair for appropriately chosen stepsize . Introducing and multiplying both sides of the iterations by , we obtain
(10) 
Note that is the graph Laplacian of and that satisfies the sparsity constraint for distributed execution detailed above. One can readily verify that has a simple eigenvalue at for which .
One important application of this technique is to distributed averaging, in which nodes should converge to the network-wide average of constants held by each node . This average can be found by solving (9) with (since its optimal solution is the average of the constants ). The corresponding iterations (10) read
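These averaging iterations are straightforward to simulate. The following sketch uses the Laplacian of a three-node path (an illustrative topology) and the stepsize 2/(λ₂ + λ_max), which is optimal for the restriction to the disagreement subspace:

```python
# Distributed averaging via the weighted gradient iteration x <- x - alpha*L*x,
# where L is the graph Laplacian of a 3-node path. Its nonzero eigenvalues
# are 1 and 3, so the best fixed stepsize is alpha = 2/(1 + 3) = 0.5.
# Every node converges to the network-wide average of the initial values.
L = [[1.0, -1.0, 0.0],
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 1.0]]
c = [3.0, -1.0, 7.0]            # initial values held by the nodes
avg = sum(c) / len(c)

x = list(c)
alpha = 0.5
for _ in range(100):
    Lx = [sum(L[i][j] * x[j] for j in range(3)) for i in range(3)]
    x = [xi - alpha * v for xi, v in zip(x, Lx)]

assert all(abs(xi - avg) < 1e-12 for xi in x)
```

Since 1ᵀL = 0, the sum of the states is invariant, so the only possible fixed point in the consensus subspace is the average itself.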
We will return to these iterations and their accelerated counterparts in Section VII.
IV-A A multistep weighted gradient method and its convergence
The examples indicate that variants of the weighted gradient method with improved convergence factors could also speed up the convergence of network-wide resource allocation and consensus processes. To this end, we consider the following multistep variant of the weighted gradient iteration
(11) 
Under the sparsity constraint on detailed above, these iterations can be implemented by individual decision-makers. Moreover, (8) ensures that if and satisfy the constraints of (1), then every iterate produced by (11) will also be feasible. The next theorem characterizes the convergence of the iterations (11) and derives optimal stepsize parameters.
Theorem 1
Consider the optimization problem (1) under Assumption 1, and let denote its unique optimizer. Assume that has eigenvalue at and satisfies and . Let and be the magnitude of eigenvalues of . Then, for
the iterates (11) converge to at a linear rate
with . Moreover, the minimal value of is
obtained for stepsizes and where
See the appendix for all the proofs. Similar to the discussion in Section III, it is interesting to investigate when (11) significantly improves over the singlestep algorithm. In [18], it is shown that the best convergence factor of the weighted gradient iteration (7) is
One can verify that , i.e. the multistep iterations can always be tuned to converge faster. Moreover, the improvement in convergence factor depends on the quantity : when is large, the speedup is roughly proportional to . In the networked setting, there are two reasons for a large value of . One is simply that the Hessian of the objective function is ill-conditioned, so that the ratio is large. The other is that the matrix is ill-conditioned, i.e. that is large. As we have seen in the examples, the graph Laplacian is often a valid choice for . Thus, the topology of the underlying graph directly impacts the convergence rate (and the convergence rate improvements) of the multistep weighted gradient method. We will return to this in detail in Section VII.
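The magnitude of this speedup is easy to tabulate. The back-of-the-envelope sketch below compares the two guaranteed factors, (κ − 1)/(κ + 1) for the weighted gradient method and (√κ − 1)/(√κ + 1) for its multistep variant, for a few arbitrarily chosen condition numbers:

```python
# The iterations needed to reach accuracy eps with linear convergence factor
# q scale as log(1/eps) / log(1/q), so the speedup of the multistep method
# over the gradient method grows roughly like sqrt(kappa).
import math

eps = 1e-6
for kappa in [10.0, 100.0, 1000.0]:
    q_grad = (kappa - 1) / (kappa + 1)
    q_ms = (math.sqrt(kappa) - 1) / (math.sqrt(kappa) + 1)
    it_grad = math.log(1 / eps) / math.log(1 / q_grad)
    it_ms = math.log(1 / eps) / math.log(1 / q_ms)
    assert q_ms < q_grad
    print(f"kappa={kappa:6.0f}  gradient ~{it_grad:7.0f} iters  "
          f"multistep ~{it_ms:5.0f} iters  speedup ~{it_grad / it_ms:.1f}x")
```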
In many applications, we will not know , but only bounds such as (3). The next result can then be useful:
Proposition 1
IvB Optimal weight selection for the multistep method
The results in the previous subsection provide optimal stepsize parameters and for a given weight matrix . However, the expressions for the associated convergence factors depend on the eigenvalues of and optimizing the entries in jointly with the stepsize parameters can yield even further speedups. We make the following observation.
Proposition 2
The next result shows how the optimal weight selection for both scenarios can be found via convex optimization.
Proposition 3
Let be the span of real symmetric matrices with the sparsity pattern induced by , i.e.
Then the problem of minimizing is equivalent to
(12) 
where is the eigenvector space corresponding to the zero eigenvalues of and is a matrix of unit vectors spanning .
Note that when we only want to minimize the condition number of subject to the structural constraints, we simply set in the formulation above.
Remark 1
The lower bound in (12) is rather arbitrary: any scaled matrix for has the same condition number as , and if and are the optimal stepsizes for the matrix , then and are optimal for .
V A multistep dual ascent method
An alternative approach for solving (1) is to use Lagrange relaxation, i.e. to introduce Lagrange multipliers for the equality constraints and solve the dual problem. The dual function associated with (1) is then
(13) 
where is the conjugate function of . The dual problem is to maximize the dual function with respect to , i.e.,
Recall that if is strongly convex then and hence are convex and continuously differentiable [20]. Hence, in light of our earlier discussion, it is natural to attempt to solve the dual problem using the multistep iteration
(14) 
In order to find the optimal stepsizes and estimate the convergence factors of the iterations, we need to be able to bound the convexity modulus of and bound the Lipschitz constant of its gradient. The following observation is a first step towards this goal:
Lemma 1
Consider the optimization problem (1) with associated dual function (13). Let be a continuously differentiable and closed convex function. Then,

If is strongly convex with modulus , then is Lipschitz continuous with constant .

If is Lipschitz continuous with constant , then is strongly convex with modulus .
These dual bounds can be used to find stepsizes with strong performance guarantees for the dual iterations. Specifically:
Theorem 2
The advantage of Theorem 2 is that it provides stepsize parameters with guaranteed convergence factor using readily available data of the primal problem. How close to optimal these results are depends on how tight the bounds in Lemma 1 are. If the bounds are tight, then the stepsizes in Theorem 2 are truly optimal. The next example shows that a certain degree of conservatism may be present, even for quadratic programming problems.
Example 3
Consider the quadratic minimization problem
where , nonsingular and . This implies that the objective function is strongly convex with modulus and that its gradient is Lipschitzcontinuous with constant . Hence, according to Lemma 1, is strongly convex with modulus and its gradient is Lipschitz continuous with constant . However, direct calculations reveal that
from which we see that has convexity modulus and that its gradient is Lipschitz continuous with constant . By [21, p. 225], these bounds are tighter than those offered by Lemma 1. Specifically, for congruent matrices and there exist nonnegative real numbers such that and . For and we obtain
For some important classes of problems, the bounds are, however, tight. One such example is the average consensus application considered in Section VII.
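Assuming Lemma 1 asserts the usual duality constants — a gradient Lipschitz constant of σ_max(A)²/μ and a convexity modulus of σ_min(A)²/L for the negated dual — the containment can be checked numerically on a small quadratic instance (all data chosen arbitrarily for illustration):

```python
# For f(x) = x^T Q x / 2 with mu*I <= Q <= Lip*I, the dual function of
# "minimize f(x) s.t. Ax = b" has (negated) Hessian A Q^{-1} A^T. This
# sketch checks that its eigenvalues lie in the interval
# [sigma_min(A)**2 / Lip, sigma_max(A)**2 / mu] suggested by Lemma 1.
import math

def eig2x2(m):
    """Eigenvalues of a symmetric 2x2 matrix [[a, b], [b, d]]."""
    a, b, d = m[0][0], m[0][1], m[1][1]
    t, det = a + d, a * d - b * b
    r = math.sqrt(max(t * t - 4 * det, 0.0))
    return (t - r) / 2, (t + r) / 2

A = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]
qdiag = [1.0, 2.0, 4.0]                # Q = diag(qdiag), so mu = 1, Lip = 4
mu, Lip = min(qdiag), max(qdiag)

# M = A Q^{-1} A^T and A A^T, both symmetric 2x2
M = [[sum(A[i][k] * A[j][k] / qdiag[k] for k in range(3)) for j in range(2)]
     for i in range(2)]
AAT = [[sum(A[i][k] * A[j][k] for k in range(3)) for j in range(2)]
       for i in range(2)]

lo, hi = eig2x2(M)
s2_min, s2_max = eig2x2(AAT)           # squared singular values of A
assert s2_min / Lip <= lo + 1e-12 and hi <= s2_max / mu + 1e-12
```

On this instance the dual curvature interval is roughly [0.65, 1.35], strictly inside the Lemma 1 interval [0.25, 3], illustrating the conservatism discussed in Example 3.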
VI Robustness analysis
The proposed multistep methods have significantly improved convergence factors compared to the gradient iterations, and particularly so when the Hessian of the loss function and/or the graph Laplacian of the network is ill-conditioned. However, to design the optimal stepsizes for the multistep methods one needs to know the upper and lower bounds on the Hessian and the largest and smallest nonzero eigenvalue of the graph Laplacian. These quantities can be hard to estimate accurately in practice. It is therefore important to analyze the sensitivity of the multistep methods to errors in these parameters to assess if the performance benefits prevail when the stepsizes are tuned based on misestimated parameters. Such a robustness analysis will be performed next.
Let and denote the estimates of and available when tuning the stepsizes. We are interested in quantifying how the convergence properties, and the convergence factors, of the gradient and the multistep methods are affected when and are used in the stepsize formulas that we have derived earlier. Theorem 1 provides some useful observations for the multistep method. The corresponding results for the weighted gradient method are summarized in the following lemma:
Lemma 2
Combining this lemma with our previous results from Theorem 1 yields the following observation.
Proposition 4
In practice, one should expect that is overestimated, in which case both methods converge. However, convergence can be guaranteed for a much wider range of perturbations. Figure 1 considers perturbations of the form and . The white area is the locus of perturbations for which convergence is guaranteed, while the dark area represents inadmissible perturbations which render either or negative. Note that both algorithms are robust to a continuous departure from the true values of and , since there is a ball with radius around the true values for which the methods are guaranteed to converge.
Next, we proceed to compare the convergence factors of the two methods when the stepsizes are tuned based on inaccurate parameters. The following lemma is then useful.
Lemma 3
The convergence factor expressions derived in Lemma 3 allow us to come to the following conclusions:
Proposition 5
Let , and define the set of perturbations under which the methods converge
and the fourth quadrant in the perturbation space . Then, for all , it holds that . However, there exists for which the scaled gradient has a smaller convergence factor than the multistep variant. In particular, for
(19) 
Fig. 1(b) illustrates the different perturbations considered in Proposition 5. While the multistep method has a superior convergence rate for most perturbations, the troublesome region is arguably the most likely one in engineering applications, since it represents perturbations where the smallest eigenvalue is underestimated while the largest eigenvalue is overestimated. To shed more light on the convergence properties in this region, we perform a numerical study on a quadratic function with and varying from to . We first consider symmetric perturbations , in which case the convergence factor of the gradient method is while the convergence factor of the multistep method is . Fig. 2(a) shows the convergence factors as a function of the perturbation . The convergence factor of the gradient iterations is insensitive to this class of perturbations, while the performance of the multistep iterations degrades with the size of the perturbation and will eventually become inferior to that of the gradient method. To complement this analysis, we also sweep over and compute the convergence factors for the two methods for problems with different . The plot in Fig. 2(b) indicates that when the condition number increases, the area where the gradient method is superior (the area above the contour line) shrinks. It also shows that when tends to zero or is very large, the performance of the multistep method is severely degraded.
VII Applications
In this section, we will apply the developed techniques to three classes of engineering problems for which distributed optimization techniques have received significant attention. These are resource allocation subject to a network-wide resource constraint, distributed averaging consensus, and Internet congestion control. In all cases, we will demonstrate that significant speedups can be achieved by direct application of our results, even when compared to acceleration techniques that have been tailor-made to the specific problem class.
VII-A Accelerated resource allocation
Our first application is the distributed resource allocation problem under a network-wide resource constraint described in Example 1. This problem class was introduced in [19] and revisited by [18], who developed optimal and heuristic weights for the corresponding weighted gradient iteration (7). We hence compare the multistep method developed in this paper with the optimal and suboptimal tuning for the standard weighted gradient iterations proposed in [18]. Similarly to [18], we create problem instances by generating random networks and assigning loss functions of the form to nodes. The parameters and are drawn uniformly from the intervals , , and , respectively. In [18] it was shown that the second derivatives of these functions are bounded by and .
Fig. 3 shows a representative example of a problem instance along with the convergence behavior for weighted and multistep weighted gradient iterations for several weight choices. The optimal weights for the weighted gradient method can be found by solving a semidefinite program derived in [18], and by Proposition 3 for the multistep variant. In addition, we use the heuristic weights “best constant” and “metropolis” introduced in [18]. In all cases, we observe significantly improved convergence factors for the multistep method.
In addition to simulations, we compare the analytical expressions for the convergence factors of the weighted gradient and multistep iterations. Table I again demonstrates superior performance of the multistep method. In addition to the heuristic weights considered previously, we have also used the “max-degree” weight heuristic from [18]. While this weight setting tends to be worse than “best constant” for the scaled gradient iterations, the two heuristics will always result in the same convergence factors for the multistep method. This follows from Remark 1 and the fact that both heuristics generate weight matrices of the form where is the Laplacian of the underlying graph and is a positive scalar.
Method       Max-degree   Metropolis   Best constant   SDP
Xiao-Boyd    0.9420       0.9318       0.9133          0.8952
Multistep    0.8667       0.8565       0.8667          0.7604
VII-B Distributed averaging and consensus
Our second application is devoted to distributed averaging. Distributed algorithms for consensus seeking have been researched intensively for decades, see e.g. [6, 22, 23]. Here, each node in the network initially holds a value and coordinates with neighbors in the graph to find the network-wide average. Clearly, this average can be found by applying any distributed optimization technique to the problem
(20) 
since the optimal solution to this problem is the network-wide average of the constants . In particular, we will explore how the multistep technique described in Example 2 with our optimal parameter selection rule compares with state-of-the-art distributed averaging algorithms from the literature.
The basic consensus algorithms use iterations of the form
(21) 
where are scalar weights, and the node states are initialized with . The paper [24] provides necessary and sufficient conditions on the weight matrix for the iterations to converge to the network-wide average of the initial values, along with computational procedures for finding that minimizes the convergence factor of the iterations.
Following the steps of Example 2, the optimization approach to consensus would suggest the iterations
(22) 
with where is the incidence matrix of . These iterations are of the same form as (21) but use a particular weight matrix. The multistep counterpart of (22) is
(23) 
In a fair comparison between the multistep iterations (23) and the basic consensus iterations, the weight matrices of the two approaches need not be the same, nor equal to the graph Laplacian. Rather, the weight matrix for the consensus iterations (21) should be optimized using the results from [24], and the weight matrix for the multistep iteration should be computed using Proposition 3.
In addition to the basic consensus iterations with optimal weights, we will also compare our multistep iterations with two alternative acceleration schemes from the literature. The first one comes from the literature on accelerated consensus and uses shift registers [7], [25], [26]. Similarly to the multistep method, these techniques use a history of past iterates, stored in local registers, when computing the next. For the basic consensus iterations (21), the shift register yields
(24) 
The current approaches to consensus based on shift registers assume that is given and design to minimize the convergence factor of the iterations. The key results can be traced back to Golub and Varga [27], who determined the optimal and the associated convergence factor to be
(25) 
In our comparisons, the shift-register iterations will use the matrix optimized for the basic consensus iterations and the associated given above. The second acceleration technique that we will compare with is the order-optimal gradient methods developed by Nesterov [15]. While these techniques have an optimal convergence rate, also in the absence of strong convexity, they are not guaranteed to obtain the best convergence factors. For the case of an objective function which is strongly convex with modulus and whose gradient is Lipschitz continuous with constant , the following iterations are proposed in [15]:
initialized with . When we apply this technique to the consensus problem, we arrive at the iterations
(26) 
with parameters and .
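A sketch of such accelerated iterations for the consensus problem is given below, using the constant-momentum form of Nesterov's method for strongly convex objectives with μ = λ₂ and Lipschitz constant λ_max of the graph Laplacian (three-node path, illustrative data; the exact parameter sequence of (26) may differ):

```python
# Nesterov-style accelerated consensus on a 3-node path with Laplacian L:
# the nonzero Laplacian eigenvalues are mu = 1 (= lambda_2) and Lip = 3.
#   y_k     = x_k + beta * (x_k - x_{k-1}),
#   x_{k+1} = y_k - (1/Lip) * L y_k,
# with beta = (sqrt(Lip) - sqrt(mu)) / (sqrt(Lip) + sqrt(mu)).
# The average is preserved at every step, and x converges to it.
import math

L = [[1.0, -1.0, 0.0],
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 1.0]]
mu, Lip = 1.0, 3.0
beta = (math.sqrt(Lip) - math.sqrt(mu)) / (math.sqrt(Lip) + math.sqrt(mu))

c = [3.0, -1.0, 7.0]            # initial node values
avg = sum(c) / len(c)
x, x_prev = list(c), list(c)
for _ in range(100):
    y = [xi + beta * (xi - pi) for xi, pi in zip(x, x_prev)]
    Ly = [sum(L[i][j] * y[j] for j in range(3)) for i in range(3)]
    x, x_prev = [yi - v / Lip for yi, v in zip(y, Ly)], x

assert all(abs(xi - avg) < 1e-10 for xi in x)
```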
Fig. 4 compares the multistep iterations (23) developed in this paper with (a) the basic consensus iterations (21) with a weight matrix determined using the metropolis scheme, (b) the shift-register acceleration (24) with the same weight matrix and the optimal , and (c) the order-optimal method (26). The particular results shown are for a network of nodes in a dumbbell topology. The simulations show that all three methods yield a significant improvement in convergence factors over the basic iterations, and that the multistep method developed in this paper outperforms the alternatives.
Several remarks are in order. First, since the Hessian of (20) equals the identity matrix, the speedup of the multistep iterations is proportional to . When equals , the Laplacian of the underlying graph, we can quantify the speedups for certain classes of graphs using spectral graph theory [28]. For example, the complete graph has , so and there is no real advantage to the multistep iterations. On the other hand, for a ring network the eigenvalues of are given by , so grows quickly with the number of nodes, and the performance improvements of (23) over (22) can be substantial.
Our second remark pertains to the shift-register iterations. Since these iterations have the same form as (23), we can go beyond the current literature on shift-register consensus (which assumes to be given and optimizes ) and provide a jointly optimal weight matrix and parameter:
VII-C Internet congestion control
Our final application is to the area of Internet congestion control, where Network Utility Maximization (NUM) has emerged as a powerful framework for studying various important resource allocation problems, see, e.g., [1, 29, 30, 31]. The vast majority of the work in this area is based on the dual decomposition approach introduced in [29]. Here, the optimal bandwidth sharing among flows in a data network is posed as the optimizer of a convex optimization problem
(27) 
In this formulation is the communication rate of flow , and the strictly concave and increasing function describes the utility that source has of communicating at rate . The communication rate is restricted to a bounded interval. Finally, is a routing matrix, whose entries equal one if flow traverses link and zero otherwise. In this way, is the total traffic on links, which cannot exceed the link capacities . We make the following assumptions.
Assumption 2
For the problem (27) it holds that

Each is twice continuously differentiable and satisfies for

For every link , there exists a source whose flow only traverses , i.e. and for all .
While these assumptions appear restrictive, they are often postulated in the literature (e.g. [29, Assumptions C1-C4]). Note that under Assumption 2, the routing matrix has full row rank and all link constraints hold with equality at the optimum. Hence, we can replace in (27) with , introduce Lagrange multipliers for these constraints, and form the associated dual function
Evaluating amounts to solving an optimization problem in . Since this problem is separable in , it can be solved by each source in isolation based on the sum of the Lagrange multipliers for the links that the flow traverses,
(28) 
Similarly, each Lagrange multiplier
(29) 
can be updated by the corresponding link based on local information: if the traffic demand on the link exceeds capacity, the multiplier is increased, otherwise it is decreased. It is possible to show that, under Assumption 2, the dual function is strongly concave, differentiable, and has a Lipschitz-continuous gradient [29]. Hence, by standard arguments, the updates (28), (29) converge to a primal-dual optimal point for appropriately chosen stepsize .
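The updates (28)-(29) can be sketched on a hypothetical two-link, three-flow instance with logarithmic utilities, for which the source update has a closed form; the topology, capacities, and stepsize below are all illustrative:

```python
# Dual decomposition for NUM, iterations (28)-(29). With U_s(x) = log(x),
# the source update (28) is x_s = 1 / (sum of link prices on its path),
# clipped to the rate interval [m, M]. Each link then adjusts its price
# (29) according to the gap between its load and its capacity.
R = [[1, 0, 1],                 # routing: flow 0 on link 0, flow 1 on link 1,
     [0, 1, 1]]                 #          flow 2 on both links
c = [1.0, 2.0]                  # link capacities
m, M = 1e-6, 10.0               # rate interval
alpha = 0.05                    # dual stepsize

lam = [1.0, 1.0]                # link prices (Lagrange multipliers)
for _ in range(5000):
    # source update (28): best response to the current path prices
    price = [sum(R[l][s] * lam[l] for l in range(2)) for s in range(3)]
    x = [min(max(1.0 / p, m), M) for p in price]
    # link update (29): projected gradient ascent on the dual
    load = [sum(R[l][s] * x[s] for s in range(3)) for l in range(2)]
    lam = [max(0.0, li + alpha * (ld - ci))
           for li, ld, ci in zip(lam, load, c)]

load = [sum(R[l][s] * x[s] for s in range(3)) for l in range(2)]
assert all(abs(ld - ci) < 0.01 for ld, ci in zip(load, c))
```

At convergence both links are fully utilized, consistent with the observation that all link constraints hold with equality at the optimum under Assumption 2.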
Our results from Section V indicate that substantially improved convergence factors could be obtained by the following class of multistep updates of the Lagrange multipliers
(30) 
To tune the stepsizes in an optimal way, we bring the techniques from Section V into action. To do so, we first bound the eigenvalues of using the following result: