Accelerated Gradient Methods for Networked Optimization

Euhanna Ghadimi, Iman Shames, and Mikael Johansson E. Ghadimi and M. Johansson are with the ACCESS Linnaeus Center, Electrical Engineering, Royal Institute of Technology, Stockholm, Sweden. {euhanna, mikaelj}@ee.kth.se. I. Shames is with the Department of Electrical and Electronic Engineering, The University of Melbourne, Melbourne, Australia. iman.shames@unimelb.edu.au.
Abstract

We develop multi-step gradient methods for network-constrained optimization of strongly convex functions with Lipschitz-continuous gradients. Given the topology of the underlying network and bounds on the Hessian of the objective function, we determine the algorithm parameters that guarantee the fastest convergence and characterize situations when significant speed-ups can be obtained over the standard gradient method. Furthermore, we quantify how the performance of the gradient method and its accelerated counterpart is affected by uncertainty in the problem data, and conclude that in most cases our proposed method outperforms gradient descent. Finally, we apply the proposed technique to three engineering problems: resource allocation under network-wide budget constraints, distributed averaging, and Internet congestion control. In all cases, we demonstrate that our algorithm converges more rapidly than alternative algorithms reported in the literature.

I Introduction

Distributed optimization has recently attracted significant attention from several research communities. Examples include the work on network utility maximization for resource allocation in communication networks [1], distributed coordination of multi-agent systems [2], collaborative estimation in wireless sensor networks [3], distributed machine learning [4], and many others. The majority of these approaches apply gradient or sub-gradient methods to the dual formulation of the decision problem. Although gradient methods are easy to implement and require modest computations, they suffer from slow convergence. In some cases, such as the development of distributed power control algorithms for cellular phones [5], one can replace gradient methods by fixed-point iterations and achieve improved convergence rates. For other problems, such as average consensus [6], a number of heuristic methods have been proposed that improve the convergence time of the standard method [7, 8]. However, we are not interested in tailoring techniques to individual problems; our aim is to develop general-purpose schemes that retain the simplicity of the gradient method, yet improve the convergence factors.

Even if the optimization problem is convex and the subgradient method is guaranteed to converge to an optimal solution, the rate of convergence is very modest. The convergence rate of the gradient method is improved if the objective function is differentiable with Lipschitz-continuous gradient, and even more so if the function is also strongly convex. However, for smooth optimization problems several techniques allow for even better convergence rates. One such technique is higher-order methods, such as Newton’s method [9], which use both the gradient and the Hessian of the objective function. Although distributed Newton methods have recently been developed for special problem classes (e.g., [10, 11]), they impose large communication overhead to collect global Hessian information. Another technique is the augmented Lagrangian dual method [12]. This method was originally developed to cope with robustness issues of the dual ascent method, but it turns out that different variations of this technique, such as the method of multipliers [4], tend to converge in fewer iterations than gradient descent. Recently, a few applications of these algorithms to distributed optimization have been proposed [4, 13], but convergence rate estimates and optimal algorithm parameters are still unaddressed. A third way to obtain faster convergence is to use multi-step methods [14, 9]. These methods rely only on gradient information but use a history of the past iterates when computing the future ones. This paper explores the latter approach for distributed optimization, and addresses the design, convergence properties, optimal step-size selection, and robustness of networked multi-step methods. Moreover, we also apply the developed techniques to three important classes of distributed optimization problems.

This paper makes the following contributions. First, we develop a multi-step weighted gradient method that maintains a network-wide constraint on the decision variables throughout the iterations. The accelerated algorithm is based on the heavy ball method by Polyak [14], extended to the networked setting. We derive optimal algorithm parameters, show that the method has linear convergence rate, and quantify the improvement in convergence factor over the gradient method. Our analysis shows that the method is particularly advantageous when the eigenvalues of the Hessian of the objective function and/or the eigenvalues of the graph Laplacian of the underlying network have a large spread. Second, we investigate how similar techniques can be used to accelerate dual decomposition across a network of decision-makers. In particular, given smoothness parameters of the objective function, we present closed-form expressions for the optimal parameters of an accelerated gradient method for the dual. Third, we quantify how the convergence properties of the algorithm are affected when the algorithm is tuned using misestimated problem parameters. This robustness analysis shows that the accelerated algorithm tolerates parameter errors well and in most cases outperforms its non-accelerated counterpart. Finally, we apply the developed algorithms to three case studies: networked resource allocation, consensus, and network flow control. In each application we demonstrate superior performance compared to alternatives from the literature.

The paper is organized as follows. In Section II, we introduce our networked optimization problem. Section III reviews multi-step gradient techniques. Section IV proposes a multi-step weighted gradient algorithm, establishes conditions for its convergence and derives optimal step-size parameters. Section V develops a technique for accelerating the dual problem based on parameters for the (smooth) primal. Section VI presents a robustness analysis of the multi-step algorithm in the presence of uncertainty. Section VII applies the proposed techniques to three engineering problems: resource allocation, consensus and network flow control; numerical results and performance comparisons are presented for each case study. Finally, concluding remarks are given in Section VIII.

II Assumptions and problem formulation

This paper is concerned with collaborative optimization by a network of decision-makers. Each decision-maker v is endowed with a loss function fv, has control of one decision variable xv, and collaborates with the others to solve the optimization problem

 minimize ∑v∈V fv(xv)
 subject to Ax = b   (1)

for a given matrix A and vector b. We will assume that b lies in the range space of A, i.e. that there exists at least one decision vector that satisfies the constraints.

The information exchange between decision-makers is represented by a graph G = (V, E) with vertex set V and edge set E. Specifically, at each time k, we will assume that decision-maker v has access to xw(k) for all its neighbors w ∈ Nv.

Most acceleration techniques in the literature (e.g. [15, 16, 17]) require that the loss functions are smooth and convex. Similarly, we will make the following assumptions:

Assumption 1

Each loss function fv is convex and twice continuously differentiable with

 lv ≤ ∇²fv(xv) ≤ uv,  ∀xv   (2)

for some positive real constants lv and uv with lv ≤ uv.

Some remarks are in order. Let l = minv lv and u = maxv uv, and define f(x) = ∑v∈V fv(xv). Then, Assumption 1 ensures that f is strongly convex with modulus l:

 f(y) ≥ f(x) + (y−x)⊤∇f(x) + (l/2)∥y−x∥²,  ∀(x,y)

and that its gradient is Lipschitz-continuous with constant u:

 f(y) ≤ f(x) + (y−x)⊤∇f(x) + (u/2)∥y−x∥²,  ∀(x,y)

See, e.g., [15, Lemma 1.2.2 and Theorem 2.1.11] for details. Similarly, the Hessian of f satisfies

 lI ≤ ∇²f(x) ≤ uI,  ∀x   (3)

Furthermore, Assumption 1 guarantees that (1) is a convex optimization problem whose unique optimizer x⋆ satisfies

 Ax⋆ = b,  ∇f(x⋆) = A⊤μ⋆   (4)

where μ⋆ is the (unique) optimal Lagrange multiplier for the linear constraints.

III Background on multi-step methods

The basic gradient method for unconstrained minimization of a convex function f takes the form

 x(k+1) = x(k) − α∇f(x(k)),   (5)

where α > 0 is a fixed step-size parameter. Assume that f is strongly convex with modulus l and has a Lipschitz-continuous gradient with constant u. Then if 0 < α < 2/u, the sequence generated by (5) converges to the optimizer x⋆ at linear rate, i.e. there exists a convergence factor q < 1 such that

 ∥x(k+1)−x⋆∥≤q∥x(k)−x⋆∥

for all k ≥ 0. The smallest convergence factor q = (u−l)/(u+l) is obtained for the step-size α = 2/(l+u) (see, e.g., [14]).

While the convergence rate cannot be improved unless higher-order information is considered [14], the convergence factor can be improved by accounting for the history of iterates when computing the ones to come. Methods in which the next iterate depends not only on the current iterate but also on the preceding ones are called multi-step methods. The simplest multi-step extension of the gradient method is

 x(k+1) = x(k) − α∇f(x(k)) + β(x(k) − x(k−1))   (6)

for fixed step-size parameters α and β. This technique, originally proposed by Polyak, is sometimes called the heavy-ball method based on the physical interpretation of the added “momentum term”. For a centralized set-up, Polyak derived the optimal step-size parameters α⋆ = (2/(√u + √l))² and β⋆ = ((√u − √l)/(√u + √l))², and showed that these guarantee a convergence factor of (√u − √l)/(√u + √l), which is always smaller than the convergence factor for the gradient method, and significantly so when u/l is large.
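To make the comparison concrete, the following sketch (not from the paper) minimizes an illustrative two-dimensional quadratic with l = 1 and u = 100 using the gradient method (5) and the heavy-ball method (6), each with its optimal tuning:

```python
import math

# Minimize the illustrative quadratic f(x) = 0.5*(l*x1^2 + u*x2^2),
# which is strongly convex with modulus l and has a u-Lipschitz
# gradient; the optimizer is the origin.
l, u = 1.0, 100.0

def grad(x):
    return (l * x[0], u * x[1])

def gradient(x, steps):
    # Gradient method (5) with the optimal step-size 2/(l+u).
    a = 2.0 / (l + u)
    for _ in range(steps):
        g = grad(x)
        x = (x[0] - a * g[0], x[1] - a * g[1])
    return x

def heavy_ball(x, steps):
    # Heavy-ball method (6) with Polyak's optimal parameters.
    a = (2.0 / (math.sqrt(u) + math.sqrt(l))) ** 2
    b = ((math.sqrt(u) - math.sqrt(l)) / (math.sqrt(u) + math.sqrt(l))) ** 2
    prev = x
    for _ in range(steps):
        g = grad(x)
        cur = (x[0] - a * g[0] + b * (x[0] - prev[0]),
               x[1] - a * g[1] + b * (x[1] - prev[1]))
        prev, x = x, cur
    return x

def norm(x):
    return math.hypot(x[0], x[1])

x0 = (1.0, 1.0)
print(norm(gradient(x0, 200)), norm(heavy_ball(x0, 200)))
```

With u/l = 100, the gradient factor is 99/101 ≈ 0.98 while the heavy-ball factor is 9/11 ≈ 0.82, so after a few hundred iterations the gap in accuracy is dramatic.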

In the following sections, we will develop multi-step gradient methods for network-constrained optimization, analyze their convergence properties, and develop techniques for finding the optimal algorithm parameters.

IV A multi-step weighted gradient method

In the absence of constraints, (1) is trivial to solve since the objective function is separable and each decision-maker could simply minimize its loss independently of the others. Hence, it is the existence of constraints that makes (1) challenging. In the optimization literature, there are essentially two ways of dealing with constraints. One way is to project the iterates onto the constraint set to maintain feasibility at all times; such a method will be developed in this section. The other way is to use dual decomposition to eliminate couplings between decision-makers and solve the associated dual problem; we will return to such techniques in Section V.

Computing the Euclidean projection onto the constraint of (1) typically requires the full decision vector , which is not available to the decision-makers in our setting. An alternative, explored e.g. in [18], is to consider weighted gradient methods which use a linear combination of the information available to nodes to ensure that iterates remain feasible. For our problem (1) the weighted gradient method takes the form

 x(k+1) = x(k) − αW∇f(x(k))   (7)

where W is a weight matrix that satisfies the sparsity constraint that Wvw = 0 if v ≠ w and (v,w) ∉ E. In this way, the iterations (7) read

 xv(k+1) = xv(k) − α ∑w∈{v}∪Nv Wvw ∇fw(xw(k))

and can be executed by individual decision-makers based on the information that they have access to. If W satisfies

 AW = 0,  WA⊤ = 0   (8)

then any initially feasible iterate will always remain feasible. While the constraints on W might appear restrictive, it is possible to construct appropriate weight matrices for many applications. The following examples describe two such cases.
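A minimal sketch of this feasibility-preservation property (not from the paper): with A = [1 1 1] (a total-budget constraint) and W the Laplacian of a 3-node path graph, AW = 0 and WA⊤ = 0 hold, so the weighted gradient step (7) keeps sum(x) invariant. The quadratic losses fv(xv) = 0.5(xv − cv)² are hypothetical, for illustration only.

```python
# W: Laplacian of the path graph 1-2-3; its rows and columns sum to zero,
# so AW = 0 and WA^T = 0 for A = [1 1 1].
W = [[1.0, -1.0, 0.0],
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 1.0]]
c = [3.0, -1.0, 5.0]                             # per-node loss minimizers

def step(x, alpha=0.3):
    g = [x[v] - c[v] for v in range(3)]          # gradients of f_v
    return [x[v] - alpha * sum(W[v][w] * g[w] for w in range(3))
            for v in range(3)]

x = [2.0, 2.0, 2.0]                              # feasible: sum(x) = 6
for _ in range(100):
    x = step(x)
print(sum(x), x)   # sum stays 6; x approaches (8/3, -4/3, 14/3)
```

The fixed point equalizes the gradients across nodes while preserving the budget, which is exactly the optimality condition (4) for this problem.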

Example 1

When the decision-makers are only constrained by a total resource budget, (1) reduces to

 minimize ∑v∈V fv(xv)
 subject to ∑v∈V xv = xtot

A distributed gradient method for this problem was developed in [19]. Later, [18] interpreted these iterations as a weighted gradient method and developed techniques for computing the weight matrix that minimizes the guaranteed convergence factor.

Example 2

Consider a scenario where the decision-makers have to find a common decision that minimizes the total cost

 minimize∑v∈Vfv(x)

We can rewrite this problem in our standard form (1) by introducing local decision variables xv:

 minimize ∑v∈V fv(xv)
 subject to xv = xw, ∀(v,w) ∈ E   (9)

Note that in vector form, the constraint of (9) reads Ax = 0, where A is the incidence matrix of the graph G. Next, we will show that the gradient iterations for the dual problem of (9) have the structure of a weighted gradient method in the primal variables. To this end, we form the Lagrangian L(x,μ) = f(x) − μ⊤Ax and the dual function

 d(μ) = infx L(x,μ) = infx f(x) − μ⊤Ax

Under Assumption 1, the Lagrangian has a unique minimizer x(μ) for every μ, and the dual function is continuously differentiable with ∇d(μ) = −Ax(μ). Hence, the iterations

 μ(k+1) = μ(k) − αAx(k),  x(k+1) = ∇f⁻¹(A⊤μ(k+1))

will converge to a primal-dual optimal pair for an appropriately chosen step-size α. Introducing z(k) = A⊤μ(k) and multiplying both sides of the μ-iteration by A⊤, we obtain

 z(k+1) = z(k) − αW∇f⁻¹(z(k)),  x(k+1) = ∇f⁻¹(z(k+1))   (10)

Note that W = A⊤A is the graph Laplacian of G and thus satisfies the sparsity constraint for distributed execution detailed above. One can readily verify that W has a simple eigenvalue at 0, whose eigenvector is the vector of all ones when G is connected.

One important application of this technique is to distributed averaging, in which nodes should converge to the network-wide average of constants cv held by each node v. This average can be found by solving (9) with fv(xv) = (xv − cv)²/2 (since its optimal solution is the average of the constants cv). The corresponding iterations (10) read

 z(k+1) = z(k) − αW(z(k) + c),  x(k+1) = z(k+1) + c

We will return to these iterations and their accelerated counterparts in Section VII.
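As a sketch (under the quadratic-loss assumption above), the primal sequence induced by these dual iterations reduces to x(k+1) = (I − αW)x(k) started from x(0) = c, with W the graph Laplacian; the 4-node path below is an illustrative choice:

```python
# Distributed averaging: x(k+1) = (I - alpha*W) x(k), x(0) = c,
# where W is the Laplacian of the path graph 1-2-3-4.
W = [[1.0, -1.0, 0.0, 0.0],
     [-1.0, 2.0, -1.0, 0.0],
     [0.0, -1.0, 2.0, -1.0],
     [0.0, 0.0, -1.0, 1.0]]
c = [4.0, 0.0, 2.0, 6.0]
alpha = 0.3                     # satisfies alpha < 2/lambda_n(W)

x = list(c)
for _ in range(300):
    x = [x[v] - alpha * sum(W[v][w] * x[w] for w in range(4))
         for v in range(4)]
print(x)                        # every entry approaches mean(c) = 3.0
```

Each update only combines a node's own value with those of its graph neighbors, so the scheme is fully distributed.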

IV-A A multi-step weighted gradient method and its convergence

The examples indicate that variants of the weighted gradient method with improved convergence factors could also speed up the convergence of network-wide resource allocation and consensus processes. To this end, we consider the following multi-step variant of the weighted gradient iteration

 x(k+1) = x(k) − αW∇f(x(k)) + β(x(k) − x(k−1))   (11)

Under the sparsity constraint on W detailed above, these iterations can be implemented by individual decision-makers. Moreover, (8) ensures that if x(0) and x(1) satisfy the constraints of (1), then every iterate produced by (11) will also be feasible. The next theorem characterizes the convergence of the iterations (11) and derives optimal step-size parameters.

Theorem 1

Consider the optimization problem (1) under Assumption 1, and let x⋆ denote its unique optimizer. Assume that W has a simple eigenvalue at 0 and satisfies AW = 0 and WA⊤ = 0. Let λ̲ and λ̄ be the smallest and largest magnitudes of the non-zero eigenvalues of W∇²f(x). Then, for

 0 ≤ β < 1,  0 < α < 2(1+β)/(uλn(W))

the iterates (11) converge to x⋆ at linear rate

 ∥x(k+1)−x⋆∥ ≤q∥x(k)−x⋆∥∀k≥0

with q < 1. Moreover, the minimal value of q is

 q⋆ = (√λ̄ − √λ̲)/(√λ̄ + √λ̲)

obtained for the step-sizes α = α⋆ and β = β⋆, where

 α⋆ = (2/(√λ̄ + √λ̲))²,  β⋆ = ((√λ̄ − √λ̲)/(√λ̄ + √λ̲))²

See the appendix for all the proofs. Similar to the discussion in Section III, it is interesting to investigate when (11) significantly improves over the single-step algorithm. In [18], it is shown that the best convergence factor of the weighted gradient iteration (7) is

 q⋆0 = (λ̄ − λ̲)/(λ̄ + λ̲)

One can verify that q⋆ ≤ q⋆0, i.e. the multi-step iterations can always be tuned to converge faster. Moreover, the improvement in convergence factor depends on the quantity λ̄/λ̲: when λ̄/λ̲ is large, the speed-up is roughly proportional to √(λ̄/λ̲). In the networked setting, there are two reasons for a large value of λ̄/λ̲. One is simply that the Hessian of the objective function is ill-conditioned, so that the ratio u/l is large. The other is that the matrix W is ill-conditioned, i.e. that λn(W)/λ2(W) is large. As we have seen in the examples, the graph Laplacian is often a valid choice for W. Thus, the topology of the underlying graph directly impacts the convergence rate (and the convergence rate improvements) of the multi-step weighted gradient method. We will return to this in detail in Section VII.

In many applications, we will not know ∇²f(x), but only bounds such as (3). The next result can then be useful.

Proposition 1

Let λ̄W = uλn(W) and λ̲W = lλ2(W). Then λ̄ ≤ λ̄W and λ̲ ≥ λ̲W. Moreover, the step-sizes

 α = (2/(√λ̄W + √λ̲W))²,  β = ((√λ̄W − √λ̲W)/(√λ̄W + √λ̲W))²

guarantee that (11) converges to x⋆ at linear rate

 ∥x(k+1)−x⋆∥ ≤ q̃∥x(k)−x⋆∥,  ∀k,

where

 q̃ = (√λ̄W − √λ̲W)/(√λ̄W + √λ̲W)
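The tuning of Proposition 1 can be sketched on the hypothetical budget example from before: quadratic losses fv(xv) = 0.5(xv − cv)² give l = u = 1, and the 3-node path Laplacian W has non-zero eigenvalues λ2 = 1 and λ3 = 3, so λ̲W = 1 and λ̄W = 3.

```python
import math

W = [[1.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 1.0]]
c = [3.0, -1.0, 5.0]
lam_lo, lam_hi = 1.0, 3.0       # l*lambda_2(W) and u*lambda_n(W)

# Step-sizes from Proposition 1.
alpha = (2.0 / (math.sqrt(lam_hi) + math.sqrt(lam_lo))) ** 2
beta = ((math.sqrt(lam_hi) - math.sqrt(lam_lo)) /
        (math.sqrt(lam_hi) + math.sqrt(lam_lo))) ** 2

def wgrad(x, v):
    # v-th component of W * grad f(x) for f_v(x_v) = 0.5*(x_v - c_v)^2
    return sum(W[v][w] * (x[w] - c[w]) for w in range(3))

x = [2.0, 2.0, 2.0]             # feasible start: sum(x) = 6
prev = list(x)
for _ in range(80):
    nxt = [x[v] - alpha * wgrad(x, v) + beta * (x[v] - prev[v])
           for v in range(3)]
    prev, x = x, nxt
print(sum(x), x)                # budget preserved; x -> (8/3, -4/3, 14/3)
```

The guaranteed factor here is q̃ = (√3 − 1)/(√3 + 1) ≈ 0.27, compared to (3 − 1)/(3 + 1) = 0.5 for the single-step weighted gradient method.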

IV-B Optimal weight selection for the multi-step method

The results in the previous subsection provide optimal step-size parameters α⋆ and β⋆ for a given weight matrix W. However, the expressions for the associated convergence factors depend on the eigenvalues of W, and optimizing the entries of W jointly with the step-size parameters can yield even further speed-ups. We make the following observation.

Proposition 2

Under the hypotheses of Proposition 1,

• If ∇²f(x) is known, then minimizing the convergence factor q⋆ is equivalent to minimizing the ratio λ̄/λ̲.

• If ∇²f(x) is not known, while l and u in (3) are, then the weight matrix that minimizes q̃ is the one with minimal value of λn(W)/λ2(W).

The next result shows how the optimal weight selection for both scenarios can be found via convex optimization.

Proposition 3

Let M be the set of real symmetric matrices with the sparsity pattern induced by G, i.e.

 M = {M ∈ Sn | Mvw = 0 if v ≠ w and (v,w) ∉ E}.

Then the problem of minimizing is equivalent to

 minimize over ω, t:  t
 subject to  In−m ≤ P⊤H^{1/2}ωH^{1/2}P ≤ tIn−m
             H^{1/2}ωH^{1/2} ∈ M,  H^{1/2}ωH^{1/2}V = 0,   (12)

where V is a matrix of unit vectors spanning the eigenspace corresponding to the zero eigenvalues of H^{1/2}ωH^{1/2}, and P is a matrix of unit vectors spanning its orthogonal complement.

Note that when we only want to minimize the condition number of W subject to the structural constraints, we simply set H = I in the formulation above.

Remark 1

The lower bound in (12) is rather arbitrary: any scaled matrix ηW for η > 0 has the same condition number as W, and if α⋆ and β⋆ are the optimal step-sizes for the matrix W, then α⋆/η and β⋆ are optimal for ηW.

V A multi-step dual ascent method

An alternative approach for solving (1) is to use Lagrange relaxation, i.e. to introduce Lagrange multipliers for the equality constraints and solve the dual problem. The dual function associated with (1) is then

 d(μ) ≜ infx f(x) + μ⊤(Ax − b) = −f⋆(−A⊤μ) − μ⊤b   (13)

where f⋆ is the conjugate function of f. The dual problem is to maximize the dual function with respect to μ, i.e.,

 minimize over μ:  −d(μ) = f⋆(−A⊤μ) + b⊤μ.

Recall that if f is strongly convex, then f⋆, and hence −d, are convex and continuously differentiable [20]. Hence, in light of our earlier discussion, it is natural to attempt to solve the dual problem using the multi-step iteration

 μ(k+1) =μ(k)+α∇d(μ(k))+β(μ(k)−μ(k−1)). (14)

In order to find the optimal step-sizes and estimate the convergence factors of the iterations, we need to be able to bound the convexity modulus of −d and the Lipschitz constant of its gradient. The following observation is a first step towards this goal:

Lemma 1

Consider the optimization problem (1) with associated dual function (13). Let f be a continuously differentiable and closed convex function. Then,

• If f is strongly convex with modulus l, then ∇d is Lipschitz continuous with constant λn(AA⊤)/l.

• If ∇f is Lipschitz continuous with constant u, then −d is strongly convex with modulus λ1(AA⊤)/u.

These dual bounds can be used to find step-sizes with strong performance guarantees for the dual iterations. Specifically:

Theorem 2

Consider the smoothness bounds stated in Lemma 1. Then, the accelerated dual iterations (14) converge to the dual optimizer μ⋆ at linear rate with the guaranteed convergence factor

 q⋆ = (√(uλn(AA⊤)) − √(lλ1(AA⊤))) / (√(uλn(AA⊤)) + √(lλ1(AA⊤))),

obtained for step-sizes:

 α⋆ = (2/(√(λn(AA⊤)/l) + √(λ1(AA⊤)/u)))²,  β⋆ = ((√(uλn(AA⊤)) − √(lλ1(AA⊤)))/(√(uλn(AA⊤)) + √(lλ1(AA⊤))))².
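This tuning is easy to evaluate from primal data alone. Per Lemma 1, −d has convexity modulus λ1(AA⊤)/u and its gradient has Lipschitz constant λn(AA⊤)/l; the sketch below applies the heavy-ball tuning to these bounds for hypothetical problem data (curvature bounds l, u and the incidence matrix A of a 3-node path graph, for which AA⊤ = [[2,−1],[−1,2]] has eigenvalues 1 and 3):

```python
import math

l, u = 1.0, 4.0                 # illustrative primal curvature bounds
lam1, lamn = 1.0, 3.0           # extreme eigenvalues of A*A^T

L = lamn / l                    # Lipschitz constant of grad d (Lemma 1)
m = lam1 / u                    # strong-convexity modulus of -d (Lemma 1)

# Heavy-ball tuning for the dual iterations (14).
alpha = (2.0 / (math.sqrt(L) + math.sqrt(m))) ** 2
beta = ((math.sqrt(L) - math.sqrt(m)) / (math.sqrt(L) + math.sqrt(m))) ** 2
q = (math.sqrt(L) - math.sqrt(m)) / (math.sqrt(L) + math.sqrt(m))
print(alpha, beta, q)
```

Note that the factor q equals (√(uλn) − √(lλ1))/(√(uλn) + √(lλ1)) after multiplying numerator and denominator by √(lu), so only the ratio uλn/(lλ1) matters for the guaranteed rate.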

The advantage of Theorem 2 is that it provides step-size parameters with guaranteed convergence factor using readily available data of the primal problem. How close to optimal these results are depends on how tight the bounds in Lemma 1 are. If the bounds are tight, then the step-sizes in Theorem 2 are truly optimal. The next example shows that a certain degree of conservatism may be present, even for quadratic programming problems.

Example 3

Consider the quadratic program
 minimize (1/2)x⊤Qx
 subject to Ax = b

where Q is positive definite and A is nonsingular. This implies that the objective function is strongly convex with modulus λ1(Q) and that its gradient is Lipschitz-continuous with constant λn(Q). Hence, according to Lemma 1, −d is strongly convex with modulus λ1(AA⊤)/λn(Q) and its gradient is Lipschitz continuous with constant λn(AA⊤)/λ1(Q). However, direct calculations reveal that

 d(μ) = −(1/2)μ⊤AQ⁻¹A⊤μ − μ⊤b

from which we see that −d has convexity modulus λ1(AQ⁻¹A⊤) and that its gradient is Lipschitz continuous with constant λn(AQ⁻¹A⊤). By [21, p. 225], these bounds are tighter than those offered by Lemma 1. Specifically, for the congruent matrices AQ⁻¹A⊤ and AA⊤ there exist nonnegative real numbers θk such that λk(AQ⁻¹A⊤) = θk λk(AA⊤) with 1/λn(Q) ≤ θk ≤ 1/λ1(Q). For k = 1 and k = n we obtain

 λ1(AA⊤)/λn(Q) ≤ λ1(AQ⁻¹A⊤),  λn(AQ⁻¹A⊤) ≤ λn(AA⊤)/λ1(Q)

For some important classes of problems, the bounds are, however, tight. One such example is the average consensus application considered in Section VII.

VI Robustness analysis

The proposed multi-step methods have significantly improved convergence factors compared to the gradient iterations, and particularly so when the Hessian of the loss function and/or the graph Laplacian of the network is ill-conditioned. However, to design the optimal step-sizes for the multi-step methods one needs to know the upper and lower bounds on the Hessian and the largest and smallest non-zero eigenvalue of the graph Laplacian. These quantities can be hard to estimate accurately in practice. It is therefore important to analyze the sensitivity of the multi-step methods to errors in these parameters to assess if the performance benefits prevail when the step-sizes are tuned based on misestimated parameters. Such a robustness analysis will be performed next.

Let λ̰ and λ̃ denote the estimates of λ̲ and λ̄ available when tuning the step-sizes. We are interested in quantifying how the convergence properties, and the convergence factors, of the gradient and the multi-step methods are affected when λ̰ and λ̃ are used in the step-size formulas that we have derived earlier. Theorem 1 provides some useful observations for the multi-step method. The corresponding results for the weighted gradient method are summarized in the following lemma:

Lemma 2

Consider the weighted gradient iterations (7), and let λ̄ and λ̲ denote the largest and smallest magnitudes of the non-zero eigenvalues of W∇²f(x), respectively. Then, for a fixed step-size α with 0 < α < 2/λ̄, (7) converges to x⋆ at linear rate with convergence factor

 qG = max{|1 − αλ̲|, |1 − αλ̄|}

The minimal value qG = (λ̄ − λ̲)/(λ̄ + λ̲) is obtained for the step-size α = 2/(λ̲ + λ̄).

Combining this lemma with our previous results from Theorem 1 yields the following observation.

Proposition 4

Let λ̰ and λ̃ be estimates of λ̲ and λ̄, respectively, and assume that 0 < λ̰ ≤ λ̃. Then, for all values of λ̰ and λ̃ such that λ̰ + λ̃ > λ̄, both the weighted gradient iteration (7) with step-size

 α̃ = 2/(λ̃ + λ̰)   (15)

and the multi-step method variant (11) with

 α̃ = (2/(√λ̃ + √λ̰))²,  β̃ = ((√λ̃ − √λ̰)/(√λ̃ + √λ̰))²   (16)

converge to the optimizer of (1).

In practice, one should expect that λ̄ is overestimated, in which case both methods converge. However, convergence can be guaranteed for a much wider range of perturbations. Figure 1 considers perturbations of the form λ̰ = λ̲ + ε̰ and λ̃ = λ̄ + ε̃. The white area is the locus of perturbations for which convergence is guaranteed, while the dark area represents inadmissible perturbations which render either λ̰ or λ̃ negative. Note that both algorithms are robust to a continuous departure from the true values of λ̲ and λ̄, since there is a ball of positive radius around the true values for which the methods are guaranteed to converge.

Next, we proceed to compare the convergence factors of the two methods when the step-sizes are tuned based on inaccurate parameters. The following lemma is then useful.

Lemma 3

Let λ̰ and λ̃ satisfy the hypotheses of Proposition 4. The convergence factor of the weighted gradient method (7) with step-size (15) is given by

 q̃G = 2λ̄/(λ̰ + λ̃) − 1   if λ̰ + λ̃ ≤ λ̲ + λ̄
 q̃G = 1 − 2λ̲/(λ̰ + λ̃)   otherwise,   (17)

while the multi-step weighted gradient method (11) with step-sizes (16) has convergence factor

 q̃ = max{√β̃, |1 + β̃ − α̃λ̲| − √β̃, |1 + β̃ − α̃λ̄| − √β̃}   (18)

The convergence factor expressions derived in Lemma 3 allow us to come to the following conclusions:

Proposition 5

Let λ̰ = λ̲ + ε̰ and λ̃ = λ̄ + ε̃, and define the set of perturbations under which the methods converge

 C = {(ε̰, ε̃) | ε̰ ≥ −λ̲, ε̃ ≥ −λ̄, ε̰ + ε̃ ≥ −λ̲}

and the fourth quadrant Q4 of the perturbation space, in which λ̲ is underestimated (ε̰ ≤ 0) and λ̄ is overestimated (ε̃ ≥ 0). Then, for all (ε̰, ε̃) ∈ C ∖ Q4, it holds that q̃ ≤ q̃G. However, there exist (ε̰, ε̃) ∈ Q4 for which the weighted gradient method has a smaller convergence factor than the multi-step variant. In particular, for

 (ε̰, ε̃) ∈ Q4 and (λ̄ + ε̃)/(λ̲ + ε̰) ≥ (λ̄/λ̲)²   (19)

the multi-step iterations (11) converge slower than (7).

Fig. 1(b) illustrates the different perturbations considered in Proposition 5. While the multi-step method has a superior convergence factor for most perturbations, the troublesome region is envisaged to be the most likely one in engineering applications, because it represents the perturbations where the smallest eigenvalue is underestimated while the largest eigenvalue is overestimated. To shed more light on the convergence properties in this region, we perform a numerical study on a quadratic function for a range of condition numbers λ̄/λ̲. We first consider symmetric perturbations ε̰ = −ε and ε̃ = ε, which leave the sum λ̰ + λ̃ = λ̲ + λ̄ unchanged, so the convergence factor of the gradient method is unaffected while the convergence factor of the multi-step method follows (18). Fig. 2(a) shows the convergence factors as a function of the perturbation magnitude ε: the gradient iterations are insensitive to this class of perturbations, while the performance of the multi-step iterations degrades with the size of the perturbation and will eventually become inferior to the gradient method. To complement this analysis, we also sweep over the perturbations and compute the convergence factors of the two methods for problems with different condition numbers. The plot in Fig. 2(b) indicates that when the condition number increases, the area where the gradient method is superior (the area above the contour line) shrinks. It also shows that when λ̰ tends to zero or λ̃ is very large, the performance of the multi-step method is severely degraded.
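The robustness trends above can be checked numerically. The sketch below (illustrative true bounds 1 and 10, not from the paper) tunes the multi-step step-sizes (16) with misestimated bounds and evaluates the resulting convergence factor as the spectral radius of the 2×2 companion matrix of each extreme eigenmode, rather than evaluating (18) symbolically:

```python
import cmath
import math

lam_lo, lam_hi = 1.0, 10.0      # true extreme eigenvalues

def mode_factor(a, b, lam):
    # Spectral radius of the companion matrix [[1 + b - a*lam, -b], [1, 0]].
    t = 1.0 + b - a * lam
    disc = cmath.sqrt(t * t - 4.0 * b)
    return max(abs((t + disc) / 2.0), abs((t - disc) / 2.0))

def tuned_factor(lo_est, hi_est):
    # Step-sizes (16) computed from (possibly wrong) estimates.
    a = (2.0 / (math.sqrt(hi_est) + math.sqrt(lo_est))) ** 2
    b = ((math.sqrt(hi_est) - math.sqrt(lo_est)) /
         (math.sqrt(hi_est) + math.sqrt(lo_est))) ** 2
    return max(mode_factor(a, b, lam_lo), mode_factor(a, b, lam_hi))

print(tuned_factor(1.0, 10.0))   # exact tuning: (sqrt(10)-1)/(sqrt(10)+1)
print(tuned_factor(0.5, 20.0))   # misestimated bounds: degraded but < 1
```

Underestimating λ̲ and overestimating λ̄ by a factor of two degrades the factor from roughly 0.52 to 0.73 in this example, yet convergence is preserved.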

VII Applications

In this section, we will apply the developed techniques to three classes of engineering problems for which distributed optimization techniques have received significant attention. These are resource allocation subject to a network-wide resource constraint, distributed average consensus, and Internet congestion control. In all cases, we will demonstrate that significant speed-ups can be achieved by direct application of our results, even when compared to acceleration techniques that have been tailor-made to the specific problem class.

VII-A Accelerated resource allocation

Our first application is the distributed resource allocation problem under a network-wide resource constraint described in Example 1. This problem class was introduced in [19] and revisited by [18], who developed optimal and heuristic weights for the corresponding weighted gradient iteration (7). We hence compare the multi-step method developed in this paper with the optimal and suboptimal tuning for the standard weighted gradient iterations proposed in [18]. Similarly to [18], we create problem instances by generating random networks and assigning loss functions of the form considered in [18] to the nodes, with parameters drawn uniformly from fixed intervals. In [18], bounds on the second derivatives of these loss functions were derived.

Fig. 3 shows a representative example of a problem instance along with the convergence behavior for weighted and multi-step weighted gradient iterations for several weight choices. The optimal weights for the weighted gradient method can be found by solving a semi-definite program derived in [18], and by Proposition 3 for the multi-step variant. In addition, we use the heuristic weights “best constant” and “metropolis” introduced in [18]. In all cases, we observe significantly improved convergence factors for the multi-step method.

In addition to simulations, we compare the analytical expressions for the convergence factors of the weighted gradient and multi-step iterations. Table I again demonstrates the superior performance of the multi-step method. In addition to the heuristic weights considered previously, we have also used the “max-degree” weight heuristic from [18]. While this weight setting tends to be worse than “best constant” for the scaled gradient iterations, the two heuristics always result in the same convergence factor for the multi-step method. This follows from Remark 1 and the fact that both heuristics generate weight matrices of the form ηL, where L is the Laplacian of the underlying graph and η is a positive scalar.

VII-B Distributed averaging and consensus

Our second application is devoted to distributed averaging. Distributed algorithms for consensus seeking have been researched intensively for decades, see e.g. [6, 22, 23]. Here, each node v in the network initially holds a value cv and coordinates with its neighbors in the graph to find the network-wide average. Clearly, this average can be found by applying any distributed optimization technique to the problem

 minimize over x:  ∑v∈V (1/2)(x − cv)²   (20)

since the optimal solution to this problem is the network-wide average of the constants cv. In particular, we will explore how the multi-step technique described in Example 2 with our optimal parameter selection rule compares with state-of-the-art distributed averaging algorithms from the literature.

The basic consensus algorithms use iterations of the form

 xv(k+1) = Qvv xv(k) + ∑w∈Nv Qvw xw(k)   (21)

where Qvw are scalar weights, and the node states are initialized with xv(0) = cv. The paper [24] provides necessary and sufficient conditions on the weight matrix Q for the iterations to converge to the network-wide average of the initial values, along with computational procedures for finding the Q that minimizes the convergence factor of the iterations.

Following the steps of Example 2, the optimization approach to consensus would suggest the iterations

 x(k+1) =x(k)−αWx(k) (22)

with W = A⊤A, where A is the incidence matrix of G. These iterations are of the same form as (21) but use a particular weight matrix. The multi-step counterpart of (22) is

 x(k+1) =((1+β)I−αW)x(k)−βx(k−1) (23)

In a fair comparison between the multi-step iterations (23) and the basic consensus iterations, the weight matrices of the two approaches should not necessarily be the same, nor necessarily equal to the graph Laplacian. Rather, the weight matrix for the consensus iterations (21) should be optimized using the results from [24], and the weight matrix for the multi-step iteration should be computed using Proposition 3.

In addition to the basic consensus iterations with optimal weights, we will also compare our multi-step iterations with two alternative acceleration schemes from the literature. The first one comes from the literature on accelerated consensus and uses shift registers [7, 25, 26]. Similarly to the multi-step method, these techniques use a history of past iterates, stored in local registers, when computing the next. For the basic consensus iterations (21), the shift register yields

 x(k+1) =ζQx(k)+(1−ζ)x(k−1) (24)

The current approaches to consensus based on shift registers assume that Q is given and design ζ to minimize the convergence factor of the iterations. The key results can be traced back to Golub and Varga [27], who determined the optimal ζ⋆ and the associated convergence factor to be

 ζ⋆ = 2/(1 + √(1 − λn−1²(Q))),  q⋆SR = √( (1 − √(1 − λn−1²(Q))) / (1 + √(1 − λn−1²(Q))) )   (25)
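The Golub–Varga tuning (25) is straightforward to evaluate; a sketch with an illustrative second-largest eigenvalue magnitude of 0.95 (a hypothetical value, not from the paper):

```python
import math

def shift_register_tuning(lam2):
    # lam2: second-largest eigenvalue magnitude of Q, assumed in (0, 1).
    s = math.sqrt(1.0 - lam2 ** 2)
    zeta = 2.0 / (1.0 + s)                  # optimal zeta from (25)
    q_sr = math.sqrt((1.0 - s) / (1.0 + s)) # resulting convergence factor
    return zeta, q_sr

zeta, q_sr = shift_register_tuning(0.95)
print(zeta, q_sr)   # q_sr is well below the plain consensus factor 0.95
```

The closer λn−1(Q) is to one, the larger the relative gain of the shift-register scheme over the basic iterations.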

In our comparisons, the shift-register iterations will use the Q-matrix optimized for the basic consensus iterations and the associated ζ⋆ given above. The second acceleration technique that we will compare with is the order-optimal gradient method developed by Nesterov [15]. While these techniques have an optimal convergence rate, also in the absence of strong convexity, they are not guaranteed to attain the best convergence factors. For the case of an objective function which is strongly convex with modulus l and whose gradient is Lipschitz continuous with constant u, the following iterations are proposed in [15]:

 x̂(k+1) = x(k) − ∇f(x(k))/u
 x(k+1) = x̂(k+1) + ((√u − √l)/(√u + √l)) (x̂(k+1) − x̂(k))

initialized with x̂(0) = x(0). When we apply this technique to the consensus problem, we arrive at the iterations

 x(k+1) =(I−αW)(x(k)+b(x(k)−x(k−1))) (26)

with parameters α = 1/u and b = (√u − √l)/(√u + √l), where l = λ2(W) and u = λn(W).
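The following sketch runs the constant-momentum iteration (26) on the same 10-node ring Laplacian as before, with the parameters α = 1/u and b = (√u − √l)/(√u + √l) read off from the Nesterov scheme above (these parameter choices are our reading of the scheme, not tuned values).

```python
import numpy as np

# Laplacian of a 10-node ring graph
n = 10
W = 2 * np.eye(n)
for i in range(n):
    W[i, (i + 1) % n] -= 1
    W[i, (i - 1) % n] -= 1

eig = np.sort(np.linalg.eigvalsh(W))
l, u = eig[1], eig[-1]                    # lambda_2(W), lambda_n(W)
alpha = 1.0 / u
b = (np.sqrt(u) - np.sqrt(l)) / (np.sqrt(u) + np.sqrt(l))

rng = np.random.default_rng(1)
x0 = rng.standard_normal(n)
avg = np.full(n, x0.mean())

x_prev, x = x0.copy(), x0.copy()
for _ in range(120):                      # iteration (26)
    y = x + b * (x - x_prev)
    x_prev, x = x, y - alpha * (W @ y)

err = np.linalg.norm(x - avg)
print(err)
```

As in the multi-step case, the mode associated with the zero eigenvalue of W is left untouched, so the iterates converge to the average of the initial values.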

Fig. 4 compares the multi-step iterations (23) developed in this paper with (a) the basic consensus iterations (21) with a weight matrix determined using the Metropolis scheme, (b) the shift-register acceleration (24) with the same weight matrix and the optimal ζ, and (c) the order-optimal method (26). The particular results shown are for a network with a dumbbell topology. The simulations show that all three accelerated methods yield a significant improvement in convergence factors over the basic iterations, and that the multi-step method developed in this paper outperforms the alternatives.

Several remarks are in order. First, since the Hessian of (20) equals the identity matrix, the speed-up of the multi-step iterations is proportional to √(λn(W)/λ2(W)). When W equals L, the Laplacian of the underlying graph, we can quantify the speed-ups for certain classes of graphs using spectral graph theory [28]. For example, the complete graph has λ2(L) = λn(L) = n, so the ratio equals one and there is no real advantage of the multi-step iterations. On the other hand, for a ring network the eigenvalues of L are given by 2 − 2cos(2πk/n), k = 0, …, n − 1, so λn(L)/λ2(L) grows quickly with the number of nodes, and the performance improvements of (23) over (22) can be substantial.
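The two examples can be checked numerically: the Laplacian condition number λn(L)/λ2(L) of the complete graph is exactly one, while for the ring it is large already at modest network sizes (growing roughly like n²).

```python
import numpy as np

def lap_cond(L):
    """Return lambda_n(L) / lambda_2(L) for a graph Laplacian L."""
    eig = np.sort(np.linalg.eigvalsh(L))
    return eig[-1] / eig[1]

n = 20
# Complete graph: L = n*I - 11^T, eigenvalues {0, n, ..., n}
L_complete = n * np.eye(n) - np.ones((n, n))
# Ring graph: eigenvalues 2 - 2*cos(2*pi*k/n)
L_ring = 2 * np.eye(n)
for i in range(n):
    L_ring[i, (i + 1) % n] -= 1
    L_ring[i, (i - 1) % n] -= 1

c_complete = lap_cond(L_complete)
c_ring = lap_cond(L_ring)
print(c_complete, c_ring)
```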

Our second remark pertains to the shift-register iterations. Since these iterations have the same form as (23), we can go beyond the current literature on shift-register consensus (which assumes Q to be given and optimizes ζ) and provide the jointly optimal weight matrix and ζ-parameter:

Proposition 6

The weight matrix Q⋆ and constant ζ⋆ that minimize the convergence factor of the shift-register consensus iterations (24) are

 Q⋆ =I−θ⋆W⋆,ζ⋆=1+β⋆

where W⋆ is computed in Proposition 3, β⋆ is given in Theorem 1 with l = λ2(W⋆) and u = λn(W⋆), and

 θ⋆ = 2/(λ2(W⋆) + λn(W⋆))
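A numerical consistency check (a sketch, not a proof) of Proposition 6: with Q = I − θ⋆W and θ⋆ = 2/(λ2 + λn), the shift-register factor obtained from (25) coincides with the multi-step factor √β⋆ = (√u − √l)/(√u + √l). The spectrum bounds below are illustrative (roughly those of a 10-node ring Laplacian).

```python
import math

l, u = 0.382, 4.0                    # example values of lambda_2(W), lambda_n(W)
theta = 2.0 / (l + u)
lam = 1 - theta * l                  # largest sub-unit eigenvalue of Q = I - theta*W

s = math.sqrt(1 - lam ** 2)
q_sr = math.sqrt((1 - s) / (1 + s))  # shift-register factor from (25)
q_hb = (math.sqrt(u) - math.sqrt(l)) / (math.sqrt(u) + math.sqrt(l))

# lam = (u-l)/(u+l) gives s = 2*sqrt(l*u)/(u+l), and
# (1-s)/(1+s) = ((sqrt(u)-sqrt(l))/(sqrt(u)+sqrt(l)))^2, hence q_sr == q_hb
print(q_sr, q_hb)
```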

VII-C Internet congestion control

Our final application is in the area of Internet congestion control, where Network Utility Maximization (NUM) has emerged as a powerful framework for studying various important resource allocation problems; see, e.g., [1, 29, 30, 31]. The vast majority of the work in this area is based on the dual decomposition approach introduced in [29]. Here, the optimal bandwidth sharing among flows in a data network is posed as the solution of a convex optimization problem

 maximizex ∑s us(xs)  subject to  xs ∈ [ms, Ms],  Rx ≤ c (27)

In this formulation, xs is the communication rate of flow s, and the strictly concave and increasing function us describes the utility that source s has of communicating at rate xs. The communication rate is restricted to a bounded interval [ms, Ms]. Finally, R is a routing matrix whose entries Rls equal one if flow s traverses link l and zero otherwise. In this way, Rx is the total traffic on the links, which cannot exceed the link capacities c. We make the following assumptions.

Assumption 2

For the problem (27) it holds that

• Each us is twice continuously differentiable and its curvature is bounded away from zero on [ms, Ms].

• For every link l, there exists a source s whose flow only traverses l, i.e., Rls = 1 and Rl′s = 0 for all l′ ≠ l.

While these assumptions appear restrictive, they are often postulated in the literature (e.g. [29, Assumptions C1–C4]). Note that under Assumption 2, the routing matrix R has full row rank and all link constraints hold with equality at the optimum. Hence, we can replace Rx ≤ c in (27) with Rx = c, introduce Lagrange multipliers μl for these constraints, and form the associated dual function

 d(μ) =maxxs∈[ms,Ms]∑s{us(xs)−xs∑lRlsμl}+∑lμlcl

Evaluating d(μ) amounts to solving an optimization problem in x. Since this problem is separable in the xs, it can be solved by each source in isolation based on the sum of the Lagrange multipliers for the links that its flow traverses,

 x⋆s(μ) =argmaxz∈[ms,Ms]us(z)−z∑lRlsμl (28)

Similarly, each Lagrange multiplier update

 μl(k+1) = μl(k) + α(∑s Rls x⋆s(μ(k)) − cl) (29)

can be carried out by the corresponding link based on local information: if the traffic demand on the link exceeds its capacity, the multiplier is increased, otherwise it is decreased. One can show that under Assumption 2, the dual function is strongly concave, differentiable, and has a Lipschitz-continuous gradient [29]. Hence, by standard arguments, the updates (28), (29) converge to a primal-dual optimal point for an appropriately chosen step-size α.

Our results from Section V indicate that substantially improved convergence factors could be obtained by the following class of multi-step updates of the Lagrange multipliers

 μl(k+1) = μl(k) + α(∑s Rls x⋆s(μ(k)) − cl) + β(μl(k) − μl(k−1)) (30)
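The updates (28) and (30) can be sketched on a toy instance: a two-link, three-flow network with logarithmic utilities us(x) = log x, for which the source update (28) has the closed form x⋆s = 1/qs clipped to [ms, Ms], where qs is the path price. The routing matrix, capacities, and step-sizes below are illustrative choices, not the tuned values of Proposition 7.

```python
import numpy as np

# Links are rows, flows are columns; flow 3 traverses both links,
# flows 1 and 2 are the dedicated single-link flows of Assumption 2.
R = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
c = np.array([1.0, 1.0])
m, M = 1e-2, 10.0
alpha, beta = 0.1, 0.3               # illustrative (untuned) step-sizes

mu = np.ones(2)                      # link prices (Lagrange multipliers)
mu_prev = mu.copy()
for _ in range(500):
    q = R.T @ mu                     # path price seen by each source
    x = np.clip(1.0 / q, m, M)       # source update (28) for u_s = log
    # multi-step price update (30)
    mu, mu_prev = mu + alpha * (R @ x - c) + beta * (mu - mu_prev), mu

print(x, R @ x - c)
```

At convergence the link constraints hold with equality; for this symmetric instance the optimal rates are x⋆ = (2/3, 2/3, 1/3) with both link prices equal to 3/2.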

To tune the step-sizes in an optimal way, we bring the techniques from Section V into action. To do so, we first bound the eigenvalues of RR⊤ using the following result:

Lemma 4

Let R satisfy Assumption 2. Then

 1 ≤ λ1(RR⊤),   λn(RR⊤) ≤ lmax smax

where lmax denotes the maximal number of links traversed by any flow and smax denotes the maximal number of flows sharing a single link.
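Lemma 4 can be checked numerically on a routing matrix satisfying Assumption 2: the dedicated single-link flows contribute an identity block (so RR⊤ ⪰ I), and the upper bound follows from a row-sum argument. The specific routing pattern below is an arbitrary example, and the definitions of lmax and smax are as read above.

```python
import numpy as np

L = 4
single = np.eye(L)                          # one dedicated single-link flow per link
multi = np.array([[1, 1, 0],                # three multi-link flows
                  [1, 0, 1],
                  [0, 1, 1],
                  [0, 1, 0]], dtype=float)
R = np.hstack([single, multi])              # links x flows

l_max = int(R.sum(axis=0).max())            # max links traversed by any flow
s_max = int(R.sum(axis=1).max())            # max flows sharing any link
eig = np.sort(np.linalg.eigvalsh(R @ R.T))

print(eig, l_max * s_max)                   # bounds of Lemma 4
```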

The optimal step-size parameters and corresponding convergence factor now follow from Lemma 4 and Theorem 2:

Proposition 7

Consider the network utility maximization problem (27) under Assumption 2. Then, for appropriately chosen step-sizes α and β, the iterations (28) and (30) converge linearly to a primal-dual optimal pair. The step-sizes

 α =(