Distributed Semi-Stochastic Optimization with Quantization Refinement
We consider the problem of regularized regression in a network of communication-constrained devices. Each node has local data and objectives, and the goal is for the nodes to optimize a global objective. We develop a distributed optimization algorithm that is based on recent work on semi-stochastic proximal gradient methods. Our algorithm employs iteratively refined quantization to limit message size. We present theoretical analysis and conditions for the algorithm to achieve a linear convergence rate. Finally, we demonstrate the performance of our algorithm through numerical simulations.
We consider the problem of distributed optimization in a network where communication is constrained, for example a wireless sensor network. In particular, we focus on problems where each node has local data and objectives, and the goal is for the nodes to learn a global objective that includes this local information. Such problems arise in networked systems applications such as estimation, prediction, resource allocation, and control.
Recent works have proposed distributed optimization methods that reduce communication by using quantization. For example, in , the authors propose a distributed algorithm to solve unconstrained problems based on a centralized inexact proximal gradient method . In , the authors extend their work to constrained optimization problems. In these algorithms, the nodes compute a full gradient step in each iteration, requiring quantized communication between every pair of neighboring nodes. Quantization has been applied in distributed consensus algorithms  and distributed subgradient methods .
In this work, we address the specific problem of distributed regression with regularization over the variables across all nodes. Applications of our approach include distributed compressed sensing, LASSO, group LASSO, and regression with Elastic Net regularization, among others. Our approach is inspired by . We seek to further reduce per-iteration communication by using an approach based on a stochastic proximal gradient algorithm. This approach only requires communication between a small subset of nodes in each iteration. In general, stochastic gradients may suffer from slow convergence. Thus any per-iteration communication savings could be counteracted by an extended number of iterations. Recently, however, several works have proposed semi-stochastic gradient methods . To reduce the variance of the iterates generated by a stochastic approach, these algorithms periodically incorporate a full gradient computation. It has been shown that these algorithms achieve a linear rate of convergence to the optimal solution.
We propose a distributed algorithm for regularized regression based on the centralized semi-stochastic proximal gradient of . In most iterations, only a subset of nodes need communicate. We further reduce communication overhead by employing quantized messaging. Our approach reduces both the length of messages sent between nodes as well as the number of messages sent in total to converge to the optimal solution. The detailed contributions of our work are as follows:
We extend the centralized semi-stochastic proximal gradient algorithm to include errors in the gradient computations and show the convergence rate of this inexact algorithm.
We propose a distributed optimization algorithm based on this centralized algorithm that uses iteratively refined quantization to limit message size.
We show that our distributed algorithm is equivalent to the centralized algorithm, where the errors introduced by quantization can be interpreted as inexact gradient computations. We further design quantizers that guarantee a linear convergence rate to the optimal solution.
We demonstrate the performance of the proposed algorithm in numerical simulations.
The remainder of this paper is organized as follows. In Section 2, we present the centralized inexact proximal gradient algorithm and give background on quantization. In Section 3, we give the system model and problem formulation. Section 4 details our distributed algorithm. Section 5 provides theoretical analysis of our proposed algorithm. Section 6 presents our simulation results, and we conclude in Section 7.
2.1 Inexact Semi-Stochastic Proximal Gradient Algorithm
We consider an optimization problem of the form:
where , and the following assumptions are satisfied.
Problem (Equation 1) can be solved using a stochastic proximal gradient algorithm  where, in each iteration, a single is computed for a randomly chosen , and the iterate is updated accordingly as,
Here, is the proximal operator
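As a concrete illustration (ours, not the paper's), for the l1 regularizer the proximal operator reduces to componentwise soft-thresholding; a minimal sketch:

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t * ||x||_1, i.e. the minimizer of
    0.5 * ||x - v||^2 + t * ||x||_1: componentwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
```

Components whose magnitude is below the threshold t are set exactly to zero, which is what makes proximal methods attractive for sparse regression.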
While stochastic methods offer the benefit of reduced per-iteration computation over standard gradient methods, the iterates may have high variance. These methods typically use a decreasing step-size to compensate for this variance, resulting in slow convergence. Recently, Xiao and Zhang proposed a semi-stochastic proximal gradient algorithm, Prox-SVRG, that reduces the variance by periodically incorporating a full gradient computation . This modification allows Prox-SVRG to use a constant step size, and thus, Prox-SVRG achieves a linear convergence rate.
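The scheme just described can be sketched as follows (our illustration, not the paper's implementation; `grad_i`, `prox`, and the snapshot-averaging rule are assumptions, since Prox-SVRG variants differ in how the snapshot point is updated):

```python
import numpy as np

def prox_svrg(grad_i, n, prox, x0, step, n_outer, n_inner, rng=None):
    """Sketch of a semi-stochastic proximal gradient method in the style
    of Prox-SVRG: the outer loop computes a full gradient at a snapshot
    point; the inner loop takes variance-reduced stochastic proximal
    steps with a constant step size."""
    if rng is None:
        rng = np.random.default_rng(0)
    x_tilde = np.asarray(x0, dtype=float).copy()
    for _ in range(n_outer):
        # full gradient at the snapshot (the periodic "correction")
        full_grad = sum(grad_i(i, x_tilde) for i in range(n)) / n
        x = x_tilde.copy()
        iterates = []
        for _ in range(n_inner):
            i = int(rng.integers(n))  # one randomly chosen component
            # variance-reduced gradient estimate: unbiased, and its
            # variance vanishes as x and x_tilde approach the optimum
            v = grad_i(i, x) - grad_i(i, x_tilde) + full_grad
            x = prox(x - step * v, step)
            iterates.append(x.copy())
        x_tilde = np.mean(iterates, axis=0)  # snapshot update (averaging)
    return x_tilde
```

On a toy problem with `grad_i(i, x) = x - b[i]` and the identity prox, the iterates converge linearly to the mean of `b`, consistent with the constant step size.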
We extend Prox-SVRG to include a zero-mean error in the gradient computation. Our resulting algorithm, Inexact Prox-SVRG, is given in Algorithm ?. The algorithm consists of an outer loop where the full gradient is computed and an inner loop where the iterate is updated based on both the stochastic and full gradients.
The following theorem states the convergence behavior of Algorithm ?.
The proof is given in the appendix.
From this theorem, we can derive conditions for the algorithm to converge to the optimal . Let the sequence decrease linearly at a rate . Then
If , then converges linearly with a rate of .
If , then converges linearly with a rate of .
If , then converges linearly with a rate in .
2.2 Subtractively Dithered Quantization
We employ a subtractively dithered quantizer to quantize values before transmission. We use a subtractively dithered quantizer rather than a non-subtractively dithered one because its quantization error is not correlated with its input. We briefly summarize the quantizer and its key properties below.
Let be a real number to be quantized into bits. The quantizer is parameterized by an interval size and a midpoint value . Thus the quantization interval is , and the quantization step-size is . We first define the uniform quantizer,
In subtractively dithered quantization, a dither is added to , the resulting value is quantized using a uniform quantizer, and then transmitted. The recipient then subtracts from this value. The subtractively dithered quantized value of , denoted , is thus
Note that this quantizer requires both the sender and recipient to use the same value for , for example, by using the same pseudorandom number generator.
The following theorem describes the statistical properties of the quantization error.
With some abuse of notation, we also write where is a vector. In this case, the quantization operator is applied to each component of independently, using a vector-valued midpoint and the same scalar-valued interval bounds.
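The sender/receiver pair can be sketched as follows (our illustration; the synchronized dither is regenerated on both sides from a shared pseudorandom seed, as the text suggests, and the interval, midpoint, and bit-width parameters are illustrative):

```python
import numpy as np

def sd_quantize(x, mid, half_width, n_bits, seed):
    """Sender side: add a shared dither, apply an n_bits uniform
    quantizer on [mid - half_width, mid + half_width], and return the
    integer index that would actually be transmitted."""
    delta = 2.0 * half_width / 2 ** n_bits            # quantization step size
    dither = np.random.default_rng(seed).uniform(-delta / 2, delta / 2)
    idx = np.round((x + dither - (mid - half_width)) / delta)
    return int(np.clip(idx, 0, 2 ** n_bits - 1))

def sd_dequantize(idx, mid, half_width, n_bits, seed):
    """Receiver side: reconstruct the grid point and subtract the same
    dither, regenerated from the shared pseudorandom seed."""
    delta = 2.0 * half_width / 2 ** n_bits
    dither = np.random.default_rng(seed).uniform(-delta / 2, delta / 2)
    return (mid - half_width) + idx * delta - dither
```

As long as the dithered value stays inside the quantization interval, the reconstruction error is at most half the step size.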
We consider a similar system model to that in . The network is a connected graph of nodes where inter-node communication is limited to the local neighborhood of each node. The neighbor set consists of node ’s neighbors and itself. These neighborhoods are defined with respect to the fixed undirected graph . We denote as the maximum degree of the graph .
Each node has a state vector with dimension . The state of the system is . We let be the vector consisting of the concatenation of states of all nodes in . For ease of exposition, we define the selecting matrices , , where and the matrices , where . These matrices each have -norm of 1.
Every node has a local objective function over the states in . The distributed optimization problem is thus,
where . We assume that Assumptions ? and ? are satisfied. Further, we require the following assumptions hold.
We note that Assumption ? holds for standard regularization functions used in LASSO (), group LASSO in which each node's variables form their own group, and Elastic Net regularization ().
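For instance, the Elastic Net proximal map has a simple closed form, "soft-threshold, then shrink"; a hedged sketch with illustrative parameter names:

```python
import numpy as np

def prox_elastic_net(v, t, lam1, lam2):
    """Proximal operator of t * (lam1 * ||x||_1 + (lam2 / 2) * ||x||_2^2):
    soft-threshold by t * lam1, then shrink by 1 / (1 + t * lam2)."""
    return np.sign(v) * np.maximum(np.abs(v) - t * lam1, 0.0) / (1.0 + t * lam2)
```

Because the map acts componentwise, each node can apply it to its own block of variables, which is what makes this class of regularizers convenient in the distributed setting.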
In the next section, we present our distributed implementation of Prox-SVRG to solve Problem (Equation 4).
Our distributed algorithm is given in Algorithm ?. In each outer iteration , node quantizes its iterate and the gradient and sends them to all of its neighbors. These values are quantized using two subtractively dithered quantizers, and , whereby the sender (node ) sends an bit representation and the recipient reconstructs the value from this representation and subtracts the dither. The midpoints for and are set to be the quantized values from the previous iteration. Thus, the recipients already know these midpoints. The quantized values (after the dither is subtracted) are denoted by and , and the quantization errors are and , respectively.
For every iteration of the outer loop of the algorithm, there is an inner loop of iterations. In each inner iteration, a single node , chosen at random, computes its gradient. To do this, node and its neighbors exchange their states and gradients . These values are quantized using two subtractively dithered quantizers, and . The midpoints for these quantizers are and . Each node sends these values to their neighbors before the inner loop, so all nodes are aware of the midpoints. The quantized values (after the dither is subtracted) are denoted by and , and their quantization errors are and , respectively. The quantization interval bounds , , , and , are initialized to , , , and , respectively, and each iteration, the bounds are multiplied by . Thus the quantizers are refined in each iteration.
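The effect of this refinement can be sketched numerically (all values below are illustrative stand-ins, since the paper's shrink factor and initial bounds are elided): the worst-case quantization error is proportional to the interval half-width, so multiplying the bounds by a factor less than one each iteration makes the error decay linearly.

```python
# Illustrative only: gamma, n_bits, and the initial half-width are made-up
# stand-ins for the paper's refinement rate and interval bounds.
gamma = 0.9        # interval shrink factor applied each iteration
n_bits = 11        # bits per transmitted scalar
half_width = 4.0   # initial quantization interval half-width
worst_case_error = []
for _ in range(50):
    # an n_bits quantizer on [mid - hw, mid + hw] has step 2*hw / 2**n_bits,
    # hence worst-case reconstruction error hw / 2**n_bits
    worst_case_error.append(half_width / 2 ** n_bits)
    half_width *= gamma  # refine the quantizer for the next iteration
```

Each entry shrinks by exactly the factor gamma relative to the previous one, mirroring the linear decay required of the gradient error in the analysis.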
The quantizers limit the length of a single variable transmission to bits. In the outer loop of the algorithm, each node sends its local variable, consisting of quantized components, to every neighbor. It also sends its gradient, consisting of quantized components, to every neighbor. Thus the number of bits exchanged by all nodes is bits. In each inner iteration, only nodes exchange messages. Each node quantizes state variables and sends them to node . This yields a transmission of bits in total. In turn, node quantizes its gradient and sends it to all of its neighbors, which is total bits. Thus, in each inner iteration, bits are transmitted. The total number of bits transmitted in a single outer iteration is therefore,
Let and . An upper bound on the number of bits transmitted by the algorithm in each outer iteration is .
We now present our analysis of Algorithm ?. First we show that the algorithm is equivalent to Algorithm ?, where the quantization errors are encapsulated in the error term . We also give an explicit expression for this error term.
The error is:
We note that all quantization errors are zero-mean. Further, by Assumption ?, , for a zero-mean random variable . Therefore, .
We now show that is uncorrelated with and the gradients , . Clearly, and are uncorrelated with the terms of containing , , and . In accordance with Assumption ?, the gradients and are either linear or constant. If they are constant, then and . Thus, the terms in containing these differences are also 0. If they are linear, e.g., , for an appropriately sized matrix and vector (possibly 0), then,
By Theorem ?, is uncorrelated with . It is clearly also uncorrelated with . Similar arguments can be used to show that and are uncorrelated with the remaining terms in .
With respect to , we have
The first term on the right hand side can be bounded using the fact that , as
We now bound the first term in this expression,
where the first inequality follows from Assumptions ? and ? and the fact that . The second inequality follows from the independence of quantization errors (Theorem ?). Next we bound the second term,
where the first inequality uses the fact that for a random variable , . The remaining inequalities follow from Assumptions ? and ?, the fact that , and the independence of the quantization errors.
Finally, again from the independence of the quantization errors, we have,
Combining these bounds, we obtain the desired result,
We next show that, if all of the values fall within their respective quantization intervals, then the error term decreases linearly with rate , and thus the algorithm converges to the optimal solution linearly with rate .
First we note that, by Theorem ? and the update rule for the quantization intervals, we have:
We use these inequalities to bound ,
Summing over , we obtain,
Applying Theorem ?, with , we have
While we do not yet have theoretical guarantees that all values will fall within their quantization intervals, our simulations indicate that it is always possible to find parameters , , , and , for which all values lie within their quantization intervals for all iterations. Thus, in practice, our algorithm achieves a linear convergence rate. We anticipate that it is possible to develop a programmatic approach, similar to that in , to identify values for , , , and that guarantee linear convergence. This is a subject of current work.
This section illustrates the performance of Algorithm ? by solving a distributed linear regression problem with elastic net regularization.
We randomly generate a -regular graph with uniform degree of 8, i.e., . We set each subsystem size, , to be 10. Each node has a local function where is a random matrix. We generate by first generating a random vector and then computing . The global objective function is:
This simulation was implemented in Matlab and the optimal value was computed using CVX. We set the total number of inner iterations to be and use the step size . With these values, , as required by Theorem ?. We set , which ensures that . We use the quantization parameters . With these parameters, the algorithm's values always fell within their quantization intervals.
Figure 1 shows the performance of the algorithm where the number of bits is 11, 13, and 15, as well as the performance of the algorithm without quantization. In these results, is the concatenation of the vectors for . It is important to note that the rate of convergence of the algorithm is linear in all four cases, and that performance improves as the number of bits increases.
We have presented a distributed algorithm for regularized regression in communication-constrained networks. This algorithm is based on recently proposed semi-stochastic proximal gradient methods. Our algorithm reduces communication requirements by (1) using a stochastic approach where only a subset of nodes communicate in each iteration and (2) quantizing all messages. We have shown that this distributed algorithm is equivalent to a centralized version with inexact gradient computations, and we have used this equivalence to analyze the convergence rate of the distributed method. Finally, we have demonstrated the performance of our algorithm in numerical simulations.
In future work, we plan to extend our theoretical analysis to develop a programmatic way to identify initial quantization intervals. We also plan to explore the integration of more complex regularization functions.
Proof of Theorem
We first restate some useful results from .
We now proceed to prove Theorem ?. For brevity, we omit some details that are identical to those in the proof of Theorem 3.1 in . We have indicated these omissions below.
First, we define
where is as defined in Algorithm ?.
We analyze the change in the distance between and in a single inner iteration,
We next apply Lemma ?, with , , , , and , to obtain,
where . This implies,
We follow the same reasoning as in the proof of Theorem 3.1 in  to obtain the following expression, which is conditioned on and takes expectation with respect to ,
Since and are independent of and , and since is zero-mean,
Further, since is independent of and ,
Applying Lemma ?, we obtain,
We consider a single execution of the inner iteration of the algorithm, so and . Summing both sides over and taking expectation over , for , gives us,