# Parallel random coordinate descent method for composite minimization: convergence analysis and error bounds

## Abstract

In this paper we consider a parallel version of a randomized (block) coordinate descent method for minimizing the sum of a partially separable smooth convex function and a fully separable non-smooth convex function. Under the assumption of Lipschitz continuity of the gradient of the smooth function, this method has a sublinear convergence rate. Linear convergence rate of the method is obtained for the newly introduced class of *generalized error bound functions*. We prove that the new class of generalized error bound functions encompasses both global/local error bound functions and smooth strongly convex functions. We also show that the theoretical estimates on the convergence rate depend on the number of blocks chosen randomly and a natural measure of separability of the objective function. Numerical simulations are also provided to confirm our theory.

## 1 Introduction

In recent years there has been an ever-increasing interest in the optimization community for algorithms suitable for solving convex optimization problems with a very large number of variables. These problems, known as big data problems, have arisen from more recent fields such as network control [1], machine learning [3] and data mining [4]. An important property of these problems is that they are *partially separable*, which permits parallel and/or distributed computations in the optimization algorithms that are to be designed for them [1]. This, together with the surge of multi-core machines or clustered parallel computing technology in the past decade, has led to the widespread focus on coordinate descent methods.

**State of the art**: Coordinate descent methods are methods in which a number of (block) coordinate updates of the vector of variables are performed at each iteration. The reasoning behind this is that coordinate updates for problems with a large number of variables are much simpler than computing a full update, requiring less memory and computational power, and that they can be carried out independently, making coordinate descent methods more scalable and suitable for distributed and parallel computing hardware. Coordinate descent methods can be divided into two main categories: deterministic and random. In deterministic coordinate descent methods, the (block) coordinates which are to be updated at each iteration are chosen in a cyclic fashion or based on some greedy strategy. For cyclic coordinate search, estimates on the rate of convergence were given recently in [6], while for greedy coordinate search the convergence rate is given e.g. in [8]. On the other hand, in random coordinate descent methods, the (block) coordinates which are to be updated are chosen randomly based on some probability distribution. In [10], Nesterov presents a random coordinate descent method for smooth convex problems, in which only one coordinate is updated at each iteration. Under assumptions of Lipschitz gradient and strong convexity of the objective function, the algorithm in [10] was proved to have linear convergence in the expected values of the objective function. In [11] a 2-block random coordinate descent method is proposed to solve linearly constrained smooth convex problems. The algorithm from [11] was extended to linearly constrained composite convex minimization in [13]. The results in [10] and [14] were combined in [15], in which the authors propose a randomized coordinate descent method to solve composite convex problems.
To our knowledge, the first results on the linear convergence of coordinate descent methods under more relaxed assumptions than smoothness and strong convexity were obtained e.g. in [8]. In particular, linear convergence of these methods is proved under some local error bound property, which is more general than the assumption of Lipschitz gradient and strong convexity as required in [10]. However, the authors in [8] were able to show linear convergence only locally. Finally, very few results were known in the literature on distributed and parallel implementations of coordinate descent methods. Recently, a more thorough investigation regarding the separability of the objective function and ways in which the convergence can be accelerated through parallelization was undertaken in [12], where it is shown that speedup can be achieved through this approach. Several other papers on parallel coordinate descent methods have appeared around the time this paper was finalized [20].

**Motivation**: Despite widespread use of coordinate descent methods for solving large convex problems, there are some aspects that have not been fully studied. In particular, in practical applications, the assumption of Lipschitz gradient and strong convexity is restrictive and the main interest is in finding larger classes of functions for which we can still prove linear convergence. We are also interested in providing schemes based on parallel and/or distributed computations. Finally, the convergence analysis has been almost exclusively limited to centralized stepsize rules and local convergence results. These represent the main issues that we pursue here.

**Contribution**: In this paper we consider a parallel version of the random (block) coordinate gradient descent method [10] for solving large optimization problems with a convex separable composite objective function, i.e. consisting of the sum of a partially separable smooth function and a fully separable non-smooth function. Our approach allows us to analyze in the same framework several methods: full gradient, serial random coordinate descent and any parallel random coordinate descent method in between. Analysis of coordinate descent methods based on updating in parallel more than one (block) component per iteration was given first in [12] and then further studied e.g. in [18]. We provide a detailed rate analysis for our parallel coordinate descent algorithm under general assumptions on the objective function, e.g. an error bound type property, and, under more knowledge of the structure of the problem, we prove a substantial improvement in the convergence rate w.r.t. the existing results from the literature. In particular, we show that this algorithm attains linear convergence for problems belonging to a general class, named *generalized error bound problems*. We establish that our class includes problems with global/local error bound objective functions and implicitly strongly convex functions with some Lipschitz continuity property on the gradient. We also show that the new class of problems that we define in this paper covers many applications in networks. Finally, we establish that the theoretical estimates on the convergence rate depend on the number of blocks chosen randomly and a natural measure of separability of the objective function. In summary, the contributions of this paper include:

*(i)* We employ a parallel version of coordinate descent method and show that the algorithm has sublinear convergence rate in the expected values of the objective function. For this algorithm, the iterate updates can be done independently and thus it is suitable for parallel and/or distributed computing architectures.

*(ii)* We introduce a new class of generalized error bound problems and show that it encompasses both problems with global/local error bound functions and problems with smooth strongly convex functions, and that it covers many practical applications.

*(iii)* Under the generalized error bound property, we prove that our parallel random coordinate descent algorithm has a global linear convergence rate.

*(iv)* We also perform a theoretical identification of which categories of problems and objective functions satisfy the generalized error bound property.

**Paper Outline**: In Section 2 we present our optimization model and discuss practical applications which can be posed in this framework. In Sections 3 and 4 we analyze the properties of a parallel random coordinate descent algorithm; in particular, we establish its sublinear convergence rate under some Lipschitz continuity assumptions. In Section 5 we introduce the class of generalized error bound problems and we prove that the random coordinate descent algorithm has a global linear convergence rate under this property. In Section 6 we investigate which classes of optimization problems have an objective function that satisfies the generalized error bound property. In Section 7 we discuss implementation details of the algorithm and compare it with other existing methods. Finally, in Section 8 we present some preliminary numerical tests on a constrained lasso problem.

## 2 Problem formulation

In many big data applications arising from e.g. networks, control and data ranking, we have a system formed from several entities, with a communication graph which indicates the interconnections between entities (e.g. sources and links in network optimization [22], website pages in data ranking [3] or subsystems in control [2]). We denote this *bipartite graph* as , where , and is an incidence matrix. We also introduce two sets of neighbors and associated to the graph, defined as:

The index sets and , which e.g. in the context of network optimization may represent the set of sources which share the link and the set of links which are used by the source , respectively, describe the local information flow in the graph. We denote the entire vector of variables for the graph as . The vector can be partitioned accordingly in block components , with . In order to easily extract subcomponents from the vector , we consider a partition of the identity matrix , with , such that and matrices , such that , with being the vector containing all the components with . In this paper we address problems arising from such systems, where the objective function can be written in a general form as (for similar models and settings, see [1]):
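The block extraction described above can be sketched numerically; the block sizes below are hypothetical, and `U[i]` plays the role of the partition of the identity matrix, so that the block components are recovered as `U[i].T @ x` and the full vector is reassembled as the sum of `U[i] @ x_i`:

```python
import numpy as np

# Hypothetical partition: 3 blocks of dimensions 2, 3 and 1.
block_sizes = [2, 3, 1]
n = sum(block_sizes)
offsets = np.cumsum([0] + block_sizes)

# U[i]: the columns of the identity matrix I_n belonging to block i,
# so that x_i = U[i]^T x and x = sum_i U[i] x_i.
U = [np.eye(n)[:, offsets[i]:offsets[i + 1]] for i in range(len(block_sizes))]

x = np.arange(1.0, n + 1.0)                 # x = [1, 2, 3, 4, 5, 6]
x_blocks = [Ui.T @ x for Ui in U]           # block components x_i
x_rebuilt = sum(Ui @ xi for Ui, xi in zip(U, x_blocks))
print(np.allclose(x_rebuilt, x))            # the partition reconstructs x
```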

where and . We denote and . The function is a smooth *partially separable* convex function, while is a *fully separable* convex non-smooth function. The local information structure imposed by the graph should be considered as part of the problem formulation. We consider the following natural measure of separability of the objective function :

Note that , and the definition of the measure of separability is more general than the one considered in [18], which is defined only in terms of . It is important to note that coordinate gradient descent type methods for solving problem are appropriate only in the case when is relatively small; otherwise, incremental type methods [22] should be considered for solving . Indeed, difficulties may arise when is the sum of a large number of component functions and is large, since in that case exact computation of the components of the gradient (i.e. ) can be either very expensive or impossible due to noise. In conclusion, we assume that the algorithm is employed for problems , with relatively small, i.e. .
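As an illustration, the neighbor sets and the measure of separability can be read off a small 0/1 incidence matrix; the matrix `E` below is a made-up example (rows index the component functions, columns index the blocks):

```python
import numpy as np

# Hypothetical incidence matrix: E[j, i] = 1 iff component f_j depends
# on block x_i.
E = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])

# N(j): blocks appearing in f_j;  Nbar(i): components that involve block i.
N = [set(np.flatnonzero(E[j])) for j in range(E.shape[0])]
Nbar = [set(np.flatnonzero(E[:, i])) for i in range(E.shape[1])]

# Measure of separability: the largest number of blocks any component touches.
omega = max(len(Nj) for Nj in N)
print(omega)  # 2 for this matrix
```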

Throughout this paper, by we denote an optimal solution of problem and by the set of optimal solutions. We define the index indicator function as:

and the set indicator function as:

Also, by we denote the standard Euclidean norm and we introduce an additional norm , where is a positive diagonal matrix. Considering these, we denote by the projection of a point onto a set in the norm :

Furthermore, for simplicity of exposition, we denote by the projection of a point on the optimal set , i.e. . In this paper we consider that the smooth component of satisfies the following assumption:

Note that our assumption is different from the ones in [10], where the authors consider that the gradient of the function is coordinate-wise Lipschitz continuous, which states the following: if we define the partial gradient , then there exist some constants such that:

As a consequence of Assumption ? we have that [25]:

Based on Assumption ? we can show the following distributed variant of the descent lemma, which is central in our derivation of a parallel coordinate descent method and in proving the convergence rate for it.

If we sum up for and by the definition of we have that:

Given matrices , note that we can express the first term in the right hand side as:

Note that since is a diagonal matrix we can express the norm as:

From the definition of and , note that is equivalent to . Thus, for the final term of the right hand side of we have that:

and the proof is complete.

Moreover, from the generalized descent lemma through the norm , the sparsity induced by the graph via the sets and , and implicitly via the measure of separability , will intervene in the estimates for the convergence rates of the algorithm. A detailed discussion on this issue can be found in Section 7. The following lemma establishes Lipschitz continuity for , but in the norm ; its proof can be derived using similar arguments as in [25]:

### 2.1 Motivating practical applications

We now present important applications from which the interest for problems of type stems.

*Application I*: One specific example is the sparse logistic regression problem. This type of problem is often found in data mining or machine learning, see e.g. [26]. In a training set , with , the vectors represent samples, and represent the binary class labels with . The likelihood function for these samples is:

where is the conditional probability and is expressed as:

with being the weight vector. In some applications (see e.g. [27]), we require a bias term (also called an intercept) in the loss function; therefore, is replaced with . The equality defines a hyperplane in the feature space on which . Also, if and otherwise. Then, the sparse logistic regression can be formulated as the following convex problem:

where is some constant and is the average logistic loss function:

Note that , where denotes the 1-norm, is the separable non-smooth component which promotes the sparsity of the decision variable . If we associate to this problem a bipartite graph where the incidence matrix is defined such that provided that , then the vectors have a certain sparsity according to this graph, i.e. they only have nonzero components in . Therefore, can be written as , where each function is defined as:

It can be easily proven that the objective function in this case satisfies with and with . Furthermore, we have that satisfies with matrix .
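A minimal sketch of the sparse logistic regression objective, assuming samples in the rows of `A`, labels `b` in {-1, +1}, and a hypothetical regularization constant `lam`:

```python
import numpy as np

def logistic_loss(w, A, b):
    """Average logistic loss: (1/m) * sum_j log(1 + exp(-b_j * a_j^T w))."""
    margins = b * (A @ w)
    # log(1 + exp(-t)) computed stably as logaddexp(0, -t)
    return np.logaddexp(0.0, -margins).mean()

def sparse_logreg_objective(w, A, b, lam):
    """Smooth average loss plus the fully separable l1 regularizer."""
    return logistic_loss(w, A, b) + lam * np.abs(w).sum()

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))       # 50 samples, 10 features
b = np.sign(rng.standard_normal(50))    # binary labels in {-1, +1}
w = np.zeros(10)
print(sparse_logreg_objective(w, A, b, lam=0.1))  # log(2) ≈ 0.6931 at w = 0
```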

*Application II*: Another classic problem which implies functions with Lipschitz continuous gradient of type is:

where , the sets are convex, and . This problem is also known as the constrained lasso problem [28] and is widely used e.g. in signal processing, fused or generalized lasso and monotone curve estimation [28] or distributed model predictive control [1]. For example, in image restoration, incorporating a priori information (such as box constraints on ) can lead to substantial improvements in the restoration and reconstruction process (see [29] for more details). Note that this problem is a special case of problem , with being block separable and with the functions defined as:

where are the nonzero components of row of , corresponding to . In this case, functions satisfy with . Given these constants, we find that in this case satisfies with the matrix . Also, note that functions of type satisfy Lipschitz continuity with , where denotes block column of the matrix .
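The constrained lasso objective can be sketched as follows; the bounds `lo` and `hi` stand in for the convex sets, which here are assumed to be boxes:

```python
import numpy as np

def constrained_lasso_objective(x, A, b, lam, lo, hi):
    """0.5*||Ax - b||^2 + lam*||x||_1, with value +inf outside the box."""
    if np.any(x < lo) or np.any(x > hi):
        return np.inf
    r = A @ x - b
    return 0.5 * (r @ r) + lam * np.abs(x).sum()

# For a box, the projection is coordinatewise clipping (a diagonal weighting
# of the norm does not change this particular projection).
def project_box(x, lo, hi):
    return np.clip(x, lo, hi)

x = np.zeros(2)
print(constrained_lasso_objective(x, np.eye(2), np.ones(2), 0.5, -1.0, 1.0))  # 1.0
```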

*Application III*: A third type of problem which falls under the same category is derived from the following primal formulation:

where and the functions are strongly convex with convexity parameters . This type of problem is often found in network control [2], network optimization or utility maximization [22]. In all these applications matrix is very sparse, i.e. both are small. We formulate the dual problem of as:

where denotes the Lagrange multiplier and is the indicator function for the nonnegative orthant . By denoting by the convex conjugate of the function , the previous problem can be rewritten as:

where is the th block column of . Note that, given the strong convexity of , the functions have Lipschitz continuous gradient in of type with constants [25]. Now, if the matrix has some sparsity induced by a graph, i.e. the blocks if the corresponding incidence matrix has , which in turn implies that the block columns are sparse according to some index set , then the matrix-vector products depend only on , such that , with . Then, has Lipschitz continuous gradient of type with . For this problem we also have componentwise Lipschitz continuous gradient of type with . Furthermore, we find that in this case satisfies with . Note that there are many applications in distributed control or network optimization where or or both are small. E.g., one particular application that appears in the area of network optimization has the structure , where the matrix has a column linked block angular form, i.e. the matrix has a block structure of the following form:

One of the standard distributed algorithms to solve network problems is based on a dual decomposition as explained above. In this case, by denoting the convex conjugate of the function , the corresponding problem can be rewritten as:

If we consider the block columns of dimension , then depends on all the coordinates of , i.e. . On the other hand, note that given the structure of , we have that . The reader can easily find many other examples of objective functions where .

## 3 Parallel random coordinate descent method

In this section we consider a parallel version of the random coordinate descent method [10], which we call **P-RCD**. Analysis of coordinate descent methods based on updating in parallel more than one (block) component per iteration was given first in [12] and then further studied e.g. in [18]. In particular, such a method and its convergence properties have been analyzed in [1], but under the coordinate-wise Lipschitz assumption . Before we discuss the method, however, we first need to introduce some concepts. For a function as defined in , we introduce the following mapping in norm :

Note that the mapping is fully separable and strongly convex w.r.t. the norm , with constant 1. We denote by the proximal step for function , which is the optimal point of the mapping , i.e.:

The proximal step can also be defined in another way. We define the proximal operator of function as:

We recall an important property of the proximal operator [30]:

Based on this operator, note that we can write:

Given that is generally not differentiable, we denote by a vector belonging to the set of subgradients of . Evidently, in both definitions, the optimality conditions of the resulting problem from which we obtain are the same, i.e.:

It will become evident further on that the optimal solution will play a crucial role in the parallel random coordinate descent method. We now establish some properties which involve the function , the mapping and the proximal step . Given that is strongly convex in and that is an optimal point when minimizing over , we have the following inequality:

Further, since is convex and differentiable and by definition of we get:

In the algorithm that we discuss, at a step , the components of the iterate which are to be updated are dictated by a set of indices which is randomly chosen. Let us denote by the vector whose blocks , with , are identical to those of , while the remaining blocks are zeroed out, i.e. or:

Also, for the separable function , we denote the partial sum and the vector . A random variable is uniquely characterized by the probability density function:

For the random variable , we also define the probability with which a subcomponent can be found in as:

In our algorithm, we consider a uniform sampling of unique coordinates that make up , i.e. . For a random variable with , we observe that there is a total number of possible values that can take, and with the uniform sampling we have that . Given that is random, we can express as:

For a single index , note that we have a total number of possible sets that can take which will include and therefore the probability that this index is in is:

Having defined the proximal step as in , in the algorithm that follows we generate randomly at step an index set of cardinality . We denote the vector which will be used to update , i.e. in the sense that . Also, by we denote the complement set of , i.e. . Thus, the algorithm consists of the following steps:
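The steps above can be sketched for the special case of an l1-regularized least-squares problem with scalar blocks; the diagonal weights are taken here as τ times the coordinatewise Lipschitz constants of the gradient, which is an assumption of this sketch (in the spirit of the paper's separability-dependent constants) rather than the paper's exact choice:

```python
import numpy as np

def prcd_step(x, grad, W_diag, lam, tau, rng):
    """One sketched (P-RCD) iteration for Psi = lam*||.||_1 with scalar
    blocks: only tau uniformly chosen coordinates are updated in parallel."""
    J = rng.choice(x.size, size=tau, replace=False)   # uniform tau-subset
    x_new = x.copy()
    for i in J:
        u = x[i] - grad[i] / W_diag[i]                # coordinate gradient step
        t = lam / W_diag[i]
        x_new[i] = np.sign(u) * max(abs(u) - t, 0.0)  # proximal (soft) step
    return x_new

# Toy smooth part f(x) = 0.5*||Ax - b||^2; scaling the coordinatewise
# curvatures (A^T A)_ii by tau safeguards the simultaneous update.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 8))
b = rng.standard_normal(20)
tau = 3
W = tau * np.sum(A * A, axis=0)
x = np.zeros(8)
for _ in range(200):
    x = prcd_step(x, A.T @ (A @ x - b), W, lam=0.1, tau=tau, rng=rng)
```

Because the updated coordinates are decoupled inside one iteration, the loop over `J` could run on separate cores or threads.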

Note that the iterate update of the **(P-RCD)** method can also be expressed as:

Note that the right hand sides of the last two equalities contain the same optimization problem whose optimality conditions are:

Clearly, the optimization problem from which we compute the iterate of **(P-RCD)** is fully separable. Then, it follows that for updating component of we need the following data: and . However, the th diagonal entry and th block component of the gradient can be computed distributively according to the communication graph imposed on the original optimization problem. Therefore, if algorithm **(P-RCD)** runs on a multi-core machine or as a multi-thread process, it can be observed that component updates can be done distributively and in parallel by each core/thread (see Section 7 for details).

We now establish that method **(P-RCD)** is a descent method, i.e. for all . From the convexity of and we obtain the following:

With **(P-RCD)** being a descent method, we can now introduce the following term:

and assume it to be bounded. We also define the random variable comprising the whole history of previous events as:

## 4 Sublinear convergence for smooth convex minimization

In this section we establish the sublinear convergence rate of method **(P-RCD)** for problems of type with the objective function satisfying Assumption ?. Our analysis in this section combines the tools developed above with the convergence analysis in [16] for random one block coordinate descent methods. The next lemma provides some property for the uniform sampling with :

For any vector we consider its counterpart for a sampling taken as described above. Given the previous lemma and by taking into account the separability of the inner product and of the squared norm it follows immediately that:

Based on relations –, the separability of the function , and the properties of the expectation operator, the following inequalities can be derived (see e.g. [18]):
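The basic expectation identity behind these inequalities, namely that zeroing out all blocks outside a uniform τ-subset satisfies E[x_[J]] = (τ/N) x, can be verified by enumeration for a small case:

```python
from itertools import combinations
import numpy as np

N, tau = 5, 2
x = np.arange(1.0, N + 1.0)
subsets = list(combinations(range(N), tau))

def restrict(v, J):
    """v_[J]: keep the coordinates in J, zero out the rest."""
    y = np.zeros_like(v)
    y[list(J)] = v[list(J)]
    return y

# Average over all equally likely tau-subsets.
mean = sum(restrict(x, J) for J in subsets) / len(subsets)
print(np.allclose(mean, (tau / N) * x))  # True
```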

We can now formulate an important relation between the gradient mapping in a point and a point . By the definition of and the convexity of and we have:

Furthermore, from the optimality conditions we obtain:

This property will prove useful in the following theorem, which provides the sublinear convergence rate for method **(P-RCD)**.

Our proof generalizes the proof of Theorem 1 in [16] from one (block) component update per iterate to the case of (block) component updates, based on uniform sampling and on Assumption , and consequently on a different descent lemma. Thus, by taking expectation in w.r.t. conditioned on we get:

Now, if we take and in we get:

From this and we obtain:

Denote . From the definition of we have that:

If we divide both sides of the above inequality by and take expectation, we obtain:

Through this inequality and we arrive at:

After some rearranging of terms we obtain the following inequality: