# Importance Sampling Strategy for Non-Convex Randomized Block-Coordinate Descent

###### Abstract

As the number of samples and the dimensionality of optimization problems related to statistics and machine learning explode, block coordinate descent algorithms have gained popularity since they reduce the original problem to several smaller ones. Coordinates to be optimized are usually selected randomly according to a given probability distribution. We introduce an importance sampling strategy that helps randomized coordinate descent algorithms focus on blocks that are still far from convergence. The framework applies to problems composed of the sum of two possibly non-convex terms, one being separable and non-smooth. We have compared our algorithm to a full gradient proximal approach as well as to a randomized block coordinate algorithm with uniform sampling and to cyclic block coordinate descent. Experimental evidence shows the clear benefit of using an importance sampling strategy.

## I Introduction

In the era of Big Data, current computational methods for statistics and machine learning are challenged by the size of data, both in terms of dimensionality and number of examples. Parameters of estimators learned from these large amounts of data are usually obtained as minimizers of regularized empirical risk problems of the form

$$\min_{\mathbf{x} \in \mathbb{R}^d} \; F(\mathbf{x}) := f(\mathbf{x}) + \Omega(\mathbf{x}) \qquad (1)$$

where $f$ is usually a smooth and non-convex function with Lipschitz gradient and $\Omega$ is a non-smooth function. In such a large-scale and high-dimensional context, the most prevalent approaches use first-order methods based on gradient descent [1], although second-order quasi-Newton algorithms have also been considered [14].

More efficient algorithms can be considered for solving Problem (1) if $f$ and $\Omega$ present some special structure. When $\Omega$ is separable, Problem (1) can be expressed as

$$\min_{\mathbf{x} \in \mathbb{R}^d} \; f(\mathbf{x}) + \sum_{i=1}^{g} \Omega_i(\mathbf{x}_{\mathcal{G}_i})$$

We suppose that $\mathbf{x}$ is partitioned into blocks $\mathbf{x}_{\mathcal{G}_1}, \dots, \mathbf{x}_{\mathcal{G}_g}$, where $g$ is the number of groups, $\mathbf{x}_{\mathcal{G}_i} \in \mathbb{R}^{d_i}$ and $\sum_{i=1}^{g} d_i = d$. In this case, methods that can exploit the group structure, such as coordinate descent algorithms [19] or randomized coordinate descent [12], are among the most efficient ones for solving Problem (1).

In this paper, we focus on a specific class of randomized block proximal gradient algorithms, useful when each block has a special structure. We suppose that each $\Omega_i$ is a difference of convex (DC) functions and is non-smooth; however, it has to admit a closed-form proximal operator [8]. Such a situation mainly arises when $\Omega$ is a non-convex sparsity-inducing regularizer. Common non-convex and non-differentiable regularizers are the SCAD regularizer [6], the $\ell_q$ regularizer with $0 < q < 1$ [9], the capped-$\ell_1$ and the log-sum penalty [4]. These regularizers have been frequently used for feature selection or for obtaining sparse models in machine learning [10, 7, 4].

A large majority of works dealing with randomized block coordinate descent (RBCD) algorithms considers a uniform sampling distribution [12, 17, 15]. Little attention has been devoted to the use of arbitrary distributions [11, 13]. In these two latter efforts, the principal requirement is that the probability of drawing any block should not be smaller than a strictly positive value, so that all blocks have a non-zero probability of being selected, which in turn guarantees convergence in expectation of the algorithm. However, because no prior knowledge is usually available for directing the choice of the block sampling distribution, experimental analyses of randomized algorithms usually consider the uniform distribution.

This paper proposes a probability distribution for randomized block coordinate sampling that goes beyond uniform sampling and that is updated after each iteration of the algorithm. Indeed, we have designed a distribution that depends on approximate optimality conditions of the problem. Owing to such a distribution, described in Section II, we can bias the sampling towards coordinates that are still far from optimality, allowing us to save substantial computational effort, as illustrated by our empirical experiments (see Section III).

## II Framework and algorithm

### II-A Randomized BCD

We now discuss a generic approach for solving Problem (1) when $\Omega$ is separable, by taking advantage of this separability. The general framework is shown in Algorithm 1, where $\nabla_{\mathcal{G}_i} f(\mathbf{x})$ denotes the partial gradient of $f$ at $\mathbf{x}$ with respect to the block $\mathbf{x}_{\mathcal{G}_i}$.

At each iteration of the algorithm, a block $i$ is selected to be optimized (line 3). Then, a partial proximal gradient step is performed (line 5) for the selected group. It consists in efficiently solving the proximal operator

$$\operatorname{prox}_{\frac{1}{\eta}\Omega}(\mathbf{z}) = \arg\min_{\mathbf{u}} \; \frac{1}{2}\|\mathbf{u} - \mathbf{z}\|_2^2 + \frac{1}{\eta}\Omega(\mathbf{u})$$

Note that since $\Omega$ is separable, the proximal operator can be applied only to the current group and will update only $\mathbf{x}_{\mathcal{G}_i}$. A backtracking step (lines 6-9) may be necessary to ensure a decrease in the objective, but a non-monotone version can also be used, as discussed in [11]. Finally, if the number of groups is set to 1, then the algorithm boils down to GIST [8], i.e., a proximal method for non-convex optimization.
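Since Algorithm 1 itself is not reproduced here, the following Python sketch shows one possible instantiation of the loop just described (block selection, partial proximal gradient step, backtracking). The names `partial_gradient`, `prox_block` and `objective` are hypothetical user-supplied callables, and the sufficient-decrease test is a GIST-like criterion rather than the paper's exact rule.

```python
import numpy as np

def rbcd(x0, blocks, partial_gradient, prox_block, objective,
         n_iter=1000, eta0=1.0, sigma=1e-4, rng=None):
    """Sketch of a randomized block proximal gradient loop in the spirit of Algorithm 1.

    blocks: list of index arrays, one per group G_i.
    partial_gradient(x, idx): gradient of the smooth term f w.r.t. x[idx].
    prox_block(z, tau, i): proximal operator of tau * Omega_i evaluated at z.
    objective(x): value of f(x) + Omega(x), used by the backtracking test.
    """
    rng = rng or np.random.default_rng()
    x = x0.copy()
    obj = objective(x)
    for _ in range(n_iter):
        i = int(rng.integers(len(blocks)))       # line 3: block selection (uniform here)
        idx = blocks[i]
        g_i = partial_gradient(x, idx)           # line 4: partial gradient on block i
        eta = eta0
        for _ in range(50):                      # lines 5-9: prox step with backtracking
            x_new = x.copy()
            x_new[idx] = prox_block(x[idx] - g_i / eta, 1.0 / eta, i)
            obj_new = objective(x_new)
            # GIST-like sufficient decrease test; otherwise increase eta (smaller step)
            if obj_new <= obj - 0.5 * sigma * eta * np.sum((x_new[idx] - x[idx]) ** 2):
                break
            eta *= 2.0
        x, obj = x_new, obj_new
    return x
```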

This randomized algorithm is appealing compared to the classical proximal gradient descent since it does not require the computation of the full gradient at each iteration. For instance, when estimating a linear model, the loss can be expressed as $f(\mathbf{x}) = L(\mathbf{A}\mathbf{x}, \mathbf{y})$, with $\mathbf{A} \in \mathbb{R}^{n \times d}$ the data matrix and $\mathbf{y}$ the targets. The gradient is $\nabla f(\mathbf{x}) = \mathbf{A}^\top L'(\mathbf{A}\mathbf{x}, \mathbf{y})$, where the derivative $L'$ is computed pointwise. Computing the partial gradient $\nabla_{\mathcal{G}_i} f(\mathbf{x}) = \mathbf{A}_{\mathcal{G}_i}^\top L'(\mathbf{A}\mathbf{x}, \mathbf{y})$, where $\mathbf{A}_{\mathcal{G}_i}$ is the submatrix of $\mathbf{A}$ corresponding to group $\mathcal{G}_i$, requires far fewer floating point operations, as reported in Table I, since $d_i \ll d$. In addition, this computational complexity can be greatly decreased by storing the prediction $\mathbf{A}\mathbf{x}$ and by using the low-complexity update $\mathbf{A}\mathbf{x} \leftarrow \mathbf{A}\mathbf{x} + \mathbf{A}_{\mathcal{G}_i}(\mathbf{x}^{+}_{\mathcal{G}_i} - \mathbf{x}_{\mathcal{G}_i})$ at each iteration.

Table I: Per-iteration computational complexity for a linear model with $n$ examples, $d$ variables and block size $d_i$ (the stored prediction $\mathbf{A}\mathbf{x}$ is assumed for RBCD).

Task | GIST | RBCD
---|---|---
Gradient computation | $\mathcal{O}(nd)$ | $\mathcal{O}(nd_i)$
Proximal operator | $\mathcal{O}(d)$ | $\mathcal{O}(d_i)$
Cost computation | $\mathcal{O}(nd)$ | $\mathcal{O}(nd_i)$
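As an illustration of the savings reported in Table I, the following sketch (assuming a logistic pointwise loss; the helper names are ours, not the paper's) computes a partial gradient from the stored prediction and refreshes that prediction with the low-complexity update.

```python
import numpy as np

def partial_grad_logistic(A, y, Ax, idx):
    """Partial gradient of the logistic loss sum_k log(1 + exp(-y_k (A x)_k))
    with respect to the variables of block `idx`, reusing the stored prediction Ax."""
    residual = -y / (1.0 + np.exp(y * Ax))   # pointwise derivative L'(Ax, y)
    return A[:, idx].T @ residual            # costs O(n * d_i) instead of O(n * d)

def update_prediction(A, Ax, x_block_old, x_block_new, idx):
    """Low-complexity update of the stored prediction after block `idx` has moved."""
    return Ax + A[:, idx] @ (x_block_new - x_block_old)
```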

### II-B Block selection and importance sampling

The convergence of the RBCD algorithm clearly depends on the block selection strategy of line 3 in Algorithm 1. One can select the group using a classic cyclic rule as in [5], [3], or as the realization of a random distribution [12], [18]. The uniform distribution is often used in order to ensure that all blocks are updated equally often, but convergence in expected value has been proved for any discrete distribution with non-null components ($p_i > 0$ for all $i$) [11].

In this work, we introduce a novel probability distribution for sampling blocks in RBCD. This distribution depends on the optimality conditions of each block; in other words, we want to update more often blocks that are still far from convergence. Formally, let $\mathbf{p}$ be the discrete probability distribution such that $p_i$ is the probability that block $i$ is selected at a given iteration, with $\sum_{i=1}^{g} p_i = 1$. We propose in this work to use the following distribution

$$p_i = \frac{\epsilon_i + \eta \, \|\boldsymbol{\epsilon}\|_\infty}{\sum_{j=1}^{g} \left( \epsilon_j + \eta \, \|\boldsymbol{\epsilon}\|_\infty \right)} \qquad (2)$$

where $\eta > 0$ is a user-defined parameter, $\boldsymbol{\epsilon}$ is a vector composed of coordinates $\epsilon_i$ and $\|\cdot\|_\infty$ is the infinity norm. As made clearer in the sequel, a component $\epsilon_i$ encodes the optimality condition violation in block $i$. Indeed, let $\Omega_i = \Omega_{i,1} - \Omega_{i,2}$, with $\Omega_{i,1}$ and $\Omega_{i,2}$ being two convex functions; then, if $\mathbf{x}^\star$ is a local minimizer of Problem (1), from Clarke subdifferential calculus [16], one can show that a necessary condition of optimality is that there exist $\mathbf{s} \in \partial \Omega_{i,1}(\mathbf{x}^\star_{\mathcal{G}_i})$ and $\mathbf{t} \in \partial \Omega_{i,2}(\mathbf{x}^\star_{\mathcal{G}_i})$ such that $\big[\nabla_{\mathcal{G}_i} f(\mathbf{x}^\star) + \mathbf{s} - \mathbf{t}\big]_j = 0$ for all $j$ and all blocks $i$. Accordingly, we define the optimality condition violation as

$$\epsilon_i = \min_{\mathbf{s} \in \partial \Omega_{i,1}(\mathbf{x}_{\mathcal{G}_i}),\; \mathbf{t} \in \partial \Omega_{i,2}(\mathbf{x}_{\mathcal{G}_i})} \big\| \nabla_{\mathcal{G}_i} f(\mathbf{x}) + \mathbf{s} - \mathbf{t} \big\|_\infty \qquad (3)$$

The role of $\eta$ in Equation (2) is to balance the effect of the optimality conditions on the distribution. As $\eta \rightarrow \infty$, we retrieve a uniform distribution; finite values of $\eta$ ensure that if a variable in a block has not converged, its block is likely to be updated more often than a block that has already converged, while every block keeps a non-zero selection probability. Note that, owing to the DC decomposition of $\Omega_i$, the violation (3) can be easily computed, even for non-convex penalty functions such as SCAD or the log-sum penalty, as discussed in [2].
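As a minimal sketch, and assuming the reconstructed form of Equation (2) given above (violations shifted by $\eta$ times their infinity norm, then normalized), the sampling step could be implemented as follows.

```python
import numpy as np

def block_probabilities(eps, eta):
    """Importance sampling distribution over blocks built from the (approximate)
    optimality violations eps, following the form of Equation (2) assumed above."""
    weights = eps + eta * np.max(np.abs(eps))
    total = weights.sum()
    if total == 0.0:                      # all blocks optimal: fall back to uniform
        return np.full(len(eps), 1.0 / len(eps))
    return weights / total

def draw_block(eps, eta, rng=None):
    """Draw one block index according to the importance sampling distribution."""
    rng = rng or np.random.default_rng()
    return int(rng.choice(len(eps), p=block_probabilities(eps, eta)))
```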

Computing the optimality condition violation vector exactly is not practical for RBCD since it requires the full gradient of the problem, which, as discussed in the previous section, is not computed at each iteration. As a solution, we propose to use a vector $\boldsymbol{\epsilon}$ initialized with the exact condition violations computed from the initial vector $\mathbf{x}^{(0)}$. Thereon, only the entry $\epsilon_i$ of the selected block is updated at each iteration, leading to an approximate optimality condition evaluation. Indeed, in Algorithm 1, line 4, when a partial gradient is computed, we can use it to update the approximate violation $\epsilon_i$ and then update the probabilities accordingly. This vector is clearly a coarse approximation of the optimality condition violation, but, as shown in the experiments, it is a relevant choice for the proposed importance sampling scheme.
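A sketch of this bookkeeping, where `block_violation` is a hypothetical helper evaluating the violation (3) on one block: the vector is filled exactly once at initialization, and only the entry of the block whose partial gradient was just computed is refreshed.

```python
import numpy as np

def init_violations(x0, n_blocks, block_violation):
    """Exact optimality condition violations (3), computed once from x^(0)."""
    return np.array([block_violation(x0, i) for i in range(n_blocks)])

def refresh_violation(eps, x, i, block_violation):
    """After the partial gradient of block i has been computed, refresh only eps[i];
    the other entries keep their (possibly stale) values."""
    eps[i] = block_violation(x, i)
    return eps
```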

### II-C On tricks of the trade

The proposed optimization algorithm has an important parameter that has to be chosen carefully: the initial gradient step size at each iteration. If chosen too small, the gradient steps will barely improve the objective value; if chosen too large, the backtracking step in lines 6-9 will require numerous evaluations of the loss function. In this work, we use an extension of the Barzilai-Borwein (BB) rule that has been adapted to a non-convex setting in [8]. This approach consists in using a Newton step with the approximate Hessian $\eta_k \mathbf{I}$. When performing the full gradient descent of GIST, the BB rule gives

$$\eta_k = \frac{\langle \Delta\mathbf{x}, \Delta\mathbf{g} \rangle}{\langle \Delta\mathbf{x}, \Delta\mathbf{x} \rangle} \qquad (4)$$

where $\Delta\mathbf{x} = \mathbf{x}^{(k)} - \mathbf{x}^{(k-1)}$ and $\Delta\mathbf{g} = \nabla f(\mathbf{x}^{(k)}) - \nabla f(\mathbf{x}^{(k-1)})$. Again, in our algorithm the full gradient is not available, but we can still benefit from the second-order approximation brought by the BB rule. We propose to this end to model the Hessian as a diagonal matrix whose diagonal weights are block-dependent. In other words, we store an estimate $\boldsymbol{\eta} \in \mathbb{R}^g$ whose components are updated similarly to Equation (4), but using the partial gradient and the block-wise variations $\Delta\mathbf{x}_{\mathcal{G}_i}$ and $\Delta\mathbf{g}_{\mathcal{G}_i}$ instead. This new rule is actually more general than the classical BB rule since it brings local information and encodes a more precise Hessian approximation with group-wise coefficients, similar to the variable metric in [5].
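A sketch of the block-wise BB estimate described above, where each block keeps its own scalar curvature weight; the clipping bounds are illustrative safeguards, not values from the paper.

```python
import numpy as np

def bb_weight_block(dx_block, dg_block, eta_prev, eta_min=1e-8, eta_max=1e8):
    """Block-wise Barzilai-Borwein weight, i.e. Equation (4) restricted to one block:
    dx_block = x_i^(k) - x_i^(k-1),  dg_block = grad_i f(x^(k)) - grad_i f(x^(k-1))."""
    denom = float(dx_block @ dx_block)
    if denom == 0.0:
        return eta_prev                    # block did not move: keep the previous weight
    eta = float(dx_block @ dg_block) / denom
    return float(np.clip(eta, eta_min, eta_max))
```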

## III Numerical Experiments

In this section, we illustrate the behaviour of our randomized BCD algorithm with importance sampling on toy and real-world classification problems. For all problems, we have considered a logistic loss function and the log-sum non-convex sparsity-inducing penalty defined as

$$\Omega(\mathbf{x}) = \lambda \sum_{j=1}^{d} \log\left(1 + \frac{|x_j|}{\theta}\right)$$

with $\theta > 0$. We have compared our algorithm to a non-convex proximal gradient algorithm known as GIST [8] and to a randomized BCD version of GIST with uniform sampling [11]. Note that since this regularization term is fully separable per variable, we used a separation of the variables into blocks of equal size.
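Assuming the log-sum penalty written above, the following sketch evaluates it and one classical DC decomposition of it (a weighted $\ell_1$ term minus a smooth convex remainder), which is what makes the violation (3) easy to compute for this penalty; $\lambda$ and $\theta$ map to `lam` and `theta`.

```python
import numpy as np

def log_sum_penalty(x, lam, theta):
    """Log-sum penalty lam * sum_j log(1 + |x_j| / theta)."""
    return lam * np.sum(np.log1p(np.abs(x) / theta))

def log_sum_dc_parts(x, lam, theta):
    """One classical DC decomposition Omega = Omega1 - Omega2 with both parts convex:
       Omega1(x) = (lam / theta) * ||x||_1
       Omega2(x) = lam * sum_j (|x_j| / theta - log(1 + |x_j| / theta))   (smooth)."""
    omega1 = (lam / theta) * np.sum(np.abs(x))
    omega2 = lam * np.sum(np.abs(x) / theta - np.log1p(np.abs(x) / theta))
    return omega1, omega2

def grad_omega2(x, lam, theta):
    """Gradient of the smooth convex part Omega2, needed when evaluating (3)."""
    return lam * np.sign(x) * (1.0 / theta - 1.0 / (theta + np.abs(x)))
```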

### III-A Toy problem

As in [14], we consider a binary classification problem in $\mathbb{R}^d$. Among these variables, only a few of them define a subspace in which the classes can be discriminated. For these relevant variables, the two classes follow Gaussian distributions with two different means and with covariance matrices randomly drawn from a Wishart distribution whose scale matrix is the identity matrix. The components of the class means have been independently and identically drawn from a fixed distribution. The other, non-relevant variables follow an i.i.d. Gaussian probability distribution with zero mean and unit variance for both classes. We have sampled examples for training and testing, with 1000 examples in the test set. Before learning, the training set has been normalized to zero mean and unit variance, and the test set has been rescaled accordingly. Note that the regularization hyperparameters have been set so as to maximize the performance of the GIST algorithm on the test set. We have initialized all algorithms with the zero vector ($\mathbf{x}^{(0)} = \mathbf{0}$).

The different algorithms have been compared based on their computational demands, more precisely on the number of flops they need to reach a stopping criterion. Hence, this criterion is critical for a fair comparison. The GIST algorithm has been run until it reaches a necessary optimality condition violation lower than a given tolerance or until a maximal number of iterations is attained. For the randomized algorithms, including our approach, the stopping criterion is set according to a maximal number of iterations. This number is set so that the number of coordinate gradient evaluations is equal for all algorithms, i.e., we have used $g$ times the number of GIST iterations, where $g$ is the number of blocks. In the sequel, the reported number of flops corresponds to those needed for computing both function values and gradient evaluations.

Figure 1 (left) presents examples of the optimality condition evolution with respect to the number of flops. These curves are obtained as averages over several runs for a given experimental set-up. We can first note that, with respect to the optimality condition, the RBCD algorithm with uniform sampling (Unif RBCD) behaves similarly to the GIST algorithm and to a cyclic BCD (Cyclic BCD). In terms of flops, little gain can be expected from such approaches. Instead, using importance sampling (IS RBCD) considerably helps in improving convergence. Such a behaviour can also be noted when monitoring the evolution of the objective value (see central panel in Figure 1). Randomized algorithms tend to converge faster towards their optimal value, with a clear advantage for the importance sampling approach. Finally, while they are not reported due to lack of space, the final classification performances are similar for all methods.

Figure 1 (right) depicts the evolution of the optimality conditions for different block-coordinate group sizes. We can note that, regardless of this size, our importance sampling approach achieves better performance than the GIST algorithm. In addition, it is clear that, for our examples, the smaller the group size, the faster the convergence.

### III-B Real-world classification problems

Table II: Comparison of GIST, importance sampling RBCD and uniform RBCD on real-world datasets (mean ± standard deviation).

Dataset | # examples | # features | Algorithm | Class. Rate (%) | Flops | Opt. Condition | Obj. Val.
---|---|---|---|---|---|---|---
classic | 7094 | 41681 | GIST | 96.37 ± 0.5 | 9277.76 ± 64.6 | 0.03 ± 0.0 | 32.64 ± 2.2
classic | 7094 | 41681 | IS RBCD | 95.11 ± 0.7 | 347.16 ± 4.1 | 0.01 ± 0.0 | 25.23 ± 0.8
classic | 7094 | 41681 | Unif RBCD | 95.87 ± 0.6 | 364.12 ± 66.2 | 0.03 ± 0.0 | 35.26 ± 0.8
la2 | 3075 | 31472 | GIST | 91.11 ± 1.1 | 3148.75 ± 287.8 | 0.06 ± 0.1 | 39.42 ± 57.7
la2 | 3075 | 31472 | IS RBCD | 90.98 ± 1.2 | 101.16 ± 3.6 | 0.15 ± 0.2 | 43.35 ± 59.0
la2 | 3075 | 31472 | Unif RBCD | 91.04 ± 0.9 | 108.11 ± 4.8 | 0.23 ± 0.3 | 45.51 ± 59.0
ohscal | 11162 | 11465 | GIST | 88.30 ± 0.6 | 7452.22 ± 895.6 | 2.65 ± 2.3 | 520.41 ± 451.2
ohscal | 11162 | 11465 | IS RBCD | 87.88 ± 0.8 | 164.42 ± 21.5 | 0.87 ± 0.6 | 480.53 ± 428.5
ohscal | 11162 | 11465 | Unif RBCD | 87.75 ± 0.8 | 156.45 ± 17.7 | 1.14 ± 1.1 | 480.55 ± 428.5
sports | 8580 | 14870 | GIST | 97.93 ± 0.4 | 5034.75 ± 1219.5 | 0.11 ± 0.1 | 208.11 ± 215.2
sports | 8580 | 14870 | IS RBCD | 97.76 ± 0.5 | 154.74 ± 20.3 | 0.07 ± 0.1 | 212.05 ± 215.3
sports | 8580 | 14870 | Unif RBCD | 97.86 ± 0.4 | 173.99 ± 10.6 | 0.39 ± 0.3 | 222.38 ± 215.3

We have also compared these algorithms on real-world high-dimensional learning problems. The related datasets have already been used as benchmarks in [8, 14]. For these problems, we have used part of the examples as the training set and the remaining ones as the test set. Again, the hyperparameters of the model have been chosen so as to roughly maximize the performance of the GIST algorithm. Stopping criteria of all algorithms have been set as previously, except that the maximal numbers of iterations of GIST and of the randomized algorithms have been capped. The number of blocks has been set to the same value for all datasets.

Performances of the different algorithms are reported in Table II. Three performance measures have been compared. Classification rates of all algorithms are almost similar, although the differences in performance are statistically significant in favor of GIST according to a Wilcoxon signed rank test. We explain this by the fact that the regularization parameters have been selected with respect to its generalization performance. The numbers of flops needed for convergence are highly in favor of the randomized algorithms: the gain factor is of more than an order of magnitude (roughly between 27 and 45 according to Table II). Interestingly, the exact optimality conditions after the algorithms have halted are always in favor of our importance sampling randomized BCD algorithm, except for the la2 dataset. Note that, in the table, we have also provided the objective values of the algorithms upon convergence. As one may expect in a non-convex optimization problem, different “nearly” optimal objective values lead to similar classification rates, stressing the existence of several local minimizers with good generalization properties.

## IV Conclusion

This paper introduced a framework for randomized block coordinate descent algorithms that leverages importance sampling. We presented a sampling distribution that biases the algorithm towards block coordinates that are still far from convergence. While this idea is rather simple, our experimental results have shown that it considerably helps in achieving faster empirical convergence of the randomized BCD algorithm. Future work will be devoted to the theoretical analysis of the impact of importance sampling on the convergence rate. In addition, we plan to carry out thorough experimental analyses to unveil the impact of the algorithm parameters.

## References

- [1] A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183–202, 2009.
- [2] A. Boisbunon, R. Flamary, and A. Rakotomamonjy, “Active set strategy for high-dimensional non-convex sparse optimization problems,” in Acoustics, Speech and Signal Processing (ICASSP), IEEE International Conference on. IEEE, Firenze, Italy, May 2014, pp. 1517–1521.
- [3] J. Bolte, S. Sabach, and M. Teboulle, “Proximal alternating linearized minimization for nonconvex and nonsmooth problems,” Mathematical Programming, vol. 146, no. 1-2, pp. 459–494, 2014.
- [4] E. Candès, M. Wakin, and S. Boyd, “Enhancing sparsity by reweighted $\ell_1$ minimization,” J. Fourier Analysis and Applications, vol. 14, no. 5-6, pp. 877–905, 2008.
- [5] E. Chouzenoux, J.-C. Pesquet, and A. Repetti, “Variable metric forward–backward algorithm for minimizing the sum of a differentiable function and a convex function,” Journal of Optimization Theory and Applications, vol. 162, no. 1, pp. 107–132, 2014.
- [6] J. Fan and R. Li, “Variable selection via nonconcave penalized likelihood and its oracle properties,” Journal of the American Statistical Association, vol. 96, no. 456, pp. 1348–1360, 2001.
- [7] G. Gasso, A. Rakotomamonjy, and S. Canu, “Recovering sparse signals with a certain family of non-convex penalties and dc programming,” IEEE Trans. Signal Processing, vol. 57, no. 12, pp. 4686–4698, 2009.
- [8] P. Gong, C. Zhang, Z. Lu, J. Huang, and J. Ye, “A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems,” in Proceedings of the 30th International Conference on Machine Learning, Atlanta, Georgia, Jun. 2013, pp. 37–45.
- [9] K. Knight and W. Fu, “Asymptotics for lasso-type estimators,” Annals of Statistics, vol. 28, no. 5, pp. 1356–1378, 2000.
- [10] L. Laporte, R. Flamary, S. Canu, S. Dejean, and J. Mothe, “Nonconvex regularizations for feature selection in ranking with sparse svm,” Neural Networks and Learning Systems, IEEE Transactions on, vol. 25, no. 6, pp. 1118–1130, 2014.
- [11] Z. Lu and L. Xiao, “A randomized nonmonotone block proximal gradient method for a class of structured nonlinear programming,” ArXiv, no. 1306.5918v2, 2015.
- [12] Y. Nesterov, “Efficiency of coordinate descent methods on huge-scale optimization problems,” SIAM Journal on Optimization, vol. 22, no. 2, pp. 341–362, 2012.
- [13] Z. Qu and P. Richtárik, “Coordinate descent with arbitrary sampling i: Algorithms and complexity,” arXiv preprint arXiv:1412.8060, 2014.
- [14] A. Rakotomamonjy, R. Flamary, and G. Gasso, “DC proximal Newton for nonconvex optimization problems,” IEEE Trans. on Neural Networks and Learning Systems, vol. 1, no. 1, pp. 1–13, 2015.
- [15] P. Richtárik and M. Takác, “Efficient serial and parallel coordinate descent methods for huge-scale truss topology design,” in Operations Research Proceedings 2011. Springer, 2012, pp. 27–32.
- [16] R. T. Rockafellar and R. J.-B. Wets, Variational analysis. Springer Science & Business Media, 2009, vol. 317.
- [17] S. Shalev-Shwartz and A. Tewari, “Stochastic methods for $\ell_1$-regularized loss minimization,” The Journal of Machine Learning Research, vol. 12, pp. 1865–1892, 2011.
- [18] R. Tappenden, P. Richtárik, and J. Gondzio, “Inexact coordinate descent: Complexity and preconditioning,” ArXiv e-prints, 2013.
- [19] P. Tseng, “Convergence of block coordinate descent method for nondifferentiable minimization,” Journal of Optimization Theory and Applications, vol. 109, pp. 475–494, 2001.