Importance Sampling for Minibatches
Abstract
Minibatching is a very well studied and highly popular technique in supervised learning, used by practitioners due to its ability to accelerate training through better utilization of parallel processing power and reduction of stochastic variance. Another popular technique is importance sampling – a strategy for preferential sampling of more important examples also capable of accelerating the training process. However, despite considerable effort by the community in these areas, and due to the inherent technical difficulty of the problem, there is no existing work combining the power of importance sampling with the strength of minibatching. In this paper we propose the first importance sampling for minibatches and give simple and rigorous complexity analysis of its performance. We illustrate on synthetic problems that for training data of certain properties, our sampling can lead to several orders of magnitude improvement in training time. We then test the new sampling on several popular datasets, and show that the improvement can reach an order of magnitude.
1 Introduction
Supervised learning is a widely adopted learning paradigm with important applications such as regression, classification and prediction. The most popular approach to training supervised learning models is via empirical risk minimization (ERM). In ERM, the practitioner collects data composed of example-label pairs, and seeks to identify the best predictor by minimizing the empirical risk, i.e., the average risk associated with the predictor over the training data.
With ever increasing demand for accuracy of the predictors, largely due to successful industrial applications, and with ever more sophisticated models that need to be trained, such as deep neural networks [8, 14] or multiclass classifiers [9], increasing volumes of data are used in the training phase. This leads to huge and hence extremely computationally intensive ERM problems.
Batch algorithms—methods that need to look at all the data before taking a single step to update the predictor—have long been known to be prohibitively impractical to use. Typical examples of batch methods are gradient descent and classical quasi-Newton methods. One of the most popular algorithms for overcoming the deluge-of-data issue is stochastic gradient descent (SGD), which can be traced back to a seminal work of Robbins and Monro [28]. In SGD, a single random example is selected in each iteration, and the predictor is updated using the information obtained by computing the gradient of the loss function associated with this example. This leads to a much more fine-grained iterative process, but at the same time introduces considerable stochastic noise, which eventually—typically after one or a few passes over the data—effectively halts the progress of the method, rendering it unable to push the training error (empirical risk) to the realm of small values.
1.1 Strategies for dealing with stochastic noise
Several approaches have been proposed to deal with the issue of stochastic noise. The most important of these are i) decreasing stepsizes, ii) minibatching, iii) importance sampling and iv) variance reduction via “shift”, listed here from historically first to the most modern.
The first strategy, decreasing stepsizes, takes care of the noise issue by a gradual and direct scaledown process, which ensures that SGD converges to the ERM optimum [38]. However, an unwelcome side effect of this is a considerable slowdown of the iterative process [1]. For instance, the convergence rate is sublinear even if the function to be minimized is strongly convex.
The second strategy, minibatching, deals with the noise by utilizing a random set of examples in the estimate of the gradient, which effectively decreases the variance of the estimate [35]. However, this has the unwelcome side-effect of requiring more computation. On the other hand, if a parallel processing machine is available, the computation can be done concurrently, which ultimately leads to speedup. This strategy does not result in an improvement of the convergence rate (unless progressively larger minibatch sizes are used, at the cost of further computational burden [7]), but can lead to massive improvement of the leading constant, which ultimately means acceleration (almost linear speedup for sparse data) [36].
The third strategy, importance sampling, operates by a careful data-driven design of the probabilities of selecting examples in the iterative process, leading to a reduction of the variance of the stochastic gradient thus selected. Typically, the overhead associated with computing the sampling probabilities and with sampling from the resulting distribution is negligible, and hence the net effect is speedup. In terms of theory, as in the case of minibatching, this strategy leads to the improvement of the leading constant in the complexity estimate, typically via replacing the maximum of certain data-dependent quantities by their average [24, 12, 40, 23, 19, 2, 3].
Finally, and most recently, there has been a considerable amount of research activity due to the groundbreaking realization that one can gain the benefits of SGD (cheap iterations) without having to pay through the side effects mentioned above (e.g., halt in convergence due to decreasing stepsizes or increase of workload due to the use of minibatches). The result, in theory, is that for strongly convex losses (for example), one does not have to suffer sublinear convergence any more, but instead a fast linear rate “kicks in”. In practice, these methods dramatically surpass all previous existing approaches.
The main algorithmic idea is to change the search direction itself, via a properly designed and cheaply maintainable "variance-reducing shift" (control variate). Methods in this category are of two types: those operating in the primal space (i.e., directly on ERM) and those operating in a dual space (i.e., with the dual of the ERM problem). Methods of the primal variety include SAG [29], SVRG [10], S2GD [11], proxSVRG [37], SAGA [4], mS2GD [13] and MISO [17]. Methods of the dual variety work by updating randomly selected dual variables, which correspond to examples. These methods include SCD [31], RCDM [20, 26], SDCA [34], Hydra [25, 6], mSDCA [36], APCG [15], AsySPDC [16], RCD [18], APPROX [5], SPDC [39], ProxSDCA [32], ASDCA [33], IProxSDCA [40], and QUARTZ [23].
1.2 Combining strategies
We wish to stress that the key strategies, minibatching, importance sampling and variance-reducing shift, should be seen as orthogonal tricks, and as such they can be combined, achieving an amplification effect. For instance, the first primal variance-reduced method allowing for minibatching was [13]; while dual-based methods in this category include [33, 23, 2]. Variance-reduced methods with importance sampling include [20, 26, 24, 21] for general convex minimization problems, and [40, 23, 19, 2] for ERM.
2 Contributions
Despite considerable effort of the machine learning and optimization research communities, no importance sampling for minibatches had previously been proposed or analyzed. The reason for this lies in the underlying theoretical and computational difficulties associated with the design and successful implementation of such a sampling. One needs to come up with a way to focus on a reasonable set of subsets (minibatches) of the examples to be used in each iteration (issue: there are many subsets; which ones to choose?), assign meaningful data-dependent non-uniform probabilities to them (issue: how?), and then be able to sample these subsets according to the chosen distribution (issue: this could be computationally expensive).
The tools that would enable one to consider these questions did not exist until recently. However, due to a recent line of work on analyzing variance-reduced methods utilizing what is known as arbitrary sampling [24, 23, 21, 22, 2], we are able to ask these questions and provide answers. In this work we design a novel family of samplings—bucket samplings—and a particular member of this family—importance sampling for minibatches. We illustrate the power of this sampling in combination with the reduced-variance dfSDCA method for ERM. This method is a primal variant of SDCA, first analyzed by Shalev-Shwartz [30], and extended by Csiba and Richtárik [2] to the arbitrary sampling setting. However, our sampling can be combined with any stochastic method for ERM, such as SGD or S2GD, and extends beyond the realm of ERM, to convex optimization problems in general. However, for simplicity, we do not discuss these extensions in this work.
We analyze the performance of the new sampling theoretically, and by inspecting the results we are able to comment on when one can expect to benefit from it. We illustrate on synthetic datasets with varying distributions of example sizes that our approach can lead to dramatic speedups when compared against standard (uniform) minibatching, of one or more orders of magnitude. We then test our method on real datasets and confirm that the use of importance minibatching leads to up to an order of magnitude speedup. Based on our experiments and theory, we predict that for real data with particular shapes and distributions of example sizes, importance sampling for minibatches will operate in a favourable regime, and can lead to speedup higher than one order of magnitude.
3 The Problem
Let $A = [a_1, \dots, a_n] \in \mathbb{R}^{d \times n}$ be a data matrix in which features are represented in rows and examples in columns, and let $y \in \mathbb{R}^n$ be a vector of labels corresponding to the examples. Our goal is to find a linear predictor $w \in \mathbb{R}^d$ such that $a^\top w \approx y$, where the example-label pair $(a, y)$ is sampled from the underlying distribution over data-label pairs. In the L2-regularized Empirical Risk Minimization problem, we find $w$ by solving the optimization problem
$$\min_{w \in \mathbb{R}^d} \left[ P(w) := \frac{1}{n} \sum_{i=1}^n \phi_i(a_i^\top w) + \frac{\lambda}{2} \|w\|^2 \right], \qquad (1)$$
where $\phi_i : \mathbb{R} \to \mathbb{R}$ is a loss function associated with example-label pair $(a_i, y_i)$, and $\lambda > 0$. For instance, the square loss function is given by $\phi_i(t) = \frac{1}{2}(t - y_i)^2$. Our results are not limited to L2-regularized problems though: an arbitrary strongly convex regularizer can be used instead [23]. We shall assume throughout that the loss functions are convex and $1/\gamma$-smooth, where $\gamma > 0$. The latter means that for all $x \in \mathbb{R}$ and all $h \in \mathbb{R}$, we have
$$\phi_i(x + h) \le \phi_i(x) + \phi_i'(x) h + \frac{h^2}{2\gamma}.$$
This setup includes ridge and logistic regression, smoothed hinge loss, and many other problems as special cases [34]. Again, our sampling can be adapted to settings with non-smooth losses, such as the hinge loss.
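To make the setup concrete, here is a short illustrative Python sketch (our own, not from the paper) of the regularized empirical risk (1) for the square and logistic losses. The function name and data layout are our choices; the layout follows the paper's convention of examples as columns.

```python
import numpy as np

def erm_objective(w, A, y, lam, loss="square"):
    """L2-regularized empirical risk (1): (1/n) sum_i phi_i(a_i^T w) + (lam/2)||w||^2.

    A is d x n with examples as columns, following the paper's convention.
    """
    margins = A.T @ w                            # a_i^T w for each example i
    if loss == "square":
        risks = 0.5 * (margins - y) ** 2         # square loss: 1-smooth (gamma = 1)
    elif loss == "logistic":
        risks = np.logaddexp(0.0, -y * margins)  # logistic loss: (1/4)-smooth (gamma = 4)
    else:
        raise ValueError(loss)
    return risks.mean() + 0.5 * lam * (w @ w)
```

At $w = 0$ the square-loss objective reduces to $\frac{1}{2n}\sum_i y_i^2$, which is a convenient sanity check.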
4 The Algorithm
In this paper we illustrate the power of our new sampling in tandem with Algorithm 1 (dfSDCA) for solving (1).
The method has two parameters. The first is a "sampling" $\hat{S}$, which is a random set-valued mapping [27] with values being subsets of $[n] := \{1, 2, \dots, n\}$, the set of examples. No assumptions are made on the distribution of $\hat{S}$ apart from requiring that $p_i := \mathbb{P}(i \in \hat{S})$ is positive for each $i \in [n]$, which simply means that each example has to have a chance of being picked. The second parameter is a stepsize $\theta$, which should be as large as possible, but not larger than a certain theoretically allowable maximum depending on $P$ and $\hat{S}$, beyond which the method could diverge.
Algorithm 1 maintains $n$ "dual" variables, $\alpha_1^t, \dots, \alpha_n^t \in \mathbb{R}^d$, which act as variance-reducing shifts. This is most easily seen in the case when we assume that $|\hat{S}| = 1$ with probability 1 (no minibatching). Indeed, in that case the update of $w$ is a step along the shifted (variance-reduced) direction $g^t = \nabla f_i(w^t) + \alpha_i^t$, where $f_i(w) := \phi_i(a_i^\top w)$, $\nabla f_i(w^t)$ is the stochastic gradient, and $i$ is the selected example. If the stepsize $\theta$ is set to a proper value, as we shall see next, then it turns out that for all $i$, $\alpha_i^t$ converges to $\alpha_i^* := -\nabla f_i(w^*)$, where $w^*$ is the solution to (1), which means that the shifted stochastic gradient converges to zero. This means that its variance is progressively vanishing, and hence no additional strategies, such as decreasing stepsizes or minibatching, are necessary to reduce the variance and stabilize the process. In general, dfSDCA in each step picks a random subset of the examples, $S^t \sim \hat{S}$, updates the variables $\alpha_i^t$ for $i \in S^t$, and then uses these to update the predictor $w^t$.
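The serial special case can be sketched in a few lines of Python. This is our own illustrative implementation, following the dual-free SDCA update form of Shalev-Shwartz [30] for the square loss with uniform serial sampling; the conservative stepsize choice and all names are assumptions, not a prescription from the paper.

```python
import numpy as np

def dfsdca_serial(A, y, lam, epochs=100, seed=0):
    """Sketch of dfSDCA with |S| = 1 (no minibatching), square loss phi_i(t) = 0.5(t - y_i)^2.

    Each alpha_i acts as a variance-reducing shift; w is maintained so that
    w = (1/(lam*n)) * sum_i alpha_i throughout (dual-free SDCA invariant).
    """
    d, n = A.shape
    rng = np.random.default_rng(seed)
    alpha = np.zeros((n, d))                    # one shift per example
    w = np.zeros(d)                             # consistent with alpha = 0
    L = (A ** 2).sum(axis=0).max()              # smoothness constant of the f_i's
    eta = min(1.0 / (4 * L), 1.0 / (lam * n))   # conservative stepsize (assumption)
    for _ in range(epochs * n):
        i = rng.integers(n)
        g = (A[:, i] @ w - y[i]) * A[:, i]      # stochastic gradient of f_i at w
        shifted = g + alpha[i]                  # variance-reduced direction
        alpha[i] -= eta * lam * n * shifted
        w -= eta * shifted
    return w
```

For the square loss this converges to the ridge-regression solution $w^* = (AA^\top/n + \lambda I)^{-1} A y / n$, which gives a direct way to test the sketch.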
4.1 Complexity of dfSDCA
In order to state the theoretical properties of the method, we define the potential
$$\Psi^t := \frac{\lambda}{2}\|w^t - w^*\|^2 + \frac{\gamma}{2n}\sum_{i=1}^n \|\alpha_i^t - \alpha_i^*\|^2.$$
Most crucially to this paper, we assume the knowledge of parameters $v_1, \dots, v_n > 0$ for which the following ESO (Expected Separable Overapproximation [27, 22]) inequality holds:
$$\mathbb{E}\left[\Big\|\sum_{i \in \hat{S}} h_i a_i\Big\|^2\right] \le \sum_{i=1}^n p_i v_i h_i^2 \qquad (2)$$
for all $h \in \mathbb{R}^n$. Tight and easily computable formulas for such parameters can be found in [22]. For instance, whenever $\mathbb{P}(|\hat{S}| \le \tau) = 1$, inequality (2) holds with $v_i = \tau \|a_i\|^2$. However, this is a conservative choice of the parameters. Convergence of dfSDCA is described in the next theorem.

Theorem 1 ([2]).
Assume the losses $\phi_i$ are convex and $1/\gamma$-smooth, and let $v$ satisfy (2). If dfSDCA is run with the stepsize
$$\theta = \min_i \frac{p_i \lambda \gamma n}{v_i + \lambda \gamma n}, \qquad (3)$$
then $\mathbb{E}[\Psi^t] \le (1 - \theta)^t \Psi^0$. In particular, the number of iterations sufficient to drive the expected suboptimality below $\epsilon$ is proportional to
$$\max_i \frac{v_i + \lambda \gamma n}{p_i \lambda \gamma n} \cdot \log\left(\frac{1}{\epsilon}\right). \qquad (4)$$
5 Bucket Sampling
We shall first explain the concept of “standard” importance sampling.
5.1 Standard importance sampling
Assume that $\hat{S}$ always picks a single example only, i.e., $|\hat{S}| = 1$ with probability 1. In this case, (2) holds with $v_i = \|a_i\|^2$, independently of the choice of $p$ [22]. This allows us to choose the sampling probabilities as
$$p_i = \frac{\|a_i\|^2 + \lambda\gamma n}{\sum_{i'=1}^n \left(\|a_{i'}\|^2 + \lambda\gamma n\right)},$$
which ensures that (4) is minimized. This is importance sampling. The number of iterations of dfSDCA is in this case proportional to
$$n + \frac{\frac{1}{n}\sum_{i=1}^n \|a_i\|^2}{\lambda\gamma}.$$
If uniform probabilities $p_i = 1/n$ are used, the average in the above formula gets replaced by the maximum:
$$n + \frac{\max_i \|a_i\|^2}{\lambda\gamma}.$$
Hence, one should expect the following speedup when comparing the importance and uniform samplings:
$$\sigma := \frac{n\lambda\gamma + \max_i \|a_i\|^2}{n\lambda\gamma + \frac{1}{n}\sum_{i=1}^n \|a_i\|^2}. \qquad (5)$$
If $\sigma = 10$, for instance, then dfSDCA with importance sampling is $10\times$ faster than dfSDCA with uniform sampling.
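The serial importance distribution and the predicted speedup (5) are cheap to compute. A small helper (our own illustration, assuming $v_i = \|a_i\|^2$ as above):

```python
import numpy as np

def importance_probs_and_speedup(A, lam, gamma):
    """Serial importance sampling: with v_i = ||a_i||^2, take p_i ~ v_i + lam*gamma*n.

    Returns the probabilities and the predicted speedup (5) over uniform sampling:
    sigma = (n*lam*gamma + max_i v_i) / (n*lam*gamma + mean_i v_i).
    """
    n = A.shape[1]
    v = (A ** 2).sum(axis=0)                 # v_i = ||a_i||^2, one per column
    p = v + lam * gamma * n
    p = p / p.sum()                          # normalize to a distribution
    sigma = (n * lam * gamma + v.max()) / (n * lam * gamma + v.mean())
    return p, sigma
```

When all examples have equal norms, the probabilities reduce to uniform and the predicted speedup is exactly 1, as expected.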
5.2 Uniform minibatch sampling
In machine learning, the term "minibatch" is virtually synonymous with a special sampling, which we shall here refer to by the name $\tau$-nice sampling [27]. Sampling $\hat{S}$ is $\tau$-nice if it picks uniformly at random from the collection of all subsets of $[n]$ of cardinality $\tau$. Clearly, $p_i = \tau/n$ and, moreover, it was shown by Qu and Richtárik [22] that (2) holds with $v$ defined by
$$v_i = \sum_{j=1}^d \left(1 + \frac{(\omega_j - 1)(\tau - 1)}{n - 1}\right) A_{ji}^2, \qquad (6)$$
where $\omega_j := |\{i \in [n] : A_{ji} \neq 0\}|$ is the number of examples which express feature $j$. In the case of the $\tau$-nice sampling we have the stepsize and complexity given by
$$\theta = \min_i \frac{\tau}{n} \cdot \frac{\lambda\gamma n}{v_i + \lambda\gamma n}, \qquad (7)$$
$$\frac{n}{\tau} + \frac{\max_i v_i}{\tau\lambda\gamma}. \qquad (8)$$
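The parameters (6) depend only on the sparsity pattern of $A$ and can be computed in one pass over the nonzeros. A possible Python implementation (ours, assuming $n > 1$):

```python
import numpy as np

def tau_nice_eso(A, tau):
    """ESO parameters (6) for the tau-nice sampling:
    v_i = sum_j (1 + (omega_j - 1)(tau - 1)/(n - 1)) * A_ji^2,
    where omega_j is the number of examples expressing feature j. Assumes n > 1.
    """
    d, n = A.shape
    omega = (A != 0).sum(axis=1)                    # omega_j, one per feature (row)
    weights = 1 + (omega - 1) * (tau - 1) / (n - 1)
    return (weights[:, None] * A ** 2).sum(axis=0)  # one v_i per example (column)
```

For $\tau = 1$ this recovers $v_i = \|a_i\|^2$, and for $\tau = n$ the per-feature weight becomes $\omega_j$.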
5.3 Bucket sampling: definition
We now propose a family of samplings, which we call bucket samplings. Let $B_1, \dots, B_\tau$ be a partition of $[n]$ into $\tau$ nonempty sets ("buckets").
Definition 2 (Bucket sampling).
We say that $\hat{S}$ is a bucket sampling if $|\hat{S} \cap B_l| = 1$ for all $l \in [\tau] := \{1, \dots, \tau\}$, with probability 1.
Informally, a bucket sampling picks one example from each of the $\tau$ buckets, forming a minibatch. Hence, $|\hat{S}| = \tau$ and $\sum_{i \in B_l} p_i = 1$ for each $l \in [\tau]$, where, as before, $p_i := \mathbb{P}(i \in \hat{S})$. Notice that given the partition, the vector $p = (p_1, \dots, p_n)$ uniquely determines a bucket sampling. Hence, we have a family of samplings indexed by a single $n$-dimensional vector. Let $\mathcal{P}$ be the set of all vectors describing bucket samplings associated with partition $B_1, \dots, B_\tau$. Clearly,
$$\mathcal{P} = \Big\{p \in \mathbb{R}^n_+ : \sum_{i \in B_l} p_i = 1 \text{ for all } l \in [\tau]\Big\}.$$
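Drawing a minibatch from a bucket sampling is straightforward: pick one index per bucket, independently across buckets, according to the within-bucket probabilities. A sketch (our own; the names and the defensive normalization are assumptions):

```python
import numpy as np

def bucket_sample(buckets, p, rng):
    """Draw one minibatch from a bucket sampling: exactly one example per bucket.

    `buckets` is a list of index arrays partitioning {0, ..., n-1}; `p` holds the
    selection probabilities (normalized within each bucket before drawing).
    """
    return [rng.choice(b, p=p[b] / p[b].sum()) for b in buckets]
```

By construction every draw has cardinality $\tau$ and contains exactly one index from each bucket.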
5.4 Optimal bucket sampling
The optimal bucket sampling is the one for which (4) is minimized, which leads to a complicated optimization problem:
$$\min_{p \in \mathcal{P}} \max_i \frac{v_i + \lambda\gamma n}{p_i \lambda\gamma n}.$$
A particular difficulty here is the fact that the parameters $v_1, \dots, v_n$ depend on the vector $p$ in a complicated way. In order to resolve this issue, we prove the following result.
Theorem 3.
Let $\hat{S}$ be a bucket sampling described by partition $B_1, \dots, B_\tau$ and vector $p \in \mathcal{P}$. Then the ESO inequality (2) holds for parameters $v$ set to
$$v_i = \sum_{j=1}^d \left(1 + (\omega_j' - 1)\,\delta_j\right) A_{ji}^2, \qquad (9)$$
where $J_j := \{i \in [n] : A_{ji} \neq 0\}$, $\omega_j' := |\{l : J_j \cap B_l \neq \emptyset\}|$, and $\delta_j := \max_l \sum_{i \in J_j \cap B_l} p_i \le 1$.
Observe that $J_j$ is the set of examples which express feature $j$, and $\omega_j'$ is the number of buckets intersecting with $J_j$. Clearly, $1 \le \omega_j' \le \min\{\omega_j, \tau\}$ (if $\omega_j = 0$, we simply discard this feature from our data as it is not needed). Note that the effect of the quantities $\delta_j$ on the value of $v_i$ is small. Indeed, unless we are in the extreme situation when $\delta_j = 1$, which has the effect of neutralizing the discount, the quantity $1 + (\omega_j' - 1)\delta_j$ is between $1$ and $\omega_j'$. Hence, for simplicity, we could instead use the slightly more conservative parameters:
$$v_i = \sum_{j=1}^d \omega_j' A_{ji}^2.$$
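The conservative parameters above depend only on how the nonzeros of each feature spread across the buckets. A possible implementation (our own illustration):

```python
import numpy as np

def bucket_eso_conservative(A, buckets):
    """Conservative ESO parameters for a bucket sampling: v_i = sum_j omega'_j * A_ji^2,
    where omega'_j counts the buckets intersecting J_j = {i : A_ji != 0}.
    """
    d, n = A.shape
    omega_prime = np.zeros(d)
    for b in buckets:
        omega_prime += (A[:, b] != 0).any(axis=1)  # does feature j touch bucket b?
    return (omega_prime[:, None] * A ** 2).sum(axis=0)
```

With singleton buckets ($\tau = n$), $\omega_j'$ coincides with $\omega_j$, which gives a simple consistency check.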
5.5 Uniform bucket sampling
Assume all buckets are of the same size: $|B_l| = n/\tau$ for all $l \in [\tau]$. Further, assume that $p_i = \tau/n$ for all $i \in [n]$. Then $\delta_j = \frac{\tau}{n} \max_l |J_j \cap B_l|$, and hence Theorem 3 says that (2) holds with
$$v_i = \sum_{j=1}^d \left(1 + (\omega_j' - 1)\,\frac{\tau}{n}\max_l |J_j \cap B_l|\right) A_{ji}^2, \qquad (10)$$
and in view of (4), the complexity of dfSDCA with this sampling becomes
$$\frac{n}{\tau} + \frac{\max_i v_i}{\tau\lambda\gamma}. \qquad (11)$$
Formula (10) is very similar to the formula (6) for the $\tau$-nice sampling, despite the fact that the sets/minibatches generated by the uniform bucket sampling have a special structure with respect to the buckets. Indeed, it is easily seen that the difference between the weights $1 + \frac{(\omega_j - 1)(\tau - 1)}{n - 1}$ and $1 + (\omega_j' - 1)\frac{\tau}{n}\max_l |J_j \cap B_l|$ is negligible. Moreover, if either $\tau = 1$ or $\omega_j = n$ for all $j$, then the two weights coincide ($1$ and $\tau$, respectively), and hence the uniform bucket sampling has the same parameters $v_i$ as the $\tau$-nice sampling.
6 Importance Minibatch Sampling
In the light of Theorem 3, we can formulate the problem of searching for the optimal bucket sampling as
$$\min_{p \in \mathcal{P}} \max_i \frac{v_i(p) + \lambda\gamma n}{p_i \lambda\gamma n}, \quad \text{with } v(p) \text{ given by (9)}. \qquad (12)$$
Still, this is not an easy problem. Importance minibatch sampling arises as an approximate solution of (12). Note that the uniform minibatch sampling is a feasible solution of the above problem, and hence we should be able to improve upon its performance.
6.1 Approach 1: alternating optimization
Given a probability distribution $p \in \mathcal{P}$, we can easily find $v(p)$ using Theorem 3. On the other hand, for any fixed $v$, we can minimize (12) over $p \in \mathcal{P}$ by choosing the probabilities in each group $B_l$ and for each $i \in B_l$ via
$$p_i = \frac{v_i + \lambda\gamma n}{\sum_{i' \in B_l} (v_{i'} + \lambda\gamma n)}. \qquad (13)$$
This leads to a natural alternating optimization strategy. In our experiments, this strategy often converges to a pair $(p, v)$ for which (13) holds. Therefore, the resulting complexity will be
$$\frac{n}{\tau} + \frac{\max_l \frac{\tau}{n}\sum_{i \in B_l} v_i}{\tau\lambda\gamma}. \qquad (14)$$
We can compare this result against the complexity of the $\tau$-nice sampling in (8). We can observe that the terms are very similar, up to two differences. First, the importance minibatch sampling has a maximum over group averages instead of a maximum over all elements, which leads to speedup, other things equal. On the other hand, the parameters $v$ appearing in the two bounds are different quantities. The alternating optimization procedure for computing $(p, v)$ is costly, as one iteration takes a pass over all data. Therefore, in the next subsection we propose a closed-form formula which, as we found empirically, offers a nearly optimal convergence rate.
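Step (13) of the alternating scheme—within-bucket importance probabilities for a fixed $v$—is easy to implement. The helper below is our own sketch; the test also checks the "maximum of group averages vs. global maximum" effect discussed above.

```python
import numpy as np

def bucket_importance_probs(v, buckets, lam, gamma, n):
    """Probabilities (13): within each bucket B_l, p_i ~ v_i + lam*gamma*n,
    normalized so that the probabilities in each bucket sum to 1.
    """
    p = np.zeros(n)
    for b in buckets:
        s = v[b] + lam * gamma * n
        p[b] = s / s.sum()
    return p
```

One alternating-optimization iteration then consists of recomputing $v(p)$ via Theorem 3 and re-applying this formula.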
6.2 Approach 2: practical formula
For each group $B_l$, let us choose the probabilities for all $i \in B_l$ as follows:
$$p_i = \frac{v_i^u + \lambda\gamma n}{\sum_{i' \in B_l} (v_{i'}^u + \lambda\gamma n)}, \qquad (15)$$
where $v^u$ is given by (10), i.e., the ESO parameters associated with the uniform bucket sampling. After doing some simplifications, the associated complexity result is
$$\max_l c_l \left(\frac{n}{\tau} + \frac{\bar{v}_l^u}{\tau\lambda\gamma}\right),$$
where
$$c_l := \max_{i \in B_l} \frac{v_i + \lambda\gamma n}{v_i^u + \lambda\gamma n}$$
and
$$\bar{v}_l^u := \frac{\tau}{n}\sum_{i \in B_l} v_i^u.$$
We would ideally want to have $c_l = 1$ for all $l$ (this is what we get for importance sampling without minibatches). If $c_l \le 1$ for all $l$, then the complexity is an improvement on the complexity (11) of the uniform minibatch sampling, since the maximum of group averages is never larger than the maximum of all elements:
$$\max_l \bar{v}_l^u \le \max_i v_i^u.$$
Indeed, the difference can be very large.
7 Experiments
We now comment on the results of our numerical experiments, with both synthetic and real datasets. We plot the optimality gap $P(w^t) - P(w^*)$ (vertical axis) against the computational effort (horizontal axis). We measure computational effort by the number of effective passes through the data divided by $\tau$. We divide by $\tau$ as a normalization factor, since we compare methods with a range of values of $\tau$. This is reasonable as it simply indicates that the updates are performed in parallel. Hence, what we plot is an implementation-independent model for time.
We compared two algorithms:
nice: dfSDCA using $\tau$-nice sampling (uniform minibatching).
imp: dfSDCA using importance sampling (i.e., importance minibatch sampling) defined in Subsection 6.2.
For each dataset we provide two plots. In the left figure we plot the convergence of nice for different values of $\tau$, and in the right figure we do the same for imp. The horizontal axis has the same range in both plots, so they are easily comparable. The values of $\tau$ we used to plot are $\tau \in \{1, 2, 4, 8, 16, 32\}$. In all experiments we used the logistic loss $\phi_i(t) = \log(1 + e^{-y_i t})$ and set the regularizer $\lambda$ to a fixed value. We report the theoretical and empirical speedup ratio of imp over nice. The theoretical ratio is computed from the corresponding theory. The empirical ratio is the ratio between the horizontal axis values at the moments when the two algorithms reached a fixed target precision.
7.1 Artificial data
We start with experiments using artificial data, where we can control the sparsity pattern of $A$ and the distribution of the norms $L_i := \|a_i\|^2$. We fix the number of features $d$ and the number of examples $n$. For each feature we sampled a random sparsity coefficient to keep the average sparsity under control. We used two different regimes of sparsity: sparse (10% nonzeros) and dense (80% nonzeros). After deciding on the sparsity pattern, we rescaled the examples to match a specific distribution of norms $L_i$; see Table 1. The code column shows the corresponding code in Julia to create the vector of norms $L = (L_1, \dots, L_n)$. The distributions can be also observed as histograms in Figure 1.
label | Julia code | max(L)/mean(L)
extreme | L = ones(n); L[1] = 1000 | 980.4
chisq1 | L = rand(Chisq(1), n) | 17.1
chisq10 | L = rand(Chisq(10), n) | 3.9
chisq100 | L = rand(Chisq(100), n) | 1.7
uniform | L = 2*rand(n) | 2.0
The corresponding experiments can be found in Figures 3 and 4. The theoretical and empirical speedups are also summarized in Tables 2 and 3.
Data | τ=1 | τ=2 | τ=4 | τ=8 | τ=16 | τ=32
uniform | 1.2 : 1.0 | 1.2 : 1.1 | 1.2 : 1.1 | 1.2 : 1.1 | 1.3 : 1.1 | 1.4 : 1.1
chisq100 | 1.5 : 1.3 | 1.5 : 1.3 | 1.5 : 1.4 | 1.6 : 1.4 | 1.6 : 1.4 | 1.6 : 1.4
chisq10 | 1.9 : 1.4 | 1.9 : 1.5 | 2.0 : 1.4 | 2.2 : 1.5 | 2.5 : 1.6 | 2.8 : 1.7
chisq1 | 1.9 : 1.4 | 2.0 : 1.4 | 2.2 : 1.5 | 2.5 : 1.6 | 3.1 : 1.6 | 4.2 : 1.7
extreme | 8.8 : 4.8 | 9.6 : 6.6 | 11 : 6.4 | 14 : 6.4 | 20 : 6.9 | 32 : 6.1
Data | τ=1 | τ=2 | τ=4 | τ=8 | τ=16 | τ=32
uniform | 1.2 : 1.1 | 1.2 : 1.1 | 1.4 : 1.2 | 1.5 : 1.2 | 1.7 : 1.3 | 1.8 : 1.3
chisq100 | 1.5 : 1.3 | 1.6 : 1.4 | 1.6 : 1.5 | 1.7 : 1.5 | 1.7 : 1.6 | 1.7 : 1.6
chisq10 | 1.9 : 1.3 | 2.2 : 1.6 | 2.7 : 2.1 | 3.1 : 2.3 | 3.5 : 2.5 | 3.6 : 2.7
chisq1 | 1.9 : 1.3 | 2.6 : 1.8 | 3.7 : 2.3 | 5.6 : 2.9 | 7.9 : 3.2 | 10 : 3.9
extreme | 8.8 : 5.0 | 15 : 7.8 | 27 : 12 | 50 : 16 | 91 : 21 | 154 : 28
7.2 Real data
We used several publicly available datasets (https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/), summarized in Table 4. Experimental results are in Figure 2. The theoretical and empirical speedup table for these datasets can be found in Table 5.
Dataset | #samples | #features | sparsity | max(L)/mean(L)
ijcnn1 | 35,000 | 23 | 60.1% | 2.04
protein | 17,766 | 358 | 29.1% | 1.82
w8a | 49,749 | 301 | 4.2% | 9.09
url | 2,396,130 | 3,231,962 | 0.04% | 4.83
aloi | 108,000 | 129 | 24.6% | 26.01
Data | τ=1 | τ=2 | τ=4 | τ=8 | τ=16 | τ=32
ijcnn1 | 1.2 : 1.1 | 1.4 : 1.1 | 1.6 : 1.3 | 1.9 : 1.6 | 2.2 : 1.6 | 2.3 : 1.8
protein | 1.3 : 1.2 | 1.4 : 1.2 | 1.5 : 1.4 | 1.7 : 1.4 | 1.8 : 1.5 | 1.9 : 1.5
w8a | 2.8 : 2.0 | 2.9 : 1.9 | 2.9 : 1.9 | 3.0 : 1.9 | 3.0 : 1.8 | 3.0 : 1.8
url | 3.0 : 2.3 | 2.6 : 2.1 | 2.0 : 1.8 | 1.7 : 1.6 | 1.8 : 1.6 | 1.8 : 1.7
aloi | 13 : 7.8 | 12 : 8.0 | 11 : 7.7 | 9.9 : 7.4 | 9.3 : 7.0 | 8.8 : 6.7
7.3 Conclusion
In all experiments, importance sampling performed significantly better than nice sampling. The theoretical speedup factor provides an excellent estimate of the actual speedup. We can observe that on denser data the speedup is higher than on sparse data, which matches the theoretical intuition for the parameters $v$ for both samplings. As we observed on artificial data, for extreme distributions of the norms the speedup can be arbitrarily large, even several orders of magnitude. A rule of thumb: if one has data with a large ratio $\max_i \|a_i\|^2 / \operatorname{mean}_i \|a_i\|^2$, the practical speedup from using importance minibatch sampling will likely be dramatic.
8 Proof of Theorem 3
8.1 Three lemmas
We first establish three lemmas, and then proceed with the proof of the main theorem. With each sampling $\hat{S}$ we associate a "probability matrix" $P \in \mathbb{R}^{n \times n}$ defined as follows: $P_{ii'} := \mathbb{P}(i \in \hat{S},\, i' \in \hat{S})$. Our first lemma characterizes the probability matrix of the bucket sampling.
Lemma 4.
If $\hat{S}$ is a bucket sampling, then
$$P = \operatorname{Diag}(p) + pp^\top \circ (E - B), \qquad (16)$$
where $E$ is the $n \times n$ matrix of all ones,
$$B_{ii'} := \begin{cases} 1 & \text{if } i, i' \in B_l \text{ for some } l, \\ 0 & \text{otherwise}, \end{cases} \qquad (17)$$
and $\circ$ denotes the Hadamard (elementwise) product of matrices. Note that $B$ is the 0-1 matrix given by $B_{ii'} = 1$ if and only if $i, i'$ belong to the same bucket $B_l$ for some $l$.
Proof.
The diagonal entries are $P_{ii} = p_i$. If $i \neq i'$ belong to the same bucket, they cannot both be picked, so $P_{ii'} = 0$. If they belong to different buckets, the picks are made independently, so $P_{ii'} = p_i p_{i'}$. ∎
Lemma 5.
Let $J$ be a nonempty subset of $[n]$, let $B$ be as in Lemma 4, and put $\delta := \max_l \sum_{i \in J \cap B_l} p_i$. Then for all $x \in \mathbb{R}^n$ with $x_i = 0$ for $i \notin J$,
$$x^\top (pp^\top \circ B)\, x \le \delta \sum_{i \in J} p_i x_i^2. \qquad (18)$$
Proof.
For any $l$, we have
$$\Big(\sum_{i \in J \cap B_l} p_i x_i\Big)^2 \le \Big(\sum_{i \in J \cap B_l} p_i\Big)\Big(\sum_{i \in J \cap B_l} p_i x_i^2\Big),$$
where we used the Cauchy-Schwarz inequality. Using this, we obtain
$$x^\top (pp^\top \circ B)\, x = \sum_{l=1}^\tau \Big(\sum_{i \in J \cap B_l} p_i x_i\Big)^2 \le \delta \sum_{i \in J} p_i x_i^2.$$
∎
Lemma 6.
Let $J$ be any nonempty subset of $[n]$, let $\hat{S}$ be a bucket sampling, and let $\omega' := |\{l : J \cap B_l \neq \emptyset\}|$ be the number of buckets intersecting $J$. Then for all $x \in \mathbb{R}^n$,
$$\Big(\sum_{i \in J} p_i x_i\Big)^2 \le \omega' \sum_{l=1}^\tau \Big(\sum_{i \in J \cap B_l} p_i x_i\Big)^2. \qquad (19)$$
Proof.
Choose any $x$ and note that
$$\sum_{i \in J} p_i x_i = \sum_{l=1}^\tau s_l = e^\top s,$$
where $s_l := \sum_{i \in J \cap B_l} p_i x_i$ and $e \in \mathbb{R}^\tau$ is the 0-1 vector with $e_l = 1$ if and only if $J \cap B_l \neq \emptyset$. It remains to apply the Cauchy-Schwarz inequality:
$$(e^\top s)^2 \le \|e\|^2 \|s\|^2 = \omega' \sum_{l=1}^\tau s_l^2,$$
and notice that the $l$th element on the diagonal of $ee^\top$ is $1$ for $J \cap B_l \neq \emptyset$ and $0$ for $J \cap B_l = \emptyset$. ∎
8.2 Proof of Theorem 3
References
Bottou [2010] Léon Bottou. Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT’2010, pages 177–186. Springer, 2010.
 Csiba and Richtárik [2015] Dominik Csiba and Peter Richtárik. Primal method for ERM with flexible minibatching schemes and nonconvex losses. arXiv:1506.02227, 2015.
 Csiba et al. [2015] Dominik Csiba, Zheng Qu, and Peter Richtárik. Stochastic dual coordinate ascent with adaptive probabilities. ICML, 2015.
Defazio et al. [2014] Aaron Defazio, Francis Bach, and Simon Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In NIPS 27, pages 1646–1654, 2014.
 Fercoq and Richtárik [2015] Olivier Fercoq and Peter Richtárik. Accelerated, parallel, and proximal coordinate descent. SIAM Journal on Optimization, 25(4):1997–2023, 2015.
 Fercoq et al. [2014] Olivier Fercoq, Zheng Qu, Peter Richtárik, and Martin Takáč. Fast distributed coordinate descent for minimizing nonstrongly convex losses. IEEE International Workshop on Machine Learning for Signal Processing, 2014.
Friedlander and Schmidt [2012] Michael P Friedlander and Mark Schmidt. Hybrid deterministic-stochastic methods for data fitting. SIAM Journal on Scientific Computing, 34(3):A1380–A1405, 2012.
 Hinton [2007] Geoffrey E Hinton. Learning multiple layers of representation. Trends in cognitive sciences, 11(10):428–434, 2007.
Huang et al. [2012] Guang-Bin Huang, Hongming Zhou, Xiaojian Ding, and Rui Zhang. Extreme learning machine for regression and multiclass classification. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 42(2):513–529, 2012.
 Johnson and Zhang [2013] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In NIPS 26, 2013.
Konečný and Richtárik [2013] Jakub Konečný and Peter Richtárik. S2GD: Semi-stochastic gradient descent methods. arXiv:1312.1666, 2013.
Konečný et al. [2014] Jakub Konečný, Zheng Qu, and Peter Richtárik. Semi-stochastic coordinate descent. arXiv:1412.6293, 2014.
Konečný et al. [2015] Jakub Konečný, Jie Lu, Peter Richtárik, and Martin Takáč. mS2GD: Mini-batch semi-stochastic gradient descent in the proximal setting. To appear in IEEE Journal of Selected Topics in Signal Processing, 2015.
 Krizhevsky et al. [2012] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In NIPS 25, pages 1097–1105, 2012.
 Lin et al. [2014] Qihang Lin, Zhaosong Lu, and Lin Xiao. An accelerated proximal coordinate gradient method and its application to regularized empirical risk minimization. arXiv:1407.1296, 2014.
 Liu and Wright [2015] Ji Liu and Stephen J Wright. Asynchronous stochastic coordinate descent: Parallelism and convergence properties. SIAM Journal on Optimization, 25(1):351–376, 2015.
 Mairal [2015] Julien Mairal. Incremental majorizationminimization optimization with application to largescale machine learning. SIAM Journal on Optimization, 25(2):829–855, 2015.
 Necoara and Patrascu [2014] Ion Necoara and Andrei Patrascu. A random coordinate descent algorithm for optimization problems with composite objective function and linear coupled constraints. Computational Optimization and Applications, 57:307–337, 2014.
Needell et al. [2014] Deanna Needell, Rachel Ward, and Nati Srebro. Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm. In NIPS 27, pages 1017–1025, 2014.
Nesterov [2012] Yurii Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM Journal on Optimization, 22(2):341–362, 2012.
 Qu and Richtárik [2014] Zheng Qu and Peter Richtárik. Coordinate descent methods with arbitrary sampling I: Algorithms and complexity. arXiv:1412.8060, 2014.
 Qu and Richtárik [2014] Zheng Qu and Peter Richtárik. Coordinate descent with arbitrary sampling II: Expected separable overapproximation. arXiv:1412.8063, 2014.
 Qu et al. [2015] Zheng Qu, Peter Richtárik, and Tong Zhang. Quartz: Randomized dual coordinate ascent with arbitrary sampling. In NIPS 28, pages 865–873. 2015.
 Richtárik and Takáč [2015] Peter Richtárik and Martin Takáč. On optimal probabilities in stochastic coordinate descent methods. Optimization Letters, pages 1–11, 2015.
 Richtárik and Takáč [2016] Peter Richtárik and Martin Takáč. Distributed coordinate descent method for learning with big data. to appear in JMLR, 2016.
Richtárik and Takáč [2014] Peter Richtárik and Martin Takáč. Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. Mathematical Programming, 144(2):1–38, 2014.
 Richtárik and Takáč [2015] Peter Richtárik and Martin Takáč. Parallel coordinate descent methods for big data optimization. Mathematical Programming, pages 1–52, 2015.
 Robbins and Monro [1951] Herbert Robbins and Sutton Monro. A stochastic approximation method. Ann. Math. Statist., 22(3):400–407, 1951.
 Schmidt et al. [2013] Mark Schmidt, Nicolas Le Roux, and Francis Bach. Minimizing finite sums with the stochastic average gradient. arXiv:1309.2388, 2013.
Shalev-Shwartz [2015] Shai Shalev-Shwartz. SDCA without duality. arXiv:1502.06177, 2015.
Shalev-Shwartz and Tewari [2011] Shai Shalev-Shwartz and Ambuj Tewari. Stochastic methods for ℓ1-regularized loss minimization. JMLR, 12:1865–1892, 2011.
Shalev-Shwartz and Zhang [2012] Shai Shalev-Shwartz and Tong Zhang. Proximal stochastic dual coordinate ascent. arXiv:1211.2717, 2012.
Shalev-Shwartz and Zhang [2013a] Shai Shalev-Shwartz and Tong Zhang. Accelerated mini-batch stochastic dual coordinate ascent. In NIPS 26, pages 378–385. 2013a.
Shalev-Shwartz and Zhang [2013b] Shai Shalev-Shwartz and Tong Zhang. Stochastic dual coordinate ascent methods for regularized loss minimization. JMLR, 14(1):567–599, 2013b.
Shalev-Shwartz et al. [2011] Shai Shalev-Shwartz, Yoram Singer, Nathan Srebro, and Andrew Cotter. Pegasos: primal estimated sub-gradient solver for SVM. Mathematical Programming, 127(1):3–30, 2011.
Takáč et al. [2013] Martin Takáč, Avleen Singh Bijral, Peter Richtárik, and Nathan Srebro. Mini-batch primal and dual methods for SVMs. arXiv:1303.2314, 2013.
 Xiao and Zhang [2014] Lin Xiao and Tong Zhang. A proximal stochastic gradient method with progressive variance reduction. SIAM Journal on Optimization, 24(4):2057–2075, 2014.
 Zhang [2004] Tong Zhang. Solving large scale linear prediction problems using stochastic gradient descent algorithms. ICML, 2004.
Zhang and Xiao [2015] Yuchen Zhang and Lin Xiao. Stochastic primal-dual coordinate method for regularized empirical risk minimization. ICML, 2015.
 Zhao and Zhang [2015] Peilin Zhao and Tong Zhang. Stochastic optimization with importance sampling. ICML, 2015.