On exponential convergence of SGD in nonconvex overparametrized learning
Abstract
Large overparametrized models learned via stochastic gradient descent (SGD) methods have become a key element in modern machine learning. Although SGD methods are very effective in practice, most theoretical analyses of SGD suggest slower convergence than what is empirically observed. In our recent work [MBB17] we analyzed how interpolation, common in modern overparametrized learning, results in exponential convergence of SGD with constant step size for convex loss functions. In this note, we extend those results to a much broader nonconvex function class satisfying the Polyak-Łojasiewicz (PL) condition. A number of important nonconvex problems in machine learning, including some classes of neural networks, have been recently shown to satisfy the PL condition. We argue that the PL condition provides a relevant and attractive setting for many machine learning problems, particularly in the overparametrized regime.
1 Introduction
Stochastic Gradient Descent and its variants have become a staple of the algorithmic foundations of machine learning. Yet many of its properties are not fully understood, particularly in nonconvex settings common in modern practice.
In this note, we study convergence of Stochastic Gradient Descent (SGD) for the class of functions satisfying the Polyak-Łojasiewicz (PL) condition. This class contains all strongly convex functions as well as a broad range of nonconvex functions including those used in machine learning applications (see the discussion below).
The primary purpose of this note is to show that in the interpolation setting (common in modern overparametrized machine learning and studied in our previous work [MBB17]) SGD with fixed step size has exponential convergence for the functions satisfying the PL condition. To the best of our knowledge, this is the first such exponential convergence result for a class of nonconvex functions.
Below, we discuss and highlight a number of aspects of the PL condition which differentiate it from the convex setting and make it more relevant to the practice and requirements of many machine learning problems. We first recall that in the interpolation setting, a minimizer $w^*$ of the empirical loss $\mathcal{L}(w) = \frac{1}{n}\sum_{i=1}^n \ell_i(w)$ satisfies $\ell_i(w^*) = 0$ for all $i$. We say that $\mathcal{L}$ satisfies the PL condition (see [KNS18]) if $\frac{1}{2}\,\|\nabla \mathcal{L}(w)\|^2 \ge \mu\, \mathcal{L}(w)$ for some $\mu > 0$.
Most analyses of optimization in machine learning have concentrated on the convex or, more commonly, strongly convex setting. These settings are amenable to theoretical analysis and describe many important special cases of ML, such as linear and kernel methods. Still, a large class of modern models, notably neural networks, are nonconvex. Even for kernel machines, many of the arising optimization problems are poorly conditioned and not well described by the traditional strongly convex analysis. Below we list some properties of the PL-type setting which make it particularly attractive and relevant to the requirements of machine learning, especially in the interpolated and overparametrized setting.

To verify the PL condition in the interpolated setting we need access to the norm of the gradient $\|\nabla \mathcal{L}(w)\|$ and the value of the objective function $\mathcal{L}(w)$. These quantities are typically easily accessible empirically (in general one needs to evaluate $\mathcal{L}(w) - \inf_{w'} \mathcal{L}(w')$; since interpolation implies $\inf_{w'} \mathcal{L}(w') = 0$, no further knowledge about the infimum is required), can be accurately estimated from a subsample of the data, and are often tractable analytically. On the other hand, verifying convexity requires establishing the cumbersome positive semi-definiteness of the Hessian matrix $\nabla^2 \mathcal{L}(w)$, which in turn requires accurate estimation of its smallest eigenvalue $\lambda_{\min}(\nabla^2 \mathcal{L}(w))$. Verifying this empirically is often difficult and cannot always be based on a subsample due to the required precision of the estimator when $\lambda_{\min}$ is close to zero (as is frequently the case in practice).
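As a concrete illustration of this point, the PL ratio can be estimated from gradient and loss evaluations alone, on the full data or on a subsample. The sketch below uses an assumed synthetic least-squares instance; all names and sizes are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 20
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d)           # interpolation: zero loss is attainable

def loss_and_grad(w, idx):
    """Empirical risk (1/2m) * sum_i (x_i.w - y_i)^2 and its gradient on idx."""
    r = X[idx] @ w - y[idx]
    m = len(idx)
    return (r @ r) / (2 * m), X[idx].T @ r / m

w = rng.standard_normal(d)               # a generic non-optimal point
L_full, g_full = loss_and_grad(w, np.arange(n))
L_sub, g_sub = loss_and_grad(w, rng.choice(n, size=200, replace=False))

# Empirical PL ratio ||grad L(w)||^2 / (2 L(w)) on full data vs. a subsample:
pl_full = (g_full @ g_full) / (2 * L_full)
pl_sub = (g_sub @ g_sub) / (2 * L_sub)
```

Here the subsample estimate tracks the full-data ratio closely; by contrast, certifying convexity requires the smallest Hessian eigenvalue, which is far harder to estimate reliably from a subsample.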

The norm of the gradient is much more resilient to perturbation of the objective function than the smallest eigenvalue of the Hessian (for convexity).

Many modern machine learning methods are overparametrized and result in manifolds of global minima [cooper2018loss]. This is not compatible with strict convexity and, in most circumstances (unless those manifolds are convex domains in lower-dimensional affine subspaces), not compatible with convexity. However, manifolds of solutions are compatible with the PL condition.

Nearly every application of machine learning employs techniques for feature extraction or feature transformation. Global minima and the property of interpolation (shared global minima for the individual loss functions) are preserved under coordinate transformations. Yet convexity generally is not, which precludes a unified analysis of optimization under feature transforms. In contrast, as discussed in Section 3, the PL condition is invariant under a broad class of nonlinear coordinate transformations.

Many problems of interest in machine learning involve optimization on manifolds. While geodesic convexity allows for efficient optimization, it is a parametrization-dependent notion and is generally difficult to establish, as it requires explicit knowledge of the geodesic coordinates on the manifold. In contrast, the PL condition also allows for efficient optimization, while being invariant under the choice of coordinates and far easier to verify. See [weber2017frank] for some recent applications.

Most convergence analyses in convex optimization rely on the distance to the minimizer. Yet this distance is often difficult or impossible to bound empirically. Furthermore, the distance to the minimizer can be infinite in many important settings, including optimization via the logistic loss [soudry2017implicit] or inverse problems over Hilbert spaces, as in kernel methods [ma2017diving]. In contrast, PL-type analyses directly involve the value of the loss function, an empirically observable quantity of practical significance.

As originally observed by Polyak [polyak1963gradient], the PL condition is sufficient for exponential convergence of gradient descent. As we establish in this note, it also allows for exponential convergence of stochastic gradient descent with fixed step size in the interpolated setting.
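For completeness, Polyak's argument can be sketched in one line; assuming $\mathcal{L}$ is $\lambda$-smooth, $\mu$-PL, and $\inf \mathcal{L} = 0$, gradient descent with step size $1/\lambda$ satisfies

```latex
\mathcal{L}(w_{t+1})
  \;\le\; \mathcal{L}(w_t) - \frac{1}{2\lambda}\,\|\nabla \mathcal{L}(w_t)\|^2
  \;\le\; \Big(1 - \frac{\mu}{\lambda}\Big)\,\mathcal{L}(w_t)
  \quad\Longrightarrow\quad
  \mathcal{L}(w_t) \;\le\; \Big(1 - \frac{\mu}{\lambda}\Big)^{t}\,\mathcal{L}(w_0),
```

where the first inequality is the standard descent lemma and the second uses the PL condition $\frac{1}{2}\|\nabla\mathcal{L}(w)\|^2 \ge \mu\,\mathcal{L}(w)$.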
Technical contributions: The main technical contribution of this note is to show the exponential convergence of minibatch SGD in the interpolated setting. The proof is simple and is reminiscent of the original observation by Polyak [polyak1963gradient] of exponential convergence of gradient descent. It also extends our previous work on the exponential convergence of minibatch SGD [MBB17] to a nonconvex setting. Interestingly, the step size arising from the PL condition in our analysis depends on the PL parameter $\mu$ and is potentially much smaller than that in the strongly convex case, where no such dependence is needed. At this point it is an open question whether this dependence is necessary in the PL setting. As an additional contribution, in Section 4, we show that for a special class of PL functions obtained by composing a strictly convex function with a linear transformation (such functions are convex but not necessarily strictly convex), we obtain exponential convergence without such dependence on $\mu$ in the step size. However, this result requires a different type of analysis than that for the general PL setting. In Section 3, we provide a formal statement capturing the transformation-invariance property of the PL condition.
Examples and Related Work:
The PL condition has recently become popular in optimization and machine learning, starting with the work [KNS18]. In fact, as discussed in [KNS18], several other conditions proposed for convergence analysis are special cases of the PL condition. One such condition is the restricted secant inequality (RSI) proposed in [zhang2013gradient]. Another set of conditions that are special cases of the PL condition was referred to as “one-point convexity” in [allen2017natasha]. The two variations of one-point convexity discussed there are special cases of RSI and PL, respectively, and hence are in the PL class. The same reference points out several examples of “one-point convexity” in previous works. Some notable examples satisfying RSI include two-layer neural networks [li2017convergence], matrix completion [sun2016guaranteed], dictionary learning [arora2015simple], and phase retrieval [chen2015solving]. It has also been observed empirically that neural networks satisfy the PL condition [kleinberg2018alternative]. In particular, we note the recent work [soltanolkotabi2018theoretical], which considers a class of neural networks that attain zero quadratic loss, implying interpolation. Their proof shows that this class of neural nets satisfies the PL condition. Hence our results imply exponential convergence of SGD for this class. To the best of our knowledge, this is the first time that exponential convergence of SGD has been established for a class of multilayer neural networks.
2 Exponential Convergence of SGD for PL Losses
We start by formally stating the Polyak-Łojasiewicz (PL) condition.
Definition 2.1 (PL function).
Let $\mu > 0$. Let $F : \mathbb{R}^d \to \mathbb{R}$ be a differentiable function. Assume, w.l.o.g., that $\inf_{w} F(w) = 0$. We say that $F$ is $\mu$-PL if for every $w \in \mathbb{R}^d$, we have

$$\frac{1}{2}\,\|\nabla F(w)\|^2 \;\ge\; \mu\, F(w).$$
ERM with smooth losses:
We consider the ERM problem $\min_{w \in \mathbb{R}^d} \mathcal{L}(w)$, where $\mathcal{L}(w) = \frac{1}{n}\sum_{i=1}^n \ell_i(w)$ and, for all $i$, $\ell_i$ is $\beta$-smooth. Moreover, $\mathcal{L}$ is a $\lambda$-smooth, $\mu$-PL function (as in Definition 2.1 above).
We do not assume a compact parameter space; that is, a parameter vector can have unbounded norm. However, the loss $\mathcal{L}$ is assumed to be bounded from below. In particular, a global minimizer may not exist; however, we assume the existence of a global infimum for $\mathcal{L}$ (which is equal to zero w.l.o.g.).
To elaborate, we assume the existence of a sequence $(w_k : k \in \mathbb{N})$ such that

$$\lim_{k \to \infty} \mathcal{L}(w_k) \;=\; \inf_{w \in \mathbb{R}^d} \mathcal{L}(w) \;=\; 0. \qquad (1)$$
Assumption 1 (Interpolation).
For every sequence $(w_k : k \in \mathbb{N})$ such that $\lim_{k \to \infty} \mathcal{L}(w_k) = 0$, we have $\lim_{k \to \infty} \ell_i(w_k) = 0$ for all $i \in \{1, \dots, n\}$.
Consider the SGD algorithm that starts at an arbitrary $w_0 \in \mathbb{R}^d$, and at each iteration $t$ makes an update with a constant step size $\eta > 0$:

$$w_{t+1} \;=\; w_t \;-\; \eta\,\frac{1}{m}\sum_{i \in B_t} \nabla \ell_i(w_t), \qquad (2)$$

where $m$ is the size of a minibatch $B_t$ of data points whose indices are drawn uniformly with replacement at each iteration $t$ from $\{1, \dots, n\}$.
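The update (2) can be sketched in a few lines of code; the following assumes illustrative squared losses $\ell_i(w) = \frac{1}{2}(x_i^\top w - y_i)^2$ and hypothetical synthetic data (none of these names come from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 500, 50
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

def sgd_step(w, eta, m):
    """One constant-step-size minibatch SGD update, as in (2)."""
    idx = rng.integers(0, n, size=m)               # uniform WITH replacement
    grad = X[idx].T @ (X[idx] @ w - y[idx]) / m    # average minibatch gradient
    return w - eta * grad

w = np.zeros(d)
w_next = sgd_step(w, eta=0.01, m=8)
```

Note that sampling with replacement makes the $m$ minibatch indices i.i.d., a fact used in the proof below.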
The theorem below establishes the exponential convergence of minibatch SGD for any smooth, PL loss in the interpolated regime.
Theorem 1.
Consider the minibatch SGD with smooth losses as described above: each $\ell_i$ is $\beta$-smooth and the empirical risk $\mathcal{L}$ is $\lambda$-smooth. Suppose that Assumption 1 holds and that $\mathcal{L}$ is $\mu$-PL for some fixed $\mu > 0$. For any minibatch size $m \in \mathbb{N}$, the minibatch SGD (2) with constant step size $\eta(m) := \frac{m\,\mu}{\lambda\,\left(\beta + (m-1)\,\mu\right)}$ gives the following guarantee:

$$\mathbb{E}\left[\mathcal{L}(w_t)\right] \;\le\; \left(1 - \mu\,\eta(m)\right)^{t}\,\mathcal{L}(w_0), \qquad (3)$$

where the expectation is taken w.r.t. the randomness in the choice of the minibatches.
Proof.
Fixing $w_t$ and taking expectation with respect to the randomness in the choice of the batch $B_t$ (and using the fact that its indices are i.i.d.), we get

$$\mathbb{E}\left[\mathcal{L}(w_{t+1})\right] \;\le\; \mathcal{L}(w_t) - \eta\,\|\nabla\mathcal{L}(w_t)\|^2 + \frac{\lambda\,\eta^2}{2}\left(\frac{1}{m}\,\mathbb{E}_i\|\nabla\ell_i(w_t)\|^2 + \frac{m-1}{m}\,\|\nabla\mathcal{L}(w_t)\|^2\right).$$

Since each $\ell_i$ is $\beta$-smooth and nonnegative, we have $\|\nabla \ell_i(w)\|^2 \le 2\beta\,\ell_i(w)$ with probability 1 over the choice of $i$, and hence $\mathbb{E}_i\|\nabla\ell_i(w_t)\|^2 \le 2\beta\,\mathcal{L}(w_t)$. Thus, using the PL condition $\|\nabla\mathcal{L}(w_t)\|^2 \ge 2\mu\,\mathcal{L}(w_t)$, the last inequality reduces to

$$\mathbb{E}\left[\mathcal{L}(w_{t+1})\right] \;\le\; \left(1 - 2\mu\eta + \frac{\lambda\,\eta^2\left(\beta + (m-1)\mu\right)}{m}\right)\mathcal{L}(w_t) \;=\; \left(1 - \mu\,\eta(m)\right)\mathcal{L}(w_t)$$

for $\eta = \eta(m)$, and the theorem follows by induction over $t$. ∎
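As a hedged numerical sanity check of the qualitative claim of Theorem 1 (not of its exact constants), one can run constant-step-size SGD on an interpolated least-squares problem, which is smooth and PL, and observe geometric decay of the loss. All sizes and the step size below are assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 400, 40
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d)           # interpolated regime: min loss is 0

def risk(w):
    r = X @ w - y
    return (r @ r) / (2 * n)

w = np.zeros(d)
eta, m = 0.005, 10
losses = []
for t in range(4001):
    if t % 1000 == 0:
        losses.append(risk(w))           # record the loss every 1000 steps
    idx = rng.integers(0, n, size=m)     # minibatch, with replacement
    w -= eta * X[idx].T @ (X[idx] @ w - y[idx]) / m

# Geometric decay: each 1000-step block shrinks the loss multiplicatively.
ratios = [losses[i + 1] / losses[i] for i in range(len(losses) - 1)]
```

Each block of iterations shrinks the loss by a roughly constant factor, which is exactly the exponential convergence the theorem describes.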
3 A Transformation-Invariance Property of PL Functions and Its Implications
In this section, we formally discuss a simple observation concerning the class of PL functions that has useful implications for a wide array of problems in modern machine learning. In particular, we observe that if $F$ is a smooth and $\mu$-PL function for some $\mu > 0$, then for any map $g$ that satisfies certain weak conditions, the composition $F \circ g$ is smooth and $\mu'$-PL for smoothness and PL constants that depend on those of $F$, respectively, as well as on a fairly general property of $g$. This shows that the class of smooth PL objectives is closed under a fairly large family of transformations. Given our results above, this observation has direct implications for the convergence of SGD for a large class of problems that involve parameter transformation, e.g., via feature maps.
First, we formalize this closure property in the following claim. Let $g : \mathbb{R}^{d'} \to \mathbb{R}^d$ be any map. We can write such a map as $g = (g_1, \dots, g_d)$, where each $g_j$, $j \in \{1, \dots, d\}$, is a scalar function over $\mathbb{R}^{d'}$. The Jacobian of $g$ is an operator that, for each $w \in \mathbb{R}^{d'}$, is described by a real-valued $d \times d'$ matrix $J_g(w)$ whose entries are the partial derivatives $\left(J_g(w)\right)_{j,k} = \frac{\partial g_j(w)}{\partial w_k}$.
Claim 1.
Let $F : \mathbb{R}^d \to \mathbb{R}$ be a smooth and $\mu$-PL function for some $\mu > 0$. Let $g : \mathbb{R}^{d'} \to \mathbb{R}^d$ be any differentiable map, where $d' \ge d$. Suppose there exist $\rho > 0$ and $B < \infty$ such that for all $w \in \mathbb{R}^{d'}$, $\lambda_{\min}\!\left(J_g(w)\,J_g(w)^\top\right) \ge \rho$ and $\lambda_{\max}\!\left(J_g(w)\,J_g(w)^\top\right) \le B$, where $\lambda_{\min}$ and $\lambda_{\max}$ denote the minimum and maximum eigenvalues of $J_g(w)\,J_g(w)^\top$, respectively. Then, the function $F \circ g$ is smooth and $\mu'$-PL, where $\mu' = \rho\,\mu$.
Note that the condition $d' \ge d$ is necessary for $\lambda_{\min}\!\left(J_g(w)\,J_g(w)^\top\right)$ to be positive. The condition on $B$ holds when $g$ is differentiable and Lipschitz-continuous. The above claim follows easily from the chain rule and the PL condition.
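The PL part of the claim can be sketched as follows. By the chain rule, $\nabla (F \circ g)(w) = J_g(w)^\top \nabla F(g(w))$, so under the eigenvalue lower bound $\lambda_{\min}(J_g(w)\,J_g(w)^\top) \ge \rho$,

```latex
\frac{1}{2}\,\big\|\nabla (F \circ g)(w)\big\|^2
  \;=\; \frac{1}{2}\,\nabla F(g(w))^\top J_g(w)\,J_g(w)^\top\,\nabla F(g(w))
  \;\ge\; \frac{\rho}{2}\,\big\|\nabla F(g(w))\big\|^2
  \;\ge\; \rho\,\mu\,F(g(w)),
```

so $F \circ g$ is $\rho\mu$-PL.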
Given this property of PL functions and our result in Theorem 1, we can argue that for smooth, PL losses, the exponential convergence rate of SGD is preserved under any transformation that satisfies the conditions in the above claim. We formalize this conclusion below.
As before, we consider a set of smooth losses $\{\ell_i : i \in \{1, \dots, n\}\}$, where the empirical risk $\mathcal{L} = \frac{1}{n}\sum_{i=1}^n \ell_i$ is $\lambda$-smooth and $\mu$-PL.
Corollary 1.
Let $g : \mathbb{R}^{d'} \to \mathbb{R}^d$ be any map that satisfies the conditions in Claim 1. Suppose Assumption 1 holds and that there is a sequence $(v_k : k \in \mathbb{N})$ in $\mathbb{R}^{d'}$ such that $\lim_{k \to \infty} \mathcal{L}(g(v_k)) = 0$. Suppose we run minibatch SGD w.r.t. the loss functions $\{\ell_i \circ g : i \in \{1, \dots, n\}\}$ with batch size $m$ and step size $\eta(m)$ as defined in Theorem 1, instantiated with the smoothness and PL constants of the transformed losses. Let $(v_t : t \ge 0)$ denote the sequence of parameter vectors generated by minibatch SGD over $t$ iterations. Then, we have

$$\mathbb{E}\left[\mathcal{L}\big(g(v_t)\big)\right] \;\le\; \left(1 - \rho\,\mu\,\eta(m)\right)^{t}\,\mathcal{L}\big(g(v_0)\big).$$
4 Faster Convergence for a Class of Convex Losses
We consider a special class of PL functions originally discussed in [KNS18]. This class contains all convex functions that can be expressed as a composition of a strongly convex function with a linear function. Note that the losses in this class are convex but not necessarily strongly, or even strictly, convex.
In [KNS18, Appendix B], it was shown that if $\phi$ is $\alpha$-strongly convex and $A$ is a matrix whose least nonzero singular value is $s_{\min} > 0$, then $F$ defined as $F(w) = \phi(A w)$ is a $\mu$-PL function with $\mu = \alpha\, s_{\min}^2$. For this special class of PL losses, we show a better bound on the convergence rate than what is directly implied by Theorem 1. The proof technique for this result is different from that of Theorem 1. Exponential convergence of SGD for strongly convex losses in the interpolation setting has been established previously in [MBB17]. In this section, we show a similar convergence rate for this larger class of convex losses.
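A minimal numerical illustration of this class (an assumed example, with hypothetical names): $F(w) = \frac{1}{2}\|A w - b\|^2$ with a rank-deficient $A$ is convex but not strongly convex, yet satisfies the PL condition with constant equal to the smallest nonzero eigenvalue of $A^\top A$:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 6
A = rng.standard_normal((d, 3)) @ rng.standard_normal((3, d))  # rank 3 < d
b = A @ rng.standard_normal(d)            # ensures inf F = 0 (interpolation)

eigs = np.sort(np.linalg.eigvalsh(A.T @ A))
mu = min(e for e in eigs if e > 1e-8 * eigs[-1])   # smallest NONZERO eigenvalue

w = rng.standard_normal(d)                # a generic point
F = 0.5 * np.sum((A @ w - b) ** 2)
grad = A.T @ (A @ w - b)

# Strong convexity fails (the smallest Hessian eigenvalue is ~0),
# yet the PL inequality (1/2)||grad F(w)||^2 >= mu * F(w) holds.
```

The inequality holds because the residual $A(w - w^*)$ always lies in the column space of $A$, where $A A^\top$ is bounded below by its smallest nonzero eigenvalue.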
Let $A \in \mathbb{R}^{k \times d}$. Let $s_{\min}$ and $s_{\max}$ denote the smallest nonzero singular value and the largest singular value of $A$, respectively. Consider a collection of loss functions $\{\ell_i : i \in \{1, \dots, n\}\}$ where each $\ell_i$ can be expressed as $\ell_i(w) = \phi_i(A w)$ for some smooth convex function $\phi_i$. It is easy to see that this implies that each $\ell_i$ is smooth and convex. The empirical risk can be written as $\mathcal{L}(w) = \frac{1}{n}\sum_{i=1}^n \phi_i(A w) = \Phi(A w)$, where $\Phi := \frac{1}{n}\sum_{i=1}^n \phi_i$. Moreover, suppose that $\Phi$ is $\beta$-smooth and $\alpha$-strongly convex. Now, suppose we run the SGD described in (2) to solve the ERM problem defined by the losses $\{\ell_i\}$. The following theorem provides an exponential convergence guarantee for SGD in the interpolation setting.
Theorem 2.
Consider the scenario described above and suppose Assumption 1 is true. Let $s_{\min}$ and $s_{\max}$ be the smallest nonzero singular value and the largest singular value of $A$, respectively. Let $w^*$ be any vector such that $A w^*$ is the unique minimizer of $\Phi$. The minibatch SGD (2) with batch size $m$ and the step size $\eta$ prescribed by [MBB17, Theorem 1] (which does not depend on the PL parameter) gives the following guarantee:

$$\mathbb{E}\left[\mathcal{L}(w_t)\right] \;\le\; \frac{\beta\, s_{\max}^2}{2}\,\left(1 - \eta\,\alpha\, s_{\min}^2\right)^{t}\,\left\|P\,(w_0 - w^*)\right\|^2, \qquad (4)$$

where $P := A^\dagger A$ is the orthogonal projection onto the row space of $A$, and where $A^\dagger$ is the pseudoinverse of $A$.
Proof.
Recall that we can express $A$ via SVD as $A = U \Sigma V^\top$, where $U \in \mathbb{R}^{k \times k}$ is the matrix whose columns form an eigenbasis for $A A^\top$, $V \in \mathbb{R}^{d \times d}$ is the matrix whose columns form an eigenbasis for $A^\top A$, and $\Sigma \in \mathbb{R}^{k \times d}$ is the matrix that contains the singular values of $A$; in particular, $\Sigma_{jj} = s_j$ and $\Sigma_{jl} = 0$ for $j \ne l$, where $s_j$ is the $j$-th singular value of $A$. Let $s_1 \ge \dots \ge s_r > 0$ be the nonzero singular values of $A$, where $r = \mathrm{rank}(A)$. The following is a known fact: the first $r$ columns of $V$ form an orthonormal basis for the row space $\mathrm{Row}(A)$, and the remaining $d - r$ columns form an orthonormal basis for $\mathrm{Null}(A)$, the subspace orthogonal to $\mathrm{Row}(A)$. Also, recall that the Moore-Penrose inverse (pseudoinverse) of $A$, denoted $A^\dagger$, is given by $A^\dagger = V \Sigma^\dagger U^\top$, where $\Sigma^\dagger \in \mathbb{R}^{d \times k}$ satisfies $\Sigma^\dagger_{jj} = 1/s_j$ for $j \le r$, and the remaining entries are all zeros. The following is also a known fact that follows easily from the definition of $A^\dagger$ and the facts above: $P := A^\dagger A$ is the orthogonal projection onto $\mathrm{Row}(A)$. Thus, by the direct sum theorem, any $w \in \mathbb{R}^d$ can be uniquely expressed as a sum of two orthogonal components $w = w^{\parallel} + w^{\perp}$, where $w^{\parallel} \in \mathrm{Row}(A)$ and $w^{\perp} \in \mathrm{Null}(A)$. In particular, $w^{\parallel} = P\, w$.
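The projection facts above are easy to check numerically; the sketch below (an assumed random instance) verifies that $P = A^\dagger A$ is the orthogonal projection onto the row space of $A$, giving the decomposition $w = w^{\parallel} + w^{\perp}$:

```python
import numpy as np

rng = np.random.default_rng(4)
k, d = 5, 8
A = rng.standard_normal((k, d))          # rank k < d with probability 1
P = np.linalg.pinv(A) @ A                # orthogonal projection onto Row(A)

w = rng.standard_normal(d)
w_par = P @ w                            # component in Row(A)
w_perp = w - w_par                       # component in Null(A): A @ w_perp = 0
```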
Using these observations, we can make the following claim.
Claim 2.
$\mathcal{L}$ is $\alpha\, s_{\min}^2$-strongly convex over $\mathrm{Row}(A)$.
The proof of the above claim is as follows. Fix any $u, v \in \mathrm{Row}(A)$. Observe that

$$\mathcal{L}(u) \;=\; \Phi(A u) \;\ge\; \Phi(A v) + \big\langle \nabla \Phi(A v),\, A(u - v)\big\rangle + \frac{\alpha}{2}\,\|A(u - v)\|^2 \qquad (5)$$
$$\;=\; \mathcal{L}(v) + \big\langle \nabla \mathcal{L}(v),\, u - v\big\rangle + \frac{\alpha}{2}\,\|A(u - v)\|^2, \qquad (6)$$

where (5) follows from the strong convexity of $\Phi$, and (6) follows from the definition of $\mathcal{L}$ and the fact that $\nabla \mathcal{L}(v) = A^\top \nabla \Phi(A v)$. Now, we note that since $u - v \in \mathrm{Row}(A)$, we have $\|A(u - v)\|^2 \ge s_{\min}^2\,\|u - v\|^2$. Plugging this into (6) proves the claim.
We now proceed with the proof of Theorem 2. By the $\beta\, s_{\max}^2$-smoothness of $\mathcal{L}$, the fact that $\nabla \mathcal{L}(w^*) = 0$, and the fact that $\mathcal{L}(w)$ depends on $w$ only through $A w = A w^{\parallel}$, we have

$$\mathcal{L}(w_t) \;\le\; \frac{\beta\, s_{\max}^2}{2}\,\left\|w_t^{\parallel} - w^{*\parallel}\right\|^2, \qquad (7)$$
where, as above, $w_t^{\parallel} = P\, w_t$ is the projection of $w_t$ onto $\mathrm{Row}(A)$. Similarly, $w^{*\parallel} = P\, w^*$ is the projection of $w^*$ onto $\mathrm{Row}(A)$. Now, consider $\|w_{t+1}^{\parallel} - w^{*\parallel}\|$. From the update step (2) of the minibatch SGD and the linearity of the projection operator $P$, we have

$$\left\|w_{t+1}^{\parallel} - w^{*\parallel}\right\| \;=\; \Big\|P\Big(w_t - \eta\,\frac{1}{m}\sum_{i \in B_t}\nabla\ell_i(w_t) - w^*\Big)\Big\| \;\le\; \Big\|w_t^{\parallel} - w^{*\parallel} - \eta\,\frac{1}{m}\sum_{i \in B_t}\nabla\ell_i(w_t)\Big\|,$$

where the first equality follows from the update step and the fact that $P$ is linear. The last inequality follows from the fact that $w_t^{\perp} - w^{*\perp}$ is orthogonal to $\mathrm{Row}(A)$ (and hence orthogonal to the remaining terms, since each $\nabla\ell_i(w_t) = A^\top \nabla\phi_i(A w_t)$ lies in $\mathrm{Row}(A)$), and the fact that projection cannot increase the norm. Fixing $w_t$ and taking expectation with respect to the choice of the batch $B_t$, we have

$$\mathbb{E}\left\|w_{t+1}^{\parallel} - w^{*\parallel}\right\|^2 \;\le\; \left\|w_t^{\parallel} - w^{*\parallel}\right\|^2 - 2\eta\,\big\langle \nabla\mathcal{L}(w_t),\, w_t^{\parallel} - w^{*\parallel}\big\rangle + \eta^2\,\mathbb{E}\Big\|\frac{1}{m}\sum_{i \in B_t}\nabla\ell_i(w_t)\Big\|^2.$$
As noted earlier, $\mathcal{L}$ is $\beta\, s_{\max}^2$-smooth. Also, it is easy to see that each $\ell_i$ is smooth. From this point onward, the proof follows the same lines as the proof of [MBB17, Theorem 1], applied over $\mathrm{Row}(A)$, where $\mathcal{L}$ is $\alpha\, s_{\min}^2$-strongly convex by Claim 2. We thus can show that by choosing the step size $\eta$ as in [MBB17, Theorem 1], we get

$$\mathbb{E}\left\|w_t^{\parallel} - w^{*\parallel}\right\|^2 \;\le\; \left(1 - \eta\,\alpha\, s_{\min}^2\right)^{t}\,\left\|w_0^{\parallel} - w^{*\parallel}\right\|^2.$$
Using the above inequality together with (7), we have

$$\mathbb{E}\left[\mathcal{L}(w_t)\right] \;\le\; \frac{\beta\, s_{\max}^2}{2}\,\left(1 - \eta\,\alpha\, s_{\min}^2\right)^{t}\,\left\|w_0^{\parallel} - w^{*\parallel}\right\|^2.$$
∎