Fast minimization of structured convex quartics
We propose faster methods for unconstrained optimization of structured convex quartics, which are convex functions of the form
for , , , and such that . In particular, we show how to achieve an -optimal minimizer for such functions with only calls to a gradient oracle and linear system solver, where is a problem-dependent parameter. Our work extends recent ideas on efficient tensor methods and higher-order acceleration techniques to develop a descent method for optimizing the relevant quartic functions. As a natural consequence of our method, we achieve an overall cost of calls to a gradient oracle and (sparse) linear system solver for the problem of -regression when , providing additional insight into what may be achieved for general -regression. Our results show the benefit of combining efficient higher-order methods with recent acceleration techniques for improving convergence rates in fundamental convex optimization problems.
In this paper, we are interested in the unconstrained optimization problem
where is a convex function of the form
for some , , , and such that and are the rows of . We will refer to functions of this form as structured convex quartics, as we are given an explicit decomposition of the fourth-order term, i.e.,
While fast minimization of convex quadratic functions has been an area of significant research efforts (Cohen et al., 2015; Clarkson and Woodruff, 2017; Agarwal et al., 2017b, c), the structured convex quartic case has been less explored.
In this work, we present a method, called , whose total cost to find an -optimal minimizer is established in the following theorem.
Let be a convex function of the form (2). Then, under appropriate initialization, finds a point such that
with total computational cost , where is the time to calculate the gradient of , LSS is the time to solve a (sparse) linear system, and is a problem-dependent parameter.
In the case where , being the matrix multiplication constant, and for when the linear system is sufficiently sparse, our method improves upon (up to logarithmic factors) the previous best rate of (where is the radius of the box containing the relevant convex set), which can be achieved by using a fast cutting plane method (Lee et al., 2015).
We believe that, in addition to improving the complexity for a certain class of convex optimization problems, our approach illustrates the possibility of using an efficient local search-type method for some more difficult convex optimization tasks, such as -regression. This is in contrast to homotopy-based approaches (such as interior-point or path-following methods) (Nesterov and Nemirovskii, 1994; Bubeck et al., 2018a), cutting plane methods (Lee et al., 2015), and the ellipsoid method (Khachiyan, 1980).
1.1 Related work
In the general case, it has been shown to be NP-hard to find the global minimizer of a quartic polynomial (Murty and Kabadi, 1987; Parrilo and Sturmfels, 2003), or even to decide if the quartic polynomial is convex (Ahmadi et al., 2013). However, in this paper we are able to bypass these hardness results by guaranteeing the convexity of .
In terms of optimization for higher-order smooth convex functions, for functions whose Hessian is -Lipschitz, Monteiro and Svaiter (2013) achieve an error of after calls to a second-order Taylor expansion minimization oracle. Lower bounds have been established for the oracle complexity of higher-order smooth functions, (Arjevani et al., 2018; Agarwal and Hazan, 2018) which match the rate of Monteiro and Svaiter (2013) for , and recent progress has been made toward tightening these bounds.
Some recent work from Gasnikov et al. (2018), only available in Russian, establishes near-optimal rates for higher-order smooth optimization, though to the best of our understanding, it appears that the paper does not provide an explicit guarantee for the line search procedure. More recently, two independent works (Jiang et al., 2018; Bubeck et al., 2018b), published on the arxiv over the past few days, establish near-optimal rates for optimization of functions with higher-order smoothness, under an oracle model, along with an analysis of the binary search procedure. In this paper, while we consider only the case for , we go beyond the oracle model to establish an end-to-end complexity based on efficient approximations of tensor methods (Nesterov, 2018a). Furthermore, while our paper also relies on a careful handling of the binary search procedure, our approach requires the more general setting of higher-order smoothness with respect to matrix-induced norms, which does not appear to follow immediately from Jiang et al. (2018); Bubeck et al. (2018b).
Let be a symmetric positive-definite matrix, i.e., . We let for a matrix , and we denote the minimizer as . For any vector , we define its matrix-induced norm (w.r.t. ) as . Throughout the paper, we will let . We say a differentiable function is -uniformly convex (of degree ) with respect to if, for all ,
Note that for and , this definition captures the standard notion of strong convexity. As we shall see, since our aim is to minimize structured quartic functions, we will be concerned with this definition for and .
A related notion is that of (higher-order) smoothness. Namely, we say a -times differentiable function is smooth (of degree ) w.r.t. if the -th differential is Lipschitz continuous, i.e., for all ,
where we define
Again, since we our concerned with quartic functions, we will later show how is smooth with respect to the appropriate norm.
For that are smooth w.r.t. , we also have that, for all ,
It will eventually become necessary to handle the set of all points that might be reached by our method, starting from an initial point . To that end, we consider the following objects, beginning with the set
Given this set, we now consider the maximum function value attained over , i.e., Finally, we let
where . We may also define . We note that, since is smooth, is a problem-dependent parameter, i.e., it depends on , , , and . As we will later show, the dependence on in the final convergence rate will only appear as part of logarithmic factors.
2.1 Properties of convex quartic functions
Throughout the paper, following the conventions of Nesterov (2018a), we will let
denote the -th order Taylor approximation of , centered at . Furthermore, for that is smooth, we define a model function
As we are only concerned with functions that are smooth, we will drop the subscript to define and
Note that is smooth (of degree 3) w.r.t . The following theorem illustrates some useful properties of the model .
Theorem 2.1 (Nesterov (2018a), Theorem 1, for ).
Suppose is convex, 3-times differentiable, and smooth (of degree 3). Then, for any , we have
Moreover, for all ,
With this representation of the model function in hand, we let
denote a minimizer of the fourth-order model, centered at . The following lemma concerning , which will later prove useful, establishes a relaxed version of eq. (2.13) from Nesterov (2018a).
Let , and let be as in (11). Then, for all ,
In order to get a handle on the regularity properties of , we establish its smoothness and uniform convexity parameters w.r.t. .
Lemma 2.3 ( smoothness).
Suppose is of the form (2). Then, for all ,
Lemma 2.4 ( uniform convexity).
Suppose is of the form (2). Then, for all ,
We may also observe that is uniformly convex w.r.t. .
For all ,
3 Minimizing structured convex quartics
In order to show an overall convergence rate for minimizing structured convex quartics, we shall see that the desired algorithm would be to find an exact minimizer of , for some at each iteration of the main algorithm. Thus, one of our main challenges will be to show that an approximate minimizer of is sufficiently accurate for the rest of the algorithm. To that end, we begin by considering the auxiliary minimization problem, for which our method converges at a linear rate to an -optimal minimizer. With this approximate minimizer in hand, we find that, when taking small enough, it provides a sufficiently accurate solution to be used as part of a binary search procedure, called . This approach is needed for finding an appropriate value which meets a certain approximation criterion.
Finally, once we have found an valid choice of and its corresponding , we show how they can be used as part of our main method, called , to lead to a final solution such that in iterations of . Furthermore, each of these iterations requires some polylogarithmic factors incurred by and .
3.1 Approximate auxiliary minimization
To begin, we consider the auxiliary minimization problem , where
Note that is equivalent to , up to a change of variables. Our aim is to establish a minimization procedure which returns an -optimal solution in iterations, where is a problem-dependent parameter. Furthermore, each iteration is dominated by calls to a (sparse) linear system solver. This subroutine, which we call , is described in Section 5 of Nesterov (2018a) and is necessary for returning an approximate minimizer of . The approach involves showing that the auxiliary function is relatively smooth and convex (Lu et al., 2018), and further that each iteration of the method for minimizing such a function reduces to a minimization problem of the form
As noted by Nesterov (2018a), this minimization problem is both one-dimensional and strongly convex, and so we may achieve global linear convergence. Taken together with the relative smoothness and convexity of , we have the following theorem.
For all , , generated by (Algorithm 1), we have that
where and .
Let be the output from , for and , where . Then
where each iteration requires time proportional to evaluating in order to compute , as well as calls to a (sparse) linear system solver.
We first note that , and so . As observed by Nesterov (2018a) (see also: Appendix A in Agarwal et al. (2017a)), can be calculated in time proportional to the cost of evaluating , which takes time for of the form (2). In addition, Nesterov (2018a) notes that (16) can be found by any reasonable linearly convergent procedure, and so given access to the gradient of , this problem can be optimized (to sufficiently small error) in calls to a gradient oracle. Since
calculating the gradient requires time.
Finally, since , by our choice of , it follows from Theorem 3.1 that
As we shall see, it will become necessary to handle the approximation error from , and so we provide the following lemmas to that end.
Let , let be as output by , and let be as in (11). Then,
Let . Then,
Let be the output from for . In addition, let . Then,
Let . We have that
Now, by Lemma 3.3, we know that , and so it follows from the definition of that
3.2 Search procedure for finding
In this section, we establish the correctness of , our subroutine for finding an appropriate choice of , given as inputs. One of the key algorithmic components for achieving fast higher-order acceleration, as observed by Monteiro and Svaiter (2013) and Nesterov (2018b), is to determine such that , where we define
We will also need to define an approximate version
where we let . We may observe that is continuous in , and furthermore that there exists some such that , since if , then , and if , then . Thus, we may reduce it to a binary search problem, under an appropriate initialization. For now, we assume that at each iteration , is given initial bounds and such that , thus ensuring it is a valid binary search procedure. We will later show how can provide with such guarantees.
An important part of managing this process is to limit how quickly can grow, as we will need to ensure a closeness in function value once our candidate bounds and are sufficiently close. The following theorem gives us precisely what we need, namely a differential inequality w.r.t. .
Given , as inputs, and chosen sufficiently small, the algorithm outputs and such that
where is as defined in (20).