Fast minimization of structured convex quartics


Brian Bullins
Princeton University
Abstract

We propose faster methods for unconstrained optimization of structured convex quartics, which are convex functions of the form

for , , , and such that . In particular, we show how to achieve an $\epsilon$-optimal minimizer for such functions with only calls to a gradient oracle and linear system solver, where is a problem-dependent parameter. Our work extends recent ideas on efficient tensor methods and higher-order acceleration techniques to develop a descent method for optimizing the relevant quartic functions. As a natural consequence of our method, we achieve an overall cost of calls to a gradient oracle and (sparse) linear system solver for the problem of $\ell_4$-regression, providing additional insight into what may be achieved for general $\ell_p$-regression. Our results show the benefit of combining efficient higher-order methods with recent acceleration techniques for improving convergence rates in fundamental convex optimization problems.

1 Introduction

In this paper, we are interested in the unconstrained optimization problem

(1)

where is a convex function of the form

(2)

for some , , , and such that and are the rows of . We will refer to functions of this form as structured convex quartics, as we are given an explicit decomposition of the fourth-order term, i.e.,
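To make this structure concrete, the following is a minimal numerical sketch of evaluating such a quartic and its gradient. The symbols c, G, T, A and the scaling constants 1/2, 1/6, 1/24 are an assumed convention chosen for illustration (they need not match the constants in (2)), and the gradient formula assumes G and T are symmetric.

  import numpy as np

  def quartic_value(x, c, G, T, A):
      # Assumed convention for illustration:
      #   f(x) = c^T x + (1/2) x^T G x + (1/6) T[x, x, x] + (1/24) sum_i (a_i^T x)^4,
      # where the vectors a_i are the rows of A.
      Ax = A @ x
      return (c @ x
              + 0.5 * x @ (G @ x)
              + np.einsum('ijk,i,j,k->', T, x, x, x) / 6.0
              + np.sum(Ax ** 4) / 24.0)

  def quartic_grad(x, c, G, T, A):
      # Gradient under the same convention (G and T symmetric):
      #   grad f(x) = c + G x + (1/2) T[x, x, .] + (1/6) A^T ((A x)^3).
      Ax = A @ x
      return (c + G @ x
              + 0.5 * np.einsum('ijk,j,k->i', T, x, x)
              + A.T @ (Ax ** 3) / 6.0)

Note that the fourth-order term and its gradient touch the data only through the product Ax, so their cost is proportional to the number of nonzeros of A; the cubic term is the only part requiring the explicit third-order tensor.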

While fast minimization of convex quadratic functions has been an area of significant research effort (Cohen et al., 2015; Clarkson and Woodruff, 2017; Agarwal et al., 2017b, c), the structured convex quartic case has been less explored.

In this work, we present a method, called FastQuartic, whose total cost to find an $\epsilon$-optimal minimizer is established in the following theorem.

Theorem 1.1.

Let be a convex function of the form (2). Then, under appropriate initialization, FastQuartic finds a point such that

with total computational cost , where is the time to calculate the gradient of , LSS is the time to solve a (sparse) linear system, and is a problem-dependent parameter.

In the case where , with being the matrix multiplication constant, and when the linear system is sufficiently sparse, our method improves upon (up to logarithmic factors) the previous best rate of (where is the radius of the box containing the relevant convex set), which can be achieved by using a fast cutting plane method (Lee et al., 2015).

We believe that, in addition to improving the complexity for a certain class of convex optimization problems, our approach illustrates the possibility of using an efficient local search-type method for some more difficult convex optimization tasks, such as $\ell_p$-regression. This is in contrast to homotopy-based approaches (such as interior-point or path-following methods) (Nesterov and Nemirovskii, 1994; Bubeck et al., 2018a), cutting plane methods (Lee et al., 2015), and the ellipsoid method (Khachiyan, 1980).

1.1 Related work

In the general case, it has been shown to be NP-hard to find the global minimizer of a quartic polynomial (Murty and Kabadi, 1987; Parrilo and Sturmfels, 2003), or even to decide whether a quartic polynomial is convex (Ahmadi et al., 2013). However, in this paper we are able to bypass these hardness results by guaranteeing the convexity of $f$.

In terms of optimization for higher-order smooth convex functions, for functions whose Hessian is -Lipschitz, Monteiro and Svaiter (2013) achieve an error of after calls to a second-order Taylor expansion minimization oracle. Lower bounds have been established for the oracle complexity of higher-order smooth functions (Arjevani et al., 2018; Agarwal and Hazan, 2018), which match the rate of Monteiro and Svaiter (2013) for $p = 2$, and recent progress has been made toward tightening these bounds.

Some recent work from Gasnikov et al. (2018), available only in Russian, establishes near-optimal rates for higher-order smooth optimization, though to the best of our understanding, it appears that the paper does not provide an explicit guarantee for the line search procedure. More recently, two independent works (Jiang et al., 2018; Bubeck et al., 2018b), posted to arXiv over the past few days, establish near-optimal rates for optimization of functions with higher-order smoothness, under an oracle model, along with an analysis of the binary search procedure. In this paper, while we consider only the case for , we go beyond the oracle model to establish an end-to-end complexity based on efficient approximations of tensor methods (Nesterov, 2018a). Furthermore, while our paper also relies on a careful handling of the binary search procedure, our approach requires the more general setting of higher-order smoothness with respect to matrix-induced norms, which does not appear to follow immediately from Jiang et al. (2018); Bubeck et al. (2018b).

2 Setup

Let be a symmetric positive-definite matrix, i.e., . We let for a matrix , and we denote the minimizer as . For any vector , we define its matrix-induced norm (w.r.t. ) as . Throughout the paper, we will let . We say a differentiable function is -uniformly convex (of degree ) with respect to if, for all ,

Note that for and , this definition captures the standard notion of strong convexity. As we shall see, since our aim is to minimize structured quartic functions, we will be concerned with this definition for and .
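For reference, the standard form of this definition (as in, e.g., Nesterov (2018a)) reads as follows; the placement of the constants here is one common convention, intended only as a guide to the notation:

\[
f(y) \;\ge\; f(x) + \langle \nabla f(x),\, y - x \rangle + \frac{\sigma}{q}\,\|y - x\|_A^{q}
\qquad \text{for all } x, y .
\]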

A related notion is that of (higher-order) smoothness. Namely, we say a -times differentiable function is smooth (of degree ) w.r.t. if the -th differential is Lipschitz continuous, i.e., for all ,

where we define

Again, since we are concerned with quartic functions, we will later show how is smooth with respect to the appropriate norm.

For that are smooth w.r.t. , we also have that, for all ,

(3)
(4)
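In their standard form (see, e.g., Nesterov (2018a)), for a function $f$ whose $p$-th differential is $L$-Lipschitz with respect to $\|\cdot\|_A$, these two consequences read as follows, where $\Phi_{x,p}$ denotes the $p$-th order Taylor approximation of $f$ at $x$ (cf. (7) below) and $\|\cdot\|_A^{*}$ the corresponding dual norm; the exact symbols appearing in (3)-(4) may differ:

\[
\bigl| f(y) - \Phi_{x,p}(y) \bigr| \;\le\; \frac{L}{(p+1)!}\,\|y - x\|_A^{p+1},
\qquad
\bigl\| \nabla f(y) - \nabla \Phi_{x,p}(y) \bigr\|_A^{*} \;\le\; \frac{L}{p!}\,\|y - x\|_A^{p}.
\]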

It will eventually become necessary to handle the set of all points that might be reached by our method, starting from an initial point . To that end, we consider the following objects, beginning with the set

(5)

Given this set, we now consider the maximum function value attained over , i.e.,

Finally, we let

(6)

where . We may also define . We note that, since is smooth, is a problem-dependent parameter, i.e., it depends on , , , and . As we will later show, the dependence on in the final convergence rate will only appear as part of logarithmic factors.

2.1 Properties of convex quartic functions

Throughout the paper, following the conventions of Nesterov (2018a), we will let

(7)

denote the -th order Taylor approximation of , centered at . Furthermore, for that is smooth, we define a model function

(8)

As we are only concerned with functions that are smooth, we will drop the subscript to define and

(9)

Note that is smooth (of degree 3) w.r.t. . The following theorem illustrates some useful properties of the model .
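For orientation, the regularized third-order model in this framework (Nesterov, 2018a) has the following general shape, written here with a generic symbol and a generic regularization constant $M$; the precise constant convention used in (8)-(9) is not reproduced:

\[
\Omega_x(y) \;=\; \Phi_{x,3}(y) \;+\; \frac{M}{4!}\,\|y - x\|_A^{4},
\]

that is, the third-order Taylor approximation of $f$ at $x$ plus a quartic regularizer in the matrix-induced norm. For $M$ a sufficiently large multiple of the degree-3 smoothness constant, such a model both upper bounds $f$ and remains convex; properties of this kind are what Theorem 2.1 records.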

Theorem 2.1 (Nesterov (2018a), Theorem 1, for ).

Suppose is convex, 3-times differentiable, and smooth (of degree 3). Then, for any , we have

Moreover, for all ,

(10)

With this representation of the model function in hand, we let

(11)

denote a minimizer of the fourth-order model, centered at . The following lemma concerning , which will later prove useful, establishes a relaxed version of eq. (2.13) from Nesterov (2018a).

Lemma 2.2.

Let , and let be as in (11). Then, for all ,

(12)

where

and

In order to get a handle on the regularity properties of , we establish its smoothness and uniform convexity parameters w.r.t. .

Lemma 2.3 ( smoothness).

Suppose is of the form (2). Then, for all ,

(13)
Lemma 2.4 ( uniform convexity).

Suppose is of the form (2). Then, for all ,

(14)

We may also observe that is uniformly convex w.r.t. .

Lemma 2.5.

For all ,

(15)
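A standard consequence of this kind of uniform convexity, used repeatedly in Section 3 (for instance, in the proof of Lemma 3.3), is that a function-value gap at an approximate minimizer controls its distance to the exact minimizer. In generic notation, for a function $g$ that is uniformly convex of degree 4 with constant $\sigma$ w.r.t. $\|\cdot\|_A$ (these symbols need not match those in (15)):

\[
\frac{\sigma}{4}\,\|y - y^{*}\|_A^{4} \;\le\; g(y) - g(y^{*})
\quad\Longrightarrow\quad
\|y - y^{*}\|_A \;\le\; \left(\frac{4\,\bigl(g(y) - g(y^{*})\bigr)}{\sigma}\right)^{1/4},
\]

where $y^{*}$ denotes the exact minimizer of $g$; the first inequality follows by applying the uniform convexity inequality at $x = y^{*}$, where the gradient term vanishes.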

3 Minimizing structured convex quartics

In order to establish an overall convergence rate for minimizing structured convex quartics, we shall see that the ideal algorithm would find an exact minimizer of , for some , at each iteration of the main algorithm. Thus, one of our main challenges will be to show that an approximate minimizer of is sufficiently accurate for the rest of the algorithm. To that end, we begin by considering the auxiliary minimization problem, for which our method converges at a linear rate to an -optimal minimizer. With this approximate minimizer in hand, we find that, when taking small enough, it provides a sufficiently accurate solution to be used as part of a binary search procedure, called . This approach is needed for finding an appropriate value which meets a certain approximation criterion.

Finally, once we have found a valid choice of and its corresponding , we show how they can be used as part of our main method, called FastQuartic, to arrive at a final solution such that in iterations of FastQuartic. Furthermore, each of these iterations incurs only some additional polylogarithmic factors from and .

3.1 Approximate auxiliary minimization

To begin, we consider the auxiliary minimization problem , where

Note that is equivalent to , up to a change of variables. Our aim is to establish a minimization procedure which returns an -optimal solution in iterations, where is a problem-dependent parameter. Furthermore, each iteration is dominated by calls to a (sparse) linear system solver. This subroutine, which we call , is described in Section 5 of Nesterov (2018a) and is necessary for returning an approximate minimizer of . The approach involves showing that the auxiliary function is relatively smooth and convex (Lu et al., 2018), and further that each iteration of the method for minimizing such a function reduces to a minimization problem of the form

(16)

where

and

As noted by Nesterov (2018a), this minimization problem is both one-dimensional and strongly convex, and so we may achieve global linear convergence. Taken together with the relative smoothness and convexity of , we have the following theorem.

Theorem 3.1 (Nesterov (2018a), eq. (5.9). See also: Lu et al. (2018), Theorem 3.1).

For all , , generated by (Algorithm 1), we have that

where and .

  Input: , , , .
  for  to  do
     
     
  end for
  return  
Algorithm 1
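Since any reasonable linearly convergent routine suffices for the one-dimensional subproblem (16) arising inside Algorithm 1, the following is a generic sketch of one such routine: bisection on the derivative of a univariate convex function. The function names and the toy objective are illustrative only and do not correspond to the specific form of (16).

  def minimize_1d_convex(dphi, lo, hi, tol=1e-12, max_iter=200):
      # Bisection on the (nondecreasing) derivative of a convex function phi:
      # the bracket [lo, hi] is assumed to contain a point where dphi changes
      # sign, and each step halves the bracket, giving linear convergence.
      for _ in range(max_iter):
          mid = 0.5 * (lo + hi)
          if dphi(mid) > 0.0:
              hi = mid
          else:
              lo = mid
          if hi - lo <= tol:
              break
      return 0.5 * (lo + hi)

  # Illustration: minimize phi(t) = t^4 / 4 - t over [0, 2]; the minimizer is t = 1.
  t_star = minimize_1d_convex(lambda t: t ** 3 - 1.0, 0.0, 2.0)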
Corollary 3.2.

Let be the output from , for and , where . Then

where each iteration requires time proportional to evaluating in order to compute , as well as calls to a (sparse) linear system solver.

Proof.

We first note that , and so . As observed by Nesterov (2018a) (see also: Appendix A in Agarwal et al. (2017a)), can be calculated in time proportional to the cost of evaluating , which takes time for of the form (2). In addition, Nesterov (2018a) notes that a solution to (16) can be found by any reasonable linearly convergent procedure, and so, given access to the gradient of , this problem can be optimized (to sufficiently small error) in calls to a gradient oracle. Since

calculating the gradient requires time.

Finally, since , by our choice of , it follows from Theorem 3.1 that

As we shall see, it will become necessary to handle the approximation error from , and so we provide the following lemmas to that end.

Lemma 3.3.

Let , let be as output by , and let be as in (11). Then,

Proof.

By Lemma 2.5, we know that

and so it follows from Corollary 3.2 that

Lemma 3.4.

Let . Then,

Proof.

The result follows from Lemmas 2.2 and 3.3. ∎

Lemma 3.5.

Let be the output from for . In addition, let . Then,

Proof.

Let . We have that

Now, by Lemma 3.3, we know that , and so it follows from the definition of that

3.2 Search procedure for finding

In this section, we establish the correctness of , our subroutine for finding an appropriate choice of , given as inputs. One of the key algorithmic components for achieving fast higher-order acceleration, as observed by Monteiro and Svaiter (2013) and Nesterov (2018b), is to determine such that , where we define

(17)
(18)

and

(19)

We will also need to define an approximate version

(20)

where we let . We may observe that is continuous in , and furthermore that there exists some such that , since if , then , and if , then . Thus, finding such a value reduces to a binary search problem, under an appropriate initialization. For now, we assume that at each iteration , is given initial bounds and such that , thus ensuring that the binary search procedure is valid. We will later show how can provide with such guarantees.
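As an illustration of the search just described, the following is a generic bisection sketch with hypothetical names: criterion stands for the continuous, monotone quantity being driven into a prescribed interval [lower, upper], and the initial bounds are assumed to bracket a valid value, mirroring the invariant maintained above. This is not the paper's Algorithm 2, which additionally has to track the approximate minimizers computed at each trial value.

  def bisect_for_lambda(criterion, lam_lo, lam_hi, lower, upper, max_iter=100):
      # Assumes criterion is continuous and nondecreasing, with
      # criterion(lam_lo) <= lower and criterion(lam_hi) >= upper, so halving
      # the bracket preserves the invariant until the target interval is hit.
      for _ in range(max_iter):
          lam = 0.5 * (lam_lo + lam_hi)
          val = criterion(lam)
          if val < lower:
              lam_lo = lam
          elif val > upper:
              lam_hi = lam
          else:
              return lam
      return 0.5 * (lam_lo + lam_hi)

  # Illustration: drive 2 * lam into [0.9, 1.1], starting from the bracket [0, 10].
  lam_star = bisect_for_lambda(lambda lam: 2.0 * lam, 0.0, 10.0, 0.9, 1.1)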

  Input: , , , , (s.t. ), , , .
  Define .
  ,
  for  to  do
     
     
     
     
     
     
     if  then
        
     else if  then
        
     else
        return  
     end if
  end for
  return  
Algorithm 2

An important part of managing this process is to limit how quickly can grow, as we will need to ensure closeness in function value once our candidate bounds and are sufficiently close. The following theorem gives us precisely what we need, namely a differential inequality w.r.t. .

Theorem 3.6.

Let be as defined in (17), for some . Then we have that, for all ,

where is as defined in (33).

Theorem 3.7.

Given , as inputs, and chosen sufficiently small, the algorithm outputs and such that

(21)

where is as defined in (20).

3.3 Analyzing the convergence of FastQuartic

  Input: , , , , , , , .
  Define , , , as in (37).
  for  to  do