On the Complexity of Bandit and Derivative-Free Stochastic Convex Optimization


Abstract

The problem of stochastic convex optimization with bandit feedback (in the learning community) or without knowledge of gradients (in the optimization community) has received much attention in recent years, in the form of algorithms and performance upper bounds. However, much less is known about the inherent complexity of these problems, and there are few lower bounds in the literature, especially for nonlinear functions. In this paper, we investigate the attainable error/regret in the bandit and derivative-free settings, as a function of the dimension $d$ and the available number of queries $T$. We provide a precise characterization of the attainable performance for strongly-convex and smooth functions, which also implies a non-trivial lower bound for more general problems. Moreover, we prove that in both the bandit and derivative-free settings, the required number of queries must scale at least quadratically with the dimension. Finally, we show that on the natural class of quadratic functions, it is possible to obtain a “fast” error rate in terms of $T$, under mild assumptions, even without having access to gradients. To the best of our knowledge, this is the first such rate in a derivative-free stochastic setting, and it holds despite previous results which seem to imply the contrary.

1 Introduction

This paper considers the following fundamental question: Given an unknown convex function $F$, and the ability to query for (possibly noisy) realizations of its values at various points, how can we optimize $F$ with as few queries as possible?

This question, under different guises, has played an important role in several communities. In the optimization community, it is usually known as “zeroth-order” or “derivative-free” convex optimization, since we only have access to function values rather than gradients or higher-order information. The goal is to return a point with small optimization error on some convex domain, using a limited number of queries. Derivative-free methods were among the earliest algorithms to numerically solve unconstrained optimization problems, and have recently enjoyed increasing interest, being especially useful in black-box situations where gradient information is hard to compute or does not exist [21]. In a stochastic framework, we can only obtain noisy realizations of the function values (for instance, due to running the optimization process on sampled data). We refer to this setting as derivative-free SCO (short for stochastic convex optimization).

In the learning community, these kinds of problems have been closely studied in the context of multi-armed bandits and (more generally) bandit online optimization, which are powerful models for sequential decision making under uncertainty [11]. In a stochastic framework, these settings correspond to repeatedly choosing points in some convex domain, obtaining noisy realizations of some underlying convex function’s value. However, rather than minimizing optimization error, our goal is to minimize the (average) regret: roughly speaking, ensuring that the average of the function values we obtain is not much larger than the minimal function value. For example, the well-known multi-armed bandit problem corresponds to a linear function over the simplex. We refer to this setting as bandit SCO. As will be discussed more explicitly later on, any algorithm which attains small average regret can be converted to an algorithm with the same optimization error. In other words, bandit SCO is only harder than derivative-free SCO. We note that in the context of stochastic multi-armed bandits, the potential gap between the two settings (under the terms “cumulative regret” and “simple regret”) was introduced and studied in [9].

When one is given gradient information, the attainable optimization error / average regret is well-known: under mild conditions, it is $\Theta(1/\sqrt{T})$ for convex functions and $\Theta(1/T)$ for strongly-convex functions, where $T$ is the number of queries [24]. Note that these bounds do not explicitly depend on the dimension of the domain.

The inherent complexity of bandit/derivative-free SCO is not as well-understood. An important exception is multi-armed bandits, where the attainable error/regret is known to be exactly $\Theta(\sqrt{d/T})$, where $d$ is the dimension (the number of arms) and $T$ is the number of queries [7]. Linear functions over other convex domains have also been explored, with upper bounds on the order of $\sqrt{d^2/T}$ to $\sqrt{d^3/T}$ (e.g. [1]). For linear functions over general domains, information-theoretic lower bounds have been proven in [14]. However, these lower bounds are either on the regret (not the optimization error); shown for non-convex domains; or are implicit and rely on artificial, carefully constructed domains. In contrast, we focus here on simple, natural domains and convex problems.

When dealing with more general, nonlinear functions, much less is known. The problem was originally considered over 30 years ago, in the seminal work by Yudin and Nemirovsky on the complexity of optimization [20]. The authors provided some algorithms and upper bounds, but as they themselves emphasize (cf. pg. 359), the attainable complexity is far from clear. Quite recently, [18] provided an $\Omega(1/\sqrt{T})$ lower bound for strongly-convex functions, which demonstrates that the “fast” $1/T$ rate in terms of $T$, which one enjoys with gradient information, is not possible here. In contrast, the current best-known upper bounds are $\tilde{\mathcal{O}}(\sqrt[4]{d^2/T})$, $\tilde{\mathcal{O}}(\sqrt[3]{d^2/T})$, and $\tilde{\mathcal{O}}(\sqrt{d^2/T})$ for convex, strongly-convex, and strongly-convex-and-smooth functions respectively ([16]); and a $\tilde{\mathcal{O}}(\sqrt{d^{32}/T})$ bound for convex functions ([3]), which is better in terms of the dependence on $T$ but very bad in terms of the dimension $d$.

In this paper, we investigate the complexity of bandit and derivative-free stochastic convex optimization, focusing on nonlinear functions, with the following contributions (see also the summary in Table 1):

  • We prove that for strongly-convex and smooth functions, the attainable error/regret is exactly $\Theta(\sqrt{d^2/T})$. This has three important ramifications: First, it settles the question of attainable performance for such functions, and is the first sharp characterization of complexity for a general nonlinear bandit/derivative-free class of problems. Second, it proves that the required number of queries in such problems must scale quadratically with the dimension, even in the easier optimization setting, and in contrast to the linear case, which often allows linear scaling with the dimension. Third, it formally provides a natural lower bound for more general classes of convex problems.

  • We analyze an important special case of strongly-convex and smooth functions, namely quadratic functions. We show that for such functions, one can (efficiently) attain $\mathcal{O}(d^2/T)$ optimization error, and that this rate is sharp. To the best of our knowledge, this is the first general class of nonlinear functions for which one can show a “fast rate” (scaling as $1/T$ in terms of $T$) in a derivative-free stochastic setting. In fact, this may seem to contradict the result in [18], which shows an $\Omega(1/\sqrt{T})$ lower bound that holds for quadratic functions. However, as we explain in more detail later on, there is no contradiction, since the example establishing the lower bound of [18] imposes an extremely small domain (which actually decays with $T$), while our result holds for a fixed domain. Although this result is tight, we also show that under more restrictive assumptions on the noise process, it is sometimes possible to obtain even better error bounds.

  • We prove that even for quadratic functions, the attainable average regret is exactly $\Theta(\sqrt{d^2/T})$, in contrast to the $\Theta(d^2/T)$ result for optimization error. This shows there is a real gap between what can be obtained for derivative-free SCO and bandit SCO, without any specific distributional assumptions. Again, this stands in contrast to settings such as multi-armed bandits, where there is no difference in the distribution-free performance of the two settings.

We emphasize that our upper bounds are based on the assumption that the function minimizer is bounded away from the domain boundary, or that we can query points slightly outside the domain. However, we argue that this assumption is not very restrictive in the context of strongly-convex functions (especially in learning applications), where the domain is often all of $\mathbb{R}^d$, and a minimizer always exists.

The paper is structured as follows: In Section 2, we formally define the setup and introduce the notation used in the remainder of the paper. For clarity of exposition, we begin with the case of quadratic functions in Section 3, providing algorithms and upper and lower bounds. The tools and insights we develop for the quadratic case will allow us to tackle the more general strongly-convex-and-smooth setting in Section 4. We end the main part of the paper with a summary and a discussion of open problems in Section 5. In Appendix A, we demonstrate that one can obtain improved performance in the quadratic case when considering more specific, natural noise processes. Additional proofs are presented in Appendix B.

Table 1: A summary of the complexity upper bounds ($\tilde{\mathcal{O}}(\cdot)$) and lower bounds ($\Omega(\cdot)$) for derivative-free stochastic convex optimization (optimization error) and bandit stochastic convex optimization (average regret), for various function classes, in terms of the dimension $d$ and the number of queries $T$. Results marked (*) are shown in this paper. The upper bounds for the convex and strongly convex cases combine results from [16] and [3]. The table shows the dependence on $d$ and $T$ only and ignores other factors and constants.

Function Type           | Optimization Error                                                          | Average Regret
Quadratic               | $\Theta(d^2/T)$ (*)                                                         | $\tilde{\mathcal{O}}(\sqrt{d^2/T})$, $\Omega(\sqrt{d^2/T})$ (*)
Str. Convex and Smooth  | $\tilde{\mathcal{O}}(\sqrt{d^2/T})$, $\Omega(\sqrt{d^2/T})$ (*)             | $\tilde{\mathcal{O}}(\sqrt{d^2/T})$, $\Omega(\sqrt{d^2/T})$ (*)
Str. Convex             | $\tilde{\mathcal{O}}(\sqrt[3]{d^2/T})$, $\Omega(\sqrt{d^2/T})$ (*)          | $\tilde{\mathcal{O}}(\sqrt[3]{d^2/T})$, $\Omega(\sqrt{d^2/T})$ (*)
Convex                  | $\tilde{\mathcal{O}}(\min\{\sqrt[4]{d^2/T}, \sqrt{d^{32}/T}\})$, $\Omega(\sqrt{d^2/T})$ (*) | $\tilde{\mathcal{O}}(\sqrt[4]{d^2/T})$, $\Omega(\sqrt{d^2/T})$ (*)

2 Preliminaries

Let $\|\cdot\|$ denote the standard Euclidean norm. We let $F$ denote the convex function of interest, defined over a (closed) convex domain $\mathcal{W} \subseteq \mathbb{R}^d$. We say that $F$ is $\lambda$-strongly convex, for $\lambda > 0$, if for any $\mathbf{w}, \mathbf{w}' \in \mathcal{W}$ and any subgradient $\mathbf{g}$ of $F$ at $\mathbf{w}$, it holds that $F(\mathbf{w}') \geq F(\mathbf{w}) + \langle \mathbf{g}, \mathbf{w}' - \mathbf{w} \rangle + \frac{\lambda}{2}\|\mathbf{w}' - \mathbf{w}\|^2$. Intuitively, this means that we can lower-bound $F$ everywhere by a quadratic function of fixed curvature. We say that $F$ is $\mu$-smooth if for any $\mathbf{w}, \mathbf{w}' \in \mathcal{W}$, and any subgradient $\mathbf{g}$ of $F$ at $\mathbf{w}$, it holds that $F(\mathbf{w}') \leq F(\mathbf{w}) + \langle \mathbf{g}, \mathbf{w}' - \mathbf{w} \rangle + \frac{\mu}{2}\|\mathbf{w}' - \mathbf{w}\|^2$. Intuitively, this means that we can upper-bound $F$ everywhere by a quadratic function of fixed curvature. We let $\mathbf{w}^*$ denote a minimizer of $F$ on $\mathcal{W}$. To prevent trivialities, we consider in this paper only functions whose optimum is known beforehand to lie in some bounded domain (even if $\mathcal{W}$ is large or even all of $\mathbb{R}^d$), and which are Lipschitz in that domain.

The learning/optimization process proceeds in rounds. In each round $t$, we pick and query a point $\mathbf{w}_t$, obtaining an independent realization of $F(\mathbf{w}_t) + \xi_t$, where $\xi_t$ is an unknown zero-mean random variable with a bounded second moment. In the bandit SCO setting, our goal is to minimize the expected average regret, namely

\[
\frac{1}{T}\,\mathbb{E}\Big[\sum_{t=1}^{T} F(\mathbf{w}_t)\Big] - \min_{\mathbf{w} \in \mathcal{W}} F(\mathbf{w}),
\]

whereas in the derivative-free SCO setting, our goal is to compute, based on $\mathbf{w}_1, \ldots, \mathbf{w}_T$ and the observed values, some point $\bar{\mathbf{w}}_T$ such that the expected optimization error

\[
\mathbb{E}\big[F(\bar{\mathbf{w}}_T)\big] - \min_{\mathbf{w} \in \mathcal{W}} F(\mathbf{w})
\]

is as small as possible. We note that given a bandit SCO algorithm with some regret bound, one can get a derivative-free SCO algorithm with the same optimization error bound: we simply run the stochastic bandit algorithm, getting $\mathbf{w}_1, \ldots, \mathbf{w}_T$, and return $\bar{\mathbf{w}}_T = \frac{1}{T}\sum_{t=1}^{T} \mathbf{w}_t$. By Jensen's inequality, the expected optimization error of $\bar{\mathbf{w}}_T$ is at most the expected average regret. Thus, bandit SCO is only harder than derivative-free SCO.
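To make this reduction concrete, here is a minimal sketch in Python (the wrapper and the `bandit_alg` interface are hypothetical names of our choosing, not from the paper): it runs a bandit algorithm for $T$ rounds and returns the average iterate.

```python
import numpy as np

def bandit_to_optimizer(bandit_alg, noisy_oracle, T):
    """Run a bandit SCO algorithm for T rounds and return the average
    iterate.  By Jensen's inequality (F is convex), the expected
    optimization error of the returned average is at most the expected
    average regret of the bandit algorithm."""
    iterates = []
    for _ in range(T):
        w_t = bandit_alg.next_query()   # point chosen at this round
        y_t = noisy_oracle(w_t)         # noisy realization of F(w_t)
        bandit_alg.update(y_t)          # feed the bandit feedback back
        iterates.append(w_t)
    return np.mean(iterates, axis=0)    # average iterate (1/T) sum_t w_t
```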

In this paper, we provide upper and lower bounds on the attainable optimization error / average regret, as a function of the dimension $d$ and the number of rounds/queries $T$. For simplicity, we focus here on bounds which hold in expectation; an interesting direction for further research is to extend these to bounds on the actual error/regret which hold with high probability.

3 Quadratic Functions

In this section, we consider the class of quadratic functions, which have the form

\[
F(\mathbf{w}) = \tfrac{1}{2}\,\mathbf{w}^{\top} A\, \mathbf{w} + \mathbf{b}^{\top}\mathbf{w} + c,
\]

where $A$ is positive-definite (with a minimal eigenvalue bounded away from $0$). Moreover, to make the problem well-behaved, we assume that $A$ has a spectral norm of at most $1$, and that $\|\mathbf{b}\| \leq 1$. We note that if the norms are bounded but larger than $1$, this can be easily handled by rescaling the function. It is easily seen that such functions are both strongly convex and smooth. Moreover, this is a natural and important class of functions, which in learning applications appears, for instance, in the context of least squares and ridge regression. Besides providing new insights for this class, we will use the techniques developed here later on, in the more general case of strongly-convex and smooth functions.
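For concreteness, a minimal Python model of this function class and its noisy value oracle (illustrative only; the class name and the unit-variance Gaussian noise are our choices, standing in for the generic zero-mean noise term):

```python
import numpy as np

class NoisyQuadratic:
    """F(w) = 0.5 * w' A w + b' w + c, with A positive-definite (spectral
    norm <= 1).  Each query returns F(w) plus independent zero-mean noise;
    here the noise is standard Gaussian, standing in for the generic
    noise term of the text."""
    def __init__(self, A, b, c=0.0, rng=None):
        self.A, self.b, self.c = A, b, c
        self.rng = rng or np.random.default_rng(0)

    def value(self, w):        # noiseless F(w), for testing only
        return 0.5 * w @ self.A @ w + self.b @ w + self.c

    def query(self, w):        # the noisy oracle the learner sees
        return self.value(w) + self.rng.standard_normal()
```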

3.1 Upper Bounds

We begin by showing that for derivative-free SCO, one can obtain an optimization error bound of $\mathcal{O}(d^2/T)$. To the best of our knowledge, this is the first example of a derivative-free stochastic bound scaling as $1/T$ in terms of $T$ for a general class of nonlinear functions, as opposed to $1/\sqrt{T}$. However, to achieve this result, we need the following mild assumption: either the minimizer $\mathbf{w}^*$ is bounded away from the boundary of the domain $\mathcal{W}$, or we are allowed to query points slightly outside $\mathcal{W}$.

With strongly-convex functions, the most common case is that $\mathcal{W} = \mathbb{R}^d$, and then both alternatives actually hold. Even in other situations, one of these alternatives virtually always holds. Note that we crucially rely here on the strong-convexity assumption: with (say) linear functions, the domain must always be bounded and the optimum always lies at the boundary of the domain.

With this assumption, the bound we obtain is on the order of $d^2/T$. As discussed earlier, [18] recently proved an $\Omega(1/\sqrt{T})$ lower bound for derivative-free SCO, which actually applies to quadratic functions. This does not contradict our result, since in their example the diameter of the domain (and hence also $\|\mathbf{w}^*\|$) decays with $T$. In contrast, our bound holds for a fixed domain, which we believe is natural in most applications.

To obtain this behavior, we utilize a well-known one-point gradient estimate technique, which allows us to get an unbiased estimate of the gradient at any point by randomly querying for a (noisy) value of the function around it (see [20]). Our key insight is that whereas for general functions one must query very close to the point of interest (at a distance shrinking to $0$ with $T$), quadratic functions have additional structure which allows us to query relatively far away, yielding gradient estimates with much smaller variance.
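To illustrate the insight, here is a sketch of such an estimator (our illustrative instantiation, querying on the unit sphere; the paper's algorithm may differ in its details): for a quadratic $F$, the constant and second-order terms of $F(\mathbf{w} + \mathbf{u})$ are even in the perturbation $\mathbf{u}$ and average out, so querying at distance $1$ still yields an exactly unbiased gradient estimate.

```python
import numpy as np

def one_point_gradient_estimate(oracle, w, rng):
    """One-point gradient estimate at w, tailored to a *quadratic* F.
    Query F at w + u with u uniform on the unit sphere, and return
    g = d * oracle(w + u) * u.  For quadratic F, E[g] = grad F(w) exactly:
    the constant and quadratic terms of F(w + u) are even in u and vanish
    in expectation, so we may query at distance 1 (i.e., far away)."""
    d = len(w)
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)              # uniform random direction on the sphere
    return d * oracle(w + u) * u

# Monte-Carlo sanity check of unbiasedness on a hypothetical instance:
rng = np.random.default_rng(1)
d = 5
A = np.eye(d)
b = rng.standard_normal(d)
oracle = lambda w: 0.5 * w @ A @ w + b @ w + rng.standard_normal()
w0 = rng.standard_normal(d)
est = np.mean([one_point_gradient_estimate(oracle, w0, rng)
               for _ in range(200_000)], axis=0)
print(np.linalg.norm(est - (A @ w0 + b)))   # small: the estimate is unbiased
```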

The algorithm we use is presented as Algorithm ?, and is computationally efficient. It uses a modification $\mathcal{W}'$ of the domain $\mathcal{W}$, defined as follows. First, we let $B$ denote some known upper bound on $\|\mathbf{w}^*\|$. If the first alternative of assumption ? holds, then $\mathcal{W}'$ consists of all points in $\mathcal{W}$ whose distance from $\mathcal{W}$'s boundary is at least $1$. If the second alternative holds, then we simply take $\mathcal{W}' = \mathcal{W}$. Note that under either alternative, $\mathcal{W}'$ is convex, $\mathbf{w}^* \in \mathcal{W}'$, and our algorithm always queries at legitimate points. In the pseudocode, we use $\Pi_{\mathcal{W}'}$ to denote projection on $\mathcal{W}'$. For simplicity, we assume that $T/2$ is an integer and that $\mathcal{W}'$ includes the origin $\mathbf{0}$.

The following theorem quantifies the optimization error of our algorithm.

Note that returning $\bar{\mathbf{w}}_T$ as the average over the last $T/2$ iterates (as opposed to averaging over all iterates) is necessary to avoid extra $\log(T)$ factors [22].

As an interesting side-note, we conjecture that a gradient-based approach is crucial here to obtain $1/T$-type rates (in terms of $T$). For example, a different family of derivative-free methods (see for instance [20]) is based on a type of noisy binary search, where a few strategically selected points are repeatedly sampled in order to estimate which of them has a larger/smaller function value. This is used to shrink the feasible region where the optimum might lie. Since it is generally impossible to estimate the mean of noisy function values at a rate better than $1/\sqrt{T}$, it is not clear if one can get an optimization rate faster than $1/\sqrt{T}$ with such methods. A sketch of this style of method, for intuition, follows.
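A minimal sketch of such a method in Python (our illustration, for a one-dimensional convex $F$; the function name and the budget split are our choices). Note how each comparison spends many queries just to average out the noise:

```python
import numpy as np

def noisy_trisection_min(oracle, lo, hi, T):
    """Noisy 'binary search' for the minimizer of a 1-d convex F on
    [lo, hi]: repeatedly sample two interior points, keep the side whose
    averaged value looks smaller.  Each comparison needs about 1/delta^2
    samples to resolve a value gap of delta, which is why such methods
    appear limited to 1/sqrt(T)-type optimization rates."""
    rounds = max(int(np.log2(T)), 1)          # number of shrinking steps
    samples = max(T // (2 * rounds), 1)       # query budget per point
    for _ in range(rounds):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        f1 = np.mean([oracle(m1) for _ in range(samples)])
        f2 = np.mean([oracle(m2) for _ in range(samples)])
        if f1 < f2:
            hi = m2   # with high probability the minimizer is not in (m2, hi]
        else:
            lo = m1
    return (lo + hi) / 2
```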

The proof of the theorem relies on the following key lemma, whose proof appears in the appendix.

This lemma implies that Algorithm ? essentially performs stochastic gradient descent over the strongly-convex function $F$, where the gradient estimates are unbiased and have bounded second moments. The returned point is a suffix-average of the last $T/2$ iterates. Using a convergence analysis for stochastic gradient descent with suffix-averaging [22], and plugging in the bounds of Lemma ?, we get Thm. ?.
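Putting the pieces together, a hedged sketch of the overall scheme (our reconstruction with illustrative choices, reusing `one_point_gradient_estimate` from above; `project` is a caller-supplied stand-in for the projection onto the modified domain, and the paper's Algorithm ? may differ in domain handling and constants):

```python
import numpy as np

def derivative_free_sgd(oracle, w0, T, lam, project, rng):
    """Projected SGD on a lam-strongly-convex quadratic F, using only
    noisy value queries via one_point_gradient_estimate (defined above).
    Step size 1/(lam * t) is the standard choice for strongly-convex
    objectives; returning the average of the last T/2 iterates (suffix
    averaging) avoids extra log(T) factors in the error bound."""
    w = np.asarray(w0, dtype=float).copy()
    suffix = []
    for t in range(1, T + 1):
        g = one_point_gradient_estimate(oracle, w, rng)  # unbiased for quadratics
        w = project(w - g / (lam * t))     # projected gradient step
        if t > T // 2:
            suffix.append(w.copy())
    return np.mean(suffix, axis=0)         # suffix-averaged output point
```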

3.2 Lower Bounds

In this subsection, we prove that the upper bound obtained in Thm. ? is essentially tight: namely, up to constants, the worst-case error rate one can obtain for derivative-free SCO of quadratic functions is of order $d^2/T$. Besides showing that the algorithm above is essentially optimal, this implies that even for extremely nice strongly-convex functions and domains, the number of queries required to reach some fixed accuracy scales quadratically with the dimension $d$. This stands in contrast to the case of linear functions, where the provable query complexity often scales linearly with $d$.

Note that since $\|\mathbf{w}^*\| \leq 1$, we know in advance that the optimum must lie in the unit Euclidean ball. Despite this, the lower bound holds even if we do not restrict at all the domain in which we are allowed to query; it can even be all of $\mathbb{R}^d$.

The proof technique is inspired by a lower bound which appears in [4], in the different context of compressed sensing. The argument also bears close similarities to the proof of Assouad's lemma (see [13]).

We will exhibit a distribution over quadratic functions $F_{\mathbf{e}}$, such that in expectation over this distribution, any querying strategy will attain $\Omega(d^2/T)$ optimization error. This implies that for any querying strategy, there exists some deterministic $F_{\mathbf{e}}$ on which it will have this amount of error.

The functions we shall consider are of the form

\[
F_{\mathbf{e}}(\mathbf{w}) = \|\mathbf{w} - \epsilon\,\mathbf{e}\|^2,
\]

where $\mathbf{e}$ is drawn uniformly from $\{-1,+1\}^d$, with $\epsilon$ being a parameter to be specified later. Moreover, we will assume that the noise $\xi_t$ is a Gaussian random variable with zero mean and standard deviation $1$.

By definition of strong convexity, it is easy to verify that $F_{\mathbf{e}}(\mathbf{w}) - F_{\mathbf{e}}(\epsilon\mathbf{e}) \geq \|\mathbf{w} - \epsilon\mathbf{e}\|^2$, and a coordinate of the returned point whose sign disagrees with the corresponding coordinate of $\mathbf{e}$ contributes at least $\epsilon^2$ to this quantity. Thus, the expected optimization error (over the querying strategy) is at least

\[
\epsilon^2 \sum_{i=1}^{d} \Pr\big(\mathrm{sign}(\bar{w}_i) \neq e_i\big). \tag{1}
\]
We will assume that the querying strategy is deterministic: each $\mathbf{w}_t$ is a deterministic function of the query values observed at $\mathbf{w}_1, \ldots, \mathbf{w}_{t-1}$. This assumption is without loss of generality, since any random querying strategy can be seen as a randomization over deterministic querying strategies. Thus, a lower bound which holds uniformly for any deterministic querying strategy would also hold over such a randomization.

To lower bound Eq. (1), we use the following key lemma, which relates it to the question of how informative the query values are (as measured by Kullback-Leibler, or KL, divergence) for determining the sign of $\mathbf{e}$'s coordinates. Intuitively, the more similar the query value distributions are, the smaller the KL divergence and the harder it is to distinguish the true sign of each coordinate $e_i$, leading to a larger lower bound. The proof appears in the appendix.

Using Lemma ?, we can get a lower bound for the above, provided an upper bound on the relevant KL divergences. To analyze these, consider any fixed round $t$, and any fixed values of the previous queries and observations. Since the querying strategy is assumed to be deterministic, it follows that $\mathbf{w}_t$ is uniquely determined. Given this $\mathbf{w}_t$, the observed function value follows one distribution conditioned on $e_1 = 1$, and another conditioned on $e_1 = -1$. Comparing the two, we notice that they both represent a Gaussian distribution (due to the noise term), with standard deviation $1$ and means separated by a quantity proportional to $\epsilon\,|w_{t,1}|$. To bound the divergence, we use the following standard result on the KL divergence between two Gaussians [19]: for two Gaussians with means $\mu_1, \mu_2$ and a common variance $\sigma^2$, the KL divergence equals $(\mu_1 - \mu_2)^2 / 2\sigma^2$.

Using this lemma, it follows that the KL divergence contributed by round $t$ is at most of order $\epsilon^2 w_{t,1}^2$.

Plugging this upper bound on the KL divergences into Lemma ?, we can further lower bound the expected optimization error from Eq. (1).

Finally, we choose $\epsilon$ on the order of $\sqrt{d/T}$, and obtain a lower bound of order $\min\{1, d^2/T\}$, as required.
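For intuition, here is a hedged sketch of the final arithmetic (our reconstruction, ignoring constants and assuming each query contributes KL divergence of order $\epsilon^2$):

```latex
% Our sketch, ignoring constants.  Summing the per-round KL bounds over T
% rounds and applying Cauchy-Schwarz to the terms of the key lemma:
\[
\mathbb{E}\big[F_{\mathbf{e}}(\bar{\mathbf{w}}) - F_{\mathbf{e}}^*\big]
\;\gtrsim\; \epsilon^2 \Big( d - \sum_{i=1}^{d} \sqrt{\mathrm{KL}_i} \Big)
\;\geq\; \epsilon^2 \Big( d - \sqrt{d \cdot T \epsilon^2} \Big).
\]
% Choosing eps^2 proportional to d/T makes the bracketed term Omega(d),
% so the error is of order eps^2 * d = d^2 / T (capped at a constant for
% very small T, whence the min{1, d^2/T} form).
```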

The theorem above applies to the optimization error for derivative-free SCO. We now turn to deal with the case of bandit SCO and regret, showing an $\Omega(\sqrt{d^2/T})$ lower bound. Since the derivative-free SCO bound was $\Theta(d^2/T)$, this result implies a real gap between what can be obtained in terms of average regret, as opposed to optimization error, without any specific distributional assumptions. This stands in contrast to settings such as multi-armed bandits, where the construction implying the known $\Omega(\sqrt{d/T})$ lower bound (e.g. [11]) applies equally well to derivative-free and bandit SCO (see [9]).

Note that our lower bound holds even when the domain is unrestricted (the algorithm can pick any point in $\mathbb{R}^d$). Moreover, the lower bound coincides (up to a constant) with the regret upper bound shown for strongly-convex and smooth functions in [2]. This shows that for strongly-convex and smooth functions, the minimax average regret is $\Theta(\sqrt{d^2/T})$. Also, the lower bound implies that one cannot hope to obtain average regret better than $\sqrt{d^2/T}$ for more general bandit problems, such as strongly-convex or even convex problems.

The proof relies on techniques similar to the lower bound of Thm. ?, with a key additional insight. Specifically, in Thm. ?, the lower bound obtained actually depends on the norms of the queried points (see Eq. (4)), and the optimum has a very small norm. In a regret minimization setting the queried points cannot remain far from the optimum, and thus must have small norms as well, leading to a stronger lower bound than that of Thm. ?. The formal proof appears in the appendix.

4 Strongly Convex and Smooth Functions

We now turn to the more general case of strongly convex and smooth functions. First, we note that for functions which are both strongly convex and smooth, [2] already provided an $\tilde{\mathcal{O}}(\sqrt{d^2/T})$ average regret bound (which holds even in a non-stochastic setting). The main result of this section is a matching lower bound, which holds even in the much easier case of derivative-free SCO. This lower bound implies that the attainable error for strongly-convex and smooth functions is of order $\sqrt{d^2/T}$, and at least that much for any harder setting.

Note that we made no attempt to optimize the constant.

The general proof technique is rather similar to that of Thm. ?, but the construction is a bit more intricate. Specifically, letting $\epsilon$ be a parameter to be determined later, we look at a family of functions $F_{\mathbf{e}}$, where $\mathbf{e}$ is uniformly distributed on $\{-1,+1\}^d$. To see the intuition behind this choice, let us consider the one-dimensional case ($d = 1$). Recall that in the quadratic setting, the function we considered (in one dimension) was of the form

\[
F_e(w) = (w - \epsilon e)^2,
\]

where $e$ was chosen uniformly at random from $\{-1,+1\}$, and $\epsilon$ is a "small" number. Thus, the optimum is at either $\epsilon$ or $-\epsilon$, and the difference between the two possible functions at these optima is of order $\epsilon^2$. However, by picking $w$ with $|w|$ of constant size, the difference is of order $\epsilon$, much larger than the difference close to the optimum. Therefore, by querying far from the optimum, and getting noisy values of $F_e$, it is easier to distinguish whether we are dealing with $e = 1$ or $e = -1$, leading to the fast optimization error rate of Thm. ?. In contrast, the function we consider here (in the one-dimensional case) is carefully designed so that the difference between the two possible functions remains of order $\epsilon^2$, not just at the optima, but for all $w$. This is achieved by an additional denominator in its definition, which makes the two functions closer and closer the larger $|w|$ is; see Figure 1 for a graphical illustration. As a result, no matter how the function is queried, distinguishing the choice of $e$ is difficult, leading to the strong lower bound of Thm. ?. A formal proof is presented in the appendix.

Figure 1: The two solid blue lines represent $F_e(w)$, for $e = 0.1$ and $e = -0.1$, whereas the two dashed black lines represent two quadratic functions with similar minimum points. Close to the minima, $F_e(w)$ and the quadratic functions behave rather similarly. However, as $|w|$ increases, the two quadratic functions become quite distinguishable, whereas the functions $F_e(w)$ become more and more indistinguishable for the two choices of $e$. Thus, distinguishing whether $e = 0.1$ or $e = -0.1$ based only on function values of $F_e(w)$ is much harder than in the quadratic case.

5 Discussion

In this paper, we considered the dual settings of bandit and derivative-free stochastic convex optimization. We provided a sharp characterization of the attainable performance for strongly-convex and smooth functions. The results also provide useful lower bounds for more general settings. We also considered the case of quadratic functions, showing that a "fast" $\mathcal{O}(d^2/T)$ rate is possible in a stochastic setting, even without knowledge of derivatives. Our results exhibit several qualitative differences compared to previously known results, which focus on linear functions, such as a quadratic dependence on the dimension even for extremely "nice" functions, and a provable gap between the attainable performance in bandit optimization and derivative-free optimization.

Our work leaves open several questions. For example, we have only dealt with bounds which hold in expectation, and our lower bounds focused on the dependence on $d$ and $T$, where other problem parameters, such as the Lipschitz constant and the strong convexity parameter, are fixed constants. While this follows the setting of previous works, it does not cover situations where these parameters scale with $d$ or $T$. Finally, while this paper settles the case of strongly-convex and smooth functions, we still do not know the attainable performance for general convex functions, as well as for the more specific case of strongly-convex (possibly non-smooth) functions. Our $\Omega(\sqrt{d^2/T})$ lower bound still holds, but the existing upper bounds are much larger: $\tilde{\mathcal{O}}(\min\{\sqrt[4]{d^2/T}, \sqrt{d^{32}/T}\})$ for convex functions, and $\tilde{\mathcal{O}}(\sqrt[3]{d^2/T})$ for strongly-convex functions (see Table 1). We do not know whether the lower bound or the existing upper bounds are tight. However, it is the current upper bounds which seem less "natural", and we suspect that they are the ones that can be considerably improved, using new algorithms which remain to be discovered.


A Improved Results for Quadratic Functions

In Section 3, we showed a tight $\Theta(d^2/T)$ bound on the achievable error for quadratic functions in the derivative-free SCO setting. This was shown under the assumption that the noise is zero-mean and has a bounded second moment. In this appendix, we show how, under additional natural assumptions on the noise, one can improve on this result with an efficient algorithm. The main message here is not so much the algorithmic result, but rather to show that the generic noise assumption is important for our lower bounds, and that better algorithms may still be possible in more specific settings.

To give a concrete example, consider the classic setting of ridge regression, where we have labeled training examples $(\mathbf{x}, y)$ sampled i.i.d. from some distribution, and our goal is to find some $\mathbf{w}$ minimizing the regularized expected squared loss

\[
F(\mathbf{w}) = \mathbb{E}_{(\mathbf{x}, y)}\big[(\langle \mathbf{w}, \mathbf{x} \rangle - y)^2\big] + \lambda \|\mathbf{w}\|^2. \tag{6}
\]

In a bandit / derivative-free SCO setting, we can think of each query at a point $\mathbf{w}$ as giving us the value of

\[
(\langle \mathbf{w}, \mathbf{x} \rangle - y)^2 + \lambda \|\mathbf{w}\|^2
\]

for some specific example $(\mathbf{x}, y)$, and note that its expected value (over the random draw of $(\mathbf{x}, y)$) equals $F(\mathbf{w})$. Thus, this falls within the setting considered in this paper. However, the noise process is not generic, but has a particular structure. We will show here that one can actually attain a substantially faster error rate for this problem.
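A minimal sketch of this oracle in Python (illustrative; the synthetic data distribution and all names are our choices): each query at $\mathbf{w}$ returns the regularized squared loss on a freshly sampled example, whose expectation over the sample is the ridge objective $F(\mathbf{w})$.

```python
import numpy as np

def make_ridge_oracle(sample_example, reg):
    """Noisy value oracle for ridge regression: a query at w returns
    (<w, x> - y)^2 + reg * ||w||^2 for a freshly sampled example (x, y),
    so its expectation over the sample equals F(w).  Note the 'noise' is
    a random quadratic in w (structured), not a generic additive term."""
    def query(w):
        x, y = sample_example()           # fresh i.i.d. example
        return (w @ x - y) ** 2 + reg * (w @ w)
    return query

# Hypothetical usage with a synthetic linear model:
rng = np.random.default_rng(2)
d = 10
w_true = rng.standard_normal(d)

def sample():
    x = rng.standard_normal(d) / np.sqrt(d)   # ||x|| = O(1), as in the text
    y = x @ w_true + 0.1 * rng.standard_normal()
    return x, y

oracle = make_ridge_oracle(sample, reg=0.1)
```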

To formally present our result, it will be useful to consider a more general setting, of which the ridge regression setting above is a special case. Suppose that $F$ decomposes into a deterministic term and a stochastic quadratic term, where the coefficients of the quadratic term are random variables, and that whenever we query a point, we obtain the value of the deterministic term plus a random realization of the stochastic quadratic term at that point. In general, the deterministic term can be a strongly-convex regularization term, such as the one in Eq. (6).

The algorithm we consider, Algorithm ?, is a slight variant of Algorithm ?, which takes this decomposition of $F$ into account when constructing its unbiased gradient estimate. Compared to Algorithm ?, this algorithm also queries at random points further away from the current iterate, up to a distance that grows with the dimension. We will assume here that we can always query at such points. We also let $B$ be as before, where we recall that $B$ is some known upper bound on $\|\mathbf{w}^*\|$.

We now show that with this algorithm, one can improve on our $\mathcal{O}(d^2/T)$ error upper bound (Thm. ?).

Note that if we only make the generic noise assumption, the relevant moment term in the bound can be large enough that we recover an $\mathcal{O}(d^2/T)$ bound, the same as in Thm. ?. However, it may be much smaller than that. In particular, in the ridge regression case we considered earlier, this term corresponds to moments of $\|\mathbf{x}\|$, where $\mathbf{x}$ is a randomly drawn instance. Under the common assumption that $\|\mathbf{x}\|$ is bounded by a constant (independent of the dimension), it follows that this term is independent of the dimension, leading to an error upper bound with a better dependence on $d$.

We remark that even in this specific setting, the bound does not carry over to the bandit SCO setting (i.e., in terms of regret), since the algorithm requires us to query far away from the current iterate. Also, we again emphasize that this result does not contradict our lower bound in the quadratic case (Thm. ?), since the setting there included a generic noise term, while here the stochastic "noise" has a very specific structure.

As to the proof of Thm. ?, it is very similar to that of Thm. ?, the key difference being a better moment upper bound on the gradient estimate, as formalized in the following lemma. Plugging this improved bound into the calculations yields the theorem.

By definition of the gradient estimate, we note that

Using a calculation similar to the one in the proof of Lemma ?, we have that the expected value of this expression, over the algorithm's randomness and the stochastic term, is

which is a subgradient of $F$. As to the moment bound, we have

Letting subscripts denote the relevant entries, and recalling the definitions in the algorithm, we have that

Also, using the fact that the second-moment matrix of the (scaled) random query direction is the identity matrix, we have

Finally, we have

Plugging these inequalities back into Eq. (7), we get that

from which the lemma follows.

B Additional Proofs

B.1 Proof of Lemma ?

By the way the random perturbation is picked, the two identities stated in the lemma hold for all rounds. Thus, letting $\mathbb{E}$ denote expectation with respect to the perturbation and the random function values, we have

Also, by the assumptions on $F$ and the assumptions on the noise, we have

as required.

B.2 Proof of Lemma ?

We have the following:

where the last inequality is by a standard elementary inequality.

Consider (without loss of generality) the term corresponding to the first coordinate, namely

This term equals

By Pinsker's inequality and the assumption that each query point is a deterministic function of the previous observations, this expression is at most

where $D_{\mathrm{KL}}(\cdot \,\|\, \cdot)$ is the Kullback-Leibler divergence between the two distributions. By the chain rule (see e.g. [12]), we can upper bound the above by

Plugging these bounds back into Eq. (8), the result follows.

B.3 Proof of Thm. ?

We may assume without loss of generality that $T$ is even, and it is enough to show that the expected average regret is at least the stated bound. This is because if there were a strategy with a smaller average regret after half the rounds, then for the full horizon, we could just run that strategy for the first half of the rounds, compute the average of all points played so far, and then repeatedly choose that average point in the remaining rounds. By Jensen's inequality, this would imply a correspondingly small average regret after all $T$ rounds, in contradiction.

Let $\bar{\mathbf{w}}$ be an arbitrary deterministic function of the points queried and the values observed. A proof identical to that of Thm. ?, up to Eq. (4), implies that for any strategy, there exists a quadratic function of the form

with a suitable choice of $\epsilon$, such that

In particular, using Jensen's inequality and discarding the remaining term, we get that

However, we also know that by strong convexity of $F_{\mathbf{e}}$, we have

Using the fact that

we get that

Substituting into Eq. (10) and slightly manipulating the resulting inequality, we get

For simplicity, denote the average regret term by $R$. Substituting the expression above into Eq. (9), we get

Rearranging and simplifying, we get the stated bound.