Tight Regret Bounds for Bayesian Optimization in One Dimension

Abstract

We consider the problem of Bayesian optimization (BO) in one dimension, under a Gaussian process prior and Gaussian sampling noise. We provide a theoretical analysis showing that, under fairly mild technical assumptions on the kernel, the best possible cumulative regret up to time T behaves as Ω(√T) and O(√(T log T)). This gives a tight characterization up to a √(log T) factor, and includes the first non-trivial lower bound for noisy BO. Our assumptions are satisfied, for example, by the squared exponential and Matérn-ν kernels, with the latter requiring ν > 2. Our results certify the near-optimality of existing bounds (Srinivas et al., 2010) for the SE kernel, while proving them to be strictly suboptimal for the Matérn kernel with ν > 2.


1 Introduction

Bayesian optimization (BO) Shahriari et al. (2016) is a powerful and versatile tool for black-box function optimization, with applications including parameter tuning, robotics, molecular design, sensor networks, and more. The idea is to model the unknown function as a Gaussian process with a given kernel function dictating the smoothness properties. This model is updated using (typically noisy) samples, which are selected to steer towards the function maximum.

One of the most attractive properties of BO is its efficiency in terms of the number of function samples used. Consequently, algorithms with rigorous guarantees on the trade-off between samples and optimization performance are particularly valuable. Perhaps the most prominent work in the literature giving such guarantees is that of Srinivas et al. (2010), who consider the cumulative regret:

R_T = Σ_{t=1}^T ( f(x*) − f(x_t) ),    (1)

where f is the function being optimized, x* is its maximizer, and x_t is the point chosen at time t. Under a Gaussian process (GP) prior and Gaussian noise, it is shown in Srinivas et al. (2010) that an algorithm called Gaussian Process Upper Confidence Bound (GP-UCB) achieves a cumulative regret of the form

R_T = O*( √( T γ_T ) ),    (2)

where γ_T = max_{x_1, …, x_T} I(f_T; y_T) (with function values f_T = (f(x_1), …, f(x_T)) and noisy samples y_T = (y_1, …, y_T)) is known as the maximum information gain. Here I(f_T; y_T) denotes the mutual information (Cover & Thomas, 2001) between the function values and the noisy samples, and O*(·) denotes asymptotic notation up to logarithmic factors.

The guarantee (2) ensures sub-linear cumulative regret for many kernels of interest. However, the literature is severely lacking in algorithm-independent lower bounds, and without these, it is impossible to know to what extent the upper bounds, including (2), can be improved. In this work, we address this gap in detail in the special case of a one-dimensional function. We show that the best possible cumulative regret behaves as √T up to a √(log T) factor under mild assumptions on the kernel, thus identifying both cases where (2) is near-optimal, and cases where it is strictly suboptimal.

1.1 Related Work

An extensive range of BO algorithms have been proposed in the literature, typically involving the maximization of an acquisition function Hennig & Schuler (2012); Hernández-Lobato et al. (2014); Russo & Van Roy (2014); Wang et al. (2016); see Shahriari et al. (2016) for a recent overview. As mentioned above, the most relevant algorithm to this work for the noisy setting is GP-UCB Srinivas et al. (2010), which constructs confidence bounds in which the function lies with high probability, and samples the point with the highest upper confidence bound. Several extensions to GP-UCB have also been proposed, including contextual Krause & Ong (2011); Bogunovic et al. (2016a), batch Contal et al. (2013); Desautels et al. (2014), and high-dimensional Kandasamy et al. (2015); Rolland et al. (2018) variants.

In the noiseless setting, it has been shown that it is possible to achieve bounded cumulative regret de Freitas et al. (2012); Kawaguchi et al. (2015) under some technical assumptions. In de Freitas et al. (2012), this is done by keeping track of a set of potential maximizers, and sampling increasingly finely in order to shrink that set and “zoom in” towards the optimal point. Similar ideas have also been used in the noisy setting for studying batch variants of GP-UCB Contal et al. (2013), simultaneous optimistic optimization (SOO) methods Wang et al. (2014), and lookahead algorithms that use confidence bounds Bogunovic et al. (2016b). Returning to the noiseless setting, upper and lower bounds were given in Grünewälder et al. (2010) for kernels satisfying certain smoothness assumptions, with the lower bounds showing that bounded cumulative regret is not always to be expected.

Alongside the Bayesian view of the Gaussian process model, several works have also considered a non-Bayesian counterpart assuming that the function has a bounded norm in the associated reproducing kernel Hilbert space (RKHS). Interestingly, GP-UCB still provides similar guarantees to (2) in this setting Srinivas et al. (2010). Moreover, lower bounds have been proved; see Bull (2011) for the noiseless setting, and Scarlett et al. (2017) for the noisy setting. In the latter, the lower bounds nearly match the GP-UCB upper bound for the squared exponential (SE) kernel, but gaps remain for the Matérn kernel. For reference, we note that these kernels are defined as follows:

k_SE(x, x′) = exp( −(x − x′)² / (2l²) ),    (3)
k_Mat(x, x′) = (2^{1−ν} / Γ(ν)) ( √(2ν) |x − x′| / l )^ν K_ν( √(2ν) |x − x′| / l ),    (4)

where l > 0 is a lengthscale parameter, ν > 0 is a smoothness parameter, K_ν is the modified Bessel function, and Γ is the gamma function.
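For concreteness, the following is a minimal sketch (our own illustration, not from the paper) of how (3) and (4) can be evaluated numerically; the parameter names `lengthscale` and `nu` are ours.

```python
import numpy as np
from scipy.special import kv, gamma  # modified Bessel function K_nu and Gamma

def k_se(x1, x2, lengthscale=0.1):
    r = np.abs(np.asarray(x1) - np.asarray(x2))
    return np.exp(-r**2 / (2 * lengthscale**2))

def k_matern(x1, x2, lengthscale=0.1, nu=2.5):
    r = np.abs(np.asarray(x1) - np.asarray(x2))
    scaled = np.sqrt(2 * nu) * r / lengthscale
    # K_nu diverges at 0, but the overall product tends to 1 as r -> 0,
    # so that case is handled separately.
    return np.where(
        scaled > 0,
        (2**(1 - nu) / gamma(nu)) * np.maximum(scaled, 1e-12)**nu
            * kv(nu, np.maximum(scaled, 1e-12)),
        1.0,
    )
```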

The multi-armed bandit (MAB) Bubeck & Cesa-Bianchi (2012) literature has developed alongside the BO literature, with the two often bearing similar concepts. The MAB literature is far too extensive to cover here, but it is worth mentioning that sharp lower bounds are known in numerous settings Bubeck & Cesa-Bianchi (2012), and the above-mentioned concept of “zooming in” to the optimal point has also been explored Kleinberg et al. (2008). To our knowledge, however, none of the existing MAB results are closely related to our own.

1.2 Our Results and Their Implications

The main results of this paper are informally summarized as follows.

Main Results (Informal). Under mild technical assumptions on the kernel, satisfied (for example) by the SE kernel and the Matérn-ν kernel with ν > 2, the best possible cumulative regret of noisy BO in one dimension behaves as Ω(√T) and O(√(T log T)).

Our results have several important implications:

  • To our knowledge, our lower bound is the first of any kind in the noisy Bayesian setting, and is tight up to a √(log T) factor under our technical assumptions.

  • Our lower bound also establishes the order-optimality of the upper bound of Srinivas et al. (2010) applied to the SE kernel, up to logarithmic factors.

  • On the other hand, our upper bound establishes that the upper bound of Srinivas et al. (2010) for the Matérn-ν kernel, namely O*( T^{(ν+2)/(2ν+2)} ) in one dimension, is strictly suboptimal for ν > 2. For example, if ν = 5/2, then this is O*( T^{9/14} ), as opposed to our upper bound of O(√(T log T)). (See also Shekhar & Javidi (2017) for recent improvements over Srinivas et al. (2010) under the Matérn kernel in higher dimensions and/or with smaller ν.)

  • Another important implication for the Matérn kernel with ν > 2 is that the Bayesian setting is provably easier than the non-Bayesian RKHS counterpart; the latter has cumulative regret Ω( T^{(ν+1)/(2ν+1)} ) Scarlett et al. (2017), which is strictly worse than the √T scaling (up to logarithmic factors) established here.

Our upper bound is stated formally in Section 3, and its technical assumptions are given in Section 2.1. We build on the ideas of de Freitas et al. (2012) for the noiseless setting, while addressing highly non-trivial challenges arising in the presence of noise.

Our lower bound is stated formally in Section 4, and its technical assumptions are given in Section 2.1. The analysis is based on a reduction to binary hypothesis testing and an application of Fano’s inequality Cover & Thomas (2001). This approach is inspired by previous work on lower bounds for stochastic convex optimization Raginsky & Rakhlin (2011), but the details are very different.

2 Problem Setup

2.1 Bayesian Optimization

We seek to sequentially optimize an unknown reward function f over the one-dimensional domain [0, 1]; note that any interval can be transformed to this choice via re-scaling. At time t, we query a single point x_t ∈ [0, 1] and observe a noisy sample y_t = f(x_t) + z_t, where z_t ~ N(0, σ²) for some noise variance σ² > 0, with independence across different times. We measure the performance using the cumulative regret R_T, defined in (1).

We henceforth assume f to be distributed according to a Gaussian process (GP) (Rasmussen, 2006) having mean zero and kernel function k(x, x′). The posterior distribution of f given the points x_t = (x_1, …, x_t) and observations y_t = (y_1, …, y_t) up to time t is again a GP, with the posterior mean and variance given by (Rasmussen, 2006)

μ_t(x) = k_t(x)^T ( K_t + σ² I_t )^{−1} y_t,    (5)
σ_t²(x) = k(x, x) − k_t(x)^T ( K_t + σ² I_t )^{−1} k_t(x),    (6)

where k_t(x) = [k(x_i, x)]_{i=1}^t, K_t = [k(x_i, x_j)]_{i,j=1}^t, and I_t is the t × t identity matrix.
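A minimal sketch of the updates (5)–(6), assuming only a kernel function `k` (e.g., the `k_se` sketch above) and a noise variance; this is our own illustration, not the paper's implementation.

```python
import numpy as np

def gp_posterior(k, X, y, x, noise_var):
    """Posterior mean and variance at a point x, given samples (X, y)."""
    t = len(X)
    K = np.array([[k(xi, xj) for xj in X] for xi in X])   # K_t
    k_vec = np.array([k(xi, x) for xi in X])              # k_t(x)
    A = K + noise_var * np.eye(t)                         # K_t + sigma^2 I_t
    alpha = np.linalg.solve(A, np.asarray(y, dtype=float))
    mean = k_vec @ alpha                                  # posterior mean (5)
    var = k(x, x) - k_vec @ np.linalg.solve(A, k_vec)     # posterior variance (6)
    return mean, var
```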

2.2 Technical Assumptions

Here we introduce several assumptions that will be adopted in our main results, some of which were also used in the noiseless setting de Freitas et al. (2012).

Assumption 1.

We have the following:

  1. The kernel is stationary, depending on its inputs only through the difference x − x′;

  2. The kernel satisfies k(x, x) = 1 for all x, and k(x, x′) ≥ 0 for all (x, x′);

Given the stationarity assumption, the normalizations k(x, x) = 1 and |k(x, x′)| ≤ 1 are without loss of generality, as one can always re-scale the function and adjust the noise variance accordingly.

Next, we give some high-probability assumptions on the random function itself.

Assumption 2.

There exists a constant δ_2 > 0 such that, with probability at least 1 − δ_2, we have the following:

  1. The function f has a unique maximizer x* such that

    f(x*) ≥ f(x̄) + Δ    (7)

    for any local maximum x̄ that differs from x*, for some constant Δ > 0.

  2. The function is twice differentiable;

  3. The function and its first two derivatives are bounded:

    |f(x)| ≤ f_max,  |f′(x)| ≤ L_1,  |f″(x)| ≤ L_2    (8)

    for all x ∈ [0, 1] and some constants (f_max, L_1, L_2). This implies that f is L_1-Lipschitz continuous, and f′ is L_2-Lipschitz continuous.

The assumption of a unique maximizer holds with probability one in most non-trivial cases de Freitas et al. (2012), and (7) simply formally defines the gap Δ to the second-highest peak. Moreover, given twice differentiability, the remaining conditions in (8) are very mild, only requiring that the function value and its derivatives are bounded, and formally defining the corresponding constants.

Next, we provide assumptions regarding the derivatives of f and the resulting Taylor expansions (typically around the maximizer x*). We adopt slightly different assumptions for the upper and lower bounds, starting with the former.

Assumption 3.

There exist constants δ_3 > 0 and ζ > 0 such that, conditioned on the events in Assumption 2, we have with probability at least 1 − δ_3 that one of the following is true:

  1. The maximizer is at an endpoint (i.e., x* = 0 or x* = 1), and f satisfies the following locally linear behavior: For all x ∈ [0, ζ] (if x* = 0) or x ∈ [1 − ζ, 1] (if x* = 1), it holds that

    c_1 |x − x*| ≤ f(x*) − f(x) ≤ c_2 |x − x*|    (9)

    for some constants c_2 ≥ c_1 > 0.

  2. The maximizer satisfies x* ∈ (0, 1), and f satisfies the following locally quadratic behavior: For all x ∈ [0, 1] with |x − x*| ≤ ζ, we have

    c_1 (x − x*)² ≤ f(x*) − f(x) ≤ c_2 (x − x*)²    (10)

    for some constants c_2 ≥ c_1 > 0.

This assumption is near-identical to the main assumption adopted in the noiseless setting de Freitas et al. (2012), and is also mild given the assumption of twice differentiability. Indeed, (9) and (10) amount to standard Taylor expansions, with the assumptions c_1 > 0 only requiring a non-vanishing gradient at the endpoint (first case) or a non-vanishing second derivative at the function maximizer (second case). These conditions typically hold with probability one de Freitas et al. (2012).

The following assumption will be used for the lower bound.

Assumption 4.

There exist constants δ_4 > 0 and ζ > 0 such that, conditioned on the events in Assumption 2, both of the following hold with probability at least 1 − δ_4:

  1. For any x_0 ∈ [0, 1] and x ∈ [0, 1] for which |x − x_0| ≤ ζ, we have

    c_3 (x − x_0)² ≤ f(x) − f(x_0) − f′(x_0)(x − x_0) ≤ c_4 (x − x_0)²    (11)

    for some (possibly negative) constants c_3, c_4.

  2. The maximizer satisfies x* ∈ [ζ, 1 − ζ], and f satisfies the following for all x with |x − x*| ≤ ζ:

    c_1 (x − x*)² ≤ f(x*) − f(x) ≤ c_2 (x − x*)²    (12)

    for some constants c_2 ≥ c_1 > 0.

Figure 1: Illustration of some of the main assumptions: The function is bounded within [−f_max, f_max] and its derivative within [−L_1, L_1], the gap to the second highest peak is at least Δ, and the function is locally quadratic for points within a distance ζ of the maximizer x*.

The first part is similar to (10), but performs a Taylor expansion around an arbitrary point x_0 rather than the specific point x*, and the second part is precisely (10). Note, however, that here we are assuming both of two conditions to hold, rather than one of two. Hence, we are implicitly assuming that the first item of Assumption 3 does not have a significant probability of occurring. For stationary kernels, the only situations where an endpoint has a high probability of being optimal are those where f varies very slowly (e.g., the SE kernel with a larger lengthscale than the domain width). Such functions are of limited practical interest.

Similarly to the noiseless setting de Freitas et al. (2012), all of the above assumptions hold for the SE kernel, as well as the Matérn-ν kernel with ν > 2, with the added caveat that ζ in Assumption 4 is a function of the lengthscale and cannot be chosen arbitrarily. Specifically, a smaller lengthscale implies a smaller value of ζ. In contrast, δ_2 and δ_3 in Assumptions 2 and 3 can be made arbitrarily small by suitably changing the constants Δ, f_max, L_1, L_2, c_1, c_2, and so on.

An illustration of some of the main assumptions and their associated constants is given in Figure 1.

3 Upper Bound

Our upper bound is formally stated as follows.

Theorem 1.

(Upper Bound) Consider the problem of BO in one dimension described in Section 2.1, with time horizon T and noise variance σ² satisfying σ² ≥ T^{−(1−ξ)} and σ² ≤ T^{1−ξ} for some ξ ∈ (0, 1). Under Assumptions 1, 2, and 3, there exists an algorithm satisfying the following: With probability at least 1 − δ_2 − δ_3 (with respect to the Gaussian process f), the average cumulative regret (averaged over the noisy samples) satisfies

E[R_T | f] ≤ C √( σ² T log T ).    (13)

Here δ_2 and δ_3 are defined in Assumptions 2 and 3, and C depends only on the constants therein and ξ.

The assumption that T^{−(1−ξ)} ≤ σ² ≤ T^{1−ξ} for some ξ ∈ (0, 1) is very mild, since σ² typically remains constant with respect to T. The proof of Theorem 1 extends immediately to a high-probability guarantee with respect to both f and the noisy samples (i.e., holding with probability at least 1 − δ_2 − δ_3 − δ for the parameter δ in Lemma 1 below). We have stated the above form for consistency with the lower bound, which will be given in Section 4.

3.1 High-Level Description of the Algorithm

The algorithm considered in the proof of Theorem 1 is described informally in Algorithm 1; the details will be established throughout the proof of Theorem 1, and a complete description is given in Appendix B.

0:  Domain [0, 1], GP prior (k, σ²), discrete sub-domain D ⊆ [0, 1], time horizon T.
1:  Initialize time index t = 1, epoch number i = 1, potential maximizers M_1 = D, and target confidence ε_0.
2:  while less than T samples have been taken do
3:     Set ε_i = ε_{i−1}/2.
4:     Sample each point within a subset M_i^{sub} ⊆ M_i repeatedly k_i times, where M_i^{sub} and k_i are chosen such that after this sampling, all points x ∈ M_i satisfy upper and lower confidence bounds of the form
l_i(x) ≤ f(x) ≤ u_i(x),
with the gap between the two bounded by ε_i.
5:     Update the set of potential maximizers: M_{i+1} = { x ∈ M_i : u_i(x) ≥ max_{x′ ∈ M_i} l_i(x′) }.
6:     Increment i.
7:  end while
Algorithm 1 Informal description of our algorithm, based on reducing uncertainty in epochs via repeated sampling.

As in the noiseless setting de Freitas et al. (2012), the idea is to operate in epochs and sample a set of increasingly closely-packed points to reduce the posterior variance, but only within a set of potential maximizers that are updated according to the confidence bounds. As a simple means of bringing the effective noise level down, we perform resampling, i.e., sampling the same point k_i times consecutively. In each epoch, we sample enough to be able to produce upper and lower confidence bounds u_i(·) and l_i(·) that differ by at most a target value ε_i within M_i, and then the target is halved for the next epoch.
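The following is a minimal sketch of this epoch structure (our own simplification, not the exact procedure analyzed in the proof): here the repeatedly-sampled subset is all of M_i, `beta` is a fixed confidence parameter, and `f_true` stands for the unknown function. It reuses the `gp_posterior` sketch from Section 2.1.

```python
import numpy as np

def epoch_elimination(k, f_true, D, T, noise_var=0.01, beta=9.0, eps0=2.0):
    X, y = [], []
    M = list(D)                                   # potential maximizers M_1 = D
    eps = eps0                                    # initial target confidence
    while len(X) < T:
        eps /= 2.0                                # halve the target each epoch
        k_rep = int(np.ceil(4 * beta * noise_var / eps**2))  # repeated samples
        for x in M:
            for _ in range(max(1, k_rep)):
                X.append(x)
                y.append(f_true(x) + np.sqrt(noise_var) * np.random.randn())
                if len(X) >= T:
                    break
            if len(X) >= T:
                break
        # Confidence bounds from the posterior (5)-(6).
        stats = [gp_posterior(k, X, y, x, noise_var) for x in M]
        lcb = [m - np.sqrt(beta * max(v, 0.0)) for m, v in stats]
        ucb = [m + np.sqrt(beta * max(v, 0.0)) for m, v in stats]
        M = [x for x, u in zip(M, ucb) if u >= max(lcb)]   # keep potential maximizers
    return M
```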

We do not expect our algorithm to perform well in practice by any means, but it still suffices for our purposes in establishing the regret bound (13). Indeed, we have made no attempt to optimize the corresponding constant factors, and doing so would require more sophisticated techniques. Moreover, the quantities D, ε_0, k_i, and M_i^{sub} in Algorithm 1 are chosen as functions of both the kernel and the constants appearing in our assumptions, which limits the algorithm's practical utility even further. Note, however, that these constants are merely a function of the kernel, and that suitable bounds suffice in place of exact values (e.g., a lower bound on c_1, an upper bound on L_1, etc.).

While our algorithm assumes a known time horizon T (which is used when selecting D; see Appendix B), this assumption can easily be dropped via a standard doubling trick. The details are given in Appendix A.

3.2 Auxiliary Lemmas

Here we present two very standard auxiliary lemmas. We begin with a simpler version of the conditions of Srinivas et al. (2010) guaranteeing that the posterior mean and variance provide valid confidence bounds with high probability. The reason for being slightly simpler is that we are considering a fixed time horizon.

Lemma 1.

Fix δ ∈ (0, 1). For any finite set of points D ⊆ [0, 1] and time horizon T, under the choice β = 2 log( |D| T / δ ), it holds that

|f(x) − μ_{t−1}(x)| ≤ √β σ_{t−1}(x),  for all x ∈ D and t ∈ {1, …, T},    (14)

with probability at least 1 − δ.

Proof.

It was shown in Srinivas et al. (2010) that for fixed x and t, the event |f(x) − μ_{t−1}(x)| ≤ √β σ_{t−1}(x) holds with probability at least 1 − e^{−β/2}. The lemma follows by substituting the choice of β and taking the union bound over the |D| · T values of x and t. ∎
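In our notation, the union-bound step amounts to the following one-line calculation:

```latex
% Union bound over the |D| T events in Lemma 1, with \beta = 2\log(|D| T/\delta),
% so that each individual event fails with probability e^{-\beta/2} = \delta/(|D| T):
\Pr\bigg[ \bigcup_{x \in D} \bigcup_{t=1}^{T}
    \Big\{ |f(x) - \mu_{t-1}(x)| > \sqrt{\beta}\,\sigma_{t-1}(x) \Big\} \bigg]
  \le |D| \, T \, e^{-\beta/2} = \delta.
```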

The following lemma is also standard, and has been used (implicitly or explicitly) in the study of multiple algorithms that eliminate suboptimal points based on confidence bounds de Freitas et al. (2012); Contal et al. (2013); Bogunovic et al. (2016b). For completeness, we give a short proof.

Lemma 2.

Suppose that at time t, for all x within a set of points M ⊆ [0, 1], it holds that

l_t(x) ≤ f(x) ≤ u_t(x)    (15)

for some bounds l_t(·) and u_t(·) such that

u_t(x) − l_t(x) ≤ ε,  for all x ∈ M.    (16)

Then any point x ∈ M satisfying f(x) < max_{x′ ∈ M} f(x′) − 2ε must also satisfy

u_t(x) < max_{x′ ∈ M} l_t(x′).    (17)

That is, any 2ε-suboptimal point can be ruled out according to the confidence bounds (15).

Proof.

Letting x̂ denote a point achieving max_{x′ ∈ M} f(x′), we have

u_t(x) ≤ l_t(x) + ε    (18)
≤ f(x) + ε    (19)
< f(x̂) − ε    (20)
≤ u_t(x̂) − ε    (21)
≤ l_t(x̂) ≤ max_{x′ ∈ M} l_t(x′),    (22)

where (18) and (22) follow from (16), (19) and (21) follow from the confidence bounds in (15), and (20) follows from the assumption f(x) < f(x̂) − 2ε. ∎
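As a toy numerical check of the elimination rule (hypothetical values, our own example):

```python
import numpy as np

f_vals = np.array([0.90, 0.85, 0.20])   # true values on M (unknown in practice)
eps = 0.1
l = f_vals - eps / 2                     # valid bounds with gap u - l = eps
u = f_vals + eps / 2
ruled_out = u < l.max()                  # condition (17)
print(ruled_out)                         # [False False  True]
# Only the third point, which is more than 2*eps below the maximum, is
# guaranteed to be eliminated; the 0.05-suboptimal second point is retained.
```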

3.3 Outline of Proof of Theorem 1

Here we provide a high-level outline of the Proof of Theorem 1; the details are given in Appendix B.

Algorithm 1 only samples on a discrete sub-domain D ⊆ [0, 1]. This set is chosen to be a set of regularly-spaced points that are fine enough to ensure that the cumulative regret with respect to D is within a constant value of the cumulative regret with respect to the whole domain [0, 1]. Working with the finite set D helps to simplify the subsequent analysis.

We split the epochs into two classes, which we call early epochs and late epochs. The late epochs are those in which we have shrunk the set of potential maximizers down enough to be entirely within the locally quadratic region, cf. Figure 1; here we only discuss the second case of Assumption 3, which is the more interesting of the two. Since the width of the locally quadratic region is constant, we can show that this occurs after a finite number of epochs, each lasting for at most O(√(T log T)) time. Hence, even if we naively upper bound the instant regret by 2 f_max according to (8), the overall regret incurred within the early epochs is insignificant.

In the later epochs, we exploit the locally quadratic behavior to show that the set of potential maximizers shrinks rapidly, i.e., by a constant factor after each epoch. As a result, we can let the repeatedly-sampled set M_i^{sub} in Algorithm 1 lie within a given interval that similarly shrinks, thereby controlling the number of samples we need to take in the epoch.

By Lemma 2, after we attain uniform ε_i-confidence, the instant regret incurred at each time thereafter is at most 2ε_i. Using the fact that ε_i = ε_{i−1}/2 and summing over the epochs, we find that the overall regret behaves as in (13).

A notable difficulty that we omitted above is how we attain the confidence bounds u_i(·) and l_i(·) in order to update the potential maximizers M_{i+1}. While we directly apply Lemma 1 for the points that were repeatedly sampled, we found it difficult to do this for the non-sampled points. For those, we instead use Lipschitz properties of the function. In the early epochs, we use the global Lipschitz constant L_1 from Assumption 2, whereas in the later epochs, we find a considerably smaller Lipschitz constant due to the locally quadratic behavior.

4 Lower Bound

Our lower bound is formally stated as follows.

Theorem 2.

(Lower Bound) Consider the one-dimensional BO problem from Section 2.1, with time horizon T and noise variance σ² satisfying σ² ≥ T^{−(1−ξ)} and σ² ≤ T^{1−ξ} for some ξ ∈ (0, 1). Under Assumptions 1, 2, and 4, any algorithm must yield the following: With probability at least 1 − δ_2 − δ_4 (with respect to the Gaussian process f), the average cumulative regret (averaged over the noisy samples) satisfies

E[R_T | f] ≥ C √( σ² T ).    (23)

Here δ_2 and δ_4 are defined in Assumptions 2 and 4, and C depends only on the constants therein and ξ.

The assumption that T^{−(1−ξ)} ≤ σ² ≤ T^{1−ξ} for some ξ ∈ (0, 1) is very mild, since σ² typically remains constant with respect to T. The upper limit on σ² is required to avoid (23) contradicting the trivial O(T) upper bound that follows from the boundedness of f. We also note that Theorem 2 immediately implies an Ω(√(σ² T)) lower bound on the expected regret with respect to both f and the noisy samples, as long as δ_2 + δ_4 < 1. As discussed following Assumption 4, the latter condition is mild.

In the remainder of the section, we introduce some of the main tools and ideas, and then outline the proof. We note that an Ω(1) lower bound is trivial, as the average regret of the first sample alone is lower bounded by a constant. As a result, we only need to show that E[R_T | f] = Ω(√(σ² T)) in the regime where √(σ² T) exceeds any desired constant.

4.1 Reduction to Binary Hypothesis Testing

Recall that f is a one-dimensional GP on [0, 1] with a stationary kernel k. We fix ε > 0, and think of the GP as being generated by the following procedure:

  1. Generate a GP g with the same kernel on the larger domain [0, 1 + ε];

  2. Randomly shift g to the left along the x-axis by 0 or ε with equal probability, to obtain g̃;

  3. Let f(x) = g̃(x) for x ∈ [0, 1].

Since the kernel is stationary, the shifting does not affect the distribution, so the induced distribution of f is indeed the desired GP on [0, 1].

Figure 2: Examples of functions f_1 and f_2 considered in the lower bound. The two are identical up to a small horizontal shift.
Figure 3: Illustration of the reduction from optimization to binary hypothesis testing. The gray boxes are considered to be fixed, whereas the white boxes are introduced for the purpose of proving the lower bound.

We consider a genie argument in which g is revealed to the algorithm. Clearly this additional information can only help the algorithm, so any lower bound still remains valid for the original setting. Stated differently, the algorithm knows that f is either f_1 or f_2, where

f_1(x) = g(x),    (24)
f_2(x) = g(x + ε).    (25)

See Figure 2 for an illustrative example.
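A short sketch of this construction (our own illustration of (24)–(25), using an SE kernel on a fine grid):

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.05
step = 0.0025                                         # grid resolution
grid = np.arange(0.0, 1.0 + eps + step, step)         # domain [0, 1 + eps]
K = np.exp(-(grid[:, None] - grid[None, :])**2 / (2 * 0.1**2))  # SE kernel
g = rng.multivariate_normal(np.zeros(len(grid)), K + 1e-9 * np.eye(len(grid)))

n = int(round(1.0 / step)) + 1                        # indices covering [0, 1]
shift = int(round(eps / step))                        # eps in grid steps
f1 = g[:n]                                            # f1(x) = g(x)
f2 = g[shift:shift + n]                               # f2(x) = g(x + eps)
```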

This argument allows us to reduce the BO problem to a binary hypothesis test with adaptive sampling, as depicted in Figure 3. The hypothesis, indexed by j ∈ {1, 2}, is that the underlying function is f_j. We show that under a suitable choice of ε, achieving small cumulative regret means that we can construct a decision rule Ĵ such that Ĵ = J with high probability, i.e., the hypothesis test is successful. The contrapositive statement is then that if the hypothesis test cannot be successful, we cannot achieve small cumulative regret, from which it only remains to prove the former. This idea was used previously for stochastic convex optimization in Raginsky & Rakhlin (2011).

In the remainder of the analysis, we implicitly condition on an arbitrary realization of g, meaning that all expectations and probabilities are only with respect to the random index J and/or the noise. We also assume that g satisfies the conditions in Assumptions 1, 2, and 4, which holds with probability at least 1 − δ_2 − δ_4. For sufficiently small ε, the same assumptions are directly inherited by f_1 and f_2. We henceforth assume that ε is indeed sufficiently small; we will verify that this is the case when we set its value.

We introduce some further notation. Letting x_g*, x_1*, and x_2* denote the maximizers of g, f_1, and f_2 (which are unique by Assumption 2), we see that Assumption 4 ensures these are in the interior of the domain, and hence the optimal values coincide: f_1(x_1*) = f_2(x_2*) = g(x_g*). To simplify some of the notation, instead of working with these functions directly, we consider the equivalent problem of minimizing the corresponding regret functions:

r_1(x) = f_1(x_1*) − f_1(x),    (26)
r_2(x) = f_2(x_2*) − f_2(x).    (27)

Indeed, since we assume the algorithm knows g and hence also the optimal value g(x_g*), it can always choose to transform the samples as ỹ_t = g(x_g*) − y_t. In this form, we have the convenient normalization min_{x ∈ [0,1]} r_1(x) = min_{x ∈ [0,1]} r_2(x) = 0.

4.2 Auxiliary Lemmas

We first state the following useful properties of r_1 and r_2.

Lemma 3.

The functions r_1 and r_2 constructed above satisfy the following for sufficiently small ε under the conditions in Assumptions 2 and 4:

  1. We have for all x ∈ [0, 1] that

    max{ r_1(x), r_2(x) } ≥ (c_1/4) ε²,    (28)

    where c_1 is defined in Assumption 4.

  2. There exists a constant c_5 > 0 such that, for all x with |x − x_1*| ≤ ζ,

    |r_1(x) − r_2(x)| ≤ c_5 ε ( |x − x_1*| + ε ).    (29)
  3. There exists a constant c_6 > 0 such that, for all x ∈ [0, 1] and j ∈ {1, 2},

    r_j(x) ≥ c_6 min{ (x − x_j*)², ζ² }.    (30)
Proof.

See Appendix C. ∎

The first part states that any point can be better than ((c_1/4)ε²)-optimal for at most one of the two functions, the second part shows that the two functions are close for points near the maximizer, and the third part shows that the instant regret is lower bounded by a quadratic function.

The first part of Lemma 3 allows us to lower bound the cumulative regret using Fano's inequality for binary hypothesis testing with adaptive sampling Raginsky & Rakhlin (2011). This inequality lower bounds the error probability of such a hypothesis test in terms of a mutual information quantity (Cover & Thomas, 2001). The resulting lower bound on the regret is stated in the following; it is worth noting that the consideration of cumulative regret here provides a distinction from the analogous bound on the instant regret in Raginsky & Rakhlin (2011).

Lemma 4.

Under the preceding setup, we have

E[R_T] ≥ (c_1 ε² T / 8) · H_2^{−1}( log 2 − I(J; x_T, ỹ_T) ),    (31)

where J is equiprobable on {1, 2}, the expectation averages over J and the noise, and x_T = (x_1, …, x_T) and ỹ_T = (ỹ_1, …, ỹ_T) are the selected points and samples when the minimization algorithm is applied to r_J. Here H_2^{−1} is the functional inverse of the binary entropy function H_2(α) = α log(1/α) + (1 − α) log(1/(1 − α)).

Since this result is particularly fundamental to our analysis, we provide a proof at the end of this section.

4.3 Outline of Proof of Theorem 2

Here we provide a high-level outline of the proof of Theorem 2; the details are given in Appendix D.

Once the lower bound in Lemma 4 is established, the main technical challenge is upper bounding the mutual information. A useful property called tensorization (e.g., see Raginsky & Rakhlin (2011)) allows us to simplify the mutual information with the vectors x_T and ỹ_T to a sum of mutual informations containing only a single pair (x_t, ỹ_t): I(J; x_T, ỹ_T) ≤ Σ_{t=1}^T I(J; ỹ_t | x_t).

Each such mutual information term can further be upper bounded by the KL divergence (Cover & Thomas, 2001) between the conditional output distributions corresponding to j = 1 and j = 2, which in turn equals (r_1(x) − r_2(x))² / (2σ²) when x_t = x, since both output distributions are Gaussian with variance σ². By substituting the property (29) given in Lemma 3, we find that I(J; x_T, ỹ_T) is upper bounded by a constant times (ε²/σ²) Σ_{t=1}^T E[ min{(x_t − x_1*)², ζ²} + ε² ]. If we can further upper bound the mutual information by a constant strictly smaller than log 2, then (31) establishes an Ω(ε² T) lower bound.
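For reference, the Gaussian KL computation invoked here is the standard one (written in our notation):

```latex
% KL divergence between the two conditional output distributions of
% \tilde{y}_t = r_j(x) + z with z ~ N(0, \sigma^2), under hypotheses j = 1, 2:
D\big( N(r_1(x), \sigma^2) \,\big\|\, N(r_2(x), \sigma^2) \big)
  = \frac{\big( r_1(x) - r_2(x) \big)^2}{2\sigma^2}.
```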

We proceed by considering the cases E_1[R_T^{(1)}] ≥ c_6 √(σ² T) and E_1[R_T^{(1)}] < c_6 √(σ² T) separately, with c_6 given in (30). The former case will immediately give the lower bound in Theorem 2 when we set ε to scale as (σ²/T)^{1/4}, whereas in the latter case, we can use (30) to show that Σ_{t=1}^T E[ min{(x_t − x_1*)², ζ²} ] is upper bounded by a constant times √(σ² T), which means that the desired mutual information upper bound (see the previous paragraph) is attained under a choice of ε scaling as (σ²/T)^{1/4}. Under this choice, the lower bound (31) evaluates to Ω(ε² T) = Ω(√(σ² T)), as required.

4.4 Proof of Lemma 4

As mentioned above, the proof of Lemma 4 follows along the lines of Raginsky & Rakhlin (2011), which in turn builds on previous works using Fano’s inequality to establish minimax lower bounds in statistical estimation problems; see for example Yu (1997).

In the following, we use R_T^{(1)} and R_T^{(2)} to denote the cumulative regret associated with r_1 and r_2, respectively, and we generically write R_T^{(j)} to denote one of the two with j ∈ {1, 2}.

We first use Markov’s inequality to write

(32)

for any . We proceed by analyzing the probability on the right-hand side.

Recall that J is equiprobable on {1, 2}, and that (x_T, ỹ_T) are generated by running the optimization algorithm on r_J. Given the sequence of inputs x_1, …, x_T, let Ĵ be the index with the lower cumulative regret R_T^{(j)} = Σ_{t=1}^T r_j(x_t). By Lemma 3, we have r_1(x_t) + r_2(x_t) ≥ (c_1/4) ε² for every t, and hence, if Ĵ ≠ j, then we must have R_T^{(j)} ≥ (1/2)( R_T^{(1)} + R_T^{(2)} ) ≥ (c_1/8) ε² T. Therefore,

P_j[ R_T^{(j)} ≥ (c_1/8) ε² T ] ≥ P_j[ Ĵ ≠ j ],    (33)

where, here and subsequently, P_j and E_j denote probabilities and expectations when the underlying instant regret function is r_j (i.e., the underlying function that the algorithm seeks to maximize is f_j).

Continuing, we can lower bound the probability appearing in (32) as follows:

(1/2) ( P_1[ R_T^{(1)} ≥ (c_1/8) ε² T ] + P_2[ R_T^{(2)} ≥ (c_1/8) ε² T ] )    (34)
≥ (1/2) ( P_1[ Ĵ ≠ 1 ] + P_2[ Ĵ ≠ 2 ] )    (35)
≥ H_2^{−1}( log 2 − I(J; x_T, ỹ_T) ),    (36)

where (35) follows from (33), and (36) follows from Fano's inequality for binary hypothesis testing with adaptive sampling (see Eqs. (22) and (24) of Raginsky & Rakhlin (2011)). The proof is completed by combining (32), with α = (c_1/8) ε² T, and (36), and averaging over the two equiprobable values of J.

5 Conclusion and Discussion

We have established tight scaling laws on the regret for Bayesian optimization in one dimension, showing that the optimal scaling is Ω(√T) and O(√(T log T)) under mild technical assumptions on the kernel. Our results highlight some limitations of the widespread upper bounds based on the information gain, as well as providing cases where the noisy Bayesian setting is provably less difficult than its non-Bayesian RKHS counterpart.

An immediate direction for further work is to sharpen the constant factors in the upper and lower bounds, and to establish whether the upper bound is attained by any algorithm that can also provide state-of-the-art performance in practice. We re-iterate that our algorithm is certainly not suitable for this purpose, as its cumulative regret contains large constant factors, and the algorithm makes use of a variety of specific constants present in the assumptions (though they are merely a function of the kernel).

We expect our techniques to extend to any constant dimension d; the main ideas from the noiseless upper bound still apply de Freitas et al. (2012), and in the lower bound we can choose an arbitrary single dimension and introduce a random shift in that direction as per Section 4.1. While these extensions may still yield Θ(√T) regret up to logarithmic factors, the dependence on d would be exponential or worse in the upper bound, but constant in the lower bound, with the latter dependence certainly being suboptimal. Multi-dimensional lower bounding techniques based on Fano's inequality Raginsky & Rakhlin (2011) may improve the latter, but overall, attaining a sharp joint dependence on T and d appears to require different techniques.

Acknowledgments. I would like to thank Ilija Bogunovic for his helpful comments and suggestions. This work was supported by an NUS startup grant.

Supplementary Material

Tight Regret Bounds for Bayesian Optimization in One Dimension
(Jonathan Scarlett, ICML 2018)

Appendix A Doubling Trick for an Unknown Time Horizon

Suppose that we have an algorithm that depends on the time horizon T and achieves R_T ≤ C √(T log T) for some constant C > 0. We show that we can also achieve O(√(T log T)) regret when T is unknown.

To see this, fix an arbitrary integer T_0 ≥ 1, and repeatedly run the algorithm with fixed time horizons T_0, 2T_0, 4T_0, etc., until T points have been sampled. The number of stages is no more than m + 1, where m = ⌈log_2(T/T_0)⌉. Moreover, we have

R_T ≤ Σ_{k=0}^{m} C √( 2^k T_0 log(2^k T_0) ) ≤ C √( T_0 log(2T) ) Σ_{k=0}^{m} (√2)^k ≤ (2C / (√2 − 1)) √( T log(2T) ),    (37)

where the first inequality uses 2^k T_0 ≤ 2T inside the logarithm, and the last inequality uses the geometric sum Σ_{k=0}^{m} (√2)^k ≤ (√2)^{m+1} / (√2 − 1) along with 2^m T_0 ≤ 2T. This establishes the desired claim.
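A minimal sketch of this wrapper (our own illustration; `base_alg(h)` is a hypothetical routine that runs the fixed-horizon algorithm for h samples):

```python
def doubling_wrapper(base_alg, T0=1, budget=10**6):
    """Run base_alg with doubling horizons; `budget` plays the role of the
    unknown total horizon T at which we are externally stopped."""
    taken, h = 0, T0
    while taken < budget:
        stage = min(h, budget - taken)   # truncate the final stage at T
        base_alg(stage)                  # stage regret: at most C*sqrt(h log h)
        taken += stage
        h *= 2                           # double the horizon for the next stage
```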

0:  Domain [0, 1], GP prior (k, σ²), time horizon T, constants (L_1, ζ, c_1, c_2).
1:  Set discrete sub-domain D as in (38), confidence parameter β as in (41), initial target confidence ε_0, and initial potential maximizers M_1 = D.
2:  Initialize time index t = 1 and epoch number i = 1.
3:  while less than T samples have been taken do
4:     Set ε_i = ε_{i−1}/2.
5:     Define the interval I_i as the smallest interval containing M_i,
and its width w_i = max(I_i) − min(I_i).
6:     Set the Lipschitz constant L_i (equal to the global constant L_1 in the early epochs, and to a smaller local constant implied by the locally quadratic behavior in the later epochs).
7:     Construct a subset M_i^{sub} ⊆ D as follows:
  • Initialize M_i^{sub} = ∅.

  • Construct a set G_i (not necessarily a subset of D or M_i) containing regularly-spaced points within the interval I_i, with spacing proportional to ε_i / L_i.

  • For each point in G_i, add its two nearest points in D to M_i^{sub}.

8:     Sample each point in M_i^{sub} repeatedly k_i times, where k_i is chosen (as a function of β, σ², and ε_i) to be just large enough that the confidence bounds in Step 10 satisfy u_i(x) − l_i(x) ≤ ε_i for all x ∈ M_i.
For each sample taken, increment t, and terminate if t > T.
9:     Update the posterior distribution according to (5)–(6), with x_t and y_t respectively containing all the selected points and noisy samples so far.
10:     For each x ∈ M_i, set
u_i(x) = min_{x̄ ∈ M_i^{sub}} ( μ(x̄) + √β σ(x̄) + L_i |x − x̄| ),  l_i(x) = max_{x̄ ∈ M_i^{sub}} ( μ(x̄) − √β σ(x̄) − L_i |x − x̄| ),
where μ(·) and σ(·) denote the posterior mean and standard deviation given all samples so far.
11:     Update the set of potential maximizers: M_{i+1} = { x ∈ M_i : u_i(x) ≥ max_{x′ ∈ M_i} l_i(x′) }.
12:     Increment i.
13:  end while
Algorithm 2 Full description of our algorithm, based on reducing uncertainty in epochs via repeated sampling.

Appendix B Proof of Theorem 1 (Upper Bound)

We continue from the auxiliary results given in Section 3, proceeding in several steps. Algorithm 2 gives a full description of the algorithm; the reader is encouraged to refer to this throughout the proof, rather than trying to understand all the steps therein immediately. Note that the constants L_1, ζ, c_1, and c_2 used in the algorithm come from Assumptions 2 and 3.

Reduction to a finite domain. Our algorithm only samples within a finite set D of pre-defined points. We choose these points to be regularly spaced, and close enough to ensure that the highest function value within D is within 1/T of the maximum f(x*). Under condition (8) in Assumption 2 (which implies that f is L_1-Lipschitz continuous), it suffices to choose

D = ( (1/(L_1 T)) Z ∩ [0, 1] ) ∪ {1},    (38)

where Z denotes the integers. Here we add {1} to the grid because it will be notationally convenient to ensure that the endpoints are both included in the set. Note that |D| satisfies |D| ≤ L_1 T + 2, which we crudely upper bound by 2 L_1 T.
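A sketch of this discretization (our rendering, using a slightly simplified uniform grid with spacing at most 1/(L_1 T)):

```python
import numpy as np

def build_domain(L1, T):
    N = int(np.ceil(L1 * T))        # spacing 1/N <= 1/(L1 * T)
    return np.arange(N + 1) / N     # regularly spaced, both endpoints included
```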

Since max_{x ∈ D} f(x) ≥ f(x*) − 1/T, the cumulative regret R_T^{(D)} with respect to the best point in D is such that

R_T ≤ R_T^{(D)} + 1.    (39)

Hence, it suffices to bound R_T^{(D)} instead of R_T. For convenience, we henceforth let x_D* denote an arbitrary input that achieves max_{x ∈ D} f(x), and we define the instant regret as

r_t = f(x_D*) − f(x_t).    (40)

Conditioning on high-probability events. By assumption, the events in Assumptions 2 and 3 simultaneously hold with probability at least 1 − δ_2 − δ_3. Moreover, by setting δ = 1/T in Lemma 1 and letting D be as in (38) (so that |D| ≤ 2 L_1 T), we deduce that (14) holds with probability at least 1 − 1/T when

β = 2 log( 2 L_1 T³ ).    (41)

Denoting the intersection of all events in Assumptions 2 and 3 by A, and the event in Lemma 1 by B, we can write the average regret given A as follows:

E[ R_T^{(D)} | A ] = E[ R_T^{(D)} 1{B} | A ] + E[ R_T^{(D)} 1{B^c} | A ]    (42)
≤ E[ R_T^{(D)} 1{B} | A ] + ( sup R_T^{(D)} ) · (1/T) / (1 − δ_2 − δ_3)    (43)
≤ E[ R_T^{(D)} 1{B} | A ] + 2 f_max / (1 − δ_2 − δ_3),    (44)

where (43) follows since P[B^c] ≤ 1/T and P[A] ≥ 1 − δ_2 − δ_3, and (44) follows since condition (8) in Assumption 2 ensures that R_T^{(D)} ≤ 2 f_max T. By (44), in order to prove Theorem 1, it suffices to bound R_T^{(D)} whenever the conditions of Assumptions 2–3 and Lemma 1 hold true. We henceforth condition on this being the case.

Sampling mechanism. Recall that ε_i represents the target confidence to attain by the end of the i-th epoch, and each such value is half of the previous value. For this interpretation to be valid, ε_0 should be sufficiently large so that the entire function is a priori known up to confidence ε_0; by (8) in Assumption 2, the choice