Safe Linear Thompson Sampling with Side Information

Abstract

The design and performance analysis of bandit algorithms in the presence of stage-wise safety or reliability constraints has recently garnered significant interest. In this work, we consider the linear stochastic bandit problem under additional linear safety constraints that need to be satisfied at each round. We provide a new safe algorithm based on linear Thompson Sampling (TS) for this problem and show a frequentist regret of order $\tilde{\mathcal{O}}(d^{3/2}\sqrt{T})$, which remarkably matches the result provided by Abeille et al. (2017) for the standard linear TS algorithm in the absence of safety constraints. We compare the performance of our algorithm with UCB-based safe algorithms and highlight how the inherently randomized nature of TS leads to a superior performance in expanding the set of safe actions the algorithm has access to at each round.

Ahmadreza Moradipari, Sanae Amani, Mahnoosh Alizadeh, Christos Thrampoulidis

Correspondence to: Ahmadreza Moradipari <ahmadreza_moradipari@ucsb.edu>

1 Introduction

The application of stochastic bandit optimization algorithms to safety-critical systems has received significant attention in the past few years. In such cases, the learner repeatedly interacts with a system with an uncertain reward function and operational constraints. In spite of this uncertainty, the learner needs to ensure that her actions do not violate the operational constraints at any round of the learning process. As such, especially in the earlier rounds, there is a need to choose actions with caution, while at the same time making sure that the chosen actions provide sufficient learning opportunities about the set of safe actions. Notably, the set of actions deemed safe by the algorithm might not initially include the optimal action. This uncertainty about safety and the resulting conservative behavior mean that the learner can incur additional regret in such constrained environments.

In this paper, we focus on a special class of stochastic bandit optimization problems where the reward is a linear function of the actions. This class of problems, referred to as linear stochastic bandits (LB), generalizes multi-armed bandit (MAB) problems to the setting where each action is associated with a feature vector $x \in \mathbb{R}^d$, and the expected reward of playing each action is equal to the inner product of its feature vector and an unknown parameter vector $\theta^*$. There exist several variants of LB that study finite Auer et al. (2002) or infinite Dani et al. (2008); Rusmevichientong & Tsitsiklis (2010); Abbasi-Yadkori et al. (2011) sets of actions, as well as the case where the set of feature vectors can change over time Chu et al. (2011); Li et al. (2010). Two efficient approaches have been developed for LB: linear UCB (LUCB) and linear Thompson Sampling (LTS). For LUCB, Abbasi-Yadkori et al. (2011) provides a regret bound of order $\tilde{\mathcal{O}}(d\sqrt{T})$. For LTS, Agrawal & Goyal (2013); Abeille et al. (2017) adopt a frequentist view and show a regret of order $\tilde{\mathcal{O}}(d^{3/2}\sqrt{T})$. Here we provide an LTS algorithm that respects linear safety constraints and study its performance. We formally define the problem setting before summarizing our contributions.

1.1 Safe Stochastic Linear Bandit Model

Reward function. The learner is given a convex and compact set of actions $\mathcal{D}_0 \subset \mathbb{R}^d$. At each round $t$, playing an action $x_t \in \mathcal{D}_0$ results in observing reward

$y_t = x_t^\top \theta^* + \eta_t,$   (1)

where $\theta^* \in \mathbb{R}^d$ is a fixed, but unknown, parameter and $\eta_t$ is a zero-mean additive noise.

Safety constraint. We further assume that the environment is subject to a linear constraint:

$x_t^\top \mu^* \le c,$   (2)

which needs to be satisfied by the action $x_t$ at every round $t$, to guarantee safe operation of the system. Here, $c$ is a positive constant that is known to the learner, while $\mu^* \in \mathbb{R}^d$ is a fixed, but unknown, vector parameter. Let us denote the set of “safe actions” that satisfy the constraint (2) as follows:

$\mathcal{D}_0^{s}(\mu^*) := \{x \in \mathcal{D}_0 : x^\top \mu^* \le c\}.$   (3)

Clearly, $\mathcal{D}_0^{s}(\mu^*)$ is unknown to the learner, since $\mu^*$ is itself unknown. However, we consider a setting in which, at every round $t$, the learner receives side information about the safety set via noisy measurements:

$z_t = x_t^\top \mu^* + \zeta_t,$   (4)

where $\zeta_t$ is zero-mean additive noise. During the learning process, the learner needs a mechanism that allows her to use the side measurements in (4) to determine the safe set $\mathcal{D}_0^{s}(\mu^*)$. This is critical, since it is required (at least with high probability) that $x_t \in \mathcal{D}_0^{s}(\mu^*)$ for all rounds $t$.

Regret. The cumulative pseudo-regret of the learner up to round $T$ is defined as $R(T) = \sum_{t=1}^{T} \big( x^{*\top}\theta^* - x_t^\top\theta^* \big)$, where $x^*$ is the optimal safe action that maximizes the expected reward over $\mathcal{D}_0^{s}(\mu^*)$, i.e., $x^* \in \arg\max_{x \in \mathcal{D}_0^{s}(\mu^*)} x^\top\theta^*$.

Learning goal. The learner’s objective is to control the growth of the pseudo-regret. Moreover, we require that the chosen actions are safe (i.e., they belong to $\mathcal{D}_0^{s}(\mu^*)$ in (3)) with high probability. As is common, we use regret to refer to the pseudo-regret $R(T)$.
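For concreteness, the following is a minimal NumPy sketch of the environment described by (1)-(4); all names (theta_star, mu_star, the noise scale, and the specific numbers) are illustrative choices and not the paper's notation.

import numpy as np

# Sketch of the model in (1)-(4): linear reward, linear safety constraint,
# and noisy side information about the constraint parameter.
rng = np.random.default_rng(0)
d, c = 4, 0.5                      # dimension and known constraint level c
theta_star = rng.normal(size=d)    # unknown reward parameter theta*
mu_star = rng.normal(size=d)       # unknown constraint parameter mu*
R = 0.1                            # noise scale (sub-Gaussian constant)

def is_safe(x):
    # True safety check x^T mu* <= c (unknown to the learner).
    return x @ mu_star <= c

def play(x):
    # Playing action x returns a noisy reward (1) and a noisy side measurement (4).
    y = x @ theta_star + R * rng.standard_normal()
    z = x @ mu_star + R * rng.standard_normal()
    return y, z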

1.2 Contributions

• We provide the first safe LTS (Safe-LTS) algorithm with provable regret guarantees for the linear bandit problem with linear safety constraints.

• Our regret analysis shows that Safe-LTS achieves the same $\tilde{\mathcal{O}}(d^{3/2}\sqrt{T})$ order of regret as the original LTS (without safety constraints), as shown by Abeille et al. (2017). Hence, the dependence of the regret of Safe-LTS on the time horizon $T$ cannot be improved modulo logarithmic factors (see the lower bounds for LB in Dani et al. (2008); Rusmevichientong & Tsitsiklis (2010)).

• We compare Safe-LTS to the existing safe versions of LUCB for linear stochastic bandits with linear stage-wise safety constraints. We show that our algorithm has: better regret in the worst case, fewer parameters to tune, and superior empirical performance.

1.3 Related Work

Safety - A diverse body of related works on stochastic optimization and control has considered the effect of safety constraints that need to be met during the run of the algorithm; see Aswani et al. (2013); Koller et al. (2018) and references therein. Closely related to our work, Sui et al. (2015, 2018) study nonlinear bandit optimization with nonlinear safety constraints using Gaussian processes (GPs) as non-parametric models for both the reward and the constraint functions. Their algorithms have shown great promise in robotics applications Ostafew et al. (2016); Akametalu et al. (2014). Without the GP assumption, Usmanova et al. (2019) proposes and analyzes a safe variant of the Frank-Wolfe algorithm to solve a smooth optimization problem with an unknown convex objective function and unknown linear constraints (with side information, similar to our setting). All the above algorithms come with provable convergence guarantees, but no regret bounds. To the best of our knowledge, the first work to derive an algorithm with provable regret guarantees for bandit optimization with stage-wise safety constraints, such as the ones imposed in the aforementioned works, is Amani et al. (2019). While Amani et al. (2019) restricts attention to a linear setting, their results reveal that the presence of the safety constraint –even though linear– can have a non-trivial effect on the performance of LUCB-type algorithms. Specifically, the proposed Safe-LUCB algorithm comes with a problem-dependent regret bound that depends critically on the location of the optimal action in the safe action set – increasingly so in problem instances for which the safety constraint is active. In Amani et al. (2019), the linear constraint function involves the same unknown vector (say, $\theta^*$) as the one that specifies the linear reward. Instead, in Section 1.1 we allow the constraint to depend on a new parameter vector (say, $\mu^*$) to which the learner gets access via the side-information measurements (4). This latter setting is the direct linear analogue of that of Sui et al. (2015, 2018); Usmanova et al. (2019), and we demonstrate that an appropriate Safe-LTS algorithm enjoys regret guarantees of the same order as the original LTS without safety constraints. A more elaborate discussion comparing our results to Amani et al. (2019) is provided in Section 4.3. We also mention Kazerouni et al. (2017) as another recent work on safe linear bandits. In contrast to the previously mentioned references, Kazerouni et al. (2017) defines safety as the requirement that the cumulative (linear) reward up to each round stay above a given percentage of the performance of a known baseline policy. As a closing remark, Amani et al. (2019); Kazerouni et al. (2017); Usmanova et al. (2019) show that simple linear models for safety constraints can be directly relevant to several applications such as medical trials, recommendation systems, or managing customers’ demand in power-grid systems. Moreover, even in more complex settings where linear models do not directly apply (e.g., Ostafew et al. (2016); Akametalu et al. (2014)), we still believe that this simplification is an appropriate first step towards a principled study of the regret performance of safe algorithms in sequential decision settings.

Thompson Sampling - Even though TS-based algorithms Thompson (1933) are computationally easier to implement than UCB-based algorithms and have shown great empirical performance, they were largely ignored by the academic community until a few years ago, when a series of papers (e.g., Russo & Van Roy (2014); Abeille et al. (2017); Agrawal & Goyal (2012); Kaufmann et al. (2012)) showed that TS achieves optimal performance in both frequentist and Bayesian settings. Most of the literature has focused on the analysis of the Bayesian regret of TS for general settings such as linear bandits or reinforcement learning (see e.g., Osband & Van Roy (2015)). More recently, Russo & Van Roy (2016); Dong & Van Roy (2018); Dong et al. (2019) provided an information-theoretic analysis of TS. Additionally, Gopalan & Mannor (2015) provides regret guarantees for TS in the finite and infinite MDP setting. Another notable paper is Gopalan et al. (2014), which studies the stochastic MAB problem in complex action settings, providing a regret bound that scales logarithmically in time with improved constants. None of the aforementioned papers study the performance of TS for linear bandits with safety constraints.

2 Safe Linear Thompson Sampling

Our proposed algorithm is a safe variant of Linear Thompson Sampling (LTS). At any round $t$, given a regularized least-squares (RLS) estimate $\hat\theta_t$, the algorithm samples a perturbed parameter $\tilde\theta_t$ that is appropriately distributed to guarantee sufficient exploration. Treating this sampled parameter as the true environment, the algorithm chooses the action with the highest possible reward while making sure that the safety constraint (2) holds. In order to ensure that actions remain safe at all rounds, the algorithm uses the side information (4) to construct a confidence region $\mathcal{E}_t$, which contains the unknown parameter $\mu^*$ with high probability. With this, it forms an inner approximation of the safe set, composed of all actions that satisfy the safety constraint for all parameters in $\mathcal{E}_t$. The summary is presented in Algorithm 1 and a detailed description follows.

  Input:
  Set
  for $t = 1$ to $T$ do
     Sample (see Section 2.2.2)
     Set
     Compute RLS-estimates $\hat\theta_t$ and $\hat\mu_t$ (see (5), (6))
     Set:
     Build the confidence region:
     Compute the estimated safe set:
     Play the following safe action: $x_t = \arg\max_{x \in \mathcal{D}_t^s} x^\top \tilde\theta_t$
     Observe reward $y_t$ and side-information measurement $z_t$
  end for
Algorithm 1 Safe Linear Thompson Sampling (Safe-LTS)
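To make the loop concrete, here is a minimal, self-contained NumPy sketch of one possible implementation of Safe-LTS over a finite grid of candidate actions; the confidence radius, the inflation of the perturbation, and all names and numbers are illustrative assumptions rather than the paper's exact choices.

import numpy as np

rng = np.random.default_rng(1)
d, T, lam, c, R, S, delta = 2, 500, 1.0, 0.5, 0.1, 1.0, 0.1
theta_star = np.array([0.8, 0.6]); mu_star = np.array([0.9, -0.2])

# Finite candidate actions on the unit circle (stand-in for a convex action set).
angles = np.linspace(0, 2 * np.pi, 100, endpoint=False)
actions = np.stack([np.cos(angles), np.sin(angles)], axis=1)

V = lam * np.eye(d)                 # regularized Gram matrix
b_theta = np.zeros(d); b_mu = np.zeros(d)
regret = 0.0
safe_mask = actions @ mu_star <= c
x_opt = actions[safe_mask][np.argmax(actions[safe_mask] @ theta_star)]

for t in range(1, T + 1):
    V_inv = np.linalg.inv(V)
    theta_hat = V_inv @ b_theta     # RLS estimate of the reward parameter
    mu_hat = V_inv @ b_mu           # RLS estimate of the constraint parameter
    # Simplified confidence radius in the spirit of Abbasi-Yadkori et al. (2011).
    beta = R * np.sqrt(d * np.log((1 + t / lam) / delta)) + np.sqrt(lam) * S

    # Estimated safe set (14): actions safe for every parameter in the ellipsoid.
    margins = actions @ mu_hat + beta * np.sqrt(
        np.einsum("ij,jk,ik->i", actions, V_inv, actions))
    safe_idx = np.where(margins <= c)[0]
    if len(safe_idx) == 0:          # fall back to the origin, which is safe since c > 0
        x_t = np.zeros(d)
    else:
        # Thompson sample with a (heuristically) inflated perturbation.
        eta = 2.0 * rng.standard_normal(d)
        theta_tilde = theta_hat + beta * np.linalg.cholesky(V_inv) @ eta
        x_t = actions[safe_idx][np.argmax(actions[safe_idx] @ theta_tilde)]

    y = x_t @ theta_star + R * rng.standard_normal()   # reward (1)
    z = x_t @ mu_star + R * rng.standard_normal()      # side information (4)
    V += np.outer(x_t, x_t); b_theta += y * x_t; b_mu += z * x_t
    regret += x_opt @ theta_star - x_t @ theta_star

print(f"cumulative regret after {T} rounds: {regret:.2f}")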

2.1 Model assumptions

Notation. $[T]$ denotes the set $\{1, \dots, T\}$. The Euclidean norm of a vector $x$ is denoted by $\|x\|_2$. Its weighted 2-norm with respect to a positive semidefinite matrix $V$ is denoted by $\|x\|_V = \sqrt{x^\top V x}$. We also use the standard $\tilde{\mathcal{O}}(\cdot)$ notation that ignores poly-logarithmic factors. Finally, for ease of notation, from now onwards we refer to the safe set in (12) simply by $\mathcal{D}_t^s$.

Let $\mathcal{F}_t$ denote the filtration that represents the accumulated information up to round $t$. In the following, we introduce standard assumptions on the problem.

Assumption 1.

For all $t$, $\eta_t$ and $\zeta_t$ are conditionally zero-mean, $R$-sub-Gaussian noise variables, i.e., $\mathbb{E}[\eta_t \mid \mathcal{F}_{t-1}] = \mathbb{E}[\zeta_t \mid \mathcal{F}_{t-1}] = 0$, and $\mathbb{E}[e^{\lambda \eta_t} \mid \mathcal{F}_{t-1}] \le \exp(\lambda^2 R^2 / 2)$, $\mathbb{E}[e^{\lambda \zeta_t} \mid \mathcal{F}_{t-1}] \le \exp(\lambda^2 R^2 / 2)$, for all $\lambda \in \mathbb{R}$.

Assumption 2.

There exists a positive constant $S$ such that $\|\theta^*\|_2 \le S$ and $\|\mu^*\|_2 \le S$.

Assumption 3.

The action set $\mathcal{D}_0$ is a compact and convex subset of $\mathbb{R}^d$ that contains the origin. We assume $\|x\|_2 \le L$ for all $x \in \mathcal{D}_0$.

It is straightforward to generalize our results to the case where the sub-Gaussian constants of $\eta_t$ and $\zeta_t$ and/or the upper bounds on $\|\theta^*\|_2$ and $\|\mu^*\|_2$ are different. Throughout, we assume they are equal, for brevity.

2.2 Algorithm description and discussion

Let $x_1, \dots, x_{t-1}$ be the sequence of actions and $y_1, \dots, y_{t-1}$ and $z_1, \dots, z_{t-1}$ be the corresponding rewards and side-information measurements, respectively. For any $t$, we can obtain RLS-estimates $\hat\theta_t$ of $\theta^*$ and $\hat\mu_t$ of $\mu^*$ as follows:

$\hat\theta_t = V_t^{-1} \sum_{s=1}^{t-1} y_s x_s,$   (5)
$\hat\mu_t = V_t^{-1} \sum_{s=1}^{t-1} z_s x_s,$   (6)

where $V_t = \lambda I + \sum_{s=1}^{t-1} x_s x_s^\top$ is the regularized Gram matrix of the actions, with $\lambda > 0$ a regularization parameter. Based on $\hat\theta_t$ and $\hat\mu_t$, Algorithm 1 constructs two confidence regions $\mathcal{C}_t$ and $\mathcal{E}_t$ as follows:

$\mathcal{C}_t = \{\theta \in \mathbb{R}^d : \|\theta - \hat\theta_t\|_{V_t} \le \beta_t(\delta)\},$   (7)
$\mathcal{E}_t = \{v \in \mathbb{R}^d : \|v - \hat\mu_t\|_{V_t} \le \beta_t(\delta)\}.$   (8)

Notice that $\mathcal{C}_t$ and $\mathcal{E}_t$ both depend on the confidence level $\delta$, but we suppress this dependence for simplicity when clear from context. The ellipsoid radius $\beta_t(\delta)$ is chosen according to Theorem 2.1 of Abbasi-Yadkori et al. (2011) in order to guarantee that $\theta^* \in \mathcal{C}_t$ and $\mu^* \in \mathcal{E}_t$ with high probability.
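As a concrete illustration, the following short sketch computes the RLS estimates (5)-(6) and a radius in the spirit of (9) from the history of actions, rewards, and side measurements; the specific radius formula (with the action-norm bound set to 1) and the default constants are simplifying assumptions, not the paper's exact constants.

import numpy as np

def rls_estimates(X, y, z, lam=1.0, R=0.1, S=1.0, delta=0.1):
    # X: (t, d) actions, y: (t,) rewards, z: (t,) side measurements.
    t, d = X.shape
    V = lam * np.eye(d) + X.T @ X                 # regularized Gram matrix
    V_inv = np.linalg.inv(V)
    theta_hat = V_inv @ (X.T @ y)                 # RLS estimate for the reward parameter
    mu_hat = V_inv @ (X.T @ z)                    # RLS estimate for the constraint parameter
    beta = R * np.sqrt(d * np.log((1 + t / lam) / delta)) + np.sqrt(lam) * S
    return theta_hat, mu_hat, V, beta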

Theorem 2.1.

(Abbasi-Yadkori et al. (2011)) Let Assumptions 1 and 2 hold. For a fixed $\delta \in (0, 1)$, let

$\beta_t(\delta) = R \sqrt{d \log\Big(\frac{1 + (t-1) L^2 / \lambda}{\delta}\Big)} + \sqrt{\lambda}\, S.$   (9)

Then, with probability at least $1 - \delta$, it holds that $\theta^* \in \mathcal{C}_t$ and $\mu^* \in \mathcal{E}_t$, for all $t \in [T]$.

Background on LTS: a frequently optimistic algorithm

Our algorithm inherits the frequentist view of LTS first introduced in Agrawal & Goyal (2013); Abeille et al. (2017), which is essentially defined as a randomized algorithm over the RLS-estimate of the unknown parameter $\theta^*$. Specifically, at any round $t$, the randomized algorithm of Agrawal & Goyal (2013); Abeille et al. (2017) samples a parameter $\tilde\theta_t$ centered at $\hat\theta_t$:

$\tilde\theta_t = \hat\theta_t + \beta_t(\delta)\, V_t^{-1/2} \eta_t,$   (10)

and chooses the action that is best with respect to the new sampled parameter, i.e., maximizes the objective $x^\top \tilde\theta_t$. The key idea of Agrawal & Goyal (2013); Abeille et al. (2017) on how to select the random perturbation $\eta_t$ to guarantee good regret performance is as follows. On the one hand, $\tilde\theta_t$ must stay close enough to the RLS-estimate $\hat\theta_t$ so that $x^\top \tilde\theta_t$ is a good proxy for the true (but unknown) reward $x^\top \theta^*$. Thus, $\eta_t$ must satisfy an appropriate concentration property. On the other hand, $\tilde\theta_t$ must also favor exploration in the sense that it leads –often enough– to actions that are optimistic, i.e., they satisfy

$x_t^\top \tilde\theta_t \ge x^{*\top} \theta^*.$   (11)

Thus, $\eta_t$ must satisfy an appropriate anti-concentration property.

Our proposed Algorithm 1 also builds on these two key ideas. However, we discuss next how the safe setting imposes additional challenges and how our algorithm and its analysis manage to address them.

Addressing challenges in the safe setting

Compared to the classical linear bandit setting studied in Agrawal & Goyal (2013); Abeille et al. (2017), the presence of the safety constraint raises the following two questions:

(i) How can we guarantee that the actions played at each round are safe?

(ii) In the face of the safety restrictions, how can optimism (cf. (11)) be maintained?

In the rest of this section, we explain the mechanisms that the Safe-LTS Algorithm 1 employs to address both of these challenges.

Safety - First, the chosen action $x_t$ at each round must not only maximize $x^\top \tilde\theta_t$, but it must also be safe. Since the learner does not know the safe action set $\mathcal{D}_0^s(\mu^*)$, Algorithm 1 acts conservatively and guarantees safety as follows. After creating the confidence region $\mathcal{E}_t$ around the RLS-estimate $\hat\mu_t$, it forms the so-called safe decision set at round $t$, denoted $\mathcal{D}_t^s$:

$\mathcal{D}_t^s = \{x \in \mathcal{D}_0 : x^\top v \le c, \ \forall v \in \mathcal{E}_t\}.$   (12)

Then, the chosen action is optimized over only the subset $\mathcal{D}_t^s$, i.e.,

$x_t = \arg\max_{x \in \mathcal{D}_t^s} x^\top \tilde\theta_t.$   (13)

We make the following two important remarks about the set $\mathcal{D}_t^s$ defined in (12). On a positive note, $\mathcal{D}_t^s$ is easy to compute. To see this, note the following equivalent definitions:

$\mathcal{D}_t^s = \Big\{x \in \mathcal{D}_0 : \max_{v \in \mathcal{E}_t} x^\top v \le c\Big\} = \Big\{x \in \mathcal{D}_0 : x^\top \hat\mu_t + \beta_t(\delta) \|x\|_{V_t^{-1}} \le c\Big\}.$   (14)

Thus, in view of (14), the optimization in (13) is an efficient convex program. The challenge with $\mathcal{D}_t^s$ is that it contains only those actions that are safe with respect to all the parameters in $\mathcal{E}_t$, and not only with respect to $\mu^*$. As such, it is only an inner approximation of the true safe set $\mathcal{D}_0^s(\mu^*)$. As we will see next, this fact complicates the requirement for optimism.
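For illustration, the selection rule (13) over the set (14) can be written as a small convex program, sketched here with cvxpy; the unit-ball action set and all numerical values are assumptions made only for the example.

import numpy as np
import cvxpy as cp

# Safe action selection (13) over (14): maximize x^T theta_tilde subject to
# x in D_0 (here taken to be the unit ball) and worst-case safety over the
# confidence ellipsoid for the constraint parameter.
d, c, beta = 3, 0.5, 1.2
rng = np.random.default_rng(2)
theta_tilde = rng.normal(size=d)           # sampled (perturbed) reward parameter
mu_hat = 0.3 * rng.normal(size=d)          # RLS estimate of the constraint parameter
A = rng.normal(size=(d, d))
V = np.eye(d) + A @ A.T                    # Gram matrix (positive definite)
L = np.linalg.cholesky(np.linalg.inv(V))   # so that ||x||_{V^{-1}} = ||L^T x||_2

x = cp.Variable(d)
objective = cp.Maximize(theta_tilde @ x)
constraints = [
    cp.norm(x, 2) <= 1,                               # action set D_0 (assumed)
    mu_hat @ x + beta * cp.norm(L.T @ x, 2) <= c,     # safe for every mu in the ellipsoid
]
cp.Problem(objective, constraints).solve()
print("chosen safe action:", x.value)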

Optimism in the face of safety - As previously discussed, in order to guarantee safety, Algorithm 1 chooses actions from the subset $\mathcal{D}_t^s$. This is only an inner approximation of the true safe set $\mathcal{D}_0^s(\mu^*)$, a fact that makes it harder to maintain optimism as defined in (11). To see this, note that in the classical setting of Abeille et al. (2017), their algorithm would choose as $x_t$ the action that maximizes $x^\top \tilde\theta_t$ over the entire set $\mathcal{D}_0$. In turn, this would imply that $x_t^\top \tilde\theta_t \ge x^{*\top} \tilde\theta_t$, because $x^*$ belongs to the feasible set $\mathcal{D}_0$. This observation is the critical first argument in proving that $(x_t, \tilde\theta_t)$ is optimistic often enough, i.e., (11) holds with fixed probability $p$.

Unfortunately, in the presence of safety constraints, the action $x_t$ is a maximizer over only the subset $\mathcal{D}_t^s$. Since $x^*$ may not lie within $\mathcal{D}_t^s$, there is no guarantee that $x_t^\top \tilde\theta_t \ge x^{*\top} \tilde\theta_t$ as before. So, how does one then guarantee optimism?

Intuitively, in the first rounds, the estimated safe set $\mathcal{D}_t^s$ is only a small subset of the true $\mathcal{D}_0^s(\mu^*)$. Thus, $x_t$ is a vector of small norm compared to that of $x^*$. Hence, for (11) to hold, the sampled $\tilde\theta_t$ must not only point in the direction of $\theta^*$, but it must also have a larger norm than that of $\theta^*$. To satisfy this latter requirement, the random vector $\eta_t$ must be large; hence, it will “anti-concentrate more”. As the algorithm progresses, and –thanks to the side-information measurements– the set $\mathcal{D}_t^s$ becomes an increasingly better approximation of $\mathcal{D}_0^s(\mu^*)$, the requirements on the anti-concentration of $\eta_t$ become the same as if no safety constraints were present. Overall, at least intuitively, we might hope that optimism is possible in the face of safety, but only provided that $\eta_t$ is set to satisfy a stronger (at least in the first rounds) anti-concentration property than that required by Abeille et al. (2017) in the classical setting.

At the heart of Algorithm 1 and its proof of regret lies an analytic argument that materializes the intuition described above. Specifically, we will prove that optimism is possible in the presence of safety at the cost of a stricter anti-concentration property compared to that specified in Abeille et al. (2017). While the proof of this fact is deferred to Section 3.1, we now summarize the appropriate distributional properties that provably guarantee good regret performance of Algorithm 1 in the safe setting.

Definition 2.1.

In Algorithm 1, the random vector $\eta_t$ is sampled i.i.d. at each round from a multivariate distribution on $\mathbb{R}^d$ that is absolutely continuous with respect to the Lebesgue measure and satisfies the following properties:
  Anti-concentration: There exists a strictly positive probability $p$ such that for any $u \in \mathbb{R}^d$ with $\|u\|_2 = 1$,

(15)

Concentration: There exist positive constants such that,

(16)

In particular, the difference from the distributional assumptions required by Abeille et al. (2017) in the classical setting is the extra term in (15) (naturally, the same term affects the concentration property (16)).

Our proof of regret in Section 3 shows that this extra term captures an appropriate notion of the distance between the approximation $\mathcal{D}_t^s$ (where $x_t$ lives) and the true safe set $\mathcal{D}_0^s(\mu^*)$ (where $x^*$ lives), and that it provides enough exploration for the sampled parameter $\tilde\theta_t$ so that actions in $\mathcal{D}_t^s$ can be optimistic. While this intuition can possibly explain the need for an additive term in Definition 2.1, it is insufficient when it comes to determining what the “correct” value for it should be. This is determined by our analytic treatment in Section 3.1.

As a closing remark of this section, note that the properties in Definition 2.1 are satisfied by a number of easy-to-sample-from distributions. For instance, they are satisfied by a multivariate zero-mean i.i.d. Gaussian distribution with all entries having a (possibly time-dependent) variance.
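A minimal sketch of one admissible choice is given below, assuming a zero-mean Gaussian perturbation whose standard deviation is inflated by a hypothetical factor relative to unconstrained LTS; the factor itself is illustrative and not the constant prescribed by Definition 2.1.

import numpy as np

def sample_perturbation(d, inflation=2.0, rng=np.random.default_rng()):
    # Draw eta ~ N(0, inflation^2 * I_d); inflation > 1 strengthens exploration
    # so that anti-concentration holds with an extra additive margin.
    return inflation * rng.standard_normal(d)

eta = sample_perturbation(d=5)
print(eta)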

3 Regret Analysis

In this section, we present our main result, a tight regret bound for Safe-LTS, and discuss key proof ideas.

Since $\mu^*$ is unknown, the learner does not know the safe action set $\mathcal{D}_0^s(\mu^*)$. Therefore, in order to satisfy the safety constraint (2), Algorithm 1 chooses actions from $\mathcal{D}_t^s$, which is a conservative inner approximation of $\mathcal{D}_0^s(\mu^*)$. Moreover, at the heart of the action selection rule is the sampling of an appropriate random perturbation of the RLS estimate $\hat\theta_t$. The sampling rule in (10) is almost the same as in the classical setting Abeille et al. (2017), but with a key difference in the distribution of the perturbation vector (cf. Definition 2.1). As explained in Section 3.1, the modification compared to Abeille et al. (2017) is necessary to guarantee that actions are frequently optimistic (see (24)) in spite of the limitations imposed on the actions by the safety constraints. In this section, we put these pieces together by proving that the action selection rule of Safe-LTS is simultaneously: 1) frequently optimistic, and 2) guarantees a proper expansion of the estimated safe set. Our main result, stated as Theorem 3.1, is perhaps surprising: in spite of the additional safety constraints, Safe-LTS has regret that is order-wise the same as that in the classical setting Agrawal & Goyal (2013); Abeille et al. (2017).

Theorem 3.1 (Regret of Safe-LTS).

Let $\delta \in (0, 1)$. Under Assumptions 1, 2, 3, the regret of the Safe-LTS Algorithm 1 is upper bounded with probability at least $1 - \delta$ as follows:

(17)

where $\beta_T(\delta)$ is as in (9).
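In order notation, and consistently with Section 1.2, the bound (17) can be summarized (suppressing problem-dependent constants and logarithmic factors) as

$R(T) = \tilde{\mathcal{O}}\big(d^{3/2} \sqrt{T}\big),$

which matches, up to logarithmic factors, the frequentist regret of unconstrained linear TS shown by Abeille et al. (2017).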

A detailed proof of Theorem 3.1 is deferred to Appendix E. In the rest of the section, we highlight the key changes compared to previous proofs in Agrawal & Goyal (2013); Abeille et al. (2017) that occur due to the safety constraint. To begin, let us consider the following standard decomposition of the cumulative regret $R(T)$:

$R(T) = \underbrace{\textstyle\sum_{t=1}^{T} \big( x^{*\top}\theta^* - x_t^\top \tilde\theta_t \big)}_{\text{Term I}} + \underbrace{\textstyle\sum_{t=1}^{T} \big( x_t^\top \tilde\theta_t - x_t^\top \theta^* \big)}_{\text{Term II}}.$   (18)

Regarding Term II, the concentration property of $\eta_t$ guarantees that $x_t^\top \tilde\theta_t$ is close to $x_t^\top \hat\theta_t$, and consequently, close to $x_t^\top \theta^*$ thanks to Theorem 2.1. Therefore, Term II can be controlled similarly to previous works, e.g., Abbasi-Yadkori et al. (2011); Abeille et al. (2017); see Appendix E.2 for more details. Next, we focus on Term I.

To see how the safety constraints affect the proofs, let us first review the treatment of Term I in the classical setting. For UCB-type algorithms, Term I is always non-positive, since the chosen pair of action and parameter is optimistic at each round by design Dani et al. (2008); Rusmevichientong & Tsitsiklis (2010); Abbasi-Yadkori et al. (2011). For LTS, Term I can be positive; that is, (11) may not hold at every round $t$. However, Agrawal & Goyal (2013); Abeille et al. (2017) proved that, thanks to the anti-concentration property of $\eta_t$, this optimistic property occurs often enough. Moreover, this is enough to yield a sufficiently good bound on Term I for every round $t$; see Appendix A.

As discussed in Section 2.2.2, the requirement for safety complicates the requirement for optimism. Our main technical contribution, detailed in the next section, is to show that the properly modified anti-concentration property in Definition 2.1, together with the construction of the approximated safe sets as in (14), can yield frequently optimistic actions even in the face of safety. Specifically, it is the extra term in (15) that allows enough exploration for the sampled parameter $\tilde\theta_t$ to compensate for the safety limitations on the chosen actions, and because of this we are able to show that Safe-LTS obtains the same order of regret as that of Abeille et al. (2017).

3.1 Proof sketch: Optimism despite safety constraints

We prove that Safe-LTS samples a parameter $\tilde\theta_t$ that is optimistic with constant probability. The next lemma informally characterizes this claim (see the formal statement of the lemma and its proof in Appendix D).

Lemma 3.2.

(Optimism in the face of safety; Informal) For any round $t$, Safe-LTS samples a parameter $\tilde\theta_t$ and chooses an action $x_t$ such that the pair $(x_t, \tilde\theta_t)$ is optimistic frequently enough, i.e.,

(19)

where $p$ is the probability with which the anti-concentration property (15) holds.

The challenge in the proof is that the actions are chosen from the estimated safe set $\mathcal{D}_t^s$, which does not necessarily contain all feasible actions and hence may not contain $x^*$. Therefore, we need a mechanism to control the distance of the optimal action $x^*$ from the optimistic actions that can only lie within the subset $\mathcal{D}_t^s$ (distance is defined here in terms of an inner product with the optimistic parameters). Unfortunately, we do not have direct control over this distance term, and so at the heart of the proof lies the idea of identifying a “good” feasible action whose distance to $x^*$ is easier to control.

To be concrete, we show that it suffices to choose the good feasible point in the direction of $x^*$, i.e., of the form $\alpha_t x^*$, where the key parameter $\alpha_t \in [0, 1]$ must be set so that $\alpha_t x^*$ belongs to the estimated safe set. Naturally, the value of $\alpha_t$ is determined by the approximated safe set $\mathcal{D}_t^s$ as defined in (14). The challenge though is that we do not know how the value of $\alpha_t$ compares to the constant. We circumvent this issue by introducing an enlarged confidence region centered at $\mu^*$ as

and the corresponding shrunk safe decision set as

(20)

Notice that the shrunk safe set is defined with respect to an ellipsoid centered at $\mu^*$ (rather than at $\hat\mu_t$). This is convenient since $x^*$ satisfies $x^{*\top}\mu^* \le c$. Using this, it can be easily checked that the following choice of $\alpha_t$:

(21)

ensures that

From this, and the optimality of $x^*$, we have that:

(22)

Next, using (22), in order to show that (19) holds, it suffices to prove that

where, in the second line, we used the definition of in (37). To continue, recall that . Thus, we want to lower bound the probability of the following event:

To simplify the above, we use the following two facts: (i) ; (ii) , because of Cauchy-Schwarz and Theorem 2.1. Put together, we need that

or equivalently,

(23)

where we have defined . By definition of , note that . Hence, the desired (23) holds due to the anti-concentration property of the distribution in (15). This completes the proof of Lemma 3.2.

Before closing, we remark on the following differences from the proof of optimism in the classical setting as presented in Lemma 3 of Abeille et al. (2017). First, we present an algebraic version of the basic machinery introduced in Section 5 of Abeille et al. (2017) that we show is convenient to extend to the safe setting. Second, we employ the idea of relating $x^*$ to a “better” feasible point and show optimism for the latter. Third, even after introducing this point, the fact that $\alpha_t$ is proportional to the quantity defined in (37) is critical for the seemingly simple algebraic steps that follow (21). In particular, in deducing (23) from the expression above, note that we have divided both sides in the probability term by this quantity. It is only thanks to the proportionality observation that we made above that the term cancels throughout and we can conclude with (23) without needing to lower bound the minimum eigenvalue of the Gram matrix (which is known to be hard).

4 Numerical Results and Comparison to State of the Art

We present details of our numerical experiments on synthetic data. First, we show how the presence of safety constraints affects the performance of LTS in terms of regret. Next, we evaluate Safe-LTS by comparing it against safe versions of LUCB. Then, we compare Safe-LTS to the Safe-LUCB algorithm of Amani et al. (2019). Unless otherwise specified, the reward and constraint parameters $\theta^*$ and $\mu^*$ are drawn at random, and the action set is drawn uniformly at random. Throughout, we have implemented a modified version of Safe-LUCB which uses a different choice of norm, due to computational considerations (e.g., Dani et al. (2008); Amani et al. (2019)). Recall that the action selection rule of UCB-based algorithms involves solving bilinear optimization problems, whereas TS-based algorithms (such as the one proposed here) involve simple linear objectives (see Abeille et al. (2017)).

4.1 The effect of safety constraints on LTS

In Fig. 1 we compare the average cumulative regret of Safe-LTS to the standard LTS algorithm with oracle access to the true safe set $\mathcal{D}_0^s(\mu^*)$. The results are averaged over 20 problem realizations. As shown, even though Safe-LTS requires that the chosen actions belong to the conservative inner-approximation set $\mathcal{D}_t^s$, it still achieves a regret of the same order as the oracle, reaffirming the prediction of Theorem 3.1. Also, the comparison to the oracle reveals that the action selection rule of Safe-LTS is indeed such that it guarantees a fast expansion of the estimated safe set, so as to not exclude optimistic actions for a long time. Fig. 1 also shows the performance of a third algorithm discussed in Sec. 4.4.

Figure 1: Comparison of the average cumulative regret of Safe-LTS vs standard LTS with oracle access to the safe set and Safe-LTS with a dynamic noise distribution described in Section 4.4.
Figure 2: The cumulative regret of Safe-LTS, Naive Safe-LUCB and Inflated Naive Safe-LUCB for a specific problem instance.

4.2 Comparison to safe versions of LUCB

Here, we compare the performance of our algorithm with two safe versions of LUCB, as follows. First, we implement a natural extension of the classical LUCB algorithm of Dani et al. (2008), which we call “Naive Safe-LUCB” and which respects the safety constraints by choosing actions from the estimated safe set $\mathcal{D}_t^s$ in (12). Second, we consider an improved version, which we call “Inflated Naive Safe-LUCB” and which is motivated by our analysis of Safe-LTS. Specifically, in light of Lemma 3.2, we implement the improved LUCB algorithm with an inflated confidence ellipsoid in order to favor optimistic exploration. In Fig. 2, we employ these two algorithms on a specific problem instance (details in Appendix F), showing that both fail to match the regret of Safe-LTS in general. Further numerical simulations suggest that while Safe-LTS always outperforms Naive Safe-LUCB, Inflated Naive Safe-LUCB can have superior performance to Safe-LTS in many problem instances (see Fig. 6 in Appendix F). Unfortunately, not only is this not always the case (cf. Fig. 2), but we are also not aware of an appropriate modification to our proofs that would establish this problem-dependent performance. This being said, further investigations in this direction might be of interest.
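For concreteness, a minimal sketch of the two baselines over a finite set of candidate actions follows; the function name, the inflation factor kappa, and the finite-action restriction are illustrative assumptions.

import numpy as np

# "Naive Safe-LUCB": pick the UCB-maximizing action inside the estimated safe
# set (14). "Inflated Naive Safe-LUCB": same rule with the reward confidence
# width inflated by a factor kappa > 1.
def safe_lucb_action(actions, theta_hat, mu_hat, V_inv, beta, c, kappa=1.0):
    w = np.sqrt(np.einsum("ij,jk,ik->i", actions, V_inv, actions))  # ||x||_{V^{-1}}
    safe = actions @ mu_hat + beta * w <= c       # estimated safe set (14)
    if not safe.any():
        return np.zeros(actions.shape[1])         # fall back to the origin
    ucb = actions @ theta_hat + kappa * beta * w  # (possibly inflated) UCB index
    ucb[~safe] = -np.inf
    return actions[np.argmax(ucb)]

Setting kappa = 1 recovers the naive variant, while kappa > 1 gives the inflated variant.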

4.3 Comparison to Safe-LUCB

In this section we compare our algorithm and results to the Safe-LUCB algorithm of Amani et al. (2019), which was proposed for a similar, but non-identical, setting. Specifically, in Amani et al. (2019), the linear safety constraint involves the same unknown parameter vector $\theta^*$ as the linear reward function and –in our notation– it is defined through a known matrix. As such, no side-information measurements are needed. First, while our proof does not show a regret of $\tilde{\mathcal{O}}(\sqrt{T})$ for the setting of Amani et al. (2019) in the general case, it does so for special cases. For example, it is not hard to see that our proofs readily extend to their setting for a particular choice of the constraint matrix. This already improves upon the guarantee provided by Amani et al. (2019). Indeed, there are non-trivial instances in which the safety constraint is active and Safe-LUCB suffers from a worse regret bound Amani et al. (2019). Second, while our proof adapts to a special case of Amani et al. (2019)'s setting, the other way around is not true, i.e., it is not obvious how one would modify the proof of Amani et al. (2019) to obtain a $\tilde{\mathcal{O}}(\sqrt{T})$ guarantee even in the presence of side information. This point is highlighted by Fig. 3, which numerically compares the two algorithms for a specific problem instance with side information in which the constraint is active at the optimal action. Also, see Section F for a numerical comparison of the estimated safe sets’ expansion for the two algorithms. Fig. 4 compares Safe-LTS against Safe-LUCB and Naive Safe-LUCB over 30 problem realizations (see Section F in the Appendix for plots with standard deviation). As already pointed out in Amani et al. (2019), Naive Safe-LUCB generally leads to poor regret, since the LUCB action selection rule alone does not provide sufficient exploration towards safe set expansion. In contrast, Safe-LUCB is equipped with a pure exploration phase over a given seed safe set, which is shown to lead to proper safe set expansion. Our paper reveals that the inherent randomized nature of Safe-LTS alone is capable of properly expanding the safe set without the need for an explicit initialization phase (during which regret grows linearly).

Figure 3: Comparison of regret of Safe-LUCB and Safe-LTS, for a single problem instance in which the safety constraint is active.
Figure 4: Comparison of the average cumulative regret of Safe-LTS versus two safe LUCB algorithms.

4.4 Sampling from a dynamic noise distribution

In order for Safe-LTS to be frequently optimistic, our theory requires that the random perturbation $\eta_t$ satisfies (15) for all rounds. Specifically, we need the extra term compared to Abeille et al. (2017) in order to ensure safe set expansion. While this result is already sufficient for the tight regret guarantees of Theorem 3.1, it does not fully capture our intuition (see also Sec. 2.2.2) that as the algorithm progresses and $\mathcal{D}_t^s$ gets closer to $\mathcal{D}_0^s(\mu^*)$, exploration (and thus, the requirement on anti-concentration) does not need to be as aggressive. Based on this intuition, we propose the following heuristic modification, in which Safe-LTS uses a perturbation whose inflation decreases over time according to a linearly-decreasing function. Fig. 1 shows empirical evidence of the superiority of this heuristic.
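A minimal sketch of such a schedule, assuming a Gaussian perturbation and a hypothetical start/end pair of inflation levels:

import numpy as np

def dynamic_inflation(t, T, start=3.0, end=1.0):
    # Linearly decreasing inflation factor for the perturbation at round t,
    # decaying from the stronger early-round level toward the unconstrained-LTS level.
    return start + (end - start) * min(t, T) / T

def sample_dynamic_perturbation(d, t, T, rng=np.random.default_rng()):
    return dynamic_inflation(t, T) * rng.standard_normal(d)

Here start and end are tuning knobs: start corresponds to the stronger early-round anti-concentration, and end to the level used without safety constraints.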

5 Conclusion

We studied LB in which the environment is subject to unknown linear safety constraints that need to be satisfied at each round. For this problem, we proposed Safe-LTS, which, to the best of our knowledge, is the first safe TS algorithm with provable regret guarantees. Importantly, we showed that Safe-LTS achieves regret of the same order as in the original setting with no safety constraints. We have also compared Safe-LTS with several UCB-type safe algorithms, showing that the former has: better regret in the worst case, fewer parameters to tune, and often superior empirical performance. Interesting directions for future work include: theoretically studying the dynamic property of Section 4.4, as well as investigating TS-based alternatives to the GP-UCB-type algorithms of Sui et al. (2015, 2018).

Appendix A Proof sketch: Why frequent optimism is enough to bound Term I

As discussed in Section 3, the presence of the safety constraints complicates the requirement for optimism. We show in Section 3.1 that Safe-LTS is optimistic with constant probability in spite of safety constraints. Based on this, we complete the sketch of the proof here by showing that we can bound the overall regret of Term I in (18) by the weighted norm of optimistic (and, in our case, safe) actions. Let us first define the set of the optimistic parameters as

$\Theta_t^{\mathrm{opt}} := \Big\{\theta \in \mathbb{R}^d : \max_{x \in \mathcal{D}_t^s} x^\top \theta \ge x^{*\top} \theta^*\Big\}.$   (24)

In Section 3.1, we show that Safe-LTS samples from this set with constant probability. Note that, if at round $t$ Safe-LTS samples from the set of optimistic parameters, Term I at that round is non-positive. In the following, we show that selecting the optimal arm corresponding to any optimistic parameter can control the overall regret of Term I. The argument below is adapted from Abeille et al. (2017) with minimal changes; it is presented here for completeness.

For the purpose of this proof sketch, we assume that at each round , the safe decision set contains the previous safe action that the algorithm played, i.e., . However, for the formal proof in Appendix E.1, we do not need such an assumption. Let be a time such that , i.e., . Then, for any we have

(25)

The last inequality comes from the assumption that at each round, the safe decision set contains the safe actions played at earlier rounds. To continue from (25), we use Cauchy-Schwarz, and obtain

(26)

The last inequality comes from the fact that the Gram matrices form a non-decreasing sequence in the positive semidefinite order. Then, we define the ellipsoid such that

(27)

where

(28)

It is not hard to see, by combining Theorem 2.1 and the concentration property, that this holds with high probability. Hence, we can bound (26) using the triangle inequality as follows:

(29)
(30)

The last inequality comes from the fact that the relevant quantities are non-decreasing in $t$ by construction. Therefore, following the intuition of Abeille et al. (2017), we can upper bound Term I in terms of the weighted norm of the optimal safe action at time $t$ (see Section E.1 in the Appendix for the formal proof). Bounding this term is standard, based on the analysis provided in Abbasi-Yadkori et al. (2011) (see Proposition B.1 in the Appendix).

Appendix B Useful Results

The following result is standard and plays an important role in most proofs for linear bandit problems.

Proposition B.1.

(Abbasi-Yadkori et al. (2011)) Let . For any arbitrary sequence of actions , let be the corresponding Gram matrix (6), then

(31)

In particular, we have

(32)
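For reference, a standard form of this result, as it appears (up to constants) in Abbasi-Yadkori et al. (2011), is the following, assuming $\|x_t\|_2 \le L$ and regularization parameter $\lambda$:

$\sum_{t=1}^{T} \min\big\{1, \|x_t\|_{V_t^{-1}}^{2}\big\} \le 2 \log\frac{\det(V_{T+1})}{\det(\lambda I)} \le 2 d \log\Big(1 + \frac{T L^2}{\lambda d}\Big),$

and, by Cauchy-Schwarz,

$\sum_{t=1}^{T} \min\big\{1, \|x_t\|_{V_t^{-1}}\big\} \le \sqrt{2 d T \log\Big(1 + \frac{T L^2}{\lambda d}\Big)}.$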

Also, we recall Azuma’s concentration inequality for super-martingales.

Proposition B.2.

(Azuma’s inequality Boucheron et al. (2013)) If a super-martingale $(Y_t)_{t \ge 0}$ corresponding to a filtration $\mathcal{F}_t$ satisfies $|Y_t - Y_{t-1}| \le c$ for some positive constant $c$, for all $t$, then, for any $\alpha > 0$,

$\mathbb{P}\big(Y_T - Y_0 \ge \alpha\big) \le \exp\Big(-\frac{\alpha^2}{2 T c^2}\Big).$   (33)

Appendix C Confidence Regions

We start by constructing the following confidence regions for the RLS-estimates.

Definition C.1.

Let , , and . We define the following events:

  • is the event that the RLS-estimate concentrates around for all steps , i.e., ;

  • is the event that the RLS-estimate concentrates around , i.e., . Moreover, define such that .

  • is the event that the sampled parameter concentrates around for all steps , i.e., . Let be such that .

Lemma C.1.

Under Assumptions 1, 2, we have where , and .

Proof.

The proof is similar to the one in Lemma 1 of Abeille et al. (2017) and is omitted for brevity. ∎

Lemma C.2.

Under Assumptions 1, 2, we have , where .

Proof.

We show that . Then, from Lemma C.1 we know that , thus we can conclude that . Bounding comes directly from concentration inequality (16). Specifically,

Applying a union bound then ensures that

Appendix D Formal proof of Lemma 3.2

In this section, we provide a formal statement and a detailed proof of Lemma 3.2. Several modifications compared to Abeille et al. (2017) are required because, in our setting, actions belong to inner approximations of the true safe set $\mathcal{D}_0^s(\mu^*)$. Moreover, we follow an algebraic treatment that is perhaps simpler compared to the geometric viewpoint in Abeille et al. (2017).

Lemma D.1.

Let be the set of optimistic parameters, with , then , .

Proof.

First, we provide the shrunk version of as follows:

A shrunk safe decision set . Consider the enlarged confidence region centered at as

(34)

We know that