Safe Linear Thompson Sampling with Side Information
Abstract
The design and performance analysis of bandit algorithms in the presence of stagewise safety or reliability constraints has recently garnered significant interest. In this work, we consider the linear stochastic bandit problem under additional linear safety constraints that need to be satisfied at each round. We provide a new safe algorithm based on linear Thompson Sampling (TS) for this problem and show a frequentist regret of order $\widetilde{\mathcal{O}}(d^{3/2}\sqrt{T})$, which remarkably matches the results provided by Abeille et al. (2017) for the standard linear TS algorithm in the absence of safety constraints. We compare the performance of our algorithm with UCB-based safe algorithms and highlight how the inherently randomized nature of TS leads to a superior performance in expanding the set of safe actions the algorithm has access to at each round.
Ahmadreza Moradipari, Sanae Amani, Mahnoosh Alizadeh, Christos Thrampoulidis
Correspondence to: Ahmadreza Moradipari <ahmadreza_moradipari@ucsb.edu>
Machine Learning, ICML
1 Introduction
The application of stochastic bandit optimization algorithms to safety-critical systems has received significant attention in the past few years. In such cases, the learner repeatedly interacts with a system with uncertain reward function and operational constraints. In spite of this uncertainty, the learner needs to ensure that her actions do not violate the operational constraints at any round of the learning process. As such, especially in the earlier rounds, there is a need to choose actions with caution, while at the same time making sure that the chosen actions provide sufficient learning opportunities about the set of safe actions. Notably, the actions deemed safe by the algorithm might not originally include the optimal action. This uncertainty about safety and the resulting conservative behavior means the learner could experience additional regret in such constrained environments.
In this paper, we focus on a special class of stochastic bandit optimization problems where the reward is a linear function of the actions. This class of problems, referred to as linear stochastic bandits (LB), generalizes multi-armed bandit (MAB) problems to the setting where each action is associated with a feature vector, and the expected reward of playing each action is equal to the inner product of its feature vector and an unknown parameter vector. There exist several variants of LB that study finite Auer et al. (2002) or infinite Dani et al. (2008); Rusmevichientong & Tsitsiklis (2010); Abbasi-Yadkori et al. (2011) sets of actions, as well as the case where the set of feature vectors can change over time Chu et al. (2011); Li et al. (2010). Two efficient approaches have been developed for LB: linear UCB (LUCB) and linear Thompson Sampling (LTS). For LUCB, Abbasi-Yadkori et al. (2011) provides a regret bound of order $\mathcal{O}(d\sqrt{T}\log T)$. For LTS, Agrawal & Goyal (2013); Abeille et al. (2017) adopt a frequentist view and show a regret of order $\widetilde{\mathcal{O}}(d^{3/2}\sqrt{T})$. Here we provide an LTS algorithm that respects linear safety constraints and study its performance. We formally define the problem setting before summarizing our contributions.
1.1 Safe Stochastic Linear Bandit Model
Reward function. The learner is given a convex and compact set of actions $\mathcal{D} \subset \mathbb{R}^d$. At each round $t$, playing an action $x_t \in \mathcal{D}$ results in observing reward
$y_t = \langle \theta_*, x_t \rangle + \eta_t,$ (1)
where $\theta_* \in \mathbb{R}^d$ is a fixed, but unknown, parameter and $\eta_t$ is a zero-mean additive noise.
Safety constraint. We further assume that the environment is subject to a linear constraint:
$\langle \mu_*, x_t \rangle \le c,$ (2)
which needs to be satisfied by the action $x_t$ at every round $t$, to guarantee safe operation of the system. Here, $c$ is a positive constant that is known to the learner, while $\mu_* \in \mathbb{R}^d$ is a fixed, but unknown vector parameter. Let us denote the set of “safe actions” that satisfy the constraint (2) as follows:
$\mathcal{D}^s := \{x \in \mathcal{D} : \langle \mu_*, x \rangle \le c\}.$ (3)
Clearly, $\mathcal{D}^s$ is unknown to the learner, since $\mu_*$ is itself unknown. However, we consider a setting in which, at every round $t$, the learner receives side information about the safety set via noisy measurements:
$w_t = \langle \mu_*, x_t \rangle + \zeta_t,$ (4)
where $\zeta_t$ is zero-mean additive noise. During the learning process, the learner needs a mechanism that allows her to use the side measurements in (4) for determining the safe set $\mathcal{D}^s$. This is critical, since it is required (at least with high probability) that $x_t \in \mathcal{D}^s$ for all rounds $t$.
Regret. The cumulative pseudo-regret of the learner up to round $T$ is defined as $R_T = \sum_{t=1}^{T}\big(\langle \theta_*, x^* \rangle - \langle \theta_*, x_t \rangle\big)$, where $x^*$ is the optimal safe action that maximizes the expected reward over $\mathcal{D}^s$, i.e., $x^* = \arg\max_{x \in \mathcal{D}^s}\langle \theta_*, x \rangle$.
Learning goal. The learner’s objective is to control the growth of the pseudo-regret. Moreover, we require that the chosen actions are safe (i.e., they belong to $\mathcal{D}^s$ in (3)) with high probability. As is common, we use regret to refer to the pseudo-regret $R_T$.
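For concreteness, the pseudo-regret above can be computed directly once the environment is known; the following is a minimal numpy illustration (the function name is ours, not from the paper):

```python
import numpy as np

def pseudo_regret(theta_star, safe_actions, actions_played):
    """Cumulative pseudo-regret R_T = sum_t (<theta*, x*> - <theta*, x_t>),
    where x* is the best action within the TRUE safe set (not the full set)."""
    best = max(np.dot(theta_star, a) for a in safe_actions)
    return sum(best - np.dot(theta_star, x) for x in actions_played)
```

Note that the benchmark is the best action in the safe set $\mathcal{D}^s$, so an unsafe action with a higher raw reward never enters the comparison.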
1.2 Contributions
We provide the first safe LTS (SafeLTS) algorithm with provable regret guarantees for the linear bandit problem with linear safety constraints.
Our regret analysis shows that SafeLTS achieves the same order of regret $\widetilde{\mathcal{O}}(d^{3/2}\sqrt{T})$ as the original LTS (without safety constraints) as shown by Abeille et al. (2017). Hence, the dependence of the regret of SafeLTS on the time horizon $T$ cannot be improved modulo logarithmic factors (see lower bounds for LB in Dani et al. (2008); Rusmevichientong & Tsitsiklis (2010)).
We compare SafeLTS to the existing safe versions of LUCB for linear stochastic bandits with linear stagewise safety constraints. We show that our algorithm has: better regret in the worst-case, fewer parameters to tune and superior empirical performance.
1.3 Related Work
Safety – A diverse body of related works on stochastic optimization and control have considered the effect of safety constraints that need to be met during the run of the algorithm; see Aswani et al. (2013); Koller et al. (2018) and references therein. Closely related to our work, Sui et al. (2015, 2018) study nonlinear bandit optimization with nonlinear safety constraints using Gaussian processes (GPs) as nonparametric models for both the reward and the constraint functions. Their algorithms have shown great promise in robotics applications Ostafew et al. (2016); Akametalu et al. (2014). Without the GP assumption, Usmanova et al. (2019) proposes and analyzes a safe variant of the Frank-Wolfe algorithm to solve a smooth optimization problem with an unknown convex objective function and unknown linear constraints (with side information, similar to our setting). All the above algorithms come with provable convergence guarantees, but no regret bounds. To the best of our knowledge, the first work that derived an algorithm with provable regret guarantees for bandit optimization with stagewise safety constraints, as the ones imposed in the aforementioned works, is Amani et al. (2019). While Amani et al. (2019) restricts attention to a linear setting, their results reveal that the presence of the safety constraint –even though linear– can have a nontrivial effect on the performance of LUCB-type algorithms. Specifically, the proposed SafeLUCB algorithm comes with a problem-dependent regret bound that depends critically on the location of the optimal action in the safe action set – increasingly so in problem instances for which the safety constraint is active. In Amani et al. (2019), the linear constraint function involves the same unknown vector (say, $\theta_*$) as the one that specifies the linear reward. Instead, in Section 1.1 we allow the constraint to depend on a new parameter vector (say, $\mu_*$) to which the learner gets access via side-information measurements (4).
This latter setting is the direct linear analogue to that of Sui et al. (2015, 2018); Usmanova et al. (2019) and we demonstrate that an appropriate SafeLTS algorithm enjoys regret guarantees of the same order as the original LTS without safety constraints. A more elaborate discussion comparing our results to Amani et al. (2019) is provided in Section 4.3. We also mention Kazerouni et al. (2017) as another recent work on safe linear bandits. In contrast to the previously mentioned references, Kazerouni et al. (2017) defines safety as the requirement of ensuring that the cumulative (linear) reward up to each round stays above a given percentage of the performance of a known baseline policy. As a closing remark, Amani et al. (2019); Kazerouni et al. (2017); Usmanova et al. (2019) show that simple linear models for safety constraints might be directly relevant to several applications such as medical trials applications, recommendation systems or managing the customers’ demand in powergrid systems. Moreover, even in more complex settings where linear models do not directly apply (e.g., Ostafew et al. (2016); Akametalu et al. (2014)), we still believe that this simplification is an appropriate first step towards a principled study of the regret performance of safe algorithms in sequential decision settings.
Thompson Sampling – Even though TS-based algorithms Thompson (1933) are computationally easier to implement than UCB-based algorithms and have shown great empirical performance, they were largely ignored by the academic community until a few years ago, when a series of papers (e.g., Russo & Van Roy (2014); Abeille et al. (2017); Agrawal & Goyal (2012); Kaufmann et al. (2012)) showed that TS achieves optimal performance in both frequentist and Bayesian settings. Most of the literature focused on the analysis of the Bayesian regret of TS for general settings such as linear bandits or reinforcement learning (see e.g., Osband & Van Roy (2015)). More recently, Russo & Van Roy (2016); Dong & Van Roy (2018); Dong et al. (2019) provided an information-theoretic analysis of TS. Additionally, Gopalan & Mannor (2015) provides regret guarantees for TS in the finite and infinite MDP setting. Another notable paper is Gopalan et al. (2014), which studies the stochastic MAB problem in complex action settings providing a regret bound that scales logarithmically in time with improved constants. None of the aforementioned papers study the performance of TS for linear bandits with safety constraints.
2 Safe Linear Thompson Sampling
Our proposed algorithm is a safe variant of Linear Thompson Sampling (LTS). At any round $t$, given a regularized least-squares (RLS) estimate $\hat\theta_t$, the algorithm samples a perturbed parameter $\tilde\theta_t$ that is appropriately distributed to guarantee sufficient exploration. Considering this sampled $\tilde\theta_t$ as the true environment, the algorithm chooses the action with the highest possible reward while making sure that the safety constraint (2) holds. In order to ensure that actions remain safe at all rounds, the algorithm uses the side-information (4) to construct a confidence region $\mathcal{E}_t$, which contains the unknown parameter $\mu_*$ with high probability. With this, it forms an inner approximation of the safe set, which is composed of all actions that satisfy the safety constraint for all parameters in $\mathcal{E}_t$. The summary is presented in Algorithm 1 and a detailed description follows.
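Putting these ingredients together, one round of the algorithm can be sketched as follows for a finite action set (an illustrative numpy sketch under our own naming; the paper's algorithm operates on general convex compact sets and specifies the exact perturbation distribution in Definition 2.1):

```python
import numpy as np

def safe_lts_round(A, X, y, w, c, beta, nu, lam=1.0, rng=None):
    """One round of a Safe-LTS-style sketch over a finite action set A (k x d).

    X: past actions (t x d); y: past rewards (t,); w: past side measurements (t,).
    beta: confidence-ellipsoid radius; nu: TS perturbation scale; c: safety level.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = A.shape[1]
    V = lam * np.eye(d) + X.T @ X            # regularized Gram matrix
    theta_hat = np.linalg.solve(V, X.T @ y)  # RLS estimate of reward parameter
    mu_hat = np.linalg.solve(V, X.T @ w)     # RLS estimate of constraint parameter

    # Thompson sample: theta_tilde = theta_hat + beta * V^{-1/2} eta
    Vinv = np.linalg.inv(V)
    Vinv_sqrt = np.linalg.cholesky(Vinv)     # L with L @ L.T = V^{-1}
    theta_tilde = theta_hat + beta * Vinv_sqrt @ (nu * rng.standard_normal(d))

    # Inner approximation of the safe set: keep actions safe for EVERY mu in the
    # confidence ellipsoid, i.e. <mu_hat, x> + beta * ||x||_{V^{-1}} <= c.
    margins = A @ mu_hat + beta * np.sqrt(np.einsum('ij,jk,ik->i', A, Vinv, A))
    safe = margins <= c
    if not safe.any():
        raise ValueError("estimated safe set is empty; a known safe seed is needed")

    # Choose the estimated-safe action maximizing the sampled reward.
    idx = np.flatnonzero(safe)[np.argmax(A[safe] @ theta_tilde)]
    return idx, theta_hat, mu_hat
```

The finite-set restriction and the Gaussian perturbation are simplifications for readability; the point is the interplay of the two estimates, the sampled parameter, and the conservative safe-set test.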
2.1 Model assumptions
Notation. $[T]$ denotes the set $\{1, \dots, T\}$. The Euclidean norm of a vector $x$ is denoted by $\|x\|$. Its weighted norm with respect to a positive semidefinite matrix $V$ is denoted by $\|x\|_V := \sqrt{x^\top V x}$. We also use the standard $\widetilde{\mathcal{O}}$ notation that ignores polylogarithmic factors. Finally, for ease of notation, from now onwards we refer to the safe decision set in (12) by $\mathcal{D}_t^s$ and drop the explicit dependence on the confidence region.
Let $\mathcal{F}_t$ denote the filtration that represents the accumulated information up to round $t$. In the following, we introduce standard assumptions on the problem.
Assumption 1.
For all $t$, $\eta_t$ and $\zeta_t$ are conditionally zero-mean, $R$-sub-Gaussian noise variables, i.e., $\mathbb{E}[\eta_t \mid \mathcal{F}_{t-1}] = \mathbb{E}[\zeta_t \mid \mathcal{F}_{t-1}] = 0$, and $\mathbb{E}[e^{\lambda \eta_t} \mid \mathcal{F}_{t-1}] \le e^{\lambda^2 R^2/2}$, $\mathbb{E}[e^{\lambda \zeta_t} \mid \mathcal{F}_{t-1}] \le e^{\lambda^2 R^2/2}$, $\forall \lambda \in \mathbb{R}$.
Assumption 2.
There exists a positive constant $S$ such that $\|\theta_*\| \le S$ and $\|\mu_*\| \le S$.
Assumption 3.
The action set $\mathcal{D}$ is a compact and convex subset of $\mathbb{R}^d$ that contains the origin. We assume $\|x\| \le L$, $\forall x \in \mathcal{D}$.
It is straightforward to generalize our results to the case where the sub-Gaussian constants of $\eta_t$ and $\zeta_t$ and/or the upper bounds on $\|\theta_*\|$ and $\|\mu_*\|$ are different. Throughout, we assume they are equal, for brevity.
2.2 Algorithm description and discussion
Let $(x_1, \dots, x_t)$ be the sequence of actions and $(y_1, \dots, y_t)$ and $(w_1, \dots, w_t)$ be the corresponding rewards and side-information measurements, respectively. For any $t$, we can obtain RLS-estimates $\hat\theta_t$ of $\theta_*$ and $\hat\mu_t$ of $\mu_*$ as follows:
$\hat\theta_t = V_t^{-1} \textstyle\sum_{s=1}^{t} y_s x_s,$ (5)
$\hat\mu_t = V_t^{-1} \textstyle\sum_{s=1}^{t} w_s x_s,$ (6)
where $V_t = \lambda I + \sum_{s=1}^{t} x_s x_s^\top$ is the (regularized) Gram matrix of the actions. Based on $\hat\theta_t$ and $\hat\mu_t$, Algorithm 1 constructs two confidence regions $\mathcal{C}_t$ and $\mathcal{E}_t$ as follows:
$\mathcal{C}_t = \{\theta \in \mathbb{R}^d : \|\theta - \hat\theta_t\|_{V_t} \le \beta_t\},$ (7)
$\mathcal{E}_t = \{\mu \in \mathbb{R}^d : \|\mu - \hat\mu_t\|_{V_t} \le \beta_t\}.$ (8)
Notice that $\mathcal{C}_t$ and $\mathcal{E}_t$ both depend on the radius $\beta_t$, but we suppress this in the notation for simplicity, when clear from context. The ellipsoid radius $\beta_t$ is chosen according to Theorem 2.1 in Abbasi-Yadkori et al. (2011) in order to guarantee that $\theta_* \in \mathcal{C}_t$ and $\mu_* \in \mathcal{E}_t$ with high probability.
Theorem 2.1.
(Abbasi-Yadkori et al. (2011); restated) Let Assumptions 1–3 hold. For any $\delta \in (0,1)$, with probability at least $1-\delta$, for all $t \ge 1$, $\|\hat\theta_t - \theta_*\|_{V_t} \le \beta_t$ and $\|\hat\mu_t - \mu_*\|_{V_t} \le \beta_t$, where
$\beta_t = R\sqrt{d \log\big((1 + tL^2/\lambda)/\delta\big)} + \sqrt{\lambda}\, S.$ (9)
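The standard closed-form choice of the radius from the self-normalized bound of Abbasi-Yadkori et al. (2011) can be sketched as follows (parameter names are ours; $R$ is the sub-Gaussian constant, $S$ the norm bound, $L$ the action-norm bound, $\lambda$ the regularizer):

```python
import numpy as np

def confidence_radius(t, d, R=1.0, S=1.0, L=1.0, lam=1.0, delta=0.01):
    """Ellipsoid radius beta_t: with probability >= 1 - delta,
    ||theta_hat_t - theta*||_{V_t} <= beta_t for all t >= 1
    (upper-bound form of Abbasi-Yadkori et al. (2011), Theorem 2)."""
    return R * np.sqrt(d * np.log((1 + t * L**2 / lam) / delta)) + np.sqrt(lam) * S
```

The radius grows only logarithmically in $t$, which is what keeps the confidence regions, and hence the inner safe-set approximation below, from becoming vacuous over time.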
Background on LTS: a frequently optimistic algorithm
Our algorithm inherits the frequentist view of LTS first introduced in Agrawal & Goyal (2013); Abeille et al. (2017), which is essentially defined as a randomized algorithm over the RLS-estimate of the unknown parameter $\theta_*$. Specifically, at any round $t$, the randomized algorithm of Agrawal & Goyal (2013); Abeille et al. (2017) samples a parameter $\tilde\theta_t$ centered at $\hat\theta_t$:
$\tilde\theta_t = \hat\theta_t + \beta_t V_t^{-1/2} \eta_t,$ (10)
where $\eta_t$ is a random perturbation, and chooses the action that is best with respect to the new sampled parameter, i.e., maximizes the objective $\langle \tilde\theta_t, x \rangle$. The key idea of Agrawal & Goyal (2013); Abeille et al. (2017) on how to select the random perturbation $\eta_t$ to guarantee good regret performance is as follows. On the one hand, $\tilde\theta_t$ must stay close enough to the RLS-estimate $\hat\theta_t$ so that $\langle \tilde\theta_t, x \rangle$ is a good proxy for the true (but unknown) reward $\langle \theta_*, x \rangle$. Thus, $\eta_t$ must satisfy an appropriate concentration property. On the other hand, $\tilde\theta_t$ must also favor exploration, in the sense that it leads –often enough– to actions that are optimistic, i.e., they satisfy
$\langle \tilde\theta_t, x_t \rangle \ge \langle \theta_*, x^* \rangle.$ (11)
Thus, $\eta_t$ must satisfy an appropriate anti-concentration property.
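The sampling step (10), perturbing the RLS estimate through the inverse square root of the Gram matrix, can be sketched as follows (a numpy illustration with our own naming; it uses the symmetric matrix square root, so the sampled parameter's $V$-weighted distance from the estimate equals the perturbation's Euclidean norm times the radius):

```python
import numpy as np

def sample_ts_parameter(theta_hat, V, beta, eta):
    """LTS sampling step: theta_tilde = theta_hat + beta * V^{-1/2} eta.
    Using the symmetric square root of V guarantees
    ||theta_tilde - theta_hat||_V = beta * ||eta||."""
    w, Q = np.linalg.eigh(V)                     # V = Q diag(w) Q^T, w > 0
    V_inv_sqrt = Q @ np.diag(w ** -0.5) @ Q.T    # symmetric V^{-1/2}
    return theta_hat + beta * V_inv_sqrt @ eta
```

This identity is what lets the concentration/anti-concentration requirements be stated directly on $\eta_t$ rather than on $\tilde\theta_t$.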
Our proposed Algorithm 1 also builds on these two key ideas. However, we discuss next how the safe setting imposes additional challenges and how our algorithm and its analysis manage to address them.
Addressing challenges in the safe setting
Compared to the classical linear bandit setting studied in Agrawal & Goyal (2013); Abeille et al. (2017), the presence of the safety constraint raises the following two questions:
(i) How can we guarantee that the actions played at each round are safe?
(ii) In the face of the safety restrictions, how can optimism (cf. (11)) be maintained?
In the rest of this section, we explain the mechanisms that SafeLTS (Algorithm 1) employs to address both of these challenges.
Safety – First, the chosen action $x_t$ at each round must not only maximize $\langle \tilde\theta_t, x \rangle$, but it must also be safe. Since the learner does not know the safe action set $\mathcal{D}^s$, Algorithm 1 performs conservatively and guarantees safety as follows. After creating the confidence region $\mathcal{E}_t$ around the RLS-estimate $\hat\mu_t$, it forms the so-called safe decision set at round $t$, denoted as $\mathcal{D}_t^s$:
$\mathcal{D}_t^s := \{x \in \mathcal{D} : \langle \mu, x \rangle \le c, \ \forall \mu \in \mathcal{E}_t\}.$ (12)
Then, the chosen action is optimized over only the subset $\mathcal{D}_t^s$, i.e.,
$x_t = \arg\max_{x \in \mathcal{D}_t^s} \langle \tilde\theta_t, x \rangle.$ (13)
We make the following two important remarks about the set $\mathcal{D}_t^s$ defined in (12). On a positive note, $\mathcal{D}_t^s$ is easy to compute. To see this, note the following equivalent definition:
$\mathcal{D}_t^s = \{x \in \mathcal{D} : \langle \hat\mu_t, x \rangle + \beta_t \|x\|_{V_t^{-1}} \le c\}.$ (14)
Thus, in view of (14), the optimization in (13) is an efficiently solvable convex program (a linear objective subject to the convex constraint in (14)). The challenge with $\mathcal{D}_t^s$ is that it contains only actions that are safe with respect to all the parameters in $\mathcal{E}_t$, and not only $\mu_*$. As such, it is only an inner approximation of the true safe set $\mathcal{D}^s$. As we will see next, this fact complicates the requirement for optimism.
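The equivalence behind (14) is the Cauchy-Schwarz characterization of the support function of an ellipsoid: the worst-case constraint value over the confidence region has a closed form. A small numpy sketch (our own illustration, with a Monte-Carlo sanity check in mind):

```python
import numpy as np

def worst_case_constraint(x, mu_hat, V, beta):
    """By Cauchy-Schwarz,
        max_{mu : ||mu - mu_hat||_V <= beta} <mu, x>
          = <mu_hat, x> + beta * ||x||_{V^{-1}},
    so x is deemed safe iff this value is at most c."""
    return float(mu_hat @ x + beta * np.sqrt(x @ np.linalg.solve(V, x)))
```

No sampled parameter in the ellipsoid can produce a larger constraint value, which is exactly why testing this single scalar suffices for the conservative safe-set test.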
Optimism in the face of safety – As previously discussed, in order to guarantee safety, Algorithm 1 chooses actions from the subset $\mathcal{D}_t^s$. This is only an inner approximation of the true safe set $\mathcal{D}^s$, a fact that makes it harder to maintain optimism as defined in (11). To see this, note that in the classical setting of Abeille et al. (2017), their algorithm would choose $x_t$ as the action that maximizes $\langle \tilde\theta_t, x \rangle$ over the entire set $\mathcal{D}$. In turn, this would imply that $\langle \tilde\theta_t, x_t \rangle \ge \langle \tilde\theta_t, x^* \rangle$, because $x^*$ belongs to the feasible set $\mathcal{D}$. This observation is the critical first argument in proving that $(\tilde\theta_t, x_t)$ is optimistic often enough, i.e., (11) holds with fixed probability $p$.
Unfortunately, in the presence of safety constraints, the action $x_t$ is a maximizer over only the subset $\mathcal{D}_t^s$. Since $x^*$ may not lie within $\mathcal{D}_t^s$, there is no guarantee that $\langle \tilde\theta_t, x_t \rangle \ge \langle \tilde\theta_t, x^* \rangle$ as before. So, how does one then guarantee optimism?
Intuitively, in the first rounds, the estimated safe set $\mathcal{D}_t^s$ is only a small subset of the true $\mathcal{D}^s$. Thus, the actions within it have small norm compared to that of $x^*$. Thus, for (11) to hold, it must be that $\tilde\theta_t$ is not only aligned with the direction of $\theta_*$, but it also has larger norm than that. To satisfy this latter requirement, the random vector $\eta_t$ must be large; hence, it will “anti-concentrate more”. As the algorithm progresses, and –thanks to side-information measurements– the set $\mathcal{D}_t^s$ becomes an increasingly better approximation of $\mathcal{D}^s$, the requirements on anti-concentration of $\eta_t$ become the same as if no safety constraints were present. Overall, at least intuitively, we might hope that optimism is possible in the face of safety, but only provided that $\eta_t$ is set to satisfy a stronger (at least in the first rounds) anti-concentration property than that required by Abeille et al. (2017) in the classical setting.
At the heart of Algorithm 1 and its proof of regret lies an analytic argument that materializes the intuition described above. Specifically, we will prove that optimism is possible in the presence of safety at the cost of a stricter anti-concentration property compared to that specified in Abeille et al. (2017). While the proof of this fact is deferred to Section 3.1, we now summarize the appropriate distributional properties that provably guarantee good regret performance of Algorithm 1 in the safe setting.
Definition 2.1.
In Algorithm 1, the random vector $\eta_t$ is sampled IID at each round from a multivariate distribution on $\mathbb{R}^d$ that is absolutely continuous with respect to the Lebesgue measure and satisfies the following properties:
Anti-concentration: There exists a strictly positive probability $p$ such that for any $u \in \mathbb{R}^d$ with $\|u\| = 1$,
$\mathbb{P}\Big(\langle u, \eta \rangle \ge 1 + \frac{2SL}{c}\Big) \ge p.$ (15)
Concentration: There exist positive constants $c_1, c_2$ such that $\forall \delta \in (0,1)$,
$\mathbb{P}\Big(\|\eta\| \le \Big(1 + \frac{2SL}{c}\Big)\sqrt{c_1 d \log \frac{c_2 d}{\delta}}\Big) \ge 1 - \delta.$ (16)
In particular, the difference to the distributional assumptions required by Abeille et al. (2017) in the classical setting is the extra additive term in (15) (naturally, the same term affects the concentration property (16)).
Our proof of regret in Section 3 shows that this extra term captures an appropriate notion of the distance between the inner approximation $\mathcal{D}_t^s$ (where $x_t$ lives) and the true safe set $\mathcal{D}^s$ (where $x^*$ lives), and provides enough exploration for the sampled parameter $\tilde\theta_t$ so that actions in $\mathcal{D}_t^s$ can be optimistic. While this intuition can possibly explain the need for an additive term in Definition 2.1, it is insufficient when it comes to determining what the “correct” value for it should be. This is determined by our analytic treatment in Section 3.1.
As a closing remark of this section, note that the properties in Definition 2.1 are satisfied by a number of easy-to-sample-from distributions. For instance, they are satisfied by a multivariate zero-mean IID Gaussian distribution, provided that the (possibly time-dependent) common variance of its entries is chosen large enough.
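As a quick sanity check of the anti-concentration property, one can estimate the tail probability $\mathbb{P}(\langle u, \eta \rangle \ge \tau)$ for a Gaussian perturbation by Monte Carlo. The sketch below is our own illustration (not part of the paper's analysis); it exploits rotational invariance, so $u$ can be taken as the first basis vector:

```python
import numpy as np

def anticoncentration_prob(threshold, scale, n=200_000, seed=0):
    """Monte-Carlo estimate of p = P(<u, eta> >= threshold) for
    eta ~ N(0, scale^2 I) and any unit vector u. By rotational invariance
    <u, eta> ~ N(0, scale^2), so a 1-D sample suffices."""
    rng = np.random.default_rng(seed)
    return float(np.mean(scale * rng.standard_normal(n) >= threshold))
```

Inflating `scale` raises the tail mass at any fixed positive threshold, which is exactly how a Gaussian can be made to satisfy the stricter anti-concentration requirement of the safe setting.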
3 Regret Analysis
In this section, we present our main result, a tight regret bound for SafeLTS, and discuss key proof ideas.
Since $\mu_*$ is unknown, the learner does not know the safe action set $\mathcal{D}^s$. Therefore, in order to satisfy the safety constraint (2), Algorithm 1 chooses actions from $\mathcal{D}_t^s$, which is a conservative inner approximation of $\mathcal{D}^s$. Moreover, at the heart of the action selection rule is the sampling of an appropriate random perturbation of the RLS estimate $\hat\theta_t$. The sampling rule in (10) is almost the same as in the classical setting Abeille et al. (2017), but with a key difference in the distribution of the perturbation vector $\eta_t$ (cf. Definition 2.1). As explained in Section 3.1, the modification compared to Abeille et al. (2017) is necessary to guarantee that actions are frequently optimistic (see (24)) in spite of limitations on actions imposed by the safety constraints. In this section, we put these pieces together by proving that the action selection rule of SafeLTS is simultaneously: 1) frequently optimistic, and, 2) guarantees a proper expansion of the estimated safe set. Our main result, stated as Theorem 3.1, is perhaps surprising: in spite of the additional safety constraints, SafeLTS has regret that is order-wise the same as that in the classical setting Agrawal & Goyal (2013); Abeille et al. (2017).
Theorem 3.1 (Regret of SafeLTS; Informal).
Let Assumptions 1–3 hold and let $\eta_t$ be sampled from a distribution satisfying Definition 2.1. Then, for any $\delta \in (0,1)$, with probability at least $1-\delta$, the actions of SafeLTS are safe for all rounds $t \in [T]$ and its regret satisfies
$R_T = \widetilde{\mathcal{O}}\big(d^{3/2}\sqrt{T}\big).$ (17)
A detailed proof of Theorem 3.1 is deferred to Appendix E. In the rest of the section, we highlight the key changes compared to previous proofs in Agrawal & Goyal (2013); Abeille et al. (2017) that occur due to the safety constraint. To begin, let us consider the following standard decomposition of the cumulative regret $R_T$:
$R_T = \underbrace{\sum_{t=1}^{T}\big(\langle \theta_*, x^* \rangle - \langle \tilde\theta_t, x_t \rangle\big)}_{\text{Term I}} + \underbrace{\sum_{t=1}^{T}\big(\langle \tilde\theta_t, x_t \rangle - \langle \theta_*, x_t \rangle\big)}_{\text{Term II}}.$ (18)
Regarding Term II, the concentration property of $\eta_t$ guarantees that $\tilde\theta_t$ is close to $\hat\theta_t$, and consequently, close to $\theta_*$ thanks to Theorem 2.1. Therefore, controlling Term II can be done similarly to previous works, e.g., Abbasi-Yadkori et al. (2011); Abeille et al. (2017); see Appendix E.2 for more details. Next, we focus on Term I.
To see how the safety constraints affect the proofs, let us first review the treatment of Term I in the classical setting. For UCB-type algorithms, Term I is always nonpositive, since the pair $(\tilde\theta_t, x_t)$ is optimistic at each round by design Dani et al. (2008); Rusmevichientong & Tsitsiklis (2010); Abbasi-Yadkori et al. (2011). For LTS, Term I can be positive; that is, (11) may not hold at every round $t$. However, Agrawal & Goyal (2013); Abeille et al. (2017) proved that, thanks to the anti-concentration property of $\eta_t$, optimism occurs often enough. Moreover, this is enough to yield a good enough bound on Term I for every round $t$; see Appendix A.
As discussed in Section 2.2.2, the requirement for safety complicates the requirement for optimism. Our main technical contribution, detailed in the next section, is to show that the properly modified anti-concentration property in Definition 2.1, together with the construction of approximated safe sets as in (14), can yield frequently optimistic actions even in the face of safety. Specifically, it is the extra term in (15) that allows enough exploration for the sampled parameter $\tilde\theta_t$ to compensate for the safety limitations on the chosen actions; because of this, we are able to show that SafeLTS obtains the same order of regret as that of Abeille et al. (2017).
3.1 Proof sketch: Optimism despite safety constraints
We prove that SafeLTS samples a parameter that is optimistic with constant probability. The next lemma informally characterizes this claim (see the formal statement of the lemma and its proof in Appendix D).
Lemma 3.2.
(Optimism in the face of safety; Informal) For any round $t$, SafeLTS samples a parameter $\tilde\theta_t$ and chooses an action $x_t$ such that the pair $(\tilde\theta_t, x_t)$ is optimistic frequently enough, i.e.,
$\mathbb{P}\big(\langle \tilde\theta_t, x_t \rangle \ge \langle \theta_*, x^* \rangle \,\big|\, \mathcal{F}_{t-1}\big) \ge p,$ (19)
where $p$ is the probability with which the anti-concentration property (15) holds.
The challenge in the proof is that the actions are chosen from the estimated safe set $\mathcal{D}_t^s$, which does not necessarily contain all feasible actions and hence, may not contain $x^*$. Therefore, we need a mechanism to control the distance of the optimal action $x^*$ from the optimistic actions that can only lie within the subset $\mathcal{D}_t^s$ (distance is defined here in terms of an inner product with the optimistic parameters $\tilde\theta_t$). Unfortunately, we do not have direct control on this distance term, and so at the heart of the proof lies the idea of identifying a “good” feasible action whose distance to $x^*$ is easier to control.
To be concrete, we show that it suffices to choose the good feasible point in the direction of $x^*$, i.e., $\tilde{x}_t = \alpha_t x^*$, where the key parameter $\alpha_t \in (0, 1]$ must be set to satisfy $\tilde{x}_t \in \mathcal{D}_t^s$. Naturally, the value of $\alpha_t$ is determined by the approximated safe set $\mathcal{D}_t^s$ as defined in (14). The challenge though is that we do not know how the value of $\langle \hat\mu_t, x^* \rangle$ compares to the constant $c$. We circumvent this issue by introducing an enlarged confidence region centered at $\mu_*$ as
$\widehat{\mathcal{E}}_t := \{\mu \in \mathbb{R}^d : \|\mu - \mu_*\|_{V_t} \le 2\beta_t\},$
and the corresponding shrunk safe decision set as
$\widehat{\mathcal{D}}_t^s := \{x \in \mathcal{D} : \langle \mu_*, x \rangle + 2\beta_t \|x\|_{V_t^{-1}} \le c\}.$ (20)
Notice that the shrunk safe set is defined with respect to an ellipsoid centered at $\mu_*$ (rather than at $\hat\mu_t$). This is convenient since, by the triangle inequality and Theorem 2.1, $\mathcal{E}_t \subseteq \widehat{\mathcal{E}}_t$ and thus $\widehat{\mathcal{D}}_t^s \subseteq \mathcal{D}_t^s$. Using this, it can be easily checked that the following choice of $\alpha_t$:
$\alpha_t = \frac{c}{c + 2\beta_t \|x^*\|_{V_t^{-1}}}$ (21)
ensures that
$\alpha_t x^* \in \widehat{\mathcal{D}}_t^s \subseteq \mathcal{D}_t^s.$
From this, and the optimality of $x_t$ in (13), we have that:
$\langle \tilde\theta_t, x_t \rangle \ge \alpha_t \langle \tilde\theta_t, x^* \rangle.$ (22)
Next, using (22), in order to show that (19) holds, it suffices to prove that
$\mathbb{P}\big(\alpha_t \langle \tilde\theta_t, x^* \rangle \ge \langle \theta_*, x^* \rangle \,\big|\, \mathcal{F}_{t-1}\big) \ge p.$
To continue, recall that $\tilde\theta_t = \hat\theta_t + \beta_t V_t^{-1/2} \eta_t$. Thus, we want to lower bound the probability of the following event:
$\alpha_t \big(\langle \hat\theta_t, x^* \rangle + \beta_t \langle V_t^{-1/2} x^*, \eta_t \rangle\big) \ge \langle \theta_*, x^* \rangle.$
To simplify the above, we use the following two facts: (i) $1/\alpha_t - 1 = 2\beta_t \|x^*\|_{V_t^{-1}}/c$, by the definition of $\alpha_t$ in (21); (ii) $\langle \hat\theta_t, x^* \rangle \ge \langle \theta_*, x^* \rangle - \beta_t \|x^*\|_{V_t^{-1}}$, because of Cauchy-Schwarz and Theorem 2.1. Put together, it suffices that
$\beta_t \langle V_t^{-1/2} x^*, \eta_t \rangle \ge \beta_t \|x^*\|_{V_t^{-1}} + \frac{2\beta_t \|x^*\|_{V_t^{-1}}}{c} \langle \theta_*, x^* \rangle,$
or equivalently, dividing both sides by $\beta_t \|x^*\|_{V_t^{-1}}$,
$\mathbb{P}\Big(\langle u, \eta_t \rangle \ge 1 + \frac{2\langle \theta_*, x^* \rangle}{c}\Big) \ge p,$ (23)
where we have defined $u := V_t^{-1/2} x^* / \|x^*\|_{V_t^{-1}}$. By definition of $u$, note that $\|u\| = 1$; moreover, $\langle \theta_*, x^* \rangle \le SL$. Hence, the desired (23) holds due to the anti-concentration property of the distribution in (15). This completes the proof of Lemma 3.2.
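The scaling argument above can be checked numerically. The sketch below (our own naming; it mirrors the choice of the scaling parameter for the shrunk safe set) verifies that scaling a truly safe action by $c/(c + 2\beta\|x^*\|_{V^{-1}})$ keeps it feasible for the conservative constraint:

```python
import numpy as np

def shrink_to_safe(x_star, mu_star, V, beta, c):
    """Scale x* by alpha = c / (c + 2*beta*||x*||_{V^{-1}}). Whenever
    <mu*, x*> <= c, the scaled point satisfies the conservative test
    <mu*, alpha x*> + 2*beta*||alpha x*||_{V^{-1}} <= c, i.e. it lies in
    the shrunk safe set."""
    assert mu_star @ x_star <= c, "x* must be safe for the true parameter"
    norm = np.sqrt(x_star @ np.linalg.solve(V, x_star))
    alpha = c / (c + 2 * beta * norm)
    slack = mu_star @ (alpha * x_star) + 2 * beta * alpha * norm
    assert slack <= c + 1e-9  # feasibility of the scaled point
    return alpha * x_star
```

The algebra is one line: the scaled constraint value equals $\alpha(\langle \mu_*, x^* \rangle + 2\beta\|x^*\|_{V^{-1}}) \le \alpha(c + 2\beta\|x^*\|_{V^{-1}}) = c$.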
Before closing, we remark on the following differences to the proof of optimism in the classical setting as presented in Lemma 3 of Abeille et al. (2017). First, we present an algebraic version of the basic machinery introduced in Section 5 of Abeille et al. (2017), which we show is convenient to extend to the safe setting. Second, we employ the idea of relating $x^*$ to a “better” feasible point $\alpha_t x^*$ and show optimism for the latter. Third, even after introducing $\alpha_t$, the fact that $1/\alpha_t - 1$ is proportional to $\beta_t \|x^*\|_{V_t^{-1}}$ (see (21)) is critical for the seemingly simple algebraic steps that follow (21). In particular, in deducing (23) from the expression above, note that we have divided both sides in the probability term by $\beta_t \|x^*\|_{V_t^{-1}}$. It is only thanks to this proportionality that the term cancels throughout and we can conclude with (23) without needing to lower bound the minimum eigenvalue of the Gram matrix (which is known to be hard).
4 Numerical Results and Comparison to State of the Art
We present details of our numerical experiments on synthetic data. First, we show how the presence of safety constraints affects the performance of LTS in terms of regret. Next, we evaluate SafeLTS by comparing it against safe versions of LUCB. Then, we compare SafeLTS to Amani et al. (2019)’s SafeLUCB. In all the implementations, we used: , and . Unless otherwise specified, the reward and constraint parameters and are drawn from ; is drawn uniformly from . Throughout, we have implemented a modified version of SafeLUCB which uses norms instead of norms, due to computational considerations (e.g., Dani et al. (2008); Amani et al. (2019)). Recall that the action selection rule of UCBbased algorithms involves solving bilinear optimization problems, whereas, TSbased algorithms (such as the one proposed here) involve simple linear objectives (see Abeille et al. (2017)).
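To illustrate the computational point made above: for a polytopic action set, the TS action selection (13) is a single linear program, whereas UCB-type rules require optimizing jointly over the action and the confidence ellipsoid. A sketch assuming scipy is available (function name is ours):

```python
import numpy as np
from scipy.optimize import linprog

def ts_action_polytope(theta_tilde, A_ub, b_ub):
    """TS action selection over a polytope {x : A_ub @ x <= b_ub} is one LP:
    maximize <theta_tilde, x>. (linprog minimizes, so negate the objective;
    its default bounds are x >= 0, so we lift them explicitly.)"""
    res = linprog(c=-theta_tilde, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * len(theta_tilde))
    assert res.success
    return res.x
```

In contrast, a faithful UCB implementation must solve a bilinear problem in $(\theta, x)$, which is why the experiments use a modified, computationally friendlier version of SafeLUCB.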
4.1 The effect of safety constraints on LTS
In Fig. 1 we compare the average cumulative regret of SafeLTS to the standard LTS algorithm with oracle access to the true safe set $\mathcal{D}^s$. The results are averages over 20 problem realizations. As shown, even though SafeLTS requires that chosen actions belong to the conservative inner-approximation set $\mathcal{D}_t^s$, it still achieves a regret of the same order as the oracle, reaffirming the prediction of Theorem 3.1. Also, the comparison to the oracle reveals that the action selection rule of SafeLTS is indeed such that it guarantees a fast expansion of the estimated safe set, so as to not exclude optimistic actions for a long time. Fig. 1 also shows the performance of a third algorithm discussed in Sec. 4.4.
4.2 Comparison to safe versions of LUCB
Here, we compare the performance of our algorithm with two safe versions of LUCB, as follows. First, we implement a natural extension of the classical LUCB algorithm in Dani et al. (2008), which we call “Naive SafeLUCB” and which respects safety constraints by choosing actions from the estimated safe set in (12). Second, we consider an improved version, which we call “Inflated Naive SafeLUCB” and which is motivated by our analysis of SafeLTS. Specifically, in light of Lemma 3.2, we implement the improved LUCB algorithm with an inflated confidence ellipsoid in order to favor optimistic exploration. In Fig. 2, we employ these two algorithms for a specific problem instance (details in Appendix F), showing that both can fail to match the regret of SafeLTS in general. Further numerical simulations suggest that while SafeLTS always outperforms Naive SafeLUCB, the Inflated Naive SafeLUCB can have superior performance to SafeLTS in many problem instances (see Fig. 6 in Appendix F). Unfortunately, not only is this not always the case (cf. Fig. 2), but also we are not aware of an appropriate modification to our proofs that would establish this problem-dependent performance. This being said, further investigations in this direction might be of interest.
4.3 Comparison to SafeLUCB
In this section we compare our algorithm and results to the SafeLUCB algorithm of Amani et al. (2019), which was proposed for a similar, but nonidentical setting. Specifically, in Amani et al. (2019), the linear safety constraint involves the same unknown parameter vector $\theta_*$ as the linear reward function and –in our notation– it takes the form $\langle B\theta_*, x_t \rangle \le c$, for some known matrix $B$. As such, no side-information measurements are needed. First, while our proof does not show a regret of $\widetilde{\mathcal{O}}(d^{3/2}\sqrt{T})$ for the setting of Amani et al. (2019) in the general case, it does so for special cases. For example, it is not hard to see that our proofs readily extend to their setting when $B = I$. This already improves upon the $\widetilde{\mathcal{O}}(T^{2/3})$ guarantee provided by Amani et al. (2019). Indeed, for $B = I$, there are nontrivial instances where $\langle B\theta_*, x^* \rangle = c$ (i.e., the safety constraint is active), in which SafeLUCB suffers from an $\widetilde{\mathcal{O}}(T^{2/3})$ bound Amani et al. (2019). Second, while our proof adapts to a special case of Amani et al. (2019)’s setting, the other way around is not true, i.e., it is not obvious how one would modify the proof of Amani et al. (2019) to obtain an improved guarantee even in the presence of side information. This point is highlighted by Fig. 3, which numerically compares the two algorithms for a specific problem instance with side information (note that the constraint is active at the optimal action). Also, see Section F for a numerical comparison of the estimated safe sets’ expansion for the two algorithms. Fig. 4 compares SafeLTS against SafeLUCB and Naive SafeLUCB over 30 problem realizations (see Section F in the Appendix for plots with standard deviation). As already pointed out in Amani et al. (2019), Naive SafeLUCB generally leads to poor regret, since the LUCB action selection rule alone does not provide sufficient exploration towards safe-set expansion. In contrast, SafeLUCB is equipped with a pure exploration phase over a given seed safe set, which is shown to lead to proper safe-set expansion.
Our paper reveals that the inherently randomized nature of SafeLTS is alone capable of properly expanding the safe set, without the need for an explicit initialization phase (during which regret grows linearly).
4.4 Sampling from a dynamic noise distribution
In order for SafeLTS to be frequently optimistic, our theory requires that the random perturbation $\eta_t$ satisfies (15) for all rounds. Specifically, we need the extra additive term compared to Abeille et al. (2017) in order to ensure safe-set expansion. While this result is already sufficient for the tight regret guarantees of Theorem 3.1, it does not fully capture our intuition (see also Sec. 2.2.2) that as the algorithm progresses and $\mathcal{D}_t^s$ gets closer to $\mathcal{D}^s$, exploration (and thus, the requirement on anti-concentration) does not need to be so aggressive. Based on this intuition, we propose the following heuristic modification, in which SafeLTS uses a perturbation whose anti-concentration margin is scaled by a linearly decreasing function of the round index. Fig. 1 shows empirical evidence of the superiority of the heuristic.
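A minimal sketch of such a linearly decreasing schedule for the perturbation scale (the endpoints and the function name are illustrative choices of ours, not values from the paper):

```python
def dynamic_scale(t, T, scale_hi, scale_lo):
    """Linearly interpolate the perturbation scale from scale_hi (round 0,
    aggressive exploration while the estimated safe set is small) down to
    scale_lo (round T, the classical level)."""
    frac = min(t / T, 1.0)
    return scale_hi + frac * (scale_lo - scale_hi)
```

The schedule replaces the fixed inflated scale required by Definition 2.1; this is exactly why it is a heuristic, since the theory currently requires the stronger anti-concentration at every round.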
5 Conclusion
We studied LB in which the environment is subject to unknown linear safety constraints that need to be satisfied at each round. For this problem, we proposed SafeLTS, which, to the best of our knowledge, is the first safe TS algorithm with provable regret guarantees. Importantly, we show that SafeLTS achieves regret of the same order as in the original setting with no safety constraints. We have also compared SafeLTS with several UCB-type safe algorithms, showing that the former has: better regret in the worst-case, fewer parameters to tune and often superior empirical performance. Interesting directions for future work include: theoretically studying the dynamic property of Section 4.4, as well as investigating TS-based alternatives to the GP-UCB-type algorithms of Sui et al. (2015, 2018).
Appendix A Proof sketch: Why frequent optimism is enough to bound Term I
As discussed in Section 3, the presence of the safety constraints complicates the requirement for optimism. We show in Section 3.1 that Safe-LTS is optimistic with constant probability in spite of the safety constraints. Based on this, we complete the sketch of the proof here by showing that we can bound the overall regret of Term I in (18) by the norm of optimistic (and, in our case, safe) actions. Throughout, let $J_t(\theta) := \max_{x \in \mathcal{D}_t^s} x^\top \theta$ denote the best value achievable over the estimated safe set at round $t$, and let $x_*$ denote the optimal safe action. Let us first define the set of optimistic parameters as

$$\Theta_t^{\mathrm{opt}} := \left\{ \theta \in \mathbb{R}^d \;:\; J_t(\theta) \;\ge\; x_*^\top \theta_* \right\}. \tag{24}$$

In Section 3.1, we show that Safe-LTS samples from this set, i.e., $\tilde\theta_t \in \Theta_t^{\mathrm{opt}}$, with constant probability. Note that if at round $t$ Safe-LTS samples from the set of optimistic parameters, Term I at that round is non-positive. In the following, we show that selecting the optimal arm corresponding to any optimistic parameter controls the overall regret of Term I. The argument below is adapted from Abeille et al. (2017) with minimal changes; it is presented here for completeness.
For the purpose of this proof sketch, we assume that at each round $t$, the safe decision set contains the previously played safe actions, i.e., $x_{t'} \in \mathcal{D}_t^s$ for all $t' \le t$. For the formal proof in Appendix E.1, we do not need such an assumption. Recall that $J_t(\theta) = \max_{x \in \mathcal{D}_t^s} x^\top \theta$, and let $t' \le t$ be a round at which Safe-LTS was optimistic, i.e., $\tilde\theta_{t'} \in \Theta_{t'}^{\mathrm{opt}}$, so that $J_{t'}(\tilde\theta_{t'}) \ge x_*^\top \theta_*$. Then, for any such $t'$ we have

$$x_*^\top \theta_* - J_t(\tilde\theta_t) \;\le\; J_{t'}(\tilde\theta_{t'}) - J_t(\tilde\theta_t) \;\le\; x_{t'}^\top \tilde\theta_{t'} - x_{t'}^\top \tilde\theta_t \;=\; x_{t'}^\top \big(\tilde\theta_{t'} - \tilde\theta_t\big). \tag{25}$$

The last inequality comes from the assumption that the safe decision set at round $t$ contains the previously played safe actions $x_{t'}$ for rounds $t' \le t$; hence, $J_t(\tilde\theta_t) \ge x_{t'}^\top \tilde\theta_t$. To continue from (25), we use Cauchy–Schwarz and obtain

$$x_{t'}^\top \big(\tilde\theta_{t'} - \tilde\theta_t\big) \;\le\; \|x_{t'}\|_{V_{t'}^{-1}} \, \big\|\tilde\theta_{t'} - \tilde\theta_t\big\|_{V_{t'}}. \tag{26}$$

Moreover, the Gram matrices form a nondecreasing sequence ($V_{t'} \preceq V_t$ for $t' \le t$); hence, $\|\theta\|_{V_{t'}} \le \|\theta\|_{V_t}$ for any $\theta$, a fact we use below. Then, we define the ellipsoid $\mathcal{E}_t$ such that

$$\mathcal{E}_t := \left\{ \theta \in \mathbb{R}^d \;:\; \|\theta - \theta_*\|_{V_t} \le \alpha_t \right\}, \tag{27}$$

where

$$\alpha_t := \beta_t(\delta') + \gamma_t(\delta'). \tag{28}$$

It is not hard to see, by combining Theorem 2.1 with the concentration property of the sampled parameter, that $\tilde\theta_t \in \mathcal{E}_t$ with high probability. Hence, we can bound (26) using the triangle inequality:

$$\big\|\tilde\theta_{t'} - \tilde\theta_t\big\|_{V_{t'}} \;\le\; \big\|\tilde\theta_{t'} - \theta_*\big\|_{V_{t'}} + \big\|\tilde\theta_t - \theta_*\big\|_{V_{t'}} \;\le\; \alpha_{t'} + \big\|\tilde\theta_t - \theta_*\big\|_{V_t} \;\le\; \alpha_{t'} + \alpha_t \tag{29}$$

$$\le\; 2\alpha_t \;=\; 2\big(\beta_t(\delta') + \gamma_t(\delta')\big). \tag{30}$$

The last inequality comes from the fact that $\beta_t(\delta')$ and $\gamma_t(\delta')$ are nondecreasing in $t$ by construction. Therefore, following the intuition of Abeille et al. (2017), we can upper bound Term I by the norm $\|x_{t'}\|_{V_{t'}^{-1}}$ of the optimal safe action at the optimistic round $t'$ (see Section E.1 in the Appendix for the formal proof). Bounding the sum of the terms $\|x_t\|_{V_t^{-1}}$ is standard, based on the analysis provided in Abbasi-Yadkori et al. (2011) (see Proposition B.1 in the Appendix).
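The two elementary facts driving this chain of inequalities, namely the weighted Cauchy–Schwarz inequality $x^\top z \le \|x\|_{V^{-1}} \|z\|_V$ and the norm monotonicity implied by a nondecreasing Gram sequence ($V \preceq V' \Rightarrow \|z\|_V \le \|z\|_{V'}$), admit a quick numerical sanity check. The sketch below is illustrative only and not part of the paper's analysis:

```python
import numpy as np

def check_inequalities(trials=200, d=5, seed=0):
    """Randomly test the two matrix-norm facts used in the proof sketch."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        A = rng.standard_normal((d, d))
        V = A @ A.T + np.eye(d)        # random positive-definite Gram matrix
        B = rng.standard_normal((d, d))
        V_next = V + B @ B.T           # adding a PSD update keeps V_next >= V
        x = rng.standard_normal(d)
        z = rng.standard_normal(d)
        # Weighted Cauchy-Schwarz: x^T z <= ||x||_{V^{-1}} ||z||_V
        rhs = np.sqrt(x @ np.linalg.solve(V, x)) * np.sqrt(z @ V @ z)
        if not x @ z <= rhs + 1e-9:
            return False
        # Gram monotonicity: ||z||_V <= ||z||_{V_next}
        if not z @ V @ z <= z @ V_next @ z + 1e-9:
            return False
    return True
```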
Appendix B Useful Results
The following result is standard and plays an important role in most proofs for linear bandit problems.
Proposition B.1.

(Elliptical potential lemma; Abbasi-Yadkori et al. (2011)) Let $\{x_t\}_{t=1}^{T}$ be a sequence in $\mathbb{R}^d$ and let $V_t = \lambda I + \sum_{s=1}^{t-1} x_s x_s^\top$ with $\lambda > 0$. Then,

$$\sum_{t=1}^{T} \min\left(1, \|x_t\|_{V_t^{-1}}^2\right) \;\le\; 2 \log \frac{\det V_{T+1}}{\det(\lambda I)}.$$
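In its standard form (Abbasi-Yadkori et al. (2011)), the elliptical potential bound states that $\sum_{t=1}^{T} \min\big(1, \|x_t\|_{V_t^{-1}}^2\big) \le 2\log\big(\det V_{T+1} / \det(\lambda I)\big)$. A short numerical sketch (the function name is ours, for illustration) compares the summed potentials against the log-determinant bound:

```python
import numpy as np

def elliptical_potential(xs, lam=1.0):
    """Return (sum_t min(1, ||x_t||^2_{V_t^{-1}}),  2*log(det V_{T+1}/det(lam*I))).

    V_t is the regularized Gram matrix lam*I + sum_{s<t} x_s x_s^T, updated
    rank-one as each action x_t arrives.
    """
    d = xs.shape[1]
    V = lam * np.eye(d)
    total = 0.0
    for x in xs:
        total += min(1.0, x @ np.linalg.solve(V, x))
        V += np.outer(x, x)
    _, logdet = np.linalg.slogdet(V)
    return total, 2.0 * (logdet - d * np.log(lam))
```

Since $\det V_{T+1}$ grows at most polynomially in $T$ for bounded actions, the bound is $O(d \log T)$, which is what makes $\sum_t \|x_t\|_{V_t^{-1}}$ controllable via Cauchy–Schwarz.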
Also, we recall Azuma's concentration inequality for supermartingales.
Proposition B.2.
(Azuma's inequality; Boucheron et al. (2013)) If a supermartingale $(Y_s)_{s \ge 0}$ corresponding to a filtration $(\mathcal{F}_s)_{s \ge 0}$ satisfies $|Y_s - Y_{s-1}| \le c_s$ for positive constants $c_s$, for all $s \ge 1$, then, for any $t \ge 1$ and any $a > 0$,

$$\mathbb{P}\left(Y_t - Y_0 \ge a\right) \;\le\; \exp\left(-\frac{a^2}{2 \sum_{s=1}^{t} c_s^2}\right). \tag{33}$$
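As an illustration of Azuma's inequality, the sketch below (hypothetical names, not from the paper) compares the empirical tail of a bounded-increment martingale with the bound in the displayed inequality, using Rademacher increments so that $c_s = 1$ for all $s$:

```python
import numpy as np

def azuma_check(T=200, a=10.0, n_paths=20000, seed=0):
    """Empirically compare P(Y_T - Y_0 >= a) for a Rademacher-increment
    martingale (|Y_s - Y_{s-1}| <= 1) against the Azuma tail exp(-a^2/(2T))."""
    rng = np.random.default_rng(seed)
    # Each row is one martingale path of T independent +/-1 increments.
    increments = rng.choice([-1.0, 1.0], size=(n_paths, T))
    Y_T = increments.sum(axis=1)
    empirical = float(np.mean(Y_T >= a))
    bound = float(np.exp(-a ** 2 / (2 * T)))
    return empirical, bound
```

The empirical tail should always sit below the bound (up to Monte Carlo noise), and the gap illustrates how conservative Azuma's inequality can be for moderate deviations.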
Appendix C Confidence Regions
We start by constructing the following confidence regions for the RLS-estimates.
Definition C.1.
Let $\delta \in (0,1)$, $\delta' \in (0,1)$, and $t \in [T]$. We define the following events:

• $\hat E_t$ is the event that the RLS-estimate $\hat\theta_s$ concentrates around $\theta_*$ for all steps $s \le t$, i.e., $\hat E_t := \big\{ \forall s \le t, \; \|\hat\theta_s - \theta_*\|_{V_s} \le \beta_s(\delta') \big\}$;

• $\hat E_t^{\mu}$ is the event that the RLS-estimate $\hat\mu_s$ of the constraint parameter concentrates around $\mu_*$ for all steps $s \le t$, i.e., $\hat E_t^{\mu} := \big\{ \forall s \le t, \; \|\hat\mu_s - \mu_*\|_{V_s} \le \beta_s(\delta') \big\}$. Moreover, define $\hat E := \hat E_T \cap \hat E_T^{\mu}$;

• $\tilde E_t$ is the event that the sampled parameter $\tilde\theta_s$ concentrates around $\hat\theta_s$ for all steps $s \le t$, i.e., $\tilde E_t := \big\{ \forall s \le t, \; \|\tilde\theta_s - \hat\theta_s\|_{V_s} \le \gamma_s(\delta') \big\}$. Let $E_t := \hat E \cap \tilde E_t$.
Proof.
The proof is similar to the one in Lemma 1 of Abeille et al. (2017) and is omitted for brevity. ∎
Appendix D Formal proof of Lemma 3.2
In this section, we provide a formal statement and a detailed proof of Lemma 3.2. Several modifications are needed compared to Abeille et al. (2017) because, in our setting, actions belong to inner approximations of the true safe set. Moreover, we follow an algebraic treatment that is perhaps simpler than the geometric viewpoint of Abeille et al. (2017).
Lemma D.1.
Let $\Theta_t^{\mathrm{opt}}$ be the set of optimistic parameters defined in (24). Then, on the event $E_t$, $\mathbb{P}\big(\tilde\theta_t \in \Theta_t^{\mathrm{opt}} \,\big|\, \mathcal{F}_t\big) \ge p$, for a constant $p > 0$ independent of $t$.
Proof.
First, we provide the shrunk version of the safe decision set as follows:

A shrunk safe decision set. Consider the enlarged confidence region centered at $\hat\mu_t$ as

$$\left\{ v \in \mathbb{R}^d \;:\; \|v - \hat\mu_t\|_{V_t} \le \beta_t(\delta') + \gamma_t(\delta') \right\}. \tag{34}$$
We know that