Bandits with Delayed, Aggregated Anonymous Feedback


Ciara Pike-Burke, Shipra Agrawal, Csaba Szepesvári, Steffen Grünewälder
Lancaster University, Columbia University, University of Alberta
Abstract

We study a variant of the stochastic K-armed bandit problem, which we call "bandits with delayed, aggregated anonymous feedback". In this problem, when the player pulls an arm, a reward is generated; however, it is not immediately observed. Instead, at the end of each round the player observes only the sum of a number of previously generated rewards that happen to arrive in the given round. The rewards are stochastically delayed and, due to the aggregated nature of the observations, the information of which arm led to a particular reward is lost. The question is: what is the cost of the information loss due to this delayed, aggregated anonymous feedback? Previous works have studied bandits with stochastic, non-anonymous delays and found that the regret increases only by an additive factor relating to the expected delay. In this paper, we show that this additive regret increase can be maintained in the harder delayed, aggregated anonymous feedback setting when the expected delay (or a bound on it) is known. We provide an algorithm that matches the worst case regret of the non-anonymous problem exactly when the delays are bounded, and up to logarithmic factors or an additive variance term for unbounded delays.

1 Introduction


Figure 1: The relative difficulties and problem independent regret bounds of the different problems. For MABDAAF, our algorithm uses knowledge of the expected delay and a mild assumption on the delay bound, which is not required by Joulani et al. (2013).

The stochastic multi-armed bandit (MAB) problem is a prominent framework for capturing the exploration-exploitation tradeoff in online decision making and experiment design. The MAB problem proceeds in discrete sequential rounds, where in each round, the player pulls one of the K possible arms. In the classic stochastic MAB setting, the player immediately observes stochastic feedback from the pulled arm in the form of a 'reward', which can be used to improve the decisions in subsequent rounds. One of the main application areas of MABs is online advertising. Here, the arms may correspond to adverts, and the feedback may correspond to conversions, that is, users buying a product after seeing an advert. However, in practice, these conversions may not happen immediately after the advert is shown, and it may not always be possible to assign the credit of a sale to a particular showing of an advert. A similar challenge is encountered in many other applications, e.g., in personalized treatment planning, where the effect of a treatment on a patient's health may be delayed, and it may be difficult to determine which of several past treatments caused the change in the patient's health; or in content design applications, where the effects of multiple changes to a website's design on traffic and footfall may be delayed and difficult to distinguish.

In this paper, we propose a new bandit model to handle online problems with such 'delayed, aggregated and anonymous' feedback. In our model, a player interacts with an environment of K actions (or arms) in a sequential fashion. At each time step the player selects an action, which leads to a reward generated at random from the underlying reward distribution. At the same time, a nonnegative, integer-valued random delay is generated i.i.d. from an underlying delay distribution. Denoting this delay by τ and the index of the current round by t, the reward generated in round t will arrive at the end of the (t + τ)-th round. At the end of each round, the player observes only the sum of all the rewards that arrive in that round. Crucially, the player does not know which of the past plays contributed to this aggregated reward. We call this problem multi-armed bandits with delayed, aggregated anonymous feedback (MABDAAF). As in the standard MAB problem, the goal in MABDAAF is to maximize the cumulative reward from T plays of the bandit, or equivalently to minimize the regret, that is, the total difference between the expected reward of the optimal action and of the actions taken.

If the delays are all zero, the MABDAAF problem reduces to the standard (stochastic) MAB problem, which has been studied extensively (e.g., Thompson, 1933; Lai and Robbins, 1985; Auer et al., 2002; Bubeck and Cesa-Bianchi, 2012). Compared to the MAB problem, the job of the player in our problem appears to be significantly more difficult, since the player has to deal with (i) the fact that some feedback from previous pulls may be missing due to the delays, and (ii) the fact that the feedback takes the form of the sum of an unknown number of rewards of unknown origin.

An easier problem is when the observations are delayed, but they are non-aggregated and non-anonymous: that is, the player has to deal only with challenge (i) and not (ii). Here, the player receives delayed feedback in the form of action-reward pairs that inform the player both of the individual reward and of which action generated it. This problem, which we shall call the (non-anonymous) delayed feedback bandit problem, has been studied by Joulani et al. (2013) and later by Mandel et al. (2015) (for bounded delays). Remarkably, they show that compared to the standard (non-delayed) MAB setting, the regret increases only additively by a factor that scales with the expected delay. For delay distributions with a finite expected delay E[τ], the worst case regret exceeds that of the non-delayed problem only by an additive term scaling with K E[τ]. Hence, the price to pay for the delay in receiving the observations is negligible. The QPM-D algorithm of Joulani et al. (2013) and SBD of Mandel et al. (2015) place received rewards into queues for each arm, taking one out whenever a base bandit algorithm suggests playing the arm; Joulani et al. (2013) also present a modified upper confidence bound (UCB) algorithm. All of these algorithms achieve the stated regret. None of them require any knowledge of the delay distributions, but they rely heavily upon the non-anonymous nature of the observations.
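To make the queueing idea concrete, here is a minimal sketch of such a queue-based reduction; it is purely illustrative and not the authors' implementation. The objects base_policy (with hypothetical select() and update(arm, reward) methods) and env.play(arm) (assumed to return the delayed but non-anonymous (arm, reward) pairs arriving in the current round) are placeholders.

```python
from collections import deque

def queue_based_reduction(base_policy, env, n_arms, horizon):
    """Sketch of the queue idea behind QPM-D / SBD (illustrative, not the authors' code)."""
    queues = [deque() for _ in range(n_arms)]
    wanted = base_policy.select()            # arm the base algorithm wants a sample of
    for _ in range(horizon):
        # Serve the base algorithm from the queues for as long as possible.
        while queues[wanted]:
            base_policy.update(wanted, queues[wanted].popleft())
            wanted = base_policy.select()
        # Otherwise play the requested arm; any rewards arriving now are queued
        # under the arm that generated them (this step needs non-anonymity).
        for arm, reward in env.play(wanted):
            queues[arm].append(reward)
    return base_policy
```

The queues make the reduction agnostic to the delay distribution, but attributing each arriving reward to its queue is exactly what is impossible in the anonymous setting studied here.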

While these results are encouraging, the assumption that the rewards are observed individually and non-anonymously is limiting for most practical applications with delays (e.g., recall the applications discussed earlier). How big is the price to be paid for receiving only aggregated anonymous feedback? Our main result is to prove that essentially there is no extra price to be paid, provided that the value of the expected delay is available. In particular, this means that detailed knowledge of which action led to a particular delayed reward can be replaced by the much weaker requirement that the expected delay, or a bound on it, is known. Fig. 1 summarizes the relationship between the non-delayed, the delayed and the new problem by showing the leading terms of the regret. In all cases, the dominant term is the same. Hence, asymptotically, the delayed, aggregated anonymous feedback problem is no more difficult than the standard multi-armed bandit problem.

1.1 Our Techniques and Results

We now consider what sort of algorithm will be able to achieve the aforementioned results for the MABDAAF problem. Since the player only observes delayed, aggregated anonymous rewards, the first problem we face is how to even estimate the mean reward of individual actions. Due to the delays and anonymity, it appears that to be able to estimate the mean reward of an action, the player wants to have played it consecutively for long stretches. Indeed, if the stretches are sufficiently long compared to the mean delay, the observations received during the stretch will mostly consist of rewards of the action played in that stretch. This naturally leads to considering algorithms that switch actions rarely and this is indeed the basis of our approach.
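To see why long stretches help, note that for a nonnegative integer-valued delay τ with finite mean, the expected number of rewards generated during a stretch of n consecutive plays that arrive only after the stretch has ended is at most

\[
\sum_{k=0}^{n-1} \Pr(\tau > k) \;\le\; \sum_{k=0}^{\infty} \Pr(\tau > k) \;=\; \mathbb{E}[\tau],
\]

independently of n, and the same argument bounds the expected number of rewards leaking into the stretch from earlier plays. This simple calculation is essentially what drives the corruption bounds used in Section 4.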

Several popular MAB algorithms are based on choosing the action with the largest upper confidence bound (UCB) in each round (e.g., Auer et al., 2002; Cappé et al., 2013). UCB-style algorithms tend to switch arms frequently and will only play the optimal arm for long stretches if a unique optimal arm exists. Therefore, for MABDAAF, we will consider alternative algorithms where arm-switching is more tightly controlled. The design of such algorithms goes back at least to the work of Agrawal et al. (1988), where the problem of bandits with switching costs was studied. The general idea of these rarely switching algorithms is to gradually eliminate suboptimal arms by playing arms in phases and comparing their upper confidence bounds to the lower confidence bound of a leading arm at the end of each phase. Generally, this sort of rarely switching algorithm switches arms only O(K log T) times. We base our approach on one such algorithm, the so-called Improved UCB algorithm of Auer and Ortner (2010). (The adjective "Improved" indicates that the algorithm improves upon the regret bounds achieved by UCB1: it replaces the log T factor in the bound by log(T Δ²), where Δ is the gap of the arm concerned.)

Using a rarely switching algorithm alone will not be sufficient for MABDAAF. The remaining problem, and where the bulk of our contribution lies, is to construct appropriate confidence bounds and to adjust the length of the periods of playing each arm to account for the delayed, aggregated anonymous feedback. In particular, fine details matter in the confidence bounds: it turns out that unless the variance of the observations is dealt with, the regret blows up by a multiplicative factor. We avoid this by an improved analysis involving Freedman's inequality (Freedman, 1975). Further, to handle the dependencies between the number of plays of each arm and the past rewards, we combine Doob's optional skipping theorem (Doob, 1953) with Azuma-Hoeffding inequalities. Using a rarely switching algorithm for MABDAAF also means we must consider the dependencies between the elimination of arms in one phase and the corruption of observations in the next phase (i.e., past plays can influence both whether an arm is still active and the corruption of its next plays). We deal with this through careful algorithmic design.

Using the above, we provide an algorithm that, using only knowledge of the expected delay E[τ], achieves a worst case regret within logarithmic factors of that of the non-anonymous delayed feedback problem. We then show that this regret can be improved by a more careful martingale argument that exploits the fact that our algorithm is designed to remove most of the dependence between the corruption of future observations and the elimination of arms. In particular, if the delays are bounded by a known constant d, we recover a worst case regret matching that of Joulani et al. (2013). If the delays are unbounded but have known variance V(τ), we show that the problem independent regret matches that of Joulani et al. (2013) up to an additive term involving the variance.

1.2 Related Work

We have already discussed several of the works most relevant to our own. However, there has also been other work on different flavors of the bandit problem with delayed (non-anonymous) feedback. For example, Neu et al. (2010) and Cesa-Bianchi et al. (2016) consider non-stochastic bandits with fixed constant delays; Dudik et al. (2011) look at stochastic contextual bandits with a constant delay, and Desautels et al. (2014) consider Gaussian process bandits with a bounded stochastic delay. The general observation that delay causes an additive regret penalty in stochastic bandits and a multiplicative one in adversarial bandits is made in Joulani et al. (2013). The empirical performance of multi-armed stochastic bandit algorithms in delayed settings was investigated in Chapelle and Li (2011). A further related problem is the 'batched bandit' problem studied by Perchet et al. (2016), where the player must fix a set of time points at which to collect feedback on all plays leading up to that point. Vernade et al. (2017) consider delayed stochastic bandits where some observations may also be censored (e.g., no conversion is ever observed if the delay exceeds some threshold), but they require complete knowledge of the delay distribution. Crucially, here and in all the aforementioned works, the feedback is always assumed to take the form of arm-reward pairs, and knowledge of the assignment of rewards to arms underpins the suggested algorithms, rendering them unsuitable for MABDAAF. To the best of our knowledge, ours is the first work to develop algorithms for delayed, aggregated anonymous feedback in the bandit setting.

1.3 Organization

The remainder of this paper is organized as follows. In the next section (Section 2) we give the formal problem definition. This is followed by the description of our algorithm in Section 3. In Section 4, we discuss the performance of our algorithm under various delay assumptions: known expectation; bounded support with known bound and expectation; and known variance and expectation. This is followed by a numerical illustration of our results in Section 5, before concluding in Section 6.

2 Problem Definition

There are K actions or arms in the set {1, ..., K}. Each action j is associated with a reward distribution and a delay distribution. The reward distributions are supported in [0, 1] and the delay distributions are supported on the nonnegative integers. We denote by μ_j the mean of the reward distribution of arm j, and define Δ_j to be the reward gap, that is, the expected loss of reward each time action j is chosen instead of an optimal action. The rewards and delays form an infinite array of mutually independent random variables defined on a common probability space, where the reward and the delay generated by playing arm j at time t follow the reward and delay distributions of arm j, respectively. The meaning of these random variables is that if the player plays action j at time t, the payoff generated by that play will be added to the aggregated feedback that the player receives at the end of the round in which the associated delay expires. Formally, the observation received at the end of round t is the sum of all rewards generated at times s ≤ t whose delays expire at round t.

For the remainder, we consider delays that are i.i.d. across arms. We also assume discrete delay distributions, although most results hold for continuous delays after minor redefinitions.
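The feedback model can be made concrete with a small simulator. The class below is a toy illustration only: Bernoulli rewards and geometric delays are arbitrary choices rather than anything prescribed by the model, and the class name and interface are invented for this sketch. It shows how a reward generated at time t is revealed only as part of the aggregated observation at the end of round t + delay.

```python
import random
from collections import defaultdict

class DelayedAggregatedBandit:
    """Toy simulator of the MABDAAF feedback model (illustrative choices only)."""

    def __init__(self, means, expected_delay, seed=0):
        self.means = means                      # Bernoulli reward means, one per arm
        self.p = 1.0 / (1.0 + expected_delay)   # geometric delay on {0, 1, 2, ...}
        self.rng = random.Random(seed)
        self.t = 0
        self.pending = defaultdict(float)       # arrival round -> sum of rewards due then

    def play(self, arm):
        """Pull `arm` and return the aggregated anonymous observation of this round."""
        self.t += 1
        reward = 1.0 if self.rng.random() < self.means[arm] else 0.0
        delay = 0
        while self.rng.random() > self.p:       # sample a geometric delay with the given mean
            delay += 1
        self.pending[self.t + delay] += reward  # the reward arrives at the end of round t + delay
        # Only the sum of the rewards arriving this round is observed; the identity
        # of the arms that generated them is lost.
        return self.pending.pop(self.t, 0.0)
```

For example, DelayedAggregatedBandit(means=[0.5, 0.6], expected_delay=10.0) gives a two-armed instance whose observations are delayed by ten rounds on average.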

In our analysis, we will sum over stochastic index sets. For a stochastic index set and random variables we denote such sums as .

Regret definition.

In most bandit problems, the regret is the cumulative loss due to not playing an optimal action. In the case of delayed feedback, there are several possible ways to define the regret. One option is to consider only the loss of the rewards received before the horizon T (as in Vernade et al. (2017)). However, we will not use this definition. Instead, as in Joulani et al. (2013), we consider the loss of all generated rewards and define the (pseudo-)regret by

\[
R_T \;=\; \sum_{t=1}^{T} \big(\mu^* - \mu_{J_t}\big),
\]

where μ* denotes the mean reward of an optimal arm and J_t the arm played in round t.

This definition includes the rewards received after the horizon and does not penalize large delays as long as an optimal action is used. It is natural since, in practice, the player should eventually receive all outstanding rewards.

Lai and Robbins (1985) showed that the regret of any algorithm for the standard MAB problem must satisfy

\[
\liminf_{T \to \infty} \frac{\mathbb{E}[R_T]}{\log T} \;\ge\; \sum_{j : \Delta_j > 0} \frac{\Delta_j}{\mathrm{KL}(\nu_j, \nu^*)},  \qquad (1)
\]

where KL(ν_j, ν*) is the KL-divergence between the reward distribution ν_j of arm j and that of an optimal arm, ν*. Theorem 4 of Vernade et al. (2017) shows that the lower bound in (1) also holds for their alternative definition of regret (counting only the rewards received by time T) in the delayed feedback bandit problem when there is no censoring. Since the two regret definitions differ only by the expected reward still in flight at the horizon, which is at most E[τ], we expect the lower bound in (1) to also hold for the MABDAAF problem.

Assumptions on delay distribution.

For our algorithm for MABDAAF, we need some assumptions on the delay distribution. We assume that the expected delay, E[τ], is bounded and known; this quantity is used in the algorithm.

Assumption 1

The expected delay E[τ] is bounded and known to the algorithm.

We then show that under some further mild assumptions on the delay, we can obtain better algorithms with even more efficient regret guarantees. We consider two settings: delay distributions with bounded support, and bounded variance.

Assumption 2 (Bounded support)

There exists a known constant d such that the support of the delay distribution is bounded by d.

Assumption 3 (Bounded variance)

The variance of the delay distribution, V(τ), is bounded and known to the algorithm.

In fact, the known expected value and known variance assumptions can be replaced by known upper bounds on the expected value and variance, respectively. However, for simplicity, in the remainder we use E[τ] and V(τ) directly. The next sections provide algorithms and regret analysis for different combinations of the above assumptions.

3 Our Algorithm

Our algorithm is based on the Improved UCB algorithm of Auer and Ortner (2010). It is a phase-based elimination algorithm with the following general structure. In each phase, each active arm is played multiple times. At the end of the phase, the observations received are used to update the mean estimates, and any arm whose estimated mean falls below the best estimated mean by more than a 'separation gap tolerance' is eliminated. This tolerance is decreased exponentially over the phases, so that it is very small in later phases, eliminating all but the best arm(s) with high probability. An equivalent formulation is that, at the end of a phase, any arm whose upper confidence bound (UCB) is lower than the best lower confidence bound (LCB) is eliminated. Here, the UCB (LCB) of an arm is computed so that, with high probability, it lies above (below) the true mean, but within the separation gap tolerance of it. The phase lengths are then carefully chosen to ensure that these confidence bounds hold.
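In symbols, writing X̄_i(m) for the estimate of arm i's mean at the end of phase m, A_m for the set of active arms and Δ̃_m for the tolerance (notation chosen here for illustration), the elimination step keeps exactly the arms whose UCB is at least the largest LCB:

\[
A_{m+1} \;=\; \Big\{\, i \in A_m \;:\; \bar{X}_i(m) + \tfrac{\tilde{\Delta}_m}{2} \;\ge\; \max_{j \in A_m} \Big( \bar{X}_j(m) - \tfrac{\tilde{\Delta}_m}{2} \Big) \Big\},
\]

which is the same as eliminating any arm whose estimated mean falls more than Δ̃_m below the best estimate.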

0:  A set of arms, ; a horizon, ; choice of for each phase .
0:  Set (tolerance), the set of active arms . Let , (phase index), (round index)
  while  do
     Step 1: Play arms.
     for  do
        Let
        while  and  do
           Play arm , receive . Add to . Increment by .
        end while
     end for
     Step 2: Eliminate sub-optimal arms.
     For every arm in , compute as the average of observations at time steps . That is,
     Construct by eliminating actions with
     Step 3: Decrease Tolerance. Set .
     Step 4: Bridge period. Pick an arm and play it times while incrementing . Discard all observations from this period. Do not add to .
     Increment phase index .
  end while
Algorithm 1 Optimism for Delayed, Aggregated Anonymous Feedback (ODAAF)

Algorithm overview.

Our algorithm, ODAAF, is given in Algorithm 1. It operates in phases m = 1, 2, .... Define A_m to be the set of active arms in phase m. The algorithm takes parameters n_m, which define the number of samples of any active arm required by the end of phase m.

In Step 1 of phase m of the algorithm, each active arm i is played repeatedly for n_m − n_{m−1} steps, so that by the end of the phase it has been played n_m times in total, excluding bridge periods. We record all timesteps at which arm i was played during the first m phases (excluding bridge periods). The active arms are played in an arbitrary but fixed order. In Step 2, the observations received at these recorded timesteps are averaged to obtain a new estimate X̄_i(m) of the mean μ_i. Arm i is eliminated if X̄_i(m) falls more than the tolerance below the best estimate among the active arms.

A further nuance in the algorithm structure is the 'bridge period' (see Figure 2). The algorithm picks an active arm to play for the duration of the bridge period. The observations received during the bridge period are discarded and not used for computing confidence intervals. The significance of the bridge period is that it breaks the dependence between the confidence intervals calculated in phase m and the delayed payoffs seeping into phase m + 1. Without the bridge period, this dependence would impair the validity of our confidence intervals.
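The control flow of Algorithm 1 can be summarized by the following sketch. It is a simplification for illustration only: env.play(arm) is a hypothetical call returning the aggregated anonymous observation of the current round, n_schedule[m] stands for n_m (whose correct choice, given in Section 4, is the substance of the algorithm), and the initial tolerance and bridge period length are placeholder choices.

```python
from collections import defaultdict

def odaaf_sketch(env, n_arms, horizon, n_schedule):
    """Illustrative sketch of the phased structure of ODAAF (not the exact algorithm)."""
    active = list(range(n_arms))
    tolerance = 1.0                        # separation gap tolerance; halved every phase
    obs = defaultdict(list)                # observations recorded while each arm was played
    t, m = 0, 0
    while t < horizon and m < len(n_schedule):
        # Step 1: play each active arm until it has n_schedule[m] recorded samples in total.
        for arm in active:
            while len(obs[arm]) < n_schedule[m] and t < horizon:
                obs[arm].append(env.play(arm))
                t += 1
        # Step 2: eliminate arms whose estimate falls a full tolerance below the best.
        means = {a: sum(obs[a]) / max(len(obs[a]), 1) for a in active}
        best = max(means.values())
        active = [a for a in active if means[a] >= best - tolerance]
        # Step 3: decrease the tolerance.
        tolerance /= 2.0
        # Step 4: bridge period -- keep playing one remaining arm and discard the
        # observations, so that rewards delayed from this phase do not bias the
        # estimates formed in the next phase (placeholder length).
        bridge_arm = active[0]
        for _ in range(min(n_schedule[m], horizon - t)):
            env.play(bridge_arm)           # observation discarded
            t += 1
        m += 1
    return active
```

With the toy simulator from Section 2, odaaf_sketch(env, 2, 10000, [2 ** k for k in range(4, 14)]) runs this loop end to end.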

Choice of n_m.

A key element of the algorithm design is the careful choice of n_m. Since n_m determines the number of times each active (possibly suboptimal) arm is played, and hence the phase lengths, it clearly has an impact on the regret. In particular, n_m needs to be chosen so that the confidence bounds on the estimation error hold with the required probability. The main challenge is developing these confidence bounds from delayed, aggregated anonymous feedback. Handling this form of feedback involves a credit assignment problem of deciding which samples can be used for a given arm's mean estimation, since each sample is an aggregate of rewards from multiple arms played at previous time steps. This credit assignment problem would be hopeless in a passive learning setting without further information on how the samples were generated. Our algorithm utilizes the power of active learning by designing the phase execution in such a way that the feedback can be effectively 'decensored', without losing too many samples.

One naive approach to defining the confidence bounds when the delay is bounded by some constant d is to observe that, since all rewards are in [0, 1], the corruption of the observations collected for an arm in a phase, caused by rewards entering or leaving that phase due to delays, is bounded in terms of d. We can then bound the deviation of the resulting estimate from its mean using Hoeffding's inequality (see Appendix B). This leads to selecting a larger value of n_m, for some constants, and corresponds to a worst case regret that can be significantly worse than that of Joulani et al. (2013) when d and T are large. In Section 4, we show that, surprisingly, it is possible to recover the same rate of regret as Joulani et al. (2013), but this requires a more nuanced argument to obtain tighter confidence bounds and a smaller n_m. In the next section, we describe the choice of n_m in every phase and its implications for the regret, for each of the three cases mentioned previously: (i) known and bounded expected delay (Assumption 1); (ii) bounded delay with known bound and expected value (Assumptions 1 and 2); (iii) delay with known and bounded variance and expectation (Assumptions 1 and 3).


Figure 2: An example of phase m of our algorithm.

4 Regret Analysis

In this section, we specify the choice of the parameters n_m and provide regret guarantees for Algorithm 1 in each of the three previously mentioned cases.

4.1 Known and Bounded Expected Delay

First, we consider the setting with the weakest assumption on delay distribution: we only assume that the expected delay is bounded and known. No assumption on the support or variance of the delay distribution is made. The regret analysis for this setting will not use the bridge period, so Step 4 of the algorithm could be omitted in this case.

Choice of n_m.

Here, we use Algorithm 1 with

(2)

for some large enough constants. The exact value of n_m is given in Appendix C.

Estimation of error bounds.

We bound the error between the estimate X̄_i(m) and the true mean μ_i. In order to do this, we first bound the corruption, due to delays, of the observations received during the timesteps used for the estimate.

Fix a phase m and an arm i. The observations received in the period in which arm i is played during phase m are composed of two types of rewards: a subset of the rewards from plays of arm i in this period, and delayed rewards from some of the plays before this period. The expected value of the observations from this period would equal the expected sum of the rewards generated in it, were it not for rewards entering and leaving the period due to the delays. Since each reward is bounded by 1, a simple observation is that the expected discrepancy between the sum of the observations in this period and the sum of the rewards generated in it is bounded by the expected delay E[τ],

(3)

Summing this over phases gives a bound

(4)

Note that, given the choice of n_m in (2), the above is sufficiently small when large enough constants are used. Then, using concentration inequalities along with the choice of n_m from (2), we can obtain the following high probability bound; a detailed proof is provided in Appendix C.

Lemma 4.1. Under Assumption 1 and the choice of n_m given by (2), the estimates constructed by Algorithm 1 satisfy the following: for every fixed arm i and phase m, with high probability, either arm i has already been eliminated or its estimation error is at most half the current tolerance (see Appendix C for the exact statement).

Regret bounds.

Using Lemma 4.1, we derive the following regret bound for the current setting.

Theorem 4.1. Under Assumption 1, the expected regret of Algorithm 1 is upper bounded as

(5)

Considering the worst-case values of the gaps, we obtain the following problem independent bound.

Corollary 4.1. For any problem instance satisfying Assumption 1, the expected regret of Algorithm 1 is upper bounded as

Proof of Theorem 4.1 (Sketch).

Given Lemma 4.1, the proof of Theorem 4.1 closely follows the analysis of the Improved UCB algorithm of Auer and Ortner (2010). Lemma 4.1 and the elimination condition in Algorithm 1 ensure that, with high probability, any suboptimal arm i is eliminated by the first phase in which the tolerance drops below Δ_i/2, thus incurring regret proportional to Δ_i times the number of plays of arm i up to that phase (with an extra term due to possible plays in the bridge periods). The result is then obtained by substituting in n_m from (2) and summing over all suboptimal arms.

A detailed proof with precise high probability bounds is given in Appendix C. As in Auer and Ortner (2010), a union bound over all arms (which would result in an extra factor in the bound) is avoided by (i) reasoning about the regret of each arm individually, and (ii) bounding the regret resulting from the elimination of the optimal arm by carefully considering the probability that it is eliminated in a given round.

4.2 Delay with Bounded Support

If the delay is bounded by some constant d and a single arm is played repeatedly for long enough, we can restrict the number of arms corrupting the observation at a given time t. In fact, if each arm is played consecutively for more than d rounds, then at any time t, the observation will be composed of rewards from at most two arms: the arm currently being played and the arm played immediately before it. Further, by the elimination condition, with high probability the previously played arm will already have been eliminated if it is clearly suboptimal. We can then recursively use the confidence bounds for both arms from the previous phase to bound the corruption. Below, we formalize this intuition to obtain a tighter error bound for every arm i and phase m.
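The 'at most two arms' claim is easy to check numerically. The following self-contained snippet (an illustration with arbitrary parameters and a uniform delay on {0, ..., d}) plays arms in consecutive blocks of length greater than d and reports the largest number of distinct arms contributing to any single round's aggregated observation; it never exceeds two.

```python
import random

def max_arms_per_observation(n_arms=5, block_len=30, d=20, rounds=3000, seed=1):
    """Largest number of distinct arms contributing to one round's observation
    when arms are played in consecutive blocks of block_len > d."""
    rng = random.Random(seed)
    contributors = {}                          # arrival round -> set of contributing arms
    for t in range(rounds):
        arm = (t // block_len) % n_arms        # play arms in fixed consecutive blocks
        delay = rng.randint(0, d)              # any delay distribution supported on {0, ..., d}
        contributors.setdefault(t + delay, set()).add(arm)
    return max(len(arms) for r, arms in contributors.items() if r < rounds)

print(max_arms_per_observation())              # prints 2 here; never more than 2 when block_len > d
```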

Choice of n_m.

Here, we define,

for some large enough constants (see Appendix D for the exact value). This choice of n_m means that for large d, we essentially revert to the choice of n_m from (2) for the unbounded case, and we gain nothing from using the bound on the delay. However, if d is not large, the choice of n_m in (4.2) is smaller than (2) in its second term.

Estimation of error bounds.

In this setting, reusing the analysis from the previous section to bound the corruption of the observations would result in the same bounds as in (3) and (4), which scale with the expected delay. However, using knowledge of the upper bound d on the support of the delay, we can obtain a tighter bound, so that an error bound similar to Lemma 4.1 is achieved with the smaller value of n_m in (4.2). We prove the following proposition, which is considerably tighter than (3).

Proposition 1

Assume for phases . Define as the event that all arms satisfy error bounds . Then, for every arm ,

Proof: (Sketch). Consider a fixed arm i. Again, the expected value of the sum of the observations collected for arm i in this period would equal the expected sum of the rewards it generated, were it not for some rewards entering and leaving the period due to the delays. Because of the i.i.d. assumption on the delay, in expectation, the number of rewards leaving the period is roughly the same as the number of rewards entering it. (Conditioning on the event that the error bounds held in previous phases does not affect this, thanks to the bridge period.) Since the delay is at most d and each arm is played for more than d consecutive rounds, the rewards entering the period can only come from the arm played immediately before; all rewards leaving the period are from arm i. Therefore the expected difference between the rewards entering and leaving the period is controlled by the difference between the two arms' means. Thus, if the mean of the previously played arm is close to that of arm i, the total reward leaving the period is compensated by the total reward entering it. Due to the bridge period, even when arm i is the first arm played in phase m, the previously played arm was still active at the end of phase m − 1, so it was not eliminated in phase m − 1. By the elimination condition in Algorithm 1, if the error bounds are satisfied for all arms in phase m − 1, then the means of any two arms still active differ by at most a small multiple of the previous tolerance. This gives the result.

For the regret analysis, we derive a high probability version of the above result. We then use this repeatedly to get that,

and iteratively bound the corruption to be small. Note that this is an improvement over (4). Using this, and the choice of n_m from (4.2) with large enough constants, we derive the following lemma; a detailed proof is given in Appendix D.

Lemma 4.2. Under Assumption 1 (known expected delay) and Assumption 2 (bounded delays), and the choice of n_m given in (4.2), the estimates obtained by Algorithm 1 satisfy the following: for any arm i and phase m, with high probability, either arm i has already been eliminated or its estimation error is at most half the current tolerance (see Appendix D for the exact statement).

Regret bounds.

We now give regret bounds for this case.

Theorem 4.2. Under Assumption 1 and the bounded delay Assumption 2, the expected regret of Algorithm 1 is upper bounded as

Proof: (Sketch). Given Lemma 4.2, the proof is very similar to that of Theorem 4.1. The main change is that we substitute n_m by the (smaller) quantity in (4.2) to obtain a smaller regret bound. The full proof is in Appendix D.

Then, under a mild condition on d, we get the following problem independent bound on the regret, which matches that of Joulani et al. (2013).

Corollary 4.2. For any problem instance satisfying Assumption 1 and the bounded delay Assumption 2, with d satisfying the same mild condition, the expected regret of Algorithm 1 is upper bounded as

4.3 Delay with Bounded Variance

If the delay is unbounded but well behaved, in the sense that we know (a bound on) its variance, then we can obtain regret bounds similar to the bounded delay case. Intuitively, rewards from the previous phase will only corrupt observations in the current phase if their delays exceed the length of the bridge period. We control this by using the bound on the variance to bound the tails of the delay distribution.
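The simplest version of this tail control is a Chebyshev bound: writing E[τ] and V(τ) for the (bounds on the) expected delay and delay variance, for any x > 0,

\[
\Pr\big(\tau \ge \mathbb{E}[\tau] + x\big) \;\le\; \Pr\big(|\tau - \mathbb{E}[\tau]| \ge x\big) \;\le\; \frac{\mathbb{V}(\tau)}{x^2},
\]

so a bridge period of length roughly E[τ] + sqrt(V(τ)/δ) is outlived by a given reward's delay with probability at most δ. This is the kind of tail control referred to above; the precise choice of n_m in (6) is derived in Appendix E.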

Choice of n_m.

Let V(τ) be the known variance (or bound on the variance) of the delay, as in Assumption 3. Then we use Algorithm 1 with the following value of n_m,

(6)

for some large enough constants. The exact value of n_m is given in Appendix E.

Regret bounds.

We get the following instance specific and problem independent regret bounds in this case.

Theorem 4.3. Under Assumptions 1 and 3, of known (bounds on the) expected delay and delay variance, and with the choice of n_m from (6), the expected regret of Algorithm 1 can be upper bounded by

Corollary 4.3. For any problem instance satisfying Assumptions 1 and 3, the expected regret of Algorithm 1 is upper bounded as

Thus, it is sufficient to know a bound on the variance to obtain regret bounds similar to those in the bounded delay case. The proofs of the above results are provided in Appendix E.2.

5 Experimental Results

(a) Bounded delays. Ratios of regret of ODAAF (solid lines) and ODAAF-B (dotted lines) to that of QPM-D.
(b) Unbounded delays. Ratios of regret of ODAAF (solid lines) and ODAAF-V (dotted lines) to that of QPM-D.
Figure 3: The ratios of regret of variants of our algorithm to that of QPM-D for different delay distributions.

We compared the performance of our algorithm (under different assumptions) with QPM-D (Joulani et al., 2013) in various experimental settings. In these experiments, our aim was to investigate the effect of the delay on the performance of the algorithms. In order to focus on this, we used a simple setup of two arms with Bernoulli rewards. In every experiment, we ran each algorithm up to the horizon and used UCB1 (Auer et al., 2002) as the base algorithm in QPM-D. The regret was averaged over independent replications. For ease of reading, we define ODAAF to be our algorithm using only knowledge of the expected delay, with n_m defined as in (2) and run without a bridge period, and ODAAF-B and ODAAF-V to be the versions of Algorithm 1 that use a bridge period and information on the bounded support and the finite variance of the delay to define n_m as in (4.2) and (6), respectively.

We tested the algorithms with different delay distributions. In the first case, we considered bounded delay distributions, whereas in the second case, the delays were unbounded. In Figure 3(a), we plot the ratios of the regret of ODAAF and ODAAF-B (with knowledge of the delay bound d) to the regret of QPM-D. We see that in all cases the ratios converge to a constant. This shows that the regret of our algorithm is essentially of the same order as that of QPM-D. Our algorithm predetermines the number of times to play each active arm per phase (the randomness appears only in whether an arm remains active), so the jumps in the regret correspond to the algorithm changing arms; these occur at the same points in all replications.

Figure 3(b) shows a similar story for unbounded delays with a common mean (or, for the half-normal distribution, a common bound on the mean). The ratios of the regret of ODAAF and ODAAF-V (with knowledge of the delay variance) to the regret of QPM-D again converge to constants. Note that in this case these constants, and the locations of the jumps, vary with the delay distribution and its variance. When the variance of the delay is small, using the variance information leads to improved performance. However, for exponential delays, the large variance causes n_m to be large, so the suboptimal arm is played more, increasing the regret. In this case ODAAF-V had only just eliminated the suboptimal arm at the horizon.

It can also be illustrated experimentally that the regret of our algorithm and that of QPM-D both increase linearly in the expected delay. This is shown in Appendix F.

6 Conclusion

We have studied an extension of the multi-armed bandit problem to bandits with delayed, aggregated anonymous feedback. Here, only the sum of the rewards arriving in a round is observed, each reward arriving after some stochastic delay, and we do not learn which arms contributed to each observation. In this more difficult setting, we have proven that, surprisingly, it is possible to develop an algorithm that performs comparably to those for the simpler delayed feedback bandit problem, where the assignment of rewards to plays is known. In particular, using only knowledge of the expected delay, our algorithm matches the worst case regret of Joulani et al. (2013) up to logarithmic factors. These logarithmic factors can be removed using an improved analysis and slightly more information about the delay: if the delay is bounded, we achieve the same worst case regret as Joulani et al. (2013), and for unbounded delays with known finite variance, we incur an extra additive term. We supported these claims experimentally. Note that while our algorithm matches the order of regret of QPM-D, the constants are worse; hence, it is an open problem to find algorithms with better constants.

References

  • Agrawal et al. (1988) Agrawal, R., Hedge, M., and Teneketzis, D. (1988). Asymptotically efficient adaptive allocation rules for the multiarmed bandit problem with switching cost. IEEE Transactions on Automatic Control, 33(10):899–906.
  • Auer et al. (2002) Auer, P., Cesa-Bianchi, N., and Fischer, P. (2002). Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256.
  • Auer and Ortner (2010) Auer, P. and Ortner, R. (2010). UCB revisited: Improved regret bounds for the stochastic multi-armed bandit problem. Periodica Mathematica Hungarica, 61(1-2):55–65.
  • Bubeck and Cesa-Bianchi (2012) Bubeck, S. and Cesa-Bianchi, N. (2012). Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems. Foundations and Trends in Machine Learning. Now Publishers Incorporated.
  • Cappé et al. (2013) Cappé, O., Garivier, A., Maillard, O.-A., Munos, R., and Stoltz, G. (2013). Kullback–Leibler upper confidence bounds for optimal sequential allocation. The Annals of Statistics, 41(3):1516–1541.
  • Cesa-Bianchi et al. (2016) Cesa-Bianchi, N., Gentile, C., Mansour, Y., and Minora, A. (2016). Delay and cooperation in nonstochastic bandits. In Conference on Learning Theory, pages 605–622.
  • Chapelle and Li (2011) Chapelle, O. and Li, L. (2011). An empirical evaluation of Thompson sampling. In Advances in Neural Information Processing Systems, pages 2249–2257.
  • Desautels et al. (2014) Desautels, T., Krause, A., and Burdick, J. W. (2014). Parallelizing exploration-exploitation tradeoffs in gaussian process bandit optimization. Journal of Machine Learning Research, 15(1):3873–3923.
  • Doob (1953) Doob, J. L. (1953). Stochastic processes. John Wiley & Sons.
  • Dudik et al. (2011) Dudik, M., Hsu, D., Kale, S., Karampatziakis, N., Langford, J., Reyzin, L., and Zhang, T. (2011). Efficient optimal learning for contextual bandits. In Conference on Uncertainty in Artificial Intelligence, pages 169–178.
  • Freedman (1975) Freedman, D. A. (1975). On tail probabilities for martingales. The Annals of Probability, 3(1):100–118.
  • Joulani et al. (2013) Joulani, P., György, A., and Szepesvári, C. (2013). Online learning under delayed feedback. In International Conference on Machine Learning, pages 1453–1461.
  • Lai and Robbins (1985) Lai, T. L. and Robbins, H. (1985). Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4–22.
  • Mandel et al. (2015) Mandel, T., Liu, Y.-E., Brunskill, E., and Popovic, Z. (2015). The queue method: Handling delay, heuristics, prior data, and evaluation in bandits. In AAAI, pages 2849–2856.
  • Neu et al. (2010) Neu, G., Antos, A., György, A., and Szepesvári, C. (2010). Online Markov decision processes under bandit feedback. In Advances in Neural Information Processing Systems, pages 1804–1812.
  • Perchet et al. (2016) Perchet, V., Rigollet, P., Chassang, S., and Snowberg, E. (2016). Batched bandit problems. The Annals of Statistics, 44(2):660–681.
  • Szita and Szepesvári (2011) Szita, I. and Szepesvári, C. (2011). Agnostic KWIK learning and efficient approximate reinforcement learning. In Conference on Learning Theory, pages 739–772.
  • Thompson (1933) Thompson, W. R. (1933). On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3–4):285–294.
  • Vernade et al. (2017) Vernade, C., Cappé, O., and Perchet, V. (2017). Stochastic bandit models for delayed conversions. In Conference on Uncertainty in Artificial Intelligence.

Appendix

Appendix A Preliminaries

A.1 Table of Notation

For ease of reading, we define here key notation that will be used in this Appendix.

: The horizon.
: The gap between the mean of the optimal arm and the mean of arm , .
: The approximation to at round of the ODAAF algorithm, .
: The number of samples of arm ODAAF needs by the end of round .
: The number of times each arm is played in phase , .
: The bound on the delay in the case of bounded delay.
: The first round of the ODAAF algorithm where .
: The random variable representing the round arm is eliminated in.
: The set of all time point where arm is played up to (and including) round .
: The reward received at time (from any possible past plays of the algorithm).
: The reward generated by playing arm at time .
: The delay associated with playing arm at time .
: The expected delay (assuming i.i.d. delays).
: The variance of the delay (assuming i.i.d. delays).
: The start point of the th phase. See Section A.2 for more details.
: The end point of the th phase. See Section A.2 for more details.
: The start point of phase of playing arm . See Section A.2 for more details.
: The end point of phase of playing arm . See Section A.2 for more details.
: The set of active arms in round of the ODAAF algorithm.
: The contribution of the reward generated at time in certain intervals relating to phase to the corruption. See (12) for the exact definitions.
: The smallest -algebra containing all information up to time , see (7) for a definition.

A.2 Beginning and end of phases

We formalize here some notation that will be used throughout the analysis to denote the start and end points of each phase. Define the random variables and for each phase to be the start and end points of the phase. Then let , denote the start and end points of playing arm in phase . See Figure 4 for details. By convention, let if arm is not active in phase , if the algorithm never reaches phase and let for all . It is important to point out that are deterministic so at the end of any phase , once we have eliminated sub-optimal arms, we also know which arms are in and consequently the start and end points of phase . Furthermore, since we play arms in a given order, we also know the specific rounds when we start and finish playing each active arm in phase . Hence, at any time step in phase , and for all active arms will be known. More formally, define the filtration where

(7)

and . This means the joint events like for all , .


Figure 4: An example of phase m of our algorithm.

A.3 Useful results

For our analysis, we will need Freedman’s version of Bernstein’s inequality for the right-tail of martingales with bounded increments:

Theorem 2 (Freedman’s version of Bernstein’s inequality; Theorem 1.6 of Freedman (1975))

Let be a real-valued martingale with respect to the filtration with increments : and , for . Assume that the difference sequence is uniformly bounded on the right: almost surely for . Define the predictable variation process for . Then, for all , ,

This result implies that if for some deterministic constant, , holds almost surely, then holds for any .
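For reference, a commonly used form of this inequality, stated with generic notation that need not match the symbols of Theorem 2 exactly, is the following: if (S_t)_{t≥0} is a martingale with S_0 = 0 whose increments X_t = S_t − S_{t−1} satisfy X_t ≤ R almost surely, and V_t denotes its predictable variation, then for all a, b > 0,

\[
\Pr\big( \exists\, t :\ S_t \ge a \ \text{and}\ V_t \le b \big) \;\le\; \exp\!\left( - \frac{a^2}{2\,(b + R a)} \right).
\]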

We will also make use of the following technical lemma, which combines the Hoeffding-Azuma inequality and Doob's optional skipping theorem (Theorem 2.3 in Chapter VII of Doob (1953)):

Lemma 3

Fix the positive integers and let . Let be a filtration, be a sequence of -valued random variables such that for , is -measurable, is -measurable, and . Further, assume that with probability one. Then, for any ,

(8)

Proof: This lemma appeared in a slightly more general form (where is allowed) as Lemma A.1 in the paper by Szita and Szepesvári (2011) so we refer the reader to the proof there.

Appendix B Naive approach for bounded delays

Let

denote the width of the confidence intervals used in phase m for any arm i. We start by showing that these confidence bounds hold with high probability:

Lemma 4

For any ,

Proof: First note that since the delay is bounded by d, at most d rewards from other arms can seep into the period in which arm i is played in phase m, and at most d rewards from arm i can be lost. Denoting the start and end points of playing arm i in phase m as in Section A.2, we have

(9)

because we can pair up the missing and extra rewards, and in each pair the difference is at most one. Then, combining this with (9), we get

Define and recall that . For any ,

where the first inequality is from the triangle inequality and the last from Hoeffding’s inequality since are independent samples from , the reward distribution of arm . In particular, taking ensures that , finishing the proof.

Observe that setting

(10)

ensures that the confidence bounds above are sufficiently tight. Using this, we can substitute this value of n_m into Improved UCB and use the analysis from Auer and Ortner (2010) to get the following bound on the regret.

Theorem 5

Assume there exists a bound d on the delay. Then for all T, the expected regret of the Improved UCB algorithm run with n_m defined as in (10) can be upper bounded by

Proof: The result follows from the proof of Theorem 3.1 of Auer and Ortner (2010) using the above definition of n_m. In particular, optimizing over the gaps yields the worst case regret bound.

Appendix C Results for known and bounded expected delay

C.1 High probability bounds

Proof of Lemma 4.1: Let

(11)

We first show that with probability greater than , or .

For arm and phase , assume . Then for any phase and time , define,

(12)

Define the filtration by and

(13)

Then, we use the decomposition,

(14)

where,