Effective Evaluation Using Logged Bandit Feedback from Multiple Loggers

Aman Agarwal, Soumya Basu, Tobias Schnabel, Thorsten Joachims
Cornell University, Dept. of Computer Science, Ithaca, NY, USA
[aa2398,sb2352,tbs49,tj36]@cornell.edu
Abstract.

Accurately evaluating new policies (e.g. ad-placement models, ranking functions, recommendation functions) is one of the key prerequisites for improving interactive systems. While the conventional approach to evaluation relies on online A/B tests, recent work has shown that counterfactual estimators can provide an inexpensive and fast alternative, since they can be applied offline using log data that was collected from a different policy fielded in the past. In this paper, we address the question of how to estimate the performance of a new target policy when we have log data from multiple historic policies. This question is of great relevance in practice, since policies get updated frequently in most online systems. We show that naively combining data from multiple logging policies can be highly suboptimal. In particular, we find that the standard Inverse Propensity Score (IPS) estimator suffers especially when logging and target policies diverge – to a point where throwing away data improves the variance of the estimator. We therefore propose two alternative estimators which we characterize theoretically and compare experimentally. We find that the new estimators can provide substantially improved estimation accuracy.

counterfactual estimators, log data, implicit feedback, off-policy evaluation
Journal year: 2017. Copyright: ACM licensed. Conference: KDD '17, August 13-17, 2017, Halifax, NS, Canada. Price: 15.00. DOI: 10.1145/3097983.3098155. ISBN: 978-1-4503-4887-4/17/08. CCS concepts: Computing methodologies - Learning from implicit feedback; Computing methodologies - Causal reasoning and diagnostics; Information systems - Evaluation of retrieval results.

1. Introduction

Interactive systems (e.g., search engines, ad-placement systems, recommender systems, e-commerce sites) are typically evaluated according to online metrics (e.g., click-through rates, dwell times) that reflect the users' response to the actions taken by the system. For this reason, A/B tests are in widespread use, in which the new policy to be evaluated is fielded to a subsample of the user population. Unfortunately, A/B tests come with two drawbacks. First, they can be detrimental to the user experience if the new policy to be evaluated performs poorly. Second, the number of new policies that can be evaluated in a given amount of time is limited, simply because each A/B test needs to be run on a certain fraction of the overall traffic and should ideally span any cycles (e.g. weekly patterns) in user behavior.

Recent work on counterfactual evaluation techniques provides a principled alternative to A/B tests that does not have these drawbacks (Li et al., 2011, 2015; Bottou et al., 2013; Swaminathan and Joachims, 2015). These techniques do not require that the new policy be deployed online, but they instead allow reusing logged interaction data that was collected by a different policy in the past. In this way, these estimators address the counterfactual inference question of how a new policy would have performed, if it had been deployed instead of the old policy that actually logged the data. This allows reusing the same logged data for evaluating many new policies, greatly improving scalability and timeliness compared to A/B tests.

In this paper, we address the problem of counterfactual evaluation when log data is available not just from one logging policy, but from multiple logging policies. Having data from multiple policies is common in most practical settings where systems are repeatedly modified and deployed. While the standard counterfactual estimators based on inverse propensity scores (IPS) apply to this situation, we show that they are suboptimal in terms of their estimation quality. In particular, we investigate the common setting where the log data takes the form of contextual bandit feedback from a stochastic policy, showing that the variance of the conventional IPS estimator suffers substantially when the historic policies are sufficiently different – to a point where throwing away data improves the variance of the estimator. To overcome the statistical inefficiency of the conventional IPS estimator, we explore two alternative estimators that directly account for the data coming from multiple different logging policies. We show theoretically that both estimators are unbiased and have lower variance than the conventional IPS estimator. Furthermore, we quantify the amount of variance reduction in an extensive empirical evaluation that demonstrates the effectiveness of both estimators.

2. Related Work

The problem of re-using logged bandit feedback is often part of counterfactual learning (Bottou et al., 2013; Li et al., 2015; Swaminathan and Joachims, 2015), and more generally can be viewed as part of off-policy evaluation in reinforcement learning (Sutton and Barto, 1998; Precup, 2000).

In counterfactual learning, solving the evaluation problem is often the first step to deriving a learning algorithm (Strehl et al., 2010; Bottou et al., 2013; Swaminathan and Joachims, 2015). The key to being able to reason counterfactually based on logged data is randomness in the logged data. Approaches differ in how randomness is included in the policies. For example, in (Li et al., 2015) randomization is directly applied to the actions of each policy, whereas (Bottou et al., 2013) randomizes individual policy parameters to create a distribution over actions.

In exploration scavenging (Langford et al., 2008), the authors address counterfactual evaluation in a setting where the actions do not depend on the context. They mention the possibility of combining data from different policies by interpreting each policy as an action. Li et al. (Li et al., 2015) propose to use naturally occurring randomness in the logged data when policies change due to system changes. Since this natural randomness may not be entirely under the operator’s control, the authors propose to estimate the probability that a certain logging policy was in place to recover propensities. The balanced IPS estimator studied in this paper could serve as a starting point for further techniques in that direction.

Evaluation from logged data has often been studied with respect to specific domains, for example in news recommendation (Li et al., 2010, 2011, 2015) as well as in information retrieval (Hofmann et al., 2013; Li et al., 2015). The work by Li et al. (Li et al., 2011) highlights another common use-case in practice, where different logging policies are all active at the same time, focusing on the evaluation of different new methods. The estimators in this paper can naturally be applied to this scenario as well to augment logging data of one policy with the data from others. An interesting example for probabilistic policies can be found in (Hofmann et al., 2013), where the authors consider policies that are the probabilistic interleaving of two deterministic ranking policies and use log data to pre-select new candidate policies.

Closely related to combining logs from different policies is the problem of combining samples coming from different proposal distributions in importance sampling (Owen and Zhou, 2000; Owen, 2013; Elvira et al., 2015a). There, samples are drawn from multiple proposal distributions and need to be combined in a way that reduces the variance of the combined estimator. Multiple importance sampling has been particularly studied in computer graphics (Veach and Guibas, 1995), as Monte Carlo techniques are employed for rendering. Most related to the weighted IPS estimator presented later in the paper is adaptive multiple importance sampling (AMIS) (Cornuet et al., 2012; Elvira et al., 2015b), which also recognizes that it is not optimal to weight contributions from all proposal distributions equally, but instead updates the weights as well as the proposal distributions after each sampling step. The most notable differences to our setting here are that (i) we regard the sampling distributions as given and fixed, and (ii) the sampled log data is also fixed. An interesting avenue for future work would be to use control variates to further reduce the variance of our estimators (Owen and Zhou, 2000; He and Owen, 2014), although this approach is computationally demanding since it requires solving a quadratic problem to determine optimal weights.

Another related area is sampling-based evaluation of information retrieval systems (Yilmaz et al., 2008; Carterette et al., 2009; Schnabel et al., 2016). Instead of feedback data that stems from interactions with users, the observed feedback comes from judges. A policy in this case corresponds to a sampling strategy which determines the query-document pairs to be sent out for judgement. As shown by Carterette et al. (Carterette et al., 2009), relying on sampling-based elicitation schemes cuts down the number of required judgements substantially as compared to a classic deterministic pooling scheme. The techniques proposed in our paper could also be applied to the evaluation of retrieval systems when data from different judgement pools need to be combined.

3. Problem Setting

In this paper, we study the use of logged bandit feedback that arises in interactive learning systems. In these systems, the system receives as input a vector $x$, typically encoding user input or other contextual information. Based on input $x$, the system responds with an action $y$ for which it receives some feedback in the form of a cardinal utility value $\delta(x, y)$. Since the system only receives feedback for the action that it actually takes, this feedback is often referred to as bandit feedback (Swaminathan and Joachims, 2015).

For example, in ad placement models, the input $x$ typically encodes user-specific information as well as the web page content, and the system responds with an ad $y$ which is then displayed on the page. Finally, user feedback $\delta(x, y)$ for the displayed ad is observed, such as whether the ad was clicked or not. Similarly, for a news website, the input $x$ may encode user-specific and other contextual information to which the system responds with a personalized home page $y$. In this setting, the user feedback could be the time spent by the user on the news website.

In order to be able to counterfactually evaluate new policies, we consider stochastic policies $\pi(Y \mid x)$ that define a probability distribution over the output space $\mathcal{Y}$. Predictions are made by sampling $y \sim \pi(Y \mid x)$ given input $x$. The inputs are assumed to be drawn i.i.d. from a fixed but unknown distribution $\Pr(X)$. The feedback $\delta(x, y)$ is a cardinal utility that is only observed at the sampled data points. Large values for $\delta(x, y)$ indicate user satisfaction with $y$ for $x$, while small values indicate dissatisfaction.

We evaluate and compare different policies with respect to their induced utilities. The utility of a policy is defined as the expected utility of its predictions under both the input distribution as well as the stochastic policy. More formally:

Definition 3.1 (Utility of Policy).

The utility of a policy $\pi$ is

$$U(\pi) = \mathbb{E}_{x \sim \Pr(X)}\, \mathbb{E}_{y \sim \pi(Y \mid x)} \left[ \delta(x, y) \right].$$

Our goal is to re-use the interaction logs collected from multiple historic policies to estimate the utility of a new policy. In this paper, we denote the new policy (also called the target policy) as $\pi$, and the logging policies as $\pi_1, \pi_2, \dots, \pi_m$. The log data collected from each logging policy $\pi_i$ is

$$\mathcal{D}_i = \left\{ \left(x^i_j,\, y^i_j,\, \delta^i_j,\, p^i_j\right) \right\}_{j=1}^{n_i},$$

where $n_i$ data points are collected from logging policy $\pi_i$, $x^i_j \sim \Pr(X)$, $y^i_j \sim \pi_i(Y \mid x^i_j)$, $\delta^i_j \equiv \delta(x^i_j, y^i_j)$, and $p^i_j \equiv \pi_i(y^i_j \mid x^i_j)$. Note that during the operation of the logging policies, the propensities $p^i_j$ are tracked and appended to the logs. We will also assume that the quantity $\pi_i(y \mid x)$ is available at all $(x, y)$ pairs. This is a very mild assumption since the logging policies were designed and controlled by us, so their code can be stored. Finally, let $\mathcal{D} = (\mathcal{D}_1, \dots, \mathcal{D}_m)$ denote the combined collection of log data over all the logging policies, and $n = \sum_{i=1}^{m} n_i$ denote the total number of samples.
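To make this data format concrete, the following Python sketch shows one possible in-memory representation of such multi-logger logs; the container names (LoggedSample, Log) are illustrative choices, not notation from the paper.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class LoggedSample:
    x: int        # context (an index into a context set, for simplicity)
    y: int        # action sampled by the logging policy
    delta: float  # observed utility delta(x, y)
    p: float      # logged propensity pi_i(y | x)

@dataclass
class Log:
    policy_id: int             # which logging policy produced this log
    samples: List[LoggedSample]

# D is the combined collection over all m loggers; n is the total sample count.
def total_samples(D: List[Log]) -> int:
    return sum(len(log.samples) for log in D)
```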

Unfortunately, it is not possible to directly compute the utility of a policy based on log data using the formula from the definition above. While we have a random sample of the contexts and the target policy is known by construction, we lack full information about the feedback $\delta$. In particular, we know $\delta(x, y)$ only for the particular action $y$ chosen by the logging policy, but we do not necessarily know it for all the actions that the target policy can choose. In short, we only have logged bandit feedback, but not full-information feedback. This motivates the use of statistical estimators to overcome the infeasibility of exact computation. In the following sections, we will explore three such estimators and focus on two of their key statistical properties, namely their bias and variance.

4. Naive Inverse Propensity Scoring

A natural first candidate to explore for the evaluation problem with multiple logging policies as defined above is the well-known inverse propensity score (IPS) estimator. It simply averages over all data points, and corrects for the distribution mismatch between the logging policies and the target policy using a weighting term:

Definition 4.1 (Naive IPS Estimator).

$$\hat{U}_{naive}(\pi) = \frac{1}{n} \sum_{i=1}^{m} \sum_{j=1}^{n_i} \delta^i_j\, \frac{\pi(y^i_j \mid x^i_j)}{\pi_i(y^i_j \mid x^i_j)}$$

This is an unbiased estimator, as shown in Proposition 4.3 below, as long as all logging policies have support for the new policy $\pi$.
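As a concrete illustration, here is a minimal sketch of this estimator over the pooled logs; it assumes the illustrative containers from the sketch in Section 3 and a hypothetical callable target_prob(x, y) returning $\pi(y \mid x)$ for the target policy.

```python
from typing import Callable, List

def naive_ips(D: List["Log"], target_prob: Callable[[int, int], float]) -> float:
    """Naive IPS estimate: average of importance-weighted utilities over all logs."""
    n = sum(len(log.samples) for log in D)
    total = 0.0
    for log in D:
        for s in log.samples:
            # delta * pi(y|x) / pi_i(y|x), using the logged propensity s.p
            total += s.delta * target_prob(s.x, s.y) / s.p
    return total / n
```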

Definition 4.2 (Support).

Policy $\pi_0$ is said to have support for policy $\pi$ if for all $x \in \mathcal{X}$ and $y \in \mathcal{Y}$, $\pi(y \mid x) > 0$ implies $\pi_0(y \mid x) > 0$.

Proposition 4.3 (Bias of Naive IPS Estimator).

Assume each logging policy $\pi_i$ has support for target $\pi$. For $\mathcal{D}$ consisting of i.i.d. draws from $\Pr(X)$ and logging policies $\pi_i(Y \mid x)$, the naive IPS estimator is unbiased:

$$\mathbb{E}\left[\hat{U}_{naive}(\pi)\right] = U(\pi).$$

Proof.

By linearity of expectation,

$$\mathbb{E}\left[\hat{U}_{naive}(\pi)\right] = \frac{1}{n} \sum_{i=1}^{m} \sum_{j=1}^{n_i} \mathbb{E}_{x \sim \Pr(X)}\, \mathbb{E}_{y \sim \pi_i(Y \mid x)}\!\left[\delta(x, y)\, \frac{\pi(y \mid x)}{\pi_i(y \mid x)}\right] = \frac{1}{n} \sum_{i=1}^{m} \sum_{j=1}^{n_i} \mathbb{E}_{x \sim \Pr(X)}\, \mathbb{E}_{y \sim \pi(Y \mid x)}\!\left[\delta(x, y)\right] = U(\pi).$$

The second equality is valid since each $\pi_i$ has support for $\pi$. ∎

Note that the requirement that the logging policies have support for the target policy can be satisfied by ensuring that $\pi_i(y \mid x) > 0$ for all $x$ and $y$ when deploying policies.

We can also characterize the variance of the naive IPS estimator:

$$Var\left[\hat{U}_{naive}(\pi)\right] = \frac{1}{n^2} \sum_{i=1}^{m} n_i\, Var_{x \sim \Pr(X),\, y \sim \pi_i(Y \mid x)}\!\left[\delta(x, y)\, \frac{\pi(y \mid x)}{\pi_i(y \mid x)}\right] \qquad (1)$$

Having characterized both the bias and the variance of the Naive IPS Estimator, how does it perform on datasets that come from multiple logging policies?

4.1. Suboptimality of Naive IPS Estimator

To illustrate the suboptimality of the Naive IPS Estimator when we have data from multiple logging policies, consider the following toy example where we wish to evaluate a new policy $\pi$ given data from two logging policies $\pi_1$ and $\pi_2$. For simplicity and without loss of generality, consider logged bandit feedback which consists of one sample from $\pi_1$ and another sample from $\pi_2$; more specifically, we have two logs $\mathcal{D}_1$ and $\mathcal{D}_2$ with $n_1 = n_2 = 1$. There are two possible inputs $x_1, x_2$ and two possible output predictions $y_1, y_2$. The cardinal utility function $\delta$, the input distribution $\Pr(X)$, the target policy $\pi$, and the two logging policies $\pi_1$ and $\pi_2$ are given in Table 1.

              x_1     x_2
Pr(x)         0.5     0.5
δ(x, y_1)      10       1
δ(x, y_2)       1      10
π_1(y_1|x)    0.2     0.8
π_1(y_2|x)    0.8     0.2
π_2(y_1|x)    0.9     0.1
π_2(y_2|x)    0.1     0.9
π(y_1|x)      0.8     0.2
π(y_2|x)      0.2     0.8

Table 1. Dropping data samples from logging policy $\pi_1$ lowers the variance of the naive and balanced IPS estimators when estimating the utility of $\pi$.

From the table, we can see that the target policy $\pi$ is similar to logging policy $\pi_2$, but that it is substantially different from $\pi_1$. Since the mismatch between target and logging policy enters the IPS estimator as a ratio, one would like to keep that ratio small for low variance. That, intuitively speaking, means that samples from $\pi_2$ result in lower variance than samples from $\pi_1$, and that the $\pi_1$ samples may be adding a large amount of variability to the estimate. Indeed, it turns out that simply omitting the data from $\pi_1$ greatly improves the variance of the estimator. Plugging the appropriate values into the variance formula in Equation (1) shows that the variance is reduced from about 64.3 to about 4.3 by dropping the sample from the first logging policy $\pi_1$. Intuitively, the variance of $\hat{U}_{naive}(\pi)$ suffers because higher-variance samples from one logging policy drown out the signal from the lower-variance samples to an extent that can even dominate the benefit of having more samples. Thus, $\hat{U}_{naive}(\pi)$ fails to make the most of the available log data by combining it in an overly naive way.
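The following sketch reproduces this calculation by applying Equation (1) directly to the tabular toy example; the dictionaries transcribe Table 1, and the two printed values are the exact variances of the naive estimator with and without the sample from $\pi_1$.

```python
import itertools

# Table 1, transcribed as dictionaries keyed by (context, action).
Pr_x = {"x1": 0.5, "x2": 0.5}
delta = {("x1", "y1"): 10, ("x1", "y2"): 1, ("x2", "y1"): 1, ("x2", "y2"): 10}
pi  = {("x1", "y1"): 0.8, ("x1", "y2"): 0.2, ("x2", "y1"): 0.2, ("x2", "y2"): 0.8}  # target
pi1 = {("x1", "y1"): 0.2, ("x1", "y2"): 0.8, ("x2", "y1"): 0.8, ("x2", "y2"): 0.2}
pi2 = {("x1", "y1"): 0.9, ("x1", "y2"): 0.1, ("x2", "y1"): 0.1, ("x2", "y2"): 0.9}

def ips_sample_variance(logger):
    """Var_{x~Pr, y~logger}[ delta(x,y) * pi(y|x) / logger(y|x) ]."""
    mean = sq = 0.0
    for x, y in itertools.product(["x1", "x2"], ["y1", "y2"]):
        w = delta[(x, y)] * pi[(x, y)] / logger[(x, y)]
        prob = Pr_x[x] * logger[(x, y)]
        mean += prob * w
        sq += prob * w * w
    return sq - mean ** 2

def naive_variance(loggers_with_counts):
    """Equation (1): (1/n^2) * sum_i n_i * per-sample variance of logger i."""
    n = sum(n_i for _, n_i in loggers_with_counts)
    return sum(n_i * ips_sample_variance(pol) for pol, n_i in loggers_with_counts) / n ** 2

print(naive_variance([(pi1, 1), (pi2, 1)]))  # both samples: about 64.3
print(naive_variance([(pi2, 1)]))            # drop the pi_1 sample: about 4.3
```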

Upon closer inspection of Equation (1), the fact that deleting data can improve variance also makes intuitive sense. Since the overall variance contains the sum of variances over all individual samples, one can hope to improve variance by leaving out high-variance samples. This motivates the estimators we introduce in the following sections, and we will show how weighting samples generalizes this variance-minimization strategy.

5. Estimator from Multiple Importance Sampling

Having seen that $\hat{U}_{naive}(\pi)$ has suboptimal variance, we first explore an alternative estimator used in multiple importance sampling (Owen, 2013). We begin with a brief review of multiple importance sampling.

Suppose there is a target distribution $p$ on a space $\mathcal{Z}$, a function $f$, and $\mu = \mathbb{E}_{z \sim p}\left[f(z)\right]$ is the quantity to be estimated. The function $f$ is observed only at the sampled points. In multiple importance sampling, observations $z_{ij}$, $j = 1, \dots, n_i$, are taken from sampling distributions $q_i$ for $i = 1, \dots, m$. An unbiased estimator that is known to have low variance in this case is the balance heuristic estimate (Owen, 2013):

$$\hat{\mu}_{bal} = \frac{1}{n} \sum_{i=1}^{m} \sum_{j=1}^{n_i} \frac{f(z_{ij})\, p(z_{ij})}{\bar{q}(z_{ij})},$$

where $\bar{q}(z) = \sum_{k=1}^{m} \frac{n_k}{n}\, q_k(z)$ and $n = \sum_{k=1}^{m} n_k$. Directly mapping the above to our setting, we define the Balanced IPS Estimator as follows.

Definition 5.1 (Balanced IPS Estimator).

$$\hat{U}_{bal}(\pi) = \frac{1}{n} \sum_{i=1}^{m} \sum_{j=1}^{n_i} \delta^i_j\, \frac{\pi(y^i_j \mid x^i_j)}{\pi_{avg}(y^i_j \mid x^i_j)},$$

where for all $x$ and $y$, $\pi_{avg}(y \mid x) = \sum_{i=1}^{m} \frac{n_i}{n}\, \pi_i(y \mid x)$.
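A minimal sketch of this estimator is given below; in addition to the illustrative containers from Section 3, it assumes that each logging policy's probability function is available as a callable, matching the assumption that $\pi_i(y \mid x)$ can be evaluated at all pairs.

```python
from typing import Callable, List

def balanced_ips(D: List["Log"],
                 target_prob: Callable[[int, int], float],
                 logger_probs: List[Callable[[int, int], float]]) -> float:
    """Balanced IPS estimate using the mixture policy pi_avg as the denominator."""
    sizes = [len(log.samples) for log in D]
    n = sum(sizes)
    total = 0.0
    for log in D:
        for s in log.samples:
            # pi_avg(y|x) = sum_i (n_i / n) * pi_i(y|x)
            p_avg = sum(n_i / n * prob(s.x, s.y) for n_i, prob in zip(sizes, logger_probs))
            total += s.delta * target_prob(s.x, s.y) / p_avg
    return total / n
```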

Note that $\pi_{avg}$ is a valid policy since a convex combination of probability distributions is a probability distribution. The balanced IPS estimator is also unbiased. Note that it now suffices that $\pi_{avg}$ has support for $\pi$, but not necessarily that each individual $\pi_i$ has support.

Proposition 5.2 (Bias of Balanced IPS Estimator).

Assume the policy $\pi_{avg}$ has support for target $\pi$. For $\mathcal{D}$ consisting of i.i.d. draws from $\Pr(X)$ and logging policies $\pi_i(Y \mid x)$, the Balanced IPS Estimator is unbiased:

$$\mathbb{E}\left[\hat{U}_{bal}(\pi)\right] = U(\pi).$$

Proof.

By linearity of expectation,

$$\mathbb{E}\left[\hat{U}_{bal}(\pi)\right] = \frac{1}{n} \sum_{i=1}^{m} \sum_{j=1}^{n_i} \mathbb{E}_{x \sim \Pr(X)}\, \mathbb{E}_{y \sim \pi_i(Y \mid x)}\!\left[\delta(x, y)\, \frac{\pi(y \mid x)}{\pi_{avg}(y \mid x)}\right] = \mathbb{E}_{x \sim \Pr(X)}\!\left[\sum_{y} \pi_{avg}(y \mid x)\, \delta(x, y)\, \frac{\pi(y \mid x)}{\pi_{avg}(y \mid x)}\right] = U(\pi).$$

The second equality uses the definition of $\pi_{avg}$, and the final equality is valid since $\pi_{avg}$ has support for $\pi$. ∎

The variance of $\hat{U}_{bal}(\pi)$ can be computed as follows:

$$Var\left[\hat{U}_{bal}(\pi)\right] = \frac{1}{n^2} \sum_{i=1}^{m} n_i\, Var_{x \sim \Pr(X),\, y \sim \pi_i(Y \mid x)}\!\left[\delta(x, y)\, \frac{\pi(y \mid x)}{\pi_{avg}(y \mid x)}\right].$$

A direct consequence of Theorem 1 in (Veach and Guibas, 1995) is that the variance of the balanced estimator is bounded above by the variance of the naive estimator plus a positive term that depends on $U(\pi)$ and the log sizes $n_i$.

Here, we provide a stronger result that does not require an extra positive term for the inequality to hold.

Theorem 5.3.

Assume each logging policy $\pi_i$ has support for target $\pi$. We then have that

$$Var\left[\hat{U}_{bal}(\pi)\right] \leq Var\left[\hat{U}_{naive}(\pi)\right].$$

Proof.

From Equation 1, we have the following expression.

For convenience, and without loss of generality, assume $n_i = 1$ for each $i$, and therefore $n = m$. This is easily achieved by re-labeling the logging policies so that each data sample comes from a distinctly labeled policy (note that we do not need the logging policies to be distinct in our setup). Also, for simplicity, we abbreviate the per-sample importance-weighted terms. Then

Thus, it is sufficient to show the following two inequalities

(2)

and for all relevant

(3)

We get Equation 2 by applying Cauchy-Schwarz as follows

Another application of Cauchy-Schwarz gives us Equation 3 in the following way

Returning to our toy example in Table 1, we can check the variance reduction provided by $\hat{U}_{bal}(\pi)$ over $\hat{U}_{naive}(\pi)$. The variance of the Balanced IPS Estimator is about 12.4, which is substantially smaller than the roughly 64.3 for the naive estimator using all the data $\mathcal{D}$. However, the Balanced IPS Estimator still improves when removing $\mathcal{D}_1$. In particular, notice that when using only $\mathcal{D}_2$, the variance of the Balanced IPS Estimator is about 4.3. Therefore, even the variance of $\hat{U}_{bal}(\pi)$ can be improved in some cases by dropping data.

6. Weighted IPS Estimator

We have seen that the variances of both the Naive and the Balanced IPS estimators can be reduced by removing some of the data points. More generally, we now explore estimators that re-weight samples from various logging policies based on their relationship with the target policy. This is similar to ideas that are used in Adaptive Multiple Importance Sampling (Cornuet et al., 2012; Elvira et al., 2015b) where samples are also re-weighted in each sampling round. In contrast to the latter scenario, here we assume the logging policies to be fixed, and we derive closed-form formulas for variance-optimal estimators. The general idea of the weighted estimators that follow is to compute a weight for each logging policy that captures the mismatch between this policy and the target policy. In order to characterize the relationship between a logging policy and the new policy to be evaluated, we define the following divergence. This formalizes the notion of mismatch between the two policies in terms of the Naive IPS Estimator variance.

Definition 6.1 (Divergence).

Suppose policy $\pi_0$ has support for target policy $\pi$. Then the divergence from $\pi$ to $\pi_0$ is

$$\sigma^2(\pi \,\|\, \pi_0) = \mathbb{E}_{x \sim \Pr(X)}\, \mathbb{E}_{y \sim \pi_0(Y \mid x)}\!\left[\left(\delta(x, y)\, \frac{\pi(y \mid x)}{\pi_0(y \mid x)}\right)^{\!2}\right] - U(\pi)^2,$$

i.e., the variance of a single importance-weighted sample drawn from $\pi_0$. Recall that $U(\pi)$ is the utility of policy $\pi$.
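For small tabular problems such as the toy example, the divergence can be computed exactly; the sketch below does so under the same dictionary encoding used earlier and is only meant to illustrate the definition.

```python
import itertools

def divergence(Pr_x, delta, pi, pi0):
    """Exact sigma^2(pi || pi0): second moment of the importance-weighted
    utility under pi0, minus the squared utility of the target pi."""
    X = list(Pr_x)
    Y = sorted({y for _, y in delta})
    second_moment = 0.0
    for x, y in itertools.product(X, Y):
        w = delta[(x, y)] * pi[(x, y)] / pi0[(x, y)]
        second_moment += Pr_x[x] * pi0[(x, y)] * w * w
    utility = sum(Pr_x[x] * pi[(x, y)] * delta[(x, y)] for x, y in itertools.product(X, Y))
    return second_moment - utility ** 2
```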

Note that $\sigma^2(\pi \,\|\, \pi_0)$ is not necessarily minimal when $\pi_0 = \pi$. In fact, it can easily be seen by direct substitution that the divergence is minimized by $\pi_0^*$, where $\pi_0^*$ is the optimal importance sampling distribution for $\pi$ with $\pi_0^*(y \mid x) \propto \delta(x, y)\, \pi(y \mid x)$. Nevertheless, informally, the divergence from a logging policy to the target policy is small when the logging policy assigns similar propensities to $(x, y)$ pairs as the importance sampling distribution for the target policy. Conversely, if the logging policy deviates significantly from the importance sampling distribution, then the divergence is large. Based on this notion of divergence, we propose the following weighted estimator:

Definition 6.2 (Weighted IPS Estimator).

Assume $\sigma^2(\pi \,\|\, \pi_i) > 0$ for all $i$. Then

$$\hat{U}_{weight}(\pi) = \sum_{i=1}^{m} \lambda_i^*\, \sum_{j=1}^{n_i} \delta^i_j\, \frac{\pi(y^i_j \mid x^i_j)}{\pi_i(y^i_j \mid x^i_j)},$$

where the weights are set to

$$\lambda_i^* = \frac{1 / \sigma^2(\pi \,\|\, \pi_i)}{\sum_{k=1}^{m} n_k / \sigma^2(\pi \,\|\, \pi_k)}. \qquad (4)$$

Note that the assumption $\sigma^2(\pi \,\|\, \pi_i) > 0$ is easily satisfied as long as the logging policy $\pi_i$ is not exactly equal to the optimal importance sampling distribution of the target policy $\pi$. This is very unlikely given that the utility of the new policy is unknown to us in the first place.
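The following sketch computes the weighted estimate for given divergence values, using the closed-form weights from Equation (4); as before, the containers and the target_prob callable are illustrative assumptions.

```python
from typing import Callable, List

def weighted_ips(D: List["Log"],
                 target_prob: Callable[[int, int], float],
                 sigma2: List[float]) -> float:
    """Weighted IPS estimate with divergences sigma2[i], one per logger."""
    sizes = [len(log.samples) for log in D]
    denom = sum(n_i / s2 for n_i, s2 in zip(sizes, sigma2))
    weights = [1.0 / (s2 * denom) for s2 in sigma2]  # lambda_i* from Eq. (4)
    estimate = 0.0
    for log, lam in zip(D, weights):
        partial = sum(s.delta * target_prob(s.x, s.y) / s.p for s in log.samples)
        estimate += lam * partial
    return estimate
```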

We will show that the Weighted IPS Estimator is optimal in the sense that any other convex combination of the per-policy IPS estimates that ensures unbiasedness does not give an estimator with smaller variance. First, we have a simple condition for unbiasedness:

Proposition 6.3 (Bias of Weighted IPS Estimator).

Assume each logging policy $\pi_i$ has support for target policy $\pi$. Consider an estimator of the form

$$\hat{U}_{\lambda}(\pi) = \sum_{i=1}^{m} \lambda_i\, \sum_{j=1}^{n_i} \delta^i_j\, \frac{\pi(y^i_j \mid x^i_j)}{\pi_i(y^i_j \mid x^i_j)}$$

such that $\lambda_i \geq 0$ and $\sum_{i=1}^{m} \lambda_i\, n_i = 1$. For $\mathcal{D}$ consisting of i.i.d. draws from $\Pr(X)$ and logging policies $\pi_i(Y \mid x)$, the above estimator is unbiased:

$$\mathbb{E}\left[\hat{U}_{\lambda}(\pi)\right] = U(\pi).$$

In particular, $\hat{U}_{weight}(\pi)$ is unbiased.

Proof.

Following the proof of Proposition 4.3,

$$\mathbb{E}\left[\hat{U}_{\lambda}(\pi)\right] = \sum_{i=1}^{m} \lambda_i\, n_i\, U(\pi) = U(\pi).$$

Moreover, $\sum_{i=1}^{m} \lambda_i^*\, n_i = 1$, which implies $\hat{U}_{weight}(\pi)$ is unbiased. ∎

Notice that making the weights equal, $\lambda_i = 1/n$, reduces $\hat{U}_{weight}(\pi)$ to $\hat{U}_{naive}(\pi)$. Furthermore, dropping the samples from logging policy $\pi_i$ is equivalent to setting $\lambda_i = 0$.

To prove variance optimality, note that the variance of an estimator of the above form for a given set of weights $\lambda_i$ can be written in terms of the divergences:

$$Var\left[\hat{U}_{\lambda}(\pi)\right] = \sum_{i=1}^{m} \lambda_i^2\, n_i\, \sigma^2(\pi \,\|\, \pi_i). \qquad (5)$$

We now prove the following theorem:

Theorem 6.4.

Assume each logging policy $\pi_i$ has support for target policy $\pi$, and $\sigma^2(\pi \,\|\, \pi_i) > 0$ for all $i$. Then, for any estimator $\hat{U}_{\lambda}(\pi)$ of the form defined in Proposition 6.3,

$$Var\left[\hat{U}_{weight}(\pi)\right] = \frac{1}{\sum_{k=1}^{m} n_k / \sigma^2(\pi \,\|\, \pi_k)} \leq Var\left[\hat{U}_{\lambda}(\pi)\right].$$

Proof.

The expression for the variance of $\hat{U}_{weight}(\pi)$ can be verified to be as stated by directly substituting (4) into the variance expression in Equation (5). Next, by the Cauchy-Schwarz inequality,

$$1 = \left(\sum_{i=1}^{m} \lambda_i\, n_i\right)^{\!2} = \left(\sum_{i=1}^{m} \lambda_i \sqrt{n_i\, \sigma^2(\pi \,\|\, \pi_i)}\, \cdot\, \sqrt{\frac{n_i}{\sigma^2(\pi \,\|\, \pi_i)}}\right)^{\!2} \leq \left(\sum_{i=1}^{m} \lambda_i^2\, n_i\, \sigma^2(\pi \,\|\, \pi_i)\right) \left(\sum_{k=1}^{m} \frac{n_k}{\sigma^2(\pi \,\|\, \pi_k)}\right),$$

which rearranges to the claimed inequality. ∎

Returning to the toy example in Table 1, the divergence values are $\sigma^2(\pi \,\|\, \pi_1) \approx 252.8$ and $\sigma^2(\pi \,\|\, \pi_2) \approx 4.3$. This leads to weights $\lambda_1^* \approx 0.02$ and $\lambda_2^* \approx 0.98$, resulting in a variance of approximately 4.2 on $\mathcal{D}$. Thus, the weighted IPS estimator does better than the naive IPS estimator (including the case when $\mathcal{D}_1$ is dropped) by optimally weighting all the available data.

Note that computing the optimal weights exactly requires access to the utility function $\delta$ everywhere in order to compute the divergences $\sigma^2(\pi \,\|\, \pi_i)$. However, in practice, $\delta$ is only known at the collected data samples, and the weights must be estimated. In Section 7.6 we discuss a simple strategy for doing so, along with an empirical analysis of the procedure.
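One simple plug-in strategy, sketched below, is to estimate each divergence by the empirical variance of the importance-weighted utilities within that logger's own samples; this is an illustrative reading of such a procedure, not necessarily the exact recipe of Section 7.6.

```python
from typing import Callable, List

def estimate_divergence(log: "Log", target_prob: Callable[[int, int], float]) -> float:
    """Plug-in estimate of sigma^2(pi || pi_i) from logger i's own samples."""
    ws = [s.delta * target_prob(s.x, s.y) / s.p for s in log.samples]
    mean = sum(ws) / len(ws)
    return sum((w - mean) ** 2 for w in ws) / len(ws)

def estimate_all_divergences(D: List["Log"], target_prob) -> List[float]:
    # These estimates can then be passed to weighted_ips() in place of the
    # exact divergences.
    return [estimate_divergence(log, target_prob) for log in D]
```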

6.1. Quantifying the Variance Reduction

The extent of variance reduction provided by the Weighted IPS Estimator over the Naive IPS Estimator depends only on the relative proportions of divergences and the log data sizes of each logging policy. The following proposition quantifies the variance reduction.

Proposition 6.5.

Let $d_i = \sigma^2(\pi \,\|\, \pi_i) / \sigma^2(\pi \,\|\, \pi_1)$ be the ratio of divergences and $\rho_i = n_i / n_1$ be the ratio of sample sizes of policy $\pi_i$ and policy $\pi_1$. Then the reduction, denoted as $R = Var[\hat{U}_{weight}(\pi)] / Var[\hat{U}_{naive}(\pi)]$, is

$$R = \frac{\left(\sum_{i=1}^{m} \rho_i\right)^{\!2}}{\left(\sum_{i=1}^{m} \rho_i\, d_i\right) \left(\sum_{i=1}^{m} \rho_i / d_i\right)} \leq 1.$$

Proof.

Substituting the expressions for the two variances, we get

$$R = \frac{Var[\hat{U}_{weight}(\pi)]}{Var[\hat{U}_{naive}(\pi)]} = \frac{n^2}{\left(\sum_{i=1}^{m} n_i\, \sigma^2(\pi \,\|\, \pi_i)\right) \left(\sum_{k=1}^{m} n_k / \sigma^2(\pi \,\|\, \pi_k)\right)}.$$

Normalizing by $n_1$ and $\sigma^2(\pi \,\|\, \pi_1)$ gives the desired expression. Applying the Cauchy-Schwarz inequality gives the upper bound of 1. ∎

For the case of just two logging policies, $m = 2$, it is particularly easy to compute the maximum improvement in variance of the Weighted IPS Estimator over the Naive estimator. With $d = d_2$ and $\rho = \rho_2$, the reduction is $R = \frac{(1 + \rho)^2}{(1 + \rho d)(1 + \rho / d)}$, which ranges between 0 and 1 depending on $d$ and $\rho$. The benefit of the weighted estimator over the naive estimator is greatest when the logging policies differ substantially and there are equal amounts of log data from the two logging policies. Intuitively, this is because the weighted estimator mitigates the defect in the naive estimator due to which abundant high-variance samples drown out the signal from the equally abundant low-variance samples. On the other hand, the scope for improvement is less when the logging policies are similar or when there are disproportionately many samples from one logging policy.
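The two-logger reduction is easy to tabulate; the short sketch below evaluates the closed-form ratio for a given divergence ratio d and sample-size ratio rho.

```python
def two_logger_reduction(d: float, rho: float) -> float:
    """Ratio Var[U_weight] / Var[U_naive] for two loggers."""
    return (1.0 + rho) ** 2 / ((1.0 + rho * d) * (1.0 + rho / d))

# Very different loggers (d = 100) with equal data (rho = 1) give a ratio
# near 0.04 (about a 25x variance reduction); identical loggers (d = 1)
# give 1.0, i.e. no benefit.
print(two_logger_reduction(100.0, 1.0), two_logger_reduction(1.0, 1.0))
```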

7. Empirical Analysis

In this section, we empirically examine the properties of the proposed estimators. To do this, we create a controlled setup in which we have logging policies of different utilities, and try to estimate the utility of a fixed new policy. We illustrate key properties of our estimators in the concrete setting of CRF policies for multi-label classification, although the estimators themselves are applicable to arbitrary stochastic policies and structured output spaces.

7.1. Setup

Name    # features    # labels    # train    # test
Scene          294           6       1211      1196
Yeast          103          14       1500       917
LYRL         47236           4      23149    781265

Table 2. Corpus statistics for different multi-label datasets from the LibSVM repository. LYRL was post-processed so that only top-level categories were treated as labels.
Figure 1. Variance of the Naive IPS Estimator using only $\mathcal{D}_2$ relative to the variance of the Naive IPS Estimator using data from both $\mathcal{D}_1$ and $\mathcal{D}_2$, for different $\pi_1$ as the relative sample size changes. Dropping data can lower the variance of the Naive IPS Estimator in many cases.
Figure 2. Variance of the Balanced IPS Estimator relative to the variance of the Naive IPS Estimator, for different $\pi_1$ as the relative sample size changes. The Balanced IPS Estimator can have substantially smaller variance than the Naive IPS Estimator.

We choose multi-label classification for our experiments because of the availability of a rich feature space $\mathcal{X}$ and an easily scalable label space $\mathcal{Y}$. We use three multi-label datasets from the LibSVM repository with varying feature dimensionalities, numbers of class labels, and numbers of available training samples. The corpus statistics are summarized in Table 2.

Since these datasets involve multi-label classification, the output space is $\mathcal{Y} = \{0, 1\}^q$, where $q$ is the number of labels, i.e., the set of all possible label vectors one can generate given a set of labels. The input distribution $\Pr(X)$ is the empirical distribution of inputs as represented in the test set. The utility function $\delta(x, y)$ is simply the number of correctly assigned labels in $y$ with respect to the given ground-truth label $y^*$.
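As an illustration, the sketch below implements one natural reading of this utility, namely the number of label positions on which the predicted label set agrees with the ground truth; if the intended reading is instead the size of the intersection with the ground-truth labels, the loop condition changes accordingly.

```python
from typing import FrozenSet

def multilabel_utility(y: FrozenSet[int], y_star: FrozenSet[int], num_labels: int = 6) -> int:
    """Number of label positions where the prediction agrees with the ground truth."""
    correct = 0
    for label in range(num_labels):
        if (label in y) == (label in y_star):  # label correctly included or excluded
            correct += 1
    return correct

# Example with 6 labels (as in Scene): predicting {0, 2} against ground truth
# {0, 1} agrees on labels 0, 3, 4, 5, giving utility 4.
print(multilabel_utility(frozenset({0, 2}), frozenset({0, 1})))
```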

To obtain policies with different utilities in a systematic manner, we train conditional random fields (CRFs) on incrementally varying fractions of the labeled training set. CRFs are convenient since they provide explicit probability distributions over possible predictions conditioned on an input. However, nothing in the following analysis is specific to using CRFs as the stochastic logging policies, and note that the target policy need not be stochastic at all.

For simplicity and ease of interpretability, we use two logging policies in the following experiments. To generate these logging policies, we vary the fraction of the training data used for the first logging policy $\pi_1$ over a range of values, keeping the training fraction for the second logging policy $\pi_2$ fixed. Similarly, we generate a CRF classifier representing the target policy $\pi$ by training on a separate fraction of the data. The effect is that we now get three policies where the second logging policy is similar to the target while the similarity of the first logging policy varies over a wide range. This results in a wide range of relative divergences

$$\frac{\sigma^2(\pi \,\|\, \pi_1)}{\sigma^2(\pi \,\|\, \pi_2)}$$

for the first logging policy, on which the relative performance of the estimators depends.

We compare pairs of estimators based on their relative variance, since all the estimators being considered are unbiased (a relative variance of 1 signifies that the estimators being compared have the same variance). Since the variance of the different estimators scales inversely proportional to the total number of samples, the ratio of their variances depends only on the relative size of the two data logs,

$$\frac{n_1}{n_2},$$

but not on their absolute size. We therefore report results in terms of this relative size, which we vary to explore a large range of data imbalances.

For a fixed set of CRFs as logging and target policies, and a fixed relative size of the data logs, the ratio of the variances of the different estimators can be computed exactly, since the CRFs provide explicit distributions over $\mathcal{Y}$ and $\Pr(X)$ is based on the test set. We therefore report exact variances in the following. In addition to the exactly computed variances, we also ran bandit-feedback simulations to verify the experiment setup, employing the supervised-to-bandit conversion method (Agarwal et al., 2014). In this method, we iterate over the test features $x$, sample a prediction $y$ from the logging policy, and record the corresponding feedback and propensity to generate the logged data sets $\mathcal{D}_i$. For various settings of logging policies and amounts of data, we sampled bandit data and obtained estimator values over hundreds of iterations. We then computed the empirical mean and variance of the different estimates to make sure that the estimators were indeed unbiased and closely matched the theoretical variances reported above.
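The sketch below illustrates this supervised-to-bandit conversion for a generic stochastic policy represented as a mapping from a context to action probabilities; all names are illustrative, and the actual experiments use CRF policies and the datasets from Table 2.

```python
import random
from typing import Callable, Dict, List, Sequence, Tuple

def simulate_log(contexts: Sequence[int],
                 logging_policy: Callable[[int], Dict[int, float]],
                 utility: Callable[[int, int], float],
                 seed: int = 0) -> List[Tuple[int, int, float, float]]:
    """Iterate over test contexts, sample y ~ pi_i(Y|x), and log (x, y, delta, propensity)."""
    rng = random.Random(seed)
    log = []
    for x in contexts:
        probs = logging_policy(x)
        actions, weights = zip(*probs.items())
        y = rng.choices(actions, weights=weights, k=1)[0]
        log.append((x, y, utility(x, y), probs[y]))
    return log
```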

Figure 3. Variance of the Weighted IPS Estimator relative to the variance of the Naive IPS Estimator, for different $\pi_1$ as the relative sample size changes. The Weighted IPS Estimator can have substantially smaller variance than the Naive IPS Estimator.
Figure 4. Variance of the Weighted IPS Estimator relative to the variance of the Balanced IPS Estimator, for different $\pi_1$ as the relative sample size changes. The Weighted IPS Estimator does better than the Balanced IPS Estimator when the two logging policies differ significantly. However, the Balanced IPS Estimator performs better when the two policies are similar.

7.2. Can dropping data lower the variance of $\hat{U}_{naive}(\pi)$?

While we saw that dropping data improved the variance of the Naive IPS Estimator in the toy example, we first verify that this issue also surfaces outside of carefully constructed toy examples. To this effect, Figure 1 plots the variance of the Naive IPS Estimator that uses data only from $\mathcal{D}_2$ relative to the variance of $\hat{U}_{naive}(\pi)$ when using data from both $\mathcal{D}_1$ and $\mathcal{D}_2$. The x-axis varies the relative amount of data coming from $\pi_1$ and $\pi_2$. Each solid circle on the plot corresponds to a training-fraction choice for $\pi_1$ and a log-data-size ratio $n_1 / n_2$. A log-data-size ratio of 0 means that no data from $\pi_1$ is used, i.e., all data from $\pi_1$ is dropped. The relative divergence