Confidence Intervals for Random Forests: The Jackknife and the Infinitesimal Jackknife

Stefan Wager (swager@stanford.edu), Trevor Hastie (hastie@stanford.edu), Bradley Efron (brad@stat.stanford.edu)
Department of Statistics, Stanford University, Stanford, CA 94305, USA
Abstract

We study the variability of predictions made by bagged learners and random forests, and show how to estimate standard errors for these methods. Our work builds on variance estimates for bagging proposed by Efron (1992, 2012) that are based on the jackknife and the infinitesimal jackknife (IJ). In practice, bagged predictors are computed using a finite number $B$ of bootstrap replicates, and working with a large $B$ can be computationally expensive. Direct applications of jackknife and IJ estimators to bagging require $B = \Theta(n^{1.5})$ bootstrap replicates to converge, where $n$ is the size of the training set. We propose improved versions that only require $B = \Theta(n)$ replicates. Moreover, we show that the IJ estimator requires about 1.7 times fewer bootstrap replicates than the jackknife to achieve a given accuracy. Finally, we study the sampling distributions of the jackknife and IJ variance estimates themselves. We illustrate our findings with multiple experiments and simulation studies.


1 Introduction

Bagging (Breiman, 1996) is a popular technique for stabilizing statistical learners. Bagging is often conceptualized as a variance reduction technique, and so it is important to understand how the sampling variance of a bagged learner compares to the variance of the original learner. In this paper, we develop and study methods for estimating the variance of bagged predictors and random forests (Breiman, 2001), a popular extension of bagged trees. These variance estimates only require the bootstrap replicates that were used to form the bagged prediction itself, and so can be obtained with moderate computational overhead. The results presented here build on the jackknife-after-bootstrap methodology introduced by Efron (1992) and on the infinitesimal jackknife for bagging (IJ) (Efron, 2012).

Figure 1: Random forest predictions on the “Auto MPG” dataset. The random forest was trained using 314 examples; the graph shows results on a test set of size 78. The error bars are 1 standard error in each direction. Because this is a fairly small dataset, we estimated standard errors for the random forest using the averaged estimator from Section 5.2. A more detailed description of the experiment is provided in Appendix C.

Figure 1 shows the results from applying our method to a random forest trained on the “Auto MPG” dataset, a regression task where we aim to predict the miles-per-gallon (MPG) gas consumption of an automobile based on 7 features including weight and horsepower. The error bars shown in Figure 1 give an estimate of the sampling variance of the random forest; in other words, they tell us how much the random forest’s predictions might change if we trained it on a new training set. The fact that the error bars do not in general cross the prediction-equals-observation diagonal suggests that there is some residual noise in the MPG of a car that cannot be explained by a random forest model based on the available predictor variables. (Our method produces standard error estimates for random forest predictions. We then represent these standard error estimates as Gaussian confidence intervals of the form prediction $\pm\, z_\alpha \times$ standard error, where $z_\alpha$ is a quantile of the normal distribution.)

Figure 1 tells us that the random forest was more confident about some predictions than others. Rather reassuringly, we observe that the random forest was in general less confident about the predictions for which the reported MPG and predicted MPG were very different. There is not a perfect correlation, however, between the error level and the size of the error bars. One of the points, circled in red near (32, 32), appears particularly surprising: the random forest got the prediction almost exactly right, but gave the prediction unusually large error bars. This curious data point corresponds to the 1982 Dodge Rampage, a two-door coupe utility that is a mix between a passenger car and a truck with a cargo tray. Perhaps our random forest had a hard time confidently estimating the mileage of the Rampage because it could not quite decide whether to cluster it with cars or with trucks. We present experiments on larger datasets in Section 3.

Estimating the variance of bagged learners based on the preexisting bootstrap replicates can be challenging, as there are two distinct sources of noise. In addition to the sampling noise (i.e., the noise arising from randomness during data collection), we also need to control the Monte Carlo noise arising from the use of a finite number of bootstrap replicates. We study the effects of both sampling noise and Monte Carlo noise.

In our experience, the errors of the jackknife and IJ estimates of variance are often dominated by Monte Carlo effects. Monte Carlo bias can be particularly troublesome: if we are not careful, the jackknife and IJ estimators can conflate Monte Carlo noise with the underlying sampling noise and badly overestimate the sampling variance. We show how to estimate the magnitude of this Monte Carlo bias and develop bias-corrected versions of the jackknife and IJ estimators that outperform the original ones. We also show that the IJ estimate of variance is able to use the preexisting bootstrap replicates more efficiently than the jackknife estimator by having a lower Monte Carlo variance, and needs about 1.7 times fewer bootstrap replicates than the jackknife to achieve a given accuracy.

If we take the number of bootstrap replicates to infinity, Monte Carlo effects disappear and only sampling errors remain. We compare the sampling biases of both the jackknife and IJ rules and present some evidence that, while the jackknife rule has an upward sampling bias and the IJ estimator can have a downward bias, the arithmetic mean of the two variance estimates can be close to unbiased. We also propose a simple method for estimating the sampling variance of the IJ estimator itself.

Our paper is structured as follows. We first present an overview of our main results in Section 2, and apply them to random forest examples in Section 3. We then take a closer look at Monte Carlo effects in Section 4 and analyze the sampling distribution of the limiting IJ and jackknife rules with $B = \infty$ in Section 5. We spread simulation experiments throughout Sections 4 and 5 to validate our theoretical analysis.

1.1 Related Work

In this paper, we focus on methods based on the jackknife and the infinitesimal jackknife for bagging (Efron, 1992, 2012) that let us estimate standard errors based on the pre-existing bootstrap replicates. Other approaches that rely on forming second-order bootstrap replicates have been studied by Duan (2011) and Sexton and Laake (2009). Directly bootstrapping a random forest is usually not a good idea, as it requires forming a large number of base learners. Sexton and Laake (2009), however, propose a clever work-around to this problem. Their approach, which could have been called a bootstrap of little bags, involves bootstrapping small random forests with only a handful of trees each and then applying a bias correction to remove the extra Monte Carlo noise.

There has been considerable interest in studying classes of models for which bagging can achieve meaningful variance reduction, and also in outlining situations where bagging can fail completely (e.g., Skurichina and Duin, 1998; Bühlmann and Yu, 2002; Chen and Hall, 2003; Buja and Stuetzle, 2006; Friedman and Hall, 2007). The problem of producing practical estimates of the sampling variance of bagged predictors, however, appears to have received somewhat less attention in the literature so far.

2 Estimating the Variance of Bagged Predictors

This section presents our main result: estimates of variance for bagged predictors that can be computed from the same bootstrap replicates that give the predictors. Section 3 then applies the result to random forests, which can be analyzed as a special class of bagged predictors.

Suppose that we have training examples $Z_1, \dots, Z_n$, an input $x$ to a prediction problem, and a base learner $\hat\theta(x) = t(x;\, Z_1, \dots, Z_n)$. To make things concrete, the $Z_i$ could be a list of e-mails paired with labels that catalog the e-mails as either spam or non-spam, $t$ could be a decision tree trained on these labeled e-mails, and $x$ could be a new e-mail that we seek to classify. The quantity $\hat\theta(x)$ would then be the output of the tree predictor on input $x$.

With bagging, we aim to stabilize the base learner $t$ by resampling the training data. In our case, the bagged version of $\hat\theta(x)$ is defined as

$$\hat\theta^\infty(x) = \mathbb{E}_*\left[t\left(x;\, Z_1^*, \dots, Z_n^*\right)\right], \qquad (1)$$

where the $Z_i^*$ are drawn independently with replacement from the original data (i.e., they form a bootstrap sample). The expectation $\mathbb{E}_*$ is taken with respect to the bootstrap measure.

The expectation in (1) cannot in general be evaluated exactly, and so we form the bagged estimator by Monte Carlo:

$$\hat\theta^B(x) = \frac{1}{B}\sum_{b=1}^B t_b^*(x), \qquad t_b^*(x) = t\left(x;\, Z_{b1}^*, \dots, Z_{bn}^*\right), \qquad (2)$$

where $B$ is the number of bootstrap replicates and the $Z_{bi}^*$ are the elements of the $b$-th bootstrap sample. As $B \to \infty$, we recover the perfectly bagged estimator $\hat\theta^\infty(x)$.
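To make the construction concrete, the following R sketch computes the Monte Carlo approximation (2) for a generic base learner; it is only a sketch, and all function and variable names (bagged_predict, base_learner, t_star) are ours and purely illustrative.

# A minimal sketch of the Monte Carlo bagging step (2), assuming a generic
# base learner base_learner(X, y) that returns a prediction function of a
# test point x0.
bagged_predict <- function(X, y, x0, base_learner, B = 1000) {
  n <- nrow(X)
  t_star <- numeric(B)
  for (b in 1:B) {
    idx <- sample(n, n, replace = TRUE)              # bootstrap sample Z*_b1, ..., Z*_bn
    fit <- base_learner(X[idx, , drop = FALSE], y[idx])
    t_star[b] <- fit(x0)                             # t*_b(x0)
  }
  mean(t_star)                                       # the bagged prediction (2)
}

# Example with a trivial base learner (the sample mean of y):
# bagged_predict(matrix(rnorm(50)), rnorm(50), 0, function(X, y) function(x0) mean(y))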

Basic variance estimates

The goal of our paper is to study the sampling variance of bagged learners,

$$V(x) = \mathrm{Var}\left[\hat\theta^\infty(x)\right].$$

In other words, we ask how much variance $\hat\theta^B(x)$ would have once we make $B$ large enough to eliminate the bootstrap effects. We consider two basic estimates of $V(x)$: the infinitesimal jackknife estimate (Efron, 2012), which results in the simple expression

$$\hat V_{IJ}^\infty = \sum_{i=1}^n C_i^2, \qquad C_i = \mathrm{Cov}_*\left[N_i^*,\, t^*(x)\right], \qquad (3)$$

where $C_i$ is the covariance between $t^*(x)$ and the number of times $N_i^*$ that the $i$-th training example appears in a bootstrap sample; and the jackknife-after-bootstrap estimate (Efron, 1992)

$$\hat V_J^\infty = \frac{n-1}{n}\sum_{i=1}^n \left(\bar t_{(-i)}^*(x) - \bar t^*(x)\right)^2, \qquad (4)$$

where $\bar t_{(-i)}^*(x)$ is the average of $t^*(x)$ over all the bootstrap samples not containing the $i$-th example and $\bar t^*(x)$ is the mean of all the $t_b^*(x)$.

The jackknife-after-bootstrap estimate arises directly by applying the jackknife to the bootstrap distribution. The infinitesimal jackknife (Jaeckel, 1972), also called the non-parametric delta method, is an alternative to the jackknife where, instead of studying the behavior of a statistic when we remove one observation at a time, we look at what happens to the statistic when we individually down-weight each observation by an infinitesimal amount. When the infinitesimal jackknife is available, it sometimes gives more stable estimates than the regular jackknife. Efron (2012) shows how an application of the infinitesimal jackknife principle to the bootstrap distribution leads to the simple estimate (3).

Finite-B bias

In practice, we can only ever work with a finite number $B$ of bootstrap replicates. The natural Monte Carlo approximations to the estimators introduced above are

$$\hat V_{IJ}^B = \sum_{i=1}^n \hat C_i^2, \qquad \hat C_i = \frac{1}{B}\sum_{b=1}^B \left(N_{bi}^* - 1\right)\left(t_b^*(x) - \bar t^*(x)\right), \qquad (5)$$

and

$$\hat V_J^B = \frac{n-1}{n}\sum_{i=1}^n \left(\bar t_{(-i)}^*(x) - \bar t^*(x)\right)^2, \qquad \bar t_{(-i)}^*(x) = \frac{\sum_{\{b\,:\,N_{bi}^* = 0\}} t_b^*(x)}{\left|\left\{b : N_{bi}^* = 0\right\}\right|}. \qquad (6)$$

Here, $N_{bi}^*$ indicates the number of times the $i$-th observation appears in the bootstrap sample $b$.

In our experience, these finite-$B$ estimates of variance are often badly biased upwards if the number of bootstrap samples $B$ is too small. Fortunately, bias-corrected versions are available:

$$\hat V_{IJ\text{-}U}^B = \hat V_{IJ}^B - \frac{n}{B^2}\sum_{b=1}^B \left(t_b^*(x) - \bar t^*(x)\right)^2, \qquad (7)$$
$$\hat V_{J\text{-}U}^B = \hat V_J^B - (e - 1)\,\frac{n}{B^2}\sum_{b=1}^B \left(t_b^*(x) - \bar t^*(x)\right)^2. \qquad (8)$$

These bias corrections are derived in Section 4. In many applications, the simple estimators (5) and (6) require $B = \Theta(n^{1.5})$ bootstrap replicates to reduce Monte Carlo noise down to the level of the inherent sampling noise, whereas our bias-corrected versions only require $B = \Theta(n)$ replicates. The bias-corrected jackknife (8) was also discussed by Sexton and Laake (2009).
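For reference, here is a short R sketch (ours, not the paper's implementation) of the estimators (5)-(8). It assumes that, while forming the bagged prediction, we stored the $B \times n$ matrix N of bootstrap counts (N[b, i] being the number of times example i appears in replicate b) and the length-$B$ vector t_star of per-replicate predictions $t_b^*(x)$ at a fixed test point $x$. The handling of degenerate leave-one-out terms is our own choice; it simply makes the corresponding term contribute zero.

bagged_variance_estimates <- function(N, t_star) {
  B <- nrow(N); n <- ncol(N)
  t_bar <- mean(t_star)
  v_hat <- mean((t_star - t_bar)^2)             # bootstrap variance of the base learner

  # Infinitesimal jackknife (5): squared covariances between N[, i] and t_star,
  # centering the counts at their bootstrap expectation of 1
  C_hat <- as.numeric(crossprod(N - 1, t_star - t_bar)) / B
  V_IJ <- sum(C_hat^2)

  # Jackknife-after-bootstrap (6): means over the replicates not containing example i
  t_minus_i <- sapply(1:n, function(i) {
    out <- N[, i] == 0
    if (any(out)) mean(t_star[out]) else t_bar  # degenerate case contributes zero
  })
  V_J <- (n - 1) / n * sum((t_minus_i - t_bar)^2)

  c(V_IJ   = V_IJ,
    V_J    = V_J,
    V_IJ_U = V_IJ - n * v_hat / B,                # bias correction (7)
    V_J_U  = V_J - (exp(1) - 1) * n * v_hat / B)  # bias correction (8)
}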

Figure 2: Testing the performance of the bias-corrected infinitesimal jackknife estimate of variance for bagged predictors, as defined in (15), on a bagged regression tree. We compare the true sampling error with the average standard error estimate produced by our method across multiple runs; the dotted lines indicate one-standard-error-wide confidence bands for our standard error estimate.

In Figure 2, we show how $\hat V_{IJ\text{-}U}^B$ can be used to accurately estimate the variance of a bagged tree. We compare the true sampling variance of a bagged regression tree with our variance estimate. The underlying signal is a step function with four jumps that are reflected as spikes in the variance of the bagged tree. On average, our variance estimator accurately identifies the location and magnitude of these spikes.

Figure 3: Performance, as a function of $B$, of the jackknife and IJ estimators and their bias-corrected modifications on the cholesterol dataset of Efron and Feldman (1991). The boxplots depict bootstrap realizations of each estimator. The dotted line indicates the mean of all the realizations of the IJ-U and J-U estimators (weighted by $B$).

Figure 3 compares the performance of the four considered variance estimates on a bagged adaptive polynomial regression example described in detail in Section 4.4. We see that the uncorrected estimators $\hat V_{IJ}^B$ and $\hat V_J^B$ are badly biased: the lower whiskers of their boxplots do not even touch the limiting estimate obtained with $B \to \infty$. We also see that $\hat V_{IJ\text{-}U}^B$ has about half the variance of $\hat V_{J\text{-}U}^B$. This example highlights the importance of using estimators that use the available bootstrap replicates efficiently: with a moderate number of bootstrap replicates, $\hat V_{IJ\text{-}U}^B$ can give us a reasonable estimate of the limiting variance estimate, whereas the uncorrected $\hat V_{IJ}^B$ is quite unstable and biased upwards by a factor of about 2.

The figure also suggests that the Monte Carlo noise of $\hat V_{IJ\text{-}U}^B$ decays faster (as a function of $B$) than that of $\hat V_{J\text{-}U}^B$. This is no accident: as we show in Section 4.2, the infinitesimal jackknife requires about 1.7 times fewer bootstrap replicates than the jackknife to achieve a given level of Monte Carlo error.

Limiting sampling distributions

The performance of $\hat V_{IJ}^B$ and $\hat V_J^B$ depends on both sampling noise and Monte Carlo noise. In order for $\hat V_{IJ}^B$ (and analogously $\hat V_J^B$) to be accurate, we need both the sampling error of the limiting estimator, namely $\hat V_{IJ}^\infty - V(x)$, and the Monte Carlo error $\hat V_{IJ}^B - \hat V_{IJ}^\infty$ to be small.

It is well known that jackknife estimates of variance are in general biased upwards (Efron and Stein, 1981). This phenomenon also holds for bagging: $\hat V_J^\infty$ is somewhat biased upwards for $V(x)$. We present some evidence suggesting that $\hat V_{IJ}^\infty$ is biased downwards by a similar amount, and that the arithmetic mean of $\hat V_J^\infty$ and $\hat V_{IJ}^\infty$ is closer to being unbiased for $V(x)$ than either of the two estimators alone.

We also develop a simple estimator for the variance of $\hat V_{IJ}^\infty$ itself:

$$\widehat{\mathrm{Var}}\left[\hat V_{IJ}^\infty\right] = \sum_{i=1}^n \left(\hat C_i^2 - \overline{C^2}\right)^2,$$

where $\hat C_i$ is as in (5) and $\overline{C^2}$ is the mean of the $\hat C_i^2$.

3 Random Forest Experiments

Random forests (Breiman, 2001) are a widely used extension of bagged trees. Suppose that we have a tree-structured predictor $t(x;\, Z_1, \dots, Z_n)$ and training data $Z_1, \dots, Z_n$. Using notation from (2), the bagged version of this tree predictor is $\hat\theta^B(x) = \frac{1}{B}\sum_{b=1}^B t\left(x;\, Z_{b1}^*, \dots, Z_{bn}^*\right)$.

Random forests extend bagged trees by allowing the individual trees to depend on an auxiliary noise source $\xi$. The main idea is that the auxiliary noise encourages more diversity among the individual trees, and allows for more variance reduction than bagging. Several variants of random forests have been analyzed theoretically by, e.g., Biau et al. (2008), Biau (2012), Lin and Jeon (2006), and Meinshausen (2006).

Standard implementations of random forests use the auxiliary noise to randomly restrict the number of variables on which the bootstrapped trees can split at any given training step. At each step, $m$ candidate features are randomly selected from the pool of all $p$ possible features, and the tree predictor must then split on one of these $m$ features. If $m = p$, the tree can always split on any feature and the random forest becomes a bagged tree; if $m = 1$, then the tree has no freedom in choosing which feature to split on.

Following Breiman (2001), random forests are usually defined more abstractly for theoretical analysis: any predictor of the form

$$\hat\theta_{RF}^B(x) = \frac{1}{B}\sum_{b=1}^B T\left(x;\, \xi_b,\, Z_{b1}^*, \dots, Z_{bn}^*\right), \qquad \xi_b \overset{\text{i.i.d.}}{\sim} \Xi, \qquad (9)$$

is called a random forest. Various choices of the noise distribution $\Xi$ lead to different random forest predictors. In particular, trivial noise sources are allowed, and so the class of random forests includes bagged trees as a special case. In this paper, we only consider random forests of type (9) where individual trees are all trained on bootstrap samples of the training data. We note, however, that variants of random forests that do not use bootstrap noise have also been found to work well (e.g., Dietterich, 2000; Geurts et al., 2006).

All our results about bagged predictors apply directly to random forests. The reason for this is that random forests can also be defined as bagged predictors with different base learners. Suppose that, on each bootstrap replicate, we drew $K$ times from the auxiliary noise distribution $\Xi$ instead of just once. This would give us a predictor of the form

$$\hat\theta_{RF}^{B,K}(x) = \frac{1}{B}\sum_{b=1}^B \frac{1}{K}\sum_{k=1}^K T\left(x;\, \xi_{bk},\, Z_{b1}^*, \dots, Z_{bn}^*\right).$$

Adding the extra draws from $\Xi$ does not change the limit of the random forest. If we take $K \to \infty$, we effectively marginalize over the noise from $\Xi$, and get a predictor of the form (2) with base learner

$$\tilde t\left(x;\, Z_1, \dots, Z_n\right) = \mathbb{E}_\xi\left[T\left(x;\, \xi,\, Z_1, \dots, Z_n\right)\right].$$

In other words, the random forest as defined in (9) is just a noisy estimate of a bagged predictor with base learner $\tilde t$.

It is straightforward to check that our results about $\hat V_{IJ}^B$ and $\hat V_J^B$ also hold for bagged predictors with randomized base learners. The extra noise from using $T$ instead of $\tilde t$ does not affect the limiting covariances in (3) and (4); meanwhile, the bias corrections from (7) and (8) do not depend on how we produced the $t_b^*(x)$ and remain valid with random forests. Thus, we can estimate confidence intervals for random forests from $\hat V_{IJ\text{-}U}^B$ and $\hat V_{J\text{-}U}^B$ using exactly the same formulas as for bagging.

In the rest of this section, we show how the variance estimates studied in this paper can be used to gain valuable insights into applications of random forests. We use the variance estimate (7) to minimize the required computational resources. We implemented the IJ-U estimator for random forests on top of the R package randomForest (Liaw and Wiener, 2002).
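As an illustration of how such an implementation can be organized, the sketch below is ours (not the code built on randomForest) and is deliberately generic: it assumes only that, for $m$ test points, we have a $B \times m$ matrix preds of per-tree predictions and the $B \times n$ matrix N of in-bag counts, with row b of both matrices coming from the same tree. The truncation of the variance estimate at zero and all names are our own choices.

ij_u_intervals <- function(preds, N, level = 0.95) {
  B <- nrow(N); n <- ncol(N)
  centers <- colMeans(preds)                        # the random forest predictions
  se <- apply(preds, 2, function(t_star) {
    t_bar <- mean(t_star)
    C_hat <- as.numeric(crossprod(N - 1, t_star - t_bar)) / B
    v_hat <- mean((t_star - t_bar)^2)
    sqrt(max(sum(C_hat^2) - n * v_hat / B, 0))      # IJ-U estimate (7), truncated at zero
  })
  z <- qnorm(1 - (1 - level) / 2)
  data.frame(prediction = centers, std_err = se,
             lower = centers - z * se, upper = centers + z * se)
}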

3.1 E-mail Spam Example

The e-mail spam dataset (spambase) is part of a standard classification task, the goal of which is to distinguish spam e-mail (1) from non-spam (0) using $p = 57$ features. Here, we investigate the performance of random forests on this dataset.

Figure 4: Standard errors of random forest predictions on the e-mail spam data. Panels (a) and (b) correspond to random forests with a small and an intermediate number $m$ of splitting variables, while panel (c) has $m = p$ and so is just a bagged tree. The three forests were all trained on the same training set; the panels show class predictions and IJ-U estimates for standard errors on a test set of size 1,536. The solid curves are smoothing splines fit through the data (including both correct and incorrect predictions).

We fit the spam data using random forests with three different numbers $m$ of splitting variables: a small value, an intermediate value, and the maximal value $m = p$; we refer to these below as the small-$m$, intermediate-$m$, and $m = p$ forests. With the small $m$, the trees were highly constrained in their choice of splitting variables, while the $m = p$ forest is just a bagged tree. The three random forests obtained test-set accuracies of 95.1%, 95.2% and 94.7% respectively, so two of the three forests perform comparably well while the third lags behind. We can use the IJ-U variance formula to gain deeper insight into these numbers, and get a better understanding about what is constraining the accuracy of each predictor.

In Figure 4, we plot test-set predictions against IJ-U estimates of standard error for all three random forests. The $m = p$ random forest (the bagged tree) appears to be quite unstable, in that its estimated standard errors are high. Because many of its predictions have large standard errors, it is plausible that the predictions made by this random forest could change drastically if we got more training data. Thus, the $m = p$ forest appears to suffer from overfitting, and the quality of its predictions could improve substantially with more data.

Figure 5: Comparison of the predictions made by the small-$m$ and intermediate-$m$ random forests. The stars indicate pairs of test set predictions; the solid line is a smoothing spline fit through the data.

Conversely, predictions made by the small-$m$ random forest appear to be remarkably stable, and almost all predictions have standard errors that lie below 0.1. This suggests that the small-$m$ forest may be mostly constrained by bias: if the predictor reports only a lukewarm probability that a certain e-mail is spam, then the predictor has effectively abandoned any hope of unambiguously classifying the e-mail. Even if we managed to acquire much more training data, the class prediction for that e-mail would probably not converge to a strong vote for spam or non-spam.

The intermediate-$m$ forest appears to have balanced the bias-variance trade-off well. We can further corroborate our intuition about the bias problem faced by the small-$m$ forest by comparing its predictions with those of the intermediate-$m$ forest. As shown in Figure 5, whenever the small-$m$ forest made a cautious prediction that an e-mail might be spam (e.g., a predicted class probability of around 0.8), the intermediate-$m$ forest made the same classification decision but with more confidence (i.e., with a more extreme class probability estimate). Similarly, the intermediate-$m$ forest tended to lower cautious non-spam predictions made by the small-$m$ forest. In other words, the small-$m$ forest appears to have often made lukewarm predictions with mid-range class probabilities on e-mails for which there was sufficient information in the data to make confident predictions. This analysis again suggests that the small-$m$ forest was constrained by bias and was not able to efficiently use all the information present in the dataset.

3.2 California Housing Example

Figure 6: Performance of random forests on the California housing data. Panel (a) plots MSE and mean sampling variance as a function of the number $m$ of splitting variables. The MSE estimate is the out-of-bag error, while the mean sampling variance is the average estimate of variance computed over all training examples. Panel (b) displays the drivers of sampling variance, namely the variance $\sigma^2$ of the individual bootstrapped trees and their correlation $\rho$.

In the previous example, we saw that the varying accuracy of random forests with different numbers of splitting variables primarily reflected a bias-variance trade-off. Random forests with small $m$ had high bias, while those with large $m$ had high variance. This bias-variance trade-off does not, however, underlie all random forests. The California housing dataset, a regression task with $n = 20{,}640$ examples and $p = 8$ features, provides a contrasting example.

In Figure 6(a), we plot the random forest out-of-bag MSE and the IJ-U estimate of average sampling variance across all training examples, for $m$ between 1 and 8. We immediately notice that the sampling variance is not monotone increasing in $m$. Rather, the sampling variance is high if $m$ is either too big or too small, and attains a minimum at an intermediate value of $m$. Meanwhile, the MSE is also minimized at an intermediate value of $m$. Thus, there is no bias-variance trade-off here: picking a value of $m$ around 4 or 5 is optimal both from the MSE minimization and the variance minimization points of view.

We can gain more insight into this phenomenon using ideas going back to Breiman (2001), who showed that the sampling variance of a random forest is governed by two factors: the variance $\sigma^2$ of the individual bootstrapped trees and their correlation $\rho$. The variance of the ensemble is then approximately $\rho\,\sigma^2$ once the number of trees is large. In Figure 6(b), we show how both $\sigma^2$ and $\rho$ react when we vary $m$. Trees with large $m$ are fairly correlated, and so the random forest does not get as substantial a variance reduction over the base learner as with a smaller $m$. With a very small $m$, however, the variance of the individual trees shoots up, and so the decrease in $\rho$ is no longer sufficient to bring down the variance of the whole forest. The increasing $\rho$-curve and the decreasing $\sigma^2$-curve thus jointly produce a U-shaped relationship between $m$ and the variance of the random forest. The forest with $m$ around 4 or 5 achieves a low variance by matching fairly stable base learners with a small correlation $\rho$.

4 Controlling Monte Carlo Error

In this section, we analyze the behavior of both the IJ and jackknife estimators under Monte Carlo noise. We begin by discussing the Monte Carlo distribution of the infinitesimal jackknife estimate of variance with a finite $B$; the case of the jackknife-after-bootstrap estimate of variance is similar but more technical and is presented in Appendix A. We show that the jackknife estimator needs about 1.7 times more bootstrap replicates than the IJ estimator to control Monte Carlo noise at a given level. We also highlight a bias problem for both estimators, and recommend a bias correction. When there is no risk of ambiguity, we use the short-hand $t_b^*$ for $t_b^*(x)$.

4.1 Monte Carlo Error for the IJ Estimator

We first consider the Monte Carlo bias of the infinitesimal jackknife for bagging. Let

$$\hat V_{IJ}^\infty = \sum_{i=1}^n C_i^2, \qquad C_i = \mathrm{Cov}_*\left[N_i^*,\, t^*(x)\right], \qquad (10)$$

be the perfect IJ estimator with $B = \infty$ (Efron, 2012). Then, the Monte Carlo bias of $\hat V_{IJ}^B$ is

$$\mathbb{E}_*\left[\hat V_{IJ}^B\right] - \hat V_{IJ}^\infty \approx \sum_{i=1}^n \mathrm{Var}_*\left[\hat C_i\right],$$

where $\hat C_i$, as defined in (5), is the Monte Carlo estimate of the bootstrap covariance $C_i$. Since $t_b^*$ depends on all $n$ observations, $N_{bi}^*$ and $t_b^*$ can in practice be treated as independent for computing $\mathrm{Var}_*[\hat C_i]$, especially when $n$ is large (see remark below). Thus, as $n \to \infty$, we see that

$$\mathbb{E}_*\left[\hat V_{IJ}^B\right] - \hat V_{IJ}^\infty \approx \frac{n\,\hat v}{B}, \qquad \hat v = \mathrm{Var}_*\left[t^*(x)\right]. \qquad (11)$$

Notice that $\hat v$ is the standard bootstrap estimate for the variance of the base learner $\hat\theta(x)$. Thus, the bias of $\hat V_{IJ}^B$ grows linearly in the variance of the original estimator that is being bagged.

Meanwhile, by the central limit theorem, $\hat C_i$ converges to a Gaussian random variable as $B$ gets large, and the Monte Carlo variance of $\hat C_i$ is approximately $\hat v / B$. The $\hat C_i$ can be treated as roughly independent, and so the limiting distribution of the IJ estimate of variance has approximate Monte Carlo moments

$$\mathbb{E}_*\left[\hat V_{IJ}^B\right] - \hat V_{IJ}^\infty \approx \frac{n\,\hat v}{B}, \qquad \mathrm{Var}_*\left[\hat V_{IJ}^B\right] \approx \frac{4\,\hat v\,\hat V_{IJ}^\infty}{B} + \frac{2\,n\,\hat v^2}{B^2}. \qquad (12)$$

Interestingly, the Monte Carlo mean squared error (MSE) of $\hat V_{IJ}^B$ mostly depends on the problem through the ratio $n\,\hat v / B$, where $\hat v$ is the bootstrap estimate of the variance of the base learner. In other words, the computational difficulty of obtaining confidence intervals for bagged learners depends on the variance of the base learner.
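As a quick sanity check of (11), the following small simulation (our own, not from the paper) uses the sample mean as the base learner. For this choice, $\hat v$ is roughly $\mathrm{var}(y)/n$, so the predicted upward Monte Carlo bias of the uncorrected IJ estimate is roughly $\mathrm{var}(y)/B$, which is comparable to the quantity being estimated, roughly $\mathrm{var}(y)/n$, unless $B$ is much larger than $n$.

set.seed(1)
n <- 100
y <- rnorm(n)

ij_raw <- function(y, B) {
  n <- length(y)
  N <- t(replicate(B, tabulate(sample(n, n, replace = TRUE), nbins = n)))
  t_star <- as.numeric(N %*% y) / n               # bootstrap replicates of the sample mean
  t_bar <- mean(t_star)
  C_hat <- as.numeric(crossprod(N - 1, t_star - t_bar)) / B
  c(V_IJ = sum(C_hat^2),
    predicted_bias = n * mean((t_star - t_bar)^2) / B)
}

# Averaged over repetitions, the drop in V_IJ between B = 200 and B = 2000
# should be close to the difference of the predicted bias terms.
rowMeans(replicate(20, ij_raw(y, B = 200)))
rowMeans(replicate(20, ij_raw(y, B = 2000)))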

Remark: The IJ Estimator for Sub-bagging

We have focused on the case where each bootstrap replicate contains exactly $n$ samples. However, in some applications, bagging with subsamples of size less than $n$ has been found to work well (e.g., Bühlmann and Yu, 2002; Buja and Stuetzle, 2006; Friedman, 2002; Strobl et al., 2007). Our results directly extend to the case where fewer than $n$ samples are drawn with replacement from the original sample. We can check that (10) still holds, but the variance of the bootstrap counts $N_i^*$ is now smaller than one. Carrying out the same analysis as above, we can establish an analogue to (12):

(13)

For simplicity of exposition, we will restrict our analysis to the case where each bootstrap replicate contains exactly $n$ samples for the rest of this paper.

Remark: Approximate Independence

In the above derivation, we used the approximation

$$\mathrm{Var}_*\left[\hat C_i\right] \approx \frac{1}{B}\,\mathrm{Var}_*\left[N_i^*\right]\,\mathrm{Var}_*\left[t^*(x)\right],$$

which treats $N_i^*$ and $t^*(x)$ as independent under the bootstrap. The accuracy of this approximation can be evaluated by computing the neglected covariance term directly. In the case of the sample mean paired with the Poisson bootstrap, this term can be worked out in closed form, and the resulting correction to (11) is small relative to the leading $n\,\hat v / B$ term.

4.2 Comparison of Monte Carlo Errors

As shown in Appendix A, the Monte Carlo error for the jackknife-after-bootstrap estimate of variance has approximate moments

(14)

where $\hat V_J^B$ is the jackknife estimate computed with $B$ bootstrap replicates. The Monte Carlo stability of $\hat V_J^B$ again primarily depends on the bootstrap variance $\hat v$ of the base learner.

By comparing (12) with (14), we notice that the IJ estimator makes better use of a finite number of bootstrap replicates than the jackknife estimator. For a fixed value of $B$, the Monte Carlo bias of $\hat V_J^B$ is about $e - 1 \approx 1.7$ times as large as that of $\hat V_{IJ}^B$; the ratio of Monte Carlo variances starts off at about 3 for small values of $B$ and decays down to about 1.7 as $B$ grows. Alternatively, we see that the IJ estimate computed with $B$ bootstrap replicates has errors on the same scale as the jackknife estimate computed with $1.7\,B$ replicates.

This suggests that if computational considerations matter and there is a desire to perform as few bootstrap replicates as possible while controlling Monte Carlo error, the infinitesimal jackknife method may be preferable to the jackknife-after-bootstrap.

4.3 Correcting for Monte Carlo Bias

The Monte Carlo MSEs of $\hat V_{IJ}^B$ and $\hat V_J^B$ are in practice dominated by bias, especially for large $n$. Typically, we would like to pick $B$ large enough to bring the Monte Carlo MSE down to the level of the inherent sampling noise. For both (12) and (14), we see that performing $B = \Theta(n)$ bootstrap iterations is enough to control the Monte Carlo variance. To reduce the Monte Carlo bias to the desired level, however, we would need to take $B = \Theta(n^{1.5})$ bootstrap samples.

Although the Monte Carlo bias for both $\hat V_{IJ}^B$ and $\hat V_J^B$ is large, this bias only depends on the bootstrap variance $\hat v$ of the base learner and so is highly predictable. This suggests a bias-corrected modification of the IJ and jackknife estimators respectively:

$$\hat V_{IJ\text{-}U}^B = \hat V_{IJ}^B - \frac{n\,\hat v}{B}, \qquad (15)$$
$$\hat V_{J\text{-}U}^B = \hat V_J^B - (e - 1)\,\frac{n\,\hat v}{B}. \qquad (16)$$

Here $\hat V_{IJ}^B$ and $\hat V_J^B$ are as defined in (5) and (6), and $\hat v = \frac{1}{B}\sum_{b=1}^B \left(t_b^*(x) - \bar t^*(x)\right)^2$ is the bootstrap estimate of variance from (11). The letter U stands for unbiased. This transformation effectively removes the Monte Carlo bias in our experiments without noticeably increasing variance. The bias-corrected estimates only need $B = \Theta(n)$ bootstrap replicates to bring the Monte Carlo MSE down to the level of the sampling noise.

4.4 A Numerical Example

To validate the observations made in this section, we revisit the cholesterol dataset used by Efron (2012) as a central example in developing the IJ estimate of variance. The dataset (introduced by Efron and Feldman, 1991) contains records for 164 participants in a clinical study, all of whom received a proposed cholesterol-lowering drug. The data contains a measure of the cholesterol level decrease observed for each subject, as well as a measure of compliance (i.e., how faithful the subject was in taking the medication). Efron and Feldman originally fit the cholesterol decrease as a polynomial function of compliance; the degree of the polynomial was adaptively selected by minimizing Mallows' $C_p$ criterion (Mallows, 1973).

We here follow Efron (2012) and study the bagged adaptive polynomial fit of cholesterol decrease against compliance for predicting the cholesterol decrease of a new subject with a specific compliance level. The degree of the polynomial is selected among the integers between 1 and 6 by $C_p$ minimization. Efron (2012) gives a more detailed description of the experiment. We restrict our attention to predicting the cholesterol decrease of a new patient whose compliance matches the lowest observed compliance level in the dataset.

Figure 7: Predicted and actual performance ratios for the uncorrected IJ and jackknife estimators in the cholesterol compliance example. The plot shows both the ratio of Monte Carlo biases and the ratio of Monte Carlo variances of the two estimators. The observations are derived from the data presented in Figure 3; the error bars are one standard deviation in each direction. The solid lines are theoretical predictions obtained from (12) and (14).

In Figure 3, we compare the performance of the variance estimates for bagged predictors studied in this paper. The boxplots depict repeated realizations of the variance estimates with a finite number $B$ of bootstrap replicates. We can immediately verify the qualitative insights presented in this section. Both the jackknife and IJ rules are badly biased for small $B$, and this bias goes away more slowly than the Monte Carlo variance. Moreover, at any given $B$, the jackknife estimator is noticeably less stable than the IJ estimator.

The J-U and IJ-U estimators appear to fix the bias problem without introducing instability. The J-U estimator has a slightly higher mean than the IJ-U one. As discussed in Section 5.2, this is not surprising, as the limiting ($B = \infty$) jackknife estimator has an upward sampling bias while the limiting IJ estimator can have a downward sampling bias. The fact that the J-U and IJ-U estimators are so close suggests that both methods work well for this problem.

The insights developed here also appear to hold quantitatively. In Figure 7, we compare the ratios of Monte Carlo bias and variance for the jackknife and IJ estimators with theoretical approximations implied by (12) and (14). The theoretical formulas appear to present a credible picture of the relative merits of the jackknife and IJ rules.

5 Sampling Distribution of Variance Estimates

In practice, the $\hat V_{IJ}$ and $\hat V_J$ estimates are computed with a finite number of bootstrap replicates. In this section, however, we let $B$ go to infinity, and study the sampling properties of the IJ and jackknife variance estimates in the absence of Monte Carlo errors. In other words, we study the impact of noise in the data itself. Recall that we write $\hat V_{IJ}^\infty$ and $\hat V_J^\infty$ for the limiting estimators with $B = \infty$ bootstrap replicates.

We begin by developing a simple formula for the sampling variance of $\hat V_{IJ}^\infty$ itself. In the process of developing this variance formula, we obtain an ANOVA-style expansion of $\hat V_{IJ}^\infty$ that we then use in Section 5.2 to compare the sampling biases of the jackknife and infinitesimal jackknife estimators.

5.1 Sampling Variance of the IJ Estimate of Variance

If the data $Z_i$ are independently drawn from a distribution $D$, then the variance of the IJ estimator is very nearly given by

$$\mathrm{Var}\left[\hat V_{IJ}^\infty\right] \approx n\,\mathrm{Var}_D\left[\theta_1(x)^2\right], \qquad (17)$$
$$\text{where}\quad \theta_i(x) = \mathbb{E}_D\left[\hat\theta^\infty(x)\,\middle|\,Z_i\right] - \mathbb{E}_D\left[\hat\theta^\infty(x)\right]. \qquad (18)$$

This expression suggests a natural plug-in estimator

$$\widehat{\mathrm{Var}}\left[\hat V_{IJ}^\infty\right] = \sum_{i=1}^n \left(\hat C_i^2 - \overline{C^2}\right)^2, \qquad (19)$$

where $\hat C_i$ is a bootstrap estimate for $\theta_i(x)$ and $\overline{C^2}$ is the mean of the $\hat C_i^2$. The rest of the notation is as in Section 2.
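A direct transcription of this plug-in estimator into R might look as follows; this is our own sketch, using the same inputs as before (the $B \times n$ bootstrap-count matrix N and the per-replicate predictions t_star at a fixed test point), and the normalization follows our reconstruction of (19) above.

variance_of_V_IJ <- function(N, t_star) {
  B <- nrow(N)
  t_bar <- mean(t_star)
  C_hat <- as.numeric(crossprod(N - 1, t_star - t_bar)) / B  # bootstrap estimates of the theta_i
  C2_bar <- mean(C_hat^2)
  sum((C_hat^2 - C2_bar)^2)                                  # estimated variance of V_IJ, as in (19)
}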

The relation (17) arises from a general connection between the infinitesimal jackknife and the theory of Hájek projections. The Hájek projection of an estimator is the best approximation to that estimator that only considers first-order effects. In our case, the Hájek projection of $\hat\theta^\infty(x)$ is

$$\mathring\theta(x) = \mathbb{E}_D\left[\hat\theta^\infty(x)\right] + \sum_{i=1}^n \theta_i(x), \qquad (20)$$

where $\theta_i(x)$ is as in (18). The variance of the Hájek projection is $\sum_{i=1}^n \mathrm{Var}_D\left[\theta_i(x)\right]$.

The key insight behind (17) is that the IJ estimator is effectively trying to estimate the variance of the Hájek projection of $\hat\theta^\infty(x)$, and that

$$\hat V_{IJ}^\infty \approx \sum_{i=1}^n \theta_i(x)^2. \qquad (21)$$

The approximation (17) then follows immediately, as the right-hand side of the above expression is a sum of independent random variables. Note that we cannot apply this right-hand-side expression directly, as $\theta_i(x)$ depends on the unknown underlying distribution $D$.

The connections between Hájek projections and the infinitesimal jackknife have been understood for a long time. Jaeckel (1972) originally introduced the infinitesimal jackknife as a practical approximation to the first-order variance of an estimator (in our case, the right-hand side of (21)). More recently, Efron (2012) showed that $\hat V_{IJ}^\infty$ is equal to the variance of a “bootstrap Hájek projection.” In Appendix B, we build on these ideas and show that, in cases where a plug-in approximation is valid, (21) holds very nearly for bagged estimators.

(a) Bagged polynomial fit.
(b) Errors for the IJ estimator
Figure 8: Stability of the IJ estimate of variance on the cholesterol data. The left panel shows the bagged fit to the data, along with error bars generated by the IJ method; the stars denote the data (some data points have values that exceed the vertical range of the plot). In the right panel, we use (19) to estimate error bars for the error bars in the first panel. All error bars are one standard deviation in each direction.

We apply our variance formula to the cholesterol dataset of Efron (2012), following the methodology described in Section 4.4. In Figure 8, we use the formula (19) to study the sampling variance of $\hat V_{IJ}^\infty$ as a function of the compliance level. The main message here is rather reassuring: as seen in Figure 8(b), the coefficient of variation of the variance estimate appears to be fairly low, suggesting that the IJ variance estimates can be trusted in this example. Note that the formula from (19) can require many bootstrap replicates to stabilize and suffers from an upward Monte Carlo bias just like $\hat V_{IJ}^B$. We used a large number of bootstrap replicates to generate Figure 8.

5.2 Sampling Bias of the Jackknife and IJ Estimators

We can understand the sampling biases of both the jackknife and IJ estimators in the context of the ANOVA decomposition of Efron and Stein (1981). Suppose that we have data $Z_1, \dots, Z_n$ drawn independently from a distribution $D$, and compute our estimate $\hat\theta = t(Z_1, \dots, Z_n)$ based on this data. Then, we can decompose its variance as

$$\mathrm{Var}\left[\hat\theta\right] = V_1 + V_2 + \dots + V_n, \qquad (22)$$

where

$$V_1 = \sum_{i=1}^n \mathrm{Var}_D\left[\mathbb{E}\left[\hat\theta\,\middle|\,Z_i\right]\right]$$

is the variance due to first-order effects, $V_2$ is the variance due to second-order effects of the form

$$\mathbb{E}\left[\hat\theta\,\middle|\,Z_i, Z_j\right] - \mathbb{E}\left[\hat\theta\,\middle|\,Z_i\right] - \mathbb{E}\left[\hat\theta\,\middle|\,Z_j\right] + \mathbb{E}\left[\hat\theta\right],$$

and so on. Note that all the $V_k$ terms are non-negative.

Efron and Stein (1981) showed that, under general conditions, the jackknife estimate of variance is biased upwards. In our case, their result implies that the jackknife estimate of variance computed on $n$ data points has expectation

$$\mathbb{E}\left[\hat V_J^\infty\right] \approx V_1 + 2\,V_2 + 3\,V_3 + \dots \qquad (23)$$

Meanwhile, (21) suggests that

$$\mathbb{E}\left[\hat V_{IJ}^\infty\right] \approx V_1. \qquad (24)$$

In other words, on average, both the jackknife and IJ estimators get the first-order variance term right. The jackknife estimator then proceeds to double the second-order term, triple the third-order term, and so on, while the IJ estimator just drops the higher-order terms.

By comparing (23) and (24), we see that the upward bias of $\hat V_J^\infty$ and the downward bias of $\hat V_{IJ}^\infty$ partially cancel out. In fact,

$$\mathbb{E}\left[\frac{\hat V_J^\infty + \hat V_{IJ}^\infty}{2}\right] \approx \mathrm{Var}\left[\hat\theta\right] + \frac{1}{2}\,V_3 + V_4 + \dots, \qquad (25)$$

and so the arithmetic mean of $\hat V_J^\infty$ and $\hat V_{IJ}^\infty$ has an upward bias that depends only on third- and higher-order effects. Thus, we might expect that in small-sample situations where $\hat V_J^\infty$ and $\hat V_{IJ}^\infty$ exhibit some bias, the mean of the two estimates may work better than either of them taken individually.

(a) Bagged tree sampling bias.
(b) Prostate cancer example.
Figure 9: Sampling bias of the jackknife and IJ rules. In the left panel, we compare the expected values of the jackknife and IJ estimators as well as their mean with the true variance of a bagged tree. In this example, we depict variance estimates along the diagonal of the feature space. The prostate cancer plot can be interpreted in the same way as Figure 3, except that we now indicate the weighted means of the J-U and IJ-U estimators separately.

To test this idea, we used both the jackknife and IJ methods to estimate the variance of a bagged tree trained on a small sample. (See Appendix C for details.) Since the sample size is so small, both the jackknife and IJ estimators exhibit some bias, as seen in Figure 9(a). However, the mean of the two estimators is nearly unbiased for the true variance of the bagged tree. (It appears that this mean has a very slight upward bias, just as we would expect from (25).)

This issue can arise in real datasets too. When training bagged forward stepwise regression on a prostate cancer dataset discussed by Hastie et al. (2009), the jackknife and IJ methods give fairly different estimates of variance: the jackknife estimator converged to 0.093, while the IJ estimator stabilized at 0.067 (Figure 9(b)). Based on the discussion in this section, it appears that the average of these two numbers, 0.080, should be considered a less biased estimate of variance than either of the two numbers on their own.

[Table 1: for each of the settings Cosine (50, 2, 200), Cosine (200, 2, 500), Noisy XOR (50, 50, 200), Noisy XOR (200, 50, 500), Noisy AND (50, 500, 200), Noisy AND (200, 500, 500), and Auto (314, 7, 1000), the table reports the forest's error (ERR) together with the Bias, Var, and MSE of each variance estimate.]
Table 1: Simulation study. We evaluate the mean bias, variance, and MSE of different variance estimates for random forests. For each setting, the three numbers listed above give the number of test examples used, the number of features, and the number of trees grown; the numbers in parentheses are 95% confidence errors from sampling. The best methods for each evaluation metric are highlighted in bold. The data-generating functions are described in Appendix C.

In the more extensive simulations presented in Table 1, averaging $\hat V_{J\text{-}U}^B$ and $\hat V_{IJ\text{-}U}^B$ is in general less biased than either of the original estimators (although the “Noisy AND” experiment seems to provide an exception to this rule, suggesting that most of the bias of the jackknife for this function is due to higher-order interactions). However, the IJ-U estimator has systematically lower variance, which allows it to win in terms of overall mean squared error. Thus, if unbiasedness is important, averaging the two estimators seems like a promising idea, but the IJ-U rule appears to be the better one in terms of raw MSE minimization.

Finally, we emphasize that this relative bias result relies on the heuristic relationship (24). While this approximation does not seem problematic for the first-order analysis presented in Section 5.1, we may be concerned that the plug-in argument from Appendix B used to justify it may not give us correct second- and higher-order terms. Thus, although our simulation results seem promising, developing a formal and general understanding of the relative biases of $\hat V_J^\infty$ and $\hat V_{IJ}^\infty$ remains an open topic for follow-up research.

6 Conclusion

In this paper, we studied the jackknife-after-bootstrap and infinitesimal jackknife (IJ) methods (Efron, 1992, 2012) for estimating the variance of bagged predictors. We demonstrated that both estimators suffer from considerable Monte Carlo bias, and we proposed bias-corrected versions of the methods that appear to work well in practice. We also provided a simple formula for the sampling variance of the IJ estimator, and showed that from a sampling bias point of view the arithmetic mean of the jackknife and IJ estimators is often preferable to either of the original methods. Finally, we applied these methods in numerous experiments, including some random forest examples, and showed how they can be used to gain valuable insights in realistic problems.

Acknowledgments

The authors are grateful for helpful suggestions from the action editor and three anonymous referees. S.W. is supported by a B.C. and E.J. Eaves Stanford Graduate Fellowship.

A The Effect of Monte Carlo Noise on the Jackknife Estimator

In this section, we derive expressions for the finite-$B$ Monte Carlo bias and variance of the jackknife-after-bootstrap estimate of variance. Recall from (6) that

$$\hat V_J^B = \frac{n-1}{n}\sum_{i=1}^n \left(\bar t_{(-i)}^*(x) - \bar t^*(x)\right)^2,$$

where $N_{bi}^*$ indicates the number of times the $i$-th observation appears in the bootstrap sample $b$ and $\bar t_{(-i)}^*(x)$ is the average of $t_b^*(x)$ over the replicates with $N_{bi}^* = 0$. If $\bar t_{(-i)}^*(x)$ is not defined because $N_{bi}^* = 0$ for either all or none of the replicates $b$, then we just set $\bar t_{(-i)}^*(x) = \bar t^*(x)$.

Now, $\hat V_J^B$ is a sum of squares of noisy quantities, and so it will be biased upwards. Specifically, we study the Monte Carlo bias $\mathbb{E}_*\left[\hat V_J^B\right] - \hat V_J^\infty$, where $\hat V_J^\infty$ is the jackknife estimate computed with $B = \infty$ bootstrap replicates. For convenience, let

and recall that

For all or , the conditional expectation is

in the degenerate cases with . Thus,

and so

Meanwhile, for ,

where

Thus,

where .

As and get large, converges in law to a Gaussian random variable

and the above expressions are uniformly integrable. We can verify that

and

Finally, this lets us conclude that