# Finite Sample Inference for Targeted Learning

## Abstract

The Highly Adaptive Lasso (HAL)-TMLE is an efficient estimator of a pathwise differentiable parameter in a statistical model that minimally (and possibly only) assumes that the sectional variation norm of the true nuisance parameters is finite. It relies on an initial estimator (HAL-MLE) of the nuisance parameters obtained by minimizing the empirical risk over the parameter space under the constraint that the sectional variation norm is bounded by a constant, where this constant can be selected with cross-validation. In the formulation of the HAL-MLE this sectional variation norm corresponds with the sum of the absolute values of the coefficients for an indicator basis. Due to its reliance on machine learning, statistical inference for the TMLE has been based on its normal limit distribution, thereby potentially ignoring a large second order remainder in finite samples. In this article we present four methods for the construction of a finite sample 0.95-confidence interval that use the nonparametric bootstrap to estimate the finite sample distribution of the HAL-TMLE or a conservative distribution dominating the true finite sample distribution. We prove that it consistently estimates the optimal normal limit distribution, while its approximation error is driven by the performance of the bootstrap for a well behaved empirical process. We demonstrate our general inferential methods for 1) nonparametric estimation of the average treatment effect based on observing on each unit a covariate vector, binary treatment, and outcome, and for 2) nonparametric estimation of the integral of the square of the multivariate density of the data distribution.

**Keywords:** Asymptotically efficient estimator, asymptotically linear estimator, canonical gradient, finite sample inference, empirical process, highly adaptive Lasso (HAL), influence curve, nonparametric bootstrap, sectional variation norm, super-learner, targeted minimum loss-based estimation (TMLE).

## 1 Introduction

We consider estimation of a pathwise differentiable real valued target parameter based on observing independent and identically distributed observations with a data distribution $P_0$ known to belong to a highly nonparametric statistical model $\mathcal{M}$. A target parameter $\Psi: \mathcal{M} \to \mathbb{R}$ is a mapping that maps a possible data distribution $P$ into a real number, while $\Psi(P_0)$ represents the answer to the question of interest about the data experiment. The canonical gradient $D^*(P)$ of the pathwise derivative of the target parameter at $P$ defines an asymptotically efficient estimator among the class of regular estimators [5]: an estimator is asymptotically efficient at $P_0$ if and only if it is asymptotically linear at $P_0$ with influence curve $D^*(P_0)$:

The target parameter depends on the data distribution through a parameter $Q = Q(P)$, while the canonical gradient possibly also depends on another nuisance parameter $G = G(P)$: $D^*(P) = D^*(Q(P), G(P))$. Both of these nuisance parameters are chosen so that they can be defined as a minimizer of the expectation of a specific loss function: $Q_0 = \arg\min_Q P_0 L_1(Q)$ and $G_0 = \arg\min_G P_0 L_2(G)$, where we used the notation $Pf \equiv \int f \, dP$. We assume that the parameter spaces for these nuisance parameters $Q_0$ and $G_0$ are contained in the set of multivariate cadlag functions with sectional variation norm [10] bounded by a constant (this norm will be defined in the next section).

We consider a targeted minimum loss-based (substitution) estimator [33] of the target parameter that uses as initial estimator of these nuisance parameters the highly adaptive lasso minimum loss-based estimators (HAL-MLE) defined by minimizing the empirical mean of the loss over the parameter space [2]. Since the HAL-MLEs converge at a rate faster than $n^{-1/2}$ w.r.t. the loss-based quadratic dissimilarities (which corresponds with a rate faster than $n^{-1/4}$ for estimation of $Q_0$ and $G_0$), this HAL-TMLE has been shown to be asymptotically efficient under weak regularity conditions [27]. Statistical inference could therefore be based on the normal limit distribution, in which the asymptotic variance is estimated with an estimator of the variance of the canonical gradient. In that case, inference ignores the potentially very large contributions of the higher order remainder, which in finite samples could easily dominate the first order empirical mean of the efficient influence curve term when the size of the nuisance parameter spaces is large (e.g., the dimension of the data is large and the model is nonparametric).
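To make the HAL-MLE concrete: it fits a linear combination of indicator basis functions, one per subset of coordinates and per observed knot point, under an $L_1$ bound on the coefficients playing the role of the sectional variation norm bound. Below is a minimal sketch (function name `hal_basis` is hypothetical) of the zero-order indicator design matrix; in practice one would hand this matrix to any lasso solver with the $L_1$ constraint set to the variation norm bound.

```python
import itertools
import numpy as np

def hal_basis(X):
    """Zero-order HAL design matrix: for every nonempty subset s of the
    covariates and every observation j, the indicator basis function
    phi_{s,j}(x) = prod_{k in s} 1{x_k >= X[j, k]} (knots at the data)."""
    n, d = X.shape
    cols = []
    for r in range(1, d + 1):
        for s in itertools.combinations(range(d), r):
            for j in range(n):
                # indicator that all coordinates in s exceed the j-th knot
                cols.append(np.all(X[:, s] >= X[j, s], axis=1).astype(float))
    return np.column_stack(cols)

X = np.array([[0.2, 0.7],
              [0.5, 0.1],
              [0.9, 0.4]])
Phi = hal_basis(X)  # 3 observations, (2^2 - 1) * 3 = 9 indicator columns
```

For $n$ observations and $d$ covariates this produces $n(2^d - 1)$ columns, which is why HAL implementations use sparse matrices and efficient lasso solvers.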

In this article we present four methods for inference that use the nonparametric bootstrap to estimate the finite sample distribution of the HAL-TMLE or a conservative distribution dominating its true finite sample distribution.

### 1.1 Organization

Firstly, in Section 2 we formulate the estimation problem and motivate the challenge for statistical inference. We also provide an easy-to-implement, finite sample, highly conservative confidence interval whose width converges to zero at the usual square-root sample size rate, but which is not asymptotically sharp. We use this result to demonstrate the potential impact of the dimension of the data and the sectional variation norm bound on the width of a finite sample confidence interval.

In Section 3 we present the nonparametric bootstrap estimator of the actual sampling distribution of the HAL-TMLE, which thus incorporates estimation of its higher order stochastic behavior, and can thereby be expected to outperform the Wald-type confidence intervals. We prove that this nonparametric bootstrap is asymptotically consistent for the optimal normal limit distribution. Our results also prove that the nonparametric bootstrap preserves the asymptotic behavior of the HAL-MLEs of our nuisance parameters $Q_0$ and $G_0$, providing further evidence for good performance of the nonparametric bootstrap. In the second subsection of Section 3 we propose to bootstrap the exact second-order expansion of the HAL-TMLE. This results in a very direct estimator of the exact sampling distribution of the HAL-TMLE, although it comes at the cost of not respecting that the HAL-TMLE is a substitution estimator. Importantly, our results demonstrate that the approximation error of the two nonparametric bootstrap estimates of the true finite sample distribution of the HAL-TMLE is mainly driven by the approximation error of the nonparametric bootstrap for estimating the finite sample distribution of a well behaved empirical process. We suggest that these two nonparametric bootstrap methods are, among our proposals, the preferred methods for *accurate* inference, by not being aimed to be conservative.
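The basic scheme is to re-run the entire estimation procedure on each bootstrap resample. A minimal generic sketch, with `estimator` standing in for the full HAL-MLE plus targeting pipeline (here illustrated with the sample mean, a deliberately simple hypothetical stand-in):

```python
import numpy as np

def bootstrap_distribution(data, estimator, B=1000, seed=0):
    """Nonparametric bootstrap: resample the n observations with
    replacement and re-apply the full estimator to each resample."""
    rng = np.random.default_rng(seed)
    n = len(data)
    boot = np.array([estimator(data[rng.integers(0, n, n)]) for _ in range(B)])
    return estimator(data), boot

def percentile_ci(boot, level=0.95):
    """Percentile-type interval from the bootstrap distribution."""
    alpha = 1.0 - level
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    return lo, hi

data = np.random.default_rng(2).normal(1.0, 1.0, 200)
est, boot = bootstrap_distribution(data, np.mean, B=500)
lo, hi = percentile_ci(boot)
```

For the HAL-TMLE, each call to `estimator` would refit the HAL-MLEs (at the same variation norm bound) and re-run the targeting step on the bootstrap sample.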

In Section 4 we upper-bound the absolute value of the exact remainder of the second-order expansion of the HAL-TMLE in terms of a specified function of the loss-based dissimilarities for the HAL-MLEs of the nuisance parameters $Q_0$ and $G_0$. The resulting conservative finite sample second-order expansion is highly conservative but is still asymptotically sharp, converging to the actual normal limit distribution of the HAL-TMLE (from above). We then propose to use the nonparametric bootstrap to estimate this conservative finite sample distribution. In the Appendix Section 11 we further upper bound the previously obtained conservative finite sample expansion by taking a supremum over a set of possible realizations of the HAL-MLEs that will contain the true $Q_0$ and $G_0$ with probability tending to 1, where this probability is controlled/set by the user. We also propose a simplified conservative approximation of this supremum which is easy to implement. Even though these two sampling distributions are even more conservative, they are still asymptotically sharp, so that the corresponding nonparametric bootstrap methods also converge to the optimal normal limit distribution.

In Section 5 we demonstrate our methods for two examples involving a nonparametric model and a specified target parameter (average treatment effect and integral of the square of the data density). We conclude with a discussion in Section 6. Some of the technical results and proofs have been deferred to the Appendix, while the overall proofs are presented in the main part of the article.

### 1.2 Why does it work, and how it applies to adaptive TMLE

The key behind the validity of the nonparametric bootstrap for estimation of the sampling distribution of the HAL-MLE and HAL-TMLE is that the HAL-MLE is an actual MLE, thereby avoiding the data adaptive trade-off of bias and variance that is naturally achieved with cross-validation. However, even though the inference is based on such a non-adaptive HAL-TMLE, one can still use a highly adaptive HAL-TMLE as the point estimate in our reported confidence intervals. Specifically, one can use our confidence intervals with the point estimate defined as a TMLE using a super-learner [28] that includes the HAL-MLE as one of the candidate estimators in its library. By the oracle inequality for the cross-validation selector, such a super-learner will improve on the HAL-MLE, so that the proposed inference based on the non-adaptive HAL-TMLE will be more conservative. In addition, our confidence intervals can be used with the point estimate defined by adaptive TMLEs incorporating additional refinements such as collaborative TMLE [29]; cross-validated TMLE [42]; higher order TMLE [6]; and double robust inference TMLE [26]. Again, such refinements generally improve the finite sample accuracy of the estimator, so that they will improve the coverage of the confidence intervals based on the non-adaptive HAL-TMLE.

Our confidence intervals can also be used if the statistical model has no known bound on the sectional variation norm of the nuisance parameters. In that case, we recommend selecting such a bound with cross-validation (just as one selects the $L_1$-norm penalty in Lasso regression with cross-validation), which, by the oracle inequality for the cross-validation selector [28], is guaranteed to be larger than the sectional variation norm of the true nuisance parameters with probability tending to 1. In that case, the confidence intervals will still be asymptotically correct and incorporate most of the higher order variability, but they ignore the potential finite sample underestimation of the true sectional variation norm. In addition, in that case the inference adapts to the underlying unknown sectional variation norm of the true nuisance parameters. We plan to evaluate the practical performance of our methods in the near future.
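The cross-validation selector itself is generic: split the data into V folds, fit under each candidate bound on the training folds, and pick the bound minimizing the averaged validation risk. A toy sketch (the `M`-constrained fit here is a clipped mean, a hypothetical stand-in for the $L_1$-constrained HAL fit):

```python
import numpy as np

def cv_select_bound(y, candidates, V=5, seed=0):
    """V-fold cross-validation selector for a global bound M. Here the
    M-constrained fit is the training mean clipped to [-M, M]; for HAL,
    M would instead be the L1 bound on the lasso coefficients."""
    rng = np.random.default_rng(seed)
    folds = rng.permutation(len(y)) % V
    risks = []
    for M in candidates:
        risk = 0.0
        for v in range(V):
            train, test = y[folds != v], y[folds == v]
            fit = np.clip(train.mean(), -M, M)  # M-constrained fit
            risk += np.mean((test - fit) ** 2)  # validation risk
        risks.append(risk / V)
    return candidates[int(np.argmin(risks))]
```

By the oracle inequality, the selected bound is, with probability tending to 1, at least as large as the smallest bound compatible with the truth.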

### 1.3 Relation to literature on higher order influence functions

J. Pfanzagl [20] introduced the notion of higher order pathwise differentiability of finite dimensional target parameters and corresponding higher order gradients. He used these higher order expansions of the target parameter to define higher order one-step estimators that might result in asymptotically linear estimators where regular one-step estimators [15] might fail to behave well due to a too large second-order remainder. This is the perspective that inspired the seminal contributions of J. Robins, L. Li, E. Tchetgen & A. van der Vaart (e.g., [22]). They develop a rigorous theory for (e.g.) second-order one-step estimators, including the typical case that the parameter is not second-order pathwise differentiable. They allow the case that the second-order remainder asymptotically dominates the first order term, resulting in estimators and confidence intervals that converge to zero at a slower rate than $n^{-1/2}$. Their second-order expansion uses approximations of "would-be" second-order gradients, where the approximation results in a bias term they termed the representation error. Unfortunately, this representation error, due to the lack of second order pathwise differentiability, obstructs the construction of estimators with a third order remainder (and thereby asymptotic linearity under the condition that a third order term is $o_P(n^{-1/2})$). These second-order one-step estimators involve careful selection of tuning/smoothing parameters for approximating the "would-be" second-order gradient in order to obtain an optimal bias-variance trade-off. These authors applied their theory to nonparametric estimation of a mean with missing data and of the integral of the square of the density. The higher-order expansions that come with the construction of higher order one-step estimators can be directly incorporated in the construction of confidence intervals, thereby possibly leading to improved finite sample coverage. These higher order expansions rely on hard-to-estimate objects such as a multivariate density in a denominator, giving rise to enormous practical challenges in constructing robust higher order confidence intervals, as noted in the above articles.

[19] already pointed out that one-step estimators, and to a larger degree higher order one-step estimators, fail to respect global known bounds implied by the model and the target parameter mapping, by adding to an initial estimator an empirical mean of a first order influence function and higher order U-statistics (i.e., higher order empirical averages) of higher order influence functions. He suggested that, to circumvent this problem, one would have to carry out the updating process in the model space instead of in the parameter space. This is precisely what is carried out by the general TMLE framework [33], and higher order TMLEs based on approximate higher order influence functions were developed in [6]. These higher order TMLEs represent the TMLE analogue of higher order one-step estimators, just as the regular TMLE is an analogue of the regular one-step estimator. These TMLEs automatically satisfy the known bounds and thus never produce nonsensical output such as a negative number for a probability. The higher order TMLE is just another TMLE, but using a least favorable submodel with an extra parameter, thereby providing a crucial safeguard against erratic behavior due to estimation of the higher order influence functions, while also being able to utilize the C-TMLE framework to select the tuning parameters for approximating these higher order influence functions.

The approach in this article for the construction of higher order confidence intervals is quite different from the construction of higher order one-step estimators or higher order TMLE and using the corresponding higher order expansion for inference. To start with, we use an asymptotically efficient HAL-TMLE so that we preserve the $n^{-1/2}$-rate of convergence, asymptotic normality and efficiency, even in nonparametric models that only assume that the true nuisance parameters have finite sectional variation norm. As point estimate we can still use an adaptive HAL-TMLE which can, for example, include the higher-order HAL-TMLE refinement, beyond the refinements mentioned above. However, for inference, we avoid the delicate higher order expansions based on approximate higher order gradients, and instead use the exact second-order expansion implied by the definition of the exact second-order remainder (Equation 10), which thus incorporates any higher order term. In addition, by using the robust HAL-MLEs as estimators of the nuisance parameters, the HAL-TMLE is not only efficient but one can also use the nonparametric bootstrap to estimate its sampling distribution. We then use the nonparametric bootstrap to estimate the sampling distribution of the HAL-TMLE itself, or its exact expansion, or an exact conservative expansion in which the exact remainder is replaced by a robust upper bound which only depends on well behaved empirical processes for which the nonparametric bootstrap works (again, due to using the HAL-MLE). Our confidence intervals have width of order $n^{-1/2}$ and are asymptotically sharp by converging to the optimal normal distribution based confidence interval as sample size increases. In addition, they are easy to implement as a by-product of the computation of the HAL-TMLE itself.

## 2 General formulation of statistical estimation problem and motivation for finite sample inference

### 2.1 Statistical model and target parameter

Let $O_1, \ldots, O_n$ be i.i.d. copies of a random variable $O \sim P_0 \in \mathcal{M}$. Let $P_n$ be the empirical probability measure of $O_1, \ldots, O_n$. Let $\Psi: \mathcal{M} \to \mathbb{R}$ be a real valued parameter that is pathwise differentiable at each $P \in \mathcal{M}$ with canonical gradient $D^*(P)$. That is, given a collection of one dimensional submodels $\{P_\epsilon : \epsilon\}$ through $P$ at $\epsilon = 0$ with score $S$, for each of these submodels the derivative $\frac{d}{d\epsilon}\Psi(P_\epsilon)\big|_{\epsilon = 0}$ can be represented as $\langle D(P), S\rangle_P$. The latter is an inner product of a gradient $D(P)$ with the score $S$ in the Hilbert space $L^2_0(P)$ of functions of $O$ with mean zero (under $P$) endowed with inner product $\langle f, g\rangle_P = Pfg$. Let $\|f\|_P = (Pf^2)^{1/2}$ be the Hilbert space norm. Such an element $D(P) \in L^2_0(P)$ is called a gradient of the pathwise derivative of $\Psi$ at $P$. The canonical gradient $D^*(P)$ is the unique gradient that is an element of the tangent space defined as the closure of the linear span of the collection of scores generated by this family of submodels.

Define the exact second-order remainder

$$R_2(P, P_0) \equiv \Psi(P) - \Psi(P_0) + P_0 D^*(P),$$

where $P_0 D^*(P_0) = 0$ since $D^*(P_0)$ has mean zero under $P_0$.

Let $Q: \mathcal{M} \to Q(\mathcal{M})$ be a function valued parameter so that $\Psi(P) = \Psi_1(Q(P))$ for some mapping $\Psi_1$. For notational convenience, we will abuse notation by referring to the target parameter with $\Psi(P)$ and $\Psi(Q)$ interchangeably. Let $G: \mathcal{M} \to G(\mathcal{M})$ be a function valued parameter so that $D^*(P) = D^*(Q(P), G(P))$ for some mapping $D^*$. Again, we will use the notation $D^*(P)$ and $D^*(Q, G)$ interchangeably.

Suppose that $O \in \mathbb{R}^d$ is a $d$-variate random variable with support contained in a $d$-dimensional cube $[0, \tau]$. Let $\mathbb{D}[0, \tau]$ be the Banach space of $d$-variate real valued cadlag functions endowed with a supremum norm [17]. Let $L_1(Q)$ and $L_2(G)$ be loss functions that identify the true $Q_0$ and $G_0$ in the sense that $Q_0 = \arg\min_Q P_0 L_1(Q)$ and $G_0 = \arg\min_G P_0 L_2(G)$. Let $d_{01}(Q, Q_0) = P_0 L_1(Q) - P_0 L_1(Q_0)$ and $d_{02}(G, G_0) = P_0 L_2(G) - P_0 L_2(G_0)$ be the loss-based dissimilarities for these two nuisance parameters.

**Loss functions and canonical gradient have a uniformly bounded sectional variation norm:** We assume that these loss functions and the canonical gradient map into functions in $\mathbb{D}[0, \tau]$ with a sectional variation norm bounded by some universal finite constant:

For a given function $f \in \mathbb{D}[0, \tau]$, we define the sectional variation norm as follows. For a given subset $s \subset \{1, \ldots, d\}$, let $u_s \to f_s(u_s) \equiv f(u_s, 0_{-s})$ be the $s$-specific section of $f$ that sets the coordinates outside the subset $s$ equal to 0, where we used the notation $(u_s, 0_{-s})$ for the vector whose $j$-th component equals $u_j$ if $j \in s$ and $0$ otherwise. The sectional variation norm is now defined by

$$\|f\|_v^* = |f(0)| + \sum_{s \subset \{1, \ldots, d\}} \int_{(0_s, \tau_s]} |df_s(u_s)|,$$

where the sum is over all nonempty subsets $s$ of $\{1, \ldots, d\}$. Note that $\int_{(0_s, \tau_s]} |df_s(u_s)|$ is the standard variation norm of the measure $df_s$ generated by its $s$-specific section $f_s$ on the $|s|$-dimensional edge $(0_s, \tau_s]$ of the $d$-dimensional cube $[0, \tau]$. Thus, the sectional variation norm is the sum of the variation of $f$ itself and of all its $s$-specific sections, plus $|f(0)|$. We also note that any function $f \in \mathbb{D}[0, \tau]$ with finite sectional variation norm (i.e., $\|f\|_v^* < \infty$) can be represented as follows [10]:

$$f(x) = f(0) + \sum_{s \subset \{1, \ldots, d\}} \int_{(0_s, x_s]} df_s(u_s).$$
As utilized in [27] to define the HAL-MLE, since $\int_{(0_s, x_s]} df_s(u_s) = \int_{(0_s, \tau_s]} I(u_s \leq x_s) \, df_s(u_s)$, this representation shows that $f$ can be written as an infinite linear combination of $s$-specific indicator basis functions $x \to I(u_s \leq x_s)$ indexed by a cut-off $u_s$, across all subsets $s$, where the coefficients in front of the indicators are equal to the infinitesimal increments $df_s(u_s)$ of $f$ at $u_s$. For discrete measures $df_s$ this integral becomes a finite linear combination of such $|s|$-way indicators. One could think of this representation as a saturated model of a function in terms of one-way indicators, two-way indicators, and so on, up to the final $d$-way indicator basis functions. For a function $f \in \mathbb{D}[0, \tau]$, we also define the supremum norm $\|f\|_\infty = \sup_{x \in [0, \tau]} |f(x)|$.
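For a function given by its values on a finite two-dimensional grid, the sectional variation norm can be computed directly from first- and second-order differences. A small sketch (hypothetical helper, assuming the grid starts at the origin):

```python
import numpy as np

def sectional_variation_norm_2d(F):
    """Sectional variation norm of a function on a regular 2-d grid,
    F[i, j] = f(x1_i, x2_j) with (i, j) = (0, 0) the origin. Discrete
    analogue: |f(0)| plus the total variation of the two one-dimensional
    sections plus the variation of the full 2-d measure (absolute
    second-order differences)."""
    d1 = np.abs(np.diff(F[:, 0])).sum()                       # section along x1
    d2 = np.abs(np.diff(F[0, :])).sum()                       # section along x2
    d12 = np.abs(np.diff(np.diff(F, axis=0), axis=1)).sum()   # 2-d increments
    return abs(F[0, 0]) + d1 + d2 + d12
```

For the additive function $f(x, y) = x + y$ only the sections contribute, while for $f(x, y) = xy$ only the two-way increments do.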

**Assuming that parameter spaces for $Q$ and $G$ are cartesian products of sets of cadlag functions with bounds on sectional variation norm:** Although the above bounds are the only relevant bounds for the asymptotic performance of the HAL-MLE and HAL-TMLE, for the practical formulation of a model one might prefer to state the sectional variation norm restrictions on the parameters $Q$ and $G$ themselves. For that purpose, let us assume that $Q$ consists of variation independent parameters that are themselves cadlag functions on $[0, \tau]$ with sectional variation norm bounded by some upper bound and lower bound, and similarly for the components of $G$. Specifically, let

denote the parameter spaces for and , and assume that these parameter spaces are contained in the class of -variate cadlag functions with sectional variation norm bounded from above by and from below by , , . These bounds and will then imply bounds . In such a setting, would be defined as a sum loss function and . We also define the vector losses , , and corresponding vector dissimilarities and .

In a typical case we would have that the parameter space of () or () would be equal to

for some set of possible values for , , , where one evaluates this restriction on in terms of the representation (Equation 3). Note that we used short-hand notation for being zero for . We will make the convention that if excludes , then it corresponds with assuming .

This subset of all cadlag functions with sectional variation norm smaller than further restricts the support of these functions to a set . For example, might set for subsets of size larger than for all values , in which case one assumes that the nuisance parameter can be represented as a sum over all subsets of size and of a function of the variables indicated by .

In order to allow modeling of monotonicity (e.g., a nuisance parameter that is an actual cumulative distribution function), we also allow that this set restricts for all . We will denote the latter parameter space with

For the parameter space (Equation 5) of monotone functions we allow that the sectional variation norm is known by setting (e.g., for the class of cumulative distribution functions we would have ), while for the parameter space (Equation 4) of cadlag functions with sectional variation norm between and we assume .

Although not necessary at all, for the analysis of our proposed nonparametric bootstrap sampling distributions *we assume this extra structure* that or for some set , , . This extra structure allows us to obtain concrete results for the validity of the nonparametric bootstrap for the HAL-MLEs and defined below, and thereby the HAL-TMLE (see Appendix Section 9). In addition, the implementation of the HAL-MLE for such a parameter space still corresponds with fitting a linear combination of indicator basis functions under the sole constraint that the sum of the absolute value of the coefficients is bounded by (and possibly from below by ), and possibly that the coefficients are non-negative, where the set implies the set of indicator basis functions that are included. Specifically, in the case that the nuisance parameter is a conditional mean we can compute the HAL-MLE with standard lasso regression software [2]. Therefore, this restriction on our set of models allows straightforward computation of its HAL-MLEs and corresponding HAL-TMLE.

Thus, a typical statistical model would be of the form for sets , but the model might include additional restrictions on beyond restricting the variation independent components of and to be elements of these sets , as long as their parameter spaces equal these sets or .

**Remark regarding creating nuisance parameters with parameter space of type (Equation 4) or (Equation 5):** In our first example we have a nuisance parameter that is not just assumed to be cadlag and have bounded sectional variation norm but is also bounded between $\delta$ and $1 - \delta$ for some $\delta > 0$. This means that the parameter space for this nuisance parameter is not exactly of type (Equation 4). This is easily resolved by reparameterizing it in terms of a function that can be any cadlag function with sectional variation norm bounded by some constant, and defining the nuisance parameter as this unrestricted function instead. Similarly, in our second example, the nuisance parameter is the data density itself, which is assumed to be bounded from below and from above, beyond being cadlag and having a bound on the sectional variation norm. In this case, we could parameterize the density in terms of an unrestricted cadlag function together with a normalizing constant guaranteeing that the density integrates to one, and define the nuisance parameter as this unrestricted function instead of the density itself. These just represent a few examples showcasing that one can reparametrize the natural nuisance parameters $Q$ and $G$ in terms of nuisance parameters that have a parameter space of the form (Equation 4) or (Equation 5). These representations are actually natural steps for the implementation of the HAL-MLE since they allow us to minimize the empirical risk over a linear model with the sole constraint that the sum of the absolute values of the coefficients is bounded (and possibly that coefficients are non-negative).

**Bounding the exact second-order remainder in terms of loss-based dissimilarities:** Let

for some mapping possibly indexed by . We often have that is a sum of second-order terms of the types , and for certain specifications of and . Specifically, in all our applications it has the form for some quadratic function . If it only involves terms of the third type, then has a double robust structure allowing the construction of double robust estimators whose consistency relies on consistent estimation of either or . In particular, in that case the HAL-TMLE is double robust as well.

We assume the following upper bound:

for some function $f: \mathbb{R}^2_{\geq 0} \to \mathbb{R}_{\geq 0}$ of the form of a quadratic polynomial with positive coefficients. In all our examples, one simply uses the Cauchy-Schwarz inequality to bound the exact remainder in terms of $L^2(P_0)$-norms of the differences between the nuisance parameters and their true counterparts, and subsequently one relates these $L^2(P_0)$-norms to the loss-based dissimilarities $d_{01}$ and $d_{02}$, respectively. This bounding step will also rely on a positivity assumption so that denominators are uniformly bounded away from zero.
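For concreteness, in the average treatment effect example of Section 5 (with outcome regression $\bar{Q}$ and treatment mechanism $g$ bounded below by $\delta > 0$) the exact remainder and its Cauchy-Schwarz bound take the familiar form (a sketch; signs depend on conventions):

$$R_2(P, P_0) = \sum_{a \in \{0, 1\}} (2a - 1) \int \frac{(g_0 - g)(a \mid w)}{g(a \mid w)} \, (\bar{Q}_0 - \bar{Q})(a, w) \, dP_0(w),$$

so that, by Cauchy-Schwarz and $g \geq \delta$,

$$|R_2(P, P_0)| \leq \frac{1}{\delta} \sum_{a \in \{0, 1\}} \|(g - g_0)(a \mid \cdot)\|_{P_0} \, \|(\bar{Q} - \bar{Q}_0)(a, \cdot)\|_{P_0}.$$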

**Continuity of efficient influence curve as function of :** We also assume a basic uniform continuity condition on the efficient influence curve:

The above two uniform bounds (Equation 6) and (Equation 7) on the model will generally hold under a strong positivity assumption that guarantees that there are no nuisance parameters (e.g., a parameter of ) in the denominator of and that can be arbitrarily close to 0 on the support of .

### 2.2 HAL-MLEs of nuisance parameters

We estimate $Q_0$ and $G_0$ with HAL-MLEs $Q_n$ and $G_n$ satisfying

Due to the sum-loss structure and the variation independence of the components of $Q$ and $G$, these HAL-MLEs correspond with separate HAL-MLEs for each component. We have the following previously established result [27] for these HAL-MLEs. We represent estimators as mappings on the nonparametric model containing all possible realizations of the empirical measure $P_n$.

Application of this general lemma proves that $d_{01}(Q_n, Q_0) = o_P(n^{-1/2})$ and $d_{02}(G_n, G_0) = o_P(n^{-1/2})$. It also shows that we have the following actual empirical process upper-bounds:

where we defined and . These upper bounds will be utilized in our proposed conservative sampling distributions of the HAL-TMLE in Appendix Section 11.

**Super learner including HAL-MLE outperforms HAL-MLE:** Suppose that we estimate $Q_0$ and $G_0$ instead with super-learners whose libraries contain these HAL-MLEs. Then, by the oracle inequality for the super-learner, we know that the resulting estimators will be asymptotically equivalent with the oracle selected estimator, so that the rates established for the HAL-MLEs represent asymptotic upper bounds for the rates of the super-learners [27]. In addition, practical experience has demonstrated that the super-learner outperforms its library candidates in finite samples. Therefore, assuming that each estimator in the library of the super-learners for $Q_0$ and $G_0$ falls in the parameter spaces of $Q_0$ and $G_0$, respectively, our proposed estimators of the sampling distribution of the HAL-TMLE can also be used to construct a confidence interval around the super-learner based TMLE. The widths of these confidence intervals do not adapt to possible superior performance of the super-learner and could thus be overly conservative in case the super-learner outperforms the HAL-MLE.

### 2.3 HAL-TMLE

Consider a finite dimensional local least favorable model through the HAL-MLE at $\epsilon = 0$ so that the linear span of the components of its score at $\epsilon = 0$ includes the efficient influence curve, and let the one-step TMLE update be defined by minimizing the empirical risk over this submodel. We assume that this one-step TMLE already satisfies

As shown in [27] this holds for the one-step HAL-TMLE under regularity conditions. Alternatively, one could use the one-dimensional canonical universal least favorable model (see our second example in Section 5). In that case, the efficient influence curve equation (Equation 8) is solved exactly with the one-step TMLE, i.e., $P_n D^*(Q_n^*, G_n) = 0$ [30]. The HAL-TMLE of the target parameter is now the plug-in estimator $\Psi(Q_n^*)$. Sometimes, we will refer to this estimator as the HAL-TMLE to indicate its dependence on the specification of the sectional variation norm bound.
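To fix ideas, here is a minimal sketch of a one-step TMLE for the average treatment effect with a known randomization probability $g$: a logistic fluctuation of the initial outcome regressions solved by Newton's method. The constant initial regression used in the test is a deliberately crude hypothetical stand-in for the HAL-MLE.

```python
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def logit(p):
    return np.log(p / (1.0 - p))

def tmle_ate(Y, A, Q1, Q0, g, steps=50):
    """One-step TMLE for the ATE: logistic fluctuation of the initial
    outcome regressions (Q1, Q0) along the clever covariate
    H = A/g - (1-A)/(1-g), solving the EIC equation in epsilon."""
    H = A / g - (1 - A) / (1 - g)
    eps = 0.0
    for _ in range(steps):
        Q1e = expit(logit(Q1) + eps / g)         # fluctuated Q(1, W)
        Q0e = expit(logit(Q0) - eps / (1 - g))   # fluctuated Q(0, W)
        QAe = np.where(A == 1, Q1e, Q0e)
        score = np.mean(H * (Y - QAe))           # empirical EIC score
        if abs(score) < 1e-12:
            break
        deriv = -np.mean(H ** 2 * QAe * (1 - QAe))
        eps -= score / deriv                     # Newton step
    psi = np.mean(Q1e - Q0e)                     # substitution estimator
    eic = H * (Y - QAe) + (Q1e - Q0e) - psi      # estimated EIC values
    return psi, eic
```

At convergence the empirical mean of the estimated efficient influence curve is (numerically) zero, which is the defining property of the TMLE update.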

In the Appendix Section 8 we show that, under a smoothness condition on the least favorable submodel (as a function of $\epsilon$), the one-step TMLE converges at the same rate as the initial HAL-MLE (see ( ?)). This also implies this result for any $k$-th step TMLE with $k$ fixed. The advantage of a one-step or $k$-th step TMLE is that it is always well defined, and it easily follows that it converges at the same rate as the initial HAL-MLE. Even though we derive some more explicit results for the one-step TMLE (and thereby the $k$-th step TMLE), our results are presented so that they can be applied to any TMLE, including the iterative TMLE, but we then simply assume that it has been shown that its loss-based dissimilarity converges to zero at the same rate as that of the initial HAL-MLE.

It is assumed that for any in its parameter space for some so that the least favorable model preserves the bound on the sectional variation norm. Since the HAL-MLE has the maximal allowed uniform sectional variation norm , it is likely that has a slightly larger variation norm than this bound.

### 2.4 Asymptotic efficiency theorem for HAL-TMLE and CV-HAL-TMLE

The bound (Equation 6), combined with the rate results for the HAL-MLEs implied by Lemma ?, now shows that the second-order term is $o_P(n^{-1/2})$.

We have the following identity for the HAL-TMLE:

The second term on the right-hand side is $o_P(n^{-1/2})$ by empirical process theory and the continuity condition (Equation 7) on the efficient influence curve. Thus, this proves the following asymptotic efficiency theorem.

**Wald type confidence interval:** A first order asymptotic 0.95-confidence interval is given by $\psi_n^* \pm 1.96 \, \sigma_n / n^{1/2}$, where $\sigma_n^2$ is a consistent estimator of $\sigma_0^2 = P_0 \{D^*(Q_0, G_0)\}^2$. Clearly, this first order confidence interval ignores the exact remainder in the exact expansion as presented in (Equation 9):
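The Wald interval is a one-liner once the estimated efficient influence curve has been evaluated at the observations; a minimal sketch:

```python
import numpy as np

def wald_ci(psi_hat, eic, z=1.96):
    """First order 0.95-interval: psi_hat +/- z * sigma_n / sqrt(n), with
    sigma_n^2 the empirical variance of the estimated influence curve
    evaluated at the n observations."""
    n = len(eic)
    se = np.sqrt(np.var(eic, ddof=1) / n)
    return psi_hat - z * se, psi_hat + z * se
```

Its width shrinks at rate $n^{-1/2}$ but, as stressed above, it accounts for none of the finite sample contribution of the exact remainder.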

The asymptotic efficiency proof above of the HAL-TMLE relies on the HAL-MLEs converging to the truth at a rate faster than $n^{-1/4}$, and on their sectional variation norm being uniformly bounded from above. Both of these conditions are still known to hold for the CV-HAL-MLE in which the constants are selected with the cross-validation selector [27]. This follows since the cross-validation selector is asymptotically equivalent with the oracle selector, thereby guaranteeing that the selected bound will exceed the sectional variation norm of the true nuisance parameters with probability tending to 1. Therefore, we have that this CV-HAL-TMLE is also asymptotically efficient. Of course, the CV-HAL-TMLE is more practical and powerful than the HAL-TMLE at an a priori specified bound, since it adapts the choice of bounds to the true sectional variation norms.

In general, when the model is defined by global constraints, then one should use cross-validation to select these constraints, which will only improve the performance of the initial estimators and corresponding TMLE, due to its asymptotic equivalence with the oracle selector. So our model might have more global constraints beyond and these could then also be selected with cross-validation resulting in a CV-HAL-MLE and corresponding HAL-TMLE (see also our two examples).

### 2.5 Motivation for finite sample inference

In order to understand how large the exact remainder could be relative to the leading first order term, we need to understand the size of the loss-based dissimilarities of the HAL-MLEs. This will then motivate us to propose methods that estimate the finite sample distribution of the HAL-TMLE or conservative versions thereof.

To establish this behavior we will use the following general integration by parts formula, and a resulting bound.

**Proof:** The representation of $f$ is presented in [10]. Using this representation yields the presented integration by parts formula as follows:

For a and we define

By Lemma ?, we have

By [39], we can bound the expectation of the supremum norm of an empirical process over a class of functions with uniformly bounded envelope by the entropy integral:

The covering number for the class of indicators behaves as . This proves that , and thus

In particular, this shows that the exact second-order remainder (Equation 10) can be bounded in expectation as follows:

Even though these bounds are overly conservative, they provide a clear indication of how the size of the loss-based dissimilarities, and thereby the second-order remainder, is potentially affected by the dimension $d$ (i.e., for nonparametric models) and the allowed complexity of the model as measured by the sectional variation norm bounds.

One can thus conclude that there are many settings in which the exact second-order remainder will dominate the leading linear term in finite samples. Therefore, for the sake of accurate inference we will need methods that estimate the actual finite sample sampling distribution of the HAL-TMLE.

### 2.6 A very conservative finite sample confidence interval

Consider the case that . Let and be the deterministic upper bound on the sectional variation norms of and . Let . The integration by parts bound applied to (Equation 9) yields the following bound:

Let . Then, we obtain the following bound:

Let be the -quantile of . A conservative finite sample -confidence interval is then given by:

where . One could estimate the distribution of this quantity with the nonparametric bootstrap and thereby obtain a bootstrap estimate of the quantile. One could push the conservative nature of this confidence interval further by using theoretical bounds for the tail-probability and define the quantile in terms of this theoretical upper bound (such exponential bounds are available in, e.g., [38], but the constants in these exponential bounds appear to not be concretely specified).
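The construction can be sketched generically: bootstrap the quantile of the centered linear term, then widen the interval by the remainder bound. Here `r_n` is a user-supplied bound on the absolute exact remainder (e.g., the sectional variation norm based bound above); the function name is hypothetical:

```python
import numpy as np

def conservative_ci(psi_hat, eic, r_n, B=2000, level=0.95, seed=0):
    """Conservative interval: bootstrap the level-quantile q of the
    absolute centered empirical mean of the estimated influence curve,
    then add a bound r_n on the absolute exact remainder."""
    rng = np.random.default_rng(seed)
    n = len(eic)
    means = np.array([eic[rng.integers(0, n, n)].mean() for _ in range(B)])
    q = np.quantile(np.abs(means - eic.mean()), level)
    return psi_hat - q - r_n, psi_hat + q + r_n
```

Since `r_n` shrinks faster than the bootstrap quantile `q`, the interval remains of width $O(n^{-1/2})$ while being conservative in finite samples.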

The bound (Equation 11) simplifies if we focus on the sampling distribution of the one-step estimator by being able to replace the targeted version by . For the one-step estimator we have

Let and be the upper bounds on the sectional variation norms of and . Analogously to the above, we obtain

Recall . Then, we obtain the following conservative sampling distribution: