
# Correcting an estimator of a multivariate monotone function with isotonic regression

Ted Westling
Center for Causal Inference
University of Pennsylvania
Mark J. van der Laan
Department of Biostatistics
University of California, Berkeley
Marco Carone
Department of Biostatistics
University of Washington
###### Abstract

In many problems, a sensible estimator of a possibly multivariate monotone function may itself fail to be monotone. We study the correction of such an estimator obtained via projection onto the space of functions monotone over a finite grid in the domain. We demonstrate that this corrected estimator has no worse supremal estimation error than the initial estimator, and that analogously corrected confidence bands contain the true function whenever initial bands do, at no loss to average or maximal band width. Additionally, we demonstrate that the corrected estimator is uniformly asymptotically equivalent to the initial estimator provided that the initial estimator satisfies a uniform stochastic equicontinuity condition and that the true function is Lipschitz and strictly monotone. We provide simple sufficient conditions for uniform stochastic equicontinuity in the important special case that the initial estimator is uniformly asymptotically linear. We scrutinize the use of these results for estimation of a G-computed distribution function, a quantity often of interest in causal inference, and a conditional distribution function, and illustrate their implications through numerical results. Our experiments suggest that the projection step can yield practical improvements in performance for both the estimator and confidence band.

## 1 Introduction

### 1.1 Background

In many scientific problems, the parameter of interest is a component-wise monotone function. In practice, an estimator of this function may have several desirable statistical properties, yet fail to be monotone. This often occurs when this estimator is obtained through the pointwise application of a statistical procedure over the domain of the function. For instance, we may be interested in estimating a conditional cumulative distribution function θ_0, defined pointwise as θ_0(y, a) := P_0(Y ≤ y | A = a), over its domain. Here, Y may represent an outcome and A an exposure. The map y ↦ θ_0(y, a) is necessarily monotone for each fixed a. In some scientific contexts, it may be known that a ↦ θ_0(y, a) is also monotone for each y, in which case θ_0 is a bivariate component-wise monotone function. An estimator of θ_0 can be constructed by estimating the conditional distribution function for each exposure value on a finite grid and performing suitable interpolation elsewhere. If estimation is performed separately for each exposure value, the resulting estimator may fail to be monotone as a function of a for certain values of y.

Whenever the function of interest is component-wise monotone, failure of an estimator to itself be monotone can be problematic. This is most apparent if the monotonicity constraint is probabilistic in nature – that is, the parameter mapping P ↦ θ_P is monotone under all possible probability distributions. This is the case, for instance, if θ_P is a distribution function. In such settings, returning a function estimate that fails to be monotone is nonsensical, somewhat like reporting a probability estimate outside the interval [0, 1]. However, even if the monotonicity constraint is based on scientific knowledge rather than probabilistic constraints, failure of an estimator to be monotone can be an issue. For example, if the parameter of interest represents average height or weight among children as a function of age, scientific collaborators would likely be unsatisfied if presented with an estimated curve that is not monotone. Finally, as we will see, there are often finite-sample performance benefits to ensuring that the monotonicity constraint is respected.

Whenever this phenomenon occurs, it is natural to seek an estimator that respects the monotonicity constraint but nevertheless remains close to the initial estimator, which may otherwise have good statistical properties. A monotone estimator can be naturally constructed by projecting the initial estimator onto the space of monotone functions with respect to some norm. A common choice is the L2-norm, which amounts to using multivariate isotonic regression to correct the initial estimator.
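In the univariate case, this L2 projection onto monotone vectors is computed by the pool-adjacent-violators algorithm. A minimal sketch in plain NumPy (function names are our own) illustrating the correction of a toy non-monotone estimate:

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: the L2 projection of y onto non-decreasing sequences."""
    y = np.asarray(y, dtype=float)
    # maintain blocks as (sum, count); merge while the last two blocks violate monotonicity
    sums, counts = [], []
    for v in y:
        s, c = v, 1
        while sums and sums[-1] / counts[-1] > s / c:
            s += sums.pop()
            c += counts.pop()
        sums.append(s)
        counts.append(c)
    return np.concatenate([np.full(c, s / c) for s, c in zip(sums, counts)])

# A non-monotone initial estimate of a distribution function, corrected by projection:
theta_n = np.array([0.1, 0.3, 0.25, 0.5, 0.45, 0.8])
theta_star = pava(theta_n)
```

Note that the projection replaces each run of violations by its average, so the corrected values are non-decreasing while the overall mean of the grid values is preserved.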

### 1.2 Contribution and organization of the article

In this article, we discuss correcting an initial estimator of a multivariate monotone function by computing the isotonic regression of the estimator over a finite grid in the domain, and interpolating between grid points. We also consider correcting an initial confidence band by using the same procedure applied to the upper and lower limits of the band. We provide three results regarding this simple procedure.

1. Building on the results of Robertson et al. (1988) and Chernozhukov et al. (2009), we demonstrate that the corrected estimator is at least as good as the initial estimator, meaning:

1. its uniform error over the grid used in defining the projection is less than or equal to that of the initial estimator for every sample;

2. its uniform error over the entire domain is less than or equal to that of the initial estimator asymptotically;

3. the corrected confidence band contains the true function on the projection grid whenever the initial band does, at no cost in terms of average or uniform band width.

2. We provide high-level sufficient conditions under which the uniform difference between the initial and corrected estimators is o_P(r_n^{-1}) for a generic sequence r_n → ∞.

3. We provide simpler lower-level sufficient conditions applicable at the rate r_n = n^{1/2} for use whenever the initial estimator is uniformly asymptotically linear.

We apply our theoretical results to two sets of examples: nonparametric efficient estimation of a G-computed distribution function for a binary exposure, and kernel-based estimation of a conditional distribution function with a continuous exposure.

Other authors have considered the correction of an initial estimator using isotonic regression. To name a few, Mukerjee and Stern (1994) used a projection-like procedure applied to a kernel smoothing estimator of a regression function, whereas Patra and Sen (2016) used the projection procedure applied to a univariate cumulative distribution function in the context of a mixture model. These articles addressed the properties of the projection procedure in their specific applications. In contrast, we provide general results that are broadly applicable.

We also note that the projection approach is not the only possible correction procedure. Dette et al. (2006), Chernozhukov et al. (2009), and Chernozhukov et al. (2010) studied a correction based on monotone rearrangements and compared the two procedures. However, monotone rearrangements do not generalize to the multivariate setting as naturally as projections – for example, Chernozhukov et al. (2009) proposed averaging a variety of possible multivariate monotone rearrangements to obtain a final monotone estimator. Daouia and Park (2013) also proposed a correction procedure that consists of taking a convex combination of upper and lower monotone envelope functions. They demonstrated that their estimator is asymptotically equivalent in supremum norm to the projection-based correction, so application of their results requires studying the properties of the projection correction. In our view, the projection approach is preferable to strategies based either on rearrangements or enveloping because the projection is unique in both univariate and multivariate contexts. Additionally, projections are easy to understand, rely on familiar tools, and have minimal squared-error impact on the initial estimator among all candidate monotone corrections.

## 2 Main results

### 2.1 Definitions and statistical setup

Let M be a statistical model, that is, a collection of probability measures on a common measurable space. Let θ : M → D(T) be a parameter of interest, where T := [0, 1]^d and D(T) is the Banach space of bounded functions from T to ℝ equipped with the supremum norm ‖·‖_T. We have specified this particular T for simplicity, but the results established here apply to any bounded rectangular domain. For each P ∈ M, denote by θ_P := θ(P) the evaluation of θ at P, and note that θ_P is a bounded real-valued function on T. For any t ∈ T, denote by θ_P(t) the evaluation of θ_P at t.

For any vector t ∈ ℝ^d and j ∈ {1, …, d}, denote by t_j the j-th component of t. Define the partial order ≤ on ℝ^d by setting s ≤ t if and only if s_j ≤ t_j for each j. A function θ : T → ℝ is called (component-wise) monotone non-decreasing if s ≤ t implies that θ(s) ≤ θ(t). Denote by ‖t‖ the Euclidean norm of any vector t ∈ ℝ^d. Additionally, denote by Θ the convex set of bounded monotone non-decreasing functions from T to ℝ. For concreteness, we focus on non-decreasing functions, but all results established here apply equally to non-increasing functions.

Let Θ_M := {θ_P : P ∈ M} and suppose that Θ_M ∩ Θ is nonempty. Generally, the inclusion Θ_M ∩ Θ ⊆ Θ_M is strict only if, rather than being implied by the rules of probability, the monotonicity constraint stems at least in part from prior scientific knowledge. Also, define M_Θ := {P ∈ M : θ_P ∈ Θ}. We are primarily interested in settings where M_Θ = M, since in this case there is no additional knowledge about P_0 encoded by the monotonicity constraint, and in particular there is no danger of yielding a corrected estimator that is compatible with no P ∈ M.

Suppose that observations X_1, …, X_n are sampled independently from an unknown distribution P_0 ∈ M_Θ, and that we wish to estimate θ_0 := θ_{P_0} based on these observations. Suppose that, for each t ∈ T, we have access to an estimator θ_n(t) of θ_0(t) based on X_1, …, X_n. We note that the assumption that the data are independent and identically distributed is not necessary for Theorems 1 and 2 below.

The central premise of this article is that θ_n(t) may have desirable statistical properties for each t ∈ T or even uniformly in t, but that θ_n as an element of D(T) may not fall in Θ for any finite n, or even with probability tending to one. Our goal is to provide a corrected estimator that necessarily falls in Θ, and yet retains the statistical properties of θ_n. A natural way to accomplish this is to define the corrected estimator as the closest element of Θ to θ_n in some norm on D(T). Ideally, we would prefer to minimize ‖θ − θ_n‖_T over θ ∈ Θ. However, this is not tractable for two reasons. First, optimization over the entirety of T is an infinite-dimensional problem, and is hence frequently computationally intractable. To resolve this issue, for each n, we let T_n be a finite rectangular lattice in T over which we will perform the optimization, and consider the supremum norm over T_n. While it is now computationally feasible to define the corrected estimator as a minimizer over θ ∈ Θ of the finite-dimensional objective function max_{t∈T_n} |θ(t) − θ_n(t)|, this objective function is challenging due to its non-differentiability. Instead, we define

$$\theta_n^* \in \operatorname*{argmin}_{\theta \in \Theta} \sum_{t \in T_n} \left[\theta(t) - \theta_n(t)\right]^2. \tag{2.1}$$

The squared-error objective function is smooth in its arguments. In dimension d = 1, θ*_n thus defined is simply the isotonic regression of θ_n on the grid T_n, which has a closed-form representation in terms of the greatest convex minorant of the so-called cumulative sum diagram. Furthermore, as we show below, this squared-error projection also controls the supremum-norm criterion, so little is lost by this substitution.

We note that θ*_n is only uniquely defined on T_n. To completely characterize θ*_n as an element of D(T), we must monotonically interpolate function values between elements of T_n. We will permit any monotonic interpolation that satisfies a weak condition. By the definition of a rectangular lattice, every t ∈ T can be assigned a hyper-rectangle whose vertices s_1, …, s_{2^d} are elements of T_n and whose interior has empty intersection with T_n. If multiple such hyper-rectangles exist for t, such as when t lies on the boundary of two or more such hyper-rectangles, one can be assigned arbitrarily. We will assume that, for t ∈ T, θ*_n(t) = Σ_{j=1}^{2^d} λ_{j,n}(t) θ*_n(s_j) for weights λ_{j,n}(t) ≥ 0 such that Σ_j λ_{j,n}(t) = 1. In words, we assume that θ*_n(t) is a convex combination of the values of θ*_n on the vertices of the hyper-rectangle containing t. A simple interpolation approach consists of setting θ*_n(t) := θ*_n(s) with s the element of T_n closest to t, and choosing any such element if there are multiple elements of T_n equally close to t. This particular scheme satisfies our requirement.
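The convex-combination requirement can be illustrated concretely in dimension one, where it is satisfied by ordinary piecewise-linear interpolation between the two bracketing grid points. A sketch with hypothetical names:

```python
import numpy as np

def interpolate_monotone(grid, values, t):
    """Piecewise-linear interpolation of grid values at a point t (d = 1).

    The result is a convex combination of the values at the two bracketing
    grid points, so monotonicity of `values` over the grid is preserved.
    """
    grid = np.asarray(grid, dtype=float)
    values = np.asarray(values, dtype=float)
    j = np.searchsorted(grid, t)
    if j == 0:                      # at or below the left endpoint
        return values[0]
    if j == len(grid):              # above the right endpoint
        return values[-1]
    lam = (t - grid[j - 1]) / (grid[j] - grid[j - 1])   # weight in [0, 1]
    return (1 - lam) * values[j - 1] + lam * values[j]
```

The nearest-grid-point scheme described above is the special case in which the weight is placed entirely on the closest vertex.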

Finally, for each t ∈ T, we let l_n(t) and u_n(t) denote lower and upper endpoints of a confidence band for θ_0(t). We then define l*_n and u*_n as the corrected versions of l_n and u_n using the same projection and interpolation procedure defined above for obtaining θ*_n from θ_n.

### 2.2 Properties of the projected estimator

The projected estimator θ*_n is the isotonic regression of θ_n over the grid T_n. Hence, many existing finite-sample results on isotonic regression can be used to deduce properties of θ*_n. Theorem 1 below collects a few of these properties, building upon the results of Barlow et al. (1972) and Chernozhukov et al. (2009). We denote by ω_n := sup_{t∈T} min_{s∈T_n} ‖t − s‖ the mesh of T_n in T.

###### Theorem 1.
(i) It holds that max_{t∈T_n} |θ*_n(t) − θ_0(t)| ≤ max_{t∈T_n} |θ_n(t) − θ_0(t)|.

(ii) If ω_n = o_P(1) and θ_0 is continuous on T, then ‖θ*_n − θ_0‖_T ≤ ‖θ_n − θ_0‖_T + o_P(1).

(iii) If there exists some K_1 < ∞ for which |θ_0(t) − θ_0(s)| ≤ K_1 ‖t − s‖ for all s, t ∈ T, and r_n ω_n = o_P(1) as n → ∞, then ‖θ*_n − θ_0‖_T ≤ ‖θ_n − θ_0‖_T + o_P(r_n^{-1}).

(iv) Whenever l_n(t) ≤ θ_0(t) ≤ u_n(t) for all t ∈ T_n, it holds that l*_n(t) ≤ θ_0(t) ≤ u*_n(t) for all t ∈ T_n.

(v) It holds that Σ_{t∈T_n} [u*_n(t) − l*_n(t)] = Σ_{t∈T_n} [u_n(t) − l_n(t)] and max_{t∈T_n} [u*_n(t) − l*_n(t)] ≤ max_{t∈T_n} [u_n(t) − l_n(t)].

Before presenting the proof of Theorem 1, we remark briefly on its implications. Part (i) says that the estimation error of θ*_n over the grid T_n is never worse than that of θ_n, whereas parts (ii) and (iii) say that the estimation error of θ*_n on all of T is asymptotically no worse than the estimation error of θ_n in supremum norm. Similarly, part (iv) says that the isotonized band never has worse coverage than the original band over T_n. Finally, part (v) says that the potential increase in coverage comes at no cost to the average or maximal width of the bands over T_n. We note that parts (i), (iv) and (v) hold true for each n.

We now present the proof of Theorem 1.

###### Proof of Theorem 1.

Part (i) follows from Corollary B to Theorem 1.6.1 of Robertson et al. (1988). For parts (ii) and (iii), we note that by assumption

$$|\theta_n^*(t) - \theta_0(t)| \;\le\; \sum_k \lambda_{k,n}(t)\,|\theta_n^*(s_k) - \theta_0(s_k)| + \sum_k \lambda_{k,n}(t)\,|\theta_0(s_k) - \theta_0(t)|$$

for every t ∈ T, where s_1, …, s_{2^d} are the vertices of the hyper-rectangle containing t, λ_{k,n}(t) ≥ 0 for each k, and Σ_k λ_{k,n}(t) = 1. By part (i), the first term is bounded above by max_{s∈T_n} |θ_n(s) − θ_0(s)| ≤ ‖θ_n − θ_0‖_T. The second term is bounded above by m_0(ω_n), where we define m_0(δ) := sup_{‖t−s‖≤δ} |θ_0(t) − θ_0(s)|. If θ_0 is continuous on T, then it is also uniformly continuous since T is compact. Therefore, m_0(ω_n) = o_P(1) if ω_n = o_P(1), which yields part (ii). If θ_0 is Lipschitz with constant K_1 and r_n ω_n = o_P(1), then m_0(ω_n) ≤ K_1 ω_n = o_P(r_n^{-1}), which yields part (iii).

Part (iv) follows from the proof of Proposition 3 of Chernozhukov et al. (2009), which applies to any order-preserving monotonization procedure. For the first statement of (v), by their definition as minimizers of the least-squares criterion function, we note that Σ_{t∈T_n} u*_n(t) = Σ_{t∈T_n} u_n(t), and similarly for l*_n. The second statement of (v) follows from a slight modification of Theorem 1.6.1 of Robertson et al. (1988). As stated, the result says that Σ_{t∈T_n} Φ(f(t) − g*(t)) ≤ Σ_{t∈T_n} Φ(f(t) − g(t)) for any convex function Φ and monotone function f, where g* is the isotonic regression of g over T_n. A straightforward adaptation of the proof indicates that Σ_{t∈T_n} Φ(u*_n(t) − l*_n(t)) ≤ Σ_{t∈T_n} Φ(u_n(t) − l_n(t)), where now u*_n and l*_n are the isotonic regressions of u_n and l_n over T_n, respectively. As in Corollary B, taking Φ(x) = |x|^p and letting p → ∞ yields that max_{t∈T_n} [u*_n(t) − l*_n(t)] ≤ max_{t∈T_n} [u_n(t) − l_n(t)], establishing the second portion of (v). ∎
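Part (i) is easy to check numerically. The following sketch (our own toy setup, with a pool-adjacent-violators routine standing in for the projection) compares the sup-norm error over the grid of an initial estimate and its isotonic correction:

```python
import numpy as np

def pava(y):
    # pool-adjacent-violators: L2 projection onto non-decreasing sequences
    sums, counts = [], []
    for v in np.asarray(y, dtype=float):
        s, c = v, 1
        while sums and sums[-1] / counts[-1] > s / c:
            s += sums.pop()
            c += counts.pop()
        sums.append(s)
        counts.append(c)
    return np.concatenate([np.full(c, s / c) for s, c in zip(sums, counts)])

rng = np.random.default_rng(0)
grid = np.linspace(0, 1, 50)
theta0 = grid ** 2                                     # a true monotone function
theta_n = theta0 + rng.normal(0, 0.05, grid.size)      # noisy, possibly non-monotone estimate
theta_star = pava(theta_n)

err_init = np.max(np.abs(theta_n - theta0))
err_proj = np.max(np.abs(theta_star - theta0))
# Theorem 1(i): the projection never increases the sup-norm error over the grid
```

This holds for every realization, not merely on average, which is the sample-wise nature of part (i).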

While comprehensive in scope, Theorem 1 does not rule out the possibility that θ*_n performs strictly better, even asymptotically, than θ_n, or that the band [l*_n, u*_n] is asymptotically strictly more conservative than [l_n, u_n]. In order to construct confidence intervals or bands with correct asymptotic coverage, a stronger result is needed: it must be that ‖θ*_n − θ_n‖_T = o_P(r_n^{-1}), where r_n is a diverging sequence such that r_n(θ_n − θ_0) converges in distribution to a non-degenerate limit distribution. Then, we would have that r_n(θ*_n − θ_0) converges in distribution to this same limit, and hence confidence bands constructed using approximations of this limit distribution would have correct coverage when centered around θ*_n, as we discuss more below.

We consider the following conditions on θ_0 and the initial estimator θ_n:

(A)

there exists a deterministic sequence r_n tending to infinity such that, for all δ > 0,

$$\sup_{\|t - s\| < \delta / r_n} \left| r_n[\theta_n(t) - \theta_0(t)] - r_n[\theta_n(s) - \theta_0(s)] \right| \;\overset{P_0}{\longrightarrow}\; 0;$$

(B)

there exist constants 0 < K_0 ≤ K_1 < ∞ such that K_0 ‖t − s‖ ≤ θ_0(t) − θ_0(s) ≤ K_1 ‖t − s‖ for all s, t ∈ T with s ≤ t.

Condition (A) is related to, but slightly weaker than, uniform stochastic equicontinuity of the process r_n(θ_n − θ_0) (van der Vaart and Wellner, 1996, p. 37). In particular, (A) follows if this process converges weakly to a tight limit in the space D(T). However, the latter condition is sufficient but not necessary for (A) to hold. This is important for application of our results to kernel smoothing estimators, which typically do not converge weakly to a tight limit, but for which condition (A) nevertheless often holds. We discuss this at length in Section 3.2. Condition (B) constrains the variation of θ_0 from both above and below, and is slightly more restrictive than a requirement of strict monotonicity. If, for instance, θ_0 is differentiable, then (B) is satisfied if all first-order partial derivatives of θ_0 are bounded away from zero and from infinity.
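For differentiable θ_0, the sufficiency of bounded partial derivatives can be sketched via the mean value theorem (a sketch under the stated assumptions, with c and C denoting the lower and upper derivative bounds):

```latex
% Suppose 0 < c \le \partial_j \theta_0(u) \le C < \infty for all j and all u \in T.
% For s \le t, the mean value theorem gives, for some u on the segment joining s and t,
\theta_0(t) - \theta_0(s) = \nabla \theta_0(u)^{\top}(t - s)
    = \sum_{j=1}^{d} \partial_j \theta_0(u)\,(t_j - s_j).
% Since t_j - s_j \ge 0 for each j, bounding each partial derivative yields
c\,\|t - s\|_1 \;\le\; \theta_0(t) - \theta_0(s) \;\le\; C\,\|t - s\|_1 ,
% and since \|v\| \le \|v\|_1 \le \sqrt{d}\,\|v\| for any v \in \mathbb{R}^d,
% condition (B) holds with K_0 = c and K_1 = C\sqrt{d}.
```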

Based on these conditions, we have the following result.

###### Theorem 2.

If conditions (A)–(B) hold and ω_n = o_P(r_n^{-1}), then ‖θ*_n − θ_n‖_T = o_P(r_n^{-1}).

We prove Theorem 2 below. This result indicates that the projected estimator is uniformly asymptotically equivalent to the original estimator in supremum norm at the rate r_n. In addition to conditions (A)–(B), Theorem 2 requires that the mesh ω_n of T_n tend to zero in probability faster than r_n^{-1}. Since T_n is chosen by the user, this is not a problem in practice.

The left-hand side of the inequality in condition (B) excludes, for instance, situations in which θ_0 is differentiable with null derivative over an interval. In such cases, θ*_n may have strictly smaller variance on these intervals than θ_n because θ*_n will pool estimates across the flat region while θ_n may not. Hence, in such cases, θ*_n may potentially asymptotically improve on θ_n, so that θ*_n and θ_n are not asymptotically equivalent at the rate r_n.

We prove Theorem 2 via three lemmas, which we first establish. The first lemma controls the size of deviations of θ_n over small neighborhoods.

###### Lemma 1.

If (A)–(B) hold and ω_n = o_P(r_n^{-1}), then sup_{‖t−s‖≤2ω_n} |θ_n(t) − θ_n(s)| = o_P(r_n^{-1}).

###### Proof of Lemma 1.

In view of the triangle inequality, we note that |θ_n(t) − θ_n(s)| is bounded above by |[θ_n(t) − θ_0(t)] − [θ_n(s) − θ_0(s)]| + |θ_0(t) − θ_0(s)|. Uniformly over ‖t − s‖ ≤ 2ω_n, the first term is o_P(r_n^{-1}) by (A), whereas the second term is bounded by 2K_1 ω_n = o_P(r_n^{-1}) by (B).∎

The second lemma controls the size of neighborhoods over which violations in monotonicity can occur. Henceforth, we define κ_n := sup{‖t − s‖ : s, t ∈ T, s ≤ t, θ_n(s) > θ_n(t)}, the largest distance between ordered points at which θ_n violates monotonicity.

###### Lemma 2.

If conditions (A)–(B) hold, then κ_n = o_P(r_n^{-1}).

###### Proof of Lemma 2.

Let ε > 0 and η_n := ε/r_n. Suppose that κ_n > η_n. Then, there exist s, t ∈ T with s ≤ t and ‖t − s‖ > η_n such that θ_n(s) > θ_n(t). We claim that there must also exist s′, t′ ∈ T with s′ ≤ t′ and η_n/2 ≤ ‖t′ − s′‖ ≤ η_n such that θ_n(s′) > θ_n(t′). To see this, let m := ⌊2‖t − s‖/η_n⌋, and note that η_n/2 ≤ ‖t − s‖/m ≤ η_n. Define t_j := s + (j/m)(t − s) for j = 0, 1, …, m, and note that t_0 = s and t_m = t. Thus, t_{j−1} ≤ t_j and ‖t_j − t_{j−1}‖ = ‖t − s‖/m for each j. Since θ_n(t_0) > θ_n(t_m), it must be that θ_n(t_{j−1}) > θ_n(t_j) for at least one j. This proves the claim.

We now have that κ_n > η_n implies that there exist s, t ∈ T with s ≤ t and η_n/2 ≤ ‖t − s‖ ≤ η_n such that θ_n(s) > θ_n(t). This further implies that

$$\{\theta_n(t) - \theta_0(t)\} - \{\theta_n(s) - \theta_0(s)\} \;\le\; -\{\theta_0(t) - \theta_0(s)\} \;\le\; -K_0\|t - s\| \;\le\; -K_0\eta_n/2$$

by condition (B). Finally, this allows us to write

$$P_0\left(\kappa_n > \epsilon/r_n\right) \;\le\; P_0\left(\sup_{\|t - s\| \le \epsilon/r_n} \left| r_n[\theta_n(t) - \theta_0(t)] - r_n[\theta_n(s) - \theta_0(s)] \right| \ge K_0\epsilon/2\right).$$

By condition (A), this probability tends to zero for every ε > 0, which completes the proof.∎

Our final lemma bounds the maximal absolute deviation between θ*_n and θ_n over the grid T_n in terms of the supremal deviations of θ_n over neighborhoods of radius no larger than κ_n.

###### Lemma 3.

The inequality max_{t∈T_n} |θ*_n(t) − θ_n(t)| ≤ sup{|θ_n(s) − θ_n(t)| : ‖t − s‖ ≤ κ_n} holds.

###### Proof of Lemma 3.

By Theorem 1.4.4 of Robertson et al. (1988), for any t ∈ T_n,

$$\theta_n^*(t) = \max_{U \in \mathcal{U}_t} \min_{L \in \mathcal{L}_t} \bar\theta_n(U \cap L) = \min_{L \in \mathcal{L}_t} \max_{U \in \mathcal{U}_t} \bar\theta_n(U \cap L),$$

where, for any finite set S ⊆ T_n, θ̄_n(S) is defined as |S|^{-1} Σ_{s∈S} θ_n(s). The sets U range over the collection 𝒰_t of upper sets of T_n containing t, where U ⊆ T_n is called an upper set if s ∈ U, s′ ∈ T_n, and s ≤ s′ implies s′ ∈ U. The sets L range over the collection ℒ_t of lower sets of T_n containing t, where L ⊆ T_n is called a lower set if s ∈ L, s′ ∈ T_n, and s′ ≤ s implies s′ ∈ L.

Let t ∈ T_n and U ∈ 𝒰_t. First, suppose there exists L ∈ ℒ_t such that some s ∈ U ∩ L satisfies ‖s − t‖ > κ_n. Then, we claim that there exists another lower set L′ ∈ ℒ_t such that θ̄_n(U ∩ L′) ≤ θ̄_n(U ∩ L) and ‖s − t‖ ≤ κ_n for every s ∈ U ∩ L′. If no element of U ∩ L lies farther than κ_n from t, then L′ = L satisfies the claim. Otherwise, let L′ := L ∖ {s ∈ U ∩ L : ‖s − t‖ > κ_n}. One can verify that L′ ∈ ℒ_t, and since at least one element is removed, U ∩ L′ is a strict subset of U ∩ L. Furthermore, by definition of κ_n, θ_n(s) ≥ θ_n(t) for all s ≥ t such that ‖s − t‖ > κ_n, and since such values are at least as large as θ_n(t), removing these elements from U ∩ L can only reduce the average, so that θ̄_n(U ∩ L′) ≤ θ̄_n(U ∩ L). This establishes the claim. By an analogous argument, we can show that if there exists U ∈ 𝒰_t and L ∈ ℒ_t with some s ∈ U ∩ L satisfying ‖s − t‖ > κ_n, then there exists another upper set U′ ∈ 𝒰_t such that θ̄_n(U′ ∩ L) ≥ θ̄_n(U ∩ L) and ‖s − t‖ ≤ κ_n for every s ∈ U′ ∩ L.

Let U_t := {s ∈ T_n : t ≤ s} and L_t := {s ∈ T_n : s ≤ t} denote the principal upper and lower sets at t. Then

$$\theta_n^*(t) = \max_{U \in \mathcal{U}_t} \min_{L \in \mathcal{L}_t} \bar\theta_n(U \cap L) \;\ge\; \min_{L \in \mathcal{L}_t} \bar\theta_n(U_t \cap L) = \bar\theta_n(U_t \cap L^*),$$
$$\theta_n^*(t) = \min_{L \in \mathcal{L}_t} \max_{U \in \mathcal{U}_t} \bar\theta_n(U \cap L) \;\le\; \max_{U \in \mathcal{U}_t} \bar\theta_n(U \cap L_t) = \bar\theta_n(U^* \cap L_t),$$

where L^* and U^* denote corresponding minimizing and maximizing sets. Hence, θ̄_n(U_t ∩ L^*) ≤ θ*_n(t) ≤ θ̄_n(U^* ∩ L_t). By the above argument, L^* and U^* may be taken so that every s ∈ U_t ∩ L^* and every s ∈ U^* ∩ L_t satisfies ‖s − t‖ ≤ κ_n. Therefore,

$$\inf\{\theta_n(s) - \theta_n(t) : \|t - s\| \le \kappa_n\} \;\le\; \theta_n^*(t) - \theta_n(t) \;\le\; \sup\{\theta_n(s) - \theta_n(t) : \|t - s\| \le \kappa_n\},$$

and thus |θ*_n(t) − θ_n(t)| ≤ sup{|θ_n(s) − θ_n(t)| : ‖t − s‖ ≤ κ_n}. Taking the maximum over t ∈ T_n yields the claim.∎
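The min-max representation used above can be checked directly in dimension one, where the upper and lower sets containing a point are exactly the suffixes and prefixes of the grid. A brute-force sketch (cubic-time, for illustration only):

```python
import numpy as np

def isotonic_minmax(y):
    """Min-max formula for isotonic regression in d = 1:
    theta*(i) = max_{s <= i} min_{t >= i} mean(y[s..t])."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    out = np.empty(n)
    for i in range(n):
        out[i] = max(
            min(y[s:t + 1].mean() for t in range(i, n))
            for s in range(i + 1)
        )
    return out
```

On the toy example used earlier, this reproduces the pool-adjacent-violators solution exactly, illustrating that the two characterizations agree.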

The proof of Theorem 2 follows easily from Lemmas 1, 2, and 3.

###### Proof of Theorem 2.

By construction, for each t ∈ T, we can write

$$|\theta_n^*(t) - \theta_n(t)| \;\le\; \sum_{j=1}^{2^d} \lambda_{j,n}(t)\,|\theta_n^*(s_j) - \theta_n(s_j)| + \sum_{j=1}^{2^d} \lambda_{j,n}(t)\,|\theta_n(s_j) - \theta_n(t)|,$$

where s_1, …, s_{2^d} ∈ T_n are the vertices of the hyper-rectangle containing t and ‖s_j − t‖ ≤ 2ω_n for all j by definition. Thus, since Σ_j λ_{j,n}(t) = 1,

$$\sup_{t \in T} |\theta_n^*(t) - \theta_n(t)| \;\le\; \max_{t \in T_n} |\theta_n^*(t) - \theta_n(t)| + \sup_{\|s - t\| \le 2\omega_n} |\theta_n(s) - \theta_n(t)|.$$

By Lemma 3, the first summand is bounded above by sup{|θ_n(s) − θ_n(t)| : ‖t − s‖ ≤ κ_n}, which is o_P(r_n^{-1}) by Lemmas 1 and 2. The second summand is o_P(r_n^{-1}) by Lemma 1. ∎

### 2.3 Construction of confidence bands

Suppose there exist a fixed function σ_0 : T → (0, ∞) and a constant c > 0 such that l_n and u_n satisfy:

(a)

P_0(l_n(t) ≤ θ_0(t) ≤ u_n(t) for all t ∈ T) → 1 − α,

(b)

‖r_n(u_n − θ_n) − c σ_0‖_T = o_P(1),

(c)

‖r_n(θ_n − l_n) − c σ_0‖_T = o_P(1).

As an example of a confidence band that satisfies conditions (a)–(c), suppose that σ_0 is a scaling function and c_α is a fixed constant such that, as n tends to infinity,

$$P_0\left(r_n \left\| \frac{\theta_n - \theta_0}{\sigma_0} \right\|_T \le c_\alpha\right) \;\longrightarrow\; 1 - \alpha.$$

If σ_n is an estimator of σ_0 satisfying ‖σ_n − σ_0‖_T = o_P(1) and c_{n,α} is an estimator of c_α such that c_{n,α} − c_α = o_P(1), then the Wald-type band defined by lower and upper endpoints l_n := θ_n − c_{n,α} σ_n / r_n and u_n := θ_n + c_{n,α} σ_n / r_n satisfies (a)–(c) with c = c_α. However, the latter conditions can also be satisfied by other types of bands, such as those constructed with a consistent bootstrap procedure.
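A sketch of the Wald-type construction and its correction, with placeholder values standing in for the scale estimate and critical value (all names and numbers hypothetical):

```python
import numpy as np

def pava(y):
    # pool-adjacent-violators: L2 projection onto non-decreasing sequences
    sums, counts = [], []
    for v in np.asarray(y, dtype=float):
        s, c = v, 1
        while sums and sums[-1] / counts[-1] > s / c:
            s += sums.pop()
            c += counts.pop()
        sums.append(s)
        counts.append(c)
    return np.concatenate([np.full(c, s / c) for s, c in zip(sums, counts)])

rng = np.random.default_rng(1)
n = 200
grid = np.linspace(0, 1, 25)
theta0 = np.sqrt(grid)
theta_n = theta0 + rng.normal(0, 1, grid.size) / np.sqrt(n)   # toy initial estimate
sigma_n = np.ones_like(grid)    # placeholder scale estimate sigma_n
c_alpha = 2.8                   # placeholder critical value, e.g. from a bootstrap

# Wald-type band, then isotonize each endpoint separately
lower = theta_n - c_alpha * sigma_n / np.sqrt(n)
upper = theta_n + c_alpha * sigma_n / np.sqrt(n)
lower_star, upper_star = pava(lower), pava(upper)
```

Because isotonic regression is order-preserving and sum-preserving, the corrected endpoints remain correctly ordered, the average width is unchanged, and the maximal width does not increase, in line with Theorem 1(iv)–(v).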

Under conditions (a)–(c), the confidence band [l_n, u_n] has asymptotic coverage 1 − α. When conditions (A) and (B) also hold, the corrected band [l*_n, u*_n] has the same asymptotic coverage as the original band, as stated in the following result.

###### Corollary 1.

If conditions (A)–(B) and (a)–(c) hold, σ_0 is uniformly continuous on T, and ω_n = o_P(r_n^{-1}), then the confidence band [l*_n, u*_n] has asymptotic coverage 1 − α.

The proof of Corollary 1 is presented in Appendix A. We also note that Theorem 2 immediately implies that Wald-type confidence bands constructed around θ_n have the same asymptotic coverage if they are constructed around θ*_n instead.

### 2.4 Special case: asymptotically linear estimators

Suppose now that the initial estimator θ_n is uniformly asymptotically linear: for each t ∈ T, there exists a function φ_{0,t}, depending on P_0, such that P_0 φ_{0,t} = 0, P_0 φ_{0,t}^2 < ∞, and

$$\theta_n(t) = \theta_0(t) + \frac{1}{n}\sum_{i=1}^n \varphi_{0,t}(X_i) + R_{n,t} \tag{2.2}$$

for a remainder term R_{n,t} with sup_{t∈T} |R_{n,t}| = o_P(n^{-1/2}). The function φ_{0,t} is the influence function of θ_n(t) under sampling from P_0. It is desirable for θ_n to have representation (2.2) because this immediately implies its uniform weak consistency as well as the pointwise asymptotic normality of n^{1/2}[θ_n(t) − θ_0(t)] for each t. If in addition the collection of influence functions {φ_{0,t} : t ∈ T} forms a P_0-Donsker class, n^{1/2}(θ_n − θ_0) converges weakly in D(T) to a tight mean-zero Gaussian process with covariance function (s, t) ↦ P_0[φ_{0,s} φ_{0,t}]. Uniform asymptotic confidence bands based on θ_n can then be formed by using appropriate quantiles from any suitable approximation of the distribution of the supremum of the limiting Gaussian process.

(A1)

the collection of influence curves {φ_{0,t} : t ∈ T} is a P_0-Donsker class;

(A2)

t ↦ φ_{0,t} is uniformly continuous in the sense that sup_{‖t−s‖<δ} P_0[(φ_{0,t} − φ_{0,s})^2] → 0 as δ → 0.

Whenever θ_n is uniformly asymptotically linear, Theorem 2 can be shown to hold with r_n = n^{1/2} under (A1), (A2) and (B), as implied by the theorem below. The validity of (A1) and (A2) can be assessed by scrutinizing the influence function of θ_n(t) for each t. This fact renders the verification of these conditions very simple once uniform asymptotic linearity has been established.

###### Theorem 3.

For any estimator satisfying (2.2), conditions (A1) and (A2) together imply condition (A) with r_n = n^{1/2}.

The proof of Theorem 3 is provided in Appendix B.

## 3 Illustrative examples

### 3.1 Example 1: Estimation of a G-computed distribution function

We first demonstrate the use of Theorem 3 in the particular problem in which we wish to draw inference on a G-computed distribution function. Suppose that the data unit is the vector X := (W, A, Y), where Y is an outcome, A is an exposure, and W is a vector of baseline covariates. The observed data consist of independent draws X_1, …, X_n from P_0 ∈ M, where M is a nonparametric model.

For a given exposure level a_0 and P ∈ M, we define the parameter value pointwise as θ_{P,a_0}(t) := E_P[P(Y ≤ t | A = a_0, W)], the G-computed distribution function of Y evaluated at t, where the outer expectation is over the marginal distribution of W under P. We are interested in estimating θ_{0,a_0} := θ_{P_0,a_0}. This parameter is often of interest as an interpretable marginal summary of the relationship between A and Y accounting for the potential confounding induced by W. Under certain causal identification conditions, θ_{0,a_0} is the distribution function of the counterfactual outcome defined by the intervention that deterministically sets exposure to a_0 (Robins, 1986; Gill and Robins, 2001).

For each t, the parameter P ↦ θ_{P,a_0}(t) is pathwise differentiable in a nonparametric model, and its nonparametric efficient influence function at P is given by

$$\varphi_{P,a_0,t}(y, a, w) := \frac{I(a = a_0)}{g_P(a_0 \mid w)}\left[I(y \le t) - \bar{Q}_P(t \mid a_0, w)\right] + \bar{Q}_P(t \mid a_0, w) - \theta_{P,a_0}(t),$$

where g_P(a_0 | w) := P(A = a_0 | W = w) is the propensity score and Q̄_P(t | a_0, w) := P(Y ≤ t | A = a_0, W = w) is the conditional exposure-specific distribution function, as implied by results in van der Laan and Robins (2003). Given estimators g_n and Q̄_n of g_0 and Q̄_0, respectively, several approaches can be used to construct, for each t, an asymptotically linear estimator of θ_{0,a_0}(t) with influence function φ_{P_0,a_0,t}. For example, the use of either optimal estimating equations or the one-step correction procedure leads to the doubly-robust augmented inverse-probability-of-weighting estimator

$$\theta_{n,a_0}(t) := \frac{1}{n}\sum_{i=1}^n \frac{I(A_i = a_0)}{g_n(a_0 \mid W_i)}\left[I(Y_i \le t) - \bar{Q}_n(t \mid a_0, W_i)\right] + \frac{1}{n}\sum_{i=1}^n \bar{Q}_n(t \mid a_0, W_i)$$

as discussed in detail in van der Laan and Robins (2003). Under conditions on g_n and Q̄_n, including consistency at fast enough rates, θ_{n,a_0}(t) is asymptotically efficient in the nonparametric model. In this case, θ_{n,a_0} satisfies (2.2) with influence function φ_{P_0,a_0,t}. However, there is no guarantee that t ↦ θ_{n,a_0}(t) is monotone.
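A minimal sketch of this AIPW estimator on a grid of t values, assuming the propensity and conditional-CDF estimates are supplied as arrays (function and argument names are our own):

```python
import numpy as np

def aipw_cdf(Y, A, g_probs, Q_bar, a0, t_grid):
    """Augmented IPW estimate of the G-computed distribution function
    theta_{0,a0}(t) = E_0[ Q_bar_0(t | a0, W) ] over a grid of t values.

    g_probs : estimated propensities g_n(a0 | W_i), length n
    Q_bar   : estimated conditional CDF values Q_bar_n(t | a0, W_i),
              shape (n, len(t_grid))
    """
    Y = np.asarray(Y, dtype=float)
    ind_a = (np.asarray(A) == a0).astype(float)
    est = []
    for j, t in enumerate(t_grid):
        # augmentation term: inverse-probability-weighted residual
        aug = ind_a / np.asarray(g_probs) * ((Y <= t).astype(float) - Q_bar[:, j])
        est.append(np.mean(aug + Q_bar[:, j]))
    return np.array(est)
```

The resulting curve need not be non-decreasing in t; the projection of Section 2 can then be applied over the grid to obtain a valid distribution-function estimate.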

In the context of this example, we can identify simple sufficient conditions under which conditions (A)–(B), and hence the asymptotic equivalence of the initial and isotonized estimators of the G-computed distribution function, are guaranteed. Specifically, we find this to be the case when:

1. there exists some δ > 0 such that g_0(a_0 | W) ≥ δ almost surely under P_0; and

2. there exist non-negative real-valued functions K_1 and K_2 such that

$$K_1(w)\,|t - s| \;\le\; \left|\bar{Q}_0(t \mid a_0, w) - \bar{Q}_0(s \mid a_0, w)\right| \;\le\; K_2(w)\,|t - s|$$

for all s, t, and w, and such that, under P_0, K_1(W) is strictly positive with non-zero probability and K_2(W) has finite second moment.

We conducted a simulation study to validate our theoretical results in the context of this particular example. For each of several sample sizes n, we generated random datasets as follows. We first simulated a bivariate covariate W := (W_1, W_2) with independent components W_1 and W_2, respectively distributed as a Bernoulli variate with a fixed success probability and a uniform variate on a bounded interval. Given W, exposure A was simulated from a logistic regression model with