
lspartition: Partitioning-Based Least Squares Regression

We thank Sebastian Calonico, David Drukker, and Xinwei Ma for thoughtful comments and suggestions on our software implementation and related articles. Cattaneo gratefully acknowledges financial support from the National Science Foundation (SES-1459931).

Matias D. Cattaneo, Department of Operations Research and Financial Engineering, Princeton University.    Max H. Farrell, Booth School of Business, University of Chicago.    Yingjie Feng, Department of Politics, Princeton University.
July 1, 2019
Abstract

Nonparametric partitioning-based least squares regression is an important tool in empirical work. Common examples include regressions based on splines, wavelets, and piecewise polynomials. This article discusses the main methodological and numerical features of the R software package lspartition, which implements results for partitioning-based least squares (series) regression estimation and inference from Cattaneo and Farrell (2013) and Cattaneo, Farrell and Feng (2019). These results cover the multivariate regression function as well as its derivatives. First, the package provides data-driven methods to choose the number of partition knots optimally, according to integrated mean squared error, yielding optimal point estimation. Second, robust bias correction is implemented to combine this point estimator with valid inference. Third, the package provides estimates and inference for the unknown function both pointwise and uniformly in the conditioning variables. In particular, valid confidence bands are provided. Finally, an extension to two-sample analysis is developed, which can be used in treatment-control comparisons and related problems.


Keywords: series nonparametrics, partitioning least squares, tuning parameter selection, robust bias-corrected inference, confidence bands, splines, wavelets, piecewise polynomials, R.

1 Introduction

Nonparametric partitioning-based least squares regression estimation is an important method for estimating conditional expectation functions in statistics, economics, and other disciplines. These methods first partition the support of covariates and then construct a set of local basis functions on top of the partition to approximate the unknown regression function or its derivatives. Empirically popular basis functions include splines, compactly supported wavelets, and piecewise polynomials. For textbook reviews on classical and modern nonparametric regression methodology see, among others, Fan and Gijbels (1996), Györfi, Kohler, Krzyżak and Walk (2002), and Ruppert, Wand and Carroll (2009). For a review on partitioning-based approximations in nonparametrics and machine learning see Zhang and Singer (2010) and references therein.

This article gives a detailed discussion of the software package lspartition, available for R, which implements partitioning-based least squares regression estimation and inference. This package offers several features which improve on existing tools, leveraging the recent results of Cattaneo and Farrell (2013) and Cattaneo, Farrell and Feng (2019), and delivering data-driven methods to easily implement partitioning-based estimation and inference, including optimal tuning parameter choices and uniform inference results such as confidence bands. We cover splines, wavelets, and piecewise polynomials, in a unified way, encompassing prior methods and routines previously unavailable without manual coding by researchers.

The first contribution offered by lspartition is a data-driven choice of the number of partitioning knots that is optimal in an integrated mean squared error (IMSE) sense. A major hurdle to practical implementation of any nonparametric estimator is tuning parameter choice, and by offering several feasible IMSE-optimal methods for splines, compactly supported wavelets, and piecewise polynomials, lspartition provides practitioners with tools to overcome this important implementation issue.

However, tuning parameter choices that are optimal for point estimation yield invalid inference in general, and the IMSE-optimal choice is no exception. The second contribution of lspartition is the inclusion of robust bias correction methods, which allow for inference based on the optimal point estimator. lspartition implements the three methods studied by Cattaneo, Farrell and Feng (2019), which are based on novel bias expansions therein. Both the bias and variance quantities are kept in pre-asymptotic form, yielding better bias correction and standard errors robust to conditional heteroskedasticity of unknown form. This style of robust bias correction has been shown to yield improved inference in other nonparametric contexts (Calonico, Cattaneo and Farrell, 2018, 2019).

The third main contribution is valid inference, both pointwise and uniformly in the support of the conditioning variables. When robust bias correction is employed, this inference is valid for the IMSE-optimal point estimator, allowing the researcher to combine an optimal partition for point estimation with a “faithful” measure of uncertainty (i.e., one that uses the same nonparametric estimation choices, here captured by the partition). In particular, lspartition delivers valid confidence bands that cover the entire regression function and its derivatives. These data-driven confidence bands are constructed by approximating the distribution of $t$-statistic processes, using either a plug-in approach or a bootstrap approach. Importantly, the construction of confidence bands does not employ (asymptotic) extreme value theory, but instead uses the strong approximation results of Cattaneo, Farrell and Feng (2019), which perform substantially better in samples of moderate size.

Last but not least, the package also offers a convenient function to implement estimation and inference for linear combinations of regression estimators across different groups, with all the features mentioned above. This function can be used to analyze conditional treatment effects in randomized controlled trials in particular, or for two-sample comparisons more generally. For example, a common question in applications is whether two groups have the same “trend” in a regression function, and this is often answered in a restricted way by testing a single interaction term in a (parametric) linear model. In contrast, lspartition delivers a valid measure of this difference nonparametrically and uniformly over the support of the conditioning variables, greatly increasing its flexibility in applications.

All of these contributions are fully implemented for splines, wavelets, and piecewise polynomials through the following four functions included in the package lspartition:

  • lsprobust(): This function implements estimation and inference for partitioning-based least squares regression. It takes the partitioning scheme as given, and constructs point and variance estimators, bias correction, conventional and robust bias-corrected confidence intervals, and simulation-based conventional and robust bias-corrected uniform inference measures (e.g., confidence bands). Three approximation bases are provided: B-splines, Cohen-Daubechies-Vial wavelets, and piecewise polynomials. When the partitioning scheme is not specified, the companion function lspkselect() is used to select a tensor-product partition in a fully data-driven fashion.

  • lspkselect(): This function implements data-driven procedures to select the number of knots for partitioning-based least squares regression. It allows for evenly-spaced and quantile-spaced knot placements, and computes the corresponding IMSE-optimal choices. Two selectors are provided: rule of thumb (ROT) and direct plug-in (DPI) rule.

  • lsplincom(): This function implements estimation and robust inference procedures for linear combinations of regression estimators of multiple groups based on lsprobust(). Given a user-specified linear combination, it offers all the estimation and inference methods available in the functions lsprobust() and lspkselect().

  • lsprobust.plot(): This function builds on ggplot2, and is used as a wrapper for plotting results. It plots regression function curves, robust bias-corrected confidence intervals and uniform confidence bands, among other possibilities.

The paper continues as follows. Section 2 describes the basic setup including a brief introduction to partitioning-based least squares regression and the empirical example to be used throughout to illustrate features of lspartition. Section 3 discusses data-driven IMSE-optimal selection of the number of knots and gives implementation details. Estimation and inference implementation is covered in Section 4, including bias correction methods. The last section concludes. We defer to Cattaneo, Farrell and Feng (2019, CFF hereafter) for complete theoretical and technical details. Statements below are sometimes specific versions of a general case therein.

2 Setup

We assume that $\{(y_i, \mathbf{x}_i')' : i = 1, \ldots, n\}$ is an observed random sample of a scalar outcome $y_i$ and a $d$-vector of covariates $\mathbf{x}_i \in \mathcal{X} \subseteq \mathbb{R}^d$. The object of interest is the regression function $\mu(\mathbf{x}) = \mathbb{E}[y_i \mid \mathbf{x}_i = \mathbf{x}]$ or its derivative, the latter denoted by $\partial^{\mathbf{q}}\mu(\mathbf{x}) = \partial^{[\mathbf{q}]}\mu(\mathbf{x})/\partial x_1^{q_1}\cdots\partial x_d^{q_d}$, for a $d$-tuple $\mathbf{q} = (q_1, \ldots, q_d)' \in \mathbb{Z}_+^d$ with $[\mathbf{q}] = q_1 + \cdots + q_d$.

Estimation and inference is based on least squares regression of $y_i$ on a set of basis functions which are themselves built on top of a partition of the support $\mathcal{X}$. A partition, denoted by $\Delta = \{\delta_l \subset \mathcal{X} : 1 \leq l \leq \bar{\kappa}\}$, is a collection of $\bar{\kappa}$ disjoint open sets such that the closure of their union is $\mathcal{X}$. For a partition, a set of $K$ basis functions, each of order $m$ and denoted by $\mathbf{p}(\mathbf{x}) = \mathbf{p}(\mathbf{x}; \Delta, m)$, is constructed so that each individual function (i.e., each element of the vector $\mathbf{p}(\mathbf{x})$) is nonzero on a fixed number of contiguous cells $\delta_l$. lspartition allows for three such bases: piecewise polynomials, B-splines, and Cohen-Daubechies-Vial wavelets (Cohen, Daubechies and Vial, 1993). For the first two bases, the order $m$ of the basis can be any positive integer, and any derivative of $\mu$ of total order up to $m-1$ can be estimated employing such a basis. For Cohen-Daubechies-Vial wavelets, the current version allows for $m \leq 4$ (i.e., up to cubic wavelets), and $\mathbf{q} = \mathbf{0}$. The package takes $m = 2$ (linear basis) as default. To fix ideas, consider $d = 1$ and $m = 1$ with piecewise polynomials. Each $\delta_l$ is an interval and $\mathbf{p}(x)$ collects all the indicator functions $\mathbf{1}(x \in \delta_l)$, $l = 1, \ldots, \bar{\kappa}$.
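To make this piecewise constant example concrete, the following sketch (our own illustration on simulated data, not lspartition internals; all variable names are ours) builds the indicator basis for an evenly-spaced partition with five cells and verifies that the least squares fit is just the within-cell sample mean of the outcome:

# Illustrative sketch: piecewise constant partitioning fit (d = 1, m = 1)
set.seed(42)
x.sim <- runif(200)
y.sim <- sin(2 * pi * x.sim) + rnorm(200, sd = 0.3)
breaks <- seq(0, 1, length.out = 6)                 # 5 evenly-spaced cells
cell <- cut(x.sim, breaks, include.lowest = TRUE)   # cell membership of each observation
P <- model.matrix(~ cell - 1)                       # indicator (partitioning) basis
fit <- lm(y.sim ~ P - 1)                            # least squares on the basis
all.equal(unname(fitted(fit)), ave(y.sim, cell), check.attributes = FALSE)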

Once the basis is constructed, the final estimator of $\partial^{\mathbf{q}}\mu(\mathbf{x})$ is

$$\widehat{\partial^{\mathbf{q}}\mu}(\mathbf{x}) = \partial^{\mathbf{q}}\mathbf{p}(\mathbf{x})'\widehat{\boldsymbol{\beta}}, \qquad (1)$$

where $\widehat{\boldsymbol{\beta}} = \arg\min_{\mathbf{b} \in \mathbb{R}^{K}} \sum_{i=1}^{n} (y_i - \mathbf{p}(\mathbf{x}_i)'\mathbf{b})^2$. When $\mathbf{q} = \mathbf{0}$, we write $\widehat{\mu}(\mathbf{x}) = \widehat{\partial^{\mathbf{0}}\mu}(\mathbf{x})$ for simplicity.
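For intuition, the estimator in (1) can be emulated with base R tools. The following sketch (our own illustration on simulated data; not the package's implementation) fits a linear B-spline least squares regression with four evenly-spaced interior knots and evaluates it at new points:

# Illustrative sketch: least squares on a linear B-spline basis (order m = 2)
library(splines)
set.seed(1)
x.sim <- runif(500)
y.sim <- exp(x.sim) + rnorm(500, sd = 0.2)
knots <- seq(0, 1, length.out = 6)[-c(1, 6)]        # 4 evenly-spaced interior knots
P <- bs(x.sim, knots = knots, degree = 1, intercept = TRUE)
bhat <- solve(crossprod(P), crossprod(P, y.sim))    # least squares coefficients
x0 <- seq(0.05, 0.95, by = 0.05)                    # evaluation points
mu.hat <- predict(P, x0) %*% bhat                   # estimate of mu at x0
head(cbind(x0, mu.hat))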

The approximation power of such estimators increases with the granularity of the partition $\Delta$ and the order $m$. We take the latter as fixed in practice. The most popular structure of $\Delta$ in applications is a tensor-product form, which partitions each covariate marginally into intervals and then sets $\Delta$ to be the set of all tensor (Cartesian) products of these intervals (CFF consider more general cases). For this type of partition, the user must choose the number and placement of the partitioning knots in each dimension. lspartition allows for three knot placement types: user-specified, evenly-spaced, and quantile-spaced. In the first case, the user has complete freedom to choose both the number and positions of knots for each dimension. In the latter two cases, the knot placement scheme is pre-specified, and hence only the number of subintervals for each dimension needs to be chosen.

We denote the number of knots in the $d$ dimensions of the regressor by $\boldsymbol{\kappa} = (\kappa_1, \ldots, \kappa_d)'$, which can be either specified by users or selected by data-driven procedures (see Section 3 below). Moreover, for wavelet bases, motivated by the standard multi-resolution analysis, we provide an option J for the regression command lsprobust(), which indicates the resolution level of the wavelet basis: a resolution level of $J$ corresponds to $2^{J}$ subintervals in each dimension (see Chui, 2016, for a review). In any case, the tuning parameter to be chosen is $\boldsymbol{\kappa}$. In the next section we choose $\boldsymbol{\kappa}$ to minimize the IMSE of the estimator (1).

2.1 Package and Data

We will showcase the main aspects of lspartition using a running empirical example. The package is available in R and can be installed as follows:

> install.packages("lspartition", dependencies = TRUE)
> library(lspartition)

The data we use come from Capital Bikeshare, and are available at https://www.kaggle.com/c/bike-sharing-demand/data/. For the first 19 days of each month of 2011 and 2012 we observe the outcome count, the total number of rentals, and the covariates atemp, the “feels-like” temperature, and workingday, a binary indicator for working days (versus weekends and holidays). The data are summarized as follows.

> data <- read.csv("bikeSharing.csv", header = TRUE)
> summary(data)
     count           atemp         workingday
 Min.   :  1.0   Min.   : 0.76   Min.   :0.0000
 1st Qu.: 42.0   1st Qu.:16.66   1st Qu.:0.0000
 Median :145.0   Median :24.24   Median :1.0000
 Mean   :191.6   Mean   :23.66   Mean   :0.6809
 3rd Qu.:284.0   3rd Qu.:31.06   3rd Qu.:1.0000
 Max.   :977.0   Max.   :45.45   Max.   :1.000

We will investigate nonparametrically the relationship between temperature and number of rentals and compare the two groups defined by the type of days:

> y <- data$count
> x <- data$atemp
> g <- data$workingday

The sample code that follows will use this designation of y, x, and g.

3 Partitioning Scheme Selection

We will now briefly describe the IMSE expansion and its use in tuning parameter selection. To differentiate the original point estimator of (1) from the post-bias-correction estimators, we will add a subscript 0 to the original estimator: $\widehat{\partial^{\mathbf{q}}\mu}_0(\mathbf{x})$. The three bias corrections discussed below will add corresponding subscripts of 1, 2, and 3. We first discuss the bias and variance of $\widehat{\partial^{\mathbf{q}}\mu}_0(\mathbf{x})$, and then use these for minimizing the IMSE. Throughout, $\approx$ denotes that the approximation holds for large samples in probability and $\mathbb{E}_n[\cdot]$ denotes the sample average over $i = 1, \ldots, n$. To simplify notation, we may write the estimator as

$$\widehat{\partial^{\mathbf{q}}\mu}_0(\mathbf{x}) = \widehat{\boldsymbol{\gamma}}_{\mathbf{q},0}(\mathbf{x})'\,\mathbb{E}_n[\boldsymbol{\Pi}_0(\mathbf{x}_i)\, y_i].$$

Again, note the subscript “0”; the bias-corrected estimators are of the same form (see below).

3.1 Bias and Variance

The bias expansion for $\widehat{\partial^{\mathbf{q}}\mu}_0(\mathbf{x})$ is:

$$\mathbb{E}\big[\widehat{\partial^{\mathbf{q}}\mu}_0(\mathbf{x}) \,\big|\, \mathbf{X}\big] - \partial^{\mathbf{q}}\mu(\mathbf{x}) \;\approx\; \mathcal{B}_{m,\mathbf{q}}(\mathbf{x}) \qquad (2)$$
$$\qquad\qquad -\, \widehat{\boldsymbol{\gamma}}_{\mathbf{q},0}(\mathbf{x})'\,\mathbb{E}_n\big[\boldsymbol{\Pi}_0(\mathbf{x}_i)\,\mathcal{B}_{m,\mathbf{0}}(\mathbf{x}_i)\big]. \qquad (3)$$

$\mathcal{B}_{m,\mathbf{q}}(\mathbf{x})$ is the leading approximation error in the $L_\infty$-norm and the second term is the accompanying error from the linear projection of $\mathcal{B}_{m,\mathbf{0}}$ onto the space spanned by the basis functions. The form of each of these is complex, and depends on the basis, but what is crucial for present purposes is that the form is known and the only unknown elements are derivatives of $\mu$ of order $m$, that is, $\partial^{\mathbf{u}}\mu(\mathbf{x})$ with $[\mathbf{u}] = m$. CFF derive exact expressions for splines, wavelets, and piecewise polynomials. Both bias terms will, in general, contribute to the same order, and both will matter in finite samples. However, the second term in (3) will be higher order if the bases are carefully constructed so that $\mathcal{B}_{m,\mathbf{0}}$ is orthogonal to $\mathbf{p}(\mathbf{x})$ in $L_2$ with respect to the Lebesgue measure. lspartition allows users to choose whether the projection of the leading error is used in partitioning scheme selection, as well as estimation and inference.

The conditional variance is straightforward from least squares algebra, and takes the familiar sandwich form:

$$\mathbb{V}\big[\widehat{\partial^{\mathbf{q}}\mu}_0(\mathbf{x}) \,\big|\, \mathbf{X}\big] = \frac{1}{n}\,\widehat{\boldsymbol{\gamma}}_{\mathbf{q},0}(\mathbf{x})'\,\boldsymbol{\Sigma}_0\,\widehat{\boldsymbol{\gamma}}_{\mathbf{q},0}(\mathbf{x}), \qquad \boldsymbol{\Sigma}_0 = \mathbb{E}_n\big[\boldsymbol{\Pi}_0(\mathbf{x}_i)\boldsymbol{\Pi}_0(\mathbf{x}_i)'\,\sigma^2(\mathbf{x}_i)\big], \qquad (4)$$

where $\sigma^2(\mathbf{x}) = \mathbb{V}[y_i \mid \mathbf{x}_i = \mathbf{x}]$. Only $\sigma^2(\mathbf{x})$ is unknown here, and will be replaced by a residual-based estimator. In particular, lspartition allows for the standard heteroskedasticity-consistent (HC) class of estimators via the options hc0, hc1, hc2, hc3. See Long and Ervin (2000) for a review in the context of least squares regression.
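To illustrate how such sandwich variances can be computed, the sketch below (our own helper, written to mirror the hc0-hc3 weighting options described above; it is not the package's internal code) returns an HC-type standard error for the fit at an evaluation point, given a basis matrix P, outcome vector y, and evaluation basis vector p0:

# Illustrative sketch: HC-type sandwich standard error for a series least squares fit
series_se <- function(P, y, p0, type = c("hc3", "hc0", "hc1", "hc2")) {
  type <- match.arg(type)
  n <- nrow(P); k <- ncol(P)
  PtP_inv <- solve(crossprod(P))
  bhat <- PtP_inv %*% crossprod(P, y)
  e <- drop(y - P %*% bhat)                      # least squares residuals
  h <- rowSums((P %*% PtP_inv) * P)              # leverages
  w <- switch(type,                              # HC weights on squared residuals
              hc0 = rep(1, n),
              hc1 = rep(n / (n - k), n),
              hc2 = 1 / (1 - h),
              hc3 = 1 / (1 - h)^2)
  meat <- crossprod(P * (sqrt(w) * abs(e)))      # sum_i w_i * e_i^2 * p(x_i) p(x_i)'
  V <- PtP_inv %*% meat %*% PtP_inv              # sandwich variance of the coefficients
  sqrt(drop(t(p0) %*% V %*% p0))                 # standard error of p(x0)' bhat
}

For example, continuing the B-spline sketch from Section 2, series_se(P, y.sim, drop(predict(P, 0.5)), "hc3") would return the HC3 standard error of the fitted value at x = 0.5.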

3.2 Integrated Mean Squared Error

In general, for a weighting function $w(\mathbf{x})$, CFF derive the following (conditional) IMSE expansion:

$$\mathrm{IMSE}\big[\widehat{\partial^{\mathbf{q}}\mu}_0(\cdot) \,\big|\, \mathbf{X}\big] \;\approx\; \frac{1}{n}\,\mathcal{V}_{\boldsymbol{\kappa}} + \mathcal{B}_{\boldsymbol{\kappa}},$$

where the $\boldsymbol{\kappa}$-varying quantities $\mathcal{V}_{\boldsymbol{\kappa}}$ and $\mathcal{B}_{\boldsymbol{\kappa}}$ correspond to a fixed-$n$ approximation to the variance and squared bias of the estimator, respectively.

Under regularity conditions on the partition and basis used, CFF derive explicit leading constants in this expansion. lspartition implements IMSE-minimization for the common simple case where $\Delta$ is a tensor product of marginally formed intervals and the same number of intervals $\kappa$ is used for each dimension. Specifically, each covariate's support is partitioned into $\kappa$ subintervals, and the complete partition $\Delta$ collects all tensor (Cartesian) products of these subintervals. Thus, the IMSE-optimal number of cells of a tensor-product partition is $\kappa_{\mathrm{IMSE}}^d$.

To select $\kappa$, or equivalently $\Delta$, assume that the partitioning knots are generated as quantiles of some marginal distributions $F_\ell$, $\ell = 1, \ldots, d$, that is, the knots in dimension $\ell$ are

$$\big\{ F_\ell^{-1}(l/\kappa) : l = 0, 1, \ldots, \kappa \big\},$$

where $F_\ell^{-1}(v) = \inf\{x \in \mathbb{R} : F_\ell(x) \geq v\}$. Then, the IMSE-optimal choice of $\kappa$ is

$$\kappa_{\mathrm{IMSE}} = \left\lceil \left( \frac{2(m - [\mathbf{q}])\,\mathcal{B}_F(\mathbf{q})}{(d + 2[\mathbf{q}])\,\mathcal{V}(\mathbf{q})} \right)^{\frac{1}{2m+d}} n^{\frac{1}{2m+d}} \right\rceil,$$

where $\lceil \cdot \rceil$ is the ceiling operator that outputs the smallest integer that is no less than its argument, $\mathcal{V}(\mathbf{q})$ is a variance constant, and $\mathcal{B}_F(\mathbf{q})$ is a (squared) bias constant that may depend on the marginals $F_\ell$ and, as before, is entirely known up to the order-$m$ derivatives $\partial^{\mathbf{u}}\mu(\mathbf{x})$, $[\mathbf{u}] = m$.

3.3 Implementation Details

Two popular choices of partitioning schemes are evenly-spaced partitions (ktype="uni"), which set $F_\ell$ to be the uniform distribution over the support of the data, and quantile-spaced partitions (ktype="qua"), which set $F_\ell$ to be the empirical distribution function of each covariate. The package lspartition implements both partitioning schemes, and for each case offers two IMSE-optimal tuning parameter selection procedures: rule-of-thumb (imse-rot) and direct plug-in (imse-dpi) choices. We close this section with a brief description of the implementation details and an illustration using real data.
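To see what the two pre-specified placement schemes amount to, the following lines (our own illustration, using the atemp covariate x loaded in Section 2.1) compute evenly-spaced and quantile-spaced knots for $\kappa = 5$ subintervals:

# Illustrative sketch: evenly-spaced vs. quantile-spaced knots for kappa = 5 subintervals
kappa <- 5
probs <- seq(0, 1, length.out = kappa + 1)[-c(1, kappa + 1)]   # interior cut points
even.knots <- min(x) + probs * (max(x) - min(x))               # ktype = "uni" analogue
quant.knots <- unname(quantile(x, probs = probs))              # ktype = "qua" analogue
round(rbind(even = even.knots, quantile = quant.knots), 2)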

3.3.1 Rule-of-Thumb Choice

The rule-of-thumb choice is based on a simplified special case of the IMSE-optimal formula above. Let the weighting function $w(\mathbf{x})$ be the density of $\mathbf{x}_i$. The implementation steps are summarized in the following:

  • Bias constant. The unknown quantities in the bias constant are the order-$m$ derivatives $\partial^{\mathbf{u}}\mu(\mathbf{x})$, $[\mathbf{u}] = m$, which are estimated by a global polynomial regression of fixed degree, and the density of $\mathbf{x}_i$, which is either assumed to be uniform or estimated by a trimmed-from-below Gaussian reference model (controlled by the option rotnorm).

  • Variance constant. The unknown quantities in the variance constant are the conditional variance $\sigma^2(\mathbf{x})$, which is estimated by global polynomial regressions of fixed degree, and the density of $\mathbf{x}_i$, which is either assumed to be uniform or estimated by a trimmed-from-below Gaussian reference model.

  • Rule-of-thumb $\widehat{\kappa}_{\mathrm{ROT}}$. Using the above results, a simple rule-of-thumb choice of $\kappa$ is

    $$\widehat{\kappa}_{\mathrm{ROT}} = \left\lceil \left( \frac{2(m - [\mathbf{q}])\,\widehat{\mathcal{B}}}{(d + 2[\mathbf{q}])\,\widehat{\mathcal{V}}} \right)^{\frac{1}{2m+d}} n^{\frac{1}{2m+d}} \right\rceil,$$

    where $\widehat{\mathcal{B}}$ and $\widehat{\mathcal{V}}$ are the estimates of the bias and variance constants respectively. While this choice of $\kappa$ is obtained under strong parametric assumptions, it still exhibits the correct convergence rate ($\widehat{\kappa}_{\mathrm{ROT}} \asymp n^{\frac{1}{2m+d}}$).

The command lspkselect() implements the rule-of-thumb selection (kselect="imse-rot"). For example, we focus on the subsample of bike rentals during working days (g==1); the selected numbers of knots are reported in the following output:

> summary(lspkselect(y, x, kselect = "imse-rot", subset = (g ==
+ 1)))
Call: lspkselect
Sample size (n)                            =    7412
Basis function (method)                    =    B-spline
Order of basis point estimation (m)        =    2
Order of derivative (deriv)                =    (0)
Order of basis bias correction (m.bc)      =    3
Knot placement (ktype)                     =    Uniform
Knot method  (kselect)                     =    imse-rot
=======================
         IMSE-ROT
       k     k.bc
=======================
       5        9
=======================

In this example, for the point estimator based on an evenly-spaced partition, the rule-of-thumb estimate of the IMSE-optimal number of knots is 5, and for the derivative estimators used in bias correction for later inference, the rule-of-thumb choice is 9 knots.

3.3.2 Direct Plug-in Choice

Assuming that the weighting $w(\mathbf{x})$ is equal to the density of $\mathbf{x}_i$, the package lspartition implements a direct plug-in (DPI) procedure summarized by the following steps.

  • Preliminary choice of $\kappa$: Implement the rule-of-thumb procedure to obtain $\widehat{\kappa}_{\mathrm{ROT}}$.

  • Preliminary regression. Given the user-specified basis, knot placement scheme, and rule-of-thumb choice $\widehat{\kappa}_{\mathrm{ROT}}$, implement a partitioning-based regression of order $m+1$ to estimate all necessary order-$m$ derivatives; denote these estimates by $\widehat{\partial^{\mathbf{u}}\mu}(\mathbf{x})$, $[\mathbf{u}] = m$.

  • Bias constant. Construct an estimate $\widehat{\mathcal{B}}_{m,\mathbf{q}}(\mathbf{x})$ of the leading error $\mathcal{B}_{m,\mathbf{q}}(\mathbf{x})$ by replacing the unknown derivatives $\partial^{\mathbf{u}}\mu(\mathbf{x})$ by $\widehat{\partial^{\mathbf{u}}\mu}(\mathbf{x})$; $\widehat{\mathcal{B}}_{m,\mathbf{0}}(\mathbf{x})$ can be obtained similarly. Then, use the pre-asymptotic version of the conditional bias in (2)-(3), with these estimates plugged in, to estimate the bias constant. As mentioned before, for the three bases considered in the package lspartition, the second term in the conditional bias is of smaller order under some additional conditions. We employ this property to simplify the estimate of the bias constant for wavelets. For splines and piecewise polynomials, however, users may specify whether the projection of the leading error is taken into account in the selection procedure (see option proj).

  • Variance constant. Implement a partitioning-based series regression of order $m$ with $\widehat{\kappa}_{\mathrm{ROT}}$ knots, and then use the pre-asymptotic version of the conditional variance (4) to estimate the variance constant. Specifically, the unknown $\sigma^2(\mathbf{x}_i)$ is replaced by $w_i\,\widehat{\epsilon}_i^2$, where the $\widehat{\epsilon}_i$'s are the regression residuals and $w_i$ is the weighting scheme used to construct the different HC variance estimators.

  • Direct plug-in $\widehat{\kappa}_{\mathrm{DPI}}$. Collecting all these results, a direct plug-in choice of $\kappa$ is

    $$\widehat{\kappa}_{\mathrm{DPI}} = \left\lceil \left( \frac{2(m - [\mathbf{q}])\,\widehat{\mathcal{B}}_{\mathrm{DPI}}}{(d + 2[\mathbf{q}])\,\widehat{\mathcal{V}}_{\mathrm{DPI}}} \right)^{\frac{1}{2m+d}} n^{\frac{1}{2m+d}} \right\rceil.$$

The following shows the results of the direct plug-in procedure based on the real data:

> summary(lspkselect(y, x, kselect = "imse-dpi", subset = (g ==
+ 1)))
Call: lspkselect
Sample size (n)                            =    7412
Basis function (method)                    =    B-spline
Order of basis point estimation (m)        =    2
Order of derivative (deriv)                =    (0)
Order of basis bias correction (m.bc)      =    3
Knot placement (ktype)                     =    Uniform
Knot method  (kselect)                     =    imse-dpi
=======================
         IMSE-DPI
       k     k.bc
=======================
       8       10
=======================

The direct plug-in procedure gives more partitioning knots than the rule of thumb, leading to a finer partition. For point estimation, 8 knots are suggested, while for bias correction purposes it selects 10 knots to estimate the derivatives in the leading bias. Quantile-spaced knot placement is obtained by adding ktype = "qua", as illustrated next.
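For instance, the DPI selector with quantile-spaced knots for the working-day subsample can be requested as follows (an illustrative call only; output omitted):

> summary(lspkselect(y, x, kselect = "imse-dpi", ktype = "qua",
+ subset = (g == 1)))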

4 Estimation and Inference

This section reviews and illustrates the estimation and inference procedures implemented. A crucial ingredient is the bias correction that allows for valid inference after tuning parameter selection.

4.1 Point Estimation and Bias Correction

The estimator $\widehat{\partial^{\mathbf{q}}\mu}_0(\mathbf{x})$ is IMSE-optimal from a point estimation perspective when implemented using the choice $\widehat{\kappa}_{\mathrm{IMSE}}$ to form $\Delta$, but conventional inference methods based on this point estimator will be invalid. More precisely, the ratio of bias to standard error in the $t$-statistic is non-negligible, requiring either ad hoc undersmoothing or some form of bias correction. In addition to the (uncorrected) point estimate in (1), the package lspartition implements the three bias correction options derived by CFF for valid (pointwise and uniform) inference. All these strategies resort to a higher-order basis, $\tilde{\mathbf{p}}(\mathbf{x})$, of order $\tilde{m} > m$. The partition $\tilde{\Delta}$ on which $\tilde{\mathbf{p}}(\mathbf{x})$ is built may be different from $\Delta$, but need not be. These approaches allow researchers to combine an optimal point estimate based on the IMSE-optimal $\kappa$ with inference based on the same tuning parameter and partitioning scheme choices.

Our bias correction strategies are based on (2) and (3), where the only unknowns are $\mathcal{B}_{m,\mathbf{q}}(\mathbf{x})$, $\mathcal{B}_{m,\mathbf{0}}(\mathbf{x})$, and the derivatives $\partial^{\mathbf{u}}\mu(\mathbf{x})$ for $[\mathbf{u}] = m$. These are summarized as follows; see CFF for details.

  • Approach 1: Higher-order-basis bias correction. Use $\tilde{\mathbf{p}}(\mathbf{x})$ to construct a higher-order least squares estimator that takes exactly the same form as $\widehat{\partial^{\mathbf{q}}\mu}_0(\mathbf{x})$ but has less bias. If we substitute the corresponding higher-order estimates for the unknowns in (2) and (3) and subtract this estimated bias from $\widehat{\partial^{\mathbf{q}}\mu}_0(\mathbf{x})$, the resulting “bias-corrected” estimator is equivalent to the higher-order estimator $\widehat{\partial^{\mathbf{q}}\mu}_1(\mathbf{x})$. This option is called by bc="bc1".

  • Approach 2: Least squares bias correction. Construct an estimate of the leading approximation error using the higher-order basis $\tilde{\mathbf{p}}(\mathbf{x})$ and substitute it into (2), but keep the projection onto the original lower-order basis $\mathbf{p}(\mathbf{x})$ rather than $\tilde{\mathbf{p}}(\mathbf{x})$. The least squares bias-corrected estimator $\widehat{\partial^{\mathbf{q}}\mu}_2(\mathbf{x})$ is obtained by subtracting this estimated bias from $\widehat{\partial^{\mathbf{q}}\mu}_0(\mathbf{x})$. The supplement to CFF discusses in detail how this approach relates to higher-order-basis bias correction and when they are equivalent. This option is called by bc="bc2".

  • Approach 3: Plug-in bias correction. Referring to (2) and (3), use $\tilde{\mathbf{p}}(\mathbf{x})$ to construct the derivative estimates $\widehat{\partial^{\mathbf{u}}\mu}(\mathbf{x})$ for all needed $\mathbf{u}$ with $[\mathbf{u}] = m$. Substitute these estimates into $\mathcal{B}_{m,\mathbf{q}}(\mathbf{x})$ and $\mathcal{B}_{m,\mathbf{0}}(\mathbf{x})$ respectively. Subtracting this estimated bias from $\widehat{\partial^{\mathbf{q}}\mu}_0(\mathbf{x})$ leads to the plug-in bias-corrected estimator $\widehat{\partial^{\mathbf{q}}\mu}_3(\mathbf{x})$. This option is called by bc="bc3".

The optimal (uncorrected) point estimator ($j = 0$) and the three bias-corrected estimators ($j = 1, 2, 3$) can be written in a unified form:

$$\widehat{\partial^{\mathbf{q}}\mu}_j(\mathbf{x}) = \widehat{\boldsymbol{\gamma}}_{\mathbf{q},j}(\mathbf{x})'\,\mathbb{E}_n\big[\boldsymbol{\Pi}_j(\mathbf{x}_i)\, y_i\big], \qquad j = 0, 1, 2, 3.$$

These estimators only differ in $\widehat{\boldsymbol{\gamma}}_{\mathbf{q},j}(\mathbf{x})$ and $\boldsymbol{\Pi}_j(\mathbf{x})$, which depend in different ways on $\mathbf{p}(\mathbf{x})$ and $\tilde{\mathbf{p}}(\mathbf{x})$. See CFF for exact formulas.
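All three corrections are selected through the bc option of lsprobust(). For example, the least squares bias correction (Approach 2) for the working-day subsample could be requested with a call of the following form (an illustrative call mirroring the examples below; output omitted):

> est_workday_bc2 <- lsprobust(y, x, neval = 20, bc = "bc2", nknot = 8,
+ subset = (g == 1))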

4.2 Pointwise Inference

Pointwise inference relies on a Gaussian approximation for the $t$-statistics:

$$T_j(\mathbf{x}) = \frac{\widehat{\partial^{\mathbf{q}}\mu}_j(\mathbf{x}) - \partial^{\mathbf{q}}\mu(\mathbf{x})}{\sqrt{\widehat{\Omega}_j(\mathbf{x})/n}} \rightsquigarrow \mathsf{N}(0,1), \qquad j = 0, 1, 2, 3,$$

where $\widehat{\Omega}_j(\mathbf{x})/n$ is an estimator of the conditional variance of $\widehat{\partial^{\mathbf{q}}\mu}_j(\mathbf{x})$, and $\rightsquigarrow$ denotes convergence in distribution. $\widehat{\Omega}_j(\mathbf{x}) = \widehat{\boldsymbol{\gamma}}_{\mathbf{q},j}(\mathbf{x})'\,\widehat{\boldsymbol{\Sigma}}_j\,\widehat{\boldsymbol{\gamma}}_{\mathbf{q},j}(\mathbf{x})$ is a consistent estimator of $\Omega_j(\mathbf{x})$, where $\widehat{\boldsymbol{\Sigma}}_j = \mathbb{E}_n[w_i\,\boldsymbol{\Pi}_j(\mathbf{x}_i)\boldsymbol{\Pi}_j(\mathbf{x}_i)'\,\widehat{\epsilon}_{i,j}^2]$ and the $w_i$'s are additional weights leading to the various HC variance estimators. Then nominal $100(1-\alpha)$-percent symmetric confidence intervals are

$$I_j(\mathbf{x}) = \Big[\, \widehat{\partial^{\mathbf{q}}\mu}_j(\mathbf{x}) \pm z_{1-\alpha/2}\,\sqrt{\widehat{\Omega}_j(\mathbf{x})/n} \,\Big], \qquad (5)$$

where $z_{1-\alpha/2}$ is the $(1-\alpha/2)$-quantile of the standard normal distribution.

For conventional confidence intervals ($j = 0$), (asymptotically) correct coverage relies on undersmoothing (choosing $\kappa$ larger than $\kappa_{\mathrm{IMSE}}$) that renders the bias negligible relative to the standard error in large samples. Though straightforward in theory, this is difficult to implement in a principled way. In comparison, given the IMSE-optimal tuning parameter, all three bias-corrected estimators ($j = 1, 2, 3$) have only higher-order bias, and thus the corresponding confidence intervals based on these estimators will have asymptotically correct coverage. Importantly, the Studentization quantity $\widehat{\Omega}_j(\mathbf{x})$ also captures the additional variability introduced by bias correction.

We now illustrate the pointwise inference features of lsprobust() using the bike rental data. The previous knot selection based on the DPI procedure will be employed. Specifically, we set nknot=8 for point estimation. For higher-order-basis bias correction (bc="bc1"), the same number of knots is used to correct the bias by default, while for plug-in bias correction (bc="bc3"), we use 10 knots (bnknot=10) to estimate the higher-order derivatives in the leading bias. One may leave these options unspecified, in which case the command lsprobust() will automatically implement knot selection using the command lspkselect().

> est_workday_bc1 <- lsprobust(y, x, neval = 20, bc = "bc1", nknot = 8,
+ subset = (g == 1))
> est_workday_bc3 <- lsprobust(y, x, neval = 20, bc = "bc3", nknot = 8,
+ bnknot = 10, subset = (g == 1))
> summary(est_workday_bc1)
Call: lprobust
Sample size (n)                             =    7412
Num. covariates (d)                         =    1
Basis function (method)                     =    B-spline
Order of basis point estimation (m)         =    2
Order of derivative (deriv)                 =    (0)
Order of basis bias correction (m.bc)       =    3
Smoothness point estimation (smooth)        =    0
Smoothness bias correction (bsmooth)        =    1
Knot placement (ktype)                      =    Uniform
Knots method (kselect)                      =    User-specified
Uniform inference method (uni.method)       =    NA
Num. knots point estimation (nknot)         =    (8)
Num. knots bias correction (bnknot)         =    (8)
=================================================================
      Eval               Point      Std.       Robust B.C.
       X1          n      Est.     Error      [ 95% C.I. ]
=================================================================
1      9.850    7412    90.667     5.316    [77.610 , 96.347]
2     12.120    7412   110.509     3.909   [100.736 , 119.604]
3     13.635    7412   123.937     3.580   [115.071 , 133.583]
4     15.150    7412   137.364     5.183   [129.929 , 144.504]
5     16.665    7412   148.437     3.627   [139.724 , 158.148]
-----------------------------------------------------------------
6     17.425    7412   153.989     3.571   [144.494 , 164.327]
7     20.455    7412   173.306     5.690   [164.945 , 181.894]
8     21.210    7412   174.599     4.600   [167.492 , 186.141]
9     22.725    7412   177.194     3.771   [171.250 , 190.769]
10    24.240    7412   179.789     5.300   [173.561 , 189.839]
-----------------------------------------------------------------
11    25.000    7412   182.743     5.708   [172.595 , 189.229]
12    25.760    7412   189.044     4.662   [172.267 , 191.494]
13    26.515    7412   195.303     4.070   [174.665 , 196.009]
14    28.790    7412   214.165     5.899   [201.197 , 220.363]
15    30.305    7412   231.911     5.770   [228.211 , 248.431]
-----------------------------------------------------------------
16    31.060    7412   243.335     4.760   [239.920 , 262.104]
17    31.820    7412   254.833     4.486   [251.063 , 273.840]
18    33.335    7412   277.755     6.284   [270.701 , 291.816]
19    34.850    7412   298.199     7.278   [280.463 , 309.527]
20    36.365    7412   313.696     6.596   [289.109 , 324.772]
-----------------------------------------------------------------
=================================================================

The above table summarizes the results for pointwise estimation and inference, including point estimates, conventional standard errors, and robust confidence intervals based on higher-order-basis bias correction for quantile-spaced evaluation points. We can use the companion plotting command lsprobust.plot() to visualize the results:

> lsprobust.plot(est_workday_bc1, xlabel = "Temperature", ylabel = "Number of Rentals",
+ legendGroups = "Working Days") + theme(legend.position = c(0.15,
+ 0.9))
> ggsave("output/pointwise1.pdf")
> lsprobust.plot(est_workday_bc3, xlabel = "Temperature", ylabel = "Number of Rentals") +
+ theme(legend.position = "none")
> ggsave("output/pointwise2.pdf")
Figure 1: Point estimation and pointwise robust confidence intervals. (a) Higher-order-basis bias correction; (b) plug-in bias correction.

The result is displayed in Figure 1. As the temperature gets higher, the number of rentals increases, as expected. Both panels show the same point estimator, $\widehat{\mu}_0(x)$. We plot the robust confidence intervals based on higher-order-basis bias correction (Figure 1a) and plug-in bias correction (Figure 1b). Since the higher-order-basis approach is equivalent to a quadratic spline fit, the resulting confidence intervals have a smoother shape.

4.3 Uniform Inference

To obtain uniform inference (over the support of $\mathbf{x}_i$), CFF establish Gaussian approximations for the whole $t$-statistic processes, and propose several sampling-based approximations which are easy to implement in practice. To be concrete, for each $j = 0, 1, 2, 3$, there exists a Gaussian process $Z_j(\cdot)$ such that

$$T_j(\cdot) \approx_d Z_j(\cdot), \qquad Z_j(\mathbf{x}) = \frac{\boldsymbol{\gamma}_{\mathbf{q},j}(\mathbf{x})'\,\boldsymbol{\Sigma}_j^{1/2}}{\sqrt{\Omega_j(\mathbf{x})}}\,\mathbf{N}_{K_j},$$

where $\boldsymbol{\gamma}_{\mathbf{q},j}(\mathbf{x})$, $\boldsymbol{\Sigma}_j$ and $\Omega_j(\mathbf{x})$ are population counterparts of $\widehat{\boldsymbol{\gamma}}_{\mathbf{q},j}(\mathbf{x})$, $\widehat{\boldsymbol{\Sigma}}_j$ and $\widehat{\Omega}_j(\mathbf{x})$, $\mathbf{N}_{K_j}$ is a $K_j$-dimensional standard normal random vector, and $K_j$ is the length of $\boldsymbol{\Pi}_j(\mathbf{x})$, which is proportional to the number of cells in the partition. The notation $\approx_d$ means that the two processes are approximately equal in distribution in the following sense: in a sufficiently rich probability space, we have identical copies of $T_j(\cdot)$ and $Z_j(\cdot)$ whose difference converges in probability to zero uniformly over $\mathbf{x}$.

The Gaussian stochastic process $Z_j(\cdot)$ is not feasible in practice because it involves unknown population quantities. Thus, the package lspartition offers two options for implementation: plug-in or bootstrap.

  • Plug-in. Replace all unknowns in $Z_j(\cdot)$ by consistent estimators:

    $$\widehat{Z}_j(\mathbf{x}) = \frac{\widehat{\boldsymbol{\gamma}}_{\mathbf{q},j}(\mathbf{x})'\,\widehat{\boldsymbol{\Sigma}}_j^{1/2}}{\sqrt{\widehat{\Omega}_j(\mathbf{x})}}\,\mathbf{N}_{K_j}.$$

    CFF show that $\widehat{Z}_j(\cdot)$ delivers a valid distributional approximation to $T_j(\cdot)$. In practice one may obtain many simulated realizations of $\widehat{Z}_j(\cdot)$ by sampling the $K_j$-dimensional standard normal random vector conditional on the data. This option is called by uni.method="pl".

  • Bootstrap. Construct a bootstrapped version $\widehat{Z}_j^*(\cdot)$ of the approximation process (conditional on the data) by replacing the residuals $\widehat{\epsilon}_{i,j}$ with $\omega_i\,\widehat{\epsilon}_{i,j}$ in the feasible process, where $\{\omega_i\}_{i=1}^n$ is an i.i.d. sequence of bounded random variables with zero mean and unit variance. CFF show that this bootstrapped process also approximates $T_j(\cdot)$ conditional on the data. Thus one can implement the bootstrap by resampling the $\omega_i$'s given the data. In the package lspartition, the $\omega_i$'s are taken to be Rademacher variables, and this option is called by uni.method="wb". A minimal sketch of the multiplier idea follows this list.

Importantly, these strong approximations apply to the whole $t$-statistic processes, and thus can be used to implement general inference procedures based on transformations of $T_j(\cdot)$. The main regression command lsprobust() will output the following quantities for uniform analyses upon setting uni.out=TRUE:

  • t.num.pl, t.num.wb1, t.num.wb2: The numerators of the approximation processes excluding the “simulated components” (the standard normal or bootstrap draws), evaluated at a set of pre-specified grid points. Suppose the grid contains $L$ points. Then for the plug-in method the numerator, stored in t.num.pl, is an $L \times K_j$ matrix. For the wild bootstrap the numerator is separated into t.num.wb1 and t.num.wb2, the two factors whose product with the bootstrap weights forms the numerator.

  • t.denom: The denominator of the approximation processes, i.e., the estimated standard errors evaluated at the grid points, stored in a vector of length $L$.

  • res: Residuals from the specified bias-corrected regression (needed for bootstrap-based approximation).

For example, the following command requests the necessary quantities for uniform inference based on the plug-in method:

> est_workday_bc1 <- lsprobust(y, x, bc = "bc1", nknot = 4, uni.method = "pl",
+ uni.ngrid = 100, uni.out = T, subset = (g == 1))
> round(est_workday_bc1$uni.output$t.num.pl[1:5, ], 3)
       [,1]   [,2]  [,3]   [,4]  [,5]   [,6]  [,7]
[1,] 30.549 -4.923 2.311 -1.470 0.779 -0.451 0.121
[2,] 27.104 -3.553 1.746 -1.162 0.620 -0.354 0.090
[3,] 23.856 -2.285 1.236 -0.880 0.474 -0.266 0.062
[4,] 20.803 -1.117 0.780 -0.624 0.341 -0.185 0.037
[5,] 17.946 -0.052 0.379 -0.395 0.221 -0.113 0.014

We list the first five rows of the numerator matrix. Each row corresponds to a grid point. Since we use a linear spline for point estimation and set nknot=4, the higher-order-basis bias correction is equivalent to a quadratic spline fit. Thus the numerator matrix has seven columns, corresponding to the quadratic spline basis.
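Given these outputs, the plug-in simulation described above can be reproduced by hand. The sketch below (ours) assumes the output structure just described, namely that t.num.pl is the $L \times K_j$ numerator matrix and that the corresponding denominator is stored as uni.output$t.denom, mirroring the access pattern shown for t.num.pl; any internal scaling used by the package is ignored here:

# Illustrative sketch: simulate sup-t draws from the plug-in approximation process
num <- est_workday_bc1$uni.output$t.num.pl        # L x K numerator matrix
denom <- est_workday_bc1$uni.output$t.denom       # assumed length-L vector of denominators
B <- 1000
sup.draws <- replicate(B, max(abs(num %*% rnorm(ncol(num))) / denom))
quantile(sup.draws, 0.95)                         # simulated 95% critical value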

As a special application, these results can be used to construct uniform confidence bands, which build on the suprema of the feasible processes $\widehat{Z}_j(\cdot)$ or $\widehat{Z}_j^*(\cdot)$. The function lsprobust() computes the critical value needed to construct confidence bands. Specifically, it generates many simulated realizations of $\widehat{Z}_j(\cdot)$ or $\widehat{Z}_j^*(\cdot)$ using the methods described above, and then obtains an estimated $(1-\alpha)$-quantile of $\sup_{\mathbf{x}} |\widehat{Z}_j(\mathbf{x})|$ or $\sup_{\mathbf{x}} |\widehat{Z}_j^*(\mathbf{x})|$ given the data, denoted by $\widehat{c}_j(1-\alpha)$. Then, a $100(1-\alpha)$-percent confidence band for $\partial^{\mathbf{q}}\mu(\mathbf{x})$ is given by

$$\Big[\, \widehat{\partial^{\mathbf{q}}\mu}_j(\mathbf{x}) \pm \widehat{c}_j(1-\alpha)\,\sqrt{\widehat{\Omega}_j(\mathbf{x})/n} \,\Big].$$
For example, the following command requests a critical value for constructing confidence bands:

> est_workday_bc1 <- lsprobust(y, x, neval = 20, bc = "bc1", uni.method = "pl",
+ nknot = 8, subset = (g == 1), band = T)
> est_workday_bc1$sup.cval
     95%
2.993436

Once the critical value is available, the command lsprobust.plot() is able to visualize confidence bands:

> lsprobust.plot(est_workday_bc1, CS = "all", xlabel = "Temperature",
+ ylabel = "Number of Rentals", legendGroups = "Working Days") +
+ theme(legend.position = c(0.15, 0.9))
> ggsave("output/uniform1.pdf")
Figure 2: Robust inference: plug-in method, higher-order-basis bias correction

The result is displayed in Figure 2. Since we set CS="all", the command simultaneously plots pointwise confidence intervals (error bars) and a uniform confidence band (shaded region).

It is also possible to specify other bias correction approaches or uniform methods:

> est_workday_bc3 <- lsprobust(y, x, neval = 20, bc = "bc3", nknot = 8,
+ bnknot = 10, uni.method = "wb", subset = (g == 1), band = T)
> est_workday_bc3$sup.cval
     95%
3.009244
> lsprobust.plot(est_workday_bc3, CS = "all", xlabel = "Temperature",
+ ylabel = "Number of Rentals", legendGroups = "Working Days") +
+ theme(legend.position = c(0.15, 0.9))
> ggsave("output/uniform2.pdf")
Figure 3: Robust inference: bootstrap method, plug-in bias correction

The result is displayed in Figure 3. In this example, the critical values based on different methods are quite close, but in general their difference could be more pronounced in finite samples. See CFF for some simulation evidence.

4.4 Linear Combinations

The package lspartition also includes a function lsplincom(), which implements estimation and inference for a linear combination of regression functions of different subgroups. To be concrete, consider a randomized trial with $G$ groups. Let $\mu_g(\mathbf{x})$ be the conditional expectation function (CEF) for group $g$, $g = 1, \ldots, G$. The parameter of interest is $\theta(\mathbf{x}) = \sum_{g=1}^{G} r_g\,\partial^{\mathbf{q}}\mu_g(\mathbf{x})$, i.e., a linear combination of CEFs (or derivatives thereof) for different groups. To fix ideas, consider the most common application, the difference between two groups (or the conditional average treatment effect). Here, $G = 2$, $r_1 = -1$, and $r_2 = 1$. Then $\theta(\mathbf{x}) = \mu_2(\mathbf{x}) - \mu_1(\mathbf{x})$.

To implement estimation and inference for $\theta(\mathbf{x})$, lsplincom() first calls lsprobust() to obtain a point estimate $\widehat{\partial^{\mathbf{q}}\mu}_g(\mathbf{x})$ and all other objects for each group. The tuning parameter for each group can be selected by the data-driven procedures described above. Then the point estimate of $\theta(\mathbf{x})$ is

$$\widehat{\theta}(\mathbf{x}) = \sum_{g=1}^{G} r_g\,\widehat{\partial^{\mathbf{q}}\mu}_g(\mathbf{x}).$$

The standard error of $\widehat{\theta}(\mathbf{x})$ can be obtained simply by taking the appropriate linear combination of the standard errors for each group and their estimated covariances. Robust confidence intervals can be constructed similarly to (5).
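As a back-of-the-envelope check of the variance calculation, when the groups are disjoint subsamples the group-specific estimators are independent, so at a given evaluation point the standard error of the combination reduces to a simple formula (a sketch under that independence assumption, with hypothetical group-wise standard errors; lsplincom() performs this computation internally):

# Sketch: s.e. of a linear combination of independent group estimates at one point
r <- c(-1, 1)                    # combination weights, as in R = c(-1, 1) below
se.g <- c(4.0, 4.5)              # hypothetical group-wise standard errors at the point
sqrt(sum(r^2 * se.g^2))          # standard error of the combination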

lsplincom() also allows users to construct confidence bands for $\theta(\mathbf{x})$. Specifically, it requests lsprobust() to output the numerators (t.num.pl for “plug-in”, or t.num.wb1 and t.num.wb2 for “bootstrap”) and denominators (t.denom) of the feasible approximation processes $\widehat{Z}_j(\cdot)$ or $\widehat{Z}_j^*(\cdot)$ for each group. The approximation process for the $t$-statistic process based on $\widehat{\theta}(\mathbf{x})$ is then formed by taking the corresponding linear combination of the group-wise numerators, each paired with its own independent standard normal vector, and dividing by the standard error of the combination. As discussed before, the dimensionality of these normal vectors depends on the particular bias correction approach and may vary across groups since the selected number of knots may differ across groups. The bootstrap approximation process can be constructed similarly.

Given these processes, inference is implemented by sampling the standard normal vectors (“plug-in” method) or the group-wise Rademacher weights given the data. Critical values used to construct confidence bands for $\theta(\mathbf{x})$ are then estimated, as before, by empirical quantiles of the suprema of the simulated processes.

As an illustration, we compare the number of rentals during working days and other time periods (weekends and holidays) based on linear splines and plug-in bias correction. To begin with, we first estimate the conditional mean function for each group using the command lsprobust().

> est_workday <- lsprobust(y, x, neval = 20, bc = "bc3", nknot = 8,
+ subset = (g == 1))
> est_nworkday <- lsprobust(y, x, neval = 20, bc = "bc3", nknot = 8,
+ subset = (g == 0))
> lsprobust.plot(est_workday, est_nworkday, legendGroups = c("Working Days",
+ "Nonworking Days"), xlabel = "Temperature", ylabel = "Number of Rentals") +
+ theme(legend.position = c(0.15, 0.85))
> ggsave("output/diff1.pdf")

The pointwise results for each group are displayed in Figure 4. The shaded regions represent confidence intervals. Clearly, when the temperature is low, the two regions are well separated, suggesting that people may rent bikes more during working days than during weekends or holidays when the weather is cold.

Figure 4: Point estimation and robust confidence intervals for two groups

Next, we employ the command lsplincom() to formally test this result. We specify R = c(-1, 1), denoting that -1 is the coefficient on the conditional mean function for the group workingday==0 and 1 is the coefficient on the conditional mean function for the group workingday==1.

> diff <- lsplincom(y, x, data$workingday, R = c(-1, 1), band = T,
+ cb.method = "pl")
> summary(diff)
Call: lprobust
Sample size (n)                            =    10886
Num. covariates (d)                        =    1
Num. groups (G)                            =    2
Basis function (method)                    =    B-spline
Order of basis point estimation (m)        =    2
Order of derivative (deriv)                =    (0)
Order of basis bias correction (m.bc)      =    3
Smoothness point estimation (smooth)       =    0
Smoothness bias correction (bsmooth)       =    1
Knot placement (ktype)                     =    Uniform
Knots method (kselect)                     =    imse-dpi
Confidence band method (cb.method)         =    Plug-in
=========================================================
      Eval       Point      Std.       Robust B.C.
       X1         Est.     Error      [ 95% C.I. ]
=========================================================
1      9.850    32.170     6.077    [24.120 , 47.837]
2     12.120    49.661     5.552    [37.497 , 61.394]
3     13.635    39.749     4.553    [30.882 , 51.186]
4     15.150    29.838     6.463    [17.013 , 42.425]
5     16.665    17.571     7.049     [3.137 , 30.514]
---------------------------------------------------------
6     17.425    16.300     6.121     [4.717 , 29.559]
7     19.695    12.569     7.733    [-4.275 , 26.973]
8     21.210     3.039     8.339   [-12.379 , 19.761]
9     21.970     1.653     7.540    [-9.502 , 21.073]
10    23.485     3.060     6.664   [-13.960 , 14.078]
---------------------------------------------------------
11    25.000     6.118     8.836    [-6.110 , 27.954]
12    25.760    11.823     9.513    [-2.996 , 33.270]
13    26.515    12.311     9.746   [-23.007 , 15.243]
14    28.790   -17.533     8.520   [-20.891 , 15.791]
15    30.305   -32.221    10.024   [-49.905 , -11.277]
---------------------------------------------------------
16    31.060   -36.962    11.016   [-67.843 , -25.825]
17    31.820   -31.760     9.171   [-37.713 , -1.062]
18    33.335   -21.347     8.789   [-46.161 , -9.332]
19    34.850   -13.412    11.053   [-34.039 , 8.122]
20    36.365   -15.438    11.606   [-44.170 , 1.813]
---------------------------------------------------------
=========================================================

The pointwise results are summarized in the above table. Clearly, when the temperature is low, the point estimate of the rental difference is significantly positive, since the robust confidence intervals do not cover zero. In contrast, when the temperature rises above roughly 18 degrees, the difference is no longer statistically significant. This implies that the difference in the number of rentals between working days and other periods is less pronounced when the weather is warm. Again, we can use the command lsprobust.plot() to plot point estimates, confidence intervals, and the uniform band simultaneously:

> lsprobust.plot(diff, CS = "all", xlabel = "Temperature", ylabel = "Number of Rentals",
+ legendGroups = "Difference between Working and Other Days") +
+ theme(legend.position = c(0.3, 0.2))
> ggsave("output/diff2.pdf")
Figure 5: Point estimation and robust inference: rental difference

The confidence band for the difference is constructed based on the plug-in distributional approximation computed previously. It leads to an even stronger conclusion: the entire difference as a function of temperature is significantly positive uniformly over a range of low temperatures since the confidence band is above zero when the temperature is low.

5 Summary

We gave an introduction to the software package lspartition, which offers estimation and robust inference procedures (both pointwise and uniform) for partitioning-based least squares regression. In particular, splines, wavelets, and piecewise polynomials are implemented. The main underlying methodologies were illustrated empirically using real data. Finally, installation details, scripts replicating the numerical results reported herein, links to software repositories, and other companion information can be found on the package's website.

References

  • Calonico et al. (2018) Calonico, S., Cattaneo, M. D., and Farrell, M. H. (2018), “On the Effect of Bias Estimation on Coverage Accuracy in Nonparametric Inference,” Journal of the American Statistical Association, 113, 767–779.
  • Calonico et al. (2019) Calonico, S., Cattaneo, M. D., and Farrell, M. H. (2019), “Coverage Error Optimal Confidence Intervals for Local Polynomial Regression,” arXiv:1808.01398.
  • Cattaneo and Farrell (2013) Cattaneo, M. D., and Farrell, M. H. (2013), “Optimal Convergence Rates, Bahadur Representation, and Asymptotic Normality of Partitioning Estimators,” Journal of Econometrics, 174, 127–143.
  • Cattaneo et al. (2019) Cattaneo, M. D., Farrell, M. H., and Feng, Y. (2019), “Large Sample Properties of Partitioning-Based Estimators,” Annals of Statistics, forthcoming.
  • Chui (2016) Chui, C. K. (2016), An Introduction to Wavelets, Elsevier.
  • Cohen et al. (1993) Cohen, A., Daubechies, I., and Vial, P. (1993), “Wavelets on the Interval and Fast Wavelet Transforms,” Applied and Computational Harmonic Analysis, 1, 54–81.
  • Fan and Gijbels (1996) Fan, J., and Gijbels, I. (1996), Local Polynomial Modelling and Its Applications, New York: Chapman & Hall/CRC.
  • Györfi et al. (2002) Györfi, L., Kohler, M., Krzyżak, A., and Walk, H. (2002), A Distribution-Free Theory of Nonparametric Regression, Springer-Verlag.
  • Long and Ervin (2000) Long, J. S., and Ervin, L. H. (2000), “Using Heteroscedasticity Consistent Standard Errors in the Linear Regression Model,” The American Statistician, 54, 217–224.
  • Ruppert et al. (2009) Ruppert, D., Wand, M. P., and Carroll, R. (2009), Semiparametric Regression, New York: Cambridge University Press.
  • Zhang and Singer (2010) Zhang, H., and Singer, B. H. (2010), Recursive Partitioning and Applications, Springer.