Alternative Asymptotics and the Partially Linear Model with Many Regressors
Abstract
Nonstandard distributional approximations have received considerable attention in recent years. They often provide more accurate approximations in small samples, and theoretical improvements in some cases. This paper shows that the seemingly unrelated “many instruments asymptotics” and “small bandwidth asymptotics” share a common structure, where the object determining the limiting distribution is a V-statistic with a remainder that is an asymptotically normal degenerate U-statistic. We illustrate how this general structure can be used to derive new results by obtaining a new asymptotic distribution of a series estimator of the partially linear model when the number of terms in the series approximation possibly grows as fast as the sample size, which we call “many terms asymptotics”.
JEL classification: C13, C31.
Keywords: nonstandard asymptotics, partially linear model, many terms, adjusted variance.
1 Introduction
Many instrument asymptotics, where the number of instruments grows as fast as the sample size, has proven useful for instrumental variables (IV) estimators. Kunitomo (1980) and Morimune (1983) derived asymptotic variances that are larger than the usual formulae when the number of instruments and sample size grow at the same rate, and Bekker (1994) and others provided consistent estimators of these larger variances. Hansen, Hausman, and Newey (2008) showed that using many instrument standard errors provides a theoretical improvement for a range of number of instruments and a practical improvement for estimating the returns to schooling. Thus, many instrument asymptotics and the associated standard errors have been demonstrated to be a useful alternative to the usual asymptotics for instrumental variables.
Instrumental variable estimators implicitly depend on a nonparametric series estimator. Many instrument asymptotics has the number of series terms growing so fast that the series estimator is not consistent. Analogous asymptotics for kernel-based density-weighted average derivative estimators has been considered by Cattaneo, Crump, and Jansson (2010, 2014b). They show that when the bandwidth shrinks faster than needed for consistency of the kernel estimator, the variance of the estimator is larger than the usual formula. They also find that correcting the variance provides an improvement over standard asymptotics for a range of bandwidths.
The purpose of this paper is to show that these results share a common structure, and to illustrate how this structure can be used to derive new results. The common structure is that the object determining the limiting distribution is a V-statistic, which can be decomposed into a bias term, a sample average, and a “remainder” that is an asymptotically normal degenerate U-statistic. Asymptotic normality of the remainder distinguishes this setting from other ones involving V-statistics. Here the asymptotically normal remainder comes from the number of series terms going to infinity or the bandwidth shrinking to zero, while the behavior of a degenerate U-statistic tends to be more complicated in other settings. When the number of terms grows as fast as the sample size, or the bandwidth shrinks to zero at an appropriate rate, the remainder has the same magnitude as the leading term, resulting in an asymptotic variance larger than just the variance of the leading term. The many instrument and small bandwidth results share this structure. In keeping with this common structure, we will henceforth refer to such results under the general heading of “alternative asymptotics”.
The alternative asymptotics that we discuss in this paper applies to statistics that take a specific V-statistic representation, or may be approximated by it sufficiently accurately, and therefore it does not apply broadly to all possible semiparametric settings. Nonetheless, as we illustrate below, this structure arises naturally in several interesting problems in Economics and Statistics. In particular, we show formally that applying this common structure to a series estimator of the partially linear model leads to new results. These results allow the number of terms in the series approximation to grow as fast as the sample size. The asymptotic distribution of the estimator is derived and it is shown to have a larger asymptotic variance than the usual formula, which is in fact a natural and generic consequence of the specific structure that we highlight in this paper. We also find that under homoskedasticity, the classical degrees of freedom adjusted homoskedastic standard error estimator from linear models is consistent even when the number of terms is “large” relative to the sample size. This result offers a large sample, distribution free justification for the degrees of freedom correction when many series terms are employed. Constructing an automatic consistent standard error estimator under (conditional) heteroskedasticity of unknown form in this setting turns out to be quite challenging. In Cattaneo, Jansson, and Newey (2015), we present a detailed discussion of heteroskedasticity-robust standard errors for general linear models with increasing dimension, which covers the partially linear model with many terms studied herein as a special case.
The rest of the paper is organized as follows. Section 2 describes the common structure of many instrument and small bandwidth asymptotics, and also shows how the structure leads to new results for the partially linear model. Section 3 formalizes the new distributional approximation for the partially linear model. Section 4 reports results from a small simulation study aimed to illustrate our results in small samples. Section 5 concludes. The appendix collects the proofs of our results.
2 A Common Structure
To describe the common structure of many instrument and small bandwidth asymptotics, let denote independent random vectors. We consider an estimator of a generic parameter of interest satisfying
(1) 
where is a function that can depend on , , and . We allow to depend on to account for numbers of terms or bandwidths that change with the sample size. Also, we allow to vary with and to account for dependence on variables that are being conditioned on in the asymptotics, and so are treated as nonrandom.
We assume throughout this section that there exists a sequence of nonrandom matrices satisfying for the identity matrix, and hence we focus on the V-statistic . (All limits are taken as unless explicitly stated otherwise.) This V-statistic has a well known (Hoeffding-type) decomposition that we describe here because it is an essential feature of the common structure. For notational simplicity we will drop the and arguments and set and .
Letting denote the Euclidean norm, and if for all , then
(2) 
where
so that , , and .
This decomposition of a V-statistic is well known (e.g., van der Vaart (1998, Chapter 11)), and shows that can be decomposed into a sum of independent terms, a U-statistic remainder that is a martingale difference sum and uncorrelated with , and a pure bias term .
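As a purely illustrative numerical check of this Hoeffding-type decomposition, the sketch below uses a toy symmetric kernel w(u, v) = uv with uniform data, for which the projections are available in closed form; the kernel, sample size, and variable names are our choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
z = rng.uniform(size=n)

# Toy symmetric kernel w(u, v) = u*v with z ~ U(0, 1):
# theta = E[w(Z, Z')] = 1/4, projection w1(u) = E[w(u, Z')] = u/2.
theta = 0.25
w = np.outer(z, z)

# U-statistic over the n(n-1) ordered distinct pairs.
n_pairs = n * (n - 1)
U = (w.sum() - np.trace(w)) / n_pairs

# Hoeffding decomposition: U = theta + Hajek projection + degenerate remainder.
w1 = z / 2.0
hajek = 2.0 * np.mean(w1 - theta)
w2 = w - w1[:, None] - w1[None, :] + theta      # degenerate kernel
remainder = (w2.sum() - np.trace(w2)) / n_pairs

# The decomposition is an exact algebraic identity, so the
# reconstruction error should be at machine-precision level.
recon_error = abs(U - (theta + hajek + remainder))
```

The Hajek projection is a sum of independent terms of order n^{-1/2}, while the degenerate remainder is of order n^{-1} here; in the alternative asymptotics of the paper the kernel itself changes with n, which is what allows the remainder to be of the same order as the projection.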
An interesting feature of the decomposition (2) in semiparametric settings is that is asymptotically normal at some rate when the number of series terms grows or the bandwidth shrinks to zero. To be specific, under regularity conditions and appropriate tuning parameter sequences that we make precise below, it turns out that
In other settings, where the underlying kernel of the U-statistic does not vary with the sample size, the asymptotic behavior of is usually more complicated: because it is a degenerate U-statistic, it would converge to a weighted sum of independent chi-square random variables (e.g., van der Vaart (1998, Chapter 12)). However, in semiparametric-type settings such as those considered here, the kernel of the underlying U-statistic forming changes with the sample size and hence, under particular tuning parameter configurations, the individual contributions to can be made small enough to satisfy a Lindeberg-Feller condition and thus obtain a Gaussian limiting distribution (usually employing the martingale property of ). For an interesting discussion of this phenomenon, see de Jong (1987). The asymptotic normality property of has been shown for certain classes of both series and kernel-based estimators, as further explained below.
Alternative asymptotics occurs when the number of series terms grows or the bandwidth shrinks fast enough so that and have the same magnitude in the limit. Because and are uncorrelated, the asymptotic variance will be larger than the usual formula, which is (assuming the limit exists). As a consequence, consistent variance estimation under alternative asymptotics requires accounting for the contribution of to the (asymptotic) sampling variability of the statistic. Accounting for the presence of should also yield improvements when the number of series terms or the bandwidth does not satisfy the knife-edge conditions of alternative asymptotics, since is part of the semiparametric statistic. For instance, if the number of series terms grows just slightly slower than the sample size, then accounting for the presence of should still give a better large sample approximation. Hansen, Hausman, and Newey (2008) show such an improvement for many instrument asymptotics. It would be good to consider such improved approximations more generally, though it is beyond the scope of this paper to do so.
Distribution theory under alternative asymptotics may be seen as a generalization of the conventional large sample distributional approximation approach in the sense that under conventional sequences of tuning parameters the asymptotic variances emerging from both approaches coincide. But the alternative asymptotic approximation also allows for other tuning parameter sequences and, in this case, the limiting asymptotic variance is seen to be larger than usual. Thus, in general, there is no reason to expect that the usual standard error formulas derived under conventional asymptotics will remain valid more generically. From this perspective, alternative asymptotics are useful for providing theoretical justification for new standard error formulas that are consistent under more general sequences of tuning parameters, that is, under both conventional and alternative asymptotics. We refer to the latter standard error formulas as being more robust than the usual standard error formulas available in the literature. For instance, using these ideas, the case for new, more robust standard error formulas was made before for many instrument asymptotics in IV models (Hansen, Hausman, and Newey (2008)) and small bandwidth asymptotics in kernel-based semiparametrics (Cattaneo, Crump, and Jansson (2014b)).
To illustrate these ideas, we show next that both many instrument asymptotics and small bandwidth asymptotics have the structure described above, and we also employ this approach to derive new results in the case of a series estimator of the partially linear model, which we refer to as “many terms asymptotics”.
Example 1: “Many Instrument Asymptotics”
The first example is concerned with the case of many instrument asymptotics. For simplicity we focus on the JIVE2 estimator of Angrist, Imbens, and Krueger (1999), but the idea applies to other IV estimators such as the limited information maximum likelihood estimator. See Chao, Swanson, Hausman, Newey, and Woutersen (2012) for more details, including regularity conditions under which the following discussion can be made rigorous.
Let , , be a random sample generated by the model
(3) 
where is a scalar dependent variable, is a vector of endogenous variables, is a disturbance, and is a vector of instrumental variables.
To describe the JIVE2 estimator of in (3), let denote the th element of , where . After centering and scaling, the JIVE2 estimator satisfies
Conditional on has the structure in (1) with and
where is the indicator function.
For , and
where can be interpreted as the reduced form for observation . As a consequence, (2) is satisfied with ,
Because is the th residual from regressing the reduced form observations on , by appropriate definition of the reduced form this can generally be assumed to vanish as the sample size grows. In that case,
Furthermore, under standard asymptotics will go to zero, so the limiting variance of the leading term in corresponds to the usual asymptotic variance for IV. The degenerate U-statistic term is
Chao, Swanson, Hausman, Newey, and Woutersen (2012) apply a martingale central limit theorem to show that this will be asymptotically normal when and certain regularity conditions hold. The conditions of the martingale central limit theorem are verified by showing that certain linear combinations with coefficients depending on the elements of go to zero as . In the proof, this makes individual terms asymptotically negligible, with a LindebergFeller condition being satisfied. Alternative asymptotics occurs when grows as fast as , resulting in and having the same magnitude in the limit.
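A minimal computational sketch of the JIVE2 estimator may help fix ideas. The simulated DGP below is purely illustrative (the instrument strength, dimensions, and variable names are our assumptions, not taken from the paper); JIVE2 replaces the projection matrix of the instruments with its off-diagonal part, removing the own-observation terms that drive many-instrument bias:

```python
import numpy as np

rng = np.random.default_rng(42)
n, K = 2000, 20          # sample size and number of instruments
beta = 1.0               # true structural coefficient

# Illustrative DGP: endogenous scalar regressor, many instruments.
Z = rng.normal(size=(n, K))
pi = np.full(K, 0.3)                     # first-stage coefficients
u = rng.normal(size=n)                   # common shock => endogeneity
x = Z @ pi + u + rng.normal(size=n)
y = beta * x + u + rng.normal(size=n)

# Projection matrix of the instruments and its off-diagonal part.
P = Z @ np.linalg.solve(Z.T @ Z, Z.T)
P_od = P - np.diag(np.diag(P))

# JIVE2: use P with its diagonal deleted in the IV moment conditions.
beta_jive2 = (x @ P_od @ y) / (x @ P_od @ x)
beta_2sls = (x @ P @ y) / (x @ P @ x)    # standard 2SLS, for comparison
```

With strong instruments both estimators are close to the truth; the difference between them grows as K becomes large relative to n.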
Example 2: “Small Bandwidth Asymptotics”
The second example shows that small bandwidth asymptotics for certain kernel-based semiparametric estimators also has the structure outlined above. To keep the exposition simple we focus on an estimator of the integrated squared density, but the structure of this estimator is shared by the density-weighted average derivative estimator of Powell, Stock, and Stoker (1989) treated in Cattaneo, Crump, and Jansson (2014b) and more generally by estimators of density-weighted averages and ratios thereof (see, e.g., Newey, Hsieh, and Robins (2004, Section 2) and references therein).
Suppose , , are i.i.d. continuously distributed dimensional random vectors with smooth p.d.f. and consider estimation of the integrated squared density
A leave-one-out kernel-based estimator is
where is a symmetric kernel and . This estimator has the V-statistic form of (1) with and
Here, is an approximation to the well known influence function for estimators of the integrated squared density. Under regularity conditions, converges to in mean square as , so that
A martingale central limit theorem can be applied as in Cattaneo, Crump, and Jansson (2014b) to show that the degenerate U-statistic term will be asymptotically normal as and , provided that . It is easy to show that (under regularity conditions). Alternative asymptotics occurs when shrinks as fast as , resulting in and having the same magnitude in the limit.
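A leave-one-out estimator of the integrated squared density can be sketched as follows; the Gaussian kernel, scalar data, and bandwidth value are illustrative choices of ours (for standard normal data the target is the known constant 1/(2√π)):

```python
import numpy as np

rng = np.random.default_rng(0)
n, h = 2000, 0.5
z = rng.normal(size=n)                   # true density: standard normal

# Leave-one-out kernel estimator of theta = integral of f(z)^2 dz,
# theta_hat = (n(n-1))^{-1} * sum_{i != j} h^{-1} K((z_i - z_j) / h),
# here with a Gaussian kernel K and scalar z (d = 1).
diff = (z[:, None] - z[None, :]) / h
Kvals = np.exp(-0.5 * diff**2) / np.sqrt(2.0 * np.pi)
np.fill_diagonal(Kvals, 0.0)             # leave-one-out: drop i = j terms
theta_hat = Kvals.sum() / (n * (n - 1) * h)

theta_true = 1.0 / (2.0 * np.sqrt(np.pi))   # integral of phi(z)^2 dz
```

Shrinking h reduces the smoothing bias visible here, but under small bandwidth asymptotics it also makes the degenerate U-statistic term a first-order contribution to the variance.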
Example 3: “Many Terms Asymptotics”
The previous two examples show how several estimators share the common structure outlined above. To illustrate how this structure can be applied to derive new results, the third example studies series estimation in the context of the partially linear model. The results will shed light on the asymptotic behavior of this estimator, and the associated inference procedures, when the number of terms is allowed to grow as fast as the sample size.
Let , , be a random sample of generated by the partially linear model
(4) 
where is a scalar dependent variable, and are explanatory variables, is a disturbance, is an unknown function, and is of full rank.
A series estimator of is obtained by regressing on and approximating functions of . To describe the estimator, let be approximating functions, such as polynomials or splines, and let be a dimensional vector of such functions. Letting denote the th element of where , a series estimator of in (4) is given by
Donald and Newey (1994) gave conditions for asymptotic normality of this estimator using standard asymptotics. See also, for example, Linton (1995) and references therein for related asymptotic results when using kernel estimators.
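A minimal sketch of this series estimator: regress the outcome on the regressor of interest together with the series terms, and take the coefficient on the regressor. The power-series basis and the DGP below (the choice of g and of the first-stage relation between x and z) are assumptions made only for this example:

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 1000, 12          # sample size and number of series terms
beta = 2.0               # true coefficient on x

# Illustrative partially linear DGP: y = x*beta + g(z) + eps,
# with x correlated with z through its conditional mean.
z = rng.uniform(-1, 1, size=n)
x = np.sin(np.pi * z) + rng.normal(size=n)
g = np.exp(z)
y = beta * x + g + rng.normal(size=n)

# Power-series basis p(z) = (1, z, ..., z^(K-1)); beta_hat is the
# x-coefficient from the long regression of y on (x, p(z)).
P = z[:, None] ** np.arange(K)
R = np.column_stack([x[:, None], P])
coef, *_ = np.linalg.lstsq(R, y, rcond=None)
beta_hat = coef[0]
```

The alternative asymptotics studied below concerns the behavior of this estimator when K is allowed to grow proportionally to n rather than staying small as here.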
Conditional on , has the structure outlined earlier:
(5) 
with
where . In other words, has the V-statistic form of (1) with and .
By we have . Therefore, letting as we have done previously, we have
for , where and . In this case, the bias term in (2) is
which will be negligible under regularity conditions, as shown in the next section. Moreover,
where has mean zero and converges to zero in mean square as grows, as further discussed below. Under standard asymptotics will go to one and hence the limiting variance of the leading term in corresponds to the usual asymptotic variance.
Finally, we find that the degenerate U-statistic term is
Remarkably, this term is essentially the same as the degenerate U-statistic term for JIVE2 that was discussed above. Consequently, the central limit theorem of Chao, Swanson, Hausman, Newey, and Woutersen (2012) is applicable to this problem. We will employ it to show that is asymptotically normal as , even when does not converge to zero.
This example highlights a new approach to studying the asymptotic distribution of semilinear regression under many terms asymptotics. The alternative asymptotic approximation is useful, for instance, when the number of covariates entering the nonparametric part is large relative to the sample size, as is often the case in empirical applications.
3 Many Terms Asymptotics
In this section we make precise the discussion given in Example 3, and also discuss consistent standard error estimation under homoskedasticity. The estimator described in Example 3 can be interpreted as a two-step semiparametric estimator with tuning parameter , the first step involving series estimation of the unknown (regression) functions and . Donald and Newey (1994) gave conditions for asymptotic normality of this estimator when . Here we generalize their findings by obtaining an asymptotic distributional result that is valid even when is bounded away from zero.
The analysis proceeds under the following assumption.
 Assumption PLM

(Partially Linear Model)

(a) , , is a random sample.

(b) There is a such that and .

(c) There is a such that and .

(d) (a.s.) and there is a such that .

(e) For some , there is a such that
Because , an implication of part (d) is that , but crucially Assumption PLM does not imply that . Part (e) is implied by conventional assumptions from approximation theory. For instance, when the support of is compact, commonly used bases of approximation, such as polynomials or splines, will satisfy this assumption with and , where and denote the number of continuous derivatives of and , respectively. Further discussion and related references for several bases of approximation may be found in Newey (1997), Chen (2007) and Belloni, Chernozhukov, Chetverikov, and Kato (2015), among others.
3.1 Asymptotic Distribution
From equation (5), and the discussion in the previous section, we see that the asymptotic distribution of will be determined by the behavior of and . The following lemma approximates without requiring that .
Lemma 1
If Assumption PLM is satisfied and if , then
Because , it follows from this result that in the homoskedastic case (i.e., when ) is close to
in probability. More generally, with heteroskedasticity, will be close to the weighted average . Importantly, this result includes standard asymptotics as a special case when , where , the law of large numbers and iterated expectations imply
Next, we study
The following lemma quantifies the magnitude of the bias term as well as the additional variability arising from the (remainder) term .
Lemma 2
If Assumption PLM is satisfied and if then and .
Like the previous lemma, this lemma does not require . Interestingly, the bias term involves approximation of both unknown functions and , implying an implicit tradeoff between smoothness conditions for and . The implied bias condition only requires that be large enough, but not necessarily that and separately be large. It follows that if this bias condition holds, then
as claimed in Example 3 above.
Having dispensed with asymptotically negligible contributions to , we turn to its leading term. This term is shown below to be asymptotically Gaussian with asymptotic variance given by
Here, the first term following the second equality corresponds to the usual asymptotic approximation, while the second term accounts for large . Once again it is interesting to consider what happens in some special cases. Under homoskedasticity of (i.e., when ),
because . If, in addition, , then . Also, if , then by and the law of large numbers, we have
which corresponds to the standard asymptotics limiting variance.
The following theorem combines Lemmas 1 and 2 with a central limit theorem for quadratic forms to show asymptotic normality of .
Theorem 1
If Assumption PLM is satisfied and if , then
If, in addition, , then .
This theorem shows that is asymptotically normal even when does not converge to zero. An implication of this result is that inconsistent series-based nonparametric estimators of the unknown functions and may be employed when forming , that is, is allowed (increasing the variability of the nonparametric estimators), provided that (to remove nonparametric smoothing bias). This asymptotic distributional result does not rely on asymptotic linearity, nor on the actual convergence of the matrices and , and leads to a new (larger) asymptotic variance that captures terms that are assumed away by the classical result. The asymptotic distribution result of Donald and Newey (1994) is obtained as a special case where . More generally, when does not converge to zero, the asymptotic variance will be larger than the usual formula because it accounts for the contribution of the “remainder” in equation (2). For instance, when both and are homoskedastic, the asymptotic variance is
which is larger than the usual asymptotic variance by the degrees of freedom correction
3.2 Asymptotic Variance Estimation under Homoskedasticity
Consistent asymptotic variance estimation is useful for large sample inference. If the assumptions of Theorem 1 are satisfied and if , then
implying that valid large-sample confidence intervals and hypothesis tests for linear and nonlinear transformations of the parameter vector can be based on .
Let denote the usual OLS estimator of incorporating a degrees of freedom correction. The following theorem shows that is a consistent estimator, even when the number of terms is “large” relative to the sample size.
Theorem 2
Suppose the conditions of Theorem 1 are satisfied. If , then and , where .
This theorem provides a distribution free, large sample justification for the degrees-of-freedom correction required for exact inference under homoskedastic Gaussian errors. Intuitively, accounting for the correct degrees of freedom is important whenever the number of terms in the semilinear model is “large” relative to the sample size.
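A stylized illustration of why the correction matters when the number of regressors is large relative to the sample size (this is a pure-noise regression of our own construction, not the paper's estimator): without the correction, the residual variance estimator is biased downward by roughly the factor (n - K)/n.

```python
import numpy as np

rng = np.random.default_rng(7)
n, K = 1000, 500         # K/n = 0.5: "many terms"
sigma2 = 1.0             # true error variance

# Pure-noise regression: y = eps, regressed on K random columns.
R = rng.normal(size=(n, K))
y = rng.normal(scale=np.sqrt(sigma2), size=n)

resid = y - R @ np.linalg.lstsq(R, y, rcond=None)[0]
ssr = resid @ resid      # E[ssr] = sigma2 * (n - K)

s2_naive = ssr / n           # biased down by factor (n - K)/n
s2_dof = ssr / (n - K)       # degrees-of-freedom corrected
```

Here s2_naive is close to sigma2/2 while s2_dof remains close to sigma2, mirroring the large-sample consistency of the corrected estimator even when K/n is bounded away from zero.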
4 Small Simulation Study
We conducted a Monte Carlo experiment to explore the extent to which the asymptotic theoretical results obtained in the previous section are present in small samples. Using the notation already introduced, we consider the following partially linear model:

where , , , with i.i.d. , . The unknown regression functions are set to , which are not additively separable in the covariates . The simulation study is based on replications, each replication taking a random sample of size with all random variables generated independently. We consider data generating processes (DGPs) as follows:

Specifically, Models 1, 3 and 5 correspond to homoskedastic (in ) DGPs, while Models 2, 4 and 6 correspond to heteroskedastic (in ) DGPs. For the latter models, the constant was chosen so that . The three distributions considered for the unobserved error terms and are: the standard Normal (labelled “Gaussian”) and two mixtures of Normals inducing either an asymmetric or a bimodal distribution; their Lebesgue densities are depicted in Figure 1. We explored other specifications for the regression functions, heteroskedasticity form, and distributional assumptions, but we do not report these additional results because they were qualitatively similar to those discussed here.
The estimators considered in the Monte Carlo experiment are constructed using power series approximations. We do not impose additive separability on the basis, though we do restrict the interaction terms to not exceed degree 5. To be specific, we consider the following polynomial basis expansion:

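One way to construct such a non-additively-separable power-series basis with interaction terms capped at total degree 5 is sketched below; we assume a bivariate covariate purely for illustration, and the function name is ours:

```python
import numpy as np
from itertools import product

def power_basis_2d(z1, z2, max_degree=5):
    """All monomials z1^a * z2^b with total degree a + b <= max_degree,
    so interactions are included but capped (no additive separability
    imposed on the basis)."""
    cols = [z1**a * z2**b
            for a, b in product(range(max_degree + 1), repeat=2)
            if a + b <= max_degree]
    return np.column_stack(cols)

# Illustrative evaluation points on [-1, 1]^2.
z1 = np.linspace(-1, 1, 50)
z2 = np.linspace(-1, 1, 50)
B = power_basis_2d(z1, z2)
```

With max_degree = 5 this yields 21 basis functions, the number of bivariate monomials of total degree at most 5.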