The EFM approach for single-index models
Abstract
Single-index models are natural extensions of linear models and circumvent the so-called curse of dimensionality. They are becoming increasingly popular in many scientific fields including biostatistics, medicine, economics and financial econometrics. Estimating and testing the model index coefficients is one of the most important objectives in the statistical analysis. However, the commonly used constraint on the index coefficients, ‖β‖ = 1, represents a non-regular problem: the true index is on the boundary of the unit ball. In this paper we introduce the EFM approach, a method of estimating functions, to study the single-index model. The procedure is to first relax the equality constraint to one with the components of β lying in an open unit ball, and then to construct the associated estimating functions by projecting the score function onto the linear space spanned by the residuals, with the unknown link being estimated by kernel estimating functions. The √n-consistency and asymptotic normality of the estimator obtained from solving the resulting estimating equations are established, and a Wilks-type theorem for testing the index is demonstrated. A noticeable result we obtain is that our estimator of the index has a smaller or equal limiting variance than the estimator of Carroll et al. [J. Amer. Statist. Assoc. 92 (1997) 477–489]. A fixed-point iterative scheme for computing this estimator is proposed. This algorithm involves only one-dimensional nonparametric smoothers, thereby avoiding the data sparsity problem caused by high model dimensionality. Numerical studies based on simulation and on applications suggest that this new estimating system is quite powerful and easy to implement.
DOI: 10.1214/10-AOS871. The Annals of Statistics 2011, Vol. 39, No. 3, 1658–1688.
Xia Cui (cuixia@mail.sysu.edu.cn), Wolfgang Karl Härdle (haerdle@wiwi.hu-berlin.de) and Lixing Zhu (lzhu@hkbu.edu.hk). Xia Cui's research was supported by NNSF project (11026194) of China, RFDP (20100171120042) of China and "the Fundamental Research Funds for the Central Universities" (11lgpy26) of China. Wolfgang Karl Härdle's research was supported by Deutsche Forschungsgemeinschaft SFB 649 "Ökonomisches Risiko." Lixing Zhu's research was supported by a grant (HKBU2030/07P) from the Research Grants Council of Hong Kong, Hong Kong, China.
AMS subject classifications: 62G08, 62G20.
Keywords: single-index models, index coefficients, estimating equations, asymptotic properties, iteration.
1 Introduction
Single-index models combine flexibility of modeling with interpretability of (linear) coefficients. They circumvent the curse of dimensionality and are becoming increasingly popular in many scientific fields. The reduction of dimension is achieved by assuming the link function to be a univariate function applied to the projection of the explanatory covariate vector onto some direction. In this paper we consider an extension of single-index models where, instead of a distributional assumption, assumptions are made only on the mean function and variance function of the response. Let (Yᵢ, Xᵢ), i = 1, …, n, denote the observed values, with Yᵢ being the response variable and Xᵢ the p-dimensional vector of explanatory variables. The relationship of the mean and variance of Yᵢ is specified as follows:
(1) E(Yᵢ | Xᵢ) = μ{g(βᵀXᵢ)},  Var(Yᵢ | Xᵢ) = σ²V{g(βᵀXᵢ)},
where μ is a known monotonic function, V is a known covariance function, g is an unknown univariate link function and β is an unknown index vector belonging to the parameter space Θ = {β ∈ ℝᵖ : ‖β‖ = 1, β₁ > 0}. Here we assume the parameter space is Θ rather than the entire ℝᵖ in order to ensure that β in the representation (1) can be uniquely defined. This is a commonly used assumption on the index parameter [see Carroll et al. (1997), Zhu and Xue (2006), Lin and Kulasekera (2007)]. Another reparameterization fixes the sign of one component for sign identifiability and rescales the vector to unit length for scale identifiability. Clearly such a reparameterization can also span the parameter space Θ, since the resulting vector has unit norm and a positive first component. However, the fixed-point algorithm recommended in this paper for normalized vectors may not be suitable for such a reparameterization. Model (1) is flexible enough to cover a variety of situations. If μ is the identity function and σ²V is constant, (1) reduces to a single-index model [Härdle, Hall and Ichimura (1993)]. Model (1) is an extension of the generalized linear model [McCullagh and Nelder (1989)] and the single-index model. When the conditional distribution of Yᵢ is logistic, then μ(t) = eᵗ/(1 + eᵗ) and V(t) = μ(t){1 − μ(t)}.
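As a concrete numerical illustration of the identifiable parameterization (a minimal sketch, not the authors' code; the function names are ours, and we assume the convention that β is recovered from a free parameter φ in the open unit ball of ℝᵖ⁻¹ as β = (√(1 − ‖φ‖²), φᵀ)ᵀ):

```python
import numpy as np

def beta_from_phi(phi):
    """Map a free parameter phi in the open unit ball of R^(p-1) to the
    index vector beta on the unit sphere of R^p with beta_1 > 0."""
    phi = np.asarray(phi, dtype=float)
    first = np.sqrt(1.0 - phi @ phi)   # recovered first coefficient
    return np.concatenate(([first], phi))

def beta_jacobian(phi):
    """Jacobian d(beta)/d(phi): a p x (p-1) matrix whose first row is
    -phi^T / sqrt(1 - ||phi||^2) and whose lower block is the identity."""
    phi = np.asarray(phi, dtype=float)
    first = np.sqrt(1.0 - phi @ phi)
    return np.vstack([-phi / first, np.eye(phi.size)])
```

By construction the returned vector has unit norm and a positive first component, so the equality constraint is satisfied automatically while φ ranges over an open set, which is exactly what makes the classical regularity framework applicable.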
For single-index models, that is, when μ is the identity and σ²V is constant, various strategies for estimating β have been proposed in the last decades. The two most popular methods are the average derivative estimation (ADE) method introduced in Powell, Stock and Stoker (1989) and Härdle and Stoker (1989), and the simultaneous minimization method of Härdle, Hall and Ichimura (1993). Next we review these two methods in short. The ADE method is based on the observation that the gradient of the regression function is proportional to the index parameter β, so a natural estimator of β is an estimated average gradient normalized by its Euclidean norm. An advantage of the ADE approach is that it allows estimating β directly. However, the high-dimensional kernel smoothing used for computing the gradient suffers from the "curse of dimensionality" if the model dimension p is large. Hristache, Juditski and Spokoiny (2001) improved the ADE approach by lowering the dimension of the kernel gradually. The method of Härdle, Hall and Ichimura (1993) minimizes a least squares criterion, based on nonparametric estimation of the link, simultaneously with respect to β and the bandwidth h. However, the minimization is difficult to implement since it involves an optimization problem in a high-dimensional space. Xia et al. (2002) proposed to minimize the average conditional variance (MAVE). Because the kernel used in MAVE is a function of the multivariate covariates, MAVE meets the problem of data sparseness. All the above estimators are consistent under some regularity conditions. Asymptotic efficiency comparisons of these methods are discussed in Xia (2006), which shows that the MAVE estimator of β has the same limiting variance as the estimator of Härdle, Hall and Ichimura (1993), and that alternative versions of the ADE method have larger variance. In addition, Yu and Ruppert (2002) fitted partially linear single-index models using a penalized spline method, and Huh and Park (2002) used the local polynomial method to fit the unknown function in single-index models.
Other dimension reduction methods recently developed in the literature are sliced inverse regression, partial least squares and the canonical correlation method. These methods can handle high-dimensional predictors; see Zhu and Zhu (2009a, 2009b) and Zhou and He (2008).
The main challenges of estimation in the semiparametric model (1) are that the support of the infinite-dimensional nuisance parameter g depends on the finite-dimensional parameter β, and that β lies on the boundary of a unit ball. The former challenge forces us to deal with the infinite-dimensional nuisance parameter g when estimating β. The latter represents a non-regular problem: the classical assumptions underlying the asymptotic properties of the estimates of β are not valid. In addition, as a model proposed for dimension reduction, the dimension p may be very high and one often meets computational problems. To attack the above problems, in this paper we develop an estimating function method (EFM) and then introduce a computational algorithm to solve the equations based on a fixed-point iterative scheme. We first choose an identifiable parameterization which transforms the boundary of a unit ball in ℝᵖ to the interior of a unit ball in ℝᵖ⁻¹. By eliminating the first component, the parameter space can be rearranged to an open set of free parameters φ with ‖φ‖ < 1. The derivatives of a function with respect to φ are then readily obtained by the chain rule, and the classical assumptions for asymptotic normality hold after transformation. The estimating functions (equations) for φ can be constructed by replacing β with its reparameterized form. The estimate of the nuisance parameter g is obtained using kernel estimating functions, and the smoothing parameter is selected using k-fold cross-validation. For the problem of testing the index, we establish a quasi-likelihood ratio based on the proposed estimating functions and show that, under the null hypothesis, the test statistic asymptotically follows a chi-squared distribution whose degrees of freedom do not depend on nuisance parameters. A Wilks-type theorem for testing the index is thus demonstrated.
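The k-fold cross-validation choice of the smoothing parameter mentioned above can be sketched as follows (a minimal illustration under our own assumptions: a Gaussian kernel, a one-dimensional local linear smoother and squared-error loss; all function names are ours, not the authors'):

```python
import numpy as np

def local_linear_fit(t0, t, y, h):
    """Local linear estimate of E[y | t = t0] with a Gaussian kernel."""
    w = np.exp(-0.5 * ((t - t0) / h) ** 2)
    X = np.column_stack([np.ones_like(t), t - t0])
    WX = X * w[:, None]
    # Solve the weighted normal equations; the intercept is the fit at t0.
    coef, *_ = np.linalg.lstsq(WX.T @ X, WX.T @ y, rcond=None)
    return coef[0]

def cv_bandwidth(t, y, grid, k=5, seed=0):
    """Pick the bandwidth from `grid` minimizing k-fold CV squared error."""
    rng = np.random.default_rng(seed)
    folds = rng.integers(0, k, size=len(t))
    scores = []
    for h in grid:
        err = 0.0
        for j in range(k):
            tr, te = folds != j, folds == j
            pred = np.array([local_linear_fit(t0, t[tr], y[tr], h)
                             for t0 in t[te]])
            err += np.sum((y[te] - pred) ** 2)
        scores.append(err)
    return grid[int(np.argmin(scores))]
```

The same scheme applies with the index βᵀXᵢ playing the role of the one-dimensional covariate t, which is what keeps the smoothing one-dimensional regardless of p.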
The proposed EFM technique is essentially a unified method for handling different types of data situations, including categorical response variables and discrete explanatory covariate vectors. The main results of this research are as follows:


(a) Efficiency. A surprising result we obtain is that our EFM estimator of the index has a smaller or equal limiting variance than the estimator of Carroll et al. (1997).

(b) Computation. The estimating function system involves only one-dimensional nonparametric smoothers, thereby avoiding the data sparsity problem caused by high model dimensionality. Unlike the quasi-likelihood inference [Carroll et al. (1997)], where the maximization is difficult to implement when p is large, the reparameterization and the explicit formulation of the estimating functions facilitate an efficient computational algorithm. Here we use a fixed-point iterative scheme to compute the resulting estimator. The simulation results show that the algorithm adapts to higher model dimensions and richer data situations than the MAVE method of Xia et al. (2002).
It is noteworthy that the EFM approach proposed in this paper cannot be obtained from the SLS method proposed in Ichimura (1993) and investigated in Härdle, Hall and Ichimura (1993). SLS minimizes a weighted least squares criterion, which leads to a biased estimating equation when its derivative is used if the weight does not contain the parameter of interest; in general it will not provide a consistent estimator [see Heyde (1997), page 4]. Chang, Xue and Zhu (2010) and Wang et al. (2010) discussed efficient estimation of the single-index model for the case of additive noise. However, their methods are based on estimating equations induced from least squares rather than from the quasi-likelihood; thus, their estimation does not have the optimality property. Also, their comparison is with the estimator of Härdle, Hall and Ichimura (1993) and its later developments, and it cannot be applied to the setting under study. In this paper, we investigate the efficiency and computation of the estimates for single-index models, and systematically develop and prove the asymptotic properties of EFM.
The paper is organized as follows. In Section 2, we state the single-index model, discuss estimation of the nonparametric part using kernel estimating functions and of the index using profile estimating functions, and investigate the problem of testing the index using the quasi-likelihood ratio. In Section 3 we provide a computational algorithm for solving the estimating functions and illustrate the method with simulation and practical studies. The proofs are deferred to the Appendix.
2 Estimating function method (EFM) and its large sample properties
In this section, which is concerned with inference based on the estimating function method, the model of interest is determined through specification of mean and variance functions, up to an unknown index vector β and an unknown link function g. Except for Gaussian data, model (1) need not be a full semiparametric likelihood specification. Note that the constraint ‖β‖ = 1 means that β is on the boundary of a unit ball and therefore represents a non-regular problem. So we first choose an identifiable parameterization which transforms the boundary of a unit ball in ℝᵖ to the interior of a unit ball in ℝᵖ⁻¹. By eliminating the first component, the parameter space can be rearranged to an open set of free parameters φ with ‖φ‖ < 1. The derivatives of a function with respect to φ are then readily obtained by the chain rule, and the classical assumptions for asymptotic normality hold after transformation. This reparameterization is the key to analyzing the asymptotic properties of the estimates of β and to facilitating an efficient computational algorithm. We investigate the estimation of g and β and propose a quasi-likelihood method to test the statistical significance of certain variables in the parametric component.
2.1 The kernel estimating functions for the nonparametric part
If β is known, then we estimate g and g′ using local linear estimating functions. Let h denote the bandwidth parameter, let K denote a symmetric kernel density function and write K_h(·) = K(·/h)/h. The estimation method involves local linear approximation. Denote by a and b the values of g and g′ evaluated at a point t, respectively. The local linear approximation for g(u) in a neighborhood of t is g(u) ≈ a + b(u − t). The estimators of a and b are obtained by solving the kernel estimating functions with respect to (a, b):
(2) Σᵢ₌₁ⁿ K_h(βᵀXᵢ − t) [μ′{a + b(βᵀXᵢ − t)} / V{a + b(βᵀXᵢ − t)}] [Yᵢ − μ{a + b(βᵀXᵢ − t)}] (1, (βᵀXᵢ − t)/h)ᵀ = 0.
Having estimated a and b at the point t, the local linear estimators of g(t) and g′(t) are the resulting ĝ(t; h, β) and ĝ′(t; h, β), respectively.
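In the identity-link case with constant variance these equations reduce to weighted least squares; more generally they can be solved numerically at each point t. A minimal sketch (the function names are ours; μ, μ′ and V are passed in as functions, and we use a Gaussian kernel and a generic root finder rather than the authors' specific implementation):

```python
import numpy as np
from scipy.optimize import fsolve

def kernel_estimating_eq(t, u, y, h, mu, dmu, V):
    """Solve the local linear kernel estimating equations at the point t.
    The root (a, b) estimates (g(t), g'(t)); u holds the index values."""
    w = np.exp(-0.5 * ((u - t) / h) ** 2)   # Gaussian kernel weights

    def score(ab):
        a, b = ab
        eta = a + b * (u - t)
        # Quasi-score: kernel weight times mu'/V times raw residual,
        # paired with the local linear design (1, u - t).
        q = w * dmu(eta) / V(eta) * (y - mu(eta))
        return np.array([q.sum(), (q * (u - t)).sum()])

    # Start from the response nearest to t (in kernel weight) and slope 0.
    return fsolve(score, x0=np.array([y[w.argmax()], 0.0]))
```

With μ the identity and V constant, the score is linear in (a, b), so the root coincides with the local linear least squares fit.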
The key to obtaining the asymptotic normality of the estimates of β lies in the asymptotic properties of the estimated nonparametric part. The following proposition provides some useful results. The following notation will be used. Let J = ∂β/∂φ denote the Jacobian matrix of the reparameterization, of size p × (p − 1), whose first row is −φᵀ/√(1 − ‖φ‖²) and whose lower block is the identity matrix of order p − 1. The conditional moments of the covariates given the index are denoted accordingly.
Proposition 1
Under regularity conditions (a), (b), (d) and (e) given in the Appendix, we have:
(i) With h → 0 such that nh → ∞, the asymptotic conditional bias and variance of ĝ(t; h, β) are given by
(3)  
(ii) With h → 0 such that nh³ → ∞, for the estimate ĝ′(t; h, β) of the derivative, it holds that
(4)  
(iii) With h → 0 such that nh → ∞, we have that
(5) 
The proof of this proposition appears in the Appendix. Results (i) and (ii) in Proposition 1 are routine and similar to Carroll, Ruppert and Welsh (1998). In the situation where the variance is constant and the function μ is the identity, results (i) and (ii) coincide with those given by Fan and Gijbels (1996). From result (iii), it is seen that the derivative of ĝ with respect to the index converges in probability to a limit different from the one that would arise if g were known. That is, the convergence in probability and the differentiation of the sequence (as a function of β) cannot be interchanged. This is primarily caused by the fact that the support of the infinite-dimensional nuisance parameter g depends on the finite-dimensional projection parameter β. In contrast, a semiparametric model where the support of the nuisance parameter is independent of the finite-dimensional parameter is the partially linear regression model of the form Y = Xᵀβ + g(T) + ε. There it is easy to check that the limit of the corresponding quantity is equal to the derivative with respect to β. Result (iii) ensures that the proposed estimator does not require undersmoothing of ĝ to obtain a √n-consistent estimator of β, and it is also of independent interest in inference theory for semiparametric models.
2.2 The asymptotic distribution for the estimates of the parametric part
We will now proceed to the estimation of β. We need to estimate the (p − 1)-dimensional vector φ, the estimator of which will be defined via
(6) 
This is the direct analogue of the "ideal" estimating equation for known g, in that it is calculated by replacing g with its estimate ĝ. An asymptotically equivalent and easily computed version of this equation is
where J is the Jacobian mentioned above, ĝ and ĝ′ are defined by (2), and the remaining conditional expectations are replaced by local linear estimates,
We use (2.2) to estimate φ in the single-index model, and then use the fact that β = (√(1 − ‖φ‖²), φᵀ)ᵀ to obtain the estimate of β. The use of (2.2) constitutes, in our view, a new approach to estimating single-index models; since (2.2) involves smooth pilot estimation of the link and its derivative, we call it the estimating function method (EFM).
The estimating equations can be represented as the gradient vector of the following objective function:
with Q denoting the associated quasi-likelihood and μ⁻¹ the inverse function of μ. The existence of such a potential function allows the estimating function system to inherit properties of an ideal likelihood score function. Note that the parameter space for φ is an open, connected subset of ℝᵖ⁻¹. By the regularity conditions assumed on the model (for details see the Appendix), the quasi-likelihood function is twice continuously differentiable, so that its global maximum can be achieved at some point. One may ask whether the solution is unique and also consistent. Some elementary calculations yield the Hessian matrix, which splits into two sums:
By the regularity conditions in the Appendix, the multipliers of the residuals in the first sum are bounded. Mimicking the proof of Proposition 1, the first sum can be shown to converge to 0 in probability as n goes to infinity. The second sum converges to a negative semidefinite matrix. If the Hessian matrix is negative definite for all values of φ, the estimating equation has a unique root. At the sample level, however, estimating functions may have more than one root. For the EFM method, the quasi-likelihood exists and can be used to distinguish local maxima from minima. Thus, we suppose (2.2) has a unique solution in the following context.
It can be seen from the proof in the Appendix that the population version of the estimating function is
(8) 
which is obtained by replacing ĝ and ĝ′ with g and g′ in (2.2). One important property of (8) is that the second Bartlett identity holds for any φ: the covariance matrix of the estimating function equals minus the expectation of its derivative matrix.
This property makes the semiparametric efficiency of the EFM (2.2) possible.
Let φ₀ denote the true parameter and, for any given matrix A, let A⁻ denote its Moore–Penrose inverse. We have the following asymptotic result for the estimator φ̂.
Theorem 2.2. Assume the estimating function (2.2) has a unique solution and denote it by φ̂. If the regularity conditions (a)–(e) in the Appendix are satisfied, the following results hold:
(i) With h → 0 such that nh → ∞, φ̂ converges in probability to the true parameter φ₀.
(ii) If, in addition, suitable rate conditions on the bandwidth hold,
(9) 
where Σ denotes the limiting covariance matrix.
Note that Jᵀβ = 0, so the nonnegative definite matrix JΣJᵀ degenerates in the direction of β. If the mean function is the identity and the variance function is equal to a scale constant, that is, μ(t) = t and σ²V ≡ σ², the matrix in Theorem 2.2 reduces to
Technically speaking, Theorem 2.2 shows that an undersmoothing approach is unnecessary and that √n-consistency can be achieved. The asymptotic covariance can in general be estimated by replacing the terms in its expression with estimates of those terms. The asymptotic normality of β̂ then follows from Theorem 2.2 by a simple application of the multivariate delta-method, since β is a smooth function of φ. For comparison with the results of Carroll et al. (1997), define the block partition of the asymptotic variance matrix of their estimator as follows:
(10) 
where the (1, 1) entry is a positive constant, the (1, 2) block is a (p − 1)-dimensional row vector, the (2, 1) block is a (p − 1)-dimensional column vector and the (2, 2) block is a nonnegative definite matrix.
Corollary 1
The possibly smaller limiting variance derived from the EFM approach partly benefits from the reparameterization, which allows the quasi-likelihood to be adopted. As is well known, the quasi-likelihood often enjoys optimality properties. In contrast, most existing methods treat the estimation of β as if it were done in the framework of linear dimension reduction. The target of linear dimension reduction is to find the directions that can linearly transform the original variable vector into a vector of one less dimension; ADE and SIR are two relevant methods. However, when the link function is the identity, the limiting variance derived here need not be smaller than or equal to those of Wang et al. (2010) and Chang, Xue and Zhu (2010) when the quasi-likelihood of (2.5) is applied.
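The delta-method propagation mentioned after Theorem 2.2 can be sketched numerically (a minimal illustration under our assumptions; we take the convention β = (√(1 − ‖φ‖²), φᵀ)ᵀ, and the function name is ours):

```python
import numpy as np

def beta_cov_from_phi_cov(phi, cov_phi):
    """Delta-method propagation: if sqrt(n)(phi_hat - phi) is asymptotically
    N(0, cov_phi), the limiting covariance of beta_hat = beta(phi_hat) is
    J cov_phi J^T, with J the p x (p-1) Jacobian of the reparameterization."""
    phi = np.asarray(phi, dtype=float)
    first = np.sqrt(1.0 - phi @ phi)
    J = np.vstack([-phi / first, np.eye(phi.size)])
    return J @ np.asarray(cov_phi) @ J.T
```

Because Jᵀβ = 0, the propagated matrix annihilates the direction β, which matches the degeneracy of the limiting covariance noted after Theorem 2.2 (β̂ is constrained to the unit sphere).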
2.3 Profile quasilikelihood ratio test
In applications, it is important to test the statistical significance of added predictors in a regression model. Here we establish a quasi-likelihood ratio statistic to test the significance of certain variables in the linear index. The null hypothesis that the model is correct is tested against a full-model alternative. Fan and Jiang (2007) gave a recent review of generalized likelihood ratio tests. Bootstrap tests for nonparametric regression, generalized partially linear models and single-index models have been systematically investigated [see Härdle and Mammen (1993), Härdle, Mammen and Müller (1998), Härdle, Mammen and Proenca (2001)]. Consider the testing problem:
(12)  
We mainly focus on testing whether a subvector of the index is zero, though the following test procedure can easily be extended to a general linear test in which the hypothesis matrix is known with full row rank. The profile quasi-likelihood ratio test statistic is defined by
(13) 
where μ⁻¹ is the inverse function of μ. The following Wilks-type theorem shows that the distribution of the test statistic is asymptotically chi-squared and independent of nuisance parameters.
Under the assumptions of Theorem 2.2, if the null hypothesis holds, then
(14) 
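In practice, the Wilks-type calibration amounts to comparing twice the gap between the two maximized quasi-likelihood values with a chi-squared quantile. A minimal sketch (names ours; q_full and q_null stand for the maximized quasi-likelihood under the alternative and under the null, and r for the number of restrictions tested):

```python
from scipy.stats import chi2

def qlr_test(q_full, q_null, r):
    """Profile quasi-likelihood ratio test: T = 2*(q_full - q_null) is
    referred to a chi-squared distribution with r degrees of freedom
    under the null; returns the statistic and its p-value."""
    T = 2.0 * (q_full - q_null)
    return T, chi2.sf(T, df=r)
```

The appeal of the Wilks phenomenon is visible here: the reference distribution depends only on the number of restrictions r, not on any nuisance parameter.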
3 Numerical studies
3.1 Computation of the estimates
Solving the joint estimating equations (2) and (2.2) poses some interesting challenges, since the functions ĝ and ĝ′ depend on the index implicitly. Treating the index value as a new predictor (with given β), (2) yields ĝ and ĝ′ as in Fan, Heckman and Wand (1995). We therefore focus on (2.2) as estimating equations. They cannot be solved explicitly, and hence one needs to find solutions using numerical methods. The Newton–Raphson algorithm is one of the most popular and successful root-finding methods; however, its computational behavior crucially depends on the initial value. We therefore propose a fixed-point iterative algorithm that is not very sensitive to starting values and is adaptive to larger dimensions. It is worth noting that this algorithm can be implemented even when p is slightly larger than n, because the resulting procedure involves only one-dimensional nonparametric smoothers, thereby avoiding the data sparsity problem caused by high dimensionality.
Rewrite the estimating functions as the sum of two components; setting them equal to zero, we obtain the fixed-point representation
(15) 
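Since the exact update in (15) involves quantities defined earlier in the derivation, we illustrate only the generic scheme it relies on: a damped fixed-point iteration for an equation of the form x = F(x) (a minimal sketch; the function name, damping choice and stopping rule are ours):

```python
import numpy as np

def fixed_point(F, x0, damping=0.5, tol=1e-10, max_iter=500):
    """Damped fixed-point iteration x <- (1 - damping)*x + damping*F(x),
    a generic solver of x = F(x); F stands in for the update implied by
    the estimating equations."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = (1.0 - damping) * x + damping * np.asarray(F(x))
        if np.max(np.abs(x_new - x)) < tol:   # successive-iterate criterion
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")
```

The damping makes the iteration a contraction in a neighborhood of the root even when F itself is not, which is one reason such schemes are less sensitive to starting values than Newton–Raphson.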
Note that ,