A Provable Smoothing Approach for High Dimensional Generalized Regression with Applications in Genomics
Abstract
In many applications, linear models fit the data poorly. This article studies an appealing alternative, the generalized regression model. This model only assumes that there exists an unknown monotonically increasing link function connecting the response $Y$ to a single index $\boldsymbol{X}^\top\boldsymbol{\beta}^*$ of explanatory variables $\boldsymbol{X}$. The generalized regression model is flexible and covers many widely used statistical models. It fits the data-generating mechanisms well in many real problems, which makes it useful in a variety of applications where regression models are regularly employed. In low dimensions, rank-based M-estimators are recommended for the generalized regression model, giving root-$n$ consistent estimators of $\boldsymbol{\beta}^*$. Applications of these estimators to high dimensional data, however, are questionable. This article studies, both theoretically and practically, a simple yet powerful smoothing approach to handle the high dimensional generalized regression model. Theoretically, a family of smoothing functions is provided, and the amount of smoothing necessary for efficient inference is carefully calculated. Practically, our study is motivated by an important and challenging scientific problem: decoding gene regulation by predicting transcription factors that bind to cis-regulatory elements. Applying our proposed method to this problem shows substantial improvement over the state-of-the-art alternative in real data.
Keywords: semiparametric regression, generalized regression model, rank-based M-estimator, smoothing approximation, transcription factor binding, genomics.
1 Introduction
Regression models play a fundamental role in characterizing the relation among variables. Nonparametric and semiparametric regression models are commonly used alternatives to linear regression when the latter fails to fit the data well. Their advantages over simple linear regression models have been established in various fields (Huber and Ronchetti, 2011; Ruppert et al., 2003).
In this article, we study a semiparametric generalization of linear regression as such an alternative. We assume
(1.1) $Y = D \circ F(\boldsymbol{X}^\top \boldsymbol{\beta}^*, \epsilon)$.
Here $Y$ is the scalar response variable, $D: \mathbb{R} \to \mathbb{R}$ is an unknown increasing function, $F: \mathbb{R}^2 \to \mathbb{R}$ is an unknown function strictly increasing in each of its arguments, $\boldsymbol{X} \in \mathbb{R}^d$ represents the vector of explanatory variables, $\epsilon$ is an unspecified noise term independent of $\boldsymbol{X}$, and $\boldsymbol{\beta}^* \in \mathbb{R}^d$ is the regression coefficient characterizing the relation between $Y$ and $\boldsymbol{X}$. The coefficient $\boldsymbol{\beta}^*$ is assumed to be sparse. Model (1.1) is referred to as the generalized regression model. It was first proposed in econometrics (Han, 1987), and is a very flexible semiparametric model, containing a parametric part, encoded in the linear term $\boldsymbol{X}^\top\boldsymbol{\beta}^*$, and a nonparametric part, encoded in the link functions $D$ and $F$ and the noise term $\epsilon$. In practice, we assume that $n$ independent realizations of $(Y, \boldsymbol{X})$, denoted as $\{(Y_i, \boldsymbol{X}_i)\}_{i=1}^n$, are observed. These observations will be used to fit the model.
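To make the data-generating mechanism in (1.1) concrete, the following toy sketch (our own illustration, not part of the proposed estimator) simulates from the model with an assumed choice $F(u, e) = u + e$, $D(t) = e^t$, and logistic noise:

```python
import math
import random

def simulate_generalized_regression(n, beta, seed=0):
    """Draw n observations from Y = D(F(X'beta, eps)), where
    F(u, e) = u + e is strictly increasing in each argument,
    D(t) = exp(t) is increasing, and eps is standard logistic noise."""
    rng = random.Random(seed)
    d = len(beta)
    data = []
    for _ in range(n):
        x = [rng.gauss(0.0, 1.0) for _ in range(d)]
        index = sum(b * xi for b, xi in zip(beta, x))
        u = rng.uniform(1e-9, 1.0 - 1e-9)
        eps = math.log(u / (1.0 - u))   # logistic noise via inverse CDF
        y = math.exp(index + eps)       # Y = D(F(X'beta, eps))
        data.append((y, x))
    return data

data = simulate_generalized_regression(200, [1.0, 0.5, 0.0])
```

The link functions here are only one hypothetical choice; any increasing $D$ and componentwise increasing $F$ give the same monotone structure that the rank-based methods below exploit.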
1.1 A motivating genomic challenge
Model (1.1) naturally occurs in many applications. Below we elaborate a challenging scientific problem that motivates our study. To help put the model into context, some background introduction is necessary. One fundamental question in biology is how genes’ transcriptional activities are controlled by transcription factors (TFs). TFs are an important class of proteins that can bind to DNA to induce or repress the transcription of genes nearby the binding sites (Figure 1(a)). Humans have hundreds of different TFs. Each TF can bind to many different genomic loci to control the expression of its target genes. The genomic sites bound by TFs are also called cis-regulatory elements or cis-elements. Promoters and enhancers are typical examples of cis-elements. TF binding activities are context-dependent. The binding activity of a given TF at a given cis-element varies from cell type to cell type. It depends on the TF’s expression level in each cell type as well as numerous other factors. One cis-element can be bound by multiple collaborating TFs that form a protein complex. Different TFs can bind to different cis-elements. In order to comprehensively understand the gene regulatory program, a crucial step is to identify all active cis-elements and their binding TFs in each cell type and biological condition.
The state-of-the-art technology for mapping genome-wide TF binding sites (TFBSs) is Chromatin Immunoprecipitation coupled with sequencing (ChIP-seq) (Park, 2009). Unfortunately, each ChIP-seq experiment can only analyze one TF. Using this technology to map binding sites of all human TFs would require one to conduct hundreds of such experiments. Moreover, ChIP-seq requires high-quality antibodies which are not available for all TFs. Therefore, mapping binding sites for all TFs using ChIP-seq is both costly and technically infeasible. An alternative approach to mapping TFBSs is based on genome-wide sequencing of DNase I hypersensitive sites (DNase-seq) (Boyle et al., 2008). TFBSs are often sensitive to the cleavage of DNase I enzyme. Thus, DNase I hypersensitivity (DH), which can be measured in a genome-wide fashion using DNase-seq, can be used to locate cis-elements actively bound by TFs (Figure 1(a)). This approach is capable of mapping binding sites of all TFs in a biological sample through a single experimental assay. However, a major limitation of DNase-seq is that it does not reveal the identities of TFs that bind to each cis-element. If one could solve this problem by correctly predicting which TFs bind to each element, one would then be able to combine DNase-seq with computational predictions to identify all active cis-elements and their binding TFs in a biological sample using a single experimental assay. This would help scientists to remove a major roadblock in the study of gene regulation.
Many TFs recognize specific DNA sequence patterns called motifs (Figure 1(a)). Different TFs recognize different motifs. A conventional way to infer the identities of TFs that bind to a cis-element is to examine which TFs’ DNA motifs are present in the cis-element. Unfortunately, motifs for 2/3 of all human TFs are unknown. Therefore, solely relying on DNA motifs is not sufficient to solve this problem. This motivates development of an alternative solution that leverages massive amounts of gene expression and DNase-seq data in public databases to circumvent the requirement for DNA motifs. The Encyclopedia of DNA Elements (ENCODE) project (ENCODE Project Consortium, 2004) has generated DNase-seq and gene expression data for a variety of different cell types. Using these data, one may examine how the protein-binding activity measured by DNase-seq at a cis-element varies across different cell types and how such variation is explained by variations in the expression of TFs (Figure 1(b)). Through this analysis, one may infer which TFs bind to each cis-element.
Formally, let $Y$ be the activity of a cis-element in a particular cell type measured by DNase-seq, and let $\boldsymbol{X} = (X_1, \dots, X_d)^\top$ be the expression levels of $d$ different TFs in the same cell type (Figure 1(b)). The relationship between cis-element activity and TF expression can be described using the generalized regression model (1.1), with a high dimensional sparse regression coefficient vector $\boldsymbol{\beta}^*$. One expects the relationship between $Y$ and each relevant $X_j$ to be monotonic since a TF has to be expressed in order to be able to bind to cis-elements. Also, increased TF expression may lead to increased binding. The relationship may not be linear and the noise may not be normal since DNase-seq generates counts data. Although after normalization, the data may no longer be integers, they usually are still nonnormal and may be zero-inflated. The model is high dimensional since there are hundreds of TFs (i.e., $d$ is in the hundreds), whereas the sample size $n$ (i.e., the number of ENCODE cell types with both DNase-seq and gene expression data) is small. Lastly, $\boldsymbol{\beta}^*$ has to be sparse since the number of TFs that can bind to a cis-element is expected to be small. This is because cis-elements are typically short. Each element cannot have physical contacts with too many different proteins. In this model, the nonzero components of $\boldsymbol{\beta}^*$ may be used to infer the identities of binding TFs.
1.2 Existing works on generalized regression
In order to properly position our results in the literature, below we briefly review existing methodological works that are most relevant to our study on deciphering the generalized regression model. The generalized regression model contains many widely used econometric and statistical models, including important subclasses such as the monotonic transformation model and the monotonic single-index model, of the following forms:
(1.2) $D(Y) = \boldsymbol{X}^\top \boldsymbol{\beta}^* + \epsilon$,
(1.3) $Y = g(\boldsymbol{X}^\top \boldsymbol{\beta}^*) + \epsilon$,
where the univariate link function ($D$ in (1.2) and $g$ in (1.3)) is assumed to be strictly increasing.
There has been research on estimating the generalized regression model or its variants in low dimensions. These works follow two tracks. In the first track, Han (1987) and Cavanagh and Sherman (1998) proposed rank-based M-estimators for directly estimating $\boldsymbol{\beta}^*$, while treating the link functions $D$ and $F$ as nuisance parameters. The corresponding estimator aims to maximize a certain rank correlation measure between $Y$ and $\boldsymbol{X}^\top\boldsymbol{\beta}$, and hence often involves a discontinuous loss function. Based on an estimate of $\boldsymbol{\beta}^*$, Horowitz (1996), Ye and Duan (1997), and Chen (2002) further proposed methods to estimate the link functions under different parametric assumptions. This approach is also extended in Dai et al. (2014) and Shi et al. (2014) to ultrahigh dimensional settings via coupling it with a lasso-type penalty. In the second track, Ahn et al. (1996), Tanaka (2008), and Kakade et al. (2011), among many others, focused on studying the more specific models in (1.2) and (1.3), and suggested approximating the link function $g$ for estimating $\boldsymbol{\beta}^*$ via kernel regression and sieve approximation. These approaches therefore naturally require geometric assumptions and smoothness conditions on $g$.
In high dimensions, when the dimension $d$ could be much larger than the sample size $n$, serious drawbacks are associated with methods in both tracks.
For the second track, first, simultaneous estimation of $g$ and $\boldsymbol{\beta}^*$ requires extra prior assumptions on $g$, which may not hold in practice. Secondly, nonparametric estimation is well known to be difficult in high dimensions. This could hurt the estimation of $\boldsymbol{\beta}^*$. Thirdly, these estimation procedures usually are very sensitive to outliers, which are common in real applications.
For the first track, rank-based M-estimators treat the link functions as nuisance and hence could potentially gain efficiency and modeling flexibility in estimating $\boldsymbol{\beta}^*$. However, these rank-based methods also have serious computational and theoretical drawbacks. Computationally, the loss functions of rank-based M-estimators are commonly discontinuous. This violates the regularity conditions in most optimization algorithms (Luo and Tseng, 1993; Nesterov, 2012) and makes the optimization problem intractable. It could create serious computational issues, especially in high dimensions. Theoretically, the discontinuity of loss functions adds substantial difficulty for analysis, especially in high dimensions. This further makes the statistical performance of the corresponding estimators intractable.
1.3 Overview of key results
This article studies a simple smooth alternative to the above rank-based methods. This results in estimators that are computationally efficient to calculate, while keeping the modeling flexibility in estimating $\boldsymbol{\beta}^*$. The core idea is to replace the nonsmooth rank-based loss function by a smooth loss function, indexed by a smoothing parameter $\alpha_n$ and designed to become closer to the original loss as $\alpha_n$ increases. A family of smoothing functions is accordingly studied.
Of note, the idea of approximating a nonsmooth loss function using a smooth one has proven successful in the literature: see, for example, Horowitz (1992), Ma and Huang (2005), Horowitz (1998), Zhang et al. (2013), and Shi et al. (2014) for smoothing Manski’s maximum score estimator, an estimator targeting the area under the receiver operating characteristic curve (AUC), the quantile regression estimator, and Han’s rank-based M-estimator in low and high dimensions. However, our results add new observations to this track of works both theoretically and in some new biological applications, as outlined below.
Theoretically, although fairly well studied in low dimensions, smoothing approximation to nonsmooth and nonconvex loss functions has received little attention in high dimensions. This is mainly due to the extreme irregularity of the original loss functions, which form discontinuous U-processes (Sherman, 1993). Theoretically speaking, smoothing renders two types of errors: (i) the bias, which characterizes the error introduced by employing the smoothed loss to approximate the original one; (ii) the variance, which characterizes the error introduced by the randomness of the data. Our study characterizes the behaviors of both types of errors, based on which one can calculate the amount of smoothing necessary to balance the bias and variance. Our theory holds without any assumption on the link functions other than monotonicity. Additionally, the noise term $\epsilon$ is allowed to be noncentered and arbitrarily heavy-tailed, including Cauchy-distributed noise with possibly nonzero median, and may contain a substantial amount of outliers. Our theory hence confirms, mathematically rigorously, several advantages of the smoothed Han’s maximum rank estimator.
Practically, the aforementioned advantages of the studied procedure are important for problems where a large number of different regression models need to be fitted. Consider our motivating problem of predicting binding TFs of cis-elements. Since different cis-elements behave differently, the relationship between $Y$ and $\boldsymbol{X}$ may have different functional forms for different cis-elements even though all these functions are monotonic. Additionally, different cis-elements may have different noise distributions. Despite this heterogeneity, different cis-elements can be conveniently handled in a unified fashion using our generalized regression procedure regardless of the forms of $D$ and $F$. By contrast, manually constructing parametric models of different forms for different cis-elements would be difficult due to the large number of models that need to be constructed. The advantages of our approach over existing high dimensional linear and robust regression counterparts (e.g., those in Loh (2015) and Fan et al. (2017)) are hence clear.
Applying our method to the binding TF prediction problem, we find that our approach is substantially more accurate than the competing lasso method for predicting binding TFs. This demonstrates the practical utility of the smoothing approach and shows that it can provide a solution to a longstanding open problem in biology.
1.4 Other related works
Monotonic single-index and transformation models are important subclasses of the generalized regression model. In contrast to the monotonic transformation model, there is an increasing amount of research studying the monotonic single-index model. This includes Kalai and Sastry (2009), Kakade et al. (2011), and Foster et al. (2013), to name just a few. However, these works require much stronger modeling assumptions, and are sensitive to different types of data contamination.
There are two more related semiparametric approaches to generalizing the linear regression model. The first is the general single-index model, with no explicit geometric constraint on the link function $g$ in (1.3). In high dimensions, Plan et al. (2017) and Yi et al. (2015) studied such a relaxed model. For this, besides being sensitive to data contamination, one needs to first sphericalize the data, which is extremely difficult in high dimensions. Recently, Radchenko (2015) proposed an alternative approach that does not require data sphericity. However, a boundedness assumption on $\boldsymbol{X}$, subgaussianity of the noise, and certain smoothness conditions on $g$ are still required.
The second is sufficient dimension reduction. Related literature in this direction includes Li (1991), Li (1992), Cook and Weisberg (1991), Cook (1998), and Yin and Hilafu (2015). Sufficient dimension reduction approaches only assume that $Y$ is independent of $\boldsymbol{X}$ conditional on some linear projections of $\boldsymbol{X}$. However, a data sphericalization step is also crucial in all these approaches. This makes the sufficient dimension reduction approaches vulnerable to high dimensionality, which will be further illustrated in simulations and real data experiments.
1.5 Paper organization
The rest of the paper is organized as follows. The next section presents our smoothing approach to estimating the generalized regression model. In Section 3, we use this method to solve our motivating scientific problem of decoding transcription factor binding. We demonstrate that this semiparametric regression approach is capable of substantially improving the accuracy over the state-of-the-art alternatives in real data. Section 4 gives theoretical results for understanding the proposed approach and calculates the appropriate smoothing amount. Section 5 provides discussions. Finite-sample simulation results and proofs are relegated to an appendix.
1.6 Notation
Let $\boldsymbol{v} = (v_1, \dots, v_d)^\top$ and $\mathbf{M} = [M_{jk}]$ be a $d$-dimensional real vector and a $d$ by $d$ real matrix. For sets $I, J \subset \{1, \dots, d\}$, let $\boldsymbol{v}_I$ be the subvector of $\boldsymbol{v}$ with entries indexed by $I$, and $\mathbf{M}_{I,J}$ be the submatrix of $\mathbf{M}$ with rows and columns indexed by $I$ and $J$. Let $|I|$ represent the cardinality of the set $I$. For $0 < q < \infty$, we define the vector $\ell_q$, $\ell_0$, and $\ell_\infty$ (pseudo-)norms of $\boldsymbol{v}$ to be $\|\boldsymbol{v}\|_q := (\sum_{j=1}^d |v_j|^q)^{1/q}$, $\|\boldsymbol{v}\|_0 := \sum_{j=1}^d \mathbb{1}(v_j \neq 0)$, and $\|\boldsymbol{v}\|_\infty := \max_j |v_j|$. For the symmetric matrix $\mathbf{M}$, let $\lambda_{\max}(\mathbf{M})$ and $\lambda_{\min}(\mathbf{M})$ represent the largest and smallest eigenvalues of $\mathbf{M}$. We write $\mathbf{M} \succeq 0$ if $\mathbf{M}$ is positive semidefinite. For any $x \in \mathbb{R}$, we define the sign function $\mathrm{sign}(x) := x/|x|$, where by convention we write $\mathrm{sign}(0) = 0$. For any two random vectors $\boldsymbol{X}$ and $\boldsymbol{Y}$, we write $\boldsymbol{X} \stackrel{d}{=} \boldsymbol{Y}$ if $\boldsymbol{X}$ and $\boldsymbol{Y}$ are identically distributed. We let $C, C'$ be two generic absolute positive constants, whose actual values may vary at different locations. We write $\mathbb{1}(\cdot)$ to be the indicator function. We let $\mathbb{B}(\boldsymbol{v}, r) := \{\boldsymbol{u} : \|\boldsymbol{u} - \boldsymbol{v}\|_2 \leq r\}$ represent the ball centered at $\boldsymbol{v}$ with radius $r$. For any two real sequences $\{a_n\}$ and $\{b_n\}$, we write $a_n \lesssim b_n$, or equivalently $b_n \gtrsim a_n$, if there exists a constant $C$ such that $|a_n| \leq C |b_n|$ for any $n$ large enough. We write $a_n \asymp b_n$ if $a_n \lesssim b_n$ and $b_n \lesssim a_n$. The symbols $\lesssim_{\mathrm{P}}$, $\gtrsim_{\mathrm{P}}$, and $\asymp_{\mathrm{P}}$ are analogous to $\lesssim$, $\gtrsim$, and $\asymp$, but these relations hold stochastically.
2 Problem setup and methods
In this section, we provide some background on the generalized regression model and associated rank-based estimators. We further describe the class of nonconvex smoothing approximations that will be exploited and covered by our theory.
Throughout the paper, we assume Model (1.1) holds, and we have $n$ independent and identically distributed (i.i.d.) observations $\{(Y_i, \boldsymbol{X}_i)\}_{i=1}^n$ generated from (1.1). We do not pose any parametric assumption on either the link functions or the noise term $\epsilon$, except for assuming that $D$ is increasing and that
(2.1) $F(u_1, v) > F(u_2, v)$ and $F(u, v_1) > F(u, v_2)$,
as long as $u_1 > u_2$ and $v_1 > v_2$. For model identifiability, in the sequel, we assume we know at least one specific entry of $\boldsymbol{\beta}^*$ that is nonzero, and fix it to be one. Without loss of generality, we assume $\beta_1^* = 1$.
The generalized regression model of form (1.1) represents a large class of models. These include the monotonic transformation and single-index models of the forms (1.2) and (1.3). More specifically, the generalized regression model covers the linear regression model, $Y = \boldsymbol{X}^\top\boldsymbol{\beta}^* + \epsilon$; the Box–Cox transformation model (Box and Cox, 1964), $(Y^\lambda - 1)/\lambda = \boldsymbol{X}^\top\boldsymbol{\beta}^* + \epsilon$ for some $\lambda > 0$; the log-linear model and accelerated failure time model (Kalbfleisch and Prentice, 2011), $\log Y = \boldsymbol{X}^\top\boldsymbol{\beta}^* + \epsilon$; the binary choice model (Maddala, 1986), $Y = \mathbb{1}(\boldsymbol{X}^\top\boldsymbol{\beta}^* + \epsilon > 0)$; and the censored regression model (Tobin, 1958), $Y = \max(\boldsymbol{X}^\top\boldsymbol{\beta}^* + \epsilon, 0)$.
We focus on the following rank-based M-estimator, called the maximum rank correlation (MRC) estimator, first proposed in Han (1987):
(2.2) $\widehat{\boldsymbol{\beta}}_{\mathrm{MRC}} \in \operatorname*{argmax}_{\boldsymbol{\beta}:\, \beta_1 = 1}\ \frac{1}{n(n-1)} \sum_{i \neq j} \mathbb{1}(Y_i > Y_j)\, \mathbb{1}\{(\boldsymbol{X}_i - \boldsymbol{X}_j)^\top \boldsymbol{\beta} > 0\}.$
The formulation of MRC is a U-process (Honoré and Powell, 1997; Zhang et al., 2013).
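To make the U-process structure concrete, a minimal sketch (our own illustration; function names are ours, not from the released package) of the MRC objective, averaging pairwise concordance between the responses and the linear index:

```python
def mrc_objective(Y, X, beta):
    """Han's maximum rank correlation objective: the average over
    ordered pairs i != j of 1(Y_i > Y_j) * 1(X_i'beta > X_j'beta)."""
    n = len(Y)
    idx = [sum(b * x for b, x in zip(beta, row)) for row in X]
    concordant = sum(
        1
        for i in range(n)
        for j in range(n)
        if i != j and Y[i] > Y[j] and idx[i] > idx[j]
    )
    return concordant / (n * (n - 1))

Y = [1.0, 2.0, 3.0, 4.0]
X = [[0.1], [0.2], [0.3], [0.4]]
```

With perfectly concordant toy data as above, the objective reaches its maximum of 1/2 at the correct direction and 0 at the reversed one; note the objective is piecewise constant in `beta`, which is exactly the discontinuity the smoothing below removes.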
Intrinsically, MRC aims to maximize the Kendall’s tau correlation coefficient (Kendall, 1938) between $Y$ and $\boldsymbol{X}^\top\boldsymbol{\beta}$, while treating the link functions as nuisance. This is attainable only by assuming the link is monotonic, and has its roots in transformation and copula models (Nelsen, 2013). For such models, rank-based approaches are well known to enjoy certain efficiency properties (Klaassen and Wellner, 1997; Hallin and Paindaveine, 2006).
To fully appreciate the rationale of MRC, we provide a proposition that characterizes MRC’s relation to the linear regression model, and further addresses the identifiability issue. Although the result in the first part is very straightforward, we did not find an explicit statement of it in the literature.
Proposition. Suppose $\boldsymbol{X}$ is continuous and has a positive definite covariance matrix. We have: (i) when the link function corresponds to the linear regression model (without requiring $\boldsymbol{X}$ and $\epsilon$ to be centered), $\boldsymbol{\beta}^*$ is, up to a positive scaling, the unique optimum maximizing the Pearson correlation between $Y$ and $\boldsymbol{X}^\top\boldsymbol{\beta}$:
$\{c\,\boldsymbol{\beta}^* : c > 0\} = \operatorname*{argmax}_{\boldsymbol{\beta} \neq \boldsymbol{0}} \ \mathrm{Corr}(Y, \boldsymbol{X}^\top\boldsymbol{\beta});$
(ii) as long as the link functions $D$ and $F$ satisfy (2.1), $\boldsymbol{\beta}^*$ is, up to a positive scaling, the unique optimum maximizing the Kendall’s tau correlation coefficient between $Y$ and $\boldsymbol{X}^\top\boldsymbol{\beta}$:
$\{c\,\boldsymbol{\beta}^* : c > 0\} = \operatorname*{argmax}_{\boldsymbol{\beta} \neq \boldsymbol{0}} \ \tau(Y, \boldsymbol{X}^\top\boldsymbol{\beta}).$
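The rank-invariance underlying part (ii) can be checked numerically: Kendall’s tau between the response and the index is unchanged by any strictly increasing transformation of the response. A toy check (ours, not the paper’s code):

```python
import math

def sign(x):
    """Sign function with sign(0) = 0."""
    return (x > 0) - (x < 0)

def kendall_tau(a, b):
    """Kendall's tau via pairwise sign agreement over unordered pairs."""
    n = len(a)
    s = sum(
        sign(a[i] - a[j]) * sign(b[i] - b[j])
        for i in range(n)
        for j in range(i + 1, n)
    )
    return 2.0 * s / (n * (n - 1))

index = [0.3, -1.2, 0.7, 2.0, -0.4]   # hypothetical X'beta values
y = [2.1, -0.5, 1.3, 3.3, 0.2]
tau_raw = kendall_tau(y, index)
# applying a strictly increasing D (here exp) leaves tau unchanged
tau_exp = kendall_tau([math.exp(v) for v in y], index)
```

Because a strictly increasing transformation preserves every pairwise ordering, `tau_raw` and `tau_exp` agree exactly; this is precisely why MRC can treat the link functions as nuisance.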
Throughout, we are interested in settings where the dimension $d$ could be much larger than the sample size $n$. In such settings, due to the restricted information obtainable, we have to further regularize the parameter space. In particular, sparsity of the parameter is commonly assumed. A seemingly natural regularized MRC estimator is as follows:
(2.3) $\widehat{\boldsymbol{\beta}} \in \operatorname*{argmax}_{\boldsymbol{\beta}:\, \beta_1 = 1} \Big\{ \frac{1}{n(n-1)} \sum_{i \neq j} \mathbb{1}(Y_i > Y_j)\, \mathbb{1}\{(\boldsymbol{X}_i - \boldsymbol{X}_j)^\top \boldsymbol{\beta} > 0\} - \lambda_n \|\boldsymbol{\beta}\|_1 \Big\},$
or its equivalent constrained formulation, maximizing the same rank correlation subject to $\|\boldsymbol{\beta}\|_1 \leq R_n$ for a tuning radius $R_n$.
However, (2.3) involves a discontinuous loss function that has abrupt changes. As stated in the introduction, this incurs serious computational and statistical issues. To tackle such issues, we propose the following smoothing approximation to (2.3), using a set of cumulative distribution functions (CDFs) $\{F_0(\alpha_n \cdot)\}$ to approximate the indicator function $\mathbb{1}(t > 0)$:
$\widehat{\boldsymbol{\beta}} \in \operatorname*{argmax}_{\boldsymbol{\beta}:\, \beta_1 = 1} \Big\{ \frac{1}{n(n-1)} \sum_{i \neq j} \mathbb{1}(Y_i > Y_j)\, F_0\{\alpha_n (\boldsymbol{X}_i - \boldsymbol{X}_j)^\top \boldsymbol{\beta}\} - \lambda_n \|\boldsymbol{\beta}\|_1 \Big\}.$
Here “argmax” and “argmin” represent the sets of local maxima and minima of a given function, respectively. In addition, we write $\boldsymbol{X}_{ij} := \boldsymbol{X}_i - \boldsymbol{X}_j$.
Of note, $\alpha_n$ is an explicitly stated smoothness parameter controlling the approximation speed, presumably increasing to infinity with $n$. $F_0$ is a predetermined fixed smooth continuous CDF, satisfying $F_0(t) + F_0(-t) = 1$ for arbitrary $t \in \mathbb{R}$.
Note we can equivalently write
(2.4) $\widehat{\boldsymbol{\beta}} \in \operatorname*{argmax}_{\boldsymbol{\beta}:\, \beta_1 = 1} \Big\{ \frac{1}{n(n-1)} \sum_{i \neq j} \mathbb{1}(Y_i > Y_j)\, F_0(\alpha_n \boldsymbol{X}_{ij}^\top \boldsymbol{\beta}) - \lambda_n \|\boldsymbol{\beta}\|_1 \Big\},$
where $\boldsymbol{X}_{ij} = \boldsymbol{X}_i - \boldsymbol{X}_j$ is the signed pairwise difference.
On one hand, the smoothed loss function is close to the MRC loss function in (2.3) when $\alpha_n$ is large enough. This guarantees that “the bias term” is small enough. On the other hand, the smoothed loss is smooth, giving computational and statistical guarantees for convergence in optimizing (2.4).
There are several notable remarks for the proposed smoothing approach.
In practice, we can take the approximation function $F_0$ of the following forms:

sigmoid function: $F_0(x) = e^x / (1 + e^x)$;

standard Gaussian CDF: $F_0(x) = \Phi(x)$, where $\Phi$ represents the standard Gaussian CDF;

standard double exponential CDF: $F_0(x) = \frac{1}{2} + \frac{1}{2}\,\mathrm{sign}(x)\,(1 - e^{-|x|})$.
As will be shown later, the above approximation functions all guarantee efficient inference. For the approximation parameter $\alpha_n$, theoretically, we recommend choosing it using a specific rate. Such a rate depends on $n$, $d$, and a sparseness parameter, and will be stated more explicitly in Section 4. In practice, cross-validation is recommended (Stone, 1974).
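The three smoothing CDFs can be written out explicitly; the following sketch (ours, with the forms we assume above: logistic sigmoid, standard Gaussian CDF, and standard Laplace CDF) verifies that all of them are symmetric in the sense $F_0(t) + F_0(-t) = 1$ and approach the indicator as the argument grows:

```python
import math

def sigmoid_cdf(x):
    """Logistic sigmoid, the CDF of the standard logistic distribution."""
    return 1.0 / (1.0 + math.exp(-x))

def gaussian_cdf(x):
    """Standard Gaussian CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def laplace_cdf(x):
    """Standard double exponential (Laplace) CDF."""
    return 0.5 * math.exp(x) if x < 0 else 1.0 - 0.5 * math.exp(-x)

cdfs = (sigmoid_cdf, gaussian_cdf, laplace_cdf)
```

The shared symmetry is what makes the bias analysis uniform across the three choices; any smooth CDF with this property is a candidate smoother.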
We note the formulation (2.4) is related to the smoothing approaches introduced in Brown and Wang (2005), Zhang et al. (2013), and Shi et al. (2014). Actually, their approaches could be regarded as special cases of (2.4), by taking $F_0$ to be the Gaussian CDF or the sigmoid function. However, as will be seen in Sections 3 and 4, we add new contributions to the literature in both theory and applications.
It is also worth comparing the formulation in (2.4) to the other robust regression formulations introduced in the high dimensional statistics literature. The original lasso estimator is well known to be vulnerable to nonlinear link functions (Van de Geer, 2008) and heavy-tailed noise (Bickel et al., 2009). Lozano et al. (2016), Fan et al. (2017), and Loh (2015) proposed different robust (non)convex approaches to address the possible heavy-tailedness of $\epsilon$. In particular, Loh (2015) provided a framework for investigating a group of (non)convex loss functions (e.g., Huber’s loss and Tukey’s biweight), and studied the corresponding estimators. However, these procedures all stick to the linear link function, and hence will lead to inconsistent estimation, invalid statistical inference, and erroneous predictions when the link function is nonlinear. As discussed in the introduction, nonlinearity is common in complex biological systems.
3 Real data example
In this section, we apply our approach to the motivating scientific problem – predicting TFs that bind to individual cis-elements. As introduced before, solving this problem is crucial for studying gene regulation. DNase-seq experiments can be used to map active cis-elements in a genome-wide fashion. If one can correctly predict which TFs bind to each cis-element, one would be able to couple DNase-seq with computational predictions to efficiently predict genome-wide binding sites of a large number of TFs simultaneously in a new biological sample. This cannot be achieved using any other existing experimental technology.
The conventional approach that predicts binding TFs based on DNA motifs is contingent on the availability of known TF motifs. However, 2/3 of human TFs do not have known motifs. This motivates us to investigate an alternative solution that does not require DNA motif information. This new approach leverages large amounts of publicly available DNase-seq and gene expression data generated by the ENCODE project. Using data from multiple ENCODE cell types, this approach models the relationship between a cis-element’s protein-binding activity $Y$ measured by DNase-seq and the gene expression levels of TFs, $\boldsymbol{X}$, measured by exon arrays. It predicts binding TFs by identifying important variables in $\boldsymbol{X}$ in the regression model. Below we present real data results from a small-scale pilot study as a way to illustrate our semiparametric regression approach and compare it with other methods. A comprehensive whole-genome analysis and investigation of biology are beyond the scope of this article and will be addressed elsewhere.
In our pilot study, human DNase-seq and matching gene expression exon array data from 57 cell types were obtained from the ENCODE project. After data processing and normalization (see the appendix Section C.1 for details), 169 TFs whose gene expression varies across the 57 cell types were obtained. In parallel, 1,000 cis-elements were randomly sampled from the full set of cis-elements in the human genome for method evaluation. For each cis-element, the objective is to identify which of the 169 TFs may bind to it. Let $Y$ be a cis-element’s DNase I hypersensitivity (a surrogate for protein-binding activity), measured by its normalized and log-transformed DNase-seq read count, in a cell type. Let $\boldsymbol{X}$ be the normalized gene expression values of the 169 TFs in the same cell type. We want to use $Y$ and $\boldsymbol{X}$ observed from 57 different cell types to learn their relationship and subsequently predict binding TFs.
Six competing methods are compared, listed below. For all methods except random prediction, the nonzero components of the estimate $\widehat{\boldsymbol{\beta}}$ were used to predict which TFs can bind to a cis-element. The code that implements our method has been released online (https://github.com/zji90/RMRCE).

RMRCE: the generalized regression model was fitted using the Regularized Maximum Rank Correlation Estimator (RMRCE) in (2.4). The tuning parameter $\lambda_n$ was selected using cross-validation, and the Gaussian smoothing approximation was used.

Hinge: the indicator function in Han’s proposal is replaced by a hinge-loss approximation. Specifically, the indicator $\mathbb{1}(\boldsymbol{X}_{ij}^\top\boldsymbol{\beta} > 0)$ is replaced by a piecewise-linear hinge surrogate, yielding a convex relaxation of (2.3).

Lasso: the lasso (Tibshirani, 1996) was used.

SIM: the single-index model method proposed by Radchenko (2015).

SDR: the sufficient dimension reduction method as proposed by Yin and Hilafu (2015).

Random: TFs randomly sampled from the 169 TFs were treated as the predicted binding TFs.
Among these methods, the lasso represents the state-of-the-art linear model for characterizing the relationship between a response and sparse predictors. SIM and SDR represent competing semiparametric regression models. Hinge is a simple convex relaxation of Han’s proposal. We include this comparison to find out whether the smoothed rank correlation is better than convex relaxation. The random method serves as a negative control. For all methods except random prediction, we tried different tuning parameters ($\lambda_n$ in RMRCE was selected using cross-validation) and calculated the overall accuracy under each parameter setting. The detailed implementation strategy for the methods is presented in Section A.1 in the appendix.
Prediction accuracy of different methods is compared using DNA motif information. The rationale is that DNA motifs were not used to make predictions, therefore they provide an independent source of information for validation. For a method with better prediction accuracy, one should expect that its predicted binding TFs for a cis-element are more likely to be supported by the presence of corresponding motifs in the cis-element. By contrast, it is less likely to find motifs in a cis-element for incorrectly predicted binding TFs. Based on this reasoning, we downloaded all known vertebrate motif matrices from the JASPAR database (Mathelier et al., 2015) and mapped them to the human genome using the CisGenome software (Ji et al., 2008) under its default setting. For a given TF, if its motif(s) occurred one or multiple times within 500 base pairs (bp’s) of the center of a cis-element, then the TF’s motif was called to be found in the cis-element. Let $i = 1, \dots, 1{,}000$ index the cis-elements. Let $M_i$ denote the set of TFs whose motifs were found in cis-element $i$. In order to characterize a method’s prediction accuracy, we applied the method to predict binding TFs for each cis-element. If a predicted TF does not have any known DNA motif, we lack information for evaluating the correctness of the prediction. Therefore, for each cis-element, we only retained the predicted TFs that had known DNA motifs in the JASPAR database for estimating the prediction accuracy. Among all the 169 TFs, 63 TFs had known DNA motifs and were included in the evaluation. Let $R_i$ be the set of retained TFs for cis-element $i$, and let $|R_i|$ be the number of TFs in $R_i$. Let $V_i \subseteq R_i$ be the subset of TFs in $R_i$ whose motifs were found in cis-element $i$ (and hence validated), and let $|V_i|$ be the number of TFs in $V_i$. The prediction accuracy of a method was characterized by the following ratio:
$\mathrm{Accuracy} = \frac{\sum_i |V_i|}{\sum_i |R_i|}.$
This ratio is the percentage of all testable predictions that were validated by the presence of DNA motifs. The higher the ratio, the more accurate a method is.
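The validation ratio can be computed directly from the predicted and motif-supported TF sets. A small sketch (ours; set and parameter names are hypothetical):

```python
def prediction_accuracy(predicted, motifs_found, has_motif):
    """Fraction of testable predictions validated by motif presence.

    predicted[i]    : set of TFs predicted to bind cis-element i
    motifs_found[i] : set of TFs whose motifs occur in cis-element i
    has_motif       : set of TFs with any known motif (testable TFs)
    """
    validated = 0
    testable = 0
    for pred, found in zip(predicted, motifs_found):
        retained = pred & has_motif          # only testable predictions
        testable += len(retained)
        validated += len(retained & found)   # validated predictions
    return validated / testable if testable else float("nan")

acc = prediction_accuracy(
    predicted=[{"TF1", "TF2", "TF9"}, {"TF2", "TF3"}],
    motifs_found=[{"TF1"}, {"TF3"}],
    has_motif={"TF1", "TF2", "TF3"},
)
```

In this toy example, TF9 is dropped as untestable, two of the four retained predictions are motif-supported, and the ratio is 0.5.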
Figure 2 compares the accuracy of different methods. For each method, we gradually increased the number of reported TFs by relaxing the tuning parameter (e.g., setting a smaller tuning parameter results in more TFs being reported by RMRCE), and the accuracy was plotted as a function of the increasing number of predicted binding TFs. This figure shows that RMRCE is significantly more accurate than all the other methods, and the random prediction method has the worst performance. Of note, as the number of selected TFs increases, the accuracy of all methods drops (except for the random method, which remains stable). This is as expected since the overall signal strength decreases as more TFs are reported. RMRCE performance with different smoothing choices is presented in the appendix (Section C.2). A model diagnostic heuristic is developed to check the monotonicity assumption of the proposed model. The detailed descriptions and model diagnostic results in the real data application can be found in the appendix (Section C.3).
To shed light on why RMRCE substantially outperformed the lasso, Figure 3 shows data from two cis-elements as examples. For each cis-element, we used the lasso to identify binding TFs from the 169 TFs. The observed response $Y$ was then plotted against its fitted value $\widehat{Y}$ in Figure 3(a) and Figure 3(c). The blue curve in each plot represents a smooth curve fitted using the generalized additive model with cubic splines and default parameters as implemented in the R package mgcv. It treats $Y$ as the response and $\widehat{Y}$ as the independent variable. Clearly, $Y$ and $\widehat{Y}$ do not have a linear relationship. Moreover, the figures show that the relationships between $Y$ and $\widehat{Y}$ for different cis-elements have different functional forms. This makes the use of parametric models complicated, as one would need to build models with different functional forms for different cis-elements, which would be tedious if one wants to analyze millions of cis-elements in the whole genome. Figure 3(b) and Figure 3(d) show the normal Q-Q plots for the residuals that were obtained from the fitted smooth curves. These figures show that, even when a nonlinear smoothed function was fitted to the data, the residuals are still nonnormal and may have a complicated distribution.
Figure 4 shows a similar analysis using RMRCE. In Figure 4(a) and Figure 4(c), was plotted against the RMRCE fitted . Clearly, the relationship between and is nonlinear, but such a nonlinear, yet monotonically increasing, relationship can be handled by our method in a unified fashion regardless of the specific functional forms. Figure 4(b) and Figure 4(d) are the residual Q-Q plots, where the residuals were obtained from the fitted smooth curves in Figure 4(a) and Figure 4(c). These Q-Q plots show that the distributions of the residuals are non-normal. The non-normal residuals, however, can be naturally handled by our generalized regression model.
The above analyses demonstrate the value of our approach for handling noisy, monotonic, non-normal, and nonlinear data. Whereas simple linear regression and marginal screening cannot fully capture the intricacies of such a complex system, RMRCE handles these challenges very well. Thus, RMRCE is a more appealing method for tackling the studied problem than methods based on simple linear models, such as the lasso. A similar conclusion applies to the comparisons with the other four competing methods. In particular, as will be shown in the synthetic data analysis (Section A), Hinge is usually a convex approximation that is too crude for the studied problem, and when the generalized regression model is reasonable for the data (as hinted by the experimental results in this section), SIM and SDR are much less efficient in handling the data than RMRCE.
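The advantage of rank-based procedures on monotone, nonlinear, skewed data can be seen in a small numerical sketch. The following Python snippet is purely illustrative (it is not RMRCE, and the function names and the link $y = e^{3x}$ are our own choices): rank correlation is invariant to any strictly increasing link, while the Pearson correlation underlying linear fits degrades sharply.

```python
import math
import random

def pearson(x, y):
    # Sample Pearson correlation coefficient.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def ranks(v):
    # Rank of each observation (0 = smallest); ties are negligible here.
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    for rk, i in enumerate(order):
        r[i] = float(rk)
    return r

def spearman(x, y):
    # Rank correlation: invariant to strictly increasing transforms of y.
    return pearson(ranks(x), ranks(y))

random.seed(0)
x = [random.gauss(0.0, 1.0) for _ in range(500)]
y = [math.exp(3.0 * a) for a in x]  # monotone but highly nonlinear, heavily skewed link

print(f"Pearson:  {pearson(x, y):.3f}")
print(f"Spearman: {spearman(x, y):.3f}")
```

Because the link is strictly increasing, the ranks of y coincide with the ranks of x, so the rank correlation equals 1 exactly, whereas the Pearson correlation is far below 1.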
4 Theory
This section is devoted to investigating the statistical performance of the smoothing estimator in (2.4). To characterize the estimation error of to , we separately study the approximation error (bias part) and the stochastic error (variance part).
Before presenting the main theorems, we first introduce some additional notation. Define the population approximation minimizer as
where can be easily derived as
(4.1) 
Note, analogously, Proposition 2 shows
(4.2) 
We decompose the statistical error into two parts:
We will first characterize the approximation error in the next section. Studies of the stochastic error are in Section 4.2.
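In generic notation (the symbol names below are ours, for illustration only: $\beta^*$ for the truth, $\widetilde{\beta}_\alpha$ for the population approximation minimizer, and $\widehat{\beta}_\alpha$ for the smoothing estimator), such a two-part decomposition follows from the triangle inequality:

```latex
\[
\bigl\|\widehat{\beta}_\alpha - \beta^*\bigr\|_2
  \;\le\;
  \underbrace{\bigl\|\widehat{\beta}_\alpha - \widetilde{\beta}_\alpha\bigr\|_2}_{\text{stochastic error}}
  \;+\;
  \underbrace{\bigl\|\widetilde{\beta}_\alpha - \beta^*\bigr\|_2}_{\text{approximation error}}.
\]
```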
Because the loss functions , , and are all nonconvex, adopting an argument similar to that in Loh (2015), we first focus on the following specified minima:
The estimators and are constructed merely for theoretical purposes. In addition, the parameter there, which controls the convexity region around the truth , need not be specified in practice.
As will be shown in the next two subsections, and are local optima of and under some explicitly characterized regularity conditions. Thus, we will end this section with a general theorem quantifying the behavior of such a local optimum .
4.1 Approximation error
This section first shows that (4.1) well approximates (4.2). We will then study the behavior of the minima of and .
4.1.1 Generalized regression model
For the generalized regression model of the form (1.1), to guarantee the fast approximation of to , we require the following two assumptions on data generating schemes and approximation functions .

(A2). Assume satisfies for arbitrary . Further assume there exist some absolute constants such that for any .
On one hand, Assumption (A1) is a commonly adopted assumption in the high dimensional regression literature (Raskutti et al., 2010; Negahban et al., 2012). Of note, Assumption (A1) is by no means necessary. By checking the proof of Lemma 4.1.1, we could readily relax it to a multivariate subgaussian assumption (Cai and Zhou, 2012). However, for clarity of presentation, this paper focuses on the multivariate Gaussian case. On the other hand, Assumption (A2) is mild. It is easy to check that the sigmoid function and the Gaussian and double exponential CDFs discussed in Remark 2 all satisfy (A2).
Under Assumptions (A1) and (A2), the next lemma investigates the approximation error of to . {lemma} Assume Assumptions (A1) and (A2) hold. We then have
where absolute constants and are given in Assumption (A2).
Lemma 4.1.1 shows uniformly approximates linearly with regard to . However, this alone is not enough to show convergence of to . To guarantee this, we require a “local strong convexity” of around the truth . To this end, we write
to be the second derivative of the population loss function in (4.2). The next assumption regularizes the behavior of ’s spectrum.

(A3). Assume and are respectively lower and upper bounded by two absolute positive constants.
Assumption (A3) is also posed implicitly in Sherman (1993). With Assumption (A3), the next proposition, essentially due to Sherman (1993), shows the local strong convexity of .
[Sherman (1993)] Assume Assumptions (A1) and (A3) hold. We can then pick a set of positive absolute constants with close to 0, such that for some small enough depending only on , as long as , we have
Combining Lemma 4.1.1 and Proposition 4.1.1, the next lemma characterizes the approximation error of to the truth . {lemma} Assume Assumptions (A1), (A2), and (A3) hold. We then have
Lemma 4.1.1 verifies that, when we choose to increase to infinity with , the approximation error decays to zero at a rate , assuming the boundedness of the eigenvalues of . In the sequel we focus on such and the corresponding estimator .
4.1.2 Monotonic transformation model
This section aims to study how sharp the results in the last subsection are. We focus on the monotonic transformation model (1.2). We first show that Lemma 4.1.1 cannot be substantially improved, even under the much more restrictive monotonic transformation model.
Under Model (1.2), assume Assumption (A1) and satisfies for arbitrary . For arbitrary vectors , we then have the following three statements true.

(1). Supposing , we have for any .

(2). With Gaussian distributed with bounded parameters, fixing and supposing , is an increasing function of for all three examples of approximations given in Remark 2.

(3). If we further assume sigmoid approximation and Gaussian distributed with bounded parameters, we have
Furthermore, combining this with the second statement yields, for any ,
Lemma 4.1.2 shows that, even under a very idealized parametric model, the approximation error in Lemma 4.1.1 can be improved only slightly, from linear to quadratic decay.
Secondly, we note that Proposition 4.1.1 relies on the implicit Assumption (A3). To fully appreciate this proposition, we provide an alternative formulation that clearly reveals its connection to the data generating parameter . To this end, we assume the monotonic transformation model (1.2) and one more assumption.

(A3’). Assume Model (1.2) holds with explanatory variables and noises satisfying Assumption (A1). We further assume the noises are absolutely continuous, satisfying
Here represents the probability density function (PDF) of .
We note the noise assumption in (A3’) is mild. It does not require any moment condition on the noises . In particular, the next proposition shows that both Gaussian and Cauchy distributions satisfy Assumption (A3’). {proposition} Suppose the noises are Gaussian or Cauchy distributed with arbitrary bounded parameters. Then they satisfy Assumption (A3’). With Assumption (A3’), we are now ready to prove the local strong convexity of . {lemma} Assume Assumptions (A1) and (A3’) hold. We can then pick a set of positive absolute constants with close to 0, such that for some small enough depending only on , as long as , we have
As a simple consequence, for and , we have
Compared to the result in Proposition 4.1.1, Lemma 4.1.2 gives explicit inequalities based on , and the proof techniques exploited are entirely different from those in Sherman (1993), which are based on Taylor’s expansion.
4.2 Stochastic error
This section investigates the stochastic error term . This falls within the scope of high dimensional M-estimator theory (Negahban et al., 2012), with some slight modifications due to the additional constraints on and .
4.2.1 A general framework for constrained M-estimators
In this section we study general M-estimators. In detail, we focus on the following constrained M-estimator:
which aims to estimate the truth
Here is a subset of the dimensional real space, is the loss function, and is the penalty term. We pose the following five assumptions on , , and .

(B1). Assume is star-shaped.

(B2). Assume is convex and differentiable in , and is a seminorm.

(B3). (Decomposability). There exist subspaces such that

(B4). (Restricted strong convexity). Define the set
where represents the projection of to for arbitrary subspace of . We assume, for all , we have
where , , and are three constants.

(B5). We assume
Here is the dual norm of .
Assumptions (B1)-(B5) are posed so as to cover estimators like , and hence (slightly) generalize the corresponding assumptions in Negahban et al. (2012). Under the above assumptions, the following theorem holds; it is a straightforward extension of Theorem 1 in Negahban et al. (2012). {theorem} Assume Assumptions (B1)-(B5) hold. We then have
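For concreteness, the flavor of the resulting bound, in its simplest exactly-sparse form and in generic notation (following Negahban et al. (2012); constants and tolerance terms omitted, symbol names ours), is:

```latex
\[
\bigl\|\widehat{\theta}_{\lambda} - \theta^{*}\bigr\|^{2}
  \;\lesssim\;
  \frac{\lambda^{2}}{\kappa^{2}}\,\Psi^{2}\bigl(\overline{\mathcal{M}}\bigr),
\]
```

where $\kappa$ is the restricted strong convexity constant from (B4) and $\Psi(\overline{\mathcal{M}})$ is the subspace compatibility constant of the model subspace. For the lasso with an $s$-sparse truth, $\Psi(\overline{\mathcal{M}}) = \sqrt{s}$, recovering the familiar $\lambda\sqrt{s}/\kappa$ rate.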
4.2.2 Stochastic error analysis
For analyzing the high dimensional stochastic error, sparsity is commonly encouraged for model identifiability and efficient inference. We adopt this idea. In particular, we assume the following regularity condition:

(A0). The data are generated in a triangular array setting as in Greenshtein and Ritov (2004). We assume, for some prespecified , , while the parameter changes with . Note, in this setting, is not necessarily sparse.
This formulation of sparsity can be regarded as a working assumption, and intrinsically comes from the sieve idea (Grenander, 1981; Shen et al., 1999). Note a similar assumption is implicitly stated in Fan et al. (2017).
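Under such a working sparsity assumption, an $\ell_1$-penalized M-estimator can be computed by proximal gradient descent (ISTA). The following pure-Python sketch is illustrative only: it uses a squared loss rather than the paper's smoothed rank loss, and all names, dimensions, and tuning values are our own choices, not the authors'.

```python
import random

def soft_threshold(z, t):
    # Proximal operator of t * |.| (elementwise soft thresholding).
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def ista(X, y, lam, step=0.1, iters=800):
    """l1-penalized least squares via proximal gradient descent.
    Objective: (1/2n) * ||y - X beta||^2 + lam * ||beta||_1."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        resid = [sum(X[i][j] * beta[j] for j in range(p)) - y[i] for i in range(n)]
        grad = [sum(X[i][j] * resid[i] for i in range(n)) / n for j in range(p)]
        beta = [soft_threshold(beta[j] - step * grad[j], step * lam) for j in range(p)]
    return beta

random.seed(1)
n, p = 150, 10
beta_true = [2.0] + [0.0] * (p - 2) + [-1.5]  # sparse truth
X = [[random.gauss(0.0, 1.0) for _ in range(p)] for _ in range(n)]
y = [sum(X[i][j] * beta_true[j] for j in range(p)) + random.gauss(0.0, 0.5)
     for i in range(n)]

beta_hat = ista(X, y, lam=0.1)
print([round(b, 2) for b in beta_hat])
```

The soft-thresholding step zeroes out coordinates whose gradient signal falls below the penalty level, so the estimate recovers the sparse support while shrinking the active coefficients slightly toward zero.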
To guarantee efficient inference, we require three more assumptions, listed below.

(A4). Assume is twicedifferentiable, and there exists an absolute constant such that for arbitrary pair with .

(A5). We assume for arbitrary with such that for some .

(A6). There exists an absolute constant such that .
Here Assumption (A4) is easy to check. Indeed, it is straightforward to verify that the sigmoid function and the Gaussian and double exponential CDFs introduced in Remark 2 as approximation functions all satisfy Assumption (A4). On the other hand, for Assumption (A6), we require a slightly more stringent assumption on than is required for the lasso, because of the additional effort needed to control the smoothing approximation error. Finally, noticing
is very easy to calculate, we note Assumption (A5) can be verified empirically. As a matter of fact, in the appendix