
Abstract

We study theoretical properties of regularized robust M-estimators, applicable when data are drawn from a sparse high-dimensional linear model and contaminated by heavy-tailed distributions and/or outliers in the additive errors and covariates. We first establish a form of local statistical consistency for the penalized regression estimators under fairly mild conditions on the error distribution: When the derivative of the loss function is bounded and satisfies a local restricted curvature condition, all stationary points within a constant radius of the true regression vector converge at the minimax rate enjoyed by the Lasso with sub-Gaussian errors. When an appropriate nonconvex regularizer is used in place of an $\ell_1$-penalty, we show that such stationary points are in fact unique and equal to the local oracle solution with the correct support; hence, results on asymptotic normality in the low-dimensional case carry over immediately to the high-dimensional setting. This has important implications for the efficiency of regularized nonconvex M-estimators when the errors are heavy-tailed. Our analysis of the local curvature of the loss function also has useful consequences for optimization when the robust regression function and/or regularizer is nonconvex and the objective function possesses stationary points outside the local region. We show that as long as a composite gradient descent algorithm is initialized within a constant radius of the true regression vector, successive iterates will converge at a linear rate to a stationary point within the local region. Furthermore, the global optimum of a convex regularized robust regression function may be used to obtain a suitable initialization. The result is a novel two-step procedure that uses a convex M-estimator to achieve consistency and a nonconvex M-estimator to increase efficiency. We conclude with simulation results that corroborate our theoretical findings.

Statistical consistency and asymptotic normality for high-dimensional robust M-estimators


Po-Ling Loh
loh@wharton.upenn.edu
Department of Statistics
The Wharton School
University of Pennsylvania
Philadelphia, PA 19104


October 10, 2018

1 Introduction

Ever since robustness entered the statistical scene in Box’s classical paper of 1953 [9], many significant steps have been taken toward analyzing and quantifying robust statistical procedures, notably the work of Tukey [59], Huber [28], and Hampel [22], among others. Huber’s seminal work on M-estimators [28] established asymptotic properties of a class of statistical estimators containing the maximum likelihood estimator, and provided initial theory for constructing regression functions that are robust to deviations from normality. Despite the substantial body of existing work on robust M-estimators, however, research on high-dimensional regression estimators has mostly been limited to penalized likelihood-based approaches (e.g., [58, 16, 20, 52]). Several recent papers [46, 35, 36] have shed new light on high-dimensional M-estimators by presenting a fairly unified framework for analyzing statistical and optimization properties of such estimators. However, whereas the M-estimators studied in those papers are finite-sample versions of globally convex functions, many important M-estimators, such as those arising in classical robust regression, only possess convex curvature over local regions, even at the population level. In this paper, we present new theoretical results, based only on local curvature assumptions, which may be used to establish statistical and optimization properties of regularized M-estimators with highly nonconvex loss functions.

Broadly, we are interested in linear regression estimators that are robust to the following types of deviations:

  • Model misspecification. The ordinary least squares objective function may be viewed as a maximum likelihood estimator for linear regression when the additive errors are normally distributed. It is well known that the $\ell_1$-penalized ordinary least squares estimator is still consistent when the $\epsilon_i$'s are sub-Gaussian [8, 61]; however, if the distribution of the $\epsilon_i$'s deviates more wildly from the normal distribution (e.g., the $\epsilon_i$'s are heavy-tailed), the regression estimator based on the least squares loss no longer converges at optimal rates. In addition, whereas the usual regularity assumptions on the design matrix such as the restricted eigenvalue condition have been shown to hold with high probability when the covariates are sub-Gaussian [51, 55], we wish to devise estimators that are also consistent under weaker assumptions on the distribution of the covariates.

  • Outliers. Even when the covariates and error terms are normally distributed, the regression estimator may be inconsistent when observations are contaminated by outliers in the predictors and/or response variables [54]. Whereas the standard ordinary least squares loss function is non-robust to outliers in the observations, alternative estimators exist in a low-dimensional setting that are robust to a certain degree of contamination. We wish to extend this theory to high-dimensional regression estimators, as well.

Inspired by the classical theory on robust estimators for linear regression [30, 41, 23], we study regularized versions of low-dimensional robust regression estimators and establish statistical guarantees in a high-dimensional setting. As we will see, the regularized robust regression functions continue to enjoy good behavior in high dimensions, and we can quantify the degree to which the high-dimensional estimators are robust to the types of deviations described above.

Our first main contribution is to provide a general set of sufficient conditions under which optima of regularized robust M-estimators are statistically consistent, even in the presence of heavy-tailed errors and outlier contamination. The conditions involve a bound on the derivative of the regression function, as well as restricted strong convexity of the loss function in a neighborhood of constant radius about the true parameter vector, and the conclusions are given in terms of the tails of the error distribution. The notion of restricted strong convexity, as used previously in the literature [46, 2, 35, 36], traditionally involves a global condition on the behavior of the loss function. However, due to the highly nonconvex behavior of the robust regression functions of interest, we assume only a local condition of restricted strong convexity in the development of our statistical results. Consequently, our main theorem provides guarantees only for stationary points within the local region of strong curvature. We show that all such local stationary points are statistically consistent estimators for the true regression vector; when the covariates are sub-Gaussian, the rate of convergence agrees (up to a constant factor) with the rate of convergence for $\ell_1$-penalized ordinary least squares regression with sub-Gaussian errors. We also use the same framework to study generalized M-estimators and provide results for statistical consistency of local stationary points under weaker distributional assumptions on the covariates.

The wide applicability of our theorem on statistical consistency of high-dimensional robust M-estimators opens the door to an important question regarding the design of robust regression estimators, which is the topic of our second contribution: In the setting of heavy-tailed errors, if all regression estimators with bounded derivative are statistically consistent with rates agreeing up to a constant factor, what are the advantages of using a complicated nonconvex regression function over a simple convex function such as the Huber loss? In the low-dimensional setting, several independent lines of work provide reasons for using nonconvex M-estimators over their convex alternatives [30, 56]. One compelling justification is from the viewpoint of statistical efficiency. Indeed, the log likelihood function of the heavy-tailed t-distribution with one degree of freedom gives rise to the nonconvex Cauchy loss, which is consequently asymptotically efficient [32]. In our second main theorem, we prove that by using a suitable nonconvex regularizer [16, 69], we may guarantee that local stationary points of the regularized robust M-estimator agree with a local oracle solution defined on the correct support. Thus, provided the sample size scales sufficiently quickly with the level of sparsity, results on asymptotic normality of low-dimensional M-estimators with a diverging number of parameters [29, 67, 50, 39, 25] may be used to establish asymptotic normality of the corresponding high-dimensional estimators, as well. In particular, when the loss function equals the negative log likelihood of the error distribution, stationary points of the high-dimensional M-estimator will also be efficient in an asymptotic sense. Our oracle result and subsequent conclusions regarding asymptotic normality resemble a variety of other results in the literature on nonconvex regularization [17, 10, 33], but our result is stronger because it provides guarantees for all stationary points in the local region. Our proof technique leverages the primal-dual witness construction recently proposed in Loh and Wainwright [36]; however, we require a more refined analysis here in order to extend the result to one involving only local properties of the loss function.

Our third and final contribution addresses the algorithms used to optimize our proposed M-estimators. Since our statistical consistency and oracle results only provide guarantees for the behavior of local solutions, we need to devise an optimization algorithm that always converges to a stationary point inside the local region. Indeed, local optima that are statistically inconsistent are the bane of nonconvex M-estimators, even in low-dimensional settings [19]. To remedy this issue, we propose a novel two-step algorithm that is guaranteed to converge to a stationary point within the local region of restricted strong convexity. Our algorithm consists of optimizing two separate regularized M-estimators in succession, and may be applied to situations where both the loss and regularizer are nonconvex. In the first step, we optimize a convex regularized M-estimator to obtain a sufficiently close point that is then used to initialize an optimization algorithm for the original (nonconvex) M-estimator in the second step. We use the composite gradient descent algorithm [47] in both steps of the algorithm, and prove rigorously that if the initial point in the second step lies within the local region of restricted curvature, all successive iterates will continue to lie in the region and converge at a linear rate to an appropriate stationary point. Any convex, statistically consistent M-estimator suffices for the first step; we use the $\ell_1$-penalized Huber loss in our simulations involving sub-Gaussian covariates with heavy-tailed errors, since global optima are statistically consistent by our earlier theory. Our resulting two-step estimator, which first optimizes a convex Huber loss to obtain a consistent estimator and then optimizes a (possibly nonconvex) robust M-estimator to obtain a more efficient estimator, is reminiscent of the one-step estimators common in the robust regression literature [7]; however, here we require full runs of composite gradient descent in each step of the algorithm, rather than a single Newton-Raphson step. Note that if the goal is to optimize an M-estimator involving a convex loss and nonconvex regularizer, such as the SCAD-penalized Huber loss, our two-step algorithm is also applicable, where we optimize the $\ell_1$-penalized loss in the first step.
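
To make the two-step procedure concrete, here is a minimal, hedged sketch in Python. It is not the paper's implementation: the tuning constants, the crude stepsize heuristic, and the use of an $\ell_1$ penalty in both steps (rather than a nonconvex regularizer in the second step) are simplifying assumptions, and the helper names are ours.

```python
import numpy as np

def huber_psi(r, delta=1.345):
    """Derivative (psi function) of the Huber loss, applied to a vector of residuals."""
    return np.clip(r, -delta, delta)

def tukey_psi(r, c=4.685):
    """Derivative (psi function) of Tukey's biweight; identically zero beyond the rejection point c."""
    return np.where(np.abs(r) <= c, r * (1.0 - (r / c) ** 2) ** 2, 0.0)

def soft_threshold(v, t):
    """Componentwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def composite_gradient(X, y, psi, lam, beta0, step=None, n_iter=500):
    """Composite gradient descent for (1/n) * sum_i loss(y_i - x_i^T beta) + lam * ||beta||_1,
    where psi is the derivative of the loss applied to the residuals."""
    n, _ = X.shape
    if step is None:
        step = n / np.linalg.norm(X, 2) ** 2   # crude inverse-smoothness proxy
    beta = beta0.copy()
    for _ in range(n_iter):
        grad = -X.T @ psi(y - X @ beta) / n    # gradient of the smooth loss term
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

def two_step_estimator(X, y, lam):
    """Step 1: convex l1-penalized Huber fit started from zero (consistent pilot estimate).
    Step 2: rerun composite gradient descent on a nonconvex (Tukey) loss, initialized at the pilot."""
    beta_pilot = composite_gradient(X, y, huber_psi, lam, np.zeros(X.shape[1]))
    return composite_gradient(X, y, tukey_psi, lam, beta_pilot)
```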

Related work: We close this section by highlighting three recent papers on related topics. The analysis in this paper most closely resembles the work of Lozano and Meinshausen [37], in that we study stationary points of nonconvex functions used for robust high-dimensional linear regression within a local neighborhood of the true regression vector. Although the technical tools we use here are similar, we focus on regression functions that are expressible as M-estimators; the minimum distance loss function proposed in that paper does not fall into this category. In addition, we formalize the notion of basins of attraction for optima of nonconvex M-estimators and develop a two-step optimization algorithm that consists of optimizing successive regularized M-estimators, which goes beyond their results about local convergence of a composite gradient descent algorithm.

Another related work is that of Fan et al. [15]. While that paper focuses exclusively on developing estimation bounds for penalized robust regression with the Huber loss function, the results presented in our paper are strictly more general, since they hold for nonconvex M-estimators, as well. The analysis of the $\ell_1$-penalized Huber loss is still relevant to our analysis, however, because as shown below, its global convergence guarantees provide us with a good initialization point for the composite gradient algorithm that we will apply in the first step of our two-step algorithm.

Finally, we draw attention to the recent work by Mendelson [44]. In that paper, careful derivations based on empirical process theory demonstrate the advantage of using differently parametrized convex loss functions tuned according to distributional properties of the additive noise in the model. Our analysis also reveals the impact of different parameter choices for the regression function on the resulting estimator, but the rates of Mendelson [44] are much sharper than ours (albeit agreeing up to a constant factor). However, our analysis is not limited to convex loss functions, and covers nonconvex loss functions possessing local curvature, as well. Finally, note that while Mendelson [44] is primarily concerned with optimizing the regression estimator with respect to $\ell_1$- and $\ell_2$-error, our oracle results suggest that it may be instructive to consider second-order properties as well. Indeed, taking into account attributes such as the variance and asymptotic efficiency of the estimator may lead to a different parameter choice for a robust loss function than if the primary goal is to minimize the bias alone.

The remainder of our paper is organized as follows: In Section 2, we provide the basic background concerning M-estimators and generalized M-estimators, and introduce various robust loss functions and regularizers to be discussed in the sequel. In Section 3, we present our main theorem concerning statistical consistency of robust high-dimensional M-estimators and unpack the distributional conditions required for the assumptions of the theorem to hold for specific robust estimators through a series of propositions. We also present our main theorem concerning oracle properties of nonconvex regularized M-estimators, with a corollary illustrating the types of asymptotic normality conclusions that may be derived from the oracle result. Section 4 provides our two-step optimization algorithm and corresponding theoretical guarantees. We conclude in Section 5 with a variety of simulation results. A brief review of robustness measures is provided in Appendix A, and proofs of the main theorems and all supporting lemmas and propositions are contained in the remaining supplementary appendices.

Notation: For functions $f(n)$ and $g(n)$, we write $f(n) \lesssim g(n)$ to mean that $f(n) \le c\, g(n)$ for some universal constant $c \in (0, \infty)$, and similarly, $f(n) \gtrsim g(n)$ when $f(n) \ge c'\, g(n)$ for some universal constant $c' \in (0, \infty)$. We write $f(n) \asymp g(n)$ when $f(n) \lesssim g(n)$ and $f(n) \gtrsim g(n)$ hold simultaneously. For a vector $v \in \mathbb{R}^p$ and a subset $S \subseteq \{1, \dots, p\}$, we write $v_S$ to denote the vector $v$ restricted to $S$. For a matrix $M$, we write $\|M\|_2$ to denote the spectral norm. For a function $h : \mathbb{R}^p \to \mathbb{R}$, we write $\nabla h$ to denote a gradient or subgradient of the function.

2 Background and problem setup

In this section, we provide some background on M-estimators and generalized M-estimators for robust regression. We also describe the classes of nonconvex regularizers that will be covered by our theory.

Throughout, we will assume that we have $n$ i.i.d. observations $\{(x_i, y_i)\}_{i=1}^n$ from the linear model

$y_i = x_i^T \beta^* + \epsilon_i,$   (1)

where $x_i \in \mathbb{R}^p$, $\epsilon_i \in \mathbb{R}$, and $\beta^* \in \mathbb{R}^p$ is a $k$-sparse vector. We also assume that $x_i$ and $\epsilon_i$ are both zero-mean random variables. We are interested in high-dimensional regression estimators of the form

$\hat{\beta} \in \arg\min_{\beta \,:\, \|\beta\|_1 \le R} \big\{ \mathcal{L}_n(\beta) + \rho_\lambda(\beta) \big\},$   (2)

where $\mathcal{L}_n$ is the empirical loss function and $\rho_\lambda$ is a penalty function. For instance, the Lasso program is given by the loss $\mathcal{L}_n(\beta) = \frac{1}{2n} \sum_{i=1}^n (y_i - x_i^T \beta)^2$ and penalty $\rho_\lambda(\beta) = \lambda \|\beta\|_1$, but this framework allows for much more general settings. Since we are interested in cases where the loss and regularizer may be nonconvex, we include the side condition $\|\beta\|_1 \le R$ in the program (2) in order to guarantee the existence of local/global optima. We will require $R \ge \|\beta^*\|_1$, so that the true regression vector $\beta^*$ is feasible for the program.
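
For concreteness, the following short Python snippet simulates data of the sort this theory targets: a sparse linear model with sub-Gaussian covariates and heavy-tailed errors. The dimensions, sparsity level, and error distribution are illustrative choices, not the paper's simulation settings.

```python
import numpy as np

rng = np.random.default_rng(0)

n, p, k = 200, 1000, 5              # sample size, ambient dimension, sparsity (illustrative)
beta_star = np.zeros(p)
beta_star[:k] = 1.0                 # k-sparse true regression vector

X = rng.standard_normal((n, p))     # sub-Gaussian (here Gaussian) covariates
eps = rng.standard_cauchy(n)        # heavy-tailed additive errors
y = X @ beta_star + eps             # observations from the linear model (1)
```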

In the scenarios below, we will consider loss functions that satisfy

$\nabla \bar{\mathcal{L}}(\beta^*) = 0, \qquad \text{where } \bar{\mathcal{L}}(\beta) := \mathbb{E}\big[\mathcal{L}_n(\beta)\big].$   (3)

When the population-level loss $\bar{\mathcal{L}}$ is a convex function, equation (3) implies that $\beta^*$ is a global optimum of $\bar{\mathcal{L}}$. When $\bar{\mathcal{L}}$ is nonconvex, the condition (3) ensures that $\beta^*$ is at least a stationary point of the function. Our goal is to develop conditions under which certain stationary points of the program (2) are statistically consistent estimators for $\beta^*$.

2.1 Robust M-estimators

We wish to study loss functions that are robust to outliers and/or model misspecification. Consequently, we borrow our loss functions from the classical theory of robust regression in low dimensions; the additional regularizer appearing in the program (2) encourages sparsity in the solution and endows it with appealing behavior in high dimensions. Here, we provide a brief review of M-estimators used for robust linear regression. For a more detailed treatment of the basic concepts of robust regression, see the books [30, 41, 23] and the many references cited therein.

Let $\ell$ denote the regression function defined on an individual observation pair $(x_i, y_i)$. The corresponding M-estimator is then given by

$\mathcal{L}_n(\beta) = \frac{1}{n} \sum_{i=1}^n \ell\big(x_i^T \beta - y_i\big).$   (4)

Note that

so the condition (3) is always satisfied. In particular, the maximum likelihood estimator corresponds to the choice $\ell(u) = -\log p_\epsilon(u)$, where $p_\epsilon$ is the probability density function of the additive errors $\epsilon_i$. Note that when $\epsilon_i \sim N(0, \sigma^2)$, the MLE corresponds to the squared-error choice $\ell(u) = \frac{u^2}{2}$, up to rescaling, and the resulting loss function is convex.

Some of the loss functions that we will analyze in this paper include the following:

Huber loss: We have

$\ell(u) = \begin{cases} \frac{u^2}{2}, & \text{if } |u| \le k, \\ k|u| - \frac{k^2}{2}, & \text{if } |u| > k. \end{cases}$

In this case, $\ell$ is a convex function. Although $\ell''$ does not exist everywhere, $\ell'$ is continuous and $\|\ell'\|_\infty = k$.

Tukey’s biweight: We have

$\ell(u) = \begin{cases} \frac{k^2}{6}\Big(1 - \big(1 - \frac{u^2}{k^2}\big)^3\Big), & \text{if } |u| \le k, \\ \frac{k^2}{6}, & \text{if } |u| > k. \end{cases}$

Note that $\ell$ is nonconvex. We also compute the first derivative

$\ell'(u) = \begin{cases} u\big(1 - \frac{u^2}{k^2}\big)^2, & \text{if } |u| \le k, \\ 0, & \text{if } |u| > k, \end{cases}$

and second derivative

$\ell''(u) = \begin{cases} \big(1 - \frac{u^2}{k^2}\big)\big(1 - \frac{5u^2}{k^2}\big), & \text{if } |u| \le k, \\ 0, & \text{if } |u| > k. \end{cases}$

Note that $\ell'$ is continuous. Furthermore, $\|\ell'\|_\infty \le k$. One may check that Tukey’s biweight function is not an MLE. Furthermore, although $\ell''$ exists everywhere and is continuous, $\ell'''$ does not exist at $u = \pm k$.

Cauchy loss: We have

$\ell(u) = \frac{k^2}{2} \log\Big(1 + \frac{u^2}{k^2}\Big).$

Note that $\ell$ is nonconvex. When $k = 1$, the function is proportional to the MLE for the t-distribution with one degree of freedom (a heavy-tailed distribution). This suggests that for heavy-tailed distributions, nonconvex loss functions may be more desirable from the point of view of statistical efficiency, although optimization becomes more difficult; we will explore this idea more fully in Section 3.3 below. For the Cauchy loss, we have

$\ell'(u) = \frac{u}{1 + u^2/k^2}, \qquad \ell''(u) = \frac{1 - u^2/k^2}{\big(1 + u^2/k^2\big)^2}.$

In particular, $|\ell'(u)|$ is maximized when $|u| = k$, so $\|\ell'\|_\infty = \frac{k}{2}$. We may also check that $\|\ell''\|_\infty = 1$ and $\ell''(u) \ge -\frac{1}{8}$ for all $u$.
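
The following Python sketch collects the three losses and their first derivatives, using the classical parametrizations and common default tuning constants; the defaults are illustrative and are not prescriptions taken from the paper.

```python
import numpy as np

def huber(u, delta=1.345):
    """Huber loss: quadratic near zero, linear in the tails; convex, with |l'| <= delta."""
    return np.where(np.abs(u) <= delta, 0.5 * u**2, delta * np.abs(u) - 0.5 * delta**2)

def huber_psi(u, delta=1.345):
    return np.clip(u, -delta, delta)

def tukey(u, c=4.685):
    """Tukey's biweight: nonconvex, constant for |u| > c (finite rejection point)."""
    inside = (c**2 / 6) * (1 - (1 - (u / c) ** 2) ** 3)
    return np.where(np.abs(u) <= c, inside, c**2 / 6)

def tukey_psi(u, c=4.685):
    return np.where(np.abs(u) <= c, u * (1 - (u / c) ** 2) ** 2, 0.0)

def cauchy(u, c=1.0):
    """Cauchy loss: proportional to the negative log-likelihood of a scaled t_1 distribution."""
    return (c**2 / 2) * np.log(1 + (u / c) ** 2)

def cauchy_psi(u, c=1.0):
    return u / (1 + (u / c) ** 2)
```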

Although second and third derivatives do not always exist for the loss functions above, a unifying property is that the derivative $\ell'$ is bounded in each case. This turns out to be an important property for robustness of the resulting estimator. Intuitively, we may view a solution of the program (2) as an approximate sparse solution to the estimating equation $\nabla \mathcal{L}_n(\beta) = 0$, or equivalently,

$\frac{1}{n} \sum_{i=1}^n \ell'\big(x_i^T \beta - y_i\big)\, x_i = 0.$   (5)

When $\beta = \beta^*$, equation (5) becomes

$\frac{1}{n} \sum_{i=1}^n \ell'(-\epsilon_i)\, x_i = 0.$   (6)

In particular, if a pair $(x_i, y_i)$ satisfies the linear model (1) but $\epsilon_i$ is an outlier, its contribution to the sum in equation (6) is bounded when $\ell'$ is bounded, lessening the contamination effect of gross outliers.

In the robust regression literature, a redescending M-estimator has the additional property that there exists a constant $c > 0$ such that $\ell'(u) = 0$ for all $|u| \ge c$. Then $c$ is known as a finite rejection point, since outliers with $|\epsilon_i| \ge c$ will be completely eliminated from the sum in equation (6). For instance, Tukey’s biweight function gives rise to a redescending M-estimator. Note that redescending M-estimators will always be nonconvex, so computational efficiency is sacrificed in exchange for finite rejection properties. For an in-depth discussion of redescending M-estimators vis-à-vis different measures of robustness, see the article by Shevlyakov et al. [56].
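
As a quick numerical illustration of the bounded-derivative and finite-rejection properties, the snippet below reuses `huber_psi` and `tukey_psi` from the sketch above; the residual value and tuning constants are arbitrary.

```python
import numpy as np

r = np.array([100.0])        # a gross outlier in the residual
print(r)                     # least-squares score psi(r) = r: contribution grows without bound
print(huber_psi(r))          # Huber: capped at delta = 1.345
print(tukey_psi(r))          # Tukey's biweight: exactly 0 beyond the rejection point c
```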

2.2 Generalized M-estimators

Whereas the M-estimators described in Section 2.1 are robust with respect to outliers in the additive noise terms $\epsilon_i$, they are non-robust to outliers in the covariates $x_i$. This may be quantified using the concept of influence functions (see Appendix A). Intuitively, an outlier in $x_i$ may cause the corresponding term in equation (6) to behave arbitrarily badly. This motivates the use of generalized M-estimators that downweight large values of $x_i$ (also known as leverage points). The resulting estimating equation is then defined as follows:

(7)

where the weighting applied to each summand is defined appropriately. As will be discussed in the sequel, generalized M-estimators may allow us to relax the distributional assumptions on the covariates; e.g., from sub-Gaussian to sub-exponential.

We will focus on functions that take the form

(8)

where the weighting functions act on the covariates and on the residuals, respectively. Note that the M-estimators considered in Section 2.1 may also be written in this form, with both weighting functions identically equal to one.

Some popular choices of the form presented in equation (8) include the following:

Mallows estimator [38]: We leave the residuals unweighted and take a covariate weight of the form

(9)

for appropriate fixed parameters. Note that the reweighted covariate vector is indeed bounded for any fixed choice of these parameters, since

The Mallows estimator effectively shrinks data points with large leverage toward an elliptical shell, and likewise pushes small data points closer to the shell.

Hill-Ryan estimator [26]: We take the covariate and residual weights to be equal, with the common weight function chosen so that the reweighted covariate vector is bounded (e.g., equation (9)). In addition to downweighting the influence function similarly to the Mallows estimator, the Hill-Ryan estimator scales the residuals according to the leverage weight of the $x_i$'s.

Schweppe estimator [45]: Here the covariate weight is chosen as before, and each residual is rescaled by the inverse of the leverage weight. Like the Mallows estimator, Schweppe’s estimator downweights the contribution of data points with high leverage as a summand in the estimating equation (7). If $\ell'$ is locally linear around the origin and flattens out for larger values, Schweppe’s estimator additionally dampens the effect of a residual only when it is large compared to the leverage of $x_i$. As discussed in Hampel et al. [23], Schweppe’s estimator is designed to be optimal in terms of a measure of variance robustness, subject to a bound on the influence function.
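
The following sketch encodes the three weighting schemes by writing each estimating-equation summand as a function of a covariate vector and a residual, built from a bounded psi function and a leverage weight. The specific leverage weight and tuning constant below are stand-ins (the paper's equation (9) specifies a particular parametric form); only the structural differences between the three classical estimators are meant to be faithful.

```python
import numpy as np

def leverage_weight(x, b=4.0):
    """Illustrative leverage weight: downweights covariate vectors with large norm.
    (A stand-in for the parametric weight of equation (9).)"""
    return np.minimum(1.0, b / np.linalg.norm(x))

def eta_mallows(x, r, psi, w=leverage_weight):
    # Mallows: downweight the whole score by the leverage of x; residual left untouched.
    return w(x) * psi(r)

def eta_hill_ryan(x, r, psi, w=leverage_weight):
    # Hill-Ryan: additionally rescale the residual by the same leverage weight.
    wx = w(x)
    return wx * psi(wx * r)

def eta_schweppe(x, r, psi, w=leverage_weight):
    # Schweppe: dampen the residual only when it is large relative to the leverage of x.
    wx = w(x)
    return wx * psi(r / wx)
```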

Note that when the loss takes the form in equation (8), the estimating equation (7) may again be seen as a zero-gradient condition $\nabla \mathcal{L}_n(\beta) = 0$, where

(10)

Under reasonable conditions, such as oddness of $\ell'$ and symmetry of the error distribution, the condition (3) may be seen to hold (cf. condition 2 of Proposition 1 below and the following remark). The overall program for a generalized M-estimator then takes the same form as the program (2), with $\mathcal{L}_n$ defined by equation (10).

2.3 Nonconvex regularizers

Finally, we provide some background on the types of regularizers we will use in our analysis of the composite objective function (2). Following the theoretical development of Loh and Wainwright [35, 36], we require the regularizer to satisfy the following properties:

Assumption 1 (Amenable regularizers).

The regularizer $\rho_\lambda$ is coordinate-separable:

$\rho_\lambda(\beta) = \sum_{j=1}^p \rho_\lambda(\beta_j),$

for some scalar function $\rho_\lambda : \mathbb{R} \to \mathbb{R}$. In addition:

  (i) The function $t \mapsto \rho_\lambda(t)$ is symmetric around zero and satisfies $\rho_\lambda(0) = 0$.

  (ii) The function $t \mapsto \rho_\lambda(t)$ is nondecreasing on $[0, \infty)$.

  (iii) The function $t \mapsto \frac{\rho_\lambda(t)}{t}$ is nonincreasing on $(0, \infty)$.

  (iv) The function $t \mapsto \rho_\lambda(t)$ is differentiable for $t \neq 0$.

  (v) $\lim_{t \to 0^+} \rho_\lambda'(t) = \lambda$.

  (vi) There exists $\mu > 0$ such that the function $t \mapsto \rho_\lambda(t) + \frac{\mu}{2} t^2$ is convex.

  (vii) There exists $\gamma \in (0, \infty)$ such that $\rho_\lambda'(t) = 0$ for all $t \ge \gamma \lambda$.

If $\rho_\lambda$ satisfies conditions (i)–(vi) of Assumption 1, we say that $\rho_\lambda$ is $\mu$-amenable. If $\rho_\lambda$ also satisfies condition (vii), we say that $\rho_\lambda$ is $(\mu, \gamma)$-amenable [36]. In particular, if $\rho_\lambda$ is $\mu$-amenable, then $q_\lambda(t) := \lambda |t| - \rho_\lambda(t)$ is everywhere differentiable. Defining the vector version $q_\lambda(\beta) := \lambda \|\beta\|_1 - \rho_\lambda(\beta)$ accordingly, it is easy to see that $\frac{\mu}{2}\|\beta\|_2^2 - q_\lambda(\beta)$ is convex.

Some examples of amenable regularizers are the following:

Smoothly clipped absolute deviation (SCAD) penalty: This penalty, due to Fan and Li [16], takes the form

$\rho_\lambda(t) = \begin{cases} \lambda |t|, & \text{if } |t| \le \lambda, \\ \dfrac{2a\lambda|t| - t^2 - \lambda^2}{2(a-1)}, & \text{if } \lambda < |t| \le a\lambda, \\ \dfrac{(a+1)\lambda^2}{2}, & \text{if } |t| > a\lambda, \end{cases}$   (11)

where $a > 2$ is fixed. The SCAD penalty is $(\mu, \gamma)$-amenable, with $\mu = \frac{1}{a-1}$ and $\gamma = a$.

Minimax concave penalty (MCP): This penalty, due to Zhang [68], takes the form

$\rho_\lambda(t) = \lambda \int_0^{|t|} \Big(1 - \frac{z}{\lambda b}\Big)_+ \, dz,$   (12)

where $b > 0$ is fixed. The MCP regularizer is $(\mu, \gamma)$-amenable, with $\mu = \frac{1}{b}$ and $\gamma = b$.

Standard $\ell_1$-penalty: The $\ell_1$-penalty, $\rho_\lambda(\beta) = \lambda \|\beta\|_1$, is an example of a regularizer that is $\mu$-amenable, but not $(\mu, \gamma)$-amenable, for any $\gamma > 0$.
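
For reference, here are vectorized Python implementations of the two nonconvex penalties. The closed-form expressions follow the standard SCAD and MCP definitions, and the default values of a and b are conventional choices rather than prescriptions from the paper.

```python
import numpy as np

def scad(t, lam, a=3.7):
    """SCAD penalty (Fan & Li); requires a > 2."""
    t = np.abs(t)
    return np.where(
        t <= lam,
        lam * t,
        np.where(
            t <= a * lam,
            (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1)),
            (a + 1) * lam**2 / 2,
        ),
    )

def mcp(t, lam, b=3.0):
    """Minimax concave penalty (Zhang); requires b > 0."""
    t = np.abs(t)
    return np.where(t <= b * lam, lam * t - t**2 / (2 * b), b * lam**2 / 2)
```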

As studied in detail in Loh and Wainwright [36] and leveraged in the results of Section 3.3 below, using $(\mu, \gamma)$-amenable regularizers allows us to derive a powerful oracle result concerning local stationary points, which will be useful for our discussion of asymptotic normality.

3 Main statistical results

We now present our core statistical results concerning stationary points of the high-dimensional robust M-estimators described in Section 2. We begin with a general deterministic result that ensures statistical consistency of stationary points of the program (2) when the loss function satisfies restricted strong convexity and the regularizer is $\mu$-amenable. Next, we interpret the consequences of our theorem for specific M-estimators and generalized M-estimators through a series of propositions, and provide conditions on the distributions of the covariates and error terms in order for the assumptions of the theorem to hold with high probability. Lastly, we provide a theorem establishing that stationary points are equal to a local oracle estimator when the regularizer is nonconvex and $(\mu, \gamma)$-amenable.

Recall that $\tilde{\beta}$ is a stationary point of the program (2) if

for all feasible $\beta$, where, with a slight abuse of notation, we write $\nabla \rho_\lambda(\tilde{\beta})$ for a generalized gradient of the regularizer at $\tilde{\beta}$ (recall that $q_\lambda$ is everywhere differentiable by our assumptions). In particular, the set of stationary points includes all local and global minima, as well as interior local maxima [6, 11].
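
The displayed condition did not survive extraction; in the Loh–Wainwright framework that this section follows, the first-order condition for a stationary point is typically written as below. This is a reconstruction under that notational assumption, not a verbatim quotation.

```latex
% First-order condition characterizing a stationary point \tilde{\beta} of the program (2):
\left\langle \nabla \mathcal{L}_n(\tilde{\beta}) + \nabla \rho_\lambda(\tilde{\beta}),\; \beta - \tilde{\beta} \right\rangle \;\ge\; 0,
\qquad \text{for all feasible } \beta .
```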

3.1 General statistical theory

We require the loss function to satisfy the following local RSC condition:

Assumption 2 (RSC condition).

There exist $\alpha > 0$, $\tau \ge 0$, and a radius $r > 0$ such that

(13)

for all $\beta_1, \beta_2 \in \mathbb{R}^p$ such that $\|\beta_1 - \beta^*\|_2 \le r$ and $\|\beta_2 - \beta^*\|_2 \le r$.

Note that the condition (13) imposes no conditions on the behavior of $\mathcal{L}_n$ outside the ball of radius $r$ centered at $\beta^*$. In this way, it differs from the RSC condition used in Loh and Wainwright [35], where a weaker inequality is assumed to hold for vectors outside the local region. This paper focuses on the local behavior of stationary points around $\beta^*$, since the loss functions used for robust regression may be more wildly nonconvex away from the origin. As discussed in more detail below, we will take $r$ to scale as a constant independent of $n$, $p$, and $k$. The ball of radius $r$ essentially cuts out a local basin of attraction around $\beta^*$ in which stationary points of the M-estimator are well-behaved. Furthermore, our optimization results in Section 4 guarantee that we may efficiently locate stationary points within this constant-radius region via a two-step M-estimator.
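
For readers reconstructing the missing display (13): a local RSC condition of the type used by Loh and Wainwright, restricted to the ball of radius $r$ around $\beta^*$, would read roughly as follows. The constant names and the exact form of the tolerance term are assumptions on our part.

```latex
% Plausible form of the local RSC condition (13), for all \beta_1, \beta_2 with
% \|\beta_1 - \beta^*\|_2 \le r and \|\beta_2 - \beta^*\|_2 \le r:
\left\langle \nabla \mathcal{L}_n(\beta_1) - \nabla \mathcal{L}_n(\beta_2),\; \beta_1 - \beta_2 \right\rangle
\;\ge\; \alpha \,\|\beta_1 - \beta_2\|_2^2 \;-\; \tau \,\frac{\log p}{n}\,\|\beta_1 - \beta_2\|_1^2 .
```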

We have the following main result, which requires the regularizer and loss function to satisfy the conditions of Assumptions 1 and 2, respectively. The theorem guarantees that stationary points within the local region where the loss function satisfies restricted strong convexity are statistically consistent.

Theorem 1.

Suppose $\mathcal{L}_n$ satisfies the RSC condition (13) and the regularizer $\rho_\lambda$ is $\mu$-amenable, with $\mu$ sufficiently small relative to the RSC curvature $\alpha$. Suppose, in addition, that the sample size and the regularization parameter $\lambda$ satisfy

(14)

Let $\tilde{\beta}$ be a stationary point of the program (2) such that $\|\tilde{\beta} - \beta^*\|_2 \le r$. Then $\tilde{\beta}$ exists and satisfies the bounds

The proof of Theorem 1 is contained in Section B.1. Note that the statement of Theorem 1 is entirely deterministic, and the distributional properties of the covariates and error terms in the linear model come into play in verifying that the inequality (14) and the RSC condition (13) hold with high probability under the prescribed sample size scaling.

Remark: Although Theorem 1 only guarantees the statistical consistency of stationary points within the local region of radius $r$, it is essentially the strongest conclusion one can draw based on the local RSC assumption (13) alone. The power of Theorem 1 lies in the fact that when $r$ is chosen to be a constant and $\frac{k \log p}{n} \to 0$, as is the case in our robust regression settings of interest, all stationary points within the constant-radius region are actually guaranteed to fall within a shrinking ball of radius proportional to $\sqrt{\frac{k \log p}{n}}$ centered around $\beta^*$. Hence, the stationary points in the local region are statistically consistent at the usual minimax rate expected for $\ell_1$-penalized ordinary least squares regression with sub-Gaussian data. As we will illustrate in more detail in the next section, if robust loss functions with bounded derivatives are used in place of the ordinary least squares loss, the statistical consistency conclusion of Theorem 1 still holds even when the additive errors follow a heavy-tailed distribution or are contaminated by outliers.

3.2 Establishing sufficient conditions

From Theorem 1, we see that the key ingredients for statistical consistency of local stationary points are (i) the boundedness of $\|\nabla \mathcal{L}_n(\beta^*)\|_\infty$ in inequality (14), which ultimately dictates the $\ell_2$-rate of convergence of $\tilde{\beta}$ to $\beta^*$ up to a factor of $\sqrt{k}$, and (ii) the local RSC condition (13) in Assumption 2. We provide more interpretable sufficient conditions in this section via a series of propositions.

For the results of this section, we will require some boundedness conditions on the derivatives of the loss function $\ell$, which we state in the following assumption:

Assumption 3.

Suppose there exist constants $\kappa_1, \kappa_2 \ge 0$ such that

$|\ell'(u)| \le \kappa_1, \quad \text{for all } u \in \mathbb{R},$   (15)
$\ell''(u) \ge -\kappa_2, \quad \text{for all } u \in \mathbb{R} \text{ at which } \ell'' \text{ exists}.$   (16)

Note that the bounded derivative assumption (15) holds for all the robust loss functions highlighted in Section 2 (but not for the ordinary least squares loss), and in each of those cases the bound $\kappa_1$ scales with the tuning constant of the loss. Furthermore, inequality (16) holds with $\kappa_2 = 0$ when $\ell$ is convex and twice-differentiable, but the inequality also holds for nonconvex losses such as the Tukey and Cauchy loss with $\kappa_2 > 0$. By a more careful argument, we may eschew the condition (16) if $\ell$ is a convex function that is in $C^1$ but not $C^2$, as in the case of the Huber loss, since Theorem 1 only requires first-order differentiability of $\ell$ and $\rho_\lambda$; however, we state the propositions with Assumption 3 for the sake of simplicity.

We have the following proposition, which establishes the gradient bound (14) with high probability under fairly mild assumptions:

Proposition 1.

Suppose $\ell$ satisfies the bounded derivative condition (15) and the following conditions also hold:

  • is sub-Gaussian with parameter .

  • Either

    • and , or

    • .

Then, with high probability, the loss function $\mathcal{L}_n$ defined by equation (10) satisfies the bound

The proof of Proposition 1 is a simple but important application of sub-Gaussian tail bounds and is provided in Appendix C.1.

Remark: Note that for the unweighted M-estimator (4), conditions (1) and (2a) of Proposition 1 hold when $x_i$ is sub-Gaussian and independent of $\epsilon_i$. If the $x_i$'s are not sub-Gaussian, condition (1) nonetheless holds whenever the reweighted covariate vectors are bounded. Furthermore, condition (2b) holds whenever $\epsilon_i$ has a symmetric distribution and $\ell'$ is an odd function. We further highlight the fact that aside from a possible mild requirement of symmetry, the concentration result given in Proposition 1 is independent of the distribution of $\epsilon_i$, and holds equally well for heavy-tailed error distributions. The distributional effect of the $x_i$'s is captured in the sub-Gaussian parameter; in settings where the contaminated data still follow a sub-Gaussian distribution, but the sub-Gaussian parameter is inflated due to large leverage points, using a weight function as defined in equation (9) may lead to a significant decrease in the value of this parameter. This decreases the finite-sample bias of the overall estimator.

Establishing the local RSC condition in Assumption 2 is more subtle, and the propositions described below depend in a more complex fashion on the distribution of the $x_i$'s. As noted above, the statistical consistency result in Theorem 1 only requires Assumption 2 to hold when one of the two vectors is fixed at $\beta^*$. However, for the stronger oracle result of Theorem 2, we will require the full form of Assumption 2 to hold over all pairs in the local region. We will quantify the parameters of the RSC condition in terms of an additional radius parameter, which is treated as a fixed constant. Define the tail probability

(17)

and the lower-curvature bound

(18)

where the second derivative $\ell''$ is assumed to exist on the corresponding interval. We assume that the radius parameter is chosen small enough so that the lower-curvature bound is strictly positive.

We first consider the case where the loss function takes the usual form of an unweighted -estimator (4). We have the following proposition, proved in Appendix C.2:

Proposition 2.

Suppose the $x_i$'s are drawn from a sub-Gaussian distribution and the loss function $\mathcal{L}_n$ is defined by equation (4). Also suppose the bound

(19)

holds. Suppose $\ell$ satisfies Assumption 3, and suppose the sample size is sufficiently large. Then, with high probability, the loss function satisfies Assumption 2 with

Remark: Note that for a fixed value of the radius $r$, inequality (19) places a tail condition on the distribution of $\epsilon_i$ via the tail probability (17). This may be interpreted as a bound on the variance of the error distribution when $\epsilon_i$ is sub-Gaussian, or a bound on the fraction of outliers when $\epsilon_i$ has a contaminated distribution. Furthermore, the exponential term decreases as a function of a ratio involving the radius $r$; hence, for a larger value of the sub-Gaussian parameter of the covariates, the radius $r$ will need to be smaller in order to satisfy the bound (19). This agrees with the intuition that the local basin of good behavior for the M-estimator is smaller for larger levels of contamination. Finally, note that although the quantities in equations (17) and (18) are deterministic functions of the known regression function $\ell$ and the error distribution, the error distribution is usually unknown a priori. Hence, Proposition 2 should be viewed as more of a qualitative result describing the behavior of the RSC parameters as the amount of contamination of the error distribution increases, rather than a bound that can be used to select a suitable robust loss function.

The situation where $\mathcal{L}_n$ takes the form of a generalized M-estimator (10) is more difficult to analyze in its most general form, so we will instead focus on verifying the RSC condition (13) for the Mallows and Hill-Ryan estimators described in Section 2.2. We will show that the RSC condition holds under weaker conditions on the distribution of the $x_i$'s. We have the following propositions, proved in Appendices C.3 and C.4:

Proposition 3 (Mallows estimator).

Suppose the $x_i$'s are drawn from a sub-exponential distribution and the loss function is defined by

with a covariate weight of the form (9). Also suppose the bound

holds. Suppose $\ell$ satisfies Assumption 3, and suppose the sample size is sufficiently large. Then, with high probability, the loss function satisfies Assumption 2 with

Proposition 4 (Hill-Ryan estimator).

Suppose the loss function is defined by

where the weight function is chosen so that the reweighted covariate vectors are bounded. Also suppose the bound

(20)

holds. Suppose $\ell$ satisfies Assumption 3, and suppose the sample size is sufficiently large. Then, with high probability, the loss function satisfies Assumption 2 with

Remark: Due to the presence of the weighting functions, Proposition 3 imposes weaker distributional requirements on the $x_i$'s than Proposition 2, and the requirements imposed in Proposition 4 are still weaker. In fact, a version of Proposition 3 could be derived with a different choice of weight function, which would not require the $x_i$'s to be sub-exponential. The tradeoff in comparing Proposition 4 to Propositions 2 and 3 is that although the RSC condition holds under weaker distributional assumptions on the covariates, the absolute bound used in place of the sub-Gaussian/sub-exponential parameter may be much larger. Hence, the radius $r$ will need to be smaller in order for inequality (20) to be satisfied, relative to the requirement for inequality (19).

In Section 5 below, we explore the consequences of Propositions 1–4 for heavy-tailed, outlier, and sub-exponential distributions.

3.3 Oracle results and asymptotic normality

As discussed in the preceding two subsections, penalized robust M-estimators produce local stationary points that enjoy $\ell_1$- and $\ell_2$-consistency whenever $\ell'$ is bounded and the errors and covariates satisfy suitable mild assumptions. In fact, a distinguishing aspect of different robust regression loss functions lies not in first-order comparisons, but in second-order considerations concerning the variance of the estimator. This is a well-known concept in classical robust regression analysis, where $p$ is fixed, $n \to \infty$, and the objective function does not contain a penalty term. By the Cramér-Rao bound and under fairly general regularity conditions [32], the optimal choice of $\ell$ that minimizes the asymptotic variance in the low-dimensional setting is the MLE function, $\ell = -\log p_\epsilon$, where $p_\epsilon$ is the probability density function of $\epsilon_i$. When the class of regression functions is constrained to those with bounded influence functions (or bounded values of $\ell'$), however, a more complex analysis reveals that choices of $\ell$ corresponding, e.g., to the losses introduced in Section 2.2 produce better performance [30].

In this section, we establish oracle properties of penalized robust M-estimators. Our main result shows that under many of the assumptions stated earlier, local stationary points of the regularized M-estimators actually agree with the local oracle solution, defined by

(21)

This is particularly attractive from a theoretical standpoint, because the oracle result implies that local stationary points inherit all the properties of the lower-dimensional oracle estimator, which is covered by previous theory.
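
Since the display (21) did not survive extraction, the following shows the form the local oracle program presumably takes, written in the notation used elsewhere in this reconstruction ($S$ the true support, $r$ the RSC radius); treat the exact constraint set as an assumption rather than a quotation.

```latex
% Reconstruction of the local oracle program (21):
\widehat{\beta}^{\mathcal{O}} \;\in\; \arg\min_{\substack{\beta \in \mathbb{R}^p \,:\, \operatorname{supp}(\beta) \subseteq S, \\ \|\beta - \beta^*\|_2 \le r}} \; \mathcal{L}_n(\beta) .
```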

Note that this is truly an oracle estimator, since it requires knowledge of both the actual support set $S$ of $\beta^*$ and of $\beta^*$ itself; the optimization of the loss function is taken only over a small neighborhood around $\beta^*$. In cases where $\mathcal{L}_n$ is convex, or global optima of $\mathcal{L}_n$ that are supported on $S$ lie in the ball of radius $r$ centered around $\beta^*$, the constraint $\|\beta - \beta^*\|_2 \le r$ may be omitted. If $\mathcal{L}_n$ satisfies a local RSC condition (13), the oracle program (21) is guaranteed to be convex, as stated in the following simple lemma, proved in Appendix E.1:

Lemma 1.

Suppose $\mathcal{L}_n$ satisfies the local RSC condition (13) and the sample size satisfies $n \gtrsim k \log p$. Then $\mathcal{L}_n$ is strongly convex over the region $\{\beta \in \mathbb{R}^p : \mathrm{supp}(\beta) \subseteq S,\ \|\beta - \beta^*\|_2 \le r\}$.

In particular, the oracle estimator is guaranteed to be unique.

Our central result of this section shows that when the regularizer is $(\mu, \gamma)$-amenable and the loss function satisfies the local RSC condition in Assumption 2, stationary points of the M-estimator (2) within the local neighborhood of $\beta^*$ are in fact unique and equal to the oracle estimator (21). We also require a beta-min condition on the minimum signal strength, which we denote by $\beta^*_{\min} := \min_{j \in S} |\beta^*_j|$. For simplicity, we state the theorem as a probabilistic result for sub-Gaussian covariates and the unweighted M-estimator (4); similar results could be derived for generalized M-estimators under weaker distributional assumptions, as well.

Theorem 2.

Suppose the loss function is given by the M-estimator (4) and is twice differentiable in the $\ell_2$-ball of radius $r$ around $\beta^*$. Suppose the regularizer is $(\mu, \gamma)$-amenable. Under the same conditions as Theorem 1, suppose in addition that the beta-min condition on $\beta^*_{\min}$ holds and the sample size is sufficiently large. Then, with high probability, any stationary point $\tilde{\beta}$ of the program (2) such that $\|\tilde{\beta} - \beta^*\|_2 \le r$ is supported on $S$ and agrees with the oracle estimator (21).

The proof of Theorem 2 builds upon the machinery developed in the recent paper [36]. However, the argument here is slightly simpler, because we only need to prove the oracle result for stationary points within a radius $r$ of $\beta^*$. For completeness, we include a proof of Theorem 2 in Section B.2, highlighting the modifications that are necessary to obtain the statement in the present paper.

Remark: Several other papers [17, 10, 33] have established oracle results of a similar flavor, but only in cases where the M-estimator takes the form described in Section 2.1 and the loss function is convex. Furthermore, the results of previous authors only concern global optima and/or guarantee the existence of local optima with the desired oracle properties. Hence, our conclusions are at once more general and more complex, since we need a more careful treatment of possible local optima.

In fact, since the oracle program (21) is essentially a $k$-dimensional optimization problem, Theorem 2 allows us to apply previous results in the literature concerning the asymptotic behavior of low-dimensional M-estimators to simultaneously analyze the asymptotic distribution of the oracle estimator and of the local stationary points. Huber [29] studied asymptotic properties of M-estimators when the loss function is convex, and established asymptotic normality under a suitable scaling of the dimension with the sample size, a result which was improved upon by Yohai and Maronna [67]. Portnoy [50] and Mammen [39] extended these results to nonconvex M-estimators. Fewer results exist concerning generalized M-estimators: Bai and Wu [4] and He and Shao [24] established asymptotic normality for a fairly general class of estimators, but under the assumption that the dimension is fixed as the sample size grows. He and Shao [25] extended their results to the case where the dimension is also allowed to grow, and proved asymptotic normality under an appropriate growth condition, assuming a convex loss.

Although the overall M-estimator may be highly nonconvex, the restricted program (21) defining the oracle estimator is nonetheless convex (cf. Lemma 1 above). Hence, the standard convex theory for M-estimators with a diverging number of parameters applies without modification. Since the regularity conditions existing in the literature that guarantee asymptotic normality vary substantially depending on the form of the loss function, we only provide a sample corollary for a specific (unweighted) case, as an illustration of the types of results on asymptotic normality that may be derived from Theorem 2.

Corollary 1.

Suppose the loss function is given by the M-estimator (4) and the regularizer is $(\mu, \gamma)$-amenable. Under the same conditions as Theorem 2, suppose in addition that the regularity conditions needed for low-dimensional asymptotic normality hold. Let $\tilde{\beta}$ be any stationary point of the program (2) such that $\|\tilde{\beta} - \beta^*\|_2 \le r$. If a coordinate lies outside the true support $S$, then the corresponding component of $\tilde{\beta}$ is zero; if it lies in $S$, then we have

where

The proof of Corollary 1 is provided in Appendix D. Analogous results may be derived for other loss functions considered in this paper under slightly different regularity assumptions, by modifying appropriate low-dimensional results with diverging dimensionality (e.g., [50, 39]).

4 Optimization

We now discuss how our statistical theory gives rise to a useful two-step algorithm for optimizing the resulting high-dimensional M-estimators. We first present some theory for the composite gradient descent algorithm, including rates of convergence for the regularized problem. We then describe our new two-step algorithm, which is guaranteed to converge to a stationary point within the local region where the RSC condition holds, even when the M-estimator is nonconvex.

4.1 Composite gradient descent

In order to obtain stationary points of the program (2), we use the composite gradient descent algorithm [47]. Denoting $\bar{\mathcal{L}}_n(\beta) := \mathcal{L}_n(\beta) - q_\lambda(\beta)$, we may rewrite the program as

$\hat{\beta} \in \arg\min_{\|\beta\|_1 \le R} \big\{ \bar{\mathcal{L}}_n(\beta) + \lambda \|\beta\|_1 \big\}.$

Then the composite gradient iterates are given by

$\beta^{t+1} \in \arg\min_{\|\beta\|_1 \le R} \bigg\{ \frac{1}{2} \Big\| \beta - \Big( \beta^t - \frac{\nabla \bar{\mathcal{L}}_n(\beta^t)}{\eta} \Big) \Big\|_2^2 + \frac{\lambda}{\eta} \|\beta\|_1 \bigg\},$   (22)

where $\frac{1}{\eta}$ is the stepsize parameter. Defining the soft-thresholding operator componentwise according to

$\big[S_{\lambda/\eta}(v)\big]_j := \operatorname{sign}(v_j) \max\Big\{ |v_j| - \frac{\lambda}{\eta},\; 0 \Big\},$

a simple calculation shows that the iterates (22) take the form

$\beta^{t+1} = S_{\lambda/\eta}\Big( \beta^t - \frac{\nabla \bar{\mathcal{L}}_n(\beta^t)}{\eta} \Big).$   (23)
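
A minimal Python rendering of an update of the form (23) is given below, written for the $\ell_1$ case; for a $\mu$-amenable nonconvex regularizer, the concave part $q_\lambda$ is folded into the smooth term as in the rewriting above, and the same soft-thresholding update applies. The function names and the omission of the $\|\beta\|_1 \le R$ side constraint are simplifications, not the paper's implementation.

```python
import numpy as np

def soft_threshold(v, t):
    """Componentwise soft-thresholding: sign(v) * max(|v| - t, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def composite_gradient_step(beta, grad_smooth, lam, eta):
    """One composite gradient iterate: gradient step on the smooth part at stepsize 1/eta,
    followed by soft-thresholding at level lam/eta."""
    return soft_threshold(beta - grad_smooth(beta) / eta, lam / eta)
```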

The following theorem guarantees that the composite gradient descent algorithm will converge at a linear rate to a point near $\hat{\beta}$, as long as the initial point is chosen close enough to $\beta^*$. We will require the following assumptions on $\mathcal{L}_n$, where

$\mathcal{T}(\beta_1, \beta_2) := \mathcal{L}_n(\beta_1) - \mathcal{L}_n(\beta_2) - \big\langle \nabla \mathcal{L}_n(\beta_2),\, \beta_1 - \beta_2 \big\rangle$

denotes the Taylor remainder.

Assumption 4.

Suppose $\mathcal{L}_n$ satisfies the restricted strong convexity condition

(24)

for all $\beta$ such that $\|\beta - \beta^*\|_2 \le r$. In addition, suppose $\mathcal{L}_n$ satisfies the restricted smoothness condition

(25)

Note that the definition of the Taylor remainder $\mathcal{T}$ differs slightly from the definition of the related Taylor difference used in Assumption 2. However, one may verify the RSC condition (24) in exactly the same way as we verify the RSC condition (13), via the mean value theorem argument of Section 3.2, so we do not repeat the proofs here. The restricted smoothness condition (25) is fairly mild and is easily seen to hold when the loss function $\ell$ appearing in the definition of the M-estimator has a bounded second derivative. We will also assume for simplicity that $q_\lambda$ is convex, as is the case for the SCAD and MCP regularizers; the theorem may be extended to situations where $q_\lambda$ is nonconvex, given an appropriate quadratic bound on the Taylor remainder of $q_\lambda$.

We have the following theorem, proved in Appendix B.3. It guarantees that as long as the initial point of the composite gradient descent algorithm is chosen close enough to $\beta^*$, the log of the $\ell_2$-error between the iterates and a global minimizer $\hat{\beta}$ of the regularized M-estimator (2) will decrease linearly with the iteration number, up to the order of the statistical error.

Theorem 3.

Suppose $\mathcal{L}_n$ satisfies the RSC condition (24) and the RSM condition (25), and suppose $\rho_\lambda$ is $\mu$-amenable, with $\mu$ sufficiently small relative to the RSC curvature, and $q_\lambda$ is convex. Suppose the regularization parameters satisfy the scaling

Also suppose $\hat{\beta}$ is a global optimum of the objective (2) over the local region where the RSC condition holds. Suppose, in addition, that

(26)

If the initial point $\beta^0$ is chosen such that $\|\beta^0 - \beta^*\|_2 \le r$, successive iterates of the composite gradient descent algorithm satisfy the bound

where $\delta$ is a tolerance parameter and the remaining quantities are constants depending on the RSC and RSM parameters.

Remark: It is not obvious a priori that even if