A Differentiable Alternative to the Lasso Penalty

Hamed Haselimashhadi, Veronica Vinciotti

Abstract

Regularized regression has become increasingly popular, particularly for high-dimensional problems, where the addition of a penalty term to the log-likelihood allows inference where traditional methods fail. A number of penalties have been proposed in the literature, such as lasso, SCAD, ridge and elastic net, to name a few. Despite their advantages and remarkable performance in rather extreme settings, where the number of covariates exceeds the number of observations, all of these penalties, with the exception of ridge, are non-differentiable at zero. This can be a limitation in certain cases, such as the computational efficiency of parameter estimation in non-linear models or the derivation of estimators of the degrees of freedom for model selection criteria. With this paper, we provide the scientific community with a differentiable penalty, which can be used in any situation, but particularly where differentiability plays a key role. We show some desirable features of this function and prove theoretical properties of the resulting estimators within a regularized regression context. A simulation study and the analysis of a real dataset show overall a good performance under different scenarios. The method is implemented in the R package DLASSO, freely available from CRAN at http://CRAN.R-project.org/package=DLASSO.

Keywords: Differentiable penalty, penalised likelihood, regularized regression

1 Introduction

In the usual regularized regression situation, the regression parameters are estimated by minimising

\[
\sum_{i=1}^{n}\Big(y_i-\sum_{j=1}^{p}x_{ij}\beta_j\Big)^{2} + \lambda\sum_{j=1}^{p} p(\beta_j) \qquad (1)
\]

with y_1, …, y_n the observations on the response, which we assume to be centered, x_ij, i = 1, …, n, j = 1, …, p, the observations on the covariates, p(·) a penalty function and λ ≥ 0 a tuning parameter. Various forms of the penalty function have been suggested in the literature, such as the ℓ2 norm, as in ridge regression [11], the ℓ1 norm, as in the popular lasso regression [21], the Smoothly Clipped Absolute Deviation (SCAD) penalty [7], the fused lasso [22], the group lasso [24], and combinations of ℓ1 and ℓ2 norms, such as the elastic net [25] and the Smooth-Lasso [9]. Aside from the ridge penalty, which does not necessarily lead to sparsity and variable selection, all of the other penalties are non-differentiable at zero. This can be a limitation in certain cases, such as computational efficiency for non-linear models [19] or the derivation of the degrees of freedom for model selection criteria, such as the generalised information criterion [18], as pointed out by [1].
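As a concrete illustration of the objective in (1), the following R sketch evaluates the penalised residual sum of squares for a generic penalty; the function and object names are ours and purely illustrative, and are not part of any of the packages cited in this paper.

# Penalised least-squares objective of equation (1): residual sum of squares
# plus lambda times the summed penalty of the coefficients. 'penalty' is any
# function acting elementwise, e.g. abs for the lasso or function(b) b^2 for ridge.
penalised_objective <- function(beta, X, y, lambda, penalty) {
  rss <- sum((y - X %*% beta)^2)
  rss + lambda * sum(penalty(beta))
}

# Example: evaluate the lasso and ridge objectives at an arbitrary coefficient vector.
set.seed(1)
X <- matrix(rnorm(50 * 3), 50, 3)
y <- drop(X %*% c(2, 0, -1)) + rnorm(50)
penalised_objective(c(1, 0, 0), X, y, lambda = 1, penalty = abs)
penalised_objective(c(1, 0, 0), X, y, lambda = 1, penalty = function(b) b^2)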

In this paper, we address this gap by proposing a penalty function that is differentiable at zero and which also possesses many of the desirable properties of existing penalty functions. The function has one tuning parameter, by varying which one can obtain a penalty extremely close to the absolute value (lasso), to a quadratic function (ridge), or to combinations of these. In Section 2, we define this new penalty function, which we call dlasso, and list its properties. In Section 3, we study the properties of the estimators under a dlasso penalty in a regularized regression context. In Section 4, we provide an efficient algorithm for parameter estimation in regularized regression, exploiting the differentiability of the penalty function. In Sections 5 and 6, we study the performance of this new approach on a number of simulated scenarios and on a real dataset, comparing it with existing methods. Finally, in Section 7, we draw some conclusions and point to directions for future work.

2 Our proposal: dlasso

Looking at the literature, a number of differentiable approximations of the absolute value have been proposed, such as

(2)
(3)
(4)
\[
|\beta| \approx \frac{1}{\alpha}\Big[\log\big(1+e^{-\alpha\beta}\big)+\log\big(1+e^{\alpha\beta}\big)\Big] \qquad (5)
\]

Equation (2) is a special case of (3), and the length of the interval defined by equations (3) and (4) is always bounded [14]. The approximation (5) has been used by [19] in a penalized likelihood context. This function is twice differentiable and its maximum absolute difference from the absolute value is controlled by the smoothing parameter, but it does not pass through zero. Passing through zero, however, is a desirable property for a penalty function if one wants the tuning parameter to cover a range of penalties, such as the ℓ1 and ℓ2 norms.

Motivated by this challenge, and noting some advantageous properties of the error function [15], in this paper we propose the following penalty function

(6)

The function can be written in different forms: either in terms of the error function erf, or of its complement erfc = 1 − erf, or of the cdf of a normal distribution with mean 0 and standard deviation s, which we denote by Φ_s.

The function has a number of properties, some of which make it an appealing choice for regularized inference:

  1. for any .

  2. The function is twice differentiable with respect to β, with the derivatives given by

    with φ_s denoting the density of the normal distribution with mean 0 and standard deviation s. Note that, similarly to the SCAD penalty, the dlasso penalty is not convex: its second derivative is not positive for all values of β.

  3. As s → 0, the function converges to the absolute value exponentially fast. In fact, we prove that

    Proof.

    For the proof, we use the following bound on the complementary error function, given in [2]:
\[
\frac{2}{\sqrt{\pi}}\,\frac{e^{-x^{2}}}{x+\sqrt{x^{2}+2}} \;<\; \operatorname{erfc}(x) \;\le\; \frac{2}{\sqrt{\pi}}\,\frac{e^{-x^{2}}}{x+\sqrt{x^{2}+4/\pi}}, \qquad x \ge 0.
\]

    Using the inequalities above we get,

    Following a similar approach for the other case leads to the same result. Consequently,

    The right-hand side (RHS) of this inequality tends to zero at an exponential rate as s → 0. Figure 1 (left) illustrates this result, showing that our chosen function converges to the absolute value faster than the alternative approximations.

    Figure 1: Key properties of the dlasso penalty. Left: fast convergence to the absolute value compared to other approximations. Right: dlasso behaves like lasso for small s and like ridge for larger s and small β.

    FIGURE 1 ABOUT HERE

  4. If we set s appropriately, the function behaves like the ℓ2 norm in the vicinity of zero.
    This is due to the fact that, for small x, erf(x) ≈ 2x/√π.

    Then, for an appropriate choice of s, the RHS becomes β². On the other hand, the ridge penalty is exactly β², from which the ℓ2-like behaviour near zero follows.

The last two points are summarized in Figure 1 (right): when s is small the function behaves like the absolute value, whereas for larger values of s it behaves like a quadratic in the vicinity of zero.
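The properties above can be checked numerically. The short R sketch below assumes one specific erf-based form for the penalty, namely β (2Φ_s(β) − 1), with Φ_s the normal cdf with mean 0 and standard deviation s; the exact scaling of s in equation (6) may differ by a constant, but the qualitative behaviour (absolute-value-like for small s, quadratic near zero for larger s) is the same.

# Assumed dlasso-type penalty: beta * (2 * Phi_s(beta) - 1), one erf-based form
# consistent with the description of equation (6); the paper's exact scaling of s
# may differ by a constant.
dlasso_pen <- function(beta, s) beta * (2 * pnorm(beta, mean = 0, sd = s) - 1)

beta <- seq(-2, 2, by = 0.5)

# Small s: numerically indistinguishable from |beta| (property 3).
max(abs(dlasso_pen(beta, s = 0.01) - abs(beta)))

# Larger s: near zero the penalty grows quadratically, i.e. ridge-like (property 4).
b <- c(0.01, 0.02, 0.04)
dlasso_pen(b, s = 1) / b^2   # approximately constant for small beta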

The final point to discuss is computational cost, which is the only potential difficulty with our proposal. However, a number of good and fast approximations for evaluating the error function or the cdf of a normal distribution are available in the literature. One option is to use series expansions, for instance based on one of the expressions below

\[
\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}}\sum_{k=0}^{\infty}\frac{(-1)^{k}x^{2k+1}}{k!\,(2k+1)} \qquad (7)
\]
\[
\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}}\,e^{-x^{2}}\sum_{k=0}^{\infty}\frac{2^{k}x^{2k+1}}{1\cdot 3\cdots(2k+1)} \qquad (8)
\]
\[
\operatorname{erfc}(x) \sim \frac{e^{-x^{2}}}{x\sqrt{\pi}}\left(1+\sum_{m=1}^{\infty}(-1)^{m}\frac{1\cdot 3\cdots(2m-1)}{(2x^{2})^{m}}\right) \qquad (9)
\]

For small x, the series in (7) is slightly faster than the series in (8) because there is no need to compute an exponential. However, the series in (8) is preferable to (7) for moderate x because it involves no cancellation. For large x, neither series is satisfactory, and in this case it is preferable to use the asymptotic expansion (9) for the complementary error function.

Besides series expansions, there are alternative fast algorithms that approximate the error function or the normal cdf; see for example [23, 15, 5, 16, 13, 6, 3]. In particular, two fast approximations are given by

These are not only fast but also very precise; the maximum absolute error of the last one, in particular, is extremely small.
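In R, for instance, the error function and its complement can be obtained directly from the built-in normal cdf, with no need for hand-coded approximations, via the identity erf(x) = 2Φ(x√2) − 1:

# Error function and its complement via the normal cdf.
erf  <- function(x) 2 * pnorm(x * sqrt(2)) - 1
erfc <- function(x) 2 * pnorm(x * sqrt(2), lower.tail = FALSE)

erf(1)            # about 0.8427
erf(1) + erfc(1)  # equals 1 by definition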

3 Regularized regression based on dlasso

In this section, we use the new penalty function in the traditional regularized regression context and discuss the theoretical properties of the resulting estimators.

Using the new penalty function in equation (1) results in the optimization problem

\[
\hat{\boldsymbol\beta} = \arg\min_{\boldsymbol\beta}\left\{ \sum_{i=1}^{n}\Big(y_i-\sum_{j=1}^{p}x_{ij}\beta_j\Big)^{2} + \lambda\sum_{j=1}^{p} p_s(\beta_j)\right\} \qquad (10)
\]

where p_s(·) denotes the dlasso penalty defined in equation (6).

In order to get an insight into the resulting estimators, let us consider the case of a single standardized covariate. Then the estimators are the solutions of

Figure 2 shows the estimators for a range of values of s. As discussed before and as evident from this plot, panels (a), (c) and (d) show regularizations similar to lasso, ridge and non-penalized linear regression, respectively.

FIGURE 2 ABOUT HERE

Figure 2: Thresholding functions for a fixed λ and different values of s: a small value (lasso-like), an intermediate value, a larger value (ridge-like) and a very large value (ordinary least squares).
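A plot like Figure 2 can be reproduced numerically by solving the univariate penalised problem over a grid of least-squares estimates. The sketch below again assumes the erf-based penalty form used in the earlier sketch and is purely illustrative of the thresholding behaviour; the constants are arbitrary.

# Assumed dlasso-type penalty (see the earlier sketch); illustrative only.
dlasso_pen <- function(beta, s) beta * (2 * pnorm(beta, sd = s) - 1)

# Univariate penalised least squares: minimise (z - beta)^2 + lambda * pen(beta),
# where z plays the role of the ordinary least-squares estimate.
dlasso_threshold <- function(z, lambda, s) {
  obj <- function(beta) (z - beta)^2 + lambda * dlasso_pen(beta, s)
  optimize(obj, interval = c(-abs(z) - 1, abs(z) + 1))$minimum
}

z <- seq(-3, 3, by = 0.1)
# Small s gives a soft-thresholding-like curve, very large s an almost linear (OLS-like) one.
theta_lasso_like <- sapply(z, dlasso_threshold, lambda = 2, s = 0.01)
theta_ols_like   <- sapply(z, dlasso_threshold, lambda = 2, s = 100)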

Given the very good approximation of the function to the absolute value when s is close to zero, we expect the estimators to have properties similar to those of the lasso estimators in this case. This is proved by the next two theorems, where we follow an approach similar to that of [12].

Theorem 3.1.

For any , and define,

where is the objective function in equation (10). Then,

where denotes a normally distributed random variable.

Proof.

Recalling from equation (10), then

and

Theorem 3.1 shows that the limiting distribution of the estimators under the new penalty is similar to that of the lasso, see [12, Theorem 1], provided s is close enough to zero. That is, the penalization is capable of producing sparse estimates. The theorem, however, does not provide an optimal rate for s to ensure this convergence. In the next theorem, we give the minimum rate at which s must tend to zero, as a function of n, that guarantees convergence of the estimators to the lasso.

Theorem 3.2.

Let be a sparse set of coefficients, , , , , and where is non-singular. Then, where,

Proof.

Consider . Then

where and are sets of zero and non-zero coefficients respectively. ∎

Similarly to the derivation in [12], one can show that this result guarantees a sparse estimation of the parameters. In the proof of Theorem 3.2, we assumed a specific rate of decay for s as a function of n. Thus, in practice, if one chooses any s smaller than this rate, the resulting estimators are similar to the lasso ones.

4 Algorithm for parameter estimation

The penalised likelihood (10) is differentiable with respect to β, so standard optimization routines can be used to find its minimum. These, however, can be slow. In this section we propose an efficient algorithm which exploits the differentiability of the dlasso penalty function. To this end, we follow [7] and define the iterative algorithm

\[
\boldsymbol\beta^{(k+1)} = \left( X^{\top}X + \lambda\,\Sigma\big(\boldsymbol\beta^{(k)}\big) \right)^{-1} X^{\top}\mathbf{y} \qquad (11)
\]

where β^(0) is an initial estimate of the parameters and Σ(β^(k)) is defined by

In order to derive this, we take the first-order Taylor approximation of the dlasso penalty function around the current estimate β^(k), given by

Note that the differentiability of dlasso means that we do not need to resort to local quadratic approximations as in [7]. Given the current estimate, we can now rewrite this approximation as

(12)

Substituting (12) into the penalised likelihood (10) results in a penalised least-squares problem, the optimum of which can be found efficiently by iteratively computing the ridge regression update in (11).
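Because the objective is smooth, a generic quasi-Newton routine with an analytic gradient already provides a workable, if less efficient, alternative to the iterative ridge updates in (11). The R sketch below is not the algorithm of this section: it simply hands the penalised objective to optim, again assuming the erf-based penalty form used in the earlier sketches.

# Penalty, its derivative and the penalised objective, assuming the erf-based form
# beta * (2 * Phi_s(beta) - 1); its derivative is 2 * Phi_s(beta) - 1 + 2 * beta * phi_s(beta),
# with phi_s the N(0, s^2) density.
dlasso_pen   <- function(b, s) b * (2 * pnorm(b, sd = s) - 1)
dlasso_deriv <- function(b, s) 2 * pnorm(b, sd = s) - 1 + 2 * b * dnorm(b, sd = s)

dlasso_fit <- function(X, y, lambda, s) {
  obj  <- function(b) sum((y - X %*% b)^2) + lambda * sum(dlasso_pen(b, s))
  grad <- function(b) -2 * drop(t(X) %*% (y - X %*% b)) + lambda * dlasso_deriv(b, s)
  optim(rep(0, ncol(X)), obj, grad, method = "BFGS")$par
}

# Toy example: with a small s the irrelevant coefficients are shrunk towards zero.
set.seed(1)
X <- scale(matrix(rnorm(100 * 5), 100, 5))
y <- drop(X %*% c(3, 1.5, 0, 0, 2)) + rnorm(100)
round(dlasso_fit(X, y, lambda = 20, s = 0.01), 3)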

This algorithm has been implemented in the R package DLASSO, which is freely available from CRAN at http://CRAN.R-project.org/package=DLASSO. The package also allows the tuning parameters λ and s to be selected by common model selection criteria, such as the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) or Generalized Cross-Validation (GCV).

5 Simulation study

We have performed a simulation study to assess the performance of the new penalty in a regression context, y = Xβ + ε, with ε a vector of independent normal errors. Similarly to [25], we design three simulation scenarios.

  1. : Standard. We set , simulate the predictors from a multivariate normal distribution with mean zero and correlation and generate the response with .

  2. : Small s. Same as the first scenario except that .

  3. : Correlated predictors. We consider and divide the coefficients into three groups, , and . We consider a high correlation of amongst each pair of the first five covariates. Similarly, we assume a correlation of in the second group, whereas we assume no dependency in the third group. Finally, we set .

For all three scenarios, we generate 50 datasets: in each dataset, part of the observations is assigned to the training set and the remaining observations to the test set. The penalty parameters are tuned on the training set using 10-fold Cross-Validation (CV) on the mean squared error.

We compare the following models: dlasso with s fixed at 0.01 and λ tuned (referred to as dlasso.s), dlasso with both s and λ tuned, lasso, ridge, elastic net (enet), SCAD and Ordinary Least Squares regression (OLS). For lasso and elastic net, we use the R package msgps [10], for SCAD we use the R package ncvreg [4], and for ridge we use the R package glmnet [8]. For the dlasso methods, we use the R package DLASSO. In the first scenario, we also consider the log-approximation of [19], as in equation (5). However, this penalty turned out to be rather unstable in the optimization, so we did not include it in the other scenarios.
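For concreteness, the R sketch below illustrates the evaluation protocol for one of the compared methods, ridge regression via glmnet: train/test split, 10-fold CV tuning on the training set, then computation of the prediction error on the test set and of the error of the estimated parameters. The data-generating values are illustrative placeholders, not the scenario-specific settings described above.

# Illustrative evaluation protocol; the coefficients, sample sizes and noise level
# below are placeholders, not the exact settings of the three scenarios.
library(glmnet)
set.seed(1)

beta_true <- c(3, 1.5, 0, 0, 2, 0, 0, 0)
n <- 240; p <- length(beta_true)
X <- matrix(rnorm(n * p), n, p)
y <- drop(X %*% beta_true) + rnorm(n, sd = 3)

train <- sample(n, 40)                                            # the rest goes to the test set
fit   <- cv.glmnet(X[train, ], y[train], alpha = 0, nfolds = 10)  # ridge, 10-fold CV

yhat     <- predict(fit, newx = X[-train, ], s = "lambda.min")
beta_hat <- as.numeric(coef(fit, s = "lambda.min"))[-1]           # drop the intercept

mse_pred <- mean((y[-train] - yhat)^2)        # prediction MSE on the test set
mse_beta <- mean((beta_true - beta_hat)^2)    # MSE of the estimated parameters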

Figure 3 shows the results. For each scenario, we plot the distribution (over the 50 iterations) of the mean squared error of the predictions on the test set and of the mean squared error of the estimated parameters with respect to the true parameter values. Among all the methods, the first four plotted are based on objectives that are differentiable at zero (OLS, ridge, dlasso with s = 0.01 and general dlasso), while the others are popular penalties in the regularized regression literature. Overall, the dlasso penalty performs as well as or better than existing penalties, and is superior to ridge, which is the only alternative differentiable penalty for regularized problems, and to SCAD, which is the only alternative non-convex penalty.

Figure 3: Simulation study comparing dlasso with existing methods on three scenarios. Two versions of dlasso are considered: one where both s and λ are tuned by 10-fold CV (referred to as dlasso), and one where s is fixed at 0.01, i.e. close to an ℓ1 penalty, and only λ is tuned by CV (referred to as dlasso.s). The plots report the Mean Squared Error (MSE) of the predictions on the test set (left) and of the estimated regression parameters (right).

6 Prostate cancer example

We consider the prostate data of [20], previously analysed by [25] using regularized regression methods. The objective of the analysis is to investigate the correlation between the level of prostate specific antigen (lpsa) and a number of clinical measurements in 97 men who were about to receive a radical prostatectomy. There are eight covariates: log cancer volume (lcavol), log prostate weight (lweight), log benign prostatic hyperplasia amount (lbph), log capsular penetration (lcp), age, Gleason score (gleason), percentage of Gleason scores 4 or 5 (pgg45) and seminal vesicle invasion (svi). All covariates are normalized to have zero mean and unit variance, and the response is centred to have zero mean.

Lasso, ridge, SCAD, OLS, elastic net and dlasso are applied to the data. BIC is used to select all tuning parameters. For dlasso, we also consider the case of s fixed to 1, where the results are expected to be similar to ridge, and of s fixed to 100, where we expect a solution similar to OLS. The results in Table 1 show a similar performance of dlasso compared with lasso and elastic net. All three models select the same five predictors and are superior to SCAD in terms of AIC and BIC. The comparison with ridge and OLS confirms our expectations. Finally, dlasso shows a better BIC than OLS, probably due to the effect of the small amount of regularization still present for s = 100.

Method         Tuning       AIC     BIC     df   Significant variables
dlasso         s = 0.001    207.6   216.1   5    lcavol, lbph, lweight, pgg45, svi
lasso          -            206.7   215.3   5    lcavol, lbph, lweight, pgg45, svi
elastic net    0.001        206.8   215.3   5    lcavol, lbph, lweight, pgg45, svi
SCAD           -            214.6   231     4    lcavol, lbph, lweight, svi
Ridge:
dlasso         s = 1        207.4   227     8    all variables
ridge          -            207.1   226     8    all variables
OLS:
dlasso         s = 100      204     228     8    all variables
OLS            -            202     233     8    all variables
Table 1: Comparison of lasso, ridge, SCAD, OLS, elastic net and dlasso on the prostate dataset, with dlasso reported for s selected by BIC (s = 0.001), s = 1 (ridge-like) and s = 100 (OLS-like). All tuning parameters are selected by BIC. The methods are compared in terms of AIC, BIC and sparsity.
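A minimal version of this preprocessing, together with a BIC-based selection along a lasso path, is sketched below. It assumes that the prostate data are available as a data frame named prostate containing the eight covariates and the response lpsa, and it uses a standard Gaussian BIC along the glmnet lasso path rather than the exact implementation used to produce Table 1.

# Assumes a data frame `prostate` with the eight covariates and the response lpsa.
library(glmnet)

covars <- c("lcavol", "lweight", "age", "lbph", "svi", "lcp", "gleason", "pgg45")
X <- scale(as.matrix(prostate[, covars]))   # covariates with zero mean and unit variance
y <- prostate$lpsa - mean(prostate$lpsa)    # centred response

fit <- glmnet(X, y, alpha = 1, intercept = FALSE)   # lasso path

# Gaussian BIC along the path: n * log(RSS / n) + log(n) * df,
# with the degrees of freedom reported by glmnet.
n   <- length(y)
rss <- colSums((y - predict(fit, newx = X))^2)
bic <- n * log(rss / n) + log(n) * fit$df

best <- which.min(bic)
coef(fit)[, best]   # coefficients of the BIC-optimal lasso fit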

7 Conclusions

In this paper, we have proposed a novel penalty term that is capable of producing similar results to other well-known penalty functions in the context of regularized regression. One key difference, however, is that this new penalty is differentiable. This opens up the possibility of using it in many contexts where differentiability plays a key role. For example, a differentiable objective function could lead to more efficient implementations of parameter estimation procedures for certain models or to improved model selection criteria by a more accurate estimation of the bias term. These aspects will be investigated in future work.

References

  • [1] A. Abbruzzo, I. Vujačić, E. Wit, and A. M. Mineo. Generalized information criterion for model selection in penalized graphical models. ArXiv e-prints, 2014.
  • [2] M. Abramowitz and I.A. Stegun. Handbook of mathematical functions: With formulas, graphs, and mathematical tables. Dover Publications, 2012.
  • [3] P. O. Börjesson and C. E. W. Sundberg. Simple approximations of the error function Q(x) for communications applications. IEEE Transactions on Communications, 27(3):639–643, 1979.
  • [4] Patrick Breheny and Jian Huang. Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection. Annals of Applied Statistics, 5(1):232–253, 2011.
  • [5] Sylvain Chevillard and Nathalie Revol. Computation of the error function erf in arbitrary precision with correct rounding. JD Bruguera et M. Daumas (editeurs): RNC, 8:27–36, 2008.
  • [6] W. J. Cody. Performance evaluation of programs for the error and complementary error functions. ACM Trans. Math. Softw., 16(1):29–37, 1990.
  • [7] Jianqing Fan and Runze Li. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456):1348–1360, 2001.
  • [8] Jerome Friedman, Trevor Hastie, and Rob Tibshirani. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33(1):1–22, 2010.
  • [9] Mohamed Hebiri and Sara van de Geer. The Smooth-Lasso and other ℓ1 + ℓ2-penalized methods. Electronic Journal of Statistics, 5:1184–1226, 2011.
  • [10] Kei Hirose. msgps: Degrees of freedom of elastic net, adaptive lasso and generalized elastic net, 2012. R package version 1.3.
  • [11] Arthur E Hoerl and Robert W Kennard. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics, 12(1):55–67, 1970.
  • [12] Keith Knight and Wenjiang Fu. Asymptotics for lasso-type estimators. Annals of Statistics, 28(5):1356–1378, 2000.
  • [13] Chu-In Charles Lee. On Laplace continued fraction for the normal integral. Annals of the Institute of Statistical Mathematics, 44(1):107–120, 1992.
  • [14] Yu Nesterov. Smooth minimization of non-smooth functions. Mathematical programming, 103(1):127–152, 2005.
  • [15] F.W.J. Olver, National Institute of Standards, and Technology (U.S.). NIST Handbook of mathematical functions. Cambridge University Press, 2010.
  • [16] W.H. Press. Numerical Recipes in C: The art of scientific computing. Number v. 4. Cambridge University Press, 1992.
  • [17] Carlos Ramirez, Reinaldo Sanchez, et al. √(x² + μ) is the most computationally efficient smooth approximation to |x|: A proof. Journal of Uncertain Systems, 8, 2014.
  • [18] Sadanori Konishi and Genshiro Kitagawa. Generalised information criteria in model selection. Biometrika, 83(4):875–890, 1996.
  • [19] Mark Schmidt, Glenn Fung, and Romer Rosales. Fast optimization methods for L1 regularization: A comparative study and two new approaches. In Machine Learning: ECML 2007, pages 286–297. 2007.
  • [20] Thomas A Stamey, John N Kabalin, John E McNeal, Iain M Johnstone, Fuad Freiha, Elise A Redwine, and Norman Yang. Prostate specific antigen in the diagnosis and treatment of adenocarcinoma of the prostate. II. Radical prostatectomy treated patients. The Journal of Urology, 141(5):1076–1083, 1989.
  • [21] Robert Tibshirani. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society. Series B (Methodological), pages 267–288, 1996.
  • [22] Robert Tibshirani, Michael Saunders, Saharon Rosset, Ji Zhu, and Keith Knight. Sparsity and smoothness via the fused lasso. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(1):91–108, 2005.
  • [23] Hector Vazquez Leal, Roberto Castaneda Sheissa, Uriel Filobello Nino, Arturo Sarmiento Reyes, and Jesus Sanchez Orea. High accurate simple approximation of normal distribution integral. Mathematical Problems in Engineering, 2012, 2012.
  • [24] Ming Yuan and Yi Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(1):49–67, 2006.
  • [25] Hui Zou and Trevor Hastie. Regularization and variable selection via the Elastic Net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2):301–320, 2005.