Bolasso: Model Consistent Lasso Estimation through the Bootstrap

Abstract

We consider the least-square linear regression problem with regularization by the $\ell_1$-norm, a problem usually referred to as the Lasso. In this paper, we present a detailed asymptotic analysis of model consistency of the Lasso. For various decays of the regularization parameter, we compute asymptotic equivalents of the probability of correct model selection (i.e., variable selection). For a specific rate of decay, we show that the Lasso selects all the variables that should enter the model with probability tending to one exponentially fast, while it selects all other variables with strictly positive probability. We show that this property implies that if we run the Lasso for several bootstrapped replications of a given sample, then intersecting the supports of the Lasso bootstrap estimates leads to consistent model selection. This novel variable selection algorithm, referred to as the Bolasso, compares favorably to other linear regression methods on synthetic data and datasets from the UCI machine learning repository.

1 Introduction

Regularization by the $\ell_1$-norm has attracted a lot of interest in recent years in machine learning, statistics and signal processing. In the context of least-square linear regression, the problem is usually referred to as the Lasso [?]. Much of the early effort was dedicated to algorithms that solve the optimization problem efficiently. In particular, the Lars algorithm of Efron et al. (2004) allows one to find the entire regularization path (i.e., the set of solutions for all values of the regularization parameter) at the cost of a single matrix inversion.

Moreover, a well-known justification of regularization by the $\ell_1$-norm is that it leads to sparse solutions, i.e., loading vectors with many zeros, and thus performs model selection. Recent works [?, ?, ?, ?] have looked precisely at the model consistency of the Lasso, i.e., if we know that the data were generated from a sparse loading vector, does the Lasso actually recover its sparsity pattern when the number of observed data points grows? In the case of a fixed number of covariates, the Lasso does recover the sparsity pattern if and only if a certain simple condition on the generating covariance matrices is verified [?]. In particular, in low correlation settings, the Lasso is indeed consistent. However, in the presence of strong correlations, the Lasso cannot be consistent, shedding light on potential problems of such procedures for variable selection. Adaptive versions, where data-dependent weights are added to the $\ell_1$-norm, then allow one to keep consistency in all situations [?].

In this paper, we first derive a detailed asymptotic analysis of sparsity pattern selection of the Lasso estimation procedure, which extends previous analyses [?, ?, ?] by focusing on a specific decay of the regularization parameter. We show that when the decay is proportional to $n^{-1/2}$, where $n$ is the number of observations, the Lasso selects all the variables that should enter the model (the relevant variables) with probability tending to one exponentially fast with $n$, while it selects all other variables (the irrelevant variables) with strictly positive probability. If several datasets generated from the same distribution were available, the latter property would suggest considering the intersection of the supports of the Lasso estimates for each dataset: all relevant variables would always be selected for all datasets, while irrelevant variables would enter the models randomly, and intersecting the supports from sufficiently many different datasets would simply eliminate them. However, in practice, only one dataset is given; but resampling methods such as the bootstrap are exactly dedicated to mimicking the availability of several datasets by resampling from the same unique dataset [?]. In this paper, we show that when using the bootstrap and intersecting the supports, we actually get a consistent model estimate, without the consistency condition required by the regular Lasso. We refer to this new procedure as the Bolasso (bootstrap-enhanced least absolute shrinkage operator). Finally, our Bolasso framework could be seen as a voting scheme applied to the supports of the bootstrap Lasso estimates; however, our procedure may rather be considered a consensus combination scheme, as we keep the (largest) subset of variables on which all regressors agree in terms of variable selection, which in our case is provably consistent and also allows us to get rid of a potential additional hyperparameter.

The paper is organized as follows: in Section 2, we present the asymptotic analysis of model selection for the Lasso; in Section 3, we describe the Bolasso algorithm as well as its proof of model consistency, while in Section 4, we illustrate our results on synthetic data, where the true sparse generating model is known, and data from the UCI machine learning repository. Sketches of proofs can be found in Appendix A.

Notations

For a vector $v \in \mathbb{R}^p$, we let $\|v\|_1$, $\|v\|_2$ and $\|v\|_\infty$ denote its $\ell_1$-norm, $\ell_2$-norm and $\ell_\infty$-norm. For $a \in \mathbb{R}$, $\operatorname{sign}(a)$ denotes the sign of $a$, defined as $\operatorname{sign}(a) = 1$ if $a > 0$, $-1$ if $a < 0$, and $0$ if $a = 0$. For a vector $v \in \mathbb{R}^p$, $\operatorname{sign}(v) \in \{-1, 0, 1\}^p$ denotes the vector of signs of the elements of $v$.

Moreover, given a vector $v \in \mathbb{R}^p$ and a subset $I$ of $\{1, \dots, p\}$, $v_I$ denotes the vector in $\mathbb{R}^{|I|}$ of elements of $v$ indexed by $I$. Similarly, for a matrix $A \in \mathbb{R}^{p \times p}$, $A_{I_1 I_2}$ denotes the submatrix of $A$ composed of the elements of $A$ whose rows are in $I_1$ and columns are in $I_2$.
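To make these conventions concrete, the following small NumPy sketch mirrors the sign and sub-indexing notation (the vector, matrix and index sets are arbitrary illustrations, not objects from the paper); numpy.sign and numpy.ix_ play the roles of the sign vector and of the sub-matrix extraction.

    import numpy as np

    v = np.array([1.5, 0.0, -2.0, 0.3])           # an arbitrary vector in R^4
    A = np.arange(16, dtype=float).reshape(4, 4)  # an arbitrary 4 x 4 matrix

    sign_v = np.sign(v)        # vector of signs of the elements of v, in {-1, 0, 1}^4
    I1 = [0, 2]                # a subset of indices (0-based here)
    I2 = [1, 3]
    v_I1 = v[I1]               # sub-vector of elements of v indexed by I1
    A_sub = A[np.ix_(I1, I2)]  # sub-matrix with rows in I1 and columns in I2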

2 Asymptotic Analysis of Model Selection for the Lasso

In this section, we describe existing and new asymptotic results regarding the model selection capabilities of the Lasso.

2.1 Assumptions

We consider the problem of predicting a response $Y \in \mathbb{R}$ from covariates $X \in \mathbb{R}^p$. The only assumptions that we make on the joint distribution of $(X, Y)$ are the following:

  • (A1) The cumulant generating functions $\mathbb{E}\, e^{s \|X\|_2^2}$ and $\mathbb{E}\, e^{s Y^2}$ are finite for some $s > 0$.

  • (A2) The joint matrix of second order moments $Q = \mathbb{E}\,[X X^\top] \in \mathbb{R}^{p \times p}$ is invertible.

  • (A3) $\mathbb{E}(Y \mid X) = \mathbf{w}^\top X$ and $\operatorname{var}(Y \mid X) = \sigma^2$ almost surely, for some $\mathbf{w} \in \mathbb{R}^p$ and $\sigma > 0$.

We let $J = \{ j : \mathbf{w}_j \neq 0 \}$ denote the sparsity pattern of $\mathbf{w}$, $\operatorname{sign}(\mathbf{w})$ its sign pattern, and $\varepsilon = Y - \mathbf{w}^\top X$ the additive noise (see footnote 1). Note that our assumption regarding cumulant generating functions is satisfied when $X$ and $Y$ have compact support, and also when the densities of $X$ and $Y$ have light tails.

We consider independent and identically distributed (i.i.d.) data $(x_i, y_i) \in \mathbb{R}^p \times \mathbb{R}$, $i = 1, \dots, n$, sampled from the distribution of $(X, Y)$; the data are given in the form of matrices $X \in \mathbb{R}^{n \times p}$ and $Y \in \mathbb{R}^n$.

Note that the i.i.d. assumption, together with (A1)-(A3), are the simplest assumptions for studying the asymptotic behavior of the Lasso; it is of course of interest to allow more general assumptions, in particular a growing number of variables $p$, more general random variables, etc. (see, e.g., Meinshausen and Yu, 2006), which are outside the scope of this paper.

2.2 Lasso Estimation

We consider the square loss function $\frac{1}{2n} \sum_{i=1}^n (y_i - w^\top x_i)^2 = \frac{1}{2n} \|Y - Xw\|_2^2$ and regularization by the $\ell_1$-norm, defined as $\|w\|_1 = \sum_{j=1}^p |w_j|$. That is, we look at the following Lasso optimization problem [?]:

$\displaystyle \min_{w \in \mathbb{R}^p} \ \frac{1}{2n} \| Y - X w \|_2^2 \; + \; \mu_n \| w \|_1$        (1)

where $\mu_n \geq 0$ is the regularization parameter. We denote by $\hat{w}$ any global minimum of Eq. (1); it may not be unique in general, but it will be unique with probability tending to one exponentially fast under assumption (A1).
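As an illustration, the following Python sketch solves Eq. (1) with scikit-learn, whose Lasso objective, $(1/(2n)) \|Y - Xw\|_2^2 + \alpha \|w\|_1$, coincides with Eq. (1) when $\alpha$ is set to the regularization parameter (no intercept is fitted, to match the linear model above); the function name and the generated data are illustrative choices of ours, not the paper's.

    import numpy as np
    from sklearn.linear_model import Lasso

    def lasso_support(X, Y, mu):
        """Solve Eq. (1) for a given regularization parameter mu and
        return the estimated support {j : w_j != 0}."""
        w_hat = Lasso(alpha=mu, fit_intercept=False).fit(X, Y).coef_
        return np.flatnonzero(w_hat)

    # Hypothetical usage on randomly generated data.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 10))
    w_true = np.zeros(10)
    w_true[:3] = [1.0, -2.0, 1.5]
    Y = X @ w_true + 0.1 * rng.standard_normal(200)
    print(lasso_support(X, Y, mu=0.1 * 200 ** -0.5))  # the relevant variables, possibly with extras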

2.3 Model Consistency - General Results

In this section, we detail the asymptotic behavior of the Lasso estimate $\hat{w}$, both in terms of the difference in norm with the population value $\mathbf{w}$ (i.e., regular consistency) and of the sign pattern $\operatorname{sign}(\hat{w})$, for all asymptotic behaviors of the regularization parameter $\mu_n$. Note that information about the sign pattern includes information about the support, i.e., the indices $j$ for which $\hat{w}_j$ is different from zero; moreover, when $\hat{w}$ is consistent, consistency of the sign pattern is in fact equivalent to consistency of the support.

We now consider five mutually exclusive possible situations which explain various portions of the regularization path (we assume (A1)-(A3)); many of these results appear elsewhere [?, ?, ?, ?, ?] but some of the finer results presented below are new (see Section 2.4).

  1. If $\mu_n$ tends to infinity, then $\hat{w} = 0$ with probability tending to one.

  2. If $\mu_n$ tends to a finite strictly positive constant $\mu_0$, then $\hat{w}$ converges in probability to the unique global minimizer of $\frac{1}{2}(w - \mathbf{w})^\top Q\, (w - \mathbf{w}) + \mu_0 \|w\|_1$. Thus, the estimate $\hat{w}$ never converges in probability to $\mathbf{w}$, while its sign pattern tends to that of this global minimizer, which may or may not be the same as the sign pattern of $\mathbf{w}$ (see footnote 2).

  3. If $\mu_n$ tends to zero slower than $n^{-1/2}$, then $\hat{w}$ converges in probability to $\mathbf{w}$ (regular consistency) and the sign pattern of $\hat{w}$ converges to the sign pattern of the global minimizer of the corresponding limiting problem. This sign pattern is equal to the population sign vector $\operatorname{sign}(\mathbf{w})$ if and only if the following consistency condition is satisfied:

    $\big\| Q_{J^c J} \, Q_{JJ}^{-1} \, \operatorname{sign}(\mathbf{w}_J) \big\|_\infty \leq 1$        (2)

    Thus, if Eq. (2) is satisfied, the probability of correct sign estimation tends to one, and it tends to zero otherwise [?].

  4. If $\mu_n = \mu_0 n^{-1/2}$ for some $\mu_0 \in (0, \infty)$, then the sign pattern of $\hat{w}$ agrees on $J$ with the one of $\mathbf{w}$ with probability tending to one, while for all sign patterns consistent on $J$ with the one of $\mathbf{w}$, the probability of obtaining this pattern tends to a limit in $(0, 1)$ (in particular strictly positive); that is, all patterns consistent on $J$ are possible with positive probability. See Section 2.4 for more details.

  5. If $\mu_n$ tends to zero faster than $n^{-1/2}$, then $\hat{w}$ is consistent (i.e., converges in probability to $\mathbf{w}$) but the support of $\hat{w}$ is equal to $\{1, \dots, p\}$ with probability tending to one (the signs of the variables in $J^c$ may be negative or positive). That is, the $\ell_1$-norm has no sparsifying effect.

Among the five previous regimes, the only ones with consistent estimates (in norm) and a sparsity-inducing effect are $\mu_n$ tending to zero and $\mu_n n^{1/2}$ tending to a limit $\mu_0 \in (0, \infty]$ (i.e., potentially infinite). When $\mu_0 = +\infty$, we can only hope for model consistent estimates if the consistency condition in Eq. (2) is satisfied. This somewhat disappointing result for the Lasso has led to various improvements of the Lasso that ensure model consistency even when Eq. (2) is not satisfied [?, ?]. These are based on adaptive weights derived from the unregularized least-square estimate. We propose in Section 3 an alternative approach, based on resampling.
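For completeness, here is a hedged sketch of the adaptive-weight idea just mentioned (the adaptive Lasso of Zou, 2006): the $\ell_1$-penalty is reweighted by the inverse magnitudes of the unregularized least-square estimate, which can be implemented by rescaling the columns of the design matrix; the function name and the choice gamma = 1 are ours, not prescribed by the paper.

    import numpy as np
    from sklearn.linear_model import Lasso

    def adaptive_lasso_support(X, Y, mu, gamma=1.0):
        """Adaptive Lasso sketch: penalize |w_j| by mu / |w_ols_j|^gamma.
        Implemented by rescaling the columns of X, fitting a plain Lasso,
        and rescaling the solution back."""
        w_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)           # unregularized least squares
        weights = 1.0 / np.maximum(np.abs(w_ols), 1e-12) ** gamma
        w_scaled = Lasso(alpha=mu, fit_intercept=False).fit(X / weights, Y).coef_
        return np.flatnonzero(w_scaled / weights)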

In this paper, we now consider the specific case where $\mu_n = \mu_0 n^{-1/2}$ for some $\mu_0 \in (0, \infty)$, for which we derive new asymptotic results. Indeed, in this situation, we get the correct signs of the relevant variables (those in $J$) with probability tending to one, but we also get all possible sign patterns consistent with this, i.e., all other variables (those not in $J$) may be non-zero with asymptotically strictly positive probability. However, if we were to repeat the Lasso estimation for many datasets obtained from the same distribution, we would obtain for each dataset a set of active variables, all of which include $J$ with probability tending to one, but potentially containing any of the other variables. By intersecting those supports, we would get exactly $J$.

However, this requires multiple copies of the samples, which are not usually available. Instead, we consider bootstrapped samples which exactly mimic the behavior of having multiple copies. See Section 3 for more details.

2.4 Model Consistency with Exact Root-$n$ Regularization Decay

In this section we present detailed new results regarding the pattern consistency for $\mu_n$ tending to zero exactly at rate $n^{-1/2}$ (see proofs in Appendix A):

Proposition 1

Assume (A1)-(A3) and $\mu_n = \mu_0 n^{-1/2}$ with $\mu_0 > 0$. Then, for any sign pattern $s \in \{-1, 0, 1\}^p$ such that $s_J = \operatorname{sign}(\mathbf{w}_J)$, $\mathbb{P}(\operatorname{sign}(\hat{w}) = s)$ tends to a limit $\rho(s, \mu_0) \in (0, 1)$, and the deviation from this limit can be bounded explicitly for finite $n$.

Proposition 2

Assume (A1)-(A3) and $\mu_n = \mu_0 n^{-1/2}$ with $\mu_0 > 0$. Then, for any sign pattern $s \in \{-1, 0, 1\}^p$ such that $s_J \neq \operatorname{sign}(\mathbf{w}_J)$, there exist strictly positive constants such that $\mathbb{P}(\operatorname{sign}(\hat{w}) = s)$ is upper bounded by a term that decays exponentially fast in $n$.

The last two propositions state that we obtain the correct signs of the relevant variables with probability tending to one exponentially fast, while we obtain every other pattern consistent on $J$ with probability tending to a limit strictly between zero and one. Note that the results we give in this paper are valid for finite $n$, i.e., we could derive actual bounds on the probabilities of sign pattern selection, with known constants that depend explicitly on $\mu_0$ and on the joint distribution of $(X, Y)$.

3 Bolasso: Bootstrapped Lasso

Given the $n$ i.i.d. observations $(x_i, y_i)$, $i = 1, \dots, n$, given by matrices $X \in \mathbb{R}^{n \times p}$ and $Y \in \mathbb{R}^n$, we consider $m$ bootstrap replications of the $n$ data points [?]; that is, for $k = 1, \dots, m$, we consider a ghost sample $(x_i^{*k}, y_i^{*k})$, $i = 1, \dots, n$, given by matrices $X^{*k}$ and $Y^{*k}$. The $n$ pairs $(x_i^{*k}, y_i^{*k})$, $i = 1, \dots, n$, are sampled uniformly at random with replacement from the $n$ original pairs in $(X, Y)$. The samplings of the $nm$ pairs of observations are independent. In other words, we define the distribution of a single ghost sample $(X^*, Y^*)$ by sampling $n$ points with replacement from $(X, Y)$, and, given $(X, Y)$, the $m$ ghost samples are sampled independently and identically from the distribution of $(X^*, Y^*)$.

The asymptotic analysis from Section 2 suggests estimating the supports $J_k = \{ j : \hat{w}^{*k}_j \neq 0 \}$ of the Lasso estimates $\hat{w}^{*k}$ computed on the bootstrap samples, $k = 1, \dots, m$, and intersecting them to define the Bolasso model estimate of the support: $\hat{J} = \bigcap_{k=1}^m J_k$. Once $\hat{J}$ is selected, we estimate $\mathbf{w}$ by the unregularized least-square fit restricted to the variables in $\hat{J}$. The detailed algorithm is given in Algorithm 1. The algorithm has only one extra parameter (the number of bootstrap samples $m$). Following Proposition 3, $m$ should be chosen growing with $n$, but asymptotically slower than an exponential in $n$. In simulations, we always use the same fixed number of bootstrap replications (except in Figure 3, where we specifically study the influence of $m$).

  Input: data $(X, Y) \in \mathbb{R}^{n \times p} \times \mathbb{R}^n$
         number of bootstrap replicates $m$
         regularization parameter $\mu$
  for $k = 1$ to $m$ do
     Generate bootstrap sample $(X^{*k}, Y^{*k})$
     Compute Lasso estimate $\hat{w}^{*k}$ from $(X^{*k}, Y^{*k})$
     Compute support $J_k = \{ j : \hat{w}^{*k}_j \neq 0 \}$
  end for
  Compute $\hat{J} = \bigcap_{k=1}^m J_k$
  Compute $\hat{w}_{\hat{J}}$ from $(X_{\hat{J}}, Y)$ by unregularized least squares
Algorithm 1 Bolasso
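A minimal Python sketch of Algorithm 1 is given below (our own implementation outline, not the author's code): it uses scikit-learn's coordinate-descent Lasso solver for each bootstrap replication rather than Lars, assumes no intercept, and finishes with the unregularized least-square fit restricted to the intersected support.

    import numpy as np
    from sklearn.linear_model import Lasso

    def bolasso(X, Y, mu, m, rng=None):
        """Bolasso (Algorithm 1): intersect the Lasso supports of m bootstrap
        replications, then refit by least squares on the selected variables."""
        rng = np.random.default_rng() if rng is None else rng
        n, p = X.shape
        J = set(range(p))
        for _ in range(m):
            idx = rng.integers(0, n, size=n)      # bootstrap sample, drawn with replacement
            w_k = Lasso(alpha=mu, fit_intercept=False).fit(X[idx], Y[idx]).coef_
            J &= set(np.flatnonzero(w_k))         # intersect supports J_1, ..., J_k
        J = sorted(J)
        w_hat = np.zeros(p)
        if J:                                     # least squares restricted to J, on the original data
            w_hat[J], *_ = np.linalg.lstsq(X[:, J], Y, rcond=None)
        return J, w_hat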

Note that in practice, the Bolasso estimate can be computed simultaneously for a large number of regularization parameters because of the efficiency of the Lars algorithm (which we use in simulations), which allows one to find the entire regularization path of the Lasso at the (empirical) cost of a single matrix inversion [?]. Thus the computational complexity of the Bolasso is $m$ times that of a single Lars run.
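The whole-path variant can be sketched as follows with scikit-learn's lars_path (again our own outline): each bootstrap replication is run through Lars once, and the supports for a whole grid of regularization parameters are intersected simultaneously; reading the support at the nearest path knot below each grid value is a simplification.

    import numpy as np
    from sklearn.linear_model import lars_path

    def bolasso_path(X, Y, m, grid, rng=None):
        """Return a boolean array of shape (len(grid), p) whose row t is the
        Bolasso support for the regularization parameter grid[t]."""
        rng = np.random.default_rng() if rng is None else rng
        n, p = X.shape
        selected = np.ones((len(grid), p), dtype=bool)
        for _ in range(m):
            idx = rng.integers(0, n, size=n)
            alphas, _, coefs = lars_path(X[idx], Y[idx], method="lasso")       # full Lasso path
            for t, mu in enumerate(grid):
                j = min(np.searchsorted(-alphas, -mu), coefs.shape[1] - 1)     # alphas are decreasing
                selected[t] &= coefs[:, j] != 0
        return selected

    # Hypothetical usage: supports = bolasso_path(X, Y, 64, np.logspace(-4, 0, 50))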

The following proposition (proved in Appendix A) shows that the previous algorithm leads to consistent model selection.

Proposition 3

Assume (A1)-(A3) and $\mu_n = \mu_0 n^{-1/2}$ with $\mu_0 > 0$. Then, for all $m > 0$, the probability that the Bolasso does not exactly select the correct model, i.e., $\mathbb{P}(\hat{J} \neq J)$, is upper bounded by the sum of a term of the form $m A_1 e^{-A_2 n}$, a term that tends to zero as $n$ tends to infinity, and a term that tends to zero as $m$ tends to infinity, where $A_1$ and $A_2$ are strictly positive constants.

Therefore, if $m$ tends to infinity slower than $e^{A_2 n}$ as $n$ tends to infinity, the Bolasso asymptotically selects the correct active variables with overwhelming probability and, by regular consistency of the restricted least-square estimate, the correct sign pattern as well. Note that the previous bound is true whether the condition in Eq. (2) is satisfied or not, but it could be improved upon if we suppose that Eq. (2) is satisfied. See Section 4.1 for a detailed comparison with the Lasso on synthetic examples.

4 Simulations

In this section, we illustrate the consistency results obtained in this paper with a few simple simulations on synthetic examples similar to the ones used by Bach (2007) and on some medium scale datasets from the UCI machine learning repository [?].

4.1 Synthetic examples

For a given dimension $p$, we sampled $X \in \mathbb{R}^p$ from a normal distribution with zero mean and a covariance matrix generated as follows: (a) sample a $p \times p$ matrix $G$ with independent standard normal entries, (b) form $Q = G G^\top$, (c) rescale $Q$ to unit diagonal. We then selected the first $|J|$ variables as the relevant ones and sampled non-zero loading vectors as follows: (a) sample each loading from an independent standard normal distribution, (b) rescale those to unit magnitude, (c) rescale those by a factor chosen uniformly at random in a fixed interval bounded away from zero (to ensure that the relevant loadings remain bounded away from zero). Finally, we chose a constant noise level $\sigma$ proportional to the standard deviation of the noiseless response, and the additive noise $\varepsilon$ is normally distributed with zero mean and variance $\sigma^2$. Note that the joint distribution on $(X, Y)$ thus defined satisfies, with probability one (with respect to the sampling of the covariance matrix), assumptions (A1)-(A3).
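A Python sketch of this generating process is given below; the rescaling interval for the loadings and the noise factor are illustrative placeholders (the exact constants used in the experiments are not reproduced here).

    import numpy as np

    def sample_problem(p, r, n, rng):
        """Generate (X, Y) as in Section 4.1: random covariance with unit diagonal,
        r relevant variables, Gaussian additive noise."""
        G = rng.standard_normal((p, p))
        Q = G @ G.T                                  # (a)-(b): random second-moment matrix
        d = np.sqrt(np.diag(Q))
        Q = Q / np.outer(d, d)                       # (c): rescale to unit diagonal
        w = np.zeros(p)
        w[:r] = np.sign(rng.standard_normal(r))      # unit-magnitude loadings with random signs
        w[:r] *= rng.uniform(0.5, 1.0, size=r)       # random rescaling (illustrative interval)
        X = rng.multivariate_normal(np.zeros(p), Q, size=n)
        sigma = 0.1 * np.sqrt(w @ Q @ w)             # noise level proportional to the signal (illustrative factor)
        Y = X @ w + sigma * rng.standard_normal(n)
        return X, Y, w, Q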

Figure 1: Lasso: log-odds ratios of the probabilities of selection of each variable (white = large probabilities, black = small probabilities) vs. regularization parameter. Consistency condition in Eq. (2) satisfied (left) and not satisfied (right).
Figure 2: Bolasso: log-odds ratios of the probabilities of selection of each variable (white = large probabilities, black = small probabilities) vs. regularization parameter. Consistency condition in Eq. (2) satisfied (left) and not satisfied (right).

In Figure 1, we sampled two distributions, each with the same numbers of total and relevant variables: one for which the consistency condition in Eq. (2) is satisfied (left) and one for which it is not (right). For a fixed number of samples $n$, we generated 256 replications and computed the empirical frequencies of selecting any given variable for the Lasso as the regularization parameter varies. Those plots show the various asymptotic regimes of the Lasso detailed in Section 2. In particular, on the right plot, although no value of the regularization parameter leads to perfect selection (i.e., exactly the relevant variables are selected), there is a range where all relevant variables are always selected while all others are selected with probability strictly between zero and one.

In Figure 2, we plot the results under the same conditions for the Bolasso (with a fixed number of bootstrap replications $m$). We can see that in the Lasso-consistent case (left), the Bolasso widens the consistency region, while in the Lasso-inconsistent case (right), the Bolasso “creates” a consistency region.

In Figure 3, we selected the same two distributions and compared the probability of exactly selecting the correct support pattern for the Lasso and for the Bolasso with varying numbers of bootstrap replications $m$ (those probabilities are computed by averaging over 256 experiments with the same distribution). In Figure 3, we can see that in the Lasso-inconsistent case (right), the Bolasso indeed allows one to fix the inability of the Lasso to find the correct pattern. Moreover, increasing $m$ appears to be always beneficial; note that although this seems to contradict the asymptotic analysis in Section 3 (which imposes an upper bound on the growth of $m$ for consistency), this is due to the fact that not selecting (at least) the relevant variables has very low probability and is not observed with only 256 replications.

Figure 3: Bolasso (red, dashed) and Lasso (black, plain): probability of correct sign estimation vs. regularization parameter. Consistency condition in Eq. (2) satisfied (left) and not satisfied (right). The different dashed curves correspond to different numbers of bootstrap replications $m$.

Finally, in Figure 4, we compare various variable selection procedures for linear regression to the Bolasso, with the same two distributions and a varying number of samples $n$. For all the methods we consider, there is a natural way to select exactly the number of relevant variables with no free parameters (for the Bolasso, we select the most stable pattern of that size, i.e., the pattern which corresponds to the largest number of values of the regularization parameter; a sketch is given after Figure 4). We can see that the Bolasso outperforms all other variable selection methods, even in settings where the number of samples becomes of the order of the number of variables, which requires additional theoretical analysis, the subject of ongoing research. Note in particular that we compare with bagging of least-square regressions [?] followed by a thresholding of the loading vector, which is another simple way of using bootstrap samples: the Bolasso provides a more efficient way to use the extra information, not for usual stabilization purposes [?], but directly for model selection. Note finally that bagging of Lasso estimates requires an additional parameter and is thus not tested here.

Figure 4: Comparison of several variable selection methods: Lasso (black circles), Bolasso (green crosses), forward greedy (magenta diamonds), thresholded LS estimate (red stars), adaptive Lasso (blue pluses). Consistency condition in Eq. (2) satisfied (left) and not satisfied (right). The averaged (over 32 replications) variable selection error is computed as the square distance between sparsity pattern indicator vectors.
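The “most stable pattern of a given size” rule used for the Bolasso in Figure 4 can be sketched as follows, reusing the supports-per-grid-value array produced by the bolasso_path sketch above (the function name is ours):

    from collections import Counter
    import numpy as np

    def most_stable_pattern(selected, r):
        """Among the supports computed along the regularization grid (rows of the
        boolean array `selected`), return the support of size r that occurs for
        the largest number of grid values (None if no support of size r occurs)."""
        counts = Counter(
            frozenset(np.flatnonzero(row)) for row in selected if row.sum() == r
        )
        if not counts:
            return None
        return sorted(counts.most_common(1)[0][0])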

4.2 UCI datasets

The previous simulations have shown that the Bolasso is successful at performing model selection in synthetic examples. We now apply it to several linear regression problems and compare it to alternative methods for linear regression, namely ridge regression, the Lasso, bagging of Lasso estimates [?], and a soft version of the Bolasso (referred to as Bolasso-S), where instead of intersecting the supports of the bootstrap replications, we keep the variables which are present in at least a given fraction of the bootstrap replications. In Table 1, we consider data randomly generated as in Section 4.1, where the true model is known to be composed of a sparse loading vector, while in Table 2, we consider regression datasets from the UCI machine learning repository. For all of those, we perform 10 replications of 10-fold cross-validation, and for all methods (which all have one free regularization parameter), we select the best regularization parameter on the 100 folds and report the mean squared prediction error and its standard deviation.
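A sketch of the soft variant is given below (our own outline; the 0.9 default is only an example of “a given fraction”, and threshold = 1 recovers the plain intersection):

    import numpy as np
    from sklearn.linear_model import Lasso

    def bolasso_soft(X, Y, mu, m, threshold=0.9, rng=None):
        """Bolasso-S: keep the variables selected in at least threshold * m
        of the m bootstrap replications."""
        rng = np.random.default_rng() if rng is None else rng
        n, p = X.shape
        freq = np.zeros(p)
        for _ in range(m):
            idx = rng.integers(0, n, size=n)
            w_k = Lasso(alpha=mu, fit_intercept=False).fit(X[idx], Y[idx]).coef_
            freq += (w_k != 0)
        return np.flatnonzero(freq >= threshold * m)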

Note that when the generating model is actually sparse (Table 1), the Bolasso outperforms all other methods, while in other cases (Table 2) the Bolasso is sometimes too strict in intersecting models, i.e., the softened version works better and is competitive with other methods. Studying the effects of this softened scheme (which is closer to usual voting schemes), in particular in terms of the potential trade-off between good model selection and low prediction error, and under conditions where $p$ is large, is the subject of ongoing work.

5 Conclusion

We have presented a detailed analysis of the variable selection properties of a bootstrapped version of the Lasso. The resulting model estimation procedure, referred to as the Bolasso, is provably consistent under general assumptions. This work brings to light that the poor variable selection properties of the Lasso may be easily enhanced thanks to a simple parameter-free resampling procedure. Our contribution also suggests that the use of bootstrap samples by L. Breiman in Bagging/Arcing/Random Forests [?] may have been so far slightly overlooked and considered a minor feature, while using bootstrap samples may actually be a key computational feature in such algorithms for good model selection performance, and eventually good prediction performance on real datasets.

The current work could be extended in various ways: first, we have focused on a fixed total number of variables, and allowing the number of variables to grow is important in theory and in practice [?]. Second, the same technique can be applied to settings similar to least-square regression with the $\ell_1$-norm, namely regularization by block $\ell_1$-norms [?] and other losses such as general convex classification losses. Finally, theoretical and practical connections could be made with other work on resampling methods and boosting [?].

Table 1: Comparison of least-square estimation methods (rows: Ridge, Lasso, Bolasso, Bagging, Bolasso-S; columns: data generated as described in Section 4.1 with values 0.93, 1.20, 1.42 and 1.28 of the quantity appearing in the consistency condition of Eq. (2)). Performance is measured through mean squared prediction error (multiplied by 100).
Table 2: Comparison of least-square estimation methods (rows: Ridge, Lasso, Bolasso, Bagging, Bolasso-S; columns: the UCI regression datasets Autompg, Imports, Machine and Housing). Performance is measured through mean squared prediction error (multiplied by 100).

Appendix A Proof of Model Consistency Results

In this appendix, we give sketches of proofs for the asymptotic results presented in Section 2 and Section 3. The proofs rely on a well-known property of Lasso optimization problems, namely that if the sign pattern of the solution is known, then the solution can be obtained in closed form.

A.1 Optimality Conditions

We let $\hat{Q} = \frac{1}{n} X^\top X \in \mathbb{R}^{p \times p}$ and $\hat{q} = \frac{1}{n} X^\top \varepsilon \in \mathbb{R}^p$, where $\varepsilon \in \mathbb{R}^n$ denotes the vector of noise values $\varepsilon_i = y_i - \mathbf{w}^\top x_i$. First, we can equivalently rewrite Eq. (1), up to a constant independent of $w$, as:

$\displaystyle \min_{w \in \mathbb{R}^p} \ \frac{1}{2} (w - \mathbf{w})^\top \hat{Q} \, (w - \mathbf{w}) \; - \; \hat{q}^\top (w - \mathbf{w}) \; + \; \mu_n \| w \|_1$        (3)

The optimality conditions for Eq. (3) can be written in terms of the sign pattern $s = \operatorname{sign}(w)$ and the sparsity pattern of $w$ [?]:

$\big( \hat{Q} (w - \mathbf{w}) - \hat{q} \big)_j = - \mu_n \operatorname{sign}(w_j) \quad$ for all $j$ such that $w_j \neq 0$,        (4)
$\big| \big( \hat{Q} (w - \mathbf{w}) - \hat{q} \big)_j \big| \leq \mu_n \quad$ for all $j$ such that $w_j = 0$.        (5)
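In particular, restricting condition (4) to a candidate solution whose support is a set $I$ containing $J$ (so that $\mathbf{w}_{I^c} = 0$) and whose sign vector on $I$ is $s_I$ gives the closed-form property mentioned at the beginning of this appendix; this is a direct consequence of condition (4), and the notation $I$ for the candidate support is introduced here for convenience:

\[
  \hat{Q}_{II} \, (w_I - \mathbf{w}_I) - \hat{q}_I = - \mu_n s_I
  \quad \Longrightarrow \quad
  w_I = \mathbf{w}_I + \hat{Q}_{II}^{-1} \big( \hat{q}_I - \mu_n s_I \big), \qquad w_{I^c} = 0,
\]

whenever $\hat{Q}_{II}$ is invertible.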

In this paper, we focus on regularization parameters of the form $\mu_n = \mu_0 n^{-1/2}$. The main idea behind the results is to consider that $\hat{Q}$ and $n^{1/2} \hat{q}$ are distributed according to their limiting distributions, obtained from the law of large numbers and the central limit theorem, i.e., $\hat{Q}$ converges to $Q$ almost surely and $n^{1/2} \hat{q}$ is asymptotically normally distributed with mean zero and covariance matrix $\sigma^2 Q$. When assuming this, Propositions 1 and 2 are straightforward. The main effort is to make sure that we can safely replace $\hat{Q}$ and $n^{1/2} \hat{q}$ by their limiting distributions. The following lemmas give sufficient conditions for correct estimation of the signs of the variables in $J$ and for selecting a given pattern (note that all constants could be expressed in terms of $Q$ and $\mathbf{w}$; details are omitted here):

Lemma 1

Assume (A2). If the deviations of $\hat{Q}$ from $Q$ and of $\hat{q}$ from zero are small enough (compared to $\mu_n$ and to the smallest magnitude of the relevant loadings), then the signs of the variables in $J$ are correctly estimated, i.e., $\operatorname{sign}(\hat{w}_J) = \operatorname{sign}(\mathbf{w}_J)$.

Lemma 2

Assume (A2) and let $s \in \{-1, 0, 1\}^p$ be a sign pattern such that $s_J = \operatorname{sign}(\mathbf{w}_J)$. Assume that the four conditions (6)-(9) hold; these conditions control, through positive constants, the deviations of $\hat{Q}$ and $\hat{q}$ relative to $\mu_n$ and to the pattern $s$. Then $\operatorname{sign}(\hat{w}) = s$.

Those two lemmas are interesting because they relate optimality of certain sign patterns to quantities from which we can derive concentration inequalities.

A.2 Concentration Inequalities

Throughout the proofs, we need to provide upper bounds on the probabilities of large deviations of $\hat{Q}$ around $Q$ and of $\hat{q}$ around zero. Following standard arguments [?], we obtain bounds that decay exponentially with $n$, with constants that depend on the distribution of $(X, Y)$.

We also consider multivariate Berry-Esseen inequalities [?]: the probability that $n^{1/2} \hat{q}$ belongs to a given convex set can be estimated by the probability that a normal vector with mean zero and covariance matrix $\sigma^2 Q$ belongs to that set. The error is then uniformly (over all convex sets) upper bounded by a term of order $n^{-1/2}$, with a constant that depends on the dimension $p$ and on third-order moments.

A.3 Proof of Proposition 1

By Lemma 2, for any given sign pattern $s$ with $s_J = \operatorname{sign}(\mathbf{w}_J)$ and $n$ large enough, the probability that the sign pattern of $\hat{w}$ differs from $s$ is upper bounded by the probability that a certain random vector falls outside a set determined by the pattern $s$, plus remainder terms coming from the concentration and Berry-Esseen inequalities of Section A.2; all remainder terms are negligible if $n$ is large enough.

This shows that $\mathbb{P}(\operatorname{sign}(\hat{w}) = s)$ is asymptotically lower bounded by a Gaussian probability that is strictly between 0 and 1, because the set and its complement have non-empty interiors and the limiting normal distribution has a positive definite covariance matrix $\sigma^2 Q$. The other inequality can be proved similarly. Note that the constants depend on $\mu_0$, but by carefully considering this dependence on $\mu_0$ we could make the inequality uniform in $\mu_0$, as long as $\mu_0$ tends to zero or infinity at most at logarithmic speed. Also, it would be interesting to consider uniform bounds on portions of the regularization path.

A.4 Proof of Proposition 2

From Lemma 1, the probability of not correctly estimating the signs of the variables in $J$ is upper bounded by the probability of a large deviation of $\hat{Q}$ or $\hat{q}$, which is straightforwardly upper bounded (using Section A.2) by a term of the required exponentially decaying form.

A.5 Proof of Proposition 3

In order to simplify the proof, we make the simplifying assumption that the random variables $X$ and $Y$ have compact supports. Extending the proofs to the weaker condition on the cumulant generating functions in assumption (A1) can be done with minor changes. The probability that $\hat{J}$ is different from $J$ is upper bounded by the sum of the following two probabilities:

(a) Missing at least one variable of $J$ in some bootstrap replication:

this is the probability that, for the $k$-th replication, some index in $J$ is not selected, each of these events being upper bounded by an analogue of the bound of Proposition 2 for the ghost sample. As is common in theoretical analyses of the bootstrap, we relate the empirical moments of the ghost sample to those of the original sample (and similarly for the noise terms). Because we have assumed that $X$ and $Y$ have compact supports, the bootstrapped variables also have compact support and we can use concentration inequalities (given the original variables, and also after taking expectation with respect to them). Thus the probability for one bootstrap replication is upper bounded by a term that is exponentially small in $n$, with strictly positive constants, and, by the union bound over the $m$ replications, the overall contribution of this part is at most $m$ times an exponentially small term.

(b) Not selecting exactly $J$ in any of the bootstrap replications:

this is the probability that, for all $m$ replications, the set $J$ is not exactly selected (note that this is not tight at all, since, on top of the relevant variables, which are selected with overwhelming probability, different additional variables may be selected for different replications and cancel out when intersecting).

Our goal is thus to bound the probability that a single bootstrap replication does not select exactly $J$. By the previous lemmas, this probability is upper bounded by a Gaussian probability plus remainder terms, where now, given the original sample $(X, Y)$, the Gaussian vector has a mean and a covariance matrix that depend on the empirical moments.

The first two remainder terms and the last two are uniformly small (if $n$ is large enough). We then have to consider the remaining term. By Hoeffding’s inequality, we can replace the covariance matrix, which depends on the original sample, by its population counterpart, at a controlled additional cost. We thus have to bound the probability that a normally distributed vector belongs to a fixed compact set. Because the set is compact, a truncation argument applies; using Hoeffding’s inequality once more to control the truncated part and minimizing the resulting bound in closed form with respect to the truncation level, we obtain the desired inequality.

Footnotes

  1. Throughout this paper, we use boldface fonts for population quantities.
  2. Here and in the third regime, we do not take into account the pathological cases where the sign pattern of the limit is unstable, i.e., the limit is exactly at a hinge point of the regularization path.

References

  1. Asuncion, A., & Newman, D. (2007). UCI machine learning repository.
  2. Bach, F. (2007). Consistency of the group Lasso and multiple kernel learning (Tech. report HAL-00164735). http://hal.archives-ouvertes.fr.
  3. Bentkus, V. (2003). On the dependence of the Berry-Esseen bound on dimension. Journal of Statistical Planning and Inference, 113, 385-402.
  4. Boucheron, S., Lugosi, G., & Bousquet, O. (2004). Concentration inequalities. Advanced Lectures on Machine Learning. Springer.
  5. Breiman, L. (1996a). Bagging predictors. Machine Learning, 24, 123-140.
  6. Breiman, L. (1996b). Heuristics of instability and stabilization in model selection. Ann. Stat., 24, 2350-2383.
  7. Breiman, L. (1998). Arcing classifiers. Ann. Stat., 26, 801-849.
  8. Bühlmann, P. (2006). Boosting for high-dimensional linear models. Ann. Stat., 34, 559-583.
  9. Efron, B., Hastie, T., Johnstone, I., & Tibshirani, R. (2004). Least angle regression. Ann. Stat., 32, 407.
  10. Efron, B., & Tibshirani, R. J. (1998). An introduction to the bootstrap. Chapman & Hall.
  11. Fu, W., & Knight, K. (2000). Asymptotics for Lasso-type estimators. Ann. Stat., 28, 1356-1378.
  12. Meinshausen, N., & Yu, B. (2006). Lasso-type recovery of sparse representations for high-dimensional data (Tech. report 720). Dpt. of Statistics, UC Berkeley.
  13. Tibshirani, R. (1994). Regression shrinkage and selection via the Lasso. J. Roy. Stat. Soc. B, 58, 267-288.
  14. Wainwright, M. J. (2006). Sharp thresholds for noisy and high-dimensional recovery of sparsity using $\ell_1$-constrained quadratic programming (Tech. report 709). Dpt. of Statistics, UC Berkeley.
  15. Yuan, M., & Lin, Y. (2007). On the non-negative garrotte estimator. J. Roy. Stat. Soc. B, 69, 143-161.
  16. Zhao, P., & Yu, B. (2006). On model selection consistency of Lasso. J. Mac. Learn. Res., 7, 2541-2563.
  17. Zou, H. (2006). The adaptive Lasso and its oracle properties. J. Am. Stat. Ass., 101, 1418-1429.