# Generalized Fiducial Inference for Ultrahigh Dimensional Regression
The authors thank Professors Jianqing Fan and Ning Hao for sharing the housing price appreciation data set. The work of Hannig was supported in part by the National Science Foundation under Grants 1007543 and 1016441. The work of Lee was supported in part by the National Science Foundation under Grants 1007520, 1209226 and 1209232.

###### Abstract

In recent years the ultrahigh dimensional linear regression problem has attracted enormous attention from the research community. Under the sparsity assumption, most of the published work is devoted to the selection and estimation of the significant predictor variables. This paper studies a different but fundamentally important aspect of the problem: uncertainty quantification for parameter estimates and model choices. To be more specific, this paper proposes methods for deriving a probability density function on the set of all possible models, and also for constructing confidence intervals for the corresponding parameters. These proposed methods are developed using the generalized fiducial methodology, a variant of Fisher's controversial fiducial idea. Theoretical properties of the proposed methods are studied, and in particular it is shown that statistical inference based on the proposed methods will have exact asymptotic frequentist properties. In terms of empirical performance, the proposed methods are tested by simulation experiments and an application to a real data set. Lastly, this work can also be seen as an interesting and successful application of Fisher's fiducial idea to an important and contemporary problem. To the best of the authors' knowledge, this is the first time that the fiducial idea is applied to a so-called "large $p$ small $n$" problem.

Keywords: confidence intervals, large $p$ small $n$, minimum description length principle, uncertainty quantification, variability estimation

## 1 Introduction

The ultrahigh dimensional linear regression problem has attracted enormous attention in recent years. A typical description of the problem begins with the usual linear model

$$\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\epsilon},$$

where $\mathbf{y}$ is an $n \times 1$ vector of responses, $\mathbf{X}$ is a design matrix of size $n \times p$ with i.i.d. variables, $\boldsymbol{\beta} = (\beta_1, \ldots, \beta_p)^T$ is a $p \times 1$ vector of parameters, and $\boldsymbol{\epsilon}$ is an $n \times 1$ vector of i.i.d. random errors with zero mean and unknown variance $\sigma^2$. It is assumed that $\mathbf{X}$ and $\boldsymbol{\epsilon}$ are independent, and that $p$ is larger than $n$ and grows at an exponential rate as $n$ increases. It is this last assumption that makes the ultrahigh dimensional regression problem different from the classical multiple regression problem, for which $p < n$.

When $p > n$, it is customary to assume that the number of significant predictors in the true model is small; i.e., the true model is sparse. The problem is then to identify which $\beta_i$'s are non-zero, and to estimate their values. To solve this variable selection problem, one common strategy is to first apply a so-called screening procedure to remove a large number of insignificant predictors, and then apply a penalized method such as the LASSO method of Tibshirani (1996) or the SCAD method of Fan and Li (2001) to the surviving predictors to select the final set of variables. For screening procedures, one of the earliest is the sure independence screening procedure of Fan and Lv (2008). Since then various screening procedures have been proposed: Wang (2009) developed a consistent screening procedure that combines forward regression and the extended BIC criterion of Chen and Chen (2008), Bühlmann et al. (2010) proposed a screening procedure based on conditional partial correlations, and Cho and Fryzlewicz (2011) constructed a screening procedure that utilizes information from both marginal correlation and tilted correlation. Other screening procedures have been developed for more complicated settings, including generalized linear models and nonparametric additive modeling; e.g., Meier et al. (2009), Ravikumar et al. (2009), Fan and Lv (2011) and Fan et al. (2011). For an overview of variable selection for high dimensional problems, see Fan and Lv (2010).

While much effort has been spent on model selection and parameter estimation for the ultrahigh dimensional regression problem, virtually no published work is devoted to quantifying the uncertainty in the chosen models and their parameter estimates. A notable exception is the pioneering work of Fan et al. (2012), where a cross-validation based method is proposed to estimate the error variance $\sigma^2$. Given such an estimate and a final model, confidence intervals for the $\beta_i$'s can be constructed using classical linear model theory. However, this approach does not account for the additional variability contributed by the need to select a final model.

The goal of this paper is to investigate the use of Fisher's fiducial idea (Fisher, 1930) in the ultrahigh dimensional regression problem. In particular, a new procedure is developed for constructing confidence intervals for all the parameters (including $\sigma$) in the final selected model. This procedure automatically accounts for the variability introduced by model selection. To the best of our knowledge, this is the first time that Fisher's fiducial idea is applied to the so-called "large $p$ small $n$" problem.

Fisher (1930) introduced fiducial inference in order to define a statistically meaningful distribution on the parameter space in cases when one cannot use Bayes' theorem due to the lack of prior information. While never formally defined, fiducial inference has a long and storied history. We refer the interested reader to Hannig (2009) and Salome (1998), where a wealth of references can be found.

Ideas related to fiducial inference have experienced an exciting resurgence in the last decade. Some of these modern ideas are Dempster-Shafer calculus and its generalizations (Dempster, 2008; Martin et al., 2010; Zhang and Liu, 2011; Martin and Liu, 2013), confidence distributions (Singh et al., 2005; Xie et al., 2011), generalized inference (Weerahandi, 1993, 1995) and reference priors in objective Bayesian inference (Berger et al., 2009). There has also been a wealth of successful applications of these methods to practical problems. For selected examples see McNally et al. (2003); Wang and Iyer (2005); E et al. (2008); Edlefsen et al. (2009); Hannig and Lee (2009) and Cisewski and Hannig (2012).

The particular variant of Fisher's fiducial idea that this paper considers is the so-called generalized fiducial inference. Some early ideas were developed by Hannig et al. (2006), and later Hannig (2009) used these ideas to formally define a generalized fiducial distribution. A brief description of generalized fiducial inference is given below.

The rest of this paper is organized as follows. Section 2 provides some background material on generalized fiducial inference, and applies the methodology to the ultrahigh dimensional regression problem. The theoretical properties of the proposed solution are examined in Section 3, while its empirical properties are illustrated in Section 4. Lastly, concluding remarks are offered in Section 5 and technical details are deferred to the appendix.

## 2 Methodology

Generalized fiducial inference begins with expressing the relationship between the data $\mathbf{y}$ and the parameters $\boldsymbol{\theta}$ as

$$\mathbf{y} = G(\boldsymbol{\theta}, \mathbf{U}), \qquad (1)$$

where $G$ is sometimes known as the structural equation, and $\mathbf{U}$ is the random component of the relation whose distribution is completely known; e.g., a vector of i.i.d. U(0,1)'s. Recall that in the definition of the celebrated maximum likelihood estimator, Fisher "switched" the roles of $\mathbf{y}$ and $\boldsymbol{\theta}$: the random $\mathbf{y}$ is treated as deterministic in the likelihood function, while the deterministic $\boldsymbol{\theta}$ is treated as random. Through (1), generalized fiducial inference uses this "switching principle" to define a valid probability distribution on $\boldsymbol{\theta}$.

This switching principle proceeds as follows. For the moment, suppose that for any given realization of $\mathbf{y}$, the inverse

$$\boldsymbol{\theta} = G^{-1}(\mathbf{y}, \mathbf{U}) \qquad (2)$$

always exists for any realization of $\mathbf{U}$. Since the distribution of $\mathbf{U}$ is assumed known, one can always generate a random sample $\tilde{\mathbf{U}}_1, \ldots, \tilde{\mathbf{U}}_m$ of $\mathbf{U}$, and via (2) a random sample of $\boldsymbol{\theta}$ can be obtained as $\tilde{\boldsymbol{\theta}}_i = G^{-1}(\mathbf{y}, \tilde{\mathbf{U}}_i)$. This is called a fiducial sample of $\boldsymbol{\theta}$, which can be used to calculate estimates and construct confidence intervals for $\boldsymbol{\theta}$ in a similar fashion as with a Bayesian posterior sample. Through the above switching and inverse operations, one can see that a density function for $\boldsymbol{\theta}$ is implicitly defined. We term this density $r(\boldsymbol{\theta})$ the generalized fiducial density for $\boldsymbol{\theta}$, and the corresponding distribution the generalized fiducial distribution for $\boldsymbol{\theta}$. An illustrative example of applying this idea to simple linear regression can be found in Hannig and Lee (2009), and a formal mathematical definition of generalized fiducial inference is described in detail in Hannig (2009). The latter work also provides strategies to ensure the existence of the inverse (2).
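As a concrete toy illustration of this generate-and-invert recipe (separate from the regression methodology developed below), consider the structural equation $y = \mu + U$ with a single observation and $U \sim N(0,1)$; the following sketch, with an arbitrary illustrative value of $y$, produces a fiducial sample for $\mu$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy structural equation: a single observation y = mu + u with u ~ N(0, 1).
# The inverse (2) is then simply mu = y - u.
y = 3.2  # the observed datum (an arbitrary illustrative value)

# Generate realizations of the random component U ...
u = rng.standard_normal(10_000)

# ... and invert the structural equation to obtain a fiducial sample of mu.
mu_fiducial = y - u

# The fiducial sample is used like a Bayesian posterior sample:
point_estimate = mu_fiducial.mean()
ci_95 = np.percentile(mu_fiducial, [2.5, 97.5])
```

In this toy case the fiducial distribution of $\mu$ is simply $N(y, 1)$, so the percentile interval recovers the classical $y \pm 1.96$ interval; in more complex models the same recipe applies, but the inversion is no longer available in closed form.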

Observe that for the ultrahigh dimensional regression problem that this paper considers, $\boldsymbol{\theta}$ can be decomposed into three components: $\boldsymbol{\theta} = (M, \sigma, \boldsymbol{\beta}_M)$, where $M$ denotes a candidate model and can be seen as a sequence of binary variables indicating which predictors are significant, $\sigma$ is the noise standard deviation, and $\boldsymbol{\beta}_M$ is the vector of coefficients of the significant predictors. In the next subsection we derive the generalized fiducial density $r(\boldsymbol{\theta})$ for this problem, and then we demonstrate how to generate a fiducial sample using $r(\boldsymbol{\theta})$.

### 2.1 Generalized Fiducial Density for Ultrahigh Dimensional Regression

While the above formal definition of generalized fiducial inference is conceptually simple and very general, it may not be easily applicable in some practical situations. When the model dimension is known, Hannig (2013) derived a workable formula for $r(\boldsymbol{\theta})$ for many practical situations. Assume that the parameter $\boldsymbol{\theta}$ is $d$-dimensional and that the inverse $\mathbf{U} = G^{-1}(\mathbf{y}, \boldsymbol{\theta})$ to (1) exists. This assumption is satisfied for many natural structural equations, provided that $\mathbf{y}$ and $\mathbf{U}$ have the same dimension and $G$ is smooth. Note that this inverse is different from the inverse in (2). Then under some differentiability assumptions, Hannig (2013) showed that the generalized fiducial distribution is absolutely continuous with density

$$r(\boldsymbol{\theta}) = \frac{f(\mathbf{y}, \boldsymbol{\theta})\, J(\mathbf{y}, \boldsymbol{\theta})}{\int f(\mathbf{y}, \boldsymbol{\theta}')\, J(\mathbf{y}, \boldsymbol{\theta}') \, d\boldsymbol{\theta}'}, \qquad (3)$$

where

$$J(\mathbf{y}, \boldsymbol{\theta}) = \sum_{\mathbf{i} = (i_1, \ldots, i_d)} \left| \det\!\left( \frac{d\,G^{-1}(\mathbf{y}, \boldsymbol{\theta})}{d(\boldsymbol{\theta}, \mathbf{y}_{\mathbf{i}^c})} \right) \right| \bigg/ \left| \det\!\left( \frac{d\,G^{-1}(\mathbf{y}, \boldsymbol{\theta})}{d\mathbf{y}} \right) \right|. \qquad (4)$$

In the above $f(\mathbf{y}, \boldsymbol{\theta})$ is the likelihood and the sum goes over all $d$-tuples of indexes $\mathbf{i} = (1 \le i_1 < \cdots < i_d \le n)$. Also, for each $\mathbf{i}$ we denote the list of unused indexes by $\mathbf{i}^c$, the collection of observations indexed by $\mathbf{i}$ by $\mathbf{y}_{\mathbf{i}}$, and its complement by $\mathbf{y}_{\mathbf{i}^c}$. The formula $d\,G^{-1}(\mathbf{y}, \boldsymbol{\theta})/d(\boldsymbol{\theta}, \mathbf{y}_{\mathbf{i}^c})$ stands for the Jacobian matrix computed with respect to all parameters and the observations $\mathbf{y}_{\mathbf{i}^c}$. Similarly, $d\,G^{-1}(\mathbf{y}, \boldsymbol{\theta})/d\mathbf{y}$ stands for the Jacobian matrix computed with respect to the observations $\mathbf{y}$.

Recall that formula (3) was derived for situations where the model dimension is known, and hence it cannot be directly applied to the current problem. When model selection is required, Hannig and Lee (2009) proposed adding extra penalty structural equations to (3). This is similar to adding a penalty term to the likelihood function to account for model complexity. In particular, their derivation shows that the fiducial probability of each candidate model $M$ is proportional to

$$\int f(\mathbf{y}, \boldsymbol{\theta}_M)\, J(\mathbf{y}, \boldsymbol{\theta}_M)\, e^{-q(M)} \, d\boldsymbol{\theta}_M, \qquad (5)$$

where $f$ is the likelihood, $J$ is the Jacobian (4), and $q(M)$ is the penalty associated with the model $M$. In the context of wavelet regression, they recommended using the minimum description length (MDL) principle (Rissanen, 1989, 2007) to derive the penalty $q(M)$, which is shown to possess attractive theoretical and empirical properties.

Given the success of Hannig and Lee (2009), we also attempted to use the MDL principle to derive a penalty for the current problem, which gives $q(M) = \frac{|M|}{2} \log n$, with $|M|$ being the number of significant parameters in $M$. However, this form of $q(M)$ fails here, as the classical MDL principle was not designed to handle the "$p \gg n$" scenario. To overcome this issue, we propose using the following penalty

$$q(M) = \frac{|M|}{2} \log n + \gamma\, |M| \log p, \qquad (6)$$

where the additional second term comes from the need to encode which of the $p$ parameters are left as zero. Here $\gamma$ is a constant measuring the quality of the encoding; the most natural choice is $\gamma = 1$, but other choices are possible. In all our numerical work we use $\gamma = 1$. We note that the second term of (6) is similar to the EBIC penalty of Chen and Chen (2008).
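The penalty described above can be sketched in a few lines. The function below is illustrative only: it assumes the penalty takes an MDL/EBIC-like form with a $\frac{|M|}{2}\log n$ term for the retained coefficients plus a term proportional to $|M| \log p$ for encoding which predictors are non-zero, with the constant $\gamma$ as a tuning knob.

```python
import math

def penalty(model_size: int, n: int, p: int, gamma: float = 1.0) -> float:
    """Hypothetical MDL-style penalty q(M): the first term prices the |M|
    retained coefficients, the second prices the identity of the |M|
    non-zero positions among the p candidates (gamma tunes the assumed
    quality of this encoding)."""
    return 0.5 * model_size * math.log(n) + gamma * model_size * math.log(p)
```

Under this form, larger candidate pools (bigger $p$) are penalized more heavily for the same model size, which is exactly what the classical $\frac{|M|}{2}\log n$ term alone fails to do when $p \gg n$.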

### 2.2 Practical Generation of Fiducial Sample

In this subsection we propose a practical procedure for generating a fiducial sample using (7). First note that even for a moderate $p$, the total number of possible models, $2^p$, is huge, and hence any method that is exhaustive in nature is computationally infeasible.

The proposed procedure begins with constructing a class of candidate models, denoted as $\mathcal{M}$. This class should satisfy the following two properties: $\mathcal{M}$ is small, and it contains the true model and models that have non-negligible fiducial probabilities. To construct $\mathcal{M}$, we first apply the sure independence screening (SIS) procedure of Fan and Lv (2008) to reduce the number of predictors from $p$ to $d$, where $d$ is of order $n/\log n$. To further reduce the number of possible models (which is $2^d$), we apply LASSO to those predictors that survived SIS, and take all the models that lie on the LASSO solution path as $\mathcal{M}$. Note that the LASSO solution path can be quickly obtained via the least angle regression method (Efron et al., 2004), and that constructing $\mathcal{M}$ in this way will ensure the true model is captured in $\mathcal{M}$ with probability tending to 1 (Fan and Lv, 2008).
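The two-stage construction of the candidate class can be sketched as follows. To keep the sketch self-contained, the LASSO/LARS solution path is replaced by a simpler device: the SIS survivors are added one at a time in order of marginal correlation, producing a nested sequence of candidate models. This illustrates the idea of a small nested candidate class, not the exact procedure.

```python
import numpy as np

def candidate_models(X, y, d=None):
    """Sketch of the candidate-model construction of Section 2.2.
    Step 1 (SIS): keep the d predictors with the largest marginal
    correlations with y.  Step 2 (stand-in for the LASSO path): form
    nested models by adding survivors in order of marginal correlation."""
    n, p = X.shape
    if d is None:
        d = max(1, int(n / np.log(n)))          # SIS keeps O(n / log n) predictors
    corr = np.abs(X.T @ (y - y.mean())) / n     # marginal (cross-)correlations
    survivors = np.argsort(corr)[::-1][:d]
    # Nested candidate models M_1 subset M_2 subset ... subset M_d
    return [tuple(survivors[: k + 1]) for k in range(d)]
```

With a strong enough signal, the true predictors survive screening and appear in the candidate class, mirroring the sure-screening property invoked above.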

Once $\mathcal{M}$ is obtained, for each $M \in \mathcal{M}$, calculate

and approximate the generalized fiducial probability (7) by

(8) |

Next, fiducial samples are generated for $\sigma$ and $\boldsymbol{\beta}_M$. For any given $M$, it is straightforward to show that the generalized fiducial distribution of $\sigma^2$ conditional on $M$ is

$$\sigma^2 \mid M \;\sim\; \frac{\mathrm{RSS}_M}{\chi^2_{n-|M|}}, \qquad (9)$$

and that of $\boldsymbol{\beta}_M$ conditional on $M$ and $\sigma$ is

$$\boldsymbol{\beta}_M \mid M, \sigma \;\sim\; N\!\left(\hat{\boldsymbol{\beta}}_M,\; \sigma^2 (\mathbf{X}_M^T \mathbf{X}_M)^{-1}\right), \qquad (10)$$

where $\hat{\boldsymbol{\beta}}_M$ is the maximum likelihood estimate of $\boldsymbol{\beta}_M$ for model $M$, $\mathrm{RSS}_M$ is the corresponding residual sum of squares, and $\mathbf{X}_M$ is the design matrix for model $M$.
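A minimal sketch of the conditional sampling step is given below. It assumes the two conditional fiducial distributions take the standard forms for a Gaussian linear model, namely $\sigma^2 \sim \mathrm{RSS}_M/\chi^2_{n-|M|}$ and $\boldsymbol{\beta}_M \mid \sigma \sim N(\hat{\boldsymbol{\beta}}_M, \sigma^2(\mathbf{X}_M^T\mathbf{X}_M)^{-1})$; the function name and interface are illustrative.

```python
import numpy as np

def fiducial_draws(X_M, y, n_draws=1000, rng=None):
    """Draw (sigma, beta_M) for a fixed model M, assuming the standard
    conditional forms: sigma^2 ~ RSS_M / chisq(n - |M|), then
    beta_M | sigma ~ N(beta_hat_M, sigma^2 (X_M' X_M)^{-1})."""
    rng = rng or np.random.default_rng()
    n, m = X_M.shape
    XtX_inv = np.linalg.inv(X_M.T @ X_M)
    beta_hat = XtX_inv @ X_M.T @ y              # least squares fit under M
    rss = float(np.sum((y - X_M @ beta_hat) ** 2))
    sigma2 = rss / rng.chisquare(n - m, size=n_draws)
    L = np.linalg.cholesky(XtX_inv)             # so that L L' = (X_M' X_M)^{-1}
    z = rng.standard_normal((n_draws, m))
    beta = beta_hat + np.sqrt(sigma2)[:, None] * (z @ L.T)
    return np.sqrt(sigma2), beta
```

Repeating this for models drawn according to their (approximate) fiducial probabilities yields the full fiducial sample of $(M, \sigma, \boldsymbol{\beta}_M)$ used in the next subsection.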

### 2.3 Point Estimates and Confidence Intervals

Applying the above procedure repeatedly, one can obtain multiple copies of $(M, \sigma, \boldsymbol{\beta}_M)$ that form a fiducial sample for $\boldsymbol{\theta}$. This fiducial sample can be used to form estimates and confidence intervals for $\boldsymbol{\theta}$ in a similar manner as with a Bayesian posterior sample. For example, the average of all the sampled $\sigma$'s can be used as an estimate of $\sigma$, while the 2.5% smallest and 2.5% largest values can be used respectively as the lower and upper limits of a 95% confidence interval for $\sigma$.

Obtaining estimates and confidence intervals for the $\beta_i$'s is, however, less straightforward. This is because any given predictor may be included in some but not all of the sampled models $M$. In other words, some of the generated fiducial values for $\beta_i$ are zero while others are not.

We use the following simple procedure to deal with this issue. For each $\beta_i$, we count the percentage of zero fiducial sample values. If it is more than 50%, we declare that this particular $\beta_i$ is not significant. Otherwise, we treat $\beta_i$ as a significant parameter, and use all the non-zero fiducial sample values to obtain estimates and confidence intervals for it, in the same way as for $\sigma$. Note that a similar idea has been used by Barbieri and Berger (2004) to determine the significance of a parameter in the Bayesian context.
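The 50% rule and the percentile intervals just described can be sketched as follows; the function name and the returned dictionary layout are illustrative.

```python
import numpy as np

def summarize_coefficient(draws, level=0.95):
    """Apply the 50% rule of Section 2.3 to one coordinate's fiducial
    draws: a draw is zero whenever the sampled model excluded the
    predictor.  Non-zero draws supply the estimate and the interval."""
    draws = np.asarray(draws, dtype=float)
    nonzero = draws[draws != 0]
    if nonzero.size <= draws.size / 2:          # more than 50% zeros
        return {"significant": False}
    alpha = (1 - level) / 2
    lo, hi = np.quantile(nonzero, [alpha, 1 - alpha])
    return {"significant": True, "estimate": nonzero.mean(), "ci": (lo, hi)}
```

Note that basing the interval on the non-zero draws only is exactly the conditioning described above; a predictor with, say, 70% zero draws is simply reported as not significant.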

## 3 Theoretical Properties

This section investigates the theoretical properties of the above-proposed generalized fiducial based method, under the situation that $p$ is diverging and the size of the true model is either fixed or diverging. For similar results in the classical situation where $p$ is fixed, see Hannig (2009, 2013).

First, some notation. Let $M$ be any model, $M_0$ be the true model, and $\mathbf{H}_M$ be the projection matrix of $\mathbf{X}_M$; i.e., $\mathbf{H}_M = \mathbf{X}_M (\mathbf{X}_M^T \mathbf{X}_M)^{-1} \mathbf{X}_M^T$. Define

where . Throughout this section we assume the following identifiability condition holds:

(11) |

for some fixed constant. This condition ensures that the true model is identifiable, and it has been used, for example, by Luo and Chen (2013). It can be shown that, under the sparse Riesz condition and the condition

the identifiability condition (11) holds. However, the converse does not hold in general.

Let be the collection of models such that for some fixed . The restriction is imposed because in practice we only consider models whose size is comparable to that of the true model.

If $p$ is large, the size of this collection could still be too large in practice. In this situation, we can use a variable screening procedure to reduce the size. This variable screening procedure should result in a class of candidate models which satisfies

(12) |

where contains all models in that are of size . The first condition in (12) guarantees the model class contains the true model, at least asymptotically. The second condition in (12) ensures that the size of the model class is not too large. These two conditions are satisfied by the practical algorithm presented in Section 2.2.

In Appendix B the following theorem is established.

###### Theorem 3.1.

Equation (13) states that the true model has the highest generalized fiducial probability amongst all the models under consideration. However, it does not imply equation (14) in general, because the class of candidate models can be very large. If we constrain the class of models being considered in such a way that (12) holds, then equation (14) states that, with probability tending to 1, the true model will be selected. From Theorem 3.1, one can conclude the following important corollary.

###### Corollary 3.1.

Statistical inference based on the generalized fiducial density (7) will have exact asymptotic frequentist properties. Consequently, the generalized fiducial distribution and the derived point estimators are consistent.

## 4 Finite Sample Properties

### 4.1 Simulations

A simulation study was conducted to evaluate the practical performance of the proposed procedure. The following model from Fan et al. (2012) was used to generate the noisy data

where the errors are i.i.d. standard normal, $s$ is the number of significant predictors, and the common value of the non-zero coefficients controls the signal-to-noise ratio. All the covariates are standard normal variables with common pairwise correlation. Three combinations of $(n, p)$ were used. For each of these three combinations, 3 choices of the non-zero coefficient value and 2 choices of the correlation were used, so a total of $3 \times 3 \times 2 = 18$ experimental configurations were considered. The number of repetitions for each experimental configuration was 1000. For a given configuration, the three choices of the coefficient value correspond to signal-to-noise ratios of 1, 2 and 3, respectively.
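One replication of such a design can be sketched as follows. The concrete values of $n$, $p$, the number of significant predictors, the coefficient value and the correlation are placeholders (the text's specific settings are not reproduced here), and a common-factor equicorrelation construction is assumed for the covariates.

```python
import numpy as np

def simulate(n, p, s, beta0, rho, sigma=1.0, rng=None):
    """Generate one data set in the spirit of Section 4.1: standard normal
    covariates with common pairwise correlation rho, s significant
    predictors sharing the coefficient beta0, and N(0, sigma^2) noise.
    All argument values are illustrative placeholders."""
    rng = rng or np.random.default_rng()
    # Equicorrelated normals: X_j = sqrt(rho) * W + sqrt(1 - rho) * Z_j,
    # so each column is N(0, 1) and cor(X_j, X_k) = rho for j != k.
    w = rng.standard_normal((n, 1))
    z = rng.standard_normal((n, p))
    X = np.sqrt(rho) * w + np.sqrt(1.0 - rho) * z
    beta = np.zeros(p)
    beta[:s] = beta0
    y = X @ beta + sigma * rng.standard_normal(n)
    return X, y, beta
```

The shared factor `w` is what induces the common correlation; any other correlation structure would only require replacing that one construction line.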

For each generated data set, we applied the proposed generalized fiducial procedure described in Section 2.2 to obtain a fiducial sample of size 10,000 for $\boldsymbol{\theta}$, and from this we computed the generalized fiducial estimate for $\sigma$. We also obtained two other estimates for $\sigma$: the first one from the refitted cross-validation (RCV) method of Fan et al. (2012), and the second one is the classical maximum likelihood estimate of $\sigma$ obtained from the true model. Of course the last estimate cannot be obtained in practice, but it is computed here for benchmark comparisons. In the sequel it is termed the oracle estimate. Also, for RCV, the particular version we compared with is RCV-LASSO.

The biases of these three estimates for $\sigma$ are summarized in Table 1. From this table one can see that the biases of the fiducial estimates are usually not much larger than those of the oracle estimates. The RCV estimates sometimes have very large biases.

proposed | (0.323) | (0.271) | 0.230 (0.219) | |
---|---|---|---|---|

RCV | 1.507 (0.488) | (0.330) | (0.221) | |

oracle | (0.317) | (0.263) | (0.200) | |

proposed | (0.327) | (0.259) | (0.202) | |

RCV | (0.465) | (0.353) | (0.255) | |

oracle | (0.321) | (0.260) | (0.200) | |

proposed | (0.332) | (0.256) | 0.103 (0.203) | |

RCV | (0.451) | (0.362) | (0.286) | |

oracle | (0.328) | (0.254) | (0.201) | |

proposed | 0.352 (0.335) | 0.271 (0.285) | 1.046 (0.227) | |

RCV | 0.455 (0.467) | (0.334) | (0.247) | |

oracle | 0.367 (0.329) | (0.258) | (0.205) | |

proposed | (0.328) | (0.263) | (0.199) | |

RCV | (0.442) | (0.357) | (0.257) | |

oracle | (0.325) | (0.261) | (0.198) | |

proposed | (0.304) | 0.135 (0.259) | (0.198) | |

RCV | (0.430) | (0.342) | (0.274) | |

oracle | (0.302) | (0.258) | (0.197) |

We also obtained two sets of 90%, 95% and 99% confidence intervals for $\sigma$ from each simulated data set. The first set was computed using the proposed generalized fiducial method, and the second was calculated by applying classical theory to the true model. Again, the last method cannot be used in practice and is included for benchmark comparisons; i.e., it is the oracle method. The empirical coverage rates of these confidence intervals are summarized in Table 2. It can be seen that the generalized fiducial confidence intervals are nearly as good as the oracle confidence intervals.

90% | 95% | 99% | |||
---|---|---|---|---|---|

proposed | 0.895 (0.338) | 0.949 (0.405) | 0.985 (0.537) | ||

oracle | 0.896 (0.336) | 0.948 (0.402) | 0.985 (0.534) | ||

proposed | 0.892 (0.337) | 0.937 (0.404) | 0.987 (0.535) | ||

oracle | 0.892 (0.335) | 0.941 (0.401) | 0.988 (0.532) | ||

proposed | 0.884 (0.338) | 0.941 (0.404) | 0.986 (0.536) | ||

oracle | 0.886 (0.335) | 0.943 (0.401) | 0.986 (0.533) | ||

proposed | 0.895 (0.344) | 0.945 (0.412) | 0.988 (0.547) | ||

oracle | 0.896 (0.338) | 0.946 (0.404) | 0.988 (0.536) | ||

proposed | 0.889 (0.339) | 0.939 (0.405) | 0.991 (0.538) | ||

oracle | 0.891 (0.336) | 0.94 (0.402) | 0.991 (0.534) | ||

proposed | 0.906 (0.335) | 0.955 (0.401) | 0.993 (0.532) | ||

oracle | 0.908 (0.332) | 0.957 (0.397) | 0.992 (0.528) | ||

proposed | 0.891 (0.277) | 0.948 (0.331) | 0.985 (0.438) | ||

oracle | 0.898 (0.273) | 0.948 (0.326) | 0.987 (0.432) | ||

proposed | 0.909 (0.275) | 0.951 (0.328) | 0.987 (0.434) | ||

oracle | 0.904 (0.272) | 0.95 (0.325) | 0.985 (0.43) | ||

proposed | 0.913 (0.274) | 0.953 (0.328) | 0.993 (0.433) | ||

oracle | 0.907 (0.273) | 0.955 (0.326) | 0.993 (0.431) | ||

proposed | 0.887 (0.286) | 0.936 (0.342) | 0.984 (0.453) | ||

oracle | 0.898 (0.272) | 0.948 (0.325) | 0.992 (0.43) | ||

proposed | 0.894 (0.275) | 0.947 (0.328) | 0.99 (0.434) | ||

oracle | 0.893 (0.273) | 0.946 (0.326) | 0.992 (0.432) | ||

proposed | 0.906 (0.274) | 0.954 (0.328) | 0.99 (0.433) | ||

oracle | 0.906 (0.273) | 0.952 (0.326) | 0.99 (0.432) | ||

proposed | 0.88 (0.215) | 0.939 (0.257) | 0.989 (0.339) | ||

oracle | 0.909 (0.211) | 0.952 (0.252) | 0.99 (0.332) | ||

proposed | 0.898 (0.212) | 0.942 (0.253) | 0.991 (0.333) | ||

oracle | 0.899 (0.211) | 0.942 (0.251) | 0.991 (0.332) | ||

proposed | 0.901 (0.212) | 0.952 (0.253) | 0.991 (0.333) | ||

oracle | 0.9 (0.211) | 0.953 (0.252) | 0.992 (0.332) | ||

proposed | 0.865 (0.224) | 0.935 (0.267) | 0.985 (0.352) | ||

oracle | 0.9 (0.21) | 0.94 (0.251) | 0.99 (0.331) | ||

proposed | 0.895 (0.211) | 0.95 (0.252) | 0.993 (0.332) | ||

oracle | 0.895 (0.21) | 0.949 (0.251) | 0.992 (0.331) | ||

proposed | 0.905 (0.211) | 0.947 (0.251) | 0.989 (0.331) | ||

oracle | 0.903 (0.21) | 0.945 (0.251) | 0.99 (0.331) |

Lastly, for each simulated data set we applied three methods to compute confidence intervals for the regression coefficients $\beta_i$ and for the mean function evaluated at 50 randomly selected design points. The three methods are the proposed generalized fiducial method, the RCV method of Fan et al. (2012), and the oracle method that uses the true model. As before, the empirical coverage rates of these confidence intervals were calculated, and they are reported in Tables 3 and 4. Note that only the confidence intervals for one representative coefficient are reported, as the confidence intervals for the other $\beta_i$'s have similar coverage rates. Overall one can see that the generalized fiducial method gave quite reliable results, except for a few experimental settings where the confidence intervals were over-liberal.

In an attempt to produce a single summary statistic for comparing the empirical coverage rates of the confidence intervals produced by the different methods, the following calculation was done. For all the 90% generalized fiducial confidence intervals for the $\beta_i$'s, we counted the number of times that their empirical coverage rates are within the range $c \pm 2\sqrt{c(1-c)/N}$, where $c = 0.9$ and $N$ is the number of repetitions performed for each experimental setting. Similar calculations were then performed for the 95% and 99% (i.e., $c = 0.95$ and $c = 0.99$) confidence intervals. It turns out that, for the proposed generalized fiducial method, out of the 54 empirical coverage rates, 33 are within their corresponding target ranges. We have also done the same calculations for the RCV and the oracle methods, and the numbers of their empirical coverage rates that are inside their target ranges are, respectively, 17 and 50. Lastly, we repeated the same calculations for the empirical coverage rates for the mean function, and the corresponding numbers for the proposed, RCV and oracle methods are, respectively, 44, 23 and 54. Of course, these numbers are not perfect for judging the relative merits of the different methods, but they do suggest that the proposed generalized fiducial method provides an improvement over the RCV method.
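The target-range check just described can be sketched as follows, under the assumption that the band extends two binomial standard errors on either side of the nominal level (the exact width used in the text is an assumption of this sketch).

```python
import math

def coverage_band(c, N):
    """Target range for an empirical coverage rate: nominal level c plus
    or minus two binomial standard errors based on N repetitions."""
    half = 2 * math.sqrt(c * (1 - c) / N)
    return c - half, c + half
```

For example, `coverage_band(0.90, 1000)` gives approximately $(0.881, 0.919)$, so an empirical coverage rate of 0.895 would count as being inside its target range.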

90% | 95% | 99% | |||
---|---|---|---|---|---|

proposed | 0.888 (0.236) | 0.946 (0.283) | 0.987 (0.377) | ||

RCV | 0.869 (0.250) | 0.915 (0.298) | 0.956 (0.392) | ||

oracle | 0.897 (0.235) | 0.946 (0.279) | 0.988 (0.367) | ||

proposed | 0.884 (0.235) | 0.948 (0.282) | 0.991 (0.376) | ||

RCV | 0.887 (0.238) | 0.945 (0.284) | 0.988 (0.373) | ||

oracle | 0.889 (0.234) | 0.946 (0.279) | 0.990 (0.367) | ||

proposed | 0.892 (0.236) | 0.947 (0.282) | 0.987 (0.376) | ||

RCV | 0.896 (0.238) | 0.95 (0.284) | 0.99 (0.373) | ||

oracle | 0.897 (0.234) | 0.952 (0.279) | 0.987 (0.367) | ||

proposed | 0.886 (0.282) | 0.936 (0.338) | 0.985 (0.454) | ||

RCV | 0.814 (0.289) | 0.849 (0.345) | 0.902 (0.453) | ||

oracle | 0.894 (0.271) | 0.943 (0.323) | 0.988 (0.424) | ||

proposed | 0.898 (0.271) | 0.944 (0.325) | 0.987 (0.433) | ||

RCV | 0.903 (0.274) | 0.945 (0.326) | 0.988 (0.429) | ||

oracle | 0.894 (0.270) | 0.949 (0.322) | 0.986 (0.423) | ||

proposed | 0.901 (0.269) | 0.948 (0.322) | 0.989 (0.429) | ||

RCV | 0.899 (0.271) | 0.953 (0.323) | 0.988 (0.424) | ||

oracle | 0.897 (0.269) | 0.955 (0.321) | 0.99 (0.422) | ||

proposed | 0.810 (0.191) | 0.896 (0.229) | 0.976 (0.303) | ||

RCV | 0.903 (0.204) | 0.935 (0.243) | 0.956 (0.320) | ||

oracle | 0.900 (0.192) | 0.948 (0.229) | 0.992 (0.301) | ||

proposed | 0.871 (0.189) | 0.936 (0.226) | 0.984 (0.300) | ||

RCV | 0.897 (0.201) | 0.936 (0.239) | 0.981 (0.315) | ||

oracle | 0.907 (0.191) | 0.959 (0.228) | 0.989 (0.300) | ||

proposed | 0.888 (0.19) | 0.934 (0.227) | 0.984 (0.301) | ||

RCV | 0.900 (0.197) | 0.945 (0.235) | 0.979 (0.309) | ||

oracle | 0.879 (0.192) | 0.941 (0.228) | 0.991 (0.300) | ||

proposed | 0.812 (0.269) | 0.887 (0.322) | 0.963 (0.427) | ||

RCV | 0.871 (0.236) | 0.915 (0.281) | 0.960 (0.369) | ||

oracle | 0.912 (0.221) | 0.954 (0.264) | 0.992 (0.346) | ||

proposed | 0.895 (0.250) | 0.949 (0.299) | 0.989 (0.396) | ||

RCV | 0.864 (0.224) | 0.922 (0.266) | 0.975 (0.350) | ||

oracle | 0.891 (0.222) | 0.950 (0.264) | 0.991 (0.347) | ||

proposed | 0.908 (0.250) | 0.950 (0.299) | 0.990 (0.397) | ||

RCV | 0.852 (0.220) | 0.917 (0.262) | 0.975 (0.344) | ||

oracle | 0.904 (0.222) | 0.949 (0.264) | 0.983 (0.347) | ||

proposed | 0.781 (0.148) | 0.875 (0.177) | 0.978 (0.233) | ||

RCV | 0.813 (0.151) | 0.857 (0.180) | 0.884 (0.237) | ||

oracle | 0.910 (0.149) | 0.954 (0.177) | 0.993 (0.233) | ||

proposed | 0.853 (0.147) | 0.919 (0.176) | 0.980 (0.232) | ||

RCV | 0.804 (0.156) | 0.878 (0.186) | 0.965 (0.244) | ||

oracle | 0.902 (0.148) | 0.947 (0.177) | 0.988 (0.232) | ||

proposed | 0.873 (0.147) | 0.925 (0.176) | 0.986 (0.232) | ||

RCV | 0.841 (0.155) | 0.911 (0.184) | 0.981 (0.242) | ||

oracle | 0.897 (0.149) | 0.944 (0.177) | 0.988 (0.233) | ||

proposed | 0.820 (0.206) | 0.885 (0.246) | 0.950 (0.324) | ||

RCV | 0.895 (0.179) | 0.935 (0.213) | 0.965 (0.280) | ||

oracle | 0.925 (0.172) | 0.965 (0.204) | 0.995 (0.269) | ||

proposed | 0.897 (0.193) | 0.949 (0.230) | 0.988 (0.304) | ||

RCV | 0.861 (0.169) | 0.922 (0.202) | 0.976 (0.265) | ||

oracle | 0.893 (0.171) | 0.944 (0.204) | 0.989 (0.268) | ||

proposed | 0.888 (0.193) | 0.945 (0.230) | 0.989 (0.304) | ||

RCV | 0.840 (0.168) | 0.909 (0.201) | 0.968 (0.264) | ||

oracle | 0.899 (0.171) | 0.942 (0.204) | 0.987 (0.268) |

90% | 95% | 99% | |||
---|---|---|---|---|---|

proposed | 0.899 (0.421) | 0.948 (0.511) | 0.988 (0.696) | ||

RCV | 0.966 (1.160) | 0.981 (1.382) | 0.993 (1.817) | ||

oracle | 0.896 (0.343) | 0.947 (0.409) | 0.989 (0.538) | ||

proposed | 0.903 (0.424) | 0.953 (0.516) | 0.990 (0.704) | ||

RCV | 0.857 (0.603) | 0.910 (0.718) | 0.966 (0.944) | ||

oracle | 0.888 (0.342) | 0.944 (0.408) | 0.988 (0.536) | ||

proposed | 0.911 (0.428) | 0.956 (0.519) | 0.991 (0.709) | ||

RCV | 0.931 (0.605) | 0.965 (0.720) | 0.992 (0.947) | ||

oracle | 0.897 (0.343) | 0.947 (0.409) | 0.987 (0.537) | ||

proposed | 0.903 (0.452) | 0.948 (0.549) | 0.987 (0.748) | ||

RCV | 0.925 (1.281) | 0.943 (1.526) | 0.964 (2.005) | ||

oracle | 0.892 (0.344) | 0.944 (0.410) | 0.987 (0.538) | ||

proposed | 0.910 (0.444) | 0.955 (0.538) | 0.990 (0.733) | ||

RCV | 0.855 (0.583) | 0.907 (0.695) | 0.963 (0.914) | ||

oracle | 0.896 (0.343) | 0.948 (0.408) | 0.988 (0.536) | ||

proposed | 0.913 (0.438) | 0.959 (0.532) | 0.993 (0.725) | ||

RCV | 0.925 (0.492) | 0.961 (0.587) | 0.993 (0.771) | ||

oracle | 0.899 (0.342) | 0.947 (0.408) | 0.989 (0.536) | ||

proposed | 0.888 (0.444) | 0.938 (0.536) | 0.981 (0.725) | ||

RCV | 0.951 (1.864) | 0.973 (2.221) | 0.99 (2.919) | ||

oracle | 0.898 (0.388) | 0.950 (0.462) | 0.990 (0.607) | ||

proposed | 0.909 (0.439) | 0.956 (0.531) | 0.992 (0.724) | ||

RCV | 0.949 (1.291) | 0.977 (1.538) | 0.995 (2.022) | ||

oracle | 0.900 (0.386) | 0.949 (0.46) | 0.990 (0.605) | ||

proposed | 0.909 (0.429) | 0.957 (0.519) | 0.992 (0.708) | ||

RCV | 0.942 (0.915) | 0.973 (1.090) | 0.995 (1.432) | ||

oracle | 0.897 (0.387) | 0.948 (0.461) | 0.990 (0.606) | ||

proposed | 0.871 (0.496) | 0.925 (0.602) | 0.975 (0.820) | ||

RCV | 0.953 (1.641) | 0.978 (1.956) | 0.996 (2.570) | ||

oracle | 0.898 (0.387) | 0.947 (0.461) | 0.988 (0.606) | ||

proposed | 0.914 (0.437) | 0.962 (0.531) | 0.994 (0.728) | ||

RCV | 0.947 (0.741) | 0.977 (0.883) | 0.996 (1.160) | ||

oracle | 0.901 (0.387) | 0.954 (0.461) | 0.991 (0.606) | ||

proposed | 0.914 (0.422) | 0.960 (0.512) | 0.993 (0.701) | ||

RCV | 0.914 (0.431) | 0.958 (0.514) | 0.992 (0.676) | ||

oracle | 0.900 (0.388) | 0.951 (0.462) | 0.991 (0.607) | ||

proposed | 0.841 (0.445) | 0.896 (0.534) | 0.951 (0.711) | ||

RCV | 0.934 (1.889) | 0.960 (2.251) | 0.983 (2.958) | ||

oracle | 0.902 (0.409) | 0.953 (0.488) | 0.991 (0.641) | ||

proposed | 0.907 (0.435) | 0.955 (0.522) | 0.991 (0.697) | ||

RCV | 0.951 (1.573) | 0.980 (1.874) | 0.997 (2.463) | ||

oracle | 0.903 (0.409) | 0.951 (0.487) | 0.990 (0.640) | ||

proposed | 0.900 (0.429) | 0.951 (0.515) | 0.990 (0.687) | ||

RCV | 0.957 (1.187) | 0.983 (1.415) | 0.998 (1.860) | ||

oracle | 0.898 (0.409) | 0.949 (0.488) | 0.989 (0.641) | ||

proposed | 0.829 (0.501) | 0.892 (0.601) | 0.958 (0.803) | ||

RCV | 0.945 (1.713) | 0.978 (2.041) | 0.996 (2.682) | ||

oracle | 0.905 (0.408) | 0.951 (0.486) | 0.992 (0.639) | ||

proposed | 0.907 (0.430) | 0.956 (0.517) | 0.993 (0.693) | ||

RCV | 0.951 (0.708) | 0.979 (0.844) | 0.997 (1.109) | ||

oracle | 0.900 (0.408) | 0.951 (0.487) | 0.992 (0.640) | ||

proposed | 0.903 (0.421) | 0.953 (0.505) | 0.991 (0.675) | ||

RCV | 0.900 (0.417) | 0.949 (0.497) | 0.990 (0.653) | ||

oracle | 0.898 (0.409) | 0.949 (0.487) | 0.990 (0.640) |

### 4.2 Real Data Example: Housing Price Appreciation

This section analyzes a data set that contains 119 months of housing price appreciation (HPA) of the national house price index (HPI) for 381 core-based statistical areas (CBSAs) in the United States. Here HPA is defined as the percentage of monthly change in log-HPI for each of the 381 CBSAs. The goal of the analysis is to predict future HPA values for these CBSAs using existing data. The data set was recorded from 1996 to 2005, and has been studied, for example, by Fan et al. (2012).

Of course, house prices depend on geographical locations and various macroeconomic factors. As argued by Fan et al. (2012), effects from macroeconomic factors can be well summarized by the national HPA. Let $Y_{i,t}$ be the HPA of the $i$-th CBSA in month $t$, and let $X_t$ be the national HPA of month $t$. Then for any $t$, a reasonable model for a 1-year ahead HPA prediction for the $i$-th CBSA is

$$Y_{i,t+12} = \sum_{j=1}^{381} \beta_{ij} Y_{j,t} + \gamma_i X_t + \epsilon_{i,t},$$

where the $\beta_{ij}$'s and $\gamma_i$ are model parameters and $\epsilon_{i,t}$ is an independent random error. Given the national HPA $X_t$, it is reasonable to assume that areas that are far away would have minimal influence on the local house prices; therefore one can assume the $\beta_{ij}$'s are sparse. Note that for any given $t$, we have "large $p$, small $n$", as the number of candidate predictors exceeds the number of usable months.
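To make the dimensions concrete: with 119 months, 381 CBSAs, and a 12-month-ahead target, the usable sample size falls below the number of candidate predictors. A minimal sketch of assembling such a lagged design matrix (the function name and synthetic data are hypothetical):

```python
import numpy as np

def lagged_design(hpa, national, lag=12):
    """Build response/design for lag-ahead prediction of one CBSA.

    hpa: (T, p) array of monthly HPA values for p CBSAs.
    national: (T,) array of national HPA.
    Returns y (length T - lag) and X with p CBSA columns plus the
    national HPA column, so the number of predictors typically
    exceeds the number of usable months (p + 1 > T - lag).
    """
    T, p = hpa.shape
    y = hpa[lag:, 0]                               # target CBSA, lag months ahead
    X = np.column_stack([hpa[:T - lag], national[:T - lag]])
    return y, X

# Synthetic data with the dimensions from the text: 119 months, 381 CBSAs.
rng = np.random.default_rng(1)
T, p = 119, 381
hpa = rng.normal(size=(T, p))
y, X = lagged_design(hpa, rng.normal(size=T), lag=12)
# y has 119 - 12 = 107 rows; X has 382 columns, so p exceeds n here
```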

For illustrative purposes, we apply the proposed generalized fiducial procedure to the above model for one of the CBSAs: San Francisco-San Mateo-Redwood. Two fitted models with non-negligible fiducial probabilities are returned: with probability 0.335, the housing appreciation of this area depends on its own history and that of the nearby CBSA San Jose-San Francisco-Oakland, while with probability about 0.663 it depends only on the CBSA San Jose-San Francisco-Oakland.

We also obtained an estimate of the noise standard deviation $\sigma$, which can be interpreted as a measure of prediction accuracy when forecasting the housing appreciation. Our point estimate of $\sigma$ is 0.56, with an accompanying 95% confidence interval. This point estimate agrees with those reported in Fan et al. (2012), although no confidence intervals are reported there.
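Given fiducial model probabilities and per-model draws of $\sigma$, a point estimate and interval can be read off the weighted mixture of draws. The sketch below illustrates this pooling step generically (the function, draw counts, and normal draws are all hypothetical; only the two model probabilities 0.335 and 0.663 come from the text):

```python
import numpy as np

def mixture_interval(sigma_draws, model_probs, level=0.95):
    """Pool per-model draws of sigma, weighted by model probability,
    and return (point estimate, (lower, upper)) at the given level."""
    samples, weights = [], []
    for m, draws in sigma_draws.items():
        draws = np.asarray(draws, dtype=float)
        samples.append(draws)
        # spread each model's probability evenly over its draws
        weights.append(np.full(draws.size, model_probs[m] / draws.size))
    s = np.concatenate(samples)
    w = np.concatenate(weights)
    order = np.argsort(s)
    s, w = s[order], w[order]
    cdf = np.cumsum(w) / w.sum()                   # weighted empirical CDF
    alpha = (1.0 - level) / 2.0
    lower = float(s[np.searchsorted(cdf, alpha)])
    upper = float(s[np.searchsorted(cdf, 1.0 - alpha)])
    point = float(np.average(s, weights=w))
    return point, (lower, upper)

rng = np.random.default_rng(2)
draws = {"M1": rng.normal(0.56, 0.03, 5000),       # hypothetical draws
         "M2": rng.normal(0.57, 0.03, 5000)}
probs = {"M1": 0.335, "M2": 0.663}                 # fiducial probabilities from the text
point, (lo, hi) = mixture_interval(draws, probs)
```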

## 5 Conclusion

In this paper we studied the issue of uncertainty quantification in the ultrahigh dimensional regression problem. We applied the generalized fiducial methodology to develop an inferential procedure for this problem. Our theoretical results show that estimates obtained by this procedure are consistent, while confidence intervals constructed by this procedure are asymptotically correct in the frequentist sense. Numerical results from simulation experiments confirm these theoretical findings. To the best of our knowledge, very few published papers are devoted to quantifying uncertainty in the ultrahigh dimensional regression problem, and hence the current paper is one of the first to provide a systematic treatment of this problem. It also opens the possibility of using fiducial and related methods to conduct statistical inference for other "large $p$, small $n$" problems, such as classification and covariance matrix estimation.

## Appendix A Derivation of (7)

This appendix derives the generalized fiducial density (7). A major challenge is to obtain a computable expression for the Jacobian (4).

First observe that the term in (4) can be further simplified. The product of Jacobian matrices in each of the summands of (4) simplifies to a matrix containing the -columns of the matrix and the columns of the identity matrix with columns removed. Thus we have

(15) |

where for any matrix , the sub-matrix is the matrix containing the rows of .

Then notice that each of the candidate models is a multiple regression model, with an implicit structural equation

where is the vector of observations, is the design matrix for model , and are parameters, and is a vector of i.i.d. standard normal random variables. Plugging this into (15) and performing some calculations, one obtains

Substituting this into (5) we have

(16) |

where denotes the residual sum of squares of model when the parameters are estimated using maximum likelihood, and the term that controls the model dimension is given by (6).
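The fiducial probability of each candidate model is thus driven by its residual sum of squares together with a penalty on the model dimension. The following sketch shows the general mechanics of turning RSS-based log scores into normalized model probabilities; the exponent and the BIC-style penalty used here are placeholders, not the exact quantities in (16) and (6):

```python
import math
import numpy as np

def log_model_score(y, X_M, penalty):
    """Log score of the generic form RSS_M^{-(n-|M|)/2} * exp(-penalty);
    placeholder exponents, standing in for the exact form of (16)."""
    n, k = X_M.shape
    beta, rss, rank, _ = np.linalg.lstsq(X_M, y, rcond=None)
    # lstsq returns an empty residual array in rank-deficient cases
    rss = float(rss[0]) if rss.size else float(np.sum((y - X_M @ beta) ** 2))
    return -0.5 * (n - k) * math.log(rss) - penalty(n, k)

def model_probs(y, X, models, penalty):
    """Normalize the scores of candidate models into probabilities."""
    scores = np.array([log_model_score(y, X[:, m], penalty) for m in models])
    w = np.exp(scores - scores.max())              # subtract max for stability
    return w / w.sum()

# Toy example: the true model uses columns 0 and 1.
rng = np.random.default_rng(3)
n, p = 100, 10
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.normal(size=n)
cand = [[0, 1], [0], [0, 1, 2]]
penalty = lambda n, k: k * math.log(n)             # stand-in for the penalty (6)
probs = model_probs(y, X, cand, penalty)
```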

The expression (16) has performed well in our simulations. However, the need to compute a sum of terms makes it computationally expensive. To obtain a faster alternative, we re-express the response for each fixed model as a column vector

With this the Jacobian (15) becomes

The simplification in the previous formula occurs because all but the first rows of the matrix obtained as the product of matrices in the above expression are zero, and therefore only one determinant in the sum is non-zero. This, together with the penalty (6), brings us to the final generalized fiducial distribution (7).

Notice that both and are of the form , where is a specific constant depending only on the observed data. Therefore the Jacobians can be viewed as improper Bayesian priors. As discussed in Berger and Pericchi (2001), one of the issues with using improper priors in Bayesian model selection is that the choice of the multiplicative constant in the prior is arbitrary. This is not a problem when the posterior under a single model is considered, because the arbitrary constant cancels. It becomes a problem in model selection, however, because the arbitrary constants influence the result, making improper priors difficult to use for comparing models. Thus one contribution of fiducial inference is that it selects a particular constant for each model.

## Appendix B Proof of Theorem 3.1

### B.1 Lemmas

First we present three lemmas; detailed proofs can be found in Luo and Chen (2013). Lemma B.1 is proved by applying Stirling's formula, Lemma B.2 by integration by parts, and Lemma B.3 by applying Lemma B.2.

###### Lemma B.1.

If as , then

###### Lemma B.2.

Let be a chi-square random variable with degrees of freedom . If and , then

uniformly over .

###### Lemma B.3.

Let be a chi-square random variable with degrees of freedom . Let . If , then for any ,

### B.2 Proof of Theorem 3.1

This appendix presents the proof of Theorem 3.1. Some of the arguments are similar to those in Luo and Chen (2013).

Denote by the collection of models for which (11) holds, i.e., for some fixed . We first prove that . Without loss of generality, assume that . Let and whenever there is no ambiguity. Notice that and . Rewrite

where

We are going to show that the following hold uniformly for all :