Valid post-selection inference
Abstract
It is common practice in statistical data analysis to perform data-driven variable selection and derive statistical inference from the resulting model. Such inference enjoys none of the guarantees that classical statistical theory provides for tests and confidence intervals when the model has been chosen a priori. We propose to produce valid “post-selection inference” by reducing the problem to one of simultaneous inference and hence suitably widening conventional confidence and retention intervals. Simultaneity is required for all linear functions that arise as coefficient estimates in all submodels. By purchasing “simultaneity insurance” for all possible submodels, the resulting post-selection inference is rendered universally valid under all possible model selection procedures. This inference is therefore generally conservative for particular selection procedures, but it is always less conservative than full Scheffé protection. Importantly it does not depend on the truth of the selected submodel, and hence it produces valid inference even in wrong models. We describe the structure of the simultaneous inference problem and give some asymptotic results.
The Annals of Statistics 2013, Vol. 41, No. 2, 802–837. DOI: 10.1214/12-AOS1077.
Richard Berk, Lawrence Brown, Andreas Buja, Kai Zhang and Linda Zhao
The Wharton School, University of Pennsylvania
berk@wharton.upenn.edu, lbrown@wharton.upenn.edu, buja.at.wharton@gmail.com, zhangk@wharton.upenn.edu, lzhao@wharton.upenn.edu
Supported in part by NSF Grant DMS-10-07657.
AMS subject classifications: 62J05, 62J15. Keywords: linear regression, model selection, multiple comparison, familywise error, high-dimensional inference, sphere packing.
1 Introduction: The problem with statistical inference after model selection.
Classical statistical theory grants validity of statistical tests and confidence intervals assuming a wall of separation between the selection of a model and the analysis of the data being modeled. In practice, this separation rarely exists, and more often a model is “found” by a data-driven selection process. As a consequence inferential guarantees derived from classical theory are invalidated. Among model selection methods that are problematic for classical inference, variable selection stands out because it is regularly taught, commonly practiced and highly researched as a technology. Even though statisticians may have a general awareness that the data-driven selection of variables (predictors, covariates) must somehow affect subsequent classical inference from $F$- and $t$-based tests and confidence intervals, the practice is so pervasive that it appears in classical undergraduate textbooks on statistics such as Moore and McCabe (2003).
The reason for the invalidation of classical inference guarantees is that a data-driven variable selection process produces a model that is itself stochastic, and this stochastic aspect is not accounted for by classical theory. Models become stochastic when the stochastic component of the data is involved in the selection process. (In regression with fixed predictors the stochastic component is the response.) Models are stochastic in a well-defined way when they are the result of formal variable selection procedures such as stepwise or stagewise forward selection, backward elimination or all-subset searches driven by complexity penalties (such as $C_p$, AIC, BIC, risk inflation, LASSO) or prediction criteria such as cross-validation, or more recent proposals such as LARS and the Dantzig selector; for an overview see, for example, Hastie, Tibshirani and Friedman (2009). Models are also stochastic but in an ill-defined way when they are informally selected through visual inspection of residual plots or normal quantile plots or other regression diagnostics. Finally, models become stochastic in an opaque way when their selection is affected by human intervention based on post hoc considerations such as “in retrospect only one of these two variables should be in the model” or “it turns out the predictive benefit of this variable is too weak to warrant the cost of collecting it.” In practice, all three modes of variable selection may be exercised in the same data analysis: multiple runs of one or more formal search algorithms may be performed and compared, the parameters of the algorithms may be subjected to experimentation and the results may be critiqued with graphical diagnostics; a round of fine-tuning based on substantive deliberations may finalize the analysis.
Posed so starkly, the problems with statistical inference after variable selection may well seem insurmountable. At a minimum, one would expect technical solutions to be possible only when a formal selection algorithm is (1) well-specified, (1a) in advance and (1b) covering all eventualities, (2) strictly adhered to in the course of data analysis and (3) not “improved” on by informal and post hoc elements. It may, however, be unrealistic to expect this level of rigor in most data analysis contexts, with the exception of well-conducted clinical trials. The real challenge is therefore to devise statistical inference that is valid following any type of variable selection, be it formal, informal, post hoc or a combination thereof. Meeting this challenge with a relatively simple proposal is the goal of this article. This proposal for valid Post-Selection Inference, or “PoSI” for short, consists of a large-scale familywise error guarantee that can be shown to account for all types of variable selection, including those of the informal and post hoc varieties. On the other hand, the proposal is no more conservative than necessary to account for selection, and in particular it can be shown to be less conservative than Scheffé’s simultaneous inference.
The framework for our proposal is in outline as follows—details to be elaborated in subsequent sections: we consider linear regression with predictor variables whose values are considered fixed, and with a response variable that has normal and homoscedastic errors. The framework does not require that any of the eligible linear models is correct, not even the full model, as long as a valid error estimate is available. We assume that the selected model is the result of some procedure that makes use of the response, but the procedure does not need to be fully specified. A crucial aspect of the framework concerns the use and interpretation of the selected model: we assume that, after variable selection is completed, the selected predictor variables—and only they—will be relevant; all others will be eliminated from further consideration. This assumption, seemingly innocuous and natural, has critical consequences: it implies that statistical inference will be sought for the coefficients of the selected predictors only and in the context of the selected model only. Thus the appropriate targets of inference are the best linear coefficients within the selected model, where each coefficient is adjusted for the presence of all other included predictors but not those that were eliminated. Therefore the coefficient of an included predictor generally requires inference that is specific to the model in which it appears. Summarizing in a motto, a difference in adjustment implies a difference in parameters and hence in inference. The goal of the present proposal is therefore simultaneous inference for all coefficients within all submodels. Such inference can be shown to be valid following any variable selection procedure, be it formal, informal, post hoc, fully or only partly specified.
Problems associated with postselection inference were recognized long ago, for example, by Buehler and Feddersen (1963), Brown (1967), Olshen (1973), Sen (1979), Sen and Saleh (1987), Dijkstra and Veldkamp (1988), Pötscher (1991), Kabaila (1998). More recently specific problems have been the subject of incisive analyses and criticisms by the “Vienna School” of Pötscher, Leeb and Schneider; see, for example, Leeb and Pötscher (2003, 2005, 2006a, 2006b, 2008a, 2008b, 2008c), Pötscher (2006), Leeb (2006), Pötscher and Leeb (2009), Pötscher and Schneider (2009, 2010, 2011), as well as Kabaila and Leeb (2006) and Kabaila (2009). Important progress was made by Hjort and Claeskens (2003) and Claeskens and Hjort (2003).
This article proceeds as follows: in Section 2 we first develop the “submodel view” of the targets of inference after model selection and contrast it with the “full model view” (Section 2.1); we then introduce assumptions with a view toward valid inference in “wrong models” (Section 2.2). Section 3 is about estimation and its targets from the submodel point of view. Section 4 develops the methodology for PoSI confidence intervals (CIs) and tests. After some structural results for the PoSI problem in Section 5, we show in Section 6 that with increasing number of predictors $p$ the width of PoSI CIs can range between the asymptotic rates $\sqrt{2\log p}$ and $\sqrt{p}$. We give examples for both rates and, inspired by problems in sphere packing and covering, we give upper bounds for the limiting constant in the $\sqrt{p}$ case. We conclude with a discussion in Section 7. Some proofs are deferred to the Appendix, and some elaborations to the online Appendix in the supplementary material [Berk et al. (2013)].
Computations will be described in a separate article. Simulation-based methods yield satisfactory accuracy specific to a given design matrix for up to $p \approx 20$ predictors, while nonasymptotic universal upper bounds can be computed for larger $p$.
2 Targets of inference and assumptions.
It is a natural intuition that model selection distorts inference by distorting sampling distributions of parameter estimates: estimates in selected models should tend to generate more type I errors than conventional theory allows because the typical selection procedure favors models with strong, hence highly significant predictors. This intuition correctly points to a multiplicity problem that grows more severe as the number of predictors subject to selection increases. This is the problem we address in this article.
Model selection poses additional problems that are less obvious but no less fundamental: there exists an ambiguity as to the role and meaning of the parameters in submodels. In one view, the relevant parameters are always those of the full model, hence the selection of a submodel is interpreted as estimating the deselected parameters to be zero and estimating the selected parameters under a zero constraint on the deselected parameters. In another view, the submodel has its own parameters, and the deselected parameters are not zero but nonexistent. These distinctions are not academic as they imply fundamentally different ideas regarding the targets of inference, the measurement of statistical performance, and the problem of postselection inference. The two views derive from different purposes of equations:

Underlying the full model view of parameters is the use of a full equation to describe a “data generating” mechanism for the response; the equation hence has a causal interpretation.

Underlying the submodel view of parameters is the use of any equation to merely describe association between predictor and response variables; no data generating or causal claims are implied.
In this article we address the latter use of equations. Issues relating to the former use are discussed in the online Appendix of the supplementary material [Berk et al. (2013), Section B.1].
2.1 The submodel interpretation of parameters.
In what follows we elaborate three points that set the submodel interpretation of coefficients apart from the full model interpretation, with important consequences for the rest of this article:
The full model has no special status other than being the repository of available predictors.
The coefficients of excluded predictors are not zero; they are not defined and therefore do not exist.
The meaning of a predictor’s coefficient depends on which other predictors are included in the selected model.
(1) The full model available to the statistician often cannot be argued to have special status because of inability to identify and measure all relevant predictors. Additionally, even when a large and potentially complete suite of predictors can be measured, there is generally a question of predictor redundancy that may make it desirable to omit some of the measurable predictors from the final model. It is a common experience in the social sciences that models proposed on theoretical grounds are found on empirical grounds to have their predictors entangled by collinearities that permit little meaningful statistical inference. This situation is not limited to the social sciences: in gene expression studies it may well occur that numerous sites have a tendency to be expressed concurrently, hence as predictors in disease studies they will be strongly confounded. The emphasis on full models may be particularly strong in econometrics where there is a “notion that a longer regression …has a causal interpretation, while a shorter regression does not” [Angrist and Pischke (2009), page 59]. Even in causal models, however, there is a possibility that included adjuster variables will “adjust away” some of the causal variables of interest. Generally, in any creative observational study involving novel predictors, it will be difficult a priori to exclude collinearities that might force a rethinking of the predictors. In conclusion, whenever predictor redundancy is a potential issue, it cannot a priori be claimed that the full model provides the parameters of primary interest.
(2) In the submodel interpretation of parameters, claiming that the coefficients of deselected predictors are zero does not properly describe the role of predictors. Deselected predictors have no role in the submodel equation; they become no different than predictors that had never been considered. The selected submodel becomes the vehicle of substantive research irrespective of what the full model was. As such the submodel stands on its own. This view is especially appropriate if the statistician’s task is to determine which predictors are to be measured in the future.
(3) The submodel interpretation of parameters is deeply seated in how we teach regression. We explain that the meaning of a regression coefficient depends on which of the other predictors are included in the model: “the slope is the average difference in the response for a unit difference in the predictor, at fixed levels of all other predictors in the model.” This “ceteris paribus” clause is essential to the meaning of a slope. That there is a difference in meaning when there is a difference in covariates is most drastically evident when there is a case of Simpson’s paradox. For example, if purchase likelihood of a high-tech gadget is predicted from age, it might be found against expectations that younger people have lower purchase likelihood, whereas a regression on age and income might show that at fixed levels of income younger people have indeed higher purchase likelihood. This case of Simpson’s paradox would be enabled by the expected positive collinearity between age and income. Thus the marginal slope on age is distinct from the income-adjusted slope on age as the two slopes answer different questions, apart from having opposite signs. In summary, different models result in different parameters with different meanings.
Must we use the full model with both predictors? Not if income data is difficult to obtain or if it provides little improvement in $R^2$ beyond age. The model based on age alone cannot be said to be a priori “wrong.” If, for example, the predictor and response variables have a jointly multivariate normal distribution, then every linear submodel is “correct.” These considerations drive home, once again, that sometimes no model has special status.
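The sign reversal in the gadget example is easy to reproduce numerically. The following sketch uses invented coefficients and an invented age–income dependence (nothing here comes from real purchase data); with positive collinearity between age and income, the marginal age slope and the income-adjusted age slope come out with opposite signs:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
age = rng.uniform(20, 60, n)
income = age + rng.normal(0, 5, n)            # income rises with age (collinearity)
# hypothetical truth: at fixed income, younger customers purchase MORE
y = -0.5 * age + 1.0 * income + rng.normal(0, 2, n)

def fit(*cols):
    # least-squares fit with an intercept
    X = np.column_stack([np.ones(n), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0]

marginal = fit(age)            # y ~ 1 + age
adjusted = fit(age, income)    # y ~ 1 + age + income
print(f"marginal age slope {marginal[1]:+.2f}, "
      f"income-adjusted age slope {adjusted[1]:+.2f}")
```

The marginal slope is positive while the adjusted slope is negative: two different parameters answering two different questions.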
In summary, a range of applications call for a framework in which the full model is not the sole provider of parameters, where rather each submodel defines its own. The consequences of this view will be developed in Section 3.
2.2 Assumptions, models as approximations, and error estimates.
We state assumptions for estimation and for the construction of valid tests and CIs when fitting arbitrary linear equations. The main goal is to prepare the ground for valid statistical inference after model selection—not assuming that selected models are correct.
We consider a quantitative response vector $\boldsymbol Y \in \mathbb R^n$, assumed random, and a full predictor matrix $\boldsymbol X = (\boldsymbol X_1,\ldots,\boldsymbol X_p) \in \mathbb R^{n\times p}$, assumed fixed. We allow $\boldsymbol X$ to be of nonfull rank, and $n$ and $p$ to be arbitrary. In particular, we allow $p > n$. Throughout the article we let
(1) $d := \operatorname{rank}(\boldsymbol X) \ (\le \min(n,p)).$

Due to frequent reference we call $n > p = d$ “the classical case.”
It is common practice to assume the full model to be correct. In the present framework, however, first-order correctness, $\boldsymbol\mu = \boldsymbol X\boldsymbol\beta$ for some $\boldsymbol\beta$, will not be assumed. By implication, first-order correctness of any submodel will not be assumed either. Effectively,

(2) $\boldsymbol\mu := \mathrm E[\boldsymbol Y] \in \mathbb R^n$

is allowed to be unconstrained and, in particular, need not reside in the column space of $\boldsymbol X$. That is, the model given by $\boldsymbol X$ is allowed to be “first-order wrong,” and hence we are, in a well-defined sense, serious about G.E.P. Box’s famous quote. What he calls “wrong models,” we prefer to call “approximations”: all predictor matrices provide approximations to $\boldsymbol\mu$, some better than others, but the degree of approximation plays no role in the clarification of statistical inference. The main reason for elaborating this point is as follows: after model selection, the case for “correct models” is clearly questionable, even for “consistent model selection procedures” [Leeb and Pötscher (2003), page 101]; but if correctness of submodels is not assumed, it is only natural to abandon this assumption for the full model also, in line with the idea that the full model has no special status. As we proceed with estimation and inference guarantees in the absence of first-order correctness we will rely on assumptions as follows:

For estimation (Section 3), we will only need the existence of $\boldsymbol\mu = \mathrm E[\boldsymbol Y]$.

For testing and CI guarantees (Section 4), we will make conventional second-order and distributional assumptions,

(3) $\boldsymbol Y \sim \mathcal N_n(\boldsymbol\mu,\ \sigma^2 \boldsymbol I_n).$
The assumptions (3) of homoscedasticity and normality are as questionable as firstorder correctness, and we will report elsewhere on approaches that avoid them. For now we follow the vast model selection literature that relies on the technical advantages of assuming homoscedastic and normal errors.
Accepting the assumption (3), we address the issue of estimating the error variance $\sigma^2$, because the valid tests and CIs we construct require a valid estimate $\hat\sigma^2$ of $\sigma^2$ that is independent of LS estimates. In the classical case, the most common way to assert such an estimate is to assume that the full model is first-order correct, in addition to (3), in which case the mean squared residual (MSR) of the full model will do. However, other possibilities for producing a valid estimate $\hat\sigma^2$ exist, and they may allow relaxing the assumption of first-order correctness:

Exact replications of the response obtained under identical conditions might be available in sufficient numbers. An estimate $\hat\sigma^2$ can be obtained as the MSR of the one-way ANOVA of the groups of replicates.

In general, a larger linear model than the full model might be considered as correct; hence $\hat\sigma^2$ could be the MSR from this larger model.

A different possibility is to use another dataset, similar to the one currently being analyzed, to produce an independent estimate $\hat\sigma^2$ by whatever valid estimation method.

A special case of the preceding is a random split-sample approach whereby one part of the data is reserved for producing $\hat\sigma^2$ and the other part for estimating coefficients, selecting models and carrying out post-model selection inference.

A different type of estimate, $\hat\sigma^2$, may be based on considerations borrowed from nonparametric function estimation [Hall and Carroll (1989)].
The purpose of pointing out these possibilities is to separate, at least in principle, the issue of first-order model incorrectness from the issue of error estimation under assumption (3). This separation puts the $p > n$ case within our framework, as the valid and independent estimation of $\sigma^2$ is a problem faced by all “$p > n$” approaches.
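As an illustration of the split-sample option above, the following sketch (on simulated data, with an arbitrary half-half split) reserves one half of the sample for $\hat\sigma^2$, so that it is independent of anything estimated, selected or tested on the other half:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 5
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(size=n)   # true sigma^2 = 1

# random split: one half yields sigma-hat^2, the other half is used for
# estimation/selection/inference, so the two are independent
idx = rng.permutation(n)
hold, work = idx[:n // 2], idx[n // 2:]

Xh, yh = X[hold], y[hold]
resid = yh - Xh @ np.linalg.lstsq(Xh, yh, rcond=None)[0]
df = len(hold) - p                     # r degrees of freedom for sigma-hat^2
sigma2_hat = resid @ resid / df        # MSR on the held-out half
print(f"sigma2_hat = {sigma2_hat:.3f}, df = {df}")
```

Selection and inference would then proceed on `X[work], y[work]` only, using this single $\hat\sigma^2$ for every submodel.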
3 Estimation and its targets in submodels.
Following Section 2.1, the value and meaning of a regression coefficient depends on what the other predictors in the model are. An exception occurs, of course, when the predictors are perfectly orthogonal, as in some designed experiments or in function fitting with orthogonal basis functions. In this case a coefficient has the same value and meaning across all submodels. This article is hence a story of (partial) collinearity.
3.1 Multiplicity of regression coefficients.
We will give meaning to LS estimators and their targets in the absence of any assumptions other than the existence of $\boldsymbol\mu = \mathrm E[\boldsymbol Y]$, which in turn is permitted to be entirely unconstrained in $\mathbb R^n$. Besides resolving the issue of estimation in “first-order wrong models,” the major purpose here is to elaborate the idea that the slope of a predictor generates different parameters in different submodels. As each predictor appears in $2^{p-1}$ submodels, the $p$ regression coefficients of the full model generally proliferate into a plethora of as many as $p\,2^{p-1}$ distinct regression coefficients according to the submodels they appear in. To describe the situation we start with notation.
To denote a submodel we use the (nonempty) index set $M \subseteq M_F = \{1,\ldots,p\}$ of the predictors in the submodel; the size of the submodel is $m = |M|$ and that of the full model is $p = |M_F|$. Let $\boldsymbol X_M$ denote the $n\times m$ submatrix of $\boldsymbol X$ with columns indexed by $M$. We will only allow submodels for which $\boldsymbol X_M$ is of full rank, $\operatorname{rank}(\boldsymbol X_M) = m$.
We let $\hat{\boldsymbol\beta}_M$ be the unique least squares estimate in $M$,

(4) $\hat{\boldsymbol\beta}_M := (\boldsymbol X_M^{\top}\boldsymbol X_M)^{-1}\boldsymbol X_M^{\top}\boldsymbol Y.$
Now that $\hat{\boldsymbol\beta}_M$ is an estimate, what is it estimating? Following Section 2.1, we will not interpret $\hat{\boldsymbol\beta}_M$ as an estimate of the coefficients of the full model and, more generally, of any model other than $M$. Thus it is natural to ask that $\hat{\boldsymbol\beta}_M$ define its own target through the requirement of unbiasedness,

(5) $\boldsymbol\beta_M := \mathrm E[\hat{\boldsymbol\beta}_M] = (\boldsymbol X_M^{\top}\boldsymbol X_M)^{-1}\boldsymbol X_M^{\top}\boldsymbol\mu.$
This definition requires no other assumption than the existence of $\boldsymbol\mu = \mathrm E[\boldsymbol Y]$. In particular there is no need to assume first-order correctness of $M$ or of the full model. Nor does it matter to what degree $\boldsymbol X_M\boldsymbol\beta_M$ provides a good approximation to $\boldsymbol\mu$ in terms of the approximation error $\|\boldsymbol\mu - \boldsymbol X_M\boldsymbol\beta_M\|$.
In the classical case $d = p \le n$, we can define the target $\boldsymbol\beta$ of the full-model estimate $\hat{\boldsymbol\beta}$ as a special case of (5) with $M = M_F$,

(6) $\boldsymbol\beta := \boldsymbol\beta_{M_F} = (\boldsymbol X^{\top}\boldsymbol X)^{-1}\boldsymbol X^{\top}\boldsymbol\mu.$
In the general (including the nonclassical) case, let $\boldsymbol\beta$ be any (possibly nonunique) minimizer of $\|\boldsymbol\mu - \boldsymbol X\boldsymbol\beta\|^2$; the link between $\boldsymbol\beta_M$ and $\boldsymbol\beta$ is as follows:

(7) $\boldsymbol\beta_M = (\boldsymbol X_M^{\top}\boldsymbol X_M)^{-1}\boldsymbol X_M^{\top}\boldsymbol X\boldsymbol\beta.$

Thus the target $\boldsymbol\beta_M$ is an estimable linear function of $\boldsymbol\beta$, without first-order correctness assumptions. Equation (7) follows from $\boldsymbol X\boldsymbol\beta = \boldsymbol P\boldsymbol\mu$ and $\boldsymbol X_M^{\top}\boldsymbol P = \boldsymbol X_M^{\top}$, where $\boldsymbol P$ denotes the orthogonal projection onto the column space of $\boldsymbol X$.
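The unbiasedness target (5) and the link (7) can be checked numerically. In this sketch (simulated data) the mean vector $\boldsymbol\mu$ is deliberately chosen outside the column space of $\boldsymbol X$, so the model is “first-order wrong,” and the two expressions for $\boldsymbol\beta_M$ nevertheless agree:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 50, 3
X = rng.normal(size=(n, p))
mu = np.sin(3 * X[:, 0]) + X[:, 1] ** 2        # mu is NOT of the form X @ beta

M = [0, 2]
XM = X[:, M]

# target defined by unbiasedness, eq. (5): beta_M = (X_M'X_M)^{-1} X_M' mu
beta_M = np.linalg.solve(XM.T @ XM, XM.T @ mu)

# link (7): the same target through a full-model LS coefficient of mu
beta_full = np.linalg.lstsq(X, mu, rcond=None)[0]
beta_M_via_full = np.linalg.solve(XM.T @ XM, XM.T @ (X @ beta_full))

print(np.allclose(beta_M, beta_M_via_full))    # True
```

The agreement reflects exactly the algebra below (7): $\boldsymbol X\boldsymbol\beta$ is the projection of $\boldsymbol\mu$, and the columns of $\boldsymbol X_M$ lie in the projection space.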
Notation: to distinguish the regression coefficients of the predictor $\boldsymbol X_j$ relative to the submodel $M$ it appears in, we write $\beta_{j\cdot M}$ for the components of $\boldsymbol\beta_M$ with $j \in M$. An important convention is that indexes $j$ are always elements of the full model, $j \in M_F$, for what we call “full model indexing.”
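The proliferation of targets can be made concrete with a toy design (simulated, $p = 4$, with some collinearity built in): the first predictor appears in $2^{p-1} = 8$ submodels, and its fitted coefficient generally differs in every one of them:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n, p = 100, 4
X = rng.normal(size=(n, p))
X[:, 1] += 0.8 * X[:, 0]                   # induce (partial) collinearity
y = rng.normal(size=n)

j = 0                                      # track the first predictor
coefs = {}
for m in range(1, p + 1):
    for M in combinations(range(p), m):
        if j in M:
            b = np.linalg.lstsq(X[:, list(M)], y, rcond=None)[0]
            coefs[M] = b[M.index(j)]       # beta-hat_{j.M}

print(len(coefs))                          # 2^(p-1) = 8 submodels contain X_1
print(len({round(v, 8) for v in coefs.values()}))  # generally all distinct
```

Under exact orthogonality all eight values would coincide; under collinearity each submodel carries its own parameter.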
3.2 Interpreting regression coefficients in first-order incorrect models.
The regression coefficient $\beta_{j\cdot M}$ is conventionally interpreted as the “average difference in the response for a unit difference in $\boldsymbol X_j$, ceteris paribus in the model $M$.” This interpretation no longer holds when the assumption of first-order correctness is given up. Instead, the phrase “average difference in the response” should be replaced with the unwieldy phrase “average difference in the response approximated in the submodel $M$.” The reason is that the target of the fit in the submodel $M$ is $\boldsymbol X_M\boldsymbol\beta_M = \boldsymbol P_M\boldsymbol\mu$, where $\boldsymbol P_M$ is the projection onto the column space of $\boldsymbol X_M$; hence in $M$ we estimate unbiasedly not the true $\boldsymbol\mu$ but its LS approximation $\boldsymbol P_M\boldsymbol\mu$.
A second interpretation of regression coefficients is in terms of adjusted predictors: for $j \in M$ define the adjusted predictor $\boldsymbol X_{j\cdot M}$ as the residual vector of the regression of $\boldsymbol X_j$ on all other predictors in $M$. Multiple regression coefficients, both estimates $\hat\beta_{j\cdot M}$ and parameters $\beta_{j\cdot M}$, can be expressed as simple regression coefficients with regard to the adjusted predictors,

(8) $\hat\beta_{j\cdot M} = \frac{\langle\boldsymbol X_{j\cdot M},\boldsymbol Y\rangle}{\|\boldsymbol X_{j\cdot M}\|^2}, \qquad \beta_{j\cdot M} = \frac{\langle\boldsymbol X_{j\cdot M},\boldsymbol\mu\rangle}{\|\boldsymbol X_{j\cdot M}\|^2}.$

The left-hand formula lends itself to an interpretation of $\hat\beta_{j\cdot M}$ in terms of the well-known leverage plot, which shows $\boldsymbol Y$ plotted against $\boldsymbol X_{j\cdot M}$ and the line through the origin with slope $\hat\beta_{j\cdot M}$. This plot is valid without first-order correctness assumption.
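The left-hand formula in (8) is the familiar Frisch–Waugh device and is easy to verify on simulated data: the coefficient of $\boldsymbol X_j$ in the multiple regression equals the simple-regression slope of $\boldsymbol Y$ on the adjusted predictor $\boldsymbol X_{j\cdot M}$:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 80
X = rng.normal(size=(n, 3))                   # submodel M = all three predictors
y = rng.normal(size=n)

j, others = 1, [0, 2]
# X_{j.M}: residual of X_j after regressing it on the other predictors in M
coef = np.linalg.lstsq(X[:, others], X[:, j], rcond=None)[0]
x_adj = X[:, j] - X[:, others] @ coef

simple = (x_adj @ y) / (x_adj @ x_adj)        # eq. (8), left-hand formula
multiple = np.linalg.lstsq(X, y, rcond=None)[0][j]
print(np.isclose(simple, multiple))           # True
```

The same computation with $\boldsymbol\mu$ in place of $\boldsymbol Y$ yields the parameter $\beta_{j\cdot M}$, the right-hand formula in (8).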
A third interpretation can be derived from the second: to unclutter notation let $\boldsymbol x = (x_i)_{i=1,\ldots,n}$ be any adjusted predictor $\boldsymbol X_{j\cdot M}$, so that $\hat\beta$ and $\beta$ are the corresponding $\hat\beta_{j\cdot M}$ and $\beta_{j\cdot M}$. Introduce (1) casewise slopes through the origin, both as estimates $\hat b_i = Y_i/x_i$ and as parameters $b_i = \mu_i/x_i$, and (2) casewise weights $w_i = x_i^2/\sum_{i'} x_{i'}^2$. Equations (8) are then equivalent to the following:

$\hat\beta = \sum_i w_i \hat b_i, \qquad \beta = \sum_i w_i b_i.$

Hence regression coefficients are weighted averages of casewise slopes, and this interpretation holds without first-order assumptions.
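The weighted-average identity can likewise be checked directly on simulated data; here $\boldsymbol x$ stands for an arbitrary adjusted predictor and $\boldsymbol Y$ for the response (the same algebra applies verbatim with $\boldsymbol\mu$):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 60
x = rng.normal(size=n)             # plays the role of an adjusted predictor
y = rng.normal(size=n)             # plays the role of Y (or of mu)

slopes = y / x                     # casewise slopes through the origin
weights = x ** 2 / np.sum(x ** 2)  # casewise weights, summing to 1

beta_hat = (x @ y) / (x @ x)       # simple-regression coefficient, eq. (8)
print(np.isclose(beta_hat, np.sum(weights * slopes)))  # True
```

The identity is pure algebra: $\sum_i w_i (y_i/x_i) = \sum_i x_i y_i / \sum_{i'} x_{i'}^2$.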
4 Universally valid post-selection confidence intervals.
4.1 Test statistics with one error estimate for all submodels.
We consider inference for $\hat\beta_{j\cdot M}$ and its target $\beta_{j\cdot M}$. Following Section 2.2 we require a normal homoscedastic model for $\boldsymbol Y$, but we leave its mean entirely unspecified: $\boldsymbol Y \sim \mathcal N_n(\boldsymbol\mu, \sigma^2\boldsymbol I)$ with arbitrary $\boldsymbol\mu \in \mathbb R^n$. We then have equivalently

$\hat\beta_{j\cdot M} \sim \mathcal N\bigl(\beta_{j\cdot M},\ \sigma^2/\|\boldsymbol X_{j\cdot M}\|^2\bigr).$

Again following Section 2.2 we assume the availability of a valid estimate $\hat\sigma^2$ of $\sigma^2$ that is independent of all estimates $\hat\beta_{j\cdot M}$, and we further assume $\hat\sigma^2 \sim \sigma^2\chi^2_r/r$ for $r$ degrees of freedom. If the full model is assumed to be correct, and $\hat\sigma^2$ is its MSR, then $r = n - d$. In the limit $r \to \infty$ we obtain $\hat\sigma = \sigma$, the case of known $\sigma$, which will be used starting with Section 6.
Let $t_{j\cdot M}$ denote a $t$-ratio for $\beta_{j\cdot M}$ that uses $\hat\sigma$ irrespective of $M$,

(9) $t_{j\cdot M} := \frac{\hat\beta_{j\cdot M} - \beta_{j\cdot M}}{\hat\sigma\,[(\boldsymbol X_M^{\top}\boldsymbol X_M)^{-1}]_{jj}^{1/2}},$

where $[\cdot]_{jj}$ refers to the diagonal element corresponding to $j \in M$. The quantity $t_{j\cdot M}$ has a central $t$-distribution with $r$ degrees of freedom. Essential is that the standard error estimate in the denominator of (9) does not involve the MSR from the submodel $M$, for two reasons:

We do not assume that the submodel $M$ is first-order correct; hence the submodel MSR $\hat\sigma^2_M$ would, in general, have a distribution that is a multiple of a noncentral $\chi^2$ distribution with unknown noncentrality parameter.

Moreover, the submodel MSR $\hat\sigma^2_M$ varies from submodel to submodel, which would stand in the way of the simultaneous inference across submodels developed below.
These problems are avoided by using one valid estimate $\hat\sigma^2$ that is independent of all submodels.
With this choice of $t$-ratio, confidence intervals for $\beta_{j\cdot M}$ take the form

(10) $\mathrm{CI}_{j\cdot M}(K) := \bigl[\hat\beta_{j\cdot M} \pm K\,\hat\sigma\,[(\boldsymbol X_M^{\top}\boldsymbol X_M)^{-1}]_{jj}^{1/2}\bigr].$

If $K$ is the $1-\alpha/2$ quantile of a $t$-distribution with $r$ degrees of freedom, then the interval is marginally valid with a coverage guarantee

$\mathrm P\bigl[\beta_{j\cdot M} \in \mathrm{CI}_{j\cdot M}(K)\bigr] \ge 1-\alpha.$

This holds if the submodel $M$ is not the result of variable selection.
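A minimal sketch of the marginal interval (10) on simulated data, with the submodel fixed a priori (no selection) and $\hat\sigma^2$ taken here from the full-model MSR, so $r = n - p$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n, p = 100, 4
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, 0.0, -0.5, 0.0]) + rng.normal(size=n)

M, j_pos = [0, 2], 0                   # submodel fixed a priori; j = first in M
XM = X[:, M]
bM = np.linalg.solve(XM.T @ XM, XM.T @ y)

# one error estimate for all submodels: the full-model MSR with r = n - p df
resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
r = n - p
sigma_hat = np.sqrt(resid @ resid / r)

se = sigma_hat * np.sqrt(np.linalg.inv(XM.T @ XM)[j_pos, j_pos])
K = stats.t.ppf(0.975, df=r)           # marginal t quantile, alpha = 0.05
ci = (bM[j_pos] - K * se, bM[j_pos] + K * se)
print(f"95% CI for beta_(j.M): [{ci[0]:.3f}, {ci[1]:.3f}]")
```

The next subsection explains why this construction breaks down as soon as $M$ is chosen using $\boldsymbol Y$.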
4.2 Model selection and its implications for parameters.
In practice, the model tends to be the result of some form of model selection that makes use of the stochastic component of the data, which is the response vector $\boldsymbol Y$ ($\boldsymbol X$ being fixed, Section 2.2). This model should therefore be expressed as $\hat M = \hat M(\boldsymbol Y)$. In general we allow a variable selection procedure to be any (measurable) map

(11) $\hat M:\ \mathbb R^n \to \mathcal M_{\mathrm{all}}, \qquad \boldsymbol Y \mapsto \hat M(\boldsymbol Y),$

where $\mathcal M_{\mathrm{all}}$ is the set of all full-rank submodels,

(12) $\mathcal M_{\mathrm{all}} := \bigl\{M : M \subseteq \{1,\ldots,p\},\ \operatorname{rank}(\boldsymbol X_M) = |M|\bigr\}.$

Thus the procedure $\hat M(\cdot)$ is a discrete map that divides $\mathbb R^n$ into as many as $|\mathcal M_{\mathrm{all}}|$ different regions with shared outcome of model selection.
Data dependence of the selected model has strong consequences:

Most fundamentally, the selected model is now random. Whether the model has been selected by an algorithm or by human choice, if the response has been involved in the selection, the resulting model is a random object because it could have been different for a different realization of the random vector $\boldsymbol Y$.

Associated with the random model $\hat M$ is the parameter vector of its coefficients, $\boldsymbol\beta_{\hat M} = (\beta_{j\cdot\hat M})_{j\in\hat M}$, which is now randomly chosen also:

It has a random dimension $\hat m = |\hat M|$: $\boldsymbol\beta_{\hat M} \in \mathbb R^{\hat m}$.

For any fixed $j$, it may or may not be the case that $j \in \hat M$.

Conditional on $j \in \hat M$, the parameter $\beta_{j\cdot\hat M}$ changes randomly as the adjuster covariates in $\hat M$ vary randomly.

Thus the set of parameters for which inference is sought is random also.
4.3 Post-selection coverage guarantees for confidence intervals.
With randomness of the selected model and its parameters in mind, what is a desirable form of post-selection coverage guarantee for confidence intervals? A natural requirement would be a confidence guarantee for the coefficients of the predictors that are selected into the model,

(13) $\mathrm P\bigl[\beta_{j\cdot\hat M} \in \mathrm{CI}_{j\cdot\hat M}(K)\ \ \forall j \in \hat M\bigr] \ge 1-\alpha.$
Several points should be noted:

The guarantee is familywise for all selected predictors $j \in \hat M$, though the sense of “familywise” is unusual because the family $\hat M$ is random.

The guarantee has nothing to say about predictors that have been deselected, regardless of the substantive interest they might have. Predictors of overarching interest should be protected from variable selection, and for these one can use a modification of the PoSI approach which we call “PoSI1;” see Section 4.10.

Because predictor selection is random, $\hat M = \hat M(\boldsymbol Y)$, two realized samples $\boldsymbol Y$ and $\boldsymbol Y'$ may result in different sets of selected predictors, $\hat M(\boldsymbol Y) \neq \hat M(\boldsymbol Y')$. It would be a fundamental misunderstanding to wonder whether the guarantee holds for both realizations. Instead, the guarantee (13) is about the procedure $\hat M(\cdot)$ in the long run of independent realizations of $\boldsymbol Y$ (by the LLN), and not about any particular realizations. A standard formulation used to navigate these complexities after a realization of $\boldsymbol Y$ has been analyzed is the following: “for $j \in \hat M$ we have $1-\alpha$ confidence that the interval $\mathrm{CI}_{j\cdot\hat M}(K)$ contains $\beta_{j\cdot\hat M}$.”

Marginal guarantees for individual predictors require some care because $\beta_{j\cdot\hat M}$ does not exist for $j \notin \hat M$. This makes “$\beta_{j\cdot\hat M} \in \mathrm{CI}_{j\cdot\hat M}(K)$” an incoherent statement that does not define an event. Guarantees are possible if the condition $j \in \hat M$ is added with a conjunction or is being conditioned on: the marginal and conditional probabilities

$\mathrm P\bigl[\beta_{j\cdot\hat M} \in \mathrm{CI}_{j\cdot\hat M}(K)\ \mbox{and}\ j \in \hat M\bigr] \quad\mbox{and}\quad \mathrm P\bigl[\beta_{j\cdot\hat M} \in \mathrm{CI}_{j\cdot\hat M}(K)\,\big|\,j \in \hat M\bigr],$

respectively, are both well-defined and can be the subject of coverage guarantees; see the online Appendix of the supplementary material [Berk et al. (2013), Section B.4].
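A small simulation makes concrete how badly the marginal constant of Section 4.1 fails once selection is allowed. Under a global null ($\boldsymbol\mu = 0$, $\sigma = 1$ known, predictors orthonormalized for simplicity), selecting the single most significant predictor and then using the naive 95% interval gives non-coverage near $1-0.95^{p}$ rather than 5%:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, p, reps, alpha = 50, 10, 2000, 0.05
X = rng.normal(size=(n, p))
Q, _ = np.linalg.qr(X)                 # orthonormalized columns for simplicity
K = stats.norm.ppf(1 - alpha / 2)      # marginal constant, sigma known

miss = 0
for _ in range(reps):
    y = rng.normal(size=n)             # mu = 0: every target beta_{j.M} is 0
    t = Q.T @ y                        # per-predictor z-statistics (||Q_j|| = 1)
    j = np.argmax(np.abs(t))           # select the most significant predictor
    # the naive interval t[j] +/- K fails to cover 0 iff |t[j]| > K
    miss += abs(t[j]) > K

rate = miss / reps
print(f"non-coverage of naive 95% CI after selection: {rate:.3f}")
```

With $p = 10$ the expected non-coverage is $1-0.95^{10} \approx 0.40$, eight times the nominal error rate.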
Finally, we note that the smallest constant $K$ that satisfies the guarantee (13) is specific to the procedure $\hat M(\cdot)$. Thus different variable selection procedures would require different constants. Finding procedure-specific constants is a challenge that will be intentionally bypassed by the present proposals.
4.4 Universal validity for all selection procedures.
The “PoSI” procedure proposed here produces a constant $K_{\mathrm{PoSI}}$ that provides universally valid post-selection inference for all model selection procedures $\hat M(\cdot)$,

(14) $\mathrm P\bigl[\beta_{j\cdot\hat M} \in \mathrm{CI}_{j\cdot\hat M}(K_{\mathrm{PoSI}})\ \ \forall j \in \hat M\bigr] \ge 1-\alpha \qquad \mbox{for all } \hat M(\cdot).$
Universal validity irrespective of the model selection procedure is a strong property that raises questions of whether the approach is too conservative. There are, however, some arguments in its favor:
(1) Universal validity may be desirable or even essential for applications in which the model selection procedure is not specified in advance or for which the analysis involves some ad hoc elements that cannot be accurately prespecified. Even so, we should think of the actually chosen model as part of a “procedure” $\hat M(\cdot)$, and though the ad hoc steps are not specified for responses $\boldsymbol Y$ other than the observed one, this is not a problem because our protection holds irrespective of what a specification might have been. This view also allows data analysts to change their minds, to improvise and informally decide in favor of a model other than that produced by a formal selection procedure, or to experiment with multiple selection procedures.
(2) There exists a model selection procedure that requires the full strength of universally valid PoSI, and this procedure may not be entirely unrealistic as an approximation to some types of data analytic activities: “significance hunting,” that is, selecting that model which contains the statistically most significant coefficient; see Section 4.9.
(3) There is a general question about the wisdom of proposing ever tighter confidence and retention intervals for practical use when in fact these intervals are valid only under tightly controlled conditions. It might be realistic to suppose that much applied work involves more data peeking than is reported in published articles. With inference that is universally valid after any model selection procedure, we have a way to establish which rejections are safe, irrespective of unreported data peeking as part of selecting a model.
(4) Related to the previous point is the fact that today there is a realization that a considerable fraction of published empirical work is unreproducible or reports exaggerated effects; well known in this regard is Ioannidis (2005). A factor contributing to this problem might well be liberal handling of variable selection and absent accounting for it in subsequent inference.
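The “significance hunting” procedure of point (2) is protected exactly when $K$ is the $1-\alpha$ quantile of $\max_{M, j\in M} |t_{j\cdot M}|$. For a small design this quantile, i.e., the PoSI constant for that particular $\boldsymbol X$, can be approximated by Monte Carlo. The sketch below assumes $\sigma$ known, so each $t$-ratio reduces to an inner product of a unit-norm adjusted predictor with a standard normal vector:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(8)
n, p, reps, alpha = 30, 4, 2000, 0.05
X = rng.normal(size=(n, p))

# unit-norm adjusted predictors X_{j.M} for every pair (j, M) with j in M
rows = []
for m in range(1, p + 1):
    for M in combinations(range(p), m):
        for j in M:
            others = [k for k in M if k != j]
            xj = X[:, j]
            if others:
                Xo = X[:, others]
                xj = xj - Xo @ np.linalg.lstsq(Xo, xj, rcond=None)[0]
            rows.append(xj / np.linalg.norm(xj))
U = np.array(rows)                 # p * 2^(p-1) = 32 unit vectors

# with sigma known, t_{j.M} = <u, Z> for Z ~ N(0, I_n); K_PoSI is the
# (1 - alpha) quantile of the max over all (j, M) of |t_{j.M}|
Z = rng.normal(size=(n, reps))
K_posi = np.quantile(np.abs(U @ Z).max(axis=0), 1 - alpha)
print(f"Monte Carlo K_PoSI for this X: {K_posi:.2f} (single-test z: 1.96)")
```

The resulting constant sits strictly between the marginal quantile 1.96 and the Scheffé bound $\sqrt{\chi^2_{d,1-\alpha}} \approx 3.08$ for $d = 4$, illustrating the claim that PoSI is less conservative than full Scheffé protection.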
4.5 Restricted model selection.
The concerns over PoSI's conservative nature can be alleviated somewhat by introducing a degree of flexibility into the PoSI problem with regard to the universe of models being searched. Such flexibility is additionally called for from a practical point of view because it is not true that all submodels in (12) are always being searched. Rather, the search is often limited in a way that can be specified a priori, without involvement of the response $\mathbf Y$. For example, a predictor of interest may be forced into the submodels of interest, or there may be a restriction on the size of the submodels. Indeed, if $p$ is large, a restriction to a manageable set of submodels is a computational necessity. In much of what follows we can allow the universe of allowable submodels to be an (almost) arbitrary but prespecified nonempty subset $\mathcal M$ of the set $\mathcal M_{\mathrm{full}}$ of all submodels; w.l.o.g. we can assume that every predictor occurs in at least one submodel of $\mathcal M$. Because we allow only nonsingular submodels [see (12)] we have $|\mathrm M| \le d$ for all $\mathrm M \in \mathcal M$, where as always $d = \mathrm{rank}(\mathbf X)$. Selection procedures are now maps
(15) $\hat{\mathrm M} : \mathbb R^n \to \mathcal M, \qquad \mathbf Y \mapsto \hat{\mathrm M}(\mathbf Y).$
The following are examples of model universes with practical relevance; see also Leeb and Pötscher (2008a), Section 1.1, Example 1.
(1) Submodels that contain the first $q$ predictors ($1 \le q \le p$): $\mathcal M = \{\mathrm M : \{1,\dots,q\} \subseteq \mathrm M\}$. Classical case: $|\mathcal M| = 2^{p-q}$. Example: forcing an intercept into all models.
(2) Submodels of size $m$ or less ("sparsity option"): $\mathcal M = \{\mathrm M : |\mathrm M| \le m\}$. Classical case: $|\mathcal M| = \sum_{1\le k\le m}\binom{p}{k}$.
(3) Submodels with fewer than $m$ predictors dropped from the full model: $\mathcal M = \{\mathrm M : |\mathrm M| > p-m\}$. Classical case: $|\mathcal M| = \sum_{0\le k<m}\binom{p}{k}$.
(4) Nested models: $\mathcal M = \{\{1\},\{1,2\},\dots,\{1,\dots,p\}\}$; $|\mathcal M| = p$. Example: selecting the degree up to $p$ in a polynomial regression.
(5) Models dictated by an ANOVA hierarchy of main effects and interactions in a factorial design.
This list is just an indication of possibilities. In general, the smaller the set $\mathcal M$ is, the less conservative the PoSI approach is, and the more computationally manageable the problem becomes. With sufficiently strong restrictions, in particular using the sparsity option (2) and assuming the availability of an independent valid estimate $\hat\sigma^2$ of $\sigma^2$, it is possible to apply PoSI in certain nonclassical situations with $p > n$.
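To make the restricted universes concrete, the following enumeration sketch (ours, not from the paper; the values of `p`, `q` and the size bound are arbitrary choices) counts a few of the universes listed above:

```python
# Sketch (ours): enumerating some restricted model universes over p
# predictors, following the numbering (1)-(4) of the examples above.
from itertools import combinations

p = 5
predictors = range(1, p + 1)

# All nonempty submodels: the unrestricted universe.
M_full = [set(m) for k in predictors for m in combinations(predictors, k)]

# (1) Submodels forced to contain the first q predictors (e.g., an intercept).
q = 1
M1 = [m for m in M_full if set(range(1, q + 1)) <= m]

# (2) Sparsity option: submodels of size m or less.
m_max = 2
M2 = [m for m in M_full if len(m) <= m_max]

# (4) Nested models {1}, {1,2}, ..., {1,...,p}.
M4 = [set(range(1, k + 1)) for k in predictors]

print(len(M_full), len(M1), len(M2), len(M4))  # 31 16 15 5
```

Even for $p=5$ the step from 31 submodels down to 5 nested ones illustrates how strongly a prespecified universe can shrink the simultaneity burden.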
Further reduction of the PoSI problem is possible by prescreening the adjusted predictors $\mathbf X_{j\cdot\mathrm M}$ without the response $\mathbf Y$. In a fixed-design regression, any variable selection procedure that does not involve $\mathbf Y$ does not invalidate statistical inference. For example, one may decide not to seek inference for predictors in submodels that impart a "Variance Inflation Factor" (VIF) above a user-chosen threshold: $\mathrm{VIF}_{j\cdot\mathrm M} = \|\mathbf X_j\|^2/\|\mathbf X_{j\cdot\mathrm M}\|^2$ if $\mathbf X_j$ is centered; hence VIF does not make use of $\mathbf Y$, and elimination according to VIF does not invalidate inference.
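As an illustration of such $\mathbf Y$-free prescreening, here is a small sketch (ours; the design, the threshold of 10 and the helper name `vif` are invented for the example) that drops predictor-submodel pairs with excessive VIF before any PoSI computation:

```python
# Sketch (ours): VIF-based prescreening that never touches Y.
# VIF_{j.M} = ||X_j||^2 / ||X_{j.M}||^2 for centered columns, where
# X_{j.M} is the residual of X_j regressed on the other columns of X_M.
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3
X = rng.standard_normal((n, p))
X[:, 2] = X[:, 0] + 0.05 * rng.standard_normal(n)  # near-collinear pair
X -= X.mean(axis=0)                                # center the columns

def vif(X, j, M):
    """VIF of predictor j within submodel M (a list of column indices)."""
    others = [k for k in M if k != j]
    if not others:
        return 1.0
    Z = X[:, others]
    resid = X[:, j] - Z @ np.linalg.lstsq(Z, X[:, j], rcond=None)[0]
    return float(X[:, j] @ X[:, j] / (resid @ resid))

# Keep only the (j, M) pairs whose VIF stays below a user-chosen threshold.
threshold = 10.0
M = [0, 1, 2]
kept = [j for j in M if vif(X, j, M) <= threshold]
print(kept)  # [1]: the two nearly collinear predictors are screened out
```

Because only the columns of $\mathbf X$ enter the computation, this screening can be folded into the definition of the universe $\mathcal M$ without affecting the validity of the subsequent inference.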
4.6 Reduction of universally valid postselection inference to simultaneous inference.
We show that universally valid postselection inference (14) follows from simultaneous inference in the form of familywise error control for all parameters in all submodels. The argument depends on the following lemma that may fall into the category of the “trivial but not immediately obvious.”
Lemma 4.1 ("Significant triviality bound")
For any model selection procedure $\hat{\mathrm M}$, the following inequality holds for all $\mathbf Y$:
$\max_{j\in\hat{\mathrm M}(\mathbf Y)} |t_{j\cdot\hat{\mathrm M}(\mathbf Y)}| \ \le\ \max_{\mathrm M\in\mathcal M}\ \max_{j\in\mathrm M}\ |t_{j\cdot\mathrm M}|.$
This is a special case of the triviality $f(\hat{\mathrm M}(\mathbf Y)) \le \max_{\mathrm M\in\mathcal M} f(\mathrm M)$, where $f(\mathrm M) = \max_{j\in\mathrm M} |t_{j\cdot\mathrm M}|$.
The right-hand max bound of the lemma is sharp in the sense that there exists a variable selection procedure that attains the bound; see Section 4.9. Next we introduce the $1-\alpha$ quantile of the right-hand max statistic of the lemma: let $K = K(\mathbf X,\mathcal M,\alpha,r)$ be the minimal value that satisfies
(16) $P\Bigl(\max_{\mathrm M\in\mathcal M}\ \max_{j\in\mathrm M}\ |t_{j\cdot\mathrm M}| \ \le\ K\Bigr) \ \ge\ 1-\alpha.$
This value will be called "the PoSI constant." It does not depend on any model selection procedure, but it does depend on the design matrix $\mathbf X$, the universe $\mathcal M$ of models subject to selection, the desired coverage $1-\alpha$ and the degrees of freedom $r$ in $\hat\sigma^2$; hence $K = K(\mathbf X,\mathcal M,\alpha,r)$.
Theorem 4.1
For all model selection procedures $\hat{\mathrm M}:\mathbb R^n\to\mathcal M$ we have
(17) $P\Bigl(\max_{j\in\hat{\mathrm M}} |t_{j\cdot\hat{\mathrm M}}| \ \le\ K\Bigr) \ \ge\ 1-\alpha,$
where $K = K(\mathbf X,\mathcal M,\alpha,r)$ is the PoSI constant.
This follows immediately from Lemma 4.1. Although mathematically trivial, we give the above the status of a theorem because it is the central statement of the reduction of universally valid postselection inference to simultaneous inference. The following is just a repackaging of Theorem 4.1:
Corollary 4.1
"Simultaneous postselection confidence guarantees" hold for any model selection procedure $\hat{\mathrm M}:\mathbb R^n\to\mathcal M$,
(18) $P\bigl(\beta_{j\cdot\hat{\mathrm M}} \in \mathrm{CI}_{j\cdot\hat{\mathrm M}}(K)\ \text{for all}\ j\in\hat{\mathrm M}\bigr) \ \ge\ 1-\alpha, \qquad \mathrm{CI}_{j\cdot\mathrm M}(K) = \hat\beta_{j\cdot\mathrm M} \pm K\,\hat\sigma/\|\mathbf X_{j\cdot\mathrm M}\|,$
where $K$ is the PoSI constant.
Simultaneous inference provides strong familywise error control, which in turn translates to strong error control for tests following model selection.
Corollary 4.2
"Strong postselection error control" holds for any model selection procedure $\hat{\mathrm M}:\mathbb R^n\to\mathcal M$:
$P\bigl(\text{for all}\ j\in\hat{\mathrm M}:\ |t^0_{j\cdot\hat{\mathrm M}}| > K \ \Longrightarrow\ \beta_{j\cdot\hat{\mathrm M}} \ne 0\bigr) \ \ge\ 1-\alpha,$
where $K$ is the PoSI constant and $t^0_{j\cdot\mathrm M} = \hat\beta_{j\cdot\mathrm M}\,\|\mathbf X_{j\cdot\mathrm M}\|/\hat\sigma$ is the $t$-statistic for the null hypothesis $H_0:\beta_{j\cdot\mathrm M}=0$.
The proof is standard; see the online Appendix of the supplementary material [Berk et al. (2013), Section B.3]. The corollary states that, with probability at least $1-\alpha$, in a selected model all PoSI-significant rejections have detected true alternatives.
4.7 Computation of the PoSI constant.
Several portions of the following treatment are devoted to a better understanding of the structure and value of the PoSI constant $K$. Except for very special choices of $\mathbf X$ it does not seem possible to provide closed-form expressions for its value. However, the structural geometry and other properties to be described later do enable a reasonably efficient computational algorithm. R code for computing the PoSI constant for small to moderate values of $p$ is available on the authors' web pages. This code is accompanied by a manuscript, to be published elsewhere, describing the computational algorithm and generalizations. For the basic setting involving $\mathcal M = \mathcal M_{\mathrm{full}}$ the algorithm will conveniently provide values of $K$ for matrices $\mathbf X$ of rank $d \approx 20$, or slightly larger depending on available computing speed and memory. It can also be adapted to compute $K$ for some other families $\mathcal M$ contained within $\mathcal M_{\mathrm{full}}$, such as some discussed in Section 4.5.
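For tiny designs the defining quantile (16) can also be approximated by brute-force Monte Carlo. The sketch below is ours and is only a crude stand-in for the authors' algorithm and R code; the random $20\times 3$ design, the simulation size and all variable names are our choices:

```python
# Monte Carlo sketch (ours) of the PoSI constant K in (16): simulate the
# max-|t| statistic over all submodels under mu = 0, sigma = 1 (which is
# w.l.o.g. by pivotality) and take its (1 - alpha) quantile.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n, p, alpha, nsim = 20, 3, 0.05, 2000
X = rng.standard_normal((n, p))
r = n - p                                  # full-model residual df

# Precompute normalized adjusted predictors X_{j.M} for every submodel M
# and every j in M; they depend on X only, not on Y.
units = []
for k in range(1, p + 1):
    for M in combinations(range(p), k):
        XM = X[:, M]
        for i in range(len(M)):
            others = np.delete(XM, i, axis=1)
            xj = XM[:, i]
            if others.shape[1]:
                xj = xj - others @ np.linalg.lstsq(others, xj, rcond=None)[0]
            units.append(xj / np.linalg.norm(xj))
U = np.array(units)                        # one row per (j, M) pair

P_full = X @ np.linalg.pinv(X)             # hat matrix of the full model
stats = np.empty(nsim)
for s in range(nsim):
    Y = rng.standard_normal(n)
    sigma_hat = np.linalg.norm(Y - P_full @ Y) / np.sqrt(r)
    stats[s] = np.max(np.abs(U @ Y)) / sigma_hat
K_hat = float(np.quantile(stats, 1 - alpha))
print(round(K_hat, 2))
```

Estimates of this kind are feasible only for very small $p$, since the number of adjusted predictors grows like $p\,2^{p-1}$; this is what motivates the structural analysis and the dedicated algorithm mentioned above.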
4.8 Scheffé protection.
Realizing the idea that the LS estimators in different submodels are generally unbiased estimates of different parameters, we have generated a simultaneous inference problem involving up to $p\,2^{p-1}$ linear contrasts $\beta_{j\cdot\mathrm M}$. In view of the enormous number of linear combinations for which simultaneous inference is sought, one should wonder whether the problem is not best solved by Scheffé's method [Scheffé (1959)], which provides simultaneous inference for all linear combinations. To accommodate rank-deficient $\mathbf X$, we cast Scheffé's result in terms of $t$-statistics for arbitrary nonzero $\mathbf x$ in the column space of $\mathbf X$:
(19) $t_{\mathbf x} = \frac{\mathbf x^{\mathsf T}(\mathbf Y - \mathrm E[\mathbf Y])}{\hat\sigma\,\|\mathbf x\|}.$
The statistics in (9) are obtained for $\mathbf x = \mathbf X_{j\cdot\mathrm M}$. Scheffé's guarantee is
(20) $P\Bigl(\sup_{\mathbf x\in\mathrm{col}(\mathbf X),\ \mathbf x\ne\mathbf 0} |t_{\mathbf x}| \ \le\ K_{\mathrm{Sch}}\Bigr) \ =\ 1-\alpha,$
where the Scheffé constant is
(21) $K_{\mathrm{Sch}} = K_{\mathrm{Sch}}(d,\alpha,r) = \sqrt{d\,F_{d,r;1-\alpha}},$ where $F_{d,r;1-\alpha}$ denotes the $1-\alpha$ quantile of the $F$ distribution with $d$ and $r$ degrees of freedom.
It provides an upper bound for all PoSI constants:
Proposition 4.1
$K(\mathbf X,\mathcal M,\alpha,r) \ \le\ K_{\mathrm{Sch}}(d,\alpha,r)$ for all $\mathbf X$ of rank $d$ and all $\mathcal M$.
Thus a parameter estimate whose $t$-ratio exceeds $K_{\mathrm{Sch}}$ in magnitude is universally safe from having the rejection of "$\beta_{j\cdot\mathrm M}=0$" invalidated by variable selection. The universality of the Scheffé constant is a tip-off that it may be too loose for some predictor matrices $\mathbf X$, and obtaining the sharper constant $K(\mathbf X)$ may be worthwhile. An indication is given by the following comparison as $d\to\infty$:
For the Scheffé constant it holds that $K_{\mathrm{Sch}} \sim \sqrt d$.
For orthogonal designs it holds that $K \sim \sqrt{2\log d}$.
(For orthogonal designs see Section 5.5.) Thus the PoSI constant for orthogonal designs is much smaller than $K_{\mathrm{Sch}}$. The large gap between the two suggests that the Scheffé constant may be too conservative, at least in some cases. We will study certain nonorthogonal designs for which the PoSI constant remains of this smaller order in Section 6.1. On the other hand, the PoSI constant can approach the order of the Scheffé constant as well, and we will study an example in Section 6.2.
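To see the gap numerically, the following sketch (ours) approximates $K_{\mathrm{Sch}} = \sqrt{d\,F_{d,r;1-\alpha}}$ by simulating the $F$ distribution, so that it needs only numpy, and prints it next to $\sqrt{2\log d}$; the choices of $r$, $\alpha$ and the grid of $d$ values are arbitrary:

```python
# Sketch (ours): the Scheffe constant sqrt(d * F_{d,r;1-alpha}) versus the
# sqrt(2 log d) growth cited for orthogonal designs.  The F quantile is
# approximated by simulating F = (chi2_d/d)/(chi2_r/r).
import numpy as np

rng = np.random.default_rng(2)
alpha, r, nsim = 0.05, 30, 200_000

for d in (5, 20, 80):
    F = (rng.chisquare(d, nsim) / d) / (rng.chisquare(r, nsim) / r)
    K_sch = np.sqrt(d * np.quantile(F, 1 - alpha))
    print(d, round(float(K_sch), 2), round(float(np.sqrt(2 * np.log(d))), 2))
```

The ratio between the two columns widens as $d$ grows, in line with the $\sqrt d$ versus $\sqrt{2\log d}$ rates above.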
Even though in this article we give asymptotic results for $d\to\infty$ and $r=\infty$ only, we mention another kind of asymptotics whereby $r$ is held constant while $d\to\infty$: in this case $K_{\mathrm{Sch}}$ is in the order of the product of $\sqrt d$ and the $1-\alpha$ quantile of the inverse-root-chi-square distribution with $r$ degrees of freedom. In a similar way, the constant for orthogonal designs is in the order of the product of $\sqrt{2\log d}$ and the $1-\alpha$ quantile of the inverse-root-chi-square distribution with $r$ degrees of freedom.
4.9 PoSI-sharp model selection—"SPAR."
There exists a model selection procedure that requires the full protection of the simultaneous inference procedure (16). It is the "significance hunting" procedure that selects the model containing the statistically most significant "effect":
$\hat{\mathrm M}(\mathbf Y) \ \in\ \arg\max_{\mathrm M\in\mathcal M}\ \max_{j\in\mathrm M}\ |t_{j\cdot\mathrm M}|.$
We name this procedure "SPAR" for "Single Predictor Adjusted Regression." It achieves equality with the "significant triviality bound" in Lemma 4.1 and is therefore the worst-case procedure for the PoSI problem. In the selected submodel $\hat{\mathrm M}$, the less significant predictors matter only insofar as they boost the significance of the winning predictor by adjusting it accordingly. This procedure ignores the quality of the fit to $\mathbf Y$ provided by the model. While our present purpose is to point out the existence of a selection procedure that requires full PoSI protection, SPAR could be of practical interest when the analysis is centered on the strength of "effects," not the quality of model fit.
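A literal implementation of this significance-hunting selector is straightforward for small $p$. The sketch below is ours (function name, design and signal strength are invented); it searches all submodels for the single largest $|t|$-statistic and ignores fit quality entirely:

```python
# Sketch (ours) of the "SPAR" selector of Section 4.9: among all
# submodels, select the one containing the single most significant
# coefficient, regardless of how well the model fits Y.
import numpy as np
from itertools import combinations

def spar(X, Y):
    n, p = X.shape
    resid_full = Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]
    sigma_hat = np.linalg.norm(resid_full) / np.sqrt(n - p)
    best_t, best_M = 0.0, None
    for k in range(1, p + 1):
        for M in combinations(range(p), k):
            XM = X[:, M]
            for i in range(len(M)):
                others = np.delete(XM, i, axis=1)
                xj = XM[:, i]
                if others.shape[1]:
                    xj = xj - others @ np.linalg.lstsq(others, xj, rcond=None)[0]
                t = abs(xj @ Y) / (np.linalg.norm(xj) * sigma_hat)
                if t > best_t:
                    best_t, best_M = t, M
    return best_M, best_t

rng = np.random.default_rng(3)
n, p = 40, 4
X = rng.standard_normal((n, p))
Y = 2.0 * X[:, 0] + rng.standard_normal(n)   # only predictor 0 matters
M_hat, t_max = spar(X, Y)
print(M_hat, 0 in M_hat)
```

Because the selected max-$|t|$ coincides with the right-hand side of Lemma 4.1, any procedure-specific constant for SPAR must equal the full PoSI constant.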
4.10 One primary predictor and controls—“PoSI1.”
Sometimes a regression analysis is centered on a predictor of interest, $\mathbf X_1$ say, and on inference for its coefficient $\beta_1$. The other predictors in $\mathbf X$ act as controls, so their purpose is to adjust the primary predictor for confounding effects and possibly to boost the primary predictor's own "effect." This situation is characterized by two features:
(1) Variable selection is limited to models that contain the primary predictor. We therefore define for any model universe $\mathcal M$ the subuniverse $\mathcal M_1 = \{\mathrm M\in\mathcal M : 1\in\mathrm M\}$ of models that contain the primary predictor $\mathbf X_1$, so that for $\mathrm M\in\mathcal M$ we have $\mathrm M\in\mathcal M_1$ iff $1\in\mathrm M$.
(2) Inference is sought for the primary predictor only; hence the relevant test statistic is now $|t_{1\cdot\hat{\mathrm M}}|$ and no longer $\max_{j\in\hat{\mathrm M}} |t_{j\cdot\hat{\mathrm M}}|$. The former statistic is coherent because it is assumed that $1\in\hat{\mathrm M}(\mathbf Y)$.
We call this the “PoSI1” situation in contrast to the unconstrained PoSI situation. Similar to PoSI, PoSI1 starts with a “significant triviality bound”:
Lemma 4.2 ("Primary predictor's significant triviality bound")
For the fixed predictor $\mathbf X_1$ and any model selection procedure $\hat{\mathrm M}:\mathbb R^n\to\mathcal M_1$, it holds for all $\mathbf Y$ that
$|t_{1\cdot\hat{\mathrm M}(\mathbf Y)}| \ \le\ \max_{\mathrm M\in\mathcal M_1} |t_{1\cdot\mathrm M}|.$
For a "proof," the only thing to note is $\hat{\mathrm M}(\mathbf Y)\in\mathcal M_1$ by the assumption $1\in\hat{\mathrm M}(\mathbf Y)$. We next define the "PoSI1" constant for the predictor $\mathbf X_1$ as the $1-\alpha$ quantile of the max statistic on the right-hand side of the lemma: let $K_1 = K_1(\mathbf X,\mathcal M_1,\alpha,r)$ be the minimal value that satisfies
(22) $P\Bigl(\max_{\mathrm M\in\mathcal M_1} |t_{1\cdot\mathrm M}| \ \le\ K_1\Bigr) \ \ge\ 1-\alpha.$
Importantly, this constant is dominated by the general PoSI constant,
(23) $K_1(\mathbf X,\mathcal M_1,\alpha,r) \ \le\ K(\mathbf X,\mathcal M,\alpha,r),$
for the obvious reason that the present max is smaller than the general PoSI max, due to $\mathcal M_1\subseteq\mathcal M$ and the restriction of inference to $j=1$. The constant $K_1$ provides the following "PoSI1" guarantee, shown as the analog of Theorem 4.1 and Corollary 4.1 folded into one:
Theorem 4.2
Let $\hat{\mathrm M}:\mathbb R^n\to\mathcal M_1$ be a selection procedure that always includes the predictor $\mathbf X_1$ in the model. Then we have
(24) $P\bigl(|t_{1\cdot\hat{\mathrm M}}| \ \le\ K_1\bigr) \ \ge\ 1-\alpha,$
and accordingly we have the following postselection confidence guarantee:
(25) $P\bigl(\beta_{1\cdot\hat{\mathrm M}} \in \hat\beta_{1\cdot\hat{\mathrm M}} \pm K_1\,\hat\sigma/\|\mathbf X_{1\cdot\hat{\mathrm M}}\|\bigr) \ \ge\ 1-\alpha.$
Inequality (24) is immediate from Lemma 4.2. The "triviality bound" of the lemma is attained by the following variable selection procedure, which we name "SPAR1":
(26) $\hat{\mathrm M}(\mathbf Y) \ \in\ \arg\max_{\mathrm M\in\mathcal M_1} |t_{1\cdot\mathrm M}|.$
It is a potentially realistic description of some data analyses in which a predictor of interest is determined a priori and the goal is to optimize this predictor's "effect." This procedure requires the full protection of the PoSI1 constant $K_1$.
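A SPAR1 search in the spirit of (26) can be sketched the same way as SPAR; the code below is ours, with the primary predictor stored as column 0 (0-based indexing) in the role of $\mathbf X_1$:

```python
# Sketch (ours) of "SPAR1": search only models containing the primary
# predictor (column 0 here) and maximize |t| for that predictor alone.
import numpy as np
from itertools import combinations

def spar1(X, Y, primary=0):
    n, p = X.shape
    resid_full = Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]
    sigma_hat = np.linalg.norm(resid_full) / np.sqrt(n - p)
    best_t, best_M = -1.0, None
    rest = [j for j in range(p) if j != primary]
    for k in range(0, p):
        for extra in combinations(rest, k):
            M = (primary,) + extra
            x1 = X[:, primary]
            if extra:                      # adjust X_1 for the controls in M
                Z = X[:, list(extra)]
                x1 = x1 - Z @ np.linalg.lstsq(Z, x1, rcond=None)[0]
            t = abs(x1 @ Y) / (np.linalg.norm(x1) * sigma_hat)
            if t > best_t:
                best_t, best_M = t, M
    return best_M, best_t

rng = np.random.default_rng(4)
n, p = 40, 4
X = rng.standard_normal((n, p))
Y = X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(n)
M_hat, t1 = spar1(X, Y)
print(M_hat, t1 > 0)
```

The search visits only the $2^{p-1}$ models containing the primary predictor and records a single $t$-statistic per model, which is exactly why $K_1$ is dominated by the general PoSI constant $K$.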
In addition to its methodological interest, the PoSI1 situation addressed by Theorem 4.2 is of theoretical interest: even though the PoSI1 constant $K_1$ is dominated by the unrestricted PoSI constant $K$, we will construct in Section 6.2 an example of predictor matrices for which the PoSI1 constant increases at the Scheffé rate $\sqrt d$ and is asymptotically more than 63% of the Scheffé constant $K_{\mathrm{Sch}}$. It follows that near-Scheffé protection can be needed even for SPAR1 variable selection.
5 The structure of the PoSI problem.
5.1 Canonical coordinates.
We can reduce the dimensionality of the PoSI problem from $n$ to $d$, where $d = \mathrm{rank}(\mathbf X)$, by introducing Scheffé's canonical coordinates. This reduction is important both geometrically and computationally because the PoSI coverage problem really takes place in the column space of $\mathbf X$.
Let $\mathbf Q = (\mathbf q_1,\dots,\mathbf q_d) \in \mathbb R^{n\times d}$ be any orthonormal basis of the column space of $\mathbf X$. Note that $\mathbf Q\mathbf Q^{\mathsf T}\mathbf Y$ is the orthogonal projection of $\mathbf Y$ onto the column space of $\mathbf X$ even if $\mathbf X$ is not of full rank. We call $\tilde{\mathbf X} = \mathbf Q^{\mathsf T}\mathbf X \in \mathbb R^{d\times p}$ and $\tilde{\mathbf Y} = \mathbf Q^{\mathsf T}\mathbf Y \in \mathbb R^{d}$ canonical coordinates of $\mathbf X$ and $\mathbf Y$.
We extend the notation $\mathbf X_{\mathrm M}$ for extraction of subsets of columns to canonical coordinates, $\tilde{\mathbf X}_{\mathrm M}$. Accordingly, slopes obtained from canonical coordinates will be denoted by $\hat\beta_{j\cdot\mathrm M}(\tilde{\mathbf X},\tilde{\mathbf Y})$ to distinguish them from the slopes $\hat\beta_{j\cdot\mathrm M}(\mathbf X,\mathbf Y)$ obtained from the original data, if only to state in the following proposition that they are identical.
Proposition 5.1
Properties of canonical coordinates:
(1) $\mathbf X = \mathbf Q\tilde{\mathbf X}$.
(2) $\tilde{\mathbf X}^{\mathsf T}\tilde{\mathbf X} = \mathbf X^{\mathsf T}\mathbf X$ and $\tilde{\mathbf X}^{\mathsf T}\tilde{\mathbf Y} = \mathbf X^{\mathsf T}\mathbf Y$.
(3) $\hat\beta_{j\cdot\mathrm M}(\tilde{\mathbf X},\tilde{\mathbf Y}) = \hat\beta_{j\cdot\mathrm M}(\mathbf X,\mathbf Y)$ for all submodels $\mathrm M$.
(4) $\tilde{\mathbf Y} \sim \mathcal N_d(\tilde{\boldsymbol\mu}, \sigma^2\mathbf I_d)$, where $\tilde{\boldsymbol\mu} = \mathbf Q^{\mathsf T}\,\mathrm E[\mathbf Y]$.
(5) $\tilde{\mathbf X}_{j\cdot\mathrm M} = \mathbf Q^{\mathsf T}\mathbf X_{j\cdot\mathrm M}$, where $\tilde{\mathbf X}_{j\cdot\mathrm M}$ is the adjusted predictor formed from the columns of $\tilde{\mathbf X}_{\mathrm M}$ and $\mathbf X_{j\cdot\mathrm M}$ is the residual vector of the regression of $\mathbf X_j$ onto the other columns of $\mathbf X_{\mathrm M}$.
(6) $t_{j\cdot\mathrm M}(\tilde{\mathbf X},\tilde{\mathbf Y}) = t_{j\cdot\mathrm M}(\mathbf X,\mathbf Y)$.
(7) In the classical case $d=p$, $\tilde{\mathbf X}$ can be chosen to be an upper triangular or a symmetric matrix.
The proofs of (1)–(6) are elementary. As for (7), an upper triangular $\tilde{\mathbf X}$ can be obtained from a QR decomposition based on a Gram–Schmidt procedure, $\mathbf X = \mathbf Q\mathbf R$, $\tilde{\mathbf X} = \mathbf R$. A symmetric $\tilde{\mathbf X}$ is obtained from a singular value decomposition, $\mathbf X = \mathbf U\mathbf D\mathbf V^{\mathsf T}$, $\mathbf Q = \mathbf U\mathbf V^{\mathsf T}$, $\tilde{\mathbf X} = \mathbf V\mathbf D\mathbf V^{\mathsf T}$.
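A quick numerical check (ours; the random design is arbitrary) that slopes computed from $(\mathbf X, \mathbf Y)$ and from the canonical coordinates $(\mathbf Q^{\mathsf T}\mathbf X, \mathbf Q^{\mathsf T}\mathbf Y)$ agree in every submodel, as asserted in Proposition 5.1:

```python
# Numerical check (ours) of canonical coordinates: slopes from (X, Y) and
# from (Q^T X, Q^T Y) agree in every submodel, so the PoSI problem can be
# posed in d = rank(X) dimensions.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)
n, p = 30, 3
X = rng.standard_normal((n, p))
Y = rng.standard_normal(n)

Q, R = np.linalg.qr(X)                  # Q: n x d orthonormal basis of col(X)
X_tilde, Y_tilde = Q.T @ X, Q.T @ Y     # canonical coordinates (X_tilde = R)

for k in range(1, p + 1):
    for M in combinations(range(p), k):
        b  = np.linalg.lstsq(X[:, M], Y, rcond=None)[0]
        bt = np.linalg.lstsq(X_tilde[:, M], Y_tilde, rcond=None)[0]
        assert np.allclose(b, bt)
print("slopes agree in all submodels")
```

The Gram–Schmidt-based `np.linalg.qr` also illustrates property (7): here $\tilde{\mathbf X}$ equals the upper triangular factor $\mathbf R$.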
Canonical coordinates allow us to analyze the PoSI coverage problem in $\mathbb R^d$. In what follows we will freely assume that all objects are rendered in canonical coordinates and write $\mathbf X$ and $\mathbf Y$ for $\tilde{\mathbf X}$ and $\tilde{\mathbf Y}$, implying that the predictor matrix is of size $d\times p$ and the response is of size $d\times 1$.
5.2 PoSI coefficient vectors in canonical coordinates.
We simplify the PoSI coverage problem (16) as follows: due to pivotality of $t$-statistics, the problem is invariant under translation of $\mathrm E[\mathbf Y]$ and rescaling of $\sigma$; see equation (9). Hence it suffices to solve the coverage problem for $\mathrm E[\mathbf Y]=\mathbf 0$ and $\sigma=1$. In canonical coordinates this implies $\mathrm E[\tilde{\mathbf Y}]=\mathbf 0$, hence $\beta_{j\cdot\mathrm M}=0$ for all $j$ and $\mathrm M$. For this reason we use the more familiar notation $\mathbf Z$ instead of $\tilde{\mathbf Y}$. The random vector $\mathbf Z/\hat\sigma$ has a $d$-dimensional $t$-distribution with $r$ degrees of freedom, and any linear combination $\mathbf u^{\mathsf T}\mathbf Z/\hat\sigma$ with a unit vector $\mathbf u$ has a one-dimensional $t$-distribution. Letting $\tilde{\mathbf X}_{j\cdot\mathrm M}$ be the adjusted predictors in canonical coordinates, estimates (8) and their $t$-statistics (9) simplify to
(27) $\hat\beta_{j\cdot\mathrm M} = \frac{\tilde{\mathbf X}_{j\cdot\mathrm M}^{\mathsf T}\mathbf Z}{\|\tilde{\mathbf X}_{j\cdot\mathrm M}\|^2}, \qquad t_{j\cdot\mathrm M} = \frac{\tilde{\mathbf X}_{j\cdot\mathrm M}^{\mathsf T}\mathbf Z}{\|\tilde{\mathbf X}_{j\cdot\mathrm M}\|\,\hat\sigma},$
which are linear functions of $\mathbf Z$ and $\mathbf Z/\hat\sigma$, respectively, with "PoSI coefficient vectors" $\mathbf l_{j\cdot\mathrm M}$ and $\bar{\mathbf l}_{j\cdot\mathrm M}$ that equal $\tilde{\mathbf X}_{j\cdot\mathrm M}$ up to scale:
(28) $\mathbf l_{j\cdot\mathrm M} = \frac{\tilde{\mathbf X}_{j\cdot\mathrm M}}{\|\tilde{\mathbf X}_{j\cdot\mathrm M}\|^2}, \qquad \bar{\mathbf l}_{j\cdot\mathrm M} = \frac{\tilde{\mathbf X}_{j\cdot\mathrm M}}{\|\tilde{\mathbf X}_{j\cdot\mathrm M}\|}.$
As we now operate in canonical coordinates, we have $\bar{\mathbf l}_{j\cdot\mathrm M}\in\mathbb R^d$ and $\|\bar{\mathbf l}_{j\cdot\mathrm M}\|=1$, so that $\bar{\mathbf l}_{j\cdot\mathrm M}\in S^{d-1}$, the unit sphere in $\mathbb R^d$. To complete the structural description of the PoSI problem we let
(29) $\mathcal L(\mathbf X,\mathcal M) = \{\bar{\mathbf l}_{j\cdot\mathrm M} : j\in\mathrm M,\ \mathrm M\in\mathcal M\} \ \subset\ S^{d-1}.$
If $\mathcal M = \mathcal M_{\mathrm{full}}$, we omit the second argument and write $\mathcal L(\mathbf X)$.
Proposition 5.2
The PoSI problem (16) is equivalent to a $d$-dimensional coverage problem for linear functions of the multivariate $t$-vector $\mathbf Z/\hat\sigma$,
(30) $P\Bigl(\max_{\bar{\mathbf l}\in\mathcal L(\mathbf X,\mathcal M)} |\bar{\mathbf l}^{\mathsf T}\mathbf Z|/\hat\sigma \ \le\ K\Bigr) \ \ge\ 1-\alpha.$
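In code, the reduction (27)–(30) can be carried out directly. The sketch below (ours; the $3\times 3$ design already stands for a matrix in canonical coordinates) builds the set of PoSI coefficient vectors for a small design and checks that it lies on the unit sphere with the expected cardinality:

```python
# Sketch (ours) of the reduction (27)-(30): each pair (j, M) contributes a
# unit vector l_bar = X_{j.M} / ||X_{j.M}||, and the PoSI statistic is the
# max over these vectors of |l_bar' Z| / sigma_hat.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(6)
d, p = 3, 3
X = rng.standard_normal((d, p))    # design taken to be in canonical coordinates

vectors = []                       # the set L(X) of PoSI coefficient vectors
for k in range(1, p + 1):
    for M in combinations(range(p), k):
        XM = X[:, M]
        for i in range(len(M)):
            others = np.delete(XM, i, axis=1)
            xj = XM[:, i]
            if others.shape[1]:
                xj = xj - others @ np.linalg.lstsq(others, xj, rcond=None)[0]
            vectors.append(xj / np.linalg.norm(xj))
L_vectors = np.array(vectors)

assert np.allclose(np.linalg.norm(L_vectors, axis=1), 1.0)  # on the unit sphere
print(L_vectors.shape[0])   # p * 2^(p-1) = 12 vectors for p = 3
```

For a generic design these $p\,2^{p-1}$ vectors are distinct; orthogonalities among the predictors collapse some of them, as discussed next.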
5.3 Orthogonalities of PoSI coefficient vectors.
The set of unit vectors $\mathcal L(\mathbf X) \subset S^{d-1}$ has interesting geometric structure, which is the subject of this and the next subsection. The following proposition (proof in Appendix A.1) elaborates the fact that $\bar{\mathbf l}_{j\cdot\mathrm M}$ is essentially the predictor vector $\mathbf X_j$ orthogonalized with regard to the other predictors in the model $\mathrm M$. Vectors will always be assumed in canonical coordinates and hence $d$-dimensional.
Proposition 5.3
Orthogonalities in $\mathcal L(\mathbf X)$: the following statements hold, assuming that the models referred to are in $\mathcal M_{\mathrm{full}}$ (hence the $\mathbf X_{\mathrm M}$ are of full rank).
(1) Adjustment properties: $\bar{\mathbf l}_{j\cdot\mathrm M} \perp \mathbf X_k$ for all $k\in\mathrm M$, $k\ne j$, while $\bar{\mathbf l}_{j\cdot\mathrm M}^{\mathsf T}\mathbf X_j = \|\mathbf X_{j\cdot\mathrm M}\| > 0$.
(2) The following vectors form an orthonormal "Gram–Schmidt" series:
$\bar{\mathbf l}_{1\cdot\{1\}},\ \bar{\mathbf l}_{2\cdot\{1,2\}},\ \dots,\ \bar{\mathbf l}_{p\cdot\{1,\dots,p\}}.$
Other series are obtained using an arbitrary permutation of the predictors in place of the order $1,\dots,p$.
(3) Vectors $\bar{\mathbf l}_{j\cdot\mathrm M}$ and $\bar{\mathbf l}_{j'\cdot\mathrm M'}$ are orthogonal if $\mathrm M\subseteq\mathrm M'$, $j\in\mathrm M$ and $j'\in\mathrm M'\setminus\mathrm M$.
(4) Classical case $d=p$ and $\mathcal M=\mathcal M_{\mathrm{full}}$: each vector $\bar{\mathbf l}_{j\cdot\mathrm M}$ is orthogonal to the vectors described in (3) (not all of which may be distinct).
The cardinality of $\mathcal L(\mathbf X)$ in the classical case $d=p$ and $\mathcal M=\mathcal M_{\mathrm{full}}$ is as follows: if the predictor vectors have no orthogonal pairs among them, then $|\mathcal L(\mathbf X)| = p\,2^{p-1}$. If there exist orthogonal pairs, then $|\mathcal L(\mathbf X)|$ is less. For example, if there exists exactly one orthogonal pair, then $|\mathcal L(\mathbf X)| = p\,2^{p-1} - 2$. When $\mathbf X$ is a fully orthogonal design, then $|\mathcal L(\mathbf X)| = p$.
5.4 The PoSI polytope.
Coverage problems can be framed geometrically in terms of probability coverage of polytopes in $\mathbb R^d$. For the PoSI problem the polytope with half-width $K$ is defined by
(31) $\Pi_K = \{\mathbf z\in\mathbb R^d : |\bar{\mathbf l}^{\mathsf T}\mathbf z| \le K\ \text{for all}\ \bar{\mathbf l}\in\mathcal L(\mathbf X,\mathcal M)\},$
henceforth called the "PoSI polytope." The PoSI coverage problem (30) is equivalent to calibrating $K$ such that
$P\bigl(\mathbf Z/\hat\sigma \in \Pi_K\bigr) \ \ge\ 1-\alpha.$
The simplest case of a PoSI polytope, for $d=p=2$, is illustrated in Figure 1 in the online Appendix of the supplementary material [Berk et al. (2013), Section B.7]. More general polytopes are obtained for arbitrary sets of unit vectors, that is, subsets of the unit sphere in $\mathbb R^d$. For the special case $\mathcal L = S^{d-1}$ the "polytope" is the "Scheffé ball" with coverage $1-\alpha$ as $K = K_{\mathrm{Sch}}$: