A Modern Maximum-Likelihood Theory
for High-Dimensional Logistic Regression
Abstract
Every student in statistics or data science learns early on that when the sample size n largely exceeds the number of variables p, fitting a logistic model produces estimates that are approximately unbiased. Every student also learns that there are formulas to predict the variability of these estimates, which are used for the purpose of statistical inference; for instance, to produce p-values for testing the significance of regression coefficients. Although these formulas come from large sample asymptotics, we are often told that we are on reasonably safe grounds when n is large in such a way that n ≥ 5p or n ≥ 10p. This paper shows that this is far from the case, and consequently, inferences routinely produced by common software packages are often unreliable.
Consider a logistic model with independent features in which n and p become increasingly large in a fixed ratio p/n → κ > 0. Then we show that (1) the MLE is biased, (2) the variability of the MLE is far greater than classically predicted, and (3) the commonly used likelihood-ratio test (LRT) is not distributed as a chi-square. The bias of the MLE is extremely problematic as it yields completely wrong predictions for the probability of a case based on observed values of the covariates. We develop a new theory, which asymptotically predicts (1) the bias of the MLE, (2) the variability of the MLE, and (3) the distribution of the LRT. We also demonstrate empirically that these predictions are extremely accurate in finite samples. Further, an appealing feature is that these novel predictions depend on the unknown sequence of regression coefficients only through a single scalar, the overall strength of the signal. This suggests very concrete procedures to adjust inference; we describe one such procedure learning a single parameter from data and producing accurate inference. For space reasons, we do not provide a full mathematical analysis of our results. However, we give a brief overview of the key arguments, which rely on the theory of (generalized) approximate message passing algorithms as well as on leave-one-observation/predictor-out approaches.
1 Introduction
1.1 Logistic regression: classical theory and practice
Logistic regression [42, 43, 58] is by and large the most frequently used model to estimate the probability of a binary response from the value of multiple features/predictor variables. It is used all the time in the social sciences, the finance industry, the medical sciences, and so on. As an example, a typical application of logistic regression may be to predict the risk of developing a given coronary heart disease from a patient's observed characteristics. Consequently, every graduate student in statistics or any field that remotely involves data analysis learns about logistic regression, perhaps before any other nonlinear multivariate model. In particular, every student knows how to interpret the excerpt of the computer output from Figure 1, which displays regression coefficient estimates, standard errors and p-values for testing the significance of the regression coefficients. In textbooks we learn the following:

Fitting a model via maximum likelihood produces estimates that are approximately unbiased.

There are formulas to predict the accuracy or variability of the maximum-likelihood estimate (MLE) (used in the computer output from Figure 1).
These approximations come from asymptotic results. Imagine we have n independent observations (y_i, x_i), where y_i ∈ {0, 1} is the response variable and x_i ∈ R^p the vector of predictor variables. The logistic model posits that the probability of a case conditional on the covariates is given by

P(y_i = 1 | x_i) = σ(x_i′β),

where σ(t) = e^t/(1 + e^t) is the standard sigmoidal function. Then everyone knows [42, 43, 58] that in the limit of large samples in which p is fixed and n → ∞, the MLE β̂ obeys

√n (β̂ − β) → N(0, I_β⁻¹) in distribution,    (1)

where I_β is the Fisher information matrix evaluated at the true β. This result holds under very mild conditions, which essentially guarantee the existence and uniqueness of the MLE. Now the approximation (1) justifies the first claim of near unbiasedness. Further, software packages then return standard errors by evaluating the inverse Fisher information matrix at the MLE (this is essentially what R does in Figure 1). In turn, these standard errors are then used for the purpose of statistical inference; for instance, they are used to produce p-values for testing the significance of regression coefficients, which researchers use in thousands of scientific studies.
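To make the classical recipe concrete, here is a minimal sketch in Python (NumPy only; our own variable names and a plain damped-Newton solver rather than the routine of any particular package) of what software essentially does: compute the MLE, then report standard errors from the inverse Fisher information evaluated at it.

```python
import numpy as np

def fit_logistic_mle(X, y, tol=1e-10, max_iter=100):
    """Logistic MLE via damped Newton (IRLS). X is (n, p); y takes values in {0, 1}."""
    n, p = X.shape
    b = np.zeros(p)

    def nll(v):  # negative log-likelihood, written stably
        eta = X @ v
        return np.sum(np.logaddexp(0.0, eta) - y * eta)

    for _ in range(max_iter):
        mu = 1.0 / (1.0 + np.exp(-(X @ b)))      # fitted probabilities (sigmoid)
        grad = X.T @ (mu - y)                    # gradient of the negative log-likelihood
        if np.linalg.norm(grad) < tol:
            break
        W = mu * (1.0 - mu)                      # weights sigma'(eta)
        H = X.T @ (W[:, None] * X)               # Hessian = observed information
        step = np.linalg.solve(H, grad)
        t, f0 = 1.0, nll(b)
        while nll(b - t * step) > f0 and t > 1e-8:   # step-halving for safety
            t /= 2.0
        b = b - t * step
    return b

def classical_std_errors(X, b_hat):
    """Standard errors from the inverse Fisher information at the MLE."""
    mu = 1.0 / (1.0 + np.exp(-(X @ b_hat)))
    H = X.T @ ((mu * (1.0 - mu))[:, None] * X)
    return np.sqrt(np.diag(np.linalg.inv(H)))
```

With n much larger than p these standard errors are trustworthy; the experiments below show what happens once p/n is no longer negligible.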
Another well-known result in logistic regression is Wilks' theorem [61], which gives the asymptotic distribution of the likelihood-ratio test (LRT):

Consider the likelihood ratio obtained by dropping k variables from the model under study. Then under the null hypothesis that none of the dropped variables belongs to the model, twice the log-likelihood ratio (LLR) converges to a chi-square distribution with k degrees of freedom in the limit of large samples.
Once more, this approximation is used in many statistical software packages to obtain p-values for testing the significance of individual and/or groups of coefficients.
1.2 Failures in moderately large dimensions
In modern-day data analysis, new technologies now produce extremely large datasets, often with huge numbers of features on each of a comparatively small number of experimental units. Yet, software packages and practitioners continue to perform calculations as if classical theory applies and, therefore, the main issue is this: do these approximations hold in the modern setting where p is not negligible compared to n?
To address this question, we begin by showing results from an empirical study. Throughout this section, we set n = 4000 and, unless otherwise specified, p = 800 (so that the 'dimensionality' p/n is equal to 1/5). We work with a matrix of covariates, which has i.i.d. N(0, 1/n) entries, and different types of regression coefficients scaled in such a way that

γ² := Var(x_i′β) = 5.
This is a crucial point: we want to make sure that the size of the log-odds ratio x_i′β does not increase with n or p, so that σ(x_i′β) is not trivially equal to either 0 or 1. Instead, we want to be in a regime where accurate estimates of β translate into a precise evaluation of a nontrivial probability. With our scaling, about 95% of the observations are such that |x_i′β| ≤ 2γ ≈ 4.47, so that 0.011 ≤ σ(x_i′β) ≤ 0.989.
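For concreteness, the scaling above can be sketched as follows; this is our own illustration under the stated assumptions (i.i.d. N(0, 1/n) covariate entries and one quarter of the coefficients at magnitude 10, half of each sign):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 4000, 800

# Covariate matrix with i.i.d. N(0, 1/n) entries.
X = rng.standard_normal((n, p)) / np.sqrt(n)

# A quarter of the coefficients at magnitude 10 (half positive, half negative).
beta = np.zeros(p)
beta[:100] = 10.0
beta[100:200] = -10.0

# gamma^2 = Var(x_i' beta) = ||beta||^2 / n = 200 * 100 / 4000 = 5 exactly.
gamma2 = beta @ beta / n
logodds = X @ beta            # each entry is N(0, gamma^2) given beta

# Fraction of observations with |x_i' beta| <= 2*gamma (about 4.47): ~95%.
frac = np.mean(np.abs(logodds) <= 2.0 * np.sqrt(gamma2))
```

So σ(x_i′β) stays inside (0.011, 0.989) for roughly 95% of the observations: the probabilities are genuinely nontrivial.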
Unbiasedness?
Figure 2 plots the true and fitted coefficients in the setting where one quarter of the regression coefficients have a magnitude equal to 10, and the rest are zero. Half of the nonzero coefficients are positive and the other half are negative. A striking feature is that the black curve does not pass through the center of the blue scatter. This is in stark contradiction to what we would expect from classical theory. Clearly, the regression estimates are not close to being unbiased. When the true effect size β_j is positive, we see that the MLE has a strong tendency to overestimate it. Symmetrically, when β_j is negative, the MLE tends to underestimate the effect sizes in the sense that the fitted values are in the same direction but with magnitudes that are too large. In other words, |β̂_j| > |β_j| for most indices j, so that we are overestimating the magnitudes of the effects.
The strong bias is not specific to this example, as the theory we develop in this paper will make clear. Consider a case where the entries of β are drawn i.i.d. from N(3, 16) (the setup is otherwise unchanged). Figure 3(a) shows that the pairs (β_j, β̂_j) are not distributed around a straight line of slope 1; rather, they are distributed around a line with a larger slope. Our theory predicts that the points should be scattered around a line with slope 1.499, shown in red, as if we could think that E[β̂_j] ≈ 1.499 β_j.
The strong bias is highly problematic for estimating the probability of our binary response. Suppose we are given a vector of covariates x and estimate the regression function p(x) = P(y = 1 | x) = σ(x′β) with the plug-in estimate σ(x′β̂). Then because we tend to overestimate the magnitudes of the effects, we will also tend to overestimate or underestimate the probabilities depending on whether p(x) is greater or less than a half. This is illustrated in Figure 3(b). Observe that when p(x) < 1/2, lots of predictions tend to be close to zero, even when p(x) is nowhere near zero. A similar behavior is obtained by symmetry: when p(x) > 1/2, we see a shrinkage toward the other end point, namely, one. Hence, we see a massive shrinkage towards the extremes, and the phenomenon is amplified as the true probability approaches zero or one. Expressed differently, the MLE may predict that an outcome is almost certain (i.e. that σ(x′β̂) is close to zero or one) when, in fact, the outcome is not at all certain. This behavior is misleading.
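The overestimation is easy to reproduce. Below is our own scaled-down simulation (n = 2000, p = 400, so κ = 0.2 and γ² = 5 as in the text, with a quarter of the coefficients at ±10); it fits the MLE with a damped Newton solver and computes the least-squares slope of β̂_j on β_j over the nonzero coordinates, which classical theory would put near 1.

```python
import numpy as np

def logistic_mle(X, y, max_iter=40):
    """Damped Newton for the logistic MLE (assumes the MLE exists)."""
    b = np.zeros(X.shape[1])

    def nll(v):
        eta = X @ v
        return np.sum(np.logaddexp(0.0, eta) - y * eta)

    for _ in range(max_iter):
        mu = 1.0 / (1.0 + np.exp(-(X @ b)))
        grad = X.T @ (mu - y)
        if np.linalg.norm(grad) < 1e-8:
            break
        H = X.T @ ((mu * (1.0 - mu))[:, None] * X)
        step = np.linalg.solve(H, grad)
        t, f0 = 1.0, nll(b)
        while nll(b - t * step) > f0 and t > 1e-8:  # step-halving safeguard
            t /= 2.0
        b = b - t * step
    return b

rng = np.random.default_rng(2)
n, p = 2000, 400                           # kappa = p/n = 0.2
X = rng.standard_normal((n, p)) / np.sqrt(n)
beta = np.zeros(p)
beta[:50], beta[50:100] = 10.0, -10.0      # gamma^2 = ||beta||^2 / n = 5
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(X @ beta)))).astype(float)

b_hat = logistic_mle(X, y)
nz = beta != 0
# Least-squares slope of fitted on true coefficients: comes out well above 1.
slope = (b_hat[nz] @ beta[nz]) / (beta[nz] @ beta[nz])
```

In this configuration the theory of Section 2 predicts a slope close to 1.5 rather than 1; a single run already makes the bias unmistakable.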
Accuracy of classical standard errors?
For the same setting as above, Figure 4 shows standard errors of ML estimates calculated from (1), along with estimates of the actual standard deviations computed via Monte Carlo. The latter is achieved by fixing the signal β and resampling the response vector and the covariate matrix many times. In this case, the actual standard deviation of the coefficient estimates is clearly much higher than that predicted via the Fisher information matrix. This suggests that in higher dimensions, the variance of the MLE is likely to be much larger than that computed by available software packages. In turn, this leads to grossly incorrect p-values and confidence statements, a major issue first noticed in [13].
Distribution of the LRT?
By now, the reader should be suspicious that the chi-square approximation for the distribution of the likelihood-ratio test holds in higher dimensions. Indeed, it does not, and this actually is not a new observation. In [55], the authors established that for a class of logistic regression models, the LRT converges weakly to a multiple of a chi-square variable in an asymptotic regime in which both n and p tend to infinity in such a way that p/n → κ. The multiplicative factor is an increasing function of the limiting aspect ratio κ, and exceeds one as soon as κ is positive. This factor can be computed by solving a nonlinear system of two equations in two unknowns given in (7) below. Furthermore, [55] links the distribution of the LRT with the asymptotic variance of the marginals of the MLE, which turns out to be provably higher than that given by the inverse Fisher information. These findings are of course completely in line with the conclusions from the previous paragraphs. The issue is that the results from [55] assume β = 0; that is, they apply under the global null where the response does not depend upon the predictor variables, and it is a priori not clear how the theory would extend beyond this case. Our goal in this paper is to study properties of the MLE and the LRT for high-dimensional logistic regression models under general signal strengths, restricting to the regime where the MLE exists.
To investigate what happens when we are not under the global null, consider the same matrix X as before and regression coefficients now sampled as follows: half of the β_j's are i.i.d. draws from N(7, 1), and the other half vanish. Figure 5 shows the histogram of the p-values for testing a null coefficient based on the chi-square approximation. Not only are the p-values far from uniform, the enormous mass near zero is extremely problematic for multiple testing applications, where one examines p-values at very high levels of significance, e.g. near Bonferroni levels. In such applications, one would be bound to make a very large number of false discoveries from using p-values produced by software packages. To further demonstrate the large inflation near the small p-values, we display in Table 1 estimates of the p-value probabilities in bins near zero. The estimates are much higher than what is expected from a uniform distribution. Clearly, the distribution of the LRT is far from a chi-square with one degree of freedom.
[Table 1: estimates of the p-value probabilities in bins near zero, based on the classical chi-square approximation; the estimates far exceed the corresponding uniform (nominal) values.]
Summary.
We have hopefully made the case that classical results, which software packages continue to rely upon, are downright erroneous in higher dimensions.

Estimates seem systematically biased in the sense that effect magnitudes are overestimated.

Estimates are far more variable than classically predicted.

Inference measures, e.g. p-values, are unreliable, especially at small values.
Given the widespread use of logistic regression in high dimensions, a novel theory explaining how to adjust inference to make it valid is seriously needed.
1.3 Our contribution
Our contribution is to develop a brand new theory, which applies to highdimensional logistic regression models with independent variables, and is capable of accurately describing all the phenomena we have discussed. Taking them one by one, the theory from this paper predicts:

the bias of the MLE;

the variability of the MLE;

and the distribution of the LRT.
These predictions are, in fact, asymptotically exact in a regime where the sample size n and the number of features p grow to infinity in a fixed ratio. Moreover, we shall see that our theoretical predictions are extremely accurate in finite sample settings in which p is a non-negligible fraction of n.
A very useful feature of this novel theory is that in our model, all of our predictions depend on the true coefficients only through the signal strength γ, where γ² = lim Var(x_i′β). This immediately suggests that estimating some high-dimensional parameter is not required to adjust inference. We propose in Section 3 a method for estimating γ and empirically study the quality of inference based on this estimate.
At the mathematical level, our arguments are very involved. Our strategy is to introduce an approximate message passing algorithm that tracks the MLE in the limit of a large number of features and samples. In truth, a careful mathematical analysis is delicate and requires a great number of steps. This is why in this expository paper we have decided to provide the reader only with the main ideas. All the details may be found in a separate document.
1.4 Prior work
Asymptotic properties of M-estimators in the context of linear regression have been extensively studied in diverging dimensions starting from [36], followed by [47] and [48]. These papers investigated the consistency and asymptotic normality properties of M-estimators in regimes where p grows more slowly than n, e.g. p = o(n^α) for some α < 1. Later on, the regime where p is comparable to n became the subject of a series of remarkable works [29, 6, 23, 27, 28]; these works only concern the linear model. The finding in these papers is that although M-estimators remain asymptotically unbiased, they exhibit a form of 'variance inflation'.
Moving on to more general exponential families, [49] studied the asymptotic behavior of likelihood methods and established that the classical Wilks theorem, as well as the classical normal approximation to the MLE, continues to hold provided p grows sufficiently slowly relative to n. Subsequently, [34] quantified the estimation error of the MLE in regimes of faster growth. Very recently, the authors of [31] investigated the classical asymptotic normality of the MLE under the global null and regimes in which it may break down as the dimensionality increases. In parallel, there also exists an extensive body of literature on penalized maximum likelihood estimates/procedures for generalized linear models; see [57, 40, 12, 37, 56, 30, 7], for example. This body of literature often allows p to be larger than n but relies upon strong assumptions about the extreme sparsity of the underlying signal. The setting in these works is, therefore, completely different from ours.
The finite sample behavior of both the MLE and the LRT has been extensively studied in the literature. It has been observed that when the sample size is small, the MLE is biased for the regression coefficients. In this context, a series of works ([32, 2, 44, 53, 14, 18, 16] and the references therein) proposed finite sample corrections to the MLE, which typically hinge on an asymptotic expansion of the MLE up to terms of order 1/n. One can plug in an estimator of the leading bias term, which makes the resulting corrected statistic more accurate. All of these works are in the low-dimensional setting where the MLE is still asymptotically unbiased; the observed bias was simply attributed to a finite sample effect. Jackknife bias reduction procedures for finite samples have also been proposed (see [11] and the references cited therein for other finite sample corrections). Similar corrections for the LRT have been studied; see, for instance, [4, 10, 41, 15, 17, 8, 22, 16, 45]. It was demonstrated in [55] that such finite sample corrections do not yield valid p-values in the high-dimensional regime we consider. In [39], the author proposed a measure for detecting inadequacy of inference based on explicit computation of the third order term in the Taylor expansion of the likelihood function. This term is known to be asymptotically negligible in the low-dimensional setting, and is found to be negligible asymptotically in our high-dimensional regime as well, as will be shown in this paper. Thus, this proposal also falls under the niche of a finite sample correction.
A line of simulation-based results exists to guide practitioners regarding how large sample sizes need to be so that such finite sample problems do not arise when using classical inference for logistic regression. The rule of thumb is usually 10 events per variable (EPV) or more, as mentioned in [46, 35], while a later study [59] suggested that it could be even less. As we clearly see in this paper, such a rule is not at all valid when the number of features is large; [19] contested the previously established EPV rule as well.
To the best of our knowledge, logistic regression in the regime where p is comparable to n has been quite sparsely studied. As already mentioned, this paper follows up on the earlier contribution [55] of the authors, which characterized the LLR distribution in the case where there is no signal (global null). This earlier reference derived the asymptotic distribution of the LLR as a function of the limiting ratio p/n. As we will see later in Section 2, this former result may be seen as a special case of the novel Theorem 3, which deals with general signal strengths. As is expected, the arguments are now more complicated than when working under the global null.
2 Main Results
Setting.
We describe the asymptotic properties of the MLE and the LRT in a high-dimensional regime, where n and p both go to infinity in such a way that p/n → κ ∈ (0, 1). We work with independent observations (y_i, x_i) from a logistic model such that P(y_i = 1 | x_i) = σ(x_i′β). We assume here that x_i ~ N(0, n⁻¹ I_p), where I_p is the p-dimensional identity matrix. (This means that the columns of the matrix X of covariates are approximately unit-normed in the limit of large samples.) The exact scaling of the x_i's is not important. As noted before, the important scaling is the signal strength, and we assume that the regression coefficients β (recall that p increases with n) are scaled in such a way that

lim_{n→∞} Var(x_i′β) = γ²,    (2)

where γ is fixed. It is useful to think of the parameter γ as the signal strength. Another way to express (2) is to say that ‖β‖²/n → γ².
The MLE β̂ is the minimizer of the negative log-likelihood ℓ defined via (observe that the sigmoid σ = ρ′ is the first derivative of ρ)

ℓ(b) = Σ_{i=1}^n { ρ(x_i′b) − y_i x_i′b },  ρ(t) = log(1 + e^t).    (3)
A first important remark is that in high dimensions, the MLE does not asymptotically exist if the signal strength exceeds a certain functional of the dimensionality. This happens because in such cases there is a perfect separating hyperplane, separating the cases from the controls if you will, sending the MLE to infinity. In [55], the authors proved that the MLE asymptotically does not exist when κ > 1/2 (to be exact, they assumed β = 0). To be more precise, the MLE exists if the pair (κ, γ) lies below a boundary curve, whereas it does not if the pair lies above it [21, 20]. Having said this, although several necessary and sufficient conditions for the existence of the MLE in finite samples have been developed in the past [1, 54, 52, 60, 33, 9], a general characterization of the boundary curve had been missing from the literature.
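Whether a given dataset is perfectly separated, and hence whether the MLE fails to exist, can be decided exactly by a linear-programming feasibility problem. The sketch below is our own encoding using SciPy's linprog: after recoding the labels to ±1, a separating direction exists if and only if some b satisfies s_i · x_i′b ≥ 1 for all i (the constraint cone is scale invariant, so ≥ 1 is equivalent to > 0).

```python
import numpy as np
from scipy.optimize import linprog

def mle_exists(X, y):
    """Return True if the logistic MLE exists (no perfectly separating hyperplane).

    Perfect separation means there is b with x_i'b > 0 when y_i = 1 and
    x_i'b < 0 when y_i = 0. With labels recoded to s_i = 2*y_i - 1, this is
    s_i * x_i'b >= 1 for some b, tested here as an LP feasibility problem
    with a zero objective.
    """
    s = 2.0 * np.asarray(y, dtype=float) - 1.0
    A_ub = -(s[:, None] * X)               # encodes -s_i * x_i'b <= -1
    b_ub = -np.ones(X.shape[0])
    res = linprog(c=np.zeros(X.shape[1]), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * X.shape[1])
    return not res.success                 # infeasible LP  =>  MLE exists
```

This check only detects complete separation; a fuller treatment would also test for quasi-complete separation, which we omit in this sketch.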
A system of nonlinear equations.
As we shall soon see, the asymptotic behavior of both the MLE and the LRT is characterized by a system of equations in three variables (α, σ, λ):

σ² = (1/κ²) E[ 2ρ′(Q₁) ( λρ′(prox_{λρ}(Q₂)) )² ],
0 = E[ ρ′(Q₁) Q₁ λρ′(prox_{λρ}(Q₂)) ],
1 − κ = E[ 2ρ′(Q₁) / (1 + λρ″(prox_{λρ}(Q₂))) ],    (4)

where (Q₁, Q₂) is a bivariate normal variable with mean 0 and covariance

Σ(α, σ) = [ γ²      −αγ²
            −αγ²    α²γ² + κσ² ].    (5)
With ρ as in (3), the proximal mapping operator is defined via

prox_{λρ}(z) = argmin_{t ∈ R} { λρ(t) + (t − z)²/2 }.    (6)
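The proximal operator in (6) has no closed form, but the objective is smooth and strictly convex, so its minimizer is the unique root of the stationarity condition λρ′(t) + t = z. Since t = z − λσ(t) must lie in (z − λ, z), a simple bisection pins it down; the sketch below is our own scalar implementation.

```python
import numpy as np

def prox_logit(lam, z, iters=100):
    """prox_{lam*rho}(z) for rho(t) = log(1 + e^t).

    Solves lam * sigmoid(t) + t = z by bisection; the root lies in
    (z - lam, z) because lam * sigmoid(t) is between 0 and lam.
    """
    lo, hi = z - lam, z
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if lam / (1.0 + np.exp(-mid)) + mid < z:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A useful identity for working with the system (4) is λρ′(prox_{λρ}(z)) = z − prox_{λρ}(z), which follows directly from the stationarity condition.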
The system of equations (4) is parameterized by the pair (κ, γ) of dimensionality and signal strength parameters. It turns out that the κ-γ plane is divided into two regions shown in Figure 6: one in which the system admits a unique solution and one in which there is no solution. Numerically, we observe that the boundary curve in the figure asymptotically separates the region in which the MLE exists from that in which it does not.
It is instructive to note that in the case where the signal strength vanishes, γ = 0, the system of equations (4) reduces to the following two-dimensional system in (σ, λ):

σ² = (1/κ²) E[ ( λρ′(prox_{λρ}(√κ σ Z)) )² ],
1 − κ = E[ 1 / (1 + λρ″(prox_{λρ}(√κ σ Z))) ],    (7)

where Z ~ N(0, 1). This holds because Q₁ = 0 almost surely, so that 2ρ′(Q₁) = 1 and Q₂ ~ N(0, κσ²). It is not surprising that this system is that from [55], since that work considers β = 0 and, therefore, γ = 0.
2.1 The average behavior of the MLE
Our first main result characterizes the ‘average’ behavior of the MLE.
Theorem 1.
Assume the dimensionality and signal strength parameters κ and γ are such that the system (4) has a unique solution (α★, σ★, λ★) (i.e., they lie in the region shown in Figure 6). Assume the logistic model described at the beginning of this section, in which β is such that the empirical distribution of its coordinates converges weakly to a distribution Π with finite second moment. Suppose further that the second moment converges, in the sense that as n → ∞, p⁻¹ Σ_j β_j² → E[β̄²], β̄ ~ Π. Then for any pseudo-Lipschitz function ψ of order 2 (a function ψ is said to be pseudo-Lipschitz of order k if there exists a constant L > 0 such that for all t, s, |ψ(t) − ψ(s)| ≤ L(1 + ‖t‖^{k−1} + ‖s‖^{k−1}) ‖t − s‖), the marginal distributions of the MLE coordinates obey

(1/p) Σ_{j=1}^p ψ(β̂_j − α★ β_j, β_j) → E[ψ(σ★ Z, β̄)]  almost surely,    (8)

where Z ~ N(0, 1), independent of β̄ ~ Π.
Among the many consequences of this result, we give three:

This result quantifies the exact bias of the MLE in some statistical sense. This can be seen by taking ψ(t, β) = t in (8), which leads to

(1/p) Σ_{j=1}^p (β̂_j − α★ β_j) → 0  almost surely,

and says that β̂_j is centered about α★ β_j. This can be seen in the empirical results from the previous sections as well. When κ = 0.2 and γ² = 5, the solution to (4) obeys α★ ≈ 1.499, and Figure 3(a) shows that this is the correct centering.

Second, our result also provides the asymptotic variance of the MLE marginals after they are properly centered. This can be seen by taking ψ(t, β) = t², which leads to

(1/p) Σ_{j=1}^p (β̂_j − α★ β_j)² → σ★²  almost surely.

As before, this can also be seen in the empirical results from the previous section. When κ = 0.2 and γ² = 5, the value of σ★ obtained by solving (4) is precisely the inflated standard deviation we observe in Figure 4.

Third, our result establishes that upon centering the MLE around α★β, it becomes decorrelated from the signal β. This can be seen by taking ψ(t, β) = tβ, which leads to

(1/p) Σ_{j=1}^p (β̂_j − α★ β_j) β_j → 0  almost surely.

This can be seen from our earlier empirical results in Figure 6(b). The scatter directly shows the decorrelated structure and the x-axis passes right through the center, corroborating our theoretical finding.
It is of course interesting to study how the bias α★ and the standard deviation σ★ depend on the dimensionality κ and the signal strength γ. We numerically observe that the larger the dimensionality and/or the larger the signal strength, the larger the bias α★. This dependence is illustrated in Figure 7(a). Further, note that as κ approaches zero, the bias α★ → 1, indicating that the MLE is asymptotically unbiased if p = o(n). The same behavior applies to σ★; that is, σ★ increases in either κ or γ, as shown in Figure 7(b). This plot shows the theoretical prediction σ★ divided by the average classical standard deviation obtained from I_β⁻¹, the inverse of the Fisher information. As κ approaches zero, this ratio goes to 1, indicating that the classical standard deviation value is valid for p = o(n); this is true across all values of γ. As κ increases, the ratio deviates increasingly from 1 and we observe higher and higher variance inflation. In summary, the MLE increasingly deviates from what is classically expected as either the dimensionality or the signal strength, or both, increase.
Theorem 1 is an asymptotic result, and we study how fast the asymptotics kick in as we increase the sample size n. To this end, we fix the dimensionality p/n and let half of the coordinates of β take a common constant value, the other half being zero, with the scaling chosen so that γ² = 5 as before. Our goal is to empirically determine the parameters α★ and σ★ from many independent runs, for a sequence of increasing sample sizes n. Note that there are several ways of determining α★ empirically. For instance, the limit (8) directly suggests taking the ratio Σ_j β̂_j / Σ_j β_j. An alternative is to consider the same ratio with the summation restricted to the nonzero indices. Empirically, we find there is not much difference between these two choices and choose the latter option, denoting it by α̂. Table 2 compares α̂ with the solution α★ to (4) for this configuration, and shows that α̂ is very slightly larger than α★ in finite samples. However, observe that as the sample size increases, α̂ approaches α★, confirming the result from (8). We defer the study of the asymptotic variance to the next section.
[Table 2: empirical estimates of the centering parameter α★ and the variability parameter σ★ across increasing sample sizes; the estimates approach the theoretical predictions as n grows.]
2.2 The distribution of the null MLE coordinates
Whereas Theorem 1 describes the average or bulk behavior of the MLE across all of its entries, our next result provides the explicit distribution of β̂_j whenever β_j = 0, i.e. whenever the jth variable is independent from the response y.
Theorem 2.
Let j be any variable such that β_j = 0. Then in the setting of Theorem 1, the MLE obeys

β̂_j → N(0, σ★²)  in distribution.    (9)
In words, the null MLE coordinates are asymptotically normal with mean zero and variance σ★² given by the solution to the system (4). An important remark is this: we have observed that σ★ is an increasing function of γ. Hence, we conclude that for a null variable j, the variance of β̂_j is increasingly larger as the magnitude of the other regression coefficients increases.
We return to the finite sample precision of the theoretically predicted asymptotic variance σ★². As an empirical estimate, we use the average of β̂_j² over the null coordinates, since it is approximately unbiased for σ★². We work in the setting of Table 2, averaging our estimates across the runs. The results are given in this same table; we observe that the empirical estimate is very slightly larger than σ★². However, it progressively gets closer to σ★² as the sample size n increases.
Next, we study the accuracy of the asymptotic convergence result (9). In the setting of Table 2, we fit independent logistic regression models and, for some fixed null coordinate j, plot the empirical cumulative distribution function of Φ(β̂_j/σ★) in Figure 8(a). Observe the perfect agreement with a straight line of slope 1.
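Theorem 2 translates directly into an adjusted two-sided p-value for a single coordinate: compare β̂_j/σ★ to a standard normal instead of dividing by the classical standard error. In the sketch below, the value passed as sigma_star is a placeholder that would in practice come from solving the system (4) (or from the estimate of Section 3); it is not a number we are asserting.

```python
from math import erf, sqrt

def adjusted_pvalue(beta_hat_j, sigma_star):
    """Two-sided p-value for H0: beta_j = 0 using beta_hat_j ~ N(0, sigma_star^2)."""
    z = abs(beta_hat_j) / sigma_star
    phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal cdf at |z|
    return 2.0 * (1.0 - phi)
```

Because σ★ exceeds the classical standard error in this regime, the adjusted p-values are larger (less significant) than the ones reported by standard software.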
2.3 The distribution of the LRT
We finally turn our attention to the distribution of the likelihood-ratio statistic for testing H₀: β_j = 0.
Theorem 3.
Consider the LLR Λ_j = min_{b: b_j = 0} ℓ(b) − min_b ℓ(b) for testing H₀: β_j = 0. In the setting of Theorem 1, twice the LLR is asymptotically distributed as a multiple of a chi-square under the null:

2Λ_j → (κσ★²/λ★) χ₁²  in distribution.    (10)
This theorem explicitly states that the LRT does not follow a χ₁² distribution as soon as κ > 0, since the multiplicative factor is then larger than one, as demonstrated in Figure 7(c). In other words, the LRT is stochastically quite a bit larger than a χ₁², explaining the large spike near zero in Figure 5. Also, Figure 7(c) suggests that as κ → 0, the classical result is recovered.
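In practice, Theorem 3 amounts to a one-line correction: divide twice the LLR by the multiplicative factor before consulting the χ₁² table. A sketch using SciPy's chi-square survival function; the factor 1.25 below is a made-up illustration of the mechanics, not a value computed from (4).

```python
from scipy.stats import chi2

def adjusted_lrt_pvalue(two_llr, rescale):
    """P-value for twice the LLR using the rescaled chi-square of (10).

    rescale plays the role of kappa * sigma_star^2 / lambda_star, obtained
    by solving the system (4); rescale = 1 recovers the classical p-value.
    """
    return chi2.sf(two_llr / rescale, df=1)

p_classical = adjusted_lrt_pvalue(3.84, 1.0)   # approximately 0.05
p_adjusted = adjusted_lrt_pvalue(3.84, 1.25)   # larger, i.e. less significant
```

Since the factor exceeds one whenever κ > 0, the adjustment always pushes p-values upward relative to the classical recipe.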
Theorem 3 extends to arbitrary signal strengths the earlier result from [55], which described the distribution of the LRT under the global null (γ = 0, for all κ < 1/2). One can quickly verify that when γ = 0, the multiplicative factor in (10) is that given in [55], which easily follows from the fact that in this case, the system (4) reduces to (7). Furthermore, if the signal is sparse in the sense that only o(n) coefficients have nonzero (bounded) values, then γ = 0, which immediately implies that the asymptotic distribution for the LLR from [55] still holds in such cases.
To investigate the accuracy of (10) in finite samples, we work on the p-value scale. We select a null coefficient and compute p-values based on (10). The histogram of the p-values across the runs is shown in Figure 8(b) and the empirical cumulative distribution function (cdf) in Figure 8(c). In stark contrast to Figure 5, we observe that the p-values are uniform over the bulk of the distribution.
From a multiple testing perspective, it is essential to understand the accuracy of the rescaled chi-square approximation in the tails of the distribution. We plot the empirical cdf of the p-values, zooming in on the tail, in Figure 8(d). We find that the rescaled chi-square approximation works extremely well even in the tails of the distribution.
To obtain a more refined idea of the quality of the approximation, we zoom in on the smaller bins close to zero and provide estimates of the p-value probabilities in Table 3 for two sample sizes. The tail approximation is accurate, modulo a slight deviation in the smallest bin for the smaller sample size. For the larger sample size, this deviation vanishes and we find perfect coverage of the true values. It seems that our approximation is extremely precise even in the tails.
2.4 Other scalings
Throughout this section, we worked under the assumption that γ² = lim Var(x_i′β) does not depend on n, and we explained that this is the only scaling that makes sense to avoid a trivial problem. We set the variables to have variance 1/n, but this is of course somewhat arbitrary. For example, we could choose them to have variance σ₀², as in x_i ~ N(0, σ₀² I_p). This means that x_i = σ₀√n z_i, where z_i ~ N(0, n⁻¹ I_p) is as before. This gives x_i′β = z_i′θ, where θ = σ₀√n β. The conclusions from Theorems 1 and 2 then hold for the model with predictors z_i and regression coefficient sequence θ. Consequently, by simple rescaling, we can pass the properties of the MLE in this model to those of the MLE in the model with predictors x_i and coefficients β. For instance, the asymptotic standard deviation of β̂_j is equal to σ★/(σ₀√n), where σ★ is just as in Theorems 1 and 2. On the other hand, the result for the LRT, namely, Theorem 3, is scale invariant; no such adjustment is necessary.
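The rescaling argument can be checked mechanically: the logistic MLE is exactly equivariant under scaling of the design, so fitting on σ₀·X multiplies every coefficient estimate by 1/σ₀ and leaves the fitted probabilities unchanged. A small self-contained check, using our own Newton fitter on a toy low-dimensional dataset (the data-generating coefficients are illustrative):

```python
import numpy as np

def logistic_mle(X, y, iters=60):
    """Damped Newton for the logistic MLE on small, non-separated data."""
    b = np.zeros(X.shape[1])

    def nll(v):
        eta = X @ v
        return np.sum(np.logaddexp(0.0, eta) - y * eta)

    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-(X @ b)))
        grad = X.T @ (mu - y)
        if np.linalg.norm(grad) < 1e-12:
            break
        H = X.T @ ((mu * (1.0 - mu))[:, None] * X)
        step = np.linalg.solve(H, grad)
        t, f0 = 1.0, nll(b)
        while nll(b - t * step) > f0 and t > 1e-10:
            t /= 2.0
        b = b - t * step
    return b

rng = np.random.default_rng(3)
X = rng.standard_normal((80, 3))
beta0 = np.array([1.0, -1.0, 0.5])
y = (rng.random(80) < 1.0 / (1.0 + np.exp(-(X @ beta0)))).astype(float)

b1 = logistic_mle(X, y)          # original covariates
b2 = logistic_mle(2.0 * X, y)    # covariates rescaled by sigma0 = 2
# b2 equals b1 / 2: same fitted probabilities, rescaled coefficients.
```

The same reasoning is why the LRT, which only involves likelihood values, needs no adjustment at all under rescaling.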
2.5 NonGaussian covariates
Our model assumes that the features are Gaussian; yet, we expect that the same results hold under other distributions with the proviso that they have sufficiently light tails. In this section, we empirically study the applicability of our results to certain non-Gaussian features.
In genetic studies, we often wish to understand how a binary response/phenotype depends on single nucleotide polymorphisms (SNPs), which typically take on values in {0, 1, 2}. When the jth SNP is in Hardy-Weinberg equilibrium, the chance of observing 0, 1, and 2 is respectively (1 − q)², 2q(1 − q), and q², where the allele frequency q lies between 0 and 1. Below we generate independent features with marginal distributions as above, for parameters q varying in [0.25, 0.75]. We then center and normalize each column of the feature matrix to have zero mean and unit variance. Keeping everything else as in the setting of Figure 3, we study the bias of the MLE in Figure 9(a). As for Gaussian designs, the MLE seriously overestimates effect magnitudes and our theoretical prediction accurately corrects for the bias. We also see that the bias-adjusted residuals β̂_j − α★β_j are uncorrelated with the effect sizes β_j, as shown in Figure 9(b).
The bulk distribution of a null coordinate suggested by Theorem 2 and the LRT distribution from Theorem 3 are displayed in Figure 10. Other than the design, the setting is the same as for Figure 8. The theoretical predictions are once again accurate. Furthermore, upon examining the tails of the pvalue distribution, we once more observe a close agreement with our theoretical predictions. All in all, these findings indicate that our theory is expected to apply to a far broader class of features.
3 Adjusting Inference by Estimating the Signal Strength
All of our asymptotic results, namely, the average behavior of the MLE, the asymptotic distribution of a null coordinate, and the LLR, depend on the unknown signal strength γ. In this section, we describe a simple procedure for estimating this single parameter, from an idea proposed by Boaz Nadler and Rina Barber after the second author presented the new high-dimensional ML theory from this paper at the Mathematisches Forschungsinstitut Oberwolfach on March 12, 2018.
3.1 ProbeFrontier: estimating by probing the MLE frontier
We estimate the signal strength γ by actually using the predictions from our theory, namely, the fact that we have asymptotically characterized in Section 2 the region where the MLE exists. We know from Figure 6 that for each γ, there is a maximal dimensionality κ(γ) at which the MLE ceases to exist. We propose an estimate κ̂ of this maximal dimensionality and set γ̂ to be the signal strength whose frontier equals κ̂. Below, we shall refer to this as the ProbeFrontier method.
Given a data sample (y_i, x_i), we begin by choosing a fine grid of dimensionality values κ₁ ≤ κ₂ ≤ ⋯ ≤ κ_K. For each κ_j, we execute the following procedure:
1. Subsample. Sample observations from the data without replacement so that, ignoring rounding to the nearest integer, the subsample has the dimensionality ratio specified by the current grid value.
2. Check whether the MLE exists. For the subsample, check whether the MLE exists or not. This is done by solving a linear programming feasibility problem: if there exists a vector b such that the linear predictor is positive for every case and negative for every control, then perfect separation between cases and controls occurs and the MLE does not exist. Conversely, if the linear program is infeasible, then the MLE exists.
3. Repeat. Repeat the two previous steps several times and compute the proportion of times the MLE does not exist.
We next find the smallest grid value for which the MLE fails to exist in at least half of the subsamples. By linear interpolation between this grid value and the preceding one, we obtain the dimensionality at which the proportion of times the MLE does not exist equals one half, and we take this as our estimate of the frontier. (Since the ‘phase-transition’ boundary for the existence of the MLE is a smooth function of the signal strength, as is clear from Figure 6, choosing a sufficiently fine grid makes the linear interpolation step sufficiently precise.)
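The separation check and the subsampling loop can be sketched in a few lines. This is an illustrative implementation of ours, not the authors' code: labels are assumed coded as ±1, the candidate separating vector has no intercept, and the helper names (`separated`, `probe_frontier`, `kappa_hat`) are our own.

```python
import numpy as np
from scipy.optimize import linprog

def separated(X, y):
    """Check whether some vector b achieves y_i * x_i'b >= 1 for all i.
    If so, cases and controls are perfectly separated and the MLE does
    not exist. y is coded as +/-1; the margin of 1 only fixes the scale."""
    n, p = X.shape
    A_ub = -(y[:, None] * X)                 # encodes -y_i x_i'b <= -1
    res = linprog(c=np.zeros(p), A_ub=A_ub, b_ub=-np.ones(n),
                  bounds=[(None, None)] * p, method="highs")
    return res.status == 0                   # feasible -> separated

def probe_frontier(X, y, kappas, B=10, rng=None):
    """For each dimensionality ratio kappa, subsample about p/kappa
    observations B times and record the fraction of subsamples on
    which the MLE fails to exist."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    fractions = []
    for kappa in kappas:
        m = min(n, int(round(p / kappa)))
        fail = sum(separated(*(lambda idx: (X[idx], y[idx]))(
            rng.choice(n, size=m, replace=False))) for _ in range(B))
        fractions.append(fail / B)
    return np.array(fractions)

def kappa_hat(kappas, fractions):
    """Linearly interpolate to the point where the failure proportion
    first crosses one half."""
    j = int(np.argmax(fractions >= 0.5))     # first grid point at or above 1/2
    if j == 0:
        return kappas[0]
    k0, k1 = kappas[j - 1], kappas[j]
    f0, f1 = fractions[j - 1], fractions[j]
    return k0 + (0.5 - f0) * (k1 - k0) / (f1 - f0)
```

In practice one would run `probe_frontier` on an increasing grid of dimensionality values and pass the resulting proportions to `kappa_hat`.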
3.2 Empirical performance of adjusted inference
We demonstrate the accuracy of ProbeFrontier via some empirical results. We begin by generating i.i.d. observations using the same setup as in Figure 8 (half of the regression coefficients are null). We work with a fine sequence of grid points and obtain the frontier estimate via the procedure described above, drawing repeated subsamples at each grid point. Solving the system (4) with the estimated signal strength yields estimates of the theoretical predictions for the bias and standard deviation; in turn, this yields an estimate of the multiplicative factor in (10), which we compare with the theoretical values from Section 2. Next, we compute the LLR statistic for each null coordinate and p-values from the approximation (10) in two ways: first, by using the theoretically predicted values, and second, by using our estimates. A scatterplot of these two sets of p-values is shown in Figure 11(a) (blue). We observe impeccable agreement.
Next, we study the accuracy of the estimated signal strength across a range of true values. We begin by selecting a fine grid of values and, for each, we generate observations in which half the coefficients are null and the remaining half have a nonvanishing magnitude scaled so that the signal strength takes the given value. Figure 12(a) displays the estimate versus the truth in blue, and we notice that ProbeFrontier works very well. The blue points fluctuate very mildly above the diagonal for larger values of the signal strength but remain extremely close to the diagonal throughout. This confirms that ProbeFrontier estimates the signal strength with high precision. Having obtained an accurate estimate, plugging it into (4) immediately yields estimates of the bias, the standard deviation, and the rescaling factor in (10). We study the accuracy of these estimates in Figure 12(b)–(d). We observe similar behavior in all these cases, with the procedure yielding extremely precise estimates for smaller values and reasonably accurate estimates for larger values.
Table 4: Parameters (bias, standard deviation, LRT rescaling factor); rows give the true values and the averaged estimates.
Finally, we focus on the estimation accuracy for a particular pair of dimensionality and signal strength across several replicates. In the setting of Figure 8, we generate samples and obtain estimates of the bias, the standard deviation, and the rescaling factor for the LRT. The averages of these estimates are reported in Table 4. Our estimates always recover the true values up to the first digit. It is instructive to study the precision of the procedure on the p-value scale. To this end, we compute p-values from (10) using the estimated multiplicative factor. The empirical cdf of the p-values, both in the bulk and in the extreme tails, is shown in Figure 13. We observe perfect agreement with the uniform distribution, establishing the practical applicability of our theory and methods.
3.3 Debiasing the MLE and its predictions
We have seen that maximum likelihood produces biased coefficient estimates and predictions. The question is how precisely our proposed theory and methods can correct this. Recall the example from Figure 3, together with the theoretical prediction for the bias. For this dataset, ProbeFrontier yields a bias estimate, shown as the green line in Figure 14(a). Clearly, the estimate of the bias is extremely precise, and the corrected coefficient estimates appear nearly unbiased.
Further, we can also use our estimate of the bias to debias the predictions, since we can estimate the regression function by rescaling the MLE coefficients by the estimated bias factor before applying the logistic link. Figure 14(b) shows our predictions on the same dataset. In stark contrast to Figure 3(b), the predictions are now centered around the regression function (the method appears essentially unbiased), and the massive shrinkage towards the extremes has disappeared. The predictions constructed from the debiased MLE no longer falsely predict almost certain outcomes; rather, we obtain nontrivial chances of being classified in either of the two response categories, as it should be.
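A minimal sketch of this debiasing step, in our notation (`beta_hat` is the MLE and `alpha_hat` the estimated bias factor produced by ProbeFrontier): the fitted coefficients are divided by the bias factor before applying the logistic function.

```python
import numpy as np

def debiased_probabilities(X, beta_hat, alpha_hat):
    """Predicted P(y = 1 | x) after correcting the MLE for its bias:
    the fitted coefficients are divided by the estimated bias factor
    alpha_hat before applying the logistic link."""
    eta = X @ (beta_hat / alpha_hat)   # debiased linear predictor
    return 1.0 / (1.0 + np.exp(-eta))
```

With a bias factor above one, this pulls overconfident predictions away from 0 and 1, which is exactly the qualitative correction seen in Figure 14(b).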
4 Main Mathematical Ideas
As we mentioned earlier, we do not provide detailed proofs in this paper; the reader will find them in the first author’s Ph.D. thesis. However, in this section we give some of the key mathematical ideas and explain some of the main steps in the arguments, relying as much as possible on published results from [55].
4.1 The bulk distribution of the MLE
To analyze the MLE, we introduce an approximate message passing (AMP) algorithm that tracks the MLE in the limit of large sample size and dimensionality. Our purpose is a little different from the work in [50], which, in the context of generalized linear models, proposed AMP algorithms for Bayesian posterior inference, and whose properties were later studied in [38] and [3]. To the best of our knowledge, an AMP algorithm for tracking the MLE of a logistic model has not previously been proposed in the literature. Our starting point is to write down a sequence of AMP iterates using the following scheme: starting with an initial guess, we inductively define
(11)  
where the function is applied elementwise and is equal to
(12) 
Observe that the evolution (11) depends on a sequence of parameters whose dynamics we describe next.
This description requires introducing an augmented sequence . With these two extra parameters , we let be a bivariate normal variable with mean and covariance matrix defined exactly as in (5). Then starting from an initial pair , for , we inductively define as the solution to
(13) 
and the extra parameters as
(14)  
To repeat, we run the AMP iterations (11) using the scalar variables calculated via the variance map updates (13)–(14).
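In the paper, the nonlinearity in (12) and the expectations in the updates (13)–(14) involve the proximal operator of the scaled logistic loss rho(t) = log(1 + e^t). As a sketch of how this quantity can be computed numerically (our implementation; the function name and Newton tolerance are our choices), one solves the scalar stationarity equation x + lam * rho'(x) = z:

```python
import numpy as np

def prox_logistic(z, lam, tol=1e-10, max_iter=50):
    """Proximal operator of lam * rho with rho(t) = log(1 + e^t):
    the solution x of x + lam * sigmoid(x) = z, found by Newton's
    method. Works elementwise on arrays."""
    x = np.asarray(z, dtype=float).copy()
    for _ in range(max_iter):
        s = 1.0 / (1.0 + np.exp(-x))      # rho'(x) = sigmoid(x)
        f = x + lam * s - z               # stationarity residual
        fp = 1.0 + lam * s * (1.0 - s)    # derivative, always >= 1
        x -= f / fp
        if np.max(np.abs(f)) < tol:
            break
    return x
```

Since the residual's derivative is bounded below by one, the Newton iteration is well conditioned; for lam = 0 the prox reduces to the identity.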
In the regime where the MLE exists (see Figure 6), the variance map updates (13)–(14) converge (as the iteration count grows) to a unique fixed point. Note that by definition, this fixed point is the solution to our system (4) in three unknowns. From now on, we use the fixed point, so that the sequence of parameters is stationary; i.e., the parameters take the fixed-point values for all iterations.
With this stationary sequence of parameters, imagine now initializing the AMP iterations with a vector obeying
It is not hard to see that if the proposed AMP algorithm converges to a fixed point, then the fixed point obeys the MLE optimality conditions (see Appendix A). This provides some intuition as to why the above algorithm turns out to be useful in this context.
The crucial point is that we can study the properties of the MLE by studying the properties of the AMP iterates, with the proviso that they converge. It turns out that the study of this sequence is amenable to rigorous analysis because several transformations reduce the above algorithm to a generalized AMP algorithm [38], which in turn yields a characterization of the limiting variance of the AMP iterates: for any function as in Theorem 1, we have, asymptotically,
(15) 
where the coefficient variable is drawn from the limiting distribution (see Theorem 1) independently of the Gaussian variable, and the variance parameter is as above. To summarize, the asymptotic behavior of the AMP iterates can be characterized through a standard Gaussian variable, the limiting coefficient distribution, and the scalar quantities determined by the iteration (13)–(14). The description of our AMP algorithm and the large-sample properties of its iterates are understood only once we understand the behavior of the scalar sequences, known as the state evolution in the literature; this formalism was introduced in [5, 24, 25, 26]. From here on, an analysis similar to that in [55] establishes that, in the limit of large iteration counts, the AMP iterates converge to the MLE, that is,
which is the content of Theorem 1.
4.2 The distribution of a null coordinate
We sketch the proof of Theorem 2 in the case where the empirical limiting distribution of the coefficients has a point mass at zero. The analysis in the general case, where the number of vanishing coefficients is arbitrary, is very different and may be found in Appendix B.
Now consider Theorem 1 applied with an indicator function. Strictly speaking, this is a discontinuous function which is not pseudo-Lipschitz. However, we can work with a smooth approximation instead, obtained using standard techniques for smoothing an indicator function, such that the approximation error is arbitrarily small. For simplicity, we skip the technical details underlying this approximation and motivate the subsequent arguments using the indicator directly. Theorem 1 then yields
(16) 
Without loss of generality, assume that the first coordinates of the signal vanish, and partition the signal and the MLE accordingly. From the rotational distributional invariance of the features, it can be shown that the null block of the MLE is invariant in distribution under any fixed orthogonal transformation. Consequently, its direction is uniformly distributed on the unit sphere and is independent of its norm. Thus, any null coordinate has the same distribution as the product of the norm and a single coordinate of a uniform point on the sphere. From (16) and the weak law of large numbers, the norm concentrates, leading to the asymptotic normality of a null coordinate.
4.3 The distribution of the LRT
Once the distribution of the MLE for a null coordinate is known, the distribution of the LRT is a stone’s throw away, at least conceptually; that is to say, if we are willing to ignore some technical difficulties and leverage existing work. Indeed, following a reduction similar to that in [55], one can establish that
(17) 
where the quantity in question involves the negative log-likelihood with the variable under test removed, together with the corresponding MLE. Then, following an approach similar to that in [55, Appendix I], it can be established that the LLR is asymptotically a multiple of a chi-square variable, with the multiplicative factor determined by the solution to (4).
This rough analysis shows that the distribution of the LLR in high dimensions deviates from a chi-square due to the coupled effects of two high-dimensional phenomena. The first is the inflated variance of the MLE, which is larger than classically predicted. The second comes from a term that is approximately a quadratic form in the Hessian of the negative log-likelihood function. In the classical setting, this Hessian converges to a population limit; this is not the case in higher dimensions, and the greater spread in its eigenvalues also contributes to the magnitude of the LRT.
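To make the quantities in this section concrete, the following self-contained sketch (ours, for simulation purposes only; it assumes y coded 0/1 and that the MLE exists, with no safeguards against separated data) computes the log-likelihood-ratio statistic for dropping one variable with a plain Newton solver. Under the theory above, this statistic would be compared with a rescaled chi-square rather than with a chi-square.

```python
import numpy as np

def logistic_mle(X, y, iters=50):
    """Logistic MLE by Newton's method (y coded 0/1).
    Assumes the MLE exists; no safeguards for separated data."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (y - mu)                       # score vector
        H = X.T @ (X * (mu * (1.0 - mu))[:, None])  # observed information
        beta += np.linalg.solve(H, grad)
    return beta

def neg_loglik(X, y, beta):
    """Negative log-likelihood of the logistic model."""
    eta = X @ beta
    return np.sum(np.log1p(np.exp(eta)) - y * eta)

def lrt_statistic(X, y, j):
    """Twice the increase in minimized negative log-likelihood when
    the j-th variable is removed from the model."""
    full = neg_loglik(X, y, logistic_mle(X, y))
    Xr = np.delete(X, j, axis=1)
    reduced = neg_loglik(Xr, y, logistic_mle(Xr, y))
    return 2.0 * (reduced - full)
```

Because the models are nested, the statistic is always nonnegative; it is zero when the dropped variable carries no information given the rest.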
5 Broader Implications and Future Directions
This paper shows that in high dimensions, classical ML theory is invalid. Among other things, classical theory predicts that the MLE is approximately unbiased, when in reality it seriously overestimates effect magnitudes. Since the purpose of logistic modeling is to estimate the risk of a specific disease given a patient’s observed characteristics, say, the bias of the MLE is extremely problematic. As we have seen, an immediate consequence of the strong bias is that the MLE either dramatically overestimates or underestimates the chance of being sick. The issue becomes increasingly severe as the dimensionality or the signal strength, or both, increase. This, together with the fact that p-values computed from classical approximations are misleading, makes the case that routinely used statistical tools fail to provide meaningful inferences, from both an estimation and a testing perspective.
We have developed a new theory which gives the asymptotic distribution of the MLE and the LRT in a model with independent covariates. As seen in Section 2.5, our results likely hold for a broader range of feature distributions (i.e. other than Gaussian) and it would be important to establish this rigorously. Further, we have also shown how to adjust inference by plugging in an estimate of signal strength in our theoretical predictions. Although our method works extremely well, it would be of interest to study others as well.
Finally, we conclude this paper with two promising directions for future work: (1) It would be of great interest to develop corresponding results in the case where the predictors are correlated. (2) It would be of interest to extend the results from this paper to other generalized linear models.
Acknowledgements
P. S. was partially supported by the Ric Weiland Graduate Fellowship in the School of Humanities and Sciences, Stanford University. E. C. was partially supported by the Office of Naval Research under grant N000141612712, by the National Science Foundation via DMS 1712800, by the Math + X Award from the Simons Foundation and by a generous gift from Two Sigma. P. S. would like to thank Andrea Montanari and Subhabrata Sen for helpful discussions on AMP. We thank Małgorzata Bogdan, Nikolaos Ignatiadis and Chiara Sabatti for useful comments about an early version of the paper.
References
 [1] A. Albert and J. A. Anderson. On the existence of maximum likelihood estimates in logistic regression models. Biometrika, 71(1):1–10, 1984.
 [2] JA Anderson and SC Richardson. Logistic discrimination and bias correction in maximum likelihood estimation. Technometrics, 21(1):71–78, 1979.
 [3] Jean Barbier, Florent Krzakala, Nicolas Macris, Léo Miolane, and Lenka Zdeborová. Phase transitions, optimal errors and optimality of message-passing in generalized linear models. arXiv preprint arXiv:1708.03395, 2017.
 [4] Maurice S Bartlett. Properties of sufficiency and statistical tests. Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences, pages 268–282, 1937.
 [5] Mohsen Bayati and Andrea Montanari. The dynamics of message passing on dense graphs, with applications to compressed sensing. IEEE Transactions on Information Theory, 57(2):764–785, 2011.
 [6] Derek Bean, Peter J Bickel, Noureddine El Karoui, and Bin Yu. Optimal mestimation in highdimensional regression. Proceedings of the National Academy of Sciences, 110(36):14563–14568, 2013.
 [7] Alexandre Belloni, Victor Chernozhukov, and Ying Wei. Honest confidence regions for a regression parameter in logistic regression with a large number of controls. Technical report, cemmap working paper, Centre for Microdata Methods and Practice, 2013.
 [8] Peter J Bickel and JK Ghosh. A decomposition for the likelihood ratio statistic and the Bartlett correction – a Bayesian argument. The Annals of Statistics, pages 1070–1090, 1990.
 [9] Krzysztof Bogdan and Małgorzata Bogdan. On existence of maximum likelihood estimators in exponential families. Statistics: A Journal of Theoretical and Applied Statistics, 34(2):137–149, 2000.
 [10] George Box. A general distribution theory for a class of likelihood criteria. Biometrika, 36(3/4):317–346, 1949.
 [11] SB Bull, WW Hauck, and CMT Greenwood. Two-step jackknife bias reduction for logistic regression MLEs. Communications in Statistics – Simulation and Computation, 23(1):59–88, 1994.
 [12] Florentina Bunea et al. Honest variable selection in linear and logistic regression models via ℓ1 and ℓ1+ ℓ2 penalization. Electronic Journal of Statistics, 2:1153–1194, 2008.
 [13] Emmanuel Candès, Yingying Fan, Lucas Janson, and Jinchi Lv. Panning for gold: Modelfree knockoffs for highdimensional controlled variable selection. arXiv preprint arXiv:1610.02351, 2016.
 [14] John B Copas. Binary regression models for contaminated data. Journal of the Royal Statistical Society. Series B (Methodological), pages 225–265, 1988.
 [15] Gauss M Cordeiro. Improved likelihood ratio statistics for generalized linear models. Journal of the Royal Statistical Society. Series B (Methodological), pages 404–413, 1983.
 [16] Gauss M Cordeiro and Francisco CribariNeto. An introduction to Bartlett correction and bias reduction. Springer, 2014.
 [17] Gauss M Cordeiro, Franciso CribariNeto, Elisete CQ Aubin, and Silvia LP Ferrari. Bartlett corrections for oneparameter exponential family models. Journal of Statistical Computation and Simulation, 53(34):211–231, 1995.
 [18] Gauss M Cordeiro and Peter McCullagh. Bias correction in generalized linear models. Journal of the Royal Statistical Society. Series B (Methodological), pages 629–643, 1991.
 [19] Delphine S Courvoisier, Christophe Combescure, Thomas Agoritsas, Angèle GayetAgeron, and Thomas V Perneger. Performance of logistic regression modeling: beyond the number of events per variable, the role of data structure. Journal of clinical epidemiology, 64(9):993–1000, 2011.
 [20] Thomas M Cover. Geometrical and statistical properties of linear threshold devices. Ph.D. thesis, May 1964.
 [21] Thomas M Cover. Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition. IEEE transactions on electronic computers, (3):326–334, 1965.
 [22] Francisco Cribari-Neto and Gauss M Cordeiro. On Bartlett and Bartlett-type corrections. Econometric Reviews, 15(4):339–367, 1996.
 [23] David Donoho and Andrea Montanari. High dimensional robust Mestimation: Asymptotic variance via approximate message passing. Probability Theory and Related Fields, pages 1–35, 2013.
 [24] David L Donoho, Arian Maleki, and Andrea Montanari. Messagepassing algorithms for compressed sensing. Proceedings of the National Academy of Sciences, 106(45):18914–18919, 2009.
 [25] David L Donoho, Arian Maleki, and Andrea Montanari. Message passing algorithms for compressed sensing: I. motivation and construction. In Information Theory (ITW 2010, Cairo), 2010 IEEE Information Theory Workshop on, pages 1–5. IEEE, 2010.
 [26] David L Donoho, Arian Maleki, and Andrea Montanari. The noisesensitivity phase transition in compressed sensing. IEEE Transactions on Information Theory, 57(10):6920–6941, 2011.
 [27] Noureddine El Karoui. Asymptotic behavior of unregularized and ridgeregularized highdimensional robust regression estimators: rigorous results. arXiv preprint arXiv:1311.2445, 2013.
 [28] Noureddine El Karoui. On the impact of predictor geometry on the performance on highdimensional ridgeregularized generalized robust regression estimators. Probability Theory and Related Fields, pages 1–81, 2017.
 [29] Noureddine El Karoui, Derek Bean, Peter J Bickel, Chinghway Lim, and Bin Yu. On robust regression with highdimensional predictors. Proceedings of the National Academy of Sciences, 110(36):14557–14562, 2013.
 [30] Jianqing Fan and Jinchi Lv. Nonconcave penalized likelihood with NPdimensionality. IEEE Transactions on Information Theory, 57(8):5467–5484, 2011.
 [31] Yingying Fan, Emre Demirkaya, and Jinchi Lv. Nonuniformity of pvalues can occur early in diverging dimensions. https://arxiv.org/abs/1705.03604, May 2017.
 [32] David Firth. Bias reduction of maximum likelihood estimates. Biometrika, 80(1):27–38, 1993.
 [33] Shelby J Haberman. The analysis of frequency data, volume 4. University of Chicago Press, 1977.
 [34] Xuming He and QiMan Shao. On parameters of increasing dimensions. Journal of Multivariate Analysis, 73(1):120–135, 2000.
 [35] David W Hosmer Jr, Stanley Lemeshow, and Rodney X Sturdivant. Applied logistic regression, volume 398. John Wiley & Sons, 2013.
 [36] Peter J Huber. Robust regression: asymptotics, conjectures and Monte Carlo. The Annals of Statistics, pages 799–821, 1973.
 [37] Jana Janková and Sara van de Geer. Honest confidence regions and optimality in highdimensional precision matrix estimation. Test, 26(1):143–162, 2017.
 [38] Adel Javanmard and Andrea Montanari. State evolution for general approximate message passing algorithms, with applications to spatial coupling. Information and Inference: A Journal of the IMA, 2(2):115–144, 2013.
 [39] Dennis E Jennings. Judging inference adequacy in logistic regression. Journal of the American Statistical Association, 81(394):471–476, 1986.
 [40] Sham Kakade, Ohad Shamir, Karthik Sindharan, and Ambuj Tewari. Learning exponential families in highdimensions: Strong convexity and sparsity. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 381–388, 2010.
 [41] DN Lawley. A general method for approximating to the distribution of likelihood ratio criteria. Biometrika, 43(3/4):295–303, 1956.
 [42] Erich L Lehmann and Joseph P Romano. Testing statistical hypotheses. Springer Science & Business Media, 2006.
 [43] Peter McCullagh and James A Nelder. Generalized linear models. Monograph on Statistics and Applied Probability, 1989.
 [44] GJ McLachlan. A note on bias correction in maximum likelihood estimation with logistic discrimination. Technometrics, 22(4):621–627, 1980.
 [45] Lawrence H Moulton, Lisa A Weissfeld, and Roy T St Laurent. Bartlett correction factors in logistic regression models. Computational statistics & data analysis, 15(1):1–11, 1993.
 [46] Peter Peduzzi, John Concato, Elizabeth Kemper, Theodore R Holford, and Alvan R Feinstein. A simulation study of the number of events per variable in logistic regression analysis. Journal of clinical epidemiology, 49(12):1373–1379, 1996.
 [47] Stephen Portnoy. Asymptotic behavior of M-estimators of p regression parameters when p²/n is large. I. Consistency. The Annals of Statistics, pages 1298–1309, 1984.
 [48] Stephen Portnoy. Asymptotic behavior of M-estimators of p regression parameters when p²/n is large; II. Normal approximation. The Annals of Statistics, pages 1403–1417, 1985.
 [49] Stephen Portnoy et al. Asymptotic behavior of likelihood methods for exponential families when the number of parameters tends to infinity. The Annals of Statistics, 16(1):356–366, 1988.
 [50] Sundeep Rangan. Generalized approximate message passing for estimation with random linear mixing. In IEEE International Symposium on Information Theory, pages 2168–2172. IEEE, 2011.
 [51] Mark Rudelson, Roman Vershynin, et al. HansonWright inequality and subgaussian concentration. Electron. Commun. Probab, 18(82):1–9, 2013.
 [52] Thomas J Santner and Diane E Duffy. A note on a. albert and ja anderson’s conditions for the existence of maximum likelihood estimates in logistic regression models. Biometrika, 73(3):755–758, 1986.
 [53] Robert L Schaefer. Bias correction in maximum likelihood logistic regression. Statistics in Medicine, 2(1):71–78, 1983.
 [54] Mervyn J Silvapulle. On the existence of maximum likelihood estimators for the binomial response models. Journal of the Royal Statistical Society. Series B (Methodological), pages 310–313, 1981.
 [55] Pragya Sur, Yuxin Chen, and Emmanuel J Candès. The likelihood ratio test in highdimensional logistic regression is asymptotically a rescaled chisquare. arXiv preprint arXiv:1706.01191, 2017.
 [56] Sara Van de Geer, Peter Bühlmann, Ya’acov Ritov, Ruben Dezeure, et al. On asymptotically optimal confidence regions and tests for highdimensional models. The Annals of Statistics, 42(3):1166–1202, 2014.
 [57] Sara A Van de Geer et al. Highdimensional generalized linear models and the lasso. The Annals of Statistics, 36(2):614–645, 2008.
 [58] A. W. Van der Vaart. Asymptotic statistics, volume 3. Cambridge university press, 2000.
 [59] Eric Vittinghoff and Charles E McCulloch. Relaxing the rule of ten events per variable in logistic and cox regression. American journal of epidemiology, 165(6):710–718, 2007.
 [60] RWM Wedderburn. On the existence and uniqueness of the maximum likelihood estimates for certain generalized linear models. Biometrika, 63(1):27–32, 1976.
 [61] Samuel S Wilks. The largesample distribution of the likelihood ratio for testing composite hypotheses. The Annals of Mathematical Statistics, 9(1):60–62, 1938.