
# Shrinkage Estimation in Multilevel Normal Models

Carl N. Morris (morris@stat.harvard.edu) is Professor of Statistics, Department of Statistics, Harvard University, Cambridge, Massachusetts 02138, USA. Martin Lysy (lysy@stat.harvard.edu) is PhD student, Department of Statistics, Harvard University, Cambridge, Massachusetts 02138, USA.
###### Abstract

This review traces the evolution of theory that started when Charles Stein in 1955 [In Proc. 3rd Berkeley Sympos. Math. Statist. Probab. I (1956) 197–206, Univ. California Press] showed that using each separate sample mean from k\geq 3 Normal populations to estimate its own population mean \mu_{i} can be improved upon uniformly for every possible \mu=(\mu_{1},\ldots,\mu_{k})^{\prime}. The dominating estimators, referred to here as being “Model-I minimax,” can be found by shrinking the sample means toward any constant vector. Admissible minimax shrinkage estimators were derived by Stein and others as posterior means based on a random effects model, “Model-II” here, wherein the \mu_{i} values have their own distributions. Section 2 centers on Figure 2, which organizes a wide class of priors on the unknown Level-II hyperparameters that have been proved to yield admissible Model-I minimax shrinkage estimators in the “equal variance case.” Putting a flat prior on the Level-II variance is unique in this class for its scale-invariance and for its conjugacy, and it induces Stein’s harmonic prior (SHP) on \mu_{i}.

Component estimators with real data, however, often have substantially “unequal variances.” While Model-I minimaxity is achievable in such cases, this standard requires estimators to have “reverse shrinkages,” as when the large variance component sample means shrink less (not more) than the more accurate ones. Section 3 explains how Model-II provides appropriate shrinkage patterns, and investigates especially estimators determined exactly or approximately from the posterior distributions based on the objective priors that produce Model-I minimaxity in the equal variances case. While correcting the reversed shrinkage defect, Model-II minimaxity can hold for every component. In a real example of hospital profiling data, the SHP prior is shown to provide estimators that are Model-II minimax, and posterior intervals that have adequate Model-II coverage, that is, both conditionally on every possible Level-II hyperparameter and for every individual component \mu_{i}, i=1,\ldots,k.

Statistical Science, Volume 27, Issue 1 (2012), pages 115–134. DOI: 10.1214/11-STS363.

Keywords: hierarchical model, empirical Bayes, unequal variances, Model-II evaluations, Stein’s harmonic prior.

## 1 Introduction: Stein and Shrinkage Estimation

Charles Stein stein55 () stunned the statistical world by showing that estimating k population means \mu=(\mu_{1},\ldots,\mu_{k})^{\prime} with their sample means y=(y_{1},\ldots,y_{k})^{\prime} is inadmissible. That result assumes k\geq 3 independent Normal distributions and a sum of mean squared component errors risk function. With Willard James jamesstein61 (), he provided a specific shrinkage estimator, the James–Stein estimator, which dominates the sample mean vector very substantially.

This first section introduces the history of the James–Stein minimax estimator and its extensions when “equal variances” prevail, with “Model-I” evaluations that are conditional on \mu. However, Model-I does not allow certain practical needs to be met, such as valid confidence intervals. Section 2 shows how this has been rectified by enlarging Model-I to “Model-II,” wherein random effects distributions are assigned in Level-II. The resulting framework enables repeated sampling (frequency based) interval estimates efronmorris75 (), morris83 () and frees practitioners from determining and specifying valid relative weights for each squared-error component loss, upon which Model-I minimax estimators depend critically. Model-II even supports developing admissible minimax shrinkage estimators via posterior mean calculations by simplifying the specification of prior distributions, proper and otherwise, on the Level-II parameters. The centerpiece of Section 2 is Figure 2, which graphically organizes some priors on the Level-II variance that lead to minimax estimators. Stein’s harmonic prior (SHP) in Figure 2 corresponds to an admissible shrinkage estimator that provides acceptable frequency coverage intervals in Model-II evaluations.

Section 3 reviews the unequal variances case that arises regularly in practice, but for which mathematical evaluations are difficult. The previous sections are meant especially to provide the background needed for more research on the operating characteristics in repeated sampling of unequal variances procedures, while Section 3 shows why that is needed. It is shown in Section 3 why, in substantially unequal variances settings, the Model-II random effects framework works well while the Model-I perspective provides inappropriate (“reversed”) shrinkage patterns. The SHP prior in that setting leads to estimators and formal posterior intervals for \mu_{i}, i=1,\ldots,k, whose Model-II frequency coverage for each individual \mu_{i} appears to approximate or exceed the nominal 95\% level (i.e., approximate or conservative frequency confidence intervals), no matter what the true Level-II variance.

The data in Table 1 provide an equal variance example based on a 1992 medical profiling evaluation of k=10 New York hospitals. We are to consider these as Normally-distributed indices of successful outcome rates for patients at these 10 hospitals following coronary artery bypass graft (CABG) surgeries. The indices are centered so that the New York statewide average outcome over all hospitals lies near 0. Larger estimates y_{i} indicate hospitals that performed better for these surgeries. For example, Hospital 10 was more than 2 standard deviations above the statewide mean. All 10 sample means have nearly the same variances, which we have scaled so the common variance is about V=1.00. The variances V_{i} must be the same in order to meet the equal variance assumption upon which the James–Stein estimator is based. This “equal variance” case enables various mathematical calculations that are difficult, if not impossible, for the widely encountered “unequal variances” situation.

The vector of sample means y has total mean squared error (risk) as an estimator of \mu given by

 E\sum_{i=1}^{k}(y_{i}-\mu_{i})^{2}=\sum_{i=1}^{k}V_{i}=kV.

This unbiased estimator is minimax, since its constant risk is the limit of the risks of a sequence of proper Bayes’ rules (see, e.g., Theorem 18 of Chapter 5 in berger85 ()).

In the simplest situation, the James–Stein estimator “shrinks” y_{i} toward an arbitrarily preassigned constant \mu_{0}. It is appropriate to set \mu_{0}=0 in this case because we have recentered the CABG indices to have NY statewide mean equal to 0. Then with \mu_{0}=0, the sum of squared residuals for these data,

 S=\sum_{i=1}^{10}y_{i}^{2}/V=11.62,

would have a \chi^{2}_{(10)} distribution under the hypothesis that all values of \mu_{i}\equiv\mu_{0}=0, so these data fail to reject that null hypothesis at even the 30% level. However, most members of the medical community would not believe that all hospitals are equally effective, and many in the statistical community would be reluctant to think that the first and last hospitals in the list, whose quality estimates differ by more than 2 standard deviations from 0, should be declared to have the same underlying quality as all the others.
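The test computation above can be reproduced in a few lines; this is a sketch assuming scipy, with S=11.62 and k=10 taken from the text.

```python
# Chi-square test of H0: all mu_i = mu0 = 0, using S = 11.62 with k = 10
# (values from the text; this is an illustrative check, not the authors' code).
from scipy.stats import chi2

S, k = 11.62, 10
p_value = chi2.sf(S, df=k)  # P[chi2_(10) >= 11.62] under H0
print(p_value)              # a bit above 0.30, so H0 is not rejected
```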

On the other hand, S isn’t far from its expectation k=10 if all the \mu_{i} are 0, and some extreme rates would occur, at least in part, because of randomness. Thus, regression-toward-the-mean (RTTM), that is, shrinkage toward \mu_{0}, would be expected if more data were to appear for these hospitals.

RTTM is anticipated if one believes that there is some similarity among the hospitals, and that sampling variation is part of the reason for the extreme hospitals. That is, the hospital with the highest quality index with y_{10}=2.05 probably has a true mean \mu_{10} smaller than 2.05 because

 E\Bigl[\max_{1\leq i\leq k}y_{i}\big|\mu\Bigr]>\max_{1\leq i\leq k}E[y_{i}|\mu_{i}]=\max_{1\leq i\leq k}\mu_{i}\geq\mu_{10}

(by Jensen’s inequality and convexity of the maximum function). So we expect in this case that the observed maximum y_{10}=2.05 exceeds \mu_{10}, and a shrunken estimator is in order. The two-level Model-II, soon to be described, anticipates and models RTTM, leading to shrinkage estimation.

Following earlier notation set in a series of papers by Efron and Morris, for example, efronmorris75 (), about Stein’s estimator and its generalizations, we denote shrinkage factors by the letter B (often with subscripts). The James–Stein shrinkage coefficient for this setting is calculated as

 \hat{B}_{\mathrm{JS}}=(k-2)/S,

which for these data is \hat{B}_{\mathrm{JS}}=8/11.62=0.688. This estimator then shrinks the usual unbiased estimates y_{i} toward \mu_{0}=0 according to

 \hat{\mu}_{\mathrm{JS},i}=(1-\hat{B}_{\mathrm{JS}})y_{i}+\hat{B}_{\mathrm{JS}}\mu_{0}=(1-\hat{B}_{\mathrm{JS}})y_{i}.

Based on this shrinkage estimate, future observations are being predicted to regress about 68.8\% of the way toward 0. Column 5 of Table 1 lists the shrunken values

 (1-0.688)\times y_{i}+0.688\times 0=0.312\times y_{i}

for each hospital, the James–Stein estimate of the mean. For example, the estimate of Hospital 10’s quality index is reduced from 2.05 standard deviations above the New York mean to 0.64 standard deviations. The RTTM effect is strong for these 10 hospitals, which are estimated to be more similar than different, with only 31.2% of the weight allocated to each hospital’s own estimate. Figure 1 illustrates the shrinkage pattern.

The parameter \mu_{i} can be thought of as the quality index that would result for hospital i if that hospital theoretically could have performed a huge number of CABG surgeries in 1992. Whether the JS estimator of quality is a better estimator of \mu than y for these data cannot be guaranteed because the true values of \mu aren’t known. However, one can calculate an unbiased estimator of the expected risk (i.e., for sum of squared errors) of the JS estimator jamesstein61 (). This unbiased estimator of the risk is

 \hat{R}=V\bigl{(}k-(k-2)\hat{B}_{\mathrm{JS}}\bigr{)},

a function of y only through S. That y is inadmissible and that the JS estimator is “minimax” (risk never exceeds kV) both follow because \hat{B}_{\mathrm{JS}}>0 for all data sets, so the risk of \hat{\mu}_{\mathrm{JS}}=(\hat{\mu}_{\mathrm{JS},1},\ldots,\hat{\mu}_{\mathrm{JS},k})^{\prime} as a function of \mu satisfies

 E[\hat{R}|\mu]=V\bigl(k-(k-2)E[\hat{B}_{\mathrm{JS}}|\mu]\bigr)<kV.

For these data, \hat{R}=1.00\times(10-8\times 0.688)=4.496. This is a large reduction in mean squared error, less than half of kV=10, the risk of the separate unshrunken estimates y_{i}. In fact, the smallest possible value of the risk for the JS estimator is 2V, when \mu=0, for any value of k\geq 3, thus offering very substantial possible improvements on y.
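The James–Stein quantities above can be sketched in code (assuming numpy; only S=11.62 and k=10 come from the text), including a Monte Carlo check of the 2V-risk claim at \mu=0.

```python
import numpy as np

# James-Stein shrinkage toward mu0 = 0; S = 11.62 and k = 10 from the text.
k, V, S = 10, 1.0, 11.62
B_js = (k - 2) / S                    # 0.688..., the shrinkage factor
R_hat = V * (k - (k - 2) * B_js)      # about 4.49 (the text's 4.496 uses the
                                      # rounded B = 0.688)

# Monte Carlo: at mu = 0 the James-Stein risk equals 2V for any k >= 3.
rng = np.random.default_rng(0)
ys = rng.normal(size=(50_000, k))             # draws of y given mu = 0, V = 1
Ss = (ys ** 2).sum(axis=1)
est = (1 - (k - 2) / Ss)[:, None] * ys        # JS estimates, one row per draw
mse = ((est - 0.0) ** 2).sum(axis=1).mean()   # should be near 2V = 2
print(round(B_js, 3), round(mse, 2))
```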

The JS estimator can be extended in the equal variance setting to cover more general situations. For example, as Stein and others showed, one can shrink the y_{i} toward the grand mean of the data, \overline{y}=\sum y_{i}/k. With these hospital data this would shrink toward \overline{y}=0.272 (which differs from the assumed mean 0 by less than one standard error of the overall average for the 10 hospitals). More generally, if along with each y_{i} one collects an r-dimensional covariate vector x_{i} (possibly including an intercept component), each y_{i} can be shrunk toward its regression prediction x_{i}^{\prime}b, where

 b=(X^{\prime}X)^{-1}X^{\prime}y

and X is the k\times r covariate matrix with rows x_{i}^{\prime}, i=1,\ldots,k. Doing this forfeits r degrees of freedom, so that

 \hat{B}_{\mathrm{JS}}=(k-r-2)/S,

with S now replaced by

 S=\sum_{i=1}^{k}(y_{i}-x_{i}^{\prime}b)^{2}/V.

The James–Stein estimates of the \mu_{i} then become

 \hat{\mu}_{\mathrm{JS},i}=(1-\hat{B}_{\mathrm{JS}})y_{i}+\hat{B}_{\mathrm{JS}}x_{i}^{\prime}b=(1-\hat{B}_{\mathrm{JS}})(y_{i}-x_{i}^{\prime}b)+x_{i}^{\prime}b.

Writing \hat{\mu}_{\mathrm{JS},i} this way suggests that shrinking with r>0 does not affect the r-dimensional regression space, but only shrinks toward 0 in the k-r dimensional space orthogonal to it. Indeed, the problem can be “rotated” to an equivalent one in which the last r values of the residuals y_{i}-x_{i}^{\prime}b are all equal to 0, regardless of the value of y, for example, Stein stein66 (). The example just considered, with shrinkage toward zero, shows what happens to the residuals when shrinkage is toward a regression model.
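A sketch of the regression version (synthetic data; not the authors' code): shrinking toward x_{i}^{\prime}b with \hat{B}_{\mathrm{JS}}=(k-r-2)/S leaves the fitted regression untouched, illustrated here with an intercept-only fit (r=1), so the y_{i} shrink toward their grand mean.

```python
import numpy as np

def js_toward_regression(y, X, V=1.0):
    """James-Stein shrinkage toward the least-squares fit X b."""
    k, r = X.shape
    b, *_ = np.linalg.lstsq(X, y, rcond=None)  # b = (X'X)^{-1} X'y
    fitted = X @ b
    S = np.sum((y - fitted) ** 2) / V          # residual sum of squares / V
    B = (k - r - 2) / S                        # r degrees of freedom forfeited
    return (1 - B) * (y - fitted) + fitted, B

# Synthetic example: k = 12 units, intercept-only regression (r = 1).
rng = np.random.default_rng(1)
y = rng.normal(loc=0.3, scale=1.0, size=12)
X = np.ones((12, 1))
mu_hat, B = js_toward_regression(y, X)
# Shrinkage leaves the regression space (here, the grand mean) untouched:
print(np.allclose(mu_hat.mean(), y.mean()))  # -> True
```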

Of course V needn’t be 1.00, or even be known, provided there exists an independent Chi-square estimate of V. While that can be handled straightforwardly in the equal variance case stein66 (), it will not be a central issue in any case if the degrees of freedom are substantial.

Using the JS estimator seems easy and powerful, but many complicating issues arise in practice:

1. What is the standard error of each individual estimate? One hopes the JS estimator for Hospital 10 improves y_{10}=2.05 (with standard deviation 1.00) by using the better estimate \hat{\mu}_{10}=0.64. The sum of individual risks has decreased from 10 to 4.5 for all 10 hospitals, but this does not mean the variance for each individual estimate has dropped to 0.45. Furthermore, the JS estimator cannot even guarantee that every component (hospital) has a smaller risk (expected squared error) as a function of \mu. Such an improvement is impossible because each individual y_{i} is an admissible estimate of its own \mu_{i}, in one dimension. Rather, minimaxity of the JS estimator for sum of squared errors is accomplished by “balancing” or “trading off” component risks. Components with mean square errors that exceed V are guaranteed to have their risks more than offset by risk improvements on the remaining components. The minimaxity claim (improvement on the unshrunken vector of unbiased estimates) is for aggregate risk, and not for every component.

2. Why, even in this equal variance case, should the loss function be an unweighted sum of squares? In applications the loss function could require different relative weights to reflect unequal economic loss for the mean squared errors of different components (hospitals, here). That is, the appropriate loss function could be

 L(\hat{\mu},\mu)=\sum_{i=1}^{k}W_{i}(\mu_{i}-\hat{\mu}_{i})^{2}

for some appropriate weights W_{1},\ldots,W_{k}>0.

Users of the James–Stein estimator typically assume that all W_{i} are equal in assessing its risk benefits. But would NY hospital administrators agree that hospital errors can be traded off with equal weights? Perhaps weights should differ for teaching hospitals, or for military hospitals, or for children’s or other specialty hospitals, or for hospitals in areas far from medical centers, or for large hospitals. The difficulty of getting agreement on that issue has arisen with various real shrinkage applications. Even if the administrators could agree on the values of the W_{i}, the James–Stein estimator would not dominate y when the W_{i} are sufficiently unequal. There is a way out that seems reassuring, at first, because a shrinkage estimator can be found to dominate y for any given weights W_{i}. But there is a rub. The dominating estimator for a set of weights depends on the specified weights, and then it cannot be expected to dominate y for a different set of weights. Only the unshrunken estimator y can be guaranteed to be minimax independently of the weights W_{i}. Its risk, the minimax risk, is V\sum_{i=1}^{k}W_{i}. More on this in Section 3.

3. Even with equal weights, W_{i}\,{\equiv}\,1, another problem arising in practice and in the theory is that \hat{B}_{\mathrm{JS}} can exceed 1. A (uniformly) better shrinkage constant uses \min(1,\hat{B}_{\mathrm{JS}}) instead and easily is seen to reduce the total risk. That change necessitates developing a new unbiased estimator of risk. This was made possible, and easy, by a simple calculus pioneered by Stein stein73 (), stein81 () and independently by Berger’s integration by parts technique berger76 ().

This truncated shrinkage estimator’s improvement shows that the James–Stein estimator is itself inadmissible. The improved truncated estimator also is inadmissible, as it has a discontinuous derivative, while admissible estimators must be smooth, possessing all their derivatives as functions of the data. The search for admissible estimators began soon after the James–Stein estimator, for example, Stein jamesstein61 () and Brown brown66 ().

4. We already have noted that there is no agreed-upon way to estimate the component variances of the JS estimator. Correspondingly, there is no way to determine separate confidence intervals for each \mu_{i}. Confidence ellipsoids, for example, Stein stein62 () and Brown brown09 (), can be and have been developed for the equal variance setting. However, ellipsoids may be unattractive to a data analyst who has the alternative of estimating with y_{i} and using V^{1/2} as the standard error, with a corresponding exact confidence interval for each component obtained via the Normal distribution. Unfortunately, only aggregates (ellipsoidal sets in this context) can provide uniformly better coverage if coverage must hold conditionally on the underlying \mu for all \mu, that is, with Model-I evaluations. There is no agreed upon component-wise procedure for standard errors and intervals for individual components \mu_{i} simply because no such procedure is possible as a function of \mu. This problem (and others too) can be rectified only via acceptance of a two-level, random effects model referred to here as Model-II.

5. The overriding difficulty for the JS estimator as a practical tool for data analysts is that, except for data produced by carefully designed experiments, real data rarely occur with equal variances V_{i}=V. Even the hospital data of Table 1 do not have exactly the same variances. The first author has participated in developing and in using shrinkage techniques for hospital profiling and for other applications (e.g., morris83 (), christiansenmorris97 ()) without ever seeing hospital or medical data with equal variances, simply because hospital caseloads (numbers of patients) vary considerably. For this initial discussion to illustrate the JS estimator and related shrinkage procedures in the equal variances setting, we have picked 10 of the 31 hospitals (the 31 to be described later) that had similar variances. These 10 each have sample sizes within 15% of 550 patients.
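Returning to item 3, the positive-part truncation \min(1,\hat{B}_{\mathrm{JS}}) can be illustrated by simulation; at \mu=0 the truncated estimator's squared error is smaller draw by draw (a sketch assuming numpy, not the authors' code).

```python
import numpy as np

# Positive-part truncation: replacing B_JS with min(1, B_JS). At mu = 0 the
# truncation bites whenever S < k - 2, and can only reduce the squared error.
rng = np.random.default_rng(2)
k, n_sim = 10, 40_000
mu = np.zeros(k)
y = rng.normal(mu, 1.0, size=(n_sim, k))
S = (y ** 2).sum(axis=1)
B = (k - 2) / S
js = (1 - B)[:, None] * y                        # plain James-Stein
jsp = (1 - np.minimum(1.0, B))[:, None] * y      # truncated ("positive part")
risk_js = ((js - mu) ** 2).sum(axis=1).mean()
risk_jsp = ((jsp - mu) ** 2).sum(axis=1).mean()
print(risk_jsp <= risk_js)                       # truncation never hurts here
```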

## 2 Theoretical and Bayesian Developments for the Equal Variance Case

This section reviews expansion of the assumptions of “Model-I” to a two-level model, “Model-II,” which at Level-II includes a random effects model on the \mu_{i}, with the Level-II parameters unknown but estimable from the data. Model-II and Stein’s harmonic prior (SHP), to be introduced in this section, will be especially important as a basis for developing frequency procedures in the difficult unequal variances situation of Section 3. After briefly introducing the unequal variances case in this section, the equal variances setting is studied because of its relatively easy calculations. This enables Bayesian analysis that uses formal priors on the Level-II parameters that produce shrinkage estimators as posterior means. In the equal variance setting, many of these estimators have been proven to be minimax (some also are admissible) in the original Model-I sense of Stein, that is, for total square error loss and for every possible mean vector \mu. The centerpiece of this section is Figure 2, which displays graphically certain famous distributions on the Level-II variance A that are known to provide minimax shrinkage estimators. Of central importance is Stein’s harmonic prior (SHP) on \mu, which stems from imposing an improper flat prior on A and yields an admissible, minimax modification of the James–Stein estimator. This SHP shrinkage estimator leads to posterior interval estimates that meet confidence requirements for coverage in Model-II evaluations.

A generalization of the James–Stein estimator almost always is required in practice because the unequal variance situation arises, and also because data analysts often must provide interval estimates. Uniform risk dominance as a function of \mu will be seen in Section 3 to require inappropriate (reversed) shrinkage patterns in practice. Of course, shrinkage methods are used commonly in applications, almost always being based on a two-level random effects model, the \mu_{i} being random effects with their own distributions. Such models belong to frequentists and Bayesians alike, known as hierarchical models, multilevel models, empirical Bayes models and by other terms. Table 2 shows one such model with Normally distributed observations (Level-I), and Normally distributed random effects (Level-II). The two columns, that is, the Descriptive and the Inferential versions of the model, are equivalent in that both sides give rise to the same joint distributions of the data and the random effects, (y,\mu), given the hyperparameter \alpha that governs the joint distribution. These models allow “unequal variances” V_{i}, perhaps because V_{i}=\sigma^{2}/n_{i} with different sample sizes. That anticipates Section 3, but in this “equal variances” Section we always assume V_{i}\equiv V.

In what follows, Model-I will refer to the distribution of y|\mu at Level-I of Table 2 which treats \mu as the unknown parameter, whereas Model-II will refer to the random effects model combining Model-I and the Level-II distribution of \mu|\alpha, which has unknown parameter \alpha=(\beta,A). Model-III will refer to the fully Bayesian model embracing all Levels I, II and III for a single prior \pi(\alpha) on \alpha, and is used here primarily to construct Bayes rules to be evaluated via the assumptions of Model-I or Model-II, in the frequency sense for all y.

If the hyperparameter \alpha=(\beta,A) were known in Model-II of Table 2, one would use the Level-II distribution of \mu|{y,\alpha} in Table 2 to make inferences about each component value \mu_{i}. For squared error loss the best estimator of \mu_{i} then would be the posterior mean, which estimates \mu_{i} by using the shrinkage factor

 B_{i}=\frac{V_{i}}{V_{i}+A}

to compromise between the prior mean x_{i}^{\prime}\beta and the sample mean y_{i}.
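If \alpha=(\beta,A) were known, the posterior mean is immediate; a minimal sketch with illustrative (hypothetical) numbers:

```python
def posterior_mean(y_i, prior_mean, V_i, A):
    # B_i = V_i / (V_i + A) weights the prior mean against the sample mean
    B_i = V_i / (V_i + A)
    return (1 - B_i) * y_i + B_i * prior_mean

# With V_i = A = 1, B_i = 1/2, so the estimate sits halfway between
# y_i = 2.0 and the prior mean 0:
print(posterior_mean(2.0, 0.0, V_i=1.0, A=1.0))   # -> 1.0
```

Larger V_i (less accurate components) produce larger B_i and hence more shrinkage toward the prior mean, anticipating the unequal-variances discussion of Section 3.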

While shrinkages needn’t arise for many distributions that one could choose for Level-II, they do with the Normal distribution on \mu_{i} because the Normal distribution on \mu in Level-II is conjugate to the Normal Level-I likelihood. Conjugate priors at Level-II lead to linear posterior means and shrinkage coefficients for the Normal and for other exponential family models too; see, for example, Diaconis and Ylvisaker diaconisylvisaker79 () and Morris and Lock morrislock09 (). They also are the “G_{2} minimax” choice for Level-II jacksonetal70 (), morris83c ().

With k>r+2 components, it is not required to assume \alpha=(\beta,A) is known because information builds up through the k observations y_{i}, whose distributions are governed by their shared dependence on (\beta,A) via the likelihood function given by the right half of Level-I in Table 2. For the rest of this section we focus on the simplest case of the Table 2 model with \beta=0 (r=0). Thus, \alpha=A is the only unknown hyperparameter. With equal variances, studying the case \beta=0 is much less restrictive than it might seem because use of the orthogonality trick described in Section 1 allows developments for \beta=0 to be extended back to the case with \beta unknown.

Early work on the equal variance case strongly emphasized Model-I squared error evaluations made conditionally on \mu. Even so, it was realized, for example, Stein stein66 (), that if one also assumes Model-II, then it is easy to motivate shrinkage estimators and the JS estimator, since one can estimate A by considering the likelihood of A, or, equivalently, of B_{i}\equiv B. The likelihood of B follows from the marginal distribution of y|B in the inferential column of Table 2, which has the form of a Gamma density, but conditioned on B\leq 1,

 L(B)=B^{k/2}\exp(-BS/2).

Because of the equal variance assumption, L(B) only depends on the 1-dimensional sufficient statistic for B in the model for y|A:

 S=\sum_{i=1}^{k}y_{i}^{2}/V.

The maximum likelihood estimate of B is \hat{B}=k/S. However, B (not A) enters linearly in E[\mu_{i}|y], and by noting that

 S|B\sim B^{-1}\chi^{2}_{(k)},

one sees that the James–Stein shrinkage estimate \hat{B}_{\mathrm{JS}}=(k-2)/S is the best unbiased estimate of B. Both of these estimates lead to shrinkage or “empirical Bayes” estimators of \mu_{i} via substituting k/S or (k-2)/S for the shrinkage B, where B appears in

 E[\mu_{i}|{y,B}]=(1-B)y_{i}.
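Both estimates of B can be checked by simulation from S|B\sim B^{-1}\chi^{2}_{(k)}; a sketch (assuming numpy) confirming that (k-2)/S is unbiased for B while the MLE k/S is biased upward by the factor k/(k-2):

```python
import numpy as np

rng = np.random.default_rng(3)
k, B_true = 10, 0.4
S = rng.chisquare(k, size=400_000) / B_true   # S | B ~ B^{-1} chi2_(k)
unbiased = np.mean((k - 2) / S)               # should be near B_true = 0.4
mle_mean = np.mean(k / S)                     # near k B / (k - 2) = 0.5
print(round(unbiased, 3), round(mle_mean, 3))
```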

Minimaxity of these and of other shrinkage estimators can be checked via Baranchik’s minimax theorem, from his 1964 dissertation baranchik64 () under Stein. Assume the equal variance Normal setting of Table 2, r=0, k\geq 3, and Model-I only. Suppose an estimator shrinks its k components toward 0 based on a shrinkage factor of the form

 \hat{B}(S)=u(S)/S,

with u(S) nondecreasing and with 0\leq u(S)\leq 2(k-2). Then the estimator is minimax for total mean squared error risk under Model-I, that is, with risk at most kV for all \mu. A similar but more general condition for minimaxity that lets u(S) be decreasing also exists efronmorris76 (). These minimaxity conditions easily extend to include shrinkage toward a fitted r>0 dimensional subspace by making S be the residual sum of squares and by accounting for the loss of r degrees of freedom, so then 0\leq u(S)\leq 2(k-r-2) is required.
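Baranchik's condition is easy to check on a grid; in this sketch (our helper, not from the paper) u(S)=k-2 recovers the James–Stein estimator and u(S)=\min(S,k-2) the positive-part version:

```python
import numpy as np

def satisfies_baranchik(u, k, S_grid):
    """Check u(S) nondecreasing and 0 <= u(S) <= 2(k-2) on a grid of S values."""
    vals = np.array([u(S) for S in S_grid])
    nondecreasing = np.all(np.diff(vals) >= 0)
    bounded = np.all((vals >= 0) & (vals <= 2 * (k - 2)))
    return bool(nondecreasing and bounded)

k = 10
S_grid = np.linspace(0.01, 100, 2000)
u_js = lambda S: k - 2            # James-Stein: B_hat = (k-2)/S
u_pp = lambda S: min(S, k - 2)    # positive part: B_hat = min(1, (k-2)/S)
print(satisfies_baranchik(u_js, k, S_grid),
      satisfies_baranchik(u_pp, k, S_grid))   # -> True True
```

Of course a grid check is only a sanity test, not a proof; the theorem itself covers all S>0.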

### 2.1 Bayes and Formal Bayes Rules

The model of Table 2 can be expanded to Level-III to allow Bayesian and formal Bayesian inferences by assuming that \alpha in general (in our simplified context, the unknown variance parameter A) has a proper or improper prior distribution. Shrinkage factors are then determined as integrals over the posterior distribution of B,

 E[B|S]=\frac{\int_{0}^{1}BL(B)\pi(B)\,\mathrm{d}B}{\int_{0}^{1}L(B)\pi(B)\,\mathrm{d}B}

for some prior density \pi on B.
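For a general prior density \pi on B, this posterior mean is a ratio of one-dimensional integrals and can be evaluated by quadrature; a minimal sketch (assuming scipy), shown with a Uniform(0,1) prior:

```python
import numpy as np
from scipy.integrate import quad

def posterior_mean_B(S, k, prior):
    # E[B|S] = int B L(B) pi(B) dB / int L(B) pi(B) dB over (0, 1),
    # with L(B) = B^(k/2) exp(-B S / 2)
    L = lambda B: B ** (k / 2) * np.exp(-B * S / 2)
    num, _ = quad(lambda B: B * L(B) * prior(B), 0, 1)
    den, _ = quad(lambda B: L(B) * prior(B), 0, 1)
    return num / den

# Uniform(0, 1) prior on B, with the hospital values k = 10, S = 11.62:
B_hat = posterior_mean_B(11.62, 10, lambda B: 1.0)
print(0.0 < B_hat < 1.0)   # a genuine shrinkage factor in (0, 1)
```

As expected, the estimated shrinkage decreases as S grows, since larger S is evidence of larger A.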

Two obvious families of priors arise in this context, to be charted in Figure 2:

1. Scale-invariant priors on A. Indexed by constants c\geq 0, these are improper (i.e., not finitely integrable) formal priors, with differential elements

 A^{c/2}\,\mathrm{d}A/A\quad\mbox{on }A>0.

As a distribution on B=V/(V+A), this corresponds to

 B^{-c/2-1}(1-B)^{c/2-1}\,\mathrm{d}B\quad\mbox{on }0<B<1.

These have the form of Beta densities, but they do not integrate finitely. Only propriety of the posterior distribution is required, that is, after multiplication by L(B), which imposes the additional restriction 0<c<k.

2. Conjugate priors on B take the form of the likelihood function L(B), but with different values of k,S. We index this conjugate family by k_{0} and by S_{0}\geq 0, perhaps thinking of them as previous values of k and S. The prior and posterior densities take the same form as L(B), having differential element

 B^{(k_{0}-4)/2}\exp(-BS_{0}/2)\,\mathrm{d}B\quad\mbox{on }0<B<1,

and posterior propriety requires k_{0}+k>2. If S_{0}>0, these are “truncated” \chi^{2}_{(k_{0}-2)} distributions on B\leq 1, scaled by S_{0}. This second family involves proper priors if k_{0}>2, known as “Strawderman’s priors” strawderman71 () when S_{0}=0. Strawderman showed (via Baranchik’s theorem) that the posterior mean of B for these priors provides minimax and admissible shrinkage estimators if k_{0}\leq k-2 (so k\geq 5 is required). These properties also hold if S_{0}>0. When S_{0}=0, B has a \operatorname{Beta}((k_{0}-2)/2,1) distribution and

 EB=(k_{0}-2)/k_{0}

a priori, again requiring k_{0}>2 for propriety. EB\leq(k-4)/(k-2) is the upper limit for minimaxity, requiring k\geq 5. The special choice k_{0}=4 puts a \operatorname{Uniform}(0,1) prior distribution on B and minimaxity then requires k\geq 6. Derived from proper priors, the posterior mean of \mu, given the data y for any of these Strawderman priors, automatically qualifies as an admissible, minimax estimator in the Model-I sense for quadratic loss.
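The prior-mean formula above is easy to confirm numerically (a sketch assuming scipy): with S_{0}=0, B\sim\operatorname{Beta}((k_{0}-2)/2,1).

```python
from scipy.stats import beta

def prior_mean_B(k0):
    # With S_0 = 0, B ~ Beta((k0 - 2)/2, 1) a priori (proper for k0 > 2),
    # whose mean is (k0 - 2)/k0
    return beta((k0 - 2) / 2, 1).mean()

print(prior_mean_B(4))   # the Uniform(0,1) special case: mean (4 - 2)/4 = 1/2
```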

The densities of these two prior families can be combined by multiplication (and some reparametrization) to yield a 3-parameter family with densities on B of the form

 B^{(k_{0}-c)/2-1}(1-B)^{c/2-1}\exp(-BS_{0}/2)\,\mathrm{d}B\quad\mbox{on }0<B<1.

If S_{0}=0, this class of prior densities has the form

 \operatorname{Beta}\bigl{(}\tfrac 12(u-k),\tfrac 12(k+k_{0}-u)\bigr{)} (2)

with u=k+k_{0}-c. They are proper only if k_{0}>c>0, that is, if k_{0}>u-k>0. The posterior density is proper if and only if k_{0}+k>c>0 since the exponential term that also appears in the posterior density, \exp(-B(S+S_{0})/2), is bounded in B so that term cannot affect posterior propriety.

Figure 2 shows the key regions for this formal Beta family (2) of prior densities (scaled for k=10) in terms of the two parameters (u,k_{0}), with the nearly irrelevant S_{0} set to 0. It emphasizes the regions where minimaxity holds, 0\leq u\leq 2(k-2). Instead of c, the horizontal axis uses u=k+k_{0}-c, because u determines minimaxity. It can be seen that as S\to\infty, S\times E[B|S]\to u for these priors, and Baranchik’s theorem tells us that minimaxity for large S fails unless 0\leq u\leq 2(k-2). This condition is necessary for minimaxity, but not sufficient.

Some explanation is in order, as follows in (a) through (h):

(a) Priors on B=V/(V+A) that lead to minimax estimators are limited to 0\leq u\leq 2(k-2).

(b) The posterior distribution is proper only if k_{0}>u-k. The 45-degree line k_{0}=u-k in Figure 2 marks the (unattainable) lower bound for these priors.

(c) Proper priors require k_{0}>c, so that u>k. Proper priors lie in the darkly shaded region to the right of the vertical line u=k. Improper priors are those with u\leq k.

(d) The scale-invariant priors A^{c/2}\,\mathrm{d}A/A on A>0 are on the horizontal axis k_{0}=0, so c=k-u in these priors. Posterior propriety for these priors, as seen in (b), requires 0\leq u\leq k so that shrinkage cannot extend all the way to the Baranchik limit 2(k-2). Scale invariant priors cannot be proper.

Viewed as distributions on \mu, these scale-invariant priors have differential elements

 \,\mathrm{d}\mu/\|\mu\|^{u}

[by integrating the \mathcal{N}(0,AI_{k}) density with respect to A^{c/2}\,\mathrm{d}A/A, and using c=k-u]. If u=0, that is, the prior located in Figure 2 at (u,k_{0})=(0,0), we have the k-dimensional Lebesgue measure \,\mathrm{d}\mu, which leads to using y as the estimator of \mu, that is, no shrinkage.
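The integration yielding \mathrm{d}\mu/\|\mu\|^{u} can be verified numerically as a function of r=\|\mu\|; this sketch (assuming scipy) checks that the induced density scales as r^{-u} with u=k-c.

```python
import numpy as np
from scipy.integrate import quad

def induced_density(r, k, c):
    # integrate the N(0, A I_k) density at ||mu|| = r against A^(c/2) dA / A
    f = lambda A: ((2 * np.pi * A) ** (-k / 2)
                   * np.exp(-r ** 2 / (2 * A)) * A ** (c / 2 - 1))
    val, _ = quad(f, 0, np.inf)
    return val

k, c = 10, 2                     # so u = k - c = 8
ratio = induced_density(2.0, k, c) / induced_density(1.0, k, c)
print(np.isclose(ratio, 2.0 ** -8, rtol=1e-4))   # density scales as r^(-8)
```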

One never should use (u,k_{0})=(k,0), although researchers sometimes make this mistake, thinking that the prior is vague because this is Jeffreys’ form \,\mathrm{d}A/A in other contexts. Actually, this prior forces B=1 a posteriori, no matter what the magnitude of S might be. Obviously this full-shrinkage estimator cannot be minimax.

(e) The conjugate priors B^{(k_{0}-4)/2}\,\mathrm{d}B on 0<B<1 (setting S_{0}=0) form the upsloping line k_{0}=u-(k-2). These have proper posteriors because this line lies above (and is parallel to) the line k_{0}=u-k. They are proper if u>k, that is, k_{0}>2, being Strawderman’s priors.

All these conjugate priors produce an easily calculated shrinkage factor (u need not be an integer in the Chi-squares) in this equal variances setting:

 \hat{B}=E[B|S]=\frac{u}{S+S_{0}}\times\frac{P[\chi^{2}_{(u+2)}\leq S+S_{0}]}{P[\chi^{2}_{(u)}\leq S+S_{0}]}. (3)

In this expression, S\hat{B} is monotone increasing in S because the ratio of \chi^{2}_{(u+2)} and \chi^{2}_{(u)} densities is monotone increasing. Therefore, Baranchik’s theorem applies and verifies minimaxity.
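A minimal numerical sketch of (3), using scipy’s chi-square CDF (which accepts non-integer degrees of freedom); the function name `conjugate_shrinkage` is ours. The grid check illustrates the monotonicity of S\hat{B} invoked in Baranchik’s theorem:

```python
import numpy as np
from scipy.stats import chi2

def conjugate_shrinkage(S, u, S0=0.0):
    """E[B|S] of equation (3); u need not be an integer."""
    return (u / S) * chi2.cdf(S + S0, df=u + 2) / chi2.cdf(S + S0, df=u)

# Baranchik's condition: S * E[B|S] = u * (CDF ratio) is monotone increasing in S.
S_grid = np.linspace(0.1, 30.0, 300)
SB = S_grid * conjugate_shrinkage(S_grid, u=8.0)
assert np.all(np.diff(SB) > 0)
```

As S approaches 0, \hat{B} tends to u/(u+2), and for large S it behaves like u/S, consistent with the discussion below.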

(f) The vertical line at u=k-2 denotes priors that have the smallest Model-I risks as \|\mu\|\to\infty. This holds because all priors in Figure 2 have shrinkages E[B|S] near \hat{B}=u/S for large S, and S must be large when \|\mu\| is large.

On the other hand, for any 0\leq a\leq 2, the mean-squared-error risk of estimators with shrinkage factor u/S=a\hat{B}_{\mathrm{JS}}, where a=u/(k-2), is

 E\Biggl{[}\sum_{i=1}^{k}\bigl{(}(1-u/S)y_{i}-\mu_{i}\bigr{)}^{2}\Big{|}\mu\Biggr{]}=k-(k-2)a(2-a)E[\hat{B}_{\mathrm{JS}}].

This risk is minimized uniformly at a=1, showing that the James–Stein estimator is optimal among estimators of the form u/S. Combining these two facts shows that minimax priors with u=k-2 lead to estimators with risk functions that, for large \|\mu\|, will be smaller than those in Figure 2 with u\neq k-2.

(g) Admissibility of the resulting Bayes estimators of \mu holds immediately for proper priors, so the priors in the rightmost wedge with k<u\leq 2(k-2) provide admissible minimax estimators.

(h) Improper priors may or may not produce admissible estimators. Various estimators based on priors with k-2\leq u\leq k are admissible and minimax at least if k_{0} isn’t too small. The SHP prior \,\mathrm{d}A, which corresponds to (u,k_{0})=(k-2,0), is an improper prior that does yield an admissible estimator. More on this later.

(i) Inadmissibility holds for many (perhaps all) of the priors with u<k-2. That this holds is suggested by the fact that the risk of an estimator with u<k-2 can be lowered for large \|\mu\| by using a prior with u=k-2 [as argued in (f) above]. Then it seems likely that such a prior can be found on the u=k-2 vertical axis of Figure 2 that would increase shrinkage (shrinkage generally increases in the rightward direction on Figure 2) with lower risk everywhere as a function of \|\mu\|.

Soon after it was recognized that the estimator y could be uniformly improved upon, numerous authors proposed priors captured by Figure 2, motivated by Bayesian and/or admissibility concerns. Many of these were scale-invariant priors with k_{0}=0, especially with k-2\leq u\leq k-1, for example, Stein, K. Alam, T. Leonard, I. J. Good and D. Wallace, D. Rubin, D. V. Lindley and A. F. M. Smith. Others were proposed on the conjugacy line k_{0}=u-(k-2), including \,\mathrm{d}B/B, that is, (u,k_{0})=(k-1,1), which has Jeffreys’ form, and (being improper) falls at the edge of Strawderman’s priors. Various authors since have repeated these and other suggestions, partly as “reference priors.” Our hope is that these priors that decision theory has shown to lead to the best and most trustworthy estimators for the equal variances setting of Figure 2 are “transportable” to the unequal variances setting.

Charles Stein’s choice is a prior on \mu, not on A. “Stein’s harmonic prior” (SHP) corresponds to the measure on \mu induced by taking c=2, that is, a flat density on A. It is

 p(\mu)\,\mathrm{d}\mu\propto\,\mathrm{d}\mu/\|\mu\|^{k-2}.

By (d), this corresponds to (u,k_{0})=(k-2,0) in Figure 2. The term “harmonic” refers to the fact that the Laplacian of the prior \nabla^{2}p(\mu) is uniformly equal to 0, except at the origin where it fails to exist. Technically, since \nabla^{2}p(0)=-\infty, the prior is actually superharmonic (Laplacian less than or equal to 0), a term Stein himself employed when showing that the resulting Bayes rule was both admissible and minimax by Model-I standards stein81 (). However, the term “harmonic” is simpler, nearly correct, and used by most researchers.

One motivation for the SHP prior stems from an easy calculation showing that the James–Stein shrinkage coefficient satisfies E[B|S]=(k-2)/S=\hat{B}_{\mathrm{JS}} if one assumes the (absurd) prior A\sim\operatorname{Uniform}[-V,\infty) morris83b (). Of course, allowing A<0 is illogical, and removing that part of the support gives A\sim\operatorname{Uniform}[0,\infty), which yields the SHP.

A second motivation is that the uniform prior on A\in(0,\infty) sits uniquely in Figure 2 at the intersection of the scale-invariant priors (k_{0}=0) and the sloped line of conjugate priors [k_{0}=u-(k-2)]. That is Stein’s SHP. Indeed, the SHP sits on the “admissible boundary,” being the scale-invariant admissible prior that shrinks least among the admissible ones. It is also optimal as \|\mu\|\to\infty (u=k-2). Being formal Bayes but not proper Bayes, it provides little prior information about A. Its conjugacy makes its shrinkages easy to compute in the equal variance setting.

A third motivation, as will be seen, is that the aggregate conditional posterior risk {R}^{*}={R}^{*}(S)<kV for this prior and, in turn, {R}^{*} exceeds \hat{R}(S), the unbiased estimate of the aggregate risk of the SHP estimator (not shown); see Morris morris77 (), morris83b (). More on this momentarily.

Using (3), the posterior mean of B resulting from the SHP prior is

 \hat{B}_{\mathrm{SHP}}=E[B|y]=\frac{k-2}{S}\times\frac{P[\chi^{2}_{(k)}\leq S]}{P[\chi^{2}_{(k-2)}\leq S]}.

The posterior variance of B morris77 (), morris83b () is, for k\geq 3,

 v=\operatorname{var}(B|y)=\frac{2}{k-2}\hat{B}_{\mathrm{SHP}}^{2}-(\hat{B}_{\mathrm{JS}}-\hat{B}_{\mathrm{SHP}})\biggl{(}1-\frac{k}{k-2}\hat{B}_{\mathrm{SHP}}\biggr{)}.

For the k=10 hospitals we have \hat{B}_{\mathrm{SHP}}=0.689\times 0.829=0.571 and v=(0.218)^{2}.
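These numbers can be reproduced numerically. S is not restated here, so as an assumption of this sketch we back it out from \hat{B}_{\mathrm{JS}}=(k-2)/S\approx 0.689, the value consistent with \hat{B}_{\mathrm{SHP}}=0.571 and v=(0.218)^{2}:

```python
import numpy as np
from scipy.stats import chi2

k = 10
S = (k - 2) / 0.689                   # assumed: back out S from B_JS ~ 0.689
B_js = (k - 2) / S
# SHP posterior mean and variance of B, from the two displays above
B_shp = B_js * chi2.cdf(S, df=k) / chi2.cdf(S, df=k - 2)
v = (2.0 / (k - 2)) * B_shp**2 - (B_js - B_shp) * (1.0 - k / (k - 2) * B_shp)

assert abs(B_shp - 0.571) < 0.002     # matches the text
assert abs(np.sqrt(v) - 0.218) < 0.002
```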

From the SHP posterior mean \hat{B}_{\mathrm{SHP}} we obtain the formal Bayes rule for \mu_{i},

 \hat{\mu}_{\mathrm{SHP},i}=E[\mu_{i}|y]=(1-\hat{B}_{\mathrm{SHP}})y_{i}.

But what of interval estimates for \mu_{i}? Our Model-III construction via SHP suggests using posterior probability intervals. For the SHP these can easily be approximated after computing the posterior variance of \mu_{i}, which for r=0 is

 s_{i}^{2}=\operatorname{var}(\mu_{i}|y)=V(1-\hat{B}_{\mathrm{SHP}})+vy_{i}^{2}.

Figure 3 from Morris and Tang morristang09 () shows coverage rates of \mu_{i} for 2-sided intervals with nominal coverage 95%. Each interval is centered at its SHP shrinkage estimate and approximates each of the k posterior distributions as Normally distributed with interval widths determined by adding and subtracting 1.96s_{i}.

The true coverages in Figure 3 for SHP are never less than 94.5% for any value of A for any of the three values of k=4,10,20 shown. The coverage probabilities do not depend on i or on V, so this is tantamount to a proof that this procedure comes close to providing or exceeding the nominal coverage. Over-coverages rise noticeably above 95% as the between-groups variance A approaches 0, that is, as the shrinkage B approaches 1. One must keep in mind, however, that while these intervals are based on the posterior mean and variance, they are not the actual posterior intervals, which are not symmetric. For example, the 0.5% under-coverage for SHP when k=20 does not account for the posterior skewness of the distributions of \mu_{i}, which is considerable when y_{i} is one of the extreme observations.

Intervals based on a different estimator, determined by an approximation technique called adjustment for density maximization (ADM) morris88 (), morristang09 (), also are shown in Figure 3, having slightly better minimum coverage. These estimators are described in the next section.

Componentwise intervals better than those centered at y_{i}, that is, intervals that on average are shorter than 2\times 1.96\mathrm{sd}_{i}, do not exist for all \mu by Model-I standards. They may exist for all A, when averaging over \mu|A in Model-II. Indeed, note that s_{i}^{2} also can be interpreted as the Bayes risk of the SHP rule \hat{\mu}_{\mathrm{SHP},i},

 {R}^{*}_{i}=E[(\hat{\mu}_{\mathrm{SHP},i}-\mu_{i})^{2}|y]=s_{i}^{2}.

Let us contrast this with

 \hat{R}_{i}=V(1-2\hat{B})+y_{i}^{2}(\hat{B}^{2}+2v),

the unique unbiased estimate of the component risk of \hat{\mu}_{\mathrm{SHP},i}. That is,

 E\hat{R}_{i}=E[(\hat{\mu}_{\mathrm{SHP},i}-\mu_{i})^{2}|\mu]

is the Model-I component risk for any value of \mu. Letting {R}^{*}=\sum_{i=1}^{k}s_{i}^{2} and \hat{R}=\sum_{i=1}^{k}\hat{R}_{i}, by rearranging terms morris77 (), morris83 () one sees that

 \hat{R}<{R}^{*}<kV.

That {R}^{*}<kV shows Model-I minimaxity of \hat{\mu}_{\mathrm{SHP}}, since its risk is less than that of the minimax y. That \hat{R}<{R}^{*} shows that the SHP prior is so vague that its Bayes risk is more conservative than its frequency-based unbiased estimate of risk. Averaging over both \mu and y, the k componentwise risks

 E[{R}^{*}_{i}|A]=E[(\hat{\mu}_{i}-\mu_{i})^{2}|A]

are all the same. Thus, each is less than V for all A\geq 0. This establishes Model-II componentwise minimaxity, that is, improvement on y_{i} for all A\geq 0 and for every i=1,\ldots,k.

Not only is the SHP rule componentwise minimax under Model-II evaluations, but its (approximate) coverage intervals are shorter on average than those accompanying the unbiased estimate y (since Es_{i}<\sqrt{V} by Jensen’s inequality). However, values of y exist for which some s_{i}^{2}>V, although this happens with small probability.

For the 10 hospitals we obtain {R}^{*}=4.85 and \hat{R}=3.48. Componentwise risks and other calculations are displayed in Table 3. Notice that some components have negative unbiased estimates of their mean-square-error, a not uncommon occurrence, and an undesirable feature of using this unbiased estimation approach for assessing component risks.

Unfortunately, real data rarely come with equal variances, designed experiments being the exception. Decision theorists have focused on this symmetric case because it is simple enough to enable exact (small sample) calculations. Decision theory has identified the SHP and other priors close to it that lead to shrinkage estimators with good frequency properties. Now the hope is that such priors are “transportable” to the unequal variances situation.

It should be clear that Model-I verifications are rarely appropriate for scientific applications, even when equal variances obtain. Acceptance of Model-II, and thus of evaluations that average over Level-II distributions (given the hyperparameters, e.g., A), has many advantages for applications. It makes assessing weights for the loss function unimportant. Model-II allows estimators to exist that are minimax for every component for all A, not just when summed over all components. Confidence intervals exist that are on average shorter than standard intervals centered at y_{i}, and these also can have (nearly) uniformly higher coverages. The unequal variance setting gives further impetus to Model-II as a basis for evaluating the operating characteristics of shrinkage procedures, and also for constructing them from proper or improper priors that lead to good repeated sampling properties.

## 3 Approaches to Unequal Variance Data

In practice, equal variances are the exception rather than the rule. The variances for all 31 NY hospitals, not just the middle 10, differ by a factor of more than 20. Table 4 lists the data for these k=31 NY hospitals and several shrinkage-related estimators, to be discussed further. The raw data contain the number of deaths d_{i} within a month of CABG surgeries for each hospital i, sorted by increasing caseload n_{i}. The indices for success rates are calculated as

 y_{i}=C\times\bigl{(}\arcsin(1-2d_{i}/n_{i})-\arcsin(1-2\overline{d}/\overline{n})\bigr{)},

a variance stabilizing transformation of the unbiased success rate estimates \hat{p}_{i}=d_{i}/n_{i}, assuming Binomial data, in which case the variance of the y_{i} is approximately V_{i}=\overline{n}/n_{i} (with \overline{n}=\tfrac 1k\sum_{i=1}^{k}n_{i}). The factor C is chosen so that the harmonic mean of the V_{i}, that is,

 V_{H}=\frac{k}{\sum_{i=1}^{k}V_{i}^{-1}},

is equal to 1. Larger values of y_{i} correspond to higher success rates. The 10 hospitals used in the previous sections appear here as Hospitals 15–24, but in a different order.
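A sketch of the index construction with hypothetical counts (not the NY data). One choice consistent with the text is C=\sqrt{\overline{n}}: the arcsine transform stabilizes the variance at approximately 1/n_{i}, so \operatorname{var}(y_{i})\approx\overline{n}/n_{i}=V_{i}, and V_{H}=1 then holds automatically:

```python
import numpy as np

# Hypothetical death counts d_i and caseloads n_i (not the actual NY data).
d = np.array([2.0, 5.0, 4.0, 10.0, 7.0])
n = np.array([50.0, 150.0, 200.0, 400.0, 600.0])

nbar, dbar = n.mean(), d.mean()
C = np.sqrt(nbar)                    # scales var(arcsin(1 - 2*phat)) ~ 1/n_i up to nbar/n_i
y = C * (np.arcsin(1 - 2 * d / n) - np.arcsin(1 - 2 * dbar / nbar))
V = nbar / n                         # approximate sampling variances of the y_i

V_H = len(V) / np.sum(1.0 / V)       # harmonic mean of the V_i
assert abs(V_H - 1.0) < 1e-12        # equals 1 by construction of V_i = nbar/n_i
```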

The y_{i} cannot be nearly Normally distributed when n_{i} is small, for example, Hospitals 1 and 2, but we act here as if the y_{i} are Normal because that distribution is required for the estimators being considered. A more accurate model might approximate the data d_{i} as Poisson, as Christiansen and Morris christiansenmorris97 () do for medical profiling. For the remainder of this section we also focus on shrinkage to 0 (r=0), the approximate average of the y_{i}.

### 3.1 Minimaxity in Model-I

It may seem that the James–Stein estimator, which requires equal variances, can still be used with unequal variances. To do this, one would divide the values y_{i} by their standard errors \mathrm{sd}_{i}=\sqrt{V_{i}} to create equal variances and apply James–Stein to y_{i}/\mathrm{sd}_{i}. Then the shrinkage \hat{B}_{\mathrm{JS}}=(k-2)/S=0.697, where S=\sum y_{i}^{2}/V_{i}=41.59, emerges for estimating \mu_{i}/\mathrm{sd}_{i}. Transforming back to estimate \mu_{i} yields a constant-shrinkage estimator

 \hat{\mu}_{\mathrm{JS},i}=(1-\hat{B}_{\mathrm{JS}})y_{i}.

This procedure is Model-I minimax if the loss function,

 L(\hat{\mu},\mu)=\sum_{i=1}^{k}W_{i}(\hat{\mu}_{i}-\mu_{i})^{2},

has weights W_{i}=1/V_{i}=n_{i}/\overline{n}. However, if the loss function has equal weights W_{i}\equiv 1, then this estimator won’t be minimax when the variances, equivalently the patient case-loads n_{i}, are substantially unequal, that is, it won’t have uniformly lower mean squared error than y for all \mu. Does any health leader exist with the insight to identify the proper weights W_{i} and the authority to enforce their use?

For unequal variances, component shrinkages would be expected to depend on i. How should these shrinkages be estimated, and by what criteria should the estimates be guided? Data analysts desire more shrinkage for larger V_{i} and less for smaller V_{i}, a pattern consistent with the law of large numbers, and with anticipated regression toward the mean, both of which suggest placing greater reliance on estimates y_{i} that are based on more data and that have smaller variances. Paradoxically, Model-I minimaxity in the unequal variance setting requires reversed shrinkages (more shrinkage for smaller V_{i}), as shown next.

Using an integration by parts technique pioneered by Stein stein73 () and Berger berger76 (), Hudson hudson74 () and Berger berger76 () independently developed a simple Model-I minimax shrinkage estimator for the sum of (unweighted) squared errors, that is, having risk less than \sum V_{i}, the risk of the unbiased estimate y, for all \mu. Their estimator directly extends the James–Stein estimator to unequal variances by shrinking each y_{i} toward 0 using the shrinkage factor

 \hat{B}_{\mathrm{HB},i}=\frac{(k-2)/V_{i}}{\sum_{j=1}^{k}(y_{j}/V_{j})^{2}}.

More generally, this estimator can be adapted easily to provide a minimax estimator for any set of weights W_{i} in the loss function (by rescaling the y_{i} to W_{i}^{1/2}y_{i}, obtaining the shrinkage factors above, and then transforming back to the original scale). In the special case W_{i}=1/V_{i}, this rescaling will produce the James–Stein estimator with its equal shrinkages \hat{B}_{\mathrm{JS},i}\equiv\hat{B}_{\mathrm{JS}}.

With equal weights W_{i}\equiv 1, the risk of this minimax estimator has a simple unbiased estimate:

 \hat{R}_{\mathrm{HB}}=\sum_{i=1}^{k}V_{i}\bigl{(}1-(k-2)\hat{B}_{\mathrm{HB},i}\bigr{)}.

This is less than \sum V_{i} for all values of y, because \hat{B}_{\mathrm{HB},i}>0. It follows that the expectation of \hat{R}_{\mathrm{HB}} given \mu is less than \sum V_{i}, thereby proving the Hudson–Berger estimator uniformly dominates y and is minimax for an equally weighted loss function.
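A minimal sketch of the Hudson–Berger rule on simulated unequal-variance data (the data and function name are ours, not the hospital values). It checks the two facts just described: the unbiased risk estimate falls below \sum V_{i}, and the shrinkages are reversed, decreasing as V_{i} grows:

```python
import numpy as np

def hudson_berger(y, V):
    """Model-I minimax shrinkage toward 0 under equal-weight loss (r = 0)."""
    k = len(y)
    B = ((k - 2) / V) / np.sum((y / V) ** 2)   # shrinkage factors of the display above
    mu_hat = (1.0 - B) * y
    R_hat = np.sum(V * (1.0 - (k - 2) * B))    # unbiased estimate of the risk
    return mu_hat, B, R_hat

rng = np.random.default_rng(1)
V = np.linspace(0.2, 4.0, 10)                  # increasing, unequal variances
mu = rng.normal(0.0, 1.0, 10)
y = rng.normal(mu, np.sqrt(V))

mu_hat, B, R_hat = hudson_berger(y, V)
assert R_hat < V.sum()                         # below sum(V_i), the risk of y
assert np.all(np.diff(B) < 0)                  # reversed: less shrinkage for larger V_i
```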

For the k=31 hospitals the risk estimate of the Hudson–Berger rule is \hat{R}_{\mathrm{HB}}=31.25. This is 36.3% smaller than \sum V_{i}=49.06, the risk of the unbiased estimate. Slightly more improvement stems from using shrinkages \min(1,\hat{B}_{\mathrm{HB},i}). Five hospitals, Hospitals 27–31, have such \hat{B}_{\mathrm{HB},i}>1, as shown in Table 4, and these shrinkages should be truncated at 1. However, these Model-I minimax shrinkage factors are smallest for the hospitals with the largest variances, even though the purpose of combining data in these applications is to borrow strength and thereby improve estimates for hospitals with less data.

Unfortunately, none of the 15 hospitals with the largest variances shrinks even as much as 2/3 of its standard error. By contrast, two of the six hospitals already with the most data and with the smallest variances (Hospitals 26–31) shrunk by about two of their own (small) standard errors, a dramatic adjustment for them. This minimax estimator would thrill the management of Hospital 26, whose negative performance estimate y_{26} (2.8 standard deviations below the mean) is shrunken upward by 2.5 standard deviations to make it nearly average. On the other hand, this minimax estimator shrinks Hospitals 27–31 all the way to 0 (the statewide average), so that Hospital 31 has its strong positive performance y_{31}=1.14 (1.84 standard deviations above the mean) reduced by those 1.84 standard deviations so it also is estimated as average.

### 3.2 Exchangeability in Model-II

The culprit here is the Model-I minimax criterion, and not the mathematically elegant procedure derived to achieve Model-I minimaxity. With substantially unequal variances and summed equally-weighted squared error losses, achieving Model-I minimaxity (nearly) requires reversed shrinkages, that is, smaller shrinkages for those components with larger V_{i}. (“Nearly” acknowledges that one could drastically diminish all the larger shrinkages to eliminate the reversal, but then with minuscule resulting shrinkages and no practical benefit.) Meanwhile, procedures that do not suffer from reversed shrinkages abound in practice, by relying instead on exchangeability assumptions in multilevel models and on Bayesian and empirical Bayesian considerations.

Figure 4 shows the Hudson–Berger Model-I minimax shrinkage factors, labeled as “HB,” plotted against the variances V_{i}. Note their reversed shrinkages that decrease as variances increase. The James–Stein shrinkage factors are constant at \hat{B}_{\mathrm{JS}}=0.697, as shown by the horizontal line labeled “JS.” Four other shrinkage rules will be introduced next, all motivated by Model-II considerations, so all with shrinkages that increase as variances increase.

Componentwise risks and interval coverages become more valuable when based on averages over both levels of Model-II. This requires accepting Level-II exchangeability for the random effects \mu_{i} (or when r>0, accepting exchangeability of the residuals \mu_{i}-x_{i}^{\prime}\beta), given A. Shrinkages now may increase as the variances V_{i} increase. Exchangeability of \mu (or of its residuals) replaces assessing weights for component losses in applications. As in the equal variances case, procedures that dominate on all k components become possible, as well as confidence intervals. With decision theoretic Model-II evaluations, componentwise dominance becomes the goal.

Most data analysts and modelers of real data are familiar with recognizing problems for which exchangeability assumptions are reasonable, for example, they make such judgements routinely for error terms when fitting regressions. Exchangeability considerations would stop anyone from combining estimates of butterfly populations and percentages of sports car sales to augment the estimation of the 31 NY hospital success rates. Model-I standards provide no guidance on this, instead requiring assessment of relative weights W_{i} for butterfly vs. hospital data.

With sufficiently disparate V_{i}, the minimax estimator of Hudson and Berger is not necessarily minimax for every component by Model-II evaluations. However, Model-II minimax shrinkage estimators do exist for any set of V_{i}. A recent such procedure by Brown, Nie and Xie brownetal10 () produces shrinkages that increase with V_{i} and with componentwise squared errors smaller than V_{i} for every i, for all A\geq 0, and for any variance pattern V_{1},\ldots,V_{k} for k\geq 3.

A popular Model-II shrinkage technique is based on the MLE of A. It provides relatively simple MLE estimates of the shrinkages \hat{B}_{\mathrm{MLE},i}=V_{i}/(V_{i}+\hat{A}_{\mathrm{MLE}}) and of the unknown means \hat{\mu}_{\mathrm{MLE},i}=(1-\hat{B}_{\mathrm{MLE},i})y_{i}. It is often used to construct confidence intervals for the \mu_{i} by estimating the conditional variance

 \operatorname{var}(\mu_{i}|{y,A})=(1-B_{i})V_{i}.

For r=0, \hat{A}_{\mathrm{MLE}} maximizes

 L(A)=\sum_{i=1}^{k}\bigl{(}-S_{i}B_{i}+\log(B_{i})\bigr{)}/2,

where S_{i}=y_{i}^{2}/V_{i} and B_{i}=V_{i}/(V_{i}+A). If r>0 and Level-II in Table 2 specifies an unknown mean

 E[\mu_{i}|\alpha]=x_{i}^{\prime}\beta,

then restricted maximum likelihood (REML) should be used. This can be accomplished by analytically integrating out (not maximizing out) the r-dimensional \beta, assuming its prior density is flat in r dimensions, as in morristang09 (). In this case the likelihood L(A) above would be replaced by the resulting integral over \beta, and then maximization would lead to \hat{A}_{\mathrm{REML}}.

When r>0, a larger value of k is required for any possibility of minimaxity, at least k\geq 3+r, with k\geq 5+r needed for minimaxity of the MLE in the equal variance case. The MLE shrinkages are graphed in Figure 4 for the 31 hospitals on the curve labeled “MLE.”

A flaw of the MLE is that \hat{A}_{\mathrm{MLE}}=0 occurs commonly. This not only dictates full shrinkage, but also when r=0 the conditional variance estimates (1-\hat{B}_{\mathrm{MLE},i})V_{i} are all equal to zero. In such cases using these for confidence intervals asserts that \mu_{i}=0 with 100% confidence, a gross overstatement morristang09 ().
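A minimal sketch of the r=0 likelihood above and its maximization by grid search (function names are ours). The first dataset has too little between-group spread, so the maximum sits at the boundary \hat{A}_{\mathrm{MLE}}=0, the flaw just noted; the second has an interior maximum:

```python
import numpy as np

def profile_loglik_A(A, y, V):
    """L(A) from the text (r = 0): sum_i (-S_i B_i + log B_i)/2."""
    B = V / (V + A)
    S = y**2 / V
    return np.sum(-S * B + np.log(B)) / 2.0

def A_mle(y, V):
    """Grid maximizer of L(A) over A >= 0 (simple, derivative-free)."""
    grid = np.concatenate([[0.0], np.logspace(-3, 3, 2000)])
    vals = [profile_loglik_A(A, y, V) for A in grid]
    return grid[int(np.argmax(vals))]

V = np.ones(5)
y_flat = np.array([0.5, -0.3, 0.2, 0.1, -0.4])   # sum y^2 < k: boundary estimate
y_wide = np.array([3.0, -2.5, 2.0, 1.5, -2.8])   # ample between-group spread

assert A_mle(y_flat, V) == 0.0                   # A-hat = 0: full shrinkage
assert A_mle(y_wide, V) > 1.0                    # interior, near sum(y^2)/k - 1
```

With equal variances V_{i}\equiv 1 the interior solution is \hat{A}=\sum y_{i}^{2}/k-1, which the grid search recovers.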

### 3.3 Construction at Level-III

Bayesian modeling extends Model-II to Model-III by constructing procedures from a single prior on the hyperparameters at Level-III. Bayes and formal Bayes procedures provide posterior means, variances and posterior distributions for the random effects \mu_{i}, given the data. Since such Model-III Bayesian procedures are widely used in applications, the question is: what are their frequency properties? The posterior moments and distributions may not be computable exactly, but they are estimable for any particular data set and prior via MCMC and other simulation techniques. Moreover, the fundamental theorem of decision theory tells us that Model-III constructions (Bayes and formal Bayes) are required for Model-II admissibility.

From the decision-theoretic perspective much more is yet to be learned, even for models as simple as the Normal distributions of Table 2 in Levels I–II. It still isn’t known, even with r=0, whether (formal) priors exist that provide Model-II minimax estimators of \mu no matter how varied the V_{i}. Beyond that, only a little has been done in the unequal variance case to determine if posterior probability intervals for formal priors, perhaps computed to offer posterior coverages of 95%, actually cover \mu_{i} for every i, A\geq 0 at that nominal 95% level.

#### 3.3.1 Stein’s prior: Transported from the equal variance case

For the family of priors discussed in the equal variances case in Section 2, Stein’s SHP stands out as the prime candidate for minimaxity and for confidence intervals in the unequal variances setting, assuming Model-II evaluations. Unfortunately, no general theorems about these properties have been proved for the SHP, formal mathematical proofs being hindered by the complexity of the posterior moments and intervals. However, particular investigations with the SHP have been encouraging.

Indeed, for any shrinkage estimator \hat{\mu}_{i}=(1-\hat{B}_{i})y_{i} with 0<\hat{B}_{i}<1, the difference between the component risks of y_{i} and \hat{\mu}_{i} conditioned on A and y,

 r_{i}=E[(y_{i}-\mu_{i})^{2}|{A,y}]-E[(\hat{\mu}_{i}-\mu_{i})^{2}|{A,y}]=B_{i}^{2}y_{i}^{2}-(B_{i}-\hat{B}_{i})^{2}y_{i}^{2}=(2B_{i}-\hat{B}_{i})\hat{B}_{i}y_{i}^{2},

is positive whenever A<V_{i}: in that case B_{i}=V_{i}/(V_{i}+A)>1/2, so 2B_{i}>1>\hat{B}_{i}. Integrating over y then shows that the Model-II risk of \hat{\mu}_{i} is less than V_{i} for any A<V_{i}. Also, SHP will dominate the unbiased estimate y_{i} when A becomes large enough, since the componentwise Model-II risk converges to that of the equal variance case as A tends to infinity.

The estimator E[\mu_{i}|y] for any prior on A, for each i and set of variances V_{i}, involves computing E[B_{i}|y]. For the SHP, with L(A) being the Model-II likelihood of A, this is

 E[B_{i}|y]=\hat{B}_{\mathrm{SHP},i}=\frac{\int_{0}^{\infty}V_{i}/(V_{i}+A)L(A)\,\mathrm{d}A}{\int_{0}^{\infty}L(A)\,\mathrm{d}A},

and the resulting estimate of \mu_{i} is

 \hat{\mu}_{\mathrm{SHP},i}=(1-\hat{B}_{\mathrm{SHP},i})y_{i}.

As with the equal variance case, the posterior variances s_{i}^{2}=\operatorname{var}(\mu_{i}|y) for any prior are given by

 s_{i}^{2}=V_{i}(1-E[B_{i}|y])+v_{i}y_{i}^{2},

where v_{i} is the posterior variance of B_{i}. For SHP this is

 v_{i}=\operatorname{var}(B_{i}|y)=\frac{\int_{0}^{\infty}V_{i}^{2}/(V_{i}+A)^{2}L(A)\,\mathrm{d}A}{\int_{0}^{\infty}L(A)\,\mathrm{d}A}-\hat{B}_{\mathrm{SHP},i}^{2}.
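The integrals above can be approximated on a grid. A minimal quadrature sketch with hypothetical y_{i} and V_{i} (not the hospital data); here the Model-II likelihood is taken as L(A)=\prod_{i}\mathcal{N}(y_{i};0,V_{i}+A) for r=0, an assumption consistent with Table 2:

```python
import numpy as np

def shp_shrinkages(y, V, A_max=1000.0, m=40001):
    """E[B_i|y] and var(B_i|y) under SHP (flat prior on A, r = 0), by quadrature."""
    A = np.linspace(0.0, A_max, m)                 # uniform grid: the step cancels
    W = V[:, None] + A                             # k x m array of V_i + A
    logL = -0.5 * np.sum(np.log(W) + y[:, None] ** 2 / W, axis=0)
    w = np.exp(logL - logL.max())                  # unnormalized posterior of A
    B = V[:, None] / W
    Z = w.sum()
    B_hat = (B * w).sum(axis=1) / Z
    v = (B**2 * w).sum(axis=1) / Z - B_hat**2
    return B_hat, v

V = np.array([0.3, 0.7, 1.0, 2.0, 5.0, 8.0])       # hypothetical unequal variances
y = np.array([1.2, -0.4, 0.8, -1.5, 2.0, 0.3])
B_hat, v = shp_shrinkages(y, V)

assert np.all((0 < B_hat) & (B_hat < 1)) and np.all(v > 0)
assert np.all(np.diff(B_hat) > 0)     # shrinkage increases with V_i, unlike HB
```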

The SHP shrinkage estimates \hat{B}_{\mathrm{SHP},i} for the hospital data are plotted in Figure 4 on the curve labeled “SHP.” The associated posterior standard deviations \sqrt{v_{i}} are given by the dotted curve labeled “\sqrt{v}.” Figure 5 displays a stochastic estimate of the relative Model-II risk improvement of SHP over the unbiased estimate y_{i},

 \frac{V_{i}-E[(\hat{\mu}_{\mathrm{SHP},i}-\mu_{i})^{2}|A]}{V_{i}}=\frac{E[r_{i}|A]}{V_{i}}

for k=31 and for the variance pattern of the 31 hospitals V_{1},\ldots,V_{31} as a function of A.

This was done by simulating 20,000 replicates of y at 15 different values of A, and averaging the 20,000 values of r_{i}/V_{i} at each A. Different curves plot the risk improvement for different components i. All the curves are positive and strictly decreasing. The curves are ordered according to their V_{i} values, the largest (V_{1}) providing the top curve. Thus, for this variance pattern, at least and seemingly generally, the greatest shrinkage benefit accrues to the components with the greatest uncertainty.

The graph’s monotonicity suggests that the minimum Model-II risk improvement for each component occurs as A approaches infinity. That corresponds to the limiting equal variance case. Interestingly, despite their stochastic nature, the curves do not cross each other. These results, although only for one data set, give hope for establishing componentwise Model-II risk dominance for all A of the SHP shrinkage procedure over the unbiased estimate y.

For equal variances, Figure 3 showed that \hat{\mu}_{\mathrm{SHP},i}\pm 1.96s_{\mathrm{SHP},i} produces minimum coverage of \mu_{i} very close to 95%. Figure 6 investigates the corresponding coverage properties for the unequal variances in the pattern of the 31 NY hospitals. For each y and A of the previous simulation, the coverage probability

 P(\mu_{i}\in\{\hat{\mu}_{\mathrm{SHP},i}\pm 1.96s_{\mathrm{SHP},i}\}|{y,A})

of \mu_{i} by the “SHP Normal” interval given y and A is analytically computed from

 \mu_{i}|{y,A}\stackrel{\mathrm{ind}}{\sim}\mathcal{N}[(1-B_{i})y_{i},V_{i}(1-B_{i})],

then averaged over the 20,000 values of y for each A. Thus, Figure 6 displays the coverage probabilities for each Hospital i using Model-II of Table 2 as a function of the harmonic mean B_{H} of the shrinkage factors,

 B_{H}=\frac{k}{\sum_{i=1}^{k}B_{i}^{-1}}=\frac{V_{H}}{V_{H}+A}=\frac{1}{1+A},

a monotone decreasing function of A (recall that the 31 CABG indices have been scaled to have V_{H}=1).

All but two of the 31 curves exhibit a pattern similar to that of equal variances in Figure 3 when k=20: exactly 95% coverage for B_{H} close to 0, a minimum with 0.5% under-coverage near B_{H}=0.6, and over-coverage for B_{H} close to 1. The curves are nonintersecting and increasing with V_{i} for the 4 highest values of B_{H}, but cross each other repeatedly for B_{H}<0.5, presumably because of simulation inaccuracy. The two nearly superimposed highest curves which never (or barely) overcover \mu_{i} correspond to the two hospitals with the highest variances, Hospitals 1–2, these variances being nearly 8 times the size of the 31 variances’ harmonic mean. In all cases the coverage probabilities are never below 94.5%.

Figure 7 compares SHP and unshrunken estimates, and their standard deviations, for the 31 NY hospitals. The absolute values of the estimates, |\hat{\mu}_{\mathrm{SHP},i}| (circle) and |y_{i}| (+/-), are plotted above the x-axis, and the negative standard deviations, -s_{\mathrm{SHP},i} and -\mathrm{sd}_{i}, are plotted below. “Plus” signs indicate positive estimates, for example, Hospitals 3 and 8, whereas “minus” signs indicate negative estimates, for example, Hospitals 1–2. It appears for these data that all the SHP coverage intervals will be shorter than those of the unbiased estimate, although this need not hold for all data sets y, as discussed for the equal variances case in Section 2.

#### 3.3.2 Posterior mean versus posterior mode: The ADM technique

Deriving the SHP rule for unequal variances requires numerical computation of k+1 integrals (including the common denominator in \hat{B}_{\mathrm{SHP},i}). ADM (adjustment for density maximization, Morris morris88 ()) is used here to provide a relatively simple approximation to the SHP, as in Morris and Tang morristang09 (). To motivate the ADM, recall that the MLE provides a simple shrinkage formula from the mode of the likelihood L(A), which is equivalent to the posterior mode of A under the SHP. However, the mode of a right-skewed distribution like that of A underestimates the mean, and furthermore it is the mean E[B_{i}|y] that is needed, not the mode. The ADM provides a better approximation than the MLE for shrinkage factors while still requiring only two derivatives to approximate the posterior distributions of B_{i}|y. The ADM can be used with various priors in Figure 2, but here we apply it to approximate the posterior distribution of each shrinkage B_{i}|y when the SHP is the chosen prior distribution for A.

For shrinkage estimation, ADM approximates the distribution of each B_{i}=V_{i}/(V_{i}+A) by a Beta distribution. Because shrinkage coefficients lie in [0,1], and these coefficients linearly determine the Level-II distributions (Table 2), two-parameter Beta distributions are the natural choice for shrinkage approximations, and not the Normal distribution (the distribution for which MLE and the posterior mean would coincide). When the prior on A is taken to be the SHP, and with Beta distribution approximations to B_{i}=V_{i}/(V_{i}+A), the ADM “adjustment” simply amounts to maximizing A\cdot L(A), rather than L(A), for each i and V_{i}. Note that the maximum always occurs with A\geq 0. Calling the maximizing value \hat{A}_{\mathrm{ADM}}, then E[B_{i}|y] is approximated by \hat{B}_{\mathrm{ADM},i}=V_{i}/(V_{i}+\hat{A}_{\mathrm{ADM}}). This ADM approach has been used before for shrinkage estimation, for example, by Christiansen and Morris christiansenmorris97 (), Li and Lahiri lilahiri10 () and Morris and Tang morristang09 ().

For the 31 hospitals, \hat{A}_{\mathrm{ADM}}=0.657, so E[B_{i}|y] is approximated by \hat{B}_{i}=V_{i}/(V_{i}+0.657). The variances of B_{i} could be obtained from the second derivative of \log(A\cdot L(A)) at the adjusted mode, \hat{A}_{\mathrm{ADM}}=0.657. The ADM shrinkages are graphed in Figure 4 on the curve labeled “ADM.” They are more conservative than those of the MLE, and indeed follow the SHP curve closely for all but the smallest variances V_{i}.
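A grid-based sketch of the ADM adjustment with hypothetical data (function names are ours): the only change from the MLE is maximizing A\cdot L(A) instead of L(A), which moves \hat{A} to the right and so makes every shrinkage factor smaller:

```python
import numpy as np

def log_marglik_A(A, y, V):
    """log L(A): Model-II marginal likelihood of A (r = 0)."""
    return -0.5 * np.sum(np.log(V + A) + y**2 / (V + A))

def fit_A(y, V, adm=False):
    """Grid maximizer of log L(A) (MLE) or log(A) + log L(A) (ADM)."""
    grid = np.logspace(-4, 3, 4000)
    obj = np.array([log_marglik_A(A, y, V) for A in grid])
    if adm:
        obj = obj + np.log(grid)      # maximize A * L(A) instead of L(A)
    return grid[int(np.argmax(obj))]

V = np.array([0.3, 0.7, 1.0, 2.0, 5.0, 8.0])       # hypothetical unequal variances
y = np.array([1.2, -0.4, 0.8, -1.5, 2.0, 0.3])
a_mle, a_adm = fit_A(y, V), fit_A(y, V, adm=True)

assert a_adm > a_mle                               # right-skew correction: larger A-hat
assert np.all(V / (V + a_adm) < V / (V + a_mle))   # hence more conservative shrinkage
```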

As seen earlier in Figure 3, the SHP coverages as approximated by ADM, via its standard errors and interval estimates, are never perceptibly below 95% for equal variances and k=4,10,20. The ADM is readily applicable to approximate posterior point and interval estimates for other priors on A in the unequal variance case. Further, Model-II evaluations of ADM include investigations by Morris–Tang morristang09 () for Normal distributions, Everson–Morris eversonmorris00 () for multivariate Normal data, and Christiansen–Morris christiansenmorris97 () for Poisson data. Evidence therein with special cases and/or with special data sets has been quite encouraging, with no negative experiences thus far.

### 3.4 Potential of the Multilevel Model: A Useful Rule of Thumb

For equal variances, good shrinkage rules such as James–Stein or SHP are simple enough to calculate that they can be implemented immediately in practice. For unequal variances the calculations are much more involved, and easily accessible software may be unavailable or may first need to be mastered. Researchers justifiably may ask how much they stand to gain by fitting a hierarchical model before actually fitting it, their alternatives being to use the unbiased estimates \hat{\mu}_{i}=y_{i} or the fully shrunken estimates, here \hat{\mu}_{i}=0 (for r=0), or when r>0 to shrink all the way to a grand mean or to a linear regression estimate.

A helpful feature of using MLE or ADM methods to fit shrinkages, perhaps with a model like that of Table 2, is that a simple point estimate \hat{A} of A suffices to estimate all shrinkage factors B_{i}, and consequently also all means \mu_{i}. Moreover, an estimate \hat{A} of A leads to a simple estimate \hat{B}_{H} of the harmonic mean of the shrinkage factors B_{H} through the identity

 B_{H}=V_{H}/(V_{H}+A). (4)

Analogously to its equal variance counterpart B, the harmonic mean shrinkage 0\leq B_{H}\leq 1 provides a useful summary for gauging the benefits of fitting a shrinkage model. Values of B_{H} close to 0 suggest that there will be relatively little shrinkage overall, in which case a researcher might be justified in using the unbiased estimates y_{i}. Or, values of B_{H} close to 1 might justify using the fully shrunken regression estimates x_{i}^{\prime}b,

 b=(X^{\prime}V^{-1}X)^{-1}X^{\prime}V^{-1}y,

where X^{\prime}=[x_{1},\ldots,x_{k}] and V=\operatorname{diag}(V_{1},\ldots,V_{k}). Values of B_{H} near 1/2 give the strongest case for estimating shrinkages.

Letting S=\sum_{i=1}^{k}(y_{i}-x_{i}^{\prime}b)^{2}/V_{i}, when r\geq 0 and the variances are equal, B_{H}=B and we have

 E\bigl[(k-r-2)/S|A\bigr]=B,

which leads to the James–Stein estimator. When the variances are unequal, it is easily seen for r=0 that

 E[S|A]=\sum_{i=1}^{k}(V_{i}+A)/V_{i}=k/B_{H}.

Taken together these facts suggest a simple point estimate for B_{H},

 \hat{B}_{H}=\frac{k-r-2}{S}=\frac{k-r-2}{k-r}\times\frac{1}{\hat{\sigma}^{2}},

where

 \hat{\sigma}^{2}=\frac{1}{k-r}\times\sum_{i=1}^{k}\frac{(y_{i}-x_{i}^{\prime}b)^{2}}{V_{i}}

is the mean square error from a (weighted linear) regression output. Note that one can easily rearrange (4) to solve for

 \hat{A}=V_{H}(1-\hat{B}_{H})/\hat{B}_{H}.

This estimate, in turn, can be used to provide simple estimates of each individual shrinkage factor B_{i} by V_{i}/(V_{i}+\hat{A}). Even if \hat{B}_{H} is small, having this rough estimate of every B_{i} is useful in case there are a few \hat{B}_{i} that are appreciably bigger than 0.

These estimates of B_{i} are plotted as the fourth and final Model-II rule in Figure 4, labeled “F,” giving a curve that is almost identical to the MLE shrinkages. Data analysts can use this easy rule of thumb, based on standard regression outputs, to anticipate individual and overall shrinkages without computing more precise shrinkage estimates. For the 31 hospitals, S=41.59 and \hat{B}_{H}=0.697, suggesting that a good Model-II rule would outperform both the individual estimates y_{i} and the fully shrunken estimates.
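The rule of thumb above reduces to a few lines of arithmetic. The sketch below assumes the r=0 (no-covariate) case, so that x_{i}^{\prime}b=0 and S=\sum y_{i}^{2}/V_{i}; the function name, the clamping of \hat{B}_{H} into (0,1], and the data are hypothetical conveniences, not from the paper.

```python
def rule_of_thumb(y, V, r=0):
    """Rough shrinkage estimates from the weighted residual sum of squares,
    using B_H_hat = (k - r - 2)/S, then A_hat = V_H (1 - B_H)/B_H from
    identity (4), with V_H the harmonic mean of the V_i.  Shown here for the
    r = 0 case, where the residuals are simply the y_i themselves."""
    k = len(y)
    S = sum(yi * yi / v for yi, v in zip(y, V))
    B_H = (k - r - 2) / S
    B_H = min(max(B_H, 1e-12), 1.0)       # keep the estimate inside (0, 1]
    V_H = k / sum(1.0 / v for v in V)     # harmonic mean of the variances
    A_hat = V_H * (1.0 - B_H) / B_H       # rearranged identity (4)
    return B_H, A_hat, [v / (v + A_hat) for v in V]

# Hypothetical data: five components with unequal variances
y = [1.2, -0.4, 2.1, 0.3, -1.5]
V = [0.5, 1.0, 0.8, 2.0, 1.5]
B_H, A_hat, B_hat = rule_of_thumb(y, V)
```

Everything here comes from quantities available in a weighted regression printout, which is the point of the rule: a rough \hat{B}_{H} and per-component \hat{B}_{i} before any hierarchical model is actually fit.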

## 4 Summary and Conclusions

We have reviewed a special and relatively simple class of hierarchical models, models for Normal distributions that have received significant attention from a nonasymptotic (in k) decision-theoretic perspective. Early equal-variance Model-I shrinkage estimators, evaluated by an (unweighted) sum of squared errors criterion, were found that provided Model-I minimaxity and even admissibility. That opened exciting new vistas. However, the great preponderance of applications (even when Normal distributions apply) arise with unequal variances, and there Model-II evaluations are seen to be much more appropriate. Model-II evaluations are both less and more general than Model-I: less general because they average over the Level-II parameters, and more general because they do not require judgments about appropriate weights for component losses and because they empower interval estimation. A Level-II exchangeability assumption, for example, as in Table 2, enables componentwise Model-II dominance to be possible.

Many more investigations are needed in the Model-II setting for small and moderate numbers k of random effects \mu=(\mu_{1},\ldots,\mu_{k})^{\prime}. Does Stein’s harmonic prior (SHP) transport to the unequal variance case, for example, by offering Model-II componentwise minimaxity, conditionally on all hyperparameters, especially on all A\geq 0? Our experience suggests that this is entirely possible for both the equal and the unequal variances settings, but there are no formal proofs yet. Does the full posterior distribution, geared to offer 95% posterior probability of coverage for fixed data with the SHP prior, provide intervals that cover at least 95% of the time? Showing this with Model-II would require at least 95% coverage for every fixed value of A\geq 0 that holds for every component (e.g., for every hospital), after averaging over both levels of Model-II. If intervals cover less than 95% of the cases, how close does the minimum coverage come to 95%? How well and when do relatively simple methods for estimating shrinkages work, like MLE and ADM methods? What Level-III priors lead to Model-II dominance by providing componentwise minimaxity and confidence intervals that are shorter for every component? Do SHP intervals cover every \mu_{i} more often for every i,A than do the standard (unshrunken) confidence intervals used by data analysts?

These theoretical questions about operating characteristics under Model-II evaluations can be asked for other yet more complicated models, especially for other distributions at Level-I and at Level-II. Shrinkage estimators arise when fitting generalized linear multilevel models to data that follow exponential families at Level-I, if conjugate distributions are used for the Level-II random effects. That is, just as Normal conjugate distributions are used at Level-II in Table 2, Gamma distributions are conjugate when Level-I specifies Poisson likelihoods, and Betas are conjugate for Binomial likelihoods. The advantage of conjugate distributions at Level-II is that shrinkage factors arise in conditional means, given the observations. Crucially, conjugate distributions are relatively robust, having the virtue of being “G_{2} minimax” among all possible Level-II distributions (priors) in the sense of Jackson et al. jacksonetal70 (), morris83c (). This helps make shrinkage estimators simple and robust. Shrinkage factors also provide useful summaries, so can serve a purpose like R^{2} does with OLS regressions.

We have argued that Model-II and its exchangeability assumptions are more appropriate than Model-I for developing and evaluating shrinkage estimators. This holds especially for applications in which improvements would be expected to hold for every \mu_{i}. Hospital directors might agree to having their own hospital’s performance estimated by combining information from other hospitals, but only if each was assured that doing so would make their own hospital’s estimate more accurate.

This paper argues especially that evaluations of shrinkage methods for unequal variance data have received too little attention, relative to the large literature on the Normal equal variances case. It is time to change that.

## References

• (1) Baranchik, A. (1964). Multiple regression and estimation of the mean of a multivariate normal population. Technical Report 51, Dept. Statistics, Stanford Univ.
• (2) Berger, J. O. (1976). Admissible minimax estimation of a multivariate normal mean with arbitrary quadratic loss. Ann. Statist. 4 223–226.
• (3) Berger, J. O. (1985). Statistical Decision Theory and Bayesian Analysis, 2nd ed. Springer, New York.
• (4) Brown, L. D. (1966). On the admissibility of invariant estimators of one or more location parameters. Ann. Math. Statist. 37 1087–1136.
• (5) Brown, L. D. (2009). Personal communication.
• (6) Brown, L. D., Nie, H. and Xie, X. (2011). Ensemble minimax estimation for multivariate normal means. Ann. Statist. To appear.
• (7) Christiansen, C. L. and Morris, C. N. (1997). Hierarchical Poisson regression modeling. J. Amer. Statist. Assoc. 92 618–632.
• (8) Diaconis, P. and Ylvisaker, D. (1979). Conjugate priors for exponential families. Ann. Statist. 7 269–281.
• (9) Efron, B. and Morris, C. (1975). Data analysis using Stein’s estimator and its generalizations. J. Amer. Statist. Assoc. 70 311–319.
• (10) Efron, B. and Morris, C. (1976). Families of minimax estimators of the mean of a multivariate normal distribution. Ann. Statist. 4 11–21.
• (11) Everson, P. J. and Morris, C. N. (2000). Inference for multivariate normal hierarchical models. J. R. Stat. Soc. Ser. B Stat. Methodol. 62 399–412.
• (12) Hudson, H. M. (1974). Empirical Bayes estimation. Technical Report 58, Dept. Statistics, Stanford Univ.
• (13) Jackson, D. A., O’Donovan, T. M., Zimmer, W. J. and Deely, J. J. (1970). \mathcal{G}_{2}-minimax estimators in the exponential family. Biometrika 57 439–443.
• (14) James, W. and Stein, C. (1961). Estimation with quadratic loss. In Proc. 4th Berkeley Sympos. Math. Statist. Probab. I 361–379. Univ. California Press, Berkeley, CA.
• (15) Li, H. and Lahiri, P. (2010). An adjusted maximum likelihood method for solving small area estimation problems. J. Multivariate Anal. 101 882–892.
• (16) Morris, C. (1977). Interval estimation for empirical Bayes generalizations of Stein’s estimator. The Rand Paper Series, The Rand Corporation.
• (17) Morris, C. N. (1983). Parametric empirical Bayes inference: Theory and applications (with discussion). J. Amer. Statist. Assoc. 78 47–65.
• (18) Morris, C. N. (1983). Natural exponential families with quadratic variance functions: Statistical theory. Ann. Statist. 11 515–529.
• (19) Morris, C. N. (1983). Parametric empirical Bayes confidence intervals. In Scientific Inference, Data Analysis, and Robustness (Madison, Wis., 1981). Publ. Math. Res. Center Univ. Wisconsin 48 25–50. Academic Press, Orlando, FL.
• (20) Morris, C. N. (1988). Approximating posterior distributions and posterior moments. In Bayesian Statistics 3 (Valencia, 1987) 327–344. Oxford Univ. Press, New York.
• (21) Morris, C. N. and Lock, K. F. (2009). Unifying the named natural exponential families and their relatives. Amer. Statist. 63 247–253.
• (22) Morris, C. and Tang, R. (2011). Estimating random effects via adjustment for density maximization. Statist. Sci. 26 271–287.
• (23) Stein, C. (1956). Inadmissibility of the usual estimator for the mean of a multivariate normal distribution. In Proc. 3rd Berkeley Sympos. Math. Statist. Probab. I 197–206. Univ. California Press, Berkeley.
• (24) Stein, C. (1966). An approach to the recovery of inter-block information in balanced incomplete block designs. In Research Papers in Statistics (Festschrift J. Neyman) 351–366. Wiley, London.
• (25) Stein, C. (1974). Estimation of the mean of a multivariate normal distribution. In Proceedings of Prague Symposium on Asymptotic Statistics (Charles Univ., Prague, 1973) II 345–381. Charles Univ., Prague.
• (26) Stein, C. M. (1962). Confidence sets for the mean of a multivariate normal distribution. J. Roy. Statist. Soc. Ser. B 24 265–296.
• (27) Stein, C. M. (1981). Estimation of the mean of a multivariate normal distribution. Ann. Statist. 9 1135–1151.
• (28) Strawderman, W. E. (1971). Proper Bayes minimax estimators of the multivariate normal mean. Ann. Math. Statist. 42 385–388.