Exact prior-free probabilistic inference on the heritability coefficient in a linear mixed model

Linear mixed-effect models with two variance components are often used when variability comes from two sources. In genetics applications, variation in observed traits can be attributed to biological and environmental effects, and the heritability coefficient is a fundamental quantity that measures the proportion of total variability due to the biological effect. We propose a new inferential model approach that yields exact prior-free probabilistic inference on the heritability coefficient. In particular, we construct exact confidence intervals and demonstrate numerically our method's efficiency compared to that of existing methods.

Keywords and phrases: Differential equation; inferential model; plausibility; unbalanced design; variance components.

1 Introduction

Normal linear mixed effects models are useful in a variety of biological, physical, and social scientific applications with variability coming from multiple sources; see Khuri and Sahai (1985) and Searle et al. (1992). In this paper, we focus on the case of two variance components, and the general model can be written as

$$Y = X\beta + Z\alpha + \varepsilon, \tag{1}$$

where $Y$ is an $n$-vector of response variables, $X$ and $Z$ are design matrices for the fixed and random effects, of dimension $n \times p$ and $n \times q$, respectively, $\beta$ is a $p$-vector of unknown parameters, $\alpha$ is a normal random $q$-vector with mean 0 and covariance matrix $\sigma_\alpha^2 A$, and $\varepsilon$ is a normal random $n$-vector with mean 0 and covariance matrix $\sigma_\varepsilon^2 I_n$. Here, $(\sigma_\alpha^2, \sigma_\varepsilon^2)$ is the pair of variance components. Assume $A$ is known and $X$ is of rank $p$. The unknown parameters in this model are the fixed-effect coefficients $\beta$ and the two variance components $(\sigma_\alpha^2, \sigma_\varepsilon^2)$, so the parameter space is $(p+2)$-dimensional.
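To make the two-variance-component model concrete, the sketch below simulates one data set from it. The design, the dimensions, and the choice $A = I_q$ are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Hypothetical small design: n = 12 observations, p = 2 fixed effects,
# q = 3 random-effect levels, and A = I_q (assumptions for illustration).
rng = np.random.default_rng(0)
n, p, q = 12, 2, 3
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # fixed-effect design
Z = np.kron(np.eye(q), np.ones((n // q, 1)))           # random-effect design
beta = np.array([1.0, 0.5])
sigma2_a, sigma2_e = 2.0, 1.0                          # variance components

alpha = rng.normal(0.0, np.sqrt(sigma2_a), size=q)     # random effects
eps = rng.normal(0.0, np.sqrt(sigma2_e), size=n)       # errors
Y = X @ beta + Z @ alpha + eps

# Heritability: proportion of total variance due to the random effect.
h2 = sigma2_a / (sigma2_a + sigma2_e)
```

With these variance components the heritability coefficient is 2/3; the inference problem treated in the paper is to quantify uncertainty about this ratio from the observed `Y` alone.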

In biological applications, the quantities $\alpha$ and $\varepsilon$ in (1) denote the genetic and environmental effects, respectively. Given that “a central question in biology is whether observed variation in a particular trait is due to environmental or biological factors” (Visscher et al. 2008), the heritability coefficient, $h^2 = \sigma_\alpha^2 / (\sigma_\alpha^2 + \sigma_\varepsilon^2)$, which represents the proportion of phenotypic variance attributed to variation in genotypic values, is a fundamentally important quantity. Indeed, mixed-effect models and inference on the heritability coefficient have been applied recently in genome-wide association studies (Yang et al. 2010; Golan and Rosset 2011); see Section 5 for more on these applications.

Given the importance of the heritability coefficient, there are a number of methods available to construct confidence intervals for it or, equivalently, for the variance ratio. When the design is balanced, Graybill (1976) and Searle et al. (1992) give confidence intervals for this ratio and other quantities. When the design is possibly unbalanced, as we assume here, the problem is more challenging; in particular, exact ANOVA-based confidence intervals generally are not available. Wald (1940) gave intervals for the unbalanced case, and subsequent contributions include Harville and Fenech (1985), Fenech and Harville (1991), Lee and Seely (1996), and Burch and Iyer (1997). Bayesian (Wolfinger and Kass 2000; Gelman et al. 2004; Gelman 2006) and fiducial methods (Fisher 1935; E et al. 2008; Cisewski and Hannig 2012) are also available.

The focus in this paper is exact prior-free probabilistic inference on the heritability coefficient based on the recently proposed inferential model (IM) framework. Martin and Liu (2013, 2014a, 2014b) give a detailed account of the general framework, along with comparisons to other related approaches, including fiducial (Hannig 2009, 2013; Fisher 1973). The IM approach is driven by the specification of an association between unobservable auxiliary variables, the observable data, and the unknown parameters. This association is followed up by using properly calibrated random sets to predict these auxiliary variables. Liu and Martin (2014) argue that it is precisely this prediction of the auxiliary variables that sets the IM approach apart from fiducial, Dempster–Shafer (Dempster 2008; Shafer 1976), and structural inference (Fraser 1968). The IM output is a plausibility function that can be used for inference on the parameter of interest. In particular, the plausibility function yields confidence intervals with exact coverage (e.g., Martin 2014). A brief overview of the general IM framework is given in Section 2.1. A key feature of the IM approach, and the motivation for this work, is that, besides being exact, the IM approach’s careful handling of uncertainty often leads to more efficient inference.

A key to the construction of an efficient IM is to reduce the dimension of the auxiliary variable as much as possible and, for the heritability coefficient, there are several dimension reduction steps. A first dimension reduction eliminates the nuisance parameter . These well-known calculations are reviewed in Section 2.2. The main technical contribution here is in Section 3.1, where we employ a differential equation-driven technique to reduce the auxiliary variable dimension, leading to a conditional IM for the heritability coefficient. In particular, this step allows us to reduce the dimension beyond that allowed by sufficiency. A predictive random set is introduced in Section 3.2, and in Section 3.3 we show that the corresponding plausibility function is properly calibrated and, therefore, the plausibility function-based confidence intervals are provably exact. Sections 4.2 and 4.3 illustrate the proposed method in simulated and real data examples. The general message is that our proposed confidence intervals for the heritability coefficient are exact and tend to be more efficient compared to existing methods.

2 Preliminaries

2.1 Overview of inferential models

The goal of the IM framework is to get valid prior-free probabilistic inference. This is facilitated by first associating the observable data and unknown parameters to a set of unobservable auxiliary variables. For example, the marginal distribution of $Y$ from (1) is

$$Y \sim \mathsf{N}_n(X\beta,\; \sigma_\alpha^2 Z A Z^\top + \sigma_\varepsilon^2 I_n), \tag{2}$$

which can be written in association form:

$$Y = X\beta + (\sigma_\alpha^2 Z A Z^\top + \sigma_\varepsilon^2 I_n)^{1/2}\, U, \qquad U \sim \mathsf{N}_n(0, I_n). \tag{3}$$

That is, observable data $Y$ and unknown parameters $(\beta, \sigma_\alpha^2, \sigma_\varepsilon^2)$ are associated with unobservable auxiliary variables $U$, in this case, an $n$-vector of independent standard normal random variables. Given this association, the basic IM approach is to introduce a predictive random set for $U$ and combine it with the observed value of $Y$ to get a plausibility function. Roughly, this plausibility function assigns, to assertions about the parameter of interest, values in $[0,1]$ measuring the plausibility that the assertion is true. This plausibility function can be used to design exact frequentist tests or confidence regions.

Martin and Liu (2013) give a general description of this three-step IM construction and its properties; in Section 3.1 below we will flesh out these three steps, including choice of a predictive random set, for the variance components problem at hand. The construction of exact plausibility intervals for the heritability coefficient is presented in Section 3.3, along with a theoretical justification of the claimed exactness.

Notice, in the association (3), that the auxiliary variable is, in general, of higher dimension than that of the parameter . There are two reasons for trying to reduce the auxiliary variable dimension. First, it is much easier to specify good predictive random sets when the auxiliary variable is of low dimension. Second, inference is more efficient when only a relatively small number of auxiliary variables require prediction. This dimension reduction is via a conditioning operation, and Martin and Liu (2014a) give a general explanation of the gains as well as a novel technique for carrying out this reduction. We employ these techniques in Section 3.1 to get a dimension-reduced association for the heritability coefficient in the linear mixed model.

Another important challenge is when nuisance parameters are present. Our interest is in the heritability coefficient, a function of the full parameter , so we need to marginalize over the nuisance parameters. The next section marginalizes over using standard techniques; further marginalization will be carried out in Section 3.1.

2.2 Marginalizing out the fixed effect

Start with the linear mixed model (1). Following the setup in E et al. (2008), let be a matrix such that and . Next, let . Then is a one-to-one mapping. Moreover, the distribution of depends on only, and the distribution of depends on , with a location parameter. In particular, from (2), we get

where and is a covariance matrix of a known form that depends on ; its precise form is not important. From this point, the general theory in Martin and Liu (2014b) allows us to marginalize over by simply deleting the component. Therefore, a marginal association for is

This marginalization reduces the auxiliary variable dimension from to .

In the marginal association for above, there are auxiliary variables but only two parameters. Classical results on sufficient statistics in mixed effects models will facilitate further dimension reduction. For the matrix defined above, let denote the (distinct) ordered eigenvalues with multiplicities , respectively. Let be a orthogonal matrix such that is diagonal with eigenvalues , in their multiplicities, on the diagonal. For , a matrix, define

Olsen et al. (1976) showed that is a minimal sufficient statistic for . Moreover, the distribution of is determined by


This reduces the auxiliary variable dimension from to . We take (4) as our “baseline association.” Even in this reduced baseline association, there are auxiliary variables but only two parameters, which means there is room for even further dimension reduction. The next section shows how to reduce to a scalar auxiliary variable when the parameter of interest is the scalar heritability coefficient.
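The spectral reduction above can be illustrated numerically. The matrix below is a stand-in built from a one-way layout with hypothetical group sizes 2, 2, and 1 (our assumption, not the paper's design); the distinct ordered eigenvalues and their multiplicities are extracted as follows.

```python
import numpy as np

# Stand-in symmetric matrix whose spectrum drives the reduction; here
# K = Z Z' for a one-way layout with group sizes 2, 2, 1 (an assumption).
Z = np.zeros((5, 3))
Z[0:2, 0] = Z[2:4, 1] = Z[4, 2] = 1.0
K = Z @ Z.T  # symmetric, positive semi-definite

# Distinct ordered eigenvalues with multiplicities, as in Olsen et al. (1976).
eigvals = np.linalg.eigvalsh(K)
rounded = np.round(eigvals, 8)               # merge numerically equal values
distinct, counts = np.unique(rounded, return_counts=True)
order = np.argsort(distinct)[::-1]           # decreasing order
distinct, counts = distinct[order], counts[order]
```

For this toy layout the distinct eigenvalues are 2, 1, and 0, with multiplicities 2, 1, and 2; in the riboflavin example of Section 4.3 the analogous quantities are 4.55, 1, and 0 with multiplicities 1, 1, and 10.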

3 Inferential model for heritability

3.1 Association

For the moment, it will be convenient to work with the variance ratio, . Since is a one-to-one function of , the two parametrizations are equivalent. Rewrite the baseline association (4) as


If we make the following transformations,

then the association (5) becomes

Since for every , there exists a that solves the right-most equation, it follows from the general theory in Martin and Liu (2014b) that a marginal association for is obtained by deleting the component above involving . In particular, a marginal association for is

If we write

then we get a marginal association for of the form


Marginalization reduces the auxiliary variable dimension by 1. Further dimension reduction will be considered next. Note that the new auxiliary variable is a multivariate F-distributed random vector (e.g., Amos and Bulgren 1972).

Here we construct a local conditional IM for as described in Martin and Liu (2014a). Select a fixed value ; more on this choice in Section 3.3. To reduce the dimension of the auxiliary variable , in (6), from to , we construct two pairs of functions—one pair, , on the sample space, and one pair, , on the auxiliary variable space. We insist that and are both one-to-one, and and are allowed to depend on the selected . The goal is to find which is, in a certain sense, not sensitive to changes in the auxiliary variable.

Write the association (6) so that is a function of and , i.e.,

We want to choose the function such that the partial derivative of with respect to vanishes at . By the chain rule, we have

so our goal is to find to solve the following partial differential equation:

here is a vector and is a matrix of rank . Toward solving the partial differential equation, first we get that the partial derivative of with respect to satisfies

where . This simplifies the relevant partial differential equation to the following:


where is a diagonal matrix constructed from a vector . The method of characteristics (e.g., Polyanin et al. 2002) for solving partial differential equations identifies a logarithmic function of the form


where is a matrix with rows orthogonal to at . For example, since the matrix that projects onto the orthogonal complement of the column space of has rank , we can take to be a matrix whose rows form a basis for that space. For defined in this way, it is easy to check that in (8) is indeed a solution to the partial differential equation (7).

We have completed specification of the function; it remains to specify and . The easiest to specify next is , the value of , as a function of :

As we describe below, the goal is to condition on the observed value of .

Next, we define and to supplement and , respectively. In particular, take a -vector that is not orthogonal to at . It is easy to check that the entries in are strictly positive for all . Therefore, we can take independent of , e.g., , which is not orthogonal to . Now set


This is a log-linear transformation, and the linear part is non-singular, so this is a one-to-one mapping. Finally, we take as

Since is log-linear, just like , it is also one-to-one.

We can now write the conditional association for . For the given , the mapping describes a split of our previous association (6) into two pieces:

The first piece carries direct information about . The second piece plays a conditioning role, correcting for the fact that some information was lost in reducing the -dimensional to a one-dimensional . To complete the specification of the conditional association, write and . Then we have


where is the conditional distribution of , given that equals the observed value of . To summarize, (10) completes the association step that describes the connection between observable data, unknown parameter of interest, and unobservable auxiliary variables. Of particular interest is that this association involves only a one-dimensional auxiliary variable, compared to the association (5) obtained from the minimal sufficient statistics, which involves an -dimensional auxiliary variable. This dimension reduction will come in handy for the choice of predictive random set in the following section. The price we paid for this dimension reduction was the choice of a particular localization point . In Section 3.3 we employ a trick to side-step this issue when the goal is, as in this paper, to construct confidence intervals.

3.2 Predictive random sets

Having reduced the auxiliary variable to a scalar in (10), the choice of an efficient predictive random set is now relatively simple. Though there is an available theory of optimal predictive random sets (Martin and Liu 2013), here we opt for simplicity; in particular, we propose a default predictive random set that is theoretically sound and both computationally and intuitively simple.

Consider the following version of what Martin and Liu (2013) call the “default” predictive random set:


This , with distribution , is a random interval, centered at the mean of the conditional distribution . One key feature of this predictive random set is that it is nested, i.e., for any two distinct realizations of , one is a subset of the other. The second key feature is a calibration of the predictive random set distribution with the underlying distribution of the auxiliary variable. Following Martin (2014), define the contour function

which represents the probability that the predictive random set contains the value of the auxiliary variable. We shall require that


For the default predictive random set in (11), it is easy to check that

where is the distribution function of for . From the construction above, it is clear that this is a continuous distribution. Then is a continuous random variable, so is uniformly distributed on $(0,1)$. Therefore, (12) holds for the default predictive random set . Results on optimal predictive random sets are available (Martin and Liu 2013), but here, again, our focus is on simplicity. See Section 5.
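This calibration property is easy to verify by Monte Carlo. The sketch below uses a standard normal as a stand-in for the conditional distribution (an assumption for illustration); the contour of the default predictive random set, evaluated at a random auxiliary variable, should behave like a Uniform(0,1) draw.

```python
import math
import random

def Phi(x):
    """Standard normal CDF (stand-in for the conditional distribution F)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def contour(u):
    """P{default random set contains u} = 1 - |2 F(u) - 1|."""
    return 1.0 - abs(2.0 * Phi(u) - 1.0)

# If U ~ F with F continuous, then contour(U) ~ Unif(0, 1), which is
# exactly the calibration (12) required of the predictive random set.
random.seed(1)
draws = [contour(random.gauss(0.0, 1.0)) for _ in range(100_000)]
mean_gamma = sum(draws) / len(draws)  # should be near 1/2 for Unif(0,1)
```

The empirical mean of the contour values lands near 1/2, consistent with the claimed uniformity.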

3.3 Plausibility intervals

Here we combine the association in Section 3.1 with the predictive random set described above to produce a plausibility function for inference about . In general, a plausibility function is a data-dependent mapping that assigns, to each assertion about the parameter, a value in $[0,1]$, with the interpretation that small values suggest the assertion is not likely to be true, given the data; see Martin and Liu (2013). For simplicity, we focus only on the collection of singleton assertions, i.e., for . These are also the relevant assertions for constructing interval estimates based on the IM output.

Let be the observations in (6). The association step in Section 3.1 yields a data-dependent collection of sets indexed by the auxiliary variable. In particular, write , a set-valued function of . These sets are combined with the predictive random set in Section 3.2 to get an enlarged -dependent random set:


Now, for a given assertion , we compute the plausibility function,

the probability that the random set contains the asserted value of . A simple calculation shows that, in this case with singleton assertions, we have

where is defined in Section 3.2. The above display shows that the plausibility function can be expressed directly in terms of the distribution of the predictive random set, without needing to go through the construction of as in (13).

We pause here to answer a question that was left open from Section 3.1, namely, how to choose the localization point . Following Martin and Liu (2014a), we propose here to choose to match the value of specified by the singleton assertion. That is, we propose to let the localization point depend on the assertion. All the elements in the plausibility function above with a 0 subscript, denoting dependence on , are changed in an obvious way to get a new plausibility function


We treat this as a function of to be used for inference. In particular, we can construct a $100(1-\alpha)$% plausibility interval for as follows:


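The interval construction amounts to collecting the parameter values whose plausibility exceeds $\alpha$. The sketch below does this by grid search for a toy normal-mean plausibility function; the normal CDF here is a stand-in for the conditional distribution function of Section 3.2, and the observed value `x = 1.3` is hypothetical.

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def plausibility(theta, x=1.3):
    # Toy plausibility function for a normal-mean problem; the heritability
    # case replaces Phi with the conditional CDF at localization point theta.
    return 1.0 - abs(2.0 * Phi(x - theta) - 1.0)

# 95% plausibility interval: all theta with plausibility above alpha = 0.05.
alpha = 0.05
grid = [i / 1000.0 for i in range(-3000, 6000)]
interval = [t for t in grid if plausibility(t) > alpha]
lower, upper = min(interval), max(interval)
```

In this toy case the resulting interval is the familiar $x \pm 1.96$, confirming that the plausibility-interval recipe reproduces an exact 95% interval when the auxiliary-variable distribution is fully known.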
The plausibility function, and the corresponding plausibility region, are easy to compute, as we describe in Section 4.1. Moreover, the calibration (12) of the predictive random set leads to exact plausibility function-based confidence intervals, as we now show.

We need some notation for the sampling distribution of , given all the relevant parameters. Recall that the distribution of actually depends on or, equivalently, . The error variance is a nuisance parameter, but still appears in the sampling model for . We write this sampling distribution as .

Theorem 1.

Take the association (10) and the default predictive random set in (11). Then for any , any value of , and any , the plausibility function satisfies


Proof. For given , if is the value of , then it follows from the conditional distribution construction that the plausibility function in (14), as a function of with , is . Then the equality in (16) follows immediately. ∎

Averaging the left-hand side of (16) over , with respect to the distribution of , and using iterated expectation gives the following unconditional version of Theorem 1.

Corollary 1.

Under the conditions of Theorem 1, for any ,

Since we have proper calibration of the plausibility function, both conditionally and unconditionally, coverage probability results for the plausibility interval (15) are also available. This justifies our choice to call a $100(1-\alpha)$% plausibility interval, i.e., the frequentist coverage probability of is exactly $1-\alpha$.

Corollary 2.

The coverage probability of in (15) is exactly $1-\alpha$.

4 Numerical results

4.1 Implementation

Evaluation of the plausibility function in (14) requires the distribution function of corresponding to the conditional distribution of , given . This conditional distribution is not of a convenient form, so numerical methods are needed. For fixed, since the transformation (9) from to is of a log-linear form, and the density function of can be written in closed-form, we can evaluate the joint density for and, hence, the conditional density of . Numerical integration is used to evaluate the normalizing constant, the mean , and the distribution function . R code is available at www.math.uic.edu/~rgmartin.
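The quadrature mechanics described above can be sketched generically. The unnormalized density below is a hypothetical stand-in for the conditional density of the one-dimensional auxiliary variable (the paper's actual density has a different closed form); the point is only how the normalizing constant, mean, and distribution function are obtained by numerical integration.

```python
import math

def unnorm_density(v):
    # Hypothetical unnormalized density, chosen only for illustration:
    # a Gaussian kernel with a mild skew.
    return math.exp(-0.5 * v * v) * (1.0 + 0.5 * math.tanh(v))

def trapezoid(f, a, b, m=4000):
    """Composite trapezoid rule on [a, b] with m subintervals."""
    h = (b - a) / m
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, m))
    return s * h

# Normalizing constant and mean via quadrature, mirroring the text.
Zc = trapezoid(unnorm_density, -10.0, 10.0)
mean = trapezoid(lambda v: v * unnorm_density(v), -10.0, 10.0) / Zc

def cdf(x):
    """Distribution function of the normalized density."""
    return trapezoid(unnorm_density, -10.0, x) / Zc
```

For this stand-in, the skew term integrates to zero against the Gaussian kernel, so the normalizing constant is $\sqrt{2\pi}$ and the mean is positive; both serve as sanity checks on the quadrature.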

4.2 Simulation results

In this section, we consider a standard one-way random effects model, i.e.,

where are independent with common distribution , and the s are independent with common distribution ; the s and s are also mutually independent. Our goal is to compare the proposed IM-based plausibility intervals for with the confidence intervals based on several competing methods. Of course, the properties of the various intervals depend on the design, in this case, the within-group sample sizes , and the values of . Our focus here is on cases with small sample sizes, namely, where the total sample size is fixed at 15. The three design patterns considered are: , , and . The nine pairs considered are: , , , , , , , , and . Without loss of generality, we set .

For each design pattern and pair of , 1000 independent data sets were generated and 95% two-sided interval estimates for were computed based on the exact method of Burch and Iyer (1997), the fiducial method of E et al. (2008), and the proposed IM method. Empirical coverage probabilities and average length of the confidence interval under each setting were compared to investigate the performance of each method. Besides these three methods, we also implemented Bayesian and profile likelihood approaches. The three aforementioned methods all gave intervals with better coverage and length properties than the Bayesian method, and the profile likelihood method was very unstable with small sample sizes, often having very high coverage with very wide intervals or very low coverage with very narrow intervals. So, these results are not reported.

A summary of the simulation results is displayed in Figure 1. Panel (a) displays the coverage probabilities, and Panel (b) displays the relative length difference, which is defined as the length of the particular interval minus the length of the IM interval, scaled by the length of the IM interval. As we expect from Corollary 2, the IM plausibility intervals have coverage at the nominal 95% level. We also see that the IM intervals tend to be shorter than the fiducial and Burch–Iyer confidence intervals. The fiducial intervals have coverage probability exceeding the nominal 95% level, but this comes at the expense of longer intervals on average. Overall, the proposed IM-based method performs quite well compared to these existing methods. We also replicated the simulation study in E et al. (2008), which involves larger sample sizes and a broader range of imbalance, and the relative comparisons between these three methods are the same as here.

(a) Coverage probability
(b) Relative length difference
Figure 1: Simulation results from Section 4.2. BI corresponds to the exact method of Burch and Iyer (1997), and FI corresponds to the fiducial method of E et al. (2008).

4.3 Real-data analysis

Example 1.

Equal numbers of subjects are tested under each of the standard and test preparations, along with a blank dose, in a symmetrical slope-ratio assay. The response, on the logarithmic scale, is assumed to depend linearly on the dose level. A modified balanced incomplete block design with block size was introduced by Das and Kulkarni (1966). The th dose levels for the standard and test preparations are represented by and , . Under this design, the doses are equally spaced and listed in ascending order. A balanced incomplete block design with doses of the standard preparation inside blocks is constructed and used as the basic design. A modified design is then constructed by adding a blank dose and doses of the test preparation to every block, under the rule that dose accompanies in every block. The model developed by Das and Kulkarni can be written as

where , , and represent the observed responses in the th block for the th dose of the standard preparation, the test preparation, and the blank dose, respectively; and represent the th dose levels for the standard and test preparations; is zero by default; denotes the th block effect; and denotes independent random errors with common distribution . We consider random block effects and assume that the are independent with common distribution . Independence of and is also assumed.

We analyze data coming from a nine-point slope-ratio assay on riboflavin content of yeast, with two replications at each dose; see Table 2 in E et al. (2008) for the design and data. For this design, we have distinct eigenvalues, namely, 4.55, 1, and 0, with multiplicities 1, 1, and 10, respectively. A plot of the plausibility function for is shown in Figure 2(a). The function exceeds 0.2 at , which implies that the 90% and 95% plausibility intervals include zero. The left panel of Table 1 shows the 90% and 95% interval estimates for based on the Burch–Iyer, fiducial, and IM methods. In this case, the IM intervals are provably exact and shorter.

(a) Example 1
(b) Example 2
Figure 2: Plausibility functions for in the two examples.
Example 1 Example 2
Method 90% 95% 90% 95%
Burch–Iyer 0.913 0.956 0.567 0.615
Fiducial 0.916 0.957 0.466 0.530
IM 0.881 0.924 0.554 0.597
Table 1: Upper bounds on the interval estimates for (lower bounds are all zero) based on the three methods in the two real-data examples.
Example 2.

Harville and Fenech (1985) analyzed data on birth weights of lambs. These data consist of the birth weights of 62 single-birth male lambs, collected from three selection lines and two control lines. Each lamb was the offspring of one of 23 rams, and each lamb had a distinct dam. The age of the dam was also recorded and classified into three categories, numbered 1 (1–2 years), 2 (2–3 years), and 3 (over 3 years). A linear mixed model for these data is

where represents the weight of the th offspring of the th sire in the th population line, with a dam in the th age category; represents the th age effect; represents the th line effect; denotes the random sire effects, assumed to be independently distributed as ; and the random errors are assumed to be independently distributed as . Furthermore, the s and s are assumed to be independent. In this case, , , and ; all non-zero eigenvalues have multiplicity 1 except , which has multiplicity 2. A plot of the plausibility function for is shown in Figure 2(b). As in the previous example, the plausibility function is positive at , which means that plausibility intervals at any reasonable level will contain . We also used each of the three methods considered above to compute 90% and 95% interval estimates for . The results are shown in the right panel of Table 1. In this case, IM gives a shorter interval than Burch–Iyer. The fiducial interval, however, is shorter than both exact intervals. Since we expect the IM interval to be most efficient, we explored the relative performance a bit further by simulating 1000 independent data sets from the fitted model in this case, i.e., with and as the true values. In these simulations, the fiducial and IM coverage probabilities were 0.944 and 0.954, respectively, both within an acceptable range of the nominal level, but the average interval lengths were 0.488 and 0.456, respectively. That is, the IM intervals tend to be shorter than the fiducial intervals in problems similar to this example, as we would expect.

5 Concluding remarks

The IM method proposed here gives exact confidence intervals for the heritability coefficient, as well as for the variance ratio, and numerical results suggest increased efficiency compared to existing methods. A natural question is whether these same techniques can be employed for exact prior-free probabilistic inference on other quantities related to the variance components. It is well known that, in the unbalanced design case, exact marginalization is challenging. In the IM context, this means that the association is not “regular” in the sense of Martin and Liu (2014b). Therefore, more sophisticated tools are needed for exact inference on, say, the individual variance components and . This application provides clear motivation for our ongoing investigations into more general IM-based marginalization strategies.

Another important question is whether the techniques presented herein can be applied in more complex and high-dimensional mixed-effect models. In genome-wide association studies, for example, the dimensions of the problem are extremely large. We expect that, conceptually, the techniques described here will carry over to the more complex scenario. However, there will be computational challenges to overcome, as with all approaches (Kang et al. 2010; Zhou and Stephens 2014). This, along with the incorporation of optimal predictive random sets (e.g., Martin and Liu 2013), is a focus of ongoing research.


Acknowledgments

The authors thank Chuanhai Liu for helpful comments and suggestions. This work is partially supported by the U.S. National Science Foundation, grant DMS–1208833.


References

  1. Amos, D. E. and Bulgren, W. G. (1972). Computation of a multivariate F distribution. Math. Comp., 26:255–264.
  2. Burch, B. D. and Iyer, H. K. (1997). Exact confidence intervals for a variance ratio (or heritability) in a mixed linear model. Biometrics, 53(4):1318–1333.
  3. Cisewski, J. and Hannig, J. (2012). Generalized fiducial inference for normal linear mixed models. Ann. Statist., 40(4):2102–2127.
  4. Das, M. N. and Kulkarni, G. A. (1966). Incomplete block designs for bio-assays. Biometrics, 22:706–729.
  5. Dempster, A. P. (2008). The Dempster–Shafer calculus for statisticians. Internat. J. Approx. Reason., 48(2):365–377.
  6. E, L., Hannig, J., and Iyer, H. (2008). Fiducial intervals for variance components in an unbalanced two-component normal mixed linear model. J. Amer. Statist. Assoc., 103(482):854–865.
  7. Fenech, A. P. and Harville, D. A. (1991). Exact confidence sets for variance components in unbalanced mixed linear models. Ann. Statist., 19(4):1771–1785.
  8. Fisher, R. A. (1935). The fiducial argument in statistical inference. Ann. Eugenics, 6:391–398.
  9. Fisher, R. A. (1973). Statistical Methods and Scientific Inference. Hafner Press, New York, 3rd edition.
  10. Fraser, D. A. S. (1968). The Structure of Inference. John Wiley & Sons Inc., New York.
  11. Gelman, A. (2006). Prior distributions for variance parameters in hierarchical models (comment on article by Browne and Draper). Bayesian Anal., 1(3):515–533.
  12. Gelman, A., Carlin, J. B., Stern, H. S., and Rubin, D. B. (2004). Bayesian Data Analysis. Chapman & Hall/CRC, Boca Raton, FL, second edition.
  13. Golan, D. and Rosset, S. (2011). Accurate estimation of heritability in genome wide studies using random effects models. Bioinformatics, 27:317–323.
  14. Graybill, F. A. (1976). Theory and application of the linear model. Duxbury Press, North Scituate, Mass.
  15. Hannig, J. (2009). On generalized fiducial inference. Statist. Sinica, 19(2):491–544.
  16. Hannig, J. (2013). Generalized fiducial inference via discretization. Statist. Sinica, 23(2):489–514.
  17. Harville, D. A. and Fenech, A. P. (1985). Confidence intervals for variance ratio, or for heritability, in an unbalanced mixed linear model. Biometrics, 41(1):137–152.
  18. Kang, H. M., Sul, J. H., Service, S. K., Zaitlen, N. A., Kong, S.-y., Freimer, N. B., Sabatti, C., and Eskin, E. (2010). Variance component model to account for sample structure in genome-wide association studies. Nat. Genet., 42(4):348–354.
  19. Khuri, A. I. and Sahai, H. (1985). Variance components analysis: a selective literature survey. Internat. Statist. Rev., 53(3):279–300.
  20. Lee, Y. and Seely, J. (1996). Computing the Wald interval for a variance ratio. Biometrics, 52:1486–1491.
  21. Liu, C. and Martin, R. (2014). Frameworks for prior-free posterior probabilistic inference. WIREs Comp. Stat., to appear.
  22. Martin, R. (2014). Random sets and exact confidence regions. Sankhya A, to appear; arXiv:1302.2023.
  23. Martin, R. and Liu, C. (2013). Inferential models: A framework for prior-free posterior probabilistic inference. J. Amer. Statist. Assoc., 108(501):301–313.
  24. Martin, R. and Liu, C. (2014a). Conditional inferential models: combining information for prior-free probabilistic inference. J. R. Stat. Soc. Ser. B. Stat. Methodol., to appear; arXiv:1211.1530.
  25. Martin, R. and Liu, C. (2014b). Marginal inferential models: prior-free probabilistic inference on interest parameters. Unpublished manuscript, arXiv:1306.3092.
  26. Olsen, A., Seely, J., and Birkes, D. (1976). Invariant quadratic unbiased estimation for two variance components. Ann. Statist., 4(5):878–890.
  27. Polyanin, A. D., Zaitsev, V. F., and Moussiaux, A. (2002). Handbook of first order partial differential equations, volume 1 of Differential and Integral Equations and Their Applications. Taylor & Francis Ltd., London.
  28. Searle, S. R., Casella, G., and McCulloch, C. E. (1992). Variance components. Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics. John Wiley & Sons, Inc., New York. A Wiley-Interscience Publication.
  29. Shafer, G. (1976). A Mathematical Theory of Evidence. Princeton University Press, Princeton, N.J.
  30. Visscher, P. M., Hill, W. G., and Wray, N. R. (2008). Heritability in the genomics era—concepts and misconceptions. Nat. Rev. Genet., 9:255–266.
  31. Wald, A. (1940). A note on the analysis of variance with unequal class frequencies. Ann. Math. Statistics, 11:96–100.
  32. Wolfinger, R. D. and Kass, R. E. (2000). Nonconjugate Bayesian analysis of variance component models. Biometrics, 56(3):768–774.
  33. Yang, J., Benyamin, B., McEvoy, B. P., Gordon, S., Henders, A. K., Nyholt, D. R., Madden, P. A., Heath, A. C., Martin, N. G., Montgomery, G. W., Goddard, M. E., and Visscher, P. M. (2010). Common SNPs explain a large proportion of the heritability for human height. Nat. Genet., 42(7):565–569.
  34. Zhou, X. and Stephens, M. (2014). Efficient multivariate linear mixed model algorithms for genome-wide association studies. Nat. Methods, 11:407–409.