Measuring support for a hypothesis about a random parameter without estimating its unknown prior

Abstract

For frequentist settings in which parameter randomness represents variability rather than uncertainty, the ideal measure of the support for one hypothesis over another is the difference between the posterior and prior log odds. For situations in which the prior distribution cannot be accurately estimated, that ideal support may be replaced by another measure of support, which may be any predictor of the ideal support that is asymptotically unbiased on a per-observation basis. Two qualifying measures of support are defined. The first is minimax optimal with respect to the population and is equivalent to a particular Bayes factor. The second is worst-sample minimax optimal and is equivalent to the normalized maximum likelihood. It has been extended by likelihood weights for compatibility with more general models.

One such model is that of two independent normal samples, the standard setting for gene expression microarray data analysis. Applying that model to proteomics data indicates that support computed from data for a single protein can closely approximate the estimated difference in posterior and prior odds that would be available with the data for 20 proteins. This suggests the applicability of random-parameter models to other situations in which the parameter distribution cannot be reliably estimated.

Running headline: Support for a random hypothesis


David R. Bickel
Ottawa Institute of Systems Biology
Department of Biochemistry, Microbiology, and Immunology
University of Ottawa

Keywords: empirical Bayes; indirect evidence; information for discrimination; minimum description length; model selection; multiple comparisons; multiple testing; normalized maximum likelihood; strength of statistical evidence; weighted likelihood

1 Introduction

The p-value has now served science for a century as a measure of the incompatibility between a simple (point) null hypothesis and an observed sample of data. The celebrated advantage of the p-value is its objectivity relative to Bayesian methods in the sense that it is based on a model of frequencies of events in the world rather than on a model that describes the beliefs or decisions of an ideal agent.

On the other hand, the Bayes factor has the salient advantage that it is easily interpreted in terms of combination with previous information. Unlike the p-value, it is a measure of support for one hypothesis over another; that is, it quantifies the degree to which the data change the odds that the hypothesis is true, whether or not a prior odds is available in the form of known frequencies. Although the Bayes factor does not depend on a prior probability of hypothesis truth, it does depend on the prior distribution assigned to the parameter under the alternative hypothesis unless that hypothesis is simple, in which case the Bayes factor reduces to the likelihood ratio if the null hypothesis is also simple. Unfortunately, the improper prior distributions generated by conventional algorithms cannot be directly applied to the Bayes factor. That obstacle has been overcome to some extent by dividing the data into training and test samples, with the training samples generating proper priors for use with the test samples, but at the expense of requiring the specification of training samples and, when using multiple training samples, a method of averaging (Berger and Pericchi, 1996).

On the basis of concepts defined in Section 2, Section 3 will marshal results of information theory to secure the above advantages of the p-value and the Bayes factor by deriving measures of hypothesis support of wide applicability that are objective enough for routine scientific reporting. While such results have historically been cast in terms of minimum description length (MDL), an idealized minimax length of a message encoding the data, they will be presented herein without reliance on that analogy. For the present paper, it is sufficient to observe that the proposed level of support for one hypothesis over another is the difference in their MDLs and that Rissanen (1987) previously used a difference in MDLs to compare hypotheses.

To define support in terms of the difference between posterior and prior log-odds without relying on non-frequency probability, Section 2.2 will relate the prior probability of hypothesis truth to the fraction of null hypotheses that are true. This framework is the two-groups model introduced for the analysis of gene expression data by empirical Bayes methods (Efron et al., 2001) and later adapted to other data of high-dimensional biology such as those of genome-wide association studies (Efron, 2010b; Yang and Bickel, 2010, and references) and to data of medium-dimensional biology such as those of proteins and metabolites (Bickel, 2010a, b). In such applications, each gene or other biological feature corresponds to a different random parameter, the value of which determines whether its null hypothesis is true.

While the proposed measures of hypothesis support fall under the two-groups umbrella, they are not empirical Bayes methods since they operate without any estimation or knowledge of prior distributions. Nonetheless, the unknown prior is retained in the model as a distribution across random parameters, including but not necessarily limited to those that generate the observed data.

Thus, the methodology of this paper is applicable to situations in which reliable estimation of the unknown two-groups prior is not possible. Such situations often arise in practice. For example, the number of sufficiently independent random parameters for which measurements are available is often considered too small for reliable estimation of the prior distribution. Qiu et al. (2005) argued that, due to correlations in expression levels between genes, this is the case with microarray data. Less controversially, few would maintain that the prior can be reliably estimated when only one random parameter generated data, e.g., when the expression of only a single gene has been recorded. Another example is the setting in which the data cannot be reduced to continuous test statistics that adequately meet the assumptions of available empirical Bayes methods of estimating the prior distribution.

Section 2 fixes basic notation and explains the two-groups model. Under that framework, Section 3 defines support for one hypothesis over another in terms of a difference between the posterior and prior log-odds. Reporting support in a scientific paper thus enables each reader to roughly determine what the posterior probability of either hypothesis would be under any hypothetical value of its unknown prior probability. Section 4 then gives two qualifying measures of support, each of which is minimax optimal in a different sense. In Section 5, one of the optimal measures is compared to empirical Bayes methodology using real proteomics data. That case study addresses the extent to which optimal support based on abundance measurements of a single protein can approximate the analogous value that would be available given measurements across multiple proteins. Finally, Section 6 offers a concluding discussion.

2 Preliminaries

2.1 Distributions given the parameter values

For all $i \in \{1, \dots, N\}$, the observed data vector $x_i$ of $n$ observations is assumed to be the outcome of $X_i$, the random variable of density function $f(\bullet \mid \theta_i)$ on sample space $\Omega$ for some $\theta_i$ in parameter space $\Theta$. Hypotheses about $\theta_i$, called the full parameter, are stated in terms of the subparameter $\gamma_i = \gamma(\theta_i)$, called the parameter of interest, which lies in a set $\Gamma$. Consider the member $\gamma_0$ of $\Gamma$ in order to define the null hypotheses $\gamma_1 = \gamma_0$, $\gamma_2 = \gamma_0$, $\dots$, $\gamma_N = \gamma_0$. The conditional density notation $f(\bullet \mid \theta_i)$ reflects the randomness of the parameter, the distribution of which will be specified in Section 2.2.

A measurable map $\tau$ yields $t_i = \tau(x_i)$ as the observed value of the random test statistic $T_i = \tau(X_i)$. The application of the map can often reduce the data to a lower-dimensional statistic, but the identity map may be employed if no reduction is desired: $t_i = x_i$. In some cases, the map may be chosen to eliminate the nuisance parameter, which means the probability density function of $T_i$, conditional on $\theta_i$, may be written as $g(\bullet \mid \gamma_i)$. Otherwise, the interest parameter is identified with the full parameter ($\gamma_i = \theta_i$), in which case $g(\bullet \mid \gamma_i) = f(\bullet \mid \theta_i)$. Thus, the following methodology applies even when the nuisance parameter cannot be eliminated by data reduction.

2.2 Hierarchical model

Let $P_1$ denote the alternative-hypothesis prior distribution, assumed to have measure-theoretic support $\Gamma$, and let $\pi_0$ denote the probability that a given null hypothesis is true. (Unless prefaced by measure-theoretic, the term support in this paper means strength of statistical evidence (§1) rather than what it means in measure theory.) Like most hierarchical models, including those of empirical-Bayes and random-effects methods, this two-groups model uses random parameters to represent real variability rather than subjective uncertainty:

$$T_i \sim g = \pi_0\, g_0 + (1 - \pi_0)\, g_1, \qquad (1)$$

where $g_1 = \int g(\bullet \mid \gamma)\, \mathrm{d}P_1(\gamma)$, and where $g_0 = g(\bullet \mid \gamma_0)$ and $g_1$ are the null and alternative density functions, respectively.

Let $P$ denote a joint probability distribution of $\gamma_i$ and $T_i$ such that $P(\gamma_i = \gamma_0) = \pi_0$, such that $P(\gamma_i \in \Gamma' \mid \gamma_i \neq \gamma_0) = P_1(\Gamma')$ for every measurable $\Gamma' \subseteq \Gamma$, and such that $P$ admits $g(\bullet \mid \gamma)$ as the density function of $T_i$ conditional on $\gamma_i = \gamma$ for all $\gamma \in \Gamma$. Let $A_i$ denote the random variable indicating whether, for all $i \in \{1, \dots, N\}$, the $i$th null hypothesis is true ($A_i = 0$) or whether the alternative hypothesis is true ($A_i = 1$). For sufficiently large $N$ and sufficient independence between random parameters, $\pi_0$ approximates, with high probability, the proportion of the null hypotheses that are true.

Bayes’s theorem then gives

$$\frac{P(\gamma_i \neq \gamma_0 \mid T_i = t_i)}{P(\gamma_i = \gamma_0 \mid T_i = t_i)} = \frac{(1 - \pi_0)\, g_1(t_i)}{\pi_0\, g_0(t_i)}, \qquad (2)$$

but that equation cannot be used directly without knowledge of $\pi_0$ and of $g_1$, which is unknown since $P_1$ is unknown. Since the empirical Bayes strategy of estimating those priors is not always feasible (§1), the next section presents an alternative approach for inference about whether a particular null hypothesis is true.
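
To make the two-groups model (1) and the role of equation (2) concrete, the following is a minimal simulation sketch assuming a normal model, a normal $P_1$, and a known $\pi_0$; every distribution and constant here is an illustrative assumption, since in the settings of this paper $P_1$ and $\pi_0$ are precisely what cannot be estimated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two-groups model (1): each null hypothesis is true with probability pi0;
# under the alternative, the interest parameter gamma_i is drawn from P1.
pi0 = 0.8                    # P(gamma_i = gamma_0), with gamma_0 = 0 (assumed)
N = 10_000                   # number of random parameters (features)
null = rng.random(N) < pi0
gamma = np.where(null, 0.0, rng.normal(0.0, 2.0, N))  # P1 = N(0, 2^2), assumed
t = rng.normal(gamma, 1.0)   # reduced statistic T_i | gamma_i ~ N(gamma_i, 1)

# Bayes' theorem (2): posterior odds = prior odds times the density ratio.
g0 = stats.norm.pdf(t, 0.0, 1.0)               # null density g0
g1 = stats.norm.pdf(t, 0.0, np.sqrt(1 + 4.0))  # alternative marginal g1
post_null = pi0 * g0 / (pi0 * g0 + (1 - pi0) * g1)

# Compare the claimed posterior probabilities with the simulated truth.
bins = np.linspace(0, 1, 11)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (post_null >= lo) & (post_null < hi)
    if mask.any():
        print(f"P(null) in [{lo:.1f},{hi:.1f}): "
              f"claimed ~{post_null[mask].mean():.2f}, "
              f"actual {null[mask].mean():.2f}")
```

The binned output confirms that the posterior probabilities of equation (2) match the simulated frequencies, which is exactly the guarantee that becomes unavailable when $g_1$ and $\pi_0$ cannot be estimated.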

3 General definition of support

One distribution will be said to surrogate another if it can represent or take the place of the other for inferential purposes. Before precisely defining surrogation, the reason for introducing the concept will be explained. Given $\hat{g}_1$, a probability density function that surrogates $g_1$, let $\hat{P}$ denote the probability distribution that satisfies both $\hat{P}(\gamma_i = \gamma_0) = \pi_0$ for all $i \in \{1, \dots, N\}$ and

$$\frac{\hat{P}(\gamma_i \neq \gamma_0 \mid T_i = t_i)}{\hat{P}(\gamma_i = \gamma_0 \mid T_i = t_i)} = \frac{(1 - \pi_0)\, \hat{g}_1(t_i)}{\pi_0\, g_0(t_i)}, \qquad (3)$$

where $T_i$ has the mixture probability density function $\pi_0\, g_0 + (1 - \pi_0)\, \hat{g}_1$ rather than that of equation (1). Equations (2) and (3) entail that $\hat{P}$ surrogates $P$ inasmuch as $\hat{g}_1$ surrogates $g_1$, which is unknown since it depends on $P_1$. Thus, posterior probabilities of hypothesis truth can be surrogated by using $\hat{g}_1$ in place of $g_1$. Although the surrogate posterior probability depends on the proportion $\pi_0$, the measure of support to be derived from equation (3) does not require that $\pi_0$ be known or even that it be estimated.

The concept of surrogation will be patterned after that of universality. Let $E_\gamma$ stand for the expectation operator defined by $E_\gamma(\bullet) = E(\bullet \mid \gamma_i = \gamma)$. A probability density function $\hat{g}$ is universal for the family $\mathcal{G} = \{ g(\bullet \mid \gamma) : \gamma \in \Gamma \}$ if, for any $\gamma \in \Gamma$, the Kullback-Leibler divergence $D\left( g(\bullet \mid \gamma) \,\|\, \hat{g} \right) = E_\gamma \log\left( g(T_i \mid \gamma) / \hat{g}(T_i) \right)$ satisfies

$$\lim_{n \to \infty} \frac{1}{n}\, D\left( g(\bullet \mid \gamma) \,\|\, \hat{g} \right) = 0. \qquad (4)$$

The terminology comes from the theory of universal source coding (Grünwald, 2007, p. 200); $\hat{g}$ is called “universal” because it is a single density function typifying all of the distributions of the parametric family. Equation (4) may be interpreted as the requirement that the per-observation bias in $\log \hat{g}(T_i)$ as a predictor of $\log g(T_i \mid \gamma)$ asymptotically vanishes. This lemma illustrates the concept of universality with an important example:

Lemma 1.

Let $Q$ denote a probability distribution that has measure-theoretic support $\Gamma$. The mixture density $\bar{g}$ defined by $\bar{g}(t) = \int g(t \mid \gamma)\, \mathrm{d}Q(\gamma)$ for all $t$ is universal for $\mathcal{G}$.

Proof.

By the stated assumption about $Q$, for every $\gamma \in \Gamma$ and every $\epsilon > 0$ there is a measurable set $\Gamma' \subseteq \Gamma$ such that $Q(\Gamma') > 0$ and

$$\frac{1}{n}\, E_\gamma \log \frac{g(T_i \mid \gamma)}{g(T_i \mid \gamma')} \leq \epsilon \qquad (5)$$

for all $\gamma' \in \Gamma'$ and $n \geq 1$. With $\bar{g} = \int g(\bullet \mid \gamma')\, \mathrm{d}Q(\gamma')$ and $\bar{g} \geq Q(\Gamma')\, \inf_{\gamma' \in \Gamma'} g(\bullet \mid \gamma')$, inequality (5) entails that

$$\frac{1}{n}\, E_\gamma \log \frac{g(T_i \mid \gamma)}{\bar{g}(T_i)} \leq \epsilon - \frac{\log Q(\Gamma')}{n}$$

for all $n \geq 1$. While that yields $\limsup_{n \to \infty} n^{-1} D\left( g(\bullet \mid \gamma) \,\|\, \bar{g} \right) \leq \epsilon$ with $\epsilon$ arbitrarily small, the information inequality has $D\left( g(\bullet \mid \gamma) \,\|\, \bar{g} \right) \geq 0$. The universality of $\bar{g}$ then follows from equation (4). (This proof generalizes a simpler argument using probability mass functions (Grünwald, 2007, p. 176).) ∎
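
To make Lemma 1 concrete, the following sketch estimates the per-observation Kullback-Leibler divergence of equation (4) by Monte Carlo for an assumed normal model with a normal mixing distribution $Q$ (neither taken from the paper); the divergence shrinks toward 0 as $n$ grows, as universality requires.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

rng = np.random.default_rng(1)

# Lemma 1 sketch: the mixture gbar(x) = \int g(x|u) dQ(u) is universal.
# Assumed model: X^n iid N(theta, 1); mixing distribution Q = N(0, 1).
# Under Q, the marginal of X^n is multivariate normal, covariance I + 11'.
theta = 1.5  # any fixed parameter value in the support of Q

for n in [1, 4, 16, 64, 256]:
    x = rng.normal(theta, 1.0, size=(2000, n))      # draws from g(.|theta)
    log_g = norm.logpdf(x, theta, 1.0).sum(axis=1)  # log g(x^n | theta)
    cov = np.eye(n) + np.ones((n, n))
    log_gbar = multivariate_normal.logpdf(x, mean=np.zeros(n), cov=cov)
    kl_per_obs = np.mean(log_g - log_gbar) / n      # (1/n) D(g(.|theta)||gbar)
    print(f"n = {n:4d}: (1/n) KL ~ {kl_per_obs:.4f}")
```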

Universality suggests a technical definition for surrogation. With respect to the family $\mathcal{G}$, a probability density function $\hat{g}$ surrogates any probability density function $g^{*}$ for which

$$\lim_{n \to \infty} \frac{1}{n}\, E_\gamma \log \frac{g^{*}(T_i)}{\hat{g}(T_i)} = 0 \qquad (6)$$

for all $\gamma \in \Gamma$. The idea is that one distribution can represent or take the place of another for inferential purposes if their mean per-observation difference vanishes asymptotically. The following lemma then says that any universal distribution can stand in the place of any other distribution that is universal for the same family. It is a direct consequence of equations (4) and (6).

Lemma 2.

If the probability density functions $\hat{g}$ and $g^{*}$ are universal for $\mathcal{G}$, then $\hat{g}$ surrogates $g^{*}$ with respect to $\mathcal{G}$.

The inferential use of one density function in place of another calls for a concept of surrogation error. The surrogation error of each probability distribution $\hat{P}$ based on the probability density function $\hat{g}_1$ in place of $P$ is defined by

$$\mathrm{err}\left( \hat{P}; t_i \right) = \log \frac{\hat{P}(\gamma_i \neq \gamma_0 \mid T_i = t_i)}{\hat{P}(\gamma_i = \gamma_0 \mid T_i = t_i)} - \log \frac{P(\gamma_i \neq \gamma_0 \mid T_i = t_i)}{P(\gamma_i = \gamma_0 \mid T_i = t_i)}.$$

Then $\hat{P}$ is said to surrogate $P$ if

$$\lim_{n \to \infty} \frac{1}{n}\, E_\gamma\, \mathrm{err}\left( \hat{P}; T_i \right) = 0 \qquad (7)$$

for all $\gamma \in \Gamma$ and $i \in \{1, \dots, N\}$. Equation (7) states the criterion that the per-observation bias in the surrogate posterior log odds as a predictor of the true posterior log odds asymptotically vanishes. This bias is conservative:

Proposition 3.

If $\hat{P}$ is based on a density function $\hat{g}_1$ on the sample space of $T_i$, then $E_{g_1}\, \mathrm{err}\left( \hat{P}; T_i \right) \leq 0$ for all $n$, where $E_{g_1}$ denotes expectation with respect to the alternative density $g_1$.

Proof.

The following holds for all $n$. By equations (2) and (3) with $\hat{g}_1$ for $g_1$,

$$E_{g_1}\, \mathrm{err}\left( \hat{P}; T_i \right) = E_{g_1} \log \frac{\hat{g}_1(T_i)}{g_1(T_i)} = -D\left( g_1 \,\|\, \hat{g}_1 \right),$$

but $D\left( g_1 \,\|\, \hat{g}_1 \right) \geq 0$ by the information inequality. ∎

The next result connects the concepts of surrogation (asymptotic per-observation unbiasedness) and universality.

Theorem 4.

If $\hat{P}$ is based on a density function $\hat{g}_1$ that is universal for $\mathcal{G}$, then it surrogates $P$.

Proof.

Since $P_1$ has measure-theoretic support $\Gamma$, Lemma 1 implies that $g_1$ is universal for $\mathcal{G}$. The universality of $g_1$ and $\hat{g}_1$ for $\mathcal{G}$ then entails that $\hat{g}_1$ surrogates $g_1$ by Lemma 2. According to equation (6), such surrogation means

$$\lim_{n \to \infty} \frac{1}{n}\, E_\gamma \log \frac{g_1(T_i)}{\hat{g}_1(T_i)} = 0. \qquad (8)$$

By equations (2) and (3) with $\hat{g}_1$ for $g_1$,

$$\frac{1}{n}\, E_\gamma\, \mathrm{err}\left( \hat{P}; T_i \right) = \frac{1}{n}\, E_\gamma \log \frac{\hat{g}_1(T_i)}{g_1(T_i)},$$

which equation (8) says is equal to 0 in the limit as $n \to \infty$. ∎

The difference between conditional and marginal log-odds,

$$s(t_i) = \log \frac{\hat{P}(\gamma_i \neq \gamma_0 \mid T_i = t_i)}{\hat{P}(\gamma_i = \gamma_0 \mid T_i = t_i)} - \log \frac{\hat{P}(\gamma_i \neq \gamma_0)}{\hat{P}(\gamma_i = \gamma_0)}, \qquad (9)$$

is called the support that the observation $t_i$ transmits to the hypothesis that $\gamma_i \neq \gamma_0$ over the hypothesis that $\gamma_i = \gamma_0$ according to $\hat{P}$, which by assumption surrogates $P$. While the concise terminology follows Edwards (1992), the basis on a change in log-odds is that of the information for discrimination (Kullback, 1968). Royall (2000a), Blume (2002), and others have used the term strength of statistical evidence as a synonym for support in the original sense of Edwards (1992).

Proposition 5.

If $\hat{P}$ surrogates $P$ based on the universal density function $\hat{g}_1$, then the support that the observation $t_i$ transmits to the hypothesis that $\gamma_i \neq \gamma_0$ over the hypothesis that $\gamma_i = \gamma_0$ according to $\hat{P}$ is

$$s(t_i) = \log \frac{\hat{g}_1(t_i)}{g_0(t_i)}. \qquad (10)$$
Proof.

Substituting the posterior odds of equation (3) and the prior odds $\hat{P}(\gamma_i \neq \gamma_0)/\hat{P}(\gamma_i = \gamma_0) = (1 - \pi_0)/\pi_0$ into equation (9) yields equation (10). ∎

Since the support according to $\hat{P}$ only depends on $\hat{P}$ through its universal density, $s(t_i)$ is more simply called the support that the observation $t_i$ transmits to the hypothesis that $\gamma_i \neq \gamma_0$ over the hypothesis that $\gamma_i = \gamma_0$ according to $\hat{g}_1$. Hence, the same value of the support applies to different hypothetical values of $\pi_0$ and even across different density functions as $g_1$, the unknown alternative distribution of the reduced data.
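
The practical upshot is that a single reported support value can be combined with any hypothetical prior odds a reader cares to entertain. A minimal arithmetic sketch, with made-up values standing in for $\hat{g}_1(t_i)$ and $g_0(t_i)$:

```python
import numpy as np

# Support (10) as a change in log odds: the same support value updates any
# hypothetical prior odds (the density values below are made up).
ghat1_t, g0_t = 0.21, 0.03          # hypothetical values of ghat1(t_i), g0(t_i)
support = np.log(ghat1_t / g0_t)    # eq. (10)

for pi0 in [0.5, 0.8, 0.95]:
    prior_log_odds = np.log((1 - pi0) / pi0)   # odds of the alternative
    post_log_odds = prior_log_odds + support   # eq. (9) rearranged
    post_prob_alt = 1 / (1 + np.exp(-post_log_odds))
    print(f"pi0 = {pi0:.2f}: posterior P(alt) ~ {post_prob_alt:.3f}, "
          f"support = {support:.2f}")
```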

4 Optimal measures of support

Equations (2) and (3) with $\hat{g}_1$ for $g_1$ imply that the surrogation error of $\hat{P}$ is equal to the surrogation error of $\hat{g}_1$,

$$\mathrm{err}\left( \hat{P}; t_i \right) = \log \frac{\hat{g}_1(t_i)}{g_1(t_i)},$$

which depends neither on $\pi_0$ nor on any other aspect of $P$ apart from $g_1$. Thus, the problem of minimizing the surrogation error of $\hat{P}$ reduces to that of optimizing the universal density $\hat{g}_1$ on which $\hat{P}$ is based. Such optimality may be either with respect to the population represented by $P_1$ or with respect to the observed sample reduced to $t_i$. The remainder of this section formalizes each type of optimality as a minimax problem with a worst-case member of $\mathcal{G}$ in place of the unknown mixture density $g_1$.

4.1 Population optimality

Among all probability density functions on the sample space of $T_i$, let $g^{+}$ be that which minimizes the maximum average log loss:

$$g^{+} = \arg\min_{\hat{g}} \sup_{\gamma \in \Gamma} E_\gamma \log \frac{g(T_i \mid \gamma)}{\hat{g}(T_i)}. \qquad (11)$$

Since the loss at each $\gamma$ is averaged over the population represented by the sampling density $g(\bullet \mid \gamma)$, the solution will be called the population-optimal density function relative to $\mathcal{G}$. That density function has the mixture density

$$g^{+}(t) = \int g(t \mid \gamma)\, \mathrm{d}Q^{+}(\gamma)$$

for all $t$, where $Q^{+}$ is the probability distribution on $\Gamma$ that maximizes

$$\int D\left( g(\bullet \mid \gamma) \,\Big\|\, \int g(\bullet \mid \gamma')\, \mathrm{d}Q(\gamma') \right) \mathrm{d}Q(\gamma)$$

(Rissanen, 2007, §5.2.1).

The prior distribution $Q^{+}$ thereby defined is difficult to compute at finite samples but asymptotically approaches the Jeffreys prior (Rissanen, 2009, §2.3.2), which was originally derived for Bayesian inference from an invariance argument (Jeffreys, 1948). Whereas $P_1$ is an unknown distribution of parameter values that describe physical reality, $Q^{+}$ is a default prior that serves as a tool for inference in scenarios in which suitable estimates of $P_1$ are not available. Lemma 1 secures the universality of $g^{+}$, which in turn implies that $\log\left( g^{+}(t_i)/g_0(t_i) \right)$ qualifies as support by Proposition 5.

For the observation $t_i$, $g^{+}(t_i)$ may likewise be considered as a default integrated likelihood and the support (10) as the logarithm of a default Bayes factor. Drmota and Szpankowski (2004) reviewed asymptotic properties of the population-optimal density function and related it to the universal density function satisfying the optimality criterion of the next subsection.
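
For a simple case in which such a default Bayes factor is available in closed form, consider a binomial model, for which the Jeffreys prior is the proper Beta(1/2, 1/2) distribution; this example is an illustrative assumption, not one of the models treated later in the paper.

```python
import numpy as np
from scipy import stats

# Default Bayes factor sketch with the Jeffreys prior (assumed binomial
# example).  Null: p = 1/2.  Default integrated likelihood: beta-binomial
# with a = b = 1/2, i.e., \int g(x|p) dQ(p) under the Jeffreys prior Q.
n, x = 30, 22                                  # hypothetical data
g_plus = stats.betabinom.pmf(x, n, 0.5, 0.5)   # default integrated likelihood
g_0 = stats.binom.pmf(x, n, 0.5)               # null density
support = np.log(g_plus / g_0)                 # log of a default Bayes factor
print(f"default Bayes factor = {g_plus / g_0:.2f}, support = {support:.2f}")
```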

4.2 Sample optimality

Among all probability density functions on the sample space of $T_i$, let $g^{\ddagger}$ be the one that minimizes the maximum worst-case log loss:

$$g^{\ddagger} = \arg\min_{\hat{g}} \sup_{t} \log \frac{g\left( t \mid \hat{\gamma}(t) \right)}{\hat{g}(t)}. \qquad (12)$$

Since the regret incurred by any observed sample is no greater than that of the worst-case sample, $g^{\ddagger}$ will be referred to as the sample-optimal density function relative to $\mathcal{G}$. As proved by Shtarkov (1987), the unique solution to that minimax problem is

$$g^{\ddagger}(t) = \frac{g\left( t \mid \hat{\gamma}(t) \right)}{\int g\left( t' \mid \hat{\gamma}(t') \right) \mathrm{d}t'}, \qquad (13)$$

with the normalizing constant automatically acting as a penalty for model complexity, where the maximum likelihood estimate (MLE) for any $t$ is denoted by $\hat{\gamma}(t)$ (Rissanen, 2007; Grünwald, 2007). The probability density $g^{\ddagger}$ is thus known as the normalized maximum likelihood (NML). Its universality (4) follows from the convergence of

$$\frac{1}{n}\, E_\gamma \log \frac{g(T_i \mid \gamma)}{g^{\ddagger}(T_i)}$$

to 0, which holds under the consistency of $\hat{\gamma}$ since the growth of the logarithm of the normalizing constant is asymptotically proportional to $\log n$ (Rissanen, 2007; Grünwald, 2007). Thus, Proposition 5 guarantees that $\log\left( g^{\ddagger}(t_i)/g_0(t_i) \right)$ measures support.
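
The Shtarkov normalizer of equation (13) is rarely available analytically, but for binary sequences it reduces to a finite sum over the sufficient statistic, a standard textbook case (Grünwald, 2007) rather than the model of Section 5. The following sketch computes the sample-optimal support against the simple null $p = 1/2$:

```python
import numpy as np
from scipy.special import comb

# NML (13) for binary sequences of length n: the maximized likelihood
# (k/n)^k ((n-k)/n)^(n-k), normalized by a finite sum over k = 0..n.
def max_loglik(k, n):
    # log of the maximized Bernoulli likelihood for k successes in n trials
    k = np.asarray(k, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        ll = k * np.log(k / n) + (n - k) * np.log(1 - k / n)
    return np.nan_to_num(ll)  # convention 0 * log 0 = 0 at k = 0 and k = n

n, k = 30, 22                                    # hypothetical data
ks = np.arange(n + 1)
log_norm = np.log(np.sum(comb(n, ks) * np.exp(max_loglik(ks, n))))
log_nml = max_loglik(k, n) - log_norm            # log NML of the sequence
log_g0 = n * np.log(0.5)                         # simple null: p = 1/2
print(f"sample-optimal support ~ {log_nml - log_g0:.2f}")
```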

For inference about $\gamma_i$, the incidental statistics $t_1, \dots, t_{i-1}, t_{i+1}, \dots, t_N$ provide side information or “indirect evidence” (Efron, 2010a) in addition to the “direct evidence” provided by the focus statistic $t_i$. The problem of incorporating side information into inference has been addressed with the weighted likelihood function (Hu and Zidek, 2002; Wang and Zidek, 2005) defined by

$$L_w(\gamma; t_i) = \prod_{j=1}^{N} g(t_j \mid \gamma)^{w_j} \qquad (14)$$

for all $\gamma \in \Gamma$, where the focus weight $w_i$ is no less than any of the incidental weights $w_j$, $j \neq i$. For notational economy and parallelism with $g(t_i \mid \gamma)$, the left-hand side expresses dependence on the focus statistic but not on the incidental statistics.

Replacing the likelihood function in equation (12) with the weighted likelihood function, while taking the worst-case sample of the focus statistic and holding the incidental statistics fixed, has the unique solution

$$g^{\ddagger}_{w}(t_i) = \frac{L_w\left( \hat{\gamma}_w(t_i); t_i \right)}{\int L_w\left( \hat{\gamma}_w(t); t \right) \mathrm{d}t}, \qquad (15)$$

where the maximum weighted likelihood estimate (MWLE) for any $t$ is denoted by $\hat{\gamma}_w(t)$ (Bickel, 2010b). Accordingly, $g^{\ddagger}_{w}$ will be called the sample-optimal density function relative to $\mathcal{G}$ and the weights $w_1, \dots, w_N$. If the incidental weights are such that the incidental statistics jointly carry the weight of a single observation, then $w_1, \dots, w_N$ are called single-observation weights (Bickel, 2010b). In accordance with equation (10), the corresponding sample-optimal support is $\log\left( g^{\ddagger}_{w}(t_i)/g_0(t_i) \right)$. When data are only available for one of the populations, the NMWL using single-observation weights may be closely approximated by considering

$$\log g(t_i \mid \gamma) + w_0 \log g(t_0 \mid \gamma) \qquad (16)$$

as the logarithm of the weighted likelihood, where $t_0$ is a pseudo-observation such as the mode of the distribution of $T_i$ under the null hypothesis and $w_0$ is its weight (Bickel, 2010b).

The probability density $g^{\ddagger}_{w}$ is called the normalized maximum weighted likelihood (NMWL). It applies to more general contexts than the NML: there are many commonly used distribution families for which $\int L_w\left( \hat{\gamma}_w(t); t \right) \mathrm{d}t$ but not $\int g\left( t \mid \hat{\gamma}(t) \right) \mathrm{d}t$ is finite (Bickel, 2010b). As with other extensions of the NML to such families (Grünwald, 2007, Chapter 11), conditions under which the NMWL is universal have yet to be established. Thus, Proposition 5 cannot be invoked at this time, and one may only conjecture that $\log\left( g^{\ddagger}_{w}(t_i)/g_0(t_i) \right)$ satisfies the general criterion of a measure of support (§3) in a particular context. The conjecture is suggested for the normal family by the finding of the next section that $g^{\ddagger}_{w}$ can closely approximate a universal density even for very small samples.
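
The following sketch illustrates how the pseudo-observation device (16) rescues normalization in a family where the plain NML fails: for a normal location model with unit variance (an assumed example, not the model of Section 5), the Shtarkov integral of equation (13) diverges because $g(t \mid \hat{\gamma}(t))$ is constant in $t$, yet a small pseudo-observation weight makes the normalizer finite.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

# NMWL sketch (assumed model): T | gamma ~ N(gamma, 1), null gamma_0 = 0.
# Pseudo-observation t0 = 0 (the null mode) with assumed weight w0 = 0.05.
w0, t0 = 0.05, 0.0

def log_wlik(gamma, t):
    # log weighted likelihood (16): direct evidence plus pseudo-observation
    return norm.logpdf(t, gamma, 1.0) + w0 * norm.logpdf(t0, gamma, 1.0)

def max_log_wlik(t):
    gamma_hat = (t + w0 * t0) / (1 + w0)  # MWLE, closed form for this model
    return log_wlik(gamma_hat, t)

# Shtarkov-style normalizer over the focus statistic; finite because the
# maximized weighted likelihood decays like a Gaussian in t.
norm_const, _ = quad(lambda t: np.exp(max_log_wlik(t)), -np.inf, np.inf)

t_obs = 2.5  # hypothetical observed statistic
log_nmwl = max_log_wlik(t_obs) - np.log(norm_const)
support = log_nmwl - norm.logpdf(t_obs, 0.0, 1.0)   # log(NMWL / g0)
print(f"NMWL support ~ {support:.2f}")
```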

5 Proximity to simultaneous inference: a case study

This section describes a case study on the extent to which support computed on the basis of measurements of the abundance of a single protein can approximate the true difference between posterior and prior log odds. Since that true difference is unknown, it will be estimated using an empirical Bayes method to simultaneously incorporate the available abundance measurements for all proteins.

Specifically, the individual sample-optimal support of each protein was compared to an estimated Bayes factor using levels of protein abundance in plasma as measured in the laboratory of Alex Miron at the Dana-Farber Cancer Institute. The participating women include 55 with HER2-positive breast cancer, 35 mostly with ER/PR-positive breast cancer, and 64 without breast cancer. The abundance levels, available in Li (2009), were transformed by shifting them to ensure positivity and by taking the logarithms of the shifted abundance levels (Bickel, 2010a).

The transformed abundance levels of the $i$th protein were assumed to be IID normal within each health condition, with an unknown variance common to all three conditions. For one of the cancer conditions and for the non-cancer condition, $\mu_i$ and $\nu_i$ will denote the means of the respective normal distributions, and $m$ and $m'$ will likewise denote the numbers of women with each condition. Let $t_i$ represent the absolute value of the Student $t$ statistic appropriate for testing the null hypothesis of $\gamma_i = 0$, where $\sigma_i$ is the common standard deviation and

$$\gamma_i = \frac{\mu_i - \nu_i}{\sigma_i},$$

the standardized cancer-healthy difference in the population mean transformed abundance of the $i$th protein. Under the stated assumptions, the Student $t$ statistic, conditional on $\gamma_i$, has a noncentral $t$ distribution with $m + m' - 2$ degrees of freedom and noncentrality parameter $\left( 1/m + 1/m' \right)^{-1/2} \gamma_i$ (Bickel, 2010a). Thus, because $t_i$ is the absolute value of that statistic, $\gamma_i$ is the only unknown parameter of $g(\bullet \mid \gamma_i)$, the probability density function of $T_i$.

With that model and test statistic, the NMWL and the corresponding sample-optimal support were computed separately for each protein using the null mode as the pseudo-observation $t_0$ in equation (16), as in Bickel (2010b). For the analysis of the data of all proteins simultaneously, the same model and test statistics were used with the two-component mixture model defined by equation (1) with $g_1 = g(\bullet \mid \gamma^{*})$ for some unknown $\gamma^{*}$. The true alternative density function was estimated by plugging in the maximum likelihood estimate obtained from maximizing the likelihood function

$$\prod_{i=1}^{N} \left[ \pi_0\, g_0(t_i) + (1 - \pi_0)\, g(t_i \mid \gamma^{*}) \right]$$

over $\pi_0$ and $\gamma^{*}$ (Bickel, 2010a). The results appear in Fig. 1 and are discussed in the next section.
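
A rough sketch of that simultaneous fit follows, with hypothetical $|t|$ statistics in place of the proteomics data of Li (2009), the folded noncentral $t$ density standing in for $g(\bullet \mid \gamma_i)$, and the sample sizes taken from the study description above.

```python
import numpy as np
from scipy.stats import nct
from scipy.optimize import minimize

# Sketch of the simultaneous (empirical Bayes) fit of Section 5, with
# made-up |t| statistics rather than the actual 20-protein data set.
m, m_prime = 55, 64                       # women with and without cancer
df = m + m_prime - 2
scale = 1 / np.sqrt(1 / m + 1 / m_prime)  # noncentrality = scale * gamma_i

def g(t_abs, gamma):
    # density of |T| given gamma: folded noncentral t
    return nct.pdf(t_abs, df, scale * gamma) + nct.pdf(-t_abs, df, scale * gamma)

t_abs = np.array([0.3, 0.8, 1.1, 1.9, 2.4, 3.6, 0.5, 1.4])  # hypothetical

def neg_loglik(params):
    pi0, gamma_star = params
    mix = pi0 * g(t_abs, 0.0) + (1 - pi0) * g(t_abs, gamma_star)
    return -np.sum(np.log(mix))

fit = minimize(neg_loglik, x0=[0.8, 1.0],
               bounds=[(1e-3, 1 - 1e-3), (1e-3, 10.0)])
pi0_hat, gamma_hat = fit.x
print(f"pi0 ~ {pi0_hat:.2f}, gamma* ~ {gamma_hat:.2f}")
# Estimated support per protein: log of estimated alternative over null density
print(np.round(np.log(g(t_abs, gamma_hat) / g(t_abs, 0.0)), 2))
```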

Figure 1: Single-comparison, sample-optimal support (“minimax”) as an approximation to the estimated support that could be achieved with multiple comparisons (“simultaneous”). The “upper bound” is the logarithm of the maximum likelihood ratio (Bickel, 2010c), exceeding the optimal support by a constant amount.

6 Discussion

The proposed framework of evidential support may be viewed as an extension of likelihoodism, classically expressed in Edwards (1992), to nuisance parameters and multiple comparisons. Edwards (1992, §3.2) argued that a measure of evidence in data or support for one simple hypothesis (sampling distribution) over another should be compatible with Bayes’s theorem in the sense that whenever real-world parameter probabilities are available, the support quantifies the departure of posterior odds from prior odds. The likelihood ratio has that property, but the p-value does not since it only depends on the distribution of the null hypothesis. As compelling as the argument is for comparing two simple hypotheses, the pure likelihood approach does not apply to a composite hypothesis, a set of sampling distributions.

Perceiving the essential role of composite hypotheses in many applications, Zhang (2009) previously extended likelihoodism by replacing the likelihood of the single distribution that represents a simple hypothesis with the likelihood maximized over all parameter values that constitute a composite hypothesis. Thus, the strength of evidence for the alternative hypothesis that $\gamma$ lies in some interval (or union of intervals) $\Gamma_1$ over the null hypothesis that $\gamma$ lies in some other interval $\Gamma_0$ would be $\log\left( \sup_{\gamma \in \Gamma_1} L(\gamma) / \sup_{\gamma \in \Gamma_0} L(\gamma) \right)$, where $L$ is the likelihood function. For example, the strength of evidence favoring $\gamma \neq \gamma_0$ over $\gamma = \gamma_0$ would be $\log\left( \sup_{\gamma \neq \gamma_0} L(\gamma) / L(\gamma_0) \right)$. The related approach of Bickel (2010c) performs the maximization after eliminating the nuisance parameter: $\log\left( \sup_{\gamma \neq \gamma_0} g(t_i \mid \gamma) / g(t_i \mid \gamma_0) \right)$. While that approach to some extent justifies the use of likelihood intervals (Fisher, 1973) and has intuitive support from the principle of inference to the best explanation (Bickel, 2010c), it tends to overfit the data from a predictive viewpoint. For example, if $\hat{\gamma}(t_i) \neq \gamma_0$, then the evidence for the hypothesis that $\gamma_i = \hat{\gamma}(t_i)$ would be just as strong as the evidence for the hypothesis that $\gamma_i \neq \gamma_0$ even if the latter hypothesis were in primary view before observing $t_i$. Thus, the maximum likelihood ratio is considered as an upper bound of support in Fig. 1.
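
That overfitting is easy to exhibit numerically. In the sketch below (an assumed normal location model with unit variance), the maximum likelihood ratio for the composite hypothesis $\gamma \neq 0$ coincides with that for the point hypothesis sitting exactly at the MLE:

```python
import numpy as np
from scipy.stats import norm

# Assumed model: T | gamma ~ N(gamma, 1); null gamma_0 = 0.
t = 1.7                       # hypothetical observed statistic
gamma_hat = t                 # MLE of gamma
lr_point = norm.pdf(t, gamma_hat, 1.0) / norm.pdf(t, 0.0, 1.0)
# The supremum over the composite alternative gamma != 0 is attained at the
# MLE, so the maximum likelihood ratio for the composite is identical:
print(np.log(lr_point))       # equals t**2 / 2, here ~1.445
```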

The present paper also generalizes the pure likelihood approach but without such overfitting. The proposed approach grew out of the Bayes-compatibility criterion of Edwards (1992). By leveraging recent advances in J. Rissanen’s information-theoretic approach to model selection, the Bayes-compatibility criterion was recast in terms of predictive distributions, thereby making support applicable to composite hypotheses. To qualify as a measure of support, a statistic must asymptotically mimic the difference between the posterior and prior log-odds, where the parameter distributions considered are physical in the empirical Bayes or random effects sense that they correspond to real frequencies or proportions (Robinson, 1991), whether or not the distributions can be estimated.

Generalized Bayes compatibility has advantages even when support is not used with a hypothetical prior probability. For example, defining support in terms of the difference between the posterior and prior log-odds (9) is sufficient for interpreting a given level of support in the same way for any sample size (Royall, 2000b). In other words, no sample-size calibration is necessary (cf. Bickel, 2010b).

In addition to the Bayes-compatibility condition, an optimality criterion such as one of the two lifted from information theory is needed to uniquely specify a measure of support (§4). One of the resulting minimax-optimal measures of support performed well compared to the upper bound when applied to measured levels of a single protein (§5). The standard of comparison was the difference between posterior and prior log odds that could be estimated by simultaneously using the measurements of all 20 proteins. While both the minimax support and the upper bound come close to the simultaneous-inference standard, the conservative nature of the minimax support prevented it from overshooting the target as much as did the upper bound (Fig. 1). The discrepancy between the minimax support and the upper bound will become increasingly important as the dimension of the interest parameter increases. In high-dimensional applications, overfitting will render the upper bound unusable, but minimax support will be shielded by a correspondingly high penalty factor in equation (13).

Acknowledgments

Biobase (Gentleman et al., 2004) facilitated data management. This research was partially supported by the Canada Foundation for Innovation, by the Ministry of Research and Innovation of Ontario, and by the Faculty of Medicine of the University of Ottawa.

References

  • Berger and Pericchi (1996) Berger, J. O., Pericchi, L. R., 1996. The intrinsic Bayes factor for model selection and prediction. Journal of the American Statistical Association 91 (433), 109–122.
  • Bickel (2010a) Bickel, D. R., 2010a. Minimum description length methods of medium-scale simultaneous inference. Technical Report, Ottawa Institute of Systems Biology, arXiv:1009.5981.
  • Bickel (2010b) Bickel, D. R., 2010b. Statistical inference optimized with respect to the observed sample for single or multiple comparisons. Technical Report, Ottawa Institute of Systems Biology, arXiv:1010.0694.
  • Bickel (2010c) Bickel, D. R., 2010c. The strength of statistical evidence for composite hypotheses: Inference to the best explanation. Technical Report, Ottawa Institute of Systems Biology, COBRA Preprint Series, Article 71, available at biostats.bepress.com/cobra/ps/art71.
  • Blume (2002) Blume, J. D., 2002. Likelihood methods for measuring statistical evidence. Statistics In Medicine 21 (17), 2563–2599.
  • Drmota and Szpankowski (2004) Drmota, M., Szpankowski, W., 2004. Precise minimax redundancy and regret. IEEE Transactions on Information Theory 50 (11), 2686–2707.
  • Edwards (1992) Edwards, A. W. F., 1992. Likelihood. Johns Hopkins Press, Baltimore.
  • Efron (2010a) Efron, B., 2010a. The future of indirect evidence. Statistical Science 25 (2), 145–157.
  • Efron (2010b) Efron, B., 2010b. Large-Scale Inference: Empirical Bayes Methods for Estimation, Testing, and Prediction. Cambridge University Press.
  • Efron et al. (2001) Efron, B., Tibshirani, R., Storey, J. D., Tusher, V., 2001. Empirical Bayes analysis of a microarray experiment. J. Am. Stat. Assoc. 96 (456), 1151–1160.
  • Fisher (1973) Fisher, R. A., 1973. Statistical Methods and Scientific Inference. Hafner Press, New York.
  • Gentleman et al. (2004) Gentleman, R. C., Carey, V. J., Bates, D. M., et al., 2004. Bioconductor: Open software development for computational biology and bioinformatics. Genome Biology 5, R80.
  • Grünwald (2007) Grünwald, P. D., 2007. The Minimum Description Length Principle. The MIT Press, London.
  • Hu and Zidek (2002) Hu, F. F., Zidek, J., 2002. The weighted likelihood. Canadian Journal of Statistics 30 (3), 347–371.
  • Jeffreys (1948) Jeffreys, H., 1948. Theory of Probability. Oxford University Press, London.
  • Kullback (1968) Kullback, S., 1968. Information Theory and Statistics. Dover, New York.
  • Li (2009) Li, X., 2009. ProData. Bioconductor.org documentation for the ProData package.
  • Qiu et al. (2005) Qiu, X., Klebanov, L., Yakovlev, A., 2005. Correlation between gene expression levels and limitations of the empirical Bayes methodology for finding differentially expressed genes. Statistical Applications in Genetics and Molecular Biology 4 (1), i–30.
  • Rissanen (1987) Rissanen, J., 1987. Stochastic complexity. Journal of the Royal Statistical Society, Series B (Methodological) 49 (3), 223–239.
  • Rissanen (2007) Rissanen, J., 2007. Information and Complexity in Statistical Modeling. Springer, New York.
  • Rissanen (2009) Rissanen, J., 2009. Model selection and testing by the MDL principle. Information Theory and Statistical Learning. Springer, New York, Ch. 2, pp. 25–43.
  • Robinson (1991) Robinson, G. K., 1991. That BLUP is a good thing: The estimation of random effects. Statistical Science 6 (1), 15–32.
  • Royall (2000a) Royall, R., 2000a. On the probability of observing misleading statistical evidence. Journal of the American Statistical Association 95 (451), 760–768.
  • Royall (2000b) Royall, R., 2000b. Rejoinder to comments on R. Royall, “On the probability of observing misleading statistical evidence”. Journal of the American Statistical Association 95 (451), 773–780.
  • Shtarkov (1987) Shtarkov, Y. M., 1987. Universal sequential coding of single messages. Problems of Information Transmission 23 (3), 175–186.
  • Wang and Zidek (2005) Wang, X., Zidek, J. V., 2005. Derivation of mixture distributions and weighted likelihood function as minimizers of KL-divergence subject to constraints. Annals of the Institute of Statistical Mathematics 57 (4), 687–701.
  • Yang and Bickel (2010) Yang, Y., Bickel, D. R., 2010. Minimum description length and empirical Bayes methods of identifying SNPs associated with disease. Technical Report, Ottawa Institute of Systems Biology, COBRA Preprint Series, Article 74, available at biostats.bepress.com/cobra/ps/art74.
  • Zhang (2009) Zhang, Z., 2009. A law of likelihood for composite hypotheses. arXiv:0901.0463.

David R. Bickel
Ottawa Institute of Systems Biology
Department of Biochemistry, Microbiology, and Immunology
University of Ottawa
451 Smyth Road
Ottawa, Ontario, K1H 8M5
dbickel@uottawa.ca
