Entropy inference and the James-Stein estimator, with application to nonlinear gene association networks
Abstract
We present a procedure for effective estimation of entropy and mutual information from small-sample data, and apply it to the problem of inferring high-dimensional gene association networks. Specifically, we develop a James-Stein-type shrinkage estimator, resulting in a procedure that is highly efficient statistically as well as computationally. Despite its simplicity, we show that it outperforms eight other entropy estimation procedures across a diverse range of sampling scenarios and data-generating models, even in cases of severe undersampling. We illustrate the approach by analyzing E. coli gene expression data and computing an entropy-based gene-association network from gene expression data. A computer program is available that implements the proposed shrinkage estimator.
Keywords: Entropy, shrinkage estimation, James-Stein estimator, “small n, large p” setting, mutual information, gene association network.
Acknowledgments This work was partially supported by an Emmy Noether grant of the Deutsche Forschungsgemeinschaft (to K.S.). We thank the anonymous referees and the editor for very helpful comments.
1 Introduction
Entropy is a fundamental quantity in statistics and machine learning. It has a large number of applications, for example in astronomy, cryptography, signal processing, statistics, physics, image analysis, neuroscience, network theory, and bioinformatics—see, for example, Stinson (2006), Yeo and Burge (2004), MacKay (2003) and Strong et al. (1998). Here we focus on estimating entropy from small-sample data, with applications in genomics and gene network inference in mind (Margolin et al., 2006; Meyer et al., 2007).
To define the Shannon entropy, consider a categorical random variable with alphabet size p and associated cell probabilities θ_1, …, θ_p, with θ_k > 0 and ∑_{k=1}^p θ_k = 1. Throughout the article, we assume that p is fixed and known. In this setting, the Shannon entropy in natural units is given by

H = −∑_{k=1}^p θ_k log θ_k.   (1)

In practice, the underlying probability mass function is unknown, hence H and the θ_k need to be estimated from observed cell counts y_k ≥ 0.
A particularly simple and widely used estimator of entropy is the maximum likelihood (ML) estimator

Ĥ^ML = −∑_{k=1}^p θ̂_k^ML log θ̂_k^ML,

constructed by plugging the ML frequency estimates

θ̂_k^ML = y_k / n   (2)

into Eq. 1, with n = ∑_{k=1}^p y_k being the total number of counts.
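For illustration, the plug-in estimator of Eqs. 1-2 can be sketched in a few lines of Python (the function name and use of NumPy are our own):

```python
import numpy as np

def entropy_ml(y):
    """Plug-in (maximum likelihood) entropy estimate from cell counts y (Eqs. 1-2)."""
    y = np.asarray(y, dtype=float)
    theta = y / y.sum()               # ML frequency estimates: theta_k = y_k / n
    nz = theta > 0                    # convention: 0 * log(0) = 0
    return -np.sum(theta[nz] * np.log(theta[nz]))
```

For uniform counts over four cells, `entropy_ml([5, 5, 5, 5])` returns log 4 ≈ 1.386 nats, the maximum attainable entropy for p = 4.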
In situations with n ≫ p, that is, when the dimension is low and there are many observations, it is easy to infer entropy reliably, and it is well known that in this case the ML estimator is optimal. However, in high-dimensional problems with p ≫ n it becomes extremely challenging to estimate the entropy. Specifically, in the “small n, large p” regime the ML estimator performs very poorly and severely underestimates the true entropy.
While entropy estimation has a history stretching back more than 50 years, it is only recently that the specific issues arising in high-dimensional, undersampled data sets have attracted attention. This has led to two recent innovations, namely the NSB algorithm (Nemenman et al., 2002) and the Chao-Shen estimator (Chao and Shen, 2003), both of which are now widely considered benchmarks for the small-sample entropy estimation problem (Vu et al., 2007).
Here, we introduce a novel and highly efficient small-sample entropy estimator based on James-Stein shrinkage (Gruber, 1998). Our method is fully analytic and hence computationally inexpensive. Moreover, our procedure simultaneously provides estimates of the entropy and of the cell frequencies suitable for plugging into the Shannon entropy formula (Eq. 1). Thus, in comparison, the estimator we propose is simpler, very efficient, and at the same time more versatile than currently available entropy estimators.
2 Conventional Methods for Estimating Entropy
Entropy estimators can be divided into two groups: (i) methods that rely on estimates of the cell frequencies θ_k, and (ii) estimators that directly infer entropy without estimating a compatible set of θ_k. Most methods discussed below fall into the first group, except for the Miller-Madow and NSB approaches.
2.1 Maximum Likelihood Estimate
The connection between observed counts y_k and frequencies θ_k is given by the multinomial distribution

Prob(y_1, …, y_p) = ( n! / ∏_{k=1}^p y_k! ) ∏_{k=1}^p θ_k^{y_k}.   (3)

Note that θ_k > 0, because otherwise the distribution is singular. In contrast, there may be (and often are) zero counts y_k. The ML estimator of θ_k maximizes the right-hand side of Eq. 3 for fixed y, leading to the observed frequencies θ̂_k^ML = y_k / n, with variances Var(θ̂_k^ML) = θ_k (1 − θ_k)/n and E(θ̂_k^ML) = θ_k, i.e. the ML frequency estimates are unbiased.
2.2 Miller-Madow Estimator
While θ̂_k^ML is unbiased, the corresponding plug-in entropy estimator Ĥ^ML is not. First-order bias correction leads to

Ĥ^MM = Ĥ^ML + (m_{>0} − 1)/(2n),

where m_{>0} is the number of cells with y_k > 0. This is known as the Miller-Madow estimator (Miller, 1955).
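A minimal Python sketch of this first-order correction, which simply adds (m_{>0} − 1)/(2n) to the plug-in estimate (function names are ours):

```python
import numpy as np

def entropy_miller_madow(y):
    """Miller-Madow estimator: plug-in entropy plus first-order bias correction."""
    y = np.asarray(y, dtype=float)
    n = y.sum()
    theta = y / n
    nz = theta > 0
    h_ml = -np.sum(theta[nz] * np.log(theta[nz]))   # plug-in (ML) entropy
    m_pos = np.count_nonzero(y)                     # number of cells with y_k > 0
    return h_ml + (m_pos - 1) / (2.0 * n)           # bias-corrected estimate
```

Since the correction term is always non-negative, the Miller-Madow estimate never falls below the plug-in estimate, partially counteracting its downward bias.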
2.3 Bayesian Estimators
Bayesian regularization of cell counts may lead to vast improvements over the ML estimator (Agresti and Hitchcock, 2005). Using the Dirichlet distribution with parameters a_1, a_2, …, a_p as prior, the resulting posterior distribution is also Dirichlet, with mean

θ̂_k^Bayes = (y_k + a_k)/(n + A),

where A = ∑_{k=1}^p a_k. The flattening constants a_k play the role of pseudocounts (compare with Eq. 2), so that A may be interpreted as the a priori sample size.
Table 1: Common choices of the Dirichlet parameter a_k, with the corresponding cell frequency priors and plug-in entropy estimators.

a_k      Cell frequency prior                  Entropy estimator
0        no prior                              maximum likelihood
1/2      Jeffreys prior (Jeffreys, 1946)       Krichevsky and Trofimov (1981)
1        Bayes-Laplace uniform prior           Holste et al. (1998)
1/p      Perks prior (Perks, 1947)             Schürmann and Grassberger (1996)
√n/p     minimax prior (Trybula, 1958)
Some common choices for a_k are listed in Tab. 1, along with references to the corresponding plug-in entropy estimators

Ĥ^Bayes = −∑_{k=1}^p θ̂_k^Bayes log θ̂_k^Bayes.

While the multinomial model with Dirichlet prior is standard Bayesian folklore (Gelman et al., 2004), there is no general agreement regarding which assignment of a_k is best as a noninformative prior; see for instance the discussion in Tuyl et al. (2008) and Geisser (1984). But, as shown later in this article, choosing an inappropriate a_k can easily cause the resulting estimator to perform worse than the ML estimator, thereby defeating the originally intended purpose.
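As a sketch, the Bayesian plug-in estimators of Tab. 1 differ only in the pseudocount a added to each cell; the following Python function (our own naming) assumes the same pseudocount in every cell:

```python
import numpy as np

def entropy_bayes(y, a):
    """Plug-in entropy using the Dirichlet posterior-mean frequencies
    (y_k + a) / (n + p*a), with an identical pseudocount a in every cell."""
    y = np.asarray(y, dtype=float)
    p = y.size
    theta = (y + a) / (y.sum() + p * a)   # posterior mean cell frequencies
    return -np.sum(theta * np.log(theta))
```

Here a = 1/2 gives the Jeffreys-prior variant and a = 1 the Bayes-Laplace variant; any a > 0 removes zero cells, so no 0 log 0 convention is needed.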
2.4 NSB Estimator
The NSB approach (Nemenman et al., 2002) avoids over-relying on any particular choice of a_k in the Bayes estimator by using a more refined prior. Specifically, a Dirichlet mixture prior with an infinite number of components is employed, constructed such that the resulting prior over the entropy is uniform. While the NSB estimator is one of the best entropy estimators available at present in terms of statistical properties, using the Dirichlet mixture prior is computationally expensive and somewhat slow for practical applications.
2.5 Chao-Shen Estimator
Another recently proposed estimator is due to Chao and Shen (2003). This approach applies the Horvitz-Thompson estimator (Horvitz and Thompson, 1952) in combination with the Good-Turing correction (Good, 1953; Orlitsky et al., 2003) of the empirical cell probabilities to the problem of entropy estimation. The Good-Turing-corrected frequency estimates are

θ̂_k^GT = (1 − m_1/n) θ̂_k^ML,

where m_1 is the number of singletons, that is, cells with y_k = 1. Used jointly with the Horvitz-Thompson estimator this results in

Ĥ^CS = −∑_{k=1}^p [ θ̂_k^GT log θ̂_k^GT ] / [ 1 − (1 − θ̂_k^GT)^n ],

an estimator with remarkably good statistical properties (Vu et al., 2007).
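In code, the two corrections combine as follows. This Python sketch (our own naming) assumes at least one non-singleton cell, since when every observed count equals one the Good-Turing frequencies collapse to zero:

```python
import numpy as np

def entropy_chao_shen(y):
    """Chao-Shen estimator: Good-Turing-corrected frequencies weighted by
    Horvitz-Thompson inclusion probabilities."""
    y = np.asarray(y, dtype=float)
    y = y[y > 0]                           # only observed cells contribute
    n = y.sum()
    m1 = np.count_nonzero(y == 1)          # number of singletons (cells with y_k = 1)
    theta_gt = (1.0 - m1 / n) * (y / n)    # Good-Turing correction
    incl = 1.0 - (1.0 - theta_gt) ** n     # prob. that cell k appears in a sample of size n
    return -np.sum(theta_gt * np.log(theta_gt) / incl)
```

The inclusion probabilities are below one, so each summand is inflated relative to the plug-in estimator, compensating for unseen cells.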
3 A James-Stein Shrinkage Estimator
The contribution of this paper is to introduce an entropy estimator that employs James-Stein-type shrinkage at the level of cell frequencies. As we will show below, this leads to an entropy estimator that is highly effective, both in terms of statistical accuracy and computational complexity.
James-Stein-type shrinkage is a simple analytic device for performing regularized high-dimensional inference. It is ideally suited for small-sample settings; the original estimator (James and Stein, 1961) considered a sample size of n = 1. A general recipe for constructing shrinkage estimators is given in Appendix A. In this section, we describe how this approach can be applied to the specific problem of estimating cell frequencies.
James-Stein shrinkage is based on averaging two very different models: a high-dimensional model with low bias and high variance, and a lower-dimensional model with larger bias but smaller variance. The intensity of the regularization is determined by the relative weighting of the two models. Here we consider the convex combination

θ̂_k^Shrink = λ t_k + (1 − λ) θ̂_k^ML,   (4)

where λ ∈ [0, 1] is the shrinkage intensity, taking a value between 0 (no shrinkage) and 1 (full shrinkage), and t_k is the shrinkage target. A convenient choice of t_k is the uniform distribution t_k = 1/p, which is also the maximum entropy target. Considering that E(θ̂_k^ML) = θ_k and using the unbiased variance estimate V̂ar(θ̂_k^ML) = θ̂_k^ML (1 − θ̂_k^ML)/(n − 1), we obtain (cf. Appendix A) for the shrinkage intensity

λ̂* = ( 1 − ∑_{k=1}^p (θ̂_k^ML)² ) / ( (n − 1) ∑_{k=1}^p (t_k − θ̂_k^ML)² ).   (5)

Note that this also assumes a non-stochastic target t_k. The resulting plug-in shrinkage entropy estimate is

Ĥ^Shrink = −∑_{k=1}^p θ̂_k^Shrink log θ̂_k^Shrink.   (6)
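The whole procedure of Eqs. 4-6 fits in a few lines. The following Python sketch (function name is ours) returns both the entropy estimate and the shrunken frequencies, truncating the estimated intensity at 1:

```python
import numpy as np

def entropy_shrink(y):
    """Shrinkage entropy (Eqs. 4-6): shrink the ML frequencies toward the
    uniform target t_k = 1/p, then plug into the Shannon formula."""
    y = np.asarray(y, dtype=float)
    n, p = y.sum(), y.size
    theta_ml = y / n
    t = np.full(p, 1.0 / p)                          # maximum-entropy target
    denom = (n - 1) * np.sum((t - theta_ml) ** 2)    # denominator of Eq. 5
    if denom == 0:                                   # ML estimate already uniform
        lam = 1.0
    else:
        lam = min(1.0, (1.0 - np.sum(theta_ml ** 2)) / denom)
    theta = lam * t + (1.0 - lam) * theta_ml         # Eq. 4
    nz = theta > 0
    return -np.sum(theta[nz] * np.log(theta[nz])), theta
```

Because the target is strictly positive, all shrunken frequencies are positive whenever λ > 0, so zero counts no longer produce 0 log 0 terms.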
Remark 1:
There is a one-to-one correspondence between the shrinkage and the Bayes estimator. If we write λ = A/(n + A) and t_k = a_k/A, then θ̂_k^Shrink = θ̂_k^Bayes. This implies that the shrinkage estimator is an empirical Bayes estimator with a data-driven choice of the flattening constants; see also Efron and Morris (1973). For every choice of A there exists an equivalent shrinkage intensity λ. Conversely, for every λ there exists an equivalent A = λn/(1 − λ).
Remark 2:
Developing the estimated shrinkage intensity λ̂* further yields an approximate estimate of the cell frequencies that recovers the “pseudo-Bayes” estimator described in Fienberg and Holland (1973).
Remark 3:
The shrinkage estimator assumes a fixed and known p. In many practical applications this will indeed be the case, for example, if the observed counts are due to discretization (see also the data example). In addition, the shrinkage estimator appears to be robust against assuming a larger p than necessary (see scenario 3 in the simulations).
Remark 4:
4 Comparative Evaluation of Statistical Properties
In order to elucidate the relative strengths and weaknesses of the entropy estimators reviewed in the previous section, we set out to benchmark them in a simulation study covering different data-generating processes and sampling regimes.
4.1 Simulation Setup
We compared the statistical performance of all nine described estimators (maximum likelihood, Miller-Madow, four Bayesian estimators, the proposed shrinkage estimator (Eqs. 4-6), NSB, and Chao-Shen) under various sampling and data-generating scenarios:

The dimension p was held fixed.

The sample size n was varied over a broad range. That is, we investigate cases of dramatic undersampling (“small n, large p”) as well as situations with a larger number of observed counts.
The true cell probabilities θ_1, …, θ_p were assigned in four different ways, corresponding to rows 1-4 in Fig. 1:

Sparse and heterogeneous, drawn from a Dirichlet distribution with a small concentration parameter,

Random and homogeneous, drawn from a Dirichlet distribution with a larger concentration parameter,

As in scenario 2, but with half of the cells containing structural zeros, and

Following a Zipf-type power law.
For each sampling scenario and sample size, we conducted 1000 simulation runs. In each run, we generated a new set of true cell frequencies and subsequently sampled observed counts from the corresponding multinomial distribution. The resulting counts were then supplied to the various entropy and cell frequency estimators, and the squared error was computed. From the 1000 repetitions we estimated the mean squared error (MSE) of the cell frequencies by averaging the individual squared errors (except for the NSB, Miller-Madow, and Chao-Shen estimators, which do not provide frequency estimates). Similarly, we computed estimates of the MSE and bias of the inferred entropies.
4.2 Summary of Results from Simulations
Fig. 1 displays the results of the simulation study, which can be summarized as follows:

Unsurprisingly, all estimators perform well when the sample size is large.

The maximum likelihood and Miller-Madow estimators perform worst, except in scenario 1. Note that these estimators are inappropriate even for moderately large sample sizes. Furthermore, the bias correction of the Miller-Madow estimator is not particularly effective.

The minimax and Bayesian estimators tend to perform slightly better than maximum likelihood, but not by much.

Some of the Bayesian estimators perform very well even for small sample sizes in scenarios 2 and 3. However, they are less efficient in scenario 4, and fail completely in scenario 1.

Hence, the Bayesian estimators can perform better or worse than the ML estimator, depending on the choice of the prior and on the sampling scenario.

The NSB, Chao-Shen, and shrinkage estimators are all statistically very efficient, with small MSEs in all four scenarios, regardless of sample size.

The NSB and Chao-Shen estimators are nearly unbiased in scenario 3.
The three top-performing estimators are the NSB, the Chao-Shen, and the proposed shrinkage estimator. When it comes to estimating the entropy, these estimators can be considered identical for practical purposes. However, the shrinkage estimator is the only one that simultaneously estimates cell frequencies suitable for use with the Shannon entropy formula (Eq. 1), and it does so with high accuracy even for small samples. In comparison, the NSB estimator is by far the slowest method: in our simulations, the shrinkage estimator was faster by a factor of 1000.
5 Application to Statistical Learning of Nonlinear Gene Association Networks
In this section we illustrate how the shrinkage entropy estimator can be applied to the problem of inferring regulatory interactions between genes through estimating the nonlinear association network.
5.1 From Linear to Nonlinear Gene Association Networks
One of the aims of systems biology is to understand the interactions among genes and their products underlying the molecular mechanisms of cellular function, as well as how disrupting these interactions may lead to different pathologies. To this end, an extensive literature on the problem of gene regulatory network “reverse engineering” has developed in the past decade (Friedman, 2004). Starting from gene expression or proteomics data, different statistical learning procedures have been proposed to infer associations and dependencies among genes. Among many others, methods have been proposed to enable the inference of large-scale correlation networks (Butte et al., 2000) and of high-dimensional partial correlation graphs (Dobra et al., 2004; Schäfer and Strimmer, 2005a; Meinshausen and Bühlmann, 2006), for learning vector-autoregressive (Opgen-Rhein and Strimmer, 2007a) and state space models (Rangel et al., 2004; Lähdesmäki and Shmulevich, 2008), and to reconstruct directed “causal” interaction graphs (Kalisch and Bühlmann, 2007; Opgen-Rhein and Strimmer, 2007b).
The restriction to linear models in most of the literature is owed at least in part to the already substantial challenges involved in estimating linear high-dimensional dependency structures. However, cell biology offers numerous examples of threshold and saturation effects, suggesting that linear models may not be sufficient to model gene regulation and gene-gene interactions. In order to relax the linearity assumption and to capture nonlinear associations among genes, entropy-based network modeling was recently proposed in the form of the ARACNE (Margolin et al., 2006) and MRNET (Meyer et al., 2007) algorithms.
The starting point of these two methods is to compute the mutual information MI(X, Y) for all pairs of genes X and Y, where X and Y represent the expression levels of the two genes, for instance. The mutual information is the Kullback-Leibler distance from the joint probability density f(x, y) to the product of the marginal probability densities f(x) and f(y):

MI(X, Y) = E_{f(x,y)} { log [ f(x, y) / ( f(x) f(y) ) ] }.   (7)

The mutual information (MI) is always non-negative, symmetric, and equals zero only if X and Y are independent. For normally distributed variables, the mutual information is closely related to the usual Pearson correlation coefficient ρ:

MI(X, Y) = −(1/2) log(1 − ρ²).
Therefore, mutual information is a natural measure of the association between genes, regardless of whether the association is linear or nonlinear in nature.
5.2 Estimation of Mutual Information
To construct an entropy network, we first need to estimate the mutual information for all pairs of genes. The entropy representation

MI(X, Y) = H(X) + H(Y) − H(X, Y)   (8)

shows that MI can be computed from the joint and marginal entropies of the two genes X and Y. Note that this definition is equivalent to the one given in Eq. 7, which is based on the Kullback-Leibler divergence. From Eq. 8 it is also evident that MI(X, Y) is the information shared between the two variables.
For gene expression data the estimation of MI and the underlying entropies is challenging due to the small sample size, which requires the use of a regularized entropy estimator such as the shrinkage approach we propose here. Specifically, we proceed as follows:

As a prerequisite the data must be discrete, with each measurement assuming one of a finite number of levels. If the data are not already discretized, we propose employing the simple binning algorithm of Freedman and Diaconis (1981), considering the measurements of all genes simultaneously.

Next, the bivariate cell frequencies for each pair of genes are estimated using the shrinkage estimator (Eq. 4).

Finally, from the estimated cell frequencies we calculate the entropies Ĥ(X), Ĥ(Y), and Ĥ(X, Y), and from these the desired M̂I(X, Y).
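The steps above can be sketched as follows. For brevity, this Python example uses plug-in entropies on a fixed number of bins; in the actual procedure the shrinkage estimator of Section 3 and Freedman-Diaconis binning would be substituted:

```python
import numpy as np

def plugin_entropy(counts):
    """Plug-in Shannon entropy from a table of counts."""
    counts = np.asarray(counts, dtype=float).ravel()
    theta = counts / counts.sum()
    nz = theta > 0                                 # convention: 0 * log(0) = 0
    return -np.sum(theta[nz] * np.log(theta[nz]))

def mutual_information(x, y, bins=3):
    """MI(X, Y) = H(X) + H(Y) - H(X, Y) computed from a bivariate
    contingency table of discretized measurements (Eq. 8)."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)  # discretize and cross-tabulate
    hx = plugin_entropy(joint.sum(axis=1))         # marginal entropy of X
    hy = plugin_entropy(joint.sum(axis=0))         # marginal entropy of Y
    hxy = plugin_entropy(joint)                    # joint entropy
    return hx + hy - hxy
```

For two identical discretized variables, MI equals the marginal entropy; for unrelated variables, it is close to zero.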
5.3 Mutual Information Network for E. coli Stress Response Data
For illustration, we now analyze data from Schmidt-Heck et al. (2004), who conducted an experiment to observe the stress response in E. coli during expression of a recombinant protein. This data set was also used in previous linear network analyses, for example, in Schäfer and Strimmer (2005b). The raw data consist of 4289 protein-coding genes, on which measurements were taken at 0, 8, 15, 22, 45, 68, 90, 150, and 180 minutes. We focus on a subset of differentially expressed genes as given in Schmidt-Heck et al. (2004).
Discretization of the data according to Freedman and Diaconis (1981) yielded a set of distinct gene expression levels. From the 102 genes, we estimated MIs for 5151 pairs of genes. For each pair, the mutual information was computed from an estimated bivariate contingency table. As the number of time points is only nine, this is a strongly undersampled situation, which requires the use of a regularized estimate of entropy and mutual information.
The distribution of the shrinkage estimates of mutual information for all 5151 gene pairs is shown on the left-hand side of Fig. 2. The right-hand side depicts the distribution of mutual information values after applying the ARACNE procedure, which yields 112 gene pairs with nonzero MIs.
The model selection provided by ARACNE is based on applying the data processing inequality to all gene triplets. For each triplet, the gene pair corresponding to the smallest MI is discarded, which has the effect of removing gene-gene links that correspond to indirect rather than direct interactions. This is similar to the procedure used in graphical Gaussian models, where correlations are transformed into partial correlations. Thus, both the ARACNE and the MRNET algorithms can be considered devices for approximating the conditional mutual information (Meyer et al., 2007). As a result, the 112 nonzero MIs recovered by the ARACNE algorithm correspond to statistically detectable direct associations.
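As an illustration of this pruning step, here is a simplified Python sketch that, for every triplet, marks the smallest-MI edge for removal (the actual ARACNE implementation additionally uses a tolerance threshold, which we omit):

```python
import numpy as np
from itertools import combinations

def dpi_prune(mi):
    """Zero out, for every triplet of nodes, the edge with the smallest MI;
    a simplified version of ARACNE's data-processing-inequality pruning."""
    mi = np.array(mi, dtype=float)
    weakest = set()
    for i, j, k in combinations(range(mi.shape[0]), 3):
        triplet = [(mi[i, j], (i, j)), (mi[i, k], (i, k)), (mi[j, k], (j, k))]
        weakest.add(min(triplet)[1])       # smallest-MI edge of this triplet
    out = mi.copy()
    for a, b in weakest:
        out[a, b] = out[b, a] = 0.0        # treat the weakest link as indirect
    return out
```

For a chain X-Y-Z in which X and Z interact only through Y, the two direct edges keep their MI while the indirect X-Z link is removed.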
The corresponding gene association network is depicted in Fig. 3. The most striking features of the graph are the “hubs” belonging to the genes hupB, sucA and nuoL. hupB is a well-known DNA-binding transcriptional regulator, whereas both nuoL and sucA are key components of the E. coli metabolism. Note that a Lasso-type procedure (which implicitly limits the number of edges that can connect to each node), such as that of Meinshausen and Bühlmann (2006), cannot recover such hubs.
6 Discussion
We proposed a James-Stein-type shrinkage estimator for inferring entropy and mutual information from small samples. While this is a challenging problem, we showed that our approach is highly efficient both statistically and computationally despite its simplicity.
In terms of versatility, our estimator has two distinct advantages over the NSB and Chao-Shen estimators. First, in addition to estimating the entropy, it also provides the underlying multinomial frequencies, for use with the Shannon formula (Eq. 1). This is useful, for instance, in the context of using mutual information to quantify nonlinear pairwise dependencies. Second, unlike NSB, it is a fully analytic estimator.
Hence, our estimator suggests itself for applications in large-scale estimation problems. To demonstrate its application in the context of genomics and systems biology, we have estimated an entropy-based gene dependency network from expression data in E. coli. This type of approach may prove helpful in overcoming the limitations of linear models currently used in network analysis.
In short, we believe the proposed small-sample entropy estimator will be a valuable contribution to the growing toolbox of machine learning and statistics procedures for high-dimensional data analysis.
Appendix A: Recipe For Constructing James-Stein-type Shrinkage Estimators
The original James-Stein estimator (James and Stein, 1961) was proposed to estimate the mean of a multivariate normal distribution from a single (!) vector observation. Specifically, if x is a sample from N_p(θ, I), then the James-Stein estimator is given by

θ̂_k^JS = ( 1 − (p − 2)/∑_{k=1}^p x_k² ) x_k.

Intriguingly, this estimator outperforms the maximum likelihood estimator θ̂_k^ML = x_k in terms of mean squared error if the dimension p ≥ 3. Hence, the James-Stein estimator dominates the maximum likelihood estimator.

The above estimator can be slightly generalized by shrinking towards the component average x̄ = (1/p) ∑_{k=1}^p x_k rather than towards zero, resulting in

θ̂_k^Shrink = λ̂ x̄ + (1 − λ̂) x_k,

with estimated shrinkage intensity

λ̂ = (p − 3)/∑_{k=1}^p (x_k − x̄)².
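A quick numerical sketch of this generalized estimator in Python (function name is ours; we additionally truncate the estimated intensity at 1, a common safeguard not shown in the formula above):

```python
import numpy as np

def james_stein_toward_mean(x):
    """Shrink a single observation x ~ N_p(theta, I) toward its component
    average, using the (p - 3) / sum((x_k - xbar)^2) shrinkage intensity."""
    x = np.asarray(x, dtype=float)
    p = x.size
    xbar = x.mean()
    s = np.sum((x - xbar) ** 2)
    lam = 1.0 if s == 0 else min(1.0, (p - 3) / s)   # estimated shrinkage intensity
    return lam * xbar + (1.0 - lam) * x
```

Widely dispersed components are barely shrunk, while nearly identical components are pooled almost entirely toward their common mean.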
The James-Stein shrinkage principle is very general and can be put to use in many other high-dimensional settings. In the following we summarize a simple recipe for constructing James-Stein-type shrinkage estimators along the lines of Schäfer and Strimmer (2005b) and Opgen-Rhein and Strimmer (2007a).
In short, there are two key ideas at work in James-Stein shrinkage:

regularization of a high-dimensional estimator θ̂ by linear combination with a lower-dimensional target estimate θ̂^Target, and

adaptive estimation of the shrinkage parameter λ from the data by quadratic risk minimization.
A general form of a James-Stein-type shrinkage estimator is given by

θ̂^Shrink = λ θ̂^Target + (1 − λ) θ̂.   (9)

Note that θ̂ and θ̂^Target are two very different estimators (for the same underlying model!). θ̂, as a high-dimensional estimate with many independent components, has low bias but, for small samples, a potentially large variance. In contrast, the target estimate θ̂^Target is low-dimensional and therefore generally less variable than θ̂, but at the same time more biased. The James-Stein estimate θ̂^Shrink is a weighted average of these two estimators, where the weight λ is chosen in a data-driven fashion such that θ̂^Shrink is improved in terms of mean squared error relative to both θ̂ and θ̂^Target.

A key advantage of James-Stein-type shrinkage is that the optimal shrinkage intensity λ* can be calculated analytically and without knowing the true value θ, via

λ* = ∑_{k=1}^p [ Var(θ̂_k) − Cov(θ̂_k, θ̂_k^Target) − Bias(θ̂_k) E(θ̂_k^Target − θ̂_k) ] / E[ ∑_{k=1}^p (θ̂_k^Target − θ̂_k)² ].   (10)

A simple estimate λ̂* is obtained by replacing all variances and covariances in Eq. 10 with their empirical counterparts, followed by truncation of λ̂* at 1 (so that λ̂* ≤ 1 always holds).
Eq. 10 is discussed in detail in Schäfer and Strimmer (2005b) and Opgen-Rhein and Strimmer (2007a). More specialized versions of it are treated, for example, in Ledoit and Wolf (2003) for an unbiased θ̂, and in Thompson (1968) for the unbiased univariate case with a deterministic target. A very early version (univariate with zero target) even predates the estimator of James and Stein; see Goodman (1953). For the multinormal setting of James and Stein (1961), Eq. 9 and Eq. 10 reduce to the shrinkage estimator described in Stigler (1990).
James-Stein shrinkage has an empirical Bayes interpretation (Efron and Morris, 1973). Note, however, that only the first two moments of the distributions of θ̂ and θ̂^Target need to be specified in Eq. 10. Hence, James-Stein estimation may be viewed as a quasi-empirical Bayes approach (in the same sense as in quasi-likelihood, which also requires only the first two moments).
Appendix B: Computer Implementation
The proposed shrinkage estimators of entropy and mutual information, as well as all other investigated entropy estimators, have been implemented in R (R Development Core Team, 2008). The corresponding R package “entropy” has been deposited in the CRAN archive and is accessible at the URL http://cran.r-project.org/web/packages/entropy/ under the GNU General Public License.
Footnotes
In this paper we use the following conventions: log denotes the natural logarithm (not base 2 or base 10), and we define 0 log 0 = 0.
References
 Agresti, A. and Hitchcock, D. B. (2005). Bayesian inference for categorical data analysis. Statist. Meth. Appl., 14:297–330.
 Butte, A. J., Tamayo, P., Slonim, D., Golub, T. R., and Kohane, I. S. (2000). Discovering functional relationships between RNA expression and chemotherapeutic susceptibility using relevance networks. Proc. Natl. Acad. Sci. USA, 97:12182–12186.
 Chao, A. and Shen, T.J. (2003). Nonparametric estimation of Shannon’s index of diversity when there are unseen species. Environ. Ecol. Stat., 10:429–443.
 Dobra, A., Hans, C., Jones, B., Nevins, J. R., Yao, G., and West, M. (2004). Sparse graphical models for exploring gene expression data. J. Multiv. Anal., 90:196–212.
 Efron, B. and Morris, C. N. (1973). Stein’s estimation rule and its competitors–an empirical Bayes approach. J. Amer. Statist. Assoc., 68:117–130.
 Fienberg, S. E. and Holland, P. W. (1973). Simultaneous estimation of multinomial cell probabilities. J. Amer. Statist. Assoc., 68:683–691.
 Freedman, D. and Diaconis, P. (1981). On the histogram as a density estimator: L2 theory. Z. Wahrscheinlichkeitstheorie verw. Gebiete, 57:453–476.
 Friedman, N. (2004). Inferring cellular networks using probabilistic graphical models. Science, 303:799–805.
 Geisser, S. (1984). On prior distributions for binary trials. The American Statistician, 38:244–251.
 Gelman, A., Carlin, J. B., Stern, H. S., and Rubin, D. B. (2004). Bayesian Data Analysis. Chapman & Hall/CRC, Boca Raton, 2nd edition.
 Good, I. J. (1953). The population frequencies of species and the estimation of population parameters. Biometrika, 40:237–264.
 Goodman, L. A. (1953). A simple method for improving some estimators. Ann. Math. Statist., 24:114–117.
 Gruber, M. H. J. (1998). Improving Efficiency By Shrinkage. Marcel Dekker, Inc., New York.
 Holste, D., Große, I., and Herzel, H. (1998). Bayes’ estimators of generalized entropies. J. Phys. A: Math. Gen., 31:2551–2566.
 Horvitz, D. G. and Thompson, D. J. (1952). A generalization of sampling without replacement from a finite universe. J. Amer. Statist. Assoc., 47:663–685.
 James, W. and Stein, C. (1961). Estimation with quadratic loss. In Proc. Fourth Berkeley Symp. Math. Statist. Probab., volume 1, pages 361–379, Berkeley. Univ. California Press.
 Jeffreys, H. (1946). An invariant form for the prior probability in estimation problems. Proc. Roy. Soc. (Lond.) A, 186:453–461.
 Kalisch, M. and Bühlmann, P. (2007). Estimating high-dimensional directed acyclic graphs with the PC-algorithm. J. Machine Learn. Res., 8:613–636.
 Krichevsky, R. E. and Trofimov, V. K. (1981). The performance of universal encoding. IEEE Trans. Inf. Theory, 27:199–207.
 Lähdesmäki, H. and Shmulevich, I. (2008). Learning the structure of dynamic Bayesian networks from time series and steady state measurements. Mach. Learn., 71:185–217.
 Ledoit, O. and Wolf, M. (2003). Improved estimation of the covariance matrix of stock returns with an application to portfolio selection. J. Empir. Finance, 10:603–621.
 MacKay, D. J. C. (2003). Information Theory, Inference, and Learning Algorithms. Cambridge University Press, Cambridge.
 Margolin, A., Nemenman, I., Basso, K., Wiggins, C., Stolovitzky, G., Dalla Favera, R., and Califano, A. (2006). ARACNE: an algorithm for the reconstruction of gene regulatory networks in a mammalian cellular context. BMC Bioinformatics, 7 (Suppl. 1):S7.
 Meinshausen, N. and Bühlmann, P. (2006). Highdimensional graphs and variable selection with the Lasso. Ann. Statist., 34:1436–1462.
 Meyer, P. E., Kontos, K., Lafitte, F., and Bontempi, G. (2007). Informationtheoretic inference of large transcriptional regulatory networks. EURASIP J. Bioinf. Sys. Biol., page doi:10.1155/2007/79879.
 Miller, G. A. (1955). Note on the bias of information estimates. In Quastler, H., editor, Information Theory in Psychology IIB, pages 95–100. Free Press, Glencoe, IL.
 Nemenman, I., Shafee, F., and Bialek, W. (2002). Entropy and inference, revisited. In Dietterich, T. G., Becker, S., and Ghahramani, Z., editors, Advances in Neural Information Processing Systems 14, pages 471–478, Cambridge, MA. MIT Press.
 Opgen-Rhein, R. and Strimmer, K. (2007a). Accurate ranking of differentially expressed genes by a distribution-free shrinkage approach. Statist. Appl. Genet. Mol. Biol., 6:9.
 Opgen-Rhein, R. and Strimmer, K. (2007b). From correlation to causation networks: a simple approximate learning algorithm and its application to high-dimensional plant gene expression data. BMC Systems Biology, 1:37.
 Orlitsky, A., Santhanam, N. P., and Zhang, J. (2003). Always Good Turing: asymptotically optimal probability estimation. Science, 302:427–431.
 Perks, W. (1947). Some observations on inverse probability including a new indifference rule. J. Inst. Actuaries, 73:285–334.
 R Development Core Team (2008). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3900051070.
 Rangel, C., Angus, J., Ghahramani, Z., Lioumi, M., Sotheran, E., Gaiba, A., Wild, D. L., and Falciani, F. (2004). Modeling T-cell activation using gene expression profiling and state space modeling. Bioinformatics, 20:1361–1372.
 Schäfer, J. and Strimmer, K. (2005a). An empirical Bayes approach to inferring largescale gene association networks. Bioinformatics, 21:754–764.
 Schäfer, J. and Strimmer, K. (2005b). A shrinkage approach to largescale covariance matrix estimation and implications for functional genomics. Statist. Appl. Genet. Mol. Biol., 4:32.
 SchmidtHeck, W., Guthke, R., Toepfer, S., Reischer, H., Duerrschmid, K., and Bayer, K. (2004). Reverse engineering of the stress response during expression of a recombinant protein. In ?, editor, Proceedings of the EUNITE symposium, 1012 June 2004, Aachen, Germany, volume ?, pages 407–412, ? Verlag Mainz.
 Schürmann, T. and Grassberger, P. (1996). Entropy estimation of symbol sequences. Chaos, 6:414–427.
 Stigler, S. M. (1990). A Galtonian perspective on shrinkage estimators. Statistical Science, 5:147–155.
 Stinson, D. (2006). Cryptography: Theory and Practice. CRC Press.
 Strong, S. P., Koberle, R., de Ruyter van Steveninck, R., and Bialek, W. (1998). Entropy and information in neural spike trains. Phys. Rev. Letters, 80:197–200.
 Thompson, J. R. (1968). Some shrinkage techniques for estimating the mean. J. Amer. Statist. Assoc., 63:113–122.
 Trybula, S. (1958). Some problems of simultaneous minimax estimation. Ann. Math. Statist., 29:245–253.
 Tuyl, F., Gerlach, R., and Mengersen, K. (2008). A comparison of BayesLaplace, Jeffreys, and other priors: the case of zero events. The American Statistician, 62:40–44.
 Vu, V. Q., Yu, B., and Kass, R. E. (2007). Coverage-adjusted entropy estimation. Stat. Med., 26:4039–4060.
 Yeo, G. and Burge, C. B. (2004). Maximum entropy modeling of short sequence motifs with applications to RNA splicing signals. J. Comp. Biol., 11:377–394.