Entropy inference and the James-Stein estimator, with application to nonlinear gene association networks

Abstract

We present a procedure for effective estimation of entropy and mutual information from small-sample data, and apply it to the problem of inferring high-dimensional gene association networks. Specifically, we develop a James-Stein-type shrinkage estimator, resulting in a procedure that is highly efficient statistically as well as computationally. Despite its simplicity, we show that it outperforms eight other entropy estimation procedures across a diverse range of sampling scenarios and data-generating models, even in cases of severe undersampling. We illustrate the approach by analyzing E. coli gene expression data and computing an entropy-based gene-association network from gene expression data. A computer program is available that implements the proposed shrinkage estimator.

Keywords: Entropy, shrinkage estimation, James-Stein estimator, "small $n$, large $p$" setting, mutual information, gene association network.

Acknowledgments This work was partially supported by an Emmy Noether grant of the Deutsche Forschungsgemeinschaft (to K.S.). We thank the anonymous referees and the editor for very helpful comments.

1 Introduction

Entropy is a fundamental quantity in statistics and machine learning. It has a large number of applications, for example in astronomy, cryptography, signal processing, statistics, physics, image analysis, neuroscience, network theory, and bioinformatics; see, for example, Stinson (2006), Yeo and Burge (2004), MacKay (2003) and Strong et al. (1998). Here we focus on estimating entropy from small-sample data, with applications in genomics and gene network inference in mind (Margolin et al., 2006; Meyer et al., 2007).

To define the Shannon entropy, consider a categorical random variable with alphabet size $p$ and associated cell probabilities $\theta_1, \ldots, \theta_p$, with $\theta_k > 0$ and $\sum_{k=1}^{p} \theta_k = 1$. Throughout the article, we assume that $p$ is fixed and known. In this setting, the Shannon entropy in natural units is given by1

$H = -\sum_{k=1}^{p} \theta_k \log \theta_k$     (1)

In practice, the underlying cell probabilities $\theta_k$ are unknown, hence $H$ and the $\theta_k$ need to be estimated from the observed cell counts $y_k$.

A particularly simple and widely used estimator of entropy is the maximum likelihood (ML) estimator

$\hat{H}^{ML} = -\sum_{k=1}^{p} \hat{\theta}_k^{ML} \log \hat{\theta}_k^{ML},$

constructed by plugging the ML frequency estimates

$\hat{\theta}_k^{ML} = \frac{y_k}{n}$     (2)

into Eq. 1, with $n = \sum_{k=1}^{p} y_k$ being the total number of counts.
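As a brief illustration, the ML plugin estimator can be computed in a few lines of R; this is a minimal sketch, and the count vector y used here is merely a hypothetical example.

    # ML (plugin) entropy estimate from a vector of counts y
    entropy.ml <- function(y) {
      theta.ml <- y / sum(y)                  # ML frequency estimates (Eq. 2)
      theta.pos <- theta.ml[theta.ml > 0]     # convention: 0 * log(0) = 0
      -sum(theta.pos * log(theta.pos))        # Shannon entropy in natural units (Eq. 1)
    }
    y <- c(4, 2, 3, 0, 2, 4, 0, 0, 2, 3)      # hypothetical counts over p = 10 cells
    entropy.ml(y)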

In situations with $n \gg p$, that is, when the dimension is low and there are many observations, it is easy to infer entropy reliably, and it is well known that in this case the ML estimator is optimal. However, in high-dimensional problems with $n \ll p$ it becomes extremely challenging to estimate the entropy. Specifically, in the "small $n$, large $p$" regime the ML estimator performs very poorly and severely underestimates the true entropy.

While entropy estimation has a long history tracing back more than 50 years, it is only recently that the specific issues arising in high-dimensional, undersampled data sets have attracted attention. This has led to two recent innovations, namely the NSB algorithm (Nemenman et al., 2002) and the Chao-Shen estimator (Chao and Shen, 2003), both of which are now widely considered benchmarks for the small-sample entropy estimation problem (Vu et al., 2007).

Here, we introduce a novel and highly efficient small-sample entropy estimator based on James-Stein shrinkage (Gruber, 1998). Our method is fully analytic and hence computationally inexpensive. Moreover, our procedure simultaneously provides estimates of the entropy and of the cell frequencies suitable for plugging into the Shannon entropy formula (Eq. 1). Thus, in comparison the estimator we propose is simpler, very efficient, and at the same time more versatile than currently available entropy estimators.

2 Conventional Methods for Estimating Entropy

Entropy estimators can be divided into two groups: (i) methods that rely on estimates of the cell frequencies, and (ii) estimators that infer the entropy directly, without estimating a compatible set of cell frequencies $\theta_k$. Most methods discussed below fall into the first group, except for the Miller-Madow and NSB approaches.

2.1 Maximum Likelihood Estimate

The connection between the observed counts $y_k$ and the frequencies $\theta_k$ is given by the multinomial distribution

$\mathrm{Prob}(y_1, \ldots, y_p; \theta_1, \ldots, \theta_p) = \frac{n!}{\prod_{k=1}^{p} y_k!} \prod_{k=1}^{p} \theta_k^{y_k}.$     (3)

Note that $\theta_k > 0$, because otherwise the distribution is singular. In contrast, there may be (and often are) zero counts $y_k = 0$. The ML estimator of $\theta_k$ maximizes the right-hand side of Eq. 3 for fixed $y_k$, leading to the observed frequencies $\hat{\theta}_k^{ML} = y_k / n$, with variances $\mathrm{Var}(\hat{\theta}_k^{ML}) = \frac{\theta_k (1 - \theta_k)}{n}$ and zero bias, as $E(\hat{\theta}_k^{ML}) = \theta_k$.

2.2 Miller-Madow Estimator

While $\hat{\theta}_k^{ML}$ is unbiased, the corresponding plugin entropy estimator $\hat{H}^{ML}$ is not. First-order bias correction leads to

$\hat{H}^{MM} = \hat{H}^{ML} + \frac{m_{>0} - 1}{2n},$

where $m_{>0}$ is the number of cells with $y_k > 0$. This is known as the Miller-Madow estimator (Miller, 1955).
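A minimal sketch of this correction in R, reusing the entropy.ml function and the hypothetical counts y from the earlier snippet:

    # Miller-Madow bias-corrected entropy estimate
    entropy.mm <- function(y) {
      m.nonzero <- sum(y > 0)                          # number of cells with y_k > 0
      entropy.ml(y) + (m.nonzero - 1) / (2 * sum(y))   # first-order bias correction
    }
    entropy.mm(y)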

2.3 Bayesian Estimators

Bayesian regularization of cell counts may lead to vast improvements over the ML estimator (Agresti and Hitchcock, 2005). Using the Dirichlet distribution with parameters $a_1, \ldots, a_p$ as prior, the resulting posterior distribution is also Dirichlet, with mean

$\hat{\theta}_k^{Bayes} = \frac{y_k + a_k}{n + A},$

where $A = \sum_{k=1}^{p} a_k$. The flattening constants $a_k$ play the role of pseudo-counts (compare with Eq. 2), so that $A$ may be interpreted as the a priori sample size.

a_k          Cell frequency prior               Entropy estimator
0            no prior                           maximum likelihood
1/2          Jeffreys prior (Jeffreys, 1946)    Krichevsky and Trofimov (1981)
1            Bayes-Laplace uniform prior        Holste et al. (1998)
1/p          Perks prior (Perks, 1947)          Schürmann and Grassberger (1996)
sqrt(n)/p    minimax prior (Trybula, 1958)
Table 1: Common choices for the parameters a_k of the Dirichlet prior in the Bayesian estimators of cell frequencies, and the corresponding entropy estimators.

Some common choices for $a_k$ are listed in Tab. 1, along with references to the corresponding plugin entropy estimators $\hat{H}^{Bayes} = -\sum_{k=1}^{p} \hat{\theta}_k^{Bayes} \log \hat{\theta}_k^{Bayes}$.

While the multinomial model with Dirichlet prior is standard Bayesian folklore (Gelman et al., 2004), there is no general agreement regarding which assignment of $a_k$ is best as a noninformative prior; see, for instance, the discussion in Tuyl et al. (2008) and Geisser (1984). But, as shown later in this article, choosing an inappropriate $a_k$ can easily cause the resulting estimator to perform worse than the ML estimator, thereby defeating the originally intended purpose.
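A short sketch of the corresponding plugin estimator in R, for an arbitrary common pseudocount a (the counts y are the hypothetical example from Section 1):

    # Dirichlet-regularized frequencies and plugin entropy
    entropy.dirichlet <- function(y, a) {
      p <- length(y)
      theta.bayes <- (y + a) / (sum(y) + p * a)   # posterior mean cell frequencies
      -sum(theta.bayes * log(theta.bayes))        # plugin entropy
    }
    entropy.dirichlet(y, a = 1/2)   # Jeffreys prior
    entropy.dirichlet(y, a = 1)     # Bayes-Laplace uniform prior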

2.4 NSB Estimator

The NSB approach (Nemenman et al., 2002) avoids overrelying on a particular choice of $a_k$ in the Bayes estimator by using a more refined prior. Specifically, a Dirichlet mixture prior with an infinite number of components is employed, constructed such that the resulting prior over the entropy is uniform. While the NSB estimator is one of the best entropy estimators currently available in terms of statistical properties, the Dirichlet mixture prior is computationally expensive and somewhat slow for practical applications.

2.5 Chao-Shen Estimator

Another recently proposed estimator is due to Chao and Shen (2003). This approach applies the Horvitz-Thompson estimator (Horvitz and Thompson, 1952) in combination with the Good-Turing correction (Good, 1953; Orlitsky et al., 2003) of the empirical cell probabilities to the problem of entropy estimation. The Good-Turing-corrected frequency estimates are

$\hat{\theta}_k^{GT} = \left(1 - \frac{m_1}{n}\right) \hat{\theta}_k^{ML},$

where $m_1$ is the number of singletons, that is, cells with $y_k = 1$. Used jointly with the Horvitz-Thompson estimator this results in

$\hat{H}^{CS} = -\sum_{k=1}^{p} \frac{\hat{\theta}_k^{GT} \log \hat{\theta}_k^{GT}}{1 - (1 - \hat{\theta}_k^{GT})^n},$

an estimator with remarkably good statistical properties (Vu et al., 2007).
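A sketch of this estimator in R, following the formulas just described (our own small implementation, not the reference one; the counts y are again the hypothetical example):

    # Chao-Shen estimator: Good-Turing correction plus Horvitz-Thompson weighting
    entropy.chao.shen <- function(y) {
      n <- sum(y)
      y <- y[y > 0]                              # only observed cells contribute
      m1 <- sum(y == 1)                          # number of singletons
      theta.gt <- (1 - m1 / n) * (y / n)         # Good-Turing corrected frequencies
      p.obs <- 1 - (1 - theta.gt)^n              # probability that a cell is observed at all
      -sum(theta.gt * log(theta.gt) / p.obs)     # Horvitz-Thompson weighted plugin entropy
    }
    entropy.chao.shen(y)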

3 A James-Stein Shrinkage Estimator

The contribution of this paper is to introduce an entropy estimator that employs James-Stein-type shrinkage at the level of cell frequencies. As we will show below, this leads to an entropy estimator that is highly effective, both in terms of statistical accuracy and computational complexity.

James-Stein-type shrinkage is a simple analytic device to perform regularized high-dimensional inference. It is ideally suited for small-sample settings; indeed, the original estimator (James and Stein, 1961) was designed for a single vector observation (sample size $n = 1$). A general recipe for constructing shrinkage estimators is given in Appendix A. In this section, we describe how this approach can be applied to the specific problem of estimating cell frequencies.

James-Stein shrinkage is based on averaging two very different models: a high-dimensional model with low bias and high variance, and a lower dimensional model with larger bias but smaller variance. The intensity of the regularization is determined by the relative weighting of the two models. Here we consider the convex combination

$\hat{\theta}_k^{Shrink} = \lambda t_k + (1 - \lambda)\, \hat{\theta}_k^{ML}$     (4)

where $\lambda \in [0, 1]$ is the shrinkage intensity that takes on a value between 0 (no shrinkage) and 1 (full shrinkage), and $t_k$ is the shrinkage target. A convenient choice of $t_k$ is the uniform distribution $t_k = 1/p$, which is also the maximum entropy target. Considering that $E(\hat{\theta}_k^{ML}) = \theta_k$ and using the unbiased variance estimate $\widehat{\mathrm{Var}}(\hat{\theta}_k^{ML}) = \hat{\theta}_k^{ML}(1 - \hat{\theta}_k^{ML})/(n - 1)$, we obtain (cf. Appendix A) for the shrinkage intensity

$\hat{\lambda} = \frac{1 - \sum_{k=1}^{p} (\hat{\theta}_k^{ML})^2}{(n - 1) \sum_{k=1}^{p} (t_k - \hat{\theta}_k^{ML})^2}$     (5)

Note that this also assumes a non-stochastic target $t_k$. The resulting plugin shrinkage entropy estimate is

$\hat{H}^{Shrink} = -\sum_{k=1}^{p} \hat{\theta}_k^{Shrink} \log \hat{\theta}_k^{Shrink}$     (6)
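The complete estimator fits into a few lines of R. The following is our own sketch of Eqs. 4-6 with the uniform target; the function names are chosen only for this illustration and are not the interface of the software described in Appendix B.

    # James-Stein shrinkage estimates of cell frequencies and entropy (Eqs. 4-6)
    freqs.shrink.js <- function(y, target = 1 / length(y)) {
      n <- sum(y)
      theta.ml <- y / n
      # estimated optimal shrinkage intensity (Eq. 5), truncated to [0, 1]
      lambda <- (1 - sum(theta.ml^2)) / ((n - 1) * sum((target - theta.ml)^2))
      lambda <- max(0, min(1, lambda))
      lambda * target + (1 - lambda) * theta.ml   # convex combination (Eq. 4)
    }
    entropy.shrink.js <- function(y) {
      theta <- freqs.shrink.js(y)
      -sum(theta * log(theta))                    # plugin entropy (Eq. 6)
    }
    entropy.shrink.js(y)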

Remark 1:

There is a one-to-one correspondence between the shrinkage and the Bayes estimator. If we write $t_k = a_k / A$ and $\lambda = \frac{A}{n + A}$, then $\hat{\theta}_k^{Shrink} = \hat{\theta}_k^{Bayes}$. This implies that the shrinkage estimator is an empirical Bayes estimator with a data-driven choice of the flattening constants; see also Efron and Morris (1973). For every choice of $A$ there exists an equivalent shrinkage intensity $\lambda$, and conversely, for every $\lambda$ there exists an equivalent $A = n \frac{\lambda}{1 - \lambda}$.

Remark 2:

Developing the shrinkage estimator further, we obtain an approximate estimate which in turn recovers the "pseudo-Bayes" estimator described in Fienberg and Holland (1973).

Remark 3:

The shrinkage estimator assumes a fixed and known $p$. In many practical applications this will indeed be the case, for example, if the observed counts are due to discretization (see also the data example). In addition, the shrinkage estimator appears to be robust against assuming a larger $p$ than necessary (see scenario 3 in the simulations).

Remark 4:

The shrinkage approach can easily be modified to allow multiple targets with different shrinkage intensities. For instance, using the Good-Turing estimator (Good, 1953; Orlitsky et al., 2003), one could set up a different uniform target for the non-zero and the zero counts, respectively.

4 Comparative Evaluation of Statistical Properties

In order to elucidate the relative strengths and weaknesses of the entropy estimators reviewed in the previous section, we set out to benchmark them in a simulation study covering different data-generating processes and sampling regimes.

4.1 Simulation Setup

We compared the statistical performance of all nine described estimators (maximum likelihood, Miller-Madow, four Bayesian estimators, the proposed shrinkage estimator (Eqs. 4-6), NSB and Chao-Shen) under various sampling and data-generating scenarios:

  • The dimension was fixed at $p = 1000$.

  • Sample size $n$ varied from 10, 30, 100, 300, 1000, 3000, to 10000. That is, we investigated cases of dramatic undersampling ("small $n$, large $p$") as well as situations with a larger number of observed counts.

The true cell probabilities $\theta_1, \ldots, \theta_p$ were assigned in four different fashions, corresponding to rows 1-4 in Fig. 1:

  1. Sparse and heterogeneous, following a Dirichlet distribution with parameter ,

  2. Random and homogeneous, following a Dirichlet distribution with parameter ,

  3. As in scenario 2, but with half of the cells containing structural zeros, and

  4. Following a Zipf-type power law.

For each sampling scenario and sample size, we conducted 1000 simulation runs. In each run, we generated a new set of true cell frequencies and subsequently sampled observed counts $y_1, \ldots, y_p$ from the corresponding multinomial distribution. The resulting counts were then supplied to the various entropy and cell frequency estimators, and the squared error was computed. From the 1000 repetitions we estimated the mean squared error (MSE) of the cell frequencies by averaging over the individual squared errors (except for the NSB, Miller-Madow, and Chao-Shen estimators, which do not provide estimates of cell frequencies). Similarly, we computed estimates of the MSE and bias of the inferred entropies.
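For concreteness, a single simulation run for a homogeneous Dirichlet scenario might look as follows in R. This is a sketch using the functions from the earlier snippets; the values of p and n, and the Dirichlet parameter of 1, are chosen here only for illustration and are not taken from the study above.

    set.seed(1)
    p <- 1000; n <- 100
    theta.true <- rgamma(p, shape = 1)          # Dirichlet(1, ..., 1) draw via normalized gamma variates
    theta.true <- theta.true / sum(theta.true)
    H.true <- -sum(theta.true * log(theta.true))
    y <- as.vector(rmultinom(1, size = n, prob = theta.true))   # observed counts
    c(true = H.true, ML = entropy.ml(y), shrink = entropy.shrink.js(y))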

Figure 1: Comparing the performance of nine different entropy estimators (maximum likelihood, Miller-Madow, four Bayesian estimators, the proposed shrinkage estimator, NSB and Chao-Shen) in four different sampling scenarios (rows 1 to 4). The estimators are compared in terms of MSE of the underlying cell frequencies (except for Miller-Madow, NSB, and Chao-Shen) and according to MSE and bias of the estimated entropies. The dimension is fixed at $p = 1000$ while the sample size varies from 10 to 10000.

4.2 Summary of Results from Simulations

Fig. 1 displays the results of the simulation study, which can be summarized as follows:

  • Unsurprisingly, all estimators perform well when the sample size is large.

  • The maximum likelihood and Miller-Madow estimators perform worst, except for scenario 1. Note that these estimators are inappropriate even for moderately large sample sizes. Furthermore, the bias correction of the Miller-Madow estimator is not particularly effective.

  • The minimax and Bayesian estimators tend to perform slightly better than maximum likelihood, but not by much.

  • The Bayesian estimators with pseudocounts $a_k = 1/2$ and $a_k = 1$ perform very well even for small sample sizes in scenarios 2 and 3. However, they are less efficient in scenario 4, and completely fail in scenario 1.

  • Hence, the Bayesian estimators can perform better or worse than the ML estimator, depending on the choice of the prior and on the sampling scenario.

  • The NSB, the Chao-Shen and the shrinkage estimator are all statistically very efficient, with small MSEs in all four scenarios, regardless of sample size.

  • The NSB and Chao-Shen estimators are nearly unbiased in scenario 3.

The three top-performing estimators are the NSB, the Chao-Shen, and the proposed shrinkage estimator. When it comes to estimating the entropy, these estimators can be considered identical for practical purposes. However, the shrinkage estimator is the only one that simultaneously estimates cell frequencies suitable for use with the Shannon entropy formula (Eq. 1), and it does so with high accuracy even for small samples. In comparison, the NSB estimator is by far the slowest method: in our simulations, the shrinkage estimator was faster by a factor of 1000.

5 Application to Statistical Learning of Nonlinear Gene Association Networks

In this section we illustrate how the shrinkage entropy estimator can be applied to the problem of inferring regulatory interactions between genes through estimating the nonlinear association network.

5.1 From Linear to Nonlinear Gene Association Networks

One of the aims of systems biology is to understand the interactions among genes and their products underlying the molecular mechanisms of cellular function as well as how disrupting these interactions may lead to different pathologies. To this end, an extensive literature on the problem of gene regulatory network “reverse engineering” has developed in the past decade (Friedman, 2004). Starting from gene expression or proteomics data, different statistical learning procedures have been proposed to infer associations and dependencies among genes. Among many others, methods have been proposed to enable the inference of large-scale correlation networks (Butte et al., 2000) and of high-dimensional partial correlation graphs (Dobra et al., 2004; Schäfer and Strimmer, 2005a; Meinshausen and Bühlmann, 2006), for learning vector-autoregressive (Opgen-Rhein and Strimmer, 2007a) and state space models (Rangel et al., 2004; Lähdesmäki and Shmulevich, 2008), and to reconstruct directed “causal” interaction graphs (Kalisch and Bühlmann, 2007; Opgen-Rhein and Strimmer, 2007b).

The restriction to linear models in most of this literature is due at least in part to the already substantial challenges involved in estimating linear high-dimensional dependency structures. However, cell biology offers numerous examples of threshold and saturation effects, suggesting that linear models may not be sufficient to model gene regulation and gene-gene interactions. In order to relax the linearity assumption and to capture nonlinear associations among genes, entropy-based network modeling was recently proposed in the form of the ARACNE (Margolin et al., 2006) and MRNET (Meyer et al., 2007) algorithms.

The starting point of these two methods is to compute the mutual information $\mathrm{MI}(X, Y)$ for all pairs of genes $X$ and $Y$, where $X$ and $Y$ represent the expression levels of the two genes, for instance. The mutual information is the Kullback-Leibler distance from the joint probability density to the product of the marginal probability densities:

$\mathrm{MI}(X, Y) = E\left\{ \log \frac{f(x, y)}{f(x)\, f(y)} \right\}$     (7)

The mutual information (MI) is always non-negative, symmetric, and equals zero only if $X$ and $Y$ are independent. For normally distributed variables the mutual information is closely related to the usual Pearson correlation coefficient $\rho$,

$\mathrm{MI}(X, Y) = -\frac{1}{2} \log(1 - \rho^2).$

Therefore, mutual information is a natural measure of the association between genes, regardless whether linear or nonlinear in nature.

5.2 Estimation of Mutual Information

To construct an entropy network, we first need to estimate mutual information for all pairs of genes. The entropy representation

$\mathrm{MI}(X, Y) = H(X) + H(Y) - H(X, Y)$     (8)

shows that MI can be computed from the joint and marginal entropies of the two genes $X$ and $Y$. Note that this definition is equivalent to the one given in Eq. 7, which is based on the Kullback-Leibler divergence. From Eq. 8 it is also evident that $\mathrm{MI}(X, Y)$ is the information shared between the two variables.

For gene expression data the estimation of MI and the underlying entropies is challenging due to the small sample size, which requires the use of a regularized entropy estimator such as the shrinkage approach we propose here. Specifically, we proceed as follows:

  • As a prerequisite the data must be discrete, with each measurement assuming one of a finite number of levels. If the data are not already discretized, we propose employing the simple binning algorithm of Freedman and Diaconis (1981), considering the measurements of all genes simultaneously.

  • Next, we estimate the cell frequencies of the contingency table for each pair of genes using the shrinkage approach (Eqs. 4 and 5). Note that typically the sample size $n$ is much smaller than the number of cells in the table, so simple approaches such as ML are not valid.

  • Finally, from the estimated cell frequencies we calculate the entropies $H(X)$, $H(Y)$, and $H(X, Y)$, and from these the desired $\mathrm{MI}(X, Y)$; see the sketch below.
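The three steps can be sketched in R as follows. This is our own illustration: for simplicity it bins each gene pair into a fixed number of equal-width levels (rather than using the Freedman-Diaconis rule across all genes), reuses freqs.shrink.js from the earlier snippet, and x and y are hypothetical expression profiles measured over the same samples.

    mi.shrink.sketch <- function(x, y, num.levels = 4) {
      # step 1: discretize both profiles into num.levels equal-width bins
      dx <- cut(x, seq(min(x), max(x), length.out = num.levels + 1), include.lowest = TRUE)
      dy <- cut(y, seq(min(y), max(y), length.out = num.levels + 1), include.lowest = TRUE)
      # step 2: shrinkage estimate of the joint cell frequencies (Eqs. 4 and 5)
      counts <- table(dx, dy)
      theta <- matrix(freqs.shrink.js(as.vector(counts)), nrow = num.levels)
      # step 3: marginal and joint entropies, then MI via Eq. 8
      th.x <- rowSums(theta); th.y <- colSums(theta)
      H.x <- -sum(th.x * log(th.x)); H.y <- -sum(th.y * log(th.y))
      H.xy <- -sum(theta * log(theta))
      H.x + H.y - H.xy
    }
    x <- rnorm(9); y <- 0.5 * x + rnorm(9, sd = 0.5)   # hypothetical expression profiles
    mi.shrink.sketch(x, y)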

5.3 Mutual Information Network for E. coli Stress Response Data

Figure 2: Left: Distribution of estimated mutual information values for all 5151 gene pairs of the E. coli data set. Right: Mutual information values after applying the ARACNE gene pair selection procedure. Note that most MIs have been set to zero by the ARACNE algorithm.

Figure 3: Mutual information network for the E. coli data inferred by the ARACNE algorithm based on shrinkage estimates of entropy and mutual information.

For illustration, we now analyze data from Schmidt-Heck et al. (2004), who conducted an experiment to observe the stress response in E. coli during expression of a recombinant protein. This data set was also used in previous linear network analyses, for example, in Schäfer and Strimmer (2005b). The raw data consist of 4289 protein-coding genes, on which measurements were taken at 0, 8, 15, 22, 45, 68, 90, 150, and 180 minutes. We focus on a subset of 102 differentially expressed genes as given in Schmidt-Heck et al. (2004).

Discretization of the data according to Freedman and Diaconis (1981) yielded a common set of discrete gene expression levels for all genes. From the 102 genes, we estimated MIs for all 5151 pairs of genes. For each pair, the mutual information was based on an estimated two-way contingency table. As the number of time points is only 9, this is a strongly undersampled situation, which requires the use of a regularized estimate of entropy and mutual information.

The distribution of the shrinkage estimates of mutual information for all 5151 gene pairs is shown on the left-hand side of Fig. 2. The right-hand side depicts the distribution of mutual information values after applying the ARACNE procedure, which yields 112 gene pairs with nonzero MIs.

The model selection provided by ARACNE is based on applying the data processing inequality to all gene triplets. For each triplet, the gene pair corresponding to the smallest MI is discarded, which has the effect of removing gene-gene links that correspond to indirect rather than direct interactions. This is similar to a procedure used in graphical Gaussian models where correlations are transformed into partial correlations. Thus, both the ARACNE and the MRNET algorithms can be considered devices to approximate the conditional mutual information (Meyer et al., 2007). As a result, the 112 nonzero MIs recovered by the ARACNE algorithm correspond to statistically detectable direct associations.
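A simplified sketch of this pruning step in R (our own illustration; the published ARACNE algorithm additionally uses a tolerance threshold when comparing MI values, which is omitted here). The input mi is assumed to be a symmetric matrix of pairwise mutual information estimates.

    aracne.prune <- function(mi) {
      keep <- mi
      G <- nrow(mi)
      # for every gene triplet, drop the edge with the smallest MI
      for (i in 1:(G - 2)) for (j in (i + 1):(G - 1)) for (k in (j + 1):G) {
        pairs <- rbind(c(i, j), c(i, k), c(j, k))
        weakest <- pairs[which.min(c(mi[i, j], mi[i, k], mi[j, k])), ]
        keep[weakest[1], weakest[2]] <- keep[weakest[2], weakest[1]] <- 0
      }
      keep   # MI matrix with links deemed indirect set to zero
    }
    mi <- matrix(runif(25), 5, 5); mi <- (mi + t(mi)) / 2; diag(mi) <- 0   # toy example
    aracne.prune(mi)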

The corresponding gene association network is depicted in Fig. 3. The most striking features of the graph are the "hubs" belonging to the genes hupB, sucA and nuoL. hupB is a well-known DNA-binding transcriptional regulator, whereas both nuoL and sucA are key components of the E. coli metabolism. Note that a lasso-type procedure that implicitly limits the number of edges connecting to each node, such as that of Meinshausen and Bühlmann (2006), cannot recover these hubs.

6 Discussion

We proposed a James-Stein-type shrinkage estimator for inferring entropy and mutual information from small samples. While this is a challenging problem, we showed that our approach is highly efficient both statistically and computationally despite its simplicity.

In terms of versatility, our estimator has two distinct advantages over the NSB and Chao-Shen estimators. First, in addition to estimating the entropy, it also provides the underlying multinomial frequencies for use with the Shannon formula (Eq. 1). This is useful, for instance, when mutual information is used to quantify non-linear pairwise dependencies. Second, unlike NSB, it is a fully analytic estimator.

Hence, our estimator suggests itself for applications in large scale estimation problems. To demonstrate its application in the context of genomics and systems biology, we have estimated an entropy-based gene dependency network from expression data in E. coli. This type of approach may prove helpful to overcome the limitations of linear models currently used in network analysis.

In short, we believe the proposed small-sample entropy estimator will be a valuable contribution to the growing toolbox of machine learning and statistics procedures for high-dimensional data analysis.

Appendix A: Recipe For Constructing James-Stein-type Shrinkage Estimators

The original James-Stein estimator (James and Stein, 1961) was proposed to estimate the mean $\mu$ of a multivariate normal distribution from a single (!) vector observation. Specifically, if $x$ is a sample from $N_p(\mu, I)$, then the James-Stein estimator is given by

$\hat{\mu}^{JS} = \left( 1 - \frac{p - 2}{\sum_{k=1}^{p} x_k^2} \right) x.$

Intriguingly, this estimator outperforms the maximum likelihood estimator $\hat{\mu}^{ML} = x$ in terms of mean squared error if the dimension is $p \ge 3$. Hence, the James-Stein estimator dominates the maximum likelihood estimator.

The above estimator can be slightly generalized by shrinking towards the component average $\bar{x} = \frac{1}{p} \sum_{k=1}^{p} x_k$ rather than towards zero, resulting in

$\hat{\mu}_k^{JS} = \hat{\lambda} \bar{x} + (1 - \hat{\lambda})\, x_k,$

with estimated shrinkage intensity

$\hat{\lambda} = \frac{p - 3}{\sum_{k=1}^{p} (x_k - \bar{x})^2}.$

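A small R sketch of this generalized estimator, assuming unit variance as in the formulation above (the truncation of the intensity at 1 is an extra safeguard added here):

    js.toward.mean <- function(x) {
      # estimated shrinkage intensity, truncated at 1
      lambda <- min(1, (length(x) - 3) / sum((x - mean(x))^2))
      lambda * mean(x) + (1 - lambda) * x   # shrink each component toward the average
    }
    x <- rnorm(10, mean = 2)   # one observation of a 10-dimensional normal vector
    js.toward.mean(x)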
The James-Stein shrinkage principle is very general and can be put to use in many other high-dimensional settings. In the following we summarize a simple recipe for constructing James-Stein-type shrinkage estimators along the lines of Schäfer and Strimmer (2005b) and Opgen-Rhein and Strimmer (2007a).

In short, there are two key ideas at work in James-Stein shrinkage:

  1. regularization of a high-dimensional estimator $\hat{\theta}$ by linear combination with a lower-dimensional target estimate $\hat{\theta}^{Target}$, and

  2. adaptive estimation of the shrinkage parameter $\lambda$ from the data by quadratic risk minimization.

A general form of a James-Stein-type shrinkage estimator is given by

$\hat{\theta}_k^{Shrink} = \lambda\, \hat{\theta}_k^{Target} + (1 - \lambda)\, \hat{\theta}_k$     (9)

Note that $\hat{\theta}$ and $\hat{\theta}^{Target}$ are two very different estimators (for the same underlying model!). $\hat{\theta}$, as a high-dimensional estimate with many independent components, has low bias but, for small samples, a potentially large variance. In contrast, the target estimate $\hat{\theta}^{Target}$ is low-dimensional and therefore generally less variable than $\hat{\theta}$, but at the same time also more biased. The James-Stein estimate $\hat{\theta}^{Shrink}$ is a weighted average of these two estimators, where the weight $\lambda$ is chosen in a data-driven fashion such that $\hat{\theta}^{Shrink}$ is improved in terms of mean squared error relative to both $\hat{\theta}$ and $\hat{\theta}^{Target}$.

A key advantage of James-Stein-type shrinkage is that the optimal shrinkage intensity $\lambda^{\star}$ can be calculated analytically and without knowing the true value $\theta$, via

$\lambda^{\star} = \frac{\sum_{k} \left[ \mathrm{Var}(\hat{\theta}_k) - \mathrm{Cov}(\hat{\theta}_k, \hat{\theta}_k^{Target}) - \mathrm{Bias}(\hat{\theta}_k)\, E(\hat{\theta}_k^{Target} - \hat{\theta}_k) \right]}{\sum_{k} E\left[ (\hat{\theta}_k^{Target} - \hat{\theta}_k)^2 \right]}$     (10)

A simple estimate $\hat{\lambda}^{\star}$ of the optimal shrinkage intensity is obtained by replacing all variances and covariances in Eq. 10 with their empirical counterparts, followed by truncation of $\hat{\lambda}^{\star}$ at 1 (so that $\hat{\lambda}^{\star} \le 1$ always holds).
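For completeness, a brief sketch of where Eq. 10 comes from: minimizing the summed mean squared error of the convex combination in Eq. 9 with respect to $\lambda$ and setting the derivative to zero gives

    \lambda^{\star}
      = \arg\min_{\lambda} \sum_k E\Big[\big(\lambda\,\hat{\theta}_k^{Target}
          + (1-\lambda)\,\hat{\theta}_k - \theta_k\big)^2\Big]
      = \frac{\sum_k E\big[(\hat{\theta}_k - \theta_k)\,(\hat{\theta}_k - \hat{\theta}_k^{Target})\big]}
             {\sum_k E\big[(\hat{\theta}_k^{Target} - \hat{\theta}_k)^2\big]},

and expanding the numerator into variance, covariance, and bias terms yields Eq. 10.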

Eq. 10 is discussed in detail in Schäfer and Strimmer (2005b) and Opgen-Rhein and Strimmer (2007a). More specialized versions of it are treated, for example, in Ledoit and Wolf (2003) (unbiased $\hat{\theta}_k$) and in Thompson (1968) (unbiased, univariate case with a deterministic target). A very early version (univariate with zero target) even predates the estimator of James and Stein; see Goodman (1953). For the multinormal setting of James and Stein (1961), Eq. 9 and Eq. 10 reduce to the shrinkage estimator described in Stigler (1990).

James-Stein shrinkage has an empirical Bayes interpretation (Efron and Morris, 1973). Note, however, that only the first two moments of the distributions of and need to be specified in Eq. 10. Hence, James-Stein estimation may be viewed as a quasi-empirical Bayes approach (in the same sense as in quasi-likelihood, which also requires only the first two moments).

Appendix B: Computer Implementation

The proposed shrinkage estimators of entropy and mutual information, as well as all other investigated entropy estimators, have been implemented in R (R Development Core Team, 2008). A corresponding R package “entropy” was deposited in the R archive CRAN and is accessible at the URL http://cran.r-project.org/web/packages/entropy/ under the GNU General Public License.
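A minimal usage sketch is given below; it assumes the package provides functions named entropy.shrink and entropy with a method argument, so the exact interface should be checked against the package documentation.

    # install.packages("entropy")
    library(entropy)
    y <- c(4, 2, 3, 0, 2, 4, 0, 0, 2, 3)   # hypothetical counts
    entropy.shrink(y)                       # shrinkage estimate of the entropy
    entropy(y, method = "ML")               # maximum likelihood estimate, for comparison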

Footnotes

  1. In this paper we use the following conventions: $\log$ denotes the natural logarithm (not base 2 or base 10), and we define $0 \log 0 = 0$.

References

  1. Agresti, A. and Hitchcock, D. B. (2005). Bayesian inference for categorical data analysis. Statist. Meth. Appl., 14:297–330.
  2. Butte, A. J., Tamayo, P., Slonim, D., Golub, T. R., and Kohane, I. S. (2000). Discovering functional relationships between RNA expression and chemotherapeutic susceptibility using relevance networks. Proc. Natl. Acad. Sci. USA, 97:12182–12186.
  3. Chao, A. and Shen, T.-J. (2003). Nonparametric estimation of Shannon’s index of diversity when there are unseen species. Environ. Ecol. Stat., 10:429–443.
  4. Dobra, A., Hans, C., Jones, B., Nevins, J. R., Yao, G., and West, M. (2004). Sparse graphical models for exploring gene expression data. J. Multiv. Anal., 90:196–212.
  5. Efron, B. and Morris, C. N. (1973). Stein’s estimation rule and its competitors–an empirical Bayes approach. J. Amer. Statist. Assoc., 68:117–130.
  6. Fienberg, S. E. and Holland, P. W. (1973). Simultaneous estimation of multinomial cell probabilities. J. Amer. Statist. Assoc., 68:683–691.
  7. Freedman, D. and Diaconis, P. (1981). On the histogram as a density estimator: L2 theory. Z. Wahrscheinlichkeitstheorie verw. Gebiete, 57:453–476.
  8. Friedman, N. (2004). Inferring cellular networks using probabilistic graphical models. Science, 303:799–805.
  9. Geisser, S. (1984). On prior distributions for binary trials. The American Statistician, 38:244–251.
  10. Gelman, A., Carlin, J. B., Stern, H. S., and Rubin, D. B. (2004). Bayesian Data Analysis. Chapman & Hall/CRC, Boca Raton, 2nd edition.
  11. Good, I. J. (1953). The population frequencies of species and the estimation of population parameters. Biometrika, 40:237–264.
  12. Goodman, L. A. (1953). A simple method for improving some estimators. Ann. Math. Statist., 24:114–117.
  13. Gruber, M. H. J. (1998). Improving Efficiency By Shrinkage. Marcel Dekker, Inc., New York.
  14. Holste, D., Große, I., and Herzel, H. (1998). Bayes’ estimators of generalized entropies. J. Phys. A: Math. Gen., 31:2551–2566.
  15. Horvitz, D. G. and Thompson, D. J. (1952). A generalization of sampling without replacement from a finite universe. J. Amer. Statist. Assoc., 47:663–685.
  16. James, W. and Stein, C. (1961). Estimation with quadratic loss. In Proc. Fourth Berkeley Symp. Math. Statist. Probab., volume 1, pages 361–379, Berkeley. Univ. California Press.
  17. Jeffreys, H. (1946). An invariant form for the prior probability in estimation problems. Proc. Roc. Soc. (Lond.) A, 186:453–461.
  18. Kalisch, M. and Bühlmann, P. (2007). Estimating high-dimensional directed acyclic graphs with the PC-algorithm. J. Machine Learn. Res., 8:613–636.
  19. Krichevsky, R. E. and Trofimov, V. K. (1981). The performance of universal encoding. IEEE Trans. Inf. Theory, 27:199–207.
  20. Lähdesmäki, H. and Shmulevich, I. (2008). Learning the structure of dynamic Bayesian networks from time series and steady state measurements. Mach. Learn., 71:185–217.
  21. Ledoit, O. and Wolf, M. (2003). Improved estimation of the covariance matrix of stock returns with an application to portfolio selection. J. Empir. Finance, 10:603–621.
  22. MacKay, D. J. C. (2003). Information Theory, Inference, and Learning Algorithms. Cambridge University Press, Cambridge.
  23. Margolin, A., Nemenman, I., Basso, K., Wiggins, C., Stolovitzky, G., Dalla Favera, R., and Califano, A. (2006). ARACNE: an algorithm for the reconstruction of gene regulatory networks in a mammalian cellular context. BMC Bioinformatics, 7 (Suppl. 1):S7.
  24. Meinshausen, N. and Bühlmann, P. (2006). High-dimensional graphs and variable selection with the Lasso. Ann. Statist., 34:1436–1462.
  25. Meyer, P. E., Kontos, K., Lafitte, F., and Bontempi, G. (2007). Information-theoretic inference of large transcriptional regulatory networks. EURASIP J. Bioinf. Sys. Biol., page doi:10.1155/2007/79879.
  26. Miller, G. A. (1955). Note on the bias of information estimates. In Quastler, H., editor, Information Theory in Psychology II-B, pages 95–100. Free Press, Glencoe, IL.
  27. Nemenman, I., Shafee, F., and Bialek, W. (2002). Entropy and inference, revisited. In Dietterich, T. G., Becker, S., and Ghahramani, Z., editors, Advances in Neural Information Processing Systems 14, pages 471–478, Cambridge, MA. MIT Press.
  28. Opgen-Rhein, R. and Strimmer, K. (2007a). Accurate ranking of differentially expressed genes by a distribution-free shrinkage approach. Statist. Appl. Genet. Mol. Biol., 6:9.
  29. Opgen-Rhein, R. and Strimmer, K. (2007b). From correlation to causation networks: a simple approximate learning algorithm and its application to high-dimensional plant gene expression data. BMC Systems Biology, 1:37.
  30. Orlitsky, A., Santhanam, N. P., and Zhang, J. (2003). Always Good Turing: asymptotically optimal probability estimation. Science, 302:427–431.
  31. Perks, W. (1947). Some observations on inverse probability including a new indifference rule. J. Inst. Actuaries, 73:285–334.
  32. R Development Core Team (2008). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0.
  33. Rangel, C., Angus, J., Ghahramani, Z., Lioumi, M., Sotheran, E., Gaiba, A., Wild, D. L., and Falciani, F. (2004). Modeling T-cell activation using gene expression profiling and state space modeling. Bioinformatics, 20:1361–1372.
  34. Schäfer, J. and Strimmer, K. (2005a). An empirical Bayes approach to inferring large-scale gene association networks. Bioinformatics, 21:754–764.
  35. Schäfer, J. and Strimmer, K. (2005b). A shrinkage approach to large-scale covariance matrix estimation and implications for functional genomics. Statist. Appl. Genet. Mol. Biol., 4:32.
  36. Schmidt-Heck, W., Guthke, R., Toepfer, S., Reischer, H., Duerrschmid, K., and Bayer, K. (2004). Reverse engineering of the stress response during expression of a recombinant protein. In Proceedings of the EUNITE Symposium, 10-12 June 2004, Aachen, Germany, pages 407-412. Verlag Mainz.
  37. Schürmann, T. and Grassberger, P. (1996). Entropy estimation of symbol sequences. Chaos, 6:414–427.
  38. Stigler, S. M. (1990). A Galtonian perspective on shrinkage estimators. Statistical Science, 5:147–155.
  39. Stinson, D. (2006). Cryptography: Theory and Practice. CRC Press.
  40. Strong, S. P., Koberle, R., de Ruyter van Steveninck, R., and Bialek, W. (1998). Entropy and information in neural spike trains. Phys. Rev. Letters, 80:197–200.
  41. Thompson, J. R. (1968). Some shrinkage techniques for estimating the mean. J. Amer. Statist. Assoc., 63:113–122.
  42. Trybula, S. (1958). Some problems of simultaneous minimax estimation. Ann. Math. Statist., 29:245–253.
  43. Tuyl, F., Gerlach, R., and Mengersen, K. (2008). A comparison of Bayes-Laplace, Jeffreys, and other priors: the case of zero events. The American Statistician, 62:40–44.
  44. Vu, V. Q., Yu, B., and Kass, R. E. (2007). Coverage-adjusted entropy estimation. Stat. Med., 26:4039–4060.
  45. Yeo, G. and Burge, C. B. (2004). Maximum entropy modeling of short sequence motifs with applications to RNA splicing signals. J. Comp. Biol., 11:377–394.