Bayesian estimators of the Gamma distribution

A. Llera, C. F. Beckmann.

Technical report

Donders Institute for Brain, Cognition and Behaviour.

Abstract

In this paper we introduce two Bayesian estimators for learning the parameters of the Gamma distribution. The first algorithm uses a well known unnormalized conjugate prior for the Gamma shape, and the second one uses a non-linear approximation to the likelihood and a prior on the shape that is conjugate to the approximated likelihood. In both cases we use the Laplace approximation to compute the required expectations. We perform a theoretical comparison between maximum likelihood and the presented Bayesian algorithms that allows us to provide non-informative values for the priors' hyper parameters. We also provide a numerical comparison using synthetic data. The introduction of these novel Bayesian estimators opens the possibility of including Gamma distributions in more complex Bayesian structures, e.g. variational Bayesian mixture models.

## 1 Introduction

The Gamma distribution is the most commonly considered distribution when modeling positive data [1, 11, 9, 6]. The fastest and oldest method used to estimate the parameters of a Gamma distribution is the Method of Moments (MM). This method can be applied to the Gamma distribution since there exists a unique relationship between its first two moments and its parameters. The MM uses a Gaussian approximation to the moments to provide a closed form parameter estimation. Obviously this method is exact for the Gaussian distribution and less accurate for distributions deviating from Gaussianity. Another classic approach for parameter estimation is the well known maximum likelihood (ML), based on the maximization of the data log-likelihood; for the Gamma distribution there exists a closed form ML derivation for its scale parameter but not for the shape parameter. Two interesting algorithms to deal with this issue can be found in [8, 9]. One of them uses a linear approximation to the otherwise intractable terms appearing in the derivative of the log-likelihood, and the second one uses a more advanced non-linear approximation to the log-likelihood to obtain a higher convergence rate [8, 9]. While the MM and ML approaches provide point estimates for the distribution parameter values, a full Bayesian estimation approach introduces distributions over the parameters; when choosing conjugate prior distributions over the parameters [7, 2], the posterior expectations can be analytically derived. The conjugate prior for the Gamma rate parameter is known to be Gamma distributed, but there exists no proper conjugate prior for the shape parameter.

In section 2 we present the Bayesian methods proposed in this article. In subsection 2.1 we review the conjugate prior for the rate (inverse scale) parameter. Since for the shape parameter there exists no proper conjugate prior distribution, we propose two different strategies to deal with this issue. In subsection 2.2 we consider the use of an unnormalized conjugate prior, and in section 2.3 we introduce an unnormalized prior on the shape which is conjugate with the accurate non-linear likelihood approximation presented in [8, 9]. In both cases we use the Laplace approximation to compute the required expectations. In section 3.1 we provide a theoretical comparison between the two ML algorithms presented in [8, 9] and the two presented Bayesian strategies. In section 3.2 we present a numerical comparison between the four algorithms. In section 3.3 we briefly consider the hyper parameter value choices for the Bayesian algorithms. We conclude the paper with a brief discussion in section 4. For completeness of the methodology and ease of reading, in Appendix A we review the Method of Moments estimation, and in Appendix B the two Maximum Likelihood algorithms introduced in [8, 9].

## 2 Bayesian Learning (BL) for the Gamma distribution

Consider the Gamma distribution defined over the positive real support,

$$G(x\mid\alpha,\beta) = \frac{x^{\alpha-1}}{\Gamma(\alpha)\,\beta^{\alpha}}\exp\!\left(-\frac{x}{\beta}\right), \qquad (1)$$

for any positive real parameter values α and β, denoting the distribution shape and scale respectively, and where Γ(α) is the Gamma function. Given a vector of positive real values x = (x_1, …, x_n), in the following subsections we present two different Bayesian strategies to find posterior estimations α̂ and β̂ for the distribution parameters α and β.
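To make the parametrization concrete, the density (1) can be evaluated directly. The following Python sketch (illustrative only; function names and test values are ours, not from the report) implements equation (1) under the shape/scale convention above and numerically checks that the density integrates to one:

```python
import math

def gamma_pdf(x, alpha, beta):
    # Gamma density of equation (1): shape alpha, scale beta
    return (x ** (alpha - 1) * math.exp(-x / beta)
            / (math.gamma(alpha) * beta ** alpha))

# Sanity check: a midpoint-rule integral over (0, 60) should be close to 1
# for parameter values whose mass lies well below 60.
dx = 0.001
total = sum(gamma_pdf((i + 0.5) * dx, 2.5, 1.5) * dx for i in range(60000))
```

The test values α = 2.5, β = 1.5 and the integration grid are arbitrary choices made for the check.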

To find the posterior probability of the Gamma parameters θ = (α, β), we use Bayes rule

$$p(\theta\mid x)=\frac{p(x\mid\theta)\,p(\theta)}{p(x)}, \qquad (2)$$

where x is some positive vector of observations. Since the denominator depends only on the data, the posterior is proportional to the likelihood multiplied by the prior

$$p(\theta\mid x)\propto p(x\mid\theta)\,p(\theta). \qquad (3)$$

Obtaining analytical solutions for the parameters requires the use of conjugate priors [2, 4]. A prior is called conjugate with a likelihood function if the prior functional form remains unchanged after multiplication by the likelihood.

### 2.1 Rate parameter R = 1/β

A well known conjugate prior for R, the rate parameter of the Gamma distribution, is a Gamma distribution parametrized using shape d and rate e,

$$p(R)=G(R\mid d,e). \qquad (4)$$

Given the observations vector x, and multiplying its Gamma likelihood by the prior on the rate (4), we get its posterior, G(R|d̂, ê), with

$$\hat d = d + n\alpha, \qquad \hat e = e + \sum_{i=1}^{n} x_i. \qquad (5)$$

Since R is Gamma distributed, its posterior expectation is

$$\hat R = \frac{\hat d}{\hat e}. \qquad (6)$$

Since the scale is the inverse of the rate, we have β̂ = 1/R̂ = ê/d̂.
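The rate update (4)–(6) can be sketched in a few lines of Python (assuming the shape α is known; function and variable names are ours, not from the report):

```python
import random

def posterior_rate(x, alpha, d=1e-6, e=1e-6):
    # Posterior hyper parameters of eq. (5) and posterior mean of eq. (6)
    n = len(x)
    d_hat = d + n * alpha
    e_hat = e + sum(x)
    return d_hat, e_hat, d_hat / e_hat

# Check on synthetic data: with alpha known and a weak prior, the implied
# scale estimate beta_hat = 1 / R_hat should be close to the true scale.
random.seed(0)
alpha_true, beta_true = 3.0, 2.0
x = [random.gammavariate(alpha_true, beta_true) for _ in range(20000)]
_, _, R_hat = posterior_rate(x, alpha_true)
beta_hat = 1.0 / R_hat
```

The weak-prior defaults d = e = 10⁻⁶ are illustrative choices anticipating the non-informative limit discussed in section 3.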

### 2.2 Unnormalized prior on shape parameter α (BL1)

The unnormalized prior used for the shape parameter of the Gamma distribution has the form

$$p(\alpha)\propto \frac{a^{\alpha-1}\,R^{\alpha c}}{\Gamma(\alpha)^{b}}, \qquad (7)$$

where R is the Gamma rate parameter and a, b, c are hyper parameters. Given some observations x, we multiply their likelihood under the Gamma distribution by the prior on the shape (7) to obtain an expression for the posterior distribution

$$q(\alpha)\propto \frac{\hat a^{\alpha-1}\,R^{\alpha \hat c}}{\Gamma(\alpha)^{\hat b}}, \qquad (8)$$

with

$$\hat a = a\prod_{i=1}^{n} x_i, \qquad \hat b = b + n, \qquad \hat c = c + n. \qquad (9)$$

Finding the posterior expectation α̂ of the shape implies computing the expectation of (8). Here we use the Laplace approximation to (7), which can be shown to be a Gaussian with mean

$$m=\Psi^{-1}\!\left(\frac{\log a + c\log R}{b}\right),$$

and precision bΨ′(m), where Ψ denotes the digamma function. Consequently we approximate the posterior expected shape by

$$\hat\alpha \approx \Psi^{-1}\!\left(\frac{\log \hat a + \hat c\log R}{\hat b}\right). \qquad (10)$$

We note here that the expectation of the Laplace approximation to (8) corresponds to the maximum a posteriori (MAP) estimate of α. The use of the Laplace approximation in this context then reduces to using a MAP estimate in place of the expected value. Note also that in order to compute the expectation (10) we need to estimate

$$\log\hat a = \log a + \sum_{i=1}^{n}\log x_i, \qquad (11)$$

and no longer require â from equation (9). This has the advantage of avoiding numerical issues for large sample sizes. Further, substituting R = d̂/ê into equation (10) we obtain an expression with no explicit R dependence, which is more compact for algorithmic use, namely

$$\hat\alpha \approx \Psi^{-1}\!\left(\frac{\log\hat a + \hat c\,(\log\hat d - \log\hat e)}{\hat b}\right). \qquad (12)$$

Algorithm 1 summarizes this first Bayesian approach for learning the Gamma parameters, which we denote as BL1. First, the MM algorithm is used to initialize α (equation (23), left panel); ê, log â, b̂ and ĉ are computed through equations (5, right side), (11) and (9). Then equation (12) is iterated till convergence to obtain the expected shape α̂. Finally d̂ is computed using equation (5, left side) and β̂ through (6).
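A possible end-to-end sketch of this procedure in Python is given below. The digamma and trigamma routines are standard asymptotic-series approximations (an implementation choice of ours, not specified in the report), the inverse digamma uses Newton iterations from Minka's starting point, and the small hyper parameter defaults anticipate the non-informative limits discussed in section 3:

```python
import math, random

def digamma(x):
    # recurrence to push x above 6, then asymptotic expansion
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    x2 = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - x2 * (1.0 / 12 - x2 * (1.0 / 120 - x2 / 252))

def trigamma(x):
    r = 0.0
    while x < 6.0:
        r += 1.0 / (x * x)
        x += 1.0
    x2 = 1.0 / (x * x)
    return r + 1.0 / x + 0.5 * x2 + (x2 / x) * (1.0 / 6 - x2 * (1.0 / 30 - x2 / 42))

def inv_digamma(y):
    # Newton iterations from Minka's starting point
    a = math.exp(y) + 0.5 if y >= -2.22 else -1.0 / (y + 0.5772156649015329)
    for _ in range(12):
        a = max(a - (digamma(a) - y) / trigamma(a), 1e-8)
    return a

def bl1(x, a=1.0, b=1e-4, c=1e-4, d=1e-4, e=1e-4, tol=1e-10):
    n = len(x)
    mean = sum(x) / n
    var = sum((xi - mean) ** 2 for xi in x) / (n - 1)
    alpha = mean * mean / var                        # MM initialization, eq. (23)
    log_a_hat = math.log(a) + sum(math.log(xi) for xi in x)   # eq. (11)
    b_hat, c_hat = b + n, c + n                      # eq. (9)
    e_hat = e + sum(x)                               # eq. (5), right side
    for _ in range(500):
        d_hat = d + n * alpha                        # eq. (5), left side
        new = inv_digamma((log_a_hat + c_hat * (math.log(d_hat) - math.log(e_hat)))
                          / b_hat)                   # eq. (12)
        if abs(new - alpha) < tol * alpha:
            alpha = new
            break
        alpha = new
    beta = (e + sum(x)) / (d + n * alpha)            # scale = 1/R_hat, eqs. (5), (6)
    return alpha, beta

random.seed(1)
x = [random.gammavariate(3.0, 2.0) for _ in range(5000)]
alpha_hat, beta_hat = bl1(x)
```

On the synthetic Gamma(3, 2) sample above, the estimates land close to the generating parameters.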

### 2.3 Likelihood approximation and its conjugate prior on α (BL2)

Given an initial parameter value estimation α₀, we use an approximation to the log-likelihood as presented in [8, 9], namely

$$\log G(x\mid\alpha) \approx k_0 + k_1\alpha + k_2\log\alpha. \qquad (13)$$

Learning the parameters k₀, k₁, k₂ is straightforward (see Appendix B). We then define a prior on α that is conjugate with this approximated log-likelihood, namely

$$\log p(\alpha) \propto w_0 + w_1\alpha + w_2\log\alpha, \qquad (14)$$

for any set of hyper parameter values w₀, w₁, w₂. It is straightforward to observe that the posterior values for the hyper parameters are

$$\tilde w_i = w_i + k_i, \qquad \forall i\in\{0,1,2\}. \qquad (15)$$

Once more we use the Laplace approximation to (14), which can be shown to be a Gaussian with mean

$$m = -\frac{\tilde w_2}{\tilde w_1},$$

and precision w̃₁²/w̃₂. Consequently we approximate the posterior expected shape by

$$\hat\alpha \approx -\frac{\tilde w_2}{\tilde w_1}. \qquad (16)$$

Again, the Laplace approximation corresponds to the maximum a posteriori (MAP) estimate. Algorithm 2 summarizes this alternative for Bayesian learning of the parameters of the Gamma distribution, which will be further denoted as BL2.
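Analogously, this second approach can be sketched as follows. The coefficient matching for k₁, k₂ follows the derivative matching of Appendix B, the digamma/trigamma series approximations are our own implementation choice, and with the flat prior w₁ = w₂ = 0 the update reduces to the ML2 rule of Appendix B:

```python
import math, random

def digamma(x):
    # recurrence to push x above 6, then asymptotic expansion
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    x2 = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - x2 * (1.0 / 12 - x2 * (1.0 / 120 - x2 / 252))

def trigamma(x):
    r = 0.0
    while x < 6.0:
        r += 1.0 / (x * x)
        x += 1.0
    x2 = 1.0 / (x * x)
    return r + 1.0 / x + 0.5 * x2 + (x2 / x) * (1.0 / 6 - x2 * (1.0 / 30 - x2 / 42))

def bl2(x, w1=0.0, w2=0.0, tol=1e-10):
    n = len(x)
    mean = sum(x) / n
    logx_bar = sum(math.log(xi) for xi in x) / n
    var = sum((xi - mean) ** 2 for xi in x) / (n - 1)
    alpha = mean * mean / var                  # MM initialization, eq. (23)
    for _ in range(200):
        # match k1, k2 so that (13) agrees with the profile
        # log-likelihood (26) up to second order at the current alpha
        Lp = n * (logx_bar - digamma(alpha) - math.log(mean) + math.log(alpha))
        k2 = n * alpha * alpha * (trigamma(alpha) - 1.0 / alpha)
        k1 = Lp - k2 / alpha
        new = -(w2 + k2) / (w1 + k1)           # posterior mode, eq. (16)
        if new <= 0:                           # crude safeguard against overshoot
            new = alpha / 2.0
        if abs(new - alpha) < tol * alpha:
            alpha = new
            break
        alpha = new
    return alpha, mean / alpha                 # beta from eq. (25)

random.seed(5)
x = [random.gammavariate(3.0, 2.0) for _ in range(5000)]
alpha_hat, beta_hat = bl2(x)
```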

## 3 Results

In section 3.1 we highlight the relationship between the ML algorithms presented in [8, 9] and the Bayesian ones presented here. Both ML algorithms (ML1 and ML2) can be found in Appendix B. In section 3.2 we present numerical results obtained using synthetic data, and in section 3.3 we consider the hyper parameter value setting for the Bayesian shape priors.

### 3.1 Algorithms comparison

Note that the ML and BL algorithms have notable similarities. First, the ML and the Bayesian scale estimators presented are

$$\hat\beta_{ML} = \frac{\mu}{\alpha} = \frac{\sum_{i=1}^{n}x_i}{n\alpha}, \qquad \hat\beta_{BL} = \hat R^{-1} = \frac{\hat e}{\hat d} = \frac{e + \sum_{i=1}^{n}x_i}{d + n\alpha}, \qquad (17)$$

respectively. It is easy to observe that

$$\lim_{d=e\to 0}\hat\beta_{BL} \to \hat\beta_{ML}, \qquad (18)$$

which means that the Bayesian scale estimator tends to the ML one in the limit of an infinite variance Gamma prior over R. With respect to the shape parameter α, since initializing b = c results in b̂ = ĉ (in BL1), and taking again the same limit, we can rewrite the update (12) as

$$\lim_{\substack{d=e\to 0\\ b=c}}\hat\alpha_{BL} \to \Psi^{-1}\!\left(\frac{\log\hat a}{\hat b} + \log(n\alpha) - \log\sum_{i=1}^{n}x_i\right). \qquad (19)$$

Comparing this to the ML update in the case of a linear constraint on α (ML1),

$$\hat\alpha_{ML1} = \Psi^{-1}\!\left(\log\alpha + \overline{\log x} - \log\bar x\right) = \Psi^{-1}\!\left(\overline{\log x} + \log(n\alpha) - \log\sum_{i=1}^{n}x_i\right), \qquad (20)$$

it is clear that both models perform an iterative update of the shape parameter through the inverse digamma function of the current shape estimate plus a data dependent constant which differs between ML1 and BL1; they also share the term log(nα) − log Σᵢ xᵢ, independently of the hyper parameter values used for the prior on α (under the assumption b = c). Further, since log â = log a + n · (overline) log x and b̂ = b + n, in the extreme case of considering a very small hyper parameter value for b = c, we can simplify the BL1 estimation update further, resulting in

$$\lim_{\substack{d=e\to 0\\ b=c\to 0}}\hat\alpha_{BL} \to \Psi^{-1}\!\left(\frac{\log a}{n} + \overline{\log x} + \log(n\alpha) - \log\sum_{i=1}^{n}x_i\right). \qquad (21)$$

Remembering that the posterior estimations for α and β are given by equations (12) and (6), this is a very interesting result which shows that, when choosing little informative hyper parameter values for b, c, d and e, the BL1 estimation involves a small sample bias correction term (log a)/n which tends to zero as the number of observed samples increases. Further, there is, by construction, a close relationship between ML2 and BL2; in fact they are equivalent when considering a flat prior over α, that is when w₀ = w₁ = w₂ = 0 in equation (14).
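The limit (18) is easy to verify numerically; a minimal sketch (with arbitrary illustrative data) shows the gap between the two scale estimators of equation (17) shrinking as d = e → 0:

```python
def beta_ml(x, alpha):
    # ML scale estimator, eq. (17), left
    return sum(x) / (len(x) * alpha)

def beta_bl(x, alpha, d, e):
    # Bayesian scale estimator, eq. (17), right
    return (e + sum(x)) / (d + len(x) * alpha)

x = [1.2, 0.7, 3.4, 2.1, 0.9, 1.8]      # arbitrary positive data
alpha = 2.0
gaps = [abs(beta_bl(x, alpha, de, de) - beta_ml(x, alpha))
        for de in (1.0, 1e-2, 1e-4, 1e-6)]
```

Each reduction of d = e by two orders of magnitude shrinks the gap by roughly the same factor.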

### 3.2 Numerical results

In this section we perform a numerical comparison between the two maximum likelihood (ML) algorithms presented in [8, 9] and the Bayesian approaches (BL) presented in this work. For each simulation we generate a varying number of samples from a Gamma distribution with fixed parameters α and β and apply the four considered algorithms. The parameter values α and β are initialized to different random positive numbers at each simulation. For each sample size we performed 500 different simulations. The ML and the BL algorithms are considered to have converged when the relative parameter change between two consecutive iterations falls below a fixed tolerance. In this example we use the information from the previous section to fix the hyper parameters to values for which the BL solutions tend to the ML ones; for BL1 we used small values for the hyper parameters b, c, d and e, and for BL2 the flat prior w₀ = w₁ = w₂ = 0. To assess the quality of the estimators we computed KL-divergences between the true distribution and the estimated ones. Then, for each sample size (N), we performed a paired t-test between the KL-divergences of each possible pair of algorithms and found no statistically significant differences between them. This is not a surprising result since both ML algorithms converge to the same solution and the priors in the Bayesian setting are chosen so that the Bayesian estimators tend to the ML ones. In each sub-figure of figure 1 we present error bars (mean and standard deviation) of the bias in the estimation of each parameter, shape in the first row, scale in the second one. Each algorithm is presented in a different column and the x-axis represents the number of samples used for the estimation. These results are, for the ML case, in agreement with the ones presented in [5]; the ML estimators are upwards biased for the shape and downwards biased for the scale parameter. The bias of the presented Bayesian algorithms is equivalent to that of the existing ML algorithms.

Figure 2 shows boxplots of the computational costs in seconds; the mean and the 25th and 75th percentiles are presented on each box, and the red crosses represent outliers of the estimated distributions. The cost of ML1 and BL1 is equivalent, while ML2 and BL2 are much faster. Note that in all cases the iterative process depends only on the inverse of the digamma function of a real number, so there is no sample-size dependency inside the loop.

### 3.3 Shape conjugate prior hyper parameters values

It is easy to see that a flat prior for the shape parameter in the BL2 case can be obtained by choosing w₀ = w₁ = w₂ = 0 in equation (14). On the other side, looking at equation (7) it is clear that choosing a flat prior for BL1 is not straightforward. As we showed in section 3.1, choosing small values for b = c and d = e results in a prior providing a small bias correction which tends to zero for large sample sizes. To gain more intuition we next provide some numerical examples illustrating such a prior for different hyper parameter values, and demonstrating the behavior of the posterior hyper parameter updates (equations 9, 11) and of the Laplace approximation (equation 10) used to obtain the expected shape value α̂.

We generate 1000 samples from a Gamma distribution. β̂ is computed through equation (23), left panel, followed by equations (5, 6), where a flat prior was used. For the shape prior hyper parameters we consider different values of a, b and c, and α̂ is computed independently in each case through equations (9, 11, 10). In Figure 3, the top row shows the log-prior for the parameter values indicated in the titles as a function of the shape value represented on the x-axis. The bottom row shows the corresponding log-posteriors. In all cases the red dot represents the true shape value α, and the green ones in the bottom row represent the posterior estimated value α̂.

Figure 3: Top row shows the log-prior log p(α|a, b, c, β) and the bottom row the log-posterior log p(α|â, b̂, ĉ, β̂). The red dots mark the shape parameter α from which data was generated. The green dots represent the Bayesian posterior estimation.

The expectation we obtained for the posterior of α, marked as a green circle, is a very good approximation. Since the parameter value for the scale was also estimated using Bayesian inference and is part of the prior and posterior on α, the figure also indirectly confirms the proper behaviour of the Bayesian scale estimation procedure.

These examples are representative of the general behaviour observed for a broad range of hyper parameter values a, b, c. Although this shows the weak dependence on the prior when estimating the posterior, further analyses relating the effect of the hyper parameter value of a to the well known bias correction strategies for the parameters of the Gamma distribution [5] would be of interest and are the focus of ongoing research.

## 4 Discussion

We reviewed two maximum likelihood algorithms for the estimation of the parameters of a Gamma distribution and developed two Bayesian algorithms; the first one uses an unnormalized conjugate prior for the shape parameter and the second one uses a likelihood approximation and a prior conjugate to that approximation. In both cases we use the Laplace approximation to compute the required expectations. We performed a theoretical analysis which showed which hyper parameter values for the conjugate priors recover the ML algorithms, and we also showed numerically that with such hyper parameter choices the new Bayesian algorithms converge towards the ML solutions. We also evaluated numerically the bias in the parameter estimation and showed that the Bayesian algorithms have the same bias properties as the ML ones. Including bias corrections in the Bayesian setting for small sample sizes could be achieved as shown in [5] for the ML case and is part of ongoing research. The Bayesian learning approaches introduced in section 2 have important implications since they can be used inside more complex Bayesian inference tasks, such as fitting mixture models involving Gamma distributions within a (Variational) Bayesian framework, to be used for example in the context of medical image segmentation [10]. Extending beyond the modeling of one dimensional data, the presented methodology can also be included in multivariate models, e.g. for Bayesian ICA or non-negative matrix factorizations.

## References

1. H. Aksoy. Use of gamma distribution in hydrological analysis. Journal of Engineering Environmental Science, (24):419–428, 2000.
2. C. M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer, 1st edition, 2007.
3. J. D. Cook. Inverse gamma distribution. Online: http://www.johndcook.com/inverse_gamma.pdf, Technical Report, 2008.
4. D. Fink. A compendium of conjugate priors. Technical report, 1997.
5. D. Giles and H. Feng. Bias of the maximum likelihood estimators of the two-parameter gamma distribution revisited. Econometrics Working Papers 0908, Department of Economics, University of Victoria, 2009.
6. A. Khalili, D. Potter, P. Yan, L. Li, J. Gray, T. Huang, and S. Lin. Gamma-normal-gamma mixture model for detecting differentially methylated loci in three breast cancer cell lines. Cancer Inform, 3:43–54, 2007.
7. D. J. C. MacKay. Information Theory, Inference and Learning Algorithms. Cambridge University Press, 2003.
8. T. P. Minka. Beyond Newton's method. Aug. 2000.
9. T. P. Minka. Estimating a gamma distribution. Microsoft Research, 2002.
10. M. Woolrich, T. Behrens, C. Beckmann, and S. Smith. Mixture models with adaptive spatial regularization for segmentation with an application to fMRI data. Medical Imaging, IEEE Transactions on, 24(1):1–11, Jan 2005.
11. S. Yue. A bivariate gamma distribution for use in multivariate flood frequency analysis. Hydrological Processes, 15(6):1033–1045, 2001.

## Appendix A Method of Moments (MM)

The first two moments of a Gamma distribution are

$$\mathbb{E}[x] = \alpha\beta, \qquad \mathbb{V}[x] = \alpha\beta^{2}. \qquad (22)$$

Using a Gaussian approximation to equation (22),

$$\mu \approx \alpha\beta, \qquad v \approx \alpha\beta^{2},$$

where μ and v are the mean and variance estimated from the observed data vector x, and solving the linear system for α and β we obtain the MM estimation for the Gamma distribution parameters, namely

$$\hat\alpha \approx \frac{\mu^{2}}{v}, \qquad \hat\beta \approx \frac{v}{\mu}. \qquad (23)$$

Algorithm 3 summarizes the process.
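A minimal Python sketch of the MM estimator (23) (function names are ours, and the synthetic test data is arbitrary):

```python
import random

def method_of_moments(x):
    # eq. (23): alpha_hat = mu^2 / v, beta_hat = v / mu
    n = len(x)
    mu = sum(x) / n
    v = sum((xi - mu) ** 2 for xi in x) / (n - 1)
    return mu * mu / v, v / mu

# Check on a synthetic Gamma(4, 0.5) sample
random.seed(2)
x = [random.gammavariate(4.0, 0.5) for _ in range(20000)]
alpha_hat, beta_hat = method_of_moments(x)
```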

Obviously this approximation is very fast since it simply requires estimating the mean and variance. Nevertheless, it provides increasingly biased estimations as the underlying Gamma distribution deviates from a Gaussian, i.e. as the higher order moments of the Gamma distribution deviate more from those of a Gaussian.

## Appendix B Maximum Likelihood (ML)

In this appendix we describe the two algorithms for ML learning of the Gamma distribution parameters originally presented in [8, 9]. The log-likelihood of a positive vector of observations x under the Gamma distribution (1) can be written as

$$\log G(x\mid\alpha,\beta) = n(\alpha-1)\overline{\log x} - n\log\Gamma(\alpha) - n\alpha\log\beta - n\frac{\bar x}{\beta}, \qquad (24)$$

where for ease of notation the upper bar denotes the arithmetic mean operator. Finding ML estimations for the parameters is achieved by maximizing equation (24) with respect to the parameters α and β; it is easy to verify that equation (24) has a maximum at

$$\beta = \frac{\bar x}{\alpha}, \qquad (25)$$

and that direct maximization of equation (24) with respect to α is not possible. Substituting equation (25) into equation (24) gives

$$\log G(x\mid\alpha) = n(\alpha-1)\overline{\log x} - n\log\Gamma(\alpha) - n\alpha\log\bar x + n\alpha\log\alpha - n\alpha. \qquad (26)$$

Direct maximization of equation (26) with respect to α is (obviously) also not possible. Using a linear lower bound around a given value of α, denoted α₀, we have

$$\alpha\log\alpha \ge (1+\log\alpha_0)(\alpha-\alpha_0) + \alpha_0\log\alpha_0. \qquad (27)$$

Substituting (27) into equation (26) provides a lower bound on the log-likelihood. Differentiating with respect to α, equating to zero and solving for α we have that

$$\hat\alpha = \Psi^{-1}\!\left(\log\alpha_0 + \overline{\log x} - \log\bar x\right), \qquad (28)$$

where Ψ⁻¹ represents the inverse of the digamma function Ψ. Starting from an initial value α₀, equation (28) can be iterated till convergence to obtain the desired α̂, and then β̂ can be computed using equation (25). In this work we initialize the value of α using the method of moments, and Algorithm 4 summarizes the process.
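A sketch of this first ML algorithm (ML1, equation (28)) in Python; the inverse digamma is computed by Newton iterations on series approximations of the digamma and trigamma functions (our implementation choice, not part of the original algorithms):

```python
import math, random

def digamma(x):
    # recurrence to push x above 6, then asymptotic expansion
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    x2 = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - x2 * (1.0 / 12 - x2 * (1.0 / 120 - x2 / 252))

def trigamma(x):
    r = 0.0
    while x < 6.0:
        r += 1.0 / (x * x)
        x += 1.0
    x2 = 1.0 / (x * x)
    return r + 1.0 / x + 0.5 * x2 + (x2 / x) * (1.0 / 6 - x2 * (1.0 / 30 - x2 / 42))

def inv_digamma(y):
    # Newton iterations from Minka's starting point
    a = math.exp(y) + 0.5 if y >= -2.22 else -1.0 / (y + 0.5772156649015329)
    for _ in range(12):
        a = max(a - (digamma(a) - y) / trigamma(a), 1e-8)
    return a

def ml1(x, tol=1e-10, max_iter=2000):
    n = len(x)
    mean = sum(x) / n
    logx_bar = sum(math.log(xi) for xi in x) / n
    alpha = mean * mean / (sum((xi - mean) ** 2 for xi in x) / (n - 1))  # MM init, eq. (23)
    for _ in range(max_iter):
        new = inv_digamma(math.log(alpha) + logx_bar - math.log(mean))   # eq. (28)
        if abs(new - alpha) < tol * alpha:
            alpha = new
            break
        alpha = new
    return alpha, mean / alpha                                           # eq. (25)

random.seed(3)
x = [random.gammavariate(2.5, 1.0) for _ in range(5000)]
alpha_hat, beta_hat = ml1(x)
```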

Algorithm 4 provides an accurate ML solution but it suffers from slow convergence. To accelerate the process, one can approximate the Gamma log-likelihood presented in equation (26) by

$$f(\alpha) = k_0 + k_1\alpha + k_2\log\alpha. \qquad (29)$$

Taking first and second derivatives of f,

$$f'(\alpha) = k_1 + \frac{k_2}{\alpha}, \qquad f''(\alpha) = -\frac{k_2}{\alpha^{2}}, \qquad (30)$$

and matching f and its first two derivatives to log G(x|α) in equation (26) and its first two derivatives with respect to α, we obtain values for k₀, k₁ and k₂. If k₂ > 0, f has a maximum at α = −k₂/k₁ iff k₁ < 0. This process gives us the update rule

$$\frac{1}{\hat\alpha} \leftarrow \frac{1}{\alpha} + \frac{\overline{\log x} - \log\bar x + \log\alpha - \Psi(\alpha)}{\alpha^{2}\left(\frac{1}{\alpha} - \Psi'(\alpha)\right)}. \qquad (31)$$

As before, we use the MM relationship for the shape parameter (equation (23), left side) to initialize α. After iterating till convergence to get α̂, we use equation (25) to get β̂. Algorithm 5 summarizes the process.
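A sketch of this accelerated update (ML2, equation (31)); the digamma and trigamma routines below are series approximations of our own choosing, and the safeguard on the reciprocal update is an addition of ours:

```python
import math, random

def digamma(x):
    # recurrence to push x above 6, then asymptotic expansion
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    x2 = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - x2 * (1.0 / 12 - x2 * (1.0 / 120 - x2 / 252))

def trigamma(x):
    r = 0.0
    while x < 6.0:
        r += 1.0 / (x * x)
        x += 1.0
    x2 = 1.0 / (x * x)
    return r + 1.0 / x + 0.5 * x2 + (x2 / x) * (1.0 / 6 - x2 * (1.0 / 30 - x2 / 42))

def ml2(x, tol=1e-10, max_iter=100):
    n = len(x)
    mean = sum(x) / n
    logx_bar = sum(math.log(xi) for xi in x) / n
    alpha = mean * mean / (sum((xi - mean) ** 2 for xi in x) / (n - 1))  # MM init, eq. (23)
    for _ in range(max_iter):
        num = logx_bar - math.log(mean) + math.log(alpha) - digamma(alpha)
        den = alpha * alpha * (1.0 / alpha - trigamma(alpha))
        inv_new = 1.0 / alpha + num / den          # eq. (31)
        new = 1.0 / max(inv_new, 1e-8)             # safeguard against overshoot
        if abs(new - alpha) < tol * alpha:
            alpha = new
            break
        alpha = new
    return alpha, mean / alpha                     # eq. (25)

random.seed(4)
x = [random.gammavariate(5.0, 0.3) for _ in range(5000)]
alpha_hat, beta_hat = ml2(x)
```

In practice this update typically converges in a handful of iterations, consistent with the cost comparison in section 3.2.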

The two ML algorithms presented here deviate from the ones originally presented in [8, 9] in the initialization. While we use the MM for initialization, in the original work a closed form approximation was used.
