
# EM Algorithm for Estimation of the Offspring Distribution in Multitype Branching Processes with Terminal Types

###### Abstract

Multitype branching processes (MTBP) model branching structures where the nodes of the resulting tree are objects of different types. One field of application of such models in biology is the study of cell proliferation. A sampling scheme that appears frequently is to observe the cell count in several independent colonies at discrete time points (sometimes only one). Thus, the process is not observable in the sense of the whole tree, but only as the "generation" at a given moment in time, which consists of the number of cells of every type. This calls for an EM-type algorithm to obtain a maximum likelihood (ML) estimate of the parameters of the branching process. A computational approach for obtaining such an estimate of the offspring distribution is presented for the class of Markov branching processes with terminal types.

MSC: Primary 60J80, 62G05; Secondary 60J80, 62P10
Keywords: multitype branching processes, terminal types, offspring distribution, maximum likelihood estimation, expectation maximization, inside-outside algorithm.

## 1 Introduction

Branching processes (BP) are stochastic models used in population dynamics. The theory of such processes can be found in a number of books ([12], [2], [1]), and applications of branching processes in biology are discussed in [14], [22], [15], [11]. Statistical inference in BP depends on the kind of observation available: whether the whole family tree has been observed, or only the generation sizes at given moments. Estimators for different sampling schemes can be found in [10] and [24]. The problems get more complicated for multitype branching processes (MTBP), where the particles are of different types (see [18]). Yakovlev and Yanev [23] develop statistical methods to obtain ML estimators for the offspring characteristics, based on observation of the relative frequencies of types at a given time. Other approaches use simulation and Monte Carlo methods (see [13], [8]).

When the entire tree is not observed, but only the objects existing at a given moment, an Expectation Maximization (EM) algorithm can be used, regarding the tree as the hidden data. Guttorp [10] presents an EM algorithm for the single-type process with known generation sizes, and in [9] an EM algorithm is used for parametric estimation in a model of a Y-linked gene in a bisexual BP. Such algorithms exist for structures called Stochastic Context-Free Grammars (SCFG). A number of sources point out the relation between MTBPs and SCFGs (see [20], [7]). This relation was used in previous work [5] to propose a computational scheme for estimating the offspring distribution of an MTBP using the Inside-Outside algorithm for SCFG ([16]). A new method, related to this but constructed entirely in terms of BP, is presented here.

The EM algorithm specifies a sequence that is guaranteed to converge to the ML estimator under certain regularity conditions. The idea is to replace one difficult (sometimes impossible) maximization of the likelihood with a sequence of simpler maximization problems whose limit is the desired estimator. To define an EM algorithm, two different likelihoods are considered – one for the "incomplete-data problem" and one for the "complete-data problem". When the incomplete-data likelihood is difficult to work with, the complete-data likelihood can be used instead. The conditions for convergence of the EM sequence to the incomplete-data ML estimator are known and should be considered when such an algorithm is designed. More about the theory and applications of the EM algorithm can be found in [17].

The paper is organized as follows. In Section 2 the model of MTBP with terminal types is introduced. Section 3 shows the derivation of an EM algorithm for estimating the offspring probabilities in general, and then proposes a recurrence scheme that could be used to ease the computations. A comprehensive example is given in Section 4 and the results of a simulation study are shown in Section 5.

## 2 The Model

An MTBP can be represented as Z(t) = (Z1(t), …, Zd(t)), where Zi(t) denotes the number of objects of type i at time t, i = 1, …, d. An individual of type i has offspring of different types according to a d-variate distribution, and every object evolves independently. If t = 0, 1, 2, …, this is the Bienaymé-Galton-Watson (BGW) process. For a process Z(t) with continuous time, define the embedded generation process as follows (see [2]). Let Yn be the number of objects in the n-th generation of Z(t). If we take a sample tree of Z(t) and transform it into a tree having all its branches of unit length but otherwise identical, then Yn is a BGW process, called the embedded generation process for Z(t). The trees associated either with a BGW process or with the embedded generation BGW process will be used to estimate the offspring probabilities.

Now we consider an MTBP where certain terminal types of objects, once created, neither die nor reproduce (see [20]). If N is the set of non-terminal types and T is the set of terminal types, then an object of a type in N produces offspring of any type, while an object of a type in T does not reproduce any more. Here each terminal type constitutes a final group (see [12]). This way we model a situation where "dead" objects do not disappear, but are registered and remain present as "dead" through the succeeding generations. The process described above is reducible, because once transformed into a terminal type, an object remains in its final group. For irreducible processes some statistical theory has been developed (see [3] for example), but for reducible ones statistical inference is case-dependent.

We are interested in estimation of the offspring probabilities. If the whole tree is observed the natural ML estimator for the offspring probabilities is

 ^p(Tv→A) = c(Tv→A)/c(Tv), (1)

where c(Tv) is the number of times a node of type Tv appears in the tree and c(Tv→A) is the number of times a node of type Tv produces the offspring set A in the tree. It is not always possible to observe the whole tree, though: often we have the following sampling scheme. We observe Z(t) for some t. Let Z(0) consist of 1 object of some type. Suppose we are able to observe a number of independent instances of the process, meaning that they start with identical objects and reproduce according to the same offspring distribution. Such an observational scheme is common in biological laboratory experiments. If time is discrete, Z(t) is the number of objects in the t-th generation. For continuous time it is a "generation" in the embedded generation BGW process. A simple illustration is presented in fig. 1 a)–c), where "alive" objects are grey, "dead" ones are white and numbers represent the different types.
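When the whole tree is observed, the estimator (1) amounts to counting productions and normalizing per parent type. A minimal sketch (in Python; the encoding of a tree as a list of (parent type, offspring tuple) pairs is our own convention, not from the paper):

```python
from collections import Counter
from fractions import Fraction

def ml_offspring_estimates(productions):
    """Natural ML estimates p^(Tv -> A) = c(Tv -> A) / c(Tv) from one
    fully observed tree, given as a list of (parent_type, offspring) pairs,
    where offspring is a tuple of child types."""
    rule_counts = Counter(productions)                  # c(Tv -> A)
    parent_counts = Counter(v for v, _ in productions)  # c(Tv)
    return {rule: Fraction(n, parent_counts[rule[0]])
            for rule, n in rule_counts.items()}

# A toy tree: the root T1 splits into T1 and T2; both children terminate.
tree = [('T1', ('T1', 'T2')), ('T1', ('TT1',)), ('T2', ('TT2',))]
est = ml_offspring_estimates(tree)
```

For several independently observed trees, the counts would simply be pooled before normalizing.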

Here the notion that "dead" objects persist and can be observed in succeeding generations as terminal types is crucial. If "dead" particles disappeared somewhere inside the "hidden" tree structure, estimation would be impossible (see fig. 2 for an example).

## 3 The EM Proposal

### 3.1 The EM Algorithm

The EM algorithm was explained and given its name in a paper by Dempster, Laird, and Rubin in 1977 [6]. It is a method for finding maximum likelihood estimates of parameters in statistical models where the model depends on unobserved latent variables. Let a statistical model be determined by parameters θ, let x be the observation, and let y be some "hidden" data which determines the probability distribution of x. Then the density of the "complete" observation is f(x,y|θ) and the density of the "incomplete" observation is the marginal one, g(x|θ). We can write the likelihoods and the conditional density of y given θ and x this way:

 L(θ|x,y) = f(x,y|θ), L(θ|x) = g(x|θ), and k(y|θ,x) = f(x,y|θ)/g(x|θ).

The aim is to maximize the log likelihood

 logL(θ|x)=logg(x|θ)=logL(θ|x,y)−logk(y|θ,x).

As y is not observed, the right side is replaced with its expectation under k(y|θ′,x):

 logL(θ|x)=Eθ′[logL(θ|x,Y)]−Eθ′[logk(Y|θ,x)].

Let Q(θ|θ′) = Eθ′[log L(θ|x,Y)]. It has been proved that

 logL(θ|x)−logL(θ(i)|x)≥Q(θ|θ(i))−Q(θ(i)|θ(i))

with equality only if θ = θ(i), or if k(y|θ,x) = k(y|θ(i),x) for some other θ. Choosing θ(i+1) to maximize Q(θ|θ(i)) will make the difference positive, and so the likelihood can only increase in each step. The Expectation Maximization algorithm is usually stated formally like this:

E-step: Calculate the function Q(θ|θ(i)) = Eθ(i)[log L(θ|x,Y)].

M-step: Maximize Q(θ|θ(i)) with respect to θ and set θ(i+1) = argmaxθ Q(θ|θ(i)).

The convergence of the EM sequence depends on the form of the likelihood and the expected likelihood functions. The following condition, due to Wu ([21]), guarantees convergence to a stationary point. It is presented here as cited in [4].

If the expected complete-data log likelihood Q(θ|θ′) is continuous both in θ and θ′, then all limit points of an EM sequence {θ(i)} are stationary points of L(θ|x), and L(θ(i)|x) converges monotonically to L(θ*|x) for some stationary point θ*.

### 3.2 Derivation of an EM Algorithm for MTBP

Let x be the observed set of particles, let π be the unobserved tree structure, and let p be the set of parameters – the offspring probabilities p(Tv→A) (the probability that a particle of type Tv produces the set of particles A). Then the likelihood of the "complete" observation is:

 L(p|π,x) = P(π,x|p) = ∏v,A:Tv→A p(Tv→A)^c(Tv→A;π,x),

where c(Tv→A;π,x) is a counting function – the number of times a particle of type Tv produces the set of particles A in the tree π, observing x. The probability of the "incomplete" observation is the marginal probability P(x|p) = ∑π P(π,x|p). For the EM algorithm we need to compute the function

 Q(p|p(i)) = Ep(i)(log P(π,x|p)) = ∑π P(π|x,p(i)) log P(π,x|p)
 = ∑π P(π|x,p(i)) ∑Tv→A c(Tv→A;π,x) log p(Tv→A)
 = ∑Tv→A ∑π P(π|x,p(i)) c(Tv→A;π,x) log p(Tv→A)
 = ∑Tv→A Ep(i)c(Tv→A) log p(Tv→A). (2)

We need to maximize the function (2) under the condition ∑A p(Tv→A) = 1, for every type Tv. The Lagrange method requires introducing the function

 Φ(p)=∑Tv→AEp(i)c(Tv→A)logp(Tv→A)+λ(1−∑Ap(Tv→A)).

Taking partial derivatives with respect to p(Tv→A) and solving for the Lagrange multiplier λ, we find that the re-estimated parameters are the normalized expected counts. They look like the ML estimators in the "complete" observation case (1), but with the observed counts replaced by their expectations:

 p(i+1)(Tv→A) = Ep(i)c(Tv→A) / ∑A Ep(i)c(Tv→A) = Ep(i)c(Tv→A) / Ep(i)c(Tv), (3)
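The Lagrange step behind this update can be written out explicitly (same notation as above):

```latex
\frac{\partial \Phi}{\partial p(T_v \to A)}
  = \frac{E_{p^{(i)}} c(T_v \to A)}{p(T_v \to A)} - \lambda = 0
  \quad\Longrightarrow\quad
  p(T_v \to A) = \frac{E_{p^{(i)}} c(T_v \to A)}{\lambda},
\qquad
\lambda = \sum_{A} E_{p^{(i)}} c(T_v \to A) = E_{p^{(i)}} c(T_v),
```

where λ is obtained by summing the first relation over A and using the constraint ∑A p(Tv→A) = 1; substituting λ back gives (3).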

where the expected number of times a particle of type Tv appears in the tree is:

 Ep(i)c(Tv)=∑πP(π|x,p(i))c(Tv;π,x),

and the expected number of times a particle of type Tv produces the offspring set A in the tree is:

 Ep(i)c(Tv→A)=∑πP(π|x,p(i))c(Tv→A;π,x).

It is easy to check that in this case the convergence condition stated above is fulfilled. We consider the representation

 Q(p|p(i))=∑Tv→A∑πP(π|x,p(i))c(Tv→A;π,x)logp(Tv→A),

where P(π|x,p(i)) is a polynomial function of the p(i)-s – the offspring probabilities estimated at the i-th step. Then Q(p|p(i)) is a sum of functions continuous in all the parameters p and p(i), so it is also continuous. This way we are sure to converge to a stationary value, though it might not always be a global maximizer.

This is a case of EM where the M-step is solved explicitly, so the computational effort falls on the E-step. The problem is that, in general, enumerating all possible trees has exponential complexity. The method proposed below aims to reduce this complexity.

### 3.3 The Recurrence Scheme

We have shown that the E-step of the algorithm consists of determining the expected number of times a given type or a given production appears in a tree. A general method for computing these counts is proposed here. The algorithm consists of three parts – calculation of the inner probabilities, calculation of the outer probabilities, and EM re-estimation – which are shown below.

Let us define the inner probability α(I,v) – the probability that a subtree rooted at a particle of type Tv produces the outcome I = (i1, …, id), where ij is the number of objects of type j (fig. 3). From the basic branching property of the process we get the following recurrence:

 α(I,v)=∑wp(Tv→{Tw1,…,Twk})∑I1+…+Ik=Iα(I1,Tw1)…α(Ik,Twk)

where {Tw1,…,Twk} ranges over all possible sets of particles that Tv can produce.

The outer probability β(I,v) is the probability of the entire tree excluding a subtree rooted at a particle of type Tv and producing outcome I (fig. 4). The recurrence here is:

 β(I,v)=∑w∑vp(Tw→{Tv,Tv(2),…,Tv(m)}) ×∑J⊂X−Iβ(I+J,w)∑J2+…+Jm=Jα(J2,v(2))…α(Jm,v(m))

where {Tv,Tv(2),…,Tv(m)} ranges over all possible sets of particles that Tw can produce and which contain Tv, and X is the observed set of objects.

For every type Tv, the expected number of times that Tv appears in the tree can be presented as follows:

 Epc(Tv) = ∑π P(π|x,p) c(Tv;π,x) = ∑π (P(π,x|p)/P(x|p)) c(Tv;π,x)
 = (1/P(x|p)) ∑π P(π,x|p) c(Tv;π,x) = (1/P(x|p)) ∑π:Tv∈π P(π,x|p)
 = (1/P(x|p)) ∑I α(I,v) β(I,v),

where α(I,v) and β(I,v) are the inner and outer probabilities for all outcomes I.

Similarly, the expected number of times a production Tv→{Tw1,…,Twk} is used can be calculated:

 Ep(i)c(Tv→{Tw1,…,Twk}) = (1/P(x|p(i))) ∑I ∑I1+…+Ik=I β(I,v) α(I1,w1)…α(Ik,wk) p(i)(Tv→{Tw1,…,Twk}).

Dividing the expectations above, we obtain the EM re-estimation of the parameters:

 p(i+1)(Tv→{Tw1,…,Twk}) = [∑I ∑I1+…+Ik=I β(I,v) α(I1,w1)…α(Ik,wk) p(i)(Tv→{Tw1,…,Twk})] / [∑I α(I,v) β(I,v)]

For several observed sets of objects, the expected counts in the numerator and the denominator are summed over all sets.

Stated in this generality, the algorithm still has high complexity. In practical applications, though, there is often a small number of types and not all productions in the offspring distributions are allowed. Thus, for a number of cases, specific dynamic programming schemes based on the above recurrences can be devised that are less complex.

## 4 A Comprehensive Example

We consider an MTBP with four types of particles – two nonterminal, T1 and T2, and two terminal, TT1 and TT2 – where the productions allowed (with nonzero probability) are:

 T1 →(p111) {T1,T1},  T1 →(p112) {T1,T2},  T1 →(p1T) {TT1},  T2 →(p222) {T2,T2},  T2 →(p2T) {TT2}. (4)

To cover the possibility of observing particles of types T1 and T2 themselves, the additional productions T1 →(p11) {T1} and T2 →(p22) {T2} are introduced.

Now, let the observed set of particles be x = {T1, TT1, TT2}, and suppose the process has started with one particle of type T1. We don't have further information about the distribution, so a uniform one is assumed for every type: p111 = p112 = p11 = p1T = 1/4 and p222 = p22 = p2T = 1/3.

Let αv(i1,i2,i3,i4) be the probability of a tree rooted at a particle of type Tv producing i1, i2, i3, i4 particles of types T1, T2, TT1, TT2 respectively. The initialization of the α-s is as follows:

 α1(1,0,0,0)=p11=1/4,α1(0,0,1,0)=p1T=1/4,α2(0,1,0,0)=p22=1/3,α2(0,0,0,1)=p2T=1/3.

For the level of two particles we are interested in all possible subsets of x containing two particles. The values of the α-s are calculated below:

 α1(1,0,1,0) = p111 α1(1,0,0,0) α1(0,0,1,0) + p112 α1(1,0,0,0) α2(0,0,1,0) + p112 α1(0,0,1,0) α2(1,0,0,0)
 = 1/4·1/4·1/4 + 1/4·1/4·0 + 1/4·1/4·0 = 1/64,

Similarly,

 α1(1,0,0,1)=1/48andα1(0,0,1,1)=1/48.

Also

 α2(1,0,1,0)=0,α2(1,0,0,1)=0,andα2(0,0,1,1)=0.

Finally

 α1(1,0,1,1) = p111[α1(1,0,1,0)α1(0,0,0,1) + α1(1,0,0,1)α1(0,0,1,0) + α1(1,0,0,0)α1(0,0,1,1)]
 + p112[α1(1,0,1,0)α2(0,0,0,1) + α1(1,0,0,1)α2(0,0,1,0) + α1(1,0,0,0)α2(0,0,1,1) + α2(1,0,1,0)α1(0,0,0,1) + α2(1,0,0,1)α1(0,0,1,0) + α2(1,0,0,0)α1(0,0,1,1)]
 = 1/4·(1/64·0 + 1/48·1/4 + 1/4·1/48) + 1/4·(1/64·1/3 + 1/48·0 + 1/4·0 + 0·0 + 0·1/4 + 0·1/48)
 = 1/256.
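These inner-probability computations can be checked mechanically. The sketch below (Python, exact rational arithmetic) implements the inner recurrence for this four-type example; the dictionary encoding of the productions, and the convention of counting each assignment to identical-type children once and using unary productions only as a final step (matching the initialization above), are our own reading of the recurrence:

```python
from fractions import Fraction
from functools import lru_cache
from itertools import product

TYPES = ('T1', 'T2', 'TT1', 'TT2')        # order of the outcome vector

# Productions (4) plus the unary ones, with the uniform starting values.
PROD = {
    'T1': [(('T1', 'T1'), Fraction(1, 4)),   # p111
           (('T1', 'T2'), Fraction(1, 4)),   # p112
           (('T1',),      Fraction(1, 4)),   # p11
           (('TT1',),     Fraction(1, 4))],  # p1T
    'T2': [(('T2', 'T2'), Fraction(1, 3)),   # p222
           (('T2',),      Fraction(1, 3)),   # p22
           (('TT2',),     Fraction(1, 3))],  # p2T
}

def splits(I, k):
    """All ordered ways to write the count vector I as a sum of k
    nonempty count vectors."""
    if k == 1:
        if sum(I):
            yield (I,)
        return
    for first in product(*(range(c + 1) for c in I)):
        if sum(first) == 0:
            continue
        rest = tuple(c - f for c, f in zip(I, first))
        yield from ((first,) + tail for tail in splits(rest, k - 1))

@lru_cache(maxsize=None)
def alpha(I, v):
    """Inner probability: a subtree rooted at a particle of type v
    produces exactly the outcome I (counts of T1, T2, TT1, TT2)."""
    total = Fraction(0)
    for children, prob in PROD[v]:
        if sum(I) == 1 and len(children) == 1:
            # base case: one production step yields the single observed particle
            if I[TYPES.index(children[0])] == 1:
                total += prob
        elif sum(I) >= 2 and len(children) >= 2:
            for parts in splits(I, len(children)):
                # count each assignment to identical-type children only once
                if any(children[j] == children[j + 1] and parts[j] > parts[j + 1]
                       for j in range(len(children) - 1)):
                    continue
                term = prob
                for child, part in zip(children, parts):
                    term *= alpha(part, child)
                total += term
    return total
```

Under these conventions the recursion reproduces the values computed by hand above, e.g. alpha((1,0,1,1), 'T1') evaluates to 1/256.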

Next follow the calculations of the outer probabilities βv. The initial values are β1(1,0,1,1) = 1 and β2(1,0,1,1) = 0. Then:

 β1(1,0,1,0) = β1(1,0,1,1)[p111α1(0,0,0,1) + p112α2(0,0,0,1)] = 1·(1/4·0 + 1/4·1/3) = 1/12,
 β1(1,0,0,1) = 1/16,  β1(0,0,1,1) = 1/16,
 β2(1,0,1,0) = β1(1,0,1,1)p112α1(0,0,0,1) + β2(1,0,1,1)p222α2(0,0,0,1) = 1·1/4·0 + 0·1/3·1/3 = 0,
 β2(1,0,0,1) = 1/16,  β2(0,0,1,1) = 1/16,
 β1(1,0,0,0) = β1(1,0,1,0)[p111α1(0,0,1,0) + p112α2(0,0,1,0)]
 + β1(1,0,0,1)[p111α1(0,0,0,1) + p112α2(0,0,0,1)] + β1(1,0,1,1)[p111α1(0,0,1,1) + p112α2(0,0,1,1)]
 = 1/12·(1/4·1/4 + 1/4·0) + 1/16·(1/4·0 + 1/4·1/3) + 1·(1/4·1/48 + 1/4·0) = 1/64,
 β1(0,0,1,0) = 1/64 and β1(0,0,0,1) = 3/256,
 β2(1,0,0,0) = β1(1,0,1,0)p112α1(0,0,1,0) + β2(1,0,1,0)p222α2(0,0,1,0) + β1(1,0,0,1)p112α1(0,0,0,1)
 + β2(1,0,0,1)p222α2(0,0,0,1) + β1(1,0,1,1)p112α1(0,0,1,1) + β2(1,0,1,1)p222α2(0,0,1,1)
 = 1/12·1/4·1/4 + 0·1/3·0 + 1/16·1/4·0 + 1/16·1/3·1/3 + 1·1/4·1/48 + 0·1/3·0 = 5/288,
 β2(0,0,1,0) = 5/288, and β2(0,0,0,1) = 1/256.

Using that P(x|θ) = α1(1,0,1,1) = 1/256, we are able to compute the expected values we need:

 Eθc(T1) = (1/P(x|θ)) ∑I α1(I)β1(I)
 = (1/4·1/64 + 1/4·1/64 + 0 + 1/64·1/12 + 1/48·1/16 + 1/48·1/16 + 1/256)/(1/256) = (4/256)/(1/256) = 4,
 Eθc(T1→{T1,T1}) = (1/P(x|θ)) ∑I ∑I1+I2=I β1(I) α1(I1) α1(I2) p(T1→{T1,T1}) = (1/256)/(1/256) = 1,
 Eθc(T1→{T1,T2}) = (1/P(x|θ)) ∑I ∑I1+I2=I β1(I) α1(I1) α2(I2) p(T1→{T1,T2}) = (1/256)/(1/256) = 1,
 Eθc(T1→{TT1}) = (1/P(x|θ)) β1(0,0,1,0) p(T1→{TT1}) = (1/4·1/64)/(1/256) = (1/256)/(1/256) = 1,
 Eθc(T1→{T1}) = (1/P(x|θ)) β1(1,0,0,0) p(T1→{T1}) = (1/4·1/64)/(1/256) = (1/256)/(1/256) = 1.

Thus, the estimated distribution for T1 is

 ^p111 = Eθc(T1→{T1,T1})/Eθc(T1) = 1/4,  ^p112 = Eθc(T1→{T1,T2})/Eθc(T1) = 1/4,
 ^p11 = Eθc(T1→{T1})/Eθc(T1) = 1/4,  ^p1T = Eθc(T1→{TT1})/Eθc(T1) = 1/4,

which is the same as the initially chosen one, so this would be the final estimation.

For the offspring distribution of T2, similar computations lead to the result:

 Eθc(T2)=1,Eθc(T2→{T2,T2})=0,Eθc(T2→{T2})=0,Eθc(T2→{TT2})=1,

so the estimation is:

 ^p222=0,^p22=0,^p2T=1,

which converges on the next iteration also.

## 5 Simulation study

A simulation experiment was performed to study the behaviour of the estimates obtained via the algorithm. Observations were simulated according to the model (4) in the previous section with fixed offspring probabilities. Estimation was performed using different sample sizes, both for the number of observations and for the tree sizes. All computations were made in R [19].
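The sampling scheme itself is easy to simulate: iterate the embedded generation process, letting terminal types persist unchanged. The paper's study used R; the following minimal forward-simulation sketch is in Python for illustration (the function names and the dictionary encoding of productions are our own):

```python
import random
from collections import Counter

def next_generation(state, prod, rng):
    """One step of the embedded generation process.  `state` is a Counter
    of type -> count; `prod` maps a nonterminal type to a list of
    (offspring_tuple, probability) pairs.  Types missing from `prod`
    are terminal and persist as is."""
    new = Counter()
    for t, n in state.items():
        if t not in prod:               # terminal type: neither dies nor reproduces
            new[t] += n
            continue
        offsprings, probs = zip(*prod[t])
        for _ in range(n):              # each particle evolves independently
            chosen = rng.choices(offsprings, weights=probs)[0]
            new.update(chosen)
    return new

def simulate(initial_type, prod, n_gen, rng):
    """Observe the 'generation' Z(n_gen) starting from one particle."""
    state = Counter({initial_type: 1})
    for _ in range(n_gen):
        state = next_generation(state, prod, rng)
    return state
```

Running `simulate` repeatedly with the probabilities of model (4) produces independent observed generations of the kind the estimation procedure takes as input.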

It is important to investigate how the size of the tree, which corresponds to the size of the "hidden" part of the observation, affects the estimates. Table 1 shows the results for small tree sizes and sample size 20. The most accurate estimates are obtained by averaging these results. It can be seen that there is great variation in the estimate of the individual distribution of type 2, though the mean is close to the real values. For larger sample sizes the variance of the estimates is reduced, but there is some bias in the estimate for type 2 (Table 2). Larger sample trees also lead to biased estimates of the individual distribution for type 2 (Table 3). Using larger trees is also computationally more expensive.

The bias in the estimate for type 2 is due to the greater uncertainty in the process for type 2: these particles can be generated by a particle of their own type as well as by a particle of type 1. For example, production of one particle of type 1 and two of type 2 can happen in two ways: T1→{T1,T2} once and then T2→{T2,T2}, or T1→{T1,T2} twice. So, in general, productions of type 1 take part more often in the estimation than productions of type 2. As the branching is hidden and all possible generations have to be taken into account, this results in underestimation of p222 and some overestimation of p112 when that hidden part gets larger.

## 6 Conclusion

A general EM algorithm has been proposed for finding ML estimates of the offspring probabilities of an MTBP with terminal types when only an observation of the generation at a given moment is available. The example presented shows that the algorithm is straightforward and convenient to apply to a particular model. The simulation study shows that better estimates are obtained using smaller sample trees. Such algorithms would be useful in biological models like cell proliferation, genetics, genomics, and evolution, and wherever a model of MTBP with terminal types is suitable.

## References

• [1] Asmussen, S., H. Hering (1983) Branching Processes, Birkhauser, Boston.
• [2] Athreya, K. B., P. E. Ney (1972) Branching Processes, Springer, Berlin.
• [3] Badalbaev, I. S., A. Mukhitdinov (1990) Statistical Problems in Multitype Branching Processes. Fan, Tashkent (in Russian).
• [4] Casella, G., R. L. Berger (2002) Statistical Inference, second edition, Duxbury.
• [5] Daskalova N. (2010) Using Inside-Outside Algorithm for Estimation of the Offspring Distribution in Multitype Branching Processes, Serdica Journal of Computing, 4(4), 463–474.
• [6] Dempster, A. P., N. M. Laird, D. B. Rubin (1977) Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, B, 39, 1–38.
• [7] Geman, S., M. Johnson (2004) Probability and statistics in computational linguistics, a brief review Mathematical foundations of speech and language processing. Johnson, M.; Khudanpur, S.P.; Ostendorf, M.; Rosenfeld, R. (Eds.)
• [8] González M., J. Martín, R. Martínez, M. Mota (2008) Non-parametric Bayesian estimation for multitype branching processes through simulation-based methods, Computational Statistics & Data Analysis, 52(3), 1281–1291.
• [9] González, M. and Gutiérrez, C. and Martínez, R. (2010) Parametric inference for Y-linked gene branching model: Expectation-maximization method, proceeding of Workshop on Branching Processes and Their Applications, Lecture Notes in Statistics, 197, 191–204.
• [10] Guttorp, P. (1991) Statistical inference for branching processes, New York: Wiley.
• [11] Haccou, P., P. Jagers, V. A. Vatutin (2005) Branching Processes: Variation, Growth and Extinction of Populations. Cambridge University Press, Cambridge.
• [12] Harris, T. E. (1963) Branching Processes, Springer, New York.
• [13] Hyrien, O. (2007) Pseudo-likelihood estimation for discretely observed multitype Bellman-Harris branching processes, Journal of Statistical Planning and Inference, 137(4), 1375–1388.
• [14] Jagers, P. (1975) Branching Processes with Biological Applications, London, Wiley.
• [15] Kimmel, M., D. E. Axelrod (2002) Branching Processes in Biology, Springer, New York.
• [16] Lari, K., S. J. Young (1990) The Estimation of Stochastic Context-Free Grammars Using the Inside-Outside Algorithm, Computer Speech and Language, 4, 35–36.
• [17] McLachlan, G. J., T. Krishnan (2008) The EM Algorithm and Extensions, Wiley.
• [18] Mode, C. J. (1971) Multitype Branching Processes: Theory and Applications, Elsevier, New York.
• [19] R Development Core Team (2010) R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria, URL http://www.R-project.org.
• [20] Sankoff, D. (1971) Branching Processes with Terminal Types: Application to Context-Free Grammars, Journal of Applied Probability, 8(2), 233–240.
• [21] Wu, C. F. J. (1983) On the Convergence of the EM Algorithm. Ann. Stat. 11, 95–103.
• [22] Yakovlev, A. Y., N. M. Yanev (1989) Transient Processes in Cell Proliferation Kinetics. Berlin, Springer.
• [23] Yakovlev, A. Y., N. M. Yanev (2009) Relative frequencies in multitype branching processes Ann. Appl. Probab. 19(1), 1–14.
• [24] Yanev, N. M. (2008) Statistical Inference for Branching Processes. In: M. Ahsanullah, G. P. Yanev (Eds), Records and Branching Processes, Nova Sci. Publishers, Inc, 143 – 168.

Nina Daskalova, Sofia University "St. Kliment Ohridski", Faculty of Mathematics and Informatics, Sofia, Bulgaria, e-mail: ninad@fmi.uni-sofia.bg