# Mutant number distribution in an exponentially growing population

Peter Keller and Tibor Antal
School of Mathematics, University of Edinburgh, Edinburgh, EH9 3FD, UK
July 30, 2019
###### Abstract

We present an explicit solution to a classic model of cell-population growth introduced by Luria and Delbrück LD1943 () 70 years ago to study the emergence of mutations in bacterial populations. In this model a wild-type population is assumed to grow exponentially in a deterministic fashion. Proportional to the wild-type population size, mutants arrive randomly and initiate new sub-populations of mutants that grow stochastically according to a supercritical birth and death process. We give an exact expression for the generating function of the total number of mutants at a given wild-type population size. We present a simple expression for the probability of finding no mutants, and a recursion formula for the probability of finding a given number of mutants. In the “large population-small mutation”-limit we recover recent results of Kessler and Levin kessler2013 () for a fully stochastic version of the process.

## 1 Introduction

When a population of bacteria is attacked by a lethal virus, often a sub-population survives. At the beginning of the 1940’s an important question was whether this resistance is due to adaptation induced under the stress of attack, or is simply due to mutations that occurred beforehand, during the expansion of the population. To clarify this question, Luria and Delbrück conducted their now famous experiments in 1943 LD1943 (), and showed that the natural variability of cells can indeed save a sub-population from extinction. They formulated a simple mathematical model in which both the wild-type and the mutant cells grow deterministically, but the mutants appear randomly, at a rate proportional to the wild-type population size. They derived many properties of the model, in particular for the mutant size distribution, and proposed a method to estimate the mutation rate from data.

In the seminal paper of Lea and Coulson LC1949 (), the original model was extended to allow stochastic growth of the mutant population as a pure birth process. They derived, for the first time, the distribution of the number of mutants for neutral mutations. In 1955 Bailey published elegant computations and some results on his own modifications of the process, see Bailey1955 (). Since then many efforts have been undertaken to understand the process better. The review paper of Zheng zheng1999 () gives a formidable overview of the history of the process and clarifies most concerns related to the infinite moments of the proposed distributions.

New interest has been kindled recently in the mutant distribution of the fully stochastic version, where the wild-type cells grow according to a birth and death process. Including cell death in the model extended the range of its possible applications. This model was formulated by Kendall kendall60 (), and a full solution was provided in AntalKrap2011 (), where the Kolmogorov equations for the generating function of both cell types were solved explicitly. From the generating function the joint probability of a given number of wild-type and mutant cells can be obtained at any finite time. Expressions for finite times become important for experimental studies Clayton:2007aa (); antal10 (); bozic2013 (), where the asymptotic limit might be out of reach.

In many situations, most notably in the study of mutations in tumor growth vogelstein13 (); nowak06 (), the age of the wild-type population is rarely known. At tumor detection we have a fairly good idea about the size of the tumor, but the time of its initiation is unknown. This led to studies of the mutant distribution at a fixed size of the wild-type population. For neutral mutations and pure birth processes this problem was solved by Angerer angerer2001 (). Iwasa, Nowak and Michor iwasa2006 () extended this model to non-neutral mutants and to birth-and-death processes. They derived mutant distributions and resistance probabilities assuming the product of population size and mutation rate to be small. Komarova suggested a very elegant method to obtain an approximate mutant distribution komarova07 (). More recently, in two remarkable papers kessler2013 (); KessleronArXiV (), Kessler and Levin obtained the full mutant distribution for a large but fixed size of the wild-type population. They used approximate methods to simplify the Kolmogorov equations, and in an independent derivation they also used the exact solution of the fully stochastic case given in AntalKrap2011 (). By letting the previously constant product of mutation rate and population size go to infinity, they derived α-stable distributions. For the same limit, similar results were derived with other methods by Durrett and Moseley durrettmoseley () for beneficial mutations. This result was already given, but not proven, by Mandelbrot in mandelbrot1974 (). Moehle treats the classic case of neutral mutation utilizing Compound-Poisson-Processes in moehle2005 (). In janson () Janson treats a similar model with a fixed, non-random number of offspring, by mapping a reducible multi-type branching process to Pólya-Urns, and investigates several limits.

In this paper we make the assumption that the wild-type population grows according to a deterministic exponential function. We lean on the formalism for arbitrary growth functions introduced in dewanji2005 (), which we review in Section 2. In Section 3 we rewrite the general integral representation of the generating function of the number of mutants explicitly in terms of hypergeometric functions. We consider the special case of neutral mutations separately in Section 4. After getting rid of the integral representation, we investigate limits of the mutant distribution in Section 5. Indeed, we recover all corresponding results from the above mentioned papers for general parameters and extend them to finite wild-type populations, to mutants with explicit death, and to deleterious (disadvantageous compared to wild-type) mutations. We give a recursion to calculate the probability distribution of the mutants efficiently and analyze the distribution’s tail behavior in depth in the final section, thereby extending the results of pakes1993 (); prodinger1996 ().

## 2 General population size functions

We consider a cell population that consists of two types of cells, a wild type (type A) and a mutant (type B). Each A-cell, independently of all other cells, produces a mutant B-cell at rate ν. If we approximate the size of the A-cell population by the deterministic function f(t), so that mutants are produced at rate νf(t), the arrival times of new mutants follow a non-homogeneous Poisson process. Each B-cell descended from an A-cell is the initiator of a new sub-population of mutants (a clone), whose size we denote by Y. At time t the total number of clones K is a Poisson random variable with mean

 m=\mathds{E}(K)=∫_0^t ν f(s)\,ds. (1)

We assume that clones develop independently as some stochastic process with generating function g_t(z). Since the clones are generated according to a Poisson process, the family of clone sizes is independent, identically distributed (iid), and the generating function of a clone is

 ψ(z)=\mathds{E}(z^Y)=\frac{ν}{m}∫_0^t f(s)\,g_{t−s}(z)\,ds. (2)

The total number of mutants at time t is a Compound Poisson random variable

 B_t=∑_{i=1}^{K}Y_i.

Using conditional expectation, the generating function of B_t can be written as

 G(z)=\mathds{E}(z^{B_t})=\mathds{E}\left(\mathds{E}(z^{B_t}|K)\right).

Now

 \mathds{E}(z^{B_t}|K=k)=ψ^k(z)

since the clones are independent and thus

 G(z)=\mathds{E}(ψ(z)^K)=∑_{k≥0}\frac{(ψ(z)\,m)^k}{k!}e^{−m}=e^{m(ψ(z)−1)}, (3)

which appears in moehle2005 () and is characteristic for Compound Poisson variables. Using (1) and (2), we can also write

 G(z)=\exp\left(ν∫_0^t f(s)\left[g_{t−s}(z)−1\right]ds\right) (4)

which appears in dewanji2005 () in a more general setting.
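The compound-Poisson structure in (3) is easy to verify numerically. The following sketch uses illustrative parameters and a geometric clone-size law of our own choosing (a stand-in, not the clone process of the model) to compare a Monte Carlo estimate of \mathds{E}(z^B) with e^{m(ψ(z)−1)}:

```python
import math
import random

random.seed(1)

# Illustrative check of eq. (3): if K ~ Poisson(m) and the clone sizes Y_i are
# iid with pgf psi, then B = Y_1 + ... + Y_K has pgf exp(m*(psi(z)-1)).
m, p, z = 2.0, 0.4, 0.7   # Poisson mean, geometric parameter, evaluation point

def psi(z):
    # pgf of a geometric clone size: P(Y=k) = p*(1-p)^(k-1), k >= 1
    return p * z / (1 - (1 - p) * z)

def sample_poisson(lam):
    # Knuth's method; adequate for small lam
    limit, k, prod = math.exp(-lam), 0, random.random()
    while prod > limit:
        k += 1
        prod *= random.random()
    return k

def sample_geometric():
    # inversion sampling of a geometric variable on {1, 2, ...}
    return 1 + int(math.log(random.random()) / math.log(1 - p))

def sample_B():
    return sum(sample_geometric() for _ in range(sample_poisson(m)))

mc = sum(z ** sample_B() for _ in range(200_000)) / 200_000
exact = math.exp(m * (psi(z) - 1))   # eq. (3)
```

The Monte Carlo average and the closed form agree to within sampling error, which is the content of (3) for this toy clone-size law.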

Since the generating function of B_t is of exponential form, we introduce the following notation for an arbitrary random variable X,

 Λ_X(z)=\log\mathds{E}(z^X)=\log G_X(z)

and refer to Λ_X as the log-generating function of X.

## 3 Generating function for exponential growth

Let us consider the special case of an exponentially growing wild-type population, such that f(t)=e^{δt} for some δ>0. Hence mutants are produced at rate νe^{δt}. Moreover, let us assume that each clone behaves like a linear birth-death process with birth rate α and death rate β, with positive net growth rate λ=α−β>0, i.e. the process is supercritical. The extinction probability of a mutant clone is q=β/α, and its generating function is also well known arthreyaney ()

 g_s(z)=1−\frac{1−q}{1−ξe^{−λs}},\qquad ξ=\frac{q−z}{1−z}=1−\frac{1−q}{1−z}.

The pure birth case of β=0, and thus q=0, is well studied and corresponds to the assumption that cells only divide, but never die.

We are interested in the distribution of the number of mutants at the time when the number of A-cells reaches exactly N. Since the A-cells grow deterministically, this happens at time τ=(\log N)/δ. We use the shorthand notation B=B_τ for the number of mutants at time τ. Therefore the mutant log-generating function (4) becomes

 Λ_B(z)=ν∫_0^τ e^{δs}\left(g_{τ−s}(z)−1\right)ds=\frac{μ}{γ}∫_0^τ\frac{δe^{δs}}{e^{λs}N^{−1/γ}ξ−1}\,ds (5)

where

 γ=δ/λ \quad\text{and}\quad μ=ν/α.

After a change of variable we can compute the integral

 Λ_B(z)=−\frac{Nμ}{γ}∫_{1/N}^{1}\frac{du}{1−u^{1/γ}ξ}=−\frac{Nμ}{γ}∫_{1/N}^{1}∑_{k≥0}\left(u^{1/γ}ξ\right)^k du=−Nμ∑_{k≥0}\frac{ξ^k u^{k/γ+1}}{γ+k}\Bigg|_{1/N}^{1}=μ∑_{k≥0}ξ^k\left[\frac{N^{−k/γ}}{γ+k}−\frac{N}{γ+k}\right]. (6)

We can rewrite (6) in terms of the hypergeometric function (59)

 Λ_B(z)=\log G_B(z)=\frac{Nμ}{γ}\left[\frac{1}{N}F(1,γ;1+γ;ξN^{−1/γ})−F(1,γ;1+γ;ξ)\right], (7)

where we utilized the Pochhammer symbol (58) to verify that

 \frac{γ}{γ+k}=\frac{(γ)_k}{(1+γ)_k}.

The above equation (7) is the final, exact, closed-form solution for the mutant distribution for an exponentially growing wild-type population. In the rest of the paper we shall analyze its properties.

The mean and variance of the number of mutants can be calculated by taking the usual approach of differentiating the generating function (7), or by using Dewanji’s general expressions for the mean and variance for an arbitrary growth function f(t), see dewanji2005 (); this results in

 \mathds{E}(B)=\frac{Nμ}{1−q}\cdot\begin{cases}\log N & γ=1\\ \frac{1}{1−γ}\left(N^{1/γ−1}−1\right) & γ≠1\end{cases} (8)

and

 \mathrm{Var}(B)=\frac{Nμ}{(1−q)^2}\cdot\begin{cases}2(N−1)−(1+q)\log N & γ=1\\ (1+q)\left(N^{−1/2}−1\right)+\log N & γ=2\\ \frac{2}{2−γ}N^{2/γ−1}+\frac{1+q}{γ−1}N^{1/γ−1}+\frac{q(2−γ)+γ}{(2−γ)(1−γ)} & γ\notin\{1,2\}\end{cases} (9)

These expressions generalize those given in Zheng zheng1999 () (after suitable substitutions, see (zheng1999, , (52),(53))).

Figure 1: Orders of the mean number of mutants \mathds{E}(B) and its standard deviation \sqrt{\mathrm{Var}(B)} for large N. Mutants have a fitness advantage for 1/γ>1 and a disadvantage for 1/γ<1 with respect to the wild-type cells.

We give an overview of the large-N behavior of the expectation and the variance of the number of mutants in Figure 1. The mean number of deleterious mutants (γ>1) is of the same order as the number of wild-type cells. However, the number of advantageous mutants (γ<1) grows faster than the wild-type population. Note also that for advantageous mutants the mean and the standard deviation are of the same order, which implies that the fluctuations are important and a stochastic description of the process is essential. Only for very deleterious mutants (γ>2) does the process become self-averaging, with fluctuations as predicted by the central limit theorem. For intermediate deleterious mutants (1<γ<2) the relative standard deviation varies continuously with γ.

We can obtain the probabilities p_n by Taylor expanding G_B(z) in z, or by using the inversion formula. Since this is computationally intense, we give a recursive formula for the probabilities instead,

 p_n=\begin{cases}e^{q_0} & n=0\\ \frac{1}{n}∑_{k=0}^{n−1}(n−k)\,q_{n−k}\,p_k & n≥1,\end{cases} (10)

where

 q_0=\frac{Nμ}{γ}\left[\frac{1}{N}F(1,γ;1+γ;N^{−1/γ}q)−F(1,γ;1+γ;q)\right] (11)

and for k≥1

 q_k=μ∑_{j=1}^{k}\binom{k−1}{j−1}\frac{1}{j+γ}\left(\frac{1−q}{q−N^{1/γ}}\right)^{j}F(1,γ;1+γ+j;N^{−1/γ}q)+\frac{Nμ\,(k−1)!}{(γ+1)_k}F(k,γ;1+γ+k;q). (12)

We give a proof of this recursion in Appendix A.
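As a sanity check on the closed form, the sketch below evaluates p_0=e^{q_0} from (11), using the series F(1,γ;1+γ;x)=γ∑_{k≥0}x^k/(γ+k), and compares it with a direct numerical integration of (5) at z=0. All parameter values are illustrative choices, not from the paper.

```python
import math

# Illustrative parameters: birth rate alpha, death rate beta, wild-type growth
# rate delta, mutation rate nu, target wild-type population size N.
alpha, beta, delta, nu, N = 1.0, 0.5, 1.0, 0.01, 100.0
lam = alpha - beta           # clone net growth rate lambda = alpha - beta
q = beta / alpha             # clone extinction probability
gamma = delta / lam          # gamma = delta / lambda
mu = nu / alpha              # mu = nu / alpha
tau = math.log(N) / delta    # time at which the wild type reaches size N

def F(x, g, terms=200):
    # hypergeometric F(1,g;1+g;x) = g * sum_k x^k/(g+k), valid for |x| < 1
    return g * sum(x ** k / (g + k) for k in range(terms))

# closed form (11): q0, hence p0 = exp(q0)
q0 = N * mu / gamma * (F(N ** (-1 / gamma) * q, gamma) / N - F(q, gamma))
p0_closed = math.exp(q0)

# direct Simpson integration of (5) at z = 0, where xi(0) = q
def integrand(s):
    g_minus_1 = -(1 - q) / (1 - q * math.exp(-lam * (tau - s)))
    return nu * math.exp(delta * s) * g_minus_1

n = 2000
h = tau / n
total = integrand(0.0) + integrand(tau)
for i in range(1, n):
    total += (4 if i % 2 else 2) * integrand(i * h)
p0_direct = math.exp(total * h / 3)
```

Both routes give the same probability of finding no mutants, as they must, since (7) was derived exactly from (5).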

## 4 Special case of neutral mutations, γ=1

Often there is interest in mutations which do not change the behavior of the cell, so called neutral mutations. In this special case, when γ=1, that is δ=λ, we can further simplify the log-generating function given in (7) by using (70). Alternatively, by using the series expansion

 \log(1−z)=−∑_{k≥1}\frac{z^k}{k},

we can rewrite (6) for γ=1 as

 Λ_B(z)=μ∑_{k≥0}ξ^k\left[\frac{N^{−k}}{k+1}−\frac{N}{k+1}\right]=\frac{Nμ}{ξ}\log\left(\frac{1−ξ}{1−ξ/N}\right). (13)

Hence the generating function becomes

 G(z)=\left(\frac{1−ξ}{1−ξ/N}\right)^{Nμ/ξ}. (14)

By introducing the variables

 y=\frac{z−q}{1−q}=\frac{ξ}{ξ−1},\qquad ϕ=1−\frac{1}{N},\qquad θ=Nμ, (15)

we obtain the form

 G(z)=\left(1−ϕy\right)^{θ\frac{1−y}{y}}. (16)

If we further specialize to mutant cells that cannot die, that is β=0 (which implies q=0 and y=z), we find

 G(z)=\left(1−ϕz\right)^{θ\frac{1−z}{z}}. (17)

This formula was first derived by Lea and Coulson LC1949 (), and is also given in zheng1999 () with some historical perspective.

The coefficients in the recursion formula (10) become simpler for γ=1:

 q_k=\begin{cases}−\frac{θ}{q}\log\left(1+\frac{ϕq}{1−q}\right) & k=0\\ \frac{θϕ^k}{[1−q(1−ϕ)]^{k}}\left[\frac{1}{k}−\frac{ϕ}{k+1}F\!\left(1,1;2+k;\frac{−ϕq}{1−q}\right)\right] & k≥1\end{cases} (18)

which is also derived in Appendix A. Note that

 p_0=\mathds{P}(B=0)=e^{q_0}=\left(1+\frac{qϕ}{1−q}\right)^{−θ/q} (19)

and all other probabilities can be obtained from recursion (10). When mutants do not die, i.e. β=0 (which implies q=0), the coefficients further simplify to

 q_k=\begin{cases}−θϕ & k=0\\ θ\left(\frac{ϕ^k}{k}−\frac{ϕ^{k+1}}{k+1}\right) & k≥1,\end{cases} (20)

as given in (zheng1999, , p.18). Note that these coefficients appear in (iwasa2006, , Appendix D) as approximate probabilities in the regime of large population size and small mutation rate, in which case the q_k approximate the probabilities p_k.
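For the pure-birth neutral case, the coefficients (20) and the recursion (10) give the classic Luria-Delbrück probabilities in a few lines. A minimal sketch (function name and parameter values are our own):

```python
import math

def ld_probabilities(theta, N, n_max):
    # Luria-Delbruck probabilities p_0 .. p_{n_max} for gamma = 1, beta = 0,
    # via the coefficients (20) and the compound-Poisson recursion (10).
    phi = 1.0 - 1.0 / N
    q = [-theta * phi]  # q_0
    q += [theta * (phi ** k / k - phi ** (k + 1) / (k + 1))
          for k in range(1, n_max + 1)]
    p = [math.exp(q[0])]  # p_0 = e^{q_0}
    for n in range(1, n_max + 1):
        p.append(sum((n - k) * q[n - k] * p[k] for k in range(n)) / n)
    return p

probs = ld_probabilities(theta=2.0, N=1e8, n_max=100)
```

For large N one has ϕ≈1 and p_0≈e^{−θ}; the remaining probability mass decays with the heavy tail typical of the Luria-Delbrück distribution.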

## 5 Limit behavior

In Section 3 we derived a recursion for the exact probability distribution of the number of mutants present at the time when the wild-type population reaches size N, under the assumption of exponential wild-type growth. The coefficients of this recursion, given in (12), are quite complicated, so we seek a simpler limiting case. In (7), the first addend stems from the lower integral boundary and should vanish for large N, resp. large N and small mutation rate. This turns out to be true only if the product θ=Nμ is held constant. The resulting distribution is that of a Compound Poisson random variable, enabling us to directly determine the distribution of the size of a clone, i.e. the sub-population size of mutants descended from one original mutant. This distribution already shows a power-law tail, which will be discussed in Section 6.

In applications, θ can be large, hence we discuss a consecutive limit in which θ goes to infinity. Since a Compound Poisson random variable is the sum of a random number of iid variables, it is not surprising that the distribution of the limit variable, which we call Z, is stable. Although the theory of generalized limit theorems is rich, see for example durrettbook (), we use our explicit results for the generating function to perform the limit directly. Since we consider only the convergence of generating functions or Laplace transforms, all limits in this paper are meant as convergence in distribution.

Note that in durrettmoseley (), a non-rigorous proof was given for the direct limit from B to Z (see Figure 2) for the fully stochastic case, in a slightly different setting, for beneficial mutations. It utilizes an approximation of the wild-type population size by a deterministic exponential growth function, which stems from the fact that in the two-type, fully stochastic model the wild-type population behaves like a one-type birth and death process if the mutation rate is very small. Then the number of A-cells can be approximated by a deterministic exponential function. A similar limit in moehle2005 () covers the case of the classic Luria-Delbrück distribution.

Another limit approach is to fix the mutation rate and let N→∞ under a proper rescaling. Results for this limit were presented in mandelbrot1974 (), with their derivation given in a privately distributed second part of the paper, which is no longer available. We reproduce these results in terms of hypergeometric functions and then take the limit under a similar scaling as in the θ-constant case, to recover once again the limit variable Z.

Figure 2: A schematic overview of the possible limits. All limits are convergence-in-distribution results.

### 5.1 Large Population-Small Mutation Limit

In applications, the mutation rate is usually small, while the population size is very large. We therefore investigate the simultaneous limit N→∞ and μ→0, such that θ=Nμ is held constant. We call this the Large Population-Small Mutation (LPSM) limit. Note that we already introduced θ in (15), in analogy to the notation introduced in zheng1999 ().

Formally we can express the LPSM-limit as

 \lim_{N→∞,\,μ→0,\ θ\ \text{const.}}B=V,

where V is the limiting random variable, which we characterize via its log-generating function. When θ is held constant, the log-generating function of B, given in (7), depends on N only through its first addend. We therefore expand the first addend into a power series in N; for arbitrary γ it vanishes as N→∞.

Thus the immediate result for the LPSM-limit is

 Λ_V(z)=\lim_{N→∞,\,μ→0,\ θ\ \text{const.}}Λ_B(z)=−\frac{θ}{γ}F(1,γ;1+γ;ξ), (21)

which we can rewrite in terms of y by using (63):

 Λ_V(z)=\frac{θ}{γ}\,\frac{z−1}{1−q}\,F(1,1;1+γ;y). (22)

This expression is the second addend in the general formula (7), so we can easily adapt the recursion (10) for the probability distribution of V. This yields the recursion coefficients

 q_k=\begin{cases}−\frac{θ}{γ}F(1,γ;1+γ;q) & k=0\\ \frac{θ\,(k−1)!}{(γ+1)_k}F(k,γ;1+γ+k;q) & k≥1.\end{cases} (23)

Note that these coefficients appear in (iwasa2006, , (16)) as approximate probabilities in the regime of large population size and small mutation rate.

The expectation and variance of V can be derived as usual via derivatives of the generating function. The computations are tedious, in particular because the convergence behavior of the hypergeometric function depends on γ, but not very interesting. We give the results in Table 1. Note that they are consistent with applying the N→∞, μ→0 limit directly to the mean and variance of B given in (8) and (9). Interestingly, the mean is finite only for γ>1 and the variance only for γ>2.

We note that the LPSM limit is independent of the initial number N_0 of wild-type cells. This can easily be seen when we choose f(s)=N_0e^{δs} in (4): the integral representation of the log-generating function (5) simply acquires a factor N_0, with the stopping time adjusted to τ=\log(N/N_0)/δ. This property directly mimics the branching property of a fully stochastic two-type branching process. Adapting the calculations from (6), the log-generating function becomes

 Λ_B(z)=\log G_B^{N_0}(z)=\frac{Nμ}{γ}\left[\frac{N_0}{N}F\!\left(1,γ;1+γ;ξ\left(\frac{N}{N_0}\right)^{−1/γ}\right)−F(1,γ;1+γ;ξ)\right]. (24)

Again, in the LPSM limit only the first addend depends on N resp. N_0, and it vanishes as N→∞.

Note further that the limit log-generating function has a direct interpretation with respect to our initial model. By (61), a change of variables with u=e^{δs}/N gives

 \frac{θ}{γ}∫_0^1\frac{du}{u^{1/γ}ξ−1}=\frac{θ}{γN}∫_{−∞}^{τ}\frac{δe^{δs}}{e^{λs}N^{−1/γ}ξ−1}\,ds.

Indeed, Λ_V is the log-generating function of the model started at −∞ instead of zero.

By another change of variables, t=τ−s,

 Λ_V(z)=\frac{θ}{γ}∫_0^∞\frac{δe^{−δt}}{ξe^{−λt}−1}\,dt=\frac{θ}{(1−q)γ}∫_0^∞\left(g_t(z)−1\right)δe^{−δt}\,dt.

Let X be an exponential random variable with mean 1/δ; then

 Λ_V(z)=\frac{θ}{(1−q)γ}\,\mathds{E}\left[g_X(z)−1\right].

This is the log-generating function of a Compound Poisson random variable. Via the usual interpretation of a Compound Poisson variable, the limiting number of clones (i.e. the number of actual mutation events) has a Poisson distribution with intensity θ/((1−q)γ), and g_X describes the clone size distribution, that is, the size of the population founded by a single original mutant.

Figure 3: On the left, we plot the probability distribution of the number of mutants in the large population-small mutation limit with θ=Nμ constant. While keeping γ=1.5 and q=1/2 constant, we vary θ=1/10, 1/5, 1/3, 1/2, 1, 2, 3, 5, 10, beginning with the lowest (red) line. On the right, we vary γ choosing 0.25, 0.5, 0.75, 1, 1.5, 2, 3 and 4, beginning with the horizontal (red) line, for θ=1 and q=1/2. The case γ=1 is indicated with a dashed line in both figures. We used recursion (23) to calculate the probabilities.

Figure 4: Comparison of the probability of no mutants p_0 and a single mutant p_1. Solid lines indicate p_0=p_1 depending on q. We included the extreme case q=1 as well.

Figure 5: The mode of V plotted as function of θ and q=0.5. We chose (beginning from the left) γ equal to 0.5, 1, 1.5, 2 and 2.5. The jagged appearance of some of the lines is due to the mode taking only integer values.

Figure 6: Plot of the contours, depending on γ and θ, where the probability p_0=1/2, for q=0, 0.2, 0.4, 0.6, 0.8 (left to right), and the approximation (dashed line) given in (5.1). Note that the quality of the approximation depends on q.

We identified the distribution of V as a (discrete) Compound Poisson random variable and argued that it has infinite mean for γ≤1 and infinite variance for γ≤2, see Table 1. Since the potential non-finiteness of the moments makes the application of moment methods very difficult and unreliable, we give some information about the mode of V. The mode is defined as the value k at which the probability mass function of V takes its maximum. In uni-modal distributions the mode corresponds to the peak of the distribution.

From numerical analysis we are confident that the distribution of V is indeed uni-modal, as can be seen in Figure 3, where we vary the parameters θ and γ. In general, however, it is difficult to derive an explicit formula for the mode, since we only have the recursion (23) available. The recursion, however, makes the first few probabilities explicitly accessible. We therefore investigate the ratio

 \frac{p_1}{p_0}=\frac{θ}{γ+1}F(1,γ;2+γ;q). (25)

This ratio is less than one if the probability p_0 is the maximum of the distribution, and larger if the mode is greater than or equal to one. The ratio depends not only on θ and γ, but also on q. In Figure 4 we show a phase diagram on the θ-γ plane displaying the boundary of the regions where no mutants are the most probable outcome, for a few values of q. The plot shows that the boundaries behave like a linear function. This can be confirmed: if we set p_1/p_0=1 in (25), then θ and γ can be separated and the boundary is described by

 θ=(1+γ)\,F(1,γ;2+γ;q)^{−1}\sim 1+q+(1−q)γ+O(γ^{−1}),\quad\text{for large }γ. (26)

The last approximation is due to an asymptotic result for the hypergeometric function, which we derive in Appendix B. The ratio p_1/p_0, however, gives no information about the actual value of the mode, that is, the most probable number of mutants. By numerically sampling the mode, using an implementation of the recursion (23) in Mathematica, we find that the mode increases rapidly for large θ and small γ, see also Figure 5.

The probability p_0 is indeed an important quantity in itself, since the “resistance probability” 1−p_0 indicates, in the sense of the original Luria-Delbrück formulation, the probability that a population can escape extinction under the attack of a lethal virus due to the existence of resistant mutants. Thanks to the recursion (23), we can give p_0 explicitly,

 \mathds{P}(V=0)=\exp\left[−\frac{θ}{γ}F(1,γ;1+γ;q)\right]. (27)

This expression, derived differently, appears in (iwasa2006, , (7)) (note that they use another definition of the mutation rate, which can be mapped onto ours in terms of the mutation probability per division). With increasing mode the resistance probability increases, in fact exponentially fast by (27). For θ→∞ and fixed γ the resistance probability goes to one. For fixed θ and γ→∞, the probability of no mutants goes to one. Using a Taylor expansion of the hypergeometric function in (27), we can approximate the resistance probability for large γ as

 −\log p_0\sim\frac{θ}{γ(1−q)+q}. (28)

For a fixed p_0, the two variables θ and γ can be separated, and the contour associated with the fixed value of p_0 can be parameterized as

 θ\sim−\left[γ(1−q)+q\right]\log p_0.

A plot of the contours where the probability p_0 equals 1/2, for different q, is given in Figure 6 together with their approximations.
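The probability of no mutants (27) and its large-γ approximation (28) are straightforward to compare numerically; a sketch with illustrative parameter values of our own choosing:

```python
import math

def F(x, g, terms=300):
    # hypergeometric F(1,g;1+g;x) = g * sum_k x^k/(g+k), valid for |x| < 1
    return g * sum(x ** k / (g + k) for k in range(terms))

def p0_exact(theta, gamma, q):
    # eq. (27): P(V = 0) = exp(-(theta/gamma) * F(1,gamma;1+gamma;q))
    return math.exp(-theta / gamma * F(q, gamma))

def p0_approx(theta, gamma, q):
    # large-gamma approximation via eq. (28): -log p0 ~ theta/(gamma(1-q)+q)
    return math.exp(-theta / (gamma * (1 - q) + q))

# the approximation improves as gamma grows
exact, approx = p0_exact(1.0, 20.0, 0.5), p0_approx(1.0, 20.0, 0.5)
```

For θ=1, q=1/2 and γ=20 the two values agree to about three decimal places, while for γ of order one the discrepancy is visibly larger, consistent with Figure 6.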

### 5.2 Large values of θ

We now let θ→∞, using the results of the previous section. Obviously, (21) does not converge as θ→∞, thus we need to introduce a scaling. We formalize this convergence in distribution by

 \lim_{θ→∞}\frac{V}{a}−b=Z,

where a and b depend only on θ. The scaling factor a is in fact already predetermined, see (durrettbook, , section 3.7, p.135), and is proportional to θ^{1/γ}, but we will re-derive it for our case. We take the opportunity to scale out the survival probability 1−q wherever possible.

Let us abbreviate the logarithm of the Laplace transform of V by

 E_V(s)=\log\mathds{E}\left(\exp(−Vs)\right)=Λ_V(e^{−s}).

This definition directly implies

 E_{V/a−b}(s)=E_V(s/a)+bs.

We use this notation also for Z.

With z=e^{−s/a}, eq. (22) turns into

 E_V(s/a)=\frac{θ}{γ}\,\frac{e^{−s/a}−1}{1−q}\,F\!\left(1,1;1+γ;\frac{e^{−s/a}−q}{1−q}\right). (29)

We then apply equation (65) to derive

 E_V(s/a)=\frac{θ(e^{−s/a}−1)}{(1−q)(γ−1)}\,F\!\left(1,1;2−γ;\frac{1−e^{−s/a}}{1−q}\right)+\frac{π}{\sin(πγ)}\left(\frac{−θ^{1/γ}(e^{−s/a}−1)}{e^{−s/a}−q}\right)^{γ}.

As θ tends to infinity, s/a→0 for all fixed s. With a Taylor expansion of e^{−s/a} around zero we can write

 E_V(s/a)\sim∑_{k≥1}\frac{(−s)^k\,θ}{a^k(1−q)(γ−1)\,k!}+\frac{π}{\sin(πγ)}\left(∑_{k≥1}\frac{(−s)^k\,θ^{1/γ}}{a^k(1−q)\,k!}\right)^{γ}. (30)

For the computations we consider four cases, which depend on the value of γ. The results are listed in Table 2. Note that although the limiting expressions for γ=2 and γ>2 are equal, the means are not. Indeed, this can be understood via a derivative with respect to s and a limit s→0.

For γ>2 it is now intuitive to choose a of the order of \sqrt{θ}, since for this choice all terms except the linear and the quadratic term in the first addend of (30) vanish as θ→∞. The linear term however diverges, so we compensate for it by a matching choice of b. Thus

 E_Z(s)=\lim_{θ→∞}E_V(s/a)+bs=\frac{s^2}{2},

which proves that the limit random variable Z has a standard normal distribution.

For γ∈(0,2)∖{1} it is sufficient to set a=θ^{1/γ}/(1−q); then (30) is

 E_V(s/a)\sim−\frac{θ^{1−1/γ}}{γ−1}s+O(θ^{1−2/γ})+\frac{π}{\sin(πγ)}\left[s+O(θ^{−1/γ})\right]^{γ}.

Therefore, if we set b=θ^{1−1/γ}/(γ−1),

 E_Z(s)=\lim_{θ→∞}E_V(s/a)+bs=\frac{π}{\sin(πγ)}s^{γ}.

Note that for γ<1 we could have chosen b=0, since θ^{1−1/γ}→0. The result resembles, and in fact generalizes, the result of (durrettmoseley, , Th. 6) and mandelbrot1974 ().

For γ=1 we have to consider (29), since the expansion (30) has a singularity at γ=1. We use (70) instead and obtain

 E_V(s/a)=\frac{θ(e^{−s/a}−1)}{e^{−s/a}−q}\left[\log\left(\frac{θ(e^{−s/a}−1)}{q−1}\right)−\logθ\right].

With the same Taylor expansion of e^{−s/a} as in the preceding cases, we notice that a=θ/(1−q) is a sensible choice, since then

 E_V(s/a)\sim s\log s−s\logθ

and consequently, for b=\logθ,

 E_Z(s)=\lim_{θ→∞}E_V(s/a)+bs=s\log s,

which also appears in moehle2005 ().

For γ=2 we face the same problem as in the previous case, since the expansion (30) is also singular at γ=2. Via consecutive application of (29) and (71) we obtain

 E_V(s/a)=\frac{θ(e^{−s/a}−1)}{e^{−s/a}−q}−\frac{\left[θ(e^{−s/a}−1)\right]^2}{θ(e^{−s/a}−q)^2}\log\left(\frac{θ(e^{−s/a}−1)}{θ(q−1)}\right).

With the very same argumentation as in the γ=1 case, we find, for a of the order of \sqrt{θ\logθ},

 E_V(s/a)\sim−s\sqrt{\frac{θ}{\logθ}}+\frac{\log(θ\logθ)}{2\logθ}s^2,

so that, with b=\sqrt{θ/\logθ},

 E_Z(s)=\lim_{θ→∞}E_V(s/a)+bs=\frac{1}{2}s^2.

Again, Z has a standard normal distribution.

### 5.3 Large population limit at fixed mutation rate

In this section we extend the results of Mandelbrot from mandelbrot1974 () to allow the possibility of mutant death. We consider a similar approach as in the large-θ limit, but with the mutation rate μ held constant, i.e.

 \lim_{N→∞}\frac{B}{a}−b=W,

where a and b depend on N. As in Section 5.2, we abbreviate by E_B the log-Laplace transform of B and note that

 E_{B/a−b}(s)=E_B(s/a)+bs.

Applying first (65) and then (63) to (7) gives the corresponding expansion of E_B.

The parameter a controls the variance, therefore we know that we should choose a of the order of the standard deviation of B. The parameter b should be proportional to the order of the ratio of the mean to a, see also Figure 1. The limits can now be computed like in the LPSM limit by expanding e^{−s/a} into a power series. We give the chosen scaling factors together with the limit results and their mean and variance in Table 3. Note that we could have removed the mean by centering, but we need a non-zero expectation for the limit, to recover Z.

In order to check that the two pathways for going from B to Z in Figure 2 are equivalent, we take the μ→0 limit of the above large-N limit. Again, we consider the limit

 \lim_{μ→0}\frac{W}{a}=Z,

where we assume for simplicity that a is now only dependent on μ.

For γ≥2 there is nothing to show: W does not depend on μ, and the limit for μ→0 is trivially again a Gaussian. For the remaining cases we notice that, by using (72) in reverse,

 μ^{2−γ}s^2\,F(1,2−γ;3−γ;−s)=μ^{1−γ}s−μ^{1−γ}s\,F(1,1−γ;2−γ;−s)

and by eq. (65) we get

 μ^{1−γ}s\,F(1,2−γ;3−γ;−s)=\frac{π}{\sin(πγ)}μs^{γ}−\frac{sμ}{(s+1)^{γ}}\,F\!\left(1,1;1+γ;\frac{1}{1+s}\right).

Finally, setting a accordingly, we get

 E_Z(s)=\lim_{μ→0}E_W(s/a)=\begin{cases}s\log s & γ=1\\ \frac{π}{\sin(πγ)}s^{γ} & γ∈(0,2)∖\{1\}.\end{cases}

This shows that the scaling limit of W for μ→0 indeed recovers the distributions we obtained in the large-θ limit, which we give in Table 2.

### 5.4 On α-stable distributions

We point out that the limiting Laplace transforms given in Table 2 are well known representatives of the class of α-stable distributions. These distributions appear as the limit distributions in the generalized version of the Central Limit Theorem, where iid random variables are added and rescaled, but the assumption of finite mean and variance is dropped. The Gaussian distribution is in this context the extreme case α=2 and the only limit distribution in this class with finite variance. As a reference see (durrettbook, , sec. 2.7).

One of the many characterizations of α-stable distributions is given via characteristic functions (the notation varies strongly throughout the literature). A random variable X with characteristic function

 φ_X(s)=\mathds{E}\left(\exp(iXs)\right)

is α-stable if and only if

 \logφ_X(s)=\begin{cases}−σ^{α}|s|^{α}\left(1−iβ\,\mathrm{sgn}(s)\tan\frac{απ}{2}\right)+iμs & α≠1\\ −σ|s|\left(1+iβ\frac{2}{π}\,\mathrm{sgn}(s)\log|s|\right)+iμs & α=1.\end{cases}

By a formal substitution s→−is, we can rewrite the distribution of Z, as given in Table 2, using the above notation:

 \logφ_Z(s)=\begin{cases}−σ^{γ}|s|^{γ}\left(1−i\,\mathrm{sgn}(s)\tan\frac{πγ}{2}\right) & γ∈(0,2)∖\{1\}\\ −\frac{π}{2}|s|\left(1+i\,\mathrm{sgn}(s)\frac{2}{π}\log|s|\right) & γ=1\\ −\frac{1}{2}s^2 & γ≥2,\end{cases} (31)

where

 σ=\left[\frac{πγ}{2}\csc\left(\frac{πγ}{2}\right)\right]^{1/γ}.

This means that Z is indeed an α-stable random variable for all γ, with shape parameter α=\min(γ,2).

Unfortunately, the densities are unknown for the majority of α-stable distributions. One exception is the Gaussian distribution (γ≥2). The Lévy distribution with density

 f(t)=\left(\frac{σ}{2π}\right)^{1/2}\frac{1}{t^{3/2}}\exp\left(−\frac{σ}{2t}\right)

and infinite moments, and the Holtsmark distribution with finite mean, are other cases (for the latter a lengthy expression in terms of hypergeometric functions exists). In the special case γ=1, we find that Z is characterized by the Landau distribution. This distribution was also identified for the fully stochastic case by Kessler and Levin in kessler2013 ().
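As an aside, the Lévy density above can be checked against its closed-form distribution function, since ∫_a^b f(t)\,dt=\mathrm{erfc}(\sqrt{σ/(2b)})−\mathrm{erfc}(\sqrt{σ/(2a)}), a standard fact about the Lévy law. A small numeric sketch with σ=1:

```python
import math

sigma = 1.0

def levy_pdf(t):
    # Levy density f(t) = sqrt(sigma/(2*pi)) * t^(-3/2) * exp(-sigma/(2t))
    return math.sqrt(sigma / (2 * math.pi)) * t ** -1.5 * math.exp(-sigma / (2 * t))

def levy_cdf(t):
    # closed-form distribution function of the Levy law
    return math.erfc(math.sqrt(sigma / (2 * t)))

# Simpson's rule on [a, b] should reproduce the CDF difference
a, b, n = 0.1, 10.0, 4000
h = (b - a) / n
total = levy_pdf(a) + levy_pdf(b)
for i in range(1, n):
    total += (4 if i % 2 else 2) * levy_pdf(a + i * h)
integral = total * h / 3
```

The heavy t^{−3/2} tail is also visible here: even the interval [0.1, 10] carries only about three quarters of the total mass.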

## 6 Tail behavior

The mutant distributions as we described them in the previous Section 5 are quite complicated. For this reason we study their tail behavior to gain some intuition. We use asymptotic analysis methods as discussed in flajolet_book ().

We start with the most interesting case, the LPSM limit, where the number of mutants, denoted by V (see Figure 2), is characterized by the generating function (22). The tail behavior of p_n is encoded in the generating function around its relevant (closest to the origin) singularity. This generating function is analytic at the origin, and its only singularity is at y=1, which corresponds to z=1, so we expand around that point. We first use the transformation formula (66) and then the reflection formulas (54), (55) to rewrite the log-generating function. An expansion around y=1 gives

 Λ(z)=\frac{θ}{1−γ}\left[(1−y)+\frac{(1−y)^2}{2−γ}+O\left((1−y)^3\right)\right]−\frac{θπ}{\sin(γπ)}\left[(1−y)^{γ}+γ(1−y)^{1+γ}+O\left((1−y)^{2+γ}\right)\right],

and by using the series expansion of the exponential we get

 G(z)=e^{Λ(z)}=1+Λ(z)+\frac{Λ^2(z)}{2}+O\left(Λ^3(z)\right)=K(1−y)−κ(1−y)^{γ}−κ\left(\frac{θ}{1−γ}+γ\right)(1−y)^{1+γ}+\frac{1}{2}κ^2(1−y)^{2γ}+O\left((1−y)^{\min\{3γ,\,2+γ,\,1+2γ\}}\right),

where K(1−y) is a polynomial with integer powers (which hence does not contribute to the tail behavior), and for brevity we wrote

 κ=\frac{θπ}{\sin(γπ)}.

Note that this is already an expansion in z, since

 1−y=\frac{1−z}{1−q}.

By using (73) and (54) we obtain the large-n expansion

 p_n\sim\frac{θ\,Γ(1+γ)}{(1−q)^{γ}}\,n^{−1−γ}+\frac{κ^2}{2\,Γ(−2γ)(1−q)^{2γ}}\,n^{−1−2γ}−\frac{θ\,Γ(2+γ)}{(1−q)^{1+γ}}\left(\frac{θ}{1−γ}+\frac{γ(1+q)}{2}\right)n^{−2−γ}+O\left(n^{−1−\min\{3γ,\,2+γ,\,1+2γ\}}\right). (32)

The leading order term has been derived in kessler2013 (). The sub-leading order is represented by the n^{−1−2γ} term for γ<1, and by the n^{−2−γ} term for γ>1.

Let us consider the above expansion at the special value γ=1. As γ→1, both sub-leading terms in (32) diverge, but these two singular terms cancel each other out, and one has to consider the next (constant) terms in the series. A logarithmic term appears through the expansion

 n^{1−γ}=e^{(1−γ)\log n}=1+(1−γ)\log n+O\left([(1−γ)\log n]^2\right)

as γ→1. We arrive at

 pn∼θ(