# Rare mutations limit of a steady state dispersal evolution model

###### Abstract

The evolution of a dispersal trait is a classical question in evolutionary ecology, which has been widely studied with several mathematical models. The main question is to define the fittest dispersal rate for a population in a bounded domain, and, more recently, for traveling waves in the full space.

In the present study, we reformulate the problem in the context of adaptive evolution. We consider a population structured by space and a genetic trait acting directly on the dispersal (diffusion) rate under the effect of rare mutations on the genetic trait. We show that, as in simpler models, in the limit of vanishing mutations, the population concentrates on a single trait associated to the lowest dispersal rate. We also explain how to compute the evolution speed towards this evolutionary stable distribution.

The mathematical interest stems from the asymptotic analysis, which requires a completely different treatment for each variable. For the space variable, the ellipticity leads to the use of the maximum principle and Sobolev-type regularity results. For the trait variable, the concentration to a Dirac mass requires a different treatment, based on the WKB method and viscosity solutions, leading to an effective Hamiltonian (the effective fitness of the population) and a constrained Hamilton-Jacobi equation.

Key words: Dispersal evolution; Nonlocal PDE; Constrained Hamilton-Jacobi equation; Effective fitness.

Mathematics Subject Classification (2010): 35B25; 35F21; 92D15

## 1 Evolution of dispersion

### Evolutionary dynamics of a structured population.

There are several well-established mathematical formalisms to describe evolution. Game theory is widely used since the seminal paper [29]; see also [23]. Dynamical systems are also employed to describe the possible invasion of a population by a mutant, and to characterize Evolutionary Stable Strategies and other mathematical concepts; see, for instance, [16, 20]. Stochastic individual-based models are often used to describe the evolution of individuals undergoing birth, death and mutations. Relations with other approaches can also be made in the large population limit; see [2, 12] and the references therein.

The formalism we use here is still different and relies on a population structured by a phenotypical trait and competing for a limited resource. This approach was initiated and has been widely studied in [17]. Several other versions use the formalism of a population structured by a trait, undergoing mutations and competition [34, 15, 25, 28]. All these papers, however, consider only a proliferative advantage: the population with the highest birth rate or best competitive ability survives while other traits undergo extinction. Mathematically, this is represented by a limiting process in which the population number density takes the form of a weighted Dirac mass concentrated at the fittest trait.

The extension to models, where the phenotypical trait is combined with another structuring variable, usually space, is more recent and leads to considerable mathematical difficulties; see [1, 9, 37, 30, 13, 10]. This is mainly due to the fact that, in the trait variable, the solutions concentrate as described above, while, in the space variable, they remain bounded.

Another motivation for considering the model in this note is to study the selection of the fittest individuals without a proliferative advantage. In this context, the reproduction rate might be compensated by another advantage. This gives rise to the question of defining an “effective fitness.” The gradient of the effective fitness determines the direction of trait evolution and its maximum defines the evolutionary stable strategy.

A particularly interesting example in this direction, both mathematically and biologically, is the selection of a dispersal rate, which we describe next in the context of a continuous dispersal trait. When only two species, represented by their number densities, compete for the same resource (carrying capacity), the question is to know which of the two dispersal rates is preferred in the competition. Then, the model is

Is it better to ‘move’ faster or slower? In other words, is it favorable to have the larger of the two dispersal rates, or the contrary?
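For reference, the classical two-species form of this competition model, as studied in [18, 22], can be sketched as follows; the notation here is ours, with $d_1, d_2$ the two dispersal rates and $K$ the carrying capacity:

```latex
\begin{cases}
\partial_t n_1 = d_1\,\Delta n_1 + n_1\big(K(x) - n_1 - n_2\big) & \text{in } \Omega,\\[2pt]
\partial_t n_2 = d_2\,\Delta n_2 + n_2\big(K(x) - n_1 - n_2\big) & \text{in } \Omega,
\end{cases}
\qquad
\partial_\nu n_1 = \partial_\nu n_2 = 0 \ \text{ on } \partial\Omega .
```

For non-constant $K$, the answer of [18] is that the slower disperser excludes the faster one: if $d_1 < d_2$, the semi-trivial steady state with only the first species present attracts positive initial data.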

### The model.

Here we assume that all dispersal rates are possible, and include mutations. We ask the question of the rare mutation limit of the steady state version of a still simple model of evolution of dispersal in a population. The main modeling assumptions are: (i) all individuals carry a phenotype, characterized by a parameter which induces a dispersal rate, (ii) a Fisher-type Lotka-Volterra growth/death rate with a space-dependent carrying capacity and limitation by the total population whatever the trait, and (iii) rare mutations acting on the genetic variable and modeled by a diffusion with small covariance; we refer to [12] for a derivation of this type of equations from individual-based stochastic models.
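A model with these three ingredients takes, generically, the following form; the notation below is illustrative and ours: $\theta$ the trait, $D(\theta)$ the induced dispersal rate, $K$ the carrying capacity, $\varrho_\varepsilon$ the total population and $\varepsilon$ the mutation scale:

```latex
-\,\varepsilon^2\,\partial^2_{\theta\theta} n_\varepsilon
\;-\; D(\theta)\,\Delta_x n_\varepsilon
\;=\; n_\varepsilon\big(K(x) - \varrho_\varepsilon(x)\big),
\qquad
\varrho_\varepsilon(x) \;=\; \int n_\varepsilon(x,\theta)\,d\theta .
```

The mutation term acts only on the trait, while the trait acts on the population only through the dispersal rate; the nonlocal term couples all traits through the shared resource.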

More precisely, we study the asymptotic behavior, as the mutation parameter tends to zero, of the density, defined on the product of a smooth bounded spatial domain and the trait space, solving the nonlocal and nonlinear problem

(1) |

with a Neumann boundary condition in space and periodicity in the trait, that is, in terms of the external normal vector to the spatial domain,

(2) |

We have chosen periodic boundary conditions in the trait variable to simplify some technical aspects concerning a priori estimates.

As far as the carrying capacity and the dispersal rate are concerned, we assume

(3) |

and

(4) |

We note that (4) is used to assert that the effective Hamiltonian also has a minimum at the same trait value.

Finally, for technical reasons, we also assume that

(5) |

### Formal derivation of the mathematical result.

We proceed now formally to explain what happens in the limit and, hence, motivate the statement of the results. As is often the case with problems where concentration is expected in the limit, we make the exponential change of variables

(6) |
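Such an exponential change of variables is typically of Hopf-Cole (WKB) type; schematically, and in notation of ours, it reads

```latex
n_\varepsilon(x,\theta) \;=\; e^{\,u_\varepsilon(x,\theta)/\varepsilon},
\qquad \text{equivalently} \qquad
u_\varepsilon \;=\; \varepsilon \,\ln n_\varepsilon ,
```

so that the possible concentration of the density is recorded by the behavior of the bounded phase, whose maximum points locate the selected trait.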

which leads to

(7) |

with

(8) |

It is clear from (7) that, if the solutions have a limit as the mutation parameter vanishes, it must be independent of the space variable, and, hence, it is natural to expect the expansion

(9) |

Assuming that, as the mutation parameter vanishes, the correctors converge, a formal computation suggests that the limit is the positive eigenfunction of

(10) |

with an associated eigenvalue, and that the leading-order phase solves the constrained Hamilton-Jacobi equation

(11) |

The constraint becomes evident from the facts that, as it turns out, the total populations are bounded uniformly in the mutation parameter, and from the equality

(12) |

which also suggests that, in the limit, the densities behave in the trait variable like a Dirac mass with a spatial weight.

### The mathematical result.

To state the result, we recall the standard notation for the Dirac mass at a point and introduce the nonlinear Fisher-type stationary problem

(13) |

which, in view of (3) and (4), admits a positive solution (see, for example, [3, 14]).
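Generically, a Fisher-type stationary problem of this kind takes the following form, with $d > 0$ a fixed dispersal rate and Neumann boundary conditions; the notation is ours:

```latex
-\,d\,\Delta \bar\varrho \;=\; \bar\varrho\,\big(K(x) - \bar\varrho\big) \ \text{ in } \Omega,
\qquad
\partial_\nu \bar\varrho \;=\; 0 \ \text{ on } \partial\Omega ,
```

which admits a positive solution when the carrying capacity is positive and bounded.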

We have:

### Biological interpretation.

The conclusions of Theorem 1.1 can be thought of as a justification of the fact that the population selects the “slowest” individuals, in accordance with several previous observations on the evolution of dispersal. In this respect, the principal eigenvalue defines the fitness of individuals depending upon their trait. This fact can be stated using the canonical equation (31), which is formally derived in Section 6. In the language of adaptive dynamics, our result characterizes the unique Evolutionary Stable Distribution (or Strategy), [16, 25].

That mutants with lower dispersal rates can invade a population, that is, the characterization expressed by property (4), is known from the first mathematical studies [22, 18]. However, these papers use time scale separation, heuristically assuming that a mutant with a ‘small mutation’ appears and competes with the resident population. Our approach here is more intrinsic, since we consider structured populations competing for resources and undergoing mutations. Surprisingly, when the problem is set in the full space and characterized in terms of traveling waves, the opposite effect is observed: mutants with higher dispersal rates are selected, giving rise to accelerating waves [8, 9, 37, 4, 5]. For two competing populations, the combined effect of dispersal and a drift is studied in [21]. The analysis of dispersal evolution also gave rise to the notion of ideal free distribution [11, 13].

The question of dispersal evolution is a classical and important topic in evolutionary biology. We refer the reader to [36] for a survey of the many related issues, to [33] for the case with patches and demographic stochasticity, to [24] for the case of trajectories with jumps (nonlocal operators) and to [8] for other biological references about accelerating fronts. A formalism using the Fokker-Planck equation is used in [35]. Also, let us mention that a remarkable qualitative aspect of spatial sorting is that, in the full space, the largest dispersal rate is selected [8, 6, 7]. Finally, another mathematical approach to the concentration effect stated in Theorem 1.1 can be found in [26].

### Organization of the paper.

In Section 2 we prove some uniform a priori estimates that are then used in Section 3, where we derive the effective Hamiltonian, that is, the eigenvalue problem (10). In Section 4 we introduce the constrained Hamilton-Jacobi equation to conclude the proofs of Theorem 1.1 and Theorem 4.1. In Section 5 we prove the two technical lemmata that were used in Section 4. Finally, in Section 6, we provide some perspectives on the problem, namely a more precise asymptotic expansion and the parabolic case, as well as numerical examples for the evolution driven by the parabolic equation.

## 2 A priori estimates

We state and prove here some estimates, uniform in the mutation parameter, which are fundamental for the analysis in the rest of the paper; below, the constants depend only on the domain and the data.

###### Lemma 2.1

Proof. We first observe that the total population density trivially satisfies the Neumann condition

(15) |

After dividing (1) appropriately, integrating, and using the periodicity in the trait, we find

and, hence, for some constant depending only on the data, we have

Then (14)(i) follows from the strong maximum principle, while the higher regularity estimates are a consequence of the classical elliptic regularity theory.

The lower bound (14)(iii) comes from integrating (1) in both variables. Indeed, in view of the assumed periodicity, we find

The last claim is an immediate consequence of the a priori estimates and the usual Sobolev embedding theorems.

## 3 The effective Hamiltonian

For each value of the trait, we consider the principal eigenfunction and eigenvalue of

(16) |

Note that, in view of (3) and the regularity of the coefficients, the existence of the pair follows from, for example, [3, 14].

The next lemma provides some important estimates and information about the eigenvalue. In the statement we use the notation

###### Lemma 3.1

The upper bound (17)(ii) follows from the positivity of the eigenfunction, since, after dividing the equation by it and integrating by parts, we find

For (18), we differentiate the equation in (16) with respect to the trait to find

(19) |

where primes denote derivatives with respect to the trait; we multiply by the eigenfunction and integrate by parts, using the boundary condition, to get

(20) |

Next we use the normalization of the eigenfunction and, after multiplying the equation in (16), integrating by parts and subtracting the result from (20), we find

Since we have assumed in (3) that the carrying capacity is not constant, this quantity does not vanish and the result follows.

To conclude, we observe that (4) and (18) yield that the eigenvalue has the same monotonicity in the trait as the dispersal rate and, thus, has a unique local minimum.

## 4 The constrained Hamilton-Jacobi equation

We prove here the generalized Gaussian-type convergence asserted in Theorem 1.1, derive the constrained Hamilton-Jacobi equation (11) and state some more properties. For the benefit of the reader we restate these assertions as a separate theorem below.

###### Theorem 4.1

The family of phases is Lipschitz continuous, uniformly in the mutation parameter, and converges, uniformly in both variables, to a limit which is independent of the space variable and satisfies, in the viscosity sense, the constrained Hamilton-Jacobi equation (11). Moreover,

(21) |

Since the eigenvalue represents the fitness, it turns out that (21) characterizes the limit as the Evolutionary Stable Distribution (or the Evolutionary Stable Strategy); see [16, 25].

It follows from (21) that both the effective fitness and its derivative vanish at the fittest trait. As a result, the viscosity solution of (11) also vanishes there, and this makes the connection with the result of [26].

The proof is a consequence of the next two lemmata which are proved later in the paper.

###### Lemma 4.2 (Bounds)

There exists a constant, independent of the mutation parameter, such that

(22) |

###### Lemma 4.3 (Lipschitz estimates)

There exists a constant, independent of the mutation parameter, such that

Moreover, the phases converge, along a subsequence and uniformly in both variables, to a Lipschitz continuous, periodic function such that

Proof of Theorem 4.1. The fact that any limit of the family satisfies (11) is a standard application of the so-called perturbed test function method (see [19]), and we do not repeat the argument.

It follows from (11) that, at any maximum point of a solution, the gradient vanishes and, hence, the effective Hamiltonian must vanish there; in view of Lemma 3.1,

As a result, the only possible maximum point of any solution of (11) must be at the fittest trait, which implies that the equation has a unique solution.

Also, this knowledge uniquely determines the limit density through equation (13), and, thus, the full family converges.

Proof of Theorem 1.1. The first statement is an immediate consequence of Theorem 4.1. Because the limit phase achieves a unique maximum at the fittest trait, the Laplace formula applied to (6) yields that the densities converge weakly, in the sense of measures, to a Dirac mass in the trait with a weight given by the limit of the total population (see Section 2).
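The Laplace-type argument invoked here is the classical one: if $u_\varepsilon \to u$ uniformly and $u$ attains its maximum at a single point $\theta_m$ (the notation is ours), then, for any continuous test function $\varphi$,

```latex
\frac{\displaystyle\int \varphi(\theta)\, e^{\,u_\varepsilon(\theta)/\varepsilon}\, d\theta}
     {\displaystyle\int e^{\,u_\varepsilon(\theta)/\varepsilon}\, d\theta}
\;\xrightarrow[\varepsilon \to 0]{}\; \varphi(\theta_m),
```

since, for small mutation parameter, all the mass of the exponential concentrates in any neighborhood of the maximum point; the normalized densities thus converge to the Dirac mass at $\theta_m$.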

Next, integrating equation (1) in the trait, we conclude that

Passing to the limit and taking into account that the limit measure is a Dirac mass, we find that the weight coincides with the solution of (13), because they both satisfy the same equation.

## 5 The proofs

We begin with the proof of Lemma 4.2.

Proof of Lemma 4.2. Integrating (7), we find

(23) |

The claim for the maximum bound follows from the estimates of Lemma 2.1.

For each trait value, choose a point where the spatial maximum is attained. Then it follows from (14)(i) that

(24) |

Inserting in (24) the estimate

we get

and, hence,

Using (23), we obtain

and this concludes the proof.

Next we discuss the proof of Lemma 4.3.

Proof of Lemma 4.3. We first assume the Lipschitz bound and prove the rest of the claims.

With the notation of the proof of the previous lemma, it is immediate from (22) that, for some constant,

Next we show that the maximum of the limit vanishes, which, in turn, yields the constraint.

Choose a point where the maximum is attained and write

Combining the lower bound in Lemma 2.1 and the Lipschitz estimate, we get

and, thus, the claim follows.

Now we turn to the proof of the Lipschitz bounds, which is an appropriate modification of the classical Bernstein estimates, taking into account the different scales. Note that the convexity assumption on the domain is used solely in this proof.

We begin by writing the equations satisfied by the two derivatives, which we obtain by differentiating (7) in each of the variables and multiplying by suitable weights. We have:

(25) |

and

(26) |

We then compute

Assume that the maximum is attained at some point. Because of the convexity assumption, the normal derivative of the test quantity is nonpositive on the boundary (see [27]), and thus the maximum point is interior.

Therefore, at this point, we have

and we choose the parameter small enough so that we can absorb the corresponding term in the left-hand side.

Since there is a constant such that

we conclude, using the equation, that, for some constant,

It follows that there exists some positive constant such that

From this we conclude that the gradient is bounded, and the Lipschitz continuity statement is proved.

## 6 Conclusion and perspectives

### Conclusion

We have studied a steady state model describing the Evolutionary Stable Distribution for a simple model of dispersal evolution. The novelty is that mutations acting on the continuous dispersal trait (the diffusion rate) are modeled by a Laplace operator, and this replaces the standard ‘invasion of a favorable mutant’ in the usual time scale separation approach. When the mutation rate is small, we have shown that the minimal dispersal rate is selected, in accordance with previous analyses. This is mathematically stated as the limit to a Dirac mass located at the minimum of the diffusion coefficient in the equation. The technical difficulties lie in the a priori estimates needed to make the approach rigorous and to establish the constrained Hamilton-Jacobi equation which defines the potential in the Gaussian-like concentration.

We indicate two possible extensions of our results. The first concerns the way to make more precise the convergence result of Theorem 1.1. The second is about the time evolution problem.

### A more precise convergence result.

The question we address here is whether it is possible to make more precise the convergence in the weak sense of measures stated in Theorem 1.1.

The gradient bound in Lemma 4.3 implies that, along subsequences, the phases converge. To prove more about the corrector, it is necessary to have a further estimate, which is not, however, available from what we have here.

Another approach is to introduce a modified eigenvalue problem

The change of unknown gives the equation

Standard gradient estimates yield that the new unknown is bounded independently of the mutation parameter and, from the equation, we see that its derivatives are also bounded as well. It follows that, after an appropriate normalization, it converges uniformly in the limit.

As a result we can factorize the solution of (1) as

and we claim that, for some other factor ,

(27) |

a statement which is more precise than that of Theorem 1.1.

To see this, we write the equation for as

To obtain bounds, we notice that the factor is bounded from above and below and, hence, near the fittest trait, two-sided barriers hold with constants independent of the mutation parameter. It then follows from standard arguments that the family is bounded.

The limiting equation is the eigenfunction problem (16), whose positive solutions are all mutually proportional, which gives the statement.

### The parabolic problem.

Our approach is mainly motivated by the dynamics of the evolution of dispersal. The steady states constitute the Evolutionary Stable Distribution and are obtained as the long-time limit of competing populations [25]. This leads to the study of the time-dependent problem

(28) |

with Neumann boundary conditions in space and periodicity in the trait.

For this problem there are two limits of interest, namely long times and rare mutations. So far, we have first let time go to infinity, reaching a steady state (1) of (28), and have then considered the rare mutation limit.

Reversing the order, we need to study first the rare mutation limit. In this case, we expect that, in the weak sense of measures,

where, at least formally, the weight is defined, for each time, by the stationary Fisher/KPP equation

(29) |

We can follow the same derivation as before and discover that the value of the fittest dispersal trait is now obtained through the time-dependent constrained Hamilton-Jacobi equation

(30) |

Here the effective fitness is still given by the eigenvalue problem (16).

Note that, since derivatives vanish at a maximum point, we conclude that

and we also obtain an explicit identity at the maximum point.

We recall that, still formally, we can derive from (30) a canonical equation for the fittest trait which takes the form, see [17, 28, 31, 32],

(31) |

We note that this equation also describes the fact that the fittest trait evolves towards smaller values and, thus, towards smaller dispersal rates.
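In the cited references, canonical equations of this type take, formally, the following generic form; the notation is ours, with $\bar\theta(t)$ the fittest trait, $H$ the effective fitness and $u(\cdot,t)$ the phase of the WKB ansatz:

```latex
\dot{\bar\theta}(t)
\;=\;
\Big(-\,\partial^2_{\theta\theta}\, u\big(\bar\theta(t),t\big)\Big)^{-1}\,
\partial_\theta H\big(\bar\theta(t)\big).
```

Since the prefactor is nonnegative at a maximum point of the phase, the trait climbs the gradient of the effective fitness, here towards the trait with the lowest dispersal rate.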

The main difficulties, compared to the stationary case, are to derive a priori estimates analogous to those in Lemma 2.1 and to obtain gradient estimates. Since the Lipschitz regularity is optimal and differentiability can only be proved at the maximum point [34], giving a meaning to (31) is also a challenge.

### Some numerics on the parabolic problem.

Numerical simulations are presented in Figure 1, which illustrates the selection of the lowest dispersal rate. For the parabolic problem (28), this means the convergence of the fittest trait to the smallest admissible trait value in the long-time limit.

For this simulation we have chosen the data

and we have used, for numerical convenience, Dirichlet boundary conditions both in space and trait. The numerical scheme is the standard three-point scheme in each direction, implicit in space and explicit in the trait, because the mutation parameter is small enough so as not to penalize the computational time.

In Figure 1, we observe that the average trait decreases and gets close to its minimal value at the third time displayed; the trait also exhibits a concentrated pattern around its average.
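To complement the description above, here is a minimal, fully explicit finite-difference sketch (in contrast with the implicit-in-space scheme used for Figure 1) of a toy version of the parabolic problem (28); all coefficients, grids and boundary data below are illustrative choices of ours, not those used for Figure 1.

```python
import numpy as np

# Toy explicit scheme for a parabolic dispersal-evolution model of the type
# discussed above (illustrative version, our notation):
#   dn/dt = theta * n_xx + eps^2 * n_thetatheta + n * (K(x) - rho(x)),
#   rho(x) = integral of n(x, theta) over theta,
# with homogeneous Dirichlet conditions in both x and theta.

Nx, Nt = 21, 21                                   # grid points in x and theta
x = np.linspace(0.0, 1.0, Nx)
theta = np.linspace(1.0, 2.0, Nt)                 # the trait is the diffusion rate
dx, dth = x[1] - x[0], theta[1] - theta[0]
eps = 0.05                                        # mutation scale (illustrative)
dt, steps = 2e-4, 1500                            # dt satisfies the explicit CFL condition

K = 30.0 * np.sin(np.pi * x)                      # carrying capacity (illustrative)
n = np.outer(np.sin(np.pi * x), np.sin(np.pi * (theta - 1.0)))   # initial density

mean_trait = []
for _ in range(steps):
    rho = n.sum(axis=1) * dth                     # total population at each x
    lap_x = (n[2:, 1:-1] - 2.0 * n[1:-1, 1:-1] + n[:-2, 1:-1]) / dx**2
    lap_th = (n[1:-1, 2:] - 2.0 * n[1:-1, 1:-1] + n[1:-1, :-2]) / dth**2
    growth = n[1:-1, 1:-1] * (K[1:-1, None] - rho[1:-1, None])
    n[1:-1, 1:-1] += dt * (theta[None, 1:-1] * lap_x + eps**2 * lap_th + growth)
    mean_trait.append((n * theta[None, :]).sum() / n.sum())

# The population-averaged trait drifts towards the smallest dispersal rate.
print(mean_trait[0], mean_trait[-1])
```

As in Figure 1, the average trait decreases from the middle of the trait interval towards its lower end; refining the grids and taking the mutation scale smaller sharpens the concentration around the average.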

## References

- [1] A. Arnold, L. Desvillettes and C. Prévost. Existence of nontrivial steady states for populations structured with respect to space and a continuous trait, Commun. Pure Appl. Anal. 11, 83–96, 2012.
- [2] M. Baar, A. Bovier and N. Champagnat. From stochastic, individual-based models to the canonical equation of adaptive dynamics - in one step. arXiv:1505.02421, 2015. To appear in Ann. Appl. Probab.
- [3] H. Berestycki, B. Nicolaenko, and B. Scheurer. Traveling wave solutions to combustion models and their singular limits, SIAM J. Math. Anal., 16 (1985), 1207–1242.
- [4] H. Berestycki, T. Jin and L. Silvestre. Propagation in a nonlocal reaction diffusion equation with spatial and genetic trait structure. arXiv:1411.2019, 2014. To appear in Nonlinearity.
- [5] N. Berestycki, C. Mouhot and G. Raoul. Existence of self-accelerating fronts for a non-local reaction-diffusion equation. arXiv:1512.00903, 2015.
- [6] E. Bouin, V. Calvez, Travelling waves for the cane toads equation with bounded traits, Nonlinearity 27(9), 2233, 2014. Doi:10.1088/0951-7715/27/9/2233.
- [7] E. Bouin, V. Calvez, Propagation in a Kinetic Reaction-Transport Equation: Travelling Waves And Accelerating Fronts. Archive for Rational Mechanics and Analysis, 217(2), 571–617, 2015.
- [8] E. Bouin, V. Calvez, N. Meunier, S. Mirrahimi, B. Perthame, G. Raoul, and R. Voituriez. Invasion fronts with variable motility: phenotype selection, spatial sorting and wave acceleration. C. R. Math. Acad. Sci. Paris, 350(15-16):761–766, 2012.
- [9] E. Bouin and S. Mirrahimi. A Hamilton-Jacobi limit for a model of population structured by space and trait. Comm. Math Sci., 13(6): 1431–1452, 2015.
- [10] F. Campillo and C. Fritsch. Weak convergence of a mass-structured individual-based model. Applied Mathematics & Optimization, Vol. 72 (2015), no. 1, 37–73.
- [11] R. S. Cantrell, C. Cosner and Y. Lou. Approximating the ideal free distribution via reaction-diffusion-advection equations. J. Differential Equations, 245 (2008), no. 12, 3687–3703.
- [12] N. Champagnat, R. Ferrière, and S. Méléard. Unifying evolutionary dynamics: From individual stochastic processes to macroscopic models. Th. Pop. Biol., 69(3):297–321, 2006.
- [13] C. Cosner, J. Dávila and S. Martínez. Evolutionary stability of ideal free nonlocal dispersal. J. Biol. Dyn., Vol. 6 (2012), No. 2, 395–405.
- [14] J. Coville and L. Dupaigne. Propagation speed of travelling fronts in nonlocal reaction-diffusion equations. Nonlin. Anal., 60 (2005), 797–819.
- [15] L. Desvillettes, P.-E. Jabin, S. Mischler, and G. Raoul. On selection dynamics for continuous structured populations. Comm. Math. Sci. 6(3), 729–747 (2008).
- [16] O. Diekmann. A beginner’s guide to adaptive dynamics. In Rudnicki, R. (Ed.), Mathematical modeling of population dynamics, Banach Center Publications, Vol. 63 (2004), 47–86.
- [17] O. Diekmann, P.-E. Jabin, S. Mischler, and B. Perthame. The dynamics of adaptation: an illuminating example and a Hamilton-Jacobi approach. Th. Pop. Biol., 67(4):257–271, 2005.
- [18] J. Dockery, V. Hutson, K. Mischaikow and M. Pernarowski. The evolution of slow dispersal rates: a reaction diffusion model. J. Math. Biol. (1998) 37: 61–83.
- [19] L. C. Evans. The perturbed test function method for viscosity solutions of nonlinear PDE. Proc. Roy. Soc. Edinburgh, 111 A (1989), 359–375.
- [20] S. A. H. Geritz, J. A. J. Metz , E. Kisdi and G. Meszena. Dynamics of adaptation and evolutionary branching. Physical Review Letters 78 (1997) 2024–2027.
- [21] R. Hambrock and Y. Lou. The evolution of conditional dispersal strategies in spatially heterogeneous habitats. Bull. Math. Biol. 71 (2009), no. 8, 1793–1817.
- [22] A. Hastings. Can spatial variation alone lead to selection for dispersal? Theoret. Popul. Biol., 24 (1983), 244–251.
- [23] J. Hofbauer and K. Sigmund, Evolutionary games and population dynamics. London Mathematical Society, Student texts 7. Cambridge University Press (2002).
- [24] V. Hutson, S. Martínez, K. Mischaikow, G. T. Vickers. The evolution of dispersal, J. Math. Biol., 47 (2003), 6, 483–517.
- [25] P.-E. Jabin and G. Raoul. Selection dynamics with competition. J. Math. Biol., (2011) 63(3) 493–517.
- [26] K.-Y. Lam and Y. Lou. A mutation-selection model for evolution of random dispersal. arXiv:1506.00662, 2015.
- [27] P.-L. Lions. Neumann type boundary conditions for Hamilton-Jacobi equations, Duke Math. J., 52 (1985), 793–820.
- [28] A. Lorz, S. Mirrahimi, and B. Perthame. Dirac mass dynamics in multidimensional nonlocal parabolic equations. Comm. Partial Differential Equations, 36(6):1071–1098, 2011.
- [29] J. Maynard Smith. The theory of games and the evolution of animal conflicts. J. Theor. Biol. 47 (1974), 209–221.
- [30] S. Mirrahimi and B. Perthame. Asymptotic analysis of a selection model with space. J. Math. Pures et Appl. 104 (2015) 1108–1118.
- [31] S. Mirrahimi and J.-M. Roquejoffre. Uniqueness in a class of Hamilton-Jacobi equations with constraints. Comptes Rendus Ac. Sc. Paris, Mathematiques, Vol. 353, (2015), 489–494: http://dx.doi.org/10.1016/j.crma.2015.03.005
- [32] S. Mirrahimi and J.-M. Roquejoffre. A class of Hamilton-Jacobi equations with constraint: uniqueness and constructive approach. Journal of Differential Equations, Vol. 260(5), 2016, 4717–4738.
- [33] K. Parvinen, U. Dieckmann, M. Gyllenberg and J. A. J. Metz. Evolution of dispersal in metapopulations with local density dependence and demographic stochasticity, J. Evol. Biol., (2003) 16:143–53.
- [34] B. Perthame and G. Barles. Dirac concentrations in Lotka-Volterra parabolic PDEs. Indiana Univ. Math. J., 57(7):3275–3301, 2008.
- [35] A. Potapov, U. E. Schlägel and M. A. Lewis. Evolutionary stable diffusive dispersal, DCDS(B), 19(10), 2014, 3319–3340.
- [36] O. Ronce. How does it feel to be like a rolling stone? Ten questions about dispersal evolution. Annu. Rev. Ecol. Evol. Syst., (2007) 38:231–253.
- [37] O. Turanova. On a model of a population with variable motility. Mathematical Models and Methods in Applied Sciences, Vol. 25, No. 10, 1961–2014 (2015).