A representation for exchangeable coalescent trees and generalized tree-valued Fleming-Viot processes

Stephan Gufler Technion, Faculty of Industrial Engineering and Management, Haifa 3200003, Israel, stephan.gufler@gmx.net
Abstract

We give a de Finetti type representation for exchangeable random coalescent trees (formally described as semi-ultrametrics) in terms of sampling iid sequences from marked metric measure spaces. We apply this representation to define versions of tree-valued Fleming-Viot processes from a Ξ-lookdown model. As state spaces for these processes, we use, besides the space of isomorphy classes of metric measure spaces, also the space of isomorphy classes of marked metric measure spaces and a space of distance matrix distributions. This allows us to include the case with dust in which the genealogical trees have isolated leaves.

Keywords: Ultrametric, jointly exchangeable array, marked metric measure space, dust, tree-valued Fleming-Viot process, lookdown model, Ξ-coalescent.
AMS MSC 2010: Primary 60G09, Secondary 60J25, 60K35, 92D10

1 Introduction

1.1 Some background on coalescent trees, ultrametrics, and metric measure spaces

In population genetics, coalescents are common models for the genealogy of a sample from a population. The Kingman coalescent [K82-SPA] is a partition-valued process in which each individual of the sample forms its own block at time 0, and as we look into the past, each pair of blocks merges independently at a constant rate. These blocks stand for the families of individuals that have a common ancestor at given times in the past. Generalizations of the Kingman coalescent include the Λ-coalescent (Pitman [Pitman99], Sagitov [Sagitov99], Donnelly and Kurtz [DK99]) where multiple blocks are allowed to merge to a single block at the same time, and the Ξ-coalescent (Möhle and Sagitov [MS01], Schweinsberg [Schw00]) where several clusters of blocks may also merge simultaneously.
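The pairwise-merging dynamics just described can be sketched in a few lines of code. The following simulation is an illustration only (all function names are our own, and the rate-1-per-pair convention for the Kingman coalescent is the standard one):

```python
import random

def kingman_coalescent(n, seed=None):
    """Simulate a Kingman coalescent started from n singleton blocks.

    With k blocks present, each of the k*(k-1)/2 pairs merges at rate 1,
    so the waiting time to the next merger is Exp(k*(k-1)/2) and the
    merging pair is uniform. Returns the path as (time, partition) pairs.
    """
    rng = random.Random(seed)
    t = 0.0
    blocks = [{i} for i in range(1, n + 1)]  # singleton blocks at time 0
    path = [(t, [set(b) for b in blocks])]
    while len(blocks) > 1:
        k = len(blocks)
        t += rng.expovariate(k * (k - 1) / 2)   # total merger rate
        i, j = sorted(rng.sample(range(k), 2))  # uniformly chosen pair
        blocks[i] |= blocks.pop(j)
        path.append((t, [set(b) for b in blocks]))
    return path

path = kingman_coalescent(5, seed=1)
```

The returned path is a coarsening sequence of partitions, exactly the kind of càdlàg object discussed below.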

A (semi-)ultrametric is a (semi-)metric ρ that satisfies the strong triangle inequality ρ(x, z) ≤ max{ρ(x, y), ρ(y, z)}. A realization of a coalescent for an infinite sample can be expressed as a càdlàg path (π_t)_{t ≥ 0} with values in the space of partitions of ℕ such that π_t is a coarsening of π_s for all s ≤ t. We assume that for each pair of integers, there is a time at which the elements of this pair are in a common block. Then (π_t)_{t ≥ 0} can equivalently be expressed as a semi-ultrametric ρ on ℕ such that for all t ≥ 0 and i, j ∈ ℕ,

ρ(i, j) ≤ 2t if and only if i and j belong to a common block of π_t, (1.1)

and (1.1) yields a one-to-one correspondence between these càdlàg paths and the semi-ultrametrics on ℕ, cf. [E08]*Example 3.41 and [EGW15]*p. 262.
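For a finite sample, this correspondence is easy to make concrete. The following sketch assumes the factor-2 convention in which the distance between two leaves is twice their coalescence time; the function names are our own:

```python
def coalescence_times(history):
    """First time each pair lies in a common block of the partition path.

    history: list of (time, partition) pairs with partitions given as
    lists of sets of integer labels, each a coarsening of the previous one.
    """
    T = {}
    for t, blocks in history:
        for block in blocks:
            for i in block:
                for j in block:
                    if i < j and (i, j) not in T:
                        T[(i, j)] = t
    return T

def semi_ultrametric(history):
    """Distance matrix rho(i, j) = 2 * coalescence time (an assumed
    convention: leaf-to-leaf distance through the common ancestor)."""
    labels = sorted(set().union(*history[0][1]))
    n = len(labels)
    T = coalescence_times(history)
    rho = [[0.0] * n for _ in range(n)]
    for (i, j), t in T.items():
        a, b = labels.index(i), labels.index(j)
        rho[a][b] = rho[b][a] = 2.0 * t
    return rho

# star-shaped example: three singletons merge into one block at time 1
rho = semi_ultrametric([(0.0, [{1}, {2}, {3}]), (1.0, [{1, 2, 3}])])
# the strong triangle inequality holds:
assert all(rho[i][k] <= max(rho[i][j], rho[j][k])
           for i in range(3) for j in range(3) for k in range(3))
```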

Evans [E00] studies the completion of the random ultrametric space associated with the Kingman coalescent which he endows with a probability measure such that the mass on each ball is given by the asymptotic frequency of the corresponding family, and a class of more general coalescents is studied by Berestycki et al. [BBS08].

Remark 1.1.

Let us briefly recall the well-known correspondence between ultrametric spaces and real trees, to which we will refer to explain the main concepts in this article. A real tree is a metric space (T, d) that is tree-like in the sense that (i) no subspace is homeomorphic to the unit circle, and (ii) for each pair of points x, y ∈ T, there exists an isometry φ from the real interval [0, d(x, y)] to T with φ(0) = x and φ(d(x, y)) = y; see e. g. Evans [E08] for an overview. An ultrametric space (X, d) can be isometrically embedded into a real tree that is obtained by identifying the elements with distance zero of an appropriate semi-metric space built over X. This tree coincides, up to a factor 2 in the metric, with the set constructed in [EGW15]*p. 262. Clearly, X is isometric to the subspace of the leaves of this tree. For a semi-ultrametric space, we identify the elements with distance zero to obtain an ultrametric space which we associate with a real tree as above. A related embedding of an ultrametric space is given in [Hughes]*Section 6.

As in Remark 1.1, a semi-ultrametric on ℕ can be considered as an infinite tree whose leaves are labeled by the elements of ℕ. Often these labels are not relevant, for instance, when they only record the order in which iid samples from a population are drawn. To remove the labels, we could pass to the isometry class. However, the asymptotic block frequencies in the coalescent given by an ultrametric on ℕ are not determined by the isometry class, as one may apply an infinite permutation without changing the isometry class. To retain just this information besides the metric structure, we can take a measure-preserving isometry class of the completion of the ultrametric space, endowed with a probability measure that charges each ball with the asymptotic frequency of the corresponding block, if such a probability measure exists. This probability measure can equivalently be described as the weak limit, as n tends to infinity, of the uniform probability measures on the individuals 1, …, n. Then we obtain the description by isomorphy classes of metric measure spaces of Greven, Pfaffelhuber, and Winter [GPW09] that was applied to Λ-coalescents in the dust-free case. We speak of the dust-free case if the semi-ultrametric space has no isolated points, which means that the coalescent tree has no isolated leaves. Greven, Pfaffelhuber, and Winter [GPW09] also show that their approach is not directly applicable to Λ-coalescents with dust. The most elementary example for the case with dust is the star-shaped coalescent which starts in the partition into singleton blocks which all merge into a single block at some instant. The associated ultrametric on ℕ induces the discrete topology. Here the uniform probability measures on {1, …, n} do not converge weakly as n tends to infinity; they converge vaguely to the zero measure.

A triple (X, r, μ) that consists of a complete and separable metric space (X, r) and a probability measure μ on the Borel sigma algebra on X is called a metric measure space. For a metric measure space (X, r, μ), one can consider the matrix (r(x_i, x_j))_{i,j∈ℕ} of the distances between μ-iid samples x_1, x_2, …. The distribution of this matrix is called the distance matrix distribution of (X, r, μ). By the Gromov reconstruction theorem (see Theorem 4 of Vershik [Vershik02]), there exists a measure-preserving isometry between the supports of the measures of any two metric measure spaces that have the same distance matrix distribution, in which case we call them isomorphic.
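For a finitely supported metric measure space, the distance matrix distribution can be sampled directly; the following minimal sketch (with hypothetical names and a toy two-point space) shows the construction:

```python
import random

def empirical_distance_matrix(points, dist, weights, k, seed=None):
    """Draw k points iid from the sampling measure of a finitely
    supported metric measure space and return their mutual distances,
    i.e. one sample of the k x k corner of the distance matrix."""
    rng = random.Random(seed)
    xs = rng.choices(points, weights=weights, k=k)
    return [[dist(x, y) for y in xs] for x in xs]

# toy ultrametric measure space: two points at distance 3, mass 1/2 each
dist = lambda x, y: 0.0 if x == y else 3.0
matrix = empirical_distance_matrix(["a", "b"], dist, [0.5, 0.5], k=4, seed=7)
```

Repeating this experiment approximates the distance matrix distribution, the invariant by which isomorphy is defined above.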

We view a random semi-metric ρ on ℕ as the random matrix (ρ(i, j))_{i,j∈ℕ}, and we call it exchangeable if (ρ(p(i), p(j)))_{i,j∈ℕ} is distributed as (ρ(i, j))_{i,j∈ℕ} for each (finite) permutation p of ℕ. Under an appropriate condition which we interpret as dust-freeness in Remark 3.13, Vershik [Vershik02]*Theorem 5 associates with any typical realization of an exchangeable (and ergodic) random semi-metric on ℕ a metric measure space whose distance matrix distribution is the distribution of this semi-metric. In the next subsection, we discuss an extension of such a representation to the case with dust.

1.2 The sampling representation

We give a representation for all exchangeable random semi-ultrametrics on ℕ in terms of sampling from random marked metric measure spaces. Marked metric measure spaces were introduced in Depperschmidt, Greven, and Pfaffelhuber [DGP11]. Here, an ℝ₊-marked metric measure space is a triple (X, r, μ) that consists of a complete and separable metric space (X, r) and a probability measure μ on the Borel sigma algebra on the product space X × ℝ₊. The marked distance matrix distribution of a marked metric measure space is defined as the distribution of ((r(x_i, x_j))_{i,j∈ℕ}, (v_i)_{i∈ℕ}) where ((x_i, v_i))_{i∈ℕ} is a μ-iid sequence in X × ℝ₊. Marked metric measure spaces with the same marked distance matrix distribution are called isomorphic.

In the present article, we use marked metric measure spaces to obtain, from a random variable (r, v) that has the marked distance matrix distribution of a marked metric measure space, an exchangeable semi-metric ρ on ℕ by

ρ(i, j) = (r(i, j) + v_i + v_j) 1{i ≠ j}.

We call the distribution of ρ the distance matrix distribution of the marked metric measure space. The basic result in this article (stated in Theorem 3.9 below) is that every exchangeable semi-ultrametric on ℕ can be represented as the outcome of a two-stage random experiment, where we have the isomorphy class of a random marked metric measure space in the first stage, and we sample from this marked metric measure space according to its distance matrix distribution in the second stage.

We construct the marked metric measure space realization-wise from the exchangeable semi-ultrametric ρ: the key idea is to decompose the tree that is associated with a realization of ρ into the external branches and the remaining subtree. Here we define that an external branch consists only of the leaf if that leaf corresponds to an integer that has ρ-distance zero to another integer. In the marked metric measure space, the marks encode the external branch lengths, and the metric space describes the remaining subtree. We call the semi-ultrametric ρ dust-free if the external branches all have length zero a. s. In this case, the marked metric measure space can also be replaced by a metric measure space (as in Corollary 3.12). We prove Theorem 3.9 in Section 10. In Section 2, we formulate the decomposition at the external branches in terms of semi-ultrametrics.

The representation for exchangeable semi-ultrametrics from Theorem 3.9 can also be seen in the more general but less explicit contexts of the ergodic decomposition (Section 3.5) and the Aldous-Hoover-Kallenberg representation (see e. g. [Kal05]*Section 7). In the representation result outlined above, the distance matrix distribution of the isomorphy class of the marked metric measure space is the ergodic component in whose support the realization lies. The ergodic component is also characterized by the isomorphy class of the marked metric measure space itself, or in the dust-free case by the isomorphy class of a metric measure space. The finite analog of the aforementioned ergodic decomposition is that a (discrete) random tree whose leaves are labeled exchangeably can be obtained by first drawing the random unlabeled tree and then sampling the labels of the leaves uniformly without replacement.

We mention that Evans, Grübel, and Wakolbinger [EGW15] also decompose real trees into the external branches and the remaining subtree to give a representation of the elements of the Doob-Martin boundary of Rémy’s algorithm in terms of sampling from a weighted real tree and an additional structure. In [EGW15]*Section 7, a sampling representation for exchangeable ultrametrics is considered (see Remark 10.9).

1.3 Evolving genealogies

In Section 4, we lay the foundation for our study of evolving genealogies by considering a general time-homogeneous Markov process with values in the space of semi-ultrametrics on ℕ; this process describes evolving leaf-labeled trees. Assuming that the state at each time is exchangeable, we map this process realization-wise to the processes of the ergodic components. We express these ergodic components as (isomorphy classes of) metric measure spaces and marked metric measure spaces, and as distance matrix distributions, respectively. Here we use the representation result for exchangeable semi-ultrametrics. This approach characterizes the processes of the ergodic components up to null events only at countably many time points, i. e. as versions, as we discuss in Remark 4.4. Using the criterion of Rogers and Pitman [RP81]*Theorem 2, we deduce that these image processes are also Markovian, and we describe them by well-posed martingale problems. This is an example of Markov mapping in the sense of Kurtz [Kurtz98], and Kurtz and Nappo [KurtzNappo11].

In Sections 5 – 6, we study a concrete Markov process with values in the space of semi-ultrametrics, namely the process given by the evolving genealogical trees in a lookdown model with simultaneous multiple reproduction events. Lookdown models were introduced by Donnelly and Kurtz [DK96, DK99] to represent measure-valued processes along with their genealogy, see also e. g. Etheridge and Kurtz [EK14] and Birkner et al. [BBMST09]. A lookdown model can be seen as a (possibly) infinite population model in which each individual at each time is assigned a level. The role of this level is model-inherent, namely to order the individuals such that the restriction of the model to the first finitely many levels is well-behaved (i. e. only finitely many reproduction events are visible in bounded time intervals) and that the modeled quantity (e. g. types, genealogical distances) is exchangeable. In [DK99] and in the present article, the level is the rank among the individuals at the respective time according to the time of the latest descendant. Although the levels in finite restrictions of the lookdown model differ from the labels in the Moran model, the processes of the unlabeled genealogical trees coincide, which is used to study the length of the genealogical trees in Pfaffelhuber, Wakolbinger, and Weisshaupt [PWW11] and Dahmer, Knobloch, and Wakolbinger [DKW14].

In Section 7, we remove the labels from the evolving genealogical trees in the infinite lookdown model by applying the result from Section 4 to the process from Sections 5 – 6. We call the processes of the ergodic components tree-valued Fleming-Viot processes, regardless of which one of the three state spaces we use. The tree-valued Fleming-Viot process with values in the space of isomorphy classes of metric measure spaces is introduced in the case with binary reproduction events (which is associated with the Kingman coalescent) by Greven, Pfaffelhuber, and Winter [GPW13] as the solution of a well-posed martingale problem that is the limit in distribution of corresponding processes read off from finite Moran models. In [GPW13]*Remark 2.20, a construction of (a version of) this process from the lookdown model of Donnelly and Kurtz [DK96] is outlined. The aim in the present article regarding tree-valued Fleming-Viot processes is the generalization to the case with dust. We remark that tree-valued Fleming-Viot processes with mutation and selection are studied in Depperschmidt, Greven, and Pfaffelhuber [DGP12, DGP13] where the states are isomorphy classes of marked metric measure spaces and the marks encode allelic types. In the present article, the marks encode lengths of external branches. We consider only the neutral case, and we describe genealogies without using types.

In Section 8, we show continuity properties of the semigroups of tree-valued Fleming-Viot processes and that the domains of the martingale problems for them are cores. In Section 9, we show that tree-valued Fleming-Viot processes converge in distribution to equilibrium.

While we construct versions of tree-valued Fleming-Viot processes in the present article using the representation result, the full sample paths are constructed by techniques specific to the lookdown model in the companion article [Pathw].

1.4 Additional related literature

Aldous [Ald93] represents consistent families of finite trees that satisfy a “leaf-tight” property by random measures on (and random subsets of) the space ℓ¹. Kingman’s coalescent is given as an example in [Ald93]. The “leaf-tight” property corresponds to the absence of dust. A representation for exchangeable hierarchies in terms of sampling from random weighted real trees is given by Forman, Haulk, and Pitman [FHP15]. There are many other representation results for exchangeable structures in the literature. For instance, by the Dovbysh-Sudakov theorem, see Austin [Austin15] for a proof based on a representation for exchangeable random measures, jointly exchangeable arrays that are non-negative definite can be represented in terms of sampling from a Hilbert space.

The genealogy in the lookdown model is further studied in Pfaffelhuber and Wakolbinger [PW06]. Kliem and Löhr [KL14] further study marked metric measure spaces; in their article, tree-valued Λ-Fleming-Viot processes in the dust-free case are also mentioned. Kliem and Winter [KW17] use marked metric measure spaces to describe trait-dependent branching processes. In the context of measure-valued spatial Λ-Fleming-Viot processes with dust, Véber and Wakolbinger [VW15] work with a skeleton structure. Functionals of coalescents like external branch lengths have also been studied, see for example [M10]. Also the time evolution of such functionals has been studied for evolving coalescents, see for example [KSW14, DK14].

Bertoin and Le Gall [BLG03, BLG05, BLG06] represent Λ-coalescents in terms of sampling from flows of bridges from which they also construct measure-valued Fleming-Viot processes. They also consider mass coalescents. Mass coalescents (see e. g. Chapter 4.3 in Bertoin [Bertoin]) also describe genealogies without labeling individuals. In Section 12, we construct the Fleming-Viot process with values in the space of distance matrix distributions from the dual flow of bridges. We also mention the work of Labbé [Lab12] where relations between the lookdown model and flows of bridges are studied.

2 Distance matrices and their decompositions

We write ℕ = {1, 2, …} and ℝ₊ = [0, ∞). We consider the space of semi-ultrametrics on ℕ and the space of semi-metrics on ℕ; we view both as subspaces of ℝ₊^{ℕ×ℕ} in that we do not distinguish between a semi-metric ρ and the distance matrix (ρ(i, j))_{i,j∈ℕ}. We endow ℝ₊^{ℕ×ℕ} with a complete and separable metric that induces the product topology when ℝ₊ is equipped with the Euclidean topology. Using the map that associates with a pair (r, v), consisting of a semi-metric r and a sequence v of non-negative marks, the matrix

(r(i, j) + v_i + v_j) 1{i ≠ j}, i, j ∈ ℕ,

we define the space of those pairs (r, v) for which this matrix is a semi-ultrametric; its elements we call decomposed semi-ultrametrics or marked distance matrices. As above, we view this space as a subspace of a product space which we endow with a complete and separable metric that induces the product topology.

We define the external branch lengths of a semi-ultrametric ρ by

v_i(ρ) = (1/2) inf_{j: j ≠ i} ρ(i, j), i ∈ ℕ,

and we consider the function that maps a semi-ultrametric ρ to the decomposed semi-ultrametric that is given by (v_i(ρ))_{i∈ℕ} and the reduced distance matrix

r(i, j) = (ρ(i, j) − v_i(ρ) − v_j(ρ)) 1{i ≠ j}

for i, j ∈ ℕ. The interpretation of these functions is given in Remark 2.2 below, from which it follows that r is a tree-like semi-metric (i. e., r is 0-hyperbolic, see e. g. [E08]). Alternatively, it can be easily checked that r satisfies the triangle inequality.

The map that sends a decomposed semi-ultrametric (r, v) to the matrix (r(i, j) + v_i + v_j) 1{i ≠ j} retrieves the semi-ultrametric from a decomposed semi-ultrametric. In particular, composing the decomposition map from above with this retrieval map yields the identity on the space of semi-ultrametrics.

Remark 2.1.

Let us agree on the following notation. When we identify the elements of a semi-metric space that have distance zero in order to obtain a metric space, we refer by each element of the semi-metric space also to the associated element of the metric space. Furthermore, we define the metric completion of a semi-metric space as the metric completion of the metric space obtained in this way.

Remark 2.2.

Let ρ be a semi-ultrametric on ℕ, let i ∈ ℕ, and let T be the real tree associated with ρ as in Remark 1.1. Then v_i(ρ) can be interpreted as the length, and the corresponding point of T as the starting vertex, of the external branch that ends in the leaf i of T. Here we define that this external branch consists only of the leaf if there exists j ≠ i with ρ(i, j) = 0. Furthermore, the map that sends each leaf to the starting vertex of its external branch is distance-preserving from ℕ, endowed with the reduced semi-metric r, to T.

In this sense, the decomposition map decomposes the coalescent tree that is given by ρ into the external branches with lengths (v_i(ρ))_{i∈ℕ} and the subtree spanned by their starting vertices, whose mutual distances are given by r. More generally, any decomposed semi-ultrametric can be seen as a decomposed coalescent tree.

We call a semi-ultrametric ρ dust-free if v(ρ) = 0, that is, if all external branches in the associated tree have length zero so that there are no isolated leaves.
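The decomposition at the external branches and the retrieval of the original tree can be checked on a finite distance matrix. The sketch below assumes the convention that the external branch of leaf i has length half the minimal distance from i to any other leaf; the function names are our own:

```python
def decompose(rho):
    """Decompose a semi-ultrametric matrix rho into external branch
    lengths v and the reduced matrix r of the remaining subtree.

    Convention (an assumption of this sketch): v[i] is half the minimal
    distance from leaf i to any other leaf, and
    r[i][j] = rho[i][j] - v[i] - v[j] for i != j.
    """
    n = len(rho)
    v = [min(rho[i][j] for j in range(n) if j != i) / 2 for i in range(n)]
    r = [[0.0 if i == j else rho[i][j] - v[i] - v[j] for j in range(n)]
         for i in range(n)]
    return r, v

def recompose(r, v):
    """Retrieve the semi-ultrametric: rho(i, j) = r(i, j) + v_i + v_j, i != j."""
    n = len(r)
    return [[0.0 if i == j else r[i][j] + v[i] + v[j] for j in range(n)]
            for i in range(n)]

# three leaves: 1 and 2 coalesce first, 3 joins later
rho = [[0.0, 2.0, 6.0],
       [2.0, 0.0, 6.0],
       [6.0, 6.0, 0.0]]
r, v = decompose(rho)
assert recompose(r, v) == rho  # decomposition and retrieval are inverse
```

Here v = [1.0, 1.0, 3.0]: the external branches of the two close leaves are short, while the isolated leaf carries a long external branch.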

3 Sampling from marked metric measure spaces

3.1 Preliminaries

Recall the definitions of metric measure spaces, marked metric measure spaces, and their (marked) distance matrix distributions from Sections 1.1 and 1.2. Also recall that two metric measure spaces are said to be isomorphic if they have the same distance matrix distribution. We consider the set of isomorphy classes of metric measure spaces and endow it with the Gromov-weak topology, in which metric measure spaces converge if and only if their distance matrix distributions converge weakly. Greven, Pfaffelhuber, and Winter [GPW09] showed that this space is then Polish.

Analogously, two marked metric measure spaces are said to be isomorphic if they have the same marked distance matrix distribution. We consider the set of isomorphy classes of marked metric measure spaces and endow it with the marked Gromov-weak topology, in which marked metric measure spaces converge if and only if their marked distance matrix distributions converge weakly. This makes it a Polish space, as shown by Depperschmidt, Greven, and Pfaffelhuber [DGP11].

We use a common notation for the distance matrix distribution of a metric measure space and of its isomorphy class, and likewise for the marked distance matrix distribution of a marked metric measure space. The distance matrix distribution of a marked metric measure space is then the pushforward of its marked distance matrix distribution under the retrieval map from Section 2, in accordance with the definition in Section 1.2.

Remark 3.1.

We call a marked metric measure space (X, r, μ) dust-free if the probability measure μ is of the form m ⊗ δ_0 for a probability measure m on the Borel sigma algebra on X. Then the distance matrix distribution equals the distance matrix distribution of the metric measure space (X, r, m). We call (X, r, m) the metric measure space associated with the dust-free marked metric measure space.

Consider the group of finite permutations of ℕ. A finite permutation p acts on a distance matrix ρ by (pρ)(i, j) = ρ(p(i), p(j)), and on a marked distance matrix (r, v) by

p(r, v) = ((r(p(i), p(j)))_{i,j∈ℕ}, (v_{p(i)})_{i∈ℕ}).

A random variable, for instance with values in the distance matrices or the marked distance matrices, is called exchangeable if its distribution is invariant under the action of this group.
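This group action is easy to write out for finite restrictions; the following small sketch (names our own) permutes a distance matrix and a marked distance matrix:

```python
def permute(rho, p):
    """Action of a permutation on a (finite restriction of a) distance
    matrix: (p . rho)(i, j) = rho(p(i), p(j)); p is a list mapping indices."""
    n = len(rho)
    return [[rho[p[i]][p[j]] for j in range(n)] for i in range(n)]

def permute_marked(r, v, p):
    """Action on a marked distance matrix: permute the matrix and the marks."""
    return permute(r, p), [v[p[i]] for i in range(len(v))]

rho = [[0.0, 2.0, 6.0],
       [2.0, 0.0, 6.0],
       [6.0, 6.0, 0.0]]
swapped = permute(rho, [1, 0, 2])  # transpose the first two labels
assert swapped == rho  # this particular rho is symmetric under that swap
```

Exchangeability of a random matrix means that each such relabeling leaves its distribution (not necessarily each realization) unchanged.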

Remark 3.2.

Exchangeable random variables with values in the space of distance matrices or the space of marked distance matrices can be seen as jointly exchangeable arrays, see e. g. [Kal05]*Section 7. Also recall that the definition of exchangeability does not change when the group of finite permutations is replaced with the group of all bijections from ℕ to itself, as the finite restrictions determine the distribution of such a random variable.

Remark 3.3.

The coalescents associated by (1.1) with the exchangeable semi-ultrametrics on ℕ form a larger class of processes than the so-called exchangeable coalescents defined in e. g. Section 4.2.2 of Bertoin [Bertoin]. For example, the coalescent process associated with an exchangeable semi-ultrametric on ℕ need not be Markovian.

3.2 Tree-like marked metric measure spaces

We consider the space of ultrametric measure spaces, which is a closed subspace of the space of isomorphy classes of metric measure spaces, as shown in [GPW13]*Lemma 2.3. By the same argument, the corresponding space of marked metric measure spaces is a closed subspace of the space of isomorphy classes of marked metric measure spaces; it contains the marked metric measure spaces with ultrametric distance matrix distribution. Following e. g. [GPW09, GPW13] and Remark 1.1, we call the elements of the space of ultrametric measure spaces trees. Also the elements of the marked space may be called trees (as in Remark 10.8 below).

Proposition 3.4 below states that almost every realization of a random variable with the marked distance matrix distribution of a marked metric measure space in this space is the decomposition of a semi-ultrametric by the map from Section 2. As a consequence, the isomorphy class of a marked metric measure space in this space is determined already by its distance matrix distribution.

Proposition 3.4.

Let χ be a marked metric measure space whose distance matrix distribution is concentrated on the semi-ultrametrics, and let (r, v) be a random variable with the marked distance matrix distribution of χ. Then, a. s., (r, v) is the decomposition (in the sense of Section 2) of the semi-ultrametric (r(i, j) + v_i + v_j) 1{i ≠ j}.

The proof is deferred to Section 10.1.

Remark 3.5.

We call a marked distance matrix (r, v) dust-free if v = 0. It can be seen as a consequence of Proposition 3.4 that (the isomorphy class of) a marked metric measure space as in Subsection 3.2 is dust-free (as defined in Remark 3.1) if and only if a random variable with its marked distance matrix distribution is a. s. dust-free. In particular, a random variable with the distance matrix distribution of a metric measure space is a. s. dust-free.

3.3 Marked metric measure spaces from marked distance matrices

In this subsection, we define functions by which we construct a (marked) metric measure space from a (marked) distance matrix. An interpretation of these functions is given in Remark 3.8 below. In Remark 3.16, we state their role in the context of the ergodic decomposition.

First we define the function that maps a semi-metric ρ to the isomorphy class of the metric measure space given as follows: the underlying space is the metric completion of (ℕ, ρ), in the sense of Remark 2.1. The probability measure is defined as the weak limit of the uniform probability measures on {1, …, n} as n tends to infinity, if this weak limit exists. If the limit does not exist, we define the image arbitrarily. Furthermore, we consider the subset of distance matrices for which the weak limit in the definition above exists.

Analogously, we define the function that maps a marked distance matrix (r, v) to the isomorphy class of the marked metric measure space whose underlying space is the metric completion of the semi-metric space (ℕ, r), and whose measure is the weak limit of the uniform probability measures on the pairs (i, v_i), i ∈ {1, …, n}, if this weak limit exists; else we again define the image arbitrarily. We consider the subset of marked distance matrices for which the weak limit in the definition above exists.

We call the limiting probability measures in the definitions above also sampling measures.
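The sampling measure, where it exists, assigns to each ball the limiting frequency of the individuals it contains, and for a finite restriction this frequency can be read off the distance matrix. The two-family matrix below is our own hypothetical example:

```python
def ball_frequency(rho, center, radius, n):
    """Mass of the closed ball around `center` under the uniform measure
    on the first n individuals, read off a distance matrix rho."""
    return sum(1 for j in range(n) if rho[center][j] <= radius) / n

# hypothetical two-family tree: individuals of equal index parity are at
# distance 2 from each other, the two families are at distance 4
n = 200
rho = [[0.0 if i == j else (2.0 if (i - j) % 2 == 0 else 4.0)
        for j in range(n)] for i in range(n)]
# the closed ball of radius 2 around individual 0 is its family, and its
# empirical mass equals the asymptotic block frequency 1/2
assert ball_frequency(rho, center=0, radius=2.0, n=n) == 0.5
```

For the star-shaped coalescent from Section 1.1, by contrast, every small ball contains a single individual and these frequencies vanish, which is why no sampling measure exists there.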

Remark 3.6.

Let . Then implies . For a representative of , the isomorphy class of equals .

Proposition 3.7.

The two functions defined above are measurable.

The proof, in which we write these functions as limits of continuous functions, is deferred to Section 10.2.

Remark 3.8 (An interpretation of the two constructions).

For a semi-ultrametric ρ, the probability measure in the resulting ultrametric measure space charges each ball with the asymptotic frequency of the corresponding block of the coalescent which is associated with ρ by (1.1).

Similarly, for a marked distance matrix (r, v), consider the representative from the definition above. We consider the completion of the real tree associated with r as in Remark 2.2, and the extension of the isometry from Remark 2.2. Then the image measure charges each region of the tree with the asymptotic frequency of the integers that label the leaves that are the endpoints of external branches that begin in that region.

3.4 The sampling representation

The basic result in this paper is stated in Theorem 3.9 below. Here we consider an exchangeable random semi-ultrametric ρ on ℕ, and we assert the existence of a random variable with values in the space of isomorphy classes of marked metric measure spaces that has the following property: let ρ′ be a random variable whose conditional distribution given this isomorphy class is its distance matrix distribution. Then the random variables ρ and ρ′ have the same (unconditional) distribution. (In the language of the theory of random measures, this means that the distribution of ρ is equal to the first moment measure obtained by averaging the distance matrix distribution over the random isomorphy class.)

Theorem 3.9.

Let ρ be an exchangeable random semi-ultrametric on ℕ. Let χ be the isomorphy class of the marked metric measure space obtained by applying the map from Subsection 3.3 to the decomposition of ρ. Let ρ′ be a random variable whose conditional distribution given χ is the distance matrix distribution of χ. Then:

  1. The decomposition of ρ a. s. lies in the set on which the sampling measure in Subsection 3.3 exists.

  2. ρ and ρ′ are equal in distribution.

  3. Applying the map from Subsection 3.3 to the decomposition of ρ′ yields χ a. s.

Assertion 1 above states that for a typical realization of ρ and its decomposition, the sampling measure in the definition in Subsection 3.3 is the weak limit of the uniform probability measures therein. Assertion 3 states that the realization of the marked metric measure space can typically be reconstructed from the sampled distance matrix. We interpret the reconstruction map in terms of the ergodic decomposition in Remark 3.16. We prove Theorem 3.9 in Section 10.4. We give two proofs of assertion 1 of Theorem 3.9. In one of them, the de Finetti theorem yields the aforementioned sampling measure as the directing measure of an exchangeable sequence.
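The two-stage experiment behind Theorem 3.9 can be mimicked for finitely supported toy spaces: first draw a random marked metric measure space, then sample a marked distance matrix from it. All names and the two example spaces below are illustrative assumptions, not objects from the paper:

```python
import random

def sample_marked_distance_matrix(dist, mu, k, rng):
    """Second stage: draw k mu-iid pairs (x_i, v_i) from a finitely
    supported marked metric measure space (mu maps (point, mark) pairs
    to weights) and return ((dist(x_i, x_j))_{i,j}, (v_i)_i)."""
    pairs, weights = zip(*mu.items())
    draws = rng.choices(pairs, weights=weights, k=k)
    r = [[dist(x, y) for (y, _) in draws] for (x, _) in draws]
    v = [mark for (_, mark) in draws]
    return r, v

def two_stage_sample(k, rng):
    """First stage: choose a random (toy) marked metric measure space;
    second stage: sample a marked distance matrix from it."""
    dist = lambda x, y: 0.0 if x == y else 1.0
    spaces = [
        {("a", 0.5): 0.5, ("b", 0.5): 0.5},  # positive marks: dust
        {("a", 0.0): 0.5, ("b", 0.0): 0.5},  # all marks zero: dust-free
    ]
    return sample_marked_distance_matrix(dist, rng.choice(spaces), k, rng)

rng = random.Random(3)
r, v = two_stage_sample(4, rng)
# retrieve the exchangeable semi-metric rho(i, j) = r(i, j) + v_i + v_j
rho = [[0.0 if i == j else r[i][j] + v[i] + v[j] for j in range(4)]
       for i in range(4)]
```

The unconditional law of the retrieved matrix is a mixture over the first-stage choice, which is exactly the structure that the ergodic decomposition in Section 3.5 makes precise.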

Remark 3.10.

In the context of Theorem 3.9, the pairs (ρ, χ) and (ρ′, χ) are equal in distribution. Hence, the distance matrix distribution of χ is a regular conditional distribution of ρ given χ.

We also note the following uniqueness property which is proved in Section 10.3.

Proposition 3.11.

Let χ and χ′ be random isomorphy classes of marked metric measure spaces. Let ρ be a random variable whose conditional distribution given χ is the distance matrix distribution of χ, and let ρ′ be another random variable whose conditional distribution given χ′ is the distance matrix distribution of χ′. Then ρ and ρ′ are equal in distribution if and only if χ and χ′ are equal in distribution.

(In terms of first-moment measures, Proposition 3.11 says that ρ and ρ′ are equal in distribution if and only if the first moment measures associated with χ and χ′ coincide.)

The aim of the present paper is the treatment of the case with dust. In the dust-free case, we need not decompose the semi-ultrametric at the external branches. Instead, we can work directly with the map from Subsection 3.3 that constructs a metric measure space. Theorem 3.9 then reduces to the setting of metric measure spaces as follows:

Corollary 3.12.

Let ρ be an exchangeable random semi-ultrametric on ℕ that is a. s. dust-free. Let χ be the isomorphy class of the metric measure space obtained by applying the map from Subsection 3.3 to ρ. Let ρ′ be a random variable whose conditional distribution given χ is the distance matrix distribution of χ. Then:

  1. ρ a. s. lies in the set on which the sampling measure in Subsection 3.3 exists.

  2. ρ and ρ′ are equal in distribution.

  3. Applying the map from Subsection 3.3 to ρ′ yields χ a. s.

Proof.

This is immediate from Theorem 3.9 and Remarks 3.1, 3.5, and 3.6. ∎

Remark 3.13.

The assertions of Corollary 3.12 are closely related to Vershik [Vershik02]: Condition (4) in [Vershik02]*Theorem 5 is a necessary and sufficient condition for an exchangeable (and ergodic) random semi-metric to have the distance matrix distribution of a metric measure space. By Remark 3.5, the marked metric measure space in Theorem 3.9 is a. s. dust-free if and only if ρ is a. s. dust-free. Hence, for a semi-ultrametric ρ, condition (4) in [Vershik02] is equivalent to dust-freeness. In the dust-free case, the metric measure space associated with the marked metric measure space as in Remark 3.1 is the completion of a typical realization of the semi-metric, endowed with the probability measure given by the asymptotic block frequencies of the associated coalescent (as in Remark 3.8). This can also be deduced from [Vershik02]*Equation (9). Assertion 3 can be proved by Proposition 10.5 below, which is related to [Vershik02] as stated in Remark 10.6.

3.5 Interpretation as ergodic decomposition

In this subsection, we interpret the representation from Theorem 3.9 as the ergodic decomposition of an exchangeable distribution on the semi-ultrametrics on ℕ.

We consider the space of exchangeable probability distributions on the semi-ultrametrics on ℕ, and we endow it with the Prohorov metric, which is complete and separable. We will also consider the subspace of distance matrix distributions of marked metric measure spaces. These distributions and the corresponding isomorphy classes are in one-to-one correspondence by Proposition 3.4. Hence, also the elements of this subspace can be seen as trees.

We define the invariant sigma algebra as the sigma algebra that is generated by those Borel sets of distance matrices that are invariant under all finite permutations. A distribution is called ergodic (with respect to the action of the group of finite permutations) if it takes only the values 0 and 1 on the invariant sigma algebra.

Proposition 3.14.

The distance matrix distribution of a marked metric measure space is invariant and ergodic with respect to the action of the group of finite permutations.

Proof.

This is analogous to [Vershik02]*Lemma 7. An event in the invariant sigma algebra corresponds to a Borel set of distance matrices that is invariant under all finite permutations. Under the distance matrix distribution, the matrix is a function of an iid sequence, and from the ergodicity of an iid sequence under finite permutations, we obtain that every such invariant event has probability 0 or 1. ∎

Proposition 3.15.

The subspace of distance matrix distributions of marked metric measure spaces consists precisely of the ergodic distributions.

Proof.

By assertion 2 of Theorem 3.9, each exchangeable distribution is a mixture of elements of the subspace of distance matrix distributions. The assertion follows by Proposition 3.14 and as the ergodic distributions are extreme in the convex set of exchangeable distributions (see e. g. [Kal05]*Lemma A1.2). ∎

Remark 3.16.

Theorem 3.9 decomposes the distribution of an exchangeable random semi-ultrametric into ergodic components in the sense of e. g. Theorem A1.4 in Kallenberg [Kal05]. The function that maps a semi-ultrametric to the distance matrix distribution of the associated marked metric measure space is a decomposition map in the sense of Varadarajan [Var63]*Section 4, so that typically, the image of a realization is the ergodic component in whose support the realization lies. Note that this ergodic component is characterized by the isomorphy class of a marked metric measure space, and in the dust-free case also by the isomorphy class of a metric measure space. Some further references on the ergodic decomposition are given e. g. in [Kal05]*p. 475.

By the following proposition, the subspace of distance matrix distributions is Polish, which will be applied in [Conv].

Proposition 3.17.

The subspace of distance matrix distributions of marked metric measure spaces is closed in the space of exchangeable probability distributions.

Proof.

Let ρ_n, n ∈ ℕ, be exchangeable random semi-ultrametrics that converge in distribution to some random semi-ultrametric ρ. Assume that for each n, the distribution of ρ_n lies in the subspace of distance matrix distributions. Then ρ_n has ergodic distribution by Proposition 3.14. Lemma 7.35 of [Kal05] says that ρ_n is dissociated, which means that for any disjoint finite index sets, the corresponding restrictions of the matrix are independent. As this property is preserved under the limit in distribution, it also holds for ρ, and another application of Lemma 7.35 of [Kal05] yields that ρ has ergodic distribution. The assertion follows by Proposition 3.15. ∎

4 Application to tree-valued processes

Using the function from Section 3.3, we map a Markov process whose states are exchangeable random semi-ultrametrics to a process with values in the space of isomorphy classes of marked metric measure spaces. At each time, the state of the image process is the marked metric measure space from the representation (Theorem 3.9) of the state of the semi-ultrametric-valued process. We also consider the process of the distance matrix distributions of these marked metric measure spaces. In the dust-free case, we can also work with isomorphy classes of metric measure spaces and the corresponding map as in Corollary 3.12.

In the proof of Theorem 4.1 below, we use the criterion of Rogers and Pitman [RP81]*Theorem 2 to show that the image processes are also Markovian. A martingale problem for the -valued process or the -valued process yields a martingale problem for the respective image process.

The so-called polynomials and marked polynomials, introduced in [GPW09, DGP11], have been used as domains of martingale problems in e. g. [GPW13, DGP12, DGP13]. We recall them here, adapting the definition to our present use of the marks. The uniform continuity of the derivative in the definitions of and below will turn out to be useful in [Conv]. For , we write for , and we denote by the restriction from to , . We denote also by the restriction from to , .

Let denote the set of bounded differentiable functions with bounded uniformly continuous derivative. For , we denote also by the function , and we call the function , the polynomial associated with . (Here and at other places, we use the notation for a measure and an integrable function , and we view measures also as functionals on spaces of integrable functions.)

Similarly, we denote by the set of bounded differentiable functions with uniformly continuous derivative. For , we denote also by the function , and we call the function , the marked polynomial associated with . (Usually, the argument of a function will be a marked distance matrix.) We write and . We denote the set of polynomials by

the set of marked polynomials by

and we define the set of test functions
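In the conventions of [GPW09, DGP11], a polynomial averages a test function of the pairwise distances of an iid sample drawn from the sampling measure of the space. The following Monte Carlo sketch evaluates such a polynomial on a finite toy metric measure space; all names and the uniform sampling measure are our illustrative choices, not notation from this paper.

```python
import random

def polynomial(points, dist, phi, n, samples=10000, rng=None):
    """Monte Carlo evaluation of a polynomial in the sense of [GPW09]:
    the mean of phi applied to the pairwise-distance matrix of n points
    drawn iid from the (here: uniform) sampling measure."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(samples):
        xs = [rng.choice(points) for _ in range(n)]
        dmat = [[dist(a, b) for b in xs] for a in xs]
        total += phi(dmat)
    return total / samples

# Toy metric measure space: two points at distance 1, uniform measure.
pts = [0, 1]
dist = lambda a, b: abs(a - b)
# phi = the single pairwise distance of a sample of size n = 2.
est = polynomial(pts, dist, lambda d: d[0][1], n=2)
print(round(est, 2))  # expected to be close to 0.5
```

With this two-point space and this φ, the polynomial equals the probability 1/2 that the two sample points differ.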

For a metric space , let denote the set of bounded measurable functions . For a subset and an operator , we mean by a solution of the martingale problem a progressive -valued process such that for every , the process

is a martingale with respect to the filtration induced by , cf. Ethier and Kurtz [EK86]*p. 173.

Theorem 4.1.

Let be a -valued time-homogeneous Markov process. Assume that for each , the random variable is exchangeable. Let and be operators. Define the -valued process , the -valued process , and the -valued process . Then the following two assertions hold:

  1. The process is Markovian. If the -valued process solves the martingale problem , then solves the martingale problem , given by

    for all with associated polynomial , and all .

  2. The process is Markovian. If solves the martingale problem , then solves the martingale problem , given by

    for all and , and the function , .

Assertion 3 below holds under the additional assumption that is a. s. dust-free for each .

  3. The process is Markovian. If solves the martingale problem , then solves the martingale problem , given by

    for all with associated polynomial , and all .

The proof of Theorem 4.1 can be found in Section 10.5.

Remark 4.2.

The process in Theorem 4.1 is Markov. This follows as is Markov by assumption and as is determined by via so that

for all and bounded measurable . This is an example of Dynkin’s criterion [Dynkin]*Theorem 10.13 for a function of a Markov process to be Markov.
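Dynkin’s criterion concerns when a function of a Markov process is again Markov. In the finite-state setting, the analogous condition is strong lumpability of a transition matrix, which can be checked directly; the sketch below uses our own names and a toy chain, purely for illustration.

```python
def is_lumpable(P, classes):
    """Check strong lumpability of a finite transition matrix P (list of rows):
    within each class, every state must have the same total transition
    probability into each class. If so, the class-valued image of the chain
    is again Markov -- the finite-state analogue of Dynkin's criterion."""
    for c in classes:
        for d in classes:
            probs = {round(sum(P[i][j] for j in d), 12) for i in c}
            if len(probs) > 1:
                return False
    return True

# States 1 and 2 are indistinguishable from the viewpoint of the classes
# {0} and {1, 2}, so lumping them preserves the Markov property.
P = [[0.5, 0.3, 0.2],
     [0.4, 0.1, 0.5],
     [0.4, 0.5, 0.1]]
print(is_lumpable(P, [[0], [1, 2]]))   # True
print(is_lumpable(P, [[0, 1], [2]]))   # False: rows 0 and 1 disagree on {2}
```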

Remark 4.3.

In Theorem 4.1, if is dust-free for some , then is (by Theorem 3.9 and Remark 3.5, the isomorphy class of a) dust-free marked metric measure space, is the (isomorphy class of the) metric measure space associated (as in Remark 3.1) with (any representative of) , and we have . The process is relevant only in the dust-free case: if is not dust-free, then is just the arbitrary element of from the definition of in Section 3.3.

Remark 4.4.

In Theorem 4.1, we characterize only versions of the processes , , and . That is, we do not make assertions on the full sample paths but only on the states at countably many times.

From Theorem 3.9, we obtain (and in the dust-free case also by Corollary 3.12) only for a fixed time (or countably many ) on an event of probability . This means that the uniform probability measures on the starting vertices of the external branches that end in the first leaves of the tree associated with the semi-ultrametric are shown to converge only at countably many times on an event of probability . For , a realization can be considered as an ergodic component. At the other times , we do not exclude that is just the arbitrary element of with probability measure in the definition of in Section 3.3.

Theorem 4.1 yields in particular the semigroups of the processes , , and . Also the martingale problems in Theorem 4.1 characterize only versions of these processes.

For the particular example of the process in Sections 5–9, it is shown in [Pathw] that (and in the dust-free case) also holds simultaneously for all on an event of probability (see Theorems 3.1(i) and 3.10(i), and Remarks 4.4 and 4.13 in [Pathw]). This allows the construction of the full sample paths (Section 4 in [Pathw]). These results are obtained in [Pathw] by techniques specific to the lookdown model.

Remark 4.5.

Theorem 4.1 is an example of Markov mapping. To show that the image processes , , and are Markovian, we use the simple criterion of Rogers and Pitman [RP81]*Theorem 2, as this criterion is formulated in terms of the abstract semigroups of the processes, which fits our assumption that is a general time-homogeneous Markov process whose states are exchangeable.

A criterion for the Markov property of the image processes in terms of martingale problems is given in Corollary 3.5 of Kurtz [Kurtz98], which requires more assumptions, including uniqueness for the martingale problem for and existence of solutions of the martingale problems for the image processes. Corollary 3.5 of [Kurtz98] would also yield uniqueness for the martingale problems for the image processes.

In the present paper, we use martingale problems only to provide additional characterizations of the processes under consideration. In Proposition 7.1, we show uniqueness for the martingale problems for the image processes directly by duality for the concrete examples from Section 7.

Remark 4.6.

In particular in Sections 8–9 and 11.3, and in [Conv], we need convergence determining (or at least separating) sets of test functions. As in [Lohr13, GPW09, DGP11], the sets and are convergence determining in and , respectively. The argument from [Lohr13]*Corollary 2.8 also applies for : The algebra generates the product topology on . By a theorem due to Le Cam, see e. g. [Lohr13]*Theorem 2.7 and the references therein, it follows that is convergence determining in . Hence, generates the weak topology on . As is an algebra (see [GPW09, DGP11]) and by definition of , also is an algebra. Again by [Lohr13]*Theorem 2.7, it follows that is convergence determining in .

Remark 4.7.

The set of polynomials is separating on . This follows from Propositions 3.4 and 10.5 as in the proof of Proposition 3.11. Nevertheless, we work with the space of test functions on , as is not convergence determining; a counterexample can be constructed from [GPW09]*Example 2.12(ii).

5 Genealogy in the lookdown model

In this section, we define a Markov process to which we will later apply Theorem 4.1. In Subsection 5.1, we read off a realization of such a process from a population model that is driven by a deterministic point measure . In Subsection 5.2, we let be a Poisson random measure, and we study further properties of in Subsection 5.3. We remark that for the lookdown model of Donnelly and Kurtz [DK96], the process of the evolving genealogical distances and its martingale problem are considered in Remark 2.20 of Greven, Pfaffelhuber, and Winter [GPW13].

5.1 The deterministic construction

We denote by the set of partitions of . We endow with the topology in which a sequence of partitions converges if and only if the sequences of their finite restrictions converge. For , we denote by the set of partitions of . We denote the restriction map from to by , that is, . Recall that other restriction maps, e. g. from are also denoted by . Moreover, we denote by the partition in that consists of singletons only, and by the set of partitions of in which the first integers are not all in different blocks. Furthermore, for , we denote by the enumeration of the blocks of with . For , we denote by the integer that satisfies .
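To make this partition notation concrete, here is a small sketch of the restriction map and of the enumeration of blocks by their minimal elements; the function names are ours, for illustration only.

```python
def restrict(partition, n):
    """Restrict a partition (list of blocks, each a set of integers) to
    {1, ..., n}, dropping blocks that become empty."""
    blocks = [b & set(range(1, n + 1)) for b in partition]
    return [b for b in blocks if b]

def block_index(partition, i):
    """Return the index (1-based) of the block containing i, with blocks
    ordered by their minimal elements."""
    ordered = sorted(partition, key=min)
    for k, b in enumerate(ordered, start=1):
        if i in b:
            return k
    raise ValueError(f"{i} not covered by the partition")

# Example: the partition {{1, 3}, {2, 5}, {4}} of {1, ..., 5}.
p = [{1, 3}, {2, 5}, {4}]
print(restrict(p, 3))     # [{1, 3}, {2}]  (the block {4} disappears)
print(block_index(p, 5))  # 2  (5 lies in {2, 5}, the block with 2nd-smallest minimum)
```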

We use a lookdown model as the population model. In this model, there are countably infinitely many levels which are labeled by , and each level is occupied by one particle at each time . The particles undergo reproduction events which are encoded by a simple point measure on . A simple point measure is a purely atomic measure whose atoms all have mass . Let us impose a further assumption on , namely

(5.1)

The interpretation of a point of is that the following reproduction event occurs: At time , the particles on the levels with are removed. At time , for each , the particle that was on level at time assumes level and has offspring on all other levels in . Thus, the level of a particle is non-decreasing as time evolves. Condition (5.1) means that for each , only finitely many particles jump away from the first levels in bounded time intervals.
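Under this reading of a reproduction event, the particle occupying post-event level j descends from the pre-event particle whose index is that of the block containing j, with blocks ordered by their minimal elements. A toy implementation for finitely many levels (our names, illustrative only):

```python
def post_event_ancestor_levels(partition_blocks, n):
    """For a reproduction event encoded by a partition of {1, ..., n}
    (blocks given as sets), return for each post-event level j the
    pre-event level of the particle now occupying j: the index of the
    block containing j, blocks ordered by their minimal elements."""
    ordered = sorted(partition_blocks, key=min)
    parent = {}
    for k, b in enumerate(ordered, start=1):
        for j in b:
            parent[j] = k
    return [parent[j] for j in range(1, n + 1)]

# Example: the event {{1, 3}, {2}, {4, 5}} on 5 levels.
# The particle at level 1 keeps its level and puts offspring on level 3;
# the particle formerly at level 3 is pushed up to level 4, consistent
# with levels being non-decreasing as time evolves.
print(post_event_ancestor_levels([{1, 3}, {2}, {4, 5}], 5))  # [1, 2, 1, 3, 3]
```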

For all , each particle at time has an ancestor at time . We denote by the level of the ancestor at time of the particle on level at time such that the maps and are càdlàg. Then is well-defined as is non-increasing.

Remark 5.1.

We will use that the trajectories of the particles are non-crossing in the following sense: For any times and particles on levels at time , particle is still alive if particle is still alive, in which case the particles and occupy levels . In particular, if infinitely many particles at time survive until time , then all particles at time survive until time .

We are interested in the process of the genealogical distances between the particles that live at the respective times. Let . (We can assume here, but differentiability will be more elementary in the larger space; this is a matter of taste.) We interpret as the genealogical distance between the particles on levels and at time . We define the genealogical distance between the particles on levels and at time by

In words, the genealogical distance between two particles at a fixed time is twice the time back to their most recent common ancestor, if such an ancestor exists, else it is given by the genealogical distance between the ancestors at time zero.
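In a toy model with finitely many discrete reproduction events, this definition can be traced out directly. The event encoding below and the treatment of the no-common-ancestor case (initial distance of the ancestors plus growth at rate 2, matching "twice the time back") are our illustrative conventions, not the paper's notation:

```python
def genealogical_distance(events, t, i, j, rho0):
    """Toy discrete-event version of the distance just defined. `events` is a
    list of (time, parent_map) pairs, where parent_map sends each post-event
    level to the pre-event level of its ancestor; `rho0` holds the initial
    distances on sorted level pairs."""
    if i == j:
        return 0.0
    # Trace the two ancestral lineages backwards through the events in (0, t].
    for time, parent in sorted(events, key=lambda e: e[0], reverse=True):
        if time > t:
            continue
        i, j = parent.get(i, i), parent.get(j, j)
        if i == j:
            return 2.0 * (t - time)  # twice the time back to the MRCA
    # No common ancestor after time 0: initial distance of the ancestors,
    # plus rate-2 growth over [0, t] (one convention consistent with the text).
    return rho0[(min(i, j), max(i, j))] + 2.0 * t

# One event at time 1.0: the particle at level 1 reproduces onto level 2,
# pushing the old level-2 particle up to level 3.
events = [(1.0, {2: 1, 3: 2})]
rho0 = {(1, 2): 0.5}
print(genealogical_distance(events, 1.5, 1, 2, rho0))  # 1.0 (MRCA at time 1.0)
print(genealogical_distance(events, 1.5, 1, 3, rho0))  # 3.5 = 0.5 + 2 * 1.5
```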

Remark 5.2.

If , then for each . Indeed, a semi-metric on is a semi-ultrametric if and only if for each , an equivalence relation on is given by . If this property holds for , then the definition of readily yields that it also holds for .
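The equivalence-relation characterization in Remark 5.2 corresponds to the strong triangle inequality ρ(i,j) ≤ max(ρ(i,k), ρ(k,j)): under it, the relation "distance at most c" is transitive for every threshold c (reflexivity and symmetry are automatic). A finite-matrix check, with our own names:

```python
import itertools

def is_semi_ultrametric(rho, n):
    """Check the strong triangle inequality rho(i,j) <= max(rho(i,k), rho(k,j))
    on {1, ..., n}; rho is a dict keyed by sorted pairs, with rho(i,i) = 0."""
    d = lambda i, j: 0.0 if i == j else rho[(min(i, j), max(i, j))]
    return all(d(i, j) <= max(d(i, k), d(k, j))
               for i, j, k in itertools.product(range(1, n + 1), repeat=3))

rho = {(1, 2): 1.0, (1, 3): 4.0, (2, 3): 4.0}
print(is_semi_ultrametric(rho, 3))      # True
rho_bad = {(1, 2): 1.0, (1, 3): 4.0, (2, 3): 2.0}
print(is_semi_ultrametric(rho_bad, 3))  # False: d(1,3) > max(d(1,2), d(2,3))
```

Note the two-largest-are-equal pattern in the valid example: in an ultrametric, every triangle is isosceles with the two largest sides equal.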

We also describe the process in a more formal way which will be useful for the description by martingale problems in Section 5.2. With each partition we associate a transformation , which we also denote by , by

(5.2)

Here denotes the integer such that is in the -th block, when blocks are ordered according to their minimal elements. Note that for each reproduction event encoded by a point , the corresponding jump of the process can be described by

(5.3)

In particular, if , and acts as the identity on . By assumption (5.1), there are only finitely many reproduction events in bounded time intervals that result in a jump of the process . Between such jumps, the genealogical distances grow linearly with slope , that is, for distinct and with .
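The jump map of (5.2) can be sketched for finitely many levels: the new distance between i and j is the old distance between the indices of the blocks containing i and j (blocks ordered by their minimal elements), so levels merged by the event end up at distance zero. Names are ours:

```python
def apply_partition(rho, blocks, n):
    """Jump map in the spirit of (5.2): the new distance between i and j is
    the old distance between the block indices p(i), p(j), blocks ordered by
    minimal elements. rho is a dict on sorted pairs of {1, ..., n}."""
    ordered = sorted(blocks, key=min)
    p = {i: k for k, b in enumerate(ordered, start=1) for i in b}
    def d(a, b):
        return 0.0 if a == b else rho[(min(a, b), max(a, b))]
    return {(i, j): d(p[i], p[j])
            for i in range(1, n + 1) for j in range(i + 1, n + 1)}

rho = {(1, 2): 3.0, (1, 3): 5.0, (2, 3): 5.0}
# Event merging levels 1 and 2: blocks {1, 2}, {3}; so p = {1: 1, 2: 1, 3: 2}.
print(apply_partition(rho, [{1, 2}, {3}], 3))
# {(1, 2): 0.0, (1, 3): 3.0, (2, 3): 3.0}
```

If each of the first n integers forms its own block, then p(i) = i for those levels and the map acts as the identity there, consistent with the remark above.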

Remark 5.3.

Schweinsberg [Schw00] constructs the -coalescent analogously from a point measure. The population model described in this section can be seen as the population model that underlies the dual flow of partitions in Foucart [Foucart12]. A lookdown model with a reproduction mechanism that is different in the case with simultaneous multiple reproduction events is studied by Birkner et al. [BBMST09]. In this model, a partition encodes the following reproduction event: Let be the increasing enumeration of the integers that either form singletons or are non-minimal elements of blocks of . For each , the particle on level moves to the level given by the -th lowest singleton of if has at least singletons, else the particle is removed. For each non-singleton block , the particle on level remains on its level and has one offspring on each level in . Here the trajectories of the particles may cross: Consider a partition such that and are in the same block, forms a singleton, and is the minimal element of a non-singleton block. If the reproduction event encoded by occurs at time , then there exists