
Markov branching in the vertex splitting model



July 9, 2019




Sigurdur Örn Stefánsson



NORDITA, Roslagstullsbacken 23, SE-106 91 Stockholm, Sweden


sigste@nordita.org

Abstract. We study a special case of the vertex splitting model which is a recent model of randomly growing trees. For any finite maximum vertex degree , we find a one parameter model, with parameter which has a so–called Markov branching property. When we find a two parameter model with an additional parameter which also has this feature. In the case , the model bears resemblance to Ford’s α–model of phylogenetic trees and when it is similar to its generalization, the αγ–model. For , the model reduces to the well known model of preferential attachment.

In the case , we prove convergence of the finite volume probability measures, generated by the growth rules, to a measure on infinite trees which is concentrated on the set of trees with a single spine. We show that the annealed Hausdorff dimension with respect to the infinite volume measure is . When the model reduces to a model of growing caterpillar graphs in which case we prove that the Hausdorff dimension is almost surely and that the spectral dimension is almost surely . We comment briefly on the distribution of vertex degrees and correlations between degrees of neighbouring vertices.

1. Introduction

Random trees are an important tool in many branches of science, ranging from quantum gravity models [13, 25] to biological applications [4, 11], to name a few. In this paper we introduce and study a new model of randomly growing, rooted, planar trees which we refer to as the attachment and grafting model, or ag–model for short. It is a special case of the vertex splitting model, recently introduced in [9]. The vertex splitting model is a modification of a model of growing trees, encountered in the theory of random RNA folding [11].

The ag–model is described informally below and a more detailed description is given in Section 2. The root of the tree is simply a marked vertex of degree one and the planarity condition means that edges are ordered around vertices. The parameters of the model are and denotes the maximum degree of vertices in the trees. When , and are the only active parameters of the model but when , also plays a role. Define

(1.1)

and

(1.2)

The growth rules can be explained as follows. Call the edges which are adjacent to vertices of degree one (besides the root) leaves and call the other edges internal edges. In each discrete time step a new edge is added by randomly selecting

  • a vertex of degree with relative probability and attaching a new edge to it (the possibilities of attaching chosen uniformly at random) or

  • an inner edge with relative probability and dividing it into two edges by grafting a vertex to it or

  • a leaf with relative probability and dividing it into two edges by grafting a vertex to it,

see Fig. 1 (left). It is from these two operations, ’attachment’ and ’grafting’, that the model gets its name.

Figure 1. Growth rules of the ag–model (left) and the αγ–model (right). The root is indicated by a circled vertex.
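The growth rules above can be sketched in code. The weights (1.1)–(1.2) depend on the model parameters and are not reproduced here, so `attach_weight`, `W_INTERNAL` and `W_LEAF` below are hypothetical placeholders, and the planar ordering of edges around vertices is ignored; this is only a minimal sketch of the two operations.

```python
import random

# Hypothetical stand-ins for the weights (1.1)-(1.2); the paper's actual
# weights depend on the model parameters and are not reproduced here.
def attach_weight(d, D):
    return float(d) if d < D else 0.0  # linear kernel, capped at max degree D

W_INTERNAL = 0.5  # hypothetical grafting weight per internal edge
W_LEAF = 0.3      # hypothetical grafting weight per leaf edge

def grow_step(adj, root, D, rng):
    """One growth step: attach a new leaf edge to a vertex, or graft a
    degree-two vertex onto an edge (dividing it into two edges)."""
    moves, weights = [], []
    for v, nbrs in adj.items():
        if v != root and len(nbrs) < D:
            moves.append(("attach", v))
            weights.append(attach_weight(len(nbrs), D))
    for u in adj:
        for v in adj[u]:
            if u < v:  # visit each edge once
                # a leaf is an edge meeting a degree-one vertex other than the root
                leaf = (len(adj[u]) == 1 and u != root) or (len(adj[v]) == 1 and v != root)
                moves.append(("graft", (u, v)))
                weights.append(W_LEAF if leaf else W_INTERNAL)
    kind, target = rng.choices(moves, weights=weights)[0]
    w = max(adj) + 1
    if kind == "attach":
        adj[w] = {target}
        adj[target].add(w)
    else:
        u, v = target
        adj[u].discard(v)
        adj[v].discard(u)
        adj[w] = {u, v}
        adj[u].add(w)
        adj[v].add(w)

rng = random.Random(1)
adj = {0: {1}, 1: {0}}  # root 0 joined to vertex 1 by a single edge
for _ in range(50):
    grow_step(adj, root=0, D=4, rng=rng)
```

Each step adds exactly one edge, so after 50 steps the tree has 51 edges; the root keeps degree one throughout, in line with the growth rules.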

When there is no grafting and the model reduces to the model of preferential attachment (see e.g. [2, 10]) with linear attachment kernel . When there is no attaching and we simply have a growing linear graph.

The ag–model closely resembles the αγ–model which was introduced in [8]. In the αγ–model a new leaf is added in each time step by randomly selecting

  • a vertex of degree with relative probability and attaching a new edge to it or

  • an inner edge with relative probability and grafting a leaf to it or

  • a leaf with relative probability and grafting a leaf to it

where , see Fig. 1 (right). In the case , this model reduces to Ford’s α–model of growing binary trees [17] in which case it resembles the ag–model with . When it is similar to the ag–model with .

The αγ–model and the ag–model both have a property referred to as Markov branching which was introduced by Aldous in [4]. This means, crudely, that the subtrees below a given vertex have the same distribution as the whole tree, see Section 3. This feature makes the models much simpler to treat, since one can easily write recursion equations for many observables. Furthermore, recent results by Haas and Miermont provide a recipe for taking the scaling limit of such models [18].

The main results of this paper are the following. For , as the size of the trees goes to infinity, the measure concentrates on the set of trees with exactly one non-backtracking path from the root to infinity, referred to as an infinite spine. The emergence of a unique infinite spine is known in other models of random trees, an example being the uniform planar tree and modifications of it, see e.g.  [14]. Similar effects are also observed in triangulation models in quantum gravity, where exactly one large “universe” appears with finite baby-universes attached, see e.g. [24]. We also establish that the average volume of a graph ball of radius in the infinite trees grows like . The exponent is referred to as the Hausdorff dimension and denoted by . This power law behaviour is interesting since it is often the case that models of growing trees exhibit an exponential volume growth. This is e.g. the case in the preferential attachment model [2] which in fact corresponds to the case as was noted before. Furthermore, since , the full range of exponents, , is realized and the ag–model is one of the few known natural tree models having this feature.

1.1. Relation to the vertex splitting model

We now briefly introduce the vertex splitting model and show how the ag–model can be seen as a special case. The parameters of the vertex splitting model are given by a set of non–negative weights , with , where (or ) is a fixed number which denotes the maximum vertex degree in the trees. These weights are referred to as partitioning weights and the so–called splitting weights are defined by

(1.3)

Starting from a fixed finite planar tree, in each discrete time step a new edge is added as follows.

  (a) Select a given vertex of degree with relative probability .

  (b) Randomly partition the edges which contain into two disjoint sets of adjacent edges: of size and of size , with probability . For a given , all such partitionings are taken to be equally likely.

  (c) Move all edges in from to a new vertex and join and by a new edge.

We allow a small generalization of the above growth rule: single out a vertex of degree one in the initial tree and call it the root and modify (a) in such a way that the root is selected with relative probability . Each time the root is split we define the new vertex of degree one to be the root. The vertex splitting model has very general growth rules and it includes many other models of random trees as special cases or limiting cases, see [9, 29] for more detailed discussion.
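The three steps above can be sketched in code. The splitting and partitioning weights are elided in the text, so `splitting_weight` and `part_size` below are hypothetical placeholders; the sketch also ignores the planar contiguity requirement on the moved edge set and the special rule for the root.

```python
import random

def split_vertex(adj, splitting_weight, part_size, rng):
    """One vertex-splitting step (sketch).  splitting_weight(i) and
    part_size(i, rng) are hypothetical placeholders for the model's
    splitting and partitioning weights; planar contiguity of the moved
    edge set and the special root rule are ignored."""
    verts = list(adj)
    v = rng.choices(verts, weights=[splitting_weight(len(adj[u])) for u in verts])[0]
    k = part_size(len(adj[v]), rng)  # size of the set moved to the new vertex
    moved = rng.sample(sorted(adj[v]), k)
    w = max(adj) + 1
    adj[w] = set(moved)
    for u in moved:                  # re-attach the moved edges to w
        adj[u].discard(v)
        adj[u].add(w)
    adj[v] -= set(moved)
    adj[v].add(w)                    # join v and w by a new edge
    adj[w].add(v)

rng = random.Random(2)
adj = {0: {1}, 1: {0, 2, 3}, 2: {1}, 3: {1}}  # small starting tree
for _ in range(30):
    split_vertex(adj, splitting_weight=lambda i: float(i),
                 part_size=lambda i, r: r.randint(0, i), rng=rng)
```

Each step adds one vertex and one edge, so the result remains a tree; here the starting tree has 3 edges and the final one 33.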

The ag–model can be recovered from the vertex splitting model by assigning the weight to splitting the root and choosing the nonzero partitioning weights as follows

(1.4)

The splitting weights are then

(1.5)

and if . Note that in the case and , the weight is negative. We will however include this case in the ag–model since the total weight of any transition is still positive.

A similar relationship between the αγ–model and the vertex splitting model was discussed in [9]. However, in that case one needs to take which means that comparison of results in the two models is not necessarily reliable. The ag–model is therefore more interesting as a special case and, due to its simplicity yet non–triviality, it serves as a good testing ground for non–rigorous results obtained in the vertex splitting model.

1.2. Outline

The paper is organized as follows. In Section 2 we define rooted planar trees and introduce a convenient notation for representing random trees. Thereafter we give a proper definition of the ag–model which was described informally above. In Section 3 we show that the model has the Markov branching property and we calculate its first split distribution. In Section 4 we show, using methods from [28], that the finite volume probability measures generated by the random growth operation converge to a measure on the set of infinite trees. Furthermore, we characterize the infinite volume measure. In Section 5 we calculate the annealed Hausdorff dimension with respect to the infinite volume measure and in a certain special case, we calculate the almost sure Hausdorff and spectral dimensions. The results we obtain support certain scaling assumptions which were made in the vertex splitting model. We conclude by commenting on the distribution of the degrees of vertices in the trees and correlations between degrees of neighbouring vertices by recalling results from [9]. In order to improve readability, proofs of theorems and lemmas are in most cases collected in Appendix B.

2. Random planar trees

In this section we begin by defining the set of rooted, planar trees and endow it with a metric. Then we define a convenient notation for representing random trees and introduce the model which will be studied in the paper.

Start with a tree graph which has vertices of finite or countably infinite degree and at least one vertex of degree one. By convention we define the root of to be a vertex of degree one and we label the unique nearest neighbour of the root by . The rest of the vertices are labeled in the following recursive way. The children of a given vertex in the tree (apart from ) with label are labeled with sequences , see Fig. 2. A rooted planar tree is a tree along with such a lexicographical labeling. From here on, we will always work with rooted, planar trees unless otherwise stated and will simply refer to them as trees. We denote the set of trees with edges by and the set of all trees, finite and infinite, by .

Figure 2. Left: An example of a rooted, planar tree and a left subtree (boxed in gray). Right: The graph ball and the left ball (boxed in gray). The root is indicated by a circled vertex.

A tree is said to be a left subtree of if it is a connected subtree of which contains and has the properties that if it contains a vertex with label then it contains all vertices with labels with , see Fig. 2. Let be the graph ball of radius centered on the root of . We define the left ball of radius , , as the maximal left subtree of with vertices of degree no greater than , see Fig. 2. A metric is defined on by

(2.1)

The metric was first introduced in [21] and we refer to this paper for some properties of the metric space .

Define the root joining operation in the following way. Given trees , , let be the tree obtained by (I) identifying the roots of and labeling them by (1), (II) replacing the first element ’’ of each label in by ’’, , and (III) connecting a new root to the vertex (1). If we may omit the symbol, see Fig. 3. Note that in general

for a permutation of .

Figure 3. The root joining operation.
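The labeling convention and the root joining operation can be sketched in a few lines, representing a tree by the set of labels of its non-root vertices (the root's neighbour carries the label `(1,)`). The relabeling rule below is my reading of steps (I)–(III), so treat it as an illustrative sketch rather than the definitive operation.

```python
def join(*trees):
    """Root joining (sketch): the identified roots become the vertex (1,),
    and tree t_j is re-rooted below it as the j-th subtree."""
    out = {(1,)}
    for j, t in enumerate(trees, start=1):
        for label in t:
            # a label (1, a2, a3, ...) in t_j becomes (1, j, a2, a3, ...)
            out.add((1, j) + label[1:])
    return out

edge = {(1,)}         # the single-edge tree: root plus the vertex (1,)
t = join(edge, edge)  # a root whose child (1,) has the two children (1,1), (1,2)
```

The number of edges of a tree in this representation equals the number of labels, so joining two single-edge trees gives a tree with three edges.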

Let be a probability distribution on . We define a random tree by the canonical probability generating function

(2.2)

The above sum of trees and multiplication of trees by a scalar are formal and provide a convenient way of storing information on the probability measure .

2.1. The ag–model

Using the notation introduced above, we now define the ag–model which was described informally in the introduction. Let

(2.3)

We introduce a growth operation in the following recursive way. Let be the single edge tree and define . For a tree define the random tree

where denotes the number of edges in . The growth operation is equivalent to the growth rule which was described informally in Fig. 1 (left) in the introduction. The ag–model is defined recursively as the random tree which satisfies and

(2.5)

We denote the probability measure on , generated by this growth process by .

3. Markov branching

A sequence of random trees is said to satisfy a Markov branching property, or to be Markovian self–similar, if there exist functions , such that for all

(3.1)

The functions are referred to as the first split distribution of . We use the convention that if any of the arguments equals zero.

Proposition 3.1.

The random trees , defined by (2.5), have the Markov branching property with a first split distribution which satisfies ,

(3.2)

for and

(3.4)

where .

Proof.

We use induction on . clearly satisfies (3.1) for . Assume it satisfies (3.1) for some . Then

This shows that (3.1) also holds for and we conclude that it holds for all . ∎

The recursions for the first split distribution in Proposition 3.1 can be solved with straightforward methods. We state the result in the following proposition which can easily be proved by induction. The method for finding the solution is described in Appendix A.

Proposition 3.2.

The first split distribution of the sequence is given by

where .

We will repeatedly use the following standard, easily derived identities when we work with the above first split distribution [1]

(3.6)

and

(3.7)

4. Convergence of the finite volume measures

In this section we show that the measures generated by the growth process converge weakly to a measure on the set of infinite trees. By weak convergence we mean that for all bounded functions which are continuous in the topology generated by the metric

(4.1)

We will call an infinite non–backtracking path from the root a spine. Let be a tree with exactly one spine and let be a vertex on the spine () with degree . We call the finite subtrees of which are attached to the vertex outgrowths from the spine.

Theorem 4.1.

Let . The measures , viewed as probability measures on , converge weakly, as , to a probability measure which is concentrated on the set of trees that have exactly one spine. The degrees of the vertices on the spine are independently distributed by

(4.2)

The outgrowths from the spine are finite with probability one and outgrowths from different vertices are independently distributed. If a vertex on the spine has degree and are the outgrowths from to the left of the spine (in that order) and are the outgrowths from to the right of the spine (in that order), then their joint distribution is

(4.3)

where and .

The above theorem is proved in Appendix B.

We point out that the distributions are independent of how many of the outgrowths are to the left or to the right of the spine. For an ordered sequence of outgrowths, there are different ways to arrange them around the spine.

Below, we comment on some special cases. When , for as it should be. In the case and the trees are generic, i.e.  is a critical Galton–Watson process conditioned to have edges. This follows from the fact that the first split distributions can, in this case, be written as

(4.4)

with

(4.5)

can be interpreted as a finite volume partition function corresponding to branching weights and , see e.g. [14]. Furthermore, this is the only special case in which we obtain generic trees. This can be seen from the fact that when , outgrowths from the same vertex on the spine are dependent.

When and ,

(4.6)

and it falls off exponentially in . When and we find that for large

(4.7)

i.e. it falls off with a power law in . From the last formula, we see that when , the expected value of the degree of a vertex on the spine is infinite. A simple and interesting special case arises when in which case the outgrowths from the spine are single leaves. Such graphs have been referred to as caterpillars in the literature. The degrees of the vertices on the spine are distributed independently by

(4.8)

and they have an infinite expected value for all values of . These caterpillars are a special case of ’caterpillars at a phase transition’ in the equilibrium statistical mechanical model studied in [23]. We will consider this special case in more detail in the next section.
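As an illustration, the following sketch grows a finite piece of such a caterpillar with i.i.d. heavy-tailed numbers of leaf outgrowths along the spine. The tail `P(leaves >= m) ~ 1/m` used here is a hypothetical stand-in for the distribution (4.8), chosen only because it has infinite mean; no quantitative claim about the model is intended.

```python
import random

def sample_caterpillar(spine_len, seed=0):
    """leaves[i] is the number of single-leaf outgrowths at spine vertex i,
    drawn from a hypothetical infinite-mean law with P(leaves >= m) ~ 1/m
    (a stand-in for (4.8), not the paper's distribution)."""
    rng = random.Random(seed)
    return [int(1.0 / (1.0 - rng.random())) - 1 for _ in range(spine_len)]

def ball_volume(leaves, R):
    """Number of vertices within graph distance R of spine vertex 0:
    spine vertices 0..R plus the leaves hanging off spine vertices 0..R-1."""
    R = min(R, len(leaves) - 1)
    return (R + 1) + sum(leaves[:R])

leaves = sample_caterpillar(10_000)
vols = [ball_volume(leaves, R) for R in (10, 100, 1000)]
```

Since the leaf counts have infinite mean, the ball volumes fluctuate strongly between realizations, which is the phenomenon behind the almost sure (rather than annealed) statements of the next theorems.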

5. The Hausdorff dimension

The Hausdorff dimension is a notion of dimension of graphs and is defined in terms of how the volume of the graph ball scales with its radius . The Hausdorff dimension of a graph is defined as

(5.1)

provided that the limit exists. This definition is only interesting on an infinite graph. On the hyper–cubic lattice it holds that but in general is not an integer. This dimension has been studied by physicists, especially in the quantum gravity literature, see e.g. [5] and should not be confused with the usual notion of Hausdorff dimension in a metric space, although there are some similarities.
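On a finite piece of any graph, the growth exponent in this definition can be probed numerically from BFS ball volumes. A minimal sketch, checked on the square lattice where the exponent is 2 (the grid size and two-point slope are illustrative choices):

```python
import math
from collections import deque

def ball_volumes(adj, root, r_max):
    """|B_R| for R = 0..r_max, computed by breadth-first search from root."""
    dist = {root: 0}
    q = deque([root])
    while q:
        v = q.popleft()
        if dist[v] == r_max:
            continue
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                q.append(u)
    vols = [0] * (r_max + 1)
    for d in dist.values():
        vols[d] += 1
    for R in range(1, r_max + 1):  # cumulative counts give |B_R|
        vols[R] += vols[R - 1]
    return vols

# 81 x 81 square grid rooted at the centre, far from the boundary for R <= 16
n = 81
grid = {(i, j): [(i + di, j + dj)
                 for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= i + di < n and 0 <= j + dj < n]
        for i in range(n) for j in range(n)}
vols = ball_volumes(grid, (40, 40), 16)
d_est = math.log(vols[16] / vols[8]) / math.log(2)  # crude two-point slope
```

On the square lattice |B_R| = 2R^2 + 2R + 1 away from the boundary, so the two-point slope approaches 2 as R grows.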

The Hausdorff dimension can be defined in different ways for random graphs. If the graphs are distributed by then they might first of all have, –almost surely, a Hausdorff dimension as defined above. Secondly, we define the annealed Hausdorff dimension as

(5.2)

where denotes expected value with respect to .

There is another notion of dimensionality which applies when one considers a sequence of finite volume measures on a set of graphs. It is usually defined in terms of how the average value of some typical distance in the graph (the maximum distance between vertices, the mean distance of vertices from the root, etc.) scales in relation to the volume of the graph as it grows. This dimension has also been referred to as the Hausdorff dimension in the physics literature but to avoid confusion we will refer to it here as the fractal dimension and denote it by . To give a more precise definition, we adopt the one from [9] which is as follows: Define the radius of a finite tree by

(5.3)

where is the vertex set of , is the root, is the graph metric and denotes the degree of . The fractal dimension is defined as

(5.4)

If converge to a measure concentrated on infinite graphs, has been observed to be equal to (or ) in many situations; a simple example is the uniform tree and modifications of it, see e.g. [14]. It is however straightforward to find a counterexample where and it is not entirely clear which conditions guarantee equality. We will comment on this relation in the ag-model below.

We will now calculate the annealed Hausdorff dimension of the trees distributed by from Theorem 4.1.

Theorem 5.1.

Let . The random trees, distributed by described in Theorem 4.1, have an annealed Hausdorff dimension

(5.5)

To prove Theorem 5.1, we need to analyse the large behaviour of . In order to simplify the notation we let be the empty tree and define . We then extend the probability distributions , , to probability distributions on and define

(5.6)

Since the outgrowths from different vertices on the spine are i.i.d. it is clearly sufficient to show that

(5.7)

as . This follows from the lemma below, which is proved in Appendix B.

Lemma 5.2.

For ,

Note that when and () since then the expected value of degrees of vertices on the spine is infinite. However, the –almost sure Hausdorff dimension might still be finite. We confirm this in the case and , when the trees are caterpillars.

Theorem 5.3.

Let and . Then

(5.8)

–almost surely.

The proof is given in Appendix B.

5.1. Comparison to the fractal dimension

In the original paper on the vertex splitting model [9] it was shown that the expected value of the radius of a tree can be written as

(5.9)

where is the probability that a uniformly chosen vertex has degree and that the volume of the subtree attached to containing the root is and that the other subtrees attached to have a total volume . Furthermore, in the case of linear splitting weights , was shown to be a solution of a system of linear recursion equations determined by the growth rules of the vertex splitting model, see [9, Section 3]. These recursion equations could not be solved explicitly but it was assumed that the following scaling holds

(5.10)

for some and “scaling functions” . The linear recursions were thus reduced to an eigenvalue equation for

(5.11)

where is the Perron–Frobenius eigenvalue of the matrix indexed by a pair of indices , , , and given by the matrix elements

(5.12)

Comparing (5.9) and (5.10) to (5.4) allows one to find the fractal dimension

(5.13)

The scaling assumption (5.10) was not proven but the results (5.11)–(5.13) were supported by simulations in the case .

It is interesting to compare , corresponding to the weights (1.4) of the ag–model, to the values of obtained in Theorem 5.1. It is straightforward to solve (5.11) for small values of and find that . Furthermore, we have calculated in the case and by solving (5.11) numerically. We used a cutoff on the system which is expected to closely approximate the case , since the vertex degree distribution is believed to fall off exponentially in this case, cf. (6.4). The results are shown in Fig. 4. The agreement we find supports the validity of the scaling assumption (5.10) up to a very high maximum degree .

Figure 4. Comparison of (gray squares) and (solid line) in the case and . Using the weights (1.4), we calculated numerically from (5.11-5.13) for , using a cutoff on the system.
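The matrix elements (5.12) involve the elided weights, but the numerical step itself is standard; a generic power-iteration sketch for the Perron–Frobenius eigenvalue of a nonnegative matrix is below (the cutoff enters only through the matrix size). Building the actual matrix from (5.12) is the part that requires the model's weights.

```python
def perron_eigenvalue(mat, iters=1000, tol=1e-12):
    """Largest eigenvalue of a nonnegative square matrix by power iteration."""
    n = len(mat)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(mat[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam_new = max(w)  # sup-norm normalization; valid since mat and v are nonnegative
        v = [x / lam_new for x in w]
        if abs(lam_new - lam) < tol:
            return lam_new
        lam = lam_new
    return lam
```

For instance, `perron_eigenvalue([[2.0, 1.0], [1.0, 2.0]])` converges to the dominant eigenvalue 3.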

5.2. The spectral dimension

We conclude this section by mentioning another notion of dimension of graphs called the spectral dimension. It is defined in terms of how the return probability of a random walker on the graph decays with time . More precisely, for a tree let be the probability that a simple random walk which leaves the root at time is back at the root at time . The spectral dimension of is defined as

(5.14)

provided the limit exists. The spectral dimension can take any value greater than one and does not necessarily agree with the Hausdorff dimension. We refer to [6, 14, 15, 22] for discussion of the spectral dimension of several types of random graphs.
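As a numerical illustration of this definition, the return probability can be evolved exactly on a concrete graph. A sketch on a long path graph, standing in for simple random walk on Z where the spectral dimension is 1 (the path length, time horizon, and two-point slope are illustrative choices):

```python
import math

def return_probabilities(adj, root, t_max):
    """p(t) for t = 0..t_max: the distribution of a simple random walk
    started at root, evolved exactly one step at a time."""
    p = {root: 1.0}
    out = [1.0]
    for _ in range(t_max):
        q = {}
        for v, pv in p.items():
            share = pv / len(adj[v])
            for u in adj[v]:
                q[u] = q.get(u, 0.0) + share
        p = q
        out.append(p.get(root, 0.0))
    return out

# path graph on {-100, ..., 100}; 80 steps from 0 never reach the endpoints
path = {i: [j for j in (i - 1, i + 1) if -100 <= j <= 100] for i in range(-100, 101)}
probs = return_probabilities(path, 0, 80)
# two-point slope of log p(t) against log t at even times (odd times give p = 0)
d_s = -2.0 * math.log(probs[80] / probs[20]) / math.log(80 / 20)
```

The estimate comes out close to 1, consistent with the t^{-1/2} decay of the return probability on Z.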

It would be interesting to calculate the spectral dimension of the trees distributed by . For now we only have results in the case when the trees are caterpillars.

Theorem 5.4.

Let and . If exists then

(5.15)

–almost surely.

The theorem is proved on page B in Appendix B.

6. Vertex degree distribution and correlations

In this section we use results from the vertex splitting model [9] to calculate the vertex degree distribution and correlation between the degrees of neighbouring vertices in the ag–model. Not all results in this section are rigorous and we will comment on this point below.

Let be the number of vertices of degree in a random tree with edges and define the vertex degree densities

(6.1)

It was shown in [9, Section 2] that the densities in the vertex splitting model satisfy the linear equation

(6.2)

assuming that the splitting weights are linear and under certain technical conditions on the partitioning weights. The splitting weights are linear in the ag–model, cf. (1.5), and one can check that for small the technical conditions needed on the partitioning weights are fulfilled. However, it is not certain whether (6.2) holds for general and one would need further analysis to verify that. It is straightforward to solve (6.2) for the weights (1.4) and thus we find that the vertex degree densities in the ag–model are given by

(6.3)

and

(6.4)

provided that (6.2) holds. By sending to zero we find that these results agree with results previously obtained in the preferential attachment model [2, 26]. Also note that in the case , has in general a power law behaviour except when () in which case it falls off exponentially with rate . This resembles properties of the degree distribution of the vertices on the spine, cf. (4.6) and (4.7).

Let be the number of edges with endpoints of degree and in a random tree with edges, using the convention that the vertex of degree is the one closer to the root. Define the density

(6.5)

It was shown in [9] that these densities in the vertex splitting model satisfy

(6.6)

assuming that the limit (6.5) exists. The densities give us information about correlation between vertex degrees of neighbouring vertices. It can be measured with a correlation coefficient

(6.7)

where

(6.8)

The coefficient takes values between and . If the graph is said to show disassortative mixing and vertices with high degree prefer to be neighbours of vertices with low degree. If the graphs are said to show assortative mixing and vertices with high degree prefer to be neighbours of vertices with high degree, see e.g. [27].
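The precise normalization in (6.7)–(6.8) is elided above, but the standard choice (see e.g. [27]) is the Pearson correlation of the endpoint degrees over edges, which can be computed directly. A sketch, with the edge list symmetrized so the coefficient does not depend on edge orientation:

```python
import math

def assortativity(edges, deg):
    """Pearson correlation of endpoint degrees over the (symmetrized) edges."""
    pairs = [(deg[u], deg[v]) for u, v in edges]
    pairs += [(y, x) for x, y in pairs]  # count each edge in both directions
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x, _ in pairs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for _, y in pairs) / n)
    if sx == 0.0 or sy == 0.0:  # regular graph: coefficient undefined
        return 0.0
    return cov / (sx * sy)

# star K_{1,5}: the high-degree centre only meets degree-one leaves
star_edges = [(0, i) for i in range(1, 6)]
star_deg = {0: 5, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}
r_star = assortativity(star_edges, star_deg)  # perfectly disassortative
```

The star graph gives the extreme value -1, since every edge joins the unique high-degree vertex to a degree-one vertex.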

We will conclude this section by calculating for two choices of parameters in the ag–model, namely and , . We consider (6.6) with the weights given in (1.4). In the case , (6.6) can be explicitly solved and we find that

(6.9)

This can of course be repeated for small values of . However, when is large or infinite it is more difficult to solve (6.6) explicitly. Instead, we study the generating function

(6.10)

and use the fact that

(6.11)

to calculate . Equation (6.6) becomes a linear, first order partial differential equation in terms of the generating function . It can in principle be solved for a general set of parameters; however, we only comment on the case , . In that case, the coefficients of the derivative terms in the PDE are zero and we get an algebraic equation for . The solution is

(6.12)

where

(6.13)

with

(6.14)

From these expressions we find that

(6.15)

We plot the solutions (6.9) and (6.15) together in Fig. 5.

Figure 5. Comparison of Equations (6.9) (, black) and (6.15) (, , gray).

The two curves are similar in both cases. If then which agrees with results which have previously been obtained for the preferential attachment model [27]. In this case the vertices which are close to the root are ’old’ in the sense that once they reach the maximum degree (which they eventually do with probability one) they do not change again. Thus, a lot of vertices of high degree become neighbours. When is increased above zero, a repulsion is introduced between these vertices, the value of decreases and the trees show disassortative mixing. When goes to 1, the trees approach the same non–random graph, a spine with no outgrowths. As a consequence, the value of approaches the same value in both cases.

7. Conclusions

We introduced the ag–model, a special case of the vertex splitting model which has the Markov branching property. For particular choices of parameters it reduces to models of generic trees [14], preferential attachment [2] and non–generic caterpillars [23]. It was proved that the finite volume measures generated by the growth rules converge to a measure which is concentrated on the set of trees with exactly one spine and the limiting measure was described explicitly. The same has been done before in Ford’s α–model [28] and a special case of the αγ–model [29]. Extension of these convergence results to the vertex splitting model is a work in progress.

There is another notion of convergence of random trees, referred to as the scaling limit, see e.g. [3, 19]. This means, roughly, that a random tree viewed as a metric space with the graph metric suitably scaled, converges weakly, in the Gromov–Hausdorff topology, to a continuum random tree. In a recent paper on Markov branching trees [18], Haas and Miermont proved that under certain natural conditions on the first split distributions the scaling limit of the trees is a self–similar fragmentation tree, in the Gromov–Hausdorff–Prokhorov topology. We expect that this theory applies to the model studied in this paper and it would be interesting to confirm that. Moreover, it is an interesting and challenging problem to generalize the results on the scaling limit to the vertex splitting model when Markov branching is absent.

The annealed Hausdorff dimension, with respect to the infinite volume measure of the ag–model, was calculated for a certain range of the parameters. The results partly support scaling assumptions which were made when calculating the fractal dimension in the vertex splitting model [9]. In the special case of growing caterpillar graphs we calculated, almost surely, the Hausdorff and spectral dimension. It turns out that the dimensions are related by the formula

(7.1)

This equation holds in general for tree models which satisfy a certain uniformity condition and under the assumption that vertex degrees are uniformly bounded from above [6]. We expect this relation to hold in the ag–model and it would be desirable to check whether it holds in the vertex splitting model.

It is possible to study other interesting observables in the ag–model such as the vertex degree distribution and correlations between degrees of neighbouring vertices in large trees. It would be interesting to give a rigorous proof of (6.4) and even to get stronger convergence results for the random variables . This can presumably be done, at least for some range of the parameters, using results on generalized Pólya urns [20]. Furthermore, it would be interesting to confirm the validity of (6.2) for as general a set of parameters as possible. Similar results about the convergence of are desirable.

A natural question is whether the ag–model is the only special case of the vertex splitting model which has the Markov branching property. As was noted in the introduction, the αγ–model has the Markov branching property but it is not strictly a special case of the vertex splitting model, rather a limiting case. Since the vertex splitting model has local and isotropic growth rules, one might also ask whether there exists some other notion of self–similarity which could be used to handle the general case. An understanding of this could be a key element towards a solution of the most general case.

Acknowledgement.

I am deeply indebted to Thordur Jonsson and François David for helpful discussions and comments.

Appendices

Appendix A Solution of the first split distributions

In this section we describe a ’network flow method’ for solving the recursion equations for the first split distribution given in Proposition 3.1. We encountered this method in [17] where it was used to solve recursion equations for the first split distribution of Ford’s α–model.

First of all, it is straightforward to derive the formula in Proposition 3.2 in the case . Consider next the case in which the nearest neighbour of the root has two disjoint subtrees of descendants, which we refer to as the left and right subtrees. We represent a state when the tree has edges in the left subtree and edges in the right subtree by a node , in the network in Fig. 6.

Figure 6. A network flow diagram with sources , .

We assign conductance

(A.1)

between states and , which is the probability of going from state to state (symmetric in and ) given by the recursion in Proposition 3.1. Let be a simple path (non–backtracking path) in the network with endpoints and where denotes the length of the path. Note that the product of conductances along a path from a state (or ) to a state is given by

(A.2)

and is independent of the path chosen. We define the value of a state as and we define as a source which flows into state (or ). We can write

i.e.  is given by the flow from the neighbouring sources and the neighbouring states, weighted by the conductance between the states. By comparing the flow relation above to the recursion in Proposition 3.1 we find that

(A.4)

and

(A.5)

for .

From the flow relation above we conclude that is given by the sum over all paths from all sources which lead to the state , where each path is weighted by the product of the conductance along the path, i.e.

(A.6)

Since the conductance along a path between given states is independent of the path chosen, we can take the product outside the inner sum and we are simply left with a counting problem. The number of paths between and is and the number of paths between and is . We can now easily perform the sums over and we recover the formula of Proposition 3.2 for .

This argument can be generalized to higher values of and it yields the formula of Proposition 3.2.
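The counting problem at the heart of the argument is the count of monotone lattice paths. A quick dynamic-programming check of the binomial count, under the assumption that an allowed move increases exactly one coordinate by one:

```python
from math import comb

def count_monotone_paths(a, b, m, n):
    """Number of paths from (a, b) to (m, n) whose steps are (+1, 0) or (0, +1),
    counted by dynamic programming over the grid."""
    if m < a or n < b:
        return 0
    rows, cols = m - a + 1, n - b + 1
    grid = [[0] * cols for _ in range(rows)]
    grid[0][0] = 1
    for i in range(rows):
        for j in range(cols):
            if i > 0:
                grid[i][j] += grid[i - 1][j]  # arrive from the left
            if j > 0:
                grid[i][j] += grid[i][j - 1]  # arrive from below
    return grid[rows - 1][cols - 1]
```

The count agrees with the closed form C((m-a)+(n-b), m-a), which is what turns the sum over paths into an explicit expression.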

Appendix B Proof of main theorems

In this section we collect proofs of theorems and lemmas stated in the main part of the paper. We need the following two lemmas in the proof of Theorem 4.1.

Lemma B.1.

For ,