
# Representations of stack triangulations in the plane

## Abstract

Stack triangulations appear as natural objects when defining an increasing family of triangulations by successive additions of vertices. We consider two different probability distributions for such objects. We represent, or “draw” these random stack triangulations in the plane and study the asymptotic properties of these drawings, viewed as random compact metric spaces. We also look at the occupation measure of the vertices, and show that for these two distributions it converges to some random limit measure.


Keywords: Stack triangulations; Occupation measure; Limit Theorem.

## 1 Introduction

Consider a rooted triangulation of the plane, and some finite face of this triangulation. We insert a vertex in this face, and add the three edges connecting it to the three vertices of the face. We obtain a triangulation with two more faces than the original one (the chosen face has been replaced by three new faces). Thus, starting from a single rooted triangle, after $k$ such insertions we get a triangulation with $k$ internal vertices, that is, vertices which are not vertices of the original rooted triangle, and $2k+1$ finite faces. The set of triangulations with $k$ internal vertices which can be reached through this growing procedure is the set we consider. We call such triangulations stack triangulations. Note that through this construction we do not obtain the set of all rooted triangulations. This iterative process is demonstrated in Figure 1.
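The growing procedure above can be sketched in code. The following is a hypothetical minimal implementation (the function name and data layout are ours, not the paper's): faces are stored as triples of vertex indices, and each insertion replaces a chosen finite face by three new ones. Here the face is chosen uniformly at random, which matches the second distribution considered below.

```python
import random

def grow_stack_triangulation(n, seed=0):
    """Grow a stack triangulation by n vertex insertions, starting from a
    single rooted triangle; each insertion replaces one finite face by three."""
    rng = random.Random(seed)
    faces = [(0, 1, 2)]        # the initial rooted triangle on vertices 0, 1, 2
    next_vertex = 3
    for _ in range(n):
        i = rng.randrange(len(faces))   # choose a finite face uniformly
        a, b, c = faces.pop(i)
        m = next_vertex                 # the newly inserted internal vertex
        next_vertex += 1
        # the face (a, b, c) is replaced by three new faces around m
        faces.extend([(a, b, m), (b, c, m), (c, a, m)])
    return faces

faces = grow_stack_triangulation(5)
# n insertions yield n internal vertices and 2n + 1 finite faces
assert len(faces) == 2 * 5 + 1
```

Each step removes one face and adds three, so the face count increases by exactly two per insertion, as stated above.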

We endow the set with two natural probability distributions:

• The first is the uniform distribution .

• The second distribution is the distribution induced by the above construction where at each step the face in which we insert the vertex is chosen uniformly at random among all finite faces, independently from the past.

### 1.1 The object of our study

In this paper, rather than look at stack triangulations as maps, that is up to homeomorphism, we look at particular representations, or drawings, of such objects in the plane, and the geometrical properties of such representations. That is, at each insertion of a new vertex, we draw the line segments corresponding to the edges added. We call such representations compact triangulations, and view them as compact subspaces of . The main difference is that while maps are graphs drawn in the plane, they are considered only up to homeomorphism, whereas we are interested in the actual representation. We are rather informal here, but will give formal definitions later in the paper, in Section 2.1.

We take , , (identifying and ) to be the three points representing the initial rooted triangle, with its root. We start with , and set . We denote by the filled triangle , that is the union of and of the finite connected component of . At time , we insert a point somewhere in . We then define , and set

 $\mathcal{D}_1 = \{\,T_1(M)\;;\; M \in \widetilde{T}_0 \setminus T_0\,\}.$

Now let for some . Write , , , and similarly, for the corresponding filled triangles. At time 2, we insert a point somewhere in one of the ’s. We then define

 $T_2(M,N) := T_1(M) \cup [XN] \cup [YN] \cup [MN],$

where , and finally set

 $\mathcal{D}_2 = \Bigl\{\,T_2(M,N)\;;\; M \in \widetilde{T}_0 \setminus T_0 \text{ and } N \in \bigcup_{i=1}^{3}\bigl(\widetilde{T}_1^{(i)}(M) \setminus T_1^{(i)}(M)\bigr)\,\Bigr\}.$

Figure 2 illustrates this initial construction.

Iterating this construction by choosing at each step some triangular face of our drawing and splitting it, we obtain representations of stack triangulations in the plane, and call these compact triangulations. Denote the set of such objects with internal vertices (that is, after successive insertions of vertices), and for write for its set of internal vertices (viewed as a set of points in the plane). Finally, for we define the occupation measure of by

 $\mu(m) := \frac{1}{k} \sum_{x \in V(m)} \delta_x,$ (1)

where stands for the Dirac mass at .

We are interested here in the case where the successive insertions of vertices are done at random. We suppose that all random variables in this paper are defined on some probability space . We denote the expected value, and the variance. We consider a probability distribution on such that if has law then a.s. for all and 1. We suppose that each insertion of a vertex in a face is done according to , that is we take to have barycentric coordinates where has law , independently from all previous insertions. Now the two distributions and on introduced at the start of the section induce probability distributions and on . In words, they are the distributions of the drawings of stack triangulations with distribution and , where the insertions of vertices are made according to , independently from each other, and independently from the choice of the underlying stack triangulation. The object of the paper is to study the asymptotic behaviour of these two distributions.
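The insertion mechanism just described — a point placed in a face with barycentric coordinates drawn from a splitting law — can be sketched as follows. As an illustrative choice of splitting law we use the uniform (Dirichlet(1,1,1)) distribution on the open 2-simplex, which is symmetric and has a.s. positive coordinates summing to 1; the paper allows any law with these properties. Function names are ours.

```python
import random

def sample_splitting_triplet(rng):
    """One admissible splitting law: the uniform (Dirichlet(1,1,1))
    distribution on the open 2-simplex, sampled via normalised exponentials."""
    e = [rng.expovariate(1.0) for _ in range(3)]
    s = sum(e)
    return tuple(x / s for x in e)

def insert_point(triangle, rng):
    """Place a point in the given triangle with barycentric coordinates
    drawn from the splitting law, independently of everything else."""
    (ax, ay), (bx, by), (cx, cy) = triangle
    x1, x2, x3 = sample_splitting_triplet(rng)
    return (x1 * ax + x2 * bx + x3 * cx,
            x1 * ay + x2 * by + x3 * cy)

rng = random.Random(1)
p = insert_point(((0.0, 0.0), (1.0, 0.0), (0.5, 1.0)), rng)
# the point lies in the (closed) triangle, hence in its bounding box
assert 0.0 <= p[0] <= 1.0 and 0.0 <= p[1] <= 1.0
```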

### 1.2 Outline of the paper

In Section 2, we formally define compact triangulations. We then enrich the classical bijection between stack triangulations and ternary trees (see for instance [1], Proposition 1) to encode compact triangulations. For this, in Section 2.2, we introduce the notion of coordinate-labelled ternary trees. These are ternary trees with labels at each vertex, which code compact triangulations via a bijection we establish in Theorem 2.8. We end the section by formally defining the distributions and on .

In Section 3, we study the asymptotic behaviour of the uniform distribution as . The main results are:

• The weak convergence of the occupation measure as defined in (1) towards a Dirac mass at a random position (Theorem 3.1).

• The weak convergence of the distribution towards a distribution on compact subspaces of (Theorem 3.9).

Section 3.1 is thus dedicated to the statement and proof of Theorem 3.1. The statement is split into two parts. The first part states the convergence in distribution of an internal vertex of chosen uniformly at random, where has distribution , to some limit vertex. Though this is weaker than the second part, it is a key ingredient in its proof, and thus we choose to state it separately. In Section 3.2 we introduce the notion of local convergence for trees (Definition 3.13). In Theorem 3.15, we show that the bijection established in Theorem 2.8 which maps a coordinate-labelled tree to a compact triangulation has a property of continuity with respect to this topology of local convergence, from which we infer Theorem 3.9.

Finally, in Section 4, we study the asymptotic behaviour of the occupation measure under . The key ingredient is Poisson-Dirichlet fragmentation, which allows us to view the trees corresponding to the compact triangulations via Theorem 2.8 as the underlying tree of a certain fragmentation tree (Theorem 4.2). We then show the convergence of the occupation measure as defined in (1) to some (random) limit measure (Theorem 4.5). In Section 4.3 we study the properties of . We show (Proposition 4.8) that a.s. has no atom, and that it is supported on a set whose Hausdorff dimension is at most (Theorem 4.10).

### 1.3 Literature and motivation

Motivation for this work stems from the paper by Bonichon et al. [7], in which the authors look at convex straight line drawings of triangulations, and establish bounds for the minimal grid size necessary for these drawings, with the constraint that all vertices are located at integer grid points. More precisely, they show that to draw any triangulation with faces, a grid of size (for some constant ) is sufficient, giving a constructive proof of this result by establishing an algorithm for drawing any triangulation. The aim of this paper is to provide an answer to the question: what do these drawings look like?

More specifically, we aim to explore an approach for the convergence of maps which differs from the traditional combinatorial one. Indeed, maps are embeddings of graphs, but in the combinatorial approach these are viewed up to homeomorphism, and equipped with the graph distance, that is, every edge is given the same length. Concerning this approach, we cite the groundbreaking work by Schaeffer in his thesis [15], where he establishes a crucial bijection between maps and a class of labelled trees, as well as the more recent work by Le Gall [13] and Miermont [14], who showed (separately, using different techniques) that uniform quadrangulations with faces, renormalised so that every edge has length (for some constant ), converge in distribution to a continuous limit object called the Brownian map. In this paper however, we look at the convergence of the embeddings themselves, viewed as (random) compact spaces. This approach is analogous to the work of Curien and Kortchemski [9]. In this paper, the authors showed the universality of the Brownian triangulation introduced by Aldous [4], in that it is the limit of a number of discrete families called non-crossing plane configurations, such as dissections, triangulations, and non-crossing trees of the regular -gon. As mentioned, Curien and Kortchemski view non-crossing plane configurations as random compact subspaces of the unit disk, and it is these compact spaces which converge to the limit object.

In this paper, we also study the asymptotic behaviour of the occupation measure, as defined in (1). Similar work includes the paper by Fekete [10] on branching random walks. In this paper, he considers branching random walks where the underlying tree is a binary search tree (this is related to our distribution in this paper). He shows that the occupation measure converges weakly to a limit measure which is deterministic. More work concerning the study of random measures similar to ours can be found in [3]. In this paper, Aldous proposes a natural model for random continuous “distributions of mass”, called the Integrated super-Brownian Excursion (ISE), which is the (random) occupation measure of the Brownian snake with lifetime process the normalised Brownian excursion. ISE is defined using random branching structures, and appears to be the continuous limit of occupation measures of several discrete structures.

Finally, let us mention that the combinatorial aspect of stack triangulations has been extensively studied, notably by Albenque and Marckert [1], and their paper will therefore be of great use to us. The authors studied both the uniform distribution and the other distribution. More precisely, they showed that:

• for the topology of local convergence, converges weakly to a distribution on the set of infinite maps.

• For the Gromov-Hausdorff distance, with the normalising factor , a map with the uniform distribution converges weakly to the continuum random tree introduced by Aldous [2].

• Under the distribution , the distance between random points rescaled by converges to in probability.

## 2 Compact triangulations and encoding with trees

In this section we code compact triangulations, that is the representations of triangulations in the plane, by some labelled trees. There are two main ideas in this coding. First there is the combinatorial bijection between the discrete objects: stack triangulations (viewed up to homeomorphism) and ternary trees. There is a well known bijection which maps internal vertices of the triangulation to internal nodes of the tree and faces of the triangulation to leaves of the tree (see for instance [1] Proposition 1 and references therein). We then enrich this bijection to include the drawing of the triangulation by adding labels to the tree: these labels correspond to the barycentric coordinates of the vertices of the triangulation.

### 2.1 Compact triangulations

Here we build formally the set of compact triangulations with internal vertices. The construction is done by induction, and is similar to the construction of stack triangulations. This allows us to observe the tree-like structure of these objects. During the construction, we will define the various notions necessary for the encoding discussed above. Set as in the introduction , , to be the three points of the original triangle, and define . Now define , and set . The set will be the set of internal nodes of . Now assume we have constructed for some , such that is a set of compact subspaces of and any satisfies the following properties:

1. The compact space is the union of line segments in the plane.

2. There are exactly finite connected components of , and these are all interiors of triangles. Let be the set of these connected components, and call the elements of faces of . For we define as the three points of the triangle . We can in fact define these points unambiguously as follows.

• for , if is the interior of the original triangle .

• If a triangle is split into three triangles by adding a point in its interior, and is defined by the three points , then with the other two vertices of each triangle unchanged (that is, , and so forth). This is illustrated in Figure 3 below.

3. Finally assume that for any we have defined a set of points of , which are the points inserted at each step of the construction of .

Note that these properties are all satisfied for .

We now construct the set . First, let

 $\dot{\mathcal{D}}_k = \{\,(m,f)\;;\; m \in \mathcal{D}_k,\ f \in F_0(m)\,\}$

be the set of compact triangulations with a marked face. Define a map from onto the set of compact subspaces of as follows. Let , and let be the three (ordered) points of . For any point in the face we define

 $m' = I_M(m,f) := m \cup \{[A_f M],\ [B_f M],\ [C_f M]\},$

that is the space with those three new lines added, connecting the points of the face with the inserted vertex . The map is illustrated in Figure 4. We see that there are exactly finite connected components of , and these are all interiors of triangles (we have replaced one of them, , by three new ones). We also set

 $V(m') = V(m) \cup \{M\},$ (2)

and thus the set is a set of points of : it is the set of the internal vertices which define the faces of . Finally, we can define

 $\mathcal{D}_{k+1} := \{\,I_M(m,f)\;;\; (m,f) \in \dot{\mathcal{D}}_k,\ M \in f\,\}$

to be the image of this map. In words, it is the set of “drawings” of stack triangulations with internal vertices, with edges included.
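The insertion map described above can be sketched combinatorially, keeping the geometry implicit: a compact triangulation is represented by its set of edges (unordered vertex pairs), and the map adds the three segments joining the inserted point to the vertices of the marked face. Names and the data layout are ours, for illustration only.

```python
def insert_in_face(segments, face, M):
    """A combinatorial sketch of the map I_M(m, f): add the three segments
    joining the inserted point M to the three vertices of the marked face f.
    Segments are stored as unordered vertex pairs (frozensets)."""
    A, B, C = face
    return segments | {frozenset({A, M}), frozenset({B, M}), frozenset({C, M})}

# the initial triangle has three edges; one insertion adds three more
m0 = {frozenset({"A", "B"}), frozenset({"B", "C"}), frozenset({"C", "A"})}
m1 = insert_in_face(m0, ("A", "B", "C"), "M")
assert len(m1) == 6
```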

###### Definition 2.1.

Let . For , we call the elements of (where is defined step by step by (2)) the internal vertices of . The set is called the set of compact triangulations with internal vertices. Finally, we denote

 $\mathcal{D} = \bigcup_{k \ge 0} \mathcal{D}_k$

the set of compact triangulations.

###### Definition 2.2.

Let . We define the occupation measure of by

 $\mu(m) = \frac{1}{|V(m)|} \sum_{x \in V(m)} \delta_x.$ (3)

This is a probability measure in .
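Concretely, the occupation measure is just the empirical distribution of the internal vertices. A minimal sketch (the function name is ours):

```python
from collections import Counter

def occupation_measure(vertices):
    """Occupation measure of Definition 2.2: the empirical distribution
    putting mass 1/|V(m)| on each internal vertex of the triangulation."""
    n = len(vertices)
    return {v: c / n for v, c in Counter(vertices).items()}

mu = occupation_measure([(0.5, 0.5), (0.25, 0.25), (0.5, 0.5)])
assert abs(sum(mu.values()) - 1.0) < 1e-12
```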

Note that is a set of compact subspaces of . We aim to introduce some probability laws on these sets (as explained in the introduction), and for this we need to equip them with a -field. We first recall the definition of the Hausdorff distance for compact spaces.

###### Definition 2.3.

Let be a compact metric space. For and , define the -neighbourhood of as the set of points of whose distance to is less than , that is

 $V_\varepsilon(A) = \{x \in E,\ d(x,A) < \varepsilon\}.$

Then for two compact sets , the Hausdorff distance between and is defined by

 $d_H(A,B) = \inf\{\varepsilon > 0,\ A \subseteq V_\varepsilon(B) \text{ and } B \subseteq V_\varepsilon(A)\}.$

This defines a distance on the set of compact subspaces of .
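For finite point sets the infimum in Definition 2.3 reduces to a max–min computation, which gives a simple illustrative special case (a sketch, not the general compact-set definition):

```python
def hausdorff(A, B):
    """Hausdorff distance between two finite point sets in the plane,
    an illustrative special case of Definition 2.3."""
    def d(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    def one_sided(X, Y):
        # largest distance from a point of X to the set Y
        return max(min(d(x, y) for y in Y) for x in X)
    return max(one_sided(A, B), one_sided(B, A))

assert hausdorff([(0, 0)], [(3, 4)]) == 5.0
assert hausdorff([(0, 0), (1, 0)], [(0, 0), (1, 0)]) == 0.0
```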

We equip the space of compact subspaces of with the Hausdorff distance. It is a well-known topological fact that this makes it a complete metric space (see for instance [8] Section 7.3.1 p. 252). In fact, is a compact metric space. We equip the sets with the corresponding Borel -algebra.

### 2.2 Encoding with properly marked trees

We now encode compact triangulations by certain labelled trees. We begin with the purely combinatorial aspect. Let

 $W := \bigcup_{n \ge 0} \mathbb{N}^n$

be the set of all words on , and by convention set .
If we write and call this the height of . Also, if we take two words , we write for the concatenation of and . By convention . A planar tree is a subset such that

1. .

2. If for some and , then .
The notation is used here to mark the fact that we are concatenating the words and , the latter being written so as to differentiate it from the letter .

3. For every there exists such that if and only if .

The integer corresponds to the number of children (or descendants) of in .

We will denote by the set of planar trees. If is a planar tree, its height is defined by . If has no child (i.e. ) we say that is a leaf of . Any vertex which isn’t a leaf is called an internal node of . We denote the set of internal nodes of a tree . If is a tree, and are in , we write for the highest common ancestor of and , i.e. the element of maximal height of the set . If , we let . This is the subtree of which has as a root. A ternary tree is a planar tree such that , i.e. every internal node has exactly three children.
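The word encoding of trees lends itself to a direct implementation: a tree is a set of words (tuples) over the integers, and the defining properties can be checked mechanically. The following sketch restricts to the ternary case described above (function name ours; letter bounds outside {1,2,3} are not checked, as a simplification):

```python
def is_ternary_tree(t):
    """Check the defining properties of a ternary tree encoded as a set of
    words (tuples) over {1, 2, 3}: the root (empty word) is present, the
    parent of every word is present, and every node has 0 or 3 children."""
    if () not in t:
        return False
    for u in t:
        if u and u[:-1] not in t:              # parents belong to the tree
            return False
        k = sum(1 for i in (1, 2, 3) if u + (i,) in t)
        if k not in (0, 3):                    # leaf or exactly 3 children
            return False
    return True

root_only = {()}
one_split = {(), (1,), (2,), (3,)}
assert is_ternary_tree(root_only)
assert is_ternary_tree(one_split)
assert not is_ternary_tree({(), (1,)})   # a node with one child is not allowed
```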

We denote the set of ternary trees, and henceforth we will simply call them trees. We denote the infinite complete ternary tree, that is

 $T_\infty = \bigcup_{n \ge 0} \{1,2,3\}^n.$

It is a well known fact that is mapped bijectively to the set of stack triangulations. However, compact triangulations contain more information than stack triangulations, since compact triangulations contain the information of where each internal vertex is placed. The additional information will be put at each vertex of the associated ternary tree, and will be the barycentric coordinates of the point associated with , at the time it has been inserted. The idea is thus to associate with a point its triplet of barycentric coordinates with respect to , taken to be with sum equal to . As such . Equivalently, if the splitting of is given as in Figure 5, then , where are the respective ratios of the areas of the triangles over the area of .

Write

 $V_2 := \{(x_1,x_2,x_3) \in \mathbb{R}^3\;;\ x_1,x_2,x_3 \ge 0 \ \text{and}\ x_1+x_2+x_3 = 1\}$

for the -dimensional simplex, so that any point corresponds bijectively to its (normalised) barycentric coordinates in . Also, let

 $V_2^{*} := \{(x_1,x_2,x_3) \in \mathbb{R}^3\;;\ x_1,x_2,x_3 > 0 \ \text{and}\ x_1+x_2+x_3 = 1\}$

be the -dimensional simplex with its boundary removed.

###### Definition 2.4.
1. A fragmentation-labelled tree is a pair , , such that for any , , that is a tree and a set of triplets indexed by the internal nodes of . is called the splitting triplet at . We denote the set of fragmentation-labelled trees with internal vertices, and .

2. A coordinate-labelled tree is a pair , , with labels at each node such that:

1. For each leaf of the tree , we have , and write . The elements of are called the coordinates of .

2. For each internal node we have , with , and . The elements of are called the coordinates of , and is called the splitting triplet at .

3. The coordinates of the root are .

4. If and then for we have where is equal to if and otherwise. This property is illustrated in Figure 6.

We denote the set of coordinate-labelled trees with internal vertices and likewise . For we will denote the underlying tree.

###### Remark 2.5.

The condition (as opposed to ) means that the insertions of new vertices at each step are proper insertions, that is the new point is added in the interior of a face and not on its border. This is crucial for Theorem 2.8.

###### Remark 2.6.

We can map a fragmentation-labelled tree to a coordinate-labelled tree by setting , keeping for every internal node of the same splitting triplet as in , and filling in the remaining triplets of coordinates using rules (c) and (d). This gives us a bijective mapping which we denote .
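The filling-in of coordinates from splitting triplets can be sketched as follows. We assume here (consistently with the matrix relation given in Section 3.1, where the product starts from the identity) that the coordinates of the root are the three unit triplets; child indices run over 0, 1, 2 rather than 1, 2, 3, and the function name is ours.

```python
def child_coordinates(C, s, i):
    """Coordinates of the i-th child (i in {0, 1, 2} here) of a node with
    coordinate triplet C and splitting triplet s: row i of C is replaced by
    the barycentric combination of the three rows with weights s, the other
    two rows being kept unchanged (rule (d) of Definition 2.4)."""
    new_row = tuple(sum(s[j] * C[j][k] for j in range(3)) for k in range(3))
    return tuple(new_row if r == i else C[r] for r in range(3))

# assumed root coordinates: the three unit triplets
root = ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))
s = (0.2, 0.3, 0.5)
C1 = child_coordinates(root, s, 0)
assert C1[0] == (0.2, 0.3, 0.5)   # replaced row is the splitting triplet
assert C1[1] == (0.0, 1.0, 0.0)   # other rows unchanged
```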

Once more, we aim to define probability distributions on the sets and . For this, we introduce a -field on the sets and , with the help of a distance.

###### Definition 2.7.

Let . The map defined by

 $d_C(t^1_\bullet, t^2_\bullet) = \mathds{1}_{p(t^1_\bullet) \neq p(t^2_\bullet)} + \mathds{1}_{p(t^1_\bullet) = p(t^2_\bullet)} \Bigl(\Bigl(\max_{u \in p(t^1_\bullet)} d\bigl(\lambda^1(u), \lambda^2(u)\bigr)\Bigr) \wedge 1\Bigr),$

where is the label of a node and represents any distance on the set of labels (seen as a subspace of for some ), is a distance on (for usual reasons). We call it the coordinate-label distance.

We define in an analogous manner a distance on . The spaces and for are equipped with the corresponding Borel -algebras. We can now state the main theorem of this section.

###### Theorem 2.8.

Let . Equip the set with the coordinate-label distance and with the Hausdorff distance . Then there exists a homeomorphism

 $\Psi_n : CT_n \longrightarrow \mathcal{D}_n, \qquad t_\bullet \longmapsto m,$

such that:

1. Each internal node of corresponds bijectively to an internal vertex of . Moreover, if then the barycentric coordinates of the vertex with respect to are given by .

2. Each leaf of corresponds bijectively to a face of . Moreover, if is the label at and the face is defined by the three vertices then , , .

###### Remark 2.9.

Note that the spaces which are in one-to-one correspondence are both infinite, so that it is not the existence of the bijection as such which is of interest, but the fact that via this bijection all relevant information on a compact triangulation can be read in a coordinate-labelled tree. The measurability of the bijection will allow us to transport distributions.

###### Definition 2.10.

For a node in a coordinate-labelled tree , we define to be the triangle whose three points are given by the triplet of coordinates , and for the filled triangle. This is illustrated in Figure 7.

### 2.3 Proof of Theorem 2.8

We proceed by induction on . We follow a similar path to the proof of Proposition 1 in [1], by constructing the bijection iteratively. For there is no work to do. We have , . By property (c) of Definition 2.4 the coordinates satisfy part 2 of the theorem, as desired.

Now assume we have constructed as in the statement of Theorem 2.8, for some . Let . Denote and choose a node such that are leaves of . Now define , and to be the coordinate-labelled tree such that its labels coincide with those of except at , and where we remove the splitting triplet , as is now a leaf of . Thus and by induction we can define . Let be the face of corresponding to the leaf via . Write as in the statement of the theorem for the three vertices defining .

Now let be the point in whose barycentric coordinates with respect to are , and define . It follows that the barycentric coordinates of with respect to are by property 2 of the induction hypothesis applied to in . Thus, by mapping to and all other internal nodes of to their corresponding internal vertex via , we see that satisfies condition 1 of Theorem 2.8. To satisfy condition 2, we map all the leaves of except to their corresponding faces via , noting that these faces are untouched by so that the condition remains satisfied. Finally, we map the leaves respectively to the faces of . Because of the local growth property (d) of Definition 2.4(2), we see that condition 2 is satisfied for these leaves. This iterative construction is illustrated in Figure 8.

Two points remain. Firstly, that is a bijection. But this follows from our construction and the definition of . Indeed is obtained from through the insertion of a vertex anywhere in a given face of an element of , while we have a similar iterative structure for coordinate-labelled trees. It is important here that each vertex is inserted in the interior of some face, and not on its boundary (since the splitting triplets are in and not just in ), so that the face it is inserted in is defined unambiguously.

The final point is to prove that is bicontinuous with respect to the given distances. For this, we fix some and . Write . Now there exists such that for any , if then . We may suppose that , so that if we have . This implies that if then for any vertex the corresponding vertex in is at distance less than from the corresponding vertex in . As a consequence, we get that and the continuity of is proved. The bicontinuity stems immediately from the fact that it is a mapping between compact spaces.∎

### 2.4 Introducing randomness

So far, we have worked in a purely deterministic setting. In this paragraph, we formally introduce the two probability distributions on which will be of interest to us.

###### Definition 2.11.

A splitting law is a distribution on such that if is distributed according to , then:

1. For any permutation on , has same distribution as , that is the law of is symmetric.

2. For any , a.s..

3. a.s..

We denote the set of splitting laws, and say that a random variable is a splitting ratio if its distribution is a splitting law.

Fix some . We define two probability distributions on .

• The first, which we denote , is the uniform distribution on .

• The second, which we denote , is defined by induction. For , the distribution is a.s. equal to the unique tree reduced to its root. Now suppose we have defined a distribution on . Choose according to . Conditionally to , choose one of its leaves uniformly at random, and replace that leaf by an internal node with three children. This gives us a probability distribution on . Note that the weight of a tree is proportional to the number of histories leading to its construction (starting from a single root node).

We say that a random variable is an increasing tree if it has distribution for some .
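The leaf-replacement dynamics defining the increasing-tree distribution translate directly into code. A minimal sketch (names ours), using the word encoding of trees introduced in Section 2.2:

```python
import random

def increasing_ternary_tree(n, seed=0):
    """Grow a random ternary tree with the leaf-replacement dynamics above:
    start from a single root leaf and, n times, replace a uniformly chosen
    leaf by an internal node with three leaf children."""
    rng = random.Random(seed)
    leaves = [()]          # the root starts as the only leaf
    internal = []
    for _ in range(n):
        u = leaves.pop(rng.randrange(len(leaves)))   # uniform leaf choice
        internal.append(u)
        leaves.extend([u + (1,), u + (2,), u + (3,)])
    return internal, leaves

internal, leaves = increasing_ternary_tree(4)
# n replacements yield n internal nodes and 2n + 1 leaves
assert len(internal) == 4
assert len(leaves) == 2 * 4 + 1
```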

###### Definition 2.12.

Let be a splitting law, and be an i.i.d. sequence of random variables with law . Let .

1. We denote (resp. ) the distribution of where is independent from and has distribution (resp. ), and is as in Remark 2.6.

2. We define the distributions and to be the respective images of the distributions and via the bijection of Theorem 2.8. These are therefore two probability distributions on .

## 3 The uniform model

In this section, we study the asymptotic behaviour of the distribution . That is, we look at random compact triangulations where the underlying stack triangulation is chosen uniformly, and the insertion of vertices is done according to some splitting law , independent from the choice of the underlying triangulation. We study both the occupation measure, and the asymptotic behaviour of the distribution itself.

### 3.1 The occupation measure

###### Theorem 3.1.

Let be a sequence of random compact triangulations, where has distribution . Recall (Definition 2) that denotes the set of internal vertices of . For every , conditionally to , let be a vertex of , chosen uniformly at random. Finally, let be the occupation measure of , as in (3). Then

1. The random point converges in distribution to some random limit point as tends to infinity.

2. We have

 $\mu_n \xrightarrow{(d)} \delta_{U_\infty} \quad \text{as } n \to \infty,$

where the convergence is in distribution on the space of probability measures on the filled triangle .

Theorem 3.1 says that in the uniform model, all the vertices of the compact triangulation are at the same place, except for a proportion which tends to zero. Although point 2 is stronger than point 1, we state both here as point 1 will be heavily used in the proof of point 2. Figure 9 shows a simulation of the vertices of the map in a special case. We can see that the vertices are indeed concentrated at one place.

#### Proof of Theorem 3.1.(1)

We begin by recalling an elementary fact about uniform ternary trees.

###### Fact 3.2.

Take as in the statement of Theorem 3.1, and write for the corresponding node in the coordinate-labelled tree , as in Theorem 2.8. Write where is the height of . Then conditionally to , the random variables are i.i.d., and are uniformly distributed on .

###### Proof.

By construction of the law , the tree follows the uniform distribution on . We now use the following argument. If is a random ternary tree, chosen uniformly among trees of a given size, then conditionally to their sizes the subtrees at the root are independent, and also follow the uniform distribution. It immediately follows that the are i.i.d., and the fact that the law of is uniform on stems from the symmetric nature of the uniform distribution in . ∎

By definition of a coordinate-labelled tree we have the following: let be an internal node in a coordinate-labelled tree , with coordinates and splitting triplet , then:

 $\forall i \in \{1,2,3\}, \quad C(ui)^{T} = M^{(i)}_{\nu} \cdot C(u)^{T},$

where is the three-by-three identity matrix in which the -th line is replaced by , i.e.

 $M^{(1)}_\nu = \begin{pmatrix} P_1 & P_2 & P_3 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad M^{(2)}_\nu = \begin{pmatrix} 1 & 0 & 0 \\ P_1 & P_2 & P_3 \\ 0 & 0 & 1 \end{pmatrix}, \quad M^{(3)}_\nu = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ P_1 & P_2 & P_3 \end{pmatrix}.$ (4)
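As an illustrative sanity check on the matrices in (4), one can simulate a long product of i.i.d. random matrices of this form and observe the rows coalescing, which is the content of Proposition 3.4 below. For the simulation we make two illustrative choices: the splitting triplet is uniform on the simplex and the row index is uniform on {1, 2, 3} (as in Fact 3.2); the code uses plain lists so as to stay self-contained.

```python
import random

def random_product(n, rng):
    """Left-to-right product of n i.i.d. random matrices of the form (4),
    with a uniform row index and a uniform splitting triplet; the rows of
    the product are expected to coalesce as n grows."""
    P = [[1.0 if r == c else 0.0 for c in range(3)] for r in range(3)]
    for _ in range(n):
        e = [rng.expovariate(1.0) for _ in range(3)]
        s = [x / sum(e) for x in e]        # uniform triplet on the simplex
        i = rng.randrange(3)               # uniform row index
        M = [[1.0 if r == c else 0.0 for c in range(3)] for r in range(3)]
        M[i] = s                            # row i replaced by the triplet
        P = [[sum(P[r][k] * M[k][c] for k in range(3)) for c in range(3)]
             for r in range(3)]
    return P

rng = random.Random(0)
P = random_product(500, rng)
# the three rows of the product are (numerically) identical
spread = max(abs(P[r][c] - P[0][c]) for r in range(3) for c in range(3))
assert spread < 1e-6
```

Each factor is a stochastic matrix, so the product is stochastic and its rows are probability vectors; the coalescence of the rows reflects the a.s. convergence asserted by Proposition 3.4.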

Henceforth, we will leave out the subscript wherever there is no risk of confusion. Combining this and Fact 3.2 gives us the following result.

###### Proposition 3.3.

Let be as in the statement of Theorem 3.1. Write for the corresponding node in the coordinate-labelled tree , and for the coordinates of . Then for , conditionally to the height of , the law of is given by the -th row of the product where the are i.i.d random variables with law (the being defined as in (4)).

Now to get the desired convergence of , it is of course sufficient to show the convergence of the sequence of coordinates where is the barycentric coordinates of the point . By Theorem 2.8(1) the law of is where is a splitting ratio with distribution , independent from . The previous proposition gives us the law of . Moreover, for any , tends to zero as goes to infinity. Thus, to prove Theorem 3.1, it is sufficient to show the following.

###### Proposition 3.4.

Let be i.i.d. random variables with law . Then the product converges a.s. as to some random matrix whose three lines are identical.

###### Proof.

We write . We wish to show that there exists such that a.s. for all , .

###### Lemma 3.5.

Let be some subsequence of integers such that a.s. for all . Then a.s..

###### Proof.

We proceed by contradiction. To simplify notation we assume that a.s. for . Write for the three points whose respective coordinates are . Similarly write for the three points with respective coordinates . We may assume that , and from now on work conditionally to this event.

Fix some such that . Now there exists such that for any , we have for . Thus by construction the balls and do not intersect, and for . Define , where is a splitting ratio, independent from . Then