A convergence result on the lengths of Markovian loops
Abstract
Consider a sequence of Poisson point processes of nontrivial loops with certain intensity measures, each explicitly determined by the transition probabilities of a random walk on a finite state space together with an additional killing parameter. We are interested in the asymptotic behavior of typical loops. Under general assumptions, we study the asymptotics of the length of a loop sampled from the normalized intensity measure. A typical loop is small when the killing parameter tends to zero slowly and extremely large when it tends to zero very fast; in the intermediate regime, we observe both small and extremely large loops. We obtain explicit formulas for the asymptotics of the mass of the intensity measures, the asymptotics of the proportion of big loops, and limit results on the number of vertices (with multiplicity) visited by a sampled loop. We verify our general assumptions for random walk loop soups on discrete tori and truncated regular trees. Finally, we consider random walk loop soups on complete graphs. Here, our general assumptions are violated, and we observe a different asymptotic behavior of the length of a typical loop.
1 Introduction
Poisson ensembles of Markovian loops were introduced informally by Symanzik [Sym69] and then by Lawler and Werner [LW04] for 2D Brownian motion. An extensive investigation of loop soups on finite and infinite graphs was carried out by Le Jan [Le 11] for reversible Markov processes, and then by Sznitman [Szn12] in relation to random interlacements (see [Szn10] or [DRS14]).
It is natural to study the typical behavior of loops in loop soups. The following questions were raised by Le Jan: for a loop sampled according to the normalized intensity measure, what can one say about its typical size? In particular, does the loop cover a positive proportion of the space? These questions are related to the length of the loop (i.e. the number of vertices, counted with multiplicity, visited by the loop). Indeed, by [AKL79], the cover time for the SRW on a general graph with vertex set $V$ is $O(|V|^3)$, which implies that a randomly chosen Markovian loop visits all the vertices with high probability if its length exceeds a certain power of $|V|$. This motivates our study of the lengths of loops.
Given a sequence of Poisson point processes of loops on a sequence of increasing graphs, we are interested in the typical length of the loops. The intensity measures considered in the paper are given by Markov chains with uniform killing rates. When the killing rates go to zero superexponentially, typical loops are extremely large. On the other hand, when the killing rates decrease to zero subexponentially fast, the length of a randomly chosen loop is tight. We are particularly interested in the intermediate case where two types of typical loops appear simultaneously: loops of bounded size and loops of size much bigger than the size of the state space of Markov chains.
Following [Le 11], we consider measures on the space of loops associated with these Markov chains. We first introduce the notions of based loops and loops. By a based loop, we mean a finite sequence $(x_1,\ldots,x_k)$ of states. Two based loops are equivalent if they coincide up to a circular permutation, e.g. $(x_1,x_2,\ldots,x_k)$ and $(x_2,\ldots,x_k,x_1)$. A loop is an equivalence class of based loops. The based loop functionals that we will be interested in are loop functionals, i.e. they are invariant under circular permutations. One example is the length of a based loop $(x_1,\ldots,x_k)$, which is defined to be $k$. We use the same notation for the length of a loop. We define the mass of a based loop under the based loop measure by
(1) $\dot\mu_n\big((x_1,\ldots,x_k)\big)=\frac{1}{k}\prod_{i=1}^{k}(1-c_n)\,P^{(n)}_{x_i,x_{i+1}}$, with the convention $x_{k+1}=x_1$,
where $\big(P^{(n)}_{x,y}\big)$ are the transition probabilities of an irreducible Markov chain on a finite state space $V_n$ and $c_n\in(0,1)$
plays the role of killing for the Markov chain.
We normalize and get a probability measure . The corresponding pushforward measures on the space of loops are denoted by and . We refer to [Le 11] and [Szn12] for more details on Markovian loop measures.
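The normalization constant can be checked numerically: under the convention adopted here, summing the mass in (1) over based loops of length $k$ gives $\frac{1}{k}\mathrm{tr}\big(((1-c)P)^k\big)$, so the total mass is $-\log\det(I-(1-c)P)$ (cf. [Le 11, Eq. (2.5)]). A minimal sketch, with a 4-state cycle walk standing in for the chain and an illustrative killing parameter $c$:

```python
import numpy as np

# SRW on a cycle of 4 vertices (an arbitrary small example).
P = np.array([[0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0]])
c = 0.2                  # illustrative uniform killing parameter
Q = (1 - c) * P          # killed transition matrix

# Total mass of the loop measure: sum over lengths k of tr(Q^k)/k,
# which should agree with -log det(I - Q).
mass_series = sum(np.trace(np.linalg.matrix_power(Q, k)) / k
                  for k in range(1, 300))
mass_det = -np.log(np.linalg.det(np.eye(4) - Q))
print(mass_series, mass_det)  # the two agree to high precision
```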
The main object of interest in this paper is the limiting law of the length of a loop sampled from the normalized measure. Let
$$\nu_n=\frac{1}{|V_n|}\sum_{i=1}^{|V_n|}\delta_{\lambda^{(n)}_i}$$
be the empirical distribution of the eigenvalues $\lambda^{(n)}_1,\ldots,\lambda^{(n)}_{|V_n|}$ of the transition matrix $P^{(n)}$ on the state space $V_n$, where $\delta_{\lambda}$ is the Dirac mass at $\lambda$. By the Perron-Frobenius theorem, $\nu_n$ is supported in the closed unit disk. By tightness, there always exists a convergent subsequence of $(\nu_n)_{n\ge 1}$. Therefore, we assume as hypothesis (H1) that
(2) $\nu_n$ converges weakly to a probability measure $\nu$ as $n\to\infty$.
By the Perron-Frobenius theorem, there exists a unique stationary distribution $\pi^{(n)}$ for each $n$. Our second hypothesis (H2) is that these chains are more or less similar to Markov chains on graphs of uniformly bounded degrees: there exist constants, independent of $n$, such that
(3) 
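Hypothesis (H1) concerns the empirical eigenvalue distributions $\nu_n$. As a small illustration, such a measure can be computed directly; the cycle below is the $d=1$ torus treated in Section 3, whose limiting spectral measure is the arcsine law on $[-1,1]$:

```python
import numpy as np

def empirical_spectral_measure(P):
    """Eigenvalues of the transition matrix P, as an empirical sample.

    Each eigenvalue carries mass 1/|V|; for a reversible chain the
    eigenvalues are real and lie in [-1, 1].
    """
    return np.sort(np.linalg.eigvals(P).real)

# SRW on the cycle Z_n: eigenvalues are cos(2*pi*j/n), j = 0..n-1.
n = 200
P = np.roll(np.eye(n), 1, axis=1) / 2 + np.roll(np.eye(n), -1, axis=1) / 2
eigs = empirical_spectral_measure(P)

# Moments of the empirical measure converge; the second moment of the
# arcsine limit on [-1, 1] is 1/2.
print(np.mean(eigs**2))  # ~ 0.5
```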
Our main result is a limit result on the typical size of a loop sampled from .
Theorem 1.1.
We suppose (H1) and (H2). Let be such that and that as increases to infinity. Then,

,

where is the total mass of ,

,

,

.
In Sections 3 and 4, we compute these limits explicitly on tori and trees. The proof of Theorem 1.1 is based on an upper bound on the transition functions of Markov processes and a direct calculation.
In contrast, when the underlying chain is the simple random walk on the complete graph, Theorem 1.1 is not applicable, as (H2) fails. By an explicit calculation, we get:
Theorem 1.2.
Suppose that is the simple random walk on a complete graph of vertices and for some . Then, we have that and that .
By a comparison with the coupon collector problem, we get
Corollary 1.3.
In the same setting as Theorem 1.2, let be a Poisson point process of loops with intensity . Then, converges in distribution to a Poisson random variable with mean , where stands for the set of loops that cover all the vertices of the graph.
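The coupon collector comparison behind Corollary 1.3 can be made concrete: by inclusion-exclusion, $t$ uniform draws from $N$ coupons collect all of them with probability $\sum_{j=0}^{N}(-1)^j\binom{N}{j}(1-j/N)^t$, and this probability jumps from near $0$ to near $1$ around $t=N\log N$. A sketch (the parameters are illustrative):

```python
from math import comb

def cover_probability(N, t):
    """P(all N coupons seen in t uniform draws), by inclusion-exclusion."""
    return sum((-1) ** j * comb(N, j) * (1 - j / N) ** t
               for j in range(N + 1))

N = 10
# Around t = N log N (about 23 here) the cover probability transitions
# from nearly 0 to nearly 1.
for t in (10, 23, 60):
    print(t, cover_probability(N, t))
```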
Organization of the paper: Section 2 is devoted to the proof of Theorem 1.1. We then analyze two examples and calculate the limit measures: random walk loop soups on discrete tori in Section 3 and random walk loop soups on balls in a regular tree in Section 4. Finally, we consider random walk loop soups on complete graphs and prove Theorem 1.2 and Corollary 1.3 in the last section.
2 Proof of Theorem 1.1
For simplicity, we assume that the random walks are aperiodic; the argument in the periodic case is quite similar and is left to the reader. The key to the proof of Theorem 1.1 is the heat kernel bound in [MP05], which is stated for lazy random walks (by lazy random walks, we mean random walks with transition probabilities $\alpha I+(1-\alpha)P$, where $\alpha\ge 1/2$ and $P$ is the transition matrix of some random walk). Under (H2), the laziness assumption is satisfied, and [MP05, Theorem 4] gives an upper bound on the transition probabilities. This bound is then used to control the rate of convergence towards equilibrium; see Lemma 2.1. Theorem 1.1 then follows from explicit calculations.
By applying [MP05, Theorem 5] for , we have the following lemma, which is implied by [AF02, Proposition 6.18] for finite regular graphs.
Lemma 2.1.
Under , we have that for ,
In particular, .
Proof.
Note that is an eigenvalue of and that
Also, note that is supported inside the unit disk centered at and hence is nonincreasing in . Hence, it is sufficient to bound for each . Since singular values dominate eigenvalues in the norm (see e.g. [HJ91, Theorem 3.3.13 b)]), we get that , where is the transpose of and
(Note that has the same eigenvalues as .) Following [MP05, Eq. 14], for two measures and , we write
Then, is exactly . As we will explain in detail below, by applying [MP05, Theorem 4] to , we get that
and hence . To apply [MP05, Theorem 4], it suffices to verify that
(4) 
where and is defined in [MP05, Eq. 13]. By [MP05, Lemma 3],
where and
Under our assumption (H2), , and for with . (By a slightly more careful estimate, one could get that and .) Hence, we have that
and (4) follows. ∎
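The matrix-analytic ingredient of the proof, that singular values dominate eigenvalues ([HJ91, Theorem 3.3.13]), asserts that $\prod_{i\le k}|\lambda_i|\le\prod_{i\le k}\sigma_i$ and hence $\sum_i|\lambda_i|^p\le\sum_i\sigma_i^p$ for $p>0$. This can be sanity-checked numerically on an arbitrary (non-normal) matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))   # arbitrary non-normal test matrix

lam = np.sort(np.abs(np.linalg.eigvals(A)))[::-1]  # |eigenvalues|, decreasing
sig = np.linalg.svd(A, compute_uv=False)           # singular values, decreasing

# Weyl's inequalities: partial products and p-th power sums of the
# |eigenvalues| are dominated by those of the singular values.
for k in range(1, 7):
    assert np.prod(lam[:k]) <= np.prod(sig[:k]) + 1e-12
for p in (1.0, 2.0):
    assert (lam ** p).sum() <= (sig ** p).sum() + 1e-12
print("singular values dominate eigenvalues")
```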
Proof of Theorem 1.1.

We give a brief indication and leave the details to the reader. By , it suffices to show that . By the definition of , we write that
The second summand is the major term, which is asymptotically equivalent to by our assumptions on and . The first term is as by Lemma 2.1.

The proofs are similar and we leave them to the reader.
∎
Remark 1.
By a direct calculation, the total mass of the loop measure is $\log\frac{1}{\det\big(I-(1-c_n)P^{(n)}\big)}$, see [Le 11, Eq. (2.5)]. Note that $\det\big(I-(1-c_n)P^{(n)}\big)$ is the partition function of weighted spanning trees rooted at the cemetery point of a Markov chain with transition probabilities $(1-c_n)P^{(n)}$, see [Le 11, Section 8.2]. The weight of a tree is the product of the weights on the edges directed towards the root, with the convention that an edge from a vertex to the cemetery point has weight $c_n$. Hence, by using the crude lower bound (given by a single tree) on the partition function of weighted spanning trees, we get a lower bound on the partition function and hence an upper bound on the total mass. For a reversible chain (i.e. $\pi^{(n)}(x)P^{(n)}_{x,y}=\pi^{(n)}(y)P^{(n)}_{y,x}$ for all $x,y$), the measures $\nu_n$ and $\nu$ are supported on $[-1,1]$ and Theorem 1.1 a) is improved:
3 Example: discrete tori
We calculate the limiting probability measures for simple random walks on the discrete tori $(\mathbb{Z}/n\mathbb{Z})^d$. We denote by $P^{(n)}$ the corresponding transition probabilities. Then, (H2) holds. The eigenvalues of the simple random walk on $\mathbb{Z}/n\mathbb{Z}$ are $\cos(2\pi j/n)$, $j=0,\ldots,n-1$, see e.g. [LPW09, Subsection 12.3.1]. Since the walk on the torus is a product chain in the sense of [LPW09, Eq. 12.19], its eigenvalues are $\frac{1}{d}\big(\cos(2\pi j_1/n)+\cdots+\cos(2\pi j_d/n)\big)$, where $j_1,\ldots,j_d$ take values in $\{0,\ldots,n-1\}$. Rewrite these real eigenvalues in nondecreasing order . Define and . Then . For all ,
Hence, in the limit, the spectral measure is the ($1/d$-rescaled) $d$-fold convolution of arcsine distributions on $[-1,1]$:
and that
which equals when and equals when , where is Catalan's constant. We refer to [Kas61] or [Mon64] for the evaluation when . By Theorem 1.1 and the previous calculations, for , we have that and that . Hence, for , if we take a Poisson point process of loops with intensity , then as , converges to a Poisson point process on with intensity measure , where denotes the Lebesgue measure, and converges to a Poisson point process on with intensity measure . (Indeed, the sequence could be replaced by any sequence of positive integers such that and that .)
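The spectral computation above is easy to verify numerically: assembling the $d=2$ torus spectrum from the one-dimensional cosines, the second moment of the empirical measure matches that of the average of two independent arcsine variables, namely $1/4$:

```python
import numpy as np

n, d = 60, 2
one_dim = np.cos(2 * np.pi * np.arange(n) / n)   # spectrum of SRW on Z_n

# Eigenvalues of SRW on (Z_n)^2: (cos(2*pi*j1/n) + cos(2*pi*j2/n)) / 2.
eigs = (one_dim[:, None] + one_dim[None, :]).ravel() / d

# Each arcsine factor has mean 0 and second moment 1/2, so the average
# of two independent factors has second moment (1/2 + 1/2) / 4 = 1/4.
print(np.mean(eigs**2))   # ~ 0.25
```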
4 Example: balls in a regular tree
We consider the SRW on a truncated regular tree and calculate the limiting spectral measure. More precisely, let $\mathbb{T}$ be an infinite regular tree of degree $d$. Fix a vertex $o$, let $\mathbb{T}_n$ be the ball of radius $n$ centered at $o$, and consider the SRW on it with the corresponding transition probabilities. Let $\nu_n$ be the empirical distribution of the eigenvalues of the transition matrix. By choosing a root uniformly within $\mathbb{T}_n$, we obtain a sequence of random rooted graphs which converges locally to a canopy tree, where the root is at distance $k$ from the boundary (the boundary consists of the leaves, i.e. the vertices of degree $1$) with a probability decaying geometrically in $k$; see Figure 1 for an illustration. Indeed, by explicit calculations, the rooted graphs converge in distribution. Moreover, for all positive integers $r$, conditionally on the root being at distance $k$ from the boundary, the distribution of the ball of radius $r$ around the root equals that of the corresponding ball inside the canopy tree for $n$ large enough.
For simplicity of notation, we write instead of . The notion of local convergence was introduced in [BS01] and [AL07]. By [ATV11], it implies the convergence of the spectral measures . More precisely, such that
and is the probability measure supported on the unit disk such that for vertices of distance from ,
where is the SRW on the canopy tree. Hence, .
By [AW06, Theorem 1.4], the transition kernel of the SRW on the canopy tree acting on has only point spectrum, with compactly supported eigenfunctions. In that proof, M. Aizenman and S. Warzel used an idea from [AF00] on the decomposition of the space into invariant subspaces. To be more precise, let be the transition probabilities of the SRW on the canopy tree and let be a reference measure defined by for a vertex in the canopy tree. Note that the chain is reversible with respect to this measure and that the two spaces coincide as sets. As in [AW06], for a vertex , we define a finite subtree at :
and we have an orthogonal decomposition of as follows:
where means the orthogonal complement and denotes the subspace of symmetric functions supported on the forward subtree :
Then, for each , is an invariant space for the map such that . We will describe the eigenvalues and the corresponding eigenfunctions in each where is of distance from . Consider the following transition probabilities on :
Note that
(5) 
where are Chebyshev polynomials of the second kind, defined by the identity $U_k(\cos\theta)=\frac{\sin((k+1)\theta)}{\sin\theta}$.
Hence, has different eigenvalues which are exactly times the zeros of
(6) 
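The Chebyshev identity $U_k(\cos\theta)=\sin((k+1)\theta)/\sin\theta$ places the zeros of $U_k$ at $\cos\big(j\pi/(k+1)\big)$, $j=1,\ldots,k$; a sketch via the three-term recurrence (this checks the identity itself, not the specific combination appearing in (6)):

```python
import numpy as np

def chebyshev_U(k, x):
    """Chebyshev polynomial of the second kind, via U_{m+1} = 2x U_m - U_{m-1}."""
    u_prev, u = np.ones_like(x), 2 * x   # U_0 and U_1
    if k == 0:
        return u_prev
    for _ in range(k - 1):
        u_prev, u = u, 2 * x * u - u_prev
    return u

k = 5
zeros = np.cos(np.arange(1, k + 1) * np.pi / (k + 1))  # known zeros of U_k
print(np.max(np.abs(chebyshev_U(k, zeros))))  # ~ 0: U_k vanishes there
```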
Denote by the corresponding eigenfunctions. Next, take a real orthogonal matrix with for . List the neighboring vertices of in : . For , and , we define a function supported on by taking the value on each . Then, these are eigenfunctions associated with the eigenvalue , and they form an orthogonal basis of . From the spectral representation of the transition probabilities, we obtain the following:
Remark 2.
Let be the zeros of (6). Then,
and . (Similarly, by (5), we could express the transition probabilities by an infinite sum of functions involving Chebyshev polynomials and logarithms; however, we have no closed-form expression for the moments.) Note that . A quick way to view this is through the connection with random rooted spanning trees with all edges directed towards the root. Indeed, it is the total mass of directed spanning trees rooted at the cemetery point, where the weight of a tree is the product of the weights on the directed edges of that tree. In this particular case, there is only one rooted tree, with weight . Hence, we get that . If we are only interested in the total mass, then a simpler way is to use the relation with spanning trees. Indeed, by (1), we have that
where is the vertex set of the th tree in the sequence and equals the total mass of directed spanning trees rooted at the cemetery point. (Here, we view as the killing rate, i.e. the jumping rate to the cemetery point.) For our choice of , the mass is concentrated on trees in which the root has only one neighbor. The total mass of such trees is simply . Hence,
(7) 
which equals for a sequence of balls in a regular tree. For a general sequence of trees, when the average degrees are bounded, are also bounded by concavity, and (7) holds whenever the limit exists. In this case, the balls of radius $r$ around a uniformly chosen root are tight for each $r$ since are tight. (One could show this by induction on $r$ and by using the fact that a uniformly chosen neighbor of a uniformly chosen vertex is a uniformly chosen vertex.) Hence, the rooted graphs are tight and a locally convergent subsequence exists. One can get similar results along that convergent subsequence.
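The determinant-spanning-tree identity invoked in this section can be verified by brute force on a small chain. Writing $P$ for a transition matrix and $c$ for the killing parameter (both chosen arbitrarily here), $\det\big(I-(1-c)P\big)$ equals the total weight of directed spanning trees rooted at the cemetery point, where an edge $x\to y$ carries weight $(1-c)P_{x,y}$ and an edge to the cemetery carries weight $c$:

```python
import itertools
import numpy as np

# A small irreducible chain with uniform killing (arbitrary example).
P = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [0.4, 0.6, 0.0]])
c = 0.3
Q = (1 - c) * P
n = 3
DAGGER = n  # index of the cemetery point

def tree_weight(parents):
    """Weight of the digraph x -> parents[x], or 0 if it is not a spanning
    tree rooted at the cemetery (i.e. if some vertex never reaches it)."""
    w = 1.0
    for x, y in enumerate(parents):
        w *= c if y == DAGGER else Q[x, y]
    for x in range(n):          # every vertex must reach the cemetery
        seen, v = set(), x
        while v != DAGGER:
            if v in seen:
                return 0.0      # a cycle: not a spanning tree
            seen.add(v)
            v = parents[v]
    return w

# Sum over all parent assignments = partition function of rooted trees.
partition = sum(tree_weight(p)
                for p in itertools.product(range(n + 1), repeat=n)
                if all(p[x] != x for x in range(n)))
print(partition, np.linalg.det(np.eye(n) - Q))  # the two coincide
```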
5 Complete graphs
Let be the complete graph with vertex set and be a SRW on it.
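The spectrum relevant to Theorem 1.2 is completely explicit: the transition matrix of the SRW on the complete graph $K_N$ is $(J-I)/(N-1)$, with $J$ the all-ones matrix, so its eigenvalues are $1$ (simple) and $-1/(N-1)$ (multiplicity $N-1$), and the empirical spectral measure collapses to a Dirac mass at $0$ as $N\to\infty$. A quick numerical check (the size $N=50$ is arbitrary):

```python
import numpy as np

N = 50
# SRW on the complete graph K_N: uniform jumps to the other N-1 vertices.
P = (np.ones((N, N)) - np.eye(N)) / (N - 1)

eigs = np.sort(np.linalg.eigvalsh(P))
# One eigenvalue 1, and -1/(N-1) with multiplicity N-1.
print(eigs[-1], eigs[0])   # ~ 1.0 and ~ -1/(N-1)
```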
5.1 Proof of Theorem 1.2
5.2 Proof of Corollary 1.3
As before, we denote by a simple random walk on the complete graph. We denote by the law of