Painting a graph with competing random walks


Department of Mathematics, Stanford University
Stanford, California 94305
USA
Received March 2010; revised April 2011.
Abstract

Let $X_1, X_2$ be independent random walks on $\mathbb{Z}_n^d$, $d \geq 3$, each starting from the uniform distribution. Initially, each site of $\mathbb{Z}_n^d$ is unmarked, and, whenever $X_i$ visits such a site, it is set irreversibly to $i$. The mean of $|\mathcal{A}|$, the cardinality of the set $\mathcal{A}$ of sites painted by $X_1$, once all of $\mathbb{Z}_n^d$ has been visited, is $\frac{1}{2} n^d$ by symmetry. We prove the following conjecture due to Pemantle and Peres: for each $d \geq 3$ there exists a constant $\alpha_d$ such that $\lim_{n \to \infty} \operatorname{Var}(|\mathcal{A}|)/h_d(n) = \frac{1}{4}\alpha_d$ where $h_3(n) = n^4$, $h_4(n) = n^4 \log n$, and $h_d(n) = n^d$ for $d \geq 5$. We will also identify $\alpha_d$ explicitly and show that $\alpha_d \to 1$ as $d \to \infty$. This is a special case of a more general theorem which gives the asymptotics of $\operatorname{Var}(|\mathcal{A}|)$ for a large class of transient, vertex transitive graphs; other examples include the hypercube and the Cayley graph of the symmetric group generated by transpositions.

Ann. Probab., Volume 41, Number 2 (2013), 636-670. DOI: 10.1214/11-AOP713.


Jason Miller (jmiller@math.stanford.edu)

Supported in part by NSF Grants DMS-04-06042 and DMS-08-06211.

AMS subject classifications. 60G50, 60F99.
Keywords and phrases. Random walk, competing random walks, variance.

1 Introduction

Suppose that $X_1, X_2$ are independent random walks on a graph $G$ starting from stationarity. Initially, each vertex of $G$ is unmarked, and, whenever $X_i$ visits such a site, it is marked irreversibly $i$. If both $X_1$ and $X_2$ visit a site for the first time simultaneously, then the mark is chosen by the flip of an independent fair coin. Let $\mathcal{A}_i$ be the set of sites marked $i$ once every vertex of $G$ has been visited. By symmetry, it is obvious that $\mathbf{E}|\mathcal{A}_i| = \frac{1}{2}|G|$. The purpose of this manuscript is to derive precise asymptotics for $\operatorname{Var}(|\mathcal{A}_i|)$ for many families of graphs.
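The painting process just described is easy to simulate. The sketch below (illustrative code, not from the paper; `paint_torus` is a hypothetical name) runs two lazy simple random walks on the torus $\mathbb{Z}_n^d$ and paints sites irreversibly. As a simplification, the walks move in alternation rather than in parallel, so the simultaneous-hit coin flip never arises.

```python
import random

def paint_torus(n, d, seed=0):
    """Simulate the competitive painting process on Z_n^d with two
    lazy simple random walks started uniformly at random.

    Each walk irreversibly marks every previously unmarked site it
    visits with its own color (1 or 2).  Returns |A_1|, the number of
    sites painted by walk 1 once every site has been visited.
    """
    rng = random.Random(seed)
    mark = {}
    pos = [tuple(rng.randrange(n) for _ in range(d)) for _ in range(2)]
    for i in (0, 1):
        # Starting sites are painted immediately; if both walks start
        # at the same site, walk 1 keeps it (setdefault never overwrites).
        mark.setdefault(pos[i], i + 1)
    total = n ** d
    while len(mark) < total:
        for i in (0, 1):
            if rng.random() < 0.5:   # lazy step: hold with probability 1/2
                continue
            axis = rng.randrange(d)
            p = list(pos[i])
            p[axis] = (p[axis] + rng.choice((-1, 1))) % n
            pos[i] = tuple(p)
            mark.setdefault(pos[i], i + 1)
    return sum(1 for c in mark.values() if c == 1)
```

Averaging `paint_torus` over many independent seeds gives an empirical check that $\mathbf{E}|\mathcal{A}_1| = \frac{1}{2} n^d$; the empirical variance of the same sample is exactly the quantity whose asymptotics the paper determines.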

The process by which a single random walk covers a graph has been studied extensively. Examples of interesting statistics include the expected amount of time it takes for the random walk to visit every site [M88, DPRZCOV04], the growth exponent of the set of sites visited most frequently [DPRZTHICK01] and the clustering and correlation structure of the last visited points [BH91, DPRZLATE06, MP11]. The motivation for this work is to understand better how multiple random walks cover a graph.

The investigation of the statistical properties of $\mathcal{A}_i$ was first proposed in the work of Gomes Jr. et al. [JLSH96]. Their motivation was to study the technical challenges associated with physical problems involving interacting random walks. They estimate the growth of $\mathbf{E}|\partial \mathcal{A}_1|$, where $\partial \mathcal{A}_1$ is the interface separating $\mathcal{A}_1$ from $\mathcal{A}_2$, in the special case of the one-cycle $\mathbb{Z}_n$. As with $\operatorname{Var}(|\mathcal{A}_1|)$, computing $\mathbf{E}|\partial \mathcal{A}_1|$ for $\mathbb{Z}_n^d$ becomes trivial for $d \geq 3$ since it is easy to see that, with probability strictly between $0$ and $1$, for any pair of adjacent vertices $x, y$, $X_1$ will hit $x$ before $X_2$ does, conditional on the event that $X_2$ hits $y$ first. On the other hand, estimating $\operatorname{Var}(|\mathcal{A}_1|)$ in this setting is challenging since its expansion in terms of correlation functions exhibits significant cancellation which, when ignored, leads to bounds that are quite imprecise. We will develop this point further at the end of the Introduction.

The problem we consider here was formulated by Hilhorst, though in a slightly different setting. Rather than considering the sets $\mathcal{A}_1, \mathcal{A}_2$ of sites first painted by $X_1, X_2$, respectively, it is also natural to study the sets $\widetilde{\mathcal{A}}_1, \widetilde{\mathcal{A}}_2$ of sites most recently painted by $X_1, X_2$, respectively. In other words, in the latter formulation the constraint that the marks are irreversible is removed. It turns out that these two classes of problems are equivalent, which is to say that $|\mathcal{A}_i|$ and $|\widetilde{\mathcal{A}}_i|$ have the same distribution. This helpful observation, which follows from the time-reversibility of random walk, was made and communicated to us by Comets.

We restrict our attention to lazy walks to avoid issues of periodicity, and in particular to ensure that the random walk has a unique stationary distribution. That is, the one-step transition kernel is given by

\[ p(x,y) = \begin{cases} \dfrac{1}{2}, & x = y,\\[3pt] \dfrac{1}{2\deg(x)}, & x \sim y,\\[3pt] 0, & \text{otherwise,} \end{cases} \]

where $x \sim y$ means that $x$ is adjacent to $y$ in $G$. The particular choice of $\frac{1}{2}$ for the holding probability is not important for the proof; indeed, any fixed holding probability in $(0,1)$ would suffice. Our proofs also work in the setting of continuous time walks. Let $p^t(x,y)$ be the $t$-step transition kernel of a lazy random walk on $G$ and $\pi$ its unique stationary distribution.
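To make the lazy kernel concrete, here is a small self-contained check (illustrative code, not from the paper; `lazy_kernel_cycle` and `is_stationary_uniform` are hypothetical names) on the cycle $\mathbb{Z}_n$, where every vertex has degree $2$, assuming holding probability $\frac{1}{2}$: the walk holds with probability $\frac{1}{2}$ and moves to each neighbour with probability $\frac{1}{4}$, and the uniform distribution is stationary because the kernel is doubly stochastic.

```python
from fractions import Fraction

def lazy_kernel_cycle(n):
    """Exact one-step kernel p(x, y) of the lazy simple random walk on
    the cycle Z_n: p(x, x) = 1/2 and p(x, y) = 1/(2 deg(x)) = 1/4 for
    each of the two neighbours of x (folded together when n = 2)."""
    P = [[Fraction(0)] * n for _ in range(n)]
    for x in range(n):
        P[x][x] += Fraction(1, 2)
        P[x][(x + 1) % n] += Fraction(1, 4)
        P[x][(x - 1) % n] += Fraction(1, 4)
    return P

def is_stationary_uniform(P):
    """pi = uniform satisfies pi P = pi iff every column of P sums to
    1, i.e. the kernel is doubly stochastic."""
    n = len(P)
    return all(sum(P[x][y] for x in range(n)) == 1 for y in range(n))
```

The same check works verbatim on any regular graph; on an irregular graph the stationary distribution is proportional to the degrees rather than uniform.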

Our main result is the precise asymptotics for $\operatorname{Var}(|\mathcal{A}_1|)$ on tori $\mathbb{Z}_n^d$ of dimension at least three, thus verifying a conjecture due to Pemantle and Peres ([DICKERTHESIS], page 35).

Theorem 1.1

Suppose that $G = \mathbb{Z}_n^d$, $d \geq 3$. There exists a finite constant $\alpha_d > 0$ such that

\[ \lim_{n \to \infty} \frac{\operatorname{Var}(|\mathcal{A}_1|)}{h_d(n)} = \frac{\alpha_d}{4}, \]

where $h_3(n) = n^4$, $h_4(n) = n^4 \log n$ and $h_d(n) = n^d$ for $d \geq 5$.

Our proof allows us to identify $\alpha_d$ explicitly, and it is given as follows. Let

\[ G(x,y) = \sum_{t=0}^{\infty} p^t(x,y) \tag{1} \]

be the Green’s function for lazy random walk on $\mathbb{Z}^d$. This is the expected amount of time a random walk initialized at $x$ spends at $y$ before escaping to $\infty$. For $d \geq 5$,

(2)

It is not difficult to see that $\alpha_d \to 1$ as $d \to \infty$, so that for $d \geq 5$ and large $d$, $\operatorname{Var}(|\mathcal{A}_1|)$ is close to the variance of an i.i.d. marking. For $d = 4$,

(3)

we will explain why this limit exists and is positive and finite in Proposition 2.1. The definition of $\alpha_3$ is slightly more involved. Let $\mathbb{T}^3$ denote the three-dimensional continuum torus, $\bar{p}_t(x,y)$ the transition kernel for Brownian motion on $\mathbb{T}^3$ and

Now set

(4)

The reason that the limit exists and is positive and finite is that $\bar{p}_t(x,y)$ converges to the uniform density exponentially fast in $t$; see Proposition 3.1 for a discrete version of this statement.
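For comparison, the i.i.d. benchmark invoked above can be computed directly (a standard calculation, not an excerpt from the paper): if instead each of the $n^d$ sites were marked $1$ or $2$ by an independent fair coin, the number of sites marked $1$ would be Binomial$(n^d, \frac{1}{2})$, so

```latex
\[
\operatorname{Var}\Bigl(\sum_{x \in \mathbb{Z}_n^d} \mathbf{1}_{\{x \text{ marked } 1\}}\Bigr)
 = \sum_{x \in \mathbb{Z}_n^d} \frac{1}{2}\Bigl(1 - \frac{1}{2}\Bigr)
 = \frac{n^d}{4}.
\]
```

This is the scale against which the theorem should be read: for $d \geq 5$ the competitive painting has variance of the same order $n^d$, while for $d = 3, 4$ it is strictly larger.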

Throughout the rest of the article, for functions $f, g \geq 0$, we say that $f = O(g)$ if there exists a constant $C > 0$ such that $f(n) \leq C g(n)$ for all $n$. We say that $f = \Omega(g)$ if $g = O(f)$. We say that $f \asymp g$ if $f = O(g)$ and $f = \Omega(g)$. Finally, we say $f \sim g$ if $\lim_{n \to \infty} f(n)/g(n) = 1$.

We note that the problem for $d = 1$ is trivial: $\operatorname{Var}(|\mathcal{A}_1|) \asymp n^2$. Indeed, observe that with positive probability, the distance between $X_1$ and $X_2$ at time $0$ is at least $\frac{n}{4}$. In $C n^2$ steps (for $C > 0$ large enough), $X_1$ has positive probability of covering the entire cycle while $X_2$ has positive probability of not leaving an interval of length $\frac{n}{8}$ containing its starting point. On this event, $|\mathcal{A}_1| - \frac{n}{2} \asymp n$. This proves our claim as the upper bound $\operatorname{Var}(|\mathcal{A}_1|) = O(n^2)$ is trivial. For $d = 2$, the asymptotics of $\operatorname{Var}(|\mathcal{A}_1|)$ remains open.

One interesting remark is that the variance for $d = 3, 4$ is significantly higher than that of an i.i.d. marking. The results of Theorem 1.1 should also be contrasted with the behavior of the variance of the range $R(t)$ of random walk on $\mathbb{Z}_n^d$ run up to $t = t_{\mathrm{cov}}$, the cover time of $\mathbb{Z}_n^d$, which is the expected amount of time it takes for a single random walk to visit every site. When $d \geq 3$, $t_{\mathrm{cov}} \asymp n^d \log n$; see [LPW08]. For $d \geq 5$, it follows from work of Jain and Orey [JO68] that $\operatorname{Var}(|R(t)|) \asymp t$. For $d = 3, 4$, it follows from work of Jain and Pruitt [JP70] that $\operatorname{Var}(|R(t)|)$ is $\asymp t \log t$ and $\asymp t$, respectively.

This work opens the doors to many other problems involving two random walks. Natural next steps include CLTs for the fluctuations of $|\mathcal{A}_1|$ and for the number of sites painted by $X_1$ at time $t$, as well as the development of an understanding of the geometrical properties of the clusters of $\mathcal{A}_1$. The latter seem to be connected to the theory of random interlacements. This is a model developed by Sznitman in [S10] to describe the microscopic structure of the points visited by a random walk on $\mathbb{Z}_n^d$, $d \geq 3$, at times $t \asymp u n^d$ for $u > 0$, that is, when a constant order of vertices have been visited. Roughly speaking, the model is a Poisson point process on $W^* \times [0,\infty)$, where $W^*$ is the space of doubly-infinite paths on $\mathbb{Z}^d$ modulo time-shifts. For a point $(w^*, u)$ realized in this process, one should think of $w^*$ as describing a random walk trajectory (an “interlacement”) and $u$ a time parameter. The model was first developed to study the process of disconnection of a discrete cylinder by random walk [DS06] and has been subsequently applied to understand the fine geometrical structure of random walk in many different settings [W08, W10]. Sznitman’s theory generalizes to the setting of two random walks by labeling each interlacement with an element of $\{1,2\}$ chosen i.i.d. at random. Studying the structure of the clusters in the $\mathcal{A}_i$ using this general theory is an interesting research direction.

Theorem 1.1 is a special case of a much more general result, which gives the asymptotics of $\operatorname{Var}(|\mathcal{A}_1|)$ for many other graphs, such as the hypercube and the Cayley graph of the symmetric group generated by transpositions. We will now review some additional terminology which is necessary to give a precise statement of the result. Recall that the uniform mixing time of random walk on $G$ is

\[ t_{\mathrm{mix}}(G) = \min\Bigl\{ t \geq 0 : \max_{x,y \in G} \Bigl| \frac{p^t(x,y)}{\pi(y)} - 1 \Bigr| \leq \frac{1}{4} \Bigr\} \]

and the Green’s function for $G$ is

\[ G(x,y) = \sum_{t=0}^{t_{\mathrm{mix}}(G)} p^t(x,y), \]

that is, the expected amount of time the walk spends at $y$ up until $t_{\mathrm{mix}}(G)$ when started from $x$. Let $\tau_i(x)$ be the first time $X_i$ hits $x$; we will omit the subscript $i$ if there is only one random walk under consideration. Throughout the rest of the article, we write $t_{\mathrm{mix}}$ for $t_{\mathrm{mix}}(G_n)$. {assumption} $(G_n)$ is a sequence of vertex transitive graphs with $|G_n| \to \infty$ such that: {longlist}

(1) and ;

(2) for each fixed;

(3) there exists $\rho > 0$ so that $\mathbf{P}_x[\tau(y) > t_{\mathrm{mix}}(G_n)] \geq \rho$ uniformly in $n$ and $x, y \in G_n$ distinct.

The purpose of part (1) is that in many cases we will perform union bounds over time-scales whose length is proportional to $t_{\mathrm{mix}}(G_n)$, and the hypothesis gives us explicit control on how the number of terms in these bounds relates to the size of $G_n$. Part (2) gives us control on the tail behavior of $\tau$ and, finally, part (3) says that with uniformly positive probability the walks we consider do not hit adjacent points within the mixing time. Note that vertex transitivity implies that $G(x,x)$ is constant along the diagonal. Part (3) implies that the number of times a random walk started at $x$ returns to $x$ before the mixing time is stochastically dominated by a geometric random variable whose parameter depends only on $\rho$. Consequently, we see that there exists $C < \infty$ such that $G(x,x) \leq C$ uniformly in $n$ and $x \in G_n$.

Assume that $(G_n)$ is a sequence of vertex transitive graphs, and let

(5)
(6)

Note that the definition does not depend on the choice of $x$ since if we replaced $x$ with $y$, by vertex transitivity we may precompose with an automorphism of $G_n$ which sends $y$ to $x$.

The general theorem is:

Theorem 1.2

Suppose that $(G_n)$ satisfies Assumption 1. Let

There exists so that for every , we have

(7)

as $n \to \infty$, where

Applying this to the special cases of the hypercube $\{0,1\}^n$ and the Cayley graph of the symmetric group $S_n$ generated by transpositions leads to the following corollary.

Corollary 1.3

Suppose that $G_n$ is either the hypercube $\{0,1\}^n$ or the Cayley graph of $S_n$ generated by transpositions. Then

\[ \operatorname{Var}(|\mathcal{A}_1|) \sim \tfrac{1}{4} |G_n|. \]

In particular, the first-order asymptotics of the variance are exactly the same as for an i.i.d. marking.

Throughout the remainder of the article, all graphs under consideration shall satisfy Assumption 1. In most examples, it will be that so that the second term in (7) is negligible. In this case, taking in (7) provides a means to compute not only the magnitude but also the constant in the first-order asymptotics of the variance. In some cases, such as $\mathbb{Z}_n^d$, the constant can even be computed when .

The challenge in obtaining Theorems 1.1 and 1.2 is that the cancellation in the expansion of the variance is quite significant and, when ignored, yields only an upper bound that can be off by as much as a multiple of $n^2$. We will now illustrate this point in the case of $\mathbb{Z}_n^d$ for $d \geq 3$. It will turn out that the contribution to the variance from the sites visited by both walks simultaneously is negligible, and hence we will ignore this possibility in the present discussion. Observe

Note that is approximately . Let . Consequently, by symmetry, the above is approximately equal to

The reason for the term is that , so all of the diagonal terms are ignored in the summation. Let be the law of conditional on . As is approximately , using the Markov property of applied for the stopping time , we can rewrite the summation as

Here, denotes the joint law of with and . Thus we need to estimate

(8)

At this point, one is tempted to insert absolute values and then work on each of the summands separately. Since $X_1$ and $X_2$ are independent, note that . Thus by Bayes’ rule, we have

see Theorem 4.1 for a much finer estimate. Hence the expression in (8) is bounded from above by

(9)

where denotes the law of with and .

It is a basic fact that $G(x,y) \asymp |x-y|^{2-d}$; one way to see this is to invoke the local central limit theorem ([LAW91], Theorem 1.2.1). We can analyze the relevant hitting probability as follows. We consider two different cases: either $x$ is hit before time $t_{\mathrm{mix}}$ or afterward. The probability that $X_2$ hits $x$ before $t_{\mathrm{mix}}$ is of order by a union bound since for all . Second, by the local transience of random walk on $\mathbb{Z}_n^d$ for $d \geq 3$, the probability that $X_2$ hits $x$ before $t_{\mathrm{mix}}$ is, up to a multiplicative constant, well approximated by . We now consider the second case. By time $t_{\mathrm{mix}}$, for $n$ large enough, $X_2$ will have mixed. This means that if neither $X_1$ nor $X_2$ has hit $x$ by this time, the probability that either one hits $x$ first is close to $\frac{1}{2}$. The careful reader who wishes to see precise, quantitative versions of these statements will find them in the lemmas we use to prove Theorem 1.2. Thus it is not difficult to see that there exists so that

This leads to an upper bound of

A slightly more refined analysis leads to a lower bound for (9) with the same growth rate. As we will show in the next section, in every dimension this estimate is typically quite far from being sharp. The reason for the inaccuracy is that by moving the absolute value into the sum in (9), we are unable to take advantage of the cancellation that arises as the summands change sign between $y$ close to $x$ and $y$ far from $x$.
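As a quick consistency check on the Green's function asymptotic used in the heuristic above (a standard computation, not an excerpt from the paper): the local central limit theorem gives $p^t(x,y) \asymp t^{-d/2}$ for $t \geq |x-y|^2$, and the contribution from $t < |x-y|^2$ is negligible, so

```latex
\[
G(x,y) \;=\; \sum_{t=0}^{\infty} p^t(x,y)
\;\asymp\; \sum_{t \geq |x-y|^2} t^{-d/2}
\;\asymp\; |x-y|^{2-d}.
\]
```

The exponent $2 - d$ is negative precisely when $d \geq 3$, reflecting the transience that drives the entire argument.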

Outline

The remainder of this article is structured as follows. In the next section, we will deduce Theorem 1.1 and Corollary 1.3 from Theorem 1.2. In Section 3, we introduce some notation that will be used throughout in addition to collecting several basic random walk estimates. Next, in Section 4, we give a precise estimate of the Radon–Nikodym derivative of with respect to . In Section 5, we prove Theorem 1.2 and end in Section 6 with a list of related problems and discussion.

2 Proof of Theorem 1.1 and Corollary 1.3

The following proposition will be important for the proof of Theorem 1.1.

Proposition 2.1

Assume that $G = \mathbb{Z}_n^d$ for $d \geq 3$. For each $d$, the limit

(10)

exists. When $d \geq 4$, it is as in (2), (3). When $d = 3$, it is given in terms of the quantity defined in (4).

The first step in the proof of the proposition is to reduce the existence of the limit to a computation involving Green’s functions. Recall from (1) that $G(x,y)$ is the Green’s function for lazy random walk on $\mathbb{Z}^d$. In order to keep the notation from becoming too heavy, throughout the rest of this section we will write $G(x,y)$ for the relevant Green’s function, where the underlying graph will be clear from the context. Let

Lemma 2.2

Assume that $G = \mathbb{Z}_n^d$ for $d \geq 3$. For each $d$, we have that

Proof.

Observe

We shall now prove a matching lower bound. Fix $\epsilon > 0$. Then we have that

Assuming , by mixing considerations as well as a union bound (see Proposition 3.1) we have that

Since , we have

where we used in the last line that for some (see [LAW91], Theorem 1.2.1) as well as the observation . Combining (11), (12) and (13), we have thus proved the lower bound

Here, we used the bound . Theorem 1.5.4 of [LAW91] implies (it is actually stated for walks on $\mathbb{Z}^d$ which are not lazy, but the generalization is straightforward). Consequently,

Hence,

Dividing both sides by , taking a limsup as $n \to \infty$, then letting $\epsilon \to 0$, yields

By (12) we know that , and, by local transience, it is not hard to see that .

Proof of Proposition 2.1. Lemma 2.2 implies that we may replace by in (10). Letting , we can likewise replace in (10) by . Consequently, to prove the proposition, it suffices to prove the existence of the limit

(14)

We will divide the proof into the cases $d \geq 4$ and $d = 3$.

Case 1: $d \geq 4$. As , we have

Thus it suffices to show in this case that

exists, where and for . This will be a consequence of two observations. First, note that

Thus it suffices to show that, for with , the limit

exists (we can even restrict to finite if ). Our second observation is that

This follows since we can couple the walks on $\mathbb{Z}_n^d$ and $\mathbb{Z}^d$ starting at such that they are the same until the first time they have reached distance from , then move independently thereafter. The expected number of visits each walk makes to after time , where the former is stopped at time , is easily seen to be . Thus,

Therefore if , we have

For ,

Note that the limit on the right-hand side exists since, by Theorem 1.5.4 of [LAW91] (generalized to lazy walks),

if is fixed.

Case 2: $d = 3$. The thrust of the previous argument was that random walk on $\mathbb{Z}^d$ for $d \geq 4$ is sufficiently transient so that pairs of points at distance of order $n$ make a negligible contribution to the variance, which in turn allowed us to make an accurate comparison between the Green’s function for random walk on $\mathbb{Z}_n^d$ with that on $\mathbb{Z}^d$. The situation for $d = 3$ is more delicate since the opposite is true: pairs at distance $o(n)$ do not measurably affect the variance.

Theorem 1.2.1 of [LAW91] (extended to the case of lazy random walk; see also Corollary 22.3 of [BR76]) implies the existence of constants such that with , we have the estimate

Hence letting , one can easily show that with

we have that

(15)

By differentiating in , we see that for , we have

We are now going to prove that

(16)

It suffices to bound

For , we apply Cauchy–Schwarz to the integral and invoke the integrability of over to arrive at

For