# Mobile Geometric Graphs, and Detection and Communication Problems in Mobile Wireless Networks

###### Abstract

Static wireless networks are by now quite well understood mathematically through the random geometric graph model. By contrast, there are relatively few rigorous results on the practically important case of mobile networks, in which the nodes move over time; moreover, these results often make unrealistic assumptions about node mobility such as the ability to make very large jumps. In this paper we consider a realistic model for mobile wireless networks which we call mobile geometric graphs, and which is a natural extension of the random geometric graph model. We study two fundamental questions in this model: detection (the time until a given “target” point—which may be either fixed or moving—is detected by the network), and percolation (the time until a given node is able to communicate with the giant component of the network). For detection, we show that the probability that the detection time exceeds $t$ is $\exp(-\Theta(t/\log t))$ in two dimensions, and $\exp(-\Theta(t))$ in three or more dimensions, under reasonable assumptions about the motion of the target. For percolation, we show that the probability that the percolation time exceeds $t$ is $\exp(-\Omega(t^{d/(d+2)}))$ in all dimensions $d \ge 2$. We also give a sample application of this result by showing that the time required to broadcast a message through a mobile network with $n$ nodes above the threshold density for existence of a giant component is $O(\log^{1+2/d} n)$ with high probability.

## 1 Introduction

A principal focus in wireless network research today is on mobile ad hoc networks, in which nodes moving in space cooperate to relay packets on behalf of other nodes without any centralized infrastructure. Although the static properties of such networks are by now quite well understood mathematically, the additional challenges posed by node mobility have so far received relatively little attention from the theory community. In this paper we consider a mathematical model for mobile wireless networks, which we call mobile geometric graphs and which is a natural extension of the widely studied random geometric graphs model of static networks. We study two fundamental problems in this model: the detection problem (time until a fixed or moving target is detected by the network), and the percolation problem (time until a given node is able to communicate with many other nodes).

In the random geometric graph (RGG) model [27], nodes are distributed in a region $\mathcal{R}$ according to a Poisson point process of intensity $\lambda$ (i.e., the number of nodes in any subregion $S \subseteq \mathcal{R}$ is Poisson with mean $\lambda\,\mathrm{vol}(S)$, where $\mathrm{vol}(S)$ is the volume of $S$). Two nodes are connected by an edge iff their distance is at most $r$, where the parameter $r$ is the transmission range that specifies the distance over which nodes may send and receive information; since the structure of the RGG depends only on the product $\lambda\,\mathrm{vol}(B(r))$ (where $B(r)$ is the radius-$r$ ball in $\mathbb{R}^d$) [6], we may fix $r$ so that $\mathrm{vol}(B(r)) = 1$ and parameterize the model on $\lambda$ only. We shall take $\mathcal{R}$ to be a cube of volume $n/\lambda$ (so that the expected number of nodes in $\mathcal{R}$ is $n$), and consider the limiting behavior as $n \to \infty$.
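As a concrete illustration, the construction above can be sketched in a few lines of Python. The function name and parameters below are ours, not the paper's; the Poisson sample is drawn by textbook CDF inversion, and $r$ is chosen so that $\mathrm{vol}(B(r)) = 1$, as in the convention just described.

```python
import math, random

def sample_rgg(lam, volume, d=2, seed=0):
    """Sample a random geometric graph in a d-dimensional cube of the given
    volume: a Poisson(lam * volume) number of nodes placed uniformly, with an
    edge between nodes at distance at most r.  Following the convention in
    the text, r is chosen so that vol(B(r)) = 1, making lam the limiting
    expected degree.  (Illustrative sketch; names are ours.)"""
    rng = random.Random(seed)
    # vol(B(r)) = pi^{d/2} r^d / Gamma(d/2 + 1) = 1  =>  solve for r
    r = (math.gamma(d / 2 + 1) / math.pi ** (d / 2)) ** (1.0 / d)
    side = volume ** (1.0 / d)
    mean = lam * volume
    n, p, target = 0, math.exp(-mean), rng.random()
    cdf = p                              # Poisson sample by CDF inversion
    while cdf < target:
        n += 1
        p *= mean / n
        cdf += p
    pts = [tuple(rng.uniform(0, side) for _ in range(d)) for _ in range(n)]
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if math.dist(pts[i], pts[j]) <= r]
    return pts, edges, r

pts, edges, r = sample_rgg(lam=2.0, volume=100.0, d=2, seed=1)
```

In two dimensions this gives $r = 1/\sqrt{\pi}$, and the sampled node count concentrates around $\lambda \cdot \mathrm{vol} = 200$ here.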

Clearly, increasing $\lambda$ increases the average degree of the nodes. As is well known, there are two critical values of $\lambda$ at which the connectivity properties of the RGG undergo a significant change. First there is the percolation threshold $\lambda_c$ (a constant that depends on the dimension $d$): if $\lambda > \lambda_c$ the network w.h.p.[^1] has a unique “giant” component containing a constant fraction of the nodes, while if $\lambda < \lambda_c$ all components have size $O(\log n)$ w.h.p. [27]. Second, at the connectivity threshold $\lambda = \Theta(\log n)$, the network becomes connected w.h.p. [19]. The percolation threshold occurs also in the infinite-volume limit $n = \infty$, in which case the giant component is the unique infinite component (or “infinite cluster”) with probability 1. These and other fundamental properties of RGGs are extensively discussed in the book of Penrose [27]; see also [17] for additional results on thresholds. There are a host of theoretical results on routing and other algorithmic questions on static RGGs (see the Related Work section below for a partial list). Naturally, most of these consider networks above the connectivity threshold.

[^1]: We shall take the phrase “w.h.p.” (“with high probability”) to mean “with probability tending to 1 as $n \to \infty$.”

A central feature of many ad hoc networks is the fact that the nodes are
moving in space. This is the case, for example, in vehicular
networks (where sensors are attached to cars, buses or taxis), surveillance
and disaster recovery applications where mobile sensors are used to
survey an area, and pocket-switched networks based on mobile communication
devices such as cellphones.
Such networks are also frequently modeled using RGGs, augmented by motion
of the nodes. We will employ the following model, which we refer to as
mobile geometric graphs (MGGs) and which is essentially equivalent
to the “dynamic boolean model” introduced in [5]
in the context of dynamic continuum percolation.
We begin at time 0 with an (infinite)[^2] RGG in $\mathbb{R}^d$. Nodes move independently in continuous time according to Brownian motion with variance $\sigma^2$; here $\sigma$ is a range-of-motion parameter, which we assume is constant to ensure a realistic model. We observe the nodes at discrete time steps $0, 1, 2, \ldots$ (so the displacement of a node in each direction in each time step is normally distributed with mean 0 and variance $\sigma^2$). It is not hard to verify that this produces a Markovian sequence of RGGs $\{G_0, G_1, G_2, \ldots\}$, all with the same value of $\lambda$. Note that, while each $G_i$ is a RGG, there are correlations over time; it is this feature that makes mobility challenging to analyze.

[^2]: Passing to infinite volume is a standard device that eliminates boundary effects in a finite region; with a little more technical effort the model and results can be extended to finite regions with a suitable convention—such as reflection or wraparound—to handle motion of nodes at the boundaries. See, e.g., Corollary 1.3 below.
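The dynamics just described are easy to simulate: each node's position receives an independent Gaussian increment per coordinate at each step, and the edge set is recomputed from the new positions. The following sketch (our own illustration, with illustrative parameter values) produces a short Markovian sequence of snapshots.

```python
import math, random

def mgg_step(points, sigma, rng):
    """Advance every node of a mobile geometric graph by one time step: each
    coordinate receives an independent N(0, sigma^2) increment, matching the
    discrete-time view of Brownian motion used in the text."""
    return [tuple(x + rng.gauss(0.0, sigma) for x in p) for p in points]

def edges_within(points, r):
    """Recompute the RGG edge set after a move (distance at most r)."""
    n = len(points)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if math.dist(points[i], points[j]) <= r}

rng = random.Random(0)
pts = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(50)]
snapshots = [pts]
for _ in range(5):                       # the Markovian sequence G_0, ..., G_5
    snapshots.append(mgg_step(snapshots[-1], sigma=1.0, rng=rng))
```

Comparing `edges_within` across consecutive snapshots makes the time correlations mentioned above visible: nearby graphs share many edges, while distant ones do not.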

Once mobility is injected, the questions of interest naturally change from those in the static case. For example, connectivity no longer plays such a central role because mobility may allow nodes to exchange messages even in the absence of a path between them at any given time: namely, a node $u$ can route its message to a node $v$ along a time-dependent path, opportunistically using other nodes to relay the message towards $v$. Networks of this kind are often termed “delay tolerant networks” [14]. This allows us to focus not on the rather artificial connectivity regime (where $\lambda$ grows with $n$), but instead on the case where $\lambda$ (and hence the average degree) is constant. Keeping $\lambda$ constant is obviously highly desirable as it makes the model much more realistic and scalable.

There are rather few rigorous results on wireless networks with mobile nodes, and those that do exist typically either make unrealistic assumptions about node mobility (such as unbounded range of motion [18, 11, 8] or no change in direction [25]), or work in the connectivity regime which, as we have seen, requires unbounded density or transmission range [10, 13]. (See the Related Work section for more details.) In this paper we study two fundamental questions for mobile networks assuming only constant average degree and bounded range of motion (i.e., constant values of the parameters $\lambda$ and $\sigma$).

### Results

Detection. A central issue in surveillance and remote sensing applications is the ability of the network to detect a “target” $v$ (which may be either fixed or moving), in the sense that there is a node within distance at most $r$ of $v$. It is well known [26] that, for a static RGG, a fixed target can be detected only with constant probability unless the average degree grows with $n$. In the mobile case, we may hope to achieve detection over time with constant average degree (indeed, even with $\lambda$ below the percolation threshold). In this scenario, the detection time, $T_{\mathrm{det}}$, is formulated as the number of steps until a target initially at the origin is detected by the MGG. Recent work of Liu et al. [25] shows that the detection time in two dimensions is exponentially distributed when the nodes of the network move in fixed directions. In the more realistic MGG model, we are able to prove the following result, which holds in all dimensions (see Section 3):

###### Theorem 1.1.

In the MGG model with any fixed $\lambda > 0$ and range of motion $\sigma > 0$, the detection time $T_{\mathrm{det}}$ for a fixed target or a target moving under Brownian motion satisfies $\Pr[T_{\mathrm{det}} > t] = \exp(-\Theta(t/\log t))$ for $d = 2$, and $\Pr[T_{\mathrm{det}} > t] = \exp(-\Theta(t))$ for $d \ge 3$.

The constants in the $\Theta(\cdot)$ here depend only on $\lambda$, $\sigma$ and the dimension $d$. Thus the tail of the detection time is exponential in three and higher dimensions, and exponential with a logarithmic correction in two dimensions. We note that, as is evident from the proof, this dichotomy between two and three dimensions reflects the difference between recurrent and transient random walks in $\mathbb{R}^2$ and $\mathbb{R}^3$ respectively. We also note that the upper bound in Theorem 1.1 holds for arbitrary motion of the target (provided it is independent of the motion of the nodes); and the lower bound holds for any “sufficiently random” motion of the target.
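The detection time is also straightforward to explore empirically. The toy Monte Carlo below (entirely our own construction, with illustrative parameter values) places a Poisson cloud of mobile nodes in a finite window and records when a fixed target at the origin is first within distance $r$ of some node; it is a finite-window approximation only, since nodes can diffuse out of the window.

```python
import math, random

def detection_time(lam, sigma, r, box, t_max, rng):
    """Monte Carlo sketch (our own toy, not the paper's construction) of the
    detection time in d = 2: nodes of a Poisson process on [-box, box]^2 take
    discrete Brownian steps, and we return the first step at which some node
    is within distance r of a fixed target at the origin (t_max if never)."""
    mean = lam * (2 * box) ** 2
    n, acc = 0, rng.expovariate(1.0)     # Poisson(mean) via exponential gaps
    while acc < mean:
        n += 1
        acc += rng.expovariate(1.0)
    pts = [[rng.uniform(-box, box), rng.uniform(-box, box)] for _ in range(n)]
    for t in range(t_max + 1):
        if any(math.dist(p, (0.0, 0.0)) <= r for p in pts):
            return t
        for p in pts:                    # one discrete Brownian step
            p[0] += rng.gauss(0.0, sigma)
            p[1] += rng.gauss(0.0, sigma)
    return t_max

rng = random.Random(42)
times = [detection_time(lam=0.5, sigma=1.0, r=1.0, box=6.0, t_max=200, rng=rng)
         for _ in range(20)]
```

Plotting the empirical tail of such samples for growing $t$ is a quick way to see the logarithmic correction the theorem predicts in two dimensions, though the finite window biases very long runs.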

We should point out that, for the special case of a fixed target, a slightly stronger version of Theorem 1.1, with a tight constant in the exponent, follows from classical results on the “Wiener sausage” in continuum percolation. (This was pointed out to us by Yuval Peres [29]; see Related Work for details.) However, it is not clear how to extend this approach to the case of a moving target. Our proof is elementary and based on an application of the mass transport principle.

Percolation. A fundamental question in mobile networks is whether a node can efficiently communicate with other nodes even when the network is not connected at any given time. In the MGG model, this question may naturally be formulated by considering a constant intensity $\lambda > \lambda_c$ (i.e., above the percolation threshold) and asking how long it takes until a node initially at the origin belongs to the giant component (or the infinite component in the limit $n = \infty$). We call this the percolation time $T_{\mathrm{perc}}$. It should be clear that the percolation time can be used to derive bounds on other natural quantities, such as the time for a node to broadcast information to all other nodes (see Corollary 1.3 below). As far as we are aware the percolation time has not been investigated before, largely because previous work on RGGs has focused on networks above the connectivity threshold. However, it appears to be a fundamental question in the mobile context.

The detection time clearly provides a lower bound on the percolation time, so we may deduce from Theorem 1.1 above that $\Pr[T_{\mathrm{perc}} > t]$ is at least $\exp(-O(t/\log t))$ for $d = 2$ and at least $\exp(-O(t))$ for $d \ge 3$. We are able to prove the following stretched exponential upper bound in all dimensions (see Section 4):

###### Theorem 1.2.

In the MGG model with any fixed $\lambda > \lambda_c$ and range of motion $\sigma > 0$, the percolation time $T_{\mathrm{perc}}$ for a node at the origin satisfies $\Pr[T_{\mathrm{perc}} > t] \le \exp(-\Omega(t^{d/(d+2)}))$ in all dimensions $d \ge 2$.

Again, the constant in the $\Omega(\cdot)$ depends only on $\lambda$, $\sigma$ and $d$. There is a gap between this upper bound and the lower bound implied by Theorem 1.1. We conjecture that the true tail behavior of $T_{\mathrm{perc}}$ is $\exp(-\Theta(t/\log t))$ for $d = 2$ and $\exp(-\Theta(t))$ for $d \ge 3$.

Theorem 1.2 is the main technical contribution of the paper; we briefly mention some of the ideas used in the proof. The key technical challenge is the dependency of the RGGs over time. To overcome this, we partition $\mathbb{R}^d$ into subregions of suitable size and couple the evolution of the nodes in each subregion with that of a fresh Poisson point process of slightly smaller intensity $\lambda' < \lambda$ which is still larger than the critical value $\lambda_c$. After a number of steps $\Delta$ that depends on the size of the subregion, we are able to arrange that the coupled processes match up almost completely. As a result, we can conclude that our original MGG process, observed every $\Delta$ steps, contains a sequence of independent Poisson point processes with intensity $\lambda'$. (This fact, which we believe is of wider applicability, is formally stated in Proposition 4.1 in Section 4.) This independence is sufficient to complete the proof. The slack in the bound comes from the “delay” $\Delta$.

To illustrate a sample application of Theorem 1.2, we consider the time taken to broadcast a message in a network of finite size. Consider a MGG in a cube of volume $n/\lambda$ (so the expected[^3] number of nodes is $n$). Since the volume is finite, we need to modify the motion of the nodes to take account of boundary effects: following standard practice, we do this by turning the cube into a torus (so that nodes “wrap around” when they reach the boundaries). Suppose a message originates at an arbitrary node at time 0, and at each time step each node that has already received the message broadcasts it to all nodes in the same connected component. (Here we are making the reasonable assumption that the speed of transmission is much faster than the motion of the nodes, so that messages can travel throughout a connected component before it is altered by the motion.) Let $T_{\mathrm{bc}}$ denote the time until all nodes have received the message.

[^3]: The result can be adapted to the case of a fixed number of nodes using standard “de-Poissonization” arguments [27]. See the Remark following the proof of Corollary 1.3 in Section 5.
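The broadcast process just described can be sketched directly: flood the message through the current connected components, then let every node take a Brownian step on the torus. The simulation below is our own toy version of this dynamic, with illustrative parameter values.

```python
import math, random

def broadcast_time(n_pts, side, r, sigma, rng, t_max=40):
    """Toy flooding simulation on a torus (names and parameters are ours):
    every informed node passes the message to its entire connected component,
    then all nodes take a Brownian step with wraparound.  Returns the first
    step at which all nodes are informed, or t_max if that never happens."""
    pts = [[rng.uniform(0, side), rng.uniform(0, side)] for _ in range(n_pts)]
    informed = {0}                       # the message originates at node 0

    def torus_dist(p, q):
        return math.hypot(*(min(abs(a - b), side - abs(a - b))
                            for a, b in zip(p, q)))

    for t in range(t_max + 1):
        grew = True                      # flood to the whole component
        while grew:
            grew = False
            for i in range(n_pts):
                if i not in informed and any(
                        torus_dist(pts[i], pts[j]) <= r for j in informed):
                    informed.add(i)
                    grew = True
        if len(informed) == n_pts:
            return t
        for p in pts:                    # Brownian step on the torus
            p[0] = (p[0] + rng.gauss(0.0, sigma)) % side
            p[1] = (p[1] + rng.gauss(0.0, sigma)) % side
    return t_max

rng = random.Random(7)
t_bc = broadcast_time(n_pts=30, side=9.0, r=2.0, sigma=1.0, rng=rng)
```

The inner fixpoint loop implements the assumption that transmission is instantaneous relative to motion: a message reaches an entire component within one time step.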

###### Corollary 1.3.

In a MGG on the torus of volume $n/\lambda$ with any fixed $\lambda > \lambda_c$ and range of motion $\sigma > 0$, the broadcast time satisfies $T_{\mathrm{bc}} = O(\log^{1+2/d} n)$ w.h.p. in any dimension $d \ge 2$.

### Related work

There are many theoretical results on routing and other algorithmic questions on (static) RGGs; we mention just a few highlights here. The seminal work of Gupta and Kumar [20, 21] (see also [15] for refinements) examined the information-theoretic capacity (or throughput) of such networks above the connectivity threshold, i.e., the number of bits per unit time that each node can transmit to some (randomly chosen) destination node in steady state, assuming constant size buffers in the network. The capacity per node is $\Theta(1/\sqrt{n\log n})$, which tends to 0 as $n \to \infty$, suggesting a fundamental limitation on the scalability of such static networks.

The detection problem has received much attention. In the static case detection is essentially equivalent to coverage of the region, which requires that the network be connected. In the absence of coverage, Balister et al. [3] determine the maximum diameter of the uncovered regions, while Dousse et al. [12] prove that, for any fixed $\lambda$, the detection time for a target moving in a fixed direction has an exponential tail. (Note that this is not a mobility result, as the nodes are fixed.)

The question of broadcasting within the giant component of a RGG above the percolation threshold was recently analyzed by Bradonjić et al. [7], who also show that the graph distance between any two (sufficiently distant) nodes is at most a constant factor larger than their Euclidean distance. Cover times for random walks on (connected) RGGs were investigated by Avin and Ercal [1] and Cooper and Frieze [9], while the effect of physical obstacles that obstruct transmission was studied by Frieze et al. [16].

The scope of mathematically rigorous work on RGGs with mobility is much more limited. We briefly summarize it here.

Motivated by the fact mentioned above [20] that the capacity of static networks goes to zero as $n \to \infty$, Grossglauser and Tse [18] (see also [11]) showed how to exploit mobility to achieve constant capacity using a two-hop routing scheme. However, these results require the unrealistic assumption that nodes move a distance comparable to the diameter of the entire region at each step. El Gamal et al. [13] study the tradeoff between capacity and delay in a realistic mobility model, but above the connectivity threshold. Clementi et al. [8] show how to exploit mobility to enable broadcast in a RGG sufficiently far above the percolation threshold. However, this result again assumes that the range of motion of the nodes is unbounded (i.e., $\sigma$ grows with $n$).

As mentioned earlier, the detection problem was addressed by Liu et al. [25], assuming that each node moves continuously in a fixed randomly chosen direction; they show that the time it takes for the network to detect a target is exponentially distributed, with expectation depending on the intensity $\lambda$. Also, for the special case of a stationary target, as observed in [23, 24] a slightly stronger version of Theorem 1.1, with tight constants in the exponent, can be deduced from classical results on continuum percolation: namely, in this case it is shown in [31] that (in continuous time) $\Pr[T_{\mathrm{det}} > t] = \exp(-\lambda\,\mathbb{E}[\mathrm{vol}(W(t))])$, where $\mathbb{E}[\mathrm{vol}(W(t))]$ is the expected volume of the “Wiener sausage” of length $t$ (essentially the trajectory of a Brownian motion “fattened” by a disk of radius $r$). This volume in turn is known quite precisely [30, 4].

A model essentially equivalent to MGGs was introduced under the name “dynamic boolean model” by Van den Berg et al. [5], who studied the measure of the set of times at which an infinite component exists. Finally, recent work of Díaz et al. [10] in a similar model determines, for networks exactly at the connectivity threshold, the expected length of time for which the network stays connected (or disconnected) as the nodes move. However, this question makes sense only for very large values of $\lambda$ (growing with $n$) and thus falls outside the scope of our investigations.

## 2 Preliminaries

For any $r > 0$, let $B(r)$ be the $d$-dimensional ball centered at the origin with radius $r$. Similarly, let $Q_K$ be the cube with side length $K$ centered at the origin and with sides parallel to the axes of $\mathbb{R}^d$. For any point $x \in \mathbb{R}^d$ and set $S \subseteq \mathbb{R}^d$, we define $x + S$ as the Minkowski sum $\{x + y \colon y \in S\}$. The volume of a set $S$ is denoted $\mathrm{vol}(S)$.

**Poisson point processes.**

A “point process” is a random collection of points in $\mathbb{R}^d$; for a formal treatment of this topic, the reader is referred to [31]. To avoid ambiguity, we refer to the points of a point process as nodes, and reserve the word points for arbitrary locations in $\mathbb{R}^d$. We are mostly interested in Poisson point processes. A Poisson point process with intensity $\lambda$ in a region $\mathcal{R} \subseteq \mathbb{R}^d$ is defined by a single property: for every bounded Borel set $S \subseteq \mathcal{R}$, the number of nodes in $S$ is a Poisson random variable with mean $\lambda\,\mathrm{vol}(S)$. We will make use of the following standard properties of Poisson point processes: (1) for disjoint sets $S_1, S_2$, the numbers of nodes in $S_1$ and in $S_2$ are independent; (2) conditioned on the number of nodes in $S$, each such node is located independently and uniformly at random in $S$; (3) [thinning] if each node of a Poisson point process with intensity $\lambda$ is deleted independently with probability $1 - q$, the result is a Poisson point process with intensity $q\lambda$; (4) [superposition] the union of two independent Poisson point processes in $\mathcal{R}$ with intensities $\lambda_1$ and $\lambda_2$ is a Poisson point process with intensity $\lambda_1 + \lambda_2$. In some of our proofs we will make use of non-homogeneous Poisson point processes, whose intensity is a function $\lambda(x)$ of the position $x$. In such a process the expected number of nodes in a set $S$ is $\int_S \lambda(x)\,dx$.
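The thinning and superposition properties are easy to exercise in simulation. The sketch below (our own illustration; all names and parameter values are ours) samples a homogeneous process, thins it by keeping each node with probability $q = 0.75$, and superposes an independent process on the survivors.

```python
import random

def poisson_count(mean, rng):
    """Poisson sample: count unit-rate exponential arrivals before `mean`."""
    n, acc = 0, rng.expovariate(1.0)
    while acc < mean:
        n += 1
        acc += rng.expovariate(1.0)
    return n

def poisson_points(lam, side, rng):
    """Homogeneous Poisson point process of intensity lam on [0, side]^2."""
    n = poisson_count(lam * side ** 2, rng)
    return [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(n)]

rng = random.Random(3)
side = 20.0
pts = poisson_points(1.0, side, rng)
# thinning: keep each node independently with probability q = 0.75,
# leaving a Poisson process of intensity 0.75
thinned = [p for p in pts if rng.random() < 0.75]
# superposition: the union with an independent process of intensity 0.5
# is a Poisson process of intensity 0.75 + 0.5 = 1.25
merged = thinned + poisson_points(0.5, side, rng)
```

Averaging node counts over many seeds recovers the stated intensities; a single sample, as here, only illustrates the construction.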

**Random geometric graphs.**

Fix parameters $\lambda > 0$ and $r > 0$, and let $Q$ be the cube of volume $n/\lambda$ in $\mathbb{R}^d$. Let $\Pi$ be a Poisson point process over $Q$ with intensity $\lambda$. A random geometric graph (RGG) $G(\lambda, r)$ is constructed by taking the node set to be the nodes of $\Pi$ and creating an edge between every pair of nodes whose Euclidean distance is at most $r$. The parameter $r$ is called the transmission range. Since $\Pi$ is a Poisson point process, the expected number of nodes in $Q$ is $n$.

It is well known [6, 26] that as $n \to \infty$ the random graph model induced by $G(\lambda, r)$ depends only on the product $\lambda\,\mathrm{vol}(B(r))$. For this reason, we will always fix $r$ so that $\mathrm{vol}(B(r)) = 1$ and parameterize the model only on $\lambda$. Note that with this convention, in the limit as $n \to \infty$, $\lambda$ is also the expected degree of any node in $G(\lambda, r)$.

Using a Poisson point process rather than a fixed number of nodes is a standard trick that simplifies the mathematics. Most results in this model can be translated to a model with a fixed number of nodes in $Q$ using a technique known as “de-Poissonization” [27].

Many asymptotic properties of $G(\lambda, r)$ as $n \to \infty$ are studied in the monograph by Penrose [27]. For example, it is known that $\lambda = \log n$ is a threshold for connectivity, in the sense that if $\lambda \ge (1 + \varepsilon)\log n$ for fixed $\varepsilon > 0$ then $G(\lambda, r)$ is connected w.h.p., while if $\lambda \le (1 - \varepsilon)\log n$ then $G(\lambda, r)$ is disconnected w.h.p. Another important critical value is the percolation threshold $\lambda_c$ (a constant that depends on the dimension $d$): if $\lambda > \lambda_c$ then w.h.p. $G(\lambda, r)$ contains a unique “giant” connected component with $\Theta(n)$ nodes, while all other components are of polylogarithmic size; on the other hand, if $\lambda < \lambda_c$ then w.h.p. all connected components of $G(\lambda, r)$ have size $O(\log n)$. The value of $\lambda_c$ is not known exactly in any dimension $d \ge 2$. However, for $d = 2$ rigorous upper and lower bounds are known [26, Section 3.9], while Balister et al. [2] used Monte Carlo methods to estimate $\lambda_c$ within a narrow interval with high confidence.

Finally, we remark that in the limit $n = \infty$ (that is, when the Poisson point process is defined over the whole of $\mathbb{R}^d$) the percolation threshold $\lambda_c$ still exists and is characterized by the appearance of a unique infinite component (or “infinite cluster”) with probability 1 for any $\lambda > \lambda_c$. In this limit the graph is disconnected with probability 1 for any value of $\lambda$.
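The contrast between the sub- and supercritical regimes is easy to observe numerically. The sketch below (our own Monte Carlo, with illustrative parameter values and the convention $\mathrm{vol}(B(r)) = 1$) measures the fraction of nodes in the largest component of a sampled RGG via union-find, in a sparse and a dense regime.

```python
import math, random

def giant_fraction(lam, side, rng):
    """Monte Carlo sketch: sample an RGG on [0, side]^2 with vol(B(r)) = 1
    (so r = 1/sqrt(pi) and lam is the limiting mean degree) and return the
    fraction of nodes in the largest component, found via union-find."""
    r = 1.0 / math.sqrt(math.pi)
    mean = lam * side ** 2
    n, acc = 0, rng.expovariate(1.0)     # Poisson(mean) node count
    while acc < mean:
        n += 1
        acc += rng.expovariate(1.0)
    pts = [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(n)]
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) <= r:
                parent[find(i)] = find(j)
    sizes = {}
    for i in range(n):
        root = find(i)
        sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values()) / n if n else 0.0

rng = random.Random(5)
sub = giant_fraction(lam=1.0, side=12.0, rng=rng)   # sparse regime
sup = giant_fraction(lam=8.0, side=12.0, rng=rng)   # dense regime
```

Sweeping `lam` over a grid and averaging over seeds reproduces the sharp rise of the giant-component fraction near the (numerically estimated) threshold.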

**Mobile geometric graphs.**

We define our mobile geometric graph (MGG) model by taking a Poisson point process with intensity $\lambda$ in $\mathbb{R}^d$ at time 0 and letting each node move in continuous time according to an independent Brownian motion. We sample the locations of the nodes at discrete time steps $0, 1, 2, \ldots$ and use these locations to define a sequence of random geometric graphs with transmission range $r$. We base our MGG model on the infinite volume $\mathbb{R}^d$ to avoid having to handle boundary effects on the motion of the nodes. Results in this model can be translated to finite regions with a suitable convention—such as wraparound or reflection—to handle the motion of nodes at the boundaries.

More formally, let $\Pi_0$ be a Poisson point process with intensity $\lambda$ over $\mathbb{R}^d$. We take a parameter $\sigma > 0$, and with each node $u \in \Pi_0$ we associate an independent $d$-dimensional Brownian motion $(\zeta_u(s))_{s \ge 0}$ that starts at the location of $u$ in $\Pi_0$ and has variance $\sigma^2$ [22]. Now, for any integer $i \ge 0$, we define $\Pi_i$ as the point process obtained by putting a node at $\zeta_u(i)$ for each $u \in \Pi_0$. A MGG is then the collection of graphs $\{G_0, G_1, G_2, \ldots\}$, where $G_i = G(\Pi_i, r)$ and $r$ is fixed so that $\mathrm{vol}(B(r)) = 1$. (Note that, as in the static case, fixing the value of $r$ may be done w.l.o.g.)

It is an easy consequence of the mass transport principle (see below) that each $\Pi_i$, viewed in isolation, is itself a Poisson point process with intensity $\lambda$. This means that the sequence $\{\Pi_i\}_{i \ge 0}$ is stationary, and therefore each $G_i$, when viewed in isolation, is a random geometric graph over $\mathbb{R}^d$. Thus, for example, if $\lambda > \lambda_c$ then each $G_i$ contains an infinite component with probability 1.

**Mass transport principle.**

For two points $x, y \in \mathbb{R}^d$ and a time step $s \ge 1$, we define $\rho_s(x, y)$ as the probability density function for a node located at position $x$ at a given time to be at position $y$ a further $s$ time steps later. Since nodes move according to $d$-dimensional Brownian motion, we have $\rho_s(x, y) = (2\pi\sigma^2 s)^{-d/2}\exp\!\big(-\|x - y\|^2/(2\sigma^2 s)\big)$.

In some situations, it is useful to regard $\rho_s$ as a mass transport function. For example, suppose nodes are initially distributed according to a Poisson point process with intensity $\lambda$ in a region $A \subseteq \mathbb{R}^d$; we may view this as a Poisson point process over $\mathbb{R}^d$ with (non-homogeneous) intensity (or “mass function”) $\lambda_0(x) = \lambda\,\mathbf{1}_{\{x \in A\}}$. Using the thinning and superposition properties, it is easy to check that the distribution of the nodes at time $s$ is a Poisson point process with intensity $\lambda_s(y) = \int_{\mathbb{R}^d} \lambda_0(x)\,\rho_s(x, y)\,dx$. This interpretation can be used, for example, to show that $\Pi_i$ in a MGG is a Poisson point process with intensity $\lambda$ for all $i$, as claimed above.
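The Gaussian transition density above can be coded directly, and a quick numerical check confirms it is a probability density. The helper below is our own sketch (names are ours); the grid sum approximates the integral of $\rho_s(x, \cdot)$ over $\mathbb{R}^2$.

```python
import math

def rho(x, y, sigma2, s, d):
    """Transition density rho_s(x, y) of d-dimensional Brownian motion with
    per-step variance sigma2, observed after s time steps."""
    dist2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return (2 * math.pi * sigma2 * s) ** (-d / 2) * math.exp(
        -dist2 / (2 * sigma2 * s))

# sanity check: the density integrates to (approximately) 1 on a fine 2-D grid
h, total = 0.1, 0.0
for i in range(-60, 61):
    for j in range(-60, 61):
        total += rho((0.0, 0.0), (i * h, j * h), 1.0, 1.0, 2) * h * h
```

Convolving a step-function intensity with `rho` in the same way gives a numerical version of the mass-transport computation of $\lambda_s$.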

## 3 Detection time

In this section we prove Theorem 1.1. We consider the detection time for a target $v$ initially placed at the origin independently of the MGG. We say that a node $u$ detects $v$ at time step $i$ if the distance between $u$ and $v$ at time step $i$ is at most $r$, and we define $T_{\mathrm{det}}$ as the first time that $v$ is detected by some node of the MGG. Our goal is to derive tight bounds for the tail of $T_{\mathrm{det}}$. In the proof we consider the cases where $v$ is either non-mobile or moves according to Brownian motion with variance $\sigma^2$. We discuss some extensions at the end of the section.

It will be convenient to restrict attention to the nodes of the MGG that are initially inside the cube $Q_K$, where $K$ is a suitably chosen parameter. We define $T_K$ as the first time a node initially inside $Q_K$ detects $v$. Note that clearly $T_{\mathrm{det}} = \lim_{K \to \infty} T_K$, where the limit exists since $T_K$ is monotone and bounded below as a function of $K$. We let $\bar v = (v_0, v_1, \ldots, v_t)$ be the locations of $v$ in the first $t$ steps. The following lemma relates the tail of $T_K$ to the tail of an analogous random variable for a single random node in $Q_K$.

###### Lemma 3.1.

We have
$$\Pr[T_K > t] = \mathbb{E}_{\bar v}\Big[\exp\big(-\lambda K^d\,\Pr[T' \le t \mid \bar v]\big)\Big] \quad\text{and}\quad \Pr[T_K > t] \ge \exp\big(-\lambda K^d\,\Pr[T' \le t]\big),$$
where $T'$ is the first time that a node initially located u.a.r. in $Q_K$ detects $v$.

###### Proof.

Let $N$ be the number of nodes inside $Q_K$ at time 0. Each of these nodes is initially located uniformly at random inside $Q_K$, and the motion of each node does not depend on the locations of the other nodes. If we fix a given value for the trajectory $\bar v$, then the first time that a given node detects $v$ does not depend on the other nodes of the MGG and is distributed according to the conditional distribution of $T'$ given $\bar v$. (Note that this is not true if $\bar v$ is not fixed but random, because the random motion of $v$ makes the relative displacements of the nodes with respect to $v$ dependent.) Therefore, conditioning on $N$ and $\bar v$, we have $\Pr[T_K > t \mid N, \bar v] = \Pr[T' > t \mid \bar v]^N$, which yields

$$\Pr[T_K > t] = \mathbb{E}_{\bar v}\Big[\mathbb{E}_N\big[\Pr[T' > t \mid \bar v]^N\big]\Big] = \mathbb{E}_{\bar v}\Big[\exp\big(-\lambda K^d\,\Pr[T' \le t \mid \bar v]\big)\Big],$$

where we use the notation $\mathbb{E}_X$ to denote expectation with respect to the random variable $X$, and the last equality holds since $N$ is Poisson with mean $\lambda K^d$. For the lower bound, we appeal to Jensen’s inequality to obtain

$$\Pr[T_K > t] \ge \exp\big(-\lambda K^d\,\mathbb{E}_{\bar v}\big[\Pr[T' \le t \mid \bar v]\big]\big) = \exp\big(-\lambda K^d\,\Pr[T' \le t]\big),$$

which completes the proof. ∎

We now proceed to derive upper and lower bounds for $\Pr[T' \le t]$. Let $u$ be a node initially located u.a.r. in $Q_K$. For time steps $0 \le t_1 \le t_2 \le t$, let $M(t_1, t_2)$ be the expected number of time steps from $t_1$ to $t_2$ at which $u$ detects $v$. We bound $M(0, t)$ as follows.

###### Lemma 3.2.

Let $\lambda$ and $\sigma$ be arbitrary. There exist a constant $c \in (0, 1)$ and $K_0$ such that $c\,t/K^d \le M(0, t) \le 2t/K^d$ for all $K \ge K_0$.

###### Proof.

We use the mass transport principle. We take the initial intensity $\mu_0(x) = K^{-d}\mathbf{1}_{\{x \in Q_K\}}$ (the density of the location of $u$) and let $\mu_i$ be the intensity at time $i$, i.e., $\mu_i(y) = \int_{Q_K} \mu_0(x)\,\rho_i(x, y)\,dx$ for $i \ge 1$. At any time $i$, the probability that $u$ detects $v$ is given by the amount of mass inside $v_i + B(r)$. We can therefore write $M(0, t)$ as

$$M(0, t) = \sum_{i=0}^{t} \int_{v_i + B(r)} \mu_i(y)\,dy, \qquad (1)$$

where each term may be estimated using the translation-invariance property of $\rho_i$. Since $\mu_i(y) \le K^{-d}$ for all $y$ and $\mathrm{vol}(B(r)) = 1$, we obtain the upper bound $M(0, t) \le (t + 1)/K^d \le 2t/K^d$.

For the lower bound, we assume that $K$ is sufficiently large with respect to $t$ that the locations $v_0, \ldots, v_t$ all lie well inside $Q_K$ and only a negligible fraction of the mass has escaped $Q_K$ by time $t$. Then replacing the integral in (1) by an integral over the retained mass gives the lower bound

$$M(0, t) \ge c\,t/K^d,$$

where we used the fact that $\mu_i(y) \ge c\,K^{-d}$ for all $y \in v_i + B(r)$, and the result holds for some constant $c \in (0, 1)$. ∎

Our goal is to bound $\Pr[T' \le t]$ by decomposing $M(0, t)$ according to the time of first detection. Let $M_x(t_1, t_2)$ be the expected number of time steps from $t_1$ to $t_2$ at which $u$ detects $v$, given that the relative location of $u$ with respect to $v$ at time $t_1$ is $x$. The next lemma gives lower and upper bounds for $M_x$.

###### Lemma 3.3.

Let $\lambda$ and $\sigma$ be arbitrary. There exists an integer $s_0$ such that for all $s \ge s_0$ the following holds. There exist functions $\psi_L(s)$ and $\psi_U(s)$ such that $\psi_L(s) \le M_x(t_1, t_1 + s) \le \psi_U(s)$ uniformly over $x \in B(r)$. Moreover, there are constants $c_1, c_2 > 0$ such that

$$\psi_L(s) \ge c_1 \log s \ \text{ and } \ \psi_U(s) \le c_2 \log s \ \text{ for } d = 2, \qquad c_1 \le \psi_L(s) \le \psi_U(s) \le c_2 \ \text{ for } d \ge 3.$$

The bounds for $\psi_L$ hold both for the case where $v$ does not move and for the case where $v$ moves according to Brownian motion with variance $\sigma^2$. The bounds for $\psi_U$ hold uniformly over all $x \in \mathbb{R}^d$.

###### Proof.

For any $i \ge 0$, let $D_i$ be the indicator random variable for the event that $u$ detects $v$ at time $t_1 + i$, assuming that at time $t_1$ the target $v$ is located at the origin and $u$ is located at $x$. Clearly, $M_x(t_1, t_1 + s) = \sum_{i=0}^{s} \mathbb{E}[D_i]$. Recall that $\rho_i$ is the transition density of a node over $i$ steps. Hence,

$$\mathbb{E}[D_i] = \int_{B(r)} \rho_i(x, y)\,dy \le (2\pi\sigma^2 i)^{-d/2} \quad \text{for } i \ge 1.$$

The upper bound follows by setting $\psi_U(s) = 1 + \sum_{i=1}^{s} (2\pi\sigma^2 i)^{-d/2}$, which is $O(\log s)$ for $d = 2$ and $O(1)$ for $d \ge 3$. Note that this upper bound holds for arbitrary $x$.

Now we derive the lower bound. We use the fact that $\rho_i(x, y) = (2\pi\sigma^2 i)^{-d/2}\exp(-\|x - y\|^2/(2\sigma^2 i))$. If $v$ is non-mobile, then for $x, y \in B(r)$ (recall that we assume $v$ to be at the origin) the triangle inequality gives $\|x - y\| \le 2r$. Thus, $\mathbb{E}[D_i] \ge (2\pi\sigma^2 i)^{-d/2}\exp(-2r^2/(\sigma^2 i))$. We take $i_0$ to be the smallest integer such that $\exp(-2r^2/(\sigma^2 i_0)) \ge 1/2$, set $\psi_L(s) = \sum_{i=i_0}^{s} \tfrac12 (2\pi\sigma^2 i)^{-d/2}$, and the result follows since $s$ is sufficiently large with respect to $i_0$.

If $v$ moves according to a Brownian motion with variance $\sigma^2$, then the relative motion of $u$ with respect to $v$ is a Brownian motion with variance $2\sigma^2$, and we average over the displacement of $v$ to get the analogous expression for $\mathbb{E}[D_i]$. Restricting the integration to displacements of magnitude $O(\sigma\sqrt{i})$ and using the simple bounds on the Gaussian density over that range, we obtain

$$\mathbb{E}[D_i] \ge c'(2\pi\sigma^2 i)^{-d/2}$$

for some constant $c' > 0$. Now, we set $\psi_L(s)$ accordingly and the result follows since $s$ is sufficiently large with respect to $s_0$. ∎

Remark: The bounds for $M_x$ change substantially from $d = 2$ to $d = 3$, reflecting the dichotomy between recurrent and transient random walks in $\mathbb{R}^2$ and $\mathbb{R}^3$: $u$ returns to a neighborhood of $v$ infinitely often for $d = 2$ and only finitely often for $d \ge 3$. (Note that $M_x$ measures the expected number of returns of $u$ to a neighborhood around $v$ in a given time interval.)

We now use Lemma 3.3 to derive upper and lower bounds for $\Pr[T' \le t]$.

###### Lemma 3.4.

Let the functions $\psi_L$ and $\psi_U$ be as in Lemma 3.3. For any constant $\gamma > 1$ we have
$$\frac{M(0, t)}{\psi_U(t)} \le \Pr[T' \le t] \le \frac{M(0, \gamma t)}{\psi_L((\gamma - 1)t)}.$$

###### Proof.

We apply the straightforward equation

$$M(0, t') = \sum_{s=0}^{t'} \Pr[T' = s]\;\mathbb{E}\big[M_X(s, t')\big],$$

where the random variable $X$ denotes the relative location of $u$ with respect to $v$ given that $T' = s$. Note that $X \in B(r)$, since the condition $T' = s$ implies that the distance between $u$ and $v$ at time $s$ is at most $r$. Using Lemma 3.3 with $t' = t$ we obtain

$$M(0, t) \le \Pr[T' \le t]\,\psi_U(t). \qquad (2)$$

Also, since $\psi_L$ is non-decreasing, we can take an arbitrary constant $\gamma > 1$ and use the fact that $M_x(s, \gamma t) \ge \psi_L((\gamma - 1)t)$ for all $s \le t$ together with Lemma 3.3 to write

$$M(0, \gamma t) \ge \Pr[T' \le t]\,\psi_L((\gamma - 1)t). \qquad (3)$$

∎

We are now in a position to conclude the proof of Theorem 1.1. Plugging Lemmas 3.2 and 3.3 into Lemma 3.4, and using Lemma 3.1, we obtain a constant $K_0$, and constants $c_1 \ge c_2 > 0$ depending only on $\lambda$, $\sigma$ and $d$, such that for all $K \ge K_0$ and sufficiently large $t$,

$$\exp(-c_1 t/\log t) \le \Pr[T_K > t] \le \exp(-c_2 t/\log t) \qquad (4)$$

for $d = 2$, and

$$\exp(-c_1 t) \le \Pr[T_K > t] \le \exp(-c_2 t) \qquad (5)$$

for $d \ge 3$. Theorem 1.1 then follows by taking the limit as $K \to \infty$.

Remark: As should be clear from the proof, the upper bounds in (4) and (5) hold for arbitrary motion of $v$ as long as $v$ moves independently of the locations of the nodes of the MGG. The lower bounds also hold in more generality: e.g., if $v$ moves according to Brownian motion with a variance different from $\sigma^2$, or indeed with any motion that has sufficiently large “variance” in all directions. (Specifically, the lower bounds hold if the density $g_s$ for the motion of $v$ after $s$ steps satisfies the following property: there exist positive constants $\alpha$ and $\beta$ such that $g_s(x, y) \ge \alpha\,\rho_s(x, y)$ for all $s \ge 1$ and all $x, y$ with $\|x - y\| \le \beta\sigma\sqrt{s}$.) On the other hand, adding a random drift to the nodes of the MGG can change the detection time substantially: if each node $u$ of the MGG moves according to Brownian motion with drift $\delta_u$ and variance $\sigma^2$, where the $\delta_u$ are i.i.d. random variables, then our proof can be adapted to show that, under mild conditions[^4] on the distribution of the $\delta_u$, the detection time has an exponential tail in all dimensions $d \ge 2$ for arbitrary (independent) motion of $v$. We omit the details.

[^4]: Note that this statement cannot hold in full generality; if all nodes of the MGG have the same drift, then this is equivalent to the case without drift up to translations of the trajectory of $v$.

## 4 Percolation time

In this section we prove Theorem 1.2. We consider a MGG with intensity $\lambda > \lambda_c$ (i.e., above the percolation threshold), and study the random variable $T_{\mathrm{perc}}$ defined as the first time at which a node $v$, initially placed at the origin independently of the nodes of the MGG, belongs to the infinite component of the graph. We derive an upper bound for the tail $\Pr[T_{\mathrm{perc}} > t]$ as $t \to \infty$.

We begin by stating a proposition that will be a key ingredient in our analysis. We consider a large cube and tessellate it into small cubes called “cells.” The proposition says that, if all cells have sufficiently many nodes at a given time $i$, then at time $i + \Delta$, for suitably large $\Delta$, the point process induced by the locations of the nodes contains a fresh Poisson point process with only slightly reduced intensity inside a smaller cube. We believe this result is of independent interest. With this in mind, we state the proposition below in a slightly more general setting than is needed here. Its proof is deferred to the end of the section.

###### Proposition 4.1.

Fix $K$ and $\ell$, and consider the cube $Q_{2K}$ tessellated into cells of side length $\ell$. Let $\Xi_0$ be an arbitrary point process at time 0 that contains at least $\beta\lambda\ell^d$ nodes in each cell of the tessellation, for some $\beta \in (0, 1)$. Let $\Xi_\Delta$ be the point process obtained at time $\Delta$ from $\Xi_0$ by allowing the nodes to move according to Brownian motion with variance $\sigma^2$. Fix $\varepsilon \in (0, 1)$ and let $\Psi$ be a fresh Poisson point process with intensity $(1 - \varepsilon)\beta\lambda$. Then there exist a coupling of $\Xi_\Delta$ and $\Psi$ and a constant $c$ depending only on $\varepsilon$, $\sigma$ and $d$ such that, if $\Delta \ge c\,\ell^2$, the nodes of $\Psi$ inside the cube $Q_K$ are a subset of the nodes of $\Xi_\Delta$ with probability tending to 1 as $\ell \to \infty$.

Now we proceed to the proof of Theorem 1.2. We first take a sufficiently small parameter $\varepsilon > 0$ such that $(1 - \varepsilon)\lambda > \lambda_c$. (This is always possible as we are assuming $\lambda > \lambda_c$.) In what follows, we omit the dependence of other parameters on $\lambda$ and $\varepsilon$, as we are considering these to be fixed.

Let $E_i$ be the event that $v$ does not belong to the infinite component at time $i$. Then the event $\{T_{\mathrm{perc}} > t\}$ is equivalent to $\bigcap_{i=0}^{t} E_i$. We define an integer parameter $\Delta$ and consider the process obtained by observing the MGG every $\Delta$ time steps. (To simplify the notation we assume w.l.o.g. that $t/\Delta$ is an integer.) In other words, instead of looking at the event $\bigcap_{i=0}^{t} E_i$ we consider the event $\bigcap_{j=0}^{t/\Delta} E_{j\Delta}$, which we henceforth denote by $E$. Since the occurrence of $\{T_{\mathrm{perc}} > t\}$ implies $E$, we have $\Pr[T_{\mathrm{perc}} > t] \le \Pr[E]$. Our goal in introducing $\Delta$ is to allow nodes to move further between consecutive observed time steps; we will choose the value of $\Delta$ later.

Let $C$ be a sufficiently large constant and fix $K = Ct$. We will confine our attention to the cube $Q_{2K}$. We take a parameter $\ell$ and tessellate $Q_{2K}$ into cubes of side length $\ell$ (see Figure 1(a)). We refer to each such cube as a “cell.” Later we will tie together the values of $\Delta$ and $\ell$, and will choose them to optimize our upper bound for $\Pr[E]$. For the moment we only assume that the tessellation is non-trivial, in the sense that both $\ell$ and $K/\ell$ are $\omega(1)$ as functions of $t$.

For each time step $j\Delta$, the expected number of nodes inside a given cell is $\lambda\ell^d$. We say that a cell is dense at a given time if it contains at least $\beta\lambda\ell^d$ nodes, where $\beta \in (0, 1)$ is as in Proposition 4.1. Let $D_j$ be the event that all cells are dense at time $j\Delta$, and let $D = \bigcap_{j=0}^{t/\Delta} D_j$. The lemma below shows that $D$ occurs with high probability.

###### Lemma 4.2.

With the above notation, there exists a constant $c > 0$ depending only on $\beta$ such that
$$\Pr[\bar D] \le \Big(\frac{t}{\Delta} + 1\Big)\Big(\frac{2K}{\ell}\Big)^d \exp(-c\lambda\ell^d).$$

###### Proof.

At any given time step $j\Delta$, by a standard large deviation bound for a Poisson random variable (cf. Lemma A.1), a given cell has at least $\beta\lambda\ell^d$ nodes with probability at least $1 - \exp(-c\lambda\ell^d)$. The proof is completed by taking the union bound over all $(2K/\ell)^d$ cells and all $t/\Delta + 1$ time steps. ∎

Recall that $v_i$ is the location of $v$ at time $i$. Define $F_i$ as the event that $v$ is located inside $Q_K$ at time $i$, and let $F = \bigcap_{i=0}^{t} F_i$. The next lemma bounds the probability that $v$ leaves $Q_K$ before time $t$.

###### Lemma 4.3.

There exists a constant $c > 0$ such that $\Pr[\bar F] \le 2dt\exp\big(-cK^2/(\sigma^2 t)\big)$.

###### Proof.

We fix a time step $i$ and then apply the union bound over time. The event $F_i$ corresponds to $v$ not having moved a distance more than $K/2$ in any coordinate. Therefore,

$$\Pr[\bar F_i] \le d\,\Pr\big[|\mathcal{N}(0, \sigma^2 i)| > K/2\big].$$

Then we use a standard large deviation bound for the normal distribution (see Lemma A.2) to conclude that

$$\Pr[\bar F_i] \le 2d\exp\big(-K^2/(8\sigma^2 i)\big).$$

Since the bound above increases with $i$, we can conclude that

$$\Pr[\bar F] \le 2dt\exp\big(-K^2/(8\sigma^2 t)\big),$$

and the result follows. ∎

For each time step $j\Delta$, we define $R_j$ to be the cube $Q_K$ shifted randomly so that $v_{j\Delta}$ (the location of $v$ at time $j\Delta$) is uniformly random in $R_j$ (see Figure 1(b)). A crossing component of $R_j$ is a connected set of nodes within $R_j$ that contains a path connecting every pair of opposite faces of $R_j$. (A path connects two faces of $R_j$ if each face is within distance $r$ of one of the path's endpoints.)
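The notion of a crossing component can be made concrete in a small simulation. The helper below (our own sketch, not the paper's construction) checks for a left-to-right crossing of a square in $d = 2$ by breadth-first search from the nodes near one face.

```python
import math, random
from collections import deque

def has_crossing(points, r, side):
    """Sketch of the 'crossing component' notion (our own helper): does an
    RGG on [0, side]^2 contain a component with one node within distance r
    of the face x = 0 and another within r of the face x = side?  BFS."""
    n = len(points)
    adj = [[j for j in range(n)
            if j != i and math.dist(points[i], points[j]) <= r]
           for i in range(n)]
    seeds = [i for i in range(n) if points[i][0] <= r]
    seen, queue = set(seeds), deque(seeds)
    while queue:
        i = queue.popleft()
        if points[i][0] >= side - r:
            return True
        for j in adj[i]:
            if j not in seen:
                seen.add(j)
                queue.append(j)
    return False

rng = random.Random(9)
dense = [(rng.uniform(0, 8), rng.uniform(0, 8)) for _ in range(200)]
sparse = [(rng.uniform(0, 8), rng.uniform(0, 8)) for _ in range(10)]
```

A full check of the definition would test all pairs of opposite faces; one pair suffices to illustrate the idea.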

For each $j$, let $H_j$ be the event that all the crossing components of $R_j$ are contained in the infinite component at time $j\Delta$. (For definiteness we assume that $H_j$ holds if $R_j$ has no crossing component.) Let $H = \bigcap_{j=0}^{t/\Delta} H_j$. The next lemma follows from a result of Penrose and Pisztora [28, Theorem 1].

###### Lemma 4.4.

There exists a constant $c > 0$ such that
$$\Pr[\bar H] \le \Big(\frac{t}{\Delta} + 1\Big)\exp(-cK^{d-1}).$$

###### Proof.

By stationarity we know that $\Pr[\bar H_j]$ is the same for all $j$. For any fixed $j$, [28, Theorem 1] gives that $\Pr[\bar H_j] \le \exp(-cK^{d-1})$ for some constant $c > 0$. (In fact, [28, Theorem 1] handles an event more restrictive than $H_j$, which among other things considers unique crossing components of $R_j$.) Using the union bound over the $t/\Delta + 1$ time steps we obtain the claimed bound on $\Pr[\bar H]$, and the result follows. ∎

We now proceed to derive a bound on $\Pr[E]$. We take $E'_j$ to be the event that $v$ does not belong to a crossing component of $R_j$ at time $j\Delta$, and define $E' = \bigcap_{j=0}^{t/\Delta} E'_j$. Note that $E'_j$ is a decreasing event, in the sense that if $E'_j$ occurs then it also occurs after removing any arbitrary collection of nodes from the MGG. Clearly $E \cap H \subseteq E'$. By elementary probability,

$$\Pr[E] \le \Pr[E'] + \Pr[\bar H]. \qquad (6)$$

Note that we use $H$ only to replace $E$ by $E'$ in (6); this helps to control the dependencies among time steps, since