Mobile Geometric Graphs, and Detection and Communication Problems in Mobile Wireless Networks
Static wireless networks are by now quite well understood mathematically through the random geometric graph model. By contrast, there are relatively few rigorous results on the practically important case of mobile networks, in which the nodes move over time; moreover, these results often make unrealistic assumptions about node mobility such as the ability to make very large jumps. In this paper we consider a realistic model for mobile wireless networks which we call mobile geometric graphs, and which is a natural extension of the random geometric graph model. We study two fundamental questions in this model: detection (the time until a given “target” point—which may be either fixed or moving—is detected by the network), and percolation (the time until a given node is able to communicate with the giant component of the network). For detection, we show that the probability that the detection time exceeds is in two dimensions, and in three or more dimensions, under reasonable assumptions about the motion of the target. For percolation, we show that the probability that the percolation time exceeds is in all dimensions . We also give a sample application of this result by showing that the time required to broadcast a message through a mobile network with nodes above the threshold density for existence of a giant component is with high probability.
A principal focus in wireless network research today is on mobile ad hoc networks, in which nodes moving in space cooperate to relay packets on behalf of other nodes without any centralized infrastructure. Although the static properties of such networks are by now quite well understood mathematically, the additional challenges posed by node mobility have so far received relatively little attention from the theory community. In this paper we consider a mathematical model for mobile wireless networks, which we call mobile geometric graphs and which is a natural extension of the widely studied random geometric graph model of static networks. We study two fundamental problems in this model: the detection problem (time until a fixed or moving target is detected by the network), and the percolation problem (time until a given node is able to communicate with many other nodes).
In the random geometric graph (RGG) model , nodes are distributed in a region according to a Poisson point process of intensity (i.e., the number of nodes in any subregion is Poisson with mean , where is the volume of ). Two nodes are connected by an edge iff their distance is at most , where the parameter is the transmission range that specifies the distance over which nodes may send and receive information; since the structure of the RGG depends only on the product (where is the radius- ball in ) , we may fix so that and parameterize the model on only. We shall take to be a cube of volume (so that the expected number of nodes in is ), and consider the limiting behavior as .
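Since the paper's symbols are elided above, the following minimal sketch illustrates the RGG construction; the names `lam` (intensity), `r` (transmission range), and `ell` (side of the cube) are placeholders for the paper's notation, all numeric values are illustrative, and the quadratic edge scan is chosen for clarity rather than efficiency.

```python
import math
import random

def sample_rgg(lam, r, ell, dim=2, seed=0):
    """Sample a random geometric graph: a Poisson(lam * ell**dim) number of
    nodes placed uniformly in the cube [0, ell]^dim, with an edge between
    every pair of nodes at Euclidean distance at most r."""
    rng = random.Random(seed)
    mean = lam * ell ** dim
    # Poisson node count: number of rate-1 exponential arrivals before `mean`.
    n, t = 0, rng.expovariate(1.0)
    while t < mean:
        n += 1
        t += rng.expovariate(1.0)
    nodes = [tuple(rng.uniform(0.0, ell) for _ in range(dim)) for _ in range(n)]
    # Quadratic edge scan, for clarity rather than efficiency.
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if math.dist(nodes[i], nodes[j]) <= r]
    return nodes, edges
```

Conditioned on the node count, the nodes are placed independently and uniformly, which is exactly the conditional description of a Poisson point process on the cube.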
Clearly, increasing increases the average degree of the nodes. As is well known, there are two critical values of at which the connectivity properties of the RGG undergo a significant change. First there is the percolation threshold (a constant that depends on the dimension ), above which the network w.h.p. contains a unique giant connected component.
A central feature of many ad hoc networks is the fact that the nodes are moving in space. This is the case, for example, in vehicular networks (where sensors are attached to cars, buses or taxis), surveillance and disaster recovery applications where mobile sensors are used to survey an area, and pocket-switched networks based on mobile communication devices such as cellphones. Such networks are also frequently modeled using RGGs, augmented by motion of the nodes. We will employ the following model, which we refer to as mobile geometric graphs (MGGs) and which is essentially equivalent to the “dynamic boolean model” introduced in  in the context of dynamic continuum percolation. We begin at time 0 with an (infinite) Poisson point process of nodes, each of which then moves according to an independent Brownian motion.
Once mobility is injected, the questions of interest naturally change from those in the static case. For example, connectivity no longer plays such a central role because mobility may allow nodes to exchange messages even in the absence of a path between them at any given time: namely, can route its message to along a time-dependent path, opportunistically using other nodes to relay the message towards . Networks of this kind are often termed “delay tolerant networks” . This allows us to focus not on the rather artificial connectivity regime (where grows with ), but instead on the case where (and hence the average degree) is constant. Keeping constant is obviously highly desirable as it makes the model much more realistic and scalable.
There are rather few rigorous results on wireless networks with mobile nodes, and those that do exist typically either make unrealistic assumptions about node mobility (such as unbounded range of motion  or no change in direction ), or work in the connectivity regime which, as we have seen, requires unbounded density or transmission range . (See the Related Work section for more details.) In this paper we study two fundamental questions for mobile networks assuming only constant average degree and bounded range of motion (i.e., constant values of the parameters and ).
Detection. A central issue in surveillance and remote sensing applications is the ability of the network to detect a “target” (which may be either fixed or moving), in the sense that there is a node within distance at most of . It is well known  that, for a static RGG, a fixed target can be detected only with constant probability unless the average degree grows with . In the mobile case, we may hope to achieve detection over time with constant average degree (even below the percolation threshold). In this scenario, the detection time is defined as the number of steps until a target initially at the origin is detected by the MGG. Recent work of Liu et al.  shows that the detection time in two dimensions is exponentially distributed when the nodes of the network move in fixed directions. In the more realistic MGG model, we are able to prove the following result which holds in all dimensions (see Section 3):
The implied constants here depend only on , and the dimension . Thus the tail of the detection time is exponential in three and higher dimensions, and exponential with a logarithmic correction in two dimensions. We note that, as is evident from the proof, this dichotomy reflects the difference between recurrent and transient random walks in two and in three or more dimensions respectively. We also note that the upper bound in Theorem ? holds for arbitrary motion of the target (provided it is independent of the motion of the nodes); and the lower bound holds for any “sufficiently random” motion of the target.
We should point out that, for the special case of a fixed target, a slightly stronger version of Theorem ?, with a tight constant in the exponent, follows from classical results on the “Wiener sausage” in continuum percolation. (This was pointed out to us by Yuval Peres ; see Related Work for details.) However, it is not clear how to extend this approach to the case of a moving target. Our proof is elementary and based on an application of the mass transport principle.
A fundamental question in mobile networks is whether a node can efficiently communicate with other nodes even when the network is not connected at any given time. In the MGG model, this question may naturally be formulated by considering a constant intensity (i.e., above the percolation threshold) and asking how long it takes until a node initially at the origin belongs to the giant component (or the infinite component in the limit ). We call this the percolation time . It should be clear that the percolation time can be used to derive bounds on other natural quantities, such as the time for a node to broadcast information to all other nodes (see Corollary ? below). As far as we are aware the percolation time has not been investigated before, largely because previous work on RGGs has focused on networks above the connectivity threshold. However, it appears to be a fundamental question in the mobile context.
The detection time clearly provides a lower bound on the percolation time, so we may deduce from Theorem ? above that is at least for and at least for . We are able to prove the following stretched exponential upper bound in all dimensions (see Section 4):
Again, the constant in the depends only on , and . There is a gap between this upper bound and the lower bound from Theorem ?. We conjecture that the true tail behavior of is for and for .
Theorem ? is the main technical contribution of the paper; we briefly mention some of the ideas used in the proof. The key technical challenge is the dependence among the RGGs at different times. To overcome this, we partition into subregions of suitable size and couple the evolution of the nodes in each subregion with that of a fresh Poisson point process of slightly smaller intensity which is still larger than the critical value . After a number of steps that depends on the size of the subregion, we are able to arrange that the coupled processes match up almost completely. As a result, we can conclude that our original MGG process, observed every steps, contains a sequence of independent Poisson point processes with intensity . (This fact, which we believe is of wider applicability, is formally stated in Proposition ? in Section 4.) This independence is sufficient to complete the proof. The slack in the bound comes from the “delay” .
To illustrate a sample application of Theorem ?, we consider the time taken to broadcast a message in a network of finite size. Consider a MGG in a cube of volume (so the expected number of nodes is proportional to the volume).
There are many theoretical results on routing and other algorithmic questions on (static) RGGs; we mention just a few highlights here. The seminal work of Gupta and Kumar  (see also  for refinements) examined the information-theoretic capacity (or throughput) of such networks above the connectivity threshold, i.e., the number of bits per unit time that each node can transmit to some (randomly chosen) destination node in steady state, assuming constant size buffers in the network. The capacity per unit node is , which tends to 0 as , suggesting a fundamental limitation on the scalability of such static networks.
The detection problem has received much attention. In the static case detection is essentially equivalent to coverage of the region , which requires that the network be connected. In the absence of coverage, Balister et al.  determine the maximum diameter of the uncovered regions, while Dousse et al.  prove that, for any , the detection time for a target moving in a fixed direction has an exponential tail. (Note that this is not a mobility result as the nodes are fixed.)
The question of broadcasting within the giant component of a RGG above the percolation threshold was recently analyzed by Bradonjić et al. , who also show that the graph distance between any two (sufficiently distant) nodes is at most a constant factor larger than their Euclidean distance. Cover times for random walks on (connected) RGGs were investigated by Avin and Ercal  and Cooper and Frieze , while the effect of physical obstacles that obstruct transmission was studied by Frieze et al. .
The scope of mathematically rigorous work on RGGs with mobility is much more limited. We briefly summarize it here.
Motivated by the fact mentioned above  that the capacity of static networks goes to zero as , Grossglauser and Tse  (see also ) showed how to exploit mobility to achieve constant capacity using a two-hop routing scheme. However, these results require the unrealistic assumption that nodes move a distance comparable to the diameter of the entire region at each step. El Gamal et al.  study the tradeoff between capacity and delay in a realistic mobility model but above the connectivity threshold. Clementi et al.  show how to exploit mobility to enable broadcast in a RGG sufficiently far above the percolation threshold. However, this result again assumes that the range of motion of the nodes is unbounded (i.e., grows with ).
As mentioned earlier, the detection problem was addressed by Liu et al. , assuming that each node moves continuously in a fixed randomly chosen direction; they show that the time it takes for the network to detect a target is exponentially distributed with expectation depending on the intensity . Also, for the special case of a stationary target, as observed in  a slightly stronger version of Theorem ?, with tight constants in the exponent, can be deduced from classical results on continuum percolation: namely, in this case it is shown in  that (in continuous time) , where is the expected volume of the “Wiener sausage” of length (essentially the trajectory of a Brownian motion “fattened” by a disk of radius ). This volume in turn is known quite precisely .
A model essentially equivalent to MGGs was introduced under the name “dynamic boolean model” by Van den Berg et al. , who studied the measure of the set of times at which an infinite component exists. Finally, recent work of Díaz et al.  in a similar model determines, for networks exactly at the connectivity threshold, the expected length of time for which the network stays connected (or disconnected) as the nodes move. However, this question makes sense only for very large values of (growing with ) and thus falls outside the scope of our investigations.
For any , let be the -dimensional ball centered at the origin with radius . Similarly, let be the cube with side-length centered at the origin and with sides parallel to the axes of . For any point and set , we define as the Minkowski sum . The volume of a set is denoted .
A “point process” is a random collection of points in ; for a formal treatment of this topic, the reader is referred to . To avoid ambiguity, we refer to the points of a point process as nodes and reserve the word points for arbitrary locations in . We are mostly interested in Poisson point processes. A Poisson point process with intensity in a region is defined by a single property: for every bounded Borel set , the number of points in is a Poisson random variable with mean . We will make use of the following standard properties of Poisson point processes: (1) for disjoint sets , the numbers of points in and in are independent; (2) conditioned on the number of nodes in , each such node is located independently and uniformly at random in ; (3) [thinning] if each node of a Poisson point process with intensity is deleted with probability , the result is a Poisson point process with intensity ; (4) [superposition] the union of two Poisson point processes in with intensities and is a Poisson point process with intensity . In some of our proofs we will make use of non-homogeneous Poisson point processes, whose intensity is a function of position . In such a process the expected number of nodes in a set is .
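The thinning property (3) can be checked empirically. The sketch below, with illustrative parameters, deletes each member of a Poisson(10) population independently with probability 0.7 and verifies the Poisson signature of the survivors, namely mean approximately equal to variance (here both near 3).

```python
import random

rng = random.Random(42)

def sample_poisson(mean):
    # Number of rate-1 exponential arrivals before time `mean` is Poisson(mean).
    n, t = 0, rng.expovariate(1.0)
    while t < mean:
        n += 1
        t += rng.expovariate(1.0)
    return n

# Thinning: delete each node of a Poisson(10) population independently with
# probability 0.7; the survivors should be Poisson with mean 10 * 0.3 = 3,
# and a Poisson variable has variance equal to its mean.
trials = 20000
thinned = [sum(1 for _ in range(sample_poisson(10.0)) if rng.random() >= 0.7)
           for _ in range(trials)]
mean_thin = sum(thinned) / trials
var_thin = sum((x - mean_thin) ** 2 for x in thinned) / trials
```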
Fix parameters , and let be the cube of volume in . Let be a Poisson point process over with intensity . A random geometric graph (RGG) is constructed by taking the node set to be the nodes of and creating an edge between every pair of nodes whose Euclidean distance is at most . The parameter is called the transmission range. Since is a Poisson point process, the expected number of nodes in is .
It is well known  that as the random graph model induced by depends only on the product . For this reason, we will always fix so that and parameterize the model only on . Note that with this convention, in the limit as , is also the expected degree of any node in .
Using a Poisson point process rather than a fixed number of nodes is a standard trick that simplifies the mathematics. Most results in this model can be translated to a model with a fixed number of nodes in using a technique known as “de-Poissonization” .
Many asymptotic properties of as are studied in the monograph by Penrose . For example, it is known that is a threshold for connectivity, in the sense that if then is connected w.h.p., and if then is disconnected w.h.p. Another important critical value is the percolation threshold (a constant that depends on the dimension ); if then w.h.p. contains a unique “giant” connected component with nodes, while all other components are of size ; on the other hand, if then w.h.p. all connected components of have size . The value of is not known exactly in any dimension . However, for the rigorous bounds are known , while Balister et al.  used Monte Carlo methods to deduce that with confidence .
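A crude Monte Carlo probe of the percolation threshold, in the spirit of the experiments cited above, measures the fraction of nodes in the largest component as the intensity varies; all parameter values below are illustrative and the transmission range is fixed at 1, mirroring the convention in the text.

```python
import math
import random

def largest_component_fraction(lam, ell, seed):
    """Sample an RGG with unit transmission range in [0, ell]^2 at intensity
    lam and return the fraction of nodes in its largest connected component."""
    rng = random.Random(seed)
    mean = lam * ell * ell
    n, t = 0, rng.expovariate(1.0)
    while t < mean:
        n += 1
        t += rng.expovariate(1.0)
    pts = [(rng.uniform(0.0, ell), rng.uniform(0.0, ell)) for _ in range(n)]
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) <= 1.0:
                parent[find(i)] = find(j)
    sizes = {}
    for i in range(n):
        root = find(i)
        sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values()) / n if n else 0.0
```

Well below the threshold the largest component holds a vanishing fraction of the nodes; well above it, almost all of them.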
Finally, we remark that in the limit as (that is, when the Poisson point process is defined over the whole of ) the percolation threshold still exists and is characterized by the appearance of a unique infinite component (or “infinite cluster”) with probability 1 for any . In this limit the graph is disconnected with probability 1 for any value of .
We define our mobile geometric graph (MGG) model by taking a Poisson point process with intensity in at time 0 and letting each node move in continuous time according to an independent Brownian motion. We sample the locations of the nodes at discrete time steps and use these locations to define a sequence of random geometric graphs with transmission range . We base our MGG model on the infinite volume to avoid having to handle boundary effects on the motion of the nodes. Results in this model can be translated to finite regions with a suitable convention—such as wraparound or reflection—to handle the motion of nodes at the boundaries.
More formally, let be a Poisson point process with intensity over . We take a parameter , and with each node we associate an independent -dimensional Brownian motion that starts at the location of in and has variance . Now, for any , we define as the point process obtained by putting a node at for each . A MGG is then the collection of graphs where and is fixed so that . (Note that, as in the static case, fixing the value of may be done w.l.o.g.)
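The discrete-time observation of the node motion can be sketched as follows; `sigma` is a placeholder for the paper's (elided) step standard deviation, and the initial configuration is passed in explicitly rather than sampled.

```python
import random

def mgg_snapshots(nodes, sigma, steps, seed=1):
    """Evolve each node of an initial configuration by independent Brownian
    increments observed at integer times: an independent N(0, sigma^2)
    displacement per coordinate per time step."""
    rng = random.Random(seed)
    cur = [list(p) for p in nodes]
    traj = [[tuple(p) for p in cur]]
    for _ in range(steps):
        for p in cur:
            for k in range(len(p)):
                p[k] += rng.gauss(0.0, sigma)
        traj.append([tuple(p) for p in cur])
    return traj
```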
It is an easy consequence of the mass transport principle (see below) that each , viewed in isolation, is itself a Poisson point process with intensity . This means that the sequence is stationary and therefore, when viewed in isolation, is a random geometric graph over . Thus, for example, if then each contains an infinite component with probability 1.
For two points and a time step , we define as the probability density function for a node located at position at time to be at position at time . Since nodes move according to -dimensional Brownian motion, we have .
In some situations, it is useful to regard as a mass transport function. For example, suppose nodes are initially distributed according to a Poisson point process with intensity in a region ; we may view this as a Poisson point process over with (non-homogeneous) intensity (or “mass function”) . Using the thinning and superposition properties, it is easy to check that the distribution of the nodes at time is a Poisson point process with intensity . This interpretation can be used, for example, to show that in a MGG is a Poisson point process with intensity for all , as claimed above.
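The claim that each snapshot remains a Poisson point process of the same intensity can be checked numerically. The sketch below (all values illustrative) places a Poisson process on a two-dimensional torus, applies one Brownian step to every node, and compares the mean count in a fixed unit window before and after; the torus stands in for the infinite-volume setting by removing boundary effects.

```python
import random

rng = random.Random(7)
ell, lam, sigma, trials = 10.0, 2.0, 0.8, 2000
counts_before, counts_after = [], []
for _ in range(trials):
    # Poisson(lam * ell^2) nodes placed uniformly on a 2-d torus of side ell.
    mean = lam * ell * ell
    n, t = 0, rng.expovariate(1.0)
    while t < mean:
        n += 1
        t += rng.expovariate(1.0)
    pts = [[rng.uniform(0.0, ell), rng.uniform(0.0, ell)] for _ in range(n)]
    counts_before.append(sum(1 for x, y in pts if x < 1.0 and y < 1.0))
    for p in pts:  # one Brownian step per node, with wraparound
        p[0] = (p[0] + rng.gauss(0.0, sigma)) % ell
        p[1] = (p[1] + rng.gauss(0.0, sigma)) % ell
    counts_after.append(sum(1 for x, y in pts if x < 1.0 and y < 1.0))
# Both means should be close to lam * (unit window area) = 2.
mean_before = sum(counts_before) / trials
mean_after = sum(counts_after) / trials
```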
In this section we prove Theorem ?. We consider the detection time for a node initially placed at the origin independently of the MGG . We say that a node detects at time step if the distance between and at time step is at most , and we define as the first time that is detected by some node of . Our goal is to derive tight bounds for the tail of . In the proof we consider the cases where is either non-mobile or moves according to Brownian motion with variance . We discuss some extensions at the end of the section.
It will be convenient to restrict attention to the nodes of that are initially inside the cube , where is a suitably chosen parameter. We define as the first time a node initially inside detects . Note that clearly , where the limit exists since is monotone and bounded as a function of . We let be the locations of in the first steps. The following lemma relates the tail of to the tail of an analogous random variable for a single random node in .
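For intuition, the detection time for a fixed target, restricted as in the text to nodes starting in a finite box, can be estimated by simulation. All parameter values in the sketch below are illustrative, and the function name and signature are ours, not the paper's.

```python
import math
import random

def detection_time(lam, r, sigma, half_width, max_steps, seed):
    """Monte Carlo sketch: nodes start as a Poisson process of intensity lam
    in the square [-half_width, half_width]^2 and take independent Brownian
    steps; return the first step at which some node is within distance r of
    a fixed target at the origin (max_steps if that never happens)."""
    rng = random.Random(seed)
    mean = lam * (2.0 * half_width) ** 2
    n, t = 0, rng.expovariate(1.0)
    while t < mean:
        n += 1
        t += rng.expovariate(1.0)
    pts = [[rng.uniform(-half_width, half_width),
            rng.uniform(-half_width, half_width)] for _ in range(n)]
    for step in range(max_steps + 1):
        if any(math.hypot(x, y) <= r for x, y in pts):
            return step
        for p in pts:
            p[0] += rng.gauss(0.0, sigma)
            p[1] += rng.gauss(0.0, sigma)
    return max_steps
```

As expected, denser networks detect the target markedly sooner on average.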
Let be the number of nodes inside at time . Each of these nodes is initially located uniformly at random inside , and the motion of each node does not depend on the locations of the other nodes. If we fix a given value for , then the first time that a given node detects does not depend on the other nodes of and is distributed according to the conditional distribution of given . (Note that this is not true if is not fixed but random, because the random motion of makes the relative displacements of the nodes of with respect to dependent.) Therefore, conditioning on and , we have , which yields
where we use the notation to denote expectation with respect to the random variable , and the last equality holds since is Poisson with mean . For the lower bound, we appeal to Jensen’s inequality to obtain
which completes the proof.
We now proceed to derive upper and lower bounds for . Let be a node initially located u.a.r. in . For time steps , let be the expected number of time steps from to at which detects . We bound as follows.
We use the mass transport principle. We assume the initial intensity and let be the intensity at at time , i.e., for . At any time , the probability that detects is given by the ratio between the amount of mass inside and the total amount of mass . Noting that for this ratio is , we can write as
where the last step follows from the translation-invariance property of . Since , we obtain the upper bound from .
For the lower bound, let . We assume that is sufficiently large such that . Then replacing the integral over in (Equation 3) by an integral over gives the lower bound
where we used the fact that for all , and the result holds for some constant .
Our goal is to write conditioning on . Let be the expected number of time steps from to at which detects given that the relative location of with respect to at time is . The next lemma gives lower and upper bounds for .
For any , let be the indicator random variable for the event that detects at time , assuming that at time is located at the origin and is located at . Clearly, . Recall that is the location of at time . Hence,
The upper bound follows by setting . Note that this upper bound holds for arbitrary .
Now we derive the lower bound. We use the fact that . If is non-mobile, then (recall that we assume to be the origin) and from the triangle inequality we obtain . Thus, . We take to be the smallest integer such that , set , and the result follows since is sufficiently large with respect to .
If moves according to a Brownian motion with variance , we average over to get
Let be a constant and let . We set so that for all . We integrate over instead of and then use the simple bounds and for all and to obtain
for some constant . Now, we set and the result follows since is sufficiently large with respect to .
The bounds for change substantially from to , reflecting the dichotomy between recurrent and transient random walks in and : returns to a neighborhood of infinitely often for and only finitely often for . (Note that measures the expected number of returns of to a neighborhood around in a given time interval.)
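The recurrence/transience dichotomy behind these bounds is easy to observe numerically: the expected number of time steps a single walk with Gaussian increments spends near the origin keeps growing in two dimensions but saturates in three. The sketch below uses unit-variance steps and a unit neighborhood purely for illustration.

```python
import random

def expected_visits(dim, steps, trials, seed=3):
    """Monte Carlo estimate of the expected number of time steps in
    [0, steps] at which a walk with independent N(0, 1) increments per
    coordinate, started at the origin, lies within distance 1 of the origin."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        pos = [0.0] * dim
        for _ in range(steps + 1):
            if sum(x * x for x in pos) <= 1.0:
                total += 1
            for k in range(dim):
                pos[k] += rng.gauss(0.0, 1.0)
    return total / trials

# Recurrence vs. transience: the count keeps growing (like log t) in d = 2,
# but stabilizes in d = 3.
v2_short, v2_long = expected_visits(2, 100, 500), expected_visits(2, 1000, 500)
v3_short, v3_long = expected_visits(3, 100, 500), expected_visits(3, 1000, 500)
```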
We now use Lemma ? to derive upper and lower bounds for .
We apply the straightforward equation
where the random variable denotes the relative location of with respect to given that . Note that since the condition implies that the distance between and at time is at most . Using Lemma ? we obtain
Also, since is non-decreasing with , we can take an arbitrary constant and use the fact that for all together with Lemma ? to write
We are now in a position to conclude the proof of Theorem ?. Plugging Lemmas ? and ? into Lemma ?, and using Lemma ?, we obtain a constant , and constants depending only on , such that for all and sufficiently large ,
for , and
for . Theorem ? then follows by taking the limit as .
As should be clear from the proof, the upper bounds in (Equation 9) and (Equation 10) hold for arbitrary locations of as long as moves independently of the locations of the nodes of . The lower bounds also hold in more generality: e.g., if moves according to Brownian motion with variance , or indeed with any motion that has sufficiently large “variance” in all directions. (Specifically, the lower bounds hold if the density for the motion of after steps satisfies the following property: there exist positive constants and such that for all , , and .) On the other hand, adding a random drift to the nodes in can change the detection time substantially: if each node of moves according to Brownian motion with drift and variance , where the are i.i.d. random variables, then our proof can be adapted to show that under mild conditions
In this section we prove Theorem ?. We consider a MGG with density (i.e., above the percolation threshold), and study the random variable defined as the first time at which a node initially placed at the origin independently of the nodes of belongs to the infinite component of . We derive an upper bound for the tail as .
We begin by stating a proposition that will be a key ingredient in our analysis. We consider a large cube and tessellate it into small cubes called “cells.” The proposition says that, if all cells have sufficiently many nodes at a given time , then at time for suitably large the point process induced by the location of the nodes contains a fresh Poisson point process with only slightly reduced intensity inside a smaller cube . We believe this result is of independent interest. With this in mind, we state the proposition below for a slightly more general setting than is needed here. Its proof is deferred to the end of the section.
Now we proceed to the proof of Theorem ?. We first take a sufficiently small parameter such that . (This is always possible as we are assuming .) In what follows, we omit the dependencies of other parameters on and as we are considering them to be fixed.
Let be the event that does not belong to the infinite component at time . Then, the event is equivalent to . We define an integer parameter and consider the process obtained by skipping every time steps. (To simplify the notation we assume w.l.o.g. that is an integer.) In other words, instead of looking at the event we consider the event , which we henceforth denote by . Since the occurrence of the event implies we have . Our goal in introducing is to allow nodes to move further between consecutive time steps; we will choose the value of later.
Let be a sufficiently large constant and fix . We will confine our attention to the cube . We take a parameter and tessellate into cubes of side-length (see Figure ?(a)). We refer to each such cube as a “cell.” Later we will tie together the values of and , and will choose to optimize our upper bound for . For the moment we only assume that the tessellation is non-trivial in the sense that both and are as functions of .
For each time step , the expected number of nodes inside a given cell is . We say that a cell is dense at time if it contains at least nodes, where is as defined earlier. Let be the event that all cells are dense at time , and let . The lemma below shows that occurs with high probability.
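In two dimensions, the tessellation and the “all cells dense” event can be sketched as follows; `cell_side`, `cube_side`, and the density threshold are placeholder names for the paper's elided parameters.

```python
def cell_counts(nodes, cell_side, cube_side):
    """Tessellate the square [0, cube_side]^2 into cells of side cell_side
    and count the nodes falling in each cell."""
    k = int(round(cube_side / cell_side))
    counts = [[0] * k for _ in range(k)]
    for x, y in nodes:
        i = min(int(x / cell_side), k - 1)
        j = min(int(y / cell_side), k - 1)
        counts[i][j] += 1
    return counts

def all_dense(counts, threshold):
    """The 'all cells dense' event: every cell holds at least threshold nodes."""
    return all(c >= threshold for row in counts for c in row)
```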
At any given step , by a standard large deviation bound for a Poisson r.v. (cf. Lemma ?), a cell has more than nodes with probability at least . The proof is completed by taking the union bound over all cells and time steps.
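The large deviation bound invoked here is, in one standard form, the Chernoff-type lower-tail estimate for a Poisson random variable, P(X <= (1 - d) mu) <= exp(-d^2 mu / 2). The sketch below compares this bound with the exact tail at illustrative values.

```python
import math

def poisson_lower_tail(mu, k):
    """Exact P(Poisson(mu) <= k), by summing the probability mass function."""
    term = math.exp(-mu)
    total = term
    for i in range(1, k + 1):
        term *= mu / i
        total += term
    return total

# Standard Chernoff-type lower-tail bound for a Poisson random variable:
# P(X <= (1 - d) * mu) <= exp(-d^2 * mu / 2).
mu, d = 200.0, 0.25
exact = poisson_lower_tail(mu, int((1 - d) * mu))
bound = math.exp(-d * d * mu / 2.0)
```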
Recall that is the location of at time . Define as the event that is located inside at time , and let . The next lemma bounds the probability that never leaves .
We fix a time step and then apply the union bound over time. The event corresponds to not moving a distance more than in any dimension. Therefore,
Then we use a standard large deviation bound for the Normal distribution (see Lemma ?) to conclude that
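One common form of the Normal large deviation bound used here is P(|N(0, sigma^2)| > a) <= exp(-a^2 / (2 sigma^2)), which can be checked against the exact two-sided tail computed via the complementary error function.

```python
import math

def two_sided_gaussian_tail(a, sigma=1.0):
    """Exact P(|N(0, sigma^2)| > a), via the complementary error function."""
    return math.erfc(a / (sigma * math.sqrt(2.0)))

# One standard form of the bound: P(|N(0, sigma^2)| > a) <= exp(-a^2 / (2 sigma^2)).
checks = [(two_sided_gaussian_tail(a), math.exp(-a * a / 2.0))
          for a in (0.5, 1.0, 2.0, 4.0)]
```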
Since the bound above decreases with we can conclude that
and the result follows from and .
For each time step , we define to be the cube shifted randomly so that (the location of at time ) is uniformly random in (see Figure ?(b)). A crossing component of is a connected set of nodes within that contains a path connecting every pair of opposite faces of . (A path connects two faces of if each face is within distance of one of its endpoints.)
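In two dimensions, crossing components can be extracted with a union-find pass over the geometric graph. The sketch below identifies a component as crossing if it comes within the transmission range of all four faces of the square; for a connected component this matches the definition above, since such a component contains a path between each pair of opposite faces.

```python
import math

def crossing_components(nodes, r, side):
    """Union-find over the geometric graph on nodes (points in [0, side]^2,
    edges at Euclidean distance <= r); return those components that come
    within distance r of all four faces of the square."""
    n = len(nodes)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(nodes[i], nodes[j]) <= r:
                parent[find(i)] = find(j)
    comps = {}
    for i in range(n):
        comps.setdefault(find(i), []).append(i)
    crossing = []
    for comp in comps.values():
        xs = [nodes[i][0] for i in comp]
        ys = [nodes[i][1] for i in comp]
        if (min(xs) <= r and max(xs) >= side - r and
                min(ys) <= r and max(ys) >= side - r):
            crossing.append(comp)
    return crossing
```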
For each , let be the event that all the crossing components of are contained in the infinite component at time . (For definiteness we assume that holds if has no crossing component.) Let . The next lemma follows by a result of Penrose and Pisztora .
By stationarity we know that is the same for all . For any fixed ,  gives that for some constant . (In fact,  handles an event more restrictive than , which among other things considers unique crossing components of .) Using the union bound we obtain and the result follows since .
We now proceed to derive a bound on . We take to be the event that does not belong to a crossing component of at time , and define . Note that is a decreasing event, in the sense that if occurs then it also occurs after removing an arbitrary collection of nodes from the MGG . Clearly . By elementary probability,
Note that we use only to replace by in (Equation 11); this helps to control the dependencies among time steps, since is an event restricted to the cubes while is an event over the whole of . We use only to ensure that , which allows us to focus on the portion of inside . Note that is independent of , so this conditioning does not affect .
Now we set , where is the constant in the definition of . The main step in our proof is the lemma below.
We start by writing
(Here, for notational convenience, we assume that .)
We now derive an upper bound for . We start with a high level overview of the proof. Let be the (not necessarily Poisson) point process obtained from the nodes of (the MGG at time ) under the condition . Note that is conditioned only on events that occur between time and time ; therefore, the motion of the nodes of from time to is independent of the condition. Since all cells are assumed dense at time , using Proposition ? we can construct an independent Poisson point process and couple it with so that at time the nodes of in are a subset of the nodes of . Moreover, we can ensure that has intensity larger than in , and thus conclude that will belong to a crossing component of with constant probability. Using this, we can upper bound each term of the product in (Equation 12) by a constant strictly smaller than , which gives for some constant .
Turning now to the details, we can invoke Proposition ? with , , , and to obtain that, conditioned on , at time the nodes of the MGG in contain a fresh Poisson point process with intensity with probability at least , for some constant . Since is a decreasing event
as does not depend on the condition. Since the intensity of the fresh Poisson point process is ,  implies that, with probability for some constant , a constant fraction of the volume of is within distance of at least one node in a crossing component of . Since at time , is located uniformly at random inside , belongs to a crossing component of with probability at least , where is a constant. Hence we have
Since and go to infinity with , for sufficiently large each factor in the above product can be made strictly smaller than , which concludes the proof of Lemma ?.
Finally, we plug Lemmas ?– ? into (Equation 11) and obtain the following upper bound on :
(Here is a generic constant.) In order to minimize this upper bound we choose so that , which yields
for all sufficiently large , where is a constant depending on , , and . This completes the proof of Theorem ?. It remains to go back and prove Proposition ?.
Proof of Proposition
We will construct via three Poisson point processes. We start by defining as a Poisson point process over with intensity . Recall that has at least nodes in each cell of . Then, in any fixed cell, has fewer nodes than if has fewer than nodes in that cell, which by a standard Chernoff bound (cf. Lemma ?) occurs with probability larger than for such that . Since we have , and the probability above can be bounded below by for some constant . Let be the event that has fewer nodes than in every cell of . Using the union bound over cells we obtain