# Decentralized Random Walk-Based Data Collection in Networks

###### Abstract

We analyze a decentralized random walk-based algorithm for data collection at the sink in a multi-hop sensor network. Our algorithm, Random-Collect, which passes data packets to random neighbors in the network according to a random walk mechanism, requires no configuration and incurs no routing overhead. To analyze this method, we model the data generation process as independent Bernoulli arrivals at the source nodes. We analyze both latency and throughput in this setting, providing a theoretical lower bound for the throughput and a theoretical upper bound for the latency. The main contribution of our paper, however, is the throughput result: we show that the rate at which our algorithm can collect data depends on the spectral gap of the given random walk’s transition matrix. In particular, for the simple random walk, we show that the rate also depends on the maximum and minimum degrees of the graph modeling the network. For latency, we show that the time taken to collect data depends not only on the worst-case hitting time of the given random walk but also on the data arrival rate. In fact, our latency bound reflects the data rate-latency trade-off, i.e., to achieve a higher data rate we must compromise on latency and vice versa. We also discuss examples demonstrating that our lower bound on the data rate is optimal up to constant factors, i.e., there exists a network topology and sink placement for which the maximum stable data rate is just a constant factor above our lower bound.

Keywords: data collection, stable rate, throughput, random walk, sensor networks

## 1 Introduction

Sensor networks are useful in applications such as environment and habitat monitoring, disaster recovery, agricultural monitoring, health administration, and target tracking [Akyildiz-CN:2002], where human intervention is not possible. Being unattended, these networks are prone to frequent topology changes due to random node or link failures. Such scenarios demand low-overhead, robust, and fault-tolerant algorithms. Thus, topology-based algorithms are not useful for these networks as they involve high overhead for maintaining topology information and recovery mechanisms for critical points of failure [Mabrouki-VALUETOOLS:2007]. On the other hand, stateless algorithms, like the random walk-based algorithm we present in this paper, rely only on local information, have low overhead, and do not suffer from the problem of critical-point failure (e.g. cluster heads [Heinzelman-HICSS:2000] or nodes close to the root of a data gathering tree [Cristescu-INFOCOM:2004]), as all nodes of the network play a similar role in the algorithm (see e.g. [Avin-IPSN:2004]). These algorithms are simple, fault tolerant, and scalable, and thus more suitable for unattended sensor networks. Despite all these advantages, random walk-based algorithms are not often preferred for data collection since they tend to have higher latency than routing-based methods, which rely on global topology information and are vulnerable to topology changes but are optimized for low latency.

However, some sensor network applications, particularly those involving monitoring of habitat, environment, or agriculture, like Intel’s Wireless Vineyard, the Great Duck Island project, the ZebraNet project, and many others (see [Puccinelli-CSM:2005] and references therein), can compromise on latency but require uninterrupted, continuous data collection. Motivated by these examples, we propose a simple decentralised random walk-based algorithm, Random-Collect, for data collection, which requires no configuration and thus has zero setup cost.

##### Our approach

We consider a multi-hop sensor network where each node is equipped with a queue that supports the store-and-forward handling of data. Some of the nodes in the network are source nodes; each source node gathers data at a fixed rate from its surroundings and stores it in its queue along with other data packets that were generated earlier or received from neighbouring nodes. Specifically, each source node senses data as an independent Bernoulli process with some stable rate and relays it to a designated node called the sink. A stable rate ensures that all data is successfully collected at the sink (we define it more formally in Section 3.3.1). In our model, we assume that the network is connected, but we do not expect any node to know anything about the network except the identity of its neighbours (the nodes with which it can directly communicate). We also assume that time is slotted and nodes communicate with each other at the start of every time slot. However, this assumption can be easily removed and does not affect our results.

In the Random-Collect algorithm, at the start of any time slot each node picks a data packet from its queue uniformly at random and forwards it to a neighbour, also chosen uniformly at random. We allow a node to transmit only one packet to one of its neighbours, but it can receive multiple packets from its neighbours. This is known as the transmitter gossip constraint [Boyd-INFOCOM:2005, Mosk-PODC:2006] and has been used in the literature [Karp-FOCS:2000, Boyd-INFOCOM:2005, Kashyap-PODS:2006, Mosk-PODC:2006] for energy conservation and to prevent data implosion [Heinzelman-MobiCom:1999]. The movement of any data packet in such a setting can be seen as a random walk of the data packet on the graph. The particular type of random walk depends on the transition probabilities at the nodes, as defined in Section 3.2.

We define the throughput of the network as the rate at which data is received at the sink. We also define the collection time, which is the time taken for the sink to receive data packets from all the nodes and hence represents the latency in collecting all data. We analyse these parameters by studying the movement of data packets as a general random walk on the graph representing the underlying network. We provide theoretical bounds on the throughput and latency for Random-Collect under the Bernoulli data generation model. An important point is that our throughput lower bound is the best possible bound that can be proved in general, up to constant factors; we demonstrate this by showing the bound to be tight for the complete graph.

The major drawback of our work is that our transmission model allows simultaneous transmission and reception, and also allows a node to receive more than one packet at a time, thereby bypassing the question of interference, which is critical to sensor networks built on wireless nodes. At first sight, it may therefore appear that our results are valid only for wired sensor networks. However, this is not so; we discuss the issue further in Section 7.

##### Our contributions

We propose a simple, low-overhead, fault-tolerant, decentralised, general random walk-based algorithm, Random-Collect, to collect data packets from source nodes at a designated sink node. We observe that the Markov chain defined by Random-Collect’s data collection process achieves steady-state when the queues of the nodes are stable. We also give a necessary and sufficient condition for the stability of the queueing system and the stationarity of the resulting Markov chain.

Having discussed our data collection process at steady-state, we analyse two important performance metrics of the Random-Collect algorithm: throughput and latency. We show that the data rate, which determines the network throughput, is lower bounded in terms of the spectral gap of the general random walk’s transition matrix. In particular, we show that if the random walk is simple then the rate also depends on the maximum and minimum degrees of the graph modelling the network. This lower bound guarantees that the stable rate is at least the given value. We also discuss examples for which our lower and upper bounds on the data rate are optimal up to constant factors. We also present a high-probability upper bound on the latency for the worst-case scenario in which the source set is all nodes except the sink. The given bound reflects the trade-off between data rate and latency in data collection, i.e., we cannot achieve a high data rate and low latency at the same time.

##### Organisation

Following a survey of the literature in Section 2, we formalise our network setting and data collection model in Section 3.1. We also present a generalised data collection algorithm, Random-Collect, in Section 3.2. Then we discuss the stability criterion for the data collection scenario in Section 3.3.1, followed by a discussion of two important performance metrics: throughput and latency. In Section 4, we present our main results and discuss their consequences. First, we discuss the Random-Collect process at steady-state in Section 4.1, wherein we state the necessary and sufficient condition for stationarity of the resulting Markov chain and present the steady-state equations for the Random-Collect algorithm. Then, we state our main theorems about throughput and latency in Section 4.2 and Section 4.3 respectively. In particular, we prove that the stable data rate guaranteed by our algorithm Random-Collect and the corresponding throughput are lower bounded in terms of the spectral gap of the transition matrix of the given random walk, and we also prove a generalised upper bound on the network throughput for any data collection algorithm. For latency, we prove an upper bound which depends on the data rate and the worst-case hitting time of the general random walk. In Section 5, we discuss some interesting rate examples for various graphs with packets performing a simple random walk. In Section 6 we prove our main theorems. We conclude in Section 7 with a discussion of possible future work.

## 2 Related Work

##### Data collection algorithms

Data collection in sensor networks is a well-studied field, and several algorithms have been proposed for it in the literature. We briefly discuss some of them and compare them with the Random-Collect algorithm.

One category of such algorithms is location-based, like GPSR [Karp-MobiCom:2000], which uses routing information at every node, generated using either global or geographical information about the network. In a typical sensor network, routing information at a node frequently becomes invalid (due to movement or failure of nodes), so these algorithms are not very efficient in practice. In Random-Collect, on the other hand, every node needs to know only its neighbourhood; the algorithm is thus more robust and less vulnerable to node failures and network changes.

Another class of algorithms is based on hierarchical structures, like clusters in LEACH [Heinzelman-HICSS:2000] or the chain in PEGASIS [Lindsey-AeroConf:2002]. These algorithms also require time to learn some global information about the network and to set up the cluster heads or the chain of sensor nodes. Moreover, in clustering protocols [Abbasi-CC:2007], cluster heads become, over time, the bottleneck for the propagation of data packets in the network, and such solutions are not scalable. The Random-Collect algorithm is decentralised in the sense that each node transmits data only to its neighbours and there is no centralised node to govern these transmissions. Thus, Random-Collect is truly distributed and scalable.

Data correlation [Cristescu-INFOCOM:2004] and coding techniques [Kamra-SIGCOMM:2006] have also been used for data collection. We do not use any coding technique in our algorithm, so we need only minimally configured nodes that are just capable of storing and forwarding data. However, our network model is similar to that of Kamra et al. [Kamra-SIGCOMM:2006], as they use only local information in their algorithms and have a single sink for data collection.

Closest to the Random-Collect algorithm is the technique used in gossip algorithms for data dissemination and rumour spreading [Karp-FOCS:2000, Boyd-INFOCOM:2005, Mosk-PODC:2006]. In these algorithms, at any time step a node selects one of its neighbours uniformly at random and communicates some data to it. Every node performs some computation on the received data (along with its own data) and transmits the output to a randomly chosen neighbour in the next time slot. We use a similar approach to communication in our setting, but unlike gossip algorithms, our aim is to collect all the data packets at the sink node. In the Random-Collect algorithm the nodes do not perform any computation on the data and simply forward the data packets they receive. Most of the gossip-algorithm literature computes functions like the average, the sum, or separable functions [Mosk-PODC:2006]; we are interested in collecting all the data packets, which can be seen as computing the identity function on the data. Moreover, we use the push mechanism for spreading information rather than variants like pull or push-pull as done by Demers et al. [Demers-PODC:1987] and Karp et al. [Karp-FOCS:2000].

##### Random walk-based algorithms

Models based on simple random walks have been used in the literature for query processing [Avin-IPSN:2004], to model routing for data gathering [Mabrouki-VALUETOOLS:2007], and for opportunistic forwarding [Chau-INFOCOM:2009], but no explicit analysis has been done of the average data collection time or the resulting throughput. In a different context from ours, Neely [Neely-TN:2009] showed that when arrival processes are modulated by independent Markov processes, the average network delay grows at most logarithmically in the number of nodes in the network. Our latency (or data collection time) bounds are similar to these results.

Biased random walks have also been explored to improve various results. One such biased random walk, in which priority is given to unvisited neighbours, has been used by Avin et al. [Avin-IPSN:2004] to improve their query processing results and partial cover time bounds. Regarding the inherent problem of latency in random walks, they suggest performing parallel random walks as future work [Alon-CPC:2011]. In the Random-Collect algorithm, the data packets also perform parallel random walks, which reduces the latency considerably.

##### Network throughput and data rate

The data arrival rate determines how frequently nodes in the network collect data from their surroundings, and hence determines the network throughput. It is a critical performance parameter of a sensor network. The throughput capacity of networks in different contexts has been thoroughly studied [DuarteMelo-CN:2003, Giridhar-JSAC:2005, Barton-CISS:2006, Moscibroda-IPSN:2007] following the pioneering work of Gupta and Kumar [Gupta-TIT:2000]. Many protocols have been proposed which offer high data rates, the most popular among them being Directed Diffusion [Intanagonwiwat-TN:2003] and its variants like Rumor routing [Braginsky-WSNA:2002], Energy-aware routing, and others (see [Akkaya-AHN:2005] and references therein). All of these offer high data rates along a reinforced path but suffer from the overhead of setting up and maintaining such a path. They are also suitable only for query-driven models and are not useful for applications which require continuous data gathering [Akkaya-AHN:2005].

The stable data rate has been characterised in terms of different underlying graph primitives in varying contexts in the literature. Banerjee et al. [Banerjee-QS:2012] determine the maximum data or refresh rate for a network computing fully multiplexible functions in terms of the min-mincut of the graph. Their result is a natural upper bound on our rate results, as we simply collect data without any aggregation. The dependence of the rate on the maximum degree of a sensor network organized as a tree has been established by Incel et al. [Incel-SECON:2008] in their setting.

## 3 The Random-Collect Algorithm

In this section, we first discuss our network setting and modelling assumptions. Then we discuss the network and data collection model and present the Random-Collect algorithm in detail. After that, a discussion about stability in data collection scenario is presented followed by the introduction of various performance metrics.

### 3.1 Network Setting and Modelling Assumption

#### 3.1.1 Network Setting

##### Sensor node capabilities

We consider a connected multi-hop sensor network comprising a large number of nodes deployed over a given geographical area. Each source node of the network has a sensor which senses the environment, a transceiver which is used to send data to and receive data from other network nodes, and some storage in which data sensed from the environment or received from neighbouring nodes can be stored. Nodes other than the source nodes do not sense the environment but, along with the source nodes, help relay data through the network. The network is deployed with zero configuration, i.e., no node is aware of the sink position or of global network routes. Moreover, each sensor is provided with a standard pseudo-random number generator, which is used for choosing among its neighbours. The ability of sensor nodes to generate random numbers has been widely exploited in the literature, be it in data-centric protocols like ACQUIRE, Gradient-based routing, and Energy-aware routing, or in hierarchical protocols like LEACH (see [Akkaya-AHN:2005] and references therein).

##### Transceiver assumptions

Each node is configured to forward data to one of its neighbours chosen uniformly at random, but it can receive data from multiple neighbours at any time step. This is known as the transmitter gossip constraint [Boyd-INFOCOM:2005, Mosk-PODC:2006]. It has been widely used in the literature [Karp-FOCS:2000, Boyd-INFOCOM:2005, Kashyap-PODS:2006, Mosk-PODC:2006] as it results in slow energy dissipation and also prevents data implosion by maintaining only a single copy of any data item at a node [Heinzelman-MobiCom:1999, Akkaya-AHN:2005]. Multi-packet reception and simultaneous transmission-reception are easily ensured in wired networks due to the physical connections; in wireless networks they are not possible due to interference. However, the main ideas underlying our proofs do not change even under more realistic assumptions, such as allowing either transmission or reception of a single packet in a given time slot (see Section 7), and the analysis we present in this paper can serve as a benchmark for those scenarios.

##### Data generation and collection

We assume that sensor nodes generate new data as required, e.g., a temperature sensor may be configured to generate a new reading when the change over the prior reading is at least a certain threshold value. This data is then relayed through the network to one of the nodes that is designated as the data sink and is responsible for collecting data that reaches it. Depending on the application, we assume the sink can then relay the data to some decision making or processing unit using data mules to monitor events, perform local computation or configure local and global actuators [Braginsky-WSNA:2002].

##### Network setup and routing

We do not use any centralised algorithm to create connections among the nodes. At deployment time, sensors opportunistically make connections with every other sensor they can directly communicate with. In wired networks, such nodes (neighbours) are the directly connected sensor nodes; in wireless networks, they are the nodes which lie within the transmission range of the given node. Our algorithm requires knowledge of the number of neighbours (the degree, in graph-theoretic terms), so when the initial phase of making connections ends, each node exchanges this information with each of its neighbours. As nodes may become unavailable frequently, due to failures or to entering sleep mode (as in wireless sensor networks), nodes need to perform these handshakes periodically to keep the degree information of their neighbours up to date. Hence, our scheme for network creation offers a basic communication mechanism without incurring any computational overhead. Moreover, no localization algorithm is required for establishing the multi-hop communications.

#### 3.1.2 Network and Data Collection Model

##### Network model

We model the multi-hop sensor network by an undirected graph G = (V, E), where V is the set of sensor nodes, one of which is the designated sink, the source nodes form a subset of V not containing the sink, and E is the set of edges. There is an edge between two nodes if they can directly communicate with each other. The neighbourhood of a node u is the set of all nodes which can communicate with it, denoted N(u). The degree of node u is defined as d(u) = |N(u)|. We denote the maximum and minimum degree among all nodes in the network by d_max and d_min respectively.
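As a purely illustrative rendering of this model, the sketch below builds the neighbourhood sets N(u) and degrees d(u) from an undirected edge list; the class and variable names are our own and not from the paper.

```python
# Illustrative sketch of the network model: an undirected graph stored as
# neighbourhood sets, from which degrees and the max/min degree follow.
class Network:
    def __init__(self, nodes, edges):
        self.nodes = list(nodes)
        self.adj = {u: set() for u in self.nodes}
        for u, v in edges:            # undirected: record both directions
            self.adj[u].add(v)
            self.adj[v].add(u)

    def neighbours(self, u):          # N(u)
        return self.adj[u]

    def degree(self, u):              # d(u) = |N(u)|
        return len(self.adj[u])

# a 4-cycle: every node has degree 2
net = Network(range(4), [(0, 1), (1, 2), (2, 3), (3, 0)])
d_max = max(net.degree(u) for u in net.nodes)
d_min = min(net.degree(u) for u in net.nodes)
```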

##### Time model

We consider a synchronous time model wherein time is slotted across all nodes in the network and nodes communicate with each other at the start of every time slot. Our results do not depend on synchronization and can be adapted to the asynchronous setting as well, but in this paper we present the synchronous setting for ease of exposition.

##### Data generation model

Given a set of data sources, we model the data generation process at each source node as a discrete-time stochastic arrival process that is Bernoulli with a fixed parameter and independent of the arrivals taking place at all other nodes, i.e., in each time slot each source node generates a new data packet with this probability, independently of all other nodes.
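This arrival model can be sketched in a few lines; the parameter name `beta` and the function below are our own illustrative choices, not the paper's notation.

```python
import random

# Hedged sketch of the data generation model: each source node independently
# generates a new packet in each slot with probability beta.
def arrivals(sources, beta, rng):
    """Return the set of sources that generate a new packet this slot."""
    return {u for u in sources if rng.random() < beta}

rng = random.Random(0)
beta, sources, T = 0.2, range(5), 10000
counts = {u: 0 for u in sources}
for _ in range(T):
    for u in arrivals(sources, beta, rng):
        counts[u] += 1
rates = {u: counts[u] / T for u in counts}   # empirical per-node rates ~ beta
```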

##### Store and forward model

At any time slot, due to the enforced transmitter gossip constraint, each node can send only a single data packet to a chosen neighbour, but each node can receive multiple data packets simultaneously from its neighbours. We also allow a node to send and receive at the same time. We discussed the implications of this assumption in Section 1 and will discuss how to remove it in Section 7.

At every time step, each node maintains a queue of packets, either generated at the node itself or received from neighbours, which have not yet been forwarded. For a given data generation rate, the number of data packets in the queue of a node at the start of a slot is referred to as its queue size, and we also consider the expected queue size. Note that this expectation is over the random arrivals and the random choices made by our algorithm, which will be explained further in the subsequent sections.

### 3.2 The Algorithm

In our proposed algorithm, Random-Collect, at any time slot each node chooses a data packet from its queue and transmits it to a randomly chosen neighbour. The movement of a data packet in the network can thus be seen as a random walk on the graph. Note that Random-Collect is a generalised algorithm and can be applied to any random walk, provided its transition matrix P is specified. In particular, for the simple random walk the transition probability from node u to node v is given by

(1)  P(u, v) = 1/d(u) if v ∈ N(u), and P(u, v) = 0 otherwise,

where d(u) is the degree of node u and N(u) is its neighbourhood.
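A hedged sketch of one Random-Collect time slot under the simple random walk follows; the function and variable names, and the buffering of receptions, are our own assumptions about an implementation, not the paper's formal process.

```python
import random

# Sketch of one Random-Collect time slot (simple random walk case): each
# non-sink node with a non-empty queue removes a uniformly random packet
# and pushes it to a uniformly random neighbour. Receptions are buffered
# so every sender sees the queue state from the start of the slot.
def random_collect_slot(adj, queues, sink, rng):
    received = {u: [] for u in adj}
    delivered = []
    for u in adj:
        if u == sink or not queues[u]:
            continue
        i = rng.randrange(len(queues[u]))   # uniform packet choice
        packet = queues[u].pop(i)
        v = rng.choice(sorted(adj[u]))      # uniform neighbour choice
        if v == sink:
            delivered.append(packet)        # the sink absorbs packets
        else:
            received[v].append(packet)
    for v, pkts in received.items():        # multi-packet reception
        queues[v].extend(pkts)
    return delivered

# path 0 - 1 - 2 with the sink at node 2 and one packet at node 0
adj = {0: {1}, 1: {0, 2}, 2: {1}}
queues = {0: ["pkt"], 1: [], 2: []}
rng = random.Random(1)
delivered = []
for _ in range(1000):
    delivered += random_collect_slot(adj, queues, 2, rng)
```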

### 3.3 Stability and Performance Metrics

#### 3.3.1 Stability in Data Collection Scenario

In a data collection scenario where data is generated by an independent Bernoulli process at each source node, the state of the network running a random walk-based data collection algorithm is described by a Markov chain in which each state is a vector of non-negative integers with one coordinate per vertex, namely the queue size at that vertex. Now, in order for a given algorithm to be able to regularly collect data within a finite time of its generation, the Markov chain must be stable, i.e., as time goes to infinity, the probability that the queues at all nodes are finite is one.

Following Szpankowski [Szpankowski-TechReport:1989], we formally define the stable data rate of any data collection algorithm as follows:

###### Definition 1 (Stable data rate).

For any data collection algorithm with data generated as an independent Bernoulli process at all nodes except the sink, and with the vector X_t representing the state (the queue sizes) of the network at time t, the data rate is said to be stable if the following holds

(2)  lim_{t→∞} Pr[X_t ≤ x] = F(x)  and  lim_{x→∞} F(x) = 1,

where the inequality between vectors is componentwise and F is the limiting distribution.

#### 3.3.2 Performance Metrics

Having defined the stable data rate for any data collection algorithm, any analysis of such an algorithm must focus on two important performance metrics: latency and throughput. We make these notions precise in this section.

Consider our regular data generation model with a fixed rate. To understand the notion of latency, let us identify the different rounds of data generated. Starting time from 0, we number the data packets generated at any node starting from 1, and we say that round i comprises all the data packets numbered i. We also define the time taken to collect a single round of data packets: given a round, it is the time between the appearance of the last packet of that round at any source node and the disappearance of the last packet of that round into the sink.

The data collection time of the first k rounds of data for Random-Collect with random walk P is defined as

(3)  T(k) = min{ t : Z_{u,i}(t) = s for all source nodes u and all rounds i ≤ k },

where s denotes the sink and Z_{u,i}(t) is the random variable denoting the position of the round-i data packet of node u at the start of time slot t, given that the data packets perform random walk P. Now, we define the average data collection time as follows.

###### Definition 2 (Average data collection time).

The average data collection time for the network is defined as the long-run per-round average of the data collection time, i.e., the limit of T(k)/k as k → ∞, where T(k) is the data collection time of the first k rounds of data.
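To build intuition for this definition, the toy sketch below estimates the collection time of a single round on a small cycle by letting one packet per source node perform an independent simple random walk until it is absorbed at the sink. Queueing interactions are deliberately ignored, so this only illustrates the definition, not the full Random-Collect process.

```python
import random

# Toy illustration of a single round's collection time on a cycle: one
# packet starts at every source node and walks independently until it
# reaches the sink. The round's collection time is the last absorption.
def one_round_collection_time(n, sink, rng):
    walkers = [u for u in range(n) if u != sink]   # one packet per source
    t = 0
    while walkers:
        t += 1
        moved = []
        for u in walkers:
            u = (u + rng.choice((-1, 1))) % n      # one step on the cycle
            if u != sink:
                moved.append(u)                    # not absorbed yet
        walkers = moved
    return t

rng = random.Random(0)
trials = 200
avg = sum(one_round_collection_time(8, 0, rng) for _ in range(trials)) / trials
```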

Turning to throughput, we note that if the data arrival rate is very high, Random-Collect will not be able to move all the data to the sink, since the queues will keep growing and become unstable. However, a stable data rate (as defined in Section 3.3.1) ensures that the queues are finite and hence that the average collection time is finite. Our main theorem, Theorem 1, gives a lower bound on the stable data rate guaranteed by Random-Collect, and we also give a general upper bound on the stable data rate of any data collection algorithm. As expected, the notion of throughput is closely related to the notion of stable data rate, and we can formally define it as follows.

###### Definition 3 (Network throughput).

Given a stable data rate, i.e., a data rate such that the queues of the nodes in the network are stable (the Szpankowski stability condition is satisfied), the network throughput is defined as the rate at which data is received by the sink. In other words, if the network has a set of data sources, each with the same stable data rate, the network throughput is the number of sources times that rate.
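As a quick sanity check of this definition (our own toy example, not from the paper): on a star with the sink at the centre, each leaf's only neighbour is the sink, so with k sources at an illustrative rate `beta` the delivery rate at the sink should approach k·beta.

```python
import random

# Toy check of the throughput definition on a star with the sink at the
# centre: k leaves generate Bernoulli(beta) arrivals and each leaf forwards
# one queued packet per slot to its only neighbour, the sink.
def star_throughput(k, beta, slots, seed=0):
    rng = random.Random(seed)
    queue = [0] * k                       # queue length at each leaf
    delivered = 0
    for _ in range(slots):
        for u in range(k):
            if rng.random() < beta:       # Bernoulli arrival this slot
                queue[u] += 1
            if queue[u] > 0:              # forward one packet to the sink
                queue[u] -= 1
                delivered += 1
    return delivered / slots

rate = star_throughput(k=4, beta=0.1, slots=20000)   # expect about k * beta
```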

In the next section, we will discuss some theorems analysing these two performance metrics.

## 4 Results

In this section, we present our main results and discuss their consequences. First, we discuss the data collection process at steady-state, presenting a necessary and sufficient condition for stability followed by the steady-state equations of the sensor nodes. Then we analyse the network throughput, giving a lower bound on the throughput guaranteed by the Random-Collect algorithm and a general upper bound. We also discuss the relation of Random-Collect’s upper bound to the general data collection upper bound. This discussion is followed by some examples of rate analysis when the data packets perform a simple random walk on the graph. We conclude this section by analysing the latency of the Random-Collect algorithm, providing an upper bound on the average collection time and discussing some interesting examples. Detailed proofs of all the theorems and propositions discussed in this section are presented in Section 6.

### 4.1 The Random-Collect Process at Steady-state

The data collection process of Random-Collect is described by a vector whose coordinates are the queue sizes of the nodes for a given data rate. Szpankowski [Szpankowski-TechReport:1989] shows that if this process is a Markov chain, then the stability condition described by Eq. (2) implies ergodicity and the existence of a stationary distribution for the chain. We now show that for any data collection algorithm involving a general random walk, there is a critical rate such that the Markov chain describing the state of the network is stable for all rates below it.

###### Proposition 1.

For a network represented by an undirected graph running a data collection algorithm in which the data packets perform an aperiodic and irreducible random walk on the graph, with data generated as an independent Bernoulli process at all nodes except the sink and with a vector representing the state of the network at each time, the following condition

(4) |

is necessary and sufficient for the multidimensional queueing system to be stable and for the Markov chain to be ergodic. This implies that the Markov chain has a stationary distribution. Moreover, there exists a critical rate such that the Markov chain is ergodic for all rates below it, as condition (4) then holds, and non-ergodic for all rates above it.

In view of Proposition 1, we say that a data rate is stable for Random-Collect if the chain is stable at that data generation rate. Now, given that the chain is stable and its stationary distribution exists, let us consider the steady-state equations for our data collection process. Let the number of packets arriving at a node from the environment between consecutive time slots be its external arrival; it is independent of the queue size of the node at any time. The basic one-step queue evolution equation for our data collection scenario is

(5) |

Now, taking expectations on both sides of the one-step queue evolution equation (Eq. (5)), with the expected queue size defined as before, we get

(6) |

From Proposition 1, we know that at stationarity the expected queue size is unchanged from one slot to the next. So we have the steady-state equation for any source node as

(7) |

For non-source nodes, there is no arrival term in the steady-state equation, as they do not generate any data.

We can also represent the steady-state equations of all nodes in matrix form as follows: let one row vector collect the steady-state queue occupancy probabilities of the nodes of the graph; this is defined assuming that the sink collects all data it receives and maintains no queue. Let a second row vector encode the external arrival rates at the nodes, let I be the usual identity matrix, and define one further row vector as follows

Then, the steady-state queue equations at the nodes (Eq. (7)) can be written in vector form as

(8) |

### 4.2 Throughput of Random-Collect Algorithm

Next, we find bounds on the stable data rate defined in Section 3.3.1 and analyse the resulting throughput. Our main result on throughput is as follows.

###### Theorem 1 (Network throughput lower bound).

For a given graph with a single sink and a source set of nodes, each generating data as independent Bernoulli arrivals at a stable data arrival rate, we have

(9) |

where the bound is in terms of the second largest eigenvalue of the transition matrix of the general random walk on the graph, the queue occupancy probability of the node, and its stationary distribution under the random walk; the corresponding network throughput is the number of sources times the rate.

Moreover, if the random walk is simple, then with d_min and d_max denoting the minimum and maximum degrees of the nodes of the graph respectively, we have

(10) |

where λ2 is the second largest eigenvalue of the transition matrix. These results hold for all rates below the critical rate, i.e., the rate below which data rates are stable and above which they are unstable.
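Since the bound is driven by the spectral gap 1 − λ2, it can be useful to compute λ2 numerically. The sketch below does so for the simple random walk on a regular graph using power iteration on the lazy walk (I + P)/2, projecting out the all-ones top eigenvector at every step; this is an illustrative aid of our own, not part of the paper's proof.

```python
# Power-iteration sketch for lambda_2 of the simple random walk on a
# d-regular graph. We iterate the lazy walk (I + P)/2, whose eigenvalues
# (1 + lambda)/2 lie in [0, 1] with the same eigenvectors as P, on the
# space orthogonal to the all-ones vector; re-projecting every step keeps
# rounding error from reintroducing the dominant all-ones eigenvector.
def lambda2_regular(adj):
    n = len(adj)
    d = len(adj[0])                                   # common degree
    x = [float(u) for u in range(n)]                  # generic start vector
    for _ in range(2000):
        m = sum(x) / n
        x = [xi - m for xi in x]                      # project out all-ones
        y = [0.5 * x[u] + 0.5 * sum(x[v] for v in adj[u]) / d
             for u in range(n)]
        s = max(abs(yi) for yi in y) or 1.0
        x = [yi / s for yi in y]                      # renormalise
    y = [0.5 * x[u] + 0.5 * sum(x[v] for v in adj[u]) / d for u in range(n)]
    mu = sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)
    return 2.0 * mu - 1.0                             # undo the lazy shift

# 8-cycle: lambda_2 = cos(2*pi/8) ~ 0.70711, so the gap is ~ 0.29289
cycle8 = {u: [(u - 1) % 8, (u + 1) % 8] for u in range(8)}
gap = 1.0 - lambda2_regular(cycle8)
```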

This lower bound on throughput is related to the following natural upper bound that applies to any data collection algorithm. In order to present this generalised upper bound, we first need to define a few terms. For any vertex subset we define its edge boundary as the set of edges with exactly one endpoint inside the subset. For each such subset we then define a constant, and take the appropriate minimum over subsets; note that this quantity is related to the edge expansion of the graph [Chung-BOOK:1996].

###### Proposition 2 (Generalised network throughput upper bound).

Given a graph in which each node of the source set has independent Bernoulli data arrivals at a common rate, no data collection algorithm is able to keep the queues of the nodes stable for any rate exceeding the following bound:

(11)

where is the degree of the sink, is a constant, and is at most , the edge expansion of the graph .

##### Discussion about network throughput upper bound for Random-Collect

In particular, consider the Random-Collect algorithm with transition matrix : instead of deterministically sending a data packet along an edge, we send it with a probability given by . Now, for any vertex , we define its measure as . Similarly, for any we define the measure , and we define its edge boundary as . Thus, . Now, we define constants and , where is the Cheeger constant of the random walk on the graph . Now, for ,

(12)

For any given set , the maximum data flow that can move out of the set is the flow across its boundary:

(13)

(14)

From Eq. (13), the flow out of the set can also be written as

(15)

A fundamental necessary bound on the value of is the rate at which the sink can accept data, so for the source set we have . Substituting Eq. (15) into this inequality, we have

(16)

Using Eq. (13) in Eq. (16), we have , so

(17)

Thus, the bottleneck for the rate can be the sink itself or some other far-off node. Hence, the maximum stable data rate for Random-Collect is given by

(18)

Now let us compare the general upper bound for any data collection algorithm (Eq. (11)) with Random-Collect's upper bound (Eq. (18)). For regular graphs, since and , we always achieve a rate that is at most a factor below that of any other data collection algorithm. Similarly, for non-regular graphs, since and , where is the minimum degree of the graph , we are at most a factor below the others. So, to achieve data collection in minimally configured networks using Random-Collect, we must compromise on rate by a certain factor.

| Graph | Lower bound | Exact rate | General upper bound |
| --- | --- | --- | --- |
| Cycle | | | |
| Star graph with sink at centre and as self-loop probability at each node | | | |
| Star graph with sink at outer node | | | |
| Complete graph | | | |
| -dimension Hypercube with | | | |
| Random Geometric Graph | | | - |

##### Discussion about rate results

In Table 1 we present the lower bound on the data rate for the Random-Collect algorithm, given that the data packets perform a simple random walk on the graph, alongside a general upper bound on the data rate for any data collection algorithm, for various network topologies. We also present the exact data rates, which are easy to calculate for these topologies using elementary algebra. In all these cases we assume that .

If we consider the complete graph topology, it is easy to see that the exact rate is (see Section 5). As the spectral gap of the simple random walk on the complete graph of nodes is , we note that in this case our lower bound is tight up to constant factors, i.e., both the exact value and the lower bound have order . Hence our lower bound cannot admit any asymptotic improvement in general. On the other hand, the cycle topology shows that in specific cases a better lower bound may be possible: our spectral gap-based lower bound is lower than the exact value in this case. In view of our upper bound result, Proposition 2, which relates the throughput to the edge expansion, we conjecture that the bound is weak because although the cycle topology has spectral gap , its edge expansion is . To contextualise this, recall that Cheeger's inequality says that the square of the “bottleneck ratio” or the “conductance” (both quantities closely related to the edge expansion) is a lower bound on the spectral gap of a Markov chain (see, e.g., Theorem 13.10 of [Levin-BOOK:2009]). In fact, the cycle graph and the path graph are examples showing that the lower bound in Cheeger's inequality is tight for this very reason (see Example 13.12 of [Levin-BOOK:2009] for a fuller discussion).
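The Cheeger-inequality explanation above can be checked numerically. Using the standard closed forms for an even cycle, namely spectral gap 1 − cos(2π/n) and edge expansion 4/n, the following sketch confirms that the gap scales like the square of the expansion, which is exactly the regime where a spectral gap-based bound is weakest:

```python
import math

def cycle_gap(n):
    """Spectral gap of the simple random walk on the n-cycle (closed form)."""
    return 1 - math.cos(2 * math.pi / n)

def cycle_expansion(n):
    """Edge expansion of an even n-cycle: cutting out an arc of n/2 nodes
    leaves a boundary of 2 edges, giving 2 / (n/2) = 4/n."""
    return 4 / n

for n in (32, 64, 128):
    # the gap is of order expansion**2 (Cheeger), not of order expansion:
    # doubling n quarters the gap but only halves the expansion
    ratio = cycle_gap(n) / cycle_expansion(n) ** 2
```

As n grows, the ratio tends to the constant π²/8, so gap and squared expansion agree up to constants on the cycle.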

Regarding the upper bound on the data rate, the star graph topology with the sink at the centre achieves an exact data rate of 1 (see Section 5), i.e., all nodes can send all data to the sink without any delays. This rate is also clear from the topology itself: all nodes are connected only to the sink and can thus send data uninterrupted. Hence, in this case both the general data collection upper bound and Random-Collect's upper bound are tight and of order . Now, as pointed out in the throughput upper bound discussion earlier, the maximum rate at which the sink can ingest data is upper bounded by the sum of the probabilities of its incoming edges, which means the maximum rate Random-Collect can achieve is below the upper bound of Proposition 2 by a factor of at least the minimum degree of the graph. This loss of a factor equal to the degree of the sink shows up in the case of the star graph with the sink at one of the outer vertices, where Random-Collect achieves its best possible rate but is still a factor of below the best achievable by any data collection algorithm.

### 4.3 Latency Analysis of Random-Collect Algorithm

Having defined the steady state of our Markov chain and established the throughput results, we now discuss the latency of our algorithm at stationarity in the worst case, with the source set being the set of all nodes other than the sink.

###### Theorem 2 (Average data collection time).

Given a graph representing the underlying network, where each source node in the set receives independent Bernoulli arrivals with stable rate , with probability at least the average data collection time for Random-Collect is

(19)

where is a constant, is a continuous and increasing function of with and as , where is the critical rate below which data rates are stable and above which they are unstable, and is the worst-case hitting time of the general random walk on .

##### Discussion about latency result

Given that is the critical rate below which data rates are stable and above which they are unstable, let us analyse our latency results. We know that Szpankowski's [Szpankowski-TechReport:1989] stability condition implies that we have

for all [Luo-TIT:1999], so . Now, from this condition, as the data rate we will have , since , so the third term in Eq. (19), involving the delay probability, starts to determine the average data collection time. This result is intuitive in the sense that as the data rate approaches the critical rate, the node with the maximum queue occupancy (delay) probability starts determining the latency of data collection.

Now let be the simple random walk and let be the time required by the random walk to cover all the states (see Chapter 11 of [Levin-BOOK:2009]). Recall the definition of . Let be the states for which ; any walk starting at must have visited by the time all states are covered, so we have . For a connected graph such as the given graph , the simple random walk cannot assign non-zero probability to a vertex it has not yet visited, so we can conclude that , where is the mixing time of the graph (see Section 4.5 of [Levin-BOOK:2009]). We know that for the simple random walk with source set , (by Theorem 1) and (Theorem 12.3 of [Levin-BOOK:2009]). So, for data rates , using the above results in Eq. (19), we have that is . This is not surprising: for a stable rate the queues are finite, so we expect the generated data to be cleared in time inversely proportional to the rate at which it is generated.
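Since the latency bound is driven by the worst-case hitting time, a quick simulation may make the quantity concrete. This is a hedged sketch for the cycle, where the expected hitting time between antipodal nodes of an even n-cycle is the standard value k(n − k) = n²/4:

```python
import random

def hit_time_cycle(n, start, target, rng):
    """Steps taken by a simple random walk on C_n to reach target from start."""
    pos, steps = start, 0
    while pos != target:
        pos = (pos + rng.choice((-1, 1))) % n
        steps += 1
    return steps

n, trials = 16, 4000
rng = random.Random(7)
# worst pair on a cycle: antipodal nodes, expected hitting time (n/2)^2 = 64
avg = sum(hit_time_cycle(n, 0, n // 2, rng) for _ in range(trials)) / trials
```

The empirical average concentrates near n²/4 = 64, matching the Θ(n²) worst-case hitting time used for the cycle in the latency examples below.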

So there is a visible trade-off between the data rate at which the sources generate data and the latency of data collection at the sink, i.e., we cannot achieve a high data rate and low latency at the same time. Since our latency depends on the parameter , which in turn depends on the data rate, we can choose a particular value of to meet our latency requirements. Moreover, this latency-data rate trade-off can help a network designer to give preference to one of the two metrics.

Now, having discussed the impact of various parameters on the latency, let us look at some examples which give insight into the latency results for various common topologies. For this we consider working with , i.e., ; as discussed before, this ensures that the latency is under control and the queues at the nodes are finite. For the cycle topology [Aldous-BOOK:2002], and our lower bound on the rate for Random-Collect, given that data packets perform a simple random walk, is (see Table 1), so from Eq. (19) the average data collection time is . For the hypercube topology [Aldous-BOOK:2002], and the Random-Collect lower bound for the simple random walk is (see Table 1), so the average data collection time is . Similarly, for the star topology with the sink at the centre [Aldous-BOOK:2002], and the Random-Collect lower bound for the simple random walk is (see Table 1), so the average data collection time is . All these examples are consistent with our claim that for simple random walks, as long as , the average data collection time is .

## 5 Examples

In this section, we present the rate analysis of various graphs, given that the data packets perform a simple random walk on the graph, with . First, we compute the exact rate for various topologies using first principles and a simple method involving partitions. Then, we compute the exact rate using the same principles for an extreme scenario with just two data sources out of sensor nodes. Finally, we end the section by presenting the rate lower and upper bounds guaranteed by the Random-Collect algorithm and a general upper bound for any data collection algorithm on the same topologies. We have already summarised our results in Table 1.
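Before deriving exact rates analytically, the stable-rate notion can also be probed empirically. The sketch below simulates a toy discrete-time variant of Random-Collect on a cycle, under the simplifying assumption (ours, not necessarily the paper's exact scheduling model) that in every slot each non-empty node forwards one packet to a uniformly random neighbour; at a rate well below critical, the total backlog stays small.

```python
import random

def simulate_random_collect(n, lam, steps, rng):
    """Toy discrete-time model on a cycle C_n with the sink at node 0.

    Assumed dynamics (an illustrative simplification): every slot, each
    source receives a packet with probability lam, then every non-empty
    node forwards one packet to a uniformly random neighbour; packets
    arriving at the sink are collected immediately."""
    queue = [0] * n  # queue[0] is the sink and always stays empty
    collected = 0
    for _ in range(steps):
        for v in range(1, n):  # Bernoulli(lam) arrivals at the sources
            if rng.random() < lam:
                queue[v] += 1
        moves = []
        for v in range(1, n):  # each non-empty node forwards one packet
            if queue[v] > 0:
                queue[v] -= 1
                moves.append((v + rng.choice((-1, 1))) % n)
        for u in moves:
            if u == 0:
                collected += 1  # sink absorbs the packet
            else:
                queue[u] += 1
    return sum(queue), collected

rng = random.Random(3)
backlog, collected = simulate_random_collect(n=8, lam=0.01, steps=20000, rng=rng)
```

At this small rate almost all generated packets reach the sink and the residual backlog stays bounded, matching the stability regime the analysis below characterises exactly.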

### 5.1 Exact Rate of Data Collection for Sources

In this section, we obtain the exact rate for various graphs for using two simple methods: a first-principles method and a method involving partitions. We discuss each method in the remainder of this section.

#### First principles method

This method is based on elementary algebra. From Proposition 1, the steady-state equation for any source node is

(20)

where (we drop the subscript at steady state and the superscript where the rate is understood). In this method, we use this equation iteratively at various nodes to compute the exact rate when is the simple random walk. Let us see some examples solved using this method.

##### Cycle graph

Let be an -node even cycle or ring graph with as the set of data sources. Using the steady-state equation (Eq. (20)) for the immediate neighbours of the sink we have, for node : , and for node : . Similarly, for any general node which is not a neighbour of the sink, Eq. (20) gives . Using the steady-state equations (Eq. (20)) of node , and the general node , we get

(21)

Now, using the general-node equation for nodes to together with the results from successor nodes, we obtain steady-state equations in terms of and ; for example, for node we have . Adding all such steady-state equations for nodes to , we have , so

(22)

##### Star graph with the sink at the centre

Now, let be an -node star graph with the sink at the centre and as the set of data sources. Any node other than the sink only sends data and has no arrivals, since the sink does not transmit any data; so, to make the Markov chain on the set aperiodic, we put a self-loop at every node. The steady-state equation (Eq. (20)) for a node with is: . As , we have .

##### Star graph with the sink at the outer node

Now, let be an -node star graph with the sink at an outer node and as the set of data sources. Let and be the queue occupancy probabilities of the centre node and of the symmetric outer nodes (except the sink), respectively. The centre node receives data from all the symmetric outer nodes except the sink, so we can write its steady-state equation (Eq. (20)) as:

(24)

Now, for all outer nodes except the sink we have

(25)

Multiplying Eq. (24) by and adding it to Eq. (25), we get . As , we have .

##### Complete graph

Let be a complete graph with as the set of data sources. Every node in the graph except the sink is symmetric, so the steady-state equation (Eq. (20)) for a node with is: . So, . As , we have .

#### Partition method

For simple graph topologies like the cycle, star or complete graph, elementary algebra easily yields the exact rate. However, for other regular topologies like the hypercube, obtaining the exact rate using elementary algebra alone is not straightforward, so we develop another simple method, described as follows. Consider a partition of the set into and such that . Given the source set , the set of sources in is . For regular graphs we know , so for a source node we can rewrite the steady-state equation (Eq. (20)) for as the simple random walk as

(26)

Now, summing the steady-state equation for all nodes , we have

(27)

We obtain Eq. (27) because nodes in other than the source nodes do not generate any data. Note that the rates for regular graphs like the cycle and the complete graph, discussed before under the first-principles method, can also be obtained using the partition method. Let us consider some examples using this method.

##### -dimension Hypercube

Now, let be an -node -dimension hypercube, i.e., each of the vertices has a -bit label, so . Consider the sink to be the node labelled with all 0s, i.e., , and as the set of data sources. By symmetry we can assume that the queue occupancy probabilities at all nodes at the same distance from the sink are the same. Let the queue occupancy probability for nodes at distance from the sink be . These nodes have bits set to 1 in their labels and so are easily seen to be in number. Let be the set of nodes with 1s in their labels and let be the set of nodes at distance or more from the sink. Now, applying Eq. (27) to the partition , we note that the only edges crossing the partition go from the nodes of to the nodes of , and there are exactly such edges. So we get , i.e.,

(28)

for to . It is easy to see that . Substituting this value and summing the equations of the form (28), we get . Since , and , we get

(29)

Since we know that , putting and , and then using this in Eq. (29), we get . So, . As for , we have . So, . This gives us .
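The edge-counting step in the partition argument above can be verified combinatorially: each node at Hamming distance k from the all-zeros sink has exactly k neighbours at distance k − 1, so there are k·C(d, k) edges between levels k − 1 and k. A small sketch confirming this:

```python
from itertools import product
from math import comb

d = 5
cross = [0] * (d + 1)  # cross[k] = number of edges between levels k-1 and k
for v in product((0, 1), repeat=d):
    k = sum(v)  # Hamming distance from the all-zeros node (the sink)
    cross[k] += k  # each set bit gives one neighbour one level below
```

Summing k·C(d, k) over all levels recovers the total edge count d·2^(d−1) of the hypercube, a quick consistency check on the count.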

### 5.2 Exact Rate of Data Collection for Two Sources

In this section, we consider the extreme scenario of just two source nodes out of the nodes in the graph , i.e., . We will again use the first-principles and partition-based methods discussed before to compute the exact rate for various topologies under this scenario.

##### Cycle graph

For an -node even cycle graph, we want to find the minimum possible rate for two sources. Let us number the nodes from to in the clockwise direction and consider a partition at distance from the sink such that it cuts the cycle at two symmetric points and the two source nodes lie beyond this partition; then from Eq. (27) we have

(30)

To minimize we need to minimize the difference between the queue occupancy probabilities, which is possible only if we maximize the distance from the sink, i.e., the sources are placed farthest from the sink. So we consider one source at distance ; since it is an even cycle there is only one node at this level, so we consider the other source node at the next farthest level . For the partition at distance , Eq. (27) gives

(31)

Now, summing Eq. (30) for to together with Eq. (31), we get , which means . As , we have .

##### Star with sink at centre

Let us consider an -node star graph with the sink at the centre and data sources. We know the queue occupancy probability of the sink is , so let be the queue occupancy probability of the source nodes. In this topology all nodes are directly connected to the sink, so the non-source nodes have no role to play. Also, any node only sends data to the sink and has no arrivals, since the sink does not transmit any data; so, to make the Markov chain on the set aperiodic, we put a self-loop at every node. The steady-state equation (Eq. (20)) for any source node with is: . As , we have .

##### Star with sink at outer edge

Let us consider an