A macro-level model for investigating the effect of directional bias on network coverage

Abstract

Random walks have been proposed as a simple method of efficiently searching, or disseminating information throughout, communication and sensor networks. In nature, animals (such as ants) tend to follow correlated random walks, i.e., random walks that are biased towards their current heading. In this paper, we investigate whether or not complementing random walks with directional bias can decrease the expected discovery and coverage times in networks.

To do so, we develop a macro-level model of a directionally biased random walk based on Markov chains. By focussing on regular, connected networks, the model allows us to efficiently calculate expected coverage times for different network sizes and biases. Our analysis shows that directional bias can significantly reduce coverage time, but only when the bias is below a certain value which is dependent on the network size.

1 Introduction

The concept of a random walk was introduced over a century ago by Pearson [24] and has been studied extensively since then [15, 9, 11]. Recently, random walks have been proposed for searching, or disseminating information throughout, communications and sensor networks whose structure is dynamic, or for other reasons unknown [6, 14, 4, 25]. They are ideal for this purpose as they require no supporting information, such as routing tables, to be held at nodes [5]: the agent performing the walk simply moves at each step to a randomly chosen connected node.

The efficiency of random-walk-based algorithms can be measured in terms of the average number of steps the agent requires to cover every node in the network (and hence be guaranteed to find the target node in the case of search algorithms). This is referred to as the coverage time under the assumption that the agent takes one step per time unit. Obviously, improving the coverage time for algorithms is an important goal.

One straightforward approach to this is to have multiple agents [3]. For some algorithms, such an approach is made even more effective when stigmergy is employed [23, 7, 17], i.e., agents leave information for other agents directing them to their goals. Such an approach, inspired by the way ants leave trails of pheromones directing other ants to food [10], is only useful when target nodes need to be visited by more than one agent. This is not always the case. More importantly, stigmergy is effective in directing agents only once a target node has been found. The time for the first agent to find a target node is not reduced. This can only be done by considering the agent’s ‘movement model’.

For this reason it has been suggested that random walks should be constrained, e.g., to prevent an agent returning to its last visited node, or to direct an agent to parts of the network where relatively few nodes have been visited [19]. We take a similar approach in this paper. We base our movement model on that observed in nature. Many models used by biologists to describe the movement of ants and other animals are based on correlated random walks, i.e., random walks which are biased to the animal’s current direction [18, 8, 20]. Based on our own observations of ants, we also investigate including a small probability of a non-biased step at any time to model occasional random direction changes.

To the best of our knowledge, directionally biased walks in networks have been investigated by only one other group of researchers. Fink et al. [16] look at the application of directional bias in a cyber-security system in which suspected malicious nodes must be visited by multiple agents. They compare coverage times for directional bias with those for pure random walks and for random walks with stigmergy. Their conclusion is that directionally biased walks are more efficient even than random walks with stigmergy. This conclusion, however, is based on micro-level simulation, i.e., direct simulation of agents taking steps, for a single network size and bias. It cannot be generalised to arbitrary network size or bias.

The micro-level simulation approach of Fink et al. requires coverage times to be calculated as the average of multiple runs. They performed 500 simulation runs for each movement model. Such an approach is impractical for a deeper investigation of the effect of directional bias which considers various network sizes and biases. For that reason, in this paper we develop a more abstract, macro-level model of a directionally biased walk. It builds on the work of Mian et al. [21] for random walks, describes the directionally biased walk in terms of a Markov chain [22] and allows us to calculate the coverage time for a given network size and bias directly. Although certain special cases have analytic solutions, we have found this model to be helpful for a general approach to calculating coverage time.

We begin in Section 2 by describing the concept of a random walk, and our notion of directional bias in more detail. In Section 3 we present the Markov-chain model of a directionally biased walk on a network and show how it can be used to calculate the expected coverage time. In Section 4 we present and discuss the results of applying our model to the calculation of coverage times on a range of network sizes and biases. We conclude in Section 5.

2 Directionally biased walks

Random walks have been studied in 1-dimensional, 2-dimensional and multi-dimensional spaces. Many of the results from 2-dimensional walks are applicable to communications and sensor networks which are commonly modelled as connected graphs. In particular, it is known that with probability 1 a random walk will cover every node of a connected graph [1] and a number of approaches for calculating the coverage time have been proposed [2, 13, 19, 21].

The investigation in this paper focusses on regular, connected graphs where each node has exactly 8 neighbours (see Fig. 1(a) and, for the probability of the next step in a random walk over such a graph, Fig. 1(b)). Furthermore, to allow our graphs and hence networks to be finite, we wrap the north and south edges and the east and west edges to form a torus. Our aim is to provide a deeper analysis of directional bias than in the recent literature which also investigates the notion on regular toroidal graphs [16]. While the results do not apply directly to arbitrary irregular networks, they do apply to random geometric graphs which are often used to model ad hoc sensor networks. Approaches using regular toroidal graphs to determine coverage time on random geometric graphs include that of Lima and Barros [19] and Mian, Beraldi and Baldoni [21].

Figure 1: Random walk on a regular, connected graph where each node has 8 neighbours.

For modelling directional bias in nature, biologists typically use the von Mises distribution [12]. The von Mises distribution is a continuous angular function with a parameter $\kappa$ which affects the strength of the heading bias (see Figure 2).

Figure 2: The von Mises distribution about a given mean direction (in radians) for two values of the concentration parameter $\kappa$.

We do not adopt the von Mises distribution in our approach for two reasons. Firstly, we have only a discrete number of directions and so do not require a continuous distribution. Secondly, as in random walks, we would like the computations the agent needs to perform to be simple. Our notion of directional bias limits our agent to choose either its current direction with a probability $b$ (referred to as the bias), or a neighbouring direction, i.e., $\frac{\pi}{4}$ radians ($45°$) clockwise or anti-clockwise from the current direction, with equal probability of $\frac{1-b}{2}$. When the bias is high (as illustrated in Fig. 3(a)), the movement model approximates (discretely) that of the von Mises distribution for a high value of $\kappa$ (such as in Fig. 2). This is not the case, however, for low values of the bias (as illustrated in Fig. 3(b)).

Figure 3: Directionally biased walk for an agent with current direction north: (a) a high bias and (b) a low bias.

Let $D = \{0, \frac{\pi}{4}, \frac{\pi}{2}, \ldots, \frac{7\pi}{4}\}$ be the set of possible directions in radians. Our notion of directional bias is then defined formally as follows.

Definition 1

(Directional bias) Given the current direction $d \in D$ and bias $b \in [0, 1]$, the probability of moving in direction $d' \in D$ at the next step, $P(d')$, is defined as follows.

$$P(d') = \begin{cases} b & \text{if } d' = d \\ \frac{1-b}{2} & \text{if } d' = d \pm \frac{\pi}{4} \pmod{2\pi} \\ 0 & \text{otherwise} \end{cases}$$
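
For example, with bias $b = 0.8$ and current direction north, the agent moves north with probability 0.8, north-east or north-west with probability 0.1 each, and in any other direction with probability 0.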

We also investigate adding occasional random steps to our directionally biased walks. The idea is that with probability $r$ the agent will make a random, rather than directionally biased, step. This better matches our own observations of the movements of ants.

Definition 2

(Directional bias with random steps) Given the current direction $d \in D$, bias $b \in [0, 1]$ and probability $r \in [0, 1]$ of a random step, the probability of moving in direction $d' \in D$ at the next step, $P(d')$, is defined as follows.

$$P(d') = \begin{cases} \frac{r}{8} + (1-r)\,b & \text{if } d' = d \\ \frac{r}{8} + (1-r)\frac{1-b}{2} & \text{if } d' = d \pm \frac{\pi}{4} \pmod{2\pi} \\ \frac{r}{8} & \text{otherwise} \end{cases}$$
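
To make the two movement models concrete, the following is a minimal sketch in Java (the language of our implementation in Section 4). The class and method names are illustrative only.

    // Sketch of the movement models of Definitions 1 and 2.
    // Directions 0..7 correspond to 0, pi/4, ..., 7*pi/4 radians.
    final class MovementModels {
        static final int K = 8; // number of directions (= neighbours per node)

        // Definition 1: next-direction probabilities given the current
        // direction d (0..7) and bias b.
        static double[] directionalBias(int d, double b) {
            double[] p = new double[K];
            p[d] = b;                         // continue with probability b
            p[(d + 1) % K] = (1 - b) / 2;     // turn pi/4 clockwise
            p[(d + K - 1) % K] = (1 - b) / 2; // turn pi/4 anti-clockwise
            return p;
        }

        // Definition 2: with probability r the step is uniformly random
        // over all K directions, otherwise it is directionally biased.
        static double[] biasWithRandomSteps(int d, double b, double r) {
            double[] p = directionalBias(d, b);
            for (int i = 0; i < K; i++) {
                p[i] = r / K + (1 - r) * p[i];
            }
            return p;
        }
    }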

To analyse coverage time under our models of directional bias, we adapt a Markov-chain model [22] developed for random walks by Mian et al. [21]. As explained in Section 3, this allows us to calculate the coverage time directly, and hence compare coverage times for different network sizes and biases.

3 A macro-level model

The previous work on directional bias by Fink et al. [16] shows that directionally biased walks are more efficient than random walks on a regular, connected toroidal graph; but only for the one specific network size and bias considered in their paper. In this paper, we produce more general results by investigating the effect on coverage time of varying the network size and directional bias. The micro-level model and simulation approach used by Fink et al. is not suited to this goal, requiring numerous simulation runs to calculate the coverage time for each network size and bias. We therefore use a more abstract, macro-level model which allows us to calculate the coverage time for a given network size and bias directly.

Our model is based on the work of Mian et al. [21] who provide a Markov-chain approach [22] to model and calculate coverage time for random walks on a regular, connected toroidal graph. Given a network of $n$ nodes, let the vector $\pi$ of length $n$ denote the state probability distribution with elements $\pi_i$ for $0 \le i < n$, and the matrix $P$ of size $n \times n$ denote the transition probability matrix with elements $p_{ij}$ for $0 \le i, j < n$.

Definition 3

(Markov-chain model) Let $\pi^{(0)}$ denote the initial state distribution and $\pi^{(t)}$ denote the state distribution after the $t$th step.

  • The elements of the state probability distribution sum to 1: $\sum_{i=0}^{n-1} \pi^{(t)}_i = 1$ for all $t \ge 0$.

  • The rows of the transition probability matrix sum to 1: $\sum_{j=0}^{n-1} p_{ij} = 1$ for $0 \le i < n$.

  • The state distribution at step $t$ is calculated by multiplying the initial distribution by the transition probability matrix $t$ times: $\pi^{(t)} = \pi^{(0)} P^t$.

Often the system described by such a model would begin in a particular node with probability 1.

For a random walk, we call this node the starting node. A random walk is specified by letting the transition from a node to any of its neighbours occur with a probability of $1/k$ where $k$ is the number of neighbours. For example, for a 1-dimensional network of 5 nodes such that

$$\pi^{(0)} = (1, 0, 0, 0, 0)$$

the transition probability matrix for a random walk would be

$$P = \begin{pmatrix} 0 & \tfrac12 & 0 & 0 & \tfrac12 \\ \tfrac12 & 0 & \tfrac12 & 0 & 0 \\ 0 & \tfrac12 & 0 & \tfrac12 & 0 \\ 0 & 0 & \tfrac12 & 0 & \tfrac12 \\ \tfrac12 & 0 & 0 & \tfrac12 & 0 \end{pmatrix}$$

where, for example, row 0 (the topmost row of $P$) indicates that an agent at node 0 (the first node of $\pi$) has a probability of $\tfrac12$ of moving to node 1 and a probability of $\tfrac12$ of moving to node 4 (since we wrap the east and west edges). Multiplying $\pi^{(0)}$ by $P$ results in

$$\pi^{(1)} = \pi^{(0)} P = (0, \tfrac12, 0, 0, \tfrac12)$$

and so on. For a 2-dimensional network of $m \times m$ nodes, the vector $\pi$ would have $m^2$ elements, with those from $\pi_{im}$ to $\pi_{(i+1)m-1}$ for $0 \le i < m$ denoting the nodes in the $i$th row of the network. So, for example, row 0 of the matrix $P$ for a random walk on a network of $5 \times 5$ nodes would be

$$\big(\underbrace{0, \tfrac18, 0, 0, \tfrac18}_{r_0},\ \underbrace{\tfrac18, \tfrac18, 0, 0, \tfrac18}_{r_1},\ \underbrace{0, 0, 0, 0, 0}_{r_2},\ \underbrace{0, 0, 0, 0, 0}_{r_3},\ \underbrace{\tfrac18, \tfrac18, 0, 0, \tfrac18}_{r_4}\big)$$

where $r_0$ to $r_4$ refer to the rows in the network.
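
As an illustration, the transition matrix of the 1-dimensional example and a single step of the chain can be sketched in Java as follows (names are ours; the 2-dimensional, 8-neighbour matrix is built analogously).

    // Sketch: transition matrix for a random walk on an n-node ring
    // (east and west edges wrapped), and one step pi' = pi * P.
    final class RandomWalk {
        static double[][] ringMatrix(int n) {
            double[][] P = new double[n][n];
            for (int i = 0; i < n; i++) {
                P[i][(i + 1) % n] = 0.5;     // east neighbour (wrapping)
                P[i][(i + n - 1) % n] = 0.5; // west neighbour (wrapping)
            }
            return P;
        }

        static double[] step(double[] pi, double[][] P) {
            double[] next = new double[pi.length];
            for (int i = 0; i < pi.length; i++)
                for (int j = 0; j < P[i].length; j++)
                    next[j] += pi[i] * P[i][j];
            return next;
        }
    }

Starting from $\pi^{(0)} = (1, 0, 0, 0, 0)$, a call to step returns $(0, \tfrac12, 0, 0, \tfrac12)$, as above.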

In order to calculate coverage time, we modify the standard Markov-chain model for a random walk so that the starting node is an absorbing node, i.e., a node from which the probability of a transition to any neighbour is 0 (and the probability of a transition to itself is 1). We then model the system as starting from the state distribution reached one step after the initial distribution, i.e., that in which all neighbours of the starting node have probability $1/k$ where $k$ is the number of neighbours per node.

Definition 4

(Markov-chain model with absorbing node) Given a Markov-chain model as defined in Definition 3, let $s$ be the position of the starting node. A Markov-chain model with absorbing starting node is defined in terms of a transition probability matrix $\hat{P}$ and initial state probability distribution $\hat{\pi}^{(0)}$ as follows.

  • The initial state probability distribution is that reached after 1 step from a distribution in which the agent is in the starting node with probability 1: $\hat{\pi}^{(0)}_j = p_{sj}$ for $0 \le j < n$.

  • The starting node transitions to itself with probability 1 in $\hat{P}$: $\hat{p}_{ss} = 1$ and $\hat{p}_{sj} = 0$ for $j \ne s$.

  • All other transitions in $\hat{P}$ are as in $P$: $\hat{p}_{ij} = p_{ij}$ for all $i \ne s$ and $0 \le j < n$.

Due to the starting node being an absorbing node, the probability of being in the starting node never decreases as the number of steps increases. For example, for the 1-dimensional network above (with starting node 0), the transition probability matrix is

$$\hat{P} = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ \tfrac12 & 0 & \tfrac12 & 0 & 0 \\ 0 & \tfrac12 & 0 & \tfrac12 & 0 \\ 0 & 0 & \tfrac12 & 0 & \tfrac12 \\ \tfrac12 & 0 & 0 & \tfrac12 & 0 \end{pmatrix}$$

and the state probability distribution is

$$\hat{\pi}^{(0)} = (0, \tfrac12, 0, 0, \tfrac12), \qquad \hat{\pi}^{(1)} = (\tfrac12, 0, \tfrac14, \tfrac14, 0)$$

and so on.
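
Continuing the sketch, the absorbing modification of Definition 4 can be written as follows (again with illustrative names).

    // Sketch of Definition 4: make starting node s absorbing, and take the
    // initial distribution to be one step from s under the original P.
    final class AbsorbingChain {
        static double[][] withAbsorbingNode(double[][] P, int s) {
            double[][] Phat = new double[P.length][];
            for (int i = 0; i < P.length; i++) Phat[i] = P[i].clone();
            java.util.Arrays.fill(Phat[s], 0.0);
            Phat[s][s] = 1.0;    // s transitions only to itself
            return Phat;
        }

        static double[] initialDistribution(double[][] P, int s) {
            return P[s].clone(); // one step from "at s with probability 1"
        }
    }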

The probability of being in the starting node at step $t$ of this model, $\hat{\pi}^{(t)}_s$, is the probability that the system has returned to the starting node within $t$ steps of the model's initial distribution. It can be used to calculate the coverage time as follows.

Let $c_t$ denote the expected number of new nodes covered in the $t$th step. The total expected number of nodes covered at the $t$th step, $C_t$, is then

$$C_t = \sum_{i=0}^{t} c_i$$

The initial node is covered at step 0, so we have $c_0 = 1$. For all $t > 0$, $c_t$ is equal to the probability that the node, $v_t$, reached at the $t$th step has not been visited before, i.e., $v_t \ne v_i$ for all $0 \le i < t$, or

$$c_t = 1 - \Pr(v_t = v_i \text{ for some } 0 \le i < t)$$

Due to the regularity of the network and the fact that an agent behaves the same at each node, the probability of returning to the starting node after, say, 10 steps is equal to the probability of returning to the second node reached after 12 steps. More generally, we have $\Pr(v_t = v_i) = \Pr(v_{t-i} = v_0)$. From this it follows that $\Pr(v_t = v_i \text{ for some } 0 \le i < t) = \Pr(v_j = v_0 \text{ for some } 0 < j \le t)$. Hence, from above,

$$c_t = 1 - \Pr(v_j = v_0 \text{ for some } 0 < j \le t)$$

which is equal to the probability that the system has not returned to the starting node, $v_0$, within $t$ steps. In other words, given the modified Markov-chain model of Definition 4,

$$c_t = 1 - \hat{\pi}^{(t-1)}_s \quad \text{for all } t > 0$$

Coverage time can then be defined as follows.

Definition 5

(Coverage time) Given a Markov-chain model with absorbing node as defined in Definition 4, the time to cover $x$% of the nodes is the smallest $t$ such that

$$1 + \sum_{i=1}^{t} \left(1 - \hat{\pi}^{(i-1)}_s\right) \;\ge\; \frac{x}{100}\, n$$
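
This definition translates directly into an iteration of the absorbing chain; a sketch using the helpers from the earlier code fragments:

    // Sketch of Definition 5: accumulate C_t = 1 + sum_{i=1..t}
    // (1 - pihat_s^{(i-1)}) until x% of the n nodes is covered.
    final class Coverage {
        static int coverageTime(double[][] P, int s, double x) {
            int n = P.length;
            double[][] Phat = AbsorbingChain.withAbsorbingNode(P, s);
            double[] pihat = AbsorbingChain.initialDistribution(P, s);
            double covered = 1.0; // the starting node, covered at step 0
            int t = 0;
            while (covered < (x / 100.0) * n) {
                t++;
                covered += 1.0 - pihat[s]; // c_t = 1 - pihat_s^{(t-1)}
                pihat = RandomWalk.step(pihat, Phat);
            }
            return t;
        }
    }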

With directional bias, we add an additional dimension to our representation of a network: the current direction of movement. For a network with $n$ nodes, the number of entries in the state probability distribution is hence no longer $n$ but $kn$, where $k$ is the number of neighbours per node (and hence the number of directions of movement). Each entry represents the probability of being in a node having entered it from a specific direction. We organise these entries so that those from $\pi_{dn}$ to $\pi_{(d+1)n-1}$ for $0 \le d < k$ denote the probabilities of being in a node of the network having entered from direction $d$. The corresponding transition probability matrix for the Markov-chain model is of size $kn \times kn$, and since there are $k$ positions corresponding to the starting node (one for each direction from which the starting node was entered) there will be $k$ absorbing positions in the matrix.
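
In an implementation, this layout amounts to a simple index mapping; a one-method sketch (the helper name is ours):

    // Sketch: index of the state-vector entry for node `node` entered from
    // direction `dir`, given n nodes (entries dn..(d+1)n-1 hold direction d).
    final class DirectionalState {
        static int pos(int node, int dir, int n) {
            return dir * n + node;
        }
    }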

Definition 6

(Markov-chain model for directional bias) Let $Q$ be a transition probability matrix of size $kn \times kn$ (such that all rows sum to 1). A Markov-chain model for directional bias is defined in terms of a transition probability matrix $\hat{Q}$ and initial state probability distribution $\hat{\pi}^{(0)}$ as follows.

  • The initial state probability distribution is that reached after 1 step from a distribution in which the agent is in the starting node (with a particular current direction) with probability 1.

  • The starting node transitions to itself (without changing the current direction) with probability 1 in $\hat{Q}$.

  • All other transitions in $\hat{Q}$ are as in $Q$.

For example, consider adding directional bias to the 1-dimensional network of 5 nodes. Let the probability of continuing in the current direction be $b$, and that of changing direction be $1 - b$. Let 0 denote direction east (or right) and 1 denote direction west (or left), and let the starting node be node 2 (the centre node). The transition probability matrix is then

$$\hat{Q} = \begin{pmatrix}
0 & b & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\!-\!b \\
0 & 0 & b & 0 & 0 & 1\!-\!b & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & b & 0 & 0 & 1\!-\!b & 0 & 0 \\
b & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\!-\!b & 0 \\
0 & 1\!-\!b & 0 & 0 & 0 & 0 & 0 & 0 & 0 & b \\
0 & 0 & 1\!-\!b & 0 & 0 & b & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1\!-\!b & 0 & 0 & b & 0 & 0 \\
1\!-\!b & 0 & 0 & 0 & 0 & 0 & 0 & 0 & b & 0
\end{pmatrix}$$

with rows (and columns) 0 to 4 corresponding to nodes 0 to 4 with current direction east, and rows (and columns) 5 to 9 to nodes 0 to 4 with current direction west,

where, for example, row 0 indicates that an agent at node 0 whose current direction is east will move to node 1 (and not change the direction) with probability $b$ and to node 4 (changing the current direction to west) with probability $1 - b$. Also, rows 2 and 7 indicate that an agent in node 2 will remain in node 2 and not change the current direction.

If the starting node’s initial direction is set to east then the state probability distribution in which the agent is in the starting node with probability 1 is

$$(0, 0, 1, 0, 0, 0, 0, 0, 0, 0)$$

and hence the initial state probability distribution for the Markov chain is

$$\hat{\pi}^{(0)} = (0, 0, 0, b, 0, 0, 1\!-\!b, 0, 0, 0)$$

Given that all nodes in our network follow the same directional-bias rules, we can calculate coverage time in a manner similar to that in Definition 5. The difference is that we need to sum the probabilities from the $k$ positions in the state probability distribution corresponding to the starting node.

Definition 7

(Coverage time for directional bias) Given a Markov-chain model for directional bias as defined in Definition 6, the time to cover $x$% of the nodes is the smallest $t$ such that

$$1 + \sum_{i=1}^{t} \left(1 - \sum_{d=0}^{k-1} \hat{\pi}^{(i-1)}_{s + dn}\right) \;\ge\; \frac{x}{100}\, n$$
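
As with Definition 5, this translates into an iteration of the chain, now summing the absorbed probability over the $k$ starting-node positions; a sketch using the helpers above:

    // Sketch of Definition 7: the probability of having returned to the
    // starting node s is summed over its k direction-indexed positions.
    final class BiasedCoverage {
        static int coverageTime(double[][] Qhat, double[] pihat0,
                                int s, int n, int k, double x) {
            double[] pihat = pihat0.clone();
            double covered = 1.0;
            int t = 0;
            while (covered < (x / 100.0) * n) {
                t++;
                double returned = 0.0;
                for (int d = 0; d < k; d++)
                    returned += pihat[DirectionalState.pos(s, d, n)];
                covered += 1.0 - returned;
                pihat = RandomWalk.step(pihat, Qhat);
            }
            return t;
        }
    }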

4 Investigating directional bias

To perform our investigation into directional bias, we implemented our Markov-chain model for directional bias (Definition 6) and coverage time formula (Definition 7) in Java using the JAMA library for matrices and matrix operations (math.nist.gov/javanumerics/jama) and the JFreeChart library for plotting graphs (www.jfree.org/freechart).
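
The core of such an implementation reduces to repeated vector-matrix multiplication. A minimal sketch of one step of the chain using JAMA is shown below; the Matrix class and its times and getRowPackedCopy methods are JAMA's, while the surrounding class and variable names are illustrative.

    import Jama.Matrix;

    final class ChainStepJama {
        // One step of the chain: pi' = pi * P.
        static double[] step(double[] dist, double[][] trans) {
            Matrix pi = new Matrix(dist, 1);       // 1 x kn row vector
            Matrix P = new Matrix(trans);          // kn x kn transition matrix
            return pi.times(P).getRowPackedCopy(); // row vector as an array
        }
    }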

Initially, we plotted graphs of coverage time versus bias (for bias values from 0 to 0.95 in steps of 0.05) for graph sizes $5 \times 5$ (25 nodes) to $15 \times 15$ (225 nodes). We calculated the time for coverage of 99% of the network nodes; this avoids the problem that, due to inaccuracies in the floating-point arithmetic, the computed coverage can converge to a point just below the network size and never reach 100%.

The movement model was that of Definition 1. The graphs for the $5 \times 5$ and $15 \times 15$ networks are shown in Fig. 4 and Fig. 5, respectively. The horizontal line represents the coverage time for a random walk, and the curved line that for a directionally biased walk over the range of biases. The general shape of the latter and its position relative to the horizontal line for the random walk were consistent for all network sizes in the range considered.

Figure 4: Random vs directionally biased walk for a 2-dimensional network of $5 \times 5$ nodes.
Figure 5: Random vs directionally biased walk for a 2-dimensional network of $15 \times 15$ nodes.

A number of interesting results follow from this analysis.

  1. The best coverage time is achieved for a bias of 0. This corresponds to an agent which always changes direction by $\frac{\pi}{4}$ radians (clockwise or anti-clockwise with equal probability) on every step.

  2. While for low directional biases (0 up to around 0.7 for the $5 \times 5$ case) coverage time is less than that for a random walk, for higher biases it is greater than that for a random walk.

  3. The value of the bias at which a directionally biased walk becomes less efficient than a random walk (from here on called the cross-over bias) progressively increases as the size of the network increases. It is around 0.74 for a $5 \times 5$ network, and 0.93 for a $15 \times 15$ network.

  4. The improvement in efficiency of directional bias increases as the size of the network increases. For a directional bias of 0.5 the increase in efficiency is less than 20% for a $5 \times 5$ network, and around 40% for a $15 \times 15$ network.

Point 1 is particularly interesting as it suggests a new movement model that was not initially anticipated. Our initial motivation was to investigate movement models similar to those observed in nature, which are best represented by a von Mises distribution. As illustrated in Fig. 3(b), however, low values of bias in our movement model (including the value 0) do not correspond to a von Mises distribution. The new model, although perhaps impractical as a means of movement in nature, can nevertheless be readily implemented in network search and dissemination algorithms.

Point 2 is also interesting as it indicates that directional bias is only effective in reducing coverage time when the bias is not too large. This result was also unanticipated, as directional bias in the movement of animals tends to be high. However, the areas over which such animals move would correspond to networks significantly larger than those we considered. Point 3 suggests that the cross-over bias would be higher in such networks. This conjecture is supported by the work of Fink et al. [16], whose micro-level simulation of a 2-dimensional network of $100 \times 100$ nodes shows that a directionally biased walk (approximating a von Mises distribution with high $\kappa$) is more efficient than a random walk.

The second part of our investigation considered the movement model of Definition 2, where occasional random steps are added to a directionally biased walk. Example plots for a network of $5 \times 5$ nodes and the value of $r$ set to 0.1 (an average of one random step in every 10) and 0.2 (an average of one random step in every 5) are shown in Fig. 6 and Fig. 7, respectively.

Figure 6: Random vs directionally biased walk with probability 0.1 of a random step for a 2-dimensional network of $5 \times 5$ nodes.
Figure 7: Random vs directionally biased walk with probability 0.2 of a random step for a 2-dimensional network of $5 \times 5$ nodes.

The following results emerge from this analysis.

  1. As may have been predicted, the addition of random steps moves the coverage time closer to that of a random walk. Hence, for bias values lower than the cross-over bias the coverage time increases, but for values higher than the cross-over bias it decreases. Comparing Fig. 7 with Fig. 4, it can be seen that for a bias of 0 the coverage time has increased from around 55 to 70, and for a bias of 0.95 it has decreased significantly from around 470 to about 158.

  2. The introduction of random steps increases the cross-over bias. For $r = 0.1$ the cross-over bias increases to 0.78 (from 0.74 for no random steps), and for $r = 0.2$ to 0.84.

5 Conclusion

In this paper we have investigated the effect of directional bias on the coverage time of random walks on regular, connected networks. Our analysis has shown that directional bias can reduce coverage time significantly and has a greater effect the larger the network. However, this reduction occurs only when the bias (to continue in the same direction) is below a certain value we call the cross-over bias. The cross-over bias is dependent on the network size, increasing as the size of the network increases. Hence, high values of bias which work well at reducing coverage time in large networks, may be less effective, or even increase the coverage time, in smaller networks.

The cross-over bias can be increased by adding occasional random steps to a directionally biased walk — the more random steps, the higher the cross-over bias. Adding such steps, however, moves the coverage time of a directionally biased walk closer to that of a random walk (increasing coverage time for biases below the cross-over bias).

Our analysis also revealed that a movement model in which an agent changes direction by $\frac{\pi}{4}$ radians (in either direction with equal probability) on each step is more effective at reducing coverage time than a standard directionally biased model. Further investigation of this, and similar models, is warranted.

Our investigation was facilitated by a macro-level model of random and directionally biased walks in terms of Markov chains. This model allowed us to calculate coverage time directly, in contrast to other approaches where coverage time is calculated as the average result obtained from numerous runs of a micro-level simulation. Although it introduces a margin of error due to the limitations of floating-point arithmetic, our more abstract model has provided a practical means for obtaining a deeper, more complete analysis of directional bias than in previous work.

Acknowledgements

This work was supported by Australian Research Council (ARC) Discovery Grant DP110101211.

References

  1. D.J. Aldous. An introduction to covering problems for random walks on graphs. Journal of Theoretical Probability, 2(1):87–89, 1989.
  2. D.J. Aldous. Threshold limits for cover times. Journal of Theoretical Probability, 4:197–211, 1991.
  3. N. Alon, C. Avin, M. Koucký, G. Kozma, Z. Lotker, and M. Tuttle. Many random walks are faster than one. Combinatorics, Probability and Computing, 20(4):481–502, 2011.
  4. C. Avin and C. Brito. Efficient and robust query processing in dynamic environments using random walk techniques. In International Symposium on Information Processing in Sensor Networks (IPSN ’04), pages 277–286. ACM, 2004.
  5. C. Avin and B. Krishnamachari. The power of choice in random walks: An empirical study. In ACM International Symposium on Modelling, Analysis and Simulation of Wireless and Mobile Systems (MSWiM ’06), pages 219–228. ACM, 2006.
  6. Z. Bar-Yossef, R. Friedman, and G. Kliot. RaWMS — Random walk based lightweight membership service for wireless and ad hoc networks. In ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc ’06), pages 238–249. ACM, 2006.
  7. E. Bonabeau, F. Henaux, S. Guérin, D. Snyers, P. Kuntz, and G. Theraulaz. Routing in telecommunications networks with ant-like agents. In International Workshop on Intelligent Agents for Telecommunications and Applications (IATA 1998). Springer-Verlag, 1998.
  8. P. Bovet and S. Benhamou. Spatial analysis of animals. Journal of Theoretical Biology, 131:419–433, 1988.
  9. M.J.A.M. Brummelhuis and H.J. Hilhorst. Covering of a finite lattice by a random walk. Physica A: Statistical Mechanics and its Applications, 176:387–408, 1991.
  10. S. Camazine, J.L. Deneubourg, N.R. Franks, J. Sneyd, G. Theraulaz, and E. Bonabeau. Self-Organization in Biological Systems. Princeton University Press, 2001.
  11. S. Caser and H.J. Hilhorst. Topology of the support of the two-dimensional lattice random walk. Physical Review Letters, 77:992–995, 1996.
  12. T.O. Crist and J.A. MacMahon. Individual foraging components of harvester ants: Movement patterns and seed patch fidelity. Insectes Sociaux, 38(4):379–396, 1991.
  13. A. Dembo, Y. Peres, J. Rosen, and O. Zeitouni. Cover times for Brownian motion and random walks in two dimensions. Annals of Mathematics, 160:433–464, 2004.
  14. S. Dolev, E. Schiller, and J. Welch. Random walk for self-stabilizing group communication in ad hoc networks. IEEE Transactions on Mobile Computing, 5(7):893–905, 2006.
  15. A. Dvoretzky and P. Erdös. Some problems on random walk in space. In Berkeley Symposium on Mathematical Statistics and Probability, 1950. University of California Press, 1951.
  16. G.A. Fink, K.S. Berenhaut, and C.S. Oehmen. Directional bias and pheromone for discovery and coverage on networks. In IEEE International Conference on Self-Adaptive and Self-Organizing Systems (SASO 2012), pages 1–10. IEEE, 2012.
  17. A. Ghosh, A. Halder, M. Kothari, and S. Ghosh. Aggregation pheromone density based data clustering. Information Sciences, 178(13):2816–2831, 2008.
  18. P.M. Kareiva and N. Shigesada. Analyzing insect movement as a correlated random walk. Oecologia, 56:234–238, 1983.
  19. L. Lima and J. Barros. Random walks on sensor networks. In International Symposium on Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks and Workshops (WiOpt 2007), pages 1–5. IEEE, 2007.
  20. C.E. McCulloch and M.L. Cain. Analyzing discrete movement data as a correlated random walk. Ecology, 70:383–388, 1989.
  21. A.N. Mian, R. Beraldi, and R. Baldoni. On the coverage process of random walk in wireless ad hoc and sensor networks. In IEEE International Conference on Mobile, Ad-hoc and Sensor Systems (MASS 2010), pages 146–155. IEEE, 2010.
  22. J.R. Norris. Markov Chains. Cambridge University Press, 1998.
  23. H.V.D. Parunak. Go to the ant: Engineering principles from natural agent systems. Annals of Operations Research, 75:69–101, 1997.
  24. K. Pearson. The problem of the random walk. Nature, 72:294, 1905.
  25. N. Sadagopan, B. Krishnamachari, and A. Helmy. Active query forwarding in sensor networks. Journal of Ad Hoc Networks, 3(1):91–113, 2005.