On Simple Back-Off in Complicated Radio Networks
Abstract
In this paper, we study local and global broadcast in the dual graph model, which describes communication in a radio network with both reliable and unreliable links. Existing work proved that efficient solutions to these problems are impossible in the dual graph model under standard assumptions. In real networks, however, simple backoff strategies tend to perform well for solving these basic communication tasks. We address this apparent paradox by introducing a new set of constraints to the dual graph model that better generalize the slow/fast fading behavior common in real networks. We prove that in the context of these new constraints, simple backoff strategies now provide efficient solutions to local and global broadcast in the dual graph model. We also precisely characterize how this efficiency degrades as the new constraints are relaxed toward nonexistence, and prove new lower bounds that establish this degradation as near optimal for a large class of natural algorithms. We conclude with a preliminary investigation of the performance of these strategies when we include additional generality to the model. These results provide theoretical foundations for the practical observation that simple backoff algorithms tend to work well even amid the complicated link dynamics of real radio networks.
1 Introduction
In this paper, we study upper and lower bounds for efficient broadcast in the dual graph radio network model [6, 13, 14, 5, 8, 7, 10, 9, 16, 11], a dynamic network model that captures wireless communication over both reliable and unreliable links. As argued in previous studies of this setting, including unpredictable link behavior is important for understanding these networks, as in real world deployments, radio links are often quite dynamic.
The Back-Off Paradox.
Existing papers [14, 10, 16] proved that it is impossible to solve standard broadcast problems efficiently in the dual graph model without the addition of strong extra assumptions (see related work). In real radio networks, however, which suffer from the unreliability captured by the dual graph model, simple backoff strategies tend to perform quite well. These dueling realities seem to imply a dispiriting gap between theory and practice: basic communication tasks that are easily solved in real networks are impossible when studied in abstract models of these networks.
What explains this paradox? This paper tackles this fundamental question.
As detailed below, we focus our attention on the adversary entity that decides which unreliable links to include in the network topology in each round of an execution in the dual graph model. We introduce a new type of adversary with constraints that better generalize the dynamic behavior of real radio links. We then reexamine simple backoff strategies originally introduced in the standard radio network model [4] (which has only reliable links), and prove that for reasonable parameters, these simple strategies now do guarantee efficient communication in the dual graph model combined with our new, more realistic adversary.
We also detail how this performance degrades toward the existing dual graph lower bounds as the new constraints are relaxed toward nonexistence, and prove lower bounds that establish these bounds to be near tight for a large and natural class of backoff strategies. Finally, we perform preliminary investigations of even more general (and therefore more difficult) variations of this new style of adversary that continue to underscore the versatility of simple backoff strategies.
We argue that these results help resolve the backoff paradox described above. When unpredictable link behavior is modeled properly, predictable algorithms prove to work surprisingly well.
The Fading Adversary.
The dual graph model describes a radio network topology with two graphs, G = (V, E) and G' = (V, E'), where E ⊆ E': the vertex set V corresponds to the wireless devices, E corresponds to reliable (high quality) links, and E' ∖ E corresponds to unreliable (quality varies over time) links. In each round, all edges from E are included in the network topology. Also included is an additional subset of edges from E' ∖ E, chosen by an adversary. Existing studies of the dual graph model focused mainly on the information about the algorithm known to the adversary when it makes its edge choices. In this paper, we place additional constraints on how these choices are generated.
In more detail, in each round, the adversary independently draws the set of edges from E' ∖ E to add to the topology from some probability distribution defined over this set. We do not constrain the properties of the distributions selected by the adversary. Indeed, it is perfectly valid for the adversary in a given round to use a point distribution that puts the full probability mass on a single subset, giving it full control over its selection for the round. We also assume the algorithm executing in the model has no advance knowledge of the distributions used by the adversary.
We do, however, constrain how often the adversary can change the distribution from which it selects these edge subsets. In more detail, we parameterize the model with a stability factor, τ ≥ 1, and restrict the adversary to changing the distribution it uses at most once every τ rounds. For τ = 1, the adversary can change the distribution in every round, and is therefore effectively unconstrained and behaves the same as in the existing dual graph studies. On the other extreme, for τ = ∞, the adversary is now quite constrained in that it must draw edges independently from the same distribution for the entire execution. As detailed below, we find a stability factor on the order of log Δ, for local neighborhood size Δ, to be a key threshold after which efficient communication becomes tractable.
Notice, these constraints do not prevent the adversary from inducing large amounts of changes to the network topology from round to round. For nontrivial values of τ, however, they do require changes that are nearby in time to share some underlying stochastic structure. This property is motivated by the way wireless network engineers think about unreliability in radio links. In their formal models of link behavior (used, for example, to analyze modulation or rate selection schemes), engineers tend to assume that in the short term, changes to link quality come from sources like noise and multipath effects, which can be approximated by independent draws from an underlying distribution (Gaussian distributions are common choices for this purpose), while changes over the long term come from modifications to the network environment itself, such as devices moving, which do not necessarily have an obvious stochastic structure, but unfold at a slower rate than short term fluctuations.
In our model, the distribution used in a given round captures short term changes, while the adversary’s arbitrary (but rate-limited) changes to these distributions over time capture long term changes. Because these general types of changes are sometimes labeled slow/fast fading in the systems literature (e.g., [18]), we call our new adversary a fading adversary.
Table 1: Summary of our upper and lower bounds for local broadcast (Theorems 4 and 5) and global broadcast (Theorems 6, 7, and 8), with the time bounds and success probabilities stated in the corresponding theorems.
Our Results and Related Work.
In this paper, we study both local and global broadcast. The local version of this problem assumes some subset of devices in a dual graph network are provided broadcast messages. The problem is solved once each receiver that neighbors a broadcaster in G receives at least one message. The global version assumes a single broadcaster starts with a message that it must disseminate to the entire network. Below we summarize the relevant related work on these problems, and the new bounds proved in this paper. We conclude with a discussion of the key ideas behind these new results.
Related Work. In the standard radio network model, which is equivalent to the dual graph model with E = E',
Bar-Yehuda et al. [4] demonstrate that a simple randomized backoff strategy called Decay solves local broadcast in O(log² n) rounds and global broadcast in O(D log n + log² n) rounds,
where n is the network size and D is the diameter of G.
Both results hold with high probability in n, and were subsequently proved
to be optimal or near optimal.
In [13, 14], it is proved that global broadcast (even with constant diameter) and local broadcast require a polynomial-in-n number of rounds to solve with reasonable probability in the dual graph model with an offline adaptive adversary controlling the unreliable edge selection, while [10] proves that polynomial-in-n rounds are also necessary for both problems with an online adaptive adversary. As also proved in [10]: even with the weaker oblivious adversary, local broadcast still requires a polynomial number of rounds, whereas global broadcast can be solved efficiently, but only if the broadcast message is sufficiently large to contain enough shared random bits for all nodes to use throughout the execution. In [16], an efficient algorithm for local broadcast with an oblivious adversary is provided given the assumption of geographic constraints on the dual graphs, enabling complicated clustering strategies that allow nearby devices to coordinate randomness.
New Results. In this paper, we turn our attention to local and global broadcast in the dual graph model with a fading adversary constrained by some stability factor τ (unknown to the algorithm). We start by considering upper bounds for a simple backoff style strategy inspired by the decay routine from [4]. This routine has broadcasters simply cycle through a fixed set of broadcast probabilities in a synchronized manner (all broadcasters use the same probability in the same round). We prove that this strategy solves local broadcast with probability at least 1 − ε, in a round complexity (stated precisely in Theorem 4) that depends on τ, ε, and Δ, where Δ is an upper bound on local neighborhood size.
Notice, for sufficiently large τ this bound simplifies to match the optimal results from the standard radio network model.
We next turn our attention to global broadcast. We consider a straightforward global broadcast algorithm that uses our local broadcast strategy as a subroutine. We prove that this algorithm solves global broadcast with probability at least 1 − ε, in a round complexity (stated precisely in Theorem 6) that depends on τ, ε, Δ, and D, where D is the diameter of G. Notice, for sufficiently large τ this bound reduces to match the near optimal result from the standard radio network model. As with local broadcast, we also prove the degradation of this performance as τ shrinks to be near optimal. (See Table 1 for a summary of these results and pointers to where they are proved in this paper.)
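The layered structure of this strategy can be sketched in a few lines of Python. Here `run_local_broadcast` is a hypothetical callback standing in for one invocation of the local broadcast subroutine (returning the set of nodes it newly informs); this is a sketch of the control flow only, not the paper's actual pseudocode:

```python
def global_via_local(source, run_local_broadcast, diameter):
    """Sketch: repeat local broadcast once per hop of the diameter.
    Every node already holding the message acts as a broadcaster, so
    each successful phase pushes the message at least one hop farther."""
    informed = {source}
    for _ in range(diameter):
        informed = informed | run_local_broadcast(informed)
    return informed
```

With an idealized local broadcast subroutine on a 4-node chain, three phases suffice to inform every node, which is where the diameter factor in the global bound comes from.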
We conclude with a preliminary investigation of the performance of these strategies when we allow some bounded amount of correlation between the distributions selected by the adversary within a given stable period of τ rounds.
Technique Discussion. Simple backoff strategies can be understood as experimenting with different guesses at the amount of contention afflicting a given receiver. If the network topology is static, this contention is fixed, and therefore so is the right guess. A simple strategy cycling through a reasonable set of guesses will soon arrive at this right guess, giving the message a good chance of propagating.
The existing lower bounds in the dual graph setting deploy an adversary that changes the topology in each round to specifically thwart that round’s guess. In this way, the algorithm never has the right guess for the current round so its probability of progress is diminished. The fading adversary, by contrast, is prevented from adopting this degenerate behavior because it is required to stick with the same distribution for τ consecutive rounds. An important analysis at the core of our upper bounds reveals that any fixed distribution will be associated with a right guess defined with respect to the details of that distribution. If τ is sufficiently large, our algorithms are able to experiment with enough guesses to hit on this right guess before the adversary is able to change the distribution.
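The contention-guessing intuition can be made concrete with a short Python sketch. The function names are ours, and the slot model is an idealized single receiver with a known number of contenders, not the full dual graph semantics:

```python
import random

def decay_cycle(max_degree):
    """One Decay-style cycle of guesses: halve the broadcast probability
    each slot, so for any contention k <= max_degree some slot uses a
    probability within a factor of 2 of the 'right guess' 1/k."""
    probs, p = [], 1.0
    while p >= 1.0 / max_degree:
        probs.append(p)
        p /= 2.0
    return probs

def slot_succeeds(num_contenders, p):
    """An idealized slot delivers a message iff exactly one contender
    transmits, each transmitting independently with probability p."""
    transmitters = sum(1 for _ in range(num_contenders) if random.random() < p)
    return transmitters == 1
```

In a static topology the contention never changes, so one pass through `decay_cycle` already includes a near-right guess; the lower-bound adversaries discussed next exploit precisely the ability to change the contention between slots.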
More generally speaking, the difficulty of broadcast in the previous dual graph studies was not due to the ability of the topology to change dramatically from round to round (which can happen in practice), but instead due to the model’s ability to precisely tune these changes to thwart the algorithm (a behavior that is hard to motivate). The dual graph model with the fading adversary preserves the former (realistic) behavior while minimizing the latter (unrealistic) behavior.
2 Model and Problem
We study the dual graph model of unreliable radio networks. This model describes the network topology with two graphs, G = (V, E) and G' = (V, E'), where E ⊆ E'. The vertices in V correspond to the wireless devices in the network, which we call nodes in the following. The edges in E describe reliable links (which maintain a consistently high quality), while the edges in E' ∖ E describe unreliable links (which have quality that can vary over time). For a given dual graph, we use Δ to describe the maximum degree in G', and D to describe the diameter of G.
Time proceeds in synchronous rounds that we label 1, 2, 3, and so on. For each round r, the network topology is described by a graph G_r = (V, E_r), where E_r contains all edges in E plus a subset of the edges in E' ∖ E. The subset of edges from E' ∖ E is selected by an adversary. The graph G_r can be interpreted as describing the high quality links during round r. That is, if an edge between u and v is in E_r, this means the link between u and v is strong enough that u could deliver a message to v, or garble another message being sent to v at the same time.
With the topology established for the round, behavior proceeds as in the standard radio network model. That is, each node u can decide to transmit or receive. If u transmits, it learns nothing about other messages transmitted in the round (i.e., the radios are half-duplex). If u receives and exactly one neighbor of u in G_r transmits, then u receives that neighbor's message. If u receives and two or more neighbors in G_r transmit, u receives nothing as the messages are lost due to collision. If u receives and no neighbor transmits, u also receives nothing. We assume u does not have collision detection, meaning it cannot distinguish between these last two cases.
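The reception rule above can be captured directly in code. This is an illustrative sketch of the collision semantics, with the round's topology given as an adjacency mapping; it is our own helper, not code from the paper:

```python
def round_receive(receiver, transmitters, topology):
    """Return the node heard by `receiver` this round, or None.
    A listening node hears a message iff exactly one of its neighbors
    in this round's topology transmits; silence and collision are
    indistinguishable (no collision detection)."""
    if receiver in transmitters:
        return None  # half-duplex: a transmitter learns nothing
    sending = [v for v in topology.get(receiver, ()) if v in transmitters]
    if len(sending) == 1:
        return sending[0]  # unique transmitting neighbor: delivery
    return None  # zero transmitters (silence) or two or more (collision)
```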
The Fading Adversary.
A key assumption in studying the dual graph model is the set of constraints placed on the adversary that selects the unreliable edges to include in the network topology in each round. In this paper, we study a new set of constraints inspired by real network behavior. In more detail, we parameterize the adversary with a stability factor that we represent with an integer τ ≥ 1. In each round, the adversary must draw the subset of edges (if any) from E' ∖ E to include in the topology from a distribution defined over these edges. The adversary selects which distributions it uses. Indeed, we assume it is adaptive in the sense that it can wait until the beginning of a given round before deciding the distribution it will use in that round, basing its decision on the history of the nodes’ transmit/receive behavior up to this point, including the previous messages they sent, but not including knowledge of the nodes’ private random bits.
The adversary is constrained, however, in that it can change this distribution at most once every τ rounds. On one extreme, if τ = 1, it can change the distribution in every round and is effectively unconstrained in its choices. On the other extreme, if τ = ∞, it must stick with the same distribution for every round. For most of this paper, we assume the draws from these distributions are independent in each round. Toward the end, however, we briefly discuss what happens when we generalize the model to allow more correlations.
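A minimal simulation harness for this adversary might look as follows. Here `make_distribution` is a hypothetical callback that returns a fresh sampler (a function producing an edge subset); the class enforces only the stability constraint, switching distributions at most once every `tau` rounds while drawing independently within each stable period:

```python
class FadingAdversary:
    """Sketch of a fading adversary with stability factor tau."""

    def __init__(self, tau, make_distribution):
        self.tau = tau
        self.make_distribution = make_distribution
        self.round = 0
        self.sample = make_distribution()  # distribution for the first period

    def edges_for_round(self):
        # Allowed to switch distributions only at period boundaries.
        if self.round > 0 and self.round % self.tau == 0:
            self.sample = self.make_distribution()
        self.round += 1
        return self.sample()  # independent draw within the stable period
```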
As detailed in the introduction, because these constraints roughly approximate the fast/slow fading behavior common in the study of real wireless networks, we call a dual graph adversary constrained in this manner a fading adversary.
Problem.
In this paper, we study both the local and global broadcast problems. The local broadcast problem assumes a set B ⊆ V of nodes are provided with a message to broadcast. Each node in B can receive a unique message. Let R be the set of nodes in V that neighbor at least one node in B in G. The problem is solved once every node in R has received at least one message from a node in B. We assume all nodes in B start the execution during round 1, but do not require that B and R are disjoint (i.e., broadcasters can also be receivers). The global broadcast problem, by contrast, assumes a single source node in V is provided a broadcast message during round 1. The problem is solved once all nodes have received this message. Notice, local broadcast solutions are often used as subroutines to help solve global broadcast.
Uniform Algorithms.
The broadcast upper and lower bounds we study in this paper focus on uniform algorithms, which require nodes to make their probabilistic transmission decisions according to a predetermined sequence of broadcast probabilities that we express as a repeating cycle of probabilities. In studying global broadcast, we assume that on first receiving a message, a node can wait to start making probabilistic transmission decisions until the cycle resets. We assume these probabilities can depend on n, Δ, and D (or worst-case bounds on these values).
As mentioned in the introduction, uniform algorithms, such as the decay strategy from [4], solve local and global broadcast with optimal efficiency in the standard radio network model. A major focus of this paper is to prove that they work well in the dual graph model as well, if we assume a fading adversary with a reasonable stability factor.
The fact that our lower bounds assume the algorithms are uniform technically weakens the results, as there might be nonuniform strategies that work better. In the standard radio network model, however, this does not prove to be the case: uniform algorithms for local and global broadcast match lower bounds that hold for all algorithms (c.f., discussion in [17]).
3 Local broadcast
We begin by studying upper and lower bounds for the local broadcast problem. Our upper bound performs efficiently once the stability factor τ reaches a threshold on the order of log Δ. As τ decreases toward its minimum value of 1, this efficiency degrades rapidly. Our lower bounds capture that this degradation for small τ is unavoidable for uniform algorithms. In the following, log denotes the base-2 logarithm.
3.1 Upper Bound
All uniform local broadcast algorithms behave in the same manner: the nodes in B repeatedly broadcast according to some fixed cycle of broadcast probabilities. We formalize this strategy with algorithm RLB (Robust Local Broadcast), described below (we break out Uniform into its own procedure as we later use it in our improved FRLB local broadcast algorithm as well):
Algorithm RLB: repeat, for a fixed number of iterations, the procedure Uniform, which cycles once through the broadcast probability sequence, with every broadcaster transmitting with the i-th probability in the i-th slot of the cycle.
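In code, RLB is just a nested loop. Here `transmit` is a hypothetical callback that runs one slot with the given broadcast probability, and the probability cycle and iteration count are parameters supplied per the analysis (a minimal sketch, not the paper's pseudocode):

```python
def rlb(probabilities, iterations, transmit):
    """Sketch of RLB: all broadcasters cycle through the same fixed
    probability sequence in lockstep, repeating the cycle."""
    for _ in range(iterations):
        for p in probabilities:  # one pass of the Uniform procedure
            transmit(p)
```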
Our upper bound analysis leverages the following useful lemma, which can be shown by induction on n (the left side is also known as the Weierstrass Product Inequality):
Lemma 1.
For any x_1, …, x_n such that 0 ≤ x_i ≤ 1: 1 − (x_1 + ⋯ + x_n) ≤ (1 − x_1)⋯(1 − x_n) ≤ 1/(1 + x_1 + ⋯ + x_n).
To begin our analysis, we focus on the behavior of our algorithm with respect to a single receiver when we use the transmit probability sequence p_1, p_2, …, p_k, where p_i = 2^{-i}, and k = log Δ.
Lemma 2.
Fix any receiver u ∈ R and error bound ε > 0. It follows: RLB delivers a message to u with probability at least 1 − ε in the time bound established in the proof below.
Proof.
It is sufficient to prove the claim for the smallest stability factor covered by the statement; for larger values we use the same algorithm. Note that any algorithm that is correct for some τ must also work for any larger τ, because the adversary may not choose to change the distribution as frequently as it is permitted to.
We want to show that if the nodes from execute the procedure twice, then receives some message with probability at least . Every time we execute Uniform twice, we have a total of consecutive time slots out of which, by the definition of our model, at least consecutive slots have the same distribution of the additional edges and moreover stations try all the probabilities (not necessarily in this order). Let denote the set of these time slots and for let be the step in which probability is used. We also denote the distribution used in steps from set by . Hence we can denote the edges between and its neighbors that have some message by . We know that the edge sets are chosen independently from the same distribution: for . Let us denote by the random variable being the number of neighbors that are connected to in step and belong to . For each from to we define for any . Observe that probabilities do not depend on during the considered rounds. Moreover , hence:
(1) 
Let the indicator random variable for round i equal 1 if exactly one neighbor of the receiver transmits in that round while the receiver itself is not transmitting, and 0 otherwise. Clearly if this indicator equals 1 in some round, then the receiver obtains some message in that round. We would then like to show for each round that:
(2) 
In the i-th slot the transmission probability is p_i, and the transmission choices made by the stations are independent from the choice of edges active in that round. Note that the receiver might also belong to B and try to transmit. But in that case it is not transmitting with probability at least 1 − p_i. Conditioning on the event in question:
because inequality holds for all . Since the edge sets are chosen independently in each step and the random choices of the stations whether to transmit or not are also independent from each other we have:
by Lemma 1  
by Equation (1) 
Hence if we execute the procedure for time steps, we have at least sequences of consecutive time steps in which the distribution over the unreliable edges is the same and the algorithm tries all the probabilities . Each of these procedures fails independently with probability at most hence the probability that all the procedures fail is at most: ∎
On closer inspection of the analysis of Lemma 2, it becomes clear that if we slightly tweak the probabilities used in our algorithm, we require fewer iterations. In more detail, when each of k transmitters broadcasts independently with probability p, the probability of a successful transmission is approximately kp(1 − p)^{k − 1}. This success probability depends on p roughly linearly when p is below 1/k (a “too small” probability) and decays exponentially when p is above 1/k (a “too large” probability). In the previous theorem we intuitively only use the linear term. In the next one we would like to also use, to some extent, the exponential term. If we shift all the probabilities in the cycle by multiplying them by a common factor, the total success probability over a cycle gains contributions on both sides of 1/k, and an appropriate choice of this shift maximizes both contributions.
Algorithm FRLB: repeat, for a fixed number of iterations, the procedure Uniform, with each probability in the cycle scaled by the common shift factor.
The following lemma makes the above intuition precise and gains a log-factor in performance in algorithm FRLB (Fast Robust Local Broadcast) compared to RLB.
As part of this analysis, we add a second statement to our lemma that will prove useful during our subsequent analysis of global broadcast. The correctness of this second statement is a straightforward consequence of the analysis.
(Most of the proofs are deferred to the appendix. We provide short proof ideas for all of the deferred proofs.)
Lemma 3.
Fix any receiver u ∈ R and error bound ε > 0. It follows:

1. FRLB completes local broadcast with a single receiver in the time bound established in its proof,
with probability at least 1 − ε, for any ε > 0;
2. FRLB completes local broadcast with a single receiver, with at least constant probability, after a single execution of the underlying procedure.
Proof Idea.
The proof is similar to the one of Lemma 2. We define the probabilities and the indicator events in the same way. The key difference is in the evaluation of the probability of success in a given round conditioned on the number of neighbors connected to the receiver. The conditioning event restricts this number of neighbors to some interval. We prove that the success probability is lower bounded by the minimum of the values at the endpoints of this interval. This is true because when k stations transmit with probability p to a common neighbor, then the probability of a successful transmission, seen as a function of k, has a single maximum (near k = 1/p); hence its value at any point of some fixed interval is lower bounded by the minimum of the values at the endpoints. ∎
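The unimodality fact at the heart of this argument is easy to check numerically. With the transmission probability p fixed, the success probability k·p·(1 − p)^(k − 1), viewed as a function of the number of transmitters k, has a single peak near k = 1/p, so over any interval of k it is minimized at one of the endpoints. A quick sketch (our own helper):

```python
def success_probability(k, p):
    """Probability that exactly one of k transmitters, each sending
    independently with probability p, transmits in a slot."""
    return k * p * (1.0 - p) ** (k - 1)
```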
Proof.
It is sufficient to prove the claim for the smallest stability factor covered by the statement; for larger values we use the same algorithm. Note that any algorithm that is correct for some τ must also work for any larger τ, because the adversary may not choose to change the distribution as frequently as it is permitted to.
We want to show that if the nodes from execute the procedure twice, then receives some message with probability at least . Since we execute Uniform twice, we have a total of consecutive time slots out of which, by the definition of our model, at least consecutive slots have the same distribution of the edges in and moreover stations try all the probabilities (not necessarily in this order). Let denote the set of these time slots and for let be the step in which probability is used. We also denote the distribution used in steps from set by . Observe from the definition of the algorithm that during these slots the number of participating stations does not change. Hence we can denote the edges between and its neighbors that have some message by . We know that the edge sets are chosen independently from the same distributions: for . Let us denote by the random variable being the number of neighbors that are connected to in step and belong to . For each from to we define:
for any . Observe that probabilities do not depend on during the considered rounds. Moreover , hence:
(3) 
For each round under consideration, let the indicator random variable equal 1 if exactly one neighbor of the receiver transmits in that round while the receiver itself is not transmitting, and 0 otherwise. Clearly if this indicator equals 1 in some round, then the receiver obtains some message in that round. We would then like to show for each round that:
(4) 
In the i-th slot each station with a message transmits independently with probability p_i, and the transmission choices made by the stations are independent from the choice of edges active in that round. The receiver is not transmitting with probability at least 1 − p_i, and we can write the probability that exactly one neighbor of the receiver transmits in this step exactly. Seen as a function of the number of transmitting neighbors, this expression has a single maximum. Hence if we assume that this number belongs to some fixed interval, then the expression is lower bounded by the minimum of its values at the endpoints of this interval, which gives us:
Since and we have . Thus by Bernoulli inequality:
Moreover since inequality holds for all and and the minimum value of function is (for ):
Thus:
Since the edge sets are chosen independently in each step and the choices of the stations are also independent we have:
by Equation (4)  
by Lemma 1  
by Equation (3)  
where the last inequality is true since if we denote (for ) then we have hence . This completes the proof of statement 2. To prove statement 1, we observe that if we execute the procedure for time steps, we have at least sequences of consecutive time steps in which the distribution over the unreliable edges is the same and the algorithm tries all the probabilities. Each of these procedures fails independently with probability at most , hence the probability that all the procedures fail is at most:
∎
In Lemmas 2 and 3 we studied the fate of a single receiver in R during an execution of algorithms RLB and FRLB. Here we apply this result to bound the time for all nodes in R to receive a message, therefore solving the local broadcast problem. In particular, for a desired error bound ε, if we apply these lemmas with error bound ε/n, then we end up solving the single node problem with a failure probability upper bounded by ε/n. Applying a union bound over the at most n nodes of R, it follows that the probability that any node from R fails to receive a message is less than ε. Formally:
Theorem 4.
Fix an error bound ε > 0. It follows that algorithm FRLB solves local broadcast, with probability at least 1 − ε, in the time bound obtained by applying Lemma 3 with error bound ε/n.
3.2 Lower bound
Observe that for sufficiently large τ, FRLB has a time complexity that matches the performance of the optimal algorithms for this problem in the standard radio network model. This emphasizes the perhaps surprising result that even large amounts of topology changes do not impede simple uniform broadcast strategies, so long as there is independence between nearby changes.
Once τ drops below this threshold, however, a significant gap opens between our model and the standard radio network model. Here we prove that this gap is fundamental for any uniform algorithm in our model.
In the local broadcast problem, a receiver from set R can have between 1 and Δ neighbors in set B. The neighbors should optimally use probabilities close to the inverse of their number. But since the number of neighbors is unknown, the algorithm has to check all the values. If we look at the logarithms of the inverses of the probabilities (call them log-estimates) used in Lemma 2, we get log(1/p_i) = i, for i = 1, …, log Δ, which are spaced equidistantly on the interval from 1 to log Δ. The goal of the algorithm is to minimize the maximum gap between two adjacent log-estimates placed on this interval, since this maximizes the success probability in the worst case. With this in mind, in the proof of the following lower bound, we look at the dual problem. Given a predetermined sequence of probabilities used by an arbitrary uniform algorithm, we seek the largest gap between adjacent log-estimates, and then select edge distributions that take advantage of this weakness.
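This dual view is simple to compute. Given the probability cycle of a uniform algorithm, the following sketch (our own helper, with base-2 logarithms) finds the largest gap between adjacent log-estimates on the interval from 0 to log Δ; the adversary then aims its edge distributions at the middle of that gap:

```python
import math

def max_log_gap(probs, max_degree):
    """Largest gap between adjacent log-estimates log2(1/p), with the
    interval endpoints 0 and log2(max_degree) included as anchors."""
    estimates = sorted({0.0, math.log2(max_degree)}
                       | {math.log2(1.0 / p) for p in probs})
    return max(b - a for a, b in zip(estimates, estimates[1:]))
```

For example, the equidistant cycle of Lemma 2 leaves only unit gaps, while a sparser cycle leaves a wide gap for the adversary to exploit.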
Theorem 5.
Fix a maximum degree Δ, stability factor τ, and uniform local broadcast algorithm A. Assume that A guarantees, with reasonable probability, to solve local broadcast in t rounds when executed in any dual graph network with maximum degree Δ and a fading adversary with stability factor τ. It follows that t is subject to the lower bound summarized in Table 1.
Proof.
Consider the dual graph (G, G') defined as follows: the node set consists of the broadcasters in B, the receiver, and the remaining nodes, with the reliable and unreliable edges arranged as depicted in Figure 1. We will study local broadcast in this dual graph.
Observe that the maximum degree of any node is indeed Δ and the number of nodes is as required. The remaining nodes do not belong to R, hence are not relevant in our analysis.
Using the sequence of probabilities used by algorithm A, we will define a sequence of distributions over the edges that will cause a long delay until the receiver node receives a message. The adversary we define is allowed to change the distribution every τ steps. Accordingly, we partition the rounds into phases of length τ, which we label 1, 2, and so on. For each phase, the adversary will use a distribution that is defined with respect to the probabilities used by A during the rounds in that phase. In particular, consider the sequence of probabilities used by A during a given phase.
We use to define the distribution as follows. Define and let represent urns labeled with numbers from to . Into these urns we place balls with numbers and for all . Ball with number is placed into the bin with the same number. With this procedure for each , we place two balls in adjacent bins if and a single ball in the opposite case. We arrange the bins in a circular fashion i.e., bins and are consecutive and we want to find the longest sequence of consecutive empty bins. Observe that since for each we put either a single ball or two balls into adjacent bins we have at most sequences of consecutive empty bins. Moreover, since at most bins contain a ball then there exists a sequence of consecutive empty bins of length at least . Knowing that is an integer and that we can represent , where is the fractional part of and . We can then show that:
We define:
We observe that for we have hence and are both positive integers and moreover . Hence we already showed that there exists a sequence of consecutive empty bins of length at least . Now, we pick the label of the first bin in this sequence (the order of bins is according to the circular arrangement) and call it . This set contains the logarithms of all the estimates “tried” by the algorithm in this phase. Now we split it into elements that are larger and elements that are smaller than the chosen label. We observe that if an estimate is larger, then it is separated from the label because there are empty bins between the chosen bin and the bin containing its ball. Symmetrically, if an estimate is smaller, then it is likewise separated because there are empty bins between the chosen bin and the bin containing its ball.
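The core combinatorial step, finding the longest run of consecutive empty bins in the circular arrangement, can be sketched as follows (our own helper and naming, not the paper's notation):

```python
def longest_empty_run(occupied, num_bins):
    """Length of the longest run of consecutive empty bins when the bins
    are arranged in a circle and `occupied` lists the non-empty labels."""
    if not occupied:
        return num_bins
    labels = sorted(set(occupied))
    # Circular gap of empty bins after each occupied bin.
    gaps = [(labels[(i + 1) % len(labels)] - labels[i] - 1) % num_bins
            for i in range(len(labels))]
    return max(gaps)
```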
In our distribution in phase , we include all edges from , plus a subset of size selected uniformly from . This is possible since the adversary can choose to activate any subset of links among the set . With this choice, the degree of is in phase hence we can bound the probability that a successful transmission occurs in phase .
Having chosen the distribution of the edges between the broadcasters and the receiver, we can now bound the probability of a successful transmission in any step in the considered phase. Let the event of a successful transmission in a given step be denoted by . For this event to happen, exactly one of the nodes that are connected to the receiver needs to transmit. We have:
Take any step and the corresponding probability used by the algorithm. We know that is chosen so that or . We consider these cases separately:
 Case 1:

 Case 2:

We know that and thus hence since we get . Moreover since we have (because for all ). This gives, in this case: