
Packet Latency of Deterministic Broadcasting
in Adversarial Multiple Access Channels*

*The results of this paper appeared in a preliminary form in [7] and [8].

Lakshmi Anantharamu†   Bogdan S. Chlebus†   Dariusz R. Kowalski‡   Mariusz A. Rokicki‡

†Department of Computer Science and Engineering, University of Colorado Denver, Denver, Colorado 80217, USA. The work of these authors was supported by the National Science Foundation under Grant No. 1016847.
‡Department of Computer Science, University of Liverpool, Liverpool L69 3BX, United Kingdom.
Abstract

We study broadcasting in multiple access channels with dynamic packet arrivals and jamming. Communication environments are represented by adversarial models that specify constraints on packet arrivals and jamming. We consider deterministic distributed broadcast algorithms and give upper bounds on the worst-case packet latency and the number of queued packets in relation to the parameters defining adversaries. Packet arrivals are determined by a rate of injections and a number of packets that can be generated in one round. Jamming is constrained by a rate with which an adversary can jam rounds and by a number of consecutive rounds that can be jammed.

Keywords: multiple access channel, adversarial queuing, jamming, distributed algorithm, deterministic algorithm, packet latency, queue size.

1 Introduction

We study broadcasting in multiple access channels by deterministic distributed algorithms. The communication medium may experience a mild form of jamming. We evaluate the performance of communication algorithms by upper bounds on their packet latency (delay) and the number of packets queued at stations (queue size). The performance metrics are understood in their worst-case sense and are considered in adversarial frameworks of packet injection and jamming. There are no statistical components in either the algorithms or the traffic generation.

The traditional approach to distributed broadcasting in multiple access channels uses randomization to arbitrate access to a shared medium. Typical examples of randomized broadcast algorithms include backoff ones, like the binary exponential backoff employed in the Ethernet. The enduring effectiveness of the Ethernet, as a real-world implementation of local area networks [38], is compelling evidence that randomized broadcasting can perform well in practice.

Using randomization in algorithms, intended as practical solutions to broadcasting, may appear to be inevitable in order to cope with bursty traffic. Among the main challenges that broadcasting on a shared channel faces is resolving conflicts for access to the communication medium. In real-world applications, most stations stay idle for most of the time, so that periods of inactivity are interspersed with unexpected bursts of activity by groups of stations configured unpredictably. Randomness appears to be a most natural way to break symmetry in attempts to access a channel. Since traffic demands are typically assumed to be unpredictable, the methodological underpinnings of key performance metrics of broadcasting, like queue sizes and packet delay, have traditionally been studied with stochastic assumptions in mind. In a matching manner, simulations have been geared towards models of packet generation defined by stochastic constraints. All these factors have historically contributed to a popular perception that randomness and stochastic assumptions are inevitable aspects of broadcasting in multiple access channels.

This paper addresses the efficiency of deterministic broadcast algorithms for dynamic traffic demands. Performance of algorithms is measured by packet delay and the number of queued packets pending transmission, while packet injection is constrained by formal adversarial models. Studying algorithmic paradigms useful for deterministic distributed broadcasting, under dynamic packet injection, is a topic interesting in its own right. We do this in a model of continuous packet injection without any stochastic assumptions about how packets are generated and where and when they are injected. This model, known as adversarial queuing, is an alternative to representing packet generation by stochastic constraints. Adversarial queuing has proved useful in providing frameworks to study dynamic communication while imposing only minimal constraints on traffic generation. It is an important benefit of adversarial queuing that it provides a methodology to assess the performance of deterministic algorithms by worst-case bounds, with respect to suitable metrics.

Jamming in wireless networks can be understood as either malicious disruption of the communication medium or inadvertent effects occurring on the physical layer. The former is an effect of foreign messages sent deliberately to hinder the flow of information by creating interference of legitimate signals with such external disrupting transmissions. An example of jamming in this sense is a degradation-of-service attack that produces dummy packets that interfere with legitimate packets. The latter interpretation of jamming is about the physical layer being affected by external factors, such as the supply of energy, weather, or crowded bandwidth. A closely related motivation is to interpret jamming as inadvertent collision of signals with concurrent foreign communication. This occurs when groups of stations pursue their independent communication tasks, so that for each group an interference caused by foreign transmissions is logically equivalent to jamming. To keep our picture simple, jamming is understood in this paper as purely logical, in that it is a symptom we have to take into account without deliberating on its causes. We make no assumptions, including any references to the physical layer, to justify why a transmitted message is not heard on the channel in a round in which it should be heard because only one station transmits. A jammed round has the same effect as one with multiple simultaneous transmissions of stations attached to a channel, in that stations cannot distinguish a jammed round from a round with multiple transmissions.

A summary of the methodology and results.

We investigate deterministic broadcast algorithms for dynamic packet injection. No randomization is used in the algorithms, nor is there any stochastic component affecting packet injection in the considered communication environments. The studied communication algorithms are distributed in that they are executed with no centralized control. The two performance metrics are the queue size (the maximum total number of packets simultaneously stored in the queues at stations while pending transmission) and packet latency (the maximum number of rounds spent by a packet in a queue from injection until a successful transmission).

A set of stations attached to a channel is fixed and their number is known, in that it can be used in codes of algorithms. Stations are equipped with private queues, in which they can store packets until they are transmitted successfully.

We use the slotted model of synchrony, in which an execution of a communication algorithm is partitioned into rounds, so that a transmission of a message with one packet takes one round. All the stations attached to the channel are activated in the same initial round, each with an empty queue.

It is the assumed synchrony that allows us to define the rate of injecting packets and the rate of jamming rounds. A round comprises a short atomic duration of time during which some events happening in the system can be considered as occurring simultaneously. For example, the burstiness of traffic is understood as the maximum number of packets that can be injected simultaneously, meaning in one round. The related concept of burstiness of jamming is understood as the maximum number of contiguous rounds that are unavailable for successful transmissions because of continuous jamming. Similarly, it takes a full round to transmit a message.

We consider broadcasting against adversaries that control both injections of packets into stations and jamming of the communication medium. Packet injection is limited only by the rate of injecting new packets and the number of packets that can be injected simultaneously. Jamming is limited by the rate of jamming different rounds and by how many consecutive rounds can be jammed.

All the considered algorithms have bounded packet latency for each fixed injection rate $\rho$ and jamming rate $\lambda$ subject only to the necessary constraint that $\rho+\lambda<1$. The obtained upper bounds on packet latency and queue sizes of broadcast algorithms are understood in the worst-case sense. Here "queue size" means the maximum number of packets stored in the queues at the same time, as a function of the adversary's type, for a given number $n$ of stations, and packet latency is the maximum possible number of rounds spent by a packet in a queue waiting to be heard on the channel.

The upper bounds on queue size and packet latency of the algorithms studied in this paper are summarized in Tables 1 and 2. All the algorithms we consider are reviewed in detail in Section 3.

  Algorithm Queues Latency Injection Proved
  OF-RRW Thm 1 Sec 4
  RRW [25] Thm 2 Sec 4
  OF-SRR Thm 3 Sec 4
  OF-SRR Thm 3 Sec 4
  SRR [25] Thm 4 Sec 4
  SRR [25] Thm 4 Sec 4
  MBTF [24] Thm 5 Sec 5
Table 1: Upper bounds on queue size and packet latency for a channel without jamming with $n$ stations, executed against an adversary of injection rate $\rho$ and burstiness $b$. Algorithm MBTF is adaptive, and the remaining four algorithms are non-adaptive.

We consider non-adaptive algorithms for channels without jamming when either collision detection is not available (algorithms OF-RRW and RRW) or when it is available (algorithms OF-SRR and SRR). These algorithms have the property that queue sizes grow unbounded as the injection rate $\rho$ approaches $1$, for a fixed number of stations $n$. We conjecture that this is a general phenomenon.

Conjecture 1

Each non-adaptive algorithm for channels without jamming that provides bounded queues, for injection rates $\rho<1$, has its queue bound grow arbitrarily large as a function of the injection rate $\rho$, as $\rho$ approaches $1$, for all sufficiently large and fixed numbers of stations $n$.

Adaptive algorithm MBTF for channels without jamming has bounded queues even when $\rho=1$, but its packet latency grows unbounded when $\rho$ approaches $1$. We conjecture that this reflects a general property of broadcast algorithms.

Conjecture 2

Each broadcast algorithm for channels without jamming that provides bounded packet latency, for injection rates $\rho<1$, has its packet-latency bound grow arbitrarily large as a function of the injection rate $\rho$, as $\rho$ approaches $1$, for all sufficiently large and fixed numbers of stations $n$.

We show that a non-adaptive algorithm for channels with jamming achieves bounded packet latency for $\rho+\lambda<1$ when an upper bound on the jamming burstiness is a part of its code. We hypothesize that this is unavoidable and reflects the utmost power of non-adaptive algorithms.

Conjecture 3

Each non-adaptive broadcast algorithm for channels with jamming can be made unstable by some adversaries with injection rate $\rho$ and jamming rate $\lambda$ satisfying $\rho+\lambda<1$, for all sufficiently large and fixed numbers of stations $n$.

Adaptive algorithm C-MBTF for channels with jamming has bounded queues even when $\rho+\lambda=1$, but its packet latency grows unbounded when $\rho+\lambda$ approaches $1$, for a fixed number of stations $n$; see the discussion following the proof of Theorem 10 in Section 7 for details. We conjecture that this is a general phenomenon.

Conjecture 4

Each broadcast algorithm for channels with jamming that provides bounded packet latency, for injection rate $\rho$ and jamming rate $\lambda$ such that $\rho+\lambda<1$, has its packet-latency bound grow arbitrarily large as a function of the injection rate $\rho$ and the jamming rate $\lambda$, as $\rho+\lambda$ approaches $1$, for all sufficiently large and fixed numbers of stations $n$.

  Algorithm Queues Latency Proved
  OF-JRRW($k$) Thm 6 Sec 6
  JRRW($k$) Thm 7 Sec 6
  OFC-RRW Thm 8 Sec 7
  C-RRW Thm 9 Sec 7
  C-MBTF Thm 10 Sec 7
Table 2: Upper bounds on queue size and packet latency for a channel with jamming with $n$ stations, when the injection rate $\rho$ and the jamming rate $\lambda$ satisfy $\rho+\lambda<1$, for burstiness $b$. The jamming burstiness is assumed to be at most $k$ for algorithms OF-JRRW($k$) and JRRW($k$), where $k$ is a part of their codes. Algorithms OF-JRRW($k$) and JRRW($k$) are non-adaptive, and the remaining three algorithms are adaptive.

Previous work on adversarial multiple access channels.

Now we review previous work on broadcasting in multiple-access channels in the framework of adversarial queuing. The first such work, by Bender et al. [15], concerned the throughput of randomized backoff for multiple-access channels, considered in the queue-free model. Deterministic distributed broadcast algorithms for multiple-access channels, in the model of stations with queues, were first considered by Chlebus et al. [25]; that paper specified the classes of acknowledgment based and full sensing deterministic distributed algorithms, along the lines of the respective randomized protocols [22].

The maximum throughput, defined to mean the maximum rate for which stability is achievable, was studied by Chlebus et al. [24]. Their model was of a fixed set of stations with queues, whose number $n$ is known. They developed a stable deterministic distributed broadcast algorithm, with bounded queue sizes, against leaky-bucket adversaries of injection rate $1$. That work demonstrated that throughput $1$ is achievable in the model of a fixed set of stations whose number is known. The paper [24] also showed some restrictions on handling traffic with throughput $1$; in particular, communication algorithms have to be adaptive (they may use control bits in messages), achieving bounded packet latency is impossible, and queue sizes are subject to an inherent lower bound.

Anantharamu et al. [9] extended the work on throughput $1$ in adversarial settings by studying the impact of limiting window-type adversaries through assigning individual rates of injecting data to each station. That paper [9] gave a non-adaptive algorithm for channels without collision detection whose queue size and packet latency are bounded in terms of the number of stations and the window size; this is in contrast with general adversaries, against whom bounded packet latency for injection rate $1$ is impossible to achieve.

Bieńkowski et al. [19] studied online broadcasting against adversaries that are unbounded in the sense that they can inject packets into arbitrary stations with no constraints on their numbers nor on the rates of injection. Paper [19] gave a deterministic algorithm that is optimal with respect to competitive performance, when measuring either the total number of packets in the system or the maximum queue size. This algorithm was also shown in [19] to be stochastically optimal for a range of expected injection rates.

Anantharamu and Chlebus [6] considered an ad-hoc multiple access channel, which has an unbounded supply of anonymous stations attached to it, but only the stations activated by injected packets participate in broadcasting. They studied deterministic distributed broadcast algorithms against adversaries that are restricted to activating at most one station per round. The algorithms given in [6] provide bounded packet latency for injection rates up to specific constant thresholds, which depend on additional features of the algorithms. It was also shown in [6] that injection rates sufficiently close to $1$ cannot be handled with bounded packet latency on such ad-hoc channels by deterministic algorithms.

Related work.

A natural basic communication problem in multiple access channels concerns collision resolution: there is a group of active stations, a subset of all the stations connected to the channel, and we want either some station in the group or all of them to transmit successfully at least once. For recent work on this topic, see the papers by Kowalski [36], Fernandez Anta et al. [27], and De Marco and Kowalski [26].

Most related work on broadcasting in multiple access channels has been carried out with randomization playing an integral part; see the survey [22]. Randomness can affect the behavior of protocols either directly, by being a part of the mechanism of a communication algorithm, or indirectly, when packets are generated subject to stochastic constraints. With randomness affecting communication in either way, the communication environment can be represented as a Markov chain, with stability ultimately understood as ergodicity. Stability of randomized communication algorithms can be considered in the queue-free model, in which a packet gets associated with a new station at the time of injection, and the station dies after the packet has been heard on the channel. Full sensing protocols were shown to fare well in this model; some protocols stable for injection rates slightly below $1/2$ were developed, see [22]. The model of a fixed set of stations with private queues was considered to be less radical, as queues appear to have a stabilizing effect. Håstad et al. [34], Al-Ammal et al. [2] and Goldberg et al. [32] studied bounds on the rates for which the binary exponential backoff is stable, as functions of the number of stations. For recent work related to exponential backoff, see the papers by Bender et al. [16] and Bender et al. [17], who proposed modifications to exponential backoffs with the goal of improving some of their characteristics. Raghavan and Upfal [40] and Goldberg et al. [33] proposed randomized broadcast algorithms based on different paradigms than those used in backoff algorithms.

The methodology of adversarial queuing makes it possible to capture the notion of stability of communication algorithms without resorting to randomness, and it can serve as a framework for worst-case bounds on the performance of deterministic algorithms. Borodin et al. [20] proposed this approach in the context of routing algorithms in store-and-forward networks. This was followed by Andrews et al. [10], who emphasized the notion of universality in adversarial settings.

The adversarial approach to modeling communication proved to be inspirational and versatile. Álvarez et al. [4] applied adversarial models to capture phenomena related to routing of packets with varying priorities and failures in networks. Álvarez et al. [5] addressed the impact of link failures on the stability of communication algorithms by way of modeling them in adversarial terms. Andrews and Zhang [12] considered adversarial networks in which nodes operate as switches connecting inputs with outputs, so that routed packets encounter additional congestion constraints at nodes when they compete with other packets for input and output ports and need to be queued when delayed. Andrews and Zhang [13] investigated routing and scheduling in adversarial wireless networks in which every node can transmit data to at most one neighboring node per time step and where data arrivals and transmission rates are governed by an adversary.

Worst-case packet latency of routing in store-and-forward wired networks has been studied in the framework of adversarial queuing. Aiello et al. [1] demonstrated that polynomial packet latency can be achieved by a distributed algorithm even when the adversaries do not disclose the paths they assigned to packets in order to validate compliance with congestion constraints. Andrews et al. [11] studied packet latency of adversarial routing when the entire path of a packet is known at the source. Broder et al. [21] discussed conditions under which protocols effective for static routing provide bounded packet latency when applied in dynamic routing. Scheideler and Vöcking [45] investigated how to transform static store-and-forward routing algorithms, designed to handle packets injected at the same time, into efficient algorithms able to handle packets injected continuously into the network, so that packet delays in the static case are close to those occurring in the dynamic case. Rosén and Tsirkin [44] studied bounded packet delays against the ultimately powerful adversaries of rate $1$.

Jamming in multiple-access channels and wireless networks is usually understood as disruptions occurring in individual rounds that prevent successful transmissions despite the lack of collisions caused by concurrent interfering transmissions. Awerbuch et al. [14] studied jamming in multiple access channels in an adversarial setting with the goal of estimating the saturation throughput of randomized protocols. Richa et al. [43] gave a randomized medium-access algorithm against adaptive adversarial jamming of a shared medium that achieves a constant-competitive throughput. Gilbert et al. [30] studied jammed transmissions in multiple access channels with the goal of optimizing energy consumption per transmitting station. Broadcasting on multiple channels with jamming controlled by adversaries was studied by Chlebus et al. [23], Gilbert et al. [28], Gilbert et al. [29], and Meier et al. [37]. Richa et al. [42] considered broadcasting in wireless networks modeled as unit disc graphs with one communication channel, in which a constant fraction of rounds can be jammed.

Jamming in multiple access channels is a special case of faulty behavior of wireless networks. Developing efficient fault-tolerant distributed communication algorithms in such networks has recently been an area of active investigation, of which the following is a sample. Alistarh et al. [3] studied non-cryptographic authenticated broadcast in radio networks when nodes are corrupted and behave in an unpredictable manner. Bertier et al. [18] designed message-efficient broadcast tolerating Byzantine faults in multi-hop wireless sensor networks. Gilbert and Zheng [31] proposed a protocol for downloading data from a single base station that is resilient to a Sybil attack, during which multiple fake identities are simulated. King et al. [35] studied communication channels that can be blocked by an adaptive adversary and proposed cost-efficient Las Vegas algorithms to send a message. Ogierman et al. [39] considered wireless media under the SINR model subject to adversarial jamming of nodes and gave a randomized distributed medium-access algorithm that achieves a constant competitive throughput. Richa et al. [41] studied multiple co-existing networks sharing a communication medium subject to adversarial jamming and gave a randomized medium-access algorithm to effectively use the non-jammed rounds. Tan et al. [46] developed randomized solutions for multiple communication primitives in multi-hop multi-channel networks subject to adversarial disruptions of the shared channels. Young and Boutaba [47] surveyed recent work on models and algorithms coping with faults in wireless communication, which includes adversarial jamming.

Structure of the document.

We review the model of multiple-access channels and summarize the classes of adversaries and deterministic broadcast algorithms in Section 2. Section 3 contains a description of all the deterministic broadcast algorithms we consider, both old and new. The analysis of performance of broadcast algorithms is given in subsequent sections. These are Section 4 about non-adaptive algorithms for channels without jamming, Section 5 about adaptive algorithms for channels without jamming, Section 6 about non-adaptive algorithms for channels with jamming, and Section 7 about adaptive algorithms for channels with jamming. The final Section 8 includes a concluding discussion.

2 Preliminaries

In this section, we review the model of multiple access channels and adversarial packet injection. The considered communication environments make it possible to develop efficient deterministic distributed broadcast algorithms.

A communication medium is called a channel. There are a number of communicating units attached to such a channel, which are called stations.

We use the slotted model of synchrony, in which time is partitioned into rounds. The stations have access to a global clock measuring rounds, starting from round zero. An execution of a communication algorithm starts with all the stations activated in this round zero.

The stations receive packets continuously and their goal is to have each of them eventually broadcast. Each station is equipped with a private buffer space to store packets pending transmission. Such a buffer is considered to have unbounded capacity, in that it can accommodate an arbitrary finite number of packets. The buffer memory of a station typically operates under a fixed queuing discipline and is referred to as a queue of this station.

A message transmitted by a station on the channel may include at most one packet and it may include auxiliary control bits to coordinate actions of the stations. The size of messages and the duration of rounds are calibrated such that a transmission of a message takes one round; this means that a station can transmit at most one message in a round. Two messages transmitted by different stations in the same round overlap in time and are said to be transmitted simultaneously.

A successful transmission of a message on the channel means that the message gets broadcast to all the stations. If a message is delivered to a station then we say that the message is heard by the station. If a message is heard by one station then it is also heard by all the stations. A round when no message is heard on the channel is called void.

A round may be jammed, which disrupts the communication functionality of the channel in this round; a round that is not jammed is called clear. A jammed round is always void but a clear round merely makes it possible to hear a message on the channel.

A communication environment we consider operates as a broadcast network consisting of "active" stations, which execute communication algorithms in a distributed manner, and a "passive" channel available to each station. The "external world" uses such a communication environment by providing packets, which are injected individually into the stations, and it also determines which rounds are jammed.

Multiple access channels.

Broadcast networks we consider allow for jamming in general, but we also consider the case when no round can be jammed. A broadcast network is said to be a multiple-access channel without jamming when no round is ever jammed and a message transmitted by a station is heard if and only if it is the only message transmitted in the round. A broadcast network is said to be a multiple-access channel with jamming when some rounds may be jammed and a message transmitted by a station is heard if and only if it is the only message transmitted in the round and the round is not jammed.

In every round, all the stations receive feedback from the channel. The feedback in a round is the same for each station; in particular, we do not differentiate between stations that transmit in a round and those that do not. If a message is heard on the channel, then the message itself is such a feedback. A round with no transmissions is said to be silent; in such a round, all the stations receive from the channel the feedback we call silence. Multiple transmissions in the same round result in conflict for access to the channel, which is called a collision. If a round is jammed then all the stations receive in this round the same feedback from the channel as in a round of collision.

Now we recapitulate all the possible reasons a round is void, that is, no message is heard. One possibility is that the round is silent, in that there is no transmission. The round may be jammed, in which case it does not matter whether there is any transmission in the round or not. Finally, there may be a collision caused by multiple simultaneous transmissions. Stations cannot distinguish a round of collision, caused by multiple simultaneous transmissions, from a round in which the channel is jammed, in that the channel is sensed in exactly the same manner in both cases.

We say that collision detection is available when stations can distinguish between silence and collision/jamming in a round by the feedback they receive from the channel in the round. If such a discerning mechanism is not available then the channel is without collision detection. Next we specify the four possible kinds of channels, determined by jamming or lack thereof, and, independently, by collision detection or lack thereof, which determine how stations perceive rounds by the obtained feedback from the channel.

A channel without jamming and without collision detection:

a void round is caused by either silence or collision; a specific cause of voidness of a round is not perceivable.

A channel without jamming and with collision detection:

a void round is caused by either silence or collision; a specific cause of voidness of a round is identifiable.

A channel with jamming and without collision detection:

a void round is caused by either silence or collision or jamming; a specific cause of voidness of a round is not perceivable, nor can any cause be excluded.

A channel with jamming and with collision detection:

a void round is caused by either silence or collision or jamming; silence can be perceived distinctly from the other two possible causes of voidness, but collision and jamming cannot be distinguished from each other.

A communication algorithm for channels without jamming can be executed on channels with jamming without any changes in its code. This is because a channel with jamming does not produce any special "interference" signal indicating that a round is jammed; stations obtain either silence or a collision as feedback from the channel when a round is void.

An adversarial model of packet injection without jamming.

We use a leaky-bucket adversarial model of packet injection, when a channel cannot be jammed, similar to the one considered in [10, 24]. An adversary is determined by its maximum rate of injecting packets and the burstiness of traffic it can generate. Let a real number $\rho$ and an integer $b$ satisfy the inequalities $0<\rho\le 1$ and $b\ge 1$; the leaky-bucket adversary of type $(\rho,b)$ may inject at most $\rho t+b$ packets into an arbitrary set of stations in each contiguous segment of $t$ rounds. An adversary of type $(\rho,b)$ is said to have injection rate $\rho$ and burstiness component $b$. The burstiness of an adversary means the maximum number of packets that can be injected in one round. An adversary of type $(\rho,b)$ has burstiness $\lfloor \rho+b\rfloor$, so if $\rho<1$ then $b$ is the adversary's burstiness.
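
To make the constraint concrete, the following sketch checks whether an injection pattern satisfies the leaky-bucket condition of type $(\rho, b)$. This is our own illustration, not code from the paper; the function name and the list representation of an injection pattern are assumptions made for this example only.

```python
def is_admissible(injections, rho, b):
    """Check the leaky-bucket condition of type (rho, b): every contiguous
    window of t rounds carries at most rho * t + b injected packets.
    `injections[r]` is the number of packets injected in round r."""
    for start in range(len(injections)):
        total = 0
        for t in range(1, len(injections) - start + 1):
            total += injections[start + t - 1]
            if total > rho * t + b:
                return False
    return True

# An adversary of type (0.5, 2) may inject a burst of 2 packets in one round,
# but cannot sustain a rate above 1/2 over longer stretches of rounds.
print(is_admissible([2, 0, 1, 0, 1, 0], rho=0.5, b=2))  # True
print(is_admissible([2, 2, 2, 2, 2, 2], rho=0.5, b=2))  # False
```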

In some broadcast algorithms, in which the place and time of injection of packets determine the order of their future transmissions, a prescribed quantity of $t$ rounds allows the adversary to inject $\rho t$ packets, which then take $\rho t$ rounds to be transmitted, thus delaying transmissions of older packets. If this pattern can be iterated, then this creates a combined delay of the following duration:

$t + \rho t + \rho^2 t + \cdots = \frac{t}{1-\rho}\ .$

We say that the quantity $\frac{t}{1-\rho}$ is obtained from $t$ by stretching-by-injecting.
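
A quick numeric sanity check of the stretching-by-injecting quantity, as a sketch (the values of $\rho$ and $t$ are arbitrary choices made for illustration): summing the iterated delays $t, \rho t, \rho^2 t, \ldots$ indeed approaches $t/(1-\rho)$.

```python
rho, t = 0.75, 100.0

# Sum the iterated delays t + rho*t + rho^2*t + ... until the terms are negligible.
total, term = 0.0, t
while term > 1e-12:
    total += term
    term *= rho

print(round(total, 6))           # ~400.0
print(round(t / (1 - rho), 6))   # closed form t / (1 - rho) = 400.0
```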

An adversarial model of packet injection and jamming.

For channels with jamming, we consider adversaries that control both packet injections and jamming. Given real numbers $\rho$ and $\lambda$ in the interval $[0,1]$ and an integer $b\ge 1$, the leaky-bucket jamming adversary of type $(\rho,\lambda,b)$ can inject at most $\rho t+b$ packets and, independently, it can jam at most $\lambda t+b$ rounds, in each contiguous segment of $t$ rounds. For such an adversary, we refer to $\rho$ as the injection rate, to $\lambda$ as the jamming rate, and to $b$ as the burstiness component. We can observe that a non-jamming adversary of type $(\rho,b)$ is formally the same as a jamming adversary of type $(\rho,0,b)$. The number of packets that a jamming adversary can inject in one round is called its injection burstiness, similarly as for a non-jamming leaky-bucket adversary; this parameter equals $\lfloor\rho+b\rfloor$. If $\lambda=1$ then every round could be jammed, making the channel dysfunctional. Therefore, we always assume that a jamming rate $\lambda$ satisfies $\lambda<1$.
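
The admissibility check from the previous sketch extends to jamming adversaries in the obvious way; again, this is our own illustration, with a list of booleans marking which rounds are jammed.

```python
def is_admissible_with_jamming(injections, jammed, rho, lam, b):
    """Check the leaky-bucket jamming condition of type (rho, lam, b): in every
    contiguous window of t rounds, at most rho * t + b packets are injected
    and, independently, at most lam * t + b rounds are jammed."""
    n_rounds = len(injections)
    for start in range(n_rounds):
        packets, jams = 0, 0
        for t in range(1, n_rounds - start + 1):
            r = start + t - 1
            packets += injections[r]
            jams += 1 if jammed[r] else 0
            if packets > rho * t + b or jams > lam * t + b:
                return False
    return True
```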

Suppose we are concerned about a contiguous segment of $t$ non-jammed rounds, possibly interspersed with additional jammed rounds. If the adversary wants to stretch $t$ as much as possible by maximizing the number $j$ of inserted jammed rounds, then the inequality $j\le \lambda(t+j)+b$ has to hold. If this is applied repeatedly and the adversary jams at full power, then the burstiness component $b$ can be applied only once. Disregarding the burstiness component $b$ in the inequality is the same as setting $b=0$, so we have the inequality $j\le \lambda(t+j)$, which gives $j\le \frac{\lambda t}{1-\lambda}$. We obtain the following estimate:

$t + j \le t + \frac{\lambda t}{1-\lambda} = \frac{t}{1-\lambda}\ .$

We say that the quantity $\frac{t}{1-\lambda}$ is obtained from $t$ by stretching-by-jamming.

If the adversary injects with injection rate $\rho$ during these $t$ non-jammed rounds extended by the inserted jammed rounds, then the number of injected packets in the whole interval that includes jammed rounds is at most the quantity

$\rho\cdot\frac{t}{1-\lambda}\ ,$

which is the same as if $\rho$ got expanded to a virtual injection rate $\frac{\rho}{1-\lambda}$ by an effect similar to stretching-by-jamming. The quantity $\frac{\rho}{1-\lambda}$ can indeed be interpreted as an injection rate because $\frac{\rho}{1-\lambda}<1$, as $\rho+\lambda<1$. If the adversary applies this virtual injection rate $\frac{\rho}{1-\lambda}$, already obtained by stretching-by-jamming, to create a stretching-by-injecting effect, then an interval of $t$ clear rounds gets extended to the following number of rounds:

$\frac{t}{1-\frac{\rho}{1-\lambda}} = \frac{t(1-\lambda)}{1-\lambda-\rho}\ .$

We say that the quantity $\frac{t(1-\lambda)}{1-\lambda-\rho}$ is obtained from $t$ by combined stretching.

A maximum contiguous number of rounds that an adversary can jam is called its jamming burstiness. We can find the jamming burstiness of a leaky-bucket jamming adversary of type $(\rho,\lambda,b)$ as follows. Let $j$ be a number of rounds that make a contiguous interval and are all jammed. The inequality $j\le \lambda j+b$ needs to hold, as otherwise $j$ rounds within an interval of $j$ rounds could not all be jammed. We conclude by algebra that the adversary can jam at most $\frac{b}{1-\lambda}$ consecutive rounds, which is an instance of stretching-by-jamming.
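
The stretching quantities and the jamming-burstiness bound, as reconstructed above, can be collected in a few helper functions; this is a sketch of ours, useful only for numerical experimentation with the formulas.

```python
def stretch_by_injecting(t, rho):
    return t / (1.0 - rho)                  # t grows to t / (1 - rho)

def stretch_by_jamming(t, lam):
    return t / (1.0 - lam)                  # t grows to t / (1 - lambda)

def combined_stretch(t, rho, lam):
    # Stretching-by-injecting applied with the virtual rate rho / (1 - lambda).
    return stretch_by_injecting(t, rho / (1.0 - lam))

def max_consecutive_jammed(b, lam):
    return b / (1.0 - lam)                  # bound on the jamming burstiness

# Example with rho = 0.5, lambda = 0.25, b = 4:
print(combined_stretch(100.0, 0.5, 0.25))   # 100 * (1 - 0.25) / (1 - 0.25 - 0.5) = 300.0
print(max_consecutive_jammed(4, 0.25))      # 4 / 0.75 ~ 5.33, so at most 5 whole rounds
```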

Deterministic distributed broadcast algorithms.

Broadcast algorithms control timings of transmissions by individual stations in a deterministic manner, starting from round zero when all the stations are activated simultaneously. All the algorithms we consider are work-preserving in that if a station is scheduled to transmit and it has pending packets then a transmitted message includes a packet.

A state of a station is determined by the values of the private variables occurring in the code of an algorithm and by the number of outstanding packets in its queue that still need to be transmitted. The local queues of packets at stations operate under the first-in-first-out discipline, which minimizes packet latency. A station obtains a packet to broadcast by removing the first packet from the queue. If a station transmits a packet that is not heard then the station will transmit the same packet in the immediately following round in which a transmission is scheduled. A packet is never dropped by a station before it is heard on the channel.

A state transition is a change in a state of a station in one round, which depends on the state at the end of the previous round, the feedback from the channel in this round, and the packets injected in this round. A state transition of a station in a round consists of the following actions in order. If packets are injected into the station in this round then they are immediately enqueued into the local queue. If the station broadcasted successfully in the previous round, then the transmitted packet is discarded. If a new packet to transmit is needed and the local queue is nonempty then a packet is obtained by dequeuing the queue. Finally, a message for the next round is prepared, if any will be transmitted.

An event in a round comprises the following four actions by each station, in the given order: (a) a station either transmits a message or pauses, according to its state, (b) a station receives feedback from the channel, in the form of either hearing a message, a collision signal, or silence, (c) new packets are injected into a station, if any, and finally, (d) the suitable state transition occurs at the station. An execution of an algorithm is a sequence of events occurring in consecutive rounds.
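
The feedback rule can be summarized by a small helper, sketched below in our own notation (the tuple-of-strings representation is an assumption made only for this illustration): given the messages transmitted in a round, whether the round is jammed, and whether collision detection is available, it returns the common feedback that every station perceives.

```python
def channel_feedback(transmissions, jammed, collision_detection):
    """Return the feedback perceived by every station in a round.
    `transmissions` is the list of messages transmitted in this round."""
    if len(transmissions) == 1 and not jammed:
        return ("heard", transmissions[0])       # the unique transmission is heard
    if collision_detection:
        # Silence is perceived distinctly; collision and jamming are not.
        if not transmissions and not jammed:
            return ("silence", None)
        return ("collision", None)
    # Without collision detection, all void rounds look exactly the same.
    return ("void", None)
```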

We categorize broadcast algorithms according to the terminology used in [24, 25]. All the algorithms considered in this paper are full sensing, in that nontrivial state transitions can occur at a station in any round, even when the station does not have pending packets to transmit. This may be interpreted as if the attached stations “sense the channel” in all rounds. Algorithms that use control bits piggybacked on packets or can send messages comprised of only control bits, when a station does not have a packet to transmit, are called adaptive, and otherwise they are non-adaptive.

Performance of broadcast algorithms.

The basic quality of a communication algorithm in a given adversarial environment is stability, understood to mean that the number of packets in the queues at stations stays uniformly bounded at all times. For a stable algorithm in a communication environment, an upper bound on the number of packets waiting in queues is a natural performance metric; see [24, 25].

We may observe that stability is not achievable against a jamming adversary with injection rate $\rho$ and jamming rate $\lambda$ satisfying $\rho+\lambda>1$. To see this, observe that this inequality is equivalent to $\rho>1-\lambda$, so when the adversary jams with maximum power, then the bandwidth remaining for transmissions is $1-\lambda$, while the injection rate is greater than $1-\lambda$.

A sharper performance metric is that of packet latency; it denotes an upper bound on the time spent by a packet waiting in a queue, counting from the round of injection through the round when the packet is heard on the channel. It is possible to achieve stability in the case $\rho+\lambda=1$, by adapting the approach for $\rho=1$ (and $\lambda=0$) in [24], but packet latency is then inherently unbounded.

An algorithm for an environment without jamming is universal when it is stable for any injection rate smaller than $1$. This is extended to channels with jamming by requiring stability whenever $\rho+\lambda<1$. All the algorithms we present are universal in this sense. For each algorithm discussed in this paper, we give upper bounds on packet latency as functions of the number of stations $n$ and the type of a leaky-bucket (jamming) adversary, subject only to the restriction $\rho+\lambda<1$.

Knowledge.

A property of a system is said to be known when it can be referred to explicitly in the codes of algorithms. We assume throughout that the number of stations $n$ is known to the stations. Each station has a unique integer name in the range $[1,n]$, which it knows. If a station needs to be distinguished in a communication algorithm, for example to be the first one to transmit in an execution, then by default it is the station with name $1$.

The type of an adversary is normally not assumed to be known by the algorithms in this paper. The only exception to this rule occurs for a non-adaptive algorithm given in Section 6 that has an upper bound $k$ on the jamming burstiness of an adversary as a part of its code; this algorithm attains the claimed packet latency when the adversary's jamming burstiness happens to be at most $k$.

3 A Review of Deterministic Broadcast Algorithms

We summarize the specifications of the deterministic distributed broadcast algorithms whose packet latency is analyzed in the following sections.

Three broadcast algorithms.

We start with a summary of three deterministic distributed algorithms for channels without jamming that are already known in the literature. These are the algorithms RRW, SRR and MBTF, which can be described as follows.

Algorithm Round-Robin-Withholding (RRW) is a non-adaptive algorithm for channels without collision detection. It operates in a round-robin fashion, in that the stations gain access to the channel in the cyclic order of their names. A station with the right to transmit is said to hold a conceptual token. Once a station receives the token, it withholds the channel to unload all the packets in its queue. A silent round is a signal for the next station, in the cyclic order of names, to take over the token. Algorithm RRW was introduced in [25] and shown to be universal, that is, stable for injection rates smaller than $1$.
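
The following sketch simulates RRW on a channel without jamming; it is our own rendering of the prose above, and the function name and the representation of injections as per-round lists of station names are assumptions of the sketch. It respects the ordering of events within a round: packets injected in a round become available only after that round's transmission.

```python
from collections import deque

def simulate_rrw(n, injections):
    """Simulate algorithm RRW with n stations on a channel without jamming.
    `injections[r]` lists the stations receiving one packet each in round r.
    Returns (injection_round, hearing_round) pairs for the packets heard."""
    queues = [deque() for _ in range(n)]
    token, heard = 0, []
    for rnd, arrivals in enumerate(injections):
        # The token holder transmits a pending packet; otherwise the round is
        # silent, which signals the token transfer to the next station.
        silent = not queues[token]
        if not silent:
            heard.append((queues[token].popleft(), rnd))
        # Packets injected in this round join the local queues afterwards.
        for station in arrivals:
            queues[station].append(rnd)
        if silent:
            token = (token + 1) % n
    return heard
```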

Algorithm Search-Round-Robin (SRR) is a non-adaptive algorithm for channels with collision detection. Its execution proceeds as a systematic continuous search for the next station with packets to transmit, under the cyclic ordering of stations by their names. The search is interpreted as a binary one and is implemented by using a virtual distributed stack. If a station with pending packets is identified by the search, the search is suspended while the station withholds the channel to transmit all its packets. After all the packets held by a station have been unloaded, a silent round follows, which triggers the search to be resumed. A basic step in searching is to verify whether there is a station with pending packets whose name is in a given interval of integers. Such a step is accomplished by all the stations in the interval transmitting their packets. Every station receives the same feedback from the channel, whether it transmitted or not, so all the stations know whether the interval is empty (silence), contains a single station with packets (a packet is heard), or contains multiple such stations (collision). A search for the next station is completed when a packet is heard. A silence indicates that no station in the tested interval has packets, and the interval is discarded. A collision results in the interval being partitioned into two halves of equal sizes, with one half processed immediately next while the other is pushed on the stack to wait. If a processed interval becomes empty, or it is verified by silence that there is no station with packets in it, then a new interval is obtained by popping the stack. One instance of a full sweep through all the stations is called a phase. A phase starts with the interval representing all the stations placed on the stack, and it ends when the stack becomes empty. Once a phase is completed, the next similar phase begins immediately. Algorithm SRR was introduced in [25] and shown to be universal.
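
The stack-driven search of one SRR phase can be sketched as follows; this is our own notation, the sketch works on a static snapshot of queue sizes, and it ignores packets injected while the phase is in progress.

```python
def srr_phase(queues):
    """One phase of the SRR search over stations 0..n-1 with collision
    detection; `queues[i]` is the number of pending packets at station i.
    Returns the per-round channel outcomes of the phase."""
    n, outcomes = len(queues), []
    stack = [(0, n)]                        # intervals of station names [lo, hi)
    while stack:
        lo, hi = stack.pop()
        active = [i for i in range(lo, hi) if queues[i] > 0]
        if not active:
            outcomes.append('silence')      # nobody transmits: discard the interval
        elif len(active) == 1:
            station = active[0]             # a single station is identified and
            outcomes.extend(['heard'] * queues[station])   # unloads its packets
            queues[station] = 0
            outcomes.append('silence')      # a silent round resumes the search
        else:
            outcomes.append('collision')    # split the interval into two halves:
            mid = (lo + hi) // 2
            stack.append((mid, hi))         # one half waits on the stack,
            stack.append((lo, mid))         # the other is processed next
    return outcomes
```

For instance, `srr_phase([0, 3, 0, 1])` yields a collision on the interval of all four stations, then station 1 is identified and unloads its three packets, a silent round resumes the search, and finally station 3 unloads its single packet before the silent round that ends the phase.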

Algorithm Move-Big-To-Front (MBTF) is an adaptive algorithm that can be executed on channels without collision detection. Each station maintains a dynamic list of all the stations in its private memory. Such a list is initialized in each station to have all the names of stations arranged in increasing order: $1,2,\ldots,n$. The lists are manipulated in the same way by all the stations, so they are identical copies of each other. The algorithm schedules exactly one station to transmit in a round, so that collisions never occur. This is implemented by having a conceptual token travel through the stations; initially the token is assigned to the first station in the list. A station with the token broadcasts a packet, if it has any, and otherwise the round is silent. A station considers itself big in a round when it has at least $n$ packets; such a station attaches a control bit to every packet it transmits to indicate this status. A big station is moved to the front of the list and it takes the token with it. If a station that is not big transmits in a round, or when it pauses for lack of packets while holding the token, so that the round is silent, then the token is passed in this round to the next station in the list, ordered in a cyclic fashion. Algorithm MBTF was introduced in [24] and shown to be stable for injection rate $1$.
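
One round of the list maintenance in MBTF can be sketched as follows; the names `order`, `token_pos` and `queues` are introduced only for this illustration, which is our own reading of the prose above.

```python
def mbtf_round(order, token_pos, queues, n):
    """Perform one round of MBTF.  `order` is the shared list of station names,
    `token_pos` the index of the token holder in it, and `queues[i]` the number
    of pending packets at station i.  Returns the updated order, token position,
    and the transmitting station (or None when the round is silent)."""
    station = order[token_pos]
    big = queues[station] >= n              # "big" means at least n pending packets
    transmitted = None
    if queues[station] > 0:
        queues[station] -= 1                # the token holder transmits one packet,
        transmitted = station               # with a control bit attached when big
    if big:
        # A big station is moved to the front of the list and keeps the token.
        order.insert(0, order.pop(token_pos))
        token_pos = 0
    else:
        # Otherwise the token passes to the next station on the list, cyclically.
        token_pos = (token_pos + 1) % len(order)
    return order, token_pos, transmitted
```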

The “old-go-first” approach.

We obtain new algorithms by modifying RRW and SRR so that packets are categorized into “old” and “new.” Intuitively, packets categorized as “new” become eligible for transmissions only after all the packets categorized as “old” have been heard. Formally, an execution is structured as a sequence of conceptual phases, which are contiguous segments of rounds of dynamic length, and then the notions of old versus new packets are defined with respect to them.

A phase is defined as a full cycle made by the conceptual token visiting the stations. No additional communication is needed to mark a transition to a new phase as all the stations can detect this by monitoring the position of the virtual token. A token leaves a station holding it after the station has transmitted all its old packets while new packets may remain waiting for the next token’s visit. In a given phase, packets are old when they had been injected in the previous phase, and packets injected in the current phase are considered new for the duration of the phase. If a new phase begins, the old packets have already been heard on the channel and the new ones immediately graduate to becoming old. This means that the “old-go-first” principle is implemented by having packets injected in a given phase transmitted only in the next phase. In particular, the first phase does not include any transmissions of packets, as all the packets, if any, are new.

Specifically, algorithm Old-First-Round-Robin-Withholding (OF-RRW) operates by manipulating the token similarly as algorithm RRW does, except that when a station gets access to the channel by transmitting successfully, then the station unloads all the old packets, while new packets stay in the queue when the token is passed to the next station. Algorithm Old-First-Search-Round-Robin (OF-SRR) performs search similarly as algorithm SRR does, except that searching is for old packets only while new ones are ignored for the duration of a phase. This approach is also applied to algorithm JRRW for channels with jamming, as explained next.
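
The per-station bookkeeping behind the old-go-first rule amounts to two queues that swap roles at phase boundaries, as in the following sketch (the class and method names are ours, chosen only for illustration).

```python
from collections import deque

class OldFirstQueue:
    """Old/new packet bookkeeping for the old-go-first rule at one station."""
    def __init__(self):
        self.old, self.new = deque(), deque()

    def inject(self, packet):
        self.new.append(packet)             # injected packets start out as new

    def next_old_packet(self):
        return self.old.popleft() if self.old else None

    def end_of_phase(self):
        # All old packets have been heard by now; new packets graduate to old.
        self.old, self.new = self.new, deque()
```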

The approach of modifying a token algorithm by making old packets go first makes packet latency smaller than in the original version, while the queue bounds remain the same, as reflected by the bounds summarized in Tables 1 and 2. The difference in packet latency is such that a "regular" version of an algorithm for a channel without jamming, which is either RRW or SRR, has an additional factor of $\frac{1}{1-\rho}$ present in its bound on packet latency as compared to its old-go-first version, and the bound for algorithm JRRW has an extra stretching factor present as compared to the bound on packet latency for algorithm OF-JRRW. This might be counter-intuitive, as an old-go-first version of broadcasting is a "lazy" implementation, in the sense that a possible immediate transmission of a packet is delayed for later when the packet happens to be still new. This can be explained intuitively as follows. Consider a regular version of a given broadcast algorithm, like RRW. An injected packet may be transmitted either in the current phase or in the next phase, depending on how the station that the packet is injected into is located in the cycle of stations with respect to the station holding the token at the round of injection. We may say that injecting "behind the token" results in transmitting in the next phase and injecting "ahead of the token" results in transmitting in the current phase. If the adversary consistently injects "behind the token," so that packets are transmitted as already old, then an execution is indistinguishable from that of the old-go-first version of the algorithm. There is a possibility of the stretching-by-injecting effect occurring in executions of the old-go-first version, and this is reflected in the factor of $\frac{1}{1-\rho}$ in the bound on packet latency. If the adversary exercises the option to inject "ahead of the token," for the regular version of the algorithm, then this creates an additional possibility of enforcing stretching-by-injecting, and so adds another factor of $\frac{1}{1-\rho}$.

Non-adaptive algorithms for channels with jamming.

We introduce a non-adaptive broadcast algorithm Jamming-Round-Robin-Withholding, abbreviated JRRW, for channels with jamming. The design of the algorithm is similar to that of RRW; the difference is in how the token is transferred from a station to the next one in the cyclic order among the stations. Just one void round should not trigger a transfer of the token, as is the case in RRW, because not hearing a message may be caused by jamming.

The algorithm has a parameter $k$ interpreted as an upper bound on the jamming burstiness of the adversary. This parameter is used to facilitate the transfer of control from a station to the next one by way of forwarding the token. The token is moved after precisely $k+1$ contiguous void rounds, counting from either hearing a packet or moving the token; the former indicates that the transmitting station exhausted its queue, while the latter indicates that the queue was empty. More precisely, every station maintains a private counter of void rounds. The counters show the same value across the system, as they are updated in exactly the same way, determined only by the feedback from the channel. A void round results in incrementing the counter by $1$. The token is moved to the next station when the counter reaches $k+1$. If either a packet is heard or the token is moved then the counter is zeroed.
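
The shared counter rule can be sketched as follows; this is our reading of the prose above, under the assumption that the token is transferred after $k+1$ consecutive void rounds, where $k$ upper-bounds the jamming burstiness.

```python
def jrrw_update(counter, token, feedback, k, n):
    """Update the (identical) counter and token position kept by every station.
    `feedback` is 'heard' when a packet is heard and 'void' otherwise; k is the
    assumed upper bound on jamming burstiness (an assumption of this sketch)."""
    if feedback == 'heard':
        return 0, token                     # hearing a packet zeroes the counter
    counter += 1                            # a void round increments the counter
    if counter > k:                         # k + 1 consecutive void rounds:
        return 0, (token + 1) % n           # move the token and zero the counter
    return counter, token
```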

Algorithm Old-First-Jamming-Round-Robin-Withholding, abbreviated OF-JRRW, is obtained from JRRW similarly as OF-RRW is obtained from RRW. An execution is structured as consisting of consecutive phases, and packets are categorized into old and new, with the same rule to graduate packets from new to old. If a token visits a station, then only the old packets are transmitted while the new ones will be transmitted during the next visit by the token.

Structural properties of algorithms.

We say that a communication algorithm designed for a channel without jamming is a token one if it uses a virtual token to determine a station that gains the right to transmit successfully. All the algorithms discussed in this paper could be considered as token ones. This is clearly the case for algorithms RRW, OF-RRW, JRRW, OF-JRRW, and MBTF, as their design specifies how a token is handled. Algorithms SRR and OF-SRR can also be interpreted as token ones, even though they make collisions possible. A station that transmits a packet successfully can be considered as holding the token, in that it can safely withhold the channel, and the right to transmit was acquired by virtue of being the next station with packets after the previously transmitting one, in the cyclic ordering of stations.

A token algorithm for channels without collision detection and without jamming can be modified to work in the model with jamming, but still without collision detection. This can be done in the following manner. If a station has the right to transmit a packet in the original algorithm, then the modified algorithm has the station transmit a packet as well; otherwise the station transmits a control bit. A round in which only a control bit is transmitted by a modified token algorithm is called a control round; otherwise it is a packet round. The effect of sending control bits in control rounds is that if a round is not jammed then a message is heard in this round; this message is either just a control bit or it includes a packet. This approach of replacing silent rounds by rounds with messages carrying control bits allows for jamming detection: when a void round occurs, then this round has to be jammed, as otherwise a message would be heard. Once a communication algorithm can identify jammed rounds, we may ignore their impact on the flow of control and repeat the performed actions in the next round, exactly as they were performed in the immediately preceding jammed rounds. The resulting algorithm is clearly adaptive. This method cannot be applied to algorithms relying on collision detection, like SRR and OF-SRR.

We will apply this method of modifying token algorithms to the non-adaptive algorithms RRW and OF-RRW, denoting the modified versions by C-RRW and OFC-RRW, respectively. Similarly, we modify algorithm MBTF such that a station with a token sends a control message even if the station does not have a packet; the modified algorithm is denoted by C-MBTF. The letter C is a mnemonic to indicate using control rounds for jamming detection.
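
The jamming-detection rule shared by C-RRW, OFC-RRW and C-MBTF can be sketched as a single interpretation step (the string labels are ours): since the scheduled station always transmits something, a void round must have been jammed, and the algorithm simply repeats the actions of that round.

```python
def interpret_round(feedback):
    """Interpret channel feedback for a token algorithm that uses control rounds."""
    if feedback == 'void':
        return 'jammed: repeat the same actions in the next round'
    if feedback == 'control':
        return 'advance: the scheduled station had no packet to send'
    return 'advance: a packet was heard'
```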

Algorithms with executions structured into phases, so that each station with packets has one opportunity to transmit its packets in a phase, are referred to as phase algorithms. Among the algorithms considered in this paper, all are phase ones except for MBTF and C-MBTF. The phase algorithms consist of RRW, OF-RRW, C-RRW, OFC-RRW, SRR, OF-SRR, JRRW and OF-JRRW. If the old-go-first approach is used in a phase algorithm then it is an old-go-first version of the algorithm, otherwise it is a regular version of the algorithm. In particular, RRW, C-RRW, SRR and JRRW are all regular phase algorithms, while OF-RRW, OFC-RRW, OF-SRR and OF-JRRW are all old-go-first phase algorithms.

Let us consider an execution of a token algorithm. If a packet is injected into a station whose number is smaller than that of the current token’s holder then we say that the packet is injected behind the token, and otherwise it is injected ahead of the token. If the considered token algorithm is a regular one, like RRW, then packets injected behind the token are transmitted in the next phase, and those injected ahead of the token are transmitted in the current phase.

4 Non-adaptive Algorithms without Jamming

In this section, we consider deterministic distributed non-adaptive algorithms for channels without jamming, for injection rates $\rho<1$. For each of these algorithms, we give upper bounds on the queue size and packet latency as functions of the number of stations $n$ and the type of a leaky-bucket adversary.

4.1 Channels without collision detection

We begin with algorithms OF-RRW and RRW for channels without collision detection. Each of them is a token algorithm. The token is advanced to the next station when the station currently holding it pauses, which results in a silent round.

Theorem 1

If algorithm OF-RRW is executed by $n$ stations against an adversary of type $(\rho,b)$, then the number of packets simultaneously queued in the stations is at most

(1)

and packet latency is at most

(2)

Proof: Let $T_i$ denote the duration of phase $i$, for $i\ge 1$. Let $Q_i$ denote the number of old packets in the beginning of phase $i$, for $i\ge 1$. The sequences $(T_i)$ and $(Q_i)$ satisfy the following recursive dependencies, where we disregard the effect of burstiness:

$T_i = n + Q_i$

and

$Q_{i+1} \le \rho\, T_i\ ,$

by the algorithm's design and the constraints imposed on the adversary. Iterating these recurrences produces the following bound on the duration of a phase:

$T_i \le \frac{n}{1-\rho}$    (3)

A packet waits to be transmitted through at most two consecutive phases, each taking at most $\frac{n}{1-\rho}$ rounds. The bound on a phase duration given in (3) disregards the effect of burstiness. We can account for the effect of burstiness as follows. Let the adversary inject $b$ additional packets in a round of a phase. This instantaneously increases the number of packets queued in the current phase but extends the duration of the next phase, which is the phase when these packets are transmitted as old. These transmissions in turn allow the adversary to inject $\rho b$ additional packets, which extends the duration of the following phase by $\rho b$ rounds.

We conclude with the following estimates. The maximum number of queued packets is obtained by combining at most $\rho\cdot\frac{n}{1-\rho}$ old packets with at most $\rho\cdot\frac{n}{1-\rho}$ new packets, along with at most $b$ packets injected in a burst, which together give (1) as a bound. The maximum number of rounds spent by a packet waiting to be heard on the channel is obtained by adding twice the upper bound $\frac{n}{1-\rho}$ of (3) on the duration of a phase, incremented by $b$ extra rounds in a phase immediately following one with a bursty injection, along with $\rho b$ rounds of the next phase, which together give (2).

The bounds of Theorem 1 are asymptotically tight. We give a strategy for the adversary that makes queue sizes and packet latency approach these bounds for algorithm OF-RRW. When a phase begins, the adversary injects its first packet into the station that the token visits last, to make it wait almost two phases. The adversary injects at full power, that is, a packet is injected as soon as possible subject to the restriction that the number of packets injected within the first $t$ rounds of an execution is at most $\rho t+b$. The first phase takes exactly $n$ rounds, and the adversary injects packets during this phase, but all of them will be transmitted in the next phase; so when the second phase begins, these packets are already queued. The duration of phases keeps increasing, such that when one phase takes $t$ rounds then the next one takes $n+\rho t$ rounds, starting from $t=n$, so that it gets arbitrarily close to $\frac{n}{1-\rho}$. The number of old packets is $\rho$ times the duration of a phase. Burstiness allows the adversary to add $b$ to the number of queued packets and to extend two consecutive phases by $b$ and $\rho b$ rounds, respectively.

Next we estimate the performance of algorithm RRW.

Theorem 2

If algorithm RRW is executed by $n$ stations against an adversary of type $(\rho,b)$, then the number of packets simultaneously queued in the stations is at most

(4)

and packet latency is at most

(5)

Proof: First consider the queue sizes. Packets injected behind the token are transmitted in the next phase, which is consistent with the design of OF-RRW and so with its bound. Packets injected ahead of the token are transmitted in the current phase, which slows down the phase compared to OF-RRW. If a phase is longer, then more packets can be injected in it, but each extra round is spent on a transmission, because this is the reason the phase is longer, while not every extra round has to have a new packet injected in it. This means that the upper bound (1) on the number of packets stored in the queues derived for OF-RRW also applies to RRW, so we take (4) to be equal to (1).

Next we estimate packet latency. Packets injected behind the token and ahead of the token are considered separately. If packets are injected only behind the token, then the bound (3) on the length of a phase for OF-RRW applies, in that each phase takes at most rounds. Such a phase length is determined by the packets that are already queued when the phase begins. Now, consider the effect of injections only ahead of the token while the old packets are already queued. The duration of a phase is obtained from the duration of a phase of OF-RRW slowed down as much as possible by injecting packets in front of the token. The upper bound on the duration of such a phase becomes

(6)

Packet latency is upper bounded by the duration of two consecutive phases. The lengths of two consecutive phases are at most a sum of the lengths given by (3) and (6):

because injecting only in front of the token prevents creating old packets to be transmitted in the next phase, and the following phase starts with empty queues. The second of these two phases may be additionally extended by at most , due to the stretching-by-injecting effect, which gives the final bound (5).

The bounds of Theorem 2 are asymptotically tight, which can be demonstrated by giving a specific adversary's strategy. Let the adversary first keep injecting just after the token. These packets are transmitted in the next phase, which simulates the behavior of OF-RRW. Eventually the phase lengths get arbitrarily close to . Then, at the beginning of a new phase, the adversary starts injecting just ahead of the token. The duration of this one phase gets extended by an additional factor of due to stretching-by-injecting.

The tightness of the bounds implies that the advantage of the old-go-first mechanism applied in algorithm OF-RRW, as compared to RRW, is the speedup of packet latency by the following factor

which is measured with the adversary fixed and  growing unbounded.

4.2 Channels with collision detection

We consider algorithms Old-First-Search-Round-Robin (OF-SRR) and Search-Round-Robin (SRR), both of which use collision detection. Executions are partitioned into phases. A phase denotes one full sweep of search through all the names of stations.

We begin with a technical estimate that will be used in proving bounds on packet latency. Let denote .

Lemma 1

If there are already packets in the system when a phase of algorithm OF-SRR begins, then the phase takes at most rounds.

Proof: We argue that there are at most void rounds between any two consecutive packets heard on the channel. This is for two reasons. First, when a station finishes its transmissions, one silent round either triggers the next search or completes the phase. Second, when a new search to identify a station with a packet begins, it takes at most collisions to identify a single station with pending packets. There are also rounds spent to hear the packets.

Next we give the following alternative estimate. A phase can be represented by a binary search tree in which each interval on a stack corresponds to a node. In particular, a station with pending packets is in an interval that is a leaf, and an interval that creates a collision corresponds to an internal node. Observe that we may associate one void round with each node on such a tree. The association depends on the kind of node. First, if a node represents a station with packets, which is a leaf, then there is a silent round following all the transmissions by the station, which can be associated with the node. Second, if this is an internal node, then it is associated with a collision. It follows that the total number of nodes in the tree and the number of void rounds in a phase are equal. There are at most nodes in the tree, because it has at most leaves. The void rounds in the phase are added to the rounds used to hear the packets.
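
A minimal sketch of one OF-SRR phase, written around the stack of intervals just described, may help fix the accounting; the data layout, the function name, and the per-round bookkeeping are our assumptions rather than the paper's pseudocode, and injections during the phase are not simulated.

    def of_srr_phase(old_packets):
        # One phase of OF-SRR with collision detection, viewed as the
        # stack-of-intervals binary search described above.  `old_packets`
        # maps a station index to its number of old packets at the start of
        # the phase.  Returns (rounds, packets_heard).
        n = len(old_packets)
        old = list(old_packets)
        stack = [(0, n)]                  # intervals of station names
        rounds = heard = 0
        while stack:
            lo, hi = stack.pop()
            active = [i for i in range(lo, hi) if old[i] > 0]
            rounds += 1                   # one probe round for this interval
            if not active:
                continue                  # silent round: interval discarded
            if len(active) == 1:
                i = active[0]             # a single station is heard in the
                heard += old[i]           # probe round, sends its remaining
                rounds += old[i]          # old[i] - 1 packets, and closes
                old[i] = 0                # with one silent round
                continue
            mid = (lo + hi) // 2          # collision: split the interval
            stack.append((mid, hi))       # and push both halves
            stack.append((lo, mid))
        return rounds, heard

For instance, of_srr_phase([0, 2, 0, 1]) reports 6 rounds: 3 rounds in which packets are heard and 3 void rounds, matching the node-per-void-round accounting above.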

Now we give the performance bounds for the algorithm OF-SRR.

Theorem 3

If algorithm OF-SRR is executed by stations against an adversary of type then the number of packets simultaneously queued in the stations is at most

(7)

and packet latency is at most

(8)

If then the number of packets simultaneously queued in the stations is at most and packet latency is at most .

Proof: Let denote the duration of phase , where . Let denote the number of old packets at the beginning of phase , where . Let be an upper bound on the number of queued old packets and an upper bound on the duration of a phase.

First, we consider the case of . The inequality holds, including the effect of burstiness, so that . Then again . The pattern repeats, so the invariants and are maintained. This allows us to set and . The queue size is at most the number of old and new packets together, which is , and packet latency is at most twice the duration of a phase , which is at most .

Next, we consider the general case. The sequences and satisfy the following recursive dependencies, by Lemma 1, where we disregard the effect of burstiness:

and

Iterating these recurrences produces the following bound on the duration of a phase:

(9)

A packet spends at most two consecutive phases waiting to be heard, each phase taking at most  rounds. The bound for given in (9) disregards the effect of burstiness, which can be accounted for as follows. If the adversary injects packets in one round, then this increases the number of packets queued in the current phase. This injection extends the duration of the next phase rather than the current one, because this will be the phase when these packets are transmitted as old. These extra transmissions make it possible for the adversary to inject packets, which extends the duration of the next phase by rounds.

Here are the concluding estimates. The maximum number of queued packets is at most old packets added to at most new packets, and at most packets injected in a burst, which gives (7). The maximum number of rounds spent by a packet waiting to be heard on the channel is twice the upper bound  on a duration of a phase (9), incremented by extra rounds in a phase of a bursty injection along with rounds of the next phase, which gives (8).

The bounds of Theorem 3 are asymptotically tight, which can be shown as follows. There are two bounds, on queues and latency, and tightness of a bound occurs when the adversary's type satisfies additional conditions. First, consider the case of small , say, . The queue-size bound is tight, as the bound is proportional to the burstiness component. Let the adversary inject packets in pairs into two adjacent stations, a packet per station, such that they are at some point together in an interval on the stack that is of constant size. There are such pairs, and they are injected into stations that are about apart. For a burstiness component  such that , it takes to transmit packets injected simultaneously, so the phase duration is also tight. Next, consider the case of large injection rates , in particular, when . Let the adversary keep injecting into pairs of stations, a packet per station, that belong together to intervals of constant length that are on the stack at some point in time. The time spent waiting to hear a new packet is initially, while the adversary injects at a rate larger than . Eventually, the rate of hearing consecutive packets becomes , but at that point the number of packets queued becomes . The adversary continues injecting at full power to extend a phase's length close to , by the stretching-by-injecting effect. The adversary may add packets in a burst and next extend the two following phases by about rounds.

Next, we consider algorithm SRR. We begin with a preliminary fact.

Lemma 2

Let us consider the beginning of a phase of algorithm SRR. If the number of packets that are either already queued or are injected during the phase into stations that belong to some intervals on the stack is , then the phase takes at most rounds.

Proof: The proof is similar to that of Lemma 1, with a difference regarding which packets get transmitted in the current phase. While in algorithm OF-SRR these are the packets already queued when the phase starts, algorithm SRR has all the available packets transmitted, including those already present when the phase begins but also newly injected ones. Each station that holds packets competes for access to the channel in a phase, unless its name is no longer in an interval on the stack. The round of the first transmission by such a station occurs when the interval including the station's name is removed from the top of the stack and the station is the only one in the interval that holds packets pending transmission.
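
Relative to the sketch of an OF-SRR phase given after Lemma 1, the difference described here only concerns which packets an identified station may drain: the snapshot of old packets for OF-SRR versus the live queue for SRR. A minimal illustration of that eligibility rule follows (the names are assumed for illustration).

    def packets_to_transmit(live_queue_size, old_snapshot, old_go_first):
        # How many packets a station identified by the search transmits in
        # the current phase: under the old-go-first rule (OF-SRR) only the
        # packets recorded when the phase began; under SRR everything
        # currently pending, including packets injected after the phase
        # began into stations still covered by intervals on the stack.
        return old_snapshot if old_go_first else live_queue_size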

Now we give the performance bounds for the algorithm SRR.

Theorem 4

If algorithm SRR is executed by stations against an adversary of type then the number of packets simultaneously queued is at most

(10)

and packet latency is at most

(11)

If then the number of packets simultaneously queued is at most and packet latency is at most .

Proof: Packets that are injected into stations that do not belong to the intervals on the stack are transmitted in the next phase. The way the algorithm handles these packets is consistent with the design of OF-SRR, so their packet latency conforms to the bound on packet latency for OF-SRR. Packets injected into stations that belong to the intervals on the stack are transmitted in the current phase, which may slow down the phase as compared to OF-SRR. The extra rounds are either spent on transmissions or they produce collisions while the next station with packets is identified. Each round spent on transmissions decreases the number of packets in the queues, but not every such round is used by the adversary to inject new packets and so increase the number of packets queued. Regarding the rounds producing collisions, they are estimated as overheads of either per packet or in total in a phase, but in this respect Lemma 2 gives exactly the same overheads as Lemma 1. The upper bound on the number of packets stored in the queues derived for OF-SRR includes both the old and new packets, but accounted for separately. Since accounting for transmission of old and new packets together is consistent with accounting for them separately, the upper bound on the size of queues for OF-SRR also applies to algorithm SRR. We conclude that the bound (10) on the queue size can be made identical to (7), along with the bound of  for suitably small injection rates.

Next we estimate packet latency. There are two cases, the general one and a special one of suitably small injection rates. Let denote the duration of phase , and an upper bound on the duration of a phase.

First, consider the case of . If the adversary injects only into stations that are not on the stack, then these packets are old, in the sense that they will be heard in the next phase, so the bound for algorithm OF-SRR applies. If the adversary injects only into stations that are still on the stack, then this allows the adversary to extend a phase's duration by a factor of . A packet can be delayed by at most two consecutive phases, which is the following, for sufficiently large :

Next, we consider the general case. If packets are injected only into stations that do not belong to the intervals on the stack at the round of injection, then the bound (9) on the length of a phase for OF-SRR applies, in that a phase takes at most rounds. Such a duration suffices to hear the packets that are already queued when a phase begins. Now, consider the effect of injections only into stations that belong to intervals on the stack at the round of injection, while the old packets are already queued. The duration of a phase is obtained from the duration  of a phase of OF-SRR slowed down as much as possible by injecting packets into stations whose names are in the intervals on the stack. An upper bound on the duration of such a phase is obtained by the stretching-by-injecting effect to be at most the following:

(12)

The maximum sum of the lengths of two consecutive phases is obtained as a sum of the lengths given by (9) and (12), because injecting only into stations on the stack results in not creating any old packets to be heard in the next phase. The obtained bound is as follows:

The second of these two phases may be additionally extended by at most , due to the stretching-by-injecting effect combined with burstiness, which gives (11).

The bounds of Theorem 4 are asymptotically tight, which can be shown by finding a specific adversary's strategy. The case of a small injection rate is similar to that for algorithm OF-SRR, since the performance bounds of algorithm SRR differ from those for OF-SRR by constant multiplicative factors when injection rates are smaller than . Next we discuss the general case. Let the adversary first keep injecting into stations whose names are not in the intervals on the stack, similarly to the case of algorithm OF-SRR. These packets are transmitted in the next phase, which is consistent with the behavior of OF-SRR, so that eventually the phase lengths get arbitrarily close to . Then, at the beginning of a new phase, the adversary starts injecting into stations that are still on the stack. The duration of this one phase can get extended by an additional factor of due to stretching-by-injecting. This same phase can be further extended by burstiness amplified by stretching-by-injecting.

The tightness of the bounds implies that the advantage of the old-go-first mechanism applied in algorithm OF-SRR, as compared to SRR, is the speedup of packet latency by a factor that is greater than , similarly to the case of algorithm OF-RRW compared to RRW.

5 An Adaptive Algorithm without Jamming

Algorithm Move-Big-To-Front (MBTF) is an adaptive algorithm for channels without collision detection. This algorithm is stable even when the injection rate is , but for this rate packet latency is unbounded, in that even an eventual hearing of a packet is not guaranteed [24].

Algorithm MBTF works with stations arranged in a dynamic list, and we refer to the stations not by their names but by their positions on this list. There are positions: , with station  at the front of the list and station  at the end.

The list of stations is traversed by a token that gives the right to transmit. Let a traversal of the token, which starts at the front of the list and ends when the token reaches the front station of the list again, be called a pass of the token. A pass is concluded by either discovering a new big station or traversing the list to its end.

We monitor the number of packets in the queues at the end of a pass, to see how the pass contributed to the number of packets stored in the queues. If the number of queued packets at the end of a pass is smaller than at the end of the previous pass, then such a pass is called decreasing, otherwise it is non-decreasing.

We partition passes into two categories, depending on whether a big station is discovered in a pass or not. If a big station is discovered in a pass, then the pass is called big; otherwise it is called small. A discovery of a big station results in moving this big station to the front of the list, which concludes the pass. The next pass begins with a transmission by the newly discovered big station, just after it is moved up to the front position in the list. We begin the analysis of the performance of algorithm MBTF by investigating how many packets can be accumulated in the queues when small passes occur.
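
Before the analysis, the following sketch of a single pass may be helpful; the one-packet-per-visit behavior follows the description of small passes in the proof of Lemma 3 below, while the exact threshold at which a station counts as big is taken here as an assumed parameter rather than the paper's definition.

    from collections import deque

    def mbtf_pass(order, queues, big_threshold):
        # One pass of the token in MBTF: the token walks the dynamic list
        # `order` from its front, and each visited station transmits one
        # packet if it has any.  A station whose queue has reached
        # `big_threshold` (an assumed parameter) is declared big, moved to
        # the front of the list, and the pass ends there; otherwise the
        # pass ends at the end of the list.  Returns the pass duration.
        rounds = 0
        for station in list(order):
            rounds += 1                                # one round per visit
            if queues[station]:
                queues[station].popleft()              # transmit one packet
            if len(queues[station]) >= big_threshold:  # big station discovered:
                order.remove(station)                  # move it to the front,
                order.insert(0, station)               # concluding the pass
                return rounds
        return rounds

    # Example: three stations; station 2 becomes big and is moved to the front.
    queues = [deque(), deque(['p1']), deque(['p2', 'p3', 'p4'])]
    order = [0, 1, 2]
    mbtf_pass(order, queues, big_threshold=2)          # order becomes [2, 0, 1]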

Lemma 3

If algorithm MBTF is executed by stations against an adversary of type , in such a manner that all the passes have been small up to a given round, then the number of packets stored in the queues in this round is at most .

Proof: If the adversary injects packets at a rate as close to the injection rate as possible, then the burstiness component can be applied only once, and we will conclude with its contribution, while initially we disregard it. A small pass takes rounds. The adversary can inject packets during a time segment of this many rounds. This number is also an upper bound on the number of stations with packets during a non-decreasing small pass, because if there were more such stations, then each of them would transmit a packet during a pass.

Each station with packets has at most packets during a small pass. It follows that if a small pass is non-decreasing then the number of packets in the queues at the end of the pass is at most . The adversary can inject at most packets in the course of any of these passes. We conclude that the number of packets is at most in a round by which only small passes have occurred.

The adversary may use big passes to accumulate packets in queues and delay packets at the end of the list of stations by preventing the token from reaching the tail of this list. The accumulation of packets is largest when the token traverses as many stations with empty queues as possible before discovering a big station. During such passes, the adversary can inject at the rate of  while striving to make the fraction of rounds with messages heard on the channel smaller than , which results in the number of queued packets growing.

Theorem 5

If algorithm MBTF is executed by stations against an adversary of type then the number of packets stored in the queues in any round is at most

(13)

and packet latency is at most

(14)

Proof: We will disregard the burstiness component through the initial stages of the analysis, to apply it at the end of the process of accounting for time and injected packets.

By Lemma 3, if no big station has been discovered yet, then there are at most packets in total. We explore now how much the queues can increase when big passes occur. If there are at most  stations with packets, then the sum of the lengths of big passes is maximized when the following is the case: (1) stations holding packets are located at the end of the list, and (2) each time the token reaches one of these stations for the first time since big passes started to occur, the station is discovered to be big. Therefore, the sum of the lengths of big passes is at most the following:

for sufficiently large . During these big passes, at most new packets are injected. The total number of packets at this point is at most

Injecting packets in one round can increase the total number of packets to at most (13).

Next we estimate packet latency. Let us consider some packet  and argue about its delay by building a worst-case scenario. We may assume that  gets injected when the configuration of packets is already as in Lemma 3, which is such that at most packets are located in the stations at the end of the list, each holding at most packets, but possibly fewer. Let packet  be injected into the last station, which takes the longest for the token to reach when starting from the front. Additionally, if the last station is never discovered to be big, which is the case when the total number of packets in this station is at most including , then the token will never discover the station to be big before the packet that is at the bottom of the queue when  is injected is ready to be transmitted. Packet  may be at the bottom of its queue just after it is injected, and we may assume it is preceded by packets in its queue. The token will need to cover the whole length of the list times to reach  when it is already ready to be transmitted. Each such traversal of the whole list makes a small pass. In the meantime, the token may be delayed by discovering big stations, which makes the token return to the front station without reaching the station holding .

We estimate how much time may pass before the token finally visits 's station, when  is already at the top of the queue ready to be transmitted, by accounting for the following three groups of rounds contributing to 's waiting time:

  1. a delay due to discovering big stations,

  2. a delay due to small passes and packets injected during such passes,

  3. the effect of burstiness.

We begin with the effect of discovering big stations. Starting from the injection of , the adversary may inject packets into the trailing stations to make each of them big, with the exception of the last one. The discoveries of up to big stations at the end of the list contribute a delay of up to the following number of rounds:

During these big passes, a worst-case waiting scenario occurs when they are extended by stretching-by-injecting to at most the following number of rounds in total:

Next, we consider the effect of small passes. It has two components. There are small passes before the token reaches  when it is at the top of its queue, each pass contributing rounds, for a total of rounds, which is the first component.

During small passes, packets can be injected to introduce additional delay, possibly through discovering big stations. Suppose some such packets are injected. If they are located in big stations that are discovered big for the first time, then there are at most  such stations, each contributing a delay of at most  rounds, for a total of at most rounds of delay. Otherwise, if some new packets are injected into a station that has already been discovered big and is at position  in the list, then this station has at most packets inherited from the time it was discovered big and moved to the front, so at least packets are needed to make it big again, and these packets contribute to the delay  by making the station big. Any excess of packets beyond injected into a big station will contribute to a delay of  when the station is moved to the front of the list and starts transmitting. So overall, the delay is upper bounded by the number of packets injected. There are at most packets injected during small passes. The resulting delay is at most this number, which is the second component.

Finally, burstiness allows the adversary to inject packets into a big station, which can be extended to by stretching-by-injecting.

We have assessed the three contributions to packet delay. Adding them together gives a total of at most the following number of rounds:

which is the claimed upper bound on packet latency (14).

The bounds given in Theorem 5 are asymptotically tight. The factors and in the upper bounds (13) and (14) are  because . It is sufficient to show how to construct a configuration with queued packets and a packet whose delay is .

Let the adversary build queues of packets each in stations. This occurs in the course of small passes during which the adversary injects two packets into each of some fixed stations, so each of them grows in a pass. After such small passes, each of the stations with packets has packets. During one more pass, the adversary injects packets so that the number of queued packets is at least .

Next we consider packet latency. Let the adversary build queues of packets each in the last stations, while the first stations have empty queues. A packet is injected into the last station as its last packet at the bottom. Let the adversary make each of the stations with packets big by inserting one extra packet, starting with the station in the smallest position, but skipping the last station. After that, the adversary keeps injecting at full power into the station that is last but one, which also includes injecting packets in one round.

We consider two cases. The first case is when , which implies . The big passes contribute at least  rounds and the small passes that follow contribute at least  rounds. The second case is when . The number of void rounds in big passes is at least . When the last-but-one station is discovered big, the adversary injects additional  packets into it. The number of rounds can be extended by stretching-by-injecting to at least rounds. All these rounds contribute to the delay of packet . This quantity grows unbounded if the injection rate  converges to .

6 Non-adaptive Algorithms with Jamming

We show that non-adaptive algorithms may have bounded worst-case packet latency on channels with jamming. The caveat is that they are correct only against adversaries whose jamming burstiness is bounded from above by a parameter we denote . This parameter is part of the code and, to emphasize this, it is included in the names of the algorithms OF-JRRW() and JRRW(). The value of  does not occur in the upper bounds on packet latency we derive, as the jamming burstiness of a jamming adversary of type is at most .
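
One way to see why such a bound must be hard-coded is the following: a listening station cannot distinguish a jammed round from a genuinely void one, but a run of void rounds longer than the jamming burstiness bound must contain a clear round, and a clear void round means that the token holder has really paused. The sketch below illustrates this reading of how the parameter can enter the token-advancement rule; it is our interpretation, suggested by the proof of Lemma 4 below, with assumed names, and not the paper's pseudocode.

    def token_advance_rounds(observations, jam_burstiness_bound):
        # `observations` is the per-round view of a listening station:
        # 'packet' if a packet was heard, 'void' otherwise (silence and a
        # jammed round are indistinguishable here).  The token is advanced
        # after every run of jam_burstiness_bound + 1 consecutive void
        # rounds, since jamming alone cannot produce such a run while the
        # token holder is still transmitting.  Returns the advancing rounds.
        advances, void_run = [], 0
        for t, obs in enumerate(observations):
            if obs == 'void':
                void_run += 1
                if void_run == jam_burstiness_bound + 1:
                    advances.append(t)    # token moves to the next station
                    void_run = 0
            else:
                void_run = 0
        return advances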

Lemma 4

If there are old packets in the queues when a phase of algorithm OF-JRRW() executed by stations begins, against an adversary of type whose jamming burstiness is at most , then the phase takes at most the following number of rounds:

Proof: It takes rounds to transmit the old packets. It takes intervals, of void rounds each, for the token to make a full cycle and so visit every station with old packets. Therefore, at most clear rounds are needed to hear the old packets. Consider a contiguous time segment of rounds in which some packets are heard. At most of these rounds can be jammed. Therefore, the following inequality needs to hold:

Solving for , we obtain the following bound

on the length of a contiguous time interval in which at least packets are heard.

Lemma 4 could be explained by referring to the stretching-by-jamming effect directly: there are rounds to successfully transmit the old packets, there are rounds to get the token around, and there is the burstiness component , each of them stretched by the factor . A phase takes close to the upper bound in Lemma 4 when the adversary does not jam the intervals of void rounds, each used to advance the token once. In what follows, similar facts are argued by referring directly to the stretching-by-jamming effect.
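
As a generic worked instance of this kind of solving step, with assumed symbols standing in for the paper's parameters (T for the number of clear rounds needed, \lambda for the jamming rate, and \beta for the jamming burstiness), a contiguous segment of t rounds in which at most \lambda t + \beta rounds are jammed and T clear rounds are required satisfies

    t \le T + \lambda t + \beta
    \quad\Longrightarrow\quad
    (1 - \lambda)\, t \le T + \beta
    \quad\Longrightarrow\quad
    t \le \frac{T + \beta}{1 - \lambda} ,

so every quantity of clear rounds is stretched by the factor 1/(1 - \lambda), which is the stretching-by-jamming effect invoked throughout this section.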

In the analyses of the algorithms, if rounds are counted in disjoint intervals and the adversary jams at full power, then the burstiness component can be applied only once. So Lemma 4 may be used for one phase as formulated above, while in the remaining phases the bound is restricted to the smaller quantity .

Theorem 6

If algorithm OF-JRRW() is executed by stations against a jamming adversary of type such that its jamming burstiness is at most , then the number of packets queued in any round is at most

(15)

and packet latency is at most

(16)

Proof: Let be the duration of phase  and be the number of old packets at the beginning of phase , for . The following two estimates lead to a recurrence for the numbers , in which we disregard the burstiness component. One estimate reads

(17)

by the definitions of old packets and of type of the adversary, and the other estimate is

(18)

by Lemma 4. Let us denote . Substitute (17) into (18) to obtain

for and . Note that , as . An upper bound on the duration of a phase is found by iterating this recurrence:

(19)

After substituting and into (19), we obtain the following estimate:

(20)

Replacing by in (20) expands to the following quantity:

(21)

We apply the estimate to (21) to obtain the following upper bound on :

(22)

A packet waits to be transmitted through at most two consecutive phases, each taking at most  rounds, where a bound for  given in (22) does not account for burstiness. Let the adversary inject extra packets in a round of a phase. This increases the number of packets in the current phase but extends the duration of the next phase by , which is the phase when these packets are transmitted as old. These transmissions in turn allow the adversary to inject additional packets, which extends the duration of the immediately following phase by rounds by the stretching-by-jamming effect.

We conclude with the following estimates. The maximum number of queued packets is obtained by adding at most old packets to at most new packets, along with at most packets injected in a burst, which together make the following bound:

where we used . This yields (15). The maximum number of rounds spent by a packet waiting to be heard on the channel is obtained by adding twice the upper bound  on the duration of a phase (22), incremented by extra rounds in the phase immediately following one with a bursty injection, along with rounds of the following phase. This gives the following amount:

where we used . This yields (16).

The bound of Theorem 6 is tight, by the following scenario. A phase includes void rounds to advance the token around, which the adversary does not jam. If the adversary injects at full power, and at the same time jams at full power the rounds during which some station tries to transmit, then this is equivalent to injections with rate . Eventually phases get arbitrarily close to the following magnitude, by combined stretching:

If the adversary is such that then a phase takes close to rounds. The number of packets injected during a phase of such duration can be made close to , which can be made asymptotic to , if .

Next, we analyze algorithm JRRW().

Theorem 7

If algorithm JRRW() is executed by stations against a jamming adversary of type such that its jamming burstiness is at most , then the number of packets stored in the queues in any round is at most

(23)

and packet latency is at most

(24)

Proof: Packets injected by the adversary may be transmitted in the current phase or in the next one, depending on how the station into which they are injected is positioned relative to the station holding the token. We consider separately the impact of such injections on extending phases, by first estimating the phase length when packets are transmitted in the next phase and then when they are transmitted in the current phase.

Packets injected at stations behind the one that holds the token at the moment are transmitted in the next phase. These new packets will be visited by the token only after they become old. It follows that the adversary can make algorithm JRRW() behave as OF-JRRW by choosing stations to inject packets into in this very manner. If all packets are injected this way, an upper bound on the duration of a phase is given by (22), which we denote by .

Next, we estimate the contribution of packets injected at stations ahead of the station that holds the token at the moment, and which are transmitted in the current phase, compounded with packets already at the stations, which were injected behind the station holding the token. The packets get injected with the rate extended by the stretching-by-jamming effect. The total number of rounds in such a phase is at most

(25)

Substituting into (25) results in the following bound

(26)

which is the maximum possible length of a single phase, if we disregard the effects of burstiness. To account for burstiness, the adversary can inject packets in front of the token, and then, by iterating stretching-by-jamming with injections at full power, the resulting extra rounds get extended to . The duration of two consecutive phases is bounded from above by a sum of (22), which we denote by , of (26), and of a one-time extension of a phase due to burstiness, which we calculated to be . Together they make the following bound:

which is the upper bound (24).

The upper bound given in Theorem 7 is asymptotically tight, which can be justified by the following scenario. Let the adversary initially inject behind the token, which results in all injected packets being transmitted in the next phase. The accompanying pattern of jamming is such as to make the queues and packet latency get asymptotic to the bounds given in Theorem 6. This gives the tightness of the queue bounds, as they are identical in Theorems 6 and 7. At this point, a phase takes close to rounds. Now, the adversary switches to injecting just before the token, to make the old packets injected in the previous phase and the currently injected packets transmitted in the current phase, so that there are no outstanding packets when the phase is over. Injecting and jamming at full power has the effect of stretching the injection rate to , which eventually makes a phase take close to the following amount:

by the estimate as in (25), which is asymptotic to (24), if .

The upper bound on packet latency given in Theorem 7 differs by the factor from the bound in Theorem 6. This factor can become arbitrarily large when gets suitably close to . This difference between the two bounds reflects the benefit of the old-go-first approach applied in the design of algorithm OF-JRRW, as compared to algorithm JRRW.

7 Adaptive Algorithms with Jamming

We give worst-case upper bounds on the queue size and packet latency against jamming adversaries for the following three adaptive algorithms: OFC-RRW, C-RRW, and C-MBTF. Each of these algorithms is stable for any jamming burstiness, unlike the non-adaptive algorithms we considered in Section 6, which include in their code a bound on the jamming burstiness that they can withstand in a stable manner.

First, we estimate the worst-case performance of OFC-RRW, which combines adaptivity with the old-go-first approach, on top of the round-robin-withholding way to use a token.

Lemma 5

If there are old packets in the queues, when a phase of algorithm OFC-RRW executed by stations begins, against a type adversary, then the phase takes at most the following number of rounds:

Proof: It takes up to control rounds for the token to pass through all stations. It takes rounds to hear the packets. These rounds can be extended to by the stretching-by-jamming effect.

Now we give performance bounds for algorithm OFC-RRW.

Theorem 8

If algorithm OFC-RRW is executed by stations against a jamming adversary of type then the number of packets queued in any round is at most

(27)

and packet latency is at most

(28)

Proof: Let denote an upper bound on the duration of phase , for , where , as it consists of rounds possibly stretched by jamming. Let be the number of old packets at the beginning of phase , for . We use the following two estimates to derive a recurrence for the numbers . One is

(29)

which follows from the definition of old packets and the adversary of type . The other is

(30)

which follows from Lemma 5. Using the abbreviations and , we substitute (29) into (30) to obtain

To find an upper bound on the duration of a phase, we iterate the recurrence , which produces

(31)

After substituting and into (31), we obtain the following estimate of the duration of a phase