LIFO-Backpressure Achieves Near Optimal Utility-Delay Tradeoff

Longbo Huang, Scott Moeller, Michael J. Neely, and Bhaskar Krishnamachari (emails: {longbohu, smoeller, mjneely, bkrishna}) are with the Department of Electrical Engineering, University of Southern California, Los Angeles, CA 90089, USA. This material is supported in part under one or more of the following grants: DARPA IT-MANET W911NF-07-0028, NSF CAREER CCF-0747525, and continuing through participation in the Network Science Collaborative Technology Alliance sponsored by the U.S. Army Research Laboratory.

There has been considerable recent work developing a new stochastic network utility maximization framework using Backpressure algorithms, also known as MaxWeight. A key open problem has been the development of utility-optimal algorithms that are also delay efficient. In this paper, we show that the Backpressure algorithm, when combined with the LIFO queueing discipline (called LIFO-Backpressure), is able to achieve a utility that is within $O(1/V)$ of the optimal value for any scalar $V \geq 1$, while maintaining an average delay of $O([\log(V)]^2)$ for all but a tiny fraction of the network traffic. This result holds for general stochastic network optimization problems and general Markovian dynamics. Remarkably, the performance of LIFO-Backpressure can be achieved by simply changing the queueing discipline; it requires no other modifications of the original Backpressure algorithm. We validate the results through empirical measurements from a sensor network testbed, which show a good match between theory and practice.


Queueing, Dynamic Control, LIFO scheduling, Lyapunov analysis, Stochastic Optimization

I Introduction

Recent developments in stochastic network optimization theory have yielded a very general framework that solves a large class of networking problems of the following form: We are given a discrete time stochastic network. The network state, which describes the current realization of the underlying network randomness, such as the network channel conditions, is time varying according to some probability law. A network controller performs some action based on the observed network state at every time slot. The chosen action incurs a cost (since cost minimization is mathematically equivalent to utility maximization, below we use cost and utility interchangeably), but also serves some amount of traffic and possibly generates new traffic for the network. This traffic causes congestion, and thus leads to backlogs at nodes in the network. The goal of the controller is to minimize its time average cost subject to the constraint that the time average total backlog in the network be kept finite.

This general setting models a large class of networking problems, ranging from traffic routing [1], flow utility maximization [2], and network pricing [3] to cognitive radio applications [4]. Many techniques have been applied to this problem (see [5] for a survey). Among the approaches that have been adopted, the family of Backpressure algorithms [6] has recently received much attention due to their provable performance guarantees, robustness to stochastic network conditions and, most importantly, their ability to achieve the desired performance without requiring any statistical knowledge of the underlying randomness in the network.

Most prior performance results for Backpressure are given in the following $[O(1/V), O(V)]$ utility-delay tradeoff form [6]: Backpressure is able to achieve a utility that is within $O(1/V)$ of the optimal utility for any scalar $V \geq 1$, while guaranteeing an average network delay that is $O(V)$. Although these results provide strong theoretical guarantees for the algorithms, the network delay can actually be unsatisfying when we achieve a utility that is very close to the optimal, i.e., when $V$ is large.

There have been previous works trying to develop algorithms that achieve better utility-delay tradeoffs. Previous works [7] and [8] show that improved tradeoffs are possible for single-hop networks with certain structure, and develop optimal $[O(1/V), O(\log(V))]$ and $[O(1/V), O(\sqrt{V})]$ utility-delay tradeoffs. However, the algorithms are different from basic Backpressure and require knowledge of an “epsilon” parameter that measures distance to a performance region boundary. Work [9] uses a completely different analytical technique to show that similar poly-logarithmic tradeoffs, i.e., $[O(1/V), O([\log(V)]^2)]$, are possible by carefully modifying the actions taken by the basic Backpressure algorithms. However, the algorithm requires a pre-determined learning phase, which adds additional complexity to the implementation. The current work, following the line of analysis in [9], instead shows that similar poly-logarithmic tradeoffs, i.e., $[O(1/V), O([\log(V)]^2)]$, can be achieved by the original Backpressure algorithm by simply changing the service discipline from First-In-First-Out (FIFO) to Last-In-First-Out (LIFO) (called LIFO-Backpressure below). This is a remarkable feature that distinguishes LIFO-Backpressure from the previous algorithms in [7] [8] [9], and provides a deeper understanding of Backpressure itself and the role of queue backlogs as Lagrange multipliers (see also [2] [9]). However, this performance improvement is not for free: we must drop a small fraction of packets in order to dramatically improve delay for the remaining ones. We prove that as the parameter $V$ is increased, the fraction of dropped packets quickly converges to zero, while maintaining close-to-optimal utility and average backlog. This provides an analytical justification for the experimental observations in [10], which show that a related LIFO-Backpressure rule serves the vast majority of the traffic with delay that is improved by two orders of magnitude.

LIFO-Backpressure was proposed in recent empirical work [10]. The authors developed a practical implementation of backpressure routing and showed experimentally that applying the LIFO queueing discipline drastically improves average packet delay, but did not provide theoretical guarantees. Another notable recent work providing an alternative delay solution is [11], which describes a novel backpressure-based per-packet randomized routing framework that runs atop the shadow queue structure of [12] while minimizing hop count as explored in [13]. Their techniques reduce delay drastically and eliminate the per-destination queue complexity, but do not provide average delay guarantees.

Our analysis of the delay performance of LIFO-Backpressure is based on the recent “exponential attraction” result developed in [9]. The proof idea can be intuitively explained by Fig. 1, which depicts a simulated backlog process of a single queue system with unit packet size under Backpressure.

Fig. 1: The LIFO-Backpressure Idea

The left figure demonstrates the “exponential attraction” result in [9], which states that queue sizes under Backpressure deviate from some fixed point $B$ with probability that decreases exponentially in the deviation distance. Hence the queue size will mostly fluctuate within an interval $[B-D, B+D]$, which can be shown to be of size $\Theta([\log(V)]^2)$. This result holds under both FIFO and LIFO, as they result in the same queue process. Now suppose LIFO is used in this queue. Then from the right figure, we see that most of the packets will arrive at the queue when the queue size is between $B-D$ and $B+D$, and these new packets will always be placed on the top of the queue due to the LIFO discipline. Most packets thus enter and leave the queue when the queue size is between $B-D$ and $B+D$. Therefore, these packets “see” a queue with average size no more than $2D = \Theta([\log(V)]^2)$. Now let $\lambda$ be the packet arrival rate into the queue, and let $\tilde{\lambda}$ be the arrival rate of packets entering when the queue size is in $[B-D, B+D]$ and that eventually depart. Because packets always occupy the same buffer slot under LIFO, we see that these packets can occupy at most $2D + A_{\max}$ buffer slots, ranging from $B-D$ to $B+D+A_{\max}$, where $A_{\max}$ is the maximum number of packets that can enter the queue at any time. We can now apply Little’s Theorem [14] to the buffer slots in the interval $[B-D, B+D+A_{\max}]$, and we see that the average delay $\bar{W}$ for these packets that arrive when the queue size is in $[B-D, B+D]$ satisfies:

$$\bar{W} \leq \frac{2D + A_{\max}}{\tilde{\lambda}}.$$

Finally, the exponential attraction result implies that $\tilde{\lambda} \approx \lambda$. Hence for almost all packets entering the queue, the average delay is $O([\log(V)]^2)$.
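This intuition can be checked with a toy experiment (a simplification we constructed, not the Backpressure system itself): a single-server queue holding a large standing backlog, with one arrival and one service per slot. Under LIFO every fresh packet rides the top of the stack, while under FIFO it waits behind the standing backlog:

```python
from collections import deque

def simulate(discipline, standing_backlog=100, slots=1000):
    """One arrival and one departure per slot on top of a standing backlog.
    Packets are stamped with their arrival slot; the initial backlog is
    stamped -1 and excluded from the delay statistics."""
    q = deque([-1] * standing_backlog)
    delays = []
    for t in range(slots):
        q.append(t)                                    # fresh arrival
        pkt = q.pop() if discipline == "LIFO" else q.popleft()
        if pkt >= 0:
            delays.append(t - pkt + 1)                 # delay in slots
    return delays

lifo_delays = simulate("LIFO")
fifo_delays = simulate("FIFO")
```

Here every LIFO packet departs with delay 1, while every FIFO packet that departs waits behind all 100 standing packets; the standing backlog itself never drains, mirroring the role of the fixed point in Fig. 1.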

This paper is organized as follows. In Section II, we set up our notations. We then present our system model in Section III. We provide an example of our network in Section IV. We review the Backpressure algorithm in Section V. The delay performance of LIFO-Backpressure is presented in Section VI. Simulation results are presented in Section VII. We then also present experimental testbed results in Section VIII. Finally, we comment on optimizing a function of time averages in Section IX.

II Notations

Here we first set up the notation used in this paper: $\mathbb{R}$ represents the set of real numbers. $\mathbb{R}_+$ (or $\mathbb{R}_-$) denotes the set of nonnegative (or non-positive) real numbers. $\mathbb{R}^n$ (or $\mathbb{R}^n_+$) is the set of $n$-dimensional column vectors, with each element being in $\mathbb{R}$ (or $\mathbb{R}_+$). Bold symbols $\mathbf{a}$ and $\mathbf{a}^T$ represent a column vector and its transpose. $\mathbf{a} \succeq \mathbf{b}$ means vector $\mathbf{a}$ is entrywise no less than vector $\mathbf{b}$. $\|\mathbf{a} - \mathbf{b}\|$ is the Euclidean distance of $\mathbf{a}$ and $\mathbf{b}$. $\mathbf{0}$ and $\mathbf{1}$ denote the column vectors with all elements being $0$ and $1$, respectively. $\log(\cdot)$ is the natural log.

III System Model

In this section, we specify the general network model we use. We consider a network controller that operates a network with the goal of minimizing the time average cost, subject to the queue stability constraint. The network is assumed to operate in slotted time, i.e., $t \in \{0, 1, 2, \dots\}$. We assume there are $r$ queues in the network.

III-A Network State

In every slot $t$, we use $S(t)$ to denote the current network state, which indicates the current network parameters, such as a vector of channel conditions for each link, or a collection of other relevant information about the current network channels and arrivals. We assume that $S(t)$ evolves according to a finite state irreducible and aperiodic Markov chain, with a total of $M$ different random network states denoted as $\mathcal{S} = \{s_1, s_2, \dots, s_M\}$. Let $\pi_{s_i}$ denote the steady state probability of being in state $s_i$. It is easy to see in this case that $\pi_{s_i} > 0$ for all $s_i$. The network controller can observe $S(t)$ at the beginning of every slot $t$, but the $\pi_{s_i}$ and transition probabilities are not necessarily known.
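As a quick illustration of such a state process (a hypothetical 2-state chain of our own, not the paper's model or testbed), the empirical state frequencies of an irreducible aperiodic chain converge to its steady state probabilities even when those probabilities are not known in advance:

```python
import random

# Transition probabilities of an illustrative 2-state chain S(t)
P = [[0.9, 0.1],
     [0.2, 0.8]]

def empirical_frequencies(T=200_000, seed=1):
    """Run the chain for T slots and return the fraction of time in each state."""
    rng = random.Random(seed)
    s, counts = 0, [0, 0]
    for _ in range(T):
        counts[s] += 1
        s = 0 if rng.random() < P[s][0] else 1
    return [c / T for c in counts]

# The balance equation pi_0 * P[0][1] = pi_1 * P[1][0] gives pi = (2/3, 1/3)
freq = empirical_frequencies()
```

A controller observing only the current state thus sees each state with long-run frequency equal to its steady state probability.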

III-B The Cost, Traffic, and Service

At each time $t$, after observing $S(t) = s_i$, the controller chooses an action $x(t)$ from a set $\mathcal{X}^{(s_i)}$, i.e., $x(t) = x^{(s_i)}$ for some $x^{(s_i)} \in \mathcal{X}^{(s_i)}$. The set $\mathcal{X}^{(s_i)}$ is called the feasible action set for network state $s_i$ and is assumed to be time-invariant and compact for all $s_i$. The cost, traffic, and service generated by the chosen action $x(t) = x^{(s_i)}$ are as follows:

  1. The chosen action has an associated cost given by the cost function $f(s_i, x^{(s_i)}): \mathcal{X}^{(s_i)} \mapsto \mathbb{R}_+$ (or $\mathbb{R}_-$ in reward maximization problems);

  2. The amount of traffic generated by the action to queue $j$ is determined by the traffic function $A_j(s_i, x^{(s_i)}): \mathcal{X}^{(s_i)} \mapsto \mathbb{R}_+$, in units of packets;

  3. The amount of service allocated to queue $j$ is given by the rate function $\mu_j(s_i, x^{(s_i)}): \mathcal{X}^{(s_i)} \mapsto \mathbb{R}_+$, in units of packets.

Note that $A_j(s_i, x^{(s_i)})$ includes both the exogenous arrivals from outside the network to queue $j$, and the endogenous arrivals from other queues, i.e., the transmitted packets from other queues, to queue $j$. We assume the functions $f(s_i, \cdot)$, $A_j(s_i, \cdot)$ and $\mu_j(s_i, \cdot)$ are continuous and time-invariant, that their magnitudes are uniformly upper bounded by some constant $\delta_{\max} \in (0, \infty)$ for all $s_i$, $j$, and that they are known to the network operator. We also assume that there exists a set of actions $\{x_k^{(s_i)}\}_{k=1,2,\dots}$ with $x_k^{(s_i)} \in \mathcal{X}^{(s_i)}$ and some variables $\vartheta_k^{(s_i)} \geq 0$ for all $s_i$ and $k$ with $\sum_k \vartheta_k^{(s_i)} = 1$ for all $s_i$, such that

$$\sum_{s_i} \pi_{s_i} \sum_k \vartheta_k^{(s_i)} \big[ A_j\big(s_i, x_k^{(s_i)}\big) - \mu_j\big(s_i, x_k^{(s_i)}\big) \big] \leq -\eta, \quad \forall\, j, \qquad (2)$$

for some $\eta > 0$ for all $j$. That is, the stability constraints are feasible with $\eta$-slackness. Thus, there exists a stationary randomized policy that stabilizes all queues (where $\vartheta_k^{(s_i)}$ represents the probability of choosing action $x_k^{(s_i)}$ when $S(t) = s_i$) [6].

III-C Queueing, Average Cost, and the Stochastic Problem

Let $\mathbf{q}(t) = (q_1(t), \dots, q_r(t))^T$, $t = 0, 1, 2, \dots$, be the queue backlog vector process of the network, in units of packets. We assume the following queueing dynamics:

$$q_j(t+1) = \max\big[ q_j(t) - \mu_j(t),\, 0 \big] + A_j(t), \quad \forall\, j, \qquad (3)$$

and $\mathbf{q}(0) = \mathbf{0}$. By using (3), we assume that when a queue does not have enough packets to send, null packets are transmitted. In this paper, we adopt the following notion of queue stability:
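In code, the dynamics in (3) with idle fill amount to a one-line update per queue (a generic sketch; `mu` and `A` stand for the service and arrival values produced by the chosen action in the current state):

```python
def queue_update(q, mu, A):
    """One slot of q_j(t+1) = max(q_j(t) - mu_j(t), 0) + A_j(t) for all j."""
    return [max(qj - muj, 0) + Aj for qj, muj, Aj in zip(q, mu, A)]

# Example: queue 0 has too few packets to fill its service (null packets sent)
q_next = queue_update([1, 5], mu=[3, 2], A=[2, 0])  # -> [2, 3]
```

The `max[..., 0]` clips the backlog at zero, which is exactly where the null-packet (idle fill) assumption enters.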

$$\bar{q} \triangleq \limsup_{t \to \infty} \frac{1}{t} \sum_{\tau=0}^{t-1} \sum_{j=1}^{r} \mathbb{E}\big\{ q_j(\tau) \big\} < \infty. \qquad (4)$$

We also use $f_{av}^{\Pi}$ to denote the time average cost induced by an action-choosing policy $\Pi$, defined as:

$$f_{av}^{\Pi} \triangleq \limsup_{t \to \infty} \frac{1}{t} \sum_{\tau=0}^{t-1} \mathbb{E}\big\{ f^{\Pi}(\tau) \big\},$$

where $f^{\Pi}(\tau)$ is the cost incurred at time $\tau$ by policy $\Pi$. We call an action-choosing policy feasible if at every time slot it only chooses actions from the feasible action set $\mathcal{X}^{(S(t))}$. We then call a feasible action-choosing policy under which (4) holds a stable policy, and use $f_{av}^*$ to denote the optimal time average cost over all stable policies. In every slot, the network controller observes the current network state and chooses a control action, with the goal of minimizing the time average cost subject to network stability. This goal can be mathematically stated as: (P1) $\min_{\Pi} f_{av}^{\Pi}$, subject to $\Pi$ being stable. In the following, we will refer to (P1) as the stochastic problem.

Note that in some network optimization problems, e.g., [15], the objective of the network controller is to optimize a function of a time average metric. In this case, we see that the Backpressure algorithm and the deterministic problem presented in the next section can similarly be constructed, but will be slightly different. We will discuss these problems in Section IX.

IV An example of our model

Here we provide an example to illustrate our model. Consider the two-queue network in Fig. 2. In every slot, the network operator decides whether or not to allocate one unit of power to serve packets at each queue, so as to support all arriving traffic, i.e., maintain queue stability, with minimum energy expenditure. We assume the network state $S(t)$, a quadruple of arrival and channel components, evolves according to a finite state Markov chain with three states $s_1$, $s_2$, and $s_3$. Here $A_j(t)$ denotes the number of exogenous packet arrivals to queue $j$ at time $t$, and $S_j(t)$ is the state of channel $j$, taking values “Good” or “Bad.” When a link's channel state is “Good,” one unit of power can serve two packets over the link; otherwise it can serve only one. We assume power can be allocated to both channels without affecting each other.

Fig. 2: A two queue tandem example.

In this case, we see that there are three possible network states. At each state $s_i$, the action $x^{(s_i)}$ is a pair $[x_1, x_2]$, with $x_j \in \{0, 1\}$ being the amount of energy spent at queue $j$. The cost function is $f(s_i, x^{(s_i)}) = x_1 + x_2$ for all $s_i$. The network states, the traffic functions, and the service rate functions are summarized in Fig. 3. Note here $A_1(t)$ is part of $S(t)$ and is independent of $x(t)$, while $A_2(t) = \mu_1(t)$ and hence depends on $x(t)$. Also note that $A_2(t)$ equals $\mu_1(t)$ instead of $\min[\mu_1(t), q_1(t)]$ due to our idle fill assumption in Section III-C.

Fig. 3: The traffic and service functions under different states.

V Backpressure and the Deterministic Problem

In this section, we first review the Backpressure algorithm [6] for solving the stochastic problem, and then define the deterministic problem and its dual.

Backpressure: At every time slot $t$, observe the current network state $S(t)$ and the backlog $\mathbf{q}(t)$. If $S(t) = s_i$, choose $x^{(s_i)} \in \mathcal{X}^{(s_i)}$ that solves the following:

$$\min_{x \in \mathcal{X}^{(s_i)}} \; V f(s_i, x) + \sum_{j=1}^{r} q_j(t) \big[ A_j(s_i, x) - \mu_j(s_i, x) \big]. \qquad (6)$$
Depending on the problem structure, (6) can usually be decomposed into separate parts that are easier to solve, e.g., [3], [4]. Also, when the network state process $S(t)$ is i.i.d., it has been shown in [6] that:

$$f_{av} = f_{av}^* + O(1/V), \qquad \bar{q} = O(V),$$

where $f_{av}$ and $\bar{q}$ are the expected average cost and the expected average network backlog size under Backpressure, respectively. When $S(t)$ is Markovian, [3] and [4] show that Backpressure achieves the same $[O(1/V), O(V)]$ utility-delay tradeoff if the queue sizes are deterministically upper bounded by $\Theta(V)$ for all time. Without this deterministic backlog bound, it has recently been shown that Backpressure achieves a similar tradeoff under a Markovian $S(t)$, with parameters representing the proximity to the optimal utility value and the “convergence time” of the Backpressure algorithm to that proximity [16]. However, there has not been any formal proof that shows the exact $[O(1/V), O(V)]$ utility-delay tradeoff of Backpressure under a Markovian $S(t)$.
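For a finite feasible action set, the per-slot decision (6) is a direct minimization (a generic sketch with illustrative names; `cost`, `arrivals`, and `service` play the roles of $f$, $A_j$, and $\mu_j$ for the observed state):

```python
def backpressure_action(V, q, actions, cost, arrivals, service):
    """Pick the feasible action minimizing V*f(s,x) + sum_j q_j*(A_j - mu_j)."""
    def score(x):
        A, mu = arrivals(x), service(x)
        return V * cost(x) + sum(qj * (Aj - mj) for qj, Aj, mj in zip(q, A, mu))
    return min(actions, key=score)

# Single queue, action = power level in {0, 1}: cost x, serves 2x, 1 arrival/slot.
# A large backlog justifies spending power; an empty queue does not.
busy = backpressure_action(V=1, q=[5], actions=[0, 1],
                           cost=lambda x: x, arrivals=lambda x: [1],
                           service=lambda x: [2 * x])   # -> 1
idle = backpressure_action(V=1, q=[0], actions=[0, 1],
                           cost=lambda x: x, arrivals=lambda x: [1],
                           service=lambda x: [2 * x])   # -> 0
```

The queue backlogs act as the prices that trade off the cost term against congestion, which is the Lagrange-multiplier role discussed in the introduction.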

We also recall the deterministic problem defined in [9]:

$$\begin{aligned} \min \quad & V \sum_{s_i} \pi_{s_i} f\big(s_i, x^{(s_i)}\big) \\ \text{s.t.} \quad & \sum_{s_i} \pi_{s_i} \big[ A_j\big(s_i, x^{(s_i)}\big) - \mu_j\big(s_i, x^{(s_i)}\big) \big] \leq 0, \quad \forall\, j, \\ & x^{(s_i)} \in \mathcal{X}^{(s_i)}, \quad \forall\, s_i, \end{aligned} \qquad (8)$$

where $\pi_{s_i}$ corresponds to the steady state probability of $S(t) = s_i$, and $x^{(s_i)}$ is the action chosen for state $s_i$. The dual problem of (8) can be obtained as follows:

$$\max \; g(\boldsymbol{\gamma}), \quad \text{s.t.} \; \boldsymbol{\gamma} \succeq \mathbf{0}, \qquad (9)$$

where $g(\boldsymbol{\gamma})$ is called the dual function and is defined as:

$$g(\boldsymbol{\gamma}) = \inf_{x^{(s_i)} \in \mathcal{X}^{(s_i)}} \sum_{s_i} \pi_{s_i} \Big\{ V f\big(s_i, x^{(s_i)}\big) + \sum_j \gamma_j \big[ A_j\big(s_i, x^{(s_i)}\big) - \mu_j\big(s_i, x^{(s_i)}\big) \big] \Big\}. \qquad (10)$$

Here $\boldsymbol{\gamma} = (\gamma_1, \dots, \gamma_r)^T$ is the Lagrange multiplier vector of (8). It is well known that $g(\boldsymbol{\gamma})$ in (10) is concave in the vector $\boldsymbol{\gamma}$, and hence the problem (9) can usually be solved efficiently, particularly when the cost and rate functions are separable over different network components. Below, we use $\boldsymbol{\gamma}^*_V = (\gamma^*_{V1}, \dots, \gamma^*_{Vr})^T$ to denote an optimal solution of the problem (9) corresponding to the given $V$.

VI Performance of LIFO Backpressure

In this section, we analyze the performance of Backpressure with the LIFO queueing discipline (called LIFO-Backpressure). The idea of using LIFO under Backpressure was first proposed in [10], although no theoretical performance guarantee was provided there. We will show, under some mild conditions (to be stated in Theorem 3), that under LIFO-Backpressure, the time average delay for almost all packets entering the network is $O([\log(V)]^2)$ when the utility is pushed to within $O(1/V)$ of the optimal value. Note that the implementation complexity of LIFO-Backpressure is the same as that of the original Backpressure, and LIFO-Backpressure requires only knowledge of the instantaneous network condition. This is a remarkable feature that distinguishes it from the previous algorithms achieving similar poly-logarithmic tradeoffs in the i.i.d. case, e.g., [7] [8] [9], which all require knowledge of some implicit network parameters other than the instantaneous network state. Below, we first provide a simple example that demonstrates the need for careful treatment of the use of LIFO in Backpressure algorithms, and then present a modified Little's theorem that will be used in our proof.

VI-A A simple example on the LIFO delay

Consider a slotted system where two packets arrive at time $0$, and one packet periodically arrives every slot thereafter (at times $1, 2, 3, \dots$). The system is initially empty and can serve exactly one packet per slot. The arrival rate is clearly $\lambda = 1$ packet/slot. Further, under either FIFO or LIFO service, there are always 2 packets in the system, so $\bar{Q} = 2$.

Under FIFO service, the first packet has a delay of $1$ and all packets thereafter have a delay of $2$:

$$W^{FIFO}_1 = 1, \qquad W^{FIFO}_k = 2 \;\; \forall\, k \geq 2,$$

where $W^{FIFO}_k$ is the delay of the $k$th packet under FIFO ($W^{LIFO}_k$ is similarly defined for LIFO). We thus have:

$$\bar{W}^{FIFO} \triangleq \lim_{K \to \infty} \frac{1}{K} \sum_{k=1}^{K} W^{FIFO}_k = 2.$$

Thus, $\bar{W}^{FIFO} = 2$, $\bar{Q} = 2$, $\lambda = 1$, and so $\bar{Q} = \lambda \bar{W}^{FIFO}$ indeed holds.

Now consider the same system under LIFO service. We still have $\bar{Q} = 2$, $\lambda = 1$. However, in this case the first packet never departs, while all other packets have a delay equal to $1$ slot:

$$W^{LIFO}_1 = \infty, \qquad W^{LIFO}_k = 1 \;\; \forall\, k \geq 2.$$

Thus, for all integers $K > 0$:

$$\frac{1}{K} \sum_{k=1}^{K} W^{LIFO}_k = \infty,$$

and so $\bar{W}^{LIFO} = \infty$. Clearly $\bar{Q} \neq \lambda \bar{W}^{LIFO}$. On the other hand, if we ignore the one packet with infinite delay, we note that all other packets get a delay of 1 (exactly half the delay in the FIFO system). Thus, in this example, LIFO service significantly improves delay for all but the first packet.
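The example can be verified numerically (a direct simulation of the arrival pattern above):

```python
from collections import deque

def average_delay(discipline, slots=100_000):
    """Two packets arrive at slot 0, one per slot thereafter; one packet is
    served at the end of every slot. Returns the average delay of the
    packets served so far, with delay counted in slots."""
    q = deque()
    total = 0
    for t in range(slots):
        q.append(t)              # the periodic arrival, stamped with its slot
        if t == 0:
            q.append(0)          # the extra packet at time 0
        pkt = q.pop() if discipline == "LIFO" else q.popleft()
        total += t - pkt + 1
    return total / slots
```

FIFO converges to the average delay of 2 computed above, while under LIFO every served packet has delay 1 and the trapped first packet never departs.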

For the above LIFO example, it is interesting to note that if we define $\tilde{Q}$ and $\tilde{W}$ as the average backlog and delay associated only with those packets that eventually depart, then we have $\tilde{Q} = 1$, $\tilde{W} = 1$, and the equation $\tilde{Q} = \lambda \tilde{W}$ indeed holds. This motivates the theorem in the next subsection, which considers a time average only over those packets that eventually depart.

VI-B A Modified Little’s Theorem for LIFO systems

We now present the modified Little's theorem. Let $\mathcal{B}$ represent a finite set of buffer locations for a LIFO queueing system. Let $A(t)$ be the number of arrivals that use a buffer location within the set $\mathcal{B}$ up to time $t$. Let $D(t)$ be the number of departures from a buffer location within the set $\mathcal{B}$ up to time $t$. Let $W_i$ be the delay of the $i$th job to depart from the set $\mathcal{B}$. Define $\bar{W}$ as the average delay considering only those jobs that depart:

$$\bar{W} \triangleq \limsup_{t \to \infty} \frac{1}{D(t)} \sum_{i=1}^{D(t)} W_i.$$
We then have the following theorem:

Theorem 1

Suppose there is a constant $\lambda_{\min} > 0$ such that with probability 1:

$$\liminf_{t \to \infty} \frac{A(t)}{t} \geq \lambda_{\min}.$$

Further suppose that $\lim_{t \to \infty} D(t) = \infty$ with probability 1 (so the number of departures is infinite). Then the average delay satisfies:

$$\bar{W} \leq \frac{|\mathcal{B}|}{\lambda_{\min}},$$

where $|\mathcal{B}|$ is the size of the finite set $\mathcal{B}$.


See Appendix A.
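The essence of the argument (a heuristic sketch of why such a bound holds; the rigorous proof is in Appendix A) is that under LIFO each job occupies a single buffer location in $\mathcal{B}$ for its entire sojourn, so at most $|\mathcal{B}|$ jobs reside in $\mathcal{B}$ in any slot:

```latex
\sum_{i=1}^{D(t)} W_i \;\le\; |\mathcal{B}|\, t
\qquad \text{and} \qquad
A(t) - D(t) \;\le\; |\mathcal{B}| .
```

Dividing the first inequality by $D(t)$, and noting from the second that $\liminf_{t\to\infty} D(t)/t \geq \liminf_{t\to\infty} A(t)/t \geq \lambda_{\min}$, gives $\bar{W} \leq |\mathcal{B}|/\lambda_{\min}$.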

VI-C LIFO-Backpressure Proof

We now provide the analysis of LIFO-Backpressure. To prove our result, we first have the following theorem, which is the first to show that Backpressure (with either FIFO or LIFO) achieves the exact $[O(1/V), O(V)]$ utility-delay tradeoff under a Markovian network state process. It generalizes the performance result of Backpressure in the i.i.d. case in [6].

Theorem 2

Suppose $S(t)$ is a finite state irreducible and aperiodic Markov chain (in [17], the theorem is proven under more general Markovian processes that include the process assumed here) and condition (2) holds. Then Backpressure (with either FIFO or LIFO) achieves the following:

$$f_{av} = f_{av}^* + O(1/V), \qquad \bar{q} = O(V),$$

where $f_{av}$ and $\bar{q}$ are the expected time average cost and backlog under Backpressure.


See [17].

Theorem 2 thus shows that LIFO-Backpressure guarantees an average backlog of $O(V)$ when pushing the utility to within $O(1/V)$ of the optimal value. We now consider the delay performance of LIFO-Backpressure. For our analysis, we need the following theorem (which is Theorem 1 in [9]).

Theorem 3

Suppose that $\boldsymbol{\gamma}^*_V$ is unique, that the slackness condition (2) holds, and that the dual function $g(\boldsymbol{\gamma})$ satisfies:

$$g(\boldsymbol{\gamma}^*_V) \geq g(\boldsymbol{\gamma}) + L \|\boldsymbol{\gamma}^*_V - \boldsymbol{\gamma}\|, \quad \forall\, \boldsymbol{\gamma} \succeq \mathbf{0}, \qquad (12)$$

for some constant $L > 0$ independent of $V$. Then under Backpressure with FIFO or LIFO, there exist constants $D, K, c^* = \Theta(1)$, i.e., all independent of $V$, such that for any $m \geq 0$,

$$\mathcal{P}(D, m) \leq c^* e^{-m/K},$$

where $\mathcal{P}(D, m)$ is defined:

$$\mathcal{P}(D, m) \triangleq \limsup_{t \to \infty} \frac{1}{t} \sum_{\tau=0}^{t-1} \Pr\Big\{ \exists\, j, \; \big| q_j(\tau) - \gamma^*_{Vj} \big| > D + m \Big\}.$$
See [9].

Note that if a steady state distribution exists for $\mathbf{q}(t)$, e.g., when all queue sizes are integers, then $\mathcal{P}(D, m)$ is indeed the steady state probability that there exists a queue whose backlog deviates from its corresponding $\gamma^*_{Vj}$ by more than $D + m$ distance. In this case, Theorem 3 states that $q_j(t)$ deviates from $\gamma^*_{Vj}$ by $D + m$ distance with probability $O(e^{-m/K})$. Hence when $V$ is large, $\mathbf{q}(t)$ will mostly be within $O([\log(V)]^2)$ distance from $\boldsymbol{\gamma}^*_V$. Also note that the conditions of Theorem 3 are not very restrictive. The condition (12) can usually be satisfied in practice when the action space is finite, in which case the dual function is polyhedral (see [9] for more discussion). The uniqueness of $\boldsymbol{\gamma}^*_V$ can usually be satisfied in many network utility optimization problems, e.g., [2].
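To make the “tiny fraction” concrete, writing the attraction bound of Theorem 3 as $\mathcal{P}(D, m) \leq c^* e^{-m/K}$ (the form used in the delay argument below) and choosing the deviation $m = K[\log(V)]^2$ gives:

```latex
\mathcal{P}\big(D,\, K[\log(V)]^2\big)
\;\le\; c^* e^{-K[\log(V)]^2 / K}
\;=\; c^* e^{-[\log(V)]^2}
\;=\; c^* V^{-\log(V)},
```

which decays faster than any polynomial in $1/V$; this is why the queue spends almost all of its time within $D + K[\log(V)]^2$ of the attractor.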

We now present the main result of this paper with respect to the delay performance of LIFO-Backpressure. Below, the notion of “average arrival rate” is defined as follows: Let $A_j(t)$ be the number of packets entering queue $j$ at time $t$. Then the time average arrival rate $\lambda_j$ of these packets is defined (assuming it exists): $\lambda_j \triangleq \lim_{t \to \infty} \frac{1}{t} \sum_{\tau=0}^{t-1} A_j(\tau)$. For the theorem, we assume that time averages under Backpressure exist with probability 1. This is a reasonable assumption, and holds whenever the resulting discrete time Markov chain for the queue vector $\mathbf{q}(t)$ under Backpressure is countably infinite and irreducible. Note that the state space is indeed countably infinite if we assume packets take integer units. If the system is also irreducible, then the finite average backlog result of Theorem 2 implies that all states are positive recurrent.

Let $D, K, c^*$ be the constants defined in Theorem 3, and recall that these are all $\Theta(1)$ (independent of $V$). Assume $V \geq 1$, and define $Q_{\text{low}}^j$ and $Q_{\text{high}}^j$ as:

$$Q_{\text{low}}^j \triangleq \gamma^*_{Vj} - D - K[\log(V)]^2, \qquad Q_{\text{high}}^j \triangleq \gamma^*_{Vj} + D + K[\log(V)]^2.$$

Define the interval $\mathcal{Q}_j \triangleq [Q_{\text{low}}^j, Q_{\text{high}}^j]$. The following theorem considers the rate and delay of packets that enter queue $j$ when $q_j(t) \in \mathcal{Q}_j$ and that eventually depart.

Theorem 4

Suppose that $V \geq 1$, that $\boldsymbol{\gamma}^*_V$ is unique, that the slackness assumption (2) holds, and that the dual function $g(\boldsymbol{\gamma})$ satisfies:

$$g(\boldsymbol{\gamma}^*_V) \geq g(\boldsymbol{\gamma}) + L \|\boldsymbol{\gamma}^*_V - \boldsymbol{\gamma}\|, \quad \forall\, \boldsymbol{\gamma} \succeq \mathbf{0},$$

for some constant $L > 0$ independent of $V$. Define $D, K, c^*$ as in Theorem 3, and define $\mathcal{Q}_j$ as above. Then for any queue $j$ with a time average input rate $\lambda_j > 0$, we have under LIFO-Backpressure that:

(a) The rate $\tilde{\lambda}_j$ of packets that both arrive to queue $j$ when $q_j(t) \in \mathcal{Q}_j$ and that eventually depart the queue satisfies:

$$\tilde{\lambda}_j \geq \big[ \lambda_j - \delta_{\max} c^* V^{-\log(V)} \big]^+. \qquad (16)$$

(b) The average delay of these packets is at most $\bar{W}$, where:

$$\bar{W} \triangleq \big[ 2D + 2K[\log(V)]^2 + \delta_{\max} \big] \big/ \tilde{\lambda}_j.$$
This theorem says that the delay of packets that enter when $q_j(t) \in \mathcal{Q}_j$ and that eventually depart is at most $\bar{W}$. Further, by (16), when $V$ is large, these packets represent the overwhelming majority, in that the rate of packets not in this set is at most $O(V^{-\log(V)})$.


(Theorem 4) Theorem 2 shows that the average queue backlog is finite. Thus, there can be at most a finite number of packets that enter the queue and never depart, so the rate of packets arriving that never depart must be $0$. It follows that $\tilde{\lambda}_j$ is equal to the rate at which packets arrive when $q_j(t) \in \mathcal{Q}_j$. Define the indicator function $1_j(\tau)$, which is $1$ if $q_j(\tau) \notin \mathcal{Q}_j$ and $0$ else, and let $\hat{\lambda}_j \triangleq \lambda_j - \tilde{\lambda}_j$ denote the rate of packets arriving when $q_j(\tau) \notin \mathcal{Q}_j$. Then with probability 1 we get (the time average expectation is the same as the pure time average by the Lebesgue Dominated Convergence Theorem, because we assume the pure time average exists with probability 1 and that $A_j(\tau)$ is deterministically bounded):

$$\hat{\lambda}_j = \lim_{t \to \infty} \frac{1}{t} \sum_{\tau=0}^{t-1} \mathbb{E}\big\{ A_j(\tau) 1_j(\tau) \big\}.$$

Then using the fact that $A_j(\tau) \leq \delta_{\max}$ for all $\tau$:

$$\hat{\lambda}_j \leq \delta_{\max} \limsup_{t \to \infty} \frac{1}{t} \sum_{\tau=0}^{t-1} \Pr\big\{ q_j(\tau) \notin \mathcal{Q}_j \big\} \leq \delta_{\max} \, \mathcal{P}\big(D, K[\log(V)]^2\big),$$

where the last inequality uses the definition of $\mathcal{P}(D, m)$ with $m = K[\log(V)]^2$. From Theorem 3 we thus have:

$$\hat{\lambda}_j \leq \delta_{\max} c^* e^{-[\log(V)]^2} = \delta_{\max} c^* V^{-\log(V)}.$$

This completes the proof of part (a). Now define $\mathcal{B}$ as the set of buffer slots in $[Q_{\text{low}}^j, Q_{\text{high}}^j + \delta_{\max}]$. Since packets always occupy the same buffer slot under LIFO, we see that the rate of the packets that enter $\mathcal{B}$ is at least $\tilde{\lambda}_j$. Part (b) then follows from Theorem 1 and the facts that queue $j$ is stable and that $|\mathcal{B}| \leq 2D + 2K[\log(V)]^2 + \delta_{\max}$.

Note that if $\lambda_j = \Theta(1)$, we see from Theorem 4 that, under LIFO-Backpressure, the time average delay for almost all packets going through queue $j$ is only $O([\log(V)]^2)$. Applying this argument to all network queues with $\Theta(1)$ input rates, we see that all but a tiny fraction of the traffic entering the network experiences a delay of only $O([\log(V)]^2)$. This contrasts with the delay performance of the usual Backpressure with FIFO, under which the time average delay is $\Theta(V)$ for all packets [9]. Also note that under LIFO-Backpressure, some packets may stay in the queue for a very long time. This problem can be compensated for by introducing certain coding techniques, e.g., fountain codes [18], into the LIFO-Backpressure algorithm.

VII Simulation

In this section, we provide simulation results of the LIFO-Backpressure algorithm. We consider the network shown in Fig. 4, where we try to support a flow sourced by Node destined for Node with minimum energy consumption.

Fig. 4: A multihop network. represents the probability and the rate obtained with one unit of power when .

We assume that evolves according to the 2-state Markov chain in Fig. 5. When the state is , , else . We assume that the condition of each link can either be or at a time. All the links except link and link are assumed to be i.i.d. every time slot, whereas the conditions of link and link are assumed to be evolving according to independent 2-state Markov chains in Fig. 5. Each link’s probability and unit power rate at the state is shown in Fig. 4. The unit power rates of the links at the state are all assumed to be . We assume that the link states are all independent and there is no interference. However, each node can only spend one unit of power per slot to transmit over one outgoing link, although it can simultaneously receive from multiple incoming links. The goal is to minimize the time average power while maintaining network stability.

Fig. 5: The two state Markov chain with the transition probabilities.

We simulate Backpressure with both LIFO and FIFO. It can be verified that the backlog vector converges to a unique attractor as $V$ increases in this case. The left two plots in Fig. 6 show the average power consumption and the average backlog under LIFO-Backpressure. It can be observed that the average power quickly converges to the optimal value and that the average backlog grows linearly in $V$. The right plot of Fig. 6 shows the percentage of time during which there exists a queue whose backlog deviates from its attractor value by more than the predicted $\Theta([\log(V)]^2)$ distance. As we can see, this percentage is always very small, showing a good match between the theory and the simulation results.

Fig. 6: LEFT: average network power consumption. MIDDLE: average network backlog size. RIGHT: percentage of time when such that .

Fig. 7 compares the delay statistics of LIFO and FIFO for the packets that leave the system before the simulation ends (the vast majority), under two values of $V$. We see that LIFO not only dramatically reduces the average packet delay for these packets, but also greatly reduces the delay for most of them. For instance, in the larger-$V$ case, almost all packets under FIFO experience roughly the average delay, whereas under LIFO the average packet delay is brought down dramatically, and the overwhelming majority of the packets experience delay well below the FIFO average. Hence most packets' delays are reduced by a large factor under LIFO as compared to FIFO!

Fig. 7: Delay statistics under Backpressure with LIFO and FIFO for packets that leave the system before the simulation ends. The curves show the percentage of packets that enter the network and have delay less than the corresponding value.

Fig. 8 also shows the delay for an initial set of packets that enter the network. We see that under Backpressure with LIFO, most of the packets experience very small delay, while under Backpressure with FIFO, each packet experiences roughly the average delay.

Fig. 8: Packet Delay under Backpressure with LIFO and FIFO

VIII Empirical Validation

In this section we validate our analysis empirically by carrying out new experiments over the same testbed and Backpressure Collection Protocol (BCP) code of [10]. This prior work did not empirically evaluate the relationship between $V$, finite storage availability, packet latency, and packet discard rate. We note that BCP runs atop the default CSMA MAC for TinyOS, which is not known to be throughput optimal; that the testbed may not precisely be defined by a finite state Markovian evolution; and finally that limited storage availability on real wireless sensor nodes mandates the introduction of virtual queues to maintain backpressure values in the presence of data queue overflows.

In order to avoid using very large data buffers, the forwarding queue of BCP was implemented in [10] as a floating queue. The concept of a floating queue is shown in Figure 10: it operates with a finite data queue residing atop a virtual queue which preserves backpressure levels. Packets that arrive to a full data queue result in a data queue discard and the incrementing of the underlying virtual queue counter. Underflow events (in which a virtual backlog exists but the data queue is empty) result in null packet generation; these null packets are filtered and then discarded by the destination.
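A minimal sketch of such a floating queue (our own simplification with hypothetical names, not the BCP source; the overflow policy here evicts the oldest buffered packet, one of several reasonable choices):

```python
class FloatingLIFOQueue:
    """Finite LIFO data queue atop a virtual counter that preserves backlog."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.stack = []      # physical data queue (top = most recent)
        self.virtual = 0     # discarded packets remembered as pure backlog
        self.drops = 0

    def push(self, pkt):
        if len(self.stack) == self.capacity:
            self.stack.pop(0)       # data queue discard (oldest packet)
            self.drops += 1
            self.virtual += 1       # keep the backpressure level intact
        self.stack.append(pkt)

    def pop(self):
        if self.stack:
            return self.stack.pop()
        if self.virtual > 0:        # underflow: backlog exists but no data
            self.virtual -= 1
            return "NULL"           # null packet, filtered at the sink
        return None

    def backlog(self):
        return len(self.stack) + self.virtual   # value Backpressure sees
```

Pushing three packets into a 2-slot queue drops one but leaves the backlog at 3, so routing decisions are unaffected by the finite physical buffer.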

Despite these real-world differences, we are able to demonstrate clear order-equivalent delay gains due to LIFO usage in BCP in the following experimentation.

Fig. 9: The 40 tMote Sky devices used in experimentation on Tutornet.

VIII-A Testbed and General Setup

To demonstrate the empirical results, we deployed a collection scenario across 40 nodes within the Tutornet testbed (see Figure 9). This deployment consisted of Tmote Sky devices embedded in the 4th floor of Ronald Tutor Hall at the University of Southern California.

In these experiments, one sink mote (ID 1 in Figure 9) was designated and the remaining 39 motes sourced traffic simultaneously, to be collected at the sink. The Tmote Sky devices were programmed to operate on 802.15.4 channel 26, selected for the low external interference in this spectrum on Tutornet. Further, the motes were programmed to transmit at -15 dBm to provide reasonable interconnectivity. These experimental settings are identical to those used in [10].

Fig. 10: The floating LIFO queues of [10] drop from the data queue during overflow, placing the discards within an underlying virtual queue. Services that cause data queue underflows generate null packets, reducing the virtual queue size.

We vary $V$ over the experiments. In practice, BCP's default forwarding queue setting is the maximum reasonable resource allocation for a packet forwarding queue in these highly constrained devices.

VIII-B Experiment Parameters

Experiments consisted of Poisson traffic at 1.0 packets per second per source for a duration of 20 minutes. This source load is moderately high, as the boundary of the capacity region for BCP running on this subset of motes has previously been documented at 1.6 packets per second per source [10]. A total of 36 experiments were run using the standard BCP LIFO queue mechanism, for all combinations of the $V$ and LIFO storage threshold settings. In order to present a delay baseline for Backpressure, we additionally modified the BCP source code and ran experiments with 32-packet FIFO queues (no floating queues). (These relatively small values are due to the constraint that the motes have small data buffers; using larger values would cause buffer overflow at the motes.)

VIII-C Results

Testbed results in Figure 11 provide the system average packet delay from source to sink over the tested $V$ settings, and include 95% confidence intervals. Delay in our FIFO implementation scales linearly with $V$, as predicted by the analysis in [9]. This yields an average delay that grows very rapidly with $V$, already greater than 9 seconds per packet at the largest $V$ tested. Meanwhile, the LIFO floating queue of BCP performs much differently. We have plotted a scaled $[\log(V)]^2$ target, and note that as $V$ increases the average packet delay remains bounded by this poly-logarithmic curve.

Fig. 11: System average source to sink packet delay for BCP FIFO versus BCP LIFO implementation over various V parameter settings.
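The qualitative gap between the two curves in Figure 11 can be illustrated numerically. The sketch below is purely illustrative, with arbitrary constants: FIFO Backpressure delay grows like $O(V)$, while LIFO-Backpressure delay is bounded by $O([\log(V)]^2)$, so the ratio between them widens without bound as $V$ grows.

```python
import math

# Illustrative scaling only: FIFO Backpressure delay grows like O(V),
# while LIFO-Backpressure delay is bounded by O([log(V)]^2).
# The constants c are arbitrary, chosen purely for illustration.
def fifo_delay(V, c=1.0):
    return c * V

def lifo_delay(V, c=1.0):
    return c * math.log(V) ** 2

# The FIFO/LIFO delay ratio increases as V grows.
ratio_small = fifo_delay(2) / lifo_delay(2)
ratio_large = fifo_delay(128) / lifo_delay(128)
```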

These delay gains are only possible as a result of discards made by the LIFO floating queue mechanism, which occur when the queue size fluctuates beyond the capability of the finite data queue to smooth. Figure 12 gives the system packet loss rate of BCP's LIFO floating queue mechanism over the tested $V$ settings. Note that the poly-logarithmic delay performance of Figure 11 is achieved even for data queue size 12, which itself drops at most 5% of traffic at the largest $V$ tested. We cannot state conclusively from these results that the drop rate scales as the theory predicts. We hypothesize that larger $V$ values would be required in order to observe the predicted drop rate scaling. Bringing these results back to real-world implications, note that BCP (which minimizes a penalty function of packet retransmissions) performs very poorly at the smallest $V$ settings, and was found to have minimal penalty improvement for $V$ greater than 2. At this low $V$ value, BCP's 12-packet forwarding queue demonstrates zero packet drops in the results presented here. These experiments, combined with those of [10], strongly suggest that the drop rate scaling may be inconsequential in many real-world applications.

Fig. 12: System packet loss rate of BCP LIFO implementation over various V parameter settings.

In order to explore the queue backlog characteristics and compare with our analysis, Figure 13 presents a histogram of queue backlog frequency for rear-network node 38 over various $V$ settings. This node was observed to have the worst queue size fluctuations among all thirty-nine sources. For the smallest $V$ setting, the queue backlog is very sharply distributed, deviating outside a narrow range around its attractor for only 5.92% of the experiment. As $V$ is increased, the queue attraction is evident. For the largest $V$ setting, the queue deviates outside a range 2.8 times as wide for only 5.41% of the experiment. The queue deviation is clearly scaling sub-linearly: a four-fold increase in $V$ required only a 2.8-fold increase in the deviation range for comparable drop performance.

Fig. 13: Histogram of queue backlog frequency for rear-network-node 38 over various V settings.
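This sub-linear growth is consistent with a $[\log(V)]^2$-sized deviation range. As a quick arithmetic check (with arbitrary illustrative constants, not the testbed's actual parameters), a four-fold increase in $V$ multiplies $(\log V)^2$ by well under four once $V$ is moderately large:

```python
import math

# Illustration: a [log V]^2 deviation bound grows sub-linearly in V.
# Constants are arbitrary; only the growth ratio matters here.
def deviation_bound(V):
    return math.log(V) ** 2

# Four-fold increase in V (16 -> 64) grows the bound by (6/4)^2 = 2.25,
# far below the 4x growth that a linear-in-V bound would give.
growth = deviation_bound(64) / deviation_bound(16)
```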

IX Optimizing Functions of Time Averages

So far we have focused on optimizing time averages of functions; we now consider the case when the objective of the network controller is to optimize a function of some time average metric, e.g., [15]. Specifically, we assume that the action at time $t$ incurs some instantaneous network attribute vector $x(t)$, and the objective of the network controller is to minimize a cost function $f(\overline{x})$, where $\overline{x}$ represents the time average value of $x(t)$. (The case of maximizing a utility function of long term averages can be treated in a similar way.) We assume that the function $f(\cdot)$ is continuous, convex, and component-wise increasing, and that $x(t)$ is bounded for all $t$. In this case, we see that the Backpressure algorithm in Section V cannot be directly applied, and the deterministic problem (8) also needs to be modified.

To tackle this problem using the Backpressure algorithm, we introduce an auxiliary vector $\gamma(t)$. We then define virtual queues $H_j(t)$ that evolve as follows:

$$H_j(t+1) = \max\big[H_j(t) - \gamma_j(t),\, 0\big] + x_j(t).$$

These virtual queues are introduced to ensure that the time average value of $\gamma_j(t)$ is no less than the time average value of $x_j(t)$. We will then try to minimize the time average of the function $f(\gamma(t))$, subject to the constraint that the actual queues and the virtual queues must all be stable. Specifically, the Backpressure algorithm for this problem works as follows:
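The stability argument behind the virtual queue can be checked with a small simulation. The sketch below uses our own scalar notation and assumes the update $H(t+1)=\max[H(t)-\gamma(t),0]+x(t)$: when the average of $\gamma(t)$ exceeds the average of $x(t)$, the queue stays small; when the inequality is reversed, it grows linearly.

```python
import random

# Sketch (our own notation): one virtual queue with update
# H(t+1) = max[H(t) - gamma(t), 0] + x(t). Stability of H implies
# the time average of gamma is at least the time average of x.
def run_virtual_queue(T, x_fn, gamma_fn, seed=0):
    rng = random.Random(seed)
    H = 0.0
    for _ in range(T):
        x, gamma = x_fn(rng), gamma_fn(rng)
        H = max(H - gamma, 0.0) + x
    return H

# E[x] = 0.5 < E[gamma] = 0.6: negative drift keeps H small.
H_stable = run_virtual_queue(10000, lambda r: r.random(),
                             lambda r: 0.2 + 0.8 * r.random())
# E[x] = 0.6 > E[gamma] = 0.5: positive drift makes H grow with T.
H_unstable = run_virtual_queue(10000, lambda r: 0.2 + 0.8 * r.random(),
                               lambda r: r.random())
```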

Backpressure: At every time slot $t$, observe the current network state $S(t)$ and the backlogs $q(t)$ and $H(t)$, and do the following:

  1. Auxiliary vector: choose the vector $\gamma(t)$ by solving:

$$\min_{\gamma:\; 0\leq \gamma_j \leq \gamma_{\max}}\quad Vf(\gamma) \;-\; \sum_j H_j(t)\,\gamma_j.$$

  2. Action: choose the action that minimizes $\sum_j H_j(t)\,x_j(t)$ plus the usual Backpressure terms for the actual queues $q(t)$.
In this case, one can also show that this Backpressure algorithm achieves the same utility-delay tradeoff under Markovian dynamics. We also note that in this case the deterministic problem is slightly different. Indeed, the intuitive formulation is of the following form:


However, the dual problem of this optimization problem is not separable, i.e., not of the form of (10), unless the function $f$ is linear or there exists an optimal action that lies in every feasible action set, e.g., [15]. To circumvent this problem, we introduce the auxiliary vector $\gamma$ and change the problem to:


It can be shown that this modified problem is equivalent to (21). We thus see that it is precisely the non-separable structure of (21) that necessitates introducing the auxiliary vector $\gamma$ in the Backpressure algorithm. We also note that problem (22) has the form of (8); therefore, all previous results on (8), e.g., Theorems 3 and 4, also apply to problem (22).
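The equivalence can be sketched as follows (in our own notation, writing (21) abstractly as minimizing $f(x)$ over a feasible set $\mathcal{X}$, and (22) as its auxiliary-variable version); it rests entirely on $f$ being component-wise increasing:

```latex
% Sketch: the auxiliary vector does not change the optimal value
% when f is component-wise increasing.
%   (21):  \min_{x}        \; f(x)       \text{ s.t. } x \in \mathcal{X}
%   (22):  \min_{x,\gamma} \; f(\gamma)  \text{ s.t. } \gamma \ge x,\; x \in \mathcal{X}
% (i)  Any feasible x for (21) yields the feasible pair (x,\gamma)=(x,x)
%      for (22) with the same cost, so  opt(22) \le opt(21).
% (ii) For any feasible (x,\gamma) of (22), \gamma \ge x and f increasing
%      give  f(\gamma) \ge f(x) \ge opt(21), so  opt(22) \ge opt(21).
% Hence opt(21) = opt(22), attained at \gamma^* = x^*.
```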

Appendix A – Proof of Theorem 1

Here we provide the proof of Theorem 1.

Proof: Consider a sample path along which the arrival rate satisfies the assumed lower bound and along which we have an infinite number of departures (this happens with probability 1 by assumption). There must be a non-empty subset of buffer locations that experience an infinite number of departures. Call this subset $\mathcal{B}$. Now let $W_k^b$ be the delay of the $k$-th departure from buffer slot $b$, let $D_b(t)$ denote the number of departures from buffer slot $b$ up to time $t$, and use $\mathbb{1}_b(t)$ to denote the occupancy of buffer slot $b$ at time $t$. Note that $\mathbb{1}_b(t)$ is either $0$ or $1$. For all $b \in \mathcal{B}$, it can be shown that:

$$\sum_{k=1}^{D_b(t)} W_k^b \;\leq\; \int_0^t \mathbb{1}_b(\tau)\,d\tau \;\leq\; t. \qquad (23)$$
This can be seen from Fig. 14 below.

Fig. 14: An illustration of inequality (23) for a particular buffer location $b$.
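The intuition behind inequality (23) can also be checked by simulation. In the sketch below (our own notation and arbitrary arrival/holding distributions), a single buffer slot holds at most one job at a time; each departure's delay equals the length of its occupancy interval, and since those intervals are disjoint within $[0, t]$, their sum can never exceed $t$.

```python
import random

# Sketch: one buffer slot, at most one job at a time. The delay of the
# k-th departure equals the k-th occupancy interval's length; disjoint
# intervals imply sum of delays up to time t is at most t (ineq. (23)).
def check_slot_delay_bound(t_end=1000.0, seed=1):
    rng = random.Random(seed)
    now, delays = 0.0, []
    while True:
        now += rng.expovariate(1.0)   # idle gap before the next job arrives
        hold = rng.expovariate(0.5)   # time the job occupies the slot
        if now + hold > t_end:
            break                     # job would depart after time t_end
        delays.append(hold)           # delay of this departure
        now += hold
    return sum(delays), t_end

total_delay, t = check_slot_delay_bound()
```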

Therefore, summing over $b \in \mathcal{B}$, we have:

$$\sum_{b\in\mathcal{B}}\; \sum_{k=1}^{D_b(t)} W_k^b \;\leq\; t\,|\mathcal{B}|.$$
The left-hand side of the above inequality is equal to the sum of all delays of jobs that depart from locations in $\mathcal{B}$ up to time $t$. All other buffer locations (in the buffer but not in $\mathcal{B}$) experience only a finite number of departures. Let $\mathcal{I}$ be an index set indexing all of the (finitely many) jobs that depart from these other locations. Note that the delay $W_i$ of each job $i \in \mathcal{I}$ is finite (because, by definition, job $i$ eventually departs). We thus have: