Optimal queue-size scaling in switched networks

Massachusetts Institute of Technology, University of Amsterdam and Columbia University

D. Shah
Department of EECS
Massachusetts Institute of Technology
Cambridge, Massachusetts 02139
USA
devavrat@mit.edu

N. Walton
Korteweg-de Vries Institute for Mathematics
University of Amsterdam
1090 GE Amsterdam
The Netherlands
n.s.walton@uva.nl

Y. Zhong
Department of IEOR
Columbia University
New York, New York 10027
USA
yz2561@columbia.edu
Received October 2011; revised December 2012.
Abstract

We consider a switched (queuing) network in which there are constraints on which queues may be served simultaneously; such networks have been used to effectively model input-queued switches and wireless networks. The scheduling policy for such a network specifies which queues to serve at any point in time, based on the current state or past history of the system. In the main result of this paper, we provide a new class of online scheduling policies that achieve optimal queue-size scaling for a class of switched networks including input-queued switches. In particular, our result establishes the validity of a conjecture (documented in Shah, Tsitsiklis and Zhong [Queueing Syst. 68 (2011) 375–384]) about optimal queue-size scaling for input-queued switches.

DOI: 10.1214/13-AAP970. Annals of Applied Probability, Volume 24, Issue 6 (2014), pages 2207–2245.

Running title: Optimal scheduling


D. Shah (devavrat@mit.edu), N. S. Walton (n.s.walton@uva.nl) and Y. Zhong (yz2561@columbia.edu). Supported by the NSF TF collaborative project and the NSF CNS CAREER project. When this work was performed, the third author was affiliated with the Laboratory for Information and Decision Systems as well as the Operations Research Center at MIT. The third author is now affiliated with the Department of Industrial Engineering and Operations Research at Columbia University.

AMS subject classifications: 60K25, 60K30, 90B36.
Keywords: switched network, maximum weight scheduling, fluid models, state space collapse, heavy traffic, diffusion approximation.

1 Introduction

A switched network consists of a collection of, say, $N$ queues, operating in discrete time. At each time slot, queues are offered service according to a service schedule chosen from a specified finite set, denoted by $\mathcal{S}$. The rule for choosing a schedule from $\mathcal{S}$ at each time slot is called the scheduling policy. New work may arrive to each queue at each time slot exogenously, and work served from a queue may join another queue or leave the network. We shall restrict our attention, however, to the case where work arrives in the form of unit-sized packets and, once served from a queue, leaves the network; that is, the network is single-hop.

Switched networks are special cases of what Harrison harrisoncanonical (), harrisoncanonicalcorr () calls “stochastic processing networks.” Switched networks are general enough to model a variety of interesting applications. For example, they have been used to effectively model input-queued switches, the devices at the heart of high-end Internet routers, whose underlying silicon architecture imposes constraints on which traffic streams can be transmitted simultaneously daibala (). They have also been used to model multihop wireless networks in which interference limits the amount of service that can be given to each host tassiula1 (). Finally, they can be instrumental in finding the right operational point in a data center SWo ().

In this paper, we consider online scheduling policies, that is, policies that only utilize historical information (i.e., past arrivals and scheduling decisions). The performance objective of interest is the total queue size, or total number of packets waiting to be served in the network on average (appropriately defined). The questions that we wish to answer are: (a) what is the minimal value of the performance objective among the class of online scheduling policies, and (b) how does it depend on the network structure $\mathcal{S}$, as well as on the effective load?

Consider a work-conserving queue with a unit-rate server in which unit-sized packets arrive as a Poisson process with rate $\rho \in (0,1)$. Then, the long-run average queue size scales as $\rho/(1-\rho)$. (In this paper, by the scaling of a quantity we mean its dependence, ignoring universal constants, on the load $\rho$ and/or the number of queues $N$, as these quantities become large; of particular interest is the regime $\rho \to 1$ followed by $N \to \infty$, in that order.) Such scaling dependence of the average queue size on $1/(1-\rho)$ (the inverse of the gap, $1-\rho$, from the load to the capacity) is a universally observed behavior in a large class of queuing networks. In a switched network, the scaling of the average total queue size ought to depend on the number of queues, $N$. For example, consider $N$ parallel queues as described above. Clearly, the average total queue size will scale as $N\rho/(1-\rho)$. On the other hand, consider a variation where all of these queues pool their resources into a single server that works $N$ times faster. Equivalently, by a time change, let each of the $N$ queues receive packets as an independent Poisson process of rate $\rho/N$, and at each time let a common unit-rate server serve a packet from one of the nonempty queues. Then, the average total queue size scales as $\rho/(1-\rho)$. Indeed, these are instances of switched networks that differ in their scheduling set $\mathcal{S}$, which leads to different queue-size scalings. Therefore, a natural question is the determination of queue-size scaling in terms of $\mathcal{S}$ and $\rho$, where $\rho$ is the effective load. In the context of an $n$-port input-queued switch with $N = n^2$ queues, the optimal scaling of the average total queue size has been conjectured to be $n/(1-\rho)$, that is, $\sqrt{N}/(1-\rho)$ STZopen ().
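The two scalings above can be made concrete with a minimal numeric sketch (ours, not the paper's): the classical mean queue size $\rho/(1-\rho)$ summed over $N$ separate queues, versus a single pooled queue at the same load. All function names are our own.

```python
# Hypothetical illustration of the two scalings discussed above.

def mm1_mean_queue(rho: float) -> float:
    """Long-run mean number in system for a single queue at load rho < 1."""
    assert 0.0 <= rho < 1.0
    return rho / (1.0 - rho)

def total_parallel(n: int, rho: float) -> float:
    # N independent queues, each at load rho: mean totals simply add.
    return n * mm1_mean_queue(rho)

def total_pooled(rho: float) -> float:
    # All queues pooled into one server running N times faster:
    # equivalently, a single queue at the same load rho.
    return mm1_mean_queue(rho)

if __name__ == "__main__":
    n, rho = 100, 0.99
    print(total_parallel(n, rho))   # grows like N/(1 - rho)
    print(total_pooled(rho))        # grows like 1/(1 - rho)
```

For $\rho$ close to $1$, the parallel total is $N$ times the pooled total, which is the gap between the two systems discussed above.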

As the main result of this paper, we propose a new online scheduling policy for any single-hop switched network. This policy effectively emulates an insensitive bandwidth-sharing network with a product-form stationary distribution, with each component of this product form behaving like an M/M/1 queue. This crisp description of the stationary distribution allows us to obtain precise bounds on the average queue sizes under this policy. As a corollary of our result, this establishes the validity of a conjecture stated in STZopen () for input-queued switches. In general, it provides explicit bounds on the average total queue size for any switched network. Furthermore, due to the explicit bound on the stationary distribution of queue sizes under our policy, we are able to establish a form of large-deviations optimality of the policy for a large class of single-hop switched networks, including input-queued switches, and the independent-set model of wireless networks when the underlying interference graph is, for example, bipartite or, more generally, perfect.

The conjecture from STZopen () that we settle in this paper states that in the heavy-traffic regime (i.e., $\rho \to 1$), the optimal average total queue size for an $n$-port input-queued switch scales as $n/(1-\rho)$. Establishing this conjecture is a significant improvement over the best-known bounds of $n^2/(1-\rho)$ (due to the moment bounds of MT93 () for the maximum weight policy) or $n/(1-\rho)^2$ (obtained by using a batching policy NeelyModiano ()).

Our analysis consists of two principal components. First, we propose and analyze a scheduling mechanism that is able to emulate, in discrete time, any continuous-time bandwidth allocation within a bounded degree of error. This scheduler maintains an idealized continuous-time queuing process and tracks its own queue-size process. If, evaluated under a certain decomposition, the gap between the idealized continuous-time process and the real queuing process becomes too large, then an appropriate schedule is allocated. Second, we implement a specific bandwidth allocation policy named the store-and-forward allocation (SFA) policy. This policy was first considered by Massoulié and was subsequently discussed in the thesis of Proutière PTh (), Section 3.4. It was shown to be insensitive with respect to phase-type service distributions in works by Bonald and Proutière bonaldproutiere1 (), bonaldproutiere2 (). The insensitivity of this policy for general service distributions was established by Zachary zachary (). The store-and-forward policy is closely related to the classical product-form multi-class queuing networks, which have highly desirable queue-size scalings. By emulating these queuing networks, we are able to translate their queue-size bounds into optimal queue-size bounds for a switched network. An interested reader is referred to walton () and KMW () for an in-depth discussion of the relation between this policy, the proportionally fair allocation, and multi-class queuing networks.

1.1 Organization

In Section 2, we specify a stochastic switched network model. In Section 3, we discuss related works. Section 4 details the necessary background on the insensitive store-and-forward bandwidth allocation (SFA) policy. The main result of the paper is presented and proved in Section 5. We first describe the policy for single-hop switched networks, and state our main result, Theorem 5.2. This is followed by a discussion of the optimality of the policy. We then provide a proof of Theorem 5.2. A discussion of directions for future work is provided in Section 6.

Notation

Let $\mathbb{N}$ be the set of natural numbers $\{1, 2, \ldots\}$, let $\mathbb{N}_0 = \mathbb{N} \cup \{0\}$, let $\mathbb{R}$ be the set of real numbers, and let $\mathbb{R}_+ = \{x \in \mathbb{R} : x \ge 0\}$. Let $\mathbb{1}_{\{E\}}$ be the indicator function of an event $E$. Let $[x]^+ = \max(x, 0)$ and $x \vee y = \max(x, y)$. When $\mathbf{x}$ is a vector, the maximum is taken componentwise.

We will reserve bold letters for vectors in $\mathbb{R}^N$, where $N$ is the number of queues. For example, $\mathbf{u} = (u_1, \ldots, u_N)$. Superscripts on vectors are used to denote labels, not exponents, except where otherwise noted; thus, for example, $\mathbf{u}^1, \mathbf{u}^2, \mathbf{u}^3$ refer to three arbitrary vectors. Let $\mathbf{0}$ be the vector of all $0$s and $\mathbf{1}$ the vector of all $1$s. The vector $\mathbf{e}^n$ is the $n$th unit vector, with all components equal to $0$ but the $n$th component equal to $1$. We use the norm $\|\mathbf{u}\| = \max_n |u_n|$. For vectors $\mathbf{u}$ and $\mathbf{v}$, we let $\mathbf{u} \cdot \mathbf{v} = \sum_{n=1}^N u_n v_n$. Let $A^T$ be the transpose of a matrix $A$. For a set $S \subset \mathbb{R}^N$, denote its convex hull by $\langle S \rangle$. For $k \in \mathbb{N}_0$, let $k!$ be the factorial of $k$, with the convention $0! = 1$.

2 Switched network model

We now introduce the switched network model. Section 2.1 describes the general system model, Section 2.2 lists the probabilistic assumptions about the arrival process and Section 2.3 introduces some useful definitions.

2.1 Queueing dynamics

Consider a collection of $N$ queues. Let time be discrete, and indexed by $\tau \in \{0, 1, \ldots\}$. Let $Q_n(\tau)$ be the amount of work in queue $n$ at time slot $\tau$. Following our general notation for vectors, we write $\mathbf{Q}(\tau)$ for $(Q_n(\tau))_{n=1}^N$. The initial queue sizes are $\mathbf{Q}(0)$. Let $A_n(\tau)$ be the total amount of work arriving to queue $n$, and $B_n(\tau)$ the cumulative potential service to queue $n$, up to time $\tau$, with $A_n(0) = B_n(0) = 0$.

We first define the queuing dynamics for a single-hop switched network. Defining $dA_n(\tau) = A_n(\tau) - A_n(\tau-1)$ and $dB_n(\tau) = B_n(\tau) - B_n(\tau-1)$, the basic Lindley recursion that we will consider is

(1)  $\mathbf{Q}(\tau) = [\mathbf{Q}(\tau-1) - d\mathbf{B}(\tau)]^+ + d\mathbf{A}(\tau),$

where the operation $[\,\cdot\,]^+$ is applied componentwise. The fundamental switched network constraint is that there is some finite set $\mathcal{S}$ such that

(2)  $d\mathbf{B}(\tau) \in \mathcal{S} \qquad \text{for all } \tau \ge 1.$

For the purpose of this work, we shall focus on $\mathcal{S} \subset \{0,1\}^N$. We will refer to $\bm{\sigma} \in \mathcal{S}$ as a schedule and to $\mathcal{S}$ as the set of allowed schedules. In the applications in this paper, the schedule is chosen based on current queue sizes, which is why it is natural to write the basic Lindley recursion as (1) rather than the more standard $\mathbf{Q}(\tau) = [\mathbf{Q}(\tau-1) + d\mathbf{A}(\tau) - d\mathbf{B}(\tau)]^+$.

For the analysis in this paper, it is useful to keep track of two other quantities. Let $Z_n(\tau)$ be the cumulative amount of idling at queue $n$, defined by $Z_n(0) = 0$ and

(3)  $Z_n(\tau) = Z_n(\tau-1) + \bigl[dB_n(\tau) - Q_n(\tau-1)\bigr]^+,$

where $[x]^+ = \max(x, 0)$. Then, (1) can be rewritten as

(4)  $\mathbf{Q}(\tau) = \mathbf{Q}(0) + \mathbf{A}(\tau) - \mathbf{B}(\tau) + \mathbf{Z}(\tau).$

Also, let $S_{\bm{\sigma}}(\tau)$ be the cumulative amount of time that is spent on using schedule $\bm{\sigma}$ up to time $\tau$, so that

(5)  $\mathbf{B}(\tau) = \sum_{\bm{\sigma} \in \mathcal{S}} S_{\bm{\sigma}}(\tau)\, \bm{\sigma}.$

A policy that decides which schedule to choose at each time slot is called a scheduling policy. In this paper, we will be interested in online scheduling policies. That is, the scheduling decision at time $\tau$ will be based only on historical information, that is, the cumulative arrival process and the scheduling decisions up to time $\tau$.
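As an illustration of the dynamics (1)–(5), the following sketch (our names, not the paper's) simulates a two-queue single-hop network in the service-then-arrival order described above; the longest-queue-first rule here is only a placeholder online policy.

```python
import random

# Hypothetical simulation sketch: one slot of the single-hop dynamics applies
# a schedule sigma from the finite set S to the current queues, then adds
# Poisson arrivals.

def poisson(rate, rng):
    # Knuth's method; adequate for the small rates used here.
    threshold, k, p = pow(2.718281828459045, -rate), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def step(queues, schedules, rates, rng):
    # Placeholder policy: pick the schedule of maximum total weight, where
    # each queue is weighted by its current size (a max-weight-like choice).
    sigma = max(schedules, key=lambda s: sum(si * qi for si, qi in zip(s, queues)))
    served = [min(si, qi) for si, qi in zip(sigma, queues)]  # idling when qi == 0
    arrivals = [poisson(r, rng) for r in rates]
    return [qi - bi + ai for qi, bi, ai in zip(queues, served, arrivals)]

if __name__ == "__main__":
    rng = random.Random(0)
    schedules = [(1, 0), (0, 1), (0, 0)]    # two queues, at most one served
    queues = [0, 0]
    for _ in range(1000):
        queues = step(queues, schedules, [0.3, 0.3], rng)
    print(queues)
```

The `served` vector records the actual departures, so the difference between the offered schedule and `served` corresponds to the idling term $\mathbf{Z}$ above.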

2.2 Stochastic model

We shall assume that the exogenous arrival process for each queue is independent and Poisson. Specifically, unit-sized packets arrive to queue $n$ as a Poisson process of rate $\lambda_n$. Let $\bm{\lambda} = (\lambda_n)_{n=1}^N$ denote the vector of all arrival rates. The results presented in this paper extend to more general arrival processes with i.i.d. interarrival times with finite means, using a Poissonization trick. We discuss this extension in Section 6.

2.3 Useful quantities

We shall assume that the scheduling constraint set $\mathcal{S}$ is monotone. This is captured in the following assumption.

Assumption ((Monotonicity)). If $\mathcal{S}$ contains a schedule, then it also contains all of its sub-schedules. Formally, for any $\bm{\sigma} \in \mathcal{S}$, if $\bm{\sigma}' \in \{0,1\}^N$ and $\bm{\sigma}' \le \bm{\sigma}$ componentwise, then $\bm{\sigma}' \in \mathcal{S}$.

Without loss of generality, we will assume that each unit vector $\mathbf{e}^n$, $1 \le n \le N$, belongs to $\mathcal{S}$. Next, we define some quantities that will be useful in the remainder of the paper.

Definition ((Admissible region)). Let $\mathcal{S} \subset \{0,1\}^N$ be the set of allowed schedules. Let $\langle \mathcal{S} \rangle$ be the convex hull of $\mathcal{S}$, that is,

$\langle \mathcal{S} \rangle = \biggl\{ \sum_{\bm{\sigma} \in \mathcal{S}} \alpha_{\bm{\sigma}} \bm{\sigma} : \alpha_{\bm{\sigma}} \ge 0 \text{ for all } \bm{\sigma} \in \mathcal{S}, \ \sum_{\bm{\sigma} \in \mathcal{S}} \alpha_{\bm{\sigma}} = 1 \biggr\}.$

Define the admissible region to be

$\bm{\Lambda} = \bigl\{ \bm{\lambda} \in \mathbb{R}_+^N : \bm{\lambda} \le \bm{\sigma} \text{ componentwise, for some } \bm{\sigma} \in \langle \mathcal{S} \rangle \bigr\}.$

Note that under Assumption 2.3, the capacity region $\bm{\Lambda}$ and the convex hull $\langle \mathcal{S} \rangle$ of $\mathcal{S}$ coincide.

Given that $\langle \mathcal{S} \rangle$ is a polytope contained in $[0,1]^N$, there exists an integer $J$, a matrix $R \in \mathbb{R}_+^{J \times N}$ and a vector $\mathbf{C} \in \mathbb{R}_+^J$ such that

(6)  $\langle \mathcal{S} \rangle = \bigl\{ \mathbf{x} \in \mathbb{R}_+^N : R\mathbf{x} \le \mathbf{C} \bigr\}.$

We call $J$ the rank of $\langle \mathcal{S} \rangle$ in the representation (6). When it is clear from the context, we simply call $J$ the rank of $\mathcal{S}$. Note that this rank may be different from the rank of the matrix $R$. Our results will exploit the fact that the rank $J$ may be an order of magnitude smaller than the number of queues $N$.

Definition ((Static planning problems and load)). Define the static planning optimization problem $\mathrm{PRIMAL}(\bm{\lambda})$ for $\bm{\lambda} \in \mathbb{R}_+^N$ to be

minimize $\alpha$ (7)
subject to $\bm{\lambda} \le \alpha \mathbf{x}$ componentwise, for some $\mathbf{x} \in \langle \mathcal{S} \rangle$, (8)
and $\alpha \ge 0$. (9)

Define the load induced by $\bm{\lambda}$, denoted by $\rho(\bm{\lambda})$, as the value of the optimization problem $\mathrm{PRIMAL}(\bm{\lambda})$. Note that $\bm{\lambda}$ is admissible if and only if $\rho(\bm{\lambda}) \le 1$. It also follows immediately from Definition 2.3 and the representation (6) that the per-resource loads

(10)  $\rho_j(\bm{\lambda}) = \frac{(R\bm{\lambda})_j}{C_j}, \qquad 1 \le j \le J,$

are well defined, and $\bm{\lambda}$ is admissible if and only if $R\bm{\lambda} \le \mathbf{C}$, componentwise.

In the sequel, we will often consider the quantities $\rho_j(\bm{\lambda})$, for $1 \le j \le J$, which can be interpreted as loads on individual “resources” of the system (this interpretation will be made precise in Section 4). They are closely related to the system load $\rho(\bm{\lambda})$. We formalize this relation in the following lemma, whose proof is straightforward and omitted.

Lemma 2.1

Consider a nonnegative matrix $R \in \mathbb{R}_+^{J \times N}$ and a vector $\mathbf{C} \in \mathbb{R}_+^J$ with $C_j > 0$ for all $j$. For a nonnegative vector $\bm{\lambda}$, define $\rho_j(\bm{\lambda})$, $1 \le j \le J$, by (10), and $\rho(\bm{\lambda})$ as the value of $\mathrm{PRIMAL}(\bm{\lambda})$. Then $\rho(\bm{\lambda}) = \max_{1 \le j \le J} \rho_j(\bm{\lambda})$.

The following is a simple and useful property of : for any ,

(11)
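Lemma 2.1 reduces computing the system load to a maximum of per-resource loads. A small sketch under the representation (6), with a made-up matrix $R$, capacities $\mathbf{C}$ and arrival rates (function names are ours):

```python
# Sketch of Lemma 2.1: given the representation {x : Rx <= C} of the schedule
# polytope, the per-resource loads are (R @ lam)_j / C_j and the system load
# is their maximum.

def resource_loads(R, C, lam):
    return [
        sum(r * x for r, x in zip(row, lam)) / cap
        for row, cap in zip(R, C)
    ]

def system_load(R, C, lam):
    return max(resource_loads(R, C, lam))

if __name__ == "__main__":
    # Made-up example: two resources shared by three queues.
    R = [[1, 1, 0],
         [0, 1, 1]]
    C = [1.0, 1.0]
    lam = [0.3, 0.2, 0.4]
    print(resource_loads(R, C, lam))   # per-resource loads rho_j
    print(system_load(R, C, lam))      # their maximum, rho
```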

2.4 Motivating example

An Internet router has several input ports and output ports. A data transmission cable is attached to each of these ports. Packets arrive at the input ports. The function of the router is to work out which output port each packet should go to, and to transfer packets to the correct output ports. This last function is called switching. There are a number of possible switch architectures; we will consider the commercially popular input-queued switch architecture.

Figure 1: An input-queued switch, and two example matchings of inputs to outputs.

Figure 1 illustrates an input-queued switch with three input ports and three output ports. Packets arriving at input $i$ destined for output $j$ are stored at input port $i$, in queue $Q_{ij}$; thus there are $N = 9$ queues in total. (For this example, it is more natural to use double indexing, e.g., $Q_{ij}$, whereas for general switched networks it is more natural to use single indexing, e.g., $Q_n$ for $1 \le n \le N$.)

The switch operates in discrete time. At each time slot, the switch fabric can transmit a number of packets from input ports to output ports, subject to the two constraints that each input can transmit at most one packet, and that each output can receive at most one packet. In other words, at each time slot the switch can choose a matching from inputs to outputs. The schedule is given by $\sigma_{ij} = 1$ if input port $i$ is matched to output port $j$ in a given time slot, and $\sigma_{ij} = 0$ otherwise. The matching constraints require that $\sum_{i} \sigma_{ij} \le 1$ for each output port $j$, and $\sum_{j} \sigma_{ij} \le 1$ for each input port $i$. Figure 1 shows two possible matchings. On the left-hand side, the matching allows a packet to be transmitted from input port 3 to output port 2, but since queue $Q_{32}$ is empty, no packet is actually transmitted.

In general, for an $n$-port switch, there are $N = n^2$ queues. The corresponding schedule set $\mathcal{S}$ is defined as

(12)  $\mathcal{S} = \Bigl\{ \bm{\sigma} \in \{0,1\}^{n \times n} : \sum_{i'} \sigma_{i'j} \le 1 \text{ and } \sum_{j'} \sigma_{ij'} \le 1 \text{ for all } 1 \le i, j \le n \Bigr\}.$

It can be checked that $\mathcal{S}$ is monotone. Furthermore, due to the Birkhoff–von Neumann theorem, BrK (), VN (), the convex hull of $\mathcal{S}$ is given by

(13)  $\langle \mathcal{S} \rangle = \Bigl\{ \mathbf{x} \in [0,1]^{n \times n} : \sum_{i'} x_{i'j} \le 1 \text{ and } \sum_{j'} x_{ij'} \le 1 \text{ for all } 1 \le i, j \le n \Bigr\},$

the set of doubly substochastic matrices. Thus, the rank of $\mathcal{S}$ is less than or equal to $2n$ for an $n$-port switch. Finally, given an arrival rate matrix $\bm{\lambda} = (\lambda_{ij})$ (not a vector, for notational convenience, as discussed earlier), $\rho(\bm{\lambda})$ is given by

$\rho(\bm{\lambda}) = \max\Bigl( \max_{1 \le i \le n} \sum_{j} \lambda_{ij}, \ \max_{1 \le j \le n} \sum_{i} \lambda_{ij} \Bigr).$
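For the input-queued switch, the schedule set and the load formula above can be made concrete in a few lines: the maximal schedules are the $n!$ permutation matrices, and the load of a rate matrix is its largest row or column sum. This sketch uses our own function names:

```python
from itertools import permutations

# Sketch for the n-port input-queued switch: maximal schedules are the n!
# permutation matrices; the induced load of a rate matrix is its largest row
# or column sum.

def maximal_schedules(n):
    for perm in permutations(range(n)):
        yield [[1 if perm[i] == j else 0 for j in range(n)] for i in range(n)]

def switch_load(lam):
    rows = max(sum(row) for row in lam)
    cols = max(sum(lam[i][j] for i in range(len(lam))) for j in range(len(lam)))
    return max(rows, cols)

if __name__ == "__main__":
    print(len(list(maximal_schedules(3))))   # 6 permutation matrices for n = 3
    lam = [[0.2, 0.3, 0.0],
           [0.1, 0.1, 0.4],
           [0.3, 0.0, 0.2]]
    print(switch_load(lam))                  # max row/column sum
```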

3 Related works

The question of determining the optimal scaling of queue sizes in switched networks, or more generally, stochastic processing networks, has been an important intellectual pursuit for more than a decade. The complexity of the generic stochastic processing network makes this task extremely challenging. Therefore, in search of tractable analysis, most of the prior work has tried to understand optimal scaling and scheduling policies for scaled systems: primarily, with respect to fluid and heavy-traffic scaling, that is, as the load $\rho \to 1$.

In heavy-traffic analysis, one studies the queue-size behavior under a diffusion (or heavy-traffic) scaling. This regime was first considered by Kingman kingmanht (); since then, a substantial body of theory has developed, and modern treatments can be found in mike2 (), bramson (), williams (), whittspl (). Stolyar stolyar () has studied a class of myopic scheduling policies, known as the maximum weight policy, introduced by Tassiulas and Ephremides tassiula1 (), for a generalized switch model in the diffusion scaling. In a general version of the maximum weight policy, a schedule with maximum weight is chosen at each time step, with the weight of a schedule being equal to the sum of the weights of the queues served by that schedule. The weight of a queue is a function of its size. In particular, for the one-parameter class of weight functions $w(q) = q^{\alpha}$, parameterized by $\alpha > 0$, the resulting class of policies is called the maximum weight policy with parameter $\alpha$, and denoted by MW-$\alpha$.

In stolyar (), a complete characterization of the diffusion approximation for the queue-size process was obtained, under a condition known as “complete resource pooling,” when the network is operating under the MW-$\alpha$ policy, for any $\alpha > 0$. Stolyar stolyar () showed the remarkable result that the limiting queue-size vector lives in a one-dimensional state space. Operationally, this means that all one needs to keep track of is the one-dimensional total amount of work in the system (called the rescaled workload), and at any point in time one can assume that the individual queues have all been balanced. Furthermore, it was established that a max-weight policy minimizes the rescaled workload induced by any policy under the heavy-traffic scaling (with complete resource pooling). Dai and Lin LD05 (), LD08 () have established that a similar result holds (with complete resource pooling) in the more general setting of a stochastic processing network. In summary, under the complete resource pooling condition, the results in stolyar (), LD05 (), LD08 () imply that the performance of the maximum weight policy in an input-queued switch, or more generally in a stochastic processing network, is always optimal (in the diffusion limit, and when each queue size is appropriately weighted). These results suggest that the average total queue size scales as $1/(1-\rho)$ in the limit. However, such analyses do not capture the dependence on the network scheduling structure $\mathcal{S}$. Essentially, this is because the complete resource pooling condition reduces the system to a one-dimensional space (which may be highly dependent on a network’s structure), and optimality results are then initially expressed with respect to this one-dimensional space.

Motivated to capture the dependence of the queue sizes on the network scheduling structure $\mathcal{S}$, a heavy-traffic analysis of switched networks with multiple bottlenecks (without resource pooling) was pursued by Shah and Wischik SW (). They established the so-called multiplicative state space collapse, and identified a member of the class of maximum-weight policies, denoted by MW-$0+$ (obtained by letting $\alpha \to 0$), as optimal with respect to a critical fluid model. In a more recent work, Shah and Wischik SWo () established the optimality of MW-$0+$ with respect to overloaded fluid models as well. However, this collection of works stops short of establishing optimality for diffusion-scaled queue-size processes.

Finally, we take note of the work by Meyn Meyn08 (), which establishes that a class of generalized maximum weight policies achieves logarithmic [in $1/(1-\rho)$] regret with respect to an optimal policy under certain conditions.

In a related model—the bandwidth-sharing network model—Kang et al. kelly-williamsssc () have established a diffusion approximation for the proportionally fair bandwidth allocation policy, assuming a technical “local traffic” condition, but without assuming complete resource pooling. (Specifically, Kang et al. kelly-williamsssc () assume that critically loaded traffic is such that all the constraints are saturated simultaneously.) They show that the resulting diffusion approximation has a product-form stationary distribution. Shah, Tsitsiklis and Zhong STZ () have recently established that this product-form stationary distribution is indeed the limit of the stationary distributions of the original stochastic model (an interchange-of-limits result). As a consequence, if one could utilize a scheduling policy in a switched network that corresponds to the proportionally fair policy, then the resulting diffusion approximation would have a product-form stationary distribution, as long as the effective network scheduling structure (precisely, $\langle \mathcal{S} \rangle$) satisfies the “local traffic” condition. Now, proportional fairness is a continuous-time rate allocation policy that usually requires rate allocations that are a convex combination of multiple schedules. In a switched network, a policy must operate in discrete time and has to choose one schedule at any given time from the finite discrete set $\mathcal{S}$. For this reason, proportional fairness cannot be implemented directly. However, a natural randomized policy inspired by proportional fairness is likely to have the same diffusion approximation (since the fluid models would be identical, and the entire machinery of Kang et al. kelly-williamsssc (), building upon the work of Bramson bramson () and Williams williams (), relies on a fluid model).
As a consequence, if $\mathcal{S}$ (more accurately, $\langle \mathcal{S} \rangle$) satisfies the “local traffic” condition, then effectively the diffusion-scaled queue sizes would have a product-form stationary distribution, and would result in bounds similar to those implied by our results. In comparison, our results are nonasymptotic, in the sense that they hold for any admissible load; they have a product-form structure; and they do not require technical assumptions such as the “local traffic” condition. Furthermore, such generality is needed because there are popular examples, such as the input-queued switch, that do not satisfy the “local traffic” condition.

Another line of work—so-called large-deviations analysis—concerns exponentially decaying bounds on the tail probability of the steady-state distributions of queue sizes. Venkataramanan and Lin VL-LDP () established that the maximum weight policy with weight parameter $\alpha$, MW-$\alpha$, optimizes the tail exponent of the $(1+\alpha)$-norm of the queue-size vector. Stolyar Stolyar-LDP () showed that a so-called “exponential rule” optimizes the tail exponent of the max norm of the queue-size vector. However, these works do not characterize the tail exponent explicitly. See STZSIGM (), which has the best-known explicit bounds on the tail exponent.

In the context of input-queued switches, the example that has primarily motivated this work, the policy that we propose has an average total queue size within a constant factor of that induced by any policy, in the heavy-traffic limit. Furthermore, this result does not require conditions like complete resource pooling. More generally, our policy provides nonasymptotic bounds on queue sizes for every arrival rate and switch size. The policy even admits exponential tail bounds with respect to the stationary distribution, and the exponent of these tail bounds is optimal. These results are significant improvements on the state-of-the-art bounds for the best performing policies for input-queued switches. As noted in the Introduction, our bound on the average total queue size is $n$ times better than the existing bound for the maximum-weight policy, and $1/(1-\rho)$ times better than that for the batching policy in NeelyModiano (). (Here $n$ is the number of ports, so that $N = n^2$ is the number of queues, and $\rho$ is the system load.) For further details of these results, see STZopen ().

For a generic switched network, our policy induces an average total queue size that scales linearly with the rank $J$ of $\langle \mathcal{S} \rangle$, under the diffusion scaling. This is in contrast to the best-known bounds, such as those for the maximum weight policy, where the average queue size scales with the number of queues $N$, under the diffusion scaling. Therefore, whenever the rank of $\langle \mathcal{S} \rangle$ is smaller than $N$ (the number of queues), our policy provides tighter bounds. Under our policy, queue sizes admit exponential tail bounds. The bound on the distribution of queue sizes under our policy leads to an explicit characterization of the tail exponent, which is optimal for a wide range of single-hop switched networks, including input-queued switches and the independent-set model of wireless networks, when the underlying interference graph is perfect.

4 Insensitivity in stochastic networks

This section recalls the background on insensitive stochastic networks that underlies the main results of this work. We shall focus on descriptions of the insensitive bandwidth allocation in so-called bandwidth-sharing networks operating in continuous time. Properties of these insensitive networks are provided in the Appendix.

We consider a bandwidth-sharing network operating in continuous time with capacity constraints. The particular bandwidth-sharing policy of interest is the store-and-forward allocation (SFA) mentioned earlier. We shall use the SFA as an idealized policy to design online scheduling policies for switched networks. We now describe the precise model, the SFA policy, and its performance properties.

Model

Let time be continuous and indexed by $t \in \mathbb{R}_+$. Consider a network with $J$ resources, indexed from $1$ to $J$. Let there be $N$ routes, and suppose that each packet on route $n$ consumes an amount $R_{jn} \ge 0$ of resource $j$, for each $1 \le j \le J$. Let $\{(j, n) : R_{jn} > 0\}$ be the set of all resource–route pairs such that route $n$ uses resource $j$. Without loss of generality, we assume that for each route $n$, there is at least one resource $j$ with $R_{jn} > 0$. Let $R$ be the $J \times N$ matrix with entries $R_{jn}$. Let $\mathbf{C} = (C_j)_{j=1}^J$ be a positive capacity vector. For each route $n$, packets arrive as an independent Poisson process of rate $\lambda_n$. Packets arriving on route $n$ require a unit amount of service, deterministically.

We denote the number of packets on route $n$ at time $t$ by $M_n(t)$, and define the queue-size vector at time $t$ by $\mathbf{M}(t) = (M_n(t))_{n=1}^N$. Each packet gets service from the network at a rate determined according to a bandwidth-sharing policy. We also denote the total residual workload on route $n$ at time $t$ by $W_n(t)$, and let the vector of residual workloads at time $t$ be $\mathbf{W}(t)$. Once a packet receives its total (unit) amount of service, it departs the network.

We consider online, myopic bandwidth allocations. That is, the bandwidth allocation at time $t$ only depends on the queue-size vector $\mathbf{M}(t)$. When there are $m_n$ packets on route $n$, that is, if the vector of packet counts is $\mathbf{m} = (m_n)_{n=1}^N$, let the total bandwidth allocated to route $n$ be $\phi_n(\mathbf{m})$. We consider a processor-sharing policy, so that each packet on route $n$ is served at rate $\phi_n(\mathbf{m})/m_n$, if $m_n > 0$. If $m_n = 0$, let $\phi_n(\mathbf{m}) = 0$. If the bandwidth vector $\bm{\phi}(\mathbf{m}) = (\phi_n(\mathbf{m}))_{n=1}^N$ satisfies the capacity constraints

(14)  $\sum_{n=1}^N R_{jn}\, \phi_n(\mathbf{m}) \le C_j$

for all $1 \le j \le J$, then, in light of Definition 2.3, we say that $\bm{\phi}$ is an admissible bandwidth allocation. A Markovian description of the system is given by a process which contains the queue-size vector $\mathbf{M}(t)$ along with the residual workloads of the set of packets on each route.
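Checking the admissibility condition (14) for a candidate bandwidth vector is a direct computation; a small sketch with made-up $R$ and $\mathbf{C}$ (function name is ours):

```python
# Sketch: a bandwidth vector phi is admissible, in the sense of (14), when the
# aggregate rate it places on every resource j stays within the capacity C_j.

def is_admissible(R, C, phi, tol=1e-9):
    return all(
        sum(r * p for r, p in zip(row, phi)) <= cap + tol
        for row, cap in zip(R, C)
    )

if __name__ == "__main__":
    R = [[1, 1, 0],
         [0, 1, 1]]        # made-up: two resources, three routes
    C = [1.0, 1.0]
    print(is_admissible(R, C, [0.5, 0.5, 0.4]))   # True: loads 1.0 and 0.9
    print(is_admissible(R, C, [0.8, 0.5, 0.0]))   # False: resource 1 at 1.3
```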

Now, on average, $\lambda_n$ units of work arrive to route $n$ per unit time. Therefore, in order for the Markov process to be positive (Harris) recurrent, it is necessary that

(15)  $\sum_{n=1}^N R_{jn} \lambda_n < C_j \qquad \text{for all } 1 \le j \le J.$

All such $\bm{\lambda}$ will be called strictly admissible, in the same spirit as strictly admissible vectors for a switched network. Similarly to the corresponding switched network, given $\bm{\lambda}$, we can define $\rho(\bm{\lambda})$, the load induced by $\bm{\lambda}$, using (10), as well as $\rho_j(\bm{\lambda})$. Then by Lemma 2.1, $\rho(\bm{\lambda}) = \max_{1 \le j \le J} \rho_j(\bm{\lambda})$, where $\rho_j(\bm{\lambda})$ can be interpreted as the load induced by $\bm{\lambda}$ on resource $j$.

Store-and-forward allocation (SFA) policy

We describe the store-and-forward allocation policy that was first considered by Massoulié and later analyzed in the thesis of Proutière PTh (). Bonald and Proutière bonaldproutiere2 () established that this policy induces product-form stationary distributions and is insensitive with respect to phase-type service distributions. This policy was shown to be insensitive for general service time distributions, including the deterministic service considered here, by Zachary zachary (). The relation between this policy, the proportionally fair allocation, and multi-class queuing networks is discussed in depth by Walton walton () and Kelly, Massoulié and Walton KMW (). The insensitivity property implies that the invariant measure of the process only depends on the arrival rate vector $\bm{\lambda}$ (and the unit mean service requirements), and on no other aspects of the stochastic description of the system.

We first give an informal motivation for SFA. SFA is closely related to quasi-reversible queuing networks. Consider a continuous-time multi-class queuing network (without scheduling constraints) consisting of processor-sharing queues indexed by $j \in \{1, \ldots, J\}$ and job types indexed by the routes $n \in \{1, \ldots, N\}$. Each route-$n$ job has a service requirement $R_{jn}$ at each queue $j$, and a fixed service capacity $C_j$ is shared between the jobs at queue $j$. Here each job will sequentially visit all the queues (hence, store-and-forward) and will visit each queue a fixed number of times. If we assume that jobs on each route arrive as a Poisson process, then the resulting queuing network will be stable for all strictly admissible arrival rates. Moreover, each stationary queue will be independent, with a queue size that scales, with its load $\varrho_j$, as $\varrho_j/(1-\varrho_j)$. For further details, see Kelly Ke79 (). So, assuming each queue has equal load $\rho$, the total number of jobs within the network is of the order $J\rho/(1-\rho)$. In other words, these networks have the stability and queue-size scaling that we require, but do not obey the necessary scheduling constraints (14). However, these networks do produce an admissible schedule on average. For this reason, we consider the SFA policy which, given the number of jobs on each route, allocates the average rate at which jobs are transferred through this multi-class network. Next, we describe this policy (using notation similar to that used in KMW (), walton ()).

Given , define

For , we also define

Here, by notation (and ) we mean . For each , we exploit notation somewhat and define , for all . Also define

For , we define as

(16)

We shall define $\Phi(\mathbf{m}) = 0$ if any of the components of $\mathbf{m}$ is negative. The store-and-forward allocation (SFA) assigns rates according to the function $\bm{\phi}$, so that for any $\mathbf{m} \in \mathbb{N}_0^N$ with $m_n > 0$,

(17)  $\phi_n(\mathbf{m}) = \frac{\Phi(\mathbf{m} - \mathbf{e}^n)}{\Phi(\mathbf{m})},$

where, recalling, $\mathbf{m} - \mathbf{e}^n$ is the same as $\mathbf{m}$ at all but the $n$th component; its $n$th component equals $m_n - 1$. The bandwidth allocation $\bm{\phi}(\mathbf{m})$ is the stationary throughput of jobs on the routes of the multi-class queuing network (described above), conditional on there being $m_n$ jobs on each route $n$.

A priori, it is not clear that the above-described bandwidth allocation is even admissible, that is, satisfies (14). This can be argued as follows. The rate $\phi_n(\mathbf{m})$ can be related to the stationary throughput of a multi-class network with a finite number of jobs, $m_n$, on each route $n$. Under this scenario (due to the finite number of jobs), each queue must be stable. Therefore, the load on each queue $j$, namely $\sum_n R_{jn}\phi_n(\mathbf{m})$, must be less than the overall system capacity $C_j$. That is, the allocation is admissible. The precise argument along these lines is provided in, for example, KMW (), Corollary 2, and walton (), Lemma 4.1.

The SFA induces a product-form invariant distribution for the number of packets waiting in the bandwidth-sharing network and is insensitive. We summarize this in the following result.

Theorem 4.1

Consider a bandwidth-sharing network with . Under the SFA policy described above, the Markov process is positive (Harris) recurrent, and has a unique stationary probability distribution given by

(18)

where

(19)

is a normalizing factor. Furthermore, the steady-state residual workload of packets waiting in the network can be characterized as follows. First, the steady-state distribution of the residual workload of a packet is independent of . Second, in steady state, conditioned on the number of packets on each route of the network, the residual workload of each packet is uniformly distributed on , and is independent of the residual workloads of other packets.

Note that statements similar to Theorem 4.1 have appeared in other works, for example, bonaldproutiere1 (), walton (), Proposition 4.2, and KMW (). Theorem 4.1 is a summary of these statements, and for completeness, it is proved in Appendix A.

The following property of the stationary distribution described in Theorem 4.1 will be useful.

Proposition 4.2

Consider the setup of Theorem 4.1, and let be described by (18). Define a measure on as follows: for ,

(20)

Then, for any ,

(21)

We relate the distribution to the stationary distribution of an insensitive multi-class queuing network, which has product form and geometrically distributed queue sizes.

Proposition 4.3

Consider the distribution defined in (20). Then, for any ,

(22)

where .

Using Theorem 4.1 and Propositions 4.2 and 4.3, we can compute the expected value and the probability tail exponent of the steady-state total residual workload in the system. Recall that the total residual workload in the system at time is given by .

Proposition 4.4

Consider a bandwidth-sharing network with , operating under the SFA policy. Let the load induced by be denoted , and for each , let . Then has a unique stationary probability distribution. With respect to this stationary distribution, the following properties hold: {longlist}[(ii)]

The expected total residual workload is given by

(23)

The distribution of the total residual workload has an exponential tail with exponent given by

(24)

where is the unique positive solution of the equation .

5 Main result: A policy and its performance

In this section, we describe an online scheduling policy and quantify its performance in terms of explicit, closed-form bounds on the stationary distribution of the induced queue sizes. Section 5.1 describes the policy for a generic switched network and provides the statement of the main result. Section 5.2 discusses its implications. Specifically, it discusses (a) the optimality of the policy for a large class of switched networks with respect to exponential tail bounds, and (b) the optimality of the policy for a class of switched networks, including input-queued switches, with respect to the average total queue size. Section 5.3 proves the main result stated in Section 5.1.

5.1 A policy for switched networks

The basic idea behind the policy, to be described in detail shortly, is as follows. Given a switched network, denoted by SN, with constraint set and queues, let have rank and representation [cf. (6)]

Now consider a virtual bandwidth-sharing network, denoted by BN, with routes corresponding to each of these queues. The resource–route relation is determined precisely by the matrix , and the resources have capacities given by . Both networks, SN and BN, are fed identical arrivals. That is, whenever a packet arrives to queue in SN, a packet is added to route in BN at the same time. The main question is that of determining a scheduling policy for SN; this will be derived from BN. Specifically, BN will operate under the insensitive SFA policy described in Section 4. By Theorem 4.1 and Propositions 4.2 and 4.3, this will induce a desirable stationary distribution of queue sizes in BN. Therefore, if we could use the rate allocation of BN, that is, the SFA policy, directly in SN, it would give us a desired performance in terms of the stationary distribution of the induced queue sizes. Now the rate allocation in BN is such that the instantaneous rate is always inside . However, it could change all the time and need not utilize points of as rates. In contrast, in SN we require that the rate allocation can change only once per discrete time slot and it must always employ one of the generators of , that is, a schedule from . The key to our policy is an effective way to emulate the rate allocation of BN under SFA (or for that matter, any admissible bandwidth allocation) by utilizing schedules from in an online manner and with the discrete-time constraint. We will see shortly that this emulation policy relies on being monotone; cf. Assumption 2.3.

To that end, we describe this emulation policy. Let us start by introducing some useful notation. Let be the vector of exogenous, independent Poisson processes according to which unit-sized packets arrive to both BN and SN, simultaneously. Recall that is a Poisson process with rate . Let denote the vector of numbers of packets waiting on the routes in BN at time . In BN, the services are allocated according to the SFA policy described in Section 4. Let denote the cumulative amount of service allocated to the routes in BN under the SFA policy: denotes the total amount of service allocated to all packets on route during the interval , for , with for . By definition, all components of are nondecreasing and Lipschitz continuous. Furthermore, for any and . Recall that the (right-)derivative of is determined by through the function as defined in (17).

Now we describe the scheduling policy for SN that will rely on . Let denote the cumulative amount of service allocated in SN by the scheduling policy up to time slot , with . The scheduling policy determines how is updated. Let be the queue sizes measured at the end of time slot . Let service be provided according to the scheduling policy instantly at the beginning of a time slot. Thus, the scheduling policy decides the schedule at the very beginning of time slot . This decision is made as follows. Let . We will see shortly that under our policy, is always nonnegative. This fact will be useful at various places, and in particular, for bounding the discrepancy between the continuous-time policy SFA and its discrete-time emulation. Let be the optimal objective value in the optimization problem defined in (7). In particular, there exists a nonnegative combination of schedules in such that

(25)

We claim that in fact, we can find nonnegative numbers , , such that

(26)

This is formalized in the following lemma.

Lemma 5.1

Let be a nonnegative vector. Consider the static planning problem defined in (7). Let the optimal objective value be . Then there exists , , such that (26) holds.

The proof of the lemma relies on Assumption 2.3, and is provided in the Appendix.

There could be many possible nonnegative combinations of satisfying (26). If there exist nonnegative numbers , , satisfying (26) with for some , then choose as the schedule: set . If no such decomposition exists for , then set , where is a solution (ties broken arbitrarily) of

(27)

Here first observe that for all time , , so . Hence, is a feasible solution for the above problem, as .

The above is a complete description of the scheduling policy. Observe that it is an online policy, as the virtual network BN can be simulated in an online manner, and, given this, the scheduling decision in SN relies only on the history of BN and SN. The following result quantifies the performance of the policy.
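To make the schedule-selection step concrete, the following minimal Python sketch uses a hypothetical greedy stand-in for the rule above: among the candidate schedules, it returns one serving the largest amount of the outstanding deficit. The function name `choose_schedule` and the greedy criterion are our own illustrative assumptions; the paper's actual rule is based on the decomposition (26) and the optimization (27), which we do not reproduce here.

```python
def choose_schedule(deficit, schedules):
    """Greedy stand-in for the selection step: among the candidate
    schedules (0/1 vectors), return one that serves the largest amount
    of the outstanding deficit vector. Illustrative only; the policy in
    the text selects schedules via the decomposition (26) and, when no
    component of the decomposition reaches 1, the optimization (27)."""
    return max(schedules,
               key=lambda pi: sum(min(p, d) for p, d in zip(pi, deficit)))

# Example: the two perfect matchings of a 2x2 input-queued switch,
# with queues ordered (1->1, 1->2, 2->1, 2->2).
matchings = [(1, 0, 0, 1), (0, 1, 1, 0)]
sched = choose_schedule((2.0, 0.0, 0.0, 1.0), matchings)
```

In this example the deficit sits on queues 1->1 and 2->2, so the greedy criterion picks the matching (1, 0, 0, 1).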

Theorem 5.2

Given a strictly admissible arrival rate vector , with , under the policy described above, the switched network SN is positive recurrent and has a unique stationary distribution. Let , be the same as in Proposition 4.4. With respect to this stationary distribution, the following properties hold: {longlist}[(2)]

The expected total queue size is bounded as

(28)

where .

The distribution of the total queue size has an exponential tail with exponent given by

(29)

where is the unique positive solution of the equation .

5.2 Optimality of the policy

This section establishes the optimality of our policy for input-queued switches, both with respect to expected total queue-size scaling and tail exponent. General conditions under which our policy is optimal with respect to tail exponent are also provided.

Scaling of queue sizes

We start by formalizing what we mean by the optimality of expected queue sizes and of their tail exponents. We consider policies under which there is a well-defined limiting stationary distribution of the queue sizes for all such that . Note that the class of policies is not empty; indeed, the maximum weight policy and our policy are members of this class. With some abuse of notation, let denote the stationary distribution of the queue-size vector under the policy of interest. We are interested in two quantities: {longlist}[(2)]

Expected total queue size. Let be the expected total queue size under the stationary distribution , defined by

Note that by ergodicity, the time average of the total queue size and the expected total queue size under are the same quantity.

Tail exponent. Let be the lower and upper limits of the tail exponent of the total queue size under (possibly or ), respectively, defined by

(30)
(31)

If , then we denote this common value by . We are interested in policies that can achieve minimal and . For tractability, we focus on scalings of these quantities with respect to (equivalently, ) and , as and increase. For different and , it is possible that , but the scaling of , for example, could be wildly different. For this reason, we consider the worst possible dependence on and among all with .

Note that we are considering scalings with respect to two quantities, and , and we are interested in two limiting regimes, and . The optimality of queue-size scaling stated here is with respect to the order of limits and then . As noted in STZopen (), taking the limits in different orders could potentially result in different limiting behaviors of the object of interest, for example, . For further discussion, see Section 6. It should be noted, however, that whenever the tail exponent is optimal, this optimality holds for any and .

Optimality of the tail exponent

Here we establish sufficient conditions under which our policy is optimal with respect to tail exponent. First, we present a universal lower bound on the tail exponent, for a general single-hop switched network under any policy. We then provide a condition under which this lower bound matches the tail exponent under our policy. This condition is satisfied by both input-queued switches and the independent-set model of wireless networks.

Consider any policy under which there exists a well-defined limiting stationary distribution of the queue sizes for all such that . Let denote the stationary distribution of queue sizes under this policy. The following lemma establishes a universal lower bound on the tail exponent.

Lemma 5.3

Consider a switched network as described in Theorem 5.2, with scheduling set and admissible region . Let and be as described. For each , let be defined as in Theorem 5.2. Then under ,

(32)

where, for each , is the unique positive solution of the equation

{pf}

Consider a fixed . Without loss of generality, we assume that , by properly normalizing the inequality . In this case, for all , since for each , , and satisfies the constraint .

Now consider the following single-server queuing system. The arrival process is given by the sum , so that arrivals across time slots are independent, and that in each time slot, the amount of work that arrives is , where is an independent Poisson random variable with mean , for each . Note that the arriving amount in a single time slot does not have to be integral. Note also that , since . In each time slot, a unit amount of service is allocated to the total workload in the system. Then, for this system, the workload process satisfies

where is the number of arrivals to queue in the original system in time slot . We make two observations for this system. First, is stochastically dominated by , where is the size of queue in the original system, under any online scheduling policy. This is because for all schedules , satisfies , and hence for every . Second, since for all , is stochastically dominated by . Thus we have

We now show that

where is the unique positive solution of the equation

Consider the log-moment generating function (log-MGF) of , the arriving amount in one time slot. Since is a Poisson random variable with mean for each , its moment generating function is given by

Hence the log-MGF is

By Theorem 1.4 of bigQ (),

where . Since is strictly convex, satisfies

is arbitrary, so

\upqed
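As a numerical companion to the proof above, the following Python sketch solves for the tail exponent by bisection. It assumes, as in the log-MGF computation in the proof, that the per-slot arriving work is a sum of independent Poisson contributions, so that the log-MGF has the form $\Lambda(\theta) = \sum_j \lambda_j (e^{B_j \theta} - 1)$ with unit service per slot, and that the elided defining equation is $\Lambda(\theta) = \theta$. The function names `log_mgf` and `solve_exponent` are ours.

```python
import math

def log_mgf(theta, lam, B):
    # Log-MGF of the per-slot arriving work sum_j B_j * A_j,
    # with A_j ~ Poisson(lam_j) independent across j.
    return sum(l * (math.exp(b * theta) - 1.0) for l, b in zip(lam, B))

def solve_exponent(lam, B, hi=50.0):
    """Bisection for the unique positive root of log_mgf(theta) = theta.
    A positive root exists when the load sum_j lam_j * B_j < 1: near 0
    the function log_mgf(theta) - theta is negative (slope load - 1),
    and by convexity it eventually becomes positive."""
    f = lambda t: log_mgf(t, lam, B) - t
    while f(hi) < 0:          # enlarge the bracket if needed
        hi *= 2.0
    lo = 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For a single queue with `lam = [0.5]` and `B = [1.0]` (load 1/2), the root of $0.5(e^{\theta} - 1) = \theta$ lies between 1.2 and 1.3.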

For general switched networks, the lower bound above need not match the tail exponent achieved under our policy [cf. (29)]. However, for a wide class of switched networks, these two quantities are equal. The following corollary of Lemma 5.3 is immediate.

Corollary 5.4

Consider a switched network as described in Lemma 5.3, with scheduling set and admissible region . If for all and , , then our policy achieves optimal tail exponent, for any strictly admissible arrival-rate vector .

{pf}

Let be strictly admissible, that is, . Let for each , and let be the system load induced by . Consider the in Lemma 5.3. When for all , and , is the unique positive solution of the equation

for each . Using the relation , we see that is the unique positive solution of the equation