
# Distributed Rate Allocation for Wireless Networks

Jubin Jose and Sriram Vishwanath

J. Jose and S. Vishwanath are with the Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX 78712 USA (emails: jubin@austin.utexas.edu; sriram@austin.utexas.edu).
###### Abstract

This paper develops a distributed algorithm for rate allocation in wireless networks that achieves the same throughput region as optimal centralized algorithms. This cross-layer algorithm jointly performs medium access control (MAC) and physical-layer rate adaptation. The paper establishes that this algorithm is throughput-optimal for general rate regions. In contrast to on-off scheduling, rate allocation enables optimal utilization of physical-layer schemes by scheduling multiple rate levels. The algorithm is based on local queue-length information, and thus the algorithm is of significant practical value.

The algorithm requires that each link can determine the global feasibility of increasing its current data-rate. In many classes of networks, any one link’s data-rate primarily impacts its neighbors and this impact decays with distance. Hence, local exchanges can provide the information needed to determine feasibility. Along these lines, the paper discusses the potential use of existing physical-layer control messages to determine feasibility. This can be considered as a technique analogous to carrier sensing in CSMA (Carrier Sense Multiple Access) networks. An important application of this algorithm is in multiple-band multiple-radio throughput-optimal distributed scheduling for white-space networks.

Index Terms: Wireless networks, throughput-optimal rate allocation, distributed algorithms

## I Introduction

The throughput of wireless networks is traditionally studied separately at the physical and medium access layers, and thus independently optimized at each of these two layers. As a result, conventionally, data-rate adaptation is performed at the physical layer for each link, and link scheduling is performed at the medium access layer. There are significant throughput gains from studying these two layers jointly in a cross-layer framework [27, 8, 11, 19, 4]. This cross-layer optimization results in a joint rate allocation for all the links in the network.

Maximum Weighted (Max-Weight) scheduling, introduced in the seminal paper [27], performs joint rate allocation and guarantees throughput-optimality. (For cooperative networks, throughput-optimal rate allocation does not follow from classical Max-Weight scheduling; in [17], modified algorithms are developed for certain cooperative networks that guarantee throughput-optimality.) However, the Max-Weight algorithm and its variants have the following disadvantages: (a) they require periodically solving a possibly hard optimization problem; (b) the optimization problem is centralized, and thus introduces significant overhead due to queue-length information exchanges. To overcome these disadvantages, we need efficient distributed algorithms for general physical-layer interference models [19].

The goal of this paper is to perform joint rate allocation in a decentralized manner. A related problem is distributed resource allocation in networks, and this problem has received considerable attention in diverse communities over the years. In data and/or stochastic processing networks, resource-sharing is typically described in terms of independent set constraints. With such independent set constraints, the resource allocation problem translates to medium access control (or link scheduling) in wireless networks. For such on-off scheduling, efficient algorithms have recently been proposed for both random access networks [12, 26] and CSMA networks [21, 2]. More recently, with instantaneous carrier sensing, a throughput-optimal algorithm with local exchange of control messages that approximates Max-Weight has been proposed in [25], and a fully decentralized algorithm has been proposed in [15]. The decentralized queue-length based scheduling algorithm in [15] and its variants have been shown to be throughput-optimal in [14, 20, 13]. This body of literature on completely distributed on-off scheduling has been extended to a framework that incorporates collisions in [16, 24]. Further, this decentralized framework has been validated through experiments in [18].

However, independent set constraints can only model orthogonal channel access which, in general, is known to be sub-optimal [5]. For wireless networks, the interaction among nodes requires a much more fine-grained characterization than independent set constraints. This can be fully captured in terms of the network’s rate region, i.e., the set of link-rates that are simultaneously sustainable in the network. As long as the data-rates of links are within the rate region, simultaneous transmission is possible even by neighboring links in the network. Therefore, it is crucial to perform efficient distributed joint rate allocation (and not just distributed link scheduling) in wireless networks. Although distributed rate allocation is a very difficult problem in general, in this work, we show that this problem can be solved by taking advantage of physical-layer information.

In this work, we consider single-hop wireless networks. (For networks that do not employ cooperative schemes, the results in this paper are likely to generalize to multi-hop by combining “back-pressure” with the algorithmic framework of this paper.) We develop a simple, completely distributed algorithm for rate allocation in wireless networks that is throughput-optimal. In particular, given any rate region for a wireless network, we develop a decentralized (local queue-length based) algorithm that stabilizes all the queues for all arrival rates within the throughput region. Thus, we can utilize the entire physical-layer throughput region of the system with distributed rate allocation. To the best of our knowledge, this is the first paper to obtain such a result. This is a very exciting result, as our decentralized algorithm achieves the same throughput region as optimal centralized cross-layer algorithms. The algorithm requires that each link can determine the global feasibility of increasing its data-rate from the current data-rate. In Section VIII-A, we provide details on techniques to determine rate feasibility, and explain reasons for using this approach in practice.

The framework developed in this paper generalizes the distributed link scheduling framework. As discussed before, the current distributed link scheduling algorithms primarily deal with binary (on-off) decisions, whereas our algorithm performs scheduling over multiple data-rates. Similar to these existing distributed link scheduling algorithms, our algorithm is mathematically modeled by a Markov process on the discrete set of data-rates. However, with multiple data-rates for each link, the appropriate choice of the large number of transition rates is very complicated. Thus, a key challenge is to design a Markov chain with fewer parameters that can be analyzed and appropriately chosen for throughput-optimality. We overcome this challenge by showing that transition rates with the following structure have this property. For link $i$, the transition rate to a data-rate $r_{i,j}$ from any other data-rate is $\exp(r_{i,j} v_i)$, where $v_i$ is a single parameter associated with link $i$ that is updated based on its queue-length. The transition takes place only if the new data-rate is feasible. As expected, this reduces to the existing algorithmic framework in the special case of binary (on-off) decisions.

For the general framework mentioned above, at an intuitive level, the techniques required for proving throughput-optimality remain similar to existing techniques. However, there are a few additional technical issues that arise while analyzing the general framework. First, we need to account for more general constraints that arise from the set of possible rate allocation vectors. Next, the choice of update rules for $\mathbf{v}$ over time based on local queue-lengths that guarantee throughput-optimality does not follow directly. The mixing time of the rate allocation Markov chain plays an important role in choosing the update rules. For arbitrary throughput regions, any rate allocation algorithm that approaches $\epsilon$-close (for arbitrarily small $\epsilon > 0$) to the boundary possibly requires an increasing number of data-rates per link. This leads to a potential increase in the mixing time due to the increase in the size of the state-space. Thus, the analysis performed in this paper is more general and essential to establish throughput-optimality of the algorithms considered.

An important application of this algorithmic framework is for networks of white-space radios [7], where multiple non-adjacent frequency bands are available for operation and multiple radios are available at the wireless nodes. A scheduler needs to allocate different radios to different bands in a distributed manner. This problem introduces multiple data-rates for every link even in the CSMA framework, and hence, existing distributed algorithms cannot be directly applied. We demonstrate that our framework provides a throughput-optimal distributed algorithm in this setting.

Our main contributions are the following:

• We design a class of distributed cross-layer rate allocation algorithms for wireless networks that utilize local queue-lengths and physical-layer measurements.

• We show that there are algorithms in this class that are (a) throughput-optimal, and (b) completely decentralized.

• We demonstrate that an adaptation of these algorithms is throughput-optimal for multiple-band multiple-radio distributed scheduling.

### I-A Notation

Vectors are considered to be column vectors and denoted by bold letters. For vectors $\mathbf{x}$ and $\mathbf{y}$, $\mathbf{x} \cdot \mathbf{y} = \mathbf{x}^T \mathbf{y}$, where $\mathbf{x}^T$ is the transpose of $\mathbf{x}$; similarly, $A^T$ denotes the transpose of a matrix $A$. For vectors, $\le$, $\ge$, $<$, and $>$ are defined component-wise. $\mathbf{0}$ denotes the all-zeros vector and $\mathbf{1}$ denotes the all-ones vector. Other basic notation used in the paper is given in Table I. Notation specific to proofs is introduced later as needed.

### I-B Organization

The next section describes the system model. Section III explains the distributed rate allocation algorithm. Section IV introduces relevant definitions and known results. Section V describes the rate allocation Markov chain and the optimization framework. Section VI establishes the throughput-optimality of the algorithm. The algorithm for multiple-band multiple-radio scheduling is given in Section VII. Further discussions and simulation results are given in Section VIII. We conclude with our remarks in Section IX. For readability, the proofs of the technical lemmas in Section V and Section VI are moved to the Appendix.

## II System Model

Consider a wireless network consisting of multiple nodes. In this network, we are interested in single-hop flows that correspond to $n$ wireless links, labeled $1, 2, \ldots, n$; the set of links is denoted by $\mathcal{L}$. Since we have a shared wireless medium, these links interact (or interfere) in a potentially complex way. For single-hop flows, this interaction among links can be captured through an $n$-dimensional rate region for the network, which is formally defined next.

###### Definition 1 (Rate Region)

The rate region of a network is defined as the set of instantaneous rate vectors at which the queues (introduced later) of all links can be drained simultaneously.

In this paper, we assume that the rate region is fixed (i.e., not time-varying); this corresponds to fixed or slow-fading channels. We denote the rate region associated with the network by $\mathcal{C}$. By definition, this rate region is compact. We assume that the rate region has the following simple property: if $\mathbf{r} \in \mathcal{C}$, then $\hat{\mathbf{r}} \in \mathcal{C}$ for all $\hat{\mathbf{r}}$ with $\mathbf{0} \le \hat{\mathbf{r}} \le \mathbf{r}$. The above property states that rates can be decreased component-wise. Such an assumption is fairly mild, and is satisfied by rate regions resulting from most physical-layer schemes. Next, we define the throughput region of the network.

###### Definition 2 (Throughput Region)

The throughput region of a network, denoted by $\Lambda$, is defined as the convex hull of the rate region $\mathcal{C}$ of the network.

We use a continuous-time model to describe system dynamics. Time is denoted by $t \in [0, \infty)$. Every (transmitter of) link $i$ is associated with a queue $Q_i(t)$, which quantifies the information (packets) remaining at time $t$ waiting to be transmitted on link $i$. Let the cumulative arrival of information at the $i$-th link during the time interval $[0, t]$ be $A_i(t)$, with $A_i(0) = 0$. Rate allocation at time $t$ is defined as the rate vector in the rate region at which the system is being operated at time $t$. Let the rate allocation corresponding to the $i$-th link at time $t$ be $r_i(t)$. Then, for every link $i$, the queue dynamics is given by

$$Q_i(t) = Q_i(s) - \int_s^t r_i(z)\,\mathbb{I}(Q_i(z) > 0)\,dz + A_i(t) - A_i(s), \tag{1}$$

where $0 \le s \le t$ and $\mathbb{I}(\cdot)$ denotes the indicator function. The vector of queues in the system is denoted by $\mathbf{Q}(t)$. The queues are initially at $\mathbf{Q}(0) = \mathbf{0}$.
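As a quick illustration (a discretized sketch, not part of the paper's model), the dynamics in (1) can be simulated with a forward-Euler loop; the indicator $\mathbb{I}(Q_i > 0)$ is what keeps the queue from going negative:

```python
def simulate_queue(rate, arrivals, dt=0.01, T=100.0):
    """Forward-Euler discretization of the queue dynamics (1): the queue
    drains at rate r_i(t) only while it is non-empty (the indicator term)."""
    q, t = 0.0, 0.0
    while t < T:
        q += arrivals(t) * dt            # increment of A_i over [t, t+dt)
        if q > 0:                        # service indicator I(Q_i > 0)
            q = max(0.0, q - rate * dt)
        t += dt
    return q

# Constant arrival rate 0.5 below the service rate 1.0: the queue stays empty.
q_stable = simulate_queue(rate=1.0, arrivals=lambda t: 0.5)
```

With the arrival rate above the service rate, the same loop shows the queue growing linearly, matching the intuition behind rate stability below.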

We consider arrival processes at the queues in the network with the following properties.

• We assume every arrival process is such that its increments over integral times are independent and identically distributed with a finite mean.

• We assume that all these increments belong to a bounded support $[0, K]$, i.e., $A_i(t+1) - A_i(t) \le K$ for all $t$ and all $i$.

Based on these properties, the (mean) arrival rate corresponding to the $i$-th link is $\lambda_i := \mathbb{E}[A_i(1)]$. We denote the vector of arrival rates by $\boldsymbol{\lambda}$. Without loss of generality (if $\lambda_i = 0$, then this link can be removed from the system), we assume $\lambda_i > 0$ for all $i$. It follows from the strong law of large numbers that, with probability $1$,

$$\lim_{t\to\infty} \frac{A_i(t)}{t} = \lambda_i. \tag{2}$$

In summary, our system model incorporates general interference constraints through an arbitrary rate region and focuses on single-hop flows. We proceed to describe the rate allocation algorithm and the main results of this paper.

## III Rate Allocation Algorithm & Main Results

The goal of this paper is to design a completely decentralized algorithm for rate allocation that stabilizes all the queues as long as the arrival rate vector is within the throughput region. By assumption, every link can determine rate feasibility, i.e., every link can determine whether increasing its data-rate from the current rate allocation results in a rate vector that remains feasible. More formally, every link $i$ at time $t$, if required, can obtain the information $\mathbb{I}(\mathbf{r} \in \mathcal{C})$ for the candidate rate vector $\mathbf{r}$ that raises its own rate. More details on determining rate feasibility are given in Section VIII.

The rate allocation vector at time $t$ is denoted by $\mathbf{r}(t)$. For decentralized rate allocation, we develop an algorithm that uses only local queue information for choosing $\mathbf{r}(t)$ over time $t$. Further, we perform rate allocation over a chosen limited (finite) set of rate vectors that are feasible. We choose a finite set of rate levels corresponding to every link, and form vectors that are feasible. The details are as follows:

1. For each link $i$, a set of rate levels $\{r_{i,0}, r_{i,1}, \ldots, r_{i,k_i}\}$ is chosen with $r_{i,0} = 0$, $r_{i,j} < r_{i,j+1}$, and $r_{i,k_i} = K_i$. Here, $K_i$ is the maximum possible transmission rate for the $i$-th link, i.e., $K_i = \max\{r_i : \mathbf{r} \in \mathcal{C}\}$, and $k_i$ is the number of levels other than zero. Since the rate region is compact, without loss of generality (if $K_i = 0$, then this link can be removed from the system), we assume $K_i > 0$ for all $i$.

2. The set of rate allocation vectors, denoted by $\mathcal{R}$, is given by

$$\mathcal{R} = \big\{\mathbf{r} \in \mathcal{C} : r_i \in \{r_{i,0}, r_{i,1}, \ldots, r_{i,k_i}\} \text{ for all } i \in \mathcal{L}\big\}.$$

The convex hull of the set of rate allocation vectors is denoted by $\bar{\mathcal{R}}$. Define $\Lambda^o := \{\boldsymbol{\lambda} : \boldsymbol{\lambda} + \epsilon\mathbf{1} \in \Lambda \text{ for some } \epsilon > 0\}$, the set of strictly feasible rates. For rate regions that are polytopes, the partitions can be chosen such that $\bar{\mathcal{R}} = \Lambda$. For any compact rate region, it is fairly straightforward to choose partitions with step size at most $\epsilon/2$ such that $\boldsymbol{\lambda} \in \bar{\mathcal{R}}$ if $\boldsymbol{\lambda} + \epsilon\mathbf{1} \in \Lambda$. The trivial partition with $\epsilon/2$ as the step size in all dimensions satisfies the above property. Thus, for any given $\epsilon > 0$, we can obtain a set of rate allocation vectors such that

$$|\mathcal{R}| \le \lceil 2\bar{K}/\epsilon \rceil^n, \tag{3}$$

where $\bar{K} := \max_{i \in \mathcal{L}} K_i$, and $\boldsymbol{\lambda} \in \bar{\mathcal{R}}$ whenever $\boldsymbol{\lambda} + \epsilon\mathbf{1} \in \Lambda$.
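To make the construction concrete, here is a small sketch (with a hypothetical feasibility oracle `rate_region`) that builds $\mathcal{R}$ by the trivial $\epsilon/2$ partition and checks the cardinality bound (3):

```python
import itertools
import math

def rate_allocation_set(rate_region, K_bar, eps, n):
    """Partition each link's range [0, K_bar] with step eps/2 and keep the
    level combinations that the feasibility oracle accepts."""
    levels = [j * eps / 2 for j in range(math.ceil(2 * K_bar / eps) + 1)]
    return [r for r in itertools.product(levels, repeat=n) if rate_region(r)]

# Toy rate region r1 + r2 <= 1 (a triangle), with K_bar = 1 and eps = 0.5.
R = rate_allocation_set(lambda r: sum(r) <= 1.0, K_bar=1.0, eps=0.5, n=2)
# The bound (3): |R| <= ceil(2*K_bar/eps)^n = 4^2 = 16.
assert len(R) <= 16
```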

Before describing the algorithm, we define two notions of throughput performance of a rate allocation algorithm.

###### Definition 3 (Rate stable)

We say that a rate allocation algorithm is rate-stable if, for any $\boldsymbol{\lambda}$, the departure rate corresponding to every queue is equal to its arrival rate, i.e., for all $i \in \mathcal{L}$, with probability $1$,

$$\lim_{t\to\infty} \frac{1}{t}\int_0^t r_i(z)\,\mathbb{I}(Q_i(z) > 0)\,dz = \lambda_i.$$

From (1) and (2), this is the same as, for all $i \in \mathcal{L}$, with probability $1$,

$$\lim_{t\to\infty} \frac{Q_i(t)}{t} = 0.$$
###### Definition 4 (Throughput optimal)

We say that a rate allocation algorithm is throughput-optimal if, for any given $\epsilon > 0$, the algorithm makes the underlying network Markov chain positive Harris recurrent (defined in Section IV) for all $\boldsymbol{\lambda}$ such that $\boldsymbol{\lambda} + \epsilon\mathbf{1} \in \Lambda$. By definition, the algorithm can depend on the value of $\epsilon$.

Next, we describe a class of algorithms to determine $\mathbf{r}(t)$ as a function of time based on a continuous-time Markov chain. Recall that $\{r_{i,0}, r_{i,1}, \ldots, r_{i,k_i}\}$ is the set of possible rates/states for allocation associated with the $i$-th link. In these algorithms, the $i$-th link uses $k_i + 1$ independent exponential clocks with rates/parameters $\exp(u_{i,j})$ (or, equivalently, exponential clocks with mean times $\exp(-u_{i,j})$); these should not be confused with the rates for allocation. The clock with (time-varying) parameter $u_{i,j}$ is associated with the state $r_{i,j}$. Based on these clocks, the $i$-th link obtains $r_i(t)$ as follows:

1. If the clock associated with a state (say $r_{i,j}$) ticks and, further, if transitioning to that state is feasible, then $r_i(t)$ is changed to $r_{i,j}$;

2. Otherwise, $r_i(t)$ remains the same.

The above procedure continues, i.e., all the clocks run continuously. Define $\mathbf{u} := (u_{i,j})_{i \in \mathcal{L},\, 0 \le j \le k_i}$. It turns out that the appropriate structure to introduce is as follows:

$$u_{i,j} = r_{i,j}\, v_i, \quad \forall i \in \mathcal{L},\ j \in \{0, 1, \ldots, k_i\},$$

where $v_i \in \mathbb{R}$. We denote the vector consisting of this new set of parameters by $\mathbf{v} = (v_1, \ldots, v_n)$.
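The clock dynamics above can be simulated directly. The sketch below (one link, all transitions assumed feasible; names are hypothetical) checks that the fraction of time spent at each level approaches the exponential-form stationary law $\pi(r) \propto \exp(r v)$ derived in Section V:

```python
import math
import random

def run_clocks(levels, v, feasible, T=2000.0, seed=1):
    """Simulate the exponential-clock dynamics for one link: the clock for
    level r has parameter u = r*v, i.e., exponential rate exp(r*v); the
    earliest tick proposes a move, taken only if `feasible` accepts it."""
    rng = random.Random(seed)
    state, t = levels[0], 0.0
    time_in = {r: 0.0 for r in levels}
    while t < T:
        # Sample all independent exponential clocks and take the earliest.
        ticks = {r: rng.expovariate(math.exp(r * v)) for r in levels}
        r_new, dt = min(ticks.items(), key=lambda kv: kv[1])
        time_in[state] += dt
        t += dt
        if feasible(r_new):
            state = r_new
    return {r: time_in[r] / t for r in levels}

# Two levels {0, 1} with v = 1: expected occupancy of level 1 is e/(1+e).
occ = run_clocks(levels=[0.0, 1.0], v=1.0, feasible=lambda r: True)
```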

###### Example 1

Consider a Gaussian multiple access channel with two links as shown in Figure 1, with average power constraint $P$ at the transmitters and noise variance $\sigma^2$ at the receiver. The capacity region of this channel is shown in Figure 2, where $\mathsf{C}(x) := \log(1 + x)$. In this case, orthogonal access schemes limit the throughput region to the triangle (strictly within the pentagon) shown using the dashed line. In this example, if we allow for capacity-achieving physical-layer schemes, the rate region (and hence the throughput region) is identical to the pentagon shown in Figure 2. The natural choice for the set of rate levels at link-1 is $\{0, r_{1,1}, r_{1,2}\}$, where $r_{1,2} = \mathsf{C}(P/\sigma^2)$ and $r_{1,1} = \mathsf{C}(P/(P + \sigma^2))$. Similarly for link-2. This leads to the set of rate allocation vectors consisting of the feasible combinations of these levels. It is clear that the convex hull of this set is the throughput region itself. For this example, the state-space of the Markov chain and the transitions to and from one state are shown in Figure 3.

A distributed algorithm needs to choose the parameters $\mathbf{v}$ in a decentralized manner. To provide the intuition behind the algorithm, we perform this in two steps. In the first step, we develop the non-adaptive version of the algorithm, which has the knowledge of $\boldsymbol{\lambda}$. This algorithm is called non-adaptive as it requires the explicit knowledge of $\boldsymbol{\lambda}$. The rate allocation at time $t$ is set to be the state $\mathbf{r}(t)$ of the rate allocation Markov chain. This algorithm uses $\mathbf{v} = \mathbf{v}^*$ at all times, which is a function of $\boldsymbol{\lambda}$ and is given by

$$\mathbf{v}^* = \arg\max_{\mathbf{v} \in \mathbb{R}^n}\ \boldsymbol{\lambda} \cdot \mathbf{v} - \log\Big(\sum_{\mathbf{r} \in \mathcal{R}} \exp(\mathbf{r} \cdot \mathbf{v})\Big).$$

We show in Section V that, given $\boldsymbol{\lambda}$ in the interior of $\bar{\mathcal{R}}$, the above optimization problem has a unique solution that is finite, and therefore $\mathbf{v}^*$ is valid. An important result regarding this non-adaptive algorithm is the following theorem.
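Since the objective is concave with gradient $\boldsymbol{\lambda} - \mathbf{s}_{\mathbf{v}}$ (Section V), $\mathbf{v}^*$ can be found by plain gradient ascent. A small sketch on a toy rate allocation set (an illustration, not the paper's implementation):

```python
import math

def solve_v_star(R, lam, steps=20000, lr=0.05):
    """Gradient ascent on F(v, lam) = lam.v - log(sum_r exp(r.v)).
    The gradient is lam - s_v, where s_v is the mean of r under the
    stationary law pi_v(r) proportional to exp(r.v) (see Section V)."""
    n = len(lam)
    v = [0.0] * n
    for _ in range(steps):
        w = [math.exp(sum(ri * vi for ri, vi in zip(r, v))) for r in R]
        Z = sum(w)
        s = [sum(wk * r[i] for wk, r in zip(w, R)) / Z for i in range(n)]
        v = [vi + lr * (li - si) for vi, li, si in zip(v, lam, s)]
    return v, s

# Toy rate allocation set and an interior arrival rate vector;
# at the maximizer v*, the offered service rate s_v matches lam.
R = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
lam = (0.3, 0.3)
v_star, s = solve_v_star(R, lam)
```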

###### Theorem 1

The above non-adaptive algorithm is rate-stable for any given $\boldsymbol{\lambda}$ in the interior of $\bar{\mathcal{R}}$.

###### Proof:

For any $\boldsymbol{\lambda}$ in the interior of $\bar{\mathcal{R}}$, there is at least one distribution on $\mathcal{R}$ that has expectation $\boldsymbol{\lambda}$. For the Markov chain specified by any $\mathbf{v}$, there is a stationary distribution on the state-space $\mathcal{R}$. The value $\mathbf{v}^*$ is chosen such that it minimizes the Kullback-Leibler divergence of the induced stationary distribution from the distribution corresponding to $\boldsymbol{\lambda}$. For the Markov chain specified by $\mathbf{v}^*$, the expected value of the stationary distribution turns out to be $\boldsymbol{\lambda}$. This leads to the rate-stable performance of the algorithm. The proof details are given in Section V. \qed

In the second step, we develop the adaptive algorithm, where $\mathbf{v}$ is obtained as a function of time, denoted by $\mathbf{v}(t)$. This algorithm is called adaptive as it does not require the knowledge of $\boldsymbol{\lambda}$. The values of $\mathbf{v}(t)$ are updated at fixed (not random) time instances $\tau_l$ for $l \ge 1$. We set $\tau_0 = 0$ and $\mathbf{v}(0) = \mathbf{0}$. During interval $[\tau_l, \tau_{l+1})$, the algorithm uses $\mathbf{v}(\tau_l)$. The lengths of the intervals are $T_l := \tau_{l+1} - \tau_l$. During interval $l$, let the empirical arrival rate be

$$\hat{\lambda}_i(l) = \frac{A_i(\tau_{l+1}) - A_i(\tau_l)}{T_l} \tag{4}$$

and the empirical offered service rate be

$$\hat{s}_i(l) = \frac{1}{T_l}\int_{\tau_l}^{\tau_{l+1}} r_i(z)\,dz. \tag{5}$$

The update equation corresponding to the algorithm for the $i$-th link is given by

$$v_i(\tau_{l+1}) = \Big[v_i(\tau_l) + \alpha_l\Big(\hat{\lambda}_i(l) + \frac{\epsilon}{4} - \hat{s}_i(l)\Big)\Big]_D, \tag{6}$$

where $[x]_D$ is the projection of $x$ to the closest point in $[-D, D]$, and $\alpha_l$ are the step sizes. Thus, the algorithm parameters are the interval lengths $T_l$, the step sizes $\alpha_l$, and $D$.
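The update (6) is a one-liner per link. A minimal sketch (with the projection interval $[-D, D]$ as reconstructed here):

```python
def update_v(v, lam_hat, s_hat, alpha, eps, D):
    """One update of (6): v_i <- [ v_i + alpha*(lam_hat_i + eps/4 - s_hat_i) ]_D,
    where [.]_D projects onto [-D, D]."""
    proj = lambda x: max(-D, min(D, x))
    return [proj(vi + alpha * (lh + eps / 4 - sh))
            for vi, lh, sh in zip(v, lam_hat, s_hat)]

# If the empirical service rate exceeds the empirical arrival rate by more
# than eps/4, v_i decreases (the link backs off); otherwise it increases.
v_next = update_v([0.0], lam_hat=[0.5], s_hat=[0.9], alpha=0.1, eps=0.1, D=2.0)
```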

###### Remark 1

Clearly, both the empirical arrival rate and the empirical offered service rate used in the above algorithm can be computed by the $i$-th link without any external information. In fact, the difference $\hat{\lambda}_i(l) - \hat{s}_i(l)$ is simply the difference of its queue-length over the previous interval, appropriately scaled by the inverse of the length of the previous interval.

The following theorem provides the throughput-optimality guarantee for the adaptive algorithm.

###### Theorem 2

Consider any given $\epsilon > 0$ and $n$. Then, there exists some choice of algorithm parameters $T_l$, $\alpha_l$, and $D$ such that the appropriate network Markov chain under the adaptive algorithm is positive Harris recurrent if $\boldsymbol{\lambda} + \epsilon\mathbf{1} \in \Lambda$, i.e., the algorithm is throughput-optimal.

###### Proof:

The update in (6) can be intuitively thought of as a gradient descent technique to solve an optimization problem whose solution is a $\mathbf{v}$ whose induced stationary distribution on $\mathcal{R}$ has expected value strictly greater than $\boldsymbol{\lambda}$. However, the arrival rate and offered service rate are replaced with their empirical values for decentralized operation. We consider the two time scales involved in the algorithm: within an update interval and across update intervals. The main steps involved in establishing the throughput-optimality are the following. First, we show that sufficiently long intervals $T_l$ can be chosen such that the empirical values used in the algorithm are arbitrarily close to the true values. Using this, we next show that the empirical offered service rate averaged across update intervals is strictly higher than the arrival rate. Finally, we show that this results in a drift that is sufficient to guarantee positive Harris recurrence. The proof details are given in Section VI. \qed

## IV Definitions & Known Results

We provide definitions and known results that are key in establishing the main results of this paper. We begin with definitions on two measures of difference between two probability distributions.

###### Definition 5 (Kullback-Leibler (KL) divergence)

Consider two probability mass functions $\mu$ and $\pi$ on a finite set $\Omega$. Then, the KL divergence from $\pi$ to $\mu$ is defined as

$$D(\mu \,\|\, \pi) := \sum_{x \in \Omega} \mu(x) \log\frac{\mu(x)}{\pi(x)}.$$

###### Definition 6 (Total Variation)

Consider two probability mass functions $\mu$ and $\pi$ on a finite set $\Omega$. Then, the total variation distance between $\mu$ and $\pi$ is defined as

$$\|\mu - \pi\|_{TV} := \frac{1}{2}\sum_{x \in \Omega} |\mu(x) - \pi(x)|.$$
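As a minimal illustration of these two definitions (a sketch, not from the paper):

```python
import math

def kl_divergence(mu, pi):
    """D(mu || pi) = sum_x mu(x) log(mu(x)/pi(x)); terms with mu(x)=0 vanish."""
    return sum(m * math.log(m / p) for m, p in zip(mu, pi) if m > 0)

def total_variation(mu, pi):
    """||mu - pi||_TV = (1/2) sum_x |mu(x) - pi(x)|."""
    return 0.5 * sum(abs(m - p) for m, p in zip(mu, pi))

mu, pi = [0.5, 0.5], [0.25, 0.75]
d = kl_divergence(mu, pi)    # 0.5*log(2) + 0.5*log(2/3)
tv = total_variation(mu, pi)
```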

Next, we provide two known results that are used later. Result 1 follows directly from [3], and Result 2 is in [3].

###### Result 1 (Mixing Time)

Consider any finite state-space, aperiodic, irreducible, discrete-time Markov chain with transition probability matrix $P$ and stationary distribution $\pi$. Let $\alpha_{\min}$ be the minimum value in $\pi$ and let the second largest eigenvalue modulus (SLEM) of $P$ be $\sigma_{\max}$. Then, for any $\rho > 0$, starting from any initial distribution (at time $0$), the distribution at time $\tau$ associated with the Markov chain, denoted by $\mu(\tau)$, is such that $\|\mu(\tau) - \pi\|_{TV} \le \rho$ if

$$\tau \ge \frac{\frac{1}{2}\log(1/\alpha_{\min}) + \log(1/\rho)}{\log(1/\sigma_{\max})}. \tag{7}$$
###### Result 2 (Conductance Bounds)

Consider the setting as above. The ergodic flow out of a set $S \subseteq \Omega$ is defined as $F(S) := \sum_{x \in S,\, y \notin S} \pi(x) P(x, y)$, and the conductance is defined as

$$\Phi := \min_{S \subseteq \Omega:\ \pi(S) \le 1/2} \frac{F(S)}{\pi(S)}. \tag{8}$$

Then, the SLEM is bounded by the conductance as follows:

$$1 - 2\Phi \le \sigma_{\max} \le 1 - \Phi^2/2. \tag{9}$$
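For a two-state reversible chain, both sides of (9) can be computed in closed form; the sketch below (an illustration under the stated two-state assumption) checks the bound numerically:

```python
def slem_and_conductance(p, q):
    """Two-state reversible chain with transition matrix
    P = [[1-p, p], [q, 1-q]]: stationary distribution pi = (q, p)/(p+q)
    and SLEM sigma_max = |1 - p - q| (the second eigenvalue of P)."""
    pi = (q / (p + q), p / (p + q))
    sigma_max = abs(1 - p - q)
    # Ergodic flow out of S = {0}: F(S) = pi(0)*P(0,1); by reversibility the
    # flow out of {1} is the same.  The conductance minimizes F(S)/pi(S) over
    # cuts with pi(S) <= 1/2, i.e., the smaller-measure side here.
    flow = pi[0] * p
    phi = flow / min(pi)
    return pi, sigma_max, phi

pi, sigma_max, phi = slem_and_conductance(0.3, 0.2)
# Conductance bounds (9): 1 - 2*Phi <= sigma_max <= 1 - Phi^2/2.
assert 1 - 2 * phi <= sigma_max <= 1 - phi ** 2 / 2
```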

Lastly, we provide the definition of positive Harris recurrence. For details on properties associated with positive Harris recurrence, see [22, 6].

###### Definition 7 (Positive Harris recurrence)

Consider a discrete-time, time-homogeneous Markov chain on a complete, separable metric space $X$. Let $\mathcal{B}_X$ denote the Borel $\sigma$-algebra on $X$, and let $X_t$ denote the state of the Markov chain at time $t$. Define the stopping time $T_A := \inf\{t \ge 0 : X_t \in A\}$ for any $A \in \mathcal{B}_X$. The set $A$ is called Harris recurrent if $\Pr(T_A < \infty \mid X_0 = x) = 1$ for any $x \in X$. A Markov chain is called Harris recurrent if there exists a $\sigma$-finite measure $\nu$ on $(X, \mathcal{B}_X)$ such that if $\nu(A) > 0$ for some $A \in \mathcal{B}_X$, then $A$ is Harris recurrent. It is known that if a Markov chain is Harris recurrent, an essentially unique invariant measure exists. If the invariant measure is finite, then it may be normalized to a probability measure. In this case, the Markov chain is called positive Harris recurrent.

## V Rate Allocation Markov Chain & Rate Stability

Rate allocation Markov chain: The main challenge is to design a Markov chain with fewer parameters that can be analyzed and appropriately chosen for throughput-optimality. First, we identify a class of Markov chains that are relatively easy to analyze. Consider the class of algorithms introduced in Section III. The core of this class of algorithms is a continuous-time Markov chain with state-space $\mathcal{R}$, which is the (finite) set of rate allocation vectors. Define

$$f(\hat{\mathbf{r}}, \mathbf{r}) := \exp\Big(\sum_{i=1}^n \sum_{j=0}^{k_i} u_{i,j}\,\mathbb{I}(r_i = r_{i,j})\,\mathbb{I}(r_i \ne \hat{r}_i)\Big), \tag{10}$$

where $\hat{\mathbf{r}}, \mathbf{r} \in \mathcal{R}$, and $u_{i,j}$ are the parameters introduced in Section III. Now, the transition rate from state $\hat{\mathbf{r}}$ to state $\mathbf{r}$ can be expressed as

$$q(\hat{\mathbf{r}}, \mathbf{r}) = \begin{cases} f(\hat{\mathbf{r}}, \mathbf{r}), & \text{if } \|\hat{\mathbf{r}} - \mathbf{r}\|_0 = 1, \\ 0, & \text{if } \|\hat{\mathbf{r}} - \mathbf{r}\|_0 > 1. \end{cases}$$

And, the diagonal elements of the rate matrix are given by $q(\mathbf{r}, \mathbf{r}) = -\sum_{\hat{\mathbf{r}} \ne \mathbf{r}} q(\mathbf{r}, \hat{\mathbf{r}})$ for all $\mathbf{r} \in \mathcal{R}$. This follows directly from the description of the algorithm. This class of algorithms is carefully designed so that it is tractable for analysis. In particular, the following lemma shows that this Markov chain is reversible and that the stationary distribution has exponential form.

###### Lemma 3

The rate allocation Markov chain is reversible and has the stationary distribution

$$\pi(\mathbf{r}) = \frac{\exp\big(\sum_{i=1}^n \sum_{j=0}^{k_i} u_{i,j}\,\mathbb{I}(r_i = r_{i,j})\big)}{\sum_{\tilde{\mathbf{r}} \in \mathcal{R}} \exp\big(\sum_{i=1}^n \sum_{j=0}^{k_i} u_{i,j}\,\mathbb{I}(\tilde{r}_i = r_{i,j})\big)}. \tag{11}$$

Furthermore, this Markov chain converges to this stationary distribution starting from any initial distribution.

###### Proof:

The proof follows from the detailed balance equations $\pi(\hat{\mathbf{r}})\,q(\hat{\mathbf{r}}, \mathbf{r}) = \pi(\mathbf{r})\,q(\mathbf{r}, \hat{\mathbf{r}})$ for all $\hat{\mathbf{r}}, \mathbf{r} \in \mathcal{R}$ and known results on convergence to the stationary distribution for irreducible finite state-space continuous-time Markov chains [1]. \qed
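The detailed balance property can be checked mechanically on a small instance; the sketch below encodes states by level indices and verifies that the exponential form of (11) balances the rates of (10):

```python
import itertools
import math

def check_detailed_balance(u):
    """For states differing in exactly one coordinate i (Hamming distance 1),
    the rate from r_hat to r with r_i = r_{i,j} is exp(u[i][j]) by (10).
    Verify that pi(r) proportional to exp(sum_i u[i][j(r_i)]) from (11)
    satisfies pi(r_hat) q(r_hat, r) = pi(r) q(r, r_hat)."""
    n = len(u)
    states = list(itertools.product(*[range(len(ui)) for ui in u]))
    weight = lambda s: math.exp(sum(u[i][s[i]] for i in range(n)))
    for a, b in itertools.permutations(states, 2):
        if sum(x != y for x, y in zip(a, b)) == 1:
            i = next(k for k in range(n) if a[k] != b[k])
            assert abs(weight(a) * math.exp(u[i][b[i]])
                       - weight(b) * math.exp(u[i][a[i]])) < 1e-9
    return True

# Two links with 2 and 3 rate levels and arbitrary clock parameters u_{i,j}.
ok = check_detailed_balance(u=[[0.0, 0.7], [0.0, -0.4, 1.1]])
```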

The offered service rate vector under the stationary distribution is $\sum_{\mathbf{r} \in \mathcal{R}} \pi(\mathbf{r})\,\mathbf{r}$. In general, for $\boldsymbol{\lambda}$ in the interior of $\bar{\mathcal{R}}$, we expect to find values for the parameters $u_{i,j}$ as a function of $\boldsymbol{\lambda}$ and $\mathcal{R}$ such that the offered service rate vector equals $\boldsymbol{\lambda}$. Due to the exponential form in (11), it turns out that the right structure to introduce is

$$u_{i,j} = r_{i,j}\, v_i, \quad \forall i \in \mathcal{L},\ j \in \{0, 1, \ldots, k_i\}, \tag{12}$$

where $v_i \in \mathbb{R}$, and obtain suitable values for $\mathbf{v}$ as a function of $\boldsymbol{\lambda}$ and $\mathcal{R}$ such that the offered service rate vector equals $\boldsymbol{\lambda}$. To emphasize the dependency on $\mathbf{v}$, from now onwards, we denote the stationary distribution by $\pi_{\mathbf{v}}$ and the offered service rate vector by

$$\mathbf{s}_{\mathbf{v}} = \sum_{\mathbf{r} \in \mathcal{R}} \pi_{\mathbf{v}}(\mathbf{r})\,\mathbf{r}. \tag{13}$$

Substituting (12), we can simplify (11) to obtain

$$\pi_{\mathbf{v}}(\mathbf{r}) = \frac{\exp(\mathbf{r} \cdot \mathbf{v})}{\sum_{\tilde{\mathbf{r}} \in \mathcal{R}} \exp(\tilde{\mathbf{r}} \cdot \mathbf{v})}. \tag{14}$$

Optimization framework: We utilize the optimization framework in [15] to show that values for $\mathbf{v}$ exist such that $\mathbf{s}_{\mathbf{v}} = \boldsymbol{\lambda}$. In particular, we show that the unique solution $\mathbf{v}^*$ to an optimization problem given below has the property $\mathbf{s}_{\mathbf{v}^*} = \boldsymbol{\lambda}$. Next, we describe the intuitive steps to arrive at the optimization problem. If $\boldsymbol{\lambda}$ is in the interior of $\bar{\mathcal{R}}$, then $\boldsymbol{\lambda}$ can be expressed as a convex combination of the elements of $\mathcal{R}$, i.e., there exists a valid probability distribution $\mu$ on $\mathcal{R}$ such that $\sum_{\mathbf{r} \in \mathcal{R}} \mu(\mathbf{r})\,\mathbf{r} = \boldsymbol{\lambda}$. For a given distribution $\mu$, we are interested in choosing $\mathbf{v}$ such that $\pi_{\mathbf{v}}$ is close to $\mu$. We consider the KL divergence of $\pi_{\mathbf{v}}$ from $\mu$, given by $D(\mu \,\|\, \pi_{\mathbf{v}})$. Minimizing $D(\mu \,\|\, \pi_{\mathbf{v}})$ over the parameter $\mathbf{v}$ is equivalent, in terms of the optimal solution(s), to maximizing $\sum_{\mathbf{r} \in \mathcal{R}} \mu(\mathbf{r}) \log \pi_{\mathbf{v}}(\mathbf{r})$ over $\mathbf{v}$, as $\sum_{\mathbf{r}} \mu(\mathbf{r})\log\mu(\mathbf{r})$ is a constant. Simplifying leads to the optimization problem as follows:

$$\begin{aligned} F(\mu(\mathbf{r}), \pi_{\mathbf{v}}(\mathbf{r})) &= \sum_{\mathbf{r} \in \mathcal{R}} \mu(\mathbf{r}) \log \pi_{\mathbf{v}}(\mathbf{r}) \\ &\stackrel{(a)}{=} \sum_{\mathbf{r} \in \mathcal{R}} \mu(\mathbf{r})\, \mathbf{r} \cdot \mathbf{v} - \log\Big(\sum_{\mathbf{r} \in \mathcal{R}} \exp(\mathbf{r} \cdot \mathbf{v})\Big) \\ &\stackrel{(b)}{=} \boldsymbol{\lambda} \cdot \mathbf{v} - \log\Big(\sum_{\mathbf{r} \in \mathcal{R}} \exp(\mathbf{r} \cdot \mathbf{v})\Big). \end{aligned}$$

Here, $(a)$ follows from (14) and $(b)$ follows from the assumption $\sum_{\mathbf{r} \in \mathcal{R}} \mu(\mathbf{r})\,\mathbf{r} = \boldsymbol{\lambda}$. From now onwards, we denote the objective function by $F(\mathbf{v}, \boldsymbol{\lambda})$. To summarize, the optimization problem of interest is, given $\boldsymbol{\lambda}$,

$$\begin{aligned} \text{maximize } \ & F(\mathbf{v}, \boldsymbol{\lambda}) = \boldsymbol{\lambda} \cdot \mathbf{v} - \log\Big(\sum_{\mathbf{r} \in \mathcal{R}} \exp(\mathbf{r} \cdot \mathbf{v})\Big) \\ \text{subject to } \ & \mathbf{v} \in \mathbb{R}^n. \end{aligned} \tag{15}$$

The following lemma regarding the optimization problem in (15) is a key ingredient to the main results.

###### Lemma 4

Let $\boldsymbol{\lambda}$ be in the interior of $\bar{\mathcal{R}}$. The optimization problem in (15) has a unique solution $\mathbf{v}^*$, which is finite. In addition, the offered service rate vector under $\mathbf{v}^*$ is equal to the arrival rate vector, i.e., $\mathbf{s}_{\mathbf{v}^*} = \boldsymbol{\lambda}$.

###### Proof:

See Appendix. \qed

The important observations are that the objective function is concave in $\mathbf{v}$ and that the gradient with respect to $\mathbf{v}$ is $\boldsymbol{\lambda} - \mathbf{s}_{\mathbf{v}}$. With the offered service rate equal to the arrival rate, the next step is to show that the queues drain at a rate equal to the arrival rate.
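The gradient identity $\nabla_{\mathbf{v}} F = \boldsymbol{\lambda} - \mathbf{s}_{\mathbf{v}}$ can be checked numerically by finite differences on a toy set (a sketch, not the paper's code):

```python
import math

def F(v, lam, R):
    """Objective of (15): lam.v - log sum_r exp(r.v)."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return dot(lam, v) - math.log(sum(math.exp(dot(r, v)) for r in R))

def s_v(v, R):
    """Offered service rate (13): expectation of r under pi_v(r) ~ exp(r.v)."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    w = [math.exp(dot(r, v)) for r in R]
    return [sum(wk * r[i] for wk, r in zip(w, R)) / sum(w)
            for i in range(len(v))]

R = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (0.5, 0.5)]
lam, v, h = (0.3, 0.2), (0.1, -0.2), 1e-6
s = s_v(v, R)
for i in range(2):
    vp = list(v); vp[i] += h
    fd = (F(vp, lam, R) - F(v, lam, R)) / h   # finite-difference partial
    assert abs(fd - (lam[i] - s[i])) < 1e-4   # matches lam - s_v
```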

### V-A Proof of Theorem 1

Rate stability of the non-adaptive algorithm: We establish the rate stability of the non-adaptive algorithm with the result given in Lemma 4 as follows.

Consider time instances $\nu_k$ for $k \ge 0$ with $\nu_0 = 0$, and interval lengths $\Gamma_k := \nu_{k+1} - \nu_k$. The queue at the $i$-th link can be upper bounded as follows: the offered service during the time interval $[\nu_{k+1}, \nu_{k+2}]$ is used to serve the arrivals during the time interval $[\nu_k, \nu_{k+1}]$ alone. Consider a time $t$, and choose $l$ such that $\nu_l \le t < \nu_{l+1}$. Using (1) and the above upper bounding technique, we obtain

$$\begin{aligned} Q_i(t) &= A_i(t) - \int_0^t r_i(z)\,\mathbb{I}(Q_i(z) > 0)\,dz \\ &\le \sum_{k=0}^{l-2}\Big[A_i(\nu_{k+1}) - A_i(\nu_k) - \int_{\nu_{k+1}}^{\nu_{k+2}} r_i(z)\,dz\Big]^+ + A_i(t) - A_i(\nu_{l-1}), \end{aligned} \tag{16}$$

where $[x]^+ := \max(x, 0)$.

For each interval $k$, define the following two random variables:

$$\alpha_i(k) := \frac{A_i(\nu_{k+1}) - A_i(\nu_k)}{\Gamma_k}, \quad \text{and} \quad \beta_i(k) := \frac{1}{\Gamma_k}\int_{\nu_k}^{\nu_{k+1}} r_i(z)\,dz.$$

It follows from the strong law of large numbers that, with probability $1$, $\alpha_i(k) \to \lambda_i$. From Lemma 4 and the ergodic theorem for Markov chains, it follows that, with probability $1$, $\beta_i(k) \to \lambda_i$. Since the arrival process is non-decreasing and the increments are bounded by $K$, we have

$$A_i(t) - A_i(\nu_{l-1}) \le A_i(\nu_{l+1}) - A_i(\nu_{l-1}) \le K(\nu_{l+1} - \nu_{l-1}) = K(\Gamma_{l-1} + \Gamma_l). \tag{17}$$

Rewriting (16) with the above defined random variables and applying (17) along with $t \ge \nu_l$, we obtain

$$\frac{Q_i(t)}{t} \le \frac{1}{\nu_l}\sum_{k=0}^{l-2}\Gamma_k\big[\alpha_i(k) - \beta_i(k+1)\big]^+ + \frac{K(\Gamma_{l-1} + \Gamma_l)}{\nu_l}. \tag{18}$$

In (18), the second term on the right hand side (RHS) goes to zero as $t \to \infty$ (and hence $l \to \infty$), since the interval lengths are chosen such that $(\Gamma_{l-1} + \Gamma_l)/\nu_l \to 0$. The first term on the RHS of (18) goes to zero with probability $1$ as $l \to \infty$, since $\alpha_i(k) \to \lambda_i$ and $\beta_i(k) \to \lambda_i$. Thus, for any given $\boldsymbol{\lambda}$ in the interior of $\bar{\mathcal{R}}$, with probability $1$,

$$\lim_{t\to\infty} \frac{Q_i(t)}{t} = 0, \quad \forall i \in \mathcal{L},$$

which completes the proof.

This result is important for the following two reasons.

1. The result shows that this algorithm has good performance, and an algorithm that approaches the operating point of this algorithm has the potential to perform “well.” Essentially, this aspect is utilized to obtain the adaptive algorithm.

2. The non-adaptive algorithm does not require the knowledge of the number of links $n$ or of $\epsilon$, as required by the adaptive algorithm. This suggests the existence of similar gradient-like algorithms that perform “well” with different algorithm parameters that may not depend on $n$ or $\epsilon$. We do not address this question in the paper, but the non-adaptive algorithm will serve as the starting point to address such issues.

## VI Throughput Optimality of Algorithm

In this section, we establish the throughput-optimality of the adaptive algorithm for a particular choice of parameters. The algorithm parameters used in this section depend on the number of links $n$ and on $\epsilon$. It is evident from the theorem that $\epsilon$ determines how close the algorithm is to optimal performance. Define

$$C(n) := 35\,(2\bar{K} + K)^2\Big(\frac{\bar{K}^2 n^2}{2} + n\Big).$$

We set all the step sizes (irrespective of interval) to

$$\alpha_l = \alpha(n, \epsilon) := \epsilon^2 / C(n), \tag{19}$$

and the bound $D$ used in the projection to

$$D = D(n, \epsilon) := \frac{16\bar{K}}{\underline{K}}\,\frac{n}{\epsilon}\,\log\Big\lceil \frac{2\bar{K}}{\epsilon} \Big\rceil + \bar{K}. \tag{20}$$

All the interval lengths (irrespective of interval) are set to

$$T_l = T(n, \epsilon) := \exp\Big(\hat{K}\,\frac{n^2}{\epsilon}\log\frac{n}{\epsilon}\Big) \tag{21}$$

for some large enough constant $\hat{K}$.
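To give a feel for these magnitudes, the sketch below evaluates (19)-(21) as reconstructed here; `K_under` (the constant $\underline{K}$) and `K_hat` are assumptions of this sketch, and the arrival bound $K$ is taken equal to $\bar{K}$ for simplicity:

```python
import math

def algorithm_parameters(n, eps, K_bar, K_under, K_hat):
    """Evaluate the parameter choices (19)-(21) as reconstructed above."""
    K = K_bar
    C = 35 * (2 * K_bar + K) ** 2 * (K_bar ** 2 * n ** 2 / 2 + n)
    alpha = eps ** 2 / C                                        # (19)
    D = (16 * K_bar / K_under) * (n / eps) \
        * math.log(math.ceil(2 * K_bar / eps)) + K_bar          # (20)
    T = math.exp(K_hat * (n ** 2 / eps) * math.log(n / eps))    # (21)
    return alpha, D, T

# Even for a tiny network, the interval length T is already enormous,
# which is why these choices serve the proofs rather than practice.
alpha, D, T = algorithm_parameters(n=2, eps=0.5, K_bar=1.0, K_under=0.5, K_hat=1.0)
```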

###### Remark 2

The large value of $T(n, \epsilon)$ in (21) is due to the poor bound on the conductance of the rate allocation Markov chain. The parameters given by (19), (20) and (21) are one possible choice. We would like to emphasize that this choice is primarily for the purpose of the proofs. The right choice of parameters (and even of the update functions) in practice is subject to further study, especially based on the network configuration and delay requirements. Some comments on this are given in Section VIII.

We start with the optimization framework developed in the previous section. For the adaptive algorithm, the relevant optimization problem is as follows: given $\epsilon > 0$ and $\boldsymbol{\lambda}$ such that $\boldsymbol{\lambda} + \epsilon\mathbf{1} \in \Lambda$,

$$\begin{aligned} \text{maximize } \ & F_\epsilon(\mathbf{v}) := F\Big(\mathbf{v},\ \boldsymbol{\lambda} + \frac{\epsilon}{4}\mathbf{1}\Big) \\ \text{subject to } \ & \mathbf{v} \in \mathbb{R}^n. \end{aligned}$$

The following result is an extension of Lemma 4.

###### Lemma 5

Consider any given and . Then, the optimization problem in (VI) is strictly concave in with gradient and Hessian

 H(F(v))=−(Eπv[rrT]−Eπv[r]Eπv[rT]).

Further, let . Then, it has a unique solution , which is finite, such that the offered service rate vector under is equal to , i.e., In addition, if , then the optimal value is such that

 $\|v^*\|_\infty \le \frac{16\bar{K}\underline{K}n}{\epsilon}\log\Big\lceil\frac{2\bar{K}}{\epsilon}\Big\rceil$. (23)
###### Proof:

See Appendix. \qed
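The structure of the Hessian, the negated covariance matrix of the rate vector, can be checked numerically on a toy rate region. The sketch below assumes a product-form stationary distribution $\pi^v(r) \propto \exp(v \cdot r)$ over a small set of feasible allocations; the two-link allocation set is invented for illustration.

```python
import math, random

# Toy rate region: two interfering links, at most one transmits at a time.
allocations = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]

def gibbs(v):
    """Stationary distribution pi_v(r) proportional to exp(v . r)."""
    w = [math.exp(v[0] * r[0] + v[1] * r[1]) for r in allocations]
    z = sum(w)
    return [x / z for x in w]

def hessian(v):
    """H(F(v)) = -(E[r r^T] - E[r] E[r]^T): the negated covariance of r."""
    p = gibbs(v)
    mean = [sum(p[k] * allocations[k][i] for k in range(3)) for i in range(2)]
    h = [[0.0, 0.0], [0.0, 0.0]]
    for i in range(2):
        for j in range(2):
            second = sum(p[k] * allocations[k][i] * allocations[k][j] for k in range(3))
            h[i][j] = -(second - mean[i] * mean[j])
    return h

random.seed(1)
h = hessian((0.3, -0.5))
for _ in range(100):
    x = (random.uniform(-1, 1), random.uniform(-1, 1))
    quad = sum(x[i] * h[i][j] * x[j] for i in range(2) for j in range(2))
    assert quad <= 1e-12   # negative semidefinite quadratic form: F is concave
```

Since covariance matrices are positive semidefinite, every quadratic form in the negated matrix is non-positive, which is exactly the strict concavity Lemma 5 relies on.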

The update step in (6), which is central to the adaptive algorithm, can be intuitively thought of as a gradient descent technique for solving the above optimization problem. Technically, it is different because the arrival rate and offered service rate are replaced with their empirical values for decentralized operation. The algorithm parameters can be chosen to account for this, and doing so forms the central theme of this section.
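A minimal one-link sketch of this projected-gradient view, with exact expectations standing in for the empirical values ($\lambda$, $\epsilon$, $\alpha$, and $D$ below are illustrative, not the values prescribed by (19)–(20)):

```python
import math

# One link with rate levels {0, 1}; pi_v puts mass e^v / (1 + e^v) on rate 1.
def offered_rate(v):
    return math.exp(v) / (1.0 + math.exp(v))

lam, eps = 0.5, 0.2        # arrival rate and slack (illustrative)
alpha, D = 0.5, 10.0       # step size and projection bound (illustrative)
v = 0.0
for _ in range(500):
    grad = (lam + eps / 4.0) - offered_rate(v)  # gradient of F_eps at v
    v = min(max(v + alpha * grad, 0.0), D)      # projected gradient ascent
# At the fixed point, the offered service rate matches lambda + eps/4.
assert abs(offered_rate(v) - (lam + eps / 4.0)) < 1e-6
```

The fixed point is precisely the $v^*$ of Lemma 5: the offered rate equals the inflated arrival rate $\lambda + \frac{\epsilon}{4}$.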

### VI-A Within an update interval

Consider a single update interval. During this interval the algorithm uses fixed parameters. For simplicity, in this subsection, we drop the interval index and write $T$, $\alpha$, and $v$ for the parameters in use during this interval. For the rate allocation Markov chain (MC) introduced in Section V, we obtain an upper bound on the convergence time, i.e., the mixing time.

To obtain this bound, we uniformize the CTMC (continuous-time MC) and use the results given in Section IV on the mixing time of the resulting DTMC (discrete-time MC). Let $A$ denote the uniformization constant. The resulting DTMC has the same state space, with transition probability matrix $P$ whose off-diagonal entries equal the corresponding CTMC transition rates divided by $A$, and whose diagonal entries absorb the remaining probability mass. With our choice of parameters given by (12), we can simplify (10) to

 $f(\hat{r}, r) = \exp\Big(\sum_{i=1}^n r_i v_i\, \mathbb{I}(r_i \neq \hat{r}_i)\Big)$. (24)

For all $\hat{r}, r$, the quantity $f(\hat{r}, r)$ is clearly positive and finite. Since at most finitely many elements in every row of the transition rate matrix of the CTMC are positive, the uniformization constant $A$ can be chosen to dominate every row's total exit rate. Therefore, $P$ is a valid transition probability matrix.

The DTMC has the same stationary distribution as the CTMC. In addition, the CTMC and the DTMC are in one-to-one correspondence through an underlying independent Poisson process with rate $A$. In this subsection, time $t$ denotes the time within the update interval, i.e., $t = 0$ corresponds to the start of the interval. Let $\mu(t)$ be the distribution over the state space given by the CTMC at time $t$, and let $\zeta$ be a Poisson random variable with parameter $At$. Then, we have

 $\mu(t) = \sum_{m \in \mathbb{Z}_+} \Pr(\zeta = m)\,\mu(0)P^m = \mu(0)\exp\big(At(P - I)\big)$, (25)

where $I$ is the identity matrix. Next, we provide the upper bound on the mixing time of the CTMC.
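The uniformization identity (25) can be verified on a toy two-state CTMC; the rate matrix and uniformization constant below are invented for illustration:

```python
import math

# Toy CTMC on two states with rate matrix A_rate (rows sum to zero).
A_rate = [[-1.0, 1.0], [2.0, -2.0]]
LAM = 4.0   # uniformization constant, at least the maximum exit rate
P = [[A_rate[i][j] / LAM + (1.0 if i == j else 0.0) for j in range(2)]
     for i in range(2)]

def distribution_at(t, mu0, terms=400):
    """mu(t) = sum_m Pr(zeta = m) mu0 P^m, zeta ~ Poisson(LAM * t), as in (25)."""
    mu = [0.0, 0.0]
    w = list(mu0)                   # mu0 P^m, starting with m = 0
    weight = math.exp(-LAM * t)     # Poisson pmf at m = 0
    for m in range(terms):
        mu = [mu[k] + weight * w[k] for k in range(2)]
        w = [w[0] * P[0][k] + w[1] * P[1][k] for k in range(2)]
        weight *= LAM * t / (m + 1)
    return mu

mu = distribution_at(5.0, [1.0, 0.0])
# By t = 5 the chain has mixed to its stationary distribution (2/3, 1/3).
assert abs(mu[0] - 2.0 / 3.0) < 1e-6 and abs(mu[1] - 1.0 / 3.0) < 1e-6
```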

###### Lemma 6

Consider any $\rho_1 > 0$. Then, there exists a constant $K_1$, such that, if

 $t \ge \exp\Big(K_1\Big(n\|v\|_\infty + n\log\frac{1}{\epsilon}\Big)\Big)\log\frac{1}{\rho_1}$, (26)

then the total variation distance between the probability distribution at time $t$ given by (25) and the stationary distribution given by (14) is smaller than $\rho_1$, i.e., $\|\mu(t) - \pi^v\|_{TV} \le \rho_1$.

###### Proof:

See Appendix. \qed

Lemma 6 is used to show that the error associated with using empirical values for the arrival rate and offered service rate in the update rule (6) can be made arbitrarily small by choosing a large enough update interval length $T$. This is formally stated in the next lemma.

###### Lemma 7

Consider any $\rho_2 > 0$. Then, there exists a constant $K_2$, such that, if the update interval length satisfies

 $T \ge \exp\Big(K_2\Big(n\|v\|_\infty + n\log\frac{1}{\epsilon}\Big)\Big)\frac{1}{\rho_2}$,

then for any update interval $l$,

 $\mathbb{E}\big[\|\hat{\lambda}(l) - \lambda\|_1\big] + \mathbb{E}\big[\|\hat{s}(l) - s_v\|_1\big] \le \rho_2$. (27)
###### Proof:

See Appendix. \qed

Thus, the important result is that, due to the mixing of the rate allocation Markov chain, the empirical offered service rate is close to the true offered service rate. The next step is to address whether the offered service rates over multiple update intervals are higher than the arrival rates.

### VI-B Over multiple update intervals

We consider multiple update intervals, and establish that the average empirical offered service rate is strictly higher than the arrival rate. This result follows from the observation that, if the errors in approximating the true values by empirical values are sufficiently small, then the expected value of the gradient of $F_\epsilon$, averaged over a sufficiently large number of intervals, should be small. In this case, we can expect the average offered service rate to be close to $\lambda + \frac{\epsilon}{4}\mathbf{1}$. Since this is strictly higher than the arrival rate, we can expect the average offered service rate to be strictly higher than the arrival rate as well. The result is formally stated next.

###### Lemma 8

Consider $N$ update intervals. Then, the average of the empirical service rates over these update intervals is greater than or equal to $\lambda + \frac{\epsilon}{8}\mathbf{1}$, i.e.,

 $\frac{1}{N}\sum_{l=1}^N \mathbb{E}[\hat{s}(l)] \ge \lambda + \frac{\epsilon}{8}\mathbf{1}$.
###### Proof:

See Appendix. \qed
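The averaging effect behind Lemma 8 can be illustrated on a one-link toy model: even when each interval's empirical service rate carries a small bounded error (playing the role of $\rho_2$ in Lemma 7), the gradient updates drive the long-run average above $\lambda + \frac{\epsilon}{8}$. All numbers below are illustrative.

```python
import math, random

def offered_rate(v):            # one link with rate levels {0, 1}
    return math.exp(v) / (1.0 + math.exp(v))

random.seed(0)
lam, eps, alpha = 0.5, 0.2, 0.5   # illustrative values
v, N = 0.0, 400
s_hat_sum = 0.0
for l in range(N):
    # Empirical service rate: true rate plus a small bounded estimation
    # error, mimicking a long enough update interval T (Lemma 7).
    s_hat = offered_rate(v) + random.uniform(-0.01, 0.01)
    s_hat_sum += s_hat
    v = max(v + alpha * ((lam + eps / 4.0) - s_hat), 0.0)
# The average empirical service rate exceeds lambda + eps/8 = 0.525.
assert s_hat_sum / N >= lam + eps / 8.0
```

The gap between the target $\lambda + \frac{\epsilon}{4}$ and the guarantee $\lambda + \frac{\epsilon}{8}$ absorbs both the estimation error and the transient of the gradient iteration.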

Now, we proceed to show that the appropriate ‘drift’ required for stability is obtained.

### VI-C Proof of Theorem 2

Consider the underlying network Markov chain consisting of all the queues in the network, the update parameters, and the resulting rate allocation vectors at time $t$, i.e., $X(t) = (\mathbf{Q}(t), \mathbf{v}(t), \mathbf{r}(t))$ for $t \ge 0$. It follows from the system model and the algorithm description that $X(t)$ is a time-homogeneous Markov chain on an uncountable state space $\mathcal{X}$. The $\sigma$-field on $\mathcal{X}$ considered is the Borel $\sigma$-field associated with the product topology. For more details on dealing with general state-space Markov chains, we refer readers to [22].

We consider a quadratic Lyapunov function of the form $V(X) = \sum_i \big(Q_i^2 + v_i^2 + r_i^2\big)$. In order to establish positive Harris recurrence, we use the multi-step888This is a special case of the state-dependent drift criteria in [22]. Foster–Lyapunov drift criterion to establish positive recurrence of a set of the form $\{X : V(X) \le b\}$, for some finite $b$. From the assumption on the arrival processes, it follows that this set is a closed petite set (for definition and details see [22, 13]). It is well known that these two results imply positive Harris recurrence [22].

Next, we obtain the required drift criteria. For simplicity, we write $TN$ for the product $T \cdot N$ in the rest of this section. Consider

 $\mathbb{E}[Q_i^2(TN) - Q_i^2(0)] = \mathbb{E}\big[(Q_i(TN) - Q_i(0))^2 + 2Q_i(0)(Q_i(TN) - Q_i(0))\big] \stackrel{(a)}{\le} \big(\max(\underline{K}, \bar{K})\,TN\big)^2 + 2Q_i(0)\,\mathbb{E}[Q_i(TN) - Q_i(0)]$.

Here, $(a)$ follows from the fact that over unit time the queue-length change is bounded in magnitude by $\max(\underline{K}, \bar{K})$. Now, we look at two cases. If $Q_i(0) \ge \bar{K}\,TN$, the queue clearly remains non-empty during the interval, as the service rate is less than or equal to $\bar{K}$. For this case, from Lemma 8,

 $2Q_i(0)\,\mathbb{E}[Q_i(TN) - Q_i(0)] = 2Q_i(0)\,T\sum_{l=1}^N \big(\lambda_i - \mathbb{E}[\hat{s}_i(l)]\big) \le -\frac{\epsilon}{4}TN\,Q_i(0) \stackrel{(a)}{\le} -\frac{\epsilon}{4}TN\,Q_i(0) + \frac{\epsilon}{4}\bar{K}(TN)^2$.

Here, $(a)$ is trivial, but the extra term is added to ensure that the right-hand side evaluates to a non-negative value when $Q_i(0) \le \bar{K}\,TN$. If $Q_i(0) < \bar{K}\,TN$, then clearly $\mathbb{E}[Q_i^2(TN) - Q_i^2(0)] \le \big((\underline{K} + \bar{K})\,TN\big)^2$. Since the bounds for each case do not evaluate to negative values for the other case, we have

 $\mathbb{E}[Q_i^2(TN) - Q_i^2(0)] \le -\frac{\epsilon}{4}TN\,Q_i(0) + \Big((\underline{K} + \bar{K})^2 + \frac{\epsilon}{4}\bar{K}\Big)(TN)^2$.

Since both $v_i$ and $r_i$ are bounded, there exists some fixed $M(n,\epsilon)$ such that

 $\mathbb{E}[v_i^2(TN) - v_i^2(0)] + \mathbb{E}[r_i^2(TN) - r_i^2(0)] \le M(n,\epsilon)$.

Summing over all $i$, we obtain

 $\mathbb{E}[V(X(N)) - V(X(0))] \le -\frac{\epsilon}{4}TN\Big(\sum_{i=1}^n Q_i(0)\Big) + nM(n,\epsilon) + n\Big((\underline{K} + \bar{K})^2 + \frac{\epsilon}{4}\bar{K}\Big)(TN)^2$.
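As a sanity check of this drift argument, a single toy queue whose mean arrival rate is below its mean service rate exhibits the strict negative drift of $Q^2$ when started from a large backlog (all parameters below are illustrative):

```python
import random

random.seed(42)
lam, mu = 0.3, 0.5       # arrival prob < service prob per slot (illustrative)
trials, horizon, q0 = 200, 50, 100.0
drift = 0.0
for _ in range(trials):
    q = q0
    for _ in range(horizon):
        a = 1.0 if random.random() < lam else 0.0   # Bernoulli arrival
        s = 1.0 if random.random() < mu else 0.0    # Bernoulli service
        q = max(q + a - s, 0.0)
    drift += q * q - q0 * q0
drift /= trials
# Expected change of Q^2 over the horizon is strictly negative,
# roughly 2*q0*(lam - mu) per slot for a large initial backlog.
assert drift < 0.0
```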

This shows that there exists some $b > 0$ such that, for all initial states with $\sum_{i=1}^n Q_i(0) > b$, there is strict negative drift. Hence, the set