Optimal Multi-Server Allocation to Parallel Queues With Independent Random Queue-Server Connectivity

Hussein Al-Zubaidy, Ioannis Lambadaris, Yannis Viniotis

H. Al-Zubaidy is with the Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON, M5S 1A1 Canada, e-mail: hzubaidy@comm.utoronto.ca. I. Lambadaris is with the Department of Systems and Computer Engineering, Carleton University, Ottawa, ON, K1S 5B6 Canada, e-mail: ioannis.lambadaris@sce.carleton.ca. Y. Viniotis is with the Electrical and Computer Engineering Department, North Carolina State University, Raleigh, NC, USA, e-mail: candice@ncsu.edu.
This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC).
Abstract

We investigate an optimal scheduling problem in a discrete-time system of parallel queues that are served by identical, randomly connected servers. Each queue may be connected to a subset of the servers during any given time slot. This model has been widely used in studies of emerging 3G/4G wireless systems. We introduce the class of Most Balancing (MB) policies and provide their mathematical characterization. We prove that MB policies are optimal; we define optimality as minimization, in a stochastic ordering sense, of a range of cost functions of the queue lengths, including the process of the total number of packets in the system. We use stochastic coupling arguments for our proof. We introduce the Least Connected Server First/Longest Connected Queue (LCSF/LCQ) policy as an easy-to-implement approximation of MB policies. We conduct a simulation study to compare the performance of several policies. The simulation results show that: (a) in all cases, LCSF/LCQ approximations to the MB policies outperform the other policies, (b) randomized policies perform fairly close to the optimal one, and, (c) the performance advantage of the optimal policy over the other simulated policies increases as the channel connectivity probability decreases and as the number of servers in the system increases.

1 Introduction, Model Description and Prior Research

Emerging 3G/4G wireless networks can be categorized as high-speed, IP-based, packet access networks. They exploit channel variability, through data rate adaptation, and user diversity to increase channel capacity. These systems usually employ a mixture of Time and Code Division Multiple Access (TDMA/CDMA) schemes. Time is divided into equal-size slots, each of which can be allocated to one or more users. To optimize the use of the enhanced data rate, these systems allow several users to share the wireless channel simultaneously using CDMA. This minimizes the capacity wasted when the whole channel is allocated to a single user who cannot utilize all of it. Another reason for sharing system capacity among several users in the same time slot is that some of the user equipment at the receiving side might have design limitations on the amount of data it can receive and process at a given time.

The connectivity of users to the base station in any wireless system varies with time and is best modeled as a random process. The application of stochastic modeling and queuing theory to wireless systems is well established in the literature. Modeling wireless systems as parallel queues with random queue/server connectivity was used by Tassiulas and Ephremides [3], Ganti, Modiano and Tsitsiklis [6] and many others to study scheduler optimization in wireless systems. In the following subsection, we provide a more formal model description and motivation for the problem at hand.

1.1 Model Description

In this work, we assume that time is slotted into equal-length deterministic intervals. We model the wireless system under investigation as a set of $N$ parallel queues with infinite capacity (see Figure 1); the queues correspond to the different users in the system. We define $X_i(t)$ to represent the number of packets in queue $i$ at the beginning of time slot $t$. The queues share a set of $K$ identical servers, each server representing a network resource, e.g., a transmission channel. We make no assumption regarding the number of servers relative to the number of queues, i.e., $K$ can be less than, equal to, or greater than $N$. The packets in this system are assumed to have constant length, and require one time slot to complete service. A server can serve at most one packet during any given time slot, and can only serve connected, non-empty queues. Therefore, the system can serve up to $K$ packets during each time slot. Those packets may belong to one or several queues.

The channel connectivity between a queue and any server is random. The state of the channel connecting queue $i$ to server $j$ during time slot $t$ is denoted by $C_{i,j}(t)$ and can be either connected ($C_{i,j}(t) = 1$) or not connected ($C_{i,j}(t) = 0$). Therefore, in a real system $C_{i,j}(t)$ determines whether transmission channel $j$ can be used by user $i$ or not. We assume that $C_{i,j}(t)$, for all $i \in \{1, \ldots, N\}$, $j \in \{1, \ldots, K\}$ and $t$, are independent, Bernoulli random variables with parameter $p$.

The number of arrivals to queue $i$ during time slot $t$ is denoted by $A_i(t)$. The random variables $A_i(t)$ are assumed to have a Bernoulli distribution. We require that arrival processes to different queues be independent of each other; we further require that the arrival processes be independent of the connectivity processes $C_{i,j}(t)$. The symmetry and independence assumptions are necessary for the coupling arguments we use in our optimality proofs. The rest are simplifying assumptions that can be relaxed at the price of a more complex and maybe less intuitive proof.
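
For concreteness, the random primitives of one time slot can be sampled as in the following Python sketch; the values of N, K, p and the arrival parameter q are illustrative assumptions, not values taken from the paper.

```python
import random

N, K = 4, 3      # number of queues and servers (illustrative values)
p, q = 0.7, 0.2  # connectivity and arrival probabilities (both Bernoulli)

def sample_slot(rng=random):
    """Draw the random primitives of one time slot: the N x K
    connectivity matrix C and the arrival vector A."""
    C = [[1 if rng.random() < p else 0 for _ in range(K)] for _ in range(N)]
    A = [1 if rng.random() < q else 0 for _ in range(N)]
    return C, A
```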

A scheduling policy (or server allocation policy, or scheduler) decides, at the beginning of each time slot, which servers will be assigned to which queues during that time slot. The objective of this work is to identify and analyze the optimal scheduling policy that minimizes, in a stochastic ordering sense, a range of cost functions of the system queue sizes, including the total number of queued packets, in the aforementioned system. The choice of the class of cost functions and the minimization process are discussed in detail in Section 5.

Figure 1: Abstraction of downlink scheduler in a multi-server wireless network.

1.2 Previous Work and Our Contributions

In the literature, there is substantial research effort focusing on the subject of optimal scheduling in wireless networks with random connectivity. Tassiulas and Ephremides [3] studied the problem of allocating a single, randomly connected server to a set of parallel queues. They proved, using stochastic coupling arguments, that a LCQ (Longest Connected Queue) policy is optimal. In our work we investigate a more general model that studies the optimal allocation of $K$ randomly connected servers to $N$ parallel queues. We show that LCQ is not always optimal in a multi-server system where multiple servers can be allocated to each queue at any given time slot. Bambos and Michailidis [4] worked on a similar model (a continuous time version of [3] with finite buffer capacity) and proved that under stationary ergodic input job flow and modulation processes, both 'Maximum Connected Workload' and LCQ dynamic allocation policies maximize the stability region for this system. Furthermore, they proved that a policy that allocates the server to the connected queue with the fewest empty spaces stochastically minimizes the loss flow and maximizes the throughput [5].

Another relevant result is that reported by Ganti, Modiano and Tsitsiklis [6]. They presented a model for a satellite node that has $K$ transmitters. The system was modeled by a set of $N$ parallel queues with symmetrical statistics competing for $K$ identical, randomly connected servers. At each time slot, no more than one server is allocated to each scheduled queue. They proved, using stochastic coupling arguments, that a policy that allocates the servers to the longest connected queues at each time slot is optimal. This model is similar to the one we consider in this work, except that in our model one or more servers can be allocated to each queue in the system. A further, stronger difference between the two models is that we consider the case where each queue has independent connectivities to different servers. We make these assumptions for a more suitable representation of the 3G/4G wireless systems described earlier. These differences make it substantially harder to identify (and even describe) the optimal policy (see Section 3). A more recent result that has relevance to our work is the one reported by Kittipiyakul and Javidi in [7]. They proved, using dynamic programming, that a 'maximum-throughput and load-balancing' policy minimizes the expected average cost for a two-queue, multi-server system with random connectivity. In our research work, we prove optimality of the most balancing policies in the more general problem of a multi-queue (more than two queues) and multi-server system with random channel connectivity. A stronger distinction of our work is that we prove optimality in a stochastic ordering sense, which is a stronger notion of optimality compared to the expected average cost criterion that was used in [7]. Lott and Teneketzis [8] investigated a multi-class system of weighted cost parallel queues and servers with random connectivity. They also used the same restriction of one server per queue used in [6]. They showed that an index rule is optimal and provided conditions sufficient, but not necessary, to guarantee its optimality.

Koole et al. [9] studied a model similar to that of [3] and [5]. They found that the 'Best User' policy maximizes the expected discounted number of successful transmissions. Liu et al. [10], [11] studied the optimality of opportunistic schedulers (e.g., the Proportional Fair (PF) scheduler). They presented the characteristics and optimality conditions for such schedulers. However, Andrews [12] showed that there are six different implementation algorithms of a PF scheduler, none of which is stable. For more information on resource allocation and optimization in wireless networks, the reader may consult [13], [14], [15], [16], [17], and [18].

The model we present in this work can be applied to much of the previous work described above. In Section 9 we discuss this applicability for three key publications, namely [3], [6] and [7], that are strongly related to our own. We also show how our model can be reduced to their models and used to describe the problems they investigated.

In summary, the main contributions of our work are the following:

  1. We introduce and show the existence of the class of Most Balancing (MB) scheduling policies in the model of Figure 1 (see Equations (7) and (8)). Intuitively, an MB policy attempts to balance all queue sizes at every time slot, so that the total sum of queue size differences will be minimized.

  2. We prove the optimality of MB policies for minimizing, in stochastic ordering sense, a set of functionals of the queue lengths (see Theorem 1).

  3. We provide low-overhead, heuristic approximations for an MB policy. At any time slot, such policies allocate the "least connected servers first" to their "longest connected queues" (LCSF/LCQ). These policies have low computational complexity and thus can be easily implemented. We evaluate the performance of these approximations via simulations.

The rest of the article is organized as follows. In Section 2, we introduce notation and define the scheduling policies. In Section 3, we introduce and provide a detailed description of the MB policies. In Section 4, we introduce and characterize balancing interchanges, which we will use in the proof of MB optimality. In Section 5, we present the main result, i.e., the optimality of MB policies. In Section 6, we present the Least Balancing (LB) policies, and show that these policies perform the worst among all work-conserving policies. MB and LB policies provide upper and lower performance bounds. In Section 7, we introduce practical, low-overhead approximations for such policies, namely the LCSF/LCQ policy and the MCSF/SCQ policy, with their implementation algorithms. In Section 8, we present simulation results for different scheduling policies. In Section 9, we give some final remarks that show the applicability of our model to problems studied in previous work. We present proofs for some of our results in the Appendix.

2 Scheduling Policies

Recall that $N$ and $K$ denote the number of queues and servers respectively in the model introduced in Figure 1. We will use bold face, UPPER CASE and lower case letters to represent vector/matrix quantities, random variables and sample values respectively. In order to represent the policy action that corresponds to "idling" a server, we introduce a special, "dummy" queue which is denoted as queue 0. Allocating a server to this queue is equivalent to idling that server. By default, queue 0 is permanently connected to all servers and contains only "dummy" packets. Let $\mathbb{1}_{\{E\}}$ denote the indicator function for condition $E$. Throughout this article, we will use the following notation:

  • $\mathbf{C}(t) = [C_{i,j}(t)]$ is an $N \times K$ matrix, where $C_{i,j}(t)$ for $i \in \{1, \ldots, N\}$, $j \in \{1, \ldots, K\}$ is the channel connectivity random variable as defined in Section 1. By assumption, $C_{0,j}(t) = 1$ for all $j$ and $t$.

  • $\mathbf{X}(t) = (X_1(t), \ldots, X_N(t))$ is the vector of queue lengths at the beginning of time slot $t$, measured in number of packets. We assume that the initial queue lengths are finite.

  • $\mathbf{W}(t) = (W_0(t), W_1(t), \ldots, W_N(t))$ is the withdrawal control. For any $i$, $W_i(t)$ denotes the number of packets withdrawn from queue $i$ (and assigned to servers) during time slot $t$.

  • $\mathbf{A}(t) = (A_1(t), \ldots, A_N(t))$ is the vector of the number of exogenous arrivals during time slot $t$. Arrivals $A_i(t)$ to queue $i$ are as defined in Section 1.

  • For ease of reference, we call the tuple $\mathbf{s}(t) = (\mathbf{X}(t), \mathbf{C}(t))$ the "state" of the system at the beginning of time slot $t$.

For any (feasible) control $\mathbf{W}(t)$, the system described previously evolves according to

$$X_i(t+1) = X_i(t) - W_i(t) + A_i(t), \qquad i = 1, \ldots, N. \qquad (1)$$

We assume that arrivals during time slot $t$ are added after removing served packets. Therefore, packets that arrive during time slot $t$ have no effect on the controller decision at that time slot and may only be withdrawn during slot $t+1$ or later. For convenience, and in order to ensure that the controls below are well defined for all $t$, we define $X_0(t) = 0$ for all $t$. We define controller policies more formally next.
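
In code, the one-slot evolution of Equation (1) is simply the following (a minimal sketch; X, W and A are length-$N$ lists for the real queues):

```python
def evolve(X, W, A):
    """Equation (1): arrivals are added only after the served packets
    have been withdrawn, so A(t) cannot affect the decision in slot t."""
    return [x - w + a for x, w, a in zip(X, W, A)]
```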

2.1 Feasible Scheduling and Withdrawal Controls

The withdrawal control $\mathbf{W}(t)$ defined earlier does not provide any information regarding server allocation. Such information is necessary for our optimality proof. To capture it, we define the vector $\mathbf{S}(t) = (S_1(t), \ldots, S_K(t))$, where $S_j(t) \in \{0, 1, \ldots, N\}$ denotes the index of the queue that is selected (according to some rule) to be served by server $j$ during time slot $t$. Note that serving the "dummy" queue, i.e., setting $S_j(t) = 0$, indicates that server $j$ is idling during time slot $t$. For future reference, we will call $\mathbf{S}(t)$ the scheduling (or server allocation) control.

Using the previous notation and given a scheduling control vector $\mathbf{S}(t)$, we can compute the withdrawal control vector $\mathbf{W}(t)$ as:

$$W_i(t) = \sum_{j=1}^{K} \mathbb{1}_{\{S_j(t) = i\}}, \qquad i = 0, 1, \ldots, N. \qquad (2)$$

We say that a given vector $\mathbf{S}(t)$ is a feasible scheduling control (during time slot $t$) if: (a) a server is allocated only to a connected queue, i.e., $C_{S_j(t), j}(t) = 1$ for all $j$, and, (b) the number of servers allocated to a queue (dummy queue excluded) does not exceed the size of the queue at time $t$. Similarly, we say that a vector $\mathbf{W}(t)$ is a feasible withdrawal control (during time slot $t$) if there exists a feasible scheduling control $\mathbf{S}(t)$ that satisfies Equation (2).

Conditions (a) and (b) above are also necessary for feasibility of a scheduling control vector $\mathbf{S}(t)$. From Equation (2), a feasible withdrawal control satisfies the following necessary conditions:

$$0 \le W_i(t) \le \min\Big\{ X_i(t),\ \sum_{j=1}^{K} C_{i,j}(t) \Big\}, \qquad i = 1, \ldots, N, \qquad (3)$$
$$\sum_{i=0}^{N} W_i(t) = K. \qquad (4)$$

For the rest of this article, we will refer to $\mathbf{S}(t)$ as an implementation of the given feasible control $\mathbf{W}(t)$. We denote the set of all feasible withdrawal controls while in state $\mathbf{s}(t)$ by $\mathcal{W}(\mathbf{s}(t))$.

Note from Equation (2) that, given a feasible scheduling control $\mathbf{S}(t)$, a feasible withdrawal control $\mathbf{W}(t)$ can be readily constructed. Note, however, that, for any feasible $\mathbf{W}(t)$, the feasible scheduling control may not be unique. Furthermore, given a feasible $\mathbf{W}(t)$, the construction of the scheduling control may not be straightforward (given a state and a feasible withdrawal vector, one can determine a feasible scheduling control by performing a brute-force search over all feasible vectors $\mathbf{S}(t)$) and will not be examined in this article.
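
A minimal sketch of Equation (2) and of feasibility conditions (a) and (b), under the conventions above (S holds 1-indexed queue numbers with 0 denoting the dummy queue; C is the N x K matrix with 0-indexed rows):

```python
def withdrawals_from_schedule(S, N):
    """Equation (2): W[i] counts the servers assigned to queue i;
    index 0 is the dummy (idling) queue."""
    W = [0] * (N + 1)
    for i in S:
        W[i] += 1
    return W

def is_feasible_schedule(S, X, C):
    """Conditions (a) and (b): every server serves a connected queue,
    and no real queue gives up more packets than it holds."""
    N = len(X)
    for j, i in enumerate(S):
        if i != 0 and C[i - 1][j] == 0:          # (a) violated
            return False
    W = withdrawals_from_schedule(S, N)
    return all(W[i] <= X[i - 1] for i in range(1, N + 1))  # (b)
```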

2.2 Definition of Scheduling Policies

A scheduling policy $\pi$ (or policy for simplicity) is a rule that determines feasible withdrawal vectors $\mathbf{W}(t)$ for all $t$, as a function of the past history and current state of the system. The state history $\mathcal{H}(t)$ is given by the sequence of random variables

$$\mathcal{H}(t) = \big( \mathbf{s}(1), \mathbf{s}(2), \ldots, \mathbf{s}(t) \big). \qquad (5)$$

Let $\mathbb{H}_t$ be the set of all state histories up to time slot $t$. Then a policy $\pi$ can be formally defined as the sequence of measurable functions

$$u_t : \mathbb{H}_t \to \mathbb{Z}_+^{N+1} \quad \text{s.t.} \quad u_t(\mathcal{H}(t)) \in \mathcal{W}(\mathbf{s}(t)), \qquad (6)$$

where $\mathbb{Z}_+$ is the set of non-negative integers and $\mathbb{Z}_+^{N+1} = \mathbb{Z}_+ \times \cdots \times \mathbb{Z}_+$, where the Cartesian product is taken $N+1$ times.

At each time slot $t$, the following sequence of events happens: first, the connectivities $\mathbf{C}(t)$ and the queue lengths $\mathbf{X}(t)$ are observed. Second, the packet withdrawal vector $\mathbf{W}(t)$ is determined according to a given policy. Finally, the new arrivals $\mathbf{A}(t)$ are added to determine the next queue length vector $\mathbf{X}(t+1)$.

We denote the set of all scheduling policies described by Equation (6) by $\Pi$. We introduce next a subset of $\Pi$, namely the class of Most Balancing (MB) policies. The goal of this work is to prove that MB policies are optimal (in a stochastic ordering sense).

3 The Class of MB Policies

In this section, we provide a description and mathematical characterization of the class of MB policies. Intuitively, the MB policies "attempt to minimize the queue length differences in the system at every time slot $t$". For a more formal characterization of MB policies, we first define the following:

Given a state $\mathbf{s}(t)$ and a policy $\pi$ that chooses the feasible control $\mathbf{W}(t)$ at time slot $t$, define the "updated queue size" $\hat{X}_i(t) = X_i(t) - W_i(t)$ as the size of queue $i$, after applying the control and just before adding the arrivals during time slot $t$. Note that because we let $X_0(t) = 0$, we have $\hat{X}_0(t) \in \mathbb{Z}$, where $\mathbb{Z}$ is the set of all integers, i.e., we allow $\hat{X}_0(t)$ to be negative.

We define $\kappa^{\pi}(t)$, the "imbalance index" of policy $\pi$ at time slot $t$, as the following sum of differences:

$$\kappa^{\pi}(t) = \sum_{i=1}^{N} \sum_{k=i+1}^{N+1} \big( \hat{X}_{[i]}(t) - \hat{X}_{[k]}(t) \big), \qquad (7)$$

where $[i]$ denotes the index of the $i$-th longest queue after applying the control and before adding the arrivals at time slot $t$. By convention, queue '0' (the "dummy queue") will always have order $N+1$ (i.e., it is treated as the queue with the minimum length). This definition ensures that the differences are nonnegative and a pair of queues is accounted for in the summation only once; moreover, as we shall see in Lemma .2.1, this definition allows for a straightforward calculation and comparison of various policies. (We have experimented with alternatives to Equation (7) that use lexicographic ordering of queues and "Min-Max" definitions, e.g., minimize the length of the largest queue. However, we were not able to derive results equivalent to Lemma .2.1.)

It follows from Equation (7) that the imbalance index attains its minimum value when all queues have the same length (equal to the shortest queue length), which is indicative of a fully balanced system. It also follows that the index attains its maximum value when the queue contents are concentrated in as few queues as possible.
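
Under the reconstruction of Equation (7) given above, the imbalance index can be computed directly, as in the following sketch (the dummy queue is appended last regardless of its value, per the ordering convention):

```python
def imbalance_index(X_hat, X_hat_dummy=0):
    """Equation (7): sum of ordered pairwise differences over the
    updated queue sizes, with the dummy queue ordered last."""
    q = sorted(X_hat, reverse=True) + [X_hat_dummy]
    return sum(q[i] - q[k]
               for i in range(len(q))
               for k in range(i + 1, len(q)))
```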

Let $\Pi^{MB}$ denote the set of all MB policies; we define the elements of this set as follows:

Definition: A Most Balancing (MB) policy is a policy that, at every time slot $t$, chooses a feasible withdrawal vector $\mathbf{W}(t)$ such that the imbalance index at that time slot is minimized, i.e.,

$$\Pi^{MB} = \Big\{ \pi \in \Pi : \ \mathbf{W}(t) \in \arg\min_{\mathbf{W} \in \mathcal{W}(\mathbf{s}(t))} \kappa^{\pi}(t), \ \forall t \Big\}. \qquad (8)$$

The set in Equation (8) is well-defined and non-empty, since the minimization is over a finite set. Note that the set of MB policies may have more than one element. This could happen, for example, when at a given time slot $t$, a server is connected to two or more queues of equal size, which happen to be the longest queues connected to this server after allocating all the other servers. To illustrate this case, consider a two-queue system with a single, fully-connected server at time slot $t$. Let, for instance, $\mathbf{X}(t) = (2, 2)$. Assume that policy $\pi^1$ (respectively $\pi^2$) chooses the withdrawal vector $(1, 0)$ (respectively $(0, 1)$). Then both policies minimize the imbalance index, and $\kappa^{\pi^1}(t) = \kappa^{\pi^2}(t)$.

Given $\mathbf{X}(t)$ and $\mathbf{C}(t)$, one can construct an MB policy using a direct search over all possible server allocations. For large $N$ and $K$, this can be a challenging computational task and is not the focus of this work. In Section 7, we provide a low-complexity heuristic algorithm (LCSF/LCQ) to approximate MB policies.

Remark 1.

Note that the LCQ policy in [3] is a most balancing (MB) policy for $K = 1$ (i.e., the one-server system presented in [3]). Extension of LCQ to $K > 1$ (i.e., allocating all the servers to the longest queue in the multiserver model) may not result in an MB policy, as the following example demonstrates.

Consider a system of three queues with three fully-connected servers during time slot $t$; let, for instance, $\mathbf{X}(t) = (3, 2, 1)$. An LCQ policy in the spirit of [3] that allocates all servers to the longest connected queue results in the updated queue size vector $(0, 2, 1)$. Moreover, an LCQ policy in the spirit of [6] that allocates the three servers to the three longest connected queues (one each) results in the updated queue size vector $(2, 1, 0)$. Both policies have $\kappa(t) = 7$. An MB policy results in the updated queue size vector $(1, 1, 1)$ and $\kappa(t) = 3$.
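
Using the imbalance_index sketch from above (and the illustrative queue sizes chosen in this example), the three allocations can be checked numerically:

```python
print(imbalance_index([0, 2, 1]))  # LCQ of [3]: all servers to queue 1 -> 7
print(imbalance_index([2, 1, 0]))  # LCQ of [6]: one server per queue  -> 7
print(imbalance_index([1, 1, 1]))  # MB allocation                     -> 3
```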

3.1 Comparing arbitrary policies to an MB policy

When comparing various policies to an MB policy, the definition in Equation (8) is cumbersome since it involves all time instants $t$. The subsets we introduce next define policies that are related to MB policies and allow us to perform comparisons one single instant at a time.

Consider any fixed $t$; we say that a policy $\pi$ "has the MB property" at time $t$, if it achieves the minimum value of the index $\kappa^{\pi}(t)$.

Definition: For any given time $\tau$, $\Pi_{\tau}$ denotes the set of policies that have the MB property at all time slots $t \le \tau$ (and are arbitrary for $t > \tau$).

We have that $\Pi^{MB} \subseteq \Pi_{\tau}$ for every $\tau$. Note that the set $\Pi_{\tau}$ is not empty, since MB policies are elements of it. We can easily see that these sets form a monotone sequence, with

$$\Pi_{\tau+1} \subseteq \Pi_{\tau}, \qquad \tau = 1, 2, \ldots \qquad (9)$$

Then the set $\Pi^{MB}$ in Equation (8) can be defined as $\Pi^{MB} = \bigcap_{\tau=1}^{\infty} \Pi_{\tau}$.

The vector $\mathbf{D}$ defined in Equation (10) below is a measure of how much an arbitrary policy differs from a given MB policy during a given time slot $t$.

Definition: Consider a given state $\mathbf{s}(t)$ and a policy $\pi$ that chooses the feasible withdrawal vector $\mathbf{W}(t)$ during time slot $t$. Let $\mathbf{W}^{MB}(t)$ be a withdrawal vector chosen by an MB policy during the same time slot $t$. We define the $(N+1)$-dimensional vector $\mathbf{D}$ as

$$D_i = W_i^{MB}(t) - W_i(t), \qquad i = 0, 1, \ldots, N. \qquad (10)$$

Note that, for notational simplicity, we omit the dependence of $\mathbf{D}$ on the policies and the time index $t$. Intuitively, a negative element of vector $\mathbf{D}$ indicates that more packets than necessary (compared to a policy that has the MB property) have been removed from the corresponding queue under policy $\pi$.

The following lemma quantifies the difference between an arbitrary policy and an MB policy (at time $t$). Its proof is given in Appendix .1.

Lemma 1.

Consider a given state $\mathbf{s}(t)$ and a policy $\pi$. Then, (a) if $\mathbf{D} = \mathbf{0}$, the policy has the MB property at time $t$, and, (b) if $\pi$ has the MB property at time $t$, the vector $\mathbf{D}$ has components that are $0$ or $\pm 1$ only.

Consider a policy $\pi$; let $d = \frac{1}{2} \sum_{i=0}^{N} |D_i|$. As we show in the Appendix (see Lemma .3.2), $d$ is integer-valued and $0 \le d \le K$. In view of Lemma 1, $d$ can be seen as a measure of "how close" the policy $\pi$ is to having the MB property at time $t$.

Definition: For any given time $t$ and integer $k$, where $0 \le k \le K$, define the set $\Pi_{t,k}$ as the set that contains all policies $\pi$, such that $d \le k$ during time slot $t$.

From Lemma 1, we can see that the policies in $\Pi_{t,0}$ have the MB property at time $t$. We can easily check that $\Pi_{t,K} = \Pi$, so every policy belongs to $\Pi_{t,K}$ by default. The sets $\Pi_{t,k}$ form a monotone sequence, with

$$\Pi_{t,k} \subseteq \Pi_{t,k+1}, \qquad k = 0, 1, \ldots, K-1. \qquad (11)$$

We exploit the monotonicity property of the sets $\Pi_{t,k}$ in the next section, when we show how balancing interchanges reduce the imbalance index of a given policy.

Note that the set of all policies can be denoted as

$$\Pi = \Pi_{t,K}, \qquad \forall t. \qquad (12)$$

It follows from the last two equations that an arbitrary policy $\pi$ will also belong to a set $\Pi_{t,k}$, for some $k \le K$. The proof of optimality in Section 5 is based on comparisons of $\pi$ to a series of policies that belong to the subsets $\Pi_{t,k}$ (see Lemma 5).

4 Balancing Interchanges

In this section, we introduce the notion of "balancing interchanges". Intuitively, an interchange between two queues, $q_f$ and $q_t$, describes the action of withdrawing a packet from queue $q_f$ instead of queue $q_t$ (see Equations (15) and (16)). Such interchanges are used to relate the imbalance indices of various policies (see Equation (23)); balancing interchanges are special in two ways: (a) they do not increase the imbalance index (see Lemma 2) and thus provide a means to describe how a policy can be modified to obtain the MB property at time $t$, and, (b) they preserve the queue size ordering we define in the next section (see relations R1-R3 in Section 5.1). This ordering is crucial in proving optimality.

Interchanges can be implemented via server reallocation. Since there are $K$ servers, it is intuitive that at most $K$ interchanges suffice to convert any arbitrary policy to a policy that has the MB property at time $t$. The crux of Lemma 4, the main result of this section, is that such interchanges are balancing.

4.1 Interchanges between two queues

Let $q_f, q_t \in \{0, 1, \ldots, N\}$ represent the indices of two queues that we refer to as the 'from' and 'to' queues. Define the $(N+1)$-dimensional vector $\mathbf{e}^{(q_f, q_t)}$, whose $i$-th element is given by:

$$e_i^{(q_f, q_t)} = \begin{cases} +1, & i = q_f, \\ -1, & i = q_t, \\ 0, & \text{otherwise}. \end{cases} \qquad (13)$$

Fix an initial state at time slot $t$; consider a policy $\pi$ with a (feasible) withdrawal vector $\mathbf{W}(t)$. Let

$$\mathbf{W}'(t) = \mathbf{W}(t) + \mathbf{e}^{(q_f, q_t)} \qquad (14)$$

be another withdrawal vector.

be another withdrawal vector. The two vectors differ only in the two components ; under the withdrawal vector , an additional packet is removed from queue , while one packet less is removed from queue . Note that either or can be the dummy queue. In other words,

(15)
(16)
(17)

In the sequel, we will call $I_{(q_f, q_t)}$ an interchange between queues $q_f$ and $q_t$. We will call $I_{(q_f, q_t)}$ a feasible interchange if it results in a feasible withdrawal vector $\mathbf{W}'(t)$. It follows immediately from Equations (1) and (14) that the interchange will result in a new vector, $\hat{\mathbf{X}}'(t)$, of updated queue sizes, such that:

$$\hat{X}'_{q_f}(t) = \hat{X}_{q_f}(t) - 1, \qquad \hat{X}'_{q_t}(t) = \hat{X}_{q_t}(t) + 1, \qquad \hat{X}'_i(t) = \hat{X}_i(t) \ \text{otherwise}. \qquad (18)$$
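
A short sketch of an interchange and of the balancing test defined in Section 4.4 below (W and X_hat are $(N+1)$-vectors indexed by queue, with index 0 the dummy queue):

```python
def apply_interchange(W, qf, qt):
    """Equations (14)-(17): withdraw one more packet from queue qf
    and one fewer from queue qt; either index may be the dummy 0."""
    W2 = list(W)
    W2[qf] += 1
    W2[qt] -= 1
    return W2

def is_balancing(X_hat, qf, qt):
    """Equation (21): I(qf, qt) is balancing iff the 'from' queue is
    currently longer (in updated size) than the 'to' queue."""
    return X_hat[qf] > X_hat[qt]
```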

We are interested next in describing sufficient conditions for ensuring feasible interchanges.

4.2 Feasible Single-Server Reallocation

Given the state $\mathbf{s}(t)$, let $\mathbf{W}(t)$ be any feasible withdrawal vector at time slot $t$ that is implemented via $\mathbf{S}(t)$. We define a "feasible, single-server reallocation" (from queue $a$ to queue $b$) as the reallocation of a single server $j$ from queue $a$ to queue $b$, such that the new scheduling control is also feasible. The conditions $C_{b,j}(t) = 1$ and $W_b(t) < X_b(t)$ are sufficient for the reallocation of server $j$ (from queue $a$ to queue $b$) to be feasible.

A feasible, single-server reallocation from queue $a$ to queue $b$ results in a feasible interchange $I_{(b,a)}$ (one more packet is withdrawn from queue $b$ and one less from queue $a$). However, the reverse may not be true, as we detail in the following section.

4.3 Sufficient conditions for a feasible interchange

Consider again the state $\mathbf{s}(t)$ and feasible scheduling control $\mathbf{S}(t)$. The feasible interchange $I_{(q_f, q_t)}$ in Equation (14) may result from a sequence of feasible, single-server reallocations among several queues, as demonstrated in Figure 2.

Figure 2: A sequence of single-server reallocations results in a feasible interchange $I_{(q_f, q_t)}$. The dotted line denotes the original server allocation. The solid line denotes the server reallocation that implements $I_{(q_f, q_t)}$.

Let $(i_0, i_1, \ldots, i_n)$ denote a sequence of queue indices, where $i_0 = q_f$ and $i_n = q_t$. Let $j_m$ denote the server reallocated from queue $i_m$ to queue $i_{m-1}$. Then the following are sufficient conditions for the feasibility of the interchange operation of Equation (14):

$$C_{i_{m-1}, j_m}(t) = 1, \qquad m = 1, \ldots, n, \qquad (19)$$
$$W_{q_f}(t) < X_{q_f}(t), \qquad (20)$$

for some integer $n \le K$ and queue indices $i_1, \ldots, i_{n-1}$.

Constraint (19) ensures that connectivity conditions allow for the feasibility of all intermediate single-server reallocations. The sequence of server reallocations starts by reallocating server $j_1$ to queue $q_f$. In this case, queue $q_f$ is reduced by one packet (i.e., an extra packet is withdrawn from queue $q_f$) and queue $i_1$ is increased by one packet. Constraint (20) ensures that a packet can be withdrawn from queue $q_f$. The reallocation of server $j_1$ ensures that queue $i_1$ contains at least one spare packet, so that the second intermediate single-server reallocation is feasible even when $W_{i_1}(t) = X_{i_1}(t)$; the same is true for any queue $i_m$ in the sequence. Therefore, constraints (19) and (20) are also sufficient for the feasibility of the interchange $I_{(q_f, q_t)}$.

4.4 “Balancing” interchanges

Definition: A feasible interchange $I_{(q_f, q_t)}$ is "balancing" if

$$\hat{X}_{q_f}(t) > \hat{X}_{q_t}(t); \qquad (21)$$

it is "unbalancing" if

$$\hat{X}_{q_f}(t) \le \hat{X}_{q_t}(t). \qquad (22)$$

Balancing interchanges result in policies that may reduce the imbalance index, as the following lemma states.

Lemma 2.

Consider two policies $\pi$ and $\pi'$, related via the balancing interchange

$$\mathbf{W}^{\pi'}(t) = \mathbf{W}^{\pi}(t) + \mathbf{e}^{(q_f, q_t)}$$

at time slot $t$. Then the imbalance indices for the two policies are related via

$$\kappa^{\pi'}(t) = \begin{cases} \kappa^{\pi}(t) - 2\,(o_t - o_f), & \hat{X}_{q_f}(t) > \hat{X}_{q_t}(t) + 1, \\ \kappa^{\pi}(t), & \hat{X}_{q_f}(t) = \hat{X}_{q_t}(t) + 1, \end{cases} \qquad (23)$$

where $o_f$ (respectively $o_t$) is the order of queue $q_f$ (respectively $q_t$) in $\hat{\mathbf{X}}(t)$ when ordered in descending order, such that, when there exist multiple components that have the same value as $\hat{X}_{q_f}(t)$ (respectively $\hat{X}_{q_t}(t)$), only the last (respectively the first) of the components in order is considered. Intuitively, we use $o_t$ (respectively $o_f$) to refer to the order of the "shorter" (respectively the "longer") queue of the two queues used in the interchange.

The proof is a direct consequence of Lemma .2.1 in Appendix .2 and the fact that, by definition of the balancing interchange, we have $\hat{X}_{q_f}(t) > \hat{X}_{q_t}(t)$.

In words, Equation (23) states that an interchange $I_{(q_f, q_t)}$, when balancing, results in: either a cost reduction of $2(o_t - o_f)$ (when $\hat{X}_{q_f}(t) > \hat{X}_{q_t}(t) + 1$) or an unchanged cost (when $\hat{X}_{q_f}(t) = \hat{X}_{q_t}(t) + 1$). The latter case agrees with intuition, since the balancing interchange in this case will result in simply permuting the lengths of queues $q_f$ and $q_t$; this permutation does not change the total sum of differences (and hence the imbalance index) in the resulting queue length vector.

We determine next conditions that characterize what interchanges are balancing. We also describe how balancing interchanges transform an arbitrary policy to an MB policy.

4.5 How to determine balancing interchanges

Lemma 3 provides a selection criterion to systematically select balancing (and hence improving) interchanges. Lemma 4 provides a bound on the number of interchanges needed to convert any policy into one that has the MB property at time $t$. The proofs of the two lemmas are given in Appendices .3 and .4, respectively.

Lemma 3.

Consider a given state $\mathbf{s}(t)$ and a feasible withdrawal vector $\mathbf{W}(t)$. Any feasible interchange $I_{(q_f, q_t)}$ with indices $q_f$ and $q_t$ such that $D_{q_f} > 0$ and $D_{q_t} < 0$ is a balancing interchange.

Recall that $d = \frac{1}{2} \sum_{i=0}^{N} |D_i|$. Consider a sequence of $m$ balancing interchanges, $\mathbf{e}_1, \ldots, \mathbf{e}_m$. Let

$$\mathbf{W}'(t) = \mathbf{W}(t) + \sum_{l=1}^{m} \mathbf{e}_l.$$

We denote by $\pi'$ the policy that chooses the withdrawal vector $\mathbf{W}'(t)$. In other words, $\pi'$ denotes the policy that results from applying this sequence of interchanges.

Lemma 4.

For any policy $\pi \in \Pi_{t,k}$, $k$ balancing interchanges suffice to determine a policy $\pi'$ such that $\pi'$ has the MB property at time $t$.

Lemma 3 can be used to identify queues $q_f$ and $q_t$ during time slot $t$ such that the interchange $I_{(q_f, q_t)}$ is balancing. Lemma 4 shows that performing a sequence of such interchanges determines a policy that has the MB property for one more time slot. Both lemmas are crucial for the proof of our main result, since they indicate how a given policy can be improved using one balancing interchange at a time.

5 Optimality of MB Policies

In this section, we present the main result of this article, that is, the optimality of the Most Balancing (MB) policies. We will establish optimality for a range of performance criteria, including the minimization of the total number of packets in the system. We introduce the following definition.

5.1 Definition of Preferred Order

Let's define the relation $\preceq$ on $\mathbb{Z}_+^N$ first; we say $\mathbf{y} \preceq \mathbf{x}$ if:

  1. $y_i \le x_i$ for all $i$ (i.e., point-wise comparison),

  2. $\mathbf{y}$ is obtained from $\mathbf{x}$ by permuting two of its components; the two vectors differ only in two components $i$ and $k$, such that $y_i = x_k$ and $y_k = x_i$, or

  3. $\mathbf{y}$ is obtained from $\mathbf{x}$ by performing a "balancing interchange", in the sense of Equation (21), i.e., the two vectors differ in two components $i$ and $k$ only, where $x_i > x_k$, such that: $y_i = x_i - 1$ and $y_k = x_k + 1$.

To prove the optimality of MB policies, we will need a methodology that enables comparison of the queue lengths under different policies. Towards this end, we define a “preferred order” as follows:

Definition: (Preferred Order). The transitive closure of the relation $\preceq$ defines a partial order (which we call preferred order and use the symbol $\preceq_p$ to represent) on the set $\mathbb{Z}_+^N$.

The transitive closure [21], [6] of $\preceq$ on the set $\mathbb{Z}_+^N$ is the smallest transitive relation on $\mathbb{Z}_+^N$ that contains the relation $\preceq$. From the engineering point of view, $\mathbf{y} \preceq_p \mathbf{x}$ if $\mathbf{y}$ is obtained from $\mathbf{x}$ by performing a sequence of reductions, permutations of two components and/or balancing interchanges.

For example, if $\mathbf{x} = (2, 3, 4)$ and $\mathbf{y} = (4, 2, 3)$, then $\mathbf{y} \preceq_p \mathbf{x}$, since $\mathbf{y}$ can be obtained from $\mathbf{x}$ by performing the following two consecutive two-component permutations: first swap the second and third components of $\mathbf{x}$, yielding $(2, 4, 3)$; then swap the first and second components, yielding $(4, 2, 3)$.

Suppose that $\mathbf{x}, \mathbf{y}$ represent queue size vectors for our model. Statement R3 in this case describes moving a packet from one real, large queue to another smaller one (note that the queue with index 0 is not excluded, since a balancing interchange may represent the allocation of an idled server). We say that $\mathbf{y}$ is more balanced than $\mathbf{x}$ when R3 is satisfied. For example, if $\mathbf{x} = (3, 1)$, then a balancing interchange (where $i = 1$ and $k = 2$) will result in $\mathbf{y} = (2, 2)$.

5.2 The class of cost functions

Let $\mathbf{x}, \mathbf{y} \in \mathbb{Z}_+^N$ be two vectors representing queue lengths. Then we denote by $\mathcal{F}$ the class of real-valued functions on $\mathbb{Z}_+^N$ that are monotone, non-decreasing with respect to the partial order $\preceq_p$; that is, $f \in \mathcal{F}$ if and only if

$$\mathbf{y} \preceq_p \mathbf{x} \implies f(\mathbf{y}) \le f(\mathbf{x}). \qquad (24)$$

From (24) and the definition of preferred order, it can be easily seen that the function $f(\mathbf{x}) = \sum_{i=1}^{N} x_i$ belongs to $\mathcal{F}$. This function corresponds to the total number of queued packets in the system (the function $f(\mathbf{x}) = \max_i x_i$ is another member of the class $\mathcal{F}$).

For two real-valued random variables $X$ and $Y$, the relation $X \le_{st} Y \iff P(X > a) \le P(Y > a)$ for all $a \in \mathbb{R}$ defines the usual stochastic ordering [2]. In the remainder of this paper, we say that a policy $\sigma$ dominates another policy $\pi$ if

$$f(\mathbf{X}^{\sigma}(t)) \le_{st} f(\mathbf{X}^{\pi}(t)), \qquad \forall t, \qquad (25)$$

for all cost functions $f \in \mathcal{F}$.

We will need the following lemma to complete the proof of our main result presented in Theorem 1.

Lemma 5.

Consider an arbitrary policy $\pi \in \Pi_{t,k}$, where $k \ge 1$. Then, there exists a policy $\pi' \in \Pi_{t,k-1}$, such that $\pi'$ dominates $\pi$.

The full details of the proof of Lemma 5 are given in Appendix .5. The proof involves two parts. First, we construct a policy $\pi'$ by applying a balancing interchange to $\pi$; using Lemmas 3 and 4, we show that $\pi' \in \Pi_{t,k-1}$. Second, we prove that $\pi'$ dominates policy $\pi$ (see Equation (25)); this part employs coupling arguments.

5.3 The main result

In the following, $\mathbf{X}^{MB}(t)$ and $\mathbf{X}^{\pi}(t)$ represent the queue sizes under an MB policy and an arbitrary policy $\pi$, respectively.

Theorem 1.

Consider a system of $N$ queues served by $K$ identical servers, as shown in Figure 1, with the assumptions of Section 1. A Most Balancing (MB) policy dominates any arbitrary policy $\pi$ when applied to this system, i.e.,

$$f(\mathbf{X}^{MB}(t)) \le_{st} f(\mathbf{X}^{\pi}(t)) \qquad (26)$$

for all $t$ and all cost functions $f \in \mathcal{F}$.

Proof.

From (24) and the definition of stochastic dominance, it is sufficient to show that $f(\mathbf{X}^{MB}(t)) \le f(\mathbf{X}^{\pi}(t))$ for all $t$ and all sample paths in a suitable sample space. The sample space is the standard one used in stochastic coupling methods [1]; see Appendix .5 for more details.

To prove the optimality of an MB policy, $\pi^*$, we start with an arbitrary policy $\pi$ and apply a series of modifications that result in a sequence of policies $\pi_n$ ($n = 1, 2, \ldots$). The modified policies have the following properties:

  1. $\pi_n$ dominates the given policy $\pi$,

  2. $\pi_n \in \Pi_n$, i.e., policy $\pi_n$ has the MB property at time slots $t \le n$, and,

  3. $\pi_{n+1}$ dominates $\pi_n$ for all $n$ (i.e., $\pi_{n+1}$ has the MB property for a longer period of time than $\pi_n$).

Let $\pi$ be any arbitrary policy; then $\pi \in \Pi_{1,k}$ for some $k \le K$. Using Lemma 5 we can construct a policy that dominates the original policy $\pi$. Repeating this operation, we can construct policies that belong to $\Pi_{1,k-1}, \Pi_{1,k-2}, \ldots$, such that all dominate the original policy $\pi$. This sequence of construction steps will result in a policy that has the MB property at $t = 1$, i.e., a policy $\pi_1 \in \Pi_1$, that dominates $\pi$. We repeat the construction steps above for time slot 2, by improving on $\pi_1$, to obtain a policy $\pi_2$ that dominates $\pi_1$, and recursively for $t = 3, 4, \ldots$ to obtain policies $\pi_3, \pi_4, \ldots$ From the construction of $\pi_n$, we can see that it satisfies properties 1, 2 and 3 above.

Denote the limiting policy, as $n \to \infty$, by $\pi^*$. One can see that $\pi^*$ is an MB policy. Furthermore, $\pi^*$ dominates $\pi_n$, for all $n$, as well as the original policy $\pi$. ∎

Remark 2.

The optimal policy may not be unique. Our main objective is to prove the optimality of the MB policy, not its uniqueness. The optimality of MB policies makes intuitive sense; any such policy will tend to reduce the chance that any server idles. This is because an MB policy distributes the servers among the connected queues in the system in a way that keeps packets spread among all the queues in a "uniform" manner.

6 The Least Balancing Policies

The Least Balancing (LB) policies are the scheduling policies, among all work-conserving (non-idling) policies, that at every time slot $t$ choose a packet withdrawal vector that "maximizes the differences" between queue lengths in the system (i.e., maximizes $\kappa^{\pi}(t)$ in Equation (7)). In other words, if $\Pi^{LB}$ is the set of all LB policies and $\Pi^{WC}$ is the set of all work-conserving policies, then

$$\Pi^{LB} = \Big\{ \pi \in \Pi^{WC} : \ \kappa^{\pi}(t) = \max_{\sigma \in \Pi^{WC}} \kappa^{\sigma}(t), \ \forall t \Big\}. \qquad (27)$$

Maximizing the imbalance among the queues in the system will result in maximizing the number of empty queues at any time slot, thus maximizing the chance that servers are forced to idle in future time slots. This intuitively suggests that LB policies will be outperformed by any work conserving policy. The next theorem states this fact. Its proof is analogous to that of Theorem 1 and will not be given here.

Remark 3.

A non-work-conserving policy can be constructed such that it will perform worse than LB policies, e.g., a policy that idles all servers.

Theorem 2.

Consider a system of $N$ queues served by $K$ identical servers, under the assumptions described in Section 1. A Least Balancing (LB) policy is dominated by any arbitrary work-conserving policy $\pi$ when applied to this system, i.e.,

$$f(\mathbf{X}^{\pi}(t)) \le_{st} f(\mathbf{X}^{LB}(t)) \qquad (28)$$

for all $t$ and all cost functions $f \in \mathcal{F}$.

An LB policy has no practical significance, since it maximizes the cost functions presented earlier. Intuitively, it should also shrink the system stability region and hence reduce the system throughput. However, it is interesting to study the worst possible policy behavior and to measure its performance. The LB and MB policies provide lower and upper limits to the performance of any work-conserving policy. The performance of any policy can be measured by the deviation of its behavior from that of the MB and LB policies.

7 Heuristic Implementation Algorithms For MB and LB Policies

In this section, we present two heuristic policies that approximate the behavior of the MB and LB policies respectively. We present an implementation algorithm for each one of them.

7.1 Approximate Implementation of MB Policies

We introduce the Least Connected Server First/Longest Connected Queue (LCSF/LCQ) policy, a low-overhead, low-complexity approximation of MB policies. The policy is stationary and depends only on the current state $\mathbf{s}(t)$ during time slot $t$.

The LCSF/LCQ implementation during a given time slot $t$ is described as follows: the least connected server is identified and is allocated to its longest connected queue. The queue length is updated (i.e., decremented). We proceed accordingly to the next least connected server, until all servers are assigned. In algorithmic terms, the LCSF/LCQ policy can be described/implemented as follows:

Let $\mathcal{Q}_j$ denote the set of queues that are connected to server $j$ during time slot $t$; we omit the dependence on $t$ to simplify notation. Let $j_{(l)}$ be the $l$-th server in the sequence obtained by ordering the servers in ascending manner according to the size (set cardinality) of their connected-queue sets, i.e., $|\mathcal{Q}_{j_{(1)}}| \le |\mathcal{Q}_{j_{(2)}}| \le \cdots \le |\mathcal{Q}_{j_{(K)}}|$. Ties are broken arbitrarily. Then under the LCSF/LCQ policy, the servers are allocated according to the following algorithm:

Algorithm 1 (LCSF/LCQ Implementation).
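
The original pseudocode listing did not survive; the following Python sketch reconstructs the sequential allocation described above (variable names, tie-breaking, and the exact step order are assumptions, not the authors' listing):

```python
def lcsf_lcq(X, C):
    """LCSF/LCQ: allocate the least connected server first, each to
    its longest connected non-empty queue.
    X: lengths of the N real queues; C: N x K connectivity matrix.
    Returns S (S[j] = queue served by server j, 0 = idle) and the
    withdrawal vector W (index 0 = dummy queue)."""
    N, K = len(X), len(C[0])
    x = list(X)                      # working copy of queue lengths
    S, W = [0] * K, [0] * (N + 1)
    conn = [[i for i in range(N) if C[i][j]] for j in range(K)]
    for j in sorted(range(K), key=lambda j: len(conn[j])):  # LCSF order
        candidates = [i for i in conn[j] if x[i] > 0]
        if not candidates:           # no connected non-empty queue:
            continue                 # server j idles (S[j] stays 0)
        i = max(candidates, key=lambda i: x[i])  # longest connected queue
        S[j] = i + 1                 # queues are 1-indexed in the paper
        W[i + 1] += 1
        x[i] -= 1                    # update length before the next server
    return S, W
```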

Note that if, at its turn, a server's set of connected non-empty queues is empty, that server will not be allocated (i.e., it will be idle during time slot $t$). When run at $t$, Algorithm 1 produces two outputs: the scheduling control $\mathbf{S}(t)$ and the withdrawal vector $\mathbf{W}(t)$. In accordance with the definition of a policy in Equation (6), the LCSF/LCQ policy can be formally defined as the sequence of time-independent mappings that produce the withdrawal vector $\mathbf{W}(t)$ described above.

Lemma 6.

LCSF/LCQ is not an MB policy.

To prove Lemma 6 we present the following counterexample. Consider a system with $N = 4$ queues and $K = 7$ servers. At time slot $t$ the system has the following configuration: servers 1 to 6 are connected to queues 1, 2 and 3, and server 7 is connected to queues 1 and 4 only; let the queue state be, for instance, $\mathbf{X}(t) = (2, 1, 1, 1)$.

Under this configuration, the LCSF/LCQ algorithm allocates server 7 (the least connected server) to queue 1 (its longest connected queue) and can then withdraw only three more packets with servers 1 to 6, so three servers idle; this results in $\mathbf{W}(t) = (3, 2, 1, 1, 0)$ (where the first element represents the dummy queue, which by assumption holds no real packets) and $\hat{\mathbf{X}}(t) = (0, 0, 0, 1)$. A policy can be constructed that instead selects the feasible server allocation in which server 7 serves queue 4, which yields $\mathbf{W}(t) = (2, 2, 1, 1, 1)$ and $\hat{\mathbf{X}}(t) = (0, 0, 0, 0)$. Therefore, LCSF/LCQ is not an MB policy.

The LCSF/LCQ policy is of particular interest for the following reasons: (a) it follows a particular server allocation ordering (LCSF) to the servers' longest connected queues (LCQ) and thus it can be implemented using simple sequential server allocation with low computational complexity, (b) the selected server ordering (LCSF) and allocation (LCQ) intuitively attempt to reduce the size of the longest connected queue, thus reducing the imbalance among queues, and, (c) as we will see in Section 8, the LCSF/LCQ performance is statistically indistinguishable from that of an MB policy (implying that counterexamples similar to the one in the proof of Lemma 6 have low probability of occurrence under LCSF/LCQ system operation). Therefore, LCSF/LCQ can be proposed as an approximate heuristic for the implementation of MB policies.

7.2 Approximate Implementation of LB Policies

In this section, we present the MCSF/SCQ policy as a low complexity approximation of LB policies. We also provide an implementation algorithm for MCSF/SCQ using the same sequential server allocation principle that we used in Algorithm 1 above.

The Most Connected Server First/Shortest Connected Queue (MCSF/SCQ) policy is the server allocation policy that allocates each one of the $K$ servers to its shortest connected queue (not counting the packets already scheduled for service), starting with the most connected server first. The MCSF/SCQ implementation algorithm is analogous to Algorithm 1, except that the servers are considered in descending order of connectivity and each is allocated to its shortest connected non-empty queue:

Algorithm 2 (MCSF/SCQ Implementation).
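
A sketch of the corresponding changes relative to the lcsf_lcq sketch above (again an assumed reconstruction, not the authors' listing): servers are taken most connected first, and each goes to its shortest connected non-empty queue.

```python
def mcsf_scq(X, C):
    """MCSF/SCQ: most connected server first, each allocated to its
    shortest connected non-empty queue (LB approximation)."""
    N, K = len(X), len(C[0])
    x = list(X)
    S, W = [0] * K, [0] * (N + 1)
    conn = [[i for i in range(N) if C[i][j]] for j in range(K)]
    for j in sorted(range(K), key=lambda j: -len(conn[j])):  # MCSF order
        candidates = [i for i in conn[j] if x[i] > 0]
        if not candidates:
            continue
        i = min(candidates, key=lambda i: x[i])  # shortest connected queue
        S[j] = i + 1
        W[i + 1] += 1
        x[i] -= 1
    return S, W
```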

Comments analogous to the ones valid for Algorithm 1 are also valid for Algorithm 2.

8 Performance Evaluation and Simulation Results

We used simulation to study the performance of the system under the MB/LB policies and to compare it against the system performance under several other policies. The metric we used in this study is the (time) average of the total number of packets in the system.

We focused on two groups of simulations. In the first, we evaluate the system performance with respect to the number of queues ($N$) and servers ($K$) as well as the channel connectivity probability $p$ (Figures 3 to 7). Arrivals are assumed to be i.i.d. Bernoulli. In the second group (Figures 8(a) to 8(c)), we consider batch arrivals with random (uniformly distributed) batch size.

The policies used in this simulation are: LCSF/LCQ, as an approximation of an MB policy, and MCSF/SCQ, as an approximation of an LB policy. An MB policy was implemented using full search for the cases specified in this section, and its performance was indistinguishable from that of LCSF/LCQ. Therefore, in the simulation graphs the MB and LCSF/LCQ policies are represented by the same curves. The same holds for the LB and MCSF/SCQ policies. Other policies that were simulated include the randomized, Most Connected Server First/Longest Connected Queue (MCSF/LCQ), and Least Connected Server First/Shortest Connected Queue (LCSF/SCQ) policies. The randomized policy is the one that, at each time slot, allocates each server randomly and with equal probability to one of its connected queues. The MCSF/LCQ policy differs from the LCSF/LCQ policy in the order in which it allocates the servers. It uses the exact reverse order, starting the allocation with the most connected server and ending with the least connected one. However, it resembles the LCSF/LCQ policy in that it allocates each server to its longest connected queue. The LCSF/SCQ policy allocates each server, starting from the one with the least number of connected queues, to its shortest connected queue. The difference from an LCSF/LCQ policy is obviously the allocation to the shortest connected queue. This policy will result in greatly unbalanced queues and hence a performance that is closer to the LB policies.

Figure 3 shows the average total queue occupancy versus arrival rate under the five different policies. The system in this simulation is a symmetrical system with 16 parallel queues ($N = 16$), 16 identical servers ($K = 16$) and i.i.d. Bernoulli queue-to-server (channel) connectivity with parameter $p$.

Figure 3: Average total queue occupancy versus load under different policies, $N = 16$ and $K = 16$.

The curves in Figure 3 follow a shape that is initially almost flat and ends with a rapid increase. This abrupt increase happens at the point where the system becomes unstable; beyond it, the queue lengths in the system grow fast. The graph shows that LCSF/LCQ, the MB policy approximation, outperforms all other policies (99% confidence intervals are very narrow and would affect the readability of the graphs; therefore they are not included). It minimizes the average total queue occupancy and hence the queuing delay. We also noticed that it maximizes the system stability region and hence the system throughput as well. The MCSF/SCQ performed the worst. As expected, the performance of the other three policies lies between the performance of the MB and LB policies.

The MCSF/LCQ and LCSF/SCQ policies are variations of the MB and LB policies respectively. The performance of MCSF/LCQ policy is close to that of the MB policy. The difference in performance is due to the order of server allocation. On the other hand, the LCSF/SCQ policy shows a large performance improvement on that of the LB policy. This improvement is a result of the reordering of server allocations.

Figure 3 also shows that the randomized policy performs reasonably well. Moreover, its performance improves as the number of servers in the system decreases, as the next set of experiments shows.

8.1 The Effect of The Number of Servers

In this section, we study the effect of the number of servers on policy performance. Figures 4 and 5 show the average total queue occupancy versus arrival rate per queue under the five policies, in a symmetrical system with $N = 16$ and progressively fewer servers than in Figure 3. Comparing these two graphs to the one in Figure 3, we notice the following:

First, the performance advantage of LCSF/LCQ (and hence of an MB policy) over the other policies increases as the number of servers in the system increases. The presence of more servers implies that the server allocation action space is larger. Selecting the optimal (i.e., MB) allocation out of a large number of options yields a larger performance gain over an arbitrary policy than when the number of server allocation options is small.

Second, the stability region of the system becomes narrower when fewer servers are used. This is true because fewer resources (servers) are available to be allocated by the working policy in this case.

Finally, we notice that MCSF/LCQ performs very close to the LCSF/LCQ policy in the system with the fewest servers. Apparently, when $K$ is small, the order of server allocation does not have a big impact on the policy performance.

Figure 4: Average total queue occupancy versus load, $N = 16$.
Figure 5: Average total queue occupancy versus load, $N = 16$.

8.2 The Effect of Channel Connectivity

In this section we investigate the effect of channel connectivity on the performance of the previously considered policies. Figures 6 and 7 show this effect for two system configurations and a range of connectivity probabilities $p$. We observe the following:

First, we notice that for larger channel connection probabilities $p$, the effect of the policy behavior on the system performance becomes less significant. Therefore, the performance difference among the various policies gets smaller. The LCSF/LCQ policy still has a small advantage over the rest of the policies, even though it is statistically difficult to distinguish. MCSF/SCQ continues to have the worst performance. As $p$ increases, the probability that a server will end up connected to a group of empty queues becomes very small regardless of the policy in effect. In fact, when the servers have full connectivity to all queues (i.e., $p = 1$) we expect that any work-conserving policy will minimize the total number of packets in a symmetrical homogeneous system of queues, since any (work-conserving) policy will be optimal in a system with full connectivity.

Second, from all graphs we observe that there is a maximum input load that results in a stable system operation (maximum stable throughput). (The last point in every curve corresponds to an overloaded system operating beyond its stability region. As a result, the simulation is permanently in a "transient" state. Such points are shown in the presented graphs for illustrative purposes, in order to show the trend of the queue size.) An upper bound (for stable system operation) for the arrival rate $\lambda$ per queue is given by

$$\lambda < \frac{K}{N} \left( 1 - (1 - p)^N \right), \qquad (29)$$

i.e., the average number of packets entering the system ($N\lambda$) must be less than the rate at which they can be served. When $p = 1$, the stability condition in Inequality (29) reduces to $N\lambda < K$, which makes intuitive sense in such a system.
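
Under the reconstruction of Inequality (29) above (whose exact functional form is our assumption), the bound is easy to tabulate:

```python
def max_stable_rate(N, K, p):
    """Per-queue arrival-rate bound of Inequality (29): on average the
    servers can remove at most K * (1 - (1 - p)**N) packets per slot,
    shared among the N queues."""
    return (K / N) * (1 - (1 - p) ** N)

print(max_stable_rate(16, 16, 1.0))   # -> 1.0 packet/slot per queue
print(max_stable_rate(16, 16, 0.2))   # -> ~0.97
```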

Finally, we observe that the MCSF/LCQ policy performance is very close to that of LCSF/LCQ. However, its performance deteriorates in systems with a higher number of servers and lower queue-server connectivity probabilities. It is intuitive that with more servers available, the effect of the order of server allocations on the policy performance will increase. Since MCSF/LCQ differs from LCSF/LCQ only in the order of server allocation, more servers implies a larger performance difference. Also, the lower the connectivity probability, the higher the probability that a server will end up with no connectivity to any non-empty queue, and hence be forced to idle.

Figure 6: Average total queue occupancy versus load under different policies (panels (a)-(c) correspond to different values of $p$).
Figure 7: Average total queue occupancy versus load under different policies (panels (a)-(c) correspond to different values of $p$).

8.3 Batch Arrivals With Random Batch Sizes

We studied the performance of the presented policies in the case of batch arrivals with uniformly distributed batch sizes. Figure 8 shows the average total queue occupancy versus load for three cases, with average batch sizes 1.5, 3, and 5.5. The LCSF/LCQ policy clearly dominates all the other policies. However, the performance of the other policies, including MCSF/SCQ (the LB approximation), approaches that of the LCSF/LCQ policy as the average batch size increases. The performance of all the policies deteriorates when the arrivals become burstier, i.e., as the batch size increases.

(a) average batch size = 1.5.
(b) average batch size = 3.
(c) average batch size = 5.5.
Figure 8: Average total queue occupancy versus load, batch arrivals.

9 Final Remarks

The model and the results presented in this article can be regarded as a generalization (with added complexity as well as utility) of the models and results reported in [3], [6], and [7].

In [3], the authors investigated the optimal scheduling policy for a model of parallel queues and one randomly connected server. This model is a special case of the model we presented in this article, i.e., the case $K = 1$. Using stochastic dominance techniques, they proved that LCQ is optimal in that it minimizes the total number of packets in the system. In our work, we also use stochastic dominance techniques to prove the optimality of MB policies for a wide range of cost functions (cost functions that are monotone, non-decreasing with respect to the partial order $\preceq_p$), including the total number of packets in the system. It can be easily shown that for the case of a single server (i.e., $K = 1$) the LCQ policy minimizes the imbalance index; therefore, LCQ belongs to the set of MB policies.

In [6], the authors investigated the optimal policy for a model of parallel queues with a stack of $K$ servers. Each queue is randomly connected to the entire server stack. Only one server can be allocated to a queue at any time slot. In contrast, our model assumes independent queue-server connectivity, i.e., a queue can be connected to a subset of the servers and not connected to the rest at any given time slot. We also allow for multiple servers to be allocated (when connected) to any queue. Therefore, the model in [6] can also be considered as a special case of our model, i.e., by letting $C_{i,j}(t) = C_i(t)$ for all $j$ (so that each queue is either connected to all servers or to none) and by adding the feasibility constraint $W_i(t) \le 1$ for all $i \ge 1$. They proved that a policy that allocates the servers to the longest connected queues (LCQ) is optimal. Under the constraints above, this policy would also minimize the imbalance index among all feasible policies, i.e., this policy belongs to the set of MB policies.

In [7] the authors proved that, in a model of two parallel queues ($N = 2$) and multiple randomly connected servers, an MTLB (maximum throughput/load balancing) policy minimizes the expected total cost. They defined the cost as a class of functions of the queue lengths for the two queues in the system. In our work, we generalize the model in [7] as follows: (a) we extend the model to $N > 2$, (b) we optimize the cost function in the stochastic order sense, which implies the expected total cost criterion used in [7], and (c) we relax the supermodularity and convexity constraints that they enforced on the cost function, i.e., we prove our results for a larger set of cost functions that includes theirs.

The authors of [7] defined the MTLB policy as the one that minimizes the lexicographic order of the queue length vector while maximizing the instantaneous throughput. We can show that an MTLB policy belongs to the set of MB policies. To do that, we have to show that a policy which minimizes the lexicographic order: (a) also minimizes the imbalance index, i.e., it belongs to the set of MB policies, and (b) is a work-conserving policy. A work-conserving policy minimizes the number of idling servers and hence maximizes instantaneous throughput (by the definition of instantaneous throughput). Lemma 7 states these results formally.

Lemma 7.

Given the state $\mathbf{s}(t)$ during time slot $t$, let $\hat{\mathbf{X}}(t)$ be the updated queue size vector resulting from the feasible withdrawal vector $\mathbf{W}(t)$. Suppose that $\hat{\mathbf{X}}(t)$ precedes $\hat{\mathbf{X}}'(t)$ in the lexicographic order for every other feasible withdrawal vector $\mathbf{W}'(t)$. Then: (a) the vector $\hat{\mathbf{X}}(t)$ achieves the minimum imbalance index among all feasible vectors, and (b) a policy that selects $\mathbf{W}(t)$ is a work-conserving policy.

Proof.

(a) Assume to the contrary that $\hat{\mathbf{X}}(t)$ does not minimize the imbalance index. Then there must exist a feasible withdrawal vector whose resulting updated queue size vector has an imbalance index strictly less than that of $\hat{\mathbf{X}}(t)$. This implies that a policy that results in the withdrawal vector $\mathbf{W}(t)$, and therefore the vector $\hat{\mathbf{X}}(t)$, belongs to the set $\Pi_{t,k}$ for some $k \ge 1$, i.e., it does not have the MB property during time slot $t$. (For any given state, a policy that minimizes the imbalance index must exist, since the minimization is over a finite set.) According to Lemma 4, $k$ balancing interchanges (which are feasible interchanges) are required to make any policy in $\Pi_{t,k}$ have the MB property at time $t$. Lemma D-1 shows that such balancing interchanges are feasible. Therefore, the following balancing interchange is both feasible and improving (it reduces the imbalance index):

$$I_{(q_f, q_t)} \quad \text{with} \quad \hat{X}_{q_f}(t) > \hat{X}_{q_t}(t), \qquad (30)$$

for some queue indices $q_f$ and $q_t$.

In other words, we perform a feasible server reallocation from a shorter queue to a longer queue in the system during time slot $t$. The resulting updated queue size vector $\hat{\mathbf{X}}'(t)$ is related to the vector $\hat{\mathbf{X}}(t)$ as follows:

$$\hat{\mathbf{X}}'(t) = \hat{\mathbf{X}}(t) - \mathbf{e}^{(q_f, q_t)}. \qquad (31)$$

Since $\hat{X}_{q_f}(t) > \hat{X}_{q_t}(t)$ by definition, it is clear that $\hat{\mathbf{X}}'(t)$ strictly precedes $\hat{\mathbf{X}}(t)$ in the lexicographic order. This contradicts the initial assumption. Therefore, $\hat{\mathbf{X}}(t)$ must have the minimum imbalance index.

(b) A feasible interchange $I_{(i,0)}$ is a balancing one, since by definition of queue 0 and the interchange feasibility conditions we have $\hat{X}_i(t) > \hat{X}_0(t)$. (Queue 0 is permanently connected to all servers by assumption.) According to Lemma B-1, this interchange will definitely reduce the imbalance index. Therefore, any policy that intentionally idles servers can always be improved (i.e., its imbalance index reduced) by using the balancing interchange $I_{(i,0)}$ for some queue $i$.

In part (a) of this lemma, we showed that a policy that minimizes the lexicographic order also minimizes the imbalance index. We also showed that a policy that idles servers intentionally cannot achieve the minimum imbalance index. Therefore, only a work-conserving policy can minimize the lexicographic order. ∎

From the above, we conclude that the MTLB policy belongs to the class of MB policies.

10 Conclusion

In this work, we presented a model for dynamic packet scheduling in multi-server systems with random connectivity. This model can be used to study packet scheduling in emerging wireless systems. We modeled such systems via symmetric queues with random server connectivities and Bernoulli arrivals. We introduced the class of Most Balancing (MB) policies. These policies distribute the service capacity among the connected queues in the system in an effort to "equalize" the queue occupancies. A theoretical proof of the optimality of MB policies using stochastic coupling arguments was presented. Optimality was defined as minimization, in a stochastic ordering sense, of a range of cost functions of the queue lengths. The LCSF/LCQ policy was proposed as a good, low-complexity approximation of MB policies.

A simulation study was conducted to study the performance of five different policies. The results verified that the MB approximation outperformed all other policies, even when the arrivals were bursty. However, the performance of all policies deteriorates as the mean batch size increases. Furthermore, we observed (through simulation) that the performance gain of the optimal policy over the other policies is greatly reduced in this case. Finally, we observed that a randomized policy can perform very close to the optimal one in several cases.

Appendix .1 Proof of Lemma 1

Proof.

To prove part (a), assume that $\mathbf{D} = \mathbf{0}$; then, using Equation (10), we have: