# Minimum-Length Scheduling with Finite Queues: Solution Characterization and Algorithmic Framework

† Part of the material of this paper was presented at ISIT 2012 – IEEE International Symposium on Information Theory.

###### Abstract

We consider a set of transmitter-receiver pairs, or links, that share a common channel and address the problem of emptying backlogged queues at the transmitters in minimum time. The problem amounts to determining activation subsets of links and their time durations to form a minimum-length schedule. The problem of scheduling has been studied under various formulations before. In this paper, we present fundamental insights and solution characterizations that include: (i) showing that the complexity of the problem remains high for any continuous and increasing rate function, (ii) formulating and proving sufficient and necessary optimality conditions of two base scheduling strategies that correspond to emptying the queues using “one-at-a-time” or “all-at-once” strategies, (iii) presenting and proving the tractability of the special case in which the transmission rates are functions only of the cardinality of the link activation sets. These results are independent of physical-layer system specifications and are valid for any form of rate function. We then develop an algorithmic framework. The framework encompasses exact as well as sub-optimal, but fast, scheduling algorithms, all under a unified design principle. Finally, through computational experiments, we investigate the performance of several specific algorithms.

Index Terms: algorithm, optimality, scheduling, wireless networks.

## I Introduction

For multiple communication links sharing a common wireless channel, the fundamental aspect of access coordination is called scheduling. It amounts to deciding which links are allowed to transmit simultaneously and for how long they should do so. Usually, the selection of a schedule is driven by the goal of optimizing a cost criterion. Scheduling has a long history of investigation that has ranged from simple transmission models to fully cross-layered ones that combine rate and power control with overall network resource allocation. In this paper, we examine a version of the scheduling problem that arises from the objective of draining in minimum time the bit-contents that reside at the transmitters of a finite number of links. That is, we consider the multiple access or interference channel with finite traffic volume that must be delivered in minimum time.

Past work on this problem includes [15], in which a centralized, polynomial-time algorithm was obtained for static networks with specified link traffic requirements. The formulation was based on mapping the network to an undirected graph and on assuming that any two links can be successfully activated simultaneously as long as they do not share common vertices on the graph. In [7, 14], NP-hardness was established for the problem of determining a minimum-length schedule under a given traffic demand in a wireless network with SINR constraints, using a protocol model and a geometric model, respectively. In some special cases, the structure of the traffic demand allowed a polynomial-time algorithm [6].

In [4, 5] it was shown that more fundamental resource allocation problems in wireless networks with SINR constraints, such as node and link assignment, are also NP-hard. In these problems the goal is to assign at least one time slot to each node, or link, such that the number of time slots is minimized. Set-covering formulations enabled a column generation method for solving the resulting linear programming relaxations. For the minimum-length scheduling problem, a column-generation-based solution method was also used in [17], which can approach an optimal solution, with the advantage of a potentially reduced complexity. In [21] the minimum-length scheduling problem was formulated as a shortest path problem on directed acyclic graphs and the authors obtained suboptimal analytic characterizations. It is also possible to “absorb” the scheduling into the general network resource allocation problem, as done in [11], where the overall criterion is to maintain network stability. However, basic versions of scheduling remain important, both from the theoretical standpoint and from that of specific applications.

Our contributions include new results on the combinatorial complexity of the problem, on the structure of the optimal schedule, and, finally, new necessary and/or sufficient conditions for optimality based on the values of the transmission rates that the links can transmit at, when these rates depend explicitly or implicitly on the set of links that are allowed to transmit simultaneously. Thus, our results contribute to tightening the joint treatment of the physical and MAC layers, and point to practical and realistic algorithms for approximating, or precisely determining, an optimal schedule. To that end, we also provide a number of algorithms, whose performance we evaluate extensively.

## II System Model

We consider a set $\mathcal{N}$ of $n$ links, or source-destination pairs, that share a common channel. These links are associated with a strictly positive demand vector $d = (d_1, \dots, d_n)$, with each $d_\ell$ representing the amount of bit-traffic stored at the transmitter of the corresponding link $\ell$. Without loss of generality, assume that the entries of the demand vector are in ascending order and that they take values in a continuum. Let $\mathcal{G}$ denote the union of all subsets of $\mathcal{N}$, excluding the empty set. Clearly, $|\mathcal{G}| = 2^n - 1$. We use the term group to refer to a member $g \in \mathcal{G}$, that is, a subset of the link set. Scheduling a group $g$ means that all members of $g$ are activated simultaneously for a positive amount of time. For any group, the service rate of each of its members is a function of the group composition. Let $r$ denote the rate function; that is, for $g \in \mathcal{G}$ and link $\ell$, $r_\ell(g)$ represents the non-negative service rate of link $\ell$ if $g$ is active. Clearly, the rate values can be positive only for the members of $g$, i.e., $r_\ell(g) = 0$ for $\ell \notin g$. If $g$ is a singleton $\{\ell\}$, we use $r_\ell$ as a more convenient short-hand notation for the rate $r_\ell(\{\ell\})$.

In all applications with meaningful physical interpretations, the service rate has the following property: if two links are served together, neither can be served at a rate higher than its individual rate. Thus, throughout the paper, it is assumed that the service rate of any link in a group does not increase if the group is augmented, i.e., for any two groups $g \subseteq g'$ and any link $\ell$, $r_\ell(g') \le r_\ell(g)$. We refer to this as the rate monotonicity property. No further conditions are imposed on $r$.

The minimum-length scheduling problem amounts to, given $d$, selecting a set of groups $g_1, \dots, g_K$, among the members of $\mathcal{G}$, along with their respective activation durations $t_1, \dots, t_K > 0$, so that $\sum_{k=1}^{K} t_k$ is minimized, subject to the requirement that all stored traffic is successfully delivered. It is important to stress that the problem input does not include the explicit knowledge of the rate vectors. If these vectors are all computed a priori, solving the problem reduces to optimizing a linear program (of which the size is exponential in $n$). What is provided instead in the problem input is the function $r$, which can be viewed as a black box, or an “oracle”, that returns the rate values for any given group. Thus, a scheduling algorithm is regarded as having exponential complexity if the number of times the function $r$ is invoked is exponential in $n$. We assume that the computation of $r$ is practically efficient, that is, one evaluation $r_\ell(g)$, for any group $g$ and link $\ell$, runs in polynomial time in $n$. Note that, from a communication/information-theoretic perspective, the rate values represent any feasible, or achievable, rates for a given channel with specific coding, modulation and detection structures. Thus the treatment of the problem is decoupled from the physical-layer aspects of it, although it is directly connected to, and dependent on, them.
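As an illustration of the oracle view, the group enumeration and a toy rate function can be sketched as follows. The function names and the specific rates are our own, chosen only so that the rate monotonicity property above holds by construction:

```python
from itertools import combinations

def all_groups(links):
    """Enumerate all 2^n - 1 non-empty subsets (groups) of the link set."""
    links = list(links)
    return [frozenset(c)
            for size in range(1, len(links) + 1)
            for c in combinations(links, size)]

def toy_rate(group, link):
    """A toy oracle for r_l(g): zero for non-members, and a common rate
    that shrinks with group size, so rate monotonicity holds trivially."""
    if link not in group:
        return 0.0
    return 1.0 / len(group)
```

A scheduling algorithm interacts only with such a callable; its complexity can then be measured by the number of oracle invocations.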

One specific scenario of interest is the highly symmetric case in which the rate is determined completely by the group cardinality, and hence all group members share the same rate. That is, $r_\ell(g)$ is a function of $|g|$ but not of the individual members of $g$. It corresponds to a system where all receivers are located at a central point, with the transmitters at the same distance from the center (on a circle) and with identical channel gains. Such a case is considered in [6]. This special case is much more structured, and it is possible to derive strong results on tractability and on the characterization of the optimal solution. When the rates depend only on group size, the input can be equivalently defined using an $n$-dimensional rate vector $\rho = (\rho_1, \dots, \rho_n)$, with $\rho_k$ denoting the common rate of every link in a group of size $k$. Rate monotonicity then implies that $\rho_1 \ge \rho_2 \ge \dots \ge \rho_n$. We will subsequently use the input triplet $\langle n, d, \rho \rangle$ to refer to this cardinality-based problem case.

## III The Rate Function

Thus far, the minimum-length scheduling problem has been presented in a rather generic form; that is, the function $r$ could be completely arbitrary, provided it satisfies the monotonicity property. For a fairly broad class of channel models and receiver structures, a transmission at some given rate on a link is successful if the signal-to-interference-plus-noise ratio (SINR) at the receiver exceeds a corresponding threshold [12]. Specifically, if a channel matrix $G$ of dimension $n \times n$ is provided, where element $G_{ij}$ is the channel gain between the transmitter of link $i$ and the receiver of link $j$, and if $p_i$ denotes the power of link $i$, and $\sigma^2$ the noise variance, then for link $i$ in group $g$ the SINR is given by

$$\mathrm{SINR}_i(g) = \frac{G_{ii}\, p_i}{\sigma^2 + \sum_{j \in g,\, j \ne i} G_{ji}\, p_j}. \qquad \text{(1)}$$

The treatment of the scheduling problem in this paper does not depend on a specific form of the rate function and, hence, it applies to emptying backlogged queues in minimum time for any system, not limited to wireless links on a common channel. For the wireless context specifically, two commonly used modeling approaches for defining $r$ are as follows.

The first is a one-step function returning either zero (no success) or one (success) as the rate value. Indeed, many of the previous studies of scheduling in wireless networks implicitly use this function (e.g., [4, 5]). In effect, a transmission of a packet is successful if and only if the SINR meets a threshold $\delta$. A group such that all of its links can successfully transmit is sometimes referred to as a feasible matching. Clearly, an infeasible matching will not be part of the optimal schedule, since, if it were used, it could de facto be replaced by a subset of its members having the SINR condition satisfied. An equivalent view is, in the definition of the rate function, to set zero rates for all elements of any infeasible matching. The resulting definition of $r$ provides the SINR-threshold-based model of scheduling. In the sequel, we refer to this binary function as the one-step rate function.

The definition can be further generalized to account for rate adaptation. In this case, the rate values form a discrete set with cardinality higher than two. Each rate value is associated with an SINR threshold, often obtained from the available adaptive modulation and coding schemes of some specific wireless system [26]. The generalization corresponds to $r$ being a step-wise function taking multiple values.

The second commonly used modeling approach is to consider the rate as a continuous function of the SINR [12]. We will use $f$ as a general notation for the wide class of continuous functions of the SINR that are (strictly) monotonically increasing. A particular case of interest is the Shannon formula for the additive white Gaussian noise (AWGN) channel. This case, referred to as the logarithmic rate function, is given by

$$r_i(g) = \log_2\!\left(1 + \mathrm{SINR}_i(g)\right). \qquad \text{(2)}$$

The aforementioned property of rate monotonicity clearly holds for both the one-step and the logarithmic rate functions. For the one-step function, two groups $g \subseteq g'$ give equal rates to a member link if and only if both are feasible matchings or both are infeasible matchings; if $g$ is a feasible matching but $g'$ is not, the rate strictly decreases. For the logarithmic function, strict inequality holds whenever $g \subset g'$ and the added links contribute non-zero interference.
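For concreteness, the SINR of (1) and the two modeling approaches can be sketched as follows. The helper names and the toy channel used in the usage note are our own illustrative choices, not system parameters from the paper:

```python
import math

def sinr(i, group, G, p, noise):
    """SINR of link i when the links in `group` are active, cf. (1).
    G[j][i] is the gain from the transmitter of j to the receiver of i."""
    interference = sum(G[j][i] * p[j] for j in group if j != i)
    return G[i][i] * p[i] / (noise + interference)

def rate_log(i, group, G, p, noise):
    """Continuous Shannon-type rate log2(1 + SINR), cf. (2)."""
    if i not in group:
        return 0.0
    return math.log2(1.0 + sinr(i, group, G, p, noise))

def rate_binary(i, group, G, p, noise, threshold):
    """One-step model: every link of a feasible matching gets rate 1;
    an infeasible matching gives rate 0 to all of its members."""
    if i not in group:
        return 0.0
    feasible = all(sinr(j, group, G, p, noise) >= threshold for j in group)
    return 1.0 if feasible else 0.0
```

For example, with two symmetric links, `G = [[1.0, 0.1], [0.1, 1.0]]`, unit powers and noise `0.1`, each link alone has SINR 10, while activating both drops the SINR of each to 5; the pair is a feasible matching for threshold 5 but infeasible for threshold 6.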

## IV Linear Programming Formulation

The scheduling problem is easily shown to be equivalent to a linear program (LP). Although formulating the LP does not give a practically feasible solution algorithm, it enables us to gain structural insights. Denote by $t$ the non-negative scheduling decision vector of dimension $2^n - 1$, whose element $t_g$ denotes the time duration of running group $g$. We use $t^*$ to denote an optimal scheduling solution. Notation $\mathcal{G}^*$ is reserved for a set of groups that corresponds to an optimum solution, that is, $\mathcal{G}^* = \{g \in \mathcal{G} : t^*_g > 0\}$. By the following lemma, all demands will be met exactly at optimum. This is rather intuitive and has been (implicitly) taken for granted (e.g., [6]). Formalizing this result is useful in our case, as it eliminates any doubt about the validity of the form of LP basic solutions to be discussed later.

###### Lemma 1.

There exists an optimal schedule such that, before reaching the end of the time duration of a group, none of the link queues in the group is empty.

###### Proof.

Suppose the opposite is true. Then there exist a group $g$, run with time duration $t_g$, and a link $\ell \in g$, such that the demand served of $\ell$ in the group, denoted by $d'_\ell$, satisfies $d'_\ell < t_g\, r_\ell(g)$. Let $\tau = d'_\ell / r_\ell(g)$. Consider splitting the running time into two segments, with lengths $\tau$ and $t_g - \tau$, respectively. In the first segment, group $g$ is run, and in the second segment, the reduced group $g \setminus \{\ell\}$ is run. The lemma follows from two observations. First, the served demand of $\ell$ in segment one remains $d'_\ell$. Second, any of the links other than $\ell$ is served for an overall time of $t_g$, and their rates in $g \setminus \{\ell\}$ are not worse, if not better, than those in $g$. ∎

By Lemma 1, we arrive at the following LP formulation.

$$\min \; \sum_{g \in \mathcal{G}} t_g \qquad \text{(3a)}$$

$$\text{s.t.} \quad \sum_{g \in \mathcal{G}} r_\ell(g)\, t_g = d_\ell, \quad \ell = 1, \dots, n, \qquad \text{(3b)}$$

$$t_g \ge 0, \quad g \in \mathcal{G}. \qquad \text{(3c)}$$

As constraints (3b) are equalities, the formulation is in so-called standard LP form; hence no slack or surplus variables will be involved in constructing matrix bases or the corresponding basic solutions.

Even though there are $2^n - 1$ candidate groups, we can conclude the existence of an optimal scheduling solution using at most $n$ groups. The result follows from the fundamental optimality theory of LP and the structure of (3).
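To make the role of basic solutions concrete: once $n$ candidate groups are fixed, their durations follow from an $n \times n$ linear system built from oracle calls, as in constraints (3b). A minimal sketch in pure Python (the helper names are our own; production code would use a numerical library instead):

```python
def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b_i] for row, b_i in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def time_shares(groups, rate, demand):
    """Durations t with sum_g r_l(g) t_g = d_l for the chosen groups,
    cf. (3b). Negative entries mean the groups do not form a feasible basis."""
    n = len(demand)
    A = [[rate(g, link) for g in groups] for link in range(n)]
    return solve_linear(A, demand)
```

For instance, with two links, groups $\{0\}$ and $\{0,1\}$, a rate oracle giving $1/|g|$ to each member, and demands $(2, 1)$, the unique time shares are $t_{\{0\}} = 1$ and $t_{\{0,1\}} = 2$.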

###### Lemma 2.

There exists an optimal scheduling solution using at most $n$ groups, i.e., $|\mathcal{G}^*| \le n$.

###### Proof.

Note that the feasible region of (3) is non-empty, as the single-link groups alone provide a feasible schedule. This corresponds to a TDMA-based activation of the links one at a time until each empties its queue. Hence, by the fundamentals of linear programming (e.g., [19]), there exists an optimal basic solution. For any feasible basic solution, the number of positive values is no more than the number of rows, and equals $n$ if the solution is non-degenerate, and the lemma follows. ∎

By Lemma 2, there is always a compact representation of optimality. However, finding this best combination of $n$ groups, among the $2^n - 1$ candidate ones, remains generally hard (see also Section V). In this regard, optimal scheduling has a combinatorial side, even if formulation (3) is an LP.

In some of the analysis later on, we utilize the LP dual of (3). Letting $\pi_\ell$ denote the dual variable of the $\ell$-th constraint in (3b), the dual formulation is as follows.

$$\max \; \sum_{\ell=1}^{n} d_\ell \pi_\ell \qquad \text{(4a)}$$

$$\text{s.t.} \quad \sum_{\ell \in g} r_\ell(g)\, \pi_\ell \le 1, \quad g \in \mathcal{G}, \qquad \text{(4b)}$$

$$\pi_\ell \text{ free}, \quad \ell = 1, \dots, n. \qquad \text{(4c)}$$

## V Complexity Considerations

Complexity is a fundamental aspect in the treatment of optimization problems. By Lemma 2, obtaining the globally optimal schedule is equivalent to selecting the “best” $n$ groups. The question is how difficult this selection task is. For a discrete rate function, the problem is NP-hard [1, 5, 14]. A natural follow-up question is whether the complexity reduces for continuous (and thus much more well-behaved) rate functions. For example, is the problem tractable if the rate function is the Shannon formula (2), or even simply linear (regardless of the fact that this would not be realistic) in the SINR? In the following, we provide a negative answer, stating that the problem in the wireless communications context is in general hard for all rate functions that are continuous and strictly increasing in the SINR.

###### Theorem 3.

For any continuous and strictly increasing rate function of the SINR, there are NP-hard instances of the minimum-length scheduling problem.

###### Proof.

Given $\langle n, d, G, p, \sigma^2 \rangle$, where the rate is a continuous and strictly increasing function $f$ of the SINR, the recognition version of the problem, by Lemma 2, is as follows: are there $n$ groups, which can be represented using a binary $n \times n$ matrix, such that the total time of satisfying $d$ using these groups is at most a given positive number? The problem is clearly in class NP, as checking the validity of a solution (a certificate in the form of a square matrix of size $n$) is straightforward. Consider a general-topology graph $\mathcal{H} = (V, E)$ and let $n = |V|$; thus a link in the scheduling instance corresponds to a vertex in $\mathcal{H}$. Let $\delta = f^{-1}(1)$ and $\varepsilon = f^{-1}(1/n)$, with $\varepsilon < \delta$ because $f$ is strictly increasing in the SINR. Let $\sigma^2 = 1$. For each edge $\{u, v\}$ in the graph, set the coupling elements $G_{uv} = G_{vu} = \delta/\varepsilon$. Moreover, $G_{vv} = \delta$ for all $v$. All other elements of the channel matrix are zeros. Finally, the transmit powers are $p_v = 1$ for all $v$.

Consider link $v$ and any group $g$ that contains $v$ but none of the vertices adjacent to $v$ in $\mathcal{H}$. The SINR of $v$ is $\delta$, thus the rate is 1.0. If $v$ is put in a group containing at least one adjacent vertex, the SINR of $v$ is no more than $\delta/(1 + \delta/\varepsilon) < \varepsilon$, because $G_{uv} = \delta/\varepsilon$ and $p_u = 1$ for any adjacent $u$. Thus the rate of $v$ becomes strictly less than $1/n$. Suppose, at optimum, a group $g$ containing two links $u$ and $v$, that are adjacent in $\mathcal{H}$, has a positive time duration $t_g$. Note that, in $\mathcal{H}$, $g$ corresponds to at least one connected component with two or more vertices (because $u$ and $v$ are adjacent). Denote by $C$ the component containing $u$ and $v$, and let $m = |C|$. Note that $m \ge 2$. By the observation above, for each of the links in $C$, including $u$ and $v$, the demand served in time $t_g$ within group $g$ is strictly less than $t_g/n$.

Consider splitting group $g$ into $m$ groups, obtained by combining $g \setminus C$ with each of the individual links in $C$. Each of the $m$ groups is given time $t_g/m$. For all links in $C$, including $u$ and $v$, the rate grows from less than $1/n$ to 1.0. Since $m \le n$, the quantity $t_g/m \ge t_g/n$ is strictly more than enough to serve a demand strictly less than $t_g/n$, for any link in $C$. The links in $g \setminus C$ are, overall, served with the same total time duration $t_g$, with rates no less than before. Repeat the argument for the remaining components if necessary. In conclusion, there is an optimal scheduling solution in which the groups are formed by links corresponding to independent sets of $\mathcal{H}$. At this stage, it is apparent that solving the scheduling problem provides the correct answer to the weighted fractional coloring problem [18], with the demand vector being the weights of the vertices, and the result follows. ∎

Theorem 3 establishes the inherent difficulty of the scheduling problem. The result generalizes the observation made in [7] on the connection between fractional coloring and scheduling under the so-called protocol model, which uses a conflict graph and disregards the channel matrix. As our result applies to any continuous and strictly increasing rate function, one should not expect that the use of smooth rate functions, including linear ones, would help in reducing complexity.
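The gadget in the proof of Theorem 3 can be made concrete. The constants below are our own illustrative choices (not necessarily those of the proof), using the Shannon rate of (2) with unit power and noise, so that an isolated link gets SINR 1 and rate exactly 1, while any link active alongside a graph-neighbor is drowned in interference:

```python
import math

def channel_from_graph(n, edges, delta=1.0, coupling=100.0):
    """Build a channel matrix in which an isolated link attains SINR
    exactly delta (rate 1 for log2(1 + SINR) with delta = 1), while any
    link active together with a graph-neighbor falls far below delta."""
    G = [[0.0] * n for _ in range(n)]
    for v in range(n):
        G[v][v] = delta          # direct gain; noise and powers are 1
    for u, v in edges:
        G[u][v] = coupling       # strong mutual interference on edges
        G[v][u] = coupling
    return G

def rate(i, group, G):
    """Shannon rate of link i for active set `group`, unit power/noise."""
    noise, p = 1.0, 1.0
    interference = sum(G[j][i] * p for j in group if j != i)
    return math.log2(1.0 + G[i][i] * p / (noise + interference))
```

On a path graph 0–1–2, the independent set $\{0, 2\}$ gives both members rate 1, whereas activating the adjacent pair $\{0, 1\}$ drops each rate to nearly zero, which is what forces optimal groups toward independent sets.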

## VI Optimality Conditions for Base Scheduling Strategies

We consider two base strategies that are the simplest choices in constructing a scheduling solution. In the first strategy, denoted by $S_1$, the link queues are emptied completely separately, corresponding to a TDMA activation; that is, the groups are the singletons $\{1\}, \dots, \{n\}$. The second strategy, denoted by $S_2$, applies the very opposite philosophy: all links are activated at once, and the $n$-link group is served until one of the queues becomes empty. The next group consists of all links having positive remaining demand, and so on.

Note that both strategies use $n$ groups, and hence represent basic solutions (extreme points of the polytope in the LP formulation). Given $n$ out of the $2^n - 1$ groups, computing the correct time shares (or concluding that the groups do not form a feasible schedule) is normally of complexity $O(n^3)$ due to matrix inversion. Solutions $S_1$ and $S_2$ are simpler to construct: after $n$ calls of the function $r$, computing the $S_1$ schedule clearly runs in linear time, whereas for the $S_2$ schedule the computing time is $O(n^2)$.
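Both base strategies can be sketched directly from their definitions. The rate oracle signature `rate(group, link)` is our own convention, and we assume every invoked rate is strictly positive:

```python
def schedule_tdma(rate, demand):
    """S1: serve each link alone; duration d_l / r_l per singleton group."""
    return [(frozenset({l}), d / rate(frozenset({l}), l))
            for l, d in enumerate(demand)]

def schedule_all_at_once(rate, demand):
    """S2: activate all links with remaining demand; run the group until
    the first queue empties, drop the emptied links, and repeat."""
    remaining = list(demand)
    active = set(range(len(demand)))
    schedule = []
    while active:
        g = frozenset(active)
        # time until the first queue in g empties
        tau = min(remaining[l] / rate(g, l) for l in g)
        schedule.append((g, tau))
        for l in g:
            remaining[l] -= tau * rate(g, l)
        active = {l for l in active if remaining[l] > 1e-12}
    return schedule
```

Each iteration of the `while` loop drops at least one link, so $S_2$ uses at most $n$ groups of strictly decreasing sizes, matching the basic-solution structure noted above.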

Intuitively, strategy $S_1$ is desirable if the links, when activated simultaneously with others, experience significant rate reduction. This corresponds to a high-interference environment. The following condition quantifies the notion.

###### Condition 1.

For all $g \in \mathcal{G}$, the sum of the ratios between the members’ rates in $g$ and their respective rates of being served individually is at most 1.0, that is, $\sum_{\ell \in g} r_\ell(g)/r_\ell \le 1$.

The above condition is simple in structure. Yet, it is exact in characterizing the optimality of $S_1$.
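Condition 1 can be checked by exhaustive enumeration of all groups, at the cost of an exponentially growing number of oracle calls; a minimal sketch with our own naming conventions:

```python
from itertools import combinations

def condition1_holds(n, rate, tol=1e-12):
    """Check, for every group g with at least two members, that
    sum over l in g of r_l(g) / r_l({l}) is at most 1 (Condition 1);
    singleton groups satisfy the inequality trivially."""
    singleton = [rate(frozenset({l}), l) for l in range(n)]
    for size in range(2, n + 1):
        for c in combinations(range(n), size):
            g = frozenset(c)
            if sum(rate(g, l) / singleton[l] for l in g) > 1.0 + tol:
                return False
    return True
```

For example, a cardinality-based oracle with $\rho_k = 1/k^2$ (heavy interference) satisfies the condition, so TDMA is optimal there, while $\rho_k = 0.9^{k-1}$ (mild interference) violates it already for pairs.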

###### Theorem 4.

$S_1$ is optimal if and only if Condition 1 holds.

###### Proof.

Sufficiency: Consider the LP formulation (3) and the base matrix $B$ for the basic solution $S_1$. The inverse matrix $B^{-1}$ is diagonal with entries $1/r_1, \dots, 1/r_n$. For any non-basic variable $t_g$, the reduced cost equals $1 - \mathbf{1} B^{-1} a_g$, where $\mathbf{1}$ is a row vector of ones and $a_g$ denotes the column vector corresponding to $g$ in (3).

The expression leads to the value $1 - \sum_{\ell \in g} r_\ell(g)/r_\ell$, which is non-negative if Condition 1 holds.
Since none of the non-basic variables has a strictly negative reduced cost, $S_1$ is optimal, by LP optimality.

Necessity: If Condition 1 does not hold for some group $g$, the reduced cost of the corresponding non-basic variable is strictly negative.
Moreover, for $S_1$, all the basic variables have strictly positive values.
Therefore the LP pivot operation of bringing $t_g$ into the base is not degenerate, meaning that the objective function will strictly improve, and the result follows.
∎

Theorem 4 provides a complete answer to the optimality of . The condition consists of one inequality per group. From the proof, it is clear that reducing the number of inequalities is not possible. However, if we relax the requirement of necessity, and consider a pair of links, there is a simpler sufficient condition that excludes the activation of both in any group. This occurs, as formulated below, if the two links generate high interference to each other, but their rates are not much affected by simultaneous transmissions of the other links.

###### Condition 2.

For a pair of links $u, v$, we define the following inequality:

$$\frac{r_u(\{u, v\})}{r_u(\mathcal{N} \setminus \{v\})} + \frac{r_v(\{u, v\})}{r_v(\mathcal{N} \setminus \{u\})} \le 1.$$

###### Theorem 5.

If Condition 2 is true, then there exists an optimal group set $\mathcal{G}^*$ in which $u$ and $v$ do not appear together in any group; that is, in optimizing the schedule, the condition is sufficient for discarding all groups containing both $u$ and $v$.

###### Proof.

Suppose an optimal schedule has a group $g$ containing both $u$ and $v$. Without loss of generality, assume the time duration of $g$ is 1.0. The demands served equal $r_u(g)$ and $r_v(g)$ for the two links, respectively. By the property of rate monotonicity we have $r_u(g) \le r_u(\{u,v\})$ and $r_v(g) \le r_v(\{u,v\})$, which together with Condition 2 yields the inequality

$$\frac{r_u(g)}{r_u(\mathcal{N} \setminus \{v\})} + \frac{r_v(g)}{r_v(\mathcal{N} \setminus \{u\})} \le 1.$$

Consider, instead of $g$, the two groups $g \setminus \{v\}$ and $g \setminus \{u\}$. The rates of $u$ and $v$ in these groups are at least $r_u(\mathcal{N} \setminus \{v\})$ and $r_v(\mathcal{N} \setminus \{u\})$, respectively. Activating the two groups for time durations $r_u(g)/r_u(\mathcal{N} \setminus \{v\})$ and $r_v(g)/r_v(\mathcal{N} \setminus \{u\})$, whose sum is at most 1.0, delivers no less than $r_u(g)$ and $r_v(g)$ as served demands for $u$ and $v$, respectively, while the other links of $g$ remain active throughout at rates no lower than in $g$, hence the conclusion. ∎

###### Remark 1.

For cardinality-based rates, i.e., the symmetric case, the number of inequalities in Condition 1 is reduced to $n - 1$, one per group size. This, together with the proof of Theorem 4, leads to the following corollary.

###### Corollary 6.

The following condition is both sufficient and necessary for the optimality of $S_1$ in the cardinality-based case: $k \rho_k \le \rho_1$ for all $k = 2, \dots, n$.

The structure of Condition 2 also simplifies for cardinality-based rates. In addition, by augmenting the line of argument in the proof of Theorem 5, we arrive at a sufficient condition for excluding the use of any group of a specific size $k$. This fact is given in the corollary below.

###### Corollary 7.

For cardinality-based rates and a given group size $k$, if the condition $k \rho_k \le m \rho_m$ holds for at least one size $m < k$, then there is an optimal schedule not using any group of size $k$.

By defining sum-rate as the amount of data served per time unit, Corollaries 6 and 7 have natural interpretations. The former corollary indicates that it is beneficial to use a TDMA-based schedule when any grouping of links results in lower sum-rate than single activation, while the latter one states that if there is a group size dominated in sum-rate by a smaller one, then there will be no optimal schedule selecting the larger size group.

Let us consider when it is preferable to augment the size of a group (of any size except $n$). Intuitively, one can expect that the group should be augmented with a new link if the resulting sum-rate is higher than that of any time-sharing combination of running the group and the link separately. Conversely, if it is optimal to activate group $g$, then the sum-rate of $g$, namely $\sum_{\ell \in g} r_\ell(g)$, cannot be achieved by any combined use of its subsets of size $|g| - 1$. The insight leads to the following condition.

###### Condition 3.

Given group $g$, let $k = |g|$ and denote by $g_1, \dots, g_k$ its subsets of cardinality $k - 1$, obtained by deleting each of the links of $g$ in turn. Denote by $r(g)$ the vector of rates of the links in $g$, and by $r(g_i)$ the corresponding rate vector for $g_i$ (with zero rate for the deleted link). We define the following condition: for any $\lambda_1, \dots, \lambda_k \ge 0$ with $\sum_{i=1}^{k} \lambda_i = 1$, the vector inequality $r(g) \ge \sum_{i=1}^{k} \lambda_i r(g_i)$ is satisfied for at least one element.

###### Remark 2.

Note that finding whether or not there exists a vector $\lambda$ that violates the condition can be formulated as an LP in the $k$ variables $\lambda_1, \dots, \lambda_k$. Thus the condition can be checked efficiently for any given group.

What the above condition states is, in fact, that the rate vector of $g$ cannot be outperformed by the throughput region of the sub-groups. If group $g$ is active at optimum, then the condition must be true, as formulated below.

###### Theorem 8.

If $g \in \mathcal{G}^*$, then Condition 3 holds for $g$.

###### Proof.

Suppose group $g$ is activated with some positive time $t_g$, but Condition 3 is violated by some vector $\lambda$. Strict inequality in all the elements means that running $g_1, \dots, g_k$, with time proportions $\lambda_1 t_g, \dots, \lambda_k t_g$, respectively, will serve the demand served by $g$ within less time than $t_g$, and the result follows. ∎

We now turn our attention to the scheduling strategy $S_2$. In this solution, the groups, which are easily identified, are of sizes $n, n-1, \dots, 1$. To save notation, assume without loss of generality that link 1 has its queue emptied first, followed by link 2 in the second group, and so on. Applying Theorem 8 immediately yields the following necessary condition for the optimality of $S_2$.

###### Corollary 9.

If $S_2$ is optimal, then Condition 3 must be true for each of the groups $\{1, \dots, n\}, \{2, \dots, n\}, \dots, \{n-1, n\}$.

Consider the implication of Condition 3 for cardinality-based rates. Because of the rate symmetry, the quantity $\sum_{i=1}^{k} \lambda_i r(g_i)$ can attain its maximum simultaneously in all the elements only if $\lambda_i = 1/k$ for all $i$. For this $\lambda$, all elements of $\sum_{i=1}^{k} \lambda_i r(g_i)$ equal $\frac{k-1}{k} \rho_{k-1}$, resulting in the observation below.

###### Corollary 10.

If $S_2$ is optimal for cardinality-based rates, then the following condition must hold: $k \rho_k \ge (k-1) \rho_{k-1}$ for $k = 2, \dots, n$.

The inequalities in the above corollary form a hierarchy of relations with a clean interpretation. Namely, if $S_2$ is optimal, then the group sum-rate $k \rho_k$ must be monotonically non-decreasing in the group size. Conversely, if this monotonicity is violated, we conclude that $S_2$ is not optimal. However, the reverse does not hold, i.e., the hierarchy of relations is not sufficient for ensuring that $S_2$ is optimal. A counter-example is provided in Section X.
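For cardinality-based rates, the necessary condition above amounts to a linear scan of the sum-rates $k \rho_k$; a minimal sketch (with the non-strict comparison, and example rate vectors of our own):

```python
def sum_rate_monotone(rho):
    """Check that the group sum-rate k * rho_k is non-decreasing in the
    group size k; rho is 0-indexed in code, i.e., rho[k-1] = rho_k."""
    sum_rates = [k * r for k, r in enumerate(rho, start=1)]
    return all(a <= b for a, b in zip(sum_rates, sum_rates[1:]))
```

A failed check certifies that $S_2$ is not optimal; a passed check is only necessary, not sufficient, per the counter-example referenced above.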

To arrive at a sufficient optimality condition for strategy $S_2$ in the general problem setting, we consider maximum and minimum rates of groups. Denote by $R^{\max}_k$ and $R^{\min}_k$ the maximum and minimum link rates, respectively, over all groups of size $k$. Although their exact values are difficult to calculate in general, in practical systems it is typically possible to derive optimistic and conservative bounds on the rates, using the function $r$ and the channel matrix $G$, as replacements for the maximum and minimum values.

###### Condition 4.

We define the following inequalities,

where, by convention, the term , corresponding to , is taken to be zero.

The inequalities in Condition 4 form a chain for group sizes moving from one to $n$. By the following theorem, this chain of relations is sufficient for the optimality of schedule $S_2$.

###### Theorem 11.

If Condition 4 holds, then $S_2$ is optimal.

###### Proof.

Without any loss of generality, assume that, by strategy $S_2$, the queue of link 1 becomes empty first, followed by link 2, and so on. Thus the groups of $S_2$ are $\{1, \dots, n\}, \{2, \dots, n\}, \dots, \{n\}$. In case of any degeneracy, the structure remains, with the only difference that the time durations of some of these groups are zero. All $t$-variables other than those for the $n$ groups are zero. Clearly, LP primal feasibility of (3) is satisfied by the solution. Consider the LP dual (4), and, for all the groups in $S_2$, set the corresponding rows in the dual to equality, that is,

$$\sum_{\ell=1}^{n} r_\ell(\{1, \dots, n\})\, \pi_\ell = 1, \qquad \text{(5a)}$$

$$\sum_{\ell=2}^{n} r_\ell(\{2, \dots, n\})\, \pi_\ell = 1, \qquad \text{(5b)}$$

$$\cdots \qquad \text{(5c)}$$

$$r_n(\{n\})\, \pi_n = 1. \qquad \text{(5d)}$$

The above equalities uniquely determine a solution to the LP dual (4). This, together with $S_2$, forms a pair of dual and primal solutions. Consider the complementary slackness conditions in linear programming. The condition for (3b) is always satisfied no matter what the values of the dual variables are, since constraints (3b) are equalities. For the LP dual, complementary slackness holds for the above rows. For the rest of the rows, the condition is also satisfied because the corresponding $t$-variables are zero. In conclusion, the pair of solutions is optimal if LP dual feasibility holds.

Suppose the derived dual solution is not dual feasible, i.e., at least one of the remaining constraints in (4) is violated. We prove a contradiction, assuming a violated constraint concerning a group $g$ of size $k$. The construction for arriving at a contradiction for other groups of size $k$, as well as for other group sizes, is analogous.

The above assumption of constraint violation means that . This implies the following inequality.

(6) |

(7) |

(8) |

(9) |

The strict inequality , however, contradicts (5a), and the theorem follows. ∎

For cardinality-based rates, i.e., for the symmetric special case, the result of Theorem 11 can be strengthened.
We prove that Condition 4 is also necessary for the optimality of $S_2$, as long as $S_2$ is non-degenerate, that is, all the $n$ groups are run with strictly positive time durations (for cardinality-based rates, non-degeneracy is equivalent to all links having different demands).

###### Theorem 12.

For cardinality-based rates and non-degenerate $S_2$, Condition 4, which reduces to the following inequalities, is also necessary for the optimality of $S_2$.

(10) |

###### Proof.

As in the proof of Theorem 11, assume without loss of generality that the sequence in which the queues empty in $S_2$ is $1, 2, \dots, n$. Consider a group of size $k$, $k < n$, consisting of the links that empty first. Note that this group is not part of the solution. The base matrix $B$ of solution $S_2$ is triangular: the column of group $\{j, \dots, n\}$ consists of $n - j + 1$ consecutive elements of value $\rho_{n-j+1}$, preceded by zeros. Hence the basis inverse $B^{-1}$ is triangular as well and can be written in closed form.

For the aforementioned group, the linear programming reduced cost, for the basis $B$, can then be computed in closed form.

If the opposite of (10) holds, then the reduced cost is strictly negative. Thus the group's variable is an incoming variable in the simplex algorithm for linear programming. As long as $S_2$ is non-degenerate, the pivot operation bringing the group into the basis will strictly improve the objective function, and the theorem follows. ∎

It has been commented earlier that the condition of improving sum-rate in group size, given in Corollary 10, is not sufficient for the optimality of $S_2$, whereas the inequalities given in Theorem 12 are. Thus the former is implied by the latter (in a strict sense, because they are not equivalent). This fact is formally established below.

###### Corollary 13.

For cardinality-based rates, the inequalities (10) imply the sum-rate monotonicity condition of Corollary 10.

###### Proof.

For , the inequality gives , as is effectively zero by the aforementioned convention. For the induction step, assume and consider . The inequality and the induction hypothesis together yield . Comparing the left and right sides, we obtain , and the corollary follows. ∎

###### Remark 3.

The inequalities in the conditions in this section do not involve the demand vector $d$. Except for the non-degeneracy assumption in Theorem 12, all results are valid completely independently of the demand values. This observation can be confirmed from (3): given a feasible schedule in the form of a (non-degenerate) basic solution, whether or not it is optimal depends only on the left-hand side, which does not contain $d$.

## VII Complexity of Scheduling with Cardinality-Based Rates

From Section VI, one can observe that the optimality conditions for $S_1$ and $S_2$ are more structured and stronger for cardinality-based rates. This raises the question of whether or not reaching optimality in this case is more tractable than in the general case. In this section, we provide a positive answer to the question.

Consider first a more restrictive case, where the demand values are uniform. For this setting, we provide an analytic solution requiring only linear time to compute, and prove it is globally optimal.

###### Theorem 14.

For , let . If all demand values are uniform and equal to , then the groups, , , , each scheduled for a time duration of , is optimal.

###### Proof.

For all links, the given schedule clearly meets the demand exactly. For any feasible schedule (not restricted to the case in question) of length , the total demand, , divided by , gives the average throughput per time unit. As is a constant, a schedule is of minimum length if and only if attains the maximum possible value. By the assumption of the theorem, the instantaneous throughput of any feasible schedule can never exceed . This throughput is achieved during the entire duration of the schedule in the theorem, and the result follows. ∎
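Our reading of the theorem's construction can be sketched in code. In the sketch below, `c(k)` stands for the (assumed) cardinality-based rate of each link when activated in a group of size `k`, and `d` for the common demand; the rotating-window group structure is our illustrative interpretation, not a verbatim transcription of the theorem.

```python
def uniform_demand_schedule(n, c, d):
    """Hedged sketch for cardinality-based rates and uniform demand d:
    pick the group size k* maximizing total throughput k*c(k), then
    rotate a window of k* links so that every link is active in exactly
    k* of the n groups, each of duration d / (k* c(k*))."""
    k_star = max(range(1, n + 1), key=lambda k: k * c(k))
    tau = d / (k_star * c(k_star))      # per-group activation time
    groups = [sorted((i + j) % n for j in range(k_star)) for i in range(n)]
    return groups, tau, n * tau         # schedule length n*d / (k* c(k*))
```

Each link then receives exactly `k_star * tau * c(k_star) = d` bits, and the schedule length matches the maximum-throughput bound used in the proof.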

###### Remark 4.

For non-uniform demand, does not admit an optimal schedule in closed form, yet its polynomial-time tractability can still be concluded. This fundamental insight is established in the following theorem.

###### Theorem 15.

is in class P, that is, the global optimum of any instance can be computed in polynomial time.

###### Proof.

Consider the LP dual given in (4). For problem class , the dual has the following form.

$$\max\;\sum_{i=1}^{n} d_i \lambda_i \tag{11a}$$

$$\text{s.t.}\;\; c(k)\sum_{i\in g}\lambda_i \le 1,\quad \forall\, g\subseteq\{1,\dots,n\} \text{ with } |g|=k,\; k=1,\dots,n, \tag{11b}$$

$$\lambda_i \ge 0,\quad i=1,\dots,n. \tag{11c}$$

Observe that there is a symmetry among the occurrences of the dual variables in (11b). As a result, given any feasible solution, swapping the values of any two dual variables preserves feasibility. Recall that the demand vector is given in ascending order. It follows that there must exist an optimal solution with , since otherwise the objective function value can be improved, or kept the same, by swapping variable values so that the condition holds.

Based on the above observation, one concludes that, among all constraints of (11b) with variables on the left-hand side, the inequality is the most stringent one in defining the optimum. Therefore, the number of constraints required to define the optimum can be reduced from to , implying that, at the optimum, the scheduling problem is equivalent to the following LP.

$$\max\;\sum_{i=1}^{n} d_i \lambda_i \tag{12a}$$

$$\text{s.t.}\;\; c(k)\sum_{i=n-k+1}^{n}\lambda_i \le 1,\quad k=1,\dots,n, \tag{12b}$$

$$\lambda_i \ge 0,\quad i=1,\dots,n. \tag{12c}$$

In conclusion, the optimal solution to problem class is found by solving an LP of size , and the theorem follows. ∎

The above theorem is significant not only for the special symmetric case , but also for all scenarios in which the transmitters have similar distances (and hence close-to-uniform channel gains) to their receivers, and the receivers are located close to one another. For these cases, one can expect that solving , which can be done fast, provides a good approximation of the global optimum.

## VIII An Algorithmic Framework

Since optimal scheduling is in general complex, it is important to devise algorithms that trade optimality for reduced complexity while still yielding good performance. To this end, we propose several algorithm variations, ranging from suboptimal ones with low complexity to an optimal one with high complexity. They are all based on a common framework that takes a natural view of the problem and draws on the optimality conditions and insights derived in the previous sections. In fact, we will demonstrate that the proposed modular structure eventually allows tools from optimization theory to be exploited, so that full optimality can be approached, or even attained, at a reduced complexity level.

As is evident from the LP formulation of the problem, any scheduling algorithm must have two basic components:

(i) a method for generating the link groups that will be part of the proposed final schedule, and

(ii) a method for deciding the duration of activation for each of these sets.

Later, we will confirm that this structural decomposition actually leads to a powerful toolset for eventual optimization.

Our proposed algorithms use a variety of criteria for fulfilling the two aforementioned requirements. Each algorithm uses what we call a Group Generation Module to select the activation sets, and an Activation Duration Module to decide the length of activation of each set. The two modules do not operate independently; they are closely coupled and interact. Before proceeding to the description and evaluation of these algorithms, we emphasize that they are not based on ideas such as those governing so-called approximation algorithms (e.g., [27]), nor do they impose structural restrictions or additional assumptions on the problem. Instead, they are fully general. In fact, we will show that some of these algorithms achieve the optimal solution when some of the conditions mentioned earlier hold (i.e., in the cases where either , or is optimal).

We proceed now with the description of the algorithms. Regarding the Activation Duration Module, we consider two possibilities: either we activate the chosen group until one of its links empties its queue, or we activate it for a fixed amount of time chosen a priori as a parameter. Clearly, in the latter case, it is possible that a queue empties before time has elapsed. In that case, the activation period terminates at that instant, rather than continuing until time has passed. Therefore, for large values of , the two criteria become less and less distinguishable. We refer to the first criterion as TF (for “time at which the first queue of the group empties”) and to the second as T (for “time , unless a queue empties earlier”).
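The two criteria amount to one line of arithmetic. In the sketch below, `queues` and `rates` are assumed dictionaries holding the remaining bits and the per-link rates of the activated group; passing `T=None` yields the TF rule:

```python
def activation_time(queues, rates, T=None):
    """TF: run the group until its first queue empties.
    T : the same instant, but capped by the a-priori parameter T."""
    tf = min(queues[i] / rates[i] for i in rates)   # first emptying instant
    return tf if T is None else min(tf, T)
```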

Regarding the Group Generation Module, some care needs to be exercised, because this is the main source of high complexity: there are possible groups. So, to choose groups, we must either employ a heuristic in the selection or, absent any knowledge of the rate function , endure the full consideration of all groups. In either case, we want to reduce complexity by avoiding the solution of the full LP in (3). Therefore, we must choose a metric by which to evaluate candidate groups. To this effect, we consider either the sum-rate metric (SR), i.e., the quantity , or the weighted sum-rate (WSR), i.e., the quantity , where is the “current” queue size at the transmitter of link . Clearly, at the start ; however, as different links get activated at different times, the initial keeps diminishing until it reaches zero. Whether we choose the SR or the WSR metric, we have two choices for selecting a group: either we examine all groups (or all remaining groups, after some links have emptied), or we examine a judiciously chosen group that requires a much reduced search. In the first case, we call the selection method exact; in the second, heuristic. In fact, we will later see that the “exact” choice can be achieved without necessarily examining all possible groups. Thus we have four possible group generation methods: (i) SR–exact, (ii) SR–heuristic, (iii) WSR–exact, and (iv) WSR–heuristic. Since any of these four group generation methods may be paired with either the TF or the T criterion in the activation module, we obtain a total of eight algorithms.

It remains to describe the heuristic method for choosing a group. We propose the following. We rank the links by their currently remaining queue sizes in descending order. To form a group, we start with the singleton containing the link at the top of the ranking. We then visit the second link of the ranking and pair it with the first one; doing so reduces the rates of both links under concurrent activation. If the updated metric (SR or WSR) increases as a result of pairing the two links into a group, we keep the second link in the group; otherwise, we skip it. We then visit the third link in the ranking and repeat the same process, proceeding in this fashion until all links are visited. Thus one group emerges at the end of this process. To hedge against this process being highly suboptimal, we repeat the entire construction for two additional rank rotations, starting with the second and the third link in the ranking, respectively. In the end, we have three candidate groups, from which we select the one with the highest metric.
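The heuristic can be sketched as follows. Here `queues` maps links to their remaining queue sizes, and `rate(group)` is an assumed stand-in for the physical-layer oracle returning the per-link rates of a candidate group; both names are ours, chosen for illustration.

```python
def heuristic_group(queues, rate, metric="SR"):
    """Rank-based greedy group construction with three rank rotations,
    keeping a visited link only if the SR or WSR metric improves."""
    ranked = sorted(queues, key=lambda i: queues[i], reverse=True)

    def score(group):
        r = rate(group)
        if metric == "SR":
            return sum(r.values())                      # sum-rate
        return sum(queues[i] * r[i] for i in group)     # weighted sum-rate

    candidates = []
    for start in range(min(3, len(ranked))):            # three rotations
        order = ranked[start:] + ranked[:start]
        group = [order[0]]
        for link in order[1:]:
            if score(group + [link]) > score(group):    # keep only if metric grows
                group.append(link)
        candidates.append(group)
    return max(candidates, key=score)
```

With an interference-free rate oracle the metric always grows and the full set is returned; with strong interference the construction stops at a singleton.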

Therefore we now have the following algorithms: (1) TF–SR–exact, (2) TF–SR–heuristic, (3) TF–WSR–exact, (4) TF–WSR–heuristic, (5) T–SR–exact, (6) T–SR–heuristic, (7) T–WSR–exact, (8) T–WSR–heuristic.

## IX Generalizing the Framework with Optimization Tools

As hinted earlier, the simple algorithms described in the preceding section can be embedded in a considerably more general setting that exploits a variety of optimization techniques to yield a better combination of performance and complexity. Specifically, the Activation Duration Module can now be viewed as a more sophisticated process. Instead of applying a simplistic activation criterion (like TF or T ), it can obtain a “tentative” set of activation times that is actually optimal for a much reduced set of groups. That is, it can be thought of as solving the LP over a small, restricted set of link groups. Once it does so, it feeds back to the Group Generation Module a “metric” based on the dual variables of the LP. This metric is then used (in lieu of the SR or WSR metrics) to select a new group. The new group is fed to the Activation Duration Module, which proceeds to re-solve the LP and obtain a new tentative set of activation times. The new set may or may not improve on the previous one (that is, yield a shorter schedule length). The process is repeated until no further improvement is achieved, and the final activation times are those obtained in the last iteration. This is the so-called Column Generation method.

The remaining issue is how to select the new group to be added to the partial LP solution at each step of the iteration. The dual variable values scaled by the rates are used as the metric in the Group Generation Module. There are two possibilities: either a heuristic can be used (like the one proposed for four of the eight algorithms of the previous section, with the only difference that the dual variables are now used to sort the links), or an exact determination of the next “optimal” group based on the dual-variable metric. If the latter option is chosen, there are two further possibilities: either an exhaustive search over all remaining groups, or a more efficient determination. The second possibility requires knowledge of the rate function introduced in Section III. Then, a variety of optimal (yet efficient) searches, from techniques of convex optimization to branch-and-cut, branch-and-bound, and other methods [2, 3, 9], can be performed to determine the best group, capitalizing on the knowledge and properties of the function. If, on the other hand, only the rate values are known (but not the rate function that produced them), then the only option for exact determination of the best group at each step is exhaustive search.
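The exact variant of this selection is the classical pricing step of column generation: scan the groups for the most negative reduced cost, i.e., one minus the dual-weighted sum-rate of the group (consistent with the reduced-cost argument in the proof of Theorem 12). In this sketch, `duals` and `rate` are assumed names for the LP dual values and the per-link rate oracle, both outside the paper's notation.

```python
from itertools import combinations

def price_group(links, duals, rate):
    """Exhaustive pricing: return the group minimizing the reduced cost
    1 - sum_i duals[i] * r_i(group), or (None, 0.0) if no group prices
    out, i.e., the restricted LP is already optimal."""
    best, best_rc = None, 0.0
    for k in range(1, len(links) + 1):
        for group in combinations(links, k):
            r = rate(group)
            rc = 1.0 - sum(duals[i] * r[i] for i in group)
            if rc < best_rc - 1e-12:
                best, best_rc = group, rc
    return best, best_rc
```

When the rate function is known, the inner exhaustive loop is where a branch-and-bound or convex search would be substituted.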

Thus, to the arsenal of the eight previously described algorithms we add two more. The first uses the Column Generation method along with the rank-based heuristic for “next group” selection at each step (we call this the CG–heuristic algorithm). The second also uses the Column Generation method, but with an “exact” selection of the next group at each step. We use the exhaustive search method for this, while noting that the problem sizes (in terms of the number of links ) that can be solved expand dramatically if knowledge of the rate function is utilized (we call this the CG–exact algorithm).

Note also that, even in the earlier eight algorithms, the “exact” group generation option can be exercised with significantly reduced complexity if the rate function is known. The only difference there is that the metric for the candidate groups is not the one depending on the dual variables, but rather SR or WSR, or possibly an altogether different metric. Of course, the use of other metrics, whether in association with the LP, the TF, or the T method, does not in general lead to the minimum-length schedule, whereas the CG–exact algorithm generally does. The complexity of the CG–exact algorithm, though, is in the worst case exponential.

## X Optimality Properties of the Algorithms

We now present some properties of the proposed algorithms that strengthen their appropriateness for solving the scheduling problem. In the following, we denote an algorithm by the descriptive terms used in the preceding sections. For example, denotes the strategy of selecting the group with maximum sum-rate in every iteration, with a constant activation duration (or until the first queue in the group empties, whichever occurs first). It is clear that each of the five designs of the Activation Duration Module empties all queues in a finite number of iterations. Here, one iteration of the module refers to the activation of one group under TF or T, or to solving a whole LP under CG. Obviously, the running time of one iteration is polynomial in in all cases. As for the number of iterations needed to empty all queues, the complexity is summarized in the following theorem.

###### Theorem 16.

1) The number of iterations under CG–exact is polynomial in ; 2) the number of iterations under and is ; and 3) the number of iterations under and is , and hence pseudo-polynomial in , where is either a) , for the case of a continuous rate function , or b) the lowest positive rate level of a discrete rate function .

###### Proof.

An LP can be solved to optimality in a polynomial number of iterations, or, equivalently, separations of constraints in the dual LP; therefore the first statement holds. The second statement follows from the fact that at least one link gets its queue emptied in every iteration of and . The last statement follows from , and from the fact that and drain at least an amount proportional to from one or several queues per iteration. ∎

Among the above designs, CG–exact is an exact algorithm that guarantees global optimality [5]. The correspondence to the general method of Column Generation lies in the fact that a column in the LP (3) corresponds to a variable associated with a group. Note that the complexity of CG–exact given by Theorem 16 does not contradict the general NP-hardness of the scheduling problem.

Let us consider the other four options of Section VIII. In general, they are sub-optimal, but significantly simpler than CG. In the following, we show that, if the corresponding sufficient optimality conditions discussed in Section VI hold strongly (i.e., the inequalities in the conditions are strict), the use of SR gives the optimal schedule, i.e., the two base scheduling solutions and , respectively.

###### Theorem 17.

###### Proof.

Assume Condition 1 is strictly satisfied and, without loss of generality, that the rates are in descending order for individual links, that is, . For any group with , we have

The last inequality above follows from . As a result, is the group to be activated. Under TF, the demand of link one is served for time duration , after which the next group to be activated under the SR metric is , and so on. Under T, the demand of link one is gradually served by repeatedly activating , after which is used. Hence both strategies lead to schedule .

If we assume Condition 4 holds strongly, we first prove that . For , follows immediately from the assumption. Suppose holds. Then, Condition 4 written for leads to the following

which immediately implies . As a result, the total sum-rate increases for any group when new links are added. Therefore, the grand group has the highest sum-rate under SR. For both TF and T, the group is activated until one of the links’ queues becomes empty. Repeating the above argument, the next group to be activated is the one consisting of all links with positive remaining demand, and the theorem follows. ∎

The construction of the proof leads to another observation about and . Under the exact method, or a deterministic heuristic for group selection with the SR metric, T keeps activating the same group until one of its links’ queues empties, thus becoming equivalent to TF.

Theorem 17 supports the choice of SR as the group selection metric. Below, we illustrate the merits of WSR and of T through the following two examples.

###### Example 1.

Consider a case where , , and , i.e. . The unique optimum comprises the two groups and , with time durations and , respectively.

Note that for this example , yet the group having the highest sum-rate is not part of the optimum. Hence preferring the top sum-rate group (finding which is itself not a trivial task) when designing a schedule may not work well.

###### Example 2.

Consider , with and . One can easily verify that the unique optimum schedule consists of groups , , and , each with a time duration of .

Observe that, in Example 2, none of the links has its entire queue emptied by any of the groups in which it participates at the unique optimum. Thus a TF–based algorithm may fail, even if an exhaustive search over all ordered combinations of groups is performed.

###### Remark 5.

For Example 1, using the WSR metric enables the construction of an optimal schedule. Specifically, yields an optimum, and is also optimal for . For Example 2, one can verify that delivers the optimum for all , ( being a positive integer), and is optimal for , where is a positive integer satisfying .

## XI Simulation Setup and Results

In this section we provide simulation results to illustrate the performance of the algorithms developed within the framework. We consider a set of links randomly placed in an area of 1000 × 1000 meters. The signal propagation follows a distance-based model with a path loss exponent of 4. The distance between the transmitter and the receiver of a link was restricted to between 3 and 250 meters, so as to obtain links of practically meaningful SNR values. For the queue sizes, we defined two different setups: (i) uniform demand of 1000 bits; (ii) non-uniform demand, uniformly distributed in [100 …1500] bits. In each setup, 100 link location instances were run, unless otherwise indicated.
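The instance geometry described above can be sketched as follows. Only the geometry and the two demand models of this section are reproduced; transmit power and noise constants of the actual setup are omitted, and all function and variable names are ours.

```python
import math, random

def generate_instance(n, seed=0):
    """n links in a 1000 x 1000 m area; link length drawn from [3, 250] m;
    channel gains follow distance^-4 path loss; demands are either uniform
    (1000 bits) or uniformly distributed in [100, 1500] bits."""
    rng = random.Random(seed)
    links = []
    for _ in range(n):
        tx = (rng.uniform(0, 1000), rng.uniform(0, 1000))
        ang, dist = rng.uniform(0, 2 * math.pi), rng.uniform(3, 250)
        links.append((tx, (tx[0] + dist * math.cos(ang),
                           tx[1] + dist * math.sin(ang))))
    # gains[i][j]: path gain from transmitter j to receiver i
    gains = [[math.hypot(links[j][0][0] - rx[0], links[j][0][1] - rx[1]) ** -4
              for j in range(n)] for (_, rx) in links]
    uniform = [1000.0] * n
    nonuniform = [rng.uniform(100, 1500) for _ in range(n)]
    return links, gains, uniform, nonuniform
```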

For the rate values, we consider two cases, namely (i) rates given by the Shannon formula as in Eq. (2), and (ii) rates given by a combination of uncoded BPSK with symbol rate control at a fixed error rate, as in [22]. In the Appendix we provide a detailed derivation of the BPSK rate formula used, under the standard assumption that the energy of interference from concurrent transmissions is equivalent to Additive White Gaussian Noise (AWGN). For illustration purposes, we plot both functions in Figure 4 of the Appendix. Note that although both are approximate, with Shannon’s formula giving an upper bound on the achievable rates and BPSK lending a more practical flavor to our investigation, together they provide useful insight into how the physical layer affects the algorithms’ performance.

For each of the setups described above, we solved the full LP of (3), using AMPL [10], to establish the optimal schedule length, assuming a unit of bandwidth in Hz and an error rate of for the BPSK rate calculations. Then, all ten algorithms of the previous sections were run on all instances. All results presented below are normalized with respect to the “baseline” optimal value.

In Figure 1 we present the effect of the parameter on the performance of the T-based algorithms. Intuitively, a small enables a more “cautious” design, since the algorithms iterate more times between the two modules and thus have more opportunities to refine their group selections toward the optimum. This is directly confirmed by the green (T–WSR–Heuristic) and black (T–WSR–Exact) curves, for which the gap increases as grows. Note that for a very large value, the T strategy coincides with the TF one, as can be seen at the rightmost end of Figure 1. In contrast to T–WSR–Heuristic and T–WSR–Exact, the red line for algorithm T–SR–Heuristic shows the opposite trend: a small gives a larger optimality gap. This is due to the behavior of the heuristic, which sorts the links by their remaining demand in group generation, even though this sorting may not align well with the SR metric. The mismatch is, however, rectified for larger values, since the impact of the heuristic accumulates over fewer iterations. The blue line for algorithm T–SR–Exact is horizontal because, with exact group selection and the SR metric, the maximum sum-rate group remains selected, regardless of the size of , until one of the link queues empties. Thus T is equivalent to TF, as commented in the discussion after Theorem 17 in Section X.

For T–WSR–Heuristic and T–WSR–Exact, the gain of reducing diminishes below some point (0.5 s in Figure 1), since, without significant queue draining, the same group is simply selected over and over. For the minimum used (i.e., the leftmost part of the figure), the best performance is given by T–WSR–Exact, as also suggested by the first example in Section X. At the rightmost end (i.e., TF activation), the WSR metric is inferior to the SR metric, because the former may result in a group with low sum-rate, and the impact cannot be mitigated by a group activation that runs the group until one link empties its entire remaining queue. As a result, for large , or equivalently TF activation, the best performance is achieved by T–SR–Exact. Finally, heuristic selection performs consistently worse than exact selection in Figure 1.

Figures 2 and 3 provide a performance comparison of all the algorithms. For the T-based algorithms, based on our preceding discussion, a small value (0.5 s) is chosen for the activation duration parameter . The last algorithm, CG–exact, always gives the global optimum, normalized to 1.0 in the figures. Overall, the other nine algorithms perform reasonably well, with only one giving an average optimality gap larger than 20%.

The first eight algorithms all perform better when the rates are derived from the Shannon formula, in both demand cases. The explanation lies in the shapes of the two rate functions. The BPSK rate is much more robust to interference from concurrent transmissions than the Shannon function. Hence, at the optimum for the BPSK rate function, large-cardinality groups having similar rates on the links are very likely to be used. With the Shannon formula, smaller groups with higher link rates are preferred in the optimal schedule. Consequently, scheduling with the BPSK rate is much more prone to sub-optimality in group selection (as many groups perform similarly under both SR and WSR) and, more importantly, in group activation (cf. Example 2 in Section X). Thus TF and T become more sub-optimal for BPSK-derived rates than for those derived from the Shannon formula. This conclusion is further supported by the performance of CG–Heuristic. In this case, group activation is carried out with the LP, which is the best possible way of determining the time shares among groups, justifying the better performance of this algorithm in the BPSK case, as opposed to the two heuristic activation strategies.

For both TF and T, heuristic group selection is always outperformed by exact selection. Hence, enhancing the group selection module alone contributes noticeably to the overall performance, regardless of the activation strategy. As expected, the impact of sub-optimality in group selection is more striking for BPSK. As mentioned above, larger groups are expected at the optimum for BPSK-derived rates. When the exact solution of group selection contains many elements, it is less probable that greedy selection, as used in our heuristic, can approach optimality.

Comparing the two metrics SR and WSR, the latter always yields better results for the T activation strategy; that is, the notion of remaining demand interacts better with emptying queues progressively. This was also observed in the discussion of Figure 1. For BPSK, WSR also outperforms SR under TF activation. The reason is that the optimal schedule with BPSK tends to use groups of similar sizes. With SR, there is a higher risk that a small-cardinality group with low sum-rate and high remaining demand will have to be deployed at the end, making the overall schedule inferior to one that balances the remaining queues in group selection. For Shannon-based rates, the structure at the optimum is quite the opposite, and SR behaves better than WSR under TF.

From Figures 2 and 3, Examples 1 and 2, and the results in Figure 1, it is inconclusive whether TF or T activation should in general be preferred. However, if the WSR metric is employed in group selection, T is clearly superior, as discussed above. In addition, the two group activation strategies coincide for TF–SR–Exact and T–SR–Exact, as justified by our discussion in Section X and the results in Figure 1.

Finally, the demand structure (uniform versus non-uniform) has a noticeable impact on performance when heuristic group selection is used. In general, the results show improved performance when the initial demand is non-uniform. This can be attributed to the demand ordering in the group construction of the heuristic we used: non-uniform demands help the heuristic differentiate better among the links, especially in the BPSK case, where the effect of non-uniform demand is indeed more prominent, as the links in a group tend to have more similar rates. With exact group selection coupled with TF or T, the demand structure has virtually no effect on performance for the Shannon-based rate function. For BPSK-derived rates, on the other hand, non-uniform demand leads to a smaller optimality gap. This is because the sub-optimality of TF and T in group activation is more detrimental for uniform demand, which resembles the structure of Example 2.

## XII Conclusion

We have considered the minimum-length scheduling problem for the case of emptying queues over a shared channel. The generic treatment of rates, which may be produced by some underlying function, unifies the previously considered formulations. Several fundamental results on solution characterization have been obtained. First, we have proven the hardness of the problem for all continuous and monotonically increasing functions of the SINR. Second, optimality conditions of two base scheduling strategies have been developed and formalized. Third, we have demonstrated how the problem class with cardinality-based rates can be solved effectively. On the algorithmic side, we have presented a framework that accommodates both exact and sub-optimal scheduling solutions. Extensive simulation results have been provided and assessed to quantify the performance of several specific algorithm designs.

The research line of the current paper admits several extensions. For example, we may consider cooperative methods among the links, including relaying each other’s messages; in that case, multiple transmitters could transmit to the same receiver, draining the same queue simultaneously. Another extension is the fundamental solution characterization under a multi-objective setting that incorporates both efficiency (i.e., schedule length) and energy expenditure, or that includes the aspect of fairness among the links. Multi-hop and multicasting applications are also of interest. Last but not least, it is important to apply the insights from this work to the problem of scheduling with continuous arrivals (rather than the queue draining problem).

## Appendix: The BPSK Rate Function for a Multi-User Environment

Assuming an uncoded BPSK modulation scheme, in an interference-free environment the bit error probability is given by

where the function is the probability that a Gaussian random variable with zero mean and unit variance exceeds . The fraction is the system SNR, where the numerator is the bit energy and the denominator is the power spectral density (psd) of the noise, both in Joules [12].

In the presence of interference from concurrent transmissions, denoting by the duration of one BPSK symbol, the bit rate is . The received bit power is also in Watts. The error rate can then be calculated as

where denotes the sum of all interference energy in Joules (which we treat as an AWGN signal) plus the noise psd. Notice that can be approximated by scaling the sum of the received interference powers by a time factor, in order to obtain an appropriate quantity in Joules.

Observe now that any change in the value, under a fixed and a fixed , leaves the symbol duration , i.e., the bit rate, as the only control available to keep the equation above satisfied. Hence, solving the error rate equation above yields our approximation to the BPSK bit rate:
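Under the assumptions above, the inversion has a closed form: writing P for the received power, N for the noise-plus-interference density, and p_e for the target error rate, the condition p_e = Q(√(2 E_b / N)) with E_b = P / R gives R = 2P / (N · Q⁻¹(p_e)²). A hedged sketch of this computation (symbol names are ours, chosen for illustration):

```python
import math
from statistics import NormalDist

def bpsk_rate(p_rx, noise_plus_interf, pe):
    """Solve pe = Q(sqrt(2 * (p_rx / R) / noise_plus_interf)) for the bit
    rate R; NormalDist().inv_cdf(1 - pe) is the Gaussian tail inverse Q^-1."""
    q_inv = NormalDist().inv_cdf(1.0 - pe)
    return 2.0 * p_rx / (noise_plus_interf * q_inv ** 2)
```

Doubling the interference-plus-noise energy halves the achievable rate at the same error target, consistent with the interference-as-AWGN assumption.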