Laxity-Based Opportunistic Scheduling with Flow-Level Dynamics and Deadlines

Abstract

Many data applications in next-generation cellular networks, such as content precaching and video progressive downloading, require flow-level quality of service (QoS) guarantees. One such requirement is a deadline: the transmission task must be completed before an application-specific time. To minimize the number of uncompleted transmission tasks, we study laxity-based scheduling policies in this paper. We propose a Less-Laxity-Higher-Possible-Rate (LHPR) policy and prove its asymptotic optimality in underloaded identical-deadline systems. The asymptotic optimality of LHPR can be used to estimate the schedulability of a system and provides insights for the design of scheduling policies for general systems. Based on these insights, we propose a framework and three heuristic policies for practical systems. Simulation results demonstrate the asymptotic optimality of LHPR and the performance improvement of the proposed policies over greedy policies.

1 Introduction

Opportunistic scheduling plays an important role in improving network resource efficiency and user experience. A large number of scheduling policies, such as Proportional Fair (PF) scheduler [1] and MaxWeight [2], have been proposed to balance between the system throughput and the level of satisfaction among different users. Most existing work focuses on the packet-level scheduling, where the number of users is assumed to be fixed and the performance is defined on the packet-level, e.g., number of packets received in a unit time or average delay of all received packets.

On the other hand, file download and multimedia streaming are becoming increasingly popular in cellular networks [3]. The traffic generated by these applications is characterized by flow-level dynamics and deadlines: scheduling such traffic is carried out over a longer time scale, during which the population of users may change, and the transmission tasks should be completed before their application-specific deadlines to maintain the required quality of experience (QoE). For example, in progressive downloading, to achieve quasi-live streaming, a segment of a video should be downloaded before the playback buffer depletes, which imposes a deadline of several seconds [3]. Therefore, we study opportunistic scheduling policies that minimize the delay violation probability in wireless networks with flow-level dynamics and deadlines.

Flow-level scheduling has been considered in the literature. As in packet-level scheduling, a critical issue is to guarantee stability whenever possible. Recent results show that the maximum stability region can be achieved by simple rules such as the Best-Rate (BR) rule [4]. Other papers investigate policies for minimizing the average transmission delay. In [5, 6], under the assumption of fast-varying channel conditions, it is shown that combining opportunistic scheduling with the Shortest-Remaining-Processing-Time (SRPT) discipline from machine-job scheduling minimizes the average delay. However, the transmission delay may exceed the user's delay tolerance, rendering the transmission useless. In [7], the authors study flow-level scheduling policies that maximize delay-dependent utility functions. This model can be viewed as scheduling with soft deadline constraints, but it requires knowledge of future channel states, which may be difficult to obtain in practice.

Scheduling with deadlines has been investigated in the machine-job scheduling literature. Policies such as Earliest-Deadline-First (EDF) and Least-Laxity-First (LLF) have been proposed and shown to be optimal for underloaded systems [8]: a feasible schedule can be obtained by EDF or LLF whenever some off-line policy can obtain one. Other policies, e.g., D-Over [9], have been proposed for overloaded systems and shown to achieve the optimal competitive ratio. However, the temporal variation of the data rate makes the design and analysis of scheduling policies for wireless networks with flow-level deadlines challenging. In [10], the authors show that the Max C/I policy, which greedily serves the user with the highest data rate, achieves the optimal competitive ratio under a partial value model, in which a user does not require completion of the entire transmission task and the value obtained is proportional to the amount of data received. In many applications, however, at least a certain percentage of the data must be received, or the transmission is useless.

In this paper, to minimize the number of uncompleted tasks, we study scheduling policies that balance serving urgent users against maintaining multi-user diversity. We quantify the urgency of transmission tasks with laxity and propose laxity-based policies for scheduling file download traffic. Under the assumption of a polymatroid capacity region [5], we propose a Less-Laxity-Higher-Possible-Rate (LHPR) policy and show its asymptotic optimality in underloaded identical-deadline systems. To the best of our knowledge, this is the first theoretical result on wireless scheduling with deadlines under the entire-value model. The insights obtained from this policy can serve as guidelines for designing policies for general systems. Building on these insights, we propose a laxity-based policy framework and three heuristic policies for practical systems. Through numerical simulations, we demonstrate the asymptotic optimality of LHPR and the performance improvement of channel-and-urgency-aware policies.

2 System Model

We consider the flow-level scheduling with deadlines in the downlink of a single cell. A sequence of users enter the system and request to download files with deadlines. They depart upon task completion or delay violation. The objective of the base station (BS) is to minimize the number of uncompleted requests.

2.1 Traffic and Channel Model

Let $\mathcal{I}$ be the index set of all users entering the system. For each user $i \in \mathcal{I}$, the download request is represented by a triple $(a_i, f_i, d_i)$, where $a_i$, $f_i$, and $d_i$ denote the arrival time, the initial file size (in bits), and the deadline, respectively. All $a_i$'s, $f_i$'s, and $d_i$'s are random variables. The difference between the deadline and the arrival time, i.e., $d_i - a_i$, reflects the delay tolerance of user $i$. We focus on file download applications such as content precaching. Hence, we assume that the file size $f_i$ is available as soon as user $i$ arrives.

All data are transmitted over a wireless channel from the BS to each user using Time-Division Multiplexing (TDM). The channel condition of each user is time-varying and is modeled as a stationary stochastic process $R_i(t)$, where $R_i(t)$ denotes the instantaneous rate at which the BS can transmit to user $i$ at time $t$. We assume a wireless system with homogeneous channels, where the processes $R_i(t)$ are statistically identical across users, with a common mean rate $\bar{R}$ for all $i$. For a more practical system with heterogeneous channels, we can transform the original system into an equivalent system with homogeneous channels using the scaling technique in [5]. As in [5], we normalize the data rate and the file size with respect to the average rate $\bar{R}$.

2.2 Scheduling Process

The BS schedules transmissions in a slotted manner. Time is divided into slots of length $\tau$, indexed by an integer $t$. In slot $t$, we let $\mathcal{N}(t)$ be the index set of users present in the system and $N(t) = |\mathcal{N}(t)|$ be the number of users. For each user $i \in \mathcal{N}(t)$, we denote its residual file size by $r_i(t)$.

At the beginning of the $t$-th slot, i.e., at time $t\tau$, the BS allocates to user $i$ a data rate $x_i(t)$. The rate vector, which consists of all $x_i(t)$'s ($i \in \mathcal{N}(t)$), stays in the capacity region corresponding to $\mathcal{N}(t)$ [5]. Thus, the residual file size of user $i$ evolves as follows:

$$r_i(t+1) = \max\{\, r_i(t) - x_i(t)\tau,\; 0 \,\}, \qquad (1)$$

where the initial value of the residual file size upon arrival is $r_i = f_i$.
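As a minimal sketch of the per-slot recursion (1) (the function name and numeric values are illustrative, not from the paper):

```python
def update_residual(residual, rate, slot_len):
    """One step of recursion (1): the allocated rate drains the
    remaining file size during the slot, floored at zero."""
    return max(residual - rate * slot_len, 0.0)

# A user with 10 bits left, served at 4 bit/s over a 2 s slot:
r = update_residual(10.0, 4.0, 2.0)   # 2.0 bits remain
r = update_residual(r, 4.0, 2.0)      # clamped at 0.0: task complete
```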

Note that the time scale separation argument [5] is applied here. In other words, we assume that the channel conditions $R_i(t)$ vary infinitely fast. A time slot can then be divided into mini-slots, each on the order of the channel coherence time. If the BS schedules in each mini-slot, the data rates allocated to the present users in each slot average out and the rate vector stays in the capacity region. This assumption is useful because it captures the multi-user diversity effect in a tractable manner. However, it is highly idealized, especially when the slot length $\tau$ is small. We use it only for the asymptotic analysis in Section 3, and relax it when designing heuristic policies for practical systems in Section 4.

The objective of the BS is to minimize the number of users violating their deadlines. However, designing optimal policies for such a scheduling problem is challenging even with the time scale separation argument. Therefore, in this paper, we first derive optimal policies for underloaded systems under certain additional assumptions. Following the convention in machine-job scheduling [8, 9], we call a policy optimal in underloaded systems if it obtains a feasible schedule whenever some off-line policy can do so. We then propose heuristic policies for more general systems and evaluate their performance through simulations.

3 Asymptotically Optimal Policy in Identical-Deadline Systems

In this section, we study the opportunistic scheduling problem in an identical-deadline system with $N$ users, all of which request to download files before the same deadline $d$, i.e., $d_i = d$ for all $i$.

Due to the flow-level dynamics, designing optimal policies is challenging even for the identical-deadline system. Motivated by the idea of polymatroid capacity region [5], we propose a laxity-based policy, referred to as LHPR, and prove its asymptotic optimality.

3.1 Polymatroid Capacity Region

To be self-contained, we briefly summarize the definition as follows. The polymatroid capacity region [5] approximates the original capacity region by its polymatroid outer bound. Consider the scenario where the channel conditions are i.i.d. processes across all users. Let $c_n$ be the achievable multi-user diversity gain when there are $n$ active users, i.e., the ratio between the maximum throughput achieved by the $n$-user system and that of the single-user system. Assume that $c_n$ is concavely increasing in $n$ and let $c_0 = 0$. The polymatroid capacity region for $n$ users is defined as follows:

$$\mathcal{C}(n) = \Bigl\{\, \mathbf{x} \in \mathbb{R}_+^{n} : \sum_{i \in S} x_i \le c_{|S|}\,\bar{R}, \;\; \forall\, S \subseteq \{1, \dots, n\} \,\Bigr\}.$$

The polymatroid capacity region is the tightest polymatroid outer bound containing the original capacity region, as shown in Fig. 1 for the 2-user case. Thus, the minimum delay violation probability obtained with the polymatroid capacity region is a lower bound on that of the practical system.

Figure 1: Polymatroid capacity region for 2-user case [5].

3.2 Design of LHPR Policy

In order to minimize the number of uncompleted tasks, the BS should trade off between maintaining multi-user diversity (i.e., maximizing system throughput) and serving the more urgent users. To quantify the urgency of a given user, we introduce the expected laxity. It is similar to the laxity defined in traditional job scheduling, with the constant service rate replaced by the expected rate.

Definition

(Expected Laxity) In slot $t$, for each user $i \in \mathcal{N}(t)$, the expected laxity is defined as

$$l_i(t) = (d_i - t\tau) - \frac{r_i(t)}{\bar{R}}. \qquad (2)$$

In the above definition, the term $r_i(t)/\bar{R}$ is the expected time required to finish the task if the entire channel were allocated to user $i$. Hence, the expected laxity represents the time that can be allocated to other users without affecting the transmission of user $i$. Users with smaller expected laxity are more urgent.
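As a concrete illustration of the expected-laxity computation (names and numbers are illustrative):

```python
def expected_laxity(deadline, t, residual, mean_rate):
    """Expected laxity (2): time to the deadline minus the expected
    time needed to drain the residual file size if the whole channel
    (with mean rate mean_rate) were dedicated to this user."""
    return (deadline - t) - residual / mean_rate

# A user with deadline 10 s, residual 6 bits, and mean rate 2 bit/s
# at t = 2 s can spare 5 s to other users:
slack = expected_laxity(10.0, 2.0, 6.0, 2.0)   # 5.0
```

A negative value means the task is expected to miss its deadline even if served exclusively.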

Motivated by the intuition that the BS should allocate more resource to more urgent users, we propose a Less-Laxity-Higher-Possible-Rate (LHPR) policy.

Definition

(LHPR policy) In slot $t$, when $N(t) = n$, sort all users in $\mathcal{N}(t)$ in ascending order of their expected laxity and let $k_i$ be the rank of user $i$. The LHPR policy serves each user $i$ with data rate

$$x_i(t) = (c_{k_i} - c_{k_i - 1})\,\bar{R}. \qquad (3)$$

Note that with the assumption of concavely increasing gains, the increment $c_k - c_{k-1}$ is decreasing in $k$, indicating that under the LHPR policy the user with less expected laxity is allocated a higher data rate. Moreover, the total data rate is $\sum_{k=1}^{n}(c_k - c_{k-1})\bar{R} = c_n \bar{R}$, and thus the LHPR policy reaches the maximum system throughput that can be obtained when the number of users is $n$.
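Assuming the rank-based allocation described above, in which the user with the $k$-th smallest laxity receives the $k$-th increment of the gain sequence (the gain values below are illustrative), LHPR can be sketched as:

```python
def lhpr_rates(laxities, c, mean_rate=1.0):
    """Allocate the k-th diversity-gain increment (c[k] - c[k-1]) to
    the user with the k-th smallest expected laxity; c is the
    concavely increasing gain sequence with c[0] = 0."""
    order = sorted(range(len(laxities)), key=lambda i: laxities[i])
    rates = [0.0] * len(laxities)
    for rank, i in enumerate(order, start=1):
        rates[i] = (c[rank] - c[rank - 1]) * mean_rate
    return rates

c = [0.0, 1.0, 1.5, 1.8]              # illustrative concave gains
rates = lhpr_rates([2.0, 0.5, 3.0], c)
# The most urgent user (laxity 0.5) gets the largest rate,
# and the total allocated rate equals c[3].
```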

3.3 Asymptotic Optimality of LHPR Policy

In this subsection, we show that the LHPR policy is asymptotically optimal (as the slot length $\tau$ tends to 0) for underloaded systems with identical deadlines.

It is worth noting that in constant-rate machine-job scheduling, the optimality of the LLF (or EDF) policy is shown via an exchange argument, i.e., by transforming any feasible schedule into the one found by the LLF (or EDF) policy [8]. However, this approach does not work for LHPR, since the feasible service rates depend on the number of users present in the system. Rather than using the exchange argument, we establish asymptotic optimality by examining the least expected laxity obtained by LHPR in every time slot.

We examine the states of all users having entered the system, including both present users and completed users. Let $\mathcal{A}(t)$ denote the index set of users arriving before time $t$, i.e.,

We notice that by setting the residual size $r_i(t) = 0$, the expected laxity defined in (2) can also be applied to completed users. We further notice that the term $d - t\tau$ in (2) is common to all users. For notational simplicity, we introduce the virtual expected laxity $v_i(t)$ for each user $i$, defined as

$$v_i(t) = l_i(t) - (d - t\tau) = -\frac{r_i(t)}{\bar{R}}.$$

Note that by this definition, the virtual expected laxity is $0$ for completed users, since $r_i(t) = 0$.

In order to show the asymptotic optimality of LHPR, we need to examine the least expected laxity, or equivalently, the least virtual expected laxity, obtained by LHPR in every slot. The least virtual expected laxity is given by

and the least-laxity-user is defined as the user having the least virtual expected laxity, i.e.,

Note that there may be more than one user having the least virtual expected laxity; in that case, we let the least-laxity-user be the one with the smallest index. This does not affect the analysis, since the performance of LHPR is reflected by the value of the least virtual expected laxity.

We would expect LHPR to achieve the maximum least virtual expected laxity in every time slot, which would ensure its optimality. Unfortunately, this is not always true: LHPR serves users with rate values from a discrete set, while other policies can reach a larger least virtual expected laxity by using a finer allocation. For example, for two users, the LHPR policy allocates data rates $c_1\bar{R}$ and $(c_2 - c_1)\bar{R}$. However, one can instead allocate the equal data rate $\frac{c_2}{2}\bar{R}$ to the two users and achieve a larger least virtual expected laxity, due to the concavely increasing property of $c_n$. Nevertheless, we can show that the difference vanishes as the slot length tends to 0, and thus LHPR is asymptotically optimal.

First, we focus on the case where all users arrive at the same time. Without loss of generality, we assume $a_i = 0$ for all $i$, so that every user is available for service from the first slot.

Recall that the key idea of the LHPR policy is allocating larger data rate to the user with less virtual expected laxity. Thus, we first define two relationships, “Used-to-be-Less-Than (ULT)” and “Indirectly-Used-to-be-Less-Than (I-ULT)”, which will play an important role in analyzing the performance of LHPR.

Definition

(ULT and I-ULT)

a) In slot $t$, for two users $i$ and $j$, we say that $i$ used-to-be-less-than (ULT) $j$ before slot $t$, denoted $i \prec_t j$, if there exists a slot $s$ ($0 \le s < t$) such that $v_i(s) < v_j(s)$.

b) In slot $t$, for two users $i$ and $j$, we say that $i$ indirectly-used-to-be-less-than (I-ULT) $j$ before slot $t$, denoted $i \prec\prec_t j$, if there exists a user sequence $m_1, \dots, m_h$ such that $i \prec_t m_1$, $m_1 \prec_t m_2$, $\dots$, and $m_h \prec_t j$.

The lemma below shows that under the LHPR policy, if a user ULT another user, its virtual expected laxity will not much exceed that of the other user.

Lemma

For an identical-deadline system under the LHPR policy, if users and satisfy , then

(4)

Lemma 3.3 can be proved by using the definition of ULT and tracing the virtual expected laxity of the two users. The proof is omitted here due to space limitations; interested readers are referred to Appendix 7.

In order to provide a lower bound on the least virtual expected laxity obtained by LHPR, we define a least-laxity-set, which contains the least-laxity-user and all other users that I-ULT it.

Definition

(Least-Laxity-Set) The least-laxity-set in slot is an index set satisfying the following conditions:

a) ;

b) For any , if and only if .

Let be the number of elements in . By the definition of , we know that from time slot to time slot , all the largest data rates, i.e., , are allocated to users in . With this property of , we present the following lemma stating a lower bound on the least virtual expected laxity of LHPR.

Lemma

The least virtual expected laxity obtained by LHPR in time slot is bounded as

(5)

Proof

By the definition of I-ULT, for any , , we can find a sequence of distinct users, , such that , , , and . Note that the sequence does not contain or and thus . According to Lemma 3.3, we have

(6)

Consequently, when there are some completed users in , the least virtual expected laxity is bounded by

(7)

On the other hand, when there are no completed users in , as we have pointed out before, from time slot to , all the largest data rates are allocated to the users in . Thus, the sum of virtual expected laxity is

(8)

Note that from (6), we have

(9)

Finally, combining (7), (8), and (9), we conclude that (5) holds.

Next, using Lemma 3.3, we show the asymptotic (as the slot length ) optimality of LHPR in underloaded identical-deadline systems.

Theorem

Assume that all users arrive at time 0 and share the same deadline. As the slot length $\tau \to 0$, the LHPR policy achieves the maximum least laxity at any time and is asymptotically optimal in underloaded systems.

Proof

For a given , we divide the time into slots, with slot length . According to Lemma 3.3, as and , the least virtual expected laxity at time satisfies

(10)

Because is the maximum throughput that users can obtain in a duration , no other feasible schedule can obtain a larger least virtual expected laxity than LHPR. Consequently, when the arrival sequence is schedulable, the LHPR policy generates a feasible schedule as $\tau \to 0$ and is asymptotically optimal in underloaded systems.

This conclusion can be extended to the case with identical deadlines but different arrival times, which is stated by the following theorem.

Theorem

Assume that all users share the same deadline but may arrive at different times. As the slot length $\tau \to 0$, the LHPR policy achieves the maximum least laxity at any time and is asymptotically optimal in underloaded systems.

The proof of this theorem is similar to that of the same-arrival-time case (Theorem 3.3). The main difference is that in the different-arrival-time case, because of new arrivals, the least-laxity-set is not monotonically increasing over time. Thus, we need to discuss the laxity over different time intervals. Interested readers are referred to Appendix 8.

4 Practical Heuristic Laxity-based Policies

The asymptotically optimal policy LHPR is based on the idealized assumption of a polymatroid capacity region and cannot be implemented in practical systems. In this section, we propose practical heuristic laxity-based policies.

4.1 Policy Structure

First, in a TDM system, typically at most one user can be served in each time slot. We assume that the slot length is sufficiently small that the channel state is constant within one slot. Let $R_i(t)$ be the data rate supported by user $i$ in slot $t$. Then, in slot $t$, a scheduling policy chooses the user to serve based on the network status, which is given by

Furthermore, another issue is that a practical system may be overloaded, i.e., not all download tasks can be completed before their deadlines. Serving users that are likely to miss their deadlines may waste the opportunity to finish other download tasks. Thus, according to the expected laxity, we divide the present users into two groups, given by

and

The users in the first group will be served by trading off between the data rate and the urgency. The users in the second group will be served only when the first group is empty, so that we do not waste resources on tasks that are unlikely to be finished; in that case, we simply serve the user with the highest data rate to maximize the system throughput. Specifically, we propose the following policy framework:

(11)

where $\beta$ is a constant for distinguishing the priorities of different applications, and $g(\cdot)$ is a decreasing function that quantifies the urgency based on the expected laxity defined in (2). With the structure proposed above, we can obtain different policies by designing different urgency functions.
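A possible instantiation of this framework is sketched below; the multiplicative weight `beta * rate * urgency(laxity)` and the two-group rule are assumptions consistent with the description above, not the paper's exact expression (11):

```python
def choose_user(rates, laxities, urgency, beta=1.0):
    """Framework sketch: among users whose expected laxity is
    non-negative (group 1), serve the one maximizing a channel/urgency
    product; if group 1 is empty, fall back to the max-rate user."""
    group1 = [i for i, l in enumerate(laxities) if l >= 0]
    if group1:
        return max(group1,
                   key=lambda i: beta * rates[i] * urgency(laxities[i]))
    return max(range(len(rates)), key=lambda i: rates[i])

# With urgency g(l) = 1 / (l + 1), the slower but more urgent user wins:
winner = choose_user([3.0, 1.0], [5.0, 0.5], lambda l: 1.0 / (l + 1.0))
```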

4.2 Laxity-based Heuristic Policies

We construct the urgency function in (11) to obtain different policies. First, note that in an overloaded system, the expected laxity of a user may be negative. We deal with this issue by using a clipped approximation of the laxity, which is given by

where $\epsilon$ is a small positive constant.

We propose three heuristic policies based on polynomial, exponential, and logarithmic urgency functions, and refer to them as L-MaxWeight, L-Exp, and L-Log, respectively.

a) L-MaxWeight

where .

b) L-Exp

where , , and .

c) L-Log

where and .
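The exact urgency functions and parameter values appear in the paper's equations and Table 1; the forms below are illustrative stand-ins matching only the names (polynomial, exponential, logarithmic), each decreasing in the clipped laxity:

```python
import math

EPS = 0.1  # small constant for the clipped-laxity approximation

def clipped(l):
    # guard against non-positive expected laxity
    return max(l, EPS)

def g_maxweight(l, alpha=1.0):
    # polynomial urgency: grows as the laxity shrinks
    return clipped(l) ** (-alpha)

def g_exp(l, a=1.0):
    # exponential urgency: reacts sharply to small laxity
    return math.exp(a / clipped(l))

def g_log(l, b=1.0):
    # logarithmic urgency: mild differentiation between users
    return math.log(1.0 + b / clipped(l))
```

Plugging any of these into the framework yields the corresponding heuristic policy.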

Rigorous performance analysis of the above heuristic policies is challenging; we evaluate their performance through simulations in the next section.

5 Simulation Results

In this section, we evaluate the performance of the proposed laxity-based policies through simulations. We present simulation results on the schedulability and delay violation probability.

5.1 Simulation Settings

Traffic Model

We consider both identical-deadline systems with finitely many users and stationary-arrival systems. In the identical-deadline system with deadline $d$, we assume that there are $N$ users with arrival times uniformly distributed over an interval whose length is controlled by a constant parameter.

In the stationary-arrival case, we assume that the users arrive according to a Poisson process. Moreover, for each user, we set the deadline according to a maximum acceptable stretch factor, where the stretch factor is defined as the ratio between the actual delay and the expected delay in an ideal situation where the entire channel is occupied by the given user [11]. Hence, this metric indicates how much delay the application can tolerate.

For the file size, we apply the model proposed in [12] for FTP traffic, where the file size follows a truncated lognormal distribution with mean 2 Mbytes, standard deviation 0.722 Mbytes, and maximum size 5 Mbytes. The size is normalized with respect to the average rate.

Channel Model

We use a continuous transmission rate model [13, 7] and assume that the data rate supported by each user is given by the Shannon formula, with a fixed bandwidth and the received SINR of the user in the given slot. A Rayleigh fading channel is assumed for each user, and thus the received SINR follows an exponential distribution. We set the bandwidth (in KHz) and the mean SINR (in dB) to fixed values; note that these values have only a marginal effect, since the data rate is normalized with respect to the average rate.
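Under this model (Shannon rate with exponentially distributed SINR), one rate sample can be drawn as follows; the bandwidth and mean-SINR numbers are placeholders, not the paper's settings:

```python
import math
import random

def sample_rate(bandwidth_hz, mean_sinr, rng):
    """One Shannon-rate sample under Rayleigh fading: the received
    SINR is exponentially distributed with the given (linear) mean."""
    sinr = rng.expovariate(1.0 / mean_sinr)
    return bandwidth_hz * math.log2(1.0 + sinr)

rng = random.Random(0)
rate = sample_rate(180e3, 2.0, rng)   # placeholders: 180 kHz, mean SINR 2
```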

To obtain the multi-user diversity gains used by LHPR, we set $c_1 = 1$ due to the normalization of the data rate, and obtain $c_n$ for $n \ge 2$ by evaluating the throughput of the Max C/I scheduler with $n$ users.
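The gains $c_n$ can be estimated by Monte Carlo simulation of the Max C/I scheduler, normalizing by the single-user throughput so that $c_1 = 1$ by construction (the mean SINR below is a placeholder):

```python
import math
import random

def diversity_gain(n, mean_sinr=2.0, samples=20000, seed=1):
    """Monte-Carlo estimate of c_n: throughput of Max C/I over n
    i.i.d. Rayleigh users, normalized by single-user throughput."""
    rng = random.Random(seed)
    def rate():
        return math.log2(1.0 + rng.expovariate(1.0 / mean_sinr))
    single = sum(rate() for _ in range(samples)) / samples
    best = sum(max(rate() for _ in range(n))
               for _ in range(samples)) / samples
    return best / single

# c_n increases concavely in n: each extra user helps, but less and less.
```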

Policy Parameters

The parameters used by the proposed laxity-based policies are summarized in Table 1. We set the threshold so that users with negative (but not too large in absolute value) expected laxity can still be served, since they may still complete their tasks if they experience good channel conditions in the following slots. Other parameters are similar to those in conventional packet-level scheduling policies [13].

Table 1: Parameters used by the scheduling policies (columns: Common, L-MaxWeight, L-Exp, L-Log).

5.2 Schedulability under Different Policies

Fig. 2 shows the number of schedulable realizations under different scheduling policies. From Figs. 2(a) and 2(b), we can see that the proposed LHPR achieves the largest number of schedulable realizations among all policies. By tracing the scheduling results of each realization, we find that in the identical-deadline system, a realization is schedulable under LHPR as long as it is schedulable under some other policy. In contrast, a schedulable realization under LHPR is not necessarily schedulable under other policies. These results demonstrate the asymptotic optimality of LHPR in underloaded identical-deadline systems. A similar phenomenon occurs in the stationary-arrival system, although we have not proved the asymptotic optimality of LHPR in such a general system.

The proposed laxity-based policies, L-MaxWeight, L-Exp, and L-Log, outperform the greedy Max C/I policy in maximizing the number of schedulable realizations. Comparing the three policies across the two systems, we see that L-MaxWeight and L-Exp outperform L-Log in the identical-deadline system, but the situation is reversed in the stationary-arrival system. This is because the variance of the expected laxity in the identical-deadline system is much smaller than in the stationary-arrival system: the urgencies quantified by the logarithmic function are then similar among users, and L-Log behaves like Max C/I in the identical-deadline system. However, L-Log provides a better trade-off when the variance of the expected laxity is large, and thus performs better in the stationary-arrival system.

We emphasize that in the presented range, no realization is schedulable under EDF or LLF, which are unaware of the channel conditions (EDF is not evaluated in the identical-deadline system, since all users have the same deadline). Even Max C/I, which makes scheduling decisions based only on channel conditions, performs much better than EDF and LLF. This shows the value of channel-condition knowledge in improving scheduling performance.

(a) Identical-deadline system
(b) Stationary-arrival system
Figure 2: Number of schedulable realizations (, , and ).

5.3 Delay Violation Probability

Fig. 3 depicts the delay violation probability under different policies. At first sight, it is surprising that in Fig. 3(a) the delay violation probability of LHPR is larger than that of Max C/I in part of the presented range. This is because the LHPR policy tries to maximize the least expected laxity of the system by prioritizing the most urgent user. Thus, when a realization is unschedulable, resources are wasted and many users violate their deadline constraints. Similar problems occur under the other laxity-based policies.

In the stationary-arrival system, the proposed laxity-based policies outperform the greedy Max C/I policy. For example, the delay violation probability of L-Log is only 25% of that of Max C/I. In addition, the delay violation probability under LHPR and L-Log falls to a low level at a certain deadline value, while Max C/I reaches a similar probability only with a 40% longer delay tolerance.

Again, we point out that the channel-oblivious policies, EDF and LLF, perform rather poorly compared with the channel-aware policies. In particular, in the identical-deadline system under LLF, many users end up with very similar expected laxity. Hence, most users miss their deadlines, and the delay violation probability decreases only slightly as the delay tolerance increases.

(a) Identical-deadline system
(b) Stationary-arrival system
Figure 3: Delay violation probability (, , and ).

6 Conclusion and Future Work

In this paper, we study laxity-based policies for scheduling file download traffic, which is characterized by flow-level dynamics and deadlines. Under the idealized assumption of a polymatroid capacity region, we propose an asymptotically optimal policy, referred to as LHPR. We also propose heuristic policies, L-MaxWeight, L-Exp, and L-Log, for practical systems. A comparative study between the proposed laxity-based policies and traditional policies such as Max C/I, EDF, and LLF demonstrates that performance can be improved by intelligently balancing multi-user diversity against serving urgent users.

We mainly focus on designing optimal policies for underloaded systems and assume that all completed tasks have the same value. In practice, however, it is possible that not all tasks can be finished before their deadlines, and different tasks may have different values. In future work, we will study algorithms that estimate the schedulability of the incoming sequence, drop users to avoid wasting resources, and maximize the obtained utility when the system is possibly overloaded.

\appendices

7 Proof of Lemma 3.3

We can show that for an identical-deadline system under LHPR, in any time slot , if , then

(12)

This is because if , since and , then (12) follows.

Otherwise, , and the BS will allocate more resource to user than . Hence, and .

By the definition of ULT, we know that implies the existence of an (), such that . Thus, the desired result follows.

8 Proof of Theorem 3.3

Without loss of generality, we assume that and . For a given time , let be the last user entering the system before time , i.e.,

We divide the interval into time slots, each of length . Assume that the -th () user arrives in the -th slot, i.e., in the interval , and can be served from time slot . The initial virtual expected laxity of user is . We study the lower bound on the least virtual expected laxity of LHPR by examining the least-laxity-set .

Similar to the proof of Lemma 3.3, we know that for all ,

(13)

Assume that the users in the least-laxity-set are sorted in ascending order of their arrival times. If some of the users in are completed, then the sum of virtual expected laxity in time slot is bounded by

(14)

Otherwise, by the definition of , we know that from time slot to (), all the largest data rates are allocated to the users in , and from time slot to , all the largest data rates are allocated to the users in . Thus, the sum of virtual expected laxity is

(15)

Then, as tends to infinity, the slot length tends to 0. From (13), we know that the virtual expected laxity of all users in tends to the least virtual expected laxity at time , i.e., . Therefore, we have when all the users are completed, or

(16)

which is the maximum least virtual expected laxity that can be obtained by any scheduling policy. Thus, for any schedulable arrival sequence, the LHPR policy reaches the maximum least virtual expected laxity at any time , and is asymptotically optimal when .

References

  1. D. Tse, “Multiuser diversity in wireless networks,” Apr. 2001. [Online]. Available: http://www.eecs.berkeley.edu/~dtse/stanford416.ps
  2. M. Andrews, K. Kumaran, K. Ramanan, A. Stolyar, R. Vijayakumar, and P. Whiting, “Scheduling in a queueing system with asynchronously varying service rates,” Journal of Probability in the Engineering and Informational Sciences, vol. 18, no. 2, pp. 191 – 217, Apr. 2004.
  3. K. Evensen, T. Kupka, D. Kaspar, P. Halvorsen, and C. Griwodz, “Quality-adaptive scheduling for live streaming over multiple access networks,” in Proc. ACM NOSSDAV’10, June 2010.
  4. U. Ayesta, M. Erausquin, M. Jonckheere, and I. Verloop, “Stability and asymptotic optimality of opportunistic schedulers in wireless systems,” in Proc. the 5th ICST VALUETOOLS, 2011.
  5. B. Sadiq and G. de Veciana, “Balancing SRPT prioritization vs opportunistic gain in wireless systems with flow dynamics,” in Proc. 22nd International Teletraffic Congress (ITC), Amsterdam, Sept. 2010.
  6. S. Aalto, A. Penttinen, P. Lassila, and P. Osti, “On the optimal trade-off between SRPT and opportunistic scheduling,” in Proc. ACM SIGMETRICS’11, 2011.
  7. M. Proebster, M. Kaschub, and S. Valentin, “Context-aware resource allocation to improve the quality of service of heterogeneous traffic,” in Proc. IEEE ICC’12, Jun. 2011.
  8. A. K.-L. Mok, “Fundamental design problems of distributed systems for the hard-real-time environment,” Ph.D. dissertation, Massachusetts Institute of Technology, May 1983.
  9. G. Koren and D. Shasha, “D-Over: an optimal on-line scheduling algorithm for overloaded real-time systems,” SIAM Journal of Computing, vol. 24, pp. 318 – 339, 1995.
  10. M. Agarwal and A. Puri, “Base station scheduling of requests with fixed deadlines,” in Proc. IEEE INFOCOM, 2002.
  11. M. A. Bender, S. Chakrabarti, and S. Muthukrishnan, “Flow and stretch metrics for scheduling continuous job streams,” in Proc. the Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, 1998, pp. 270 – 279.
  12. R. Irmer (Editor in charge), “NGMN radio access performance evaluation methodology,” NGMN White Paper, Jan. 2008.
  13. B. Sadiq, S. J. Baek, and G. de Veciana, “Delay-optimal opportunistic scheduling and approximations: the log rule,” IEEE/ACM Trans. Networking, vol. 19, no. 2, pp. 405 – 418, Apr. 2011.