# Online Non-preemptive Scheduling on Unrelated Machines with Rejections

## Abstract

When a computer system schedules jobs there is typically a significant cost associated with preempting a job during execution. This cost stems from the expensive task of saving the state of the preempted job and moving data into and out of memory. It is desirable to schedule jobs non-preemptively to avoid the costs of preemption.

There is a need for non-preemptive system schedulers on desktops, servers and data centers. Despite this need, there is a gap between theory and practice. Indeed, few non-preemptive online schedulers are known to have strong foundational guarantees. This gap is likely due to strong lower bounds on any online algorithm for popular objectives. Indeed, typical worst case analysis approaches, and even resource augmented approaches such as speed augmentation, result in all algorithms having poor performance guarantees.

This paper considers on-line non-preemptive scheduling problems in the worst-case rejection model where the algorithm is allowed to reject a small fraction of jobs. By rejecting only a few jobs, this paper shows that the strong lower bounds can be circumvented. This approach can be used to discover algorithmic scheduling policies with desirable worst-case guarantees.

Specifically, the paper presents algorithms for the following two objectives: minimizing the total flow-time and minimizing the total weighted flow-time plus energy under the speed-scaling mechanism. The algorithms have a small constant competitive ratio while rejecting only a constant fraction of jobs.

Beyond specific results, the paper asserts that alternative models beyond speed augmentation should be explored to aid in the discovery of good schedulers in the face of the requirement of being online and non-preemptive.

## 1 Introduction

Designing efficient system schedulers is critical for optimizing system performance. Many environments require the scheduler to be non-preemptive, ensuring each job is scheduled on a machine without interruption. The need for non-preemption arises because preemption requires saving the state of a program and writing the state to memory or disk. For large complex tasks, the overhead cost of saving state is so large that it has to be avoided entirely.

Designing theoretically efficient online non-preemptive schedulers is challenging. Strong lower bounds have been shown, even for simple instances [1, 2]. The difficulty lies in the pessimism of assuming that the algorithm is online and must be robust to all problem instances, combined with the irrevocable nature of scheduling jobs non-preemptively.

In order to overcome strong theoretical barriers when designing scheduling algorithms, [3] and [4] proposed using resource augmentation in terms of speed augmentation and machine augmentation, respectively. The idea is to give the algorithm either faster processors or extra machines compared to the adversary. These models provide a tool to establish a theoretical explanation for the good performance of algorithms in practice. Indeed, many practical heuristics have been shown to be competitive in the on-line preemptive model when the algorithm is given resource augmentation.

Non-preemptive environments have resisted the discovery of strong theoretical schedulers. Specifically, it is known that a non-preemptive algorithm cannot achieve a reasonable competitive ratio using only speed or machine augmentation [5] for the popular average flow-time objective.

Recently, [6] extended the resource augmentation model to allow rejection. That is, some jobs need not be completed and are rejected. By combining rejection and speed augmentation, [5] gave competitive algorithms for non-preemptive flow-time problems. An intriguing question is the power of rejection versus resource augmentation. Is there a competitive algorithm that only uses rejection? This would establish that theoretically rejection is more powerful since there are lower bounds using resource augmentation. This paper answers this question positively.

### 1.1 Models, Problems and Contribution

#### Non-Preemptive Total Flow-time Minimization

In this problem, we are given a set $M$ of unrelated machines, and jobs arrive on-line. Each job $j$ is characterized by a release time $r_j$, and it takes a processing time $p_{ij}$ if it is executed on machine $i$. The characteristics of each job become known to the algorithm only after its arrival. The jobs should be scheduled non-preemptively, that is, a job $j$ is considered to be successfully executed only if it is executed on a machine $i$ for $p_{ij}$ continuous time units. Given a schedule, the completion time of a job $j$ is denoted by $C_j$. Then, its flow-time is defined as $F_j = C_j - r_j$, that is, the total amount of time during which $j$ remains in the system. Our goal is to create a non-preemptive schedule that minimizes the total flow-time of all jobs, i.e., $\sum_{j} F_j$.
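To make the objective concrete, here is a small sketch (with invented job data and a hypothetical helper name) that evaluates the total flow-time of a fixed non-preemptive schedule; only the relations $C_j = \text{start} + p_{ij}$ and $F_j = C_j - r_j$ come from the definition above.

```python
# Illustrative evaluation of the total flow-time objective for a fixed
# non-preemptive schedule on unrelated machines. The instance data and the
# helper name are invented for this example.

def total_flow_time(jobs, schedule):
    """jobs: {job: release_time}; schedule: {job: (machine, start, p_ij)}.
    Each job runs without interruption, so C_j = start + p_ij."""
    total = 0.0
    for j, r_j in jobs.items():
        machine, start, p_ij = schedule[j]
        assert start >= r_j, "a job cannot start before its release"
        C_j = start + p_ij   # completion time of job j
        total += C_j - r_j   # flow-time F_j = C_j - r_j
    return total

jobs = {"a": 0.0, "b": 1.0}
schedule = {"a": (0, 0.0, 3.0), "b": (0, 3.0, 2.0)}
print(total_flow_time(jobs, schedule))  # (3-0) + (5-1) = 7.0
```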

The problem has been studied in [5] in a model combining speed augmentation and rejection. Specifically, [5] gave a scalable competitive algorithm that uses machines with speed $1+\epsilon$ and rejects at most an $\epsilon$-fraction of jobs, for arbitrarily small $\epsilon > 0$. A natural intriguing question is whether speed augmentation is necessary. Our main result shows that it is not.

###### Theorem 1.

For the non-preemptive total flow-time minimization problem, there exists a $2\left(\frac{1+\epsilon}{\epsilon}\right)^{2}$-competitive algorithm that removes at most a $2\epsilon$-fraction of the total number of jobs, for any $\epsilon \in (0,1)$.

The design and analysis of the algorithm follow the duality approach. At the release time of a job $j$, the algorithm defines the dual variables associated to $j$ and assigns $j$ to some machine based on this definition. The values of the dual variables associated to $j$ are selected in order to satisfy two key properties: (i) they comprise the marginal increase of the total flow-time due to the arrival of the job, a property that has been observed in [7] and has become more and more popular in dual-fitting for on-line scheduling; and (ii) they capture the information needed for a future decision of the algorithm on whether job $j$ will be completed or rejected. Moreover, the dual variables are defined so as to stabilize the schedule, which allows us to maintain a non-preemptive schedule (even with job arrivals and rejections in the future).

The decision to reject a job depends on the load of the recently released jobs that are waiting in the queue of each machine. The scheduler rejects a job when this load exceeds a given threshold; the rejected job is not necessarily the one that just arrived and caused the threshold to be exceeded. The following lemma, whose proof is given in the Appendix, shows that rejection policies which must decide immediately upon a job's arrival cannot achieve a comparable competitive ratio.

###### Lemma 1.

Any policy that rejects at most a constant fraction of the jobs and has to decide whether or not to reject each job immediately upon its arrival has a competitive ratio that grows polynomially with $\Delta$ for the non-preemptive total flow-time minimization problem, even in a single-machine environment, where $\Delta$ is the ratio of the maximum over the minimum processing time in the instance.

###### Proof.

Assume that jobs of length $\Delta$ are released at time $0$. Note that the algorithm can reject at most one of them. Consider the time $t$ at which the algorithm schedules the first of these jobs.

• If $t$ is large, then the algorithm has waited too long. Specifically, its schedule has a total flow-time that grows with $t$, whereas the adversary schedules the jobs sequentially in an arbitrary order starting from time $0$, so the total flow-time in the adversary's schedule is independent of $t$. Thus, the competitive ratio in this case grows with $t$.

• If $t$ is small, then starting at time $t$ a job of small processing time is released at every unit of time. By the definition of the model, the algorithm cannot reject the long job which is scheduled at time $t$, and hence the small jobs have to wait until this job is completed. Since the algorithm can only reject a constant fraction of the small jobs, it incurs a total flow-time proportional to $\Delta$ times the number of small jobs. On the other hand, the adversary schedules all small jobs before the remaining long jobs; hence the small jobs incur only a small flow-time, while the long jobs are delayed by no more than the total length of the small jobs. Thus, the competitive ratio is again large as a function of $\Delta$.

The lemma follows by balancing the two cases. ∎

#### Non-Preemptive Total Flow-time Plus Energy Minimization

We next consider non-preemptive scheduling in the speed-scaling model. In this model, each machine $i$ has a power function of the form $P_i(s) = s^{\alpha}$, where $s$ is the speed of the machine at time $t$ and $\alpha > 1$ is a constant parameter (typically $2 \leq \alpha \leq 3$). Each job $j$ is now characterized by its weight $w_j$, its release date $r_j$ and, for each machine $i$, a machine-dependent volume of execution. A non-preemptive schedule in the speed-scaling model is a schedule in which each job is processed continuously (without being interrupted) on a machine, and a job has a constant speed during its execution. Note that the model allows multiple jobs to be processed in parallel on the same machine. The objective is to schedule jobs non-preemptively so as to minimize the total weighted flow-time plus the energy consumed, i.e., $\sum_{j} w_j F_j + \sum_{i} \int P_i(s_i(t))\,dt$.
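Since each job runs at a constant speed, its execution time and energy consumption have closed forms: volume $v$ processed at speed $s$ takes $v/s$ time and consumes $s^{\alpha}\cdot (v/s) = v\,s^{\alpha-1}$ energy. The following sketch (invented data and helper name, not the paper's algorithm) evaluates the weighted flow-time plus energy objective under these formulas.

```python
# Sketch: evaluating weighted flow-time plus energy for jobs executed
# non-preemptively at constant speeds, as in the speed-scaling model above.
# Instance data are invented for illustration.

ALPHA = 3  # constant exponent of the power function P(s) = s^alpha

def weighted_flow_plus_energy(jobs):
    """jobs: list of dicts with weight w, release r, volume v, start, speed s."""
    total = 0.0
    for job in jobs:
        duration = job["v"] / job["s"]               # time = volume / speed
        C = job["start"] + duration                  # completion time
        flow = C - job["r"]                          # flow-time
        energy = job["v"] * job["s"] ** (ALPHA - 1)  # s^alpha * duration
        total += job["w"] * flow + energy
    return total

jobs = [{"w": 1.0, "r": 0.0, "v": 4.0, "start": 0.0, "s": 2.0}]
print(weighted_flow_plus_energy(jobs))  # flow = 2, energy = 4 * 2^2 = 16 -> 18.0
```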

Building upon the ideas and techniques developed for flow-time minimization, we derive a competitive algorithm for this problem. Note that the algorithm does not need to process multiple jobs in parallel on the same machine, although the model described above permits this.

###### Theorem 2.

For the non-preemptive total weighted flow-time plus energy minimization problem, there exists an $O(1)$-competitive algorithm (for fixed $\epsilon$ and $\alpha$) that rejects jobs of total weight at most an $\epsilon$-fraction of the total weight of all jobs, for any $\epsilon > 0$.

#### Non-Preemptive Energy Minimization

Subsequently, we consider the non-preemptive energy-minimization scheduling problem in the speed-scaling model. The setting is similar to the previous problem, but a job $j$ now has a release date $r_j$, a deadline $d_j$ and a processing volume $v_{ij}$ if it is assigned to machine $i$. Every job has to be processed non-preemptively and to be completed before its deadline. The goal is to minimize the total energy consumption $\sum_{i} \int P_i(s_i(t))\,dt$, where $P_i$ is the power function of machine $i$. (In this case we consider the discrete-time setting.)

No competitive algorithm is known in the non-preemptive multiple-machine environment. Despite some similarities to the problem of minimizing energy plus flow-time, the main difference is that in the latter one can make a trade-off between energy and flow-time and derive a competitive algorithm, whereas for the energy-minimization problem one has to deal directly with a non-linear objective. The critical issue is that no linear program (LP) with a relatively small integrality gap was known. In order to derive a competitive algorithm for this problem, we make use of the primal-dual approach based on configuration LPs recently developed in [8]. The approach consists of introducing an exponential number of variables into the natural formulation in order to reduce the integrality gap. Then, in contrast to current rounding techniques based on configuration LPs, the approach greedily maintains a competitive solution in the primal-dual sense (without solving LPs of exponential size). Interestingly, using this approach, the power functions are not required to be convex (a crucial property for prior analyses), and the competitive ratio is characterized by a notion of smoothness defined as follows.

###### Definition 1.

A set function $f$ is $(\lambda,\mu)$-smooth if for any set $A = \{a_1,\dots,a_n\}$ and any collection of sets $B_1 \subseteq B_2 \subseteq \dots \subseteq B_n = B$, the following inequality holds:

$$\sum_{i=1}^{n}\left[f(B_i \cup \{a_i\}) - f(B_i)\right] \;\leq\; \lambda f(A) + \mu f(B)$$
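The smoothness inequality can be sanity-checked numerically on small ground sets. The sketch below (illustrative function and parameters, not taken from the paper) tests the inequality on random chains $B_1\subseteq\cdots\subseteq B_n$. For the example $f(S)=(\sum_{x\in S}x)^2$ over nonnegative numbers, the pair $(\lambda,\mu)=(2,1)$ works, since $\sum_i a_i^2 \le f(A)$ and $2(\sum_i a_i)(\sum_{x\in B}x)\le f(A)+f(B)$.

```python
# Randomized sanity check (not a proof) of the (lambda, mu)-smoothness
# inequality of Definition 1, for a set function on a finite ground set.
import random

def is_smooth_on_sample(f, ground, lam, mu, trials=200, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        A = rng.sample(ground, k=rng.randint(1, len(ground)))
        chain, current = [], set()
        for _ in A:  # build an increasing chain B_1 ⊆ B_2 ⊆ ... ⊆ B_n
            current |= set(rng.sample(ground, k=rng.randint(0, 2)))
            chain.append(set(current))
        B = chain[-1]
        lhs = sum(f(Bi | {a}) - f(Bi) for Bi, a in zip(chain, A))
        if lhs > lam * f(set(A)) + mu * f(B) + 1e-9:
            return False
    return True

f = lambda S: sum(S) ** 2  # example set function on nonnegative numbers
print(is_smooth_on_sample(f, [1.0, 2.0, 3.0], lam=2, mu=1))  # True
```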
###### Theorem 3.

Assume that all power functions are $(\lambda,\mu)$-smooth for some $\mu < 1$. Then, there is a $\frac{\lambda}{1-\mu}$-competitive algorithm. In particular, if $P_i(s) = s^{\alpha_i}$ for every machine $i$, then the algorithm is $O(\alpha^{\alpha})$-competitive, where $\alpha = \max_i \alpha_i$.

In the following lemma, whose proof is given in the Appendix, we consider the case of typical power functions of the form $P(s) = s^{\alpha}$, and we show that the above result is asymptotically optimal as a function of $\alpha$.

###### Lemma 2.

Any deterministic algorithm has a competitive ratio at least exponential in $\alpha$ for the non-preemptive energy-minimization problem, even in a single-machine environment.

###### Proof.

The construction is inspired by the one in [9].

Fix a deterministic on-line algorithm Alg. Without loss of generality, assume that $\alpha$ is an integer. Recall that the span of a job is the interval from its release date to its deadline, and that the span of job 1 has (normalized) size at least 1. The adversary Adv specifies the span of each subsequent job depending on the behavior of Alg. Let $b_k$ and $c_k$ be the starting time and completion time of job $k$ in the schedule of Alg. For every $k$, once Alg decides the starting time and the speed of job $k$ (and hence its completion time), Adv immediately releases job $k+1$ whose span lies inside the span of job $k$ but outside the execution interval $[b_k, c_k]$. The instance ends when either the number of released jobs or the size of the current span reaches a prescribed bound.

We first observe that, by executing every job at speed 1, Adv can process all jobs so that at any moment no two jobs run in parallel (in other words, there is no overlap). Specifically, by the definition of the jobs' spans, Adv can entirely execute job $k+1$ at speed 1 outside of the interval $[b_k, c_k]$, so there is no overlap with job $k$ and subsequent jobs. Hence, as the speed is at most 1, the total energy induced is at most the length of the biggest span.

Besides, by the way Adv releases jobs, each job overlaps with all other jobs in the schedule of Alg. Imagine now that each job is initially represented by a rectangle whose width is its span and whose height is its speed. An algorithm amounts to reshaping each rectangle (contracting the width and augmenting the height) and placing the rectangles appropriately. Now suppose that there is a job whose span is sufficiently small; in this case, the total height of all rectangles must be large. Otherwise, the instance releases the maximal number of jobs, and the total height of all rectangles is again large. In both cases, the total energy during the span of the last job is large compared to the energy of Adv.

Hence, the claimed lower bound on the competitive ratio follows. ∎

### 1.2 Related Work

For the on-line non-preemptive scheduling problem of minimizing total weighted flow-time, any algorithm has a competitive ratio polynomial in $n$, even on a single machine, where $n$ is the number of jobs (as mentioned in [2]). In identical-machine environments, [4] gave a constant-competitive algorithm that uses additional machines compared to the adversary, where the number of extra machines depends on $\Delta$, the ratio of the largest to the smallest processing time. Moreover, a machine- and speed-augmented algorithm that returns the optimal schedule has been presented in [4] for the unweighted flow-time objective. [10] proposed a machine-augmented competitive algorithm for the unweighted case on a single machine; this algorithm is optimal up to a constant factor for a constant number of extra machines. Recently, [5] considered the problem in the model of speed augmentation and rejection. They showed that without rejection, no algorithm is competitive, even on a single machine with speed arbitrarily faster than that of the adversary. Moreover, they gave a scalable competitive algorithm that uses machines with speed $1+\epsilon$ and rejects at most an $\epsilon$-fraction of jobs, for arbitrarily small $\epsilon > 0$.

For the on-line non-preemptive scheduling problem of minimizing total weighted flow-time plus energy, to the best of our knowledge, no competitive algorithm is known. However, the problem in the preemptive setting has been widely studied. [11] gave a competitive algorithm for weighted flow-time plus energy on a single machine where the power function is $P(s) = s^{\alpha}$. Based on linear programming and dual fitting, [7] proved an $O(\alpha/\log\alpha)$-competitive algorithm for unrelated machines. Subsequently, Nguyen [12] and [13] presented $O(\alpha/\log\alpha)$-competitive algorithms for unrelated machines by dual-fitting and primal-dual approaches, respectively.

For the on-line non-preemptive scheduling problem of minimizing total energy consumption, no competitive algorithm is known. Even for preemptive scheduling in which migration of jobs between machines is not allowed, no algorithm with provable performance is known. The difficulty, as mentioned earlier, is due to the integrality-gap barrier of all currently known formulations. On a single machine, where the issue of non-migration does not arise, [14] gave a competitive algorithm with ratio exponential in $\alpha$. Moreover, [15] showed that no deterministic algorithm has a competitive ratio less than $e^{\alpha-1}/\alpha$. [16] considered the case where jobs are allowed to be executed preemptively and migration between machines is permitted. For this problem, they proposed an algorithm based on the Average Rate algorithm [17] and showed a competitive ratio that depends only on $\alpha$.

## 2 Minimize Total Flow-time

#### Linear Programming Formulation

In order to formulate our problem as a linear program, for each job $j$, machine $i$ and time $t$, we introduce a binary variable $x_{ij}(t)$ which is equal to one if $j$ is processed on $i$ at time $t$, and zero otherwise. We use two lower bounds on the flow-time of each job $j$, assuming that it is dispatched to machine $i$: its fractional flow-time, defined as $\int_{r_j}^{\infty}\frac{t-r_j}{p_{ij}}\,x_{ij}(t)\,dt$ (see for example [7]), and its processing time $p_{ij}$. Then, the linear programming formulation for the problem of minimizing the total flow-time follows.

$$\begin{aligned}
\min\; & \sum_{i\in M}\sum_{j\in J}\int_{r_j}^{\infty}\left(\frac{t-r_j}{p_{ij}}+1\right)x_{ij}(t)\,dt \\
& \sum_{i\in M}\int_{0}^{\infty}\frac{x_{ij}(t)}{p_{ij}}\,dt \geq 1 && \forall j \\
& \sum_{j\in J}x_{ij}(t) \leq 1 && \forall i,t \\
& x_{ij}(t) \in \{0,1\} && \forall i,j,t
\end{aligned}$$

Note that the objective value of the above linear program is at most twice that of the optimal non-preemptive schedule. We relax the above integer linear program by replacing the integrality constraint for each $x_{ij}(t)$ with $x_{ij}(t) \geq 0$. The dual of the relaxed linear program is the following.

$$\begin{aligned}
\max\; & \sum_{j\in J}\lambda_j - \sum_{i\in M}\int_{0}^{\infty}\beta_i(t)\,dt \\
& \frac{\lambda_j}{p_{ij}} - \beta_i(t) \leq \frac{t-r_j}{p_{ij}} + 1 && \forall i,j,t \\
& \lambda_j \geq 0 && \forall j \\
& \beta_i(t) \geq 0 && \forall i,t
\end{aligned}$$

In the rejection model considered in this article, we assume that the algorithm is allowed to reject some jobs. This can be interpreted in the primal linear program by considering only the variables corresponding to the non-rejected jobs, that is the algorithm does not have to satisfy the first constraint for the rejected jobs.

#### The Algorithm and Definition of Dual Variables

We next define the scheduling, rejection and dispatching policies of our algorithm. Let $\epsilon \in (0,1)$ be an arbitrarily small constant which indicates the fraction of the total number of jobs that will be rejected. Each job is immediately dispatched to a machine upon its arrival. Let $Q_i(t)$ be the set of pending jobs at time $t$ dispatched to machine $i$, that is, the jobs dispatched to $i$ that have been released but not yet completed or rejected at time $t$. Moreover, let $q_{ij}(t)$ be the remaining processing time at time $t$ of a job $j$ which has been dispatched to machine $i$.

Let $k$ be the job that is executed on machine $i$ at time $t$. We always consider the jobs in $Q_i(t)$ sorted in non-decreasing order of their processing times; in case of ties, we consider the jobs in earliest-release-time order. We say that a job $\ell$ precedes (resp. succeeds) a job $j$ if $\ell$ appears before (resp. after) $j$ in this order, and we write $\ell \prec j$ (resp. $\ell \succ j$). We use the symbols $\preceq$ and $\succeq$ to express the fact that $\ell$ may coincide with $j$. The scheduling policy of the algorithm is the following: whenever a machine $i$ becomes idle at a time $t$, schedule on $i$ the job that precedes any other job in $Q_i(t)$.
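The scheduling policy above amounts to a shortest-processing-time-first selection among the pending jobs, with ties broken by earliest release time. A minimal sketch (invented data, hypothetical helper name):

```python
# Sketch of the scheduling policy: whenever machine i becomes idle, run the
# pending job that precedes all others, i.e. the one with the smallest
# processing time, breaking ties by earliest release time.

def next_job(pending):
    """pending: list of (job_id, p_ij, r_j) dispatched to machine i."""
    return min(pending, key=lambda job: (job[1], job[2]))

pending = [("a", 5.0, 0.0), ("b", 2.0, 1.0), ("c", 2.0, 0.5)]
print(next_job(pending)[0])  # "c": same processing time as "b", released earlier
```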

We use two different rules to define our rejection policy. The first rule handles the arrival of a large group of jobs during the execution of a long job, as in [5]. The second rule simulates and replaces the utility of speed augmentation.

Rejection Rule 1

At the beginning of the execution of a job $k$ on machine $i$, we introduce a counter $v_k$ which is initialized to zero. Whenever a job is dispatched to machine $i$ during the execution of $k$, we increase $v_k$. Then, we interrupt and reject the job $k$ the first time $v_k$ exceeds its threshold.

Rejection Rule 2

For each machine $i$, we maintain a counter $V_i$ which is initialized to zero at $t = 0$. Whenever a job is dispatched to machine $i$, we increase $V_i$. Then, we reject the pending job with the largest processing time the first time $V_i$ exceeds its threshold, and we reset $V_i$ to zero.
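The exact counter increments and thresholds of the two rules are not fully specified in this extract, so the sketch below fixes one plausible instantiation: Rule 1's counter accumulates the processing times of jobs dispatched while $k$ runs and triggers when it reaches $p_{ik}/\epsilon$, while Rule 2's per-machine counter accumulates dispatched processing times and triggers at a fixed constant. Treat both choices, and the class name, as illustrative assumptions rather than the paper's exact constants.

```python
# Illustrative sketch of the two rejection rules with assumed increments
# and thresholds (the paper's exact constants are not reproduced here).

EPS = 0.5
RULE2_THRESHOLD = 3.0  # assumed constant threshold for Rule 2

class Machine:
    def __init__(self):
        self.pending = []    # (job_id, p_ij)
        self.running = None  # [job_id, p_ik, v_k] with v_k the Rule 1 counter
        self.V = 0.0         # Rule 2 counter, reset after each Rule 2 rejection

    def dispatch(self, job_id, p_ij):
        rejected = []
        self.pending.append((job_id, p_ij))
        if self.running is not None:                 # Rule 1
            self.running[2] += p_ij
            if self.running[2] >= self.running[1] / EPS:
                rejected.append(self.running[0])     # interrupt running job
                self.running = None
        self.V += p_ij                               # Rule 2
        if self.V >= RULE2_THRESHOLD:
            largest = max(self.pending, key=lambda job: job[1])
            self.pending.remove(largest)
            rejected.append(largest[0])              # reject largest pending job
            self.V = 0.0
        return rejected

m = Machine()
m.running = ["big", 10.0, 0.0]   # machine currently executing job "big"
print(m.dispatch("small", 1.0))  # []  (no counter exceeds its threshold yet)
print(m.dispatch("med", 2.5))    # ['med']  (Rule 2 rejects the largest pending job)
```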

Let $R$ be the set of all rejected jobs. Slightly abusing notation, we denote the rejection time of a job $j \in R$ by $C_j$. Moreover, we define the flow-time of a rejected job to be the difference between its rejection time and its arrival time, and we denote it by $F_j$.

At the arrival of a new job $j$, let $\Delta_{ij}$ be the increase in the total flow-time if we decide to dispatch $j$ to machine $i$. Fix a machine $i$ and let $k$ be the job that is executed on $i$ at $r_j$. Then, assuming that $j$ is dispatched to $i$, we have that

$$\begin{aligned}
\Delta_{ij} ={}& q_{ik}(r_j)\cdot\mathbb{1}\{k \text{ is not rejected (due to Rule 1)}\} + \sum_{\ell\preceq j}p_{i\ell} + \sum_{\ell\succ j}p_{ij}\\
& - \Big(q_{ik}(r_j)+\sum_{\ell\neq j}q_{ik}(r_j)\Big)\cdot\mathbb{1}\{k \text{ is rejected due to Rule 1}\}\\
& - \Big(q_{ik}(r_j)+\sum_{\ell\neq j}p_{i\ell}+p_{ij_{\max}}\Big)\cdot\mathbb{1}\{j_{\max} \text{ is rejected due to Rule 2}\}
\end{aligned}$$

where the first two terms correspond to the flow-time of the new job $j$, the term $\sum_{\ell\succ j}p_{ij}$ corresponds to the increase of the flow-time of the jobs in $Q_i(r_j)$ that succeed $j$ due to the dispatching of $j$ to machine $i$, the third term corresponds to the decrease of the flow-time of the jobs in $Q_i(r_j)$ due to the rejection of $k$ (according to Rule 1), and the fourth term corresponds to the decrease of the flow-time due to the rejection of the largest job $j_{\max}$ (according to Rule 2). Based on the above, we define

$$\lambda_{ij} \;=\; \frac{1}{\epsilon}\,p_{ij} \;+\; \sum_{\ell\preceq j}p_{i\ell} \;+\; \sum_{\ell\succ j}p_{ij}$$

Then, our dispatching policy is the following: at the arrival of a new job $j$ at time $r_j$, dispatch $j$ to a machine $i^{*} \in \arg\min_i \lambda_{ij}$.

The quantity $\lambda_{ij}$ is strongly related to the marginal increase $\Delta_{ij}$. However, all negative terms that appear in $\Delta_{ij}$ have been eliminated in $\lambda_{ij}$. Moreover, the positive quantity $q_{ik}(r_j)$ does not appear in $\lambda_{ij}$, but we have added the term $\frac{1}{\epsilon}p_{ij}$. The intuition behind the definition of $\lambda_{ij}$ is to charge an upper bound on the marginal increase to the quantities of some jobs dispatched to $i$. Specifically, the quantity $\frac{1}{\epsilon}p_{ij}$ is charged to $j$ itself. If the positive quantity $q_{ik}(r_j)$ exists, then it is charged to the term $\frac{1}{\epsilon}p_{ik}$ of $\lambda_{ik}$ (i.e., to the job $k$ that is executed on $i$ at the arrival of $j$). Rejection Rule 1 guarantees that this term is sufficient for all jobs that arrive and are dispatched to $i$ during the execution of $k$.
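The dispatching rule can be sketched directly from the displayed definition of $\lambda_{ij}$, read as $p_{ij}/\epsilon$ plus the total processing time of preceding pending jobs plus a delay of $p_{ij}$ per succeeding pending job. The code below (invented data; tie-breaking by release time omitted) computes $\lambda_{ij}$ per machine and dispatches to a minimizer.

```python
# Sketch of the dispatching policy: on arrival of job j, compute lambda_ij for
# every machine i over its pending jobs, and send j to a machine minimizing it.
# Instance data are invented for illustration.

EPS = 0.5

def lam(p_ij, pending_p):
    """pending_p: processing times of the jobs currently pending on machine i."""
    preceding = sum(p for p in pending_p if p <= p_ij)     # jobs l with l ⪯ j
    succeeding = sum(p_ij for p in pending_p if p > p_ij)  # p_ij per job l ≻ j
    return p_ij / EPS + preceding + succeeding

def dispatch(p_j_per_machine, pending):
    """p_j_per_machine[i] = p_ij; pending[i] = pending processing times on i."""
    return min(range(len(pending)),
               key=lambda i: lam(p_j_per_machine[i], pending[i]))

print(dispatch([2.0, 2.0], [[1.0, 3.0], [4.0, 5.0]]))  # 0 (lambda 7.0 vs 8.0)
```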

In order to deal with the ignored negative terms, we expand the notion of completion time of each job $j$. Let $D_j$ be the set of jobs that are rejected due to Rule 1 after the release time of $j$ and before its completion or rejection (including $j$ itself in case it is rejected), that is, the jobs that cause a decrease of the flow-time of $j$ due to Rule 1. Moreover, we denote by $j_k$ the job released at the moment we reject a job $k$. Then, we say that a job $j$ which is dispatched to machine $i$ is definitively finished at the time

$$\tilde{C}_j \;=\; C_j + \sum_{k\in D_j} q_{ik}(r_{j_k}) + \Big(q_{ik}(r_{j_j}) + \sum_{\ell\neq j_j}p_{i\ell} + p_{ij}\Big)\cdot\mathbb{1}\{j \text{ is rejected due to Rule 2}\}$$

Let $U_i(t)$ be the set of jobs dispatched to machine $i$ that are not yet definitively finished at time $t$. Intuitively, at the completion or rejection of a job $j$, $j$ leaves the set of pending jobs $Q_i(t)$ but remains in $U_i(t)$ until the time $\tilde{C}_j$. Let $R_i(t) \subseteq U_i(t)$ be the set of jobs that are already rejected due to Rule 2 at time $t$ but are not yet definitively finished.

It remains to formally define the dual variables. At the arrival of a job $j$, we set $\lambda_j = \frac{\epsilon}{1+\epsilon}\lambda_{i^{*}j}$, where $i^{*}$ is the machine to which $j$ is dispatched, and we never change this value again. Moreover, for each $i$ and $t$, we set $\beta_i(t) = \frac{\epsilon}{(1+\epsilon)^2}\big(|U_i(t)|+|R_i(t)|\big)$. Note that, given any fixed time $t$, $\beta_i(t)$ may increase if a new job arrives at a time $t' \leq t$. However, $\beta_i(t)$ never decreases in the case of a rejection, since the rejected jobs remain in the set $U_i(t)$ until they are definitively finished.

#### Analysis

We first show the following lemma, which relates all but a few of the jobs in $U_i(t)$ to jobs in $R_i(t)$.

###### Lemma 3.

Fix a machine $i$ and a time $t$. Consider the jobs in $R_i(t)$ sorted in non-decreasing order of the time they are definitively finished; let $j_1, j_2, \dots, j_k$ be this order, where $k = |R_i(t)|$. There is a partition of the jobs in $U_i(t)$ into at most $k+1$ subsets $S_0, S_1, \dots, S_k$, such that

1. $|S_m| \leq \frac{1}{\epsilon}$, for $m = 0, 1, \dots, k$,

2. $\bigcup_{m=0}^{k} S_m = U_i(t)$,

3. for each job $j \in S_m$, $m \geq 1$, the estimated completion time of $j$, assuming that no other job is released after time $t$, is at most the definitive finish time of $j_m$.

###### Proof.

The proof is based on induction over time. We consider only times which correspond to discrete events that modify the sets $U_i(t)$ and $R_i(t)$, i.e., the arrival of a new job, the completion of a job, the rejection of a job according to Rule 2, and the definitive finish of a job.

At the arrival of the first job dispatched to machine $i$, the statement trivially holds. Let us assume that the partition exists at the event occurring at time $t_1$. We show that it also holds at the next event, at time $t_2$. We consider the following three cases.

• If a job completes at time $t_2$, then it leaves the set of pending jobs without affecting the partition implied by the statement of the lemma.

• If a job arrives at time $t_2$, then the size of one of the subsets increases by one. If this subset still has size at most $\frac{1}{\epsilon}$, the partition remains valid. Otherwise, we rebalance the partition by moving, from each overloaded subset, the job with the largest processing time to the next subset. The first two items of the lemma are then satisfied by the induction hypothesis, since each subset, except possibly the last one, has the same size at times $t_1$ and $t_2$. For item (iii), we observe that the job added to each subset has a shorter processing time than the job removed from it; hence, item (iii) holds by the definition of the scheduling policy. Moreover, if a job is rejected according to Rule 2 at time $t_2$, then a new subset can be created, and the partition remains valid, since the rejected job is the one with the largest processing time (and hence the largest estimated completion time).

• If a job $j_m$ is definitively finished at time $t_2$, then assume towards a contradiction that $S_m$ is not empty. By the induction hypothesis, each job in $S_m$ should complete before the definitive finish time of $j_m$, which contradicts the fact that this definitive finish is the next event after $t_1$.

Therefore, the lemma follows. ∎

The following corollary is an immediate consequence of Lemma 3.

###### Corollary 1.

For each machine $i$ and time $t$, it holds that $\epsilon\,|U_i(t)| \leq |R_i(t)| + 1$.

The following lemma guarantees that the definition of the dual variables always leads to a feasible solution of the dual program.

###### Lemma 4.

For every machine $i$, every job $j$ and every time $t \geq r_j$, the dual constraint $\frac{\lambda_j}{p_{ij}} - \beta_i(t) \leq \frac{t-r_j}{p_{ij}} + 1$ is satisfied.

###### Proof.

For a machine $i$ and a job $j$, observe that, for any fixed $t$, the value of $\beta_i(t)$ may only increase during the execution of the algorithm. Hence, it is sufficient to prove the constraint assuming that no job arrives after $t$. Let $z$ be the job that is executed on machine $i$ at time $t$. We have the following cases.

Case 1: The job $j$ itself is executed at time $t$. By the definition of $\lambda_j$ and $\lambda_{ij}$, we have:

$$\begin{aligned}
\frac{\lambda_j}{p_{ij}} &\leq \frac{\epsilon}{1+\epsilon}\Big(\frac{1}{\epsilon}+\frac{1}{p_{ij}}\sum_{\ell\preceq j}p_{i\ell}+\sum_{\ell\succ j}1\Big) \leq \frac{\epsilon}{1+\epsilon}\Big(\frac{1}{\epsilon}+\sum_{\ell\preceq j}1+\sum_{\ell\succ j}1\Big) && (\text{since } p_{i\ell}\leq p_{ij} \text{ for all } \ell\preceq j)\\
&\leq \frac{\epsilon}{1+\epsilon}\Big(\frac{1}{\epsilon}+|U_i(t)|+\frac{t-r_j}{p_{ij}}\Big) && (\text{since } t-r_j\geq 0)
\end{aligned}$$

Case 2: A job $z \prec j$ is executed at time $t$. Then, we have $\sum_{\ell\prec z}p_{i\ell} \leq t-r_j$. Using the definition of $\lambda_j$ and $\lambda_{ij}$, we have:

$$\begin{aligned}
\frac{\lambda_j}{p_{ij}} &\leq \frac{\epsilon}{1+\epsilon}\Big(\frac{1}{\epsilon}+\frac{1}{p_{ij}}\sum_{\ell\preceq j}p_{i\ell}+\sum_{\ell\succ j}1\Big) = \frac{\epsilon}{1+\epsilon}\Big(\frac{1}{\epsilon}+\frac{1}{p_{ij}}\sum_{\ell\prec z}p_{i\ell}+\frac{1}{p_{ij}}\sum_{z\preceq\ell\preceq j}p_{i\ell}+\sum_{\ell\succ j}1\Big)\\
&\leq \frac{\epsilon}{1+\epsilon}\Big(\frac{1}{\epsilon}+\frac{t-r_j}{p_{ij}}+\sum_{z\preceq\ell\preceq j}1+\sum_{\ell\succ j}1\Big) && (\text{since } p_{i\ell}\leq p_{ij} \text{ for all } \ell\preceq j)\\
&\leq \frac{\epsilon}{1+\epsilon}\Big(\frac{1}{\epsilon}+\frac{t-r_j}{p_{ij}}+|U_i(t)|\Big)
\end{aligned}$$

Case 3: A job $z \succ j$ is executed at time $t$. Then, we have $p_{i\ell} > p_{ij}$ for all $\ell \succ j$. Using the definition of $\lambda_j$ and $\lambda_{ij}$, we have:

$$\begin{aligned}
\frac{\lambda_j}{p_{ij}} &\leq \frac{\epsilon}{1+\epsilon}\Big(\frac{1}{\epsilon}+\frac{1}{p_{ij}}\sum_{\ell\preceq j}p_{i\ell}+\sum_{\ell\succ j}1\Big) = \frac{\epsilon}{1+\epsilon}\Big(\frac{1}{\epsilon}+\frac{1}{p_{ij}}\sum_{\ell\preceq j}p_{i\ell}+\sum_{j\prec\ell\prec z}\frac{p_{i\ell}}{p_{i\ell}}+\sum_{\ell\succeq z}1\Big)\\
&\leq \frac{\epsilon}{1+\epsilon}\Big(\frac{1}{\epsilon}+\frac{1}{p_{ij}}\sum_{\ell\preceq j}p_{i\ell}+\sum_{j\prec\ell\prec z}\frac{p_{i\ell}}{p_{ij}}+\sum_{\ell\succeq z}1\Big) && (\text{since } p_{i\ell}>p_{ij} \text{ for all } \ell\succ j)\\
&\leq \frac{\epsilon}{1+\epsilon}\Big(\frac{1}{\epsilon}+\frac{t-r_j}{p_{ij}}+|U_i(t)|\Big)
\end{aligned}$$

Hence, in all three cases we have:

$$\begin{aligned}
\frac{\lambda_j}{p_{ij}} &\leq \frac{\epsilon}{1+\epsilon}\Big(\frac{1}{\epsilon}+\frac{t-r_j}{p_{ij}}+|U_i(t)|\Big) = \frac{\epsilon}{1+\epsilon}\Big(\frac{1}{\epsilon}+\frac{t-r_j}{p_{ij}}+\frac{|U_i(t)|}{1+\epsilon}+\frac{\epsilon|U_i(t)|}{1+\epsilon}\Big)\\
&\leq \frac{\epsilon}{1+\epsilon}\Big(\frac{1}{\epsilon}+\frac{t-r_j}{p_{ij}}+\frac{|U_i(t)|}{1+\epsilon}+\frac{|R_i(t)|+1}{1+\epsilon}\Big) && (\text{by Corollary 1})\\
&\leq \frac{1}{1+\epsilon}+\frac{\epsilon}{(1+\epsilon)^2}+\frac{\epsilon}{1+\epsilon}\cdot\frac{t-r_j}{p_{ij}}+\beta_i(t) < 1+\frac{t-r_j}{p_{ij}}+\beta_i(t)
\end{aligned}$$

and the lemma follows. ∎

Using the above results, we next prove Theorem 1.

###### Proof of Theorem 1.

An immediate consequence of the definition of the two rejection rules is that the number of jobs rejected by the algorithm is at most a $2\epsilon$-fraction of the total number of jobs in $J$. By Lemma 4, we know that the proposed definition of the dual variables leads to a feasible dual solution. For the objective value of the dual program, by the definition of $\lambda_j$ and $\tilde{C}_j$, we have that

$$\sum_{j\in J}\lambda_j \;\geq\; \frac{\epsilon}{1+\epsilon}\sum_{j\in J}\big(\tilde{C}_j-r_j\big)$$

Moreover, by the definition of $\beta_i(t)$, $U_i(t)$ and $R_i(t)$, we have that

$$\sum_{i\in M}\int_{0}^{\infty}\beta_i(t)\,dt \;=\; \frac{\epsilon}{(1+\epsilon)^2}\sum_{j\in J}\big(\tilde{C}_j-r_j\big)$$

Then, the dual objective is at least

$$\Big(\frac{\epsilon}{1+\epsilon}\Big)^{2}\sum_{j\in J}\big(\tilde{C}_j-r_j\big)$$

Let $F_j$ be the flow-time of a job $j$ in the schedule constructed by the algorithm; recall that, for a rejected job $j$, $F_j$ corresponds to the time between its release and its rejection. By definition, we have that $F_j \leq \tilde{C}_j - r_j$ for each job $j$. Therefore, taking into account that the objective value of our primal linear program is at most twice the value of an optimal non-preemptive schedule, the theorem follows. ∎

## 3 Minimize Total Weighted Flow Time plus Energy

#### Linear Programming Formulation

Let $\delta_{ij} = w_j/p_{ij}$ be the density of a job $j$ on machine $i$, where $p_{ij}$ denotes its volume of execution. Let $s_{ij}(t)$ be a variable that represents the speed at which job $j$ is executed on machine $i$ at time $t$. Given a constant $\gamma$ that will be defined later, we consider the following convex programming formulation for the problem of minimizing the total weighted flow-time plus energy.

$$\begin{aligned}
\min\; & \sum_{i\in M}\sum_{j\in J}\int_{r_j}^{\infty}s_{ij}(t)\,\delta_{ij}\big(t-r_j+p_{ij}\big)\,dt + \frac{\alpha}{\gamma(\alpha-1)}\sum_{i\in M}\sum_{j\in J}w_j^{\frac{\alpha-1}{\alpha}}\int_{r_j}^{\infty}s_{ij}(t)\,dt + \sum_{i\in M}\int_{0}^{\infty}\Big(\sum_{j\in J}s_{ij}(t)\Big)^{\alpha}dt\\
& \sum_{i\in M}\int_{r_j}^{\infty}\frac{s_{ij}(t)}{p_{ij}}\,dt \geq 1 && \forall j\in J\\
& s_{ij}(t) \geq 0 && \forall i\in M, j\in J, t\geq r_j
\end{aligned}$$

The first and the second (see [7]) terms of the objective correspond to the weighted fractional flow-time, whereas the third term corresponds to the total energy consumed. In order to linearize the convex energy term, we use the following property, which holds for any convex function $f$: $f(x) \geq f(u) + f'(u)(x-u)$. Thus, we can relax the objective function by replacing its last term by

$$\sum_{i\in M}\int_{0}^{\infty}(1-\alpha)\big(u_i(t)\big)^{\alpha}dt \;+\; \sum_{i\in M}\int_{0}^{\infty}\alpha\big(u_i(t)\big)^{\alpha-1}\Big(\sum_{j\in J}s_{ij}(t)\Big)dt$$
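The pointwise inequality behind this relaxation is the tangent-line bound for the convex function $x\mapsto x^{\alpha}$: for all $x, u \geq 0$, $x^{\alpha} \geq (1-\alpha)u^{\alpha} + \alpha u^{\alpha-1}x$, so the replacement never exceeds the true energy term. A quick numeric check (sample points are arbitrary):

```python
# Numeric check of the tangent-line inequality used to linearize the energy
# term: x^alpha >= (1 - alpha) * u^alpha + alpha * u^(alpha - 1) * x.

ALPHA = 3

def tangent_lower_bound(x, u):
    return (1 - ALPHA) * u ** ALPHA + ALPHA * u ** (ALPHA - 1) * x

ok = all(
    x ** ALPHA >= tangent_lower_bound(x, u) - 1e-9
    for x in [0.0, 0.5, 1.0, 2.0, 5.0]
    for u in [0.1, 1.0, 3.0]
)
print(ok)  # True: the linearization is a valid lower bound on x^alpha
```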

Note that the only variables in the above formulation are the $s_{ij}(t)$'s. The quantities $u_i(t)$ are constants that will be defined later. In fact, the $u_i(t)$'s will be treated as dual variables, and they will be defined during the primal-dual procedure. The dual of the above LP is the following:

$$\begin{aligned}
\max\; & \sum_{j\in J}\lambda_j+\sum_{i\in M}\int_{0}^{\infty}(1-\alpha)\big(u_i(t)\big)^{\alpha}dt\\
& \frac{\lambda_j}{p_{ij}} \leq \delta_{ij}\big(t-r_j+p_{ij}\big)+\alpha\big(u_i(t)\big)^{\alpha-1}+\frac{\alpha}{\gamma(\alpha-1)}w_j^{\frac{\alpha-1}{\alpha}} && \forall i\in M, j\in J, t\geq r_j
\end{aligned}$$

#### The Algorithm and Definition of Dual Variables

In this section, we define the scheduling, rejection and dispatching policies of our algorithm. Let $\epsilon > 0$ be an arbitrarily small constant which corresponds to the fraction of the rejected weight. Each job is immediately dispatched to some machine upon its arrival. Let $Q_i(t)$ be the set of pending jobs at time $t$ dispatched to machine $i$, that is, the jobs dispatched to $i$ that have been released but not yet completed or rejected at time $t$. Moreover, let $q_{ij}(t)$ be the remaining volume at time $t$ of a job $j$ which is dispatched to machine $i$.

Let $k$ be the job that is being executed on machine $i$ at time $t$. We consider the jobs in $Q_i(t)$ sorted in non-increasing order of their densities; in case of ties, we consider the jobs in earliest-release-time order. We say that a job $\ell$ precedes (resp. succeeds) a job $j$ if $\ell$ appears before (resp. after) $j$ in this order, and we write $\ell \prec j$ (resp. $\ell \succ j$). We use the symbols $\preceq$ and $\succeq$ to express the fact that $\ell$ may coincide with $j$.

The scheduling policy of the algorithm is the following: whenever a machine $i$ becomes idle at a time $t$, schedule on $i$ the job $j$ that precedes any other job in $Q_i(t)$. The speed of the machine at the start time of $j$ is defined as $\gamma\,W_j^{1/\alpha}$, where $W_j$ denotes the total fractional weight of the pending jobs at that time. Note that the speed of $j$ is defined at the beginning of its execution and does not change until $j$ is completed or rejected. Assuming that no other jobs arrive in the future, we can compute the expected speed of each remaining pending job $\ell$, which is equal to $\gamma\,W_\ell^{1/\alpha}$.

As soon as machine $i$ starts executing a job $k$, we introduce a counter $v_k$ which is initialized to zero. Each time a job $j$ is released during the execution of $k$ and is dispatched to machine $i$, we increase $v_k$ by $w_j$. Then, the rejection policy of the algorithm is the following: interrupt the execution of $k$ and reject it the first time when $v_k > \frac{w_k}{\epsilon}$.

Assume that at the arrival of a new job $j$ at time $r_j$, machine $i$ is executing the job $k$ at speed $s_k$. For each pending job $\ell$, let $W_\ell$ denote the total fractional weight of $\ell$ together with the jobs that succeed it. We denote by $\Delta_{ij}$ the marginal increase in the total weighted flow-time that will occur by following the scheduling and rejection policies of the algorithm, if we decide to dispatch job $j$ to machine $i$. Then, $\Delta_{ij}$ can be bounded as follows (we ignore the increase of the speeds, and hence the decrease of the processing times, of the jobs $\ell \succ j$):

$$\Delta_{ij}\leq\begin{cases}
w_j\Big(\dfrac{q_{ik}(r_j)}{s_k}+\displaystyle\sum_{\ell\preceq j}\dfrac{p_{i\ell}}{\gamma W_\ell^{1/\alpha}}\Big)+\Big(\displaystyle\sum_{\ell\succ j}w_\ell\Big)\dfrac{p_{ij}}{\gamma W_j^{1/\alpha}} & \text{if } v_k+w_j\leq \dfrac{w_k}{\epsilon}\\[2ex]
w_j\Big(\displaystyle\sum_{\ell\preceq j}\dfrac{p_{i\ell}}{\gamma W_\ell^{1/\alpha}}\Big)+\Big(\displaystyle\sum_{\ell\succ j}w_\ell\Big)\dfrac{p_{ij}}{\gamma W_j^{1/\alpha}}-\Big(\displaystyle\sum_{\ell\neq j}w_\ell\Big)\dfrac{q_{ik}(r_j)}{s_k} & \text{otherwise}
\end{cases}$$

where, in both cases, the first positive term corresponds to the weighted flow-time of the job $j$, while the second positive term corresponds to the marginal increase of the weighted flow-time of the other jobs; that is, the completion time of each job with density smaller than the density of $j$ is delayed by $\frac{p_{ij}}{\gamma W_j^{1/\alpha}}$. The negative term in the second case corresponds to the decrease in the weighted flow-time of all jobs in $Q_i(r_j)$ if the job $k$ is rejected. Then, we define a set of variables $\lambda_{ij}$, for all $i$ and $j$, based on the positive terms of the above bound. The dispatching policy is the following: dispatch the job $j$ to a machine $i^{*}\in\arg\min_i \lambda_{ij}$.

We next define the dual variables as well as the quantities $u_i(t)$. Based on the dispatching policy, we set $\lambda_j$ proportional to $\min_i \lambda_{ij}$. For each job $k$, let $D_k$ be the set of the jobs rejected due to the rejection policy between $r_k$ and the time when $k$ is completed or rejected. Let $j_m$ denote the job released at the time when our policy rejects the job $m$. Then, we say that a job is definitively finished only some time after its completion or rejection, in analogy with the previous section. For every job $j$, define the fractional weight of $j$ at time $t$ as $w_j\,\frac{q_{ij}(t)}{p_{ij}}$. Let $F_i(t)$ be the set of jobs that are dispatched to machine $i$ and are already completed or rejected but not yet definitively finished at time $t$. Let $W_i(t)$ be the total fractional weight of jobs that are not definitively finished on machine $i$ at time $t$. We define $u_i(t)$ as a function of $W_i(t)$. Note that when a job is rejected, it is transferred from $Q_i(t)$ to $F_i(t)$, where it remains until the time it is definitively finished.

Consider now two sets of jobs $J_1$ and $J_2$ assigned to machine $i$ such that they are identical except that $J_2$ contains one additional job. Moreover, assume that no job is released after time $t$ in either of the instances. Then, the algorithm is said to be monotone if, at every time, the total fractional weight of the not definitively finished jobs under $J_1$ is at most that under $J_2$, where the jobs in $J_1$ and $J_2$ are scheduled according to the algorithm. The following lemma shows the monotonicity of the algorithm.

###### Lemma 5.

The algorithm is monotone on every machine $i$.

###### Proof.

Let $k$ be the job executing on machine $i$ at time $t$. Observe that $W_i(t)$ changes due to the arrival of a new job. Assume that a new job arrives at time $t_0$. Then, it is sufficient to show that the total fractional weight is non-decreasing in the instance with the additional job at any time $t \geq t_0$. Consider the jobs in $Q_i(t)$. Since all such jobs are scheduled in non-increasing order of their densities, the total fractional weight of the jobs in $Q_i(t)$ is monotone with respect to the arrival of a new job (refer to Lemma 6.1 in [7]).

Now we consider the cases in which $k$ is rejected or not rejected at time $t$. In the case where $k$ is not rejected, the speed of the machine is a constant during the execution of $k$; hence the corresponding contribution to $W_i(t)$ is a constant, and using Lemma 6.1 in [7] the lemma holds for this case. In the case where $k$ is rejected, $W_i(t)$ decreases due to the removal of $k$. Since all jobs in $F_i(t)$ remain in this set for some time after their completion or rejection from $Q_i(t)$, the total fractional weight of jobs in