Nonpreemptive Scheduling in a Smart Grid Model and its Implications on Machine Minimization†

† A preliminary version of this paper appeared in Proceedings of the 27th International Symposium on Algorithms and Computation, ISAAC 2016 [31]; some results are improved in this version.
Abstract
We study a scheduling problem arising in demand response management in the smart grid. Consumers send in power requests with a flexible feasible time interval during which their requests can be served. The grid controller, upon receiving power requests, schedules each request within the specified interval. The electricity cost is measured by a convex function of the load in each timeslot. The objective is to schedule all requests with the minimum total electricity cost. Previous work has studied cases where jobs have unit power requirement and unit duration. We extend the study to arbitrary power requirement and duration, which has been shown to be NP-hard. We give the first online algorithm for the general problem, and prove that the problem is fixed-parameter tractable. We also show that the online algorithm is asymptotically optimal when the objective is to minimize the peak load. In addition, we observe that the classical nonpreemptive machine minimization problem is a special case of the smart grid problem with the min-peak objective, and show that we can solve the nonpreemptive machine minimization problem asymptotically optimally.
1 Introduction
We study a scheduling problem arising in “demand response management” in the smart grid [22, 23, 35, 53, 17]. The electrical smart grid is one of the major challenges of the 21st century [48, 15, 47]. The smart grid [18, 38] is a power grid system that makes power generation, distribution and consumption more efficient than the traditional power system through information and communication technologies. Peak demand hours happen only for a short duration, yet they make the existing electrical grid less efficient. It has been noted in [8] that in the US power grid, 10% of all generation assets and 25% of distribution infrastructure are required for less than 400 hours per year, roughly 5% of the time [48]. Demand response management attempts to overcome this problem by shifting users’ demand to off-peak hours in order to reduce the peak load [34, 37, 27, 7, 43, 40]. Research initiatives in the area include [25, 33, 41, 46].
The electricity grid supports demand response mechanisms and achieves energy efficiency by organizing customer consumption of electricity in response to supply conditions. It is demonstrated in [35] that demand response is of remarkable advantage to consumers, utilities, and society. Effective demand load management brings down the cost of operating the grid, as well as of energy generation and distribution [34]. Demand response management is advantageous not only to the supplier but also to the consumers. It is common that the electricity supplier charges according to the generation cost, i.e., the higher the generation cost, the higher the electricity price. Therefore, it is to the consumers’ advantage to reduce electricity consumption when prices are high and hence reduce their electricity bills [43].
The smart grid operator and consumers communicate through smart metering devices [28, 38]. A consumer sends in a power request with the power requirement (cf. height of request), required duration of service (cf. width of request), and the time interval during which this request can be served (giving some flexibility). For example, a consumer may want the dishwasher to operate for one hour during the period from 8am to 11am. The grid operator, upon receiving requests, has to schedule them within their respective time intervals at the minimum energy cost. The load of the grid at each timeslot is the sum of the power requirements of all requests allocated to that timeslot. The electricity cost is modeled by a convex function of the load; in particular, we consider the cost to be the α-th power of the load, where α > 1 is some constant. Typically, α is small [44, 14].
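To make the cost model concrete, here is a minimal sketch (function and tuple representation are our own, not from the paper) that evaluates the total electricity cost of a fixed assignment under the α-th-power cost, with α = 2 by default:

```python
# Hypothetical illustration of the cost model: each request occupies
# `width` consecutive timeslots at `height` power, and the electricity
# cost is the sum over timeslots of load(t) ** alpha.
def electricity_cost(assignments, alpha=2):
    """assignments: list of (start, width, height) tuples."""
    load = {}
    for start, width, height in assignments:
        for t in range(start, start + width):
            load[t] = load.get(t, 0) + height
    return sum(l ** alpha for l in load.values())

# Dishwasher-style example: two one-hour unit requests; running them in
# the same slot costs (1+1)^2 = 4, spreading them out costs 1^2 + 1^2 = 2.
assert electricity_cost([(8, 1, 1), (8, 1, 1)]) == 4
assert electricity_cost([(8, 1, 1), (9, 1, 1)]) == 2
```

The convexity of the cost is what rewards spreading the load, which is the effect demand response management aims for.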
Previous work. Koutsopoulos and Tassiulas [27] have formulated a problem similar to ours where the cost function is piecewise linear. They show that the problem is NP-hard, and their proof can be adapted to show the NP-hardness of the general problem studied in this paper [6]. Burcea et al. [6] gave polynomial time optimal algorithms for the case of unit height (cf. unit power requirement) and unit width (cf. unit duration). Feng et al. [19] have claimed that a simple greedy algorithm is 2-competitive for the unit case. However, as described below in Lemma 4, there is a counterexample showing that the greedy algorithm is at least 3-competitive. This implies that it is still an open question to derive online algorithms for the problem. Salinas et al. [43] considered a multiobjective problem to minimize energy consumption cost and maximize some utility. A closely related problem is to manage the load by changing the price of electricity over time [7, 37, 39, 16]. Heuristics have also been developed for demand side management [34]. Other aspects of smart grid have also been considered, e.g., communication [29, 8, 30, 32] and security [36, 32]. Reviews of smart grid can be found in [22, 23, 35, 53, 17].
The main combinatorial problem we define in this paper is analogous to the traditional load balancing problem [3] and the machine minimization problem [9, 12, 13, 42], but the main differences are that in those problems the objective is the maximum load and jobs have unit height [9, 12, 13, 42]. Minimizing the maximum load has also been considered in the context of smart grid [1, 26, 45, 50, 51], and some of these works further allow reshaping of the jobs [1, 26]. As discussed in Section 2, our problem is more difficult than minimizing the maximum load. Our problem also resembles the dynamic speed scaling problem [2, 49, 5], and our algorithm employs some techniques from that literature.
As discussed below, our problem is closely related to the nonpreemptive machine minimization problem [12, 13], which has been claimed to be solvable optimally in an asymptotic sense in the online setting [42]. We provide an alternative asymptotically optimal competitive algorithm for the nonpreemptive machine minimization problem. More precisely, we show that our algorithm for the smart grid problem can also solve the nonpreemptive machine minimization problem with an asymptotically optimal competitive ratio. A more detailed discussion is in Section 7.
Our contribution. In this paper, we consider a demand response optimization problem of minimizing the total electricity cost and study its relation with other scheduling problems. We propose the first online algorithm for the general problem; its worst-case competitive ratio is polylogarithmic in the max-min ratio of the durations of jobs (Theorem 25 in Section 4). We also give a lower bound for any online algorithm. Interestingly, the ratio depends on the max-min width ratio but not on the max-min height ratio. The algorithm is based on a competitive online algorithm for jobs with uniform duration (Section 3). We also propose competitive online algorithms for some special cases (Section 5). In addition, we show that the problem is fixed-parameter tractable by proposing the first fixed-parameter exact algorithms for the problem, and we derive lower bounds on the running time (Section 6). Table 1 gives a summary of our results. Interestingly, our online algorithm and exact algorithms depend on the variation of the job widths but not on the variation of the job heights.
We further show that our online algorithms and exact algorithms can be adapted to the objective of minimizing the peak electricity cost, as well as to the related problem of nonpreemptive machine minimization. Our online algorithms are asymptotically optimal for both problems (Section 7.1), with competitive ratio logarithmic in the max-min ratio of the job durations. In addition, we show that both problems are fixed-parameter tractable (Section 7.2).
Technically speaking, our online algorithms are based on identifying a relationship with the dynamic speed (voltage) scaling (DSS) problem [49]. The main challenge, even when jobs have uniform width or uniform height, is that in time intervals where the “workload” is low, the optimal DSS schedule may have much lower cost than the optimal GRID schedule, because jobs in DSS schedules can effectively be stretched as flat as possible while jobs in GRID schedules have rigid durations and cannot be stretched. In such cases, it is insufficient to simply compare with the optimal DSS schedule. Therefore, our analysis is divided into two parts: for high workload intervals, we compare with the optimal DSS schedule; and for low workload intervals, we directly compare with the optimal GRID schedule via a lower bound on the total workload over these intervals (Lemmas 6 and 30). For jobs with arbitrary widths, we adopt the natural approach of classification based on job width. We then align the “feasible interval” of each job in a more uniform way so that we can use the results on uniform width (Lemma 20).
In designing exact algorithms, we use interval graphs to represent the jobs and the important notion of maximal cliques to partition the time horizon into disjoint windows. Such a partition usually leads to optimal substructures; nevertheless, nonpreemption makes matters trickier and requires a smart way to handle jobs spanning multiple windows. We describe how to handle such jobs without adding a lot of overhead.
Organization of the paper. We define the problem and provide some basic observations in Section 2. The online algorithms for uniform time duration and arbitrary power requirement are developed in Section 3 and extended to solve the general case in Section 4. A lower bound for online algorithms is provided in Section 4.3. Several special cases regarding uniform power requirement are discussed in Section 5. We design fixed-parameter exact algorithms in Section 6 and derive a lower bound on the running time in Section 6.3. In Section 7, we extend our online and exact algorithms to the objective of maximum load and the related nonpreemptive machine minimization problem. We conclude the paper in Section 8.
Table 1: Summary of our results.

Width      Height                        Ratio
Unit       Arbitrary                     competitive; approximate
Uniform    Arbitrary                     competitive
Arbitrary  Arbitrary                     competitive
Unit       Uniform                       competitive
Arbitrary  Uniform, agreeable deadlines  competitive
2 Definitions and preliminaries
The input. Time is divided into unit-length timeslots and we consider events (release times, deadlines) occurring at integral times. We call the unit time [t, t+1) timeslot t. We denote by J a set of input jobs in which each job j comes with a release time r(j), a deadline d(j), a width w(j) representing the duration required by j, and a height h(j) representing the power required by j. We assume r(j), d(j), w(j), and h(j) are integers. The feasible interval, denoted by I(j), is defined as the interval [r(j), d(j)), and we say that j is available during I(j). We denote by |I| the length of an interval I, i.e., |I| = t2 − t1 where I = [t1, t2). We define the density of j, denoted by den(j), to be w(j)·h(j)/|I(j)|. Roughly speaking, the density signifies the average load required by the job over its feasible interval. We then define the “average” load at any time t as avg(t) = Σ_{j : t ∈ I(j)} den(j). In our analysis, we have to distinguish timeslots with high and low average load. Therefore, for any threshold c, we define T_{>c} and T_{≤c} to be the sets of timeslots where the average load is larger than c and at most c, respectively. Note that T_{>c} and T_{≤c} do not need to be contiguous.
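The definitions above can be sketched as follows, assuming den(j) = w(j)·h(j)/|I(j)| and that the average load at t sums the densities of the jobs available at t (the tuple representation of a job is our own):

```python
# A job is represented as (release, deadline, width, height); the feasible
# interval is [release, deadline) and |I(j)| = deadline - release.
def density(job):
    r, d, w, h = job
    return w * h / (d - r)

def avg_load(jobs, t):
    # sum of densities of jobs available at time t
    return sum(density(j) for j in jobs if j[0] <= t < j[1])

jobs = [(0, 4, 2, 2), (1, 3, 1, 4)]    # (r, d, w, h)
assert density(jobs[0]) == 1.0          # 2*2 / 4
assert density(jobs[1]) == 2.0          # 1*4 / 2
assert avg_load(jobs, 2) == 3.0         # both jobs are available at t = 2
```

The timeslot sets with average load above or at most a threshold then follow directly by filtering on `avg_load`.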
In Section 4, we consider an algorithm that classifies jobs according to their widths. To ease discussion, we let w_max and w_min be the maximum and minimum width over all jobs, respectively. We further define the max-min ratio of widths, denoted by K, to be w_max/w_min. Without loss of generality, we assume that w_min = 1. We say that a job j is in class i if and only if 2^{i−1} ≤ w(j) < 2^i, for any i ≥ 1.
Feasible schedule. A feasible schedule has to assign to each job j a start time st(j), meaning that j runs during [st(j), et(j)), where the end time et(j) = st(j) + w(j), and [st(j), et(j)) ⊆ I(j). Note that this means preemption is not allowed. The load of a schedule S at time t, denoted by load_S(t), is the sum of the heights (power requests) of all jobs running at t, i.e., load_S(t) = Σ_{j : st(j) ≤ t < et(j)} h(j). We drop the subscript and use load(t) when the context is clear. For any algorithm A, we use S_A to denote the schedule of A on J. We denote by OPT the optimal algorithm.
The cost of a schedule S is the sum of the α-th power of the load over all time, for a constant α > 1, i.e., cost(S) = Σ_t (load_S(t))^α. For a set T of timeslots (not necessarily contiguous), we denote by cost(S, T) = Σ_{t ∈ T} (load_S(t))^α the cost restricted to T. Our goal is to find a feasible schedule S such that cost(S) is minimized. We call this the GRID problem.
Online algorithms. In this paper, we consider online algorithms, where the information of a job is only revealed at the time the job is released; the algorithm has to decide which jobs to run at the current time without future information, and decisions made cannot be changed later. Let A be an online algorithm. We say that A is c-competitive if for all input job sets J, we have cost(S_A(J)) ≤ c · cost(S_OPT(J)). In particular, we consider nonpreemptive algorithms, where a job cannot be preempted and resumed or restarted later.
Special input instances. We consider various special input instances. A job j is said to be unit-width (resp. unit-height) if w(j) = 1 (resp. h(j) = 1). A job set is said to be uniform-width (resp. uniform-height) if the widths (resp. heights) of all jobs are the same. A job set is said to have agreeable deadlines if for any two jobs j1 and j2, r(j1) ≤ r(j2) implies d(j1) ≤ d(j2).
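The agreeable-deadlines condition can be checked with a small helper (a sketch following the pairwise definition above; names are our own):

```python
# Agreeable deadlines: whenever one job is released no later than another,
# its deadline is also no later.
def has_agreeable_deadlines(jobs):
    """jobs: list of (release, deadline) pairs."""
    return all(d1 <= d2
               for (r1, d1) in jobs
               for (r2, d2) in jobs
               if r1 <= r2)

assert has_agreeable_deadlines([(0, 3), (1, 4), (2, 4)])
assert not has_agreeable_deadlines([(0, 5), (1, 4)])
```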
Relating to the speed scaling problem. The GRID problem resembles the dynamic speed scaling (DSS) problem [49], and we are going to refer to three algorithms for the DSS problem, namely, the YDS algorithm, which is optimal for the DSS problem, and the online algorithms called AVR and BKP. We first recap the DSS problem and the associated algorithms. In the DSS problem, jobs come with a release time r(j), a deadline d(j), and a work requirement p(j). A processor can run at any speed s and consumes energy at a rate of s^α, for some α > 1. The objective is to complete all jobs by their deadlines using the minimum total energy. The main differences of the DSS problem from the GRID problem include: (i) jobs in DSS can be preempted while preemption is not allowed in our problem; (ii) as the processor speed in DSS can scale, a job can be executed for a varying time duration as long as the total work is completed, while in our problem a job must be executed for a fixed duration given as input; (iii) the work requirement of a job in DSS can be seen as w(j)·h(j) for the corresponding job in GRID.
With the resemblance of the two problems, we make an observation about their optimal algorithms. Let OPT_G and OPT_D be the optimal algorithms for the GRID and the DSS problem, respectively. Given a job set J for the GRID problem, we can convert it into a job set J_D for DSS by keeping the release time and deadline of each job and setting the work requirement of a job in J_D to the product of the width and height of the corresponding job in J. Then we have the following observation.
Observation 1.
Given any schedule S for J, we can convert S into a feasible schedule S_D for J_D such that cost(S_D) = cost(S); implying that cost(S_OPT_D(J_D)) ≤ cost(S_OPT_G(J)).
Proof.
Consider any feasible schedule S for J. At timeslot t, suppose there are k jobs scheduled and their total height is H. The DSS schedule for J_D during timeslot t can be obtained by running the processor at speed H and letting the k jobs timeshare the processor in proportion to their heights. This results in a feasible schedule with the same cost and the observation follows. ∎
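The job-set conversion just described can be sketched as follows (the tuple representation is our own): the release time and deadline are kept, and the DSS work is the width times the height.

```python
# Convert GRID jobs (r, d, w, h) into DSS jobs (r, d, work) with
# work = w * h, as in the conversion described above.
def to_dss(grid_jobs):
    return [(r, d, w * h) for (r, d, w, h) in grid_jobs]

grid = [(0, 4, 2, 3), (1, 3, 1, 5)]
assert to_dss(grid) == [(0, 4, 6), (1, 3, 5)]
```

Observation 1 then says that any GRID schedule induces a DSS schedule of equal cost by running the processor, in each timeslot, at the total height of the jobs scheduled there.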
It is known that the online algorithm AVR for the DSS problem is 2^{α−1}α^α-competitive [49]. Basically, at any time t, AVR runs the processor at a speed which is the sum of the densities of the jobs that are available at t. By Observation 1, we have the following corollary. Note that it is not always possible to convert a feasible schedule for the DSS problem to a feasible schedule for the GRID problem easily. Therefore, the corollary does not immediately solve the GRID problem, but as will be shown it provides a way to analyze algorithms for GRID.
Corollary 2.
For any input J and the corresponding input J_D, cost(S_AVR(J_D)) ≤ 2^{α−1}α^α · cost(S_OPT_G(J)).
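Under the conversion above, AVR's speed rule (the speed at t is the sum of the densities of the available jobs) can be sketched as follows; the tuple representation is our own:

```python
# AVR speed on a DSS instance: sum of densities of jobs available at t,
# where the density of a DSS job is work / (deadline - release).
def avr_speed(dss_jobs, t):
    """dss_jobs: list of (release, deadline, work) tuples."""
    return sum(work / (d - r) for (r, d, work) in dss_jobs if r <= t < d)

jobs = [(0, 4, 6), (1, 3, 5)]
assert avr_speed(jobs, 0) == 1.5          # only the first job is available
assert avr_speed(jobs, 2) == 1.5 + 2.5    # both jobs are available
```

Note that with integral release times and deadlines this speed profile is constant within every integral timeslot, a property used later when AVR serves as a reference algorithm.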
The online algorithm BKP proposed by Bansal et al. [4] for the DSS problem is 2(α/(α−1))^α e^α-competitive with respect to the total cost. Let s_BKP(t) denote the speed of BKP at time t. At any time t, BKP considers interval candidates containing t and, for each candidate, the average (over the interval length) of the total work of jobs released by time t whose feasible intervals fall within the candidate; it chooses the interval with the maximum such average released work and uses e times this average as the speed at t. By Observation 1 we have the following corollary:
Corollary 3.
For any input J and the corresponding input J_D, cost(S_BKP(J_D)) ≤ 2(α/(α−1))^α e^α · cost(S_OPT_G(J)).
Remark: One may consider the nonpreemptive DSS problem as the reference for the GRID problem. However, given a job set J and the corresponding J_D, the optimal cost for nonpreemptive DSS is not necessarily lower than that for GRID. There is an instance showing that the optimal cost of GRID is smaller. The instance contains two jobs: one has release time 0, deadline 3, width 3 and height 1; the other has release time 1, deadline 2, width 1 and height 1. Both jobs can only be scheduled at their release times in GRID, since their widths equal the lengths of their feasible intervals. The optimal cost of GRID is 2 + 2^α. The optimal cost of nonpreemptive DSS is 2 · 2^α: the schedule uses speed 2 and runs the longer job for 1.5 time units and the shorter job for 0.5 time units. The optimal cost of GRID is lower whenever α > 1. Therefore, it is unclear how we may use results on the nonpreemptive DSS problem, and so we stick with the preemptive DSS algorithms.
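For the two-job instance in the remark, the costs can be checked numerically; we fix α = 2 for concreteness (the GRID loads and the speed-2 DSS schedule follow the description above):

```python
# GRID: job A (r=0, d=3, w=3, h=1) runs [0,3); job B (r=1, d=2, w=1, h=1)
# runs [1,2). Loads per slot: 1, 2, 1.
alpha = 2
grid_loads = [1, 1 + 1, 1]
grid_cost = sum(l ** alpha for l in grid_loads)

# Nonpreemptive DSS: speed 2 for 2 time units in total
# (1.5 units for job A's work 3, then 0.5 units for job B's work 1).
dss_cost = (2 ** alpha) * 1.5 + (2 ** alpha) * 0.5

assert grid_cost == 6
assert dss_cost == 8.0
assert grid_cost < dss_cost   # GRID optimum is smaller when alpha > 1
```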
Relating to minimizing maximum cost. The problem of minimizing the maximum cost over time (minmax) has been studied before [50]. We note that there is a polynomial time reduction from the decision version of the minmax problem to that of the minsum problem (the problem we study in this paper) for a large enough α. In particular, one can show that for a sufficiently large α, the maximum load dominates the load in the other timeslots, and so a solution to the minsum problem with such an α yields a solution to the minmax problem.
On the other hand, minimizing the maximum cost does not necessarily minimize the total cost. For example, one can construct an input of three jobs j1, j2, and j3 in which only j3 has flexibility in where it can be scheduled: the schedule that minimizes the maximum load over time incurs a strictly larger total cost than the schedule that minimizes the total cost, whenever α is large enough.
Lower bound on Greedy. In [19], the greedy algorithm that assigns a job to a timeslot with the minimum load is considered. It is claimed in that paper that the greedy algorithm is 2-competitive in the online-list model for the case where the cost of a timeslot is the square of its load (α = 2), jobs are of unit width and height, and the feasible timeslots of a job form a set of (noncontiguous) timeslots to which the job can be assigned. We show a counterexample to this claim: Greedy is at least 3-competitive. This implies that it is still an open question to derive online algorithms for the problem.
Lemma 4.
Greedy is no better than 3-competitive for the online-list model when α = 2.
Proof.
Let m be an arbitrarily large integer. The adversary works in rounds, and all the jobs released are of unit width and unit height. In the i-th round, where 1 ≤ i ≤ m−1, the adversary releases 2^{m−i+1} jobs; and in the m-th round (the final one), the adversary releases two jobs. In the first round, the feasible timeslots of each job released are timeslots 1, …, 2^{m+1}. In the i-th round, where 2 ≤ i ≤ m, the feasible timeslots of each job released are exactly the timeslots to which Greedy assigned jobs in the (i−1)-th round. We claim that the total cost of Greedy approaches 6 · 2^m while the total cost of the optimal algorithm is 2^{m+1} − 2. Therefore, the competitive ratio of Greedy is arbitrarily close to 3 for an arbitrarily large integer m.
We first analyze Greedy. Since Greedy always assigns a job to a timeslot with the minimum load, in the first round Greedy assigns the 2^m jobs to 2^m different timeslots. These timeslots are the feasible timeslots for the jobs in the second round. Using a similar argument, we can see that in each round the number of feasible timeslots is twice the number of jobs released in that round. In addition, before the i-th round, the load of each feasible timeslot is i−1, and Greedy adds a load of 1 to each timeslot to which it assigns a job, making the load become i. Therefore, the total cost of Greedy is Σ_{i=1}^{m−1} 2^{m−i} · i² + 2m², which approaches 6 · 2^m for large m. On the other hand, the optimal algorithm can assign the jobs released in a round to the timeslots that are not feasible timeslots for later rounds, since in every round the number of feasible timeslots is twice the number of jobs released. Therefore, in the optimal schedule, the load of each timeslot is exactly 1 and the total cost is 2^{m+1} − 2. ∎
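The adversary can be simulated directly. The round sizes below (2^{m−i+1} jobs in round i, with 2^{m+1} first-round feasible slots) are a reconstruction consistent with the proof; with α = 2 the measured ratio approaches 3:

```python
# Simulate the adversary against Greedy (min-load feasible slot, ties by
# index). OPT places every job on a slot that never becomes feasible again,
# keeping all loads at 1.
def greedy_vs_opt_ratio(m, alpha=2):
    load = {}
    feasible = list(range(2 ** (m + 1)))     # round-1 feasible slots
    total_jobs = 0
    for i in range(1, m + 1):
        placed = []
        for _ in range(2 ** (m - i + 1)):    # jobs released in round i
            s = min(feasible, key=lambda x: (load.get(x, 0), x))
            load[s] = load.get(s, 0) + 1
            placed.append(s)
        total_jobs += len(placed)
        feasible = placed                    # next round's feasible slots
    greedy_cost = sum(v ** alpha for v in load.values())
    opt_cost = total_jobs                    # OPT keeps every load at 1
    return greedy_cost / opt_cost

assert 2.9 < greedy_vs_opt_ratio(10) < 3.0   # ratio close to 3 already at m=10
```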
3 Online algorithm for uniform width jobs
To handle jobs of arbitrary width and height, we first study the case where jobs have uniform width, i.e., all jobs have the same width w (Section 3.2). The proposed algorithm is based on the further restricted case of unit width, i.e., w = 1 (Section 3.1).
3.1 Unit width and arbitrary height
In this section, we consider jobs with unit width and arbitrary height. We present an online algorithm which makes reference to an arbitrary feasible online algorithm for the DSS problem, denoted by R. In particular, we require that the speed of R remains the same during any integral timeslot, i.e., within [t, t+1) for all integers t. Note that when jobs have integral release times and deadlines, several known algorithms satisfy this criterion, including YDS, AVR, and the modified BKP described later in this section.
Recall from Section 2 how a job set J for the GRID problem is converted to a job set J_D for the DSS problem. We simulate a copy of R on the converted job set J_D and denote the speed used by R at time t as s_R(t). Our algorithm makes reference to s_R(t) but not to the jobs run by R at t.
Algorithm. For each timeslot t, we schedule jobs to start at t until the load at t is at least s_R(t) or until all available jobs have been scheduled. Jobs are chosen in an EDF manner.
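A minimal sketch of this scheduling rule follows; `ref_speed` is a stand-in for the simulated reference speed s_R(t) (with an arbitrary stand-in, feasibility is of course not guaranteed):

```python
import heapq

# At each integral time t, start available jobs in EDF order until the
# load at t reaches the reference speed ref_speed(t).
def schedule_unit_width(jobs, ref_speed, horizon):
    """jobs: list of (release, deadline, height); returns {t: [job indices]}."""
    by_release = sorted(range(len(jobs)), key=lambda i: jobs[i][0])
    pending, out, k = [], {}, 0
    for t in range(horizon):
        while k < len(by_release) and jobs[by_release[k]][0] <= t:
            i = by_release[k]
            heapq.heappush(pending, (jobs[i][1], i))   # EDF priority queue
            k += 1
        load, out[t] = 0, []
        while pending and load < ref_speed(t):
            _, i = heapq.heappop(pending)
            out[t].append(i)
            load += jobs[i][2]
    return out

# One job of height 2, released at 0 with deadline 4; constant reference speed 1.
sched = schedule_unit_width([(0, 4, 2)], lambda t: 1, 4)
assert sched[0] == [0]   # started immediately since the load 0 is below 1
```

Because jobs have unit width, every started job finishes before the next decision point, so the schedule is nonpreemptive by construction.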
Analysis. We note that since the algorithm makes decisions at integral times and jobs have unit width, each job is completed before any further scheduling decision is made. In other words, the resulting schedule is nonpreemptive. To analyze the performance of the algorithm, we first show that it gives a feasible schedule (Lemma 5), and then analyze its competitive ratio (Theorem 7).
Lemma 5.
The algorithm gives a feasible schedule.
Proof.
Let W(t1, t2) denote the total work done by the schedule in the interval [t1, t2). According to the algorithm, the load at each timeslot t is at least s_R(t) unless all jobs available at t have already been scheduled.

Suppose on the contrary that the schedule has a job j missing its deadline at d(j); that is, j is available but not assigned before d(j). Let t0 be the last timeslot before d(j) at which all jobs released at or before t0 have been assigned. For all t ∈ (t0, d(j)), the load at t is at least s_R(t). Also, all jobs with release time at most t0 are finished by t0, and the jobs executed in (t0, d(j)) are those released after t0. Consider the set of jobs with feasible interval completely inside (t0, d(j)) (note that j belongs to this set). Since the algorithm assigns jobs in an EDF manner and j still misses its deadline, the total work of this set exceeds what any feasible schedule, in particular the reference schedule, can complete within (t0, d(j)). This contradicts the fact that the reference schedule is feasible. Hence, the algorithm finishes all jobs before their deadlines. ∎
Let h*(t) be the maximum height among the jobs scheduled at t by the algorithm; we set h*(t) = 0 if the algorithm assigns no job at t. We classify each timeslot t into two types: (i) h*(t) ≤ s_R(t), and (ii) h*(t) > s_R(t). We denote by T1 and T2 the union of all timeslots of Type (i) and Type (ii), respectively. Notice that T1 and T2 can be empty, and together they cover the entire time line. The following lemma bounds the cost of the algorithm in each type of timeslot. Recall that cost(S, T) denotes the cost of the schedule S over the timeslot set T and cost(S) denotes the cost of the entire schedule.
Lemma 6.
The cost of the schedule S satisfies the following properties: (i) cost(S, T1) ≤ 2^α · cost(S_R(J_D)); and (ii) cost(S, T2) ≤ 2^α · cost(S_OPT(J)).
Proof.
(i) By the algorithm, load(t) < s_R(t) + h*(t) ≤ 2·s_R(t) for t ∈ T1. It follows that cost(S, T1) ≤ 2^α Σ_{t ∈ T1} s_R(t)^α ≤ 2^α · cost(S_R(J_D)).

(ii) By convexity, Σ_j h(j)^α ≤ cost(S_OPT(J)). According to the algorithm, load(t) < s_R(t) + h*(t) ≤ 2·h*(t) for t ∈ T2. Hence, cost(S, T2) ≤ 2^α Σ_{t ∈ T2} h*(t)^α ≤ 2^α · cost(S_OPT(J)). ∎
Notice that cost(S) = cost(S, T1) + cost(S, T2) since T1 and T2 have no overlap. Together with Lemma 6 and Observation 1, we obtain the competitive ratio of the algorithm in the following theorem.
Theorem 7.
The algorithm is 2^α(c + 1)-competitive, where c is the competitive ratio of the reference algorithm R.
There are a number of algorithms that can be used as the reference algorithm. The only requirement is that the speed of the reference algorithm within any integral interval [t, t+1) should be at most the load of the resulting online algorithm at the corresponding timeslot t; otherwise, feasibility cannot be guaranteed. Since our online algorithm makes decisions only at integral times t, if the speed of the reference algorithm at some time within (t, t+1) is larger than its speed at t, our online algorithm might not be feasible.
The speeds of the YDS and AVR algorithms only change at release times or deadlines of the jobs, so it is valid to use YDS or AVR as a reference. Note that if we use YDS as the reference, the resulting algorithm is an offline algorithm since YDS is an offline algorithm. Unlike YDS and AVR, the speed of BKP within a timeslot might increase. Hence, we need to modify the BKP algorithm so that it can be used as the reference algorithm. In Lemma 8, we show that for any integral time t, the speed of BKP in (t, t+1) is bounded by a constant factor times the speed at t.
Lemma 8.
For any integral time t and any t' ∈ (t, t+1), s_BKP(t') ≤ c · s_BKP(t) for some constant c, if the release times and deadlines of jobs are integral.
Proof.
Recall the definition of the speed s_BKP(t) of BKP at time t. The proof idea is as follows: consider the interval chosen by BKP at a time t' ∈ (t, t+1); we transform it into another interval that is one of the interval candidates at time t, and show that the speed this candidate induces at t is at least a constant fraction of the speed of BKP at t'.
Assume that at time t' ∈ (t, t+1), BKP chooses the interval I'. We can construct an interval I that is a candidate at time t, has the same right endpoint as I', and is longer than I' by less than one unit. The total work counted for I is at least that counted for I', since no job is released between t and t'. Moreover, any candidate interval must have length at least 1 if the release times and deadlines of the jobs are integral; otherwise, the interval contains no jobs and the speed is 0. It follows that the length of I is at most a constant times the length of I', and therefore the speed induced at t is at least a constant fraction of s_BKP(t'). Hence s_BKP(t') ≤ c · s_BKP(t) for a constant c. ∎
Lemma 8 implies that, although the speed of BKP may change within (t, t+1), it is bounded by c times the speed at t. Hence, we can modify BKP into BKP' as follows: at each integral time t, the speed of BKP' is set to c · s_BKP(t) and kept constant throughout [t, t+1). By the modification, the speed of BKP' remains the same during any integral timeslot and is at least the speed of BKP at every time. As mentioned in Section 2, the BKP algorithm is competitive with a ratio depending only on α. On the other hand, the algorithm can take an offline algorithm, e.g., the optimal algorithm YDS, as reference and return an offline schedule. Therefore, we have the following corollary.
Corollary 9.
By Theorem 7, the algorithm is competitive when AVR or BKP' is referenced, and approximates the optimum when YDS is referenced, with ratios following from the respective ratios of the reference algorithms.
3.2 Uniform width and arbitrary height
In this section, we consider jobs with uniform width w and arbitrary height. The idea of handling uniform width jobs is to treat them as if they were unit width; however, this would mean that jobs may have release times or deadlines at times that are not multiples of w. To remedy this, we define a procedure AlignFI to align the feasible intervals (precisely, the release times and deadlines) to the new time unit of duration w.
Let J be a uniform width job set. We first define the notion of “tight” and “loose” jobs. A job j is said to be tight if |I(j)| < 2w; otherwise, it is loose. Let J_T and J_L be the disjoint subsets of tight and loose jobs of J, respectively. We design different strategies for tight and loose jobs. As will be shown, tight jobs can be handled easily by starting them at their release times. For any loose job, we modify it via Procedure AlignFI such that its release time and deadline are multiples of w. With this alteration, we can treat the jobs as unit width and make scheduling decisions at times that are multiples of w.
Procedure AlignFI. Given the loose job set J_L of uniform width w, we define the procedure AlignFI to transform each loose job j into a job j' with release time and deadline “aligned” as follows. We denote the resulting job set by J'_L.

r(j') = ⌈r(j)/w⌉ · w;

d(j') = ⌊d(j)/w⌋ · w.
Observation 10.
For any job j ∈ J_L and the corresponding j' ∈ J'_L: (i) r(j') ≥ r(j); (ii) d(j') ≤ d(j); (iii) I(j') ⊆ I(j).
Notice that after AlignFI, the release time and deadline of each loose job are aligned to timeslots k·w and k'·w for some integers k and k'. By Observation 10, a feasible schedule of J'_L is also a feasible schedule of J_L. Furthermore, after AlignFI all jobs are released at times that are multiples of w. Hence, the job set J'_L can be treated as a job set with unit width, where each unit has duration w instead of 1.
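Under the reconstruction above (release times rounded up and deadlines rounded down to the nearest multiple of w), AlignFI can be sketched as:

```python
# Align the feasible interval of each loose job to multiples of w:
# release rounded up, deadline rounded down. The aligned interval is
# contained in the original one (Observation 10).
def align_fi(loose_jobs, w):
    """loose_jobs: list of (release, deadline); returns aligned pairs."""
    return [((r + w - 1) // w * w, d // w * w) for (r, d) in loose_jobs]

# A loose job with w = 3: its interval [2, 10) shrinks to [3, 9),
# which still has room for one width-3 execution.
assert align_fi([(2, 10)], 3) == [(3, 9)]
```

Since a loose job has a feasible interval of length at least 2w and each endpoint moves by less than w, the aligned interval is nonempty and, being delimited by multiples of w, has length at least w.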
As a consequence of altering the feasible intervals, we introduce two additional procedures that convert the associated schedules. Given a schedule S for the job set J_L, AlignSch converts it to a schedule S' for the corresponding job set J'_L. The other procedure, FreeSch, takes a schedule S' for J'_L and converts it to a schedule S for J_L.
Transformation AlignSch. AlignSch transforms S into S' by shifting the execution interval of every job j ∈ J_L onto the grid of multiples of w:

the start time st_S'(j') is the multiple of w within distance w of st_S(j) that lies inside I(j');

the end time is et_S'(j') = st_S'(j') + w.
Observation 11.
Consider any schedule S for J_L and the schedule S' for J'_L constructed by AlignSch. The following properties hold: (i) For any job j and the corresponding j', |st_S'(j') − st_S(j)| ≤ w and |et_S'(j') − et_S(j)| ≤ w; (ii) S' is a feasible schedule for J'_L; and (iii) at any time t, load_S'(t) ≤ load_S(t−w) + load_S(t) + load_S(t+w).
Proof.
(ii) By AlignSch, st_S'(j') ≥ r(j'). Also, et_S'(j') ≤ d(j'). Hence the execution interval of each job lies within its aligned feasible interval. That is, S' is a feasible schedule for both J'_L and J_L.

(iii) By (i), the execution interval of j' in S' lies within distance w of the execution interval of j in S. Hence, for any timeslot t and each job j' with t in its execution interval in S', at least one of the timeslots t−w, t, or t+w is in the execution interval of j in S. Therefore we can capture load_S'(t) by load_S(t−w) + load_S(t) + load_S(t+w). ∎
Corollary 12.
Using AlignSch to generate S' given S, we have cost(S') ≤ 3^α · cost(S).
Proof.
By Observation 11 (iii) and the convexity of x^α, load_S'(t)^α ≤ 3^{α−1} (load_S(t−w)^α + load_S(t)^α + load_S(t+w)^α) for every t; summing over all t gives cost(S') ≤ 3^α · cost(S). ∎
Lemma 13.
cost(S_OPT(J'_L)) ≤ 3^α · cost(S_OPT(J_L)).
Proof.
Consider the set J_L of loose jobs with uniform width and the corresponding aligned set J'_L. Given S_OPT(J_L), there exists a schedule S' for J'_L generated by AlignSch. By Corollary 12, cost(S') ≤ 3^α · cost(S_OPT(J_L)). Hence, cost(S_OPT(J'_L)) ≤ cost(S') ≤ 3^α · cost(S_OPT(J_L)). ∎
Transformation FreeSch. FreeSch transforms S' into S by keeping the execution interval of every job:

st_S(j) = st_S'(j');

et_S(j) = et_S'(j').
The feasibility of S can be easily proved by Observation 10.
Lemma 14.
Using FreeSch, we have cost(S) = cost(S').
Proof.
Since the execution intervals in S and S' are the same, load_S(t) = load_S'(t) for all t. Hence cost(S) = cost(S'). ∎
Online algorithm for uniform width. The algorithm takes a job set J with uniform width w as input and schedules the jobs in J as follows. Let J_T be the set of tight jobs in J and J_L be the set of loose jobs in J.

For any tight job j, schedule j to start at r(j).

Loose jobs in J_L are converted to J'_L by AlignFI. For J'_L, we run the algorithm defined in Section 3.1, treating each length-w slot as a unit timeslot and using a reference algorithm as in Section 3.1 (e.g., the modified BKP). Jobs are chosen in an earliest deadline first (EDF) manner.
Note that the decisions of can be made online.
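A high-level sketch of the tight/loose split follows (the threshold 2w is our reconstruction of the tightness condition; names are our own):

```python
# Tight jobs (feasible interval shorter than 2w) are started at release;
# loose jobs are aligned by AlignFI and handled as unit-width jobs on a
# time grid of length w.
def split_tight_loose(jobs, w):
    """jobs: list of (release, deadline); tight iff interval shorter than 2w."""
    tight = [j for j in jobs if j[1] - j[0] < 2 * w]
    loose = [j for j in jobs if j[1] - j[0] >= 2 * w]
    return tight, loose

tight, loose = split_tight_loose([(0, 5), (0, 12)], w=3)
assert tight == [(0, 5)] and loose == [(0, 12)]
```

Both decisions (classifying a job and starting a tight job at its release time) depend only on information known at release, so the combined algorithm remains online.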
Analysis of Algorithm . We analyze the tight jobs and loose jobs separately. We first give an observation.
Observation 15.
For any two job sets J1 ⊆ J2, cost(S_OPT(J1)) ≤ cost(S_OPT(J2)).
Proof.
Assume on the contrary that cost(S_OPT(J1)) > cost(S_OPT(J2)). We can generate a schedule S for J1 by removing from S_OPT(J2) the jobs that are not in J1. It follows that cost(S) ≤ cost(S_OPT(J2)) < cost(S_OPT(J1)), contradicting the fact that S_OPT(J1) is optimal for J1. ∎
In the following analysis we say that an interval is aligned if both of its endpoints are multiples of w. The next lemma proves the competitive ratio separately for J_T and J_L.
Lemma 16.
(i) cost(S_A(J_T)) ≤ 3^α · cost(S_OPT(J_T)); (ii) cost(S_A(J_L)) ≤ 3^α · c · cost(S_OPT(J_L)), where c is the competitive ratio of the algorithm of Section 3.1.
Proof.
(i) We prove that any feasible schedule for the tight jobs is 3^α-competitive. We first extend each job j ∈ J_T to ĵ as follows: r(ĵ) = r(j), d(ĵ) = d(j), h(ĵ) = h(j), and w(ĵ) = |I(j)|. That is, every job has its width equal to the length of its feasible interval. We denote the resulting job set by Ĵ_T. Since no job in Ĵ_T is shiftable, there is only one feasible schedule Ŝ for Ĵ_T and it is optimal. Moreover, the execution interval of each job in any feasible schedule for J_T is contained in the execution interval of the corresponding extended job, so in particular cost(S_A(J_T)) ≤ cost(Ŝ).

For each job in Ĵ_T, the length of its feasible interval is less than 2w. Hence, we can bound the load at any time of Ŝ by the loads of a constant number of timeslots in any feasible schedule S for J_T. Assume that at timeslot t an extended job ĵ is executed; that is, t ∈ I(j), since ĵ is not shiftable. Consider the job j corresponding to ĵ: the execution interval of j in any feasible schedule must contain either timeslot t−w, t, or t+w. Hence we can upper bound the load at any time t: load_Ŝ(t) ≤ load_S(t−w) + load_S(t) + load_S(t+w). Therefore, cost(Ŝ) ≤ 3^α · cost(S), and in particular cost(S_A(J_T)) ≤ 3^α · cost(S_OPT(J_T)).
(ii) For J_L, we apply AlignFI and get J'_L. We then run the algorithm of Section 3.1 and get S', which can be viewed as a schedule for unit width jobs. We get S by FreeSch. Hence, cost(S) = cost(S'). According to Corollary 9,