QuasiPTAS for Scheduling with Precedences using LP Hierarchies
Abstract
A central problem in scheduling is to schedule unit size jobs with precedence constraints on identical machines so as to minimize the makespan. For , it is not even known if the problem is NPhard and this is one of the last open problems from the book of Garey and Johnson.
We show that for fixed and , rounds of SheraliAdams hierarchy applied to a natural LP of the problem provides a approximation algorithm running in quasipolynomial time. This improves over the recent result of Levey and Rothvoss, who used rounds of SheraliAdams in order to get a approximation algorithm with a running time of .
1 Introduction
A central problem in scheduling is the following: suppose we are given unit jobs which have to be processed nonpreemptively on identical machines. There is also a precedence order among the jobs: if , then job has to be completed before can begin. The goal is to find a schedule of the jobs with the minimum makespan, which is defined as the time by which all the jobs have finished.
This problem admits an easy approximation algorithm which was given by Graham [Gra66] in the 60’s and is one of the landmark results in scheduling. This algorithm is known as the listscheduling algorithm and works as follows: at every time , if there is an empty slot on any of the machines, schedule any available job there, where a job is available if it is not yet scheduled and all the jobs which must precede it have already been scheduled. This simple greedy algorithm is essentially the best algorithm for the problem and for almost half a century it was an open problem whether one can get a better approximation algorithm. In fact, this was one of the ten open problems in Schuurman and Woeginger’s influential list of open problems in scheduling [SW99]. It was known since the 70’s that it is NPhard to get an approximation factor better than [LRK78]. Slight improvements were given by Lam and Sethi [LS77] who gave a approximation algorithm, and Gengal and Ranade [GR08] who gave a approximation algorithm for . Finally in 2010, Svensson [Sve10] showed that assuming a variant of Unique Games conjecture due to Bansal and Khot [BK09], for any constant there is no approximation algorithm for the problem.
However, this still leaves open the problem for the important case when is a constant. In fact in practice, usually the number of jobs are very large but there are only a few machines. Surprisingly, for , it is not even known if the problem is NPhard. This is one of the four problems from the book of Garey and Johnson [Gar79] whose computational complexity is still unresolved.
In order to get a better algorithm for the case when is a constant, a natural strategy is to write a linear program (LP), and for this problem, one such LP is the timeindexed LP (1), in which we first make a guess of the makespan and then solve the LP. The value of the LP is the smallest for which the LP is feasible, and the worst case ratio of the optimal makespan and the value of the LP is known as the integrality gap of the LP. It is well known that LP (1) has an integrality gap of at least (see e.g. [LR16]), which suggests that one needs to look at stronger convex relaxations in order to get a better algorithm. Such a stronger convex relaxation can be obtained by applying a few rounds of a hierarchy to the LP, and in this paper, we will use the SheraliAdams hierarchy [SA90]. It is known that just one round of SheraliAdams hierarchy reduces the integrality gap to for and thus, the problem can be solved exactly in this case (credited to Svensson in [Rot13]). Claire Mathieu in [Dag10] asked if one can get a approximation algorithm using rounds of SheraliAdams hierarchy for some function independent of , which would imply a PTAS for the problem when is a constant. This is also Open Problem 1 in Bansal’s recent list of open problems in scheduling [Ban17].
To get some intuition behind why hierarchies should help in this problem, let us first look at the analysis for Graham’s listscheduling algorithm. At the end of this algorithm, the number of time slots which are busy, that is where all the machines have some job scheduled on them, is a lower bound on the optimum. Also, the number of nonbusy time slots is a lower bound on the optimum. This is because there must be a chain of jobs such that one job from this chain is scheduled at each nonbusy time, and the length of any chain in the instance is clearly a lower bound on the optimum. This implies that the makespan given by the algorithm, which is the sum of the number of busy and nonbusy time slots, is a approximation of the optimum makespan, and a slightly more careful argument gives the guarantee of . Now the key idea is that if the instance given to us has a maximum chain length of at most times the optimal makespan, then Graham’s listscheduling algorithm already gives a approximation, and hierarchies provide, via conditionings, a good way to “effectively” reduce the length of the chains in any given instance.
Though the question of whether one can get a approximation algorithm using rounds of SheraliAdams hierarchy is still unresolved, a major breakthrough was made recently by Levey and Rothvoss [LR16], who gave a approximation algorithm using rounds of SheraliAdams. This gives an algorithm with a running time of , which is faster than exponential time but worse than quasipolynomial time.
1.1 Our Result
In this paper, we improve over the result of Levey and Rothvoss [LR16] by giving a approximation algorithm which runs in quasipolynomial time. Formally, we show the following:
Theorem 1.
The natural LP (1) for the problem augmented with rounds of SheraliAdams hierarchy has an integrality gap of at most , where . Moreover, there is a approximation algorithm for this problem running in time .
Throughout the paper, we use the notation to hide factors depending only on and . The natural LP for the problem is the following:
(1)  
Here is our guess on the optimum makespan. In an integral solution, if job is scheduled at time , and otherwise. The first constraint ensures that each job is scheduled at exactly one time and the second constraint ensures that no more than jobs are scheduled at any time. The third constraint ensures that if , then job can only be scheduled at a time strictly later than job .
1.2 Overview of Our Algorithm
Let us first give an overview of the algorithm of LeveyRothvoss [LR16] since our algorithm builds up on it.
Previous approach.
At a highlevel, the algorithm of LeveyRothvoss [LR16] works by constructing a laminar family of intervals, where the topmost level has one interval and each succeeding level is constructed by dividing each interval of the previous level into two equal sized intervals, as shown in the figure below. Thus, there are levels where level contains intervals, each of size for . This laminar family can be thought of as being a binary tree of depth with the interval as the root and the level intervals being vertices at depth .
Each job is first assigned to the smallest interval in this laminar family which fully contains the fractional support of as per the solution of the LP. Let and let us call the top levels in the laminar family as the top levels and the level succeeding it, that is the level as the bottom level. Their algorithm conditions (see Section 2 for the definition of conditioning) roughly times in order to reduce the maximum chain length among the jobs assigned to the top levels. Once the length of the chains in the top levels is reduced, the last of the top levels are discarded from the instance, and the subinstances corresponding to each interval in the bottom level is recursively solved in order to get a partial schedule for all the jobs except those assigned to the top levels. The discarding of the levels is done in order to create a large gap between the top levels and the bottom level. Having such a gap makes it easier to schedule the remaining jobs in the top levels in the gaps of the partial schedule and LeveyRothvoss [LR16] give an elegant algorithm to do this, provided that the maximum chain length among the jobs in the top levels is small. This step increases the makespan by at most a factor, which adds up to a loss of a factor in total over the at most depth of the recursion. Finally, one must also schedule the jobs in the levels which were discarded; to do this without increasing the makespan by more than a factor, it suffices to ensure that these discarded levels contain at most an fraction of the jobs contained in the top levels. Let us call such a set of consecutive levels, which contains at most an fraction of the number of jobs in the levels above it, as a good batch.
Now the reason they had to condition times, which leads to the running time of , comes from the fact that they condition on every interval in the top levels. And this is necessary to ensure that a good batch exists. For example, the number of jobs contained in the levels may be about for all , in which case there is no good batch in the first levels.
Our approach.
To get around the above issue, we observe the following: if, after conditioning on only the top levels, where is a big enough constant, there does not exist a good batch in the top levels, then in fact a fraction of the jobs in the top levels must lie in the last levels, that is, in levels from to . This implies that we can discard the jobs in the first levels by charging them to the jobs in the levels from to , and in doing so we only discard an fraction of the total number of jobs.
Notice that we have only conditioned about times till now as there are these many intervals in the top levels. The next crucial observation is that after deleting the top levels, the subinstances defined by each of the subtrees rooted at the intervals on the level can be solved independently of each other. This means that we can perform conditioning in parallel on each such subinstance, and thus in total, we will condition at most times, since the depth of the recursion is at most the height of the tree.
Now it might also happen that we already find a good batch in the top levels and in this case, we follow a strategy similar to LeveyRothvoss [LR16] by recursing on the bottom intervals to find a partial schedule and fitting the jobs in the top levels in this partial schedule. This step might discard an fraction of the jobs in the top levels. These two cases, one where we recurse because there is no good batch in the top levels and one where we recurse because there is a good batch in the top levels, might interleave in a complicated manner. We show that the number of jobs ever discarded in the algorithm due to each type of recursion is at most an fraction of the total number of jobs, which implies that we can achieve a makespan of .
The above highlevel description skims over a few important issues. One big challenge in the above approach is to ensure that the number of jobs discarded in the cases where we do not find a good batch stays small during the whole algorithm. Even though this is the case in one such recursive call, this might not happen over all the recursive calls taken together and we might end up discarding a constant fraction of the jobs. To get over this obstacle, we carefully control which interval each job is assigned to: if a job is assigned to an interval but after conditioning on some job which is assigned to a level lower than , the fractional support of shrinks to a subinterval of , then we will still keep assigned to , rather than moving it down the laminar family. This ensures that each job is not charged more than once for discarded jobs and thus the total number of discarded jobs is at most . This however slightly changes the way jobs are assigned to intervals and the techniques developed by Levey and Rothvoss [LR16] cannot be immediately applied to fit the jobs of the top levels in the partial schedule of the bottom levels in the case when a good batch exists. To tackle this issue, we will allow each job in the top levels to be scheduled outside of its (current) fractional support as long as it doesn’t violate the precedence constraints with the jobs in the bottom levels. With this modification, we will be able to fit the jobs in the top levels in the partial schedule of the bottom levels without discarding more than an fraction of the top jobs. This implies that in both types of recursions, we only discard an fraction of the jobs.
2 Preliminaries on SheraliAdams Hierarchy
In this section, we state the basic facts about SheraliAdams hierarchy which we will need. We refer the reader to the excellent surveys [Lau03, CT12, Rot13] for a more extensive introduction to hierarchies.
Consider a linear program with variables where for each , . For , the round SheraliAdams lift of this linear program is another linear program with variables for each satisfying , and some additional constraints. We will often denote by for simplicity.
If we think of as the probability that , intuitively the variables should equal the probability that each has , that is we would like to have that . As these constraints are not convex, we can only impose some linear conditions implied by them. In particular, for every constraint of the starting LP, we add, for every such that , a new constraint given by
(2) 
If was indeed true for all , then the above inequality can be succinctly written as , and are thus valid constraints for all solutions.
Observe that an round SheraliAdams lift of an LP with variables and constraints is just another LP with variables and constraints. Letting denote a feasible solution of the round SheraliAdams lift, is also feasible for all rounds of SheraliAdams and in particular is a feasible solution of the starting LP.
Conditioning.
Given a feasible solution of the round SheraliAdams lift and such that , then we can condition on the event to get a feasible solution of the round SheraliAdams lift defined as
The fact that is a feasible solution of the round SheraliAdams lift follows easily from (2). Moreover, satisfies and the following useful property:
Observation 2.
If for some , and we condition on for any , then .
Proof.
One can think of the solution as giving the conditional probability of given . By conditioning on a variable to be , we will mean that we replace the current fractional solution with the fractional solution as in above. Observation 2 implies that conditioning can never increase the support of any variable, or in other words, if the probability that is zero, then the conditional probability that conditioned on , is also zero.
3 Algorithm
Before we describe our algorithm, we first develop some notation. Let denote the value of the LP (1) and let denote the feasible solution of the round SheraliAdams lift of the LP we get after we condition times in the algorithm. We will say that we are in round of the algorithm if we have conditioned times so far. So we will start the algorithm in round with solution , and if we condition in round , we go to round with solution .
For each job , define the fractional support interval of in round as , where is the smallest time for which and is the largest time for which ( and are used to symbolize release time and deadline). In other words, is the minimal interval which fully contains the fractional support of job in . By Observation 2, upon conditioning, the fractional support interval can only shrink, that is .
For each job , we also define a support interval . We will initially set . In later rounds, we will update in such a way that . Intuitively, reflects our knowledge in round of where can be scheduled. Notice that we might schedule outside of the fractional support interval .
A schedule of jobs is called a feasible schedule if it satisfies the precedence constraints among all the jobs and a partial feasible schedule if it schedules some of the jobs and satisfies the precedence constraints among them. In order to get a feasible schedule of all the jobs with a makespan of at most , it suffices to show the following:
Theorem 3.
We can find a partial feasible schedule such that for at most jobs.
Clearly a schedule as in Theorem 3 has makespan at most . Having such a partial feasible schedule, we can easily convert it to a feasible schedule of all the jobs with a makespan of at most : iterate through every job discarded in and find the earliest time by when all the jobs which must precede have either already been scheduled or are as of yet discarded. Create a new time slot between times and containing only job . This increases the makespan by one for every job discarded in .
Laminar Family.
A laminar family of intervals is defined in the following manner. The topmost level, level , has one interval . Each succeeding level is constructed by dividing each interval of the previous level into two equal sized intervals^{1}^{1}1Without loss of generality, is a power of . Otherwise, we can add a few dummy jobs at the end which must succeed all other jobs and which make a power of .. Thus there are levels, where level contains intervals each of size for . This laminar family can be thought of as being a binary tree of depth with the interval as the root, and the level intervals as being vertices at depth .
Let denote the set of intervals at level of the laminar family. For an interval , a subinterval of is any interval of the laminar family, including itself; and will denote the left and right subintervals respectively of in . By the midpoint of , we will mean the right boundary of .
Job is assigned to interval in round if is the smallest interval in the laminar family such that . This assignment of jobs to intervals depends on and will change as and change during the algorithm. Let denote the interval to which is assigned in round of the algorithm. For an interval in the laminar family, let denote the set of jobs assigned to , and let denote the set of jobs assigned to intervals in in round of the algorithm.
Batches.
Let . For , define the batch as
That is, it denotes the set of consecutive levels starting from level till level . Let denote the set of jobs assigned to intervals in batch in round . Batch for is called a good batch with respect to in round if
(3) 
We will omit the “with respect to ” if it is clear from the context that we start the summation in the right hand side of (3) from the first batch in . Similarly, we will omit the “in round ” if is clear from the context.
Algorithm.
We can now describe our algorithm and split its description in two steps for clearer exposition. Let and . The reader can think of these parameters as being and . will always denote the current round of the algorithm, unless otherwise specified. We initialise and for each job , .
Schedule(): ^{2}^{2}2When the algorithm is called on an interval of length , we can just “brute force” by conditioning times to find an exact solution. We avoid writing this explicitly in the algorithm for simplicity.

Step 1: Reducing chain length in the top levels
In this step, we will reduce the length of the chains in each interval in the top levels of the laminar family to at most , for some . This is done by going down the levels, starting from level till level , where is chosen such that

after having conditioned on all the levels from to , is a good batch, or

we have already conditioned on the top levels and found no good batch, in which case we set .
The conditioning on the levels and update of ’s is done as follows. For :

Let and for each , let denote the level of the interval , that is the level to which is assigned at the beginning of this iteration of the loop.

We go over every interval and do the following: if has a chain of length more than , let be the first job in this chain. We condition on lying in .
After every conditioning, update and set for every job as follows:

if , let denote the midpoint of and . If , then we set , and if , then we set . Otherwise, set .

if , set .

That is, the support intervals are set such that if we condition on jobs in level , then the jobs assigned to a level before the conditionings stay assigned to level , and for all other jobs, equals the fractional support interval .


Step 2: Recursion
There are two cases to consider here, depending on which of or took place in the previous step.

If occured, perform a recursion of type .
This step is similar to the algorithm of [LR16]. We discard all the jobs in the good batch . Then for each interval , we recursively call Schedule() to obtain a schedule , which are put together to form a partial feasible schedule for the jobs assigned to a level .
Then we fit the jobs in the top levels, that is the jobs in in the empty slots in . We give more details of how this is done in Section 4.2.1. Some jobs in the top levels will be discarded while doing this.
Call this step a recursion of type . The number of jobs discarded in this step, that is the jobs in batch along with the jobs in the top levels which are discarded, will be referred to as the jobs discarded due to this step. Notice that this does not include the jobs discarded in each recursive call to the intervals in .

If occured, perform a recursion of type .
In this case, we discard all the jobs in . Then for each interval , we recursively call Schedule() to get a schedule which are put together to form a partial feasible schedule for all the jobs assigned to a level .
Call this step a recursion of type 2. The number of jobs discarded in this step, that is the jobs in batches will be referred to as the jobs discarded due to this step. Just like before, this does not include the jobs discarded in each recursive call to the intervals in .
In each type of recursion, we recurse on multiple subinstances defined by intervals of some level. It is important that the recursions on these subinstances are done independently of each other. That is, we pass the same (current) SheraliAdams solution to each recursive call, and conditionings done in one recursive call are independent of conditionings done in any other recursive call, and thus do not affect the fractional solution of any other recursive call.

4 Analysis
In this section, we prove Theorem 3 which will imply Theorem 1. We split the analysis into two parts: in the first part, we give a bound on the number of rounds of SheraliAdams needed in the algorithm. In the second part, we show that we discard at most jobs during the algorithm and schedule all other jobs by time , thus proving Theorem 3.
But first, we need to show that the algorithm is welldefined.
Observation 4.
In step , when we condition on an interval by finding a chain in and conditioning the first job in this chain to lie in , this is possible to do. Moreover, this assigns every job in to a subinterval of .
Proof.
For the first part of the observation, we need to show that , where is the round of the algorithm just before we condition on in . As is assigned to in round , it must be that . The support intervals are updated in a way such that we can only have after we condition on a level below that of . But because we always condition on the levels from top to bottom, we must have . This proves the first part of the observation.
The moreover part follows easily now since every other job satisfies and must start scheduling only after . ∎
4.1 Bounding number of rounds of SheraliAdams
Let denote the number of rounds of SheraliAdams the algorithm uses when run on the instance defined by the subtree rooted at interval of the laminar family. Our goal in this subsection is to show for .
We first give an upper bound on the number of conditionings done in one interval .
Lemma 5.
The algorithm conditions at most times on any interval in step .
Proof.
Let denote the round of the algorithm just before we start to condition in , and let be such that . Each time we condition in , we assign at least jobs in to a subinterval of (by Observation 4).
Also, no job assigned to a level in round moves down the laminar family during conditionings done in . And for all other jobs, they only get assigned to a subinterval. Thus no new job is assigned to while we are conditioning in .
Using and the second constraint of LP (1), there can be at most jobs in total assigned to in round . Thus, the number of times we condition in is at most
∎
Lemma 6.
The algorithm conditions at most times in step of the algorithm.
Proof.
By Lemma 5, we condition at most times per interval. As we condition on the topmost levels and hence on at most intervals, we condition at most times in step . ∎
In step 2 of the algorithm, if we do a recursion of type 1 then we recurse on every interval at level . Otherwise, if we do a recursion of type 2 then we recurse on every interval at level . In either case we recurse on every interval of some level and thus on an interval of size at most . Because the conditionings done in one recursive call are done independently of the conditionings in any other recursive call, the total number of rounds of SheraliAdams we need can be bounded by the following recurrence:
where the base case is , and thus we get
4.2 Bounding number of jobs discarded
In this subsection, we bound the number of jobs discarded in the algorithm and show that it is at most . We will separately bound the number of jobs discarded due to recursions of type and recursions of type and show that each is at most . The former uses a result proved by Levey and Rothvoss [LR16] but which needs to be heavily adapted to our algorithm. The latter uses a simple charging argument.
4.2.1 Jobs discarded due to recursions of type 1.
Suppose we perform a recursion of type when the algorithm is called on the interval of the laminar family. To be consistent with the notation of [LR16], we will call the set of jobs as , the set of jobs as and the jobs in the levels below these as (here we are reindexing the batches such that the first level starts from interval ).
Claim 7.
Proof.
Follows from the fact that is a good batch and (3). ∎
After discarding all the jobs in , the algorithm recursively finds a partial feasible schedule of the jobs in . Let be the set of jobs scheduled by . The algorithm will then attempt to extend to a schedule of the jobs in . We will be able to do this by discarding only a few jobs from . More formally:
Lemma 8.
When the algorithm is called on an interval , we can extend to a feasible schedule of the jobs in where
Before going to the proof of Lemma 8, let us first see how it implies that we discard at most jobs in all recursions of type .
Lemma 9.
Total number of jobs discarded in all recursions of type during the algorithm is at most .
Proof.
Using Claim 7 and Lemma 8, if we perform a recursion of type when the algorithm is called on the interval , the number of jobs discarded is at most
Over all recursions of type , the first term sums up to at most . For any , the second term sums up to over all intervals . As there are at most levels, the second term also sums up to over all recursions of type . ∎
We now come to the proof of Lemma 8. Without loss of generality and for easier notation, we will take . A similar result was proved in [LR16] but we need to adapt their result to our setting before we can use it. Let us first mention what they proved. We need a bit of notation before that.
Let the intervals in be where . For any time interval where , define
In other words, we just extend by one interval from at either end if possible.
For a job , denote the interval it is assigned to by ( is implicitly fixed as we do not condition in this step) and let denote the midpoint of .
The following theorem is proved in [LR16] though not stated in this form. For this reason, we show its proof in Appendix.
Theorem 10.
[LR16] Suppose we are given a feasible schedule of the jobs in and let the maximum chain length among jobs in be . Suppose we are also given for each , an interval such that:

for some .

if for some , then and .

if or for some , then .

and lies in the interior of .
Then, we can extend to a feasible schedule of where
and every is scheduled in the interval
In order to use the above Theorem, we need to find and satisfying the above four conditions. Before that, we first prove an easy bound on the length of a chain in .
Lemma 11.
The maximum chain length in is at most .
Proof.
Each interval for has maximum chain length at most . Thus the maximum chain length in for is at most and hence the maximum chain length in is at most . ∎
We now find and satisfying the conditions of Theorem 10. Recall that and is the smallest interval in the laminar family to satisfy this. Let be the minimum index of the interval in which intersects , that is, . Similarly let be the maximum index of the interval in which intersects . Define
In other words, is obtained by chopping off from the first and the last intervals in intersecting . We set for each . ^{3}^{3}3It is possible that in which case we can take . All these jobs will be discarded in Theorem 10.
Because , conditions 1 and 4 in Theorem 10 follow straightaway. It only remains to prove conditions 2 and 3. We start with a useful lemma first.
Lemma 12.
Given any jobs and such that . Then, in any round of the algorithm, cannot be assigned to a subinterval of , and cannot be assigned to a subinterval of .
Proof.
Suppose to the contrary that in some round , is assigned to a subinterval of . Let be such that .
Observe that cannot have any fractional support in , as then we would have a nonzero fraction of scheduled after has been fully scheduled, which contradicts the feasibility of the LP. Thus it must be the case that .
Let denote the last round of the algorithm when had a nonzero fractional support in . is well defined because of the fact that is assigned to . The conditioning done after which made must have happened on an interval at a level below because remains assigned to level . This means that if in round , was assigned to a level , then also stays assigned to from then on, and thus cannot get assigned to a subinterval of in round . So it must be that in round , was assigned to a level .
But this implies that in round , had a nonzero fractional support in while . This contradicts the feasibility of the LP solution .
The other part of the lemma that cannot be assigned to a subinterval of follows similarly. ∎
We can now show that conditions 2 and 3 of Theorem 10 are satisfied.
Lemma 13.
If such that , then and . Thus, condition 2 is satisfied.
Proof.
Suppose to the contrary that . If has any fractional support to the left of , which also means to the left of , then that is a clear contradiction because then we would have a nonzero fraction of and no amount of scheduled before . So assume that this is not the case.
Because extends to the left of , and hence to the left of the fractional support of , it must be that and . In that case, and thus is assigned to a subinterval of , contradicting Lemma 12.
The proof for follows similarly. Assume otherwise that . If has any fractional support to the right of , which also means to the right of , that is a clear contradiction. So the only possibility is that and that is assigned to a subinterval of , contradicting Lemma 12. ∎
Lemma 14.
For all and , if or , then . Thus, condition 3 is satisfied.
Proof.
Suppose and . We argue that in this case and thus . The argument for the case follows similarly.
Notice that because , must have been assigned to a subinterval of when we recursed on . By Lemma 12, it must be that . Also, cannot have any fractional support to the right of . These two facts imply that the right boundary of , which is either the same as the right boundary of or is at , cannot be to the right of the right boundary of . Thus is to the left of the left boundary of . Hence . ∎
4.2.2 Jobs discarded due to recursions of type .
Let . Recall that in a recursion of type 2, we delete all the jobs assigned to levels to and retain only the later batches. We show below that in such a case, at least a fraction of the jobs in the top batches are in the last batches and thus, by deleting the jobs in the first batches we only delete an fraction of the jobs.
Lemma 15.
If case occurs in step 1 of the algorithm, then
(4) 
Proof.
This implies that when we discard the top batches, we are only discarding at most an fraction of the jobs in the next batches. We can imagine this as putting a charge of on every job in the last batches. Thus the total charge on all the jobs at the end of the algorithm is an upper bound on the number of jobs discarded in recursions of type 2 during the algorithm.
Lemma 16.
For every job , we put a charge on at most once.
Proof.
Fix a job and suppose we put a charge on at least once. When we put a charge on for the first time, then in some recursion of type it must have been assigned to the lowest batches among the top batches. The algorithm will then recurse on every interval at level and thus job is now in the top batches in one of the recursive calls.
Let be such that is assigned to a subinterval of . When we recursively call the algorithm on , the first batches already satisfy the property that any interval in them has maximum chain length at most . Thus in step of the algorithm, we will not condition on any interval in the top batches. This implies that job always stays assigned to the top batches; this is because the assignment of a job to an interval can only change when we condition on an interval at the same level or at a level above that of the job.
Now suppose we put a charge on again. Then we must have once again done a recursion of type within the recursive call to . But is assigned to the topmost