An EPTAS for Scheduling on Unrelated Machines of Few Different Types¹

¹This work was partially supported by the German Research Foundation (DFG), project JA 612/16-1. The current article is an extended version of the conference article [20].
Abstract
In the classical problem of scheduling on unrelated parallel machines, a set of jobs has to be assigned to a set of machines. The jobs have processing times depending on the machine, and the goal is to minimize the makespan, that is, the maximum machine load. It is well known that this problem is NP-hard and does not allow polynomial time approximation algorithms with approximation guarantees smaller than 3/2, unless P = NP. We consider the case that there is only a constant number K of machine types. Two machines have the same type if all jobs have the same processing time for them. This variant of the problem is strongly NP-hard already for K = 1. We present an efficient polynomial time approximation scheme (EPTAS) for the problem, that is, for any ε > 0 an assignment with makespan at most (1 + ε) times the optimum can be found in time polynomial in the input length, where the exponent is independent of 1/ε. In particular, we achieve a running time of 2^{O(K log(K) (1/ε) log⁴(1/ε))} + poly(|I|), where |I| denotes the input length. Furthermore, we study three other problem variants and present an EPTAS for each of them: the Santa Claus problem, in which the minimum machine load has to be maximized; the case of scheduling on unrelated parallel machines with a constant number of uniform types, where machines of the same type behave like uniformly related machines; and the multidimensional vector scheduling variant of the problem, where both the dimension and the number of machine types are constant. For the Santa Claus problem we achieve the same running time. The results are achieved using mixed integer linear programming and rounding techniques.
1 Introduction
We consider the problem of scheduling jobs on unrelated parallel machines—or unrelated scheduling for short—in which a set J of jobs has to be assigned to a set M of machines. Each job j has a processing time p_{ij} for each machine i, and the goal is to find a schedule σ: J → M minimizing the makespan, i.e. the maximum machine load. The problem is one of the classical scheduling problems studied in approximation. In 1990, Lenstra, Shmoys and Tardos [23] showed that there is no approximation algorithm with an approximation guarantee smaller than 3/2, unless P = NP. Moreover, they presented a 2-approximation, and closing this gap is a rather famous open problem in scheduling theory and approximation (see e.g. [27]).
In particular, we study the special case where there is only a constant number K of machine types. Two machines i and i' have the same type if p_{ij} = p_{i'j} holds for each job j. In many application scenarios this variant is plausible, e.g., when considering computers, which typically only have a very limited number of different types of processing units. We denote the processing time of a job j on a machine of type t by p_{tj} and assume that the input consists of the corresponding K × n processing time matrix together with machine multiplicities m_t for each type t, yielding m = m_1 + ... + m_K machines overall. Note that the case K = 1 is equivalent to the classical problem of scheduling on identical machines. We also study three other variants of the problem:
Santa Claus Problem.
We consider the reverse objective of maximizing the minimum machine load, i.e. min_i Σ_{j: σ(j)=i} p_{ij}. This problem is known as max-min fair allocation or the Santa Claus problem. The intuition behind these names is that the jobs are interpreted as goods (e.g. presents), the machines as players (e.g. children), and the processing times as the values of the goods from the perspective of the different players. Finding an assignment that maximizes the minimum machine load therefore means finding an allocation of the goods that is in some sense fair (making the least happy kid as happy as possible). We will refer to the problem as the Santa Claus problem in the following, but will otherwise stick to the scheduling terminology.
Uniform Types.
Two machines i and i' have the same uniform machine type if there is a scaling factor s such that p_{ij} = s·p_{i'j} for each job j. While jobs behave on machines of the same type like they do on identical machines, they behave on machines of the same uniform type like they do on uniformly related machines. Hence, we may assume that the input consists of job sizes p_{tj} depending on the job j and the uniform type t, together with uniform machine types and machine speeds s_i, such that p_{ij} = p_{tj}/s_i for a machine i of uniform type t.
Vector Scheduling.
In the d-dimensional vector scheduling variant of unrelated scheduling, a processing time vector p_{ij} ∈ Q^d_{≥0} is given for each job j and machine i, and the makespan of a schedule σ is defined as the maximum load any machine receives in any dimension, i.e. max_{i, d'} Σ_{j: σ(j)=i} p_{ij}(d').
Machine types are defined correspondingly. We consider the case that both d and K are constant, and as in the one-dimensional case we may assume that the input consists of processing time vectors depending on types and jobs, together with machine multiplicities.
Basic Concepts.
We study polynomial time approximation algorithms: given an instance I of an optimization problem, an α-approximation A for this problem produces a solution in time poly(|I|), where |I| denotes the input length. For the objective function value A(I) of this solution it is guaranteed that A(I) ≤ α·OPT(I) in the case of a minimization problem, or A(I) ≥ (1/α)·OPT(I) in the case of a maximization problem, where OPT(I) is the value of an optimal solution. We call α the approximation guarantee or rate of the algorithm. In some cases a polynomial time approximation scheme (PTAS) can be achieved, that is, a (1+ε)-approximation for each ε > 0. If for such a family of algorithms the running time can be bounded by f(1/ε)·poly(|I|) for some computable function f, the PTAS is called efficient (EPTAS), and if the running time is polynomial in both |I| and 1/ε, it is called fully polynomial (FPTAS).
Related Work.
It is well known that the unrelated scheduling problem admits an FPTAS in the case that the number of machines is considered constant [16], and we already mentioned the seminal work by Lenstra et al. [23]. Furthermore, the problem of unrelated scheduling with a constant number of machine types is strongly NP-hard, because it is a generalization of the strongly NP-hard problem of scheduling on identical parallel machines. Therefore an FPTAS cannot be hoped for in this case. However, Wiese, Bonifaci and Baruah showed that there is a PTAS [26], and Wiese and Bonifaci [6] gave an extended analysis for the vector scheduling case where both the dimension and K are constant. The authors do not present a detailed analysis of the running time; however, the procedures involve guessing steps whose number of possibilities grows very fast in 1/ε. Gehrke, Jansen, Kraft and Schikowski [13] presented a PTAS with an improved running time for the regular one-dimensional case of unrelated scheduling with a constant number of machine types. On the other hand, Chen, Jansen and Zhang [9] showed that there is no PTAS for scheduling on identical machines with running time 2^{(1/ε)^{1−δ}}·poly(|I|) for any δ > 0, unless the exponential time hypothesis fails. Furthermore, the case K = 2 has been studied: Imreh [17] designed heuristic algorithms with constant rates, and Bleuse et al. [5] presented an algorithm with an improved rate and, moreover, a faster approximation algorithm with a slightly worse rate, for the case that for each job the processing time on the second machine type is at most the one on the first. Moreover, Raravi and Nélis [25] designed a PTAS for the case with two machine types.
Interestingly, unrelated scheduling is in P if both the number of machine types and the number of job types are bounded by a constant. This is implied by a recent result due to Chen, Marx, Ye and Zhang [10] building upon a result by Goemans and Rothvoss [14]. Job types are defined analogously to machine types, i.e., two jobs j and j' have the same type if p_{ij} = p_{ij'} holds for each machine i. In this case the processing time matrix has only a constant number of distinct rows and columns. Note that both the number of machine types and the number of uniform machine types bound the rank of this matrix. However, the case of unrelated scheduling where the matrix has constant rank turns out to be much harder: already for rank 4 the problem is APX-hard [10], and for rank 7 an approximation algorithm with rate smaller than 3/2 can be ruled out, unless P = NP [11]. In a rather recent work, Knop and Koutecký [22] considered the number of machine types as a parameter from the perspective of fixed parameter tractability. They showed that unrelated scheduling is fixed parameter tractable for the parameters K and the maximum processing time p_max, that is, there is an algorithm with running time f(K, p_max)·poly(|I|) for some computable function f that solves the problem to optimality. Chen et al. [10] extended this, showing that unrelated scheduling is fixed parameter tractable for the parameters p_max and the rank of the processing time matrix.
For the case that the number of machines is constant, the Santa Claus problem behaves similarly to the unrelated scheduling problem: there is an FPTAS that is implied by a result due to Woeginger [28]. In the general case, however, no approximation algorithm with a constant approximation guarantee has been found so far. The results by Lenstra et al. [23] can be adapted to show that there is no approximation algorithm with a rate smaller than 2, unless P = NP, and to get an algorithm that finds a solution with value at least OPT(I) − p_max, where p_max denotes the maximum occurring processing time, as was done by Bezáková and Dani [4]. Since p_max could be bigger than OPT(I), this does not provide a (multiplicative) approximation guarantee. Bezáková and Dani also presented a simple (n − m + 1)-approximation, and an improved approximation guarantee of O(√m log³ m) was achieved by Asadpour and Saberi [2]. The best rate so far is O(n^ε) due to Bateni et al. [3] and Chakrabarty et al. [7], with a running time of n^{O(1/ε)} for any ε > 0.
To the best of our knowledge, unrelated scheduling with a constant number of uniform machine types has not been studied before, but we argue that it is a natural extension of the case with a constant number of regular machine types, as well as a sensible special case of general unrelated scheduling, and of the low rank case in particular.
The vector scheduling problem has been studied for the special case of identical machines by Chekuri and Khanna [8]. They achieved a PTAS for the case that d is constant, and an O(log² d)-approximation for the case that d is arbitrary.
Results and Methodology.
The main result of this paper is the following:
Theorem 1.
There is an EPTAS for both scheduling on unrelated parallel machines and the Santa Claus problem with a constant number K of different machine types with running time 2^{O(K log(K) (1/ε) log⁴(1/ε))} + poly(|I|).
First we present a basic version of the EPTAS for unrelated scheduling with a running time doubly exponential in 1/ε. For this EPTAS we use the dual approximation approach by Hochbaum and Shmoys [15] to get a guess T of the optimal makespan. Then, we further simplify the problem via geometric rounding of the processing times. Next, we formulate a mixed integer linear program (MILP) with a constant number of integral variables that encodes a relaxed version of the problem. The MILP can be seen as a generalization of the classical integer linear program of configurations—or configuration ILP—for scheduling on identical parallel machines. We solve it with the algorithm by Lenstra and Kannan [24, 21]. The fractional variables of the MILP have to be rounded, and we achieve this with a flow network, utilizing flow integrality and causing only a small error. With an additional error, the obtained solution can be used to construct a schedule with makespan (1 + O(ε))T. This procedure is described in detail in Section 2. Building upon the basic EPTAS, we achieve the improved running time using techniques by Jansen [18] and by Jansen, Klein and Verschae [19]. The basic idea of these techniques is to make use of existential results about simply structured solutions of integer linear programs (ILPs). In particular, these results can be used to guess the nonzero variables of the MILP, because they sufficiently limit the search space. We show how these techniques can be applied in our case in Section 3. Furthermore, we present efficient approximation schemes for several other problem variants, thereby demonstrating the flexibility of our approach. In particular, we can adapt all our techniques to the Santa Claus problem, yielding the result stated above. This is covered in Section 4, and in Section 5 we show:
Theorem 2.
There is an EPTAS for scheduling on unrelated parallel machines with a constant number of different uniform machine types.
We achieve this with a non-trivial combination of the ideas of Section 2 with techniques for scheduling on uniformly related machines by Jansen [18]. Finally, in Section 6, we revisit the unrelated vector scheduling problem that was studied by Bonifaci and Wiese [6]. We show that an additional rounding step—similar to the one in [8]—together with a slight modification of the MILP and the rounding procedure yields an EPTAS for this problem as well.
Theorem 3.
There is an EPTAS for vector scheduling on unrelated parallel machines with constant dimension d and a constant number K of different machine types.
Note that our results may also be seen as fixed parameter tractable algorithms for the parameters K and 1/ε (and d). In the last section we elaborate on possible directions for future research.
2 Basic EPTAS
In this section we describe a basic EPTAS for unrelated scheduling with a constant number of machine types, with a running time doubly exponential in 1/ε. W.l.o.g. we assume ε < 1. Furthermore, log denotes the logarithm with base 2, and for a positive integer k we write [k] for {1, ..., k}.
First, we simplify the problem via the classical dual approximation concept by Hochbaum and Shmoys [15]. In the simplified version of the problem a target makespan T is given, and the goal is to either output a schedule with makespan at most (1 + αε)T for some constant α, or to correctly report that there is no schedule with makespan T. We can use a polynomial time algorithm for this problem in the design of a PTAS in the following way. First we obtain an upper bound B for the optimal makespan of the instance with B ≤ 2·OPT. This can be done using the 2-approximation by Lenstra et al. [23]. With binary search on the interval [B/2, B] we can find in O(log(1/ε)) iterations a value T for which the mentioned algorithm is successful, while T − εB/2 is rejected. Rejection implies T − εB/2 < OPT, and therefore T < OPT + εB/2 ≤ (1 + ε)OPT. Hence the schedule we obtained for the target makespan T has makespan at most (1 + αε)T ≤ (1 + αε)(1 + ε)OPT. In the following we will always assume that a target makespan T is given. Next we present a brief overview of the algorithm for the simplified problem, followed by a more detailed description and analysis.
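The binary search described above can be sketched as follows. The tester `accepts` is a stand-in for the approximate decision procedure, and all names and concrete numbers here are illustrative, not part of the paper:

```python
# Sketch of the dual-approximation binary search; `accepts` stands in for
# the approximate decision procedure (monotone: True whenever a schedule
# with makespan at most T exists). Names and numbers are illustrative.

def dual_approx_search(accepts, upper, eps):
    """Binary search on [upper/2, upper]: return a value T with accepts(T)
    True that is within eps * upper / 2 of the smallest accepted value."""
    lo, hi = upper / 2.0, float(upper)   # OPT lies in [upper/2, upper]
    step = eps * upper / 2.0
    while hi - lo > step:
        mid = (lo + hi) / 2.0
        if accepts(mid):
            hi = mid                     # a schedule with makespan ~mid exists
        else:
            lo = mid                     # no schedule with makespan mid
    return hi

# Toy usage: the (unknown) optimum is 7, the tester accepts any T >= 7.
T = dual_approx_search(lambda t: t >= 7, upper=14, eps=0.1)
```

Since the interval shrinks by half in each iteration and only a resolution of εB/2 is needed, O(log(1/ε)) calls to the tester suffice.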
Algorithm 4.

Simplify the input via geometric rounding, with an error of εT.

Build the mixed integer linear program MILP(T̄) and solve it with the algorithm by Lenstra and Kannan (with T̄ = (1+ε)T).

If there is no solution, report that there is no schedule with makespan T.

Generate an integral solution for the MILP via a flow network, utilizing flow integrality.

The integral solution is turned into a schedule, with an additional error of εT due to the small jobs.
Simplification of the Input.
We construct a simplified instance Ī with modified processing times p̄_{tj}. If a job j has a processing time bigger than T for a machine type t, we set p̄_{tj} = ∞. We call a job big (for machine type t) if p̄_{tj} > εT, and small otherwise. We perform a geometric rounding step for each job j with p̄_{tj} < ∞, that is, we set p̄_{tj} = εT(1+ε)^x with x = ⌈log_{1+ε}(p_{tj}/(εT))⌉.
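As an illustration of this rounding step, here is a small sketch; the function name and the concrete parameters are ours, not the paper's:

```python
import math

# Sketch of the geometric rounding step under our reading of the paper's
# conventions: times above the target T are set to infinity (the job cannot
# run on that machine type), and every other time is rounded up to the next
# value of the form eps * T * (1 + eps)^x.

def round_time(p, T, eps):
    if p > T:
        return math.inf                      # job never fits on this type
    x = math.ceil(math.log(p / (eps * T), 1 + eps))
    return eps * T * (1 + eps) ** x          # round up on the geometric grid
```

Every rounded time satisfies p ≤ round_time(p, T, eps) ≤ (1 + eps)·p, which is exactly the (1+ε)-factor loss accounted for in Lemma 5.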
Lemma 5.
If there is a schedule with makespan at most T for I, the same schedule has makespan at most (1+ε)T for instance Ī, and any schedule for instance Ī can be turned into a schedule for I without increase in the makespan.
We will search for a schedule with makespan T̄ = (1+ε)T for the rounded instance Ī. We establish some notation for the rounded instance. For any rounded processing time p we denote the set of jobs j with p̄_{tj} = p by J_t(p). Moreover, for each machine type t let S_t and B_t be the sets of small and big rounded processing times. Obviously we have |S_t ∪ B_t| ≤ n. Furthermore, |B_t| is bounded by a constant depending on 1/ε: let N be such that εT(1+ε)^N is the biggest rounded processing time for any machine type. Then εT(1+ε)^{N−1} ≤ T, and therefore |B_t| ≤ N ≤ log_{1+ε}(1/ε) + 1 ∈ O(1/ε log(1/ε)) (using ln(1+ε) ≥ ε/2 for ε ≤ 1).
MILP.
For any set of processing times P we call the indexed vectors C = (C_p)_{p ∈ P} of nonnegative integers configurations (for P). The size of configuration C is given by size(C) = Σ_{p ∈ P} C_p·p. For each t ∈ [K] we consider the set C_t(T̄) of configurations C for the big processing times B_t with size(C) ≤ T̄. Given a schedule σ, we say that a machine i of type t obeys a configuration C if the number of big jobs with processing time p that σ assigns to i is exactly C_p for each p ∈ B_t. Since the processing times in B_t are bigger than εT, we have C_p ≤ T̄/(εT) ≤ 2/ε for each C ∈ C_t(T̄) and p ∈ B_t. Therefore the number of distinct configurations in C_t(T̄) can be bounded by (2/ε + 1)^{|B_t|} = 2^{O(1/ε log²(1/ε))}.
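The set of configurations for a given list of big sizes and a size bound can be enumerated recursively; here is a minimal sketch with toy sizes (not the paper's parameters):

```python
# Sketch of a recursive enumeration of configurations: multiplicity vectors
# over a list of (big) sizes whose total size stays within a bound. The
# sizes 3 and 5 and the bound 10 are toy values.

def configurations(sizes, bound):
    """Yield tuples (c_1, ..., c_k) with sum(c_i * sizes[i]) <= bound."""
    if not sizes:
        yield ()
        return
    p, rest = sizes[0], sizes[1:]
    count = 0
    while count * p <= bound:
        for tail in configurations(rest, bound - count * p):
            yield (count,) + tail
        count += 1

confs = list(configurations([3, 5], 10))
```

Since every big size exceeds εT, each multiplicity is at most 2/ε, which gives the bound on the number of configurations stated above.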
We define a mixed integer linear program in which configurations are chosen integrally and jobs are assigned fractionally to machine types. Note that we will call a solution of a MILP integral if both the integral and fractional variables have integral values. We introduce integral variables x^t_C for each machine type t and configuration C ∈ C_t(T̄), and fractional variables y^t_j for each machine type t and job j. For p̄_{tj} = ∞ we set y^t_j = 0. Besides this, the MILP, which we denote by MILP(T̄), has the following constraints:
(1)  Σ_{C ∈ C_t(T̄)} x^t_C = m_t  for each t ∈ [K]
(2)  Σ_{t ∈ [K]} y^t_j = 1  for each job j
(3)  Σ_{j ∈ J_t(p)} y^t_j ≤ Σ_{C ∈ C_t(T̄)} C_p x^t_C  for each t ∈ [K], p ∈ B_t
(4)  Σ_{C ∈ C_t(T̄)} size(C) x^t_C + Σ_{p ∈ S_t} Σ_{j ∈ J_t(p)} p·y^t_j ≤ m_t T̄  for each t ∈ [K]
With constraint (1), the number of chosen configurations for each machine type equals the number of machines of this type. Due to constraint (2), the y-variables encode a fractional assignment of jobs to machine types. Moreover, for each machine type it is ensured with constraint (3) that the summed-up number of big jobs of each size is at most the number of slots of that size provided by the chosen configurations for the respective machine type. Lastly, (4) guarantees that the overall processing time of the configurations and small jobs assigned to a machine type does not exceed the area m_t T̄. It is easy to see that the MILP models a relaxed version of the problem:
Lemma 6.
If there is a schedule with makespan T, there is a feasible (integral) solution of MILP(T̄); and if there is a feasible integral solution for MILP(T̄), there is a schedule with makespan at most T̄ + εT ≤ (1 + 2ε)T.
Proof.
Let σ be a schedule with makespan T̄ for Ī. Each machine i of type t obeys exactly one configuration C from C_t(T̄), and we set x^t_C to be the number of machines of type t that obey C with respect to σ. Furthermore, for a job j let t_j be the type of machine σ(j). We set y^{t_j}_j = 1 and y^t_j = 0 for t ≠ t_j. It is easy to check that all conditions are fulfilled.
Now let (x, y) be an integral solution of MILP(T̄). Using (2), we can assign the jobs to distinct machine types based on the y-variables. The x-variables can be used to assign configurations to machines such that each machine receives exactly one configuration, using (1). Based on these configurations we can create slots for the big jobs, and for each type we can successively assign all of the big jobs assigned to this type to slots of the size of their processing time, because of (3). Now, for each type, we can iterate through the machines and greedily assign small jobs. When the makespan T̄ is exceeded due to some job, we stop assigning to the current machine and continue with the next. Because of (4), all small jobs can be assigned in this fashion. Since the small jobs have size at most εT, we get a schedule with makespan at most T̄ + εT. ∎
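The greedy placement of the small jobs used in this proof can be sketched as follows, under the assumption that the area condition of constraint (4) holds; all names and numbers are illustrative:

```python
# A minimal sketch of the greedy placement of small jobs within one machine
# type, assuming the area condition holds: the total capacity of the
# machines is at least the total size of the jobs. A machine may overshoot
# its capacity by at most one small job.

def place_small_jobs(capacities, jobs, max_job_size):
    assignment = [[] for _ in capacities]
    loads = [0.0] * len(capacities)
    i = 0
    for p in jobs:
        assert p <= max_job_size       # only small jobs are placed here
        while i < len(capacities) and loads[i] >= capacities[i]:
            i += 1                     # current machine is full enough
        assert i < len(capacities), "area condition violated"
        assignment[i].append(p)
        loads[i] += p
    return assignment, loads
```

Since a machine is only closed once its load reaches its capacity, each machine's final load is below capacity plus one small job, mirroring the T̄ + εT bound.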
We have at most K·2^{O(1/ε log²(1/ε))} integral variables, i.e., a constant number. Therefore MILP(T̄) can be solved in polynomial time, with the following classical result due to Lenstra [24] and Kannan [21]:
Theorem 7.
A mixed integer linear program with d integral variables and encoding size s can be solved in time d^{O(d)}·poly(s).
Rounding.
In this paragraph we describe how a feasible solution (x, y) for MILP(T̄) can be transformed into an integral feasible solution for a second MILP, defined using the same configurations but an accordingly changed right-hand side. This is achieved via a flow network, utilizing flow integrality.
For any (small or big) processing time p, let η_{tp} = ⌈Σ_{j ∈ J_t(p)} y^t_j⌉ be the rounded-up (fractional) number of jobs with processing time p that are assigned to machine type t. Note that for big job sizes p ∈ B_t we have η_{tp} ≤ Σ_{C ∈ C_t(T̄)} C_p x^t_C, because of (3) and because the right-hand side is an integer.
Now we describe the flow network G with source a and sink b. For each job j there is a job node v_j and an edge (a, v_j) with capacity 1 connecting the source and the job node. Moreover, for each machine type t we have a processing time node u_{tp} for each processing time p ∈ S_t ∪ B_t. The processing time nodes are connected to the sink via edges (u_{tp}, b) with capacity η_{tp}. Lastly, for each job j and machine type t with p̄_{tj} < ∞, we have an edge (v_j, u_{t p̄_{tj}}) with capacity 1 connecting the job node with the corresponding processing time node. We outline the construction in Figure 1. Obviously we have O(Kn) nodes and O(Kn) edges.
Lemma 8.
The flow network G has a maximum flow with value n.
Proof.
Since the outgoing edges from a have summed-up capacity n, the value n is a trivial upper bound for the maximum flow. The solution (x, y) for MILP(T̄) can be used to define a flow f with value n, by setting f((a, v_j)) = 1, f((v_j, u_{t p̄_{tj}})) = y^t_j and f((u_{tp}, b)) = Σ_{j ∈ J_t(p)} y^t_j. It is easy to check that f is indeed a feasible flow with value n. ∎
Using the Ford-Fulkerson algorithm, an integral maximum flow f' can be found in time O(Kn²). Due to flow conservation, for each job j there is exactly one machine type t such that f'((v_j, u_{t p̄_{tj}})) = 1, and we set ȳ^t_j = 1 and ȳ^{t'}_j = 0 for t' ≠ t; moreover, we keep the x-variables. Obviously the resulting solution fulfils (1) and (2). Furthermore, (3) is fulfilled because of the capacities η_{tp} and because η_{tp} is at most the number of slots of size p for big job sizes. Utilizing the geometric rounding and the convergence of the geometric series, the additional load caused by rounding up the fractional numbers of small jobs can be bounded by an O(ε)-fraction of the area, and therefore (4) is fulfilled as well, with an accordingly increased right-hand side.
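The rounding step can be illustrated end to end with a small max-flow computation. The Edmonds-Karp variant below and the toy instance are our own simplification (the argument only requires some integral max-flow algorithm, e.g. Ford-Fulkerson):

```python
from collections import deque

# Sketch of the rounding step as a flow problem: source -> job nodes ->
# (type, size) nodes with capacity ceil(fractional load) -> sink. An
# integral max flow of value n then assigns each job to one machine type.

def max_flow(cap, s, t):
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    value = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        queue = deque([s])
        while queue:                      # BFS for an augmenting path
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[t] == -1:
            return value, flow
        b, v = float("inf"), t            # bottleneck along the path
        while v != s:
            u = parent[v]
            b = min(b, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:                     # augment
            u = parent[v]
            flow[u][v] += b
            flow[v][u] -= b
            v = u
        value += b

# Toy network: node 0 = source, 1-3 = jobs, 4-5 = (type, size) nodes, 6 = sink.
N = 7
cap = [[0] * N for _ in range(N)]
for j in (1, 2, 3):
    cap[0][j] = 1          # each job is assigned exactly once
cap[1][4] = cap[1][5] = 1  # job 1 fits two type/size classes
cap[2][4] = 1
cap[3][5] = 1
cap[4][6] = 2              # ceil of the fractional assignment to this class
cap[5][6] = 1
value, flow = max_flow(cap, 0, 6)
```

Because all capacities are integral, the computed flow is integral, and flow conservation forces each job node to send its single unit to exactly one (type, size) node.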
Analysis.
The solution found for MILP(T̄) can thus be turned into an integral solution with slightly increased right-hand sides. As in the proof of Lemma 6, this can easily be turned into a schedule with makespan (1 + O(ε))T. It is easy to see that the running time of the algorithm by Lenstra and Kannan dominates the overall running time. Since MILP(T̄) has O(Kn) constraints and fractional variables, and d = K·2^{O(1/ε log²(1/ε))} integral variables, the running time of the algorithm can be bounded by d^{O(d)}·poly(|I|), which is doubly exponential in 1/ε.
3 Better running time
We improve the running time of the algorithm using techniques that utilize results concerning the existence of solutions for integer linear programs (ILPs) with a certain simple structure. In a first step we can reduce the running time to be only singly exponential in 1/ε with a technique by Jansen [18]. Then we further improve the running time to the one claimed in Theorem 1 with a very recent result by Jansen, Klein and Verschae [19]. Both techniques rely upon the following result about integer cones by Eisenbrand and Shmonin [12].
Theorem 9.
Let X ⊂ Z^m be a finite set of integer vectors and let b ∈ int-cone(X) = {Σ_{x ∈ X} λ_x x | λ_x ∈ Z_{≥0}}. Then there is a subset X̃ ⊆ X such that b ∈ int-cone(X̃) and |X̃| ≤ 2m log(4mM), with M = max_{x ∈ X} ‖x‖_∞.
For the first improvement of the running time, this theorem is used to show:
Corollary 10.
MILP(T̄) has a feasible solution in which, for each machine type, at most O(1/ε log²(1/ε)) of the corresponding integer variables are nonzero.
We get the better running time by guessing the nonzero variables and removing all the others from the MILP. The number of possibilities of choosing k elements out of a set of N elements can be bounded by N^k. Considering all the machine types, we can therefore bound the number of guesses by 2^{O(K (1/ε²) log⁴(1/ε))}. For each guess, the restricted MILP has only K·O(1/ε log²(1/ε)) integral variables, and the running time of the algorithm by Lenstra and Kannan with d integral variables can be bounded by d^{O(d)}·poly(|I|). This yields an overall running time that is singly exponential in 1/ε.
In the following we first prove Corollary 10 and then introduce the technique from [19] to further reduce the running time.
Proof of Corollary 10.
We consider the so-called configuration ILP for scheduling on identical machines. Let m' be a given number of machines, P a set of processing times with multiplicities k_p for each p ∈ P, and C some finite set of configurations for P. The configuration ILP for m', P, k and C is given by:
(5)  Σ_{C ∈ C} C_p z_C = k_p  for each p ∈ P
(6)  Σ_{C ∈ C} z_C = m'
(7)  z_C ∈ Z_{≥0}  for each C ∈ C
The default case that we will consider most of the time is that C is given by a target makespan T that upper bounds the size of the configurations.
Let us assume we had a feasible solution (x, y) for MILP(T̄). For t ∈ [K] and p ∈ B_t we set k_{tp} = Σ_{C ∈ C_t(T̄)} C_p x^t_C. We fix a machine type t. By setting z_C = x^t_C, we get a feasible solution for the configuration ILP given by m_t, B_t, k_t and C_t(T̄). Theorem 9 can be used to show the existence of a solution for the ILP with only a few nonzero variables: let X be the set of column vectors corresponding to the left-hand side of the ILP and b the vector corresponding to the right-hand side. Then b ∈ int-cone(X) holds, and Theorem 9 yields that there is a subset X̃ of X with cardinality at most O(1/ε log²(1/ε)) and b ∈ int-cone(X̃). Therefore there is a solution for the ILP with only O(1/ε log²(1/ε)) many nonzero variables. If we replace the x-variables for type t by such a solution and perform corresponding steps for each machine type, we get a solution that obviously satisfies constraints (1), (2) and (3) of MILP(T̄). The last constraint is also satisfied, because the number of covered big jobs of each size does not change, and therefore the overall size of the configurations does not change either for each machine type. This completes the proof of Corollary 10.
Further Improvement of the Running Time.
The main ingredient of the technique by Jansen et al. [19] is a result about the configuration ILP for the case that there is a target makespan T upper bounding the configuration sizes. Let C(T) be the set of configurations with size at most T. We need some further notation. The support of any vector v of numbers is the set of indices with nonzero entries, i.e., supp(v) = {i | v_i ≠ 0}. A configuration is called simple if the size of its support is at most log(T+1), and complex otherwise. The set of complex configurations from C(T) is denoted by C^c(T).
Theorem 11.
Let the configuration ILP for m', P, k and C(T) have a feasible solution, and let both the makespan T and the processing times from P be integral. Then there is a solution z for the ILP that satisfies the following conditions:

z_C ≤ 1 for each complex configuration C ∈ C^c(T).

|supp(z) ∩ C^c(T)| ∈ O(log(T+1)), i.e., only few complex configurations are used.
We will call such a solution thin. Furthermore they argue:
Remark 12.
There are at most (|P|(T+1))^{O(log(T+1))} simple configurations.
The better running time can be achieved by determining configurations that are equivalent to the complex configurations (via guessing and dynamic programming), guessing the support of the simple configurations, and solving the MILP with few integral variables. The approach is a direct adaptation of the one in [19] for our case. In the following, we explain the additional steps of the modified algorithm in more detail, analyse the running time and present an outline of the complete algorithm.
We have to ensure that the makespan and the processing times are integral and that the makespan is small. After the geometric rounding step we scale the makespan and the processing times such that the rounded makespan T̄ is an integer bounded by poly(1/ε) and the processing times keep the form εT(1+ε)^x. Next we apply a second rounding step for the big processing times, setting p̃_{tj} = ⌊p̄_{tj}⌋, and denote the set of these processing times by B̃_t. Obviously we have |B̃_t| ≤ |B_t|. We denote the corresponding instance by Ĩ. Since for a schedule with makespan (1+ε)T for instance Ī there are at most 2/ε big jobs on any machine, we get:
Lemma 13.
If there is a schedule with makespan at most (1+ε)T for Ī, the same schedule has makespan at most (1+ε)T for instance Ĩ, and any schedule for instance Ĩ can be turned into a schedule for Ī with an increase of at most O(ε)T in the makespan.
We set T̃ = ⌊T̄⌋, and for each machine type t we consider the set C_t(T̃) of configurations for B̃_t with size at most T̃. Rounding down ensures integrality and causes no problems, because all big processing times are integral. Furthermore, let C^c_t(T̃) and C^s_t(T̃) be the subsets of complex and simple configurations. Due to Remark 12 and T̃ ≤ poly(1/ε) we have:

(8)  |C^s_t(T̃)| ≤ 2^{O(log²(1/ε))}
Due to Theorem 11 (using the same considerations concerning configuration ILPs as in the last paragraph), we get that there is a solution for the MILP (adjusted to this case) that uses for each machine type at most O(1/ε log²(1/ε)) many configurations from C^s_t(T̃). Moreover, only few complex configurations are used, and each of them is used only once. Since each configuration corresponds to at most 2/ε jobs, there are also only few jobs for each type corresponding to complex configurations. Hence, we can determine the number m^c_t of complex configurations for machine type t, along with the numbers n_{tp} of jobs with processing time p that are covered by a complex configuration, via guessing, with 2^{O(1/ε log²(1/ε))} many possibilities overall. Now we can use a dynamic program to determine configurations (with multiplicities) that are equivalent to the complex configurations, in the sense that their size is bounded by T̃, their summed-up number is m^c_t, and they cover exactly n_{tp} jobs with processing time p. The dynamic program iterates through q = 1, ..., m^c_t, determining indexed vectors of nonnegative integers (b_p)_{p ∈ B̃_t} with b_p ≤ n_{tp}. A vector computed at step q encodes that b_p jobs of size p can be covered by q configurations from C_t(T̃). We denote the set of configurations the program computes by C̃_t and the multiplicities by z_C for C ∈ C̃_t. It is easy to see that the running time of such a program can be bounded by 2^{O(1/ε log²(1/ε))}. This yields a running time of K·2^{O(1/ε log²(1/ε))} when considering all the machine types.
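The dynamic program can be sketched as a reachability computation over covering vectors. The parameters below are toy values, not the paper's bounds:

```python
# Hedged sketch of the dynamic program that replaces the (guessed) complex
# configurations: it checks which job vectors are coverable by exactly q
# configurations of bounded size. All parameters below are toy values.

def coverable(sizes, bound, q, target):
    """Can `target` (number of jobs per big size) be covered by exactly
    q configurations, each of total size at most `bound`?"""
    def confs(rest, cap):
        if not rest:
            yield ()
            return
        p, tail = rest[0], rest[1:]
        for c in range(cap // p + 1):
            for suffix in confs(tail, cap - c * p):
                yield (c,) + suffix
    all_confs = list(confs(tuple(sizes), bound))
    states = {(0,) * len(sizes)}          # jobs covered so far, per size
    for _ in range(q):
        nxt = set()
        for state in states:
            for conf in all_confs:
                cand = tuple(a + b for a, b in zip(state, conf))
                if all(x <= t for x, t in zip(cand, target)):
                    nxt.add(cand)         # never cover more than needed
        states = nxt
    return tuple(target) in states
```

In the actual algorithm one would additionally record, for each reachable state, which configuration produced it, in order to recover the multiplicities of the equivalent configurations.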
Having determined configurations that are equivalent to the complex configurations, we may just guess the simple configurations. For each machine type there are at most 2^{O(log²(1/ε))} simple configurations, and the number of configurations we need is bounded by O(1/ε log²(1/ε)). Therefore, the number of needed guesses is bounded by 2^{O(K (1/ε) log⁴(1/ε))}. Now we can solve a modified version of MILP(T̃) in which, for the configurations determined by the dynamic program, the x-variables are fixed accordingly, and otherwise only variables corresponding to the guessed simple configurations are used. The running time for the algorithm by Lenstra and Kannan can again be bounded suitably. Thus we get an overall running time of 2^{O(K (1/ε) log⁴(1/ε))}·poly(|I|). Distinguishing the cases whether K is small or large relative to this exponential term yields the claimed running time of 2^{O(K log(K) (1/ε) log⁴(1/ε))} + poly(|I|).
Hence, the proof of the part of Theorem 1 concerning unrelated scheduling is complete. We conclude this section with a summary of the complete algorithm.
Algorithm 14.

Simplify the input via scaling, geometric rounding, and a second rounding step for the big jobs, with an error of O(ε)T. We now have integral processing times and a makespan bounded by poly(1/ε).

Guess the number of machines with a complex configuration for each machine type, along with the number of jobs with processing time p covered by complex configurations, for each big processing time p.

For each machine type determine via dynamic programming configurations that are equivalent to the complex configurations.

Guess the simple configurations used in a thin solution.

If there is no solution for any of the guesses, report that there is no schedule with makespan T.

Generate an integral solution for the MILP via a flow network, utilizing flow integrality.

With an additional error of εT due to the small jobs, the integral solution is turned into a schedule.
4 The Santa Claus Problem
Adapting the result for unrelated scheduling we achieve an EPTAS for the Santa Claus problem. It is based on the basic EPTAS together with the second running time improvement. In the following we show the needed adjustments.
Preliminaries.
W.l.o.g. we present a 1/(1+ε)-approximation instead of a (1−ε)-approximation. Moreover, we assume ε < 1 and n ≥ m, because otherwise the problem is trivial.
The dual approximation method can be applied in this case as well. However, since we have no approximation algorithm with a constant rate, the binary search is slightly more expensive. Still, we can use, for example, the algorithm by Bezáková and Dani [4] to find a bound B for the optimal minimum machine load OPT with B ≤ OPT ≤ (n − m + 1)B. With binary search we can find in O(log(n/ε)) many steps a guess T for the optimal minimum machine load such that OPT/(1+ε) ≤ T ≤ OPT. It suffices to find a procedure that, given an instance and a guess T, outputs a solution with objective value at least T/(1 + αε) for some constant α.
Concerning the simplification of the input, we first scale the target value and the processing times suitably. Then we set the processing times that are bigger than T equal to T. Next we round the processing times down via geometric rounding: we set p̄_{tj} = εT(1+ε)^x with x = ⌊log_{1+ε}(p_{tj}/(εT))⌋. The number of big processing times for any machine type is again bounded by O(1/ε log(1/ε)). For the big jobs we apply the second rounding step, setting p̃_{tj} = ⌊p̄_{tj}⌋, and denote the resulting big processing times by B̃_t, the corresponding instance by Ĩ, and the occurring small processing times by S̃_t. The analogue of Lemma 13 holds, i.e., at the cost of an O(ε)-fraction of T we may search for a solution for the rounded instance Ĩ. We set T̃ = ⌊T⌋.
MILP.
In the Santa Claus problem it makes sense to use configurations of size bigger than T̃, because a machine may be covered using big jobs whose summed-up size exceeds the target. It suffices to consider configurations with size at most 2T̃, and for each machine type we denote the corresponding set of configurations by C_t(2T̃). Again we can bound |C_t(2T̃)| by 2^{O(1/ε log²(1/ε))}. The MILP has integral variables x^t_C for each such configuration, and fractional ones y^t_j like before. The constraints (1) and (2) are adapted, changing only the set of configurations, and in constraint (3) additionally the left-hand side has to be at least as big as the right-hand side in this case. The last constraint (4) has to be changed more. For this we partition C_t(2T̃) into the set C^b_t of big configurations with size bigger than T̃ and the set C^s_t of small configurations with size at most T̃. The changed constraint has the following form:
(9)  Σ_{C ∈ C^s_t} size(C) x^t_C + Σ_{p ∈ S̃_t} Σ_{j ∈ J_t(p)} p·y^t_j ≥ T̃ · Σ_{C ∈ C^s_t} x^t_C  for each t ∈ [K]
We denote the resulting MILP by MILP(T̃) and get the analogue of Lemma 6:
Lemma 15.
If there is a schedule with minimum machine load T̃, there is a feasible (integral) solution of MILP(T̃); and if there is a feasible integral solution for MILP(T̃), there is a schedule with minimum machine load at least T̃ − εT.
Proof.
Let σ be a schedule with minimum machine load T̃. We first consider only the machines for which the received load due to big jobs is at most 2T̃. These machines obey exactly one configuration from C_t(2T̃), and we set the corresponding integral variables like before. The rest of the integral variables we initially set to 0. Now consider a machine of type t that receives more than 2T̃ load due to big jobs. We can successively remove a biggest job from the set of big jobs assigned to the machine until we reach a subset with summed-up processing time at most 2T̃ and bigger than T̃. This set corresponds to a big configuration C, and we increment the variable x^t_C. The fractional variables are set like in the unrelated scheduling case, and it is easy to verify that all constraints are satisfied.
Now let (x, y) be an integral solution of MILP(T̃). Again we can assign the jobs to distinct machine types based on the y-variables, and the configurations to machines based on the x-variables, such that each machine receives at most one configuration. Based on these configurations we can create slots for the big jobs, and for each type we can successively assign big jobs until all slots are filled. Now we can, for each type, iterate through the machines that received small configurations and greedily assign small jobs. When the target load T̃ would be exceeded due to some job, we stop assigning to the current machine (not adding the current job) and continue with the next machine. Because of (9), we can cover all of these machines in this way. Since the small jobs have size at most εT, we get a schedule with minimum machine load at least T̃ − εT. There may be some remaining jobs, and these can be assigned arbitrarily. ∎
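The greedy covering used in this proof can be sketched as follows; all names and numbers are illustrative:

```python
# Sketch of the covering step in the Santa Claus case: machines are filled
# with small jobs, and we move to the next machine as soon as adding the
# current job would push the load beyond the target. Every machine closed
# this way has load greater than the target minus the largest small job.

def cover_machines(num_machines, target, jobs):
    loads = [0.0] * num_machines
    leftover = []
    i = 0
    for p in jobs:
        if i >= num_machines:
            leftover.append(p)       # all machines covered: assign anywhere
        elif loads[i] + p > target:
            i += 1                   # machine i is covered (load > target - p)
            if i < num_machines:
                loads[i] += p        # current job starts the next machine
            else:
                leftover.append(p)
        else:
            loads[i] += p
    return loads, leftover

loads, leftover = cover_machines(2, 10, [4, 4, 4, 4, 4, 4])
```

Each closed machine ends with load in (target − p_small_max, target], which is the T̃ − εT guarantee of the lemma when the small jobs have size at most εT.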
To solve the MILP we adapt the techniques by Jansen et al. [19], which is slightly more complicated for the modified MILP. Unlike in the previous section, in order to get a thin solution that still fulfils (9), we have to consider big and small configurations separately for each machine type. Note that (9) remains fulfilled for a changed solution of the MILP if the summed-up size of the small configurations and the summed-up number of the big configurations are not changed. Given a solution (x, y) for the MILP and a machine type t, we derive two configuration ILPs: the first is given by the small configurations together with the jobs they cover, and we call it the small ILP. The second is given by