Structural Parameters for Scheduling with Assignment Restrictions

This work was partially supported by the DAAD (Deutscher Akademischer Austauschdienst) and by the German Research Foundation (DFG) project JA 612/15-1.


Abstract

We consider scheduling on identical and unrelated parallel machines with job assignment restrictions. These problems are NP-hard and they do not admit polynomial time approximation algorithms with approximation ratios smaller than 1.5 unless P=NP. However, if we impose limitations on the sets of machines that can process a job, the problem sometimes becomes easier in the sense that algorithms with approximation ratios better than 1.5 exist. We introduce three graphs, based on the assignment restrictions, and study the computational complexity of the scheduling problem with respect to structural properties of these graphs, in particular their tree- and rankwidth. We identify cases that admit polynomial time approximation schemes or FPT algorithms, generalizing and extending previous results in this area.

1 Introduction

We consider the problem of makespan minimization for scheduling on unrelated parallel machines. In this problem a set J of jobs has to be assigned to a set M of machines via a schedule σ: J → M. A job j has a processing time p(i,j) for every machine i, and the goal is to minimize the makespan, i.e., the maximum total processing time assigned to any machine. In the three-field notation this problem is denoted by R||Cmax. On some machines a job might have a very high, or even infinite, processing time, so it should never be processed on these machines. This amounts to assignment restrictions in which for every job j there is a subset M(j) of machines on which it may be processed. An important special case of R||Cmax is given if the machines are identical in the sense that each job j has the same processing time p(j) on all the machines on which it may be processed, i.e., p(i,j) ∈ {p(j), ∞}. This problem is sometimes called restricted assignment and is denoted as P|M(j)|Cmax in the three-field notation.

We study versions of R||Cmax and P|M(j)|Cmax where the restrictions are in some sense well structured. In particular we consider three different graphs that are defined based on the job assignment restrictions and study how structural properties of these graphs affect the computational complexity of the corresponding scheduling problems. We briefly describe the graphs. In the primal graph the vertices are the jobs, and two vertices are connected by an edge iff there is a machine on which both of the jobs can be processed. In the dual graph, on the other hand, the machines are the vertices, and two of them are adjacent iff there is a job that can be processed by both machines. Lastly, we consider the incidence graph. This is a bipartite graph in which both the jobs and the machines are vertices, and a job j is adjacent to a machine i if i ∈ M(j). In Figure 1 an example of each graph is given. These graphs have also been studied in the context of constraint satisfaction (see e.g. [22] or [20]) and we adapted them for machine scheduling.
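To make these definitions concrete, the following Python sketch (an illustration of ours, not part of the formal development) builds the primal, dual and incidence graph directly from the sets M(j); the dictionary-based input and the vertex tags in the incidence graph are implementation choices.

```python
from itertools import combinations

def restriction_graphs(restrictions):
    """Build the primal, dual and incidence graphs from assignment restrictions.

    restrictions[j] = M(j), the set of machines on which job j may be processed.
    The graphs are returned as edge sets; vertices of the (bipartite) incidence
    graph are tagged with 'job' and 'machine' to keep the two sides apart.
    """
    jobs = list(restrictions)
    machines = sorted({i for M in restrictions.values() for i in M})

    # primal graph: jobs are vertices, edge iff some machine can process both
    primal = {(j, k) for j, k in combinations(jobs, 2)
              if restrictions[j] & restrictions[k]}

    # dual graph: machines are vertices, edge iff some job fits on both
    dual = {(i, h) for i, h in combinations(machines, 2)
            if any(i in M and h in M for M in restrictions.values())}

    # incidence graph: job j is adjacent to machine i iff i is in M(j)
    incidence = {(('job', j), ('machine', i))
                 for j, M in restrictions.items() for i in M}
    return primal, dual, incidence

# Example: job 'a' may run on machines 1 and 2, job 'b' only on 2, job 'c' only on 3.
print(restriction_graphs({'a': {1, 2}, 'b': {2}, 'c': {3}}))
```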

We consider the above scheduling problems in the contexts of parameterized and approximation algorithms. An α-approximation for a minimization problem computes, for a given instance I, a solution of value at most α·OPT(I), where OPT(I) is the optimal value for I. A family of algorithms consisting of (1+ε)-approximations for each ε > 0 with running times polynomial in the input length (and 1/ε) is called a (fully) polynomial time approximation scheme (F)PTAS. Let k be some parameter defined for a given problem, and let k(I) be its value for instance I. The problem is said to be fixed-parameter tractable (FPT) for k if there is an algorithm that, given I and k(I), solves I in time f(k(I))·|I|^c, where c is a constant, f is any computable function, and |I| is the input length. This definition can easily be extended to multiple parameters.

Related work.

[Figure: graph_example_01.pdf]

Figure 1: Primal, dual and incidence graph for an example instance.

In 1990 Lenstra, Shmoys and Tardos [15] showed, in a seminal work, that there is a 2-approximation for R||Cmax and that the problem cannot be approximated with a ratio better than 1.5 unless P=NP. Both bounds also hold for the restricted assignment problem and have not been substantially improved since that time. The case where the number of machines is constant is weakly NP-hard, and there is an FPTAS for it [11]. In 2012 Svensson [21] presented an interesting result for the restricted assignment problem: the optimal makespan can be estimated within a factor of 33/17 + ε in polynomial time, although no polynomial time algorithm computing a corresponding schedule is known. A special case of the restricted assignment problem called graph balancing was studied by Ebenlendr et al. [6]. In this variant each job can be processed by at most two machines, and therefore an instance can be seen as a (multi-)graph in which the machines are vertices and the jobs are edges. They presented a 1.75-approximation for this problem and also showed that the 1.5-inapproximability result remains true for it. Lee et al. [14] studied the version of graph balancing where (in our notation) the dual graph is a tree and showed that there is an FPTAS for it. Moreover, the special case of graph balancing where the graph is simple has been considered. For this problem Asahiro et al. [2] presented, among other things, a pseudo-polynomial time algorithm for the case of graphs with bounded treewidth. For certain cases of the restricted assignment problem with well-structured job assignment restrictions, PTAS results are known: in particular for the path- and tree-hierarchical cases ([18] and [7]), in which the machines can be arranged in a path or tree and each job can only be processed on a subpath starting at the leftmost machine or at the root machine, respectively, and for the nested case ([17]), in which for each pair of jobs the sets of machines on which they may be processed are either disjoint or one is contained in the other.

The study of these problems from the FPT perspective has started only recently. Mnich and Wiese [16] showed that R||Cmax is FPT for the pair of parameters consisting of the number of machines and the number of distinct processing times. The problem is also FPT for the parameter pair consisting of the maximum processing time and the number of machine types [12]. Two machines have the same type if each job has the same processing time on both of them. Furthermore, Szeider [23] showed that graph balancing on simple graphs with unary encoding of the processing times is not FPT for the parameter treewidth under the usual complexity assumptions.

Results.

In this paper we present a graph theoretical viewpoint for the study of scheduling problems with job assignment restrictions that we believe to be of independent interest. Using this approach we identify structural properties for which the problems admit approximation schemes or FPT algorithms, generalizing and extending previous results in this area. The results are based on dynamic programming utilizing tree and branch decompositions of the respective graphs. For the approximation schemes the dynamic programs are combined with suitable rounding approaches.

Tree and branch decompositions are associated with certain structural width parameters. We consider two of them: treewidth and rankwidth. In the following we denote the treewidth of the primal, dual and incidence graph of an instance I by tw_p(I), tw_d(I) and tw_i(I), respectively. For the definitions of these concepts we refer to Section 2.

We now describe our results in more detail. Let J(i) denote the set of jobs that machine i can process. In the context of parameterized algorithms we show the following.

Theorem 1.

is FPT for the parameter .

Theorem 2.

is FPT for the pair of parameters with and .

Note that with constant remains NP-hard [6]. In the context of approximation we get:

Theorem 3.

is weakly NP-hard, if or is constant and there is an FPTAS for both of these cases.

The hardness is due to the hardness of scheduling on two identical parallel machines (P2||Cmax). The result for the dual graph is a generalization of the result in [14] and resolves cases that were marked as open in that paper. All results mentioned so far are discussed in Section 3. In the following section we consider the rankwidth:

Theorem 4.

There is a PTAS for instances of the restricted assignment problem where the rankwidth of the incidence graph is bounded by a constant.

It can be shown that instances of the restricted assignment problem with path- or tree-hierarchical or nested restrictions are special cases of the case in which the incidence graph is a bicograph. Bicographs are known to have bounded rankwidth (see [9]), and a suitable branch decomposition can be found very easily. Therefore we generalize and unify the known PTAS results for the restricted assignment problem with structured job assignment restrictions.

2 Preliminaries

In the following, I will always denote an instance of R||Cmax or P|M(j)|Cmax, and most of the time we will assume that it is feasible. We call an instance feasible if M(j) is non-empty for every job j. A schedule σ is feasible if every job is assigned to a machine on which it may be processed, i.e., σ(j) ∈ M(j) for each job j. For a subset of jobs and a subset of machines we consider the subinstance of I induced by them. Furthermore, for a set of schedules for I we consider the minimum makespan among the schedules in the set, and we write OPT(I) for the minimum makespan over all feasible schedules for I. Note that there are no schedules for instances that have jobs but no machines. On the other hand, if I is an instance without jobs, we consider the empty function a feasible schedule (with makespan 0), and therefore OPT(I) = 0 in that case.

Dynamic programs for R||Cmax.

We sketch two basic dynamic programs that will be needed as subprocedures in the following. The first one is based on iterating through the machines. Let for and , assuming . Then it is easy to see that . Using this recurrence relation a simple dynamic program can be formulated that computes the values . It holds that and as usual for dynamic programs an optimal schedule can be recovered via backtracking. The running time of such a program can be bounded by , yielding the following trivial result:

Remark 5.

is FPT for the parameter .
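As an illustration of the first dynamic program, the following Python sketch iterates over the machines and stores, for every subset of already scheduled jobs, the best achievable makespan; the matrix input format with infinity encoding forbidden assignments is our own choice.

```python
from itertools import combinations

def dp_over_machines(p):
    """Exact makespan minimization by iterating over the machines.

    p[i][j] is the processing time of job j on machine i; float('inf')
    encodes that job j may not be processed on machine i.  The state is
    the set of jobs already assigned to the machines considered so far,
    so the running time is exponential in the number of jobs only.
    """
    m, n = len(p), len(p[0])
    INF = float('inf')
    best = {frozenset(): 0}                    # empty schedule, makespan 0
    for i in range(m):
        new_best = {}
        for done, val in best.items():
            allowed = [j for j in range(n) if j not in done and p[i][j] < INF]
            # try every subset of the remaining allowed jobs on machine i
            for r in range(len(allowed) + 1):
                for extra in combinations(allowed, r):
                    load = sum(p[i][j] for j in extra)
                    state = done | frozenset(extra)
                    cand = max(val, load)
                    if cand < new_best.get(state, INF):
                        new_best[state] = cand
        best = new_best
    return best.get(frozenset(range(n)), INF)  # INF if the instance is infeasible

# Two machines, three jobs; job 2 may only run on the second machine.
print(dp_over_machines([[3, 2, float('inf')], [3, 2, 4]]))   # -> 5
```

An optimal schedule (rather than just its makespan) can be recovered from the table via backtracking, as mentioned above.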

The second dynamic program is based on iterating through the jobs. Let . We call a load vector and say that a schedule fulfils , if . For let be the set of load vectors that are fulfilled by some schedule for the subinstance , assuming . Then can also be defined recursively as the set of vectors with and for , where and . Using this, a simple dynamic program can be formulated that computes for all . can be recovered from and a corresponding schedule can be found via backtracking. Let there be a bound for the number of distinct loads that can occur on each machine, i.e., for each . Then the running time can be bounded by , yielding:

Remark 6.

is FPT for the pair of parameters and with .

For this note that both and are bounds for the number of distinct loads that can occur on any machine. This dynamic program can also be used to get a simple FPTAS for R||Cmax for the case when the number of machines is constant. For this let be an upper bound of with . Such a bound can be found with the 2-approximation by Lenstra et al. [15]. Moreover let and . By rounding the processing time of every job up to the next integer multiple of we get an instance whose optimum makespan is at most bigger than . The dynamic program can easily be modified to only consider load vectors for , where all loads are bounded by . Therefore there can be at most distinct load values for any machine and an optimal schedule for can be found in time . The schedule can trivially be transformed into a schedule for the original instance without an increase in the makespan.
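The second dynamic program can be sketched in the same style: it iterates over the jobs and maintains the set of machine load vectors realizable by some schedule of the jobs considered so far. The matrix input format below is again our own choice.

```python
def dp_over_jobs(p):
    """Makespan minimization by iterating over the jobs and collecting all
    realizable load vectors.

    p[i][j] is the processing time of job j on machine i; float('inf')
    encodes a forbidden assignment.  After all jobs are processed, the
    optimum is the smallest maximum entry over the collected vectors.
    """
    m, n = len(p), len(p[0])
    INF = float('inf')
    vectors = {(0,) * m}                    # no job assigned yet
    for j in range(n):
        new_vectors = set()
        for vec in vectors:
            for i in range(m):
                if p[i][j] < INF:           # machine i may process job j
                    new_vec = list(vec)
                    new_vec[i] += p[i][j]
                    new_vectors.add(tuple(new_vec))
        vectors = new_vectors
    return min((max(vec) for vec in vectors), default=INF)

print(dp_over_jobs([[3, 2, float('inf')], [3, 2, 4]]))   # -> 5, as before
```

Combining this with the rounding step described above gives the FPTAS for a constant number of machines. In the sketch below the upper bound B on the optimum is simply passed in (in the text it is obtained from the 2-approximation of Lenstra et al.), and the error bound stated in the comments is ours and may differ from the paper's exact constants.

```python
import math

def fptas_constant_m(p, eps, B):
    """FPTAS sketch for makespan minimization with constantly many machines.

    B is an upper bound on the optimal makespan with B <= 2 * OPT.
    Processing times are rounded up to multiples of delta = eps * B / n, so
    every relevant load is one of O(n / eps) multiples of delta and the
    load-vector DP runs in polynomial time for fixed m.  The returned value
    is at most (1 + 2 * eps) times the optimum, since rounding adds at most
    n * delta = eps * B <= 2 * eps * OPT to any machine load.
    """
    m, n = len(p), len(p[0])
    INF = float('inf')
    delta = eps * B / n
    cap = math.ceil((B + n * delta) / delta)          # larger loads are useless
    q = [[math.ceil(p[i][j] / delta) if p[i][j] < INF else INF
          for j in range(n)] for i in range(m)]       # rounded times in units of delta
    vectors = {(0,) * m}
    for j in range(n):
        new_vectors = set()
        for vec in vectors:
            for i in range(m):
                if q[i][j] != INF and vec[i] + q[i][j] <= cap:
                    new_vec = list(vec)
                    new_vec[i] += q[i][j]
                    new_vectors.add(tuple(new_vec))
        vectors = new_vectors
    if not vectors:
        return INF
    return min(max(vec) for vec in vectors) * delta
```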

Tree decomposition and treewidth.

A tree decomposition of a graph G is a pair consisting of a tree T and a collection of bags, one for each node of T, where each bag is a set of vertices of G, such that the following three conditions hold:

  1. (T1) Every vertex of G is contained in at least one bag.

  2. (T2) For every edge of G there is a bag containing both of its endpoints.

  3. (T3) For every vertex of G, the set of nodes whose bags contain it induces a connected subtree of T.

The width of the decomposition is the maximum bag size minus one, and the treewidth of G is the minimum width over all tree decompositions of G. It is well known that forests are exactly the graphs with treewidth at most one, and that the treewidth of G is at least as big as the size of a biggest clique in G minus one. More precisely, for each set of vertices inducing a clique in G, there is a node of the decomposition whose bag contains all of these vertices (see e.g. [4]). For a given graph G and a value k it can be decided in FPT time (and linear in the size of G) whether the treewidth of G is at most k, and in the affirmative case a corresponding tree decomposition with a linear number of nodes can be computed [3]. However, deciding whether a graph has treewidth at most a given value k is NP-hard [1].
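For concreteness, the conditions (T1)-(T3) can be checked mechanically; the following Python sketch does so for a graph, a tree and a family of bags given in a simple dictionary format of our choosing.

```python
def is_tree_decomposition(vertices, graph_edges, tree_edges, bags):
    """Check conditions (T1)-(T3) of a tree decomposition.

    vertices:    the vertex set of the graph G
    graph_edges: the edges of G as pairs of vertices
    tree_edges:  the edges of the tree T as pairs of tree nodes
    bags:        dict mapping every tree node to a set of vertices of G
    The tree structure of T itself is assumed and not re-checked here.
    """
    # (T1) every vertex of G appears in some bag
    covered = set().union(*bags.values()) if bags else set()
    if not set(vertices) <= covered:
        return False
    # (T2) every edge of G is contained in at least one bag
    for u, v in graph_edges:
        if not any(u in b and v in b for b in bags.values()):
            return False
    # (T3) for every vertex, the tree nodes whose bags contain it
    #      induce a connected subtree of T
    adj = {t: set() for t in bags}
    for s, t in tree_edges:
        adj[s].add(t)
        adj[t].add(s)
    for v in vertices:
        nodes = {t for t, b in bags.items() if v in b}
        start = next(iter(nodes))
        seen, stack = {start}, [start]
        while stack:
            x = stack.pop()
            for y in adj[x] & nodes:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        if seen != nodes:
            return False
    return True

# A path a-b-c with bags {a,b} and {b,c} on a two-node tree has width 1.
print(is_tree_decomposition(
    vertices={'a', 'b', 'c'},
    graph_edges=[('a', 'b'), ('b', 'c')],
    tree_edges=[(0, 1)],
    bags={0: {'a', 'b'}, 1: {'b', 'c'}}))   # -> True
```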

Branch decomposition and rankwidth.

It is easy to see that graphs with a small treewidth are sparse. Probably the most studied width parameter for dense graphs is the cliquewidth. In this paper, however, we are going to consider a related parameter called the rankwidth. These two parameters are equivalent in the sense that each of them is bounded by some function of the other [19]. Furthermore, it is known that the rankwidth can be bounded in terms of the treewidth [5]. On the other hand, the treewidth cannot be bounded by any function of the cliquewidth or the rankwidth, which can easily be seen by considering complete graphs.

A cut of a graph G is a partition of its vertex set into two subsets. The adjacency submatrix induced by such a cut has a row for every vertex of the one side and a column for every vertex of the other side, and an entry is 1 if the two corresponding vertices are adjacent and 0 otherwise. The cut rank of the cut is the rank of this matrix over the field with two elements GF(2). A branch decomposition of G is a pair consisting of a tree T whose internal nodes all have degree 3, together with a bijection between the vertices of G and the leaves of T. Each edge e of T induces a cut of G: one side contains exactly the vertices mapped to leaves lying in one of the two connected components of T that arise when e is removed, and the other side contains the remaining vertices. Now the width of e (with respect to the decomposition) is the cut rank of this cut, and the rankwidth of the decomposition is the maximum width over all edges of T. The rankwidth of G is the minimum rankwidth over all branch decompositions of G. It is well known that the cliquewidth of a complete graph is bounded by a small constant, and the same is true for its rankwidth. For a given graph G and fixed k there is an algorithm that finds a branch decomposition of width at most k in FPT time (cubic in the size of G), or reports correctly that none exists [10].
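The cut rank itself is easy to compute by Gaussian elimination over GF(2); the following Python sketch encodes the rows of the adjacency submatrix as bit masks (the encoding and the function name are ours).

```python
def cut_rank(adj, left, right):
    """GF(2) rank of the adjacency submatrix induced by a cut (left, right).

    adj[u] is the set of neighbours of vertex u.  Each row (one per vertex
    of 'left') is encoded as an integer bit mask over the vertices of
    'right' and reduced against a growing row-echelon basis.
    """
    pos = {v: k for k, v in enumerate(right)}
    basis = {}                              # highest set bit -> basis row
    rank = 0
    for u in left:
        row = 0
        for v in adj[u]:
            if v in pos:
                row |= 1 << pos[v]
        while row:
            h = row.bit_length() - 1        # pivot position
            if h not in basis:
                basis[h] = row
                rank += 1
                break
            row ^= basis[h]                 # eliminate the pivot
    return rank

# Every cut of a complete graph induces an all-ones submatrix of GF(2)-rank 1,
# which is why complete graphs (on at least two vertices) have rankwidth 1.
adj = {u: {v for v in 'abcd' if v != u} for u in 'abcd'}
print(cut_rank(adj, ['a', 'b'], ['c', 'd']))   # -> 1
```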

3 Treewidth Results

We start with some basic relationships between different restriction parameters for , especially the treewidths of the different graphs for a given instance. Similar relationships have been determined for the three graphs in the context of constraint satisfaction.

Remark 7.

For every instance I we have tw_p(I) ≥ max_i |J(i)| − 1 and tw_d(I) ≥ max_j |M(j)| − 1.

To see this note that the sets and are cliques in the primal and dual graphs, respectively.

Remark 8.

tw_i(I) ≤ tw_p(I) + 1 and tw_i(I) ≤ tw_d(I) + 1. On the other hand, tw_p(I) can be bounded in terms of tw_i(I) and max_i |J(i)|, and tw_d(I) can be bounded in terms of tw_i(I) and max_j |M(j)|.

These properties were pointed out by Kolaitis and Vardi [13] in a different context. Note that this Remark together with Theorem 1 implies the results of Theorem 2 concerning the parameter . Furthermore, in the case of an instance with a single job that may be processed on each of m machines, or with n jobs that may all be processed on a single machine, the primal graph has treewidth 0 or n − 1 and the dual graph has treewidth m − 1 or 0, respectively, while the incidence graph is a star and therefore has treewidth 1 in both cases.

Dynamic Programs

We show how a tree decomposition of width k of any one of the three graphs can be used to design a dynamic program for the corresponding instance of the scheduling problem. Selecting a node as the root of the decomposition, the dynamic program works in a bottom-up manner from the leaves to the root. We assume that the decomposition has the following simple form: For each leaf node the bag is empty, and we fix one of these nodes as the root. Furthermore, each internal node has exactly two children (left and right), and each node other than the root has one parent. We will also refer to the set of descendants of a node. A decomposition of this form can be generated from any other one without increasing the width and growing only linearly in size through the introduction of dummy nodes. The bag of a dummy node is either empty or identical to the one of its parent.
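One way to carry out this normalization is sketched below in Python; the special choice of an empty-bag leaf as the root is omitted here, and the naming of the inserted dummy nodes is an implementation choice of ours.

```python
def normalize(bags, children, root):
    """Normalize a rooted tree decomposition towards the simple form used in
    the text: leaf bags become empty and no node keeps more than two children.
    Dummy nodes carry an empty bag or a copy of their parent's bag, so the
    width does not increase and the size grows only linearly.
    """
    bags = {t: set(b) for t, b in bags.items()}
    children = {t: list(children.get(t, [])) for t in bags}
    counter = [0]

    def fresh(bag):
        counter[0] += 1
        name = ('dummy', counter[0])
        bags[name] = set(bag)
        children[name] = []
        return name

    queue = [root]
    while queue:
        t = queue.pop()
        ch = children[t]
        if not ch and bags[t]:
            # non-empty leaf: make it internal by adding two empty dummy leaves
            children[t] = [fresh(set()), fresh(set())]
        elif len(ch) == 1:
            # unary internal node: add an empty dummy leaf as a second child
            ch.append(fresh(set()))
        elif len(ch) > 2:
            # too many children: move all but the first below a dummy copy of t
            rest = fresh(bags[t])
            children[rest] = ch[1:]
            children[t] = [ch[0], rest]
        queue.extend(children[t])
    return bags, children
```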

For each of the graphs and each node t of the decomposition we define sets of inactive jobs and machines along with sets of active jobs and machines. The active jobs and machines in each case are defined based on the respective bag, and the inactive ones have the property that they were active for a descendant of t but are not active at t. In addition there are nearly inactive jobs and machines, which are the jobs and machines that are deactivated when going from t to its parent (for the root we assume them to be empty). The sets are defined so that certain conditions hold. The first two are that the (nearly) inactive jobs may only be processed on active or inactive machines, and that the (nearly) inactive machines can only process active or inactive jobs:

(1) Every inactive or nearly inactive job of a node may only be processed on machines that are active or inactive at that node.
(2) Every inactive or nearly inactive machine of a node may only process jobs that are active or inactive at that node.

Furthermore, the (nearly) inactive jobs and machines of the two children of an internal node form a disjoint partition of the inactive jobs and machines of that node, respectively:

(3) The inactive jobs of an internal node are exactly the disjoint union of the inactive and nearly inactive jobs of its two children.
(4) The inactive machines of an internal node are exactly the disjoint union of the inactive and nearly inactive machines of its two children.

Here "disjoint" emphasizes that the sets occurring in the union do not intersect. Now at each node of the decomposition the basic idea is to perform three steps:

  1. Combine the information from the children (for internal nodes).

  2. Consider the nearly inactive jobs and machines:

    • Primal and incidence graph: Try all possible ways of scheduling active jobs on nearly inactive machines.

    • Dual and incidence graph: Try all possible ways of scheduling nearly inactive jobs on active machines.

  3. Combine the information from the last two steps.

For the second step the dynamic programs described in Section 2 are used as subprocedures. We now consider each of the three graphs.

The primal graph.

In the primal graph all the vertices are jobs, and we define the active jobs of a tree node to be exactly the jobs that are included in the respective bag. The inactive jobs are those that are not included in the bag of the node but are in a bag of some descendant of it, and the nearly inactive ones are those that are active at the node but inactive at its parent. Moreover, the inactive machines are the ones on which some inactive job may be processed, and the (nearly in-)active machines are those that can process (nearly in-)active jobs and are not inactive. For these definitions we get:

Lemma 9.

The conditions (1)-(4) hold, as well as:

(5)
(6)
Proof.

(1) and (6):

This yields (6) and (6) implies (1).

(2) and (5): Let and . We first consider the case that . Then there is a job with . If , we have and otherwise . Because of (T2) there is a node with . Since , we have . This together with (T3) gives . Now implies . Therefore we have . Next we consider the case that . In this case there is a job with and for each job we have . If we have and otherwise . Because of (T2) there is again a node with . Since , and we get using (T3). This also implies (5).

(3): All but follows directly from the definitions. Assuming there is a job we get because of (T3), yielding a contradiction.

(4): Because of (3) and the definitions we get , and for is clear by definition. Therefore it remains to show . We assume that there is a machine in this cut. Then there are jobs for with . We have and because of (T2) there is a node with . Because of and (T3) we have a contradiction. ∎
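As an aside, the active, inactive and nearly inactive sets for the primal graph can be computed bottom-up along the rooted tree decomposition; the following Python sketch follows the definitions given above (the dictionary interface and the names J_a, J_i, J_n, M_a, M_i, M_n are our own shorthand, not the paper's notation).

```python
def primal_sets(bags, children, parent, restrictions):
    """Active (a), inactive (i) and nearly inactive (n) jobs and machines
    for every node of a rooted tree decomposition of the primal graph.

    bags[t]         -- bag of tree node t, a set of jobs
    children[t]     -- list of children of t
    parent[t]       -- parent of t, None for the root
    restrictions[j] -- M(j), the machines on which job j may be processed
    """
    J_a, J_i, J_n = {}, {}, {}
    M_a, M_i, M_n = {}, {}, {}

    def machines_of(jobs):
        return set().union(*(restrictions[j] for j in jobs)) if jobs else set()

    def visit(t):
        seen_below = set()                      # jobs appearing in bags below t
        for c in children.get(t, []):
            seen_below |= visit(c)
        J_a[t] = set(bags[t])                   # active: exactly the bag
        J_i[t] = seen_below - J_a[t]            # inactive: below t, not in the bag
        p = parent.get(t)
        J_n[t] = set() if p is None else J_a[t] - set(bags[p])
        M_i[t] = machines_of(J_i[t])
        M_a[t] = machines_of(J_a[t]) - M_i[t]
        M_n[t] = machines_of(J_n[t]) - M_i[t]
        return seen_below | J_a[t]

    root = next(t for t in bags if parent.get(t) is None)
    visit(root)
    return J_a, J_i, J_n, M_a, M_i, M_n
```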

For and let . Let , and . We set and to be the sets of feasible schedules for the instances and respectively. We will consider and .

First note that , where is the root of . Moreover, for a leaf node there are neither jobs nor machines and holds. Hence let be a non-leaf node. We first consider how can be computed from the children of (Step 1). Due to Property 3 of the tree decomposition and (1) the jobs from are already active on at least one of the direct descendants of . Because of this and (4), may be split in two parts , where for . Let be the set of such pairs . From (3), (4) and (6) we get:

Lemma 10.

.

Proof.

Let be optimal. Since , we have for . Let . Because of (4) we have and obviously holds. Let . Because of (6) we have and (3) implies . This yields:

Now let minimizing the right-hand side of the equation and optimal. Then (3) and (4) imply that is in . Therefore we have . Since also equals the right-hand side of the equation, the claim follows. ∎

Consider the computation of (Step 3). We may split and into a set going to the nearly inactive and a set going to the inactive machines. We set to be the set of pairs with , and . Because of (3)-(5) we have:

Lemma 11.

.

Proof.

Let be optimal. Because of (5) we have . We set and . Then . Let and . Then and is a feasible schedule for . Because of (3) and (4), we have and:

Now let minimizing the right hand side of the equation, and a feasible schedule for . Then (3) and (4) yield , and therefore . Since also equals the right hand side of the equation, the claim follows. ∎

Determining the values corresponds to Step 2. Note that these values can be computed using the first dynamic program from Section 2 in time .

The dual graph.

For the dual graph the (in-)active jobs and machines are defined dually: The active machines for a tree node are the ones in the respective bag, the inactive machines are those that were active for some descendant but are not active for , and the nearly inactive machines are those that are active at but inactive at its parent, i.e., , and . Furthermore the inactive jobs are those that may be processed on some inactive machine and the (nearly in-)active ones are those that can be processed on some (nearly in-)active machine and are not inactive, i.e., , and . With these definitions we get analogously to Lemma 9:

Lemma 12.

The conditions (1)-(4) hold, as well as:

(7)
(8)

We will need some extra notation. Like we did in Section 2 we will consider load vectors , where is a set of machines. We say that a schedule fulfils , if for each . For any set of schedules for we denote the set of load vectors for that are fulfilled by at least one schedule from with . Furthermore we denote the set of all schedules for with , and for a subset of jobs , we write as a shortcut for . Let . We set and . Moreover, for and we set and to be those schedules that fulfil and respectively. We now consider and .

First note . Moreover, for a leaf node we have neither jobs nor machines and . Therefore . Hence let be a non-leaf node. Again, we first consider how can be computed from the children of . Because of (3) may be split into a left and a right part. For two machine sets let be a transformation function for load vectors, where the -th entry of equals for and otherwise. We set to be the set of pairs with , and for . Because of (1), (3) and (4), we have:

Lemma 13.

.

Proof.

Let be optimal. Because of (1) we have for . Let and the load vector that fulfils on . Then we have and . Because of (3) and (4) we have . Because of the objective function we have:

Now let minimizing the right hand side of the equation and optimal. Then is in and equals the right hand side of the equation. Since furthermore the claim follows. ∎
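The transformation of load vectors introduced before Lemma 13, and the way the load vectors of the left and right child are combined on a common machine set, can be illustrated as follows; the dictionary representation of load vectors is our own simplification.

```python
def embed(lam, machines):
    """Embed a load vector into a (super)set of machines: entries for
    machines already present are kept, newly added machines get load 0.
    Load vectors are represented as dicts mapping machine -> load."""
    assert set(lam) <= set(machines)
    return {i: lam.get(i, 0) for i in machines}

def combine(lam_left, lam_right, machines):
    """Combine the load vectors of the left and right child on a common
    machine set by embedding both and adding the loads entrywise."""
    a = embed(lam_left, machines)
    b = embed(lam_right, machines)
    return {i: a[i] + b[i] for i in machines}

# Left child puts load 3 on machine 1; right child puts 4 on machine 1 and
# 2 on machine 2; combined on the machine set {1, 2, 3}: {1: 7, 2: 2, 3: 0}.
print(combine({1: 3}, {1: 4, 2: 2}, {1, 2, 3}))
```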

Now we consider . We may split into the load due to inactive and that due to nearly inactive jobs. Note that the nearly inactive jobs can only be processed by active machines (7). We set to be the set of pairs with , and . Now (3), (4) and (7) yield:
