A Constant Factor Approximation Algorithm for Unsplittable Flow on Paths
In the unsplittable flow problem on a path, we are given a capacitated path P and a set of tasks, each task having a demand, a profit, and start and end vertices. The goal is to compute a maximum-profit set of tasks such that, for each edge e of P, the total demand of the selected tasks that use e does not exceed the capacity of e. This well-studied problem also appears under alternative names such as resource allocation, bandwidth allocation, resource-constrained scheduling, temporal knapsack, and interval packing.
We present a polynomial time constant-factor approximation algorithm for this problem, improving on the previous best known approximation ratio of O(log n). The approximation ratio of our algorithm is 7 + ε for any ε > 0.
We introduce several novel algorithmic techniques, which might be of independent interest: a framework which reduces the problem to instances with a bounded range of capacities, and a new geometrically inspired dynamic program which solves a special case of the maximum weight independent set of rectangles problem to optimality. In the setting of resource augmentation, wherein the capacities can be slightly violated, we give a (2 + ε)-approximation algorithm. In addition, we show that the problem is strongly NP-hard even if all edge capacities are equal and all demands are either 1, 2, or 3.
In the Unsplittable Flow Problem on a Path (UFPP), we are given a path G = (V, E) with an integral capacity u_e for each edge e ∈ E. In addition, we are given a set of n tasks T = {1, ..., n}, where each task i is characterized by a start vertex s_i, an end vertex t_i, a demand d_i, and a profit w_i. A task i uses an edge e if e lies on the path from s_i to t_i. The aim is to compute a set of tasks T' ⊆ T with maximum total profit such that for each edge, the sum of the demands of all tasks in T' that use this edge does not exceed its capacity.
The name of this problem is motivated by an interpretation as a multicommodity flow problem, where each task corresponds to a commodity. The term “unsplittable” means that the total amount of flow from each commodity has to be routed completely along the path from the source to the sink, or not at all. There are several settings and applications in which this problem occurs, and several other interpretations of the problem. Therefore, this problem, and close variants thereof, have been studied under the names bandwidth allocation [10, 21, 32], admission control, interval packing, temporal knapsack, multicommodity demand flow, unsplittable flow problem [6, 7, 17, 19], scheduling with fixed start and end times, and resource allocation [8, 14, 23, 36]. In many applications, the vertices correspond to time points, and tasks have fixed start and end times. Within this time interval they consume a given amount of a common resource, of which the available amount varies over time.
UFPP is easily seen to be (weakly) NP-hard, since it contains the Knapsack problem as a special case (when the path consists of a single edge). In addition, Darmann et al. show that the special case where all profits and all capacities are uniform is also weakly NP-hard. Chrobak et al. strengthen this result by showing strong NP-hardness for this case. In addition, they show that the case where the profits equal the demands is strongly NP-hard. These results imply that the problem admits no polynomial time approximation scheme (PTAS) unless P = NP. On the other hand, the special case of a single edge (Knapsack) admits an FPTAS. When the number of edges is bounded by a constant, UFPP admits a PTAS, since it is then a special case of Multi-Dimensional Knapsack.
Most of the research on UFPP has focused on two restricted cases. Firstly, the special case in which all capacities are equal, also known as the Resource Allocation Problem (RAP) [8, 14, 23, 24, 36], has been well studied. A more general special case of UFPP is given by the No-Bottleneck Assumption (NBA): there it is required that max_{i ∈ T} d_i ≤ min_{e ∈ E} u_e (which holds in particular for RAP). We denote this restriction of the problem by UFPP-NBA. For UFPP-NBA, a (2 + ε)-approximation algorithm is known, which matches the earlier best approximation ratio for RAP.
Many previous papers on UFPP partition the tasks into small and large tasks, and use different algorithmic techniques for these two groups. For a task i, denote by b(i) the minimum capacity among all edges used by task i. For δ with 0 < δ ≤ 1, we say that a task i is δ-small if d_i ≤ δ · b(i) holds, and δ-large otherwise. The two main algorithmic techniques used in previous results are dynamic programming (for large tasks) and rounding of solutions to the linear programming relaxation of the problem (for small tasks). These techniques work well when the NBA holds.
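As a concrete reading of these definitions, the sketch below computes the bottleneck capacity b(i) and classifies a task as δ-small or δ-large. Tasks are represented here (an illustrative choice) as dicts with start/end vertex indices `s`, `t` (so the task uses edges s, ..., t−1) and demand `d`.

```python
def bottleneck(task, capacities):
    """b(i): the minimum capacity among the edges on the task's path.
    capacities[e] is the capacity of edge e between vertices e and e+1."""
    return min(capacities[e] for e in range(task["s"], task["t"]))

def is_small(task, capacities, delta):
    """A task is delta-small if d_i <= delta * b(i), delta-large otherwise."""
    return task["d"] <= delta * bottleneck(task, capacities)
```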
However, several important obstacles prevent these techniques from being generalized to the general case of UFPP. For example, Chakrabarti et al. show that under the NBA the natural LP-relaxation of UFPP has a constant integrality gap. Without this assumption, however, the integrality gap can be as large as Ω(n). Moreover, the NBA implies that if all tasks are δ-large, then in any feasible solution only a constant number of tasks (depending on δ) can use each edge. This property is useful for setting up a dynamic program; see Section 3.2.2. Without the NBA this is no longer possible.
Despite these obstacles, there are a few breakthrough results for (general) UFPP. The best known polynomial time algorithm, by Bansal et al., achieves an approximation factor of O(log n), thus beating the integrality gap of the natural LP-relaxation. This result has been generalized to trees by Chekuri et al. In addition, they gave a stronger linear programming relaxation for UFPP with polylogarithmic integrality gap. Finally, Bansal et al. gave a (1 + ε)-approximation algorithm with quasi-polynomial running time, which additionally requires that the capacities and the demands are quasi-polynomial, i.e., bounded by 2^{polylog(n)}. Nevertheless, it remained an open question whether UFPP admits a constant factor approximation algorithm (this was asked e.g. in [7, 19]).
1.1 Our Contribution and Outline
We present the first polynomial time constant-factor approximation algorithm for the general case of UFPP. The algorithm has an approximation ratio of 7 + ε, for arbitrary ε > 0.
To obtain this result we introduce several new algorithmic techniques that are interesting in their own right. We develop a viewpoint which allows us to reduce the problem to a special case of the maximum weight independent set of rectangles problem. In addition, we design a framework which reduces the problem to solving instances in which, essentially, the edge capacities are within a constant factor of each other. These techniques can be applied and combined in various ways. For instance, for practical purposes, we also show how our results can be used to obtain a constant factor approximation algorithm with a low-order polynomial running time. We now describe these results and the new techniques in more detail, and give an outline of the paper.
As in many previous papers, for our main algorithm we partition the tasks into ‘small’ and ‘large’ tasks. For the small tasks our main result is as follows: for any ε > 0 and δ ≤ 1/2, we present a (3 + ε)-approximation algorithm for UFPP in Section 3, for the case where each task is δ-small. We remark that a similar result was given by Chekuri et al., who gave a constant factor approximation algorithm for the case where each task is δ-small, for sufficiently small δ; their result also applies to trees. To prove our (3 + ε)-approximation, we introduce a novel framework in which the tasks are first grouped into smaller sets, according to their bottleneck capacities, such that the techniques for the NBA case can be applied. The resulting sets can then be handled via relatively standard dynamic programming, LP-rounding, and network flow techniques (similar to e.g. [14, 20]). Solutions to these smaller sets leave a small amount of the capacity of each edge unused. In our framework we recombine these solutions into a feasible solution for all tasks.
Using the techniques developed for the (3 + ε)-approximation, in Section 3.3 we also give a result for UFPP in the setting of resource augmentation. We give an algorithm that computes a (2 + ε)-approximative solution which is feasible if we increase the capacity of each edge by a factor 1 + δ, for arbitrarily small ε > 0 and δ > 0. Note that this algorithm works with arbitrary task sets and does not require the tasks to be small.
For our main approximation algorithm, it remains to handle the large tasks. For these, we present the following main result: for any integer k ≥ 2, if all tasks are 1/k-large, we give a 2k-approximation algorithm in Section 4. This is based on a geometric viewpoint of the problem: we represent UFPP instances by drawing a curve in the plane determined by the edge capacities, and representing tasks by axis-parallel rectangles that are drawn as high as possible under this curve. The demand of a task determines the height of its rectangle, and the profit of a task determines the weight of the rectangle. Using a novel geometrically inspired dynamic program, we show that a maximum weight set of pairwise non-intersecting rectangles can be found in polynomial time. Such a set corresponds to a feasible UFPP solution. In addition, we show that when every task is 1/k-large, this solution yields a 2k-approximative solution for UFPP. With this dynamic program we contribute towards the well-studied problem of finding a Maximum Weight Independent Set of Rectangles (MWISR) [1, 30, 34], which we discuss in more detail below.
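The geometric viewpoint can be sketched concretely as follows. Under the assumed convention that the rectangle of task i spans its path horizontally and sits as high as possible under the capacity curve (top edge at the bottleneck capacity b(i), height d_i), one can compute the rectangles and test pairwise disjointness directly; the coordinate convention and task representation are illustrative.

```python
def task_rectangle(task, capacities):
    """Rectangle of a task under the capacity curve: it spans the task's
    path horizontally, has height d_i, and its top edge sits at the
    bottleneck capacity b(i), i.e. as high as possible under the curve.
    Returns (x1, x2, y1, y2)."""
    b = min(capacities[e] for e in range(task["s"], task["t"]))
    return (task["s"], task["t"], b - task["d"], b)

def disjoint(r1, r2):
    """True iff two open axis-parallel rectangles do not intersect:
    they must be separated in at least one of the two dimensions."""
    x1, x2, y1, y2 = r1
    a1, a2, b1, b2 = r2
    return x2 <= a1 or a2 <= x1 or y2 <= b1 or b2 <= y1
```

A set of pairwise disjoint rectangles then corresponds to a feasible UFPP solution (the converse need not hold, which is where the 2k-factor for 1/k-large tasks comes in).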
For our main result, we partition the tasks into 1/2-small tasks and 1/2-large tasks. To the first group, we apply the aforementioned (3 + ε)-approximation algorithm. For the second group, our second algorithm gives a 4-approximation. Returning the better of the two solutions yields the (7 + ε)-approximation algorithm. The main algorithm is summarized in Section 5. In addition, in Section 5 we show how our results can be combined to obtain a constant factor approximation algorithm for UFPP with a low-order polynomial running time, and we discuss how our results carry over to the generalization from path to cycle networks, where we again obtain a constant factor approximation algorithm.
Finally, we give an alternative proof of the strong NP-hardness of UFPP, which shows that a different restriction also remains strongly NP-hard. The existing NP-hardness proofs [22, 23] use arbitrarily large demands in their reductions. In Section 6, we prove that the problem is strongly NP-hard even for the restricted case where all demands are chosen from {1, 2, 3} and all capacities are uniform (RAP). Note that, in contrast to our hardness result, it is known that in the slightly more restricted case where both the capacities and the demands are uniform, the problem admits a polynomial time algorithm: for that case, Arkin and Silverberg have shown that the problem can be solved in O(n^2 log n) time by minimum-cost flow computations.
1.2 Related Results
As mentioned above, the restrictions of UFPP where demands are uniform (RAP) and where the No-Bottleneck Assumption holds (UFPP-NBA) have been well studied, the current best approximation algorithm for both problems being the (2 + ε)-approximation of Chekuri et al. Both RAP and UFPP-NBA have been generalized in various ways. In a scheduling context, one may allow more freedom for choosing the start and end time of a given task. Phillips et al. obtain a constant factor approximation algorithm for such a generalization of RAP by using LP-rounding techniques. For a similar generalization, where for each task one out of a set of alternatives needs to be selected, Bar-Noy et al. provide a constant factor approximation algorithm using the local ratio technique. RAP and UFPP-NBA can also be generalized in a different way, more related to network flows, by considering graphs other than a path. In graphs other than trees, there may be several possible paths between a terminal pair s_i, t_i; however, a single path has to be chosen for each selected terminal pair. This generalization of UFPP to general graphs is called the Unsplittable Flow Problem (UFP), or UFP-NBA if the NBA applies. Baveja and Srinivasan provide an O(√|E|)-approximation algorithm for UFP-NBA (on all graphs), improving on various earlier results. A simpler combinatorial algorithm with the same guarantee was subsequently given by Azar and Regev. Chakrabarti et al. also give an approximation algorithm for all graphs. In addition, they observed that an α-approximation algorithm for UFPP-NBA yields an O(α)-approximation for UFP-NBA on cycles. This way they gave the first constant factor approximation algorithms for both UFPP-NBA and UFP-NBA on cycles. Chekuri et al. obtain a 48-approximation for UFP-NBA on trees.
In addition, many hardness results are known for UFPP (not necessarily under the NBA) generalized to various graphs: for general graphs, the problem is hard to approximate within a factor of n^{1−ε} unless P = NP, and even for depth-3 trees the problem is APX-hard. Hardness-of-approximation results are known even for the special case with unit demands and capacities (the Edge-Disjoint Paths Problem); see [2, 3].
When viewing UFPP as a packing problem, the corresponding covering problem has also been studied [15, 16]. In that setting, tasks have costs instead of profits, and the objective is to find a minimum cost set of tasks such that, for each edge, the sum of the demands of all tasks in the set that use this edge is at least its capacity. Recently, Chakaravarthy et al. designed a primal-dual 4-approximation algorithm for this problem.
Recall that we reduced UFPP for large tasks to a special case of the Maximum (Weight) Independent Set of Rectangles (M(W)ISR) problem. In this problem, a collection of axis-parallel rectangles is given, and the goal is to find a maximum (weight) subset of pairwise disjoint rectangles. For the unweighted case of this problem, a randomized O(log log n)-approximation is known. For the weighted case, there are several O(log n)-approximation algorithms [1, 30, 34]. The algorithm by Erlebach et al. gives a PTAS for the case that the ratio between height and width of the rectangles is bounded (note that this does not apply to the special case that we need here for approximating UFPP). Our new dynamic programming technique might be useful for further research on this problem.
We remark that our approach for the large tasks is closely related to another well-studied variant of RAP. In adjacent resource scheduling problems, one wants to schedule each job on several machines in parallel, which must be contiguous, that is, adjacent to each other. In other words, this is a variant of MWISR where rectangles are allowed to move vertically within a given range. Duin and van Sluis prove the decision variant of scheduling tasks on contiguous machines to be strongly NP-complete. RAP on contiguous machines has been considered under the name storage allocation problem (SAP), in which tasks are axis-aligned rectangles that are only allowed to move vertically. Leonardi et al. and Bar-Yehuda et al. provide constant factor approximation algorithms for SAP.
In this paper, we also study UFPP in the setting of resource augmentation. This means that we find a solution which is feasible if we increase the capacity of each edge by a modest factor. The paradigm of resource augmentation is very popular in real-time scheduling, where the augmented resource is the speed of the machines. For instance, it is known that the natural earliest-deadline-first policy (EDF) is guaranteed to work on m machines of speed 2 − 1/m if the instance can be feasibly scheduled on m machines of unit speed. In addition, a matching feasibility test is known. For further examples of resource augmentation results in real-time scheduling see [25, 31].
We assume that the vertices of the path are numbered 0, 1, ..., m from left to right, and that the tasks are numbered 1, ..., n. Recall that each task i is characterized by two vertices s_i and t_i with s_i < t_i, a positive integer demand d_i, and a profit w_i.
For each task i we denote by P_i the edge set of the subpath of G from s_i to t_i. If e ∈ P_i, then task i is said to use e. For each edge e we denote by T_e the set of tasks which use e. For a set of tasks T' ⊆ T we define its profit by p(T') := Σ_{i∈T'} w_i and its demand by d(T') := Σ_{i∈T'} d_i. Our objective is to find a set of tasks T' with maximum profit such that d(T' ∩ T_e) ≤ u_e for each edge e. For each task i we define its bottleneck capacity by b(i) := min_{e∈P_i} u_e. An edge e is called a bottleneck edge for task i if e ∈ P_i and u_e = b(i). In addition, we define for every task i the value ℓ(i) := b(i) − d_i, which can be interpreted as the remaining capacity of a bottleneck edge of i when i is selected in a solution. We order the vertices and edges of the path from left to right; in other words, we interpret an edge simply as a number between 1 and m, and for a vertex v and edges e, e' we write v < e, e < v, e < e', and e ≤ e' accordingly. Without loss of generality, we assume throughout this paper that u_e ≥ 1 for all edges e and d_i ≥ 1 for all tasks i; zero demands and capacities can easily be handled in a preprocessing step. Moreover, observe that any given instance can easily be adjusted to an equivalent instance in which each vertex is either a start or an end vertex of a task. Such an adjustment can be implemented in linear time and hence does not dominate the running times of the algorithms presented in this paper. Therefore, we will henceforth assume that m ≤ 2n. Throughout this paper, we will use the notations defined above to refer to the UFPP instance currently under consideration. In the few cases where we consider multiple instances, the intended instance will be clear from the context.
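With this notation fixed, checking feasibility of a candidate task set is direct. The sketch below assumes tasks are dicts with keys `s`, `t`, `d`, and that a task with start s and end t uses edges s, ..., t−1 (an illustrative representation).

```python
def is_feasible(selected, tasks, capacities):
    """UFPP feasibility: for every edge e, the total demand of the
    selected tasks whose path uses e must not exceed u_e."""
    for e, u_e in enumerate(capacities):
        load = sum(tasks[i]["d"] for i in selected
                   if tasks[i]["s"] <= e < tasks[i]["t"])
        if load > u_e:
            return False
    return True
```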
We define an α-approximation algorithm (for α ≥ 1) for a maximization problem to be a polynomial time algorithm which computes a feasible solution for a given instance such that its objective value is at least a 1/α fraction of the optimal value. Throughout this paper, for a subset T' ⊆ T of the tasks, OPT(T') denotes an optimal solution for the UFPP instance restricted to the task set T'. The following simple fact shows how we can combine our approximation algorithms for different task subsets into one algorithm for all tasks.
Fact 2.1. Consider a UFPP instance with task set T, and a partition of T into T_1 and T_2. If for j ∈ {1, 2} there exists an α_j-approximation algorithm for the instance restricted to the tasks in T_j, then there exists an (α_1 + α_2)-approximation algorithm for the entire instance.
Proof: For j ∈ {1, 2}, let S_j denote the solution returned by the approximation algorithm for the instance restricted to the tasks in T_j, and let OPT_j denote an optimal solution for that restricted instance. Let OPT denote an optimal solution for all tasks. So for j ∈ {1, 2}, p(OPT ∩ T_j) ≤ p(OPT_j) ≤ α_j · p(S_j). The algorithm that returns the maximum profit solution of S_1 and S_2 has an approximation ratio of α_1 + α_2, since p(OPT) = p(OPT ∩ T_1) + p(OPT ∩ T_2) ≤ α_1 · p(S_1) + α_2 · p(S_2) ≤ (α_1 + α_2) · max{p(S_1), p(S_2)}.
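The combination step in this fact is simply "return the more profitable of the candidate solutions". A minimal sketch, with the task representation assumed for illustration (profit stored under key `w`):

```python
def profit(S, tasks):
    """Total profit p(S) of a set of task indices."""
    return sum(tasks[i]["w"] for i in S)

def best_of(candidates, tasks):
    """Given feasible solutions computed separately for the parts of a
    partition of the tasks (an alpha_j-approximation for part j),
    returning the most profitable candidate gives an
    (alpha_1 + alpha_2)-approximation overall, since the optimum splits
    its profit across the parts."""
    return max(candidates, key=lambda S: profit(S, tasks))
```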
3 Small Tasks
In this section we present a (3 + ε)-approximation algorithm for any set of tasks which are δ-small (for arbitrary ε > 0 and δ ≤ 1/2). In our main algorithm (for a general set of tasks) we will invoke this algorithm as a subroutine for all tasks which are 1/2-small. Moreover, with a slight adjustment of the introduced techniques, we construct a polynomial time algorithm computing a (2 + ε)-approximative solution for the entire instance (not only the small tasks) which is feasible if the capacities of the edges are increased by a factor 1 + δ (resource augmentation); see Section 3.3.
Our strategy is to define groups of tasks such that the bottleneck capacities of all tasks in one group are within a certain range. This allows us to compute a feasible solution for each group whose profit is at most a constant factor smaller than the profit of an optimal solution for the group. In addition, each computed solution leaves a certain amount of the capacity of every edge unused. We devise a framework which combines the solutions for a selection of groups into a feasible solution for the entire instance, in a way that yields a (3 + ε)-approximation (with an appropriate choice of the constants for the given ε).
We now define the framework sketched above. We group the tasks into sets according to their bottleneck capacities. Let γ > 1 be a constant, to be chosen later depending on β and ε. We define T_j := {i ∈ T : γ^j ≤ b(i) < γ^{j+1}} for each integer j. Note that this includes negative values of j, and that at most n of the sets are non-empty (only those will be relevant later). In the sequel, we will present an algorithm which computes feasible solutions S_j ⊆ T_j. These solutions will satisfy the following properties.
Consider a set T_j, and let β ≥ 0 and α ≥ 1. A set S_j ⊆ T_j is called (β, α)-approximative if p(S_j) ≥ p(OPT(T_j))/α, and d(S_j ∩ T_e) ≤ (1 − β) · u_e for each edge e that is used by a task in T_j. (In particular, S_j is then a feasible solution.)
An algorithm which computes (β, α)-approximative sets S_j in polynomial time is called a (β, α)-approximation algorithm. We call the second condition the modified capacity constraint.
Our framework consists of a procedure that turns a (β, α)-approximation algorithm for each set T_j into a ((1 + ε) · α)-approximation algorithm for all given tasks, where the constants γ and q below are chosen suitably, depending on β and ε. Later, for our resource augmentation result (see Section 3.3), we will work with (0, α)-approximative sets; therefore, some of the claims below will be proven more generally, allowing β to be zero.
Lemma 3.2 (Framework)
Let α ≥ 1, β > 0, and ε > 0 be constants, and let the sets T_j be defined as stated above for an instance of UFPP, with the constants γ and q chosen suitably as functions of β and ε. Assume we are given a (β, α)-approximation algorithm for each set T_j, with running time p(n) for a polynomial p. Then there is a ((1 + ε) · α)-approximation algorithm with running time O(n · p(n)) for the set of all tasks.
Now we describe the algorithm that yields Lemma 3.2. Assume that we are given a (β, α)-approximation algorithm which computes solutions S_j for the sets T_j. The key idea is that, due to the unused edge capacities of the sets S_j, the union of several of these sets still yields a feasible solution. With an averaging argument we will show further that the indices of the sets that we combine can be chosen such that the resulting set is a ((1 + ε) · α)-approximation. Formally, for each offset r ∈ {0, ..., q − 1} we define an index set I_r (the constants it depends on will always be clear from the context). For each r we compute the set S^r := ∪_{j ∈ I_r} S_j. We output the set with maximum profit among all sets S^r. In Lemma 3.5 we will prove that the resulting set is a ((1 + ε) · α)-approximation. First, in Lemma 3.4 we will prove that each set S^r is feasible (using that β > 0). This requires the following property.
Proposition 3.3. Let S be a feasible UFPP solution and let B be a value such that b(i) ≤ B for all i ∈ S. Then for every edge e, d(S ∩ T_e) ≤ 2B.
Proof: Let e be an edge. If u_e ≤ B, then the claim follows immediately, since d(S ∩ T_e) ≤ u_e ≤ 2B. Now suppose that u_e > B. Any task i ∈ S ∩ T_e must use an edge whose capacity is at most B (for instance, a bottleneck edge of i). In particular, it must use either the closest edge e_L to the left of e or the closest edge e_R to the right of e whose capacity is at most B. The total demand of the tasks in S using e_L is at most u_{e_L} ≤ B, and the same holds for e_R. It follows that the total demand of the tasks in S ∩ T_e is at most 2B.
Lemma 3.4. Let α ≥ 1 and β > 0 be constants. For each set T_j, let S_j ⊆ T_j be a (β, α)-approximative set. Then for each offset r, the set S^r := ∪_{j ∈ I_r} S_j is feasible.
Proof: Consider a set S_j with j ∈ I_r. By the definition of (β, α)-approximative sets, S_j leaves β · u_e units of the capacity of every used edge e free. By the choice of the constants and the index sets, this is at least twice the maximum bottleneck capacity of the tasks in the next set S_{j'} with j' ∈ I_r and j' < j. Therefore, by Proposition 3.3, the set S_j ∪ S_{j'} is feasible. In fact, it again leaves a sufficient fraction of the capacities free, which makes it possible to continue this argument for the further sets S_{j''} with j'' ∈ I_r and j'' < j', and to prove that their union is feasible.
Formally, let r be an offset and let e be an edge, and let j* be the largest index in I_r such that tasks of T_{j*} use e. The set S_{j*} satisfies the modified capacity constraint, so d(S_{j*} ∩ T_e) ≤ (1 − β) · u_e. All remaining tasks of S^r ∩ T_e belong to sets S_j with j ∈ I_r and j < j*, and the maximum bottleneck capacities of these sets decrease geometrically with j. Applying Proposition 3.3 to each of these sets and summing the resulting geometric series shows that, by the choice of the constants, their total demand on e is at most β · u_e. Summarizing, d(S^r ∩ T_e) ≤ (1 − β) · u_e + β · u_e = u_e, so S^r is feasible.
Lemma 3.5. Let α ≥ 1 and β ≥ 0 be constants. For each set T_j, let S_j ⊆ T_j be a (β, α)-approximative set. Then for the offset r* which maximizes p(S^{r*}) it holds that p(S^{r*}) ≥ (1 − 1/q) · p(OPT)/α, where OPT denotes an optimal solution of the given instance of UFPP.
Proof: Every task that is selected in some set S_j is included in the sets S^r for q − 1 of the q offsets r. Using this fact, we calculate that Σ_{r=0}^{q−1} p(S^r) = (q − 1) · Σ_j p(S_j) ≥ ((q − 1)/α) · Σ_j p(OPT(T_j)) ≥ ((q − 1)/α) · p(OPT).
So there must be an offset r such that p(S^r) ≥ (1 − 1/q) · p(OPT)/α. In particular, this holds for the offset r* that maximizes p(S^{r*}).
Proof of Lemma 3.2: Lemma 3.4 shows that each set S^r is feasible, and Lemma 3.5 shows that S^{r*} is an (α/(1 − 1/q))-approximation, which is a ((1 + ε) · α)-approximation when q is chosen large enough. For computing S^{r*} we need to compute the set S_j for each relevant value j. There are at most n relevant values j. Finding the best offset can then be done in O(n · q) steps. This yields an overall running time of O(n · p(n)) (recall that p is the polynomial bounding the running time needed to compute the sets S_j).
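The skeleton of the framework above can be sketched as follows: tasks are grouped into classes by bottleneck capacity, and one candidate union is formed per offset. The base 2 for the classes and the rule "keep class j for offset r iff j mod q == r" are illustrative placeholders for the paper's constants γ and index sets I_r, which are chosen so that Lemmas 3.4 and 3.5 apply.

```python
def group_by_bottleneck(tasks, capacities):
    """Group tasks into classes T_j by bottleneck capacity, task i going
    to class j = floor(log2 b(i)).  (The paper groups by powers of a
    constant gamma > 1; base 2 is an illustrative choice.)"""
    groups = {}
    for idx, t in enumerate(tasks):
        b = min(capacities[e] for e in range(t["s"], t["t"]))
        j = b.bit_length() - 1  # floor(log2 b) for integral b >= 1
        groups.setdefault(j, []).append(idx)
    return groups

def offset_unions(groups, q):
    """For each offset r, form the union of the per-class solutions whose
    class index is kept by the index set of offset r; the framework then
    returns the most profitable of the q candidate unions.  The rule
    j % q == r used here is an illustrative stand-in for the paper's
    index sets I_r."""
    return [[i for j in sorted(groups) if j % q == r for i in groups[j]]
            for r in range(q)]
```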
3.2 An Approximation Algorithm for Small Tasks
Now that we have developed the framework to translate (β, α)-approximation algorithms for the sets T_j into an approximation algorithm for the entire instance (Lemma 3.2), it remains to present such a (β, α)-approximation algorithm. In this section, we present a (β, 3 + ε)-approximation algorithm for sets T_j in which all tasks are δ-small. Together with our framework of Lemma 3.2, this yields a (3 + ε)-approximation algorithm for UFPP for the case that all tasks are δ-small, for arbitrary ε > 0 and suitable δ. To get some intuition, the reader may think of β as a small constant close to zero.
Suppose we are given a set T_j with only δ-small tasks. In order to derive the mentioned (β, 3 + ε)-approximation algorithm, we choose a value δ' < δ and split the set into δ'-small tasks (tiny tasks) and tasks which are δ'-large but δ-small (medium tasks). We define δ' such that for the tiny tasks there is a (β, 1 + ε)-approximation algorithm, presented in the following subsection. For the medium tasks, we give a (β, 2)-approximation algorithm in Section 3.2.2.
3.2.1 An Approximation Algorithm for Tiny Tasks
We show that for given ε > 0 and β ≥ 0, there is a δ' > 0 such that if all tasks are δ'-small, then for each set T_j there is a (β, 1 + ε)-approximation algorithm. The key idea is to use linear programming techniques and a result by Chekuri et al. about the integrality gap of the canonical LP-relaxation of UFPP under the no-bottleneck assumption (NBA). UFPP with a task set T can be formulated in a straightforward way as an integer linear program:

maximize    Σ_{i ∈ T} w_i · x_i
subject to  Σ_{i ∈ T_e} d_i · x_i ≤ u_e   for each edge e ∈ E,
            x_i ∈ {0, 1}                  for each task i ∈ T.
The LP relaxation is obtained by replacing the constraint x_i ∈ {0, 1} by 0 ≤ x_i ≤ 1. Chekuri et al. gave an algorithm for UFPP instances which satisfy the NBA (i.e., max_{i∈T} d_i ≤ min_{e∈E} u_e) and in which all tasks are δ-small, which returns a UFPP solution that is at most a factor f(δ) worse than the optimum of the LP relaxation. Here f is a function whose limit is 1 as δ approaches zero. In other words: the integrality gap of the canonical LP-relaxation approaches 1 if the NBA holds and all tasks are sufficiently small.
This result can be used for the sets T_j: by definition, tasks in T_j use only edges e with u_e ≥ γ^j. Call these the relevant edges. It is therefore possible to choose the value δ' small enough to ensure that the NBA holds when considering only the relevant edges and the δ'-small tasks in T_j. Furthermore, modifying the capacities by the factor 1 − β decreases the capacities of the relevant edges by only a constant factor, and therefore the optimal value of the LP relaxation also becomes smaller by at most that factor. These are the key ideas used to prove the following lemma.
Lemma 3.6. For every combination of constants ε > 0, β ∈ [0, 1), and γ > 1, there exists a δ' > 0 such that if all tasks are δ'-small, then for each set T_j there is a (β, 1 + ε)-approximation algorithm.
In the remainder of this subsection we prove Lemma 3.6. Assume we are given constants ε > 0, β ∈ [0, 1), and γ > 1. For an instance I of UFPP, we denote by LP(I) the natural LP-relaxation of the IP-formulation given above, in which each constraint x_i ∈ {0, 1} is replaced by 0 ≤ x_i ≤ 1, and by opt_LP(I) the optimum value of this LP. The following result is proved by Chekuri et al., although an exact analysis of the running time is not given there; we observe that their algorithm admits a polynomial time implementation.
Lemma 3.7. Consider a UFPP instance I for which the NBA holds and in which all tasks are δ-small. Then, in polynomial time, a feasible UFPP solution S for I can be computed with p(S) ≥ opt_LP(I)/f(δ), where f is a function whose limit is 1 as δ approaches zero.
Proof: The algorithm of Chekuri et al. works as follows. The tasks are partitioned into groups depending on their demands. Within each group, the demands and the capacities are scaled such that a problem with uniform demands and uniform capacities is obtained. Since the demands and capacities are uniform, each such problem can be solved optimally using the algorithm of Arkin and Silverberg. Chekuri et al. then show that combining the solutions of the groups yields a feasible solution. Furthermore, they have shown in [20, Corollary 3.4] that the obtained solution is at most a factor f(δ) worse than the optimal LP solution.
Lemma 3.8. Let ε > 0, β ∈ [0, 1), γ > 1, and δ > 0 be constants such that f(δ) ≤ (1 + ε) · (1 − β) and δ is small enough for Lemma 3.7 to apply. If all tasks in T_j are δ-small, then in polynomial time a solution S_j for T_j can be computed that is (β, 1 + ε)-approximative.
Proof: Consider the UFPP instance I_j that is obtained by restricting the given instance to the tasks in T_j, and by only considering the edges used by these tasks. So for every task i of I_j, it holds that γ^j ≤ b(i) < γ^{j+1}, and for every edge e of I_j we have u_e ≥ γ^j.
Now construct the instance I'_j from I_j by modifying the capacities as follows: u'_e := ⌊(1 − β) · u_e⌋ for each edge e. The instance I'_j contains the same task set as I_j, without modifications. For a task i, we denote by b(i) and b'(i) its bottleneck capacity in I_j and I'_j, respectively. Hence b'(i) ≤ b(i).
We will first argue that the algorithm from Lemma 3.7 may be applied to I'_j. First, note that since in I_j each task is δ-small with respect to the original capacities, under the modified capacities each task is still δ''-small for a slightly larger constant δ'', which can be made arbitrarily close to δ by choosing β small; hence the smallness condition of Lemma 3.7 is satisfied. Next we argue that the NBA holds for the modified capacities: every edge of I'_j is used by a task of T_j and hence has capacity u'_e ≥ ⌊(1 − β) · γ^j⌋, while every task i of T_j is δ-small and thus has demand d_i ≤ δ · b(i) < δ · γ^{j+1}. For δ small enough (depending on β and γ), this gives d_i ≤ u'_e for all tasks i and edges e. This shows that the NBA also holds with respect to the modified capacities.
Hence we may apply the algorithm from Lemma 3.7 to obtain a feasible solution S_j for I'_j, with p(S_j) ≥ opt_LP(I'_j)/f(δ). Since u'_e ≤ (1 − β) · u_e for every edge e, the solution S_j automatically satisfies the modified capacity constraint. We argue that opt_LP(I'_j) is not much smaller than opt_LP(I_j): if we take a feasible solution to LP(I_j) and scale all variables by a factor of essentially 1 − β (adjusted slightly for the rounding of the capacities), we obtain a feasible solution to LP(I'_j) in which the objective value has also been scaled by the same factor. This gives that p(S_j) ≥ (1 − β) · opt_LP(I_j)/f(δ) ≥ (1 − β) · p(OPT(T_j))/f(δ), where OPT(T_j) is an optimal (integer) UFPP solution for the instance I_j.
The proof of Lemma 3.6 now easily follows: given ε, β, and γ, choose δ' small enough that the conditions of Lemma 3.8 hold, i.e., f(δ')/(1 − β) ≤ 1 + ε; then Lemma 3.8 yields the desired (β, 1 + ε)-approximation algorithm for each set T_j.
3.2.2 An Approximation Algorithm for Medium Tasks
It now remains to find a (β, 2)-approximation algorithm for the tasks in T_j that are both δ'-large and δ-small (for the δ' we obtained from Lemma 3.6). When restricting to sets T_j with only δ'-large tasks, the essential property is that for any edge e and any feasible solution S ⊆ T_j, only a constant number of tasks in S use e. This property allows a straightforward dynamic program to compute an optimal solution (see e.g. [14, 17]). This optimal solution can then be turned into a (β, 2)-approximative one: since the tasks are δ-small, it can be shown that in polynomial time any feasible solution can be partitioned into two sets which are both feasible for the modified capacities. Using these ideas, we will prove the following lemma in the remainder of this subsection.
Lemma 3.9. Let β > 0, γ > 1, δ > 0, and δ' > 0 be suitably chosen constants, and assume we are given an instance of UFPP in which all tasks are both δ-small and δ'-large. Then, for each set T_j there is a (β, 2)-approximation algorithm.
Suppose we are given a set T_j whose tasks are all δ'-large and δ-small.
Proposition 3.10. Let S ⊆ T_j be a feasible solution in which all tasks are δ'-large. Then for any edge e, at most 2γ/δ' tasks in S use e.
Proof: Let S ⊆ T_j be a feasible solution. For each task i ∈ S it holds that b(i) < γ^{j+1}. In addition, all tasks are δ'-large, so for every task i ∈ S it holds that d_i > δ' · b(i) ≥ δ' · γ^j. According to Proposition 3.3, for every edge e it holds that d(S ∩ T_e) ≤ 2γ^{j+1}. Therefore, at most 2γ^{j+1}/(δ' · γ^j) = 2γ/δ' tasks in S use e.
Proposition 3.11. For constants γ > 1 and δ' > 0, if all tasks in T_j are δ'-large, then an optimal solution for T_j can be computed in polynomial time.
Proof sketch: By Proposition 3.10, each edge can be used by at most c := 2γ/δ' tasks of any feasible solution, where c is a constant. Hence, for each edge e there are at most n^c relevant combinations of tasks which can use e in a feasible solution. We enumerate all these combinations for each edge e. For each of them, we establish a dynamic programming cell which stores the maximum profit one can obtain from the tasks lying entirely to the left of e, given the respective combination of tasks that use e. The correct values for these cells can be computed by iterating over the edges of the path from left to right. See [14, 17] for details.
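A direct, unoptimized version of such a dynamic program can be sketched as follows: the state is the set of selected tasks whose path crosses the current edge, so the running time is polynomial whenever only a bounded number of tasks can share an edge. The task representation (dicts with keys `s`, `t`, `d`, `w`) is assumed for illustration.

```python
from itertools import combinations

def ufpp_exact_dp(tasks, capacities):
    """Exact DP for UFPP when few tasks can share an edge (e.g. all tasks
    1/k-large, so a Proposition 3.10-style bound applies).  Maps each
    possible set of tasks active on the current edge to the best profit
    of tasks started so far."""
    states = {frozenset(): 0}
    for e, u_e in enumerate(capacities):
        starting = [i for i, t in enumerate(tasks) if t["s"] == e]
        new_states = {}
        for active, profit in states.items():
            # tasks whose path ends at vertex e no longer use edge e
            survive = frozenset(i for i in active if tasks[i]["t"] > e)
            # try every subset of the tasks starting at vertex e
            for r in range(len(starting) + 1):
                for new in combinations(starting, r):
                    cand = survive | frozenset(new)
                    if sum(tasks[i]["d"] for i in cand) <= u_e:
                        gain = profit + sum(tasks[i]["w"] for i in new)
                        if gain > new_states.get(cand, -1):
                            new_states[cand] = gain
        states = new_states
    return max(states.values()) if states else 0
```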
Lemma 3.12. Let δ be sufficiently small (relative to β and γ), and let S be a feasible solution for T_j in which all tasks are δ-small. Then in O(n^2) time, S can be partitioned into two sets S_1 and S_2 which are both feasible for the modified capacity constraints, i.e., each leaves β · u_e units of the capacity of every used edge e free.
Proof: Note that the claim is trivially true if S = ∅, by setting S_1 = S_2 = ∅, so now assume S ≠ ∅. We initialize two empty sets S_1 and S_2. Assume that the tasks in S = {i_1, ..., i_p} are ordered such that their start vertices are non-decreasing. We consider the tasks in S in this order. In the ℓ-th iteration we take the task i_ℓ and add it to a set S_j with j ∈ {1, 2} such that S_j ∪ {i_ℓ} still obeys the modified capacity constraint, i.e., it leaves a free capacity of β · u_e in each edge e.
It remains to show that indeed either or obeys the modified capacity constraint. Assume to the contrary that neither nor obey the modified capacity constraint. Then there are edges on the path of such that
Inequality (3.1) implies that . Inequality (3.2) gives that . Assume w.l.o.g. that or . Recall that we considered the tasks by non-decreasing start index. Hence, all tasks in use as well. For the next calculation we need that
and hence . Also, note that since . We calculate that
This is a contradiction. Hence, task can be added to one of the sets such that still obeys the capacity constraint. When computing and we need to check for each task in whether adding it to one of the sets violates the modified capacity constraint in one of the edges. Since w.l.o.g. , this check can be done in time for each task. There are tasks in total, and hence the entire procedure can be implemented in .
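The greedy partitioning procedure from the proof can be sketched in a few lines of Python. This is an illustrative sketch under our assumptions (task tuples `(s, t, d, p)` as before, and the modified constraint expressed as a `threshold` fraction of each capacity); the guarantee that the greedy never fails relies on the smallness and capacity-range conditions of the lemma, not on the code itself.

```python
def split_small_tasks(solution, tasks, capacities, threshold=0.75):
    """Partition a feasible set of small tasks into two classes, each
    respecting the reduced capacities threshold * u_e (Lemma 3.12 sketch).

    solution: indices into tasks; tasks[i] = (s, t, d, p), task i
    using edges s, ..., t-1.
    """
    load = [[0] * len(capacities) for _ in range(2)]  # per-class edge loads
    classes = ([], [])
    # scan the tasks by non-decreasing start vertex
    for i in sorted(solution, key=lambda i: tasks[i][0]):
        s, t, d, _ = tasks[i]
        for j in (0, 1):
            if all(load[j][e] + d <= threshold * capacities[e]
                   for e in range(s, t)):
                classes[j].append(i)
                for e in range(s, t):
                    load[j][e] += d
                break
        else:
            # cannot happen when the lemma's preconditions hold
            raise ValueError("preconditions of the split violated")
    return classes
```

Returning the more profitable of the two classes then loses at most a factor of 2 in profit, which is exactly how the partition is used in the proof of Lemma 3.9 below.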
Now we can prove Lemma 3.9.
Proof of Lemma 3.9: Since the tasks are $\delta'$-large, we can compute an optimal solution $S$ for $T_k$ in polynomial time (Proposition 3.11). Since the tasks are also $\delta$-small, in time $O(n^2)$, the solution $S$ can be partitioned into two solutions $S_1$ and $S_2$ that obey the modified capacity constraints (Lemma 3.12). Returning the solution of these two with maximum profit then yields a 2-approximation for $T_k$.
3.2.3 An Approximation Algorithm for $\delta$-Small Tasks
Let $\varepsilon$, $\gamma$, and $\delta$ be constants with $\varepsilon > 0$, $0 < \gamma \le 1/4$, and $0 < \delta \le 1/8$, and assume we are given an instance of UFPP in which all tasks are $\delta$-small. Then, for each set $T_k$ there is a $(3+\varepsilon)$-approximation algorithm whose output obeys the modified capacity constraints.
Proof: Given $\varepsilon$, $\gamma$, and $\delta$, Lemma 3.6 shows that there exists a $\delta' \le \delta$ such that for the $\delta'$-small tasks we have a $(1+\varepsilon)$-approximation algorithm (for each set $T_k$). The remaining tasks are both $\delta'$-large and $\delta$-small. If $\delta' = \delta$, we are done. Otherwise, Lemma 3.9 shows that we have a 2-approximation algorithm for these (for each set $T_k$). Together this gives a $(3+\varepsilon)$-approximation algorithm for each $T_k$ (observe that Fact 2.1 also applies to approximation algorithms that obey the modified capacity constraints).
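The combination step via Fact 2.1 is simple enough to state as code. In this hypothetical sketch (the function name and profit list are ours), given an $\alpha$-approximate solution for one task class and a $\beta$-approximate solution for the complementary class, returning the more profitable one is an $(\alpha+\beta)$-approximation for the whole set, since $\mathrm{OPT} \le \mathrm{OPT}_1 + \mathrm{OPT}_2 \le \alpha \cdot p(A) + \beta \cdot p(B) \le (\alpha+\beta) \cdot \max(p(A), p(B))$.

```python
def better_of_two(sol_a, sol_b, profits):
    """Fact 2.1 sketch: return the more profitable of two solutions
    computed on complementary task classes."""
    p = lambda sol: sum(profits[i] for i in sol)
    return sol_a if p(sol_a) >= p(sol_b) else sol_b
```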
Using the above lemma and our framework (Lemma 3.2), we obtain our main result for small tasks.
For any pair of constants $\varepsilon > 0$ and $\delta \in (0, 1/8]$, there is a polynomial time $(6+\varepsilon)$-approximation algorithm for UFPP instances in which all tasks are $\delta$-small.
Proof: Choose constants $\varepsilon'$, $\gamma$, and $\delta'$ such that $\delta \le \delta' \le 1/8$ (hence all tasks are $\delta'$-small), $0 < \gamma \le 1/4$, and $2(3+\varepsilon') \le 6+\varepsilon$, which is always possible. (For instance, choose $\varepsilon'$ small enough such that $2(3+\varepsilon') \le 6+\varepsilon$, choose $\delta' = 1/8$, and choose $\gamma \le 1/4$ such that there is an integer $k$ with $\gamma = 1/k$. We obtain an approximation factor of $2(3+\varepsilon')$, which is at most $6+\varepsilon$ if $\varepsilon'$ is sufficiently small.) Now, combining the framework using the chosen $\gamma$ and $\delta'$ (Lemma 3.2) with $(3+\varepsilon')$-approximation algorithms for every $T_k$ (Lemma 3.13) yields a $(6+\varepsilon)$-approximation algorithm.
3.3 Resource Augmentation
Using the techniques derived above, we now describe a polynomial time algorithm that computes a set of tasks $T' \subseteq T$ such that $(2+\varepsilon) \cdot p(T') \ge p(\mathrm{OPT})$, and $T'$ is feasible if the capacity of every edge is increased by a factor $1+\beta$, for arbitrary constants $\varepsilon > 0$ and $\beta > 0$. Note that for this result we do not require that the tasks are $\delta$-small.
The main idea is the following: from the results in Sections 3.2.1 and 3.2.2 we will conclude that there are $(1+\varepsilon)$- and 1-approximation algorithms for the tiny tasks and all remaining tasks, respectively. Combining these, we obtain a $(2+\varepsilon)$-approximation algorithm for each set $T_k$ (without any further conditions on its tasks) in Proposition 3.15. We apply our framework with the sets $S_k$ computed by this algorithm. In Lemma 3.16 we show that the union $\bigcup_k S_k$ is feasible when the capacities of the edges are increased by a factor $1+\beta$ (this lemma takes the role of Lemma 3.4 from the original framework).
The first step is to establish the approximation algorithms for the sets $T_k$.
Let $\varepsilon > 0$. There is a $(2+\varepsilon)$-approximation algorithm for each set $T_k$ which runs in polynomial time.
Proof: Using Lemma 3.6 with $\gamma = 0$ yields that for all $\varepsilon > 0$ there is a $\delta > 0$ such that there is a $(1+\varepsilon)$-approximation algorithm for each set $T_k$ which consists only of $\delta$-small tasks. Proposition 3.11 implies that for each fixed $\delta > 0$ there is a polynomial time 1-approximation algorithm (i.e., an optimal algorithm in the usual sense) for sets $T_k$ which consist only of $\delta$-large tasks. Using Fact 2.1 this yields a $(2+\varepsilon)$-approximation algorithm for each set $T_k$.
The next lemma is an adjusted version of Lemma 3.4. In contrast to the latter, here the increased capacity allows us to combine the computed sets $S_k$ to a globally feasible solution, without requiring that every solution should leave a fraction of the capacity free. Recall that $T_e$ denotes the set of tasks that use edge $e$.
Let $\beta > 0$, let the sets $T_k$ be defined as before, and let $u'_e := (1+\beta) u_e$ for each edge $e$. For each $k$, let $S_k \subseteq T_k$ be a feasible solution and define $S := \bigcup_k S_k$. Then for each edge $e$ it holds that