Matroid and Knapsack Center Problems^{1}

[^{1}: This work was supported in part by the National Basic Research Program of China Grants 2011CBA00300 and 2011CBA00301, and the National Natural Science Foundation of China Grants 61033001, 61061130540, 61073174, and 61202009. The research of D.Z. Chen was supported in part by NSF under Grants CCF-0916606 and CCF-1217906.]
Abstract
In the classic $k$-center problem, we are given a metric graph, and the objective is to select at most $k$ nodes as centers such that the maximum distance from any vertex to its closest center is minimized. In this paper, we consider two important generalizations of $k$-center, the matroid center problem and the knapsack center problem. Both problems are motivated by recent content distribution network applications. Our contributions can be summarized as follows:

We consider the matroid center problem, in which the centers are required to form an independent set of a given matroid. We show that this problem is NP-hard even on a line. We present a 3-approximation algorithm for the problem on general metrics. We also consider the outlier version of the problem, where a given number of vertices can be excluded as outliers from the solution. We present a 7-approximation for the outlier version.

We consider the (multi-)knapsack center problem, in which the centers are required to satisfy one (or more) knapsack constraint(s). It is known that the knapsack center problem with a single knapsack constraint admits a 3-approximation. However, when there are at least two knapsack constraints, we show that this problem is not approximable at all. To complement the hardness result, we present a polynomial time algorithm that gives a 3-approximate solution such that one knapsack constraint is satisfied and the others may be violated by at most a factor of $1+\epsilon$. We also obtain a 3-approximation for the outlier version that may violate the knapsack constraint by a factor of $1+\epsilon$.
1 Introduction
The $k$-center problem is a fundamental facility location problem. In the basic version, we are given a metric space $(V, d)$ and are asked to locate a set $S$ of at most $k$ vertices as centers and to assign the other vertices to the centers, so as to minimize the maximum distance from any vertex to its assigned center, or more formally, to minimize $\max_{v \in V} d(v, S)$, where $d(v, S) = \min_{u \in S} d(v, u)$. In the demand version of the $k$-center problem, each vertex $v$ has a positive demand $\mathrm{dem}(v)$, and our goal is to minimize the maximum weighted distance from any vertex to the centers, i.e., $\max_{v \in V} \mathrm{dem}(v) \cdot d(v, S)$. It is well known that the $k$-center problem is NP-hard and admits a polynomial time 2-approximation even for the demand version [14, 17], and that no polynomial time $(2-\epsilon)$-approximation algorithm exists unless P $=$ NP [14].
In this paper, we conduct a systematic study of two generalizations of the $k$-center problem and their variants. The first one is the matroid center problem, denoted by MatCenter, which is almost the same as the $k$-center problem except that, instead of the cardinality constraint on the set of centers, the centers are now required to form an independent set of a given matroid. A finite matroid is a pair $\mathcal{M} = (U, \mathcal{I})$, where $U$ is a finite set (called the ground set) and $\mathcal{I}$ is a collection of subsets of $U$. Each set in $\mathcal{I}$ is called an independent set. Moreover, $\mathcal{I}$ satisfies the following three properties: (1) $\emptyset \in \mathcal{I}$; (2) if $A \in \mathcal{I}$ and $B \subseteq A$, then $B \in \mathcal{I}$; (3) for all $A, B \in \mathcal{I}$ with $|A| < |B|$, there exists an element $e \in B \setminus A$ such that $A \cup \{e\} \in \mathcal{I}$. Following the conventions in the literature, we assume the matroid is given by an independence oracle which, given a subset $S \subseteq U$, decides whether $S \in \mathcal{I}$. For more information about the theory of matroids, see, e.g., [29].
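The independence-oracle convention above is easy to instantiate. The following sketch (our illustration, not part of the paper; the names `PartitionMatroid` and `is_independent` are ours) implements such an oracle for a partition matroid, the matroid family that appears in the applications discussed below:

```python
class PartitionMatroid:
    """Ground set split into disjoint parts; a set S is independent
    iff it contains at most k_i elements from part i."""

    def __init__(self, parts, limits):
        # parts: list of disjoint sets of elements; limits: list of ints k_i
        self.parts = [frozenset(p) for p in parts]
        self.limits = limits

    def is_independent(self, S):
        # the independence oracle: decide whether S is in the family I
        S = set(S)
        return all(len(S & p) <= k for p, k in zip(self.parts, self.limits))
```

One can check the three matroid axioms on small examples: the empty set is independent, subsets of independent sets are independent, and a smaller independent set can always be extended from a larger one.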
The second problem we study is the knapsack center problem (denoted as KnapCenter), another generalization of $k$-center in which the chosen centers are subject to one or more knapsack constraints. More formally, in KnapCenter, there are $m$ nonnegative weight functions $w_1, \ldots, w_m$ on $V$, and weight budgets $B_1, \ldots, B_m$. Let $w_i(S) = \sum_{v \in S} w_i(v)$ for all $S \subseteq V$ and all $i$. A solution takes a set $S$ of vertices as centers such that $w_i(S) \le B_i$ for all $i$. The objective is still to minimize the maximum service cost of any vertex in $V$ (the service cost of $v$ equals $d(v, S)$, or $\mathrm{dem}(v) \cdot d(v, S)$ in the demand version). In this paper, we are interested only in the case where the number $m$ of knapsack constraints is a constant. We note that the special case with only one knapsack constraint was studied in [18] under the name of weighted $k$-center, which already generalizes the basic $k$-center problem.
Both MatCenter and KnapCenter are motivated by important applications in content distribution networks [16, 22]. In a content distribution network, there are several types of servers and a set of clients to be connected to the servers. Often there is a budget constraint on the number of deployed servers of each type [16]. We would like to deploy a set of servers subject to these budget constraints in order to minimize the maximum service cost of any client. The budget constraints correspond to finding an independent set in a partition matroid.^{*} [^{*}: Let $U_1, \ldots, U_t$ be a collection of disjoint subsets of $U$ and $k_1, \ldots, k_t$ be integers such that $0 \le k_i \le |U_i|$ for all $i$. We say a set $S$ is independent if $|S \cap U_i| \le k_i$ for $i = 1, \ldots, t$. All such independent sets form a partition matroid.] We can also use a set of knapsack constraints to capture the budget constraints for all types (we need one knapsack constraint for each type). Motivated by such applications, Hajiaghayi et al. [16] first studied the red-blue median problem, in which there are two types (red and blue) of facilities, and the goal is to deploy at most $k_r$ red facilities and $k_b$ blue facilities so as to minimize the sum of service costs. Subsequently, Krishnaswamy et al. [22] introduced the more general matroid median problem, which seeks to select a set of facilities that is an independent set of a given matroid, and the knapsack median problem, in which the set of facilities must satisfy a knapsack constraint. The work mentioned above uses the sum of service costs as the objective (the median objective), while our work aims to minimize the maximum service cost (the center objective), which is another popular objective in the clustering and network design literature.
1.1 Our Results
For MatCenter, we show the problem is NP-hard to approximate within a factor of $2 - \epsilon$ for any constant $\epsilon > 0$, even on a line. Note that the $k$-center problem on a line can be solved exactly in polynomial time [5]. We present a 3-approximation algorithm for MatCenter on general metrics. This improves the constant factors implied by the approximation algorithms for matroid median [22, 3] (see Section 2.2 for details).
Next, we consider the outlier version of MatCenter, denoted as RobustMatCenter, where one can exclude at most $q$ nodes as outliers. We obtain a 7-approximation for RobustMatCenter. Our algorithm is a nontrivial generalization of the greedy algorithm of Charikar et al. [2], which only works for the outlier version of the basic $k$-center problem; their algorithm and analysis do not extend to our problem. In their analysis, once at least $n - q$ nodes are covered by the chosen disks (with radius 3 times the optimum), they have found a set of centers and obtained a 3-approximation. However, in our case, we may not be able to open enough centers in the covered region, due to the matroid constraint. Therefore, we need to search for centers globally. To this end, we carefully construct two matroids and argue that their intersection provides a desirable answer (the construction is similar to that for the non-outlier version, but more involved).
We next deal with the KnapCenter problem. We show that for any $\rho \ge 1$, the existence of a $\rho$-approximation algorithm for KnapCenter with more than one knapsack constraint implies P $=$ NP. This is a sharp contrast with the case of only one knapsack constraint, for which a 3-approximation exists [18] and is known to be optimal [7]. Given this strong inapproximability result, it is natural to ask whether efficient approximation algorithms exist if we are allowed to slightly violate the constraints. We answer this question affirmatively. We provide a polynomial time algorithm that, given an instance of KnapCenter with a constant number of knapsack constraints, finds a 3-approximate solution that is guaranteed to satisfy one constraint and violate each of the others by at most a factor of $1 + \epsilon$, for any fixed $\epsilon > 0$. This generalizes the result of [18] to the multi-constraint case. Our algorithm also works for the demand version of the problem.
We then consider the outlier version of the knapsack center problem, which we denote by RobustKnapCenter. We present a 3-approximation algorithm for RobustKnapCenter that violates the knapsack constraint by a factor of $1 + \epsilon$ for any fixed $\epsilon > 0$. Our algorithm can be regarded as a "weighted" version of the greedy algorithm of Charikar et al. [2], which only works for the unit-weight case. However, their charging argument does not apply to the weighted case. We instead adopt a more involved algebraic approach to prove the performance guarantee. We translate our algorithm into inequalities involving point sets, and then directly manipulate the inequalities to establish the desired approximation ratio. The total weight of our chosen centers may exceed the budget by the maximum weight of any client, which can be turned into a multiplicative $(1 + \epsilon)$ factor by the partial enumeration technique. We leave open the question of whether there is a constant factor approximation for RobustKnapCenter that satisfies the knapsack constraint exactly.
1.2 Related Work
For the basic $k$-center problem, Hochbaum and Shmoys [17, 18] and Gonzalez [14] developed 2-approximation algorithms, which are the best possible if P $\neq$ NP [14]. The former algorithms are based on the idea of the threshold method, which originates from [10]. On some special metrics, like the shortest path metrics of trees, $k$-center (with or without demands) can typically be solved in polynomial time by dynamic programming. By exploiting additional structures of the metrics, even linear or quasi-linear time algorithms can be obtained; see, e.g., [5, 8, 11] and the references therein. Several generalizations and variations of $k$-center have also been studied in a variety of application contexts; see, e.g., [1, 25, 20, 4, 9, 21].
A problem closely related to $k$-center is the well-known $k$-median problem, whose objective is to minimize the sum of service costs of all nodes instead of the maximum one. Hajiaghayi et al. [16] introduced the red-blue median problem that generalizes $k$-median, and presented a constant factor approximation based on local search. Krishnaswamy et al. [22] introduced the more general matroid median problem and presented a constant factor approximation algorithm based on LP rounding, whose ratio was improved by Charikar and Li [3] using a more careful rounding scheme. Another generalization of $k$-median is the knapsack median problem studied by Kumar [23], which requires opening a set of centers with a total weight no larger than a specified value. Kumar gave a (large) constant factor approximation for knapsack median, which was improved by Charikar and Li [3] to a 34-approximation. Several other classical problems have also been investigated recently under matroid or knapsack constraints, such as minimum spanning tree [32], maximum matching [15], and submodular maximization [24, 30].
For the $k$-center formulation, it is well known that a few distant vertices (outliers) can disproportionately affect the final solution. Such outliers may significantly increase the cost of the solution, without improving the level of service to the majority of clients. To deal with outliers, Charikar et al. [2] initiated the study of the robust versions of $k$-center and other related problems, in which a certain number of points can be excluded as outliers. They gave a 3-approximation for robust $k$-center, and showed that the problem with forbidden centers (i.e., some points cannot be centers) is inapproximable within $3 - \epsilon$ unless P $=$ NP. For robust $k$-median, they presented a bicriteria approximation algorithm that returns a constant-factor approximate solution in which the number of excluded outliers may violate the upper bound by a factor of $1 + \epsilon$. Later, Chen [6] gave a truly constant factor approximation (with a very large constant) for the robust $k$-median problem. McCutchen and Khuller [26] and Zarrabi-Zadeh and Mukhopadhyay [31] considered the robust $k$-center problem in a streaming context.
2 The Matroid Center Problem
In this section, we consider the matroid center problem and its outlier version. A useful ingredient of our algorithms is the (weighted) matroid intersection problem, defined as follows. We are given two matroids $\mathcal{M}_1 = (U, \mathcal{I}_1)$ and $\mathcal{M}_2 = (U, \mathcal{I}_2)$ defined on the same ground set $U$. Each element $e \in U$ has a weight $w(e)$. The goal is to find a common independent set of the two matroids, i.e., a set $S \in \mathcal{I}_1 \cap \mathcal{I}_2$, such that the total weight $\sum_{e \in S} w(e)$ is maximized. It is well known that this problem can be solved in polynomial time (e.g., see [29]).
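To make the matroid intersection primitive concrete, here is a brute-force sketch (ours, not from the paper; exponential time, for tiny illustrative instances only, whereas [29] describes polynomial-time algorithms) that finds a maximum-weight common independent set given two independence oracles:

```python
from itertools import combinations

def max_weight_common_independent(ground, w, indep1, indep2):
    """Exhaustive search over all subsets of the ground set.
    indep1/indep2 are independence oracles; w maps elements to weights."""
    best, best_w = set(), 0
    elems = list(ground)
    for r in range(len(elems) + 1):
        for combo in combinations(elems, r):
            S = set(combo)
            if indep1(S) and indep2(S):
                tw = sum(w[e] for e in S)
                if tw > best_w:
                    best, best_w = S, tw
    return best, best_w
```

For example, intersecting two partition matroids on the ground set `{'a', 'b', 'c'}` with weights 3, 2, 1 picks out the heaviest set that is independent in both.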
2.1 NP-Hardness of MatCenter on a Line
In contrast to the basic $k$-center problem on a line, which can be solved in near-linear time [5], we show that MatCenter is NP-hard even on a line. We actually prove the following stronger theorem.
Theorem 1.
It is NP-hard to approximate MatCenter on a line within a factor strictly better than 2, even when the given matroid is a partition matroid.
Proof.
In a partition matroid, each element in the ground set is colored with one of $t$ colors, and we are given integers $k_1, \ldots, k_t$. The collection of independent sets consists of all subsets that contain at most $k_1$ elements of color 1, at most $k_2$ elements of color 2, and so on.
We use the 3SAT problem for the reduction. Without loss of generality, we assume that each literal (including all variables $x_i$ and their negations $\bar{x}_i$) appears exactly four times in the 3-CNF formula. Given a 3-CNF formula, we create a MatCenter instance as follows. The points appear in groups. Each group consists of $m + 2$ points ($m \ge 1$), with $m$ points in the middle, one to the left, and one to the right. The left and right points are unit distance away from the midpoints. Different groups are very far away from each other. Therefore, in order to make the maximum radius at most one, we need to either select one of the midpoints of each group or select both of the two points not in the middle. For each variable $x_i$, we create a variable gadget as follows. The gadget consists of 6 groups, each having 3 points:
For two points $u$ and $v$, we write $u \sim v$ to indicate that we assign a new (fresh) color to $u$ and $v$. The color assignment for the gadget is defined by the following pairs:
We are allowed to choose at most one point as a center from each color class. Certain midpoints are designated as positive portals of $x_i$ and others as negative portals of $x_i$. See Figure 1 for an example. For each clause, we create a clause gadget, which is a group of points with 3 points in the middle (co-located at the same place), each corresponding to a literal in the clause. If a midpoint corresponds to a positive (resp., negative) literal, say $x_i$ (resp., $\bar{x}_i$), the point is paired with one of the positive (resp., negative) portals of $x_i$, and we assign the pair a new color. We also require that at most one point can be chosen as a center from this pair. Each portal can be paired at most once. Since each literal appears exactly four times, we have enough portals for the clause gadgets. All the left and right points of all clause gadgets share one common color whose budget is 0, i.e., none of them can be chosen as a center.
We can show that the optimal radius for the MatCenter instance is 1 if and only if the 3-CNF formula is satisfiable. First, suppose the formula is satisfiable. If $x_i$ is true in a satisfying assignment, then we pick the positive portals of $x_i$ as centers; otherwise, we pick the negative portals of $x_i$ as centers. It is straightforward to verify the independence property. For each group, at least one of the midpoints is selected. Thus, the optimal radius is 1. Given the correspondence, the reverse direction can be proved similarly and we omit it. ∎
2.2 A 3-Approximation for MatCenter
In fact, we can obtain a constant approximation for MatCenter by using the constant factor approximations for the matroid median problem [22, 3], which roughly gives a 9-approximation for MatCenter. The idea is given below.
We say a space with a distance function $d$ satisfies the $\lambda$-relaxed triangle inequality (TI) for some $\lambda \ge 1$ if $d(u, v) \le \lambda (d(u, w) + d(w, v))$ for all $u, v, w$. (Thus a metric space satisfies the relaxed TI for $\lambda = 1$.) By examining the algorithms in [22, 3] for the matroid median problem, we notice that they can actually give a $\gamma \lambda^{c}$-approximation for matroid median, where $\gamma$ is some universal constant and $c$ is an algorithm-dependent constant, if the underlying space satisfies the $\lambda$-relaxed TI.^{†} [^{†}: We note that Golovin et al. [13] claimed (without a proof) that, in our notation, most existing approximation algorithms for $k$-median achieve an $O(\lambda)$-approximation on spaces satisfying the $\lambda$-relaxed TI. By a scrutiny of the existing $k$-median algorithms, we are not able to reproduce the same result, and the correct approximation ratio should be roughly $O(\lambda^{c})$. However, the results of [13] are not affected in any essential way, since this only changes the constant hidden in the big-oh notation.] (Roughly speaking, $c$ is the maximum number of times that the triangle inequality is used for bounding the distance between a client and a facility.) Now, given an instance of MatCenter with metric $d$, we define a new distance function $d'$ as $d'(u, v) = d(u, v)^p$ for all $u, v$, where $p \ge 1$ is a parameter whose value will be specified later. By the convexity of the function $x^p$ when $p \ge 1$, for all $a, b \ge 0$ we have $(a + b)^p \le 2^{p-1}(a^p + b^p)$, and thus

$d'(u, v) = d(u, v)^p \le (d(u, w) + d(w, v))^p \le 2^{p-1} \left( d'(u, w) + d'(w, v) \right).$

Therefore, $d'$ satisfies the $\lambda$-relaxed TI with $\lambda = 2^{p-1}$. We now solve the matroid median problem on the instance with the new distance function $d'$. Let OPT denote the optimal objective value of MatCenter on the original instance. Then it is clear that the optimal cost of matroid median on the new instance is at most $n \cdot \mathrm{OPT}^p$. By our previous observation, the algorithms of [22, 3] give a solution of cost at most $\gamma \, 2^{(p-1)c} \cdot n \cdot \mathrm{OPT}^p$. Transforming the distance function back to $d$, the maximum service cost of any client is at most $(\gamma \, 2^{(p-1)c} \, n)^{1/p} \cdot \mathrm{OPT}$. By choosing $p$ sufficiently large, this can produce a $(2^c + \epsilon)$-approximation for MatCenter for any fixed $\epsilon > 0$. Using the algorithm of [3], this roughly gives a 9-approximation.
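As a quick sanity check of the transformation above (our illustration, not from the paper), the following snippet verifies numerically that $(a+b)^p \le 2^{p-1}(a^p + b^p)$ for $p \ge 1$, and that $d'(u, v) = d(u, v)^p$ satisfies the $2^{p-1}$-relaxed TI on a small example metric:

```python
import itertools
import random

def check(p, trials=1000):
    """Numerically verify the power inequality and the relaxed TI for d^p."""
    rng = random.Random(0)
    for _ in range(trials):
        a, b = rng.uniform(0, 10), rng.uniform(0, 10)
        # convexity of x^p gives ((a+b)/2)^p <= (a^p + b^p)/2
        assert (a + b) ** p <= 2 ** (p - 1) * (a ** p + b ** p) + 1e-9
    # a tiny metric: three points on a line at coordinates 0, 3, 7
    pts = [0.0, 3.0, 7.0]
    d = lambda u, v: abs(u - v)
    for u, v, w in itertools.permutations(pts, 3):
        lhs = d(u, v) ** p                      # d'(u, v)
        rhs = 2 ** (p - 1) * (d(u, w) ** p + d(w, v) ** p)
        assert lhs <= rhs + 1e-9                # 2^(p-1)-relaxed TI
    return True
```

The check passes for any real $p \ge 1$, matching the derivation above.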
We next present a 3-approximation for MatCenter, thus improving the ratio derived from the matroid median algorithms [22, 3]. Also, compared to their LP-based algorithms, ours is simpler, purely combinatorial, and easy to implement. We begin with the description of our algorithm. Regard the metric space as a (complete) graph $G$ on $V$ where each edge $(u, v)$ has length $d(u, v)$. For a subgraph $H$ of $G$ and a vertex $v$, let $N_H(v)$ be the set of vertices that are at most one hop away from $v$ in $H$, i.e., $v$ together with its neighbors (this set depends on the underlying graph). Let $e_1, e_2, \ldots$ be the edges of $G$ in a nondecreasing order of their lengths, and let $\ell_i$ denote the length of $e_i$. We consider each spanning subgraph of $G$ that contains only the first $i$ edges, i.e., $G_i = (V, E_i)$ where $E_i = \{e_1, \ldots, e_i\}$. We run Algorithm 1 on each $G_i$ and take the best solution.
Lemma 1.
For any two distinct vertices $u, v$ greedily picked by Algorithm 1 on $G_i$, the sets $N_{G_i}(u)$ and $N_{G_i}(v)$ are disjoint.
Proof.
Suppose we are working on $G_i$ and there is a node $w$ that is in both $N_{G_i}(u)$ and $N_{G_i}(v)$. Since every edge of $G_i$ has length at most $\ell_i$, we know $d(u, w) \le \ell_i$ and $d(v, w) \le \ell_i$. Thus, $d(u, v) \le 2\ell_i$. But this contradicts the fact that the distance between every two greedily picked nodes must be larger than $2\ell_i$. ∎
Theorem 2.
Algorithm 1 produces a 3-approximation for MatCenter.
Proof.
Suppose the maximum radius of any cluster in an optimal solution is $r^*$ and a set of optimal centers is $O$. Consider the algorithm on $G_i$ with $\ell_i = r^*$ (note that $r^*$ must be the length of some edge). Let $C$ be the set of vertices greedily picked by Algorithm 1 on $G_i$. First we claim that there exists a common independent set of the given matroid $\mathcal{M}$ and the partition matroid induced by the disjoint sets $\{N_{G_i}(v)\}_{v \in C}$ of size $|C|$. In fact, we show there is a subset of $O$ that is such a common independent set. For each node $v \in C$, let $o_v$ be an optimal center in $O$ that is at most $r^*$ away from $v$. Consider the set $O' = \{o_v : v \in C\}$. Since $O'$ is a subset of $O$, it is an independent set of $\mathcal{M}$ by the definition of matroid. It is also easy to see that $o_v \in N_{G_i}(v)$ for each $v \in C$. Therefore, $O'$ is also independent in the partition matroid, which proves our claim. Thus, the algorithm returns a set $S$ that contains exactly one element from each $N_{G_i}(v)$ with $v \in C$. According to the algorithm, for each $u \in V$ there exists $v \in C$ that is at most $2\ell_i$ away, and $v$ is within distance $\ell_i$ from the (unique) element of $S$ in $N_{G_i}(v)$. Thus every node of $V$ is within a distance $3\ell_i = 3r^*$ from some center in $S$. ∎
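The scheme just analyzed can be sketched end to end. The code below is our hedged reconstruction (all names are ours): it follows the threshold outline in the text, is specialized to a partition matroid, and substitutes brute-force search for the polynomial matroid-intersection step, so it is only meant for tiny instances. It tries each candidate radius, greedily picks centers at pairwise distance greater than twice the threshold, and then looks for an independent set with one point per neighborhood:

```python
from itertools import product

def matroid_center_3approx(points, d, parts, limits):
    """Return (centers, radius_bound); exponential because of the
    brute-force intersection step, but faithful to the 3-approx outline."""
    indep = lambda S: all(sum(1 for x in S if x in p) <= k
                          for p, k in zip(parts, limits))
    lengths = sorted({d(u, v) for u in points for v in points if u != v})
    for ell in lengths:                          # guess the optimal radius
        C = []                                   # pairwise distance > 2*ell
        for u in points:
            if all(d(u, c) > 2 * ell for c in C):
                C.append(u)
        # neighborhoods: points within ell of each picked center (disjoint)
        N = [[u for u in points if d(u, c) <= ell] for c in C]
        for choice in product(*N):               # one candidate per neighborhood
            if indep(set(choice)):
                return set(choice), 3 * ell      # every point within 3*ell
    return None
```

On four points on a line forming two far-apart clusters, with a partition matroid forcing one center per color class, the smallest feasible threshold already yields centers covering everything within three times the optimal radius.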
2.3 Dealing with Outliers: RobustMatCenter
We now consider the outlier version of MatCenter, denoted as RobustMatCenter, in which an additional parameter $q$ is given and the goal is to place centers (which must form an independent set of the given matroid) such that, after excluding at most $q$ nodes as outliers, the maximum service cost of any remaining node is minimized. For $q = 0$, we have the standard MatCenter problem. In this section, we present a 7-approximation for RobustMatCenter.
Our algorithm bears some similarity to the 3-approximation algorithm for robust $k$-center by Charikar et al. [2], who also showed that robust $k$-center with forbidden centers cannot be approximated within $3 - \epsilon$ unless P $=$ NP. However, their algorithm for robust $k$-center does not directly yield any approximation ratio for the forbidden center version. In fact, robust $k$-center with forbidden centers is a special case of RobustMatCenter, since forbidden centers can be easily captured by a partition matroid. We briefly describe the algorithm in [2]. Assume we have guessed the right optimal radius $r^*$. For each $v$, call $D(v) = \{u : d(u, v) \le r^*\}$ the disk of $v$ and $E(v) = \{u : d(u, v) \le 3r^*\}$ the expanded disk of $v$. Repeat the following step $k$ times: pick an uncovered vertex $v$ as a center such that its disk $D(v)$ covers the most uncovered nodes, then mark all nodes in the corresponding expanded disk $E(v)$ as covered. Using a clever charging argument, they showed that at least $n - q$ nodes can be covered, which gives a 3-approximation. However, their algorithm and analysis do not extend to our problem in a straightforward manner. The reason is that even if at least $n - q$ nodes are covered, we may not be able to find enough centers in the covered region due to the matroid constraint. In order to remedy this issue, we need to search for centers in the entire graph, which also necessitates a more careful charging argument to show that we can cover at least $n - q$ nodes.
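For reference, the greedy covering step of [2] just described is simple to state in code. The following is our sketch of that step for a guessed radius (not the authors' implementation); a guess is feasible when at most $q$ nodes remain uncovered:

```python
def robust_center_greedy(points, d, k, r):
    """Greedy step of Charikar et al.: pick k centers; each picked center's
    radius-r disk maximizes newly covered nodes, and its radius-3r expanded
    disk is then marked covered."""
    uncovered = set(points)
    centers = []
    for _ in range(k):
        if not uncovered:
            break
        # sorted() only makes tie-breaking deterministic
        best = max(sorted(uncovered),
                   key=lambda v: sum(1 for u in uncovered if d(u, v) <= r))
        centers.append(best)
        uncovered -= {u for u in uncovered if d(u, best) <= 3 * r}
    return centers, uncovered  # guess r is feasible iff len(uncovered) <= q
```

For instance, with points 0, 1, 2, 100, 500 on a line, $k = 2$, and guessed radius 1, the greedy picks 1 first (its disk covers three points) and then one of the two far points, leaving one node uncovered, which is acceptable for $q = 1$.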
Now we describe our algorithm and prove its performance guarantee. For each $i$, we run Algorithm 2 on the graph $G_i$ defined as before. We need the following simple lemma.
Lemma 2.
The pair $\mathcal{M}_2$ constructed in Algorithm 2 is a matroid.
Proof.
It is straightforward to verify that the first and second matroid properties hold. We only need to verify the third (exchange) property. Suppose $A$ and $B$ are two independent sets of $\mathcal{M}_2$ with $|A| < |B|$. We know the set $A'$ (resp., $B'$) of vertices that appear in the pairs of $A$ (resp., $B$) is an independent set of $\mathcal{M}$. Since $|A'| = |A|$ and $|B'| = |B|$, we have $|A'| < |B'|$. Hence, there is a vertex $u \in B' \setminus A'$ such that $A' \cup \{u\}$ is independent. We add to $A$ the pair in $B$ that involves $u$, and it is easy to see that the resulting set is also independent in $\mathcal{M}_2$. ∎
Theorem 3.
Algorithm 2 produces a 7-approximation for RobustMatCenter.
Proof.
Assume the maximum radius of any cluster in an optimal solution is $r^*$ and the set of optimal centers is $O$. For each $o \in O$, let $D(o)$ denote the optimal disk $\{u : d(u, o) \le r^*\}$. As before, we claim that our algorithm succeeds on the graph $G_i$ with $\ell_i = r^*$. It suffices to show the existence of a common independent set of $\mathcal{M}_1$ and $\mathcal{M}_2$ (the two matroids constructed in Algorithm 2) with weight at least $n - q$. We next construct such a common independent set $I$ from the optimal center set $O$. The high level idea is as follows. Let the disk centers chosen greedily be $v_1, v_2, \ldots$ (according to the order in which our algorithm chooses them). Note that these are the centers chosen by the greedy procedure in the first part of the algorithm, but not the centers returned at last. We process these centers one by one. Initially, $I$ is empty. As we process a new center $v_j$, we may add a pair involving the expanded disk $E(v_j)$ to $I$. Moreover, we charge each newly covered node in any optimal disk $D(o)$ to some nearby node in the expanded disk $E(v_j)$. (Note that this is the key difference between our charging argument and that of [2]; in [2], a node may be charged to some node far away.) We maintain the invariant that all nodes covered so far are charged after processing $v_j$. Thus, eventually, all nodes covered by the optimal solution (of which there are at least $n - q$) are charged to the expanded disks selected in $I$. We also make sure that each node in any expanded disk in $I$ is charged at most once. Therefore, the weight of $I$ is at least $n - q$.
Now, we present the details of the construction of $I$. If every node in $D(o)$ for some $o \in O$ is charged, we say $D(o)$ is entirely charged. Consider the step when we process $v_j$. We distinguish the following cases.

Suppose there is a node $o \in O$ such that $D(o)$ is not entirely charged and $D(o)$ intersects $D(v_j)$. Then add the pair $(o, v_j)$ to $I$ (if there are multiple such $o$'s, we only add one of them). We charge the newly covered nodes in $D(o)$ (i.e., the not-yet-charged nodes of $D(o)$) to themselves (we call this charging rule I). Note that $D(o)$ is entirely charged after this step, since $D(o) \subseteq E(v_j)$.

Suppose $D(v_j)$ does not intersect $D(o)$ for any such $o$, but there is some node $o \in O$ such that $D(o)$ is not entirely charged and $D(o) \cap E(v_j) \neq \emptyset$. Then we add the pair $(o, v_j)$ to $I$ and charge all newly covered nodes in $D(o)$ (i.e., the not-yet-charged nodes of $D(o)$) to the nodes in $D(v_j)$ (we call this charging rule II). Since $D(v_j)$ covers the most uncovered elements when $v_j$ is added, there are enough vertices in $D(v_j)$ to charge. Obviously, $D(o)$ is entirely charged after this step. If there is some other node $o' \in O$ such that $D(o')$ is not entirely charged and $D(o') \cap E(v_j) \neq \emptyset$, then we charge each newly covered node in $D(o')$ (i.e., the nodes in $D(o') \cap E(v_j)$) to itself using rule I.

If $E(v_j)$ does not intersect any optimal disk that is not entirely charged, then we simply skip this iteration and continue to the next center $v_{j+1}$.
It is easy to see that all covered nodes are charged in the process and each node is charged at most once. Indeed, consider a node $u \in D(v_j)$. If $D(v_j)$ intersects some optimal disk $D(o)$, then $u$ may be charged by rule I and, in this case, no further node can be charged to $u$ again. If $D(v_j)$ does not intersect any $D(o)$, then $u$ may be charged by rule II; this also happens at most once, and it is obvious that in this case no node can be charged to $u$ using rule I. For a node $u \in E(v_j) \setminus D(v_j)$, it can be charged at most once using rule I. Moreover, by the charging process, all nodes covered by the optimal solution are charged to the nodes in some expanded disks that appear in $I$. Therefore, the total weight of $I$ is at least $n - q$. We can see that each vertex appearing in the pairs of $I$ is also in $O$ and appears at most once; therefore, the set of these vertices is independent in $\mathcal{M}$, and hence $I$ is independent in $\mathcal{M}_2$. Clearly, each $v_j$ appears in $I$ at most once. Hence, $I$ is also independent in $\mathcal{M}_1$, which proves our claim.
Since the algorithm computes a maximum-weight common independent set $I^*$, we know the expanded disks in $I^*$ contain at least $n - q$ nodes. By the requirement of $\mathcal{M}_2$, we can guarantee that the set of centers (the vertices appearing in the pairs of $I^*$) forms an independent set in $\mathcal{M}$. For each pair $(u, v_j)$ in $I^*$, we can see that every node in $E(v_j)$ is within a distance $7r^*$ from $u$, as follows. We have $d(u, v_j) \le 4r^*$ (because $D(u) \cap E(v_j) \neq \emptyset$ for any pair in $I^*$). By the triangle inequality, every $z \in E(v_j)$ satisfies $d(z, u) \le d(z, v_j) + d(v_j, u) \le 3r^* + 4r^* = 7r^*$. This completes the proof of the theorem. ∎
3 The Knapsack Center Problem
In this section, we study the KnapCenter problem and its outlier version. Recall that an input of KnapCenter consists of a metric space $(V, d)$, $m$ nonnegative weight functions $w_1, \ldots, w_m$ on $V$, and budgets $B_1, \ldots, B_m$. The goal is to select a set $S$ of centers with $w_i(S) \le B_i$ for all $i$, so as to minimize the maximum service cost of any vertex in $V$. In the outlier version of KnapCenter, we are given an additional parameter $q$, and the objective is to minimize $\min_{V' \subseteq V, |V \setminus V'| \le q} \max_{v \in V'} d(v, S)$, i.e., the maximum service cost of any non-outlier node after excluding at most $q$ nodes as outliers.
3.1 Approximability of KnapCenter
When there is only one knapsack constraint (i.e., $m = 1$), the problem degenerates to the weighted $k$-center problem, for which a 3-approximation algorithm exists [18]. However, as we show in Theorem 4, the situation changes dramatically even if there are only two knapsack constraints.
Theorem 4.
For any $\rho \ge 1$, if there is a $\rho$-approximation algorithm for KnapCenter with two knapsack constraints, then P $=$ NP.
Proof.
To prove the theorem, we present a reduction from the partition problem, which is well known to be NP-hard [12], to the KnapCenter problem with two knapsack constraints. In the partition problem, we are given a multiset of positive integers $s_1, \ldots, s_n$, and the goal is to decide whether it can be partitioned into two subsets such that the sum of the numbers in one subset equals the sum of the numbers in the other subset.
Given an instance $\mathcal{P} = \{s_1, \ldots, s_n\}$ of the partition problem, we construct an instance $\mathcal{K}$ of the KnapCenter problem as follows. The set of clients is $V = \{a_1, b_1, \ldots, a_n, b_n\}$. The distance metric is defined as $d(a_i, b_i) = 0$ for all $i$, and $d(u, v) = 1$ for all other pairs $u \neq v$. It is easy to verify that $d$ is indeed a metric. Every client in $V$ has a unit demand. There are two weight functions $w_1$ and $w_2$ specified as follows: for each $i$, $w_1(a_i) = s_i$, $w_1(b_i) = 0$, $w_2(a_i) = 0$, and $w_2(b_i) = s_i$. The two corresponding weight budgets are $B_1 = B_2 = T/2$, where $T = \sum_{i=1}^{n} s_i$. This finishes the construction of $\mathcal{K}$.
We show that $\mathcal{P}$ can be partitioned into two subsets of equal sum if and only if $\mathcal{K}$ has a solution of cost 0. First consider the "if" direction. Assume that $\mathcal{K}$ admits a solution of cost 0. Clearly, for each $i$, the solution must take at least one of $a_i, b_i$ as a center, and we assume w.l.o.g. that it takes exactly one of $a_i$ and $b_i$ (just choose an arbitrary one if both are taken). Let $I$ be the set of indices for which $a_i$ is taken as a center in the solution. Then $\{1, \ldots, n\} \setminus I$ consists of all indices for which $b_i$ is taken by the solution. Considering the first weight constraint, we have $\sum_{i \in I} s_i \le T/2$. Similarly, by the second weight constraint, we get $\sum_{i \notin I} s_i \le T/2$. Since $\sum_{i \in I} s_i + \sum_{i \notin I} s_i = T$, it holds that $\sum_{i \in I} s_i = \sum_{i \notin I} s_i = T/2$. Therefore, $\mathcal{P}$ can be partitioned into two subsets of equal sum.
We next prove the "only if" part. Suppose there exists $I \subseteq \{1, \ldots, n\}$ such that $\sum_{i \in I} s_i = \sum_{i \notin I} s_i = T/2$. In the instance $\mathcal{K}$, we take $S = \{a_i : i \in I\} \cup \{b_i : i \notin I\}$ as the set of centers; since $S$ contains one of $a_i, b_i$ for every $i$, the cost of $S$ is 0. It only remains to show that $S$ satisfies both weight constraints, which is easy to verify: $w_1(S) = \sum_{i \in I} s_i = T/2 = B_1$, and $w_2(S) = \sum_{i \notin I} s_i = T/2 = B_2$. This proves the "only if" direction.
Since the optimal objective value of $\mathcal{K}$ is 0, any $\rho$-approximate solution is in fact an optimal one. Hence, if KnapCenter with two constraints and unit demands admits a $\rho$-approximation algorithm for some $\rho \ge 1$, then the partition problem can be solved in polynomial time, which implies P $=$ NP. The proof of Theorem 4 is thus complete. ∎
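The reduction is easy to simulate (our illustration; all names are ours). The snippet below builds the choice structure of the instance $\mathcal{K}$, with one co-located pair per integer and budgets $T/2$, and checks by brute force that a zero-cost solution exists exactly when the partition instance is solvable:

```python
from itertools import product

def knapcenter_instance_has_zero_cost(s):
    """Brute force over the 2^n ways to pick one center per pair (a_i, b_i);
    cost 0 forces one pick per pair, and the budgets T/2 must both hold."""
    T = sum(s)
    if T % 2:
        return False
    half = T // 2
    for choice in product('ab', repeat=len(s)):
        w1 = sum(x for x, c in zip(s, choice) if c == 'a')  # weight of a-picks
        w2 = sum(x for x, c in zip(s, choice) if c == 'b')  # weight of b-picks
        if w1 <= half and w2 <= half:
            return True
    return False

def partition_solvable(s):
    """Standard subset-sum DP: can s be split into two halves of equal sum?"""
    T = sum(s)
    if T % 2:
        return False
    reachable = {0}
    for x in s:
        reachable |= {r + x for r in reachable}
    return T // 2 in reachable
```

On small instances, the two predicates agree, mirroring the equivalence proved above.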
It is then natural to ask whether a constant factor approximation can be obtained if the constraints can be relaxed slightly. We show in Theorem 5 that this is achievable (even for the demand version). Before proving the theorem, we first present some high-level ideas of our algorithm, shown as Algorithm 3. The algorithm first guesses the optimal cost OPT, and then chooses a collection of disjoint disks, whose radii are determined by the guessed cost, according to some rules. It can be shown that there exists a set of centers consisting of exactly one point from each disk that gives a 3-approximate solution and satisfies all the knapsack constraints. We then reduce the remaining task to another problem called the group multi-knapsack problem, which will be formally defined in the following proof.
Theorem 5.
For any fixed $\epsilon > 0$, there is a 3-approximation algorithm for KnapCenter with a constant number of knapsack constraints, which is guaranteed to satisfy one constraint and violate each of the others by at most a factor of $1 + \epsilon$.
In what follows we prove Theorem 5, using Algorithm 3. The algorithm works for the more general demand version, where each vertex $v$ has a demand $\mathrm{dem}(v)$ and the service cost of $v$ is $\mathrm{dem}(v) \cdot d(v, S)$ when taking $S$ as the set of centers.
Given an instance of the KnapCenter problem, suppose Algorithm 3 correctly guesses the optimal objective value OPT. (This can be equivalently realized by running the algorithm for all possible values and taking the best solution among all the candidates.) The algorithm greedily finds a collection of mutually disjoint disks $D(v_1), \ldots, D(v_t)$, where $D(v) = \{u : \mathrm{dem}(v) \cdot d(v, u) \le \mathrm{OPT}\}$, and then constructs a set of centers by selecting exactly one point from each disk using an algorithm for the group multi-knapsack problem, which we will define later.
Call a set $S$ standard if $S$ consists of exactly one point from each of the disks $D(v_1), \ldots, D(v_t)$. We first show that there exists a standard set $S^*$ such that $w_i(S^*) \le B_i$ for all $i$, i.e., $S^*$ fulfills all the knapsack constraints. Suppose $O$ is the set of centers opened in some optimal solution. Then, for each $j$, there exists $o \in O$ such that $\mathrm{dem}(v_j) \cdot d(v_j, o) \le \mathrm{OPT}$, and thus $o \in D(v_j)$. Hence, we can choose from each $D(v_j)$ exactly one point that belongs to $O$, and these points are distinct because the disks are pairwise disjoint. Let $S^*$ denote the set of these points. Clearly, $S^*$ is standard and is a subset of $O$, and thus $w_i(S^*) \le w_i(O) \le B_i$ for all $i$. This proves the existence of a standard set that satisfies all the knapsack constraints.
We will reduce the remaining task to another problem, called the group multi-knapsack problem, which we define as follows. Suppose we are given a collection of pairwise disjoint sets $U_1, \ldots, U_t$. Let $U = U_1 \cup \cdots \cup U_t$. For some fixed integer $m$, there are $m$ nonnegative weight functions defined on the items of $U$, which we denote by $w_1, \ldots, w_m$, and weight limits $B_1, \ldots, B_m$. A solution is a subset $S \subseteq U$ that consists of exactly one element from each of the sets $U_1, \ldots, U_t$. The goal is to find a solution $S$ such that $w_i(S) \le B_i$ for all $i$, provided that such a solution exists. For our purposes, we require the number $m$ of constraints to be a constant. This problem is new to our knowledge, and may be useful in other applications. By Lemma 3 (which will be presented and proved later), we can find in polynomial time a solution that satisfies one constraint and violates each of the others by a factor of at most $1 + \epsilon$.
Now come back to the KnapCenter problem. By Lemma 3, line 6 of Algorithm 3 produces in polynomial time a standard set $S$ that satisfies one constraint and violates each of the others by a factor of at most $1 + \epsilon$. (We notice that, when running Algorithm 3 with an incorrect value of $\tau$, there may not exist any standard set, in which case the algorithm may return an empty set. We shall simply ignore such solutions.)
It now only remains to show that, by designating $S$ as the set of centers, the maximum service cost of any client is at most $3\tau$. Suppose $s_j$ is the point of $S$ chosen from $G_j$ for each $j$. It suffices to prove that, for each vertex $v$, there exists $s \in S$ such that $r_v \cdot d(v, s) \le 3\tau$. We consider two cases.

Case 1: $v = v_j$ for some $j$. Since $s_j \in G_j$, we have $r_v \cdot d(v, s_j) \le \tau$ by the definition of $G_j$.

Case 2: $v \ne v_j$ for all $j$. Then $d(v, v_j) \le 2\tau / r_v$ for some $j$, since otherwise $v$ would be added as a disk center by the algorithm. Let $A = \{j : d(v, v_j) \le 2\tau / r_v\}$. If $r_{v_j} < r_v$ for all $j \in A$, then the algorithm would choose $v$ before choosing all $v_j$ with $j \in A$, which contradicts the assumption that $v \ne v_j$ for all $j$. Thus, there exists $j \in A$ for which $r_{v_j} \ge r_v$. Consider this particular $j$, and the corresponding $s_j \in S \cap G_j$. We have
$$r_v \cdot d(v, s_j) \le r_v \big( d(v, v_j) + d(v_j, s_j) \big) \le r_v \Big( \frac{2\tau}{r_v} + \frac{\tau}{r_{v_j}} \Big) \le 2\tau + \tau = 3\tau.$$
Combining the two cases, we have shown that the service cost with centers in is at most three times the optimal cost, which completes the proof.
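The two-case argument never uses which point of each disk is chosen, so it can be checked exhaustively on small instances: for any guess $\tau$, every standard set should serve each vertex within cost $3\tau$. The sketch below (our notation and our reading of the greedy construction, not the paper's pseudocode) rebuilds the disks and verifies this for all standard sets.

```python
import itertools

def check_three_tau(points, r, d, tau):
    """Rebuild the greedy disks for guess tau, then verify that every
    standard set (one point per disk) serves each vertex within 3 * tau."""
    # greedy disk construction: largest demand first, ties by identifier
    uncovered = set(points)
    disks = []
    while uncovered:
        v = min(uncovered, key=lambda u: (-r[u], u))
        disks.append([u for u in points if d(u, v) <= tau / r[v]])
        uncovered = {u for u in uncovered if d(u, v) > 2 * tau / r[u]}
    for S in itertools.product(*disks):
        cost = max(min(r[v] * d(v, s) for s in S) for v in points)
        if cost > 3 * tau:
            return False
    return True
```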
Finally, we need the following Lemma 3, which is used in the above argument. The group multiknapsack problem is similar to the multiple knapsack problem (i.e., the knapsack problem with multiple resource constraints), and the standard technique for the latter can be easily adapted to solve the group multiknapsack problem (see, e.g., [28, 19]). Another way to deduce Lemma 3 is by applying the approximate Pareto curve method introduced by Papadimitriou and Yannakakis [27]. For the sake of completeness, we give a proof of Lemma 3 in Appendix A.
Lemma 3.
For any fixed $\epsilon > 0$, there is a polynomial time algorithm that, given an instance of group multiknapsack for which a solution satisfying all weight constraints exists, constructs a solution that satisfies one constraint and violates each of the others by at most a factor of $1 + \epsilon$.
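The standard scaling technique behind such a lemma can be sketched for two weight functions: round the items' weights under the relaxed function down to a grid of granularity $\epsilon W / m$, then run a dynamic program over the groups whose state is the rounded relaxed total and whose value minimizes the exact total. Everything below (names, the two-constraint restriction) is our illustrative sketch, not the proof in Appendix A; it extends to any constant number of relaxed constraints by taking a tuple of rounded totals as the state.

```python
import math

def group_knapsack_relaxed(groups, w_exact, W_exact, w_rel, W_rel, eps):
    """Pick one item per group so that the w_exact constraint is met
    exactly and the w_rel constraint is met up to a factor 1 + eps.
    Rounding each item's w_rel weight loses < 1 grid unit, so m items
    lose < eps * W_rel in total."""
    m = len(groups)
    K = m / (eps * W_rel)          # scaling factor for the relaxed weights
    cap = math.floor(m / eps)      # rounded totals above this are pruned
    # dp maps rounded relaxed total -> (min exact total, chosen items)
    dp = {0: (0.0, [])}
    for g in groups:
        ndp = {}
        for s, (cost, items) in dp.items():
            for x in g:
                s2 = s + math.floor(w_rel[x] * K)
                if s2 > cap:
                    continue
                c2 = cost + w_exact[x]
                if s2 not in ndp or c2 < ndp[s2][0]:
                    ndp[s2] = (c2, items + [x])
        dp = ndp
    feasible = [v for v in dp.values() if v[0] <= W_exact]
    return min(feasible, key=lambda v: v[0])[1] if feasible else None
```

If a solution satisfying both constraints exists, its rounded relaxed total is at most $\lfloor m/\epsilon \rfloor$, so it survives the pruning; conversely any surviving state has true relaxed weight below $(1+\epsilon) W_{\mathrm{rel}}$.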
3.2 Dealing with Outliers: RobustKnapCenter
We now study RobustKnapCenter, the outlier version of KnapCenter. Here we consider the case with one knapsack constraint (with weight function $w$ and budget $B$) and unit demands. Our main theorem is as follows.
Theorem 6.
There is a 3-approximation algorithm for RobustKnapCenter that violates the knapsack constraint by at most a factor of $1 + \epsilon$, for any fixed $\epsilon > 0$.
We present our algorithm for RobustKnapCenter as Algorithm 4. We assume the instance admits a feasible solution, since otherwise the problem is trivial. We also adopt the natural boundary conventions for degenerate indices, which keep line 5 of the algorithm well defined. Our algorithm can be regarded as a “weighted” version of that of Charikar et al. [2], but the analysis is much more involved. We next prove the following theorem, which can be used together with the partial enumeration technique to yield Theorem 6. Note that, if all clients have unit weight, Theorem 7 will guarantee a 3-approximate solution with $w(S) < B + 1$, which implies $w(S) \le B$. So it actually gives a 3-approximation without violating the constraint. Thus, our result generalizes that of Charikar et al. [2].
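The greedy skeleton in the style of Charikar et al. can be sketched as follows. The selection rule and loop condition here (pick the center whose radius-$\tau$ disk contains the most uncovered points, mark its radius-$3\tau$ expanded disk covered, stop once the weight budget is reached or everything is covered) are our reading of the analysis below, not a verbatim transcription of Algorithm 4.

```python
def robust_greedy(points, w, d, tau, B):
    """Weighted Charikar-style greedy sketch: while the chosen centers
    weigh less than B and uncovered points remain, add the center whose
    radius-tau disk covers the most uncovered points, then mark every
    point within its radius-3*tau expanded disk as covered."""
    uncovered = set(points)
    S = []
    while uncovered and sum(w[c] for c in S) < B:
        # most newly covered points first; ties broken by identifier
        v = min(points,
                key=lambda c: (-sum(1 for u in uncovered
                                    if d(u, c) <= tau), c))
        S.append(v)
        uncovered -= {u for u in uncovered if d(u, v) <= 3 * tau}
    return S
```

Note that the loop only tests the weight *before* adding the last center, which is exactly why the returned set can exceed the budget by at most the weight of a single vertex.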
Theorem 7.
Given an input of the RobustKnapCenter problem in which at least $p$ vertices must be served within the optimal radius $\tau$, Algorithm 4 returns a set $S$ with $w(S) < B + \max_{v} w(v)$ such that at least $p$ vertices are within distance $3\tau$ of $S$.
Proof.
We call $E_v = \{u : d(u, v) \le \tau\}$ the disk of $v$ and $\bar{E}_v = \{u : d(u, v) \le 3\tau\}$ the expanded disk of $v$. Assume w.l.o.g. that the algorithm returns $S = \{c_1, \dots, c_m\}$, and that the centers are chosen in the order $c_1, c_2, \dots, c_m$. We first observe that $E_{c_1}, \dots, E_{c_m}$ are pairwise disjoint, which can be seen as follows. By standard use of the triangle inequality, we have $E_u \subseteq \bar{E}_v$ and $E_v \subseteq \bar{E}_u$ for any $u, v$ such that $d(u, v) \le 2\tau$. Therefore, if there exist $i < j$ such that $E_{c_i} \cap E_{c_j} \ne \emptyset$, then all points in $E_{c_j}$ are marked “covered” when choosing $c_i$, and hence choosing $c_j$ cannot cover any more point, contradicting the way in which the centers are chosen (note that the algorithm terminates when all points have been covered). So the disks are pairwise disjoint.
For ease of notation, let $A_j = E_{c_j}$ and $\bar{A}_j = \bar{E}_{c_j}$ for $1 \le j \le m$. By the condition of the WHILE loop, $w(\{c_1, \dots, c_{m-1}\}) < B$, and thus $w(S) < B + \max_v w(v)$. It remains to prove that at least $p$ points are within distance $3\tau$ of $S$. Note that this clearly holds if the expanded disks together cover at least $p$ points. Thus, it suffices to show that $|\bigcup_{j=1}^{m} \bar{A}_j| \ge p$. If $w(S) < B$, then all points are covered by $\bigcup_{j=1}^{m} \bar{A}_j$ due to the termination condition of the WHILE loop, and thus $|\bigcup_{j=1}^{m} \bar{A}_j| = n \ge p$. In the rest of the proof, we deal with the case $w(S) \ge B$.
For each vertex $v$, let $\sigma(v)$ be the minimum $j$ such that $E_v \cap A_j \ne \emptyset$; let $\sigma(v) = \infty$ if no such $j$ exists (i.e., if the disk of $v$ is disjoint from all disks centered at points of $S$). Suppose $O = \{o_1, \dots, o_l\}$ is an optimal solution, in which the centers are ordered such that $\sigma(o_1) \le \sigma(o_2) \le \cdots \le \sigma(o_l)$. Since the optimal solution is also feasible, we have $|\bigcup_{i=1}^{l} E_{o_i}| \ge p$. Hence, to prove $|\bigcup_{j=1}^{m} \bar{A}_j| \ge p$, we only need to show $|\bigcup_{j=1}^{m} \bar{A}_j| \ge |\bigcup_{i=1}^{l} E_{o_i}|$. For any sets $X$ and $Y$, we have $|X \cup Y| \le |X| + |Y \setminus X|$. Therefore,
$$\Big|\bigcup_{i=1}^{l} E_{o_i}\Big| \le \Big|\bigcup_{j=1}^{m} \bar{A}_j\Big| + \Big|\bigcup_{i=1}^{l} E_{o_i} \setminus \bigcup_{j=1}^{m} \bar{A}_j\Big|. \qquad (1)$$
As $A_1, \dots, A_m$ are pairwise disjoint,
and