How to Solve Fair k-Center in Massive Data Models

Abstract

Fueled by massive data, important decision making is being automated with the help of algorithms; therefore, fairness in algorithms has become an especially important research topic. In this work, we design new streaming and distributed algorithms for the fair k-center problem that models fair data summarization. The streaming and distributed models of computation have the attractive feature of being able to handle massive data sets that do not fit into main memory. Our main contributions are (a) the first distributed algorithm, which has a provably constant approximation ratio and is highly parallelizable, and (b) a two-pass streaming algorithm with a provable approximation guarantee matching that of the best known algorithm (which is not a streaming algorithm). Our algorithms have the advantages of being easy to implement in practice, being fast with linear running times, having very small working memory and communication, and outperforming existing algorithms on several real and synthetic data sets. To complement our distributed algorithm, we also give a hardness result for natural distributed algorithms, which holds even for the special case of k-center.

1 Introduction

Data summarization is a central problem in the area of machine learning, where we want to compute a small summary of the data. For example, if the input data is enormous, we do not want to run our machine learning algorithm on the whole input but on a small representative subset. How we select such a representative summary is quite important. It is well known that if the input is biased, then machine learning algorithms trained on this data will exhibit the same bias. This is a classic example of selection bias, but as exhibited by algorithms themselves. Currently used algorithms for data summarization have been shown to be biased with respect to attributes such as gender, race, and age (see, e.g., [KMM15]), and this motivates the fair data summarization problem. Recently, the fair k-center problem was shown to be useful in computing a fair summary [KAM19]. In this paper, we continue the study of fair k-center and add to the series of works on fairness in machine learning algorithms. Our main results are streaming and distributed algorithms for fair k-center. These models are extremely suitable for handling massive datasets. The fact that the data summarization problem arises precisely when the input is huge makes our work all the more relevant!

Suppose the input is a set of real vectors with a gender attribute and you want to compute a summary of k data points such that both genders are represented equally. Say we are given a summary S. The cost we pay for not including a point in S is its Euclidean distance from S, i.e., its distance to the nearest point of S. Then the cost of S is the largest cost of a point. We want to compute a summary S of size k with minimum cost that is also fair, i.e., contains k/2 women and k/2 men. In one sentence, we want to compute a fair summary such that the point that is farthest from this summary is not too far. Fair k-center models this task: let the number of points in the input set X be n and the number of groups be m, and let the target summary size be k; we want to select a summary S such that S contains k_j points belonging to group j, where k_1 + ... + k_m = k. And we want to minimize max_{x in X} min_{s in S} d(x, s), where d denotes the distance function. Note that each point belongs to exactly one of the groups; for the case of gender, m = 2.

We call the special case with a single group (m = 1 and k_1 = k) just k-center throughout this paper. For k-center, there are simple greedy algorithms with an approximation ratio of 2 [Gon85, HS85], and getting better than a 2-approximation is NP-hard [HN79]. The NP-hardness result also applies to the more general fair k-center. The best algorithm known for fair k-center is a polynomial-time 3-approximation algorithm [CLLW16]. A linear-time algorithm with an approximation guarantee that depends only on the number of groups m, and is therefore constant if m is, was given recently [KAM19]. Both of these algorithms work only in the traditional random-access-machine model, which is suitable only if the input is small enough to fit into fast memory. We give a two-pass streaming algorithm that achieves an approximation ratio arbitrarily close to 3. In the streaming setting, the input is thought to arrive one point at a time, and the algorithm has to process the input quickly, using a minimum amount of working memory—ideally linear in the size of a feasible solution, which is k for fair k-center. Our algorithm processes each incoming input point in O(k) time and uses enough space to store O(km) points, which is O(k) if the number of groups is very small. This improves the space usage of the existing streaming algorithm [Kal19] almost quadratically, while also matching the best approximation ratio, achieved by Chen et al. We also give the first distributed constant-factor approximation algorithm, where the input is divided among multiple processors, each of which performs one round of computation and sends a message of O(km) points to a central processor, which then computes the final solution. Both rounds of computation take linear time. All the approximation, communication, space-usage, and running-time guarantees are provable. To complement our distributed algorithm, we prove that any distributed algorithm, even a randomized one, that works by each processor sending a subset of its input to a central processor which outputs the solution, needs to essentially communicate the whole input to achieve an approximation ratio better than 4. This, in fact, applies to the special case of k-center, showing that the known 4-approximation algorithm [MKC15] for distributed k-center is optimal in this setting.

We perform experiments on real and synthetic datasets and show that our algorithms are as fast as the linear-time algorithm of Kleindessner et al., while achieving an improved approximation ratio that matches that of Chen et al. Note that this comparison is possible only for small datasets, since those algorithms work neither in the streaming nor in the distributed setting. We also run our algorithms on a large synthetic dataset of size 100 GB, and show that their running time is only one order of magnitude more than the time taken to just read the input dataset from secondary memory.

As a further contribution, we give faster implementations of existing algorithms—those of Kale and Chen et al.

Related work

Chen et al. gave the first polynomial-time algorithm for fair k-center, which achieves a 3-approximation. Kale achieves almost the same ratio using just two passes and also gives a one-pass constant-factor approximation algorithm, both using small space.

One way to compute a fair summary, incomparable to ours, is to use a determinantal measure of diversity [CKS18]. Fair clustering has been studied under another notion of fairness, where each cluster must be balanced with respect to all the groups (no over- or under-representation of any group) [CKLV17], and this line of work has also received a lot of attention in a short span of time [BCFN19, AEKM19, BIPV19, SSS20, JSS20].

The k-median clustering problem with fairness constraints was first considered by [HKK10], and the version with more general matroid constraints was studied by [KKN11]. The works of Chen et al. and Kale actually apply to matroid constraints as well.

There has been a lot of work done on fairness, and we refer the reader to overviews by [KAM19, CKS18].

2 Preliminaries

The input to fair k-center is a set X of n points in a metric space given by a distance function d. We denote this metric space by (X, d). Each point belongs to one of m groups, say {1, ..., m}. Let g denote this group assignment function. Further, for each group j, we are given a capacity k_j. Let k = k_1 + ... + k_m. We call a subset S of X feasible if for every j, the set S contains at most k_j points from group j. The goal is to compute a feasible set of centers that (approximately) minimizes the clustering cost, formally defined as follows.

Definition 1.

Let Y be a set of points in the metric space and let S be any set of points in the same metric space; then the clustering cost of S for Y is defined as max_{y in Y} min_{s in S} d(y, s).
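As a small illustration, here is a minimal Python sketch of this definition (the function name and the use of a plain callable d for the metric are our own choices, not the paper's):

def clustering_cost(S, Y, d):
    # Clustering cost of the center set S for the point set Y: the largest
    # distance from a point of Y to its nearest center in S.
    return max(min(d(y, s) for s in S) for y in Y)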

Note here that we allow S to not be a subset of Y. The following lemmas follow easily from the fact that the distance function satisfies the triangle inequality.

Lemma 1.

Let A, B, and C be sets of points in the metric space. The clustering cost of C for A is at most the clustering cost of C for B plus the clustering cost of B for A.

Lemma 2.

Suppose that for a set X of points there exists a set of k centers, not necessarily a subset of X, whose clustering cost for X is at most r. If C is a subset of X whose points are separated pairwise by distance more than 2r, then |C| ≤ k.

Proof.

If |C| > k, then two points in C must share one of the k centers, and must therefore both be within distance r from that common center. Then, by the triangle inequality, they cannot be separated by distance more than 2r, a contradiction. ∎

We denote by O a feasible set that has the minimum clustering cost for X, and by OPT that minimum clustering cost. We assume that our algorithms have access to an estimate τ of OPT. When τ is at least OPT, our algorithms compute a solution of cost at most cτ for a constant c. Thus, when τ ≤ (1+ε)·OPT, our algorithms compute a c(1+ε)-approximate solution. In Section 3.3 we describe how to efficiently compute such a τ.

3 Algorithms

Before stating the algorithms, we describe some elementary procedures that will be used as subroutines in our algorithms.

getPivots takes as input a set X of points with distance function d and a radius r. Starting with P containing an arbitrary first point of X, it performs a single pass over X. Whenever it finds a point x that is not within distance r from any point in P, it adds x to P. Finally, it returns P. Thus, P is a maximal subset of X whose points are separated pairwise by distance more than r. We call the points in P pivots. By Lemma 2, if there is a set of k points whose clustering cost for X is at most r/2, then |P| ≤ k. Moreover, due to the maximality of P, its clustering cost for X is at most r. Note that getPivots runs in time O(|X| · |P|).
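A minimal Python sketch of getPivots as described above, assuming the point set is an iterable and the metric is a callable (names are ours; starting from an empty pivot set is equivalent, since the first point is always added):

def get_pivots(X, d, r):
    # Build, in a single pass over X, a maximal subset of X whose points are
    # pairwise separated by distance greater than r.
    pivots = []
    for x in X:
        if all(d(x, p) > r for p in pivots):
            pivots.append(x)
    return pivots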

getReps takes as input a set X of points with distance function d, a group assignment function g, a subset P of X, and a radius r. For each p in P, initializing R_p = {p}, it includes in R_p one point from each group that is within distance r from p, whenever such a point exists. Note that this is done while performing a single pass over X. This procedure runs in time O(|X| · |P|).
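A matching Python sketch of getReps under the same assumptions (points are assumed hashable, e.g., tuples; names are ours):

def get_reps(X, d, g, pivots, r):
    # reps[p] holds the pivot p plus at most one point from each group found
    # within distance r of p, collected in one pass over X.
    reps = {p: {g(p): p} for p in pivots}
    for x in X:
        for p in pivots:
            if d(x, p) <= r and g(x) not in reps[p]:
                reps[p][g(x)] = x
    return {p: set(group_map.values()) for p, group_map in reps.items()}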

Informally, if P is a good but infeasible set of centers, then getReps finds representatives of the groups in the vicinity of each pivot. This, while increasing the clustering cost by at most the radius r, gives us enough flexibility to construct a feasible set of centers. The procedure HittingSet that we describe next finds a feasible set from a collection of sets of representatives.

HittingSet takes as input a collection of pairwise disjoint sets of points, a group assignment function g, and a vector (k_1, ..., k_m) of capacities of the groups. It returns a feasible set intersecting as many of the given sets as possible. This reduces to finding a maximum-cardinality matching in an appropriately constructed bipartite graph. It is important to note that this procedure does only the post-processing: it does not make any pass over the input stream of points, and its running time depends only on the size of the collection and not on n.
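The reduction to bipartite matching can be sketched as follows, here using NetworkX's bipartite matching routine for brevity; the paper's implementation further reduces matching to max-flow (Section 5), and the encoding and names below are our own illustration:

import networkx as nx

def hitting_set(collections, g, capacities):
    # collections: list of pairwise disjoint sets of points.
    # capacities: dict mapping each group to its capacity k_j.
    # Build a bipartite graph between the sets and "slots" of the groups
    # (group j contributes capacities[j] slots); an edge means the set
    # contains a point of that group. A maximum matching then yields a
    # feasible set hitting as many of the given sets as possible.
    G = nx.Graph()
    left = [("set", i) for i in range(len(collections))]
    right = [("slot", j, t) for j, cap in capacities.items() for t in range(cap)]
    G.add_nodes_from(left, bipartite=0)
    G.add_nodes_from(right, bipartite=1)
    for i, R in enumerate(collections):
        groups_in_R = {g(x) for x in R}
        for slot in right:
            if slot[1] in groups_in_R:
                G.add_edge(("set", i), slot)
    matching = nx.algorithms.bipartite.maximum_matching(G, top_nodes=left)
    solution = []
    for i, R in enumerate(collections):
        mate = matching.get(("set", i))
        if mate is not None:
            j = mate[1]  # group of the matched slot
            solution.append(next(x for x in R if g(x) == j))
    return solution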

For interested readers, the pseudocodes of these procedures, an explanation of HittingSet, and the proof of its running time appear in Appendix A.

3.1 A Two-Pass Algorithm

Input: Metric space (X, d), group assignment function g, capacity vector (k_1, ..., k_m), guess τ.
/* Pass 1: Compute pivots. */
P ← getPivots(X, d, 2τ).
/* Pass 2: Compute representatives. */
{R_p : p ∈ P} ← getReps(X, d, g, P, τ).
/* Compute solution. */
S ← HittingSet({R_p : p ∈ P}, g, (k_1, ..., k_m)).
Output S.
Algorithm 1 Two-pass algorithm

Recall that τ is an upper bound on the minimum clustering cost. Our two-pass algorithm, given by Algorithm 1, consists of three steps. First, the algorithm constructs a maximal subset P of pivots separated pairwise by distance more than 2τ by executing one pass over the stream of points. In another pass, the algorithm computes a representative set R_p for each pivot p. Points in the representative set of a pivot are within distance τ from the pivot. Due to the separation of more than 2τ between the pivots, these representative sets are pairwise disjoint. Finally, a feasible set S intersecting as many R_p's as possible is found and returned. (It will soon be clear that S intersects all the R_p's.)
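Putting the subroutines sketched above together, the two passes can be written compactly as follows (a sketch under our reconstruction of the radii; stream_pass1 and stream_pass2 are two iterations over the same input stream, and capacities is a dict mapping groups to their capacities):

def two_pass_fair_center(stream_pass1, stream_pass2, d, g, capacities, tau):
    # Pass 1: maximal set of pivots pairwise separated by more than 2*tau.
    P = get_pivots(stream_pass1, d, 2 * tau)
    # Pass 2: one representative per group within distance tau of each pivot.
    reps = get_reps(stream_pass2, d, g, P, tau)
    # Post-processing: feasible set hitting as many representative sets as possible.
    return hitting_set([reps[p] for p in P], g, capacities)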

The algorithm needs working space only to store the pivots and their representative sets. Substituting the optimal set O (at most k centers with clustering cost at most OPT ≤ τ) into Lemma 2, the number of pivots is at most k, that is, |P| ≤ k. Since R_p contains at most one point from each group, it has at most m − 1 points other than p. Thus,

Observation 1.

The two-pass algorithm needs just enough working space to store O(km) points.

The calls to getPivots and getReps both take time O(nk), with O(k) update time per point. The call to HittingSet takes time that depends only on k and m. Thus,

Observation 2.

The two-pass algorithm runs in time O(nk) plus the post-processing time of HittingSet, which is O(nk) overall when m, the number of groups, is constant.

We now prove the approximation guarantee.

Theorem 1.

The two-pass algorithm returns a feasible set whose clustering cost is at most 3τ. This is a 3(1+ε)-approximation when τ ≤ (1+ε)·OPT.

Proof.

Recall that O is a feasible set having clustering cost at most OPT ≤ τ for X. For each p ∈ P, let o_p ∈ O denote a point such that d(p, o_p) ≤ τ. Since the points in P are separated by distance more than 2τ, the points o_p are all distinct. Recall that R_p, the output of getReps, contains one point from every group that has a point within distance τ from p. Therefore, R_p contains a point, say r_p, from the same group as o_p such that d(p, r_p) ≤ τ. Consider the set S' = {r_p : p ∈ P}. This set intersects R_p for each p ∈ P. Furthermore, S' contains at most as many points from any group as O, and therefore S' is feasible. Thus, there exists a feasible set, namely S', intersecting all the pairwise disjoint R_p's. Recall that S, the output of HittingSet, is a feasible set intersecting as many R_p's as possible. Thus, S also intersects all the R_p's.

Now, the clustering cost of S for P is at most τ, because S intersects R_p for each p ∈ P and every point of R_p is within distance τ from p. The clustering cost of P for X is at most 2τ by the maximality of the set P returned by getPivots. These facts and Lemma 1 together imply that the clustering cost of S, the output of the algorithm, for X is at most 3τ. ∎

3.2 A Distributed Algorithm

In the distributed model of computation, the set X of points to be clustered is distributed equally among ℓ processors. Each processor is allowed only restricted access to the metric d: it may compute distances only between its own points. Each processor performs some computation on its set of points and sends a summary of small size to a coordinator. From the summaries, the coordinator then computes a feasible set of points that covers all the points in X within a small radius. Let X_i denote the set of points distributed to processor i.

Input: Set X_i, metric d restricted to X_i, group assignment function g restricted to X_i.
/* Compute local pivots. */
p_1 ← an arbitrary point in X_i.
for j ← 2 to k+1 do
     p_j ← a point in X_i farthest from {p_1, ..., p_{j−1}}.
P_i ← {p_1, ..., p_k}.
τ_i ← d(p_{k+1}, P_i).
/* Compute local representative sets. */
{R^i_p : p ∈ P_i} ← getReps(X_i, d, g, P_i, τ_i).
Q_i ← the union of the sets R^i_p over p ∈ P_i.
/* Send message to coordinator. */
Send (Q_i, τ_i) to the coordinator.
Algorithm 2 Summary computation by the i'th processor

The algorithm executed by each processor i is given by Algorithm 2, which consists of two main steps. In the first step, the processor uses Gonzalez's farthest-point heuristic to find k+1 points. The first k of those constitute the set P_i, which we call the set of local pivots. The point p_{k+1} is the farthest point from the set of local pivots, and it is at distance τ_i from the set of local pivots. Thus, every point in X_i is within distance τ_i from the set of local pivots. This means,
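A short Python sketch of this farthest-point step on one processor's block (names are ours; the block Xi is assumed to fit in the processor's memory and to contain more than k points):

def local_pivots(Xi, d, k):
    # Gonzalez's farthest-point heuristic: greedily pick k+1 points; the first
    # k are the local pivots P_i, and tau_i is the distance of the (k+1)-st
    # picked point from those pivots.
    chosen = [Xi[0]]
    dist = [d(x, Xi[0]) for x in Xi]   # distance of each point to the chosen set
    tau_i = 0.0
    for _ in range(k):
        far = max(range(len(Xi)), key=lambda idx: dist[idx])
        tau_i = dist[far]              # distance of the newly picked point
        chosen.append(Xi[far])
        dist = [min(dv, d(x, Xi[far])) for dv, x in zip(dist, Xi)]
    return chosen[:k], tau_i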

Observation 3.

The clustering cost of P_i for X_i is at most τ_i.

In the second step, for each local pivot p ∈ P_i, the processor computes a set R^i_p of local representatives in the vicinity of p. Finally, the set of local pivots together with the union of the local representative sets is sent to the coordinator. Since R^i_p contains at most one point from each group, it has at most m − 1 points other than p. Since |P_i| = k, we have the following observation.

Observation 4.

Each processor sends at most km points to the coordinator.

Moreover, τ_i is bounded as follows.

Lemma 3.

For every processor i, we have τ_i ≤ 2·OPT.

Proof.

Suppose τ_i > 2·OPT. Then P_i ∪ {p_{k+1}} is a set of k+1 points separated pairwise by distance more than 2·OPT. But O is a set of at most k points whose clustering cost for X, and hence for X_i, is at most OPT. This contradicts Lemma 2. ∎

Observation 3 allows us to define a covering function cov from X, the input set of points, to the set of local pivots, as follows.

Definition 2.

Let x be an arbitrary point in X. Suppose x is processed by processor i, that is, x ∈ X_i. Then cov(x) is an arbitrary local pivot in P_i within distance τ_i from x.

Since the processors send only a small number of points to the coordinator, it is quite possible that the optimal set of centers is lost in this process. In the next lemma, we claim that the set of points received by the coordinator nevertheless contains a good and feasible set of centers.

Lemma 4.

The set Q = Q_1 ∪ ... ∪ Q_ℓ received by the coordinator contains a feasible set, say S', whose clustering cost for X is at most 5·OPT.

Proof.

Consider any o ∈ O, and suppose it is processed by processor i. Then d(o, cov(o)) ≤ τ_i by Definition 2. Recall that R^i_{cov(o)}, the output of getReps, contains one point from every group that has a point within distance τ_i from cov(o). Therefore, it contains some point, say o', from the same group as o (possibly o itself), such that d(cov(o), o') ≤ τ_i. Then d(o, o') ≤ 2τ_i ≤ 4·OPT by the triangle inequality and Lemma 3. Let S' = {o' : o ∈ O}. Clearly, S' ⊆ Q. Since S' has at most as many points from any group as O, S' is feasible. The clustering cost of O for X is at most OPT. The clustering cost of S' for O is at most 4·OPT, because d(o, o') ≤ 4·OPT for every o ∈ O. By Lemma 1, the clustering cost of S' for X is at most 5·OPT, as required. ∎

Q ← ∅, τ_max ← 0.
/* Receive messages from processors. */
for i ← 1 to ℓ do
     Receive (Q_i, τ_i) from processor i.
     Q ← Q ∪ Q_i, τ_max ← max(τ_max, τ_i).
/* The coordinator now has access to d and g restricted to Q, the capacity vector (k_1, ..., k_m), and a guess τ. */
/* Compute global pivots. */
P ← getPivots(Q, d, 10τ).
/* Compute global representative sets. */
{R_p : p ∈ P} ← getReps(Q, d, g, P, 5τ).
/* Compute solution. */
S ← HittingSet({R_p : p ∈ P}, g, (k_1, ..., k_m)).
Output S.
Algorithm 3 Coordinator's algorithm

The algorithm executed by the coordinator is given by Algorithm 3. The coordinator constructs a maximal subset P of the points received from the processors such that the points in P are pairwise separated by distance more than 10τ. P is called the set of global pivots. For each global pivot p, the coordinator computes a set R_p of its global representatives, all of which are within distance 5τ from p. Due to the separation between the points in P, the sets R_p are pairwise disjoint. Finally, a feasible set S intersecting as many R_p's as possible is found and returned. (As before, it will be clear that S intersects all the R_p's.)

Theorem 2.

The coordinator returns a feasible set whose clustering cost is at most 17τ. This is a 17(1+ε)-approximation when τ ≤ (1+ε)·OPT.

Proof.

By Lemma 4, Q contains a feasible set, say S', whose clustering cost for X, and hence for Q, is at most 5·OPT ≤ 5τ. For each p ∈ P, let o_p denote a point in S' that is within distance 5τ from p. Since the points in P are separated pairwise by distance more than 10τ, the o_p's are all distinct. By the property of getReps, the set R_p returned by it contains a point, say s_p, from the same group as o_p within distance 5τ from p. Let S'' = {s_p : p ∈ P}. This set intersects R_p for each p ∈ P. Since s_p and o_p are from the same group and the o_p's are all distinct, S'' contains at most as many points from any group as S' does. Since S' is feasible, so is S''. To summarize, there exists a feasible set, namely S'', intersecting all the R_p's. Recall that S, the output of HittingSet, is a feasible set intersecting as many R_p's as possible. Thus, S also intersects all the R_p's.

Now, the clustering cost of S for P is at most 5τ, because S intersects R_p for each p ∈ P. The clustering cost of P for Q is at most 10τ by the maximality of the set P returned by getPivots. The clustering cost of Q for X is at most 2τ, because the clustering cost of each P_i ⊆ Q_i for X_i is at most τ_i ≤ 2·OPT ≤ 2τ. These facts and Lemma 1 together imply that the clustering cost of S, the output of the coordinator, for X is at most 17τ. ∎

We note here that even though our distributed algorithm has the same approximation guarantee as Kale's one-pass algorithm, it is inherently a different algorithm: ours is extremely parallel, whereas Kale's is extremely sequential. We now prove a bound on the running time.

Theorem 3.

The running time of the distributed algorithm is O(nk/ℓ) per processor plus O(ℓk²m) at the coordinator, excluding the post-processing time of HittingSet, which depends only on k and m. By an appropriate choice of ℓ, the number of processors, this can be balanced.

Proof.

For each processor, computing the local pivots as well as the call to getReps takes time O(|X_i|·k) = O(nk/ℓ) each. For the coordinator, the separation between the global pivots and Lemma 2 together enforce |P| ≤ k. Observation 4 implies |Q| ≤ ℓkm. Therefore, getPivots takes time O(ℓk²m) and getReps takes time O(ℓk²m). The call to HittingSet takes time depending only on k and m, thus limiting the coordinator's running time to O(ℓk²m) plus that post-processing time. Choosing ℓ to balance the processor and coordinator terms minimizes the total running time. ∎

3.3 Handling the Guesses

Given an arbitrarily small parameter ε > 0, a lower bound L ≤ OPT, and an upper bound U ≥ OPT, we run our algorithms for the guesses τ = L, (1+ε)L, (1+ε)²L, and so on up to U, which means at most O(log_{1+ε}(U/L)) guesses. Call this method of guessing geometric guessing starting at L until U. For the smallest guess τ that is at least OPT, our algorithms will compute a solution successfully.

In the distributed algorithm, by Lemma 3, for each processor i we have τ_i ≤ 2·OPT. Therefore, max_i τ_i / 2 is a lower bound on OPT. We then run Algorithm 3 with geometric guessing starting at max_i τ_i / 2 until it successfully finds a solution.

For the two-pass algorithm, let Y be the set of the first k+1 points of the stream; then half the minimum pairwise distance in Y is a lower bound on OPT (call this the simple lower bound). Note that no extra passes need to be spent to compute the simple lower bound. We also need an upper bound. One can compute an arbitrary feasible solution and its cost—which is an upper bound—by spending two more passes (call this the simple upper bound). This results in a four-pass algorithm. To obtain a truly two-pass algorithm with small space usage, one can use Guha's trick [Guh09], which essentially amounts to starting runs for several guesses in parallel and, if a run with some guess fails, continuing that run with a larger guess while treating the old summary as the initial stream for the new guess; see also [Kal19] for details. But obtaining and using an upper bound is convenient and easy to implement in practice.
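A sketch of the geometric-guessing driver; run_with_guess is a hypothetical callback that runs one of our algorithms with a given guess and returns a feasible solution, or None if that guess was too small:

def geometric_guessing(lower, upper, eps, run_with_guess):
    # Try guesses lower, (1+eps)*lower, (1+eps)^2*lower, ... up to upper,
    # and return the first solution found.
    tau = lower
    while tau <= upper * (1 + eps):
        solution = run_with_guess(tau)
        if solution is not None:
            return solution
        tau *= 1 + eps
    return None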

4 Distributed k-Center Lower Bound

Malkomes et al. [MKC15] generalized the greedy algorithm [Gon85] to obtain a 4-approximation algorithm for the k-center problem in the distributed setting. Here we prove a lower bound for the k-center problem with multiple processors for a special class of distributed algorithms: if each processor communicates less than a constant fraction of its input points, then with constant probability, the output of the coordinator will be no better than a 4-approximation to the optimum.

Figure 1: The underlying metric used in the lower bound

Figure 1 shows the graph metric, on the point set described below, for which the lower bound holds; the point in the center of the figure is not a part of the metric but is only used to define the distances. The pairwise distances are listed in Table 1.

The point set is split into two sides, each of which is further split by an arbitrary equipartition. There are nine processors, whose inputs are formed from these parts as described in Section 4.1. The goal is to solve the 3-center problem on the union of their inputs; observe that the optimum solution has unit cost. Each processor is allowed to send a subset of its input points to the coordinator, who outputs three of the received points. For this class of algorithms, we show that if each processor communicates only a small fraction of its points, then the output of the coordinator is no better than a 4-approximation to the optimum with constant probability. Using standard amplification arguments, we can generate a metric instance of the k-center problem for larger k on which, with probability arbitrarily close to one, the algorithm outputs no better than a 4-approximation.

We first discuss the intuition behind the proof. The key observation is that the points assigned to each processor are pairwise equidistant. Therefore, sending a uniformly random subset of its input is the best strategy for each processor. Since each processor communicates only a small fraction of its input points, the probability that the coordinator receives any of the few critical points is negligible. Conditioned on the coordinator not receiving these points, all the received points are again pairwise equidistant, so the best strategy for the coordinator is to output points at random. Hence, with constant probability, all the points in the output belong to one of the two sides. This being the case, the output has cost at least four times the optimum cost of one.

4.1 The Formal Proof

We now present the formal details of the lower bound. For a natural number t, [t] denotes the set {1, ..., t}.

The metric space.

The point set of this metric space consists of the two sides shown in Figure 1 together with a few designated points on each side that we call critical; the sides are pairwise disjoint, and so are the sets of critical points. The metric is the shortest-path-length metric induced by the graph shown in Figure 1 (where the central point is not a point of the metric but is only used to define the pairwise distances). The pairwise distances are given in Table 1; if a table entry is indexed by sets, then the entry is the distance between distinct points in those sets. The following observation can be verified by a case-by-case analysis.

Observation 5.

There are exactly two optimum solutions of the 3-center problem on this metric space, and they have unit clustering cost. Moreover, any solution contained entirely in one side of the construction has clustering cost at least 4, due to a point on the other side.


Table 1: Pairwise Distances

Input Distribution on the Processors’ Inputs.

Each side is split by an arbitrary equipartition into parts of equal size, and each part contains exactly three critical points, with all points in a part pairwise equidistant. We assign the parts randomly to the nine processors after a random relabeling of the points. Formally, we pick a uniformly random bijection as the relabeling and another uniformly random bijection, independent of the first, as the assignment of parts to processors. When a processor or the coordinator queries the distance between two labels, it gets the distance between the corresponding points as the answer. Note that neither the processors nor the coordinator knows either bijection. Let the random variable Π denote the partition of the set of labels into a sequence of nine subsets induced by the two bijections, where the i-th subset is the set of labels of the points assigned to processor i.

Lemma 5.

Consider any deterministic distributed algorithm for the nine-processor 3-center problem on this metric space and input distribution, in which each processor communicates a subset of its input points and the coordinator outputs three of the received points. If the communicated subsets are a small enough fraction of the inputs, then with constant probability, the output is no better than a 4-approximation.

Although the probability with which the coordinator fails to output a better-than-4-approximation in Lemma 5 is only a constant, it can be amplified arbitrarily close to one. We discuss the amplification result before presenting the proof of the above lemma.

Lemma 6.

Let δ and ε be arbitrary positive constants, and let t, the number of copies in the construction below, be sufficiently large in terms of δ and ε.

Then there exists an instance of the 3t-center problem such that, in the distributed setting with nine processors, each communicating at most an ε fraction of its input points to the coordinator, the coordinator fails to output a better-than-4-approximation with probability at least 1 − δ.

Proof.

The underlying metric space consists of t disjoint copies of the metric space of Figure 1, separated by an arbitrarily large distance from one another. The point set of each copy is distributed to the nine processors as described earlier, and these distributions are independent, so each processor receives an equal number of points from every copy. Observation 5 implies that in this instance, the optimum set of centers (the union of the optimum sets of centers of the copies) has unit cost. Also, in order to get a better-than-4-approximation, the coordinator must output a better-than-4-approximate solution for every copy. We prove that this is unlikely.

By our assumption, each processor sends at most an ε fraction of its points to the coordinator. Therefore, for each processor, there are only few copies from which it sends more than a small constant fraction of that copy's points. Since we have nine processors, there are only few copies from which some processor sends more than this fraction of its points; from each of the remaining copies, no processor sends more than a small constant fraction of its points. By Lemma 5, on each of these remaining copies the coordinator independently succeeds in producing a better-than-4-approximation with probability bounded away from one. Therefore, the probability that the coordinator succeeds on all the copies is bounded as

where the last inequality follows by substituting the value of t. Thus, the coordinator fails to produce a better-than-4-approximation with probability at least 1 − δ. ∎

Proof of Lemma 5.

Consider any one of the nine processors. It gets one of the parts, chosen uniformly at random, with its points uniformly randomly relabeled. Since the relabeling is uniformly random and the points assigned to the processor are pairwise equidistant, the processor is not able to identify the three critical points in its input. This holds even if we condition on the partition Π of the labels among the processors. Formally, conditioned on Π, all three-element subsets of the processor's labels are equally likely to be the set of labels of its three critical points. As a consequence, the probability that at least one of the three critical points appears in the small set of points the processor communicates is small, even under this conditioning. For a given processor, consider the bad event that the set of labels it sends to the coordinator contains the label of a critical point. Define E to be the event that no processor sends the label of any critical point to the coordinator, that is, the complement of the union of the bad events. Then, by the union bound and the bound on each bad event, we have for every partition of the label set and every relabeling,

(1)

Suppose the coordinator outputs a set of three labels on receiving the processors' messages. Observe that the messages, the coordinator's output, and the event E are all completely determined by the partition Π of the labels and by which labels were sent. In contrast, due to the random relabeling, the correspondence between labels and points is independent of this information. Therefore,

Observation 6.

Conditioned on the coordinator's view and on the event E, the relabeling is equally likely to be any bijection consistent with that view.

Next, define F to be the event that the coordinator outputs the labels of three points, all of which are contained in one side of the construction. Note that the event F implies that the coordinator's output is contained entirely in one of the two sides. Therefore, by Observation 5, the event F implies that the coordinator fails to output a better-than-4-approximation. We are now left to bound the probability of F from below.

Since the output label set is completely determined by the coordinator's view, the event F is determined by the partition of the labels and the relabeling: for any fixed partition, a fixed number of relabelings cause F to happen. Formally,

Observation 7.

For every partition of the label set, the number of relabelings for which F happens is the same, and for all the other relabelings F does not happen.

Therefore, we have,

Here, we used Observation 7 for the second and fourth equalities, and Equation (1) and Observation 6 for the inequality. Thus, the coordinator fails to output a better-than-4-approximation with constant probability, as required. ∎

Using Lemma 6 along with Yao’s lemma, we get our main lower-bound theorem.

Theorem 4.

There exists a constant ε > 0 such that any randomized distributed algorithm for k-center in which each processor communicates at most an ε fraction of its points to the coordinator, who outputs a subset of those points as the solution, is no better than a 4-approximation with constant probability.

5 Experiments

All experiments are run on an HP EliteBook 840 G6 with an Intel® Core™ i7-8565U CPU at 1.80 GHz having 4 cores and 15.5 GiB of RAM, running Ubuntu 18.04 and Anaconda. We make our code available on GitHub.

We perform our experiments on a massive synthetic dataset, several real datasets, and small synthetic datasets. The same implementation is used for the large synthetic dataset and the real datasets, but a slightly different implementation is used for the small synthetic datasets. Before presenting the experiments, we first discuss the implementation details that are common to all three sets of experiments. Specific details are mentioned along with the corresponding experimental setup. For all our algorithms, if the solution size is less than k, then we extend the solution using an arbitrary feasible solution of size k (which also certifies the simple upper bound). In the case of the distributed algorithm, the arbitrary solution is computed using only the points received by the coordinator. Also, one extra pass is spent on computing the solution cost. In the processors' algorithm, we return τ_i along with the summary. No randomness is used for any optimization, making our algorithms completely deterministic. Access to the distance between two points is via a method get_distance(), whose implementation depends on the dataset.

We use the code shared by Kleindessner et al. for their algorithm on GitHub, exactly as is, for all datasets. In their code, the distances are assumed to be stored in an n × n distance matrix.

As mentioned in the introduction, we give new implementations for existing algorithms—those of Chen et al. and Kale (we choose to implement only Kale's two-pass algorithm, because it is the better of his two). Instead of using a matroid intersection subroutine, which can have super-quadratic running time, we reduce the postprocessing steps of these algorithms to finding a maximum matching in an appropriately constructed graph (for details, see HittingSet() in Appendix A). We further reduce maximum matching to max-flow, which is computed using the Python package NetworkX. This results in much faster postprocessing for both Chen et al.'s algorithm and Kale's. This step alone makes Chen et al.'s algorithm practical for much larger n than what is observed by Kleindessner et al.

Handling the guesses

For all algorithms (except Kleindessner et al.'s, which does not need a guess), we use the same value of ε. For Chen et al.'s algorithm, we use geometric guessing starting with the lower bound given by the farthest-point heuristic (call this Gonzalez's lower bound). For our two-pass algorithm and Kale's algorithm, we use geometric guessing starting with the simple lower bound until the upper bound given by an arbitrary solution. The values of the guesses in the coordinator's algorithm are scaled down by a factor of 2. Concretely, let τ_max be the maximum among the τ_i's. Then the guesses take the values τ_max/2, (1+ε)τ_max/2, (1+ε)²τ_max/2, and so on, until a feasible solution is found. The factor of 2 ensures that the starting guess is a valid lower bound on OPT (Lemma 3), so that when getPivots() is run with the smallest guess, we end up picking sufficiently many pivots from the received points.

We now proceed to present our experiments. To show the effectiveness of our algorithms on massive datasets, we run them on a 100 GB synthetic dataset which is a collection of 4,000,000 points in 1000-dimensional Euclidean space, where each coordinate is a uniformly random real number. Each point is assigned one of four groups uniformly at random, and the capacity of each group is set to the same value. Just reading this data file takes more than four minutes. Our two-pass algorithm takes 1.95 hours and our distributed algorithm takes 1.07 hours; both compute a solution of almost the same cost, even though their theoretical guarantees are different. Here, we use a fixed block size in the distributed algorithm, i.e., the points are split evenly into blocks, one per simulated processor.

For the above dataset and the real datasets:

The input is read from the input file and the attributes are read from the attribute file, one data point at a time, and fed to the algorithms. This is done in order to be able to handle the 100 GB dataset. Using Python's multiprocessing library, we are able to use four cores of the processor.
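A minimal sketch of how the blocks could be handed to worker processes with Python's multiprocessing library; process_block stands for the processors' algorithm (Algorithm 2) and is a placeholder, not the code used in the experiments:

from multiprocessing import Pool

def run_processors(blocks, process_block, workers=4):
    # Apply the per-processor summary computation to each block in parallel
    # and collect the (summary, tau_i) messages for the coordinator.
    with Pool(processes=workers) as pool:
        return pool.map(process_block, blocks)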

5.1 Real Datasets

We use three real-world datasets: Celeb-A [LLWT15], Sushi [sus], and Adult [KB], with n = 1000 obtained by selecting the first 1000 data points (see Table 2).

Dataset Capacities Gonzalez’s Lower Bound Chen et al. Kale Kleindessner et al. Two pass Distributed
CelebA [2, 2] 30142.4 1.9 1.9 1.85 1.76 1.76
CelebA [2, 2, 2, 2] 28247.3 2.0 2.0 1.9 1.88 1.88
SushiA [2, 2] 11.0 2.18 2.18 2.27 2.0 2.09
SushiA [2] * 6 8.5 2.35 2.35 2.24 2.35 2.24
SushiA [2] * 12 7.5 2.13 2.13 2.0 2.4 2.4
SushiB [2, 2] 36.5 1.81 1.81 2.11 1.81 1.86
SushiB [2] * 6 34.0 2.0 1.82 2.12 1.79 2.0
SushiB [2] * 12 32.0 1.94 1.94 2.09 1.94 1.94
Adult [2, 2] 4.9 2.04 2.13 2.44 1.9 2.02
Adult [2] * 5 3.92 2.66 2.66 2.02 2.36 2.35
Adult [2] * 10 2.76 2.75 2.41 2.48 2.48 2.75
Table 2: Comparison of solution quality of algorithms for fair k-center on real datasets. Each column after the third corresponds to an algorithm and shows the ratio of its cost to Gonzalez's lower bound. Note that this is not the approximation ratio. Our two-pass algorithm is the best for the majority of the settings. A dark shaded cell shows the best-cost algorithm and a lightly shaded cell shows the second best.

The Celeb-A dataset is a set of 202,599 images of human faces with attributes including male/female and young/not-young, which we use. We use Keras to extract features from each image [fea] via the pretrained neural network VGG16, which returns a 15360-dimensional real vector for each image. We use the distance between these feature vectors as the metric and two settings of groups: male/female with a capacity of 2 each (denoted by [2, 2] in Table 2), and {male, female} × {young, not-young} with a capacity of 2 each (denoted by [2, 2, 2, 2] in Table 2).

The Sushi dataset records preferences over types of sushi by 5000 individuals, with attributes of male/female and six possible age groups. In SushiB, a preference is given by a score, whereas in SushiA, a preference is given by an ordering. For SushiB, we use the distance between score vectors, whereas for SushiA, we use the number of inversions, i.e., the distance between two sushi rankings is the number of pairs {i, j} such that sushi i is preferred over sushi j by one ranking and not the other. For both SushiA and SushiB, we use three different group settings: gender only, age group only, and the combination of gender and age group. This results in 2, 6, and 12 groups, respectively, and the capacities appear as [2, 2], [2] * 6, and [2] * 12, respectively, in Table 2.
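The number-of-inversions distance used for SushiA can be computed with a simple quadratic sketch like the following (adequate for short rankings; names are ours):

def inversion_distance(rank_a, rank_b):
    # rank_a[i] and rank_b[i] give the positions of item i in the two rankings;
    # count the item pairs that the two rankings order differently.
    items = range(len(rank_a))
    return sum(
        1
        for i in items
        for j in items
        if i < j and (rank_a[i] < rank_a[j]) != (rank_b[i] < rank_b[j])
    )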

Motivated by Kleindessner et al., we consider the Adult dataset [KB], which is extracted from US census data and contains a male/female attribute and six numerical attributes that we use as features. We normalize this dataset to have zero mean and a standard deviation of one and use the distance between the resulting feature vectors as the metric. There are two attributes that can be used to generate groups: gender and race (Black, White, Asian-Pacific-Islander, American-Indian-Eskimo, and Other). Individually and in combination, this results in 2, 5, and 10 groups, respectively.

For the comparison, see Table 2. On a majority of the settings, our two-pass algorithm outputs a solution with cost smaller than the rest. We reiterate for emphasis that, in addition to being at least as good as the best in terms of solution quality, our algorithms can handle massive datasets.

For the distributed algorithm, we use a block size of 25, i.e., the number of processors is 40; theoretically, an appropriately chosen number of processors gives the maximum speedup (Theorem 3).

5.2 Synthetic Datasets

Motivated by the experiments in Kleindessner et al., we use the Erdős-Rényi graph metric to compare the running time and cost of our algorithms with existing algorithms. For a fixed natural number n, a random metric on n points is generated as follows. First, a random undirected graph on n vertices is sampled in which each edge is independently picked with a fixed probability. Second, every edge is assigned a uniformly random weight. The points in the metric correspond to the vertices of the graph, and the pairwise distances between the points are given by the shortest-path distances. In addition, if m is the number of groups, then each point in the metric is assigned a group in {1, ..., m} uniformly and independently at random.
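A sketch of this random metric generation using NetworkX; the edge probability and the weight range are parameters whose experimental values are not repeated here, so the defaults below are illustrative only:

import random
import networkx as nx

def random_graph_metric(n, p, seed=None):
    # Sample G(n, p), assign uniformly random edge weights, and return the
    # all-pairs shortest-path distances as the metric.
    # (Assumes p is large enough that the sampled graph is connected.)
    rng = random.Random(seed)
    G = nx.gnp_random_graph(n, p, seed=seed)
    for u, v in G.edges():
        G[u][v]["weight"] = rng.random()
    return dict(nx.all_pairs_dijkstra_path_length(G, weight="weight"))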

Figure 2: Comparing Running Times

Figure 2 shows the plots of running time against instance size n; the bottom plot is a zoom-in of the top one onto the lower four curves. In this experiment, n takes a range of increasing values, the number of groups is fixed, and each group has the same small capacity. For each value of n, we run the five algorithms on independent random metric instances of size n to compute the average running time. Our two-pass algorithm and Kleindessner et al.'s algorithm are the fastest. Our distributed algorithm is faster than Chen et al.'s algorithm, but slower than Kale's.

Figure 3: Comparing Approximation Ratios

Figure 3 shows the ratios of the costs of the various algorithms to Gonzalez's lower bound. For this comparison, the instance size is fixed and several capacity settings are used. Here again, for every fixing of the capacities, each algorithm is run on independent random metric instances to compute the average costs. Chen et al.'s algorithm achieves the least cost for almost all settings, and Kleindessner et al.'s algorithm gives the highest cost on the majority (5 out of 8) of the settings. Our two-pass algorithm and Kale's algorithm perform similarly to each other and are quite close to Chen et al.'s. Our distributed algorithm is somewhere in between Chen et al.'s and Kleindessner et al.'s. Note that the costs of any two algorithms are within a small constant factor of each other.

In the implementation of our two-pass algorithm, we use geometric guessing starting with the simple lower bound until the algorithm returns a success, instead of running all guesses. This is done for a fair comparison in terms of running time.

6 Research Directions

One research direction is to improve the theoretical bounds, e.g., to get a better approximation ratio in the distributed setting or to prove a better hardness result. Another interesting direction is to use fair k-center for fair rank aggregation, using the number of inversions between two rankings as the metric.

Appendix A Algorithms

The definition of clustering cost (Definition 1) immediately implies the following observations.

Observation 8.

Let S ⊆ S' and Y be sets of points in a metric space given by a distance function d. The clustering cost of S' for Y is at most the clustering cost of S for Y.

Observation 9.

Let X_1, ..., X_ℓ and S_1, ..., S_ℓ be sets of points in a metric space given by a distance function d. Suppose the clustering cost of each S_i for X_i is at most r. Then the clustering cost of S_1 ∪ ... ∪ S_ℓ for X_1 ∪ ... ∪ X_ℓ is at most r.

The following lemma follows easily from the triangle inequality.

Lemma 7 (Lemma 1 from the paper, restated).

Let A, B, and C be sets of points in the metric space. The clustering cost of C for A is at most the clustering cost of C for B plus the clustering cost of B for A.

Proof.

Let d be the metric, and let r_1 and r_2 denote the clustering costs of C for B and of B for A, respectively. For every a ∈ A, there exists b ∈ B such that d(a, b) ≤ r_2. For this b, there exists c ∈ C such that d(b, c) ≤ r_1. Thus, for every a ∈ A, there exists c ∈ C such that d(a, c) ≤ r_1 + r_2, by the triangle inequality. This proves the claim. ∎

The pseudocodes of the procedures getPivots, getReps, and HittingSet are given by Algorithms 4, 5, and 6, respectively.

Input: Set X with metric d, radius r.
P ← {x_1}, where x_1 is an arbitrary point in X.
for each x ∈ X (in an arbitrary order) do
     if d(x, p) > r for every p ∈ P then
          P ← P ∪ {x}.
Return P.
Algorithm 4 getPivots
Input: Set X with metric d,