Distributed k-Means and k-Median Clustering on General Topologies


Maria Florina Balcan, Steven Ehrlich, and Yingyu Liang (Georgia Institute of Technology)
ninamf@cc.gatech.edu, sehrlich@cc.gatech.edu, yliang39@gatech.edu
Abstract

This paper provides new algorithms for distributed clustering for two popular center-based objectives, k-median and k-means. These algorithms have provable guarantees and improve communication complexity over existing approaches. Following a classic approach in clustering by [16], we reduce the problem of finding a clustering with low cost to the problem of finding a coreset of small size. We provide a distributed method for constructing a global coreset which improves over the previous methods by reducing the communication complexity, and which works over general communication topologies. Experimental results on large scale data sets show that this approach outperforms other coreset-based distributed clustering algorithms.

1 Introduction

Most classic clustering algorithms are designed for the centralized setting, but in recent years data has become distributed over different locations, such as distributed databases [24, 8], images and videos over networks [23], surveillance [13] and sensor networks [7, 14]. In many of these applications the data is inherently distributed because, as in sensor networks, it is collected at different sites. As a consequence it has become crucial to develop clustering algorithms which are effective in the distributed setting.

Several algorithms for distributed clustering have been proposed and empirically tested. Some of these algorithms [12, 25, 9] are direct adaptations of centralized algorithms which rely on statistics that are easy to compute in a distributed manner. Other algorithms [17, 19] generate summaries of local data and transmit them to a central coordinator which then performs the clustering algorithm. No theoretical guarantees are provided for the clustering quality in these algorithms, and they do not try to minimize the communication cost. Additionally, most of these algorithms assume that the distributed nodes can communicate with all other sites or that there is a central coordinator that communicates with all other sites.

In this paper, we study the problem of distributed clustering where the data is distributed across nodes whose communication is restricted to the edges of an arbitrary graph. We provide algorithms with small communication cost and provable guarantees on the clustering quality. Our technique for reducing communication in general graphs is based on the construction of a small set of points which act as a proxy for the entire data set.

An ε-coreset is a weighted set of points whose cost on any set of centers approximates the cost of the original data on those same centers up to a factor of (1 ± ε). Thus an approximate solution for the coreset is also an approximate solution for the original data. Coresets have previously been studied in the centralized setting ([16, 10]) but have also recently been used for distributed clustering as in [26] and as implied by [11]. In this work, we propose a distributed algorithm for k-means and k-median, by which each node constructs a local portion of a global coreset. Communicating the approximate cost of a global solution to each node is enough for the local construction, leading to low communication cost overall. The nodes then share the local portions of the coreset, which can be done efficiently in general graphs using a message passing approach.

More precisely, in Section 3, we propose a distributed coreset construction algorithm based on local approximate solutions. Each node computes an approximate solution for its local data, and then constructs the local portion of a coreset using only its local data and the total cost of each node's approximation. For constant ε, this builds a coreset of size Õ(kd + nk) for k-median and k-means when the data lies in d dimensions and is distributed over n sites. If there is a central coordinator among the n sites, then clustering can be performed on the coordinator by collecting the local portions of the coreset with a communication cost equal to the coreset size. For distributed clustering over general connected topologies, we propose an algorithm based on the distributed coreset construction and a message-passing approach, whose communication cost improves over previous coreset-based algorithms. We provide a detailed comparison below.

Experimental results on large scale data sets show that our algorithm performs well in practice. For a fixed amount of communication, our algorithm outperforms other coreset construction algorithms.

Comparison to Other Coreset Algorithms: Since coresets summarize local information they are a natural tool to use when trying to reduce communication complexity. If each node constructs an ε-coreset on its local data, then the union of these coresets is clearly an ε-coreset for the entire data set. Unfortunately, the size of the coreset produced by this natural approach increases greatly with the number of nodes. More sophisticated approaches, such as [26], reduce the size of the global coreset by approximating the union of local coresets with another coreset. They assume nodes communicate over a rooted tree, with each node passing its coreset to its parent. Because the approximation factor of the constructed coreset depends on the quality of its component coresets, the accuracy a coreset needs (and thus the overall communication complexity) scales with the height of this tree. Although it is possible to find a spanning tree in any communication network, when the graph has large diameter every tree has large height. In particular, many natural networks such as grid networks have a large diameter (Θ(√n) for grids), which greatly increases the size of the coresets that must be communicated across the lower levels of the tree.

We show that it is possible to construct a global coreset with low communication overhead. This is done by distributing the coreset construction procedure rather than combining local coresets. The communication needed to construct this coreset is negligible: just a single value from each data set, representing the approximate cost of its locally optimal clustering. Since the sampled global ε-coreset is the same size as a single local ε-coreset, this improves the communication cost over the other approaches: in general graphs the constructed coreset is smaller by a factor of n (the number of sites) than the union of local coresets, and its size is independent of the underlying communication topology. See Figure 1 for an illustration. This method excels in sparse networks with large diameters, where the previous approach in [26] requires coresets that are quadratic in the diameter for k-median and quartic for k-means; see Section 4.2 for further details. [11] also merge coresets using coreset construction, but they do so in a model of parallel computation and ignore communication costs. Section 6 provides a review of additional related work.

(a) Zhang et al.[26]
(b) Our Construction
Figure 1: (a) Each node computes a coreset on the weighted pointset for its own data and its subtrees’ coresets. (b) Local constant approximation solutions are computed, and the costs of these solutions are used to coordinate the construction of a local portion on each node.

2 Preliminaries

Let d(p, q) denote the Euclidean distance between any two points p, q ∈ ℝ^d. The goal of k-means clustering is to find a set x = {x_1, …, x_k} of k centers which minimizes the k-means cost of a data set P ⊆ ℝ^d. Here the k-means cost is defined as cost(P, x) = Σ_{p ∈ P} d(p, x)², where d(p, x) = min_{x_i ∈ x} d(p, x_i). If P is a weighted data set with a weighting function w, then the k-means cost is defined as Σ_{p ∈ P} w(p) d(p, x)². Similarly, the k-median cost is defined as cost(P, x) = Σ_{p ∈ P} d(p, x) (and Σ_{p ∈ P} w(p) d(p, x) in the weighted case). Both the k-means and the k-median cost functions are known to be NP-hard to minimize (see for example [2]). For both objectives, there exist several readily available polynomial-time algorithms that achieve constant-factor approximations (see for example [18, 21]).
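To make the notation concrete, here is a minimal Python/numpy sketch of these weighted cost functions; the function names are our own and are reused in later sketches.

import numpy as np

def kmeans_cost(P, centers, weights=None):
    # Weighted k-means cost: sum_p w(p) * min_x d(p, x)^2.
    dists = np.linalg.norm(P[:, None, :] - centers[None, :, :], axis=2)
    closest = dists.min(axis=1)
    w = np.ones(len(P)) if weights is None else np.asarray(weights)
    return float(np.sum(w * closest ** 2))

def kmedian_cost(P, centers, weights=None):
    # Weighted k-median cost: sum_p w(p) * min_x d(p, x).
    dists = np.linalg.norm(P[:, None, :] - centers[None, :, :], axis=2)
    closest = dists.min(axis=1)
    w = np.ones(len(P)) if weights is None else np.asarray(weights)
    return float(np.sum(w * closest))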

In the distributed clustering task, we consider a set of n nodes V = {v_1, …, v_n} which communicate on an undirected connected graph G = (V, E) with m edges. More precisely, an edge (v_i, v_j) ∈ E indicates that v_i and v_j can communicate with each other. Here we measure the communication cost in the number of points transmitted, and assume for simplicity that there is no latency in the communication. On each node v_i there is a local set of data points P_i, and the global data set is P = ∪_{i=1}^n P_i. The goal is to find a set of k centers x which optimizes cost(P, x) while keeping the computation efficient and the communication cost as low as possible. Our focus is to reduce the total communication cost while preserving theoretical guarantees for approximating the clustering cost.

2.1 Coresets

For the distributed clustering task, a natural approach to avoid broadcasting raw data is to generate a local summary of the relevant information. If each site computes a summary for their own data set and then communicates this to a central coordinator, a solution can be computed from a much smaller amount of data, drastically reducing the communication.

In the centralized setting, the idea of summarization with respect to the clustering task is captured by the concept of coresets [16, 10]. A coreset is a set of points, together with a weight for each point, such that the cost of this weighted set approximates the cost of the original data for any set of centers. The formal definition of coresets is:

Definition 1 (ε-coreset).

An ε-coreset for a set of points P with respect to a center-based cost function is a set of points S together with a set of weights w : S → ℝ such that for any set of centers x,

  (1 − ε) cost(P, x) ≤ Σ_{q ∈ S} w(q) cost(q, x) ≤ (1 + ε) cost(P, x).
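As an illustration of this definition (not part of the paper's construction), the following sketch probes whether a candidate weighted set behaves like an ε-coreset by testing random sets of centers; it can refute the property but cannot certify it for all centers. It assumes a cost function with the signature of kmeans_cost from the sketch in Section 2.

import numpy as np

def violates_coreset_property(P, S, w, eps, k, cost_fn, trials=100, seed=0):
    # Randomized refutation check for the epsilon-coreset property:
    # returns True if some tested set of centers violates
    # (1 - eps) cost(P, x) <= weighted cost(S, x) <= (1 + eps) cost(P, x).
    rng = np.random.default_rng(seed)
    lo, hi = P.min(axis=0), P.max(axis=0)
    for _ in range(trials):
        x = rng.uniform(lo, hi, size=(k, P.shape[1]))  # random candidate centers
        full = cost_fn(P, x)
        approx = cost_fn(S, x, weights=w)
        if not (1 - eps) * full <= approx <= (1 + eps) * full:
            return True
    return False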

In the centralized setting, many coreset construction algorithms have been proposed for k-median, k-means and some other cost functions. For example, for points in ℝ^d, the algorithms in [10] construct coresets of size Õ(kd/ε⁴) for k-means and coresets of size Õ(kd/ε²) for k-median. In the distributed setting, it is natural to ask whether there exists an algorithm that constructs a small coreset for the entire point set but still has low communication cost. Note that the union of coresets for multiple data sets is a coreset for the union of the data sets. The immediate construction of combining the local coresets from each node would therefore produce a global coreset, but one whose size is larger by a factor of n, greatly increasing the communication complexity. We present a distributed algorithm which constructs a global coreset of the same size as the centralized construction and only needs a single value communicated to each node (namely, the sum of the costs of constant approximations to the local optimal clusterings, which is guaranteed to be no more than a constant factor larger than the optimal cost). This serves as the basis for our distributed clustering algorithm.

3 Distributed Coreset Construction

In this section, we design a distributed coreset construction algorithm for k-means and k-median. Note that the underlying technique can be extended to other additive clustering objectives such as k-line median.

To gain some intuition on the distributed coreset construction algorithm, we briefly review the coreset construction algorithm in [10] in the centralized setting. The coreset is constructed by computing a constant approximation solution for the entire data set, and then sampling points proportional to their contributions to the cost of this solution. Intuitively, the points close to the nearest centers can be approximately represented by the nearest centers while points far away cannot be well represented. Thus, points should be sampled with probability proportional to their contributions to the cost.

Directly adapting the algorithm to the distributed setting would require computing a constant approximation solution for the entire data set. We show that a global coreset can be constructed in a distributed fashion by estimating the cost of the entire data set with the sum of the costs of local approximations. We first compute a local approximation solution for each local data set, and communicate the total cost of these local solutions. Then we sample points proportional to their contributions to the cost of their local solutions. At the end of the algorithm, the coreset consists of the sampled points and the centers in the local solutions. The coreset points are distributed over the nodes, so we call it a distributed coreset. See Algorithm 1 for details.

  Input: Local datasets P_1, …, P_n; parameter t (number of points to be sampled).

    Round 1: on each node v_i
      Compute a constant approximation B_i for the local data P_i.
      Communicate cost(P_i, B_i) to all other nodes.
    Round 2: on each node v_i
      Set t_i = t · cost(P_i, B_i) / Σ_{j=1}^n cost(P_j, B_j), and set m_p = cost(p, B_i) for each p ∈ P_i.
      Pick a non-uniform random sample S_i of t_i points from P_i, where for every q ∈ S_i and p ∈ P_i, we have q = p with probability m_p / Σ_{z ∈ P_i} m_z. Let w_q = Σ_{j=1}^n cost(P_j, B_j) / (t · m_q) for each q ∈ S_i.
      For each b ∈ B_i, let P_b = {p ∈ P_i : the closest center of p in B_i is b}, and set w_b = |P_b| − Σ_{q ∈ S_i ∩ P_b} w_q.

  Output: Distributed coreset: the points S_i ∪ B_i with weights {w_q : q ∈ S_i} ∪ {w_b : b ∈ B_i}, for i = 1, …, n.
Algorithm 1 Communication aware distributed coreset construction
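The following Python sketch simulates Algorithm 1 in a single process for the k-means objective. The local constant approximations are obtained here with k-means++ plus Lloyd's algorithm via scikit-learn as a stand-in for any constant approximation, and details such as the rounding of the per-node sample sizes t_i are our own choices, so this should be read as an illustration of the two rounds rather than a reference implementation.

import numpy as np
from sklearn.cluster import KMeans

def local_approximation(P_i, k, seed=0):
    # Constant-factor local solution B_i (assumes |P_i| >= k).
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(P_i)
    return km.cluster_centers_

def distributed_coreset(local_sets, k, t, seed=0):
    # Returns (points, weights): the union of the sampled points and the
    # local solution centers, i.e. the distributed coreset of Algorithm 1.
    rng = np.random.default_rng(seed)

    # Round 1: each node computes B_i and "broadcasts" cost(P_i, B_i).
    B, m, assign, local_costs = [], [], [], []
    for i, P_i in enumerate(local_sets):
        B_i = local_approximation(P_i, k, seed=seed + i)
        d = np.linalg.norm(P_i[:, None, :] - B_i[None, :, :], axis=2)
        m_i = d.min(axis=1) ** 2 + 1e-12   # per-point contribution; small constant avoids zero weights
        B.append(B_i)
        m.append(m_i)
        assign.append(d.argmin(axis=1))
        local_costs.append(m_i.sum())
    total_cost = sum(local_costs)          # the single globally shared value

    # Round 2: node i samples about t * cost(P_i, B_i) / total_cost points,
    # each with probability proportional to its contribution m_p.
    pts, wts = [], []
    for i, P_i in enumerate(local_sets):
        t_i = max(1, int(round(t * local_costs[i] / total_cost)))
        prob = m[i] / m[i].sum()
        idx = rng.choice(len(P_i), size=t_i, replace=True, p=prob)
        w_q = total_cost / (t * m[i][idx])  # importance weights of sampled points
        pts.append(P_i[idx])
        wts.append(w_q)

        # Local centers get weight |P_b| minus the weight of the sampled
        # points assigned to them, so each cluster's total mass is preserved.
        w_b = np.bincount(assign[i], minlength=k).astype(float)
        np.add.at(w_b, assign[i][idx], -w_q)
        pts.append(B[i])
        wts.append(w_b)

    return np.vstack(pts), np.concatenate(wts)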
Theorem 1.

For distributed k-means and k-median clustering on a graph with n nodes and m edges, there exists an algorithm such that, with probability at least 1 − δ, the union of its output on all nodes is an ε-coreset for P = ∪_{i=1}^n P_i. The size of the coreset is O((kd + log(1/δ))/ε⁴ + nk log(nk/δ)) for k-means, and O((kd + log(1/δ))/ε² + nk) for k-median. The total communication cost is O(mn).

As described below, the distributed coreset construction can be achieved by using Algorithm 1 with an appropriate choice of t, namely t = O((kd + log(1/δ))/ε⁴ + nk log(nk/δ)) for k-means and t = O((kd + log(1/δ))/ε²) for k-median. The formal proofs are described in the following subsections.

3.1 Proof of Theorem 1: k-median

The analysis relies on the definition of the pseudo-dimension of a function space and a sampling lemma.

Definition 2 ([22, 10]).

Let F be a finite set of functions from a set P to ℝ_{≥0}. For f ∈ F and r ≥ 0, let B(f, r) = {p ∈ P : f(p) ≤ r}. The dimension dim(F, P) of the function space is the smallest integer d' such that for any G ⊆ P,

  |{G ∩ B(f, r) : f ∈ F, r ≥ 0}| ≤ |G|^{d'}.

Consider a set F of functions defined on a set of points P. Suppose we draw a sample S according to {m_p : p ∈ P}, namely, for every q ∈ S and every p ∈ P, we have q = p with probability m_p / Σ_{z ∈ P} m_z, and we set w_q = Σ_{z ∈ P} m_z / (|S| m_q) for each q ∈ S. Then for any f ∈ F, the expectation of the weighted cost of S equals the cost of the original data P:

  E[ Σ_{q ∈ S} w_q f(q) ] = Σ_{p ∈ P} f(p).

The following lemma shows that if the sample size is large enough, then we also have concentration for any . The lemma is implicit in [10] and we include the proof in the appendix for completeness.

Lemma 1.

Fix a set F of functions from P to ℝ_{≥0}. Let S be a sample drawn i.i.d. from P according to {m_p : p ∈ P}, namely, for every q ∈ S and every p ∈ P, we have q = p with probability m_p / Σ_{z ∈ P} m_z, and let w_q = Σ_{z ∈ P} m_z / (|S| m_q) for q ∈ S. If |S| ≥ (c/ε²)(dim(F, P) + log(1/δ)) for a sufficiently large constant c, then with probability at least 1 − δ, for all f ∈ F,

  | Σ_{p ∈ P} f(p) − Σ_{q ∈ S} w_q f(q) | ≤ ε ( max_{p ∈ P} f(p)/m_p ) Σ_{z ∈ P} m_z.

To get a small bound on the difference between Σ_{p ∈ P} f(p) and Σ_{q ∈ S} w_q f(q), we need to choose the weights m_p such that max_{p ∈ P} f(p)/m_p is bounded. More precisely, if we choose m_p ≥ f(p) for all p ∈ P and f ∈ F, then the difference is bounded by ε Σ_{z ∈ P} m_z.

We first consider the centralized setting and review how [10] applied the lemma to construct a coreset for k-median as in Definition 1. A natural approach is to apply the lemma directly to the cost, namely, to choose f_x(p) = d(p, x). The problem is that a suitable upper bound m_p is not available for d(p, x). However, we can still apply the lemma to a different set of functions defined as follows. Let b_p denote the closest center to p in a constant approximation solution B. Aiming to approximate the error d(p, x) − d(b_p, x) rather than to approximate d(p, x) directly, we define f_x(p) = d(p, x) − d(b_p, x) + d(p, b_p), where d(p, b_p) is added so that f_x(p) ≥ 0. Since f_x(p) ≤ 2 d(p, b_p), we can apply the lemma to F = {f_x : x is a set of k centers} and m_p = 2 d(p, b_p). The lemma then bounds the difference by 2ε Σ_{p ∈ P} d(p, b_p) = 2ε cost(P, B) = O(ε) cost(P, x), so we have an O(ε)-approximation.
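For readability, the two triangle-inequality steps behind these bounds on f_x can be written out explicitly:

  d(b_p, x) ≤ d(b_p, p) + d(p, x)  implies  f_x(p) = d(p, x) − d(b_p, x) + d(p, b_p) ≥ 0,
  d(p, x) ≤ d(p, b_p) + d(b_p, x)  implies  f_x(p) = d(p, x) − d(b_p, x) + d(p, b_p) ≤ 2 d(p, b_p).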

Note that Σ_{p ∈ P} f_x(p) − Σ_{q ∈ S} w_q f_x(q) does not equal cost(P, x) − Σ_{q ∈ S} w_q d(q, x). However, it does equal the difference between cost(P, x) and a weighted cost of the sampled points together with the centers in the approximation solution. To get a coreset as in Definition 1, we therefore need to add the centers of the approximation solution, with specific weights, to the coreset. Then, when the sample is sufficiently large, the union of the sampled points and the centers is an ε-coreset.

Our key contribution in this paper is to show that in the distributed setting, it suffices to choose b_p from the local approximation solution for the local data set containing p, rather than from an approximation solution for the global data set. Furthermore, the sampling and the weighting of the coreset points can be done in a local manner. In the following, we provide a formal verification of the discussion above. We have the following lemma for k-median, with f_x(p) = d(p, x) − d(b_p, x) + d(p, b_p), where b_p now denotes the closest center to p in the local approximation solution B_i for the local data set P_i containing p.

Lemma 2.

For k-median, the output of Algorithm 1 is an ε-coreset with probability at least 1 − δ, if t ≥ (c/ε²)(dim(F, P) + log(1/δ)) for a sufficiently large constant c.

Proof.

We want to show that for any set of centers x, the true cost of P on these centers is well approximated by the cost on the weighted coreset. Note that our coreset has two types of points: sampled points q ∈ S = ∪_i S_i with weights w_q, and local solution centers b ∈ B = ∪_i B_i with weights w_b. We use b_p to denote the nearest center to p in its local approximation solution, and P_b to denote the set of points having b as their closest center in the local approximation solution.

As mentioned above, we construct f_x(p) = d(p, x) − d(b_p, x) + d(p, b_p), the (shifted) difference between the cost of p and the cost of b_p on x, so that Lemma 1 can be applied to F = {f_x}. Note that 0 ≤ f_x(p) ≤ 2 d(p, b_p) by the triangle inequality, and S is sufficiently large and chosen according to weights proportional to d(p, b_p), so the conditions of Lemma 1 are met. Then we have

  | Σ_{p ∈ P} f_x(p) − Σ_{q ∈ S} w_q f_x(q) | ≤ 2ε Σ_{p ∈ P} d(p, b_p) = 2ε Σ_{i=1}^n cost(P_i, B_i) ≤ O(ε) cost(P, x),

where the last inequality follows from the fact that each B_i is a constant approximation solution for P_i.

Next, we show that the coreset is constructed such that Σ_{p ∈ P} f_x(p) − Σ_{q ∈ S} w_q f_x(q) is exactly the difference between the true cost and the weighted cost of the coreset, which then leads to the lemma.

Note that the centers are weighted such that

  Σ_{b ∈ B} w_b d(b, x) = Σ_{b ∈ B} ( |P_b| − Σ_{q ∈ S ∩ P_b} w_q ) d(b, x) = Σ_{p ∈ P} d(b_p, x) − Σ_{q ∈ S} w_q d(b_q, x).   (1)

Also note that w_q m_q = Σ_{z ∈ P} m_z / t for every q ∈ S and |S| = t, so Σ_{q ∈ S} w_q d(q, b_q) = Σ_{p ∈ P} d(p, b_p). Therefore

  Σ_{p ∈ P} f_x(p) − Σ_{q ∈ S} w_q f_x(q) = [ cost(P, x) − Σ_{q ∈ S} w_q d(q, x) ] − [ Σ_{p ∈ P} d(b_p, x) − Σ_{q ∈ S} w_q d(b_q, x) ].   (2)
By plugging (1) into (2), we have

  Σ_{p ∈ P} f_x(p) − Σ_{q ∈ S} w_q f_x(q) = cost(P, x) − ( Σ_{q ∈ S} w_q d(q, x) + Σ_{b ∈ B} w_b d(b, x) ),

which, combined with the bound above, implies the lemma. ∎

In [10] it is shown that dim(F, P) = O(kd). Therefore, by Lemma 2, when t = O((kd + log(1/δ))/ε²), the weighted cost of S ∪ B approximates the k-median cost of P for any set of centers, so S ∪ B is an ε-coreset for P. The total communication cost is bounded by O(mn), since even in the most general case, when every node only knows its neighbors, we can broadcast the n local costs with O(mn) communication (see Algorithm 3).

3.2 Proof of Theorem 1: k-means

For k-means we have a similar lemma: when t ≥ c((kd + log(1/δ))/ε⁴ + nk log(nk/δ)) for a sufficiently large constant c, the algorithm constructs an ε-coreset with probability at least 1 − δ. The key idea is the same as that for k-median: we use the centers b_p from the local approximation solutions as proxies for the original data points p, and show that the error between the total cost and the weighted sample cost is approximately the error between the cost of P and its sampled cost (compensated by the weighted centers), which is shown to be small by Lemma 1.

The key difference between k-means and k-median is that the triangle inequality applies directly to the k-median cost. In particular, for the k-median problem, d(p, b_p) is an upper bound for the error of using b_p in place of p on any set of centers x, i.e. |d(p, x) − d(b_p, x)| ≤ d(p, b_p) by the triangle inequality. Then we can construct f_x and m_p such that max_p f_x(p)/m_p is bounded. In contrast, for k-means the error |d(p, x)² − d(b_p, x)²| does not have such an upper bound. The main change to the analysis is that we divide the points into two categories: good points, whose costs approximately satisfy the triangle inequality (up to a factor of 1/ε), and bad points. The good points for a fixed set of centers x are defined as

  G_x = { p ∈ P : |d(p, x)² − d(b_p, x)²| ≤ Δ_p },

where the upper bound is Δ_p = d(p, b_p)²/ε. Good points we can bound as before. For bad points we can show that while the difference in cost may be larger than Δ_p, it must still be small, namely at most 2ε [ d(p, x)² + d(b_p, x)² ].

Formally, the functions are restricted to be defined only over good points:

  f_x(p) = ( d(p, x)² − d(b_p, x)² + Δ_p ) · 1[p ∈ G_x].

Then the difference between the true cost and the weighted cost of the coreset is decomposed into three terms:

  the error of Lemma 1 applied to the functions f_x over the good points,   (3)
  the contribution of the bad points in P,   (4)
  the contribution of the bad points in the sample S.   (5)

Lemma 1 bounds (3) by ε' Σ_{p ∈ P} 2Δ_p = (2ε'/ε) Σ_i cost(P_i, B_i), but we need an accuracy of ε' = O(ε²) to compensate for the 1/ε factor in the upper bound Δ_p, resulting in a 1/ε⁴ factor in the sample complexity.

We begin by bounding (4). Note that for each term in (4), the point p is bad, i.e. |d(p, x)² − d(b_p, x)²| > Δ_p. Furthermore, this can happen only when p and b_p are close to each other and far away from x. In Lemma 3 we use this to show that for every bad point, |d(p, x)² − d(b_p, x)²| ≤ 2ε [ d(p, x)² + d(b_p, x)² ]. The details are presented in the appendix.

Using Lemma 3, (4) can be bounded by 2ε Σ_{p ∈ P} [ d(p, x)² + d(b_p, x)² ] = O(ε) cost(P, x), since Σ_{p ∈ P} d(b_p, x)² ≤ 2 cost(P, x) + 2 Σ_i cost(P_i, B_i) = O(cost(P, x)).

Similarly, by the definition of the good points and Lemma 3, (5) is bounded by O(ε) [ Σ_{q ∈ S} w_q d(q, x)² + Σ_{q ∈ S} w_q d(b_q, x)² ].

Note that the expectation of Σ_{q ∈ S} w_q d(b_q, x)² is Σ_{p ∈ P} d(b_p, x)². By a sampling argument (Lemma 4), if t is sufficiently large, then this quantity is at most a constant factor larger than its expectation with high probability. Then (5) is bounded by O(ε) cost(P, x) + O(ε) Σ_i cost(P_i, B_i), where Σ_i cost(P_i, B_i) is at most a constant factor more than the optimum cost.

Since each of (3), (4), and (5) is O(ε) cost(P, x), their sum is of the same order. Combining the above bounds, we have that the weighted cost of the coreset is within a (1 ± O(ε)) factor of cost(P, x). The proof is then completed by rescaling ε by a suitable constant and bounding dim(F, P) as in [10].

4 Effect of Network Topology on Communication Cost

In the previous section, we presented a distributed coreset construction algorithm. If there is a central coordinator in the communication graph, then we can simply send the local portions of the coreset to the coordinator which can perform the clustering task. The total communication cost is just the size of the coreset.

In this section, we consider the distributed clustering tasks where the nodes are arranged in some arbitrary connected topology, and can only communicate with their neighbors. We propose a message passing approach for globally sharing information, and use it for collecting information for coreset construction and sharing the local portions of the coreset. We also consider the special case when the graph is a rooted tree.

  Input: Local datasets P_1, …, P_n; for each node v_i, the set N_i of its neighbors; an α-approximation algorithm A_α for weighted clustering instances.

    Round 1: on each node v_i
      Construct its local portion D_i of an ε-coreset by Algorithm 1, using Message-Passing for communicating the local costs.
    Round 2: on each node v_i
      Call Message-Passing(D_i, N_i).
      x_i ← A_α(∪_{j=1}^n D_j).

  Output: x_i
Algorithm 2 Distributed clustering on a graph
  Input: I_i, the message originating at node v_i; N_i, the neighbors of v_i.

      Let R_i denote the set of messages received so far.
      Initialize R_i = {I_i}, and send I_i to all the neighbors.
      While R_i ≠ {I_1, …, I_n}: if a message I_j ∉ R_i is received, add I_j to R_i and send I_j to all the neighbors.
Algorithm 3 Message-Passing(I_i, N_i)
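A minimal Python simulation of the flooding behavior of Message-Passing on an adjacency-list graph; the names and the synchronous round structure are our own simplifications.

def broadcast_all(adj, initial):
    # Simulate Message-Passing run simultaneously from every node.
    # adj[i]: list of neighbors of node i; initial[i]: the message
    # originating at node i.  Each (origin, value) pair crosses each edge
    # at most once per direction, so roughly 2m messages per origin.
    n = len(adj)
    known = [{i: initial[i]} for i in range(n)]   # information received so far
    pending = [(j, i, initial[i]) for i in range(n) for j in adj[i]]  # (receiver, origin, value)
    messages = 0
    while pending:
        nxt = []
        for receiver, origin, value in pending:
            messages += 1
            if origin not in known[receiver]:
                known[receiver][origin] = value
                # forward the newly learned message to all neighbors (BFS-style)
                nxt.extend((j, origin, value) for j in adj[receiver])
        pending = nxt
    return known, messages

In Algorithm 2 this routine is first invoked with initial[i] = cost(P_i, B_i) (one number per node) and then with initial[i] set to node v_i's portion of the coreset.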

4.1 General Graphs

We now present the main result for distributed clustering on graphs.

Theorem 2.

Given an α-approximation algorithm for weighted k-means (k-median respectively) as a subroutine, there exists an algorithm that, with probability at least 1 − δ, outputs a (1 + ε)α-approximation solution for distributed k-means (k-median respectively) clustering. The total communication cost is O(m((kd + log(1/δ))/ε⁴ + nk log(nk/δ))) for k-means, and O(m((kd + log(1/δ))/ε² + nk)) for k-median.

Proof.

The details are presented in Algorithm 2. By Theorem 1, the union of the outputs of Algorithm 1 on all nodes is an ε-coreset for P. Observe that in Algorithm 3, for any j, the message I_j propagates on the graph in a breadth-first-search style, so at the end every node receives I_j. This holds for all j, so every node has a copy of the entire coreset at the end, and thus the output is a (1 + ε)α-approximation solution.

Also observe that in Algorithm 3, for any node v_i and any message I_j, node v_i sends I_j to its neighbors at most once, so broadcasting I_j costs O(m) communication. The communication cost of Algorithm 3 is therefore O(m) times the total size of the messages. The total communication cost of Algorithm 2 then follows from the size of the coreset constructed. ∎

In contrast, an approach where each node constructs an ε-coreset for k-means on its local data and sends it to all other nodes incurs a communication cost of O(mn(kd + log(1/δ))/ε⁴). Our algorithm significantly reduces this.

4.2 Rooted Trees

Our algorithm can also be applied on a rooted tree, and compares favorably to other approaches involving coresets [26]. We can restrict message passing to operating along this tree, leading to the following theorem for this special case.

Theorem 3.

Given an α-approximation algorithm for weighted k-means (k-median respectively) as a subroutine, there exists an algorithm that, with probability at least 1 − δ, outputs a (1 + ε)α-approximation solution for distributed k-means (k-median respectively) clustering on a rooted tree of height h. The total communication cost is O(h((kd + log(1/δ))/ε⁴ + nk log(nk/δ))) for k-means, and O(h((kd + log(1/δ))/ε² + nk)) for k-median.

Proof.

We can construct the distributed coreset using Algorithm 1. In the construction, the costs of the local approximation solutions are sent from every node to the root, and their sum is sent back to every node by the root. After the construction, the local portions of the coreset are sent from every node to the root. A local portion travels at most h edges, so the total communication cost is O(h) times the size of the coreset. Once the coreset is collected at the root, the α-approximation algorithm can be applied centrally, and the resulting centers can be sent back to all nodes. ∎
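For intuition, the up-and-down pattern on a rooted tree can be sketched as follows (a simulation with names of our choosing; only the cost-aggregation step is shown, since sharing the coreset portions follows the same routes toward the root):

def tree_total_cost(parent, local_costs):
    # parent[i] is the parent of node i, with parent[root] == root.
    # Each node conceptually forwards one number (its subtree's total cost)
    # to its parent, and the root sends the grand total back down, so only
    # O(1) numbers cross each edge for this step.
    n = len(parent)
    children = [[] for _ in range(n)]
    root = 0
    for i, p in enumerate(parent):
        if p == i:
            root = i
        else:
            children[p].append(i)

    def subtree(v):
        return local_costs[v] + sum(subtree(c) for c in children[v])

    total = subtree(root)
    return total  # broadcast down the tree: every node learns this value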

Our approach improves over the communication cost of [26], which grows quartically with h for k-means and quadratically with h for k-median. (Their algorithm uses a coreset construction as a subroutine; the construction they originally used builds larger coresets, so throughout this paper, when we compare to [26], we assume they use the coreset construction technique of [10] to reduce their coreset size and communication cost.) The algorithm in [26] builds on each node a coreset for the union of the coresets from its children, and thus needs accuracy ε/h at the low levels of the tree to prevent the accumulation of errors. Since the coreset construction subroutine has quadratic dependence on 1/ε for k-median (quartic for k-means), their algorithm then has quadratic dependence on h (quartic for k-means). Our algorithm does not build coresets on top of coresets, resulting in a better dependence on the height h of the tree.

In a general graph, any rooted tree has height at least half the diameter. For sensors in a grid network, this implies h = Ω(√n). In this case, our algorithm gains a significant improvement over existing algorithms.

5 Experiments

In our experiments we seek to determine whether our algorithm is effective for the clustering tasks and how it compares to other distributed coreset algorithms. (Our theoretical analysis shows that our algorithm has better bounds on the communication cost; since those bounds come from worst-case analysis, it is meaningful to verify that our algorithm also empirically outperforms other distributed coreset algorithms.) We present the k-means cost of the solution produced by our algorithm with varying communication cost, and compare to those of other algorithms when they use the same amount of communication.

Data sets: Following the setup of [26, 4], for the synthetic data we randomly choose k centers from the standard Gaussian distribution in ℝ^d, and sample an equal number of points from a Gaussian distribution around each center. Note that, as in [26, 4], we use the cost of these generating centers as a baseline for comparing the clustering quality. We choose the following real-world data sets from [3]: Spam (4601 points in ℝ^57), Pendigits (10992 points in ℝ^16), Letter (20000 points in ℝ^16), and ColorHistogram of the Corel Image data set (68040 points in ℝ^32). We use a fixed number of clusters k for these data sets. We further choose YearPredictionMSD (515345 points in ℝ^90) for larger-scale experiments, with a separate choice of k.

Experimental Methodology: To transform the centralized clustering data sets into distributed data sets we first generate a communication graph connecting local sites, and then partition the data into local data sets. To evaluate our algorithm, we consider several network topologies and partition methods.

The algorithms are evaluated on three types of communication graphs: random, grid, and preferential. The random graphs are Erdős-Rényi graphs, generated by including each potential edge independently with a fixed probability p. The preferential graphs are generated according to the preferential attachment mechanism of the Barabási-Albert model [1]. The number of sites grows with the size of the data set: for Spam, Pendigits, and Letter we use the smallest random/preferential and grid graphs; for the synthetic data set and ColorHistogram we use larger graphs; and for the large data set YearPredictionMSD we use the largest graphs.

The data is then distributed over the local sites. When the communication network is a random graph, we consider three partition methods: uniform, similarity-based, and weighted. In the uniform partition, each data point in the global data set is assigned to a local site with equal probability. In the similarity-based partition, each site has an associated data point randomly selected from the global data; each data point in the global data is then assigned to a site with probability proportional to its similarity to the associated point of the site, where the similarities are computed by a Gaussian kernel function. In the weighted partition, each local site is assigned a random weight, and each data point is then distributed to a local site with probability proportional to the site's weight. When the network is a grid graph, we consider the similarity-based and weighted partitions. When the network is a preferential graph, we consider the degree-based partition, where each point is assigned to a site with probability proportional to the site's degree.
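The instance-generation step can be sketched as follows; the graph constructions use networkx, and the specific parameters (edge probability, kernel bandwidth, per-site weights, the Barabasi-Albert attachment parameter) are placeholders rather than the values used in the experiments.

import numpy as np
import networkx as nx

def make_graph(kind, n_sites, p=0.3, seed=0):
    if kind == "random":           # Erdos-Renyi G(n, p)
        return nx.erdos_renyi_graph(n_sites, p, seed=seed)
    if kind == "grid":             # 2D grid; n_sites assumed to be a perfect square
        side = int(round(np.sqrt(n_sites)))
        return nx.convert_node_labels_to_integers(nx.grid_2d_graph(side, side))
    if kind == "preferential":     # Barabasi-Albert preferential attachment
        return nx.barabasi_albert_graph(n_sites, 3, seed=seed)
    raise ValueError(kind)

def partition_data(X, G, method, bandwidth=1.0, seed=0):
    # Assign every global point to one site and return the list of local sets.
    rng = np.random.default_rng(seed)
    n_sites = G.number_of_nodes()
    if method == "uniform":
        probs = np.full((len(X), n_sites), 1.0 / n_sites)
    elif method == "weighted":
        site_w = rng.random(n_sites)               # placeholder site weights
        probs = np.tile(site_w / site_w.sum(), (len(X), 1))
    elif method == "degree":
        deg = np.array([G.degree(v) for v in G.nodes()], dtype=float)
        probs = np.tile(deg / deg.sum(), (len(X), 1))
    elif method == "similarity":
        anchors = X[rng.choice(len(X), n_sites, replace=False)]
        d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(axis=2)
        sim = np.exp(-d2 / (2 * bandwidth ** 2))   # Gaussian kernel similarities
        probs = sim / sim.sum(axis=1, keepdims=True)
    else:
        raise ValueError(method)
    sites = np.array([rng.choice(n_sites, p=probs[i]) for i in range(len(X))])
    return [X[sites == i] for i in range(n_sites)]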

To measure the quality of the coreset generated, we run Lloyd's algorithm on the coreset and on the global data respectively to get two solutions, and compute the ratio between the costs of the two solutions over the global data. The average ratio over 10 runs is reported. We compare our algorithm with the naïve method of combining a coreset from each local data set (referred to as COMBINE below) and with the algorithm of [26] (Zhang et al.). When running the algorithm of Zhang et al., we restrict the general communication network to a spanning tree by picking a root uniformly at random and performing a breadth-first search.
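The evaluation loop can then be sketched as below, assuming the kmeans_cost and distributed_coreset functions from the earlier sketches are in scope; weighted Lloyd's is run through scikit-learn's sample_weight support, and negative coreset weights are clipped to zero as a practical simplification of our own.

import numpy as np
from sklearn.cluster import KMeans

def weighted_lloyds(points, weights, k, seed=0):
    # Lloyd's algorithm with k-means++ initialization on a weighted set.
    w = np.clip(weights, 0.0, None)       # clip: KMeans expects nonnegative weights
    km = KMeans(n_clusters=k, n_init=10, random_state=seed)
    km.fit(points, sample_weight=w)
    return km.cluster_centers_

def cost_ratio(X, local_sets, k, t, seed=0):
    # Ratio of the global k-means cost of the coreset-based solution to the
    # global k-means cost of the solution computed on the full data.
    S, w = distributed_coreset(local_sets, k, t, seed=seed)
    centers_coreset = weighted_lloyds(S, w, k, seed=seed)
    centers_full = weighted_lloyds(X, np.ones(len(X)), k, seed=seed)
    return kmeans_cost(X, centers_coreset) / kmeans_cost(X, centers_full)

For the Zhang et al. baseline, a BFS spanning tree rooted at a randomly chosen node can be obtained with networkx's bfs_tree function.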

(a) random graph, uniform
(b) random graph, similarity-based
(c) random graph, weighted
(d) grid graph, similarity-based
(e) grid graph, weighted
(f) preferential graph, degree-based
Figure 2: k-means cost (normalized by the baseline) vs. communication cost over the graphs. The titles indicate the network topology and partition method.
(a) random graph, uniform
(b) random graph, similarity-based
(c) random graph, weighted
(d) grid graph, similarity-based
(e) grid graph, weighted
(f) preferential graph, degree-based
Figure 3: k-means cost (normalized by the baseline) vs. communication cost over the spanning trees of the graphs. The titles indicate the network topology and partition method.

Results: Here we focus on the results of the largest data set YearPredictionMSD, and in Appendix B we present the experimental results for all the data sets.

Figure 2 shows the results over different network topologies and partition methods. We observe that the algorithms perform well with much smaller coreset sizes than predicted by the theoretical bounds: the coreset size, and thus the communication, needed to reach a near-optimal cost ratio is only a small fraction of the theoretical bound.

In the uniform partition, our algorithm performs nearly the same as COMBINE. This is not surprising, since our algorithm reduces to the COMBINE algorithm when each local site has the same cost and the two algorithms use the same amount of communication: in our algorithm the sizes of the local samples are proportional to the costs of the local solutions, so it samples the same number of points from each local data set, which is equivalent to COMBINE with the same amount of communication. In the similarity-based partition, similar results are observed, as it also leads to balanced local costs. However, when the local sites have significantly different costs (as in the weighted and degree-based partitions), our algorithm outperforms COMBINE. As observed in Figure 2, the costs of our solutions consistently improve over those of COMBINE, so our algorithm saves communication cost to achieve the same approximation ratio.

Figure 3 shows the results over the spanning trees of the graphs. Our algorithm performs much better than the algorithm of Zhang et al., achieving a substantially lower cost for the same communication. This is due to the fact that their algorithm needs larger coresets to prevent the accumulation of errors when constructing coresets from component coresets, and thus needs a higher communication cost to achieve the same approximation ratio.

Similar results are observed on the other datasets, which are presented in Appendix B.

6 Additional Related Work

Many empirical algorithms adapt the centralized algorithms to the distributed setting. They generally provide no bound for the clustering quality or the communication cost. For instance, a technique is proposed in [12] to adapt several iterative center-based data clustering algorithms including Lloyd’s algorithm for -means to the distributed setting, where sufficient statistics instead of the raw data are sent to a central coordinator. This approach involves transferring data back and forth in each iteration, and thus the communication cost depends on the number of iterations. Similarly, the communication costs of the distributed clustering algorithms proposed in [9] and [25] depend on the number of iterations. Some other algorithms gather local summaries and then perform global clustering on the summaries. The distributed density-based clustering algorithm in [17] clusters and computes summaries for the local data at each node, and sends the local summaries to a central node where the global clustering is carried out. This algorithm only considers the flat two-tier topology. Some in-network aggregation schemes for computing statistics over distributed data are useful for such distributed clustering algorithms. For example, an algorithm is provided in [7] for approximate duplicate-sensitive aggregates across distributed data sets, such as SUM. An algorithm is proposed in [14] for power-preserving computation of order statistics such as quantile.

Several coreset construction algorithms have been proposed for k-median, k-means and k-line median clustering [16, 6, 15, 20, 10]. For example, the algorithm in [10] constructs a coreset of size Õ(kd/ε²) whose cost approximates that of the original data up to a (1 ± ε) factor with respect to k-median in ℝ^d. All of these algorithms consider coreset construction in the centralized setting, while our construction algorithm is for the distributed setting.

There has also been work attempting to parallelize clustering algorithms. [11] showed that coresets could be constructed in parallel and then merged together. In Scalable k-means++ [4], Bahmani et al. adapted k-means++ to the parallel setting. In this setting a centralized problem is broken up and distributed to several processors with the aim of reducing computation time. In contrast to the distributed setting, the communication costs are ignored.

There is also related work providing approximate solutions for k-median based on random sampling [5]. In particular, it is shown that given an i.i.d. sample of the data whose size depends only on k and the desired error, there exists an algorithm that outputs a solution with an average cost bounded by twice the optimal average cost plus an additive error term. If we convert this guarantee to a multiplicative approximation factor, the factor depends on the optimal average cost. When there are outlier points far away from all other points, the optimal average cost can be very small after normalization, and then the multiplicative approximation factor is large. The coreset approach provides better guarantees. Additionally, that approach is not applicable to k-means.

Acknowledgements

This work was supported by ONR grant N00014-09-1-0751, AFOSR grant FA9550-09-1-0538, and by a Google Research Award. We thank Le Song for generously allowing us to use his computer cluster.

References

  • [1] R. Albert and A.-L. Barabási. Statistical mechanics of complex networks. Reviews of Modern Physics, 2002.
  • [2] P. Awasthi and M. Balcan. Center based clustering: A foundational perspective. Survey Chapter in Handbook of Cluster Analysis (Manuscript), 2013.
  • [3] K. Bache and M. Lichman. UCI machine learning repository, 2013.
  • [4] B. Bahmani, B. Moseley, A. Vattani, R. Kumar, and S. Vassilvitskii. Scalable k-means++. In Proceedings of the International Conference on Very Large Data Bases, 2012.
  • [5] S. Ben-David. A framework for statistical clustering with a constant time approximation algorithms for k-median clustering. Proceedings of Annual Conference on Learning Theory, 2004.
  • [6] K. Chen. On k-median clustering in high dimensions. In Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms, 2006.
  • [7] J. Considine, F. Li, G. Kollios, and J. Byers. Approximate aggregation techniques for sensor databases. In Proceedings of the International Conference on Data Engineering, 2004.
  • [8] J. C. Corbett, J. Dean, M. Epstein, A. Fikes, C. Frost, J. Furman, S. Ghemawat, A. Gubarev, C. Heiser, P. Hochschild, et al. Spanner: Google's globally-distributed database. In Proceedings of the USENIX Symposium on Operating Systems Design and Implementation, 2012.
  • [9] S. Datta, C. Giannella, H. Kargupta, et al. K-means clustering over peer-to-peer networks. In Proceedings of the International Workshop on High Performance and Distributed Mining, 2005.
  • [10] D. Feldman and M. Langberg. A unified framework for approximating and clustering data. In Proceedings of the Annual ACM Symposium on Theory of Computing, 2011.
  • [11] D. Feldman, A. Sugaya, and D. Rus. An effective coreset compression algorithm for large scale sensor networks. In Proceedings of the International Conference on Information Processing in Sensor Networks, 2012.
  • [12] G. Forman and B. Zhang. Distributed data clustering can be efficient and exact. ACM SIGKDD Explorations Newsletter, 2000.
  • [13] S. Greenhill and S. Venkatesh. Distributed query processing for mobile surveillance. In Proceedings of the International Conference on Multimedia, 2007.
  • [14] M. Greenwald and S. Khanna. Power-conserving computation of order-statistics over sensor networks. In Proceedings of the ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, 2004.
  • [15] S. Har-Peled and A. Kushal. Smaller coresets for k-median and k-means clustering. Discrete & Computational Geometry, 2007.
  • [16] S. Har-Peled and S. Mazumdar. On coresets for k-means and k-median clustering. In Proceedings of the Annual ACM Symposium on Theory of Computing, 2004.
  • [17] E. Januzaj, H. Kriegel, and M. Pfeifle. Towards effective and efficient distributed clustering. In Workshop on Clustering Large Data Sets in the IEEE International Conference on Data Mining, 2003.
  • [18] T. Kanungo, D. M. Mount, N. S. Netanyahu, C. D. Piatko, R. Silverman, and A. Y. Wu. A local search approximation algorithm for k-means clustering. In Proceedings of the Annual Symposium on Computational Geometry, 2002.
  • [19] H. Kargupta, W. Huang, K. Sivakumar, and E. Johnson. Distributed clustering using collective principal component analysis. Knowledge and Information Systems, 2001.
  • [20] M. Langberg and L. Schulman. Universal ε-approximators for integrals. In Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms, 2010.
  • [21] S. Li and O. Svensson. Approximating k-median via pseudo-approximation. In Proceedings of the Annual ACM Symposium on Theory of Computing, 2013.
  • [22] Y. Li, P. M. Long, and A. Srinivasan. Improved bounds on the sample complexity of learning. In Proceedings of the eleventh annual ACM-SIAM Symposium on Discrete Algorithms, 2000.
  • [23] S. Mitra, M. Agrawal, A. Yadav, N. Carlsson, D. Eager, and A. Mahanti. Characterizing web-based video sharing workloads. ACM Transactions on the Web, 2011.
  • [24] C. Olston, J. Jiang, and J. Widom. Adaptive filters for continuous queries over distributed data streams. In Proceedings of the ACM SIGMOD International Conference on Management of Data, 2003.
  • [25] D. Tasoulis and M. Vrahatis. Unsupervised distributed clustering. In Proceedings of the International Conference on Parallel and Distributed Computing and Networks, 2004.
  • [26] Q. Zhang, J. Liu, and W. Wang. Approximate clustering on distributed data streams. In Proceedings of the IEEE International Conference on Data Engineering, 2008.

Appendix A Proofs for Section 3

The proof of Lemma 1 follows from the analysis in [10], although not explicitly stated there. We begin with the following theorem for uniform sampling on a function space. The theorem is from [10] but rephrased for convenience.

Theorem 4 (Theorem 6.9 in [10]).

Let F be a set of functions from a set P to ℝ_{≥0}, and let 0 < ε, δ < 1. Let S be a sample of

  |S| ≥ (c/ε²)(dim(F, P) + log(1/δ))

i.i.d. items from P, where c is a sufficiently large constant. Then, with probability at least 1 − δ, for any f ∈ F and any r ≥ 0,

  | |{p ∈ P : f(p) ≤ r}| / |P| − |{q ∈ S : f(q) ≤ r}| / |S| | ≤ ε.

Proof of Lemma 1.

Without loss of generality, assume that every m_p is a positive integer (scaling all the m_p by a common factor changes neither the sampling distribution nor the weights w_q). Define P' as follows: for each p ∈ P, include m_p copies of p in P', and define f'(p') = f(p)/m_p for every copy p' of p. Then S is equivalent to a sample drawn i.i.d. and uniformly at random from P'. We now apply Theorem 4 on F' = {f' : f ∈ F} and P'. By Theorem 4 (integrating its guarantee over r), we know that for any f ∈ F,

  | (1/|P'|) Σ_{p' ∈ P'} f'(p') − (1/|S|) Σ_{q ∈ S} f'(q) | ≤ ε max_{p' ∈ P'} f'(p') = ε max_{p ∈ P} f(p)/m_p.   (6)

The lemma then follows from multiplying both sides of (6) by |P'| = Σ_{z ∈ P} m_z, since (|P'|/|S|) f'(q) = w_q f(q) for q ∈ S. Also note that the dimension of F' over P' is the same as that of F over P, as pointed out by [10]. ∎

Lemma 3.

If |d(p, x)² − d(b_p, x)²| > Δ_p = d(p, b_p)²/ε, then |d(p, x)² − d(b_p, x)²| ≤ 2ε [ d(p, x)² + d(b_p, x)² ].

Proof.

We first have, by the triangle inequality,

  |d(p, x) − d(b_p, x)| ≤ d(p, b_p).

Then, by the assumption |d(p, x)² − d(b_p, x)²| > d(p, b_p)²/ε,

  d(p, b_p)²/ε < |d(p, x) − d(b_p, x)| [ d(p, x) + d(b_p, x) ] ≤ d(p, b_p) [ d(p, x) + d(b_p, x) ].

Therefore, dividing by d(p, b_p)/ε, we have

  d(p, b_p) < ε [ d(p, x) + d(b_p, x) ].

Then

  d(p, x)² − d(b_p, x)² ≤ d(p, b_p) [ d(p, x) + d(b_p, x) ] < ε [ d(p, x) + d(b_p, x) ]² ≤ 2ε [ d(p, x)² + d(b_p, x)² ].

Similarly, d(b_p, x)² − d(p, x)² ≤ 2ε [ d(p, x)² + d(b_p, x)² ]. The lemma follows from the last two inequalities. ∎

Lemma 4 (Corollary 15.4 in [10]).

Let , and for a sufficiently large . Then with probability at least ,

Appendix B Complete Experimental Results

Here we present the results of all the data sets over different network topologies and data partition methods.

Figure 4 shows the results of all the data sets on random graphs. The first column of Figure 4 shows that our algorithm and COMBINE perform nearly the same in the uniform data partition. This is not surprising, since our algorithm reduces to the COMBINE algorithm when each local site has the same cost and the two algorithms use the same amount of communication: in our algorithm the sizes of the local samples are proportional to the costs of the local solutions, so it samples the same number of points from each local data set, which is equivalent to COMBINE with the same amount of communication. In the similarity-based partition, similar results are observed, as this partition method also leads to balanced local costs. However, in the weighted partition, where local sites have significantly different contributions to the total cost, our algorithm outperforms COMBINE: it achieves a noticeably lower k-means cost, and thus saves communication cost to achieve the same approximation ratio.

Figure 5 shows the results of all the data sets on grid and preferential graphs. Similar to the results on random graphs, our algorithm performs nearly the same as COMBINE in the similarity-based partition and outperforms COMBINE in the weighted and degree-based partitions. Furthermore, Figures 4 and 5 also show that the performance of our algorithm barely changes over different network topologies and partition methods.

Figure 6 shows the results of all the data sets on the spanning trees of the random graphs, and Figure 7 shows those on the spanning trees of the grid and preferential graphs. Compared to the algorithm of Zhang et al., our algorithm consistently shows much better performance on all the data sets in different settings: it achieves a noticeably lower k-means cost, and thus can reach an even better approximation ratio with only a fraction of the communication cost. This is because the algorithm of Zhang et al. constructs coresets from component coresets and needs larger coresets to prevent the accumulation of errors. Figure 6 also shows that although their costs decrease as the communication increases, the decrease is slower on larger graphs (e.g., as in the experiments for YearPredictionMSD). This is due to the fact that the spanning tree of a larger graph has larger height, leading to more accumulation of errors; in this case, more communication is needed to prevent the accumulation.

Figure 4: k-means cost on random graphs. Columns: random graph with uniform partition, random graph with similarity-based partition, and random graph with weighted partition. Rows: Spam, Pendigits, Letter, synthetic, ColorHistogram, and YearPredictionMSD.
Figure 5: k-means cost on grid and preferential graphs. Columns: grid graph with similarity-based partition, grid graph with weighted partition, and preferential graph with degree-based partition. Rows: Spam, Pendigits, Letter, synthetic, ColorHistogram, and YearPredictionMSD.
Figure 6: k-means cost on the spanning trees of the random graphs. Columns: random graph with uniform partition, random graph with similarity-based partition, and random graph with weighted partition. Rows: Spam, Pendigits, Letter, synthetic, ColorHistogram, and YearPredictionMSD.
Figure 7: k-means cost on the spanning trees of the grid and preferential graphs. Columns: grid graph with similarity-based partition, grid graph with weighted partition, and preferential graph with degree-based partition. Rows: Spam, Pendigits, Letter, synthetic, ColorHistogram, and YearPredictionMSD.