High Dimensional Clustering

A Probabilistic Method for Clustering High Dimensional Data

Abstract.

In general, the clustering problem is NP-hard, and global optimality cannot be established for non-trivial instances. For high-dimensional data, distance-based methods for clustering or classification face an additional difficulty: the unreliability of distances in very high-dimensional spaces. We propose a probabilistic, distance-based, iterative method for clustering data in very high-dimensional space, using the ℓ1-metric, which is less sensitive to high dimensionality than the Euclidean distance. For K clusters in ℝ^n, the problem decomposes into K·n one-dimensional problems, coupled by probabilities, and an iteration reduces to finding weighted medians of points on a line. The complexity of the algorithm is linear in the dimension of the data space, and its performance was observed to improve significantly as the dimension increases.

Key words and phrases:
Clustering, ℓ1-norm, high-dimensional data, continuous location
2010 Mathematics Subject Classification:
Primary 62H30, 90B85; Secondary 90C59

1. Introduction

The emergence and growing applications of big data have underscored the need for efficient algorithms based on optimality principles, and scalable methods that can provide valuable insights at a reasonable computational cost.

In particular, problems with high-dimensional data have arisen in several scientific and technical areas, such as genetics [19], medical imaging [29] and spatial databases [21]. These problems pose a special challenge because of the unreliability of distances in very high dimensions. In such problems it is often advantageous to use the ℓ1-metric, which is less sensitive to the "curse of dimensionality" than the Euclidean distance.

We propose a new probabilistic distance-based method for clustering data in very high-dimensional spaces. The method uses the ℓ1-distance and computes the cluster centers as weighted medians of the given data points. Our algorithm resembles well-known techniques such as fuzzy clustering [9], K-means, and inverse distance interpolation [26].

The cluster membership probabilities are derived from necessary optimality conditions for an approximate problem, and decompose a clustering problem with K clusters in ℝ^n into K·n one-dimensional problems, which can be solved separately. The algorithm features a straightforward implementation and a polynomial running time; in particular, its complexity is linear in the dimension n. In numerical experiments it outperformed several commonly used methods, with better results for higher dimensions.

While the cluster membership probabilities simplify our notation, and link our results to the theory of subjective probability, these probabilities are not needed by themselves, since they are given in terms of distances that have to be computed at each iteration.

1.1. Notation

We use the abbreviation \overline{1,N} := \{1, 2, \ldots, N\} for the indicated index set. The i-th component of a vector x ∈ ℝ^n is denoted x_i. The ℓ_p-norm of a vector x is

\|x\|_p = \Big( \sum_{i=1}^{n} |x_i|^p \Big)^{1/p}, \quad p \ge 1,

and the associated distance between two vectors x and y is d_p(x, y) = \|x - y\|_p, in particular, the Euclidean distance with p = 2, and the ℓ1-distance,

d_1(x, y) = \sum_{i=1}^{n} |x_i - y_i|.    (1)

1.2. The clustering problem

Given

  • a set X = {x_1, x_2, …, x_N} of N points in ℝ^n,

  • their positive weights w_1, w_2, …, w_N, and

  • an integer K, 1 ≤ K ≤ N,

partition X into K clusters C_1, …, C_K, defined as disjoint sets where the points in each cluster are similar (in some sense), and points in different clusters are dissimilar. If by similar is meant close in some metric d(·,·), we have a metric (or distance-based) clustering problem, in particular ℓ1-clustering if the ℓ1-distance is used, Euclidean clustering for the ℓ2-distance, etc.

1.3. Centers

In metric clustering each cluster has a representative point, or center, and distances to clusters are defined as the distances to their centers. The center c_k of a cluster C_k is a point that minimizes the sum of weighted distances to all points of the cluster,

c_k \in \arg\min_{c} \; \sum_{x_i \in C_k} w_i \, d(x_i, c).    (2)

Thus, the metric clustering problem can be formulated as follows: Given X, the weights and K as above, find centers c_1, …, c_K so as to minimize

\sum_{k=1}^{K} \; \sum_{x_i \in C_k} w_i \, d(x_i, c_k),    (L.K)

where C_k is the cluster of points in X assigned to the center c_k.

1.4. Location problems

Metric clustering problems often arise in location analysis, where X is the set of the locations of customers, the weights w_i are their demands, and it is required to locate K facilities to serve the customers optimally in the sense of total weighted distance traveled. The problem (L.K) is then called a multi-facility location problem, or a location-allocation problem, because it is required to locate the centers, and to assign or allocate the points to them.

Problem (L.K) is trivial for K = N (every point is its own center) and reduces for K = 1 to the single facility location problem: find the location of a center c so as to minimize the sum of weighted distances,

\min_{c} \; \sum_{i=1}^{N} w_i \, d(x_i, c).    (L.1)

For K ≥ 2, the problem (L.K) is NP-hard in general [24], while the planar case can be solved in time polynomial in N, [13].

1.5. Probabilistic approximation

(L.K) can be approximated by a continuous problem

\min_{c_1, \ldots, c_K} \; \sum_{i=1}^{N} \sum_{k=1}^{K} p_k(x_i) \, w_i \, d(x_i, c_k),    (P.K)

where rigid assignments are replaced by probabilistic (soft) assignments, expressed by probabilities p_k(x_i) that a point x_i belongs to the cluster C_k.

For each point x_i the cluster membership probabilities sum to 1, and are assumed to depend on the distances d(x_i, c_k) as follows:

membership in a cluster is more likely the closer its center is.    (A)

Given these probabilities, the problem (P.K) can be decomposed into K single facility location problems,

\min_{c_k} \; \sum_{i=1}^{N} p_k(x_i) \, w_i \, d(x_i, c_k),    (P.k)

for k = 1, …, K. The solutions c_k of the problems (P.k) are then used to calculate the new distances d(x_i, c_k) for all i and k, and from them, new probabilities p_k(x_i), etc.

1.6. The case for the ℓ1 norm

In high dimensions, distances between points become unreliable [7], and this in particular "makes a proximity query meaningless and unstable because there is poor discrimination between the nearest and furthest neighbor" [1]. For the Euclidean distance

\|x - y\|_2^2 = \|x\|_2^2 - 2 \langle x, y \rangle + \|y\|_2^2    (3)

between random points x, y ∈ ℝ^n, the cross products in (3) tend to cancel for very large n, and consequently,

\|x - y\|_2^2 \approx \|x\|_2^2 + \|y\|_2^2.

In particular, if x, y are random points on the unit sphere in ℝ^n, then \|x - y\|_2 ≈ \sqrt{2} for very large n. This "curse of high dimensionality" limits the applicability of distance-based methods in high dimension.

The ℓ1-distance is less sensitive to high dimensionality, and has been shown to "provide the best discrimination in high-dimensional data spaces", [1]. We use it throughout this paper.
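For illustration only, the following Python sketch (ours, not from the paper; it uses NumPy and uniformly distributed synthetic points) estimates the relative contrast (d_max - d_min)/d_min between the farthest and nearest neighbor of a random query point, under the ℓ1 and Euclidean distances, as the dimension n grows. It is merely meant to make the discrimination effect described in [1] visible numerically.

import numpy as np

rng = np.random.default_rng(0)

def relative_contrast(n, N=1000):
    """(d_max - d_min) / d_min for a random query point against N random points in R^n."""
    points = rng.random((N, n))
    query = rng.random(n)
    diff = np.abs(points - query)
    d1 = diff.sum(axis=1)                   # l1 distances
    d2 = np.sqrt((diff ** 2).sum(axis=1))   # Euclidean distances
    contrast = lambda d: (d.max() - d.min()) / d.min()
    return contrast(d1), contrast(d2)

for n in (2, 10, 100, 1000, 10000):
    c1, c2 = relative_contrast(n)
    print(f"n={n:6d}   l1 contrast={c1:7.3f}   l2 contrast={c2:7.3f}")

In such runs the printed contrasts typically shrink as n grows, with the ℓ1 contrast remaining somewhat larger than the Euclidean one, in line with the discrimination claim of [1].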

The plan of the paper

The ℓ1-metric clustering problem is solved in § 2 for one center. A probabilistic approximation of (L.K) is discussed in § 3, and the probabilities are studied in §§ 4-5. The centers of the approximate problem are computed in § 6. Our main result, Algorithm PCM(ℓ1) of § 8, uses the power probabilities of § 7, and has running time that is linear in the dimension n of the space, see Corollary 1. Theorem 1, a monotonicity property of Algorithm PCM(ℓ1), is proved in § 9. Section 10 lists conclusions. Appendix A shows relations to previous work, and Appendix B reports some numerical results.

2. The single facility location problem with the ℓ1-norm

For the ℓ1-distance (1) the problem (L.1) becomes

\min_{c \in \mathbb{R}^n} \; \sum_{i=1}^{N} w_i \sum_{j=1}^{n} |x_{ij} - c_j|    (4)

in the variable c = (c_1, …, c_n), which can be solved separately for each component c_j, giving the n problems

\min_{c_j \in \mathbb{R}} \; \sum_{i=1}^{N} w_i \, |x_{ij} - c_j|, \quad j = 1, \ldots, n.    (5)
Definition 1.

Let X = {x_1 < x_2 < ⋯ < x_N} be an ordered set of points on the real line,

and let w_1, …, w_N be a corresponding set of positive weights, with total weight W = \sum_{i=1}^{N} w_i. A point x^* is a weighted median (or median) of (X, W) if

\sum_{x_i < x^*} w_i \;\le\; \frac{W}{2} \quad \text{and} \quad \sum_{x_i > x^*} w_i \;\le\; \frac{W}{2},    (6)

that is, if neither the points to the left of x^* nor the points to its right carry more than half of the total weight W.

The weighted median always exists, but is not necessarily unique.

Lemma 1.

For X and the weights as above, define the cumulative weights

S_j := \sum_{i=1}^{j} w_i, \quad j = 1, \ldots, N,    (7)

and let m be the smallest j with S_j ≥ W/2, where W = S_N. If

S_m > \frac{W}{2},    (8)

then x_m is the unique weighted median, with

\sum_{x_i < x_m} w_i < \frac{W}{2} \quad \text{and} \quad \sum_{x_i > x_m} w_i < \frac{W}{2}.    (9)

Otherwise, if

S_m = \frac{W}{2},    (10)

then any point in the open interval (x_m, x_{m+1}) is a weighted median, with the weights on either side of it equal to W/2.

Proof.

The statement holds since the sequence (7) is increasing from S_1 = w_1 to S_N = W. ∎

Note: In case (10),

we can take the median as the midpoint of x_m and x_{m+1}, in order to conform with the classical definition of the median (for an even number of points of equal weight).
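For illustration, a minimal Python sketch (ours; variable and function names are not from the paper) of the weighted median computation described by Lemma 1: sort the points, accumulate the weights as in (7), and distinguish the cases (8) and (10), resolving the latter by the midpoint rule of the Note above.

import numpy as np

def weighted_median(points, weights):
    points = np.asarray(points, dtype=float)
    weights = np.asarray(weights, dtype=float)
    order = np.argsort(points)
    x, w = points[order], weights[order]
    cum = np.cumsum(w)                       # the cumulative sums S_j of (7)
    half = 0.5 * cum[-1]                     # W / 2
    m = int(np.searchsorted(cum, half))      # smallest index with S_j >= W/2 (0-based)
    if cum[m] > half or m == len(x) - 1:
        return x[m]                          # case (8): unique weighted median
    return 0.5 * (x[m] + x[m + 1])           # case (10): midpoint of the median interval

print(weighted_median([1, 2, 3, 4], [1, 1, 1, 1]))   # 2.5, the classical median
print(weighted_median([1, 2, 3, 4], [1, 1, 5, 1]))   # 3.0, pulled to the heavy point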

Lemma 2.

Given X and the weights as in Definition 1, the set of minimizers of

f(c) = \sum_{i=1}^{N} w_i \, |x_i - c|, \quad c \in \mathbb{R},

is the set of weighted medians of (X, W).

Proof.

The result is well known if all weights are 1. If the weights are integers, consider a point with weight w as w coinciding points of weight 1, and the result follows. The same holds if the weights are rational. Finally, if the weights are real, consider their rational approximations. ∎

3. Probabilistic approximation of (L.K)

We relax the assignment problem in (L.K) of § 1.2 by using a continuous approximation as follows,

\min \; \sum_{i=1}^{N} \sum_{k=1}^{K} p_k(x_i) \, w_i \, d(x_i, c_k),    (P.K)

with two sets of variables,

the centers c_1, …, c_K ∈ ℝ^n, and

the cluster membership probabilities p_k(x_i), k = 1, …, K, i = 1, …, N,

p_k(x_i) \ge 0, \quad \sum_{k=1}^{K} p_k(x_i) = 1, \quad i = 1, \ldots, N.    (11)

Because the probabilities add to 1 for each x_i, the objective function of (P.K) is an upper bound on the optimal value of (L.K),

\sum_{i=1}^{N} \sum_{k=1}^{K} p_k(x_i) \, w_i \, d(x_i, c_k) \;\ge\; \sum_{i=1}^{N} w_i \, \min_{k} d(x_i, c_k) \;\ge\; \min \, (\text{L.K}),    (12)

and therefore so is the optimal value of (P.K),

\min \, (\text{P.K}) \;\ge\; \min \, (\text{L.K}).    (13)

4. Axioms for probabilistic distance clustering

In this section, d_k stands for d(x, c_k), the distance of the point x to the center c_k of the k-th cluster, k = 1, …, K. To simplify notation, the point x is assumed to have weight w = 1.

The cluster membership probabilities of a point x depend only on the distances d_1, …, d_K,

p = f(d),    (14)

where p is the vector of probabilities (p_1, …, p_K), and d is the vector of distances (d_1, …, d_K). Natural assumptions for the relation (14) include

d_j < d_k \implies p_j \ge p_k, \quad \text{for all } j, k,    (15a)
f(\lambda d) = f(d), \quad \text{for all } \lambda > 0,    (15b)
f(\Pi d) = \Pi f(d), \quad \text{for every permutation matrix } \Pi.    (15c)

Condition (15a) states that membership in a cluster is more probable the closer its center is, which is Assumption (A) of § 1.5. The meaning of (15b) is that the probabilities do not depend on the scale of measurement, i.e., f is homogeneous of degree 0. It follows that the probabilities depend only on the ratios of the distances d_1, …, d_K.

The symmetry of f, expressed by (15c), guarantees, for each k, that the probability p_k does not depend on the numbering of the other clusters.

Assuming continuity of f, it follows from (15a) that

d_j = d_k \implies p_j = p_k, \quad \text{for any } j, k.

For any nonempty subset S ⊆ {1, …, K}, let

p_S := \sum_{k \in S} p_k,

the probability that x belongs to one of the clusters {C_k : k ∈ S}, and let p_{k|S} denote the conditional probability that x belongs to the cluster C_k, given that it belongs to one of the clusters indexed by S.

Since the probabilities depend only on the ratios of the distances d_1, …, d_K, and these ratios are unchanged in subsets of the index set {1, …, K}, it follows that for all k ∈ S,

p_{k|S} = \frac{p_k}{p_S},    (16)

which is the choice axiom of Luce, [22, Axiom 1], and therefore, [30],

p_k = \frac{v(d_k)}{\sum_{j=1}^{K} v(d_j)}, \quad k = 1, \ldots, K,    (17)

where v is a scale function, in particular,

p_{k|S} = \frac{v(d_k)}{\sum_{j \in S} v(d_j)}, \quad k \in S.    (18)

Assuming v(d_k) > 0 for all k, it follows that

\frac{p_k}{v(d_k)} = \frac{1}{\sum_{j=1}^{K} v(d_j)}, \quad k = 1, \ldots, K,    (19)

where the right hand side is a function of d, and does not depend on k.

Property (15a) implies that the scale function v(d) is a monotone decreasing function of d.

5. Cluster membership probabilities as functions of distance

Given centers c_1, …, c_K, and a point x with weight w and distances d(x, c_k) from these centers, a simple choice for the scale function v in (17) is

v(d) = \frac{1}{d},    (20)

for which (19) gives,

w \, p_k(x) \, d(x, c_k) = D(x), \quad k = 1, \ldots, K,    (21)

where the function D(x), called the joint distance function (JDF) at x, does not depend on k.

For a given point x and given centers c_1, …, c_K, equations (21) are optimality conditions for the extremum problem

\min \; \sum_{k=1}^{K} p_k^2 \, w \, d(x, c_k) \quad \text{s.t.} \quad \sum_{k=1}^{K} p_k = 1, \;\; p_k \ge 0,    (22)

in the probabilities p_1, …, p_K. The squares of probabilities in the objective of (22) serve to smooth the underlying non-smooth problem, see the seminal paper by Teboulle [27]. Indeed, (21) follows by differentiating the Lagrangian

L(p, \lambda) = \sum_{k=1}^{K} p_k^2 \, w \, d(x, c_k) - \lambda \Big( \sum_{k=1}^{K} p_k - 1 \Big)    (23)

with respect to p_k and equating the derivative to zero.

Since probabilities add to one we get from (21),

p_k(x) = \frac{1/d(x, c_k)}{\sum_{j=1}^{K} 1/d(x, c_j)}, \quad k = 1, \ldots, K,    (24)

and the JDF at x,

D(x) = \frac{w}{\sum_{j=1}^{K} 1/d(x, c_j)},    (25)

which is (up to a constant) the harmonic mean of the distances d(x, c_1), …, d(x, c_K), see also (A-4) below.

Note that the probabilities (24) are determined by the centers alone, while the function D depends also on the weight w. For example, in case K = 2,

p_1(x) = \frac{d(x, c_2)}{d(x, c_1) + d(x, c_2)}, \qquad p_2(x) = \frac{d(x, c_1)}{d(x, c_1) + d(x, c_2)},    (26a)

D(x) = w \, \frac{d(x, c_1) \, d(x, c_2)}{d(x, c_1) + d(x, c_2)}.    (26b)
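As an illustration, the following Python sketch (ours) computes the membership probabilities (24) and the JDF (25) of one point from its distances to the K centers; a point coinciding with a center is assigned to it with probability 1 and has JDF equal to zero.

import numpy as np

def memberships_and_jdf(distances, weight=1.0):
    d = np.asarray(distances, dtype=float)
    if np.any(d == 0.0):                 # the point coincides with (at least) one center
        p = (d == 0.0).astype(float)
        return p / p.sum(), 0.0
    inv = 1.0 / d
    p = inv / inv.sum()                  # probabilities (24)
    jdf = weight / inv.sum()             # JDF (25)
    return p, jdf

# K = 2 example, matching (26a)-(26b): p_1 = d_2/(d_1+d_2), D = w d_1 d_2/(d_1+d_2)
p, D = memberships_and_jdf([1.0, 3.0])
print(p, D)                              # [0.75 0.25] 0.75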

6. Computation of centers

We use the ℓ1-distance (1) throughout. The objective function of (P.K) is a separable function of the cluster centers,

\sum_{i=1}^{N} \sum_{k=1}^{K} p_k(x_i) \, w_i \, d_1(x_i, c_k)    (27a)

= \sum_{k=1}^{K} \Big( \sum_{i=1}^{N} p_k(x_i) \, w_i \, d_1(x_i, c_k) \Big).    (27b)

The centers problem thus separates into K problems of type (4),

\min_{c_k \in \mathbb{R}^n} \; \sum_{i=1}^{N} p_k(x_i) \, w_i \, d_1(x_i, c_k), \quad k = 1, \ldots, K,    (28)

coupled by the probabilities p_k(x_i). Each of these problems separates into n problems of type (5) for the components c_{kj},

\min_{c_{kj} \in \mathbb{R}} \; \sum_{i=1}^{N} p_k(x_i) \, w_i \, |x_{ij} - c_{kj}|, \quad j = 1, \ldots, n,    (29)

whose solution, by Lemma 2, is a weighted median of the points x_{1j}, …, x_{Nj} with weights p_k(x_i) w_i.

7. Power probabilities

The cluster membership probabilities of a point x serve to relax the rigid assignment of x to any of the clusters, but eventually it may be necessary to produce such an assignment. One way to achieve this is to raise the membership probabilities of (24) to a power γ > 0, and normalize, obtaining the power probabilities

p_k^{(\gamma)}(x) = \frac{p_k(x)^{\gamma}}{\sum_{j=1}^{K} p_j(x)^{\gamma}}, \quad k = 1, \ldots, K,    (30)

which, by (24), can also be expressed in terms of the distances d(x, c_k),

p_k^{(\gamma)}(x) = \frac{d(x, c_k)^{-\gamma}}{\sum_{j=1}^{K} d(x, c_j)^{-\gamma}}.    (31)

As the exponent γ increases the power probabilities tend to hard assignments: if M(x) is the index set of maximal probabilities, and |M(x)| is its number of elements, then

\lim_{\gamma \to \infty} p_k^{(\gamma)}(x) = \begin{cases} 1/|M(x)|, & k \in M(x), \\ 0, & \text{otherwise}, \end{cases}    (32)

and the limit is a hard assignment if |M(x)| = 1, i.e. if the maximal probability is unique.

Numerical experience suggests an increase of γ at each iteration, see, e.g., (33) below.
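The following Python sketch (ours) evaluates the power probabilities directly from the distances, as in (31), and illustrates the hardening of the assignments as γ grows, in accordance with (32).

import numpy as np

def power_probabilities(distances, gamma):
    d = np.asarray(distances, dtype=float)
    inv = d ** (-gamma)                  # d_k^(-gamma), cf. (31)
    return inv / inv.sum()

d = [1.0, 2.0, 4.0]
for gamma in (1, 2, 5, 20):
    print(gamma, np.round(power_probabilities(d, gamma), 4))
# gamma = 1  gives [0.5714 0.2857 0.1429]
# gamma = 20 gives approximately [1. 0. 0.], a nearly hard assignment to the nearest center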

8. Algorithm PCM(ℓ1): Probabilistic Clustering Method with ℓ1 distances

The problem (P.K) is solved iteratively, using the following updates in succession.

Probabilities computation: Given centers c_1, …, c_K, the assignment probabilities p_k^{(γ)}(x_i) are calculated using (31). The exponent γ is updated at each iteration, say by a constant increment Δγ > 0,

\gamma \leftarrow \gamma + \Delta\gamma,    (33)

starting with an initial value γ_0.

Centers computation: Given the assignment probabilities p_k^{(γ)}(x_i), the problem (P.K) separates into K·n problems of type (29),

\min_{c_{kj} \in \mathbb{R}} \; \sum_{i=1}^{N} p_k^{(\gamma)}(x_i) \, w_i \, |x_{ij} - c_{kj}|,    (34)

one for each component c_{kj} of each center c_k, that are solved by Lemma 2.

These results are presented in algorithm form as follows.

Algorithm PCM(ℓ1): An algorithm for the ℓ1 clustering problem

  Data: N data points x_1, …, x_N, their weights w_1, …, w_N, the number K of clusters, ε (stopping criterion), γ_0 (initial value of the exponent γ), Δγ (the increment in (33)).
  Initialization: K arbitrary centers c_1, …, c_K; γ := γ_0.
  Iteration:
    Step 1: compute the distances d_1(x_i, c_k) for all i, k
    Step 2: compute the assignments p_k^{(γ)}(x_i) (using (31))
    Step 3: compute the new centers c_1^+, …, c_K^+ (applying Lemma 2 to (34))
    Step 4: if \sum_{k=1}^{K} \|c_k^+ - c_k\|_1 < ε, stop; else set c_k := c_k^+ for all k, γ := γ + Δγ, and return to Step 1
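For concreteness, here is a compact Python/NumPy sketch of Algorithm PCM(ℓ1) as described above. The variable names, the initialization by K randomly chosen data points, and the tie handling in the weighted median (the lower median rather than the midpoint rule of the Note) are our choices, not prescribed by the text.

import numpy as np

def weighted_median_1d(x, w):
    order = np.argsort(x)
    cum = np.cumsum(w[order])
    m = int(np.searchsorted(cum, 0.5 * cum[-1]))   # smallest index with S_j >= W/2
    return x[order][m]

def pcm_l1(X, weights=None, K=2, gamma0=1.0, dgamma=0.5, max_iter=100, eps=1e-6, seed=0):
    N, n = X.shape
    w = np.ones(N) if weights is None else np.asarray(weights, dtype=float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(N, size=K, replace=False)].astype(float)  # arbitrary initial centers
    gamma = gamma0
    for _ in range(max_iter):
        # Step 1: l1 distances of every point to every center (an N x K array)
        D = np.abs(X[:, None, :] - centers[None, :, :]).sum(axis=2)
        D = np.maximum(D, 1e-12)
        # Step 2: power probabilities (31)
        P = D ** (-gamma)
        P /= P.sum(axis=1, keepdims=True)
        # Step 3: new centers, one weighted median per cluster and coordinate, cf. (34)
        new_centers = np.empty_like(centers)
        for k in range(K):
            wk = P[:, k] * w
            for j in range(n):
                new_centers[k, j] = weighted_median_1d(X[:, j], wk)
        # Step 4: stop when the centers are (almost) stationary, else increase gamma as in (33)
        if np.abs(new_centers - centers).sum() < eps:
            centers = new_centers
            break
        centers = new_centers
        gamma += dgamma
    return centers, P

# Usage: two well separated blobs in R^1000
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (50, 1000)), rng.normal(2.0, 1.0, (50, 1000))])
centers, P = pcm_l1(X, K=2)
print(P.argmax(axis=1))   # cluster labels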

Corollary 1.

The running time of Algorithm PCM(ℓ1) is

O(I \cdot K \cdot n \cdot N \log N),    (35)

where n is the dimension of the space, N the number of points, K the number of clusters, and I is the number of iterations.

Proof.

The number of operations in an iteration is calculated as follows:

Step 1: O(KNn), since computing the ℓ1-distance between two n-dimensional vectors takes O(n) time, and there are KN distances between all points and all centers.

Step 2: O(KN), since there are KN assignments p_k^{(γ)}(x_i), each taking O(1) time once the denominator of (31) has been accumulated for each point.

Step 3: O(KnN log N), since computing the weighted median of N points on the line takes O(N log N) time (for example, by sorting), and Kn such medians are computed.

Step 4: O(Kn), since there are K cluster centers of dimension n.

The corollary is proved by combining the above counts over the I iterations. ∎

Remark 1.

(a) The result (35) shows that Algorithm PCM(ℓ1) is linear in the dimension n, which in high-dimensional data is much greater than N and K.
(b) The first few iterations of the algorithm come close to the final centers, and thereafter the iterations are slow, making the stopping rule in Step 4 ineffective. A better stopping rule is a bound on the number of iterations I, which can then be taken as a constant in (35).
(c) Algorithm PCM(ℓ1) can be modified to account for very unequal cluster sizes, as in [14]. This modification did not significantly improve the performance of the algorithm in our experiments.
(d) The centers here are computed from scratch at each iteration using the current probabilities, unlike the Weiszfeld method [28] or its generalizations, [17], [18], where the centers are updated at each iteration.

9. Monotonicity

The centers computed iteratively by Algorithm PCM(ℓ1) are confined to the convex hull of X, a compact set, and therefore a subsequence converges to an optimal solution of the approximate problem (P.K), that in general is not an optimal solution of the original problem (L.K).

The JDF of the data set X is defined as the sum of the JDFs of its points,

D(X) = \sum_{i=1}^{N} D(x_i).    (36)

We prove next a monotonicity result for D(X).

Theorem 1.

The function D(X) decreases along any sequence of iterates of the centers.

Proof.

The function D(X) can be written as

D(X) = \sum_{i=1}^{N} \sum_{k=1}^{K} p_k(x_i)^2 \, w_i \, d(x_i, c_k),    (37)

since, by (21), the optimal value of (22) at a point x_i equals D(x_i). The proof is completed by noting that, for each x_i, the probabilities p_k(x_i) are chosen so as to minimize the function

\sum_{i=1}^{N} \sum_{k=1}^{K} p_k(x_i)^2 \, w_i \, d(x_i, c_k)    (38)

for the given centers, see (22), and the centers minimize the function (38) for the given probabilities. ∎

Remark 2.

The function D(X) also decreases if the exponent γ is increased in (30), for then shorter distances become more probable in (37).

10. Conclusions

In summary, our approach has the following advantages.

  1. In numerical experiments, see Appendix B, Algorithm PCM(ℓ1) outperformed the fuzzy c-means method, the K-means method, and the generalized Weiszfeld method [17].

  2. The solutions of (22) are less sensitive to outliers than the solutions of (A-5), which uses squares of distances.

  3. The probabilistic principle (A-8) allows using other monotonic functions Φ, in particular the exponential function Φ(d) = e^d, which gives sharper results, and requires only that every distance d be replaced by e^d, [5].

  4. The JDF (36) of the data set provides a guide to the "right" number of clusters for the given data, [6].

Appendix A: Relation to previous work

Our work brings together ideas from four different areas: inverse distance weighted interpolation, fuzzy clustering, subjective probability, and optimality principles.

1. Inverse distance weighted (or IDW) interpolation was introduced in 1965 by Donald Shepard, who published his results [26] in 1968. Shepard, then an undergraduate at Harvard, worked on the following problem:

A function u is evaluated at N given points x_1, …, x_N in ℝ^n, giving the values u_1, …, u_N, respectively. These values are the only information about the function. It is required to estimate u at any point x.

Examples of such functions include rainfall in meteorology, and altitude in topography. The point x cannot be too far from the data points, and ideally lies in their convex hull.

Shepard estimated the value u(x) as a convex combination of the given values u_i,

u(x) = \sum_{i=1}^{N} \lambda_i(x) \, u_i,    (A-1)

where the weights λ_i(x) are inversely proportional to the distances d(x, x_i) between x and x_i, say

\lambda_i(x) \propto \frac{1}{d(x, x_i)},    (A-2)

giving the weights

\lambda_i(x) = \frac{1/d(x, x_i)}{\sum_{j=1}^{N} 1/d(x, x_j)},    (A-3)

that are identical with the probabilities (24), if the data points are identified with the centers. IDW interpolation is used widely in spatial data analysis, geology, geography, ecology and related areas.

Interpolating the distances d(x, x_i) themselves, i.e. taking u_i = d(x, x_i) in (A-1), gives

\frac{N}{\sum_{j=1}^{N} 1/d(x, x_j)},    (A-4)

the harmonic mean of the distances d(x, x_1), …, d(x, x_N), which is the JDF in (25) multiplied by a scalar.
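For illustration, a small Python sketch (ours) of the inverse distance weighted estimate (A-1)-(A-3), taking the distances as given, together with a numerical check that interpolating the distances themselves returns their harmonic mean, as in (A-4).

import numpy as np

def idw_estimate(values, distances):
    # convex combination of the values with weights inversely proportional
    # to the distances, as in (A-1)-(A-3)
    d = np.asarray(distances, dtype=float)
    lam = (1.0 / d) / (1.0 / d).sum()
    return float((lam * np.asarray(values, dtype=float)).sum())

d = np.array([1.0, 2.0, 4.0])
print(idw_estimate([10.0, 20.0, 40.0], d))   # IDW estimate of u(x) from the values u_i
print(idw_estimate(d, d))                    # interpolating the distances ...
print(len(d) / (1.0 / d).sum())              # ... gives their harmonic mean (same number)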

The harmonic mean pops up in several areas of spatial data analysis. In 1980 Dixon and Chapman [12] posited that the home range of a species is a contour of the harmonic mean of the areas it frequents, and this has since been confirmed for hundreds of species. The importance of the harmonic mean in clustering was established by Teboulle [27], Stanforth, Kolossov and Mirkin [25], Zhang, Hsu, and Dayal [31], [32], Ben-Israel and Iyigun [5] and others. Arav [3] showed that the harmonic mean of distances satisfies a system of reasonable axioms for contour approximation of data.

2. Fuzzy clustering, introduced by J.C. Bezdek in 1973, [8], is a relaxation of the original problem, replacing the hard assignments of points to clusters by soft, or fuzzy, assignments of points simultaneously to all clusters; the strength of association of a point x_i with the cluster C_k is measured by a membership weight u_{ik} ∈ [0, 1].

In the fuzzy c-means (FCM) method [9] the centers are computed by solving

\min_{\{u_{ik}\}, \{c_k\}} \; \sum_{i=1}^{N} \sum_{k=1}^{K} u_{ik}^m \, \|x_i - c_k\|^2 \quad \text{s.t.} \quad \sum_{k=1}^{K} u_{ik} = 1, \;\; u_{ik} \ge 0,    (A-5)

where the weights are updated as

u_{ik} = \Bigg( \sum_{j=1}^{K} \bigg( \frac{\|x_i - c_k\|}{\|x_i - c_j\|} \bigg)^{2/(m-1)} \Bigg)^{-1},    (A-6)

and the centers are then calculated as convex combinations of the data points,

c_k = \frac{\sum_{i=1}^{N} u_{ik}^m \, x_i}{\sum_{i=1}^{N} u_{ik}^m}.    (A-7)

The constant m > 1 (the "fuzzifier") controls the fuzziness of the assignments, which become hard assignments in the limit as m → 1. For m = 1, FCM is the classical K-means method. If m = 2 then the weights u_{ik} are inversely proportional to the square distance \|x_i - c_k\|^2, analogously to (21).
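For comparison with PCM(ℓ1), the following Python sketch (ours) implements one FCM update using the standard formulas that (A-6)-(A-7) describe; Euclidean distances and the fuzzifier m = 2 are assumed, and the clipping of zero distances is our safeguard.

import numpy as np

def fcm_step(X, centers, m=2.0):
    D = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)   # N x K Euclidean distances
    D = np.maximum(D, 1e-12)
    U = D ** (-2.0 / (m - 1.0))
    U /= U.sum(axis=1, keepdims=True)                                 # memberships, cf. (A-6)
    Um = U ** m
    new_centers = (Um.T @ X) / Um.sum(axis=0)[:, None]                # centers, cf. (A-7)
    return U, new_centers

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
U, C = fcm_step(X, X[rng.choice(200, size=3, replace=False)])
print(U.sum(axis=1)[:3])   # each row of memberships sums to 1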

Fuzzy clustering is one of the best known, and most widely used, clustering methods. However, it may need some modification if the data in question is very high–dimensional, see, e.g. [20].

3. Subjective probability. There is some arbitrariness in the choice of the model and the fuzzifier m in (A-5)-(A-6). In contrast, the probabilities (24) can be justified axiomatically. Using ideas and classical results ([22], [30]) from subjective probability it is shown in § 4 above that the cluster membership probabilities p_k(x) and the distances d(x, c_k) satisfy an inverse relationship, such as,

p_k(x) \, \Phi(d(x, c_k)) = E(x), \quad k = 1, \ldots, K,    (A-8)

where Φ is non-decreasing, and E(x) does not depend on k. In particular, the choice Φ(d) = d gives (21), which works well in practice.

4. Optimality principle. Equation (A-8) is a necessary optimality condition for the problem

\min \; \sum_{k=1}^{K} p_k^2 \, \Phi(d(x, c_k)) \quad \text{s.t.} \quad \sum_{k=1}^{K} p_k = 1, \;\; p_k \ge 0,    (A-9)

that reduces to (22) for the choice Φ(d) = d (with weight w = 1). This shows the probabilities of (24) to be optimal, for the model chosen.

Remark 3.

Minimizing a function of squares of probabilities seems unnatural, so a physical analogy may help. Consider an electric circuit with K resistances R_1, …, R_K connected in parallel. A current I through the circuit splits into K currents, with current I_k through the resistance R_k. These currents solve an optimization problem (the Kelvin principle)

\min \; \sum_{k=1}^{K} I_k^2 \, R_k \quad \text{s.t.} \quad \sum_{k=1}^{K} I_k = I,    (A-10)

that is analogous to (22). The optimality condition for (A-10) is Ohm's law,

I_k \, R_k = V, \quad k = 1, \ldots, K,

a statement that the potential V is well defined, and an analog of (21). The equivalent resistance of the circuit, i.e. the resistance R such that I^2 R is equal to the minimal value in (A-10), is then the JDF (25) with R_k instead of d(x, c_k) and w = 1.
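A small numerical check (ours) of this analogy: the Ohm's-law split I_k proportional to 1/R_k minimizes the dissipated power in (A-10), and the minimum equals I^2 R with 1/R = \sum_k 1/R_k.

import numpy as np

R = np.array([1.0, 2.0, 4.0])          # parallel resistances
I = 1.0                                 # total current
Ik = (I / R) / (1.0 / R).sum()          # Ohm's-law split: I_k R_k is constant
R_eq = 1.0 / (1.0 / R).sum()            # equivalent resistance
print((Ik ** 2 * R).sum(), I ** 2 * R_eq)   # both equal approximately 0.5714

alt = np.array([0.4, 0.3, 0.3])         # any other feasible split dissipates more
print((alt ** 2 * R).sum())             # approximately 0.70 > 0.5714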

Appendix B: Numerical Examples

In the following examples we use synthetic data to be clustered into K = 2 clusters. The data consists of two randomly generated clusters, C_1 with N_1 points, and C_2 with N_2 points.

The data points of cluster C_k are such that all of their components are generated by sampling from a distribution with mean μ_k, k = 1, 2. In cluster C_1 we take the mean μ_1, and in cluster C_2, the mean μ_2.

We ran Algorithm PCM(ℓ1), with fixed values of the parameters γ_0 and Δγ, and compared its performance with that of the fuzzy clustering method [9] with the ℓ1 norm, as well as the generalized Weiszfeld algorithm of [18] (which uses Euclidean distances), and the K-means algorithm [23]. For each method we used a stopping rule of at most 100 iterations (for Algorithm PCM(ℓ1) this replaces Step 4). For each experiment we record the average percentage of misclassifications (a misclassification occurs when a point in C_1 is declared to be in C_2, or vice versa) over 10 independent problems. In Examples 1, 2, 3 we choose the probability distributions to be .

Example 1.

In this example the clusters are of equal size, N_1 = N_2. Table 1 gives the percentages of misclassification under the five methods tested, for different values of N and of the dimension n.

Method
PCM () 0.0 0.0 0.0 0.0 0.0
FCM () 27.1 38.6 24.4 40.9 40.1
-means (