Ising-based Consensus Clustering on Specialized Hardware


Abstract

The emergence of specialized optimization hardware such as CMOS annealers and adiabatic quantum computers carries the promise of solving hard combinatorial optimization problems more efficiently in hardware. Recent work has focused on formulating different combinatorial optimization problems as Ising models, the core mathematical abstraction used by a large number of these hardware platforms, and evaluating the performance of these models when solved on specialized hardware. An interesting area of application is data mining, where combinatorial optimization problems underlie many core tasks. In this work, we focus on consensus clustering (clustering aggregation), an important combinatorial problem that has received much attention over the last two decades. We present two Ising models for consensus clustering and evaluate them using the Fujitsu Digital Annealer, a quantum-inspired CMOS annealer. Our empirical evaluation shows that our approach outperforms existing techniques and is a promising direction for future research.

1 Introduction

The increasingly challenging task of scaling the traditional Central Processing Unit (CPU) has led to the exploration of new computational platforms such as quantum computers, CMOS annealers, and neuromorphic computers, among others (see [3] for a detailed exposition). Although their physical implementations differ significantly, adiabatic quantum computers, CMOS annealers, memristive circuits, and optical parametric oscillators all share Ising models as their core mathematical abstraction [3]. This has led to a growing interest in the formulation of computational problems as Ising models and in the empirical evaluation of these models on such novel computational platforms. This body of literature includes clustering and community detection [14, 20, 24], graph partitioning [27, 28], and many NP-complete problems such as covering, packing, and coloring [17, 18].

Consensus clustering is the problem of combining multiple 'base clusterings' of the same set of data points into a single consolidated clustering [9]. Consensus clustering is used to generate robust, stable, and more accurate clustering results compared to a single clustering approach [9]. The problem has received significant attention over the last two decades [9] and was previously considered under different names (clustering aggregation, cluster ensembles, clustering combination) [10]. It has applications in different fields, including data mining, pattern recognition, and bioinformatics [10], and a number of algorithmic approaches have been used to solve it. Consensus clustering is, in essence, a combinatorial optimization problem [30], and different instances of the problem have been proven to be NP-hard (e.g., [6, 26]).

In this work, we investigate the use of special purpose hardware to solve the problem of consensus clustering. To this end, we formulate the problem of consensus clustering using Ising models and evaluate our approach on a specialized CMOS annealer. We make the following contributions:

  1. We present and study two Ising models for consensus clustering that can be solved on a variety of special purpose hardware platforms.

  2. We demonstrate how our models are embedded on the Fujitsu Digital Annealer (DA), a quantum-inspired specialized CMOS hardware.

  3. We present an empirical evaluation based on seven benchmark datasets and show our approach outperforms existing techniques for consensus clustering.

2 Background

2.1 Problem Definition

Let $X = \{x_1, \dots, x_n\}$ be a set of $n$ data points. A clustering of $X$ is a process that partitions $X$ into subsets, referred to as clusters, that together cover $X$. A clustering is represented by a mapping $\pi : X \to \{1, \dots, k_\pi\}$, where $k_\pi$ is the number of clusters produced by clustering $\pi$. Given $X$ and a set $\Pi = \{\pi_1, \dots, \pi_m\}$ of $m$ clusterings of the points in $X$, the Consensus Clustering Problem is to find a new clustering, $\pi^*$, of the data $X$ that best summarizes the set of clusterings $\Pi$. The new clustering $\pi^*$ is referred to as the consensus clustering.

Due to the ambiguity in the definition of an optimal consensus clustering, several approaches have been proposed to measure the solution quality of consensus clustering algorithms [9]. In this work, we focus on determining a consensus clustering that agrees the most with the original clusterings. As an objective measure of this agreement, we use the mean Adjusted Rand Index (ARI) metric (Equation (14)). However, we also consider clustering quality, measured by the mean Silhouette Coefficient [23], and clustering accuracy based on the true labels. These evaluation criteria are discussed in more detail in Section 4.

2.2 Existing Criteria and Methods

Various criteria or objectives have been proposed for the Consensus Clustering Problem. In this work we mainly focus on two well-studied criteria, one based on the pairwise similarity of the data points, and the other based on the different assignments of the base clusterings. Other well-known criteria and objectives for the Consensus Clustering Problem can be found in the excellent surveys of [9, 29], with most defining NP-Hard optimization problems.

Pairwise Similarity Approaches: In this approach, a similarity matrix $S$ is constructed such that each entry $S_{ij}$ represents the fraction of clusterings in which the two data points $x_i$ and $x_j$ belong to the same cluster [21]. In particular,

$$S_{ij} = \frac{1}{m} \sum_{\pi \in \Pi} \mathbb{1}\big[\pi(x_i) = \pi(x_j)\big] \qquad (1)$$

with $\mathbb{1}[\cdot]$ being the indicator function. The value $S_{ij}$ lies between 0 and 1, and is equal to 1 if all the base clusterings assign points $x_i$ and $x_j$ to the same cluster. Once the pairwise similarity matrix is constructed, one can use any similarity-based clustering algorithm on $S$ to find a consensus clustering with a fixed number of clusters, $K$. For example, [16] proposed to find a consensus clustering with exactly $K$ clusters that minimizes the within-cluster dissimilarity:

$$\min_{\pi^*:\, k_{\pi^*} = K} \;\; \sum_{\substack{x_i, x_j \in X \\ \pi^*(x_i) = \pi^*(x_j)}} \big(1 - S_{ij}\big) \qquad (2)$$
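To make Equations (1) and (2) concrete, the following sketch (Python with NumPy; the function names are ours, for illustration only) computes the pairwise similarity matrix from a set of base clusterings given as label vectors, and evaluates the within-cluster dissimilarity of a candidate consensus clustering:

```python
import numpy as np

def similarity_matrix(base_clusterings):
    """Equation (1): S[i, j] is the fraction of base clusterings that place
    points i and j in the same cluster. Each base clustering is a length-n
    array of cluster labels."""
    P = np.asarray(base_clusterings)          # shape (m, n)
    m, n = P.shape
    S = np.zeros((n, n))
    for labels in P:
        S += (labels[:, None] == labels[None, :]).astype(float)
    return S / m

def within_cluster_dissimilarity(consensus, S):
    """Equation (2): sum of (1 - S[i, j]) over pairs i < j that the candidate
    consensus clustering places in the same cluster."""
    consensus = np.asarray(consensus)
    same = consensus[:, None] == consensus[None, :]
    iu = np.triu_indices(len(consensus), k=1)  # pairs i < j only
    return float(((1.0 - S) * same)[iu].sum())
```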

Partition Difference Approaches: An alternative formulation is based on the differences in the assignments made by the clusterings. Consider two data points $x_i, x_j \in X$ and two clusterings $\pi_1, \pi_2$. The following binary indicator tests whether $\pi_1$ and $\pi_2$ disagree on the clustering of $x_i$ and $x_j$:

$$d_{x_i,x_j}(\pi_1, \pi_2) = \begin{cases} 1 & \text{if } \big(\pi_1(x_i) = \pi_1(x_j) \wedge \pi_2(x_i) \neq \pi_2(x_j)\big) \;\vee\; \big(\pi_1(x_i) \neq \pi_1(x_j) \wedge \pi_2(x_i) = \pi_2(x_j)\big) \\ 0 & \text{otherwise} \end{cases} \qquad (3)$$

The distance between two clusterings $\pi_1$ and $\pi_2$ is then defined based on the number of pairwise disagreements:

$$d(\pi_1, \pi_2) = \frac{1}{2} \sum_{x_i, x_j \in X} d_{x_i,x_j}(\pi_1, \pi_2) \qquad (4)$$

with the factor $\frac{1}{2}$ correcting for double counting; it can be ignored for optimization purposes. This measure is the number of pairs of points that are in the same cluster in one clustering and in different clusters in the other, essentially corresponding to the (unadjusted) Rand index [9]. Given this measure, a common objective is to find a consensus clustering $\pi^*$ that solves the following optimization problem:

$$\min_{\pi^*} \; \sum_{i=1}^{m} d(\pi^*, \pi_i) \qquad (5)$$
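A corresponding sketch for the partition difference criterion (again illustrative, not the paper's code) computes the disagreement distance of Equations (3)-(4) and the objective of Equation (5):

```python
import numpy as np

def clustering_distance(pi_a, pi_b):
    """Equations (3)-(4): number of point pairs on which the two clusterings
    disagree (same cluster in one, different clusters in the other)."""
    a = np.asarray(pi_a)
    b = np.asarray(pi_b)
    same_a = a[:, None] == a[None, :]
    same_b = b[:, None] == b[None, :]
    disagree = same_a != same_b                  # indicator of Equation (3)
    return int(disagree.sum() // 2)              # the 1/2 corrects double counting

def consensus_objective(consensus, base_clusterings):
    """Equation (5): total distance from the consensus to all base clusterings."""
    return sum(clustering_distance(consensus, pi) for pi in base_clusterings)
```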

Methods and Algorithms: The two criteria given above define fundamentally different optimization problems, and thus different algorithms have been proposed for them. One key difference between the two approaches lies in determining the number of clusters in $\pi^*$. The pairwise similarity approaches (e.g., Equation (2)) require an input parameter $K$ that fixes the number of clusters in $\pi^*$, whereas partition difference approaches such as Equation (5) do not have this requirement, and determining the number of clusters is part of the objective. For example, Equation (2) attains its minimum value (zero) when $K = n$ and each point is placed in its own cluster, whereas this does not hold for Equation (5).

The Cluster-based Similarity Partitioning Algorithm (CSPA) was proposed in [25] for solving the pairwise similarity-based approach. CSPA constructs a similarity-based graph in which each edge has a weight proportional to the similarity given by $S$. Determining the consensus clustering with exactly $K$ clusters is then treated as a $K$-way graph partitioning problem, which is solved by methods such as METIS [12]. In [21], the authors experiment with different clustering algorithms, including hierarchical agglomerative clustering (HAC) and iterative techniques that start from an initial partition and iteratively reassign points to clusters based on their pairwise similarities. For the partition difference approach, Li et al. [15] proposed to solve Equation (5) using nonnegative matrix factorization (NMF). Gionis et al. [10] proposed several algorithms that exploit the connection between Equation (5) and the problem of correlation clustering. CSPA, HAC, and NMF serve as the baselines in our empirical evaluation (Section 4).

2.3 Ising Models

Ising models are graphical models comprising a set of nodes representing spin variables and a set of edges corresponding to the interactions between the spins. The energy of an Ising model, which we aim to minimize, is given by:

$$E(\sigma) = \sum_{i} h_i \sigma_i + \sum_{i < j} J_{i,j}\, \sigma_i \sigma_j, \qquad \sigma_i \in \{-1, +1\} \qquad (6)$$

where the $\sigma_i \in \{-1, +1\}$ are the spin variables, the $h_i$ are the biases, and the couplers, $J_{i,j}$, represent the interactions between the spins.

A Quadratic Unconstrained Binary Optimization (QUBO) model includes binary variables $x_i \in \{0, 1\}$, biases $a_i$, and couplers $b_{i,j}$. The objective to minimize is:

$$E(x) = \sum_{i} a_i x_i + \sum_{i < j} b_{i,j}\, x_i x_j, \qquad x_i \in \{0, 1\} \qquad (7)$$

QUBO models can be transformed to Ising models by setting $\sigma_i = 2x_i - 1$ [2].
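As a brief illustration of this transformation (a sketch under the assumption that the QUBO is given as linear terms $a$ and upper-triangular quadratic terms $b$), the substitution $x_i = (\sigma_i + 1)/2$ yields Ising biases, couplers, and a constant offset:

```python
import numpy as np

def qubo_to_ising(a, b):
    """Convert a QUBO (Equation (7)) to an Ising model (Equation (6)) via
    x_i = (sigma_i + 1) / 2.
    a: linear QUBO terms, shape (n,). b: quadratic terms, shape (n, n),
    used as upper-triangular (i < j).
    Returns (h, J, offset) with E_qubo(x) = E_ising(sigma) + offset."""
    a = np.asarray(a, dtype=float)
    b = np.triu(np.asarray(b, dtype=float), k=1)
    J = b / 4.0                                        # couplers
    h = a / 2.0 + (b.sum(axis=1) + b.sum(axis=0)) / 4.0  # biases
    offset = a.sum() / 2.0 + b.sum() / 4.0             # constant energy shift
    return h, J, offset
```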

3 Ising Approach for Consensus Clustering on Specialized Hardware

In this section, we present our approach for solving consensus clustering on specialized hardware using Ising models. We present two Ising models that correspond to the two approaches in Section 2.2. We then demonstrate how they can be solved on the Fujitsu Digital Annealer (DA), a specialized CMOS hardware.

3.1 Pairwise Similarity-based Ising Model

For each data point $x_i \in X$ and each cluster $c \in \{1, \dots, K\}$, let $x_{i,c}$ be the binary variable such that $x_{i,c} = 1$ if $\pi^*$ assigns $x_i$ to cluster $c$, and $x_{i,c} = 0$ otherwise. Then the constraints

$$\sum_{c=1}^{K} x_{i,c} = 1, \qquad \forall\, i \in \{1, \dots, n\} \qquad (8)$$

ensure that $\pi^*$ assigns each point to exactly one cluster. Subject to the constraints (8), the sum of quadratic terms $\sum_{c=1}^{K} x_{i,c}\, x_{j,c}$ is 1 if $\pi^*$ assigns both $x_i$ and $x_j$ to the same cluster, and 0 if it assigns them to different clusters. Therefore the value

$$\sum_{i < j} \sum_{c=1}^{K} \big(1 - S_{ij}\big)\, x_{i,c}\, x_{j,c} \qquad (9)$$

represents the sum of within-cluster dissimilarities in $\pi^*$: each term $(1 - S_{ij})$ is the fraction of clusterings in $\Pi$ that assign $x_i$ and $x_j$ to different clusters, and it is counted only when $\pi^*$ assigns them to the same cluster. We therefore reformulate Equation (2) as the following QUBO:

$$\min_{x} \;\; \sum_{i < j} \sum_{c=1}^{K} \big(1 - S_{ij}\big)\, x_{i,c}\, x_{j,c} \;+\; A \sum_{i=1}^{n} \Big(1 - \sum_{c=1}^{K} x_{i,c}\Big)^{2} \qquad (10)$$

where the penalty term $A \sum_{i=1}^{n} \big(1 - \sum_{c=1}^{K} x_{i,c}\big)^{2}$ is added to the objective function to ensure that the constraints (8) are satisfied, and $A$ is a positive constant that penalizes the objective for violations of constraints (8). One can show that if $A$ is sufficiently large, the optimal solution of the QUBO in Equation (10) does not violate the constraints (8). The proof is very similar to the proof of Theorem 3.1 and to a similar result in [14].
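A minimal sketch of how Equation (10) can be assembled into a QUBO matrix is shown below (our own illustrative code, not the authors' implementation; variable $x_{i,c}$ is flattened to index $i \cdot K + c$, and the constant term of the penalty is dropped):

```python
import numpy as np

def pairwise_similarity_qubo(S, K, A):
    """Build the QUBO of Equation (10). Linear terms are stored on the
    diagonal of Q, so the objective (up to a constant) is x^T Q x."""
    n = S.shape[0]
    Q = np.zeros((n * K, n * K))
    idx = lambda i, c: i * K + c
    for i in range(n):
        # penalty A * (1 - sum_c x[i, c])^2, expanded with x^2 = x:
        # constant A (dropped) - A * x[i, c] + 2A * x[i, c] * x[i, c']
        for c in range(K):
            Q[idx(i, c), idx(i, c)] += -A
            for c2 in range(c + 1, K):
                Q[idx(i, c), idx(i, c2)] += 2 * A
        # within-cluster dissimilarity (1 - S[i, j]) for same-cluster pairs
        for j in range(i + 1, n):
            for c in range(K):
                Q[idx(i, c), idx(j, c)] += 1.0 - S[i, j]
    return Q
```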

3.2 Partition Difference Ising Model

The partition difference approach essentially considers the (unadjusted) Rand Index [9] and can therefore be expected to perform better with respect to our agreement-based evaluation criteria. The Correlation Clustering Problem is another important problem in data mining. Gionis et al. [10] showed that Equation (5) is a restricted case of the Correlation Clustering Problem, and that Equation (5) can be expressed as the following equivalent form of the Correlation Clustering Problem:

$$\min_{\pi} \;\; \sum_{\substack{x_i, x_j:\\ \pi(x_i) = \pi(x_j)}} \big(1 - S_{ij}\big) \;+\; \sum_{\substack{x_i, x_j:\\ \pi(x_i) \neq \pi(x_j)}} S_{ij} \qquad (11)$$

We take advantage of this equivalence to model Equation (5) as a QUBO. In a similar fashion to the QUBO formulated in the preceding subsection, the terms

$$\sum_{c=1}^{K} \sum_{c' \neq c} S_{ij}\, x_{i,c}\, x_{j,c'} \qquad (12)$$

measure the similarity between points $x_i$ and $x_j$ that are assigned to different clusters, where $K$ represents an upper bound on the number of clusters in $\pi^*$. This leads to minimizing the following QUBO:

$$\min_{x} \;\; \sum_{i < j} \Big[ \sum_{c=1}^{K} \big(1 - S_{ij}\big)\, x_{i,c}\, x_{j,c} + \sum_{c=1}^{K} \sum_{c' \neq c} S_{ij}\, x_{i,c}\, x_{j,c'} \Big] \;+\; A \sum_{i=1}^{n} \Big(1 - \sum_{c=1}^{K} x_{i,c}\Big)^{2} \qquad (13)$$

Intuitively, Equation (13) measures the disagreement between the consensus clustering $\pi^*$ and the clusterings in $\Pi$. Part of this disagreement is due to points that are clustered together in the consensus clustering but not in the clusterings in $\Pi$; the rest is due to points that are assigned to different clusters in the consensus partition but to the same cluster in some of the partitions in $\Pi$.
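The following sketch assembles the QUBO of Equation (13) in the same flattened-variable layout as the earlier sketch for Equation (10) (illustrative code, not the authors' implementation):

```python
import numpy as np

def partition_difference_qubo(S, K, A):
    """Build the QUBO of Equation (13). Variable x[i, c] maps to index
    i * K + c; linear terms are stored on the diagonal of Q."""
    n = S.shape[0]
    Q = np.zeros((n * K, n * K))
    idx = lambda i, c: i * K + c
    for i in range(n):
        # one-hot penalty A * (1 - sum_c x[i, c])^2, expanded with x^2 = x
        for c in range(K):
            Q[idx(i, c), idx(i, c)] += -A
            for c2 in range(c + 1, K):
                Q[idx(i, c), idx(i, c2)] += 2 * A
        for j in range(i + 1, n):
            for c in range(K):
                for c2 in range(K):
                    if c == c2:
                        Q[idx(i, c), idx(j, c2)] += 1.0 - S[i, j]  # same-cluster dissimilarity
                    else:
                        Q[idx(i, c), idx(j, c2)] += S[i, j]        # cross-cluster similarity
    return Q
```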

Formally, we can show that Equation (13) is equivalent to the correlation clustering formulation in Equation (11) when $K$ is sufficiently large. Consistent with other methods that optimize Equation (5) (e.g., [15]), our approach takes as input an upper bound $K$ on the number of clusters in $\pi^*$; however, the obtained solution may use a smaller number of clusters. In our proof, we assume $K$ is large enough to represent the optimal solution, i.e., at least the number of clusters in optimal solutions to the correlation clustering problem in Equation (11).

Theorem 3.1

Let $x^*$ be the optimal solution to the QUBO given by Equation (13). If $A > n - 1$, then for a large enough $K$, an optimal solution $\pi^*$ to the Correlation Clustering Problem in Equation (11) can be efficiently evaluated from $x^*$.

Proof

First we show that the optimal solution to the QUBO in Equation (13) satisfies the one-hot encoding ($\sum_{c=1}^{K} x_{i,c} = 1$ for all $i$). This implies that, given $x^*$, we can create a valid clustering $\pi^*$. Note that the optimal solution will never have $\sum_{c} x_{i,c} > 1$, as this can only increase the cost. The only case in which an optimal solution could have $\sum_{c} x_{i,c} = 0$ is when the cost of assigning a point to a cluster is higher than the cost of not assigning it to a cluster (i.e., the penalty $A$). Assigning a point $x_i$ to a cluster incurs a cost of $(1 - S_{ij})$ for each point $x_j$ in the same cluster and $S_{ij}$ for each point $x_j$ that is not in that cluster. As there are $n - 1$ additional points in total, and both $S_{ij}$ and $(1 - S_{ij})$ are less than or equal to one (Equation (1)), setting $A > n - 1$ guarantees that the optimal solution satisfies the one-hot encoding.

Now assume that $\pi^*$, the clustering derived from $x^*$, is not optimal, i.e., there exists an optimal solution $\pi'$ to Equation (11) that has a strictly lower cost than $\pi^*$. Let $x'$ be the corresponding QUBO solution to $\pi'$, such that $x'_{i,c} = 1$ if and only if $\pi'(x_i) = c$. This is possible because $K$ is large enough to accommodate all clusters in $\pi'$. As both $x^*$ and $x'$ satisfy the one-hot encoding (the penalty terms are zero), their QUBO costs are identical to the correlation clustering costs of $\pi^*$ and $\pi'$, respectively. Since the cost of $\pi'$ is strictly lower than that of $\pi^*$, while the QUBO cost of $x^*$ is lower than or equal to that of $x'$, we have a contradiction. ∎
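As the proof suggests, recovering $\pi^*$ from a one-hot QUBO solution is straightforward; a small illustrative helper (ours, not from the paper) could look as follows:

```python
import numpy as np

def decode_solution(x, n, K):
    """Recover the consensus clustering pi* from a one-hot QUBO solution x
    (a flat binary vector of length n * K), as used in the proof of Theorem 3.1."""
    X = np.asarray(x).reshape(n, K)
    assert (X.sum(axis=1) == 1).all(), "solution violates the one-hot encoding"
    return X.argmax(axis=1)      # cluster label of each point
```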

3.3 Solving Consensus Clustering on the Fujitsu Digital Annealer

The Fujitsu Digital Annealer (DA) is a recent CMOS hardware for solving combinatorial optimization problems formulated as QUBO [1, 8]. We use the second generation of the DA that is capable of representing problems with up to 8192 variables with up to 64 bits of precision. The DA has previously been used to solve problems in areas such as communication [19] and signal processing [22].

The DA algorithm [1] is based on simulated annealing (SA) [13], while taking advantage of the massive parallelization provided by the CMOS hardware [1]. It has several key differences compared to SA, most notably a parallel-trial scheme, in which each Monte Carlo step considers all possible one-bit flips in parallel, and a dynamic offset mechanism that increases the energy of a state in order to escape local minima [1].
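For intuition only, the following is a highly simplified, software-only sketch of the parallel-trial and dynamic offset ideas described above; it is not Fujitsu's algorithm or hardware implementation, and the parameter names and schedule are our own assumptions:

```python
import numpy as np

def parallel_trial_anneal(Q, num_iters=10_000, beta=2.0, offset_step=0.1, rng=None):
    """Schematic QUBO minimizer inspired by the parallel-trial scheme and
    dynamic offset (NOT the DA's actual algorithm). Q is upper-triangular
    with linear terms on the diagonal."""
    rng = rng or np.random.default_rng()
    n = Q.shape[0]
    Qs = Q + np.triu(Q, 1).T                 # symmetrized copy for energy deltas
    x = rng.integers(0, 2, size=n)
    offset = 0.0
    for _ in range(num_iters):
        # energy change of flipping each bit, evaluated "in parallel"
        delta = (1 - 2 * x) * (np.diag(Q) + Qs @ x - np.diag(Q) * x)
        accept = rng.random(n) < np.exp(-beta * np.maximum(delta - offset, 0.0))
        if accept.any():
            x[rng.choice(np.flatnonzero(accept))] ^= 1   # flip one accepted bit
            offset = 0.0
        else:
            offset += offset_step            # dynamic offset: ease escape from local minima
    return x
```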

Encoding Consensus Clustering on the DA

When embedding our Ising models on the DA, we need to consider the hardware specification and adapt the representation of our models accordingly. Due to the hardware's limited precision, we need to embed the couplers and biases on an integer scale with limited granularity. In our experiments, we normalize the pairwise costs $S_{ij}$ and $(1 - S_{ij})$ to a discrete integer range, and the penalty weight $A$ is scaled by the same factor; the theoretical bound on $A$ is adjusted accordingly.
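A small illustrative sketch of such a normalization is shown below; the granularity value is an assumed example, not the setting used in our experiments:

```python
import numpy as np

SCALE = 100   # assumed example granularity for illustration only

def to_integer_costs(S, scale=SCALE):
    """Map the pairwise costs S[i, j] and (1 - S[i, j]) from [0, 1] to the
    integer range {0, ..., scale}, as required by the hardware's limited
    precision. The penalty weight A must be rescaled by the same factor."""
    sim = np.rint(S * scale).astype(int)
    dissim = np.rint((1.0 - S) * scale).astype(int)
    return sim, dissim
```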

The theoretical bound guarantees that all constraints are satisfied if problems are solved to optimality. In practice, the DA does not necessarily solve problems to optimality, and due to the nature of annealing-based algorithms, using very high constraint weights is likely to create deep local minima and result in solutions that may satisfy the constraints but are often of low quality. This is especially relevant to our pairwise similarity model, where the bound tends to become loose as the number of clusters grows. In our experiments, we therefore use constant, reasonably high weights for the pairwise similarity-based model (Equation (10)) and for the partition difference model (Equation (13)) that were empirically found to perform well across datasets. While we expect to get better performance by tuning the weights per dataset, our goal is to demonstrate the performance of our approach in a general setting. Automatic tuning of the weight values for the DA is a direction for future work.

Unlike many of the existing consensus clustering algorithms that run until convergence, our method runs for a given time limit (defined by the number of runs and iterations) and returns the best solution encountered. In our experiments, we arbitrarily choose three seconds as a (reasonably short) time limit to solve our Ising models. As with the weights, we employ a single temperature schedule across all datasets, and do not tune it per dataset.

4 Empirical Evaluation

We perform an extensive empirical evaluation of our approach using a set of seven benchmark datasets. We first describe how we generate the set of clusterings, . Next, we describe the baselines, the evaluation metrics, and the datasets.

Generating Partitions

We follow [7] and generate a set of clusterings by randomizing the parameters of the K-Means algorithm, namely the number of clusters and the initial cluster centers. In this work, we only use labelled datasets for which we know the number of clusters, $K$, based on the true labels. To generate the base clusterings, we run the K-Means algorithm with random initial cluster centers and a number of clusters chosen uniformly at random for each run. For each dataset, we generate 100 clusterings to serve as the clustering set $\Pi$.
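A sketch of this generation procedure using scikit-learn is shown below; the range from which the number of clusters is drawn is an assumption for illustration, not necessarily the exact range used in our experiments:

```python
import numpy as np
from sklearn.cluster import KMeans

def generate_base_clusterings(X, true_k, m=100, rng=None):
    """Generate m base clusterings by randomizing K-Means (random centers
    and a random number of clusters), in the spirit of [7]."""
    rng = rng or np.random.default_rng()
    partitions = []
    for _ in range(m):
        k = int(rng.integers(2, 3 * true_k + 1))   # assumed range for the random k
        km = KMeans(n_clusters=k, init="random", n_init=1,
                    random_state=int(rng.integers(1 << 31)))
        partitions.append(km.fit_predict(X))
    return partitions
```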

Baseline Algorithms

We compare our pairwise similarity-based Ising model, referred to as DA-Sm, and our correlation clustering Ising model, referred to as DA-Cr, to three popular algorithms for consensus clustering:

  1. The cluster-based similarity partitioning algorithm (CSPA) [25], solved as a $K$-way graph partitioning problem using METIS [12].

  2. The nonnegative matrix factorization (NMF) formulation in [15].

  3. Hierarchical agglomerative clustering (HAC), which starts with all points in singleton clusters and repeatedly merges the two clusters with the largest average similarity based on $S$, until reaching the desired number of clusters [21].

Evaluation

We evaluate the different methods using three measures. Our main concern in this work is the level of agreement between the consensus clustering and the set of input clusterings. To this end, we require a metric measuring the similarity of two clusterings, which can then be used to measure how close the consensus clustering is to each base clustering. Two popular metrics for measuring the similarity between clusterings are the Rand Index (RI) and the Adjusted Rand Index (ARI) [11]. The Rand Index of two clusterings lies between 0 and 1, attaining the value 1 when both clusterings perfectly agree. Likewise, the ARI, a corrected-for-chance version of the RI, achieves its maximum score when both clusterings perfectly agree. $\mathrm{ARI}(\pi^*, \pi_i)$ can thus be viewed as a measure of agreement between the consensus clustering $\pi^*$ and a base clustering $\pi_i \in \Pi$. We use the mean ARI as the main evaluation criterion:

$$\frac{1}{m} \sum_{i=1}^{m} \mathrm{ARI}(\pi^*, \pi_i) \qquad (14)$$

We also evaluate based on clustering quality and accuracy. For clustering quality, we use the mean Silhouette Coefficient [23] of all data points (computed using the Euclidean distance between the data points). For clustering accuracy, we compute the ARI between the consensus partition and the true labels.
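For reference, all three measures can be computed with scikit-learn; the following illustrative sketch (function names are ours) mirrors the evaluation described above:

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score, silhouette_score

def mean_ari(consensus, base_clusterings):
    """Equation (14): mean ARI between the consensus and every base clustering."""
    return np.mean([adjusted_rand_score(consensus, pi) for pi in base_clusterings])

def evaluate(consensus, base_clusterings, X, true_labels):
    """The three evaluation measures used in this section."""
    return {
        "consensus criteria (mean ARI)": mean_ari(consensus, base_clusterings),
        "quality (mean Silhouette)": silhouette_score(X, consensus, metric="euclidean"),
        "accuracy (ARI vs. true labels)": adjusted_rand_score(consensus, true_labels),
    }
```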

Benchmark Datasets

We run experiments on seven datasets with different characteristics: Iris, Optdigits, Pendigits, Seeds, and Wine from the UCI repository [5], as well as Protein [31] and MNIST.1 Optdigits-389 is a randomly sampled subset of Optdigits containing only the digits 3, 8, and 9. Similarly, MNIST-3689 and Pendigits-149 are subsets of the MNIST and Pendigits datasets containing only the corresponding digits.

Table 1 provides statistics on each dataset, with the coefficient of variation (CV) [4] describing the degree of class imbalance: zero indicates perfectly balanced classes, while higher values indicate a higher degree of class imbalance.

Dataset # Instances # Features # Clusters CV
Iris 150 4 3 0.000
MNIST-3689 389 784 4 0.015
Optdigits-389 537 64 3 0.021
Pendigits-149 532 16 3 0.059
Protein 116 20 6 0.301
Seeds 210 7 3 0.000
Wine 178 13 3 0.158
Table 1: Datasets

4.1 Results

We compare the baseline algorithms to the two Ising models in Section 3 solved using the Fujitsu Digital Annealer described in Section 3.3.

Clustering is typically an unsupervised task in which the number of clusters is unknown; the number of clusters in the true labels, $K$, is not available in real scenarios. Furthermore, $K$ is not necessarily the best value for clustering tasks (e.g., in many cases it is better to have smaller clusters that are more pure). We therefore test the algorithms in two configurations: when the number of clusters is set to $K$, as in the true labels, and when it is set to $2K$.

K clusters | 2K clusters
Dataset CSPA NMF HAC DA-Sm DA-Cr CSPA NMF HAC DA-Sm DA-Cr
Iris 0.555 0.618 0.618 0.619 0.621 0.536 0.614 0.627 0.608 0.642
MNIST 0.459 0.449 0.469 0.474 0.474 0.456 0.511 0.517 0.490 0.521
Optdig. 0.528 0.550 0.541 0.550 0.551 0.492 0.596 0.608 0.576 0.612
Pendig. 0.546 0.546 0.507 0.555 0.555 0.531 0.629 0.642 0.605 0.644
Protein 0.344 0.393 0.379 0.390 0.405 0.324 0.419 0.423 0.378 0.415
Seeds 0.558 0.577 0.534 0.575 0.577 0.484 0.602 0.602 0.580 0.612
Wine 0.481 0.536 0.535 0.537 0.538 0.502 0.641 0.641 0.641 0.643
# Best 0 4 1 6 7 0 1 3 1 6
Table 2: Consensus Performance Measured by Mean ARI Across Partitions

Consensus Criteria

Table 2 shows the mean ARI between $\pi^*$ and the clusterings in $\Pi$. To avoid bias due to very minor differences, we consider all methods that achieve a mean ARI within a threshold of 0.0025 of the best method to be equivalent and highlight them in bold. We also report the number of times each method is considered best across the different datasets.

The results show that DA-Cr is the best performing method for both $K$ and $2K$ clusters. The results of DA-Sm are less consistent: DA-Sm and NMF perform well for $K$ clusters, while HAC performs better for $2K$ clusters.

Clustering Quality

Table 3 reports the mean Silhouette Coefficient of all data points. Again, DA-Cr is the best performing method across datasets, followed by HAC. NMF is equivalent to HAC for $2K$ clusters.

K clusters | 2K clusters
Dataset CSPA NMF HAC DA-Sm DA-Cr CSPA NMF HAC DA-Sm DA-Cr
Iris 0.519 0.555 0.555 0.551 0.553 0.289 0.366 0.371 0.343 0.373
MNIST 0.075 0.072 0.078 0.079 0.078 0.069 0.082 0.074 0.074 0.082
Optdig. 0.127 0.120 0.120 0.130 0.130 0.088 0.119 0.119 0.112 0.121
Pendig. 0.307 0.307 0.315 0.310 0.310 0.305 0.332 0.375 0.368 0.364
Protein 0.074 0.106 0.095 0.094 0.104 0.068 0.111 0.115 0.119 0.118
Seeds 0.461 0.468 0.410 0.469 0.472 0.275 0.343 0.304 0.344 0.302
Wine 0.453 0.542 0.571 0.547 0.545 0.452 0.543 0.541 0.539 0.542
# Best 0 2 4 2 5 0 4 4 2 5
Table 3: Clustering Quality Measured by Silhouette

Clustering Accuracy

Table 4 shows the clustering accuracy, measured by the ARI between $\pi^*$ and the true labels. For $K$ clusters, we find DA-Sm to be the best-performing method (followed by DA-Cr). For $2K$ clusters, DA-Cr outperforms the other methods. Interestingly, there is no clear winner between CSPA, NMF, and HAC.

K clusters | 2K clusters
Dataset CSPA NMF HAC DA-Sm DA-Cr CSPA NMF HAC DA-Sm DA-Cr
Iris 0.868 0.746 0.746 0.716 0.730 0.438 0.463 0.447 0.433 0.521
MNIST 0.684 0.518 0.704 0.730 0.720 0.412 0.484 0.545 0.440 0.484
Optdig. 0.712 0.642 0.675 0.734 0.738 0.380 0.513 0.630 0.481 0.623
Pendig. 0.674 0.679 0.499 0.668 0.668 0.398 0.614 0.625 0.490 0.639
Protein 0.365 0.298 0.363 0.349 0.376 0.237 0.332 0.301 0.308 0.345
Seeds 0.705 0.710 0.704 0.764 0.717 0.424 0.583 0.573 0.500 0.619
Wine 0.324 0.395 0.371 0.402 0.398 0.231 0.245 0.240 0.248 0.238
# Best 1 1 0 3 2 0 0 2 1 4
Table 4: Clustering Accuracy Measured by ARI Compared to True Labels

Experiments with higher $K$

In partition difference approaches, increasing $K$ does not necessarily lead to a consensus clustering $\pi^*$ with more clusters. Instead, $K$ serves as an upper bound, and new clusters are only used if they reduce the objective.

To demonstrate how the different algorithms handle different values of $K$, Table 5 shows the consensus criteria and the actual number of clusters in $\pi^*$ for different values of $K$ (note that $K = 3$ in Iris according to the true labels). The results show that the performance of the pairwise similarity methods (CSPA, HAC, DA-Sm) degrades as we increase $K$. This is associated with the fact that the actual number of clusters in their $\pi^*$ is equal to $K$, which is significantly higher than the number of clusters in the clusterings of $\Pi$. Methods based on partition difference (NMF and DA-Cr) do not exhibit significant degradation, and the actual number of clusters they produce does not grow beyond 5 for DA-Cr and 6 for NMF, which remains close to the number of clusters typically found in the base clusterings of $\Pi$.

Consensus Criteria | # of clusters in consensus clustering
K CSPA NMF HAC DA-Sm DA-Cr CSPA NMF HAC DA-Sm DA-Cr
3 0.555 0.618 0.618 0.619 0.621 3 3 3 3 3
6 0.536 0.614 0.627 0.608 0.642 6 6 6 6 5
9 0.447 0.614 0.591 0.497 0.642 9 6 9 9 5
12 0.370 0.614 0.507 0.414 0.642 12 6 12 12 5
Table 5: Results for the Iris dataset with different numbers of clusters K

5 Conclusion

Motivated by the recent emergence of specialized hardware platforms, we present a new approach to the consensus clustering problem that is based on Ising models and solved on the Fujitsu Digital Annealer, a specialized CMOS hardware. We perform an extensive empirical evaluation and show that our approach outperforms existing methods on a set of seven datasets. These results show that using specialized hardware for core data mining tasks is a promising research direction. As future work, we plan to investigate additional data mining problems that can benefit from specialized optimization hardware, as well as to experiment with different types of specialized hardware platforms.

Footnotes

  1. http://yann.lecun.com/exdb/mnist/

References

  1. Aramon, M., Rosenberg, G., Valiante, E., Miyazawa, T., Tamura, H., Katzgraber, H.G.: Physics-inspired optimization for quadratic unconstrained problems using a digital annealer. Frontiers in Physics 7 (2019)
  2. Bian, Z., Chudak, F., Macready, W.G., Rose, G.: The Ising model: teaching an old problem new tricks. D-Wave Systems 2 (2010)
  3. Coffrin, C., Nagarajan, H., Bent, R.: Evaluating ising processing units with integer programming. In: CPAIOR. pp. 163–181 (2019)
  4. DeGroot, M.H., Schervish, M.J.: Probability and Statistics. Pearson (2012)
  5. Dua, D., Graff, C.: UCI machine learning repository (2017), http://archive.ics.uci.edu/ml
  6. Filkov, V., Skiena, S.: Integrating microarray data by consensus clustering. International Journal on Artificial Intelligence Tools 13(04), 863–880 (2004)
  7. Fred, A.L., Jain, A.K.: Combining multiple clusterings using evidence accumulation. IEEE TPAMI 27(6), 835–850 (2005)
  8. Fujitsu: Digital annealer. https://www.fujitsu.com/jp/digitalannealer/
  9. Ghosh, J., Acharya, A.: Cluster ensembles. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 1(4), 305–315 (2011)
  10. Gionis, A., Mannila, H., Tsaparas, P.: Clustering aggregation. TKDD 1(1),  4 (2007)
  11. Hubert, L., Arabie, P.: Comparing partitions. J. Classification 2(1), 193–218 (1985)
  12. Karypis, G., Kumar, V.: Multilevel k-way partitioning scheme for irregular graphs. Journal of Parallel and Distributed Computing 48(1), 96–129 (1998)
  13. Kirkpatrick, S., Gelatt, C.D., Vecchi, M.P.: Optimization by simulated annealing. Science 220(4598), 671–680 (1983)
  14. Kumar, V., Bass, G., Tomlin, C., Dulny, J.: Quantum annealing for combinatorial clustering. Quantum Information Processing 17(2),  39 (2018)
  15. Li, T., Ding, C., Jordan, M.I.: Solving consensus and semi-supervised clustering problems using nonnegative matrix factorization. In: ICDM. pp. 577–582 (2007)
  16. Li, T., Ogihara, M., Ma, S.: On combining multiple clusterings: an overview and a new perspective. Applied Intelligence 33(2), 207–219 (2010)
  17. Liu, X., Ushijima-Mwesigwa, H., Mandal, A., Upadhyay, S., Safro, I., Roy, A.: On modeling local search with special-purpose combinatorial optimization hardware. arXiv preprint arXiv:1911.09810 (2019)
  18. Lucas, A.: Ising formulations of many NP problems. Frontiers in Physics 2, 5 (2014)
  19. Naghsh, Z., Javad-Kalbasi, M., Valaee, S.: Digitally annealed solution for the maximum clique problem with critical application in cellular v2x. In: ICC. pp. 1–7 (2019)
  20. Negre, C.F.A., Ushijima-Mwesigwa, H., Mniszewski, S.M.: Detecting multiple communities using quantum annealing on the d-wave system. PLOS ONE 15, 1–14 (02 2020), https://doi.org/10.1371/journal.pone.0227538
  21. Nguyen, N., Caruana, R.: Consensus clusterings. In: ICDM. pp. 607–612 (2007)
  22. Rahman, M.T., Han, S., Tadayon, N., Valaee, S.: Ising model formulation of outlier rejection, with application in wifi based positioning. In: ICASSP. pp. 4405–4409 (2019)
  23. Rousseeuw, P.J.: Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. J. Computational and Applied Mathematics 20, 53–65 (1987)
  24. Shaydulin, R., Ushijima-Mwesigwa, H., Safro, I., Mniszewski, S., Alexeev, Y.: Network community detection on small quantum computers. Advanced Quantum Technologies p. 1900029 (2019)
  25. Strehl, A., Ghosh, J.: Cluster ensembles—a knowledge reuse framework for combining multiple partitions. JMLR 3(Dec), 583–617 (2002)
  26. Topchy, A., Jain, A.K., Punch, W.: Clustering ensembles: Models of consensus and weak partitions. IEEE TPAMI 27(12), 1866–1881 (2005)
  27. Ushijima-Mwesigwa, H., Negre, C.F., Mniszewski, S.M.: Graph partitioning using quantum annealing on the d-wave system. In: PMES. pp. 22–29 (2017)
  28. Ushijima-Mwesigwa, H., Shaydulin, R., Negre, C.F., Mniszewski, S.M., Alexeev, Y., Safro, I.: Multilevel combinatorial optimization across quantum architectures. arXiv preprint arXiv:1910.09985 (2019)
  29. Vega-Pons, S., Ruiz-Shulcloper, J.: A survey of clustering ensemble algorithms. IJPRAI 25(03), 337–372 (2011)
  30. Wu, J., Liu, H., Xiong, H., Cao, J., Chen, J.: K-means-based consensus clustering: A unified view. IEEE TKDE 27(1), 155–169 (2014)
  31. Xing, E.P., Jordan, M.I., Russell, S.J., Ng, A.Y.: Distance metric learning with application to clustering with side-information. In: NIPS. pp. 521–528 (2003)