DAOC: Stable Clustering of Large Networks

Abstract

Clustering is a crucial component of many data mining systems involving the analysis and exploration of various data. Data diversity calls for clustering algorithms to be accurate while providing stable (i.e., deterministic and robust) results on arbitrary input networks. Moreover, modern systems often operate on large datasets, which implicitly constrains the complexity of the clustering algorithm. Existing clustering techniques are only partially stable, however, as they guarantee either determinism or robustness. To address this issue, we introduce DAOC, a Deterministic and Agglomerative Overlapping Clustering algorithm. DAOC leverages a new technique called Overlap Decomposition to identify fine-grained clusters in a deterministic way capturing multiple optima. In addition, it leverages a novel consensus approach, Mutual Maximal Gain, to ensure robustness and further improve the stability of the results while still being capable of identifying micro-scale clusters. Our empirical results on both synthetic and real-world networks show that DAOC yields stable clusters while being on average 25% more accurate than state-of-the-art deterministic algorithms without requiring any tuning. Our approach has the ambition to greatly simplify and speed up data analysis tasks involving iterative processing (need for determinism) as well as data fluctuations (need for robustness) and to provide accurate and reproducible results.

stable clustering, deterministic overlapping clustering, community structure discovery, parameter-free community detection, cluster analysis.

I Introduction

Clustering is a fundamental part of data mining with a wide applicability to statistical analysis and exploration of physical, social, biological and information systems. Modeling and analyzing such systems often involves processing large complex networks [1]. Clustering large networks is intricate in practice, and should ideally provide stable results in an efficient way in order to make the process easier for the data scientist.

Stability is pivotal for many data mining tasks since it allows one to better understand whether the results are caused by the evolving structure of the network, by evolving node ids (updated labels, coordinate shifts or node reordering), or by some fluctuations in the application of non-deterministic algorithms. Stability of the results involves both determinism and robustness. We use the term deterministic in the strictest sense, denoting algorithms that a) do not involve any stochastic operations and b) produce results invariant to the node processing order. Robustness ensures that clustering results gracefully evolve with small perturbations or changes in the input network [19]. It prevents sudden changes in the output for dynamic networks and provides the ability to tolerate noise and outliers in the input data [10].

Clustering a network is usually not a one-off project but an iterative process, where the results are visually explored and refined multiple times. The visual exploration of large networks requires considering the specificities of human perception [4, 32], which copes well with fine-grained hierarchies of clusters. In addition, those hierarchies should be stable across iterations so that the user can compare previous results with new ones. This calls for results that are both stable and fine-grained.

In this paper, we introduce a novel clustering method called DAOC to address the aforementioned issues. To the best of our knowledge, DAOC is the first parameter-free clustering algorithm that is simultaneously deterministic, robust and applicable to large weighted networks, yielding a fine-grained hierarchy of overlapping clusters. More specifically, DAOC leverages a) a novel consensus technique we call Mutual Maximal Gain (MMG) to perform a robust and deterministic identification of node membership in the clusters, and b) a new technique for overlap decomposition (OD) to form fine-grained clusters in a deterministic way, even when the optimization function yields a set of structurally different but numerically equivalent optima (see degeneracy in Section III). We empirically evaluate the stability of the resulting clusters produced by our approach, as well as its efficiency and effectiveness, on both synthetic and real-world networks. We show that DAOC yields stable clusters while being on average 25% more accurate than state-of-the-art deterministic clustering algorithms and more efficient than state-of-the-art overlapping clustering algorithms, without requiring any manual tuning. In addition, we show that DAOC returns on average more accurate results than any state-of-the-art clustering algorithm on complex real-world networks (e.g., networks with overlapping and nested clusters). We foresee DAOC representing an important step forward for clustering algorithms as: a) deterministic clustering algorithms are usually not robust and have a lower accuracy than their stochastic counterparts, and b) robust methods are typically not deterministic and do not provide fine-grained results as they are insensitive to micro-scale changes, as described in more detail in the following section.

II Related Work

A great diversity of clustering algorithms can be found in the literature. Below, we give an overview of prior methods achieving robust results, before describing deterministic approaches and outlining a few widely used algorithms that are neither robust nor deterministic but were inspirational for our method.

Robust clustering algorithms typically leverage consensus or ensemble techniques [14, 45, 23, 31]. They identify clusters using consensus functions (e.g., majority voting) by processing an input network multiple times and varying either the parameters of the algorithm, or the clustering algorithm itself. However, such algorithms typically a) are unable to detect fine-grained structures due to the lack of consensus therein, b) are stochastic and c) are inapplicable to large networks due to their high computational cost. We describe some prominent and scalable consensus clustering algorithms below.

- Order Statistics Local Optimization Method (OSLOM) [21] is one of the first widely used consensus clustering algorithms, which accounts for weights of the network links and yields overlapping clusters with a hierarchical structure. It is based on the local optimization of a fitness function expressing the statistical significance of clusters with respect to random fluctuations. OSLOM scales near linearly on sparse networks but has a relatively high computational complexity at each iteration, making it inapplicable to large real-world networks (as we show in Section V).

- Core Groups Graph Clustering Randomized Greedy (CGGC[i]_RG) [38] is a fast and accurate ensemble clustering algorithm. It applies a generic procedure of ensemble learning called Core Groups Graph Clustering (CGGC) to determine several weak graph (network) clusterings and then to form a strong clustering from their maximal overlap. The algorithm has a near linear computational complexity with the number of edges due to the sampling and local optimization strategies applied at each iteration. However, this algorithm is designed for unweighted graphs and produces flat and non-overlapping clusters only, which limits its applicability and yields low accuracy on large complex networks as we show in Section V.

- The Fast Consensus technique was recently proposed and works on top of state-of-the-art clustering algorithms including Louvain (FCoLouv), Label Propagation (FCoLPM) and Infomap (FCoIMap) [43]. The technique initializes a consensus matrix and then iteratively refines it until convergence as follows. First, the input network is clustered by the original algorithm multiple times. Each consensus value of the matrix is evaluated as the fraction of the runs in which the two respective nodes belong to the same cluster. The consensus matrix is formed using pairs of co-clustered adjacent nodes and extended with closed triads instead of all nodes in the produced clusters, which significantly reduces the amount of computation. The formed matrix is filtered with a threshold and then clustered several times by the original clustering algorithm, producing a refined consensus matrix. This refinement process is repeated until all runs produce identical clusters (i.e., until all values in the consensus matrix are either zero or one, up to a given precision). The Fast Consensus technique, however, lacks a convergence guarantee and relies on three parameters having a strong impact on its computational complexity.
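
To make the consensus-matrix idea concrete, here is a minimal Python sketch; it is simplified (it counts all co-clustered pairs rather than only adjacent pairs and closed triads), and the function name, toy partitions, and 0.5 filtering threshold are our illustrative assumptions rather than the original Fast Consensus implementation.

```python
import itertools

def consensus_matrix(partitions, threshold=0.5):
    """Fraction of clustering runs in which each node pair is co-clustered,
    keeping only pairs at or above the filtering threshold."""
    counts = {}
    for part in partitions:                    # one partition per clustering run
        for cluster in part:
            for i, j in itertools.combinations(sorted(cluster), 2):
                counts[(i, j)] = counts.get((i, j), 0) + 1
    n_runs = len(partitions)
    return {pair: c / n_runs for pair, c in counts.items()
            if c / n_runs >= threshold}

# Three hypothetical runs of a base clustering algorithm over 4 nodes:
runs = [[{0, 1}, {2, 3}], [{0, 1, 2}, {3}], [{0, 1}, {2}, {3}]]
cm = consensus_matrix(runs)
# pair (0, 1) is co-clustered in all three runs, the other pairs are filtered out
```

Iterating this construction (re-clustering the thresholded matrix) is what drives the values toward zero or one.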

Deterministic clustering algorithms and, in general, non-stochastic ones (i.e., algorithms relaxing the determinism constraint) are typically not robust and are sensitive to both a) initialization [42, 5, 44, 18] (including the order in which the nodes are processed) and b) minor changes in the input network, which may significantly affect the clustering results [10, 23]. Non-stochastic algorithms also often yield less precise results, getting stuck on the same local optimum until the input is updated. Multiple local optima often exist due to the degeneracy phenomenon, which is explained in Section III and has to be specifically addressed to create deterministic clustering algorithms that are both robust and accurate. We describe below some of the well-known deterministic algorithms.

- The Clique Percolation Method (CPM) [11] is probably the first deterministic clustering algorithm supporting overlapping clusters and capable of providing fine-grained results. The Sequential algorithm for fast Clique Percolation (SCP) [20] is a CPM-based algorithm, which detects k-clique clusters in a single run and produces a dendrogram of clusters. SCP produces deterministic and overlapping clusters at various scales, and its computational complexity depends linearly on the number of k-cliques in the network. However, SCP relies on a number of parameters and has an exponential worst-case complexity on dense networks, which significantly limits its practical applicability.

- pSCAN [7] is a fast overlapping clustering algorithm for “exact structural graph clustering” (i.e., it is deterministic and input-order independent). First, it identifies core graph vertices (network nodes), which always belong to exactly one cluster, forming initially disjoint clusters. The remaining nodes are then assigned to the initial clusters, yielding overlapping clusters. pSCAN relies on two input parameters, ε and μ. The results it produces are very sensitive to those parameters, whose optimal values are hard to guess for arbitrary input networks.

Inspirational algorithms for our method

- Louvain [3] is a commonly used clustering algorithm that performs modularity optimization using a local search technique on multiple levels to coarsen clusters. It introduces modularity gain as an optimization function. The algorithm is parameter-free, returns a hierarchy of clusters, and has a near-linear runtime complexity in the number of network links. However, the resulting clusters are not stable and depend on the order in which the nodes are processed. Similarly to Louvain, our method is a greedy agglomerative clustering algorithm that uses modularity gain as its optimization function. However, the cluster formation process in DAOC differs substantially, addressing the aforementioned issues of the Louvain algorithm.

- DBSCAN [13] is a density-based clustering algorithm suitable for processing data with noise. It regroups points that are close in space given the maximal distance between points and the minimal number of points within an area. DBSCAN is limited in discovering a large variety of clusters because of its reliance on a density parameter. It has a strong dependency on its input parameters and lacks a principled way to determine their optimal values [8]. We adopt, however, the DBSCAN idea of forming clusters by extending the densest region, to prevent early coarsening and to produce a fine-grained hierarchical structure.

III Preliminaries

#i            Node i of the network (graph)
Q             Modularity
ΔQ(i, j)      Modularity gain between #i and #j
#i ∼ #j       Items #i and #j (nodes or clusters) are neighbors (adjacent, linked)
ΔQmax(i)      Maximal modularity gain for #i, see (4)
ΔQmm(i, j)    Mutual Maximal Gain, see (3)
ccs(i)        Mutual clustering candidates of #i (by ΔQmm)
TABLE I: Notations

A clustering algorithm is applied to a network to produce groups of nodes that are called clusters (also known as communities, modules, partitions or covers). Clusters represent groups of tightly coupled nodes with loose inter-group connections [35], where the group structure is defined by the clustering optimization function. The resulting clusters can be overlapping, which happens when they share some common nodes, the overlap. The input network (graph) can be weighted and directed, where a node (vertex) weight is represented as a weighted link (edge) from the node to itself (a self-loop). The main notations used in this paper are listed in Table I.

Clustering algorithms can be classified by the kind of input data they operate on: a) graphs specified by pairwise relations (networks) or b) attributed graphs (e.g., vertices specified by coordinates in a multi-dimensional space). These two types of input data cannot be unambiguously converted into each other unless one agrees on some customized and specific conversion function. Hence, their respective clustering algorithms are not (directly) comparable. In this paper, we focus on clustering algorithms working on graphs specified by pairwise relations (networks), which are also known as community structure discovery algorithms.

Modularity (Q) [34] is a standard measure of clustering quality, equal to the difference between the density of the links inside the clusters and the expected density:

Q = Σ_{i,j} ( w_ij / w − (w_i · w_j) / w² ) · δ(C_i, C_j)    (1)

where w_ij is the accumulated weight of the arcs between nodes #i and #j, w_i is the accumulated weight of all arcs of #i, w is the total weight of the network, C_i is the cluster to which #i is assigned, and the Kronecker delta δ(C_i, C_j) is a function equal to 1 when #i and #j belong to the same cluster (i.e., C_i = C_j), and 0 otherwise.
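
As an illustration, (1) can be evaluated directly on a small weighted graph. The following Python sketch (the function name, graph encoding, and toy graph are our own illustrative choices) accumulates the arc weights and applies the formula over same-cluster node pairs.

```python
from collections import defaultdict

def modularity(edges, assignment):
    """Weighted modularity per Eq. (1): the sum over same-cluster node
    pairs of w_ij/w - w_i*w_j/w**2, with w the total arc weight."""
    w_i = defaultdict(float)          # accumulated weight of all arcs of node i
    w = 0.0                           # total weight of the network
    for (i, j), wij in edges.items():
        w_i[i] += wij
        w_i[j] += wij
        w += 2 * wij                  # each undirected edge counted as two arcs
    q = 0.0
    for (i, j), wij in edges.items(): # intra-cluster arcs (i,j) and (j,i)
        if assignment[i] == assignment[j]:
            q += 2 * wij / w
    for i in w_i:                     # expected-density penalty, all ordered pairs
        for j in w_i:
            if assignment[i] == assignment[j]:
                q -= w_i[i] * w_i[j] / w ** 2
    return q

# Two triangles joined by the single edge (2, 3), unit weights:
edges = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0,
         (3, 4): 1.0, (4, 5): 1.0, (3, 5): 1.0, (2, 3): 1.0}
q = modularity(edges, {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1})
```

For this graph the clustering into the two triangles gives Q = 6/7 − 1/2 ≈ 0.357.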

Modularity is applicable to non-overlapping cases only. However, there exist modularity extensions that handle overlaps [17, 9]. The main intuition behind such extensions is to quantify the degree of a node's membership among multiple clusters, either by replacing the Kronecker delta (see (1)) with a similarity measure [33, 41] or by integrating a belonging coefficient [37, 24, 25] directly into the definition of modularity. Although both the old and new measures are named modularity, they generally have different values even when applied to the same clusters [16], resulting in incompatible outcomes. Some modularity extensions are equivalent to the original modularity when applied to non-overlapping clusters [33, 41, 25]. However, the implementations of these extensions introduce an excessive number of additional parameters [33, 41] and/or increase the computation time by orders of magnitude [25], which significantly complicates their application to large networks.

Modularity gain (ΔQ) [3] captures the difference in modularity when merging two nodes #i and #j into the same cluster, providing a computationally efficient way to optimize modularity:

ΔQ(i, j) = 2 · ( w_ij / w − (w_i · w_j) / w² )    (2)

We use modularity gain (ΔQ) as the underlying optimization function for our meta-optimization function MMG (introduced in Section IV-A2).
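
As a quick sanity check, (2) can be cross-checked against a direct evaluation of (1): merging two singleton clusters changes Q by exactly ΔQ(i, j). The helper and the toy graph below are our illustrative choices.

```python
def delta_q(w_ij, w_i, w_j, w):
    """Sketch of Eq. (2): modularity gain of merging two singleton
    clusters #i and #j, where w_ij is the weight of the arcs between
    them, w_i and w_j their accumulated arc weights, w the total weight."""
    return 2 * (w_ij / w - w_i * w_j / w ** 2)

# Toy graph: two triangles joined by edge (2, 3) with unit weights, so
# w = 14 and nodes 2 and 3 have accumulated arc weight 3 each. Merging
# singletons {2} and {3} changes Q by exactly 10/196.
gain = delta_q(1, 3, 3, 14)
```

The same value is obtained by evaluating (1) before and after the merge, which is what makes ΔQ cheap to use inside a greedy agglomerative loop.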

Degeneracy is a phenomenon linked to the clustering optimization function, appearing when multiple distinct clusterings (i.e., results of the clustering process) share the same globally maximal value of the optimization function while being structurally different [15]. This phenomenon is inherent to any optimization function and implies that a network node might yield the maximal value of the optimization function while being a member of multiple clusters, which is the case when an overlap occurs. This prevents the derivation of accurate results by deterministic clustering algorithms that do not consider overlaps. To cope with degeneracy, typically multiple stochastic clusterings are produced and combined, which is called ensemble or consensus clustering and provides robust but coarse-grained results [15, 2]. The degeneracy of the optimization function, together with the aforementioned computational drawbacks of modularity extensions, motivated us to introduce a new overlap decomposition technique, OD (see Section IV-B1). OD allows overlaps to be considered and processed efficiently by algorithms whose optimization function is designed for the non-overlapping case. It produces accurate, robust and fine-grained results in a deterministic way, as we show in our experimental evaluation (see Section V).

IV Method

We introduce a novel clustering algorithm, DAOC, to perform a stable (i.e., both robust and deterministic) clustering of the input network, producing a fine-grained hierarchy of overlapping clusters. DAOC is a greedy algorithm that uses an agglomerative clustering approach with a local search technique (inspired by Louvain [3]) and extended with two novel techniques. Namely, we first propose a novel (micro) consensus technique called Mutual Maximal Gain (MMG) for the robust identification of node membership in the clusters, which is performed in a deterministic and fine-grained manner. In addition to MMG, we also propose a new overlap decomposition (OD) technique to cope with the degeneracy of the optimization function. OD forms stable and fine-grained clusters in a deterministic way from the nodes preselected by MMG.

Algorithm 1 gives a high-level description of our method. It takes as input a directed and weighted network with self-loops specifying node weights. The resulting hierarchy of clusters is built iteratively starting from the bottom (most fine-grained) level. One level of the hierarchy is generated at each iteration of our clustering algorithm. A clustering iteration consists of the following steps, listed on lines 4–5:

  1. Identification of the clustering candidates for each node #i using the proposed consensus approach, MMG, described in Section IV-A2 and

  2. Cluster formation considering overlaps, described in Section IV-B.

1:function cluster(nodes)
2:     hier ← ∅ ▷ List of the hierarchy levels
3:     while nodes ≠ ∅ do ▷ Stop if the nodes list is empty
4:          identifyCands(nodes) ▷ Initialize nd.ccs
5:          cls ← formClusters(nodes)
6:          if cls ≠ ∅ then ▷ Initialize the next-level nodes
7:               for all cl ∈ cls do
8:                    initCluster(cl)
9:               end for
10:               for all nd ∈ nodes do ▷ Consider propagates
11:                    if nd is propagated then
12:                         initNode(nd)
13:                         cls ← cls ∪ {nd}
14:                    end if
15:               end for
16:               hier ← hier + [cls] ▷ Extend the hierarchy
17:          end if
18:          nodes ← cls ▷ Update the processing nodes
19:     end while
20:     return hier ▷ The resulting hierarchy of clusters
21:end function
Algorithm 1 DAOC Clustering Algorithm.

At the end of each iteration, links are (re)computed for the formed clusters (initCluster procedure) and for the non-clustered nodes (propagated nodes in the initNode procedure). Both the non-clustered nodes and the formed clusters are treated as input nodes for the following iteration. The algorithm terminates when the iteration does not produce any new cluster.

The clustering process yields a hierarchy of overlapping clusters in a deterministic way independent of the nodes processing order, since all clustering operations a) consist solely of non-stochastic, uniform and local operations, and b) process each node independently, relying on immutable data evaluated on previous steps only. The algorithm is guaranteed to converge since a) the optimization function is bounded (as outlined in Section IV-A1) and monotonically increasing during the clustering process, and b) the number of formed clusters does not exceed the number of clustered nodes at each iteration (as explained in Section IV-B2).

IV-A Identification of the Clustering Candidates

The clustering candidates are the nodes that are likely to be grouped into clusters in the current iteration. They are identified for each node in two steps, as listed in Algorithm 2. First, for each node, the adjacent nodes having the maximal non-negative value of the optimization function optfn are stored in the nd.ccs sequence, see lines 3–11. Then, the preselected candidates are reduced to the mutual candidates by the mcands procedure, and the filtered-out nodes are marked as propagated. The latter step is combined with a cluster formation operation in our implementation to avoid redundant passes over all nodes. The mcands procedure implements our Mutual Maximal Gain (MMG) consensus approach described in Section IV-A2, which is a meta-optimization technique that can be applied on top of any optimization function satisfying a set of constraints described in the following paragraph.

1:function identifyCands(nodes)
2:     for all nd ∈ nodes do ▷ Evaluate clustering candidates
3:          gmax ← 0 ▷ Maximal gain
4:          for all ln ∈ nd.links do
5:               g ← optfn(nd, ln.dst) ▷ Clustering gain
6:               if g ≥ 0 and g ≥ gmax then
7:                    if g > gmax then
8:                         nd.ccs ← ∅ ▷ Reset cands
9:                         gmax ← g
10:                    end if
11:                    nd.ccs ← nd.ccs ∪ {ln.dst} ▷ Extend cands
12:               end if
13:          end for
14:     end for
15:
16:     for all nd ∈ nodes do ▷ Reduce the candidates using the consensus approach, propagate remaining nodes
17:          if nd.ccs = ∅ or not mcands(nd) then
18:               propagate(nd)
19:          end if
20:     end for
21:end function
Algorithm 2 Clustering Candidates Identification

Optimization Function

In order to be used in our method, the optimization function optfn should be a) applicable to pairwise node comparison, i.e., evaluated on an adjacent pair of nodes as optfn(#i, #j); b) commutative, i.e., optfn(#i, #j) = optfn(#j, #i); and c) bounded on the non-negative range, where positive values indicate some quality improvement in the structure of the forming cluster. Various optimization functions besides modularity and inverse conductance satisfy these constraints (see the list in [6], for instance).

Our DAOC algorithm uses modularity gain, ΔQ (see (2)), as its optimization function. We chose modularity (gain) optimization because of the following advantages. First, modularity maximization (under certain conditions) is equivalent to the provably correct but computationally expensive methods of graph partitioning and spectral clustering, and to the maximum likelihood method applied to the stochastic block model [36, 30]. Second, there are known and efficient algorithms for modularity maximization, including the Louvain algorithm [3], which are accurate and have a near-linear computational complexity.

MMG Consensus Approach

We propose a novel (micro) consensus approach called Mutual Maximal Gain (MMG), which requires only a single pass over the input network, is more efficient, and yields much more fine-grained results than state-of-the-art techniques.

Definition 1 (Mutual Maximal Gain (MMG))

MMG is a value of the optimization function (in our case modularity gain) for two adjacent nodes #i and #j, and is defined in cases where these nodes mutually reach the maximal value of the optimization function (i.e., reach consensus on the maximal value) when considering each other:

ΔQmm(i, j) = ΔQ(i, j)  ⟺  #i ∼ #j  ∧  ΔQ(i, j) = ΔQmax(i) = ΔQmax(j)    (3)

where #i ∼ #j denotes the adjacency of #i and #j, and ΔQmax(i) is the maximal modularity gain for #i:

ΔQmax(i) = max_{#j ∼ #i} ΔQ(i, j)    (4)

where ΔQ(i, j) is the modularity gain for #i and #j (see (2)).

MMG exists in any finite network, which can easily be proven by contradiction as follows. If MMG did not exist, following the maximal-gain relation from node to node would form a cycle with strictly increasing ΔQmax, implying for some node #i on the cycle that ΔQmax(i) > ΔQmax(i), which yields a contradiction. MMG evaluation is deterministic, and the resulting nodes are quasi-uniform clustering candidates in the sense that inside each connected component they share the same maximal value of modularity gain. MMG takes into account fine-grained clusters as it operates on pairs of nodes, unlike conventional consensus approaches, where micro-clusters either require many re-executions of the consensus algorithm, or cannot be captured at all. MMG does not always guarantee optimal clustering results but reduces degeneracy due to the applied consensus approach. According to (3), all nodes having MMG to #i share the same value of ΔQmax(i), i.e., they form the overlap in #i. Overlap processing is discussed in the following section.
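
The MMG selection of Definition 1 can be sketched in a few lines of Python; the graph encoding and helper names are our assumptions, with the modularity gain computed for singleton clusters as in (2).

```python
from collections import defaultdict

def mmg_pairs(edges):
    """Sketch of MMG on a weighted graph given as {(i, j): w_ij}: a pair
    is kept iff each endpoint reaches its maximal gain on the other."""
    w_i = defaultdict(float)
    adj = defaultdict(dict)
    w = 0.0
    for (i, j), wij in edges.items():
        w_i[i] += wij
        w_i[j] += wij
        adj[i][j] = wij
        adj[j][i] = wij
        w += 2 * wij
    def gain(i, j):                   # Eq. (2) for singleton clusters
        return 2 * (adj[i][j] / w - w_i[i] * w_i[j] / w ** 2)
    best = {i: max(gain(i, j) for j in adj[i]) for i in adj}   # Eq. (4)
    return {(i, j) for (i, j) in edges                          # Eq. (3)
            if gain(i, j) >= 0 and gain(i, j) == best[i] == best[j]}

# Two triangles joined by edge (2, 3), unit weights: only the pairs of
# degree-2 nodes within each triangle are mutually maximal.
edges = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0,
         (3, 4): 1.0, (4, 5): 1.0, (3, 5): 1.0, (2, 3): 1.0}
pairs = mmg_pairs(edges)
```

Note that the bridge (2, 3) is rejected: each endpoint prefers its own triangle, illustrating how MMG suppresses weak inter-cluster links in a single pass.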

IV-B Clusters Formation with Overlap Decomposition

Clusters are formed from the candidate nodes selected by MMG, as listed in Algorithm 3: a) nodes having a single mutual clustering candidate (cc) form the respective cluster directly, as shown on line 9; b) otherwise, the overlap is processed. There are three possible ways of clustering an overlapping node in a deterministic way: a) split the node into fragments, one fragment per cc of the node, and group each resulting fragment with the respective cc into a dedicated cluster (see lines 11–12); or b) group the node together with all its nd.ccs items into a single cluster (i.e., coarsening on line 19); or c) propagate the node to be processed in the following clustering iteration if its clustering would yield a negative value of the optimization function. Each node fragment created by the overlap decomposition is treated as a virtual node representing the belonging degree (i.e., the fuzzy relation) of the original node to multiple clusters. Virtual nodes are used to avoid introducing a fuzzy relation for all network nodes (i.e., an additional complex node attribute), reducing memory consumption and execution time without affecting the input network itself. In order to get the most effective clustering result, we evaluate the first two aforementioned options and select the one maximizing the optimization function. Then, we form the cluster(s) by the merge or mergeOvp procedures as follows. The mergeOvp procedure groups each node fragment (i.e., the virtual node created by the overlap decomposition) together with its respective mutual clustering candidate. This results in either a) an extension of the existing cluster to which the candidate already belongs, or b) the creation of a new cluster and its addition to the cls list. The merge procedure groups the node with all its clustering candidates, either a) merging together the respective clusters of the candidates if they exist, or b) creating a new cluster and adding it to the cls list.

1:function formClusters(nodes)
2:     cls ← ∅ ▷ List of the formed clusters
3:     for all nd ∈ nodes do
4:          if nd.ccs = ∅ then ▷ Prefilter nodes
5:               continue
6:          end if
7:
8:          if |nd.ccs| = 1 then ▷ Form a cluster
9:               merge(cls, nd, nd.ccs)
10:          else if gainEach(nd) > gainAll(nd) and odAccept(nd) then ▷ Form overlapping clusters
11:               for all cc ∈ nd.ccs do
12:                    mergeOvp(cls, nd, cc)
13:               end for
14:          else ▷ DBSCAN-inspired aggregation
15:               cands ← maxIntersectOrig(nd)
16:               if |cands| ≥ |nd.ccs| / 2 then ▷ Form a single cluster
17:                    merge(cls, nd, cands)
18:               else if gainAll(nd) ≥ 0 then ▷ Form a cluster
19:                    merge(cls, nd, nd.ccs)
20:               else
21:                    propagate(nd)
22:               end if
23:          end if
24:     end for
25:     return cls ▷ Resulting clusters
26:end function
Algorithm 3 Clusters Formation
Algorithm 3 Clusters Formation

Node splitting is the most challenging process, and it is performed only if the accumulated gain from the decomposed node fragments to each of the respective mutual clustering candidates (gainEach procedure) exceeds the gain of grouping the whole node with all ccs (gainAll procedure). The node splitting involves: a) the estimation of the node fragmentation impact on the clustering convergence (odAccept procedure, given in Section IV-B2) and b) the evaluation of the weights for both the forming fragments and the links between the fragments of the splitting node, as described in Section IV-B1.

Overlap Decomposition (OD)

An overlap occurs when a node has multiple mutual clustering candidates (ccs). To evaluate ΔQ when clustering the node with each of its ccs, the node is split into k identical and fully interconnected fragments sharing the node weight and the original node links. However, since the objective of the clustering is the maximization of Q: a) the node splitting itself is acceptable only if it does not decrease the value of the optimization function, and b) the decomposed node can be composed back from its fragments only if the composition does not decrease it either. Hence, to have a reversible decomposition that does not affect the value of the optimization function for the decomposed node, we end up with ΔQ = 0 for the decomposition.

The outlined constraints for an isolated node, which does not have any link to other nodes, can formally be expressed as:

k · w_f + k(k−1) · w_l = w,    ΔQ(i_1, …, i_k) = 0    (5)

where w is the weight of the original node being decomposed into k fragments, w_f is the weight of each node fragment and w_l is the weight of each (directed) link between the fragments. ΔQ(i_1, …, i_k) = 0 since the modularity of the isolated node is zero (see (1)). The solution of (5) is:

w_f = w_l = w / k²    (6)

Nodes in the network typically have links to other nodes, which get split equally between the fragments of the node:

w_{i_p} = w_i / k²,    w_{i_p, j} = w_{ij} / k    (7)

where w_{i_p} is the weight of each fragment #i_p of the node #i, and w_{i_p, j} is the weight of the link (#i_p, #j).
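
The arithmetic of the decomposition can be sketched as follows, assuming the fragment and inter-fragment link weights of (6) and the per-fragment link shares of (7); the function and variable names are illustrative.

```python
def decompose(w_node, k, links):
    """Fragment weight and inter-fragment link weight per Eq. (6), and
    the per-fragment share of each original link per Eq. (7)."""
    w_f = w_node / k ** 2             # weight of each fragment
    w_l = w_node / k ** 2             # weight of each directed inter-fragment link
    shares = {dst: wij / k for dst, wij in links.items()}
    return w_f, w_l, shares

# Reversibility check: the k fragments plus the k*(k-1) directed
# inter-fragment arcs accumulate exactly the original node weight.
w, k = 6.0, 3
w_f, w_l, shares = decompose(w, k, {"a": 1.0, "b": 2.0})
total = k * w_f + k * (k - 1) * w_l
```

Here total equals the original weight w, which is exactly the conservation constraint of (5) that makes the decomposition reversible.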

Example 1 (Overlap Decomposition)
Fig. 1: Decomposition of the clusters overlapping in a node of the input network with weights on both the nodes and the links.

The input network on the left-hand side of Fig. 1 has a node whose three neighbor nodes are its ccs. These neighbor nodes form the respective clusters overlapping in the node, which is decomposed into k = 3 fragments to evaluate the overlap. The node has an internal weight (which can be represented via an additional edge to itself) and three edges of equal weight. The overlapping clusters are evaluated using (7) as equivalent, virtual non-overlapping clusters formed from the new fragments of the overlapping node.

Constraining Overlap Decomposition

Overlap decomposition (OD) does not affect the value of the optimization function for the node being decomposed (ΔQ = 0), hence it does not affect the convergence of the optimization function during the clustering. However, OD increases the complexity of the clustering when the number of produced clusters exceeds the number of clustered nodes decomposed into multiple fragments. This complexity increase should be identified and prevented to avoid an unbounded growth of the clustering time.

In what follows, we infer a formal condition that guarantees the non-increasing complexity of OD. We decompose a node of degree d into k fragments, one per cc. Each forming cluster that has an overlap in this node owns one fragment (see Fig. 1) and shares at most d − k links to the non-cc neighbors of the node. The number of links between the fragments resulting from the node split is k(k−1)/2. The aggregated number of resulting links should not exceed the degree of the node being decomposed to retain the same network complexity, therefore:

k(k−1)/2 ≤ d    (8)

The solution of (8) is k(k−1) ≤ 2d, namely: k ≤ (1 + √(1 + 8d)) / 2.
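
The bound above can be checked numerically with a small helper; this assumes our reading of condition (8) as k(k−1)/2 ≤ d, and the function name is ours.

```python
import math

def max_fragments(d):
    """Largest fragment count k satisfying k*(k-1)/2 <= d, i.e. the
    closed-form bound k <= (1 + sqrt(1 + 8*d)) / 2 from condition (8)."""
    return int((1 + math.sqrt(1 + 8 * d)) // 2)

# A node of degree 7 may be split into at most 4 fragments, since
# 4*3/2 = 6 <= 7 while 5*4/2 = 10 > 7.
k = max_fragments(7)
```

When the number of ccs exceeds this bound, the algorithm falls back to the DBSCAN-inspired aggregation described next.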

If the node being decomposed has a degree d < k(k−1)/2, or has more than (1 + √(1 + 8d))/2 ccs, then before falling back to the coarse cluster formation we apply the following heuristic, inspired by the DBSCAN algorithm [13]. We evaluate the intersection of nd.ccs with the ccs of each candidate (maxIntersectOrig procedure on line 15 of Algorithm 3) and group the node with the clustering candidate(s) yielding the maximal intersection if the latter contains at least half of the nd.ccs. In such a way, we try to prevent early coarsening and obtain more fine-grained and accurate results.

IV-C Complexity Analysis

The computational complexity of DAOC on sparse networks is near-linear in the number of links m: all links of each node (O(m) links in total) are processed at each iteration, and the number of iterations is typically much smaller than the number of nodes n. In the worst case, the number of iterations is equal to the number of nodes n and the number of mutual candidates is equal to the node degree d instead of 1. Thus, the theoretical worst-case complexity is O(n · m · d); it occurs only in a hypothetical dense symmetric network having equal MMG for all links (and, hence, requiring overlap decomposition) at each clustering iteration, and only if the number of clusters decreases by exactly one at each iteration. The memory complexity is O(m).

V Experimental Evaluation

V-a Evaluation Environment

Our evaluation was performed using an open-source parallel isolation benchmarking framework, Clubmark [26], on a Linux Ubuntu 16.04.3 LTS server with an Intel Xeon E5-2620 v4 @ 2.10 GHz CPU (16 physical cores) and 132 GB RAM. The execution termination constraints for each algorithm are as follows: 64 GB of RAM and at most 72 hours per clustering of each network. Each algorithm is executed on a single dedicated physical CPU core with up to 64 GB of guaranteed available physical memory.

Features \  Algs Daoc Scp Lvn Fcl Osl2 Gnx Psc Cgr Cgri Scd Rnd
Hierarchical + + +
Multi-scale + + + + +
Deterministic + + +
Overlapping clusters + + + + +
Weighted links + + + + + +
Parameter-free +! + * * * * * * +
Consensus/Ensemble + + + + +

Deterministic includes input-order invariance;

+! the feature is available, yet the ability to force custom parameters is provided;

* the feature is partially available, parameter tuning might be required for specific cases;

the feature is available in theory but is not supported by the original implementation of the algorithm.

TABLE II: Evaluating clustering algorithms.

We compare DAOC against almost a dozen state-of-the-art clustering algorithms listed in Table II (the original implementations of all algorithms except Louvain are included in Clubmark and are executed as precompiled or JIT-compiled applications or libraries) and described in the following papers: SCP [20], Lvn (Louvain [3]), Fcl (Fast Consensus on Louvain: FCoLouv [43]), Osl2 (OSLOM2 [21]), Gnx (GANXiS, also known as SLPA [47]), Psc (pSCAN [7]), Cgr[i] (CGGC[i]_RG [38]), Scd (SCD [40]) and Rnd (Randcommuns [26]). We have not evaluated a well-known CPM-based overlapping clustering algorithm, CFinder [39], because a) GANXiS outperforms CFinder in all aspects according to several accuracy metrics [47, 46] and b) we do evaluate SCP, a fast CPM-based algorithm. For a fair accuracy evaluation, we uniformly sample up to 10 levels from the clustering results (levels of the hierarchical / multi-level output, or clusterings produced by uniformly varying the algorithm parameters in their operational range) and take the best value.
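For illustration, the uniform sampling of up to 10 levels can be sketched as follows. This is a minimal sketch of the described procedure; the actual sampling in Clubmark may differ in details such as rounding:

```python
def sample_levels(num_levels, max_samples=10):
    """Indices of up to `max_samples` levels sampled uniformly
    from a hierarchy of `num_levels` levels (both ends included)."""
    if num_levels <= max_samples:
        return list(range(num_levels))
    step = (num_levels - 1) / (max_samples - 1)
    return sorted({round(i * step) for i in range(max_samples)})
```

The accuracy reported for an algorithm is then the best value over the sampled levels.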

V-B Stability Evaluation

We evaluate stability in terms of both robustness and determinism for the consensus (ensemble) and deterministic clustering algorithms listed in Table II. Determinism (non-stochasticity and input-order independence) is evaluated on the synthetic and real-world networks below, where we quantify the standard deviation of the clustering accuracy. To evaluate stability in terms of robustness, we quantify the deviation of the clustering accuracy in response to small perturbations of the input network. The clustering accuracy on each iteration is measured relative to the clustering yielded by the same algorithm at the previous perturbation iteration. For each clustering algorithm, the accuracy is evaluated only for the middle (scale or hierarchy) level, since it is crucial to take the same clustering scale to quantify structural changes in the forming clusters of evolving networks. Robust clustering algorithms are expected to have their accuracy evolve gradually (without any surges) relative to the previous perturbation iteration. In addition, the clustering algorithms sensitive enough to capture the structural changes are expected to have their accuracy monotonically decreasing, since the relative network reduction (perturbation) grows at each iteration: the same absolute number of deleted links represents an increasingly large fraction of the shrinking network.

We evaluate robustness and sensitivity (i.e., the ability to capture small structural changes) on synthetic networks with nodes forming overlapping clusters generated by the LFR framework [22]. We generate a synthetic network with ten thousand nodes having an average degree of 20 and a fixed mixing parameter. This network is shuffled (the links and nodes are reordered) 4 times to evaluate the input-order dependence of the algorithms. Small perturbations of the input data are performed by gradually reducing the number of links in the network in steps of 2% of the original network size (i.e., 10,000 × 20 × 0.02 = 4,000 links), starting from 1% and ending at 15%. The link removal is performed a) randomly, to retain the original distributions of the network links and their weights, but b) respecting the constraint that each node retains at least a single link. This constraint prevents the formation of disconnected regions. Our perturbation does not include any random modification of the link weights or the creation of new links, since that would affect the original distributions of the network links and their weights, causing surges in the clustering results.
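The perturbation protocol above can be sketched as follows. This is a minimal illustration of the described constraints (random removal while every node keeps at least one link), not the benchmark's actual implementation; the `links` edge-list representation is an assumption:

```python
import random

def perturb(links, frac, seed=0):
    """Randomly remove a fraction `frac` of the links while ensuring
    that every node keeps at least one link, so that no disconnected
    (isolated) nodes are created. `links` is a list of (u, v) pairs."""
    rng = random.Random(seed)          # seeded for reproducibility
    degree = {}
    for u, v in links:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    target = int(len(links) * frac)
    kept, removed = [], 0
    for u, v in rng.sample(links, len(links)):  # links in random order
        if removed < target and degree[u] > 1 and degree[v] > 1:
            degree[u] -= 1             # drop the link, both endpoints
            degree[v] -= 1             # still retain at least one link
            removed += 1
        else:
            kept.append((u, v))
    return kept
```

Since the removal skips links whose endpoints would be isolated, the actually removed fraction can be slightly below the target on very sparse networks.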

(a) F1h (average value and deviation) for subsequent perturbations (link removals). Stable algorithms are expected to have a gracefully decreasing F1h without any surges.
(b) F1h relative to the previous perturbation iteration. Stable and sensitive algorithms are highlighted with bolder line width and have positive F1h evolving without surges. Standard deviation is shown only for the consensus algorithms but visible only for FCoLouv and CGGCi-RG.
Fig. 2: Stability and sensitivity evaluation.

The evaluations of stability in terms of robustness (absence of surges in response to small perturbations of the input network) and sensitivity (ability to capture small structural changes) are shown in Fig. 2. Absolute accuracy values for the subsequent link reductions are shown in Fig. 2(a). The results demonstrate that, as expected, all deterministic clustering algorithms except DAOC (i.e., pSCAN and SCP) exhibit surges and hence are not robust. We also obtain some unexpected results. First, pSCAN, which is nominally “exact” (i.e., non-stochastic and input-order independent), actually shows significant deviations in accuracy. Second, the clusterings produced by OSLOM2 using default parameters and by FCoLouv using a number of consensus partitions are prone to surges. Hence, OSLOM2 and FCoLouv cannot be classified as robust according to the obtained results, in spite of being consensus clustering algorithms. Fig. 2(b) illustrates the sensitivity of the algorithms, showing the accuracy values relative to the previous perturbation iteration. Sensitive algorithms have monotonically decreasing results for the subsequent link reductions, which corresponds to positive values on this plot. The stable algorithms (CGGC-RG, CGGCi-RG and DAOC) are highlighted with a bolder line width in the figure. These results demonstrate that, while robust, CGGC-RG and CGGCi-RG are not always able to capture structural changes in the network, i.e., they are less sensitive than DAOC. Overall, the results show that only our algorithm, DAOC, is stable (deterministic, including input-order independence, and robust) and at the same time able to capture even small structural changes in the input network.

V-C Effectiveness and Efficiency Evaluation

Our performance evaluation was performed both a) on weighted undirected synthetic networks with overlapping ground-truth clusters produced by the LFR framework integrated into Clubmark [26] and b) on large real-world networks having overlapping and nested ground-truth communities [49]. The synthetic networks were generated with 1, 5, 20 and 50 thousand nodes and average node degrees of 5, 25 and 75. The maximal node degree is uniformly scaled from 70 on the smallest networks up to 800 on the largest ones. The remaining generation parameters are the defaults provided by Clubmark. The real-world networks contain from 334,863 nodes with 925,872 links (amazon) up to 3,997,962 nodes with 34,681,189 links (livejournal). The ground-truth communities of the real-world networks were pre-processed to exclude duplicated clusters (communities having exactly the same nodes). Each network is shuffled (reordered) 4 times and the average accuracy value along with its standard deviation is reported.

A number of accuracy measures exist for the evaluation of overlapping clusters. We are aware of only two families of accuracy measures applicable to large overlapping clusterings, i.e., having a near-linear computational complexity in the number of nodes: the F1 family [27] and generalized NMI (GNMI) [12, 27]. However, mutual information-based measures are biased towards large numbers of clusters, and GNMI does not have any bounded computational complexity in general. Therefore, we evaluate clustering accuracy with F1h [27], a modification of the popular average F1-score (F1a) [48, 40] providing indicative values in the range [0, 1]: the artificial clusters formed from all combinations of the input nodes yield F1a ≥ 0.5 while F1h → 0.
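As an illustration of the difference between the two measures, the following sketch computes per-cluster best-match F1 scores and derives F1a (arithmetic mean) and F1h (harmonic mean) from them. This follows our reading of [27]; the exact definitions there may differ in details such as weighting by cluster size:

```python
def f1_pairs(clusters, ground_truth):
    """Best-match F1 of each cluster against the ground-truth clusters
    (clusters are represented as sets of node ids)."""
    scores = []
    for c in clusters:
        best = 0.0
        for g in ground_truth:
            tp = len(c & g)            # true positives: shared nodes
            if tp:
                p, r = tp / len(c), tp / len(g)
                best = max(best, 2 * p * r / (p + r))
        scores.append(best)
    return scores

def f1a(cs, gs):
    """Average F1-score: arithmetic mean of the two directional means."""
    a = sum(f1_pairs(cs, gs)) / len(cs)
    b = sum(f1_pairs(gs, cs)) / len(gs)
    return (a + b) / 2

def f1h(cs, gs):
    """F1h: harmonic mean of the two directional means, which penalizes
    clusterings where only one direction matches well (e.g. the
    degenerate all-combinations clustering)."""
    a = sum(f1_pairs(cs, gs)) / len(cs)
    b = sum(f1_pairs(gs, cs)) / len(gs)
    return 2 * a * b / (a + b) if a + b else 0.0
```

The harmonic mean drives F1h towards zero whenever either directional score is close to zero, which is what makes it indicative for degenerate clusterings.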

First, we evaluate accuracy for all the deterministic algorithms listed in Table II on the synthetic networks, and then evaluate both accuracy and efficiency for all clustering algorithms on the real-world networks. Our algorithm, DAOC, shows the best accuracy among the deterministic clustering algorithms on the synthetic networks, outperforming the others on each network and being 25% more accurate on average, as shown in Fig. 3.

Fig. 3: F1h of the deterministic algorithms on the synthetic networks.

Moreover, DAOC also has the best accuracy on average among all evaluated algorithms on large real-world networks as shown in Fig. 4(a). Being parameter-free, our algorithm yields good accuracy on both synthetic networks and real-world networks, unlike some other algorithms having good performance on some datasets but low performance on others.

(a) F1h (average value and deviation).
(b) Execution time for a single algorithm run on a single dedicated CPU core, sec. The range for SCP shows the execution time for varying clique sizes k.
Fig. 4: Performance on the real-world networks.
Nets\Algs Daoc Scp* Lvn Fcl Osl2 Gnx Psc Cgr Cgri Scd Rnd
amazon 238 3,237 339 3,177 681 3,005 155 247 1,055 37 337
dblp 225 3,909 373 3,435 717 2,879 167 247 1,394 36 373
youtube 737 4,815 1,052 8,350 508 830 3,865 131 1,050
livejournal 5,038 10,939 4,496 4,899 11,037 761

– denotes that the algorithm was terminated for violating the execution constraints;

* the memory consumption and execution time for SCP are reported for a fixed clique size, since they grow exponentially with the clique size k on dense networks, though the accuracy was evaluated varying k.

TABLE III: Peak memory consumption (RSS) on the real-world networks, MB.

Besides being accurate, DAOC consumes the least amount of memory among the evaluated hierarchical algorithms (Louvain, OSLOM2) as shown in Table III. In particular, DAOC consumes 2x less memory than Louvain on the largest evaluated real-world network (livejournal) and 3x less memory than OSLOM2 on dblp, while producing much more fine-grained hierarchies of clusters with almost an order of magnitude more levels than the other algorithms. Moreover, among the evaluated overlapping clustering algorithms, only pSCAN and DAOC are able to cluster the livejournal network within the specified execution constraints; the missing bars in Fig. 4(b) correspond to the algorithms that we had to terminate.

Vi Conclusions

In this paper, we presented a new clustering algorithm, DAOC, which is stable and at the same time provides a unique combination of features, yielding a fine-grained hierarchy of overlapping clusters in a fully automatic manner. We experimentally compared our approach on a number of different datasets and showed that, while being parameter-free and efficient, it yields accurate and stable results on arbitrary input networks. DAOC builds on a new (micro) consensus technique, MMG, and a novel overlap decomposition approach, OD, which are both applicable on top of non-overlapping clustering algorithms and make it possible to produce overlapping and robust clusters. DAOC is released as an open-source clustering library implemented in C++ that includes various cluster analysis features not mentioned in this paper and that is integrated with several data mining applications (StaTIX [28] type inference, DAOR [29] graph embeddings). In future work, we plan to design an approximate version of MMG to obtain near-linear execution times on dense networks, and to parallelize DAOC to take advantage of modern hardware architectures and further expand the applicability of our method.

Footnotes

  1. https://github.com/eXascaleInfolab/daoc
  2. https://github.com/eXascaleInfolab/clubmark
  3. http://igraph.org/c/doc/igraph-Community.html
  4. https://snap.stanford.edu/data/#communities

References

  1. A. Barabasi (2002-04) Linked: the new science of networks. Cited by: §I.
  2. D. S. Bassett, M. A. Porter, N. F. Wymbs, S. T. Grafton, J. M. Carlson and P. J. Mucha (2013) Robust detection of dynamic community structure in networks. Chaos 23 (1), pp. 013142. Cited by: §III.
  3. V. D. Blondel, J. Guillaume, R. Lambiotte and E. Lefebvre (2008-10) Fast unfolding of communities in large networks. J Stat Mech. 2008 (10), pp. P10008. Cited by: §II, §III, §IV-A1, §IV, §V-A.
  4. M. A. Borkin (2014) Perception, Cognition, and Effectiveness of Visualizations with Applications in Science and Engineering. Ph.D. Thesis, Harvard University. Cited by: §I.
  5. M. E. Celebi and H. A. Kingravi (2015) Linear, deterministic, and order-invariant initialization methods for the k-means clustering algorithm. In Partitional Clustering Algorithms, pp. 79–98. Cited by: item a.
  6. P. C. Céspedes and J. F. Marcotorchino (2013) Comparing different modularization criteria using relational metric. In GSI, pp. 180–187. Cited by: §IV-A1.
  7. L. Chang, W. Li, X. Lin, L. Qin and W. Zhang (2016) PSCAN: fast and exact structural graph clustering. In ICDE, Vol. , pp. 253–264. Cited by: §II, §V-A.
  8. B. Chen and K. M. Ting (2018) Neighbourhood contrast: a better means to detect clusters than density. In PAKDD, Cited by: §II.
  9. M. Chen and B. K. Szymanski (2015) Fuzzy overlapping community quality metrics. SNAM 5 (1), pp. 40:1–40:14. Cited by: §III.
  10. R. N. Davé and R. Krishnapuram (1997) Robust clustering methods: a unified view. IEEE Transactions on Fuzzy systems 5 (2). Cited by: §I, §II.
  11. I. Derényi, G. Palla and T. Vicsek (2005-04) Clique percolation in random networks. Phys. Rev. Lett. 94, pp. 160202. Cited by: §II.
  12. A. V. Esquivel and M. Rosvall (2012) Comparing network covers using mutual information. CoRR abs/1202.0425. Cited by: §V-C.
  13. M. Ester, H. Kriegel, J. Sander and X. Xu (1996) A density-based algorithm for discovering clusters in large spatial databases with noise. In KDD, pp. 226–231. Cited by: §II, §IV-B2.
  14. A. L. N. Fred and A. K. Jain (2003) Robust data clustering. In CVPR, pp. 128–136. Cited by: §II.
  15. B. H. Good, Y. de Montjoye and A. Clauset (2010) Performance of modularity maximization in practical contexts. Phys. Rev. E 81 (4), pp. 046106. Cited by: §III.
  16. S. Gregory (2008) A fast algorithm to find overlapping communities in networks. In ECML PKDD, pp. 408–423. External Links: ISBN 978-3-540-87478-2 Cited by: §III.
  17. S. Gregory (2011) Fuzzy overlapping communities in networks. J Stat Mech. 2011 (02), pp. P02017. Cited by: §III.
  18. J. Hou and W. Liu (2017) Clustering based on dominant set and cluster expansion. In PAKDD, Cited by: item a.
  19. B. Karrer, E. Levina and M. E. J. Newman (2008-04) Robustness of community structure in networks. Phys. Rev. E 77, pp. 046119. Cited by: §I.
  20. J. M. Kumpula, M. Kivelä, K. Kaski and J. Saramäki (2008) Sequential algorithm for fast clique percolation. Phys. Rev. E 78 (2). Cited by: §II, §V-A.
  21. A. Lancichinetti, F. Radicchi, J. J. Ramasco and S. Fortunato (2011) Finding Statistically Significant Communities in Networks. PLoS ONE 6. External Links: 1012.2363 Cited by: §II, §V-A.
  22. A. Lancichinetti and S. Fortunato (2009-07) Benchmarks for testing community detection algorithms on directed and weighted graphs with overlapping communities. Phys. Rev. E 80. Cited by: §V-B.
  23. A. Lancichinetti and S. Fortunato (2012) Consensus clustering in complex networks. Sci. Rep. 2. Cited by: §II, §II.
  24. A. Lázár, D. Abel and T. Vicsek (2010) Modularity measure of networks with overlapping communities. EPL 90 (1), pp. 18001. Cited by: §III.
  25. J. Liu (2010) Fuzzy modularity and fuzzy community structure in networks. Eur. Phys. J. B 77 (4), pp. 547–557. Cited by: §III.
  26. A. Lutov, M. Khayati and P. Cudré-Mauroux (2018) Clubmark: a parallel isolation framework for benchmarking and profiling clustering algorithms on numa architectures. In ICDMW, pp. 1481–1486. Cited by: item a, §V-A, §V-A.
  27. A. Lutov, M. Khayati and P. Cudré-Mauroux (2019) Accuracy evaluation of overlapping and multi-resolution clustering algorithms on large datasets. In BigComp, pp. 1–8. Cited by: §V-C.
  28. A. Lutov, S. Roshankish, M. Khayati and P. Cudré-Mauroux (2018) StaTIX — statistical type inference on linked data. In BigData, Cited by: §VI.
  29. A. Lutov, D. Yang and P. Cudré-Mauroux (2019) Bridging the gap between community and node representations: graph embedding via community detection. In IEEE BigData, Cited by: §VI.
  30. M. E. J. Newman (2016) Equivalence between modularity optimization and maximum likelihood methods for community detection. Phys. Rev. E 94. Cited by: §IV-A1.
  31. D. Mandaglio, A. Amelio and A. Tagarelli (2018) Consensus community detection in multilayer networks using parameter-free graph pruning. In PAKDD, Cited by: §II.
  32. G. A. Miller (1956) The magical number seven, plus or minus two: some limits on our capacity for processing information. The Psychol. Rev. 63 (2). Cited by: §I.
  33. T. Nepusz, A. Petróczi, L. Négyessy and F. Bazsó (2008-01) Fuzzy communities and the concept of bridgeness in complex networks. Phys. Rev. E 77, pp. 016107. Cited by: §III.
  34. M. E. J. Newman and M. Girvan (2004) Finding and evaluating community structure in networks. Phys. Rev. E 69 (2), pp. 026113. Cited by: §III.
  35. M. E. J. Newman (2003) The structure and function of complex networks. SIAM Rev. 45 (2). Cited by: §III.
  36. M. E. J. Newman (2013) Spectral methods for network community detection and graph partitioning. Phys. Rev. E 88 (4), pp. 042822. Cited by: §IV-A1.
  37. V. Nicosia, G. Mangioni, V. Carchiolo and M. Malgeri (2009-03) Extending the definition of modularity to directed graphs with overlapping communities. J Stat Mech. 3, pp. 24. External Links: 0801.1647 Cited by: §III.
  38. M. Ovelgönne and A. Geyer-Schulz (2013) An ensemble learning strategy for graph clustering. Contemp. Math., Vol. 588, pp. 187–206. Cited by: §II, §V-A.
  39. G. Palla, I. Derényi, I. Farkas and T. Vicsek (2005) Uncovering the overlapping community structure of complex networks in nature and society. Nature 435, pp. 814–818. Cited by: §V-A.
  40. A. Prat-Pérez, D. Dominguez-Sal and J. Larriba-Pey (2014) High quality, scalable and parallel community detection for large real graphs. WWW ’14, pp. 225–236. External Links: ISBN 978-1-4503-2744-2 Cited by: §V-A, §V-C.
  41. H. Shen, X. Cheng and J. Guo (2009) Quantifying and identifying the overlapping community structure in networks. J Stat Mech. 2009 (07), pp. P07042. Cited by: §III.
  42. T. Su and J. G. Dy (2007) In search of deterministic methods for initializing k-means and gaussian mixture clustering. Intelligent Data Analysis 11 (4), pp. 319–338. Cited by: item a.
  43. A. Tandon, A. Albeshri, V. Thayananthan, W. Alhalabi and S. Fortunato (2019-04) Fast consensus clustering in complex networks. Phys. Rev. E 99, pp. 042301. Cited by: §II, §V-A.
  44. M. Tepper, P. Musé, A. Almansa and M. Mejail (2011) Automatically finding clusters in normalized cuts. Pattern Recognition 44 (7). Cited by: item a.
  45. S. Vega-Pons and J. Ruiz-Shulcloper (2011) A survey of clustering ensemble algorithms. Int. J. Pattern Recogn. 25 (03), pp. 337–372. Cited by: §II.
  46. J. Xie, S. Kelley and B. K. Szymanski (2013-08) Overlapping community detection in networks: the state-of-the-art and comparative study. ACM Comput. Surv. 45 (4), pp. 1–35. Cited by: item a.
  47. J. Xie, B. K. Szymanski and X. Liu (2011) SLPA: uncovering overlapping communities in social networks via a speaker-listener interaction dynamic process. In ICDMW, pp. 344–349. Cited by: item a, §V-A.
  48. J. Yang and J. Leskovec Overlapping community detection at scale: a nonnegative matrix factorization approach. In WSDM ’13, pp. 587–596. Cited by: §V-C.
  49. J. Yang and J. Leskovec (2015) Defining and evaluating network communities based on ground-truth. Knowl. Inf. Syst.. External Links: ISSN 0219-1377 Cited by: item b.