Convexified Modularity Maximization for Degree-corrected Stochastic Block Models

Convexified Modularity Maximization for Degree-corrected Stochastic Block Models

Yudong Chen Y. Chen is with the School of Operations Research and Information Engineering at Cornell University, Ithaca, NY. yudong.chen@cornell.edu.    Xiaodong Li X. Li is with the Statistics Department at the University of California, Davis, CA. xdgli@ucdavis.edu.    Jiaming Xu J. Xu is with Department of Statistics, The Wharton School, University of Pennsylvania, Philadelphia, PA. jiamingx@wharton.upenn.edu.
July 3, 2019
Abstract

The stochastic block model (SBM) is a popular framework for studying community detection in networks. This model is limited by the assumption that all nodes in the same community are statistically equivalent and have equal expected degrees. The degree-corrected stochastic block model (DCSBM) is a natural extension of SBM that allows for degree heterogeneity within communities. This paper proposes a convexified modularity maximization approach for estimating the hidden communities under DCSBM. Our approach is based on a convex programming relaxation of the classical (generalized) modularity maximization formulation, followed by a novel doubly-weighted -norm -median procedure. We establish non-asymptotic theoretical guarantees for both approximate clustering and perfect clustering. Our approximate clustering results are insensitive to the minimum degree, and hold even in sparse regime with bounded average degrees. In the special case of SBM, these theoretical results match the best-known performance guarantees of computationally feasible algorithms. Numerically, we provide an efficient implementation of our algorithm, which is applied to both synthetic and real-world networks. Experiment results show that our method enjoys competitive performance compared to the state of the art in the literature.

\newrefformat

algAlgorithm LABEL:#1 \newrefformateq(LABEL:#1) \newrefformatchapChapter LABEL:#1 \newrefformatsecSection LABEL:#1 \newrefformatalgoAlgorithm LABEL:#1 \newrefformatfigFig. LABEL:#1 \newrefformattabTable LABEL:#1 \newrefformatrmkRemark LABEL:#1 \newrefformatclmClaim LABEL:#1 \newrefformatdefDefinition LABEL:#1 \newrefformatcorCorollary LABEL:#1 \newrefformatlmmLemma LABEL:#1 \newrefformatpropProposition LABEL:#1 \newrefformatappAppendix LABEL:#1 \newrefformathypHypothesis LABEL:#1 \newrefformatthmTheorem LABEL:#1

1 Introduction

Detecting communities/clusters in networks and graphs is an important subroutine in many applications across computer, social and natural sciences and engineering. A standard framework for studying community detection in a statistical setting is the stochastic block model (SBM) proposed in Holland et al. [1983]. Also known as the planted partition model in the computer science literature [Condon and Karp, 2001], SBM is a random graph model for generating networks from a set of underlying clusters. The statistical task is to accurately recover the underlying true clusters given a single realization of the random graph.

The versatility and analytic tractability of SBM have made it arguably the most popular model for studying community detections. It however falls short of abstracting a key aspect of real-world networks. In particular, an unrealistic assumption of SBM is that within each community, the degree distributions of each node are the same. In empirical network data sets, however, the degree distributions are often highly inhomogeneous across nodes, sometimes exhibiting a heavy tail behavior with some nodes having very high degrees (so-called hubs). At the same time, sparsely connected nodes with small degrees are also common in real networks. To overcome this shortcoming of the SBM, the degree-corrected stochastic block model (DCSBM) was introduced in the literature to allow for degree heterogeneity within communities, thereby providing a more flexible and accurate model of real-world networks [Dasgupta et al., 2004; Karrer and Newman, 2011].

A number of community detection methods have been proposed based on DCSBM, such as model-based methods and spectral methods. Model-based methods include profile likelihood maximization and modularity maximization [Newman, 2006; Karrer and Newman, 2011]. Although these methods enjoy certain statistical guarantees [Zhao et al., 2012], they often involve optimization over all possible partitions, which is computationally intractable. Recent work in Amini et al. [2013]; Le et al. [2015+] discusses efficient solvers, but the theoretical guarantees are only established under restricted settings such as those with two communities. Spectral methods, which estimate the communities based on the eigenvectors of the graph adjacency matrix and its variants, are computationally fast. Statistical guarantees are derived for spectral methods under certain settings (see, e.g., Dasgupta et al. [2004]; Coja-Oghlan and Lanka [2009]; Chaudhuri et al. [2012]; Qin and Rohe [2013]; Lei and Rinaldo [2015]; Jin [2015]; Gulikers et al. [2015]), but numerical validation on synthetic and real data has not been as thorough. One notable exception is the SCORE method proposed in Jin [2015], which achieved one of the best known clustering performance on the political blogs dataset from Adamic and Glance [2005]. Spectral methods are also known to suffer from inconsistency in sparse graphs [Krzakala et al., 2013] as well as sensitivity to outliers [Cai and Li, 2015]. We discuss other related work in details in \prettyrefsec:related.

In this paper, we seek for a clustering algorithm that is computationally feasible, has strong statistical performance guarantees under DCSBM, and provides competitive empirical performance. Our approach makes use of the robustness and computational power of convex optimization. Under the standard SBM, convex optimization has been proven to be statistically efficient under a broad range of model parameters, including the size and number of communities as well as the sparsity of the network; see e.g. Chen et al. [2012]; Chen and Xu [2014]; Guédon and Vershynin [2015]; Ames and Vavasis [2014]; Oymak and Hassibi [2011]. Moreover, a significant advantage of convex methods is their robustness against arbitrary outlier nodes, as is established in the theoretical framework in Cai and Li [2015]. There, it is also observed that their convex optimization approach leads to state-of-the-art misclassification rates in the political blogs dataset, in which the node degrees are highly heterogeneous. These observations motivate us to study whether strong theoretical guarantees under DCSBM can be established for convex optimization-based methods.

Building on the work of Chen et al. [2012] and Cai and Li [2015], we introduce in \prettyrefsec:methodology a new community detection approach called Convexified Modularity Maximization (CMM). CMM is based on convexification of the elegant modularity maximization formulation, followed by a novel and computationally tractable weighted -norm -median clustering procedure. As we show in \prettyrefsec:theory and \prettyrefsec:simulation, our approach has strong theoretical guarantees, applicable even in the sparse graph regime with bounded average degree, as well as state-of-the-art empirical performance. In both aspects our approach is comparable to or improves upon the best-known results in the literature.

2 Problem setup and algorithms

In this section, we set up the community detection problem under DCSBM, and describe our algorithms based on convexified modularity maximization and weighted -median clustering.

Throughout this paper, we use lower-case and upper-case bold letters such as and to represent vectors and matrices, respectively, with and denoting their elements. We use to denote the -th row of the matrix . If all coordinates of a vector are nonnegative, we write . The notation , as well as and for matrices, are defined similarly. For a symmetric matrix , we write if is positive definite, and if it is positive semidefinite. For any sequences and , we write if there is an absolute constant such that , and we define similarly.

2.1 The degree-corrected stochastic block model

In DCSBM a graph is generated randomly as follows. A total of nodes, which we identify with the set , are partitioned into fixed but unknown clusters . Each pair of distinct nodes and are connected by an (undirected) edge with probability , independently of all others. Here the vector is referred to as the degree heterogeneity parameters of the nodes, and the symmetric matrix is called the connectivity matrix of the clusters. Without loss of generality, we assume that (one can multiply by a scalar and divide by without changing the distribution of the graph). Note that if for all nodes , DCSBM reduces to the classical SBM. Given a single realization of the resulting random graph , the statistical goal is to estimate the true clusters .

Before describing our algorithms, let us first introduce some useful notation. Denote by the adjacency matrix associated with the graph , with if and only if nodes and are connected. For each candidate partition of nodes into clusters, we associate it with a partition matrix , such that if and only if nodes and are assigned to the same cluster, with the convention that . Let be the set of all such partition matrices, and the true partition matrix associated with the ground-truth clusters . The notion of partition matrices plays a crucial role in the subsequent discussion.

2.2 Generalized modularity maximization

Our clustering algorithm is based on Newman and Girvan’s classical notion of modularity (see, e.g., Newman [2006]). Given the graph adjacency matrix of nodes, the modularity of a partition represented by the partition matrix , is defined as

(2.1)

where is the degree of node , and is the total number of edges in the graph. The modularity maximization approach to community detection is based on finding a partition that optimizes :

(2.2)

This standard form of modularity maximization is known to suffer from a “resolution limit” and cannot detect small clusters [Fortunato and Barthelemy, 2007]. To address this issue, several authors have proposed to replace the normalization factor by a tuning parameter  [Reichardt and Bornholdt, 2006; Lancichinetti and Fortunato, 2011], giving rise to the following generalized formulation of modularity maximization:

(2.3)

While modularity maximization enjoys several desirable statistical properties under SBM and DCSBM [Zhao et al., 2012; Amini et al., 2013], the associated optimization problems \prettyrefeq:modularity1 and \prettyrefeq:modularity2 are not computationally feasible due to the combinatorial constraint, which limits the practical applications of these formulations. In practice, modularity maximization is often used as a guidance for designing heuristic algorithms.

Here we take a more principled approach to computational feasibility while maintaining provable statistical guarantees: we develop a tractable convex surrogate for the above combinatorial optimization problems, whose solution is then refined by a novel weighted -median algorithm.

2.3 Convex relaxation

Introducing the degree vector , we can rewrite the generalized modularity maximization problem \prettyrefeq:modularity2 in matrix form as

(2.4)
subject to

where denotes the trace inner product between matrices. The objective function is linear in matrix variable , so it suffices to convexify the combinatorial constraint .

Recall that each matrix in corresponds to a unique partition of nodes into  clusters. There is another algebraic representation of such a partition via a membership matrix , where if and only if node belongs to the cluster . These two representations are related by the identity

(2.5)

which implies that . The membership matrix of a partition is only unique up to permutation of the cluster labels , so each partition matrix corresponds to multiple membership matrices . We use to denote the set of all possible membership matrices of -partitions.

Besides being positive semidefinite, a partition matrix also satisfies the linear constraints and for all . Using these properties of partition matrices, we obtain the following convexification of the modularity optimization problem \prettyrefeq:moduarity3:

(2.6)
subject to

Here is the matrix with all entries equal to . Implementation of the formulation \prettyrefeq:cvx requires choosing an appropriate tuning parameter . We will discuss the theoretical range for for consistent clustering in \prettyrefsec:theory, and empirical choice of in \prettyrefsec:simulation. As our convexification is based on the generalized version \prettyrefeq:modularity2 of modularity maximization, it is capable of detecting small clusters, even when the number of clusters grows with , as is shown later.

2.4 Explicit clustering via weighted -median

Ideally, the optimal solution to the convex relaxation \prettyrefeq:cvx is a valid partition matrix in  and recovers the true partition perfectly — our theoretical results in Section 3 characterize when this is the case. In general, the matrix will not lie in , but we expect it to be close to . To extract an explicit clustering from , we introduce a novel and tractable weighted -median algorithm.

Recall that by definition, the -th and -th rows of the true partition matrix are identical if the corresponding nodes and belong to the same community, and otherwise orthogonal to each other. If is close to , intuitively one can extract a good partition by clustering the row vectors of as points in the Euclidean space . While there exist numerous algorithms (e.g., -means) for such a task, our analysis identifies a particularly viable choice — a -median procedure appropriately weighted by the node degrees — that is efficient both theoretically and empirically.

Specifically, our weighted -median procedure consists of two steps. First, we multiply the columns of by the corresponding degrees to obtain the matrix , where , which is the diagonal matrix formed by the entries of . Clustering is performed on the row vectors of instead of . Note that if we consider the -th row of as a vector of features for node , then the rows of can be thought of as vectors of weighted features.

In the second step, we implement a weighted -median clustering on the row vectors of . Denoting by the -th row of , we search for a partition of and cluster centers that minimize the sum of the weighted distances in norm

Additionally, we require that the center vectors are chosen from the row vectors of (these centers are sometimes called medoids).

Representing the partition by a membership matrix and the centers as the rows of a matrix , we may write the above two-step procedure compactly as

(2.7)

where denotes the collection of row vectors of a matrix , and denotes the sum of the absolute values of all entries of .

We emphasize that the formulation \prettyrefeq:k-median differs from standard clustering algorithms (such as -means) in a number of ways. The objective function is the sum of distances rather than that of squared distances (hence -median), and the distances are in instead of norms. Moreover, our formulation has two levels of weighting: each column of is weighted to form , and the distance of each row to its cluster center is further weighted by . This doubly-weighted -norm -median formulation is crucial in obtaining strong and robust statistical bounds, and is significantly different from previous approaches, such as those in Lei and Rinaldo [2015]; Gulikers et al. [2015] (which only use the second weighting, and the weights are inversely proportional to ). Our double weighting scheme is motivated by the observation that nodes with larger degrees tend to be clustered more accurately — in particular our analysis of the convex relaxation \prettyrefeq:moduarity3 naturally leads to a doubly weighted error bound on its solution . On the one hand, for each given , is expected to be closer to if the degree of node is larger, so we multiply by for every  to get the weighted feature vector. On the other hand, the -th row of is closer to the -the row of if the degree of node is larger, hence we minimize the distances weighted by .

With the constraint , the optimization problem \prettyrefeq:k-median is precisely the weighted -norm -median (also known as -medoid) problem considered in Charikar et al. [1999]. Computing the exact optimizer to \prettyrefeq:k-median, denoted by , is NP-hard. Nevertheless, Charikar et al. [1999] provides a polynomial-time approximation algorithm, which outputs a solution feasible to \prettyrefeq:k-median and provably satisfying

(2.8)

As the solution to the convex relaxation \prettyrefeq:cvx and the approximate solution to the -median problem \prettyrefeq:k-median can both be computed in polynomial-time, our algorithm is computationally tractable. In the next section, we turn to the statistical aspect and show that the clustering induced by and is close to the true underlying clusters, under some mild and interpretable conditions of DCSBM.

3 Theoretical results

In this section, we provide theoretical results characterizing the statistical properties of our algorithm. We show that under mild conditions of DCSBM, the difference between the convex relaxation solution and the true partition matrix , and the difference between the approximate -median clustering and the true clustering , are well bounded. When additional conditions hold, we further show that perfectly recovers the true clusters. Our results are non-asymptotic in nature, valid for any scaling of , , and etc.

3.1 Density gap conditions

In the literature of community detection by convex optimization under standard SBM, it is often assumed that the minimum within-cluster edge density is greater than the maximum cross-cluster edge density, i.e.,

(3.1)

See for example Chen et al. [2012]; Oymak and Hassibi [2011]; Ames and Vavasis [2014]; Cai and Li [2015]; Guédon and Vershynin [2015]. This requirement \prettyrefeq:DGC can be directly extended to the DCSBM setting, leading to the condition

(3.2)

Under DCSBM, however, this condition would often be overly restrictive, particularly when the degree parameters are imbalanced with some of them being very small. In particular, this condition is highly sensitive to the minimum value which is unnecessary since the community memberships of nodes with larger may still be recoverable.

Here we instead consider a version of the density gap condition that is much milder and more appropriate for DCSBM. For each cluster index , define the quantities

(3.3)

Simple calculation gives

Therefore, the quantity controls the average degree of the nodes in the -th cluster. With this notation, we impose the condition that

(3.4)

We refer to the condition \prettyrefeq:dcgap as the degree-corrected density gap condition. This condition can be viewed as the “average” version of \prettyrefeq:SDGC, as it depends on the aggregate quantity  associated with each cluster rather than the ’s of individual nodes in the cluster — in particular, the condition \prettyrefeq:dcgap is robust against small . This condition plays a key role throughout our theoretical analysis, for both approximate and exact cluster recovery under DCSBM.

To gain intuition on the new degree-corrected density gap condition \prettyrefeq:dcgap, consider the following sub-class of DCSBM with symmetric/balanced clusters.

Definition 1.

We say that a DCSBM obeys a -model, if for all , for all , and .

In a -model, the true clusters are balanced in terms of the connectivity matrix and the sum of the degree heterogeneity parameters (rather than the cluster size). Under this model, straightforward calculation gives for all . The degree-corrected density gap condition \prettyrefeq:dcgap then reduces to , i.e., the classical density gap condition \prettyrefeq:DGC.

3.2 Theory of approximate clustering

We now study when the solutions to our convex relaxation \prettyrefeq:cvx and weighted -median algorithms \prettyrefeq:k-median approximately recover the underlying true clusters. Under DCSBM, nodes with different ’s have varying degrees, and therefore contribute differently to the overall graph and in turn to the clustering quality. Such heterogeneity needs to be taken into account in order to get tight bounds on clustering errors. The following version of norm, corrected by the degree heterogeneity parameters, is the natural notion of an error metric:

Definition 2.

For each matrix , its weighted element-wise norm is defined as

Also recall our definitions of and in equation \prettyrefeq:GandH. Furthermore, for each and , define the quantity , which corresponds approximately to the expected degree of node and satisfies .

With the notation above, our first theorem shows that the convex relaxation solution  is close to the true partition matrix  in terms of the weighted norm.

Theorem 1.

Under DCSBM, assume that the degree-corrected density gap condition \prettyrefeq:dcgap holds. Moreover, suppose that the tuning parameter in the convex relaxation \prettyrefeq:cvx satisfies

(3.5)

for some number . Then with probability at least , the solution  to \prettyrefeq:cvx satisfies the bound

(3.6)

where is an absolute constant.

We prove this claim in \prettyrefsec:proof_approx. The bound \prettyrefeq:pillar1 holds with probability close to one. Notably, it is insensitive to as should be expected, because community memberships of nodes with relatively large are still recoverable. In contrast, the error bounds of several existing methods, such as that of SCORE method in [Jin, 2015, eq. (2.15), (2.16)], depend on crucially.

Under the -model, recall that and density gap condition \prettyrefeq:dcgap becomes . Moreover, the constraint \prettyrefeq:RDGC for and becomes

(3.7)

Note that the first inequality above is the same as the standard density gap condition imposed in, for example, Chen et al. [2012]; Chen and Xu [2014]; Cai and Li [2015]. Furthermore, the vector satisfies . Substituting these expressions into the bound \prettyrefeq:pillar1, we obtain the following corollary for the symmetric DCSBM setting.

Corollary 1.

Under the -model of DCSBM, if the condition \prettyrefeq:RDGC2 holds for the density gap and tuning parameter, then with probability at least , the solution  to the convex relaxation \prettyrefeq:cvx satisfies the bound

(3.8)

Note that if for an absolute constant , then the bound \prettyrefeq:pillar2 takes the simpler form . If for all nodes , the -model reduces to the standard SBM with equal community size. If we assume additionally, and note that and let , then the error bound \prettyrefeq:pillar2 becomes

This bound matches the error bounds proved in [Guédon and Vershynin, 2015, Theorem 1.3].

The output of the convex relaxation need not be a partition matrix corresponding to a clustering; a consequence is that the theoretical results in Guédon and Vershynin [2015] do not provide an explicit guarantee on clustering errors (except for the special case of ). We give such a bound below, based on the explicit clustering extracted from using the weighted -norm -median algorithm \prettyrefeq:k-median. Recall that is the membership matrix in the approximate -median solution given in \prettyrefeq:approximation, and let be the membership matrix corresponding to the true clusters. A membership matrix is unique only up to permutation of its columns (i.e., relabeling the clusters), so counting the misclassified nodes in requires an appropriate minimization over such permutations. The following definition is useful to this end. For a matrix , let denote its -th row vector.

Definition 3.

Let denote the set of all permutation matrices. The set of misclassified nodes with respect to a permutation matrix is defined as

With this definition, we have the following theorem that quantifies the misclassification rate of approximate -median solution .

Theorem 2.

Under the -model, assume that the parameters and satisfy \prettyrefeq:RDGC2. Then with probability at least , the approximate -median solution satisfies the bound

(3.9)

for some absolute constant .

We prove this claim in \prettyrefsec:proof_kmedian. Extension to the general DCSBM setting is straightforward.

If we let be a minimizer of the LHS of \prettyrefeq:Sbound and , then the quantity is the number of misclassified nodes weighted by their degree heterogeneity parameters . \prettyrefthm:kmedian controls this weighted quantity, which is natural as nodes with smaller are harder to cluster and thus less controlled in \prettyrefeq:Sbound. Notably, the bound given in \prettyrefeq:Sbound is applicable even in the sparse graph regime with bounded average degrees, i.e., . For example, suppose that and for two fixed constants , and ; if is sufficiently large, then, with the choice , the right hand side of \prettyrefeq:Sbound can be an arbitrarily small constant times . In comparison, conventional spectral methods are known to be inconsistent in this sparse regime [Krzakala et al., 2013]. While this difficulty is alleviated under SBM by the use of regularization or non-backtracking matrices (e.g., Le and Vershynin [2015]; Bordenave et al. [2015]), rigorous justification and numerical validation under DCSBM have not been well explored.

It is sometimes desirable to have a direct (unweighted) bound on the number misclassified nodes. Suppose that is a permutation matrix that minimizes , and let . A bound on the unweighted misclassification error can be easily derived from the general weighted bound \prettyrefeq:Sbound. For example, combining \prettyrefeq:Sbound with the AM-HM inequality , we obtain that

(3.10)

Another bound on , which is applicable even when some ’s are zero, can be derived as follows: we pick any number and use the inequality \prettyrefeq:Sbound to get

(3.11)

This simple bound is already quite useful, for example in standard SBM with , and  equal-sized clusters. In this case, setting in \prettyrefeq:Sbound_tau yields that the number of misclassified nodes satisfies When , this bound is consistent with those in Guédon and Vershynin [2015], but our result is more general as it applies to more clusters .

3.3 Theory of perfect clustering

In this section, we show that under an additional condition on the minimum degree heterogeneity parameter , the solution to the convex relaxation perfectly recovers the true partition matrix . In this case the true clusters can be extracted easily from without using the -median procedure.

For the purpose of studying perfect clustering, we consider a setting of DCSBM with for all , and for all . Under this setup, the degree-corrected density gap condition \prettyrefeq:RDGC becomes

(3.12)

Recalling the definition of in \prettyrefeq:GandH, we further define The following theorem characterizes when perfect clustering is guaranteed.

Theorem 3.

Suppose that the degree-corrected density gap condition \prettyrefeq:sep_perf is satisfied for some number and tuning parameter , and that

(3.13)

for some sufficiently large absolute constant . Then with probability at least , the solution  to the convex relaxation \prettyrefeq:cvx is unique and equals .

The condition \prettyrefeq:assump depends on the minimum values and . Such dependence is necessary for perfect clustering, as clusters and nodes with overly small and will have too few edges and are not recoverable. In comparison, the approximate recovery results in \prettyrefthm:approx are not sensitive to either or , as should be expected. Valid for the more general DCSBM, \prettyrefthm:exact significantly generalizes the existing theory for standard SBM on perfect clustering by SDP in the literature (see, e.g., Chen et al. [2012]; Chen and Xu [2014]; Cai and Li [2015]). Taking , \prettyrefthm:exact guarantees that the probability of perfect clustering converges to one, thereby implying the convex relaxation approach is strongly consistent in the sense of Zhao et al. [2012].

In the special case of standard SBM with , the density gap lower bound \prettyrefeq:assump simplifies to

where is the minimum community size and is the size of community . This density gap lower bound is consistent with best existing results given in Chen et al. [2012]; Chen and Xu [2014]; Cai and Li [2015] — as we discussed earlier, our density condition in \prettyrefeq:RDGC2 under the model (which encompasses SBM with equal-sized clusters) is the same as in these previous papers, with the minor difference that in these papers the term in the convex relaxation \prettyrefeq:cvx is replaced by a tuning parameter  assumed to satisfy the condition .

4 Numerical results

In this section, we provide numerical results on both synthetic and real datasets, which corroborate our theoretical findings. Our convexified modularity maximization approach is found to empirically outperform state-of-the-art methods in several settings.

The convexified modularity maximization problem \prettyrefeq:cvx is a semidefinite program (SDP), and can be solved efficiently by a range of general and specialized algorithms. Here we use the alternating direction method of multipliers (ADMM) suggested in Cai and Li [2015]. To specify the ADMM solver, we need some notations as follows. For any two matrices and , let be the matrix whose -th entry is given by ; the matrix is similarly defined. For a symmetric matrix with an eigenvalue decomposition , let , and let be the matrix obtained by setting all the diagonal entries of to . Recall that denotes the all-one matrix. The ADMM algorithm for solving \prettyrefeq:cvx with the dual update step size equal to , is given as \prettyrefalg:ADMMSDP.

1:  Input: and .
2:  Initialization: , and .
3:  while
end while
4:  Output the final solution .
Algorithm 1 ADMM algorithm for solving the SDP \prettyrefeq:cvx

Our choice of the tuning parameter is motivated by the following simple observation. By standard concentration inequalities, the number is close to its expectation . Under the -model, we have and for all . In this case and with the above choice of , the density gap assumption \prettyrefeq:sep_perf simplifies to , which holds with .

After obtaining the solution to the convex relaxation, we extract an explicit clustering using the weighted -median procedure described in \prettyrefeq:k-median with , where the number of major clusters is assumed known. Our complete community detection algorithm, Convexified Modularity Maximization (CMM), is summarized in \prettyrefalg:CD. In our experiments, the weighted -median problem is solved by an iterative greedy procedure that optimizes alternatively over the variables and in \prettyrefeq:k-median, with random initializations.

1:  Input: , , and .
2:  Solve the convex relaxation \prettyrefeq:cvx for using \prettyrefalg:ADMMSDP.
3:  Solve the weighted -median problem \prettyrefeq:k-median with and , and output the resulting -partition of .
Algorithm 2 Convexified Modularity Maximization (CMM)

4.1 Synthetic data experiments

We provide experiment results on synthetic data generated from DCSBM. For each node , the degree heterogeneity parameter is sampled independently from a Pareto distribution with the density function , where and are called the shape and scale parameters, respectively. We consider different values of the shape parameter, and choose the scale parameter accordingly so that the expectation of each is fixed at . Note that the variability of the ’s decreases with the shape parameter . Given the degree heterogeneity parameters and two numbers , a graph is generated from DCSBM, with the edge probability between nodes and being and .

We applied our CMM approach in \prettyrefalg:CD to the resulting graph, and recorded the misclassification rate (cf. the discussion after \prettyrefthm:kmedian). For comparison, we also applied the SCORE algorithm in Jin [2015] and the OCCAM algorithm in Zhang et al. [2014], which are reported to have state-of-the-art empirical performance on DCSBM in the existing literature. The SCORE algorithm performs -means on the top- to top- eigenvectors of the adjacency matrix normalized element-wise by the top- eigenvector. OCCAM is a type of regularized spectral -median algorithm; it can produce non-overlapping clusters and its regularization parameter is given explicitly in Zhang et al. [2014]. For all -means/medians procedures used in the experiments, we took and used random initializations.

(a) (b)
(c) (d)
Figure 1: Clustering performance on synthetic datasets versus the variability of . Solid lines: our CMM algorithm. Dash lines: the SCORE algorithm. Individual marker: Regularized spectral algorithm. Panel (a): nodes, clusters of size . Panel (b): nodes, clusters of size . Panel (c): nodes, clusters of size . Panel (d): nodes, clusters of size . Each point represents the average of independent runs. In all experiments we set .
\prettyref

fig:synthetic shows the misclassification rates of CMM (solid lines), SCORE (dash lines) and OCCAM (individual markers) for various settings of , , , cluster size and the shape parameter for . We see that the misclassification rate of CMM grows as the degree parameters becomes more heterogeneous (smaller values of the shape parameter), and as the graph becomes sparser, which is consistent with the prediction of \prettyrefthm:kmedian. Moreover, our approach has consistently lower misclassification rates than SCORE and OCCAM, with SCORE and OCCAM exhibiting similar performance.

4.2 Political blog network dataset

We next test the empirical performance of CMM (\prettyrefalg:CD), SCORE and OCCAM on the US political blog network dataset from Adamic and Glance [2005]. This dataset consists of 19090 hyperlinks (directed edges) between 1490 political blogs collected in the year 2005. The political leaning (liberal versus conservative) of each blog has been labeled manually based on blog directories, incoming and outgoing links and posts around the time of the 2004 presidential election. We treat these labels as the true memberships of communities. We ignore the edge direction, and focus on the largest connected component with nodes and edges, represented by the adjacency matrix . This graph has high degree variation: the maximum degree is while the mean degree is around . CMM, SCORE and OCCAM misclassify , and nodes, respectively, out of nodes on this dataset. The SCORE method has the best known error rate on the political blogs dataset in the literature [Jin, 2015], and we see that our CMM approach is comparable to the state of the art. Panel (a) in \prettyreffig:sol shows the adjacency matrix with rows and columns sorted according to the true community labels. The output of ADMM \prettyrefalg:ADMMSDP for solving the convex relaxation \prettyrefeq:cvx is shown in \prettyreffig:sol (b). The partition matrix corresponding to the output of the weighted -median step in \prettyrefalg:CD is shown in \prettyreffig:sol (c).

(a) (b) (c)
Figure 2: Panel (a): The adjacency matrix of the largest connected component in the political blog network with nodes. The rows and columns are sorted according to the true community labels. The first rows/columns correspond to the liberal community, and the next is the conservative community. Panel (b): The output matrix of the convex relaxation \prettyrefeq:cvx solved by ADMM (\prettyrefalg:ADMMSDP), with the entries truncated to the interval . Panel (c): The partition matrix corresponding to the output of CMM (\prettyrefalg:CD), obtained from the weighted -median algorithm. Matrix entry values are shown in gray scale with black corresponding to and white to .

4.3 Facebook dataset

In this section, we consider the Facebook network dataset from Traud et al. [2011, 2012], and compare the empirical performance of our CMM approach with the SCORE and OCCAM methods. The Facebook network dataset consists of 100 US universities and all the “friendship” links between the users within each university, recorded on one particular day in September 2005. The dataset also contains several node attributes such as the gender, dorm, graduation year and academic major of each user. Here we report results on the friendship networks of two universities: Simmons College and Caltech.

4.3.1 Simmons College network

The Simmons College network contains 1518 nodes and 32988 undirected edges. The subgraph induced by nodes with graduation year between 2006 and 2009 has a largest connected component with 1137 nodes and 24257 undirected edges, which we shall focus on. It is observed in Traud et al. [2011, 2012] that the community structure of the Simmons College network exhibits a strong correlation with the graduation year — students in the same year are more likely to be friends. Panel (a) of \prettyreffig:simmons shows this largest component with nodes colored according to their graduation year.

We applied the CMM (\prettyrefalg:CD), SCORE and OCCAM methods to partition the largest component into clusters. In Panels (b)–(d) of \prettyreffig:simmons the clustering results of these three methods are shown as the node colors. In \prettyreffig:simmons_confmat we also provide the confusion matrices of the clustering results against the graduation years; the -th entry of a confusion matrix represents the number of nodes that are from graduation year  but assigned to cluster  by the algorithm. We see that our CMM approach produced a partition more correlated with the actual graduation years. In fact, if we treat the graduation years as the ground truth cluster labels, then CMM misclassified of the nodes, whereas SCORE and OCCAM have higher misclassification rates of and , respectively. A closer investigation of \prettyreffig:simmons and \prettyreffig:simmons_confmat shows that CMM was better in distinguishing between the nodes of year 2006 and 2007.

(a) (b)
(c) (d)
Figure 3: The largest component of the Simmons College network. Panel (a): Each node is colored according to its graduation year, with 2006 in green, 2007 in light blue, 2008 in purple and 2009 in red. Panels (b)–(d): Each node is colored according to the clustering result of (b) CMM, (c) SCORE and (d) OCCAM. (These plots are generated using the Gephi package [Bastian et al., 2009] with the ForceAtlas2 layout algorithm [Jacomy et al., 2014].)
Figure 4: The confusion matrices of CMM, SCORE and OCCAM applied to the largest component of the Simmons College network.

4.3.2 Caltech network

In this section, we provide experiment results on the Caltech network. This network has 769 nodes and 16656 undirected edges. We consider the subgraph induced by nodes with known dorm attributes, and focus on its largest connected component, which consists of 590 nodes and 12822 edges. The community structure is highly correlated with which of the dorms a user is from, as observed in Traud et al. [2011, 2012].

We applied CMM, SCORE and OCCAM to partition this largest component into clusters. With the dorms as the ground truth cluster labels, CMM misclassified of the nodes, whereas SCORE and OCCAM had higher misclassification rates of and , respectively. The confusion matrices of these methods are shown in \prettyreffig:caltech_confmat. We see that dorm 3 was difficult to recover and largely missed by all three methods, but our CMM algorithm better identified the other dorms.

Figure 5: The confusion matrices of CMM, SCORE and OCCAM applied to the largest component of the Caltech network against the dorm assignments of the users.

5 Related work

In this section, we discuss prior results that are related to our work. Existing community detection methods for DCSBM include model-based methods and spectral methods. In model-based methods such as profile likelihood and modularity maximization [Newman, 2006], one fits the model parameters to the observed network based on the likelihood functions or modularity functions determined by the statistical structure of DCSBM. In Karrer and Newman [2011], the maximum likelihood estimator is used to infer the unknown model parameters and . These estimates are then plugged into the log likelihood function, which leads to a quality function for community partitions. An estimate of the community structure is obtained by maximizing this quality function using a greedy heuristic algorithm. No provable theoretical guarantee is known for this greedy algorithm, and one usually needs to run the algorithm with many random initial points to achieve good performance. The work in Zhao et al. [2012] discusses profile likelihood methods for DCSBM and the closely related modularity maximization approach. Under the assumption that the number of clusters is fixed, strong consistency is proved when the average degree is , and weak consistency when it is . However, directly solving the maximization problems is computationally infeasible, as it involves searching over all possible partitions. Numerically, these optimization problems are solved heuristically using Tabu search and spectral decomposition without theoretical guarantees. The algorithm proposed in Amini et al. [2013] involves finding an initial clustering using spectral methods, then iteratively updating the labels via maximizing conditional pseudo likelihood, which is done using the EM algorithm in each step of iteration. After simplifying the iterations into one E-step, they establish guaranteed consistency when there are only two clusters. The work in Le et al. [2015+] proposes to approximate the profile likelihood functions, modularity functions or other criterions using surrogates defined in a -dimensional subspace constructed by spectral dimension reduction. Thanks to the convexity of the surrogate functions, the search complexity is polynomial. The method and theory are however only applicable when there are two communities.

Spectral methods for community detection have attracted interest from diverse communities including computer science, applied math, statistics, and machine learning; see e.g. Rohe et al. [2011] and the references therein for results of spectral clustering under SBM. The seminal work of Dasgupta et al. [2004] on DCSBM (proposed under the name of Extended Planted Partition model) considered a spectral method similar to that in McSherry [2001]. One major drawback is that the knowledge of is required in both the theory and algorithm. In the algorithm proposed in Coja-Oghlan and Lanka [2009], the adjacency matrix is first normalized by the node degrees and then thresholded entrywise, after which spectral clustering is applied. Strong consistency is proved for the setting with a fixed number of clusters. In Chaudhuri et al. [2012], a modified spectral clustering method was proposed using a regularized random-walk graph Laplacian, and strong consistency is established under the assumption that the average degree grows at least as . A different spectral clustering approach based on regularized graph Laplacians is considered in Qin and Rohe [2013]. Their theoretical bound on the misclassified rates depends on the eigenvectors of the graph Laplacian, which is a still random object. Spectral clustering based on unmodified adjacency matrices and degree-normalized adjacency matrices are analyzed in Lei and Rinaldo [2015] and Gulikers et al. [2015], which prove rigorous error rate results but did not provide numerical validation on either synthetic or real data.

It is observed in Jin [2015] that spectral clustering based directly the adjacency matrix (or their normalized version) often result in inconsistent clustering in real data, such as the political blogs dataset Adamic and Glance [2005], a popular benchmark for testing community detection approaches. To address this issue, a new spectral clustering algorithm called SCORE is proposed in Jin [2015]. Specifically, the second to -th leading eigenvectors are divided by the first leading eigenvector elementwisely, and spectral clustering is applied to the resulting ratio matrix. In their theoretical results, an implicit assumption is that the number of communities is bounded by a constant, as implied by the condition (2.14) in Jin [2015]. In comparison, our convexified modularity maximization approach works for growing both theoretically and empirically. As illustrated in \prettyrefsec:synthetic, our method exhibited better performance on both the synthetic and real datasets considered there, especially when .

6 Discussion and future work

In this paper, we studied community detection in networks with possibly highly skewed degree distributions. We introduced a new computationally efficient methodology based on convexification of the modularity maximization formulation and a novel doubly-weighted norm -median clustering procedure. Our complete algorithm runs provably in polynomial time and is computationally feasible. Non-asymptotic theoretical performance guarantees were established under DCSBM for both approximate clustering and perfect clustering, which are consistent with the best known rate results in the literature of SBM.

The proposed method also enjoys good empirical performance, as was demonstrated on both synthetic data and real-world networks. On these datasets our method was observed to have performance comparable to, and sometimes better than, the state-of-the-art spectral clustering methods, particularly when there are more than two communities.

Our work involves several algorithmic and analytical novelties. We provide a tractable solution to the classical modularity maximization formulation via convexification, achieving simultaneously strong theoretical guarantees and competitive empirical performance. The theoretical results are based on an aggregate and degree-corrected version of the density gap condition, which is robust to a small number of outlier nodes and thus is an appropriate condition for approximate clustering. In our algorithms and error bounds we made use of techniques from Guédon and Vershynin [2015]; Jin [2015]; Lei and Rinaldo [2015], but departed from these existing works in several important aspects. In particular, we proposed a novel -median formulation using doubly-weighted norms, which allows for a tight analysis that produces strong non-asymptotic guarantees on approximate recovery. Furthermore, we developed a non-asymptotic theory on perfect clustering, which is based on a divide-and-conquer primal-dual analysis and makes crucial use of certain weighted metrics that exploit the structures of DCSBM.

A future direction important in both theory and practice, is to consider networks with overlapping communities, where a node may belong to multiple communities simultaneously. To accommodate such a setting several extensions of SBM have been introduced in the literature. For example, Zhang et al. [2014] proposed a spectral algorithm based on the Overlapping Continuous Community Assignment Model (OCCAM). As our CMM method is shown to be an attractive alternative to spectral methods for DCSBM, it will be interesting to extend CMM to allow for both overlapping communities and heterogeneous degrees. Another direction of interest is to develop a general theory of optimal misclassification rates for DCSBM along the lines of Gao et al. [2015]; Zhang and Zhou [2015].

7 Proofs

In this section, we prove the theoretical results in \prettyrefthm:approx–3. Introducing the convenient shorthand , we can write the weighted norm of a matrix in \prettyrefdef:weighted_norm as

where denotes the Hadamard (element-wise) product. Several standard matrix norms will also be used: the spectral norm (the largest singular value of ); the nuclear norm (the sum of the singular values); the norm ; the norm ; and the operator norm .

For any vector , we denote by the diagonal matrix whose diagonal entries are correspondingly the entries of . For any matrix , let denote the diagonal matrix with diagonal entries given by the corresponding diagonal entries of . We denote absolute constants by , etc, whose value may change line by line.

Recall that and are the vectors of node degrees and their expectations, respectively, where

for each , . A key step in our proofs is to appropriately control the deviation of the degrees from their expectation. This is done in the following lemma.

Lemma 1.

Under DCSBM, with probability at least , we have

for some absolute constant .

Proof.

Since

we have

and

We bound and separately. For each , there holds and

Therefore, we have

which implies that and

By Markov’s inequality, with probability , there holds

for an absolute constant . To bound , we observe that since and , there holds . By Markov’s inequality, with probability at least , there holds for some absolute constant. Combining these bounds on and proves \prettyreflmm:fd. ∎

7.1 Proof of \prettyrefthm:approx

Recall that the vector is defined by letting for , where is defined in \prettyrefeq:GandH. It follows from the optimality of that

We control the terms , and separately below.

Upper bound for

For each pair and , we have , , and . Hence the condition \prettyrefeq:RDGC implies that , whence

Similarly, for each pair and with , we have

Combining the last two inequalities, we obtain the bound

Upper bound for

By Grothendieck’s inequality [Grothendieck, 1953; Lindenstrauss and Pełczyński, 1968] we have