Exact Recovery in the Stochastic Block Model


Emmanuel Abbe  Program in Applied and Computational Mathematics (PACM) and Department of Electrical Engineering, Princeton University, Princeton, NJ 08544, USA (eabbe@princeton.edu).    Afonso S. Bandeira  PACM, Princeton University, Princeton, NJ 08544, USA (ajsb@math.princeton.edu). ASB was supported by AFOSR Grant No. FA9550-12-1-0317.    Georgina Hall  Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ 08544, USA (gh4@princeton.edu).
Abstract

The stochastic block model (SBM) with two communities, or equivalently the planted bisection model, is a popular model of random graph exhibiting a cluster behaviour. In the symmetric case, the graph has two equally sized clusters and vertices connect with probability p within clusters and q across clusters. In the past two decades, a large body of literature in statistics and computer science has focused on providing lower bounds on the scaling of |p − q| to ensure exact recovery. In this paper, we identify a sharp threshold phenomenon for exact recovery: if p = α log(n)/n and q = β log(n)/n for constants α > β, recovering the communities with high probability is possible if (α + β)/2 − √(αβ) > 1 and impossible if (α + β)/2 − √(αβ) < 1. In particular, this improves the existing bounds. This also sets a new line of sight for efficient clustering algorithms. While maximum likelihood (ML) achieves the optimal threshold (by definition), it is in the worst case NP-hard. This paper proposes an efficient algorithm based on a semidefinite programming relaxation of ML, which is proved to succeed in recovering the communities close to the threshold, while numerical experiments suggest that it may achieve the threshold. An efficient algorithm that succeeds all the way down to the threshold is also obtained by combining a partial recovery algorithm with a local improvement procedure.

1 Introduction

Learning community structures in graphs is a central problem in machine learning, computer science and complex networks. Increasingly, data is available about interactions among agents (e.g., social, biological, computer or image networks), and the goal is to infer from these interactions communities that are alike or complementary. As the study of community detection grows at the intersection of various fields, in particular computer science, machine learning, statistics and social computing, the notions of clusters, the figures of merit and the models vary significantly, often based on heuristics (see [20] for a survey). As a result, the comparison and validation of clustering algorithms remains a major challenge. Key enablers to benchmark algorithms and to measure the accuracy of clustering methods are statistical network models. More specifically, the stochastic block model has been at the center of the attention in a large body of literature [22, 38, 18, 37, 33, 15, 24, 28, 14, 13, 27, 30, 31], as a testbed for algorithms (see [9] for a survey) as well as a scalable model for large data sets (see [21] and references therein). On the other hand, the fundamental analysis of the stochastic block model (SBM) still holds major open problems, as discussed next.

The SBM can be seen as an extension of the Erdős–Rényi (ER) model [16, 17]. In the ER model, edges are placed independently with probability p, providing a model described by a single parameter. This model has been (and still is) a source of intense research activity, in particular due to its phase transition phenomena. It is however well known to be too simplistic to model real networks, in particular due to its strong homogeneity and absence of community structure. The stochastic block model is based on the assumption that agents in a network connect not independently but based on their profiles, or equivalently, on their community assignments. More specifically, each node v in the graph is assigned a label x_v ∈ X, where X denotes the set of community labels, and each pair of nodes u, v is connected with probability p(x_u, x_v), where p is a fixed probability matrix. Upon observing the graph (without the labels), the goal of community detection is to reconstruct the community assignments, with either full or partial recovery.

Of particular interest is the SBM with two communities and symmetric parameters, also known as the planted bisection model, denoted in this paper by G(n, p, q), with n an even integer denoting the number of vertices. In this model, the graph has two clusters of equal size, and the probabilities of connecting are p within the clusters and q across the clusters (see Figure 1). Of course, one can only hope to recover the communities up to a global flip of the labels; in other words, only the partition into two communities can be recovered. Hence we use the terminology exact recovery, or simply recovery, when the partition is recovered correctly with high probability (w.h.p.), i.e., with probability tending to one as n tends to infinity. When p = q, it is clearly impossible to recover the communities, whereas for p > q or p < q, one may hope to succeed in certain regimes. While this is a toy model, it captures some of the central challenges for community detection.
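For concreteness, the sampling process of G(n, p, q) can be sketched as follows. This is a minimal illustration only; the function and variable names are ours, not the paper's.

```python
import random

def sample_sbm(n, p, q, seed=0):
    """Sample a symmetric two-community SBM graph G(n, p, q).

    Returns (labels, edges): labels[v] in {+1, -1} marks the community of
    node v (first half +1, second half -1), and edges is a set of pairs
    (u, v) with u < v. Edges appear independently with probability p within
    a community and q across communities.
    """
    assert n % 2 == 0, "n is assumed even in the planted bisection model"
    rng = random.Random(seed)
    labels = [+1] * (n // 2) + [-1] * (n // 2)
    edges = set()
    for u in range(n):
        for v in range(u + 1, n):
            prob = p if labels[u] == labels[v] else q
            if rng.random() < prob:
                edges.add((u, v))
    return labels, edges
```

For instance, `sample_sbm(600, 10 * math.log(600) / 600, 2 * math.log(600) / 600)` produces a graph in the logarithmic-degree regime studied in this paper.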

Figure 1: A graph generated from the stochastic block model with 600 nodes and 2 communities, scrambled on the left and clustered on the right. Nodes in this graph connect with probability p within communities and q across communities, with p > q.

A large body of literature in statistics and computer science [7, 15, 6, 34, 23, 12, 8, 28, 5, 32, 10] has focused on determining lower bounds on the scaling of |p − q| for which efficient algorithms succeed in recovering the two communities in G(n, p, q). We overview these results in the next section. The best bound appears to come from [28], and it has not been improved for more than a decade. More recently, a new phenomenon has been identified for the SBM in the sparse regime where p = a/n and q = b/n for constants a, b [13]. In this regime, exact recovery is not possible, since the graph is, with high probability, not connected. However, partial recovery is possible, and the focus has shifted to determining for which regime of a and b it is possible to obtain a reconstruction of the communities that is asymptotically better than a random guess (which is correct on roughly 50% of the vertices). In other words, the goal is to recover a proportion 1/2 + ε of the vertices correctly, for some ε > 0. We refer to this reconstruction requirement as detection. In [13], it was conjectured that detection is possible if and only if (a − b)² > 2(a + b). This is a particularly fascinating and strong conjecture, as it provides a necessary and sufficient condition for detection with a sharp closed-form expression. The study of this regime was initiated with the work of Coja-Oghlan [11], which obtains detection when (a − b)² is logarithmically larger than a + b, using spectral clustering on a trimmed adjacency matrix. The conjecture was recently proved by Massoulié [27] and Mossel et al. [31] using two different efficient algorithms. The impossibility part of the conjecture was first proved in [30].

While the sparse regime with constant degrees exhibits a fascinating threshold phenomenon for the detection property, it also raises a natural question: does exact recovery also admit a similar phase transition? Most of the literature has focused on the scaling of lower bounds, often up to poly-logarithmic terms, and the answer to this question appears to be currently missing in the literature. In particular, we did not find tight impossibility results, or guarantees of optimality for the proposed algorithms. This paper answers this question, establishing a sharp phase transition for recovery and obtaining a tight bound with an efficient algorithm achieving it.

2 Related works

There has been a significant body of literature on the recovery property for the stochastic block model with two communities G(n, p, q), ranging from the computer science and statistics literature to the machine learning literature. We provide next a partial list of works that obtain bounds on the connectivity parameters to ensure recovery with various algorithms (the approach of McSherry was recently simplified and extended in [36]):

[7] Bui, Chaudhuri, Leighton, Sipser ’84 min-cut method
[15] Dyer, Frieze ’89 min-cut via degrees
[6] Boppana ’87 spectral method
[34] Snijders, Nowicki ’97 EM algorithm
[23] Jerrum, Sorkin ’98 Metropolis algorithm
[12] Condon, Karp ’99 augmentation algorithm
[8] Carson, Impagliazzo ’01 hill-climbing algorithm
[28] McSherry ’01 spectral method
[5] Bickel, Chen ’09 N-G modularity
[32] Rohe, Chatterjee, Yu ’11 spectral method

While these algorithmic developments are impressive, we next argue how they do not reveal the sharp behavioral transition that takes place in this model. In particular, we will obtain an improved bound that is shown to be tight.

3 Information theoretic perspective and main results

In this paper, rather than starting with a specific algorithmic approach, we first seek to establish the information-theoretic threshold for recovery irrespective of efficiency requirements. Obtaining an information-theoretic benchmark, we then seek for an efficient algorithm that achieves it. There are several reasons to expect that an information-theoretic phase transition takes place for recovery in the SBM:

  • From a random graph perspective, note that recovery requires the graph to be at least connected (with high probability); hence p and q of order log(n)/n is the natural scaling, and connectivity is necessary. In turn, if p = o(log(n)/n) and q = 0, or p = 0 and q = o(log(n)/n), then connectivity fails and recovery is prohibited (since the model has either two separate Erdős–Rényi graphs that are not connected, or a bipartite Erdős–Rényi graph that is not connected). So one can expect that recovery takes place in the regime p = α log(n)/n and q = β log(n)/n if and only if f(α, β) > 1 for some function f, where f(α, β) < 1 implies failure. In particular, such a result has been shown to take place for the detection property [27, 31], where a giant component is necessary, i.e., (a + b)/2 > 1 for p = a/n and q = b/n, and where detection is shown to be possible if and only if (a − b)² > 2(a + b) (equivalently, an SNR parameter exceeding 1). Note also that the regime p = α log(n)/n and q = β log(n)/n is the bottleneck regime for recovery, as other regimes lead to extremal behaviours of the model (either trivially possible or impossible to recover the communities).

  • From an information theory perspective, note that the SBM can be seen as a specific code on a discrete memoryless channel. Namely, the community assignment is a vector x ∈ {0, 1}^n, and the graph is a vector (or matrix) y = (y_{ij}), where each y_{ij} is the output of x_i ⊕ x_j through a binary memoryless channel (an edge being more likely when the two labels agree). The problem is hence to decode x from y correctly with high probability.

    This information theory model is a specific structured channel: first, the channel is memoryless but it is not time homogeneous, since p and q scale with n. Second, the code has a specific structure: it has constant right-degree 2 and left-degree n − 1, and rate 2/(n − 1). However, as shown in [3] for the constant-degree regime, this model can be approximated by another model where the sparsity of the channel (i.e., the fact that p and q tend to 0) can be transferred to the code, which becomes an LDGM code of constant right-degree 2, and for which maximum likelihood is expected to have a phase transition [3, 25]. It is then legitimate to expect a phase transition, as in coding theory, for the recovery of the input (the community assignment) from the output (the graph).

Figure 2: A graph model like the stochastic block model, where edges are drawn depending on the node profiles (e.g., binary profiles), can be seen as a special (LDGM) code on a memoryless channel.

To establish the information-theoretic limit, note that, as for channel coding, the algorithm maximizing the probability of reconstructing the communities correctly is Maximum A Posteriori (MAP) decoding. Since the community assignment is uniform, MAP is in particular equivalent to Maximum Likelihood (ML) decoding. Hence if ML fails in reconstructing the communities with high probability as n diverges, then no algorithm (efficient or not) can succeed with high probability. However, ML amounts to finding a balanced cut (a bisection) of the graph which minimizes the number of edges across the cut (in the case p > q), i.e., the min-bisection problem, which is well known to be NP-hard. Hence ML can be used to establish the fundamental limit (ML was also used for the SBM in [10], requiring however poly-logarithmic degrees for the nodes) but does not provide an efficient algorithm, which we consider in a second stage.
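The min-bisection formulation of ML can be made concrete by exhaustive search over balanced partitions, which is of course exponential in n and thus purely illustrative; function and variable names below are ours.

```python
from itertools import combinations

def ml_min_bisection(n, edges):
    """Exhaustive maximum-likelihood decoding for the SBM with p > q:
    return the balanced bipartition of {0, ..., n-1} minimizing the number
    of edges across the cut. Exponential time; only for tiny n.
    """
    edge_set = set(edges)
    best, best_cut = None, None
    for half in combinations(range(n), n // 2):
        side = set(half)
        if 0 not in side:   # fix the global label flip: node 0 on one side
            continue
        cut = sum(1 for (u, v) in edge_set if (u in side) != (v in side))
        if best_cut is None or cut < best_cut:
            best, best_cut = side, cut
    return best, best_cut
```

On a graph whose planted bisection has no crossing edges, this recovers the planted communities exactly; for large n one needs the relaxations discussed later in the paper.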

We now summarize the main results of this paper. Theorems 1 and 2 provide the information-theoretic limit for recovery. Theorem 1 establishes the converse, showing that the maximum likelihood estimator does not coincide with the planted partition w.h.p. if (α + β)/2 − √(αβ) < 1, and Theorem 2 states that ML succeeds w.h.p. if (α + β)/2 − √(αβ) > 1. One can express the recovery requirement as

α + β > 2 + 2√(αβ),   (1)

where α + β > 2 is the requirement for the connectivity threshold (which is necessary), and the oversampling term 2√(αβ) is needed to allow for recovery (the condition is also equivalent to |√α − √β| > √2). Analyzing ML requires a sharp analysis of the tail event of a sum of discrete random variables whose parameters tend to constants with the number of summands. Interestingly, standard estimates à la CLT, Chernoff, or Sanov’s theorem do not provide the right answer in our regime, due to the slow concentration taking place.
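As a concrete aid, the sharp recovery condition (α + β)/2 − √(αβ) > 1 for p = α log(n)/n and q = β log(n)/n can be checked numerically; a minimal sketch (the function name is ours):

```python
import math

def exact_recovery_possible(alpha, beta):
    """Sharp threshold for exact recovery in G(n, alpha*log(n)/n, beta*log(n)/n):
    recovery is possible iff (alpha + beta)/2 - sqrt(alpha*beta) > 1,
    equivalently |sqrt(alpha) - sqrt(beta)| > sqrt(2).
    """
    return (alpha + beta) / 2 - math.sqrt(alpha * beta) > 1
```

For example, (α, β) = (9, 1) lies above the threshold, while (5, 1) lies below it, even though both graphs are connected w.h.p.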

Note that the best bounds from the table of Section 2 are obtained in [6] and [28], which allow for recovery in the regime where p = α log(n)/n and q = β log(n)/n, but with sufficient conditions of the form (α − β)² > C(α + β) for large constants C. Hence, although these works reach the scaling for p and q where the threshold takes place, they do not obtain the right threshold behaviour in terms of the parameters α and β.

For efficient algorithms, we first propose an algorithm based on a semidefinite programming (SDP) relaxation of ML, and show in Theorem 3 that it succeeds in recovering the communities w.h.p. when (α − β)² > 8(α + β) + (8/3)(α − β). This is shown by building a candidate dual certificate and showing that it indeed satisfies all the required properties, using Bernstein’s matrix inequality. To compare this expression with the optimal threshold, the latter can be rewritten as (α − β)² > 4(α + β) − 4 and α + β > 2. The SDP is hence provably successful with a slightly looser threshold. It however already improves on the state of the art for exact recovery in the SBM, improving in particular on [28]. Moreover, numerical simulations suggest that the SDP algorithm works all the way down to the optimal threshold, so the analysis may not be tight. The success of the SDP algorithm under the model of this paper suggests that it may have robustness properties relevant in practical contexts.

Finally, we provide in Section 7.2 an efficient algorithm whose guarantees match the information theoretical threshold, using an efficient partial recovery algorithm, followed by a procedure of local improvements.

Summary of the regimes and thresholds:

              Giant component              Connectivity
ER model      p = c/n, c > 1               p = c log(n)/n, c > 1
              [17]                         [17]

              Detection                    Recovery
SBM model     p = a/n, q = b/n,            p = α log(n)/n, q = β log(n)/n,
              (a − b)² > 2(a + b)          (α + β)/2 − √(αβ) > 1
              [27, 31]                     (This paper)

4 Additional related literature

From an algorithmic point of view, the censored block model investigated in [1, 2] is also related to this paper. It considers the following problem: G is a random graph drawn from the Erdős–Rényi ensemble, and each node i is assigned an unknown binary label x_i. For each edge (i, j) in G, the variable Y_ij = x_i ⊕ x_j ⊕ Z_ij is observed, where the Z_ij are i.i.d. Bernoulli(ε) variables. The goal is to recover the values of the node variables from the Y variables. Matching bounds are obtained in [1, 2] for ε close to 1/2, with an efficient algorithm based on SDP, which is related to the algorithm developed in this paper.

Shortly after the posting of this paper on arXiv, a paper of Mossel, Neeman and Sly [29], fruit of a parallel research effort, was posted for the recovery problem in G(n, p, q). In [29], the authors obtained a similar type of result to this paper, slightly more general, allowing in particular the parameters p and q to depend on n.

5 Information theoretic lower bound

In this section we prove an information-theoretic lower bound for exact recovery in the stochastic block model. The techniques are similar to estimates used for decoding a codeword transmitted on a memoryless channel with a specific structured code.

Recall the stochastic block model: n denotes the number of vertices in the graph, assumed to be even for simplicity; each vertex v is attached a binary label x_v, where the labels are uniformly drawn subject to the two communities having equal size n/2; and for each pair of distinct nodes u, v, an edge is placed with probability p if x_u = x_v and with probability q if x_u ≠ x_v, where edges are placed independently conditionally on the vertex labels. In the sequel, we consider p = α log(n)/n and q = β log(n)/n, and focus on the case α > β to simplify the writing.

Theorem 1.

Let p = α log(n)/n and q = β log(n)/n with α > β. If (α + β)/2 − √(αβ) < 1, or equivalently, if either α + β < 2, or α + β ≥ 2 and (α − β)² < 4(α + β) − 4, then ML fails in recovering the communities with probability bounded away from zero.

If β = 0, recovery is possible if and only if the two communities are connected, which is known to have a sharp threshold at α = 2. We will thus focus on the case α, β > 0.

Let A and B denote the two communities, each with n/2 nodes.

Let

and let be a fixed subset of of size . We define the following events:

(2)

where E(·, ·) denotes the number of edges between two sets. Note that we identify the nodes of our graph with integers, with a slight abuse of notation when there is no risk of confusion.

We also define

(3)
Lemma 1.

If then .

Proof.

By symmetry, the probability of a failure in is also at least so, by union bound, with probability at least both failures will happen simultaneously which implies that ML fails. ∎

Lemma 2.

If then .

Proof.

It is easy to see that and Lemma 10 states that

(4)

Hence,

which together with Lemma 1 concludes the proof. ∎

Lemma 3.

Recall the definitions in (2) and (3). If

then, for sufficiently large , .

Proof.

We will use Lemma 2 and show that if then , for sufficiently large .

are independent and identically distributed random variables so

This means that is equivalent to . If is not, then the inequality is obviously true; if , then,

where the last inequality used the hypothesis . ∎

Definition 1.

Let be a natural number, , and , we define

(5)

where are i.i.d. Bernoulli and are i.i.d. Bernoulli, independent of .

Lemma 4.

Let , then

(6)
Proof of Theorem 1.

From the definitions in (2) and (3) we have

(7)

where are i.i.d. Bernoulli and are i.i.d. Bernoulli, all independent. Since

(8)

we get

(9)

and Lemma 4 implies

(10)

Hence , and the conclusion follows from Lemma 3. ∎

6 Information theoretic upper bound

We present now the main result of this section.

Theorem 2.

If (α + β)/2 − √(αβ) > 1, i.e., if α + β > 2 and (α − β)² > 4(α + β) − 4, then the maximum likelihood estimator exactly recovers the communities (up to a global flip), with high probability.

The case β = 0 follows directly from the connectivity threshold phenomenon on Erdős–Rényi graphs, so we will restrict our attention to α > β > 0.

We will prove this theorem through a series of lemmas. The techniques are similar to estimates used for decoding a codeword transmitted on a memoryless channel with a specific structured code. In what follows we refer to the true community partition as the ground truth.

Lemma 5.

If the maximum likelihood estimator does not coincide with the ground truth, then there exists and a set and with such that

Proof.

Recall that the maximum likelihood estimator finds two equally sized communities (of size n/2 each) that have the minimum number of edges between them; thus, for it to fail, there must exist another balanced partition of the graph with a smaller cut, let us call it and . Without loss of generality and . Picking and gives the result. ∎

Let be the event of the maximum likelihood estimator not coinciding with the ground truth. Given and both of size , define as

(11)

We have, by a simple union bound argument,

(12)

Let be a sequence of i.i.d. Bernoulli random variables and an independent sequence of i.i.d. Bernoulli random variables, note that (cf. Definition 3),

Lemma 8 in the Appendix shows that:

(13)

We thus have, combining (12) and (13), and using ,

(14)
Proof of Theorem 2.

Recall that is the event of the maximum likelihood estimator not coinciding with the ground truth. We next show that for , if,

then there exists a constant such that

(15)

Combining (14) and (15), we have

Note that, for sufficiently large , we have

and . Hence, for sufficiently large ,

which, together with the observation that , concludes the proof of the theorem. ∎

7 Efficient algorithms

7.1 A semidefinite programming based relaxation

We propose and analyze an algorithm, based on semidefinite programming (SDP), to efficiently reconstruct the two communities. Let G be the observed graph, where edges are independently present with probability p if they connect two nodes in the same community and with probability q if they connect two nodes in different communities, with p > q. Recall that there are n nodes in this graph and that, with a slight abuse of notation, we identify nodes in the graph with the integers 1, …, n. Our goal is to recover the two communities from the observation of G.

The proposed reconstruction algorithm will try to find two communities such that the number of within-community edges minus the number of across-community edges is largest. We will identify a choice of communities by a vector x with ±1 entries, such that the v-th component corresponds to +1 if node v is in one community and −1 if it is in the other. We also define B as the matrix with zero diagonal whose off-diagonal entries are given by B_ij = 1 if there is an edge between nodes i and j, and B_ij = −1 otherwise.

The proposed algorithm will attempt to maximize the following:

    max x^T B x   (17)
    s.t. x_i = ±1, i = 1, …, n.   (18)

Our approach will be to consider a simple SDP relaxation of this combinatorial problem. The SDP relaxation considered here dates back to the seminal work of Goemans and Williamson [19] on the Max-Cut problem. The techniques behind our analysis are similar to the ones used by the first two authors in a recent publication [1, 2]:

    max Tr(BX)
    s.t. X ⪰ 0, X_ii = 1, i = 1, …, n.   (19)
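In standard form, the relaxation and its conic dual can be written as follows; this is our reconstruction of the usual Goemans–Williamson-type display, with B the ±1 matrix defined above, and the exact typesetting in the paper may differ:

```latex
\begin{aligned}
\text{(primal)} \qquad & \max_{X \in \mathbb{R}^{n \times n}} \ \operatorname{Tr}(BX)
  && \text{s.t. } X \succeq 0,\; X_{ii} = 1 \ \text{for all } i,\\
\text{(dual)} \qquad & \min_{Z \in \mathbb{R}^{n \times n}} \ \operatorname{Tr}(Z)
  && \text{s.t. } Z \succeq B,\; Z \ \text{diagonal}.
\end{aligned}
```

Weak duality gives Tr(BX) ≤ Tr(ZX) = Tr(Z) for any feasible pair (using Z − B ⪰ 0, X ⪰ 0, and X_ii = 1), so exhibiting a diagonal Z ⪰ B with Tr(Z) = x^T B x certifies optimality of X = xx^T; this is the dual-certificate strategy of the proof below.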
Theorem 3.

If (α − β)² > 8(α + β) + (8/3)(α − β), then the following holds with high probability: (19) has a unique solution, given by the outer product xx^T of the vector x whose entries corresponding to community A are +1 and whose entries corresponding to community B are −1. Hence, in this regime, full recovery of the communities is possible in polynomial time.

We will prove this result through a series of lemmas. Recall that G is the observed graph and that the vector x corresponds to the correct choice of communities. As stated above, the optimization problem (19) is an SDP (Semidefinite Program), and any SDP can be solved in polynomial time using methods such as the interior point method. Hence if we can prove that the solution of (19) is xx^T, then we will have proved that the algorithm recovers the correct choice of communities in polynomial time.

Recall that the degree matrix D of a graph is the diagonal matrix whose diagonal coefficient D_ii corresponds to the number of neighbours of vertex i, and that λ₂(M) denotes the second smallest eigenvalue of a symmetric matrix M.

Definition 2.

Let G_in (resp. G_out) be the subgraph of G that includes the edges linking two nodes in the same community (resp. in different communities). We denote by D_in (resp. D_out) the degree matrix of G_in (resp. G_out) and define the Stochastic Block Model Laplacian L_SBM from these matrices and the matrix B.

Lemma 6.

If

(20)

then is the unique solution to the SDP (19).

Proof.

We can suppose this WLOG. First of all, we obtain a sufficient condition for xx^T to be a solution to the SDP (19) using the KKT conditions. This will give us the first part of condition (20). The primal problem of the SDP (19) is

s.t.

The dual problem of SDP (19) is

s.t. (21)

is guaranteed to be an optimal solution to SDP (19) under the following conditions:

  • is a feasible solution for the primal problem

  • There exists a matrix feasible for the dual problem such that .

The first point being trivially verified, it remains to find such a (known as a dual certificate). Generally, one can also use complementary slackness to help find such a certificate but, in this case, it is equivalent to strong duality.

Define a correct (resp. incorrect) edge to be an edge between two nodes in the same (resp. different) community, and a correct (resp. incorrect) non-edge to be the absence of an edge between two nodes in different (resp. the same) communities. Notice that the candidate diagonal certificate counts positively the correct edges and non-edges incident to a node, and negatively the incorrect edges and incorrect non-edges incident to that node. In other words,

(22)
(23)
(24)

Hence the candidate certificate, thus defined, is diagonal and verifies the required identity. As long as the first part of condition (20) holds, we can then conclude that xx^T is an optimal solution for the SDP (19).

The second part of condition (20) ensures that xx^T is the unique solution to the SDP (19). Suppose that there is another optimal solution; then complementary slackness applies to it as well. By assumption, the second smallest eigenvalue of the certificate matrix is non-zero, which entails that x spans all of its null space. Combining this with complementary slackness, positive semidefiniteness and the unit-diagonal constraint, we obtain that the other solution needs to be a multiple of xx^T; since its diagonal entries equal 1, it must coincide with xx^T. ∎

Proof of Theorem 3.

Given Lemma 6, the next natural step is to control the eigenvalues of the relevant matrix. We want to use Bernstein’s matrix inequality to do this; to make its application easier, we rewrite the matrix as a linear combination of elementary deterministic matrices with random coefficients. Define

(25)
(26)

where the random coefficients are independent of each other. Define

(27)
(28)

where (resp. ) is the vector of all zeros except the (resp. ) coefficient, which is 1. Using these definitions, we can then write as the difference of two matrices and , where is a zero-expectation matrix and a deterministic matrix that corresponds to the expectation, i.e.,

(29)
(30)

where

(31)
(32)

Notice that and , hence .

Condition (20) is then equivalent to

(33)

where (resp. ) represents the projection of (resp. ) onto the space . Typically, if we want to project onto the space spanned by the vector , then the projection matrix would be and . Being deterministic, condition (33) amounts to controlling the spectral norm of . This is what is exploited in Lemma 11 in the appendix, where it is shown that condition (33) is verified if and for some .

Using Bernstein to conclude, Lemma 12 in the appendix shows that for some when n is big enough and Lemma 13 in the appendix shows that for some if . This concludes the proof of the theorem.
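For reference, the matrix Bernstein inequality used in this step can be stated as follows (a standard form, e.g. Tropp’s; we record it here as an aid, and the appendix may use a variant):

```latex
\text{If } X_1,\dots,X_m \text{ are independent, zero-mean, symmetric } n \times n
\text{ random matrices with } \|X_k\| \le R \text{ a.s., then for all } t \ge 0,
\]
\[
\mathbb{P}\!\left( \Big\| \sum_{k=1}^m X_k \Big\| \ge t \right)
\;\le\; 2n \exp\!\left( \frac{-t^2/2}{\sigma^2 + Rt/3} \right),
\qquad
\sigma^2 = \Big\| \sum_{k=1}^m \mathbb{E}\big[X_k^2\big] \Big\|.
```

The decomposition above into elementary matrices with independent random coefficients is exactly what makes R and σ² easy to bound in our regime.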

7.2 Efficient full recovery from efficient partial recovery

In this section we show how to leverage state-of-the-art algorithms for partial recovery in the sparse case in order to construct an efficient algorithm that achieves exact recovery down to the optimal information-theoretic threshold.

The algorithm proceeds by splitting the information obtained in the graph into a part that is used by the partial recovery algorithm and a part that is used for the local improvement steps. In order to make the two steps (almost) independent, we propose the following procedure. First, take a random partition of the edges of the complete graph on the n nodes into two graphs H₁ and H₂ (done independently of the observed graph G): H₁ is an Erdős–Rényi graph on n nodes with a suitable edge probability, and H₂ is its complement. We then define the subgraphs G₁ = G ∩ H₁ and G₂ = G ∩ H₂ of G. In the second step, we apply Massoulié’s [27] algorithm for partial recovery to G₁. As G₁ is an SBM graph, this algorithm is guaranteed [27] to output, with high probability, a partition of the n nodes into two communities A′ and B′ such that the partition is correct on at least (1 − δ)n nodes, where δ can be made arbitrarily small for n large. In other words, A′ and B′ coincide with A and B (the correct communities) on at least (1 − δ)n nodes. Lastly, we flip some of the nodes’ memberships depending on the edges they have in G₂: using the communities A′ and B′ obtained in the previous step, we flip the membership of a given node if it has more edges in G₂ going to the opposite community than to its own. If the number of flips in each cluster is not the same, keep the clusters unchanged.
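The local improvement step can be sketched as follows: one refinement round in plain Python, with names of our own choosing, and omitting the balancedness check on the number of flips that the full algorithm performs.

```python
def local_improvement(labels, edges):
    """One round of local refinement: flip a node's community if it has
    strictly more neighbours (in the graph reserved for this step) in the
    opposite community than in its own. labels[v] is +1 or -1; edges is a
    list of pairs (u, v). Illustrative sketch only.
    """
    n = len(labels)
    same = [0] * n   # neighbours currently labelled like v
    other = [0] * n  # neighbours currently labelled unlike v
    for (u, v) in edges:
        if labels[u] == labels[v]:
            same[u] += 1
            same[v] += 1
        else:
            other[u] += 1
            other[v] += 1
    return [-labels[v] if other[v] > same[v] else labels[v] for v in range(n)]
```

Starting from a partition that is correct on all but a small fraction of nodes, each mislabeled node sees mostly correctly labelled neighbours, so a majority vote over its edges flips it back with high probability; this is the mechanism the union bound below quantifies.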

(a) Correct node that will be flipped
(b) Incorrect node that will not be flipped
Figure 3: Two cases where a node in the graph will be mislabeled
Theorem 4.

If (α + β)/2 − √(αβ) > 1, then there exists a choice of the splitting parameter (depending only on α and β) such that, with high probability, the algorithm described above successfully recovers the communities from the observed graph.

Proof.

In the following, we suppose that the partial recovery algorithm succeeds as described above w.h.p., and we want to show that, for n large and the initial fraction of mislabeled nodes small enough, the probability that there exists a node that does not belong to the correct community after the local improvements goes to 0 as n → ∞. Our goal is to union bound over all possible nodes. We are thus interested in the probability that a given node is mislabeled at the end of the algorithm.

Recall that the random variables defined earlier are i.i.d. and mutually independent Bernoulli random variables with expectations p and q respectively: the first kind represents the presence of an edge between two nodes in the same community, the second between two nodes in different communities. Define i.i.d. copies of these variables as well. For simplicity, we start by assuming that the graph used for the local step is the complete graph. In this case, only a small fraction of the nodes are incorrectly labelled (i.e., nodes that are in A but belong to B′ and nodes that are in B but belong to A′). A node in the graph is mislabeled only if it has at least as many connections to the wrong cluster as connections to the right one. This is illustrated in Figure 3. We can express this event in terms of these random variables.

(34)

Recall that we assumed the graph used in the local step was a complete graph. In reality, using Lemma 14, it can be shown that the degree of any node in it is sufficiently large w.h.p. Taking this into consideration, we will loosely upper-bound (34) by removing the corresponding terms on both sides. Notice that the removal of edges is independent of the outcome of the random variables, and

(35)

Lemma 9 shows that (35) can be upper-bounded as follows

(36)

where

C is a constant depending only on α and β, (37)
(38)