Information-theoretic Limits for Community Detection in Network Models

Chuyang Ke
Department of Computer Science
Purdue University
cke@purdue.edu
   Jean Honorio
Department of Computer Science
Purdue University
jhonorio@purdue.edu
Abstract

We analyze the information-theoretic limits for the recovery of node labels in several network models. This includes the Stochastic Block Model, the Exponential Random Graph Model, the Latent Space Model, the Directed Preferential Attachment Model, and the Directed Small-world Model. For the Stochastic Block Model, the non-recoverability condition depends on the probabilities of having edges inside a community and between different communities. For the Latent Space Model, the non-recoverability condition depends on the dimension of the latent space, and on how far apart and how spread out the communities are in the latent space. For the Directed Preferential Attachment Model and the Directed Small-world Model, the non-recoverability condition depends on the ratio between homophily and neighborhood size. We also consider dynamic versions of the Stochastic Block Model and the Latent Space Model.

1 Introduction

Network models have become a powerful tool for researchers in various fields. With the rapid expansion of online social media such as Twitter, Facebook, LinkedIn and Instagram, researchers now have access to more real-life network data, and network models are a natural way to analyze the vast number of interactions [14, 2, 1, 18]. Recent years have seen applications of network models in machine learning [5, 29, 19], bioinformatics [7, 13, 9], as well as in social and behavioral research [22, 12].

Within this literature, one of the central problems related to network models is community detection. In a typical network model, nodes represent individuals in a social network, and edges represent interpersonal interactions. The goal of community detection is to recover the label associated with each node (i.e., the community to which each node belongs). The exact recovery of 100% of the labels has long been an important research topic in machine learning; see, for instance, [2, 8, 17, 23].

One particular issue researchers care about in the recovery of network models is the relation between the number of nodes and the proximity between the likelihood of connecting within the same community and across different communities. For instance, consider the Stochastic Block Model, in which p is the probability of connecting two nodes in the same community, and q is the probability of connecting two nodes in different communities. Clearly, if p equals q, it is impossible to identify the communities, or equivalently, to recover the labels of all nodes. Intuitively, as the difference between p and q increases, the labels become easier to recover.

In this paper, we analyze the information-theoretic limits for community detection. Our main contribution is the comprehensive study of several social network models used in the literature. To accomplish that task, we carefully construct restricted ensembles. The key idea behind restricted ensembles is that for any learning problem, if a subclass of models is difficult to learn, then the original class of models is at least as difficult to learn. The use of restricted ensembles is customary for information-theoretic lower bounds [24, 27].

We provide a series of novel results in this paper. While the information-theoretic limits of the Stochastic Block Model have been heavily studied (in slightly different ways), none of the other models considered in this paper have been studied before. Thus, we provide new information-theoretic results for the Exponential Random Graph Model, the Latent Space Model, the Directed Preferential Attachment Model, and the Directed Small-world Model. We also provide new results for dynamic versions of the Stochastic Block Model and the Latent Space Model.

Table 1 summarizes our results.

Type  Model  Our Result  Previous Result  Thm. No.
S     SBM                [21], [8]        Thm. 1
S     ERGM               Novel            Cor. 1
S     LSM                Novel            Thm. 2
UD    DSBM               Novel            Thm. 3
UD    DLSM               Novel            Thm. 4
DD    DPAM               Novel            Thm. 5
DD    DSWM               Novel            Thm. 6
Table 1: Comparison of network models (S - static; UD - undirected dynamic; DD - directed dynamic)

2 Static Network Models

In this section we analyze the information-theoretic limits for two static network models: the Stochastic Block Model (SBM) and the Latent Space Model (LSM). Furthermore, we include a particular case of the Exponential Random Graph Model (ERGM) as a corollary of our results for the SBM. We call these models static because, in these models, edges are independent of each other.

2.1 Stochastic Block Model

Among different network models, the Stochastic Block Model (SBM) has received particular attention. Variations of the Stochastic Block Model include, for example, symmetric SBMs [3], binary SBMs [23, 11], labelled SBMs [32, 17, 30, 15], and overlapping SBMs [4]. For regular SBMs, [21] and [8] showed that under certain conditions recovering the communities in an SBM is fundamentally impossible. Our analysis of the Stochastic Block Model follows the method used in [8], but we analyze a different regime. In [8], the two clusters are required to have equal size (Planted Bisection Model), while in our SBM setup, nature picks the label of each node uniformly at random. Thus, in our model, only the expected sizes of the two communities are equal.

We now define the Stochastic Block Model, which has two parameters p and q.

Definition 1 (Stochastic Block Model).

A Stochastic Block Model with parameters p and q is an undirected graph of n nodes with the adjacency matrix A, where each A_ij ∈ {0, 1}. Each node i is in one of the two classes {+1, -1}. The distribution of true labels is uniform, i.e., each label Y_i* is assigned +1 with probability 1/2, and -1 with probability 1/2.

The adjacency matrix A is distributed as follows: if Y_i* = Y_j* then A_ij is Bernoulli with parameter p; otherwise A_ij is Bernoulli with parameter q.
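To make the generative process concrete, the following minimal Python sketch samples labels and an adjacency matrix from this model. It is an illustration only; the function name and the use of numpy are our own choices, while n, p, and q play the roles described in Definition 1.

import numpy as np

def sample_sbm(n, p, q, rng=None):
    # Sample uniform labels in {+1, -1} and an undirected adjacency matrix where
    # within-community edges appear with probability p and between-community
    # edges with probability q.
    rng = np.random.default_rng(rng)
    y = rng.choice([+1, -1], size=n)
    a = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            prob = p if y[i] == y[j] else q
            a[i, j] = a[j, i] = rng.binomial(1, prob)
    return y, a

# Example: a graph on 100 nodes; recovery gets harder as p approaches q.
y_true, adj = sample_sbm(n=100, p=0.6, q=0.2)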

The goal is to recover labels Ŷ that are equal to the true labels Y*, given the observation of A. We are interested in the information-theoretic limits. Thus, we define the Markov chain Y* → A → Ŷ. Using Fano’s inequality, we obtain the following results.

Theorem 1.

In a Stochastic Block Model with parameters with , if

then we have that for any algorithm that a learner could use for picking , the probability of error is greater than or equal to .

Notice that our result for the Stochastic Block Model is similar to the one in [8]. This means that the method of generating labels does not affect the information-theoretic bound.

2.2 Exponential Random Graph Model

Exponential Random Graph Models (ERGMs) are a family of distributions on graphs of the following form: P(G) ∝ exp(Φ(G)), where Φ is some potential function over graphs. Selecting different potential functions enables ERGMs to model various structures in network graphs; for instance, the potential function can be a sum of functions over edges, triplets, or cliques, among other choices [14].

In this section we analyze a special case of the Exponential Random Graph Model as a corollary of our results for the Stochastic Block Model, in which the potential function is defined as a sum of functions over edges. That is, , where and is a parameter. Simplifying the expression above, we have . This leads to the following definition.

Definition 2 (Exponential Random Graph Model).

An Exponential Random Graph Model with parameter is an undirected graph of n nodes with the adjacency matrix A, where each A_ij ∈ {0, 1}. Each node i is in one of the two classes {+1, -1}. The distribution of true labels is uniform, i.e., each label Y_i* is assigned +1 with probability 1/2, and -1 with probability 1/2.

The adjacency matrix is distributed as follows:

where .
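Although the specific potential and the resulting edge distribution are elided above, the generic calculation below shows why a sum-over-edges potential makes the edges independent and reduces the model to an SBM-type model (the symbols Φ and φ are our own notation):

P(A \mid Y^*) \;=\; \frac{1}{Z}\exp\Big(\sum_{i<j} \varphi(A_{ij}, Y_i^*, Y_j^*)\Big)
\;=\; \prod_{i<j} \frac{\exp\big(\varphi(A_{ij}, Y_i^*, Y_j^*)\big)}{\exp\big(\varphi(0, Y_i^*, Y_j^*)\big) + \exp\big(\varphi(1, Y_i^*, Y_j^*)\big)},

so each A_ij is an independent Bernoulli variable with parameter exp(φ(1, Y_i*, Y_j*)) / [exp(φ(0, Y_i*, Y_j*)) + exp(φ(1, Y_i*, Y_j*))]. When φ depends only on whether Y_i* = Y_j*, this is exactly a Stochastic Block Model, which is why Theorem 1 applies.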

The goal is to recover labels Ŷ that are equal to the true labels Y*, given the observation of A. We are interested in the information-theoretic limits. Thus, we define the Markov chain Y* → A → Ŷ. Theorem 1 leads to the following result.

Corollary 1.

In an Exponential Random Graph Model with parameter , if

then we have that for any algorithm that a learner could use for picking , the probability of error is greater than or equal to .

2.3 Latent Space Model

The Latent Space Model (LSM) was first proposed by [16]. The core assumption of the model is that each node has a low-dimensional latent vector associated with it. The latent vectors of nodes in the same community follow a similar pattern. The connectivity of two nodes in the Latent Space Model is determined by the distance between their corresponding latent vectors. Previous works on the Latent Space Model [26] analyzed asymptotic sample complexity, but did not focus on information-theoretic limits for exact recovery.

We now define the Latent Space Model, which has three parameters.

Definition 3 (Latent Space Model).

A Latent Space Model with parameters is an undirected graph of n nodes with the adjacency matrix A, where each A_ij ∈ {0, 1}. Each node i is in one of the two classes {+1, -1}. The distribution of true labels is uniform, i.e., each label Y_i* is assigned +1 with probability 1/2, and -1 with probability 1/2.

For every node i, nature generates a latent d-dimensional vector z_i according to a Gaussian distribution that depends on its label Y_i*.

The adjacency matrix A is distributed as follows: A_ij is Bernoulli with a parameter determined by the distance between the latent vectors z_i and z_j.
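The following Python sketch illustrates one concrete instance of this generative process. The community-dependent means ±mu, the noise level sigma, and the squared-exponential link exp(-||z_i - z_j||^2) are illustrative assumptions, not the exact quantities of Definition 3 (which are elided above).

import numpy as np

def sample_lsm(n, d, mu, sigma, rng=None):
    # Labels in {+1, -1}; latent vectors centered at +mu or -mu (assumed form);
    # closer latent vectors connect with higher probability (assumed link).
    rng = np.random.default_rng(rng)
    y = rng.choice([+1, -1], size=n)
    z = y[:, None] * mu + sigma * rng.standard_normal((n, d))
    a = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            prob = np.exp(-np.sum((z[i] - z[j]) ** 2))
            a[i, j] = a[j, i] = rng.binomial(1, prob)
    return y, z, a

# Only (y, a) would be observed by the learner; z stays hidden.
y_true, latent, adj = sample_lsm(n=100, d=2, mu=1.0, sigma=0.5)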

The goal is to recover labels Ŷ that are equal to the true labels Y*, given the observation of A. Notice that we do not have access to the latent vectors z_i. We are interested in the information-theoretic limits. Thus, we define the Markov chain Y* → A → Ŷ. Fano’s inequality and an appropriate conversion of the above model lead to the following theorem.

Theorem 2.

In a Latent Space Model with parameters , if

then we have that for any algorithm that a learner could use for picking , the probability of error is greater than or equal to .

3 Dynamic Network Models

In this section we analyze the information-theoretic limits for two dynamic network models: the Dynamic Stochastic Block Model (DSBM) and the Dynamic Latent Space Model (DLSM). We call these models dynamic because we assume there exists some ordering of the edges, and the distribution of each edge depends not only on its endpoints, but also on previously generated edges.

We start by giving the definition of predecessor sets. Notice that the following definition of predecessor sets employs a lexicographic order, and the motivation is to use it as a subclass to provide a bound for general dynamic models. Fano’s inequality is usually used for a restricted ensemble, i.e., a subclass of the original class of interest. If a subclass (e.g., a dynamic SBM or LSM with a particular predecessor set) is difficult to learn, then the original class (SBMs or LSMs with general dynamic interactions) will be at least as difficult to learn. The use of restricted ensembles is customary for information-theoretic lower bounds [24, 27].

Definition 4.

For every pair and with , we denote its predecessor set using , where

and

In a dynamic model, the probability distribution of each edge A_ij depends not only on the labels of nodes i and j (i.e., Y_i* and Y_j*), but also on the previously generated edges in its predecessor set.
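A minimal sketch of one concrete reading of Definition 4 is given below; pairs are ordered lexicographically, and the function name and zero-based indexing are our own conventions.

def predecessor_set(i, j, n):
    # All pairs (k, l) with k < l over nodes 0..n-1 that come strictly before
    # (i, j) in lexicographic order; tuple comparison in Python is lexicographic.
    assert 0 <= i < j < n
    return [(k, l) for k in range(n) for l in range(k + 1, n) if (k, l) < (i, j)]

# Example: the pair (1, 3) on 4 nodes is preceded by (0, 1), (0, 2), (0, 3), (1, 2).
print(predecessor_set(1, 3, 4))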

Next, we prove the following lemma using the definition above.

Lemma 1.

Assume now the probability distribution of given labeling is . Then for any labeling and , we have

Similarly, if the probability distribution of given labeling is , we have

3.1 Dynamic Stochastic Block Model

The Dynamic Stochastic Block Model (DSBM) shares a similar setting with the Stochastic Block Model, except that we take the predecessor sets into consideration.

Definition 5 (Dynamic Stochastic Block Model).

Let be a set of functions, where . A Dynamic Stochastic Block Model with parameters is an undirected graph of n nodes with the adjacency matrix A, where each A_ij ∈ {0, 1}. Each node i is in one of the two classes {+1, -1}. The distribution of true labels is uniform, i.e., each label Y_i* is assigned +1 with probability 1/2, and -1 with probability 1/2.

The adjacency matrix A is distributed as follows: if Y_i* = Y_j* then A_ij is Bernoulli with parameter ; otherwise A_ij is Bernoulli with parameter .
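The sketch below generates such a graph edge by edge in lexicographic order. Since the exact Bernoulli parameters of Definition 5 are elided above, they are abstracted into a user-supplied function edge_prob(y_i, y_j, history); passing a function that ignores the history recovers the static SBM.

import numpy as np

def sample_dsbm(n, edge_prob, rng=None):
    # Visit pairs (i, j) in lexicographic order; each edge's Bernoulli parameter
    # may depend on the endpoint labels and on all previously generated edges.
    rng = np.random.default_rng(rng)
    y = rng.choice([+1, -1], size=n)
    a = np.zeros((n, n), dtype=int)
    history = {}
    for i in range(n):
        for j in range(i + 1, n):
            prob = edge_prob(y[i], y[j], history)
            a[i, j] = a[j, i] = rng.binomial(1, prob)
            history[(i, j)] = a[i, j]
    return y, a

# Example: edges become slightly more likely as the graph gets denser.
denser = lambda yi, yj, h: min(1.0, (0.5 if yi == yj else 0.1) + 0.001 * sum(h.values()))
y_true, adj = sample_dsbm(50, denser)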

The goal is to recover labels Ŷ that are equal to the true labels Y*, given the observation of A. We are interested in the information-theoretic limits. Thus, we define the Markov chain Y* → A → Ŷ. Using Fano’s inequality and Lemma 1, we obtain the following results.

Theorem 3.

In a Dynamic Stochastic Block Model with parameters with , if

then we have that for any algorithm that a learner could use for picking , the probability of error is greater than or equal to .

3.2 Dynamic Latent Space Model

The Dynamic Latent Space Model (DLSM) shares a similar setting with the Latent Space Model, except that we take the predecessor sets into consideration.

Definition 6 (Dynamic Latent Space Model).

Let be a set of functions, where . A Dynamic Latent Space Model with parameters is an undirected graph of n nodes with the adjacency matrix A, where each A_ij ∈ {0, 1}. Each node i is in one of the two classes {+1, -1}. The distribution of true labels is uniform, i.e., each label Y_i* is assigned +1 with probability 1/2, and -1 with probability 1/2.

For every node i, nature generates a latent d-dimensional vector z_i according to a Gaussian distribution that depends on its label Y_i*.

The adjacency matrix A is distributed as follows: A_ij is Bernoulli with a parameter determined by the latent vectors z_i and z_j and by the edges in its predecessor set.

The goal is to recover labels Ŷ that are equal to the true labels Y*, given the observation of A. Notice that we do not have access to the latent vectors z_i. We are interested in the information-theoretic limits. Thus, we define the Markov chain Y* → A → Ŷ. Using Fano’s inequality and Lemma 1, our analysis leads to the following theorem.

Theorem 4.

In a Dynamic Latent Space Model with parameters , if

then we have that for any algorithm that a learner could use for picking , the probability of error is greater than or equal to .

4 Directed Network Models

In this section we analyze the information-theoretic limits for two directed network models: the Directed Preferential Attachment Model (DPAM) and the Directed Small-world Model (DSWM). In contrast to previous sections, here we consider directed graphs.

Note that in social networks such as Twitter, the graph is directed. That is, each user follows other users. Users that are followed by many others (i.e., nodes with high out-degree) are more likely to be followed by new users. This is the case of popular singers, for instance. Additionally, a new user will follow people with similar preferences. This is referred to in the literature as homophily. In our case, a node with a positive label is more likely to follow nodes with positive labels, and vice versa.

The two models defined in this section will require an expected number of in-neighbors m for each node. In order to guarantee this in a setting in which nodes decide to connect to other nodes independently, one should guarantee that the probability of choosing each of the nodes is less than or equal to 1.

The above motivates an algorithm that takes a vector w in the simplex (i.e., w_i ≥ 0 and Σ_i w_i = 1) and produces another vector p with Σ_i p_i = m and 0 ≤ p_i ≤ 1 for all i. Consider the following optimization problem:

subject to

which is solved by the following algorithm:

input : vector w where w_i ≥ 0 and Σ_i w_i = 1,
        expected number of in-neighbors m
output : vector p where Σ_i p_i = m and 0 ≤ p_i ≤ 1 for all i
for each node i do
      p_i ← m · w_i;
end for
for each i such that p_i > 1 do
      e ← p_i − 1;
      p_i ← 1;
      Distribute e evenly across all j such that p_j < 1;
end for
Algorithm 1 -simplex

One important property that we will use in our proofs is that Σ_i p_i = m, as well as p_i ≤ 1 for all i.
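A numpy sketch of this step is shown below. It batches the redistribution and repeats it whenever redistribution pushes new entries above 1, which may differ in bookkeeping from Algorithm 1, but it produces a vector with the same two properties (the entries sum to m and none exceeds 1) whenever m ≤ n.

import numpy as np

def cap_and_rescale(w, m, tol=1e-12):
    # Scale the simplex weights w by m, cap entries at 1, and move the excess
    # evenly onto the uncapped entries; repeat until no entry exceeds 1.
    p = m * np.asarray(w, dtype=float)
    while True:
        over = p > 1.0
        if not over.any():
            break
        excess = float(np.sum(p[over] - 1.0))
        p[over] = 1.0
        under = p < 1.0
        if excess <= tol or not under.any():
            break
        p[under] += excess / under.sum()
    return p

# Example: heavily skewed weights get capped while the total stays equal to m.
print(cap_and_rescale(np.array([0.7, 0.2, 0.05, 0.05]), m=2))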

4.1 Directed Preferential Attachment Model

Here we consider a Directed Preferential Attachment Model (DPAM) based on the classic Preferential Attachment Model [6]. While in the classic model every node has exactly m neighbors, in our model the expected number of in-neighbors is m.

Definition 7 (Directed Preferential Attachment Model).

Let m be a positive integer with . Let be the homophily parameter. A Directed Preferential Attachment Model with parameters is a directed graph of n nodes with the adjacency matrix A, where each A_ij ∈ {0, 1}. Each node i is in one of the two classes {+1, -1}. The distribution of true labels is uniform, i.e., each label Y_i* is assigned +1 with probability 1/2, and -1 with probability 1/2.

Nodes 1 through m are not connected to each other, and they all have an in-degree of . For each node i from m+1 to n, nature first generates a weight w_j for each node j < i, where , and . Then every node j < i connects to node i with probability p_j, where p is computed from w as in Algorithm 1.
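The sketch below shows one way such a process can be simulated, reusing the cap_and_rescale function from the sketch given after Algorithm 1. The attachment weights used here (current popularity, i.e., number of followers, plus a homophily bonus s for same-label nodes) are an assumed form for illustration; Definition 7's exact weights are elided above.

import numpy as np

def sample_dpam(n, m, s, rng=None):
    # In this sketch a[j, i] = 1 means node i follows node j (edge from followed
    # node to follower), so popular nodes have high out-degree and each newly
    # arrived node gains an expected in-degree of m.
    rng = np.random.default_rng(rng)
    y = rng.choice([+1, -1], size=n)
    a = np.zeros((n, n), dtype=int)
    for i in range(m, n):
        popularity = 1.0 + a[:i, :].sum(axis=1)          # followers of nodes 0..i-1
        bonus = np.where(y[:i] == y[i], 1.0 + s, 1.0)    # homophily bonus (assumed form)
        w = popularity * bonus
        w = w / w.sum()                                  # weights on the simplex
        p = cap_and_rescale(w, m)                        # probabilities summing to m
        for j in range(i):
            a[j, i] = rng.binomial(1, p[j])
    return y, a

y_true, adj = sample_dpam(n=200, m=5, s=0.5)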

The goal is to recover labels Ŷ that are equal to the true labels Y*, given the observation of A. We are interested in the information-theoretic limits. Thus, we define the Markov chain Y* → A → Ŷ. Using Fano’s inequality, we obtain the following results.

Theorem 5.

In a Directed Preferential Attachment Model with parameters , if

then we have that for any algorithm that a learner could use for picking , the probability of error is greater than or equal to .

4.2 Directed Small-world Model

Here we consider a Directed Small-world Model (DSWM) based on the classic Small-world Model [28]. While in the classic model every node has exactly m neighbors, in our model the expected number of in-neighbors is m.

Definition 8 (Directed Small-world Model).

Let m be a positive integer with . Let be the homophily parameter. Let be the mixture parameter with . A Directed Small-world Model with parameters is a directed graph of n nodes with the adjacency matrix A, where each A_ij ∈ {0, 1}. Each node i is in one of the two classes {+1, -1}. The distribution of true labels is uniform, i.e., each label Y_i* is assigned +1 with probability 1/2, and -1 with probability 1/2.

Nodes 1 through m are not connected to each other, and they all have an in-degree of . For each node i from m+1 to n, nature first generates a weight w_j for each node j < i, where , and , . Then every node j < i connects to node i with probability p_j, where p is computed from w as in Algorithm 1.

The goal is to recover labels Ŷ that are equal to the true labels Y*, given the observation of A. We are interested in the information-theoretic limits. Thus, we define the Markov chain Y* → A → Ŷ. Using Fano’s inequality, we obtain the following results.

Theorem 6.

In a Directed Small-world Model with parameters , if

then we have that for any algorithm that a learner could use for picking , the probability of error is greater than or equal to .

5 Concluding Remarks

Our research could be extended in several ways. First, our models only involve two clusters. For the Latent Space Model and dynamic models, it might be interesting to analyze the case with multiple clusters. Some more complicated models involving Markovian assumptions, for example, the Dynamic Social Network in Latent Space model [25], can also be analyzed. While this paper focused on information-theoretic limits for the recovery of various models, it would be interesting to provide a polynomial-time learning algorithm with finite-sample statistical guarantees, for some particular models such as the Latent Space Model.

References

  • [1] Emmanuel Abbe. Community detection and stochastic block models: recent developments. arXiv preprint arXiv:1703.10146, 2017.
  • [2] Emmanuel Abbe, Afonso S Bandeira, and Georgina Hall. Exact recovery in the stochastic block model. IEEE Transactions on Information Theory, 62(1):471–487, 2016.
  • [3] Emmanuel Abbe and Colin Sandon. Community detection in general stochastic block models: Fundamental limits and efficient algorithms for recovery. In Foundations of Computer Science (FOCS), 2015 IEEE 56th Annual Symposium on, pages 670–688. IEEE, 2015.
  • [4] Edoardo M Airoldi, David M Blei, Stephen E Fienberg, and Eric P Xing. Mixed membership stochastic blockmodels. Journal of Machine Learning Research, 9(Sep):1981–2014, 2008.
  • [5] Brian Ball, Brian Karrer, and Mark EJ Newman. Efficient and principled method for detecting communities in networks. Physical Review E, 84(3):036103, 2011.
  • [6] Albert-László Barabási and Réka Albert. Emergence of scaling in random networks. Science, 286(5439):509–512, 1999.
  • [7] Irineo Cabreros, Emmanuel Abbe, and Aristotelis Tsirigos. Detecting community structures in Hi-C genomic data. In Information Science and Systems (CISS), 2016 Annual Conference on, pages 584–589. IEEE, 2016.
  • [8] Yudong Chen and Jiaming Xu. Statistical-computational phase transitions in planted models: The high-dimensional setting. In International Conference on Machine Learning, pages 244–252, 2014.
  • [9] Melissa S Cline, Michael Smoot, Ethan Cerami, Allan Kuchinsky, Nerius Landys, Chris Workman, Rowan Christmas, Iliana Avila-Campilo, Michael Creech, Benjamin Gross, et al. Integration of biological networks and gene expression data using Cytoscape. Nature protocols, 2(10):2366, 2007.
  • [10] Thomas M Cover and Joy A Thomas. Elements of information theory. John Wiley & Sons, 2012.
  • [11] Yash Deshpande, Emmanuel Abbe, and Andrea Montanari. Asymptotic mutual information for the binary stochastic block model. In Information Theory (ISIT), 2016 IEEE International Symposium on, pages 185–189. IEEE, 2016.
  • [12] Santo Fortunato. Community detection in graphs. Physics reports, 486(3-5):75–174, 2010.
  • [13] Michelle Girvan and Mark EJ Newman. Community structure in social and biological networks. Proceedings of the National Academy of Sciences, 99(12):7821–7826, 2002.
  • [14] Anna Goldenberg, Alice X Zheng, Stephen E Fienberg, Edoardo M Airoldi, et al. A survey of statistical network models. Foundations and Trends® in Machine Learning, 2(2):129–233, 2010.
  • [15] Simon Heimlicher, Marc Lelarge, and Laurent Massoulié. Community detection in the labelled stochastic block model. NIPS Workshop on Algorithmic and Statistical Approaches for Large Social Networks, 2012.
  • [16] Peter D Hoff, Adrian E Raftery, and Mark S Handcock. Latent space approaches to social network analysis. Journal of the American Statistical Association, 97(460):1090–1098, 2002.
  • [17] Varun Jog and Po-Ling Loh. Information-theoretic bounds for exact recovery in weighted stochastic block models using the Renyi divergence. IEEE Allerton Conference on Communication, Control, and Computing, 2015.
  • [18] Bomin Kim, Kevin Lee, Lingzhou Xue, and Xiaoyue Niu. A review of dynamic network models with latent variables. arXiv preprint arXiv:1711.10421, 2017.
  • [19] Greg Linden, Brent Smith, and Jeremy York. Amazon.com recommendations: Item-to-item collaborative filtering. IEEE Internet Computing, 7(1):76–80, 2003.
  • [20] Arakaparampil M Mathai and Serge B Provost. Quadratic forms in random variables: theory and applications. Dekker, 1992.
  • [21] Elchanan Mossel, Joe Neeman, and Allan Sly. Stochastic block models and reconstruction. arXiv preprint arXiv:1202.1499, 2012.
  • [22] Mark EJ Newman, Duncan J Watts, and Steven H Strogatz. Random graph models of social networks. Proceedings of the National Academy of Sciences, 99(suppl 1):2566–2572, 2002.
  • [23] Hussein Saad, Ahmed Abotabl, and Aria Nosratinia. Exact recovery in the binary stochastic block model with binary side information. IEEE Allerton Conference on Communication, Control, and Computing, 2017.
  • [24] Narayana P Santhanam and Martin J Wainwright. Information-theoretic limits of selecting binary graphical models in high dimensions. IEEE Transactions on Information Theory, 58(7):4117–4134, 2012.
  • [25] Purnamrita Sarkar and Andrew W Moore. Dynamic social network analysis using latent space models. In Advances in Neural Information Processing Systems, pages 1145–1152, 2006.
  • [26] Minh Tang, Daniel L Sussman, Carey E Priebe, et al. Universally consistent vertex classification for latent positions graphs. The Annals of Statistics, 41(3):1406–1430, 2013.
  • [27] Wei Wang, Martin J Wainwright, and Kannan Ramchandran. Information-theoretic bounds on model selection for gaussian markov random fields. In Information Theory Proceedings (ISIT), 2010 IEEE International Symposium on, pages 1373–1377. IEEE, 2010.
  • [28] Duncan J Watts and Steven H Strogatz. Collective dynamics of ‘small-world’ networks. Nature, 393(6684):440, 1998.
  • [29] Rui Wu, Jiaming Xu, Rayadurgam Srikant, Laurent Massoulié, Marc Lelarge, and Bruce Hajek. Clustering and inference from pairwise comparisons. In ACM SIGMETRICS Performance Evaluation Review, volume 43, pages 449–450. ACM, 2015.
  • [30] Jiaming Xu, Laurent Massoulié, and Marc Lelarge. Edge label inference in generalized stochastic block models: from spectral theory to impossibility results. In Conference on Learning Theory, pages 903–920, 2014.
  • [31] Bin Yu. Assouad, Fano, and Le Cam. Festschrift for Lucien Le Cam, 423:435, 1997.
  • [32] Se-Young Yun and Alexandre Proutiere. Optimal cluster recovery in the labeled stochastic block model. In Advances in Neural Information Processing Systems, pages 965–973, 2016.

Appendix A Static Network Models

A.1 Proof of Theorem 1

Proof.

We use F to denote the hypothesis class. By Fano’s inequality [10], we have, for any estimator Ŷ,

(1)
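For reference, the standard form of Fano's inequality invoked here (stated in our notation for the true labels Y*, the observation A, and the hypothesis class F; the exact expression in equation (1) is not reproduced) is

\Pr\big[\hat{Y} \neq Y^*\big] \;\ge\; 1 - \frac{I(Y^*; A) + \log 2}{\log |\mathcal{F}|},

where I(Y*; A) is the mutual information between the true labels and the observed adjacency matrix. The rest of the proof upper bounds I(Y*; A) so that this right-hand side stays above the target error probability.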

Our main step is to give an upper bound on the mutual information I(Y*; A) in order to apply Fano’s inequality. Using the pairwise KL-based bound from [31, p. 428], we have

(2)

In the equations above, (a) holds because A is symmetric and the A_ij’s are independent and identically distributed given Y*, while (b) holds because, for every pair of labelings, we have

given that . Next we use formula (16) from [8]:

(3)

By Fano’s inequality [10] and by plugging (3) and (2) into (1), for the probability of error to be at least , it is sufficient for the lower bound to be greater than 1/2. Therefore

By solving for in the inequality above, we obtain that if

(4)

then we have that . ∎

A.2 Proof of Corollary 1

Proof.

Starting from the probability distribution of the adjacency matrix A, we have

Thus, A_ij is Bernoulli with parameter . We denote , and . Plugging p and q into (4) and requiring the probability of error to be at least , we obtain that if

(5)

then we have that . ∎

A.3 Moment Generating Function of Multivariate Gaussian Distribution

We introduce the following result from [20, p. 40], which will later be used in the proofs of Theorems 2 and 4.

Lemma 2.

Let , , . Then the moment generating function of is given by

Furthermore, if is symmetric positive definite, we have
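For completeness, the standard form of this result (cf. [20]) is as follows; the notation here is ours and may differ from the elided statement above. For x ~ N(μ, Σ) and the quadratic form Q = x^T A x with A symmetric,

\mathbb{E}\big[e^{tQ}\big] \;=\; \big|I - 2tA\Sigma\big|^{-1/2}
\exp\Big\{ -\tfrac{1}{2}\, \mu^\top \big[ I - (I - 2tA\Sigma)^{-1} \big] \Sigma^{-1} \mu \Big\},

valid for every t such that \Sigma^{-1} - 2tA is positive definite.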

A.4 Proof of Theorem 2

First, we start with a required technical lemma:

Lemma 3.

The model considered in Definition 3 is equivalent to the following Modified Latent Space Model:

A Modified Latent Space Model with parameters is an undirected graph of n nodes with the adjacency matrix A, where each A_ij ∈ {0, 1}. Each node i is in one of the two classes {+1, -1}. The distribution of true labels is uniform, i.e., each label Y_i* is assigned +1 with probability 1/2, and -1 with probability 1/2.

For every node i, nature generates a latent d-dimensional vector z_i according to a Gaussian distribution.

The adjacency matrix A is distributed as follows: if Y_i* = Y_j* then A_ij is Bernoulli with parameter ; otherwise A_ij is Bernoulli with parameter .

Proof.

We claim that the Modified Latent Space Model is equivalent to the classic Latent Space Model considered in Definition 3, by defining for every node . Since , we have . As a result,

  • if , is Bernoulli with parameter ,

  • if , is Bernoulli with parameter ,

  • if , is Bernoulli with parameter .

This completes the proof of the lemma. ∎

Now, we provide the proof of the main theorem.

Proof.

Since and are independent, we have the following equalities

(6)

Now we are interested in the expectations and . By definition we know

and