Strategies for online inference of model-based clustering in large and growing networks
Abstract
In this paper we adapt online estimation strategies to perform model-based clustering on large networks. Our work focuses on two algorithms, the first based on the SAEM algorithm, and the second on variational methods. These two strategies are compared with existing approaches on simulated and real data. We use the methods to decipher the connection structure of the political websphere during the 2008 US political campaign. We show that our online EM-based algorithms offer a good trade-off between precision and speed when estimating parameters for mixture distributions in the context of random graphs.
DOI: 10.1214/10-AOAS359. The Annals of Applied Statistics, Vol. 4, No. 2 (2010), pp. 687-714.
Running head: Online methods for model-based clustering on networks. Supported in part by French Agence Nationale de la Recherche Grant NeMo ANR-08-BLAN-0304-01.
Hugo Zanghi (hugo.zanghi@exalead.com), Franck Picard (picard@biomserv.univ-lyon1.fr), Vincent Miele (miele@biomserv.univ-lyon1.fr) and Christophe Ambroise (cambroise@genopole.cnrs.fr)
Keywords: graph clustering, EM algorithms, online strategies, web graph structure analysis.
1 Introduction
Analyzing networks has become an essential part of a number of scientific fields. Examples include such widely differing phenomena as power grids, protein-protein interaction networks and friendship networks. In this work we focus on particular networks made of political Weblogs. With the impact of new social network websites like Myspace and Facebook, the web has an increasing influence on the political debate. As an example, Adamic and Glance (2005) showed that blogging played an important role in the political debate of the 2004 US Presidential Election. Although only a small minority of Americans actually used these Weblogs, their influence extended far beyond their readership, as a result of their interactions with national mainstream media. In this article we propose to uncover the connection structure of the political websphere during the US political campaign in 2008. This data set consists of a one-day snapshot of 130,520 links and 1870 manually classified websites (676 liberal, 1026 conservative and 168 independent), where nodes are connected if there exists a citation from one to another.
Many strategies have been developed to study network structure and topology. A distinction can be made between model-free [Newman (2006); Ng, Jordan and Weiss (2002)] and model-based methods, with connections between parametric and nonparametric models [Bickel and Chen (2009)]. Among model-based methods, model-based clustering has provided an efficient way to summarize complex network structures. The basic idea of these strategies is to model the distribution of connections in the network, considering that nodes are spread among an unknown number of connectivity classes which are themselves unknown. This generalizes model-based clustering to network data, and various modeling strategies have been considered. Nowicki and Snijders (2001) propose a mixture model on dyads that belong to some relational alphabet, Daudin, Picard and Robin (2008) propose a mixture on edges, Handcock, Raftery and Tantrum (2007) consider continuous hidden variables, and Airoldi et al. (2005, 2007, 2008) consider both mixed membership and stochastic block structures.
In this article our concern is not to assess nor to compare the appropriateness of these different models; rather, we focus on a computational issue that is shared by most of them. Indeed, even if the modeling strategies are diverse, EM-like algorithms constitute a common core of the estimation strategy [Dempster, Laird and Rubin (1977); Snijders and Nowicki (1997)], and this algorithm is known to be slow to converge and to be very sensitive to the size of the data set. This issue should be put into perspective with a new challenge inherent to the analysis of network data sets: the development of optimization strategies with a reasonable speed of execution which can deal with networks composed of tens of thousands of nodes, if not more. To this extent, Bayesian strategies are limited, as they may not handle networks with more than a few hundred nodes [Snijders and Nowicki (1997); Nowicki and Snijders (2001)] or a few thousand [Airoldi et al. (2008)], and heuristic-based algorithms may not be satisfactory from the statistical point of view [Newman and Leicht (2007)]. Variational strategies have been proposed as well [Airoldi et al. (2005); Daudin, Picard and Robin (2008)], but they suffer from the same limitations as EM. Thus, the new question we address in this work is: how can one perform computationally efficient model-based clustering on very large networks, or on networks that grow over time?
Online algorithms constitute an efficient alternative to classical batch algorithms when the data set grows over time. The application of such strategies to mixture models has been studied by many authors [Titterington (1984); Wang and Zhao (2006)]. Typical clustering algorithms include the online $k$-means algorithm [MacQueen (1967)]. More recently, Liu et al. (2006) modeled Internet traffic using a recursive EM algorithm for the estimation of Poisson mixture models. However, an additional difficulty of mixture models for random graphs is that the computation of $P(Z \mid X)$, the distribution of the hidden label variables $Z$ conditionally on the observations $X$, cannot be factorized due to conditional dependency [Daudin, Picard and Robin (2008)]. In this work we consider two alternative strategies to deal with this issue. The first one is based on the Monte Carlo simulation of $P(Z \mid X)$, leading to a stochastic version of the EM algorithm (Stochastic Approximation EM, SAEM) [Delyon, Lavielle and Moulines (1999)]. The second one is the variational method proposed by Daudin, Picard and Robin (2008), which consists in a mean-field approximation of $P(Z \mid X)$. This strategy has also been proposed by Latouche, Birmele and Ambroise (2008) and by Airoldi et al. (2008) in the Bayesian framework.
In this article we begin by describing the blog database from the 2008 US presidential campaign. Then we present the MixNet model proposed by Daudin, Picard and Robin (2008), and we compare the model with its principal competitors in terms of modeling strategies, using the Sampson (1968) data set for illustration. We derive the online framework to estimate the parameters of this mixture using SAEM or variational methods. Simulations are used to show that online methods are very effective in terms of computation time, parameter estimation and clustering efficiency. These simulations integrate both fixed-size and increasing-size networks, for which online methods have been designed. Finally, we uncover the connectivity structure of the 2008 US Presidential websphere using the proposed online variational algorithm for the MixNet model.
2 Data presentation
In this community extraction experiment, we used a data set obtained on November 7, 2007 by the French company RTGI (Information Networks, Territories and Geography) using a methodology similar to Fouetillou (2007). This data set consists of a one-day snapshot of over two thousand websites, one thousand of which featured in two online directories: http://wonkosphere.com and http://www.politicaltrends.info. The first site provides a manual classification, and the second an automatic classification based on text analysis. From this seed of a thousand sites, a web crawler [Drugeon (2005)] collected a maximum of 100 pages per hostname, which is in general the site name. External links were examined to check the connectivity with visited and unvisited websites. If websites were still unvisited, and if there existed a minimal path of distance less than two between a hostname belonging to the seed and these websites, then the web crawler collected them.
Using this seed-extension method, 200,000 websites were collected, and a network of websites was created where nodes represent hostnames (a hostname contains a set of pages) and edges represent hyperlinks between different hostnames. Multiple links between two different hostnames were collapsed into a single link. Intra-domain links were taken into account if hostnames were not similar. For this web network, we computed an authority score [Kleinberg (1999)] and a TF/IDF keyword score [Salton, Wong and Yang (1975)] on focused words (political entities), in order to identify, respectively, high-quality websites (high authority scores) and websites centered on these topics (high scores on a political corpus). 870 new websites emerged from these two criteria. They were checked by experts and the validity of the seed confirmed. The final tally was 130,520 links and 1870 sites: 676 liberal, 1026 conservative and 168 independent. The data can be downloaded at http://stat.genopole.cnrs.fr/sg/Members/hzanghi.
3 A mixture model for networks
3.1 Model and notation
We model the observed network of websites by a random graph $G = (V, E)$, where $V$ denotes the set of fixed vertices representing the blogs, and $E$ the set of edges which represent hyperlinks between blogs. These random edges are modeled by $X = \{X_{ij}\}_{(i,j) \in V^2}$, a set of random variables coding for the nature of the connection between blogs $i$ and $j$. The nature of the links can be discrete or continuous, and we consider a model with distributions belonging to the exponential family. In the MixNet model we suppose that nodes are spread among $Q$ hidden classes, and we denote by $Z_{iq}$ the indicator variable such that $Z_{iq} = 1$ if blog $i$ belongs to class $q$. We denote by $Z = (Z_1, \ldots, Z_n)$ the vector of random independent label variables such that

\[ Z_i \sim \mathcal{M}(1; \alpha), \]

with $\alpha = (\alpha_1, \ldots, \alpha_Q)$ the vector of proportions for the $Q$ classes. In the following, formulas are valid for the case of directed and undirected networks. Self-loops have not been introduced for simplicity of notation, but have been implemented in the MixNet software.
Conditional distribution
MixNet is defined using the conditional distribution of edges given the labels of the nodes. The $X_{ij}$'s are supposed to be conditionally independent:

\[ P(X \mid Z) = \prod_{(i,j)} \prod_{q,l} P(X_{ij}; \eta_{ql})^{Z_{iq} Z_{jl}}, \]

and $P(\cdot\,; \eta_{ql})$ is supposed to belong to the regular exponential family, with natural parameter $\eta_{ql}$:

\[ P(x; \eta_{ql}) = b(x) \exp\bigl( \eta_{ql}^{T} h(x) - a(\eta_{ql}) \bigr), \]

where $h(x)$ is the vector of sufficient statistics, $a(\eta_{ql})$ a normalizing constant and $b(x)$ a given function. Consequently, the conditional distribution of the graph is also from the exponential family:

\[ P(X \mid Z) = \Bigl[ \prod_{(i,j)} b(X_{ij}) \Bigr] \exp\Bigl( \sum_{q,l} \eta_{ql}^{T} \sum_{(i,j)} Z_{iq} Z_{jl} h(X_{ij}) - \sum_{q,l} a(\eta_{ql}) \sum_{(i,j)} Z_{iq} Z_{jl} \Bigr). \]

Examples of such distributions are provided in the Appendix.
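As an illustration, the generative model above can be sketched for the Bernoulli (binary edge) case. The function name and parameterization below are ours, not part of the MixNet software; this is a minimal sketch assuming an undirected graph without self-loops.

```python
import random

def simulate_mixnet_bernoulli(n, alpha, pi, seed=0):
    """Simulate an undirected binary MixNet graph.

    alpha: class proportions (length Q); pi[q][l]: probability of an edge
    between a node of class q and a node of class l (assumed symmetric).
    Returns (labels, X) where X is an n x n 0/1 adjacency matrix.
    """
    rng = random.Random(seed)
    Q = len(alpha)
    # each node draws its class independently from Multinomial(1; alpha)
    labels = [rng.choices(range(Q), weights=alpha)[0] for _ in range(n)]
    X = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            # edges are conditionally independent given the class labels
            if rng.random() < pi[labels[i]][labels[j]]:
                X[i][j] = X[j][i] = 1
    return labels, X

labels, X = simulate_mixnet_bernoulli(60, [0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]])
```

With the strongly assortative `pi` used here, the simulated graph exhibits the modular structure that the clustering algorithms of the following sections aim to recover.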
Model comparison
Many strategies have been considered to construct models for clustering in networks. Variations mainly concern the nature of the link between nodes and the definition of nodes' memberships. For instance, the stochastic block-structure model [Snijders and Nowicki (1997); Nowicki and Snijders (2001)] considers links that are dyads $(X_{ij}, X_{ji})$, whereas MixNet considers a model on edges only. Consequently, MixNet implicitly assumes the independence of $X_{ij}$ and $X_{ji}$ conditionally on the latent structure. As for the definition of the label variables, the Mixed Membership Stochastic Blockmodel (MMSB) has been proposed to describe the interactions between objects playing multiple roles [Airoldi et al. (2008)]. Consequently, the hidden variables of their model can stand for more than one group for one node, whereas MixNet only considers one label per node. Airoldi et al. (2008) also model the sparsity of the network. This could be done in the context of MixNet by introducing a Dirac mass at zero for the conditional distribution of edges. Differences among approaches also concern the statistical framework that defines subsequent optimization strategies. The Bayesian setting has been the framework chosen by many authors, as it allows the integration of prior information and hierarchical structures [Airoldi et al. (2008)]. On the contrary, our approach does not necessarily rely on stochastic strategies, meaning that each run provides the same set of parameters. However, the likelihood of mixture models in general is multimodal, which is a problem for both approaches: in MCMC procedures it leads to potential label switching issues, and the variational EM may converge to local maxima.
As the models and the statistical frameworks are different, clustering results are likely to be very different as well. In order to illustrate this point, we deviate from the political blog data and use the small data set of Sampson (1968), which is also used in Airoldi et al. (2008). This data set describes relational data between monks in a monastery ("whom do you like" data). Figure 1 shows 3 possible partitionings of this graph: the first one corresponds to Sampson's observations, the second one is the result of the MMSB model as presented in Airoldi et al. (2008), and the third one is provided by MixNet. Individual labels are provided in Table 1. As already noted by the authors, the MMSB classes overlap with the relational categories provided by Sampson. This is not the case for MixNet, which uncovers classes of connectivity that show strong inter-connections but very few intra-connections. Since one link exists when a monk likes another, MixNet clusters are made of monks that like the same sets of other monks. For instance, the blue cluster is made of two monks that like each other and that like all monks assigned to the green cluster. The monks in the green cluster do not seem to like each other, but prefer the monks assigned to the red and purple clusters. As a consequence, both approaches provide different information and are very complementary, with more modeling possibilities in the MMSB framework due to the mixed membership and the possibility of integrating prior information. The relevance of MixNet results has been published elsewhere [Picard et al. (2009)], and our aim in this article is not to set the models in competition. Our point is rather computational: we aim at providing an efficient method to perform model-based clustering on large networks. We use the MixNet model as a basis for development, but the online framework we develop could be applied to the MMSB model as well.
Monk  Sampson label  MMSB label  MixNet label 

Ambrose  LO  Gray  Green 
Boniface  YT  Violet  Green 
Mark  YT  Violet  Purple 
Winfrid  YT  Violet  Green 
Elias  O  Orange  Red 
Basil  O  Orange  Green 
Simplicius  O  Orange  Green 
Berthold  LO  Gray  Purple 
John  YT  Violet  Purple 
Victor  W  Gray  Red 
Bonaventure  LO  Gray  Blue 
Amand  W  Orange  Green 
Louis  LO  Gray  Red 
Albert  YT  Violet  Red 
Ramuald  W  Gray  Blue 
Peter  LO  Gray  Red 
Gregory  YT  Violet  Red 
Hugh  YT  Violet  Purple 
Joint distribution
Since MixNet is defined by its conditional distribution, we first check that the joint distribution also belongs to the exponential family. Writing

\[ P(X, Z; \theta) = P(X \mid Z; \eta) \, P(Z; \alpha), \]

and noting that both factors are of exponential-family form, we obtain the required factorization, which proves the claim. The sufficient statistics of the complete-data model are the number of nodes in the classes ($\sum_i Z_{iq}$), the characteristics of the between-group links ($\sum_{(i,j)} Z_{iq} Z_{jl} h(X_{ij})$, through function $h$ that can stand for the number of between-group links or for the intensity of the connections in the case of edges with Poisson or Gaussian distributions), and the product of frequencies between classes ($\sum_{(i,j)} Z_{iq} Z_{jl}$). In the following we aim at estimating $\theta = (\alpha, \eta)$.
3.2 Sufficient statistics and online recursion
Online algorithms are incremental algorithms which recursively update parameters, using current parameters and new observations. We introduce the following notation. Let us denote by $X^{(n)}$ the adjacency matrix of the data when $n$ nodes are present, and by $Z^{(n)}$ the associated labels. A convenient notation in this context is $X_{(n+1)} = \{X_{i,n+1}, i \leq n\}$, which denotes all the edges related to node $n+1$. Note that the addition of one node leads to the addition of $n$ potential connections.
The use of online methods is based on the additivity of the sufficient statistics with respect to the addition of a new node. For the class counts, we can show that

\[ \sum_{i \leq n+1} Z_{iq} = \sum_{i \leq n} Z_{iq} + Z_{n+1,q}, \]

with a similar recursion for the between-group link statistics. Then, if we define $H_{ql}^{(n)} = \sum_{(i,j),\, i,j \leq n} Z_{iq} Z_{jl} h(X_{ij})$, we get

\[ H_{ql}^{(n+1)} = H_{ql}^{(n)} + \sum_{i \leq n} Z_{iq} Z_{n+1,l} \, h(X_{i,n+1}) + \sum_{i \leq n} Z_{il} Z_{n+1,q} \, h(X_{n+1,i}). \tag{1} \]
Those equations will be used for parameter updates in the online algorithms.
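The recursion above can be sketched in code for the Bernoulli case, where $h(X_{ij}) = X_{ij}$. The data structure and function names are ours (hypothetical), and the statistics are stored over ordered node pairs, so both symmetric entries of $H$ are updated when an edge arrives.

```python
def init_stats(Q):
    # N[q]: number of nodes in class q; H[q][l]: sum of h(X_ij) over ordered pairs
    return {'N': [0] * Q, 'H': [[0] * Q for _ in range(Q)]}

def add_node(stats, z_new, z_old, x_new):
    """Online update of the sufficient statistics when node n+1 arrives.

    z_new: class of the incoming node; z_old: classes of the n existing nodes;
    x_new[i]: edge value between the new node and existing node i
    (Bernoulli case, so h(X_ij) = X_ij).
    """
    stats['N'][z_new] += 1
    for zi, x in zip(z_old, x_new):
        # ordered-pair convention: update both directions of H
        stats['H'][zi][z_new] += x
        stats['H'][z_new][zi] += x
    return stats

stats = init_stats(2)
stats = add_node(stats, 0, [], [])          # first node, class 0
stats = add_node(stats, 1, [0], [1])        # second node, class 1, linked to node 1
stats = add_node(stats, 0, [0, 1], [1, 0])  # third node, class 0, linked to node 1
```

Only the statistics, never the full history of the graph, need to be kept in memory, which is what makes the online strategy scale to large networks.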
3.3 Likelihoods and online inference
Existing estimation strategies are based on maximum likelihood, and algorithms related to EM are used for optimization purposes. The aim is to maximize the conditional expectation of the complete-data log-likelihood

\[ Q(\theta) = \mathbb{E}\bigl[ \log P(X, Z; \theta) \mid X \bigr], \]

and the main difficulty is that $P(Z \mid X)$ cannot be factorized and needs to be approximated [Daudin, Picard and Robin (2008)]. A first strategy to simplify the problem is to consider a classification EM-based strategy [Celeux and Govaert (1992)]. In this setting, label variables are considered as non-random and are replaced by their prediction (0 or 1). This is a generalization of the $k$-means algorithm, for which the problem of computing $P(Z \mid X)$ is set aside. This strategy has been the subject of a previous work [Zanghi, Ambroise and Miele (2008)]. It is known to give biased estimates, but is very efficient from a computational point of view.
As alternatives to this strategy, we propose two different approaches: one based on the Stochastic Approximation EM algorithm [Delyon, Lavielle and Moulines (1999)], which approximates $P(Z \mid X)$ using Monte Carlo simulations, and one based on the so-called variational approach, which consists of approximating $P(Z \mid X)$ by a more tractable distribution on the hidden variables. In their online versions, these algorithms optimize the criterion sequentially, while nodes are added. To this extent, we introduce the notation $\theta^{(n)}$ for the current parameter estimate, with $n$ being either the number of nodes or the increment of the algorithm, which are identical in the online context.
4 Stochastic approximation EM for network mixture
4.1 A short presentation of SAEM
An original way of estimating the parameters of the MixNet model is to approximate the expectation of the complete-data log-likelihood using Monte Carlo simulations, corresponding to the Stochastic Approximation EM algorithm [Delyon, Lavielle and Moulines (1999)]. In situations where maximizing $Q(\theta)$ is not in a simple closed form, the SAEM algorithm maximizes an approximation $\hat{Q}(\theta)$ computed using standard stochastic approximation theory, such that

\[ \hat{Q}^{(m)}(\theta) = \hat{Q}^{(m-1)}(\theta) + \gamma_m \bigl[ Q^{(m)}(\theta) - \hat{Q}^{(m-1)}(\theta) \bigr], \tag{2} \]

where $m$ is an iteration index, $(\gamma_m)$ a sequence of positive step sizes, and where $Q^{(m)}(\theta)$ is obtained by Monte Carlo integration: it is a simulated version of the expectation of the complete log-likelihood using the posterior $P(Z \mid X; \theta^{(m)})$. Each iteration of the algorithm is broken down into three steps:
1. Simulation of the missing data. This can be achieved using Gibbs sampling of the posterior $P(Z \mid X; \theta^{(m)})$. The result at iteration $m$ is a set of $S$ realizations of the latent class data: $Z^{(m,1)}, \ldots, Z^{(m,S)}$.

2. Stochastic approximation of $Q(\theta)$ using equation (2), with

\[ Q^{(m)}(\theta) = \frac{1}{S} \sum_{s=1}^{S} \log P\bigl(X, Z^{(m,s)}; \theta\bigr). \tag{3} \]

3. Maximization of $\hat{Q}^{(m)}(\theta)$ according to $\theta$.
As regards the online version of the algorithm, the number of iterations usually coincides with $n$, the number of nodes of the network. Although it is possible to go further in the iterative process to improve the estimates, it is rarely necessary, since the results obtained after $n$ iterations are usually reliable. This can be explained by the fact that the MixNet model is robust to sampling. The information in the network is indeed highly redundant, and a reliable estimation of the network parameters can be obtained with a small sample (a few dozen) of the nodes using a classical batch algorithm. When $n$ is large, using an online algorithm with all the nodes is similar to performing many iterations of a batch algorithm on a small sample.
4.2 Simulation of $Z$ in the online context
We use Gibbs sampling, which is applicable when the joint distribution is not known explicitly but the conditional distribution of each variable is known. Here we generate a sequence of samples approaching $P(Z \mid X; \theta)$ using the conditional distributions $P(Z_i \mid Z_{-i}, X; \theta)$, where $Z_{-i}$ stands for the classes of all nodes except node $i$. The sequence of samples is a Markov chain, and the stationary distribution of this Markov chain corresponds precisely to the joint distribution we wish to obtain. In the online context, we consider only one simulation to simulate the class of the last incoming node, using

\[ P\bigl(Z_{n+1} \mid Z^{(n)}, X^{(n+1)}; \theta^{(n)}\bigr). \]
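For the Bernoulli case, the conditional draw of the incoming node's class can be sketched as follows; the function name is ours, and the computation is done on the log scale for numerical stability. This is a sketch under the assumption that only the new node's label is resampled, as in the online setting described above.

```python
import math
import random

def sample_new_label(alpha, pi, z_old, x_new, rng=random.Random(0)):
    """Gibbs draw of the incoming node's class given all other labels.

    P(Z_new = q | rest) is proportional to
      alpha[q] * prod_i pi[z_i][q]^x_i * (1 - pi[z_i][q])^(1 - x_i)
    (Bernoulli MixNet). Returns the sampled class and the posterior probs.
    """
    Q = len(alpha)
    logp = []
    for q in range(Q):
        lp = math.log(alpha[q])
        for zi, x in zip(z_old, x_new):
            p = pi[zi][q]
            lp += x * math.log(p) + (1 - x) * math.log(1 - p)
        logp.append(lp)
    m = max(logp)                      # log-sum-exp normalization
    w = [math.exp(v - m) for v in logp]
    s = sum(w)
    probs = [wi / s for wi in w]
    u, c = rng.random(), 0.0           # inverse-CDF draw
    for q, pq in enumerate(probs):
        c += pq
        if u <= c:
            return q, probs
    return Q - 1, probs

q_new, probs = sample_new_label([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]],
                                z_old=[0, 0, 0], x_new=[1, 1, 1])
```

In the example, a node connected to three class-0 nodes under an assortative model is assigned to class 0 with probability close to one.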
4.3 Computing $\hat{Q}(\theta)$ in the online context
As regards the online version of the SAEM algorithm, the difference between the old and the new complete-data log-likelihoods may be expressed as

\[ \Delta^{(n+1)}(\theta) = \log P\bigl(X^{(n+1)}, Z^{(n+1)}; \theta\bigr) - \log P\bigl(X^{(n)}, Z^{(n)}; \theta\bigr), \]

where $Z_{n+1}$ is the label of the added simulated vertex. Recall that in the online framework, the label of the new node has been sampled from the Gibbs sampler described in Section 4.2. Consequently, only one possible label is considered in this equation. Then a natural way to adapt equation (2) to the online context is to approximate

\[ Q^{(n+1)}(\theta) - \hat{Q}^{(n)}(\theta) \]

by

\[ \Delta^{(n+1)}(\theta). \]

Indeed, this quantity corresponds to the difference between the log-likelihood of the original network and the log-likelihood of the new network including the additional node. Notice that the larger the network, the larger its associated complete expected log-likelihood. Thus, $\Delta^{(n+1)}(\theta)$ becomes smaller and smaller compared to $\hat{Q}^{(n)}(\theta)$ as $n$ increases. The decreasing step $\gamma$ is thus set to one in this online context. We propose the following update equation for the stochastic online EM computation of the MixNet conditional expectation:

\[ \hat{Q}^{(n+1)}(\theta) = \hat{Q}^{(n)}(\theta) + \Delta^{(n+1)}(\theta), \]

where $Z_{n+1}$ is drawn from the Gibbs sampler.
4.4 Maximizing $\hat{Q}(\theta)$, and parameter updates
The principle of online algorithms is to modify the current parameter estimate using the information added by a newly available node and its corresponding connections to the already existing network. Maximizing $\hat{Q}^{(n+1)}(\theta)$ according to $\theta$ is straightforward and produces the maximum likelihood estimates for iteration $n+1$. Here we have proposed a simple version of the algorithm by setting the number of simulations to one ($S = 1$). In this context, the difference between $\hat{Q}^{(n+1)}(\theta)$ and $\hat{Q}^{(n)}(\theta)$ involves only the terms of the complete log-likelihood which are a function of node $n+1$, and the resulting update equations are written in terms of the sufficient statistics defined in the previous section. Notice that updating a function of the parameter of interest is often more convenient in an online context than directly considering this parameter of interest. Examples of parameter updates are given for the Bernoulli and Poisson cases in the Appendix.
Once all the nodes in the network have been visited (or are known), the parameters can be further improved and the complete loglikelihood better approximated by continuing with the SAEM algorithm described above.
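To make the maximization step concrete, here is a sketch of the closed-form updates for the Bernoulli case, computed from sufficient statistics stored over ordered node pairs (the dictionary layout and function name are ours, not from the MixNet software):

```python
def mle_bernoulli(stats):
    """MLE updates from the current sufficient statistics (Bernoulli case).

    alpha_q = N_q / n;  pi_ql = H_ql / D_ql, where D_ql is the number of
    ordered node pairs with classes (q, l), i.e. D_qq = N_q * (N_q - 1)
    and D_ql = N_q * N_l for q != l.
    """
    N, H = stats['N'], stats['H']
    n, Q = sum(N), len(N)
    alpha = [Nq / n for Nq in N]
    pi = [[0.0] * Q for _ in range(Q)]
    for q in range(Q):
        for l in range(Q):
            D = N[q] * (N[l] - (1 if q == l else 0))
            pi[q][l] = H[q][l] / D if D > 0 else 0.0
    return alpha, pi

# two nodes in each class; the two class-0 nodes are linked, as are
# the two class-1 nodes (H counts ordered pairs, hence the 2's)
stats = {'N': [2, 2], 'H': [[2, 0], [0, 2]]}
alpha, pi = mle_bernoulli(stats)
```

Because the statistics are updated recursively as nodes arrive, this maximization has constant cost per class pair, independent of the network size.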
5 Application of online algorithm to variational methods
Variational methods constitute an alternative to SAEM. Their principle is to approximate the intractable distribution $P(Z \mid X; \theta)$ by a newly introduced distribution on $Z$, denoted by $R(Z)$. This new distribution is then used to optimize $J(R, \theta)$, an approximation (lower bound) of the incomplete-data log-likelihood $\log P(X; \theta)$, defined such that

\[ J(R, \theta) = \log P(X; \theta) - \mathrm{KL}\bigl( R(Z) \,\|\, P(Z \mid X; \theta) \bigr), \]

with $\mathrm{KL}$ being the Kullback-Leibler divergence between probability distributions [Jordan et al. (1999)]. One must then choose the form of $R$, and a product of Multinomial distributions is natural in the case of MixNet, with $R(Z) = \prod_i m(Z_i; \tau_i)$ and the constraint $\sum_q \tau_{iq} = 1$. In this case, the form of $J$ is

\[ J(R, \theta) = \mathbb{E}_R\bigl[ \log P(X, Z; \theta) \bigr] + \mathcal{H}(R), \]

with $\mathbb{E}_R[\log P(X, Z; \theta)]$ an approximation of the conditional expectation of the complete-data log-likelihood, and $\mathcal{H}(R)$ the entropy of the approximate posterior distribution of $Z$.
The implementation of variational methods in online algorithms relies on the additivity property of $J$ when nodes are added. This property is straightforward: $\mathbb{E}_R[\log P(X, Z; \theta)]$ is additive thanks to equation (1) [because $P(X, Z; \theta)$ is factorized], and $\mathcal{H}(R)$ is also additive, since the hidden variables are supposed independent under $R$ and the entropy of independent variables is additive. The variational algorithm is very similar to an EM algorithm, with the E-step being replaced by a variational step which aims at updating the variational parameters; a standard M-step then follows. In the following, we give the details of these two steps in the case of a variational online algorithm.
5.1 Online variational step
When a new node is added, it is necessary to compute its associated variational parameters $\tau_{n+1} = (\tau_{n+1,1}, \ldots, \tau_{n+1,Q})$. If we consider all the other $\tau_i$, $i \leq n$, as known, the $\tau_{n+1,q}$ are obtained by differentiating the criterion

\[ J(R, \theta) + \sum_i \lambda_i \Bigl( \sum_q \tau_{iq} - 1 \Bigr), \]

where the $\lambda_i$ are the Lagrangian parameters associated with the constraints on the $\tau_i$. Since function $J$ is additive according to the nodes, the calculation of its derivative according to $\tau_{n+1,q}$ gives

\[ \tau_{n+1,q} \propto \alpha_q \prod_{i \leq n} \prod_l P(X_{i,n+1}; \eta_{ql})^{\tau_{il}}, \tag{4} \]

which leads, after normalization, to

\[ \sum_q \tau_{n+1,q} = 1. \tag{5} \]
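In the Bernoulli case, the variational update of the incoming node's parameters can be sketched as below; the function name is ours, and the normalization is carried out with a log-sum-exp for stability. Compared with the Gibbs step of the SAEM algorithm, the hard labels of the existing nodes are replaced by their variational probabilities.

```python
import math

def variational_step(alpha, pi, tau_old, x_new):
    """Variational update of tau for the incoming node (Bernoulli MixNet).

    tau_new[q] is proportional to
      alpha[q] * exp( sum_i sum_l tau_old[i][l] *
                      [x_i log pi[q][l] + (1 - x_i) log(1 - pi[q][l])] ).
    """
    Q = len(alpha)
    log_tau = []
    for q in range(Q):
        lt = math.log(alpha[q])
        for ti, x in zip(tau_old, x_new):
            for l in range(Q):
                p = pi[q][l]
                lt += ti[l] * (x * math.log(p) + (1 - x) * math.log(1 - p))
        log_tau.append(lt)
    m = max(log_tau)                       # normalize via log-sum-exp
    w = [math.exp(v - m) for v in log_tau]
    s = sum(w)
    return [wi / s for wi in w]

tau = variational_step([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]],
                       tau_old=[[1.0, 0.0], [1.0, 0.0]], x_new=[1, 1])
```

Here a node linked to two nodes that are (almost) surely in class 0 receives a posterior heavily concentrated on class 0.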
5.2 Maximization/update step
To maximize the approximated expectation of the complete log-likelihood according to $\theta$, we solve

\[ \frac{\partial}{\partial \theta} \, \mathbb{E}_R\bigl[ \log P\bigl(X^{(n+1)}, Z^{(n+1)}; \theta\bigr) \bigr] = 0. \tag{6} \]

Differentiating equation (6) with respect to parameters $\alpha$ (under the constraint $\sum_q \alpha_q = 1$) gives the following update equation:

\[ \alpha_q^{(n+1)} = \frac{1}{n+1} \sum_{i \leq n+1} \tau_{iq}. \]

The other update equation is obtained by considering parameters $\eta_{ql}$, and using the expectation under $R$ of the sufficient statistics $H_{ql}^{(n)} = \sum_{(i,j),\, i,j \leq n} Z_{iq} Z_{jl} h(X_{ij})$ of Section 3.2. Thanks to equation (1), which gives the relationship between sufficient statistics at two successive iterations, parameters can be computed recursively using the update of the expectation of the sufficient statistics, such that

\[ \mathbb{E}_R\bigl[ H_{ql}^{(n+1)} \bigr] = \mathbb{E}_R\bigl[ H_{ql}^{(n)} \bigr] + \sum_{i \leq n} \tau_{iq} \tau_{n+1,l} \, h(X_{i,n+1}) + \sum_{i \leq n} \tau_{il} \tau_{n+1,q} \, h(X_{n+1,i}). \]

Examples of parameter updates are given in the Appendix for both the Bernoulli and the Poisson distributions. Note the similarity of the formulas compared with the SAEM strategy: hidden variables are either simulated or replaced by their approximated conditional expectations (the variational parameters).
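The M-step above can be sketched for the Bernoulli case in its batch (non-recursive) form, which makes the weighting by variational parameters explicit; the function name is ours, and the $O(n^2 Q^2)$ loop is written for clarity rather than speed.

```python
def m_step(tau, X):
    """Variational M-step for the Bernoulli MixNet.

    alpha_q = mean over nodes of tau[i][q];
    pi_ql = sum_{i != j} tau_iq tau_jl X_ij / sum_{i != j} tau_iq tau_jl.
    """
    n, Q = len(tau), len(tau[0])
    alpha = [sum(tau[i][q] for i in range(n)) / n for q in range(Q)]
    pi = [[0.0] * Q for _ in range(Q)]
    for q in range(Q):
        for l in range(Q):
            num = den = 0.0
            for i in range(n):
                for j in range(n):
                    if i == j:
                        continue            # no self-loops
                    w = tau[i][q] * tau[j][l]
                    num += w * X[i][j]
                    den += w
            pi[q][l] = num / den if den > 0 else 0.0
    return alpha, pi

# two blocks of two nodes, edges only within blocks
tau = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]
X = [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]
alpha, pi = m_step(tau, X)
```

In the online algorithm the double sum is never recomputed from scratch: it is accumulated through the recursion on the expected sufficient statistics, which is what makes the per-node cost constant.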
6 Experiments
Motivations
Experiments are carried out to assess the trade-off established by online algorithms in terms of quality of estimation and speed of execution. We propose a two-step simulation study. We first report simulation experiments using synthetic data generated according to the assumed random graph model. In this first experiment we use a simple affiliation model to check precisely the quality of the estimates given by the online algorithms. Results are compared to the batch variational EM proposed by Daudin, Picard and Robin (2008) to assess the effect of the online framework on the estimation quality and on the speed of execution. In a second step, we use a real data set from the web as a starting point to simulate growing networks with complex structure, and to assess the performance of online methods on this type of network. An ANSI C implementation of the algorithms is available at http://stat.genopole.cnrs.fr/software/mixnet/, as well as an R package named MixeR (http://cran.rproject.org/web/packages/mixer/), along with public data sets. This software is currently used by the Constellations online application (http://constellations.labs.exalead.com/), which instantaneously extracts and visually explores the first hits of a given search request, taking advantage of the MixNet algorithm to reveal the connectivity information induced by hyperlinks.
6.1 Comparison of algorithms
Simulations setup
We simulate affiliation models with $\lambda$ and $\epsilon$ being the within- and between-group probabilities of connection, respectively. Five models are considered (Table 2). We set $\epsilon = 1 - \lambda$ to reduce the number of free parameters, with parameter $\lambda$ controlling the complexity of the model. Differences between models lie in their modular structure, which varies from no structure (almost the Erdős-Rényi model) to strong modular structure (low inter-module connectivity and strong intra-module connectivity, or strong inter-module connectivity and low intra-module connectivity). Figure 2 illustrates three kinds of connectivity, allowing models 1, 4 and 5 to be represented graphically. For each affiliation model we generate graphs with groups mixed in the same proportions. The number of nodes varies (from 100 to 2,000) to explore different sizes of graphs. We generate a total of 45 graph models, each being simulated 30 times.
Model  $\lambda$  $\epsilon$

1  0.3  0.7
2  0.35  0.65
3  0.4  0.6
4  0.5  0.5
5  0.9  0.1
Criteria of comparison
The comparison between algorithms is done using the bias and the mean square error of the estimators, to reflect their variability. We also use the adjusted Rand Index [Hubert and Arabie (1985)] to evaluate the agreement between the estimated and the actual partitions. Computing this index is based on a ratio between the number of node pairs belonging to the same and to different classes when considering the actual partition and the estimated partition. It is bounded above by 1, two identical partitions having an adjusted Rand Index equal to 1.
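The adjusted Rand Index can be computed directly from the contingency table of the two partitions; the sketch below uses the standard Hubert-Arabie formula (the function name is ours). Note that, unlike the raw Rand index, the adjusted index can be negative when the agreement is below what chance would produce.

```python
from math import comb
from collections import Counter

def adjusted_rand_index(labels_a, labels_b):
    """Adjusted Rand Index between two partitions of the same node set."""
    n = len(labels_a)
    pair_counts = Counter(zip(labels_a, labels_b))   # contingency table n_ij
    a = Counter(labels_a)                            # row sums
    b = Counter(labels_b)                            # column sums
    sum_ij = sum(comb(c, 2) for c in pair_counts.values())
    sum_a = sum(comb(c, 2) for c in a.values())
    sum_b = sum(comb(c, 2) for c in b.values())
    expected = sum_a * sum_b / comb(n, 2)            # chance agreement
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)

# the index is invariant to a relabeling of the classes
ari_identical = adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0])
```

The label-invariance shown in the example is what makes the index suitable for comparing an estimated clustering to a reference partition, where class numbers are arbitrary.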
Algorithms setup
In a first step we compare algorithms that are based on maximum likelihood estimation (MLE). The online SAEM and the online variational method we propose are compared with the variational method proposed in Daudin, Picard and Robin (2008) (batch MixNet in the sequel). We also add an online classification version (online CEM) to the comparison, since this strategy has been shown to reduce the computational cost as well [Zanghi, Ambroise and Miele (2008)]. To avoid initialization issues, each algorithm is started with the same strategy: multiple initialization points are proposed, and the best result is selected based on its likelihood. The number of clusters is chosen using the Integrated Classification Likelihood criterion, as proposed in Daudin, Picard and Robin (2008). The algorithms are stopped when the parameters are stable between two consecutive iterations. In a second step, we compare the MLE-based algorithms with other competitors: spectral clustering [Ng, Jordan and Weiss (2002)] and a community detection algorithm [Newman (2006)].
Estimators bias and MSE (Table 3)
A first result is that every algorithm provides estimators with negligible bias and variance for highly structured models (models 1, 2, 5, Table 3). The online framework shows its limitations when the structure of the network is less pronounced (model 3), as every online method shows a significant bias and low precision, whereas the batch MixNet behaves well. This limitation was expected, as the gain in computational burden has an impact on the complexity of structures that can be identified. Finally, among the online versions of the algorithm, the online variational method provides the best results on average in terms of bias and precision.
[Table 3: bias and root mean square error (RMSE) of the parameter estimators for models 1-5 under online SAEM, online variational, online CEM and batch MixNet; the numerical entries could not be recovered from the source.]
Model  Online SAEM  Online variational  Online CEM  Batch MixNet

1  0.98 (0.02)  0.98 (0.02)  0.98 (0.02)  0.99 (0.02)
2  0.96 (0.07)  0.97 (0.07)  0.97 (0.07)  0.98 (0.01)
3  0.13 (0.13)  0.10 (0.15)  0.25 (0.16)  0.85 (0.14)
4  0.00 (0.00)  0.00 (0.00)  0.00 (0.00)  0.00 (0.00)
5  1 (0.00)  1 (0.01)  1 (0.01)  1 (0.01)
Quality of partitions (Table 4)
We also focus on the Rand Index for each algorithm. Indeed, even if a poor estimation of $\theta$ comes with a small Rand Index (Table 4), good estimates do not always lead to correctly estimated partitions. An illustration is given with model 3, for which algorithms produce good estimates with a poor Rand Index, due to the non-modular structure of the network. As expected, the performance increases with the number of nodes (Table 5).
n  Online SAEM  Online variational  Online CEM

100  0.04  0.07  0.05
250  0.09  0.11  0.01
500  0.09  0.11  0.14
750  0.03  0.03  0.04
1000  0.01  0.01  9.37
2000  0.00  0.01  0.01

100  0.00  0.00  0.00
250  0.01  0.01  0.00
500  0.01  0.01  0.01
750  0.02  0.02  0.02
1000  0.41  0.43  0.40
2000  1.28  1.41  2.08

[Table 5: the labels of the two blocks of rows and the batch MixNet column could not be recovered from the source.]
Computational efficiency (Table 5)
Since the aim of online methods is to provide computationally efficient algorithms, the performance mentioned above should be put into perspective with the speed of execution of each algorithm. Indeed, Table 5 shows the strong gain in speed provided by online methods compared with the batch algorithm: the execution time is divided by 100 on networks with 2000 nodes, for instance. Table 5 also shows that there is no significant difference in execution speed among the online methods. Since the online variational method provides the best results in terms of estimation precision, with no significant difference from the other methods on partition quality or speed, this is the algorithm chosen for what follows.
Comparison with other algorithms (Table 6)
The above results show that a strong case may be made for the online variational algorithm when choosing between alternative clustering methods. Consequently, we shall now compare it with two suitable “rivals” for large networks: a basic spectral clustering algorithm [Ng, Jordan and Weiss (2002)], and one of the popular community detection algorithms [Newman (2006)]. The spectral clustering algorithm searches for a partition in the space spanned by the eigenvectors of the normalized Laplacian, whereas the community detection algorithm looks for modules which are defined by high intraconnectivity and low interconnectivity.
For our five models with arbitrarily fixed parameters, we ran these algorithms and computed the Rand Index for each of them. From Table 6 we see that our online variational algorithm always produces the best clustering of the nodes.
Model  Community detection  Spectral clustering  Online-variational
1      1.00 (0.00)          0.97 (0.14)          1.00 (0.00)
2      0.99 (0.01)          0.98 (0.00)          1.00 (0.00)
3      0.97 (0.02)          0.97 (0.00)          1.00 (0.00)
4      0.00 (0.00)          0.00 (0.00)          0.00 (0.00)
5      0.00 (0.00)          0.92 (0.19)          1.00 (0.00)
We generated networks using the MixNet data generating process, so these results correspond to what may be expected on networks that display a blockmodel structure: the online variational algorithm always yields the best node classification. Apart from model 4, the spectral algorithm is also fairly efficient, up to a slight bias, and it is consistently more accurate than the community algorithm, the latter failing completely when applied to model 5. Although the community algorithm appears less well adapted to these experiments, we shall see in the next section that it is particularly suitable when partitioning data sets whose nodes are densely interconnected.
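The Rand index used in these comparisons measures the fraction of node pairs on which two partitions agree, i.e., pairs that are either grouped together in both partitions or separated in both. A minimal sketch (an adjusted-for-chance variant is also common):

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Fraction of node pairs on which the two partitions agree:
    both together in each partition, or both apart in each."""
    pairs = list(combinations(range(len(labels_a)), 2))
    agree = sum(
        (labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
        for i, j in pairs
    )
    return agree / len(pairs)
```

Note that the index is invariant to relabeling of the clusters, which is why it is appropriate for comparing an estimated partition with a reference classification.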
6.2 Realistic networks growing over time
In this section we use a real network as a template to simulate a realistic complex structure. For this purpose, we use a French political blogosphere data set that consists of a sample of 196 political blogs from a single-day snapshot. This network was automatically extracted on October 14, 2006 and manually classified by the "Observatoire Presidentielle" project, a collaboration between RTGI SAS and Exalead that aims at analyzing the French presidential campaign on the web. In this data set, nodes represent hostnames (a hostname contains a set of pages) and edges represent hyperlinks between different hostnames. If several links exist between two different hostnames, we collapse them into a single edge. Note that intra-domain links are considered when the hostnames are not identical. Finally, in this experiment we treat edges as undirected, which is not realistic but does not affect the interpretation of the groups. Six known communities compose this network: Gauche (French Democrat), Divers Centre (Moderate party), Droite (French Republican), Ecologiste (Green), Liberal (supporters of economic liberalism) and, finally, Analysts. The data is provided within the MixeR package. This network presents an interesting organization, owing to the coexistence of several political parties and commentators, and its complex connectivity pattern is summarized by the MixNet parameters given in Figure 3.
As the algorithm is motivated by large data sets, we use the parameters estimated by MixNet on this French blogosphere as a realistic starting structure and generate networks that grow over time. We first simulate networks of 200 nodes from this model, then iterate by simulating their growth according to the same model, using the online algorithm to update the parameters sequentially. The result is striking: even on very large networks with 13,000 nodes and 13,000,000 edges, the online algorithm estimates the mixture parameters with negligible classification error in 6 minutes (Table 7). To our knowledge, this is the only algorithmic framework that allows model-based clustering to be performed on networks of this size.
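The growth mechanism can be sketched as follows: each arriving node draws its class from the mixture proportions and connects to every existing node with the corresponding class-pair probability. This is a generic MixNet (blockmodel) sampler with illustrative parameters, not the values estimated on the French blogosphere:

```python
import numpy as np

def grow_mixnet(alpha, pi, n_init, n_new, seed=0):
    """Simulate a MixNet network that grows over time.
    alpha: class proportions; pi: class-pair connection probabilities.
    Returns node classes z and an undirected edge list."""
    rng = np.random.default_rng(seed)
    z = list(rng.choice(len(alpha), size=n_init, p=alpha))
    edges = [(i, j) for i in range(n_init) for j in range(i)
             if rng.random() < pi[z[i]][z[j]]]
    for _ in range(n_new):                # growth phase: one node at a time
        c = rng.choice(len(alpha), p=alpha)
        new = len(z)
        edges += [(new, j) for j in range(new) if rng.random() < pi[c][z[j]]]
        z.append(c)
    return z, edges
```

In the online setting, each growth step would be followed by an update of the parameter estimates before the next node arrives, so that estimation keeps pace with the growing network.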
7 Application to the 2008 US Presidential WebSphere
Since its creation, and all the more since its recent social turn (Web 2.0), the World Wide Web has been a space where individuals use Internet technologies to talk, discuss and debate. This space can be seen as a directed graph in which pages and hyperlinks are represented by nodes and edges, respectively. Many studies of this graph, such as Broder et al. (2000), have introduced the key properties of the Web structure. This section, however, focuses on local studies, considering that the Web is formed of territories and communities with their own conversation leaders and participants [Ghitalla et al. (2003)]. Here, we define a territory as a group of websites concerned with the same topic, and a community as a group of websites in the same territory which may share the same opinion or the same link connectivity. One usually assumes that the existence of a hyperlink between two pages implies that they are content-related [Kleinberg (1999); Davison (2000)]. By exploring the exchanges of links between pages, one can actually draw the borders of web territories and communities.
Comparison with a community detection algorithm
A first step consists in comparing the results of MixNet with the community detection algorithm proposed by Newman (2006). If the political classification is used as a reference, the community algorithm produces better agreement with it than MixNet does (see Table 8). However, this comparison favors Newman's method, as the two approaches have different objectives. Indeed, the community algorithm aims at finding modules defined by high intra-connectivity and low inter-connectivity. Given that websites tend to link to one another in line with political affinities, the link topology corresponding to the manual classification naturally favors the community module definition. The objective function also helps to explain the community algorithm's suitability for this data set, since the quality of a partition in terms of Newman's modules is expressed by the modularity, which is maximized. This modularity is a scalar between -1 and 1 that measures the density of links inside communities as compared to links between communities [Newman (2006)]. When applying both algorithms to our political network, the community algorithm yields a modularity close to the manual partition modularity of 0.28, whereas the online variational algorithm yields a lower value. As MixNet classes do not necessarily take the form of modules, one might expect our approach to yield a modularity index that is not "optimal." Nevertheless, the two class definitions are complementary, and both are needed in order to give a global overview of a network: the community partition detects dense node connectivity, while the MixNet partition identifies nodes with similar connectivity profiles. However, as mentioned by Adamic and Glance (2005), the division between liberal and conservative blogs is "unmistakable," which is why it may be more interesting to uncover the structure of the two communities rather than merely to detect them.
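Newman's modularity can be computed directly from its definition, Q = (1/2m) Σ_ij (A_ij − k_i k_j / 2m) δ(c_i, c_j), where k_i is the degree of node i and m the number of edges. A small sketch for an undirected adjacency matrix:

```python
import numpy as np

def modularity(A, labels):
    """Newman's modularity: observed minus expected fraction of
    within-community edges, under a degree-preserving null model."""
    A = np.asarray(A, dtype=float)
    k = A.sum(axis=1)                    # node degrees
    two_m = k.sum()                      # twice the number of edges
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m
```

A partition of two disconnected cliques into their components, for instance, attains Q = 0.5, while assigning all nodes to one community always gives Q = 0.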
Nodes (previous + new)  Ave. edges   Ave. Rand  Ave. CPU time (s)
200                       3131.72
200 + 200                12,486.24
400 + 400                50,316.32
800 + 800               201,009.5
1600 + 1600             803,179.6
3200 + 3200           3,202,196
6400 + 6400          12,804,008
           Conservative  Independent  Liberal
Cluster 1  734           135          238
Cluster 2  290            26            8
Cluster 3    2             7          430
Interpreting MixNet results
MixNet first confirms what was already observed by Adamic and Glance (2005): the political websphere is partitioned according to political orientations. In addition, MixNet highlights the role of the main US online portals as the core of this websphere (Figure 4, C17). Political communities do not directly cite their opponents but communicate through nytimes.com, washingtonpost.com, cnn.com or msn.com, for instance (in C17). This central structure has two main implications: it confirms the political cyberbalkanization trend that was already observed in 2004, and it emphasizes the role of mass-media websites as political referees. Moreover, the connectivity pattern estimated by the model shows a particular affinity between the mass-media cluster and the liberal side, as connections are stronger toward the liberal part of the weblogs (Table 9).
[Table 9: estimated connectivities between the MixNet clusters, with rows and columns ordered by political side (IDs 17, 1, 2, 10, 9, 14, 16, 18, 19, 11, then 4, 6, 7, 8, 12, 13, 20, grouped into conservative and liberal sides); the numerical entries of the matrix are not recovered here.]