Analysis of large sparse graphs using regular decomposition of graph distance matrices
Statistical analysis of large sparse graphs is a challenging problem in data science due to high dimensionality and nonlinearity. This paper presents a fast and scalable algorithm for partitioning such graphs into disjoint groups based on observed graph distances from a set of reference nodes. The resulting partition provides a low-dimensional approximation of the full distance matrix, which helps to reveal global structural properties of the graph using only small samples of the distance matrix. The presented algorithm is inspired by the information-theoretic minimum description length principle. We investigate the performance of this algorithm on selected real data sets and on synthetic graph data sets generated using stochastic block models and power-law random graphs, together with analytical considerations for sparse stochastic block models with bounded average degrees.
Graphs are a useful abstraction of data sets with pairwise relations. For very large graph data sets, extracting structural properties and performing basic data processing tasks may be computationally infeasible using raw data stored as link lists, while the adjacency matrix may be too large to fit in central memory. One obvious obstacle to sampling is sparsity: if we pick two nodes at random, we usually observe no relation between them, so no meaningful low-dimensional model of the network can be built from such observations. This prevents using uniform sampling as a tool for learning large sparse graph structures.
Considerable progress towards statistical inference of sparse graphs has recently been achieved, cf. [1, 2] and references therein. Most of these methods are based on counting cycles and paths in the observed graph, possibly with some added randomness to split data and reduce noise sensitivity. Instead of cycle and path counts, here we suggest an alternative approach based on observed graph distances from a set of reference nodes to a set of target nodes. Such distances form a dense matrix. Of course, even in this case it may not be possible to obtain a complete matrix for very large networks. What is required is that for any given pair of nodes belonging to a sample, it is possible to obtain a relatively good estimate of the distance between them. This is itself a nontrivial task requiring an efficient solution, see e.g. . In recent experiments, good estimates of the distance matrix were reported even for billion-node graphs . Our sampling-based approach only requires a sparse sample of the full distance matrix. When the number of reference nodes is bounded, the overall computational complexity of the proposed algorithm is linear in the number of target nodes.
We discuss two different sampling schemes of the reference nodes. The first is uniform sampling, which is a feasible method for graphs with light-tailed degree distributions such as those generated by stochastic block models. The second sampling scheme is nonuniform and biased towards sampling nodes with high betweenness centrality, designed to be suitable for scale-free graphs with heavy-tailed degree distributions.
A crucial step is to obtain a low-rank approximation of the distance matrix based on its sample. For this we suggest using a suitable variant of the regular decomposition (RD) method developed in [6, 12, 13, 14, 16, 7]. RD can be used for dense graphs and matrices, and it shows good scalability and tolerance to noise and missing data. Because the observed distance matrix is dense, RD is well suited to it. The method permutes the rows of the matrix into a few groups so that, within each group, each column of the matrix is close to a constant. We call such row groups regular. The regular groups form a partition of the node set. Each group of the partition induces a subgraph, and together these subgraphs form a decomposition of the graph into subgraphs and connectivity patterns between them. This decomposition is the main output of the method. The hypothesis of this paper is that such a decomposition reveals the structure of large sparse graphs. For instance, it should reveal small but dense subgraphs within sparse graphs, as well as sets of similar nodes that form communities.
As a theoretical latent model we consider the stochastic block model (SBM). The SBM is an important paradigm in network research, see e.g. . Usually the SBM revolves around the concept of communities: well-connected subgraphs with only few links between them. We also look for other types of structures beyond the community structure, see also . For instance, in the case of web graphs, the Internet, peer-to-peer networks, etc., we would expect a quite different structure characterized, say, by a power-law degree distribution and a hierarchy of subnetworks forming tiers used for routing messages in the network. The proposed distance-based structuring might give valuable information about the large-scale structure of many real-life networks and scale to enormous network sizes.
Our approach is stimulated by Szemerédi’s regularity lemma , which indicates that large dense graphs admit a decomposition of the nodes into a bounded number of groups such that most group pairs look almost like random bipartite graphs. The structure encoded by the regularity lemma ignores all intra-group relations. In our regular decomposition, both inter-group and intra-group relations matter.
As a benchmark we consider the famous planted bipartition model . It is a random graph and a special case of the SBM. As ground truth, there are two communities with an equal number of nodes in each, and two parameters: the link probability between nodes inside each community, and the link probability between nodes in different communities. The links are drawn randomly and independently of each other. For such a model, it is known that there is a sharp transition, or ’critical point’, in the detectability of this structure, depending on the difference between the two parameters [8, 9]. The critical point is also located in the regime of very sparse graphs, where the expected degree is bounded. This example is suitable for testing our method because it has a ground truth, graph sparsity, bounded average degree, and a proven sharp threshold. The preliminary results we report here are promising. Our algorithm appears effective right up to the threshold in the large-graph limit. Moreover, simulations indicate that such a structure can be found from very sparse and bounded-size samples of the distance matrix.
Besides this benchmark, we demonstrate our method on real-life sparse networks. The first example is a Gnutella peer-to-peer file sharing network, and the second is the undirected graph of the Internet’s autonomous systems. Both have heavy-tailed degree distributions . These graphs are not enormous; however, we treat them as if they were very large, meaning that they are analyzed using only a small fraction of the full information in the distance matrix. The computations were run on a few nodes of an HPC cluster. Using this facility, with its cores and terabytes of memory, it will be possible to run experiments with much bigger data sets in the near future.
II Regular decomposition algorithm
II-A Communities and partition matrices
Consider a connected (finite, undirected) graph (or a strongly connected directed graph in a directed setting). If the original graph is not connected, we can first do a rough partitioning using the connected components. Here we assume that this simple task has already been carried out. Our goal is to partition a subset of nodes of the graph into disjoint nonempty sets called communities. Such a partition can be represented as an ordered list whose -th entry indicates the community of the -th node in . For convenience, we will also use an alternative representation of the partition as an -by- matrix with entries
Such a matrix has binary entries, unit row sums, and nonzero columns, and will here be called a partition matrix.
II-B Statistical model for the distance matrix
The partitioning algorithm presented here is based on observed distances from a set of reference nodes to a (possibly overlapping) set of target nodes. Let be the length of the shortest path from the -th reference node to the -th target node in the graph. The goal is to find a partition of the nodes such that the distances from any particular reference node to the nodes in a community are approximately equal, with minimal stochastic fluctuations. This modeling assumption can be quantified in terms of an -by- matrix with nonnegative integer entries representing the average distance from the -th reference node to nodes in community . A simple model of a distance matrix in this setting is to assume that all distances are stochastically independent random integers such that the distance from the -th reference node to a node in community follows a Poisson distribution with mean . This statistical model is parametrized by the -by- average distance matrix and the -by- partition matrix , and corresponds to a discrete probability density function. (We could omit certain terms from the product, but this does not make a big difference for large graphs.)
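The stripped-out formula can be reconstructed as the likelihood of independent Poisson distances; in our notation (which may differ from the paper's original symbols), $D_{ij}$ is the observed distance from reference node $i$ to target node $j$, $Z$ is the partition matrix, $\Lambda$ the matrix of group means, and $z_j$ the group of target node $j$:

```latex
P(D \mid \Lambda, Z)
  = \prod_{i=1}^{m}\prod_{j=1}^{n}\prod_{v=1}^{k}
    \left( e^{-\Lambda_{iv}}\,
           \frac{\Lambda_{iv}^{\,D_{ij}}}{D_{ij}!} \right)^{Z_{jv}},
\qquad
-\log P = \sum_{i,j}\Bigl(\Lambda_{i z_j} - D_{ij}\log \Lambda_{i z_j}
          + \log D_{ij}! \Bigr).
```

The $\log D_{ij}!$ term does not depend on the partition, so it can be dropped when optimizing over $Z$.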
This modeling assumption does not claim that the distance matrix actually follows any particular distribution; the question is about approximating the given distance matrix by a random matrix with best-fitting parameters. This particular model is used because it results in a simple program, as we see in Algorithm 1. We have also tested it in our previous works with various data, showing good practical performance [13, 14, 15].
Having observed a distance matrix , standard maximum likelihood estimation looks for and such that the above formula is maximized. For any fixed , maximizing with respect to the continuous parameters is easy. Differentiation shows that the map is concave and attains its unique maximum at where
is the observed average distance from the -th reference to nodes in community . As a consequence, a maximum likelihood estimate of is obtained by minimizing the function
subject to having unit row sums and nonzero column sums, where is given by (1).
II-C Recursive algorithm
Minimizing (2) is a nonlinear discrete optimization problem with an exponentially large input space of order . Hence an exhaustive search is not computationally feasible. The objective function can alternatively be written as , where
This suggests a way to find a local maximum: select a starting value for at random, and greedily update the rows of one by one as long as the value of the objective function decreases. A local update rule for is achieved by a mapping defined by where
Algorithm 1 describes a way to implement this method. It is in the spirit of the EM algorithm, where the averaging step corresponds to an E-step and the optimization step to an M-step. The algorithm iterates these steps, starting from a random initial partition matrix and recursively computing for . The runtime of the local update is , so that as long as the number of communities and the parameters are bounded, the algorithm finishes in linear time with respect to and and is hence well scalable to very large graphs. The output of Algorithm 1 is a local optimum. To approximate a global optimum, the parameter should be chosen as large as possible, within computational resources.
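As an illustration, here is a minimal NumPy sketch of this EM-style iteration with random restarts (the function name, the restart scheme, and the random initialization are our assumptions; the paper's Algorithm 1 may differ in details):

```python
import numpy as np

def regular_decomposition(D, k, n_init=10, n_iter=100, seed=0):
    """Fit the Poisson block model to an m-by-n distance matrix D
    (reference nodes x target nodes) by alternating an averaging
    (E) step and a reassignment (M) step; returns group labels
    for the n target nodes."""
    rng = np.random.default_rng(seed)
    m, n = D.shape
    best_z, best_cost = None, np.inf
    for _ in range(n_init):                      # random restarts
        z = rng.integers(k, size=n)              # random initial partition
        for _ in range(n_iter):
            # E-step: mean distance from each reference to each group
            Lam = np.ones((m, k))
            for g in range(k):
                cols = D[:, z == g]
                if cols.shape[1] > 0:
                    Lam[:, g] = np.maximum(cols.mean(axis=1), 1e-9)
            # M-step: reassign each target node to the group minimising
            # its Poisson cost  sum_i (Lam[i,g] - D[i,j] * log Lam[i,g])
            cost = Lam.sum(axis=0)[None, :] - D.T @ np.log(Lam)
            z_new = cost.argmin(axis=1)
            if np.array_equal(z_new, z):         # converged
                break
            z = z_new
        total = cost[np.arange(n), z].sum()
        if total < best_cost:                    # keep the best restart
            best_cost, best_z = total, z
    return best_z
```

On a toy matrix whose columns fall into two clearly separated groups, the returned labels recover the grouping up to a permutation of the labels.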
Finally, we describe how the rest of the nodes are classified into groups or communities, once the optimal partition for a given target group and reference group has been found. Let denote a node outside the original target group. First we must obtain the distances from this node to all reference nodes
Then the node is classified into group number according to
The time complexity of this task is dominated by the computation of the distances from all nodes to the reference nodes, because for bounded the above optimization is done in constant time. Using Dijkstra's algorithm, computing the distances from all nodes to the reference nodes takes . In a sparse graph, which we assume, . Thus, if is bounded, the overall time complexity is , which is only slightly over the best possible , needed just to list a partition. This is because the classification phase takes only time for all nodes.
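The classification step itself is a one-line Poisson argmin over the fitted means; a hypothetical sketch, with `Lam` denoting the fitted m-by-k mean distance matrix and `d` a node's distance vector to the m reference nodes:

```python
import numpy as np

def classify_node(d, Lam):
    """Assign a node with distance vector d (length m, distances to
    the m reference nodes) to the group g minimising the Poisson
    negative log-likelihood  sum_i (Lam[i, g] - d[i] * log Lam[i, g])."""
    cost = Lam.sum(axis=0) - d @ np.log(Lam)
    return int(cost.argmin())
```

Since the cost is evaluated for a bounded number of groups, each node is classified in constant time once its distances are known.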
II-D Estimating the number of groups
The regular decomposition algorithm presented in the previous section requires the number of groups as an input parameter. However, in most real-life situations this parameter is not known a priori and needs to be estimated from the observed data. The problem of estimating the number of groups can be approached by recasting the maximum likelihood problem in terms of the minimum description length (MDL) principle [17, 18], where the goal is to select, among a given set of models, the model which allows the minimum coding length for both the data and the model. When the set of models is the Poisson model family described in Sec. II-B, the -dependent part of the coding length equals the function given by (2), and an MDL-optimal partition , given , corresponds to the minimal coding length
It is not hard to see that is monotonically decreasing as a function of , and in MDL a balancing term, the model complexity, is added to select the model that best explains the observed data. However, in all of our experiments, the negative log-likelihood as a function of becomes essentially constant above some value . Such a knee point is used as an estimate of the number of groups in this paper. Thus we are using a very simplified version of MDL, which was found sufficient in our examples. A more accurate analysis should treat the model complexity in greater detail; some early work in this direction includes .
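A crude knee-point detector along these lines might look as follows (the tolerance-based stopping rule is our assumption; the paper selects the knee by inspecting the plot):

```python
import numpy as np

def knee_point(costs, tol=0.05):
    """Given negative log-likelihood values costs[0], costs[1], ...
    for k = 1, 2, ..., return the smallest k such that the improvement
    from k to k+1 falls below tol, relative to the total range."""
    costs = np.asarray(costs, dtype=float)
    drops = costs[:-1] - costs[1:]             # improvement from k to k+1
    scale = max(costs[0] - costs.min(), 1e-12)  # total achievable drop
    for k, d in enumerate(drops, start=1):
        if d / scale < tol:
            return k
    return len(costs)
```

For a cost curve that falls steeply and then flattens, the detector returns the k at which the flattening starts.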
III Theoretical considerations
III-A Planted partition model
A stochastic block model (SBM) with nodes and communities is a statistical model parametrized by a nonnegative symmetric -by- matrix and a -vector with entries in . The SBM generates a random graph where each node pair is linked with probability , independently of other node pairs. For simplicity, we restrict the analysis to the special case with communities where the link matrix is of the form
for some constants . This model, also known as the planted partition model, produces sparse random graphs with link density , and is a de facto benchmark for testing the performance of community detection algorithms. As usual, we assume that the underlying partition is such that both communities are approximately of equal size, so that the partition matrix satisfies . If , the two communities have a larger internal link density than the link density between them. A well-known result states that for , partially recovering the partition matrix from an observed adjacency matrix is possible if
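For concreteness, here is a sketch of a planted bipartition generator, assuming the common parametrisation with intra-community link probability a/n and inter-community probability b/n (the paper's exact scaling of the stripped parameters may differ):

```python
import numpy as np

def planted_partition(n, a, b, seed=0):
    """Adjacency matrix of a planted bipartition graph: two equal
    communities of n/2 nodes, intra-community link probability a/n
    and inter-community probability b/n (assumed parametrisation).
    Returns the adjacency matrix and the ground-truth labels."""
    rng = np.random.default_rng(seed)
    z = np.repeat([0, 1], n // 2)                 # ground-truth communities
    p = np.where(z[:, None] == z[None, :], a / n, b / n)
    # draw independent links for the strict upper triangle, then symmetrise
    upper = np.triu(rng.random((n, n)) < p, k=1)
    A = (upper | upper.T).astype(np.int8)
    return A, z
```

With a > b the realized intra-community link density exceeds the inter-community density, matching the community interpretation above.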
III-B Expected and realized distances
Our aim is to obtain analytical formulas for distances in a large graph generated from the SBM. This question was addressed in  using a spectral approach, where the limiting average distances were found. We need the next-to-leading term of the average distance. Although our calculations are not rigorous, it is well known that in the case of the classical random graph a similar approach produces a distance estimate that is asymptotically exact. That is why we believe that such an analysis makes sense in the case of the SBM as well.
To analyze distances, let us first investigate the growth of the neighborhoods from a given node as a function of the graph distance. Let us denote the communities by for . Fix a node and denote by the expected number of nodes in community at distance from . Note that each node has approximately neighbors in the same community and approximately neighbors in the other community. Moreover, due to sparsity, the graph is locally treelike, and therefore we get the approximations
Writing , this can be expressed in matrix form as where
As a result, with . The matrix has a pair of orthogonal eigenvectors with eigenvalues:
According to the spectral theorem, we can diagonalize the matrix and conclude that its powers are given by
As a result, the expected numbers of nodes of types at distance from a node of type are approximated by
Moreover, if , then
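Under the same assumed parametrisation (intra-community probability a/n, inter-community b/n, communities of size n/2), the mean offspring matrix of the local branching process and the Kesten–Stigum condition can be checked numerically; this parametrisation and the resulting threshold (a − b)² > 2(a + b) are standard for the planted bipartition model, though the paper's stripped symbols may differ:

```python
import numpy as np

def branching_matrix(a, b):
    """Expected number of new neighbours per exploration step,
    by community type, for the assumed parametrisation."""
    return np.array([[a / 2, b / 2],
                     [b / 2, a / 2]])

def above_ks_threshold(a, b):
    """Kesten-Stigum condition lam2**2 > lam1, i.e.
    (a - b)**2 > 2 * (a + b) for the planted bipartition model."""
    lam = np.linalg.eigvalsh(branching_matrix(a, b))  # ascending order
    lam1, lam2 = lam.max(), lam.min()   # (a+b)/2 and (a-b)/2 when a > b > 0
    return lam2 ** 2 > lam1
```

The eigenvalues are (a + b)/2 and (a − b)/2, so the neighbourhood sizes grow like powers of these two rates, consistent with the matrix-power expansion above.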
Next we want to find an estimate for the average distance (resp. ) from a node in to another node in (resp. ). We use the heuristic that the distances from a node to all nodes in the same group are well concentrated and close to each other. Under this assumption, we expect that and approximately solve the equations and .
We get the equations for the distances:
We are interested in the leading order of the difference for . Because, due to , and , we can use the following iterative solution scheme. For , we have:
As a result, the equation we want to iterate is:
By expanding the second logarithm in a power series in , we get the leading terms of the solution:
where is a constant and
A similar procedure yields:
Because , both and have the same limit .
We conjecture that above the Kesten–Stigum threshold (5), the cost function used by RD to partition the graph distance matrix of the giant component of a graph generated from the two-block SBM has a deep minimum corresponding to the correct partition. More precisely, the cost of misplacing one node from the correct partition grows to infinity as .
First we conjecture that the distance estimates of and found above are asymptotically equal to the expected distances in a random graph drawn from the planted partition model. For a node and nodes and ,
where and , corresponding to the approximations in the previous section. We also conjecture that for any , we have with high probability,
which is quite plausible if the first conjecture is true. The error term can be neglected if , which is equivalent to being above the Kesten–Stigum threshold (5), which we assume from now on. If all nodes of the graph are partitioned correctly, then the cost (3) of target node in community is approximately
If we switch the community to be then this cost changes into
As a result,
As a result, if , or equivalently (5) holds, the difference has an infinite limit. This heuristic derivation suggests that the regular decomposition algorithm is capable of reaching the fundamental limit of resolution of the planted bipartition model.
IV Experiments with simulated data
IV-A Planted partition model
We investigate empirically the performance of the regular decomposition algorithm on synthetic data sets generated by the planted partition model described in Sec. III-A. This is an instance of a very sparse graph with bounded degrees and only two groups. In this case we argue that uniform random sampling of reference nodes will do. Here it is possible to compute the full distance matrix up to sizes of nodes, so sampling is not necessary.
For our test, we generated a graph with parameters , , and . Another similar experiment was done with nodes. Next we computed the shortest paths between all pairs of nodes and formed a distance matrix . RD was able to detect the structure with around a 1 percent error rate. The average over one regular group shows that the distance has a quite high level of noise, see Fig. 1. The reason why the communities become indistinguishable is probably the increasing variance: below the threshold it is always too large, no matter how large is, while above the threshold the communities can be detected provided is large enough. This is the conclusion of experiments not shown in this work.
Next we did experiments with 10000 nodes. In this particular case our method appears to work better than a standard community detection algorithm of Girvan–Newman type; see Fig. 2 for a graphical presentation.
As a sanity check we also used a standard community detection algorithm from the Wolfram Mathematica library. It was not capable of finding the true communities, see Fig. 4. The RD algorithm using the -matrix was able to find the communities correctly, with only a handful of misclassified nodes.
IV-B Sampled distance matrices
To investigate experimentally how many reference nodes are needed to obtain an accurate partitioning of a set of target nodes, we sampled a set of reference nodes uniformly at random, and ran the regular decomposition algorithm on the corresponding -by- distance matrix.
It appears that even a modest sample of about reference nodes is enough for an almost error-free partitioning, see Fig. 3. With reference nodes, a set of target nodes can be accurately partitioned into communities using RD, with an error rate of less than . For larger sets of target nodes, the results appear similar. This suggests that such a method could work for very large graphs using this kind of sampling.
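The sampled m-by-n distance matrix can be built with one breadth-first search per reference node; a minimal sketch on an adjacency-list graph (function names are ours):

```python
from collections import deque

def bfs_distances(adj, source):
    """Distances from source to every node of a graph given as an
    adjacency list (dict: node -> list of neighbours); unreachable
    nodes keep distance None."""
    dist = {v: None for v in adj}
    dist[source] = 0
    q = deque([source])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if dist[w] is None:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def sampled_distance_matrix(adj, refs, targets):
    """m x n matrix of graph distances from each reference node to
    each target node, via one BFS per reference node."""
    return [[bfs_distances(adj, r)[t] for t in targets] for r in refs]
```

Each BFS costs time linear in the number of links, so a bounded reference set keeps the whole sampling step linear in the graph size.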
The regular decomposition algorithm also produces an estimated -by- average distance matrix . This model can be used to classify all nodes of the graph in linear time. To do this, we compute each node's distances to the reference nodes, compute the negative log-likelihood for the two groups based on , and place the node into the class with the smaller negative log-likelihood. These computations take only constant time per node, hence the linear scaling.
As a conclusion, we conjecture that for very large and sparse networks, distance matrix RD could be an option for studying community structures.
The RD method seems to have better resolving power than community detection algorithms based on the adjacency matrix, and it can work with sparse samples of data, thus scaling to extremely large networks. For a rather flat topology, the uniform sampling method for the distance matrix might be sufficient.
IV-C Preferential attachment models
The degree distributions of many social, physical, and biological networks have heavy tails resembling a power law. For testing network algorithms and protocols on variable instances of realistic graphs, synthetic random graph models that generate degrees according to power laws have been developed .
The main purpose of this exercise is to test the sampling approach against the full analysis. We used an instance of a preferential attachment model (Barabási–Albert random graph) with nodes. The construction starts from a triangle; then nodes are added one by one. Each incoming node makes random links to existing nodes, with the probability of linking to a node proportional to its degree (preferential attachment). The result is somewhat comparable to the Gnutella network. However, in the Gnutella network, instead of hubs we found some more complicated dense parts.
To achieve scalability, instead of using full distance information between all nodes, we wish to restrict to distances to a small number of reference nodes. The main problem with sampling reference nodes is that high-degree core nodes are unlikely to show up in uniformly random samples. This is why we decided to investigate the following nonuniform sampling scheme. The set of reference nodes was generated as the set of nodes appearing on shortest paths between a randomly chosen set of 100 pairs of nodes. Distances to such reference nodes of high betweenness centrality are a strong indicator of the distance between any two nodes, because most short paths traverse the central nodes. Next we ran the regular decomposition algorithm with reference nodes to partition the set of target nodes into blocks. We get quite a similar result as the one for the entire distance matrix, see Fig. 6.
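A sketch of this path-based reference sampling (the function names and the per-pair BFS are our assumptions; any shortest-path routine would do):

```python
import random
from collections import deque

def shortest_path(adj, s, t):
    """One shortest path from s to t by BFS (None if unreachable)."""
    prev = {s: None}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            break
        for w in adj[u]:
            if w not in prev:
                prev[w] = u
                q.append(w)
    if t not in prev:
        return None
    path = [t]
    while prev[path[-1]] is not None:   # walk back to the source
        path.append(prev[path[-1]])
    return path[::-1]

def path_sampled_references(adj, n_pairs=100, seed=0):
    """Reference set biased towards high-betweenness nodes: all nodes
    appearing on shortest paths between randomly chosen node pairs."""
    rng = random.Random(seed)
    nodes = list(adj)
    refs = set()
    for _ in range(n_pairs):
        s, t = rng.sample(nodes, 2)
        p = shortest_path(adj, s, t)
        if p:
            refs.update(p)
    return refs
```

On a hub-and-spoke topology the hub lies on essentially every sampled path, so it is picked up even though uniform sampling would usually miss it.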
V Experiments with real data
V-A Gnutella network
We studied a Gnutella peer-to-peer network  with nodes representing hosts and directed links representing connections between the hosts. The graph is sparse, since the link density is just about . We extracted the largest strongly connected component, which contains nodes, and ran the regular decomposition algorithm on the corresponding full distance matrix () for values of the number of communities in the range . From the corresponding plot (Fig. 7) of the negative log-likelihood function we decided that is a valid choice.
Fig. 8 illustrates the inter-community structure of the graph partitioned into communities, and Fig. 9 describes the subgraphs induced by the communities. The induced subgraphs are internally quite different from each other. The high-degree core-like parts form their own communities and play a central role in forming paths through the network. Together these two figures provide a low-dimensional summary of the network as a weighted directed graph on nodes, with self-loops and with weights corresponding to link densities in both directions.
V-B Internet autonomous systems
The next example is a topology graph of Internet’s Autonomous Systems  obtained from traceroute measurements, with around million nodes and million undirected links. This graph was analyzed using a HPC cluster.
We used a simplified scheme to analyze this graph, dictated by limited time and by our wish to test some heuristic ideas for speeding up regular decomposition even further. First we computed shortest paths between a hundred randomly selected pairs of nodes. Then the nodes appearing most frequently on those shortest paths were selected as reference nodes. These nodes also appeared at the top of the link list provided by the source ; that is why we assume that such an importance ordering of nodes is used in this source data set. Next we took the top nodes from the source list and uniformly random nodes from the set of all nodes. A distance matrix from the reference nodes to the selected target nodes was computed. Then the regular decomposition algorithm was run on this distance matrix for different values of . From the negative log-likelihood plot, the optimal number of communities was estimated to be . As a result, we get a partition of the selected 5000 nodes into communities.
To enlarge the communities we used the following heuristic: for each node belonging to one of the communities, we include all neighbors of that node in the same group. This can be justified, since such neighbors should have distance patterns very similar to those of the root node. In this way a large proportion of nodes, more than 30 percent of all nodes, were included in the communities. The result is partially shown in Fig. 10; some of the groups were very large, with around nodes, so only part of them are plotted.
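The enlargement heuristic is a single pass over the labelled nodes; a minimal sketch (names are ours):

```python
def expand_groups(adj, labels):
    """Extend a partial node labelling (dict: node -> group) by giving
    every unlabelled neighbour of a labelled node the same group as
    that labelled node. Nodes labelled in the input keep their label."""
    out = dict(labels)
    for v, g in labels.items():
        for w in adj[v]:
            out.setdefault(w, g)   # only fill in, never overwrite
    return out
```

A single pass costs time proportional to the number of links incident to the labelled nodes, so the heuristic preserves the overall linear scaling.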
The found subgraphs are structurally heterogeneous and thus informative. For comparison, most subgraphs induced by random samples of 1000 nodes contained no links at all in our experiments.
VI Conclusions and future work
This paper introduced a new approach for partitioning graphs using observed distances instead of the usual path and cycle counts. By design, the algorithm easily scales to very large data sets, being linear in the number of target nodes to be partitioned. The first experiments presented here with real and synthetic data sets suggest that this method might be quite accurate, and possibly capable of reaching the Kesten–Stigum threshold. However, to be convinced of this, more detailed theoretical studies and more extensive numerical experiments are needed. We also need to quantitatively estimate the accuracy of the low-dimensional approximation in synthetic cases such as random power-law graphs. Spectral methods utilizing the distance matrix as a basis of network analysis are of broader interest, see [19, 20]. We are also interested in relating our concept to graph limits in the case of sparse networks , and in extending the analytical results to sparse random graph models with nontrivial clustering [27, 28, 29]. We aim to study stochastic block models with more than two groups, and the actual distance distributions in such random graphs.
We will also look for real-life applications of our method in machine learning, such as the highly topical problem of multilabel classification [30, 31, 32, 33, 34]. For instance, in the case of natural-language documents such as news releases, we can use deep learning to embed words or paragraphs as points in a vector space, where the Euclidean distance between vectors indicates affinity of meaning. Our graph method could then be used to analyze networks formed from large volumes of such documents. Each document usually has more than one meaningful label. We will study the possibility of aiding such multilabel classification using RD of the training data.
This work was supported by ECSEL-MegaMaRT2 project.
-  F. Caron, B. Fox, Sparse graphs using exchangeable random measures, J.R. Statist. Soc. B (2017) 79, Part 5, pp. 1295-1366
-  C. Borgs, J. Chayes, C. E. Lee, D. Shah, Iterative Collaborative Filtering for Sparse Matrix Estimation, Dec. 2017, arXiv:1712.00710
-  Bhattacharyya, S., Bickel P.J., Community Detection in Networks using Graph Distance, Networks with Community Structure Workshop, Eurandom, January, 2014, arXiv:1401.3915 [stat.ML] 2014
-  Christoforaki M., Suel T., Estimating Pairwise Distances in Large Graphs, 2014 IEEE International Conference on Big Data, Washington, DC, USA, 2014
-  Qi Z, Xiao Y.,Shao B., Wang H., Towards a Distance Oracle for Billion-Node graphs, Proc. of the VLDB Endowment, Vol 7, No. 2., 2013
-  Reittu H., Norros I., Bazsó F., Regular decomposition of large graphs and other structures: scalability and robustness towards missing data, In Proc. Fourth International Workshop on High Performance Big Graph Data Management, Analysis, and Mining (BigGraphs 2017), Mohammad Al Hasan, Kamesh Madduri and Nesreen Ahmed, Editors, Boston U.S.A. December 11. 2017
-  H. Reittu, I. Norros, T. Räty, M. Bolla, F. Bazsó, Regular decomposition of large graphs: foundation of a sampling approach to stochastic block model fitting, under revision in Data Science and Engineering, Springer.
-  Decelle, A., Krzakala, F. , Moore, C., Zdeborová, L., Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications, Phys. Rev. E 84 (2011), 066106
-  Elchanan Mossel, Joe Neeman, and Allan Sly, Consistency thresholds for the planted bisection model Electron. J. Probab. Volume 21 (2016), paper no. 21, 24 pp.
-  Abbe, E.: Community detection and stochastic block models: recent developments, arXiv:1703.10146v1 [math.PR] 29 Mar 2017
-  Szemerédi, E.: Regular partitions of graphs. Problèmes Combinatoires et Théorie des Graphes, number 260 in Colloq. Intern. C.N.R.S., 399-401, Orsay, 1976
-  Nepusz, T., Négyessy, L., Tusnády, G., Bazsó, F.: Reconstructing cortical networks: case of directed graphs with high level of reciprocity. In B. Bollobás, and D. Miklós, editors, Handbook of Large-Scale Random Networks, Number 18 in Bolyai Society of Mathematical Sciences pp. 325–368, Springer, 2008
-  Pehkonen, V., Reittu, H.: Szemerédi-type clustering of peer-to-peer streaming system. In Proceedings of Cnet 2011, San Francisco, U.S.A. 2011
-  Reittu, H., Bazsó, F., Weiss, R.: Regular decomposition of multivariate time series and other matrices. In P. Fränti and G. Brown, M. Loog, F. Escolano, and M. Pelillo, editors, Proc. S+SSPR 2014, number 8621 in LNCS, pp. 424 – 433, Springer 2014
-  Kuusela, P., Norros, I., Reittu, H., Piira, K., Hierarchical Multiplicative Model for Characterizing Residential Electricity Consumption, Journal of Energy Engineering - ASCE, Volume: 144, Issue number: 3, 2018
-  Reittu, H., Bazsó, F. , Norros, I. : Regular Decomposition: an information and graph theoretic approach to stochastic block models arXiv:1704.07114v2[cs.IT] 19 Jun 2017
-  Rissanen, J., A universal prior for integers and estimation by minimum description length, Annals of Statistics, 1983.
-  Grünwald, P.D., The Minimum Description Length Principle, MIT Press, 2007.
-  Bolla, M.: Spectral clustering and biclustering: Learning large graphs and contingency tables, Wiley, 2013
-  Aouchiche M., Hansen P., Distance Spectra of Graphs: A Survey, Linear Algebra and its Applications, 458, 301-386, 2014.
-  Borgs C, Chayes JT, Lovász L, T., Sós V and Vesztergombi K 2008, Convergent graph sequences I: Subgraph Frequencies, metric properties, and testing. Advances in Math. 219, 1801–1851.
-  https://snap.stanford.edu/data/p2p-Gnutella04.html
-  https://snap.stanford.edu/data/as-Skitter.html http://www.caida.org/tools/measurement/skitter
-  Newman M.E.J., Peixoto T.P., Generalized communities in networks, Phys. Rev. Lett. 115, 088701 (2015)
-  Norros I., Reittu H., On a conditionally Poissonian graph process, Advances in Applied Probability, V(38), N 1, pp. 59-75, 2006
-  van der Hofstad, R., Random graphs and complex networks, Cambridge University Press, 2016.
-  M Bloznelis, L Leskelä, Diclique clustering in a directed random graph, Proc. 13th Workshop on Algorithms and Models for the Web Graph (WAW), 2016.
-  J Karjalainen, L Leskelä, Moment-based parameter estimation in binomial random intersection graph models, Proc. 14th Workshop on Algorithms and Models for the Web Graph (WAW), 2017.
-  J Karjalainen, JSH van Leeuwaarden, L Leskelä, Parameter estimators of sparse random intersection graphs with thinned communities, Proc. 15th Workshop on Algorithms and Models for the Web Graph (WAW), 2018.
-  Hongyu Su, Juho Rousu. Multilabel classification through random graph ensembles, Machine Learning, Volume 99, Issue 2, pp 231–256, 2015.
-  J. Read, L.Martino, P. Olmos, D. Luengo. Scalable Multi-Output Label Prediction: From Classifier Chains to Classifier Trellises, Pattern Recognition, Volume 48, Issue 6, Pages: 2096-2109, 2015.
-  Krzysztof Dembczynski, Willem Waegeman, Weiwei Cheng, and Eyke Hüllermeier. On label dependence and loss minimization in multi-label classification. Mach. Learn., 88(1-2):5–45, July 2012.
-  K. Bhatia, H. Jain, P. Kar, M. Varma, and P. Jain, Sparse Local Embeddings for Extreme Multi-label Classification, in NIPS, 2015.
-  R. Babbar, and B. Schölkopf, DiSMEC - Distributed Sparse Machines for Extreme Multi-label Classification, in WSDM, 2017.