Nonbacktracking expansion of finite graphs

Abstract

Message passing equations yield a sharp percolation transition in finite graphs, as an artifact of the locally treelike approximation. For an arbitrary finite, connected, undirected graph we construct an infinite tree having the same local structural properties as this finite graph, when observed by a nonbacktracking walker. Formally excluding the boundary, this infinite tree is a generalization of the Bethe lattice. We indicate an infinite, locally treelike, random network whose local structure is exactly given by this infinite tree. Message passing equations for various cooperative models on this construction are the same as for the original finite graph, but here they provide the exact solutions of the corresponding cooperative problems. These solutions are good approximations to observables for the models on the original graph when it is sufficiently large and not strongly correlated. We show how to express these solutions in the critical region in terms of the principal eigenvector components of the nonbacktracking matrix. As representative examples we formulate the problems of the random and optimal destruction of a connected graph in terms of our construction, the nonbacktracking expansion. We analyze the limitations and the accuracy of the message passing algorithms for different classes of networks and compare the complexity of the message passing calculations to that of direct numerical simulations. Notably, in a range of important cases, simulations turn out to be more efficient computationally than the message passing.

1 Introduction

The Bethe lattice is a valuable substrate, on top of which many cooperative models are solved exactly [1] and show critical behaviors with mean-field critical exponents and Gaussian critical fluctuations. A regular Bethe lattice is equivalent to an infinite random regular graph, which is locally treelike and has only infinite cycles (loops) and no boundary. This is in contrast with an infinite Cayley tree in which the boundary suppresses phase transitions, e.g., in the Ising model on a Cayley tree, finite or infinite, long-range order never emerges. The configuration model [2] of a sparse uncorrelated network and its generalizations represent a heterogeneous version of the Bethe lattice. This basic model also provides exact solutions for percolation and other problems in infinite random networks [4].

Self-consistency equations for the probability of reaching a finite branch by following a link yield the solution of percolation problems for infinite locally treelike random networks [4]. Recently it was found in Ref. [5] that such self-consistency equations produce a good approximation of the percolation behavior of large but still finite, sparse graphs. This conclusion was supported by numerical simulations. The equations are written for a given graph from which a fraction of links is removed. These self-consistency equations, termed message passing equations in the context of finite graphs, predict a sharp continuous phase transition for any finite graph, e.g., for the 4-clique (four interconnected nodes). In reality, a continuous percolation phase transition cannot be defined in finite systems. The same is true for other models on finite graphs. In this paper, instead of describing a given finite system with some unknown accuracy, we address this useful approximation from a complementary perspective. Namely, if the message passing equations in principle cannot precisely describe physical models on finite graphs, then what do they describe? For each given finite graph, we construct a related infinite network for which these equations provide an exact description of the cooperative models. This infinite network is a generalization of the Bethe lattice, with percolation properties that are described exactly by the message passing equations for the finite graph [5]. The closer the original finite graph is to this construction, the better the approximation provided by the message passing equations. We thereby give a physical interpretation of the solutions of these equations for percolation on a finite graph. We express these solutions in the critical region in terms of the principal eigenvector components of the nonbacktracking matrix. As another example we consider the problem of optimal percolation [6].
We show that using our network construction, an approximate algorithm for finding the so-called decycling number [8] can be formulated in simple terms of the principal eigenvector of the nonbacktracking matrix. We introduce a way to quantify the accuracy of the message passing algorithm and indicate two classes of networks for which the approximation fails. We calculate the time complexity associated with message passing algorithms and show that in a wide range of important situations this method is less efficient than direct numerical simulations.

2 Construction

To build the nonbacktracking expansion (NBE) of an arbitrary finite, connected, undirected graph $G$, we first construct the tree $\mathcal{T}(\ell, i)$ in the following way. For any given node $i$ of $G$, $\mathcal{T}(1, i)$ is a star graph consisting of a node labeled $i$ in the center, and for any node $j$ in $G$ for which a link $(i, j)$ exists, a node labeled $j$ is attached to the center node of $\mathcal{T}(1, i)$. For general $\ell$ we define $\mathcal{T}(\ell, i)$ in a recursive way. To any leaf labeled $j$ in $\mathcal{T}(\ell - 1, i)$, we attach a node labeled $l$ if and only if $B_{l\leftarrow j,\, j\leftarrow k} = 1$, where $B$ is the nonbacktracking matrix of graph $G$ and $k$ is the label of the parent node of leaf $j$ in $\mathcal{T}(\ell - 1, i)$. The nonbacktracking matrix [9] is defined as $B_{j\leftarrow i,\, l\leftarrow k} = \delta_{il}(1 - \delta_{jk})$, where $j\leftarrow i$ and $l\leftarrow k$ are directed versions of links of the original graph. Note that both directions must be taken into account for each link. We see that the branches emanating from the root in $\mathcal{T}(\ell, i)$ correspond exactly to nonbacktracking walks of, at most, length $\ell$ in the original graph starting from node $i$. (Note that some of these walks may be shorter than $\ell$, when a dead end in graph $G$ is encountered.) In this construction the nodes of the resulting tree carry only the labels of the original finite graph, $1, 2, \dots, N$, so different nodes in this tree can have the same labels. The same is true for nodes in each individual branch of the tree. Figure 1(b) shows $\mathcal{T}(3, 1)$ for the finite graph in Figure 1(a).
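To make the recursion concrete, here is a minimal Python sketch (our own illustration, not part of the original text; the edge-list representation and function names are ours). A directed link is stored as an ordered pair $(u, v)$, meaning a walker stepping from $u$ to $v$, so the condition $B_{j\leftarrow i,\, l\leftarrow k} = \delta_{il}(1 - \delta_{jk})$ becomes: the step $(v, w)$ may follow the step $(u, v)$ whenever $w \neq u$.

```python
def directed_links(edges):
    """Both orientations (u, v) of every undirected link: a step from u to v."""
    return [d for a, b in edges for d in ((a, b), (b, a))]

def nb_matrix(edges):
    """Nonbacktracking matrix as a dict of dicts:
    B[(u, v)][(v, w)] = 1 whenever w != u, i.e., the step v -> w
    may follow the step u -> v without backtracking."""
    links = directed_links(edges)
    B = {e: {} for e in links}
    for u, v in links:
        for v2, w in links:
            if v2 == v and w != u:
                B[(u, v)][(v2, w)] = 1
    return B

def tree_surface(edges, root, depth):
    """Labels on the surface of T(depth, root): end nodes of all
    nonbacktracking walks of length `depth` starting at `root`
    (with repetitions, since different walks may end at the same label)."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    frontier = [(root, v) for v in adj[root]]            # surface of T(1, root)
    for _ in range(depth - 1):                           # grow one layer per step
        frontier = [(v, w) for u, v in frontier for w in adj[v] if w != u]
    return [v for _, v in frontier]
```

For the 4-clique every directed link has exactly two nonbacktracking continuations, so each successive surface of $\mathcal{T}(\ell, i)$ is twice the size of the previous one.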

Figure 1: Construction of (b) the tree $\mathcal{T}(3,1)$ and (c) the branch $\mathcal{B}(2, 2\leftarrow 1)$ for an example graph (a). The root (starting node) in $\mathcal{T}(3,1)$ is shown by the open circle and the starting link in $\mathcal{B}(2, 2\leftarrow 1)$ is represented by the dotted line.

The construction $\mathcal{T}(\ell, i)$ is also known as a computation tree in the computer science community [11]. Taking the limit $\ell \to \infty$ and formally excluding the boundary, we obtain a network in which all nodes of the same label $i$ and all links of the same label $j\leftarrow i$ are topologically equivalent, in the sense that the structure of any finite neighborhood of each node of label $i$ (and of each link of label $j\leftarrow i$) is the same. This construction is a generalization of the Bethe lattice, and we call it the nonbacktracking expansion of graph $G$. It is a specific Bethe lattice preserving the local topology of connections of the graph $G$. For example, the NBE of the 4-clique is simply a regular Bethe lattice of coordination number 3.

A more explicit definition of the NBE can be given using the "$m$-cloned" network [12] of the given graph $G$. This network is constructed as follows. Let the nodes of $G$ be labeled $1, 2, \dots, N$. We make $m$ replicas of all these nodes, still labeled $1, 2, \dots, N$ within each of the replicas. The $mN$ nodes obtained in this way are the nodes of the $m$-cloned network $\mathcal{N}(m)$, defined as follows. Let $\mathcal{N}(m)$ be the network that is maximally random under the constraint that if and only if nodes $i$ and $j$ are connected in $G$, then any of the $m$ nodes labeled $i$ in $\mathcal{N}(m)$ is connected to exactly one node labeled $j$ (of, in total, $m$ such nodes). In other words, make $m$ copies of graph $G$ and consider all possible rewirings that preserve the labels of the end nodes of all links, with equal statistical weights (uniform randomness). Figure 2 explains this construction for $m = 2$.
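One member of the ensemble $\mathcal{N}(m)$ can be sampled directly from this definition: for every link $(i, j)$ of $G$, join the $m$ copies of $i$ to the $m$ copies of $j$ by a uniformly random perfect matching. A minimal sketch (function and variable names are ours):

```python
import random

def sample_m_clone(edges, m, seed=None):
    """Sample one member of the statistical ensemble N(m).
    Nodes of the clone are pairs (label, replica_index)."""
    rng = random.Random(seed)
    cloned_edges = []
    for a, b in edges:
        matching = list(range(m))
        rng.shuffle(matching)           # random perfect matching of the replicas
        # replica r of node a is linked to replica matching[r] of node b
        cloned_edges.extend(((a, r), (b, matching[r])) for r in range(m))
    return cloned_edges
```

By construction every clone of node $i$ keeps the degree that $i$ has in $G$, which is exactly the constraint defining $\mathcal{N}(m)$.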

Figure 2: Construction of network $\mathcal{N}(2)$ for the finite graph in Fig. 1(a), according to Ref. [12]. Three of the members of the statistical ensemble of graphs constituting $\mathcal{N}(2)$ are shown as examples.

Let us take the limit $m \to \infty$ and consider the infinite network $\mathcal{N}(\infty)$. This network is locally treelike: any finite neighborhood of any node labeled $i$ is given by the tree $\mathcal{T}(\ell, i)$. The relative number of finite loops in this infinite network is negligible, almost all loops are infinite, and almost all nodes of label $i$ (and links of label $j\leftarrow i$) are topologically equivalent. The network $\mathcal{N}(\infty)$ gives exactly the NBE of graph $G$.

To discuss the properties of the NBE it is helpful to define the branch $\mathcal{B}(\ell, j\leftarrow i)$ as the tree that is recursively generated by the nonbacktracking walks of length $\ell$ in $G$ starting from the link $j\leftarrow i$, as shown in Figure 1(c). Clearly, $\mathcal{T}(\ell, i)$ consists of all branches $\mathcal{B}(\ell - 1, j\leftarrow i)$ (all $j$ for which there is a link $(i, j)$ in graph $G$).

3 Properties of the nonbacktracking expansion

Let us consider properties of these constructions. Let $n_{j\leftarrow i}(\ell)$ be the number of links of label $j\leftarrow i$ on the surface of $\mathcal{B}(\ell, s\leftarrow r)$ (these are links from a parent labeled $i$ on the surface of sphere $\ell - 1$ to a child labeled $j$ on the surface of sphere $\ell$). Let $S(\ell)$ be the size (number of nodes) of the surface of $\mathcal{B}(\ell, s\leftarrow r)$, so that $\tilde{n}_{j\leftarrow i}(\ell) = n_{j\leftarrow i}(\ell)/S(\ell)$ is the relative number of links of label $j\leftarrow i$ on the surface of $\mathcal{B}(\ell, s\leftarrow r)$. Every link on the surface of $\mathcal{B}(\ell + 1, s\leftarrow r)$ must emanate from the end node of a link on the surface of $\mathcal{B}(\ell, s\leftarrow r)$, so

$$n_{j\leftarrow i}(\ell + 1) = \sum_{k} B_{j\leftarrow i,\, i\leftarrow k}\, n_{i\leftarrow k}(\ell), \qquad (1)$$

that is,

$$\tilde{n}_{j\leftarrow i}(\ell + 1)\, \frac{S(\ell + 1)}{S(\ell)} = \sum_{k} B_{j\leftarrow i,\, i\leftarrow k}\, \tilde{n}_{i\leftarrow k}(\ell). \qquad (2)$$

Taking $\ell \to \infty$, the ratio $S(\ell + 1)/S(\ell)$ converges to an asymptotic mean branching $b$, and the numbers $\tilde{n}_{j\leftarrow i}(\ell)$ must also converge to some asymptotic values $\tilde{n}_{j\leftarrow i}$. For $\ell \to \infty$, therefore, we get

$$b\, \tilde{n}_{j\leftarrow i} = \sum_{k} B_{j\leftarrow i,\, i\leftarrow k}\, \tilde{n}_{i\leftarrow k}, \qquad (3)$$

or, in vector form,

$$b\, \tilde{\mathbf{n}} = B\, \tilde{\mathbf{n}}. \qquad (4)$$
Equation (4) is a right eigenvector equation for the nonbacktracking matrix $B$ of the original graph $G$. According to the Perron-Frobenius theorem [13], the nonbacktracking matrix has exactly one strictly non-negative right eigenvector, and this is the eigenvector corresponding to the leading eigenvalue. All of the components of $\tilde{\mathbf{n}}$ are, by definition, non-negative; therefore the mean branching $b$ is equal to the leading eigenvalue $\lambda$ of $B$. We note that the leading eigenvalue and the corresponding right eigenvector are independent of the starting link $s\leftarrow r$, so we have the following general result. The asymptotic mean branching of $\mathcal{B}(\ell, s\leftarrow r)$, for any $s\leftarrow r$, is given by the leading eigenvalue $\lambda$ of the nonbacktracking matrix of graph $G$. From the construction of $\mathcal{T}(\ell, i)$, it is clear that $\mathcal{T}(\ell, i)$ must also have the same asymptotic mean branching for any root $i$. The asymptotic relative numbers of links $j\leftarrow i$ on the surface of $\mathcal{B}(\ell, s\leftarrow r)$ [or $\mathcal{T}(\ell, r)$] are given by the components $v_{j\leftarrow i}$ of the principal right eigenvector $\mathbf{v}$ of $B$. These asymptotic relative numbers are, again, independent of the starting link (or root).
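The identification of the mean branching with the leading eigenvalue is easy to check numerically by power iteration on the nonbacktracking matrix. The sketch below is our own illustration (helper names are ours); it stores a directed link $j\leftarrow i$ as the ordered pair $(i, j)$ and sums over nonbacktracking continuations. The leading eigenvalue is the same for $B$ and its transpose, so the convention used for the sum does not affect the returned branching.

```python
def nb_leading_eigenpair(edges, iters=500):
    """Power iteration for the leading eigenvalue of the nonbacktracking
    matrix (the asymptotic mean branching), together with the corresponding
    eigenvector indexed by directed links (u, v)."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    links = [(u, v) for u in adj for v in adj[u]]
    x = {e: 1.0 for e in links}
    lam = 0.0
    for _ in range(iters):
        # sum over the nonbacktracking continuations (v, w), w != u, of (u, v)
        y = {(u, v): sum(x[(v, w)] for w in adj[v] if w != u) for u, v in links}
        total = sum(y.values())
        lam = total / sum(x.values())       # Rayleigh-quotient-like estimate
        x = {e: val / total for e, val in y.items()}
    return lam, x
```

For any $q$-regular graph the mean branching is $q - 1$: for example, $\lambda = 2$ for the 4-clique and $\lambda = 1$ for a cycle.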

Now we can find the asymptotic relative number $\tilde{N}_i$ of nodes of label $i$ on the surface [of either $\mathcal{B}(\ell, s\leftarrow r)$ or $\mathcal{T}(\ell, r)$, independent of the starting link $s\leftarrow r$ or root $r$]:

$$\tilde{N}_i = \sum_{j \in N(i)} v_{i\leftarrow j}, \qquad (5)$$

where $N(i)$ denotes the set of nearest neighbors of node $i$ in graph $G$, and $\mathbf{v}$ is normalized so that its components sum to unity. The asymptotic relative number of nodes of label $i$ must also be the same on the layer just beneath the surface, and each node in this layer (apart from dead ends) must be the starting node of links on the surface, so we can also write

$$\tilde{N}_i = \frac{\lambda}{q_i - 1} \sum_{j \in N(i)} v_{j\leftarrow i}, \qquad (6)$$
where $q_i$ is the degree of node $i$. Equations (5) and (6) give a general relationship between the components of the eigenvector $\mathbf{v}$. Note that $\tilde{N}_i$ is the asymptotic probability of finding a nonbacktracking random walker at node $i$ after an infinitely long walk. This quantity coincides with the nonbacktracking centrality of node $i$ [14].

Let us return to $S_{j\leftarrow i}(\ell)$, the size of the surface of $\mathcal{B}(\ell, j\leftarrow i)$. Let $\Sigma(\ell)$ be the sum of the sizes of all such surfaces, and $\tilde{s}_{j\leftarrow i}(\ell) = S_{j\leftarrow i}(\ell)/\Sigma(\ell)$ be the relative size of the surface of $\mathcal{B}(\ell, j\leftarrow i)$. Then

$$S_{j\leftarrow i}(\ell + 1) = \sum_{l} S_{l\leftarrow j}(\ell)\, B_{l\leftarrow j,\, j\leftarrow i}, \qquad (7)$$

or

$$\tilde{s}_{j\leftarrow i}(\ell + 1)\, \frac{\Sigma(\ell + 1)}{\Sigma(\ell)} = \sum_{l} \tilde{s}_{l\leftarrow j}(\ell)\, B_{l\leftarrow j,\, j\leftarrow i}. \qquad (8)$$

In the limit $\ell \to \infty$, the ratio $\Sigma(\ell + 1)/\Sigma(\ell)$ converges to the asymptotic mean branching $b$, and the numbers $\tilde{s}_{j\leftarrow i}(\ell)$ must also converge to some asymptotic values $\tilde{s}_{j\leftarrow i}$. We can write

$$b\, \tilde{s}_{j\leftarrow i} = \sum_{l} \tilde{s}_{l\leftarrow j}\, B_{l\leftarrow j,\, j\leftarrow i}, \qquad (9)$$

or, in vector form,

$$b\, \tilde{\mathbf{s}} = \tilde{\mathbf{s}}\, B, \qquad (10)$$
which is just a left eigenvector equation for $B$. The same reasoning applies as before, so $b$ is equal to the leading eigenvalue of $B$, $\lambda$. The vector $\tilde{\mathbf{s}}$ is equal to the corresponding left eigenvector $\mathbf{w}$ (normalized to unit sum). Then the asymptotic relative sizes of the surfaces of $\mathcal{B}(\ell, j\leftarrow i)$ are given by the components $w_{j\leftarrow i}$ of $\mathbf{w}$. We express the asymptotic relative sizes $\tilde{S}_i$ of the surfaces of $\mathcal{T}(\ell, i)$ in terms of these components:

$$\tilde{S}_i = \sum_{j \in N(i)} w_{j\leftarrow i}. \qquad (11)$$

From the symmetric relationship between the left and the right eigenvectors we see that $w_{j\leftarrow i} = v_{i\leftarrow j}$, therefore

$$\tilde{S}_i = \sum_{j \in N(i)} v_{i\leftarrow j} = \tilde{N}_i. \qquad (12)$$
Thus the asymptotic relative size of the surface of $\mathcal{T}(\ell, i)$ is equal to the asymptotic relative number of nodes of label $i$ on the surface of any $\mathcal{B}(\ell, s\leftarrow r)$, independently of $s\leftarrow r$. This quantity is expressed in terms of the components of the principal (left or right) eigenvector of the nonbacktracking matrix of the original graph $G$.

It is clear from the above constructions that for an arbitrary graph $G$, the interior of the infinite tree $\mathcal{T}(\infty, i)$ is given exactly by $\mathcal{N}(\infty)$, independently of $i$. Note, however, that the distribution of the numbers of nodes of different labels in $\mathcal{T}(\infty, i)$ is given by Eq. (5), while the same distribution for $\mathcal{N}(m)$ is uniform: we made the same number of copies of every node. This discrepancy is reconciled by the presence of infinite loops: nodes in the infinite network are "matched up" at infinity to provide the overall uniform distribution of node labels. It is informative to consider the dispersion of the relative number of node labels within a distance $d$ of an arbitrarily chosen starting node in both $\mathcal{N}(m)$ and $\mathcal{T}(\ell, r)$. Suppose that $m$ and $\ell$ are such that the number of nodes in both $\mathcal{N}(m)$ and $\mathcal{T}(\ell, r)$ is $V$, and that $V$ is large. The dispersion in both cases will converge to

$$\sigma^2 = \frac{1}{N} \sum_{i} \left( \tilde{N}_i - \frac{1}{N} \right)^2, \qquad (13)$$

but will go to $0$ for $\mathcal{N}(m)$ as $d$ approaches the diameter of $\mathcal{N}(m)$, while it remains the same for any large $d$ in the case of $\mathcal{T}(\ell, r)$. Here the variables $\tilde{N}_i$ are the asymptotic values given by Eq. (5). Figure 3 qualitatively illustrates this behavior.

Figure 3: Dispersion of the relative number of node labels within a distance $d$ of an arbitrarily chosen node in the nonbacktracking expansion $\mathcal{N}(m)$ and the tree $\mathcal{T}(\ell, r)$. The number of nodes in both $\mathcal{N}(m)$ and $\mathcal{T}(\ell, r)$ is denoted by $V$.

4 Example Problems

We now consider the problem of approximating percolation on finite graphs. The critical singularity in percolation occurs only in infinite networks, where the giant connected component can be strictly defined. In a large though finite random graph the so-called scaling window, in which the behavior of the largest connected cluster noticeably deviates from the singular behavior of the giant connected component, is narrow. In a recent work Karrer et al. [5] considered this problem of "approximate" percolation on a finite graph by solving self-consistency equations for the probabilities of reaching finite clusters by following a link in a given direction. Such self-consistency equations provide an exact solution for percolation in infinite locally treelike random networks, and in finite graphs they give an approximation to the behavior of the largest connected cluster. The larger and more "locally treelike" the finite graph is, the better this approximation is expected to be. For an arbitrary connected finite graph the "approximate critical point" is given by the inverse of the leading eigenvalue of the nonbacktracking matrix of the graph. It was shown in Ref. [15] that for infinite graphs this is a tight lower bound on the exact percolation threshold. The question arises: rather than treating them purely as an approximation, what is the exact meaning of the solutions of the self-consistency equations on a finite graph? The results in Ref. [5] rely on solving the following set of basic equations:
$$H_{i \to j}(z) = 1 - p + p\, z \prod_{k \in N(j) \setminus i} H_{j \to k}(z), \qquad (14)$$

where $H_{i \to j}(z)$ is the probability generating function for the distribution of the sizes of finite clusters reachable by following links from $i$ to $j$. (In the context of finite graphs, the meaning of finite clusters is: not the largest cluster.) Here $N(j) \setminus i$ denotes the set of the nearest neighbors of node $j$ excluding node $i$, and $p$ is the existence probability of any link. Clearly, in finite graphs, these equations do not hold exactly, but may be approximately correct if the graph in question is large and sparse. In our nonbacktracking expansion [the network $\mathcal{N}(\infty)$], however, Eqs. (14) hold exactly, and by solving them we actually obtain the exact percolation results for $\mathcal{N}(\infty)$. In Ref. [5] it is shown that the value of $p$ at which a nontrivial solution to Eqs. (14) appears is given by the inverse of the leading eigenvalue of the nonbacktracking matrix of the given graph. This coincides with our result that the mean branching of the network $\mathcal{N}(\infty)$ is also given by the leading eigenvalue of the nonbacktracking matrix. The inverse of this value is a true critical point $p_c$ in our NBE. The order parameter $S$ is the relative size of the giant connected component and is given by

$$S = \frac{1}{N} \sum_{i} \Bigl[ 1 - \prod_{j \in N(i)} H_{i \to j}(1) \Bigr], \qquad (15)$$

where $H_{i \to j}(1)$ are the solutions of Eqs. (14) at $z = 1$, for a given value of $p$. The degree distribution of any NBE coincides with the degree distribution of the original finite graph, and therefore it necessarily has a cutoff: the maximum degree in the original graph. Consequently the order parameter exponent takes the mean-field value $\beta = 1$ for the NBE of any finite graph.
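Solving the message passing equations at $z = 1$ by simple fixed-point iteration takes only a few lines. The sketch below is our own illustration (names are ours); initializing the messages away from the trivial fixed point $H = 1$ is essential above the critical point.

```python
from math import prod

def mp_order_parameter(edges, p, iters=500):
    """Fixed-point iteration of the percolation message passing equations,
    H[i->j] = 1 - p + p * prod over k in N(j) minus i of H[j->k]  (at z = 1);
    returns S, the relative size of the giant component of the NBE."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    # start below the trivial fixed point H = 1
    H = {(i, j): 0.5 for i in adj for j in adj[i]}
    for _ in range(iters):
        H = {(i, j): 1 - p + p * prod(H[(j, k)] for k in adj[j] if k != i)
             for (i, j) in H}
    n = len(adj)
    return sum(1 - prod(H[(i, j)] for j in adj[i]) for i in adj) / n
```

For the 4-clique the symmetric fixed point obeys $H = 1 - p + pH^2$, giving $H = (1 - p)/p$ for $p > 1/2$ and hence $S = 1 - [(1 - p)/p]^3$; at $p = 0.6$ this is $S = 1 - (2/3)^3 = 19/27$.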

We express the slope of the order parameter at $p_c$ in terms of the components of the principal eigenvector of the nonbacktracking matrix, which naturally emerges when linearizing Eqs. (14). We cannot do this directly, unlike, e.g., in Refs. [16] and [17], since the nonbacktracking matrix is not symmetric, and so the full set of its eigenvectors does not form an orthogonal basis. To overcome this difficulty, we introduce a new symmetric matrix

$$M = (\lambda^2 - 1)\, I - \lambda A + D, \qquad (16)$$

where $\lambda$ is the leading eigenvalue of the nonbacktracking matrix and the matrices $A$, $D$, and $I$ are the adjacency matrix, the degree matrix ($D_{ij} = \delta_{ij} q_i$), and the identity matrix, respectively. The matrix $M$ is a special case of the Bethe Hessian matrix [18]. Its eigenvectors already constitute a complete orthogonal basis. The leading eigenvalue $\lambda$ of the nonbacktracking matrix corresponds to a zero eigenvalue of $M$, and the associated eigenvector components can be expressed in terms of the components of the principal eigenvector of the nonbacktracking matrix. These useful properties of the matrix $M$ enable us to calculate the slope of the order parameter at $p_c$ (see the Appendix for details of the derivation):

Figure 4 compares the slopes of the order parameter calculated in the critical region by solving Eqs. (14) and (15) for the 4-clique, an Erdős–Rényi random graph, and three real networks with the slopes found by using this formula. Note the perfect agreement at $p = p_c$. The curves in Figure 4(b) also reveal the sizes of the regions in which the linear relation is applicable for these networks.

The slope of the order parameter may also be obtained using a different method, which does not rely on symmetric matrices. This alternative method uses the fact that for any diagonalizable matrix, the complex conjugate of any left eigenvector corresponding to one eigenvalue is orthogonal to all right eigenvectors corresponding to a different eigenvalue (and vice versa, the conjugate of any right eigenvector corresponding to one eigenvalue is orthogonal to all left eigenvectors corresponding to a different eigenvalue). This method yields a simpler expression for the slope (see the Appendix for details of the derivation):

Both approaches have their respective merits, and they provide equivalent expressions for the slope of the order parameter. The first method introduces a new symmetric matrix, which may be useful for various problems on undirected graphs, where the underlying matrix (the adjacency matrix) is symmetric. The second approach requires only that the relevant matrix be diagonalizable; therefore it may be used to treat a wide range of problems in nonsymmetric systems, e.g., percolation on directed networks. A detailed discussion of these issues will be presented elsewhere.

Figure 4: (Color online) (a) The order parameter $S$ as a function of $p$, computed using Eqs. (14) and (15), for the 4-clique, an Erdős–Rényi graph, the Zachary Karate Club network, a subgraph of the Gnutella peer-to-peer file sharing network, and a snapshot of the Internet at the autonomous systems level. (b) Normalized slopes for the same example networks, obtained using the two equivalent slope formulas. All normalized slopes go to 1 at $p = p_c$.

The above methods can be applied to the general class of problems defined by the set of message passing equations

with the condition that

where $c$ is a constant, independent of $i$ and $j$. For any such model the solutions of Eqs. (18) and the order parameter near $p_c$ can be calculated using our schemes (see the Appendix).

As another example of the application of our results, we consider an algorithm approximating optimal percolation, suggested in Ref. [6]. It was shown that an efficient way of destroying a network (removing the lowest number of nodes in order to disconnect the giant component) is by sequentially removing nodes according to their collective influence $\mathrm{CI}_\ell$. This characteristic of node $i$ is defined as

$$\mathrm{CI}_\ell(i) = (q_i - 1) \sum_{j \in \partial \mathrm{Ball}(i, \ell)} (q_j - 1), \qquad (20)$$
where $q_i$ is the degree of node $i$ and $\partial \mathrm{Ball}(i, \ell)$ is the set of the nodes that are at a distance $\ell$ from node $i$. In this algorithm, $\ell$ must be large but smaller than the diameter of the graph under consideration. Let us consider the same problem on the nonbacktracking expansion of a finite graph. We see that in this case the collective influences $\mathrm{CI}_\ell(i)$, taking $\ell \to \infty$, are just the asymptotic relative sizes $\tilde{S}_i$ of the surfaces of $\mathcal{T}(\ell, i)$, multiplied by a common, node-independent factor. We have already expressed these relative sizes in terms of the components of the principal eigenvector of the nonbacktracking matrix of the original graph [see Eq. (11)]. If the finite graph under consideration is large and has no short loops, we can assume that using our expression is a good approximation to the optimal way of disconnecting the graph. An improvement on the collective influence method was suggested in Ref. [7]. It was shown empirically that better attack performance can be achieved by sequentially removing nodes according to the following centrality measure (collective influence propagation):

where $w_{j\leftarrow i}$ and $v_{j\leftarrow i}$ are the components of the principal left and right eigenvectors of the nonbacktracking matrix, respectively. It was also shown analytically in [7] that removing a node according to Eq. (21) results in the biggest decrease in $\lambda$, the leading eigenvalue of the nonbacktracking matrix, i.e., the mean branching of the corresponding NBE. Thus, this method produces "stepwise optimal" percolation. From the result $w_{j\leftarrow i} = v_{i\leftarrow j}$ we see that using the centrality index of Eq. (21) is actually equivalent to using

$$X_i = \sum_{j \in N(i)} v_{i\leftarrow j}\, v_{j\leftarrow i}. \qquad (22)$$
Considering the nonbacktracking expansion we can give a physical interpretation of the product $v_{i\leftarrow j}\, v_{j\leftarrow i}$. This is the relative number of infinite paths in the NBE passing through a link $(i, j)$ (in either direction). The relative number of infinite paths arriving at a link $j\leftarrow i$ is given by $w_{j\leftarrow i} = v_{i\leftarrow j}$, and the relative number of infinite paths starting from a link $j\leftarrow i$ is given by $v_{j\leftarrow i}$ (see Section 3). Summing the product over all neighbors $j$ of node $i$, we get the relative number of infinite paths in the NBE passing through node $i$, which is exactly the centrality index of Eq. (22). One can consider the stepwise optimal removal of edges instead of nodes. For this problem, the expression (22) should be replaced by the edge centrality measure

$$X_{ij} = v_{i\leftarrow j}\, v_{j\leftarrow i} \qquad (23)$$
for edge $(i, j)$. Note that in the recent work [22] a method for finding the decycling number of graphs was presented, outperforming all previous approaches to optimal percolation. A computationally much more efficient method, providing similarly high performance, was introduced in [23].
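The collective influence of a node is computable with a breadth-first search out to the sphere of radius $\ell$. A pure-Python sketch (our own; the adjacency-dict representation is an assumption):

```python
from collections import deque

def collective_influence(adj, i, ell):
    """CI_ell(i) = (q_i - 1) * sum of (q_j - 1) over the nodes j at
    distance exactly ell from node i, found by breadth-first search."""
    dist = {i: 0}
    queue = deque([i])
    sphere = []
    while queue:
        u = queue.popleft()
        if dist[u] == ell:
            sphere.append(u)
            continue                      # do not expand beyond the sphere
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return (len(adj[i]) - 1) * sum(len(adj[j]) - 1 for j in sphere)
```

The sequential attack then repeatedly removes the node with the largest current value and recomputes the influences on the reduced graph.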

5 Accuracy and limitations of the message passing approach

We have shown that when we use message passing equations to approximately solve a given problem on a finite graph, we are actually solving the problem exactly on the nonbacktracking expansion of that graph. At present there is no theory available to determine how good this approximation is in the general case. We now briefly discuss the accuracy of this approximation, through the example of random percolation, and how well it may be expected to work in certain situations.

Note that although we defined the NBE for finite graphs, it may also be defined for infinite networks as a limit of finite networks. From Refs. [5] and [15] we know that for infinite networks the critical point (for percolation) of the NBE is a lower bound on that of the original network. An even stronger statement is also true: the relative size of the giant component in the NBE is an upper bound on that of the original network (for any link activation probability $p$). This is easily seen considering the following. The solutions of Eqs. (14) are a lower bound on the actual probabilities of reaching a finite cluster when following a link $(i, j)$ in the direction of $j$, due to the presence of loops in the original network (see [5]). According to Eq. (15), therefore, the order parameter of the NBE is an upper bound on that of the original network. With this result in mind we can quantify the badness of the NBE approximation as

$$\varepsilon = \int_0^1 \bigl| S_{\mathrm{MP}}(p) - S(p) \bigr|\, \mathrm{d}p, \qquad (24)$$
where $S(p)$ is the order parameter of the original network and $S_{\mathrm{MP}}(p)$ is the order parameter of the corresponding NBE, for a given link activation probability $p$. For infinite networks the absolute value in Eq. (24) is not necessary, as $S_{\mathrm{MP}}(p) \geq S(p)$ for any $p$ in this case. For finite graphs, however, $S_{\mathrm{MP}}(p) = 0 < S(p)$ for $p < p_c$, where $p_c$ is the critical point of the NBE and $S(p)$ is the relative size of the largest component. Therefore the absolute value in Eq. (24) is required to make this a universally applicable, meaningful measure of the badness of the approximation. For uncorrelated random networks the NBE coincides with the network itself (in the infinite size limit), therefore $\varepsilon = 0$.
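For a tiny graph the badness can be evaluated exactly, since both sides of the comparison are available: the message passing solution in closed form, and the ensemble average by enumerating all link configurations. The sketch below (our own illustration, not from the original text) does this for the 4-clique, enumerating all $2^6$ link subsets; the closed-form message passing result $S_{\mathrm{MP}} = 1 - [(1-p)/p]^3$ for $p > 1/2$ (and $0$ otherwise) follows from the symmetric fixed point $H = 1 - p + pH^2$.

```python
from itertools import combinations

def largest_cluster_fraction(n, present):
    """Relative size of the largest connected cluster (depth-first search)."""
    adj = {v: [] for v in range(n)}
    for a, b in present:
        adj[a].append(b)
        adj[b].append(a)
    seen, best = set(), 1
    for s in range(n):
        if s in seen:
            continue
        seen.add(s)
        stack, size = [s], 0
        while stack:
            u = stack.pop()
            size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, size)
    return best / n

def badness_4clique(grid=201):
    """Crude Riemann sum of |S_MP(p) - S(p)| over p for the 4-clique,
    with S(p) the exact expected largest-cluster fraction."""
    n, edges = 4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    m = len(edges)
    eps = 0.0
    for t in range(grid):
        p = t / (grid - 1)
        s_mp = 1 - ((1 - p) / p) ** 3 if p > 0.5 else 0.0
        s_exact = sum(p ** len(sub) * (1 - p) ** (m - len(sub))
                      * largest_cluster_fraction(n, sub)
                      for r in range(m + 1)
                      for sub in combinations(edges, r))
        eps += abs(s_mp - s_exact) / grid
    return eps
```

Note that at $p = 0$ the exact "largest cluster" is a single node, so $S(0) = 1/4$ while $S_{\mathrm{MP}}(0) = 0$: precisely the finite-size discrepancy that makes the absolute value in the badness measure necessary.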

We now consider two representative examples of networks, having the same NBE, where the approximation fails: a two-dimensional square lattice and a synthetic modular network given by the following construction. Consider a large number of subnetworks, each of which is a random regular network of degree 4. In each of the subnetworks, let us remove two randomly selected links, leaving four nodes of degree 3. Then, let us connect these affected nodes to similar affected nodes in other subnetworks randomly, restoring the uniform degree 4 for every node in the network. In the limit of infinitely large subnetworks this network has no finite loops and has modularity $Q \to 1$, according to the definition of modularity in Ref. [24]. The corresponding NBE is exactly the same as the one for a two-dimensional square lattice. This NBE also coincides with that of the 5-clique; it is a random regular network of degree 4. The above examples are particularly badly approximated by the NBE, as shown in Figure 5(a).

Figure 5: (Color online) Simulation results and message passing solutions for the size of the giant connected component; examples of networks badly approximated by the nonbacktracking expansion. (a) Synthetic networks: a two-dimensional square lattice and a model network of maximal modularity $Q = 1$. (b) Real-world networks: the road network of California and an Amazon copurchase network. In all cases the simulation results differ significantly from the message passing solutions. See the table of badness values for details.

These two examples represent two classes of networks that are not well approximated by the NBE: low-dimensional networks and highly modular networks. Low-dimensional networks have many short loops and are therefore essentially non-treelike. A modification of the original message passing method has been suggested in [27] to deal with clustering, and in [28] for networks with loopy motifs. Low dimensionality, however, implies many intertwined loops of many different sizes, which appears to be a significant factor in rendering such systems untreatable by the locally treelike approximation. Interestingly, even if there are no finite loops in the network, the NBE may still be a bad approximation if the modularity is very high [see the model modular network "Communities" in Figure 5(a)]. A message passing method for modular networks is suggested in [29]; however, to use it, one needs to know the modular structure of the network a priori. The problem is that there is no unique definition of modularity in networks. Also, finding the community structure of a network, using any meaningful definition of modularity, typically leads to NP-hard decision problems [30]. In Figure 5(b) we present real-world examples of the two classes of networks discussed above: a planar network (the road network of California [25]) and a modular network (an Amazon copurchase network [26]). The respective badness values are listed in the table.

An instructive example that simultaneously demonstrates the pitfalls and the remarkable power of this approximation is the neural network of the roundworm Caenorhabditis elegans (C. elegans) [31]. We investigated the hermaphrodite C. elegans, combining both chemical and electrical synapses and removing multiple links and self-loops. This neural network contains two obvious communities: the pharynx and the main body of the roundworm. The two communities are separated by only two links, hence the obvious modular structure of the network. Figure 6 shows percolation simulation results and the corresponding message passing solutions for the whole network and also for the two subnetworks (obtained by cutting the two links separating the modules). The NBE approximation is not particularly good when the whole network is considered, but is remarkably accurate for the two subnetworks. These subnetworks are very small and have high clustering coefficients (for both the pharynx and the main body), implying that modularity, rather than size or clustering, is the dominant factor in determining the accuracy of the approximation.

Figure 6: (Color online) Simulation results and message passing solutions for the size of the giant connected component for the neural network of C. elegans [31]. The full neural network consists of two well-defined modules: the main body and the pharynx. The message passing equations are a bad approximation for the network as a whole, but a surprisingly good one for the modules separately. (Note the very small size, $N = 51$, of the pharynx network!)

Figure 7 shows "less correlated" networks, also studied in [5], where the badness values of the approximation are very close to zero (see the table). These networks are small worlds, i.e., infinite-dimensional, and have low modularity. We showed above that low dimensionality and high modularity are two attributes that cause the approximation to fail. Modular networks are ubiquitous in nature; therefore such excellent agreement as in Figure 7 (or Ref. [5]) may not actually be a very common case.

Figure 7: (Color online) Simulation results and message passing solutions for networks where the nonbacktracking expansion gives an excellent approximation: an Erdős–Rényi network, a subgraph of the Gnutella peer-to-peer file sharing network, and a snapshot of the Internet at the level of autonomous systems. The simulation results practically coincide with the message passing curves.

Finally, we discuss another limitation of message passing methods: their time complexity. A significant advantage of using message passing equations as opposed to running simulations is that, supposedly, the computational time required to find the solutions is much smaller than the time involved in performing the actual simulations and averaging over many realizations. This is not always true, however. The number of operations required in message passing methods (i.e., when solving the message passing equations by iteration) is

t = O(n_it E),   (25)

where n_it is the number of iterations to convergence and E is the number of links in the network. To achieve this complexity one may do the following. Considering the percolation problem, Eq. (14), at each iteration, for every node, first compute the product of all incoming messages. Over all nodes, this operation has time complexity O(E). To update each outgoing message from every node, divide the precalculated product for the starting node of this message by the incoming message along the same link. This, again, has complexity O(E), resulting in the overall complexity given in Eq. (25). In order for the message passing equations to converge to a solution, the messages in any part of the network must receive information from every other part of the network. Therefore, n_it cannot be less than the network diameter, i.e., the longest of all the shortest paths between node pairs in the network. In a typical network demonstrating the small-world effect the diameter grows logarithmically with the system size N. In this case n_it can be expected to grow at least as ln N, and so t = O(E ln N) for networks with sufficiently rapidly decaying degree distributions. (Note that in networks with a divergent second moment of the degree distribution, the diameter may increase with N even more slowly than logarithmically [32]. At the other extreme, namely in the case of a chain, the diameter grows as N.) Let us compare these results with the time complexity of actual simulations.
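The product-and-divide bookkeeping described above can be sketched as follows. We assume a standard bond-percolation message form, H_{i→j} = 1 − p + p ∏_{k∈∂i∖j} H_{k→i}; the paper's Eq. (14) may differ in detail, and the function name is ours:

```python
def percolation_messages(adj, p, tol=1e-12, max_iter=10000):
    """Iterate H_{i->j} = 1 - p + p * prod_{k in N(i)\\{j}} H_{k->i}.

    adj: dict mapping each node to a list of its neighbors.
    Assumes p < 1, so every message stays >= 1 - p > 0 and the
    division below is safe.  Cost per sweep is O(E).
    """
    H = {(i, j): 1.0 - p for i in adj for j in adj[i]}
    for _ in range(max_iter):
        # First O(E) pass: per-node product of ALL incoming messages.
        prod = {i: 1.0 for i in adj}
        for i in adj:
            for k in adj[i]:
                prod[i] *= H[(k, i)]
        # Second O(E) pass: each outgoing message reuses the product,
        # dividing out the single incoming message along the same link.
        new = {(i, j): 1.0 - p + p * prod[i] / H[(j, i)]
               for i in adj for j in adj[i]}
        delta = max(abs(new[e] - H[e]) for e in H)
        H = new
        if delta < tol:
            break
    # Probability that node i is in the giant component:
    # s_i = 1 - prod_{k in N(i)} H_{k->i}; return the average S.
    S = 0.0
    for i in adj:
        pr = 1.0
        for k in adj[i]:
            pr *= H[(k, i)]
        S += 1.0 - pr
    return S / len(adj)
```

Each iteration thus costs two O(E) passes, giving the overall complexity t = O(n_it E) quoted in Eq. (25).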

Say we want to determine the size of the “giant” percolating cluster in a large graph to a given accuracy (a given standard error). The quantity we are interested in is an average over n sampled nodes, S = ⟨s⟩, where s is a binary quantity (0 or 1) indicating whether or not a given node of the graph is in the giant component. The standard error of the estimate for S is simply σ/√n, where σ = √(S(1−S)) is the standard deviation of s. Therefore, for a given desired error the number n of node samples needed is independent of the system size. To determine whether a node belongs to the giant component, we must explore the neighborhood of this node (e.g., using a simple breadth-first search), visiting up to s* other nodes, where s* ≪ N; the search can be terminated once more than s* nodes have been visited. Here s* is the typical size of a finite cluster, which depends on the distance from the critical point but not on the system size. Considering the above, we arrive at the following conclusion: at a given distance from the critical point, for a given desired accuracy, the order parameter can be determined using simulations in constant time for any network architecture. This is in strong contrast with the message passing approach, where the accuracy is unknown and the time complexity is significantly higher. While the message passing method may be faster in certain small networks, it is always inferior to simple simulations for large enough networks.
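To illustrate the sampling argument, here is a minimal bond-percolation simulation using a union-find structure; it reports the largest-component fraction (a proxy for the giant component) together with its standard error over realizations. The function name and parameter choices are ours:

```python
import math
import random

def giant_fraction_simulated(n, edges, p, trials=20, seed=0):
    """Bond percolation: keep each link with probability p, measure
    the largest-component fraction, and return the sample mean with
    its standard error sigma / sqrt(trials)."""
    rng = random.Random(seed)

    def find(parent, x):
        # Root lookup with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    fracs = []
    for _ in range(trials):
        parent = list(range(n))
        size = [1] * n
        for i, j in edges:
            if rng.random() < p:  # the link is retained
                ri, rj = find(parent, i), find(parent, j)
                if ri != rj:
                    if size[ri] < size[rj]:
                        ri, rj = rj, ri
                    parent[rj] = ri  # union by size
                    size[ri] += size[rj]
        fracs.append(max(size[find(parent, i)] for i in range(n)) / n)

    mean = sum(fracs) / trials
    var = sum((f - mean) ** 2 for f in fracs) / (trials - 1)
    return mean, math.sqrt(var / trials)
```

The returned standard error shrinks as 1/√trials regardless of N, which is the reason the sample count needed for a given accuracy does not grow with the system size.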

6Discussion and extensions

Message passing equations often provide a good approximation to the behavior of interacting models on large sparse networks. In this paper, for any finite graph, we have presented a network construction for which the message passing equations are exact, thereby giving a physical interpretation of this approximation. For a wide range of problems, represented by message passing equations, we have shown how to express the solutions near the critical point in terms of the principal eigenvector components of the nonbacktracking matrix.

It is well established that this approximation works well for infinite-dimensional (small-world) networks that are sparse and have no strong degree–degree correlations. It is less clear how well it should work for networks with low dimensionality, many loops, or strong correlations. We have introduced a way of quantifying the accuracy of the approximation and elaborated on two distinct classes of networks where this approach fails: low-dimensional and highly modular networks. As real-world examples of these cases we considered the road network of California and an Amazon copurchase network, where the message passing equations, indeed, proved a bad approximation.

Message passing methods based on the locally treelike approximation have a wide range of applicability and, in some cases, provide good results. However, as we have shown, for increasingly large systems these methods have a high computational cost compared with simple simulations. Also, while one can always find an estimate of the error associated with simulations in a straightforward way, the accuracy of message passing methods is entirely unknown.

All the results in this paper apply also to weighted graphs. In this case one must work with a weighted nonbacktracking matrix, obtained by multiplying each row of the unweighted nonbacktracking matrix by the weight of the corresponding link. Using this modified matrix, the critical point and the solutions near the critical point are determined exactly as in the unweighted case.
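A minimal sketch of this construction (function names are ours, and we assume link weights are stored per undirected link):

```python
import numpy as np

def nonbacktracking_matrix(edges, weights=None):
    """(Weighted) nonbacktracking matrix of an undirected graph.

    Rows and columns are indexed by directed edges (i -> j).  The
    entry for the pair (i -> j), (j -> l) with l != i equals the
    weight of link {i, j} (1 in the unweighted case), i.e., each row
    of the unweighted matrix is multiplied by the weight of its link.
    """
    directed = [e for i, j in edges for e in ((i, j), (j, i))]
    index = {e: n for n, e in enumerate(directed)}
    B = np.zeros((len(directed), len(directed)))
    for i, j in directed:
        w = 1.0 if weights is None else weights[frozenset((i, j))]
        for k, l in directed:
            if k == j and l != i:  # nonbacktracking continuation
                B[index[(i, j)], index[(k, l)]] = w
    return B

def critical_point(B):
    # The percolation threshold is the inverse leading eigenvalue.
    return 1.0 / max(abs(np.linalg.eigvals(B)))
```

For a k-regular unweighted graph the leading eigenvalue is k − 1; for the complete graph K4, for instance, this gives the critical point 1/2, and multiplying all weights by a constant rescales the threshold accordingly.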

The network construction presented here may be generalized straightforwardly to directed graphs. The directed percolation critical point is also given by the inverse of the leading eigenvalue of the nonbacktracking matrix. Solutions near the critical point can be found using the method we introduced for nonsymmetric systems (see Method 2 in the Appendix).

The nonbacktracking expansion can be generalized to various percolation problems for interdependent and multiplex networks [34]. In the case of multiplex networks, the only difference from single-layer (simplex) networks is that the resulting infinite network (the NBE) will consist of links of different colors, corresponding to links of different layers. The message passing equations written for any given model on finite multiplex networks [35] will be exact in the corresponding NBE, just as in simplex networks. A number of percolation problems on multiplex and interdependent networks exhibit discontinuous transitions. In order to retain the interesting physics in these systems, one has to account for the nonlinear terms in the basic equations [38]. Finding analytical solutions similar to our Eqs. (?) and (17) at discontinuous transitions in multiplex networks is an interesting challenge for the future.

We thank L. Zdeborová for useful comments. This work was supported by the FET proactive IP project MULTIPLEX 317532.

References
