Determination of multifractal dimensions of complex networks by means of the sandbox algorithm

Abstract

Complex networks have attracted much attention in diverse areas of science and technology. Multifractal analysis (MFA) is a useful way to systematically describe the spatial heterogeneity of both theoretical and experimental fractal patterns. In this paper, we employ the sandbox (SB) algorithm proposed by Tél et al. (Physica A, 159 (1989) 155-166) for MFA of complex networks. First we compare the SB algorithm with two existing algorithms of MFA for complex networks: the compact-box-burning (CBB) algorithm proposed by Furuya and Yakubo (Phys. Rev. E, 84 (2011) 036118), and the improved box-counting (BC) algorithm proposed by Li et al. (J. Stat. Mech.: Theor. Exp., 2014 (2014) P02020), by calculating the mass exponents of some deterministic model networks. We make a detailed comparison between the numerical and theoretical results for these model networks. The comparison shows that the SB algorithm is the most effective and feasible algorithm for calculating the mass exponents and exploring the multifractal behavior of complex networks. We then apply the SB algorithm to study the multifractal properties of some classic model networks, such as scale-free networks, small-world networks, and random networks. Our results show that multifractality exists in scale-free networks, is not obvious in small-world networks, and almost does not exist in random networks.

Key words: Complex network; multifractal analysis; sandbox algorithm; box-counting algorithm

1 Introduction

Many studies have shown that complex networks play an important role in characterizing complicated dynamic systems in nature and society [1]. This is because the nodes of a complex network represent the elements of a complex system, while the edges represent their interactions in a simplified way, so that we can focus on the topological relations between elements. Recently, complex networks have attracted the attention of many researchers from different fields. Based on the self-similarity of fractal geometry [2, 3, 4], Song et al. [1] generalized the box-counting algorithm and used it in the field of complex networks. They found that many complex networks are self-similar under certain length scales. The fractal dimension has been widely used to characterize complex fractal sets [2, 3, 4]. Because the metric on graphs is not the same as the Euclidean metric on Euclidean spaces, the box-counting algorithms used to calculate the fractal dimension of a network are much more complicated than the traditional box-counting algorithm for fractal sets in Euclidean spaces. Song et al. [5] developed some algorithms to calculate the fractal dimension of complex networks. Then Kim et al. [6, 7] proposed an improved algorithm to investigate the skeleton of networks and the fractal scaling in scale-free networks. Zhou et al. [8] proposed an algorithm based on edge-covering box counting to explore the self-similarity of complex cellular networks. Later on, a ball-covering approach [9] and an approach defined by the scaling property of the volume [10, 11] were proposed for the fractal dimensions of complex networks. The features of topology and statistics [12, 13], fractality and the percolation transition [14], the fractal transition [15] in complex networks, and the properties of scale-free Koch networks [16, 17, 18, 19, 20] have become hot topics in recent years.

As a generalization of fractal analysis, the tool of multifractal analysis (MFA) may perform better in characterizing the complexity of real-world complex networks. MFA has been widely applied in a variety of fields such as financial modelling [21, 22], biological systems [23, 24, 25, 26, 27, 28, 29, 30, 31, 32], and geophysical data analysis [33, 34, 35, 36, 37, 38, 39, 40, 41, 42].

In recent years, MFA has also been successfully used on complex networks and seems more powerful than fractal analysis. Lee and Jung [43] found that MFA is the best tool to describe the probability distribution of the clustering coefficients of a complex network. Some algorithms have been proposed to calculate the mass exponents and to study the multifractal properties of complex networks [44, 45, 46, 47]. Based on the compact-box-burning algorithm for fractal analysis of complex networks introduced by Song et al. [5], Furuya and Yakubo [44] proposed a compact-box-burning (CBB) algorithm for MFA of complex networks and applied it to show that some networks have multifractal structures. Wang et al. [45] proposed a modified fixed-size box-counting algorithm to detect the multifractal behavior of some theoretical and real networks. Li et al. [46] improved the modified fixed-size box-counting algorithm [45] and used it to investigate the multifractal properties of a family of fractal networks introduced by Gallos et al. [48]. We call the algorithm in Ref. [46] the improved BC algorithm. Recently, we adopted the improved BC algorithm to study the multifractal properties of recurrence networks constructed from fractional Brownian motions [47].

In order to easily obtain the generalized fractal dimensions of real data, Tél et al. [49] introduced a sandbox algorithm that originated from the box-counting algorithm [50]. They [49] pointed out that the sandbox algorithm gives a better estimation of the generalized fractal dimensions in practical applications. So far, the sandbox algorithm has also been widely applied in many fields. For example, Yu et al. [26] used it to perform MFA on measures based on the chaos game representation of protein sequences from complete genomes.

In this article, we employ the sandbox (SB) algorithm proposed by Tél et al. [49] for MFA of complex networks. First we compare the SB algorithm with the CBB and improved BC algorithms for MFA of complex networks in detail by calculating the mass exponents of some deterministic model networks. We make a detailed comparison between the numerical and theoretical results of these model networks. Then we apply the SB algorithm to study the multifractal property of some classic model networks, such as scale-free networks, small-world networks, and random networks.

2 Sandbox algorithm for multifractal analysis of complex networks

It is well known that the fixed-size box-covering algorithm [50] is one of the most common and important algorithms for multifractal analysis. For a given measure $\mu$ with support set $E$ in a metric space, we consider the following partition sum

$Z_{\varepsilon}(q) = \sum_{\mu(B) \neq 0} [\mu(B)]^{q}$,   (1)

where $q \in \mathbb{R}$ and the sum runs over all different nonempty boxes $B$ of a given size $\varepsilon$ in a box covering of the support set $E$. From the definition above, we can easily obtain $Z_{\varepsilon}(q) \geq 0$ and $Z_{\varepsilon}(1) = 1$. The mass exponents $\tau(q)$ of the measure $\mu$ can be defined as

$\tau(q) = \lim_{\varepsilon \to 0} \frac{\ln Z_{\varepsilon}(q)}{\ln \varepsilon}$.   (2)

Then the generalized fractal dimensions $D_{q}$ of the measure $\mu$ are defined as

$D_{q} = \frac{\tau(q)}{q-1}, \quad q \neq 1$,   (3)

and

$D_{1} = \lim_{\varepsilon \to 0} \frac{Z_{1,\varepsilon}}{\ln \varepsilon}, \quad q = 1$,   (4)

where $Z_{1,\varepsilon} = \sum_{\mu(B) \neq 0} \mu(B) \ln \mu(B)$. The linear regression of $\ln Z_{\varepsilon}(q)/(q-1)$ against $\ln \varepsilon$ for $q \neq 1$ gives a numerical estimation of the generalized fractal dimensions $D_{q}$, and similarly a linear regression of $Z_{1,\varepsilon}$ against $\ln \varepsilon$ gives $D_{1}$. In particular, $D_{0}$ is the box-counting dimension (or fractal dimension), $D_{1}$ is the information dimension, and $D_{2}$ is the correlation dimension.

In a complex network, the measure $\mu(B)$ of each box $B$ can be defined as the ratio of the number of nodes covered by the box to the total number of nodes in the entire network. In addition, we can determine the multifractality of a complex network from the shape of the $\tau(q)$ or $D_{q}$ curve. If the $\tau(q)$ curve is a straight line, or equivalently $D_{q}$ is a constant, the object is monofractal; on the other hand, if the $\tau(q)$ or $D_{q}$ curve is convex (i.e., depends nonlinearly on $q$), the object is multifractal.
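
To make Eqs. (1)-(3) concrete, here is a small illustrative sketch of ours (it is not one of the network box-covering algorithms discussed later): it estimates $\tau(q)$ for a binomial measure on the unit interval, for which the exact mass exponents $\tau(q) = -\log_{2}(p^{q} + (1-p)^{q})$ are known and serve as a check.

```python
import numpy as np

def binomial_measure(p=0.3, levels=12):
    """Box masses of a binomial multifractal measure on [0, 1] at the finest level."""
    mu = np.array([1.0])
    for _ in range(levels):
        mu = np.concatenate([p * mu, (1.0 - p) * mu])
    return mu  # 2**levels dyadic boxes of size 2**(-levels)

def tau_estimate(mu, q, coarse_levels):
    """Estimate tau(q) by regressing ln Z_eps(q) against ln eps, cf. Eqs. (1)-(2)."""
    log_Z, log_eps = [], []
    for lev in coarse_levels:
        boxes = mu.reshape(2 ** lev, -1).sum(axis=1)  # coarse-grain to 2**lev boxes
        boxes = boxes[boxes > 0]                      # keep nonempty boxes only
        log_Z.append(np.log(np.sum(boxes ** q)))
        log_eps.append(np.log(2.0 ** (-lev)))
    slope, _ = np.polyfit(log_eps, log_Z, 1)
    return slope

mu = binomial_measure(p=0.3, levels=12)
for q in (-2, 0, 2, 4):
    tau_num = tau_estimate(mu, q, coarse_levels=range(4, 11))
    tau_exact = -np.log2(0.3 ** q + 0.7 ** q)
    print(f"q = {q:+d}: tau_num = {tau_num:.3f}, tau_exact = {tau_exact:.3f}")
```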

Before we use the following SB algorithm to perform MFA of a network, we need to apply Floyd's algorithm [51] from the Matlab-BGL toolbox [52] to calculate the shortest-path distance matrix $D$ of the network from its adjacency matrix $A$.
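
As an illustration, the following sketch computes the same distance matrix with SciPy's Floyd-Warshall routine instead of the Matlab-BGL toolbox; the small adjacency matrix is only a hypothetical example.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import floyd_warshall

# A hypothetical small undirected, unweighted adjacency matrix A.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]])

# All-pairs shortest-path distances (hop counts) via Floyd-Warshall.
D = floyd_warshall(csr_matrix(A), directed=False, unweighted=True)
print(D)                                   # D[i, j] is the shortest-path distance
d = int(D[np.isfinite(D)].max())           # network diameter
print("diameter d =", d)
```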

The sandbox algorithm proposed by Tél et al. [49] is an extension of the box-counting algorithm [50]. The main idea of this sandbox algorithm is that we can randomly select a point on the fractal object as the center of a sandbox and then count the number of points in the sandbox. The generalized fractal dimensions are defined as

$D_{q} = \lim_{r \to 0} \frac{1}{q-1} \frac{\ln \langle [M(r)/M_{0}]^{q-1} \rangle}{\ln (r/d)}, \quad q \neq 1$,   (5)

where $M(r)$ is the number of points in a sandbox of radius $r$, $M_{0}$ is the total number of points in the fractal object, and $d$ is the linear size (diameter) of the fractal object. The brackets $\langle \cdot \rangle$ mean taking the statistical average over randomly chosen centers of the sandboxes. As a matter of fact, the above equation can be rewritten as

$\ln \langle [M(r)/M_{0}]^{q-1} \rangle \propto D_{q}(q-1) \ln (r/d), \quad \text{as } r/d \to 0$.   (6)

So, in practice, we often estimate the generalized fractal dimensions $D_{q}$ numerically by performing a linear regression of $\ln \langle [M(r)/M_{0}]^{q-1} \rangle / (q-1)$ against $\ln (r/d)$, and estimate the mass exponents $\tau(q)$ numerically by performing a linear regression of $\ln \langle [M(r)/M_{0}]^{q-1} \rangle$ against $\ln (r/d)$. In a complex network, we can randomly choose a node of the network as the center of a sandbox; $M(r)$ and $M_{0}$ then represent the number of nodes in the sandbox of radius $r$ and the size (total number of nodes) of the network, respectively. The SB algorithm for MFA of complex networks can be described as follows.

  1. Initially, make sure that no node in the network has been selected as the center of a sandbox.

  2. Set the radius $r$ of the sandboxes that will be used to cover the nodes, with $r$ in the range $1 \leq r \leq d$, where $d$ is the diameter of the network.

  3. Rearrange the nodes of the entire network into a random order; that is, the nodes that will be selected as sandbox centers are randomly ordered.

  4. According to the size of the network, choose the first 1000 nodes in the random order as the centers of 1000 sandboxes, and then, for each center, find all the nodes within distance $r$ of it.

  5. Count the number of nodes in each sandbox of radius $r$, and denote the number of nodes in each sandbox by $M(r)$.

  6. Calculate the statistical average $\langle [M(r)]^{q-1} \rangle$ of $[M(r)]^{q-1}$ over all 1000 sandboxes of radius $r$.

  7. For different values of $r$, repeat steps 2 to 6 to calculate the statistical average $\langle [M(r)]^{q-1} \rangle$, and then use it for the linear regression.

We need to choose an appropriate range of $r$ (the scaling range), and then calculate the generalized fractal dimensions and the mass exponents in this range. In our calculation, we perform a linear regression of $\ln \langle [M(r)/M_{0}]^{q-1} \rangle$ against $\ln (r/d)$ and take the slope as an approximation of the mass exponent $\tau(q)$ (the process for estimating the generalized fractal dimensions is similar).
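
A minimal Python sketch of the above procedure is given below. It assumes the distance matrix $D$ and diameter $d$ computed in the previous snippet; the function and variable names are our own, and the choice of the scaling range of $r$ is left to the user, as just described.

```python
import numpy as np

def sandbox_tau(D, d, q_values, r_values, n_centers=1000, seed=None):
    """Estimate the mass exponents tau(q) of a network by the SB algorithm.

    D is the shortest-path distance matrix, d the diameter, and r_values the
    chosen scaling range of sandbox radii.
    """
    rng = np.random.default_rng(seed)
    N = D.shape[0]                                     # M0: total number of nodes
    centers = rng.permutation(N)[:min(n_centers, N)]   # steps 1, 3 and 4
    tau = {}
    for q in q_values:
        x, y = [], []
        for r in r_values:                             # steps 2 and 7
            M = (D[centers] <= r).sum(axis=1)          # steps 4-5: sandbox masses M(r)
            y.append(np.log(np.mean((M / N) ** (q - 1))))  # step 6
            x.append(np.log(r / d))
        slope, _ = np.polyfit(x, y, 1)                 # slope approximates tau(q)
        tau[q] = slope
    return tau

# Example usage with the D and d computed above:
# tau = sandbox_tau(D, d, q_values=np.arange(-10, 11), r_values=range(1, d + 1))
```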

For the improved BC and CBB algorithms, we need to cover the entire network by repeating a large number of identical steps and then find the minimum possible number of boxes by performing many realizations. Then we can choose an appropriate range of box sizes to calculate the mass exponents by performing a linear regression of $\ln Z_{\varepsilon}(q)$ against $\ln \varepsilon$. In the process of finding a covering of the network, the two existing algorithms require that each node in the network be covered by no more than one box at a time. In addition, for the CBB algorithm, we have to randomly select many nodes from the candidate set to form a compact box and then repeat these steps until the entire network is covered. Because of these requirements, the two algorithms consume a large amount of CPU time and memory to record which nodes have been covered in previous steps. From the above description of the SB algorithm, however, a big difference from the improved BC and CBB algorithms is that the SB algorithm only requires randomly choosing some nodes as sandbox centers and counting the number of nodes in each sandbox; hence the sandboxes do not need to cover the entire network. Moreover, since we only care about the number of nodes in each sandbox, we also do not need to know whether the nodes in a sandbox have been covered by other sandboxes. The SB algorithm therefore consumes only a small amount of CPU time and memory. In this sense, the SB algorithm can be considered the most effective and feasible algorithm for MFA of complex networks.

3 Algorithm comparison

Now we compare the SB algorithm with the CBB and improved BC algorithms for MFA of complex networks [44, 46] in detail by calculating the mass exponents of some deterministic model networks. We make a detailed comparison between the numerical and theoretical results of these model networks.

3.1 The model networks

In the past two decades, many network models have been introduced to study and simulate the topological and fractal properties and the growth mechanisms of many complex dynamical systems in the real world. Watts and Strogatz [53] and Newman and Watts [54] proposed the WS and NW small-world network models, respectively, to explain the small-world character of many real complex networks. In order to reveal the generating principle of power-law degree distributions, Barabási and Albert [55] proposed a scale-free network model (the BA model) based on the growth and preferential attachment characteristics of real networks.

In 2002, Dorogovtsev et al. [56] introduced simple deterministic graphs to model scale-free networks. They pointed out that this family of deterministic networks (DGM networks) are pseudo-fractals. In order to better understand the self-similarity of complex networks, Rozenfeld et al. [57] generalized the DGM network to a family of scale-free networks, namely $(u,v)$-flowers and $(u,v)$-trees. The DGM network is a special case of the $(u,v)$-flowers. For $u = 1$, the networks are self-similar only in a weak sense. For $u > 1$, the networks are self-similar and possess a well-defined fractal dimension. In 2006, Song et al. [58] proposed the minimal model to study the growth laws of complex networks and to simulate the emergence of their self-similarity and small-world properties. Later on, Gallos et al. [48] introduced a generalized version of the minimal model proposed by Song et al. [58].

Based on the above network models, some researchers have studied analytically and numerically the fractal and multifractal properties of networks generated from these models. Song et al. [58] and Gallos et al. [48] gave analytical formulas for the fractal dimensions of the minimal model and its generalized version, respectively. Rozenfeld et al. [57] also put forward a theoretical framework for computing the degree exponent and the fractal dimension of the $(u,v)$-flower network. In addition, Furuya and Yakubo [44] analytically and numerically investigated the multifractal properties of several deterministic, stochastic, and real-world fractal scale-free networks. They gave an analytical formula for the mass exponents of a class of fractal scale-free model networks by a mean-field approximation. They define the mass exponents of such a network as

(7)

where $d_B$ is the fractal dimension and $\gamma$ is the degree exponent of the complex network.

For the $(u,v)$-flower ($1 \leq u \leq v$), we start with a cycle graph consisting of $u+v$ nodes and $u+v$ links. Then we obtain the $(u,v)$-flower network of generation $t$ by replacing each link in the $(t-1)$th generation network by two parallel paths of length (number of links of the path) $u$ and $v$, respectively. In Fig. 1, we show how to construct a $(u,v)$-flower network as an example. Rozenfeld et al. [57] gave analytical formulas for the degree exponent and the fractal dimension of the $(u,v)$-flower. The degree exponent of the $t$th generation $(u,v)$-flower network is given by

$\gamma = 1 + \frac{\ln (u+v)}{\ln 2}$,   (8)

and the fractal dimension (for $u > 1$) is given by

$d_B = \frac{\ln (u+v)}{\ln u}$.   (9)

By substituting Eqs. (8) and (9) into Eq. (7), the mass exponents of the $(u,v)$-flower network can be written as

(10)

Figure 1: Construction of a $(u,v)$-flower network (example of the $(u,v)$-flower network model over the first few generations).
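
The following sketch (assuming the networkx library) implements the $(u,v)$-flower construction described above; the node-labeling scheme and function names are our own choices.

```python
import networkx as nx

def uv_flower(u, v, generations):
    """Build a (u, v)-flower: generation 1 is a cycle of u + v nodes."""
    G = nx.cycle_graph(u + v)
    for _ in range(generations - 1):
        H = nx.Graph()
        H.add_nodes_from(G.nodes())
        label = G.number_of_nodes()            # next unused node label
        for a, b in G.edges():
            for path_len in (u, v):            # replace edge (a, b) by two parallel paths
                prev = a
                for _ in range(path_len - 1):  # path_len - 1 intermediate nodes
                    H.add_edge(prev, label)
                    prev = label
                    label += 1
                H.add_edge(prev, b)
        G = H
    return G

G = uv_flower(2, 2, 4)                         # e.g. a (2,2)-flower of generation 4
print(G.number_of_nodes(), G.number_of_edges())
```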

The minimal model proposed by Song et al. [58] is a probabilistic combination of two different connectivity modes: Mode I, chosen with probability $e$, and Mode II, chosen with probability $1-e$. Mode I means that all the old connections generated in the previous generation are retained; Mode II means that we remove all the old connections generated in the previous generation and add a new edge to connect two newly generated nodes. Before applying Mode I or II, we attach $m$ new nodes to each endpoint of each edge of the current generation. A remarkable advantage of this stochastic combination of the two growth modes is that the level of fractality can be controlled by the probability $e$. Song et al. [58] pointed out that the minimal model is a pure fractal network when only Mode II is used, and a pure small-world network when only Mode I is used. In this paper, we only consider the pure fractal case. We start with a star structure as in Ref. [58]. Then we obtain the minimal model of generation $t$ by adding $mk$ new nodes to each node with degree $k$ of generation $t-1$, where $m$ is a given parameter. In addition, we adopt the growth Mode II to replace all the old connections of the previous generation. In Fig. 2, we show how to construct such a pure fractal network as an example. As pointed out by Song et al. [58], the degree exponent is

$\gamma = 1 + \frac{\ln (2m+1)}{\ln m}$,   (11)

and the fractal dimension is

$d_B = \frac{\ln (2m+1)}{\ln 3}$.   (12)

By substituting Eqs. (11) and (12) into Eq. (7), the mass exponents of this minimal model can be written as

(13)

Figure 2: Construction of a pure fractal network (example of the minimal model network over the first few generations).
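
For the reader's convenience, the scaling relations behind Eqs. (11) and (12) can be read off from the construction just described; the bookkeeping below is our own sketch rather than a derivation quoted from Ref. [58].

```latex
% Mode II bookkeeping per generation (one new edge per removed edge):
%   edges:     E_{t+1} = (2m+1) E_t
%   nodes:     N_{t+1} = N_t + 2m E_t,  so  N_t ~ (2m+1)^t
%   degrees:   k_{t+1} = m k_t
%   distances: L_{t+1} \approx 3 L_t
\gamma = 1 + \frac{\ln(\text{growth factor of } N)}{\ln(\text{growth factor of } k)}
        = 1 + \frac{\ln(2m+1)}{\ln m},
\qquad
d_B = \frac{\ln(\text{growth factor of } N)}{\ln(\text{growth factor of distances})}
    = \frac{\ln(2m+1)}{\ln 3}.
```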

Based on the minimal model, Gallos et al. [48] proposed a generalized version of this model. Here we start with two nodes and one edge between them, as in Ref. [48]. Then we obtain the network of the next generation by attaching $m$ new nodes to each endpoint of each edge in the network of the current generation. With probability $e$, each edge of the current generation is retained and $x$ new edges are added to connect pairs of new nodes attached to its endpoints. Otherwise, we remove each edge of the current generation and add $x$ new edges to connect pairs of new nodes attached to its endpoints. In this paper, we only consider the pure fractal case in which the old edges are always removed. In Fig. 3, we show how to construct such a generalized network as an example. In this case, from Refs. [46, 14], the degree exponent is

$\gamma = 1 + \frac{\ln (2m+x)}{\ln m}$,   (14)

and the fractal dimension is

$d_B = \frac{\ln (2m+x)}{\ln 3}$.   (15)

By substituting Eqs. (14) and (15) into Eq. (7), we can obtain the mass exponents of the generalized version of the minimal model, which can be written as

(16)

Figure 3: Construction of a generalized network of the minimal model (example over the first few generations).
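
A sketch of a generator for the fractal (always-remove) case of this generalized model is given below, again assuming networkx; setting $x = 1$ essentially recovers the minimal model above, apart from the initial graph, and the particular pairing of new nodes is one concrete choice of ours.

```python
import networkx as nx

def fractal_model(m, x, generations):
    """Mode-II-only (fractal) version: x new edges per removed edge, x <= m."""
    G = nx.Graph([(0, 1)])                     # two nodes joined by one edge
    for _ in range(generations):
        H = nx.Graph()
        H.add_nodes_from(G.nodes())
        label = max(G.nodes()) + 1             # next unused node label
        for a, b in G.edges():
            new_a, new_b = [], []
            for endpoint, bucket in ((a, new_a), (b, new_b)):
                for _ in range(m):             # attach m new nodes to each endpoint
                    H.add_edge(endpoint, label)
                    bucket.append(label)
                    label += 1
            for i in range(x):                 # old edge dropped; x new edges between
                H.add_edge(new_a[i], new_b[i]) # new nodes of opposite endpoints
        G = H
    return G

G = fractal_model(m=2, x=1, generations=4)
print(G.number_of_nodes(), G.number_of_edges())
```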

In the following, we generate networks using the three models and numerically study their multifractality with the SB, CBB, and improved BC algorithms (see Section 2). Then we give a detailed comparison between the three algorithms based on the numerical results and the theoretical formulas of the mass exponents for these model networks.

3.2 Comparison results

In this work, we set the values of $q$ over a fixed range with a constant step. In order to compare with the results in Ref. [44], we generated the $(u,v)$-flower network with the same parameters as in Ref. [44]. Considering the limitation of the computational capacity of our computer, we only constructed the 7th generation $(u,v)$-flower network. An important step in calculating the mass exponents and the generalized fractal dimensions is to obtain an appropriate range of $r$ (i.e., the scaling range). As an example, we show the linear regressions of $\ln \langle [M(r)/M_{0}]^{q-1} \rangle$ vs $\ln (r/d)$ in the SB algorithm for the 7th generation $(u,v)$-flower network in Fig. 4. We selected an appropriate range of $r$ to fit these data points. From Fig. 4, we can observe apparent power-law behavior for the 7th generation $(u,v)$-flower network. So we selected this linear-fit scaling range of $r$ to calculate the mass exponents $\tau(q)$. In Fig. 5, we show the mass exponents of the $(u,v)$-flower network calculated by the SB, CBB, and improved BC algorithms. From Fig. 5, we can see that the numerical results obtained by the three algorithms are consistent with the theoretical results.

For the minimal model network, we started with a star structure of 5 nodes as in Ref. [58] and then generated the 5th generation network. We calculated the mass exponents by the three algorithms. The numerical results are shown in Fig. 6. From Fig. 6, we can see that the numerical results obtained by the three algorithms agree well with the theoretical results.

In addition, we also generated networks from the generalized version of the minimal model. Here we only constructed the 5th generation of the generalized network. The numerical results are shown in Fig. 7. From Fig. 7, we can see that the numerical results obtained by the three algorithms are in good agreement with the theoretical results.

Figure 4: Linear regressions for calculating the mass exponents of the 7th generation $(u,v)$-flower network. The results are calculated by the SB algorithm.

Figure 5: (Color online) Mass exponents $\tau(q)$ for the 7th generation $(u,v)$-flower network. The solid (black) line represents the mass exponents given by Eq. (10). The curves indicated by symbols (circles, squares, and diamonds) represent the numerical estimates of the mass exponents calculated by the three algorithms for MFA of complex networks, respectively.

Figure 6: (Color online) Mass exponents $\tau(q)$ for the 5th generation minimal model network. The solid (black) line represents the mass exponents given by Eq. (13). The curves indicated by symbols (circles, squares, and diamonds) represent the numerical estimates of the mass exponents calculated by the three algorithms for MFA of complex networks, respectively.

Figure 7: (Color online) Mass exponents $\tau(q)$ for the 5th generation network from the generalized version of the minimal model. The solid (black) line represents the mass exponents given by Eq. (16). The curves indicated by symbols (circles, squares, and diamonds) represent the numerical estimates of the mass exponents calculated by the three algorithms for MFA of complex networks, respectively.

It is hard to evaluate the performance of the three algorithms from the above three figures alone. In order to further quantify their performance, we introduce the relative standard error $\epsilon$ as in Ref. [29]. Based on the relative standard error $\epsilon$, we can assess how well the numerical results of the mass exponents obtained from the three algorithms agree with the analytical results for the three deterministic model networks. The relative standard error can be defined as

$\epsilon = \frac{\sigma}{\bar{S}}$,   (17)

where

$\sigma = \sqrt{\frac{1}{N_q} \sum_{q} \left[ \tau_{\rm num}(q) - \tau_{\rm theor}(q) \right]^{2}}$,   (18)

and

$\bar{S} = \sqrt{\frac{1}{N_q} \sum_{q} \left[ \tau_{\rm theor}(q) - \bar{\tau}_{\rm theor} \right]^{2}}$,   (19)

where $\tau_{\rm theor}(q)$ and $\tau_{\rm num}(q)$ represent the analytical and numerical results of the mass exponents, respectively, $\bar{\tau}_{\rm theor}$ is the average of the $\tau_{\rm theor}(q)$ values, and $N_q$ is the number of $q$ values considered. The goodness of fit is indicated by a small value of $\epsilon$ [29]. We summarize the corresponding relative standard errors between the analytical and numerical results of the mass exponents in Table 1. From Table 1, we see that the relative standard errors for these three methods are all rather small. This result indicates that all three algorithms can give correct numerical results.
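
Assuming the ratio form written in Eqs. (17)-(19), the relative standard error can be computed with a few lines of code; the function name below is our own.

```python
import numpy as np

def relative_standard_error(tau_theor, tau_num):
    """Relative standard error between analytical and numerical mass exponents."""
    tau_theor = np.asarray(tau_theor, dtype=float)
    tau_num = np.asarray(tau_num, dtype=float)
    sigma = np.sqrt(np.mean((tau_num - tau_theor) ** 2))             # Eq. (18)
    spread = np.sqrt(np.mean((tau_theor - tau_theor.mean()) ** 2))   # Eq. (19)
    return sigma / spread                                            # Eq. (17)
```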

In addition, we compare the CPU time consumed by the three algorithms for MFA of the networks generated from the three network models. The results are given in Table 2. From Table 2, we can see that the CBB algorithm takes a substantial amount of computation time and memory, while the SB algorithm consumes the least CPU time and memory. That is to say, the SB algorithm has an overwhelming advantage in terms of CPU time and memory consumption. Therefore, among the three algorithms, the SB algorithm can be considered the most effective, feasible, and accurate algorithm to calculate the mass exponents and to study the multifractal properties of complex networks.

Algorithm                          (u,v)-flower    The minimal model    The generalized model
Improved box-counting (BC) [46]    0.01406849      0.00533030           0.02067627
Compact-box-burning (CBB) [44]     0.01567521      0.02449339           0.02245775
Sandbox (SB)                       0.01408235      0.00623890           0.01299720
Table 1: The relative standard error $\epsilon$ of the three algorithms for MFA of the networks generated from the three network models.

Algorithm                          (u,v)-flower    The minimal model    The generalized model
Improved box-counting (BC) [46]    4069            8568                 1299
Compact-box-burning (CBB) [44]     817891          1003087              244206
Sandbox (SB)                       95              135                  44
Table 2: The CPU time (unit: s) of the three algorithms for MFA of the networks generated from the three network models.

4 Applications

Wang et al. [45] applied the modified fixed-size box-counting algorithm to study the multifractal properties of some theoretical model networks and real networks, such as scale-free networks, small-world networks, and random networks. All of these complex networks have been widely used in various studies. In this section, we adopt the SB algorithm to detect the multifractal behavior of these networks.

4.1 Scale-free networks

Based on the growth and preferential attachment characteristics of many real-world complex networks, Barabási and Albert [55] proposed the BA model to explain the generating mechanism of power-law degree distributions. In this paper, we use the BA model to generate scale-free networks and then investigate their multifractality.

Here, we start with an initial network of three nodes that are connected to each other. At each step, we add one new node to the network, and this new node is connected to existing nodes according to a preferential attachment probability. We denote the probability that the new node is connected to an existing node $i$ by $\Pi_i$, which is defined as $\Pi_i = k_i / \sum_j k_j$, where $k_i$ is the degree of node $i$.
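
In practice such networks can be generated, for example, with networkx's built-in Barabási-Albert generator; note that its initial seed graph differs slightly from the three mutually connected nodes described above, and the parameter values below are only placeholders.

```python
import networkx as nx

N, m = 6000, 3                      # placeholder size and number of edges per new node
G = nx.barabasi_albert_graph(N, m, seed=1)
print(G.number_of_nodes(), G.number_of_edges())
```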

In this paper, we generated 100 scale-free networks for each of the sizes 6000 nodes, 8000 nodes, and 10000 nodes. The SB algorithm is then used to calculate their average mass exponents $\tau(q)$ and average generalized fractal dimensions $D_q$, which are shown in Fig. 8. From Fig. 8, we find that the $\tau(q)$ and $D_q$ curves of the scale-free networks are convex. So multifractality exists in these scale-free networks.

Figure 8: (Color online) Average mass exponents $\tau(q)$ (upper panel) and average generalized fractal dimensions $D_q$ (lower panel) for the scale-free networks, derived from the SB algorithm. Here the average is calculated from 100 realizations.

4.2 Small-world networks

Based on a random rewiring procedure, Watts and Strogatz [53] introduced the WS small-world network model, which interpolates between a completely regular network and a completely random graph. Small-world networks not only retain the high clustering coefficient of regular networks, but also capture the short average path length of random graphs. Newman and Watts [54] proposed the NW model, a modified version of the original WS model. In the NW model, shortcuts are added between randomly chosen pairs of nodes, but no connections are removed from the regular lattice.

The NW model can be described as follows. First, we start with a regular graph: a nearest-neighbor coupled network containing $N$ nodes, in which each node is connected to its $K$ nearest neighbors by undirected edges, where $K$ is an even integer. Second, we randomly add some new connections to the coupled network: with probability $p$, we connect a pair of randomly chosen nodes.
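
For illustration, networkx provides a Newman-Watts-Strogatz generator implementing this kind of construction (it adds a random shortcut with probability $p$ for each edge of the ring); the value of $p$ below is a placeholder, while $K = 4$ matches the setting used in the next paragraph.

```python
import networkx as nx

N, K, p = 6000, 4, 0.01             # p is a placeholder; K = 4 nearest neighbors
G = nx.newman_watts_strogatz_graph(N, K, p, seed=1)
print(G.number_of_nodes(), G.number_of_edges())
```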

In this paper, we first generated three nearest-neighbor coupled networks containing 6000 nodes, 8000 nodes, and 10000 nodes, respectively, and we set $K = 4$ so that each node of these networks is connected to its 4 nearest neighbors. Then we added connections between pairs of nodes of the three coupled networks with three different values of the probability $p$, respectively. For each case, we generated 100 small-world networks using the NW model. Next, we applied the SB algorithm to calculate their average mass exponents $\tau(q)$ and average generalized fractal dimensions $D_q$. The $\tau(q)$ and $D_q$ curves are plotted in Fig. 9. From Fig. 9, we find that the $\tau(q)$ and $D_q$ curves of the small-world networks are not straight lines, but the fluctuation of these curves is very small. This means that the multifractality of these small-world networks is not obvious.

Figure 9: (Color online) Average mass exponents $\tau(q)$ (upper panel) and average generalized fractal dimensions $D_q$ (lower panel) for the small-world networks, derived from the SB algorithm. Here the average is calculated from 100 realizations.

4.3 Random networks

The Erdős-Rényi (ER) random graph model [59] is one of the classical network models for generating a completely random network. We start with $N$ isolated nodes. For every possible pair of nodes, we connect them by an undirected edge with probability $p$.
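
A sketch of this procedure with networkx, including extraction of the largest connected component as used below, is as follows; the value of $p$ is a placeholder.

```python
import networkx as nx

N, p = 6000, 0.001                  # placeholder values
G = nx.gnp_random_graph(N, p, seed=1)
giant_nodes = max(nx.connected_components(G), key=len)
giant = G.subgraph(giant_nodes).copy()
print(giant.number_of_nodes(), giant.number_of_edges())
```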

In this paper, we considered ER random graphs containing 6000 nodes, 8000 nodes, and 10000 nodes. We then connected all possible node pairs of the three ER models with three different values of the connection probability $p$, respectively. For each case, we generated 100 random networks using the ER model and extracted the largest connected component from each random network. Next, we used the SB algorithm to calculate the average mass exponents $\tau(q)$ and average generalized fractal dimensions $D_q$ of these largest connected components. In Fig. 10, we show the $\tau(q)$ and $D_q$ curves. As we can see from Fig. 10, the $\tau(q)$ and $D_q$ curves of the random networks are close to straight lines. That is to say, multifractality almost does not exist in these random networks.

Figure 10: (Color online) Average mass exponents $\tau(q)$ (upper panel) and average generalized fractal dimensions $D_q$ (lower panel) for the random networks, derived from the SB algorithm. Here the average is calculated from 100 realizations.

5 Conclusion

In this work, we employed the sandbox (SB) algorithm proposed by Tél et al. [49] for multifractal analysis (MFA) of complex networks.

First we compared the SB algorithm with two existing algorithms of MFA for complex networks: the compact-box-burning (CBB) algorithm proposed by Furuya and Yakubo [44], and the improved box-counting (BC) algorithm proposed by Li et al. [46], by calculating the mass exponents of some deterministic model networks (networks generated from the $(u,v)$-flower, the minimal model, and the generalized version of the minimal model). We made a detailed comparison between the numerical results and the theoretical ones for these model networks. The comparison results show that the SB algorithm is the most effective, feasible, and accurate algorithm to calculate the mass exponents and to explore the multifractal behavior of complex networks.

In addition, we applied the SB algorithm to study the multifractality of some classic model networks, such as scale-free networks, small-world networks, and random networks. Our numerical results show that multifractality exists in scale-free networks, is not obvious in small-world networks, and almost does not exist in random networks.

Acknowledgments

This project was supported by the Natural Science Foundation of China (Grant No. 11371016), the Chinese Program for Changjiang Scholars and Innovative Research Team in University (PCSIRT) (Grant No. IRT1179), and the Lotus Scholars Program of Hunan Province of China.

References

  1. C. Song, S. Havlin, and H.A. Makse, Nature(London) 433, 392-395(2005).
  2. B.B. Mandelbrot, The Fractal Geometry of Nature (New York: Academic Press, 1983).
  3. J. Feder, Fractals (New York: Plenum, 1988).
  4. K. Falconer, Techniques in Fractal Geometry (New York: Wiley, 1997).
  5. C. Song, L.K. Gallos, S. Havlin, and H.A. Makse, J. Stat. Mech.: Theor. Exp. 3, P03006(2007).
  6. J.S. Kim, K.I. Goh, G. Salvi, E. Oh, B. Kahng, and D. Kim, Phys. Rev. E 75, 016110(2007).
  7. J.S. Kim, K.I. Goh, B. Kahng, and D. Kim, Chaos 17, 026116(2007).
  8. W.X. Zhou, Z.Q. Jiang, and D. Sornette, Physica A 375, 741-752(2007).
  9. L. Gao, Y. Hu, and Z. Di, Phys. Rev. E 78, 046109(2008).
  10. L. Guo and X. Cai, Chin. Phys. Lett. 26(8), 088901(2009).
  11. O. Shanker, Mod. Phys. Lett. B 21, 321-326(2007).
  12. G. Bianconi, Chaos 17, 026114(2007).
  13. Z.K. Gao and N.D. Jin, Chaos 19, 033137(2009).
  14. H.D. Rozenfeld and H.A. Makse, Chem. Eng. Sci. 64, 4572-4575(2009).
  15. H.D. Rozenfeld, C. Song, and H. A. Makse, Phys. Rev. Lett. 104, 025701(2010).
  16. Z. Zhang, S. Zhou, T. Zou, and G. Chen, J. Stat. Mech.: Theor. Exp. 9, P09008(2008).
  17. Z. Zhang, S. Zhou, W. Xie, L. Chen, Y. Lin, and J. Guan, Phys. Rev. E 79, 061113(2009).
  18. J.X. Liu and X.M. Kong, Acta Phy. Sin. 59, 2244-2249(2010).
  19. Z.Z. Zhang, S.Y. Gao, and W.L. Xie, Chaos 20, 043112(2010).
  20. M.F. Dai, X.Y. Li, and L.F. Xi, Chaos 23, 033106(2013).
  21. E. Canessa, J. Phys. A: Math. Gen. 33, 3637-3651(2000).
  22. V.V. Anh, Q.M. Tieng, and Y.K. Tse, Int. Trans. Oper. Res. 7, 349-363(2000).
  23. Z.G. Yu, V. Anh, and K.S. Lau, Physica A 301, 351-361(2001).
  24. Z.G. Yu, V. Anh, and K.S. Lau, Phys. Rev. E 64, 031903(2001).
  25. Z.G. Yu, V. Anh, and K.S. Lau, Phys. Rev. E 68, 021913(2003).
  26. Z.G. Yu, V. Anh, and K.S. Lau, J. Theor. Biol. 226, 341-348(2004).
  27. Z.G. Yu, V. Anh, and K.S. Lau, Phys. Rev. E 73, 031920(2006).
  28. Z.G. Yu, Q.J. Xiao, L. Shi, J.W. Yu, and V. Anh, Chin. Phys. B 19, 068701(2010).
  29. V.V. Anh, K.S. Lau, and Z.G. Yu, Phys. Rev. E 66, 031910(2002).
  30. L.Q. Zhou, Z.G. Yu, J.Q. Deng, V. Anh, and S.C. Long, J. Theor. Biol. 232, 559-567(2005).
  31. J.J. Han and W.J. Fu, Chin. Phys. B 19, 010205(2010).
  32. S.M. Zhu, Z.G. Yu, and V. Anh, Chin. Phys. B 20, 010505(2011).
  33. D. Schertzer and S. Lovejoy, J. Geophys. Res. 92, 9693-9714(1987).
  34. S. Lovejoy, M.R. Duncan, and D. Schertzer, J. Geophys. Res. 31D, 26479-26492(1996).
  35. S. Lovejoy and D. Schertzer, Computers and Geoscience 36, 1393-1403(2010).
  36. S. Lovejoy and D. Schertzer, Computers and Geoscience 36, 1404-1413(2010).
  37. J.W. Kantelhardt, E. Koscielny-Bunde, D. Rybski, P. Braun, A. Bunde, and S. Havlin, J. Geophys. Res. 111, D01106(2006).
  38. D. Veneziano, A. Langousis, and P. Furcolo, Water Resour. Res. 42, W06D15(2006).
  39. V. Venugopal, S.G. Roux, E. Foufoula-Georgiou, and A. Arneodo, Water Resour. Res. 42, W06D14(2006).
  40. J.A. Wanliss, V.V. Anh, Z.G. Yu, and S. Watson, J. Geophys. Res. 110, A08214(2005).
  41. Z.G. Yu, V. Anh, and R. Eastes, J. Geophys. Res. 114, A05214(2009).
  42. Z.G. Yu, V. Anh, Y. Wang, D. Mao, and J. Wanliss, J. Geophys. Res. 115, A10219(2010).
  43. C.Y. Lee and S.H. Jung, Phys. Rev. E 73, 066102(2006).
  44. S. Furuya and K. Yakubo, Phys. Rev. E 84, 036118(2011).
  45. D.L. Wang, Z.G. Yu, and V. Anh, Chin. Phys. B 21(8), 080504(2012).
  46. B.G. Li, Z.G. Yu, and Y. Zhou, J. Stat. Mech.: Theor. Exp. 2014, P02020(2014).
  47. J.L. Liu, Z.G. Yu, and V. Anh, Phys. Rev. E 89, 032814(2014).
  48. L.K. Gallos, C. Song, S. Havlin, and H.A. Makse, Proc. Natl. Acad. Sci. USA. 104(19), 7746-7751(2007).
  49. T. Tél, Á. Fülöp, and T. Vicsek, Physica A 159, 155-166(1989).
  50. T.C. Halsey, M.H. Jensen, L.P. Kadanoff, I. Procaccia, and B.I. Shraiman, Phys. Rev. A 33, 1141-1151(1986).
  51. R.W. Floyd, Commun. ACM 5(6), 345(1962).
  52. D.F. Gleich, A graph library for Matlab based on the boost graph library, http://dgleich.github.com/matlab-bgl.
  53. D.J. Watts and S.H. Strogatz, Nature 393(6684), 440-442(1998).
  54. M.E.J. Newman and D.J. Watts, Phys. Lett. A 263, 341-346(1999).
  55. A.L. Barabási and R. Albert, Science 286(5439), 509-512(1999).
  56. S.N. Dorogovtsev, A.V. Goltsev, and J.F.F. Mendes, Phys. Rev. E 65, 066122(2002).
  57. H.D. Rozenfeld, S. Havlin, and D. Ben-Avraham, New J. Phys. 9, 175(2007).
  58. C. Song, S. Havlin, and H.A. Makse, Nat. Phys. 2, 275-281(2006).
  59. P. Erdős and A. Rényi, Publ. Math. Debrecen 6, 290-297(1959).