Determination of multifractal dimensions of complex networks by means of the sandbox algorithm
Abstract
Complex networks have attracted much attention in diverse areas of science and technology. Multifractal analysis (MFA) is a useful way to systematically describe the spatial heterogeneity of both theoretical and experimental fractal patterns. In this paper, we employ the sandbox (SB) algorithm proposed by Tél et al. (Physica A 159 (1989) 155–166) for MFA of complex networks. First we compare the SB algorithm with two existing algorithms of MFA for complex networks: the compact-box-burning (CBB) algorithm proposed by Furuya and Yakubo (Phys. Rev. E 84 (2011) 036118), and the improved box-counting (BC) algorithm proposed by Li et al. (J. Stat. Mech.: Theor. Exp. 2014 (2014) P02020), by calculating the mass exponents of some deterministic model networks and making a detailed comparison between the numerical and theoretical results. The comparison shows that the SB algorithm is the most effective and feasible algorithm for calculating the mass exponents and exploring the multifractal behavior of complex networks. We then apply the SB algorithm to study the multifractal properties of some classic model networks, such as scale-free networks, small-world networks, and random networks. Our results show that multifractality exists in scale-free networks, is not obvious in small-world networks, and is almost absent in random networks.
Key words: complex network; multifractal analysis; sandbox algorithm; box-counting algorithm
1 Introduction
Many studies have shown that complex networks play an important role in characterizing complicated dynamic systems in nature and society [1]. This is because the nodes of a complex network represent the elements of a complex system while the edges represent and simplify their interactions, so that we can better focus on the topological relations between elements. Recently, complex networks have attracted the attention of researchers from many different fields. Based on the self-similarity of fractal geometry [2, 3, 4], Song et al. [1] generalized the box-counting algorithm and applied it to complex networks. They found that many complex networks are self-similar under certain length scales. The fractal dimension has been widely used to characterize complex fractal sets [2, 3, 4]. Because the metric on graphs differs from the Euclidean metric, box-counting algorithms for calculating the fractal dimension of a network are much more complicated than the traditional box-counting algorithm for fractal sets in Euclidean spaces. Song et al. [5] developed some algorithms to calculate the fractal dimension of complex networks. Then Kim et al. [6, 7] proposed an improved algorithm to investigate the skeletons of networks and the fractal scaling in scale-free networks. Zhou et al. [8] proposed an algorithm based on edge-covering box counting to explore the self-similarity of complex cellular networks. Later on, a ball-covering approach [9] and an approach defined by the scaling property of the volume [10, 11] were proposed for the fractal dimensions of complex networks. The features of topology and statistics [13, 12], fractality and the percolation transition [14], the fractal transition [15] in complex networks, and the properties of scale-free Koch networks [16, 17, 18, 19, 20] have become hot topics in recent years.
As a generalization of fractal analysis, multifractal analysis (MFA) may perform better at characterizing the complexity of real-world complex networks. MFA has been widely applied in a variety of fields such as financial modelling [21, 22], biological systems [23, 24, 25, 26, 27, 28, 29, 30, 31, 32], and geophysical data analysis [33, 34, 35, 36, 37, 38, 39, 40, 41, 42].
In recent years, MFA has also been successfully applied to complex networks and seems more powerful than fractal analysis. Lee and Jung [43] found that MFA is the best tool to describe the probability distribution of the clustering coefficients of a complex network. Several algorithms have been proposed to calculate the mass exponents and to study the multifractal properties of complex networks [44, 45, 46, 47]. Based on the compact-box-burning algorithm for fractal analysis of complex networks introduced by Song et al. [5], Furuya and Yakubo [44] proposed a compact-box-burning (CBB) algorithm for MFA of complex networks and applied it to show that some networks have multifractal structures. Wang et al. [45] proposed a modified fixed-size box-counting algorithm to detect the multifractal behavior of some theoretical and real networks. Li et al. [46] improved the modified fixed-size box-counting algorithm [45] and used it to investigate the multifractal properties of a family of fractal networks introduced by Gallos et al. [48]; we call the algorithm in Ref. [46] the improved BC algorithm. Recently, we adopted the improved BC algorithm to study the multifractal properties of recurrence networks constructed from fractional Brownian motions [47].
In order to obtain the generalized fractal dimensions of real data more easily, Tél et al. [49] introduced a sandbox algorithm, which originated from the box-counting algorithm [50]. They [49] pointed out that the sandbox algorithm gives a better estimation of the generalized fractal dimensions in practical applications. The sandbox algorithm has since been widely applied in many fields. For example, Yu et al. [26] used it to perform MFA of measures based on the chaos game representation of protein sequences from complete genomes.
In this article, we employ the sandbox (SB) algorithm proposed by Tél et al. [49] for MFA of complex networks. First we compare the SB algorithm in detail with the CBB and improved BC algorithms by calculating the mass exponents of some deterministic model networks, making a detailed comparison between the numerical and theoretical results for these networks. Then we apply the SB algorithm to study the multifractal properties of some classic model networks, such as scale-free networks, small-world networks, and random networks.
2 Sandbox algorithm for multifractal analysis of complex networks
It is well known that the fixed-size box-counting algorithm [50] is one of the most common and important algorithms for multifractal analysis. For a given measure $\mu$ with support set $E$ in a metric space, we consider the following partition sum

(1) $Z_{\varepsilon}(q)=\sum_{\mu(B)\neq 0}[\mu(B)]^{q}$,

where $q\in\mathbb{R}$ and the sum runs over all different nonempty boxes $B$ of a given size $\varepsilon$ in a box covering of the support set $E$. From the definition above, we can easily obtain $Z_{\varepsilon}(q)>0$ and $Z_{\varepsilon}(1)=1$. The mass exponents $\tau(q)$ of the measure $\mu$ can be defined as

(2) $\tau(q)=\lim_{\varepsilon\to 0}\dfrac{\ln Z_{\varepsilon}(q)}{\ln\varepsilon}$.

Then the generalized fractal dimensions $D(q)$ of the measure $\mu$ are defined as

(3) $D(q)=\dfrac{\tau(q)}{q-1}$, for $q\neq 1$,

and

(4) $D(1)=\lim_{\varepsilon\to 0}\dfrac{Z_{1,\varepsilon}}{\ln\varepsilon}$, for $q=1$,

where $Z_{1,\varepsilon}=\sum_{\mu(B)\neq 0}\mu(B)\ln\mu(B)$. The linear regression of $\ln Z_{\varepsilon}(q)/(q-1)$ against $\ln\varepsilon$ gives a numerical estimation of the generalized fractal dimensions $D(q)$ for $q\neq 1$, and similarly a linear regression of $Z_{1,\varepsilon}$ against $\ln\varepsilon$ gives $D(1)$. In particular, $D(0)$ is the box-counting dimension (or fractal dimension), $D(1)$ is the information dimension, and $D(2)$ is the correlation dimension.
In a complex network, the measure $\mu$ of each box can be defined as the ratio of the number of nodes covered by the box to the total number of nodes in the entire network. In addition, we can determine the multifractality of a complex network from the shape of the $\tau(q)$ or $D(q)$ curve. If $D(q)$ is a constant, i.e., $\tau(q)$ is a straight line in $q$, the object is monofractal; on the other hand, if $D(q)$ varies with $q$, so that $\tau(q)$ deviates from a straight line, the object is multifractal.
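To make the box-counting definitions above concrete, here is a small Python sketch (ours, not part of the paper) that estimates $D(q)$ for a one-dimensional binomial multiplicative measure, for which the regression in $\ln\varepsilon$ is exactly linear; all function and parameter names are our own.

```python
import numpy as np

def binomial_measure(p=0.3, depth=14):
    """Box masses of a 1-D binomial multiplicative cascade on 2**depth cells."""
    mu = np.array([1.0])
    for _ in range(depth):
        mu = np.concatenate([mu * p, mu * (1 - p)])
    return mu

def generalized_dimensions(mu, qs, depths):
    """Estimate D(q) by regressing the partition sum against the box size."""
    Dq = []
    for q in qs:
        log_Z, log_eps = [], []
        for k in depths:                         # box size eps = 2**(-k)
            boxes = mu.reshape(2**k, -1).sum(axis=1)
            boxes = boxes[boxes > 0]
            if abs(q - 1.0) < 1e-9:              # information dimension D(1)
                log_Z.append(np.sum(boxes * np.log(boxes)))
            else:
                log_Z.append(np.log(np.sum(boxes**q)) / (q - 1.0))
            log_eps.append(-k * np.log(2.0))
        Dq.append(np.polyfit(log_eps, log_Z, 1)[0])
    return np.array(Dq)

qs = np.linspace(-5, 5, 21)
Dq = generalized_dimensions(binomial_measure(), qs, depths=range(4, 12))
# D(q) decreases with q for this measure, i.e. it is multifractal.
```

For this cascade the box masses at scale $2^{-k}$ are products of $k$ factors $p$ or $1-p$, so $Z_{\varepsilon}(q)=(p^{q}+(1-p)^{q})^{k}$ and the fitted $D(q)$ reproduces the closed form $-\ln(p^{q}+(1-p)^{q})/((q-1)\ln 2)$.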
Before we use the following SB algorithm to perform MFA of a network, we need to apply Floyd's algorithm [51] from the MatlabBGL toolbox [52] to calculate the shortest-path distance matrix of the network from its adjacency matrix.
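For readers without MatlabBGL, the distance matrix can be obtained with a plain Floyd–Warshall sweep; the following is an illustrative Python helper of our own, assuming an unweighted, symmetric 0/1 adjacency matrix.

```python
import numpy as np

def shortest_path_matrix(A):
    """All-pairs shortest-path distances of an unweighted graph via
    Floyd-Warshall, starting from a symmetric 0/1 adjacency matrix A."""
    n = A.shape[0]
    D = np.where(A > 0, 1.0, np.inf)   # direct edges have length 1
    np.fill_diagonal(D, 0.0)
    for k in range(n):
        # relax every pair of nodes through intermediate node k
        D = np.minimum(D, D[:, k:k + 1] + D[k:k + 1, :])
    return D

# Example: a 4-cycle, where opposite nodes are at distance 2.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
D = shortest_path_matrix(A)
```

In practice, `scipy.sparse.csgraph.shortest_path` computes the same matrix far more efficiently for large networks.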
The sandbox algorithm proposed by Tél et al. [49] is an extension of the box-counting algorithm [50]. Its main idea is to randomly select a point on the fractal object as the center of a sandbox and then count the number of points inside the sandbox. The generalized fractal dimensions $D(q)$ are defined as
(5) $D(q)=\lim_{r\to 0}\dfrac{\ln\langle[M(r)/M_{0}]^{q-1}\rangle}{(q-1)\ln(r/d)}$, $q\neq 1$,

where $M(r)$ is the number of points in a sandbox with a radius of $r$, $M_{0}$ is the total number of points in the fractal object, and $d$ is the linear size (diameter) of the fractal object. The brackets $\langle\cdot\rangle$ mean taking the statistical average over randomly chosen centers of the sandboxes. As a matter of fact, the above equation can be rewritten as

(6) $\ln\langle[M(r)/M_{0}]^{q-1}\rangle \approx D(q)(q-1)\ln(r/d)$, as $r\to 0$.

So, in practice, we often estimate the generalized fractal dimensions $D(q)$ numerically by performing a linear regression of $\ln\langle[M(r)/M_{0}]^{q-1}\rangle/(q-1)$ against $\ln(r/d)$, and estimate the mass exponents $\tau(q)=D(q)(q-1)$ by performing a linear regression of $\ln\langle[M(r)/M_{0}]^{q-1}\rangle$ against $\ln(r/d)$. In a complex network, we can randomly choose a node of the network as the center of a sandbox; here $M(r)$ and $M_{0}$ represent the number of nodes in the sandbox of radius $r$ and the size of the network, respectively. The SB algorithm for MFA of complex networks can be described as follows.

(i) Initially, make sure that no node in the entire network has been selected as the center of a sandbox.

(ii) Set the radius $r$ of the sandbox which will be used to cover the nodes, with $r\in[1,d]$, where $d$ is the diameter of the network.

(iii) Rearrange the nodes of the entire network into random order; that is, the nodes which will be selected as sandbox centers are randomly ordered.

(iv) According to the size of the network, choose the first 1000 nodes in the random order as the centers of 1000 sandboxes, then search all the neighbor nodes within distance $r$ of the center of each sandbox.

(v) Count the number of nodes in each sandbox of radius $r$, and denote the number of nodes in each sandbox as $M(r)$.

(vi) Calculate the statistical average $\langle[M(r)/M_{0}]^{q-1}\rangle$ of $[M(r)/M_{0}]^{q-1}$ over all 1000 sandboxes of radius $r$.

(vii) For different values of $r$, repeat steps (ii) to (vi) to calculate the statistical average $\langle[M(r)/M_{0}]^{q-1}\rangle$ and then use it for the linear regression.
We need to choose an appropriate scaling range of $r$, and then calculate the generalized fractal dimensions and the mass exponents in this range. In our calculation, we perform a linear regression of $\ln\langle[M(r)/M_{0}]^{q-1}\rangle$ against $\ln(r/d)$ and take the slope as an approximation of the mass exponents $\tau(q)$ (the process for estimating the generalized fractal dimensions $D(q)$ is similar).
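The sandbox procedure above can be sketched compactly in Python (an illustrative sketch of our own, not the paper's implementation; it takes a precomputed shortest-path distance matrix and returns the regression slopes):

```python
import numpy as np

def sandbox_tau(D, qs, radii, n_centers=1000, seed=0):
    """Sandbox estimate of the mass exponents tau(q) of a network from its
    shortest-path distance matrix D: tau(q) is the slope of
    ln<[M(r)/M0]^(q-1)> versus ln(r/d), averaged over random centers."""
    rng = np.random.default_rng(seed)
    M0 = D.shape[0]
    d = D[np.isfinite(D)].max()                      # network diameter
    centers = rng.permutation(M0)[:min(n_centers, M0)]
    # sandbox masses M(r): nodes within distance r of each chosen center
    masses = {r: (D[centers] <= r).sum(axis=1) for r in radii}
    x = np.log(np.asarray(list(radii), dtype=float) / d)
    tau = []
    for q in qs:
        y = [np.log(np.mean((masses[r] / M0) ** (q - 1))) for r in radii]
        tau.append(np.polyfit(x, y, 1)[0])
    return np.array(tau)
```

A useful sanity check: on a ring network, where $M(r)\approx 2r+1$, the estimate returns $\tau(q)\approx q-1$, i.e. $D(q)\approx 1$ for all $q$ (monofractal), as expected for a one-dimensional object.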
For the improved BC and CBB algorithms, we need to cover the entire network by repeating a large number of identical steps and then find the minimum possible number of boxes over many realizations. Then we can choose an appropriate scaling range and calculate the mass exponents by a linear regression of $\ln Z_{\varepsilon}(q)$ against $\ln\varepsilon$. In the process of finding a covering of the network, both existing algorithms require that no node be covered by more than one box at the same time. In addition, for the CBB algorithm, we have to randomly select many nodes from a candidate set to form each compact box and repeat these steps until the entire network is covered. Because of these constraints, the two algorithms consume a large amount of CPU time and memory to record which nodes have already been covered in previous steps. From the above description of the SB algorithm, however, a big difference from the improved BC and CBB algorithms is that the SB algorithm only requires randomly choosing some nodes as sandbox centers and counting the number of nodes in each sandbox; the sandboxes need not cover the entire network. Moreover, since we only count the number of nodes in each sandbox, we do not need to know whether the nodes in a sandbox have been covered by other sandboxes. The SB algorithm therefore consumes much less CPU time and memory. In this sense, the SB algorithm can be considered the most effective and feasible algorithm for MFA of complex networks.
3 Algorithm comparison
Now we compare the SB algorithm with the CBB and improved BC algorithms for MFA of complex networks [44, 46] in detail by calculating the mass exponents of some deterministic model networks. We make a detailed comparison between the numerical and theoretical results of these model networks.
3.1 The model networks
In the past two decades, many network models have been introduced to study and simulate the topological and fractal properties and the growth mechanisms of complex dynamical systems in the real world. Watts and Strogatz [53] and Newman and Watts [54] proposed the WS and NW small-world network models, respectively, to explain the small-world character of many real complex networks. In order to reveal the generating principle of power-law distributions, Barabási and Albert [55] proposed a scale-free network model (the BA model) based on the growth and preferential-attachment characteristics of real networks.
In 2002, Dorogovtsev et al. [56] introduced simple deterministic graphs to model scale-free networks. They pointed out that this family of deterministic networks (DGM networks) are pseudofractals. In order to better understand the self-similarity of complex networks, Rozenfeld et al. [57] generalized the DGM network to a family of scale-free networks, namely $(u,v)$-flowers and trees; the DGM network is a special case of the flowers. For $u=1$, the networks are self-similar only in a weak sense. For $u>1$, the networks are self-similar and possess a well-defined fractal dimension. In 2006, Song et al. [58] proposed the minimal model to study the growth laws of complex networks and to simulate the emergence of self-similarity and small-world properties in these networks. Later on, Gallos et al. [48] introduced a generalized version of the minimal model proposed by Song et al. [58].
Based on the above network models, some researchers have studied analytically and numerically the fractal and multifractal properties of the networks they generate. Song et al. [58] and Gallos et al. [48] gave analytical formulas for the fractal dimensions of the minimal model and its generalized version, respectively. Rozenfeld et al. [57] also put forward a theoretical framework for computing the degree exponent and the fractal dimension of the flower networks. In addition, Furuya and Yakubo [44] analytically and numerically investigated the multifractal properties of several deterministic, stochastic, and real-world fractal scale-free networks. They gave an analytical formula for the mass exponents of a class of fractal scale-free model networks by a mean-field approximation. They define the mass exponents $\tau(q)$ of a complex network as
(7) $\tau(q)=\begin{cases} d_{f}\,(q-1), & q\le \gamma-1,\\ \dfrac{\gamma-2}{\gamma-1}\,d_{f}\,q, & q>\gamma-1,\end{cases}$

where $d_{f}$ is the fractal dimension and $\gamma$ is the degree exponent of the complex network.
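The mean-field mass exponents are piecewise linear in $q$, with a crossover at $q=\gamma-1$: slope $d_{f}$ below it and the smaller slope $d_{f}(\gamma-2)/(\gamma-1)$ above it. A small helper (our own naming, assuming exactly this piecewise form) makes the formula easy to evaluate against numerical estimates:

```python
import numpy as np

def tau_mean_field(q, d_f, gamma):
    """Mean-field mass exponents of a fractal scale-free network with
    fractal dimension d_f and degree exponent gamma: piecewise linear
    with a continuous crossover at q = gamma - 1."""
    q = np.asarray(q, dtype=float)
    return np.where(q <= gamma - 1,
                    d_f * (q - 1),
                    d_f * q * (gamma - 2) / (gamma - 1))
```

By construction $\tau(0)=-d_{f}$ and $\tau(1)=0$, and the two branches agree at the crossover, so $D(q)=\tau(q)/(q-1)$ decreases from $d_{f}$ toward $d_{f}(\gamma-2)/(\gamma-1)$ as $q$ grows, which is the signature of multifractality.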
For the $(u,v)$-flower, we start with a cycle graph consisting of $w=u+v$ nodes and $w$ links. Then we obtain the flower network of generation $t$ by replacing each link of the $(t-1)$th-generation network by two parallel paths of length (number of links of the path) $u$ and $v$, respectively. In Fig. 1, we show how to construct such a flower network as an example. Rozenfeld et al. [57] gave analytical formulas for the degree exponent and the fractal dimension of the $(u,v)$-flower. The degree exponent of the flower network is given by

(8) $\gamma = 1+\dfrac{\ln(u+v)}{\ln 2}$,

and the fractal dimension (for $1<u\le v$) is

(9) $d_{f}=\dfrac{\ln(u+v)}{\ln u}$.

By substituting Eqs. (8) and (9) into Eq. (7), the mass exponents of the flower network can be written as

(10) $\tau(q)=\begin{cases}\dfrac{\ln(u+v)}{\ln u}\,(q-1), & q\le \dfrac{\ln(u+v)}{\ln 2},\\[2mm] \dfrac{\ln\big((u+v)/2\big)}{\ln u}\,q, & q>\dfrac{\ln(u+v)}{\ln 2}.\end{cases}$
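The recursive construction of the $(u,v)$-flower described above can be sketched as follows (an illustrative Python helper of our own; it returns the node count and edge list, with each old link replaced by a $u$-path and a $v$-path through fresh nodes):

```python
def uv_flower(u, v, generations):
    """Edge list of a (u,v)-flower: start from a cycle of u+v nodes and,
    at each generation, replace every edge by two parallel paths of
    u and v links, respectively."""
    w = u + v
    edges = [(i, (i + 1) % w) for i in range(w)]
    n = w                                   # number of nodes so far
    for _ in range(generations - 1):
        new_edges = []
        for (a, b) in edges:
            for length in (u, v):
                # a path of `length` links from a to b via length-1 new nodes
                path = [a] + list(range(n, n + length - 1)) + [b]
                n += length - 1
                new_edges.extend(zip(path[:-1], path[1:]))
        edges = new_edges
    return n, edges
```

The edge count multiplies by $u+v$ per generation and each old edge adds $u+v-2$ new nodes, consistent with the scaling behind Eqs. (8) and (9).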
The minimal model proposed by Song et al. [58] is a probabilistic combination of two different connectivity modes: Mode I with probability $e$ and Mode II with probability $1-e$. Mode I means that all the old connections generated in the previous generation are retained; Mode II means that we remove all the old connections generated in the previous generation and, for each removed connection, add a new edge connecting two of the newly generated nodes. Before applying Mode I or II, we attach $m$ new nodes to each endpoint of each edge of the current generation. A remarkable advantage of this stochastic combination of the two growth modes is that the level of fractality can be controlled by the probability $e$. Song et al. [58] pointed out that the minimal model is a pure fractal network when $e=0$ (Mode II is always used), and a pure small-world network when $e=1$. In this paper, we only consider the minimal model with $e=0$. We start with a star structure as in Ref. [58]. Then we obtain the minimal model of generation $t$ by adding $mk$ new nodes to each node with degree $k$ of generation $t-1$, where $m$ is a given parameter. In addition, we adopt the growth Mode II to replace all the old connections of the previous generation. In Fig. 2, we show an example of how to construct the pure fractal network. As pointed out by Song et al. [58], the degree exponent is

(11) $\gamma = 1+\dfrac{\ln(2m+1)}{\ln m}$,

and the fractal dimension is

(12) $d_{f}=\dfrac{\ln(2m+1)}{\ln 3}$.

By substituting Eqs. (11) and (12) into Eq. (7), the mass exponents of this minimal model can be written as

(13) $\tau(q)=\begin{cases}\dfrac{\ln(2m+1)}{\ln 3}\,(q-1), & q\le \dfrac{\ln(2m+1)}{\ln m},\\[2mm] \dfrac{\ln(2+1/m)}{\ln 3}\,q, & q>\dfrac{\ln(2m+1)}{\ln m}.\end{cases}$
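The pure fractal (Mode II) growth rule can be sketched as follows. This is our own illustrative reading of the construction, assuming a 5-node star seed as used in Section 3.2, with the rewired edge joining one randomly chosen new neighbor of each endpoint:

```python
import random

def minimal_model(m, generations, seed=0):
    """Song-Havlin-Makse minimal model in the pure fractal case (Mode II):
    each generation attaches m*k new leaves to every node of degree k, then
    every old edge (u, v) is replaced by an edge between one new neighbor
    of u and one new neighbor of v."""
    rng = random.Random(seed)
    edges = [(0, i) for i in range(1, 5)]    # generation 1: a 5-node star
    n = 5
    for _ in range(generations - 1):
        degree = {}
        for u, v in edges:
            degree[u] = degree.get(u, 0) + 1
            degree[v] = degree.get(v, 0) + 1
        children = {u: [] for u in range(n)}
        new_edges = []
        for u, k in degree.items():
            for _ in range(m * k):           # attach m*k new leaves to u
                children[u].append(n)
                new_edges.append((u, n))
                n += 1
        for u, v in edges:                   # Mode II: rewire old edges
            new_edges.append((rng.choice(children[u]), rng.choice(children[v])))
        edges = new_edges
    return n, edges
```

Each generation multiplies the edge count by $2m+1$ (and the node count asymptotically by the same factor), which is the mass growth factor entering Eqs. (11) and (12).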
Based on the minimal model, Gallos et al. [48] proposed a generalized version of this model. Here we start with two nodes and one edge between them as in Ref. [48]. Then we obtain the network of the next generation by attaching $m$ new nodes to each endpoint of each edge of the current generation. With probability $e$, each edge of the current generation is retained and $x$ new edges are added to connect pairs of new nodes attached to its endpoints. Otherwise, we remove each edge of the current generation and add $x$ new edges to connect pairs of new nodes attached to its endpoints. In this paper, we only consider the network with $e=0$. In Fig. 3, we show an example of how to construct the generalized network. In the case $e=0$, from Refs. [46, 14], the degree exponent is

(14) $\gamma = 1+\dfrac{\ln(2m+x)}{\ln m}$,

and the fractal dimension is

(15) $d_{f}=\dfrac{\ln(2m+x)}{\ln 3}$.

By substituting Eqs. (14) and (15) into Eq. (7), we can obtain the mass exponents of the generalized version of the minimal model, which can be written as

(16) $\tau(q)=\begin{cases}\dfrac{\ln(2m+x)}{\ln 3}\,(q-1), & q\le \dfrac{\ln(2m+x)}{\ln m},\\[2mm] \dfrac{\ln\big((2m+x)/m\big)}{\ln 3}\,q, & q>\dfrac{\ln(2m+x)}{\ln m}.\end{cases}$
In the following, we generate networks using these three models and numerically study their multifractality by the SB, CBB, and improved BC algorithms (the SB algorithm is described in Section 2). Then we give a detailed comparison of the three algorithms based on the numerical results and the theoretical formulas of the mass exponents for these model networks.
3.2 Comparison results
In this work, we calculated $\tau(q)$ and $D(q)$ over a range of $q$ values with a uniform step. In order to compare with the results in Ref. [44], we generated the flower network with the same parameters as used there. Considering the limited computational capacity of our computer, we only constructed the 7th-generation flower network. An important step in calculating the mass exponents $\tau(q)$ and the generalized fractal dimensions $D(q)$ is to obtain an appropriate scaling range of $r$. As an example, Fig. 4 shows the linear regressions of $\ln\langle[M(r)/M_{0}]^{q-1}\rangle$ versus $\ln(r/d)$ in the SB algorithm for the 7th-generation flower network. From Fig. 4, we can observe apparent power-law behavior for this network, so we selected a linear-fit scaling range of $r$ and used it to calculate the mass exponents $\tau(q)$. In Fig. 5, we show the mass exponents $\tau(q)$ of the flower network calculated by the SB, CBB, and improved BC algorithms. From Fig. 5, we can see that the numerical results obtained by the three algorithms are consistent with the theoretical results.
For the minimal model network, we started with a star structure of 5 nodes as in Ref. [58] and then generated the 5th-generation network. We calculated the mass exponents $\tau(q)$ by the three algorithms; the numerical results are shown in Fig. 6. From Fig. 6, we can see that the numerical results obtained by the three algorithms agree well with the theoretical results.
In addition, we also generated the generalized version of the minimal model; here we only constructed the 5th generation of the generalized network. The numerical results are shown in Fig. 7. From Fig. 7, we can see that the numerical results obtained by the three algorithms are in good agreement with the theoretical results.
It is hard to evaluate the performance of the three algorithms from the above three figures alone. In order to further quantify their performance, we introduce the relative standard error $\eta$ as in Ref. [29]. Based on $\eta$, we can assess how well the numerical mass exponents obtained from the three algorithms agree with the analytical results for the three deterministic model networks. The relative standard error can be defined as

(17) $\eta = \dfrac{SE_{err}}{SE_{tot}}$,

where

(18) $SE_{err}=\sqrt{\dfrac{\sum_{q}\big(\tau_{num}(q)-\tau_{ana}(q)\big)^{2}}{n-2}}$

and

(19) $SE_{tot}=\sqrt{\dfrac{\sum_{q}\big(\tau_{ana}(q)-\overline{\tau_{ana}}\big)^{2}}{n-1}}$;

here $\tau_{ana}(q)$ and $\tau_{num}(q)$ represent the analytical and numerical results of the mass exponents, respectively, $n$ is the number of $q$ values considered, and $\overline{\tau_{ana}}$ is the average of the $\tau_{ana}(q)$. A smaller value of $\eta$ indicates a better fit [29]. We summarize the corresponding relative standard errors between the analytical and numerical results of the mass exponents in Table I. From Table I, we see that the relative standard errors for the three methods are all rather small. This result indicates that all three algorithms give correct numerical results.
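Under one common reading of this goodness-of-fit measure (the $n-2$ and $n-1$ normalizations below are our assumption), the relative standard error between numerical and analytical mass exponents can be computed as:

```python
import numpy as np

def relative_standard_error(tau_num, tau_ana):
    """Relative standard error eta = SE_err / SE_tot between numerical and
    analytical mass exponents evaluated on the same grid of q values."""
    tau_num = np.asarray(tau_num, dtype=float)
    tau_ana = np.asarray(tau_ana, dtype=float)
    n = tau_num.size
    se_err = np.sqrt(np.sum((tau_num - tau_ana) ** 2) / (n - 2))
    se_tot = np.sqrt(np.sum((tau_ana - tau_ana.mean()) ** 2) / (n - 1))
    return se_err / se_tot
```

The measure is scale-free: it compares the typical deviation of the numerical curve from the analytical one against the overall spread of the analytical curve, so values of order 0.01, as in Table I, indicate deviations of about one percent of that spread.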
In addition, we compared the CPU time consumed by the three algorithms for MFA of the networks generated from the three network models. The results are given in Table II. From Table II, we can see that the CBB algorithm takes a substantial amount of computation time and memory, while the SB algorithm consumes the least CPU time and memory. That is to say, the SB algorithm has an overwhelming advantage in CPU time and memory consumption. Therefore, among the three algorithms, the SB algorithm can be considered the most effective, feasible, and accurate algorithm for calculating the mass exponents and studying the multifractal properties of complex networks.
Table I. The relative standard errors between the analytical and numerical results of the mass exponents.

Algorithm                        | flower     | The minimal model | The generalized model
improved box-counting (BC) [46]  | 0.01406849 | 0.00533030        | 0.02067627
compact-box-burning (CBB) [44]   | 0.01567521 | 0.02449339        | 0.02245775
sandbox (SB)                     | 0.01408235 | 0.00623890        | 0.01299720
4 Applications
Wang et al. [45] applied the modified fixed-size box-counting algorithm to study the multifractal properties of some theoretical and real networks, such as scale-free networks, small-world networks, and random networks. All of these complex networks have been widely used in various studies. In this section, we adopt the SB algorithm to detect the multifractal behavior of these networks.
4.1 Scalefree networks
Based on the growth and preferential-attachment characteristics of many real-world complex networks, Barabási and Albert [55] proposed the BA model to explain the generating mechanism of power-law degree distributions. In this paper, we use the BA model to generate scale-free networks and then investigate their multifractality.
Here, we start with a small initial network whose three nodes are connected to each other. At each step, we add one new node and connect it to $m$ existing nodes chosen preferentially. We denote by $\Pi_{i}$ the probability that the new node is connected to the existing node $i$, defined as $\Pi_{i}=k_{i}/\sum_{j}k_{j}$, where $k_{i}$ is the degree of node $i$.
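The preferential-attachment rule can be sketched with a standard stub list, in which node $i$ appears $k_{i}$ times so that uniformly sampling a stub realizes $\Pi_{i}=k_{i}/\sum_{j}k_{j}$. This is our own minimal implementation; the loop that avoids duplicate targets is an implementation choice, not prescribed by the text:

```python
import random

def barabasi_albert(n, m=3, seed=0):
    """Grow a BA scale-free network: start from a fully connected seed of
    3 nodes and attach each new node to m distinct existing nodes chosen
    with probability proportional to their degree."""
    rng = random.Random(seed)
    edges = [(0, 1), (1, 2), (0, 2)]        # fully connected 3-node seed
    stubs = [0, 1, 1, 2, 0, 2]              # node i appears degree(i) times
    for new in range(3, n):
        targets = set()
        while len(targets) < m:             # degree-proportional sampling
            targets.add(rng.choice(stubs))
        for t in targets:
            edges.append((new, t))
            stubs.extend([new, t])
    return edges
```

Each added node contributes exactly $m$ edges, and early nodes accumulate high degree, producing the hubs responsible for the power-law tail.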
In this paper, we generated 100 scale-free networks for each of three sizes: 6000 nodes, 8000 nodes, and 10000 nodes. The SB algorithm was then used to calculate their average mass exponents $\tau(q)$ and average generalized fractal dimensions $D(q)$, which are shown in Fig. 8. From Fig. 8, we find that the $\tau(q)$ and $D(q)$ curves of the scale-free networks are convex, so multifractality exists in these scale-free networks.
4.2 Smallworld networks
Based on a random rewiring procedure, Watts and Strogatz [53] introduced the WS small-world network model, which interpolates between a completely regular network and a completely random graph. Small-world networks not only retain the high clustering coefficient of regular networks, but also capture the short average path length of random graphs. Newman and Watts [54] proposed the NW model, a modified version of the original WS model: shortcuts are added between randomly chosen pairs of nodes, but no connections are removed from the regular lattice.
The NW model can be described as follows. First, we start with a regular graph: a nearest-neighbor coupled network containing $N$ nodes, in which each node is connected to its $K$ nearest neighbors by undirected edges, where $K$ is an even integer. Second, we randomly add some new connections to the coupled network: each randomly chosen pair of nodes is connected with probability $p$.
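The two steps above, ring lattice plus random shortcuts, can be sketched as follows (an illustrative helper of our own; edges are stored as sorted node pairs so duplicates are avoided):

```python
import random

def newman_watts(n, K=4, p=0.01, seed=0):
    """NW small-world network: a ring in which each node links to its K
    nearest neighbours, plus shortcuts added with probability p per pair;
    no connections are ever removed."""
    rng = random.Random(seed)
    ring = {(i, (i + j) % n) for i in range(n) for j in range(1, K // 2 + 1)}
    edges = {(min(a, b), max(a, b)) for a, b in ring}
    for i in range(n):                       # add random shortcuts
        for j in range(i + 1, n):
            if rng.random() < p and (i, j) not in edges:
                edges.add((i, j))
    return sorted(edges)
```

With $p=0$ the construction reduces to the pure ring lattice with $nK/2$ edges and every node of degree exactly $K$; increasing $p$ only adds shortcuts, as in the NW prescription.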
In this paper, we first generated three nearest-neighbor coupled networks containing 6000 nodes, 8000 nodes, and 10000 nodes, respectively, and set $K=4$ so that each node of these networks is connected to its 4 nearest neighbors. Then we added connections between randomly chosen pairs of nodes of the three coupled networks with a given probability $p$ in each case. For each case, we generated 100 small-world networks using the NW model. Next, we applied the SB algorithm to calculate their average mass exponents $\tau(q)$ and average generalized fractal dimensions $D(q)$. The $\tau(q)$ and $D(q)$ curves are plotted in Fig. 9. From Fig. 9, we find that the $\tau(q)$ and $D(q)$ curves of the small-world networks are not straight lines, but the fluctuation of these curves is very small. This means that the multifractality of these small-world networks is not obvious.
4.3 Random networks
The Erdős–Rényi (ER) random graph model [59] is one of the classical models for generating a completely random network. We start with $N$ isolated nodes, and for every possible pair of nodes we connect them by an undirected edge with probability $p$.
In this paper, we considered three ER random graphs containing 6000 nodes, 8000 nodes, and 10000 nodes. We connected the possible node pairs of the three ER models with a given probability $p$ in each case. For each case, we generated 100 random networks using the ER model and extracted the largest connected component from each random network. Next, we used the SB algorithm to calculate the average mass exponents $\tau(q)$ and average generalized fractal dimensions $D(q)$ of these largest connected components. In Fig. 10, we show the $\tau(q)$ and $D(q)$ curves. As we can see from Fig. 10, the $\tau(q)$ and $D(q)$ curves of the random networks are close to straight lines. That is to say, multifractality is almost absent in these random networks.
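The ER sampling and the extraction of the largest connected component can be sketched as follows (our own minimal implementation, using breadth-first search over an adjacency list):

```python
import random
from collections import deque

def erdos_renyi_lcc(n, p, seed=0):
    """Sample an ER random graph G(n, p) and return the node set of its
    largest connected component, found by breadth-first search."""
    rng = random.Random(seed)
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    seen, best = set(), set()
    for s in range(n):
        if s in seen:
            continue
        comp, queue = {s}, deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in comp:
                    comp.add(v)
                    queue.append(v)
        seen |= comp
        best = max(best, comp, key=len)
    return best
```

In the supercritical regime $np>1$ the largest component contains a finite fraction of all nodes, which is why the MFA here is applied to that component rather than to the (possibly disconnected) raw graph.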
5 Conclusion
In this work, we employed the sandbox (SB) algorithm proposed by Tél et al. [49] for multifractal analysis (MFA) of complex networks.
First we compared the SB algorithm with two existing algorithms of MFA for complex networks: the compact-box-burning (CBB) algorithm proposed by Furuya and Yakubo [44], and the improved box-counting (BC) algorithm proposed by Li et al. [46], by calculating the mass exponents of some deterministic model networks (networks generated from the flower model, the minimal model, and the generalized version of the minimal model). We made a detailed comparison between the numerical results and the theoretical ones for these model networks. The comparison shows that the SB algorithm is the most effective, feasible, and accurate of the three algorithms for calculating the mass exponents and exploring the multifractal behavior of complex networks.
In addition, we applied the SB algorithm to study the multifractality of some classic model networks, such as scale-free networks, small-world networks, and random networks. Our numerical results show that multifractality exists in scale-free networks, is not obvious in small-world networks, and is almost absent in random networks.
Acknowledgments
This project was supported by the Natural Science Foundation of China (Grant No. 11371016), the Chinese Program for Changjiang Scholars and Innovative Research Team in University (PCSIRT) (Grant No. IRT1179), and the Lotus Scholars Program of Hunan Province of China.
References
[1] C. Song, S. Havlin, and H.A. Makse, Nature (London) 433, 392–395 (2005).
[2] B.B. Mandelbrot, The Fractal Geometry of Nature (New York: Academic Press, 1983).
[3] J. Feder, Fractals (New York: Plenum, 1988).
[4] K. Falconer, Techniques in Fractal Geometry (New York: Wiley, 1997).
[5] C. Song, L.K. Gallos, S. Havlin, and H.A. Makse, J. Stat. Mech.: Theor. Exp. 3, P03006 (2007).
[6] J.S. Kim, K.I. Goh, G. Salvi, E. Oh, B. Kahng, and D. Kim, Phys. Rev. E 75, 016110 (2007).
[7] J.S. Kim, K.I. Goh, B. Kahng, and D. Kim, Chaos 17, 026116 (2007).
[8] W.X. Zhou, Z.Q. Jiang, and D. Sornette, Physica A 375, 741–752 (2007).
[9] L. Gao, Y. Hu, and Z. Di, Phys. Rev. E 78, 046109 (2008).
[10] L. Guo and X. Cai, Chin. Phys. Lett. 26(8), 088901 (2009).
[11] O. Shanker, Mod. Phys. Lett. B 21, 321–326 (2007).
[12] G. Bianconi, Chaos 17, 026114 (2007).
[13] Z.K. Gao and N.D. Jin, Chaos 19, 033137 (2009).
[14] H.D. Rozenfeld and H.A. Makse, Chem. Eng. Sci. 64, 4572–4575 (2009).
[15] H.D. Rozenfeld, C. Song, and H.A. Makse, Phys. Rev. Lett. 104, 025701 (2010).
[16] Z. Zhang, S. Zhou, T. Zou, and G. Chen, J. Stat. Mech.: Theor. Exp. 9, P09008 (2008).
[17] Z. Zhang, S. Zhou, W. Xie, L. Chen, Y. Lin, and J. Guan, Phys. Rev. E 79, 061113 (2009).
[18] J.X. Liu and X.M. Kong, Acta Phys. Sin. 59, 2244–2249 (2010).
[19] Z.Z. Zhang, S.Y. Gao, and W.L. Xie, Chaos 20, 043112 (2010).
[20] M.F. Dai, X.Y. Li, and L.F. Xi, Chaos 23, 033106 (2013).
[21] E. Canessa, J. Phys. A: Math. Gen. 33, 3637–3651 (2000).
[22] V.V. Anh, Q.M. Tieng, and Y.K. Tse, Int. Trans. Oper. Res. 7, 349–363 (2000).
[23] Z.G. Yu, V. Anh, and K.S. Lau, Physica A 301, 351–361 (2001).
[24] Z.G. Yu, V. Anh, and K.S. Lau, Phys. Rev. E 64, 031903 (2001).
[25] Z.G. Yu, V. Anh, and K.S. Lau, Phys. Rev. E 68, 021913 (2003).
[26] Z.G. Yu, V. Anh, and K.S. Lau, J. Theor. Biol. 226, 341–348 (2004).
[27] Z.G. Yu, V. Anh, and K.S. Lau, Phys. Rev. E 73, 031920 (2006).
[28] Z.G. Yu, Q.J. Xiao, L. Shi, J.W. Yu, and V. Anh, Chin. Phys. B 19, 068701 (2010).
[29] V.V. Anh, K.S. Lau, and Z.G. Yu, Phys. Rev. E 66, 031910 (2002).
[30] L.Q. Zhou, Z.G. Yu, J.Q. Deng, V. Anh, and S.C. Long, J. Theor. Biol. 232, 559–567 (2005).
[31] J.J. Han and W.J. Fu, Chin. Phys. B 19, 010205 (2010).
[32] S.M. Zhu, Z.G. Yu, and V. Anh, Chin. Phys. B 20, 010505 (2011).
[33] D. Schertzer and S. Lovejoy, J. Geophys. Res. 92, 9693–9714 (1987).
[34] S. Lovejoy, M.R. Duncan, and D. Schertzer, J. Geophys. Res. 101(D21), 26479–26492 (1996).
[35] S. Lovejoy and D. Schertzer, Computers & Geosciences 36, 1393–1403 (2010).
[36] S. Lovejoy and D. Schertzer, Computers & Geosciences 36, 1404–1413 (2010).
[37] J.W. Kantelhardt, E. Koscielny-Bunde, D. Rybski, P. Braun, A. Bunde, and S. Havlin, J. Geophys. Res. 111, D01106 (2006).
[38] D. Veneziano, A. Langousis, and P. Furcolo, Water Resour. Res. 42, W06D15 (2006).
[39] V. Venugopal, S.G. Roux, E. Foufoula-Georgiou, and A. Arneodo, Water Resour. Res. 42, W06D14 (2006).
[40] J.A. Wanliss, V.V. Anh, Z.G. Yu, and S. Watson, J. Geophys. Res. 110, A08214 (2005).
[41] Z.G. Yu, V. Anh, and R. Eastes, J. Geophys. Res. 114, A05214 (2009).
[42] Z.G. Yu, V. Anh, Y. Wang, D. Mao, and J. Wanliss, J. Geophys. Res. 115, A10219 (2010).
[43] C.Y. Lee and S.H. Jung, Phys. Rev. E 73, 066102 (2006).
[44] S. Furuya and K. Yakubo, Phys. Rev. E 84, 036118 (2011).
[45] D.L. Wang, Z.G. Yu, and V. Anh, Chin. Phys. B 21(8), 080504 (2012).
[46] B.G. Li, Z.G. Yu, and Y. Zhou, J. Stat. Mech.: Theor. Exp. 2014, P02020 (2014).
[47] J.L. Liu, Z.G. Yu, and V. Anh, Phys. Rev. E 89, 032814 (2014).
[48] L.K. Gallos, C. Song, S. Havlin, and H.A. Makse, Proc. Natl. Acad. Sci. USA 104(19), 7746–7751 (2007).
[49] T. Tél, Á. Fülöp, and T. Vicsek, Physica A 159, 155–166 (1989).
[50] T.C. Halsey, M.H. Jensen, L.P. Kadanoff, I. Procaccia, and B.I. Shraiman, Phys. Rev. A 33, 1141–1151 (1986).
[51] R.W. Floyd, Commun. ACM 5(6), 345 (1962).
[52] D.F. Gleich, A graph library for Matlab based on the Boost Graph Library, http://dgleich.github.com/matlabbgl.
[53] D.J. Watts and S.H. Strogatz, Nature 393(6684), 440–442 (1998).
[54] M.E.J. Newman and D.J. Watts, Phys. Lett. A 263, 341–346 (1999).
[55] A.L. Barabási and R. Albert, Science 286(5439), 509–512 (1999).
[56] S.N. Dorogovtsev, A.V. Goltsev, and J.F.F. Mendes, Phys. Rev. E 65, 066122 (2002).
[57] H.D. Rozenfeld, S. Havlin, and D. ben-Avraham, New J. Phys. 9, 175 (2007).
[58] C. Song, S. Havlin, and H.A. Makse, Nat. Phys. 2, 275–281 (2006).
[59] P. Erdős and A. Rényi, Publ. Math. Debrecen 6, 290–297 (1959).