Structural and spectral properties of a family of deterministic recursive trees: Rigorous solutions

Yi Qi, Zhongzhi Zhang, Bailu Ding, Shuigeng Zhou, and Jihong Guan
School of Computer Science, Fudan University, Shanghai 200433, China
Shanghai Key Lab of Intelligent Information Processing, Fudan University, Shanghai 200433, China
Department of Computer Science and Technology, Tongji University, 4800 Cao’an Road, Shanghai 201804, China
zhangzz@fudan.edu.cn, sgzhou@fudan.edu.cn, jhguan@tongji.edu.cn
Abstract

As one of the most significant models, the uniform recursive tree (URT) has found many applications in a variety of fields. In this paper, we study rigorously the structural features and spectral properties of the adjacency matrix for a family of deterministic uniform recursive trees (DURTs), which are deterministic versions of the URT. Firstly, from the perspective of complex networks, we investigate analytically the main structural characteristics of DURTs and obtain exact solutions for these properties, which include the degree distribution, average path length, distribution of node betweenness, and degree correlations. Then we determine the complete set of eigenvalues and the corresponding eigenvectors of the adjacency matrix for DURTs. Our research may shed light on a better understanding of the features of the URT. Also, the analytical methods used here can be extended to many other deterministic networks, making the precise computation of their properties (especially the full spectral characteristics) possible.

PACS numbers: 89.75.Hc, 02.10.Yn, 02.10.Ud, 89.75.Fb

1 Introduction

Structural characterization is of great significance in the study of complex networks, which have become a focus of attention for the scientific community [1, 2, 3]. In the past decade, great efforts have been dedicated to characterizing and understanding the structural properties of real networks [4], including the degree distribution, average path length (APL), betweenness, degree correlations, fractality, and so forth. These measures have a profound effect on various dynamical processes taking place on top of complex networks [5], such as robustness [6, 7, 8, 9], epidemic spreading [10, 11], synchronization [12, 13, 14, 15], and games [16].

The foregoing measures focus on direct measurements of the structural properties of networks and play an important role in understanding network complexity [17]. Aside from these measurements, there exists a vast literature related to the spectra of complex networks [18, 19, 20, 21], which provide useful insight into the relevant structural properties of, and dynamical processes on, graphs. While structural features capture the static topological properties of complex networks, the spectra (eigenvalues and eigenvectors) of the adjacency matrix provide global measures characterizing the network topology. In a variety of dynamical processes, the impact of network structure is encoded in the spectra of its adjacency matrix, especially the extreme eigenvalues and their corresponding eigenvectors. For example, in dynamical models for the spreading of infections, the epidemic threshold is governed by the largest eigenvalue of the adjacency matrix [22, 23], which also plays a fundamental role in determining critical couplings for the onset of coherent behavior [24]. In addition, recent research showed that in the Susceptible-Infected model of epidemic outbreaks on complex networks, the eigenvector corresponding to the largest eigenvalue is related to the spreading power of network nodes [25]. In spite of the importance of the eigenvalues and eigenvectors of the adjacency matrix, however, until now most analysis of the spectra has been confined to approximate or numerical methods, the latter of which are prohibitively time and memory consuming for large-scale networks [18].

On the other hand, in order to mimic real systems and study their structural properties, a great number of network models have been presented [1, 2, 3], among which the uniform recursive tree (URT) is perhaps one of the most widely studied [26]. It is now established that the URT and the famous Erdös-Rényi model [27] constitute the two principal models [28, 29] of random graphs. As one of the simplest trees, the URT is constructed as follows: starting with a single node, at each time step we attach a new node to an existing node selected uniformly at random. It has found many important applications in various areas. For example, it has been suggested as a model for the spread of epidemics [30], the family trees of preserved copies of ancient or medieval texts [31], and chain letters and pyramid schemes [32], to name but a few.

Recently, a class of deterministically growing tree-like networks has been proposed to describe real-world systems whose number of nodes increases exponentially with time [33]. We call them deterministic uniform recursive trees (DURTs), since they are deterministic versions of the URT. This kind of deterministic model has received considerable attention from the scientific community and has turned out to be a useful tool [34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51]. Although the uniform recursive tree is well understood [26, 28, 29, 30, 31, 52, 53], relatively little is known about the structural and other properties of the DURTs [33].

In this paper, from the viewpoint of complex networks, we offer a comprehensive analysis of the deterministic uniform recursive trees (DURTs) [33]. We first determine exactly the relevant structural properties of the DURTs, including the degree distribution, average path length, betweenness distribution, and degree correlations. Then, using methods of graph theory and algebra, we calculate all the eigenvalues and eigenvectors of the adjacency matrix, which are obtained through recurrence relations derived from the very structure of the DURTs.

2 The deterministic uniform recursive trees

The deterministic uniform recursive trees under consideration are constructed in an iterative way [33]. We denote the trees (networks) after t iterations by U_t (t ≥ 0), where m ≥ 1 is an integer parameter of the model. Then the networks are built as follows. For t = 0, U_0 is an edge connecting two nodes. For t ≥ 1, U_t is obtained from U_{t-1} by attaching m new nodes to each node in U_{t-1}. This iterative process is repeated, and we obtain a class of deterministically growing trees with an exponentially decreasing spectrum of degrees, as shown below. The definition of the model for a particular value of m is illustrated schematically in figure 1.

Figure 1: Illustration of a deterministic uniform recursive tree for a particular value of m, showing the first several steps of the growth process.

We first compute the total number of nodes N_t and the total number of edges E_t in U_t. Let L_v(t) and L_e(t) denote the numbers of nodes and edges created at step t, respectively. Then L_v(t) = m N_{t-1} and L_e(t) = L_v(t). By construction, we have N_t = N_{t-1} + L_v(t) = (1+m) N_{t-1}, thus N_t = (1+m)^t N_0. Considering the initial condition N_0 = 2, we obtain N_t = 2(1+m)^t and L_v(t) = 2m(1+m)^{t-1}. Thus, E_t = N_t - 1 = 2(1+m)^t - 1. Notice that at an arbitrary step t, the addition of each new node leads to exactly one new edge, so L_e(t) = L_v(t) for all t.
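To make the construction concrete, the following Python sketch (our own illustration; the function name durt and all variable names are ours, not from the original) builds U_t as an adjacency list according to the rule described above and checks the counting N_t = 2(1+m)^t and E_t = N_t - 1 for several small values of m and t.

```python
def durt(m, t):
    """Build U_t: start from a single edge; at every step each existing
    node receives m new neighbours (its 'children')."""
    adj = {0: [1], 1: [0]}          # U_0 is one edge joining two nodes
    birth = {0: 0, 1: 0}            # step at which each node was created
    for step in range(1, t + 1):
        for u in list(adj):         # snapshot of the nodes present before this step
            for _ in range(m):
                v = len(adj)        # label of the new node
                adj[v] = [u]
                adj[u].append(v)
                birth[v] = step
    return adj, birth

if __name__ == "__main__":
    for m in (1, 2, 3):
        for t in range(6):
            adj, _ = durt(m, t)
            n_nodes = len(adj)
            n_edges = sum(len(nb) for nb in adj.values()) // 2
            assert n_nodes == 2 * (1 + m) ** t      # N_t = 2(1+m)^t
            assert n_edges == n_nodes - 1           # E_t = N_t - 1 (a tree)
    print("node and edge counts agree with N_t = 2(1+m)^t and E_t = N_t - 1")
```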

3 Structural properties

In this section, we investigate four important structural properties of , including degree distribution, average path length, betweenness distribution, and degree correlations.

3.1 Degree distribution

By definition, the degree of a node i is the number of edges incident to i. The degree distribution P(k) of a network is the probability that a randomly selected node has exactly k edges. Let k_i(t) denote the degree of node i at step t. If node i is added to the network at step t_i, then by construction k_i(t_i) = 1. In each of the subsequent time steps, m new nodes will be created and connected to i. Thus the degree of node i satisfies the relation

k_i(t) = k_i(t-1) + m.   (1)

Considering the initial condition k_i(t_i) = 1, we obtain

k_i(t) = 1 + m (t - t_i).   (2)

Since the degree of each node has been obtained explicitly as in equation (2), we can get the degree distribution P(k) via its cumulative distribution [3]

P_cum(k) = Σ_{k' ≥ k} P(k'),   (3)

which is the probability that the degree is greater than or equal to k. An important advantage of the cumulative distribution is that it reduces the noise in the tail of the probability distribution. Moreover, for networks whose degree distributions have exponential tails, P(k) ~ e^{-k/κ}, the cumulative distribution also has an exponential expression with the same exponent:

P_cum(k) = Σ_{k' ≥ k} P(k') ~ e^{-k/κ}.   (4)

This makes exponential distributions particularly easy to detect experimentally, by plotting the corresponding cumulative distributions on semilogarithmic scales.

Using equation (2), we have t_i = t - (k - 1)/m for a node of degree k. Hence

P_cum(k) = N_{t_i}/N_t = (1+m)^{-(k-1)/m},   (5)

which decays exponentially with k. Note that when m > 1, the possible degrees are not arbitrary: equation (5) holds only for those k equal to 1 modulo m. Thus the DURTs are a family of exponential networks, which have a similar form of degree distribution as their stochastic version, the URT [29].
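Assuming the same construction, the short sketch below (again our own illustration, not code from the paper) verifies numerically that every node born at step t_i has degree 1 + m(t - t_i), as in equation (2), and that the empirical cumulative degree distribution coincides with (1+m)^{-(k-1)/m}, as in equation (5).

```python
from collections import Counter

def durt(m, t):
    adj, birth = {0: [1], 1: [0]}, {0: 0, 1: 0}
    for step in range(1, t + 1):
        for u in list(adj):
            for _ in range(m):
                v = len(adj)
                adj[v] = [u]
                adj[u].append(v)
                birth[v] = step
    return adj, birth

m, t = 3, 6
adj, birth = durt(m, t)
N = len(adj)

# every degree agrees with k_i(t) = 1 + m(t - t_i), equation (2)
assert all(len(adj[u]) == 1 + m * (t - birth[u]) for u in adj)

# empirical cumulative degree distribution versus (1+m)^(-(k-1)/m), equation (5)
deg_count = Counter(len(nb) for nb in adj.values())
for k in sorted(deg_count):
    empirical = sum(c for kk, c in deg_count.items() if kk >= k) / N
    predicted = (1 + m) ** (-(k - 1) / m)
    print(f"k={k:3d}  empirical={empirical:.6f}  predicted={predicted:.6f}")
```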

3.2 Average path length

The average path length is the number of edges on the shortest path connecting a pair of nodes, averaged over all node pairs. It is defined to be

d_t = 2 S_t / [N_t (N_t - 1)],   (6)

where S_t denotes the sum of the distances between all pairs of nodes, that is,

S_t = Σ_{i<j} d_{ij},   (7)

where d_{ij} is the shortest distance between nodes i and j. Note that in equation (7), for a couple of nodes i and j (i ≠ j), we only count d_{ij} or d_{ji}, not both.

Let Λ_t and Ω_t represent the sets of nodes created at step t and of nodes created at step t or earlier, respectively (so Ω_t = Ω_{t-1} ∪ Λ_t). Then one can write the sum over all shortest paths in network U_t as

S_t = Σ_{i ∈ Λ_t, j ∈ Ω_{t-1}} d_{ij} + Σ_{i,j ∈ Λ_t} d_{ij} + Σ_{i,j ∈ Ω_{t-1}} d_{ij},   (8)

where the sums run over (unordered) pairs of nodes in the indicated sets, and the third term is exactly S_{t-1}, i.e.,

Σ_{i,j ∈ Ω_{t-1}} d_{ij} = S_{t-1}.   (9)

By construction, we can obtain the following relations for the first and second terms on the right-hand side of equation (8):

Σ_{i ∈ Λ_t, j ∈ Ω_{t-1}} d_{ij} = 2m S_{t-1} + m (N_{t-1})^2,   (10)
Σ_{i,j ∈ Λ_t} d_{ij} = m^2 S_{t-1} + m N_{t-1} (m N_{t-1} - 1).   (11)

The term m N_{t-1}(m N_{t-1} - 1) in equation (11) pops out from the following counting: each path connecting two new nodes comes from the path connecting their two parent (old) nodes by adding two edges, that is, by increasing the length by 2. As there are m N_{t-1}(m N_{t-1} - 1)/2 pairs of new nodes, the total increase in length is m N_{t-1}(m N_{t-1} - 1).

Substituting equations (9), (10) and (11) into equation (8) and considering N_{t-1} = 2(1+m)^{t-1} together with the initial value S_0 = 1, the total distance is obtained to be

S_t = (4mt + m - 1)(1+m)^{2t-1} + 2(1+m)^{t-1}.   (12)

Inserting equation (12) into equation (6), we have

d_t = [(4mt + m - 1)(1+m)^{2t-1} + 2(1+m)^{t-1}] / [(1+m)^t (2(1+m)^t - 1)].   (13)

In the infinite network size limit (t → ∞),

d_t ≈ 2mt/(1+m) ≈ [2m/((1+m) ln(1+m))] ln N_t,   (14)

which means that the average path length grows logarithmically with the size of the network, indicating a small-world behavior similar to that of the URT [53] and the Watts-Strogatz (WS) model [54].
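The total distance and the average path length can be checked by brute force for small trees. The sketch below (illustrative code with our own naming; it assumes the construction of section 2) computes S_t by breadth-first search from every node and compares it with the closed form of equation (12) as reconstructed above.

```python
from collections import deque

def durt(m, t):
    adj = {0: [1], 1: [0]}
    for step in range(1, t + 1):
        for u in list(adj):
            for _ in range(m):
                v = len(adj)
                adj[v] = [u]
                adj[u].append(v)
    return adj

def total_distance(adj):
    """Sum of shortest-path lengths over all unordered node pairs (BFS from every node)."""
    total = 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        total += sum(dist.values())
    return total // 2                  # each pair was counted twice

m, t = 2, 4
adj = durt(m, t)
N = len(adj)
S = total_distance(adj)
S_closed = (4 * m * t + m - 1) * (1 + m) ** (2 * t - 1) + 2 * (1 + m) ** (t - 1)
print(S, S_closed)                     # the two values coincide
print("average path length d_t =", 2 * S / (N * (N - 1)))
```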

3.3 Betweenness distribution

The betweenness of a node is the accumulated fraction of the total number of shortest paths going through the given node over all node pairs [55, 56]. More precisely, the betweenness b_i of a node i is

b_i = Σ_{s ≠ i ≠ r} σ_{sr}(i) / σ_{sr},   (15)

where σ_{sr} is the total number of shortest paths between nodes s and r, and σ_{sr}(i) is the number of those shortest paths running through node i.

In a tree there is a unique shortest path between each pair of nodes [57, 58, 59, 60, 61]. Thus the betweenness of a node is simply given by the number of distinct shortest paths passing through the node. Then in U_t, the betweenness of a node i created at step t_i, denoted as b_i(t), becomes

b_i(t) = D_i(t)[N_t - D_i(t) - 1] + (1/2) D_i(t)[D_i(t) - 1] - Σ_{j ∈ C_i} (1/2)[D_j(t) + 1] D_j(t),   (16)

where D_i(t) denotes the total number of descendants of node i at time t, the descendants of a node being its children, its children's children, and so on (note that the descendants of node i exclude i itself), and C_i denotes the set of children of i. The first term in equation (16) counts the shortest paths from descendants of i to vertices that are neither i nor descendants of i. The second term accounts for all pairs of descendants of i. The third term removes the shortest paths between descendants of i that do not pass through i, namely those between descendants lying in the same subtree rooted at a child j of i.

To find b_i(t), it is necessary to explicitly determine the number of descendants D_i(t) of node i. At each step, every node in the subtree rooted at i (that is, i itself together with each of its current descendants) acquires m new children, all of which are new descendants of i, so D_i(t) obeys [59, 60]

D_i(t) = (1+m) D_i(t-1) + m.   (17)

Using D_i(t_i) = 0, we can solve equation (17) inductively,

D_i(t) = (1+m)^{t - t_i} - 1.   (18)

Substituting the result of equation (18) (and the analogous expressions for the children of i) together with N_t = 2(1+m)^t into equation (16), we have

b_i(t) = N_t D_i(t) - (m+3) D_i(t)[D_i(t) + 2] / [2(m+2)] = 2(1+m)^t [(1+m)^{t-t_i} - 1] - (m+3)[(1+m)^{2(t-t_i)} - 1] / [2(m+2)],   (19)

which is approximately equal to N_t D_i(t) ≈ 2(1+m)^{2t - t_i} for large t. Then the cumulative betweenness distribution is

P_cum(b) = Σ_{b' ≥ b} P(b') ≈ N_{t_i}/N_t = (1+m)^{-(t - t_i)} ≈ N_t / b ∝ b^{-1},   (20)

which shows that the betweenness distribution exhibits a power-law behavior, P(b) ~ b^{-2}, with exponent 2; the same scaling has also been obtained for the URT [52] and for the Barabási-Albert (BA) model [62], which describes a random scale-free tree-like network [57, 58]. Therefore, a power-law betweenness distribution is not an exclusive property of scale-free networks.
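As a sanity check of equations (16)-(19), one can count, for every node of a small DURT, the pairs of nodes whose unique path passes through it, and compare the count with the closed form reconstructed in equation (19). The sketch below does this by brute force (all names are ours; the construction of section 2 is assumed).

```python
from collections import deque
from itertools import combinations

def durt(m, t):
    adj, birth = {0: [1], 1: [0]}, {0: 0, 1: 0}
    for step in range(1, t + 1):
        for u in list(adj):
            for _ in range(m):
                v = len(adj)
                adj[v] = [u]
                adj[u].append(v)
                birth[v] = step
    return adj, birth

def bfs_distances(adj, s):
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

m, t = 2, 3
adj, birth = durt(m, t)
N = len(adj)
dist = {u: bfs_distances(adj, u) for u in adj}

for v in adj:
    # brute force: pairs (s, w), both different from v, whose unique path visits v
    b = sum(1 for s, w in combinations(adj, 2)
            if s != v != w and dist[s][v] + dist[v][w] == dist[s][w])
    D = (1 + m) ** (t - birth[v]) - 1                   # descendants, equation (18)
    # reconstructed closed form (19): b = N*D - (m+3)*D*(D+2)/(2*(m+2));
    # compare with denominators cleared so everything stays an integer
    assert 2 * (m + 2) * b == 2 * (m + 2) * N * D - (m + 3) * D * (D + 2)
print("brute-force betweenness matches the closed form for every node")
```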

3.4 Degree correlations

An interesting quantity related to degree correlations [63] is the average degree of the nearest neighbors of nodes with degree k, denoted as k_nn(k) [64, 65, 66]. When k_nn(k) increases with k, nodes have a tendency to connect to nodes with a similar or larger degree. In this case the network is defined as assortative [67]. In contrast, if k_nn(k) decreases with k, which implies that nodes of large degree are likely to have nearest neighbors with small degree, then the network is said to be disassortative. If correlations are absent, k_nn(k) = const.

For the DURTs we can calculate k_nn(k) exactly. Except for the initial two nodes generated at step 0, no nodes born at the same step, which have the same degree, are linked to each other. The link to a node with larger degree (the parent) is made at the creation step, and links to nodes with smaller degree (the newly arriving children) are made at each subsequent step. This results in the following expression for k_nn(k) for nodes born at step t_i ≥ 1, with k = 1 + m(t - t_i):

k_nn(k) = (1/k) [ (1/N_{t_i - 1}) Σ_{t_i'=0}^{t_i - 1} L_v(t_i') k(t_i', t) + m Σ_{t_i'=t_i + 1}^{t} k(t_i', t) ],   (21)

where k(t_i', t) = 1 + m(t - t_i') represents the degree at step t of a node that was generated at step t_i'. Here the first sum on the right-hand side accounts for the link made to a node with larger degree (the parent, born at some step t_i' < t_i) when the node was generated at step t_i, averaged over all nodes born at step t_i. The second sum describes the links made to the then smallest-degree nodes (the m newly created children) at each step t_i' > t_i.

After some algebraic manipulations, equation (21) is simplified to

k_nn = (1/k) [ 2 + m(t - t_i + 1) - (1+m)^{1 - t_i} + m(t - t_i) + m^2 (t - t_i)(t - t_i - 1)/2 ].   (22)

Writing equation (22) in terms of k = 1 + m(t - t_i), it is straightforward to obtain

k_nn(k) = 2 + [m - (1+m)^{1 - t_i}]/k + (k - 1)(k - 1 - m)/(2k).   (23)

Thus we have obtained the degree correlations for those nodes born at steps t_i ≥ 1. For the initial two nodes, each has a degree of k = 1 + mt, and it is easy to obtain

k_nn(k) = 1 + [(k - 1) + (k - 1)(k - 1 - m)/2]/k.   (24)

From equations (23) and (24), it is obvious that for a large network (i.e., t → ∞), k_nn(k) is approximately a linear function of k, which shows that the network is assortative.
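The assortative trend can also be observed numerically. The following sketch (our own illustration, under the assumed construction of section 2) computes k_nn(k) directly from a small DURT by averaging, over all nodes of degree k, the mean degree of their neighbors; the printed values grow with k, as expected for an assortative network.

```python
from collections import defaultdict

def durt(m, t):
    adj = {0: [1], 1: [0]}
    for step in range(1, t + 1):
        for u in list(adj):
            for _ in range(m):
                v = len(adj)
                adj[v] = [u]
                adj[u].append(v)
    return adj

m, t = 2, 5
adj = durt(m, t)
deg = {u: len(nb) for u, nb in adj.items()}

# k_nn(k): average, over all nodes of degree k, of the mean degree of their neighbours
samples = defaultdict(list)
for u, nb in adj.items():
    samples[deg[u]].append(sum(deg[w] for w in nb) / deg[u])

for k in sorted(samples):
    print(f"k={k:3d}  k_nn(k)={sum(samples[k]) / len(samples[k]):.3f}")
# the printed k_nn(k) increases (roughly linearly) with k: assortative mixing
```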

4 Eigenvalues and eigenvectors of the adjacency matrix

As we know from section 2, there are N_t = 2(1+m)^t vertices in U_t; we denote the vertex set of U_t by V_t = {v_1, v_2, ..., v_{N_t}}. Let A_t = (a_{ij}) be the adjacency matrix of network U_t, where a_{ij} = 1 if nodes v_i and v_j are connected and a_{ij} = 0 otherwise. For an arbitrary graph, it is generally difficult to determine all eigenvalues and the corresponding eigenvectors of its adjacency matrix, but below we will show that for U_t one can settle this problem completely.

4.1 Eigenvalues

We begin by studying the eigenvalues of A_t. If the N_{t-1} nodes already present in U_{t-1} are labelled first and the new nodes are arranged in m groups, the j-th group containing the j-th new neighbor attached to each old node, then by construction the adjacency matrix satisfies the following relation:

A_t =
[ A_{t-1}  I  I  ...  I ]
[   I      0  0  ...  0 ]
[   I      0  0  ...  0 ]
[  ...                  ]
[   I      0  0  ...  0 ],   (25)

written as an (m+1) x (m+1) block matrix, where each block is an N_{t-1} x N_{t-1} matrix and I is the identity matrix. Then, the characteristic polynomial of A_t is

P_t(λ) = det(λ I - A_t) = det
[ λI - A_{t-1}  -I  ...  -I ]
[     -I        λI  ...   0 ]
[    ...                    ]
[     -I         0  ...  λI ],   (26)

to which elementary operations on block matrices can be applied. Since all the blocks commute with one another, according to the results in [68] we have

P_t(λ) = λ^{(m-1) N_{t-1}} det[(λ^2 - m) I - λ A_{t-1}].   (27)

Thus, P_t(λ) can be written recursively as follows:

P_t(λ) = λ^{(m-1) N_{t-1}} [ λ^{N_{t-1}} P_{t-1}(λ - m/λ) ],   (28)

where P_{t-1}(·) is the characteristic polynomial of A_{t-1}. The recursive relation given by equation (28) is very important; from it we will determine the complete set of eigenvalues of A_t and their corresponding eigenvectors. Notice that P_{t-1}(x) is a monic polynomial of degree N_{t-1}, so the constant term of λ^{N_{t-1}} P_{t-1}(λ - m/λ) equals (-m)^{N_{t-1}} ≠ 0, and hence the exponent of the factor λ in P_t(λ) is

r_t = (m-1) N_{t-1} = 2(m-1)(1+m)^{t-1}.   (29)

Consequently, 0 is an eigenvalue of A_t (for m ≥ 2), and its multiplicity is 2(m-1)(1+m)^{t-1}.

Notice that A_t has N_t eigenvalues. We represent these eigenvalues as λ_1^{(t)}, λ_2^{(t)}, ..., λ_{N_t}^{(t)}, respectively. For convenience, we presume λ_1^{(t)} ≤ λ_2^{(t)} ≤ ... ≤ λ_{N_t}^{(t)}, and denote by Θ_t the set of these eigenvalues, i.e. Θ_t = {λ_1^{(t)}, λ_2^{(t)}, ..., λ_{N_t}^{(t)}}. All the eigenvalues in the set Θ_t can be divided into two parts. According to the above analysis, 0 is an eigenvalue with multiplicity 2(m-1)(1+m)^{t-1}, which provides part of the eigenvalues of A_t. We denote by Δ_t the set of the eigenvalues 0 of A_t, i.e.

Δ_t = {0, 0, ..., 0},   (30)

with 2(m-1)(1+m)^{t-1} elements. It should be noted that here we neglect the distinctness of elements in the set, i.e. Θ_t and Δ_t are treated as multisets. The remaining adjacency eigenvalues of A_t are determined by the equation λ^{N_{t-1}} P_{t-1}(λ - m/λ) = 0. Let these eigenvalues be σ_1, σ_2, ..., σ_{2N_{t-1}}, respectively. For convenience, we presume σ_1 ≤ σ_2 ≤ ... ≤ σ_{2N_{t-1}}, and denote by Γ_t the set of these eigenvalues, i.e. Γ_t = {σ_1, σ_2, ..., σ_{2N_{t-1}}}. Therefore, the eigenvalue set of A_t can be expressed as Θ_t = Γ_t ∪ Δ_t.

From equation (28), we have that for an arbitrary eigenvalue of A_{t-1}, say λ_i^{(t-1)} ∈ Θ_{t-1}, both solutions of λ - m/λ = λ_i^{(t-1)} are in Γ_t. In fact, the equation λ - m/λ = λ_i^{(t-1)} is equivalent to

λ^2 - λ_i^{(t-1)} λ - m = 0.   (31)

We use the notations r_1(λ_i^{(t-1)}) and r_2(λ_i^{(t-1)}) to represent the two solutions of equation (31), since they provide a natural increasing order of the eigenvalues of A_t, as can be seen from the argument below. Solving this quadratic equation, its roots are obtained to be r_1(λ_i^{(t-1)}) and r_2(λ_i^{(t-1)}), where the functions r_1(x) and r_2(x) satisfy

r_1(x) = [x - sqrt(x^2 + 4m)]/2,   (32)
r_2(x) = [x + sqrt(x^2 + 4m)]/2.   (33)

Substituting each adjacency eigenvalue of A_{t-1} into equations (32) and (33), we obtain the set Γ_t of eigenvalues of A_t. Since Θ_0 = {-1, 1}, by recursively applying the functions provided by equations (32) and (33) (and adjoining the zero eigenvalues at each step), the eigenvalues of A_t can be determined completely.

It is obvious that both r_1(x) and r_2(x) are monotonically increasing functions. On the other hand, since sqrt(x^2 + 4m) > |x|, we have r_1(x) < 0. Similarly, we can show that r_2(x) > 0. Thus r_1(x) < 0 < r_2(y) holds for arbitrary x and y. Then we have the following conclusion: if the eigenvalue set of A_{t-1} is Θ_{t-1} = {λ_1^{(t-1)} ≤ λ_2^{(t-1)} ≤ ... ≤ λ_{N_{t-1}}^{(t-1)}}, then solving equations (32) and (33) we can obtain the eigenvalue set Γ_t to be {r_1(λ_1^{(t-1)}), ..., r_1(λ_{N_{t-1}}^{(t-1)}), r_2(λ_1^{(t-1)}), ..., r_2(λ_{N_{t-1}}^{(t-1)})}, where r_1(λ_1^{(t-1)}) ≤ ... ≤ r_1(λ_{N_{t-1}}^{(t-1)}) < 0 < r_2(λ_1^{(t-1)}) ≤ ... ≤ r_2(λ_{N_{t-1}}^{(t-1)}). Recall that Δ_t consists of 2(m-1)(1+m)^{t-1} elements, all of which are 0, so we can easily get the eigenvalue set of A_t to be Θ_t = Γ_t ∪ Δ_t.

From the above arguments, we can easily see that for the case of m = 1 all the eigenvalues of A_t are distinct, which is an interesting feature that has seldom been reported for other network models and thus may have some far-reaching consequences. For other values of m, some eigenvalues are degenerate. In figure 2 we plot the distribution of eigenvalues for the cases m = 1 and m > 1. It is observed that, different from the uniform distribution of the m = 1 case, for m > 1 the distribution of eigenvalues exhibits the form of peaks.

Figure 2: The distribution of eigenvalues, defined as the ratio between the multiplicity of an eigenvalue and the network order N_t.
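The recursive description of the spectrum is easy to verify numerically: starting from the eigenvalues {-1, 1} of A_0, map every eigenvalue x of A_{t-1} to the two roots [x ± sqrt(x^2 + 4m)]/2 of equation (31) and adjoin 2(m-1)(1+m)^{t-1} zeros at each step. The sketch below (our own code, assuming the construction of section 2) compares the eigenvalues obtained in this way with a direct diagonalization of the adjacency matrix.

```python
import numpy as np

def durt(m, t):
    adj = {0: [1], 1: [0]}
    for step in range(1, t + 1):
        for u in list(adj):
            for _ in range(m):
                v = len(adj)
                adj[v] = [u]
                adj[u].append(v)
    return adj

def spectrum_recursive(m, t):
    """Spectrum of A_t from the recursion: every eigenvalue x of A_{t-1} produces
    the two roots of lambda^2 - x*lambda - m = 0, and (m-1)*N_{t-1} zeros are adjoined."""
    eig = [-1.0, 1.0]                       # spectrum of A_0 (a single edge)
    for _ in range(t):
        n_prev = len(eig)                   # equals N_{t-1}
        new = []
        for x in eig:
            root = (x * x + 4 * m) ** 0.5
            new.extend([(x - root) / 2, (x + root) / 2])
        new.extend([0.0] * ((m - 1) * n_prev))
        eig = new
    return np.sort(np.array(eig))

m, t = 2, 3
adj = durt(m, t)
N = len(adj)
A = np.zeros((N, N))
for u, nb in adj.items():
    for w in nb:
        A[u, w] = 1.0

direct = np.sort(np.linalg.eigvalsh(A))
recursive = spectrum_recursive(m, t)
print("largest deviation:", np.max(np.abs(direct - recursive)))   # round-off only
```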

4.2 Eigenvectors

Similarly to the eigenvalues, the eigenvectors of A_t follow directly from those of A_{t-1}. Assume that λ is an arbitrary eigenvalue of A_t, whose corresponding eigenvector is v ∈ R^{N_t}, where R^{N_t} represents the N_t-dimensional vector space. Then we can solve the equation A_t v = λ v to find the eigenvector v. We distinguish two cases, λ ≠ 0 and λ = 0, which will be addressed in detail as follows.

For the first case λ ≠ 0, using the block form (25) we can rewrite the equation A_t v = λ v as

A_t (v_1^T, v_2^T, ..., v_{m+1}^T)^T = λ (v_1^T, v_2^T, ..., v_{m+1}^T)^T,   (34)

where the vectors v_j ∈ R^{N_{t-1}} (j = 1, 2, ..., m+1) are the blocks (components) of v, partitioned conformally with (25). Equation (34) leads to the following equations:

A_{t-1} v_1 + Σ_{j=2}^{m+1} v_j = λ v_1,   (35)
v_1 = λ v_j   (j = 2, 3, ..., m+1).   (36)

Resolving equation (36), we find

v_j = v_1/λ   (j = 2, 3, ..., m+1).   (37)

Substituting equation (37) into equation (35), we have

A_{t-1} v_1 = (λ - m/λ) v_1,   (38)

which indicates that v_1 is a solution (eigenvector) of equation (38), while the v_j (j = 2, 3, ..., m+1) are uniquely determined by v_1 via equation (37).

From equation (28) in the preceding subsection, it is clear that if λ (≠ 0) is an eigenvalue of the adjacency matrix A_t, then λ - m/λ must be an eigenvalue of A_{t-1}. (Recall that if λ_i^{(t-1)} ∈ Θ_{t-1}, then λ - m/λ = λ_i^{(t-1)} for λ = r_1(λ_i^{(t-1)}) or for λ = r_2(λ_i^{(t-1)}).) Thus, equation (38) together with equation (28) shows that v_1 is an eigenvector of the matrix A_{t-1} corresponding to the eigenvalue λ - m/λ determined by λ, while

v = (v_1^T, v_1^T/λ, ..., v_1^T/λ)^T   (39)

is an eigenvector of A_t corresponding to the eigenvalue λ.

For the initial graph U_0, its adjacency matrix A_0 has two eigenvalues, -1 and 1, with respective eigenvectors (1, -1)^T and (1, 1)^T. By recursively applying the above process, we can obtain all the eigenvectors corresponding to the nonzero eigenvalues of A_t.
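The recursive construction of the eigenvectors for nonzero eigenvalues can be checked in the same spirit. The sketch below (illustrative code with our own naming, assuming the construction of section 2) propagates the two eigenpairs of A_0 through equations (32)-(33) and (37)-(39) and verifies that each resulting pair (λ, v) satisfies A_t v = λ v; for brevity it only follows the eigenpairs descending from A_0, omitting the zero modes created along the way and their descendants.

```python
import numpy as np

def durt(m, t):
    """U_t with nodes labelled in creation order; also returns each node's parent."""
    adj, parent = {0: [1], 1: [0]}, {}
    for step in range(1, t + 1):
        for u in list(adj):
            for _ in range(m):
                v = len(adj)
                adj[v] = [u]
                adj[u].append(v)
                parent[v] = u
    return adj, parent

def eigenpairs_from_A0(m, t):
    """Eigenpairs of A_t grown recursively from the two eigenpairs of A_0.

    If (x, w) is an eigenpair of A_{t-1} and lam solves lam - m/lam = x, then the
    vector equal to w on the old nodes and to w[parent]/lam on the new nodes is an
    eigenvector of A_t for lam (equations (37)-(39))."""
    pairs = [(-1.0, np.array([1.0, -1.0])), (1.0, np.array([1.0, 1.0]))]
    n = 2
    for step in range(1, t + 1):
        _, parent = durt(m, step)           # parents of all nodes born up to 'step'
        n_new = n * (1 + m)
        grown = []
        for x, w in pairs:
            for sign in (-1.0, 1.0):
                lam = (x + sign * (x * x + 4 * m) ** 0.5) / 2
                v = np.empty(n_new)
                v[:n] = w
                for node in range(n, n_new):
                    v[node] = w[parent[node]] / lam
                grown.append((lam, v))
        pairs, n = grown, n_new
    return pairs

m, t = 2, 3
adj, _ = durt(m, t)
N = len(adj)
A = np.zeros((N, N))
for u, nb in adj.items():
    for w in nb:
        A[u, w] = 1.0

worst = max(np.max(np.abs(A @ v - lam * v)) for lam, v in eigenpairs_from_A0(m, t))
print("largest residual |A v - lam v|:", worst)   # numerically zero (round-off only)
```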

For the second case of λ = 0 (which occurs for m ≥ 2), the equation A_t v = 0 can be recast as

A_t (v_1^T, v_2^T, ..., v_{m+1}^T)^T = 0,   (40)

where the vectors v_j ∈ R^{N_{t-1}} (j = 1, 2, ..., m+1) are again the blocks of v. Equation (40) leads to the following equations:

v_1 = 0,   (41)
A_{t-1} v_1 + Σ_{j=2}^{m+1} v_j = 0.   (42)

From equation (41), v_1 is the zero vector. We denote by (v_j)_i the i-th component of the column vector v_j. Equation (42) then gives us the following equations:

(v_2)_i + (v_3)_i + ... + (v_{m+1})_i = 0,   i = 1, 2, ..., N_{t-1}.

The set of all solutions to any one of the equations above consists of vectors that can be written as

((v_2)_i, (v_3)_i, ..., (v_{m+1})_i)^T = c_1 (1, 0, ..., 0, -1)^T + c_2 (0, 1, ..., 0, -1)^T + ... + c_{m-1} (0, 0, ..., 1, -1)^T,   (43)

where c_1, c_2, ..., c_{m-1} are arbitrary real numbers. From equation (43), the solutions for all the vectors v_j (j = 2, 3, ..., m+1) can be rewritten as

(v_j)_i = c_i^{(j-1)}   (j = 2, 3, ..., m),   (v_{m+1})_i = -[c_i^{(1)} + c_i^{(2)} + ... + c_i^{(m-1)}],   (44)

where c_i^{(l)} (i = 1, 2, ..., N_{t-1}; l = 1, 2, ..., m-1) are arbitrary real numbers. According to equation (44), we can obtain every eigenvector v corresponding to the eigenvalue 0. Moreover, it is easy to see that the dimension of the eigenspace of the matrix A_t associated with the eigenvalue 0 is (m-1) N_{t-1} = 2(m-1)(1+m)^{t-1}, in agreement with the multiplicity obtained in equation (29).
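Finally, the zero modes of equations (41)-(44) can be written down explicitly and verified: the eigenvector vanishes on every node of U_{t-1}, and on the m children that each old node receives at step t one places any pattern of values summing to zero. The sketch below (our own illustration, under the assumed construction) builds such vectors and confirms that they span a null space of A_t of dimension (m-1)N_{t-1}.

```python
import numpy as np

def durt(m, t):
    adj, birth = {0: [1], 1: [0]}, {0: 0, 1: 0}
    for step in range(1, t + 1):
        for u in list(adj):
            for _ in range(m):
                v = len(adj)
                adj[v] = [u]
                adj[u].append(v)
                birth[v] = step
    return adj, birth

m, t = 3, 2
adj, birth = durt(m, t)
N = len(adj)
N_prev = 2 * (1 + m) ** (t - 1)             # number of 'old' nodes, N_{t-1}

A = np.zeros((N, N))
for u, nb in adj.items():
    for w in nb:
        A[u, w] = 1.0

# zero modes of equations (41)-(44): zero on every old node; on the m children that
# an old node u received at step t, any pattern of values that sums to zero
zero_modes = []
for u in range(N_prev):
    kids = [v for v in adj[u] if birth[v] == t]          # the m children born at step t
    for j in range(m - 1):                               # m-1 independent patterns per old node
        vec = np.zeros(N)
        vec[kids[j]], vec[kids[-1]] = 1.0, -1.0
        zero_modes.append(vec)

assert len(zero_modes) == (m - 1) * N_prev               # multiplicity of 0, equation (29)
assert all(np.allclose(A @ vec, 0.0) for vec in zero_modes)
assert np.linalg.matrix_rank(np.array(zero_modes)) == (m - 1) * N_prev
print("the zero eigenvalue has an eigenspace of dimension", (m - 1) * N_prev)
```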

5 Conclusion and discussion

In conclusion, we have studied a family of deterministic models for the uniform recursive tree, which we name the deterministic uniform recursive trees (DURTs) and which are constructed in a recursive way. The DURTs are in fact deterministic variants of the intensively studied random uniform recursive tree. We have presented an exhaustive analysis of various structural properties of the DURTs and obtained precise solutions for these features, which include the degree distribution, average path length, betweenness distribution, and degree correlations. Aside from their deterministic structure, the obtained structural characteristics of the DURTs are similar to those of the URT. Consequently, the DURTs may prove as useful in practice as the URT.

Furthermore, by using the methods of linear algebra and graph theory, we have performed a detailed analysis of the complete set of eigenvalues and their corresponding eigenvectors of the adjacency matrix for DURTs, thereby fully characterizing their spectral properties. We have shown that all the eigenvalues and eigenvectors of the adjacency matrix for DURTs can be determined recursively from those of the initial network. It is expected that the methods applied here can be extended to a larger class of deterministic networks.

Acknowledgment

We would like to thank Shuyang Gao for preparing this manuscript. This research was supported by the National Basic Research Program of China under grant No. 2007CB310806, the National Natural Science Foundation of China under Grant Nos. 60704044, 60873040 and 60873070, Shanghai Leading Academic Discipline Project No. B114, and the Program for New Century Excellent Talents in University of China (NCET-06-0376).

References

  • [1] R. Albert and A.-L. Barabási, Rev. Mod. Phys. 74, 47 (2002).
  • [2] S.N. Dorogovtsev and J.F.F. Mendes, Adv. Phys. 51, 1079 (2002).
  • [3] M.E.J. Newman, SIAM Rev. 45, 167 (2003).
  • [4] L. da. F. Costa, F.A. Rodrigues, G. Travieso, and P.R.V. Boas, Adv. Phys. 56, 167 (2007).
  • [5] S. N. Dorogovtsev, A. V. Goltsev and J.F.F. Mendes, Rev. Mod. Phys. 80, 1275 (2008).
  • [6] R. Albert, H. Jeong, A.-L. Barabási, Nature (London) 406, 378 (2000).
  • [7] D. S. Callaway, M. E. J. Newman, S. H. Strogatz, and D. J. Watts, Phys. Rev. Lett. 85, 5468 (2000).
  • [8] R. Cohen, K. Erez, D. ben-Avraham, and S. Havlin, Phys. Rev. Lett. 86, 3682 (2001).
  • [9] Z. Z. Zhang, S. G. Zhou, and T. Zou, Eur. Phys. J. B 56, 259 (2007).
  • [10] R. Pastor-Satorras and A. Vespignani, Phys. Rev. Lett. 86, 3200 (2001).
  • [11] Z. Z. Zhang, S. G. Zhou, T. Zou, and G. S. Chen, J. Stat. Mech.: Theory Exp. P09008 (2008).
  • [12] M. Barahona and L. M. Pecora, Phys. Rev. Lett. 89, 054101 (2002).
  • [13] Z. Z. Zhang, L. L. Rong, and S. G. Zhou, Phys. Rev. E, 74, 046105 (2006).
  • [14] F. Comellas and S. Gago, J. Phys. A: Math. Theor. 40, 4483 (2007).
  • [15] A. Arenas, A. Díaz-Guilera, J. Kurths, Y. Moreno, and C. S. Zhou, Phys. Rep. 469, 93 (2008).
  • [16] G. Szabó and G. Fáth, Phys. Rep. 446, 97 (2007).
  • [17] A.-L. Barabási, Nat. Phys. 1, 68 (2005).
  • [18] I. J. Farkas, I. Derényi, A.-L. Barabási, and T. Vicsek, Phys. Rev. E 64, 026704 (2001).
  • [19] K.-I. Goh, B. Kahng, and D. Kim, Phys. Rev. E 64, 051903 (2001).
  • [20] S. N. Dorogovtsev, A. V. Goltsev, J. F. F. Mendes, and A. N. Samukhin, Phys. Rev. E 68, 046109 (2003).
  • [21] F. Chung, L. Lu, and V. Vu, Proc. Natl. Acad. Sci. U.S.A. 100, 6313 (2003).
  • [22] M. Boguñá, R. Pastor-Satorras, and A. Vespignani, Phys. Rev. Lett. 90, 028701 (2003).
  • [23] D. Chakrabarti, Y. Wang, C. X. Wang, J. Leskovec, and C. Faloutsos, ACM Trans. Information and System Security, 10, 13 (2008).
  • [24] J. G. Restrepo, E Ott and B. R. Hunt, Chaos 16, 015107 (2006).
  • [25] G. S. Canright and K. Engø-Monsen, Complexus 3, 131 (2006).
  • [26] R. T. Smythe, H. Mahmoud, Theor. Probab. Math. Statist. 51, 1 (1995).
  • [27] P. Erdös and A. Rényi, Pub. Math. Insti. Hung. Acad. Sci. 5 17 (1960).
  • [28] S. N. Dorogovtsev, P. L. Krapivsky, and J. F. F. Mendes, Europhys. Lett. 81, 30004 (2008).
  • [29] Z. Z. Zhang, S. G. Zhou, S. H. Zhao, and J. H. Guan, J. Phys. A: Math. Theor. 41, 185101 (2008).
  • [30] J. W. Moon, London Math. Soc. Lecture Note 13, 125 (1974).
  • [31] D. Najock and C. Heyde, J. Appl. Prob. 19, 675 (1982).
  • [32] J. Gastwirth, Amer. Statist. 31, 79 (1977).
  • [33] S. Jung, S. Kim, and B. Kahng, Phys. Rev. E 65, 056101 (2002).
  • [34] A.-L. Barabási, E. Ravasz, and T. Vicsek, Physica A 299, 559 (2001).
  • [35] S.N. Dorogovtsev, A.V. Goltsev, and J.F.F. Mendes, Phys. Rev. E 65, 066122 (2002).
  • [36] F. Comellas, G. Fertin and A. Raspaud, Phys. Rev. E 69, 037104 (2004).
  • [37] Z. Z. Zhang, L. L. Rong, and S. G. Zhou, Physica A 377, 329 (2007).
  • [38] E. Ravasz and A.-L. Barabási, Phys. Rev. E 67, 026112 (2003).
  • [39] J. C. Nacher, N. Ueda, M. Kanehisa and T. Akutsu, Phys. Rev. E 71, 036132 (2005).
  • [40] J. S. Andrade Jr., H. J. Herrmann, R. F. S. Andrade and L. R. da Silva, Phys. Rev. Lett. 94, 018702 (2005).
  • [41] Z. Z. Zhang, L. L. Rong and F. Comellas, J. Phys. A: Math. Gen. 39, 3253 (2006).
  • [42] F. Comellas, J. Ozón, and J.G. Peters, Inf. Process. Lett. 76, 83 (2000).
  • [43] Z. Z. Zhang, L. L. Rong and C. H. Guo, Physica A 363, 567 (2006).
  • [44] M. Hinczewski, Phys. Rev. E 75, 061104 (2007).
  • [45] L. Barriére, F. Comellas, and C. Dalfó, J. Phys. A 39, 11739 (2006).
  • [46] Z. Z. Zhang, S. G. Zhou, L. J. Fang, J. H. Guan, and Y. C. Zhang, Europhys. Lett. 79, 38007 (2007).
  • [47] Z. Z. Zhang, S. G. Zhou, T. Zou, L. C. Chen, and J. H. Guan, Eur. Phys. J. B 60, 259 (2007).
  • [48] C. Bedogne, A. P. Masucci, G. J. Rodgers, Physica A 387, 2161 (2008).
  • [49] S. Boettcher, B. Gonçalves, and H. Guclu, J. Phys. A: Math. Theor. 41, 252001 (2008).
  • [50] L. Barriére, F. Comellas, C. Dalfó, and M. A. Fiol, Linear Algebra Appl. 428, 1499 (2008).
  • [51] Z. Z. Zhang, S. G. Zhou, Y. Qi, and J. H. Guan, Eur. Phys. J. B 63, 507 (2008).
  • [52] K. I. Goh, E. Oh, H. Jeong, B. Kahng, and D. Kim, Proc. Natl. Acad. Sci. U.S.A. 99, 12583 (2002).
  • [53] S.N. Dorogovtsev, J.F.F. Mendes, and J. G. Oliveira, Phys. Rev. E 73, 056122 (2006).
  • [54] D. J. Watts and H. Strogatz, Nature (London) 393, 440 (1998).
  • [55] M. E. J. Newman, Phys. Rev. E 64, 016132 (2001).
  • [56] M. Barthélemy, Eur. Phys. J. B 38, 163 (2004).
  • [57] G. Szabó, M. Alava, and J. Kertész, Phys. Rev. E 66, 026101 (2002).
  • [58] B. Bollobás and O. Riordan, Phys. Rev. E 69, 036114 (2004).
  • [59] C.-M. Ghim, E. Oh, K.-I. Goh, B. Kahng, and D. Kim, Eur. Phys. J. B 38, 193 (2004).
  • [60] Z. Z. Zhang, S. G. Zhou, L. C. Chen, J. H. Guan, L. J. Fang, and Y. C. Zhang, Eur. Phys. J. B 59, 99 (2007).
  • [61] Z. Z. Zhang, S. G. Zhou, L. C. Chen, and J. H. Guan, Eur. Phys. J. B 64, 277 (2008).
  • [62] A.-L. Barabási and R. Albert, Science 286, 509 (1999).
  • [63] S. Maslov and K. Sneppen, Science 296, 910 (2002).
  • [64] R. Pastor-Satorras, A. Vázquez and A. Vespignani, Phys. Rev. Lett. 87, 258701 (2001).
  • [65] A. Vázquez, R. Pastor-Satorras and A. Vespignani, Phys. Rev. E 65, 066130 (2002).
  • [66] Z. Z. Zhang and S. G. Zhou, Physica A, 380, 621 (2007).
  • [67] M. E. J. Newman, Phys. Rev. Lett. 89, 208701 (2002).
  • [68] J. R. Silvester, Math. Gaz. 84, 460 (2000).