PageRank in Undirected Random Graphs


K. Avrachenkov, A. Kadavankandy (primary author, arun.kadavankandy@inria.fr), L. Ostroumova Prokhorenkova, and A. Raigorodskii
Inria Sophia Antipolis, France; Yandex, Russia; Moscow Institute of Physics and Technology, Russia
Abstract

PageRank has numerous applications in information retrieval, reputation systems, machine learning, and graph partitioning. In this paper, we study PageRank in undirected random graphs with an expansion property. The Chung-Lu random graph is an example of such a graph. We show that in the limit, as the size of the graph goes to infinity, PageRank can be approximated by a mixture of the restart distribution and the vertex degree distribution. We also extend the result to Stochastic Block Model (SBM) graphs, where we show that there is a correction term that depends on the community partitioning.

Keywords:
PageRank, undirected random graphs, expander graphs, Chung-Lu random graphs, Stochastic Block Model
Nomenclature

$\mathbf{A}$: adjacency matrix
$\bar{\mathbf{A}} = \mathbb{E}[\mathbf{A}]$: expectation of the adjacency matrix
$d_i$: degree of node $i$
$\mathbf{d}$: vector of degrees
$\mathbf{D} = \operatorname{diag}(\mathbf{d})$: diagonal matrix with the degrees on the diagonal
$\bar{\mathbf{D}} = \mathbb{E}[\mathbf{D}]$: expected diagonal matrix
$\mathbf{P} = \mathbf{A}\mathbf{D}^{-1}$: column-stochastic Markov matrix
$\mathbf{v}$: preference vector of PageRank
$\mathbf{G}$: transition matrix of PageRank
$\mathbf{Q} = \mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}$: symmetrized Markov matrix
$\bar{\mathbf{P}} = \bar{\mathbf{A}}\bar{\mathbf{D}}^{-1}$: average version of $\mathbf{P}$
$(\mathbf{I}-\alpha\mathbf{P})^{-1}$: resolvent matrix of $\mathbf{P}$
$(\mathbf{I}-\alpha\mathbf{Q})^{-1}$: resolvent matrix of $\mathbf{Q}$
$(\mathbf{I}-\alpha\mathbf{Q}_2)^{-1}$: scaled resolvent matrix
$\mathbf{u}_1$: Perron–Frobenius eigenvector
$\tilde{\mathbf{e}}$: degree-normalised and scaled error vector
$\mathbf{Q}_2$: projection of $\mathbf{Q}$ onto the orthogonal subspace of $\mathbf{u}_1$

1 Introduction

PageRank has numerous applications in information retrieval [22, 31, 37], reputation systems [21, 26], machine learning [4, 5], and graph partitioning [1, 12]. A large complex network can often be conveniently modeled by a random graph. It is therefore surprising that few analytic studies of PageRank in random graph models are available. We mention the work [6], where PageRank was analysed in preferential attachment models, and the more recent works [10, 11], where PageRank was analysed in directed configuration models. According to several studies [18, 20, 29, 36], PageRank and in-degree are strongly correlated in directed networks such as the Web graph. Apart from some empirical studies [9, 32], to the best of our knowledge, there is no rigorous analysis of PageRank on basic undirected random graph models such as the Erdős-Rényi graph [19] or the Chung-Lu graph [14]. In this paper, we attempt to fill this gap and show that, under certain conditions on the preference vector and the spectrum of the graphs, PageRank in these models can be approximated by a mixture of the preference vector and the vertex degree distribution when the size of the graph goes to infinity. First, we show convergence in total variation norm for a general family of random graphs with an expansion property. Then, we specialize the results for the Chung-Lu random graph model, proving element-wise convergence. We also analyse the asymptotics of PageRank on Stochastic Block Model (SBM) graphs, which are random graph models used to benchmark community detection algorithms [24]. In these graphs the asymptotic expression for PageRank contains an additional correction term that depends on the community partitioning. This demonstrates that PageRank captures properties of the graph not visible in the stationary distribution of a simple random walk. We conclude the paper with numerical experiments and several future research directions.

2 Definitions

Let $G^n = (V^n, E^n)$ denote a family of random graphs, where $V^n$ is a vertex set, $|V^n| = n$, and $E^n$ is an edge set, $|E^n| = m$. Matrices and vectors related to the graph are denoted by bold letters, while their components are denoted by non-bold letters. We denote by $\mathbf{A}$ the associated adjacency matrix with elements $A_{ij} = 1$ if $(i,j) \in E^n$, and $A_{ij} = 0$ otherwise.

In the interest of compactness of notation, the superscript $n$ is dropped when it is not likely to cause confusion. In this work, since we analyze PageRank on undirected graphs, we have $\mathbf{A} = \mathbf{A}^T$. The personalized PageRank vector is denoted by $\boldsymbol{\pi}$. We consider unweighted graphs; however, our analysis easily extends to some families of weighted undirected graphs. Let $\mathbf{1}$ be a column vector of ones and let $\mathbf{d} = \mathbf{A}\mathbf{1}$ be the vector of degrees. It is helpful to define $\mathbf{D} = \operatorname{diag}(\mathbf{d})$, a diagonal matrix with the degree sequence on its diagonal.

Let $\mathbf{P} = \mathbf{A}\mathbf{D}^{-1}$ be the column-stochastic Markov transition matrix corresponding to the standard random walk on the graph, and let $\mathbf{Q} = \mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}$ be the symmetrized transition matrix, whose eigenvalues are the same as those of $\mathbf{P}$. Note that the symmetrized transition matrix is closely related to the normalized Laplacian $\mathcal{L} = \mathbf{I} - \mathbf{Q}$ [13], where $\mathbf{I}$ is the identity matrix. Further, we will also use the resolvent matrix $(\mathbf{I} - \alpha\mathbf{P})^{-1}$ and the symmetrized resolvent matrix $(\mathbf{I} - \alpha\mathbf{Q})^{-1}$.

Note that since $\mathbf{Q}$ is a symmetric matrix, its eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ are real and can be arranged in decreasing order, i.e., $\lambda_1 \ge \lambda_2 \ge \ldots \ge \lambda_n$. In particular, we have $\lambda_1 = 1$. The value $1 - \max(\lambda_2, |\lambda_n|)$ is called the spectral gap.
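As a quick illustration of these definitions, the following sketch (ours, not part of the paper) builds $\mathbf{Q}$ for a small arbitrary graph and reads off the spectral gap.

```python
import numpy as np

# Sketch (ours): build Q = D^{-1/2} A D^{-1/2} for a small arbitrary graph
# and read off the spectral gap 1 - max(lambda_2, |lambda_n|).
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
d = A.sum(axis=0)                               # degree vector
Q = A / np.sqrt(np.outer(d, d))                 # Q[i,j] = A[i,j]/sqrt(d_i d_j)
lam = np.sort(np.linalg.eigvalsh(Q))[::-1]      # eigenvalues, decreasing
print(lam[0])                                   # lambda_1 = 1 (connected graph)
print(1.0 - max(lam[1], abs(lam[-1])))          # spectral gap
```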

In what follows, let $C, C_1, C_2, \ldots$ be arbitrary positive constants independent of the graph size $n$, which may change from one line to the next (of course, not causing any inconsistencies).

For two functions $f(n)$ and $g(n)$, we write $f = O(g)$ if there exists a constant $C > 0$ such that $f(n) \le C\,g(n)$ for all sufficiently large $n$, and $f = \Theta(g)$ if $f = O(g)$ and $g = O(f)$. Also, $f = o(g)$, or equivalently $f \ll g$, if $f(n)/g(n) \to 0$ as $n \to \infty$.

We use $\mathbb{P}(\cdot)$ and $\mathbb{E}(\cdot)$ to denote probability and expectation, respectively. An event $\mathcal{E}_n$ is said to hold with high probability (w.h.p.) if $\mathbb{P}(\mathcal{E}_n) \ge 1 - O(n^{-\epsilon})$ for some $\epsilon > 0$. Recall that if a finite number of events hold true w.h.p., then so does their intersection. Furthermore, we say that a sequence of random variables $x_n = O(g(n))$ w.h.p. if there exists a constant $C$, independent of $n$, such that (s.t.) the event $\{x_n \le C\,g(n)\}$ holds w.h.p.

In the first part of the paper, we study the asymptotics of PageRank for a family of random graphs with the following two properties:

Property 1

For some constant $C \ge 1$ independent of $n$, $\frac{d_{\max}}{d_{\min}} \le C$ w.h.p., where $d_{\max}$ and $d_{\min}$ are the maximum and minimum degrees, respectively.

Property 2

W.h.p., $\lambda = \max(\lambda_2, |\lambda_n|) = o(1)$.

The above two properties can be regarded as a variation of the expansion property. In the standard case of an expander family, one requires the graphs to be regular and the spectral gap to be bounded away from zero (see, e.g., [35]). Property 1 is a relaxation of the regularity condition, whereas Property 2 is stronger than the requirement that the spectral gap be bounded away from zero. These two properties allow us to consider several standard families of random graphs, such as Erdős-Rényi (ER) graphs, regular random graphs with increasing average degrees, and Chung-Lu graphs. For Chung-Lu graphs, Property 1 imposes some restriction on the degree spread of the graph.

Remark: Property 2 implies that the graph is connected w.h.p., since the spectral gap is strictly greater than zero.

Later, we study the asymptotics of PageRank for specific classes of random graphs, namely the Chung-Lu graphs and the Stochastic Block Model. Recall that the personalized PageRank vector $\boldsymbol{\pi}$ with preference vector $\mathbf{v}$ ($v_i \ge 0$, $\mathbf{1}^T\mathbf{v} = 1$) is defined as the stationary distribution of a modified Markov chain with transition matrix

$\mathbf{G} = \alpha\,\mathbf{P} + (1-\alpha)\,\mathbf{v}\mathbf{1}^T, \qquad (1)$

where $\alpha \in [0, 1]$ is the so-called damping factor [22]. In other words, $\boldsymbol{\pi}$ satisfies

$\boldsymbol{\pi} = \mathbf{G}\boldsymbol{\pi} = \alpha\,\mathbf{P}\boldsymbol{\pi} + (1-\alpha)\,\mathbf{v}, \qquad (2)$

or, equivalently,

$\boldsymbol{\pi} = (1-\alpha)\,(\mathbf{I} - \alpha\mathbf{P})^{-1}\mathbf{v}, \qquad (3)$

where (3) holds when $\alpha < 1$, so that the resolvent $(\mathbf{I} - \alpha\mathbf{P})^{-1}$ is well defined.
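To make the definitions concrete, here is a small self-contained sketch (ours, not from the paper) that computes personalized PageRank both via the resolvent formula (3) and by fixed-point iteration on (2); the example graph and parameter values are arbitrary.

```python
import numpy as np

# Minimal sketch (ours): personalized PageRank on a small arbitrary undirected
# graph, computed via the resolvent formula (3) and cross-checked by
# fixed-point iteration on (2).
alpha = 0.85
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)   # symmetric adjacency matrix
d = A.sum(axis=0)                           # degree vector d = A 1
P = A / d                                   # column-stochastic P = A D^{-1}
n = A.shape[0]
v = np.ones(n) / n                          # uniform preference vector

pi = (1 - alpha) * np.linalg.solve(np.eye(n) - alpha * P, v)   # formula (3)

pi_it = v.copy()
for _ in range(200):                        # iterate (2) to convergence
    pi_it = alpha * P @ pi_it + (1 - alpha) * v

assert np.allclose(pi, pi_it)
print(pi)                                   # components sum to 1
```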

3 Convergence in total variation

We recall that for two discrete probability distributions $\mu$ and $\nu$ on $V$, the total variation distance is defined as $d_{TV}(\mu, \nu) = \frac{1}{2}\sum_{i}|\mu_i - \nu_i|$. This can also be thought of as the $L_1$-norm distance measure (up to the factor $1/2$) in the space of probability vectors, wherein for $\mathbf{x} \in \mathbb{R}^n$ the $L_1$-norm is defined as $\|\mathbf{x}\|_1 = \sum_i |x_i|$. Since $\|\boldsymbol{\pi}\|_1 = 1$ for any probability vector $\boldsymbol{\pi}$, it makes sense to talk about convergence in 1-norm or TV-distance. Also recall that for a vector $\mathbf{x}$, $\|\mathbf{x}\|_2 = \big(\sum_i x_i^2\big)^{1/2}$ is the $L_2$-norm. Now we are in a position to formulate our first result.

Theorem 3.1

Let a family of graphs $\{G^n\}$ satisfy Properties 1 and 2. If, in addition, $\|\mathbf{v}\|_2 = O(1/\sqrt{n})$, PageRank can be asymptotically approximated in total variation norm by a mixture of the restart distribution and the vertex degree distribution. Namely, w.h.p.,

$\|\boldsymbol{\pi} - \bar{\boldsymbol{\pi}}\|_1 \to 0,$

where

$\bar{\boldsymbol{\pi}} = \alpha\,\frac{\mathbf{d}}{\operatorname{vol}(G)} + (1-\alpha)\,\mathbf{v}, \qquad (4)$

with $\operatorname{vol}(G) = \mathbf{1}^T\mathbf{d} = \sum_i d_i$.

Observations:

  1. This result says that the PageRank vector asymptotically behaves like a convex combination of the preference vector $\mathbf{v}$ and the stationary vector $\mathbf{d}/\operatorname{vol}(G)$ of a standard random walk with transition matrix $\mathbf{P}$, with the weight being $\alpha$, and that it starts to resemble the random walk stationary vector as $\alpha$ gets close to $1$ (see the numerical sketch after this list).

  2. One of the possible intuitive explanations of the result of Theorem 3.1 is based on the observation that when Properties 1 & 2 hold, as $n \to \infty$ the random walk mixes approximately in one step, and so for any probability vector $\mathbf{y}$, $\mathbf{P}\mathbf{y}$ is roughly equal to $\mathbf{d}/\operatorname{vol}(G)$, the stationary distribution of the simple random walk. The proposed asymptotic approximation for PageRank can then be seen to follow from the series representation of PageRank if we replace $\mathbf{P}^k\mathbf{v}$ by $\mathbf{d}/\operatorname{vol}(G)$ for $k \ge 1$. Note that since $\mathbf{d}/\operatorname{vol}(G)$ is the stationary vector of the simple random walk, if $\mathbf{P}\mathbf{v} \approx \mathbf{d}/\operatorname{vol}(G)$ it also holds that $\mathbf{P}^k\mathbf{v} \approx \mathbf{d}/\operatorname{vol}(G)$ for all $k \ge 1$. Making these substitutions in the series representation of PageRank, namely

     $\boldsymbol{\pi} = (1-\alpha)\sum_{k=0}^{\infty}\alpha^k\,\mathbf{P}^k\mathbf{v}, \qquad (5)$

     we obtain

     $\boldsymbol{\pi} \approx (1-\alpha)\,\mathbf{v} + (1-\alpha)\sum_{k=1}^{\infty}\alpha^k\,\frac{\mathbf{d}}{\operatorname{vol}(G)} = (1-\alpha)\,\mathbf{v} + \alpha\,\frac{\mathbf{d}}{\operatorname{vol}(G)} = \bar{\boldsymbol{\pi}}.$

  3. The condition on the 2-norm of the preference vector can be viewed as a constraint on its allowed localization: for instance, the uniform vector $\mathbf{v} = \mathbf{1}/n$ satisfies $\|\mathbf{v}\|_2 = 1/\sqrt{n}$, whereas a preference vector concentrated on a single vertex has $\|\mathbf{v}\|_2 = 1$ and is excluded.
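The following numerical sketch (ours, not from the paper) illustrates Theorem 3.1 on ER graphs: the TV distance between $\boldsymbol{\pi}$ and $\bar{\boldsymbol{\pi}}$ shrinks as $n$ grows; the parameter choices are arbitrary.

```python
import numpy as np

# Numerical sketch (ours): on Erdos-Renyi graphs G(n, p) with p = 5 log(n)/n,
# the TV distance between PageRank pi and the mixture pi_bar of Theorem 3.1
# shrinks as n grows.
rng = np.random.default_rng(0)
alpha = 0.85

def tv_gap(n, p):
    A = np.triu(rng.random((n, n)) < p, 1).astype(float)
    A = A + A.T                         # symmetric adjacency, no self-loops
    d = A.sum(axis=0)                   # at this density, d > 0 w.h.p.
    P = A / d                           # column-stochastic P = A D^{-1}
    v = np.ones(n) / n                  # uniform preference vector
    pi = (1 - alpha) * np.linalg.solve(np.eye(n) - alpha * P, v)
    pi_bar = alpha * d / d.sum() + (1 - alpha) * v
    return 0.5 * np.abs(pi - pi_bar).sum()   # total variation distance

for n in (100, 400, 1600):
    print(n, tv_gap(n, p=5 * np.log(n) / n))
```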

Proof of Theorem 3.1: First observe from (1) that when $\alpha = 0$ we have $\mathbf{G} = \mathbf{v}\mathbf{1}^T$; hence from (2) we obtain $\boldsymbol{\pi} = \mathbf{v}$, since $\mathbf{1}^T\boldsymbol{\pi} = 1$. Similarly, for the case $\alpha = 1$ we have $\mathbf{G} = \mathbf{P}$, and so in this case $\boldsymbol{\pi}$ is just the stationary distribution of the original random walk, which is well-defined and equals $\mathbf{d}/\operatorname{vol}(G)$, since by Property 2 the graph is connected. Examining (4) for these two cases, we can see that the statement of the theorem holds trivially for both $\alpha = 0$ and $\alpha = 1$. In what follows, we consider the case $0 < \alpha < 1$. We first note that the matrix $\mathbf{Q}$ can be written as follows by the Spectral Decomposition Theorem [7]:

$\mathbf{Q} = \sum_{i=1}^{n}\lambda_i\,\mathbf{u}_i\mathbf{u}_i^T, \qquad (6)$

where $\lambda_i$ are the eigenvalues and $\mathbf{u}_i$, with $\|\mathbf{u}_i\|_2 = 1$ and $\mathbf{u}_i^T\mathbf{u}_j = 0$ for $i \ne j$, are the corresponding orthogonal eigenvectors of $\mathbf{Q}$. Recall that $\mathbf{u}_1 = \mathbf{D}^{1/2}\mathbf{1}/\sqrt{\operatorname{vol}(G)}$ is the Perron–Frobenius eigenvector. Next, we rewrite (3) in terms of the matrix $\mathbf{Q}$ as follows:

$\boldsymbol{\pi} = (1-\alpha)\,\mathbf{D}^{1/2}(\mathbf{I} - \alpha\mathbf{Q})^{-1}\mathbf{D}^{-1/2}\mathbf{v}. \qquad (7)$

Substituting (6) into (7), we obtain

$\boldsymbol{\pi} = (1-\alpha)\sum_{i=1}^{n}\frac{1}{1-\alpha\lambda_i}\,\mathbf{D}^{1/2}\mathbf{u}_i\mathbf{u}_i^T\mathbf{D}^{-1/2}\mathbf{v}.$

Let us denote the error vector by $\mathbf{e} = \boldsymbol{\pi} - \bar{\boldsymbol{\pi}}$. Note that since $\mathbf{u}_1 = \mathbf{D}^{1/2}\mathbf{1}/\sqrt{\operatorname{vol}(G)}$, we can write the term with $i = 1$ as

$(1-\alpha)\,\frac{1}{1-\alpha}\,\mathbf{D}^{1/2}\mathbf{u}_1\mathbf{u}_1^T\mathbf{D}^{-1/2}\mathbf{v} = \frac{\mathbf{d}\,(\mathbf{1}^T\mathbf{v})}{\operatorname{vol}(G)} \overset{(a)}{=} \frac{\mathbf{d}}{\operatorname{vol}(G)},$

where in (a) above we used the fact that $\mathbf{1}^T\mathbf{v} = 1$ since $\mathbf{v}$ is a probability vector. Then, using $\mathbf{v} = \sum_{i=1}^{n}\mathbf{D}^{1/2}\mathbf{u}_i\mathbf{u}_i^T\mathbf{D}^{-1/2}\mathbf{v}$, we can write $\mathbf{e}$ as

$\mathbf{e} = (1-\alpha)\,\mathbf{D}^{1/2}\Big(\sum_{i=2}^{n}\frac{\alpha\lambda_i}{1-\alpha\lambda_i}\,\mathbf{u}_i\mathbf{u}_i^T\Big)\mathbf{D}^{-1/2}\mathbf{v}. \qquad (8)$

Now let us bound the $L_1$-norm of the error:

$\|\mathbf{e}\|_1 \overset{(a)}{\le} \sqrt{n}\,\|\mathbf{e}\|_2 \overset{(b)}{\le} \sqrt{n}\,(1-\alpha)\,\|\mathbf{D}^{1/2}\|_2\,\Big\|\sum_{i=2}^{n}\frac{\alpha\lambda_i}{1-\alpha\lambda_i}\,\mathbf{u}_i\mathbf{u}_i^T\Big\|_2\,\|\mathbf{D}^{-1/2}\|_2\,\|\mathbf{v}\|_2 \overset{(c)}{\le} \sqrt{n}\,(1-\alpha)\,\sqrt{\frac{d_{\max}}{d_{\min}}}\;\frac{\alpha\lambda}{1-\alpha\lambda}\,\|\mathbf{v}\|_2 \le C\,(1-\alpha)\,\sqrt{\frac{d_{\max}}{d_{\min}}}\;\frac{\alpha\lambda}{1-\alpha\lambda}, \qquad (9)$

where in (a) we used the fact that $\|\mathbf{x}\|_1 \le \sqrt{n}\,\|\mathbf{x}\|_2$ for any vector $\mathbf{x} \in \mathbb{R}^n$, by the Cauchy-Schwarz inequality. In (b) we used the submultiplicative property of matrix norms, i.e., $\|\mathbf{A}\mathbf{B}\|_2 \le \|\mathbf{A}\|_2\,\|\mathbf{B}\|_2$. We obtain (c) by noting that the 2-norm of a diagonal matrix is its largest diagonal value in absolute value and the fact that for a symmetric matrix the 2-norm is the largest eigenvalue in magnitude, so that the middle factor is at most $\alpha\lambda/(1-\alpha\lambda)$ with $\lambda = \max(\lambda_2, |\lambda_n|)$. The last inequality is obtained by noting that the assumption $\|\mathbf{v}\|_2 = O(1/\sqrt{n})$ implies that $\|\mathbf{v}\|_2 \le C/\sqrt{n}$ for some constant $C$, and the fact that $\|\mathbf{D}^{1/2}\|_2\,\|\mathbf{D}^{-1/2}\|_2 = \sqrt{d_{\max}/d_{\min}}$.

Observing that $d_{\max}/d_{\min}$ is bounded w.h.p. by Property 1 and that $\lambda = o(1)$ w.h.p. by Property 2, we obtain our result. ∎

Note that in the case of standard PageRank, $\mathbf{v} = \mathbf{1}/n$ and hence $\|\mathbf{v}\|_2 = 1/\sqrt{n}$, but Theorem 3.1 also admits more general preference vectors than the uniform one.

Corollary 1

The statement of Theorem 3.1 also holds with respect to weak convergence, i.e., for any function $f$ on $V$ such that $\max_i|f(i)| \le 1$,

$\Big|\sum_i f(i)\,\pi_i - \sum_i f(i)\,\bar{\pi}_i\Big| \to 0 \quad \text{w.h.p.}$

Proof: This follows from Theorem 3.1 and the fact that the left-hand side of the above equation is upper bounded by $2\,\max_i|f(i)|\;d_{TV}(\boldsymbol{\pi}, \bar{\boldsymbol{\pi}})$ [30]. ∎

4 Chung-Lu random graphs

In this section, we study PageRank for the Chung-Lu model [14] of random graphs. These results naturally hold for ER graphs as well. The spectral properties of Chung-Lu graphs have been studied extensively in a series of papers by Fan Chung et al. [15, 16].

4.1 Chung-Lu Random Graph Model

Let us first provide a definition of the Chung-Lu random graph model.

Definition 1

(Chung-Lu Random Graph Model) A Chung-Lu graph $G(\mathbf{w})$ with an expected degree vector $\mathbf{w} = (w_1, w_2, \ldots, w_n)$, where $w_i$ are positive real numbers, is generated by drawing an edge between any two vertices $i$ and $j$, independently of all other pairs, with probability $p_{ij} = \frac{w_i w_j}{\sum_k w_k}$. To ensure that the probabilities $p_{ij}$ are well-defined, we need $(\max_i w_i)^2 \le \sum_k w_k$.
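For concreteness, here is a short sketch (ours, not from the paper) that samples a Chung-Lu graph using networkx's expected_degree_graph generator, which draws edges with exactly the probabilities of Definition 1; the parameter values are arbitrary.

```python
import networkx as nx
import numpy as np

# Sketch (ours): sample a Chung-Lu graph via networkx's expected_degree_graph,
# which draws edge (i, j) with probability w_i * w_j / sum(w) as in Definition 1
# (selfloops=False keeps edges between distinct pairs only).
n = 2000
w = [20.0] * n                 # constant expected degrees, w_max/w_min = 1
G = nx.expected_degree_graph(w, selfloops=False, seed=1)
deg = np.array([k for _, k in G.degree()])
# Realized degrees concentrate around the expected ones (cf. Lemma 1 below).
print(deg.mean(), deg.min(), deg.max())
```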

In the following, let $w_{\max} = \max_i w_i$ and $w_{\min} = \min_i w_i$. Below we specify a corollary of Theorem 3.1 as applied to these graphs. But before that, we need the following lemmas about Chung-Lu graphs, mainly taken from [15, 16].

Lemma 1

If the expected degrees satisfy $w_{\min} \gg \log n$, then in $G(\mathbf{w})$ we have, w.h.p., $d_i = w_i(1 + o(1))$ uniformly in $i$.

In the proof we use the Bernstein concentration lemma [8]:

Lemma 2

(Bernstein Concentration Lemma [8]) If $X = \sum_{i=1}^{n} X_i$, where $X_i$ are independent random variables such that $|X_i| \le M$ almost surely, and if $\sigma^2 = \sum_{i=1}^{n}\operatorname{Var}(X_i)$, then

$\mathbb{P}\big(|X - \mathbb{E}X| \ge t\big) \le 2\exp\left(-\frac{t^2/2}{\sigma^2 + Mt/3}\right)$

for any $t > 0$.

Proof of Lemma 1: This result is shown in the sense of convergence in probability in the proof of [16, Theorem 2]; using Lemma 2 we show that the result holds w.h.p. By a straightforward application of Lemma 2 to the degree $d_i = \sum_j A_{ij}$ (with $M = 1$ and $\sigma^2 \le w_i$), we obtain

$\mathbb{P}\big(|d_i - \mathbb{E}d_i| \ge \epsilon\,w_i\big) \le 2\exp\left(-\frac{\epsilon^2 w_i^2/2}{w_i + \epsilon w_i/3}\right) = 2\exp\big(-C(\epsilon)\,w_i\big),$

which, after a union bound over the $n$ vertices, vanishes polynomially fast if $w_{\min} \gg \log n$. ∎

We present below a perturbation result for the eigenvalues of Hermitian matrices, known as Weyl's inequalities, which we will need for our proofs.

Lemma 3

[25, Theorem 4.3.1] Let $\mathbf{A}, \mathbf{B} \in \mathbb{C}^{n \times n}$ be Hermitian, and let the eigenvalues $\lambda_k(\mathbf{A})$, $\lambda_k(\mathbf{B})$ and $\lambda_k(\mathbf{A}+\mathbf{B})$ be arranged in decreasing order. For each $k = 1, 2, \ldots, n$ we have

$\lambda_k(\mathbf{A}) + \lambda_n(\mathbf{B}) \le \lambda_k(\mathbf{A}+\mathbf{B}) \le \lambda_k(\mathbf{A}) + \lambda_1(\mathbf{B}),$

and consequently $|\lambda_k(\mathbf{A}+\mathbf{B}) - \lambda_k(\mathbf{A})| \le \|\mathbf{B}\|_2$, where $\|\mathbf{B}\|_2$ is the induced 2-norm, or the spectral norm, of $\mathbf{B}$.

The following lemma is an application of Theorem 5 in [15].

Lemma 4

If $w_{\min} \gg \sqrt{w_{\max}}\,\log^3 n$, then for $\mathbf{M} = \bar{\mathbf{D}}^{-1/2}\mathbf{A}\bar{\mathbf{D}}^{-1/2}$ we have almost surely (a.s.)

$\|\mathbf{M} - \bar{\mathbf{Q}}\|_2 = o(1),$

where $\bar{\mathbf{Q}} = \bar{\mathbf{D}}^{-1/2}\bar{\mathbf{A}}\bar{\mathbf{D}}^{-1/2}$, $\bar{\mathbf{A}} = \mathbb{E}\mathbf{A} = \frac{\mathbf{w}^T\mathbf{w}}{\mathbf{1}^T\mathbf{w}}$ with $\mathbf{w}$ treated as a row vector, and $\bar{\mathbf{D}} = \mathbb{E}\mathbf{D} = \operatorname{diag}(\mathbf{w})$.

Proof: It can be verified that when $w_{\min} \gg \sqrt{w_{\max}}\,\log^3 n$, the condition in [15, Theorem 5], namely $w_{\min} \gg \sqrt{\bar{w}}\,\log^3 n$ with $\bar{w}$ the average expected degree, is satisfied (since $\bar{w} \le w_{\max}$), and hence the result follows. ∎

Lemma 5

For a Chung-Lu graph $G(\mathbf{w})$ with $w_{\min} \gg \sqrt{w_{\max}}\,\log^3 n$ and $w_{\max}/w_{\min} \le C$, we have, w.h.p.,

$\lambda = \max(\lambda_2, |\lambda_n|) = o(1),$

where $\lambda_i$ are the eigenvalues of the symmetrized Markov matrix $\mathbf{Q}$.

Proof: Recall that $\mathbf{Q} = \mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}$ is the normalized adjacency matrix. We want to be able to bound the eigenvalues of $\mathbf{Q}$. We do this in two steps. Using Lemmas 1 and 3, we first show that if we replace the degree matrix $\mathbf{D}$ in the expression for $\mathbf{Q}$ by the expected degree matrix $\bar{\mathbf{D}}$, the eigenvalues of the resulting matrix $\mathbf{M} = \bar{\mathbf{D}}^{-1/2}\mathbf{A}\bar{\mathbf{D}}^{-1/2}$ are close to those of $\mathbf{Q}$. Then, using Lemma 4, we show that the eigenvalues of $\mathbf{M}$ roughly coincide with those of $\bar{\mathbf{Q}} = \bar{\mathbf{D}}^{-1/2}\bar{\mathbf{A}}\bar{\mathbf{D}}^{-1/2}$, which is a unit-rank matrix and hence has only a single non-zero eigenvalue. Thus we arrive at the result of Lemma 5. Now we give the detailed proof.

The first step, $\|\mathbf{Q} - \mathbf{M}\|_2 = o(1)$ w.h.p., follows from Lemma 1 and the same argument as in the last part of the proof of Theorem 2 in [16]. We present the steps of the derivation here for the sake of completeness.

Since the 2-norm of a diagonal matrix is the maximum diagonal entry in absolute value, we have, by Lemma 1,

$\|\boldsymbol{\Delta}\|_2 = \|\mathbf{D}^{-1/2}\bar{\mathbf{D}}^{1/2} - \mathbf{I}\|_2 = \max_i\Big|\sqrt{w_i/d_i} - 1\Big| = o(1) \quad \text{w.h.p.} \qquad (10)$

Also observe that, by Lemma 4,

$\|\mathbf{M}\|_2 \le \|\bar{\mathbf{Q}}\|_2 + o(1) = 1 + o(1) \quad \text{a.s.} \qquad (11)$

We now proceed to bound the norm of the difference as follows. Writing $\mathbf{Q} = (\mathbf{I} + \boldsymbol{\Delta})\,\mathbf{M}\,(\mathbf{I} + \boldsymbol{\Delta})$, we obtain

$\|\mathbf{Q} - \mathbf{M}\|_2 \overset{(a)}{\le} 2\,\|\boldsymbol{\Delta}\|_2\,\|\mathbf{M}\|_2 + \|\boldsymbol{\Delta}\|_2^2\,\|\mathbf{M}\|_2 \overset{(b,c)}{=} o(1) \quad \text{w.h.p.}, \qquad (12)$

where (a) follows from the triangle inequality for norms, in (b) we used the submultiplicativity of matrix norms, and (c) follows from (10), (11) and the fact that $o(1)\,(1 + o(1)) = o(1)$.

By Lemma 3, we have for any $k$,

$|\lambda_k(\mathbf{Q}) - \lambda_k(\mathbf{M})| \le \|\mathbf{Q} - \mathbf{M}\|_2 = o(1) \qquad (13)$

by (12). Furthermore, using Lemma 3 and the fact that $\lambda_k(\bar{\mathbf{Q}}) = 0$ for $k \ge 2$, we have for $k \ge 2$,

$|\lambda_k(\mathbf{M})| \le \|\mathbf{M} - \bar{\mathbf{Q}}\|_2 = o(1), \qquad (14)$

where the last equality follows from Lemma 4.
Now recall that $\lambda = \max(\lambda_2(\mathbf{Q}), |\lambda_n(\mathbf{Q})|)$. We have for any $k \ge 2$,

$|\lambda_k(\mathbf{Q})| \le |\lambda_k(\mathbf{Q}) - \lambda_k(\mathbf{M})| + |\lambda_k(\mathbf{M})|, \qquad (15)$

which implies from (13) and (14) that $\lambda = o(1)$ w.h.p. ∎

Armed with these lemmas, we now present the following corollary of Theorem 3.1 in the case of Chung-Lu graphs.

Corollary 2

Let $w_{\min} \gg \sqrt{w_{\max}}\,\log^3 n$ and $w_{\max}/w_{\min} \le C$. Then PageRank of the Chung-Lu graph can asymptotically be approximated in TV distance by $\bar{\boldsymbol{\pi}}$ defined in Theorem 3.1, if $\|\mathbf{v}\|_2 \le C_1/\sqrt{n}$ for some $C_1$ that does not depend on $n$.

Proof: Using Lemma 1 and the condition $w_{\max}/w_{\min} \le C$, one can show that there exists $C_2$ s.t. $d_{\max}/d_{\min} \le C_2$ w.h.p. Then the result is a direct consequence of Lemma 5 and the inequality (9). ∎

We further note that this result also holds for ER graphs with $n$ nodes and edge probability $p$ such that $np \gg \log^6 n$, where we have $w_i = np$ for all $i$, so that $w_{\max}/w_{\min} = 1$.

4.2 Element-wise Convergence

In Corollary 2 we proved the convergence of PageRank in TV distance for Chung-Lu random graphs. Note that since each component of PageRank could decay to zero as the graph size grows to infinity, this does not necessarily guarantee convergence in an element-wise sense. In this section, we strengthen the convergence result to element-wise convergence of the PageRank vector. Here we deviate slightly from the spectral decomposition technique and eigenvalue bounds used hitherto, and instead rely on well-known concentration bounds to bound the error.

Let $\mathbf{N}$ be a diagonal matrix whose diagonal elements are the components of the approximated PageRank vector $\bar{\boldsymbol{\pi}}$, i.e., $\mathbf{N} = \operatorname{diag}(\bar{\boldsymbol{\pi}})$, and let $\tilde{\mathbf{e}} = \mathbf{N}^{-1}\mathbf{e}$, where $\mathbf{e}$ is the unnormalized error defined in Section 3. Then using (8) we obtain

$\tilde{\mathbf{e}} = (1-\alpha)\,\alpha\,\mathbf{N}^{-1}\mathbf{D}^{1/2}(\mathbf{I}-\alpha\mathbf{Q}_2)^{-1}\mathbf{Q}_2\,\mathbf{D}^{-1/2}\mathbf{v},$

where $\mathbf{Q}_2$ is defined below. Therefore, using $\|\mathbf{X}\|_\infty$ to denote the induced infinity norm $\max_i\sum_j|X_{ij}|$, we can bound $\tilde{\mathbf{e}}$ as follows:

$\|\tilde{\mathbf{e}}\|_\infty \le (1-\alpha)\,\alpha\,\big\|\mathbf{N}^{-1}\mathbf{D}^{1/2}(\mathbf{I}-\alpha\mathbf{Q}_2)^{-1}\mathbf{Q}_2\mathbf{D}^{-1/2}\mathbf{v}\big\|_\infty \qquad (16)$

$\le (1-\alpha)\,\alpha\,\|\mathbf{N}^{-1}\|_\infty\,\|\mathbf{D}^{1/2}\|_\infty\,\|(\mathbf{I}-\alpha\mathbf{Q}_2)^{-1}\|_\infty\,\|\mathbf{Q}_2\,\mathbf{D}^{-1/2}\mathbf{v}\|_\infty. \qquad (17)$

To obtain (17) we used the submultiplicativity property of matrix norms and the identity $\sum_{i=2}^{n}\frac{\alpha\lambda_i}{1-\alpha\lambda_i}\,\mathbf{u}_i\mathbf{u}_i^T = \alpha\,(\mathbf{I}-\alpha\mathbf{Q}_2)^{-1}\mathbf{Q}_2$.

Define $\mathbf{Q}_2 = \mathbf{Q} - \mathbf{u}_1\mathbf{u}_1^T$, the restriction of the matrix $\mathbf{Q}$ to the orthogonal subspace of $\mathbf{u}_1$.

Lemma 6

For a Chung-Lu random graph with expected degrees $\mathbf{w}$, where $w_{\min} \gg \sqrt{w_{\max}}\,\log^3 n$ and $w_{\max}/w_{\min} \le C$, we have, w.h.p.,

$\|\mathbf{Q}_2\,\mathbf{D}^{-1/2}\mathbf{v}\|_\infty = o\Big(\frac{1}{n\sqrt{w_{\min}}}\Big),$

when $\max_i v_i \le C_1/n$.

This lemma can be proven by a few applications of Lemma 1 and Bernstein's concentration inequality (Lemma 2). To keep the train of thought intact, we refer the reader to Appendix A for a detailed proof of this lemma.

In the next lemma we prove an upper bound on the infinity norm of the resolvent matrix $(\mathbf{I}-\alpha\mathbf{Q}_2)^{-1}$.

Lemma 7

Under the conditions of Lemma 6, $\|(\mathbf{I}-\alpha\mathbf{Q}_2)^{-1}\|_\infty \le K$ w.h.p., where $K$ is a number independent of $n$ that depends only on $\alpha$ and $C$.

Proof: Note that $(\mathbf{I}-\alpha\mathbf{Q}_2)^{-1} = \mathbf{I} + \alpha\,(\mathbf{I}-\alpha\mathbf{Q}_2)^{-1}\mathbf{Q}_2$. Therefore, $\|(\mathbf{I}-\alpha\mathbf{Q}_2)^{-1}\|_\infty \le 1 + \alpha\,\|(\mathbf{I}-\alpha\mathbf{Q}_2)^{-1}\mathbf{Q}_2\|_\infty$, and the result follows since $\|\mathbf{X}\|_\infty \le \sqrt{n}\,\|\mathbf{X}\|_2$ [28] and using Lemma 1. ∎

Now we are in a position to present our main result in this section.

Theorem 4.1

Let $0 < \alpha < 1$ and let the preference vector satisfy $\max_i v_i \le C_1/n$. PageRank converges element-wise to $\bar{\boldsymbol{\pi}}$, in the sense that $\max_i |\pi_i - \bar{\pi}_i|/\bar{\pi}_i = o(1)$ w.h.p., on the Chung-Lu graph with expected degrees $\mathbf{w}$ such that $w_{\min} \gg \sqrt{w_{\max}}\,\log^3 n$ and $w_{\max}/w_{\min} \le C$, for some constants $C_1$ and $C$ independent of $n$.

Proof: Define $\tilde{\mathbf{e}} = \mathbf{N}^{-1}\mathbf{e}$ as above, so that $\max_i |\pi_i - \bar{\pi}_i|/\bar{\pi}_i = \|\tilde{\mathbf{e}}\|_\infty$. We then have:

$\min_i \bar{\pi}_i \ge \alpha\,\frac{d_{\min}}{\operatorname{vol}(G)} \ge \frac{\alpha}{n}\cdot\frac{d_{\min}}{d_{\max}}, \quad\text{hence}\quad \|\mathbf{N}^{-1}\|_\infty \le \frac{n}{\alpha}\cdot\frac{d_{\max}}{d_{\min}}. \qquad (18)$

Now from (17) we have

$\|\tilde{\mathbf{e}}\|_\infty \overset{(a)}{\le} (1-\alpha)\,\alpha\cdot\frac{n}{\alpha}\cdot\frac{d_{\max}}{d_{\min}}\cdot\sqrt{d_{\max}}\cdot K\cdot o\Big(\frac{1}{n\sqrt{w_{\min}}}\Big) \le C_2\; o\Big(\sqrt{\frac{d_{\max}}{w_{\min}}}\Big) = o(1),$

where in (a) we used (18) and Lemmas 6 and 7. The rest of the inequalities are obtained by repeatedly using the fact that $d_{\max}/d_{\min}$ is bounded w.h.p. and that $d_i = w_i(1+o(1))$ w.h.p. from Lemma 1. The last step follows from the assumption that $v_i \le C_1/n$ for some constant $C_1$. ∎
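The following numerical sketch (ours, not from the paper) illustrates the element-wise convergence of Theorem 4.1 on Chung-Lu graphs with a flat expected degree sequence; the parameter choices are arbitrary.

```python
import numpy as np

# Numerical sketch (ours): maximum relative element-wise error between PageRank
# and pi_bar on Chung-Lu graphs of growing size; expected degrees n**0.7 keep
# the ratio w_max/w_min = 1.
rng = np.random.default_rng(2)
alpha = 0.85

def max_relative_error(n):
    w = np.full(n, n ** 0.7)
    p = np.minimum(np.outer(w, w) / w.sum(), 1.0)   # p_ij = w_i w_j / sum(w)
    A = np.triu(rng.random((n, n)) < p, 1).astype(float)
    A = A + A.T                                     # symmetric adjacency
    d = A.sum(axis=0)
    P = A / d                                       # column-stochastic
    v = np.ones(n) / n
    pi = (1 - alpha) * np.linalg.solve(np.eye(n) - alpha * P, v)
    pi_bar = alpha * d / d.sum() + (1 - alpha) * v
    return np.max(np.abs(pi - pi_bar) / pi_bar)

for n in (200, 800, 3200):
    print(n, max_relative_error(n))
```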

Corollary 1 (ER Graphs)

For an ER graph with edge probability $p$ such that $np \gg \log^6 n$, we have that asymptotically the personalized PageRank converges pointwise (element-wise) to $\bar{\boldsymbol{\pi}}$, for $\mathbf{v}$ such that $\max_i v_i \le C_1/n$.

5 Asymptotic PageRank for the Stochastic Block Model

In this section, we extend the analysis of PageRank to Stochastic Block Models (SBM) with constraints on average degrees. The SBM is a random graph model that reflects the community structure prevalent in many online social networks. It was first introduced in [24] and has been analyzed subsequently in several works, specifically in the community detection literature, including [3, 17, 27, 33], several extensions thereof, as in [23] and [38], and the references therein.

For the sake of simplicity we focus on an SBM graph with two communities, but the idea of the proof extends easily to generalizations of this simple model.

Definition 1

[Stochastic Block Model (SBM) with two communities]: An SBM graph with two communities is an undirected graph on a set of vertices $V$ partitioned into two disjoint communities $C_1$ and $C_2$, such that $|C_1| = n_1$ and $|C_2| = n_2$, and let $n = n_1 + n_2$. Furthermore, if two vertices $i, j$ belong to the same community, then $\mathbb{P}\big((i,j) \in E\big) = p_1$, and if $i$ and $j$ belong to different communities, then $\mathbb{P}\big((i,j) \in E\big) = p_2$, all edges being drawn independently. The probabilities $p_1, p_2$ may scale with $n$, and we assume that $p_2 < p_1$; this last assumption is necessary for modeling the community structure of a network.

Remark: For the sake of simplicity, we assume that the edge probabilities within both communities are equal to $p_1$, but this is a minor assumption, and the model can be generalised so that community 1 has a different within-community edge probability than community 2.
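The sketch below (ours, not from the paper) samples a two-community SBM with networkx's stochastic_block_model and shows that the plain degree-based mixture of Theorem 3.1 keeps a residual error; the parameters are arbitrary.

```python
import networkx as nx

# Numerical sketch (ours): two-community SBM, restart localized on community 1.
# Property 2 fails here (lambda_2 does not vanish), so the degree-based mixture
# pi_bar of Theorem 3.1 misses the community-dependent correction term of
# Theorem 5.1 and the TV residual stays bounded away from zero.
n1 = n2 = 500
p1, p2 = 0.05, 0.01
G = nx.stochastic_block_model([n1, n2], [[p1, p2], [p2, p1]], seed=3)
alpha = 0.85
v = {i: (1.0 / n1 if i < n1 else 0.0) for i in G}   # restart inside C_1 only
pr = nx.pagerank(G, alpha=alpha, personalization=v)
d = dict(G.degree())
vol = sum(d.values())
pi_bar = {i: alpha * d[i] / vol + (1 - alpha) * v[i] for i in G}
print(0.5 * sum(abs(pr[i] - pi_bar[i]) for i in G))  # TV distance, stays > 0
```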

For an SBM graph we use $\bar{d}_{\max}$ and $\bar{d}_{\min}$ to denote the maximum and the minimum expected degrees of the nodes, respectively. From Definition 1, by our assumption on $p_1$ and $p_2$ (and taking $n_1 \ge n_2$ without loss of generality), we have $\bar{d}_{\max} = n_1 p_1 + n_2 p_2$ and $\bar{d}_{\min} = n_2 p_1 + n_1 p_2$, up to $O(p_1)$ corrections. Note that our results only depend on these two parameters. We present our main result on SBM graphs in the following theorem.

Theorem 5.1

For a Stochastic Block Model with $\bar{d}_{\min} \gg \sqrt{\bar{d}_{\max}}\,\log^3 n$ and $\bar{d}_{\max}/\bar{d}_{\min} \le C$, PageRank with preference vector $\mathbf{v}$ such that $\|\mathbf{v}\|_2 = O(1/\sqrt{n})$ satisfies

$\|\boldsymbol{\pi} - \bar{\boldsymbol{\pi}}_{SBM}\|_1 = o(1)$

w.h.p., where

$\bar{\boldsymbol{\pi}}_{SBM} = (1-\alpha)\,\mathbf{v} + \alpha(1-\alpha)\,\bar{\mathbf{P}}\mathbf{v} + \alpha^2\,\frac{\mathbf{d}}{\operatorname{vol}(G)}. \qquad (19)$

Here $\bar{\mathbf{P}}$ represents the "average" Markov matrix, given as $\bar{\mathbf{P}} = \bar{\mathbf{A}}\,\bar{\mathbf{D}}^{-1}$, where $\bar{\mathbf{A}} = \mathbb{E}\mathbf{A}$ and $\bar{\mathbf{D}} = \mathbb{E}\mathbf{D}$.
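To make the correction term concrete, note that by Definition 1 (ignoring $O(p_1)$ self-pair corrections) $\bar{A}_{ij} = p_1$ when $i$ and $j$ lie in the same community and $\bar{A}_{ij} = p_2$ otherwise, while $\bar{\mathbf{D}}$ carries $n_1 p_1 + n_2 p_2$ on the diagonal for nodes of $C_1$ and $n_2 p_1 + n_1 p_2$ for nodes of $C_2$. Hence, for $i \in C_1$ (this worked form is ours),

$(\bar{\mathbf{P}}\mathbf{v})_i = \frac{p_1}{n_1 p_1 + n_2 p_2}\sum_{j \in C_1} v_j + \frac{p_2}{n_2 p_1 + n_1 p_2}\sum_{j \in C_2} v_j,$

and symmetrically for $i \in C_2$. Thus the correction term $\alpha(1-\alpha)\bar{\mathbf{P}}\mathbf{v}$ redistributes the restart mass between the two communities, which is precisely the community-dependent effect absent from the degree-based mixture of Theorem 3.1.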

Discussion: Let us look at the permissible values of $p_1$ and $p_2$ under the assumptions in the above theorem. Recall that we have $\bar{d}_{\min} = n_2 p_1 + n_1 p_2$ (taking $n_1 \ge n_2$). Therefore the condition on the growth of the minimum expected degree is met, for example, if $n\,p_2 \gg \sqrt{n\,p_1}\,\log^3 n$. On the other hand, we have

$\frac{\bar{d}_{\max}}{\bar{d}_{\min}} = \frac{n_1 p_1 + n_2 p_2}{n_2 p_1 + n_1 p_2},$

which remains bounded if either $n_1/n_2$ or $p_1/p_2$ tends to infinity, but not both.

The following corollary of Theorem 5.1 gives an interesting expression for PageRank for an SBM graph with two equal-sized communities.

Corollary 2

For an SBM