Perron–based algorithms for the multilinear PageRank

Beatrice Meini (Dipartimento di Matematica, Università di Pisa, Italy) and Federico Poloni (Dipartimento di Informatica, Università di Pisa, Italy; federico.poloni@unipi.it)
Abstract

We consider the multilinear PageRank problem studied in [Gleich, Lim and Yu, Multilinear PageRank, 2015], which is a system of quadratic equations with stochasticity and nonnegativity constraints. We use the theory of quadratic vector equations to prove several properties of its solutions and suggest new numerical algorithms. In particular, we prove the existence of a certain minimal solution, which does not always coincide with the stochastic one that is required by the problem. We use an interpretation of the solution as a Perron eigenvector to devise new fixed-point algorithms for its computation, and pair them with a continuation strategy based on a perturbative approach. The resulting numerical method is more reliable than the existing alternatives, being able to solve a larger number of problems.

Keywords: Multilinear PageRank; Perron vector; fixed point iteration; Newton’s method

1 Introduction

Gleich, Lim and Yu [1] consider the following problem, arising as an approximate computation of the stationary measure of an order-2 Markov chain: given $\alpha \in [0,1]$, a vector $v \in \mathbb{R}^n$ and a matrix $R \in \mathbb{R}^{n \times n^2}$, with $v \ge 0$, $R \ge 0$ and

$\mathbf{1}^T v = 1, \qquad \mathbf{1}^T R = \mathbf{1}^T,$    (1)

solve for $x \in \mathbb{R}^n$ the equation

$x = \alpha R (x \otimes x) + (1-\alpha) v.$    (2)

The solution of interest is stochastic, i.e., $x \ge 0$ and $\mathbf{1}^T x = 1$. Here $\mathbf{1}$ denotes a column vector of all ones (with an optional subscript to specify its length), and inequalities between vectors and matrices are intended in the componentwise sense. In the paper [1], the authors prove some theoretical properties, consider several solution algorithms, and evaluate their performance. In the more recent paper [2], the authors improve some results concerning the uniqueness of the solution.
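
For concreteness, the following Matlab fragment evaluates the residual of (2) when the tensor is stored, as in [1], as an $n \times n^2$ matrix acting on Kronecker products; the variable names (R, v, a, x) are purely illustrative.

    % Residual of (2), assuming R is n-by-n^2 with ones(1,n)*R == ones(1,n^2),
    % v is a stochastic n-vector, and 0 <= a <= 1.
    res = @(x) a * R * kron(x, x) + (1 - a) * v - x;
    % x is a stochastic solution of (2) when x >= 0, sum(x) == 1 and
    % norm(res(x), 1) is zero up to rounding errors.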

This problem originally appeared in [3], and is a variation of problems related to tensor eigenvalue problems and Perron–Frobenius theory for tensors; see, e.g., [4, 5, 6]. However, it also fits in the framework of quadratic vector equations derived from Markovian binary tree models introduced in [7] and later considered in [8, 9, 10]. Indeed, the paper [10] considers a more general problem, which is essentially (2) without the hypotheses (1). Hence, all of its results apply here, and they can be used in the context of multilinear PageRank. In particular, [10] considers the minimal nonnegative solution of (2) (in the componentwise sense), which is not necessarily stochastic, unlike the one sought in [1].

In this paper, we use the theory of quadratic vector equations in [8, 9, 10] to better understand the behavior of the solutions of (2) and suggest new algorithms for computing the stochastic solution. More specifically, we show that if one considers the minimal nonnegative solution of (2) as well, the theoretical properties of (2) become clearer, even if one is only interested in stochastic solutions. Indeed we prove that there always exists a minimal nonnegative solution $m$, which is the unique stochastic solution when $\alpha \le 1/2$. When $\alpha > 1/2$, the minimal nonnegative solution is not stochastic and there is at least one stochastic solution $x \ne m$. Note that [1, Theorem 4.3] already proves that when $\alpha < 1/2$ the stochastic solution is unique and [2, Theorem 1] slightly improves this bound; our results give a broader characterization. All this is in Section 2.

When $\alpha \le 1/2$, as already pointed out in [1], computing the stochastic solution of (2) is easy. Indeed, this is also due to the fact that the stochastic solution is the minimal solution, and for instance the numerical methods proposed in [7, 8, 9] perform very well. The most difficult case is when $\alpha > 1/2$, in particular when $\alpha$ is close to 1. Since the minimal solution $m$ of (2) can be easily computed, the idea is to compute it and deflate it, with a similar strategy to the one developed in [8, 9], hence allowing us to compute stochastic solutions even when they are not minimal. The main tool in this approach is rearranging (2) to show that (after a change of variables) a solution corresponds to the Perron eigenvector of a certain matrix that depends on the solution itself. This interpretation in terms of the Perron vector allows us to devise new algorithms based on fixed point iteration and on Newton’s method. Sections 3 and 4 describe this deflation technique and the algorithms based on the Perron vector computation.

Finally, we propose in Section 5 a continuation strategy based on a perturbative approach that allows one to solve the problem for smaller values of $\alpha$ in order to obtain better starting values for the more challenging cases when $\alpha$ is close to 1.

We report several numerical experiments in Section 6, to show the effectiveness of these new techniques for the set of small-scale benchmark problems introduced in [1], and draw some conclusions in Section 7.

2 Properties of the nonnegative solutions

In this section, we show properties of the nonnegative solutions of the equation (2). In particular, we prove that there always exists a minimal nonnegative solution, which is stochastic when $\alpha \le 1/2$. These properties can be derived by specializing the results of [10], which apply to more general vector equations defined by bilinear forms.

We introduce the map

$G(x) = \alpha R (x \otimes x) + (1-\alpha) v,$

and its Jacobian

$G'_x = \alpha R (x \otimes I + I \otimes x).$
We have the following result.

Lemma 1.

Consider the fixed-point iteration

$x_{k+1} = G(x_k),$    (3)

started from $x_0 = 0$. Then the sequence of vectors $(x_k)$ is such that $x_k \le x_{k+1}$ for all $k$, the limit $m = \lim_{k \to \infty} x_k$ exists, and $m$ is the minimal nonnegative solution of (2), i.e., equation (2) has a (unique) solution $m$ such that $m \le x$ for any other possible solution $x \ge 0$.

Proof.

The map $G$ is weakly positive, i.e., $G(x) \ge 0$ whenever $x \ge 0$. Moreover, $G$ is monotonic, i.e., $G(x) \le G(y)$ whenever $0 \le x \le y$. Therefore Condition A1 of [10] is satisfied which, according to Theorem 4 of [10], implies that the sequence of vectors $(x_k)$ is bounded and converges monotonically to a vector $m$, which is the minimal nonnegative solution of (2). ∎
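
As a minimal Matlab sketch (with the same illustrative names as above, and a hypothetical tolerance tol), the iteration (3) can be implemented as follows.

    % Fixed-point iteration (3): x_{k+1} = G(x_k), started from x_0 = 0.
    G = @(x) a * R * kron(x, x) + (1 - a) * v;
    x = zeros(n, 1);
    for k = 1:10000
        xnew = G(x);
        if norm(xnew - x, 1) <= tol, break; end
        x = xnew;
    end
    m = x;   % approximation of the minimal nonnegative solution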

2.1 Sum of entries and criticality

In this specific problem, the hypotheses (1) enforce a stronger structure on the iterates of (3): the sum of the entries of $x_{k+1}$ is a function of the sum of the entries of $x_k$ only.

Lemma 2.

Let $x \ge 0$. Then, $\mathbf{1}^T G(x) = \alpha (\mathbf{1}^T x)^2 + (1-\alpha)$ for any such $x$.

Proof.

Using (1) and $\mathbf{1}_{n^2} = \mathbf{1}_n \otimes \mathbf{1}_n$, we have $\mathbf{1}^T G(x) = \alpha\, \mathbf{1}^T R (x \otimes x) + (1-\alpha)\, \mathbf{1}^T v = \alpha (\mathbf{1}^T x)^2 + (1-\alpha)$. ∎

This fact has important consequences for the sum of the entries of the solutions of (2).

Corollary 3.

For each solution $x \ge 0$ of (2), the sum $\mathbf{1}^T x$ is one of the two solutions of the quadratic equation $\alpha t^2 - t + (1-\alpha) = 0$ (which factors as $(t-1)(\alpha t - (1-\alpha)) = 0$), i.e., $\mathbf{1}^T x = 1$ or $\mathbf{1}^T x = \frac{1-\alpha}{\alpha}$.

Let $t^*$ be one of the solutions of $\alpha t^2 - t + (1-\alpha) = 0$ and define the level set $S_{t^*} = \{x \ge 0 : \mathbf{1}^T x = t^*\}$. Since $S_{t^*}$ is convex and compact, and since $G$ maps $S_{t^*}$ to itself by Lemma 2, the Brouwer fixed-point theorem implies the following result.

Corollary 4.

There exists at least one solution $x$ to (2) with $\mathbf{1}^T x = 1$ and at least one solution with $\mathbf{1}^T x = \frac{1-\alpha}{\alpha}$.

Hence we can have two different settings, for which we borrow the terminology from [10].

Subcritical case

$\alpha \le 1/2$, so that $\frac{1-\alpha}{\alpha} \ge 1$; hence the minimal nonnegative solution is the unique stochastic solution.

Supercritical case

$\alpha > 1/2$, so that $\frac{1-\alpha}{\alpha} < 1$; hence the minimal nonnegative solution $m$ satisfies $\mathbf{1}^T m = \frac{1-\alpha}{\alpha} < 1$ and there is at least one stochastic solution $x \ne m$.

Note that [1, Theorem 4.3] already proves that when $\alpha < 1/2$ the stochastic solution is unique; these results give a broader characterization.

The tools that we have introduced can already be used to determine the behavior of simple iterations such as (3).

Theorem 5.

Consider the fixed-point iteration (3), with a certain initial value $x_0 \ge 0$, for the problem (2) with $\alpha > 1/2$. Define $s_k = \mathbf{1}^T x_k$. Then,

  • If $\mathbf{1}^T x_0 < 1$, then $\lim_{k \to \infty} s_k = \frac{1-\alpha}{\alpha}$, and the iteration (3) can converge only to the minimal solution (if it converges).

  • If $\mathbf{1}^T x_0 = 1$, then $s_k = 1$ for all $k$, hence the iteration (3) can converge only to a stochastic solution (if it converges).

  • If $\mathbf{1}^T x_0 > 1$, then $s_k \to \infty$, hence the iteration (3) diverges.

Proof.

Thanks to Lemma 2, the quantity $s_k = \mathbf{1}^T x_k$ evolves according to the scalar recursion $s_{k+1} = \alpha s_k^2 + (1-\alpha)$. So the result follows from the theory of scalar fixed-point iterations, since this recursion converges to $\frac{1-\alpha}{\alpha}$ for $s_0 < 1$, is constant for $s_0 = 1$, and diverges for $s_0 > 1$. ∎
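
The scalar recursion appearing in this proof is easy to experiment with; the following Matlab snippet (with illustrative values of a and of the starting sums) displays the three regimes.

    % s_{k+1} = a*s_k^2 + (1-a); for a > 1/2 its fixed points are (1-a)/a and 1.
    a = 0.8;
    for s0 = [0.9, 1, 1.1]
        s = s0;
        for k = 1:50, s = a*s^2 + (1 - a); end
        fprintf('s0 = %4.2f  ->  s_50 = %g\n', s0, s);
    end
    % s0 = 0.9 approaches (1-a)/a = 0.25, s0 = 1 stays at 1, s0 = 1.1 diverges.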

An analogous result holds for the subcritical case.

The papers [8, 9, 10] describe several methods to compute the minimal solution $m$. In particular, all the ones described in [10] exhibit monotonic convergence, that is, $0 \le x_k \le x_{k+1} \le m$ for all $k$. Due to the uniqueness and the monotonic convergence properties, computing the minimal solution is typically simple, fast, and free of numerical issues. Hence in the subcritical case the multilinear PageRank problem is easy to solve. The supercritical case is more problematic.

Among all available algorithms to compute the minimal solution $m$, we recall Newton’s method, which is one of the most efficient ones. The Newton–Raphson method applied to the function $F(x) = x - G(x)$ generates the sequence

$x_{k+1} = x_k - (I - G'_{x_k})^{-1} \big( x_k - G(x_k) \big).$    (4)

The following result holds [10, Theorem 13].

Lemma 6.

Suppose that $m > 0$, and that the nonnegative matrix $G'_m$ is irreducible. Then, Newton’s method (4) starting from $x_0 = 0$ is well defined and converges monotonically to $m$ (i.e., $x_k \le x_{k+1} \le m$ for all $k$).

Algorithm 1 shows a straightforward implementation of Newton’s method as described above.

Input: $R$, $v$, $\alpha$ as above, a tolerance $\varepsilon$.
Output: An approximation to the minimal solution $m$ of (2).
;
while   do
       ;
       ;
      
end while
;
Algorithm 1 Newton’s method for the computation of the minimal solution to (2).

Note that the theory in [10, Section 9] shows how one can predict where zeros appear in $m$ and eliminate them, reducing the problem to a smaller one. Indeed, in view of the probabilistic interpretation of multilinear PageRank, zero entries in $m$ can appear only when the second-order Markov chain associated to $R$ is not irreducible. So we can assume in the following that $m > 0$, and that the nonnegative matrix $G'_m$ is irreducible. In particular, in this case $G'_m$ is also substochastic (as proved in [10, Theorem 6]).
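
A minimal Matlab sketch of Newton’s method (4), under the same illustrative conventions as above (the Jacobian is formed and factored explicitly, which is viable only for small n), is the following.

    % Newton's method for F(x) = x - G(x) = 0, started from x_0 = 0.
    x = zeros(n, 1);
    I = eye(n);
    for k = 1:100
        Fx = x - (a * R * kron(x, x) + (1 - a) * v);
        Jx = I - a * R * (kron(x, I) + kron(I, x));   % I - G'_x
        dx = Jx \ Fx;
        x  = x - dx;
        if norm(dx, 1) <= tol, break; end
    end
    m = x;   % approximation of the minimal nonnegative solution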

3 Deflating the minimal solution

From now on, we assume to be in the supercritical case, i.e., $\alpha > 1/2$, and that the minimal solution $m$ has already been computed and is explicitly available.

We wish to adapt to this setting the deflation strategy introduced in [9]. Since all nonnegative solutions to (2) satisfy $x \ge m$, it makes sense to change variable to $y = x - m$ to obtain an equation in $y$. After a few manipulations, using the bilinearity of $(u, w) \mapsto R(u \otimes w)$ and the fact that $m$ itself solves (2), one gets

$y = \alpha R \big( y \otimes y + m \otimes y + y \otimes m \big).$    (5)

Moreover, for a stochastic solution $x = m + y$,

$\mathbf{1}^T y = 1 - \mathbf{1}^T m = \frac{2\alpha - 1}{\alpha}.$    (6)

We set $P_y$ to be a nonnegative matrix, depending on $y$, such that the right-hand side of (5) equals $P_y\, y$ (an explicit candidate is sketched below). Note that $P_y \ge \alpha R (m \otimes I + I \otimes m)$ for each $y \ge 0$, hence it is irreducible. In addition, if $y$ is chosen such that $\mathbf{1}^T y = \frac{2\alpha - 1}{\alpha}$ as in (6), then

$\mathbf{1}^T P_y = \mathbf{1}^T,$    (7)

so $P_y$ is a stochastic matrix.

Let us introduce the map $p(\cdot)$ that gives the Perron vector of an irreducible matrix, normalized so that its entries sum to 1. Then, since $P_y$ is irreducible and stochastic, (5) and (6) show that

$y = \frac{2\alpha - 1}{\alpha}\, p(P_y),$    (8)

i.e., the sought vector $y$ is the Perron vector of the matrix $P_y$, multiplied by the constant $\frac{2\alpha - 1}{\alpha}$.
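
For concreteness, one explicit matrix that is consistent with (5)–(7) is $P_y = \alpha R \big( (m+y) \otimes I + I \otimes m \big)$; we give it here only as an illustrative candidate, since the rearrangement of the right-hand side of (5) into the form $P_y\, y$ is not unique. Indeed, $P_y\, y = \alpha R (m \otimes y + y \otimes m + y \otimes y) = y$ by (5), and $\mathbf{1}^T P_y = \alpha \big( \mathbf{1}^T (m+y) + \mathbf{1}^T m \big) \mathbf{1}^T = \mathbf{1}^T$ whenever $\mathbf{1}^T y = \frac{2\alpha - 1}{\alpha}$, in agreement with (7). This candidate is also the one used in the Matlab sketches below.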

4 Perron-based algorithms

Equation (8) suggests a new fixed-point iteration for computing $y$, which is analogous to the one appearing in [9],

$y_{k+1} = \frac{2\alpha - 1}{\alpha}\, p(P_{y_k}),$    (9)

starting from a given nonnegative vector $y_0$ such that $\mathbf{1}^T y_0 = \frac{2\alpha - 1}{\alpha}$. This iteration is implemented in Algorithm 2.

Input: $R$, $v$, $\alpha$ as above, with $\alpha > 1/2$, a tolerance $\varepsilon$, an initial value $y_0$.
Output: An approximation to a stochastic solution $x$ of (2).
Compute $m$ with Algorithm 1;
Normalize (if needed): , ;
;
Normalize (if needed): , ;
;
while   do
       ;
       ;
       ;
      
end while
;
Algorithm 2 The Perron-based iteration for the computation of a stochastic solution to (2).
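
A minimal Matlab sketch of this iteration, using Matlab’s eig to compute the Perron vector and the illustrative candidate for $P_y$ given at the end of Section 3 (both that choice of $P_y$ and the variable names are assumptions of this sketch), is the following.

    % Perron iteration (Algorithm 2), given the minimal solution m from Algorithm 1.
    s = (2*a - 1) / a;              % target sum of y = x - m, see (6)
    I = eye(n);
    y = (s / n) * ones(n, 1);       % any nonnegative starting vector with sum s
    for k = 1:1000
        P = a * R * (kron(m + y, I) + kron(I, m));   % illustrative P_y (stochastic)
        [V, D] = eig(P);
        [~, j] = max(real(diag(D)));                 % Perron eigenvalue (here = 1)
        p = abs(real(V(:, j)));
        p = p / sum(p);                              % Perron vector with sum 1
        ynew = s * p;
        if norm(ynew - y, 1) <= tol, break; end
        y = ynew;
    end
    x = m + y;                      % candidate stochastic solution of (2)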

We may also apply Newton’s method to find a solution to (8), following [8]. To this end, we first compute the Jacobian of the map $y \mapsto p(P_y)$.

Lemma 7.

Let , with such that . Then, its Jacobian is given by

(10)
Proof.

Let us compute its directional derivative along the direction . We set ; hence, . Since is irreducible, its Perron eigenvalue is a simple eigenvalue, and hence we can define locally smooth functions as the Perron eigenvalue of and . A computation analogous to (7) shows that , hence and .

By differentiating and evaluating at , we get

or, rearranging terms,

where . Since we defined so that , we have , and hence also

The matrix in the left-hand side is nonsingular, since it can be obtained by replacing the eigenvalue with in the eigendecomposition of the singular irreducible M-matrix . Thus we can write

Since , we can simplify this expression further to

from which we get (10). ∎

From the above result, the Jacobian of the function is given by

(11)

and Newton’s method consists of generating the sequence of vectors

The Perron–Newton method is described in Algorithm 3.

Input: $R$, $v$, $\alpha$ as above, with $\alpha > 1/2$, a tolerance $\varepsilon$, an initial value $y_0$.
Output: An approximation to a stochastic solution $x$ of (2).
Compute $m$ with Algorithm 1;
Normalize (if needed): , ;
;
Normalize (if needed): , ;
;
while   do
       ;
       ;
       ;
       ;
       Normalize (if needed): ;
       ;
      
end while
;
Algorithm 3 The Perron–Newton method for the computation of a stochastic solution to (2).

The standard theorems [11] on local convergence of Newton’s method imply the following result.

Theorem 8.

Suppose that the matrix is nonsingular. Then the Perron–Newton method is locally convergent to a stochastic solution of (2).

Remark 9.

Since , the matrix has an eigenvalue 1 with left eigenvector , for each value of .

5 Continuation techniques

The above algorithms, as well as those in [1], are sufficient to solve most of the test problems that are explored in [1]. However, especially when $\alpha$ is close to 1, the algorithms may converge very slowly or stagnate far away from the minimal solution. For this reason, we explore an additional technique based on a perturbation expansion of the solution as a function of $\alpha$, inspired loosely by the ideas of homotopy continuation [12], which is a well-known strategy to derive approximate solutions for parameter-dependent equations.

The main result is the following.

Lemma 10.

Let be a stochastic solution of problem (2) (for a certain ), and suppose that is nonsingular. Then, there is a stochastic solution to (2) with the parameter replaced by such that

(12)

with , , where .

Proof.

Let us make the dependence of the various quantities on the parameter $\alpha$ explicit, i.e., we write $x(\alpha)$ to denote a stochastic solution of (2) for a certain value of the parameter $\alpha$, and similarly for the other quantities that depend on it.

We apply the implicit function theorem [13, Theorem 9.28] to

obtaining

(13a)

Differentiating (13), we get

The function $x(\alpha)$ obtained by the theorem must satisfy $\mathbf{1}^T x(\alpha) = 1$ for each $\alpha$: indeed, by Corollary 3, for a solution of the equation it must hold that either $\mathbf{1}^T x = 1$ or $\mathbf{1}^T x = \frac{1-\alpha}{\alpha}$, and the continuous function $\alpha \mapsto \mathbf{1}^T x(\alpha)$ cannot jump from $1$ to $\frac{1-\alpha}{\alpha} < 1$ without taking any intermediate value.

The formula (12) now follows by Taylor expansion. ∎
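
Although the explicit coefficients of the expansion are not reproduced here, the first-order term follows directly from the implicit function theorem applied to $F(x, \alpha) = \alpha R(x \otimes x) + (1-\alpha)v - x$. The following Matlab sketch of a T1-type predictor is our own derivation under this form of (2), with illustrative names; it assumes a stochastic solution x at the current parameter a and a step h.

    % First-order predictor: x(a + h) ~ x(a) + h * x'(a), where x'(a) solves
    %   (I - a*R*(kron(x,I) + kron(I,x))) * xprime = R*kron(x,x) - v.
    I      = eye(n);
    J      = I - a * R * (kron(x, I) + kron(I, x));   % assumed nonsingular
    xprime = J \ (R * kron(x, x) - v);
    xpred  = x + h * xprime;        % starting guess for the problem with a + h
    % Consistently with 1'*x(a) = 1, one has ones(1,n)*xprime = 0 up to rounding.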

This result suggests a strategy to improve convergence for $\alpha$ close to 1: we start solving the problem with a smaller value of $\alpha$, then we solve it repeatedly for increasing values $\alpha_0 < \alpha_1 < \alpha_2 < \cdots$; at each step we use as a starting point for our iterative algorithms an approximation of $x(\alpha_{k+1})$ constructed from the solutions already computed.

We tried several different strategies to construct this approximation.

T2

Second-order Taylor expansion ;

T1

First-order Taylor expansion ;

EXT

Linear extrapolation from the previous two values ;

IMP

An attempt at building a ‘semi-implicit method’ with only the values available: ideally, one would like to set (which would correspond to expressing with a Taylor expansion around ); however, we do not know . Instead, we take with

i.e., we use the new value of and the old value of in the expression for the first derivative.

The only remaining choice is designing an effective strategy to choose the next point $\alpha_{k+1}$. We adopt the following one. Note that for T1, EXT, and IMP we expect the prediction error to be quadratic in the step $\alpha_{k+1} - \alpha_k$. Hence one can expect

Thus we choose a ‘target’ value (e.g., ) for , and then we choose by solving the equation

The resulting continuation algorithm is described in Algorithm 4. Note that we start from a value of $\alpha$ strictly larger than $1/2$, to steer clear of the double solution at $\alpha = 1/2$. Convergence for such a value of $\alpha$ is typically not problematic.

Input: $R$, $v$, $\alpha$ as above, with $\alpha > 1/2$, a tolerance $\varepsilon$, a continuation ‘speed’ parameter.
Output: An approximation to a stochastic solution $x$ of (2).
;
;
while  do
       if  then
            ;
       else
             determined using T1,T2,IMP or (if ) EXT;
            
       end if
        (determined using the Newton or Perron–Newton method, with the initial value predicted above);
       if  then
             ;
            
       else
             ;
            
       end if
      if  then
             ;
            
       end if
      ;
      
end while
;
Algorithm 4 The continuation method for the computation of a stochastic solution to (2).
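
To fix ideas, here is a skeletal Matlab version of the outer continuation loop with the EXT predictor; the inner call solve_mpr is a hypothetical stand-in for Algorithm 2 or 3, and the adaptive step-size control of Algorithm 4 is simplified to a fixed grid of values of $\alpha$.

    % Skeletal continuation loop (fixed grid, EXT predictor).
    a_target = a;                                % the value of alpha we ultimately want
    alphas   = unique([0.55:0.05:a_target, a_target]);
    X        = zeros(n, numel(alphas));
    for j = 1:numel(alphas)
        if j <= 2
            x0 = v;                              % cheap starting guess
        else
            t  = (alphas(j) - alphas(j-1)) / (alphas(j-1) - alphas(j-2));
            x0 = X(:, j-1) + t * (X(:, j-1) - X(:, j-2));   % linear extrapolation
        end
        X(:, j) = solve_mpr(R, v, alphas(j), x0, tol);      % hypothetical inner solver
    end
    x = X(:, end);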

Note that for strategy T2 one would expect instead, but for simplicity (and to ensure a fair comparison between the methods) we use the same step size selection strategy.

6 Numerical experiments

We have implemented the described methods using Matlab, and compared them to some of the algorithms in [14] on the same benchmark set of small-size models used in [1]. We have used a tolerance of the order of the machine precision, the same initial value unless differently specified, and a fixed continuation ‘speed’ parameter for Algorithm 4. To compute Perron vectors, we have used the output of Matlab’s eig. For problems with larger $n$, different methods (eigs, inverse iteration, Gaussian elimination for kernel computation, etc.) can be considered [15].

The results, for various values of $\alpha$, are reported in Tables 1 to 3. Each of the 29 rows represents a different experiment in the benchmark set, and the columns stand for the different algorithms, according to the following legend.

F

The fixed-point iteration (3), from an initial value $x_0$ such that $\mathbf{1}^T x_0 = 1$ (and with renormalization to $\mathbf{1}^T x_k = 1$ at each step).

IO

The inner-outer iteration method, as described in [1].

N

Newton’s method with normalization to $\mathbf{1}^T x_k = 1$ at each step, as described in [1]. Note that this is not Algorithm 1, which would converge instead to the minimal solution $m$.

P

The Perron method (Algorithm 2).

PN

The Perron–Newton method (Algorithm 3).

N-xxx, PN-xxx

N or PN methods combined with continuation with one of the four strategies described in Section 5.

For each experiment, we report the number of iterations required, and the CPU times in seconds (obtained on Matlab R2017a on a computer equipped with a 64-bit Intel core i5-6200U processor). The value NaN inside a table represents failure to converge after 10,000 iterations. For P and PN, the number of iterations is defined as the sum of the number of Newton iterations required to compute $m$ with Algorithm 1 and the number of iterations in the main algorithm. For the continuation algorithms, we report the number of inner iterations, that is, the total number of iterations performed by PN or N (respectively), summing over all calls to the subroutine.

In addition to the tables, performance profiles [16, Section 22.4] are reported in Figures 1 to 3.

Note that neither the number of iterations nor the CPU time is particularly indicative of the true performance of the algorithms: indeed, the iterations in the different methods amount to different quantities of work, since some require merely linear system solutions and some require eigenvalue computations. Moreover, Matlab introduces overheads which are difficult to predict for several instructions (such as variable access and function calls). In order to make the time comparisons fairer, for methods IO and N we did not use the code in [14], but rather rewrote it without making use of Matlab’s object-oriented features, which made the original functions significantly slower. The only change introduced with respect to their implementations is that we changed the stopping criterion so that it is the same for all tested methods.

In any case, the performance comparison between the various methods should be taken with a pinch of salt.

Table 1: Iteration counts and times for the 29 benchmark tensors and
Table 2: Iteration counts and times for the 29 benchmark tensors and
Table 3: Iteration counts and times for the 29 benchmark tensors and
Figure 1: Performance profiles for the 29 benchmark tensors and
Figure 2: Performance profiles for the 29 benchmark tensors and
Figure 3: Performance profiles for the 29 benchmark tensors and

We comment briefly on the results. Newton-based methods typically require a constant number of iterations to converge, but on some of the benchmarks they fail to converge or require a much larger number of iterations. From the point of view of reliability, combining the Perron–Newton algorithm with continuation strategies gives a definite advantage: the resulting methods are the only ones (among the ones considered) that can solve successfully all the problems in the original benchmark set in [1], which contained the experiments with all values of up to . Raising to the more extreme value of reveals failure cases for these methods as well. (However, if one reduces the step-size selection parameter to , methods PN-EXT and PN-T1 succeed on all examples also for .)

Our results confirm the findings of [1] that problem R6_3 (the third to last one) for is the hardest problem of the benchmark set; Perron-based algorithms can solve it successfully in the end, but (like the algorithms in [1]) they stagnate for a large number of iterations around a point which is far away from the true solution. The analysis in [1, End of Section 6.3 and caption of Figure 9] also attributes this difficulty to the presence of a ‘pseudo-solution’ that is an attraction basin for a large set of starting points.

To understand the specific difficulties encountered by continuation methods on problem R6_3, it is instructive to consider the plot in Figure 4.

Figure 4: The first entry of each stochastic solution to (2) for the benchmark problem R6_3, plotted for various values of (on the x-axis). The behavior of all other entries of is similar.

This figure, which has been generated with the help of the software Bertini [17], shows the first entry of each valid stochastic solution to (2) (recall that there may be more than one solution), for varying values of . It turns out that the problem has a single solution for all values of up to , then two more solutions appear, initially coincident and then slowly separating one from the other. Two of these three solutions merge at about and then disappear (i.e., they are replaced by a pair of complex conjugate solutions), so for there is only one real stochastic solution remaining. Note that the points where the number of solutions changes are points in which the Jacobian is singular, in accordance with the implicit function theorem.

Continuation methods ‘track’ the upper solution, and hence fail to provide a good starting point beyond the value of $\alpha$ at which it disappears suddenly and is replaced by a completely different solution. Moreover, for these values of $\alpha$, the numerical behavior of these iterations is affected by the presence of a point with a very small residual. For an intuition, the reader may consider the behavior of Newton’s method (on a function of a single variable) in the presence of a local minimum with value very close to zero, e.g., on $f(x) = x^2 + \epsilon$ for a very small $\epsilon > 0$.

7 Conclusions

We have used the theory of quadratic vector equations in [8, 9, 10] to attack the multilinear PageRank problem described in [1]. The tools that we have introduced improve the theoretical understanding of this equation and of its solutions. In particular, considering the minimal solution in addition to the stochastic ones allows one to use a broader array of algorithms, with a computational advantage. The new algorithms achieve better results when $\alpha$ is close to 1, which is the most interesting and computationally challenging case.

Possible future improvements to these algorithms include experimenting with further step-size selection strategies, and more importantly integrating them with inexact eigenvalue solvers that are more suitable to computation with large sparse matrices.

References
