$L^\gamma$-PageRank for Semi-Supervised Learning

Research

Esteban Bautista (1,2; corresponding author; esteban.bautista-ruiz@ens-lyon.fr), Patrice Abry (2; patrice.abry@ens-lyon.fr), Paulo Gonçalves (1; paulo.goncalves@ens-lyon.fr)

(1) Univ Lyon, Inria, CNRS, ENS de Lyon, UCB Lyon 1, LIP UMR 5668, F-69342 Lyon, France
(2) Univ Lyon, Ens de Lyon, Univ Claude Bernard, CNRS, Laboratoire de Physique, F-69342 Lyon, France

Abstract

PageRank for Semi-Supervised Learning has been shown to leverage the structure of data and a limited number of labeled examples to yield meaningful classification. Despite these successes, classification performance can still be improved, particularly in cases of fuzzy graphs or unbalanced labeled data. To address such limitations, a novel approach based on powers of the Laplacian matrix ($L^\gamma$), referred to as $L^\gamma$-PageRank, is proposed. Its theoretical study shows that it operates on signed graphs, where nodes belonging to the same class are more likely to share positive edges while nodes from different classes are more likely to be connected by negative edges. It is shown that by selecting an optimal $\gamma$, classification performance can be significantly enhanced. A procedure for the automated estimation of the optimal $\gamma$, from a single observation of the data, is devised and assessed. Experiments on several datasets demonstrate the effectiveness of both $L^\gamma$-PageRank classification and the optimal $\gamma$ estimation.

Keywords: Semi-Supervised Learning; PageRank; Laplacian powers; Diffusion on graphs; Signed graphs; Optimal tuning; MNIST

1 Introduction

1.1 Context

Graph-based Semi-Supervised Learning (G-SSL) is an important modern tool for classification. While Unsupervised Learning relies entirely on the data structure and Supervised Learning demands extensive labeled examples, G-SSL combines a limited number of labeled examples with the data structure to provide satisfactory results. This makes the field of G-SSL of utmost importance, as nowadays large and structured datasets can be readily accessed, whereas expert annotations may be hard to obtain. Examples where G-SSL provides state-of-the-art results are vast, ranging from the classification of BitTorrent contents and users [1], to text categorization [2], medical diagnosis [3], and zombie hunting under the BGP protocol [4]. Algorithmically, PageRank constitutes the reference tool in G-SSL. It has spurred a deluge of theory [5, 6, 7, 8], applications [9, 10, 1, 4] and implementations [11, 12]. Despite these successes, the performance of G-SSL can still be improved, particularly for fuzzy graphs or unbalanced labeled data, two situations that we aim to address in this work.

1.2 Related works

In graphs, a ground truth class is represented by a subset of the graph nodes. Thus, the classification challenge corresponds to finding the associated binary partition of the graph vertices. If the data is structured, then the class forms a cluster, i.e., a densely and strongly connected graph region that is weakly connected to the rest of the graph. This is exploited by G-SSL methods, which essentially amount to diffusing information placed on the labeled nodes of the class through the graph, expecting a concentration of information within the class that reveals its members. Among the family of G-SSL propositions obeying this rationale [13, 14, 15], PageRank is considered the state-of-the-art approach in terms of performance, algorithms and theoretical understanding. The PageRank algorithm can be interpreted as random walkers that start from the labeled points and, at each step, either diffuse to an adjacent node or restart to the starting point with a fixed restarting probability. In the limit of infinitely many steps, each node is endowed with a score proportional to the number of visits it receives. Thus, vertices of the targeted class are expected to get larger scores, as walkers get trapped for a long time by its connected structure. The capacity of PageRank to confine the random walks within the class depends on a topological parameter known as the Cheeger ratio, or conductance, measuring the ratio of external to internal connections of the class. More precisely, it is shown in [11] that the probability of a PageRank random walker leaving the class is upper bounded by its Cheeger ratio. In other terms, a small Cheeger ratio designates a strongly disconnected cluster that PageRank can eventually easily detect. Based on the scores, a binary partitioning via a sweep-cut procedure allows an estimate of the class to be retrieved. This procedure is guaranteed to return an estimate with a small Cheeger ratio if a sharp drop in magnitude appears in the sorted scores; the estimate is then potentially a good estimation of the ground truth [12]. In [16], an issue affecting G-SSL methods, coined the 'curse of flatness', was highlighted. That work proposes to extend PageRank by iterating the random walk Laplacian in the PageRank solution, as a means to enforce Sobolev regularity on the vertex scores and amend the aforementioned problem. However, with this approach, guarantees that a sweep-cut still leads to a meaningful clustering remain unproven, and it can be given neither a diffusion nor a topological interpretation, thus preventing insights into the properties and quality of the partitions it retrieves. This makes it hard to build upon and to address the issues listed above.

1.3 Goals, contributions and outline

In this work, we revisit Laplacian powers as a way to improve G-SSL and to address the issues listed above. We propose a generalization of PageRank based on (not necessarily integer) powers of the combinatorial Laplacian matrix ($L^\gamma$). In contradistinction to [16], our approach (i) provides an explicit closed-form expression of the underlying optimization problem (see Eq. 7); and (ii) admits both a diffusion and a topological interpretation. We show that, for each $\gamma$, a new graph is generated. These new graphs, which we refer to as $L^\gamma$-graphs, reweight the links of the original structure and create edges, which can be positive or negative, between initially far-distant nodes. This topological change has the potential to improve classification, as the signed edges introduce what can be seen as agreements (positive edges) or disagreements (negative edges) between nodes, allowing clusters to be recast as groups of nodes that agree among themselves and disagree with the rest of the graph. This paper investigates the potential of these $L^\gamma$-graphs to better delineate a targeted class, compared to PageRank. The theoretical analysis of our proposition allows us to extend the Cheeger ratio to the $L^\gamma$-graphs and to prove that if there is an $L^\gamma$-graph in which the targeted class has a smaller Cheeger ratio, then we can more accurately identify it with our generalized $L^\gamma$-PageRank procedure using the sweep-cut technique. Then, by means of numerical investigations, we point out the existence of an optimal $\gamma$ value that maximizes performance. Finally, we propose an algorithm to estimate the optimal $\gamma$ directly from the graph and the labeled points.

The paper is organized as follows: Section 2 sets definitions and recalls classical results on G-SSL. Section 3 presents the main contributions of the paper: Section 3.1 introduces the $L^\gamma$-graphs; Section 3.2 defines $L^\gamma$-PageRank and provides its theoretical analysis; Section 3.3 discusses the existence of an optimal $\gamma$ and its estimation. Section 4 shows the improvements in classification performance permitted by $L^\gamma$-PageRank on several real-world datasets commonly used in classification, as well as the relevance of the estimation procedure for the optimal $\gamma$ tuning.

2 State of the art

2.1 Preliminaries

Let $G = (V, E, w)$ denote a weighted undirected graph with no self-loops, in which $V$ refers to the set of vertices, of cardinality $N$; $E$ denotes the set of edges, where a connected pair of vertices $(u, v)$, denoted $u \sim v$, implies $w(u, v) > 0$; and $w$ is the weight function. The graph adjacency matrix is denoted by $W$, with $W_{uv} = w(u, v)$ if $u \sim v$ and $W_{uv} = 0$ otherwise. For a vertex $u$, we let $d_u = \sum_v W_{uv}$ denote its degree and $D$ be the diagonal matrix of degrees. The geodesic distance between two vertices is the length of a shortest path connecting them. Given a set of nodes $S \subseteq V$, its indicator function takes the value 1 on the nodes of $S$ and 0 elsewhere. The volume of $S$ is defined as $\mathrm{vol}(S) = \sum_{u \in S} d_u$, and we refer to the volume of the entire graph by $\mathrm{vol}(V)$. Graph signals, i.e. real-valued functions on the vertices, are represented as column vectors whose $u$-th entry is the signal value at node $u$; we will also use the sum of a signal's values over a set of nodes. We denote by $L = D - W$ the combinatorial graph Laplacian which, by construction, is a real symmetric matrix and thus admits an eigendecomposition with orthonormal eigenvectors and real eigenvalues. The positivity of the Dirichlet form $f^{T} L f = \sum_{u \sim v} w(u, v) (f_u - f_v)^2 \geq 0$ implies that $L$ has non-negative eigenvalues.

A random walk on a graph is a Markov chain in which the nodes form the state space. When a walker is located at node $u$ at a given time step, it moves at the next step to a neighbor $v$ with probability $W_{uv}/d_u$. If a graph signal represents the distribution of the random walk starting point, then applying the random walk transition matrix the corresponding number of times yields the distribution of the walker position at any later time. Independently of the starting distribution, if the graph is connected and not bipartite, the random walk converges to a stationary distribution in which the probability of each node is proportional to its degree, $d_u/\mathrm{vol}(V)$.
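To make the diffusion concrete, the following minimal sketch (Python/numpy; the function and variable names are ours, not the paper's) builds the row-stochastic transition matrix from an adjacency matrix and iterates a starting distribution, which, on a connected non-bipartite graph, approaches the degree-proportional stationary distribution.

import numpy as np

def random_walk_distribution(W, p0, steps):
    """Distribution of a random walker after a given number of steps.

    W     : (N, N) symmetric adjacency matrix with non-negative weights.
    p0    : (N,) starting distribution (sums to 1).
    steps : number of steps of the walk.
    """
    d = W.sum(axis=1)                   # node degrees
    P = W / d[:, None]                  # transition matrix: P[u, v] = w(u, v) / d_u
    p = p0.copy()
    for _ in range(steps):
        p = P.T @ p                     # one step of the walk (column-vector convention)
    return p

# Example on a triangle graph: the walk converges to d / vol(V) = (1/3, 1/3, 1/3).
W = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])
print(random_walk_distribution(W, np.array([1., 0., 0.]), steps=50))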

Clustering is the search for groups of nodes that are strongly connected among themselves and weakly connected to the rest of the graph. The Cheeger ratio is a metric that measures the ratio of external to internal connections of a group of nodes, thus assessing its pertinence as a cluster while penalizing uninteresting solutions that may otherwise fit the cluster criteria, such as isolated nodes linked by a few edges. It is defined as follows.

Definition 1.

For a set of nodes $S \subseteq V$, the Cheeger ratio, or conductance, of $S$ is defined as:

(1)

Thus, we define clustering as finding the binary partition of the graph vertices into $S$ and its complement such that $S$ has a low Cheeger ratio.
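Since the formula of Eq. (1) did not survive extraction, the sketch below uses the common conductance convention, the cut between $S$ and its complement divided by the smaller of the two volumes; if the paper's exact definition differs (e.g. using $\mathrm{vol}(S)$ alone in the denominator), only the last line needs to change.

import numpy as np

def cheeger_ratio(W, S):
    """Conductance of the node set S in the graph with adjacency matrix W.

    W : (N, N) symmetric adjacency matrix.
    S : boolean mask of length N selecting the candidate cluster.
    """
    S = np.asarray(S, dtype=bool)
    cut = W[S][:, ~S].sum()            # total weight of edges leaving S
    vol_S = W[S].sum()                 # sum of degrees inside S
    vol_Sc = W[~S].sum()               # sum of degrees outside S
    return cut / min(vol_S, vol_Sc)    # assumed denominator; adapt to Eq. (1) if needed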

2.2 PageRank-based Semi-Supervised Learning

Let a subset of nodes be tagged as belonging to the ground truth class, and let the label vector be the indicator function of this labeled subset, i.e. equal to 1 on the tagged nodes and 0 otherwise. The PageRank G-SSL is defined as the solution to the optimization problem [15]:

(2)

Optimization problem (2) can be seen as the search for a graph signal that is smooth, in the sense that strongly connected nodes should have similar values (left term), while the labeled data is respected (right term); a regularization parameter tunes the trade-off between both terms. Notably, problem (2) is convex, with closed-form solution given by [15]:

(3)

We present the PageRank solution in this form as it will simplify derivations in the remainder of the paper, but it is not hard to rewrite (3) into its more popular version, parametrized by a restarting probability. The latter helps to expose the connection between PageRank and diffusion processes. Namely, it corresponds to the equilibrium state of a random walk that decides either to continue the walk or to restart to the starting distribution, with respective probabilities determined by the regularization parameter. As the starting distribution is a combination of the labeled points, it is clear that the PageRank score at a particular node is proportional to the probability of finding a walker, at equilibrium, at this node. PageRank diffusion satisfies the following properties [17]: (i) mass preservation: the total score equals the total initial label mass; (ii) stationarity: the stationary distribution of the random walk is a fixed point of the diffusion; and (iii) limit behavior: the solution tends to the labels in one limit of the regularization parameter and to the stationary distribution in the other.
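As the closed form of Eq. (3) was lost in extraction, the sketch below uses one standard PageRank G-SSL closed form that is consistent with the diffusion description above (it is algebraically equivalent to a restarting random walk with restart probability mu/(1+mu)); the exact normalization used in the paper may differ.

import numpy as np

def pagerank_ssl(W, y, mu):
    """PageRank G-SSL scores (one common closed form; Eq. (3) may normalize differently).

    W  : (N, N) symmetric adjacency matrix.
    y  : (N,) label indicator vector (1 on labeled nodes, 0 elsewhere).
    mu : regularization parameter balancing smoothness and label fidelity.
    """
    d = W.sum(axis=1)
    D = np.diag(d)
    L = D - W                                      # combinatorial Laplacian
    # Equivalent to alpha * (I - (1 - alpha) * W @ inv(D))^{-1} y with alpha = mu / (1 + mu).
    return mu * D @ np.linalg.solve(mu * D + L, y)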

In [11], it is shown that the behavior of this type of random walks is tightly related to the cluster structure of graphs. This connection between PageRank and clustering is quantified in the following result.

Lemma 1.

[11] Let $S$ be an arbitrary set with $\mathrm{vol}(S) \leq \mathrm{vol}(V)/2$. For a labeled point placed at a node selected with probability proportional to its degree in $S$, the PageRank satisfies

(4)

This lemma implies that if we apply the PageRank diffusion to the labels of $S$ and $S$ has a small Cheeger ratio, then the probability of finding a walker outside $S$ is small and the nodes with the largest PageRank values should index $S$. This is formalized in [11] and [12]. The former shows that a proxy of $S$ with small Cheeger ratio can be found by looking for regions of high concentration of PageRank mass. The latter improves that result, showing that such a set can be found more easily by looking for a sharp drop in the PageRank scores. To state their result, we first introduce the sweep-cut technique.

Definition 2.

A sweep-cut is a procedure to retrieve a partition from the PageRank vector. The procedure is as follows:

  • Rearrange the vertices in descending order of their PageRank values, yielding a permutation of the vertex set.

  • For each position j, let the candidate set be the set of vertices indexed by the first j elements of the permutation.

  • Compute the Cheeger ratio of every candidate set.

  • Retrieve the candidate set achieving the smallest Cheeger ratio.
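A minimal sketch of this procedure is given below; it reuses the cheeger_ratio helper sketched above (some formulations sort the scores after a degree normalization, which would only change the ordering line).

import numpy as np

def sweep_cut(W, scores):
    """Sweep-cut: scan prefixes of the score ordering and keep the best-conductance set."""
    order = np.argsort(-scores)                # vertices in descending score order
    N = len(scores)
    S = np.zeros(N, dtype=bool)
    best_set, best_h = None, np.inf
    for j in range(N - 1):                     # candidate sets are the prefixes of the ordering
        S[order[j]] = True
        h = cheeger_ratio(W, S)                # conductance of the current prefix
        if h < best_h:
            best_h, best_set = h, S.copy()
    return best_set, best_h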

Now, we state the result of [12], showing that if there is a sharp drop in the sorted scores at a given position, then the corresponding sweep set has a small Cheeger ratio.

Lemma 2.

[12] Let , be any index in and denote the PageRank restarting probability. Let be the numerator of the Cheeger ratio. Then, satisfies one of the following: (a) ; or (b) there is some index such that and

In other words, this lemma implies that either the sweep set has a small Cheeger ratio, or there is no sharp drop at that position.

2.3 Generalization to multiple classes

PageRank G-SSL can be readily generalized to a multi-class setting in which labeled points from several classes are used to partition the graph into those classes. Let the labeled points of each class be encoded as an indicator vector and placed as the corresponding column of a label matrix. The multi-class PageRank is then computed in matrix form [18], the classification matrix being given in closed form by applying the PageRank operator to the label matrix. Each node is thus endowed with one score per class and is assigned to the class for which its score is largest. In [18], the following rule explaining the classification is provided: in terms of the probability that a random walk started at a labeled node reaches a given node before restarting, a node is assigned to the class that satisfies the inequality

(5)

This inequality highlights an important issue of the multi-class approach, as the sums depend on the cardinality of the sets of labeled points. Thus, an unbalanced number of labeled points across classes can potentially bias the classification.
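For illustration, a sketch of the multi-class classification rule (reusing the closed form assumed in the pagerank_ssl sketch of Section 2.2) could read as follows; note that the columns of the label matrix inherit the cardinalities of the labeled sets, which is precisely the source of the bias discussed above.

import numpy as np

def multiclass_pagerank(W, labeled, mu):
    """Multi-class PageRank G-SSL (sketch).

    W       : (N, N) symmetric adjacency matrix.
    labeled : dict {class_id: list of labeled node indices}.
    mu      : regularization parameter.
    Returns the predicted class of every node.
    """
    N = W.shape[0]
    D = np.diag(W.sum(axis=1))
    L = D - W
    classes = sorted(labeled)
    Y = np.zeros((N, len(classes)))                # one indicator column per class
    for k, c in enumerate(classes):
        Y[labeled[c], k] = 1.0
    F = mu * D @ np.linalg.solve(mu * D + L, Y)    # one PageRank solve per column
    return np.array(classes)[F.argmax(axis=1)]     # assign each node to its highest-scoring class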

3 $L^\gamma$-PageRank for Semi-Supervised Learning

3.1 The $L^\gamma$-graphs

(a) Positive edges
(b) Negative edges
Figure 1: Exemplification of the topology emerging from $L^2$ on a realization of the Planted Partition model. The positive edges coincide with the original structure but are reweighted. The negative ones appear between nodes that are initially at a 2-hop distance. It can be seen that a considerable amount of negative edges appears between clusters, bringing the potential to boost their detection.

In this work, we propose to change the graph topology in which the problem is solved as a means to improve classification. We induce such a change by considering powers of the Laplacian matrix, noting that the operator $L^\gamma$, for $\gamma > 0$, generates a new graph for every fixed $\gamma$ value. More precisely, the Laplacian structure indicates that $L^\gamma$ codes for a new graph: its diagonal plays the role of a generalized degree matrix and its negated off-diagonal part plays the role of a generalized adjacency matrix, and the Laplacian property is preserved since the rows of $L^\gamma$ sum to zero. We refer to such graphs as $L^\gamma$-graphs.

The $L^\gamma$-graphs reweight the edges of the original structure and create links between originally far-distant nodes. Indeed, for integer values of $\gamma$ the new edges can be related to paths of different lengths. To get a grasp of this, let us take the topology emerging from $L^2$ as an example: for $\gamma = 2$, the generalized degrees and edge weights of the emanating graph can be written explicitly in terms of the original degrees and of the weights of 1-hop and 2-hop paths, showing that, in $L^2$, nodes originally connected get their link reweighted (still remaining positive) while those at a 2-hop distance become linked by a negatively weighted edge.
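The construction of an $L^\gamma$-graph can be sketched as follows (Python/numpy), computing the fractional matrix power through the eigendecomposition of $L$ and splitting the result into its diagonal (generalized degrees) and negated off-diagonal (generalized, possibly signed, adjacency) parts; the names D_gamma and W_gamma are ours.

import numpy as np

def lgamma_graph(W, gamma):
    """Build the L^gamma-graph from the adjacency matrix W of the original graph."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    lam, U = np.linalg.eigh(L)                    # L is symmetric positive semi-definite
    lam = np.clip(lam, 0.0, None)                 # guard against tiny negative round-off
    L_gamma = U @ np.diag(lam ** gamma) @ U.T     # (not necessarily integer) matrix power
    D_gamma = np.diag(np.diag(L_gamma))           # generalized degree matrix (diagonal part)
    W_gamma = D_gamma - L_gamma                   # generalized, signed adjacency matrix
    return L_gamma, D_gamma, W_gamma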

This change in topology has the potential to impact clustering, as the emergence of positive and negative edges opens the door to an interpretation in terms of agreements (positive edges) or disagreements (negative edges) between data points. Hence, clustering can be recast to assume that nodes agreeing should belong to the same cluster and nodes disagreeing should belong to different ones. From this perspective, revisiting the case of $\gamma = 2$ shows that it is indeed a potentially good topology since, for several graphs, vertices at a 2-hop distance are more likely to lie in different clusters than in the same one, thus creating a considerable amount of disagreements between clusters that may enhance their separability. This idea is illustrated in Figure 1, where, for a realization of the planted partition model, we show that with $\gamma = 2$ a large number of negative edges appears between clusters.

Thus, in the remainder of the paper we investigate whether, for a targeted set of nodes, its detection can be enhanced by solving the clustering problem in one of these new graphs.

Remark 1.

The graphs emerging in the regime $\gamma \in (0, 1)$ have already been studied in [19, 20, 21], where it is shown that such graphs remain within the class of graphs with only positive edges, hence preserving the random walk framework. In these works, the graphs were shown to embed so-called Lévy flights, permitting random walkers to perform long-distance jumps in a single step.

3.2 The $L^\gamma$-PageRank method

The signed graphs emerging from $L^\gamma$ preclude the employment of random walk-based approaches to find clusters, as 'negative transitions' appear. Thus, the $L^\gamma$-graphs call for a technique able to find clusters in such graphs. In this subsection, we introduce $L^\gamma$-PageRank, a generalization of PageRank that finds clusters on the $L^\gamma$-graphs. Further, we analyze its theoretical properties and clustering capabilities.

For our analysis, it is useful to first extend some of the topological definitions to the $L^\gamma$-graphs. We define the generalized volume of a set as the sum of the generalized degrees (the diagonal entries of $L^\gamma$) of its members, and the generalized stationary distribution as the vector of generalized degrees normalized by the generalized volume of the entire graph. It is important to stress that the generalized degrees are non-negative, being the diagonal entries of a positive semi-definite matrix. Thus, for all $\gamma$, the generalized volume and the generalized stationary distribution are non-negative quantities.

The Cheeger ratio metric, lacking the ability to account for the sign of edges, cannot be employed to assess the presence of clusters in the $L^\gamma$-graphs. Thus, we generalize the Cheeger ratio definition to the new graphs as follows.

Definition 3.

For a set of nodes $S \subseteq V$, the generalized Cheeger ratio, or generalized conductance, of $S$ is defined as

(6)

This generalization of the Cheeger ratio is mathematically sound. First, it is a non-negative quantity, both its numerator and its denominator being non-negative. Second, the set attaining the minimum value coincides with a sensible clustering. To show the latter, let the edges of the $L^\gamma$-graph be split according to their sign, and consider the sums of agreements (positive weights) within $S$, of agreements between $S$ and its complement, of disagreements (negative weights) within $S$, and of disagreements between $S$ and its complement. Then we state the following lemma.

Lemma 3.

A set minimizing the generalized Cheeger ratio also maximizes the agreements within the set and the disagreements between the set and the rest of the graph.

The proof is provided in Appendix A.1.

Lemma 3 shows that, for clustering in the $L^\gamma$-graphs, it is sensible to search for sets with small generalized Cheeger ratio, as those sets have strong between-cluster disagreements and strong within-cluster agreements.
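Since the exact expression of Eq. (6) was lost in extraction, the sketch below assumes that the numerator is the signed cut (the quadratic form of $L^\gamma$ evaluated at the set indicator, which sums the signed weights of the edges crossing the partition) and that the denominator is the smaller of the two generalized volumes; adapt it if the paper's definition differs.

import numpy as np

def generalized_cheeger_ratio(L_gamma, S):
    """Generalized Cheeger ratio of the node set S on the L^gamma-graph (assumed form)."""
    S = np.asarray(S, dtype=bool)
    chi = S.astype(float)
    d_gamma = np.diag(L_gamma)                    # generalized degrees (non-negative)
    signed_cut = chi @ L_gamma @ chi              # sums the signed weights of crossing edges
    vol_S = d_gamma[S].sum()
    vol_Sc = d_gamma[~S].sum()
    return signed_cut / min(vol_S, vol_Sc)        # assumed denominator; adapt to Eq. (6) if needed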

Now, we introduce the $L^\gamma$-PageRank formulation. Departing from the optimization problem in (2), we revamp PageRank to operate on the $L^\gamma$ topology as follows.

Definition 4.

The $L^\gamma$-PageRank G-SSL is defined as the solution to the optimization problem:

(7)

The two following lemmas show that, for any $\gamma$, the $L^\gamma$-PageRank solution exists in closed form and preserves the PageRank properties.

Lemma 4.

Let $\gamma > 0$. Then, problem (7) is convex, with closed-form solution given by

(8)

The proof is provided in Appendix A.2.
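A sketch of the solver is given below, assuming the closed form f = mu * D_gamma (mu * D_gamma + L^gamma)^{-1} y, which reduces to the standard PageRank sketch of Section 2.2 when gamma = 1 and is consistent with the properties listed in Lemma 5; the exact normalization of Eq. (8) may differ. It reuses the lgamma_graph helper of Section 3.1.

import numpy as np

def lgamma_pagerank(W, y, gamma, mu):
    """L^gamma-PageRank scores (assumed closed form; Eq. (8) may normalize differently)."""
    L_gamma, D_gamma, _ = lgamma_graph(W, gamma)
    return mu * D_gamma @ np.linalg.solve(mu * D_gamma + L_gamma, y)

The resulting scores can then be fed to the sweep-cut of Definition 2, with the Cheeger ratio replaced by its generalized counterpart.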

Remark 2.

Eq. (8) emphasizes the difference between our approach and the one in [16]: the latter proposes to iterate the random walk Laplacian, raised to an integer power, directly in the G-SSL solution, a construction for which the formulation of an underlying optimization problem remains unknown.

Remark 3.

The solution of $L^\gamma$-PageRank in Eq. (8) can easily be cast as a low-pass graph filter, allowing a fast and distributed approximation via Chebyshev polynomials [22].
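A sketch of such an approximation is given below; for simplicity it filters the labels with the spectral response mu / (mu + lambda^gamma), i.e. it approximates mu * (mu*I + L^gamma)^{-1} y using only matrix-vector products with L (the generalized-degree normalization assumed in the previous sketch is omitted here), so that W can be sparse and the computation distributed.

import numpy as np
from numpy.polynomial import chebyshev as cheb

def lgamma_pagerank_chebyshev(W, y, gamma, mu, order=60):
    """Chebyshev-polynomial approximation of the (simplified) L^gamma-PageRank filter."""
    d = np.asarray(W.sum(axis=1)).ravel()
    lam_max = 2.0 * d.max()                              # cheap upper bound on the spectrum of L
    g = lambda lam: mu / (mu + lam ** gamma)             # assumed low-pass spectral response
    # Chebyshev coefficients of g on [0, lam_max], mapped to the reference interval [-1, 1].
    nodes = np.cos(np.pi * (np.arange(order + 1) + 0.5) / (order + 1))
    c = cheb.chebfit(nodes, g((nodes + 1.0) * lam_max / 2.0), order)

    def L_shifted(x):                                    # applies L' = 2 L / lam_max - I to a vector
        return 2.0 * (d * x - W @ x) / lam_max - x

    t_prev, t_curr = y, L_shifted(y)                     # three-term Chebyshev recurrence on vectors
    f = c[0] * t_prev + c[1] * t_curr
    for k in range(2, order + 1):
        t_next = 2.0 * L_shifted(t_curr) - t_prev
        f += c[k] * t_next
        t_prev, t_curr = t_curr, t_next
    return f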

Lemma 5.

Let $\gamma > 0$. The $L^\gamma$-PageRank solution in (8) satisfies the following properties: (i) mass preservation: the total score equals the total initial label mass; (ii) stationarity: if the labels are proportional to the generalized stationary distribution, so is the solution; and (iii) limit behavior: the solution tends to the labels in one limit of the regularization parameter and to the generalized stationary distribution in the opposite limit.

The proof is provided in Appendix A.3.

The previous lemmas are important because they show that our generalization, for any $\gamma$, is a well-posed problem. Indeed, the properties of Lemma 5 imply that, while not necessarily modeled by random walkers, $L^\gamma$-PageRank remains a diffusion process having the generalized stationary distribution as stationary state and a diffusion rate controlled by the regularization parameter.

Our next result shows that it is hard for such a diffusion process to escape clusters in the $L^\gamma$-graphs.

Lemma 6.

Let $\gamma > 0$ and let $S$ be an arbitrary set whose generalized volume is at most half of the generalized volume of the graph. For a labeled point placed at a node selected with probability proportional to its generalized degree in $S$, $L^\gamma$-PageRank satisfies

(9)

The proof is provided in Appendix A.4.

Lemma 6 admits an interpretation similar to that of Lemma 1. Namely, if $L^\gamma$-PageRank is applied to the labeled points of some set $S$ with small generalized Cheeger ratio, then the diffusion is confined to $S$ and the score values outside of $S$ are expected to be small. Thus, by looking at the nodes with the largest score values we should be able to retrieve a good estimation of $S$. If such a score concentration phenomenon takes place, then a sharp drop must appear after sorting the $L^\gamma$-PageRank scores in descending order. We will use the following lemma to show that if a sharp drop is present, then the sweep-cut procedure applied to the $L^\gamma$-PageRank vector retrieves a partition with small generalized Cheeger ratio.

Lemma 7.

Let the vertices be reordered by the sweep-cut procedure applied to the $L^\gamma$-PageRank vector, and consider the sweep set associated with a given position of this ordering. Then, this set satisfies the inequality:

(10)

The proof is provided in Appendix A.5.

Thus, the generalized Cheeger ratio of the sweep set is small whenever the right-hand side of the inequality is not much larger than its left-hand side. In the inequality above, two cases can be distinguished: (a) the sorted score vector is approximately constant; and (b) the sorted scores exhibit a drop across the sweep position. The former can only occur when the solution is close to the generalized stationary distribution, and clearly no cluster can be retrieved from that vector, as confirmed by the bound growing unbounded. The latter case is what we coin a sharp drop at the sweep position. In such a case, the bound is controlled by the difference of scores across the drop which, due to the mass preservation property and the sharp-drop assumption, should be small, thus granting that the sweep set has a small generalized Cheeger ratio.

Discussion. The previous results show that $L^\gamma$-PageRank is a sensible tool to find clusters in the $L^\gamma$-graphs, i.e. groups of nodes with small generalized Cheeger ratio. Thus, revisiting the classification setting in which we target a specific group of nodes, we have that the smaller its generalized Cheeger ratio, the better the $L^\gamma$-PageRank method can recover it. This observation, in addition to noting that standard PageRank emerges as the particular case $\gamma = 1$, indicates that we should be able to enhance the performance of G-SSL in the detection of the targeted class by finding the graph, i.e. the $\gamma$ value, in which its generalized Cheeger ratio is smaller than in the original graph.

3.3 The selection of $\gamma$

3.3.1 Case of $\gamma = 2$: analytic study

In Section 3.1, it was argued that the topology emerging from $L^2$ places a negatively weighted link between nodes at a 2-hop distance, thus carrying the potential to place a large amount of disagreements between clusters and enhance their separability. Our next result formalizes this claim, demonstrating that, on graphs drawn from the Planted Partition model, the $L^2$-graph is expected to improve the generalized Cheeger ratio.

Theorem 1.

Consider a Planted Partition model with given intra-cluster and inter-cluster connection parameters and two clusters of equal size. Then, in the limit of a large number of nodes, we have that

(11)

where .

The proof is provided in Appendix A.6.

Corollary 1.

If the inter-cluster connection parameter does not exceed the intra-cluster one, then the generalized Cheeger ratio of a planted cluster in the $L^2$-graph is no larger than its Cheeger ratio in the original graph, with equality occurring when the two parameters are equal.

The proof is provided in Appendix A.7.

Theorem 1 and Corollary 1 open the door to investigating, on arbitrary graphs, in which cases the $L^2$-graph improves the generalized Cheeger ratio of a set. In the next proposition, we provide a sufficient condition under which the $L^2$-graph improves the generalized Cheeger ratio of a set.

Proposition 1.

Consider the mean degree of the nodes of $S$. A sufficient condition on $S$ so that its generalized Cheeger ratio in the $L^2$-graph is smaller than its Cheeger ratio in the original graph is

(12)

The proof is provided in Appendix A.8.

This proposition points in the same direction as Theorem 1, indicating that graphs having a cluster structure are bound to benefit from $\gamma = 2$. Concretely, the first term on the right-hand side of the inequality searches, among all the nodes of $S$, for the one having the maximum number of connections towards the complement of $S$. The second term does the reverse for the nodes of the complement. Hence, asking the nodes of $S$ to have, on average, more connections than the maximum possible boundary implies that $S$ should have a cluster structure.

3.3.2 An algorithm for the estimation of the optimal $\gamma$

(a) Generalized Cheeger ratio as a function of $\gamma$
(b) Optimal $\gamma$ on subsets of the class
(c) Optimal $\gamma$ estimated by the algorithm
Figure 2: Generalized Cheeger ratio of the targeted class as a function of $\gamma$. For the plot, the class is a digit of the MNIST dataset.

Numerical experiments show that increasing $\gamma$ can further decrease the generalized Cheeger ratio up to a point where it starts increasing again. We show an example of this phenomenon in Figure 2(a), displaying the evolution of the generalized Cheeger ratio as a function of $\gamma$ when the targeted class corresponds to a digit of the MNIST dataset. From the figure, it is evident that an optimal value appears, denoted $\gamma_{opt}$, raising the question of how to find such a value. Since the behavior of the generalized Cheeger ratio depends on the targeted class, which is unknown in practice, neither its derivative nor a greedy search can be employed to find $\gamma_{opt}$. A second question that arises is whether the optimal value changes drastically or smoothly with changes in the targeted class. We perform the following test: for a given class (the same MNIST digit), we remove some percentage of its nodes and record the optimal $\gamma$ value on the resulting subsets. More precisely, recall that the label vector is the indicator function of the class; hence we randomly select some percentage of its non-zero entries, set them to zero, and obtain a new indicator function indexing a subset of the class. Mean results are evaluated on the original curve and displayed in Figure 2(b). The figure suggests that it is not necessary to know the entire class to find a proxy of $\gamma_{opt}$: it suffices to know a subset of it. Based on this observation, we propose Algorithm 1 for the estimation of $\gamma_{opt}$. The rationale of the algorithm is to exploit the labeled points and the graph to find a proxy of the class on which the estimate can be computed. The procedure consists in letting walkers, started from the labeled points, run for a number of steps determined by the maximum geodesic distance between the labels. This allows the walkers to explore the class without escaping too far from it. After running the walk, we list the nodes in descending order according to the probability of finding a walker at each node. We take the first element of the list (the one where a walker is most likely to be found), add it to the proxy set and remove it from the list, so that the former second element becomes the first. We repeat the procedure until the probability of finding a walker within the proxy set reaches 0.7.

Input: the graph, the labeled points, and a grid of candidate $\gamma$ values.
Output: the estimate $\gamma_{est}$.
Compute the maximum geodesic distance t between the labeled points.
Set the seed distribution uniformly on the labeled points.
Set the proxy set to the empty set.
Run a t-step random walk from the seed distribution.
Reorder the vertices in descending order of the probability of finding the walker at each node.
for each vertex taken in this order do
     if the probability mass accumulated in the proxy set is smaller than 0.7 then
          Add the vertex to the proxy set.
     else
          Exit the loop.
     end if
end for
Compute the generalized Cheeger ratio of the proxy set for every $\gamma$ in the grid.
Return $\gamma_{est}$, the grid value attaining the minimum.
Algorithm 1 Estimation of $\gamma_{opt}$
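A possible Python implementation of Algorithm 1, following the description above (the 0.7 mass threshold is the one stated in the text; the code reuses the lgamma_graph and generalized_cheeger_ratio sketches given earlier, and the remaining names are ours), is:

import numpy as np
from itertools import combinations
from scipy.sparse.csgraph import shortest_path

def estimate_gamma(W, labeled_nodes, gamma_grid, mass_threshold=0.7):
    """Estimate the optimal gamma from the graph and the labeled points (sketch of Algorithm 1)."""
    N = W.shape[0]
    d = W.sum(axis=1)
    # Number of walk steps = maximum geodesic (hop) distance between labeled points.
    hops = shortest_path(W != 0, unweighted=True)
    pair_dists = [hops[u, v] for u, v in combinations(labeled_nodes, 2)]
    t = int(max(pair_dists)) if pair_dists else 1
    # t-step random walk seeded uniformly on the labeled points.
    p = np.zeros(N)
    p[labeled_nodes] = 1.0 / len(labeled_nodes)
    P = W / d[:, None]
    for _ in range(t):
        p = P.T @ p
    # Grow the proxy set until it captures `mass_threshold` of the walker probability.
    S_hat = np.zeros(N, dtype=bool)
    mass = 0.0
    for u in np.argsort(-p):
        if mass >= mass_threshold:
            break
        S_hat[u] = True
        mass += p[u]
    # Return the grid value minimizing the generalized Cheeger ratio of the proxy set.
    scores = [generalized_cheeger_ratio(lgamma_graph(W, g)[0], S_hat) for g in gamma_grid]
    return gamma_grid[int(np.argmin(scores))]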

In Table 1, we evaluate the performance of Algorithm 1 on the estimation of $\gamma_{opt}$ for all the digits of the MNIST dataset. The first row displays, as $\gamma_{opt}$, the value of $\gamma$ (from the input grid) attaining the minimum generalized Cheeger ratio. The second row displays, as $\gamma_{est}$, the estimate returned by the algorithm. The last three rows show the value of the generalized Cheeger ratio evaluated at $\gamma_{opt}$, $\gamma_{est}$ and $\gamma = 1$, respectively. The estimator finds values of $\gamma$ whose Cheeger ratios are: (a) significantly smaller than those at $\gamma = 1$; and (b) close to the optimal.

Digit                              1             2             3             4             5             6             7             8             9
$\gamma_{opt}$                     7.0           3.0           7.0           3.2           3.2           7.0           7.0           3.2           4.2
$\gamma_{est}$                     5.45 (0.15)   3.10 (0.14)   6.41 (0.11)   4.92 (0.16)   3.20 (0.14)   6.04 (0.15)   4.98 (0.17)   4.40 (0.18)   5.08 (0.15)
Gen. Cheeger at $\gamma_{opt}$     0.065         0.166         0.035         0.141         0.131         0.011         0.052         0.116         0.135
Gen. Cheeger at $\gamma_{est}$     0.073 (9e-4)  0.174 (8e-4)  0.041 (1e-3)  0.185 (4e-3)  0.148 (2e-3)  0.017 (1e-3)  0.074 (2e-3)  0.142 (2e-3)  0.149 (9e-4)
Gen. Cheeger at $\gamma = 1$       0.175         0.248         0.216         0.258         0.233         0.107         0.203         0.215         0.285
Table 1: Evaluation of Algorithm 1 on the MNIST dataset. Mean values (95% confidence intervals in parentheses) are shown. The graph construction guidelines are provided in Section 4.2. For the experiment, 500 realizations of labeled points and a grid of $\gamma$ ranging from 1 to 7 with a resolution of 0.2 were used.

4 $L^\gamma$-PageRank in practice

4.1 Planted Partition

Experimental setup and goals. In the following experiment, we show that $L^\gamma$-PageRank can increase the performance of G-SSL as the graph approaches the Planted Partition detectability transition. More precisely, it is shown in [23] that the Planted Partition possesses a detectability threshold above which unsupervised methods are unable to retrieve a meaningful clustering: it is possible to recover a partition that is positively correlated with the true one, in an unsupervised manner, only if the difference between the mean intra-cluster and inter-cluster degrees is sufficiently large compared to the mean degree, and impossible otherwise. As for G-SSL, the work in [24] showed that such a threshold can be overcome when a fraction of labeled points is introduced to the task. Nonetheless, the performance of G-SSL drastically degrades when approaching the detectability transition.

The experimental setup is the following: for a given parameter configuration, a realization of the Planted Partition is drawn. Then, 1% of labeled points are sampled at random and the $L^\gamma$-PageRank method is applied for different values of $\gamma$ lying on a discrete grid. The clusters are determined via the sweep-cut procedure, and the best performance is retained. The whole procedure is repeated for 10 different realizations of the labeled points. Finally, all the preceding steps are repeated for 100 graph realizations. Performance is assessed in terms of the Matthews Correlation Coefficient (MCC) [25], so that a value of 1 implies perfect agreement with the true partition and 0 a random decision.
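A condensed version of this experiment could be sketched as follows; the planted partition parameters below are placeholders (not the values used in the paper), and the lgamma_pagerank and sweep_cut helpers are the sketches introduced earlier (the sweep here uses the ordinary conductance for brevity).

import numpy as np
import networkx as nx
from sklearn.metrics import matthews_corrcoef

# Placeholder configuration: two planted clusters of 250 nodes each.
G = nx.planted_partition_graph(l=2, k=250, p_in=0.05, p_out=0.02, seed=0)
W = nx.to_numpy_array(G)
truth = np.array([0] * 250 + [1] * 250)

rng = np.random.default_rng(0)
labeled = rng.choice(np.where(truth == 0)[0], size=3, replace=False)   # roughly 1% labeled points
y = np.zeros(W.shape[0])
y[labeled] = 1.0

best = -1.0
for gamma in [1.0, 2.0, 3.0, 4.0]:                       # discrete grid of gamma values
    f = lgamma_pagerank(W, y, gamma=gamma, mu=0.1)       # L^gamma-PageRank scores
    S_hat, _ = sweep_cut(W, f)                           # retrieve the cluster by sweep-cut
    best = max(best, matthews_corrcoef(truth == 0, S_hat))
print("best MCC over the gamma grid:", best)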

Results and discussion. Figure 3 displays the performance of $L^\gamma$-PageRank at recovering the Planted Partition as a function of the ratio between the inter-cluster and intra-cluster connection parameters. Standard PageRank ($\gamma = 1$) performs poorly as the configuration approaches the phase transition (marked by the vertical line), since the Cheeger ratio of the planted clusters becomes large. Clearly, the introduction of $\gamma$ allows the generalized Cheeger ratio to be decreased, which accordingly enhances the clustering performance. Furthermore, the figure verifies that the smaller the generalized Cheeger ratio (right plot), the better $L^\gamma$-PageRank recovers the true partition (left plot). It is important to remark that, for this experiment, while $\gamma = 2$ already shows good improvements, larger values of $\gamma$ keep improving the generalized Cheeger ratio until a saturation plateau is reached, designating a region of optimal $\gamma$ values.

Figure 3: Improved detection of the Planted Partition.

4.2 Real world datasets

Experimental setup and goals. In our following experiment, we assess the performance of $L^\gamma$-PageRank and Algorithm 1 on real-world datasets.

The experimental setup is as follows: graphs are built by connecting the K-nearest neighbors (KNN), with weights computed via a Gaussian kernel of the pairwise distances. For each class, 2% of the points are randomly selected as labeled, $L^\gamma$-PageRank is applied for a grid of $\gamma$ values, partitions are retrieved via the sweep-cut, and the best performance, assessed in terms of MCC, is retained. This procedure is repeated for 100 realizations of labeled points, except for MNIST, for which only 30 realizations are employed. In all cases, classes are balanced in size and the graph construction parameters are selected to provide a good distribution of weights, as follows: (a) MNIST [26]: images of handwritten digits (1 to 9). From the entire dataset, 200 images of each digit are selected and used to build the graph with KNN = 10; (b) Gender Images [27]: images of male and female subjects for gender recognition. From the entire dataset, 200 images of each gender are selected and used to build the graph with KNN = 60; the large value of KNN avoids disconnected components; (c) BBC articles [28]: word-frequency attributes from news media articles. From the entire dataset, 200 business and 200 entertainment articles are used to build the graph with KNN = 5; and (d) Phoneme [29]: five attributes to discern nasal sounds from oral sounds. From the entire dataset, 200 oral and 200 nasal sounds are used to build the graph with KNN = 10.
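The graph construction can be sketched as follows (Python/scikit-learn); the exact kernel normalization used in the paper is not recoverable from the extracted text, so the Gaussian expression below is a common convention and sigma is a dataset-dependent bandwidth.

import numpy as np
from sklearn.neighbors import kneighbors_graph

def knn_gaussian_graph(X, k, sigma):
    """Symmetric KNN graph with Gaussian-kernel weights.

    X     : (n_samples, n_features) data matrix.
    k     : number of nearest neighbors (e.g. 10 for MNIST, 60 for the gender images).
    sigma : Gaussian bandwidth, chosen to give a good spread of weights.
    """
    A = kneighbors_graph(X, n_neighbors=k, mode='distance').toarray()   # KNN distances
    W = np.where(A > 0, np.exp(-A ** 2 / (2 * sigma ** 2)), 0.0)        # Gaussian kernel on distances
    W = np.maximum(W, W.T)                                              # symmetrize
    np.fill_diagonal(W, 0.0)                                            # no self-loops
    return W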

Results and discussion. The classification performance of $L^\gamma$-PageRank on these real-world datasets shows that the introduction of $\gamma$ can significantly improve performance and that, in general, the estimate $\gamma_{est}$ performs close to the optimal value $\gamma_{opt}$. It can also be seen that some datasets are more sensitive to $\gamma$ than others. For instance, for the BBC articles a small increase of $\gamma$ above the standard value improves performance, while pushing $\gamma$ significantly further worsens the classification. On the other hand, the MNIST dataset is less sensitive to $\gamma$, obtaining similar performances over larger variations of $\gamma$.

It is important to stress that, thus far, we have assumed possession of the proper tuning of the diffusion rate (the regularization parameter) that attains the best results. However, when working with real data, clusters may have intricate local structures, e.g. sub-clusters, that play an important role in the way information diffuses and that can make finding the optimal diffusion rate more difficult. As a result, two clusters may have equal Cheeger ratios, yet one of them may be harder to find if its local structure is complex. Digit 8 is an example of this phenomenon, where the mean performance at $\gamma_{est}$ is slightly better than that at $\gamma_{opt}$. This anomaly can be explained as an aftereffect of using a finite grid of diffusion rates: for some realizations of the labeled points, the best diffusion rate at $\gamma_{opt}$ falls in a region not covered by the grid.

4.3 Unbalanced labeled data

Experimental setup and goals. In our last experiment, we show that $L^\gamma$-PageRank, adapted to the multi-class setting described in Section 2.3, can improve the performance of G-SSL in the presence of unbalanced labeled data.

The experimental setup is as follows: graphs with two classes balanced in size are built using the datasets from the preceding experiments. The graph construction parameters follow the guidelines provided in Section 4.2; for the Planted Partition, a fixed configuration of cluster sizes and connection parameters is used. Then, unbalanced labeled points are drawn at random: 2% from one class and 6% from the other. Lastly, $L^\gamma$-PageRank, in the multi-class setting, is applied for a grid of $\gamma$ values and the best performance, assessed by MCC, is recorded. For the Planted Partition, the procedure is repeated over 15 realizations of the labeled points and 100 graph realizations. For the other datasets, 100 realizations of labeled points are employed.

Results and discussion. Table 2 displays the performance of $L^\gamma$-PageRank in the presence of unbalanced labeled data. It is important to stress that, in this framework, a unique value of $\gamma$ is used to retrieve all the clusters at the same time, precluding the per-class notion of an optimal $\gamma$ as defined in Section 3.3. However, one value of $\gamma$ tends to perform better, which we denote as Best $\gamma$. The results confirm that the introduction of $\gamma$ helps to improve classification in the presence of unbalanced labeled data.

                 Planted Partition    MNIST 4vs9          MNIST 3vs8          BBC articles        Gender images       Phoneme
$\gamma = 1$     0.81 (1.1e-2)        0.51 (1.5e-2)       0.70 (1.4e-2)       0.66 (1.8e-2)       0.63 (2.1e-2)       0.44 (2.3e-2)
$\gamma = 2$     0.87 (8.7e-3)        0.56 (1.5e-2)       0.76 (1.2e-2)       0.92 (5.0e-3)       0.73 (1.6e-2)       0.48 (1.4e-2)
Best $\gamma$    0.90 (7.0e-3) [6]    0.57 (1.5e-2) [3]   0.78 (1.2e-2) [4]   0.93 (1.5e-3) [3]   0.75 (1.7e-2) [3]   0.48 (1.4e-2) [1.9]
Table 2: Performance on unbalanced labeled data: each cell reports the MCC, the 95% confidence interval (parentheses) and, for the Best $\gamma$ row, the selected value of $\gamma$ [square brackets].

5 Conclusion

This work proposed $L^\gamma$-PageRank, an extension of PageRank based on (not necessarily integer) powers of the combinatorial Laplacian matrix. Our analysis shows that the added degree of freedom offers more versatility than standard PageRank, providing the potential to address some of the limitations of G-SSL. Precisely, we showed that when clusters are obtained via the sweep-cut procedure, $L^\gamma$-PageRank can significantly outperform standard PageRank. Further, we showed that the multi-class approach also benefits from our proposition, as performance was enhanced in the presence of unbalanced labeled data. These improvements are possible because the $L^\gamma$ operator codes for graphs whose topology can reinforce the separability of clusters. The richness of such graphs comes from the sign of the edges, allowing not only similarities to be coded but also dissemblance between individuals to be emphasized. Thus, while two nodes can only be disconnected in the initial graph, they can 'repulse' each other in these topologies. Notably, we have shown that there is an optimal graph (related to an optimal $\gamma$) on which the classification leads to maximal performance. We proposed a simple yet efficient algorithm to estimate the optimal $\gamma$ and hence determine the best topology for analyzing a given dataset. The procedures proposed in this work open the door to a more in-depth study of the $L^\gamma$-graphs and of what determines their optimal topology. They also pave the way towards the extension of other standard clustering tools, such as Unsupervised Learning via Spectral Clustering, to exploit these richer topologies.

Appendix A Proofs

A.1 Proof of Lemma 3

Proof.

Let . It is easy to show that , which is monotonically increasing with . Thus, the minimizer is the partition that minimizes and consequently . ∎

A.2 Proof of Lemma 4

Proof.

It suffices to show the positive semi-definiteness of the functional and to apply the first-order optimality condition. Let . Then, the left term satisfies . It can be shown that , granting that the right term satisfies . Now, computing the derivative of the functional and setting it equal to zero leads to: . The lemma is proved after isolating the solution. ∎

A.3 Proof of Lemma 5

Proof.

From the proof of Lemma 4 we have that . Then, . Since we have that , proving (i). We prove property (iii) using the same expression. We only develop one of the two limits since the other follows the same steps: taking leads to , whose solution is proportional to . Lastly, we prove (ii) by noting that the operator has a positive real spectrum as it is similar to which is positive semi-definite. Thus, we can use the inverse Laplace transform of the resolvent , which, after using its Taylor expansion, allows us to rewrite the PageRank solution as . If , the previous equation is only non-zero for , proving (ii). ∎

A.4 Proof of Lemma 6

Proof.

Let . Using (8) we can see that

(13)

showing that can be interpreted as when labels are selected with probability proportional to their generalized degree in . Using the fact that

(14)

we express

(15)

The upper bound is thus obtained by substituting and summing over .

(16)

Employing property (i) from Lemma 5 finishes the proof. ∎

A.5 Proof of Lemma 7

Proof.

We only show the proof of the lower bound as the upper bound follows a similar derivation. We recast (4) as . Thus, the set satisfies: