Random Walk Sampling For Big Data Over Networks

It has been shown recently that graph signals with small total variation can be accurately recovered from only a few samples if the sampling set satisfies a certain condition, referred to as the network nullspace property. Based on this recovery condition, we propose a sampling strategy for smooth graph signals based on random walks. Numerical experiments demonstrate the effectiveness of this approach for graph signals obtained from a synthetic random graph model as well as a real-world dataset.


Saeed Basirian and Alexander Jung, Department of Computer Science, Aalto University, Finland; firstname.lastname(at)aalto.fi


Index Terms: compressed sensing, big data, graph signal processing, total variation, complex networks

I Introduction

Modern information processing systems are generating massive datasets which are partially labeled mixtures of different media (audio, video, text). Many successful approaches to such datasets are based on representing the data as networks or graphs. In particular, within (semi-)supervised machine learning, we represent the datasets by graph signals defined over an underlying graph, which reflects the similarity relations between individual data points. These graph signals often conform to a smoothness hypothesis, i.e., the signal values of close-by nodes are similar.

Two key problems related to processing these datasets are (i) how to sample them, i.e., which nodes provide the most information about the entire dataset, and (ii) how to recover the entire graph signal representation of the dataset from these samples. These problems have been studied in [3] which proposed a convex optimization method for recovering a graph signal from a small number of samples. Moreover, a sufficient condition for this recovery method to be accurate has been presented. This condition is a reformulation of the stable nullspace property of compressed sensing to the graph signal setting.

Contribution. Based on the intuition provided by the recently derived network nullspace property, we propose a sampling strategy based on random walks. The effectiveness of this approach is confirmed via numerical experiments based on synthetic graph signals obtained from a particular random graph model, i.e., the assortative planted partition model, and graph signals induced by a real-world dataset containing product rating information of an online retail shop.

Notation. Vectors and matrices are denoted by boldface lower-case and upper-case letters, respectively. The vector with all entries equal to one (zero) is denoted $\mathbf{1}$ ($\mathbf{0}$). The $\ell_{1}$ and $\ell_{2}$ norms of a vector $\mathbf{x}$ are denoted $\|\mathbf{x}\|_{1}$ and $\|\mathbf{x}\|_{2}$, respectively.

Outline. The problem setup is discussed in Section II, where we formulate the problem of recovering a smooth graph signal as a convex optimization problem. Our main contribution is contained in Section III, where we present the random walk sampling method and discuss its properties in the context of the assortative planted partition model. The results of illustrative numerical experiments are presented in Section IV. We finally conclude in Section V.

II Problem Formulation

We consider massive heterogeneous datasets with intrinsic network structure represented by a graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$. The graph consists of the nodes $\mathcal{V} = \{1, \dots, N\}$, which are connected by undirected edges $\mathcal{E}$. Each node $i \in \mathcal{V}$ represents an individual data point, and an edge $\{i, j\} \in \mathcal{E}$ connects nodes representing similar data points. For a given node $i$, we define its neighbourhood as

$$\mathcal{N}(i) := \{ j \in \mathcal{V} : \{i, j\} \in \mathcal{E} \}. \qquad (1)$$
The degree $d_{i} := |\mathcal{N}(i)|$ of node $i$ counts the number of its neighbours.
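These definitions map directly onto a small data structure. The following sketch (the `Graph` class and its method names are ours, purely for illustration) stores an undirected graph as an adjacency list and exposes the neighbourhood and degree of a node:

```python
class Graph:
    """Minimal undirected graph stored as an adjacency list."""

    def __init__(self, num_nodes, edges):
        self.num_nodes = num_nodes
        self.adj = {i: set() for i in range(num_nodes)}
        for i, j in edges:
            # undirected edge {i, j}: record it in both directions
            self.adj[i].add(j)
            self.adj[j].add(i)

    def neighbourhood(self, i):
        """N(i) = { j : {i, j} is an edge }."""
        return self.adj[i]

    def degree(self, i):
        """d_i = |N(i)|, the number of neighbours of node i."""
        return len(self.adj[i])
```
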

Within (semi-)supervised learning, we associate each data point $i \in \mathcal{V}$ with a label $x_{i} \in \mathbb{R}$. These labels induce a graph signal $\mathbf{x} : \mathcal{V} \to \mathbb{R}$ defined over the graph underlying the dataset.

We aim at recovering a smooth graph signal $\mathbf{x}$ based on observing its values $x_{i}$ for all nodes which belong to the sampling set

$$\mathcal{M} := \{ i_{1}, \dots, i_{M} \} \subseteq \mathcal{V}. \qquad (2)$$
The size $M = |\mathcal{M}|$ of the sampling set is typically much smaller than the overall dataset, i.e., $M \ll N$. For a fixed sampling budget $M$ it is important to choose the sampling set such that the information obtained is sufficient to recover the overall graph signal. By considering a particular recovery method, called sparse label propagation (SLP), [3] presents the network nullspace property as a sufficient condition on the sampling set such that SLP recovers the overall graph signal from the samples.

The SLP recovery method is based on a smoothness hypothesis, which requires signal values of nodes belonging to the same cluster to be similar. This smoothness hypothesis then suggests searching for the particular graph signal which is consistent with the observed signal samples and, moreover, has minimum total variation (TV)

$$\|\mathbf{x}\|_{\mathrm{TV}} := \sum_{\{i, j\} \in \mathcal{E}} |x_{j} - x_{i}|, \qquad (3)$$
which quantifies signal smoothness. Thus the recovery problem amounts to the convex optimization problem

$$\hat{\mathbf{x}} \in \arg\min_{\tilde{\mathbf{x}}} \|\tilde{\mathbf{x}}\|_{\mathrm{TV}} \quad \text{s.t.} \quad \tilde{x}_{i} = x_{i} \text{ for all } i \in \mathcal{M}. \qquad (4)$$
The SLP algorithm is nothing but the primal-dual optimization method of Chambolle and Pock [2] applied to the problem (4).
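Evaluating the TV of a candidate graph signal is straightforward; the following helper (our own illustrative code, not taken from [3]) sums the absolute signal differences over the edge list:

```python
def total_variation(edges, x):
    """TV of a graph signal x (indexable by node id): the sum of
    absolute signal differences |x[j] - x[i]| over all edges {i, j}."""
    return sum(abs(x[j] - x[i]) for i, j in edges)
```

For a signal that is constant on each cluster, only the boundary (cut) edges contribute, so the TV equals the cut-size times the jump in signal value.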

Let us from now on assume that the true underlying graph signal is clustered, i.e.,

$$x_{i} = \sum_{\ell = 1}^{F} a_{\ell} \, t^{(\ell)}_{i}, \qquad (5)$$

with the cluster indicator signals

$$t^{(\ell)}_{i} := 1 \text{ if } i \in \mathcal{C}_{\ell}, \quad t^{(\ell)}_{i} := 0 \text{ otherwise}. \qquad (6)$$
For a partition $\mathcal{F} = \{\mathcal{C}_{1}, \dots, \mathcal{C}_{F}\}$ consisting of disjoint clusters with small cut-sizes, the TV is relatively small. Thus, we expect recovery based on TV minimization (4) to be accurate for signals of the type (5). Indeed, a sufficient condition for the solution of (4) to coincide with $\mathbf{x}$ can be formulated as follows.

Lemma 1.

We observe a clustered signal of the form (5) on the sampling set $\mathcal{M}$. If each boundary edge $\{i, j\}$ with $i \in \mathcal{C}_{a}$, $j \in \mathcal{C}_{b}$ is connected to two sampled nodes, one in each cluster, i.e.,

$$\{i, i'\} \in \mathcal{E} \text{ for some } i' \in \mathcal{M} \cap \mathcal{C}_{a} \quad \text{and} \quad \{j, j'\} \in \mathcal{E} \text{ for some } j' \in \mathcal{M} \cap \mathcal{C}_{b}, \qquad (7)$$
then (4) has a unique solution which moreover coincides with the true graph signal .

III Random Walk Sampling

We now present a particular strategy (summarized in Algorithm 1 below) for choosing the sampling set $\mathcal{M}$ of nodes at which the graph signal should be sampled to obtain the observations $\{x_{i}\}_{i \in \mathcal{M}}$. Our strategy is based on parallel random walks which are started at randomly selected seed nodes. The endpoints of these random walks, which are run for a fixed number $L$ of steps, constitute the sampling set $\mathcal{M}$.

Input: random walk length $L$, sample budget $M$
Output: sampling set $\mathcal{M}$
for $k = 1, \dots, M$ do
    randomly select a seed (start) node
    perform a length-$L$ random walk starting at the seed node
    add the endpoint of the walk to $\mathcal{M}$
end for
Algorithm 1 Random Walk Sampling
Fig. 1: Clustered graph signal (5) defined over a graph composed of two clusters.

In Figure 1 we illustrate the construction of the sampling set via the random walks. Each random walk forms a finite sequence of nodes that are visited in successive steps of the walk.

The sampling strategy of Algorithm 1 is appealing since it allows for efficient implementation: the random walks can be run in parallel. Moreover, for a particular random graph model, the sampling set delivered by Algorithm 1 conforms with Lemma 1. According to Lemma 1, we have to select from each cluster a number of sampled nodes which is proportional to its cut-size. Thus, we have to sample more densely in those clusters which have large cut-size. We now show that the sampling set obtained by Algorithm 1 follows this rationale for graph signals obtained from the stochastic block model (SBM) [6].
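As a concrete illustration, Algorithm 1 can be sketched in a few lines of Python. The function name, the adjacency-list graph representation, and the `rng` parameter are our choices, not part of the paper:

```python
import random

def random_walk_sampling(adj, num_walks, walk_length, rng=random):
    """Sketch of Algorithm 1: return the endpoints of `num_walks`
    length-`walk_length` random walks over the graph given by the
    adjacency dict `adj` (node -> set of neighbours), each walk
    started at a uniformly chosen seed node."""
    nodes = list(adj)
    sampling_set = set()
    for _ in range(num_walks):
        v = rng.choice(nodes)                 # random seed (start) node
        for _ in range(walk_length):
            v = rng.choice(list(adj[v]))      # step to a uniform neighbour
        sampling_set.add(v)                   # keep the walk's endpoint
    return sampling_set
```

Note that in this sketch different walks may end at the same node, so the returned set can contain fewer than the budgeted number of nodes.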

For a given partition $\mathcal{F} = \{\mathcal{C}_{1}, \dots, \mathcal{C}_{F}\}$ of the graph into clusters of size $N_{\ell} = |\mathcal{C}_{\ell}|$, the SBM is a generative stochastic model for the edge set $\mathcal{E}$ of the graph. In its simplest form, which is called the assortative planted partition model (APPM) [6], the SBM is defined by two parameters $p$ and $q$ which specify the probability that two particular nodes of the graph are connected by an edge. In particular, two nodes out of the same cluster are connected by an edge with probability $p$, while two nodes from different clusters are connected by an edge with probability $q$.

Elementary derivations yield the expected degree of any node $i$ belonging to cluster $\mathcal{C}_{\ell}$ as

$$\mathbb{E}[d_{i}] = p (N_{\ell} - 1) + q (N - N_{\ell}). \qquad (8)$$
On the other hand, by similarly elementary calculations, the expected cut-size satisfies

$$\mathbb{E}\big[|\partial \mathcal{C}_{\ell}|\big] = q N_{\ell} (N - N_{\ell}). \qquad (9)$$
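The APPM is easy to simulate, which also lets one check the expected-degree formula empirically. The sketch below (the generator function and its argument names are ours) draws each potential edge independently with probability $p$ inside a cluster and $q$ across clusters:

```python
import random

def generate_appm(cluster_sizes, p, q, rng):
    """Sample an APPM/SBM edge set: nodes in the same cluster are
    linked with probability p, nodes in different clusters with
    probability q. Returns (cluster labels, list of edges (i, j), i < j)."""
    labels = [c for c, size in enumerate(cluster_sizes) for _ in range(size)]
    n = len(labels)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            prob = p if labels[i] == labels[j] else q
            if rng.random() < prob:
                edges.append((i, j))
    return labels, edges
```

For a node in a cluster of size $N_{\ell}$, the empirical average degree should concentrate around $p (N_{\ell} - 1) + q (N - N_{\ell})$, matching the expected-degree calculation above.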
Now consider a particular random walk which is run in Algorithm 1. For a fixed node $i \in \mathcal{V}$, let $P(X_{r} = i)$ denote the probability that the random walk visits node $i$ in the $r$th step. A fundamental result in the theory of random walks over connected, non-bipartite graphs states [8, page 159]

$$\lim_{r \to \infty} P(X_{r} = i) = \frac{d_{i}}{2 |\mathcal{E}|}. \qquad (10)$$
Thus, by running the random walks in Algorithm 1 sufficiently long (choosing $L$ sufficiently large), the probability that the delivered sampling set contains a node from cluster $\mathcal{C}_{\ell}$ satisfies

$$P\big( \mathcal{M} \cap \mathcal{C}_{\ell} \neq \emptyset \big) \approx \sum_{i \in \mathcal{C}_{\ell}} \frac{d_{i}}{2 |\mathcal{E}|}. \qquad (11)$$
Contrasting (11) with (9) reveals that the sampling set delivered by Algorithm 1 indeed conforms with Lemma 1, which requires clusters with larger cut-size to be sampled more densely.
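The limit distribution underlying this argument can be checked numerically without any sampling at all: propagating a start distribution through the walk's transition probabilities converges, for a connected non-bipartite graph, to degree divided by twice the number of edges. The sketch below (our own helper) performs exactly this power iteration:

```python
def stationary_distribution(adj, steps=200):
    """Propagate a uniform start distribution through the random-walk
    transition probabilities of the graph `adj` (node -> neighbours).
    For a connected, non-bipartite graph the result approaches
    d_i / (2 |E|) at every node i."""
    n = len(adj)
    dist = [1.0 / n] * n
    for _ in range(steps):
        nxt = [0.0] * n
        for i, nbrs in adj.items():
            share = dist[i] / len(nbrs)   # leave node i along a uniform edge
            for j in nbrs:
                nxt[j] += share
        dist = nxt
    return dist
```

Running this on a small graph confirms that high-degree nodes, which dominate clusters with many (cut) edges, are visited proportionally more often.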

IV Numerical Results

The effectiveness of the sampling method given by Algorithm 1 was verified by applying it to different graph signals and using sparse label propagation (SLP) to recover the original graph signal from the samples. The SLP algorithm, derived in [3], is restated as Algorithm 2 for convenience. In Algorithm 2, we make use of a clipping operator for edge signals, defined element-wise.

Input: data graph $\mathcal{G}$, sampling set $\mathcal{M}$, signal samples $\{x_{i}\}_{i \in \mathcal{M}}$
Initialize: incidence matrix $\mathbf{D}$ of $\mathcal{G}$ for some arbitrary orientation; maximum node degree $d_{\max}$
repeat the primal-dual updates (details in [3]) until a stopping criterion is satisfied
Algorithm 2 Sparse Label Propagation [3]
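For intuition, the following is a compact rendering of such a primal-dual iteration for problem (4). It is our own sketch of TV minimization under equality constraints via the Chambolle-Pock method [2], not the authors' exact listing; the function name, variable names, and step-size choices are ours:

```python
def slp_recover(edges, n, samples, num_iter=2000):
    """Sketch of primal-dual TV minimization: minimize the sum over
    edges (i, j) of |x[j] - x[i]| subject to x[i] fixed for the
    sampled nodes (`samples` maps node -> observed value)."""
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    d_max = max(deg)
    # tau * sigma * ||D||^2 <= 1 since ||D||^2 <= 2 * d_max
    tau = sigma = 1.0 / (2 * d_max) ** 0.5
    x = [0.0] * n
    for i, v in samples.items():
        x[i] = v
    x_bar = x[:]
    y = [0.0] * len(edges)          # one dual variable per edge
    for _ in range(num_iter):
        # dual ascent on the edge variables, clipped to [-1, 1]
        for e, (i, j) in enumerate(edges):
            y[e] = max(-1.0, min(1.0, y[e] + sigma * (x_bar[j] - x_bar[i])))
        # primal descent x <- x - tau * D^T y, then re-impose the samples
        x_new = x[:]
        for e, (i, j) in enumerate(edges):
            x_new[i] += tau * y[e]
            x_new[j] -= tau * y[e]
        for i, v in samples.items():
            x_new[i] = v
        x_bar = [2 * xn - xo for xn, xo in zip(x_new, x)]
        x = x_new
    return x
```

On a toy graph of two tightly connected clusters joined by one bridge edge, sampling two nodes per cluster suffices for this iteration to fill in the unsampled nodes with their cluster's value.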

Our numerical experiments involved independent simulation runs. Each simulation run is based on randomly generating an instance (see Figure 2) of the APPM for fixed parameter values and a partition consisting of four clusters (cf. Section III). We then generated a clustered graph signal of the form (5) by choosing the cluster signal values as independent random variables.

For each realization of the APPM, we constructed a sampling set using Algorithm 1, which was then used to obtain the signal samples and subsequently to recover the entire graph signal via Algorithm 2.

Fig. 2: An APPM instance with 60 nodes and three clusters. Node colours represent the signal values.

We measured the recovery accuracy obtained by Algorithm 2 via the normalized empirical mean squared error (NMSE) of the signal estimate $\hat{\mathbf{x}}$, i.e.,

$$\varepsilon_{k} := \frac{\|\hat{\mathbf{x}}^{(k)} - \mathbf{x}^{(k)}\|_{2}^{2}}{\|\mathbf{x}^{(k)}\|_{2}^{2}}. \qquad (12)$$

Here, $\varepsilon_{k}$, $\mathbf{x}^{(k)}$ and $\hat{\mathbf{x}}^{(k)}$ denote the NMSE, the original and the recovered graph signal, respectively, obtained in the $k$th simulation run. Note that $\varepsilon_{k}$ is random, and often we are interested in its empirical mean

$$\bar{\varepsilon} := \frac{1}{K} \sum_{k=1}^{K} \varepsilon_{k}. \qquad (13)$$
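Computing this error measure for a single run is a one-liner; the helper below (our own, with plain Python lists standing in for graph signals) mirrors this definition:

```python
def nmse(x_hat, x):
    """Normalized MSE: squared error of the estimate divided by the
    squared norm of the true signal."""
    num = sum((a - b) ** 2 for a, b in zip(x_hat, x))
    den = sum(b ** 2 for b in x)
    return num / den
```
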
We evaluated the quality of the sampling set provided by Algorithm 1 for varying sampling budgets $M$ and a fixed random walk length. In Table I, we report the mean and standard deviation of the NMSE for different sampling budgets $M$. Besides the expected decrease in error with an increasing number of samples, the results show that by sampling around half of the graph nodes we obtain a small NMSE.

We also investigated the effect of varying the random walk length $L$ in Algorithm 1 for a fixed sample budget. In Table II, we display the mean and standard deviation of the NMSE for different values of $L$. For this range of values, the length of the walks has a relatively insignificant effect on the outcome. This can be partially explained by the fact that the mixing time of random walks (i.e., the number of steps before they reach the stationary distribution) may in some cases be much smaller than the size of the graph [5].

Sampling budget M |  10   |  20   |  30   |  40   |  50
NMSE              | 0.285 | 0.232 | 0.188 | 0.132 | 0.082
STD               | 0.221 | 0.178 | 0.160 | 0.138 | 0.091
TABLE I: Average NMSE obtained for different sampling budgets M. STD indicates the empirical standard deviation of the NMSE.
Random walk length L |  20   |  40   |  80   |  160  |  320
NMSE                 | 0.312 | 0.314 | 0.285 | 0.277 | 0.304
STD                  | 0.235 | 0.248 | 0.214 | 0.232 | 0.216
TABLE II: Average NMSE obtained for different lengths of the random walks. STD indicates the empirical standard deviation of the NMSE.

The fluctuations of the NMSE, as indicated by the empirical standard deviations in Tables I and II, are on the order of the average NMSE. We expect the reason for this rather large fluctuation to be the small number of simulation runs. However, due to resource constraints we have not been able to increase the number of runs significantly.

In the final experiment, we tested the hypothesis that the sampling strategy conforms to the intuition, suggested by Lemma 1, of taking more samples in clusters with larger cut-size (cf. Section III). For this purpose, the procedure of the first two experiments was repeated, and in each run the number of samples in each cluster and its cut-size were recorded. In Figure 3, we report the obtained results, which indicate that the mean sample counts are approximately proportional to the cluster cut-sizes.

IV-A Real-World Dataset

We also tested our approach on the Amazon co-purchase dataset from the Stanford Network Analysis Platform [4]. The dataset consists of a collection of products purchased on the Amazon website. For each product, it provides a list of other products that are frequently co-purchased with it, as well as an average user rating. We first extracted an undirected graph underlying the full dataset (excluding nodes with no co-purchase information), which includes an edge whenever one product is co-purchased with another or vice versa. Subsequently, we selected a subgraph by performing a random walk and including all the nodes on its path together with their neighbours. The graph signal is the average user rating of the products.

The sampling set was extracted using the random walk method. The SLP algorithm was then applied to recover the graph signal, and the resulting NMSE was averaged over 10 runs. For comparison, we also tested three graph clustering algorithms (also referred to as community detection algorithms) for selecting the sampling set. This comprised first finding a partitioning of the nodes using the clustering algorithms and then randomly sampling from each cluster, where the number of samples per cluster was allocated according to its cut-size. For finding the clusters, we used an algorithm by Blondel et al. (also known as Louvain) [1], an algorithm by Newman [7], and one by Ronhovde et al. [9]. Choosing the sampling set via these methods and applying SLP for recovering the graph signal resulted in an NMSE of 0.369, 0.478, and 0.364 for the Louvain, Newman, and Ronhovde methods, respectively (the value for the Ronhovde method is the average over 5 different clusterings corresponding to 5 values of its gamma parameter equally spaced between 0.1 and 0.5). We conclude that in this case our random walk method performs similarly to these more computationally demanding clustering algorithms for sampling the graph signal.

V Conclusions

We proposed a novel random walk strategy for sampling graph signals representing massive datasets with intrinsic network structure. This strategy conforms with the rationale, supported by the recently derived network nullspace property, of sampling more densely in clusters with large cut-size. The proposed sampling method has been tested on synthetic graph signals generated via an APPM. Our numerical experiments demonstrated that by combining our sampling strategy with the SLP recovery algorithm, it is possible to recover graph signals with small error from only a few samples. The effectiveness of our sampling strategy has also been verified numerically for graph signals obtained from a real-world dataset containing product rating information of an online retail shop.

Fig. 3: The mean number of samples and the mean cut-size of each cluster.


  • [1] V. D. Blondel, J.-L. Guillaume, R. Lambiotte, and E. Lefebvre. Fast unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and Experiment, 2008(10):P10008, 2008.
  • [2] A. Chambolle and T. Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vision, 40(1):120–145, 2011.
  • [3] A. Jung. Sparse label propagation. ArXiv e-prints, Dec. 2016.
  • [4] J. Leskovec and A. Krevl. SNAP Datasets: Stanford large network dataset collection. http://snap.stanford.edu/data, June 2014.
  • [5] L. Lovász. Random walks on graphs: a survey. Combinatorics, Paul Erdős is Eighty, 2:1–46, 1993.
  • [6] E. Mossel, J. Neeman, and A. Sly. Stochastic block models and reconstruction. ArXiv e-prints, Aug. 2012.
  • [7] M. E. J. Newman. Fast algorithm for detecting community structure in networks. Physical Review E, 69(6):066133, 2004.
  • [8] M. E. J. Newman. Networks: An Introduction. Oxford Univ. Press, 2010.
  • [9] P. Ronhovde and Z. Nussinov. Local resolution-limit-free Potts model for community detection. Physical Review E, 81(4):046114, 2010.