HyperANF: Approximating the Neighbourhood Function of Very Large Graphs on a Budget


Abstract

The neighbourhood function $N_G(t)$ of a graph $G$ gives, for each $t$, the number of pairs of nodes $\langle x, y \rangle$ such that $y$ is reachable from $x$ in at most $t$ hops. The neighbourhood function provides a wealth of information about the graph [22] (e.g., it easily allows one to compute its diameter), but it is very expensive to compute exactly. Recently, the ANF algorithm [22] (approximate neighbourhood function) has been proposed with the purpose of approximating it on large graphs. We describe a breakthrough improvement over ANF in terms of speed and scalability. Our algorithm, called HyperANF, uses the new HyperLogLog counters [17] and combines them efficiently through broadword programming [19]; our implementation uses task decomposition to exploit multi-core parallelism. With HyperANF, for the first time we can compute in a few hours the neighbourhood function of graphs with billions of nodes with a small error and good confidence using a standard workstation.

Then, we turn to the study of the distribution of distances between reachable nodes (which can be efficiently approximated by means of HyperANF), and discover the surprising fact that its index of dispersion provides a clear-cut characterisation of proper social networks vs. web graphs. We thus propose the spid (Shortest-Paths Index of Dispersion) of a graph as a new, informative statistic that is able to discriminate between the above two types of graphs. We believe this is the first proposal of a significant new non-local structural index for complex networks whose computation is highly scalable.

1 Introduction

The neighbourhood function $N_G(t)$ of a graph $G$ returns, for each $t$, the number of pairs of nodes $\langle x, y \rangle$ such that $y$ is reachable from $x$ in at most $t$ steps. It provides data about how fast the “average ball” around each node expands. From the neighbourhood function, several interesting features of a graph can be estimated, and in this paper we are in particular interested in the effective diameter, a measure of the “typical” distance between nodes.

Palmer, Gibbons and Faloutsos [22] proposed an algorithm to approximate the neighbourhood function (see their paper for a review of previous attempts at approximate evaluation); the authors distribute an associated tool, snap, which can approximate the neighbourhood function of medium-sized graphs. The algorithm keeps track of the number of nodes reachable from each node using Flajolet–Martin counters, a kind of sketch that makes it possible to compute the number of distinct elements of a stream in very little space. A key observation was that counters associated to different streams can be quickly combined into a single counter associated to the concatenation of the original streams.

In this paper, we describe HyperANF, a breakthrough improvement over ANF in terms of speed and scalability. HyperANF uses the new HyperLogLog counters [17], and combines them efficiently by means of broadword programming [19]. Each counter is made of a number of registers, and the number of registers depends only on the required precision. The size of each register is doubly logarithmic in the number of nodes of the graph, so HyperANF, for a fixed precision, scales almost linearly in memory (i.e., $O(n \log\log n)$). By contrast, ANF's memory requirement is $O(n \log n)$.

Using HyperANF, for the first time we can compute in a few hours the neighbourhood function of graphs with more than one billion nodes with a small error and good confidence using a standard workstation with 128 GB of RAM. Our algorithms are implemented in a tool distributed as free software within the WebGraph framework.1

Armed with our tool, we study several datasets, spanning from small social networks to very large web graphs. We isolate a statistically defined feature, the index of dispersion of the distance distribution, and show that it is able to tell “proper” social networks from web graphs in a natural way.

2 Related work

HyperANF is an evolution of ANF [22], which is implemented by the tool snap. We will give some timing comparison with snap, but we can only do it for relatively small networks, as the large memory footprint of snap precludes application to large graphs.

Recently, a MapReduce-based distributed implementation of ANF called HADI [20] has been presented. HADI runs on one of the fifty largest supercomputers—the Hadoop cluster M45. The only published data about HADI’s performance is the computation of the neighbourhood function of a Kronecker graph with 2 billion links, which required half an hour using 90 machines. HyperANF can compute the same function in less than fifteen minutes on a laptop.

The rather complete survey of related literature in [20] shows that essentially no data mining tool was able before ANF to approximate the neighbourhood function of very large graphs reliably. A remarkable exception is Cohen’s work [3], which provides strong theoretical guarantees but experimentally turns out to be not as scalable as the ANF approach; it is worth noting, though, that one of the proposed applications of [3] (On-line estimation of weights of growing sets) is structurally identical to ANF.

All other results published before ANF relied on a small number of breadth-first visits on uniformly sampled nodes—a process that has no provable statistical accuracy or precision. Thus, in the rest of the paper we will compare experimental data with snap and with the published data about HADI.

3 HyperANF

In this section, we present the HyperANF algorithm for computing an approximation of the neighbourhood function of a graph; we start by recalling from [17] the notion of HyperLogLog counter upon which our algorithm relies. We then describe the algorithm, discuss how it can be implemented to be run quickly using broadword programming and task decomposition, and give results about its memory requirements and precision.

3.1 HyperLogLog counters

HyperLogLog counters, as described in [17] (which is based on [16]), are used to count approximately the number of distinct elements in a stream. For the purposes of the present paper, we need to recall briefly their behaviour. Essentially, these probabilistic counters are a sort of approximate set representation to which, however, we are only allowed to pose questions about the (approximate) size of the set.2

Let $D$ be a fixed domain and $h : D \to \{0,1\}^\infty$ be a hash function mapping each element of $D$ into an infinite binary sequence. The function is fixed with the only assumption that “bits of hashed values are assumed to be independent and to have each probability $1/2$ of occurring” [17].

For a given $x \in D$, let $h_b(x)$ denote the sequence made by the leftmost $b$ bits of $h(x)$, and $h^b(x)$ be the sequence of remaining bits of $h(x)$; $h_b(x)$ is identified with its corresponding integer value in the range $\{0, 1, \dots, 2^b - 1\}$. Moreover, given a binary sequence $w$, we let $\rho^+(w)$ be the number of leading zeroes in $w$ plus one3 (e.g., $\rho^+(00101) = 3$). Unless otherwise specified, all logarithms are in base 2.

The value $E$ printed by Algorithm ? is [17, Theorem 1] an asymptotically almost unbiased estimator for the number $n$ of distinct elements in the stream; for $m = 2^b$ registers, the relative standard deviation (that is, the ratio between the standard deviation of $E$ and $n$) is at most $\eta_m = \beta_m / \sqrt{m}$, where $\beta_m$ is a suitable constant (given in [17]). Moreover [16], even if the size of the registers (and of the hash function) used by the algorithm is unbounded, one can limit it to $\log\log(n/m) + \omega(n)$ bits obtaining almost certainly the same output ($\omega(n)$ is a function going to infinity arbitrarily slowly); overall, the algorithm requires $O(m \log\log(n/m))$ bits of space (this is the reason why these counters are called HyperLogLog). Here and in the rest of the paper we tacitly assume that registers are made of $\lceil \log\log n \rceil$ bits.
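To make the preceding description concrete, here is a minimal Java sketch of a HyperLogLog counter offering the operations used in the rest of the paper (adding an element, estimating the size, and register-wise maximisation). It is our own illustration, not the code of the actual tool: the class and method names are hypothetical, the hash function is a simple 64-bit mixer standing in for the idealised hash $h$, and the small-range correction of [17] is omitted; the bias-correction constant uses the standard large-$m$ approximation $\alpha_m \approx 0.7213 / (1 + 1.079/m)$.

    // Minimal HyperLogLog sketch: m = 2^b registers, each storing the maximum
    // rho^+ (number of leading zeroes plus one) seen for its bucket.
    public class HyperLogLogCounter {
        private final int b;           // number of bits used to select a register
        private final int m;           // number of registers, m = 2^b
        private final byte[] register;
        private final long seed;       // seed of the (fixed) hash function

        public HyperLogLogCounter(int b, long seed) {
            this.b = b;
            this.m = 1 << b;
            this.register = new byte[m];
            this.seed = seed;
        }

        public HyperLogLogCounter(HyperLogLogCounter other) {  // copy constructor
            this.b = other.b;
            this.m = other.m;
            this.register = other.register.clone();
            this.seed = other.seed;
        }

        // A 64-bit mixer standing in for the idealised hash function h.
        private long hash(long x) {
            long h = x ^ seed;
            h ^= h >>> 33; h *= 0xff51afd7ed558ccdL;
            h ^= h >>> 33; h *= 0xc4ceb9fe1a85ec53L;
            h ^= h >>> 33;
            return h;
        }

        public void add(long element) {
            final long h = hash(element);
            final int j = (int)(h >>> (Long.SIZE - b));           // leftmost b bits select a register
            final long rest = h << b;                             // remaining bits
            final int rho = Long.numberOfLeadingZeros(rest) + 1;  // rho^+ of the remaining bits
            if (rho > register[j]) register[j] = (byte)Math.min(rho, 63);
        }

        // Estimator alpha_m * m^2 / sum_j 2^{-register[j]}.
        // (Small-range correction omitted, so estimates of very small sets are biased.)
        public double size() {
            final double alpha = 0.7213 / (1 + 1.079 / m);        // large-m approximation of alpha_m
            double sum = 0;
            for (int j = 0; j < m; j++) sum += Math.pow(2.0, -register[j]);
            return alpha * m * m / sum;
        }

        // Union of the underlying sets: register-by-register maximisation.
        // Returns true if this counter changed (used below to detect stabilisation).
        public boolean max(HyperLogLogCounter other) {
            boolean changed = false;
            for (int j = 0; j < m; j++)
                if (other.register[j] > register[j]) { register[j] = other.register[j]; changed = true; }
            return changed;
        }
    }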

3.2 The HyperANF algorithm

The approximate neighbourhood function algorithm described in [22] is based on the observation that $B(x,r)$, the ball of radius $r$ around node $x$, satisfies
\[
B(x, r) = \bigcup_{x \to y} B(y, r - 1) \cup \{ x \}.
\]
Since $B(x, 0) = \{ x \}$, we can compute each $B(x, r)$ incrementally using sequential scans of the graph (i.e., scans in which we go in turn through the successor list of each node). The obvious problem is that during the scan we need to access randomly the sets $B(y, r-1)$ (the newly computed sets $B(x, r)$ can be just saved on disk on an update file and reloaded later). Here probabilistic counters come into play; to be able to use them, though, we need to endow counters with a primitive for the union. Union can be implemented provided that the counter associated to the stream of data $AB$ can be computed from the counters associated to $A$ and $B$; in the case of HyperLogLog counters, this is easily seen to correspond to maximising the two counters, register by register.

The observations above result in Algorithm ?: the algorithm keeps one HyperLogLog counter for each node; at the $t$-th iteration of the main loop, the counter $c_x$ is in the same state as if it had been fed with $B(x, t)$, and so its expected value is $|B(x, t)|$. As a result, the sum of all $c_x$'s is an (almost) unbiased estimator of $N_G(t)$ (for a precise statement, see Theorem ?).

We remark that the only sound way of running HyperANF (or ANF) is to wait for all counters to stabilise (i.e., the last iteration must leave all counters unchanged). As we will see, any alternative termination condition may lead to arbitrarily large mistakes on pathological graphs.4
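As an illustration of the structure just described (and of stabilisation-based termination), here is a compact sketch of the main loop, written against the hypothetical HyperLogLogCounter class sketched in Section 3.1 and a plain successor-array representation of the graph; it is a straightforward rendition of the algorithm, not the optimised implementation discussed in the next section.

    // Sketch of the HyperANF main loop: one HyperLogLog counter per node; after
    // t iterations counter c[v] approximates |B(v, t)| and the sum of the sizes
    // approximates N_G(t). Iteration stops only at stabilisation (no counter changed).
    public static java.util.List<Double> hyperANF(int[][] successors, int b, long seed) {
        final int n = successors.length;
        HyperLogLogCounter[] c = new HyperLogLogCounter[n];
        for (int v = 0; v < n; v++) {
            c[v] = new HyperLogLogCounter(b, seed);
            c[v].add(v);                                   // B(v, 0) = { v }
        }
        final java.util.List<Double> neighbourhood = new java.util.ArrayList<>();
        neighbourhood.add(sum(c));                         // estimate of N_G(0) = n

        boolean changed = true;
        while (changed) {
            changed = false;
            final HyperLogLogCounter[] next = new HyperLogLogCounter[n];
            for (int v = 0; v < n; v++) {
                next[v] = new HyperLogLogCounter(c[v]);    // start from B(v, t)
                for (int w : successors[v])
                    changed |= next[v].max(c[w]);          // B(v, t+1) = B(v, t) ∪ ⋃_{v→w} B(w, t)
            }
            c = next;
            if (changed) neighbourhood.add(sum(c));        // estimate of N_G(t+1)
        }
        return neighbourhood;
    }

    private static double sum(HyperLogLogCounter[] c) {
        double s = 0;
        for (HyperLogLogCounter x : c) s += x.size();
        return s;
    }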

3.3 HyperANF at hyper speed

Up to now, HyperANF has been described just as ANF with HyperLogLog counters. The effect of this change is an exponential reduction in the memory footprint and, consequently, in memory access time. We now describe the algorithmic and engineering ideas that made HyperANF much faster, actually so fast that it is possible to run it up to stabilisation.

Union via broadword programming. Given two HyperLogLog counters that have been set by streams $A$ and $B$, the counter associated to the stream $AB$ can be built by maximising in parallel the registers of each counter. That is, the $i$-th register of the new counter is given by the maximum between the $i$-th register of the first counter and the $i$-th register of the second counter.

Each time we scan a successor list, we need to maximise a large number of registers and store the resulting counter. The immediate way of obtaining this result requires extracting the value of each register, maximising it with the corresponding register of the other counter, and writing down the result in a temporary counter. This process is extremely slow, as registers are packed in 64-bit memory words. In the case of Flajolet–Martin counters, the problem is easily solved by computing the logical OR of the words containing the registers. In our case, we resort to broadword programming techniques. If the machine word is made of $w$ bits and each register is made of $k$ bits, we assume that at least $w/k$ registers are allocated to each counter, so each set of registers is word-aligned.

Let $\gg$ and $\ll$ denote right and left (zero-filled) shifting; $\lnot$, $\mathbin{\&}$, $\mid$ and $\oplus$ denote bit-by-bit not, and, or, and xor; $\lnot x$ denotes the bit-by-bit complement of $x$.

We use $L_k$ to denote the constant whose ones are in positions $0$, $k$, $2k$, …, that is, the constant with the lowest bit of each $k$-bit subword set (e.g., $L_8 = \mathtt{0x0101010101010101}$). We use $H_k$ to denote $L_k \ll (k-1)$, that is, the constant with the highest bit of each $k$-bit subword set (e.g., $H_8 = \mathtt{0x8080808080808080}$).

It is known (see [19], or [24] for an elementary proof) that the following expression
\[
z = \Bigl( \bigl( \bigl( (x \mid H_k) - (y \mathbin{\&} \lnot H_k) \bigr) \mid (x \oplus y) \bigr) \oplus (x \mid \lnot y) \Bigr) \mathbin{\&} H_k
\]
performs a parallel unsigned comparison $k$-by-$k$-bit-wise. At the end of the computation, the highest bit of each block of $k$ bits will be set iff the corresponding comparison is true (i.e., the value of the block in $x$ is strictly smaller than the value of the block in $y$).

Once we have computed $z$, we generate a mask $M$ that is made entirely of 1s, or of 0s, for each $k$-bit block, depending on whether we should select the value of $y$ or of $x$ for that block:

This formula works by moving the high bit denoting the result of the comparison to the least significant bit (of each $k$-bit block). Then, we or with $H_k$ and subtract $L_k$ from each block, obtaining either a mask with just the high bit set (if we were starting from 1) or a mask with all bits set except for the high bit (if we were starting from 0). The last two operations fix those values so that they become all 1s or all 0s, respectively. The result of the maximisation process is now just $(x \mathbin{\&} \lnot M) \mid (y \mathbin{\&} M)$.

This discussion assumed that the set of registers of a counter is stored in a single machine word. In a realistic setting, the registers are spread among several consecutive words, and we use multiple-precision subtractions and shifts to apply the expressions above to a sequence of words. All other (logical) operations just have to be applied to each word in sequence.
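As an illustration, the following Java sketch realises the word-wise maximisation for registers of width $k$ packed in a single 64-bit word (assuming, for simplicity, that $k$ divides 64; the multiple-precision extension to several words is omitted). The comparison expression below is one correct way of obtaining the parallel unsigned comparison described above; the exact expression and mask construction used in the actual implementation may differ in inessential details.

    // Broadword maximisation of k-bit registers packed in 64-bit words.
    // For each k-bit block, the result contains max(block of x, block of y),
    // computed without extracting the registers one by one.
    public final class BroadwordMax {
        private BroadwordMax() {}

        // L_k: lowest bit of each k-bit block set; H_k = L_k << (k - 1): highest bit of each block set.
        static long lowBits(int k) {
            long l = 0;
            for (int i = 0; i < 64; i += k) l |= 1L << i;
            return l;
        }
        static long highBits(int k) { return lowBits(k) << (k - 1); }

        // Parallel unsigned comparison: the high bit of each k-bit block of the
        // result is set iff the block of x is strictly smaller than the block of y.
        static long lessThan(long x, long y, int k) {
            final long h = highBits(k);
            final long z = (x | h) - (y & ~h);        // high bit: [low part of x >= low part of y]
            return ((~x & y) | (~(x ^ y) & ~z)) & h;  // high bit: [block of x < block of y]
        }

        // Register-by-register maximum of two words of packed k-bit registers.
        static long max(long x, long y, int k) {
            final long lt = lessThan(x, y, k);
            final long mask = (lt >>> (k - 1)) * ((1L << k) - 1); // all-ones in blocks where x < y
            return (y & mask) | (x & ~mask);
        }

        public static void main(String[] args) {
            // Tiny sanity check with k = 8: each byte of the result is the byte-wise maximum.
            long x = 0x05_7f_00_ff_10_80_33_01L;
            long y = 0x90_10_01_fe_10_7f_44_00L;
            System.out.printf("%016x%n", max(x, y, 8));           // expected 907f01ff10804401
        }
    }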

All in all, by using the techniques above we can improve the speed of maximisation by a factor of $w/k$, which in our case is about 13. This actually results in a sixfold speed improvement of the overall application in typical cases (e.g., web graphs), as about 90% of the computation time is spent in maximisation.

Parallelisation via task decomposition. Although HyperANF is written as a sequential algorithm, the outer loop lends itself to be executed in parallel, which can be extremely fruitful on a modern multicore architecture; in particular, we approach this idea using task decomposition. We divide the iteration on the whole set of nodes into a set of small tasks (on the order of thousands), where each task consists in iterating on a contiguous segment of nodes. A pool of threads picks up the first available task and solves it: as a result, we obtain a performance improvement that is linear in the number of cores. Threads can be designed to be extremely agile, helped by WebGraph's facilities which allow us to provide each thread with a lightweight copy of the graph that shares the bitstream and associated information with all other threads.
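A minimal illustration of this scheme (hypothetical names, not the actual WebGraph-based code) could look as follows; updateSegment stands for the per-segment counter updates of one iteration.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    // Illustrative task decomposition for one HyperANF iteration: the node set is
    // split into many small contiguous segments and a fixed pool of threads picks
    // up segments as they become free.
    public final class TaskDecomposition {
        private TaskDecomposition() {}

        public static void runIteration(int n, int segmentSize, int threads)
                throws InterruptedException {
            final ExecutorService pool = Executors.newFixedThreadPool(threads);
            for (int from = 0; from < n; from += segmentSize) {
                final int lo = from, hi = Math.min(n, from + segmentSize);
                pool.execute(() -> updateSegment(lo, hi));  // process nodes in [lo, hi)
            }
            pool.shutdown();
            pool.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
        }

        // Placeholder: maximise the counters of the nodes in [from, to) against
        // the counters of their successors (the real work of an iteration).
        private static void updateSegment(int from, int to) {
            // ... counter updates for nodes in [from, to) ...
        }
    }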

Tracking modified counters. It is an easy observation that a counter that does not change its value is not useful for the next step of the computation: all counters using it during their update would not change their value when maximising with it (and we do not even need to write it on disk). We thus keep track of modified counters and skip altogether the maximisation step with unmodified ones. Since, as we already remarked, 90% of computation time is spent in maximisation, this approach leads to a large speedup after the first phases of the computation, when most counters are stabilised.

For the same reason, we keep track of the harmonic partial sums of small blocks of counters. The amount of memory required is negligible, but if no counter in the block has been modified, we can avoid a costly computation.

Systolic computation. HyperANF can be run in systolic mode. In this case, we also use the transposed graph: whenever a counter changes, it signals back to its predecessors that at the next round they could change their values. Then, at each iteration, nodes that have not been signalled are entirely skipped during the computation. Systolic computations are fundamental to get high-precision runs, as they reduce the cost of an iteration to scanning only the arcs of the graph that are actually moving information around. We switch to systolic computation when less than one quarter of the counters change their values.

3.4 Correctness, errors and memory usage

Very little has been published about the statistical behaviour of ANF. The statistical properties of approximate counters are well known, but the values of such counters for each node are highly dependent, and adding a large number of them can in principle lead to an arbitrarily large variance. Thus, making precise statistical statements about the outcome of a computation of ANF or HyperANF requires some care. The discussion in the following sections is based on HyperANF, but its results can be applied mutatis mutandis to ANF as well.

Consider the output $\hat N_G(t)$ of Algorithm ? at a fixed iteration $t$. We can see it as a random variable
\[
\hat N_G(t) = \sum_{0 \le i < n} X_{i,t}
\]
where5 each $X_{i,t}$ is the value of the HyperLogLog counter that counts the nodes reached by node $i$ in $t$ steps; what we want to prove in this section is a bound on the relative standard deviation of $\hat N_G(t)$ (such a proof, albeit not difficult, is not provided in the papers about ANF). First observe that [17], for a fixed number $m$ of registers per counter, the standard deviation of $X_{i,t}$ satisfies
\[
\frac{\sigma(X_{i,t})}{|B(i,t)|} \le \eta_m,
\]
where $\eta_m$ is the guaranteed relative standard deviation of a HyperLogLog counter. Using the subadditivity of standard deviation (i.e., $\sigma(X+Y) \le \sigma(X) + \sigma(Y)$ for random variables $X$ and $Y$ with finite variance), we prove the following

We have that . By Theorem 1 of [17], , hence the first statement. For the second result, we have:



Since, as we recalled in Section 3.1, the relative standard deviation satisfies $\eta_m \le 1.06/\sqrt{m}$, to get a specific value $\delta$ it is sufficient to choose $m \approx (1.06/\delta)^2$; this assumption yields an overall space requirement of about
\[
n \left( \frac{1.06}{\delta} \right)^2 \lceil \log\log n \rceil \ \text{bits}
\]
(here, we used the obvious upper bound $\log\log |B(x,t)| \le \log\log n$ on the register size). For instance, to obtain a relative standard deviation of  (in every iteration) on a graph of one billion nodes one needs  GB of main memory for the registers (for a comparison, snap would require  GB). Note that since we write to disk the new values of the registers, this is actually the only significant memory requirement (the graph can be kept on disk and mapped in memory, as it is scanned almost sequentially).
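As a worked illustration of this space bound (with an assumed target precision, not a figure taken from the paper): aiming at $\delta = 10\%$ one would pick $m \approx (1.06/0.10)^2 \approx 112$, rounded up to $m = 128$ registers, and on a graph with $n = 10^9$ nodes the registers would occupy about
\[
10^9 \cdot 128 \cdot \lceil \log\log 10^9 \rceil = 10^9 \cdot 128 \cdot 5 \ \text{bits} = 6.4 \cdot 10^{11} \ \text{bits} = 80 \ \text{GB}.
\]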

Applying Chebyshev’s inequality, we obtain the following:

Chebyshev’s theorem gives, for all ,

since , we conclude that

choosing we have



In [17] it is argued that the HyperLogLog error is approximately Gaussian; the counters, however, are not statistically independent and in fact the overall error does not appear to be normally distributed. Nonetheless, for every fixed $t$, the random variable $\hat N_G(t)$ seems to be unimodal (for example, the average p-value of the Dip unimodality test [18] for the cnr-2000 dataset is ), so we can apply the Vysochanskiĭ-Petunin inequality [25], obtaining the bound

In the rest of the paper, to state clearly our theorems we will always assume error with confidence . It is useful, as a practical reminder, to note that because of the above inequality for each point of the neighbourhood function we can assume a relative error of with confidence (e.g., with % confidence, or with % confidence).
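For reference, the general forms of the two inequalities used above (these are the textbook statements, not the specific bounds derived in the paper): if $X$ is a random variable with mean $\mu$ and standard deviation $\sigma$, Chebyshev's inequality gives
\[
\Pr\bigl(|X - \mu| \ge \lambda \sigma\bigr) \le \frac{1}{\lambda^2},
\]
and, when $X$ is unimodal, the Vysochanskiĭ-Petunin inequality improves this to
\[
\Pr\bigl(|X - \mu| \ge \lambda \sigma\bigr) \le \frac{4}{9 \lambda^2} \qquad \text{for } \lambda > \sqrt{8/3}.
\]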

As an empirical counterpart to the previous results, we considered a relatively small graph of about  nodes (cnr-2000, see Section 6 for a full description) for which we can compute the exact neighbourhood function $N_G(t)$; we ran HyperANF 500 times with . At least % of the samples (for all $t$) have a relative error smaller than twice the theoretical relative standard deviation. The percentage jumps up to % for three times the relative standard deviation, showing that the distribution of the values behaves better than what the theory would guarantee.

4 Deriving useful data

As advocated in [22], being able to estimate the neighbourhood function on real-world networks has several interesting applications. Unfortunately, all published results we are aware of lack statistical satellite data (such as confidence intervals, or distribution of the computed values) that make it possible to compare results from different research groups. Thus, in this section we try to discuss in detail how to derive useful data from an approximation of the neighbourhood function.

The distance cdf. We start from the apparently easy task of computing the cumulative distribution function of distances of the graph (in short, distance cdf), which is the function that gives the fraction of reachable pairs at distance at most $t$, that is,
\[
H_G(t) = \frac{N_G(t)}{\max_t N_G(t)}.
\]
In other words, given an exact computation of the neighbourhood function, the distance cdf can be easily obtained by dividing all values by the largest one. Being able to estimate $N_G(t)$ allows one to produce a reliable approximation of the distance cdf:

Note that if

holds for every , then a fortiori

(because, although the maxima might be first attained at different values of , the same holds for any larger values). As a consequence,

The probability is immediate from the union bound, as we are considering events at the same time.



Note two significant limitations: first of all, making precise statements (i.e., with confidence) about all points of $H_G$ requires very high initial precision and confidence. Second, the theorem holds only if HyperANF has been run up to stabilisation, so that the probabilistic guarantees of HyperLogLog hold for all $t$.

The first limitation makes it impossible, in practice, to get directly sensible confidence intervals, for instance, for the average distance or higher moments of the distribution (we will elaborate further on this point later). Thus, only statements about a small, finite number of points can be approached directly.

The second limitation is somewhat more serious in theory, albeit in practice it can be circumvented by making suitable assumptions about the graph under examination (which however should be clearly stated along with the data). Consider the graph $G_{k,\ell}$ made by two $k$-cliques joined by a unidirectional path of $\ell$ nodes (see Figure ?). Even neglecting the effect of approximation, $G_{k,\ell}$ can “fool” HyperANF (or ANF) so that the distance cdf is completely wrong (see Figure ?) when using any stopping criterion that is not stabilisation.

The real cdf of the graph in Figure ? (+), and the one that would be computed using any termination condition that is not stabilisation (*); here $\ell = 10$ and $k = 260$.
Two $k$-cliques joined by a unidirectional path of $\ell$ nodes: terminating even one step earlier than stabilisation completely miscalculates the distance cdf (see Figure ?); terminating just one step earlier than stabilisation yields an estimated effective diameter of 1, whereas the true effective diameter is much larger.

Indeed, consider the exact neighbourhood function of $G_{k,\ell}$: the key observation is that its very last value is significantly larger than all previous values, as at the last step the nodes of the right clique become reachable from the nodes of the first clique. Thus, if iteration stops before stabilisation,6 the normalisation factor used to compute the cdf will be significantly smaller than the actual value, causing a completely wrong estimation of the cdf, as shown in Figure ?.

Although this counterexample (which can be easily adapted to be symmetric) is definitely pathological, it suggests that particular care should be taken when handling graphs that present narrow “tubes” connecting large connected components: in such scenarios, the neighbourhood function exhibits relatively long plateaux (preceded and followed by sharp bumps) that may fool the computation of the cdf.

The effective diameter. The first application of ANF was the computation of the effective diameter. The effective diameter of $G$ at $\alpha$ is the smallest $t$ such that $H_G(t) \ge \alpha$; when $\alpha$ is omitted, it is assumed to be $0.9$.7 The interpolated effective diameter is obtained in the same way on the linear interpolation of the points of the neighbourhood function.
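The following sketch shows how the distance cdf and the interpolated effective diameter can be derived from a neighbourhood function, for instance from the array of estimates returned by the HyperANF sketch of Section 3.2; it is a direct illustration of the definitions above, not the code of the tool.

    // Derived quantities from a neighbourhood function N(0), N(1), ..., N(T)
    // computed up to stabilisation (so N[T] = max_t N(t)).
    public final class DistanceStatistics {
        private DistanceStatistics() {}

        // Distance cdf: H(t) = N(t) / N(T).
        public static double[] cdf(double[] n) {
            final double[] h = new double[n.length];
            for (int t = 0; t < n.length; t++) h[t] = n[t] / n[n.length - 1];
            return h;
        }

        // Interpolated effective diameter at alpha (alpha = 0.9 by default):
        // the point where the linear interpolation of H first reaches alpha.
        public static double effectiveDiameter(double[] n, double alpha) {
            final double[] h = cdf(n);
            int t = 0;
            while (h[t] < alpha) t++;
            if (t == 0) return 0;
            return (t - 1) + (alpha - h[t - 1]) / (h[t] - h[t - 1]);  // linear interpolation
        }
    }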

Since the function $\hat N_G(t)$ is necessarily monotone in $t$ (independently of the approximation error), from Theorem ? we obtain:

Unfortunately, since the effective diameter depends sensitively on the distance cdf, again termination conditions can produce arbitrary errors. Getting back to the example of Figure ?, with a sufficiently large $k$ the correct effective diameter would be output only after running the computation to stabilisation, whereas even stopping one step earlier would produce 1 as output, yielding an arbitrarily large error. snap, indeed, fails to produce the correct result on this graph, because it stops iterating whenever the ratio between two successive iterates of $N_G(\cdot)$ is sufficiently close to 1.

Algorithm ? is used to estimate the effective diameter of a graph; albeit this approach is reasonable (and actually it is similar to that adopted by snap, although the latter does not provide any confidence interval), unless the neighbourhood function is known with very high precision it is almost impossible to obtain good upper bounds, because of the typical flatness of the distance cdf after the 90th percentile. Moreover, results computed using a termination condition different from stabilisation should always be taken with a grain of salt because of the discussion above.

The distance density function. The situation, from a theoretical viewpoint, is somehow even worse when we consider the density function $f_G(t) = H_G(t) - H_G(t-1)$ associated to the cdf $H_G$. Controlling the error on $f_G(t)$ is not easy:

With confidence ,

and similarly.


Note that the bound is very weak: since our best generic lower bound is , the relative error with which we know a point is  (which, of course, is pretty useless).

Moments. Evaluation of the moments of poses further problems. Actually, by Lemma ? we can deduce that

with confidence , where is the diameter of , which implies that the expected value of is an (almost) unbiased estimator of the expected value of . Nonetheless, the bounds we obtain are horrible (and actually unusable).

The situation for the variance is even worse, as we have to prove that we can use as an estimator to . Note that for a fixed graph , is a precise distribution and is an actual number. Conversely, (and hence ) is a random variable8. By Theorem ?, we know that is an (almost) unbiased pointwise estimator for , and that we can control its concentration by suitably choosing the number of counters. We are going to derive bounds on the approximation of using the values of up to (i.e., the iteration at which HyperANF stabilises):

Assuming error on the values of in implies confidence . Since , and by definition for we have ( ranges in ):

where is the average path length. Similarly

Hence the statement.


The error and confidence we obtain are again unusable, but the lemma proves that with enough precision and confidence on we can get precision and confidence on .

The results in this section suggest that if computations involve the moments, the only realistic possibility is to resort to parametric statistics to study the behaviour of the value of interest on a large number of samples. That is, it is better to compute a large number of relatively low-precision approximate neighbourhood functions than a small number of high-precision ones, as from the former the derived values are easily computable by averaging, whereas it is impossible to obtain a large number of samples of derived values from the latter. As we will see, this approach works surprisingly well.

5 Spid

The main purpose of computing aggregated data such as the distance distribution is that we can try to define indices that express some structural property of the graph we study, an obvious example being the average distance, or the effective diameter.

One of the main goals of our recent research has been finding a simple property that clearly distinguishes between social networks deriving from human interaction (what is usually called a social network in the strong or proper sense: DBLP, Facebook, etc.) and web-based graphs, which share several properties with social networks, as they also arise from human activity, but present a visibly different structure.

In this paper we propose for the first time to use the index of dispersion (a.k.a. variance-to-mean ratio) of the distance distribution as a measure of the “webbiness” of a social network. We call such an index the spid (shortest-paths index of dispersion)9 of the graph. In particular, networks with a spid larger than one are to be considered “web-like”, whereas networks with a spid smaller than one are to be considered “properly social”. We recall that a distribution is called under- or over-dispersed depending on whether its index of dispersion is smaller or larger than 1, so a network is properly social or not depending on whether its distance distribution is under- or over-dispersed.

The intuition behind the spid is that “properly social” networks strongly favour short connections, whereas in the web long connections are not uncommon: this intuition will be confirmed in Section 6.
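Operationally, the spid can be obtained from a (possibly approximate) neighbourhood function as the variance-to-mean ratio of the distance distribution it induces; the following sketch, in the same illustrative style as the previous ones, shows the computation.

    // spid = variance / mean of the distance distribution between reachable pairs,
    // derived from a neighbourhood function N(0), ..., N(T) run to stabilisation.
    public static double spid(double[] n) {
        final double total = n[n.length - 1];            // number of reachable pairs
        double mean = 0, secondMoment = 0;
        for (int t = 1; t < n.length; t++) {
            final double p = (n[t] - n[t - 1]) / total;  // fraction of pairs at distance exactly t
            mean += t * p;
            secondMoment += (double) t * t * p;
        }
        final double variance = secondMoment - mean * mean;
        return variance / mean;                          // index of dispersion of the distances
    }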

As discussed in the previous section, in theory estimating the spid is an impossible task, due to the inherent difficulty of evaluating the moments of the distance distribution. In practice, however, the estimates of the spid computed directly on runs of HyperANF are quite precise. From the actual neighbourhood function computed for cnr-2000 we deduce that the graph spid is . We then ran 100 executions of HyperANF with a relative standard deviation of %, computing for each of them an estimation of the spid; these values approximately follow a normal distribution of mean  and standard deviation  (see Figure ?).

Cumulative density function of 100 values of the spid computed using HyperANF on cnr-2000. For comparison, we also plot random samples of size 100 and 10,000 drawn from a normal distribution.

We obtained analogous concentration results for the average distance. In some pathological cases, the distribution is not Gaussian, albeit it always turns out to be unimodal (in some cases, after discarding a few outliers), so we can apply the Vysochanskiĭ-Petunin inequality. We will report some relevant observations on the spid of a number of graphs after describing our experiments.

6 Experiments

We ran our experiments on the datasets described in Table ?:

  • the web graphs are almost all available at http://law.dsi.unimi.it/, except for the altavista dataset that was provided by Yahoo! within the Webscope program (AltaVista webpage connectivity dataset, version 1.0, http://research.yahoo.com/Academic_Relations);10

  • for the social networks: hollywood (http://www.imdb.com/) is a co-actorship graph where vertices represent actors; dblp (http://www.informatik.uni-trier.de/~ley/db/) is a scientific collaboration network where each vertex represents a scientist and two vertices are connected if they have worked together on an article; in ljournal (http://www.livejournal.com/) nodes are users and there is an arc from $x$ to $y$ if $x$ registered $y$ among his friends (it is not necessary to ask permission, so the graph is directed); amazon (http://www.archive.org/details/amazon_similarity_isbn/) describes similarity among books as reported by the Amazon store; enron is a partially anonymised corpus of e-mail messages exchanged by some Enron employees (nodes represent people and there is an arc from $x$ to $y$ whenever $y$ was the recipient of a message sent by $x$); finally in flickr (http://www.flickr.com/)11 vertices correspond to Flickr users and there is an edge connecting $x$ and $y$ whenever either vertex is recorded as a contact of the other one.

To the best of our knowledge, this is the first paper in which such a wide and diverse set of data is studied, and in which features such as the effective diameter or the average path length are computed on very large graphs with precise statistical guarantees.

A comparison of the speed of snap/HADI vs. HyperANF. The tests on snap were performed on our hardware. Both algorithms were stopped at a relative increment of . The timings of HADI on the M45 cluster are the best reported in [20], and both algorithms ran three iterations. We remark that a run of HyperANF on the Kronecker graph takes less than fifteen minutes on a laptop.
Graph                               snap                 HyperANF
amazon                              9.5 m                5 s
indochina-2004                      4.62 h               1.83 m
altavista                           -                    1.2 h

Graph                               HADI (90 machines)   HyperANF
Kronecker (177 K nodes, 2 B arcs)   30 m                 2.25 m

All experiments were performed on a Linux server equipped with Intel Xeon X5660 CPUs (2.80 GHz, 12 MB cache size) for overall 24 cores and 128 GB of RAM; the server cost about  EUR in 2010.

A brief comparison with snap and HADI timings is shown in Table ?. Essentially, on our hardware HyperANF is two orders of magnitude faster than snap. Our run on the Kronecker graph is one order of magnitude faster than HADI's (or three orders of magnitude faster, if you take into consideration the number of machines involved), but this comparison is unfair, as in principle HADI can scale to arbitrarily large graphs, whereas we are limited by the amount of memory available. Nonetheless, the speedup is clearly a breakthrough in the analysis of large graphs. It would be interesting to compare our timings for the altavista dataset with HADI's, but none have been published.

It is this speed that makes it possible, for the first time, to compute data associated with the distance distribution with high precision and for a large number of graphs. We have 100 runs with relative standard deviation of % for all graphs, except for those on the altavista dataset (%). All graphs are run to stabilisation. Our computations are necessarily much longer (usually, an order of magnitude longer in iterations) than those used to compute the effective diameter or similar measures. This is due to the necessity of computing with high precision second-order statistics that are used to compute the spid.

Previous publications used few graphs, mainly because of the large computational effort that was necessary, and no data was available about the number of runs. Moreover, we give precise confidence intervals based on parametric statistics for data depending on the second moment, such as the spid, something that has never been done before. We gather here our findings.

A posteriori parameters are highly concentrated. According to our experiments, computing the effective diameter, average distance and spid on a large number of low-precision runs generates highly concentrated distributions (see the empirical standard deviation in Table ?). Thus, we suggest this approach for computing such values, provided that termination is by stabilisation.

A plot showing the strong linear correlation between the average distance and the effective diameter.

Effective diameter and average distance are essentially linearly correlated. Figure ? shows a scatter plot of the two values, and the interpolating line. The correlation between the two values has always been folklore in the study of social networks, but we can confirm that on both social and web networks the connection can be exactly expressed in linear terms (it would of course be interesting to prove such a correlation formally, under suitable restrictions on the structure of the graph). This fact suggests that the average distance (which is more principled from a statistical viewpoint, and parameter-free) should be used as the reference parameter to express the closeness between nodes. Moreover, experimentally the standard deviation of the effective diameter in a posteriori computations is usually significantly larger than that of the average distance.

Incidentally, the average distance of the altavista dataset is slightly more than what was reported in [20] (possibly because of termination-condition artifacts).

It is difficult to give a priori confidence intervals for the effective diameter with a small number of runs. Unless a large number of runs is available, so that the precision of the approximation of the neighbourhood function can be significantly lowered, it is impossible to provide interesting upper bounds for the effective diameter.

The spid can tell social networks from web graphs. As shown in Table ?, even taking the standard deviation into account spids are pretty much below 1 for social networks and above 1 for web graphs; host graphs (not surprisingly) behave like social networks. Note that this works both for directed and undirected graphs. Figure ? shows the spid values obtained for our datasets plotted against the graph size, and also witnesses that there is no correlation (a similar graph, not shown here, testifies that spid is also independent from density). Figure ? shows that there is some slight correlation between the spid and the average distance: nonetheless, there is no way to tell networks from our dataset apart using the latter value, whereas the under- or over-dispersion of the distance distribution, as defined by the spid, never makes a mistake. Of course, we expect to enrich this graph in time with more datasets: we are particularly interested in gathering very large social networks to test the spid at large sizes.

We remark that, as a sanity check, we have also computed on several web-graph datasets the spid of the giant component, which turned out to be very similar to the spid of the whole graph. We see this as a clear sign that the spid is largely independent of the artifacts of the crawling process.

Direction should not be destroyed when analysing a graph. We confirm that symmetrising graphs destroys the combinatorial structure of the network: the average distance drops to very low values in all cases, as does the spid. This suggests that there is important structural information that is being ignored. We also note that since all web snapshots we have at hand are gathered by some kind of breadth-first visit, they represent balls of small diameter centred at the seed: symmetrising the graph, we cannot expect to get an average distance that is larger than twice the radius of the ball. All in all, the only advantage of symmetrising a graph is a significant reduction in the number of iterations that are needed to complete a computation of the neighbourhood function.12

A plot showing the spid values (vertical) for our datasets compared with their size (i.e., number of nodes, horizontal): red squares correspond to social networks, blue diamonds to web graphs and black circles to host graphs.
A plot showing the spid against the average distance using the same conventions of Figure ?.

To give a more direct idea of the level of precision of our diameter estimation, we computed the actual effective diameter at various values of $\alpha$ for the cnr-2000 dataset, and plotted it against the interval estimation obtained by HyperANF (see Figure ?).

Effective diameters at $\alpha$ for the cnr-2000 dataset; red bullets show the real effective diameter, whereas green crosses show the upper and lower extremes of the confidence interval obtained running 100 HyperANF computations with $m = 128$.

7 Future work

HyperANF lends itself naturally to distributed implementations. However, contrary to the approach taken by HADI [20], we think that the correct parallel framework for implementing a diffusing computation is a synchronous parallel system where computation happens at nodes and communication is sent from node to node with messages. Such a framework, Pregel, has been recently developed at Google [21]. In a Pregel implementation of HyperANF, every computational node sends its own counter as a message to its predecessors if it changed from the previous iteration, waits for incoming messages from its successors, and computes the maximisation procedure on the received messages. Due to the small size of HyperLogLog counters (exponentially smaller than the Flajolet–Martin counters used by ANF), the amount of communication would be very small.

Although in this paper we preferred to focus on the computation of the spid, we remark that HyperANF can also be used to build the radius distribution described in [20], or the related closeness centrality.

8 Conclusions

HyperANF is a breakthrough improvement over the original ANF techniques, mainly because of the usage of the more powerful HyperLogLog counters combined with their fast broadword combination and systolic computation. HyperANF can be run to stabilisation on very large graphs, computing data with statistical guarantees.

We consider, however, the introduction of the spid of a graph the main conceptual contribution of this paper. HyperLogLog is instrumental in making the computation of the spid possible, as the latter requires a number of iterations that is an order of magnitude larger than those required for an estimate of the effective diameter.

Acknowledgements Flavio Chierichetti participated in the earlier phases of this work. We want to thank Dario Malchiodi for fruitful discussions and hints.

Footnotes

  1. See [15]. http://webgraph.dsi.unimi.it/.
  2. We remark that in principle bits are necessary to estimate the number of unique elements in a stream [14]. HyperLogLog is a practical counter that starts from the assumption that a hash function can be used to turn a stream into an idealised multiset (see [17]).
  3. We remark that in the original HyperLogLog papers $\rho$ is used to denote $\rho^+$, but $\rho$ is a somewhat standard notation for the ruler function [19].
  4. We remark that snap uses a threshold over the relative increment in the number of reachable pairs as a termination condition, but this trick makes the tail of the function unreliable.
  5. Throughout this paper, we use von Neumann's notation $n = \{0, 1, \dots, n-1\}$, so $i \in n$ means that $0 \le i < n$.
  6. We remark that stabilisation can occur, in principle, even before the last step because of hash collisions in HyperLogLog counters, but this will happen with a controlled probability.
  7. The actual diameter of $G$ is its effective diameter at $\alpha = 1$, albeit the latter is defined for all graphs whereas the former makes sense only in the strongly connected case.
  8. More precisely, it is a sequence of (stochastically dependent) random variables, one for each value of $t$.
  9. If we were to follow strictly the terminology used in this paper, this would be the index of dispersion of the distance distribution, but we guessed that the acronym IDDD would not have been as successful.
  10. It should be remarked that this graph, albeit widely used in the literature, is not a good dataset. The fraction of dangling nodes is impossibly high [23], an almost sure indication that all nodes in the frontier of the crawler (and not only visited nodes) were added to the graph, and the giant component is less than 4% of the whole graph.
  11. We thank Yahoo! for the experimental results on the Flickr graph.
  12. We remark that the “diameter ” claim in [20] about the altavista dataset refers to the effective diameter for the symmetrised version of the graph.

References

  1. The space complexity of approximating the frequency moments.
    Noga Alon, Yossi Matias, and Mario Szegedy. J. Comput. Syst. Sci, 58(1):137–147, 1999.
  2. The WebGraph framework I: Compression techniques.
    Paolo Boldi and Sebastiano Vigna. In Proc. of the Thirteenth International World Wide Web Conference (WWW 2004), pages 595–601, Manhattan, USA, 2004. ACM Press.
  3. Size-estimation framework with applications to transitive closure and reachability.
    Edith Cohen. J. Comput. Syst. Sci., 55:441–453, 1997.
  4. Loglog counting of large cardinalities (extended abstract).
    Marianne Durand and Philippe Flajolet. In Giuseppe Di Battista and Uri Zwick, editors, Algorithms - ESA 2003, 11th Annual European Symposium, Budapest, Hungary, September 16-19, 2003, Proceedings, volume 2832 of Lecture Notes in Computer Science, pages 605–617. Springer, 2003.
  5. Hyperloglog: the analysis of a near-optimal cardinality estimation algorithm.
    Philippe Flajolet, Éric Fusy, Olivier Gandouet, and Frédéric Meunier. In Proceedings of the 13th conference on analysis of algorithm (AofA 07), pages 127–146, 2007.
  6. The dip test of unimodality.
    J. A. Hartigan and P. M. Hartigan. Ann. Statist., 13(1):70–84, 1985.
  7. The Art of Computer Programming. Pre-Fascicle 1A. Draft of Section 7.1.3: Bitwise Tricks and Techniques, 2007.
    Donald E. Knuth.
  8. HADI: Mining radii of large graphs.
    U Kang, Charalampos E. Tsourakakis, Ana Paula Appel, Christos Faloutsos, and Jure Leskovec. ACM Transactions on Knowledge Discovery from Data, 2010.
  9. Pregel: a system for large-scale graph processing.
    Grzegorz Malewicz, Matthew H. Austern, Aart J.C Bik, James C. Dehnert, Ilan Horn, Naty Leiser, and Grzegorz Czajkowski. In SIGMOD ’10: Proceedings of the 2010 international conference on Management of data, pages 135–146, New York, NY, USA, 2010. ACM.
  10. Anf: a fast and scalable tool for data mining in massive graphs.
    Christopher R. Palmer, Phillip B. Gibbons, and Christos Faloutsos. In KDD ’02: Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 81–90, New York, NY, USA, 2002. ACM.
  11. Stanford matrix considered harmful.
    Sebastiano Vigna. In Andreas Frommer, Michael W. Mahoney, and Daniel B. Szyld, editors, Web Information Retrieval and Linear Algebra Algorithms, number 07071 in Dagstuhl Seminar Proceedings, 2007.
  12. Broadword implementation of rank/select queries.
    Sebastiano Vigna. In Catherine C. McGeoch, editor, Experimental Algorithms. 7th International Workshop, WEA 2008, number 5038 in Lecture Notes in Computer Science, pages 154–168. Springer–Verlag, 2008.
  13. Remark: “Proof of the rule for unimodal distributions” [Teor. Veroyatnost. i Mat. Statist. 21 (1979), 23–35].
    D. F. Vysochanskiĭ and Yu. Ī. Petunīn. Teor. Veroyatnost. i Mat. Statist., 27:26–27, 157, 1982.
  14. The space complexity of approximating the frequency moments.
    Noga Alon, Yossi Matias, and Mario Szegedy. J. Comput. Syst. Sci, 58(1):137–147, 1999.
  15. The WebGraph framework I: Compression techniques.
    Paolo Boldi and Sebastiano Vigna. In WWW 2004, pages 595–601, 2004. ACM Press.
  16. Loglog counting of large cardinalities (extended abstract).
    Marianne Durand and Philippe Flajolet. In ESA 2003, pages 605–617. Springer, 2003.
  17. HyperLogLog: the analysis of a near-optimal cardinality estimation algorithm.
    Philippe Flajolet, Éric Fusy, Olivier Gandouet, and Frédéric Meunier. In AofA 07, pages 127–146, 2007.
  18. The dip test of unimodality.
    J. A. Hartigan and P. M. Hartigan. Ann. Statist., 13(1):70–84, 1985.
  19. The Art of Computer Programming. Pre-Fascicle 1A. Draft of Section 7.1.3: Bitwise Tricks and Techniques, 2007.
    Donald E. Knuth.
  20. HADI: Mining radii of large graphs.
    U Kang, Charalampos E. Tsourakakis, Ana Paula Appel, Christos Faloutsos, and Jure Leskovec. ACM Trans. on Knowledge Discovery from Data, 2010.
  21. Pregel: a system for large-scale graph processing.
    Grzegorz Malewicz, Matthew H. Austern, Aart J.C Bik, James C. Dehnert, Ilan Horn, Naty Leiser, and Grzegorz Czajkowski. In SIGMOD ’10, pages 135–146, 2010. ACM.
  22. Anf: a fast and scalable tool for data mining in massive graphs.
    Christopher R. Palmer, Phillip B. Gibbons, and Christos Faloutsos. In KDD ’02, pages 81–90, 2002. ACM.
  23. Stanford matrix considered harmful.
    Sebastiano Vigna. In Web Information Retrieval and Linear Algebra Algorithms, number 07071 in Dagstuhl Seminar Proceedings.
  24. Broadword implementation of rank/select queries.
    Sebastiano Vigna. In WEA 2008, pages 154–168. Springer–Verlag, 2008.
  25. Proof of the rule for unimodal distributions.
    D. F. Vysochanskiĭ and Yu. Ī. Petunīn. Teor. Veroyatnost. i Mat. Statist., 21:23–35, 1979.