A space efficient streaming algorithm for estimating transitivity and triangle counts using the birthday paradox


MADHAV JHA, C. SESHADHRI, and ALI PINAR, Sandia National Laboratories
Abstract

We design a space efficient algorithm that approximates the transitivity (global clustering coefficient) and total triangle count with only a single pass through a graph given as a stream of edges. Our procedure is based on a classic probabilistic result, the birthday paradox. When the transitivity is constant and there are more edges than vertices (common properties for social networks), we can prove that our algorithm requires $O(\sqrt{n})$ space ($n$ is the number of vertices) to provide accurate estimates. We run a detailed set of experiments on a variety of real graphs and demonstrate that the memory requirement of the algorithm is a tiny fraction of the graph. For example, even for a graph with 200 million edges, our algorithm stores just 60,000 edges to give accurate results. Being a single pass streaming algorithm, our procedure also maintains a real-time estimate of the transitivity and the number of triangles of a graph, while storing a minuscule fraction of the edges.

Keywords: triangle counting, streaming graphs, clustering coefficient, transitivity, birthday paradox, streaming algorithms
Categories and Subject Descriptors: E.1 [Data Structures]: Graphs and Networks; F.2.2 [Nonnumerical Algorithms and Problems]: Computations on Discrete Structures; G.2.2 [Graph Theory]: Graph Algorithms; H.2.8 [Database Applications]: Data Mining

General Terms: Algorithms, Theory


This manuscript is an extended version of [Jha et al. (2013)].
This work was funded by the GRAPHS program under DARPA, the Complex Interconnected Distributed Systems (CIDS) program and the Applied Mathematics Research program of the DOE, and Sandia's Laboratory Directed Research & Development (LDRD) program. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

1 Introduction

Triangles are one of the most important motifs in real world networks. Whether the networks come from social interaction, computer communications, financial transactions, proteins, or ecology, the abundance of triangles is pervasive, and this abundance is a critical feature that distinguishes real graphs from random graphs. There is a rich body of literature on analysis of triangles and counting algorithms. Social scientists use triangle counts to understand graphs [Coleman (1988), Portes (1998), Burt (2004), Welles et al. (2010)]; graph mining applications such as spam detection and finding common topics on the WWW use triangle counts [Eckmann and Moses (2002), Becchetti et al. (2008)]; motif detection in bioinformatics often counts the frequency of triadic patterns [Milo et al. (2002)]. The distribution of degree-wise clustering coefficients was used as the driving force for a new generative model, the Blocked Two-Level Erdős-Rényi model [Seshadhri et al. (2012)]. Durak et al. observed that the relationships among the degrees of triangle vertices can be a descriptor of the underlying graph [Durak et al. (2012)]. Nevertheless, counting triangles continues to be a challenge due to the sheer size of the graphs (easily on the order of billions of edges).

(a) Transitivity
(b) Triangle count
Figure 1: Real-time tracking of the number of triangles and the transitivity on cit-Patents (16M edges), storing only 100K edges from the past.

Many massive graphs come from modeling interactions in a dynamic system. People call each other on the phone, exchange emails, or co-author a paper; computers exchange messages; animals come in the vicinity of each other; companies trade with each other. These interactions manifest as a stream of edges. The edges appear with timestamps, or “one at a time.” The network (graph) that represents the system is an accumulation of the observed edges. There are many methods to deal with such massive graphs, such as random sampling [Schank and Wagner (2005a), Tsourakakis et al. (2009b), Seshadhri et al. (2013a)], MapReduce paradigm [Suri and Vassilvitskii (2011), Plantenga (2012)], distributed-memory parallelism [Arifuzzaman et al. (2012), Chakrabarti et al. (2011)], adopting external memory [Chiang et al. (1995), Arge et al. (2010)], and multithreaded parallelism [Berry et al. (2007)].

All of these methods, however, need to store at least a large fraction of the data. On the other hand, a small space streaming algorithm maintains a very small, randomly chosen set of edges, called the "sketch", and updates this sample as edges appear. Based on the sketch and some auxiliary data structures, the algorithm computes an accurate estimate of the number of triangles in the graph seen so far. The sketch size is orders of magnitude smaller than the total graph. Furthermore, it can be updated rapidly when new edges arrive, and hence maintains a real-time estimate of the number of triangles. We also want a single pass algorithm, so it only observes each edge once (think of it as making a single scan of a log file). The algorithm cannot revisit edges that it has forgotten.

1.1 The streaming setting

Let $G$ be a simple undirected graph with $n$ vertices and $m$ edges. Let $T$ denote the number of triangles in the graph and $W$ the number of wedges, where a wedge is a path of length $2$. A common measure is the transitivity $\kappa = 3T/W$ [Wasserman and Faust (1994)], a measure of how often friends of friends are also friends. (This is also called the global clustering coefficient.)
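To pin down these definitions, here is a tiny self-contained Python example (ours, not from the paper) that computes $W$, $T$, and $\kappa$ exactly for a toy graph:

from collections import defaultdict
from itertools import combinations

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]   # a triangle plus a pendant edge
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v); adj[v].add(u)

# A vertex of degree d is the center of d*(d-1)/2 wedges.
W = sum(len(nb) * (len(nb) - 1) // 2 for nb in adj.values())
# Count triangles by checking all vertex triples (fine for a toy graph).
T = sum(1 for a, b, c in combinations(sorted(adj), 3)
        if b in adj[a] and c in adj[a] and c in adj[b])
kappa = 3 * T / W
print(W, T, kappa)   # 5 wedges, 1 triangle, kappa = 0.6

Here the single triangle $\{0,1,2\}$ contributes three closed wedges out of five total, so $\kappa = 3/5$.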

A single pass streaming algorithm is defined as follows. Consider a sequence of distinct edges $e_1, e_2, \ldots, e_m$. Let $G_t$ be the graph at time $t$, formed by the edge set $\{e_1, e_2, \ldots, e_t\}$. The stream of edges can be considered as a sequence of edge insertions into the graph. Vertex insertions can be handled trivially: we do not know the number of vertices ahead of time and simply see each edge as a pair of vertex labels, so new vertices are implicitly added as new labels. There is no assumption on the order of edges in the stream. Edges incident to a single vertex do not necessarily appear together.

In this paper, we do not consider edge/vertex deletions or repeated edges. In that sense, this is a simplified version of the full-blown streaming model. Nonetheless, the edge insertion model on simple graphs is the standard for previous work on counting triangles [Bar-Yossef et al. (2002), Jowhari and Ghodsi (2005), Buriol et al. (2006), Ahn et al. (2012), Kane et al. (2012)].

A streaming algorithm has a small memory $\mathcal{M}$ and sees the edges in stream order. At each edge $e_t$, the algorithm can choose to update data structures in $\mathcal{M}$ (using the edge $e_t$). Then the algorithm proceeds to $e_{t+1}$, and so on. The algorithm is never allowed to see an edge that has already passed by. The memory $\mathcal{M}$ is much smaller than $m$, so the algorithm keeps a small "sketch" of the edges it has seen. The aim is to estimate the number of triangles in $G_m$ at the end of the stream. Usually, we desire the more stringent guarantee of maintaining a running estimate of the number of triangles and the transitivity of $G_t$ at each time $t$. We denote these quantities by $T_t$ and $\kappa_t$, respectively.

1.2 Results

We present a single pass, $O(m/\sqrt{T})$-space algorithm to provably estimate the transitivity (with arbitrary additive error) in a streaming graph. Streaming algorithms for counting triangles or computing the transitivity have been studied before, but no previous algorithm attains this space guarantee. Buriol et al. [Buriol et al. (2006)] give a single pass algorithm with a stronger relative error guarantee that requires $O(mn/T)$ space. We discuss this in more detail later.

Although our theoretical result is interesting asymptotically, the constant factors and the dependence on the error in our bound are large. Our main result is a practical streaming algorithm (based on the theoretical one) for computing $\kappa_t$ and $T_t$, using additional probabilistic heuristics. We perform an extensive empirical analysis of our algorithm on a variety of datasets from SNAP [SNAP (2013)]. The salient features of our algorithm are:

  • Theoretical basis: Our algorithm is based on the classic birthday paradox: if we choose 23 random people, the probability that two of them share a birthday is at least $1/2$ (Chap. II.3 of [Feller (1968)]). We extend this analysis to sampling wedges in a large pool of edges. The final streaming algorithm is designed by combining reservoir sampling with wedge sampling [Seshadhri et al. (2013a)] for estimating $\kappa$. We prove a space bound of $O(m/\sqrt{T})$, which we show is $O(\sqrt{n})$ under common conditions for social networks. In general, the number of triangles $T$ is fairly large for many real-world graphs, and this is what gives the space advantage.

    While our theory appears to be a good guide in designing the algorithm and explaining its behavior, it should not be used to actually decide space bounds in practice. For graphs where $T$ is small, our algorithm does not provide good guarantees with small space (since $m/\sqrt{T}$ is large).

  • Accuracy and scalability with small sketches: We test our algorithm on a variety of graphs from different sources. In all instances, we get accurate estimates for $\kappa$ and $T$ by storing at most 40K edges. This holds even for graphs where $m$ is in the order of millions. Our relative errors on $\kappa$ and the number of triangles are mostly less than 5% (in a graph with very few triangles, our triangle count estimate has a relative error of 12%). Our algorithm can process extremely large graphs. Our experiments include a run on a streamed Orkut social network with 200M edges (by storing only 40K edges, relative errors are at most 5%). We get similar results on streamed Flickr and LiveJournal graphs with tens of millions of edges.

    We run detailed experiments on some test graphs (with 1-3 million edges) with varying parameters to show convergence of our algorithm. Comparisons with previous work [Buriol et al. (2006)] show that our algorithm gets within 5% of the true answer, while the previous algorithm is off by more than 50%.

  • Real-time tracking: For a temporal graph, our algorithm precisely tracks both $\kappa_t$ and $T_t$ with little storage. By storing 60K edges of the past, we can track this information for a patent citation network with 16 million edges [SNAP (2013)]. Refer to Fig. 1. We maintain a real-time estimate of both the transitivity and the number of triangles with a single pass, storing less than 1% of the graph. We see some fluctuations in the transitivity estimate due to the randomness of the algorithm, but the overall tracking is consistently accurate.

1.3 Previous work

Enumeration of all triangles is a well-studied problem [Chiba and Nishizeki (1985), Schank and Wagner (2005b), Latapy (2008), Berry et al. (2011), Chu and Cheng (2011)]. Recent work by Cohen [Cohen (2009)], Suri and Vassilvitskii [Suri and Vassilvitskii (2011)], and Arifuzzaman et al. [Arifuzzaman et al. (2012)] gives massively parallel implementations of these algorithms. Eigenvalue/trace based methods have also been used [Tsourakakis (2008), Avron (2010)] to compute estimates of the total and per-degree number of triangles.

Tsourakakis et al. [Tsourakakis et al. (2009a)] started the use of sparsification methods, the most important of which is Doulion [Tsourakakis et al. (2009b)]. Various analyses of this algorithm (and its variants) have been proposed [Kolountzakis et al. (2010), Tsourakakis et al. (2011), Yoon and Kim (2011), Pagh and Tsourakakis (2012)]. Algorithms based on wedge-sampling provide provable accurate estimations on various triadic measures on graphs [Schank and Wagner (2005a), Seshadhri et al. (2013a)]. Wedge sampling techniques have also been applied to directed graphs [Seshadhri et al. (2013b)] and implemented with MapReduce [Kolda et al. (2013)].

Theoretical streaming algorithms for counting triangles were initiated by Bar-Yossef et al. [Bar-Yossef et al. (2002)]. Subsequent improvements were given in [Jowhari and Ghodsi (2005), Buriol et al. (2006), Ahn et al. (2012), Kane et al. (2012)]. The space bounds achieved are of the form $O(mn/T)$. Note that $mn/T \geq m/\sqrt{T}$ whenever $T \leq n^2$ (which is a reasonable assumption for sparse graphs). These algorithms are rarely practical, since $mn/T$ is often quite large for real graphs. Some multi-pass streaming algorithms give stronger guarantees, but we will not discuss them here.

Buriol et al. [Buriol et al. (2006)] give an implementation of their algorithm. For almost all of their experiments on graphs, with storage of 100K edges, they get fairly large errors (always more than 10%, and often more than 50%). Buriol et al. also provide an implementation in the incidence list setting, where all neighbors of a vertex arrive together. In this case, their algorithm is quite practical, since the errors are quite small. Our algorithm scales to sizes (100 million edges) larger than their experiments. We get better accuracy with far less storage, without any assumption on the ordering of the data stream. Furthermore, our algorithm performs accurate real-time tracking.

Becchetti et al. [Becchetti et al. (2008)] gave a semi-streaming algorithm for counting the triangles incident to every vertex. Their algorithm uses clever methods to approximate Jaccard similarities, and requires multiple passes over the data. Ahmed et al. studied sampling a subgraph from a stream of edges that preserves multiple properties of the original graph [Ahmed et al.]. Our earlier results on triadic measures were presented in [Jha et al. (2013)]. More recently, Pavan et al. [Pavan et al. (2013)] introduced an approach called neighborhood sampling for estimating triangle counts, which gives a single pass streaming algorithm with space bound $O(m\Delta/T)$, where $\Delta$ is the maximum degree of the graph. Their implementation is practical and achieves good accuracy on the lines of our practical implementation. [Tangwongsan et al. (2013)] explores a parallel implementation of [Pavan et al. (2013)]. (As a minor comment, our algorithm gets good results by storing less than 80K edges, while [Pavan et al. (2013)] only shows comparable results when storing 128K "estimators", each of which stores at least an edge.)

1.4 Outline

A high-level description of our practical algorithm Streaming-Triangles is presented in §2. We start with the intuition behind the algorithm, followed by a detailed description of the implementation. §3 provides a theoretical analysis for an idealized variant called Single-Bit. We stress that Single-Bit is a thought experiment to highlight the theoretical aspects of our result, and we do not actually implement it. Nevertheless, this algorithm forms the basis of the practical algorithm, and in §3.3, we explain the heuristics used to get Streaming-Triangles. §3.2 gives an in-depth mathematical analysis of Single-Bit.

In §4, we give various empirical results of our runs of Streaming-Triangles on real graphs. We show that naïve implementations based on Single-Bit perform poorly in practice, and we need our heuristics to get a practical algorithm.

2 The Main Algorithm

2.1 Intuition for the algorithm

The starting point for our algorithm is the idea of wedge sampling to estimate the transitivity $\kappa$ [Seshadhri et al. (2013a)]. A wedge is closed if it participates in a triangle and open otherwise. Note that $\kappa$ is exactly the probability that a uniform random wedge is closed. This leads to a simple randomized algorithm for estimating $\kappa$ (and $T$): generate a set of (independent) uniform random wedges and find the fraction that are closed. But how do we sample wedges from a stream of edges?
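As a point of reference, here is a minimal non-streaming sketch of wedge sampling over a fully stored graph, in the spirit of [Seshadhri et al. (2013a)]; the code, its names, and the default sample size k are our illustration, not the paper's implementation:

import random
from collections import defaultdict

def wedge_sampling_kappa(edges, k=10000):
    # Build adjacency sets for the (small, fully stored) graph.
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # A vertex of degree d is the center of d*(d-1)/2 wedges, so sample
    # wedge centers proportionally to that count.
    centers = [v for v in adj if len(adj[v]) >= 2]
    weights = [len(adj[v]) * (len(adj[v]) - 1) // 2 for v in centers]
    closed = 0
    for _ in range(k):
        c = random.choices(centers, weights=weights)[0]
        a, b = random.sample(list(adj[c]), 2)   # uniform wedge centered at c
        if b in adj[a]:                          # the closing edge exists
            closed += 1
    return closed / k                            # estimates kappa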

Suppose we just sampled a uniform random set of edges. How large does this set need to be to get a wedge? The birthday paradox can be used to deduce that (as long as $W \geq m$, which holds for a great majority, if not all, of real networks) $O(\sqrt{m})$ edges suffice. A more sophisticated result, given in Lem. 3.2, provides (weak) concentration bounds on the number of wedges generated by a random set of edges. A "small" number of uniform random edges can give enough wedges to perform wedge sampling (which in turn is used to estimate $\kappa$).

A set of uniform random edges can be maintained by reservoir sampling [Vitter (1985)]. From these edges, we generate a random wedge by doing a second level of reservoir sampling. This process implicitly treats the wedges created in the edge reservoir as a stream, and performs reservoir sampling on that. Overall, this method approximates uniform random wedge sampling.

As we maintain our reservoir wedges, we check whether future edges in the stream close them. But some closed wedges cannot be verified this way, because the closing edge may have already appeared in the past. A simple observation used by past streaming algorithms saves the day [Jowhari and Ghodsi (2005), Buriol et al. (2006)]: in each triangle, there is exactly one wedge whose closing edge appears in the future (the wedge formed by the triangle's first two edges). So we estimate the fraction of these "future-closed" wedges, which is exactly one-third of the fraction of closed wedges.

Finally, to estimate $T$ from $\kappa$, we need an estimate of the total number of wedges $W$. This can be obtained by reverse engineering the birthday paradox: given the number of wedges formed by our reservoir of sampled edges, we can estimate $W$ (again, using the workhorse Lem. 3.2).
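A minimal sketch of this reverse engineering, assuming we hold a sample of roughly i.i.d. uniform edges and know the stream length $m$ (the function name and interface are hypothetical):

from collections import defaultdict

def estimate_total_wedges(sample_edges, m):
    # Count the wedges formed inside the sample itself.
    deg = defaultdict(int)
    for u, v in sample_edges:
        deg[u] += 1
        deg[v] += 1
    w = sum(d * (d - 1) // 2 for d in deg.values())
    # Lem. 3.2 gives E[w] = M(M-1) * W / m^2 for M i.i.d. uniform edges,
    # so we solve for W (assumes M >= 2).
    M = len(sample_edges)
    return w * m * m / (M * (M - 1))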

2.2 The procedure Streaming-Triangles

The streaming algorithm maintains two primary data structures: the edge reservoir and the wedge reservoir. The edge reservoir maintains a uniform random sample of the edges observed so far. The wedge reservoir aims to select a uniform sample of wedges. Specifically, it maintains a uniform sample of the wedges created by the edge reservoir at any step of the process. (The wedge reservoir may include wedges whose edges are no longer in the edge reservoir.) The two parameters for the streaming algorithm are $s_e$ and $s_w$, the sizes of the edge and wedge pools, respectively. The main algorithm is described in Streaming-Triangles, although most of the technical computation is performed in Update, which is invoked every time a new edge appears.

After edge $e_t$ is processed by Update, the algorithm computes running estimates for $\kappa_t$ and $T_t$. These values do not have to be stored, so they are immediately output. We describe the main data structures of the algorithm Streaming-Triangles.

  • Array edge_res[1..$s_e$]: This is the array of reservoir edges, the subsample of the stream that is maintained.

  • New wedges $\mathcal{N}_t$: This is a list of all wedges involving $e_t$ that are formed only by edges in edge_res. This may often be empty, if $e_t$ is not added to edge_res. We do not necessarily maintain this list explicitly, and we discuss implementation details later.

  • Variable tot_wedges: This is the total number of wedges formed by edges in the current edge_res.

  • Array wedge_res[1..$s_w$]: This is an array of reservoir wedges of size $s_w$.

  • Array isClosed[1..$s_w$]: This is a boolean array. We set isClosed[$i$] to true if wedge wedge_res[$i$] is detected as closed.

On seeing edge $e_t$, Streaming-Triangles updates the data structures. The estimates $\kappa_t$ and $T_t$ are computed using the fraction of true bits in isClosed and the variable tot_wedges.

1. Initialize edge_res of size $s_e$ and wedge_res of size $s_w$.
2. For each edge $e_t$ in the stream:
3.   Call Update($e_t$).
4.   Let $\rho_t$ be the fraction of entries in isClosed set to true.
5.   Set $\kappa_t = 3\rho_t$.
6.   Set $T_t = [\rho_t\, t^2/(s_e(s_e-1))] \times \text{tot\_wedges}$.
Algorithm 1 Streaming-Triangles($s_e$, $s_w$)

Update is where all the work happens, since it processes each edge as it arrives. Steps 1-2 determine all the wedges in the wedge reservoir that are closed by $e_t$ and update isClosed accordingly. In Steps 3-4, we perform reservoir sampling on edge_res. This involves replacing each entry by $e_t$ with probability $1/t$. The remaining steps are executed iff this leads to any changes in edge_res. We update tot_wedges and determine the new wedges $\mathcal{N}_t$ (Steps 6-7). Finally, in Steps 8-11, we perform reservoir sampling on wedge_res, where each entry is randomly replaced with some wedge in $\mathcal{N}_t$. Note that we may remove wedges that have already been detected as closed.

1. For $i = 1, \ldots, s_w$:
2.   If wedge_res[$i$] is closed by $e_t$, set isClosed[$i$] = true.
3. For $i = 1, \ldots, s_e$:
4.   Pick a random number $x$ in $[0,1]$; if $x \leq 1/t$, set edge_res[$i$] = $e_t$.
5. If there were any updates of edge_res:
6.   Update tot_wedges, the number of wedges formed by edges in the current edge_res.
7.   Determine $\mathcal{N}_t$ (the wedges involving $e_t$) and set new_wedges $= |\mathcal{N}_t|$.
8.   For $i = 1, \ldots, s_w$:
9.     Pick a random number $x$ in $[0,1]$.
10.    If $x \leq \text{new\_wedges}/\text{tot\_wedges}$:
11.      Pick a uniform random $w \in \mathcal{N}_t$; set wedge_res[$i$] = $w$ and isClosed[$i$] = false.
Algorithm 2 Update($e_t = (u,v)$)
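The following compact Python sketch mirrors Algorithms 1-2 step by step. It is our own unoptimized rendering (the paper's implementation is in C++): in particular, it recomputes tot_wedges from scratch whenever the edge reservoir changes, instead of the incremental subgraph bookkeeping described in §2.3.

import random
from collections import defaultdict

def streaming_triangles(stream, s_e, s_w):
    # Assumes s_e >= 2; wedges are stored as (a, c, b) with center c.
    edge_res = []
    wedge_res = [None] * s_w
    is_closed = [False] * s_w
    tot_wedges = 0
    for t, (u, v) in enumerate(stream, start=1):
        # Steps 1-2: does e_t close any reservoir wedge?
        for i, wedge in enumerate(wedge_res):
            if wedge is not None and {u, v} == {wedge[0], wedge[2]}:
                is_closed[i] = True
        # Steps 3-4: reservoir sampling on edge_res
        # (empty slots are filled first; then each slot flips with prob 1/t).
        updated = False
        if len(edge_res) < s_e:
            edge_res.append((u, v)); updated = True
        else:
            for i in range(s_e):
                if random.random() <= 1.0 / t:
                    edge_res[i] = (u, v); updated = True
        if updated:
            # Step 6: recount wedges formed by the current reservoir.
            adj = defaultdict(set)
            for a, b in edge_res:
                adj[a].add(b); adj[b].add(a)
            tot_wedges = sum(len(nb) * (len(nb) - 1) // 2
                             for nb in adj.values())
            # Step 7: new wedges involving e_t = (u, v).
            N_t = [(x, u, v) for x in adj[u] if x != v] + \
                  [(u, v, y) for y in adj[v] if y != u]
            # Steps 8-11: reservoir sampling on wedge_res.
            if N_t and tot_wedges > 0:
                for i in range(s_w):
                    if random.random() <= len(N_t) / tot_wedges:
                        wedge_res[i] = random.choice(N_t)
                        is_closed[i] = False
        # Running estimates (Algorithm 1, Steps 4-6).
        rho = sum(is_closed) / s_w
        kappa_t = 3.0 * rho
        T_t = rho * t * t / (s_e * (s_e - 1)) * tot_wedges
        yield t, kappa_t, T_t

Iterating streaming_triangles(stream, 20000, 20000) yields the running estimates $(\kappa_t, T_t)$; early in the stream, before the reservoirs fill, the estimates are naturally noisy.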

2.3 Implementation details

Computing $\kappa_t$ and $T_t$ is simple and requires no overhead. We maintain edge_res as a time-varying subgraph: each time edge_res is updated, the subgraph undergoes an edge insertion and an edge deletion. Suppose $e_t = (u,v)$. Wedges in $\mathcal{N}_t$ are given by the neighbors of $u$ and $v$ in this subgraph. With random access to the neighbor lists of $u$ and $v$, we can generate a random wedge of $\mathcal{N}_t$ efficiently.

Updates to the edge reservoir are very infrequent. At time $t$, the probability of an update is at most $s_e/t$. By linearity of expectation, the total number of times that edge_res is updated is at most

$$\sum_{t \leq m} \min(1, s_e/t) \leq s_e\big(1 + \ln(m/s_e)\big).$$

For a fixed $s_e$, this grows very slowly with $m$. So for most steps, we neither update edge_res nor sample a new wedge.
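To get a sense of scale, a quick back-of-the-envelope computation with example values of our choosing (not the paper's):

import math

# Assumed example values: reservoir of 20K edges, stream of 200M edges.
s_e, m = 20_000, 200_000_000
expected_updates = s_e * (1 + math.log(m / s_e))
print(round(expected_updates))  # ~204K reservoir updates over the whole stream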

The total number of edges that are stored from the past is at most $s_e + 2s_w$. The edge reservoir explicitly stores $s_e$ edges, and at most $2s_w$ edges are implicitly stored by the wedge reservoir (to check closure). Regardless of the implementation, the extra data structure overhead is at most twice the storage parameters $s_e$ and $s_w$. Since these are at least orders of magnitude smaller than the graph, this overhead is affordable.

3 The idealized algorithm Single-Bit

3.1 Description of the Algorithm

The algorithm Single-Bit is an idealized variant of Streaming-Triangles that we can formally analyze. It requires more memory and expensive updates, but explains the basic principles behind our algorithm. We later give the memory reducing heuristics that take us from Single-Bit to Streaming-Triangles.

The procedure Single-Bit outputs a single (random) bit $b_t$ at each time step $t$. The expectation of this bit is related to the transitivity. Single-Bit maintains a set of reservoir edges of fixed size. We use $\mathcal{E}_t$ to denote the reservoir at time $t$; abusing notation, the size is just denoted by $M$, since it is independent of $t$. The set of wedges constructed from $\mathcal{E}_t$ is $\mathcal{W}_t$. Formally, $\mathcal{W}_t$ is the set of wedges both of whose edges lie in $\mathcal{E}_t$. Single-Bit maintains a set $\mathcal{C}_t \subseteq \mathcal{W}_t$, the set of wedges in $\mathcal{W}_t$ for which it has detected a closing edge. Note that this is a subset of all closed wedges in $\mathcal{W}_t$. This set is easy to update as $\mathcal{E}_t$ changes.

1. For each $e_t$ in the stream:
2.   For each edge in $\mathcal{E}_{t-1}$, replace it independently by $e_t$ with probability $1/t$. This yields $\mathcal{E}_t$.
3.   Construct the set of wedges $\mathcal{W}_t$ formed by pairs of edges in $\mathcal{E}_t$.
4.   Denoting by $\mathcal{C}'_t$ the set of all wedges in $\mathcal{W}_t$ closed by $e_t$, update $\mathcal{C}_t = (\mathcal{C}_{t-1} \cap \mathcal{W}_t) \cup \mathcal{C}'_t$.
5.   If $\mathcal{W}_t$ is empty, output $b_t = 0$.
6.   Else pick a uniform random wedge in $\mathcal{W}_t$; output $b_t = 1$ if this wedge is in $\mathcal{C}_t$ and $b_t = 0$ otherwise.
Algorithm 3 Single-Bit
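For concreteness, here is a direct (and deliberately inefficient) Python simulation of Single-Bit; it is our illustration, and it rebuilds the wedge set $\mathcal{W}_t$ from the reservoir at every step, which is exactly the bookkeeping cost that Streaming-Triangles avoids (§3.3):

import random
from itertools import combinations

def single_bit(stream, M):
    E = []        # edge reservoir (empty slots fill first, then Algorithm 3)
    C = set()     # wedges with a detected closing edge
    for t, (u, v) in enumerate(stream, start=1):
        if len(E) < M:
            E.append((u, v))
        else:
            for i in range(M):
                if random.random() <= 1.0 / t:
                    E[i] = (u, v)
        # Build W_t: pairs of distinct reservoir edges sharing one endpoint.
        W = set()
        for (a, b), (c, d) in combinations(set(E), 2):
            shared = {a, b} & {c, d}
            if len(shared) == 1:
                center = shared.pop()
                ends = frozenset(({a, b} | {c, d}) - {center})
                W.add((center, ends))
        # Keep detected-closed wedges that survive; add those e_t closes.
        C = {w for w in C if w in W}
        C |= {w for w in W if w[1] == frozenset((u, v))}
        if not W:
            yield 0
        else:
            wedge = random.choice(list(W))   # uniform wedge from W_t
            yield 1 if wedge in C else 0

Averaging the final bit over many independent runs estimates $\kappa/3$; the practical algorithm replaces these independent runs with a single shared wedge reservoir.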

For convenience, we state our theorem for the final time step. However, it also holds (with an identical proof) for any large enough time $t$. It basically argues that the expectation of $b_m$ is almost $\kappa/3$. Furthermore, $|\mathcal{W}_m|$ can be used to estimate $W$.

{theorem}

Assume $W \geq m$ and fix $\varepsilon \in (0, 1/2)$. Suppose $M \geq c\,\varepsilon^{-2}\,m/\sqrt{T}$, for some sufficiently large constant $c$. Set $\hat{W} = \frac{m^2}{M(M-1)}\,|\mathcal{W}_m|$. Then $|\mathbb{E}[b_m] - \kappa/3| \leq \varepsilon$ and with probability at least $1-\varepsilon$, $|\hat{W} - W| \leq \varepsilon W$.

The memory requirement of this algorithm is determined by $M$, which we can take to be $O(m/\sqrt{T})$ (for constant $\varepsilon$). We can show that $m/\sqrt{T} = O(\sqrt{n})$ (usually much smaller for heavy-tailed graphs) when $m \geq n$ and $\kappa$ is a constant. Denote the degree of vertex $v$ by $d_v$. We can bound $W = \sum_v \binom{d_v}{2} = \frac{1}{2}\sum_v d_v^2 - m$, so it suffices to lower bound $\sum_v d_v^2$. By $\sum_v d_v = 2m$ and the Cauchy-Schwarz inequality,

$$\sum_v d_v^2 \geq \Big(\sum_v d_v\Big)^2\!/\,n = 4m^2/n.$$

Using the above bound, we get $W \geq 2m^2/n - m \geq m^2/n$ (since $m \geq n$), and hence $m/\sqrt{W} \leq \sqrt{n}$. Since $T = \kappa W/3$, when $m \geq n$ and the transitivity $\kappa$ is a constant (both reasonable assumptions for social networks), we require only $O(\sqrt{n})$ space.

3.2 Analysis of the algorithm

The aim of this section is to prove Thm. 3.1. We begin with some preliminaries. First, the set $\mathcal{E}_t$ consists of $M$ uniform i.i.d. samples from $\{e_1, \ldots, e_t\}$, a direct consequence of reservoir sampling. Next, we define future-closed wedges. Take the final graph $G_m$ and label all edges with their timestamps. For each triangle, the wedge formed by the two earliest edges is a future-closed wedge. In other words, if a triangle has edges $e_{t_1}, e_{t_2}, e_{t_3}$ ($t_1 < t_2 < t_3$), then the wedge $\{e_{t_1}, e_{t_2}\}$ is future-closed. The number of future-closed wedges is exactly $T$, since each triangle contains exactly one such wedge. We have a simple yet important claim about Single-Bit.

Claim 1

The set $\mathcal{C}_m$ is exactly the set of future-closed wedges in $\mathcal{W}_m$.

{proof}

Consider a wedge in $\mathcal{W}_m$ with edges $e_a$ and $e_b$ ($a < b$). Since reservoir edges are only ever overwritten and never re-added, both edges have remained in the reservoir since their arrival; hence this wedge was formed at time $b$ and remains in all $\mathcal{W}_s$ for $b \leq s \leq m$. If this wedge is future-closed (say by edge $e_s$, for $s > b$), then at time $s$, the wedge will be detected to be closed. Since this information is maintained by Single-Bit, the wedge will be in $\mathcal{C}_m$. If the wedge is not future-closed, then no closing edge will be found for it after time $b$. Hence, it will not be in $\mathcal{C}_m$.

The main technical effort goes into showing that $|\mathcal{W}_m|$, the number of wedges formed by edges in $\mathcal{E}_m$, can be used to determine the total number of wedges $W$. Furthermore, the number of future-closed wedges formed by $\mathcal{E}_m$ (precisely $|\mathcal{C}_m|$, by Claim 1) can be used to estimate $T$.

This is formally expressed in the next lemma. Roughly, if $|\mathcal{E}_m| = M$, then we expect $\mathcal{E}_m$ to form about $(M/m)^2\,W$ wedges. We also get weak concentration bounds for this quantity. A similar bound (with somewhat weaker concentration) holds even when we consider the set of future-closed wedges.

{lemma}

[Birthday paradox for wedges] Let $G$ be a graph with $m$ edges and let $\mathcal{W}^*$ be a fixed subset of wedges in $G$. Let $R$ be a (multi)set of $r$ i.i.d. uniform random edges from $G$. Let $w^*$ be the random variable denoting the number of wedges in $\mathcal{W}^*$ formed by pairs of edges in $R$.

  1. $\mathbb{E}[w^*] = \binom{r}{2}\cdot 2|\mathcal{W}^*|/m^2$.

  2. Let $\gamma \leq 1$ be a parameter and $c$ be a sufficiently large constant. Assume $|\mathcal{W}^*| \geq m$. If $r \geq c\,m/(\gamma^2\sqrt{|\mathcal{W}^*|})$, then with probability at least $1-\gamma$, $|w^* - \mathbb{E}[w^*]| \leq \gamma\,\mathbb{E}[w^*]$.
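Part 1 is easy to sanity-check numerically. The following self-contained simulation (our illustration; the small random graph and all parameter values are arbitrary) compares the empirical mean of $w^*$ against $\binom{r}{2}\cdot 2|\mathcal{W}^*|/m^2$, with $\mathcal{W}^*$ taken to be all wedges:

import random
from itertools import combinations

random.seed(7)
n, m, r, trials = 60, 300, 40, 2000
edges = random.sample(list(combinations(range(n), 2)), m)

def wedge_pairs(sample):
    # Count index pairs i < j whose edges share exactly one endpoint.
    return sum(1 for (a, b), (c, d) in combinations(sample, 2)
               if len({a, b} & {c, d}) == 1)

deg = [0] * n
for u, v in edges:
    deg[u] += 1; deg[v] += 1
W = sum(d * (d - 1) // 2 for d in deg)   # |W*| = all wedges of G

empirical = sum(wedge_pairs([random.choice(edges) for _ in range(r)])
                for _ in range(trials)) / trials
print(empirical, r * (r - 1) * W / m**2)  # the two values should be close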

Using this lemma, we can prove Thm. 3.1. We first give a sketch of the proof; later we will formalize our claims. At the end of the stream, the output bit is $1$ iff $\mathcal{W}_m$ is nonempty and a wedge from $\mathcal{C}_m$ is sampled. Note that both $\mathcal{W}_m$ and $\mathcal{C}_m$ are random variables.

To deal with the first event, we apply Lem. 3.2 with $\mathcal{W}^*$ being the set of all wedges. So, $\mathbb{E}[|\mathcal{W}_m|] = \binom{M}{2}\cdot 2W/m^2$. If $M \geq c\,\varepsilon^{-2}m/\sqrt{T}$, then $\mathbb{E}[|\mathcal{W}_m|] = \Omega(c^2\varepsilon^{-4})$ (a large enough number). Intuitively, the probability that $\mathcal{W}_m$ is empty is very small, and this can be bounded using the concentration bound of Lem. 3.2.

Now, suppose that $\mathcal{W}_m$ is nonempty. The probability that the sampled wedge is in $\mathcal{C}_m$ (which is the probability that $b_m = 1$) is exactly the fraction $|\mathcal{C}_m|/|\mathcal{W}_m|$. Suppose we could approximate this by $\mathbb{E}[|\mathcal{C}_m|]/\mathbb{E}[|\mathcal{W}_m|]$. By Claim 1, $\mathcal{C}_m$ is the set of future-closed wedges formed by the reservoir, and there are exactly $T$ future-closed wedges in total, so Lem. 3.2 tells us that $\mathbb{E}[|\mathcal{C}_m|] = \binom{M}{2}\cdot 2T/m^2$. Hence, $\mathbb{E}[|\mathcal{C}_m|]/\mathbb{E}[|\mathcal{W}_m|] = T/W = \kappa/3$.

In general, the value of $|\mathcal{C}_m|/|\mathcal{W}_m|$ might be different from $\mathbb{E}[|\mathcal{C}_m|]/\mathbb{E}[|\mathcal{W}_m|]$. But $|\mathcal{C}_m|$ and $|\mathcal{W}_m|$ are reasonably concentrated (by the second part of Lem. 3.2), so we can argue that this difference is small.

Proof of Thm. 3.1: As mentioned in the proof sketch, the output bit is $1$ iff $\mathcal{W}_m$ is nonempty and a wedge from $\mathcal{C}_m$ is sampled. For convenience, we will use $w$ for the total number of wedges formed by edges in $\mathcal{E}_m$, and we will use $c$ for the number of future-closed wedges formed by edges in $\mathcal{E}_m$. Both $w$ and $c$ are random variables that depend on $\mathcal{E}_m$. We apply Lem. 3.2 to understand the behavior of $w$ and $c$. Let $\gamma = \varepsilon/8$ ($\varepsilon$ is the parameter in the original Thm. 3.1).

Claim 2

$\mathbb{E}[w] = \binom{M}{2}\cdot 2W/m^2$. With probability at least $1-\gamma$, $|w - \mathbb{E}[w]| \leq \gamma\,\mathbb{E}[w]$.

Analogously, $\mathbb{E}[c] = \binom{M}{2}\cdot 2T/m^2$. With probability at least $1-\gamma$, $|c - \mathbb{E}[c]| \leq \gamma\,\mathbb{E}[c]$.

{proof}

First, we deal with $w$. In Lem. 3.2, let the set $\mathcal{W}^*$ be the entire set of wedges. The random variable $w^*$ of the lemma is exactly $w$, and the size of $\mathcal{W}^*$ is $W$. So $\mathbb{E}[w] = \binom{M}{2}\cdot 2W/m^2$. We set the parameter in the second part of Lem. 3.2 to be $\gamma$. By the premise of Thm. 3.1, $M \geq c\,\varepsilon^{-2}m/\sqrt{T} \geq c\,\varepsilon^{-2}m/\sqrt{W}$. Moreover, for a large enough constant $c$, the latter is at least $c'\,m/(\gamma^2\sqrt{W})$. We can apply the second part of Lem. 3.2 to derive the weak concentration of $w$.

For $c$, we apply Lem. 3.2 with $\mathcal{W}^*$ the set of future-closed wedges. These are exactly $T$ in number. An argument identical to the one above completes the proof. \qed

This suffices to prove the second part of Thm. 3.1. We multiply the inequality $|w - \mathbb{E}[w]| \leq \gamma\,\mathbb{E}[w]$ by $m^2/(M(M-1))$, and note that the estimate is $\hat{W} = m^2 w/(M(M-1))$, with $\mathbb{E}[\hat{W}] = W$. Hence, with probability at least $1-\gamma \geq 1-\varepsilon$, $|\hat{W} - W| \leq \gamma W \leq \varepsilon W$.

We have proven that the ratio $\mathbb{E}[c]/\mathbb{E}[w]$ is exactly $\kappa/3$, and we would like to argue that this is almost true for the ratio $c/w$ itself. This is formalized in the next claim.

Claim 3

Suppose $\mathcal{A}$ is the following event: $|w - \mathbb{E}[w]| \leq \gamma\,\mathbb{E}[w]$ and $|c - \mathbb{E}[c]| \leq \gamma\,\mathbb{E}[c]$. Then, $\Pr[\mathcal{A}] \geq 1 - 2\gamma$. Furthermore, $|\mathbb{E}[c/w \mid \mathcal{A}] - \kappa/3| \leq 2\gamma$.

{proof}

Since the deviation probabilities as given in Claim 2 are at most $\gamma$ each, the union bound on probabilities implies $\Pr[\mathcal{A}] \geq 1 - 2\gamma$.

Since $M \geq c\,\varepsilon^{-2}m/\sqrt{T} \geq c\,m/\sqrt{W}$, by Claim 2, $\mathbb{E}[w] \geq c^2/2$. Hence, when $\mathcal{A}$ happens, $w \geq (1-\gamma)\mathbb{E}[w] > 0$. In other words, with probability at least $1-2\gamma$, the edges in $\mathcal{E}_m$ will form a wedge.

Now look at $c/w$. When $\mathcal{A}$ occurs, we can apply the bounds $c \leq (1+\gamma)\mathbb{E}[c]$ and $w \geq (1-\gamma)\mathbb{E}[w]$:

$$\frac{c}{w} \leq \frac{(1+\gamma)\,\mathbb{E}[c]}{(1-\gamma)\,\mathbb{E}[w]} = \frac{1+\gamma}{1-\gamma}\cdot\frac{T}{W} = \frac{1+\gamma}{1-\gamma}\cdot\frac{\kappa}{3}.$$

We manipulate the upper bound with the following fact: for small enough $x$, $1/(1-x) \leq 1+2x$. Also, we use $\kappa \leq 1$. Hence, $c/w \leq (1+\gamma)(1+2\gamma)\kappa/3 \leq \kappa/3 + 2\gamma$.

Using a similar calculation for the lower bound, when $\mathcal{A}$ occurs, $c/w \geq \kappa/3 - 2\gamma$. Conditioned on $\mathcal{A}$, $|c/w - \kappa/3| \leq 2\gamma$, implying $|\mathbb{E}[c/w \mid \mathcal{A}] - \kappa/3| \leq 2\gamma$.

We have a bound on $\mathbb{E}[c/w \mid \mathcal{A}]$, but we really care about $\mathbb{E}[b_m]$. The key is that conditioned on $\mathcal{A}$, the expectation of $b_m$ is $\mathbb{E}[c/w \mid \mathcal{A}]$, and $\mathcal{A}$ happens with large probability. We argue formally in Claim 4 that $|\mathbb{E}[b_m] - \mathbb{E}[c/w \mid \mathcal{A}]| \leq 2\gamma$. Combined with Claim 3, we get $|\mathbb{E}[b_m] - \kappa/3| \leq 4\gamma \leq \varepsilon$, as desired.

Claim 4

$|\mathbb{E}[b_m] - \mathbb{E}[c/w \mid \mathcal{A}]| \leq 2\gamma$.

{proof}

Let $\mathcal{B}$ denote the event $w > 0$. When $\mathcal{A}$ holds, then $\mathcal{B}$ also holds. Since Single-Bit outputs $0$ when $\mathcal{B}$ does not hold, we get $\mathbb{E}[b_m] = \Pr[b_m = 1 \mid \mathcal{B}]\Pr[\mathcal{B}]$. And since the sampled wedge is uniform in $\mathcal{W}_m$, conditioned on any outcome of $\mathcal{E}_m$ with $w > 0$, the probability that $b_m = 1$ is exactly $c/w$. Further observe that $\mathbb{E}[b_m \mid \mathcal{A}]$ is exactly equal to $\mathbb{E}[c/w \mid \mathcal{A}]$ (since $\mathcal{A}$ implies $\mathcal{B}$). Therefore, by conditioning on $\mathcal{A}$,

$$\mathbb{E}[b_m] = \mathbb{E}[c/w \mid \mathcal{A}]\Pr[\mathcal{A}] + \mathbb{E}[b_m \mid \overline{\mathcal{A}}]\,\Pr[\overline{\mathcal{A}}]. \qquad (1)$$

Note that $\Pr[\overline{\mathcal{A}}] \leq 2\gamma$, and both $\mathbb{E}[c/w \mid \mathcal{A}]$ and $\mathbb{E}[b_m \mid \overline{\mathcal{A}}]$ are at most $1$. Thus, (1) is at least $\mathbb{E}[c/w \mid \mathcal{A}](1 - 2\gamma) \geq \mathbb{E}[c/w \mid \mathcal{A}] - 2\gamma$. Moreover, (1) is at most $\mathbb{E}[c/w \mid \mathcal{A}] + 2\gamma$.

Proof of Lem. 3.2: The first part is an adaptation of the birthday paradox calculation. Let the (multi)set $R = \{f_1, f_2, \ldots, f_r\}$. We define random variables $X_{ij}$ for each $i, j$ with $1 \leq i < j \leq r$. Let $X_{ij} = 1$ if the pair $\{f_i, f_j\}$ forms a wedge that belongs to $\mathcal{W}^*$ and $X_{ij} = 0$ otherwise. Then $w^* = \sum_{i<j} X_{ij}$.

Since $R$ consists of uniform i.i.d. edges from $G$, the following holds: for every $i < j$ and every (unordered) pair of edges forming a wedge of $\mathcal{W}^*$, the probability that $\{f_i, f_j\}$ equals this pair is exactly $2/m^2$. This implies $\mathbb{E}[X_{ij}] = 2|\mathcal{W}^*|/m^2$. By linearity of expectation and the identical distribution of all the $X_{ij}$s, $\mathbb{E}[w^*] = \binom{r}{2}\cdot 2|\mathcal{W}^*|/m^2$, as required.

The second part is obtained by applying the Chebyshev inequality. Let $\mathrm{Var}[w^*]$ denote the variance of $w^*$. For any $\alpha > 0$,

$$\Pr\big[|w^* - \mathbb{E}[w^*]| \geq \alpha\big] \leq \mathrm{Var}[w^*]/\alpha^2. \qquad (2)$$

We need an upper bound on the variance of $w^*$ to apply (2). This is given in the Variance bound lemma below. Before proving that lemma, we use it to complete the main proof. We set $\alpha = \gamma\,\mathbb{E}[w^*]$. Note that $r(r-1) \geq r^2/2$, so $\mathbb{E}[w^*] \geq r^2|\mathcal{W}^*|/m^2$. By (2), $\Pr[|w^* - \mathbb{E}[w^*]| \geq \gamma\,\mathbb{E}[w^*]]$ is at most the following.

$$\frac{\mathrm{Var}[w^*]}{\gamma^2\,\mathbb{E}[w^*]^2} \leq \frac{1}{\gamma^2\,\mathbb{E}[w^*]} + \frac{c'\,r^3W^{3/2}/m^3}{\gamma^2\,\mathbb{E}[w^*]^2} \leq \frac{2m^2}{\gamma^2 r^2 |\mathcal{W}^*|} + \frac{4c'\,m\,W^{3/2}}{\gamma^2\,r\,|\mathcal{W}^*|^2} \leq \gamma,$$

where the final inequality holds when $r \geq c\,m/(\gamma^2\sqrt{|\mathcal{W}^*|})$ for a sufficiently large constant $c$ (for the sets $\mathcal{W}^*$ used in this paper, $W^{3/2}/|\mathcal{W}^*|^2 = O(1/\sqrt{|\mathcal{W}^*|})$, with constants depending on $\kappa$ that are absorbed into $c$).

{lemma}

[Variance bound] Assuming $W \geq m$, $\mathrm{Var}[w^*] \leq \mathbb{E}[w^*] + c'\,r^3W^{3/2}/m^3$ for some absolute constant $c'$.

{proof}

We use the same notation as in the proof of Lem. 3.2. For convenience, we set $\mu = \mathbb{E}[X_{ij}]$, which is $2|\mathcal{W}^*|/m^2$. By the definition of variance and linearity of expectation,

$$\mathrm{Var}[w^*] = \mathbb{E}\Big[\Big(\sum_{i<j}X_{ij}\Big)^2\Big] - \mathbb{E}[w^*]^2 = \sum_{i<j}\,\sum_{k<l}\mathbb{E}[X_{ij}X_{kl}] - \mathbb{E}[w^*]^2.$$

The summation is split as follows.

$$\sum_{i<j}\,\sum_{k<l}\mathbb{E}[X_{ij}X_{kl}] = \sum_{i<j}\mathbb{E}[X_{ij}^2] \;+\; \sum_{\substack{i<j,\ k<l \\ |\{i,j\}\cap\{k,l\}|=1}}\mathbb{E}[X_{ij}X_{kl}] \;+\; \sum_{\substack{i<j,\ k<l \\ \{i,j\}\cap\{k,l\}=\emptyset}}\mathbb{E}[X_{ij}X_{kl}].$$

We deal with each of these terms separately. For convenience, we refer to the terms (in order) as $T_1$, $T_2$, and $T_3$. We first list the upper bounds for each of these terms and derive the final bound on $\mathrm{Var}[w^*]$.

  • $T_1 = \mathbb{E}[w^*]$.

  • $T_2 \leq 2r^3\sum_v d_v^3/m^3$.

  • $T_3 \leq \mathbb{E}[w^*]^2$.

We shall prove these shortly. From these, we directly bound $\mathrm{Var}[w^*] \leq T_1 + T_2 = \mathbb{E}[w^*] + 2r^3\sum_v d_v^3/m^3$.

Note that $\sum_v d_v^3 = \|d\|_3^3$, where $d$ is the degree vector. Since the $\ell_3$-norm is at most the $\ell_2$-norm, $\sum_v d_v^3 \leq (\sum_v d_v^2)^{3/2}$. Since $W \geq m$, we have $\sum_v d_v^2 = 2W + 2m \leq 4W$. Plugging these bounds in (and using gross upper bounds to ignore constants),

$$\mathrm{Var}[w^*] \leq \mathbb{E}[w^*] + 2r^3(4W)^{3/2}/m^3 \leq \mathbb{E}[w^*] + c'\,r^3W^{3/2}/m^3.$$

The final step uses the fact that $(4W)^{3/2} = 8W^{3/2}$, so $c' = 16$ suffices.

We now bound the terms $T_1$, $T_2$, and $T_3$ in three separate claims.

Claim 5

$T_1 = \mathbb{E}[w^*]$.

{proof}

Since $X_{ij}$ only takes the values $0$ and $1$, $\mathbb{E}[X_{ij}^2] = \mathbb{E}[X_{ij}]$, and hence $T_1 = \mathbb{E}[w^*]$.

Claim 6

$T_2 \leq 2r^3\sum_v d_v^3/m^3$.

{proof}

How many terms are in the summation? There are $3$ distinct indices among $i, j, k, l$. For each distinct triple of indices, there are $6$ possible ways of choosing $i<j$ and $k<l$ among these indices such that $|\{i,j\}\cap\{k,l\}| = 1$. This gives at most $6\binom{r}{3} \leq r^3$ terms. By symmetry, each term in the summation is equal to $\mathbb{E}[X_{12}X_{23}]$. This is exactly the probability that $X_{12} = 1$ and $X_{23} = 1$.

Let $A$ be the event that $X_{12} = 1$ and $X_{23} = 1$. Let $B$ be the event that edge $f_2$ intersects both edges $f_1$ and $f_3$. Observe that $A$ implies $B$. Therefore, it suffices to bound the probability of the latter event. We also use the inequality $(a+b)^2 \leq 2(a^2+b^2)$. Also, note that the number of edges intersecting any fixed edge $(u,v)$ is exactly $d_u + d_v - 2$. Hence,

$$\Pr[B] = \sum_{(u,v)\in E}\frac{1}{m}\Big(\frac{d_u + d_v - 2}{m}\Big)^2 \leq \frac{2}{m^3}\sum_{(u,v)\in E}(d_u^2 + d_v^2) = \frac{2}{m^3}\sum_v d_v^3.$$

For the final equality, consider the number of terms in the summation where $d_v^2$ appears. This is the number of edges incident to $v$, which is exactly $d_v$. Combining, $T_2 \leq 2r^3\sum_v d_v^3/m^3$, completing the proof.

Claim 7

$T_3 \leq \mathbb{E}[w^*]^2$.

{proof}

There are $6\binom{r}{4}$ terms in the summation ($\binom{r}{4}$ ways of choosing the indices and $6$ different orderings). Note that $X_{ij}$ and $X_{kl}$ are independent, regardless of the structure of $G$ or the set of wedges $\mathcal{W}^*$. This is because $\{i,j\}\cap\{k,l\} = \emptyset$. In other words, the outcomes of the random edges $f_i$ and $f_j$ do not affect the edges $f_k$ and $f_l$ (by the independence of these draws) and hence cannot affect the random variable $X_{kl}$. Thus, $T_3 = 6\binom{r}{4}\mu^2 \leq \binom{r}{2}^2\mu^2 = \mathbb{E}[w^*]^2$.

3.3 Circumventing problems with Single-Bit

In this section we will discuss two problems that limit the practicality of Single-Bit and how Streaming-Triangles circumvents these problems with heuristics.

Thm. 3.1 immediately gives a small sublinear space streaming algorithm for estimating $\kappa$. The output of Single-Bit has almost the right expectation. We can run many independent invocations of Single-Bit and take the fraction of $1$s to estimate $\mathbb{E}[b_m]$ (which is close to $\kappa/3$). A Chernoff bound tells us that $O(\varepsilon^{-2})$ invocations suffice to estimate $\kappa$ within an additive error of $\varepsilon$. The total space required by the algorithm becomes $O(\varepsilon^{-2}M)$, which can be very expensive in practice. Even though $M$ is not large, for a reasonably small $\varepsilon$, the storage cost blows up by a factor of $\varepsilon^{-2}$. This is the standard method used in previous work for streaming triangle counts.

This blowup is avoided in Streaming-Triangles by reusing the same reservoir of edges for sampling wedges. Note that Single-Bit is trying to generate a single uniform random wedge from its edge reservoir, and the standard amplification uses independent reservoirs of edges to generate multiple samples. Lem. 3.2 says that for a reservoir of $c\,m/\sqrt{W}$ edges, we expect about $c^2$ wedges. So the reservoir already contains a large set of wedges, and we could just use a subset of these for estimating $\kappa$. Unfortunately, these wedges are correlated with each other, and we cannot theoretically prove the desired concentration. In practice, the algorithm generates so many wedges that downsampling them into the wedge reservoir leads to a sufficiently uncorrelated sample, and we get excellent results by reusing the same edge reservoir for all wedge samples. This is an important distinction from all other streaming work [Jowhari and Ghodsi (2005), Buriol et al. (2006), Pavan et al. (2013)]: to (heuristically) reduce the error, we only need to grow the wedge reservoir, whereas previous algorithms must multiply their entire space by the number of independent estimators.

The second issue is that Single-Bit requires a fair bit of bookkeeping. We need to generate a random wedge from the large set $\mathcal{W}_t$, the set of wedges formed by the current edge reservoir. While this is possible by storing edge_res as a subgraph, we have a nice (at least in the authors' opinion) heuristic fix that avoids these complications.

Suppose we have a uniform random wedge $w$ in $\mathcal{W}_{t-1}$. We can convert it to an "almost" uniform random wedge in $\mathcal{W}_t$. If edge_res is not updated at time $t$ (thus $\mathcal{W}_t = \mathcal{W}_{t-1}$, which is true most of the time), then $w$ is also uniform in $\mathcal{W}_t$. Suppose not. Note that $\mathcal{W}_t$ is constructed by removing some wedges from $\mathcal{W}_{t-1}$ and inserting the new wedges $\mathcal{N}_t$. Since $w$ is uniform random in $\mathcal{W}_{t-1}$, if $w$ is also present in $\mathcal{W}_t$, then it is uniform random in $\mathcal{W}_t \setminus \mathcal{N}_t$. Replacing $w$ by a uniform random wedge in $\mathcal{N}_t$ with probability $|\mathcal{N}_t|/|\mathcal{W}_t|$ then yields a uniform random wedge in $\mathcal{W}_t$. This is precisely what Streaming-Triangles does.

When $w$ is no longer in $\mathcal{W}_t$, the edge replaced at time $t$ must have been one of the two edges of $w$. We approximate this as a low probability event and simply ignore this case. Hence, in Streaming-Triangles, we simply assume that $w$ is always in $\mathcal{W}_t$. This is technically incorrect, but it appears to have little effect on the accuracy in practice. And it leads to a cleaner, more efficient implementation.

4 Experimental Results

We implemented our algorithm in C++ and ran our experiments on a MacBook Pro laptop equipped with a 2.8GHz Intel Core i7 processor and 8GB of memory.

Predictions on various graphs: We run Streaming-Triangles on a variety of graphs obtained from the SNAP database [SNAP (2013)]. The vital statistics of all the graphs are provided in Tab. 1. We simply set the edge reservoir size to 20K and the wedge reservoir size to 20K for all our runs. Each graph is converted into a stream by taking a random ordering of the edges. In Fig. 2, we show our results for estimating both the transitivity $\kappa$ and the triangle count $T$. The absolute values of the estimates are plotted for $\kappa$ together with the true values. For the triangle counts, we plot the relative error (that is, $|T - \hat{T}|/T$, where $\hat{T}$ is the algorithm output) for each graph, since the true values can vary over orders of magnitude. Observe that the transitivity estimates are very accurate. The relative error for $T$ is mostly below 8%, and often below 4%.

All the graphs listed have millions of edges, so our storage is always at least 2 orders of magnitude smaller than the size of the graph. Most dramatically, we get accurate results on the Orkut social network, which has 220M edges. The algorithm stores only 40K edges, a 0.0001-fraction of the graph. Also observe the results on the Flickr and LiveJournal graphs, which also run into tens of millions of edges.

Graph | $n$ | $m$ | $W$ | $T$ | $\kappa$
amazon0312 | 401K | 2350K | 69M | 3686K | 0.160
amazon0505 | 410K | 2439K | 73M | 3951K | 0.162
amazon0601 | 403K | 2443K | 72M | 3987K | 0.166
as-skitter | 1696K | 11095K | 16022M | 28770K | 0.005
cit-Patents | 3775K | 16519K | 336M | 7515K | 0.067
roadNet-CA | 1965K | 2767K | 6M | 121K | 0.060
web-BerkStan | 685K | 6649K | 27983M | 64691K | 0.007
web-Google | 876K | 4322K | 727M | 13392K | 0.055
web-Stanford | 282K | 1993K | 3944M | 11329K | 0.009
wiki-Talk | 2394K | 4660K | 12594M | 9204K | 0.002
youtube | 1158K | 2990K | 1474M | 3057K | 0.006
flickr | 1861K | 15555K | 14670M | 548659K | 0.112
livejournal | 5284K | 48710K | 7519M | 310877K | 0.124
orkut | 3073K | 223534K | 45625M | 627584K | 0.041
Table 1: Properties of the graphs used in the experiments ($n$: vertices, $m$: edges, $W$: wedges, $T$: triangles, $\kappa$: transitivity).

(a) Transitivity: exact values vs. streaming estimates for each graph.
(b) Triangles: relative error of the triangle count estimate for each graph.
Figure 2: Output of a single run of Streaming-Triangles on a variety of real datasets with a 20K edge reservoir and a 20K wedge reservoir. The plot on the left gives the estimated transitivity values (labelled streaming) alongside their exact values. The plot on the right gives the relative error of Streaming-Triangles's estimate of the triangle count $T$. Observe that the relative error for $T$ is mostly below 8%, and often below 4%.

Real-time tracking: A benefit of Streaming-Triangles is that it can maintain real-time estimates of $\kappa_t$ and $T_t$. We take a real-world temporal graph, cit-Patents, which contains patent citation data over a 40 year period. The vertices of this graph are the patents and the edges correspond to the citations. The edges are timestamped with the year of the citation and hence give a stream of edges. Using an edge reservoir of 50K and a wedge reservoir of 50K, we accurately track these values over time (refer to Fig. 1). Note that this is still orders of magnitude smaller than the full size of the graph, which has 16M edges. The figure only shows the true values and the estimates at year ends. As the figure shows, the estimates are consistently accurate over time.

Convergence of our estimate: We demonstrate that our algorithm converges to the true value as we increase the space. We run our algorithm on the amazon0505 graph while increasing the space ($s_e$ and $s_w$) available to the algorithm. For convenience, we keep the sizes of the edge reservoir and the wedge reservoir the same. In Fig. 3, the estimates for the transitivity and the triangle count rapidly converge to the true values. Accuracy increases with more storage up to 10,000 edges, but stabilizes after that. We get similar results for other graphs, but do not provide all the details for brevity.

(a) Convergence of the transitivity estimate
(b) Convergence of the triangle count estimate
Figure 3: Concentration of estimates on amazon0505: We run our algorithm keeping the sizes of the edge reservoir and the wedge reservoir the same. We plot the transitivity and triangle count estimates and observe that they converge to the true values.

Effects of storage on estimates: We explore the effect that the size of the edge reservoir, $s_e$, and the size of the wedge reservoir, $s_w$, have on the quality of the estimates for $\kappa$. In the first experiment we fix $s_w$ to 10K and 20K and increase $s_e$. The results are presented in Fig. 4(a). In this figure, for any point on the horizontal axis, the corresponding point on the vertical axis is the average error in $\kappa$. In all cases, the error decreases as we increase $s_e$. However, it decreases sharply initially and then flattens, showing that the marginal benefit of increasing $s_e$ diminishes, and it does not help to only increase $s_e$.

In Fig. 4(b), we fix $s_e$ to 10K and 20K and increase $s_w$.