Inferring Networks of Diffusion and Influence


Abstract

Information diffusion and virus propagation are fundamental processes taking place in networks. While it is often possible to directly observe when nodes become infected with a virus or adopt the information, observing individual transmissions (i.e., who infects whom, or who influences whom) is typically very difficult. Furthermore, in many applications, the underlying network over which the diffusions and propagations spread is actually unobserved. We tackle these challenges by developing a method for tracing paths of diffusion and influence through networks and inferring the networks over which contagions propagate. Given the times when nodes adopt pieces of information or become infected, we identify the optimal network that best explains the observed infection times. Since the optimization problem is NP-hard to solve exactly, we develop an efficient approximation algorithm that scales to large datasets and finds provably near-optimal networks.

We demonstrate the effectiveness of our approach by tracing information diffusion in a set of 170 million blogs and news articles over a one year period to infer how information flows through the online media space. We find that the diffusion network of news for the top 1,000 media sites and blogs tends to have a core-periphery structure with a small set of core media sites that diffuse information to the rest of the Web. These sites tend to have stable circles of influence with more general news media sites acting as connectors between them.

Keywords: Networks of diffusion, information cascades, blogs, news media, meme-tracking, social networks

Categories and Subject Descriptors: H.2.8 [Database Management]: Database applications (Data mining)

General Terms: Algorithms, Experimentation


1 Introduction

The dissemination of information, cascading behavior, diffusion and spreading of ideas, innovation, information, influence, viruses and diseases are ubiquitous in social and information networks. Such processes play a fundamental role in settings that include the spread of technological innovations [Rogers (1995), Strang and Soule (1998)], word of mouth effects in marketing [Domingos and Richardson (2001), Kempe et al. (2003), Leskovec et al. (2006)], the spread of news and opinions [Adar et al. (2004), Gruhl et al. (2004), Leskovec et al. (2007), Leskovec et al. (2009), Liben-Nowell and Kleinberg (2008)], collective problem-solving [Kearns et al. (2006)], the spread of infectious diseases [Anderson and May (2002), Bailey (1975), Hethcote (2000)] and sampling methods for hidden populations [Goodman (1961), Heckathorn (1997)].

In order to study network diffusion there are two fundamental challenges one has to address. First, to be able to track cascading processes taking place in a network, one needs to identify the contagion (i.e., the idea, information, virus, disease) that is actually spreading and propagating over the edges of the network. Moreover, one has then to identify a way to successfully trace the contagion as it is diffusing through the network. For example, when tracing information diffusion, it is a non-trivial task to automatically and on a large scale identify the phrases or “memes” that are spreading through the Web [Leskovec et al. (2009)].

Second, we usually think of diffusion as a process that takes place on a network, where the contagion propagates over the edges of the underlying network from node to node like an epidemic. However, the network over which propagations take place is usually unknown and unobserved. Commonly, we only observe the times when particular nodes get “infected” but we do not observe who infected them. In case of information propagation, as bloggers discover new information, they write about it without explicitly citing the source. Thus, we only observe the time when a blog gets “infected” with information but not where it got infected from. Similarly, in virus propagation, we observe people getting sick without usually knowing who infected them. And, in a viral marketing setting, we observe people purchasing products or adopting particular behaviors without explicitly knowing who was the influencer that caused the adoption or the purchase.

These challenges are especially pronounced in information diffusion on the Web, where there have been relatively few large scale studies of information propagation in large networks [Leskovec et al. (2006), Leskovec et al. (2007), Liben-Nowell and Kleinberg (2008)]. In order to study paths of diffusion over networks, one essentially requires complete information about who influences whom, as a single missing link in a sequence of propagations can lead to wrong inferences [Sadikov et al. (2011)]. Even if one collects near-complete large scale diffusion data, it is a non-trivial task to identify textual fragments that propagate relatively intact through the Web without human supervision. And even then the question of how information diffuses through the network still remains. Thus, the questions are: What is the network over which the information propagates on the Web? What is the global structure of such a network? How do news media sites and blogs interact? Which roles do different sites play in the diffusion process and how influential are they?

Our approach to inferring networks of diffusion and influence. We address the above questions by positing that there is some underlying unknown network over which information, viruses or influence propagate. We assume that the underlying network is static and does not change over time. We then observe the times when nodes get infected by or decide to adopt a particular contagion (a particular piece of information, product or a virus) but we do not observe where they got infected from. Thus, for each contagion, we only observe times when nodes got infected, and we are then interested in determining the paths the diffusion took through the unobserved network. Our goal is to reconstruct the network over which contagions propagate. Figure 1 gives an example.

(a) True network
(b) Inferred network using heuristic baseline method
(c) Inferred network using NetInf algorithm
Figure 1: Diffusion network inference problem. There is an unknown network (a) over which contagions propagate. We are given a collection of node infection times and aim to recover the network in figure (a). Using a baseline heuristic (see Section 4) we recover network (b) and using the proposed NetInf algorithm we recover network (c). Red edges denote mistakes. The baseline makes many mistakes but NetInf almost perfectly recovers the network.

Edges in such networks of influence and diffusion have various interpretations. In virus or disease propagation, edges can be interpreted as who-infects-whom. In information propagation, edges are who-adopts-information-from-whom or who-listens-to-whom. In a viral marketing setting, edges can be understood as who-influences-whom.

The main premise of our work is that by observing many different contagions spreading among the nodes, we can infer the edges of the underlying propagation network. If node v tends to get infected soon after node u for many different contagions, then we can expect an edge (u, v) to be present in the network. By exploring correlations in node infection times, we aim to recover the unobserved diffusion network.

Figure 2: The underlying true network G* over which contagions spread is illustrated on the top. Each subsequent layer depicts a cascade created by the diffusion of a particular contagion. For each cascade, nodes in gray are the “infected” nodes and the edges denote the direction in which the contagion propagated. Now, given only the node infection times in each cascade we aim to infer the connectivity of the underlying network G*.

The concept of a set of contagions spreading over a network is illustrated in Figure 2. As a contagion spreads over the underlying network it creates a trace, called a cascade. Nodes of the cascade are the nodes of the network that got infected by the contagion, and edges of the cascade represent edges of the network over which the contagion actually spread. At the top of Figure 2, the underlying true network G* over which contagions spread is illustrated. Each subsequent layer depicts a cascade created by a particular contagion. A priori, we do not know the connectivity of the underlying true network and our aim is to infer this connectivity using the infection times of nodes in many cascades.

We develop NetInf, a scalable algorithm for inferring networks of diffusion and influence. We first formulate a generative probabilistic model of how, on a fixed hypothetical network, contagions spread as directed trees (i.e., a node infects many other nodes) through the network. Since we only observe the times when nodes get infected, there are many possible ways the contagion could have propagated through the network that are consistent with the observed data. In order to infer the network we have to consider all possible ways the contagion could have spread through the network. Thus, naive computation of the model takes exponential time since there is a combinatorially large number of propagation trees. We show that, perhaps surprisingly, computations over this super-exponential set of trees can be performed in polynomial (cubic) time. However, under such a model, the network inference problem is still intractable. Thus, we introduce a tractable approximation, and show that the objective function can be both efficiently computed and efficiently optimized. By exploiting a diminishing returns property of the problem, we prove that NetInf infers near-optimal networks. We also speed up NetInf by exploiting the local structure of the objective function and by using lazy evaluations [Leskovec et al. (2007)].

In a broader context, our work here is related to the network structure learning of probabilistic directed graphical models [Friedman et al. (1999), Getoor et al. (2003)], where heuristic greedy hill-climbing or stochastic search, neither of which offers performance guarantees, is usually used in practice. In contrast, our work provides a novel formulation and a tractable polynomial time algorithm for inferring directed networks, together with an approximation guarantee that ensures the inferred networks will be of near-optimal quality.

Our results on synthetic datasets show that we can reliably infer an underlying propagation and influence network, regardless of the overall network structure. Validation on real and synthetic datasets shows that NetInf outperforms a baseline heuristic by an order of magnitude and correctly discovers more than 90% of the edges. We apply our algorithm to a real Web information propagation dataset of 170 million blog and news articles over a one year period. Our results show that online news propagation networks tend to have a core-periphery structure with a small set of core blog and news media websites that diffuse information to the rest of the Web; news media websites tend to diffuse the news faster than blogs, and blogs keep discussing news for a longer time than news media websites.

Inferring how information or viruses propagate over networks is crucial for a better understanding of diffusion in networks. By modeling the structure of the propagation network, we can gain insight into positions and roles various nodes play in the diffusion process and assess the range of influence of nodes in the network.

The rest of the paper is organized as follows. Section 2 is devoted to the statement of the problem and the formulation of the model and the optimization problem. In Section 3, an efficient reformulation of the optimization problem is proposed and its solution is presented. Experimental evaluation using synthetic and MemeTracker data is shown in Section 4. We conclude with related work in Section 5 and a discussion of our results in Section 6.

2 Diffusion network inference problem

We next formally describe the problem where contagions propagate over an unknown static directed network and create cascades. For each cascade we observe the times when nodes got infected but not who infected them. The goal then is to infer the unknown network over which contagions originally propagated. In an information diffusion setting, each contagion corresponds to a different piece of information that spreads over the network and all we observe are the times when particular nodes adopted or mentioned particular information. The task then is to infer the network where a directed edge (u, v) carries the semantics that node v tends to get influenced by node u (i.e., mentions or adopts the information after node u does so).

2.1 Problem statement

Given a hidden directed network G*, we observe multiple contagions spreading over it. As contagion c propagates over the network, it leaves a trace, a cascade, in the form of a set of triples (u, v, t_v), which means that contagion c reached node v at time t_v by spreading from node u (i.e., by propagating over the edge (u, v)). We denote the fact that the cascade initially starts from some active node v at time t_v as (∅, v, t_v).

Now, we only get to observe the time t_v when contagion c reached node v but not how it reached the node, i.e., we only know that v got infected by one of its neighbors in the network but do not know who v’s neighbors are or which of them infected v. Thus, instead of observing the triples (u, v, t_v) that fully specify the trace of the contagion through the network, we only get to observe pairs (v, t_v) that describe the time when node v got infected by the contagion c. Now, given such data about node infection times for many different contagions, we aim to recover the unobserved directed network G*, i.e., the network over which the contagions originally spread.

We use the term hit time t_u to refer to the time when a cascade created by a contagion hits (infects, causes the adoption by) a particular node u. In practice, most cascades created by contagions are relatively small and do not hit all the nodes of the network. If a node u is not hit by a cascade c, we set t_u = ∞. The observed data about a cascade c is then specified by the vector t_c = [t_1, …, t_n] of hit times, where n is the number of nodes in G*, and t_u is the time when node u got infected by the contagion c (t_u = ∞ if u did not get infected by c).

Our goal now is to infer the network G*. In order to solve this problem we first define a probabilistic model of how contagions spread over the edges of the network. We first specify the contagion transmission model P_c(u, v) that describes how likely it is that node u spreads the contagion c to node v. Based on the model we then describe P(c | T), the probability that the contagion c propagated in a particular cascade tree pattern T, where the tree simply specifies which nodes infected which other nodes (e.g., see Figure 2). Last, we define P(c | G), which is the probability that cascade c occurs in a network G. And then, under this model, we show how to estimate a (near-)maximum likelihood network Ĝ, i.e., the network that (approximately) maximizes the probability of the observed cascades occurring in it.

2.2 Cascade Transmission Model

We start by formulating the probabilistic model of how contagions diffuse over the network. We build on the Independent Cascade Model [Kempe et al. (2003)], which posits that an infected node infects each of its neighbors in the network independently at random with some small chosen probability. This model implicitly assumes that every node v in the cascade is infected by at most one node u. That is, it only matters when the first neighbor of v infects it, and all infections that come afterwards have no impact. Note that v can have multiple of its neighbors infected but only one neighbor actually activates v. Thus, the structure of a cascade created by the diffusion of contagion c is fully described by a directed tree T that is contained in the directed graph G*, i.e., since the contagion can only spread over the edges of G* and each node can only be infected by at most one other node, the pattern in which the contagion propagated is a tree and a subgraph of G*. Refer to Figure 2 for an illustration of a network and a set of cascades created by contagions diffusing over it.

Probability of an individual transmission. The Independent Cascade Model only implicitly models time through the epochs of the propagation. We thus formulate a variant of the model that preserves the tree structure of cascades and also incorporates the notion of time.

We think of our model of how a contagion transmits from u to v in two steps. When a new node u gets infected it gets a chance to transmit the contagion to each of its currently uninfected neighbors independently with some small probability β. If the contagion is transmitted, we then sample the incubation time, i.e., how long after u got infected, v will get a chance to infect its (at that time uninfected) neighbors. Note that cascades in this model are necessarily trees since a node only gets to infect neighbors that have not yet been infected.

First, we define the probability P_c(u, v) that a node u spreads the cascade to a node v, i.e., that node u influences/infects/transmits contagion c to node v. Formally, P_c(u, v) specifies the conditional probability of observing cascade c spreading from u to v.

Consider a pair of nodes u and v, connected by a directed edge (u, v), and the corresponding hit times t_u and t_v. Since the contagion can only propagate forward in time, if node u got infected after node v (t_u > t_v) then P_c(u, v) = 0, i.e., nodes cannot influence nodes from the past. On the other hand (if t_u < t_v) we make no assumptions about the properties and shape of P_c(u, v). To build some intuition, we can think that the probability of propagation between a pair of nodes u and v is decreasing in the difference of their infection times, i.e., the farther apart in time the two nodes get infected, the less likely they are to infect one another.

However, we note that our approach allows for the contagion transmission model to be arbitrarily complicated as it can also depend on the properties of the contagion as well as the properties of the nodes and edges. For example, in a disease propagation scenario, node attributes could include information about the individual’s socio-economic status, commute patterns, disease history and so on, and the contagion properties would include the strength and the type of the virus. This allows for great flexibility in the cascade transmission models as the probability of infection depends on the parameters of the disease and properties of the nodes.

Symbol	Description
G = (V, E)	Directed graph with nodes V and edges E over which contagions spread
β	Probability that a contagion propagates over an edge of G
α	Incubation time model parameter (refer to Eq. 1)
E_ε, ε	Set of ε-edges, and the probability that a contagion propagates over an ε-edge
c	Contagion that spreads over G
t_u	Time when node u got hit (infected) by a particular cascade
t_c	Set of node hit times in cascade c; t_u = ∞ if node u did not participate in c
Δ_{u,v}	Time difference t_v − t_u between the node hit times in a particular cascade
C	Set of contagions c and corresponding hit times, i.e., the observed data
T_c(G)	Set of all possible propagation trees of cascade c on graph G
T	Cascade propagation tree, T ∈ T_c(G)
V_T	Node set of T, V_T ⊆ V
E_T	Edge set of T, E_T ⊆ E ∪ E_ε
Table 1: Table of symbols.

Purely for simplicity, in the rest of the paper we assume the simplest and most intuitive model, where the probability of transmission depends only on the time difference between the node hit times Δ_{u,v} = t_v − t_u. We consider two different models for the incubation time distribution P_c(u, v), an exponential and a power-law model, each with parameter α:

P_c(u, v) ∝ e^(−Δ_{u,v}/α)  (exponential model),   P_c(u, v) ∝ Δ_{u,v}^(−α)  (power-law model)    (1)

Both the power-law and exponential waiting time models have been argued for in the literature [Barabási (2005), Leskovec et al. (2007), Malmgren et al. (2008)]. In the end, our algorithm does not depend on the particular choice of the incubation time distribution, and more complicated non-monotonic and multimodal functions can easily be chosen [Crane and Sornette (2008), Wallinga and Teunis (2004), Gomez-Rodriguez et al. (2011)]. Also, we only allow transmissions forward in time, i.e., if t_v ≤ t_u, then u infects v with probability 0, P_c(u, v) = 0. Note that the parameter α can potentially be different for each edge of the network.

Considering the above model in a generative sense, we can think that the cascade c reaches node u at time t_u, and we now need to generate the time t_v when u spreads the cascade to node v. As cascades generally do not infect all the nodes of the network, we need to explicitly model the probability that the cascade stops. With probability 1 − β, the cascade stops and never reaches v, thus t_v = ∞. With probability β, the cascade transmits over the edge (u, v), and the hit time is set to t_v = t_u + Δ, where the incubation time Δ is sampled from the incubation time distribution.
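The following sketch illustrates this generative step. The function and parameter names are ours, not from the paper's implementation, and β = 0.5, α = 1.0 are arbitrary illustrative values:

```python
import math
import random

def transmit(t_u, beta=0.5, alpha=1.0, model="exp"):
    """One generative step of the cascade transmission model: node u,
    infected at time t_u, tries to infect a currently uninfected neighbor v.
    Returns v's hit time, or math.inf if the cascade stops on this edge."""
    if random.random() > beta:           # with probability 1 - beta the edge fails
        return math.inf
    if model == "exp":                   # exponential incubation time (Eq. 1)
        delta = random.expovariate(1.0 / alpha)
    else:                                # power-law incubation time (Eq. 1)
        delta = random.paretovariate(alpha)
    return t_u + delta                   # v is hit delta time units after u

# Example: a node infected at time 2.0 attempts one transmission
print(transmit(2.0))
```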

Likelihood of a cascade spreading in a given tree pattern T. Next we calculate the likelihood P(c | T) that contagion c propagated in a particular tree pattern T = (V_T, E_T) in a graph G = (V, E), under the assumptions of our model. This means we want to assess the probability that a cascade c with hit times t_c propagated in a particular tree pattern T.

Due to our modeling assumption that cascades are trees, the likelihood is simply:

P(c | T) = ∏_{(u,v) ∈ E_T} β P_c(u, v) · ∏_{u ∈ V_T, (u,x) ∈ E \ E_T} (1 − β)    (2)

where E_T is the edge set and V_T is the vertex set of tree T. Note that V_T is the set of nodes that got infected by c, i.e., V_T contains the nodes u of t_c with t_u < ∞. The above expression has an intuitive explanation. Since the cascade spread in tree pattern T, the contagion successfully propagated along the tree edges. And along the edges where the contagion did not spread, the cascade had to stop. Here, we assume independence between edges to simplify the problem. Despite this simplification, we later show empirically that NetInf works well in practice.

Moreover, P(c | T) can be rewritten as:

P(c | T) = β^q (1 − β)^s ∏_{(u,v) ∈ E_T} P_c(u, v)    (3)

where q = |E_T| is the number of edges in T and counts the edges over which the contagion successfully propagated. Similarly, s counts the number of edges that did not activate and failed to transmit the contagion: s = ∑_{u ∈ V_T} d_out(u) − q, where d_out(u) is the out-degree of node u in graph G.

We conclude with an observation that will come in very handy later. Examining Eq. 3 we notice that the first part of the equation before the product sign does not depend on the edge set E_T but only on the vertex set V_T of the tree T (for a tree, q = |V_T| − 1, and s then depends only on the out-degrees of the nodes in V_T). This means that the first part is constant for all trees T with the same vertex set V_T but possibly different edge sets E_T. For example, this means that for a fixed c and G, maximizing P(c | T) with respect to T (i.e., finding the most probable tree) does not depend on the second product of Eq. 2. This means that when optimizing, one only needs to focus on the first product, where the edges of the tree simply specify how the cascade spreads, i.e., every node in the cascade gets influenced by exactly one node, namely its parent.

Cascade likelihood. We just defined the likelihood P(c | T) that a single contagion c propagates in a particular tree pattern T. Now, our aim is to compute P(c | G), the probability that a cascade c occurs in a graph G. Note that we observe only the node infection times while the exact propagation tree T (who-infected-whom) is unknown. In general, over a given graph G there may be multiple different propagation trees T that are consistent with the observed data. For example, Figure 3 shows three different cascade propagation paths (trees T) that are all consistent with the observed data.

Figure 3: Different propagation trees T of cascade c that are all consistent with the observed node hit times t_c. In each case, wider edges compose the tree, while thinner edges are the rest of the edges of the network G.

So, we need to combine the probabilities of individual propagation trees into a probability of a cascade c. We achieve this by considering all possible propagation trees T that are supported by network G, i.e., all possible ways in which cascade c could have spread over G:

P(c | G) = ∑_{T ∈ T_c(G)} P(c | T) P(T | G)    (4)

where c is a cascade and T_c(G) is the set of all the directed connected spanning trees on the subgraph of G induced by the nodes that got hit by cascade c. Note that even though the sum ranges over all possible spanning trees T ∈ T_c(G), in case T is inconsistent with the observed data, then P(c | T) = 0.

Assuming that all trees are a priori equally likely (i.e., P(T | G) = 1/|T_c(G)|) and using the observation from Equation 3, we obtain:

P(c | G) ∝ ∑_{T ∈ T_c(G)} ∏_{(u,v) ∈ E_T} P_c(u, v)    (5)

Basically, the graph G defines the skeleton over which the cascades can propagate and the tree T defines a particular possible propagation path. There may be many possible trees that explain a single cascade (see Fig. 3), and since we do not know in which particular tree pattern the cascade really propagated, we need to consider all possible propagation trees in T_c(G). Thus, the sum over T ∈ T_c(G) is a sum over all directed spanning trees of the graph induced by the vertices that got hit by the cascade c.

We just computed the probability P(c | G) of a single cascade c occurring in G, and we now define the probability of a set of cascades C occurring in G simply as:

P(C | G) = ∏_{c ∈ C} P(c | G)    (6)

where we again assume conditional independence between cascades given the graph G.

2.3 Estimating the network that maximizes the cascade likelihood

Now that we have formulated the cascade transmission model, we can state the diffusion network inference problem, where the goal is to find Ĝ that solves the following optimization problem:

Problem 1

Given a set of node infection times for a set of cascades C, a propagation probability parameter β and an incubation time distribution P_c(u, v), find the network Ĝ such that:

Ĝ = argmax_{|G| ≤ k} P(C | G)    (7)

where the maximization is over all directed graphs G of at most k edges, and P(C | G) is defined by Equations 6, 4 and 2.

We include the constraint on the number of edges in Ĝ simply because we seek a sparse solution, since real graphs are sparse. We discuss how to choose k in further sections of the paper.

The above optimization problem seems wildly intractable. To evaluate Eq. (6), we need to compute Eq. (4) for each cascade c, i.e., the sum over all possible spanning trees T ∈ T_c(G). The number of trees can be super-exponential in the size of G, but perhaps surprisingly, this super-exponential sum can be performed in time polynomial in the number n of nodes in the graph G, by applying Kirchhoff’s matrix tree theorem [Knuth (1968)]:

Theorem 1 ([Tutte (1948)])

If we construct a matrix A = [a_{u,v}] such that a_{u,v} = ∑_k w_{k,v} if u = v and a_{u,v} = −w_{u,v} if u ≠ v, and if A_v is the matrix created by removing any row v and column v from A, then

det(A_v) = ∑_T ∏_{(u,x) ∈ E_T} w_{u,x}    (8)

where the sum runs over each directed spanning tree T in G.

In our case, we set w_{u,v} = β P_c(u, v) and we compute the product of the determinants of |C| matrices, one for each cascade, which is exactly Eq. 4. Note that since edges (u, v) with t_u ≥ t_v have weight 0 (i.e., they are not present), given a fixed cascade c, the collection of edges with positive weight forms a directed acyclic graph (DAG). A DAG with a time-ordered labeling of its nodes has an upper triangular connectivity matrix. Thus, the matrix A_v of Theorem 1 is, by construction, upper triangular. Fortunately, the determinant of an upper triangular matrix is simply the product of the elements of its diagonal. This means that instead of using super-exponential time, we are now able to evaluate Eq. 6 in quadratic time per cascade (the time required to build the matrix and compute its determinant for each of the |C| cascades).
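As a concrete illustration, the following sketch (our own code, with hypothetical helper names) evaluates the tree sum of Eq. 4 for one cascade using the triangular structure: with the infected nodes ordered by hit time, the determinant in Theorem 1 reduces to the product, over all non-root nodes, of the total weight of their possible incoming edges.

```python
def cascade_likelihood(hit_times, weight):
    """Sum over all propagation trees (Eq. 4) for one cascade, via the
    matrix tree theorem (Theorem 1). `hit_times` maps each infected node
    to its hit time; `weight(u, v)` returns beta * P_c(u, v) for t_u < t_v
    and 0 otherwise. Since the time order makes the matrix upper
    triangular, the determinant is the product of its diagonal entries."""
    nodes = sorted(hit_times, key=hit_times.get)   # nodes[0] is the cascade root
    det = 1.0
    for v in nodes[1:]:
        # diagonal entry: total weight of the edges that could have infected v
        det *= sum(weight(u, v) for u in nodes if hit_times[u] < hit_times[v])
    return det
```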

However, this does not completely solve our problem, for two reasons. First, while quadratic time is a drastic improvement over a super-exponential computation, it is still too expensive for the large graphs that we want to consider. Second, we can use the above result only to evaluate the quality of a particular graph G, while our goal is to find the best graph Ĝ. To do this, we would need to search over all graphs G to find the best one. Again, as there is a super-exponential number of graphs, this is not practical. To circumvent this one could propose some ad hoc search heuristics, like hill-climbing. However, due to the combinatorial nature of the likelihood function, such a procedure would likely be prone to local maxima. We leave the question of efficient maximization of Eq. 4, where P(c | G) considers all possible propagation trees, as an interesting open problem.

3 Alternative formulation and the NetInf algorithm

The diffusion network inference problem defined in the previous section does not seem to allow for an efficient solution. We now propose an alternative formulation of the problem that is tractable both to compute and also to optimize.

3.1 An alternative formulation

We use the same tree cascade formation model as in the previous section. However, we compute an approximation of the likelihood of a single cascade by considering only the most likely tree instead of all possible propagation trees. We show that this approximate likelihood is tractable to compute. Moreover, we also devise an algorithm that provably finds networks with near optimal approximate likelihood. In the remainder of this section, we informally write likelihood and log-likelihood even though they are approximations. However, all approximations are clearly indicated.

First we introduce the concept of ε-edges to account for the fact that nodes may get infected for reasons other than the network influence. For example, in online media, not all of the information propagates via the network, as some is also pushed onto the network by the mass media [Katz and Lazarsfeld (1955), Watts and Dodds (2007)], and thus a disconnected cascade can be created. Similarly, in viral marketing, a person may purchase a product due to the influence of peers (i.e., network effect) or for some other reason (e.g., seeing a commercial on TV) [Leskovec et al. (2006)].

Modeling external influence via ε-edges. To account for such phenomena, when a cascade “jumps” across the network we can think of creating an additional node x that represents an external influence and can infect any other node. We then connect the external influence node x to every other node with an ε-edge, so that every node can get infected by the external source with a very small probability ε. For example, in case of information diffusion in the blogosphere, such a node x could model the effect of blogs getting infected by the mainstream media.

However, if we were to adopt this approach and insert an additional external influence node x into our data, we would also need to infer the edges pointing out of x, which would make our problem even harder. Thus, in order to capture the effect of external influence, we instead introduce the concept of an ε-edge. If there is no network edge between a node u and a node v, we add an ε-edge, and then node u can infect node v with a small probability ε. Even though adding ε-edges makes our graph a clique (i.e., the union of network edges and ε-edges creates a clique), the ε-edges play the role of the external influence node x.

Thus, we now think of the graph G as a fully connected graph with two disjoint sets of edges, the network edge set E and the ε-edge set E_ε, i.e., E ∩ E_ε = ∅ and E ∪ E_ε = V × V.

Now, any cascade propagation tree T is a combination of network and ε-edges. As we model the external influence via the ε-edges, the probability of a cascade c occurring in tree T (i.e., the analog of Eq. 2) can now be computed as:

P(c | T) = ∏_{(u,v) ∈ E_T} P'_c(u, v) · ∏_{u ∈ V_T, (u,x) ∉ E_T} P'_c(u, x)    (9)

where we compute the transmission probability P'_c(u, v) as follows:

P'_c(u, v) = β P_c(u, v)  if (u, v) ∈ E_T and (u, v) is a network edge,
P'_c(u, v) = ε P_c(u, v)  if (u, v) ∈ E_T and (u, v) is an ε-edge,
P'_c(u, v) = 1 − β  if (u, v) ∉ E_T and (u, v) is a network edge,
P'_c(u, v) = 1 − ε  if (u, v) ∉ E_T and (u, v) is an ε-edge.

Note that above we distinguish four types of edges: network and ε-edges that participated in the diffusion of the contagion, and network and ε-edges that did not participate in the diffusion.

Figure 4 further illustrates this concept. First, Figure 4(a) shows an example of a graph G on five nodes and four network edges (solid lines); any other possible edge is an ε-edge (dashed lines). Then, Figure 4(b) shows an example of a propagation tree T in graph G. We only show the edges that play a role in Eq. 9 and label them with four different types: (a) network edges that transmitted the contagion, (b) ε-edges that transmitted the contagion, (c) network edges that failed to transmit the contagion, and (d) ε-edges that failed to transmit the contagion.

(a) Graph G on five vertices and four network edges (solid edges). ε-edges shown as dashed lines.
(b) Cascade propagation tree T
Figure 4: (a) Graph G: network edges are shown as solid, and ε-edges are shown as dashed lines. (b) Propagation tree T. Four types of edges are labeled: (i) network edges that transmitted the contagion (solid bold), (ii) ε-edges that transmitted the contagion (dashed bold), (iii) network edges that failed to transmit the contagion (solid), and (iv) ε-edges that failed to transmit the contagion (dashed).

We can now rewrite the cascade likelihood as a combination of products over the edge types and a product over the edge incubation times:

P(c | T) = β^q ε^{q'} (1 − β)^s (1 − ε)^{s'} ∏_{(u,v) ∈ E_T} P_c(u, v)    (10)
         ≈ β^q ε^{q'} (1 − ε)^{s+s'} ∏_{(u,v) ∈ E_T} P_c(u, v)    (11)

where q is the number of network edges in E_T (type (a) edges in Fig. 4(b)), q' is the number of ε-edges in E_T, s is the number of network edges that did not transmit, and s' is the number of ε-edges that did not transmit. Note that the above approximation is valid since real networks are sparse and cascades are generally small, and hence s ≪ s'. Thus, even though 1 − β < 1 − ε, we expect (1 − β)^s (1 − ε)^{s'} to be of about the same order of magnitude as (1 − ε)^{s+s'}.

The formulation in Equation 11 has several benefits. Due to the introduction of ε-edges the likelihood is always positive. For example, even if we consider a graph G with no network edges, P(c | T) is still well defined, as we can explain the tree T via diffusion over the ε-edges. A second benefit that will become very useful later is that the likelihood now becomes monotonic in the network edges of G. This means that adding an edge to G (i.e., converting an ε-edge into a network edge) only increases the likelihood.

Considering only the most likely propagation tree. So far we have introduced the concept of ε-edges to model the external influence, or diffusion that is exogenous to the network, and introduced an approximation that treats all edges that did not participate in the diffusion as ε-edges.

Now we consider the last approximation, where instead of considering all possible cascade propagation trees T ∈ T_c(G), we only consider the most likely cascade propagation tree:

P(c | G) ≈ max_{T ∈ T_c(G)} P(c | T)    (12)

Thus, we now aim to solve the network inference problem by finding a graph G that maximizes Equation 12, where P(c | T) is defined in Equation 11.

This formulation simplifies the original network inference problem by considering only the most likely (best) propagation tree T per cascade c, instead of considering all possible propagation trees T for each cascade c. Although in some cases we expect the likelihood of c with respect to the true tree to be much higher than with respect to any competing tree, so that the probability mass is concentrated at that tree, there might be cases in which the probability mass does not concentrate on one particular tree. However, we ran extensive experiments on small networks with different structures, in which both the original network inference problem and the alternative formulation can be solved by exhaustive search. The results of the two formulations were practically indistinguishable, and we therefore consider our approximation to work well in practice.

For convenience, we work with the log-likelihood log P(c | T) rather than the likelihood P(c | T). Moreover, instead of directly maximizing the log-likelihood we equivalently maximize the following objective function, which defines the improvement of the log-likelihood for cascade c occurring in graph G over c occurring in an empty graph K̄ (i.e., a graph with only ε-edges and no network edges):

F_c(G) = max_{T ∈ T_c(G)} log P(c | T) − max_{T ∈ T_c(K̄)} log P(c | T)    (13)

Maximizing Equation (12) over all cascades is then equivalent to maximizing the following log-likelihood function:

F_C(G) = ∑_{c ∈ C} F_c(G)    (14)

We now expand Eq. (13) and obtain an instance of a simplified diffusion network inference problem:

F_c(G) = max_{T ∈ T_c(G)} ∑_{(u,v) ∈ E_T} w_c(u, v)    (15)

where w_c(u, v) is a non-negative weight which can be interpreted as the improvement in log-likelihood of edge (u, v) under the most likely propagation tree T in G. Note that by the approximation in Equation 11 one can ignore the contribution of edges that did not participate in a particular cascade c. The contribution of these edges is constant, i.e., independent of the particular shape the propagation tree T takes. This is due to the fact that each spanning tree T of G with node set V_T has |V_T| − 1 (network and ε-) edges that participated in the cascade, while all remaining edges stopped the cascade from spreading. The number of non-spreading edges depends only on the node set V_T but not on the edge set E_T. Thus, the tree that maximizes ∑_{(u,v) ∈ E_T} w_c(u, v) also maximizes log P(c | T).

Since the most likely propagation tree is a tree that maximizes the sum of the edge weights, it is simply the maximum weight directed spanning tree of the nodes V_T, where each edge (u, v) has weight w_c(u, v), and F_c(G) is simply the sum of the weights of the edges in that tree.

We also observe that since edges (u, v) with t_u ≥ t_v have weight 0 (i.e., such edges are not present), the outgoing edges of any node only point forward in time, i.e., a node cannot infect already infected nodes. Thus, for a fixed cascade c, the collection of edges with positive weight forms a directed acyclic graph (DAG).

Now we use the fact that the collection of edges with positive weights forms a directed acyclic graph by observing that the maximum weight directed spanning tree of a DAG can be computed efficiently:

Proposition 1

In a DAG G = (V, E) with vertex set V and nonnegative edge weights w, the maximum weight directed spanning tree can be found by choosing, for each node v, an incoming edge (u, v) with maximum weight w(u, v).

Proof.

The score F(T) = ∑_v w(Pa_T(v), v) of a tree T is the sum of the incoming edge weights w(Pa_T(v), v) for each node v, where Pa_T(v) is the parent of node v in T (and the root is handled appropriately). Now,

max_T F(T) = max_T ∑_v w(Pa_T(v), v) = ∑_v max_u w(u, v).

The latter equality follows from the fact that, since G is a DAG, the maximization can be done independently for each node without creating any cycles.

This proposition is a special case of the more general maximum spanning tree (MST) problem in directed graphs [Edmonds (1967)]. The important fact now is that we can find the best propagation tree in time linear in the number of edges, by simply selecting an incoming edge of highest weight for each node v. Algorithm 1 provides the pseudocode to efficiently compute the maximum weight directed spanning tree of a DAG.

0:  Weighted directed acyclic graph G = (V, E) with edge weights w
  T ← ∅
  for all v ∈ V with at least one incoming edge do
     (u, v) ← argmax_{(u,v) ∈ E} w(u, v)   {pick the maximum weight incoming edge of v}
     T ← T ∪ {(u, v)}
  return T
Algorithm 1 Maximum weight directed spanning tree of a DAG

Putting it all together, we have shown how to efficiently evaluate the log-likelihood F_C(G) of a graph G. Finding the most likely tree for a single cascade takes time linear in the number of edges of the DAG induced by the cascade, and this has to be done for a total of |C| cascades. Interestingly, this is independent of the size of the graph G and only depends on the amount of observed data (i.e., the sizes and the number of cascades).
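For concreteness, here is a short sketch of Algorithm 1 (the function and argument names are our own):

```python
def max_weight_spanning_tree(cascade_nodes, w):
    """Proposition 1 / Algorithm 1: in a DAG, the maximum weight directed
    spanning tree is obtained by giving every non-root node its heaviest
    incoming edge. `cascade_nodes` is assumed sorted by hit time (so edges
    only point forward in time) and `w(u, v)` is the nonnegative weight,
    e.g. the log-likelihood improvement w_c(u, v)."""
    tree = []
    for i, v in enumerate(cascade_nodes):
        if i == 0:
            continue                      # the root has no parent
        # candidate parents are exactly the earlier-infected nodes
        parent = max(cascade_nodes[:i], key=lambda u: w(u, v))
        tree.append((parent, v))
    return tree
```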

3.2 The NetInf algorithm for efficient maximization of F_C(G)

Now we aim to find the graph G that maximizes the log-likelihood F_C(G). First we notice that by construction F_C(K̄) = 0, i.e., the empty graph has score 0. Moreover, we observe that the objective function F_C is non-negative and monotonic: F_C(G) ≤ F_C(G') for graphs G and G' with G ⊆ G'. Hence adding more edges to G does not decrease the solution quality, and thus the complete graph maximizes F_C. Monotonicity can be shown by observing that, as edges are added to G, ε-edges are converted to network edges, and therefore the weight of any tree (and therefore the value of the maximum weight spanning tree) can only increase. However, since real-world social and information networks are usually sparse, we are interested in inferring a sparse graph G that contains only some small number k of edges. Thus we aim to solve:

Problem 2

Given the infection times of a set of cascades C, the probability of propagation β and the incubation time distribution P_c(u, v), find the network Ĝ that maximizes:

Ĝ = argmax_{|G| ≤ k} F_C(G)    (16)

where the maximization is over all graphs G of at most k edges, and F_C(G) is defined by Eqs. 14 and 15.

Naively searching over all k-edge graphs would take time exponential in k, which is intractable. Moreover, finding the optimal solution to Eq. (16) is NP-hard, so we cannot expect to find the optimal solution:

Theorem 2

The network inference problem defined by equation (16) is NP-hard.

Proof.

By reduction from the MAX-k-COVER problem [Khuller et al. (1999)]. In MAX-k-COVER, we are given a finite set W and a collection of subsets S_1, …, S_m ⊆ W. The function

F_MC(A) = |∪_{i ∈ A} S_i|

counts the number of elements of W covered by the sets indexed by A. Our goal is to pick a collection A of k subsets maximizing F_MC(A). We will produce a collection of cascades C over a graph G such that max_{|A| ≤ k} F_MC(A) = max_{|G| ≤ k} F_C(G). The graph is defined over the set of vertices V = {x_1, …, x_m, r}, i.e., there is one vertex x_i for each set S_i and one extra vertex r. For each element w ∈ W we define a cascade c_w which has time stamp 0 associated with all nodes x_i such that w ∈ S_i, time stamp 1 for node r, and ∞ for all remaining nodes.

Furthermore, we can choose the transmission model such that w_{c_w}(x_i, r) = 1 whenever w ∈ S_i, and weight 0 for all remaining edges, by choosing the parameters β, ε and α appropriately. Since a directed spanning tree over a graph G can contain at most one edge incoming to node r, its weight will be 1 if G contains any edge from a node x_i to r with w ∈ S_i, and 0 otherwise. Thus, a graph G of at most k edges corresponds to a feasible solution A to MAX-k-COVER, where we pick set S_i whenever edge (x_i, r) ∈ G, and each solution to MAX-k-COVER corresponds to a feasible solution of (16). Furthermore, by construction, F_MC(A) = F_C(G). Thus, if we had an efficient algorithm for deciding whether there exists a graph G, |G| ≤ k, such that F_C(G) ≥ c, we could use the algorithm to decide whether there exists a solution to MAX-k-COVER with value at least c.

While finding the optimal solution is hard, we now show that satisfies submodularity, a natural diminishing returns property. The submodularity property allows us to efficiently find a provably near-optimal solution to this otherwise NP-hard optimization problem.

A set function F : 2^W → ℝ that maps subsets of a finite set W to the real numbers is submodular if for A ⊆ B ⊆ W and x ∈ W \ B, it holds that

F(A ∪ {x}) − F(A) ≥ F(B ∪ {x}) − F(B).

This simply says that adding x to the smaller set A increases the score at least as much as adding x to the larger set B (A ⊆ B).

Now we are ready to show the following result, which enables us to find a near-optimal network Ĝ:

Theorem 3

Let V be a set of nodes, and C be a collection of cascades hitting the nodes of V. Then F_C(G) is a submodular function of the set G of directed edges.

Proof.

Fix a cascade c, graphs G ⊆ G' and an edge e = (r, s) not contained in G'. We will show that F_c(G ∪ {e}) − F_c(G) ≥ F_c(G' ∪ {e}) − F_c(G'). Since nonnegative linear combinations of submodular functions are submodular, the function F_C(G) = ∑_{c ∈ C} F_c(G) is then submodular as well. Let w_{u,v} be the weight of edge (u, v) in G ∪ {e}, and w'_{u,v} its weight in G' ∪ {e}. As argued before, the maximum weight directed spanning tree for DAGs is obtained by assigning to each node the incoming edge of maximum weight. Let (u, s) be the maximum weight edge incoming at s in G, and (u', s) the maximum weight incoming edge in G'. Since G ⊆ G' it holds that w_{u,s} ≤ w'_{u',s}. Furthermore, w_{r,s} = w'_{r,s}. Hence,

F_c(G ∪ {e}) − F_c(G) = max(0, w_{r,s} − w_{u,s}) ≥ max(0, w'_{r,s} − w'_{u',s}) = F_c(G' ∪ {e}) − F_c(G'),

proving submodularity of F_c.

Maximizing submodular functions in general is NP-hard [Khuller et al. (1999)]. A commonly used heuristic is the greedy algorithm, which starts with an empty graph G_0, and iteratively, in step i, adds the edge e_i which maximizes the marginal gain:

e_i = argmax_{e ∈ V×V \ G_{i−1}} F_C(G_{i−1} ∪ {e}) − F_C(G_{i−1})    (17)

The algorithm stops once it has selected k edges, and returns the solution Ĝ = {e_1, …, e_k}. The stopping criterion, i.e., the value of k, can be based on some threshold on the marginal gain, on the number of estimated edges, or on another more sophisticated heuristic.

In our context we can think of the greedy algorithm as starting on an empty graph G with no network edges. In each iteration i, the algorithm adds to G the edge that currently most improves the value of the log-likelihood. Another way to view the greedy algorithm is that it starts on a fully connected graph where all the edges are ε-edges. Adding an edge to graph G then corresponds to that edge changing its type from ε-edge to network edge. Thus our algorithm iteratively swaps ε-edges to network edges until k network edges have been swapped (i.e., inserted into the network G).

Guarantees on the solution quality. Considering the NP-hardness of the problem, we might expect the greedy algorithm to perform arbitrarily poorly. However, we will see that this is not the case. A fundamental result of Nemhauser et al. [Nemhauser et al. (1978)] proves that for monotonic submodular functions, the set returned by the greedy algorithm obtains at least a constant fraction (1 − 1/e), i.e., about 63%, of the optimal value achievable using k edges.

Moreover, we can acquire a tight online data-dependent bound on the solution quality:

Theorem 4 ([Leskovec et al. (2007)])

For a graph Ĝ and each edge e ∉ Ĝ, let δ_e = F_C(Ĝ ∪ {e}) − F_C(Ĝ). Let e_1, …, e_B be the sequence of edges with δ_{e_i} in decreasing order, where B is the total number of edges with marginal gain greater than 0. Then,

max_{|G| ≤ k} F_C(G) ≤ F_C(Ĝ) + ∑_{i=1}^{min(B,k)} δ_{e_i}.

Theorem 4 computes how far a given Ĝ (obtained by any algorithm) is, at most, from the unknown, NP-hard-to-find optimum.
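In code, the bound is a one-liner (a sketch with our own names: current_score stands for F_C(Ĝ) and marginal_gains for the values δ_e of the edges outside Ĝ):

```python
def online_bound(current_score, marginal_gains, k):
    """Data-dependent bound of Theorem 4: the unknown optimum over graphs
    with at most k edges is at most the score of the current solution plus
    the sum of the k largest positive marginal gains."""
    top_k = sorted((d for d in marginal_gains if d > 0), reverse=True)[:k]
    return current_score + sum(top_k)
```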

Speeding up the NetInf algorithm. To make the algorithm scale to networks with thousands of nodes, we speed up the algorithm by several orders of magnitude by considering the following two improvements:

Localized update: Let C_i be the subset of cascades that go through node i (i.e., cascades in which node i is infected). Consider that in some step the greedy algorithm selects the network edge (j, i) with marginal gain δ_{ji}, and now we have to update the optimal tree of each cascade. We make the simple observation that adding the network edge (j, i) may only change the optimal trees of the cascades in the set C_i, and thus we only need to revisit (and potentially update) the trees of cascades in C_i. Since cascades are local (i.e., each cascade hits only a relatively small subset of the network), this localized updating procedure speeds up the algorithm considerably.

Lazy evaluation: It can be used to drastically reduce the number of evaluations of marginal gains F_C(G ∪ {e}) − F_C(G) [Leskovec et al. (2007)]. This procedure relies on the submodularity of F_C. The key idea behind lazy evaluations is the following. Suppose G_1, G_2, …, G_k is the sequence of graphs produced during the iterations of the greedy algorithm. Now let us consider the marginal gain

δ_e(G_i) = F_C(G_i ∪ {e}) − F_C(G_i)

of adding some edge e to any of these graphs. Due to the submodularity of F_C it holds that δ_e(G_i) ≥ δ_e(G_j) whenever i ≤ j. Thus, the marginal gains of e can only monotonically decrease over the course of the greedy algorithm. This means that elements which achieve very little marginal gain at iteration i cannot suddenly produce large marginal gain at subsequent iterations. This insight can be exploited by maintaining a priority queue data structure over the edges and their respective marginal gains. At each iteration, the greedy algorithm retrieves the highest weight (priority) edge. Since its value may have decreased from previous iterations, it recomputes its marginal benefit. If the marginal gain remains the same after recomputation, it has to be the edge with the highest marginal gain, and the greedy algorithm picks it. If it decreases, one reinserts the edge with its new weight into the priority queue and continues. Formal details and pseudo-code can be found in [Leskovec et al. (2007)].
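The following sketch shows the lazy-evaluation loop (our own simplified version; marginal_gain(e, G) stands for the recomputation of δ_e against the current solution G):

```python
import heapq

def lazy_greedy(candidate_edges, marginal_gain, k):
    """Lazy-evaluation greedy selection. Stale priority-queue entries are
    valid upper bounds because submodularity guarantees that marginal gains
    only shrink as the solution grows; an entry is trusted only if it was
    computed against the current solution size."""
    G = []
    heap = [(-marginal_gain(e, G), e, 0) for e in candidate_edges]  # max-heap
    heapq.heapify(heap)
    while len(G) < k and heap:
        neg_gain, e, computed_at = heapq.heappop(heap)
        if computed_at == len(G):
            G.append(e)                   # gain is fresh, so e is the true maximizer
        else:
            # recompute the stale gain and push the entry back
            heapq.heappush(heap, (-marginal_gain(e, G), e, len(G)))
    return G
```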

As we will show later, these two improvements decrease the run time by several orders of magnitude with no loss in the solution quality. We call the algorithm that implements the greedy algorithm on this alternative formulation, with the above speedups, the NetInf algorithm (Algorithm 2). In addition, NetInf lends itself nicely to parallelization, as the likelihoods of individual cascades and the likelihood improvements of individual new edges can simply be computed independently. This allows us to tackle even bigger networks in shorter amounts of time.

The space and runtime complexity of NetInf depend heavily on the structure of the network, and a formal analysis would therefore require strong assumptions on that structure. For this reason, a formal complexity analysis is out of the scope of the paper; instead, we include an empirical runtime analysis in the following section.

0:  Cascades and hit times C = {(c, t_c)}, number of network edges k
  G ← ∅
  for all c ∈ C do
     T_c ← maximum weight directed spanning tree of cascade c   {Find most likely tree (Algorithm 1)}
  while |G| < k do
     for all (j, i) ∉ G do
        δ_{ji} ← 0   {Marginal improvement of adding edge (j, i) to G}
        M_{ji} ← ∅   {Cascades whose trees would change}
        for all c ∈ C with t_c(j) < t_c(i) do
           Let w_c(m, i) be the weight of the current parent edge (m, i) of node i in T_c
           if w_c(j, i) > w_c(m, i) then
              δ_{ji} ← δ_{ji} + w_c(j, i) − w_c(m, i)
              M_{ji} ← M_{ji} ∪ {c}
     (j*, i*) ← argmax_{(j,i) ∉ G} δ_{ji}
     G ← G ∪ {(j*, i*)}
     for all c ∈ M_{j*,i*} do
        T_c ← T_c with the parent edge of i* replaced by (j*, i*)
  return G
Algorithm 2 The NetInf Algorithm

4 Experimental evaluation

In this section we proceed with the experimental evaluation of our proposed NetInf algorithm for inferring networks of diffusion. We analyze the performance of NetInf on synthetic and real networks. We show that our algorithm performs surprisingly well: it outperforms a heuristic baseline and correctly discovers more than 90% of the edges of a typical diffusion network.

4.1 Experiments on synthetic data

The goal of the experiments on synthetic data is to understand how the underlying network structure and the propagation model (exponential and power-law) affect the performance of our algorithm. The second goal is to evaluate the effect of the simplifications we had to make in order to arrive at an efficient network inference algorithm. Namely, we assume the contagion propagates in a tree pattern (i.e., exactly |V_T| − 1 edges caused the propagation), consider only the most likely tree (Eq. 12), and treat non-propagating network edges as ε-edges (Eq. 11).

(a) FF: Cascades per edge
(b) FF: Cascade size
Figure 5: Number of cascades per edge and cascade sizes for a Forest Fire network (1,024 nodes, 1,477 edges) with the exponential incubation time model. The cascade size distribution follows a power law; we found the power-law coefficient using maximum likelihood estimation (MLE).

In general, in all our experiments we proceed as follows: We are given a true diffusion network G*, and we simulate the propagation of a set of contagions over G*. The diffusion of each contagion creates a cascade, and for each cascade we record the node hit times t_c. Then, given these node hit times, we aim to recover the network G* using the NetInf algorithm. For example, Figure 1(a) shows a graph G* of 20 nodes and 23 directed edges. Using the exponential incubation time model we generated a set of cascades. Given the node infection times, we then aim to recover G*. A baseline method (Figure 1(b), described below) performed poorly, while NetInf (Figure 1(c)) recovered G* almost perfectly, making only two errors (red edges).

Experimental setup. Our experimental methodology is composed of the following steps:

  1. Ground truth graph G*

  2. Cascade generation: probability of propagation β and the incubation time model with parameter α

  3. Number of cascades |C|

(1) Ground truth graph G*: We consider two models of directed real-world networks to generate G*, namely, the Forest Fire model [Leskovec et al. (2005)] and the Kronecker Graphs model [Leskovec and Faloutsos (2007)]. For Kronecker graphs, we consider three sets of parameters that produce networks with a very different global network structure: a random graph [Erdős and Rényi (1960)], a core-periphery network [Leskovec et al. (2008)], and a network with hierarchical community structure [Clauset et al. (2008)]. The Forest Fire model generates networks with power-law degree distributions that follow the densification power law [Barabási and Albert (1999), Leskovec et al. (2007)].

(2) Cascade propagation: We then simulate cascades on G* using the generative model defined in Section 2.2. For the simulation we need to choose the incubation time model (i.e., power-law or exponential, with parameter α). We also need to fix the parameter β, which controls the probability of a cascade propagating over an edge. Intuitively, α controls how fast the cascade spreads (i.e., how long the incubation times are), while β controls the size of the cascades. A large β means cascades will likely be large, while a small β makes most of the edges fail to transmit the contagion, which results in small infections (a simulation sketch is given after this list).

(3) Number of cascades: Intuitively, the more data our algorithm gets, the more accurately it should infer G*. To quantify the amount of data (number of different cascades), we define E_j to be the set of edges that participate in at least j cascades, i.e., the set of edges that transmitted at least j contagions. It is important to note that if an edge of G* did not participate in any cascade (i.e., it never transmitted a contagion), then there is no trace of it in our data and thus we have no chance to infer it. In our experiments we choose the minimal amount of data so that we can at least in principle infer the true network G*. Thus, we generate as many cascades as needed for the set E_1 to contain a given fraction of all the edges of the true network G*. In all our experiments we pick cascade starting nodes uniformly at random and generate enough cascades so that 99% of the edges in G* participate in at least one cascade, i.e., 99% of the edges are included in E_1.
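A simplified sketch of the simulation in step (2) follows (our own code; it infects each node through the first neighbor that reaches it during the traversal, which preserves the tree structure of cascades, and the parameter values are illustrative):

```python
import random

def simulate_cascade(out_neighbors, root, beta=0.5, alpha=1.0):
    """Simulate one cascade over a ground-truth network. `out_neighbors`
    maps each node to the list of its out-neighbors. Returns the hit times
    of infected nodes; nodes never reached are absent (hit time infinity)."""
    hit = {root: 0.0}
    frontier = [root]
    while frontier:
        u = frontier.pop()
        for v in out_neighbors.get(u, []):
            if v not in hit and random.random() < beta:   # edge transmits w.p. beta
                hit[v] = hit[u] + random.expovariate(1.0 / alpha)  # exponential model
                frontier.append(v)
    return hit
```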

Table 2 shows experimental values of the number of cascades needed for E_1 to cover different percentages of the edges. To have a closer look at the cascade size distribution, for a Forest Fire network on 1,024 nodes and 1,477 edges, we generated 4,038 cascades. The majority of edges took part in 4 to 12 cascades, and the cascade size distribution follows a power law (Figure 5(b)). The average and median number of cascades per edge are 9.1 and 8, respectively (Figure 5(a)).

Type of network	f	|C|	# transmissions	BEP	AUC
Forest Fire	0.5	388	2,898	0.393	0.29
Forest Fire	0.9	2,017	14,027	0.75	0.67
Forest Fire	0.95	2,717	19,418	0.82	0.74
Forest Fire	0.99	4,038	28,663	0.92	0.86
Kronecker	0.5	289	1,341	0.37	0.30
Kronecker	0.9	1,209	5,502	0.81	0.80
Kronecker	0.95	1,972	9,391	0.90	0.90
Kronecker	0.99	5,078	25,643	0.98	0.98
Kronecker	0.5	140	1,392	0.31	0.23
Kronecker	0.9	884	9,498	0.84	0.80
Kronecker	0.95	1,506	14,125	0.93	0.91
Kronecker	0.99	3,110	30,453	0.98	0.96
Kronecker	0.5	200	1,324	0.34	0.26
Kronecker	0.9	1,303	7,707	0.84	0.83
Kronecker	0.95	1,704	9,749	0.89	0.88
Kronecker	0.99	3,652	21,153	0.97	0.97
Table 2: Performance on synthetic data. Break-even point (BEP) and area under the ROC curve (AUC) when we generate the minimum number of cascades |C| such that a fraction f of the edges participates in at least one cascade. These cascades generate the listed total number of edge transmissions; the average cascade size is the ratio of the two. All networks have 1,024 nodes and 1,446 edges (the three Kronecker blocks correspond to the three Kronecker variants described above). We use the exponential incubation time model, and in each case we set the propagation probability β so that cascades are neither too small nor too large.

Baseline method. To infer a diffusion network Ĝ, we consider a simple baseline heuristic where we compute a score for each edge and then pick the k edges with the highest score.

More precisely, for each possible edge (u, v) of G, we compute the score ∑_{c ∈ C} w_c(u, v), i.e., overall how likely the cascades were to propagate over the edge (u, v). Then we simply pick the k edges with the highest score to obtain Ĝ. For example, Figure 1(b) shows the results of the baseline method on a small graph.
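A sketch of this baseline (our own code; edge_weight(u, v, c) stands for the per-cascade weight w_c(u, v)):

```python
from collections import defaultdict

def baseline(cascades, edge_weight, k):
    """Score every time-respecting candidate edge by the sum of its
    per-cascade weights and return the k highest-scoring edges.
    `cascades` is a list of {node: hit_time} dicts."""
    score = defaultdict(float)
    for c in cascades:
        nodes = sorted(c, key=c.get)              # order infected nodes by hit time
        for i, v in enumerate(nodes):
            for u in nodes[:i]:                   # only pairs with t_u < t_v
                score[(u, v)] += edge_weight(u, v, c)
    return sorted(score, key=score.get, reverse=True)[:k]
```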

Solution quality. We evaluate the performance of the NetInf algorithm in two different ways. First, we are interested in how successful NetInf is at optimizing the objective function F_C(G), which is NP-hard to optimize exactly. Using the online bound in Theorem 4, we can assess at most how far the NetInf solution is from the unknown optimum in terms of the log-likelihood score. Second, we also evaluate NetInf based on accuracy, i.e., what fraction of the edges of G* NetInf manages to infer correctly.

Figure 6(a) plots the value of the log-likelihood improvement F_C(G) as a function of the number of edges in G. In red we plot the value achieved by NetInf and in green the upper bound obtained using Theorem 4. The plot shows that the value of the unknown optimal solution (that is NP-hard to compute exactly) lies somewhere between the red and the green curve. Notice that the band between the two curves, the optimal and the NetInf curve, is narrow. For example, at 2,000 edges in G, NetInf finds a solution that is at least 97% of the optimum. Moreover, also notice a strong diminishing returns effect: the value of the objective function flattens out after about 1,000 edges. This means that, in practice, very sparse solutions (almost tree-like diffusion graphs) already achieve values of the objective function very close to the optimal.

(a) Kronecker network
(b) Real MemeTracker data
Figure 6: Score achieved by NetInf in comparison with the online upper bound from Theorem 4. In practice NetInf finds networks whose score is at least 97% of the NP-hard-to-compute optimum.

Accuracy of NetInf. We also evaluate our approach by studying how many edges inferred by NetInf are actually present in the true network G*. We measure the precision and recall of our method. For every value of k we generate a graph Ĝ on k edges by using NetInf or the baseline method. We then compute precision (which fraction of edges in Ĝ is also present in G*) and recall (which fraction of edges of G* appears in Ĝ). For small k, we expect low recall and high precision, as we select the few edges that we are most confident in. As k increases, precision will generally start to drop but the recall will increase.
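The evaluation itself reduces to simple set arithmetic (a sketch):

```python
def precision_recall(inferred_edges, true_edges):
    """Precision and recall of an inferred edge set against the ground
    truth network, as used to produce the curves in Figure 7."""
    inferred, true = set(inferred_edges), set(true_edges)
    correct = len(inferred & true)
    precision = correct / len(inferred) if inferred else 1.0
    recall = correct / len(true) if true else 0.0
    return precision, recall
```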

(a) Hier. Kronecker (Exp)
(b) Core-Periph. Kronecker (Exp)
(c) Flat Kronecker (Exp)
(d) Hier. Kronecker (PL)
(e) Core-Periph. Kronecker (PL)
(f) Flat Kronecker (PL)
(g) Forest Fire (PL, )
(h) Forest Fire (PL, )
Figure 7: Precision and recall for three 1,024-node Kronecker networks and a Forest Fire network with the exponential (Exp) and power-law (PL) incubation time models. The plots are generated by sweeping over values of k, which controls the sparsity of the solution.

Figure 7 shows the precision-recall curves of NetInf and the baseline method on three different Kronecker graphs (random, hierarchical community structure and core-periphery structure) with 1,024 nodes, for the two incubation time models. The cascades were generated with an exponential or a power-law incubation time model, with the value of β low enough to avoid generating too large cascades. For each network we generated between 2,000 and 4,000 cascades so that 99% of the edges of G* participated in at least one cascade. We chose cascade starting points uniformly at random.

First, we focus on Figures 7(a), 7(b) and 7(c), where we use the exponential incubation time model on different Kronecker graphs. Notice that the baseline method achieves a break-even point (the value at which precision equals recall) between 0.4 and 0.5 on all three networks. On the other hand, NetInf performs much better, with a break-even point of 0.99 on all three datasets.

We view this as a particularly strong result, as we were especially careful not to generate too many cascades, since more cascades mean more evidence that makes the problem easier. Thus, using a very small number of cascades, where every edge of $G^*$ participates in only a few cascades, we can almost perfectly recover the underlying diffusion network $G^*$. A second important point is that the performance of NetInf is strong regardless of the structure of the network $G^*$. This means that NetInf works reliably regardless of the particular structure of the network over which contagions propagate (refer to Table 2).

Similarly, Figures 7(d), 7(e) and 7(f) show the performance on the same three networks but using the power-law incubation time model. The performance of the baseline now drops dramatically. This is likely due to the fact that the variance of the power-law distribution (and heavy-tailed distributions in general) is much larger than the variance of the exponential distribution, which makes the diffusion network inference problem much harder. While the baseline pays a high price for this increase in variance, with its break-even point dropping sharply, the performance of NetInf remains stable, with a break-even point in the high 0.90s.

Figure 8: Performance of NetInf as a function of the amount of cascade data. The units on the x-axis are normalized: a value of 1 means that the total number of transmission events used for the experiment was equal to the number of edges in $G^*$. On average, NetInf requires about two propagation events per edge of the original network in order to reliably recover the true network structure.

We also examine the results on the Forest Fire network (Figures 7(g) and 7(h)). Again, the performance of the baseline is very low, while NetInf achieves a break-even point of around 0.90.

Generally, the performance on the Forest Fire network is a bit lower than on the Kronecker graphs. However, it is important to note that while these networks have very different global structure (from hierarchical and random to scale-free and core-periphery), the performance of NetInf is remarkably stable and does not seem to depend on the structure of the network we are trying to infer or on the particular cascade incubation time model.

Finally, in all the experiments, we observe a sharp drop in precision for high values of recall (near 1). This happens because the greedy algorithm starts to choose edges with low marginal gains, which are more likely to be false edges, increasing the probability of making mistakes.

Performance vs. cascade coverage. Intuitively, the larger the number of cascades that spread over a particular edge, the easier it is to identify it. If an edge never transmitted a contagion, we cannot identify it at all, and the more times it participated in transmissions, the easier it should be to identify.

In our experiments so far, we generated a relatively small number of cascades. Next, we examine how the performance of NetInf depends on the amount of available cascade data. This is important because in many real-world situations only a few different cascades are available.

Figure 8 plots the break-even point of NetInf as a function of the available cascade data, measured as the number of contagion transmission events over all cascades. The total number of contagion transmission events is simply the sum of the cascade sizes; thus, a value of 1 on the x-axis means that the total number of transmission events used for the experiment was equal to the number of edges in $G^*$. Notice that as the amount of cascade data increases, the performance of NetInf also increases. Overall, we find that NetInf requires the total number of transmission events to be about 2 times the number of edges in $G^*$ in order to successfully recover most of the edges of $G^*$.

Moreover, the plot shows the performance for different values of the edge transmission probability $\beta$. As noted before, larger values of $\beta$ produce larger cascades. Interestingly, when cascades are small (small $\beta$), NetInf needs less data to infer the network than when cascades are large. This occurs because the larger a cascade, the more difficult it is to infer the parent of each node, since each node has more potential parents to choose from. For example, with the smaller value of $\beta$, NetInf needs about half as many transmission events as with the larger value to reach the same break-even point.

Figure 9: Average time per edge added by our algorithm implemented with lazy evaluation (LE) and localized update (LU).

Stopping criterion. In practice one does not know how long to run the algorithm, i.e., how many edges to insert into the network $\hat{G}$. Given the results from Figure 6, we found the following heuristic to give good results. We run the NetInf algorithm for $k$ steps, where $k$ is chosen such that the objective function is "close" to the upper bound, i.e., $F(\hat{G}) \ge x \cdot \mathrm{OPT}$, where OPT is obtained using the online bound. In practice we use values of $x$ close to one. That means that in each iteration $k$, OPT is computed by evaluating the right-hand side of the equation in Theorem 4, where $k$ is simply the iteration number. Therefore, OPT is computed online, and thus the stopping condition is also updated online.
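A sketch of this stopping rule, reusing the `online_bound` helper sketched earlier and treating the threshold `x` as a user-chosen constant (no default from the paper is reproduced here):

```python
def greedy_with_stopping(F, candidates, online_bound, x):
    """Greedy edge insertion with an online stopping rule: after each
    iteration k, recompute the online upper bound OPT_k and stop as
    soon as the objective reaches an x-fraction of it."""
    G_hat = set()
    remaining = set(candidates)
    while remaining:
        # Greedy step: add the edge with the largest marginal gain.
        best = max(remaining, key=lambda e: F(G_hat | {e}))
        G_hat.add(best)
        remaining.discard(best)
        k = len(G_hat)
        opt_k = online_bound(F, G_hat, remaining, k)
        if F(G_hat) >= x * opt_k:
            break
    return G_hat
```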

Scalability. Figure 9 shows the average computation time per edge added for the NetInf algorithm implemented with lazy evaluation and localized update. We use a hierarchical Kronecker network and the exponential incubation time model. The localized update speeds up the algorithm by about an order of magnitude (45×), and lazy evaluation gives a further factor of 6 improvement. Thus, overall, we achieve a speed-up of about two orders of magnitude (280×) without any loss in solution quality.

In practice, the NetInf algorithm can easily be used to infer networks of 10,000 nodes in a matter of hours.
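The lazy-evaluation component is the standard CELF-style optimization for submodular maximization; a sketch under that assumption is below. The localized-update optimization is not shown, since it depends on the internal structure of the objective (only the cascades touched by the last added edge need their gains recomputed).

```python
import heapq

def lazy_greedy(F, candidates, k):
    """Lazy-evaluation greedy: cached marginal gains are kept in a
    max-heap and re-evaluated only when an edge surfaces at the top,
    which is valid because submodularity guarantees that gains can
    only shrink as more edges are added."""
    G = set()
    base = F(G)
    # heap entries: (-gain, edge, iteration at which gain was computed)
    heap = [(-(F({e}) - base), e, 1) for e in candidates]
    heapq.heapify(heap)
    for it in range(1, k + 1):
        while True:
            neg_gain, e, stamp = heapq.heappop(heap)
            if stamp == it:                 # gain is fresh this round
                G.add(e)
                base = F(G)
                break
            gain = F(G | {e}) - base        # stale: recompute, re-insert
            heapq.heappush(heap, (-gain, e, it))
    return G
```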

Figure 10: Break-even point of NetInf as a function of the amount of additive Gaussian noise in the incubation time.

Performance vs. incubation time noise. In our experiments so far, we have assumed that the incubation times between infections are not noisy and that we have access to the true distribution from which the incubation times are drawn. However, real data may violate either of these two assumptions.

We study the performance of NetInf (break-even point) as a function of the noise in the waiting times between infections. Specifically, we add Gaussian noise to the waiting times between infections in the cascade generation process.
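A minimal sketch of this perturbation, wrapping an incubation-time sampler such as the ones sketched earlier; the clamping at zero is our own assumption, added so that infection times still move forward.

```python
import random

def noisy_incubation(base_sampler, sigma):
    """Wrap an incubation-time sampler so that every waiting time
    between infections is perturbed by additive Gaussian noise of
    standard deviation sigma (clamped at zero)."""
    return lambda: max(0.0, base_sampler() + random.gauss(0.0, sigma))

# Illustrative usage with the cascade generator sketched earlier:
# generate_cascade(adj, beta=0.5, root=0,
#                  incubation=noisy_incubation(exp_incubation, sigma=2.0))
```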

Figure 10 plots the performance of NetInf (break-even point) as a function of the amount of Gaussian noise added to the incubation times between infections, for both the exponential and the power-law incubation time models. The break-even point degrades with noise, but once a high level of noise is reached, further increasing the amount of noise does not degrade the performance of NetInf any further. Interestingly, the break-even point for high values of noise is very similar to the break-even point achieved later on real data (Figure 13).

Performance vs. infections by the external source. In all our experiments so far, we have assumed that we have access to complete cascade data, i.e., that we are able to observe all the nodes taking part in each cascade. Thus, except for the first node of a cascade, there are no "jumps" or missing nodes in the cascade as it spreads across the network. Even though techniques for coping with missing data in information cascades have recently been investigated [Sadikov et al. (2011)], we evaluate NetInf under two such scenarios: missing observations and infections caused by an external source.

(a) Missing node infection data
(b) Node infections due to external source
Figure 11: Break-even point of NetInf (a) as a function of the fraction of missing nodes per cascade, and (b) as a function of the fraction of nodes per cascade that are infected by an external source.

First, we consider the case where a random fraction of each cascade is missing. This means that we first generate a set of cascades, but then record the infection times of only a fraction of the nodes. Specifically, we first generate enough cascades so that, even without counting the nodes that will be missing, 99% of the edges in $G^*$ still participate in at least one cascade. Then we randomly delete (i.e., set the infection times to infinity) a given fraction of the nodes in each cascade.
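This deletion step can be sketched as follows, treating a cascade as a node-to-infection-time dict as before:

```python
import math
import random

def hide_fraction(cascade, f):
    """Simulate incomplete observations: set the infection times of a
    random f-fraction of a cascade's nodes to infinity, i.e., treat
    those nodes as never observed."""
    nodes = list(cascade)
    hidden = set(random.sample(nodes, int(f * len(nodes))))
    return {v: (math.inf if v in hidden else t)
            for v, t in cascade.items()}
```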

Figure 11(a) plots the performance of NetInf (break-even point) as a function of the percentage of missing nodes in each cascade. Naturally, the performance drops with the amount of missing data. However, we also note that the effect of missing nodes can be mitigated by an appropriate choice of the parameter $\varepsilon$. Higher $\varepsilon$ makes propagation via $\varepsilon$-edges more likely, and thus, by giving a cascade a greater chance to propagate over the $\varepsilon$-edges, NetInf can implicitly account for the missing data.

Second, we also consider the case where the contagion does not spread through the network via diffusion but rather through the influence of an external source. In this case, the contagion does not really spread over the edges of the network; instead, it appears almost at random at various nodes of the network.

Figure 11(b) plots the performance of NetInf (break-even point) as a function of the percentage of nodes that are infected by an external source, for different values of $\varepsilon$. In our framework, we model the influence of the external source with the $\varepsilon$-edges. Note that setting $\varepsilon$ appropriately can account for exogenous infections that are not the result of network diffusion but of external influence. The higher the value of $\varepsilon$, the stronger the assumed influence of the external source, i.e., we assume a greater number of missing nodes or nodes infected by an external source. Thus, the break-even point is more robust for higher values of $\varepsilon$.

Figure 12: Hyperlink-based cascades versus meme-based cascades. In hyperlink cascades, if one post links to another, we consider this a contagion transmission event, with the post creation time as the corresponding infection time. In MemeTracker cascades, we follow the spread of a short textual phrase and use post creation times as infection times.

4.2 Experiments on real data

Dataset description. We use more than 170 million news articles and blog posts collected from online sources over a period of one year, from September 1, 2008 until August 31, 2009. Based on this raw data, we use two different methodologies to trace information on the Web and thus create two different datasets:

(1) Blog hyperlink cascades dataset: We use hyperlinks between blog posts to trace the flow of information [Leskovec et al. (2007)]. When a blog publishes a piece of information and uses hyperlinks to refer to posts published by other blogs, we consider this an event of information transmission. A cascade starts when a blog publishes a post $p$, and the information propagates recursively to other blogs as they link to $p$ or to one of the other posts from which we can trace a chain of hyperlinks all the way back to $p$. By following the chains of hyperlinks in the reverse direction, we identify hyperlink cascades [Leskovec et al. (2007)]. A cascade is thus composed of the time-stamps of the hyperlink/post creation times.

(2) MemeTracker dataset: We use the MemeTracker [Leskovec et al. (2009)] methodology to extract more than 343 million short textual phrases (such as "Joe, the plumber" or "lipstick on a pig"). Out of these, 8 million distinct phrases appeared more than 10 times, with a cumulative number of mentions of over 150 million. We cluster the phrases to aggregate different textual variants of the same phrase [Leskovec et al. (2009)] and then consider each phrase cluster as a separate cascade $c$. Since all documents are time-stamped, a cascade $c$ is simply the set of time-stamps at which sites first mentioned a phrase in the cluster. So, we observe the times when sites mention particular phrases, but not where they copied or obtained the phrases from. We consider the largest 5,000 cascades (phrase clusters), and for each website we record the time when it first mentions a phrase in the particular phrase cluster. Note that cascades in general do not spread over all the sites, which our methodology successfully handles.
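A sketch of this construction, assuming the raw data is available as (site, phrase-cluster, timestamp) triples; this input format, and the function name, are our own assumptions, not the paper's pipeline.

```python
from collections import defaultdict

def build_meme_cascades(mentions, top_k=5000):
    """Convert raw (site, phrase_cluster, timestamp) mentions into
    cascades: for each phrase cluster, keep the earliest time each
    site mentioned any phrase in it, then return the top_k largest
    cascades (phrase clusters)."""
    first = defaultdict(dict)           # cluster -> {site: first time}
    for site, cluster, t in mentions:
        if site not in first[cluster] or t < first[cluster][site]:
            first[cluster][site] = t
    return sorted(first.values(), key=len, reverse=True)[:top_k]
```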

Figure 12 further illustrates the concept of hyperlink and MemeTracker cascades.

(a) Blog hyperlink cascades dataset
(b) MemeTracker dataset
Figure 13: Precision and recall for a 500-node hyperlink network using (a) the blog hyperlink cascades dataset (i.e., hyperlink cascades) and (b) the MemeTracker dataset (i.e., MemeTracker cascades). We used the exponential incubation time model; the time units were hours.

Accuracy on real data. As there is no ground truth network for either dataset, we construct the ground truth network $G^*$ as follows. We take the top 500 sites in terms of the number of hyperlinks they create or receive, represent each site as a node of $G^*$, and connect a pair of nodes with a directed edge if a post on the first site linked to a post on the second site. This process produces a ground truth network $G^*$ with 500 nodes and 4,000 edges.
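A sketch of this construction, assuming the post-level hyperlinks are available as (source site, destination site) pairs; the helper name and input format are illustrative.

```python
from collections import Counter

def ground_truth_network(post_links, n_sites=500):
    """Build the ground-truth site graph from post-level hyperlinks:
    keep the n_sites sites involved in the most hyperlinks, then add
    a directed edge (u, v) whenever some post on site u linked to a
    post on site v."""
    links = list(post_links)
    degree = Counter()
    for u, v in links:
        degree[u] += 1
        degree[v] += 1
    top = {s for s, _ in degree.most_common(n_sites)}
    return {(u, v) for u, v in links if u in top and v in top}
```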

First, we use the blog hyperlink cascades dataset to infer the network and evaluate how many edges NetInf gets right. Figure 13(a) shows the performance of NetInf and the baseline. Notice that the baseline method achieves a break-even point of 0.34, while our method performs better, with a break-even point of 0.44, almost a 30% improvement.

Figure 14: Small part of a news media (red) and blog (blue) diffusion network. We use the blog hyperlink cascades dataset, i.e., hyperlinks between blog and news media posts to trace the flow of information.

NetInf is essentially performing a link-prediction task based only on temporal linking information. The assumption in this experiment is that sites prefer to create links to sites that recently mentioned the information, while completely ignoring the authority of the site. Given that this assumption is not fully satisfied in real life, we consider the break-even point of 0.44 a good result.

Next, we consider an even harder problem, where we use the MemeTracker dataset to infer $G^*$. In this experiment, we only observe the times when sites mention particular textual phrases, and the task is to infer the hyperlink structure of the underlying web graph. Figure 13(b) shows the performance of NetInf and the baseline. The baseline method has a break-even point of 0.17, while NetInf achieves a break-even point of 0.28, more than a 50% improvement.

For a fair comparison with the synthetic cases, note that the exponential incubation time model is a simplistic assumption for our real dataset; NetInf could potentially gain additional accuracy with a more realistic incubation time model.

Solution quality. As with the synthetic data, in Figure 6(b) we investigate the value of the objective function and compare it to the online bound. Notice that the bound is almost as tight as in the case of synthetic networks, showing that the solution is at least 84% of optimal, and both curves are similar in shape to those for the synthetic data. Again, as in the synthetic case, the value of the objective function quickly flattens out, which means that relatively few edges suffice to capture most of the information flow on the Web.

Figure 15: Small part of a news media (red) and blog (blue) diffusion network. We use the MemeTracker dataset, i.e., textual phrases from MemeTracker to trace the flow of information.

In the remainder of the section, we use the top 1,000 media sites and blogs with the largest number of documents.

Visualization of diffusion networks. We examine the structure of the inferred diffusion networks using both datasets: the blog hyperlink cascades dataset and the MemeTracker dataset.

Figure 14 shows the largest connected component of the diffusion network, after a given number of edges has been chosen, inferred from the first dataset, i.e., using hyperlinks to track the flow of information. The size of a node is proportional to the number of articles on the site, and the width of an edge is proportional to the probability of influence, i.e., stronger edges are drawn wider. The strength of an edge across all cascades is defined as the marginal gain obtained by adding the edge in the greedy algorithm (which is proportional to the probability of influence). Since news media articles rarely use hyperlinks to refer to one another, the network is somewhat biased towards blogs (blue nodes). There are several interesting patterns to observe.

First, notice that three main clusters emerge: on the left side of the network are blogs and news media sites related to politics; at the top right are blogs devoted to gossip, celebrity news and entertainment; and at the bottom right are blogs and news media sites that deal with technology news. While Huffington Post and Political Carnival play the central roles on the political side of the network, mainstream media sites like Washington Post, Guardian and the professional blog Salon.com act as connectors between the different parts of the network. The celebrity-gossip part of the network is dominated by the blog Gawker, and technology news gathers around the blogs Gizmodo and Engadget, with CNet and TechCrunch establishing the connection to the rest of the network.

Figure 15 shows the largest connected component of the diffusion network inferred using the second methodology, i.e., using short textual phrases to track the flow of information. In this case, the network is biased towards news media sites due to their higher volume of information.

Insights into the diffusion on the Web. The inferred diffusion networks also allow us to analyze the global structure of information propagation on the Web. For this analysis, we use the MemeTracker dataset and analyze the structure of the inferred information diffusion network.

(a) Influence Index
(b) Number of edges as iterations proceed
(c) Median edge time lag
Figure 16: (a) Distribution of the node influence index. Most nodes have very low influence (they act as sinks). (b) Number and strength of edges between different media types; edges through which news media influence blogs are the strongest. (c) Median time lag on edges of different types.

First, Figure 16(a) shows the distribution of the influence index. The influence index of a node $u$ is defined as the number of nodes reachable from $u$ by traversing edges of the inferred diffusion network (while respecting edge directions). However, we are also interested in the distance from $u$ to the nodes it reaches, since nodes at shorter distances are more likely to be infected by $u$. Thus, we slightly modify the influence index of $u$ to be $\sum_{v} 1/d(u, v)$, where the sum runs over all nodes $v$ reachable from $u$ and $d(u, v)$ is the distance between $u$ and $v$. Notice that there are two types of nodes. There is a small set of nodes that can reach many other nodes, which means they directly or indirectly propagate information to them. On the other side, there is a large number of sites that only get influenced but do not influence many other sites. This hints at a core-periphery structure of the diffusion network, with a small set of sites directly or indirectly spreading information to the rest of the network.
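The distance-weighted influence index is a simple breadth-first-search computation; a sketch follows, with adjacency given as a dict from node to successor list.

```python
from collections import deque

def influence_index(adj, u):
    """Distance-weighted influence index: sum of 1/d(u, v) over all
    nodes v reachable from u (respecting edge directions), where
    d(u, v) is the BFS hop distance."""
    dist = {u: 0}
    queue = deque([u])
    while queue:
        w = queue.popleft()
        for v in adj.get(w, ()):
            if v not in dist:
                dist[v] = dist[w] + 1
                queue.append(v)
    return sum(1.0 / d for d in dist.values() if d > 0)
```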

Figure 16(b) investigates the number of links in the inferred network that point between different types of sites. Here we split the sites into mainstream media and blogs. Notice that most of the links point from news media to blogs, which suggests that most of the time information propagates from the mainstream media to blogs. Also notice how at first many media-to-media links are chosen, but in later iterations the growth of these links slows down. This means that media-to-media links tend to be the strongest, so NetInf picks them early. The opposite occurs for blog-to-blog links: relatively few are chosen at first, but the algorithm picks more of them later. Lastly, links capturing the influence of blogs on mainstream media are the rarest and weakest. This suggests that most information travels from mass media to blogs.

Finally, Figure 16(c) shows the median time difference between mentions for different types of sites. For every edge of the inferred diffusion network, we compute the median time needed for information to spread from the source to the destination node. Again, we distinguish between mainstream media sites and blogs. Notice that media sites are quick to infect one another or even to get infected by blogs. However, blogs tend to be much slower in propagating information: it takes a relatively long time for them to get "infected" with information, regardless of whether the information comes from the mainstream media or the blogosphere.
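A sketch of this per-edge-type median-lag computation, under the same cascade representation as before and with `site_type` as a hypothetical node-to-type mapping (e.g., 'media' or 'blog'):

```python
import statistics

def median_lag_by_type(edges, cascades, site_type):
    """Median per-edge propagation delay, aggregated by edge type,
    e.g. ('media', 'blog'). For every inferred edge (u, v), collect
    t_v - t_u over all cascades that infect u before v."""
    per_type = {}
    for u, v in edges:
        lags = [c[v] - c[u] for c in cascades
                if u in c and v in c and c[v] > c[u]]
        if lags:
            key = (site_type[u], site_type[v])
            per_type.setdefault(key, []).append(statistics.median(lags))
    return {k: statistics.median(v) for k, v in per_type.items()}
```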

Finally, we have observed that the insights into diffusion on the Web obtained using the inferred network are very similar to the insights obtained by simply taking the hyperlink network. However, our aim here is to show that (i) although the quantitative results are modest in terms of precision and recall, the qualitative insights make sense, and that (ii) it is surprising that, using only the timestamps of links, we are able to draw the same qualitative insights as when using the hyperlink network.

5 Further related work

There are several lines of work we build upon. Although information diffusion in online settings has received considerable attention [Gruhl et al. (2004), Kumar et al. (2004), Leskovec et al. (2006), Leskovec et al. (2007), Liben-Nowell and Kleinberg (2008)], only a few studies were able to study the actual shapes of cascades [Leskovec et al. (2007), Liben-Nowell and Kleinberg (2008), Ghosh and Lerman (2011), Romero et al. (2011), Ver Steeg et al. (2011)]. The problem of inferring links of diffusion was first studied by Adar and Adamic [Adar and Adamic (2005)], who formulated it as a supervised classification problem and used Support Vector Machines combined with rich textual features to predict the occurrence of individual links. Although rich textual features are used, links are predicted independently; their approach is thus similar to our baseline method in the sense that it picks a threshold (i.e., a hyperplane in the case of SVMs) and individually predicts the most probable links.

The work most closely related to our approach, CoNNIe [Myers and Leskovec (2010)] and NetRate [Gomez-Rodriguez et al. (2011)], also uses a generative probabilistic model for the problem of inferring a latent social network from diffusion (cascade) data. However, CoNNIe and NetRate use convex programming to solve the network inference problem. CoNNIe includes an $\ell_1$-like penalty term that controls sparsity, while NetRate provides a unique sparse solution by allowing different transmission rates across edges. For each edge