Estimating Diffusion Network Structures: Recovery Conditions,
Sample Complexity & Soft-thresholding Algorithm
Abstract
Information spreads across social and technological networks, but often the network structures are hidden from us and we only observe the traces left by the diffusion processes, called cascades. Can we recover the hidden network structures from these observed cascades? What kind of cascades and how many cascades do we need? Are there some network structures which are more difficult than others to recover? Can we design efficient inference algorithms with provable guarantees?
Despite the increasing availability of cascade data and methods for inferring networks from these data, a thorough theoretical understanding of the above questions remains largely unexplored in the literature. In this paper, we investigate the network structure inference problem for a general family of continuous-time diffusion models using an ℓ1-regularized likelihood maximization framework. We show that, as long as the cascade sampling process satisfies a natural incoherence condition, our framework can recover the correct network structure with high probability if we observe O(d³ log N) cascades, where d is the maximum number of parents of a node and N is the total number of nodes. Moreover, we develop a simple and efficient soft-thresholding inference algorithm, which we use to illustrate the consequences of our theoretical results, and show that our framework outperforms other alternatives in practice.
MPI for Intelligent Systems and Georgia Institute of Technology
1 Introduction
Diffusion of information, behaviors, diseases, or, more generally, contagions can be naturally modeled as a stochastic process that occurs over the edges of an underlying network (Rogers, 1995). In this scenario, we often observe the temporal traces that the diffusion generates, called cascades, but the edges of the network that gave rise to the diffusion remain unobservable (Adar & Adamic, 2005). For example, blogs or media sites often publish a new piece of information without explicitly citing their sources. Marketers may note when a social media user decides to adopt a new behavior but cannot tell which neighbor in the social network influenced them to do so. Epidemiologists observe when a person gets sick but usually cannot tell who infected them. In all these cases, given a set of cascades and a diffusion model, the network inference problem consists of inferring the edges (and model parameters) of the unobserved underlying network (Gomez-Rodriguez, 2013).
The network inference problem has attracted significant attention in recent years (Saito et al., 2009; Gomez-Rodriguez et al., 2010, 2011; Snowsill et al., 2011; Du et al., 2012a), since it is essential to reconstruct and predict the paths over which information can spread, and to maximize sales of a product or stop infections. Most previous work has focused on developing network inference algorithms and evaluating their performance experimentally on different synthetic and real networks; a rigorous theoretical analysis of the problem has been missing. However, such an analysis is of outstanding interest, since it would enable us to answer many fundamental open questions. For example, which conditions are sufficient to guarantee that we can recover a network given a large number of cascades? If these conditions are satisfied, how many cascades are sufficient to infer the network with high probability? Until recently, there has been a paucity of work in this direction (Netrapalli & Sanghavi, 2012; Abrahao et al., 2013), and the existing studies provide only partial views of the problem. None of them identifies the recovery condition relating the network structure to the cascade sampling process, which we make precise in our paper.
Overview of results. We consider the network inference problem under the continuous-time diffusion model recently introduced by Gomez-Rodriguez et al. (2011). We identify a natural incoherence condition for such a model which depends on the network structure, the diffusion parameters, and the sampling process of the cascades. This condition captures the intuition that we can recover the network structure if the co-occurrence of a node and its non-parent nodes is small in the cascades. Furthermore, we show that, if this condition holds for the population case, we can recover the network structure using an ℓ1-regularized maximum likelihood estimator and O(d³ log N) cascades, with the probability of success approaching 1 at a rate exponential in the number of cascades. Importantly, if this condition also holds for the finite sample case, then the guarantee can be improved to O(d² log N) cascades. Beyond theoretical results, we also propose a new, efficient and simple proximal gradient algorithm to solve the ℓ1-regularized maximum likelihood estimation. The algorithm is especially well-suited for our problem since it is highly scalable and naturally finds sparse estimators, as desired, by using soft-thresholding. Using this algorithm, we perform various experiments illustrating the consequences of our theoretical results and demonstrating that it typically outperforms other state-of-the-art algorithms.
Related work. Netrapalli & Sanghavi (2012) propose a maximum likelihood network inference method for a variation of the discrete-time independent cascade model (Kempe et al., 2003) and show that, for general networks satisfying a correlation decay condition, the estimator recovers the network structure given O(d² log N) cascades, with the probability of success approaching 1 at a rate exponential in the number of cascades. The rate they obtain is on a par with our results. However, their discrete-time diffusion model is less realistic in practice, and the correlation decay condition is rather restrictive: essentially, on average each node can only infect one single node per cascade. Instead, we use a general continuous-time diffusion model (Gomez-Rodriguez et al., 2011), which has been extensively validated on real diffusion data and extended in various ways by different authors (Wang et al., 2012; Du et al., 2012a, b).
Abrahao et al. (2013) propose a simple network inference method, FirstEdge, for a slightly different continuous-time independent cascade model (Gomez-Rodriguez et al., 2010), and show that, for general networks, if the cascade sources are chosen uniformly at random, the algorithm needs O(Nd log N) cascades to recover the network structure, and the probability of success approaches 1 only at a rate polynomial in the number of cascades. Additionally, they study trees and bounded-degree networks and show that, if the cascade sources are chosen uniformly at random, the error decreases polynomially as long as O(log N) and O(poly(d) log N) cascades are recorded, respectively. In our work, we show that, for general networks satisfying a natural incoherence condition, our method outperforms the FirstEdge algorithm and their bounded-degree algorithm in terms of rate and sample complexity.
Gripon & Rabbat (2013) propose a network inference method for unordered cascades, in which nodes that are infected together in the same cascade are connected by a path containing exactly the nodes in the trace, and give necessary and sufficient conditions for network inference. However, they consider a restrictive, unrealistic scenario in which all cascades are exactly three nodes long.
2 Continuous-Time Diffusion Model
In this section, we revisit the continuous-time generative model for cascade data introduced by Gomez-Rodriguez et al. (2011). The model associates each edge with a transmission function, a density over the transmission time parameterized by the edge's transmission rate. This is in contrast to previous discrete-time models, which associate each edge with a fixed infection probability (Kempe et al., 2003). Moreover, it also differs from discrete-time models in the sense that events in a cascade are not generated iteratively in rounds; instead, event timings are sampled directly from the transmission functions in the continuous-time model.
Table 1. Relevant functions of the model for an infected node and an uninfected node.
2.1 Cascade generative process
Given a directed contact network with N nodes, the process begins with an infected source node initially adopting a certain contagion at time zero; the source is drawn from a source distribution. The contagion is transmitted from the source along her outgoing edges to her direct neighbors. Each transmission through an edge entails a random transmission time drawn from an associated transmission function. We assume transmission times are independent, possibly distributed differently across edges, and, in some cases, they can be arbitrarily large. Then, the infected neighbors transmit the contagion to their respective neighbors, and the process continues. We assume that an infected node remains infected for the entire diffusion process. Thus, if a node is infected by multiple neighbors, only the neighbor that first infects it is its true parent. Figure 1 illustrates the process.
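To make the generative process concrete, here is a minimal simulation sketch. It assumes exponential transmission functions (an illustrative choice; the model allows other densities) and uses the fact that, with independent per-edge delays, a node's infection time equals the minimum over paths of accumulated delays, which implements the "first infecting neighbor is the true parent" rule. All names below are ours:

```python
import heapq
import random

def simulate_cascade(rates, source, T, seed=None):
    """Sample one cascade from the continuous-time independent cascade model.

    rates: dict {(j, i): alpha_ji} of directed edges with positive
    transmission rates; exponential transmission functions are assumed.
    Returns {node: infection time} for nodes infected within window T.
    """
    rng = random.Random(seed)
    # Pre-sample an independent transmission delay for every edge.
    delay = {e: rng.expovariate(a) for e, a in rates.items()}
    times = {source: 0.0}
    heap = [(0.0, source)]          # Dijkstra-style earliest-infection queue
    settled = set()
    while heap:
        t, j = heapq.heappop(heap)
        if j in settled:
            continue
        settled.add(j)
        for (jj, i), d in delay.items():
            if jj == j and t + d <= T and t + d < times.get(i, float("inf")):
                times[i] = t + d    # node i's earliest infection wins
                heapq.heappush(heap, (t + d, i))
    return times
```

On a chain 0 → 1 → 2, infection times are strictly increasing along the chain, and nodes whose tentative infection time exceeds T are simply absent from the returned dict, i.e., not infected within the observation window.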
Observations from the model are recorded as a set of cascades. Each cascade is an N-dimensional vector recording the times when nodes are infected within an observation window. The symbol ∞ labels nodes that are not infected during the observation window; it does not imply they are never infected. The 'clock' is reset to 0 at the start of each cascade. We assume all cascades share the same observation window length T; the results generalize trivially.
2.2 Likelihood of a cascade
Gomez-Rodriguez et al. (2011) showed that the likelihood of a cascade under the continuous-time independent cascade model is
f(\mathbf{t}^c; \mathbf{A}) = \prod_{i: t_i^c \leq T} \; \prod_{m: t_m^c > T} S(T \mid t_i^c; \alpha_{i,m}) \times \prod_{k: t_k^c < t_i^c} S(t_i^c \mid t_k^c; \alpha_{k,i}) \sum_{j: t_j^c < t_i^c} H(t_i^c \mid t_j^c; \alpha_{j,i})    (1)
where A denotes the collection of parameters, S(·) is the survival function and H(·) is the hazard function. The survival terms in the first line account for the probability that uninfected nodes survive all infected nodes in the cascade up to time T, and the survival and hazard terms in the second line account for the likelihood of the infected nodes. Then, assuming cascades are sampled independently, the likelihood of a set of cascades is the product of the likelihoods of the individual cascades given by Eq. 1. For notational simplicity, we define the shorthand functions summarized in Table 1.
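For concreteness, the terms of the log-likelihood involving a single node can be sketched as follows, specialized to exponential transmission functions, for which S(t|s; α) = exp(−α(t − s)) and H(t|s; α) = α. This specialization and all names are our illustrative assumptions, not the paper's implementation:

```python
import math

def neg_loglik_node(alphas, cascades, i, T):
    """Negative log-likelihood terms involving node i (one term per cascade).

    alphas: {j: a_ji} candidate incoming transmission rates for node i;
    cascades: list of {node: infection time} dicts; nodes absent from a
    dict were not infected within the window T. Assumes node i is never
    the cascade source (a source contributes no likelihood term).
    """
    inf = float("inf")
    nll = 0.0
    for c in cascades:
        ti = c.get(i, inf)
        if ti <= T:                        # i was infected at time ti
            total_hazard = 0.0
            for j, a in alphas.items():
                tj = c.get(j, inf)
                if tj < ti:
                    nll += a * (ti - tj)   # -log S(ti | tj; a)
                    total_hazard += a      # hazard of the jump at ti
            nll -= math.log(total_hazard)  # -log of the summed hazards
        else:                              # i survived all infected nodes up to T
            for j, a in alphas.items():
                tj = c.get(j, inf)
                if tj <= T:
                    nll += a * (T - tj)    # -log S(T | tj; a)
    return nll
```

For a single cascade {0: 0.0, 1: 1.0} and candidate rate 2 on the edge 0 → 1, the value is 2 − log 2, matching one survival term and one hazard term; this is the kind of per-node convex objective optimized later in the paper.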
3 Network Inference Problem
Consider an instance of the continuous-time diffusion model defined above with a contact network and associated parameters. We denote the set of parents of a node as the set of nodes with a positive transmission rate towards it, its cardinality (the number of parents) by d_i, and the minimum positive transmission rate as α_min. Let a set of n cascades be sampled from the model, where the source of each cascade is drawn from a source distribution. Then, the network inference problem consists of finding the directed edges and the associated parameters using only the temporal information from the set of cascades.
This problem has been cast as a maximum likelihood estimation problem (GomezRodriguez et al., 2011)
\min_{\mathbf{A}} \; -\sum_{c=1}^{n} \log f(\mathbf{t}^c; \mathbf{A}) \quad \text{subject to} \quad \alpha_{j,i} \geq 0, \; i, j = 1, \ldots, N, \; i \neq j    (2)
where the inferred edges in the network correspond to those pairs of nodes with non-zero parameters, i.e., pairs whose estimated transmission rate is strictly positive.
In fact, the problem in Eq. 2 decouples into a set of independent smaller subproblems, one per node, where we infer the parents of each node and the parameters associated with these incoming edges. Without loss of generality, for a particular node, we solve the problem
\min_{\boldsymbol{\alpha}_i} \; \frac{1}{n} \sum_{c=1}^{n} g(\mathbf{t}^c; \boldsymbol{\alpha}_i) \quad \text{subject to} \quad \alpha_{j,i} \geq 0, \; j = 1, \ldots, N, \; j \neq i    (3)
where the transmission rates of all potential incoming edges of the node are the relevant variables, and g(·) corresponds to the terms in Eq. 2 involving these variables (also see Table 1). In this subproblem, we only need to consider a super-neighborhood of the node: the union of the set of upstream nodes from which the node is reachable and the set of nodes which are reachable from at least one of those upstream nodes. Here, we consider a node v to be reachable from a node u if and only if there is a directed path from u to v. We can skip all remaining nodes from our analysis because they will never be infected in a cascade before the node under study, and thus the maximum likelihood estimates of the associated transmission rates will always be zero (and correct).
Below, we show that, as the number of cascades goes to infinity, the solution of the problem in Eq. 3 is a consistent estimator of the true parameters. However, it is not clear whether it is possible to recover the true network structure with this approach given a finite amount of cascades and, if so, how many cascades are needed. We will show that by adding an ℓ1-regularizer to the objective function and solving instead the following optimization problem
\min_{\boldsymbol{\alpha}_i} \; \frac{1}{n} \sum_{c=1}^{n} g(\mathbf{t}^c; \boldsymbol{\alpha}_i) + \lambda_n \|\boldsymbol{\alpha}_i\|_1 \quad \text{subject to} \quad \alpha_{j,i} \geq 0, \; j = 1, \ldots, N, \; j \neq i    (4)
we can provide finite sample guarantees for recovering the network structure (and parameters). Our analysis also shows that, by selecting an appropriate value for the regularization parameter λ_n, the solution of Eq. 4 successfully recovers the network structure with probability approaching 1 exponentially fast in the number of cascades n.
In the remainder of the paper, we will focus on estimating the parent nodes of a particular node. For notational simplicity, we will drop the node index from the parameters and related quantities.
4 Consistency
Can we recover the hidden network structures from the observed cascades? The answer is yes. We will show this by proving that the estimator provided by Eq. 3 is consistent, meaning that as the number of cascades goes to infinity, we can always recover the true network structure.
More specifically, Gomez-Rodriguez et al. (2011) showed that the network inference problem defined in Eq. 3 is convex in the transmission rates if the survival functions are log-concave and the hazard functions are concave in the transmission rates. Under these conditions, the Hessian matrix can be expressed as the sum of a nonnegative diagonal matrix and the outer product of a matrix with itself, i.e.,
\mathbf{Q} = \mathbf{D} + \mathbf{X} \mathbf{X}^\top    (5)
Here the diagonal matrix \mathbf{D} is a sum over a set of diagonal matrices, one for each cascade (see Table 1 for the definition of their entries); and \mathbf{X} is the Hazard matrix
\mathbf{X} = [\mathbf{X}^1 \; \mathbf{X}^2 \; \cdots \; \mathbf{X}^n]    (6)
with one column of the Hazard matrix per cascade. Intuitively, the Hessian matrix captures the co-occurrence information of nodes in cascades. Then, we can prove:

Theorem. If the source probability is strictly positive for all nodes, then the maximum likelihood estimator given by the solution of Eq. 3 is consistent.

Proof. We check the three criteria for consistency: continuity, compactness and identification of the objective function (Newey & McFadden, 1994). Continuity is obvious. For compactness, the negative log-likelihood diverges as any transmission rate tends to infinity, so we lose nothing by imposing an upper bound on the rates and restricting to a compact subset of the nonnegative orthant. For the identification condition, we use Lemmas A and B (refer to Appendices A and B), which establish that the Hazard matrix has full row rank as the number of cascades goes to infinity, and hence the Hessian is positive definite.
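The structure used in the proof, a Hessian equal to a nonnegative diagonal matrix plus an outer product, can be checked numerically on a toy instance: the sum is always positive semidefinite, and becomes positive definite once the matrix in the outer product has full row rank, which is the identification argument above. The shapes and values here are illustrative only, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
# D: nonnegative diagonal; X: stand-in for the Hazard matrix
# (one column per cascade), with more cascades than parameters.
D = np.diag(rng.uniform(0.0, 1.0, size=5))
X = rng.normal(size=(5, 8))
Q = D + X @ X.T                       # Hessian structure of Eq. 5
eigenvalues = np.linalg.eigvalsh(Q)
assert eigenvalues.min() >= -1e-10    # always positive semidefinite
# With full row rank, X X^T is positive definite, hence so is Q:
assert np.linalg.matrix_rank(X) == 5 and eigenvalues.min() > 0.0
```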
5 Recovery Conditions
In this section, we will find a set of sufficient conditions on the diffusion model and the cascade sampling process under which we can recover the network structure from finite samples. These results allow us to address two questions:

- Are there some network structures which are more difficult than others to recover?
- What kind of cascades are needed for network structure recovery?
The answers to these questions are intertwined. The difficulty of finite-sample recovery depends crucially on an incoherence condition which is a function of the network structure, the parameters of the diffusion model, and the cascade sampling process. Intuitively, the sources of the cascades in a diffusion network have to be chosen in such a way that nodes without a parent-child relation co-occur less often than nodes with such a relation. Many commonly used diffusion models and network structures can be naturally made to satisfy this condition.
More specifically, we first place two conditions on the Hessian of the population log-likelihood, where the expectation is taken over the distribution of the source nodes and the density of the cascade infection times given a source node; we denote this Hessian evaluated at the true model parameter as Q*. Then, we place two conditions on the Lipschitz continuity of the Hazard vector, and on the boundedness of the gradient of the log-likelihood and the Hazard vector at the true model parameter. For simplicity, we will denote the subset of indexes associated to the node's true parents as S, and its complement as S^c. Then, Q*_{SS} denotes the submatrix of Q* indexed by S, and α_S the set of parameters indexed by S.
Condition 1 (Dependency condition): There exist constants C_max and C_min > 0 such that the leading eigenvalue of Q* is at most C_max and the bottom eigenvalue of Q*_{SS} is at least C_min. This assumption ensures that two connected nodes co-occur reasonably frequently in the cascades but are not deterministically related.
Condition 2 (Incoherence condition): There exists ε ∈ (0, 1] such that ‖Q*_{S^c S}(Q*_{SS})^{-1}‖_∞ ≤ 1 − ε, where ‖·‖_∞ denotes the matrix infinity norm (maximum absolute row sum). This assumption captures the intuition that a node and any of its parents should get infected together in a cascade more often than the node and any of its non-parents.
Condition 3 (Lipschitz Continuity): For any feasible cascade, the Hazard vector is Lipschitz continuous on the domain of feasible transmission rates, with some positive Lipschitz constant. As a consequence, the spectral norm of the difference between Hessians evaluated at two feasible parameters is also bounded, up to a constant, by the distance between those parameters (refer to Appendix C), i.e.,
(7)
Furthermore, for any feasible cascade, each entry of the associated diagonal matrix is Lipschitz continuous for all feasible parameters, with some positive constant.
Condition 4 (Boundedness): For any feasible cascade, the absolute value of each entry in the gradient of its log-likelihood and in the Hazard vector, as evaluated at the true model parameter, is bounded by some positive constants. As a consequence, the absolute value of each entry in the Hessian matrix evaluated at the true model parameter is also bounded.
Remarks for condition 1. As stated in the consistency theorem of Section 4, as long as the source probability is strictly positive for all nodes, the maximum likelihood formulation is strictly convex in the limit, and thus there exists a constant C_min > 0 lower-bounding the bottom eigenvalue of the population Hessian submatrix indexed by the true parents. Moreover, condition 4 implies that there exists a constant C_max upper-bounding the leading eigenvalue of the population Hessian.
Remarks for condition 2 The incoherence condition depends, in a nontrivial way, on the network structure, diffusion parameters, observation window and source node distribution. Here, we give some intuition by studying three small canonical examples.
First, consider the chain graph in Fig. 2(a) and assume that we would like to find the incoming edges to the last node of the chain as the observation window grows. Then, it is easy to show that the incoherence condition reduces to a set of inequalities on the source probabilities of the nodes in the chain. Thus, for example, if the source of each cascade is chosen uniformly at random, the inequalities are satisfied. Here, the incoherence condition depends on the source node distribution.
Second, consider the directed tree in Fig. 2(b) and assume that we would like to find the incoming edges to a particular node as the observation window grows. Then, it can be shown that the incoherence condition is satisfied as long as a small set of inequalities relating the source probabilities of the remaining nodes holds. As in the chain, the condition depends on the source node distribution.
Finally, consider the star graph in Fig. 2(c), with exponential edge transmission functions, and assume that we would like to find the incoming edges to a leaf node for a finite observation window. Then, as long as the root node has a nonzero probability of being the source of a cascade, it can be shown that the incoherence condition reduces to inequalities relating the transmission rates of the root's outgoing edges, which always hold for some ε > 0. Here, the larger a certain ratio of transmission rates is, the smaller the maximum value of ε for which the incoherence condition holds. To summarize, as long as the root has nonzero source probability, there is always some ε for which the condition holds, and such a value depends on the time window and the transmission rates.
Remarks for conditions 3 and 4. Well-known pairwise transmission likelihoods such as exponential, Rayleigh or power-law, used in previous work (Gomez-Rodriguez et al., 2011), satisfy conditions 3 and 4.
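Given an estimate of the Hessian at the true parameter, the incoherence condition can be checked directly. Below is a minimal sketch, assuming the infinity-norm formulation ‖Q_{S^c S} Q_{SS}^{-1}‖_∞ ≤ 1 − ε with S the true parent index set; the function and variable names are ours:

```python
import numpy as np

def incoherence_slack(Q, S):
    """Return 1 - ||Q_{S^c S} Q_{SS}^{-1}||_inf for Hessian Q and parent
    index set S. A positive return value means the incoherence condition
    holds, with epsilon equal to the returned slack.
    """
    S = np.asarray(S)
    comp = np.setdiff1d(np.arange(Q.shape[0]), S)   # S^c, the non-parents
    M = Q[np.ix_(comp, S)] @ np.linalg.inv(Q[np.ix_(S, S)])
    # Matrix infinity norm: maximum absolute row sum.
    return 1.0 - np.abs(M).sum(axis=1).max()
```

For a diagonal Hessian the slack is 1 (parents and non-parents never co-occur), while strongly correlated parent/non-parent blocks drive the slack toward 0 and eventually violate the condition.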
6 Sample Complexity
How many cascades do we need to recover the network structure? We will answer this question by providing a sample complexity analysis of the optimization in Eq. 4. Given the conditions spelled out in Section 5, we can show that the number of cascades needs to grow only polynomially in the number of true parents of a node, and only logarithmically in the size of the network. This is a positive result, since the network size can be very large (millions or billions of nodes), but the number of parents of a node is usually small compared to the network size. More specifically, for each individual node, we have the following result:

Theorem. Consider an instance of the continuous-time diffusion model with parameters and associated edges such that the model satisfies conditions 1–4, and let a set of n cascades be drawn from the model. Suppose that the regularization parameter λ_n is selected to satisfy
(8) 
Then, there exists a positive constant L, independent of (n, p, d), such that if
n > L d³ log p    (9)
where d is the number of parents of the node and p is the cardinality of its super-neighborhood, then the following properties hold with probability approaching 1 exponentially fast in n:

(a) For each node, the ℓ1-regularized network inference problem defined in Eq. 4 has a unique solution, and so uniquely specifies a set of incoming edges of the node.
(b) For each node, the estimated set of incoming edges includes all true edges and does not include any false edges.
Furthermore, suppose that the finite sample Hessian matrix satisfies conditions 1 and 2. Then there exists a positive constant L', independent of (n, p, d), such that the sample complexity can be improved to n > L' d² log p, with the other statements remaining the same.
Remarks. The above sample complexity is proved for each node separately, for the recovery of its parents. Using a union bound, we can provide the sample complexity for recovering the entire network structure by joining these parent-child relations together. The resulting sample complexity and the choice of regularization parameter will remain largely the same, except that the dependency on the number of parents of a particular node changes to a dependency on d, the largest number of parents of any node, and the logarithmic dependency on the super-neighborhood size changes to a dependency on log N, where N is the number of nodes in the network.
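The union bound step in the remark can be written schematically as follows, where the exponential per-node failure probability is a placeholder form and the constant c is illustrative, not the paper's:

```latex
\Pr\left[\bigcup_{i=1}^{N} \{\text{node } i \text{ fails}\}\right]
\;\le\; \sum_{i=1}^{N} \Pr\left[\text{node } i \text{ fails}\right]
\;\le\; N e^{-c\,n} \;=\; e^{\log N - c\,n},
```

so the network-wide failure probability still vanishes exponentially provided n grows faster than (1/c) log N, which is why the per-node log p dependency becomes a log N dependency for the whole network.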
6.1 Outline of Analysis
The proof of Theorem 6 uses a technique called the primal-dual witness method, previously used in the proofs of sparsistency of the Lasso (Wainwright, 2009) and of high-dimensional Ising model selection (Ravikumar et al., 2010). To the best of our knowledge, the present work is the first to use this technique in the context of diffusion network inference. First, we show that the optimal solutions to Eq. 4 share the same sparsity pattern and, under a further condition, the solution is unique (proven in Appendix D):

Lemma. Suppose that there exists an optimal primal-dual solution to Eq. 4 with an associated subgradient vector whose entries on S^c are strictly smaller than 1 in absolute value (S denotes the true parent set and S^c its complement). Then, any optimal primal solution must have its entries indexed by S^c equal to zero. Moreover, if the Hessian submatrix indexed by S is strictly positive definite, then the solution is unique.
Next, we will construct a primal-dual vector along with an associated subgradient vector. Furthermore, we will show that, under the assumptions stated in Theorem 6, our constructed solution satisfies the KKT optimality conditions of Eq. 4, and the primal vector has the same sparsity pattern as the true parameter, i.e.,
(10)  
(11) 
Then, based on Lemma 6.1, we can deduce that the optimal solution to Eq. 4 correctly recovers the sparsity pattern of the true parameter, and thus the incoming edges of the node under study.
More specifically, we start by realizing that a primal-dual optimal solution to Eq. 4 must satisfy the generalized Karush-Kuhn-Tucker (KKT) conditions (Boyd & Vandenberghe, 2004):
(12)  
(13)  
(14)  
(15)  
(16) 
where the last condition involves the subgradient of the ℓ1-norm.
Suppose the true set of parents of the node is S. We construct the primal-dual vector and the associated subgradient vector in the following way:
(a) We set the primal entries indexed by S as the solution to the partial regularized maximum likelihood problem
(17)
Then, we set the dual solution associated to this primal solution.
(b) We set the primal entries indexed by S^c to zero, matching the sparsity pattern of the true parameter.
(c) We obtain the remaining subgradient entries from (12) by substituting in the constructed primal and dual variables.
Then, we only need to prove that, under the stated scalings of (n, p, d), with high probability, the remaining KKT conditions (10), (13), (15) and (16) hold.
For simplicity of exposition, we first assume that the dependency and incoherence conditions hold for the finite sample Hessian matrix. Later we will lift this restriction and place these conditions only on the population Hessian matrix. The following lemma shows that our constructed solution satisfies condition (10):
Lemma. Under condition 3, if the regularization parameter λ_n is selected to satisfy the lower bound of Eq. 8 and the number of cascades n is sufficiently large, then condition (10) holds with high probability.
Based on this lemma, we can then further show that the KKT conditions (13) and (15) also hold for the constructed solution. This can be trivially deduced from conditions (10) and (11) and our construction steps (a) and (b). Note that it also implies that the constructed primal entries indexed by the true parents remain strictly positive, so no true edge is missed.
Proving condition (16) is more challenging. We first provide more details on how to construct the subgradient entries mentioned in step (c). We start with a Taylor expansion of Eq. 12 around the true parameter,
(19)
where the remainder term has entries given by the mean value theorem, evaluated at intermediate points between the constructed and the true parameters. Rewriting Eq. 19 using block matrices indexed by S and S^c and, after some algebraic manipulation, we obtain an explicit expression for the subgradient entries on S^c.
Next, we upper bound the subgradient entries on S^c using the triangle inequality, and we want to prove that this upper bound is strictly smaller than 1. This can be done with the help of the following two lemmas (proven in Appendices F and G):

Lemma. Given ε from the incoherence condition, the first term of the upper bound converges to zero exponentially fast, as long as the number of cascades is sufficiently large.

Lemma. Given ε from the incoherence condition, if conditions 3 and 4 hold and the regularization parameter λ_n is selected to satisfy the bound of Eq. 8, then the second term of the upper bound is also controlled, as long as the number of cascades is sufficiently large.

Now, applying both lemmas and the incoherence condition on the finite sample Hessian matrix, we conclude that the upper bound is strictly smaller than 1, and thus condition (16) holds.
A possible choice of the regularization parameter λ_n and cascade set size n such that the conditions of the lemmas above are satisfied is λ_n ∝ √(log p / n) and n ∝ d³ log p, where d is the number of parents of the node under study and p is the size of its super-neighborhood.
Last, we lift the dependency and incoherence conditions imposed on the finite sample Hessian matrix. We show that if we only impose these conditions on the corresponding population matrix, then they also hold for the finite sample matrix with high probability (proven in Appendices H and I):

Lemma. If condition 1 holds for the population Hessian matrix, then the corresponding eigenvalue bounds hold for the finite sample Hessian matrix with probability converging to 1 exponentially fast in the number of cascades.

Lemma. If the population Hessian matrix satisfies the incoherence condition, then the finite sample Hessian matrix satisfies it with probability converging to 1 exponentially fast in the number of cascades.

Note that in this case the cascade set size needs to increase by a sufficiently large constant factor, independent of (n, p, d), for the error probabilities in these last two lemmas to converge to zero.
7 Efficient soft-thresholding algorithm
Can we design efficient algorithms to solve Eq. 4 for network recovery? Here, we design a proximal gradient algorithm, a class of methods well suited for solving nonsmooth, constrained, large-scale or high-dimensional convex optimization problems (Parikh & Boyd, 2013). Moreover, such algorithms are easy to understand, derive, and implement.
We first rewrite Eq. 4 as an unconstrained optimization problem by absorbing the nonnegativity constraint into the objective: the nonsmooth convex term equals the ℓ1 penalty if all transmission rates are nonnegative and +∞ otherwise. Here, the general recipe from Parikh & Boyd (2013) for designing proximal gradient algorithms can be applied directly.
Algorithm 1 summarizes the resulting algorithm. In each iteration, we need to compute the gradient of the smooth part (Table 1) and the proximal operator of the nonsmooth part, where the step size can be set to a constant value or found using a simple line search (Beck & Teboulle, 2009). Using Moreau's decomposition and the conjugate function of the penalty, it is easy to show that the proximal operator for our particular function is a soft-thresholding operator, which leads to a sparse optimal solution, as desired.
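A minimal sketch of such a proximal gradient iteration: treating the nonnegativity constraint together with the ℓ1 penalty, the proximal operator becomes one-sided soft-thresholding (shift down and clip at zero). The function names, constant step size, and the toy quadratic in the usage note are our illustrative choices, not the paper's implementation:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||a||_1 restricted to a >= 0:
    # since |a| = a on the nonnegative orthant, prox is max(v - t, 0).
    return np.maximum(v - t, 0.0)

def proximal_gradient(grad, alpha0, lam, step, iters=1000):
    """Proximal gradient iteration for an l1-regularized smooth loss.

    grad: gradient of the smooth negative log-likelihood part;
    lam: regularization parameter; step: constant step size
    (a line search could be used instead).
    """
    alpha = np.asarray(alpha0, dtype=float).copy()
    for _ in range(iters):
        alpha = soft_threshold(alpha - step * grad(alpha), step * lam)
    return alpha
```

On the separable quadratic 0.5‖α − b‖² with b = (3, 0.5, −2) and λ = 1, the iteration converges to (2, 0, 0): coordinates whose pull is weaker than the penalty are set exactly to zero, which is how the algorithm produces sparse estimates.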
8 Experiments
In this section, we first illustrate some consequences of Th. 6 by applying our algorithm to several types of networks, transmission parameters, and regularization parameter values. Then, we compare our algorithm to two different state-of-the-art algorithms: NetRate (Gomez-Rodriguez et al., 2011) and FirstEdge (Abrahao et al., 2013).
Experimental Setup. We focus on synthetic networks that mimic the structure of real-world diffusion networks, in particular, social networks. We consider two models of directed real-world social networks: the Forest Fire model (Barabási & Albert, 1999) and the Kronecker Graph model (Leskovec et al., 2010), and use simple pairwise transmission models such as exponential, power-law or Rayleigh. We use networks with a fixed number of nodes and, for each edge, draw its associated transmission rate from a uniform distribution. We proceed as follows: we generate a network and transmission rates, simulate a set of cascades and, for each cascade, record the node infection times. Then, given the infection times, we infer a network. Finally, when we illustrate the consequences of Th. 6, we evaluate the accuracy of the inferred neighborhood of a node using the probability of success, estimated by running our method on independent cascade sets. When we compare our algorithm to NetRate and FirstEdge, we use the F1 score, defined as 2PR/(P + R), where precision (P) is the fraction of edges in the inferred network present in the true network, and recall (R) is the fraction of edges of the true network present in the inferred network.
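For reference, the F1 computation used in the comparison can be sketched as follows, with edges represented as (source, target) pairs; the function name is ours:

```python
def f1_score(true_edges, inferred_edges):
    """F1 = 2PR / (P + R): precision P is the fraction of inferred edges
    that are in the true network; recall R is the fraction of true edges
    that were inferred."""
    true_edges, inferred_edges = set(true_edges), set(inferred_edges)
    if not true_edges or not inferred_edges:
        return 0.0
    tp = len(true_edges & inferred_edges)   # correctly inferred edges
    if tp == 0:
        return 0.0
    precision = tp / len(inferred_edges)
    recall = tp / len(true_edges)
    return 2 * precision * recall / (precision + recall)
```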
Parameters. According to Th. 6, the number of cascades necessary to successfully infer the incoming edges of a node increases polynomially with the node's in-degree and logarithmically with its super-neighborhood size. Here, we infer the incoming links of nodes of a hierarchical Kronecker network with the same in-degree but different super-neighborhood set sizes, under different scalings of the number of cascades, and choose the regularization parameter as a constant factor of the value suggested by Th. 6. We used an exponential transmission model. Fig. 3(a) summarizes the results, where, for each node, we used only cascades containing at least one node in the super-neighborhood of the node under study. As predicted by Th. 6, nodes with very different super-neighborhood sizes lead to curves that line up with each other quite well.
Regularization parameter. Our main result indicates that the regularization parameter should be chosen as a constant factor of the rate given in Eq. 8. Fig. 3(b) shows the success probability of our algorithm against different scalings of the regularization parameter for different types of networks, using a fixed number of cascades. We find that for sufficiently large regularization parameters the success probability flattens, as expected from Th. 6. It flattens at values smaller than one because we used a fixed number of cascades, which may not satisfy the conditions of Th. 6.
Comparison with NetRate and FirstEdge. Fig. 4 compares the accuracy of our algorithm, NetRate and FirstEdge against the number of cascades for a hierarchical Kronecker network with a power-law transmission model and a Forest Fire network with an exponential transmission model, with a finite observation window. Our method outperforms both competing methods, and its advantage with respect to FirstEdge is especially striking.
9 Conclusions
Our work contributes towards establishing a theoretical foundation for the network inference problem. Specifically, we proposed an ℓ1-regularized maximum likelihood inference method for a well-known continuous-time diffusion model and an efficient proximal gradient implementation, and then showed that, for general networks satisfying a natural incoherence condition, our method achieves an exponentially decreasing error with respect to the number of cascades as long as O(d³ log N) cascades are recorded.
Our work also opens many interesting avenues for future work. For example, given a fixed number of cascades, it would be useful to provide confidence intervals on the inferred edges. Further, given a network with arbitrary pairwise likelihoods, it is an open question whether there always exists at least one source distribution and time window value such that the incoherence condition is satisfied and, if so, whether there is an efficient way of finding this distribution. Finally, our work assumes all activations occur due to network diffusion and are recorded. It would be interesting to allow for missing observations, as well as activations due to exogenous factors.
Acknowledgement
This research was supported in part by NSF/NIH BIGDATA 1R01GM10834101, NSF IIS1116886, and a Raytheon faculty fellowship to L. Song.
References
 Abrahao et al. (2013) Abrahao, B., Chierichetti, F., Kleinberg, R., and Panconesi, A. Trace complexity of network inference. In KDD, 2013.
 Adar & Adamic (2005) Adar, E. and Adamic, L. A. Tracking Information Epidemics in Blogspace. In Web Intelligence, pp. 207–214, 2005.
 Barabási & Albert (1999) Barabási, A.L. and Albert, R. Emergence of Scaling in Random Networks. Science, 286:509–512, 1999.
 Beck & Teboulle (2009) Beck, A. and Teboulle, M. Gradient-based algorithms with applications to signal recovery. Convex Optimization in Signal Processing and Communications, 2009.
 Boyd & Vandenberghe (2004) Boyd, S. P. and Vandenberghe, L. Convex optimization. Cambridge University Press, 2004.
 Du et al. (2012a) Du, N., Song, L., Smola, A., and Yuan, M. Learning Networks of Heterogeneous Influence. In NIPS, 2012a.
 Du et al. (2012b) Du, N., Song, L., Woo, H., and Zha, H. Uncover TopicSensitive Information Diffusion Networks. In AISTATS, 2012b.
 GomezRodriguez et al. (2010) GomezRodriguez, M., Leskovec, J., and Krause, A. Inferring Networks of Diffusion and Influence. In KDD, 2010.
 GomezRodriguez et al. (2011) GomezRodriguez, M., Balduzzi, D., and Schölkopf, B. Uncovering the Temporal Dynamics of Diffusion Networks. In ICML, 2011.
 GomezRodriguez (2013) GomezRodriguez, Manuel. Ph.D. Thesis. Stanford University & MPI for Intelligent Systems, 2013.
 Gripon & Rabbat (2013) Gripon, V. and Rabbat, M. Reconstructing a graph from path traces. arXiv:1301.6916, 2013.
 Kempe et al. (2003) Kempe, D., Kleinberg, J. M., and Tardos, É. Maximizing the Spread of Influence Through a Social Network. In KDD, 2003.
 Leskovec et al. (2010) Leskovec, J., Chakrabarti, D., Kleinberg, J., Faloutsos, C., and Ghahramani, Z. Kronecker Graphs: An Approach to Modeling Networks. JMLR, 2010.
 Mangasarian (1988) Mangasarian, O. L. A simple characterization of solution sets of convex programs. Operations Research Letters, 7(1):21–26, 1988.
 Netrapalli & Sanghavi (2012) Netrapalli, P. and Sanghavi, S. Finding the Graph of Epidemic Cascades. In ACM SIGMETRICS, 2012.
 Newey & McFadden (1994) Newey, W. K. and McFadden, D. L. Large Sample Estimation and Hypothesis Testing. In Handbook of Econometrics, volume 4, pp. 2111–2245. 1994.
 Parikh & Boyd (2013) Parikh, Neal and Boyd, Stephen. Proximal algorithms. Foundations and Trends in Optimization, 2013.
 Ravikumar et al. (2010) Ravikumar, P., Wainwright, M. J., and Lafferty, J. D. Highdimensional ising model selection using l1regularized logistic regression. The Annals of Statistics, 38(3):1287–1319, 2010.
 Rogers (1995) Rogers, E. M. Diffusion of Innovations. Free Press, New York, fourth edition, 1995.
 Saito et al. (2009) Saito, K., Kimura, M., Ohara, K., and Motoda, H. Learning continuoustime information diffusion model for social behavioral data analysis. Advances in Machine Learning, pp. 322–337, 2009.
 Snowsill et al. (2011) Snowsill, T., Fyson, N., Bie, T. De, and Cristianini, N. Refining Causality: Who Copied From Whom? In KDD, 2011.
 Wainwright (2009) Wainwright, M. J. Sharp thresholds for highdimensional and noisy sparsity recovery using l1constrained quadratic programming (lasso). IEEE Transactions on Information Theory, 55(5):2183–2202, 2009.
 Wang et al. (2012) Wang, L., Ermon, S., and Hopcroft, J. Featureenhanced probabilistic models for diffusion network inference. In ECML PKDD, 2012.
Appendix A Proof of Lemma A
Lemma. Given log-concave survival functions and concave hazard functions in the parameter(s) of the pairwise transmission likelihoods, a sufficient condition for the Hessian matrix to be positive definite is that the hazard matrix is nonsingular.

Proof. Using Eq. 5, the Hessian matrix can be expressed as a sum of two matrices. The first matrix is positive semidefinite by log-concavity of the survival functions and concavity of the hazard functions. The second matrix is positive definite since the hazard matrix is full rank by assumption. Then, the Hessian matrix is positive definite since it is the sum of a positive semidefinite matrix and a positive definite matrix.
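The final step of this proof (a positive semidefinite matrix plus a positive definite one is positive definite) can be sanity-checked numerically. The matrices below are illustrative stand-ins, not the paper's actual Hessian terms:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 5

# Rank-deficient Gram matrix: positive semidefinite but not definite,
# playing the role of the log-concavity/concavity term.
B = rng.standard_normal((2, p))
psd = B.T @ B                      # rank <= 2

# A full-rank square matrix standing in for the hazard matrix; its Gram
# matrix X^T X is then positive definite.
X = rng.standard_normal((p, p))
pd = X.T @ X

eig_sum = np.linalg.eigvalsh(psd + pd)
print(eig_sum.min())               # strictly positive: the sum is PD
```

By Weyl's inequality, the smallest eigenvalue of the sum is at least the smallest eigenvalue of the positive definite term, which is exactly the argument the proof uses.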
Appendix B Proof of Lemma B
Lemma. If the source probability is strictly positive for all nodes, then, for an arbitrarily large number of cascades, there exists an ordering of the nodes and cascades within the cascade set such that the hazard matrix is nonsingular.

Proof. In this proof, we find a labeling of the nodes (row indices in the hazard matrix) and an ordering of the cascades (column indices) such that, for an arbitrarily large number of cascades, the matrix can be expressed in a block form whose left block is upper triangular with nonzero diagonal elements, and therefore the matrix has full rank. We proceed by sorting the nodes in two groups:

Nodes in the first group: For each node, consider the set of cascades in which it was a source and got infected. Then, rank each node according to the earliest position at which it got infected across all cascades in this set, in decreasing order, breaking ties at random. For example, if a node was, at least once, the source of a cascade in which a second node got infected just after the source, while a third node was never the source of a cascade in which that second node got infected second, then the first node will have a lower index than the third. Then, assign each node the row of the matrix given by its position in this ranking, and assign the first columns to the corresponding cascades in which each node got infected earliest. Under this ordering, the diagonal entries of the left block are nonzero and the entries below the diagonal are zero.

Nodes in the second group: Proceed as in the first step, assigning these nodes the remaining rows, and assign the corresponding columns to the cascades in which each node got infected earliest. Again, this ordering yields nonzero diagonal entries and zero entries below the diagonal. Finally, the remaining columns can be assigned to the remaining cascades at random.
This ordering leads to the desired upper triangular structure in the left block, and thus the hazard matrix is nonsingular.
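The rank argument above can be illustrated numerically: any matrix whose left block is upper triangular with a nonzero diagonal has full row rank, regardless of how the remaining columns are filled. The dimensions and random entries below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 6, 10          # n nodes (rows), m >= n cascades (columns)

# Left block: the n re-ordered cascades, upper triangular with a nonzero
# diagonal -- exactly the structure the ordering in the proof produces.
T = np.triu(rng.standard_normal((n, n)))
np.fill_diagonal(T, 1.0 + np.abs(np.diag(T)))   # force nonzero diagonal

# Right block: the remaining cascades, assigned in arbitrary order.
R = rng.standard_normal((n, m - n))

X = np.hstack([T, R])
print(np.linalg.matrix_rank(X))   # n: full row rank
```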
Appendix C Proof of Eq 7.
If the hazard vector is Lipschitz continuous in the domain, with some positive Lipschitz constant, then we can bound the spectral norm of the difference in that domain as follows:
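The type of bound used here can be illustrated generically: if every entry of a matrix-valued map is Lipschitz in the parameter, the spectral norm of a difference is controlled via the Frobenius norm. The map `H` below is an assumed toy example, not the paper's hazard vector:

```python
import numpy as np

rng = np.random.default_rng(3)
p, m = 4, 7

def H(x):
    """Illustrative matrix-valued map: entry (i, j) is sin(j * x_i) / j,
    which is 1-Lipschitz in x_i. (A toy stand-in for the hazard map.)"""
    j = np.arange(1, m + 1)
    return np.sin(np.outer(x, j)) / j

x = rng.standard_normal(p)
y = rng.standard_normal(p)

# Spectral norm <= Frobenius norm <= sqrt(m) * K * ||x - y|| with K = 1.
lhs = np.linalg.norm(H(x) - H(y), 2)
rhs = np.sqrt(m) * np.linalg.norm(x - y)
print(lhs <= rhs)
```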
Appendix D Proof of Lemma 6.1
By Lagrangian duality, the $\ell_1$-regularized network inference problem defined in Eq. 4 is equivalent to the following constrained optimization problem:
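This penalized/constrained equivalence can be made concrete in the simplest setting, with the illustrative objective f(x) = ½‖x − z‖² standing in for the likelihood of Eq. 4: the $\ell_1$-penalized minimizer is the soft-thresholding of z, and it coincides with the solution of the constrained problem whose $\ell_1$-ball radius matches the penalized solution's norm.

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding: minimizer of 0.5*||x - z||^2 + t*||x||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def project_l1(z, C, iters=100):
    """Euclidean projection of z onto {x : ||x||_1 <= C}, i.e. the solution
    of the constrained problem, via bisection on the threshold level."""
    if np.abs(z).sum() <= C:
        return z.copy()
    lo, hi = 0.0, np.abs(z).max()
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if np.abs(soft(z, mid)).sum() > C:
            lo = mid
        else:
            hi = mid
    return soft(z, 0.5 * (lo + hi))

z = np.array([3.0, -1.5, 0.2, 2.0])
lam = 1.0

x_pen = soft(z, lam)                         # penalized solution
x_con = project_l1(z, np.abs(x_pen).sum())   # constrained, matching radius
print(np.allclose(x_pen, x_con, atol=1e-6))  # same minimizer
```

The bisection variable plays the role of the Lagrange multiplier: at the optimum it equals the penalty weight, which is the content of the duality argument.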