Scalable Bicriteria Algorithms for the Threshold Activation Problem in Online Social Networks

©2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, or resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.



We consider the Threshold Activation Problem (TAP): given a social network and a positive threshold T, find a minimum-size seed set that can trigger an expected activation of at least T. We introduce the first scalable, parallelizable algorithm with a performance guarantee for TAP suitable for datasets with millions of nodes and edges; we exploit the bicriteria nature of solutions to TAP to allow the user to control the trade-off between running time and accuracy through a parameter α: given α, with high probability our algorithm returns a solution whose expected activation is close to T, and whose size is within a bounded factor of the optimal size. The running time of the algorithm is bounded in terms of α and of n and m, the numbers of nodes and edges in the network. The performance guarantee holds for the general triggering model of internal influence and also incorporates external influence, provided a certain condition on the cost-effectivity of seed selection is met.

1 Introduction

With the growth of online social networks, viral marketing, in which influence spreads through a social network, has become a central research topic. Users of a social network can activate their friends by influencing them to adopt certain behaviors or products. In this context, the influence maximization (IM) problem has been studied extensively [1, 2, 3, 4, 5, 6, 7, 8]: given a budget, the IM problem is to find a seed set, or set of initially activated users, within the budget that maximizes the expected influence. Much recent work [4, 9, 10, 3] has developed scalable algorithms for IM that are capable of running on social networks with millions of nodes while retaining provable guarantees on the quality of the solution; namely, that the algorithm for IM will produce a solution with expected influence within a factor of 1 − 1/e of the optimal activation.

However, a company with a specific target in mind may adopt a more flexible approach to its budget: instead of having a fixed budget for the seed set, it is natural to minimize the size of the seed set while activating a desired threshold of users within the network. For example, suppose a company desires a certain level of exposure on social media; such exposure could boost the sales of any of its products. Alternatively, suppose a profit goal for a product must be met with the least expense possible. Thus, we consider the following threshold activation problem (TAP): given a threshold T, minimize the size of the set of seed users required to activate at least T users of the network in expectation.

Goyal et al. [11] provided bicriteria performance guarantees for a greedy approach to TAP based upon Monte Carlo sampling at each iteration to select the best seed node, an algorithm reminiscent of the greedy algorithm for IM in Kempe et al. [1]; this approach is inefficient and impractical for large networks. Although TAP is related to IM, scalable solutions that already exist for IM are unsuitable for TAP: the TIM [10] and IMM [9] algorithms require knowledge of the size of the seed set ahead of time; the SKIM algorithm [3] for average reachability has been shown to be effective for IM in specialized settings, but it is unclear how to apply SKIM to more general situations or to TAP while retaining performance guarantees.

Moreover, empirical studies have shown that in the viral marketing context, it is insufficient to consider internal propagation alone; external activation, i.e. activations that cannot be explained by the network structure, plays a large role in influence propagation events [12, 13, 14, 15], yet recent works on scalable algorithms for IM [10, 9, 3] have neglected the consequences of external influence. For internal diffusion, two basic models have been widely adopted, the independent cascade and linear threshold models; Kempe et al. [1] showed these two models are special cases of the triggering model, a powerful, general model that has desirable properties in a viral marketing context.

Motivated by the above observations, the main contributions of this work are:

  • We establish a new connection between the triggering model and a concept of generalized reachability that allows a natural combination of external influence with the triggering model. We show that any instance of the triggering model combined with any model of external influence yields an expected-activation function that is monotone and submodular.

  • We show how to use the generalized reachability framework to efficiently estimate the expected influence of the triggering model combined with external influence, leveraging scalable estimators of average reachability by Cohen et al. [16, 3, 17]. This efficient estimation results in a parallelizable algorithm (STAB) for TAP with a performance guarantee reflecting a user-adjustable trade-off between efficiency and accuracy. The desired accuracy is input as a parameter α, which determines the running time in terms of the numbers of nodes and edges in the network and the size of the seed set returned by STAB. With high probability, the expected activation of the returned seed set is guaranteed to be close to the threshold T, and its size is guaranteed to be within a bounded factor of the optimal size. If the cost-effectivity of seed selection falls below 1, this performance bound may not hold; we provide a looser bound for this case.

  • Through a comprehensive set of experiments, we demonstrate that on large networks, STAB not only returns a better solution to TAP, but also runs faster than existing algorithms for TAP and algorithms for IM adapted to solve TAP, often by large factors, even for the state-of-the-art IMM algorithm [9]. In addition, we investigate the effect of varying levels of external influence on the solution of STAB.

The rest of this paper is organized as follows: in Section 2, we introduce models of influence, including the triggering model and our concept of generalized reachability, and we prove these two concepts are equivalent. In Section 3, we formally define TAP and first prove bicriteria guarantees in a general setting. Next, we employ the generalized reachability concept to show how the combination of the triggering model and external influence can be estimated efficiently, and we present and analyze STAB, our scalable bicriteria algorithm. In Section 4, we analyze STAB experimentally and compare with prior work. We discuss related work in Section 5.

2 Models of influence

A social network can be modeled as a directed graph G = (V, E), where V is the set of users and the directed edges in E denote social connections, such as friendships, between the users. In this work, we study the propagation of influence through a social network; for example, say a user on the Twitter network posts a message to her account; this message may be reposted by the friends of this user, and as such it propagates across the social network. In order to study such events from a theoretical standpoint, we require the concept of a model of influence propagation.

Intuitively, a model of influence propagation in a network specifies how nodes become activated given a set of seed nodes. In this work, we use I to denote a model of influence propagation. Such a model is usually probabilistic, and the notation I(S) will denote the expected number of activations under the model given seed set S. Kempe et al. [1] studied a variety of models in their seminal work on influence propagation on a graph, including the Independent Cascade (IC) and Linear Threshold (LT) models. For completeness, we briefly describe these two models. An instance of influence propagation on a graph follows the IC model if a weight can be assigned to each edge (u, v) such that the propagation probabilities can be computed as follows: once a node u first becomes active, it is given a single chance to activate each currently inactive neighbor v with probability proportional to the weight of the edge (u, v). In the LT model, each network user v has an associated threshold θ_v chosen uniformly from [0, 1], which determines how much influence (the sum of the weights of incoming edges) is required to activate v; v becomes active if the total influence from its activated neighbors exceeds the threshold θ_v.

These well-studied models are both examples of the Triggering Model, also introduced in [1]: each node v independently chooses a random "triggering set" T_v according to some probability distribution over subsets of its neighbors. A seed set S is activated at time t = 0; a node v becomes active at time t + 1 if any node in its triggering set T_v is active at time t.
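As an illustration, one draw of the random triggering sets under the IC special case of the triggering model can be sketched as follows; here each in-neighbor u of v joins v's triggering set independently with probability p[(u, v)], and the function and variable names are ours, used for illustration only.

```python
import random

def sample_triggering_graph(nodes, in_neighbors, p):
    """One draw of the triggering model, IC special case: each in-neighbor
    u of v enters v's triggering set independently with probability
    p[(u, v)].  Returns the resulting set of 'live' edges (u, v)."""
    live_edges = set()
    for v in nodes:
        for u in in_neighbors.get(v, []):
            if random.random() < p[(u, v)]:
                live_edges.add((u, v))  # u can trigger v in this draw
    return live_edges
```

Averaging the number of nodes reachable from a seed set over many such draws estimates the expected activation.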

Two important properties that it is desirable for a model of influence propagation to satisfy are, firstly, the submodularity property of the expected activation function: for any sets A ⊆ B ⊆ V and any v ∈ V, I(A ∪ {v}) − I(A) ≥ I(B ∪ {v}) − I(B); and secondly, the monotonicity property: if A ⊆ B, then I(A) ≤ I(B). These properties together allow a greedy approach to achieve a (1 − 1/e) performance ratio for the influence maximization problem (IM): given a budget k, find a seed set of size at most k such that the expected activation of the seed set is maximized. Kempe et al. [1] showed that the triggering model is both submodular and monotonic. Both properties are also important in proving performance guarantees for TAP, the problem studied in this work and defined in Section 3.

It is #P-hard to compute the exact influence of a single seed node under even a restricted version of IC in which each edge is assigned the same fixed probability [18]. Therefore, it is necessary to estimate the value of I(S) by sampling the probability distribution determined by I. Sampling efficiently such that the estimated value Î(S) satisfies |Î(S) − I(S)| ≤ ε for all seed sets S is a difficult problem; we discuss this problem when the model is a combination of the triggering model and external influence in Section 3.2.2. Because of the errors associated with estimating the value I(S), we introduce slightly generalized versions of the above two properties. First, let us define the marginal gain δ_v(S) = I(S ∪ {v}) − I(S), for any model I, node v, and subset S ⊆ V. The following property is equivalent to satisfying submodularity and monotonicity together.

Property 1 (Submodularity and monotonicity).

For all A ⊆ B ⊆ V and all v ∈ V, δ_v(A) ≥ δ_v(B) ≥ 0.

Let us define a model I to be ε-approximately monotonic and submodular if the following property is satisfied instead:

Property 2 (ε-submodularity and monotonicity).

Let ε > 0. For all A ⊆ B ⊆ V and all v ∈ V, δ_v(A) ≥ δ_v(B) − ε and δ_v(B) ≥ −ε.

2.1 Triggering model from the perspective of generalized reachability

Next we define a class of influence propagation models that naturally generalize the notion of reachability in a directed graph: that is, these models generalize the simple model that a node is activated by a seed set if it is reachable from the seed set by edges in the graph. Somewhat surprisingly, the triggering model is equivalent to this notion of generalized reachability.

Suppose instead of a single directed graph G, we have a set of directed graphs G_1, …, G_N on the same vertex set V, with associated probabilities p_1, …, p_N such that Σ_i p_i = 1. Then define an influence propagation model in the following way: when a seed set S is activated, graph G_i is chosen with probability p_i. Then, influence is allowed to propagate from seed set S in G_i according to the directed edges of G_i. Let R(S, G_i) be defined as the number of vertices reachable from S in G_i. Then the expected activation of a seed set S is given by I(S) = Σ_i p_i R(S, G_i). We will term a model of this form a model of generalized reachability, since it generalizes the notion of simple reachability on a directed graph. We have the following important proposition:
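A minimal sketch of this definition, assuming the sampled graphs are given explicitly as edge sets (names are illustrative): the expected activation is the probability-weighted average of the reachability counts.

```python
from collections import deque

def reach_size(seed, edges):
    """Number of nodes reachable from the seed set via directed edges
    (the seed nodes themselves count as reached)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    seen = set(seed)
    q = deque(seed)
    while q:  # standard BFS
        u = q.popleft()
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                q.append(v)
    return len(seen)

def expected_activation(seed, graphs, probs):
    """Generalized reachability: I(S) = sum_i p_i * R(S, G_i)."""
    return sum(p * reach_size(seed, g) for g, p in zip(graphs, probs))
```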

Proposition 1.

Generalized reachability is equivalent to the Triggering Model.


Suppose we have an instance of the triggering model. Each node v chooses a triggering set T_v independently according to its triggering distribution. Once all nodes have chosen triggering sets, define a graph G for this joint choice by adding the directed edges (u, v) for each u ∈ T_v. Assign graph G the probability of this joint choice, and we have an instance of generalized reachability with the same expected activation.

Conversely, suppose we have an instance of generalized reachability. For each node v, let A be any subset of the nodes excluding v itself. Assign to the choice T_v = A the probability equal to the sum of the probabilities of the graphs in which the in-neighbors of v are exactly the set A. Then we have an instance of the triggering model. ∎

2.2 External influence

In this section, we outline our model of external influence in a social network. We wish to capture the idea that users in the network may be activated by a source external to the network; that is, these activations do not occur through friendships or connections within the social network. The most general model of external influence is simply an arbitrary probability distribution on the set of subsets of nodes. That is, for any D ⊆ V, there is a probability that D is the set activated from an external source. In this work, we adopt this model and denote such a model of external influence by X. In order to consider both external and internal influence in our social networks, we next define the concept of combining models of influence together.

Definition 1 (Combination of influence models).

Let I and X be two models of influence propagation. We define the combination of these two models in the following way: at any timestep t, if the set A_t is activated, new nodes may be activated according to either I or X; that is, if in the next timestep I activates the set B and X activates the set C, then A_{t+1} = A_t ∪ B ∪ C. We denote the combination model as I + X and write (I + X)(S) for the expected number of activations resulting from seeding S under this model.

In this work, we are most interested in combining external activation with the triggering model. That is, if X is a model of external influence as defined above, and I is an instance of the triggering model, then we consider the combined model I + X.
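To make the combination concrete, here is a minimal Monte Carlo sketch, assuming the simple external model in which each node is externally activated independently with a fixed probability (the general model allows an arbitrary distribution over subsets). The sketch folds all external activations into the starting set, which yields the same final activated set under the triggering model; `propagate` stands in for one internal-diffusion sample, and all names are ours.

```python
import random

def sample_combined_activation(seed, nodes, ext_prob, propagate):
    """One Monte Carlo sample of the combined model I + X: draw the
    externally activated nodes, then let internal influence spread from
    the union of the seeds and the external activations."""
    external = {v for v in nodes if random.random() < ext_prob}
    return propagate(seed | external)
```

Averaging this quantity over many samples estimates (I + X)(S).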

Submodularity of the combined model

In this subsection, we establish the submodularity and monotonicity of the combined model I + X. First, we require the following proposition.

Proposition 2.

Let I be a submodular, monotone-increasing model of influence propagation. Define I_D(S) to be the expected influence of seeding the set S when the set D is already activated; that is, I_D(S) = I(S ∪ D). Then, for any D ⊆ V, I_D is submodular and monotone increasing.


Let A ⊆ B ⊆ V and v ∈ V. We prove submodularity only; monotonicity is similar. Since A ∪ D ⊆ B ∪ D, submodularity of I gives I(A ∪ D ∪ {v}) − I(A ∪ D) ≥ I(B ∪ D ∪ {v}) − I(B ∪ D), which is exactly the submodularity of I_D. ∎

Theorem 1.

Let I be any instance of the Triggering Model, and X be any instance of external influence. Then I + X is submodular and monotonic.


For any D ⊆ V, let p_D be the probability that D is the set activated via X, the external influence. Then (I + X)(S) = Σ_{D ⊆ V} p_D I_D(S). By Prop. 2, I_D is monotone and submodular for each fixed D. Since a non-negative combination of submodular and monotonic set functions is also submodular and monotonic, the result follows. ∎

3 Threshold Activation Problem

With this framework established, we can now define the problem considered in this work. We suppose a company wants to minimize the number of seed users while expecting a certain level of activation in the social network. Formally, we have

Problem 1 (Threshold activation problem (TAP)).

Let G = (V, E) be a social network. Given an influence propagation model I and a threshold T such that T ≤ I(V), find a subset S ⊆ V such that |S| is minimized subject to I(S) ≥ T.

First, we consider performance guarantees for a greedy approach to TAP under only the assumption that the (estimated) influence function is ε-submodular; we do not discuss how to sample the model for these results. Subsequently, we specialize to the case when the function is the estimated value of the combination of the triggering model and external influence, which is approximately submodular up to the error of estimation, and we show how to perform this estimation efficiently to a desired accuracy in Sections 3.2.2 and 3.2.3. We detail the algorithm STAB in Section 3.2.1, a scalable algorithm with performance guarantees for TAP utilizing this estimation and analysis.

3.1 Results when the influence function is ε-submodular

We analyze the greedy algorithm for TAP, which adds a node maximizing the marginal gain at each iteration to its chosen seed set until Î(S) ≥ T, at which point it terminates. One might imagine, in analogy to the set cover problem, that the greedy algorithm would enjoy a logarithmic approximation ratio for TAP; however, this result only holds for integral submodular functions [11]. We next give a bicriteria result for the greedy algorithm that incorporates the error inherent in ε-approximate submodularity into its bounds. In this context, a bicriteria guarantee means that the algorithm is allowed both to violate the constraints of the problem by a specified amount and to approximate the optimal solution. In the viral marketing context, this means that we may not activate the intended threshold of users, but we guarantee to activate a number close to T. Furthermore, we may not achieve a solution of minimum size, but there is a guarantee on how large the returned solution can be.
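The greedy procedure just described can be sketched as follows, with `f` standing in for the (estimated) expected-activation function and an optional cutoff on the marginal gain reflecting the termination rule analyzed below; the names are ours.

```python
def greedy_tap(nodes, f, threshold, min_gain=0.0):
    """Greedy for TAP: repeatedly add the node with the largest marginal
    gain until f(S) reaches the threshold, or the best available gain
    drops to min_gain or below."""
    S = set()
    while f(S) < threshold:
        base = f(S)
        best, best_gain = None, min_gain
        for v in nodes:
            if v in S:
                continue
            gain = f(S | {v}) - base
            if gain > best_gain:
                best, best_gain = v, gain
        if best is None:  # marginal gain too small: terminate early
            break
        S.add(best)
    return S
```

With a modular f (each node contributes a fixed amount), the procedure simply adds nodes until the threshold is met.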

Theorem 2.

Consider the TAP problem when the influence function is ε-approximately submodular. Then the greedy algorithm that terminates when the marginal gain drops below a specified cutoff returns a solution S of size within a logarithmic factor of the optimal solution size k*, and the expected activation of S falls short of the threshold T by an amount bounded in terms of ε, k*, and the cutoff.


Let S_i be the greedy solution after i iterations, and let S be the final solution returned by the greedy algorithm. Let k* be the size of an optimal solution S* satisfying I(S*) ≥ T. Then, by ε-approximate submodularity, the best marginal gain available at iteration i satisfies

δ_max(S_i) ≥ (T − I(S_i))/k* − ε.  (1)

Therefore,

T − I(S_{i+1}) ≤ (1 − 1/k*)(T − I(S_i)) + ε.  (2)

Then, unrolling this recurrence,

T − I(S_i) ≤ (1 − 1/k*)^i T + k* ε.  (3)

From here, there exists an i ≤ ⌈k* ln(T/(k* ε))⌉ such that the differences satisfy

(1 − 1/k*)^i T ≤ k* ε.  (4)

Thus, by inequalities (3) and (4), T − I(S_i) ≤ 2 k* ε. By this bound and the assumption on the termination of the algorithm, the greedy algorithm adds at most a bounded number of further elements before the marginal gain drops below the cutoff, so the size bound follows. Finally, if the algorithm terminates before iteration i, then the marginal gain was less than the cutoff; hence, by (1), the deficit T − I(S) is bounded in terms of ε, k*, and the cutoff, as claimed.
Notice that the above argument requires only that the objective is an ε-submodular set function; in particular, it did not use the fact that the objective represents expected influence propagation on a social network.

3.1.1 Approximation ratios

Next, we consider ways in which the bicriteria guarantees of Theorem 2 can be improved. In viral marketing, we may suppose a company seeks to choose a threshold T such that the marginal gain, until T is reached, is always at least 1; seeding a node with a marginal gain of less than 1 would be cost-ineffective. In other words, it would cost more to seed the node than the benefit obtained from seeding it. There is little point in activating more users if the marginal gain drops too low; intuitively, the company has already activated as many users as it cost-effectively can.

We term this assumption the cost-effectivity assumption (CEA): in an instance of TAP, if S ⊆ V satisfies I(S) < T, there always exists a node v such that δ_v(S) ≥ 1. Under CEA, the greedy algorithm in Theorem 2 becomes an approximation algorithm; that is, it ensures I(S) ≥ T, with the same bound on solution size as stated in the theorem. To see this, notice that once the greedy algorithm has reduced the deficit T − I(S) as in the proof of Theorem 2, it must add at most that many more elements before I(S) ≥ T, since under CEA each added node has marginal gain at least 1.

More generally, each node v has an associated reticence q_v; q_v is the probability that v will remain inactive even if all of the neighbors of v are activated. Then we have the following theorem, whose proof is analogous to that of Theorem 2.

Theorem 3.

Let the influence function be ε-approximately submodular, and suppose the threshold is chosen so that, given the node reticences, cost-effective seeding in the sense of CEA remains possible. Then the greedy algorithm for TAP is an approximation algorithm which returns a solution within a bounded factor of the optimal size.

3.2 Scalable bicriteria algorithm for the combined model

In this subsection, we detail the scalable bicriteria algorithm (Alg. 4) for TAP when the propagation is given by an instance of the triggering model in the presence of external influence. We describe our scalable algorithm STAB first and then discuss the necessary sampling and estimation techniques in the subsequent sections.

3.2.1 Description of algorithm

As input, the algorithm takes a graph G representing a social network, an external influence model, an internal influence model given as an instance of the triggering model, and the desired threshold T of activated users. In addition, the user specifies the fractional error α, on which the running time of the algorithm and the accuracy of the solution depend. Using α, in line 1 the algorithm first determines ℓ, the number of graph samples it requires, according to Section 3.2.2.

Next, in the for loop on line 2, the algorithm constructs a collection of oracles that will be used to estimate the average reachability in the sampled graphs, which in turn approximates the expected influence. Each sampled graph is needed only while updating the oracle collection in its own iteration; once this step is completed, the graph may be safely discarded. Since the samples are independent, this process is completely parallelizable.

Once the set of oracles has been constructed, a greedy algorithm is performed in an attempt to satisfy the threshold of expected activation with a minimum-sized seed set. The estimation in line 11 may be done in one of two ways, using estimator C1 or C2; both are described in detail in Section 3.2.3. The estimator chosen has a strong effect on both the running time and the performance of the algorithm: given the same oracles, C1 can be computed in time independent of the seed set size, and in practice it is much faster to compute than C2; however, the quality of C1 degrades with the size of the seed set. On the other hand, C2 takes time proportional to the size of the seed set for which the average reachability is estimated; our experimental results show that the quality of C2 is vastly superior to C1 for larger seed set sizes, at the cost of increased running time.

This algorithm achieves the following guarantees on performance:

Theorem 4.

Suppose we have an instance of TAP with model I + X and that T has been chosen such that CEA holds.

Then, by choosing the number of samples ℓ as on line 1, the solution S returned by Alg. 4 satisfies the following condition with high probability:

  1. If S* is an optimal solution satisfying I(S*) ≥ T, then I(S) ≥ T and the size of S is within the bicriteria factor of |S*| established in Theorem 2.

If Assumption CEA is violated, the algorithm can detect this violation by terminating when the marginal gain drops below 1. In this case, the activation guarantee in bound 1 above weakens to the looser bound of Theorem 2.

Using estimator C1 for average reachability in line 11 yields a running time near-linear in the number of samples and the size of the network. If estimator C2 is used, this bound is multiplied by a factor proportional to the size of the returned seed set.


Let S* be a seed set of minimum size satisfying I(S*) ≥ T. Then, as discussed in Section 3.2.2, with probability determined by the choice of ℓ in Alg. 4, the estimate Î(S*) is within the allowed error of I(S*), by the analysis in Section 3.2.2. Hence |Ŝ*| ≤ |S*|, where Ŝ* is a set of minimum size satisfying the corresponding estimated threshold.

Notice that Î is ε-approximately submodular on the sets considered by the greedy algorithm, with the same probability. By Section 3.1.1, in this case, the solution returned by Alg. 4 satisfies the stated size bound. Furthermore, the threshold is met if CEA holds; otherwise, the alternate bound follows from Theorem 2.

Next, we consider the running time of Alg. 4. Let m be the number of edges that have a nonzero probability to exist in one of the reachability instances (m is at most the number of edges in the input graph G). Lines 2 and 3 clearly take time linear in the sizes of the sampled graphs. By Cohen et al. [3], line 4 takes near-linear time in this size. The while loop on line 9 executes exactly |S| times, and the for loop on line 10 requires time depending on whether estimator C1 or C2 is used, by Section 3.2.3. By the choice of ℓ on line 1, we have the total running times bounded as stated. ∎


Algorithm 4: Scalable TAP Algorithm with Bicriteria guarantees (STAB)
Data: graph G, triggering model I, external influence model X, threshold T, accuracy parameter α. Result: seed set S.
 1: Choose ℓ, as discussed in Section 3.2.2
 2: for i = 1 to ℓ do
 3:     Sample graph G_i from I, and sample external seed set D_i for G_i
 4:     Construct G_i' and update the oracle collection for average reachability according to G_i', as in Section 3.2.3
 5:     Store the value of R(D_i, G_i), as described in Section 3.2.3
 6: end for
 7: S ← ∅
 8: Compute Î(S) by Lemma 2
 9: while Î(S) < T do
10:     for each u ∈ V \ S do
11:         Estimate the average reachability of S ∪ {u} using one of the estimators C1 or C2, as described in Section 3.2.3
12:     end for
13:     S ← S ∪ {u*}, where u* maximizes the estimated average reachability
14:     Compute Î(S) by Lemma 2
15: end while
16: return S

3.2.2 Estimation of the expected activation

Let I be a model of internal influence propagation, which in this section will be an instance of the triggering model. Let X be the model of external influence activation, and let I + X be the combination of the two as defined above. We use the following version of Hoeffding's inequality (T will be the threshold that is input to Alg. 4).

Theorem 5 (Hoeffding’s inequality).

Let X_1, …, X_ℓ be independent random variables in [0, 1]. Let X̄ = (1/ℓ) Σ_i X_i be the empirical mean of these variables. Then for any t > 0, P(|X̄ − E[X̄]| ≥ t) ≤ 2 exp(−2ℓt²).

Since I is an instance of the triggering model, by Proposition 1 there exists a distribution over graphs such that for any seed set S, I(S) = E[R(S, G)], where R(S, G) is simply the size of the set of nodes reachable from S in G. Thus, if S is fixed, then by taking independent samples of graphs from this probability distribution, we get independent samples of R(S, G).

In the general model of external influence presented above, every subset D ⊆ V has a probability p_D of being activated by the external influence. By sampling from this distribution on subsets of nodes, and independently sampling graphs as above, (I + X)(S) could in principle be computed exactly as a sum over all pairs of externally activated sets and graphs, weighted by their probabilities. In most cases, this sum cannot be computed in polynomial time, and it has exponentially many summands. Accordingly, we estimate its value by independently sampling externally activated sets D_1, …, D_ℓ from X; we also independently sample reachability graphs G_1, …, G_ℓ according to I, and estimate (I + X)(S) by averaging the size of the reachability from the seed set in this context: Î(S) = (1/ℓ) Σ_i R(S ∪ D_i, G_i). To estimate (I + X)(S) in this way within a given error with a given probability, Hoeffding's inequality determines the required number ℓ of samples.
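As a sketch of the sample-size calculation, for activation values normalized into [0, 1] (e.g., dividing the reachability by n), Hoeffding's inequality gives the following bound on the number of samples needed for a single fixed seed set; the function name is ours.

```python
import math

def hoeffding_samples(eps, delta):
    """Number of independent samples of a [0, 1]-valued variable needed so
    that the empirical mean is within eps of the true mean with
    probability at least 1 - delta, by Hoeffding's inequality:
    P(|mean - mu| >= eps) <= 2 exp(-2 * l * eps**2)."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))
```

For example, error 0.1 with failure probability 0.05 requires 185 samples under this bound.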

Now, in our analysis of the greedy algorithm in Theorem 2, only a polynomial number of sets were considered. All that is required for the analysis to be correct is that those sets were estimated within the error ε; if ℓ is chosen so that each individual estimate fails with probability inversely proportional to the number of sets considered, then by the union bound, with the desired probability, the analysis for Theorem 2 holds. In practice, we were able to get good results with much larger error tolerances; see Section 4.

3.2.3 Computing the estimator

In the previous section, we described an approach to estimate (I + X)(S), based upon independent samples of graphs G_1, …, G_ℓ from the triggering model and independent samples of externally activated sets D_1, …, D_ℓ. Next, we need to compute the value of the estimator Î(S). One method would be to compute it directly using breadth-first search from the seed sets in each graph G_i. This method would unfortunately multiply the running time of each evaluation by a factor on the order of the total size of the sampled graphs, resulting in a running time of Alg. 4 too large for our purposes. Thus, we would like to take advantage of estimators formulated by Cohen et al. [3] for the average reachability of a seed set over a set of graphs. However, because the external seed sets D_i vary from sample to sample, we must first convert the problem into an average reachability format.

Conversion to an average reachability problem. Suppose we have sampled as above ℓ pairs of sample graphs and external seeds, (G_1, D_1), …, (G_ℓ, D_ℓ). To compute Î(S) efficiently, we first convert this sum to a generalized reachability problem: we construct graph G_i' from G_i by removing all nodes (and incident edges) reachable from D_i. The average reachability of a set S in the graphs G_1', …, G_ℓ', which we term Î'(S), is

Î'(S) = (1/ℓ) Σ_{i=1}^{ℓ} R(S, G_i'),  (5)

where R(S, G_i') is the size of the set reachable from S in G_i'. The estimators formulated by Cohen et al. are suitable to estimate the value of Î'(S), and the following two lemmas show how we can compute Î(S) from Î'(S).

Lemma 1.

The size of the reachable set from S ∪ D_i in G_i can be computed from reachability in G_i' as follows: R(S ∪ D_i, G_i) = R(S, G_i') + R(D_i, G_i).

Suppose that, in G_i, a node is reachable from S ∪ D_i but not from D_i. This is true iff there exists a path to it from S in G_i avoiding every node reachable from D_i, which is equivalent to the path existing in G_i'. ∎

The next lemma shows explicitly how to recover Î(S) from Î'(S):

Lemma 2.

Î(S) = Î'(S) + (1/ℓ) Σ_{i=1}^{ℓ} R(D_i, G_i).

This statement follows directly from Lemma 1 and the definitions of Î, Î'. ∎

In Alg. 4, for each i, the set reachable from D_i is computed in the construction of G_i'; its size R(D_i, G_i) can be stored as instructed on line 5 and used in the computation of Î(S) on line 14, in the greedy algorithm's stopping criterion.
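The conversion can be sketched directly with adjacency dictionaries (names are ours): `residual_graph` removes every node reachable from the external seed, so that reachability in G_i' plus the stored value R(D_i, G_i) recovers R(S ∪ D_i, G_i) as in Lemma 1.

```python
from collections import deque

def reachable(seed, adj):
    """Set of nodes reachable from the seed set in a directed graph
    given as an adjacency dictionary."""
    seen = set(seed)
    q = deque(seed)
    while q:
        u = q.popleft()
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                q.append(v)
    return seen

def residual_graph(adj, ext_seed):
    """Build G': remove all nodes reachable from the external seed,
    together with their incident edges."""
    dead = reachable(ext_seed, adj)
    return {u: [v for v in vs if v not in dead]
            for u, vs in adj.items() if u not in dead}
```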

Estimation of the average reachability, method C1. In this subsection, we utilize methods developed by Cohen et al. [3] to efficiently estimate the average reachability defined in (5). For convenience, we refer to this method of estimation in the rest of the paper as method C1.

Each pair (v, i), consisting of a node v in graph G_i', is assigned an independent, random rank value uniformly chosen on the interval [0, 1]. The reachability sketch for a set S is defined as follows: let k be an integer, and consider the bottom-k set of { r(v, i) : v ∈ R(S, G_i'), 1 ≤ i ≤ ℓ }, where bottom-k means to take the k smallest values of the set, and R(S, G_i') here denotes the set of nodes reachable from S in graph G_i'. The threshold rank τ(S) of a set S is then defined to be the k-th smallest rank in the sketch if the sketch contains k values, and 1 otherwise. The estimator for the total reachability over the samples is then (k − 1)/τ(S) when the sketch is full; when it is not, the sketch contains the rank of every reachable pair, and the reachability can be read off exactly.

For suitably large k, the probability that this estimator has relative error greater than a given tolerance is small [3]. The bounds on k needed for the proof of Theorem 4 are determined accordingly.

Computation of the sketches. First, we compute the sketch of every singleton {v} using Algorithm 2 of [3]; the running time is bounded in terms of k, ℓ, and the maximum number of edges in any G_i'. This collection of sketches is referred to as the oracles.

Next, we discuss how to compute the sketch of S ∪ {u} for an arbitrary node u, given that the sketch of S has already been computed. This computation takes O(k) time and is necessary for the bicriteria algorithm: given that the sketches of S and {u} are both sorted, we compute the sketch of S ∪ {u} by merging these two sets together until the size of the new set reaches k values.
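A minimal sketch of this machinery, assuming the ranks are given as a dictionary (names are ours): `estimate_size` applies the (k − 1)/τ rule when the sketch is full, and `merge_sketches` performs the O(k) union used in the greedy step.

```python
def bottom_k_sketch(reach_set, ranks, k):
    """Bottom-k sketch: the k smallest rank values among the reachable
    elements, sorted ascending."""
    return sorted(ranks[x] for x in reach_set)[:k]

def estimate_size(sketch, k):
    """Cardinality estimate from a bottom-k sketch: (k - 1) / tau_k when
    the sketch is full (tau_k = k-th smallest rank); otherwise the sketch
    holds every reachable rank, so its length is the exact count."""
    if len(sketch) < k:
        return float(len(sketch))
    return (k - 1) / sketch[-1]

def merge_sketches(s1, s2, k):
    """Sketch of a union: merge two sorted sketches and keep the k
    smallest distinct ranks (ranks are distinct with probability 1)."""
    return sorted(set(s1) | set(s2))[:k]
```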

Estimation of the average reachability, method C2. Alternatively to estimator C1, we can estimate the average reachability from the oracles directly, using the threshold ranks defined above. For convenience, we refer to this estimator in the rest of the paper as estimator C2; it was originally introduced in [17].

4 Experimental evaluation

In this section, we demonstrate the scalability of STAB as compared with the current state-of-the-art IMM algorithm [9] and with the greedy algorithm for TAP in Goyal et al. [11]. The methodology is described in Section 4.1, the comparison to existing IM algorithms is in Section 4.2, and the investigation of the effect of external influence on the performance of STAB is in Section 4.3. All experiments were run on an Intel(R) Core(TM) i7-3770K CPU @ 4.0 GHz with 32 GB RAM.

4.1 Datasets and framework

We evaluated the following algorithms in our experiments.


Greedy with CELF: The greedy algorithm by naive sampling of Kempe et al. [1] can be modified to find a solution to TAP, as shown in Goyal et al. [11]. The modified algorithm performs Monte Carlo sampling at each step to select the node with the highest marginal gain into the seed set until the threshold is satisfied. The Cost-Effective Lazy Forward (CELF) approach by Leskovec et al. [19] improves the running time of this algorithm by reducing the number of evaluations required.


IMM: The IMM algorithm [9] is the current state-of-the-art algorithm to solve the IM problem, where the number of seeds k is input. Since TAP asks to minimize the number of seeds, this algorithm cannot be applied directly. For the purpose of comparison to our methods, we adapt the algorithm by performing a binary search on k in the interval [1, ⌈T⌉], where T is the threshold given in TAP. At each stage of the search, IMM is run with the value of k in question, until the minimum k whose seed set meets the threshold, as estimated by IMM, is found. Since binary search can identify the minimum in a logarithmic number of iterations, we chose this approach over starting at k = 1 and incrementing by one until the minimum is found, which in the worst case would require ⌈T⌉ iterations.
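The binary-search wrapper can be sketched as follows, where `achieves_threshold(k)` stands in for a run of IMM with budget k followed by a check of the estimated activation; as in the adaptation above, we assume the achievable activation is monotone in k. Names are ours.

```python
def min_seeds_binary_search(max_k, achieves_threshold):
    """Smallest k in [1, max_k] such that achieves_threshold(k) holds,
    assuming the predicate is monotone in k (False ... False True ... True)."""
    lo, hi = 1, max_k
    while lo < hi:
        mid = (lo + hi) // 2
        if achieves_threshold(mid):
            hi = mid  # mid suffices; the answer is at most mid
        else:
            lo = mid + 1  # mid fails; the answer is larger
    return lo
```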


STAB: The STAB algorithm (Alg. 4) using estimators C1 and C2, referred to as STAB-C1 and STAB-C2 respectively. Since these are greedy algorithms with an approximately submodular function, we also use the CELF approach to reduce the number of evaluations performed by STAB-C2; for STAB-C1, we found this optimization unnecessary.

Network topologies: We generated networks according to the Erdos-Renyi (ER) and Barabasi-Albert (BA) models. For ER random graphs, we used varying numbers of nodes n; each possible edge exists independently with a fixed probability. The BA model was used to generate scale-free synthesized graphs; the exponent of the power-law degree distribution was fixed at the same value for all BA graphs.

The following topologies of real networks collected by the Stanford Network Analysis Project [20] were utilized: 1) Facebook, a section of the Facebook network; 2) Nethept, a high energy physics collaboration network; 3) Slashdot, a social network; 4) Youtube, from the Youtube online social network; and 5) Wikitalk, the Wikipedia talk (communication) network.

Network     Oracle computation time (sec)     p
ER 1000     1.0     0.4     0.2               1
BA 15000    6.4     1.6     0.9               0.1
Nethept     954     230     102               1
Slashdot    25      6.4     3.3               0.01
Youtube     385     91      41                0.01
Wikitalk    1704    274     122               0.01
Table 1: Oracle computation time (sec); the final column lists the IC edge probability p used for each dataset.

Models of internal and external influence: In all experiments, we used the independent cascade model for internal influence propagation: each edge in the graph is assigned a uniform probability p. For synthesized networks, we usually set p as listed in Table 1; for real networks, we observed that with larger values of p, in most cases a single node can activate a large fraction of the network (often more than 33%). However, a large number of empirical studies have confirmed that most activation events occur within a few hops of a seed node [12, 13, 14]; these works indicate that such a large value is not a realistic parameter setting for the IC model. Therefore, we also ran experiments using lower values for p.

For external influence, we adopted in all experiments the model in which each node is activated externally, independently, with the same probability; the setting of this probability is discussed in the context of each experiment. Unless otherwise stated, we used a default setting that we found sufficient to return a solution within the guarantee provided in Theorem 4 in most cases with the first estimator, and in all cases with the second.
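Sampling the externally activated nodes under this model is a set of independent coin flips; the node count and probability below are illustrative. The sampled set can then be merged with the chosen seed set before simulating the internal cascade.

```python
import random

def external_seeds(n, ext_prob, rng):
    """Nodes activated by external influence: each of the n nodes is
    activated independently with the same probability ext_prob."""
    return {v for v in range(n) if rng.random() < ext_prob}

ext = external_seeds(1000, 0.01, random.Random(1))
print(len(ext))  # about ext_prob * n = 10 in expectation
```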

Figure 1: Results on ER (panels (a), (e), (i)), Nethept ((b), (f), (j)), Youtube ((c), (g), (k)), and Wikitalk ((d), (h)); panel (l) shows the legend. First row: expected activation normalized by the threshold versus ε. Second row: running time in minutes versus ε. Third row: size of the seed set versus ε.

4.2 Performance comparison and demonstration of scalability

In this section, we compare the performance of STAB-C1 and STAB-C2 to CELF and IMM as described above; we experimented on the datasets listed in Table 1, which also shows the total CPU time required to compute the oracles. The oracle computation is parallelizable, and in our experiments we used 7 threads. The oracles for each value of ε were computed once and stored; thereafter, when running STAB, the oracles were simply read from a file. The running times we report for the various versions of STAB do not include the oracle computation time unless otherwise specified. Table 1 also lists the values of p used for each dataset in this set of experiments. Since IMM and CELF do not consider external influence, the experiments in this section used no external influence; that is, the external activation probability was 0 for all datasets. To evaluate the seed set returned by each algorithm, we used the average activation over 10,000 independent Monte Carlo samples.
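The Monte Carlo evaluation step can be sketched as follows; `toy_cascade` is a hypothetical stand-in for one sample of the cascade model, used only to make the harness self-contained.

```python
import random
from statistics import mean

def mc_estimate(run_cascade, seeds, samples, rng):
    """Estimate the expected activation of a seed set by averaging the
    activation size over independent Monte Carlo cascade samples."""
    return mean(len(run_cascade(seeds, rng)) for _ in range(samples))

# Toy cascade: each seed activates itself and, with probability 1/2,
# one extra distinct dummy node, so the expectation is 1.5 per seed.
def toy_cascade(seeds, rng):
    extra = {("extra", s) for s in seeds if rng.random() < 0.5}
    return set(seeds) | extra

est = mc_estimate(toy_cascade, [1, 2, 3], 10000, random.Random(0))
print(round(est, 1))  # concentrates near 3 * 1.5 = 4.5
```

By standard concentration arguments, the error of this estimate shrinks as the inverse square root of the number of samples, which is why a large sample count such as 10,000 was used.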

We show typical results in Fig. 1 on the following four datasets: ER 1000, Nethept, Youtube, and Wikitalk. The first row of Fig. 1 shows the expected activation, normalized by the threshold value τ, of the seed set returned by each algorithm, plotted against ε. Thus, a value of 1 indicates the algorithm successfully achieved the threshold of activation. The second row of the figure shows the running time in minutes of each algorithm against ε, and the third row shows the size of the seed set returned by each algorithm.

Results for the ER 1000 network are shown in the first column of Fig. 1; this dataset was the only one on which CELF finished under the time limit of 60 minutes. We see that IMM consistently returns a larger seed set than STAB and CELF, overshooting the threshold value in activation by as much as a factor of 2.5. Thus, it performs poorly at minimizing the size of the seed set for TAP. This behavior appears to result from IMM internally underestimating the influence of its seed set. As expected, CELF performs very well in terms of the size of the seed set and meeting the threshold τ, but runs on a timescale larger than the other algorithms by a factor of at least 100. Notice that STAB-C2 with the smallest ε has virtually identical activation and seed-set size to CELF, while running much faster; as expected, the quality of the STAB solution deteriorates as ε is raised, but the running time decreases drastically. In addition, notice that no version of STAB seeds too many nodes and overshoots the threshold as IMM does; instead, STAB errs on the side of seeding too few nodes and only partially achieving the threshold of activation. Finally, it is evident that STAB-C1 runs faster than STAB-C2 and achieves a similar amount of activation when the required seed set is relatively small. However, the larger the required seed set, the farther STAB-C1 falls from achieving the threshold; STAB-C2 does not suffer from this drawback.

The results for Nethept are shown in the second column of Fig. 1. CELF was unable to finish within the time limit, and IMM was able to complete its binary search only for the smallest threshold values; for higher thresholds, IMM exceeded the 32 GB memory usage limit imposed in our setup. On this network, STAB-C2 again exhibits the best performance. Despite an initial decrease in activation relative to the threshold, shown in Fig. 1b, the trend reverses as the threshold and seed-set size grow, and the algorithm gets closer to achieving the threshold. This behavior is explained by the lower coefficient of variation (CV) of the second estimator, as analyzed in [17]; its CV can be lower than that of the first estimator by a factor that grows with the size of the seed set. In stark contrast, STAB-C1 was unable to proceed past a moderate threshold even for the smallest ε, because its estimator appears to lose accuracy as the number of seeds increases: STAB-C1 reached the maximum influence estimate it could assign to any set before that estimate reached the threshold τ, and thus could not make further progress.

Next, results for the Youtube network are shown in the third column of Fig. 1. As on the ER network, IMM underestimates the influence of its seed set and thereby picks too many seeds, overshooting the threshold, as shown in Fig. 1c; IMM picks nearly twice as many seed nodes as STAB-C1, as shown in Fig. 1k. In Fig. 1g, the scalability of STAB is demonstrated: even the most precise version, STAB-C2 with the smallest ε, runs faster than IMM by as much as a factor of 50. On our largest dataset, the Wikitalk network, IMM again exceeded 32 GB of memory beyond the smallest thresholds; notice that the running time of STAB is in all cases under 2 minutes, and STAB-C2 with the largest ε maintained activation near the threshold while running in less than 5 seconds, as shown in Fig. 1h. Including the parallelizable and reusable oracle computation time of 122 seconds from Table 1, the total time taken by this version of STAB-C2 is less than 3 minutes. The total running time of STAB-C2 including the oracles at the smallest ε is less than 30 minutes.

Choice of ε: The above discussion demonstrates that a small ε provides activation closest to the threshold while maintaining high scalability. If a faster running time is desired, ε may be increased, which results in the loss of accuracy shown clearly in Fig. 1d. On the other hand, if activation closer to τ is required, smaller values of ε may be used at higher computational cost.

4.3 External influence

In this section, we analyze the performance of STAB when external influence is present in the network; that is, when the external activation probability is positive. For this section, we considered a BA network with 100,000 nodes and the Facebook network, each with a fixed threshold. Results on other topologies were qualitatively similar. All other parameters were held fixed across the experiments in this section.

In Fig. 2a, we plot the expected activation (Act) of the seed set returned by STAB-C2, normalized by the threshold τ, as the external activation probability varies from 0 to 0.006. As in the previous section, the expected activation of the returned seed set is estimated by independent Monte Carlo sampling. We also plot the expected fraction of nodes activated by the external influence alone, along with the size of the seed set returned by STAB-C2, normalized by its maximum value. As the effect of external influence in the network increases, the algorithm requires fewer seed nodes to ensure the expected activation is within the specified error tolerance of the threshold τ.

In Fig. 2b, we show an analogous plot for the Facebook network; interestingly, the size of the seed set chosen by STAB-C2 nearly doubles as the external activation probability increases from 0 to 0.01, before beginning to decrease to 0. This increase differs from what we expected; it is counterintuitive that the algorithm would require more seeds to reach the threshold as external influence increases. One possible explanation is that the external influence both decreases the marginal gain of each candidate seed and distributes it more evenly among nodes, so that the greedy algorithm has more difficulty identifying the best seed nodes, especially in the presence of estimation error.

Figure 2: The effect of external influence on the activation and seed-set size of STAB-C2, for (a) BA 100,000 and (b) Facebook.

5 Related work

Kempe et al. introduced the triggering model in a seminal work on the IM problem [1], in which they exhibited a greedy Monte Carlo sampling algorithm that achieves a (1 - 1/e) performance ratio for IM; this algorithm, although it runs in polynomial time, is very inefficient and does not scale well. Since the maximum coverage problem prohibits a performance guarantee better than 1 - 1/e under standard assumptions, the ratio for IM likely cannot be improved, but much work has improved the scalability of the algorithm while retaining the guarantee. Leskovec et al. [19] introduced the CELF method to exploit submodularity and improve the running time. Reverse Influence Sampling (RIS) was introduced in [4] to further improve greedy performance; algorithms using RIS include [4, 10, 9, 21, 22]; the current state of the art comprises the SSA [21] and IMM [9] algorithms, to which we compare in this paper. Cohen et al. [3] introduced a methodology for a highly scalable IM algorithm, SKIM; in this work, we extend this methodology to solve the TAP problem with the triggering model and external influence.

As compared with IM, much less effort has been devoted to scalable solutions to TAP that maintain performance guarantees. Goyal et al. [11] studied the TAP problem with monotone and submodular models of influence propagation; their bicriteria guarantees differ from ours, and they provide no efficient sampling method, which is required for scalability. Chen et al. [23] considered external influence in a viral marketing context; however, their model of external influence is much less general than ours. Furthermore, they restrict external influence to pass only through seed consumers, and they do not discuss sampling, scalability, or the TAP problem. Nguyen et al. [24, 25] studied methods to restrain propagation in social networks.

6 Conclusion

We established an equivalence between the triggering model and generalized reachability, which allows external influence to be incorporated into our efficient sampling techniques. We gave a precise trade-off between accuracy and running time, with a bound on the number of samples required to solve TAP. Our algorithm is highly scalable and outperforms adaptations to TAP of the current state-of-the-art algorithms for the IM problem.


This work is supported in part by NSF grant #CCF-1422116.




  1. David Kempe, Jon Kleinberg, and Éva Tardos. Maximizing the spread of influence through a social network. In Proceedings of the 9th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 137–146, 2003.
  2. Thang N Dinh, Huiyuan Zhang, Dzung T Nguyen, and My T Thai. Cost-effective viral marketing for time-critical campaigns in large-scale social networks. IEEE/ACM Transactions on Networking (TON), 22(6):2001–2011, 2014.
  3. Edith Cohen, Daniel Delling, Thomas Pajor, and Renato F. Werneck. Sketch-based Influence Maximization and Computation: Scaling up with Guarantees. In Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management, pages 629–638, 2014.
  4. Christian Borgs, Michael Brautbar, Jennifer Chayes, and Brendan Lucier. Maximizing Social Influence in Nearly Optimal Time. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 946–957, 2014.
  5. Huiyuan Zhang, Thang N Dinh, and My T Thai. Maximizing the spread of positive influence in online social networks. In Distributed Computing Systems (ICDCS), 2013 IEEE 33rd International Conference on, pages 317–326. IEEE, 2013.
  6. Huiyuan Zhang, Dung T Nguyen, Huiling Zhang, and My T Thai. Least cost influence maximization across multiple social networks. IEEE/ACM Transactions on Networking, 24(2):929–939, 2016.
  7. Thang N Dinh, Dung T Nguyen, and My T Thai. Cheap, easy, and massively effective viral marketing in social networks: truth or fiction? In Proceedings of the 23rd ACM conference on Hypertext and social media, pages 165–174. ACM, 2012.
  8. Huiyuan Zhang, Huiling Zhang, Alan Kuhnle, and My T Thai. Profit Maximization for Multiple Products in Online Social Networks. In IEEE International Conference on Computer Communications, 2016.
  9. Youze Tang, Yanchen Shi, and Xiaokui Xiao. Influence Maximization in Near-Linear Time: A Martingale Approach. In Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data, pages 1539–1554, 2015.
  10. Youze Tang, Xiaokui Xiao, and Yanchen Shi. Influence maximization: Near-optimal time complexity meets practical efficiency. In Proceedings of the 2014 ACM SIGMOD International Conference on Management of Data, pages 75–86, 2014.
  11. Amit Goyal, Francesco Bonchi, Laks V S Lakshmanan, and Suresh Venkatasubramanian. On minimizing budget and time in influence propagation over social networks. Social Network Analysis and Mining, 3(2):179–192, 2013.
  12. Sharad Goel, Ashton Anderson, Jake Hofman, and Duncan J Watts. The Structural Virality of Online Diffusion. Management Science, 62(1):180–196, 2016.
  13. Sandra González-Bailón, Javier Borge-Holthoefer, and Yamir Moreno. Broadcasters and Hidden Influentials in Online Protest Diffusion. American Behavioral Scientist, 57:943–965, 2013.
  14. Seth A. Myers and Jure Leskovec. The bursty dynamics of the Twitter information network. In Proceedings of the 23rd International Conference on World Wide Web, pages 913–924, 2014.
  15. Seth A. Myers, Chenguang Zhu, and Jure Leskovec. Information diffusion and external influence in networks. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 33–41, 2012.
  16. Edith Cohen. Size-Estimation Framework with Applications to Transitive Closure and Reachability. Journal of Computer and System Sciences, 55(3):441–453, 1997.
  17. Edith Cohen and Haim Kaplan. Leveraging discarded samples for tighter estimation of multiple-set aggregates. ACM SIGMETRICS Performance Evaluation Review, 37(1):251–262, 2009.
  18. Wei Chen, Chi Wang, and Yajun Wang. Scalable influence maximization for prevalent viral marketing in large-scale social networks. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1029–1038, 2010.
  19. Jure Leskovec, Andreas Krause, Carlos Guestrin, Christos Faloutsos, Jeanne VanBriesen, and Natalie Glance. Cost-effective Outbreak Detection in Networks. In Proceedings of the 13th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 420–429, 2007.
  20. Jure Leskovec and Andrej Krevl. SNAP Datasets: Stanford Large Network Dataset Collection, June 2014.
  21. H. T. Nguyen, M. T. Thai, and T. N. Dinh. Stop-and-Stare: Optimal Sampling Algorithms for Viral Marketing in Billion-Scale Networks. In ACM SIGMOD/PODS Conference, 2016.
  22. H. T. Nguyen, T. N. Dinh, and M. T. Thai. Cost-aware Targeted Viral Marketing in Billion-scale Networks. In IEEE International Conference on Computer Communications, 2016.
  23. Wei Chen, Fu Li, Tian Lin, and Aviad Rubinstein. Combining Traditional Marketing and Viral Marketing with Amphibious Influence Maximization. In Proceedings of the 16th ACM Conference on Economics and Computation, pages 779–796, 2015.
  24. N. P. Nguyen, Y. Xuan, and M. T. Thai. A novel method for worm containment on dynamic social networks. In Military Communications Conference, pages 2180–2185, 2010.
  25. Nam P. Nguyen, T. N. Dinh, D. T. Nguyen, and M. T. Thai. Overlapping community structures and their detection on social networks. In Privacy, Security, Risk, and Trust (PASSAT) and 2011 Third International Conference on Social Computing, 2011.