Scalable Temporal Latent Space Inference for Link Prediction in Dynamic Social Networks
Abstract
We propose a temporal latent space model for link prediction in dynamic social networks, where the goal is to predict links over time based on a sequence of previous graph snapshots. The model assumes that each user lies in an unobserved latent space, and interactions are more likely to occur between similar users in the latent space representation. In addition, the model allows each user to gradually move its position in the latent space as the network structure evolves over time. We present a global optimization algorithm to effectively infer the temporal latent space. Two alternative optimization algorithms with local and incremental updates are also proposed, allowing the model to scale to larger networks without compromising prediction accuracy. Empirically, we demonstrate that our model, when evaluated on a number of real-world dynamic networks, significantly outperforms existing approaches for temporal link prediction in terms of both scalability and predictive power.
Latent Space Model, Link Prediction, Nonnegative Matrix Factorization, Social Network Analysis
1 Introduction
Understanding and characterizing the processes driving social interactions is one of the fundamental problems in social network research. A particular instance of this problem, known as link prediction, has recently attracted considerable attention in various research communities. Besides academic interest (see [15, 24] for a survey of different methods), link prediction has many important commercial applications, e.g., recommending friends in an online social network such as Facebook and suggesting potential hires in a professional network such as LinkedIn.
In this work we focus on the temporal link prediction problem: Given a sequence of graph snapshots G_1, …, G_t from time 1 to t, how do we predict links at the future time t+1? To perform link prediction in a network, one needs to construct models for link probabilities between pairs of nodes. Recent research interest in link prediction has focused on latent space modeling of networks. That is, given the observed interactions between nodes, the goal is to infer the position of each node in some latent space, so that the probability of a link between two nodes depends on their positions in that space. Latent space modeling allows us to naturally incorporate the well-known homophily effect [25] (birds of a feather flock together). Namely, each dimension of the latent space characterizes an unobservable homogeneous attribute, and shared attributes tend to create a link in a network. We illustrate this concept with an example shown in Figure 1.
Example 1
Assume that we have observed some interactions during a political campaign. An example of a latent space is shown in Figure 1, where most observed links were formed between users with similar political attitudes. Based on the learned latent space, we could then predict that Bob is more likely to interact with Alice than Kevin in a political campaign.
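To make the latent homophily intuition concrete, the following small sketch scores candidate links by the inner product of latent positions. The names and the 2-dimensional coordinates are purely illustrative (they are not from the paper's data); the only assumption carried over from the text is that similar latent positions imply a higher link probability.

```python
import numpy as np

# Hypothetical 2-D latent positions on the unit circle; the axes could be
# read as, e.g., "liberal" vs. "conservative" leanings. Names and numbers
# are illustrative only.
positions = {
    "Alice": np.array([0.95, 0.31]),
    "Bob":   np.array([0.89, 0.45]),
    "Kevin": np.array([0.20, 0.98]),
}

def link_score(u, v):
    """Inner-product similarity: a higher score means a link is more likely."""
    return float(positions[u] @ positions[v])

# Bob is closer to Alice than to Kevin in the latent space,
# so the model scores the Bob-Alice link higher.
assert link_score("Bob", "Alice") > link_score("Bob", "Kevin")
```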
Various approaches, including Bayesian inference [16, 45], multidimensional scaling (MDS) [32], matrix factorization [26, 29, 11, 44, 47, 9], and mixed-membership models [3], have been proposed to infer the static latent space representation of the observed network. Nevertheless, most of these studies focus on static graphs, where the latent positions of the nodes are fixed. In social networks, “There is nothing permanent except change”; the network itself is dynamic and changing with time [14, 36]. In addition, the latent position of each node might also be evolving over time [32, 34, 8, 48, 46]. A naive approach to extending static latent space modeling to dynamic networks is to give each node a single latent representation and update its position whenever the network evolves. Unfortunately, this approach tends to overfit on the current time step, leading to abrupt transitions and poor incorporation of historical information. Therefore, we are interested in inferring temporal latent positions for all the nodes in a dynamic network. The underlying problem can be stated as follows: Given a sequence of graph snapshots G_1, …, G_t, how can we infer the temporal latent space so that at each time point links are more likely to be formed between nodes that are closer in the temporal latent space? With the inferred temporal latent space, links can be accurately predicted at future times.
Unfortunately, we still have a limited understanding of how the latent space evolves in temporal social networks. In this work we propose a Temporal Latent Space Model for Dynamic Link Prediction. Our model is based on the intuition that nodes can move smoothly in the latent space over time, and that large moves are unlikely [32, 46]. As illustrated in Figure 2, at time t Bob is biased towards liberal; it is unlikely that Bob will suddenly move to very conservative at time t+1. In addition to temporal smoothness, our model imposes a constraint on the dimensionality of the latent space, assuming that the number of latent dimensions is much smaller than the number of nodes. With the dimensionality constraint, online link prediction using our model is efficient in both computational time and storage cost (see related work for more details). In addition, varying the dimensionality of the latent space offers an opportunity to fine-tune the compromise between computational cost and solution quality: a higher dimension leads to a more accurate latent space representation of each node, but also yields higher online computational cost.
One of the most widely used methods for inferring a low-rank latent space in networks is matrix factorization via the block-coordinate gradient descent (BCGD) algorithm [6, 38], which has been successfully applied in community detection for static networks [42]. However, the additional temporal smoothness term introduced in our objective function creates new challenges for the BCGD approach. Moreover, while there is a large body of work studying the convergence of BCGD, it was unknown whether the new objective function satisfies the required criteria. We prove that the global BCGD algorithm proposed in this work has a quadratic convergence rate.
The global BCGD algorithm is computationally expensive and not applicable to truly large-scale problems. Therefore, for the temporal link prediction problem with latent space models, a significant gap remains between theoretical capabilities and practical applications. Towards this end, in this work we further introduce two new variants of BCGD: a local BCGD algorithm and an incremental BCGD algorithm. The local BCGD algorithm, instead of using all historical graph snapshots to jointly infer the temporal latent space at all timestamps, sequentially infers the temporal latent space at each time position from a single graph snapshot and the previous temporal latent space, thus significantly reducing computational cost. In addition, as we stated earlier, the latent positions of nodes change smoothly over time, not suddenly. To make use of this temporal smoothness, we develop an incremental BCGD algorithm that adaptively infers changes in latent positions based only on the changes in interactions. As shown in Figure 2, when Bob receives new interactions, his latent position at time t+1 requires an update, which, however, can be performed efficiently by leveraging his previous latent position at time t.
In summary, the contributions of this work are:

We propose a temporal latent space model for dynamic networks to model the temporal link probability of node pairs, which can be further used to accurately recover or predict the formation of links.

We address algorithmic issues in learning the temporal latent space representation of nodes by developing the standard global BCGD algorithm. We also provide a set of theoretical results for the standard global BCGD algorithm.

We develop two fast BCGD algorithms: a local BCGD algorithm and an incremental BCGD algorithm. Furthermore, we illustrate that the proposed incremental BCGD algorithm only requires a conditional update on a small set of affected nodes, and is therefore significantly faster than all existing approaches.

We conduct experiments over large real-life graphs. The experimental results show that the proposed approaches (global BCGD, local BCGD, and incremental BCGD) achieve good predictive quality in both sparse and dense networks, with average AUC scores of 0.81, 0.79 and 0.78 respectively. In addition, it takes less than half an hour for the incremental BCGD algorithm to learn the temporal latent spaces of massive networks with millions of nodes and links.
The remainder of the paper is organized as follows: We first introduce our problem formulation in Section 2, and then present the proposed global BCGD algorithm and two fast BCGD algorithms in Section 3 and Section 4 respectively. In Section 5, we conduct an experimental study to evaluate the effectiveness and efficiency of our approach. Related work is discussed in Section 6. We conclude this work and present future work in Section 7.
2 Problem Formulation
A graph is denoted as G = (V, E), where V is the set of nodes and E is the set of (directed or undirected) interactions. In this work we first focus on undirected graphs, in which case the matrix representation of G is symmetric. We use u and v to denote individual nodes, and t and τ to denote timestamps. Let G_t = (V_t, E_t) be a time-dependent network snapshot recorded at time t, where V_t is the set of nodes and E_t is the set of interactions. In addition, denote by ΔV_t and ΔE_t the sets of vertices and interactions to be introduced (or removed) at time t, and let ΔG_t = (ΔV_t, ΔE_t) denote the change in the whole network.
Dynamic social network. A dynamic network is a sequence of network snapshots within a time interval, evolving over time: G = (G_1, …, G_t).
Temporal latent space and its model. Let Z_t be the low-rank k-dimensional temporal latent space representation for the node set at time t. For each individual u at time t, we use a row vector Z_t(u) to denote its temporal latent space representation, and a scalar Z_t(u, c) to denote its position in dimension c. Inspired by the dynamic community model [36, 37], we impose the following assumptions on the temporal latent space:

Temporal smoothness: Individuals change their latent positions gradually over time.

Network embedding: Two nodes that are close to each other in the interaction network, in terms of the network distance, are also close to each other in the temporal latent space in terms of latent space distance.

Latent Homophily: Two members who are close to each other in latent space interact with one another more frequently than two faraway members.
Within our model, we assume that the probability of a link between two nodes depends only on their latent positions. In principle, given a dynamic graph (G_1, …, G_t), we need to predict the future graph G_{t+1}. However, since Z_{t+1} is not available, in our model we assume it can be approximated by Z_t. Therefore, given a dynamic graph (G_1, …, G_t), we infer Z_1, …, Z_t based on G_1, …, G_t, use Z_t to approximate Z_{t+1}, and finally predict G_{t+1}.
With the above definitions, we focus on the following problem:
Problem 1
(Temporal Latent Space Inference.) Given a dynamic social network G = (G_1, G_2, …, G_t), we aim to find a k-dimensional latent space representation Z_τ at each timestamp τ that minimizes the quadratic loss with temporal regularization:

(1)  min_{Z_1, …, Z_t}  Σ_{τ=1}^{t} ||G_τ − Z_τ Z_τ^T||_F^2 + λ Σ_{τ=1}^{t} Σ_u (1 − Z_τ(u) Z_{τ−1}(u)^T),
     subject to Z_τ ≥ 0 and Z_τ(u) Z_τ(u)^T = 1 for each u and τ,

where λ is a regularization parameter, and the term (1 − Z_τ(u) Z_{τ−1}(u)^T) penalizes node u for suddenly changing its latent position. Note that when computing the quadratic loss ||G_τ − Z_τ Z_τ^T||_F^2, we ignore all of the diagonal entries.
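For concreteness, the following sketch evaluates one plausible reading of the objective in Eq. 1: a quadratic reconstruction loss with the diagonal ignored, plus a temporal penalty of the form 1 − Z_t(u) · Z_{t−1}(u) for unit-norm rows. The exact algebraic form of the penalty is our reconstruction, and the toy matrices are invented for illustration.

```python
import numpy as np

def objective(Gs, Zs, lam):
    """Quadratic loss plus temporal-smoothness penalty (a sketch of Eq. 1;
    the penalty 1 - Z_t(u) . Z_{t-1}(u) follows our reading of the model)."""
    loss = 0.0
    for t, (G, Z) in enumerate(zip(Gs, Zs)):
        R = G - Z @ Z.T
        np.fill_diagonal(R, 0.0)          # diagonal entries are ignored
        loss += np.sum(R ** 2)
        if t > 0:                          # penalize sudden moves over time
            loss += lam * np.sum(1.0 - np.sum(Zs[t] * Zs[t - 1], axis=1))
    return loss

# Toy 3-node, 2-snapshot example with a 2-dimensional latent space.
G1 = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
Zs = [np.eye(3, 2), np.eye(3, 2)]
val = objective([G1, G1], Zs, lam=0.01)
assert val >= 0.0
```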
In the above model, the latent representation of each node corresponds to a point on the surface of a unit hypersphere. Note that this is different from mixed membership stochastic block models [3], where nodes are mapped onto a simplex. In practice, we find that sphere modeling gives us a clearer boundary between linked pairs and non-linked pairs when we project all pairs of nodes into the latent space. In addition, we impose the nonnegativity constraints Z_τ ≥ 0, not only because nonnegativity establishes the duality between our modeling and nonnegative matrix factorization, but also because it gives the latent space an intuitive parts-based interpretation, as suggested by Lee and Seung [22]. In the facial image example [22], with nonnegative constraints each dimension of the latent space corresponds to a part of the face, such as the nose, an eye, or an ear; without nonnegative constraints, each dimension corresponds to a blurred representation of the entire face, which is very difficult to interpret. Similarly, in the social network domain, with the nonnegative constraints each dimension of the latent space corresponds to a part of a user’s attributes, such as current city, hometown, or personality. It is more intuitive to represent each user as an additive mixture of attributes (current city: A, hometown: B, personality: openness to experience) than as a combination of different representatives.
Link prediction. Given that we have inferred Z_t by optimizing Eq. 1, our goal is to predict the adjacency matrix G_{t+1} at the next timestamp t+1. The most natural estimator is the conditional expectation E[G_{t+1} | G_1, …, G_t]. By assuming that the temporal dynamics of the latent positions is Markovian, with conditional mean E[Z_{t+1} | Z_t] = Z_t and a diagonal conditional covariance matrix, as well as an unbiased estimate Z_t, we obtain an estimator that coincides with Z_t Z_t^T up to a diagonal correction term.
However, as we ignore the diagonal entries of the adjacency matrix (the graph does not contain self-loops) and because the conditional covariance matrix is assumed to be diagonal, we will use the off-diagonal entries of Z_t Z_t^T to predict G_{t+1}.
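The estimator above reduces to a simple computation: score every pair of nodes by the off-diagonal entries of Z_t Z_t^T. A minimal sketch, with an invented 3-node latent matrix:

```python
import numpy as np

def predict_scores(Z_t):
    """Score every node pair by the off-diagonal of Z_t Z_t^T.
    A higher score means a higher predicted link probability at t+1."""
    S = Z_t @ Z_t.T
    np.fill_diagonal(S, -np.inf)   # no self-loops: exclude the diagonal
    return S

Z = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.1, 0.9]])
S = predict_scores(Z)
# The two "similar" nodes 0 and 1 get the highest pairwise score.
assert S[0, 1] == S.max()
```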
Generalization. Although in this work we use Z_t Z_t^T to predict G_{t+1}, without loss of generality the predicted graph at time t+1 can be formulated as Ĝ_{t+1} = g(f(Z_1, …, Z_t)), where g is the link function and f is the temporal function. Learning or selecting the best link function and temporal function is beyond the scope of this work. For example, we could apply nonparametric approaches [31] to automatically learn the link function g. Additionally, though this work focuses on undirected graphs, our model can be generalized to directed and weighted graphs. Specifically, this can be done by introducing another matrix B, which represents the weighted mapping from one dimension to another in the latent space. With the matrix B, the term Z_τ Z_τ^T in Eq. 1 is generalized to Z_τ B Z_τ^T. If B is symmetric, the underlying graph is undirected; otherwise, it is directed.
Problem complexity. Vavasis recently proved that nonnegative matrix factorization (NMF) is NP-hard [39]. In addition, under the separability condition (that is, for each dimension c of the latent space there exists a node u such that Z(u, c) > 0 and Z(u, c′) = 0 for all c′ ≠ c), there exists a polynomial-time exact algorithm for the nonnegative matrix factorization problem [4]. Unfortunately, in our modeling there is no guarantee that the latent space satisfies the separability condition. In addition, even if there exists a latent space that satisfies the separability condition, Recht et al. [30] pointed out that an exact algorithm is still too computationally expensive for large-scale data. Therefore, in the following we focus on the block-coordinate gradient descent (BCGD) approach to obtain an approximate solution to Problem 1.
3 A Standard BCGD Algorithm
In this section we present the details of the standard BCGD algorithm, which provides a locally optimal solution to Problem 1.
With a partially observed graph structure, the objective function in Eq. 1 can be decomposed into a linked part and a non-linked part. That is,

(2)  min_{Z_1, …, Z_t}  Σ_{τ=1}^{t} [ Σ_{(u,v)∈E_τ} (G_τ(u,v) − Z_τ(u) Z_τ(v)^T)^2 + Σ_{(u,v)∉E_τ, u≠v} (Z_τ(u) Z_τ(v)^T)^2 ] + λ Σ_{τ=1}^{t} Σ_u (1 − Z_τ(u) Z_{τ−1}(u)^T).
Unfortunately, the decomposed objective function in Eq. 2 for Problem 1 is still a fourth-order polynomial and non-convex. Therefore, we adopt a block coordinate descent approach to solve Eq. 2: we update the latent position Z_τ(u) of each node u at time τ while fixing the latent positions of all other nodes at time τ as well as all the temporal latent positions at times other than τ. Proceeding in this way, each step of block coordinate descent solves a convex problem.
For each node u at time τ, we focus on optimizing the following problem:

(3)  min_{Z_τ(u) ≥ 0}  Σ_{v≠u} (G_τ(u,v) − Z_τ(u) Z_τ(v)^T)^2 + λ (1 − Z_τ(u) Z_{τ−1}(u)^T) + λ (1 − Z_{τ+1}(u) Z_τ(u)^T).
In the following, we use the projected gradient descent algorithm to find an approximate solution under the nonnegativity constraint. With the gradient descent optimization algorithm, for each node u at time point τ, we iteratively update Z_τ(u) in each iteration with the following rule:

Z_τ(u) ← max(0, Z_τ(u) − η ∇_{Z_τ(u)} J),

where η is the step size and J denotes the objective in Eq. 3.
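The projected update rule above can be sketched in a few lines. The loss used in the demonstration is a stand-in quadratic (not the paper's Eq. 3); the point is the shape of the step: move against the gradient, then clip to the nonnegative orthant.

```python
import numpy as np

def projected_step(z_u, grad, eta):
    """One projected gradient step for node u's latent row:
    move against the gradient, then project onto the nonnegative orthant."""
    return np.maximum(0.0, z_u - eta * grad)

# Toy check: a step along -grad reduces a simple quadratic loss
# f(z) = ||z - target||^2 while keeping z nonnegative.
target = np.array([0.6, 0.8])
z = np.array([1.0, 0.1])
grad = 2.0 * (z - target)
z_new = projected_step(z, grad, eta=0.1)
assert np.all(z_new >= 0.0)
assert np.sum((z_new - target) ** 2) < np.sum((z - target) ** 2)
```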
Step size. Nesterov’s gradient method [28, 13, 5] iteratively updates the step size using the Lipschitz constant. In the following lemma, we show that our step size can also be iteratively updated using the Lipschitz constant.
Lemma 1
The gradient of the objective function is Lipschitz continuous, and the Lipschitz constant depends on the number of nodes n in the graph and the number of dimensions k.
Proof: A detailed proof is in Supplement A.
With the Lipschitz constant and the gradient shown in Eq. 12 in Supplement A, the latent position of each node u at time point τ can be computed via the update rule stated in the following lemma.
Lemma 2
The latent position of each node u at time point τ can be iteratively computed with the following update rule:
(4)  
where N(u) denotes the set of u’s neighbors, L is the Lipschitz constant, d(u) is the degree of node u, and
(5) 
Proof: A detailed proof is in Supplement B.
Note that updating Z_τ(u) according to Lemma 2 requires the latent positions of all the neighbors of u. For nodes with extra-large degrees, this update becomes time-consuming. One interesting direction for the future is to apply stochastic gradient descent update rules to nodes with large degrees: instead of using the latent positions of all of u’s neighbors, each iteration randomly selects one of u’s neighbors v and leverages v’s latent position to update the latent position of u.
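The stochastic variant suggested above can be sketched as follows. The update rule here (a squared residual on one sampled edge, with a nonnegativity projection) is our own stand-in for the paper's rule; only the sample-one-neighbor idea comes from the text.

```python
import random
import numpy as np

def sgd_neighbor_update(Z, u, neighbors, eta=0.05, target=1.0):
    """Instead of summing over all of u's neighbors, sample one neighbor v
    per step and nudge Z[u] so that Z[u] . Z[v] moves toward the observed
    link value (1 for a present edge)."""
    v = random.choice(neighbors)
    err = target - float(Z[u] @ Z[v])      # residual on the sampled edge
    grad = -2.0 * err * Z[v]               # gradient of the squared residual
    Z[u] = np.maximum(0.0, Z[u] - eta * grad)
    return Z

random.seed(0)
Z = np.array([[0.1, 0.1], [0.9, 0.1], [0.1, 0.9]])
before = float(Z[0] @ Z[1] + Z[0] @ Z[2]) / 2
for _ in range(200):
    sgd_neighbor_update(Z, 0, neighbors=[1, 2])
after = float(Z[0] @ Z[1] + Z[0] @ Z[2]) / 2
assert after > before   # node 0 moved closer to its neighbors on average
```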
We summarize our global block coordinate descent approach to solve Eq. 2 in Algorithm 1. In the following, we provide additional theoretical properties of the proposed BCGD algorithm, including convergence analysis and complexity analysis.
Input: Graphs G_1, …, G_t and latent space 
dimension k 
Output: latent space Z_1, …, Z_t and prediction Ĝ_{t+1} 
1: Nonnegative initial guess for Z_1, …, Z_t 
2: repeat 
3: for each time τ from 1 to t 
4: for each u in graph G_τ 
5: update Z_τ(u) by Eq. 4 
6: normalize Z_τ(u) 
7: until Z_1, …, Z_t converge. 
8: return Ĝ_{t+1} and Z_1, …, Z_t 
3.1 Theoretical Analysis
Convergence rate. Since our algorithm uses Nesterov’s gradient method to determine the step size (see Lemma 2), we can conclude that our algorithm achieves the convergence rate stated in the following theorem.
Theorem 1
Algorithm 1 achieves a quadratic convergence rate; that is, the approximation error decreases as O(1/r^2) with the number of iterations r.
Proof: The proof is shown in Supplement C.
Local error bound. The solution returned by Algorithm 1 is a local optimum of the objective in Eq. 2. Unfortunately, it is not trivial to assess how this locally optimal solution compares to the global optimum of the objective in Eq. 2. This is because the objective function in Eq. 2 is non-convex, and thus arriving at the global optimum via local iterations is not guaranteed. Only if the input matrix is separable or near-separable [4, 30] are we able to achieve the global optimum or a global error bound.
Table I: Per-iteration cost of the initialize, update, and convergence-check operations for sparse and dense graphs. (The individual cost entries were not recoverable from the extraction.)
Computational complexity. Table I summarizes the cost of each operation in an iteration for both sparse and dense graphs. Fortunately, Gibson et al. [12] concluded that real-world networks are generally globally very sparse, with most nodes having a relatively small number of neighbors. Therefore, for such real-world sparse networks, the total cost of a single iteration of BCGD is linear in the number of edges and nodes. If the total number of iterations is r, the overall time complexity is bounded by r times this per-iteration cost. Since Algorithm 1 needs to store all the input matrices and the output temporal latent space matrices, the storage complexity is bounded by their combined size.
4 Fast BCGD Algorithms
Unfortunately, the standard BCGD is very expensive in both computation and storage, since it requires all historical graph snapshots G_1, …, G_t to jointly and cyclically update all temporal latent spaces Z_1, …, Z_t. In this section we aim to further reduce the time complexity of the BCGD algorithm so that it scales to big data. In contrast to the joint inference in BCGD, we propose a sequential inference algorithm and an incremental inference algorithm, both of which utilize locality and temporal information. The two proposed faster BCGD algorithms are as efficient as fast independent inference (i.e., inferring Z_t from G_t only) and as effective as the global joint inference (BCGD).
As illustrated in Figure 3, we first introduce the local BCGD algorithm, which sequentially infers each temporal latent space Z_t from a single graph snapshot G_t and the partial prior knowledge Z_{t−1}.
4.1 Local BCGD Algorithm
Specifically, at each timestamp t, we aim to optimize the following local objective function to compute Z_t:

(6)  min_{Z_t ≥ 0}  ||G_t − Z_t Z_t^T||_F^2 + λ Σ_u (1 − Z_t(u) Z_{t−1}(u)^T).
Using the same BCGD approach, we iteratively update the latent position of each node by fixing the latent positions of all the other nodes. This leads to the following update rule for Z_t(u) in each iteration:
(7)  
where L is the Lipschitz constant and the remaining notation is defined in Lemma 2.
Input: Graphs G_1, …, G_t and latent space 
dimension k 
Output: Ĝ_{t+1} and latent space Z_1, …, Z_t 
1: Nonnegative initial guess for Z_1 
2: for each time τ from 1 to t 
3: Initialize Z_τ based on Z_{τ−1} 
4: repeat 
5: for each u in graph G_τ 
6: update Z_τ(u) by Eq. 7 and normalize it 
7: until Z_τ converges. 
8: return Ĝ_{t+1} and Z_1, …, Z_t 
We summarize the proposed local BCGD algorithm in Algorithm 2. Note that in the global BCGD algorithm, in each iteration we jointly update Z_1 through Z_t and then iterate back to Z_1 in the next iteration. This cyclic update schema is very expensive. In the local BCGD algorithm, by contrast, as shown in Lines 4–7, we sequentially compute each Z_τ with a single inner loop: we iteratively update Z_τ until it converges and only then move on to the computation of the next temporal latent space. This local sequential update schema greatly reduces the computational cost in practice, as we analytically show in the following.
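The sequential schema just described can be sketched as follows. A plain projected gradient step on the single-snapshot loss stands in for the paper's Eq. 7 (the actual rule, step size, and convergence test differ); the warm start from the previous latent space is the part taken from the text.

```python
import numpy as np

def local_bcgd(Gs, k, lam=0.01, eta=0.05, inner_iters=50):
    """Sketch of the local (sequential) schema: infer Z_t from the single
    snapshot G_t plus the previous space Z_{t-1}, instead of cycling over
    all snapshots jointly."""
    n = Gs[0].shape[0]
    rng = np.random.default_rng(0)
    Z_prev = rng.random((n, k))
    Zs = []
    for G in Gs:
        Z = Z_prev.copy()                  # warm start from prior knowledge
        for _ in range(inner_iters):
            R = G - Z @ Z.T
            np.fill_diagonal(R, 0.0)
            grad = -4.0 * R @ Z - lam * Z_prev   # reconstruction + smoothness
            Z = np.maximum(0.0, Z - eta * grad)
            # keep rows on (or inside) the unit sphere, per the model
            norms = np.linalg.norm(Z, axis=1, keepdims=True)
            Z = Z / np.maximum(norms, 1e-12)
        Zs.append(Z)
        Z_prev = Z
    return Zs

G = np.array([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])
Zs = local_bcgd([G, G], k=2)
assert all(np.all(Z >= 0.0) for Z in Zs)
assert Zs[0].shape == (3, 2)
```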
Complexity analysis. The cost of each operation in Algorithm 2 remains the same as in Algorithm 1, as reported in Table I, so the total computational cost is of the same order. In practice, since we leverage the prior knowledge Z_{t−1} to locally update Z_t, it converges much faster than the global BCGD algorithm, and thus the local BCGD algorithm is more efficient, as also verified in our experiments. In addition, in the in-memory setting where we put all the data into memory, the global BCGD algorithm must store all the graph snapshots and latent space matrices, while the local BCGD only requires storage of a single graph snapshot and two latent space matrices. The memory cost of the local BCGD algorithm is therefore much smaller.
4.2 Incremental BCGD Algorithm
In this section we further study how to infer or maintain the temporal latent space incrementally as the graph changes (node and edge insertions and deletions). Instead of recomputing the temporal latent space Z_t from the entire graph snapshot G_t and the prior knowledge Z_{t−1}, our dynamic update strategy is to adjust Z_t incrementally from Z_{t−1} as the network structure changes. Specifically, we take advantage of what we already learned in previous snapshots, as well as of the temporal transition, for inferring the dynamics in the current timestamp.
Overview of incremental BCGD. The overview of this algorithm is as follows. First, we identify nodes whose local neighborhood has changed between two consecutive snapshots, including cases where an existing node adds or deletes links, or a node is added to or removed from the network. With the matrix representation, we model all four update types as row-wise updates on the adjacency matrix. For example, a newly added node with degree d is modeled as an update from a row with all-zero entries to a row with d nonzero entries. For nodes whose local neighborhood has not changed, we assume that their initial temporal latent positions do not change either. For nodes whose neighborhood has changed, we initialize their new temporal latent positions based on their new neighbors’ latent space positions. Next, we iteratively perform a conditional latent position update for each affected node (i.e., a candidate set of nodes whose latent position might change) and an update of the affected node set, until there are no more changes in the temporal latent position of any node. The entire process is outlined in Algorithm 3.
Input: Graphs G_1, …, G_t and latent space 
dimension k 
Output: Ĝ_{t+1} and latent space Z_1, …, Z_t 
01: Nonnegative initial guess for Z_1 
02: for each time stamp τ from 1 to t 
03: for each u in graph G_τ 
04: if u is not updated 
05: Z_τ(u) = Z_{τ−1}(u) 
06: else 
07: Initialize Z_τ(u) using Eq. 8 
08: affected node set S = set of updated nodes 
09: repeat 
10: for each u in S 
11: update Z_τ(u) by Eq. 7 and normalize it 
12: update affected node set S with Alg. 4 
13: until Z_τ converges or S is empty. 
14: return Ĝ_{t+1} and Z_1, …, Z_t 
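The first step of the incremental schema, detecting which nodes' neighborhoods changed between consecutive snapshots, amounts to a row-wise comparison of adjacency matrices. A minimal sketch with invented toy matrices:

```python
import numpy as np

def updated_nodes(G_prev, G_curr):
    """Find nodes whose neighborhood changed between consecutive snapshots
    by comparing rows of the adjacency matrices (all four update types --
    edge add/delete, node add/remove -- appear as row-wise changes)."""
    changed = np.any(G_prev != G_curr, axis=1)
    return set(np.flatnonzero(changed))

G1 = np.array([[0, 1, 0],
               [1, 0, 0],
               [0, 0, 0]])
G2 = np.array([[0, 1, 1],     # node 0 gains a link to node 2
               [1, 0, 0],
               [1, 0, 0]])
assert updated_nodes(G1, G2) == {0, 2}
```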
Initialize updated nodes. In our algorithm, the four update operations are handled simply by comparing whether the row G_t(u) is identical to G_{t−1}(u). For each updated node u, we initialize its latent position based on the probability of seeing its updated neighbors’ latent positions. Specifically, for each node u and each dimension c of the latent space at time t, the position of u in dimension c is computed using the following equation:
(8) 
The initialization of the latent position of an updated node follows the notion of “latent homophily” introduced earlier: the latent position of the node is as close as possible to those of its network neighbors.
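A sketch of such an initialization, in the spirit of Eq. 8 but not its exact form (the paper's weighting may differ): average the neighbors' latent rows and renormalize onto the unit sphere.

```python
import numpy as np

def init_updated_node(Z, neighbors):
    """Initialize an updated node's latent position from its current
    neighbors' positions (illustrative stand-in for Eq. 8): average the
    neighbors' rows, then project back onto the unit sphere."""
    z = Z[neighbors].mean(axis=0)
    norm = np.linalg.norm(z)
    return z / norm if norm > 0 else z

Z = np.array([[1.0, 0.0],
              [0.8, 0.6],
              [0.0, 1.0]])
z_new = init_updated_node(Z, neighbors=[0, 1])
assert np.isclose(np.linalg.norm(z_new), 1.0)
assert z_new[0] > z_new[1]   # pulled toward its neighbors' dominant dimension
```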
Identifying affected nodes. Our dynamic update strategy can be viewed as an extra conditional update in which only affected nodes receive a new latent position. Unfortunately, the set of affected nodes whose latent positions need to be updated is not limited to the set of recently changed nodes and/or their network neighbors. How, then, can we identify the set of affected nodes?
The overall idea of our affected node identification is as follows. Initially, the set of affected nodes is identical to the set of updated nodes (Line 8 in Algorithm 3). Next, after each iteration of the conditional update with Eq. 7 (Line 11 in Algorithm 3), some previously affected nodes no longer require updates, since their latent positions have converged; on the other hand, the effects of the update can propagate from previously affected nodes to their neighborhoods. The details of our affected node update procedure are presented in Algorithm 4.
Input: current latent space Z_t, previous latent positions, affected node set S 
Output: A set of affected nodes S′ 
1: S′ = ∅ 
2: for each u in S 
3: if Z_t(u) has not yet converged, 
4: S′ = S′ ∪ {u} 
5: for each v ∈ N(u) 
6: if v ∉ S′ 
7: S′ = S′ ∪ {v} 
8: return S′ 
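The affected-set maintenance described above can be sketched as follows: drop nodes whose latent position has stopped moving, and pull in the neighbors of nodes that are still moving. The convergence test (a norm threshold `tol`) is an assumption for illustration.

```python
import numpy as np

def update_affected_set(Z_new, Z_old, affected, adj, tol=1e-4):
    """Sketch of the affected-set update: keep nodes whose latent position
    is still moving and propagate the update to their neighbors.
    `adj` maps node -> list of neighbors."""
    next_affected = set()
    for u in affected:
        if np.linalg.norm(Z_new[u] - Z_old[u]) > tol:
            next_affected.add(u)                   # still moving
            next_affected.update(adj.get(u, ()))   # effects may propagate
    return next_affected

Z_old = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
Z_new = np.array([[0.9, 0.1], [0.0, 1.0], [0.5, 0.5]])
adj = {0: [2], 1: [2], 2: [0, 1]}
# Node 0 moved, node 1 converged: keep 0 and pull in its neighbor 2.
assert update_affected_set(Z_new, Z_old, {0, 1}, adj) == {0, 2}
```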
Complexity analysis. The total computational cost of Algorithm 3 is proportional to r(|S| + |N(S)|), where r denotes the number of iterations of the conditional update process outlined in Lines 9–13 of Algorithm 3, S denotes the set of affected nodes, and N(S) denotes the neighborhood of the affected nodes. When t = 1, |S| is equal to the number of nodes and |N(S)| is equal to the number of edges in the first graph snapshot G_1. In terms of in-memory storage, compared with Algorithm 2, Algorithm 3 requires additional storage for the affected node set; its in-memory storage cost is therefore of the same order as that of Algorithm 2.
5 Experiments
5.1 Dataset and Evaluation
We use five real temporal datasets in our experiments, obtained from the Koblenz Large Network Collection [21]. The statistics of each dataset are reported in Table II. Note that in our experimental setting, each graph snapshot consists of all the nodes and the interactions that appear within the corresponding time interval.
Data  # nodes  volume  # edges  # snapshots 

Infection [18]  410  17,298  2,765  8 
Facebook [40]  63,731  817,035  183,412  5 
HepPh [23]  28,093  4,596,803  3,148,447  9 
DBLP [1]  1,314,050  18,986,618  10,724,828  11 
YouTube [27]  3,223,589  9,375,374  9,375,374  7 
Evaluation metrics. We evaluate the performance of the proposed approaches from three aspects: the effect of parameters, the efficiency of both offline latent space inference and online link prediction, and the accuracy of online link prediction. We use total running time and memory consumption to evaluate the efficiency of both offline latent space inference and online link prediction. We use prediction error to evaluate the inference accuracy. Given the training graphs G_1, …, G_t, the prediction error is defined as the quadratic loss between the predicted and observed adjacency matrices; a smaller prediction error thus indicates better inference accuracy.
For link prediction accuracy, we use the Area Under the Curve for both the Receiver Operating Characteristic (ROC) and Precision-Recall (PR) curves, termed AUC_ROC and AUC_PR. The ROC curve is created by plotting the true positive rate vs. the false positive rate, while the PR curve is created by plotting precision vs. recall at various threshold settings. An AUC score thus evaluates the overall ranking yielded by an algorithm, with a larger AUC indicating better link prediction performance. In the following we only report the experimental results based on AUC_ROC; the results based on AUC_PR are reported in the Supplement.
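The ROC AUC used here has a convenient ranking interpretation: it equals the probability that a randomly chosen linked (positive) pair is scored above a randomly chosen non-linked (negative) pair. A small self-contained sketch:

```python
def auc_roc(scores, labels):
    """ROC AUC via the rank statistic: the fraction of positive/negative
    pairs where the positive is scored higher (ties count as 1/2)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A perfect ranking gives AUC 1.0; a fully reversed ranking gives 0.0.
assert auc_roc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]) == 1.0
assert auc_roc([0.1, 0.2, 0.8, 0.9], [1, 1, 0, 0]) == 0.0
```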
Test pair generation. Given the training graphs G_1, …, G_t, we perform link prediction for the test graph G_{t+1}. We provide two different setups to generate test pairs: (1) all links: In this setup, we focus on how well the methods predict links in the test graph, no matter whether those links are repeated links or new links. Toward this goal, we randomly generate 100,000 linked pairs and an equal number of non-linked pairs from G_{t+1} as test pairs. (2) new links: In this setup, we focus on how well the methods predict the emergence of new links and the deletion of existing links. Again, we randomly generate 10,000 linked pairs and an equal number of non-linked pairs from the changed part of G_{t+1}.
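A sketch of the "all links" sampling setup: draw an equal number of linked and non-linked pairs from a snapshot. Sizes here are toy (the paper uses 100,000 of each), and the rejection-sampling loop for non-links is our own choice of mechanism.

```python
import random

def sample_test_pairs(edges, n_nodes, n_pairs, seed=0):
    """Sample an equal number of linked and non-linked node pairs from an
    undirected snapshot given as an edge list."""
    rng = random.Random(seed)
    edge_set = {tuple(sorted(e)) for e in edges}
    linked = rng.sample(sorted(edge_set), min(n_pairs, len(edge_set)))
    nonlinked = []
    while len(nonlinked) < len(linked):     # rejection-sample non-edges
        u, v = rng.randrange(n_nodes), rng.randrange(n_nodes)
        if u != v and tuple(sorted((u, v))) not in edge_set:
            nonlinked.append((min(u, v), max(u, v)))
    return linked, nonlinked

edges = [(0, 1), (1, 2), (2, 3)]
pos, neg = sample_test_pairs(edges, n_nodes=5, n_pairs=2)
assert len(pos) == len(neg) == 2
assert all(p in {(0, 1), (1, 2), (2, 3)} for p in pos)
```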
Comparable approaches. We compare our approaches (standard BCGD, local BCGD, and incremental BCGD) with a set of state-of-the-art approaches, which are summarized in Table III and Section 6. For the offline latent space inference time comparison, we compare our approaches with the global optimization approach DMMSB [10], the local optimization approaches NMFR [43], Hott [30], BIGCLAM [42], and PTM [45], and the incremental optimization approach LabelRT [41]. We compare the online link prediction times of our approaches to those of the high-dimension latent space approaches BIGCLAM and LabelRT, and the popular graph heuristic AA [2]. For Hott, NMFR and DMMSB, the online prediction time is the same as that of our BCGD approaches: the online prediction cost is proportional to the number of dimensions, which is set to 20 for all the low-dimension latent space approaches, including our BCGD approaches, Hott, NMFR, PTM and DMMSB.
Table III: Qualitative comparison of approaches along four axes: latent space dimension (low / high / NA), input graph type (aggregated vs. time-series), scalability (inference, prediction), and accuracy (all links, new links), for PTM [45], NMFR [43], Hott [30], LabelRT [41], BIGCLAM [42], AA [2], DMMSB [10], and our BCGD variants. (The per-cell entries of this table were not recoverable from the extraction.)
Configurations. The results are reported as an average over 10 runs. If the maximum number of iterations is required as an input, we set it to 100. All experiments are conducted on a single machine with 8 GB memory and an Intel i7 2.7 GHz CPU.
5.2 Effect of Parameters
Prediction error by the global BCGD when varying λ (columns: λ = 0, λ = 10, and the selected intermediate λ; the remaining column headers were lost in extraction): 
Infection: 228 | 2251.26 | 288 
Facebook: 3436 | 331216 | 3410 
HepPh: 2043 | 16977.9 | 1769 
DBLP: 80566 | 65878429 | 65926 
YouTube: 195023 | 161239726 | 162282 

Prediction error by the local BCGD when varying λ (same columns): 
Infection: 250 | 2433.5 | 249 
Facebook: 3659 | 348536 | 3525 
HepPh: 2566 | 210651 | 2172 
DBLP: 81633 | 6595718 | 66471 
YouTube: 198646 | 162287357 | 163675 
This set of experiments aims to determine the impact of parameters on the performance of the proposed approaches and/or the comparable approaches.
The effect of temporal smoothness. While fixing all the other parameters (e.g., k = 20), we first study the effect of the smoothing parameter λ. The effect of λ might depend on the inference algorithm, so we evaluate the performance of both global BCGD and local BCGD when varying λ. We do not plot the running time, since we notice that the running time is insensitive to λ. We vary λ from 0.0001 to 10 on a logarithmic scale and report the prediction errors in Table IV; we also report the prediction errors for λ = 0. Note that when λ is 0, the update rules stated in Eq. 4 and Eq. 7 become identical; the global and local algorithms still differ, since they proceed differently in initialization and convergence (see Lines 2–7 in Algorithms 1 and 2). Clearly, either the absence of temporal smoothness (λ = 0) or strong temporal smoothness (λ = 10) leads to higher prediction errors, while as λ varies from 0.0001 to 1 there are no significant differences in prediction errors. Therefore, in the following experiments we simply fix λ to 0.0001 for the global BCGD and to 0.01 for the local BCGD.
The effect of dimensionality. Next, we fix all the other parameters and analyze the effect of the number of dimensions k. Since the effect of k is only weakly correlated with the inference algorithms, here we choose the representative local BCGD algorithm. We vary k from 5 to 30, and report both the running time of temporal latent space inference and the prediction error in Figure 5. The overall trends indicate that the running time increases with k, while the prediction error decreases with k. However, increasing the number of dimensions does not always reduce the prediction error. For example, on the Facebook dataset, when k is increased from 25 to 30, the prediction error increases. To balance efficiency and effectiveness, we opt to fix k=20 for the proposed approaches in all of the following experiments. A nonparametric approach that automatically selects the best dimension will be an interesting future direction.
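The "balance efficiency and effectiveness" rule used above can be made mechanical with a simple elbow criterion; the error dictionary and `rel_tol` below are our own illustrative constructs, not the paper's selection procedure.

```python
def pick_dimension(errors_by_k, rel_tol=0.02):
    """Pick the smallest k after which the relative error improvement
    drops below `rel_tol` (a simple elbow rule over curves like Figure 5).

    `errors_by_k` maps candidate dimensions to measured prediction errors.
    """
    ks = sorted(errors_by_k)
    best = ks[-1]  # default: largest k if the error keeps improving fast
    for prev, cur in zip(ks, ks[1:]):
        improvement = (errors_by_k[prev] - errors_by_k[cur]) / errors_by_k[prev]
        if improvement < rel_tol:
            best = prev  # the extra dimensions no longer pay off
            break
    return best
```

On an error curve that flattens around k=20, this rule returns 20, matching the choice made in the text.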
To perform a fair comparison, we also examine the effect of k on the other comparable approaches, including DMMSB, PTM, NMFR, and Hott. LabelRT and BIGCLAM are not included because their best dimension is computed automatically. Furthermore, since several baselines (DMMSB, PTM, NMFR) are not scalable and cannot run on the two large datasets DBLP and YouTube, we choose the largest dataset on which all of the approaches can be tested, Facebook. The comparison of running time and prediction accuracy on new links is reported in Figure 5(a) and Figure 5(b), respectively. Clearly, the overall trends of the other approaches are very similar to those of the local BCGD: the inference time grows linearly or even quadratically with k, while the prediction accuracy also improves with k. Therefore, we use the same value k=20 for the other comparable approaches as well.
The effect of time intervals. To understand the proposed model more deeply, we also study how the length of the time interval impacts prediction performance. Specifically, we choose the Facebook dataset, vary the length of the time interval, and create five different sequences of graph snapshots. As reported in Figure 6(a) and Figure 6(b), both the inference time and the prediction accuracy increase with the number of snapshots. The inference time increases because a larger number of snapshots means more frequent model updates over shorter time intervals. Meanwhile, a shorter time interval leads to smaller changes between consecutive graphs, which makes it more reliable to forecast links at time t+1 based on the current observation at time t. Therefore, the prediction accuracy also improves with the number of snapshots. Again, the overall trends hold for both the proposed and the other approaches. Therefore, we simply fix the number of snapshots as reported in Table II.
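Creating the different snapshot sequences described above reduces to bucketing timestamped edges by interval. A minimal sketch, assuming undirected edges given as `(u, v, t)` triples with numeric timestamps:

```python
from collections import defaultdict

def build_snapshots(edges, interval):
    """Bucket timestamped edges (u, v, t) into consecutive snapshots of
    width `interval`: snapshot i holds the edges whose timestamp falls in
    [t0 + i*interval, t0 + (i+1)*interval), where t0 is the earliest time."""
    t0 = min(t for _, _, t in edges)
    snapshots = defaultdict(set)
    for u, v, t in edges:
        # store each undirected edge in canonical (small, large) order
        snapshots[(t - t0) // interval].add((min(u, v), max(u, v)))
    return [sorted(snapshots[i]) for i in range(max(snapshots) + 1)]
```

Shrinking `interval` yields more snapshots with fewer changes between consecutive ones, which is exactly the trade-off Figure 6 explores.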
The effect of lazy update. Finally, we evaluate the effect of the two threshold parameters of the incremental BCGD algorithm. Since the two parameters are correlated with each other, we vary both of them and report the total inference time and prediction error on the Facebook dataset in Figure 7. The results verify that setting both thresholds to smaller values improves predictive accuracy at the cost of a longer total running time. However, Figure 7(a) shows that below a certain point, further reducing the thresholds leaves the running time almost unchanged. This verifies that we can safely set both thresholds to relatively small values without losing much efficiency. We repeated this set of experiments on all the other datasets, and used curve fitting to capture how the best threshold values can be approximated from dataset parameters. In the following experiments, unless otherwise specified, we set the two thresholds to the values obtained from this fit.
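The lazy-update idea, skipping a node whose latent position would barely move, can be sketched as follows. The single `threshold` here is a hypothetical stand-in for the two tuned parameters of the incremental algorithm, and the update rule is a simplified projected gradient step rather than the paper's exact BCGD update.

```python
import numpy as np

def lazy_update(Z, grads, step, threshold):
    """Conditional (lazy) update: apply a node's gradient step only when
    the proposed move is large enough. Small drifts are skipped, trading a
    little accuracy for fewer updates, as in Figure 7's time/error trade-off.

    Z      : (n, k) nonnegative latent positions, modified in place.
    grads  : (n, k) per-node gradients.
    Returns the list of node indices that were actually updated."""
    updated = []
    for u, g in enumerate(grads):
        delta = step * g
        if np.linalg.norm(delta) >= threshold:
            Z[u] = np.maximum(Z[u] - delta, 0.0)  # keep nonnegativity
            updated.append(u)
    return updated
```

With a small threshold almost every node is touched (high accuracy, more work); with a large one most nodes keep their stale positions.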
5.3 Evaluation of Efficiency
In this section we first compare the proposed global BCGD, local BCGD, and incremental BCGD with other latent space inference approaches in terms of inference time and memory consumption. We then compare the BCGD approaches with two auto-dimensionality latent space approaches, BIGCLAM and LabelRT, and the graph heuristic AA, in terms of online link prediction time and memory consumption.
Offline inference efficiency. We first report the total running time and memory consumption of our BCGD approaches in Figure 8. Clearly, the global BCGD is neither memory efficient nor time efficient. Both the local and the incremental BCGD are very time and memory efficient: the incremental BCGD is more efficient in running time, while the local BCGD is more efficient in memory usage. This is because the incremental BCGD requires additional storage for the graph changes, but it adaptively updates the latent space with those changes and is thus very time efficient.
In the following we group the other baselines into two classes to facilitate visualization of the results. We first compare the memory- or time-inefficient approaches DMMSB, PTM, LabelRT, and NMFR with our global BCGD, and report the results in Figure 9. Among them, PTM and LabelRT require much larger memory consumption and thus fail to process the two large graphs DBLP and YouTube due to their memory bottlenecks. NMFR is more memory efficient than PTM and LabelRT, but it still cannot handle the two large graphs. In addition, the global optimization algorithm DMMSB takes too long to process DBLP and YouTube due to its running time bottleneck. Figure 9(a) shows that on the Infection and Facebook data, LabelRT ≻ PTM ≻ {NMFR, global BCGD} ≻ DMMSB, where A ≻ B denotes that A is faster than B on average. On the HepPh data, {NMFR, global BCGD} ≻ {LabelRT, PTM} ≻ DMMSB. Clearly, our global optimization algorithm is at least five times faster than the other global approach, DMMSB, and is comparable to the inefficient local approach NMFR. For small graphs, it is not surprising that the most efficient algorithm is the incremental approach LabelRT, since it incrementally maintains and updates latent spaces. Unfortunately, it consumes large amounts of memory and becomes much slower than our approach on the HepPh graph. In addition, all of these approaches are much slower than our fast BCGD algorithms, the local and incremental BCGD, each of which takes less than 400 seconds to finish learning the temporal latent space for HepPh.
We further compare our fast BCGD algorithms with the state-of-the-art scalable approaches Hott and BIGCLAM. The local BCGD is comparable to Hott and BIGCLAM, and the incremental BCGD is consistently more efficient than both. Since Figure 10(b) shows that the memory usage of these four approaches is similar, we conclude that our two fast BCGD algorithms are both memory and time efficient.
Online prediction efficiency. We now verify that the low-dimension latent space approach is very efficient for online link prediction. Here we choose AA as a representative graph heuristic for link prediction, not only because it achieves good accuracy [31], but also because it is fast to compute compared to other graph heuristics [7]. Even so, AA is still much slower than the latent space-based approaches in online link probability computation. As shown in Figure 11, on the HepPh dataset it takes more than 150 seconds to compute the AA scores for 20,000 node pairs, while all the low-dimension latent space-based approaches finish the online link probability computation in less than one second. In addition, latent space-based link prediction approaches are also more efficient than AA in terms of memory consumption; see Figure 11(b). To predict whether two nodes are linked, latent space-based approaches only need to read the latent positions of the two nodes, while AA requires the neighborhood information of both nodes. Finally, Figure 11(a) also supports our use of the low-dimension constraint: with k=20, our BCGD approaches achieve at least a four-fold speedup (sometimes more than two hundred-fold) over the auto-dimensionality approaches LabelRT and BIGCLAM in online link prediction.
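The cost argument above — reading 2k numbers per pair instead of whole neighborhoods — is visible in a minimal scoring routine. The matrix `Z` here stands for any learned low-dimension embedding, not the paper's trained model:

```python
import numpy as np

def predict_links(Z, pairs):
    """Online link scores for a batch of node pairs.

    Each score is the inner product of two k-dimensional latent positions,
    so prediction touches only 2k numbers per pair, independent of node
    degree; neighborhood-based heuristics instead read full adjacency lists."""
    return np.array([Z[u] @ Z[v] for u, v in pairs])
```

This is why the latent space approaches in Figure 11 answer 20,000 queries in under a second.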
5.4 Evaluation of Link Prediction Accuracy
In this section we quantitatively evaluate the quality of the learned latent spaces in terms of their predictive power. We give an overall comparison of all the approaches in terms of AUC on predicting all links and report the results in Figure 12. Here, the baseline denotes simply using the current snapshot to predict the next one. Clearly, for all-link prediction accuracy, we have {global BCGD, local BCGD, incremental BCGD, PTM, DMMSB} ≻ {Hott, NMFR} ≻ {BIGCLAM, LabelRT, AA, baseline}, where A ≻ B denotes that, on average, A is more accurate than B in terms of predictive power. BIGCLAM and LabelRT have poor AUC scores because they only output a single, hard assignment of nodes to each dimension, which provides no means to control the trade-off between true and false positive rates. AA does not perform well on Facebook and YouTube because it cannot capture deleted links; AA still gives each unlinked node pair a high score based on its topological similarity in the graph aggregated over all previous snapshots. Our methods perform much better than the baseline because: 1) the learned latent space encodes not only the link information of the current snapshot, but also the temporal information of the preceding snapshots; 2) the latent space reconstruction is a good estimate of the next snapshot; and 3) the reconstruction error between the current snapshot and its latent space reconstruction reflects possible future changes. Given any node pair, if this reconstruction error is large, it indicates that the proximity of the pair is changing over time.
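As a reference point for the AUC numbers discussed above, a minimal pairwise AUC computation (the standard definition, not the paper's evaluation code) looks like this:

```python
def auc(pos_scores, neg_scores):
    """AUC as the probability that a truly linked (positive) pair outscores
    a non-linked (negative) pair, counting ties as one half."""
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))
```

This definition also explains why hard assignments hurt BIGCLAM and LabelRT: with few distinct score values, most positive/negative comparisons are ties.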
We next evaluate the link prediction performance of all the approaches on new links only. The results are plotted in Figure 13. As expected, the AUC values are much lower for new links. For new-link prediction, we have {global BCGD, local BCGD, incremental BCGD, LabelRT, DMMSB} ≻ {Hott, NMFR} ≻ {AA, BIGCLAM}. It is not surprising that all the temporal link prediction approaches perform better than the static link prediction approaches.
Figure 13 also reflects an interesting phenomenon: our fast BCGD algorithms, the local and incremental BCGD, not only remove the efficiency bottleneck of latent space inference, but also achieve prediction accuracy comparable to the global optimization approach. For new-link prediction, they even outperform the global approach on both the Infection and YouTube datasets. This is because both the local and incremental BCGD utilize local structures and local invariance, which significantly enhances the encoding of local and temporal information in the temporal latent space. In addition, we notice that, on average, the incremental approach obtains higher accuracy than the local approach. Although both utilize local invariance, the incremental BCGD exploits the local structure and local invariance adaptively as the graph changes, which leads to better performance. Since the incremental BCGD is also more scalable than all the alternative approaches (as shown in Figure 10(a)), we conclude that it is a practical solution for link prediction on large-scale dynamic networks.
6 Related work
Recently, link prediction has attracted significant attention, and various methods have been proposed for different types of networks. In the following, we first give a brief overview of related work in link prediction. Particularly, we focus on two categories: graphbased heuristics and latent spacebased approaches. Next, we present some related work in inferring latent positions of nodes in a latent space.
6.1 Link Prediction
The link prediction problem in static networks has been extensively studied. We refer readers to two recent survey papers [24, 15] and the related work section of [49] for an exhaustive introduction to this thriving field. Among existing works, one of the most popular categories of methods is graph-based heuristics. Graph-based heuristics [24, 2, 35, 20, 19] model the link probability between two nodes as a function of their topological similarity, which can be defined based on local proximity, such as common neighbors [2] and triad closures [35], or global proximity, such as weighted shortest paths [20]. Here we highlight Adamic and Adar (AA) [2], who proposed a degree-weighted common-neighbor heuristic that works well in collaboration networks.
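The degree-weighted common-neighbor heuristic of Adamic and Adar [2] is simple to state in code; the dictionary-of-sets adjacency below is our own representation for the sketch:

```python
import math

def adamic_adar(neighbors, u, v):
    """AA score of a node pair: each common neighbor w contributes
    1/log(deg(w)), so a rare shared contact counts for more than a hub.
    Degree-1 common neighbors are skipped to avoid division by log(1) = 0."""
    return sum(1.0 / math.log(len(neighbors[w]))
               for w in neighbors[u] & neighbors[v]
               if len(neighbors[w]) > 1)
```

Computing this score requires both nodes' neighbor sets, which is the source of the online cost discussed in Section 5.3.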
Unfortunately, in online link prediction, the computation of topological similarity metrics, especially for those metrics that require path computation or random walk over the entire network, is timeconsuming if computed onthefly [7]. On the other hand, one can precompute all pairs of topological similarity scores in advance, leading to a quadratic storage cost. To strike a balance between online computation cost and offline storage cost, recent research interests in link prediction have been directed towards temporal link prediction with latent space modeling [16]. For example, Fu et al. [10] extended the mixed membership block model to allow a linear Gaussian trend in the model parameters (DMMSB). Sewell and Chen [33] proposed a model which embeds longitudinal network data as trajectories in a latent Euclidean space, and the probability of link formation between two nodes depends on their Euclidean distance and popularity. In this work we do not take the popularity of each node into consideration since we assume that the popularity of each node is automatically captured by the latent space modeling: a popular node is surrounded by many other nodes in the latent space. Furthermore, our model penalizes sudden and large changes in the latent positions.
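A distance-driven link probability in the spirit of the latent space models of [16, 33] can be sketched in a few lines; the logistic form and the `alpha` intercept are illustrative assumptions rather than the exact specification of either model:

```python
import math

def link_probability(z_u, z_v, alpha=0.0):
    """Logistic link probability driven by latent Euclidean distance:
    the closer two nodes sit in the latent space, the likelier a link.
    `alpha` is a global intercept (no per-node popularity term, matching
    the modeling choice discussed above)."""
    dist = math.dist(z_u, z_v)
    return 1.0 / (1.0 + math.exp(-(alpha - dist)))
```

At distance zero the probability is the intercept's logistic value, and it decays monotonically as nodes move apart, which is the homophily behavior the latent space is meant to capture.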
Dunlavy et al. [8] developed a tensor-based latent space model to predict temporal links. Unfortunately, in online prediction it requires tensor products of vectors, which are much more computationally expensive than the dot products used in the proposed approach. In addition, their approach incurs a very large storage cost since it needs to keep the entire tensor in memory. Recently, there have been several more efficient approaches to tensor decomposition. For instance, Huang et al. [17] proposed to conduct tensor decomposition with singular value decomposition to find temporal communities in a GPGPU setting.
6.2 Inferring Latent Space
Recent work has explored static latent space modeling for social network analysis. For example, Hoff et al. [16] developed a class of models where the probability of a relation between actors depends on the positions of individuals in an unobserved social space. They perform inference for the latent space with maximum likelihood estimation and Bayesian posterior inference. Obtaining this posterior is, however, often infeasible for large-scale graphs. Yin et al. [45] proposed an efficient stochastic variational inference algorithm and a parsimonious triangular model to infer the latent spaces of large networks (PTM). Sarkar and Moore [32] first generalized classical multidimensional scaling to get an initial estimate of the positions in the latent space, and then applied nonlinear optimization directly to obtain the latent space representation.
Matrix factorization approaches have also been applied to observed networks to learn low-dimension latent space representations. Yang et al. [43] proposed a multiplicative algorithm with graph random walk to solve the symmetric graph factorization problem (NMFR). However, their approach does not scale well due to the high computation cost of each iteration, and its effectiveness decreases as the density of the input graph increases. Yang and Leskovec [42] proposed a matrix factorization approach over networks that learns a static latent space representation by maximizing the link likelihood (BIGCLAM). Their approach is very scalable; nevertheless, the learned latent space is of high dimension, which leads to expensive online link prediction computation. HottTopixx (Hott [30]) uses a new approach for NMF with a low-rank constraint that is highly parallel, which allows it to run on very large datasets. Unfortunately, it is designed to factorize traditional rectangular matrices, and scalable approaches to symmetric graph factorization are much less studied than rectangular matrix factorization such as Hott. In this work we apply symmetric matrix factorization directly to the observed networks to infer low-rank latent spaces. We propose a block coordinate gradient descent algorithm that is more scalable than NMFR and performs well regardless of the topology of the input graphs.
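A bare-bones version of the symmetric factorization problem discussed here can be minimized by projected gradient descent. This sketch is a simplified stand-in for the paper's BCGD scheme: plain full-gradient updates, a fixed step size, and no block coordinate or temporal smoothness terms.

```python
import numpy as np

def symmetric_nmf(G, k, steps=500, lr=1e-3, seed=0):
    """Minimize ||G - Z Z^T||_F^2 over Z >= 0 by projected gradient descent.

    For symmetric G, the gradient of the objective is 4 (Z Z^T - G) Z; each
    step moves against it and projects back onto the nonnegative orthant."""
    rng = np.random.default_rng(seed)
    n = G.shape[0]
    Z = rng.random((n, k))
    for _ in range(steps):
        grad = 4.0 * (Z @ Z.T - G) @ Z
        Z = np.maximum(Z - lr * grad, 0.0)  # projection keeps Z nonnegative
    return Z
```

Replacing the full-gradient step with per-node (block) updates, and adding the temporal penalty, is what turns this skeleton into a BCGD-style solver.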
In addition to matrix factorization approaches, Xie et al. [41] recently proposed a label propagation-based approach (LabelRT) that incrementally detects communities in dynamic networks. In this work we propose an incremental BCGD algorithm that incrementally updates latent positions at each time step with conditional updates on a set of affected nodes.
7 Conclusion and Future Work
In this work we propose a scalable approach for link prediction with a temporal latent space model, which assumes that two nodes are more likely to be linked if they are close to each other in the latent space. In addition, our temporal latent space model prefers smooth evolution, penalizing large changes in the latent position of each node. For the proposed model, we developed three different inference algorithms, based on global, local, and incremental block coordinate gradient descent, to learn the temporal latent space via nonnegative matrix factorization. We also provide a set of theoretical analyses characterizing their performance guarantees. We conducted a set of experiments on large networks with millions of nodes and links to verify both the efficiency and the predictive quality of all the proposed approaches.
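The smoothness preference described above can be written as a regularized factorization objective. The notation below — Z_t for the nonnegative latent positions at time t, G_t for the snapshot adjacency matrix, λ for the smoothness weight — is our own labeling of a plausible form of the model, not necessarily the paper's exact formulation:

```latex
\min_{Z_1,\dots,Z_T \,\ge\, 0}\;
\sum_{t=1}^{T} \big\| G_t - Z_t Z_t^{\top} \big\|_F^2
\;+\; \lambda \sum_{t=2}^{T} \big\| Z_t - Z_{t-1} \big\|_F^2
```

The first term rewards reconstructing each snapshot from its latent positions; the second penalizes large movements of latent positions between consecutive snapshots.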
Our model still has limitations. First, the temporal smoothness assumption may not hold in some circumstances. For example, in a road network, the latent position of each node changes significantly from rush hour to non-rush hour. Even in social networks, where temporal smoothness assumptions typically hold, external events may cause significant shifts in the network that reflect abrupt changes in latent node positions. We plan to extend the proposed model to support temporally nonlinear transitions. Second, we plan to develop a continuous-time model that supports continuous inputs rather than discretized graph snapshots. Moreover, we plan to extend our approach to more general cases, including directed and weighted graphs. Beyond model improvements, another interesting direction is to further improve efficiency. Note that in our block coordinate gradient descent approach, the latent position updates of two nodes can be conducted simultaneously if the nodes do not share the same neighborhood. This property can be leveraged for effective parallelization in the future.
Acknowledgment
We are very grateful to Prof. Dacheng Tao, Prof. Kun Zhang, Prof. Rong Ge, Li Han, and Dingxiong Deng for their insightful discussions. This research was supported in part by DARPA grant No. W911NF-12-1-0034.
References
 [1] “DBLP network dataset – KONECT,” May 2015. [Online]. Available: http://konect.uni-koblenz.de/networks/dblp_coauthor
 [2] L. A. Adamic and E. Adar, “Friends and neighbors on the web,” Social Networks, vol. 25, pp. 211–230, 2001.
 [3] E. M. Airoldi, D. M. Blei, S. E. Fienberg, and E. P. Xing, “Mixed membership stochastic blockmodels,” J. Mach. Learn. Res., vol. 9, pp. 1981–2014, 2008.
 [4] S. Arora, R. Ge, R. Kannan, and A. Moitra, “Computing a nonnegative matrix factorization – provably,” in STOC Conference, 2012, pp. 145–162.
 [5] A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183–202, 2009.
 [6] M. W. Berry, M. Browne, A. N. Langville, V. P. Pauca, and R. J. Plemmons, “Algorithms and applications for approximate nonnegative matrix factorization,” Comp. Stat. & Data Analysis, vol. 52, no. 1, pp. 155–173, 2007.
 [7] S. Cohen and N. Cohen-Tzemach, “Implementing link-prediction for social networks in a database system,” in SIGMOD DBSocial Workshop, 2013, pp. 37–42.
 [8] D. M. Dunlavy, T. G. Kolda, and E. Acar, “Temporal link prediction using matrix and tensor factorizations,” ACM Trans. Knowl. Discov. Data, vol. 5, no. 2, pp. 10:1–10:27, 2011.
 [9] D. Erdős, R. Gemulla, and E. Terzi, “Reconstructing graphs from neighborhood data,” ACM Trans. Knowl. Discov. Data, vol. 8, no. 4, pp. 23:1–23:22, 2014.
 [10] W. Fu, L. Song, and E. P. Xing, “Dynamic mixed membership blockmodel for evolving networks,” in ICML Conference, 2009, pp. 329–336.
 [11] S. Gao, L. Denoyer, and P. Gallinari, “Temporal link prediction by integrating content and structure information,” in CIKM Conference, 2011, pp. 1169–1174.
 [12] D. Gibson, R. Kumar, and A. Tomkins, “Discovering large dense subgraphs in massive graphs,” in VLDB Conference, 2005, pp. 721–732.
 [13] N. Guan, D. Tao, Z. Luo, and B. Yuan, “NeNMF: An optimal gradient method for nonnegative matrix factorization,” IEEE Trans. on Signal Processing, pp. 2882–2898, 2012.
 [14] P. Gupta, V. Satuluri, A. Grewal, S. Gurumurthy, V. Zhabiuk, Q. Li, and J. Lin, “Realtime twitter recommendation: Online motif detection in large dynamic graphs,” Proc. VLDB Endow., vol. 7, no. 13, pp. 1379–1380, 2014.
 [15] M. A. Hasan and M. J. Zaki, “A survey of link prediction in social networks,” in Social Network Data Analytics. Springer US, 2011, pp. 243–275.
 [16] P. D. Hoff, A. E. Raftery, and M. S. Handcock, “Latent space approaches to social network analysis,” Journal of the American Statistical Association, vol. 97, 2002.
 [17] F. Huang, N. U. N, M. U. Hakeem, P. Verma, and A. Anandkumar, “Fast detection of overlapping communities via online tensor methods on gpus.” CoRR, vol. abs/1309.0787, 2013.
 [18] L. Isella, J. Stehlé, A. Barrat, C. Cattuto, J.-F. Pinton, and W. V. den Broeck, “What’s in a crowd? Analysis of face-to-face behavioral networks,” J. of Theoretical Biology, vol. 271, no. 1, pp. 166–180, 2011.
 [19] G. Jeh and J. Widom, “SimRank: a measure of structural-context similarity,” in SIGKDD Conference, 2002, pp. 538–543.
 [20] L. Katz, “A new status index derived from sociometric analysis,” Psychometrika, vol. 18, no. 1, pp. 39–43, March 1953.
 [21] J. Kunegis, “KONECT  The Koblenz Network Collection,” in Proc. Int. Web Observatory Workshop, 2013.
 [22] D. D. Lee and H. S. Seung, “Learning the parts of objects by nonnegative matrix factorization,” Nature, vol. 401, no. 6755, pp. 788–791, 1999.
 [23] J. Leskovec, J. Kleinberg, and C. Faloutsos, “Graph evolution: Densification and shrinking diameters,” ACM Trans. Knowledge Discovery from Data, vol. 1, no. 1, pp. 1–40, 2007.
 [24] D. Liben-Nowell and J. Kleinberg, “The link prediction problem for social networks,” in CIKM Conference, 2003, pp. 556–559.
 [25] M. McPherson, L. Smith-Lovin, and J. M. Cook, “Birds of a feather: Homophily in social networks,” Annual Review of Sociology, vol. 27, pp. 415–444, 2001.
 [26] A. K. Menon and C. Elkan, “Link prediction via matrix factorization,” in ECML/PKDD Conference, 2011, pp. 437–452.
 [27] A. Mislove, “Online social networks: Measurement, analysis, and applications to distributed information systems,” Ph.D. dissertation, Department of Computer Science, Rice University, 2009.
 [28] Y. Nesterov, Introductory lectures on convex optimization : a basic course. Kluwer Academic Publ., 2004.
 [29] G.-J. Qi, C. C. Aggarwal, and T. Huang, “Link prediction across networks by biased cross-network sampling,” in ICDE Conference, 2013, pp. 793–804.
 [30] B. Recht, C. Re, J. A. Tropp, and V. Bittorf, “Factoring nonnegative matrices with linear programs,” in NIPS Conference, 2012, pp. 1223–1231.
 [31] P. Sarkar, D. Chakrabarti, and M. I. Jordan, “Nonparametric link prediction in dynamic networks,” in ICML Conference, 2012, pp. 1687–1694.
 [32] P. Sarkar and A. W. Moore, “Dynamic social network analysis using latent space models,” SIGKDD Explor. Newsl., vol. 7, no. 2, pp. 31–40, 2005.
 [33] D. K. Sewell and Y. Chen, “Latent space models for dynamic networks,” Journal of the American Statistical Association, vol. 110, no. 512, pp. 1646–1657, 2015.
 [34] J. Sun, C. Faloutsos, S. Papadimitriou, and P. S. Yu, “GraphScope: parameter-free mining of large time-evolving graphs,” in SIGKDD Conference, 2007, pp. 687–696.
 [35] P. Symeonidis, E. Tiakas, and Y. Manolopoulos, “Transitive node similarity for link prediction in social networks with positive and negative links,” in Proceedings of the 4th ACM conference on Recommender systems, 2010, pp. 183–190.
 [36] C. Tantipathananandh, T. Berger-Wolf, and D. Kempe, “A framework for community identification in dynamic social networks,” in SIGKDD Conference, 2007, pp. 717–726.
 [37] C. Tantipathananandh and T. Y. Berger-Wolf, “Finding communities in dynamic social networks,” in ICDM Conference, 2011, pp. 1236–1241.
 [38] P. Tseng and S. Yun, “A coordinate gradient descent method for nonsmooth separable minimization,” Math. Program., vol. 117, no. 12, pp. 387–423, 2009.
 [39] S. A. Vavasis, “On the complexity of nonnegative matrix factorization,” J. on Optimization, vol. 20, no. 3, pp. 1364–1377, 2009.
 [40] B. Viswanath, A. Mislove, M. Cha, and K. P. Gummadi, “On the evolution of user interaction in Facebook,” in Proc. Workshop on Online Social Networks, 2009, pp. 37–42.
 [41] J. Xie, M. Chen, and B. K. Szymanski, “LabelRankT: Incremental community detection in dynamic networks via label propagation,” CoRR, vol. abs/1305.2006, 2013.
 [42] J. Yang and J. Leskovec, “Overlapping community detection at scale: A nonnegative matrix factorization approach,” in WSDM Conference, 2013, pp. 587–596.
 [43] Z. Yang, T. Hao, O. Dikmen, X. Chen, and E. Oja, “Clustering by nonnegative matrix factorization using graph random walk,” in NIPS Conference, 2012, pp. 1088–1096.
 [44] J. Ye, H. Cheng, Z. Zhu, and M. Chen, “Predicting positive and negative links in signed social networks by transfer learning,” in WWW Conference, 2013, pp. 1477–1488.
 [45] J. Yin, Q. Ho, and E. P. Xing, “A scalable approach to probabilistic latent space inference of large-scale networks,” in NIPS Conference, 2013, pp. 422–430.
 [46] J. Zhang, C. Wang, J. Wang, and J. X. Yu, “Inferring continuous dynamic social influence and personal preference for temporal behavior prediction,” Proc. VLDB Endow., vol. 8, no. 3, pp. 269–280, 2014.
 [47] Y. Zhang, M. Zhang, Y. Liu, S. Ma, and S. Feng, “Localized matrix factorization for recommendation based on matrix block diagonal forms,” in WWW Conference, 2013, pp. 1511–1520.
 [48] L. Zhu, A. Galstyan, J. Cheng, and K. Lerman, “Tripartite graph clustering for dynamic sentiment analysis on social media,” in SIGMOD Conference, 2014, pp. 1531–1542.
 [49] L. Zhu and K. Lerman, “A visibilitybased model for link prediction in social media,” in ASE SocialCom Conference, 2014.
Linhong Zhu is a computer scientist at USC’s Information Sciences Institute. Prior to that she worked as a Scientist I in the data analytics department at the Institute for Infocomm Research, Singapore. She obtained her Ph.D. degree in computer engineering from Nanyang Technological University, Singapore, in 2011 and the BS degree from the University of Science and Technology of China in 2006. Her research interests are in large-scale graph analytics with applications to social network analysis, social media analysis, and predictive modeling. She is a member of IEEE and ACM.
Dong Guo received the BS degree in physics from the University of Science and Technology of China, Hefei, China, in 2007, and entered the Ph.D. program in computer science at the University of Southern California in 2011. His doctoral projects consisted of studying the datadriven representation learning algorithms and applying machine pipeline in industrial data analysis. 
Junming Yin is an assistant professor in the Department of Management Information Systems at the University of Arizona. Prior to that he was a Lane Fellow in the Lane Center of Computational Biology at Carnegie Mellon University. He received his Ph.D. from UC Berkeley in Computer Science. His research focus is on statistical machine learning and its applications in business intelligence, digital marketing, healthcare systems, and computational biology. 
Greg Ver Steeg is a research assistant professor in computer science at USC’s Information Sciences Institute. He received his Ph.D. in physics from Caltech in 2009 and since then has focused on using ideas from information theory to understand complex systems like human behavior, biology, and language. His work has been recognized with an AFOSR Young Investigator Award. 
Aram Galstyan is a research director for data science and machine learning at USC’s Information Sciences Institute and a research associate professor in the USC Viterbi School of Engineering’s Computer Science Department. His current research focuses on different aspects of machine learning, with applications in computational social sciences, forecasting, and bioinformatics. 
Supplemental Materials to “Scalable Temporal Latent Space Inference for Link Prediction in Dynamic Social Networks”
Appendix A Proof of Lemma 1
According to Eq. 3, we can obtain the gradient of the objective $J$ with respect to $Z_t$ at time stamp $t$:

$$\nabla_{Z_t} J = -2\big(G_t - Z_t Z_t^{\top}\big) Z_t - 2\big(G_t - Z_t Z_t^{\top}\big)^{\top} Z_t + 2\lambda\big(Z_t - Z_{t-1}\big) + 2\lambda\big(Z_t - Z_{t+1}\big). \quad (9)$$

Since for each $t$, $G_t = G_t^{\top}$ (each snapshot is undirected), the gradient of $J$ can be simplified as

$$\nabla_{Z_t} J = -4\big(G_t - Z_t Z_t^{\top}\big) Z_t + 2\lambda\big(2Z_t - Z_{t-1} - Z_{t+1}\big). \quad (12)$$

For any two matrices $A$, $B$, replacing $Z_t$ in Eq. 12 with $A$ and $B$ respectively and subtracting, we have

$$\nabla J(A) - \nabla J(B) = 4\big(A A^{\top} A - B B^{\top} B\big) - 4 G_t (A - B) + 4\lambda (A - B).$$

Recall that for any two positive semidefinite matrices $X$ and $Y$, we have $\operatorname{tr}(XY) \le \operatorname{tr}(X)\operatorname{tr}(Y)$. By substituting $X = (A-B)(A-B)^{\top}$ and $Y = A A^{\top} A A^{\top}$, we have $\|A A^{\top}(A-B)\|_F \le \|A\|_F^2 \, \|A-B\|_F$.

Let us apply the above trace inequality again, together with the decomposition $A A^{\top} A - B B^{\top} B = A A^{\top}(A - B) + A(A-B)^{\top} B + (A-B) B^{\top} B$; we then have

$$\big\| A A^{\top} A - B B^{\top} B \big\|_F \le \big( \|A\|_F^2 + \|A\|_F \|B\|_F + \|B\|_F^2 \big) \big\| A - B \big\|_F.$$

Thus, we have

$$\big\| \nabla J(A) - \nabla J(B) \big\|_F \le 4\big( \|G_t\|_F + \lambda + \|A\|_F^2 + \|A\|_F \|B\|_F + \|B\|_F^2 \big) \big\| A - B \big\|_F,$$

i.e., the gradient of $J$ is Lipschitz continuous on any bounded set of latent positions.