Spread-gram: A spreading-activation schema of network structural learning

Jie Bai1, Linjing Li1, Daniel Zeng1

1The State Key Laboratory of Management and Control for Complex Systems,
Institute of Automation, Chinese Academy of Sciences,
Beijing, China
{baijie2013, linjing.li, dajun.zeng}@ia.ac.cn

Abstract

Network representation learning has grown rapidly in recent years. However, existing studies usually reconstruct networks as sequences or matrices, which may cause information bias or sparsity problems during model training. Inspired by a cognitive model of human memory, we propose a new network representation learning scheme in which node embeddings are learned by adjusting the proximity of nodes while traversing the spreading structure of the network. Our proposed method shows a significant improvement in multiple analysis tasks on various real-world networks, ranging from semantic networks to protein interaction networks, international trade networks, and human behavior networks. In particular, our model can effectively discover the hierarchical structures in networks. The well-organized model training converges within only a small number of iterations, and the training time is linear in the number of edges.

Keywords: representation learning, network analysis, Spreading Activation, cognitive psychology, node embeddings

1 Introduction

Modern society contains innumerable networks [1], such as social networks, the World Wide Web, knowledge graphs, and various information networks constructed from human behaviours. Network representation learning, also known as structure representation [2] or network embedding [3], is a fundamental building block for understanding and analyzing networks [4]. This technique maps a network into a latent space where its structural semantic information condenses into node embeddings; it can also be seen as dimensionality reduction for networks. Related research directions include latent space models [5] and graph embedding [6], which likewise try to make the structural semantics of the network computable.

Recent network representation learning studies, including random-walk-based methods [7, 8, 9, 10] and neural-network-based methods [11, 12], have developed with the advance of deep learning techniques and have made major progress in many specific network analysis tasks. However, these deep learning techniques were mostly designed originally for text or images. Unlike text and images, whose element arrangement is fixed (for example, a sentence is a sequence of words and an image is a matrix of pixels), the topological structure of a network is uncertain, i.e., for each node in a network, we cannot fix the number of its neighbors in advance. This uncertainty leads to particular requirements for network representation learning: the common sequential input scheme introduces bias, and the matrix input scheme suffers from data sparsity. These drawbacks can easily damage the globalized high-order structure of networks [13].

On the other side, it is widely accepted that the cognitive mechanisms of human intelligence are skilled at representing structure and reasoning about relations based on networks [2]. Many cognitive theories hold that the human knowledge system is stored as networks (graphs), and that high-order cognitive activities, like learning, reasoning and problem solving, are operations on these networks [14, 15, 16]. Among these theories, the spreading-activation theories form a major branch of cognitive psychology that concentrates on the interactions of entities in the memory network of human cognition. They describe the dynamic memory search process that happens in the human brain. Figure 1 illustrates the common spreading-activation mechanism. This mechanism overcomes the challenge discussed above: it accommodates the uncertainty of network topological structure and handles both global structure and local correlations simultaneously. The first spreading-activation theory was proposed to implement computer simulations of memory search [17]. Since then, it has been applied extensively in semantic networks, information retrieval [18, 19, 20, 21] and social network analysis [22, 23, 24, 25]. However, to our knowledge, how to apply spreading-activation theories to network representation learning has not been well studied.

Figure 1: Illustration of the spreading-activation mechanism. Different colors indicate the order in which the nodes are searched.

We propose spread-gram, a spreading-activation schema for network representation learning. Spread-gram is inspired by the ACT spreading-activation theory [26], which addresses the dynamic retrieval issue in the human brain when complex memory activities happen. Our intuition is that the spreading-activation process in human cognition can help build better network analysis schemas. In spread-gram, node embeddings are learned by following the spreading structure of the network: each time, the proximity of node pairs along the spreading paths is compared and adjusted. More specifically, given a source node, the activation procedure spreads to all of the target nodes connected to the source node. These target nodes then become new source nodes and spread the activation procedure to all of their neighbors. The spreading terminates once every node in the connected graph has been activated once. On one hand, this learning schema ensures that all nodes are selected and updated along the network structure, so non-linear structure is permitted and input bias is avoided. On the other hand, the well-organized activation and learning speed up the convergence of model training.

We discuss the applicability of the ACT spreading-activation theory to network representation learning and the approach to applying it. The network searching schema and node representation updating rules are then constructed accordingly. Finally, we propose network representation learning methods for homogeneous and heterogeneous networks separately. Through the experiments, we found that spread-gram is good at preserving both local and global correlations of networks. Its distinctive ability is discovering the hierarchical structure of networks, which can hardly be captured by previous works. The related experimental results are shown in Figure 2. Moreover, spread-gram succeeds in multiple network analysis tasks on a wide range of real-world networks.

Figure 2: Network representation learning for discovering the hierarchical structure of networks. The network in use is the taxonomy of Wikipedia entries, where the hierarchies are labeled by different colors. (a) is the 2-D visualization of our proposed model; (b)-(e) are those of typical baseline models, i.e., DeepWalk, Node2vec, TransE and GCN, respectively. More details are described in the "node embedding visualization" part.

Compared with existing network representation learning methods, the superiority of Spread-gram lies in:

  • The model is designed specifically for the uncertainty of network topological structure, avoiding the information bias or sparsity problems that may occur in other models;

  • It integrates both global and local structure information of networks, which shows a significant advance in learning and representing the hierarchical structure;

  • The well-organized model training speeds up the convergence to only a small number of iterations, and the training time grows linearly with the number of edges.

2 Method Design

2.1 Problem Formulation

The existing literature [27] usually defines a network as $G = (V, E)$, where $V$ is the collection of nodes and $E$ is the collection of edges. For heterogeneous networks, there are type-mapping functions $\phi: V \to T_V$ and $\psi: E \to T_E$, which map $V$ and $E$ to the node-type collection $T_V$ and the edge-type collection $T_E$, respectively.

Under this context, network representation learning can be defined as mapping $G$ into an implicit vector space $\mathbb{R}^d$, i.e., learning a mapping $\Phi: V \to \mathbb{R}^d$, such that for any $x \in V$, $\Phi(x)$ preserves the structural semantics of $x$ as much as possible. Here $d$ is the dimensionality of the implicit vector space. Given that the object of this research is the network $G$, we will abbreviate $\Phi(x)$ as $\mathbf{v}_x$ in the following descriptions.

Based on the definitions above, we first need to define a metric which can preserve the semantic information of the network. The existing literature has tried to explain and reconstruct network semantics from various perspectives, such as inner products of node vectors for distance representation [7, 3] and Laplacian eigenmaps for global neighborhood relationships [28, 29]. These studies usually construct loss functions by comparing vector similarities with the actual relationships of node pairs. Our work integrates this idea with the structural characteristics of the ACT spreading-activation formula, which is represented as [26]:

$$a_y = b_y + r_{xy}\, a_x \qquad (1)$$

where $a_x$ and $a_y$ represent the activation rates of $x$ and $y$ respectively, $r_{xy}$ represents the correlation between $x$ and $y$, and $b_y$ represents the baseline activation of $y$. Note that $x$ and $y$ serve equivalent roles in this equation, i.e., they can be replaced by each other in further transmissions.

Inspired by the ACT spreading-activation formula, we generalize the activations from scalars to vector representations $\mathbf{v}_x$ and $\mathbf{v}_y$, and take the baseline activation as the former state of $\mathbf{v}_y$. Then Equation 1 can be transformed to:

$$\mathbf{v}_y^{(t+1)} = \mathbf{v}_y^{(t)} + \sum_{x \in N(y)} r_{xy}\, \mathbf{v}_x^{(t)} \qquad (2)$$

Equation 2 means that the representation of $y$ is adjusted according to its surrounding nodes. Here $N(y)$ is the collection of $y$'s neighbors, and $r_{xy}$ represents the correlation obtained through $\mathbf{v}_x$ and $\mathbf{v}_y$, i.e.,

$$r_{xy} = f(\mathbf{v}_x, \mathbf{v}_y) \qquad (3)$$

Now we need to construct an objective function whose update rule fulfills Equation 2. We first consider the situation in homogeneous networks; the model inference is then generalized to heterogeneous networks.

2.2 Homogeneous Networks

In homogeneous networks, $x$ and $y$ are of the same type, so $\mathbf{v}_x$ and $\mathbf{v}_y$ share the same semantic space. Considering the structure of Equation 2, we can construct a log-linear model as the objective function. Maximum likelihood estimation can then be used to obtain the gradient for model updating, i.e., to obtain $r_{xy}$.

Let

$$p(u_{xy} \mid \mathbf{v}_x, \mathbf{v}_y) = \frac{\exp\!\big(u_{xy}\,\mathbf{v}_x^{\top}\mathbf{v}_y\big)}{1 + \exp\!\big(\mathbf{v}_x^{\top}\mathbf{v}_y\big)} \qquad (4)$$

Here $u_{xy}$ represents the association of $x$ and $y$: if $(x, y) \in E$ then $u_{xy} = 1$, otherwise $u_{xy} = 0$. $\mathbf{v}_x^{\top}\mathbf{v}_y$ is the inner product of $\mathbf{v}_x$ and $\mathbf{v}_y$, which reflects the association of $x$ and $y$ in the context of the vector space. $p(u_{xy} \mid \mathbf{v}_x, \mathbf{v}_y)$ is the likelihood function of the pair $(x, y)$, which can be used to define the consistency between the associations of node pairs and those of the corresponding node vectors.

Define a sigmoid function:

$$\sigma(z) = \frac{1}{1 + e^{-z}} \qquad (5)$$

then Equation 4 can be transformed to:

$$p(u_{xy} \mid \mathbf{v}_x, \mathbf{v}_y) = \sigma\!\big(\mathbf{v}_x^{\top}\mathbf{v}_y\big)^{\,u_{xy}} \Big(1 - \sigma\!\big(\mathbf{v}_x^{\top}\mathbf{v}_y\big)\Big)^{\,1 - u_{xy}} \qquad (6)$$

For any node pair $(x, y)$, Equation 6 holds. As a result, the log likelihood of the network is:

$$\ell = \sum_{(x,y)} \Big[ u_{xy} \log \sigma\!\big(\mathbf{v}_x^{\top}\mathbf{v}_y\big) + (1 - u_{xy}) \log\Big(1 - \sigma\!\big(\mathbf{v}_x^{\top}\mathbf{v}_y\big)\Big) \Big] \qquad (7)$$

The gradients of $\mathbf{v}_x$ and $\mathbf{v}_y$ under $\ell$ should be:

$$\frac{\partial \ell}{\partial \mathbf{v}_y} = \big(u_{xy} - \sigma(\mathbf{v}_x^{\top}\mathbf{v}_y)\big)\,\mathbf{v}_x, \qquad \frac{\partial \ell}{\partial \mathbf{v}_x} = \big(u_{xy} - \sigma(\mathbf{v}_x^{\top}\mathbf{v}_y)\big)\,\mathbf{v}_y \qquad (8)$$

We can see that the forms of $\partial \ell / \partial \mathbf{v}_x$ and $\partial \ell / \partial \mathbf{v}_y$ are the same, although $x$ and $y$ serve different roles in the objective function. It means that $x$ and $y$ are interchangeable in the general model training process. Putting Equation 2 and Equation 8 together, with $\eta$ denoting the learning rate, we get:

$$r_{xy} = \eta\,\big(u_{xy} - \sigma(\mathbf{v}_x^{\top}\mathbf{v}_y)\big) \qquad (9)$$

Then the node vector updating function can be represented as:

$$\mathbf{v}_y^{(t+1)} = \mathbf{v}_y^{(t)} + \eta \sum_{x \in N(y)} \big(u_{xy} - \sigma(\mathbf{v}_x^{(t)\top}\mathbf{v}_y^{(t)})\big)\,\mathbf{v}_x^{(t)} \qquad (10)$$

Equation 10 implies that the node embedding updating equation coincides with the ACT spreading-activation formula when the objective function is defined by a log-linear model, the node-pair association is represented through the inner product of the embeddings, and the parameters are estimated with maximum likelihood.
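To make the update rule concrete, the following is a minimal NumPy sketch of Equation 10; the function name `update_node` and the dict-of-vectors data layout are our own illustrative choices, not part of the original implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def update_node(emb, y, pairs, lr=0.025):
    """Apply the spread-gram update of Equation 10 to node y.

    emb   : dict mapping node id -> np.ndarray embedding
    pairs : list of (x, u) tuples, where u = u_xy (1 if (x, y) is an edge, else 0)
    lr    : learning rate eta
    """
    grad = np.zeros_like(emb[y])
    for x, u in pairs:
        # (u_xy - sigma(v_x . v_y)) * v_x, accumulated over y's training pairs
        grad += (u - sigmoid(emb[x] @ emb[y])) * emb[x]
    emb[y] = emb[y] + lr * grad
```

By the symmetry noted in Equation 8, the same routine can update $\mathbf{v}_x$ with the roles of $x$ and $y$ swapped.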

2.3 Heterogeneous Networks

According to the definition of a network above, a heterogeneous network has more than one type of nodes or edges. In this paper we only consider heterogeneous networks that have more than one type of nodes. Assuming that each node type has its own semantic space, the problem is how to integrate the different semantic spaces to perform representation learning. We address this problem by designing a benchmark space and a transformation mechanism among the semantic spaces.

Let $\mathbb{R}^{d_0}$ be the benchmark space and $M_t$ be the mapping matrix from the semantic space of node type $t$ to $\mathbb{R}^{d_0}$. For any $x \in V$ whose type is $\phi(x)$, the correlated vector representation of $x$ in the benchmark space is:

$$\mathbf{v}'_x = M_{\phi(x)}\,\mathbf{v}_x \qquad (11)$$

Then for any $x$ and $y$ in the network, their correlation in the benchmark space should be:

$$s(x, y) = \mathbf{v}'^{\top}_x \mathbf{v}'_y = \big(M_{\phi(x)}\mathbf{v}_x\big)^{\top} \big(M_{\phi(y)}\mathbf{v}_y\big) \qquad (12)$$

According to the analysis of homogeneous networks, let

$$p(u_{xy} \mid \mathbf{v}_x, \mathbf{v}_y) = \sigma\big(s(x, y)\big)^{\,u_{xy}} \Big(1 - \sigma\big(s(x, y)\big)\Big)^{\,1 - u_{xy}} \qquad (13)$$

So the log likelihood under the log-linear model of heterogeneous networks is:

$$\ell = \sum_{(x,y)} \Big[ u_{xy} \log \sigma\big(s(x, y)\big) + (1 - u_{xy}) \log\Big(1 - \sigma\big(s(x, y)\big)\Big) \Big] \qquad (14)$$

The gradients of $\mathbf{v}_x$ and $\mathbf{v}_y$ should be:

$$\frac{\partial \ell}{\partial \mathbf{v}_y} = \big(u_{xy} - \sigma(s(x, y))\big)\,M_{\phi(y)}^{\top} M_{\phi(x)}\mathbf{v}_x, \qquad \frac{\partial \ell}{\partial \mathbf{v}_x} = \big(u_{xy} - \sigma(s(x, y))\big)\,M_{\phi(x)}^{\top} M_{\phi(y)}\mathbf{v}_y \qquad (15)$$

The gradients of $M_{\phi(x)}$ and $M_{\phi(y)}$ should be:

$$\frac{\partial \ell}{\partial M_{\phi(x)}} = \big(u_{xy} - \sigma(s(x, y))\big)\,\big(M_{\phi(y)}\mathbf{v}_y\big)\mathbf{v}_x^{\top}, \qquad \frac{\partial \ell}{\partial M_{\phi(y)}} = \big(u_{xy} - \sigma(s(x, y))\big)\,\big(M_{\phi(x)}\mathbf{v}_x\big)\mathbf{v}_y^{\top} \qquad (16)$$
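The following is a minimal sketch, under the notation above, of how the per-pair gradients of Equations 15 and 16 could be computed with NumPy; the helper name `hetero_gradients` is hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hetero_gradients(vx, vy, Mx, My, u):
    """Per-pair gradients of Equations 15-16.

    vx, vy : embeddings of x and y in their type-specific spaces (length d)
    Mx, My : mapping matrices of the types of x and y (shape d0 x d)
    u      : u_xy, 1 if (x, y) is an edge, else 0
    """
    px, py = Mx @ vx, My @ vy      # projections into the benchmark space (Eq. 11)
    err = u - sigmoid(px @ py)     # u_xy - sigma(s(x, y))
    g_vx = err * (Mx.T @ py)       # Eq. 15
    g_vy = err * (My.T @ px)
    g_Mx = err * np.outer(py, vx)  # Eq. 16 (outer products, shape d0 x d)
    g_My = err * np.outer(px, vy)
    return g_vx, g_vy, g_Mx, g_My
```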

3 Node Search Strategy and Model Training

3.1 Node Search

The ACT spreading-activation theory proposes two typical spreading schemas [26], called the "A-B, A-D" and "A-B, C-D" schemas. "A-B, A-D" means the spreading process starts from a specific node and spreads to multiple targets; "A-B, C-D" means the spreading proceeds concurrently from multiple origins to multiple targets. Figure 5 illustrates these schemas.

(a) “A-B, A-D” schema
(b) “A-B, C-D” schema
Figure 5: Two typical spreading schemas.

The node searching and updating strategy in spread-gram integrates the "A-B, A-D" and "A-B, C-D" schemas. A node searching example using spread-gram is shown in Figure 6. It selects a node randomly as the source and spreads to its neighbors ("A-B, A-D"); these neighbors are then taken as sources and spread to their neighbors. Note that from this point the spreading proceeds in parallel ("A-B, C-D"). This node searching strategy ensures that the spreading and updating reach every node in the network. Besides, the ordered updating is more efficient than random updating, since every source node is updated before spreading to its neighbors. For a network consisting of multiple connected components, spread-gram node searching is conducted on each component separately.

Figure 6: Node searching of spread-gram in a connected network.

Table 1 describes a realization of the spread-gram node searching algorithm.

Input: node set $V$, edge set $E$
Output: order of the activated nodes, a sequence $S$
Algorithm: spread-gram node searching
    Initialize an empty list $S$, an empty queue $Q$, and a map $N$, where
        $N[u] \leftarrow$ the neighbor set of $u$, for each $u$ in $V$
    sample $v$ from $V$                                                                         step *
    append $v$ to $Q$
    while $V$ isn't empty, do
        if $Q$ isn't empty, then
            pop a node $u$ from $Q$
            if $u$ in $V$, then
                append $u$ to $S$, remove $u$ from $V$
                append all of the nodes in $N[u]$ to $Q$
        else
            return to step *
    end while
    return $S$
Table 1: A realization of spread-gram node searching algorithm
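For reference, here is one possible runnable Python realization of the node searching algorithm in Table 1, using a FIFO queue; the names (e.g. `spread_gram_search`) are our own.

```python
from collections import deque
import random

def spread_gram_search(nodes, edges):
    """Return the activation order S produced by spread-gram node searching.

    nodes : iterable of node ids; edges : iterable of (u, v) pairs.
    Restarting from a random remaining node (step *) covers every component.
    """
    neighbors = {u: set() for u in nodes}
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)

    remaining = set(nodes)  # nodes not yet activated
    order = []              # the output sequence S
    queue = deque()
    while remaining:
        if not queue:                                   # step *
            queue.append(random.choice(list(remaining)))
        u = queue.popleft()
        if u in remaining:
            order.append(u)
            remaining.remove(u)
            queue.extend(neighbors[u])                  # spread to u's neighbors
    return order
```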

3.2 Model Training

In this part we discuss spread-gram model training, starting with the construction of training samples. According to the objective functions (Equation 7 and Equation 14), each node needs to compute the likelihood with all of the other nodes in the network. However, the nonadjacent node pairs in a network are usually far more numerous than the adjacent ones, so the original objective functions could lead to huge computational cost and potentially unbalanced models. We adopt the negative sampling strategy [30] to construct the training samples and solve this problem. More specifically, each time we choose an adjacent node pair $(x, y)$ as a training sample, a fixed number of nonadjacent node pairs $(z, y)$ related to $y$ will also be included, where each $z$ is sampled randomly from the network.

Let $k$ denote the coefficient of negative sampling and $N(y)$ the neighbor set of $y$. For each $y$, we choose $k\,|N(y)|$ nodes from the nonadjacent nodes in $V$ as negative samples, along with the $|N(y)|$ adjacent nodes as positive samples. Therefore the training samples corresponding to $y$ should be:

$$D_y = N(y) \,\cup\, \big\{z_1, z_2, \dots, z_{k|N(y)|}\big\}, \quad z_i \text{ sampled from } V \setminus \big(N(y) \cup \{y\}\big) \qquad (17)$$

Based on Equation 17, the objective functions need to be adjusted. For example, Equation 7 is reformulated as:

$$\ell = \sum_{y \in V} \sum_{x \in D_y} \Big[ u_{xy} \log \sigma\!\big(\mathbf{v}_x^{\top}\mathbf{v}_y\big) + (1 - u_{xy}) \log\Big(1 - \sigma\!\big(\mathbf{v}_x^{\top}\mathbf{v}_y\big)\Big) \Big] \qquad (18)$$

For each $y$ and its training set $D_y$, the update function should be:

$$\mathbf{v}_y^{(t+1)} = \mathbf{v}_y^{(t)} + \eta \sum_{x \in D_y} \big(u_{xy} - \sigma(\mathbf{v}_x^{(t)\top}\mathbf{v}_y^{(t)})\big)\,\mathbf{v}_x^{(t)} \qquad (19)$$
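As a sketch, the training set $D_y$ of Equation 17 could be assembled as below, assuming `neighbors` maps each node to its neighbor set; `build_training_samples` is a hypothetical helper, not the paper's code.

```python
import random

def build_training_samples(y, neighbors, nodes, k=5):
    """Construct the training samples D_y of Equation 17 for node y.

    Each neighbor of y is a positive sample (u = 1); k negatives per positive
    are drawn from the nodes not adjacent to y (u = 0).
    """
    pos = [(x, 1) for x in neighbors[y]]
    candidates = [z for z in nodes if z not in neighbors[y] and z != y]
    n_neg = min(k * len(pos), len(candidates))
    neg = [(z, 0) for z in random.sample(candidates, n_neg)]
    return pos + neg
```

The (node, label) pairs returned here can be fed directly to an update of the form of Equation 19.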

4 Complexity Analysis

A spread-gram model iteration consists of two components: node searching and parameter updating. We discuss the complexity of the algorithm by analyzing these two components.

In the node searching process, the algorithm goes through the network with the spreading mechanism and generates a node list that decides the order of the nodes' activation. We maintain a dynamic node queue $Q$ and an ordered list $S$. When the searching process begins, we continuously pop nodes from $Q$ and check whether each is already activated (exists in $S$). If not, we append the node to $S$ and push all of its neighbors to $Q$. For every edge, a node is added to $Q$ (some nodes may be added repeatedly), and each node popped from $Q$ must be checked for existence in $S$. In each iteration, $O(|E|)$ nodes are therefore added to $Q$. If a hash set is used to record the activated nodes, the existence check for a single node costs $O(1)$ time. Therefore the time complexity of the node searching process is $O(|E|)$.

For the parameter updating process, we discuss homogeneous and heterogeneous networks separately. In homogeneous networks, for each node $y$ we first construct a collection of training nodes $D_y$. Then for each $x \in D_y$, we compute its association with $y$ and update the vectors of $x$ and $y$ according to Equation 10. Although the size of $D_y$ varies with $y$, the total number of nodes in all $D_y$'s is computable, i.e., $\sum_{y} |D_y| = (k+1)\sum_{y} |N(y)| = 2(k+1)|E|$, where $k$ is the negative sampling parameter. The time consumption of vector updating lies in calculating inner products and summations of vectors, both of which take $O(d)$ time, where $d$ is the dimension of the node embeddings. As a result, the time complexity of parameter updating in homogeneous networks is $O(k\,d\,|E|)$.

In heterogeneous networks, we further consider the involvement of the mapping matrices. For a single pair of nodes, calculating their inner product in the benchmark space takes $O(d_0\,d)$ time, where $d_0$ is the dimension of the benchmark space. Therefore, inferring the gradient and updating a pair of nodes takes $O(d_0\,d)$ time, and the node embedding training process in an iteration takes $O(k\,d_0\,d\,|E|)$ time. The mapping matrices are updated at the end of an iteration, which considers all of the training node pairs and involves matrix operations. The time consumption of updating a single matrix is proportional to the number of training pairs involving its node type; because the mapping matrices together cover every training pair, the total time consumption of training the mapping matrices should be $O(k\,d_0\,d\,|E|)$.

5 Experiments

5.1 Toy case study

We first observed the iteration process of spread-gram by running it on toy networks. We used two typical networks (a simple connected network and a small network composed of multiple connected components), learned 2-dimensional node embeddings on each network, and visualized the network through the node embeddings (randomly initialized). The result is shown in Figure 9.

(a) A simple connected network
(b) A complex network constructed by a few connected graphs
Figure 9: Spread-gram model iterations on two toy cases. Case (a) is a connected network with 8 nodes. During the spread-gram iterations, the structure of the network was learned and represented gradually better, and two nodes that have the same neighbors share the same location. Case (b) is a network containing multiple connected components. The spread-gram model identified the components and distributed them to different locations. All of the learning and representation converged within just a few iterations.

5.2 Experimental Settings for Real-World Networks

Experiments are conducted for various real-world networks, including undirected networks, directed networks and heterogeneous networks. Multiple tasks were conducted to evaluate the performance of the models quantitatively and qualitatively.

5.2.1 Methods

The methods in comparison are as follows:

  • Spread-gram is the network representation learning model proposed in this paper. The modeling strategy (homogeneous or heterogeneous) is decided by the nature of the network. The maximum number of iterations is set to 30, and the dimension of the node embeddings is 128.

  • DeepWalk [7] is a classical random-walk based representation learning method for homogeneous networks. Because the length of node sequences in DeepWalk plays the same role as the number of iterations in spread-gram, we set it to 30. The window width for training node embeddings is 8, and the dimension of node embeddings is 128 as well.

  • Node2vec [8] is a network representation learning method which combines depth-first and breadth-first search and can be seen as a generalization of DeepWalk. For the hyper-parameters p and q of the model, we set them to 0.5 and 2 respectively, which biases the random walk toward breadth-first searching. The other parameter settings of node2vec are the same as those of DeepWalk.

  • Metapath2vec++ [9] is a state-of-the-art random-walk based heterogeneous network representation learning method following a predefined metapath schema. We adopted "A-P-A" as the meta-path of the model. The other parameters are the same as those in DeepWalk.

  • PTE [31] is a semi-supervised network representation learning method which is superior in preserving semantic correlations among nodes. PTE obeys a sequential learning schema during the model training. We set the window size of the training model as 5 and the batch size as 300.

  • GCN [32] is a typical neural-network based network representation learning method. During the training of the GCN models, the number of iterations is set to 30 and the dimension of node embeddings is set to 128, the same as the settings in Spread-gram.

  • TransE [33] is a typical knowledge graph representation learning model. A knowledge graph is a specific type of network, so we chose a representative knowledge graph learning method for comparison. For the parameter settings, we sampled 10,000 batches for model training, each batch containing 100 triples. A triple is the minimal unit of a knowledge graph, constructed from a head node, a tail node and an edge. For better training of the model, we ordered the network to make sure there are no cycles over the triples.

5.2.2 Datasets

We chose five datasets including both homogeneous and heterogeneous networks. The datasets come from various fields, such as bioinformatics, economics and human behaviors. All of the datasets provide group information for the nodes, so supervised classification experiments can be conducted conveniently. The details of the datasets are as follows.

  • WITS (https://wits.worldbank.org/Default.aspx?lang=en) is an international trade analysis tool provided by the World Bank. We downloaded the country-level trading data from WITS, which covers 233 countries/regions along with each one's most frequent import and export partners during the last few years. By taking the countries as nodes and the import/export relationships as edges, we constructed a directed homogeneous network with 233 nodes and 4301 edges. The continents the countries/regions belong to were used as node categories.

  • Wiki is a semantic correlation dataset collected by West and Leskovec [34]. This dataset is constructed from Wikipedia websites and has 4,592 entries and 119,882 hyperlinks. We built the homogeneous network by taking the entries as nodes and the hyperlinks between entries as edges. Besides, there is a hierarchical taxonomy that locates the entries. Using the first-level categories of the taxonomy, we classified the entries into 15 categories, including science, history, art, and so on.

  • DIP (https://dip.doe-mbi.ucla.edu/dip/Main.cgi) is a protein interaction database collected by UCLA. It contains a large number of experimentally determined interactions between different proteins, with 28,255 proteins and 76,881 interactions. We built the homogeneous network by taking the proteins as nodes, the interactions as edges and the proteins' types as categories.

  • DBLP (https://dblp.uni-trier.de/faq/How+can+I+download+the+whole+dblp+dataset) is a bibliographical dataset provided by DBLP, a website collecting and managing bibliographical information from the field of computer science. The dataset grows dynamically; at the time of our download it had accumulated 216,636 publications by 619,626 authors. Taking the authors and the publications as two types of nodes and the publishing behavior as the edges, we get a heterogeneous network with 836,262 nodes and 1,605,633 edges. The group information in this dataset is the publication venue, such as the name of the journal or conference.

  • Amazon is a product review dataset collected from the Amazon online shopping website [35]. Based on the assumption that a review corresponds to a buying behavior, we constructed an online shopping network consisting of products and consumers. This network contains 986,934 nodes and 2,329,915 edges. The categories of the products are used as the group information.

The remainder of this section is organized by task: (1) link prediction, (2) node classification, (3) node visualization, and (4) iteration analysis. To unify the networks, we set the edge weights of all networks uniformly to 1. When learning node embeddings for the different networks using different models, the learning rate is chosen from the range 0.01 to 0.05. For all of the classification tasks (link prediction, node classification), we divided each dataset into a 70% training set and a 30% testing set.

5.3 Link Prediction

The link prediction task examines the consistency between the proximity of a pair of nodes and that of their corresponding node embeddings. It evaluates the models' ability to preserve the semantic correlations among nodes. In this task we trained a binary classifier whose input is the difference of the embeddings of a specific node pair and whose output is a binary value indicating whether this pair of nodes is connected by an edge. When constructing the data for link prediction, for each node we randomly chose one of its neighbors as a positive sample and one node that is not its neighbor as a negative sample. For the heterogeneous networks, a pair of nodes should be selected from different types, and the node embeddings learned by the spread-gram method need to be mapped into the benchmark space before calculating their relative locations. Metapath2vec++ is designed specifically for heterogeneous networks, so there are no results for applying it to the homogeneous networks, i.e., Wiki, WITS and DIP.
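A minimal sketch of the pair-feature construction described above, assuming `emb` maps node ids to embedding vectors (the helper name `link_features` is ours, not from the paper):

```python
import numpy as np

def link_features(emb, pairs):
    """Feature for a candidate pair: the difference of its two node embeddings."""
    return np.array([emb[a] - emb[b] for a, b in pairs])

# pos_pairs / neg_pairs are sampled as described above; labels are 1 for
# connected pairs and 0 otherwise, then fed to the binary classifier below.
# X = link_features(emb, pos_pairs + neg_pairs)
# y = np.array([1] * len(pos_pairs) + [0] * len(neg_pairs))
```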

We adopt an SVM with an RBF kernel as the classifier, with the penalty coefficient set to $C = 500$. Average accuracy is used as the metric to evaluate the models' performance. The experimental results are shown in Table 2.

Dataset WITS Wiki DIP DBLP Amazon
spread-gram 0.756 0.864 0.891 0.989 0.930
DeepWalk 0.565 0.601 0.851 0.664 0.595
Node2vec 0.580 0.650 0.847 0.667 0.599
TransE 0.519 0.538 0.589 0.501 0.557
PTE 0.674 0.770 0.621 0.647 0.708
GCN 0.611 0.628 0.512 0.696 0.707
Metapath2vec++ - - - 0.690 0.718
Table 2: Link prediction results

The experimental results in Table 2 show that our proposed model spread-gram outperforms the other models significantly in the link prediction task. This is not surprising, because spread-gram is good at learning the correlations between nodes through its global input and spreading learning strategy.

5.4 Node Classification

The node classification task evaluates the node embeddings with respect to discrimination among different categories. The number of categories varies across the 5 datasets in use. We ranked the categories in each dataset by the number of entries and selected the top 7-15 categories of each dataset to conduct node classification. For the heterogeneous networks there is more than one type of nodes, and for some node types each node may belong to more than one category, such as the "authors" nodes in DBLP and the "consumers" nodes in Amazon. As a result, multi-label classification is needed for these node types.

For the multi-label node classification experiments, we adopted a decision tree as the classifier with AUC as the metric; for the other classification experiments, we used an SVM as the classifier with average F1 score as the metric. To get more details about the performance of the models, we conducted multiple classifications for each model on each dataset, increasing the number of labels gradually. The experimental results are shown below: Figure 13 shows the results on the homogeneous networks, and Figure 18 shows the results on the heterogeneous networks.

(a) “Countries” nodes in WITS network
(b) “Entries” nodes in Wiki network
(c) “Proteins” nodes in DIP network
Figure 13: Node classification results on homogeneous networks.
(a) “Authors” nodes in DBLP network
(b) “Publications” nodes in DBLP network
(c) “Consumers” nodes in Amazon network
(d) “Products” nodes in Amazon network
Figure 18: Node classification results on heterogeneous networks.

Figure 13 and Figure 18 show that spread-gram generally succeeds in the node classification tasks, i.e., in most cases spread-gram performs better and more stably than the baseline models. We can also see that a specific method like PTE produces better results in some specific cases. This is reasonable, because PTE is a semi-supervised method that leverages the label information during model training. In contrast, spread-gram uses only node-node correlations, yet global structure such as node category membership is still learned effectively. It indicates that spread-gram is able to preserve both local and global information of the networks.

5.5 Node Embedding Visualization

In the node embedding visualization task, we plot the nodes so as to qualitatively evaluate the distribution of node embeddings learned by different methods. Among the 5 networks, the nodes in Wiki contain abundant semantic information, so we chose it to compare the network structure and spatial correlations of nodes generated by the models.

The original node embeddings are 128-dimensional vectors. For the convenience of visualization, we condensed the node embeddings into two-dimensional vectors with t-SNE [36], a manifold-learning based dimensionality reduction method. The vectors were then plotted on a two-dimensional plane and colored according to their categories. For clearer presentation, we selected and showed the nodes which refer to independent and general concepts. For example, the node of "Library" was plotted but "Library_of_Alexandria" wasn't. The experimental results are shown in Figure 25.
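A t-SNE reduction of this kind can be sketched with scikit-learn as below; `emb_matrix` is an assumed name for an (n_nodes, 128) array of the learned embeddings, and the exact t-SNE hyper-parameters used in the paper are not specified.

```python
from sklearn.manifold import TSNE

# emb_matrix: (n_nodes, 128) array of learned node embeddings (assumed name)
coords = TSNE(n_components=2, init="pca", random_state=0).fit_transform(emb_matrix)
# coords[:, 0] and coords[:, 1] are then scattered and colored by node category
```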

(a) Spread-gram
(b) DeepWalk
(c) Node2vec
(d) TransE
(e) PTE
(f) GCN
Figure 25: Node embedding distributions on a 2-dimensional plane.

Figure 25 shows that the node embeddings learned by the different methods exhibit different distribution characteristics. For DeepWalk and Node2vec the distributions are more compact; for TransE and PTE the distributions are rather scattered; and for GCN the distribution is streamline-like. The distribution of spread-gram is more balanced and discriminative than those of the other methods. However, we can still find an area where nodes from multiple categories mix (the area circled with the red rectangle). To investigate these nodes, we enlarged this area and annotated the nodes with their corresponding entries, as shown in Figure 26.

Figure 26: Local plot of the "red rectangle" area in Figure 25(a).

From Figure 26 we can find that the entries are all related to religions or cultures of Southeast Asia. It implies that these entries, although belonging to different categories, constitute a meaningful semantic space. Therefore, the node embeddings of Wikipedia entries learned by spread-gram are semantically reasonable.

The above experimental results prove the effectiveness of spread-gram on network representation learning from multiple perspectives. We can conclude that spread-gram outperforms the baseline methods in almost all the tasks above.

5.6 Iteration Analysis

To further analyze the performance of spread-gram during the iterations, we recorded the results of the above quantitative experiments for the spread-gram models after each iteration. More specifically, for the different networks, we conducted the link prediction and node classification experiments once per iteration during model training. The experimental settings follow the corresponding descriptions above. The results are shown in Figure 33 and Figure 40.

(a) node classification in WITS
(b) link prediction in WITS
(c) node classification in Wiki
(d) link prediction in Wiki
(e) node classification in DIP
(f) link prediction in DIP
Figure 33: The performance of spread-gram during the iterations (homogeneous networks).
(a) multi-classification in DBLP
(b) multi-classification in Amazon
(c) node classification in DBLP
(d) node classification in Amazon
(e) link prediction in DBLP
(f) link prediction in Amazon
Figure 40: The performance of spread-gram during the iterations (heterogeneous networks).

From Figure 33 and Figure 40, we can see that the models usually stabilize within a small number of iterations. For most of the node classification experiments, the results converged at around the 15th iteration. For the link prediction experiments, the results converged more slowly, but usually within 30 iterations. This may be because the objective function of the spread-gram model implicitly maximizes the link prediction accuracy, so the link prediction results are the last to stabilize. Nevertheless, compared with the baseline methods, such as random-walk based methods which usually need walk lengths of more than 30, spread-gram converges with less computational cost.

6 Related Works

Network representation learning has attracted wide attention from researchers in recent years. According to the techniques adopted, network representation learning methods can be divided mainly into factorization-based, random-walk-based and neural-network-based branches [37].

Factorization-based methods were the mainstream in early research on network representation learning. Taking the network as an adjacency matrix, these studies usually obtain lower-dimensional representations of the nodes through matrix factorization. Representative works include LLE [38], IsoMap [39], Graph Factorization [40], EdgeCluster [41], as well as other methods applying SVD or Laplacian eigenmaps [42, 28]. However, these methods usually have high computational complexity and cannot be applied effectively to modern large-scale networks.

Random-walk-based methods, due to their efficiency and flexibility on large-scale networks, are adopted extensively in practical network analysis. DeepWalk [7] first combined random walks with word embedding models to obtain network representations. More specifically, it generates a number of node sequences through random walks on the network and feeds the node sequences to the word embedding model skip-gram [43] to learn node representations. Afterwards, a series of models extended the random-walk-based methods, such as Node2vec [8], metapath2vec [9], anonymous walk [10], HiWalk [44], etc. In spite of these advantages, the random-walk process distorts the original structure of the network and inevitably introduces input bias, since it tries to reorganize nonlinear networks into linear node sequences.

Neural-network-based network representation learning is an emerging area riding the advances of deep learning in various fields. Multiple neural network structures, like autoencoders [11, 29], convolutional neural networks [32, 45, 46, 47] and sequential neural networks [12], have been used to build network representation learning frameworks. Among these works, the convolution-based methods are the most extensively studied. Kipf and Welling proposed Graph Convolutional Networks (GCN) [32], which leverage the convolution operation on nodes to collect associations from neighbors and update node embeddings. GCN was later extended into multiple variations, such as signed Graph Convolutional Networks [45], Graph Wavelet Neural Network [46], and High-order Graph Convolutional Networks [47]. Graph convolutional networks alleviate the input bias problem by combining the global and local structural information of the network through graph convolution. However, they usually compute on the network as a matrix, and the dynamic updates of local nodes cannot spread to the global network within a single layer. Moreover, the number of layers in a graph convolutional network is usually limited [48].

6.1 Network Analysis Applying Spreading Activation Theory

Spreading-activation theories have been applied to network analysis from various perspectives, such as trust inference, churn prediction, word-of-mouth analysis, recommendation, etc.

Ziegler and Lausen [22] believed that, compared with investigating the interactions among entities, knowing about the entities' credibility is equally important. They proposed Appleseed, a spreading-activation based model for trust computing in networks. They built local group trust metrics and evaluated the credibility of entities through a trust propagation method which borrowed ideas from spreading-activation theories. Based on this work, Ziegler [49] studied the "distrust propagation" problem under the spreading-activation framework.

Dasgupta et al. [23] found that, in telecom business intelligence, a critical factor that influences churn is the choices of friends. They proposed a spreading-activation based churn prediction method to predict user behavior according to the user's neighboring nodes in a social network. Differently, Kusuma et al. [50] and Kim et al. [51] used spreading-activation-based methods to evaluate the attributes of social ties and predict churners according to these attributes. Existing studies [52] showed that churn prediction models usually perform better when combining non-relational and relational classifiers enhanced by the spreading-activation mechanism.

Yang et al. [24] proposed a spreading-activation based viral marketing scheme. It considered both the locations and the number of initial spreaders during viral marketing modeling, which proved to be effective. Besides, Wang et al. [25] applied the spreading-activation mechanism to recommendation, and Große-Bölting et al. [53] applied spreading-activation theories to analyzing user preferences.

7 Conclusion

The main contributions of this paper are three-fold. First, we involve a human cognitive mechanism in network representation learning and prove its effectiveness. Second, we propose a new network representation learning method, spread-gram, which leverages spreading activation to overcome the limitations of existing methods in integrating global and local structure. Third, we design spread-gram model learning methods for both homogeneous and heterogeneous networks. We conducted comprehensive experiments to evaluate the performance of spread-gram, and the results show its effectiveness, efficiency and applicability to a wide range of real-world networks. We found a significant advantage of spread-gram in learning and representing the hierarchical structure of networks.

Future improvements of this work could proceed from two perspectives. For one thing, analyzing weighted networks is an important issue, so how to deal with weighted networks under the spread-gram framework should be considered. For another, although spread-gram is developed for general networks, we should still identify the scenarios where the method is particularly suitable, such as information propagation networks, human interaction networks, etc.

References

  • [1] D. Easley, J. Kleinberg, et al., Networks, crowds, and markets, vol. 8. Cambridge university press Cambridge, 2010.
  • [2] P. W. Battaglia, J. B. Hamrick, V. Bapst, A. Sanchez-Gonzalez, V. Zambaldi, M. Malinowski, A. Tacchetti, D. Raposo, A. Santoro, R. Faulkner, et al., “Relational inductive biases, deep learning, and graph networks,” arXiv preprint arXiv:1806.01261, 2018.
  • [3] J. Tang, M. Qu, M. Wang, M. Zhang, J. Yan, and Q. Mei, “Line: Large-scale information network embedding,” in Proceedings of the 24th international conference on world wide web, pp. 1067–1077, International World Wide Web Conferences Steering Committee, 2015.
  • [4] Y. Bengio, A. Courville, and P. Vincent, “Representation learning: A review and new perspectives,” IEEE transactions on pattern analysis and machine intelligence, vol. 35, no. 8, pp. 1798–1828, 2013.
  • [5] P. Sarkar and A. W. Moore, “Dynamic social network analysis using latent space models,” in Advances in Neural Information Processing Systems, pp. 1145–1152, 2006.
  • [6] S. Yan, D. Xu, B. Zhang, H.-J. Zhang, Q. Yang, and S. Lin, “Graph embedding and extensions: A general framework for dimensionality reduction,” IEEE Transactions on Pattern Analysis & Machine Intelligence, no. 1, pp. 40–51, 2007.
  • [7] B. Perozzi, R. Al-Rfou, and S. Skiena, “Deepwalk: Online learning of social representations,” in Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 701–710, ACM, 2014.
  • [8] A. Grover and J. Leskovec, “node2vec: Scalable feature learning for networks,” in Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 855–864, ACM, 2016.
  • [9] Y. Dong, N. V. Chawla, and A. Swami, “metapath2vec: Scalable representation learning for heterogeneous networks,” in Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining, pp. 135–144, ACM, 2017.
  • [10] S. Ivanov and E. Burnaev, “Anonymous walk embeddings,” arXiv preprint arXiv:1805.11921, 2018.
  • [11] S. Cao, W. Lu, and Q. Xu, “Deep neural networks for learning graph representations,” in Thirtieth AAAI Conference on Artificial Intelligence, 2016.
  • [12] Z. Liu, V. W. Zheng, Z. Zhao, F. Zhu, K. C.-C. Chang, M. Wu, and J. Ying, “Semantic proximity search on heterogeneous graph by proximity embedding,” in Thirty-First AAAI Conference on Artificial Intelligence, 2017.
  • [13] A. R. Benson, D. F. Gleich, and J. Leskovec, “Higher-order organization of complex networks,” Science, vol. 353, no. 6295, pp. 163–166, 2016.
  • [14] J. R. Anderson, “Acquisition of cognitive skill.,” Psychological review, vol. 89, no. 4, p. 369, 1982.
  • [15] T. L. Griffiths, N. Chater, C. Kemp, A. Perfors, and J. B. Tenenbaum, “Probabilistic models of cognition: Exploring representations and inductive biases,” Trends in cognitive sciences, vol. 14, no. 8, pp. 357–364, 2010.
  • [16] T. R. Jonker, H. Dimsdale-Zucker, M. Ritchey, A. Clarke, and C. Ranganath, “Neural reactivation in parietal cortex enhances memory for episodically linked information,” Proceedings of the National Academy of Sciences, vol. 115, no. 43, pp. 11084–11089, 2018.
  • [17] A. M. Collins and E. F. Loftus, “A spreading-activation theory of semantic processing.,” Psychological review, vol. 82, no. 6, p. 407, 1975.
  • [18] F. Crestani, “Application of spreading activation techniques in information retrieval,” Artificial Intelligence Review, vol. 11, no. 6, pp. 453–482, 1997.
  • [19] S. E. Preece, “A spreading activation network model for information retrieval.,” 1982.
  • [20] P. Shoval, “Expert/consultation system for a retrieval data-base with semantic network of concepts,” in ACM SIGIR Forum, vol. 16, pp. 145–149, ACM, 1981.
  • [21] P. R. Cohen and R. Kjeldsen, “Information retrieval by constrained spreading activation in semantic networks,” Information processing & management, vol. 23, no. 4, pp. 255–268, 1987.
  • [22] C.-N. Ziegler and G. Lausen, “Spreading activation models for trust propagation,” in IEEE International Conference on e-Technology, e-Commerce and e-Service, 2004. EEE’04. 2004, pp. 83–97, IEEE, 2004.
  • [23] K. Dasgupta, R. Singh, B. Viswanathan, D. Chakraborty, S. Mukherjea, A. A. Nanavati, and A. Joshi, “Social ties and their relevance to churn in mobile telecom networks,” in Proceedings of the 11th international conference on Extending database technology: Advances in database technology, pp. 668–677, ACM, 2008.
  • [24] J. Yang, C. Yao, W. Ma, and G. Chen, “A study of the spreading scheme for viral marketing based on a complex network model,” Physica A: Statistical Mechanics and its Applications, vol. 389, no. 4, pp. 859–870, 2010.
  • [25] S. Wang, D. Lo, B. Vasilescu, and A. Serebrenik, “Entagrec++: An enhanced tag recommendation system for software information sites,” Empirical Software Engineering, vol. 23, no. 2, pp. 800–832, 2018.
  • [26] J. R. Anderson, “A spreading activation theory of memory,” Journal of verbal learning and verbal behavior, vol. 22, no. 3, pp. 261–295, 1983.
  • [27] J. Han, “Mining heterogeneous information networks: the next frontier,” in Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 2–3, ACM, 2012.
  • [28] M. Belkin and P. Niyogi, “Laplacian eigenmaps and spectral techniques for embedding and clustering,” in Advances in neural information processing systems, pp. 585–591, 2002.
  • [29] D. Wang, P. Cui, and W. Zhu, “Structural deep network embedding,” in Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 1225–1234, ACM, 2016.
  • [30] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean, “Distributed representations of words and phrases and their compositionality,” in Advances in neural information processing systems, pp. 3111–3119, 2013.
  • [31] J. Tang, M. Qu, and Q. Mei, “Pte: Predictive text embedding through large-scale heterogeneous text networks,” in Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1165–1174, ACM, 2015.
  • [32] T. N. Kipf and M. Welling, “Semi-supervised classification with graph convolutional networks,” arXiv preprint arXiv:1609.02907, 2016.
  • [33] A. Bordes, N. Usunier, A. Garcia-Duran, J. Weston, and O. Yakhnenko, “Translating embeddings for modeling multi-relational data,” in Advances in neural information processing systems, pp. 2787–2795, 2013.
  • [34] R. West and J. Leskovec, “Human wayfinding in information networks,” in Proceedings of the 21st international conference on World Wide Web, pp. 619–628, ACM, 2012.
  • [35] J. Leskovec, L. A. Adamic, and B. A. Huberman, “The dynamics of viral marketing,” ACM Transactions on the Web (TWEB), vol. 1, no. 1, p. 5, 2007.
  • [36] L. v. d. Maaten and G. Hinton, “Visualizing data using t-sne,” Journal of machine learning research, vol. 9, no. Nov, pp. 2579–2605, 2008.
  • [37] P. Goyal and E. Ferrara, “Graph embedding techniques, applications, and performance: A survey,” Knowledge-Based Systems, vol. 151, pp. 78–94, 2018.
  • [38] S. T. Roweis and L. K. Saul, “Nonlinear dimensionality reduction by locally linear embedding,” science, vol. 290, no. 5500, pp. 2323–2326, 2000.
  • [39] J. B. Tenenbaum, V. De Silva, and J. C. Langford, “A global geometric framework for nonlinear dimensionality reduction,” science, vol. 290, no. 5500, pp. 2319–2323, 2000.
  • [40] A. Ahmed, N. Shervashidze, S. Narayanamurthy, V. Josifovski, and A. J. Smola, “Distributed large-scale natural graph factorization,” in Proceedings of the 22nd international conference on World Wide Web, pp. 37–48, ACM, 2013.
  • [41] L. Tang and H. Liu, “Scalable learning of collective behavior based on sparse social dimensions,” in Proceedings of the 18th ACM conference on Information and knowledge management, pp. 1107–1116, ACM, 2009.
  • [42] M. Ou, P. Cui, J. Pei, Z. Zhang, and W. Zhu, “Asymmetric transitivity preserving graph embedding,” in Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 1105–1114, ACM, 2016.
  • [43] T. Mikolov, K. Chen, G. Corrado, and J. Dean, “Efficient estimation of word representations in vector space,” arXiv preprint arXiv:1301.3781, 2013.
  • [44] J. Bai, L. Li, and D. Zeng, “Hiwalk: Learning node embeddings from heterogeneous networks,” Information Systems, vol. 81, pp. 82–91, 2019.
  • [45] T. Derr, Y. Ma, and J. Tang, “Signed graph convolutional networks,” in 2018 IEEE International Conference on Data Mining (ICDM), pp. 929–934, IEEE, 2018.
  • [46] B. Xu, H. Shen, Q. Cao, Y. Qiu, and X. Cheng, “Graph wavelet neural network,” arXiv preprint arXiv:1904.07785, 2019.
  • [47] J. B. Lee, R. A. Rossi, X. Kong, S. Kim, E. Koh, and A. Rao, “Higher-order graph convolutional networks,” arXiv preprint arXiv:1809.07697, 2018.
  • [48] Q. Li, Z. Han, and X.-M. Wu, “Deeper insights into graph convolutional networks for semi-supervised learning,” in Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
  • [49] C. N. Ziegler and J. Golbeck, “Models for trust inference in social networks,” Intelligent Systems Reference Library, vol. 85, pp. 53–89, 2015.
  • [50] P. D. Kusuma, D. Radosavljevik, F. W. Takes, and P. van der Putten, “Combining customer attribute and social network mining for prepaid mobile churn prediction,” in Proc. the 23rd Annual Belgian Dutch Conference on Machine Learning (BENELEARN), pp. 50–58, 2013.
  • [51] K. Kim, C.-H. Jun, and J. Lee, “Improved churn prediction in telecommunication industry by analyzing a large network,” Expert Systems with Applications, vol. 41, no. 15, pp. 6575–6584, 2014.
  • [52] M. Óskarsdóttir, C. Bravo, W. Verbeke, C. Sarraute, B. Baesens, and J. Vanthienen, “A comparative study of social network classifiers for predicting churn in the telecommunication industry,” in Proceedings of the 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, pp. 1151–1158, IEEE Press, 2016.
  • [53] G. Große-Bölting, C. Nishioka, and A. Scherp, “Generic process for extracting user profiles from social media using hierarchical knowledge bases,” in Proceedings of the 2015 IEEE 9th International Conference on Semantic Computing (IEEE ICSC 2015), pp. 197–200, IEEE, 2015.