Representation Learning for Attributed Multiplex Heterogeneous Network


Yukuo Cen, Xu Zou, Jianwei Zhang, Hongxia Yang, Jingren Zhou, Jie Tang
Department of Computer Science and Technology, Tsinghua University; DAMO Academy, Alibaba Group
{cyk18, zoux18}@mails.tsinghua.edu.cn, {zhangjianwei.zjw, yang.yhx, jingren.zhou}@alibaba-inc.com, jietang@tsinghua.edu.cn
Abstract.
(Hongxia Yang and Jie Tang are the corresponding authors.)

Network embedding (or graph embedding) has been widely used in many real-world applications. However, existing methods mainly focus on networks with single-typed nodes/edges and cannot scale well to handle large networks. Many real-world networks consist of billions of nodes and edges of multiple types, and each node is associated with different attributes. In this paper, we formalize the problem of embedding learning for the Attributed Multiplex Heterogeneous Network and propose a unified framework to address this problem. The framework supports both transductive and inductive learning. We also give a theoretical analysis of the proposed framework, showing its connection with previous works and proving its better expressiveness. We conduct systematic evaluations for the proposed framework on four different genres of challenging datasets: Amazon, YouTube, Twitter, and Alibaba (code is available at https://github.com/cenyk1230/GATNE). Experimental results demonstrate that with the learned embeddings from the proposed framework, we can achieve statistically significant improvements (e.g., 5.99-28.23% lift by F1 scores; $p < 0.01$, $t$-test) over previous state-of-the-art methods for link prediction. The framework has also been successfully deployed on the recommendation system of a worldwide leading e-commerce company, Alibaba Group. Results of the offline A/B tests on product recommendation further confirm the effectiveness and efficiency of the framework in practice.

Keywords: Network embedding; Multiplex network; Heterogeneous network

1. Introduction

Figure 1. The left illustrates an example of an attributed multiplex heterogeneous network. Users in the left part of the figure are associated with attributes, including gender, age, and location. Similarly, items in the left part of the figure include attributes such as price and brand. The edge types between users and items are from four interactions, including click, add-to-preference, add-to-cart and conversion. The three subfigures in the middle represent three different ways of setting up the graphs, including HON, MHEN, and AMHEN from the bottom to the top. The right part shows the performance improvement of the proposed models over DeepWalk on the Alibaba dataset. As can be seen, GATNE-I achieves a +28.23% performance lift compared to DeepWalk.

Network embedding (Cui et al., 2018), or network representation learning, is a promising method to project nodes in a network to a low-dimensional continuous space while preserving network structure and inherent properties. It has attracted tremendous attention recently due to significant progress in downstream network learning tasks such as node classification (Bhagat et al., 2011), link prediction (Taskar et al., 2004), and community detection (Fortunato, 2010). DeepWalk (Perozzi et al., 2014), LINE (Tang et al., 2015b), and node2vec (Grover and Leskovec, 2016) are pioneering works that introduce deep learning techniques into network analysis to learn node embeddings. NetMF (Qiu et al., 2018) gives a theoretical analysis of the equivalence among different network embedding algorithms, and NetSMF (Qiu et al., 2019) later gives a scalable solution via sparsification. Nevertheless, these methods were designed to handle only homogeneous networks with single-typed nodes and edges. More recently, PTE (Tang et al., 2015a), metapath2vec (Dong et al., 2017), and HERec (Shi et al., 2018b) have been proposed for heterogeneous networks. However, real-world network-structured applications, such as e-commerce, are much more complicated, comprising not only multi-typed nodes and/or edges but also a rich set of attributes. Due to its significant importance and challenging requirements, there have been tremendous attempts in the literature to investigate embedding learning for complex networks. Depending on the network topology (homogeneous or heterogeneous) and attributed property (with or without attributes), we categorize six different types of networks and summarize the representative developments for each in Table 1. These six categories are the HOmogeneous Network (HON), Attributed HOmogeneous Network (AHON), HEterogeneous Network (HEN), Attributed HEterogeneous Network (AHEN), Multiplex HEterogeneous Network (MHEN), and Attributed Multiplex HEterogeneous Network (AMHEN). As can be seen, among them the AMHEN has been least studied.

In this paper, we focus on embedding learning for AMHENs, where different types of nodes can be linked by multiple types of edges and each node is associated with a set of attributes. This setting is common in many online applications. For example, in the four datasets that we work with, 20.3% (Twitter), 21.6% (YouTube), 15.1% (Amazon), and 16.3% (Alibaba) of the linked node pairs have more than one type of edge. For instance, in an e-commerce system, users may have several types of interactions with items, such as click, conversion, add-to-cart, and add-to-preference. Figure 1 illustrates such an example. Obviously, "users" and "items" have intrinsically different properties and shall not be treated equally. Moreover, different user-item interactions imply different levels of interest and should be treated differently. Otherwise, the system cannot precisely capture the user's behavioral patterns and preferences and would be insufficient for practical use.

Beyond the heterogeneity and multiplicity themselves, dealing with AMHENs in practice poses several unique challenges:

  • Multiplex Edges. Each node pair may have multiple different types of relationships. It is important to be able to borrow strengths from different relationships and learn unified embeddings.

  • Partial Observations. Real networked data is usually partially observed. For example, a long-tailed customer may present only a few interactions with some products. Most existing network embedding methods focus on transductive settings and cannot handle the long-tailed or cold-start problems.

  • Scalability. Real networks usually have billions of nodes and tens or hundreds of billions of edges  (Wang et al., 2018). It is important to develop learning algorithms that can scale well to large networks.

Network Type                             Method                                 Node Type  Edge Type  Attribute
HOmogeneous Network (HON)                DeepWalk (Perozzi et al., 2014)        Single     Single     /
                                         LINE (Tang et al., 2015b)
                                         node2vec (Grover and Leskovec, 2016)
                                         NetMF (Qiu et al., 2018)
                                         NetSMF (Qiu et al., 2019)
Attributed HOmogeneous Network (AHON)    TADW (Yang et al., 2015)               Single     Single     Attributed
                                         LANE (Huang et al., 2017b)
                                         AANE (Huang et al., 2017a)
                                         SNE (Liao et al., 2018)
                                         DANE (Gao and Huang, 2018)
                                         ANRL (Zhang et al., 2018b)
HEterogeneous Network (HEN)              PTE (Tang et al., 2015a)               Multi      Single     /
                                         metapath2vec (Dong et al., 2017)
                                         HERec (Shi et al., 2018b)
Attributed HEN (AHEN)                    HNE (Chang et al., 2015)               Multi      Single     Attributed
Multiplex HEterogeneous Network (MHEN)   PMNE (Liu et al., 2017)                Single     Multi      /
                                         MVE (Qu et al., 2017)
                                         MNE (Zhang et al., 2018a)
                                         mvn2vec (Shi et al., 2018a)
                                         GATNE-T                                Multi      Multi      /
Attributed MHEN (AMHEN)                  GATNE-I                                Multi      Multi      Attributed
Table 1. The network types handled by different methods.

To address the above challenges, we propose a novel approach that captures both the rich attributed information and the multiplex topological structures from different node types, namely General Attributed Multiplex HeTerogeneous Network Embedding, abbreviated as GATNE. The key features of GATNE are the following:

  • We formally define the problem of attributed multiplex heterogeneous network embedding, which provides a more general representation for real-world networks.

  • GATNE supports both transductive and inductive embedding learning for attributed multiplex heterogeneous networks. We also give a theoretical analysis to prove that our transductive model is a more general form of existing models (e.g., MNE (Zhang et al., 2018a)).

  • Efficient and scalable learning algorithms for GATNE have been developed. Our learning algorithms are able to handle hundreds of millions of nodes and billions of edges efficiently.

We conduct extensive experiments to evaluate the proposed models on four different genres of datasets: Amazon, YouTube, Twitter, and Alibaba. Experimental results show that the proposed framework achieves statistically significant improvements (5.99-28.23% lift by F1 scores on the Alibaba dataset; $p < 0.01$, $t$-test) over state-of-the-art methods. We have deployed the proposed model on Alibaba's distributed system and applied the method to Alibaba's recommendation engine. Offline A/B tests further confirm the effectiveness and efficiency of our proposed models.

2. Related Work

In this section, we review the related state of the art for network embedding, heterogeneous network embedding, multiplex heterogeneous network embedding, and attributed network embedding.

Network Embedding. Works in network embedding mainly fall into two categories, graph embedding (GE) and graph neural networks (GNN). Representative works for GE include DeepWalk (Perozzi et al., 2014), which generates a corpus on graphs by random walk and then trains a skip-gram model on the corpus. LINE (Tang et al., 2015b) learns node representations on large-scale networks while preserving both first-order and second-order proximities. node2vec (Grover and Leskovec, 2016) designs a biased random walk procedure to efficiently explore diverse neighborhoods. NetMF (Qiu et al., 2018) is a unified matrix factorization framework for theoretically understanding and improving DeepWalk and LINE. Among popular works in GNN, GCN (Kipf and Welling, 2017) incorporates neighbors' feature representations into the node feature representation using convolutional operations. GraphSAGE (Hamilton et al., 2017) provides an inductive approach to combine structural information with node features; it learns functional representations instead of direct embeddings for each node, which lets it work inductively on nodes unobserved during training.

Heterogeneous Network Embedding. Heterogeneous networks examine scenarios with nodes and/or edges of various types. Such networks are notoriously difficult to mine because of the bewildering combination of heterogeneous contents and structures. The creation of a multidimensional embedding of such data opens the door to the use of a wide variety of off-the-shelf mining techniques for multidimensional data. Despite the importance of this problem, limited efforts have been made on embedding scalable networks of dynamic and heterogeneous data. HNE (Chang et al., 2015) jointly considers the contents and topological structures in networks and maps different objects in heterogeneous networks to unified vector representations. PTE (Tang et al., 2015a) constructs a large-scale heterogeneous text network from labeled information and different levels of word co-occurrence information, which is then embedded into a low-dimensional space. metapath2vec (Dong et al., 2017) formalizes meta-path-based random walks to construct the heterogeneous neighborhood of a node and then leverages a heterogeneous skip-gram model to perform node embedding. HERec (Shi et al., 2018b) uses a meta-path-based random walk strategy to generate meaningful node sequences and learn network embeddings, which are first transformed by a set of fusion functions and subsequently integrated into an extended matrix factorization (MF) model.

Multiplex Heterogeneous Network Embedding. Existing approaches usually study networks with a single type of proximity between nodes, which captures only a single view of a network. In reality, however, there usually exist multiple types of proximities between nodes, yielding networks with multiple views; embedding such networks is known as multiplex network embedding. PMNE (Liu et al., 2017) proposes three methods to project a multiplex network into a continuous vector space. MVE (Qu et al., 2017) embeds networks with multiple views into a single collaborated embedding using an attention mechanism. MNE (Zhang et al., 2018a) uses one common embedding and several additional embeddings of each edge type for each node, which are jointly learned by a unified network embedding model. mvn2vec (Shi et al., 2018a) explores the feasibility of achieving better embedding results by simultaneously modeling preservation and collaboration to represent the semantic meanings of edges in different views.

Attributed Network Embedding. Attributed network embedding seeks low-dimensional vector representations for nodes in a network such that the original network topological structure and node attribute proximity are preserved in the representations. TADW (Yang et al., 2015) incorporates text features of vertices into network representation learning under the framework of matrix factorization. LANE (Huang et al., 2017b) smoothly incorporates label information into the attributed network embedding while preserving their correlations. AANE (Huang et al., 2017a) enables a joint learning process to be done in a distributed manner for accelerated attributed network embedding. SNE (Liao et al., 2018) proposes a generic framework for embedding social networks by capturing both the structural proximity and the attribute proximity. DANE (Gao and Huang, 2018) can capture high nonlinearity and preserve various proximities in both topological structure and node attributes. ANRL (Zhang et al., 2018b) uses a neighbor enhancement autoencoder to model the node attribute information and an attribute-aware skip-gram model based on the attribute encoder to capture the network structure.

3. Problem Definition

Notation                                  Description
$G = (\mathcal{V}, \mathcal{E}, \mathcal{A})$        the input network
$\mathcal{V}$ / $\mathcal{E}$                         the node/edge set of $G$
$\mathcal{O}$ / $\mathcal{R}$                         the node/edge type set of $G$
$\mathcal{A}$                                         the attribute set of $G$
$n$                                       the number of nodes
$m$                                       the number of edge types
$r$                                       an edge type
$d$                                       the dimension of base/overall embeddings
$s$                                       the dimension of edge embeddings
$v_i$                                     a node in the graph
$\mathcal{N}_{i,r}$                       the neighborhood set of node $v_i$ on edge type $r$
$b_i$ / $u_{i,r}$ / $c_i$ / $v_{i,r}$     the base/edge/context/overall embedding of a node
$h_z$ / $g_{z,r}$                         transformation functions in our inductive approach
$x_i$                                     the attribute of node $v_i$
Table 2. Notations.

Denote a network $G = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ is a set of nodes and $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is a set of edges between the nodes. Each edge $e_{ij} = (v_i, v_j)$ is associated with a weight $w_{ij} > 0$, indicating the strength of the relationship between $v_i$ and $v_j$. In practice, the network could be either directed or undirected. If $G$ is directed, we have $e_{ij} \not\equiv e_{ji}$ and $w_{ij} \not\equiv w_{ji}$; if $G$ is undirected, we have $e_{ij} \equiv e_{ji}$ and $w_{ij} \equiv w_{ji}$. Notations are summarized in Table 2.

Definition 1 (Heterogeneous Network).

A heterogeneous network (Sun et al., 2013; Chang et al., 2015) is a network $G = (\mathcal{V}, \mathcal{E})$ associated with a node type mapping function $\phi: \mathcal{V} \rightarrow \mathcal{O}$ and an edge type mapping function $\psi: \mathcal{E} \rightarrow \mathcal{R}$, where $\mathcal{O}$ and $\mathcal{R}$ represent the set of all node types and the set of all edge types, respectively. Each node belongs to a particular node type. Similarly, each edge is categorized into a specific edge type. If $|\mathcal{O}| + |\mathcal{R}| > 2$, the network is called heterogeneous; otherwise homogeneous.

Notice that in a heterogeneous network, an edge can no longer be denoted as $e_{ij}$, since there may be multiple types of edges between nodes $v_i$ and $v_j$. Under such situations, an edge is denoted as a triple $(v_i, v_j, r)$, where $r \in \mathcal{R}$ corresponds to a certain edge type.

Definition 2 (Attributed Network).

An attributed network (Huang et al., 2017b; Chang et al., 2015) is a network endowed with an attribute representation $\mathcal{A}$, i.e., $G = (\mathcal{V}, \mathcal{E}, \mathcal{A})$. Each node is associated with some types of feature vectors; $\mathcal{A} = \{x_i \mid v_i \in \mathcal{V}\}$ is the set of node features for all nodes, where $x_i$ is the associated node feature of node $v_i$.

Definition 3 (Attributed Multiplex Heterogeneous Network).

An attributed multiplex heterogeneous network is a network $G = (\mathcal{V}, \mathcal{E}, \mathcal{A})$ with $\mathcal{E} = \bigcup_{r \in \mathcal{R}} \mathcal{E}_r$, where $\mathcal{E}_r$ consists of all edges with edge type $r$, and $|\mathcal{R}| > 1$. We separate the network for every edge type (or view) $r$ as $G_r = (\mathcal{V}, \mathcal{E}_r, \mathcal{A})$.

An example of an AMHEN is illustrated in Figure 1, which contains two node types (user and item, each with different attributes) and four edge types. Given the above definitions, we can formally define our problem of representation learning on such networks.

Problem 1 (AMHEN Embedding).

Given an AMHEN $G = (\mathcal{V}, \mathcal{E}, \mathcal{A})$, the problem of AMHEN embedding is to give a unified low-dimensional representation of each node $v_i$ on every edge type $r$. Specifically, the goal is to learn a function $f_r: \mathcal{V} \rightarrow \mathbb{R}^d$ for every edge type $r$, where $d \ll |\mathcal{V}|$.
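To make these definitions concrete, the following minimal Python sketch (our illustration, with hypothetical class and method names rather than anything from the released code) stores an AMHEN as per-edge-type adjacency lists plus per-node types and attribute vectors:

```python
from collections import defaultdict

# A minimal in-memory AMHEN: nodes carry a type and an attribute vector,
# and each edge type r has its own adjacency structure G_r.
class AMHEN:
    def __init__(self):
        self.node_type = {}                               # node id -> node type
        self.attr = {}                                    # node id -> attribute vector
        self.adj = defaultdict(lambda: defaultdict(set))  # edge type -> node -> neighbors

    def add_node(self, v, ntype, x):
        self.node_type[v] = ntype
        self.attr[v] = x

    def add_edge(self, u, v, etype):
        # multiplex: the same (u, v) pair may appear under several edge types
        self.adj[etype][u].add(v)
        self.adj[etype][v].add(u)                         # undirected view

    def neighbors(self, v, etype):
        return self.adj[etype][v]                         # N_{i,r} in the paper's notation

g = AMHEN()
g.add_node('u1', 'user', [25, 1])                         # e.g., age, gender
g.add_node('i1', 'item', [99.0, 3])                       # e.g., price, brand id
g.add_edge('u1', 'i1', 'click')
g.add_edge('u1', 'i1', 'add-to-cart')                     # second edge type on the same pair
```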

4. Methodology

In this section, we first explain the proposed GATNE framework in the transductive context (Kipf and Welling, 2017); the resulting model is named GATNE-T. We also give a theoretical discussion about the connection of GATNE-T with recent models, e.g., MNE. To deal with the partial-observation problem, we further extend the model to the inductive context (Yang et al., 2016) and present a new inductive model named GATNE-I. For both models, we present efficient optimization algorithms.

Figure 2. Illustration of the GATNE-T and GATNE-I models. GATNE-T only uses network structure information, while GATNE-I considers both structure information and node attributes. The output layer of the heterogeneous skip-gram specifies one set of multinomial distributions for each node type in the neighborhood of the input node $v_i$; in this example, the sizes of the two output distributions are given by the numbers of $v_i$'s neighbors of each node type, respectively.

4.1. Transductive Model: GATNE-T

We begin with embedding learning for attributed multiplex heterogeneous networks in the transductive context and present our model GATNE-T. More specifically, in GATNE-T, we split the overall embedding of a certain node $v_i$ on each edge type $r$ into two parts: a base embedding and an edge embedding, as shown in Figure 2. The base embedding of node $v_i$ is shared between different edge types. The $k$-th level edge embedding $u_{i,r}^{(k)} \in \mathbb{R}^s$ ($1 \le k \le K$) of node $v_i$ on edge type $r$ is aggregated from the neighbors' edge embeddings as:

(1)  $u_{i,r}^{(k)} = \operatorname{aggregator}\big(\{u_{j,r}^{(k-1)},\ \forall v_j \in \mathcal{N}_{i,r}\}\big),$

where $\mathcal{N}_{i,r}$ is the set of neighbors of node $v_i$ on edge type $r$. The initial edge embedding $u_{i,r}^{(0)}$ for each node and each edge type is randomly initialized in our transductive model. Following GraphSAGE (Hamilton et al., 2017), the aggregator function can be a mean aggregator:

(2)  $u_{i,r}^{(k)} = \sigma\big(\hat{W}^{(k)} \cdot \operatorname{mean}(\{u_{j,r}^{(k-1)},\ \forall v_j \in \mathcal{N}_{i,r}\})\big),$

or another pooling aggregator, such as the max-pooling aggregator:

(3)  $u_{i,r}^{(k)} = \max\big(\{\sigma(\hat{W}^{(k)}_{\mathrm{pool}} u_{j,r}^{(k-1)} + \hat{b}^{(k)}_{\mathrm{pool}}),\ \forall v_j \in \mathcal{N}_{i,r}\}\big),$

where $\sigma$ is an activation function. We denote the $K$-th level edge embedding $u_{i,r}^{(K)}$ simply as the edge embedding $u_{i,r}$, and concatenate all the edge embeddings of node $v_i$ as the matrix $U_i$ with size $s \times m$, where $s$ is the dimension of edge embeddings and $m$ is the number of edge types:

(4)  $U_i = (u_{i,1}, u_{i,2}, \ldots, u_{i,m}).$

We use the self-attention mechanism (Lin et al., 2017) to compute the coefficients $a_{i,r} \in \mathbb{R}^m$ of the linear combination of the vectors in $U_i$ on edge type $r$:

(5)  $a_{i,r} = \operatorname{softmax}\big(w_r^\top \tanh(W_r U_i)\big)^\top,$

where $w_r$ and $W_r$ are trainable parameters for edge type $r$ with size $d_a$ and $d_a \times s$ respectively, and the superscript $\top$ denotes the transposition of a vector or a matrix. Thus, the overall embedding of node $v_i$ for edge type $r$ is:

(6)  $v_{i,r} = b_i + \alpha_r M_r^\top U_i a_{i,r},$

where $b_i$ is the base embedding of node $v_i$, $\alpha_r$ is a hyper-parameter denoting the importance of edge embeddings towards the overall embedding, and $M_r \in \mathbb{R}^{s \times d}$ is a trainable transformation matrix.
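As a sanity check of Equations (1)-(6), the following NumPy sketch computes the overall embedding $v_{i,r}$ of a single node for one edge type, using a one-level mean aggregator; the toy shapes and random values are illustrative assumptions, not the released implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, s, da, m = 8, 4, 5, 3          # base/overall dim, edge dim, attention dim, #edge types

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Edge embeddings of node v_i's neighbors on each edge type (toy data).
neighbor_edge_emb = [rng.normal(size=(4, s)) for _ in range(m)]   # 4 neighbors per type

# Equation (2): mean aggregation (one level; identity weights/activation for brevity).
u_i = np.stack([nb.mean(axis=0) for nb in neighbor_edge_emb])     # shape (m, s)
U_i = u_i.T                                                       # Equation (4): s x m

# Equation (5): self-attention coefficients for edge type r.
r = 1
W_r = rng.normal(size=(da, s))
w_r = rng.normal(size=da)
a_ir = softmax(w_r @ np.tanh(W_r @ U_i))                          # shape (m,)

# Equation (6): v_{i,r} = b_i + alpha_r * M_r^T U_i a_{i,r}.
b_i = rng.normal(size=d)
M_r = rng.normal(size=(s, d))
alpha_r = 1.0
v_ir = b_i + alpha_r * (M_r.T @ (U_i @ a_ir))
print(v_ir.shape)                                                 # (8,)
```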

Connection with Previous Work. We choose MNE (Zhang et al., 2018a), a recent representative work for MHEN, as the base model for multiplex heterogeneous networks to discuss the connection between our proposed model and previous work. In GATNE-T, we use the attention mechanism to capture the influential factors between different edge types. We theoretically prove that our transductive model is a more general form of MNE and improves its expressiveness. For MNE, the overall embedding of node $v_i$ on edge type $r$ is

(7)  $v_{i,r} = b_i + w^r X_r^\top u_{i,r},$

where $X_r$ is an edge-specific transformation matrix and $w^r$ an edge-specific weight. For GATNE-T, the overall embedding of node $v_i$ on edge type $r$ can be rewritten as:

(8)  $v_{i,r} = b_i + \alpha_r \sum_{j=1}^{m} a_{i,r}^{(j)} M_r^\top u_{i,j},$

where $a_{i,r}^{(j)}$ denotes the $j$-th element of $a_{i,r}$ and is computed as:

(9)  $a_{i,r}^{(j)} = \dfrac{\exp\big(w_r^\top \tanh(W_r u_{i,j})\big)}{\sum_{k=1}^{m} \exp\big(w_r^\top \tanh(W_r u_{i,k})\big)}.$
Theorem 4.1.

For any edge type $r$ and any $\epsilon > 0$, there exist parameters $w_r$ and $W_r$, such that for any MNE parameters $b_i$, $u_{i,r}$, and corresponding matrix $X_r$, with dimension $s$ for each $u_{i,r}$ and size $s \times d$ for $X_r$, there exist $b_i'$, $u_{i,r}'$, and a corresponding matrix $M_r$, with dimension $s + m$ for each $u_{i,r}'$ and size $(s + m) \times d$ for $M_r$, that satisfy $\|v_{i,r}' - v_{i,r}\| < \epsilon$, where $v_{i,r}$ and $v_{i,r}'$ denote the overall embeddings produced by MNE and GATNE-T, respectively.

Proof.

For any $i$ and $k$, let $u_{i,k}'$ be the concatenation of two vectors, where the first part is an $m$-dimensional indicator vector and the second part is the $s$-dimensional vector $u_{i,k}$. Let $u_{i,k,t}'$ denote the $t$-th dimension of $u_{i,k}'$; for the first $m$ dimensions, take $u_{i,k,t}' = \delta_{kt}$, where $\delta$ is the Kronecker delta function:

(10)  $\delta_{kt} = \begin{cases} 1, & k = t, \\ 0, & k \neq t. \end{cases}$

Let $W_r$ be all zero, except for the element on row $1$ and column $r$, which is set to a large enough number $P$; therefore the first row of $\tanh(W_r U_i)$ becomes a vector with its $r$-th dimension approximately $1$ and the other dimensions $0$. Take $w_r$ as a vector with its first dimension being $P$ and all other dimensions $0$; then $\operatorname{softmax}(w_r^\top \tanh(W_r U_i))$ approaches the one-hot vector $e_r$ as $P$ grows, so $a_{i,r} \approx e_r$ and $U_i a_{i,r} \approx u_{i,r}'$. Finally, we take $M_r$ to be $0$ in its first $m$ rows and $w^r X_r / \alpha_r$ in the following $s$ rows, and let $b_i' = b_i$; then $v_{i,r}' = b_i' + \alpha_r M_r^\top U_i a_{i,r} \approx b_i + w^r X_r^\top u_{i,r} = v_{i,r}$. ∎

Thus the parameter space of MNE is almost included in our model's parameter space, and we can conclude that GATNE-T is a generalization of MNE if the edge embeddings can be trained directly. In our model, however, the edge embeddings are generated by one or multiple layers of aggregation. We discuss the aggregation case next.

Effects of Aggregation. In the GATNE-T model, the edge embedding of a node is computed by aggregating the edge embeddings of its neighbors:

(11)  $u_{i,r}^{(k)} = \operatorname{aggregator}\big(\{u_{j,r}^{(k-1)},\ \forall v_j \in \mathcal{N}_{i,r}\}\big).$

The mean aggregator is basically a matrix multiplication,

(12)  $U_r^{(k)} = \sigma\big(\hat{W}^{(k)} U_r^{(k-1)} A_r\big),$

where $A_r$ is the neighborhood matrix on edge type $r$, and $U_r^{(k)}$ is the matrix of $k$-th level edge embeddings of all nodes in the graph on edge type $r$, whose $i$-th column corresponds to node $v_i$. $A_r$ can be a normalized adjacency matrix. The mean operator of Equation (11) can be weighted, and the neighborhood matrix can be sampled. Take $\hat{W}^{(k)} = I$, where $I$ is the identity matrix; then $U_r^{(k)} = \sigma(U_r^{(k-1)} A_r)$. If $A_r$ is of full rank, then for any $U_r^{(k)}$ there exists $U_r^{(k-1)}$ such that $U_r^{(k)} = \sigma(U_r^{(k-1)} A_r)$.

If the activation function $\sigma$ is invertible and every $A_r$ is of full rank, we can use the construction method described in Theorem 4.1 to construct the final-level edge embeddings, and the above argument to construct each lower level of edge embeddings subsequently. Therefore, our model is still a more general form that generalizes the MNE model when all the neighborhood matrices and the activation function are invertible at all levels of aggregation.
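The construction in Theorem 4.1 can be checked numerically. In the sketch below (toy values, our illustration), setting the single nonzero entry of $W_r$ and the first dimension of $w_r$ to a large $P$ drives the attention vector toward the one-hot $e_r$, so $U_i a_{i,r}$ recovers the designated column $u_{i,r}'$:

```python
import numpy as np

rng = np.random.default_rng(1)
m, s, P, r = 3, 4, 50.0, 1

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# u'_{i,k}: first m dimensions are the one-hot e_k, the rest is the MNE edge embedding.
U = np.stack([np.concatenate([np.eye(m)[k], rng.normal(size=s)]) for k in range(m)]).T

W_r = np.zeros((1, m + s)); W_r[0, r] = P           # all zero except row 1, column r
w_r = np.array([P])                                 # large first (and only) dimension

a_ir = softmax(w_r @ np.tanh(W_r @ U))              # approaches the one-hot e_r
print(np.round(a_ir, 4))                            # e.g., [0. 1. 0.]
print(np.allclose(U @ a_ir, U[:, r], atol=1e-3))    # U_i a_{i,r} ~ u'_{i,r}: True
```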

4.2. Inductive Model: GATNE-I

The limitation of GATNE-T is that it cannot handle unobserved data. However, in many real-world applications, the networked data is often partially observed (Yang et al., 2016). We therefore extend our model to the inductive context and present a new model named GATNE-I, also illustrated in Figure 2. Specifically, we define the base embedding as a parameterized function of node $v_i$'s attributes, $b_i = h_z(x_i)$, where $h_z$ is a transformation function and $z = \phi(v_i)$ is node $v_i$'s corresponding node type. Notice that nodes of different types may have attributes $x_i$ of different dimensions. The transformation function can take different forms, such as a multi-layer perceptron (Pal and Mitra, 1992). Similarly, the initial edge embedding of node $v_i$ on edge type $r$ is also parameterized as a function of the attributes, $u_{i,r}^{(0)} = g_{z,r}(x_i)$, where $g_{z,r}$ is a transformation function that maps the raw feature $x_i$ to an initial edge embedding of node $v_i$ on edge type $r$. Moreover, for the inductive model we add an extra attribute term to the overall embedding of node $v_i$ on edge type $r$:

(13)  $v_{i,r} = h_z(x_i) + \alpha_r M_r^\top U_i a_{i,r} + \beta_r D_z^\top x_i,$

where $\beta_r$ is a coefficient and $D_z$ is a feature transformation matrix for $v_i$'s corresponding node type $z = \phi(v_i)$. The difference between our transductive and inductive models mainly lies in how the base embeddings and initial edge embeddings are generated. In the transductive model, the base embedding $b_i$ and initial edge embeddings $u_{i,r}^{(0)}$ are trained directly for each node based on the network structure, so the model cannot handle nodes unseen during training. In the inductive model, instead of training $b_i$ and $u_{i,r}^{(0)}$ directly for each node, we train the transformation functions $h_z$ and $g_{z,r}$ that map raw features to $b_i$ and $u_{i,r}^{(0)}$, which works for nodes that did not appear during training as long as they have corresponding raw features.
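A minimal sketch of this parameterization, using the linear transformations mentioned in the appendix's function selection (all shapes and names here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
d, s, m = 8, 4, 3                        # dims as before; m edge types
attr_dim = {'user': 6, 'item': 10}       # node types may have different attribute sizes

# Per-node-type linear transformations (trainable in the real model).
H = {z: rng.normal(size=(d, dz)) for z, dz in attr_dim.items()}       # h_z
G = {(z, r): rng.normal(size=(s, dz))                                 # g_{z,r}
     for z, dz in attr_dim.items() for r in range(m)}
D = {z: rng.normal(size=(dz, d)) for z, dz in attr_dim.items()}       # attribute term

def base_embedding(x, z):
    return H[z] @ x                       # b_i = h_z(x_i)

def initial_edge_embedding(x, z, r):
    return G[(z, r)] @ x                  # u_{i,r}^{(0)} = g_{z,r}(x_i)

# Equation (13)'s extra attribute term for a previously unseen user:
x_new = rng.normal(size=attr_dim['user'])
b = base_embedding(x_new, 'user')
extra = 1.0 * (D['user'].T @ x_new)       # beta_r * D_z^T x_i
print(b.shape, extra.shape)               # (8,) (8,)
```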

Input: network $G = (\mathcal{V}, \mathcal{E}, \mathcal{A})$, embedding dimension $d$, edge embedding dimension $s$, learning rate $\eta$, number of negative samples $L$, coefficients $\alpha_r$, $\beta_r$.
Output: overall embeddings $v_{i,r}$ for all nodes on every edge type
1: Initialize all the model parameters $\theta$.
2: Generate random walks on each edge type $r$ as $\mathcal{P}_r$.
3: Generate training samples $\{(v_i, v_j, r)\}$ from the random walks $\mathcal{P}_r$ on each edge type $r$.
4: while not converged do
5:     foreach training sample $(v_i, v_j, r)$ do
6:         Calculate $v_{i,r}$ using Equation (6) or (13)
7:         Sample $L$ negative samples and calculate the objective function $E$ using Equation (17)
8:         Update model parameters $\theta \leftarrow \theta - \eta\, \partial E / \partial \theta$.
Algorithm 1: GATNE

4.3. Model Optimization

We now discuss how to learn the proposed transductive and inductive models. Following (Perozzi et al., 2014; Tang et al., 2015b; Grover and Leskovec, 2016), we use random walks to generate node sequences and then perform skip-gram (Mikolov et al., 2013a; Mikolov et al., 2013b) over the node sequences to learn embeddings. Since each view of the input network is heterogeneous, we use meta-path-based random walks (Dong et al., 2017). To be specific, given a view $G_r = (\mathcal{V}, \mathcal{E}_r, \mathcal{A})$ of the network and a meta-path scheme $\mathcal{T}: \mathcal{V}_1 \rightarrow \mathcal{V}_2 \rightarrow \cdots \rightarrow \mathcal{V}_t \rightarrow \cdots \rightarrow \mathcal{V}_l$, where $l$ is the length of the meta-path scheme, the transition probability at step $t$ is defined as follows:

(14)  $p(v_j \mid v_i, \mathcal{T}) = \begin{cases} \dfrac{1}{|\mathcal{N}_{i,r} \cap \mathcal{V}_{t+1}|}, & (v_i, v_j) \in \mathcal{E}_r,\ v_j \in \mathcal{V}_{t+1}, \\ 0, & (v_i, v_j) \in \mathcal{E}_r,\ v_j \notin \mathcal{V}_{t+1}, \\ 0, & (v_i, v_j) \notin \mathcal{E}_r, \end{cases}$

where $v_i \in \mathcal{V}_t$ and $\mathcal{N}_{i,r}$ denotes the neighborhood of node $v_i$ on edge type $r$. The flow of the walker is thus conditioned on the pre-defined meta-path scheme $\mathcal{T}$. The meta-path-based random walk strategy ensures that the semantic relationships between different types of nodes can be properly incorporated into the skip-gram model (Dong et al., 2017). Supposing a random walk of length $l$ on edge type $r$ follows a path $P = (v_{p_1}, v_{p_2}, \ldots, v_{p_l})$ such that $(v_{p_{t-1}}, v_{p_t}) \in \mathcal{E}_r$ for $t = 2, \ldots, l$, the context of $v_{p_t}$ is denoted as $C = \{v_{p_k} \mid |k - t| \le c,\ k \neq t\}$, where $c$ is the radius of the window size.
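A compact sketch of such a walker, assuming the AMHEN structure sketched in Section 3 (helper names are ours): at each step the walker moves uniformly among neighbors on view $r$ whose type matches the next entry of the scheme, and skip-gram training pairs are then read off the walks with a context window:

```python
import random

def meta_path_walk(g, start, etype, scheme, length):
    """Walk on view G_r, constrained by a meta-path scheme such as
    ['user', 'item'] repeated up to the walk length (Equation (14))."""
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        want = scheme[len(walk) % len(scheme)]          # next node type in the scheme
        candidates = [u for u in g.neighbors(cur, etype)
                      if g.node_type[u] == want]        # N_{i,r} intersected with V_{t+1}
        if not candidates:
            break
        walk.append(random.choice(candidates))          # uniform transition probability
    return walk

def skip_gram_pairs(walk, window):
    """Contexts C = {v_k : |k - t| <= window, k != t} for each position t."""
    pairs = []
    for t, v in enumerate(walk):
        for k in range(max(0, t - window), min(len(walk), t + window + 1)):
            if k != t:
                pairs.append((v, walk[k]))
    return pairs
```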

Thus, given a node $v_i$ together with its context $C$ in a path, our objective is to minimize the following negative log-likelihood:

(15)  $-\log P_\theta\big(\{v_j \mid v_j \in C\} \mid v_i\big) = \sum_{v_j \in C} -\log P_\theta(v_j \mid v_i),$

where $\theta$ denotes all the model parameters. Following metapath2vec (Dong et al., 2017), we use the heterogeneous softmax function, which is normalized with respect to the node type of node $v_j$. Specifically, the probability of $v_j$ given $v_i$ is defined as:

(16)  $P_\theta(v_j \mid v_i) = \dfrac{\exp(c_j^\top \cdot v_{i,r})}{\sum_{k \in \mathcal{V}_t} \exp(c_k^\top \cdot v_{i,r})},$

where $v_j \in \mathcal{V}_t$, $c_j$ is the context embedding of node $v_j$, and $v_{i,r}$ is the overall embedding of node $v_i$ for edge type $r$.

Finally, we use heterogeneous negative sampling to approximate the objective function $-\log P_\theta(v_j \mid v_i)$ for each node pair $(v_i, v_j)$:

(17)  $E = -\log \sigma(c_j^\top \cdot v_{i,r}) - \sum_{l=1}^{L} \mathbb{E}_{v_k \sim P_t(v)}\big[\log \sigma(-c_k^\top \cdot v_{i,r})\big],$

where $\sigma(x) = 1/(1 + e^{-x})$ is the sigmoid function, $L$ is the number of negative samples corresponding to a positive training sample, and $v_k$ is randomly drawn from a noise distribution $P_t(v)$ defined on node $v_j$'s corresponding node set $\mathcal{V}_t$.
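The per-pair objective can be sketched as follows (NumPy, our illustration); for simplicity the noise distribution $P_t(v)$ is approximated by uniform sampling over the candidate context embeddings, which is an assumption rather than the paper's exact choice:

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neg_sampling_loss(v_ir, c_pos, c_candidates, L=5):
    """E = -log sigma(c_j^T v_ir) - sum_l log sigma(-c_k^T v_ir), Equation (17).
    c_candidates: context embeddings of nodes sharing v_j's node type."""
    loss = -np.log(sigmoid(c_pos @ v_ir))
    neg_idx = rng.choice(len(c_candidates), size=L)   # draws from the noise distribution
    for k in neg_idx:
        loss -= np.log(sigmoid(-c_candidates[k] @ v_ir))
    return loss

d = 8
v_ir = rng.normal(size=d)                  # overall embedding of v_i on edge type r
c_pos = rng.normal(size=d)                 # context embedding of v_j
c_all = rng.normal(size=(100, d))          # context embeddings of v_j's node set
print(neg_sampling_loss(v_ir, c_pos, c_all))
```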

We summarize our algorithm in Algorithm 1. The time complexity of our random-walk-based algorithm is $O(nmdL)$, where $n$ is the number of nodes, $m$ is the number of edge types, $d$ is the overall embedding size, and $L$ is the number of negative samples per training sample ($L \ll n$). The memory complexity of our algorithm is $O(n(d + ms))$, with $s$ being the size of the edge embeddings.

5. Experiments

In this section, we first introduce the details of the four evaluation datasets and the competitor algorithms. We focus on the link prediction task to evaluate the performance of our proposed methods against other state-of-the-art methods. Parameter sensitivity, convergence, and scalability are then discussed. Finally, we report the results of offline A/B tests of our method on Alibaba's recommendation system.

Dataset # nodes # edges  # n-types  # e-types
Amazon 10,166 148,865 1 2
YouTube 2,000 1,310,617 1 5
Twitter 10,000 331,899 1 4
Alibaba-S 6,163 17,865 2 4
Alibaba 41,991,048 571,892,183 2 4
Table 3. Statistics of Datasets.

5.1. Datasets

We work on three public datasets and the Alibaba dataset for the link prediction task. The Amazon Product Dataset (http://jmcauley.ucsd.edu/data/amazon/) (McAuley et al., 2015; He and McAuley, 2016) includes product metadata and links between products; the YouTube dataset (http://socialcomputing.asu.edu/datasets/YouTube) (Tang et al., 2009b; Tang and Liu, 2009) consists of various types of interactions; the Twitter dataset (https://snap.stanford.edu/data/higgs-twitter.html) (De Domenico et al., 2013) also contains various types of links. The Alibaba dataset has two node types, user and item (or product), and includes four types of interactions between users and items. Since some of the baselines cannot scale to the whole graph, we evaluate performance on sampled datasets. The statistics of the sampled datasets (and the full Alibaba dataset) are summarized in Table 3. Notice that n-types and e-types in the table denote node types and edge types, respectively.

Amazon. In our experiments, we only use the product metadata of the Electronics category, including the product attributes and the co-viewing and co-purchasing links between products. The product attributes include the price, sales-rank, brand, and category.

YouTube. YouTube dataset is a multiplex bidirectional network dataset that consists of five types of interactions between 15,088 YouTube users. The types of edges include contact, shared friends, shared subscription, shared subscriber, and shared favorite videos between users.

Twitter. The Twitter dataset is about tweets related to the discovery of the Higgs boson between 1st and 7th July 2012. It is made up of four directional relationships among more than 450,000 Twitter users: re-tweet, reply, mention, and friendship/follower.

Alibaba. Alibaba dataset consists of four types of interactions including click, add-to-preference, add-to-cart, and conversion between two types of nodes, user and item. The sampled Alibaba dataset is denoted as Alibaba-S. We also provide the evaluation of the whole dataset on Alibaba’s distributed cloud platform; the full dataset is denoted as Alibaba.

5.2. Competitors

We categorize our competitors into the following four groups. The overall embedding size is set to 200 for all methods. The specific hyper-parameter settings for different methods are listed in the Appendix.

Network Embedding Methods. The compared methods include DeepWalk (Perozzi et al., 2014), LINE (Tang et al., 2015b), and node2vec (Grover and Leskovec, 2016). As these methods can only deal with HON, we feed separate graphs with different edge types to them and obtain different node embeddings for each separate graph.

Heterogeneous Network Embedding Methods. We focus on the representative work metapath2vec (Dong et al., 2017), which is designed to deal with node heterogeneity. When there is only one node type in the network, metapath2vec degrades to DeepWalk. For the Alibaba dataset, the meta-path schemes are set to $U$-$I$-$U$ and $I$-$U$-$I$, where $U$ and $I$ denote User and Item respectively.

Multiplex Heterogeneous Network Embedding Methods. The compared methods include PMNE (Liu et al., 2017), MVE (Qu et al., 2017), MNE (Zhang et al., 2018a). We denote the three methods of PMNE as PMNE(n), PMNE(r) and PMNE(c) respectively. MVE uses collaborated context embeddings and applies an attention mechanism to view-specific embedding. MNE uses one common embedding and several additional embeddings for each edge type, which are jointly learned by a unified network embedding model.

Attributed Network Embedding Methods. The compared method is ANRL (Zhang et al., 2018b). ANRL uses a neighbor enhancement auto-encoder to model the node attribute information and an attribute-aware skip-gram model based on the attribute encoder to capture the network structure.

Our Methods. Our proposed methods include GATNE-T and GATNE-I. GATNE-T considers the network structure and uses base embeddings and edge embeddings to capture the influential factors between different edge types. GATNE-I considers both the network structure and the node attributes, and learns inductive transformation functions instead of learning the base and edge embeddings for each node directly. For the Alibaba dataset, we use the same meta-path schemes as metapath2vec. For datasets without node attributes, we also generate node features for them. Due to the size of the Alibaba dataset, with more than 40 million nodes and 500 million edges, and the limited scalability of the other competitors, we only compare our GATNE models with DeepWalk, MVE, and MNE on it. Specific implementations can be found in the Appendix.

              -------- Amazon --------  -------- YouTube -------  -------- Twitter -------  ------- Alibaba-S ------
              ROC-AUC  PR-AUC     F1    ROC-AUC  PR-AUC     F1    ROC-AUC  PR-AUC     F1    ROC-AUC  PR-AUC     F1
DeepWalk        94.20   94.03   87.38     71.11   70.04   65.52     69.42   72.58   62.68     59.39   60.62   56.10
node2vec        94.47   94.30   87.88     71.21   70.32   65.36     69.90   73.04   63.12     62.26   63.40   58.49
LINE            81.45   74.97   76.35     64.24   63.25   62.35     62.29   60.88   58.18     53.97   54.65   52.85
metapath2vec    94.15   94.01   87.48     70.98   70.02   65.34     69.35   72.61   62.70     60.94   61.40   58.25
ANRL            71.68   70.30   67.72     75.93   73.21   70.65     70.04   67.16   64.69     58.17   55.94   56.22
PMNE(n)         95.59   95.48   89.37     65.06   63.59   60.85     69.48   72.66   62.88     62.23   63.35   58.74
PMNE(r)         88.38   88.56   79.67     70.61   69.82   65.39     62.91   67.85   56.13     55.29   57.49   53.65
PMNE(c)         93.55   93.46   86.42     68.63   68.22   63.54     67.04   70.23   60.84     51.57   51.78   51.44
MVE             92.98   93.05   87.80     70.39   70.10   65.10     72.62   73.47   67.04     60.24   60.51   57.08
MNE             90.28   91.74   83.25     82.30   82.18   75.03     91.37   91.65   84.32     62.79   63.82   58.74
GATNE-T         97.44   97.05   92.87     84.61   81.93   76.83     92.30   91.77   84.96     66.71   67.55   62.48
GATNE-I         96.25   94.77   91.36     84.47   82.32   76.83     92.04   91.95   84.38     70.87   71.65   65.54
Table 4. Performance comparison of different methods on four datasets.

5.3. Performance Analysis

Link prediction is a common task in both academia and industry. In academia, it is widely used to evaluate the quality of network embeddings obtained by different methods. In industry, link prediction is in high demand since real-world scenarios usually involve graphs with partial links, especially for e-commerce companies that rely on the links between their users and items for recommendation. We hide a set of edges/non-edges from the original graph and train on the remaining graph. Following (Kipf and Welling, 2016; Bojchevski and Günnemann, 2018), we create a validation/test set that contains 5%/10% randomly selected positive edges, respectively, together with an equal number of randomly selected negative edges for each edge type. The validation set is used for hyper-parameter tuning and early stopping; the test set is used to evaluate the performance and is only run once under the tuned hyper-parameters. We use the commonly adopted evaluation criteria of the area under the ROC curve (ROC-AUC) (Hanley and McNeil, 1982) and the area under the PR curve (PR-AUC) (Davis and Goadrich, 2006), and additionally report the F1 score. To avoid the thresholding effect (Tang et al., 2009a), we assume that the number of hidden edges in the test set is given (Tang et al., 2009a; Perozzi et al., 2014; Qiu et al., 2018). All metrics are uniformly averaged over the selected edge types.
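Concretely, the three metrics can be computed per edge type with the scikit-learn functions named in the appendix; in the sketch below, the F1 thresholding that labels the top-$k$ scored pairs as edges, with $k$ equal to the given number of hidden edges, is our reading of the thresholding-effect remark:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, precision_recall_curve, auc, f1_score

def link_prediction_metrics(y_true, scores):
    """y_true: 1 for hidden edges, 0 for sampled non-edges; scores: model scores."""
    roc = roc_auc_score(y_true, scores)
    precision, recall, _ = precision_recall_curve(y_true, scores)
    pr = auc(recall, precision)
    k = int(np.sum(y_true))                      # number of hidden edges is given
    threshold = np.sort(scores)[-k]              # label the top-k pairs as positive
    y_pred = (scores >= threshold).astype(int)
    return roc, pr, f1_score(y_true, y_pred)

y = np.array([1, 1, 0, 0, 1, 0])
s = np.array([0.9, 0.7, 0.4, 0.3, 0.6, 0.8])
print(link_prediction_metrics(y, s))
```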

ROC-AUC PR-AUC F1
DeepWalk 65.58 78.13 70.14
MVE 66.32 80.12 72.14
MNE 79.60 93.01 84.86
GATNE-T 81.02 93.39 86.65
GATNE-I 84.20 95.04 89.94
Table 5. The experimental results on Alibaba dataset.
Figure 3. (a) Convergence: the convergence curves of the GATNE-T and GATNE-I models on the Alibaba dataset; the inductive model converges faster and achieves better performance than the transductive model. (b) Scalability: the training time decreases as the number of workers increases, and GATNE-I takes less training time to converge than GATNE-T.
Figure 4. The performance of GATNE-T and GATNE-I on Alibaba-S when changing (a) the base embedding dimension and (b) the edge embedding dimension exponentially.

Quantitative Results. The experimental results on the three public datasets and Alibaba-S are shown in Table 4. GATNE outperforms all kinds of baselines across the various datasets. GATNE-T obtains better performance than GATNE-I on the Amazon dataset, as the node attributes there are limited; the node attributes of the Alibaba dataset are abundant, so GATNE-I obtains the best performance on it. ANRL is very sensitive to weak node attributes and obtains the worst result on the Amazon dataset, and the differing node attributes of users and items also limit its performance on the Alibaba-S dataset. On the YouTube and Twitter datasets, GATNE-I performs similarly to GATNE-T, as the node attributes of these two datasets are DeepWalk node embeddings generated from the network structure alone. Table 5 lists the experimental results on the Alibaba dataset. GATNE scales very well and achieves state-of-the-art performance on the Alibaba dataset, with a 2.18% performance lift in PR-AUC, 5.78% in ROC-AUC, and 5.99% in F1-score compared with the best results from previous state-of-the-art algorithms. GATNE-I performs better than GATNE-T on this large-scale dataset, suggesting that the inductive approach works better on large-scale attributed multiplex heterogeneous networks, which is usually the case in real-world situations.

Convergence Analysis. We analyze the convergence properties of our proposed models on the Alibaba dataset. The results, shown in Figure 3(a), demonstrate that GATNE-I converges faster and achieves better performance than GATNE-T on extremely large-scale real-world datasets.

Scalability Analysis. We investigate the scalability of GATNE, which has been deployed on multiple workers for optimization. Figure 3(b) shows the speedup w.r.t. the number of workers on the Alibaba dataset. GATNE is quite scalable on the distributed platform: the training time decreases significantly as the number of workers grows, and the inductive model finally takes less than 2 hours to converge with 150 distributed workers. We also find that GATNE-I's training speed increases almost linearly with the number of workers when there are fewer than 150 of them, while GATNE-T converges more slowly and its training speed approaches a limit once the number of workers exceeds 100. Besides its state-of-the-art performance, GATNE is thus also scalable enough to be adopted in practice.

Parameter Sensitivity. We investigate the sensitivity of different hyper-parameters in GATNE, including the overall embedding dimension $d$ and the edge embedding dimension $s$. Figure 4 illustrates the performance of GATNE when altering the base embedding dimension or the edge embedding dimension from the default setting ($d = 200$, $s = 10$). We conclude that the performance of GATNE is relatively stable within a large range of base/edge embedding dimensions, and drops only when the base/edge embedding dimension is either too small or too large.

5.4. Offline A/B Tests

We deploy our inductive model GATNE-I on Alibaba's distributed cloud platform for its recommendation system. The training dataset has about 100 million users and 10 million items, with 10 billion interactions between them per day. We use the model to generate embedding vectors for users and items. For every user, we use a K-nearest-neighbor (KNN) search with Euclidean distance to calculate the top-N items that the user is most likely to click. The experimental goal is to maximize the top-N hit rate. Under the framework of A/B tests, we conduct an offline test of GATNE-I, MNE, and DeepWalk. The results demonstrate that GATNE-I improves the hit rate by 3.26% and 24.26% compared to MNE and DeepWalk, respectively.
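The retrieval step can be sketched as follows (illustrative; the production system relies on Alibaba's distributed infrastructure rather than this in-memory version):

```python
import numpy as np

def top_n_items(user_emb, item_emb, n=10):
    """Return indices of the n nearest items (Euclidean) for each user."""
    # Pairwise squared distances via ||u||^2 - 2 u.v + ||v||^2.
    d2 = (np.sum(user_emb**2, axis=1, keepdims=True)
          - 2.0 * user_emb @ item_emb.T
          + np.sum(item_emb**2, axis=1))
    return np.argsort(d2, axis=1)[:, :n]

rng = np.random.default_rng(4)
users, items = rng.normal(size=(5, 200)), rng.normal(size=(1000, 200))
print(top_n_items(users, items, n=3))
```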

6. Conclusion

In this paper, we formalized the problem of attributed multiplex heterogeneous network embedding and proposed GATNE to solve it in both transductive and inductive settings. We split the overall node embedding of GATNE-I into three parts: base embedding, edge embedding, and attribute embedding. The base embedding and attribute embedding are shared among edges of different types, while the edge embedding is computed by aggregating neighborhood information with the self-attention mechanism. Our proposed methods achieve significantly better performance than previous state-of-the-art methods on link prediction tasks across multiple challenging datasets. The approach has been successfully deployed and evaluated on Alibaba's recommendation system with excellent scalability and effectiveness.

Acknowledgements.
We thank Qibin Chen, Ming Ding, Chang Zhou, and Xiaonan Fang for their comments. The work is supported by the NSFC for Distinguished Young Scholar (61825602), NSFC (61836013), and a research fund supported by Alibaba Group.

References

  • Bhagat et al. (2011) Smriti Bhagat, Graham Cormode, and S Muthukrishnan. 2011. Node classification in social networks. In Social network data analytics. Springer, 115–148.
  • Bojchevski and Günnemann (2018) Aleksandar Bojchevski and Stephan Günnemann. 2018. Deep Gaussian Embedding of Graphs: Unsupervised Inductive Learning via Ranking. In ICLR’18.
  • Chang et al. (2015) Shiyu Chang, Wei Han, Jiliang Tang, Guo-Jun Qi, Charu C Aggarwal, and Thomas S Huang. 2015. Heterogeneous network embedding via deep architectures. In KDD’15. ACM, 119–128.
  • Cui et al. (2018) Peng Cui, Xiao Wang, Jian Pei, and Wenwu Zhu. 2018. A survey on network embedding. TKDE (2018).
  • Davis and Goadrich (2006) Jesse Davis and Mark Goadrich. 2006. The relationship between Precision-Recall and ROC curves. In ICML’06. ACM, 233–240.
  • De Domenico et al. (2013) Manlio De Domenico, Antonio Lima, Paul Mougel, and Mirco Musolesi. 2013. The anatomy of a scientific rumor. Scientific reports 3 (2013), 2980.
  • Dong et al. (2017) Yuxiao Dong, Nitesh V Chawla, and Ananthram Swami. 2017. metapath2vec: Scalable representation learning for heterogeneous networks. In KDD’17. ACM, 135–144.
  • Fortunato (2010) Santo Fortunato. 2010. Community detection in graphs. Physics reports 486, 3-5 (2010), 75–174.
  • Gao and Huang (2018) Hongchang Gao and Heng Huang. 2018. Deep Attributed Network Embedding.. In IJCAI’18. 3364–3370.
  • Grover and Leskovec (2016) Aditya Grover and Jure Leskovec. 2016. node2vec: Scalable feature learning for networks. In KDD’16. ACM, 855–864.
  • Hamilton et al. (2017) Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. In NIPS’17. 1024–1034.
  • Hanley and McNeil (1982) James A Hanley and Barbara J McNeil. 1982. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology 143, 1 (1982), 29–36.
  • He and McAuley (2016) Ruining He and Julian McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In WWW’16. 507–517.
  • Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation 9, 8 (1997), 1735–1780.
  • Huang et al. (2017a) Xiao Huang, Jundong Li, and Xia Hu. 2017a. Accelerated attributed network embedding. In SDM’17. SIAM, 633–641.
  • Huang et al. (2017b) Xiao Huang, Jundong Li, and Xia Hu. 2017b. Label informed attributed network embedding. In WSDM’17. ACM, 731–739.
  • Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014).
  • Kipf and Welling (2016) Thomas N Kipf and Max Welling. 2016. Variational graph auto-encoders. arXiv preprint arXiv:1611.07308 (2016).
  • Kipf and Welling (2017) Thomas N. Kipf and Max Welling. 2017. Semi-Supervised Classification with Graph Convolutional Networks. In ICLR’17.
  • Liao et al. (2018) Lizi Liao, Xiangnan He, Hanwang Zhang, and Tat-Seng Chua. 2018. Attributed social network embedding. TKDE 30, 12 (2018), 2257–2270.
  • Lin et al. (2017) Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. ICLR’17.
  • Liu et al. (2017) Weiyi Liu, Pin-Yu Chen, Sailung Yeung, Toyotaro Suzumura, and Lingli Chen. 2017. Principled multilayer network embedding. In ICDMW’17. IEEE, 134–141.
  • McAuley et al. (2015) Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton Van Den Hengel. 2015. Image-based recommendations on styles and substitutes. In SIGIR’15. ACM, 43–52.
  • Mikolov et al. (2013a) Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. ICLR’13.
  • Mikolov et al. (2013b) Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In NIPS’13. 3111–3119.
  • Pal and Mitra (1992) Sankar K Pal and Sushmita Mitra. 1992. Multilayer Perceptron, Fuzzy Sets, and Classification. IEEE Transactions on Neural Networks 3, 5 (1992), 683–697.
  • Perozzi et al. (2014) Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. Deepwalk: Online learning of social representations. In KDD’14. ACM, 701–710.
  • Qiu et al. (2019) Jiezhong Qiu, Yuxiao Dong, Hao Ma, Jian Li, Chi Wang, Kuansan Wang, and Jie Tang. 2019. NetSMF: Large-Scale Network Embedding as Sparse Matrix Factorization. In WWW’19.
  • Qiu et al. (2018) Jiezhong Qiu, Yuxiao Dong, Hao Ma, Jian Li, Kuansan Wang, and Jie Tang. 2018. Network embedding as matrix factorization: Unifying deepwalk, line, pte, and node2vec. In WSDM’18. ACM, 459–467.
  • Qu et al. (2017) Meng Qu, Jian Tang, Jingbo Shang, Xiang Ren, Ming Zhang, and Jiawei Han. 2017. An Attention-based Collaboration Framework for Multi-View Network Representation Learning. In CIKM’17. ACM, 1767–1776.
  • Shi et al. (2018b) Chuan Shi, Binbin Hu, Xin Zhao, and Philip Yu. 2018b. Heterogeneous Information Network Embedding for Recommendation. TKDE (2018).
  • Shi et al. (2018a) Yu Shi, Fangqiu Han, Xinran He, Carl Yang, Jie Luo, and Jiawei Han. 2018a. mvn2vec: Preservation and Collaboration in Multi-View Network Embedding. arXiv preprint arXiv:1801.06597 (2018).
  • Sun et al. (2013) Yizhou Sun, Brandon Norick, Jiawei Han, Xifeng Yan, Philip S Yu, and Xiao Yu. 2013. Pathselclus: Integrating meta-path selection with user-guided object clustering in heterogeneous information networks. TKDD 7, 3 (2013), 11.
  • Tang et al. (2015a) Jian Tang, Meng Qu, and Qiaozhu Mei. 2015a. Pte: Predictive text embedding through large-scale heterogeneous text networks. In KDD’15. ACM, 1165–1174.
  • Tang et al. (2015b) Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. 2015b. Line: Large-scale information network embedding. In WWW’15. 1067–1077.
  • Tang and Liu (2009) Lei Tang and Huan Liu. 2009. Uncovering cross-dimension group structures in multi-dimensional networks. In SDM workshop on Analysis of Dynamic Networks. ACM, 568–575.
  • Tang et al. (2009a) Lei Tang, Suju Rajan, and Vijay K Narayanan. 2009a. Large scale multi-label classification via metalabeler. In WWW’09. ACM, 211–220.
  • Tang et al. (2009b) Lei Tang, Xufei Wang, and Huan Liu. 2009b. Uncoverning groups via heterogeneous interaction analysis. In ICDM’09. IEEE, 503–512.
  • Taskar et al. (2004) Ben Taskar, Ming-Fai Wong, Pieter Abbeel, and Daphne Koller. 2004. Link prediction in relational data. In NIPS’04. 659–666.
  • Wang et al. (2018) Jizhe Wang, Pipei Huang, Huan Zhao, Zhibo Zhang, Binqiang Zhao, and Dik Lun Lee. 2018. Billion-scale Commodity Embedding for E-commerce Recommendation in Alibaba. KDD’18, 839–848.
  • Yang et al. (2015) Cheng Yang, Zhiyuan Liu, Deli Zhao, Maosong Sun, and Edward Y Chang. 2015. Network representation learning with rich text information.. In IJCAI’15. 2111–2117.
  • Yang et al. (2016) Zhilin Yang, William W Cohen, and Ruslan Salakhutdinov. 2016. Revisiting semi-supervised learning with graph embeddings. In ICML’16. 40–48.
  • Zhang et al. (2018a) Hongming Zhang, Liwei Qiu, Lingling Yi, and Yangqiu Song. 2018a. Scalable Multiplex Network Embedding. In IJCAI’18. 3082–3088.
  • Zhang et al. (2018b) Zhen Zhang, Hongxia Yang, Jiajun Bu, Sheng Zhou, Pinggang Yu, Jianwei Zhang, Martin Ester, and Can Wang. 2018b. ANRL: Attributed Network Representation Learning via Deep Neural Networks.. In IJCAI’18. 3155–3161.

Appendix A

In the appendix, we first give the implementation notes of our proposed models. The detailed descriptions of the datasets and the parameter configurations of all methods are then given. Finally, we discuss questions of fair comparison and our future work.

A.1. Implementation Notes

Running Environment. The experiments in this paper can be divided into two parts. One part is conducted on four datasets using a single Linux server with 4 Intel(R) Xeon(R) Platinum 8163 CPUs @ 2.50GHz, 512GB RAM, and 8 NVIDIA Tesla V100-SXM2-16GB GPUs. The code of our proposed models for this part is implemented with TensorFlow 1.12 (https://www.tensorflow.org/) in Python 3.6. The other part is conducted on the full Alibaba dataset using Alibaba's distributed cloud platform, which contains thousands of workers; every two workers share an NVIDIA Tesla P100 GPU with 16GB memory. Our proposed models are implemented with TensorFlow 1.4 in Python 2.7 for this part.

Implementation Details. Our code for the single Linux server can be split into three parts: random walk, model training, and evaluation. The random walk part is implemented with reference to the corresponding parts of DeepWalk (https://github.com/phanein/deepwalk) and metapath2vec (https://ericdongyx.github.io/metapath2vec/m2v.html). The training part of the model is implemented with reference to the word2vec part of the TensorFlow tutorials (https://www.tensorflow.org/tutorials/representation/word2vec). The evaluation part uses metric functions from scikit-learn (https://scikit-learn.org/stable/modules/classes.html#module-sklearn.metrics), including roc_auc_score, f1_score, precision_recall_curve, and auc. Our model parameters are updated and optimized by stochastic gradient descent with the Adam updating rule (Kingma and Ba, 2014). The distributed version of our proposed models is implemented following the coding rules of Alibaba's distributed cloud platform in order to maximize distribution efficiency. High-level APIs, such as tf.estimator and tf.data, are used to achieve higher utilization of computation resources on Alibaba's distributed cloud platform.

Function Selection. Many different aggregator functions in Equation (1), such as the mean aggregator (cf. Equation (2)) or the pooling aggregator (cf. Equation (3)), achieve similar performance in our experiments; the mean aggregator is the one used in the quantitative experiments reported for our model. We use linear transformation functions as the parameterized attribute functions $h_z$ and $g_{z,r}$ in Equation (13) of our inductive model GATNE-I.

Parameter Configuration. Our base/overall embedding dimension $d$ is set to 200 and the dimension $s$ of edge embeddings is set to 10. The number of walks for each node is set to 20 and the length of each walk to 10. The window size for generating node contexts is set to 5. The number of negative samples $L$ for each positive training sample is set to 5. The maximum number of epochs is set to 50, and our models early-stop if the ROC-AUC on the validation set does not improve within 1 training epoch. The coefficients $\alpha_r$ and $\beta_r$ are all set to 1 for every edge type $r$. For the Alibaba dataset, the node types include $U$ and $I$, representing User and Item respectively, and the meta-path schemes of our methods are set to $U$-$I$-$U$ and $I$-$U$-$I$. We use the default setting of the Adam optimizer in TensorFlow; the learning rate is set to 0.001. For offline A/B test in section 5.4, we use .
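For reference, these settings can be gathered into a single configuration object (a hypothetical dict mirroring the text above, not a file shipped with the code):

```python
config = {
    "base_embedding_dim": 200,     # d, also the overall embedding size
    "edge_embedding_dim": 10,      # s
    "walks_per_node": 20,
    "walk_length": 10,
    "window_size": 5,
    "negative_samples": 5,         # L
    "max_epochs": 50,
    "early_stopping_patience": 1,  # epochs without ROC-AUC improvement
    "alpha": 1.0,                  # alpha_r, for every edge type r
    "beta": 1.0,                   # beta_r
    "meta_path_schemes": [["U", "I", "U"], ["I", "U", "I"]],  # Alibaba dataset
    "optimizer": "adam",           # TensorFlow default settings
}
```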

Code and Dataset Releasing Details. The code of our proposed models for the single Linux server (based on TensorFlow 1.12), together with our partition of the three public datasets and the Alibaba-S dataset, is available.

A.2. Compared Methods

We give the detailed running configurations of all compared methods as follows. The embedding size is set to 200 for all methods. For random-walk-based methods, the number of walks for each node is set to 20 and the length of each walk to 10. The window size for generating node contexts is set to 5. The number of negative samples for each training pair is set to 5. The number of iterations for training the skip-gram model is set to 100. The code sources and other method-specific hyper-parameter settings are explained below.

A.2.1. Network Embedding Methods

  • DeepWalk (Perozzi et al., 2014). For the public and Alibaba-S datasets, we use the code from the corresponding author's GitHub (https://github.com/phanein/deepwalk). For the Alibaba dataset, we re-implemented DeepWalk on the Alibaba distributed cloud platform.

  • LINE (Tang et al., 2015b). The code of LINE is from the corresponding author's GitHub (https://github.com/tangjianpku/LINE). We use LINE(1st+2nd) as the overall embeddings; the embedding size is set to 100 for both the first-order and second-order embeddings. The number of samples is set to 1000 million.

  • node2vec (Grover and Leskovec, 2016). The code of node2vec is from the corresponding author's GitHub (https://github.com/aditya-grover/node2vec). node2vec adds two parameters to control the random walk process: the return parameter $p$ is set to 2 and the in-out parameter $q$ is set to 0.5 in our experiments.

A.2.2. Heterogeneous Network Embedding Methods

  • metapath2vec (Dong et al., 2017). The code provided by the corresponding author is only for specific datasets and does not directly generalize to other datasets, so we re-implement metapath2vec for networks with arbitrary node types in Python, based on the original C++ code (https://ericdongyx.github.io/metapath2vec/m2v.html). As the number of node types in each of the three public datasets is one, metapath2vec degrades to DeepWalk on them. For the Alibaba dataset, the node types include $U$ and $I$, representing User and Item respectively, and the meta-path schemes are set to $U$-$I$-$U$ and $I$-$U$-$I$.

A.2.3. Multiplex Heterogeneous Network Embedding Methods

  • PMNE (Liu et al., 2017). PMNE proposes three different methods to apply node2vec to multiplex networks. We denote their network aggregation, result aggregation, and layer co-analysis algorithms as PMNE(n), PMNE(r), and PMNE(c) respectively, in accordance with the notation of MNE (Zhang et al., 2018a). We use the code from MNE's GitHub (https://github.com/HKUST-KnowComp/MNE). The probability of traversing layers in PMNE(c) is set to 0.5.

  • MVE (Qu et al., 2017). MVE uses collaborated context embeddings and applies an attention mechanism to view-specific embeddings. The code of MVE was received from the corresponding author by email. The embedding dimension for each view is set to 200. The number of training samples for each epoch is set to 100 million and the number of epochs is set to 10. For the Alibaba dataset, we re-implemented this method on the Alibaba distributed cloud platform.

  • MNE (Zhang et al., 2018a). MNE uses one common embedding and several additional embeddings of each edge type for each node, which are jointly learned by a unified network embedding model. The additional embedding size for MNE is set to 10. We use the code released by the corresponding author on GitHub (https://github.com/HKUST-KnowComp/MNE). For the Alibaba dataset, we re-implemented it on the Alibaba distributed cloud platform.

A.2.4. Attributed Network Embedding Methods

  • ANRL (Zhang et al., 2018b). We use the code from Alibaba's GitHub (https://github.com/cszhangzhen/ANRL). As the YouTube and Twitter datasets do not have node attributes, we generate node attributes for them; specifically, we use the node embeddings (200 dimensions) of DeepWalk as the input node features for ANRL on these datasets. For the Alibaba-S and Amazon datasets, we use raw features as attributes.

Dataset # nodes # edges # n-types # e-types
Amazon 312,320 7,500,100 1 4
YouTube 15,088 13,628,895 1 5
Twitter 456,626 15,367,315 1 4
Table 6. Statistics of Original Datasets.

A.3. Datasets

Our experiments evaluate on five datasets: the sampled Amazon, YouTube, Twitter, and Alibaba-S datasets, and the full Alibaba dataset. Due to the limitation of memory and computation resources on a single Linux server, the four sampled datasets are subgraphs extracted from the original datasets for training and evaluation. Table 6 shows the statistics of the original public datasets.

  • Amazon is a dataset of product reviews and metadata from Amazon. In our experiments, we only use the product metadata, including the product attributes and the co-viewing and co-purchasing links between products. The node type set of Amazon is {product}, and the edge types denote that two products are co-viewed or co-bought by the same user, respectively. The products of Amazon are split into many categories, and the total number of products across all categories is so large that we use only the Electronics category for experimentation. The number of products in Electronics is still large for many algorithms; therefore, we extract a connected subgraph from the whole graph.

  • YouTube is a multi-dimensional bidirectional network dataset consisting of 5 types of interactions among 15,088 YouTube users. The types of edges include contact, shared friends, shared subscription, shared subscriber, and shared favorite videos between users. It is a multiplex network with $|\mathcal{V}| = 15{,}088$ and $|\mathcal{E}| = 13{,}628{,}895$.

  • Twitter is a dataset about tweets posted on Twitter about the discovery of the Higgs boson between 1st and 7th July 2012. It is made up of directional relationships among more than 450,000 Twitter users: re-tweet, reply, mention, and friendship/follower. It is a multiplex network with $|\mathcal{V}| = 456{,}626$ and $|\mathcal{E}| = 15{,}367{,}315$.

  • Alibaba consists of four types of interactions, including click, add-to-preference, add-to-cart, and conversion, between two types of nodes, user and item. The node type set of Alibaba is {user, item} and the size of the edge type set is 4. The whole graph of Alibaba is so large that we cannot evaluate the performance of different methods on it with a single machine, so we extract a subgraph from the whole graph for comparison across methods, denoted as Alibaba-S. We also provide the evaluation on the whole graph on Alibaba's distributed cloud platform; the full graph is denoted as Alibaba.

A.4. Discussion

In research on network embedding, many works use link prediction or node classification tasks to evaluate the quality of network embeddings. However, although there are many commonly used public datasets, like the Twitter or YouTube datasets, none of them provides a "standard" train/validation/test split for the different tasks. This leads to different results on the same dataset under different evaluation splits, so results from previous papers cannot be used directly, and researchers have to re-implement and run all baselines themselves, reducing the attention they can devote to improving their own models.

Here we appeal to researchers to provide standardized datasets that contain a standard separation of train, validation, and test sets as well as the full dataset. Researchers could then evaluate their methods in a standard environment, and results across papers could be compared directly. This would also help increase the reproducibility of research.

Future Work. Apart from the heterogeneity of networks, the dynamics of networks are also crucial to network representation learning. There are three ways to capture the dynamic information of networks. Firstly, we can add dynamic information into node attributes; for example, we can use methods like LSTM (Hochreiter and Schmidhuber, 1997) to capture the dynamic activities of users. Secondly, dynamic information, such as the timestamp of each interaction, can be treated as edge attributes. Thirdly, we may consider several snapshots of a network that represent its dynamic evolution. We leave representation learning for dynamic attributed multiplex heterogeneous networks as future work.
