Edge-Featured Graph Attention Network

Abstract

Many neural network architectures have been proposed to deal with learning tasks on graph-structured data. However, most of these models concentrate only on node features during the learning process. The edge features, which usually play a similarly important role as the nodes, are often ignored or simplified by these models. In this paper, we present edge-featured graph attention networks, namely EGATs, to extend the use of graph neural networks to tasks that learn on graphs with both node and edge features. These models can be regarded as extensions of graph attention networks (GATs). By reforming the model structure and the learning process, the new models can accept node and edge features as inputs, incorporate the edge information into feature representations, and iterate both node and edge features in a parallel but mutual way. The results demonstrate that our work is highly competitive with other node classification approaches and can be well applied to edge-featured graph learning tasks.

1 Introduction

In many real-world applications, data are best constructed as graphs for analysis and display. A graph is a natural structure whose nodes and edges can characterize the entities and the inner relationships among data. Recently, several works have defined neural networks on graphs[27, 26]. Kipf et al.[10] proposed graph convolutional networks, namely GCNs, based on spectral graph theory. Veličković et al.[21] presented graph attention networks (GATs), which aggregate features following a self-attention strategy. Though such graph neural networks have proven successful in some node classification tasks, they still have obvious shortcomings. One of these networks' major problems is that edge features are not incorporated into the models. In fact, most of the current state-of-the-art graph neural networks do not consider edge features at all.

However, edges and their features play an essential role in many real-world node classification tasks. For example, in a trading network, the node labels may be highly relevant to the transactions. In such a case, the information contained in the edges may contribute more to the classification accuracy than the node features. In fact, different graphs have different preferences for node and edge features, whereas existing GNNs ignore or sidestep this fact.

In this paper, we propose edge-featured graph attention networks (EGATs) to address the above challenges. This work can be regarded as an extension of GATs. To exploit the edge features effectively, we enhance the original attention mechanism so that the edge information becomes an important factor in attention-weight computation. Further, the structures and learning processes of traditional attention models are also redesigned in our work, so the models can accept both node and edge features and iterate them individually. The updating of edge features is necessary and should be node-equivalent, because an iterative consistency between nodes and edges should be kept during the learning. Besides, a multi-scale merge strategy, which concatenates features from different iterations, is also adopted in our work. All node and edge features are gathered in the final layer so that the model can learn, from various scales, the characteristics that benefit the classification.

To our best knowledge, we are the first to incorporate edges into GATs as entities equivalent to nodes, point out graphs' different preferences for features, and handle them spontaneously within the models. Our models can be applied to graphs with discrete and continuous features for both nodes and edges, which satisfies the demands of many real-world node classification tasks.

2 Related work

Graph neural networks (GNNs)[17] first extended neural network architectures to graphs, and many related works followed. Spectral approaches, established on spectral graph theory, are among the most important of these works. Bruna et al.[2] first defined convolution operations in the Fourier domain. However, the filters this model computes are not spatially localized, so Henaff et al.[8] improved it by generating spatially localized filters. Kipf et al.[10] proposed graph convolutional networks (GCNs), which further simplified the above methods by using a first-order approximation of the Chebyshev polynomials. Unlike GCNs, Veličković et al.[21] proposed graph attention networks (GATs) to dynamically aggregate node features. Numerous variants have been derived from this design. Wang et al.[22] introduced heterogeneous graph attention networks (HANs) to process various heterogeneous graphs. Ma et al.[13] put forward disentangled graph convolutional networks using a routing algorithm. Several other works also made efforts to learn graph representations. GraphSAGE[7] generates node embeddings by aggregating node features using several pre-defined aggregation operations. Inspired by RNNs such as LSTM[9] and GRU[4], gated graph neural networks (GGNNs)[11] were proposed. Furthermore, Xu et al.[24] explored jumping knowledge networks, in which layer aggregation is adopted to acquire multi-scale features.

However, all these graph neural networks share a common characteristic: they focus on node rather than edge features. Only very few works have tried to integrate edge features into GNN architectures. Schlichtkrull et al.[18] proposed an extension of GCNs named R-GCNs. Gong et al.[6] presented a framework that augments GCNs and GATs with edges. However, such approaches have notable limitations, which we highlight in the next section.

3 Motivation

The demand for processing edge-featured graphs is quite common in real-world tasks. For example, if there is a need to find users who may have illegal behaviors in a trading network, it is natural to use node classification approaches to pick them out. Evidently, whether a user is suspicious is highly relevant to the amounts they paid or received over time. In other words, the edge features are likely to have a more significant impact on classification than the node features in such a situation. However, traditional GNNs cannot handle these graphs in a direct, elegant, and reasonable way. One may wonder whether such graphs can be converted into a form that current models readily accept. Obviously, ignoring the edge features is unacceptable. Using pre-defined aggregation functions to integrate edge features into nodes may be a better solution. We do not deny that it may perform well on certain graphs, but it is not a panacea suitable for every condition, because the selection of the function is highly dependent on the graph's traits. It is more like feature engineering than a universal approach.

Only a few works exploit edge features in graph neural networks, and all of them have obvious limitations. Schlichtkrull et al.[18] proposed R-GCNs to model relational data. However, the models can accept graphs only when their edges are labeled, which means the edges cannot carry continuous attributes. Gong et al.[6] presented a framework that enhances GCNs[10] and GATs[21]. This framework can accept continuous edge attributes, whereas it merely regards them as weights between different node pairs, which is somewhat unreasonable in most cases. For example, consider a special graph constructed with identical features for all nodes but different features for the edges. If we treat the edge features as weights and the weights of each node sum to one, then no matter how many times the node features are updated, they will remain identical during the learning process.

The above phenomenon reveals a fact that has never been discussed in recent research: different graphs may have different preferences for node and edge features. For graphs in which edges have a great impact, it is infelicitous to treat edge features as weights or labels. However, all existing works have ignored this key fact. Our motivation is not only to integrate edge features into GATs but also to propose general models that handle such preferences spontaneously. To our best knowledge, we are the first to try to solve such problems. It should be emphasized that we do not aim to present a disparate model against state-of-the-art approaches. Since the attention mechanism has proved itself in lots of tasks, it is unnecessary to propose a completely new one. Our work is an extension of GATs[21], and all the improvements we made serve this motivation.

4 The proposed model

4.1 EGAT layer overview

A single EGAT layer contains two different blocks: a node attention block and an edge attention block. Each EGAT layer is designed in a symmetrical scheme; thus, the node and edge features can update themselves in a parallel and equivalent way. Figure 1 (a) gives an illustration of the EGAT layer.

Each EGAT layer accepts a set of node features, H = {h_1, h_2, ..., h_N}, h_i ∈ R^F, as well as a set of edge features, E = {e_1, e_2, ..., e_M}, e_j ∈ R^{F_e}, as inputs. N and M represent the number of nodes and edges, while F and F_e symbolize the number of their respective features. After processing, the layer will produce high-level outputs, which include a new set of node features, H' = {h'_1, h'_2, ..., h'_N}, h'_i ∈ R^{F'}, and a new set of edge features, E' = {e'_1, e'_2, ..., e'_M}, e'_j ∈ R^{F'_e}.

The output dimensions F' and F'_e may differ from each other and from the inputs F and F_e, since the linear transformations performed on the node and edge features are not the same. We use two learnable weight matrices, W_n ∈ R^{F'×F} and W_e ∈ R^{F'_e×F_e}, to achieve such transformations. For each node i and edge j, the transformed features can be computed by W_n h_i and W_e e_j, respectively. Then, both of them will be fed into the node attention block and the edge attention block, which individually produce the new sets of node and edge features. Moreover, the adjacency and mapping matrices of nodes and edges will also be injected into the two blocks for ancillary computation. For simplicity, we re-use some symbols, including H, E, h_i, and e_j: in the rest of the paper they refer to the features produced by these linear transformations.
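As a concrete illustration, the linear transformation step can be sketched in PyTorch (the framework our implementation is based on) as follows; the module and variable names are illustrative only, and the dimensions correspond to the F' = 8, F'_e = 4 configuration used later in the experiments:

import torch
import torch.nn as nn

# A minimal sketch of the per-layer linear transformations W_n and W_e.
class FeatureTransform(nn.Module):
    def __init__(self, f_node_in, f_edge_in, f_node_out, f_edge_out):
        super().__init__()
        self.w_node = nn.Linear(f_node_in, f_node_out, bias=False)  # W_n
        self.w_edge = nn.Linear(f_edge_in, f_edge_out, bias=False)  # W_e

    def forward(self, h, e):
        # h: [N, F] node features, e: [M, F_e] edge features
        return self.w_node(h), self.w_edge(e)

# Example: 4 nodes with 5 features each, 3 edges with 2 features each.
h, e = torch.randn(4, 5), torch.randn(3, 2)
h_t, e_t = FeatureTransform(5, 2, 8, 4)(h, e)   # shapes [4, 8] and [3, 4]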

Figure 1: (a) An illustration of one EGAT layer. It accepts H and E as inputs and produces two sets of new features. A_n and A_e are adjacency matrices, while M_n and M_e are mapping matrices for nodes and edges, respectively. The m_i are edge-integrated node features generated by the node attention block, which will only be used in the merge layer; (b) The architecture of EGATs. The model is constructed by several EGAT layers and a merge layer. The node and edge features generated from each iteration are concatenated in the merge layer to achieve a multi-scale feature fusion. For convenience, the adjacency and mapping matrices are not shown in this figure, nor is the multi-head attention.

4.2 Node attention block

The node attention block accepts H, a set of node features, and E, a set of edge features, and produces H', a new set of node features. In E, the edge features are ranked in a preset order, so it is hard to find the relations between edges and their adjacent nodes. Thus, a mapping transformation will first be applied to E in the block, to re-organize it into an adjacency-like form E_adj ∈ R^{N×N×F'_e}. Every element of E_adj can be represented as e_{ij}, where i and j denote the nodes on each end of an edge. The transformation from E to E_adj can be realized by matrix multiplication using an edge mapping matrix M_e, which is an N×N×M tensor. Compared with the adjacency matrix, it expands the third dimension to indicate where each edge should be placed. Figure 2 (b) gives a simple example of the mapping process.

Before the multiplication, the edge mapping matrix should first be reshaped into an N^2×M matrix so that M_e and E are both two-dimensional and can be multiplied. Eventually, the multiplication result needs to be reshaped back to a size of N×N×F'_e, transforming the edge set E into the adjacency form. The edge mapping matrix is unique for a particular graph structure with fixed orderings of nodes and edges, so it can be constructed in a pre-processing step before the learning process.
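The following PyTorch sketch illustrates one possible realization of this mapping transformation, assuming the reshaped mapping matrix is stored as a sparse N^2 × M one-hot matrix; the function and variable names are our own and are not taken from any released code:

import torch

def edges_to_adjacency_form(e, edge_index, num_nodes):
    # e: [M, F_e'] transformed edge features (sequential form)
    # edge_index: [2, M] endpoints (i, j) of each edge
    m, f_e = e.shape
    rows = edge_index[0] * num_nodes + edge_index[1]      # flattened position (i, j)
    cols = torch.arange(m)
    mapping = torch.sparse_coo_tensor(                    # reshaped edge mapping matrix M_e
        torch.stack([rows, cols]), torch.ones(m),
        size=(num_nodes * num_nodes, m))
    e_adj = torch.sparse.mm(mapping, e)                   # [N*N, F_e']
    return e_adj.view(num_nodes, num_nodes, f_e)          # recover to [N, N, F_e']

# Example: a 3-node graph with edges (0, 1) and (1, 2); for an undirected graph
# both (i, j) and (j, i) entries would be listed in edge_index.
e_adj = edges_to_adjacency_form(torch.randn(2, 4),
                                torch.tensor([[0, 1], [1, 2]]), num_nodes=3)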

Thanks to the adjacency form, the model can quickly look up the edge between two specified nodes. Based on that, an edge-integrated attention mechanism can be performed on each node, so that the attention weights of its neighbors are generated from not only the features of the two nodes but also the edge connecting them. For each node i, the weight α_ij will be computed for every j ∈ N_i, where N_i is the set including the first-order neighbors of node i as well as node i itself. During the process, the features will be concatenated, parameterized by a weight vector a of size 2F' + F'_e, and passed through a LeakyReLU activation. Normalization will also be performed on these weights across all choices of j ∈ N_i by using a softmax function. The whole process can be formulated as follows:

\alpha_{ij} = \frac{\exp\left(\mathrm{LeakyReLU}\left(a^{\top}\,[\,h_i \,\|\, e_{ij} \,\|\, h_j\,]\right)\right)}{\sum_{k \in \mathcal{N}_i} \exp\left(\mathrm{LeakyReLU}\left(a^{\top}\,[\,h_i \,\|\, e_{ik} \,\|\, h_k\,]\right)\right)}    (1)

It is interesting to note that, for each node, the aggregated features include not only its neighbors' features but also its own. Without edge features, this can be handled by simply adding an identity matrix to the adjacency matrix. However, the introduction of edge features makes it more difficult. In our work, we use a simple trick: adding virtual featured self-loops to the graph. If a node does not have an edge connecting to itself, a virtual self-loop will be attached to it. In particular, for every virtual self-loop, its features will be computed as the average of all its adjacent edges' features in each dimension, as a compromise. All these operations are performed before the graph is fed to the model.
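A simple pre-processing sketch for these virtual self-loops is given below; it assumes edges are stored as an index-pair list with an attribute matrix, and the helper name is ours:

import torch

def add_virtual_self_loops(edge_index, edge_attr, num_nodes):
    # edge_index: [2, M] endpoints, edge_attr: [M, F_e]
    has_loop = torch.zeros(num_nodes, dtype=torch.bool)
    has_loop[edge_index[0][edge_index[0] == edge_index[1]]] = True

    new_edges, new_attrs = [], []
    for v in range(num_nodes):
        if has_loop[v]:
            continue                                        # keep existing self-loops
        incident = (edge_index[0] == v) | (edge_index[1] == v)
        if incident.any():
            loop_attr = edge_attr[incident].mean(dim=0)     # average of adjacent edge features
        else:
            loop_attr = torch.zeros(edge_attr.size(1))      # isolated node: zero features
        new_edges.append(torch.tensor([v, v]))
        new_attrs.append(loop_attr)

    if new_edges:
        edge_index = torch.cat([edge_index, torch.stack(new_edges, dim=1)], dim=1)
        edge_attr = torch.cat([edge_attr, torch.stack(new_attrs)], dim=0)
    return edge_index, edge_attr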

Figure 2: Left: (a) An illustration of a regular graph; (b) An example of the mapping transformation performed on (a). M_e is the edge mapping matrix of size N×N×M, with its last dimension encoded in a one-hot scheme. For simplicity, we draw the matrix by replacing the one-hot encoding vectors with their non-zero indices. E is the edge feature set with a size of M×F'_e. M_e should first be reshaped to N^2×M, and the result recovered to N×N×F'_e after multiplication. Right: (c) An example of graph transformation. The roles of nodes and edges are inverted in the new graph; (d) The adjacency matrix of the new graph, namely the edge adjacency matrix. An identity matrix has been added to pre-build the self-adjacent relations; (e) The mapping matrix of the new graph.

After acquiring the normalized attention weights for each neighborhood, we can perform a weighted sum over these neighbor node features. In addition, a non-linearity σ will be applied to the summation results. The final results, which are also the outputs of the node attention block, can be expressed as:

h'_i = \sigma\left(\sum_{j \in \mathcal{N}_i} \alpha_{ij}\, h_j\right)    (2)

It should be noticed that we only aggregate the node features to generate the new set of node features. The edge features take part in weight computing but are not merged into the new node features. We design such a strategy for the clarity and symmetry of the model: if we merged edge features into nodes in each iteration, all the features would tangle up together and make the network more complicated and confusing. In fact, we also produce a set of edge-integrated node features in the node attention block. For each node i, we generate its edge-integrated features m_i as follows:

m_i = \sigma\left(\sum_{j \in \mathcal{N}_i} \alpha_{ij}\, [\,h_j \,\|\, e_{ij}\,]\right)    (3)

However, these features will only be used in the last-level merge layer to achieve a multi-scale concatenation. They will never be passed to the next EGAT layer as the inputs.
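Putting Eqs. (1)-(3) together, a dense (mask-based) sketch of the node attention block reads as follows; our actual implementation relies on sparse matrices, and the module and variable names here are illustrative:

import torch
import torch.nn as nn
import torch.nn.functional as F

class NodeAttentionBlock(nn.Module):
    def __init__(self, f_node, f_edge):
        super().__init__()
        self.a = nn.Linear(2 * f_node + f_edge, 1, bias=False)   # weight vector a
        self.leaky_relu = nn.LeakyReLU(0.2)

    def forward(self, h, e_adj, adj):
        # h: [N, F'] node features, e_adj: [N, N, F_e'] edges in adjacency form,
        # adj: [N, N] adjacency matrix including self-loops
        n = h.size(0)
        h_i = h.unsqueeze(1).expand(n, n, -1)                     # features of node i
        h_j = h.unsqueeze(0).expand(n, n, -1)                     # features of neighbor j
        logits = self.a(torch.cat([h_i, e_adj, h_j], dim=-1)).squeeze(-1)
        logits = self.leaky_relu(logits).masked_fill(adj == 0, float('-inf'))
        alpha = F.softmax(logits, dim=1)                          # Eq. (1)
        h_new = F.elu(alpha @ h)                                  # Eq. (2)
        m = F.elu(torch.einsum('ij,ijf->if', alpha,               # Eq. (3)
                               torch.cat([h_j, e_adj], dim=-1)))
        return h_new, m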

4.3 Edge Attention Block

The node features can update themselves periodically in node attention blocks to acquire high-level representations, whereas it is unreasonable to keep reusing the original low-level edge features during the weight computation. Besides, we also need high-level edge features to keep a balance of importance between nodes and edges. Thus, we propose edge attention blocks, each of which accepts a set of node features, H, and a set of edge features, E, and produces E', a new set of edge features.

A natural idea to realize such blocks is to update each edge's features by aggregating its adjacent edges' features. In undirected graphs, we consider two edges adjacent if and only if they share at least one common vertex. To achieve the aggregation, we adopt a simple trick: switching the roles of nodes and edges in the graph. A similar concept on directed graphs has been proposed by Chen et al.[3] for community detection. To achieve this, we create a new graph based on the original graph, whose nodes and edges are the edges and nodes of the original one, respectively. The transformation of the graph and the matrices we construct are illustrated in Figure 2 (right).

The inputs of node and edge features are organized in the same sequential form. Thanks to the symmetric design, we can easily perform the attention mechanism on the new graph, because the node feature set can be converted into the adjacency form by using M_n, the node mapping matrix, with no difficulty. For each edge p, the normalized attention weight β_pq of its neighbor edge q can be expressed as:

\beta_{pq} = \frac{\exp\left(\mathrm{LeakyReLU}\left(b^{\top}\,[\,e_p \,\|\, h_{pq} \,\|\, e_q\,]\right)\right)}{\sum_{r \in \mathcal{N}_p} \exp\left(\mathrm{LeakyReLU}\left(b^{\top}\,[\,e_p \,\|\, h_{pr} \,\|\, e_r\,]\right)\right)}    (4)

where N_p is the first-order neighbor set of edge p (including p itself), h_pq denotes the features of the node shared by edges p and q, and b is a weight vector of size 2F'_e + F'. One noteworthy point is that, when we compute the attention weight between an arbitrary edge and itself, there is no middle node between the two edges. In our experiments, we logically create an empty node between the two edges by padding all the features of this virtual node with zeros. Like the node features, the computation of the new set of edge features can be represented as:

e'_p = \sigma\left(\sum_{q \in \mathcal{N}_p} \beta_{pq}\, e_q\right)    (5)
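A matching dense sketch of the edge attention block is given below, assuming the line-graph adjacency and the shared-node feature tensor (zero-padded on its diagonal, as described above) have been pre-built; again, all names are illustrative rather than part of a released implementation:

import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeAttentionBlock(nn.Module):
    def __init__(self, f_edge, f_node):
        super().__init__()
        self.b = nn.Linear(2 * f_edge + f_node, 1, bias=False)   # weight vector b
        self.leaky_relu = nn.LeakyReLU(0.2)

    def forward(self, e, shared_h, edge_adj):
        # e: [M, F_e'] edge features
        # shared_h: [M, M, F'] features of the node shared by edges p and q
        #           (all zeros where p == q, i.e. the virtual "empty node")
        # edge_adj: [M, M] adjacency of the transformed graph, identity included
        m = e.size(0)
        e_p = e.unsqueeze(1).expand(m, m, -1)
        e_q = e.unsqueeze(0).expand(m, m, -1)
        logits = self.b(torch.cat([e_p, shared_h, e_q], dim=-1)).squeeze(-1)
        logits = self.leaky_relu(logits).masked_fill(edge_adj == 0, float('-inf'))
        beta = F.softmax(logits, dim=1)                           # Eq. (4)
        return F.elu(beta @ e)                                    # Eq. (5)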

4.4 EGAT Architecture

In this section, we present EGATs, which are constructed by stacking several EGAT layers and appending a merge layer at the tail. The architecture of EGATs is illustrated by Figure  1 (b).

Multi-scale strategies have been widely used to aggregate hierarchical feature maps in CNN models. Xu et al.[24] first introduced such strategies into GNNs and proposed jumping knowledge networks, which further improve the accuracy. Inspired by such works, we adopt a multi-scale merge strategy by adding a merge layer in EGATs. Unlike jumping knowledge networks, we collect not only node features but also edge features. Edge features are integrated into nodes in each EGAT layer, resulting in the edge-integrated features m_i mentioned in Section 4.2. All m_i generated from different iterations are aggregated together using a concatenation operation. Besides, we adopt multi-head attention in the merge layer to further stabilize the attention mechanism. Unlike GATs[21], our multi-head attention is performed on the unity of all EGAT layers rather than a single layer. K independent multi-scale edge-integrated features are computed and merged, resulting in the feature representation as follows:

z_i = \big\Vert_{k=1}^{K} \big\Vert_{l=1}^{L} m_i^{(l,k)}    (6)

where L indicates the number of EGAT layers, K the number of attention heads, and m_i^{(l,k)} represents the edge-integrated node features of node i produced in iteration l of attention head k. To obtain a more refined representation, we apply a one-dimensional convolution to the results as a linear transformation, followed by a non-linearity. For node classification tasks, a softmax function is applied at the end to generate the predicted labels.
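A sketch of the merge layer is shown below; the feature dimensions and class count are illustrative, and a kernel size of 1 makes the one-dimensional convolution act as a shared per-node linear map:

import torch
import torch.nn as nn

class MergeLayer(nn.Module):
    def __init__(self, f_merged, num_classes):
        super().__init__()
        self.conv = nn.Conv1d(f_merged, num_classes, kernel_size=1)

    def forward(self, m_list):
        # m_list: L x K tensors of shape [N, F' + F_e'] (one per layer and head)
        m = torch.cat(m_list, dim=-1)                             # Eq. (6)
        logits = self.conv(m.t().unsqueeze(0)).squeeze(0).t()     # [N, num_classes]
        return torch.log_softmax(logits, dim=-1)

# Example: L = 2 layers, K = 8 heads, 100 nodes, 12-dim edge-integrated features.
m_list = [torch.randn(100, 12) for _ in range(2 * 8)]
out = MergeLayer(2 * 8 * 12, num_classes=7)(m_list)               # [100, 7]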

5 Experiments

We empirically assess the effectiveness of EGATs by performing a comparative evaluation against state-of-the-art approaches on several node classification datasets, which include both node-sensitive and edge-sensitive graphs. Besides, some additional analyses are also included in this section.

5.1 Datasets

We conduct our experiments on five node classification tasks containing both node-sensitive and edge-sensitive datasets. The former are graphs whose node features highly correlate with the node labels, while the latter are those in which the edges possess a dominant position. Such a division is somewhat relative, and it does not mean that the features in a weaker position make no contribution to the final results.

Node-Sensitive Graphs. Three real-world node classification datasets, which include Cora, Citeseer, and Pubmed[19], are utilized in our experiments for node-sensitive graph learning. These datasets are citation networks and have been widely used in graph learning research as standard benchmarks. Notably, they are undirected and do not have edge features within the graphs. For a fair comparison, we adopt the same dataset splits used in the GCN[10] and GAT[21] papers.

Edge-Sensitive Graphs. We derive two trading networks, Trade-B and Trade-M, to test the effectiveness of our models on edge-sensitive graphs. The two datasets were built in collaboration with a bank and refer to real-world trading records. For confidentiality, we cleaned and extracted some distinct patterns of abnormal behaviors from the original data and regenerated them as new datasets. In these datasets, each node represents a customer, with an attribute indicating its risk level. The edges represent the relations among customers, and their features contain the number and total amount of recent transactions. Trade-B is a binary classification dataset, which possesses 3907 nodes (97 of them labeled) and 4394 edges. Trade-M is ternary, with 4431 nodes (139 of them labeled) and 4900 edges. For both datasets, we separated the labeled nodes into three parts, for training, validation, and test, with a ratio of 3:1:1. The two datasets are initially directed; to make them suitable for EGATs, we converted them into an undirected form.
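As an illustration, such a 3:1:1 partition of the labeled nodes can be produced as sketched below; the random split under a fixed seed is an assumption of this sketch, not the exact partition used in our experiments:

import torch

def split_labeled_nodes(labeled_idx, seed=0):
    g = torch.Generator().manual_seed(seed)
    perm = labeled_idx[torch.randperm(len(labeled_idx), generator=g)]
    n_train, n_val = len(perm) * 3 // 5, len(perm) // 5
    return perm[:n_train], perm[n_train:n_train + n_val], perm[n_train + n_val:]

# Example: the 97 labeled nodes of Trade-B.
train_idx, val_idx, test_idx = split_labeled_nodes(torch.arange(97))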

5.2 Experimental Setup

For all the experiments, we implement EGATs based on the PyTorch framework[15]. Because both the adjacency and mapping matrices consume a large amount of memory, we convert them into sparse forms to reduce the memory requirement and computational complexity during the learning process. The experimental setups for node-sensitive and edge-sensitive graphs are described as follows.
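For instance, a dense adjacency or mapping matrix can be converted to a sparse COO tensor with PyTorch's built-in routines, and the sparse-dense product used inside the attention blocks then operates only on the non-zero entries; this is a minimal illustration rather than our full storage scheme:

import torch

adj = torch.tensor([[1., 1., 0.],
                    [1., 1., 1.],
                    [0., 1., 1.]])
adj_sp = adj.to_sparse()                           # COO form: 7 stored entries instead of 9
out = torch.sparse.mm(adj_sp, torch.randn(3, 4))   # sparse x dense product, shape [3, 4]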

Node-Sensitive Graphs. Because the three citation networks do not possess any edge features, we generate one weak topological feature for each edge by counting the number of its adjacent edges. In these experiments, we adopt an EGAT model with L = 2 and K = 8, where L and K represent the number of EGAT layers and attention heads. For simplicity, we use the same numbers of output features for every EGAT layer, namely F' = 8 and F'_e = 4 for nodes and edges, respectively. A one-dimensional convolution operation is performed in the merge layer to produce C features (where C is the number of classes), followed by a softmax function. To improve accuracy, techniques such as dropout[20] and L2 regularization are also used in EGATs. All these experiments, except Pubmed, were run on a machine with two GeForce GTX 1080 Ti GPUs. Because of its larger video memory requirement, Pubmed was run on a Tesla V100 instead.

Edge-Sensitive Graphs. Trade-B and Trade-M are the two edge-sensitive benchmarks used in our experiments. As mentioned above, virtual self-loops are added to the graphs before training. For these datasets, we apply an EGAT model with L = 2 and K = 8, with the same output feature dimensions for each EGAT layer. Differing from the node-sensitive experiments, we use three combinations of F' and F'_e with different ratios, listed as 8:4, 6:6, and 4:8, respectively. All other details of this model are similar to those used for node-sensitive graph learning. All these experiments were run on a machine with two GeForce GTX 1080 Ti GPUs.

5.3 Results

For the node-sensitive tasks, we report the mean classification accuracy on the test nodes after 10 runs, which is listed in Table 1. We compare our results against several strong baselines and state-of-the-art approaches proposed in previous works. In particular, we re-implement a two-layer GAT model[21] in PyTorch, namely SP-GAT*, with the number of hidden units set to 8. For a fair comparison, SP-GAT* accepts the same sparse representations of matrices used in our model.

The results show that EGATs are highly competitive with the state-of-the-art models on such node-sensitive graphs. We notice a slight decrease on both Cora and Citeseer compared with SP-GAT*, which may be caused by the introduction of edge features. Since we generate the feature of each edge from the number of its adjacent edges, some interference may occur if these features turn out to be largely useless. However, those negative effects are quite insignificant. Thanks to the symmetrical design, EGATs can adjust themselves during learning, concentrate more on the useful features, and produce acceptable results. In a word, EGATs can achieve high accuracy in node-sensitive classification tasks, surpassing the performance of most state-of-the-art approaches.

Method Cora Citeseer Pubmed
MLP 55.1% 46.5% 71.4%
ManiReg[1] 59.5% 60.1% 70.7%
SemeiEmb[23] 59.0% 59.6% 71.7%
LP[28] 68.0% 45.3% 63.0%
DeepWalk[16] 67.2% 43.2% 65.3%
ICA[12] 75.1% 69.1% 73.9%
Planetoid[25] 75.7% 64.7% 77.2%
Chebyshev[5] 81.2% 69.8% 74.4%
GCN[10] 81.5% 70.3% 79.0%
Monet[14] 81.7% - 78.8%
SP-GAT* 82.5±0.4% 70.8±0.5% 78.1±0.4%
EGAT (ours) 82.1±0.7% 70.3±0.5% 78.1±0.4%
Table 1: Summary of the results on node classification accuracy, for Cora, Citeseer and Pubmed. SP-GAT* corresponds to the best result of GAT implemented by us with a sparse form.
Method Trade-B Trade-M
SP-GAT* 65.0% 46.4%
SP-GAT-sum* 85.0% 51.1%
SP-GAT-avg* 78.0% 71.4%
SP-GAT-max* 81.5% 65.7%
EGAT (F' = 8, F'_e = 4) 87.5% 84.3%
EGAT (F' = 6, F'_e = 6) 88.0% 85.4%
EGAT (F' = 4, F'_e = 8) 92.0% 78.2%
Table 2: Summary of the results on node classification accuracy, for Trade-B and Trade-M. The hyper-parameters F' and F'_e represent the node and edge output feature dimensions used in our EGAT model, respectively.

For the edge-sensitive tasks, we report the mean classification accuracy on the test nodes after 10 runs, and apply SP-GAT* and its variants to the benchmarks as comparisons. For SP-GAT*, we only feed the original node features as inputs. To ensure fairness, we further create three variants of SP-GAT* by aggregating edge features into nodes as additional node features in advance, using different functions including sum, average, and max pooling. Besides, we evaluate the accuracy by comparing three EGATs with different ratios of F' and F'_e. The comparative results are listed in Table 2. As the table shows, EGATs deliver a remarkable performance, far ahead of the other approaches on both datasets. For Trade-B and Trade-M, the best classification accuracy of EGATs reaches 92.0% and 85.4%, respectively. It is also interesting to observe that different datasets may possess different characteristics. For example, the edge features within Trade-B can be better expressed by summing them up, whereas the average operation may be more applicable for representing the edge features in Trade-M. Despite these differing traits, EGATs achieve high accuracy against these baselines on all the datasets, which means that EGATs can learn such characteristics of graphs spontaneously. To our best knowledge, there are no existing approaches that can process these kinds of graphs effectively.

We also investigate the effect of the ratio of F' to F'_e on accuracy. According to the results, if the edge features play a more important role than the node features, we recommend choosing a small or balanced value of F' so that the edges have a higher chance to express themselves. However, there may be exceptions in some cases. For example, the accuracy decreased to 78.2% when we selected a high F'_e on Trade-M. Due to the mutual effect of node and edge features, the model becomes complex, and it is hard to consider the two kinds of features separately. So, if better performance is demanded, it is better to tune these hyper-parameters several times to choose the most suitable ones.

5.4 Complexity Analysis

The complexity analysis of EGATs is given in this subsection. Since the construction of the adjacency and mapping matrices occurs in a pre-processing step rather than on the critical path, we ignore it and concentrate on the learning process. In EGATs, matrix multiplication is the most time-consuming operation, which can be regarded as the entry point of the analysis. Assume that we have a graph with N nodes and M edges. In each node attention block, we introduce the edge features by applying a mapping transformation, which is in fact a matrix multiplication. It can easily be shown that the complexity of multiplying an N^2×M sparse matrix and an M×F'_e dense matrix can be reduced to O(m F'_e), where m denotes the number of non-zero elements in the sparse matrix. Thus, the complexity of such a transformation is O(M), since F'_e, the number of features, can be seen as a constant and the one-hot mapping matrix has O(M) non-zero entries. Because the complexity of GATs is no less than O(M) in each iteration, the introduction of edges in node attention blocks will not significantly increase the complexity.

Things are a little different in edge attention blocks. Consider a star graph with one central node and n neighbors. Because every two edges are adjacent, when we switch the roles of nodes and edges, the number of edges in the new graph is on the order of n^2. Extending this conclusion to a regular graph with N nodes and M edges, the number of non-zero elements in the converted mapping matrix can be represented as O(Σ_i d_i^2), where d_i indicates the degree of node i in the graph. Thus, the complexity of the edge attention block is of the same order. However, based on our experimental results, the latency of EGATs is quite acceptable on the benchmarks and on graphs of a similar scale. Besides, all the test datasets except Pubmed can run on a GeForce GTX 1080 Ti without exceeding the memory limit. If higher performance is required, some modifications could be made in the edge attention block. For example, each edge can regard its two adjacent nodes as virtual edges and only aggregate the edge part of N_p during the learning.

6 Conclusions

We proposed edge-featured graph attention networks (EGATs), novel edge-integrated graph neural networks that operate on graphs with both node and edge features. We incorporate edge features into GNNs and present a symmetrical approach to exploit them. To our best knowledge, we are the first to incorporate edges as node-equivalent entities, point out graphs' different preferences for features, and handle them spontaneously. The results demonstrate that EGATs achieve state-of-the-art performance on node classification tasks, especially on edge-sensitive datasets.

There are some potential improvements to EGATs that could be addressed as future work. In EGATs, we update each edge's features by aggregating its neighbors' information. However, the number of neighbors of each edge may be huge. Despite transforming the matrices into sparse forms, the models still require a large amount of memory when applied to large-scale graphs. Thus, it is worth finding an improved way to reduce the models' memory requirement. Besides, EGATs do not naturally support directed graphs or multi-graphs. We intend to pursue these extensions in future work.

Broader Impact

In this work, edge-featured graph attention networks (EGATs), novel edge-integrated graph neural networks, were proposed to perform node classification on graphs with node and edge features. This work has the following potential positive impacts on society. First, the models proposed in this paper are fairly versatile and have broad application prospects in many fields. For example, the models can be applied in the financial sector as an aid to finding people who may be suspected of financial fraud, money laundering, etc. Second, given the absence of edge features in traditional GNN approaches, our work may attract the attention of other researchers and spawn a series of related research that further enhances the basic framework of graph neural networks at a theoretical level. At the same time, this work may have some negative consequences with a small probability. Because very few works have tried to exploit edge features in graph neural networks, this field is still rather immature and receives little attention; therefore, the negative impacts of our models on society are unclear and need further exploration. Besides, we should be cautious about the consequences of system failures. It should be noticed that the prediction results of our models should only be regarded as an auxiliary reference rather than a definite truth. Users of the models should perform a second, manual verification on their own to ensure the authenticity of the results. We will not be responsible for the negative effects caused by wrong predictions of EGATs.

References

  1. Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of machine learning research, 7(Nov):2399–2434, 2006.
  2. Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. arXiv preprint arXiv:1312.6203, 2013.
  3. Zhengdao Chen, Xiang Li, and Joan Bruna. Supervised community detection with line graph neural networks. arXiv preprint arXiv:1705.08415, 2017.
  4. Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
  5. Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in neural information processing systems, pages 3844–3852, 2016.
  6. Liyu Gong and Qiang Cheng. Exploiting edge features for graph neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9211–9219, 2019.
  7. Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in neural information processing systems, pages 1024–1034, 2017.
  8. Mikael Henaff, Joan Bruna, and Yann LeCun. Deep convolutional networks on graph-structured data. arXiv preprint arXiv:1506.05163, 2015.
  9. Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
  10. Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
  11. Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. arXiv preprint arXiv:1511.05493, 2015.
  12. Qing Lu and Lise Getoor. Link-based classification. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pages 496–503, 2003.
  13. Jianxin Ma, Peng Cui, Kun Kuang, Xin Wang, and Wenwu Zhu. Disentangled graph convolutional networks. In International Conference on Machine Learning, pages 4212–4221, 2019.
  14. Federico Monti, Davide Boscaini, Jonathan Masci, Emanuele Rodola, Jan Svoboda, and Michael M Bronstein. Geometric deep learning on graphs and manifolds using mixture model cnns. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5115–5124, 2017.
  15. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pages 8024–8035, 2019.
  16. Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 701–710, 2014.
  17. Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80, 2008.
  18. Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne Van Den Berg, Ivan Titov, and Max Welling. Modeling relational data with graph convolutional networks. In European Semantic Web Conference, pages 593–607. Springer, 2018.
  19. Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. AI magazine, 29(3):93–93, 2008.
  20. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929–1958, 2014.
  21. Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017.
  22. Xiao Wang, Houye Ji, Chuan Shi, Bai Wang, Yanfang Ye, Peng Cui, and Philip S Yu. Heterogeneous graph attention network. In The World Wide Web Conference, pages 2022–2032, 2019.
  23. Jason Weston, Frédéric Ratle, Hossein Mobahi, and Ronan Collobert. Deep learning via semi-supervised embedding. In Neural networks: Tricks of the trade, pages 639–655. Springer, 2012.
  24. Keyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken-ichi Kawarabayashi, and Stefanie Jegelka. Representation learning on graphs with jumping knowledge networks. arXiv preprint arXiv:1806.03536, 2018.
  25. Zhilin Yang, William W Cohen, and Ruslan Salakhutdinov. Revisiting semi-supervised learning with graph embeddings. arXiv preprint arXiv:1603.08861, 2016.
  26. Ziwei Zhang, Peng Cui, and Wenwu Zhu. Deep learning on graphs: A survey. arXiv preprint arXiv:1812.04202, 2018.
  27. Jie Zhou, Ganqu Cui, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. Graph neural networks: A review of methods and applications. arXiv preprint arXiv:1812.08434, 2018.
  28. Xiaojin Zhu, Zoubin Ghahramani, and John D Lafferty. Semi-supervised learning using gaussian fields and harmonic functions. In Proceedings of the 20th International conference on Machine learning (ICML-03), pages 912–919, 2003.