A Regularized Attention Mechanism for Graph Attention Networks

Abstract

Machine learning models that can exploit the inherent structure in data have gained prominence. In particular, there is a surge in deep learning solutions for graph-structured data, due to its widespread applicability across several fields. Graph attention networks (GAT), a recent addition to the broad class of feature learning models on graphs, utilize the attention mechanism to efficiently learn continuous vector representations for semi-supervised learning problems. In this paper, we perform a detailed analysis of GAT models, and present interesting insights into their behavior. In particular, we show that the models are vulnerable to heterogeneous rogue nodes and hence propose novel regularization strategies to improve the robustness of GAT models. Using benchmark datasets, we demonstrate performance improvements in semi-supervised learning using the proposed robust variant of GAT.

Uday Shankar Shanthamallu, Jayaraman J. Thiagarajan and Andreas Spanias
Arizona State University, Lawrence Livermore National Laboratory
Email: {ushantha@asu.edu, jjayaram@llnl.gov, spanias@asu.edu}

Keywords: semi-supervised learning, graph attention models, robust attention mechanism, graph neural networks

1 Introduction

Dealing with relational data is central to a wide range of applications including social networks [3], epidemic modeling [13], chemistry [10], medicine, energy distribution, and transportation [6]. Consequently, machine learning formalisms for graph-structured data [4, 8] have become prominent, and are regularly being adopted for information extraction and analysis. In particular, graph neural networks [11, 9, 16] form an important class of approaches, and they have produced unprecedented success in supervised and semi-supervised learning problems. Broadly, these methods generalize convolutional networks to the case of arbitrary graphs [7, 1], through spectral analysis (e.g. the Laplacian [5]) or neighborhood-based techniques [2]. Graph Attention Networks (GAT) [15] are a recent addition to this class of methods, and they rely solely on attention mechanisms for feature learning. In contrast to spectral approaches, attention models do not require the construction of an explicit Laplacian operator and can be readily applied to non-Euclidean data. Further, GATs are highly effective, thanks to recent advances in attention modeling [14], and are easily scalable. Given the widespread adoption of attention models in language modeling and computer vision, it is crucial to understand the functioning and robustness of the attention mechanism.

Figure 1: Attention weights learned by the GAT model for the Cora and Citeseer datasets: (a) Cora node 188, degree 3; (b) Cora node 1701, degree 75; (c) Citeseer node 1591, degree 7; (d) Citeseer node 582, degree 52. There is a tendency to produce uniform attention weights even when a node has a large neighborhood size.

In this paper, we perform a detailed analysis of GAT models, and present interesting insights into their behavior. We regularize the attention mechanism in GAT in order to improve the robustness of the attention models. An attention model parameterizes the local dependencies to determine the most relevant parts of the neighborhood to focus on while computing the features for a node. Using empirical analysis with GAT, we make an interesting observation: with unweighted graphs, there is a tendency to assign uniform attention weights to all connected neighbors at every node. In other words, all neighbors are treated with the same importance; consequently, nodes with high degree (or valency) end up being highly influential to the feature learning. Inference can therefore be greatly affected by introducing even a small number of "rogue" or "noisy" nodes with high degree into the network structure. For example, in a social network, an entity (node) with ill intent can corrupt the network structure by establishing connections with several other nodes, even though it does not necessarily share a coherent community association. In practice, such noisy nodes can arise due to measurement errors, availability of only partial information while constructing the relational database, or the presence of adversaries specifically designed to make inference challenging. This motivates the need to regularize the attention mechanism and improve the robustness of attention models on graphs.

We propose an improved variant of GAT that analyzes the distribution of attention coefficients, and attempts to minimize both the global influence of each node and the tendency to produce uniform attention weights across a neighborhood. We achieve this through the inclusion of sparsity-based regularization strategies. Using experiments with benchmark network datasets, we demonstrate improvements over standard GAT in semi-supervised learning, thereby effectively combating structural noise in graphs.

2 Graph Attention Networks

We represent an undirected and unweighted graph using the tuple $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ denotes the set of nodes with cardinality $|\mathcal{V}| = N$, and $\mathcal{E}$ denotes the set of edges. Each node $i$ is endowed with a $d$-dimensional node attribute vector (also referred to as the graph signal), $\mathbf{x}_i \in \mathbb{R}^d$. For a given node $i$, its closed neighborhood is given by $\mathcal{N}_i = \{i\} \cup \{j : (i, j) \in \mathcal{E}\}$.

An attention head is the most basic unit in GAT [15]. A head learns a hidden representation for each node by performing a weighted combination of the node attributes in its closed neighborhood, where the weights are trainable. In our setup, we consider a simple dot-product attention, similar to the Transformer architecture [14]. Formally, an attention head is comprised of the following steps:

Step 1: A feed-forward layer that transforms each node attribute $\mathbf{x}_i$ into $\mathbf{W}\mathbf{x}_i$, where $\mathbf{W}$ is a shared trainable weight matrix.

Step 2: A shared trainable dot-product attention mechanism which learns coefficients $e_{ij} = \mathbf{a}^{\top}\left[\mathbf{W}\mathbf{x}_i \,\|\, \mathbf{W}\mathbf{x}_j\right]$ for each valid edge $(i, j)$ in the graph. This is carried out using attributes of the connected neighbors, where $\mathbf{a}$ denotes the parameters of the attention function, and $\|$ represents concatenation of the features from nodes $i$ and $j$ respectively.

Step 3: A softmax layer for normalizing the learned attention coefficients across the closed neighborhood, $\alpha_{ij} = \exp(e_{ij}) / \sum_{k \in \mathcal{N}_i} \exp(e_{ik})$. For simplicity, we represent the normalized attention coefficients for the entire graph as a matrix $\mathbf{A}$, where $\alpha_{ij}$ denotes the importance of node $j$'s features in approximating the feature for node $i$.

Step 4: A linear combiner that performs a weighted combination of the node features with the learned attention coefficients, followed by a non-linearity: $\mathbf{h}_i = \sigma\left(\sum_{j \in \mathcal{N}_i} \alpha_{ij}\,\mathbf{W}\mathbf{x}_j\right)$, where $\sigma$ denotes the activation function.
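To make these four steps concrete, the following is a minimal NumPy sketch of a single attention head on a toy graph. It is our own illustration rather than the GAT reference implementation: the function and variable names are ours, dense loops are used for clarity rather than efficiency, and a ReLU stands in for whichever non-linearity is used in practice.

import numpy as np

def attention_head(X, adj, W, a):
    """One GAT-style attention head (illustrative sketch, not the reference implementation).

    X   : (N, d)  node attribute matrix
    adj : (N, N)  binary adjacency matrix of the undirected, unweighted graph
    W   : (d, d') feed-forward transform (Step 1)
    a   : (2*d',) parameters of the dot-product attention (Step 2)
    """
    N = X.shape[0]
    H = X @ W                                    # Step 1: transform node attributes
    adj_closed = adj + np.eye(N)                 # closed neighborhood N_i includes node i itself

    # Step 2: raw coefficients e_ij = a^T [h_i || h_j], computed only for valid edges
    E = np.full((N, N), -np.inf)
    for i in range(N):
        for j in range(N):
            if adj_closed[i, j] > 0:
                E[i, j] = a @ np.concatenate([H[i], H[j]])

    # Step 3: softmax normalization across each closed neighborhood
    E = E - E.max(axis=1, keepdims=True)         # numerical stability; invalid edges stay at -inf
    A = np.exp(E)
    A = A / A.sum(axis=1, keepdims=True)         # alpha_ij = A[i, j]; each row sums to one

    # Step 4: weighted combination of neighbor features followed by a non-linearity (ReLU here)
    return np.maximum(A @ H, 0.0), A

# toy example: a 4-node path graph with 5-dimensional attributes and 3-dimensional hidden features
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 5))
adj = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
h, A = attention_head(X, adj, rng.normal(size=(5, 3)), rng.normal(size=(6,)))
print(h.shape, A.sum(axis=1))                    # (4, 3) and rows of A summing to 1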

3 Analysis of Attention Weights Inferred by GAT

In this section, we perform a detailed analysis of the attention coefficients learned by GAT. The presented observations will motivate the need for strategies to regularize the attention mechanism. Contrary to our expectation that an attention head would assign different levels of importance to the nodes in a closed neighborhood, our analysis shows that, for most nodes, the distribution of coefficients in a closed neighborhood is almost always uniform. This is particularly the case when the node degree is low.

Figure 2: Sorted non-uniformity scores vs. node index for (a) the original Cora dataset without any rogue nodes/edges, and (b) Cora with 50 rogue nodes and 100 noisy edges per rogue node. Observe that the robust GAT with attention regularization produces higher non-uniformity scores, indicating that it produces non-uniform attention scores across closed neighborhoods.

For this study, we consider two benchmark citation datasets, namely Cora and Citeseer [12] (details can be found in Section 5). Figure 1 illustrates the attention weight distributions for two nodes each from the Cora and Citeseer datasets, with varying degrees. Interestingly, we find that, even for nodes with high degree, the attention mechanism fails to prioritize nodes in the closed neighborhood, and learns uniform weights for all nodes. This is particularly undesirable, since the resulting features can be noisy for small neighborhoods, while rogue nodes with large degree can have a much higher impact than expected.

In order to quantitatively test our hypothesis on the lack of meaningful structure in the computed attention scores within a closed neighborhood, we use a discrepancy metric given by

$d_i = \sum_{j \in \mathcal{N}_i} \left| \alpha_{ij} - u_i \right|,$    (1)

where $u_i = 1/|\mathcal{N}_i|$ is the uniform distribution score for node $i$. Thus, $d_i$ gives a measure of non-uniformity in the learned attention scores: a lower discrepancy value indicates strong uniformity in the attention scores, and vice versa. We measure the discrepancy score for every node, and Figure 2 shows a plot of the resulting scores.
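As a concrete illustration, the discrepancy score in Eq. (1) can be computed directly from the normalized attention matrix. The following sketch assumes the reconstruction above, i.e. $u_i = 1/|\mathcal{N}_i|$ and the score defined as the total absolute deviation from it; the helper name is ours.

import numpy as np

def non_uniformity_scores(A, adj_closed):
    """Per-node discrepancy (non-uniformity) score, cf. Eq. (1).

    A          : (N, N) normalized attention matrix; rows sum to 1
    adj_closed : (N, N) binary mask of closed neighborhoods (self-loops included)
    Returns d  : (N,) scores; d_i = 0 means perfectly uniform attention over N_i
    """
    degrees = adj_closed.sum(axis=1)              # |N_i| for each node
    u = 1.0 / degrees                             # uniform score u_i
    dev = np.abs(A - u[:, None]) * adj_closed     # deviation, restricted to valid neighbors
    return dev.sum(axis=1)

# example: node 0 attends uniformly, node 1 concentrates most of its mass on one neighbor
adj_closed = np.ones((3, 3))
A = np.array([[1/3, 1/3, 1/3], [0.8, 0.1, 0.1], [0.2, 0.3, 0.5]])
print(non_uniformity_scores(A, adj_closed))       # approximately [0.0, 0.93, 0.33]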

4 Proposed Approach

In this section, we describe the proposed regularization strategy for improving the robustness of GAT models and encouraging non-uniform attention scores. The proposed solution helps combat structured noise in graph datasets. Intuitively, our approach attempts to improve the reliability of the local attention structure by systematically limiting the influence of nodes globally. To this end, we build upon the observations in the previous section, and propose two additional regularization terms with respect to the attention mechanism in the original GAT formulation.

As described in Section 3, GAT models trained on unweighted graphs have a tendency to produce a uniform weight distribution while training the attention mechanism. For a node with a small closed neighborhood, it is prudent to utilize information from all its neighbors in order to approximate the node's latent representation; producing uniform attention weights can therefore be reasonable in such cases. However, if the closed neighborhood contains a rogue node, i.e. a node that cannot be characterized as part of any coherent community in the graph, uniform attention can lead to severe uncertainties in the local approximation. The first regularization strategy limits the global influence of a rogue node, in terms of its participation in the approximation of other nodes' features, by introducing a penalty for exclusivity. For every node $j$, this term is defined as the norm of the attention coefficients assigned to that node in an attention head, $\|\mathbf{A}_{:,j}\|$. Generalizing this to $K$ independent heads in the GAT model, we obtain

$R_1 = \sum_{k=1}^{K} \sum_{j=1}^{N} \left\| \mathbf{A}^{(k)}_{:,j} \right\|.$    (2)

This prevents any one node (or subset of nodes) in the graph from being exclusively influential to the overall feature inferencing. In other words, it does not allow a node with high degree to arbitrarily participate in the approximation of all its neighbors, which is particularly important when those nodes are noisy or adversarial.
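A short sketch of this penalty is given below. Since the specific norm in Eq. (2) is left generic above, the code assumes an $\ell_2$-norm over the columns of each attention matrix purely for concreteness; the helper name and the toy matrices are ours.

import numpy as np

def exclusivity_penalty(attention_heads):
    """R1 of Eq. (2): sum of per-node column norms over all attention heads.

    attention_heads : list of (N, N) normalized attention matrices, one per head.
    Column j collects the attention weights assigned to node j by all neighborhoods,
    so penalizing its norm discourages any single node from dominating the inference.
    """
    return float(sum(np.linalg.norm(A, axis=0).sum() for A in attention_heads))

# toy example with two single-head attention matrices on a 3-node graph;
# in the second head, node 0 receives most of the attention from every neighborhood
A1 = np.array([[0.4, 0.3, 0.3], [0.3, 0.4, 0.3], [0.3, 0.3, 0.4]])
A2 = np.array([[0.9, 0.1, 0.0], [0.8, 0.1, 0.1], [0.7, 0.2, 0.1]])
print(exclusivity_penalty([A1]), exclusivity_penalty([A2]))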

Figure 3: Semi-supervised learning performance under the presence of noisy nodes for (a) the Cora dataset and (b) the Citeseer dataset. Test accuracies are obtained by aggregating the performance from 20 random trials.

The second regularization term explicitly penalizes the uniformity in the attention scores. Note that this is similar in spirit to Eq. (1); however, we found that the form below produces better results:

$R_2 = \sum_{k=1}^{K} \sum_{i=1}^{N} \left( |\mathcal{N}_i| - \left\| \mathbf{A}^{(k)}_{i,:} \right\|_0 \right).$    (3)

Here, we measure the $\ell_0$-norm, i.e. the number of nodes in the neighborhood that have been assigned a non-zero attention weight. By comparing it to the degree of that node, we penalize the case where all nodes in the neighborhood participate in the approximation. Consequently, we maximize $R_2$; here, $K$ represents the number of independent heads.
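As an illustration, the counting form of Eq. (3) can be evaluated as below. The $\ell_0$ count is non-differentiable, so this is only a sketch of the quantity being maximized rather than a training-ready surrogate; the helper name and tolerance are ours.

import numpy as np

def support_penalty(attention_heads, adj_closed, tol=1e-6):
    """R2 of Eq. (3): number of neighbors receiving (numerically) zero attention.

    attention_heads : list of (N, N) normalized attention matrices, one per head
    adj_closed      : (N, N) binary mask of closed neighborhoods
    Maximizing this quantity encourages each head to drop some neighbors entirely.
    """
    degrees = adj_closed.sum(axis=1)                          # |N_i| per node
    total = 0.0
    for A in attention_heads:
        support = ((A > tol) & (adj_closed > 0)).sum(axis=1)  # ||A_{i,:}||_0 per node
        total += (degrees - support).sum()
    return total

# example: a fully uniform head contributes nothing, a sparse head is rewarded
adj_closed = np.ones((3, 3))
A_uniform = np.full((3, 3), 1 / 3)
A_sparse = np.array([[0.7, 0.3, 0.0], [0.0, 0.6, 0.4], [0.5, 0.0, 0.5]])
print(support_penalty([A_uniform], adj_closed), support_penalty([A_sparse], adj_closed))  # 0.0 3.0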

By including the two proposed regularization terms in the original loss function of GAT, we obtain

$\mathcal{L} = \mathcal{L}_{\text{GAT}} + \lambda_1 R_1 - \lambda_2 R_2,$    (4)

where the hyperparameters $\lambda_1$ and $\lambda_2$ are used to weight the penalty terms with respect to the classification loss. Note that $\mathcal{L}_{\text{GAT}}$ includes an additional regularization on the learned model parameters. We optimize all the model parameters with respect to $\mathcal{L}$, and refer to this formulation as Robust GAT. With the proposed modification to the GAT objective, we can now limit the global influence of a node while simultaneously producing non-uniform attention scores within a closed neighborhood.
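Putting the pieces together, the objective in Eq. (4) can be assembled as in the following self-contained sketch. The classification loss is passed in as a scalar; the function name, lam1, lam2, and the $\ell_2$ column norm used for $R_1$ are our own placeholders and assumptions.

import numpy as np

def robust_gat_objective(gat_loss, attention_heads, adj_closed, lam1, lam2, tol=1e-6):
    """Eq. (4): classification loss plus the two attention regularizers.

    gat_loss        : scalar loss of the base GAT (cross-entropy with weight decay)
    attention_heads : list of (N, N) normalized attention matrices, one per head
    adj_closed      : (N, N) binary mask of closed neighborhoods
    """
    degrees = adj_closed.sum(axis=1)
    r1 = sum(np.linalg.norm(A, axis=0).sum() for A in attention_heads)            # Eq. (2)
    r2 = sum((degrees - ((A > tol) & (adj_closed > 0)).sum(axis=1)).sum()
             for A in attention_heads)                                            # Eq. (3)
    return gat_loss + lam1 * r1 - lam2 * r2                                       # minimize R1, maximize R2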

5 Experiment Setup and Results

In this section, we describe the experimental setup for evaluating the impact of adversarial nodes on the GAT model, and present results from the proposed robust variant. We use two benchmark citation networks: Cora and Citeseer [12]. In both datasets, documents are treated as nodes, and citations among the documents are encoded as undirected edges. Additionally, each node (document) is endowed with an attribute vector (a bag-of-words representation). We follow a training and inference setup similar to that of GAT [15] and perform transductive learning. We perturb the graph structure by explicitly introducing rogue nodes, which have noisy edges to regular nodes in the graph. This represents a scenario where the presence of adversaries makes inference more challenging.

We introduce structured noise into the graph datasets in the following manner: first, we sample nodes uniformly at random (without replacement) from the validation set, and delete all existing edges for each of the selected nodes. We then add a fixed number of arbitrary edges for each of these nodes; the nodes to which the edges are established are also chosen at random, but from the entire graph. We specifically introduce noisy nodes only in the validation set to show the impact of adversaries on the overall performance, even when they are not part of training or testing. For comparison, we generate results from the baseline GAT approach in each of these cases; we mainly compare to GAT as it achieves state-of-the-art accuracies on these datasets. For a fair comparison, the architectures and hyperparameter choices were fixed to be the same for both the baseline GAT and the Robust GAT approaches. Note that Robust GAT has two additional hyperparameters, $\lambda_1$ and $\lambda_2$, which were fixed across all experiments.
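The perturbation procedure described above can be sketched as follows. This is our own illustrative implementation: the function and argument names are hypothetical, and the node and edge counts are passed in rather than being the specific values used in our experiments.

import numpy as np

def add_rogue_nodes(adj, candidate_nodes, num_rogue, edges_per_rogue, seed=0):
    """Inject structured noise into an undirected graph, as described in Section 5.

    adj             : (N, N) binary adjacency matrix; a perturbed copy is returned
    candidate_nodes : indices eligible to become rogue (e.g. the validation set)
    num_rogue       : number of rogue nodes to create
    edges_per_rogue : number of arbitrary edges attached to each rogue node
    """
    rng = np.random.default_rng(seed)
    adj = adj.copy()
    N = adj.shape[0]
    rogue = rng.choice(candidate_nodes, size=num_rogue, replace=False)
    for v in rogue:
        adj[v, :] = 0                                # delete all existing edges of the rogue node
        adj[:, v] = 0
        targets = rng.choice(N, size=edges_per_rogue, replace=False)  # arbitrary nodes from the whole graph
        adj[v, targets] = 1
        adj[targets, v] = 1
        adj[v, v] = 0                                # avoid introducing a self-loop
    return adj, rogue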

For the Cora dataset, we vary the number of noisy nodes while fixing the number of noisy edges added per noisy node. Since the order of the Citeseer dataset is higher, we use a correspondingly adjusted range for the number of noisy nodes and edges. For each setting, we performed 20 independent realizations and report the average. The results from this case study are shown in Figure 3.

6 Conclusion

In this paper, we analyzed the attention mechanism in graph attention networks, and showed that they are highly vulnerable to noisy nodes with high degree in the graph. This can be attributed to the surprising behavior of GAT in producing uniform attention over all connected neighbors at every node. In order to alleviate this limitation, we proposed a robust variant of GAT that minimizes the global influence of a node (or a subset of nodes) and also produces non-uniform attention scores within a closed neighborhood. Using benchmark datasets, we demonstrated improvements in semi-supervised learning performance in the presence of structural noise.

Footnotes

  1. This work was supported in part by the ASU SenSIP Center, Arizona State University. Portions of this work were performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

References

  1. M. Defferrard, X. Bresson and P. Vandergheynst (2016) Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems, pp. 3844–3852.
  2. D. K. Duvenaud, D. Maclaurin, J. Iparraguirre, R. Bombarell, T. Hirzel, A. Aspuru-Guzik and R. P. Adams (2015) Convolutional networks on graphs for learning molecular fingerprints. In Advances in Neural Information Processing Systems, pp. 2224–2232.
  3. N. Eagle and A. S. Pentland (2006) Reality mining: sensing complex social systems. Personal and Ubiquitous Computing 10 (4), pp. 255–268.
  4. W. L. Hamilton, R. Ying and J. Leskovec (2017) Representation learning on graphs: methods and applications. arXiv preprint arXiv:1709.05584.
  5. M. Henaff, J. Bruna and Y. LeCun (2015) Deep convolutional networks on graph-structured data. arXiv preprint arXiv:1506.05163.
  6. K. Henderson, B. Gallagher, T. Eliassi-Rad, H. Tong, S. Basu, L. Akoglu, D. Koutra, C. Faloutsos and L. Li (2012) RolX: structural role extraction & mining in large graphs. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1231–1239.
  7. T. N. Kipf and M. Welling (2016) Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.
  8. P. Latouche and F. Rossi (2015) Graphs in machine learning: an introduction. In Proceedings of the 23rd European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), pp. 207–218.
  9. M. Niepert, M. Ahmed and K. Kutzkov (2016) Learning convolutional neural networks for graphs. In International Conference on Machine Learning, pp. 2014–2023.
  10. T. Pham, T. Tran and S. Venkatesh (2018) Graph memory networks for molecular activity prediction. arXiv preprint arXiv:1801.02622.
  11. F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner and G. Monfardini (2009) The graph neural network model. IEEE Transactions on Neural Networks 20 (1), pp. 61–80.
  12. P. Sen, G. Namata, M. Bilgic, L. Getoor, B. Galligher and T. Eliassi-Rad (2008) Collective classification in network data. AI Magazine 29 (3), pp. 93.
  13. P. L. Simon, M. Taylor and I. Z. Kiss (2011) Exact epidemic models on graphs using graph-automorphism driven lumping. Journal of Mathematical Biology 62 (4), pp. 479–508.
  14. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser and I. Polosukhin (2017) Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008.
  15. P. Velickovic, G. Cucurull, A. Casanova, A. Romero, P. Lio and Y. Bengio (2017) Graph attention networks. arXiv preprint arXiv:1710.10903.
  16. M. Zhang and Y. Chen (2018) Link prediction based on graph neural networks. In Advances in Neural Information Processing Systems.