Towards a Spectrum of Graph Convolutional Networks

Abstract

We present our ongoing work on understanding the limitations of graph convolutional networks (GCNs) as well as our work on generalizations of graph convolutions for representing more complex node attribute dependencies. Based on an analysis of GCNs with the help of the corresponding computation graphs, we propose a generalization of existing GCNs where the aggregation operations are (a) determined by structural properties of the local neighborhood graphs and (b) not restricted to weighted averages. We show that the proposed approach is strictly more expressive while requiring only a modest increase in the number of parameters and computations. We also show that the proposed generalization is identical to standard convolutional layers when applied to regular grid graphs.

Mathias Niepert and Alberto García-Durán
NEC Labs Europe


Index Terms—  graph convolutional networks, graph partition, convolutions

1 Introduction

With this paper we present our ongoing work on exploring a spectrum of graph convolutional networks that addresses some of the shortcomings of GCNs as introduced in the context of graph convolutional learning [1, 2]. We provide some background on graph convolutional networks and explore GCNs with the help of computation graphs. We then introduce a novel way to generalize GCNs. This is done by clustering the neighborhood of each node into exactly k components. Each of these components determines the nodes whose attribute values are aggregated into a d-dimensional vector, where d is the number of node attributes. This introduces more complex feature maps, and we can show that for regular graphs the method is as expressive as standard 2D convolutional networks. At the same time, we can show that the GCN as introduced by Kipf & Welling and the MoNet framework are special instances of the proposed approach where the attribute values of all nodes in the 1-hop neighborhood are average-pooled based on global node degree information. The limitations of GCNs as introduced by Kipf & Welling on regular graphs, that is, graphs with a regular neighborhood connectivity such as grid graphs, have been explored before (see, e.g., http://www.inference.vc/how-powerful-are-graph-convolutions-review-of-kipf-welling-2016-2/). Our motivation is an exploration of generalizations of GCNs.

Since this is a paper describing ongoing work, we do not provide experimental results. We state, however, some results on the expressivity of the method and its relationship to existing approaches. We also believe that the discussion of GCNs in terms of computation graphs is helpful for understanding GCNs and their relation to other graph-based convolutional network approaches which perform an ordering operation in node neighborhoods [3].

2 Related Work

A large number of methods for learning graph representations has been proposed. These include not only approaches to the node classification problem [4, 5, 6, 1, 7] but also approaches for the problem of classifying entire graphs [8, 9, 10, 3, 11, 12]. With our work we focus on the problem of node representation learning and, specifically, on graph convolutional networks applied to this problem.

There is a large number of graph-based learning frameworks such as graph neural networks (GNNs) [13], gated graph sequence neural networks (GGS-NNs) [10], diffusion-convolutional neural networks (DCNNs) [14], and graph convolutional networks (GCNs) [1], to name but a few. Most of these methods can be described as instances of the message passing neural network (MPNN) framework [15] where messages are passed between nodes to update their representations. Moreover, several more recent approaches such as GCNs [1], DCNNs [14], and graph attention networks [16] are special cases of MoNet [2]. The training of these instances is guided by a supervised loss, and the methods were shown to achieve state-of-the-art performance on a diverse set of learning problems.

There are several approaches defining convolutions by applying operations on groups of neighbors. One of the challenges all of these methods have in common is to apply convolutions to neighborhoods of varying sizes so as to preserve the weight sharing property of standard CNNs for regular graphs. A seminal strategy involved the learning of a weight matrix for each possible node degree [9]. An additional recent idea applies self-attention to the graph learning domain [16].

There are also several novel unsupervised learning approaches suitable to address node classification problems. DeepWalk [4] collects random walks on the graphs and applies a Skipgram [17] model to learn node representations. Despite its conceptual simplicity, it was shown to outperform spectral clustering approaches while having the advantage of scaling to large graphs. More recent work [18, 7] introduces unsupervised learning frameworks that can incorporate node attributes.

3 Background

A predecessor to CNNs was the Neocognitron [19]. A typical (deep) CNN is composed of convolutional and dense layers. The purpose of the convolutional layers is the extraction of common patterns found within local regions of the input images. CNNs convolve learned filters over the input image, computing the inner product at every location in the image and outputting the result as tensors. Figure 1 illustrates a convolutional layer.

A graph G = (V, E) is a pair with V the set of vertices and E the set of edges. Let n be the number of vertices and m the number of edges. Each graph can be represented by an adjacency matrix A of size n × n, where A_ij = 1 if there is an edge from vertex i to vertex j, and A_ij = 0 otherwise. Node and edge attributes are features that attain one value for each node and edge of a graph. We use the term attribute value instead of label to avoid confusion with the graph-theoretical concept of a labeling.

To compute partitions of node neighborhoods, we utilize graph labeling methods to impose an order on nodes. A graph labeling is a function ℓ: V → S from the set of vertices to an ordered set S such as the real numbers or the integers. A graph labeling procedure computes a graph labeling for an input graph. A ranking (or coloring) is a function r: V → {1, ..., |V|}. Every labeling ℓ induces a ranking r with r(u) < r(v) if and only if ℓ(u) > ℓ(v). Every graph labeling induces a partition {V_1, ..., V_p} on V with u, v ∈ V_i if and only if ℓ(u) = ℓ(v). Examples of graph labeling procedures are node degree and other measures of centrality commonly used in the analysis of networks. A canonicalization of a graph G is a graph G' with a fixed vertex order which is isomorphic to G and which represents its entire isomorphism class. In practice, the graph canonicalization tool Nauty has shown remarkable performance [20].
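As an illustration of these definitions, the following sketch computes the degree labeling of a small graph and the partition it induces; the adjacency-dict representation and all names are ours, not from the paper.

```python
from collections import defaultdict

def degree_labeling(adj):
    """Node degree as a simple graph labeling (adj: dict node -> set of neighbors)."""
    return {v: len(nbrs) for v, nbrs in adj.items()}

def induced_partition(labeling):
    """Group vertices into cells: u and v share a cell iff l(u) == l(v).

    Cells are ordered by decreasing label value, matching the convention
    that a larger label value means a smaller (better) rank."""
    cells = defaultdict(list)
    for v, value in labeling.items():
        cells[value].append(v)
    return [sorted(cell) for _, cell in sorted(cells.items(), reverse=True)]

# A path graph 0-1-2-3: inner nodes have degree 2, endpoints degree 1.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
part = induced_partition(degree_labeling(adj))
# -> [[1, 2], [0, 3]]
```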

Fig. 1: Each convolutional layer moves a set of 2-D filters (each being part of one 3-D filter) over the input channels from left to right and top to bottom and multiplies and accumulates the corresponding values.

4 Graph Convolutional Networks

Convolutional neural networks have been successfully applied to a large number of problems in areas such as computer vision and natural language processing. Figure 1 depicts the working of a typical convolutional network. Since a straightforward application of convolutions is only possible in domains where the data can be described with a regular graph, such as a grid graph modeling the pixels of an image, there has been recent work on extending ideas from convolutional networks to irregular domains.

In that line of work, graph convolutional networks (GCNs) as proposed by Kipf and Welling are a class of graph-based machine learning models for semi-supervised learning. GCNs are highly competitive and were shown to outperform Chebyshev [11], Planetoid [6], and EmbedNN [21] and to be competitive with MoNet [2] on the commonly used benchmark data sets Cora, Citeseer, and Pubmed.

The GCNs introduced in previous work [1] can be characterized by the expression

H^(l+1) = σ( Â H^(l) W^(l) ),

where Â is some normalized variant of the adjacency matrix, H^(l) contains the vector representations (the feature map) of the graph vertices in the l-th layer, W^(l) is the weight matrix of the l-th layer, and σ is a non-linearity.
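A minimal NumPy sketch of one such layer, using the symmetric normalization Â = D̃^(-1/2) (A + I) D̃^(-1/2) from Kipf & Welling; function and variable names are illustrative.

```python
import numpy as np

def gcn_layer(A, H, W, activation=np.tanh):
    """One GCN layer: H' = sigma(A_hat @ H @ W).

    A: (n, n) adjacency matrix, H: (n, d) node features,
    W: (d, f) weight matrix mapping d input to f output features."""
    n = A.shape[0]
    A_tilde = A + np.eye(n)                   # add self-loops
    deg = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))  # symmetric degree normalization
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt
    return activation(A_hat @ H @ W)
```

Note that the filter weights W are shared across all nodes, which is what makes the layer a graph analogue of weight-shared convolution.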

While GCNs are highly useful for the type of benchmark data sets commonly used in the literature (e.g. the citation networks with text data Cora and Pubmed), they have some limitations. To better understand these limitations it helps to explore GCNs with the corresponding computation graph. It is straightforward to show that the above formulation of GCNs is equivalent (up to a slightly different pooling operation) to the one depicted in Figure 2. A pooling operation is an operation that takes a set of d-dimensional vectors and outputs a d-dimensional vector, and can be more complex than average or max pooling. Each of the layers performs, for each node v in the graph, a pooling operation that includes all 1-hop neighbors of v as well as v itself. Hence, there is one aggregate value computed for each input feature of that layer. The various 2D filters of the conv layers, therefore, always have dimension 1 × d, where d is the number of input features (attributes). This makes the modeling of more complex dependencies between the features of nodes in the 1-hop neighborhood challenging if not impossible. Again, we believe that Figure 2 is helpful when trying to understand the limitations of Kipf & Welling's GCNs. MoNet [2] generalizes this pooling operation by performing several weighted averages where the weights are computed from Gaussian distributions. Similar to GCNs, however, the weights are determined only by the global node degree information from neighboring nodes.

Fig. 2: Illustration of the GCN pooling and convolution operations in previous work [1]. For each node in the graph, the values of each attribute and all nodes in the 1-hop neighborhood are pooled. For the sake of simplicity, we use average pooling in the illustration. The pooling operation in Kipf & Welling is fixed for each node and computed from the global node degrees.

5 Generalizing GCNs

We explore generalizations of the GCN of Kipf & Welling and the generic MoNet framework. One such generalization involves the computation of a partition (or clustering) of N(v), the neighborhood of v, and using this partition to guide the aggregation operations. Instead of aggregating the feature values of all nodes before applying impoverished convolutions, we only aggregate the feature values of the nodes in each of the components of the partition. Hence, instead of only one aggregated value per feature, the input to the convolutions is now a vector of k values on which filters can operate. In the following we describe the different changes and components of the proposed generalization. Figure 3 illustrates the core ideas again using the corresponding computation graph.

5.1 Neighborhood Clustering

To apply more complex convolutional filters to the neighborhood of a given vertex v, we compute a partition or (soft) clustering of the local neighborhood N(v) and obtain a more fine-grained set of aggregation operations. This shifts the problem to finding efficient and reasonable methods for generating such clusterings. Fortunately, there are efficient graph labeling algorithms that compute scores for vertices in a graph based on their structural roles. For instance, centrality measures score nodes according to their roles in the graph. The approach we propose here is motivated by the objective to compute clusterings such that the nodes in one cluster fulfill similar structural roles in the graph induced by N(v).

1:  input: vertex v, graph labeling method ℓ, partition size k
2:  output: k-partition of v's 1-hop neighbors
3:  create subgraph G_v induced by v and its 1-hop neighborhood N(v)
4:  compute values of ℓ by applying it to G_v
5:  set L = {ℓ(u) : u ∈ N(v)}
6:  apply clustering approach (k-means, …) to L with k centers
7:  assign vertices to the k clusters
8:  return the resulting partition {C_1, ..., C_k}
Algorithm 1 Structural Partitioning

Hence, for each node v in the graph and as a preprocessing step, we apply a graph labeling method to the graph induced by the neighborhood N(v). The resulting labeling (ranking) is used to construct the partition of the neighbors. Note that node labeling approaches such as Weisfeiler-Lehman [22] and centrality measures provide a strictly more fine-grained partition/clustering of the nodes compared to the node degree used in GCNs and MoNet. Moreover, the weighted averaging performed by GCNs and MoNet is based on the global node degree and not on structural measures applied to the local neighborhood graphs. Algorithm 1 lists the procedure for computing a partition for each of the nodes in the graph. Again, the idea is that the partition lumps together nodes with similar structural roles. The aggregation operation is then only performed on sets of these more similar nodes. As alternatives to labeling methods such as centrality measures, one could also use the attribute values or some existing domain knowledge to compute the partitions.

The partitioning can be integrated into an end-to-end differentiable model either by using the EM algorithm or by using differentiable components such as RBF functions or existing attention mechanisms.
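A minimal Python sketch of Algorithm 1, assuming the graph is given as an adjacency dict, using node degree on the induced subgraph as the labeling method, and clustering the label values with a simple one-dimensional k-means; all function and variable names are illustrative.

```python
import numpy as np

def structural_partition(adj, v, labeling_fn, k, iters=20):
    """Cluster v's 1-hop neighbors into k components by 1-D k-means
    on their structural labels (Algorithm 1, sketched).

    adj: dict node -> set of neighbors; labeling_fn maps the induced
    subgraph (same dict format) to a score per node, e.g. degree."""
    nbrs = sorted(adj[v])
    keep = set(nbrs) | {v}
    sub = {u: adj[u] & keep for u in nbrs + [v]}   # subgraph induced by v and N(v)
    labels = labeling_fn(sub)
    vals = np.array([labels[u] for u in nbrs], dtype=float)
    # simple 1-D k-means on the label values
    centers = np.linspace(vals.min(), vals.max(), k)
    for _ in range(iters):
        assign = np.argmin(np.abs(vals[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(assign == c):
                centers[c] = vals[assign == c].mean()
    return [[u for u, a in zip(nbrs, assign) if a == c] for c in range(k)]
```

Any labeling function with the same interface (e.g. a centrality measure) can be substituted for the degree labeling without changing the rest of the procedure.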

Fig. 3: An illustration of a k-GCN where every 1-hop neighborhood is partitioned into exactly k components. The aggregation operation is then performed per component and attribute. The feature map of the following convolution can now operate on more than just one value per channel.

5.2 Statistical Aggregation

The aggregation component takes, for each node v, the partition of N(v) computed in the previous step as input. It performs an aggregation operation for each of the node attributes and each component C_i, providing a statistical summary of the node attribute values. For instance, if there are two node attributes x_1 and x_2 and average pooling is used, we compute, for each component C_i and each attribute x_j,

p_{i,j} = (1 / |C_i|) Σ_{u ∈ C_i} x_j(u),

where p_{i,j} is the result of the pooling operation for partition component C_i and attribute x_j, and x_j(u) is the value of attribute x_j at node u. One might want to choose a different aggregation operation for different attributes or use several aggregation functions and concatenate the results. The output of one aggregation operation is a tensor P ∈ ℝ^{k×d}, where d is the number of attributes and k the number of components in the partitions.

GCNs of Kipf & Welling and MoNets always perform a (weighted) average aggregation which is determined by the global node degree. We believe that a combination of arbitrary pooling operations is useful in designing novel GCN variants.
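The per-component average pooling above can be sketched in NumPy as follows; the data layout and names are our assumptions.

```python
import numpy as np

def aggregate(X, partition, pool=np.mean):
    """Pool node attributes per partition component.

    X: dict node -> attribute vector of length d; partition: list of k
    node lists. Returns a (k, d) array P with P[i, j] the pooled value
    of attribute x_j over component C_i; empty components pool to zero."""
    d = len(next(iter(X.values())))
    P = np.zeros((len(partition), d))
    for i, comp in enumerate(partition):
        if comp:
            P[i] = pool(np.stack([X[u] for u in comp]), axis=0)
    return P
```

Passing `pool=np.max` (or concatenating the outputs of several pooling functions) gives the other aggregation variants mentioned above.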

5.3 Convolutions

The input to the convolution is a tensor P ∈ ℝ^{k×d}, where d is the number of attributes. Hence, for every node v in the graph, there is a matrix P_v ∈ ℝ^{k×d}. The convolutional module maintains a set of f 2D filters of dimension k × d whose weights are learned during training. Each of these 2D filters is applied to the matrix P_v and results in a single value; applying all f filters yields, for each node, a vector of dimension f, so the output of the convolution is a feature map of dimension n × f. To this feature map one can apply a non-linear function element-wise such as the ReLU activation function as was done in the original GCN variant.

As in previous work on GCNs, the number of layers, non-linearities, etc. can be chosen freely. In practice, a small number of layers has been shown to perform best.
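A sketch of this convolution step, assuming the per-node aggregation outputs are stacked into a single (n, k, d) array; names are illustrative.

```python
import numpy as np

def kgcn_convolution(H, filters):
    """Apply f 2-D filters to each node's (k, d) aggregated matrix.

    H: (n, k, d) stacked per-node aggregation outputs; filters: (f, k, d).
    Each filter is multiplied elementwise with P_v and summed, giving one
    value per filter, so the output feature map has shape (n, f)."""
    out = np.tensordot(H, filters, axes=([1, 2], [1, 2]))  # (n, f)
    return np.maximum(out, 0.0)  # element-wise ReLU non-linearity
```

With k = 1 this collapses to 1 × d filters applied to a single pooled vector per node, i.e. the GCN setting described in Section 4.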

5.4 Formal Analysis

As discussed before, we are motivated specifically by the lack of expressivity of the original GCN for semi-supervised learning on regular graphs. We can now show that the generalization is as expressive as a 2D convolutional net on image data without an explicit encoding of spatial information. We can relate the k-GCN to CNNs for images in the following way.

Theorem 1.

Given an image in form of a grid graph, a standard convolutional layer whose receptive field covers the 1-hop neighborhood, with no zero padding and f filters, is identical (up to a fixed permutation) to the application of a k-GCN with f filters where the k-partition is computed with a graph canonicalization algorithm.

Proof.

It is possible to show that if an input graph is a grid, then the partition of the neighbors resulting from a canonicalization algorithm such as Nauty is unique and identical for each of the vertices in the graph. ∎

The computational overhead of k-GCNs lies in the computation of the neighborhood clustering for each node. Depending on the labeling/canonicalization algorithm used, this can be more or less expensive. However, as has been shown before [3], computing a canonicalization on smaller neighborhood graphs is highly efficient. Moreover, the partitions have to be computed only once as a preprocessing step. Once the partitions are computed, the computational complexity of a k-GCN is identical (up to the constant k) to that of a GCN.

With the ideas in this paper it is possible to define a spectrum of GCNs. On the one end, there is the Kipf & Welling GCN, where the feature values of all 1-hop neighbors are average-pooled prior to the application of the convolutional filters. On the other end is the model where we choose k to be the maximum node degree and assign one node to each component of the clustering. Note that the proposed approach also generalizes recent methods for normalizing node neighborhoods [3]. We believe that while the best choice of the number of clusters depends on the data, in numerous cases a more fine-grained partition can be beneficial. This is what we explore with ongoing experiments.

References

  • [1] Thomas N Kipf and Max Welling, “Semi-supervised classification with graph convolutional networks,” in Proceedings of ICLR, 2017.
  • [2] Federico Monti, Davide Boscaini, Jonathan Masci, Emanuele Rodola, Jan Svoboda, and Michael M Bronstein, “Geometric deep learning on graphs and manifolds using mixture model cnns,” in Proceedings of CVPR, 2017.
  • [3] Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov, “Learning convolutional neural networks for graphs,” in International Conference on Machine Learning, 2016, pp. 2014–2023.
  • [4] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena, “Deepwalk: Online learning of social representations,” in Proceedings of KDD, 2014, pp. 701–710.
  • [5] Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei, “Line: Large-scale information network embedding,” in Proceedings of the 24th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee, 2015, pp. 1067–1077.
  • [6] Zhilin Yang, William W Cohen, and Ruslan Salakhutdinov, “Revisiting semi-supervised learning with graph embeddings,” in Proceedings of the International Conference on Machine Learning (ICML), 2016.
  • [7] Alberto Garcia-Duran and Mathias Niepert, “Learning graph representations with embedding propagation,” in Advances in neural information processing systems, 2017.
  • [8] Pinar Yanardag and SVN Vishwanathan, “Deep graph kernels,” in Proceedings of KDD, 2015, pp. 1365–1374.
  • [9] David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alán Aspuru-Guzik, and Ryan P Adams, “Convolutional networks on graphs for learning molecular fingerprints,” in Advances in neural information processing systems, 2015, pp. 2224–2232.
  • [10] Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel, “Gated graph sequence neural networks,” in Proceedings of the International Conference on Learning Representations (ICLR), 2016.
  • [11] Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst, “Convolutional neural networks on graphs with fast localized spectral filtering,” in Advances in Neural Information Processing Systems, 2016, pp. 3844–3852.
  • [12] Michael M Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst, “Geometric deep learning: going beyond euclidean data,” IEEE Signal Processing Magazine, vol. 34, no. 4, pp. 18–42, 2017.
  • [13] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini, “The graph neural network model,” IEEE Transactions on Neural Networks, vol. 20, no. 1, pp. 61–80, 2009.
  • [14] James Atwood and Don Towsley, “Diffusion-convolutional neural networks,” in Advances in Neural Information Processing Systems, 2016, pp. 1993–2001.
  • [15] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl, “Neural message passing for quantum chemistry,” in Proceedings of ICML, 2017.
  • [16] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio, “Graph attention networks,” Proceedings of ICLR, 2018.
  • [17] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean, “Distributed representations of words and phrases and their compositionality,” in Advances in neural information processing systems, 2013, pp. 3111–3119.
  • [18] Will Hamilton, Zhitao Ying, and Jure Leskovec, “Inductive representation learning on large graphs,” in Advances in Neural Information Processing Systems, 2017, pp. 1025–1035.
  • [19] Kunihiko Fukushima, “Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position,” Biological Cybernetics, vol. 36, no. 4, pp. 193–202, 1980.
  • [20] Brendan D. McKay and Adolfo Piperno, “Practical graph isomorphism, II,” Journal of Symbolic Computation, vol. 60, pp. 94–112, 2014.
  • [21] Jason Weston, Frédéric Ratle, Hossein Mobahi, and Ronan Collobert, “Deep learning via semi-supervised embedding,” in Neural Networks: Tricks of the Trade, pp. 639–655. Springer, 2012.
  • [22] Boris Weisfeiler and AA Lehman, “A reduction of a graph to a canonical form and an algebra arising during this reduction,” Nauchno-Technicheskaya Informatsia, vol. 2, no. 9, pp. 12–16, 1968.