DEMO-Net: Degree-specific Graph Neural Networks for
Node and Graph Classification
Abstract.
Graph data widely exist in many high-impact applications. Inspired by the success of deep learning in grid-structured data, graph neural network models have been proposed to learn powerful node-level or graph-level representations. However, most of the existing graph neural networks suffer from the following limitations: (1) there is limited analysis regarding the graph convolution properties, such as seed-oriented, degree-aware and order-free; (2) the node’s degree-specific graph structure is not explicitly expressed in graph convolution for distinguishing structure-aware node neighborhoods; (3) the theoretical explanation regarding the graph-level pooling schemes is unclear.
To address these problems, we propose a generic degree-specific graph neural network named DEMO-Net, motivated by the Weisfeiler-Lehman graph isomorphism test that recursively identifies 1-hop neighborhood structures. In order to explicitly capture the graph topology integrated with node attributes, we argue that graph convolution should have three properties: seed-oriented, degree-aware, order-free. To this end, we propose a multi-task graph convolution where each task represents node representation learning for nodes with a specific degree value, thus preserving the degree-specific graph structure. In particular, we design two multi-task learning methods: degree-specific weight and hashing functions for graph convolution. In addition, we propose a novel graph-level pooling/readout scheme for learning graph representation provably lying in a degree-specific Hilbert kernel space. The experimental results on several node and graph classification benchmark data sets demonstrate the effectiveness and efficiency of our proposed DEMO-Net over state-of-the-art graph neural network models.
1. Introduction
Nowadays, graph data is being generated across multiple high-impact application domains, ranging from bioinformatics (Gilmer et al., 2017) to financial fraud detection (Zhou et al., 2018; Zhou et al., 2017b), from genome-wide association study (Wu et al., 2018) to social network analysis (Hamilton et al., 2017a). In order to leverage the rich information in graph-structured data, it is of great importance to learn effective node or graph representation from both node/edge attributes and the graph topological structure. To this end, numerous graph neural network models have been proposed recently, inspired by the success of deep learning architectures on grid-structured data (e.g., images, videos, languages, etc.). One intuition behind this line of approaches is that the topological structure as well as node attributes can be integrated by recursively aggregating and compressing the continuous feature vectors from local neighborhoods in an end-to-end training architecture.
One key component of graph neural networks (Hamilton et al., 2017b; Gilmer et al., 2017) is the graph convolution (or feature aggregation function) that aggregates and transforms the feature vectors from a node’s local neighborhood. By integrating the node attributes with the graph structure information using Laplacian smoothing (Li et al., 2018; Kipf and Welling, 2017) or advanced attention mechanisms (Velickovic et al., 2018), graph neural networks learn the node representation in a low-dimensional feature space where nearby nodes in the graph share a similar representation. Moreover, in order to learn the representation for the entire graph, researchers have proposed graph-level pooling schemes (Atwood and Towsley, 2016) that compress the nodes’ representations into a global feature vector. The node or graph representation learned by graph neural networks has achieved state-of-the-art performance in many downstream graph mining tasks, such as node classification (Zhang et al., 2018b), graph classification (Xu et al., 2019), etc.
However, most of the existing graph neural networks suffer from the following limitations. (L1) There is limited analysis on graph convolution properties that could guide the design of graph neural networks when learning node representation. (L2) In order to preserve the node proximity, the graph convolution applies a special form of Laplacian smoothing (Li et al., 2018), which simply mixes the attributes from a node’s neighborhood. This leads to the loss of degree-specific graph structure information in the learned representation. An illustrative example is shown in Figure 1: although nodes 4 and 5 are structurally different, they would be mapped to similar representations due to first-order node proximity using existing methods. Moreover, the neighborhood subsampling methods used to improve model efficiency (Hamilton et al., 2017a) significantly degrade the discriminability of the degree-specific graph structure. (L3) The theoretical explanation regarding the graph-level pooling schemes is largely missing.
To address the above problems, in this paper, we propose a generic graph neural network model DEMO-Net that considers the degree-specific graph structure in learning both node and graph representation. Inspired by the Weisfeiler-Lehman graph isomorphism test (Weisfeiler and Lehman, 1968), the graph convolution of graph neural networks should have three properties: seed-oriented, degree-aware, order-free, in order to map different neighborhoods to different feature representations. As shown in Figure 1, nodes with identical degree value typically share similar subtree (root node followed by its 1-hop neighbors) structures. As a result, the representations of nodes 2 and 8 should be close in the feature space due to their similar subtree structures. On the other hand, nodes 4 and 5 have different subtree structures (i.e., number of subtree leaves), and they indicate different roles in the network, e.g., leader vs. deputy in a covert group. Therefore, they should not be mapped closely in the feature space.
To the best of our knowledge, very little effort on graph neural networks has been devoted to learning the degree-specific representation for each node or the entire graph. To bridge the gap, we present a degree-specific graph convolution by assuming that nodes with the same degree value share the same graph convolution. It can be formulated as a multi-task feature learning problem where each task represents the node representation learning for nodes with a specific degree value.
In addition, we introduce a degree-specific graph-level pooling scheme to learn the graph representation. We theoretically show that the graph representation learned by our model lies in a Reproducing Kernel Hilbert Space (RKHS) induced by a degree-specific Weisfeiler-Lehman graph kernel. The work most similar to ours is Graph Isomorphism Network (GIN) (Xu et al., 2019), which uses the sum-aggregator associated with multi-layer perceptrons as a neighborhood-injective graph convolution that maps different node neighborhoods to different features. However, one issue of GIN is that the degree-aware structures are only implicitly expressed in its graph convolution, relying on the universal approximation capacity of multi-layer perceptrons.
The main contributions of this paper are summarized as follows:

We provide theoretical analysis for graph neural networks from the perspective of the Weisfeiler-Lehman graph isomorphism test, which motivates us to design the graph convolution based on the following properties: seed-oriented, degree-aware and order-free.

We propose a generic graph neural network framework named DEMO-Net by assuming that nodes with the same degree value share the same graph convolution. A degree-specific multi-task graph convolution function is presented to learn the node representation. Furthermore, a novel graph-level pooling scheme is introduced for learning the graph representation, which provably lies in a degree-specific Hilbert kernel space.

The experimental results on several node and graph classification benchmark data sets demonstrate the effectiveness and efficiency of our proposed DEMO-Net model.
The rest of the paper is organized as follows. We review the related work in Section 2, followed by the problem definition and background introduction in Section 3. Section 4 presents our proposed DEMO-Net framework for node and graph representation learning. The extensive experiments and discussion are provided in Section 5. Finally, we conclude the paper in Section 6.
2. Related Work
In this section, we briefly review the related work on graph neural networks for node and graph classification.
2.1. Node Classification
Most of the existing graph neural networks (Hamilton et al., 2017b) learn the node representation by recursively aggregating the continuous feature vectors from local neighborhoods in an end-to-end fashion. They can be fit into the Message Passing Neural Network (MPNN) framework (Gilmer et al., 2017), which explains the feature aggregation of graph neural networks as message passing in local neighborhoods. Generally, they focus on extracting the spatial topological information by operating the convolutions in the node domain (Zhang et al., 2018b), which differs from spectral approaches (Defferrard et al., 2016; Kipf and Welling, 2017) that consider the node representation in the spectral domain. Graph Convolutional Network (GCN) (Kipf and Welling, 2017) defined the convolution operation via a neighborhood aggregation function. Following the same intuition, many graph neural network models have been proposed with different aggregation functions, e.g., attention mechanism (Velickovic et al., 2018), mean and max functions (Hamilton et al., 2017a), etc.
However, most of the graph neural network architectures are motivated by the success of deep learning on grid-like data, with little theoretical analysis to explain their high performance or guide new methodologies. To date, some works have been proposed to explain why graph neural networks work. The convolution of GCN was shown to be a special form of Laplacian smoothing on graphs (Li et al., 2018), which explains the over-smoothing phenomenon brought by stacking many convolution layers. Lei et al. (Lei et al., 2017) showed that the graph representation generated by graph neural networks lies in the Reproducing Kernel Hilbert Space (RKHS) of some popular graph kernels. Moreover, it has been shown that 1-dimensional aggregation-based graph neural networks are at most as powerful as the Weisfeiler-Lehman (WL) isomorphism test (Weisfeiler and Lehman, 1968) in distinguishing graphs (Xu et al., 2019). Compared with the existing work on graph neural networks, in this paper, we design a degree-specific graph convolution that captures the node neighborhood structures, inspired by the WL isomorphism test. This is in sharp contrast to the existing work which focused on preserving the node proximity in the feature space, thus leading to the loss of local graph structures.
2.2. Graph Classification
The graph-level pooling/readout schemes aim to learn a representation of the entire graph from its node representations for graph-level classification tasks. Mean/max/sum functions are commonly used due to their computational efficiency and effectiveness (Atwood and Towsley, 2016; Xu et al., 2019). One challenge for graph-level pooling is to maintain invariance to node order. PATCHY-SAN (Niepert et al., 2016) first adopted external software to obtain a global node order for the entire graph, which is very time-consuming. More recently, a number of graph neural network models have been proposed (Zhang et al., 2018a; Xu et al., 2019; Ying et al., 2018) that formulate node representation learning and graph-level pooling into a unified framework. Different from graph kernel approaches (Shervashidze et al., 2011; Yanardag and Vishwanathan, 2015) that extract graph features or define graph similarity using ad-hoc knowledge or random walk properties, graph neural networks automatically learn the graph representation to integrate node attributes with topological information via an end-to-end training architecture.
Nevertheless, very little effort has been devoted to explicitly considering the degree-specific graph structures for graph representation learning. Our proposed degree-specific graph-level pooling method is designed to address this issue by compressing the learned node representation according to degree values.
3. Preliminaries
In this section, we introduce the notation and problem definition, as well as some background information on graph neural networks.
3.1. Notation
Suppose that a graph is represented as $G = (V, E)$, where $V$ is the set of nodes and $E$ is the edge set. Let $X$ denote the attribute matrix where each row $x_v$ is the $d$-dimensional attribute vector for node $v$. The graph can also be represented by an adjacency matrix $A$, where $A_{ij}$ represents the similarity between $v_i$ and $v_j$ on the graph. For each node $v$, its 1-hop neighborhood is denoted as $N(v)$. Let $\mathcal{G}$ denote a set of graphs. In this paper, we focus on undirected attributed networks, although our model can be naturally generalized to other types of networks. The main notation used in this paper is summarized in Table 1.
Notation  Definition

$\mathcal{G}$  A set of graphs
$G = (V, E)$  A graph with node set $V$ and edge set $E$
$X$  Attribute matrix
$A$  Adjacency matrix
$n$  Number of nodes in the graph
$d$  Dimensionality of the node or graph representation
$N(v)$  1-hop neighborhood of node $v$
$\mathcal{L}_V$  Indices of labeled nodes for node classification
$\mathcal{L}_{\mathcal{G}}$  Indices of labeled graphs for graph classification
$y_v$  Label of node $v$
$y_G$  Label of graph $G$
$h_v^{(k)}$  Node $v$’s representation at the $k$-th iteration
$X_{N(v)}$  Feature set within node $v$’s neighborhood
$\mathcal{T}$  A set of subtrees
$\text{degree}(G)$  A set of the degree values in graph $G$
3.2. Problem Definition
In this paper, we focus on two problems: node-level and graph-level representation learning, by formulating a novel degree-specific graph neural network model. Furthermore, we analyze the proposed model from various aspects, and empirically demonstrate its superior performance on both node and graph classification.
Formally, the node-level and graph-level representation learning problems can be defined below.
Definition 3.1 (Node-level Representation Learning).
Input: (i) An attributed graph $G$ with adjacency matrix $A$ and node attributes $X$; (ii) Labeled training nodes.
Output: A vector representation for each node $v$ in the $d$-dimensional embedding space, where nodes are well separated if their local neighborhoods are structurally different.
Definition 3.2 (Graph-level Representation Learning).
Input: (i) A set of attributed graphs $\mathcal{G}$ with adjacency matrices and node attributes; (ii) Labeled training graphs.
Output: A vector representation for each graph in the $d$-dimensional embedding space, where graphs are well separated if they have different topological structures.
3.3. Graph Neural Networks
It has been observed that a broad class of graph neural network (GNN) architectures follows the 1-dimensional Weisfeiler-Lehman (WL) graph isomorphism test (Weisfeiler and Lehman, 1968). From the perspective of the WL isomorphism test, they mainly consist of the following crucial steps at each iteration of feature aggregation:

Feature initialization (label initialization): The node features are initialized by the original attribute vectors. (Here, the label is an identifier of nodes; in order not to be confused with a class label, we will use node attribute to refer to it in this paper.)

Neighborhood detection (multiset-label determination): It decides the local neighborhood from which the node gathers information from its neighbors. More specifically, a seed (the root node being updated, e.g., node $v$ in Figure 2 when updating its feature at each iteration) followed by its neighbors generates a subtree pattern.

Neighbor sorting (multiset-label sorting): The neighbors are sorted in ascending or descending order of degree values. Subtrees that differ only in the permutation order of neighbors are recognized as the same one.

Feature aggregation (label compression): The node feature is updated by compressing the feature vectors of the aggregated neighbors, including the seed itself.

Graph-level pooling (graph representation): It summarizes all the node features to form a global graph representation.
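For concreteness, the neighborhood detection, neighbor sorting, and label compression steps above can be sketched for the discrete-label case (the classical 1-dimensional WL relabeling). The function name and the truncated-hash compression are illustrative choices, not the paper's implementation:

```python
import hashlib

def wl_iteration(labels, adjacency):
    """One round of 1-dimensional Weisfeiler-Lehman relabeling.

    labels: dict mapping node -> current label (string)
    adjacency: dict mapping node -> list of neighbor nodes
    """
    new_labels = {}
    for node, neighbors in adjacency.items():
        # Sort neighbors' labels so the result is order-free (neighbor sorting),
        # then compress seed + multiset into a new label (label compression).
        multiset = sorted(labels[n] for n in neighbors)
        signature = labels[node] + "|" + ",".join(multiset)
        new_labels[node] = hashlib.sha1(signature.encode()).hexdigest()[:8]
    return new_labels
```

On a star graph, one iteration already separates the center (degree 3) from the leaves (degree 1), while all leaves, being structurally identical, keep a common label.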
Next, we briefly go over some existing graph neural network models which follow the aforementioned steps of the 1-dimensional WL algorithm. We would like to point out that graph neural networks learn the node or graph representation using continuous node attributes, whereas WL algorithms update the node attributes by directly compressing the augmented discrete attributes.
Taking the 1-hop neighborhood into consideration at each iteration, the following node-level graph neural network variants share the same feature initialization and neighborhood detection when learning node representation. When element-wise average or max operations are used for feature aggregation, graph neural networks are invariant to the order of neighbors. We summarize the feature aggregation functions (graph convolution) of these graph neural networks as follows.

Graph Convolutional Network (GCN) (Kipf and Welling, 2017):
(1) $H^{(k+1)} = \sigma\big(\hat{A} H^{(k)} W^{(k)}\big)$, where $\hat{A} = \tilde{D}^{-1/2}(A + I)\tilde{D}^{-1/2}$ is the renormalization of the adjacency matrix with added self-loops, and $W^{(k)}$ is the trainable matrix at the $k$-th layer. It is essentially a weighted feature aggregation from the node neighborhood.

Graph Attention Network (GAT) (Velickovic et al., 2018):
(2) $h_v^{(k+1)} = \sigma\big(\sum_{u \in N(v) \cup \{v\}} \alpha_{vu} W^{(k)} h_u^{(k)}\big)$, where $\alpha_{vu}$ is a self-attention score indicating the importance of node $u$ to node $v$ on feature aggregation. It is obvious that GCN can be considered as a special case of GAT when the attention score $\alpha_{vu}$ is defined by the renormalized adjacency matrix, i.e., $\alpha_{vu} = \hat{A}_{vu}$.

GraphSAGE (Hamilton et al., 2017a):
(3) $h_v^{(k+1)} = \sigma\big(W^{(k)} \cdot \text{CONCAT}\big(h_v^{(k)}, \text{AGG}(\{h_u^{(k)} : u \in N(v)\})\big)\big)$, where mean-, max- and LSTM-aggregators are presented for feature aggregation. Though an LSTM considers node neighbors as an ordered sequence, the LSTM-aggregator is adapted to unordered neighbors via random permutation.
There are some observations from these GNN variants: (i) Their feature aggregation schemes are invariant to the order of the neighbors, except for GraphSAGE with the LSTM-aggregator; (ii) The output feature at the $k$-th layer can be seen as the representation of a subtree around the seed; (iii) The node representations become closer and indistinguishable as the neural layers go deeper, because the subtrees share more common elements. However, little work theoretically discusses the reasons behind these observations to guide the design of graph neural networks: How is the node representation affected by node degree and the order of neighbors? What kind of graph convolution is required to learn the subtree structures? Inspired by the WL graph isomorphism test, we present a degree-specific graph neural network model named DEMO-Net in Section 4 to address these problems.
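As a concrete reference point for Eq. (1), here is a minimal NumPy sketch of one GCN propagation layer, assuming a dense adjacency matrix and ReLU activation (variable names are ours, not from the original implementation):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: relu(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])       # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)      # symmetric renormalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)
```

Note that the propagation matrix depends only on the graph, so the same renormalized adjacency is reused at every layer; only $H$ and $W$ change across layers.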
Additionally, the neighborhood aggregation schemes of graph neural networks, such as the mean-aggregator in GraphSAGE (Hamilton et al., 2017a) and self-attention in GAT (Velickovic et al., 2018), can be regarded as the relabeling step in the WL isomorphism test. Figure 2 provides an example to illustrate the essence of feature aggregation in graph neural networks. The node feature is actually a special representation of the subtree consisting of the seed followed by its neighbors. For example, node 1’s feature represents the subtree collected from the previous layer. As a result, graph neural networks with $k$ layers learn the representation of a subtree of depth $k$ rooted at the seed. This provides the intuition to design a graph convolution that explicitly preserves the degree-specific subtree structures.
4. Proposed Model: DEMO-Net
In this section, we propose a generic degree-specific graph neural network named DEMO-Net. Key to our algorithm is the degree-specific graph convolution for feature aggregation, which can map different subtrees to different feature vectors. Figure 4 provides an overview of the proposed DEMO-Net framework for learning node and graph representation, which will be described in detail below.
4.1. Node Representation Learning
Let $X_{N(v)}$ denote the feature set within node $v$’s neighborhood. Let $\mathcal{T}$ be the set of subtrees consisting of the features of a seed $v$ and its 1-hop neighbors $N(v)$. To formalize our analysis, we first give the definition of a structurally identical subtree below.
Definition 4.1 (Structurally Identical Subtree). Any two subtrees in $\mathcal{T}$ are structurally identical if the only possible difference between them is the order of neighbors.
The following lemma shows that graph neural networks can distinguish local graph structures as well as the WL graph isomorphism test does when the graph convolution is an injective function that maps two subtrees in $\mathcal{T}$ to different features if they are not structurally identical.
Lemma 4.2. Let $G$ be a graph and $v_1, v_2$ be two nodes in the graph. When the mapping function in graph neural networks is injective, the learned features of $v_1$ and $v_2$ will be different if and only if the WL graph isomorphism test determines that they are not structurally identical.
The feature aggregation of graph neural networks can be simply summarized as follows:
(4) $h_v^{(k+1)} = f\big(h_v^{(k)}, \{h_u^{(k)} : u \in N(v)\}\big)$
Obviously, most of the existing graph neural networks (Kipf and Welling, 2017; Velickovic et al., 2018) do not use an injective aggregation function when learning node representation. From the perspective of the WL isomorphism test, an injective graph convolution has the following properties.
Lemma 4.3 (Properties). Let $f(\cdot)$ be the aggregation function. If it is an injective function that maps any different subtrees in $\mathcal{T}$ to different feature vectors, then it has the following properties:

Seed-oriented: $f(\cdot)$ maps two subtrees to different features if the seeds’ attributes are different, i.e., $x_{v_1} \neq x_{v_2}$.

Degree-aware: $f(\cdot)$ maps two subtrees to different features if the seeds’ degree values are different, i.e., $\deg(v_1) \neq \deg(v_2)$.

Order-free: $f(\cdot)$ maps two subtrees to the same feature if the only possible difference between them is the order of neighbors.
Figure 3 lists some examples to illustrate these properties. The injective function maps the subtrees in Figure 3(a) to different features due to the distinctive seeds’ attributes. Here, we hold that the subtree’s structural properties are guided by the seed node; thus the subtrees are not structurally identical even though both share the same leaf elements. Seeds’ degree values also decide the subtree structure (shown in Figure 3(b)), because nodes with identical degree value share a similar subtree structure. Figure 3(c) shows that the neighbors’ order does not change the subtree structure.
These properties will guide us in building a structure-specific graph neural network model. Based on properties (i) and (ii), the feature aggregation function in Eq. (4) can be expressed as follows:
(5) $f\big(x_v, X_{N(v)}\big) = f_s(x_v) \oplus f_{\deg(v)}\big(X_{N(v)}\big)$
where $f_s$ and $f_{\deg(v)}$ are seed-related and degree-specific mapping functions, respectively, and $\deg(v)$ denotes the degree value of node $v$. All the nodes share one seed-oriented mapping function $f_s$, but have a degree-specific function $f_{\deg(v)}$ for compressing node neighborhoods. Here, $\oplus$ denotes the vector concatenation which combines the mapped features to form a single vector. If $f_s$ and $f_{\deg(v)}$ are injective, Eq. (5) has the first two properties in Lemma 4.3: subtrees with different seeds’ features or degree values are mapped differently. Additionally, the degree-specific mapping function $f_{\deg(v)}$ should be symmetric (a symmetric function is one whose value is the same no matter the order of its arguments, e.g., $f(x_1, x_2) = f(x_2, x_1)$ for any pair), that is, invariant to the order of neighbors. We have the following theorem (proven in the Appendix) to show the existence of the mapping functions $f_s$ and $f_{\deg(v)}$.
Theorem 4.4 (Existence Theorem). Assume the input feature space is countable. There exist mapping functions $f_s$ and $f_{\deg(v)}$ such that for any two subtrees in $\mathcal{T}$, the function defined in Eq. (5) maps them to different features if they are not structurally identical.
Next, we present our graph neural network model where the injective aggregation function is approximated by a multi-layer neural network due to its exceptional expressive power. For the seed-related mapping function $f_s$ in Eq. (5), we use a simple one-layer fully-connected neural network as follows.
(6) $f_s\big(h_v^{(k)}\big) = \sigma\big(W_s^{(k)} h_v^{(k)}\big)$
where the trainable matrix $W_s^{(k)}$ is shared by all the seeds at the $k$-th hidden layer, and $\sigma(\cdot)$ is a non-linear activation function.
For the degree-specific neighborhood aggregation $f_{\deg(v)}$, it can be formulated as a multi-task feature learning problem (shown in Figure 4(b)(c)) in which each task represents node representation learning for nodes with a specific degree value, thus preserving the degree-specific graph structure. Here, we present two schemes for this multi-task learning problem.
Degree-specific weight function: The degree-specific aggregation function can be expressed as follows:
(7) $f_{\deg(v)}\big(X_{N(v)}\big) = \sigma\Big(\big(W_{\deg(v)}^{(k)} + W_g^{(k)}\big) \sum_{u \in N(v)} h_u^{(k)}\Big)$
where $W_{\deg(v)}^{(k)}$ is a degree-specific trainable matrix at the $k$-th layer and $W_g^{(k)}$ is a global trainable matrix shared by all the seeds.
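A minimal NumPy sketch of the degree-specific convolution of Eqs. (5) and (7), assuming sum aggregation over neighbors and a dictionary of per-degree weight matrices; all names (`W_seed`, `W_global`, `W_deg`) are illustrative, not the released implementation:

```python
import numpy as np

def demo_conv(A, H, W_seed, W_global, W_deg):
    """Degree-specific convolution sketch.

    A: (n, n) adjacency matrix; H: (n, d) node features.
    W_deg: dict mapping a degree value -> its degree-specific weight matrix.
    """
    out = []
    for v in range(A.shape[0]):
        nbrs = np.nonzero(A[v])[0]
        deg = len(nbrs)
        seed_part = H[v] @ W_seed                      # seed-oriented term f_s
        agg = H[nbrs].sum(axis=0)                      # order-free sum over neighbors
        nbr_part = agg @ (W_global + W_deg[deg])       # shared + degree-specific weights
        out.append(np.concatenate([seed_part, nbr_part]))  # concatenation (Eq. 5)
    return np.maximum(np.array(out), 0.0)              # ReLU activation
```

On a star graph, the three leaves have identical degree and identical neighborhoods, so they receive identical representations, while the hub uses a different degree-specific weight.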
Hashing function: Since the number of distinct degree values on graphs could be very large, a critical challenge is how to perform multi-task learning efficiently. To address this challenge, the hash kernel (Weinberger et al., 2009) (also called feature hashing or the hashing trick) is applied to our multi-task neighborhood learning problem. Given two vectors $x$ and $x'$, the hash map $\phi$ and the corresponding kernel are defined as:
(8) $\phi_j^{(h, \xi)}(x) = \sum_{i:\, h(i) = j} \xi(i)\, x_i$
(9) $K_\phi(x, x') = \big\langle \phi^{(h, \xi)}(x), \phi^{(h, \xi)}(x') \big\rangle$
where $h$ and $\xi$ denote two hash functions such that $h: \mathbb{N} \to \{1, \dots, m\}$ and $\xi: \mathbb{N} \to \{1, -1\}$. Notice that the hash kernel is unbiased, i.e., $E_\phi[\langle \phi(x), \phi(x') \rangle] = \langle x, x' \rangle$ for any pair of input feature vectors. Let $w$ denote one of the row vectors in the trainable matrix; then we have $E_\phi[\langle \phi(w), \phi(x) \rangle] = \langle w, x \rangle$. In this way, the multi-task feature aggregation function can be expressed as:
(10) $f_{\deg(v)}\big(X_{N(v)}\big) = \sigma\Big(W^{(k)} \Big(\phi_g\Big(\sum_{u \in N(v)} h_u^{(k)}\Big) \oplus \phi_{\deg(v)}\Big(\sum_{u \in N(v)} h_u^{(k)}\Big)\Big)\Big)$
where $W^{(k)}$ is the trainable matrix shared by all the nodes, and $\phi_g$ and $\phi_{\deg(v)}$ are global and degree-specific hash maps, respectively.
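The hashing trick of Eqs. (8) and (9) can be sketched as follows; as an illustrative simplification, the two hash functions $h$ and $\xi$ are simulated here with a seeded random generator rather than true hashes of the feature indices:

```python
import numpy as np

def feature_hash(x, out_dim, seed=0):
    """Hashing trick: phi_j(x) = sum over {i : h(i) = j} of xi(i) * x_i."""
    rng = np.random.default_rng(seed)
    h = rng.integers(0, out_dim, size=x.shape[0])    # bucket hash h(i)
    xi = rng.choice([-1.0, 1.0], size=x.shape[0])    # sign hash xi(i)
    phi = np.zeros(out_dim)
    np.add.at(phi, h, xi * x)                        # signed accumulation per bucket
    return phi
```

Since $h$ and $\xi$ are fixed for a given seed, the map is linear: $\phi(x + x') = \phi(x) + \phi(x')$, which is what lets the hashed weights and hashed features interact as in Eq. (9).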
One common assumption in multi-task learning is that all the tasks are related through some shared knowledge, while each also has its own task-specific knowledge. As shown in Figure 3(b), two subtrees are structurally different, but they share some common leaves for neighborhood aggregation. By adopting both common (global) and task-specific (local) weight/hash functions, the model learns the shared substructures and degree-specific neighborhood structures simultaneously.
There might be many different node degrees in real networks. One intuitive idea is to partition the degree values into several buckets to reduce the number of tasks. This heuristic might also improve the model’s robustness to noisy graph structures or noisy labeled nodes introduced by human annotation (Zhou and He, 2016; Zhou et al., 2017a). We leave this as future work because the hashing kernel (Weinberger et al., 2009) used in DEMO-Net is efficient enough to tackle the large-scale multi-task learning problem.
4.2. Graph Representation Learning
The goal of graph representation learning is to use a compact feature vector to represent the entire graph. To this end, we provide a degree-specific graph-level pooling scheme.
As graph neural networks go deeper, the node representation captures higher-order topological information within its local neighborhood. By mapping the original graph to a sequence of graphs $G^{(0)}, G^{(1)}, \dots, G^{(K)}$, where $G^{(0)}$ denotes the original graph and $G^{(k)}$ represents the graph after the $k$-th layer of feature aggregation (as shown in Figure 4(e)(g)), the graph representation can be expressed as follows:
(11) $h_{G^{(k)}} = \bigoplus_{d \in \text{degree}(G)} \sum_{v \in V} \mathbb{1}\big(\deg(v), d\big)\, h_v^{(k)}$
where $\text{degree}(G)$ denotes the set of degree values in graph $G$, and $\mathbb{1}(\cdot, \cdot)$ is 1 when its two arguments are equal and 0 otherwise.
As discussed before, the node representation in $G^{(k)}$ captures the topological information within the $k$-hop neighborhood. In order to consider all the subtrees’ information, we concatenate the representations from all graphs $G^{(0)}, \dots, G^{(K)}$:
(12) $h_G = h_{G^{(0)}} \oplus h_{G^{(1)}} \oplus \cdots \oplus h_{G^{(K)}}$
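A minimal sketch of the degree-specific readout of Eq. (11) for a single layer; applying it to every layer and concatenating the results gives Eq. (12). Names are illustrative:

```python
import numpy as np

def degree_specific_pool(H, degrees, degree_set):
    """Sum node representations separately per degree value, then concatenate.

    H: (n, d) node representations; degrees: (n,) array of node degrees;
    degree_set: ordered list of degree values to pool over.
    """
    parts = []
    for deg in degree_set:
        mask = (degrees == deg)
        # Sum over nodes of this degree; zero vector if the degree is absent.
        parts.append(H[mask].sum(axis=0) if mask.any() else np.zeros(H.shape[1]))
    return np.concatenate(parts)
```

The output length is |degree_set| times the node feature dimension, so graphs are compared degree-by-degree rather than through a single global sum.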
Next, we compare the degree-specific pooling scheme with existing graph-level pooling methods (Atwood and Towsley, 2016; Xu et al., 2019) and the Weisfeiler-Lehman (WL) subtree kernel (Shervashidze et al., 2011). We define a degree-specific WL kernel:
(13) $K_{DWL}(G, G') = \sum_{d} \big\langle \Psi_d(G), \Psi_d(G') \big\rangle$
The corresponding mapping function is defined as:
(14) $\Psi(G) = \bigoplus_{d \in \text{degree}(G)} \Psi_d(G)$
where $\deg(v)$ is the degree of $v$, and
(15) $\Psi_d(G) = \sum_{k=0}^{K} \sum_{v \in V} \mathbb{1}\big(\deg(v), d\big)\, \psi\big(h_v^{(k)}\big)$
As shown in (Lei et al., 2017), the non-linear activation function $\sigma$ admits a mapping function $\psi$ such that $\sigma(\langle w, x \rangle) = \langle \psi(w), \varphi(x) \rangle$ for some mapping $\varphi$ constructed from $\psi$. By the following theorem (proven in the Appendix), we show that our graph representation lies in a degree-specific Hilbert kernel space.
Theorem 4.5. For a degree-specific Weisfeiler-Lehman kernel, the graph representation in Eq. (12) belongs to the Reproducing Kernel Hilbert Space (RKHS) of the kernel $K$ where
(16) $K(G, G') = \big\langle \Psi(G), \Psi(G') \big\rangle = K_{DWL}(G, G')$
The sum/mean based graph-level pooling approaches make the learned graph representation lie in the RKHS of the following kernel:
(17) $K_{sum}(G, G') = \Big\langle \sum_{k=0}^{K} \sum_{v \in V} \psi\big(h_v^{(k)}\big),\ \sum_{k=0}^{K} \sum_{v' \in V'} \psi\big(h_{v'}^{(k)}\big) \Big\rangle$
And the WL subtree kernel (Shervashidze et al., 2011) can be expressed as:
(18) $K_{WL}(G, G') = \big\langle \phi_{WL}(G), \phi_{WL}(G') \big\rangle$
where $\phi_{WL}(G)$ counts the occurrences of each discrete node label in $G$ across all WL iterations.
It is easy to see that: (1) the WL subtree kernel cannot be applied to measure graph similarity when nodes have continuous attribute vectors; (2) comparing Eq. (13) with Eq. (17), our graph-level representation lies in a degree-specific kernel space, thus explicitly preserving the degree-specific graph structure.
4.3. Discussion
We compare the proposed DEMO-Net with some existing graph neural networks regarding the properties of graph convolution.
Lemma 4.3 shows that an injective aggregation function has three properties: seed-oriented, degree-aware, order-free. We summarize the properties of the graph convolution of GCN (Kipf and Welling, 2017), GAT (Velickovic et al., 2018), GraphSAGE (Hamilton et al., 2017a), and DCNN (Atwood and Towsley, 2016) in Table 2. It can be seen that: (1) The existing graph neural networks do not have all three properties. More importantly, none of them capture the degree-specific graph structures. (2) GraphSAGE is order-free when using the mean or max aggregator, but GraphSAGE with the LSTM-aggregator is not, because it considers the node neighborhood as an ordered sequence. (3) Our proposed DEMO-Net considers all the properties, and the degree-aware property in particular allows our model to explicitly preserve the neighborhood structures for node and graph representation learning. In addition, the time complexity of the graph convolution of DEMO-Net is linear with respect to the number of nodes and edges.
5. Experimental Results
In this section, we present the experimental results on real networks. In particular, we focus on answering the following questions:
Q1: Is the proposed DEMO-Net algorithm effective on node classification compared to the state-of-the-art graph neural networks?
Q2: How does the proposed DEMO-Net perform on identifying graph structure compared to structure-aware embedding approaches?
Q3: How does the proposed DEMO-Net with degree-specific graph-level pooling perform on the graph classification task?
Q4: Is the proposed degree-specific graph convolution of DEMO-Net efficient at learning node representation?
5.1. Experiment Setup
Data Sets: We use seven node classification data sets, including four social networks and three air-traffic networks. Facebook, Wiki-Vote (Leskovec and Krevl, 2014), BlogCatalog and Flickr (http://people.tamu.edu/~xhuang/Code.html) are social networks. The posted keywords or tags in the BlogCatalog and Flickr networks are used as node attribute information. There are three air-traffic networks (Ribeiro et al., 2017): Brazil, Europe and USA, where each node corresponds to an airport and each edge indicates the existence of commercial flights between the airports. Their class labels are assigned based on the level of activity measured by flights or people passing through the airports. Data statistics are summarized in Table 3. For those networks without node attributes, we use the one-hot encoding of node degrees. In BlogCatalog, Flickr and the air-traffic networks, node class labels are available. In Facebook and Wiki-Vote, we use degree-induced class labels by labeling each node according to its degree value.
In addition, we use four bioinformatics networks to evaluate the model performance on graph classification, including MUTAG, PTC, PROTEINS and ENZYMES (https://ls11-www.cs.tu-dortmund.de/staff/morris/graphkerneldatasets), where the nodes are associated with categorical input features. The detailed statistics for these bioinformatics networks are summarized in Table 4.
Model Configuration: We adopt two hidden layers followed by the softmax activation layer in DEMO-Net, where the proposed multi-task feature learning schemes in Eq. (7) and (10) are applied to each hidden layer for neighborhood aggregation (termed DEMO-Net(weight) and DEMO-Net(hash), respectively). In addition, we apply the Adam optimizer (Kingma and Ba, 2014) with learning rate 0.005 on the cross-entropy loss to train our models. To prevent our models from overfitting, we adopt dropout (Srivastava et al., 2014) and $L_2$ regularization. The hidden layer size is set to 64 neural units. An early stopping strategy with a patience of 100 epochs on the validation set is applied in our experiments.
Baseline Methods: The baseline methods used in our experiments are given below: (1) node-level graph neural networks: GCN (Kipf and Welling, 2017), GCN_cheby (Kipf and Welling, 2017), GraphSAGE (mean aggregator) (Hamilton et al., 2017a), Union (Li et al., 2018), Intersection (Li et al., 2018) and GAT (Velickovic et al., 2018); (2) node-level structure-aware embedding approaches: RolX (Henderson et al., 2012), struc2vec (Ribeiro et al., 2017) and GraphWave (Donnat et al., 2018); (3) graph-level graph neural networks: DCNN (Atwood and Towsley, 2016), PATCHY-SAN (Niepert et al., 2016) and DIFFPOOL (Ying et al., 2018); (4) deep graph kernel: DeepWL (Yanardag and Vishwanathan, 2015). In our experiments, all the baseline models use the default hyperparameters suggested in the original papers.
All our experiments are performed on a Windows machine with four 3.60 GHz Intel cores and 32 GB RAM. The source code will be available at https://github.com/jwu4sml/DEMO-Net.
Table 3. Statistics of the node classification data sets.
Data sets | # nodes | # edges | # classes | # attributes
Facebook | 4039 | 88234 | 4 | –
Wiki-Vote | 7115 | 103689 | 4 | –
BlogCatalog | 5196 | 171743 | 6 | 8189
Flickr | 7575 | 239738 | 9 | 12047
Brazil | 131 | 1038 | 4 | –
Europe | 399 | 5995 | 4 | –
USA | 1190 | 13599 | 4 | –
Table 4. Statistics of the graph classification data sets.
Data sets | # graphs | # classes | Avg. # nodes | # attributes
MUTAG | 188 | 2 | 17.9 | 7
PTC | 344 | 2 | 25.5 | 19
PROTEINS | 1113 | 2 | 39.1 | 3
ENZYMES | 600 | 6 | 32.6 | 3
5.2. Node Classification
For a fair comparison of different architectures (Shchur et al., 2018), we use different train/validation/test splits of the networks for node classification. For the social networks, we randomly choose 10% and 20% of the graph nodes as the training and validation sets, respectively, and use the rest as the test set. For the air-traffic networks, the training, validation and test sets are randomly assigned an equal number of nodes. We run each experiment 10 times and report the mean accuracy with the standard deviation. Table 5 reports the classification results on the real networks, where the best results are indicated in bold. It can be observed that the proposed DEMO-Net models significantly outperform the other graph neural networks (answering Q1). In particular, our DEMO-Net models achieve at least 10% higher mean accuracy than the baseline methods. One explanation is that the baseline methods focus on preserving node proximity by roughly mixing a node with its neighbors, whereas our proposed DEMO-Net models capture the degree-specific structure to distinguish the structural roles of nodes in the networks.
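The split protocol above can be sketched as a small helper (hypothetical, not from the released code; fractions are parameters, so the equal-thirds air-traffic split corresponds to train_frac=1/3, val_frac=1/3):

```python
import random

def split_nodes(n_nodes, train_frac=0.1, val_frac=0.2, seed=0):
    """Random train/validation/test split over node ids, mirroring the
    10%/20%/70% social-network protocol described above."""
    rng = random.Random(seed)  # fixed seed so repeated runs are comparable
    ids = list(range(n_nodes))
    rng.shuffle(ids)
    n_train = int(train_frac * n_nodes)
    n_val = int(val_frac * n_nodes)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]
```

Repeating with different seeds yields the 10 random splits over which mean accuracy and standard deviation are reported.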
We also evaluate our models against three structure-aware embedding approaches: RolX, struc2vec and GraphWave, all of which are unsupervised embedding approaches that identify the structural roles of nodes in the networks. Following (Ribeiro et al., 2017), we use one-vs-rest logistic regression with L2 regularization to train a classifier on the node representations learned by the baseline methods. Here we consider different train-test splits where the percentage of training nodes ranges from 10% to 90% and the rest is used for testing. The experimental results on the Brazil and USA air-traffic networks are provided in Figure 5. We observe that our proposed DEMO-Net models outperform the comparison methods across all the data sets (answering Q2). Moreover, the structural roles identified by those baselines only represent the local graph structure without considering node attributes. In contrast, our DEMO-Net models capture both topological information and node attributes when learning node representations.
Table 5. Node classification results (mean accuracy ± std) on the social networks (Facebook, Wiki-Vote, BlogCatalog, Flickr) and air-traffic networks (Brazil, Europe, USA).
Methods | Facebook | Wiki-Vote | BlogCatalog | Flickr | Brazil | Europe | USA
GraphSAGE (Hamilton et al., 2017a) | 0.389 ±0.019 | 0.245 ±0.000 | 0.828 ±0.007 | 0.641 ±0.006 | 0.404 ±0.035 | 0.272 ±0.022 | 0.316 ±0.022
GCN (Kipf and Welling, 2017) | 0.575 ±0.013 | 0.329 ±0.029 | 0.720 ±0.013 | 0.546 ±0.019 | 0.432 ±0.064 | 0.371 ±0.046 | 0.432 ±0.022
GCN_cheby (Kipf and Welling, 2017) | 0.646 ±0.012 | 0.495 ±0.016 | 0.686 ±0.037 | 0.479 ±0.023 | 0.516 ±0.070 | 0.460 ±0.038 | 0.526 ±0.045
Union (Li et al., 2018) | 0.600 ±0.000 | 0.463 ±0.000 | 0.730 ±0.000 | 0.566 ±0.000 | 0.466 ±0.006 | 0.418 ±0.002 | 0.582 ±0.000
Intersection (Li et al., 2018) | 0.598 ±0.000 | 0.462 ±0.000 | 0.725 ±0.000 | 0.557 ±0.000 | 0.459 ±0.003 | 0.443 ±0.002 | 0.573 ±0.000
GAT (Velickovic et al., 2018) | 0.570 ±0.036 | 0.594 ±0.070 | 0.663 ±0.000 | 0.359 ±0.000 | 0.382 ±0.126 | 0.424 ±0.073 | 0.585 ±0.021
DEMO-Net(hash) | 0.887 ±0.020 | 0.997 ±0.000 | 0.849 ±0.006 | 0.678 ±0.010 | 0.614 ±0.069 | 0.479 ±0.064 | 0.659 ±0.020
DEMO-Net(weight) | 0.919 ±0.003 | 0.998 ±0.000 | 0.849 ±0.000 | 0.656 ±0.000 | 0.543 ±0.034 | 0.459 ±0.025 | 0.647 ±0.021
5.3. Graph Classification
We use four public graph classification benchmarks to evaluate the proposed DEMO-Net models with the degree-specific graph-level pooling scheme. DCNN (Atwood and Towsley, 2016), PATCHY-SAN (Niepert et al., 2016) and DIFFPOOL (Ying et al., 2018) adopt end-to-end training architectures for supervised graph classification. For the unsupervised graph kernel method DeepWL (Yanardag and Vishwanathan, 2015), we use one-vs-rest logistic regression with L2 regularization to train a supervised classifier for graph classification. We also consider two model variants (denoted DEMO-Net_m(hash) and DEMO-Net_m(weight), respectively) that replace the proposed degree-specific graph-level pooling with the mean-pooling scheme (Atwood and Towsley, 2016). The input graphs are randomly assigned to the training, validation and test sets, where each set has the same number of graphs.
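The contrast between the two readouts can be sketched as follows (our simplification: plain sums instead of the paper's learned pooling, so this only illustrates why grouping nodes by degree is more discriminative than averaging them all together):

```python
def mean_pool(node_feats):
    """Baseline readout: average all node representations."""
    n, d = len(node_feats), len(node_feats[0])
    return [sum(f[i] for f in node_feats) / n for i in range(d)]

def degree_specific_pool(node_feats, degrees, degree_values):
    """Sketch of a degree-specific readout: sum node representations
    separately for each degree value and concatenate the per-degree
    summaries, so graphs with the same overall feature mix but
    different degree profiles map to different vectors."""
    d = len(node_feats[0])
    pooled = []
    for k in degree_values:
        members = [f for f, deg in zip(node_feats, degrees) if deg == k]
        if members:
            pooled.extend(sum(f[i] for f in members) for i in range(d))
        else:
            pooled.extend([0.0] * d)  # keep a fixed output length per graph
    return pooled
```

Two graphs whose mean-pooled vectors coincide can still receive distinct degree-specific readouts whenever their nodes' features are distributed differently across degree values.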
The graph classification results are shown in Table 6, where the best results are indicated in bold. We observe that (1) compared to the existing mean-pooling method, the proposed degree-specific pooling method improves the model performance in most cases, which is consistent with our analysis in Section 4.2; and (2) the classification results of our DEMO-Net models are comparable to those of the other graph neural networks and the graph kernel method (answering Q3). Moreover, on the MUTAG and ENZYMES data sets, our proposed DEMO-Net(weight) outperforms all the baseline methods. One explanation might be that the graph representation generated by DEMO-Net explicitly preserves the degree-specific graph structure information.
Table 6. Graph classification accuracy.
Methods | MUTAG | PTC | PROTEINS | ENZYMES
DeepWL (Yanardag and Vishwanathan, 2015) | 0.733 | 0.537 | 0.680 | 0.210
DCNN (Atwood and Towsley, 2016) | 0.670 | 0.572 | 0.579 | 0.160
PATCHY-SAN (Niepert et al., 2016) | 0.795 | 0.568 | 0.714 | 0.170
DIFFPOOL (Ying et al., 2018) | 0.663 | 0.251 | 0.733 | 0.184
DEMO-Net_m(hash) | 0.760 | 0.586 | 0.617 | 0.236
DEMO-Net_m(weight) | 0.798 | 0.550 | 0.616 | 0.251
DEMO-Net(hash) | 0.771 | 0.563 | 0.705 | 0.251
DEMO-Net(weight) | 0.814 | 0.572 | 0.708 | 0.272
5.4. Efficiency Analysis
The time complexity of each layer in our proposed DEMO-Net(hash) model depends linearly on the number of nodes and edges in the graph, on the dimensionalities of the input and output features at each layer, on the number of tasks (distinct degree values) in the graph, and on the hashing dimension. Observing that the number of distinct degree values is much smaller than the number of nodes in real networks, the resulting time complexity is on par with the GCN and GAT models. Similarly, the time complexity of each layer in DEMO-Net(weight) also scales linearly with respect to the number of nodes and edges when the number of distinct degree values and the feature dimensionalities are small relative to the graph size.
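The hashing variant keeps this cost low by sharing one output space across all degree-specific tasks. As a rough illustration (a generic signed feature-hashing sketch in the spirit of Weinberger et al. (2009), not the paper's exact hash functions), each (degree, feature-index) pair is hashed into a shared d_h-dimensional space so that nodes of different degrees land in mostly different buckets, instead of keeping one full weight matrix per degree value:

```python
import hashlib

def hashed_degree_features(feature, degree, d_h):
    """Project a node feature into a shared d_h-dimensional space,
    keying the hash on (degree, index) so the projection is
    degree-specific without per-degree parameters.  A signed hash
    (random +/-1 per key) reduces collision bias, as in the
    feature-hashing literature."""
    out = [0.0] * d_h
    for i, x in enumerate(feature):
        h = int(hashlib.md5(f"{degree}:{i}".encode()).hexdigest(), 16)
        bucket = h % d_h          # which shared coordinate this entry maps to
        sign = 1.0 if (h >> 1) % 2 == 0 else -1.0
        out[bucket] += sign * x
    return out
```

The per-node cost is linear in the input feature size regardless of how many distinct degree values the graph contains, which is the source of the linear scaling claimed above.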
Following (Kipf and Welling, 2017), we report the running time (measured in seconds of wall-clock time) per epoch (including the forward pass, cross-entropy calculation and backward pass) on synthetic networks whose edges are assigned uniformly at random. As shown in Figure 6, we observe that (answering Q4) (1) the wall-clock time of our proposed DEMO-Net model is linear with respect to the number of nodes; and (2) our models are much more efficient than GAT on the node classification task.
6. Conclusions
In this paper, we focus on building a degree-specific graph neural network for both node and graph classification. We start by analyzing the limitations of the existing graph neural networks from the perspective of the Weisfeiler-Lehman graph isomorphism test, and argue that graph convolution should have three properties: seed-oriented, degree-aware and order-free. To this end, we propose a generic graph neural network model named DEMO-Net which formulates feature aggregation as a multi-task learning problem according to nodes' degree values. In addition, we present a novel graph-level pooling method for learning graph representations provably lying in a degree-specific Hilbert kernel space. Extensive experiments on real networks demonstrate the effectiveness of our DEMO-Net algorithm.
Acknowledgements.
This work is supported by the United States Air Force and DARPA under contract number FA8750-17-C-0153, National Science Foundation under Grant No. IIS-1552654, Grant No. IIS-1813464 and Grant No. CNS-1629888, the U.S. Department of Homeland Security under Grant Award Number 17STQAC00001-02-00, and an IBM Faculty Award. The views and conclusions are those of the authors and should not be interpreted as representing the official policies of the funding agencies or the government.
References
 Atwood and Towsley (2016) James Atwood and Don Towsley. 2016. Diffusion-convolutional neural networks. In NIPS.
 Defferrard et al. (2016) Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. In NIPS.
 Donnat et al. (2018) Claire Donnat, Marinka Zitnik, David Hallac, and Jure Leskovec. 2018. Learning structural node embeddings via diffusion wavelets. In SIGKDD.
 Gilmer et al. (2017) Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. 2017. Neural message passing for quantum chemistry. In ICML.
 Hamilton et al. (2017a) Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017a. Inductive representation learning on large graphs. In NIPS.
 Hamilton et al. (2017b) William L Hamilton, Rex Ying, and Jure Leskovec. 2017b. Representation learning on graphs: Methods and applications. arXiv preprint arXiv:1709.05584 (2017).
 Henderson et al. (2012) Keith Henderson, Brian Gallagher, Tina Eliassi-Rad, Hanghang Tong, Sugato Basu, Leman Akoglu, Danai Koutra, Christos Faloutsos, and Lei Li. 2012. RolX: structural role extraction & mining in large graphs. In SIGKDD.
 Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014).
 Kipf and Welling (2017) Thomas N Kipf and Max Welling. 2017. Semi-supervised classification with graph convolutional networks. In ICLR.
 Lei et al. (2017) Tao Lei, Wengong Jin, Regina Barzilay, and Tommi Jaakkola. 2017. Deriving neural architectures from sequence and graph kernels. In ICML.
 Leskovec and Krevl (2014) Jure Leskovec and Andrej Krevl. 2014. SNAP Datasets: Stanford Large Network Dataset Collection. http://snap.stanford.edu/data.
 Li et al. (2018) Qimai Li, Zhichao Han, and XiaoMing Wu. 2018. Deeper insights into graph convolutional networks for semisupervised learning. In AAAI.
 Niepert et al. (2016) Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. 2016. Learning convolutional neural networks for graphs. In ICML. 2014–2023.
 Ribeiro et al. (2017) Leonardo FR Ribeiro, Pedro HP Saverese, and Daniel R Figueiredo. 2017. struc2vec: Learning node representations from structural identity. In SIGKDD.
 Shchur et al. (2018) Oleksandr Shchur, Maximilian Mumme, Aleksandar Bojchevski, and Stephan Günnemann. 2018. Pitfalls of graph neural network evaluation. In NIPS.
 Shervashidze et al. (2011) Nino Shervashidze, Pascal Schweitzer, Erik Jan van Leeuwen, Kurt Mehlhorn, and Karsten M Borgwardt. 2011. Weisfeiler-Lehman graph kernels. Journal of Machine Learning Research (2011), 2539–2561.
 Srivastava et al. (2014) Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15, 1 (2014), 1929–1958.
 Velickovic et al. (2018) Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2018. Graph attention networks. In ICLR.
 Weinberger et al. (2009) Kilian Weinberger, Anirban Dasgupta, Josh Attenberg, John Langford, and Alex Smola. 2009. Feature hashing for large scale multitask learning. In ICML.
 Weisfeiler and Lehman (1968) Boris Weisfeiler and AA Lehman. 1968. A reduction of a graph to a canonical form and an algebra arising during this reduction. Nauchno-Technicheskaya Informatsia 2, 9 (1968), 12–16.
 Wu et al. (2018) Mengmeng Wu, Wanwen Zeng, Wenqiang Liu, Hairong Lv, Ting Chen, and Rui Jiang. 2018. Leveraging multiple gene networks to prioritize GWAS candidate genes via network representation learning. Methods (2018).
 Xu et al. (2019) Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. 2019. How powerful are graph neural networks?. In ICLR.
 Yanardag and Vishwanathan (2015) Pinar Yanardag and SVN Vishwanathan. 2015. Deep graph kernels. In SIGKDD.
 Ying et al. (2018) Zhitao Ying, Jiaxuan You, Christopher Morris, Xiang Ren, Will Hamilton, and Jure Leskovec. 2018. Hierarchical graph representation learning with differentiable pooling. In NIPS.
 Zhang et al. (2018a) Muhan Zhang, Zhicheng Cui, Marion Neumann, and Yixin Chen. 2018a. An endtoend deep learning architecture for graph classification. In AAAI.
 Zhang et al. (2018b) Si Zhang, Hanghang Tong, Jiejun Xu, and Ross Maciejewski. 2018b. Graph convolutional networks: Algorithms, applications and open challenges. In International Conference on Computational Social Networks. Springer, 79–91.
 Zhou et al. (2018) Dawei Zhou, Jingrui He, Hongxia Yang, and Wei Fan. 2018. SPARC: Self-paced network representation for few-shot rare category characterization. In SIGKDD.
 Zhou et al. (2017b) Dawei Zhou, Si Zhang, Mehmet Yigit Yildirim, Scott Alcorn, Hanghang Tong, Hasan Davulcu, and Jingrui He. 2017b. A local algorithm for structure-preserving graph cut. In SIGKDD. 655–664.
 Zhou and He (2016) Yao Zhou and Jingrui He. 2016. Crowdsourcing via Tensor Augmentation and Completion.. In IJCAI. 2435–2441.
 Zhou et al. (2017a) Yao Zhou, Lei Ying, and Jingrui He. 2017a. MultiC: an optimization framework for learning from task and worker dual heterogeneity. In SDM. 579–587.
Appendix A Appendix for Reproducibility
To better reproduce the experimental results, we provide additional details about the algorithms.
Proof of Theorem 4.4. Theorem 4.4 states that there exist mapping functions such that, for any two subtrees, the function defined in Eq. (5) maps them to different feature vectors whenever they are not structurally identical.
Proof.
Consider the seed set and the maximum degree value plus one. Because the seed set is countable, there exists an injective function that maps each subtree to a unique natural number. It can be observed that the set of subtrees can be partitioned into disjoint subsets, one for each degree value.
There exists an injective function that maps each seed to a unique natural number. Consider the neighbor set consisting of a seed's neighbors with a given degree value. Because the seed set is countable, all of these subsets are countable, so there exists an injective, symmetric function that maps each such element to a unique real number. Moreover, there is a function that maps each subtree to a unique feature vector. Note that structurally identical subtrees are treated as the same subtree when the degree-specific function is symmetric.
It is then easy to construct an injective composition of these functions. By the properties of injective functions, the composition maps any two subtrees to different feature vectors whenever they are not structurally identical, which completes the proof. ∎
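The composition step relies on a standard fact, which can be written out explicitly (notation ours, not the paper's): pairing two injections into the naturals yields an injection on pairs.

```latex
% Sketch: if g : A -> N and h : N x N -> N are injective, then
% f(a, b) = h(g(a), g(b)) is injective on A x A:
f(a_1, b_1) = f(a_2, b_2)
  \;\Rightarrow\; h\bigl(g(a_1), g(b_1)\bigr) = h\bigl(g(a_2), g(b_2)\bigr)
  \;\Rightarrow\; \bigl(g(a_1), g(b_1)\bigr) = \bigl(g(a_2), g(b_2)\bigr)
  \;\Rightarrow\; a_1 = a_2 \ \text{and}\ b_1 = b_2 .
% A concrete injective pairing is the Cantor pairing function:
h(m, n) = \tfrac{(m + n)(m + n + 1)}{2} + n .
```

Applying this pairing to the per-degree injections above gives the overall injective map from subtrees to feature vectors used in the proof.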
Proof of Theorem 4.5. Theorem 4.5 states that the graph representation learned in Eq. (12) belongs to the Reproducing Kernel Hilbert Space (RKHS) of the corresponding degree-specific kernel.
Proof.
Let denote the feature vector of graph for nodes with degree value . Let denote the element of . Our graph convolution (feature aggregation) function can be written as:
(19) 
where the degree-specific parameters are given by the degree-specific weight matrix in Eq. (7). Because we use the concatenation operator to combine the learned features of the seed and its neighborhood, each output coordinate comes from either the seed's own feature or the aggregated neighborhood feature, but not both.
Consider one row of the weight matrix. To show our result, we construct a regular “reference graph” which has the same nodes as the input graph, where each node is associated with the same feature vector. Then, when the coordinate lies in the seed's feature, we have:
(20)  
Lemma 1 in (Lei et al., 2017) states that, for such activation functions, there exist kernel functions and underlying mappings such that the activation output can be expressed through a mapping constructed from them. Therefore, we have:
(21) 
where the overall mapping is the composition of the two mappings above, and the “reference graph” is constructed from the model parameters and the activation function.
Similarly, consider a row of the weight matrix associated with the neighborhood part. We construct a regular “reference graph” which has the same nodes as the input graph, where each node is associated with the same feature vector. When the coordinate lies in the neighborhood's feature, we have:
(22)  
Note that in this case, node features are assumed to be the sum of neighborhood features. Moreover, it can be written as:
(23) 
where this second “reference graph” is likewise constructed from the model parameters and the activation function. Therefore, the graph representation belongs to the RKHS of the corresponding kernel, which completes the proof. ∎