Message Passing Graph Kernels

Giannis Nikolentzos
École Polytechnique
nikolentzos@lix.polytechnique.fr
&Michalis Vazirgiannis
École Polytechnique
mvazirg@lix.polytechnique.fr
Abstract

Graph kernels have recently emerged as a promising approach for tackling the graph similarity and learning tasks at the same time. In this paper, we propose a general framework for designing graph kernels. The proposed framework capitalizes on the well-known message passing scheme on graphs. The kernels derived from the framework consist of two components. The first component is a kernel between vertices, while the second component is a kernel between graphs. The main idea behind the proposed framework is that the representations of the vertices are implicitly updated using an iterative procedure. Then, these representations serve as the building blocks of a kernel that compares pairs of graphs. We derive four instances of the proposed framework, and show through extensive experiments that these instances are competitive with state-of-the-art methods in various tasks.

Preprint. Work in progress.

1 Introduction

Graph-structured data arises naturally in many domains ranging from bioinformatics and social networks to cybersecurity. A key issue in many applications is to perform machine learning tasks on this type of data. In the past years, the problem of graph classification has found applications in several fields such as chemoinformatics ralaivola2005graph, malware detection gascon2013structural, and text categorization nikolentzos2017shortest. For instance, in chemoinformatics, molecules are commonly represented as graphs where vertices correspond to atoms and edges to chemical bonds between them. The task is then to predict the class label of each graph (e.g., its anti-cancer activity).

Graph kernels have recently evolved into the dominant approach for learning on graph-structured data. A graph kernel is a positive semidefinite function defined on the space of graphs $\mathcal{G}$. This function corresponds to an inner product in some Hilbert space. Given a kernel $k$, there exists a map $\phi : \mathcal{G} \rightarrow \mathcal{H}$ into a Hilbert space $\mathcal{H}$ such that $k(G_1, G_2) = \langle \phi(G_1), \phi(G_2) \rangle_{\mathcal{H}}$ for all $G_1, G_2 \in \mathcal{G}$. One of the major advantages of graph kernels is that they allow kernel methods such as the Support Vector Machine (SVM) to work directly on graphs. Research in graph kernels has achieved remarkable progress in the past years. However, graph kernels have been applied mainly to graphs that are either unlabeled or contain discrete node labels. For such graphs, there exist several highly scalable graph kernels which can handle graphs with thousands of vertices (e.g., the Weisfeiler-Lehman subtree kernel shervashidze2011weisfeiler). However, graphs that emerge from several real settings typically contain multi-dimensional vertex attributes (a.k.a. features). Such graphs appear in computer vision (harchaoui2007image) and in bioinformatics (borgwardt2005protein), among others. For instance, in computer vision, attributes may represent the RGB values of colors, while in bioinformatics, they may represent physical properties of protein secondary structure elements. When continuous node labels are available, taking them into account usually leads to significant performance improvements. Designing graph kernels for such graphs is, however, a much less well studied problem which only recently started to gain attention (feragen2013scalable; neumann2016propagation; orsini2015graph; kriege2012subgraph; morris2016faster). Unfortunately, most of the proposed approaches do not scale even to relatively small datasets consisting of graphs with tens of vertices. An open challenge is thus to develop scalable kernels for graphs with continuous-valued multi-dimensional vertex attributes.

Very recently, research in machine learning on graphs has shifted towards neural network architectures. Several neural network models have been generalized to work on graph-structured data niepert2016learning; lee2017deep; zhang2018end; li2015gated; kipf2016semi; battaglia2016interaction. In contrast to graph kernels, these networks can efficiently take into account continuous node attributes. Several of these networks fall under the general class of message passing neural networks gilmer2017neural. The main idea behind these methods is that each vertex receives messages from its neighbors and utilizes these messages to update its representation. This is a well-established idea which has been widely applied in graph mining. In fact, it even constitutes the key underlying principle of some graph kernels (e.g., the Weisfeiler-Lehman kernel shervashidze2011weisfeiler and the propagation kernel neumann2016propagation).

In this paper, we present a new framework for designing graph kernels, called Message Passing Graph Kernels (MPGK). The proposed framework consists of two components: (i) a kernel that compares vertices, and (ii) a kernel that compares graphs. The first component is computed for a number of iterations. At each iteration, the representation of each vertex is updated implicitly based on its own representation and the representations of its neighbors. Although this idea has already been explored in the past, we provide a radically different formulation. The proposed framework is very general, and the whole computation is performed in the kernel space. In contrast to previous approaches, the proposed framework is capable of handling any type of graph, while it can more effectively capture similarities between rooted subgraphs. In contrast to message passing neural network architectures, the proposed framework uses more sophisticated functions for updating vertex representations. One of the drawbacks of the proposed framework is that its instances suffer from high computational complexity. Since efficient computation is central for the applicability of the framework to real-world datasets, we propose an approximation method which reduces the number of evaluations of the kernel between vertices and allows the framework to scale to large datasets.

The rest of this paper is organized as follows. Section 2 introduces some preliminary concepts of message passing approaches. Section 3 presents the proposed framework for designing message passing graph kernels. Section 4 evaluates the proposed framework in several tasks and compares it with existing methods. Finally, Section 5 concludes.

2 Preliminaries

Several approaches that deal with the problem of learning on graph-structured data generate fixed-dimensional vector representations for small subgraphs extracted from the input graphs. For instance, many recent neural networks for graphs collect each node's $k$-hop neighborhood and then generate representations for it. Since there is no correspondence between the neighbors of different vertices, one either has to impose an order on the neighboring vertices or to employ a permutation invariant function. To impose an order on the vertices, it is common to apply labeling procedures (e.g., degree, eigenvector centrality, etc.) niepert2016learning. On the other hand, message passing neural networks summarize the neighborhood of a vertex using permutation invariant functions gilmer2017neural. Given a set $X$, such functions take as input elements of the power set $2^X$ and produce an output that is independent of the ordering of the elements of the input. More formally,

Definition 1.

A function $f$ acting on sets is permutation invariant to the order of the objects in the set if for any permutation $\pi$ it holds that $f(\{x_1, \ldots, x_M\}) = f(\{x_{\pi(1)}, \ldots, x_{\pi(M)}\})$.
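As a minimal illustration (our own sketch with hypothetical aggregators, not code from the paper), the following snippet contrasts a permutation invariant aggregation (the sum) with one that is not (concatenation):

```python
import numpy as np

def sum_aggregate(messages):
    # Permutation invariant: the sum is indifferent to the ordering.
    return np.sum(messages, axis=0)

def concat_aggregate(messages):
    # NOT permutation invariant: the output depends on the ordering.
    return np.concatenate(messages)

msgs = [np.array([1.0, 0.0]), np.array([0.0, 2.0]), np.array([3.0, 1.0])]
perm = [msgs[2], msgs[0], msgs[1]]  # the same multiset, reordered

print(np.allclose(sum_aggregate(msgs), sum_aggregate(perm)))        # True
print(np.allclose(concat_aggregate(msgs), concat_aggregate(perm)))  # False
```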

Message passing neural networks do not operate on fixed-dimensional vectors, but take into account the whole set of neighbors of each vertex. Hence, to aggregate neighborhood information, they employ functions defined on sets that are invariant to permutations. The majority of these architectures achieve invariance by simply summing the messages coming from each neighbor. Let $G = (V, E)$ be a graph. Let also $\mathcal{N}(v)$ be the set of neighbors of vertex $v$. We denote as $h_v^t$ the representation of vertex $v$ at layer $t$. Then, most message passing architectures update the representation of each vertex based on the representations of its neighbors. More specifically, during the message passing phase, hidden states $h_v^t$ at each vertex in the graph are updated based on messages $m_v^{t+1}$ according to:

$$m_v^{t+1} = \sum_{u \in \mathcal{N}(v)} M_t\big(h_v^t, h_u^t\big), \qquad h_v^{t+1} = U_t\big(h_v^t, m_v^{t+1}\big) \qquad (1)$$

The above update strategy illustrates the major weakness of such neural networks. Taking the sum of the messages sent from each neighbor is clearly a permutation invariant function since the response of the function is “indifferent” to the ordering of the elements. However, due to its simplistic nature, this function poses a serious limitation that restricts the representation power of message passing neural networks. Hence, it is clear that more sophisticated approaches are required to learn meaningful vertex representations and as a consequence, meaningful graph representations.
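To make the update in Equation (1) concrete, here is a minimal NumPy sketch of one sum-aggregation round; the concrete choices $M_t(h_v, h_u) = h_u$ and $U_t(h_v, m_v) = \mathrm{ReLU}(h_v W_{\text{self}} + m_v W_{\text{msg}})$ are illustrative assumptions rather than a prescription from any particular architecture:

```python
import numpy as np

def message_passing_step(A, H, W_self, W_msg):
    # A: (n, n) adjacency matrix, H: (n, d) vertex states.
    # m_v = sum of neighbor states -- a permutation invariant aggregation.
    M = A @ H
    # U_t: combine own state with the aggregated messages, then apply ReLU.
    return np.maximum(H @ W_self + M @ W_msg, 0.0)

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)  # a 3-vertex star
H = rng.normal(size=(3, 4))
W_self, W_msg = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
H_next = message_passing_step(A, H, W_self, W_msg)
```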

In contrast to the message passing neural networks, some graph kernels are capable of learning more expressive representations for the neighborhood of each vertex. However, some of them operate only on graphs with discrete node labels shervashidze2011weisfeiler, while others employ very expensive procedures such as the Bhattacharyya kernel kondor2003kernel to compare the neighborhood graphs kondor2016multiscale.

3 Message Passing Graph Kernels

In this Section, we introduce a message passing framework for comparing graphs. Due to the sophisticated permutation invariant kernel functions it employs, the framework is more expressive than neural message passing architectures. We propose an iterative procedure that propagates vertex representations. The framework assumes that each vertex is assigned an initial representation (either a discrete label or continuous attributes). In the case of unlabeled graphs, the representation of each vertex can be initialized using local vertex features. Such features include, for instance, the degree of the vertex, the number of triangles in which the vertex participates, etc.

The proposed framework consists of two components. The first component is a kernel between vertices and the second component is a kernel between graphs. Note that the first component allows one to perform machine learning tasks at the node level, while combined with the second component, it allows one to perform machine learning tasks at the graph level. Let $k_v$ be a kernel between vertices and $k_N$ a kernel between neighborhoods. Then, the proposed framework computes iteratively a kernel $k^{(t)}$ between each pair of vertices, where $t$ denotes the timestep. Specifically, the kernel values between the vertices are updated following the recurrence shown below:

$$k^{(t+1)}(v, v') = \lambda_0\, k_v(v, v') + \lambda_1\, k_N\big(\mathcal{N}(v), \mathcal{N}(v')\big) \qquad (2)$$

where $\lambda_0$ and $\lambda_1$ are nonnegative constants and $k_N$ is evaluated using the kernel values $k^{(t)}$ of the previous timestep. Clearly, $k^{(t+1)}$ is a positive semidefinite function defined on the space of vertices given that $k_v$ and $k_N$ are also positive semidefinite kernels. It is interesting to note that the above procedure implicitly updates the representations of the considered vertices. At the first iteration, a kernel function that compares the labels/attributes of the vertices is employed, and then all the computations are performed in the kernel space. After computing the kernel between each pair of vertices for $T$ iterations, we can compute a kernel between graphs as follows:

$$K(G, G') = k_G\big(V, V'\big) \qquad (3)$$

where $V$ and $V'$ denote the vertex sets of $G$ and $G'$, and $k_G$ is a kernel between sets of vertices. Note that both $k_N$ and $k_G$ are functions defined on sets of vertices, and hence they are required to satisfy the constraint of permutation invariance. To compute the above two kernels, we employ two well-known design paradigms for developing kernels: (i) the R-convolution framework haussler1999convolution, and (ii) the theory of valid optimal assignment kernels kriege2016valid. Given two sets of vertices $S$ and $S'$, we propose the following R-convolution kernel:

$$k_R(S, S') = \sum_{v \in S} \sum_{v' \in S'} k(v, v') \qquad (4)$$

and the following assignment kernel:

$$k_A(S, S') = \max_{B \in \mathcal{B}(S, S')} \sum_{(v, v') \in B} k_s(v, v') \qquad (5)$$

where $\mathcal{B}(S, S')$ denotes the set of all bijections between the two sets of vertices (for simplicity, we have assumed that the size of both sets is the same) and $k_s$ is a strong kernel defined on vertices (see kriege2016valid for more details). Both kernels defined above are permutation invariant and are therefore eligible for comparing the vertices of two graphs and/or the neighbors of two vertices.
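As a rough sketch (our own illustration, with the optimal bijection of Equation (5) found via the Hungarian algorithm), both set kernels can be computed from the matrix of base kernel values between the elements of the two sets:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def r_convolution(K_cross):
    # Equation (4): sum the base kernel over all pairs of elements.
    return K_cross.sum()

def assignment(K_cross):
    # Equation (5): maximum-weight bijection between two equally sized sets.
    # Note: positive semidefiniteness of the resulting kernel additionally
    # requires the base kernel to be strong (kriege2016valid); plain pairwise
    # values are used here only for illustration.
    rows, cols = linear_sum_assignment(K_cross, maximize=True)
    return K_cross[rows, cols].sum()

# K_cross[i, j] = base kernel between vertex i of S and vertex j of S'
K_cross = np.array([[0.9, 0.1, 0.2],
                    [0.3, 0.8, 0.1],
                    [0.2, 0.4, 0.7]])
print(r_convolution(K_cross))  # 3.7
print(assignment(K_cross))     # 0.9 + 0.8 + 0.7 = 2.4
```

Both functions are permutation invariant: reordering the rows or columns of the input matrix leaves their outputs unchanged.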

Based on the above two formulations, the update scheme defined in Equation 2 becomes:

$$k^{(t+1)}(v, v') = \lambda_0\, k_v(v, v') + \lambda_1 \sum_{u \in \mathcal{N}(v)} \sum_{u' \in \mathcal{N}(v')} k^{(t)}(u, u') \qquad (6)$$

and

$$k^{(t+1)}(v, v') = \lambda_0\, k_v(v, v') + \lambda_1 \max_{B \in \mathcal{B}(\mathcal{N}(v), \mathcal{N}(v'))} \sum_{(u, u') \in B} k^{(t)}(u, u') \qquad (7)$$

respectively. To compare a pair of graphs, we use the same permutation invariant kernel functions, and Equation 3 becomes:

$$K(G, G') = \sum_{v \in V} \sum_{v' \in V'} k^{(T)}(v, v') \qquad (8)$$

and

$$K(G, G') = \max_{B \in \mathcal{B}(V, V')} \sum_{(v, v') \in B} k^{(T)}(v, v') \qquad (9)$$

where $\mathcal{B}(V, V')$ denotes the set of all bijections between the vertex sets of $G$ and $G'$. Again, we have assumed that the size of the two graphs is the same.

By combining Equations 6–7 with Equations 8–9, we derive four variants of the proposed framework where neighbor vertices and graph vertices are compared using either an R-convolution or an assignment kernel. We denote these variants by the abbreviations MPGK RR, MPGK RA, MPGK AR, and MPGK AA; here the letter R stands for the R-convolution kernel and the letter A for the assignment kernel. The first letter indicates the employed kernel between neighbor vertices and the second letter the employed kernel between graph vertices.
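For concreteness, the following sketch implements the MPGK RR variant (Equations 6 and 8) for a single pair of graphs with continuous attributes; the parameter defaults ($T$, $\lambda_0$, $\lambda_1$) are placeholders, and in practice the computation is repeated for every pair of graphs in the dataset:

```python
import numpy as np

def mpgk_rr(A1, X1, A2, X2, T=3, lam0=1.0, lam1=1.0):
    # A1, A2: adjacency matrices; X1, X2: (n, d) continuous vertex attributes.
    # k^{(1)} is a linear base kernel on the attributes.
    K_base = X1 @ X2.T
    K = K_base.copy()
    for _ in range(T - 1):
        # Equation (6): the double sum over neighbor pairs equals A1 @ K @ A2.T.
        K = lam0 * K_base + lam1 * (A1 @ K @ A2.T)
    return K.sum()  # Equation (8): sum k^{(T)} over all vertex pairs

# Toy example: a triangle and a path, with vertex degrees as 1-d attributes.
A1 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
A2 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X1 = A1.sum(axis=1, keepdims=True)
X2 = A2.sum(axis=1, keepdims=True)
print(mpgk_rr(A1, X1, A2, X2))
```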

As regards kernel $k_v$, for graphs with discrete node labels we use a delta kernel, while for graphs with continuous node attributes we use a linear kernel between the nodes' attributes. To compute the assignment kernel between two sets of vertices (either the sets of neighbors of two vertices or the sets of vertices of two graphs), we capitalize on the methodology of valid assignment kernels kriege2016valid. Therefore, we define a hierarchy which induces a strong kernel, and this kernel ensures that the emerging assignment function is positive semidefinite. To build the hierarchy, we resort to clustering. Specifically, to create the tree $\mathcal{T}$, we perform kernel $k$-means using the kernel values between the vertices (those of the previous time step when comparing neighborhoods, and those of the last time step when comparing graphs; for $t = 1$, since there is no previous time step, we first compute the kernel $k_v$ between vertices as defined above). The value of the weighting function $\omega$ is determined based on a function $g$ defined on the tree: the root $r$ is assigned a fixed value, while the value of any other vertex $v$ grows with $p(v)$, the length of the shortest path between the root and $v$, normalized by the size of the vertex set $V(\mathcal{T})$ of the tree. Hence, as expected, $\omega$ gives more weight to vertices appearing lower in the hierarchy than to those appearing higher.
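To make the hierarchy machinery concrete, the sketch below evaluates a strong kernel induced by a tree given as parent pointers. It uses the fact that, when $\omega(w) = g(w) - g(\text{parent}(w))$, the sum of $\omega$ along the root-to-LCA path telescopes to $g$ evaluated at the lowest common ancestor; the concrete choice $g(w) = p(w)/|V(\mathcal{T})|$ is an assumption made for illustration, not necessarily the paper's exact configuration:

```python
def depth(parent, v):
    # Length of the path from v up to the root (parent[root] is None).
    d = 0
    while parent[v] is not None:
        v, d = parent[v], d + 1
    return d

def lca(parent, u, v):
    # Lowest common ancestor of u and v in the hierarchy.
    ancestors = set()
    while u is not None:
        ancestors.add(u)
        u = parent[u]
    while v not in ancestors:
        v = parent[v]
    return v

def strong_kernel(parent, u, v):
    # k_s(u, v) = sum of omega over the root-to-LCA path, which telescopes
    # to g(LCA(u, v)) when omega(w) = g(w) - g(parent(w)).
    # Assumed concrete choice: g(w) = depth(w) / |V(T)|.
    return depth(parent, lca(parent, u, v)) / len(parent)

# Toy hierarchy: root 0, two clusters {3, 4} under 1 and {5, 6} under 2.
parent = {0: None, 1: 0, 2: 0, 3: 1, 4: 1, 5: 2, 6: 2}
print(strong_kernel(parent, 3, 4))  # same cluster: g(1) = 1/7
print(strong_kernel(parent, 3, 5))  # clusters split at the root: g(0) = 0
```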

3.1 Link to Weisfeiler-Lehman Subtree Kernel

The proposed kernel is related to the Weisfeiler-Lehman subtree kernel shervashidze2011weisfeiler, a state-of-the-art kernel for graphs with discrete node labels. In fact, the Weisfeiler-Lehman subtree kernel can be seen as an instance of the proposed framework. The Weisfeiler-Lehman subtree kernel uses a combination of message passing and hashing to update the labels of the vertices. The kernel between two vertices at each time step is equal to the sum of the kernel value of the previous time step and the output of a delta kernel between their labels at that time step. The kernel between two graphs is the R-convolution kernel defined in Equation 8. In the case of the Weisfeiler-Lehman subtree kernel, the notion of structural equivalence is very rigid since it is defined as a binary property (i.e., due to the delta function). Perturbing the edges by a small amount leads to completely different vertex labels. Even if two vertices have very similar neighborhoods, it is very likely that they will be assigned different labels after some iterations, and they will thus be considered different from each other. Conversely, in the case of the proposed framework, structural equivalence is not defined as a binary property, and the variants we derive are capable of identifying how similar the neighborhoods of two vertices are to each other.

3.2 Low Rank Approximation

Given two graphs $G$ and $G'$, computing $K(G, G')$ requires computing $k^{(t)}$ between all pairs of vertices of the two graphs. Given a dataset that contains $N$ graphs, each consisting of $n$ vertices, this translates to computing $k^{(t)}$ between $\mathcal{O}(N^2 n^2)$ pairs of vertices. Furthermore, the complexity of the R-convolution kernel that compares all pairs of neighbors of two vertices is $\mathcal{O}(d^2)$, where $d$ is the average degree of the vertices of the input graphs. In the worst scenario (each graph is complete), the average degree of the vertices is $n - 1$, and therefore the complexity of the R-convolution kernel described above becomes $\mathcal{O}(n^2)$. Clearly, for large datasets and/or datasets that contain large graphs, evaluating all these kernels between vertices is infeasible. The complexity of the assignment kernel is lower than that of the R-convolution kernel. Provided we have already generated the hierarchy inducing kernel $k_s$, the assignment kernel can be computed in time linear in the number of compared vertices. However, even in that case, the computation of the kernel matrix may still be costly when dealing with large datasets. In addition, storing the kernel matrix between the vertices (i.e., an $Nn \times Nn$ matrix) may turn out to be infeasible. To account for that, we resort to approximation algorithms. Since our goal is to approximate the kernel matrix between the vertices, we employ the popular Nyström method williams2001using. It is important to note that the employed method does not require the graphs of the test set to be known during training; they can be projected to the low-dimensional space at test time. To compute the kernel value between two graphs, the R-convolution kernel requires $\mathcal{O}(n^2)$ time, while the assignment kernel can be computed in linear time given a hierarchy inducing the kernel. The combination of these two factors makes computing the entire stack of kernels feasible.
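The following is a minimal sketch of the Nyström factorization (the landmark-based scheme of williams2001using; the variable names and the landmark count $q$ are our own): given the kernel values between all vertices and $q$ sampled landmark vertices, it produces explicit low-dimensional features whose inner products approximate the full vertex kernel matrix, and a test vertex only needs its $q$ kernel values against the same landmarks.

```python
import numpy as np

def nystrom_features(K_ls, K_ll, tol=1e-10):
    # K_ls: (n, q) kernel values between all n vertices and q landmarks.
    # K_ll: (q, q) kernel matrix among the landmarks.
    # Returns features Phi with Phi @ Phi.T ~= the full kernel matrix.
    U, s, _ = np.linalg.svd(K_ll, hermitian=True)
    keep = s > tol * s[0]  # drop numerically zero directions
    return K_ls @ (U[:, keep] / np.sqrt(s[keep]))

rng = np.random.default_rng(0)
Z = rng.normal(size=(50, 5))
K_full = Z @ Z.T                              # toy PSD kernel over 50 vertices
idx = rng.choice(50, size=10, replace=False)  # q = 10 landmark vertices
Phi = nystrom_features(K_full[:, idx], K_full[np.ix_(idx, idx)])
print(np.abs(Phi @ Phi.T - K_full).max())     # small approximation error
```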

4 Experimental Evaluation

In this Section, we empirically evaluate the proposed framework on several tasks, and we compare it to state-of-the-art methods.

4.1 Node Embedding

We first demonstrate the effectiveness of the proposed framework in learning meaningful vertex representations. In contrast to many recent methods, the proposed framework can accurately capture the structural identity of vertices. Conversely, recent approaches for learning node representations such as DeepWalk perozzi2014deepwalk and LINE tang2015line may fail to encode structural similarity since they mainly take into account the proximity of the vertices in the graph to generate node embeddings. Therefore, two vertices that are “far” from each other in the graph (e.g., they belong to different connected components) will also be far from each other in the embedding space, independent of their local structure. Hence, most recent methods will fail to generate representations that capture structural equivalence. One notable exception is struc2vec ribeiro2017struc2vec, which compares the ordered degree sequences of the nodes' $k$-hop neighborhoods.

We use the proposed framework to embed the vertices of a barbell graph in the two-dimensional space. We denote as $B(n, \ell)$ the barbell graph which consists of two copies of the complete graph $K_n$ (each having $n$ vertices) that are joined by a path graph $P_\ell$ of length $\ell$. Let $\{u_1, \ldots, u_\ell\}$ be the set of vertices of $P_\ell$. Then, vertex $u_1$ is connected with an edge to one of the vertices of the first complete graph, while vertex $u_\ell$ is connected with an edge to one of the vertices of the second complete graph. The graph is illustrated in Figure 1 (Top). It is clear that many pairs of vertices of the graph have the same structural identity. More specifically, vertices with the same color in the figure are structurally equivalent. For instance, all the vertices of the two complete graphs except for the two that are connected with $u_1$ and $u_\ell$ are structurally equivalent. Permuting any pair of these vertices gives rise to an automorphism.

We expect the proposed framework to learn vertex representations that capture the structural equivalence illustrated in Figure 1 (Top). Pairs of vertices that are structurally equivalent should be close to each other in the embedding space. Figure 1 shows the representations of the vertices of the barbell graph learned by struc2vec (Bottom Left) and by MPGK RR (Bottom Right). To learn these representations, we assigned an attribute to each vertex. The attribute of a vertex was set equal to its degree. In the first iteration, we used a linear kernel to compare two vertices (i.e., the product of their degrees). We then computed all the kernel values between vertices for a number of additional iterations and built the kernel matrix between the vertices. Finally, we projected the vertices into the two-dimensional space using kernel PCA scholkopf1997kernel. Both the proposed framework and struc2vec managed to learn representations that properly separate the equivalence classes. Specifically, the proposed kernel learned exactly the same representation for structurally equivalent vertices, while struc2vec placed such vertices close to each other in the embedding space. Furthermore, the proposed kernel also captures structural hierarchies: vertices belonging to either of the following two groups, (i) the two complete graphs and (ii) the path graph $P_\ell$, are very close to vertices belonging to the same group and very far from vertices belonging to the other group.

Figure 1: The barbell graph (Top). Vertex representations in $\mathbb{R}^2$ learned by struc2vec (Bottom Left) and by the proposed kernel MPGK (Bottom Right).
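A minimal sketch of this pipeline (with an assumed barbell size $B(10, 10)$ and an assumed number of propagation steps) could look as follows:

```python
import networkx as nx
import numpy as np
from sklearn.decomposition import KernelPCA

G = nx.barbell_graph(10, 10)           # two K_10 joined by a 10-vertex path
A = nx.to_numpy_array(G)
deg = A.sum(axis=1, keepdims=True)     # initial attribute: vertex degree

K_base = deg @ deg.T                   # linear kernel between degrees
K = K_base.copy()
for _ in range(3):                     # assumed number of extra iterations
    K = K_base + A @ K @ A.T           # MPGK RR update within a single graph

emb = KernelPCA(n_components=2, kernel="precomputed").fit_transform(K)
# Structurally equivalent vertices receive identical rows of K,
# and hence identical coordinates in emb.
```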

4.2 Graph Classification

Datasets. We evaluated the proposed framework on standard graph classification datasets derived from bioinformatics and chemoinformatics (MUTAG, ENZYMES, NCI1, PROTEINS), and from social networks (IMDB-BINARY, IMDB-MULTI, REDDIT-BINARY, REDDIT-MULTI-5K, COLLAB). The datasets, further references, and statistics are available at https://ls11-www.cs.tu-dortmund.de/staff/morris/graphkerneldatasets. We also demonstrated the effectiveness of the proposed framework on a synthetic dataset (Synthie). Note that MUTAG and NCI1 contain graphs with discrete node labels, Synthie contains graphs with continuous node attributes, and ENZYMES and PROTEINS contain graphs with both discrete node labels and continuous node attributes. On the other hand, the graphs contained in the social interaction datasets are unlabeled.

Experimental Setup. To perform graph classification, we employed a C-Support Vector Machine (SVM) classifier. We performed 10-fold cross-validation, using 9 folds for training and 1 fold for testing. The whole process was repeated 10 times for each dataset and each kernel. The parameter $C$ of the SVM was optimized on the training set only.

The parameters of the proposed message passing graph kernels were selected using cross-validation on the training dataset. The number of iterations $T$ was chosen by cross-validation, which means that we computed a different kernel matrix for each candidate value in each experiment. The parameters $\lambda_0$ and $\lambda_1$ were set to fixed nonnegative values. Furthermore, we used the Nyström method to approximate the kernel matrix between vertices. The proposed kernels were written in Python (code available at https://github.com/giannisnik/message_passing_graph_kernels).
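A bare-bones sketch of this evaluation protocol (our own illustration; the nested search over $C$ and $T$ on the training folds is omitted for brevity), given a precomputed graph kernel matrix:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

def evaluate(K, y, n_splits=10, C=1.0):
    # K: (n_graphs, n_graphs) precomputed kernel matrix, y: class labels.
    accs = []
    for tr, te in StratifiedKFold(n_splits, shuffle=True).split(K, y):
        clf = SVC(kernel="precomputed", C=C).fit(K[np.ix_(tr, tr)], y[tr])
        # At test time, the SVM only needs kernel values against the training graphs.
        accs.append(clf.score(K[np.ix_(te, tr)], y[te]))
    return np.mean(accs), np.std(accs)
```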

We compare the proposed framework against several state-of-the-art kernels. Specifically, our set of baselines includes the GraphHopper kernel (GH) feragen2013scalable, an instance of the graph invariant kernels (GI) orsini2015graph, the propagation kernel (P2K) neumann2016propagation, and the hash Weisfeiler-Lehman subtree kernel (HGK-WL) morris2016faster. All these kernels support continuous vertex attributes. Additionally, we compare the proposed message passing kernels to the Weisfeiler-Lehman subtree kernel (WL) shervashidze2011weisfeiler and the shortest-path kernel (SP) borgwardt2005shortest, which can only handle graphs with discrete node labels, to exemplify the usefulness of using continuous attributes. For GH, GI, P2K, and HGK-WL, we report the results from morris2016faster since the experimental setup is the same as ours. For WL and SP, we report the results from shervashidze2011weisfeiler. We also compare the proposed message passing kernels against two recent neural network architectures for graph classification: (i) PATCHY-SAN (PSCN) niepert2016learning, and (ii) Deep Graph Convolutional Neural Network (DGCNN) zhang2018end. For both neural network architectures, we report the best results from the corresponding papers since they were obtained under the same setting as ours.

Results. We report in Table 1 average prediction accuracies and standard deviations over the 10 runs of the 10-fold cross-validation procedure.

Method                           MUTAG          ENZYMES        NCI1           PROTEINS       Synthie
SP shervashidze2011weisfeiler    87.28 (±0.55)  41.68 (±1.79)  73.47 (±0.11)  NA             NA
WL shervashidze2011weisfeiler    82.05 (±0.36)  52.22 (±1.26)  82.19 (±0.18)  NA             NA
GH morris2016faster              NA             68.80 (±0.96)  NA             72.26 (±0.34)  73.18 (±0.77)
GI morris2016faster              NA             71.70 (±0.79)  NA             76.88 (±0.47)  95.75 (±0.50)
P2K morris2016faster             NA             69.22 (±0.34)  NA             73.45 (±0.48)  50.15 (±1.92)
HGK-WL morris2016faster          NA             67.63 (±0.95)  NA             76.70 (±0.41)  96.75 (±0.51)
PSCN niepert2016learning         88.95 (±4.37)  NA             76.34 (±1.68)  NA             NA
DGCNN zhang2018end               85.83 (±1.66)  NA             74.44 (±0.47)  NA             NA
MPGK RR                          85.26 (±1.16)  44.38 (±1.36)  60.12 (±0.17)  60.03 (±0.12)  77.80 (±0.94)
MPGK RA                          84.10 (±1.12)  69.58 (±0.96)  83.08 (±0.51)  75.88 (±0.26)  90.92 (±0.92)
MPGK AR                          84.80 (±2.33)  48.58 (±1.51)  71.50 (±0.37)  72.91 (±0.35)  89.35 (±1.12)
MPGK AA                          83.21 (±0.94)  70.18 (±1.33)  83.85 (±0.36)  61.14 (±1.14)  98.45 (±0.48)

Method                    IMDB-BINARY    IMDB-MULTI     REDDIT-BINARY  REDDIT-MULTI-5K  COLLAB
DGK yanardag2015deep      66.96 (±0.56)  44.55 (±0.52)  78.04 (±0.39)  41.27 (±0.18)    73.09 (±0.25)
PSCN niepert2016learning  71.00 (±2.29)  45.23 (±2.84)  86.30 (±1.58)  49.10 (±0.70)    72.60 (±2.15)
DGCNN zhang2018end        70.03 (±0.86)  47.83 (±0.85)  NA             NA               73.76 (±0.49)
MPGK RR                   67.15 (±0.94)  29.50 (±0.49)  74.67 (±0.39)  47.64 (±0.10)    65.52 (±0.14)
MPGK RA                   72.83 (±0.73)  50.98 (±0.29)  91.62 (±0.54)  53.62 (±0.17)    74.85 (±0.16)
MPGK AR                   72.64 (±0.51)  49.40 (±0.38)  84.57 (±0.40)  52.87 (±0.43)    68.95 (±0.28)
MPGK AA                   73.67 (±0.44)  51.76 (±0.42)  90.91 (±0.31)  52.11 (±0.52)    82.60 (±0.54)

Table 1: Classification accuracy (± standard deviation) of the proposed message passing graph kernels and the baselines on the graph classification datasets. NA indicates that results are not available.

The proposed message passing graph kernels outperform all baselines on 7 out of the 10 datasets. The difference in performance between the proposed kernels and the baselines is larger on the social interaction datasets. In some cases, the gains in accuracy over the best performing competitors are considerable. For instance, on the REDDIT-BINARY, REDDIT-MULTI-5K, and COLLAB datasets, we obtain respective absolute improvements of 5.32%, 4.52%, and 8.84% in accuracy over the best competitor. Furthermore, on almost all datasets, our message passing graph kernels reach better performance than the recent graph neural network architectures (PSCN and DGCNN), showing that kernels are still the dominant approach for the classification of small and medium-sized graph datasets. It is interesting to note that the two kernels that operate on graphs with discrete node labels (SP and WL) fail to achieve performance comparable to kernels that use vertex attribute information (GH, GI, P2K, HGK-WL, and the proposed kernels) on the ENZYMES dataset (this dataset contains both discrete node labels and continuous node attributes). This highlights the added advantage of kernels capable of handling continuous vertex attributes. The proposed kernels outperform the baselines that use vertex attribute information on Synthie, and reach comparable performance on the two datasets that contain both discrete node labels and continuous node attributes (ENZYMES and PROTEINS). It should be mentioned that the baseline kernels take into account both types of labels. Our proposed framework is very general, and we could also have designed variants that take both types of information into account. As regards the four variants of the proposed framework, MPGK AA was the best performing variant, while MPGK RA performed comparably on most datasets. Both of these kernels were generally superior to MPGK RR and MPGK AR.

4.3 Molecular Graph Regression

Dataset. We evaluate the proposed framework on the publicly available QM9 dataset ramakrishnan2014quantum. The dataset contains approximately 134,000 organic molecules. Each molecule consists of Hydrogen (H), Carbon (C), Oxygen (O), Nitrogen (N), and Fluorine (F) atoms and contains up to 9 heavy (non-Hydrogen) atoms. Furthermore, each molecule has 13 target properties to predict. These properties can be grouped into 4 categories: (i) those related to how tightly bound together the atoms in a molecule are (U0, U, H, G), (ii) those related to fundamental vibrations of the molecule (Omega, ZPVE), (iii) those that concern the states of the electrons in the molecule (HOMO, LUMO, gap), and (iv) measures of the spatial distribution of electrons in the molecule (mu, alpha, R2).

Experimental Setup. The dataset was divided into a train, a validation, and a test set. All target variables were normalized to have zero mean and unit variance. Although the dataset contains spatial information related to the atomic configurations, in our experiments we only used the graph representation of each molecule along with the attributes of the atoms (i.e., vertex attributes).

To predict the targets, we only used MPGK AA, the kernel that performed best in graph classification. We performed a fixed number of message passing iterations in total. The parameters $\lambda_0$ and $\lambda_1$ were set to fixed nonnegative values. Furthermore, we used the Nyström method to approximate the kernel matrix between vertices. Due to the large size of the dataset, we did not compute the whole kernel matrix between graphs at each iteration, but instead also used the Nyström method to approximate it. Hence, we generated one low-dimensional representation of each graph per iteration. We concatenated these representations and fed them to a fully connected neural network. To train the model, we used the Adam optimizer with an initial learning rate that decayed linearly after each step towards a minimum value. The number of epochs was chosen after experimenting on the validation set. We trained the network separately for each target.
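As a rough sketch of this regression head (with stand-in random arrays in place of the per-iteration Nyström graph features, scikit-learn's MLPRegressor in place of the paper's network, no learning-rate decay, and all sizes hypothetical):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_graphs, q, T = 1000, 128, 3                  # hypothetical sizes
# Stand-ins for the per-iteration Nystrom features of each graph
# (in the real pipeline these come from the approximation of Section 3.2).
phis = [rng.normal(size=(n_graphs, q)) for _ in range(T)]
X = np.hstack(phis)                            # one feature block per iteration
y = rng.normal(size=n_graphs)                  # normalized target (zero mean, unit var.)

model = MLPRegressor(hidden_layer_sizes=(256,), solver="adam",
                     learning_rate_init=1e-3, max_iter=50)
model.fit(X[:800], y[:800])                    # train split
mae = np.abs(model.predict(X[800:]) - y[800:]).mean()
```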

We compare the proposed message passing kernel against the optimal assignment Weisfeiler-Lehman graph kernel (WLGK) kriege2016valid, the convolutional neural network for learning molecular fingerprints (NGF) duvenaud2015convolutional, PATCHY-SAN (PSCN) niepert2016learning, and the 2nd order covariant compositional network (2nd order CCN) kondor2018covariant. For all baselines, we report the results from kondor2018covariant since the experimental setup is the same as ours.

Results. Table 2 reports the mean absolute error (MAE) and the root mean squared error (RMSE) for the normalized targets.

         WLGK         NGF          PSCN         2nd order CCN  MPGK
Target   MAE   RMSE   MAE   RMSE   MAE   RMSE   MAE   RMSE     MAE   RMSE
mu       0.69  0.92   0.63  0.87   0.54  0.75   0.48  0.67     0.51  0.79
alpha    0.46  0.68   0.43  0.65   0.20  0.31   0.16  0.26     0.20  0.31
HOMO     0.64  0.91   0.58  0.81   0.51  0.70   0.39  0.55     0.39  0.53
LUMO     0.70  0.84   0.65  0.79   0.59  0.73   0.53  0.68     0.22  0.32
gap      0.72  0.86   0.67  0.82   0.60  0.75   0.54  0.69     0.28  0.40
R2       0.55  0.81   0.49  0.71   0.22  0.31   0.19  0.27     0.34  0.50
ZPVE     0.57  0.72   0.51  0.66   0.43  0.55   0.39  0.51     0.07  0.09
U0       0.52  0.67   0.47  0.62   0.34  0.44   0.29  0.39     0.23  0.34
U        0.52  0.67   0.47  0.62   0.34  0.44   0.29  0.40     0.23  0.34
H        0.52  0.68   0.47  0.62   0.34  0.44   0.30  0.40     0.22  0.35
G        0.51  0.67   0.46  0.62   0.33  0.43   0.29  0.38     0.23  0.33
Cv       0.59  0.78   0.47  0.65   0.27  0.34   0.23  0.30     0.15  0.21
Omega    0.72  0.84   0.63  0.77   0.57  0.73   0.45  0.65     0.02  0.35

Table 2: Comparison of the baseline methods (left) and the proposed message passing graph kernel (right) on the QM9 dataset.

MPGK achieves lower MAE and RMSE values than all the baselines on 10 out of the 13 targets, while it is outperformed by the 2nd order CCN on the remaining 3 targets. In some cases, the improvement in performance is significant. For example, the MAE of the 2nd order CCN on the ZPVE target was 0.39, while that of the proposed kernel was 0.07. Overall, the obtained results suggest that the proposed kernel is competitive with state-of-the-art methods.

5 Conclusion

In this paper, we proposed a general framework for designing graph kernels. The proposed framework capitalizes on the well-known message passing scheme on graphs. We derived four instances of the framework, and showed through extensive experiments that these kernels are competitive with state-of-the-art methods.

References
