Framework for Designing Filters of Spectral Graph Convolutional Neural Networks in the Context of Regularization Theory

Abstract

Graph convolutional neural networks (GCNNs) have been widely used in graph learning. It has been observed that the smoothness functional on graphs can be defined in terms of the graph Laplacian. This fact points in the direction of using the Laplacian to derive regularization operators on graphs and, consequently, to use them in spectral GCNN filter designs. In this work, we explore the regularization properties of the graph Laplacian and propose a generalized framework for regularized filter designs in spectral GCNNs. We find that the filters used in many state-of-the-art GCNNs can be derived as special cases of the framework we developed. We designed new filters that are associated with well-defined regularization behavior and tested their performance on semi-supervised node classification tasks. Their performance was found to be superior to that of other state-of-the-art techniques.

1 Introduction

Convolutional neural networks (CNNs) 726791 () have been applied to a wide range of problems in computer vision and speech processing for various tasks such as image classification, segmentation, and detection, and speech recognition and synthesis. The success of CNNs as powerful feature extractors for data in the Euclidean domain motivated researchers to extend the concepts to non-Euclidean domains such as manifolds and graphs 8100059 ().

The GCNN designs mainly fall into two categories, namely spatial and spectral approaches. Spatial approaches directly define the convolution on the graph vertices. They are based on a message-passing mechanism gilmer2017neural () between the nodes of the graph. Examples are the network for learning molecular fingerprints duvenaud2015convolutional (), molecular graph convolutions kearnes2016molecular (), GraphSage hamilton2017inductive (), gated graph neural networks li2015gated (), graph attention networks velivckovic2017graph (), etc. Spectral filter designs are based on the concepts of spectral graph theory chung1997spectral () and signal processing on graphs shuman2013emerging (). The graph Fourier transform has been utilized to define graph convolutions in terms of the graph Laplacian. Examples are the spectral graph CNN bruna2013spectral (), Chebyshev network (ChebyNet) defferrard2016convolutional (), graph convolutional network (GCN) kipf2016semi (), graph wavelet network xu2019graph (), GraphHeat network xu2019graphheat (), etc.

Regularization in graphs is realized with the help of the graph Laplacian. A smoothness functional on graphs can be obtained in terms of the Laplacian, and by operating on its eigenfunctions, regularization properties on graphs can be achieved. This has been utilized for inference in semi-supervised classification, link prediction, etc. zhou2004regularization (), belkin2004regularization (), and for graph kernel designs smola2003kernels ().

In this work, we propose a framework for filter designs in spectral GCNNs from which their regularization behavior can be analyzed. We observe that by operating on the eigenvalues of the Laplacian, different kinds of filters with corresponding regularization behavior can be obtained. We identify the condition on the spectrum of the Laplacian that ensures regularized spectral filter designs. The filters used in state-of-the-art networks can be deduced as special cases of our framework. We also propose a new set of filters inspired by the framework and identify certain state-of-the-art models as their special cases. The rest of the paper is organized as follows. Section 2 discusses the related work. Section 3 defines the notations used in the manuscript. Spectral GCNNs are briefly discussed in Section 4. Section 5 discusses the framework for designing regularized graph convolution filters. Section 6 discusses the experiments and results. Conclusions are made in Section 7.

2 Related works

Spectral GCNNs: The spectral filter designs for GCNNs start with the work by Bruna et al. bruna2013spectral (), where the graph Fourier transform of the signals on nodes is utilized. The filter is then defined in terms of the eigenvectors of the graph Laplacian. In ChebyNet defferrard2016convolutional (), convolution is done in the spectral domain, and the convolution filter is defined as a linear combination of powers of the Laplacian. The scaling factors of the linear combination are the parameters to be learned by the network. Kipf et al. kipf2016semi () proposed the graph convolutional network as a first-order approximation of the spectral convolutions defined in ChebyNet. GraphHeat xu2019graphheat () applies negative exponential processing to the eigenvalues of the Laplacian to improve the smoothness of the function to be learned by penalizing high-frequency components of the signal. Li et al. li2019label () proposed the improved graph convolutional network (IGCN), which uses higher-order versions of the filters in kipf2016semi ().

Regularization in graphs: Regularization associated with graphs emerged along with the development of algorithms for semi-supervised learning belkin2006manifold (). The data points are converted into a network, and the unknown label of a data point can be learned in a transductive setting. A generalized regularization theory is formulated in which the classical formulation is augmented with a smoothness functional on graphs in terms of the graph Laplacian. These concepts are used for semi-supervised learning and related applications belkin2004regularization (), zhou2004regularization (), zhu2003semi (), weston2012deep (). Smola et al. smola2003kernels () leverage these concepts to define support vector kernels on graph nodes and formulate their connection with regularization operators. Our work is mainly motivated by this work.

Spectral analysis of GCNNs: The low-pass filtering property of the ChebyNet and GCN networks is analyzed by Wu et al. pmlr-v97-wu19e (). They have shown that the renormalization trick kipf2016semi () applied to GCN shrinks the spectrum of the modified Laplacian from [0, 2] to [0, 1.5], which favors the low-pass filtering process. Li et al. li2019label () and Klicpera et al. klicpera2019diffusion () have given frequency response analyses of their proposed filters. Compared to these works, we form a generalized framework for designing filters based on their regularization property. In addition, Gama et al. gama2019stability () study the perturbation of graphs and its consequent effects on filters, and propose conditions under which the filters are stable to small changes in the graph structure.

3 Notations

We define $G = (V, E)$ as an undirected graph, where $V$ is the set of nodes with $|V| = n$ and $E$ is the set of edges. The adjacency matrix is defined as $W = (w_{ij})$, where $w_{ij}$ denotes the weight associated with the edge $(i, j)$ and $w_{ij} = 0$ otherwise. The degree matrix $D$ is defined as the diagonal matrix where $D_{ii} = \sum_j w_{ij}$. The Laplacian of $G$ is defined as $L = D - W$ and the normalized Laplacian is defined as $\mathcal{L} = I_n - D^{-1/2} W D^{-1/2}$. As $\mathcal{L}$ is a real symmetric positive semi-definite matrix, it has a complete set of orthonormal eigenvectors $\{u_l\}_{l=1}^{n}$, known as the graph Fourier modes, and the associated ordered real non-negative eigenvalues $\{\lambda_l\}_{l=1}^{n}$, identified as the frequencies of the graph. Let the eigendecomposition of $\mathcal{L}$ be $U \Lambda U^T$, where $U = [u_1, \ldots, u_n]$ is the matrix of eigenvectors and $\Lambda = \mathrm{diag}([\lambda_1, \ldots, \lambda_n])$ is the diagonal matrix of eigenvalues. The graph Fourier transform (GFT) of a signal $x \in \mathbb{R}^n$ is defined as $\hat{x} = U^T x$ and the inverse GFT is defined as $x = U \hat{x}$ shuman2013emerging ().
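
As a concrete illustration of these definitions, the following minimal numpy sketch builds the normalized Laplacian of a small toy graph, computes its eigensystem, and applies the GFT and its inverse; the adjacency matrix and signal are arbitrary examples, not data from the paper.

```python
import numpy as np

# Toy undirected graph on 4 nodes given by its weighted adjacency matrix W.
W = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 0.],
              [1., 1., 0., 1.],
              [0., 0., 1., 0.]])

d = W.sum(axis=1)                                # node degrees
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))           # D^{-1/2}
L = np.eye(len(d)) - D_inv_sqrt @ W @ D_inv_sqrt # normalized Laplacian

# Columns of U are the graph Fourier modes; lam holds the frequencies in [0, 2].
lam, U = np.linalg.eigh(L)

x = np.array([1.0, 2.0, 0.5, -1.0])              # a signal on the nodes
x_hat = U.T @ x                                  # graph Fourier transform
assert np.allclose(U @ x_hat, x)                 # inverse GFT recovers x
```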

4 Spectral graph convolution networks

Spectral convolutions on graphs can be defined as the multiplication of a signal on the nodes with a graph filter. We define $x \in \mathbb{R}^n$ as a signal on the nodes of a graph $G$. A graph filter $g$ can be regarded as a function that takes $x$ as input and outputs another function $y$. The convolution operation can be written as $y = g * x = U g_{\theta}(\Lambda) U^T x$, where $g_{\theta}(\Lambda) = \mathrm{diag}([g_{\theta}(\lambda_1), \ldots, g_{\theta}(\lambda_n)])$ is a diagonal matrix. The function $g_{\theta}(\lambda)$ (with parameters $\theta$) is defined as the frequency response function of the filter $g$.
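
A minimal sketch of this convolution, continuing the toy graph above; the frequency response used here is an arbitrary low-pass choice for illustration, not one of the filters discussed later.

```python
import numpy as np

W = np.array([[0,1,1,0],[1,0,1,0],[1,1,0,1],[0,0,1,0]], dtype=float)
Dm12 = np.diag(W.sum(1) ** -0.5)
L = np.eye(4) - Dm12 @ W @ Dm12                  # normalized Laplacian
lam, U = np.linalg.eigh(L)

def spectral_conv(U, lam, x, freq_response):
    """g * x = U g_theta(Lambda) U^T x: modulate the spectrum of x by g_theta."""
    return U @ np.diag(freq_response(lam)) @ U.T @ x

x = np.array([1.0, 2.0, 0.5, -1.0])
y = spectral_conv(U, lam, x, lambda lam: 1.0 / (1.0 + 2.0 * lam))  # attenuate high frequencies
```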

The graph filters associated with state-of-the-art GCNNs correspond to unique frequency response functions, as listed in Table 1. In the spectral CNN bruna2013spectral (), the output is defined as $y = \sigma(U F U^T x)$, where $\sigma$ is a non-linearity function and $F$ is a matrix characterized by a spline kernel parameterized by $\theta$. The drawbacks of this model are that it has $O(n)$ parameters per filter and is not localized in space. These problems are rectified by ChebyNet defferrard2016convolutional (), where a polynomial filter is used. It reduces the number of parameters to $O(K)$, and the powers of the Laplacian in the filter design solve the localization issue.

The graph convolutional network (GCN) kipf2016semi () is a first-order approximation of ChebyNet except for a sign change, and the improved graph convolutional network (IGCN) li2019label () uses higher powers of the GCN filter, as can be inferred from Table 1. The frequency responses of all the networks are parameterized by $\theta$. The network is optimized to learn these parameters with respect to the loss function associated with the downstream task. In the case of ChebyNet and IGCN, the orders $K$ and $k$ are hyper-parameters, and for GraphHeat, $s$ is a hyper-parameter.

In our work, the objective is to propose a framework to design regularized graph convolution filters and for this, we make use of regularization theory over graphs via graph Laplacian as discussed in the following section. We also found that the spectral filters in Table 1 can be deduced as special cases of the proposed frequency response functions in our framework.

5 Regularized graph convolution filters

We consider that the signals on the nodes of the graph are generated from a function $f$. It has been identified that the eigenvectors of $\mathcal{L}$ corresponding to lower frequencies, or smaller eigenvalues, are smoother on graphs zhu2009introduction (). The smoothness corresponding to the $l$-th eigenvector is,

$$ u_l^T \mathcal{L} u_l = \frac{1}{2} \sum_{i,j} w_{ij} \left( \frac{u_l(i)}{\sqrt{d_i}} - \frac{u_l(j)}{\sqrt{d_j}} \right)^2 = \lambda_l \qquad (1) $$

From Equation 1, we can infer that a smoothly varying graph signal corresponds to eigenvectors with smaller eigenvalues. This is under the assumption that the neighborhoods of topologically identical nodes would be similar. In real-world applications, the signals over the graph could be noisy. In this context, we should filter out the high-frequency content of the signal, as it contains noise, while the low-frequency content (eigenvectors corresponding to lower eigenvalues) should be retained, as it contains robust information. In other words, smoothness corresponds to spatial localization in the graph, which is important to infer the local variability of the node neighborhoods. This is where the regularization behavior of the frequency response functions of a GCNN becomes important. Extending Equation 1, we can define the smoothness functional on the graph as $S(f) = f^T \mathcal{L} f$.
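
The smoothness functional is easy to compute directly; the sketch below (toy graph and signals, for illustration only) shows that a slowly varying signal scores lower than a rapidly oscillating one.

```python
import numpy as np

W = np.array([[0,1,1,0],[1,0,1,0],[1,1,0,1],[0,0,1,0]], dtype=float)
Dm12 = np.diag(W.sum(1) ** -0.5)
L = np.eye(4) - Dm12 @ W @ Dm12                  # normalized Laplacian

def smoothness(f, L):
    """f^T L f: small for signals that vary slowly over the edges of the graph."""
    return float(f @ L @ f)

f_smooth = np.array([1.0, 1.0, 1.0, 0.9])        # nearly constant signal
f_rough  = np.array([1.0, -1.0, 1.0, -1.0])      # rapidly oscillating signal
print(smoothness(f_smooth, L) < smoothness(f_rough, L))  # True
```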

Network | Freq. response $g_{\theta}(\lambda)$ | Output $g * x$
ChebyNet defferrard2016convolutional () | $\sum_{k=0}^{K-1} \theta_k \lambda^k$ | $\sum_{k=0}^{K-1} \theta_k \mathcal{L}^k x$
GCN kipf2016semi () | $\theta (2 - \lambda)$ | $\theta (2 I_n - \mathcal{L}) x$
GraphHeat xu2019graphheat () | $\theta_0 + \theta_1 e^{-s\lambda}$ | $(\theta_0 I_n + \theta_1 e^{-s\mathcal{L}}) x$
IGCN li2019label () | $\theta (2 - \lambda)^k$ | $\theta (2 I_n - \mathcal{L})^k x$
Table 1: Frequency response function and output of spectral filters of GCNNs

The smoothness property associated with $L$ or $\mathcal{L}$ also indicates their potential application in designing regularized filters for GCNNs. Since the spectrum of $\mathcal{L}$ is limited to $[0, 2]$, in this work we use the normalized Laplacian. In the following section, we discuss how the graph Laplacian can be used for regularization in graphs, and we propose our framework to design regularized graph convolution filters.

5.1 Graph Laplacian and regularization

Regularization functionals on $\mathbb{R}^n$ can be written as

$$ \langle f, Pf \rangle = \int |\tilde{f}(\omega)|^2 \, r(\|\omega\|^2) \, d\omega \qquad (2) $$

where $f \in L_2(\mathbb{R}^n)$, $P$ is a regularization operator, $\tilde{f}(\omega)$ denotes the Fourier transform of $f$, $r(\|\omega\|^2)$ is a frequency penalizing function, and $r(\cdot)$ is a function that acts on the spectrum of the continuous Laplace operator smola2003kernels (). Equation 2 can be mapped to the case of graphs by making an analogy between the continuous Laplace operator and its discrete counterpart, the graph Laplacian $\mathcal{L}$. Analogous to Equation 2, Smola et al. smola1998connection () used a function of the Laplacian, $r(\mathcal{L})$, in place of $r(\|\omega\|^2)$ in Equation 2, given the Laplacian's capability to impart a smoothness functional on graphs. Hence regularization functionals on graphs can be written as $\langle f, Pf \rangle = f^T r(\mathcal{L}) f$, where $P = r(\mathcal{L})$.

The choice of $r(\lambda)$ should be such that it favors the low-pass filtering behavior of the graph convolution filter, i.e., the function should be high for higher values of $\lambda$ to impose more penalization on the high-frequency (high-eigenvalue) content of the graph signal. Similarly, the penalization of low frequencies should be less. Hence we call $r(\lambda)$ the regularization function. Examples of choices of $r(\lambda)$ are listed in the second column of Table 2, and the functions are plotted in Figure 1.

Filter | Regularization function $r(\lambda)$ | Filter definition $g(\mathcal{L})$
Regularized Laplacian | $1 + \sigma^2 \lambda$ | $(I_n + \sigma^2 \mathcal{L})^{-1}$
Diffusion | $e^{s\lambda}$ | $e^{-s\mathcal{L}}$
$p$-step random walk | $(a - \lambda)^{-p}, \; a \geq 2$ | $(a I_n - \mathcal{L})^{p}$
Cosine | $(\cos(\lambda \pi / 4))^{-1}$ | $\cos(\mathcal{L} \pi / 4)$
Table 2: Filters, their corresponding regularization functions, and filter definitions
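
The closed-form filter definitions in Table 2 can be evaluated directly with standard matrix functions. A minimal scipy sketch is given below; the graph is the toy example used earlier and the hyper-parameter values are illustrative, not the tuned values of Section 6.

```python
import numpy as np
from numpy.linalg import matrix_power
from scipy.linalg import expm, cosm, inv

W = np.array([[0,1,1,0],[1,0,1,0],[1,1,0,1],[0,0,1,0]], dtype=float)
Dm12 = np.diag(W.sum(1) ** -0.5)
L = np.eye(4) - Dm12 @ W @ Dm12                  # normalized Laplacian
I = np.eye(4)

sigma2, s, a, p = 1.0, 1.0, 2.0, 2               # illustrative hyper-parameters

filters = {
    "regularized Laplacian": inv(I + sigma2 * L),       # r(lambda) = 1 + sigma^2 * lambda
    "diffusion":             expm(-s * L),              # r(lambda) = exp(s * lambda)
    "p-step random walk":    matrix_power(a * I - L, p),# r(lambda) = (a - lambda)^(-p)
    "cosine":                cosm(np.pi / 4 * L),       # r(lambda) = 1 / cos(lambda * pi / 4)
}
```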

Figure 1: Regularization function $r(\lambda)$: (a) regularized Laplacian, (b) diffusion function, (c) one-step random walk, (d) 2-step random walk, (e) inverse cosine function.
Remark 1.

There exists an inverse relationship between the regularization function and the frequency response function. To impose high penalization on higher frequencies, the regularization function should be a monotonically increasing function of the eigenvalues. At the same time, for low-pass filtering characteristics, i.e., high filter gain at lower frequencies and vice versa, the frequency response function should be a monotonically decreasing function of the eigenvalues.

Remark 2.

Smola et al. smola1998connection () have shown that $K = P^{-1}$ (the pseudo-inverse if $P$ is not invertible) is a positive semidefinite (p.s.d.) support vector kernel in a reproducing kernel Hilbert space (RKHS) $\mathcal{H}$, where $P$ is a positive semidefinite regularization matrix and $\mathcal{H}$ is the image of $\mathbb{R}^n$ under $P$.

Remarks 1 and 2 point in the direction of using the inverse of a regularization function, $1/r(\lambda)$, as the frequency response function of spectral filters. In this context, regularized filters corresponding to their regularization functions can be obtained as,

$$ g(\mathcal{L}) = U \, r^{-1}(\Lambda) \, U^T = \sum_{l=1}^{n} \frac{1}{r(\lambda_l)} u_l u_l^T \qquad (3) $$

where $(\lambda_l, u_l)_{l=1}^{n}$ is the eigensystem of $\mathcal{L}$ and $r^{-1}(\cdot)$ is the reciprocal of the regularization function applied to the eigenvalues. In the context of Remark 2, we can see that filters of GCNNs defined as per Equation 3 are also support vector kernels on graphs, provided their parameterization (if any) maintains positive semidefiniteness. A detailed discussion is provided in Appendix A.
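
A minimal sketch of Equation 3: the filter is assembled from the eigensystem of the normalized Laplacian and the reciprocal of a chosen regularization function; as a sanity check, the spectral construction for the diffusion regularization function agrees with the closed form in Table 2. The toy graph and the value s = 1 are illustrative.

```python
import numpy as np
from scipy.linalg import expm

W = np.array([[0,1,1,0],[1,0,1,0],[1,1,0,1],[0,0,1,0]], dtype=float)
Dm12 = np.diag(W.sum(1) ** -0.5)
L = np.eye(4) - Dm12 @ W @ Dm12
lam, U = np.linalg.eigh(L)

def regularized_filter(lam, U, r):
    """Equation 3: g(L) = sum_l (1 / r(lambda_l)) u_l u_l^T."""
    return U @ np.diag(1.0 / r(lam)) @ U.T

s = 1.0
g = regularized_filter(lam, U, lambda lam: np.exp(s * lam))  # diffusion: r(lambda) = exp(s*lambda)
assert np.allclose(g, expm(-s * L))                          # matches the closed form exp(-sL)
```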

5.2 A framework for designing graph convolution filters

Regularized graph filters, defined as follows, can be designed by making use of Equation 3.

Definition 5.1 (Regularized graph convolution filter).

A graph filter whose frequency response function behaves like a low-pass filter, i.e., $g_{\theta}(\lambda) = 1/r(\lambda)$ is a monotonically decreasing function in $[0, \lambda_{max}]$ or, equivalently, the associated regularization function $r(\lambda)$ is a monotonically increasing function in $[0, \lambda_{max}]$.

The design strategy for regularized graph convolution filters is summarized in Theorem 1.

Theorem 1.

A monotonically increasing function $r(\lambda)$ in the interval $[0, \lambda_{max}]$ is a valid regularization function for designing regularized graph convolution filters using Equation 3, where $\lambda_{max}$ is the maximum eigenvalue of $\mathcal{L}$.

Proof.

Note that the Laplacian can be decomposed as $\mathcal{L} = U \Lambda U^T = \sum_{l=1}^{n} \lambda_l u_l u_l^T$. Equivalently, this decomposition can be considered as a sum of matrices of projections onto the one-dimensional subspaces spanned by the eigenvectors (Fourier basis), i.e., $\mathcal{L} = \sum_{l=1}^{n} \lambda_l P_l$, where the linear map $P_l = u_l u_l^T$ is the orthogonal projection onto the subspace spanned by the Fourier basis vector $u_l$. Consider a regularization function $r(\lambda)$ that is monotonically increasing. Note that the frequency response function $1/r(\lambda)$ is then monotonically decreasing. The filtering operation of a signal $x$ by the filter $g(\mathcal{L})$ can be written in terms of the mappings $P_l$, i.e.,

$$ g(\mathcal{L})\, x = \sum_{l=1}^{n} \frac{1}{r(\lambda_l)} u_l u_l^T x = \sum_{l=1}^{n} \frac{1}{r(\lambda_l)} P_l\, x $$

where $(\lambda_l, u_l)_{l=1}^{n}$ constitutes the eigensystem of $\mathcal{L}$. The values $1/r(\lambda_l)$ can be considered as weights that measure the importance of the corresponding eigenspaces in the amplification or attenuation of the signal $x$. As $1/r(\lambda)$ is monotonically decreasing, the weights of the eigenspaces corresponding to eigenvectors of lower frequencies (lower eigenvalues) of $\mathcal{L}$ are higher and vice versa. The filter gain for the lower-frequency components of $x$ is higher compared to the higher frequencies, i.e., $g(\mathcal{L})$ is a low-pass filter. The validity of $r(\lambda)$ is established by this low-pass filtering behavior, and hence the proof. ∎

Note that the theorem also holds for other definitions of the normalized or unnormalized Laplacian and for any spectrum in $[0, \infty)$, provided the monotonicity property is maintained. In the context of Theorem 1, to design regularized filters it is enough to pick a regularization function with the monotonicity property and to plug it into Equation 3 to define the filter.

Factors affecting the choices of the regularization function

We can have custom designs for the regularization function. However, to obtain a closed-form expression for the filter $g(\mathcal{L})$, the regularization function should itself be expressible in closed form. Hence the choice of $r(\lambda)$ should be limited to functions with a power series expansion, so that closed-form expressions can be obtained for easier computation.

The powers of the Laplacian involved in the expression can also affect graph learning, since $(\mathcal{L}^k)_{ij} = 0$ if the shortest path distance between nodes $i$ and $j$ is greater than $k$ hammond2011wavelets (). For example, the regularization function of the one-step random walk (with $a = 2$) and that of the inverse cosine are approximately the same. But the computation of the cosine filter involves higher-order even powers of the Laplacian, whose nonzero elements are determined by the graph structure. Similarly, by knowing the spectrum of the Laplacian, it is possible to precompute the values of the hyper-parameters to precisely design the form of the regularization function over that spectrum. In the next section, we discuss a set of filters corresponding to the regularization functions in Table 2.
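
The locality property mentioned above is easy to verify numerically; the sketch below uses a path graph (an illustrative choice) to show that an entry of a Laplacian power is zero whenever the corresponding nodes are farther apart than the power.

```python
import numpy as np

n = 5
W = np.zeros((n, n))
for i in range(n - 1):                           # path graph: 0-1-2-3-4
    W[i, i + 1] = W[i + 1, i] = 1.0
Dm12 = np.diag(W.sum(1) ** -0.5)
L = np.eye(n) - Dm12 @ W @ Dm12                  # normalized Laplacian
L2 = L @ L

print(L[0, 2] == 0.0, L2[0, 2] != 0.0)           # nodes 2 hops apart: zero in L, nonzero in L^2
print(L2[0, 3] == 0.0)                           # nodes 3 hops apart: still zero in L^2
```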

5.3 Regularized filters for GCNNs

We take the regularization functions in the second column of Table 2 and plug them into Equation 3 to define the regularized filters. The results are summarized in the third column of Table 2. Note that variants of some of these filters are already familiar in the literature, as explained below.

Case 1: In the $p$-step random walk filter, if we put $a = 2$ and $p = 1$, we get the filter corresponding to GCN kipf2016semi (). Case 2: The filter used in IGCN li2019label () uses higher powers of the GCN filter; hence it corresponds to a $p$-step random walk filter with $a = 2$ and $p = k$. Case 3: As per li2019label (), the graph filter of the label propagation (LP) method for semi-supervised learning takes the form of the regularized Laplacian filter. Case 4: The GraphHeat filter is a diffusion filter together with an identity matrix.
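
Case 1 can be checked directly: with a = 2 and p = 1 the random walk filter reduces to the GCN propagation matrix without the renormalization trick. A minimal sketch on the toy graph:

```python
import numpy as np

W = np.array([[0,1,1,0],[1,0,1,0],[1,1,0,1],[0,0,1,0]], dtype=float)
Dm12 = np.diag(W.sum(1) ** -0.5)
L = np.eye(4) - Dm12 @ W @ Dm12                  # normalized Laplacian

one_step_rw = 2.0 * np.eye(4) - L                # (a*I - L)^p with a = 2, p = 1
gcn_filter  = np.eye(4) + Dm12 @ W @ Dm12        # I + D^{-1/2} W D^{-1/2} (no self-loops)
assert np.allclose(one_step_rw, gcn_filter)
```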

Computational complexity: For the regularized Laplacian, learning costs $O(n^3)$ as it involves a matrix inversion. For the other filters, it is $O(K|E|)$, where $K$ is the maximum power of the Laplacian involved in the approximation of the filter.

5.4 Analysis of regularization behavior of GCNNs

The idea is to identify the regularization function corresponding to the state-of-the-art networks from their filter definition. This helps to analyze the regularization capability of their filters.

ChebyNet: ChebyNet filtering defferrard2016convolutional () is defined as $g * x = \sum_{k=0}^{K-1} \theta_k \mathcal{L}^k x$, where we assume the parameters $\theta_k$ are the coefficients of the expansion of the matrix exponential $e^{c\mathcal{L}}$. The regularization function is then $r(\lambda) = e^{-c\lambda}$, where $c$ is a constant determined by the $\theta_k$'s, the parameters learned by the network. Hence the regularization happening in ChebyNet is the exact opposite of the expected behavior, since small eigenvalues are attenuated more and large ones are attenuated less, as shown in Figure 2(a). To improve the regularization property of ChebyNet, the filtering should be done as per the negative exponential or diffusion regularization function, and for this the filtering operation should be changed to approximate $e^{-c\mathcal{L}} x$, with the corresponding sign constraints on the parameters $\theta_k$ to keep up with the desired regularization property. We can also bring an additional hyper-parameter into the power of the exponential function. Note that in this case, it corresponds to the diffusion filter.

Figure 2: Regularization function $r(\lambda)$ of: (a) ChebyNet, (b) GCN, (c) GraphHeat, (d) and (e) IGCN for two different values of $k$.

GCN: The GCN filtering kipf2016semi () operation can be written as $g * x = \theta (2 I_n - \mathcal{L}) x$, where we assume the parameter $\theta$ is 1 in the exponential approximation $2 - \lambda \approx e^{1 - \lambda}$. The regularization function is then $r(\lambda) = c\, e^{\lambda}$, where $c$ is a constant determined by the parameter $\theta$. Hence the regularization happening in GCN is as desired, as shown in Figure 2(b). Note that the filter of GCN corresponds to the first-order approximation of the diffusion filter. So in effect, it is the diffusion process that harnesses the representation capability of GCN by changing the sign of the parameters compared to ChebyNet, as explained in Section 4.

Spectral analysis of the renormalization trick: The trick refers to the process of adding self-loops kipf2016semi () to the graph for stable training of the network. Wu et al. pmlr-v97-wu19e () have shown that adding self-loops helps to shrink the Laplacian spectrum from [0, 2] to [0, 1.5], which boosts the low-pass filtering behavior. So the regularization function remains of the same form as mentioned above, but with the range of eigenvalues now lying in the interval [0, 1.5].

GraphHeat: The GraphHeat filtering xu2019graphheat () operation can be written as $g * x = (\theta_0 I_n + \theta_1 e^{-s\mathcal{L}}) x$. The regularization function is then $r(\lambda) = 1/(c + e^{-s\lambda})$, where we assume $c$ is a factor determined by $\theta_0$ and $\theta_1$. The regularization function is shown in Figure 2(c).

IGCN: The improved graph convolutional network (IGCN) filtering li2019label () operation can be written as $g * x = \theta (2 I_n - \mathcal{L})^k x$. The regularization function is then $r(\lambda) = c\, e^{k\lambda}$, where $c$ is a constant determined by the parameter $\theta$. Hence the regularization happening in IGCN is as desired, as shown in Figure 2(d).

6 Experiments

The variants of the proposed filters in Table 2 are compared with state-of-the-art GCNNs, namely ChebyNet defferrard2016convolutional (), GCN kipf2016semi (), GraphHeat xu2019graphheat (), and IGCN (RNM variant of the filter) li2019label (). The comparison is also made with graph-regularization-based algorithms for semi-supervised learning, namely manifold regularization (ManiReg) belkin2006manifold (), semi-supervised embedding (SemiEmb) weston2012deep (), and label propagation (LP) zhu2003semi (). Other baselines used are Planetoid yang2016revisiting (), DeepWalk perozzi2014deepwalk (), and the iterative classification algorithm (ICA) lu2003link (). Citation network datasets yang2016revisiting () - Cora, Citeseer, and Pubmed - are used for the study. In these graphs, nodes represent documents and edges represent citations. The datasets also contain 'bag-of-words' feature vectors for each document.

Methods Cora Citeseer Pubmed
ManiReg 59.5 60.1 70.7
SemiEmb 59.0 59.6 71.1
LP 68.0 45.3 63.0
DeepWalk 67.2 43.2 65.3
ICA 75.1 69.1 73.9
Planetoid 75.7 64.7 77.2
MLP 56.2 57.1 70.7
GCN 81.78 ± 0.64 (1.02) 70.73 ± 0.53 (1.03) 78.48 ± 0.58 (1.21)
IGCN 80.49 ± 1.58 (1.02) 68.86 ± 1.01 (1.06) 77.87 ± 1.55 (1.25)
ChebyNet 82.16 ± 0.74 (1.03) 70.46 ± 0.70 (1.04) 78.24 ± 0.43 (1.21)
GraphHeat 81.38 ± 0.69 (1.04) 69.90 ± 0.50 (1.05) 75.64 ± 0.64 (1.34)
Diffusion 83.12 ± 0.37 (1.11) 71.17 ± 0.43 (1.06) 79.20 ± 0.36 (1.80)
1-step RW 82.36 ± 0.34 (1.02) 71.05 ± 0.34 (1.03) 78.74 ± 0.27 (1.21)
2-step RW 82.51 ± 0.22 (1.03) 71.18 ± 0.59 (1.05) 78.64 ± 0.20 (1.29)
3-step RW 82.56 ± 0.24 (1.05) 71.21 ± 0.63 (1.04) 78.28 ± 0.36 (1.81)
Cosine 75.53 ± 0.52 (1.03) 67.29 ± 0.64 (1.03) 75.52 ± 0.53 (1.29)
Table 3: Classification accuracy (in percentage ± standard deviation) for each dataset, along with the average time taken for one training epoch (in brackets).

6.1 Experimental setup

For the GCNN models, the network architecture proposed by Kipf et al. kipf2016semi () is used for the experiments. Networks with one layer, two layers, and three layers of graph convolution (GC) are used to evaluate all the filters under study. Along with this, networks with a GC layer followed by one and two dense layers are also studied. In the experiments, it has been found that the network with two layers of GC outperformed the other architectures. It takes the form

$$ Z = \mathrm{softmax}\left( g(\mathcal{L}) \, \mathrm{ReLU}\left( g(\mathcal{L})\, X\, W^{(1)} \right) W^{(2)} \right) $$

where $g(\mathcal{L})$ is the filter, $X$ is the input feature matrix, $W^{(1)}$ contains the filter parameters of the first layer ($F$ is the number of filters), and $W^{(2)}$ contains the filter parameters of the second layer ($C$ is the number of filters). Note that the value of $C$ equals the total number of classes in the data output. The loss function optimized is the cross-entropy error over the labeled examples kipf2016semi (), defined as follows,

$$ \mathrm{Loss} = - \sum_{i \in \mathcal{Y}_L} \sum_{c=1}^{C} Y_{ic} \ln Z_{ic} $$

where $\mathcal{Y}_L$ is the set of nodes whose labels are known and $Y_{ic}$ is defined as 1 if the label of node $i$ is $c$ and 0 otherwise. For training, all the feature vectors and 20 labels per class are used. The same dataset split as used by Yang et al. yang2016revisiting () is followed in the experiments. All the GCNN models corresponding to different filters are trained 10 times each, according to a unique random seed selected at random. All models are trained for a maximum of 200 epochs using the ADAM optimizer kingma2014adam () with the learning rate fixed at 0.01. Early stopping is applied if the validation loss does not decrease for 10 consecutive epochs. Network weight initialization and normalization of the input feature vectors of the nodes are done as per glorot2010understanding (). Implementation is done using TensorFlow abadi2016tensorflow (). The hardware used for the experiments is an Intel Xeon E5-2630 v3 2.4 GHz CPU, 80 GB RAM, and an Nvidia GeForce GTX 1080-Ti GPU. Accuracy is used as the performance measure, and models are evaluated on a test set of 1000 labeled examples. Since the focus of the study is the comparison of spectral filters, for the GCNN variants the mean accuracy along with the standard deviation is reported. For algorithms other than GCNNs, the accuracy on a single dataset split reported by Kipf et al. kipf2016semi () is given, since we follow the same dataset split.
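
A minimal numpy sketch of the two-layer forward pass and the masked cross-entropy loss described above (this is an illustrative reimplementation, not the authors' TensorFlow code); the filter matrix, features, labels, and weights are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, F, C = 6, 8, 4, 3                          # nodes, input features, filters, classes

gL = np.eye(N) - 0.1 * rng.random((N, N))        # placeholder filter matrix g(L)
X  = rng.random((N, d))                          # input feature matrix
W1 = 0.1 * rng.standard_normal((d, F))           # first-layer filter parameters
W2 = 0.1 * rng.standard_normal((F, C))           # second-layer filter parameters

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

Z = softmax(gL @ np.maximum(gL @ X @ W1, 0.0) @ W2)   # two GC layers with a ReLU in between

labeled = [0, 2, 4]                              # indices of the labeled nodes
Y = np.eye(C)[rng.integers(0, C, size=N)]        # one-hot labels (random placeholders)
loss = -np.sum(Y[labeled] * np.log(Z[labeled] + 1e-12))   # cross-entropy over labeled nodes
```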

The higher powers of the graph Laplacian are computed with Chebyshev polynomial approximations hammond2011wavelets () for the ChebyNet and $p$-step random walk filters, considering their computational advantage.

The Chebyshev polynomial of order $k$ is computed by the recurrence relation

$$ T_k(x) = 2x\, T_{k-1}(x) - T_{k-2}(x) $$

where $T_0(x)$ and $T_1(x)$ are defined as 1 and $x$ respectively. The polynomials form an orthogonal basis for $L^2\left([-1, 1], dx/\sqrt{1 - x^2}\right)$, i.e., the space of square integrable functions with respect to the measure $dx/\sqrt{1 - x^2}$. Hence, when it comes to computing the powers of the graph Laplacian ($\mathcal{L}$), its spectrum has to be rescaled into the interval [-1, 1] as follows,

$$ \tilde{\mathcal{L}} = \frac{2 \mathcal{L}}{\lambda_{max}} - I_n $$

where $\tilde{\mathcal{L}}$ is the rescaled Laplacian, $\lambda_{max}$ is the maximum eigenvalue of $\mathcal{L}$, and $n$ is the number of nodes in the graph.
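
A minimal sketch of this rescaling and the Chebyshev recurrence; the toy graph is illustrative, and the basis returned here is what a polynomial filter would be expanded in.

```python
import numpy as np

W = np.array([[0,1,1,0],[1,0,1,0],[1,1,0,1],[0,0,1,0]], dtype=float)
Dm12 = np.diag(W.sum(1) ** -0.5)
L = np.eye(4) - Dm12 @ W @ Dm12
lam_max = np.linalg.eigvalsh(L).max()

L_tilde = 2.0 * L / lam_max - np.eye(4)          # rescaled Laplacian with spectrum in [-1, 1]

def cheby_basis(L_tilde, K):
    """Return [T_0(L~), ..., T_{K-1}(L~)] via T_k = 2 L~ T_{k-1} - T_{k-2}."""
    T = [np.eye(L_tilde.shape[0]), L_tilde]
    for _ in range(2, K):
        T.append(2.0 * L_tilde @ T[-1] - T[-2])
    return T[:K]

basis = cheby_basis(L_tilde, K=4)                # building blocks for polynomial filters
```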

But for the GraphHeat, diffusion, and cosine filters, direct computation is done. As these filters involve a hyper-parameter multiplied with the Laplacian matrix (unlike ChebyNet and the $p$-step RW filters), the Chebyshev approximation is not applicable, because when the rescaling of the matrix happens, the effect of this hyper-parameter multiplication is nullified.

6.2 Results

The results are tabulated in Table 3, with the best results in bold. The hyper-parameters used for the models are the dropout rate (0.8), the L2 regularization factor for the first-layer weights, and the number of filters used in each layer, which is tuned over 16, 32, 64, and 128. These hyper-parameters are optimized on an additional validation set of 500 labeled examples, as followed in kipf2016semi (). The diffusion filter, which is a matrix exponential, is approximated by its first few terms, so that only a single parameter is learned. For the diffusion filter, ChebyNet, and GraphHeat, the number of terms used for the approximation of the matrix exponential is tuned. Similarly, the cosine filter, which involves the cosine of a matrix, is approximated by a truncated series whose number of terms is also tuned. For the diffusion and GraphHeat filters, the value of $s$ is tuned in the range [0.5, 1.5], and for the $p$-step random walk filters, the value of $a$ is tuned in the range [2, 24]. For GCN and IGCN, the authors' code was reproduced for the experiments.

6.3 Discussion

Compared with the graph regularization and label propagation methods, the GCNN methods have better performance. The diffusion filter has the highest accuracy on Cora and Pubmed. On Citeseer the best is the 3-step RW filter, but its difference from the diffusion filter is negligible. Among the methods other than GCNNs, ICA and Planetoid have better performance. The 1-step RW filter is an improved version of GCN and IGCN in terms of regularization ability; comparing them, the 1-step RW filter performs better on all datasets. The three variants of the RW filters have higher performance on the Cora and Citeseer datasets compared with GCN and IGCN, the reason being attributed to the tuning of the parameter $a$; in the case of Pubmed, the RW filters and GCN have better performance than IGCN.

Figure 3: Accuracy variation with hyper-parameters. (a) Diffusion, (b) 1-step RW, (c) 2-step RW, (d) 3-step RW

The ChebyNet, GraphHeat, and diffusion filters all calculate the first few powers of the Laplacian in their learning settings. It has been observed that the performance of the diffusion filter is better than both despite having only one parameter to learn. The performance of the cosine filter is lower compared with the other filters. The reason is the approximation of the cosine filter, which requires higher even powers of the Laplacian whose elements can be mostly zeros depending on the graph structure, as discussed in Section 5.2.1. It also skips odd hops in the graph, which results in some information loss while learning.

The average time required for one training epoch is shown in the brackets along with the accuracy. It can be noted that the time taken increases as the higher order powers of the Laplacian and number of filters increase.

Effects of hyper-parameter tuning: The variation in accuracy against the hyper-parameters of the proposed filters applied to the Cora dataset is given in Figure 3. For the diffusion filter, accuracy increases as $s$ is increased, but there is a drop in accuracy after a peak value of $s$. The same holds for the 1-step RW filter. In the case of the 2-step and 3-step RW filters, accuracy increases as the value of $a$ increases, but after a threshold point the variation in accuracy is minimal. A similar trend is observed for the other datasets. For these experiments, the network with two layers of GC having 32 filters is used.

6.4 Decoupling low pass filtering from network learning

To underline the practical impact of the proposed framework, an experiment is done that decouples the low-pass filtering from the network learning, inspired by pmlr-v97-wu19e (). First, the filtering is done separately using $g(\mathcal{L})$ (with no learnable parameters), and the resulting filtered features are given to a two-layer MLP (chosen after ablation studies among 1- and 3-layer models). This helps to identify the impact of the choice of $r(\lambda)$ formulated in our work, independent of the network parameters. The results are given in Table 4.
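
A minimal sketch of this decoupled setup (an illustrative reimplementation assuming a diffusion filter; the graph, features, and weights are random placeholders): the parameter-free filter is applied to the features once, and the filtered features are then fed to an ordinary MLP.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
N, d, H, C = 6, 8, 16, 3                         # nodes, features, hidden units, classes

A = rng.random((N, N)); A = (A + A.T) / 2.0      # random symmetric edge weights
np.fill_diagonal(A, 0.0)
Dm12 = np.diag(A.sum(1) ** -0.5)
L = np.eye(N) - Dm12 @ A @ Dm12                  # normalized Laplacian

X = rng.random((N, d))
X_filtered = expm(-1.0 * L) @ X                  # diffusion filtering, no learnable parameters

W1 = 0.1 * rng.standard_normal((d, H))           # MLP weights (placeholders, not trained here)
W2 = 0.1 * rng.standard_normal((H, C))
logits = np.maximum(X_filtered @ W1, 0.0) @ W2   # two-layer MLP on the pre-filtered features
```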

Methods Cora Citeseer Pubmed
MLP 56.50 (1.21) 53.57 (2.71) 71.87 (0.21)
GCN 77.99 (0.75) 68.28 (0.51) 76.05 (0.33)
IGCN 81.44 (0.41) 70.64 (0.67) 78.46 (0.70)
ChebyNet 27.18 (1.46) 28.63 (1.08) 59.99 (0.58)
GraphHeat 73.57 (0.72) 66.31 (0.59) 73.50 (0.85)
Diffusion 78.47 (0.32) 68.47 (0.61) 76.72 (0.59)
1-step RW 77.31 (0.52) 67.95 (0.80) 76.34 (0.47)
2-step RW 78.08 (0.62) 69.41 (0.44) 76.90 (0.23)
3-step RW 78.98 (0.28) 69.23 (0.71) 71.49 (0.81)
Cosine 70.38 (0.82) 64.67 (0.62) 72.01 (0.74)
Table 4: Accuracy of the filters. Std dev. is given in brackets.

Observations: In Section 5.4, we found that, unlike the other networks, the $r(\lambda)$ of ChebyNet is the opposite of the required monotone property. This is evident from the results, as its performance is lower than the rest, including the MLP. All other filters except ChebyNet satisfy Theorem 1, and hence their performance is better. They also perform better than the MLP, which indicates the importance of the proposed framework. The results are lower compared with the GCN architecture kipf2016semi () followed in the previous experiment. This points to the possibility that the stochastic nature of neural networks may not guarantee the desired filtering properties. The case of ChebyNet is an example, as its performance is good in the GCN architecture despite contradicting Theorem 1, whereas its performance in the new experiment is lower.

7 Conclusion

We formulated a framework to design regularized filters for GCNNs based on regularization in graphs modeled by the graph Laplacian. A new set of regularized filters is proposed, and state-of-the-art filter designs are identified as their special cases. The new filter designs proposed in the context of the framework have shown superior performance in semi-supervised classification tasks compared to conventional methods and state-of-the-art GCNNs. Considering the practical impact of the proposed framework, we also observed that the stochastic nature of neural networks may not guarantee the desired low-pass filtering property that has to be satisfied by spectral GCNN filters.

Acknowledgments

The authors thank Subrahamanian Moosath K.S, IIST, Thiruvananthapuram, and Shiju S.S, Technical Architect, IBS Software Pvt. Ltd., Thiruvananthapuram for the productive discussions.

Appendix A Regularization in graphs, support vector kernels and spectral GCNN filters

The support vector kernel $k(x, x')$ is considered as a similarity measure between a pair of data points in a space $\mathcal{X}$. Support vector kernels can be formulated by solving the self-consistency condition smola1998connection (),

$$ \langle (P k)(x, \cdot), (P k)(x', \cdot) \rangle = k(x, x') \qquad (4) $$

where $P$ is the regularization operator.

From Equation 4, Smola et al. smola1998connection () found that, given a regularization operator $P$, there exists a support vector kernel $k$ that minimizes the regularized risk functional,

$$ R_{reg}[f] = R_{emp}[f] + \frac{\lambda}{2} \|P f\|^2 \qquad (5) $$

and that also enforces flatness (determined by $\|P f\|^2$) in the feature space or reproducing kernel Hilbert space (RKHS) of functions. They also found that, given a support vector kernel $k$, a regularization operator $P$ can be found such that a regularization network smola1998connection () using $P$ is equivalent to a support vector machine that uses the kernel $k$. Note that $R_{emp}$ is the empirical loss function and $\lambda$ is a hyper-parameter.

Smola et al. smola2003kernels () used the above concepts to design support vector kernels on graphs. As shown in Equation 3, the graph Laplacian ($\mathcal{L}$) can be used to define a smoothness functional on graphs that aids in designing regularization operators. They proved that if $\mathcal{H}$ is the image of $\mathbb{R}^n$ under $P$ (a positive semidefinite regularization matrix), then $\mathcal{H}$ with the dot product defined as $\langle f, g \rangle_{\mathcal{H}} = \langle f, P g \rangle$ is an RKHS, and the corresponding support vector kernel is defined as $K = P^{-1}$, where $P^{-1}$ denotes the pseudo-inverse if $P$ is not invertible.

Now, if we consider the case of GCNN filters, we can observe that if the parameters $\theta$'s associated with the filter definition maintain the positive semidefiniteness of the matrix $g(\mathcal{L})$, then the filter can be considered as a valid and equivalent support vector kernel that solves the regularized risk functional in Equation 5. The corresponding regularization behavior induced by $\|P f\|^2$ in Equation 5 can be attributed to the corresponding regularization function $r(\lambda)$.

References

  1. Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, pp. 2278–2324, Nov 1998.
  2. F. Monti, D. Boscaini, J. Masci, E. Rodolà, J. Svoboda, and M. M. Bronstein, “Geometric deep learning on graphs and manifolds using mixture model cnns,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5425–5434, July 2017.
  3. J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl, “Neural message passing for quantum chemistry,” in Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1263–1272, JMLR. org, 2017.
  4. D. K. Duvenaud, D. Maclaurin, J. Iparraguirre, R. Bombarell, T. Hirzel, A. Aspuru-Guzik, and R. P. Adams, “Convolutional networks on graphs for learning molecular fingerprints,” in Advances in neural information processing systems, pp. 2224–2232, 2015.
  5. S. Kearnes, K. McCloskey, M. Berndl, V. Pande, and P. Riley, “Molecular graph convolutions: moving beyond fingerprints,” Journal of computer-aided molecular design, vol. 30, no. 8, pp. 595–608, 2016.
  6. W. Hamilton, Z. Ying, and J. Leskovec, “Inductive representation learning on large graphs,” in Advances in neural information processing systems, pp. 1024–1034, 2017.
  7. Y. Li, D. Tarlow, M. Brockschmidt, and R. Zemel, “Gated graph sequence neural networks,” arXiv preprint arXiv:1511.05493, 2015.
  8. P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Lio, and Y. Bengio, “Graph attention networks,” arXiv preprint arXiv:1710.10903, 2017.
  9. F. R. Chung, Spectral graph theory. No. 92, American Mathematical Soc., 1997.
  10. D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, and P. Vandergheynst, “The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains,” IEEE signal processing magazine, vol. 30, no. 3, pp. 83–98, 2013.
  11. J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun, “Spectral networks and locally connected networks on graphs,” arXiv preprint arXiv:1312.6203, 2013.
  12. M. Defferrard, X. Bresson, and P. Vandergheynst, “Convolutional neural networks on graphs with fast localized spectral filtering,” in Advances in neural information processing systems, pp. 3844–3852, 2016.
  13. T. N. Kipf and M. Welling, “Semi-supervised classification with graph convolutional networks,” arXiv preprint arXiv:1609.02907, 2016.
  14. B. Xu, H. Shen, Q. Cao, Y. Qiu, and X. Cheng, “Graph wavelet neural network,” arXiv preprint arXiv:1904.07785, 2019.
  15. B. Xu, H. Shen, Q. Cao, K. Cen, and X. Cheng, “Graph convolutional networks using heat kernel for semi-supervised learning,” in Proceedings of the 28th International Joint Conference on Artificial Intelligence, pp. 1928–1934, AAAI Press, 2019.
  16. D. Zhou and B. Schölkopf, “A regularization framework for learning from graph data,” in ICML 2004 Workshop on Statistical Relational Learning and Its Connections to Other Fields (SRL 2004), pp. 132–137, 2004.
  17. M. Belkin, I. Matveeva, and P. Niyogi, “Regularization and semi-supervised learning on large graphs,” in International Conference on Computational Learning Theory, pp. 624–638, Springer, 2004.
  18. A. J. Smola and R. Kondor, “Kernels and regularization on graphs,” in Learning theory and kernel machines, pp. 144–158, Springer, 2003.
  19. Q. Li, X.-M. Wu, H. Liu, X. Zhang, and Z. Guan, “Label efficient semi-supervised learning via graph filtering,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9582–9591, 2019.
  20. M. Belkin, P. Niyogi, and V. Sindhwani, “Manifold regularization: A geometric framework for learning from labeled and unlabeled examples,” Journal of machine learning research, vol. 7, no. Nov, pp. 2399–2434, 2006.
  21. X. Zhu, Z. Ghahramani, and J. D. Lafferty, “Semi-supervised learning using gaussian fields and harmonic functions,” in Proceedings of the 20th International conference on Machine learning (ICML-03), pp. 912–919, 2003.
  22. J. Weston, F. Ratle, H. Mobahi, and R. Collobert, “Deep learning via semi-supervised embedding,” in Neural networks: Tricks of the trade, pp. 639–655, Springer, 2012.
  23. F. Wu, A. Souza, T. Zhang, C. Fifty, T. Yu, and K. Weinberger, “Simplifying graph convolutional networks,” in Proceedings of the 36th International Conference on Machine Learning, pp. 6861–6871, 2019.
  24. J. Klicpera, S. Weißenberger, and S. Günnemann, “Diffusion improves graph learning,” in Advances in Neural Information Processing Systems, pp. 13333–13345, 2019.
  25. F. Gama, J. Bruna, and A. Ribeiro, “Stability properties of graph neural networks,” arXiv preprint arXiv:1905.04497, 2019.
  26. X. Zhu and A. B. Goldberg, “Introduction to semi-supervised learning,” Synthesis lectures on artificial intelligence and machine learning, vol. 3, no. 1, pp. 1–130, 2009.
  27. A. J. Smola, B. Schölkopf, and K.-R. Müller, “The connection between regularization operators and support vector kernels,” Neural networks, vol. 11, no. 4, pp. 637–649, 1998.
  28. D. K. Hammond, P. Vandergheynst, and R. Gribonval, “Wavelets on graphs via spectral graph theory,” Applied and Computational Harmonic Analysis, vol. 30, no. 2, pp. 129–150, 2011.
  29. Z. Yang, W. W. Cohen, and R. Salakhutdinov, “Revisiting semi-supervised learning with graph embeddings,” in Proceedings of the 33rd International Conference on International Conference on Machine Learning-Volume 48, pp. 40–48, 2016.
  30. B. Perozzi, R. Al-Rfou, and S. Skiena, “Deepwalk: Online learning of social representations,” in Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 701–710, 2014.
  31. Q. Lu and L. Getoor, “Link-based classification,” in Proceedings of the 20th International Conference on Machine Learning (ICML-03), pp. 496–503, 2003.
  32. D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
  33. X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks,” in Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 249–256, 2010.
  34. M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al., “Tensorflow: Large-scale machine learning on heterogeneous distributed systems,” arXiv preprint arXiv:1603.04467, 2016.