CayleyNets: Graph Convolutional Neural Networks with Complex Rational Spectral Filters

Abstract

The rise of graph-structured data such as social networks, regulatory networks, citation graphs, and functional brain networks, in combination with the resounding success of deep learning in various applications, has brought about interest in generalizing deep learning models to non-Euclidean domains. In this paper, we introduce a new spectral-domain convolutional architecture for deep learning on graphs. The core ingredient of our model is a new class of parametric rational complex functions (Cayley polynomials) that allow efficiently computing localized regular filters on graphs that specialize on frequency bands of interest. Our model scales linearly with the size of the input data for sparsely connected graphs, can handle different constructions of Laplacian operators, and typically requires fewer parameters than previous models. Extensive experimental results show the superior performance of our approach on a variety of graph learning problems.

1 Introduction

In many domains, one has to deal with large-scale data with an underlying non-Euclidean structure. Prominent examples of such data are social networks, genetic regulatory networks, functional networks of the brain, and 3D shapes represented as discrete manifolds. The recent success of deep neural networks and, in particular, convolutional neural networks (CNNs) [20] has raised interest in geometric deep learning techniques trying to extend these models to data residing on graphs and manifolds. Geometric deep learning approaches have been successfully applied to computer graphics and vision [22], brain imaging [19], and drug design [11] problems, to mention a few. For a comprehensive presentation of methods and applications of deep learning on graphs and manifolds, we refer the reader to the review paper [5].

Related work. In this paper, we are in particular interested in deep learning on graphs. The earliest neural network formulations on graphs were proposed by Gori et al. [12] and Scarselli et al. [27], combining random walks with recurrent neural networks (their work has recently enjoyed renewed interest [21]). The first CNN-type architecture on graphs was proposed by Bruna et al. [6]. One of the key challenges of extending CNNs to graphs is the lack of vector-space structure and shift-invariance, which makes the classical notion of convolution elusive. Bruna et al. formulated convolution-like operations in the spectral domain, using the graph Laplacian eigenbasis as an analogy of the Fourier transform [29]. Henaff et al. [14] used smooth parametric spectral filters in order to achieve localization in the spatial domain and keep the number of filter parameters independent of the input size. Defferrard et al. [9] proposed an efficient filtering scheme using recurrent Chebyshev polynomials applied to the Laplacian operator. Kipf and Welling [18] simplified this architecture using filters operating on 1-hop neighborhoods of the graph. Atwood and Towsley [1] proposed the Diffusion CNN architecture based on random walks on graphs. Monti et al. [24] (and later, [13]) proposed a spatial-domain generalization of CNNs to graphs using local patch operators represented as Gaussian mixture models, showing a significant advantage of such models in generalizing across different graphs. In [25], graph CNNs were extended to multiple graphs and applied to matrix completion and recommender system problems.

Main contribution. In this paper, we construct graph CNNs employing an efficient spectral filtering scheme based on Cayley polynomials that enjoys advantages similar to those of the Chebyshev filters [9], such as localization and linear complexity. The main advantage of our filters over [9] is their ability to detect narrow frequency bands of importance during training, and to specialize on them while remaining well-localized on the graph. We demonstrate experimentally that this affords our method greater flexibility, making it perform better on a broad range of graph learning problems.

Notation. We use a, \mathbf{a}, and \mathbf{A} to denote scalars, vectors, and matrices, respectively. \bar{z} denotes the conjugate of a complex number z, \mathrm{Re}\{z\} its real part, and i the imaginary unit. \mathrm{diag}(a_1, \dots, a_n) denotes an n \times n diagonal matrix with diagonal elements a_1, \dots, a_n. \mathrm{Diag}(\mathbf{A}) denotes the n \times n diagonal matrix obtained by setting to zero the off-diagonal elements of \mathbf{A}. \mathrm{Off}(\mathbf{A}) = \mathbf{A} - \mathrm{Diag}(\mathbf{A}) denotes the matrix containing only the off-diagonal elements of \mathbf{A}. \mathbf{I} is the identity matrix and \mathbf{A} \circ \mathbf{B} denotes the Hadamard (element-wise) product of matrices \mathbf{A} and \mathbf{B}. Proofs are given in the appendix.

2 Spectral techniques for deep learning on graphs

Spectral graph theory. Let \mathcal{G} = (\mathcal{V}, \mathcal{E}, \mathbf{W}) be an undirected weighted graph with n vertices, represented by a symmetric adjacency matrix \mathbf{W} = (w_{ij}). We define w_{ij} = 0 if (i,j) \notin \mathcal{E} and w_{ij} > 0 if (i,j) \in \mathcal{E}. We denote by \mathcal{N}_{k,m} the m-hop neighborhood of vertex k, containing vertices that are at most m edges away from k. The unnormalized graph Laplacian is the n \times n symmetric positive-semidefinite matrix \boldsymbol{\Delta}_\mathrm{u} = \mathbf{D} - \mathbf{W}, where \mathbf{D} = \mathrm{diag}(\sum_{j} w_{1j}, \dots, \sum_{j} w_{nj}) is the degree matrix. The normalized graph Laplacian is defined as \boldsymbol{\Delta}_\mathrm{n} = \mathbf{D}^{-1/2}\boldsymbol{\Delta}_\mathrm{u}\mathbf{D}^{-1/2} = \mathbf{I} - \mathbf{D}^{-1/2}\mathbf{W}\mathbf{D}^{-1/2}. In the following, we use the generic notation \boldsymbol{\Delta} to refer to either of these Laplacians.
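As a concrete reference for these definitions, the following is a minimal sketch (assuming NumPy and SciPy; the function name laplacians is ours, for illustration only) that builds both Laplacians from a symmetric adjacency matrix:

```python
# Minimal sketch: unnormalized and normalized graph Laplacians from a
# symmetric adjacency matrix W of an undirected weighted graph.
import numpy as np
import scipy.sparse as sp

def laplacians(W):
    """W: symmetric (sparse or dense) adjacency matrix with non-negative weights."""
    W = sp.csr_matrix(W)
    d = np.asarray(W.sum(axis=1)).ravel()                     # vertex degrees
    D = sp.diags(d)
    L_u = D - W                                               # unnormalized Laplacian
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L_n = sp.eye(W.shape[0]) - d_inv_sqrt @ W @ d_inv_sqrt    # normalized Laplacian
    return L_u, L_n
```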

Since both the normalized and unnormalized Laplacians are symmetric and positive semi-definite, they admit an eigendecomposition \boldsymbol{\Delta} = \boldsymbol{\Phi}\boldsymbol{\Lambda}\boldsymbol{\Phi}^\top, where \boldsymbol{\Phi} = (\boldsymbol{\phi}_1, \dots, \boldsymbol{\phi}_n) are the orthonormal eigenvectors and \boldsymbol{\Lambda} = \mathrm{diag}(\lambda_1, \dots, \lambda_n) is the diagonal matrix of corresponding non-negative eigenvalues (spectrum) 0 = \lambda_1 \le \lambda_2 \le \dots \le \lambda_n. The eigenvectors play the role of Fourier atoms in classical harmonic analysis and the eigenvalues can be interpreted as (the squares of) frequencies. Given a signal \mathbf{f} = (f_1, \dots, f_n)^\top on the vertices of graph \mathcal{G}, its graph Fourier transform is given by \hat{\mathbf{f}} = \boldsymbol{\Phi}^\top\mathbf{f}. Given two signals \mathbf{f}, \mathbf{g} on the graph, their spectral convolution can be defined as the element-wise product of the Fourier transforms,

\mathbf{f} \star \mathbf{g} = \boldsymbol{\Phi}\big( (\boldsymbol{\Phi}^\top\mathbf{f}) \circ (\boldsymbol{\Phi}^\top\mathbf{g}) \big) = \boldsymbol{\Phi}\,\mathrm{diag}(\hat{g}_1, \dots, \hat{g}_n)\,\hat{\mathbf{f}},   (1)

which corresponds to the property referred to as the Convolution Theorem in the Euclidean case.
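For small graphs, the graph Fourier transform and the spectral convolution of Equation 1 can be computed directly from a dense eigendecomposition. The following is an illustrative sketch (NumPy assumed), not an efficient implementation:

```python
# Illustrative sketch of the graph Fourier transform and spectral convolution
# (Equation 1); uses a dense eigendecomposition, so only practical for small graphs.
import numpy as np

def spectral_convolution(L, f, g):
    """L: dense symmetric Laplacian; f, g: signals on the n vertices."""
    lam, Phi = np.linalg.eigh(L)        # eigenvalues and orthonormal eigenvectors
    f_hat = Phi.T @ f                   # graph Fourier transform of f
    g_hat = Phi.T @ g                   # graph Fourier transform of g
    return Phi @ (f_hat * g_hat)        # element-wise product, then inverse transform
```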

Spectral CNNs. Bruna et al. [6] used the spectral definition of convolution (Equation 1) to generalize CNNs on graphs, with a spectral convolutional layer of the form

\mathbf{f}^{\mathrm{out}}_l = \xi\Big( \sum_{l'=1}^{p} \boldsymbol{\Phi}_k\,\hat{\mathbf{G}}_{l,l'}\,\boldsymbol{\Phi}_k^\top\,\mathbf{f}^{\mathrm{in}}_{l'} \Big), \quad l = 1, \dots, q.   (2)

Here the n \times p and n \times q matrices \mathbf{F}^{\mathrm{in}} = (\mathbf{f}^{\mathrm{in}}_1, \dots, \mathbf{f}^{\mathrm{in}}_p) and \mathbf{F}^{\mathrm{out}} = (\mathbf{f}^{\mathrm{out}}_1, \dots, \mathbf{f}^{\mathrm{out}}_q) represent respectively the p- and q-dimensional input and output signals on the vertices of the graph, \boldsymbol{\Phi}_k = (\boldsymbol{\phi}_1, \dots, \boldsymbol{\phi}_k) is an n \times k matrix of the first k eigenvectors, \hat{\mathbf{G}}_{l,l'} is a k \times k diagonal matrix of spectral multipliers representing a learnable filter in the frequency domain, and \xi is a nonlinearity (e.g., ReLU) applied on the vertex-wise function values. Pooling is performed by means of graph coarsening, which, given a graph with n vertices, produces a graph with n' < n vertices and transfers signals from the vertices of the fine graph to those of the coarse one.

This framework has several major drawbacks. First, the spectral filter coefficients are basis dependent, and consequently, a spectral CNN model learned on one graph cannot be applied to another graph. Second, the computation of the forward and inverse graph Fourier transforms incurs expensive multiplications by the matrices \boldsymbol{\Phi}_k and \boldsymbol{\Phi}_k^\top, as there are no FFT-like algorithms on general graphs. Third, there is no guarantee that the filters represented in the spectral domain are localized in the spatial domain (the locality property simulates local receptive fields [8]); assuming k = \mathcal{O}(n) Laplacian eigenvectors are used, a spectral convolutional layer requires \mathcal{O}(pqk) = \mathcal{O}(n) parameters to train.

To address the latter issues, Henaff et al. [14] argued that smooth spectral filter coefficients result in spatially-localized filters (an argument similar to vanishing moments). The filter coefficients are represented as \hat{g}_i = \hat{g}(\lambda_i), where \hat{g}(\lambda) is a smooth transfer function of frequency \lambda. Applying such a filter to a signal \mathbf{f} can be expressed as \hat{g}(\boldsymbol{\Delta})\mathbf{f} = \boldsymbol{\Phi}\,\hat{g}(\boldsymbol{\Lambda})\,\boldsymbol{\Phi}^\top\mathbf{f}, where applying a function to a matrix is understood in the operator functional calculus sense (applying the function to the matrix eigenvalues). Henaff et al. [14] used parametric functions of the form

\hat{g}_{\boldsymbol{\alpha}}(\lambda) = \sum_{j=1}^{r} \alpha_j\,\beta_j(\lambda),   (3)

where \beta_1(\lambda), \dots, \beta_r(\lambda) are some fixed interpolation kernels such as splines, and \boldsymbol{\alpha} = (\alpha_1, \dots, \alpha_r) are the interpolation coefficients used as the optimization variables during network training. In matrix notation, the filter is expressed as \hat{\mathbf{g}} = \mathbf{B}\boldsymbol{\alpha}, where \mathbf{B} = (b_{ij}) = (\beta_j(\lambda_i)) is a k \times r matrix. Such a construction results in filters with r = \mathcal{O}(1) parameters, independent of the input size. However, the authors explicitly computed the Laplacian eigenvectors \boldsymbol{\Phi}, resulting in high computational complexity.
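The sketch below illustrates the idea of such smooth parametric filters; for simplicity it uses Gaussian bumps as a stand-in for the spline kernels of [14], so the kernel choice and the function name are our assumptions rather than the original construction:

```python
# Hypothetical sketch of a smooth parametric spectral filter: the response is a
# fixed set of interpolation kernels (Gaussian bumps, standing in for splines)
# weighted by learnable coefficients alpha. Requires the explicit
# eigendecomposition (lam, Phi), which is what makes this approach expensive.
import numpy as np

def smooth_spectral_filter(lam, Phi, f, alpha, centers, width=0.5):
    """lam: (k,) eigenvalues, Phi: (n, k) eigenvectors, f: (n,) signal,
    alpha: (r,) learnable coefficients, centers: (r,) kernel centers."""
    # B[i, j] = j-th interpolation kernel evaluated at the i-th eigenvalue
    B = np.exp(-((lam[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))
    tau = B @ alpha                       # smooth filter response tau(lambda_i)
    return Phi @ (tau * (Phi.T @ f))      # filtering in the spectral domain
```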

ChebNet. Defferrard et al. [9] used polynomial filters represented in the Chebyshev basis,

\hat{g}_{\boldsymbol{\alpha}}(\tilde{\lambda}) = \sum_{j=0}^{r} \alpha_j\,T_j(\tilde{\lambda}),   (4)

applied to the rescaled frequency \tilde{\lambda} \in [-1, 1]; here, \boldsymbol{\alpha} = (\alpha_0, \dots, \alpha_r) is the (r+1)-dimensional vector of polynomial coefficients parametrizing the filter and optimized during training, and T_j(\lambda) = 2\lambda T_{j-1}(\lambda) - T_{j-2}(\lambda) denotes the Chebyshev polynomial of degree j, defined in a recursive manner with T_1(\lambda) = \lambda and T_0(\lambda) = 1. Chebyshev polynomials form an orthogonal basis for the space of polynomials of order r on [-1, 1]. Applying the filter is performed by \hat{g}_{\boldsymbol{\alpha}}(\tilde{\boldsymbol{\Delta}})\mathbf{f}, where \tilde{\boldsymbol{\Delta}} = 2\lambda_n^{-1}\boldsymbol{\Delta} - \mathbf{I} is the rescaled Laplacian such that its eigenvalues \tilde{\lambda}_i = 2\lambda_n^{-1}\lambda_i - 1 are in the interval [-1, 1].

Such an approach has several important advantages. First, since \hat{g}_{\boldsymbol{\alpha}}(\tilde{\boldsymbol{\Delta}}) contains only matrix powers, additions, and multiplications by scalars, it can be computed avoiding the explicit expensive computation of the Laplacian eigenvectors. Furthermore, due to the recursive definition of the Chebyshev polynomials, the computation of the filter \hat{g}_{\boldsymbol{\alpha}}(\tilde{\boldsymbol{\Delta}})\mathbf{f} entails applying the Laplacian r times, resulting in \mathcal{O}(rn) operations assuming that the Laplacian is a sparse matrix with \mathcal{O}(1) non-zero elements in each row (a valid hypothesis for most real-world graphs that are sparsely connected). Second, the number of parameters is \mathcal{O}(r), independent of the graph size n. Third, since the Laplacian is a local operator affecting only 1-hop neighbors of a vertex and, accordingly, a polynomial of degree r of the Laplacian affects only r-hop neighborhoods, the resulting filters have guaranteed spatial localization.
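A minimal sketch of this recursive filtering scheme (assuming SciPy sparse matrices; for the normalized Laplacian, lam_max can be taken as 2) could look as follows:

```python
# Sketch of ChebNet-style filtering via the Chebyshev recursion
# T_0(L~)f = f, T_1(L~)f = L~ f, T_j(L~)f = 2 L~ T_{j-1}(L~)f - T_{j-2}(L~)f;
# only sparse matrix-vector products, no eigendecomposition needed.
import scipy.sparse as sp

def chebyshev_filter(L, f, theta, lam_max):
    """L: sparse Laplacian, f: signal, theta: (r+1,) coefficients with r >= 1,
    lam_max: (an upper bound on) the largest eigenvalue of L."""
    n = L.shape[0]
    L_res = (2.0 / lam_max) * L - sp.eye(n)   # rescale spectrum to [-1, 1]
    T_prev, T_curr = f, L_res @ f
    out = theta[0] * T_prev + theta[1] * T_curr
    for j in range(2, len(theta)):
        T_next = 2.0 * (L_res @ T_curr) - T_prev
        out += theta[j] * T_next
        T_prev, T_curr = T_curr, T_next
    return out
```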

A key disadvantage of Chebyshev filters is the fact that using polynomials makes it hard to produce narrow-band filters, as such filters require a very high order r. This deficiency is especially pronounced when the Laplacian has clusters of eigenvalues concentrated around a few frequencies with large spectral gaps. Such behavior is characteristic of graphs with community structure, which is very common in many real-world graphs, for instance, social networks.

3 Cayley filters

A key construction of this paper is a family of complex filters that enjoy the advantages of Chebyshev filters while avoiding some of their drawbacks. A Cayley polynomial of order r is a real-valued function with complex coefficients,

g_{\mathbf{c},h}(\lambda) = c_0 + 2\mathrm{Re}\Big\{ \sum_{j=1}^{r} c_j\,(h\lambda - i)^j (h\lambda + i)^{-j} \Big\},

where \mathbf{c} = (c_0, c_1, \dots, c_r) is a vector of one real coefficient c_0 and r complex coefficients c_1, \dots, c_r, and h > 0 is the spectral zoom parameter. A Cayley filter G is a spectral filter defined on real signals \mathbf{f} by

G\mathbf{f} = g_{\mathbf{c},h}(\boldsymbol{\Delta})\,\mathbf{f} = c_0\mathbf{f} + 2\mathrm{Re}\Big\{ \sum_{j=1}^{r} c_j\,(h\boldsymbol{\Delta} - i\mathbf{I})^j (h\boldsymbol{\Delta} + i\mathbf{I})^{-j}\mathbf{f} \Big\},   (5)

where the parameters \mathbf{c} and h are optimized during training. Similarly to the Chebyshev filters, Cayley filters involve basic matrix operations such as powers, additions, multiplications by scalars, and also inversions. This implies that application of the filter can be performed without the explicit expensive eigendecomposition of the Laplacian operator. In the following, we show that Cayley filters are analytically well behaved; in particular, any smooth spectral filter can be represented as a Cayley polynomial, and low-order filters are localized in the spatial domain. We also discuss numerical implementation and compare Cayley and Chebyshev filters.
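As a reference for small graphs, the following sketch applies Equation 5 with exact dense solves (NumPy assumed; this is not the efficient scheme used in practice, which is discussed in the numerical properties paragraph below, but it is useful for checking approximate implementations):

```python
# Minimal sketch of a Cayley filter applied by exact (dense) solves,
# following Equation 5: Gf = c0 f + 2 Re{ sum_j c_j (hL - iI)^j (hL + iI)^{-j} f }.
import numpy as np

def cayley_filter_exact(L, f, c0, c, h):
    """L: dense Laplacian, f: real signal, c0: real coefficient,
    c: complex coefficients c_1..c_r, h: spectral zoom."""
    n = L.shape[0]
    M_minus = h * L - 1j * np.eye(n)
    M_plus = h * L + 1j * np.eye(n)
    out = c0 * f.astype(complex)
    y = f.astype(complex)
    for cj in c:
        y = np.linalg.solve(M_plus, M_minus @ y)   # one more power of C(hL) applied to f
        out += 2.0 * cj * y                        # real part is taken once at the end
    return np.real(out)
```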

Analytic properties. Cayley filters are best understood through the Cayley transform, from which their name derives. Denote by e^{i\mathbb{R}} = \{ e^{i\theta} : \theta \in \mathbb{R} \} the unit complex circle. The Cayley transform C(\lambda) = (\lambda - i)/(\lambda + i) is a smooth bijection between \mathbb{R} and e^{i\mathbb{R}} \setminus \{1\}. The complex matrix C(h\boldsymbol{\Delta}) = (h\boldsymbol{\Delta} - i\mathbf{I})(h\boldsymbol{\Delta} + i\mathbf{I})^{-1} obtained by applying the Cayley transform to the scaled Laplacian h\boldsymbol{\Delta} has its spectrum in e^{i\mathbb{R}} and is thus unitary. Since z^{-1} = \bar{z} for z \in e^{i\mathbb{R}}, we can write \overline{c_j\,C^j(h\boldsymbol{\Delta})} = \bar{c}_j\,C^{-j}(h\boldsymbol{\Delta}). Therefore, using 2\mathrm{Re}\{z\} = z + \bar{z}, any Cayley filter (Equation 5) can be written as a conjugate-even Laurent polynomial w.r.t. C(h\boldsymbol{\Delta}),

G = c_0\mathbf{I} + \sum_{j=1}^{r} \Big( c_j\,C^j(h\boldsymbol{\Delta}) + \bar{c}_j\,C^{-j}(h\boldsymbol{\Delta}) \Big).   (6)

Since the spectrum of C(h\boldsymbol{\Delta}) is in e^{i\mathbb{R}}, the operator C^j(h\boldsymbol{\Delta}) can be thought of as a multiplication by a pure harmonic in the frequency domain e^{i\mathbb{R}} for any integer power j,

C^j(h\boldsymbol{\Delta}) = \boldsymbol{\Phi}\,\mathrm{diag}\big( C^j(h\lambda_1), \dots, C^j(h\lambda_n) \big)\,\boldsymbol{\Phi}^\top, \qquad C^j(h\lambda_l) = e^{ij\theta(h\lambda_l)},

where \theta(\lambda) denotes the phase of C(\lambda). A Cayley filter can thus be seen as a multiplication by a finite Fourier expansion in the frequency domain e^{i\mathbb{R}}. Since (Equation 6) is conjugate-even, it is a (real-valued) trigonometric polynomial.

Note that any spectral filter can be formulated as a Cayley filter. Indeed, spectral filters \hat{g}(\boldsymbol{\Delta}) are specified by the finite sequence of values \hat{g}(\lambda_1), \dots, \hat{g}(\lambda_n), which can be interpolated by a trigonometric polynomial. Moreover, since trigonometric polynomials are smooth, we expect low-order Cayley filters to be well localized in some sense on the graph, as discussed later.

Finally, in definition (Equation 5) we use complex coefficients. If all c_j are real, then (Equation 6) is an even cosine polynomial, and if all c_j are purely imaginary, then (Equation 6) is an odd sine polynomial. Since the spectrum of h\boldsymbol{\Delta} is non-negative, it is mapped to the lower half-circle by C, on which both cosine and sine polynomials are complete and can represent any spectral filter. However, it is beneficial to use general complex coefficients, since complex Fourier expansions are overcomplete in the lower half-circle, thus describing a larger variety of spectral filters of the same order without increasing the computational complexity of the filter.

Filters (spatial domain, top; spectral domain, bottom) learned by CayleyNet (left) and ChebNet (center, right) on the MNIST dataset. Cayley filters are able to realize larger supports for the same order r.

Spectral zoom. To understand the role of the parameter h in the Cayley filter, consider C(h\boldsymbol{\Delta}). Multiplying \boldsymbol{\Delta} by h dilates its spectrum, and applying C on the result maps the non-negative spectrum to the complex half-circle. The greater h is, the more the spectrum of h\boldsymbol{\Delta} is spread apart in [0, \infty), resulting in better spacing of the smaller eigenvalues of C(h\boldsymbol{\Delta}). On the other hand, the smaller h is, the further away the high frequencies of h\boldsymbol{\Delta} are from 0, and the better spread apart are the high frequencies of C(h\boldsymbol{\Delta}) in e^{i\mathbb{R}} (see the figure below). Tuning the parameter h thus allows to ‘zoom’ in to different parts of the spectrum, resulting in filters specialized in different frequency bands.

Eigenvalues of the unnormalized Laplacian h\boldsymbol{\Delta}_\mathrm{u} of the 15-communities graph mapped on the complex unit half-circle by means of the Cayley transform, with spectral zoom values (left-to-right) h = 0.1, 1, and 10. The first 15 frequencies, carrying most of the information about the communities, are marked in red. Larger values of h zoom (right) on the low-frequency band.
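A small numerical sketch of the spectral zoom (toy eigenvalues, NumPy assumed) shows how the Cayley transform redistributes the same spectrum on the unit circle for different values of h:

```python
# Sketch of the spectral zoom: C(h*lambda) = (h*lambda - i)/(h*lambda + i) maps the
# non-negative Laplacian spectrum onto the lower half of the unit circle; larger h
# spreads the low frequencies apart, smaller h spreads the high frequencies apart.
import numpy as np

def cayley_transform(lam, h):
    """Map non-negative eigenvalues lam onto the complex unit circle."""
    z = h * lam
    return (z - 1j) / (z + 1j)

lam = np.linspace(0.0, 10.0, 6)            # toy spectrum
for h in (0.1, 1.0, 10.0):
    angles = np.angle(cayley_transform(lam, h))
    print(f"h={h}: phases of mapped eigenvalues = {np.round(angles, 2)}")
```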

Numerical properties. The numerical core of the Cayley filter is the computation of C^j(h\boldsymbol{\Delta})\mathbf{f} for j = 1, \dots, r, performed in a sequential manner. Let \mathbf{y}_0, \dots, \mathbf{y}_r denote the solutions of the following linear recursive system,

\mathbf{y}_0 = \mathbf{f}, \qquad (h\boldsymbol{\Delta} + i\mathbf{I})\,\mathbf{y}_j = (h\boldsymbol{\Delta} - i\mathbf{I})\,\mathbf{y}_{j-1}, \quad j = 1, \dots, r,   (7)

so that \mathbf{y}_j = C^j(h\boldsymbol{\Delta})\mathbf{f}. Note that sequentially approximating \mathbf{y}_j in (Equation 7) using the approximation of \mathbf{y}_{j-1} in the right-hand side is stable, since C(h\boldsymbol{\Delta}) is unitary and thus has condition number 1.

Equations (Equation 7) can be solved with exact matrix inversion, but this costs \mathcal{O}(n^3). An alternative is to use the Jacobi method,1 which provides approximate solutions \tilde{\mathbf{y}}_j \approx \mathbf{y}_j. Let \mathbf{J} = -\mathrm{Diag}(h\boldsymbol{\Delta} + i\mathbf{I})^{-1}\,\mathrm{Off}(h\boldsymbol{\Delta} + i\mathbf{I}) be the Jacobi iteration matrix associated with equation (Equation 7). For the unnormalized Laplacian, \mathbf{J} = (h\mathbf{D} + i\mathbf{I})^{-1} h\mathbf{W}. Jacobi iterations for approximating (Equation 7) for a given j have the form

\tilde{\mathbf{y}}_j^{(k+1)} = \mathbf{J}\,\tilde{\mathbf{y}}_j^{(k)} + \mathrm{Diag}(h\boldsymbol{\Delta} + i\mathbf{I})^{-1}(h\boldsymbol{\Delta} - i\mathbf{I})\,\tilde{\mathbf{y}}_{j-1},

initialized with \tilde{\mathbf{y}}_j^{(0)} = \mathrm{Diag}(h\boldsymbol{\Delta} + i\mathbf{I})^{-1}(h\boldsymbol{\Delta} - i\mathbf{I})\,\tilde{\mathbf{y}}_{j-1} and terminated after K iterations, yielding \tilde{\mathbf{y}}_j = \tilde{\mathbf{y}}_j^{(K)}. The application of the approximate Cayley filter is given by \tilde{G}\mathbf{f} = c_0\tilde{\mathbf{y}}_0 + 2\mathrm{Re}\{ \sum_{j=1}^{r} c_j\tilde{\mathbf{y}}_j \}, and takes \mathcal{O}(rKn) operations under the previous assumption of a sparse Laplacian. The method can be improved by normalizing each \tilde{\mathbf{y}}_j to have the norm of \mathbf{f} (which is the norm of the exact \mathbf{y}_j, since C(h\boldsymbol{\Delta}) is unitary).
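A sketch of this approximate scheme (assuming SciPy sparse matrices; the function name cayley_filter_jacobi is ours, and the initialization detail follows our reading of the description above rather than any reference implementation):

```python
# Hedged sketch of the approximate Cayley filter: each power y_j ~ C(hL)^j f is
# obtained by approximately solving (hL + iI) y_j = (hL - iI) y_{j-1} with K Jacobi steps.
import numpy as np
import scipy.sparse as sp

def cayley_filter_jacobi(L, f, c0, c, h, K=10):
    """L: sparse Laplacian, f: real signal, c0, c: Cayley coefficients,
    h: spectral zoom, K: number of Jacobi iterations per power."""
    n = L.shape[0]
    A = (h * L + 1j * sp.eye(n)).tocsr()           # system matrix hL + iI
    B = (h * L - 1j * sp.eye(n)).tocsr()           # right-hand-side operator hL - iI
    diag_inv = 1.0 / A.diagonal()                  # inverse of Diag(A)
    Off = A - sp.diags(A.diagonal())               # off-diagonal part of A
    out = c0 * f.astype(complex)
    y = f.astype(complex)
    for cj in c:
        b = B @ y
        y = diag_inv * b                           # initialization: Diag(A)^{-1} b
        for _ in range(K):
            y = diag_inv * (b - Off @ y)           # one Jacobi step
        out += 2.0 * cj * y
    return np.real(out)
```

For small graphs, the output can be compared against the exact dense version sketched in the previous section to choose a sufficient number of iterations K.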

Next, we give an error bound for the approximate filter (Proposition 1, proved in the appendix); the bound is controlled by the spectral radius of the Jacobi iteration matrix \mathbf{J}. For the normalized Laplacian, we assume that h\boldsymbol{\Delta}_\mathrm{n} + i\mathbf{I} is dominant diagonal.

Proposition 1 is pessimistic in the general case, while the regular case requires strong assumptions. We find that in most real-life situations the behavior is closer to the regular case. It also follows from Proposition 1 that smaller values of the spectral zoom h result in faster convergence, giving this parameter an additional numerical role of accelerating convergence.

Localization. Unlike Chebyshev filters, which have small r-hop support, Cayley filters are rational functions supported on the whole graph. However, it is still true that Cayley filters are well localized on the graph. Let G be a Cayley filter and let \boldsymbol{\delta}_m denote a delta-function on the graph, defined as one at vertex m and zero elsewhere. We show that G\boldsymbol{\delta}_m decays fast away from m, in the sense made precise by Theorem 4 (proved in the appendix).

Cayley vs Chebyshev. Below, we compare the two classes of filters:
Spectral zoom and stability. Generally, both Chebyshev polynomials and trigonometric polynomials give stable approximations, optimal for smooth functions. However, this crude statement is over-simplified. One of the drawbacks of Chebyshev filters is the fact that the spectrum of \boldsymbol{\Delta} is always mapped to [-1, 1] in a linear manner, making it hard to specialize in small frequency bands. In Cayley filters, this problem is mitigated with the help of the spectral zoom parameter h. As an example, consider the community detection problem discussed in the next section. A graph with strong communities has a cluster of small eigenvalues near zero. Ideal filters for extracting the community information should be able to focus on this band of frequencies. Approximating such a filter \hat{g} with Cayley polynomials, we zoom in to the band of interest by choosing the right h, and then project \hat{g} onto the space of trigonometric polynomials of order r, getting a good and stable approximation (Figure 1, bottom right). However, if we project \hat{g} onto the space of Chebyshev polynomials of order r, the interesting part of \hat{g} concentrated on a small band is smoothed out and lost (Figure 1, middle right). Thus, projections are not the right way to approximate such filters, and the stability of orthogonal polynomials cannot be invoked. When approximating \hat{g} on the small band using polynomials, the approximation will be unstable away from this band: small perturbations in \hat{g} will result in big perturbations in the Chebyshev filter away from the band. For this reason, we say that Cayley filters are more stable than Chebyshev filters.
Regularity. We found that in practice, low-order Cayley filters are able to model both very concentrated impulse-like filters and wider Gabor-like filters. Cayley filters are able to achieve a wider range of filter supports with fewer coefficients than Chebyshev filters (see the MNIST filters figure above), making the Cayley class more regular than the Chebyshev class.
Complexity. Under the assumption of sparse Laplacians, both Cayley and Chebyshev filters incur linear complexity \mathcal{O}(n). Our TensorFlow implementation of the Cayley filter is slower due to the use of TensorFlow's for loop in the Jacobi iterations; a faster implementation would program the whole Jacobi method as an op in native CUDA.

4 Results

Experimental settings. We test the proposed CayleyNets by reproducing the experiments of [9] and using ChebNet [9] as our main baseline method. All the methods were implemented in TensorFlow. The experiments were executed on a machine with a 3.5GHz Intel Core i7 CPU, 64GB of RAM, and an NVIDIA Titan X GPU with 12GB of RAM. SGD+Momentum and Adam [17] were used to train the models in the MNIST experiment and in the remaining experiments, respectively. Training and testing were always done on disjoint sets.

Community detection. We start with an experiment on a synthetic graph consisting of 15 communities with strong connectivity within each community and sparse connectivity across communities (Figure 1, left). Though rather simple, such a dataset allows studying the behavior of different algorithms in controlled settings. On this graph, we generate noisy step signals that take the value 1 plus Gaussian i.i.d. noise on the vertices of one community and pure noise elsewhere. The goal is to classify each such signal according to the community it belongs to. The neural network architecture used for this task consisted of a spectral convolutional layer (based on Chebyshev or Cayley filters) with 32 output features, a mean pooling layer, and a softmax classifier producing the final classification into one of the 15 classes. The classification accuracy is shown in Figure 1 (right, top) along with examples of learned filters (right, bottom). We observe that CayleyNet significantly outperforms ChebNet for smaller filter orders. Studying the filter responses, we note that due to the capability to learn the spectral zoom parameter, CayleyNet is able to generate band-pass filters in the low-frequency band that discriminate well between the communities.

Complexity. We experimentally validated the computational complexity of our model by applying filters of different orders to synthetic 15-community graphs of different sizes, using exact matrix inversion and approximation with different numbers of Jacobi iterations (Figure 2, center and right). As expected, approximate inversion guarantees linear complexity in the graph size. We further conclude that typically very few Jacobi iterations are required (Figure 2, left, shows that our model with just one Jacobi iteration outperforms ChebNet for low-order filters on the community detection problem).

Figure 1: Left: synthetic 15-communities graph. Right: community detection accuracy of ChebNet and CayleyNet (top); normalized responses of four different filters learned by ChebNet (middle) and CayleyNet (bottom). Grey vertical lines represent the frequencies of the normalized Laplacian (\tilde{\lambda} = 2\lambda_n^{-1}\lambda - 1 for ChebNet and C(\lambda) = (h\lambda - i)/(h\lambda + i) unrolled to the real line for CayleyNet). Note how, thanks to the spectral zoom property, Cayley filters can focus on the band of low frequencies (dark grey lines) containing most of the information about the communities.

MNIST. Following [9], we approached the classical MNIST digit classification as a learning problem on graphs. Each pixel of an image is a vertex of a graph (regular grid with 8-neighbor connectivity), and the pixel color is a signal on the graph. We used a graph CNN architecture with two spectral convolutional layers based on Chebyshev or Cayley filters (producing 32 and 64 output features, respectively), interleaved with pooling layers performing 4-times graph coarsening using the Graclus algorithm [10], and finally a fully-connected layer (this architecture replicates the classical LeNet5 architecture [20], which is shown for comparison). MNIST classification results are reported in Table 1. CayleyNet achieves the same (near perfect) accuracy as ChebNet with filters of lower order (12 vs 25). Examples of filters learned by ChebNet and CayleyNet are shown in the MNIST filters figure in Section 3.

Table 1: Classification accuracy of different methods on the MNIST dataset.
Model Order Accuracy
LeNet5 - 99.33%
ChebNet 25 99.14%
CayleyNet 12 99.18%
Table 2: Performance (RMSE) of different matrix completion methods on the MovieLens dataset.
Method RMSE
MC 0.973
IMC 1.653
GMC 0.996
GRALS 0.945
sRGCNN (Chebyshev) 0.929
sRGCNN (Cayley, ours) 0.922
Table 3: Classification accuracy of different methods on the CORA dataset.
Method Accuracy
DCNN 86.60%
ChebNet 87.12%
GCN 87.17%
CayleyNet 87.90%

Citation network. Next, we address the problem of vertex classification on graphs using the popular CORA citation graph [28]. Each of the 2708 vertices of the CORA graph represents a scientific paper, and an undirected unweighted edge represents a citation (5429 edges in total). For each vertex, a 1433-dimensional binary feature vector representing the content of the paper is given. The task is to classify each vertex into one of 7 groundtruth classes. We split the graph into training (1,708 vertices), validation (500 vertices) and test (500 vertices) sets to simulate the labeled and unlabeled information. We use the architecture introduced in [18] (two spectral convolutional layers with 16 and 7 outputs, respectively) for solving the vertex classification task. The spectral zoom h was determined with cross-validation in order to avoid overfitting. The performance is reported in Figure 3 and Table 3. CayleyNet consistently outperforms ChebNet and other methods. We further note that ChebNet is unable to handle unnormalized Laplacians.

Figure 2: Left: community detection test accuracy as a function of filter order r. Center and right: computational complexity (test times on batches of 100 samples) as a function of filter order r and graph size n. Shown are exact matrix inversion (dashed) and approximate Jacobi with different numbers of iterations (colored). For reference, ChebNet is shown (dotted).

Recommender system. In our final experiment, we applied CayleyNet to a recommender system, formulated as a matrix completion problem on user and item graphs [24]. The task is, given a sparsely sampled matrix of scores assigned by users (columns) to items (rows), to fill in the missing scores. The similarities between users and items are given in the form of column and row graphs, respectively. Monti et al. [24] approached this problem as learning with a Recurrent Graph CNN (RGCNN) architecture, using an extension of ChebNets to matrices defined on multiple graphs in order to extract spatial features from the score matrix; these features are then fed into an RNN producing a sequential estimation of the missing scores. Here, we repeated verbatim their experiment on the MovieLens dataset [23], replacing Chebyshev filters with Cayley filters. We used a separable RGCNN architecture with two CayleyNets employing 15 Jacobi iterations. The results are reported in Table 2. Our version of sRGCNN outperforms all the competing methods, including the previous result with Chebyshev filters reported in [24].

Figure 3: ChebNet (blue) and CayleyNet (orange) test accuracies obtained on the CORA dataset using normalized (left) and unnormalized (right) Laplacians. Unlike ChebNet, our CayleyNet is able to handle both types of Laplacians.

5 Conclusions

In this paper, we introduced a new efficient graph CNN architecture that scales linearly with the dimension of the input data. Our architecture is based on a new class of complex rational Cayley filters that are localized in space, can represent any smooth spectral transfer function, and are highly regular. The key property of our model is its ability to specialize in narrow frequency bands with a small number of filter parameters, while still preserving locality in the spatial domain. We validated these theoretical properties experimentally, demonstrating the superior performance of our model in a broad range of graph learning problems. In future work, we will explore more modern linear solvers instead of the Jacobi method, built in native CUDA as a TensorFlow OP.

Acknowledgment

FM and MB are supported in part by ERC Starting Grant No. 307047 (COMET), ERC Consolidator Grant No. 724228 (LEMAN), Google Faculty Research Award, Nvidia equipment grant, Radcliffe fellowship from Harvard Institute for Advanced Study, and TU Munich Institute for Advanced Study, funded by the German Excellence Initiative and the European Union Seventh Framework Programme under grant agreement No. 291763. XB is supported in part by NRF Fellowship NRFF2017-10.

Appendix

Proof of Proposition 1

First note the following classical result for the approximation of using the Jacobi method: if the initial condition is , then . In our case, note that if we start with initial condition , the next iteration gives , which is the initial condition from our construction. Therefore, since we are approximating by , we have

Define the approximation error in by

By the triangle inequality, by the fact that is unitary, and by (Equation 9)

where the last inequality is due to

Now, using standard norm bounds, in the general case we have . Thus, by we have

The solution of this recurrent sequence is

If we use the version of the algorithm, in which each is normalized, we get by (Equation 12) . The solution of this recurrent sequence is

We denote in this case

In case the graph is regular, we have . In the non-normalized Laplacian case,

The spectral radius of \mathbf{J} is bounded by . This can be shown as follows. A value is not an eigenvalue of \mathbf{J} (namely, it is a regular value) if and only if is invertible. Moreover, the matrix is strictly dominant diagonal for any . By the Levy–Desplanques theorem, any strictly dominant diagonal matrix is invertible, which means that all of the eigenvalues of are less than in absolute value. As a result, the spectral radius of is realized on the smallest eigenvalue of , namely it is . This means that the spectral radius of is . As a result . We can now continue from (Equation 12) to get

As before, we get , and if each is normalized. We denote in this case .

In the case of the normalized Laplacian of a regular graph, the spectral radius of is bounded by , and the diagonal entries are all 1. Equation (Equation 11) in this case reads , and has spectral radius . Thus and we continue as before to get and .

In all cases, we have by the triangle inequality

Proof of Theorem 4

In this proof we approximate by . Note that the signal is supported on one vertex, and in the calculation of , each Jacobi iteration increases the support of the signal by 1-hop. Therefore, the support of is the -hop neighborhood of . Denoting , and using Proposition 1, we get

Footnotes

  1. We recall that the Jacobi method for solving a linear system \mathbf{A}\mathbf{x} = \mathbf{b} consists in decomposing \mathbf{A} = \mathrm{Diag}(\mathbf{A}) + \mathrm{Off}(\mathbf{A}) and obtaining the solution iteratively as \mathbf{x}^{(k+1)} = \mathrm{Diag}(\mathbf{A})^{-1}\big(\mathbf{b} - \mathrm{Off}(\mathbf{A})\,\mathbf{x}^{(k)}\big).

References

  1. Diffusion-convolutional neural networks.
    J. Atwood and D. Towsley. arXiv:1511.02136v2, 2016.
  2. Learning class-specific descriptors for deformable shapes using localized spectral convolutional networks.
    D. Boscaini, J. Masci, S. Melzi, M. M. Bronstein, U. Castellani, and P. Vandergheynst. Computer Graphics Forum, 34(5):13–23, 2015.
  3. Learning shape correspondence with anisotropic convolutional neural networks.
    D. Boscaini, J. Masci, E. Rodolà, and M. M. Bronstein. In Proc. NIPS, 2016.
  4. Anisotropic diffusion descriptors.
    D. Boscaini, J. Masci, E. Rodolà, M. M. Bronstein, and D. Cremers. Computer Graphics Forum, 35(2):431–441, 2016.
  5. Geometric deep learning: going beyond euclidean data.
    M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam, and P. Vandergheynst. arXiv:1611.08097, 2016.
  6. Spectral networks and locally connected networks on graphs.
    J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun. Proc. ICLR, 2013.
  7. Exact matrix completion via convex optimization.
    E. Candes and B. Recht. Comm. ACM, 55(6):111–119, 2012.
  8. Selecting Receptive Fields in Deep Networks.
    A. Coates and A. Ng. In Proc. NIPS, 2011.
  9. Convolutional neural networks on graphs with fast localized spectral filtering.
    M. Defferrard, X. Bresson, and P. Vandergheynst. In Proc. NIPS, 2016.
  10. Weighted graph cuts without eigenvectors a multilevel approach.
    I. S. Dhillon, Y. Guan, and B. Kulis. Trans. PAMI, 29(11), 2007.
  11. Convolutional networks on graphs for learning molecular fingerprints.
    D. K. Duvenaud et al. In Proc. NIPS, 2015.
  12. A new model for learning in graph domains.
    M. Gori, G. Monfardini, and F. Scarselli. In Proc. IJCNN, 2005.
  13. A generalization of convolutional neural networks to graph-structured data.
    Y. Hechtlinger, P. Chakravarti, and J. Qin. arXiv:1704.08165, 2017.
  14. Deep convolutional networks on graph-structured data.
    M. Henaff, J. Bruna, and Y. LeCun. arXiv:1506.05163, 2015.
  15. Provable inductive matrix completion.
    P. Jain and I. S. Dhillon. arXiv:1306.0626, 2013.
  16. Matrix completion on graphs.
    V. Kalofolias, X. Bresson, M. M. Bronstein, and P. Vandergheynst. arXiv:1408.1717, 2014.
  17. Adam: A method for stochastic optimization.
    D. P. Kingma and J. Ba. arXiv:1412.6980, 2014.
  18. Semi-supervised classification with graph convolutional networks.
    T. N. Kipf and M. Welling. arXiv:1609.02907, 2016.
  19. Distance metric learning using graph convolutional networks: Application to functional brain networks.
    S. I. Ktena, S. Parisot, E. Ferrante, M. Rajchl, M. Lee, B. Glocker, and D. Rueckert. arXiv:1703.02161, 2017.
  20. Gradient-based learning applied to document recognition.
    Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Proc. IEEE, 86(11):2278–2324, 1998.
  21. Gated graph sequence neural networks.
    Y. Li, D. Tarlow, M. Brockschmidt, and R. Zemel. arXiv:1511.05493, 2015.
  22. Geodesic convolutional neural networks on Riemannian manifolds.
    J. Masci, D. Boscaini, M. M. Bronstein, and P. Vandergheynst. In Proc. 3DRR, 2015.
  23. MovieLens unplugged: experiences with an occasionally connected recommender system.
    B. N. Miller et al. In Proc. Intelligent User Interfaces, 2003.
  24. Geometric deep learning on graphs and manifolds using mixture model CNNs.
    F. Monti, D. Boscaini, J. Masci, E. Rodolà, J. Svoboda, and M. M. Bronstein. In Proc. CVPR, 2017.
  25. Geometric matrix completion with recurrent multi-graph neural networks.
    F. Monti, M. M. Bronstein, and X. Bresson. arXiv:1704.06803, 2017.
  26. Collaborative filtering with graph information: Consistency and scalable methods.
    N. Rao, H.-F. Yu, P. K. Ravikumar, and I. S. Dhillon. In Proc. NIPS, 2015.
  27. The graph neural network model.
    F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini. IEEE Trans. Neural Networks, 20(1):61–80, 2009.
  28. Collective classification in network data.
    P. Sen, G. Namata, M. Bilgic, L. Getoor, B. Galligher, and T. Eliassi-Rad. AI Magazine, 29(3):93, 2008.
  29. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains.
    D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, and P. Vandergheynst. IEEE Sig. Proc. Magazine, 30(3):83–98, 2013.
  30. Learning multiagent communication with backpropagation.
    S. Sukhbaatar, A. Szlam, and R. Fergus. arXiv:1605.07736, 2016.
  31. Speedup matrix completion with side information: Application to multi-label learning.
    M. Xu, R. Jin, and Z.-H. Zhou. In Proc. NIPS, 2013.