Graph Learning-Convolutional Networks


Bo Jiang, Ziyan Zhang, Doudou Lin, Jin Tang
School of Computer Science and Technology
Anhui University
Hefei, China
jiangbo@ahu.edu.cn
Abstract

Recently, graph Convolutional Neural Networks (graph CNNs) have been widely used for graph data representation and semi-supervised learning tasks. However, existing graph CNNs generally use a fixed graph which may not be optimal for semi-supervised learning tasks. In this paper, we propose a novel Graph Learning-Convolutional Network (GLCN) for graph data representation and semi-supervised learning. The aim of GLCN is to learn an optimal graph structure that best serves graph CNNs for semi-supervised learning by integrating both graph learning and graph convolution in a unified network architecture. The main advantage is that in GLCN both the given labels and the estimated labels are incorporated, and thus can provide useful ‘weakly’ supervised information to refine (or learn) the graph construction and also to facilitate the graph convolution operation for unknown label estimation. Experimental results on seven benchmarks demonstrate that GLCN significantly outperforms state-of-the-art fixed-structure graph CNNs.

 


1 Introduction

Deep neural networks have been widely used in computer vision and pattern recognition. In particular, Convolutional Neural Networks (CNNs) have been successfully applied to many problems, such as object detection and recognition, in which the underlying data generally have a grid-like structure. However, in many real applications, data usually have irregular structures that are naturally represented as graphs. Traditional CNNs generally fail to address such graph-structured data.

Recently, many methods have been proposed to generalize the convolution operator to arbitrary graphs duvenaud2015convolutional (); atwood2016diffusion (); monti2017geometric (); kipf2016semi (); adaptive_GCN (); velickovic2017graph (). Overall, these methods can be categorized into spatial and spectral convolution methods. Spatial methods define the graph convolution operation directly on groups of neighboring nodes. For example, Duvenaud et al. duvenaud2015convolutional () propose a convolutional neural network that operates directly on graphs and provides end-to-end feature learning for graph data. Atwood and Towsley atwood2016diffusion () propose Diffusion-Convolutional Neural Networks (DCNNs), which employ a graph diffusion process to incorporate the contextual information of nodes in graph node classification. Monti et al. monti2017geometric () present mixture model CNNs (MoNet) and provide a unified generalization of CNN architectures on graphs. Velickovic et al. velickovic2017graph () present Graph Attention Networks (GAT) for semi-supervised learning by designing an attention layer. Spectral methods define the graph convolution operation based on the spectral representation of graphs. For example, Bruna et al. bruna2014spectral () propose to define graph convolution in the Fourier domain based on the eigen-decomposition of the graph Laplacian matrix. Defferrard et al. defferrard2016convolutional () propose to approximate the spectral filters based on a Chebyshev expansion of the graph Laplacian to avoid the high computational complexity of eigen-decomposition. Recently, Kipf and Welling kipf2016semi () propose a more simplified Graph Convolutional Network (GCN) for semi-supervised learning by employing a first-order approximation of the spectral filters.

The above graph CNNs have been widely used for supervised or semi-supervised learning tasks. In this paper, we focus on semi-supervised learning. One important aspect of graph CNNs is the graph structure representation of data. In general, the data we feed to graph CNNs either have a known intrinsic graph structure, such as social networks, or we construct a human-established graph for them, such as a k-nearest neighbor graph with a Gaussian kernel. However, it is difficult to evaluate whether the graphs obtained from domain knowledge (e.g., social networks) or established by hand are optimal for semi-supervised learning in graph CNNs. Henaff et al. henaff2015deep () propose to learn a supervised graph with a fully connected network. However, the learned graph is obtained from a separate network and is thus not guaranteed to best serve the graph CNN. Li et al. adaptive_GCN () propose adaptive graph CNNs, in which the graph is learned adaptively by using traditional distance metric learning. However, it uses an approximate algorithm to estimate the graph Laplacian, which may lead to a weak local optimum.

Figure 1: Architecture of the proposed GLCN network for semi-supervised learning.

In this paper, we propose a novel Graph Learning-Convolutional Network (GLCN) for the semi-supervised learning problem. The main idea of GLCN is to learn an optimal graph representation that best serves graph CNNs for semi-supervised learning by integrating both graph learning and graph convolution simultaneously in a unified network architecture. The main advantages of the proposed GLCN for semi-supervised learning are summarized as follows.

  • In GLCN, both given labels and the estimated labels are incorporated and thus can provide useful ‘weakly’ supervised information to refine (or learn) the graph construction and to facilitate the graph convolution operation in graph CNN for unknown label estimation.

  • GLCN can be trained end-to-end in a single optimization procedure, and thus can be implemented simply.

To the best of our knowledge, this is the first attempt to build a unified graph learning-convolutional network architecture for semi-supervised learning. Experimental results on seven benchmarks demonstrate that GLCN significantly outperforms state-of-the-art graph CNNs on semi-supervised learning tasks.

2 Related Work

Recently, graph convolutional networks (GCNs) defferrard2016convolutional (); kipf2016semi () have been commonly used to address graph-structured data. In this section, we briefly review the GCN based semi-supervised learning approach proposed in kipf2016semi ().

Let $X = (x_1, x_2, \cdots, x_n) \in \mathbb{R}^{n\times p}$ be the collection of $n$ data vectors in $p$ dimensions. Let $G(X, A)$ be the graph representation of $X$, with $A \in \mathbb{R}^{n\times n}$ encoding the pairwise relationships (such as similarities, neighbors) among the data $X$. GCN contains one input layer, several propagation (hidden) layers and one final perceptron layer kipf2016semi (). Given an input $X$ and graph $A$, GCN conducts the following layer-wise propagation in the hidden layers kipf2016semi (),

$H^{(k+1)} = \sigma\big(D^{-1/2} A D^{-1/2} H^{(k)} W^{(k)}\big) \qquad (1)$

where $k = 0, 1, \cdots, K-1$ and $H^{(0)} = X$. $D = \mathrm{diag}(d_1, d_2, \cdots, d_n)$ is a diagonal matrix with $d_i = \sum_j A_{ij}$ and $W^{(k)} \in \mathbb{R}^{d_k \times d_{k+1}}$ is a layer-specific weight matrix needing to be trained. $\sigma(\cdot)$ denotes an activation function, such as $\mathrm{ReLU}(\cdot) = \max(0, \cdot)$, and $H^{(k+1)} \in \mathbb{R}^{n\times d_{k+1}}$ denotes the output of activations in the $k$-th layer.
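To make the propagation rule of Eq.(1) concrete, the following is a minimal numpy sketch of one such layer. It is an illustration under our own naming (gcn_layer), not the authors' implementation.

```python
import numpy as np

def gcn_layer(A, H, W):
    """Sketch of the GCN propagation rule of Eq.(1):
    H^{(k+1)} = sigma(D^{-1/2} A D^{-1/2} H^{(k)} W^{(k)}), with ReLU as sigma."""
    d = A.sum(axis=1)                                        # node degrees d_i = sum_j A_ij
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))         # guard against isolated nodes
    A_norm = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]   # D^{-1/2} A D^{-1/2}
    return np.maximum(A_norm @ H @ W, 0)                     # ReLU activation
```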

For semi-supervised node classification, GCN defines the final perceptron layer as

$Z = \mathrm{softmax}\big(D^{-1/2} A D^{-1/2} H^{(K)} W^{(K)}\big) \qquad (2)$

where $W^{(K)} \in \mathbb{R}^{d_K \times c}$ and $c$ denotes the number of classes. The final output $Z \in \mathbb{R}^{n\times c}$ denotes the label prediction for all data $X$, in which each row $Z_i$ denotes the label prediction for the $i$-th node. The optimal weight matrices $W^{(0)}, W^{(1)}, \cdots, W^{(K)}$ are trained by minimizing the following cross-entropy loss function over all the labeled nodes $L$, i.e.,

$\mathcal{L}_{\mathrm{Semi\text{-}GCN}} = -\sum\nolimits_{i\in L}\sum\nolimits_{j=1}^{c} Y_{ij}\ln Z_{ij} \qquad (3)$

where $L$ indicates the set of labeled nodes and $Y_{i\cdot}$ denotes the corresponding label indication vector for the $i$-th labeled node.
Remark. By using Eq.(1) and Eq.(2), GCN indeed provides a kind of nonlinear function $Z = \mathcal{F}_{\mathrm{GCN}}(X, A; W^{(0)}, \cdots, W^{(K)})$ to predict the labels of graph nodes.
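As a small illustration of Eq.(3), the masked cross-entropy over the labeled set can be computed as in the following sketch (the names semi_gcn_loss and labeled_idx are ours):

```python
import numpy as np

def semi_gcn_loss(Z, Y, labeled_idx):
    """Cross-entropy of Eq.(3), evaluated only on the labeled nodes L.
    Z: (n, c) softmax outputs of Eq.(2); Y: (n, c) one-hot label indicators."""
    Z_l = np.clip(Z[labeled_idx], 1e-12, 1.0)   # avoid log(0)
    return -np.sum(Y[labeled_idx] * np.log(Z_l))
```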

3 Graph Learning-Convolutional Network

One core aspect of GCN is the graph representation $A$ of the data $X$. In some applications, the graph structure of the data is available from domain knowledge, such as chemical molecules, social networks, etc. In this case, one can use the existing graph directly for GCN based semi-supervised learning. In many other applications, the graph is not available. One popular way is to construct a human-established graph (e.g., a k-nearest neighbor graph) for GCN. However, the graphs obtained from domain knowledge or established by hand are generally independent of the GCN (semi-supervised) learning process and thus are not guaranteed to best serve GCN learning. Also, human-established graphs are usually sensitive to local noise and outliers. To overcome these problems, we propose a novel Graph Learning-Convolutional Network (GLCN) which integrates graph learning and graph convolution simultaneously in a unified network architecture and thus can learn an adaptive (or optimal) graph representation for GCN learning. As shown in Figure 1, GLCN contains one graph learning layer, several convolution layers and one final perceptron layer. In the following, we explain them in detail.
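For context, the following is a minimal sketch of the kind of human-established graph mentioned above: a k-nearest neighbor graph with Gaussian kernel weights. The function name and the choices of k and sigma are ours, purely for illustration.

```python
import numpy as np

def knn_gaussian_graph(X, k=10, sigma=1.0):
    """Build a fixed k-NN graph with Gaussian kernel weights from features X (n, p)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    n = X.shape[0]
    A = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d2[i])[1:k + 1]                   # k nearest neighbors (skip self)
        A[i, idx] = np.exp(-d2[i, idx] / (2 * sigma ** 2))
    return np.maximum(A, A.T)                              # symmetrize
```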

Figure 2: Architecture of the proposed graph learning layer in GLCN.

3.1 Graph learning architecture

Given an input $X = (x_1, x_2, \cdots, x_n)^T \in \mathbb{R}^{n\times p}$, we aim to seek a nonnegative function $S_{ij} = g(x_i, x_j)$ that represents the pairwise relationship between data $x_i$ and $x_j$. We implement $g(x_i, x_j)$ via a single-layer neural network, which is parameterized by a weight vector $a = (a_1, a_2, \cdots, a_p)^T \in \mathbb{R}^{p\times 1}$. Formally, we learn a graph $S$ as

$S_{ij} = g(x_i, x_j) = \dfrac{\exp\big(\mathrm{ReLU}(a^T |x_i - x_j|)\big)}{\sum_{j=1}^{n}\exp\big(\mathrm{ReLU}(a^T |x_i - x_j|)\big)} \qquad (4)$

where $\mathrm{ReLU}(\cdot) = \max(0, \cdot)$ is an activation function, which guarantees the nonnegativity of $S_{ij}$. The role of the above softmax operation on each row of $S$ is to guarantee that the learned graph $S$ satisfies the following property,

$\sum\nolimits_{j=1}^{n} S_{ij} = 1, \quad S_{ij} \geq 0 \qquad (5)$

We optimize the weight vector $a$ by minimizing the following loss function,

$\mathcal{L}_{\mathrm{GL}} = \sum\nolimits_{i,j=1}^{n} \|x_i - x_j\|_2^2\, S_{ij} + \gamma \|S\|_F^2 \qquad (6)$

That is, a larger distance $\|x_i - x_j\|_2$ between data points $x_i$ and $x_j$ encourages a smaller value $S_{ij}$. The second term is used to control the sparsity of the learned graph $S$ because of the simplex property of $S$ (Eq.(5)), as discussed in nie2014clustering ().

Remark. Minimizing the above loss $\mathcal{L}_{\mathrm{GL}}$ independently may lead to a trivial solution, e.g., a graph in which each node is connected only to itself. We therefore use it as a regularization term in our final loss function, as shown in Eq.(15) in §3.2.

For some problems, when an initial graph $A$ is available, we can incorporate it into our graph learning as

$S_{ij} = \dfrac{A_{ij}\exp\big(\mathrm{ReLU}(a^T |x_i - x_j|)\big)}{\sum_{j=1}^{n} A_{ij}\exp\big(\mathrm{ReLU}(a^T |x_i - x_j|)\big)} \qquad (7)$

We can also incorporate the information of $A$ by adding a regularization term to the learning loss function as

$\mathcal{L}_{\mathrm{GL}} = \sum\nolimits_{i,j=1}^{n} \|x_i - x_j\|_2^2\, S_{ij} + \gamma \|S\|_F^2 + \beta \|S - A\|_F^2 \qquad (8)$

On the other hand, when the dimension $p$ of the input data $X$ is large, the above computation of $S_{ij}$ may be less effective due to the long weight vector $a$ that needs to be trained. Also, the computation of Euclidean distances between data pairs in the loss function is expensive for large dimension $p$. To solve this problem, we propose to conduct our graph learning in a low-dimensional subspace. We implement this via a single-layer low-dimensional embedding network, parameterized by a projection matrix $P \in \mathbb{R}^{p\times d}$. In particular, we conduct our final graph learning as follows,

$\tilde{x}_i = x_i P, \quad i = 1, 2, \cdots, n \qquad (9)$
$S_{ij} = \dfrac{A_{ij}\exp\big(\mathrm{ReLU}(a^T |\tilde{x}_i - \tilde{x}_j|)\big)}{\sum_{j=1}^{n} A_{ij}\exp\big(\mathrm{ReLU}(a^T |\tilde{x}_i - \tilde{x}_j|)\big)} \qquad (10)$

where $A$ denotes an initial graph. If it is unavailable, we can set $A_{ij} = 1$ in the above update rule. The loss function becomes

$\mathcal{L}_{\mathrm{GL}} = \sum\nolimits_{i,j=1}^{n} \|\tilde{x}_i - \tilde{x}_j\|_2^2\, S_{ij} + \gamma \|S\|_F^2 \qquad (11)$

The whole architecture of the proposed graph learning network is shown in Figure 2.

Remark. The proposed learned graph $S$ has a desired probability property (Eq.(5)), i.e., the optimal $S_{ij}$ can be regarded as the probability that data $x_j$ is connected to $x_i$ as a neighboring node. That is, the proposed graph learning (GL) architecture can establish the neighborhood structure of the data automatically, either based on the data features only or by further incorporating a prior initial graph $A$. The GL architecture indeed provides a kind of nonlinear function $S = \mathcal{G}_{\mathrm{GL}}(X, A; P, a)$ to predict/compute the neighborhood probabilities between node pairs.
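The following is a minimal PyTorch sketch of the graph learning layer of Eqs.(9)-(11). It is our own illustrative re-implementation, not the authors' released code; the class and argument names (GraphLearningLayer, in_dim, embed_dim) are assumptions, and the sparsity weight $\gamma$ is folded in as 1 for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphLearningLayer(nn.Module):
    """Graph learning (GL) layer: projects features (Eq. 9), scores pairs with
    ReLU(a^T |x_i - x_j|), row-softmaxes into S (Eq. 10) and returns the GL loss (Eq. 11)."""
    def __init__(self, in_dim, embed_dim):
        super().__init__()
        self.P = nn.Linear(in_dim, embed_dim, bias=False)   # projection matrix P
        self.a = nn.Parameter(torch.empty(embed_dim))       # weight vector a
        nn.init.xavier_uniform_(self.P.weight)
        nn.init.uniform_(self.a, -0.1, 0.1)

    def forward(self, x, adj=None):
        # x: (n, p) node features; adj: optional (n, n) initial graph A
        h = self.P(x)                                        # (n, d) embeddings, Eq.(9)
        diff = (h.unsqueeze(1) - h.unsqueeze(0)).abs()       # |h_i - h_j|, shape (n, n, d)
        e = F.relu(diff @ self.a)                            # ReLU(a^T |h_i - h_j|)
        if adj is not None:                                  # mask by initial graph, Eq.(10)
            e = e.masked_fill(adj <= 0, float('-inf'))
        S = F.softmax(e, dim=1)                              # row-stochastic learned graph
        dist2 = (diff ** 2).sum(-1)                          # ||h_i - h_j||^2
        gl_loss = (dist2 * S).sum() + S.pow(2).sum()         # Eq.(11) with gamma = 1
        return S, gl_loss, h
```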

3.2 GLCN architecture

The proposed graph learning architecture is general and can be incorporated into any graph CNN. In this paper, we incorporate it into GCN kipf2016semi () and propose a unified Graph Learning-Convolutional Network (GLCN) for the semi-supervised learning problem. Figure 1 shows the overview of the GLCN architecture. The aim of GLCN is to learn an optimal graph representation for the GCN network by integrating graph learning and graph convolution simultaneously, thus boosting their respective performance.

As shown in Figure 1, GLCN contains one graph learning layer, several graph convolution layers and one final perceptron layer. The graph learning layer aims to provide an optimal adaptive graph representation $S$ for the following graph convolutional layers. That is, the convolutional layers conduct the layer-wise propagation rule based on the adaptive neighbor graph $S$ returned by the graph learning layer, i.e.,

$H^{(k+1)} = \sigma\big(D_s^{-1/2} S D_s^{-1/2} H^{(k)} W^{(k)}\big) \qquad (12)$

where $k = 0, 1, \cdots, K-1$. $D_s$ is a diagonal matrix with diagonal elements $(D_s)_{ii} = \sum_j S_{ij}$. $W^{(k)}$ is a layer-specific trainable weight matrix for each convolution layer. $\sigma(\cdot)$ denotes an activation function, such as $\mathrm{ReLU}(\cdot) = \max(0, \cdot)$, and $H^{(k+1)}$ denotes the output of activations in the $k$-th layer. Since the learned graph $S$ satisfies $\sum_j S_{ij} = 1, S_{ij} \geq 0$, Eq.(12) can be simplified as

$H^{(k+1)} = \sigma\big(S H^{(k)} W^{(k)}\big) \qquad (13)$
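A single simplified convolution layer of Eq.(13) can be sketched as follows (again an illustration with our own naming, continuing the PyTorch sketch above):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GLConv(nn.Module):
    """One simplified graph convolution layer of Eq.(13): H^{(k+1)} = ReLU(S H^{(k)} W^{(k)})."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)   # layer weight matrix W^{(k)}
        nn.init.xavier_uniform_(self.W.weight)

    def forward(self, S, H):
        # S: (n, n) learned row-stochastic graph; H: (n, in_dim) node features
        return F.relu(S @ self.W(H))
```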

For the semi-supervised classification task, we define the final perceptron layer as

$Z = \mathrm{softmax}\big(S H^{(K)} W^{(K)}\big) \qquad (14)$

where $W^{(K)} \in \mathbb{R}^{d_K \times c}$ and $c$ denotes the number of classes. The final output $Z \in \mathbb{R}^{n\times c}$ denotes the label prediction of the GLCN network, in which each row $Z_i$ denotes the label prediction for the $i$-th node. The whole network parameters $\Theta = \{P, a, W^{(0)}, \cdots, W^{(K)}\}$ are jointly trained by minimizing the following loss function,

$\mathcal{L}_{\mathrm{Semi\text{-}GLCN}} = \mathcal{L}_{\mathrm{Semi\text{-}GCN}} + \lambda \mathcal{L}_{\mathrm{GL}} \qquad (15)$

where $\mathcal{L}_{\mathrm{GL}}$ and $\mathcal{L}_{\mathrm{Semi\text{-}GCN}}$ are defined in Eq.(11) and Eq.(3), respectively, and $\lambda$ is a tradeoff parameter. It is noted that, when $\lambda = 0$, the optimal graph $S$ is learned based on the labeled data (i.e., the cross-entropy loss) only, which is also feasible in our GLCN.
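Putting the pieces together, a minimal end-to-end sketch of GLCN and its joint loss of Eq.(15) could look like the following; it reuses the GraphLearningLayer and GLConv sketches above, and the names (GLCN, glcn_loss, lam) and the single hidden convolution layer are our own simplifications.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GLCN(nn.Module):
    """Graph learning layer + graph convolution + final perceptron layer (Fig. 1)."""
    def __init__(self, in_dim, embed_dim, hid_dim, n_classes):
        super().__init__()
        self.gl = GraphLearningLayer(in_dim, embed_dim)          # Eqs.(9)-(11)
        self.conv1 = GLConv(in_dim, hid_dim)                     # Eq.(13)
        self.out = nn.Linear(hid_dim, n_classes, bias=False)     # final perceptron, Eq.(14)

    def forward(self, x, adj=None):
        S, gl_loss, _ = self.gl(x, adj)
        h = self.conv1(S, x)
        z = F.log_softmax(S @ self.out(h), dim=1)                # log of Eq.(14)
        return z, gl_loss

def glcn_loss(z, gl_loss, y, labeled_mask, lam=0.01):
    """Joint loss of Eq.(15): cross-entropy of Eq.(3) on labeled nodes + lambda * L_GL."""
    ce = F.nll_loss(z[labeled_mask], y[labeled_mask])
    return ce + lam * gl_loss
```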

Demonstration and analysis. There are two main benefits of the proposed GLCN network:

  • In GLCN, both the given labels and the estimated labels are incorporated and thus can provide useful ‘weakly’ supervised information to refine the graph construction and thus to facilitate the graph convolution operation for unknown label estimation. That is, the graph learning and semi-supervised learning are conducted jointly in GLCN and thus can boost their respective performance.

  • GLCN is a unified network which can be trained end-to-end in a single optimization procedure and thus can be implemented simply.

Figure 3 shows the cross-entropy loss values over the labeled nodes across training epochs. One can note that GLCN obtains an obviously lower cross-entropy value than GCN at convergence, which clearly demonstrates the higher predictive accuracy of the GLCN model. Also, the convergence speed of GLCN is only slightly slower than that of GCN, indicating the efficiency of GLCN. Figure 4 demonstrates 2D t-SNE Geoffrey2017Visualizing () visualizations of the feature map output by the first convolutional layer of GCN kipf2016semi () and GLCN, respectively. Different classes are marked by different colors. One can note that the data of different classes are distributed more clearly and compactly in our GLCN representation, which demonstrates the desired discriminative ability of GLCN in graph node representation and thus in semi-supervised classification tasks.

Figure 3: Demonstration of cross-entropy loss values across different epochs on MNIST dataset.
Figure 4: 2D t-SNE Geoffrey2017Visualizing () visualizations of the feature map output by the first convolutional layer of GCN kipf2016semi () and GLCN respectively on Scene15 dataset. Different classes are marked by different colors. One can note that, the data of different classes are distributed more clearly and compactly in our GLCN convolutional layer feature representation.

4 Experiments

4.1 Datasets

To verify the effectiveness and benefit of the proposed GLCN on semi-supervised learning tasks, we test it on seven benchmark datasets, including three standard citation network benchmark datasets (Citeseer, Cora and Pubmed  sen2008collective ()) and four image datasets (CIFAR10 krizhevsky2009learning (), SVHN netzer2011reading (), MNIST and Scene 15  jiang2013label ()). The details of these datasets and their usages in our experiments are introduced below.

Citeseer. This dataset contains 3327 nodes and 4732 edges. Each node corresponds to a document and each edge to a citation relationship between documents. The nodes of this network are classified into 6 classes and each node is represented by a 3703 dimension feature descriptor.
Cora. This dataset contains 2708 nodes and 5429 edges. Nodes correspond to documents and edges to citations between documents. Each node has a 1433 dimension feature descriptor and all the nodes are classified into 7 classes.
Pubmed. This dataset contains 19717 nodes and 44338 edges. Each node is represented by a 500 dimension feature descriptor and all the nodes are classified into 3 classes.
CIFAR10. This dataset contains 50000 natural images falling into 10 classes krizhevsky2009learning (). Each image in this dataset is a 32 × 32 RGB color image. In our experiments, we select 1000 images for each class and use 10000 images in total for our evaluation. We do not use all of the images because the graph convolution operation in our GLCN and the other compared GCN based methods requires large storage and has high computational complexity. For each image, we extract a CNN feature descriptor.
SVHN. This dataset contains 73257 training and 26032 test images netzer2011reading (). Each image is a 32 × 32 RGB image which may contain multiple digits, and the task is to recognize the digit in the image center. Similar to the CIFAR10 dataset, in our experiments we select 1000 images for each class and obtain 10000 images in total for our evaluation. For each image, we extract a CNN feature descriptor.
MNIST. This dataset consists of images of hand-written digits from ‘0’ to ‘9’. Each image is centered on a 28 × 28 grid. Here, we randomly select 1000 images from each digit class and obtain 10000 images in total for our evaluation. Similar to other related works, we use the gray values directly and convert each image to a 784 dimension vector.
Scene15. This dataset consists of 4485 scene images with 15 different categories jiang2013label (). For each image, we use the feature descriptor provided in jiang2013label ().

Method Citeseer Cora Pubmed
ManiReg belkin2006manifold () 60.1% 59.5% 70.7%
LP zhu2003semi () 59.6% 59.0% 71.1%
DeepWalk perozzi2014deepwalk () 43.2% 67.2% 65.3%
GCN kipf2016semi () 68.9% 82.9% 77.9%
GAT  velickovic2017graph () 71.0% 83.2% 78.0%
GLCN 72.0% 85.5% 78.3%
Table 1: Comparison results of semi-supervised learning on dataset Citeseer, Cora and Pubmed.
Dataset SVHN CIFAR10
No. of label 1000 2000 3000 1000 2000 3000
ManiReg belkin2006manifold () 69.44±0.69 72.73±0.44 74.63±0.45 52.30±0.66 57.08±0.80 59.69±0.71
LP zhu2003semi () 69.68±0.84 70.35±1.73 69.47±2.96 57.52±0.67 59.22±0.67 60.38±0.51
DeepWalk perozzi2014deepwalk () 74.64±0.23 76.21±0.23 77.04±0.42 56.16±0.54 59.73±0.35 61.26±0.32
GCN kipf2016semi () 71.33±1.48 73.43±0.46 73.63±0.52 60.43±0.56 60.91±0.50 60.99±0.49
GAT velickovic2017graph () 73.87±0.32 74.85±0.55 75.17±0.43 63.25±0.50 65.55±0.58 66.56±0.58
GLCN 79.14±0.38 80.68±0.22 81.43±0.34 66.67±0.24 69.33±0.54 70.39±0.54
Dataset MNIST Scene15
No. of label 1000 2000 3000 500 750 1000
ManiReg belkin2006manifold () 92.74±0.33 93.96±0.23 94.62±0.22 81.29±3.35 86.45±1.91 89.86±0.71
LP zhu2003semi () 79.28±0.91 81.91±0.82 83.45±0.53 89.40±4.74 92.12±2.87 92.98±2.45
DeepWalk perozzi2014deepwalk () 94.55±0.27 95.04±0.28 95.34±0.26 95.64±0.24 96.01±0.24 96.53±0.37
GCN kipf2016semi () 90.59±0.26 90.91±0.19 91.01±0.23 91.42±2.07 94.41±0.92 95.44±0.89
GAT velickovic2017graph () 92.11±0.35 92.64±0.28 92.81±0.29 93.98±0.75 94.64±0.41 95.03±0.46
GLCN 94.28±0.28 95.09±0.17 95.46±0.20 96.19±0.38 96.71±0.40 96.67±0.37
Table 2: Comparison results on datasets SVHN, CIFAR10, MNIST and Scene15.

4.2 Experimental setting

For the Cora, Citeseer and Pubmed datasets, we follow the experimental setup of previous works kipf2016semi (); velickovic2017graph (). That is, for each class, we select 20 nodes as labeled data and evaluate the performance of label prediction on 1000 test nodes. In addition, we use 300 additional nodes for validation, which is the same setting as in kipf2016semi (); velickovic2017graph (). Note that, for the graph based semi-supervised setting, we use all of the nodes in our network training. For the image datasets CIFAR10 krizhevsky2009learning (), SVHN netzer2011reading () and MNIST, we randomly select 1000, 2000 and 3000 images as labeled samples and use the remaining data as unlabeled samples. From the unlabeled samples, we select 1000 images for validation purposes and use the remaining 8000, 7000 and 6000 images as test samples, respectively. All the reported results are averaged over 10 runs with different splits of training, validation and testing data. For the image dataset Scene15 jiang2013label (), we randomly select 500, 750 and 1000 images as labeled data and use 500 images for validation, respectively. The remaining samples are used as the unlabeled test samples. The reported results are averaged over 10 runs with different splits of training, validation and testing data.
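For illustration, the image-dataset splits described above can be generated as in the following sketch (the helper random_split and its arguments are ours, not the authors' preprocessing code; per-class balancing of the labeled set is omitted for brevity):

```python
import numpy as np

def random_split(n_total, n_label, n_val=1000, seed=0):
    """Randomly split node indices into labeled / validation / test sets."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n_total)
    return perm[:n_label], perm[n_label:n_label + n_val], perm[n_label + n_val:]

# e.g., MNIST subset: 1000 labeled, 1000 validation, 8000 test out of 10000 images
train_idx, val_idx, test_idx = random_split(10000, 1000)
```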

Similar to kipf2016semi (), we set the number of convolution layers in our GLCN to 2. The number of units in each hidden layer is set to 70. We provide additional experiments with different numbers of units and convolutional layers in §4.4. We train our GLCN for a maximum of 3000 epochs (training iterations) using the Adam algorithm Adam () with a learning rate of 0.005. We stop training if the validation loss does not decrease for 100 consecutive epochs, as suggested in kipf2016semi (). All the network weights are initialized using Glorot initialization glorot2010understanding ().
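Under the above settings, a minimal training loop could look like the following sketch; it reuses the GLCN and glcn_loss sketches from §3.2, and the placeholder tensors (features, adj, labels, train_mask, val_mask) as well as the tradeoff value lam are assumptions for illustration only.

```python
import torch

model = GLCN(in_dim=features.shape[1], embed_dim=70, hid_dim=70,
             n_classes=int(labels.max()) + 1)
optimizer = torch.optim.Adam(model.parameters(), lr=0.005)   # Adam, learning rate 0.005

best_val, patience = float('inf'), 0
for epoch in range(3000):                                     # at most 3000 epochs
    model.train()
    optimizer.zero_grad()
    z, gl_loss = model(features, adj)
    loss = glcn_loss(z, gl_loss, labels, train_mask, lam=0.01)
    loss.backward()
    optimizer.step()

    model.eval()
    with torch.no_grad():
        z, gl_loss = model(features, adj)
        val_loss = glcn_loss(z, gl_loss, labels, val_mask, lam=0.01).item()
    if val_loss < best_val:
        best_val, patience = val_loss, 0
    else:
        patience += 1
        if patience >= 100:   # early stop: no improvement for 100 consecutive epochs
            break
```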

4.3 Comparison with state-of-the-art methods

Baselines. We first compare our GLCN model with GCN kipf2016semi (), which is the model most closely related to GLCN. We also compare our method against several other graph based semi-supervised learning approaches, including i) two traditional graph based semi-supervised learning methods, Label Propagation (LP) zhu2003semi () and Manifold Regularization (ManiReg) belkin2006manifold (), and ii) three graph neural network methods, DeepWalk perozzi2014deepwalk (), Graph Convolutional Network (GCN) kipf2016semi () and Graph Attention Networks (GAT) velickovic2017graph (). The codes of these comparison methods were provided by the authors and we use them directly in our experiments.

Results. Table 1 summarizes the comparison results on the three citation network benchmark datasets (Citeseer, Cora and Pubmed sen2008collective ()). Table 2 summarizes the comparison results on the four widely used image datasets (CIFAR10 krizhevsky2009learning (), SVHN netzer2011reading (), MNIST and Scene15 jiang2013label ()). The best results are marked in bold in Tables 1 and 2. Overall, we can note that (1) GLCN outperforms the baseline method GCN kipf2016semi () on all datasets, especially on the four image datasets. This clearly demonstrates the higher predictive accuracy of GLCN on semi-supervised classification obtained by incorporating the graph learning architecture. Compared with GCN, the hidden-layer representations of graph nodes in GLCN become more discriminative (as shown in Figure 4), which facilitates semi-supervised learning. (2) GLCN performs better than the recent graph network GAT velickovic2017graph (), which indicates the benefit of GLCN on graph data representation and learning. (3) GLCN performs better than other graph based semi-supervised learning methods, such as LP zhu2003semi (), ManiReg belkin2006manifold () and DeepWalk perozzi2014deepwalk (), which further demonstrates the effectiveness of GLCN in conducting semi-supervised classification tasks on graph data.

4.4 Parameter analysis

In this section, we evaluate the performance of the GLCN model with different settings of network parameters. We first investigate the influence of the model depth of GLCN (number of convolutional layers) on the semi-supervised classification results. Figure 5 shows the performance of our GLCN method across different numbers of convolutional layers on the MNIST dataset. As a baseline, we also list the results of the GCN model with the same setting. One can note that GLCN obtains good performance with different numbers of layers, which indicates the insensitivity of GLCN w.r.t. model depth. Also, GLCN always performs better than GCN under different model depths, which further demonstrates the benefit and better performance of GLCN compared with the baseline method.

Figure 5: Results of GLCN across different convolutional layers on MNIST dataset.
No. of units 50 60 70 80 90
GCN 0.9041 0.9075 0.9080 0.9076 0.9070
GLCN 0.9410 0.9396 0.9394 0.9410 0.9389
Table 3: Results of two-layer GLCN across different numbers of units in the convolutional layer on the MNIST dataset.
Parameter $\lambda$ 0 1e-4 1e-3 1e-2 1e-1 1.0
CIFAR10 0.67 0.69 0.69 0.70 0.69 0.69
MNIST 0.92 0.93 0.93 0.94 0.94 0.93
Table 4: Results of GLCN with different settings of the graph learning parameter $\lambda$ in the loss (Eq.(15)) on the MNIST and CIFAR10 datasets.

Then, we evaluate the performance of two-layer GLCN with different numbers of hidden units in the convolutional layer. Table 3 summarizes the performance of GLCN with different numbers of hidden units on the MNIST dataset. We can note that both GCN and GLCN are generally insensitive w.r.t. the number of units in the hidden layer.

Finally, we investigate the influence of the graph learning parameter $\lambda$ in our GLCN. Table 4 shows the performance of GLCN with different settings of this parameter. Note that, when $\lambda$ is set to 0, GLCN still returns a reasonable result. Also, incorporating the graph learning regularization term in the loss function improves the graph learning and thus the semi-supervised classification results.

5 Conclusion and Future Works

In this paper, we propose a novel Graph Learning-Convolutional Network (GLCN) for the graph based semi-supervised learning problem. GLCN integrates the proposed new graph learning operation and the traditional graph convolution architecture together in a unified network, which can learn an optimal graph structure that best serves GCN for the semi-supervised learning problem. Experimental results on seven benchmarks demonstrate that GLCN generally outperforms traditional fixed-graph CNNs on various semi-supervised learning tasks.

Note that GLCN is not limited to semi-supervised learning tasks. In the future, we will adapt GLCN to further pattern recognition tasks, such as graph data classification and graph link prediction. We will also explore the GLCN method on other computer vision tasks, such as visual object detection, image co-segmentation and visual saliency analysis.

References

  • (1) J. Atwood and D. Towsley. Diffusion-convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1993–2001, 2016.
  • (2) M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of machine learning research, 7(Nov):2399–2434, 2006.
  • (3) J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun. Spectral networks and locally connected networks on graphs. In International Conference on Learning Representations, 2014.
  • (4) M. Defferrard, X. Bresson, and P. Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems, pages 3844–3852, 2016.
  • (5) D. K. Duvenaud, D. Maclaurin, J. Iparraguirre, R. Bombarell, T. Hirzel, A. Aspuru-Guzik, and R. P. Adams. Convolutional networks on graphs for learning molecular fingerprints. In Advances in Neural Information Processing Systems, pages 2224–2232, 2015.
  • (6) L. van der Maaten and G. Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579–2605, 2008.
  • (7) X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In International conference on artificial intelligence and statistics, pages 249–256, 2010.
  • (8) M. Henaff, J. Bruna, and Y. LeCun. Deep convolutional networks on graph-structured data. arXiv preprint arXiv:1506.05163, 2015.
  • (9) Z. Jiang, Z. Lin, and L. S. Davis. Label consistent k-svd: Learning a discriminative dictionary for recognition. IEEE transactions on pattern analysis and machine intelligence, 35(11):2651–2664, 2013.
  • (10) D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
  • (11) T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
  • (12) A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
  • (13) F. Monti, D. Boscaini, J. Masci, E. Rodola, J. Svoboda, and M. M. Bronstein. Geometric deep learning on graphs and manifolds using mixture model cnns. In IEEE Conference on Computer Vision and Pattern Recognition, pages 5423–5434, 2017.
  • (14) Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS workshop on deep learning and unsupervised feature learning, 2011.
  • (15) F. Nie, X. Wang, and H. Huang. Clustering and projected clustering with adaptive neighbors. In ACM SIGKDD international conference on Knowledge Discovery and Data Mining, pages 977–986, 2014.
  • (16) B. Perozzi, R. Al-Rfou, and S. Skiena. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 701–710, 2014.
  • (17) R. Li, S. Wang, F. Zhu, and J. Huang. Adaptive graph convolutional neural networks. In AAAI Conference on Artificial Intelligence, pages 3546–3553, 2018.
  • (18) P. Sen, G. Namata, M. Bilgic, L. Getoor, B. Galligher, and T. Eliassi-Rad. Collective classification in network data. AI magazine, 29(3):93, 2008.
  • (19) P. Velickovic, G. Cucurull, A. Casanova, A. Romero, P. Lio, and Y. Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017.
  • (20) X. Zhu, Z. Ghahramani, and J. D. Lafferty. Semi-supervised learning using gaussian fields and harmonic functions. In Proceedings of the 20th International conference on Machine learning (ICML-03), pages 912–919, 2003.