LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation


Abstract.

Graph Convolution Network (GCN) has become the new state-of-the-art for collaborative filtering. Nevertheless, the reasons for its effectiveness in recommendation are not well understood. Existing work that adapts GCN to recommendation lacks thorough ablation analyses of GCN, which was originally designed for node classification tasks and is equipped with many neural network operations. We empirically find that the two most common designs in GCNs — feature transformation and nonlinear activation — contribute little to the performance of collaborative filtering. Even worse, including them adds to the difficulty of training and degrades recommendation performance.

In this work, we aim to simplify the design of GCN to make it more concise and appropriate for recommendation. We propose a new model named LightGCN, which includes only the most essential component of GCN — neighborhood aggregation — for collaborative filtering. Specifically, LightGCN learns user and item embeddings by linearly propagating them on the user-item interaction graph, and uses the weighted sum of the embeddings learned at all layers as the final embedding. Such a simple, linear, and neat model is much easier to implement and train, and it exhibits substantial improvements (about 16.5% relative improvement on average) over Neural Graph Collaborative Filtering (NGCF) — a state-of-the-art GCN-based recommender model — under exactly the same experimental setting. Further analyses are provided towards the rationality of the simple LightGCN from both analytical and empirical perspectives.

Collaborative Filtering, Recommendation, Embedding Propagation, Graph Neural Network

1. Introduction

To alleviate information overload on the web, recommender systems have been widely deployed to perform personalized information filtering (Ying et al., 2018; Covington et al., 2016). The core of a recommender system is to predict whether a user will interact with an item, e.g., click, rate, or purchase, among other forms of interaction. As such, collaborative filtering (CF), which exploits past user-item interactions to achieve this prediction, remains a fundamental task for effective personalized recommendation (He et al., 2017b; Liang et al., 2018; Wang et al., 2019c; Ebesu et al., 2018).

The most common paradigm for CF is to learn latent features (a.k.a. embeddings) to represent users and items, and to perform prediction based on the embedding vectors (He et al., 2017b). Matrix factorization is an early such model, which directly projects the single ID of a user to her embedding (Koren et al., 2009; Rendle et al., 2009). Later on, several studies found that augmenting the user ID with her interaction history as the input can improve the quality of the embedding. For example, SVD++ (Koren, 2008) demonstrates the benefits of user interaction history in predicting numerical ratings, and Neural Attentive Item Similarity (NAIS) (He et al., 2018) differentiates the importance of items in the interaction history and shows improvements in predicting item ranking. Viewed on the user-item interaction graph, these improvements can be seen as coming from using the subgraph structure of a user — more specifically, her one-hop neighbors — to improve the embedding learning.

To deepen the use of subgraph structure with high-hop neighbors, Wang et al. (Wang et al., 2019c) recently proposed NGCF and achieved state-of-the-art performance for CF. It takes inspiration from the Graph Convolution Network (GCN) (Kipf and Welling, 2017; Hamilton et al., 2017), following the same propagation rule to refine embeddings: feature transformation, neighborhood aggregation, and nonlinear activation. Although NGCF has shown promising results, we argue that its design is rather heavy and burdensome — many operations are directly inherited from GCN without justification. As a result, they are not necessarily useful for the CF task. To be specific, GCN was originally proposed for node classification on attributed graphs, where each node has rich attributes as input features; in the user-item interaction graph for CF, by contrast, each node (user or item) is described only by a one-hot ID, which has no concrete semantics besides being an identifier. In such a case, given the ID embedding as the input, performing multiple layers of nonlinear feature transformation — which is the key to the success of modern neural networks (He et al., 2016) — brings no benefit and instead increases the difficulty of model training.

To validate our thoughts, we perform extensive ablation studies on NGCF. With rigorous controlled experiments (on the same data splits and evaluation protocol), we draw the conclusion that the two operations inherited from GCN — feature transformation and nonlinear activation — make no positive contribution to NGCF's effectiveness. Even more surprising, removing them leads to significant accuracy improvements. This reflects the issue of adding operations to a graph neural network that are useless for the target task, which not only brings no benefit but also degrades model effectiveness. Motivated by these empirical findings, we present a new model named LightGCN, which includes the most essential component of GCN — neighborhood aggregation — for collaborative filtering. Specifically, after associating each user (item) with an ID embedding, we propagate the embeddings on the user-item interaction graph to refine them. We then combine the embeddings learned at different propagation layers with a weighted sum to obtain the final embedding for prediction. The whole model is simple and elegant, which not only is easier to train, but also achieves better empirical performance than NGCF and other state-of-the-art methods like Mult-VAE (Liang et al., 2018).

To summarize, this work makes the following main contributions:

  • We empirically show that two common designs in GCN, feature transformation and nonlinear activation, have no positive effect on the effectiveness of collaborative filtering.

  • We propose LightGCN, which largely simplifies the model design by including only the most essential components in GCN for recommendation.

  • We empirically compare LightGCN with NGCF by following the same setting and demonstrate substantial improvements. In-depth analyses are provided towards the rationality of LightGCN from both technical and empirical perspectives.

2. Preliminaries

We first introduce NGCF (Wang et al., 2019c), a representative and state-of-the-art GCN model for recommendation. We then perform ablation studies on NGCF to judge the usefulness of each operation in NGCF. The novel contribution of this section is to show that the two common designs in GCNs, feature transformation and nonlinear activation, have no positive effect on collaborative filtering.

2.1. NGCF Brief

In the initial step, each user and item is associated with an ID embedding. Let $\mathbf{e}_u^{(0)}$ denote the ID embedding of user $u$ and $\mathbf{e}_i^{(0)}$ denote the ID embedding of item $i$. Then NGCF leverages the user-item interaction graph to propagate embeddings as:

(1) $\mathbf{e}_u^{(k+1)} = \sigma\Big(\mathbf{W}_1\mathbf{e}_u^{(k)} + \sum_{i\in\mathcal{N}_u}\tfrac{1}{\sqrt{|\mathcal{N}_u||\mathcal{N}_i|}}\big(\mathbf{W}_1\mathbf{e}_i^{(k)} + \mathbf{W}_2(\mathbf{e}_i^{(k)}\odot\mathbf{e}_u^{(k)})\big)\Big),\quad \mathbf{e}_i^{(k+1)} = \sigma\Big(\mathbf{W}_1\mathbf{e}_i^{(k)} + \sum_{u\in\mathcal{N}_i}\tfrac{1}{\sqrt{|\mathcal{N}_i||\mathcal{N}_u|}}\big(\mathbf{W}_1\mathbf{e}_u^{(k)} + \mathbf{W}_2(\mathbf{e}_u^{(k)}\odot\mathbf{e}_i^{(k)})\big)\Big),$

where $\mathbf{e}_u^{(k)}$ and $\mathbf{e}_i^{(k)}$ respectively denote the refined embedding of user $u$ and item $i$ after $k$ layers of propagation, $\sigma$ is the nonlinear activation function, $\mathcal{N}_u$ denotes the set of items that are interacted by user $u$, $\mathcal{N}_i$ denotes the set of users that interact with item $i$, and $\mathbf{W}_1$ and $\mathbf{W}_2$ are trainable weight matrices that perform feature transformation in each layer. By propagating $L$ layers, NGCF obtains $L+1$ embeddings to describe a user ($\mathbf{e}_u^{(0)}, \mathbf{e}_u^{(1)}, \ldots, \mathbf{e}_u^{(L)}$) and an item ($\mathbf{e}_i^{(0)}, \mathbf{e}_i^{(1)}, \ldots, \mathbf{e}_i^{(L)}$). It then concatenates these embeddings to obtain the final user embedding and item embedding, using the inner product to generate the prediction score.
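
To make the propagation rule concrete, here is a minimal NumPy sketch of one NGCF propagation layer in the spirit of Equation (1). It assumes a LeakyReLU activation and dense toy inputs, and it omits dropout; the function name and looping style are our own, written for clarity rather than fidelity to the released TensorFlow code.

```python
import numpy as np

def ngcf_layer(E_u, E_i, R, W1, W2, slope=0.2):
    """One NGCF-style propagation layer (cf. Equation (1)).

    E_u: (M, d) user embeddings at layer k;  E_i: (N, d) item embeddings at layer k
    R:   (M, N) binary interaction matrix;   W1, W2: (d, d) transformation matrices
    Assumes every user and item has at least one interaction (toy setting).
    """
    leaky = lambda x: np.where(x > 0, x, slope * x)      # LeakyReLU activation
    deg_u, deg_i = R.sum(axis=1), R.sum(axis=0)          # |N_u|, |N_i|
    new_E_u, new_E_i = np.zeros_like(E_u), np.zeros_like(E_i)
    for u in range(R.shape[0]):
        msg = E_u[u] @ W1                                # self message W1 * e_u
        for i in np.nonzero(R[u])[0]:                    # items interacted by u
            norm = 1.0 / np.sqrt(deg_u[u] * deg_i[i])
            msg += norm * (E_i[i] @ W1 + (E_i[i] * E_u[u]) @ W2)
        new_E_u[u] = leaky(msg)
    for i in range(R.shape[1]):
        msg = E_i[i] @ W1
        for u in np.nonzero(R[:, i])[0]:                 # users who interacted with i
            norm = 1.0 / np.sqrt(deg_u[u] * deg_i[i])
            msg += norm * (E_u[u] @ W1 + (E_u[u] * E_i[i]) @ W2)
        new_E_i[i] = leaky(msg)
    return new_E_u, new_E_i
```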

NGCF largely follows the standard GCN (Kipf and Welling, 2017), including the use of the nonlinear activation function $\sigma$ and the feature transformation matrices $\mathbf{W}_1$ and $\mathbf{W}_2$. However, we argue that these two operations are not useful for collaborative filtering. In semi-supervised node classification, each node has rich semantic features as input, such as the title and abstract words of a paper, so performing multiple layers of nonlinear transformation is beneficial to feature learning. In collaborative filtering, however, each node of the user-item interaction graph has only an ID as input, which carries no concrete semantics. In this case, performing multiple nonlinear transformations does not contribute to learning better features; even worse, it may make the model harder to train. In the next subsection, we provide empirical evidence for this argument.

Gowalla Amazon-Book
recall ndcg recall ndcg
NGCF 0.1535 0.2238 0.0319 0.0622
NGCF-f 0.1682 0.2392 0.0355 0.0646
NGCF-n 0.1538 0.2243 0.0325 0.0616
NGCF-fn 0.1723 0.2414 0.0371 0.0669
Table 1. Performance of NGCF and its three variants.
Figure 1. Training curves (training loss and testing recall) of NGCF and its three simplified variants: (a) training loss on Gowalla, (b) testing recall on Gowalla, (c) training loss on Amazon-Book, (d) testing recall on Amazon-Book.

2.2. Empirical Explorations on NGCF

We conduct ablation studies on NGCF to explore the effect of nonlinear activation and feature transformation. We use the code released by the authors of NGCF (see footnote 3), running experiments on the same data splits and evaluation protocol to keep the comparison as fair as possible. Since the core of GCN is to refine embeddings by propagation, we are more interested in the embedding quality under the same embedding size. Thus, we change the way of obtaining the final embedding from concatenation (i.e., $\mathbf{e}_u^* = \mathbf{e}_u^{(0)} \| \cdots \| \mathbf{e}_u^{(L)}$) to sum (i.e., $\mathbf{e}_u^* = \mathbf{e}_u^{(0)} + \cdots + \mathbf{e}_u^{(L)}$). Note that this change has little effect on NGCF's performance, but makes the following ablation studies more indicative of the embedding quality refined by GCN.

We implement three simplified variants of NGCF:

  • NGCF-f, which removes the feature transformation matrices $\mathbf{W}_1$ and $\mathbf{W}_2$.

  • NGCF-n, which removes the non-linear activation function $\sigma$.

  • NGCF-fn, which removes both the feature transformation matrices and non-linear activation function.

For the three variants, we keep all hyper-parameters (e.g., learning rate, regularization coefficient, dropout ratio, etc.) the same as the optimal settings of NGCF. We report the results of the 2-layer setting on the Gowalla and Amazon-Book datasets in Table 1, where the scores of NGCF are directly copied from Table 3 of (Wang et al., 2019c). As can be seen, removing feature transformation (i.e., NGCF-f) leads to consistent improvements over NGCF on both datasets. In contrast, removing nonlinear activation alone does not affect the accuracy much. However, if we remove nonlinear activation on top of removing feature transformation (i.e., NGCF-fn), the performance is improved significantly. From these observations, we draw the following conclusions:

(1) Adding feature transformation imposes a negative effect on NGCF, since removing it from both NGCF and NGCF-n improves the performance significantly;

(2) Adding nonlinear activation has only a slight effect when feature transformation is included, but it imposes a negative effect when feature transformation is disabled;

(3) As a whole, feature transformation and nonlinear activation impose a rather negative effect on NGCF, since by removing them simultaneously, NGCF-fn demonstrates large improvements over NGCF (9.57% relative improvement on recall).

To gain more insight into the scores in Table 1 and understand why NGCF deteriorates with the two operations, we plot the model status recorded by training loss and testing recall in Figure 1. As can be seen, NGCF-fn achieves a much lower training loss than NGCF, NGCF-f, and NGCF-n throughout the training process. Aligning with the curves of testing recall, we find that this lower training loss successfully transfers to better recommendation accuracy. The comparison between NGCF and NGCF-f shows a similar trend, except that the improvement margin is smaller.

From this evidence, we conclude that the deterioration of NGCF stems from training difficulty rather than overfitting. Theoretically, NGCF has higher representation power than NGCF-f, since setting the weight matrices $\mathbf{W}_1$ and $\mathbf{W}_2$ to the identity matrix $\mathbf{I}$ fully recovers the NGCF-f model. However, in practice, NGCF demonstrates higher training loss and worse generalization performance than NGCF-f, and the incorporation of nonlinear activation further aggravates the discrepancy between representation power and generalization performance. To round out this section, we claim that when designing models for recommendation, it is important to perform rigorous ablation studies to be clear about the impact of each operation. Otherwise, including less useful operations will complicate the model unnecessarily, increase the training difficulty, and even degrade model effectiveness.

3. Method

The previous section demonstrates that NGCF is a heavy and burdensome GCN model for collaborative filtering. Driven by these findings, we set the goal of developing a light yet effective model by including the most essential ingredients of GCN for recommendation. The advantages of being simple are several-fold — the model is more interpretable, practically easy to train and maintain, and technically easy to analyze and to revise towards more effective directions.

In this section, we first present our designed Light Graph Convolution Network (LightGCN) model, as illustrated in Figure 2. We then provide an in-depth analysis of LightGCN to show the rationality behind its simple design. Lastly, we describe how to do model training for recommendation.

3.1. LightGCN

The basic idea of GCN is to learn representations for nodes by smoothing features over the graph (Kipf and Welling, 2017; Wu et al., 2019a). To achieve this, it performs graph convolution iteratively, i.e., aggregating the features of neighbors as the new representation of a target node. Such neighborhood aggregation can be abstracted as:

(2) $\mathbf{e}_u^{(k+1)} = \mathrm{AGG}\big(\mathbf{e}_u^{(k)}, \{\mathbf{e}_i^{(k)} : i \in \mathcal{N}_u\}\big).$

AGG is an aggregation function — the core of graph convolution — that considers the $k$-th layer's representation of the target node and its neighbor nodes. Many works have specified AGG, such as the weighted sum aggregator in GCN (Kipf and Welling, 2017) and GIN (Xu et al., 2018), and the mean and LSTM aggregators in GraphSAGE (Hamilton et al., 2017). However, most of these works tie feature transformation or nonlinear activation to the AGG function. Although they perform well on node or graph classification tasks that have semantic input features, they can be burdensome for collaborative filtering (see preliminary results in Section 2.2).

Figure 2. An illustration of the LightGCN model architecture. In LGC, only the normalized sum of neighbor embeddings is performed towards the next layer; other operations like self-connection, feature transformation, and nonlinear activation are all removed, which largely simplifies GCNs. In Layer Combination, we sum over the embeddings at each layer to obtain the final representations.

3.1.1. Light Graph Convolution (LGC)

In LightGCN, we adopt the simple weighted sum aggregator and abandon the use of feature transformation and nonlinear activation. The graph convolution operation (a.k.a., propagation rule (Wang et al., 2019c)) in LightGCN is defined as:

(3) $\mathbf{e}_u^{(k+1)} = \sum_{i\in\mathcal{N}_u}\frac{1}{\sqrt{|\mathcal{N}_u|}\sqrt{|\mathcal{N}_i|}}\mathbf{e}_i^{(k)}, \qquad \mathbf{e}_i^{(k+1)} = \sum_{u\in\mathcal{N}_i}\frac{1}{\sqrt{|\mathcal{N}_i|}\sqrt{|\mathcal{N}_u|}}\mathbf{e}_u^{(k)}.$

The symmetric normalization term $\frac{1}{\sqrt{|\mathcal{N}_u|}\sqrt{|\mathcal{N}_i|}}$ follows the design of the standard GCN (Kipf and Welling, 2017), which avoids the scale of embeddings increasing with graph convolution operations. Other choices can also be applied here, such as the $L_1$ norm, but empirically we find this symmetric normalization has good performance (see experimental results in Section 4.4.2).

It is worth noting that in LGC, we aggregate only the connected neighbors and do not integrate the target node itself (i.e., self-connection). This is different from most existing graph convolution operations (Wang et al., 2019c; Kipf and Welling, 2017; Velickovic et al., 2018; Hamilton et al., 2017) that typically aggregate extended neighbors and need to handle the self-connection specially. The layer combination operation, to be introduced in the next subsection, essentially captures the same effect as self-connections. Thus, there is no need in LGC to include self-connections.
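
As a sanity check of how little is left after the simplification, the following NumPy sketch implements one LGC layer of Equation (3) on a dense toy interaction matrix: there is no weight matrix, no activation, and no self-connection. The function name and dense representation are our own illustrative choices.

```python
import numpy as np

def lgc_layer(E_u, E_i, R):
    """One Light Graph Convolution layer (Equation (3)): a symmetrically
    normalized sum of neighbor embeddings and nothing else.
    Assumes every user and item has at least one interaction (toy setting)."""
    deg_u = R.sum(axis=1)                          # |N_u| per user
    deg_i = R.sum(axis=0)                          # |N_i| per item
    coef = R / np.sqrt(np.outer(deg_u, deg_i))     # 1/sqrt(|N_u||N_i|) on edges, 0 elsewhere
    return coef @ E_i, coef.T @ E_u                # next-layer user / item embeddings
```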

3.1.2. Layer Combination and Model Prediction

In LightGCN, the only trainable model parameters are the embeddings at the 0-th layer, i.e., $\mathbf{e}_u^{(0)}$ for all users and $\mathbf{e}_i^{(0)}$ for all items. When they are given, the embeddings at higher layers can be computed via the LGC defined in Equation (3). After $K$ layers of LGC, we further combine the embeddings obtained at each layer to form the final representation of a user (an item):

(4) $\mathbf{e}_u = \sum_{k=0}^{K}\alpha_k\,\mathbf{e}_u^{(k)}, \qquad \mathbf{e}_i = \sum_{k=0}^{K}\alpha_k\,\mathbf{e}_i^{(k)},$

where $\alpha_k \geq 0$ denotes the importance of the $k$-th layer embedding in constituting the final embedding. It can be treated as a hyper-parameter to be tuned manually, or as a model parameter (e.g., the output of an attention network (Chen et al., 2017)) to be optimized automatically. In our experiments, we find that setting $\alpha_k$ uniformly as $1/(K+1)$ leads to good performance in general. Thus we do not design a special component to optimize $\alpha_k$, to avoid complicating LightGCN unnecessarily and to keep its simplicity. The reasons that we perform layer combination to get final representations are three-fold. (1) With an increasing number of layers, the embeddings become over-smoothed (Li et al., 2018), so simply using the last layer is problematic. (2) The embeddings at different layers capture different semantics. E.g., the first layer enforces smoothness on users and items that have interactions, the second layer smooths users (items) that have overlap on interacted items (users), and higher layers capture higher-order proximity (Wang et al., 2019c). Thus combining them makes the representation more comprehensive. (3) Combining embeddings at different layers with a weighted sum captures the effect of graph convolution with self-connections, an important trick in GCNs (for proof see Section 3.2.1).

The model prediction is defined as the inner product of user and item final representations:

(5) $\hat{y}_{ui} = \mathbf{e}_u^{\top}\mathbf{e}_i,$

which is used as the ranking score for recommendation generation.

3.1.3. Matrix Form

We provide the matrix form of LightGCN to facilitate implementation and discussion with existing models. Let the user-item interaction matrix be $\mathbf{R} \in \mathbb{R}^{M\times N}$, where $M$ and $N$ denote the number of users and items, respectively, and each entry $R_{ui}$ is 1 if $u$ has interacted with item $i$, otherwise 0. We then obtain the adjacency matrix of the user-item graph as

(6) $\mathbf{A} = \begin{pmatrix}\mathbf{0} & \mathbf{R}\\ \mathbf{R}^{\top} & \mathbf{0}\end{pmatrix}.$

Let the 0-th layer embedding matrix be $\mathbf{E}^{(0)} \in \mathbb{R}^{(M+N)\times T}$, where $T$ is the embedding size. Then we can obtain the matrix equivalent form of LGC as:

(7) $\mathbf{E}^{(k+1)} = \big(\mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}\big)\mathbf{E}^{(k)},$

where $\mathbf{D}$ is a $(M+N)\times(M+N)$ diagonal matrix, in which each entry $D_{ii}$ denotes the number of nonzero entries in the $i$-th row vector of the adjacency matrix $\mathbf{A}$ (it is also named the degree matrix). Lastly, we get the final embedding matrix used for model prediction as:

(8) $\mathbf{E} = \alpha_0\mathbf{E}^{(0)} + \alpha_1\tilde{\mathbf{A}}\mathbf{E}^{(0)} + \alpha_2\tilde{\mathbf{A}}^{2}\mathbf{E}^{(0)} + \cdots + \alpha_K\tilde{\mathbf{A}}^{K}\mathbf{E}^{(0)},$

where $\tilde{\mathbf{A}} = \mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}$ is the symmetrically normalized matrix.
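
For implementation, the whole forward pass of LightGCN reduces to a few sparse matrix products. Below is a minimal SciPy sketch of Equations (6)-(8) with the uniform layer weight $1/(K+1)$ used in this paper; the function name and the guard for isolated nodes are our own additions, not part of the released code.

```python
import numpy as np
import scipy.sparse as sp

def lightgcn_embeddings(R, E0, K=3):
    """Matrix-form LightGCN propagation (Equations (6)-(8)).

    R:  (M, N) scipy.sparse binary user-item interaction matrix
    E0: (M+N, d) 0-th layer embeddings -- the only trainable parameters
    Returns final user and item embeddings with alpha_k = 1/(K+1).
    """
    M, _ = R.shape
    A = sp.bmat([[None, R], [R.T, None]], format="csr")     # adjacency matrix, Equation (6)
    deg = np.asarray(A.sum(axis=1)).flatten().astype(float)
    d_inv_sqrt = np.zeros_like(deg)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5              # guard isolated nodes
    D_inv_sqrt = sp.diags(d_inv_sqrt)
    A_tilde = D_inv_sqrt @ A @ D_inv_sqrt                   # symmetric sqrt normalization
    emb = E0
    out = E0 / (K + 1)                                      # alpha_0 * E^(0)
    for _ in range(K):
        emb = A_tilde @ emb                                 # one LGC layer, Equation (7)
        out = out + emb / (K + 1)                           # accumulate Equation (8)
    return out[:M], out[M:]                                 # user / item final embeddings

# Model prediction (Equation (5)): scores = user_final @ item_final.T
```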

3.2. Model Analysis

We conduct model analysis to demonstrate the rationality behind the simple design of LightGCN. First we discuss the connection with the Simplified GCN (SGCN) (Wu et al., 2019a), a recent linear GCN model that integrates self-connection into graph convolution; this analysis shows that, by doing layer combination, LightGCN subsumes the effect of self-connection, so there is no need for LightGCN to add self-connection to the adjacency matrix. Then we discuss the relation with the Approximate Personalized Propagation of Neural Predictions (APPNP) (Klicpera et al., 2019), a recent GCN variant that addresses oversmoothing by drawing inspiration from Personalized PageRank (Haveliwala, 2002); this analysis shows the underlying equivalence between LightGCN and APPNP, so LightGCN enjoys the same benefits in long-range propagation with controllable oversmoothing. Lastly we analyze the second-layer LGC to show how it smooths a user with her second-order neighbors, providing more insight into the working mechanism of LightGCN.

3.2.1. Relation with SGCN

In (Wu et al., 2019a), the authors argue that much of GCN's complexity is unnecessary for node classification and propose SGCN, which simplifies GCN by removing nonlinearities and collapsing the weight matrices into one weight matrix. The graph convolution in SGCN is defined as (see footnote 4):

(9) $\mathbf{E}^{(k+1)} = (\mathbf{D}+\mathbf{I})^{-\frac{1}{2}}(\mathbf{A}+\mathbf{I})(\mathbf{D}+\mathbf{I})^{-\frac{1}{2}}\mathbf{E}^{(k)},$

where $\mathbf{I}$ is an identity matrix, which is added to $\mathbf{A}$ to include self-connections. In the following analysis, we omit the $(\mathbf{D}+\mathbf{I})^{-\frac{1}{2}}$ terms for simplicity, since they only re-scale embeddings. In SGCN, the embeddings obtained at the last layer are used for the downstream prediction task, which can be expressed as:

(10) $\mathbf{E}^{(K)} = (\mathbf{A}+\mathbf{I})\mathbf{E}^{(K-1)} = (\mathbf{A}+\mathbf{I})^{K}\mathbf{E}^{(0)} = \binom{K}{0}\mathbf{E}^{(0)} + \binom{K}{1}\mathbf{A}\mathbf{E}^{(0)} + \binom{K}{2}\mathbf{A}^{2}\mathbf{E}^{(0)} + \cdots + \binom{K}{K}\mathbf{A}^{K}\mathbf{E}^{(0)}.$

The above derivation shows that inserting self-connection into $\mathbf{A}$ and propagating embeddings on it is essentially equivalent to a weighted sum of the embeddings propagated at each LGC layer.

3.2.2. Relation with APPNP

In a recent work (Klicpera et al., 2019), the authors connect GCN with Personalized PageRank (Haveliwala, 2002), inspired by which they propose a GCN variant named APPNP that can propagate over long ranges without the risk of oversmoothing. Inspired by the teleport design in Personalized PageRank, APPNP complements each propagation layer with the starting features (i.e., the 0-th layer embeddings), which balances the need for preserving locality (i.e., staying close to the root node to alleviate oversmoothing) and leveraging information from a large neighborhood. The propagation layer in APPNP is defined as:

(11) $\mathbf{E}^{(k+1)} = \beta\,\mathbf{E}^{(0)} + (1-\beta)\,\tilde{\mathbf{A}}\mathbf{E}^{(k)},$

where $\beta$ is the teleport probability that controls how much of the starting features is retained in the propagation, and $\tilde{\mathbf{A}}$ denotes the normalized adjacency matrix. In APPNP, the last layer is used for the final prediction, i.e.,

(12) $\mathbf{E}^{(K)} = \beta\,\mathbf{E}^{(0)} + (1-\beta)\tilde{\mathbf{A}}\mathbf{E}^{(K-1)} = \beta\,\mathbf{E}^{(0)} + \beta(1-\beta)\tilde{\mathbf{A}}\mathbf{E}^{(0)} + \beta(1-\beta)^{2}\tilde{\mathbf{A}}^{2}\mathbf{E}^{(0)} + \cdots + (1-\beta)^{K}\tilde{\mathbf{A}}^{K}\mathbf{E}^{(0)}.$

Aligning with Equation (8), we can see that by setting $\alpha_k$ accordingly, LightGCN can fully recover the prediction embedding used by APPNP. As such, LightGCN shares the strength of APPNP in combating oversmoothing — by setting $\alpha_k$ properly, we can use a large $K$ for long-range modeling with controllable oversmoothing.
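
Spelling out this correspondence explicitly (our own unrolling of Equation (12)): matching terms with Equation (8) gives $\alpha_k = \beta(1-\beta)^{k}$ for $k < K$ and $\alpha_K = (1-\beta)^{K}$. A larger teleport probability $\beta$ thus concentrates weight on the low-order (local) layers, which is exactly how APPNP keeps propagation local and controls oversmoothing.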

Another minor difference is that APPNP adds self-connection into the adjacency matrix. However, as we have shown before, this is redundant due to the weighted sum of different layers.

3.2.3. Second-Order Embedding Smoothness

Owing to the linearity and simplicity of LightGCN, we can draw more insight into how it smooths embeddings. Here we analyze a 2-layer LightGCN to demonstrate its rationality. Taking the user side as an example, intuitively, the second layer smooths users that have overlap on the interacted items. More concretely, we have:

(13) $\mathbf{e}_u^{(2)} = \sum_{i\in\mathcal{N}_u}\frac{1}{\sqrt{|\mathcal{N}_u|}\sqrt{|\mathcal{N}_i|}}\mathbf{e}_i^{(1)} = \sum_{i\in\mathcal{N}_u}\frac{1}{|\mathcal{N}_i|}\sum_{v\in\mathcal{N}_i}\frac{1}{\sqrt{|\mathcal{N}_u|}\sqrt{|\mathcal{N}_v|}}\mathbf{e}_v^{(0)}.$

We can see that, if another user $v$ has co-interacted with the target user $u$, the smoothness strength of $v$ on $u$ is measured by the following coefficient (which is 0 if they share no interacted item):

(14) $c_{v\rightarrow u} = \frac{1}{\sqrt{|\mathcal{N}_u|}\sqrt{|\mathcal{N}_v|}}\sum_{i\in\mathcal{N}_u\cap\mathcal{N}_v}\frac{1}{|\mathcal{N}_i|}.$

This coefficient is rather interpretable: the influence of a second-order neighbor $v$ on $u$ is determined by 1) the number of co-interacted items — the more, the larger; 2) the popularity of the co-interacted items — the less popular (i.e., the more indicative of personalized user preference), the larger; and 3) the activity of $v$ — the less active, the larger. Such interpretability caters well to the assumption of CF in measuring user similarity (Chen et al., 2019a; Wang et al., 2006) and evidences the reasonability of LightGCN. Due to the symmetric formulation of LightGCN, a similar analysis holds on the item side.
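
As a concrete illustration of Equation (14), the short snippet below computes the coefficient for a toy interaction matrix; the matrix, the function name, and the example users are all hypothetical.

```python
import numpy as np

def second_order_coeff(R, u, v):
    """Smoothing strength c_{v->u} of Equation (14) for a 2-layer LightGCN."""
    N_u, N_v = np.nonzero(R[u])[0], np.nonzero(R[v])[0]
    shared = np.intersect1d(N_u, N_v)              # co-interacted items of u and v
    item_pop = R.sum(axis=0)                       # |N_i| for every item
    return np.sum(1.0 / item_pop[shared]) / np.sqrt(len(N_u) * len(N_v))

# Toy check: users 0 and 1 share item 0 (popular) and item 1 (less popular).
R = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [1, 0, 0, 0]])
print(second_order_coeff(R, 0, 1))   # the niche shared item contributes more than the popular one
```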

3.3. Model Training

The trainable parameters of LightGCN are only the embeddings of the 0-th layer, i.e., $\mathbf{E}^{(0)}$; in other words, the model complexity is the same as that of standard matrix factorization (MF). We employ the Bayesian Personalized Ranking (BPR) loss (Rendle et al., 2009), a pairwise loss that encourages the prediction of an observed entry to be higher than those of its unobserved counterparts:

(15) $L_{BPR} = -\sum_{u=1}^{M}\sum_{i\in\mathcal{N}_u}\sum_{j\notin\mathcal{N}_u}\ln\sigma(\hat{y}_{ui}-\hat{y}_{uj}) + \lambda\,\big\|\mathbf{E}^{(0)}\big\|^{2},$

where $\lambda$ controls the $L_2$ regularization strength. We employ the Adam (Kingma and Ba, 2015) optimizer and use it in a mini-batch manner. We are aware of other advanced negative sampling strategies which might improve LightGCN training, such as hard negative sampling (Rendle and Freudenthaler, 2014) and adversarial sampling (Ding et al., 2019). We leave this extension to future work since it is not the focus of this work.
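
For reference, here is a NumPy sketch of the mini-batch BPR objective in Equation (15) (the loss value only; in practice a framework such as TensorFlow, which our implementation uses, handles the gradients). The batch averaging, default regularization value, and function signature are our own illustrative conventions.

```python
import numpy as np

def bpr_loss(user_final, item_final, E0, users, pos_items, neg_items, lam=1e-4):
    """Mini-batch BPR loss with L2 regularization on the 0-th layer embeddings.

    user_final/item_final: final LightGCN embeddings (Equation (4))
    E0: 0-th layer embeddings, the only trainable parameters
    users, pos_items, neg_items: index arrays of sampled (u, i, j) triples
    lam: illustrative default; the regularization strength is a tuned hyper-parameter
    """
    y_ui = np.sum(user_final[users] * item_final[pos_items], axis=1)   # scores of observed pairs
    y_uj = np.sum(user_final[users] * item_final[neg_items], axis=1)   # scores of sampled negatives
    log_sigmoid = -np.logaddexp(0.0, -(y_ui - y_uj))                   # ln sigma(y_ui - y_uj), numerically stable
    return -np.mean(log_sigmoid) + lam * np.sum(E0 ** 2)
```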

Note that we do not introduce the dropout mechanisms that are commonly used in GCNs and NGCF. The reason is that we do not have feature transformation weight matrices in LightGCN, so enforcing $L_2$ regularization on the embedding layer is sufficient to prevent overfitting. This showcases LightGCN's advantage of being simple — it is easier to train and tune than NGCF, which additionally requires tuning two dropout ratios (node dropout and message dropout) and normalizing the embedding of each layer to unit length.

Moreover, it is technically viable to also learn the layer combination coefficients $\{\alpha_k\}$, or to parameterize them with an attention network. However, we find that learning $\alpha$ on training data does not lead to improvement. This is probably because the training data does not contain sufficient signal to learn good $\alpha$ that can generalize to unknown data. We have also tried to learn $\alpha$ from validation data, inspired by (Chen et al., 2019b) which learns hyper-parameters on validation data; the performance is only slightly improved. We leave the exploration of optimal settings of $\alpha$ (e.g., personalizing it for different users and items) as future work.

4. Experiments

We first describe the experimental settings, and then conduct a detailed comparison with NGCF (Wang et al., 2019c), the method most relevant to LightGCN but more complicated (Section 4.2). We next compare with other state-of-the-art methods in Section 4.3. To justify the designs in LightGCN and reveal the reasons for its effectiveness, we perform ablation studies and embedding analyses in Section 4.4. The hyper-parameter study is finally presented in Section 4.5.

Dataset User # Item # Interaction # Density
Gowalla
Yelp2018
Amazon-Book
Table 2. Statistics of the experimented data.

4.1. Experimental Settings

To reduce the experiment workload and keep the comparison fair, we closely follow the settings of the NGCF work (Wang et al., 2019c). We requested the experimented datasets (including train/test splits) from the authors; their statistics are shown in Table 2. Gowalla and Amazon-Book are exactly the same as used in the NGCF paper, so we directly use the results from the NGCF paper. The only exception is the Yelp2018 data, which is a revised version: according to the authors, the previous version did not filter out cold-start items in the testing set, and they shared only the revised version with us. Thus we re-run NGCF on the Yelp2018 data. The evaluation metrics are recall@20 and ndcg@20 computed by the all-ranking protocol — all items that a user has not interacted with are the candidates.
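
To make the evaluation protocol explicit, the snippet below sketches recall@20 and ndcg@20 for a single user under all-ranking. The binary-relevance DCG convention follows common practice in this line of work, but the exact variant should be treated as an assumption rather than a specification of the NGCF codebase.

```python
import numpy as np

def recall_ndcg_at_k(ranked_items, test_items, k=20):
    """recall@K and ndcg@K for one user.

    ranked_items: all candidate items sorted by predicted score (training items excluded),
                  so its length is at least K under the all-ranking protocol
    test_items:   set of held-out test items of this user
    """
    top_k = ranked_items[:k]
    hits = np.array([1.0 if item in test_items else 0.0 for item in top_k])
    recall = hits.sum() / len(test_items)
    dcg = np.sum(hits / np.log2(np.arange(2, k + 2)))                        # binary relevance
    idcg = np.sum(1.0 / np.log2(np.arange(2, min(len(test_items), k) + 2)))  # ideal ranking
    ndcg = dcg / idcg if idcg > 0 else 0.0
    return recall, ndcg
```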

Dataset Gowalla Yelp2018* Amazon-Book
Layer # Method recall ndcg recall ndcg recall ndcg
1 Layer NGCF 0.1511 0.2218 0.0542 0.1028 0.0315 0.0618
LightGCN 0.1726(+14.23%) 0.2455(+10.67%) 0.0633(+16.79%) 0.1148(+11.67%) 0.0385(+22.22%) 0.0698(+12.94%)
2 Layers NGCF 0.1535 0.2238 0.0550 0.1025 0.0319 0.0622
LightGCN 0.1786(+16.35%) 0.2487(+11.12%) 0.0618(+12.36%) 0.1120(+9.27%) 0.0413(+29.48%) 0.0729(+17.20%)
3 Layers NGCF 0.1547 0.2237 0.0549 0.1023 0.0344 0.0630
LightGCN 0.1809(+16.94%) 0.2513(+12.34%) 0.0648(+18.03%) 0.1163(+13.69%) 0.0415(+20.64%) 0.0740(+17.46%)
4 Layers NGCF 0.1560 0.2240 0.0548 0.1020 0.0342 0.0636
LightGCN 0.1817(+16.47%) 0.2518(+12.41%) 0.0655(+19.53%) 0.1170(+14.71%) 0.0416(+21.68%) 0.0739(+16.19%)

*The scores of NGCF on Gowalla and Amazon-Book are directly copied from the Table 3 of (Wang et al., 2019c); the scores of NGCF on Yelp2018 are re-run by us.

Table 3. Performance comparison between NGCF and LightGCN at different layers.

4.1.1. Compared Methods

The main competing method is NGCF, which has shown to outperform several methods including GCN-based models GC-MC (van den Berg et al., 2018) and PinSage (Ying et al., 2018), neural network-based models NeuMF (He et al., 2017b) and CMN (Ebesu et al., 2018), and factorization-based models MF (Rendle et al., 2009) and HOP-Rec (Yang et al., 2018). As the comparison is done on the same datasets under the same evaluation protocol, we do not further compare with these methods. In addition to NGCF, we further compare with two relevant and competitive CF methods:

  • Mult-VAE (Liang et al., 2018). This is an item-based CF method based on the variational autoencoder (VAE). It assumes the data is generated from a multinomial distribution and uses variational inference for parameter estimation. We run the code released by the authors (see footnote 5), tuning the dropout ratio and the regularization weight $\beta$; the model architecture is the one suggested in the paper.

  • GRMF (Rao et al., 2015). This method smooths matrix factorization by adding the graph Laplacian regularizer. For fair comparison on item recommendation, we change the rating prediction loss to BPR loss. The objective function of GRMF is:

    (16) $L = -\sum_{u=1}^{M}\sum_{i\in\mathcal{N}_u}\sum_{j\notin\mathcal{N}_u}\ln\sigma(\mathbf{e}_u^{\top}\mathbf{e}_i-\mathbf{e}_u^{\top}\mathbf{e}_j) + \lambda_g\sum_{u=1}^{M}\sum_{i\in\mathcal{N}_u}\big\|\mathbf{e}_u-\mathbf{e}_i\big\|^{2} + \lambda\big\|\mathbf{E}\big\|^{2},$

    where $\lambda_g$ controls the strength of the graph Laplacian regularizer and is tuned as a hyper-parameter. Moreover, we compare with a variant that adds degree-based normalization to the graph Laplacian term, which is termed GRMF-norm. Other hyper-parameter settings are the same as for LightGCN. The two GRMF methods benchmark the performance of smoothing embeddings via a Laplacian regularizer, whereas LightGCN achieves embedding smoothing in the predictive model.

4.1.2. Hyper-parameter Settings

Same as NGCF, the embedding size is fixed to 64 for all models and the embedding parameters are initialized with the Xavier method (Glorot and Bengio, 2010). We optimize LightGCN with Adam (Kingma and Ba, 2015) using the default learning rate of 0.001 and the default mini-batch size of 1024 (on Amazon-Book, we increase the mini-batch size to 2048 for speed). The $L_2$ regularization coefficient $\lambda$ is searched by grid over several orders of magnitude, and in most cases a small value is optimal. The layer combination coefficient $\alpha_k$ is uniformly set to $1/(K+1)$, where $K$ is the number of layers. We test $K$ in the range of 1 to 4, and satisfactory performance can be achieved when $K$ equals 3. The early stopping and validation strategies are the same as for NGCF. Typically, 1000 epochs are sufficient for LightGCN to converge. The implementation is based on TensorFlow, and we will release all codes and data upon acceptance.
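
Collecting the settings above into a single configuration sketch (the field names are ours; only values explicitly stated in this subsection are filled in):

```python
# Hyper-parameter settings described above, gathered into one illustrative config dict.
LIGHTGCN_CONFIG = {
    "embedding_size": 64,          # fixed for all models, Xavier initialization
    "optimizer": "Adam",
    "learning_rate": 1e-3,
    "batch_size": 1024,            # increased to 2048 on Amazon-Book for speed
    "num_layers": 3,               # K in {1, 2, 3, 4}; 3 is usually satisfactory
    "layer_weights": "uniform",    # alpha_k = 1 / (K + 1)
    "l2_reg": None,                # lambda: tuned per dataset (see Section 4.5)
    "max_epochs": 1000,            # typically sufficient for convergence
}
```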

Figure 3. Training curves of LightGCN and NGCF, evaluated by training loss and testing recall per 20 epochs on Gowalla and Amazon-Book (results on Yelp2018 show exactly the same trend and are omitted for space).

4.2. Performance Comparison with NGCF

We perform a detailed comparison with NGCF, recording the performance at different layers (1 to 4) in Table 3, which also shows the percentage of relative improvement on each metric. We further plot the training curves of training loss and testing recall in Figure 3 to reveal the advantages of LightGCN and to clarify the training process. The main observations are as follows:

  • In all cases, LightGCN outperforms NGCF by a large margin. For example, on Gowalla the highest recall reported in the NGCF paper is 0.1560, while our LightGCN reaches 0.1817 under the 4-layer setting, which is 16.47% higher. On average over the three datasets, the relative improvement is about 16.5%, which is rather significant.

  • Aligning Table 3 with Table 1 in Section 2, we can see that LightGCN performs better than NGCF-fn, the variant of NGCF that removes feature transformation and nonlinear activation. As NGCF-fn still contains more operations than LightGCN (e.g., self-connection, the interaction between user embedding and item embedding in graph convolution, and dropout), this suggests that these operations might also be useless for NGCF-fn.

  • Increasing the number of layers can improve the performance, but the benefits diminish. The general observation is that increasing the layer number from 0 (i.e., the matrix factorization model; see the results in (Wang et al., 2019c)) to 1 leads to the largest performance gain, and using a layer number of 3 leads to satisfactory performance in most cases. This observation is consistent with NGCF's finding.

  • Along the training process, LightGCN consistently obtains a lower training loss, which indicates that LightGCN fits the training data better than NGCF. Moreover, the lower training loss successfully transfers to better testing accuracy, indicating the strong generalization power of LightGCN. In contrast, the higher training loss and lower testing accuracy of NGCF reflect the practical difficulty of training such a heavy model well. Note that the figures show the training process under the optimal hyper-parameter setting for both methods. Although increasing the learning rate of NGCF can decrease its training loss (even below that of LightGCN), the testing recall cannot be improved, as lowering the training loss in this way only finds trivial solutions for NGCF.

4.3. Performance Comparison with State-of-the-Arts

Table 4 shows the performance comparison with competing methods. We show the best score we can obtain for each method. We can see that LightGCN consistently outperforms other methods on all three datasets, demonstrating its high effectiveness with simple yet reasonable designs. Note that LightGCN can be further improved by tuning $\alpha_k$ (see Figure 4 for evidence), while here we only use a uniform setting of $1/(K+1)$ to avoid over-tuning. Among the baselines, Mult-VAE exhibits the strongest performance, better than GRMF and NGCF. The performance of GRMF is on a par with NGCF and better than MF, which confirms the utility of enforcing embedding smoothness with a Laplacian regularizer. By adding normalization into the Laplacian regularizer, GRMF-norm performs better than GRMF on Gowalla, but brings no benefit on Yelp2018 and Amazon-Book.

Dataset Gowalla Yelp2018 Amazon-Book
Method recall ndcg recall ndcg recall ndcg
NGCF 0.1560 0.2240 0.0550 0.1028 0.0344 0.0636
Mult-VAE 0.1651 0.2245 0.0582 0.1026 0.0408 0.0710
GRMF 0.1472 0.2030 0.0570 0.1049 0.0351 0.0641
GRMF-norm 0.1544 0.2122 0.0559 0.1051 0.0352 0.0645
LightGCN 0.1817 0.2518 0.0655 0.1170 0.0416 0.0739
Table 4. The comparison of overall performance among LightGCN and competing methods.

4.4. Ablation and Effectiveness Analyses

We perform ablation studies on LightGCN by showing how layer combination and symmetric sqrt normalization affect its performance. To justify the rationality of LightGCN as analyzed in Section 3.2.3, we further investigate the effect of embedding smoothness — a key reason for LightGCN's effectiveness.

Figure 4. Results of LightGCN and the variant that does not use layer combination (i.e., LightGCN-single) at different layers on Gowalla and Amazon-Book (results on Yelp2018 show the same trend as Amazon-Book and are omitted for space).

4.4.1. Impact of Layer Combination

Figure 4 shows the results of LightGCN and its variant LightGCN-single, which does not use layer combination (i.e., $\mathbf{e}_u^{(K)}$ is used for final prediction in a $K$-layer LightGCN). We omit the results on Yelp2018 due to space limitation; they show a similar trend to Amazon-Book. We have three main observations:

  • Focusing on LightGCN-single, we find that its performance first improves and then drops as the layer number increases from 1 to 4. The peak is at layer 2 in most cases, after which the performance drops quickly to its worst point at layer 4. This indicates that smoothing a node's embedding with its first-order and second-order neighbors is very useful for CF, but over-smoothing issues arise when higher-order neighbors are used.

  • Focusing on LightGCN, we find that its performance gradually improves with the increasing of layers. Even using 4 layers, LightGCN’s performance is not degraded. This justifies the effectiveness of layer combination for addressing over-smoothing, as we have technically analyzed in Section 3.2.2 (relation with APPNP).

  • Comparing the two methods, we find that LightGCN consistently outperforms LightGCN-single on Gowalla, but not on Amazon-Book and Yelp2018 (where the 2-layer LightGCN-single performs best). Regarding this phenomenon, two points should be noted before drawing conclusions: 1) LightGCN-single is a special case of LightGCN that sets $\alpha_K$ to 1 and the other $\alpha_k$ to 0; 2) we do not tune $\alpha_k$ and simply set it uniformly to $1/(K+1)$ for LightGCN. As such, we can see the potential of further enhancing the performance of LightGCN by tuning $\alpha_k$.

Dataset Gowalla Yelp2018 Amazon-Book
Method recall ndcg recall ndcg recall ndcg
LightGCN-L1-L 0.1700 0.2215 0.0633 0.1119 0.0423 0.0726
LightGCN-L1-R 0.1577 0.2311 0.0586 0.1086 0.0331 0.0634
LightGCN-L1 0.1587 0.2169 0.0574 0.1053 0.0364 0.0653
LightGCN-L 0.1511 0.2125 0.0564 0.1051 0.0375 0.0689
LightGCN-R 0.1295 0.1884 0.0484 0.0872 0.0256 0.0538
LightGCN 0.1809 0.2513 0.0648 0.1163 0.0415 0.0740

Method notation: -L means only the left-side norm is used, -R means only the right-side norm is used, and -L1 means the $L_1$ norm is used.

Table 5. Performance of the 3-layer LightGCN with different choices of normalization schemes in graph convolution.

4.4.2. Impact of Symmetric Sqrt Normalization

In LightGCN, we employ symmetric sqrt normalization $\frac{1}{\sqrt{|\mathcal{N}_u|}\sqrt{|\mathcal{N}_i|}}$ on each neighbor embedding when performing neighborhood aggregation (cf. Equation (3)). To study its rationality, we explore different choices here. We test the use of normalization only at the left side (i.e., the target node's coefficient) and only at the right side (i.e., the neighbor node's coefficient). We also test $L_1$ normalization, i.e., removing the square root. Note that if normalization is removed entirely, the training becomes numerically unstable and suffers from not-a-number (NaN) issues, so we do not show this setting. Table 5 shows the results of the 3-layer LightGCN; a construction sketch of these normalization variants is given after the observation list below. We have the following observations:

  • The best setting in general is using sqrt normalization at both sides (i.e., the current design of LightGCN). Removing the normalization on either side largely drops the performance.

  • The second best setting is using $L_1$ normalization at the left side only (i.e., LightGCN-L1-L). This is equivalent to normalizing the adjacency matrix into a stochastic matrix by the in-degree.

  • Normalizing symmetrically on both sides is helpful for the sqrt normalization, but degrades the performance of $L_1$ normalization.
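
The sketch below spells out how each normalization variant in Table 5 can be constructed from the degree matrix. The scheme names and our reading of the $L_1$ variants (dropping the square root on the corresponding side) are assumptions for illustration, not a specification of the released code.

```python
import numpy as np
import scipy.sparse as sp

def normalized_adjacency(A, scheme="sqrt-both"):
    """Normalization choices compared in Table 5 (scheme names are ours).

    A: (M+N) x (M+N) sparse adjacency matrix of the user-item graph.
    """
    deg = np.asarray(A.sum(axis=1)).flatten().astype(float)
    inv = np.zeros_like(deg)
    inv[deg > 0] = 1.0 / deg[deg > 0]
    D_inv, D_inv_sqrt = sp.diags(inv), sp.diags(np.sqrt(inv))
    return {
        "sqrt-both":  D_inv_sqrt @ A @ D_inv_sqrt,   # LightGCN (default symmetric sqrt norm)
        "sqrt-left":  D_inv_sqrt @ A,                # LightGCN-L
        "sqrt-right": A @ D_inv_sqrt,                # LightGCN-R
        "l1-left":    D_inv @ A,                     # LightGCN-L1-L (row-stochastic by in-degree)
        "l1-right":   A @ D_inv,                     # LightGCN-L1-R
        "l1-both":    D_inv @ A @ D_inv,             # LightGCN-L1
    }[scheme]
```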

4.4.3. Analysis of Embedding Smoothness

As analyzed in Section 3.2.3, a 2-layer LightGCN smooths a user's embedding based on the users that have overlap on her interacted items, and the smoothing strength between two users is measured by Equation (14). We speculate that such smoothing of embeddings is a key reason for LightGCN's effectiveness. To verify this, we first define the smoothness of user embeddings as:

(17) $S_U = \sum_{u=1}^{M}\sum_{v=1}^{M} c_{v\rightarrow u}\left(\frac{\mathbf{e}_u}{\|\mathbf{e}_u\|^{2}} - \frac{\mathbf{e}_v}{\|\mathbf{e}_v\|^{2}}\right)^{2},$

where the norm on embeddings is used to eliminate the impact of the embedding's scale. Similarly we can obtain the definition for item embeddings. Table 6 shows the smoothness of two models: matrix factorization (i.e., using $\mathbf{e}_u^{(0)}$ for model prediction) and the 2-layer LightGCN-single (i.e., using $\mathbf{e}_u^{(2)}$ for prediction). Note that the 2-layer LightGCN-single outperforms MF in recommendation accuracy by a large margin. As can be seen, the smoothness loss of LightGCN-single is much lower than that of MF. This indicates that by conducting light graph convolution, the embeddings become smoother and more suitable for recommendation.

Dataset Gowalla Yelp2018 Amazon-book
Smoothness of User Embeddings
MF 15449.3 16258.2 38034.2
LightGCN-single 12872.7 10091.7 32191.1
Smoothness of Item Embeddings
MF 12106.7 16632.1 28307.9
LightGCN-single 5829.0 6459.8 16866.0
Table 6. Smoothness loss of the embeddings learned by LightGCN and MF (the lower the smoother).
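
To make Equation (17) reproducible, here is a NumPy sketch of the user-side smoothness loss. It is quadratic in the number of users and assumes every user and item has at least one interaction, so it is meant only for toy-scale checks; the function name is ours.

```python
import numpy as np

def user_smoothness(E_user, R):
    """Smoothness loss of user embeddings (Equation (17)); lower means smoother.

    E_user: (M, d) embeddings used for prediction (e.g., MF's e_u^(0) or LightGCN-single's e_u^(2))
    R:      (M, N) binary interaction matrix, from which c_{v->u} of Equation (14) is built
    """
    deg_u = R.sum(axis=1).astype(float)                 # |N_u|
    item_pop = R.sum(axis=0).astype(float)              # |N_i|
    overlap = (R / item_pop) @ R.T                      # sum over shared items of 1/|N_i|
    c = overlap / np.sqrt(np.outer(deg_u, deg_u))       # Equation (14) for every (v, u) pair
    scaled = E_user / (np.linalg.norm(E_user, axis=1, keepdims=True) ** 2)   # scale removal, per Eq. (17)
    sq_dist = ((scaled[:, None, :] - scaled[None, :, :]) ** 2).sum(axis=-1)  # squared pairwise distances
    return float(np.sum(c * sq_dist))
```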

4.5. Hyper-parameter Studies

When applying LightGCN to a new dataset, besides the standard hyper-parameters such as the learning rate, the most important hyper-parameter to tune is the $L_2$ regularization coefficient $\lambda$. Here we investigate the performance change of LightGCN w.r.t. $\lambda$.

As shown in Figure 5, LightGCN is relatively insensitive to $\lambda$ — even when $\lambda$ is set to 0, LightGCN is better than NGCF, which additionally uses dropout to prevent overfitting (see footnote 6). This shows that LightGCN is less prone to overfitting — since the only trainable parameters in LightGCN are the ID embeddings of the 0-th layer, the whole model is easy to train and to regularize. The optimal $\lambda$ differs slightly across Yelp2018, Amazon-Book, and Gowalla (cf. Figure 5). When $\lambda$ becomes too large, the performance drops quickly, which indicates that too strong regularization negatively affects normal model training and is not encouraged.

Figure 5. Performance of 2-layer LightGCN w.r.t. different regularization coefficients $\lambda$ on Yelp2018 and Amazon-Book.

5. Related Work

5.1. Collaborative Filtering

Collaborative Filtering (CF) is a prevalent technique in modern recommender systems (Covington et al., 2016; Ying et al., 2018). One common paradigm of CF models is to parameterize users and items as embeddings, and learn the embedding parameters by reconstructing historical user-item interactions. For example, earlier CF models like matrix factorization (MF) (Koren et al., 2009; Rendle et al., 2009) project the ID of a user (or an item) into an embedding vector. Recent neural recommender models like NCF (He et al., 2017b) and LRML (Tay et al., 2018) use the same embedding component, while enhancing the interaction modeling with neural networks.

Beyond merely using ID information, another type of CF method considers historical items as pre-existing features of a user, towards better user representations. For example, FISM (Kabbur et al., 2013) and SVD++ (Koren, 2008) use the weighted average of the ID embeddings of historical items as the target user's embedding. Recently, researchers realized that historical items contribute differently to shaping personal interest. Towards this end, attention mechanisms have been introduced to capture the varying contributions, such as in ACF (Chen et al., 2017), NAIS (He et al., 2018), and DeepICF (Xue et al., 2019), which automatically learn the importance of each historical item. When revisiting historical interactions as a user-item bipartite graph, these performance improvements can be attributed to the encoding of the local neighborhood — one-hop neighbors — which improves the embedding learning.

5.2. Graph Methods for Recommendation

Another relevant research line exploits the user-item graph structure for recommendation. Prior efforts, such as ItemRank (Gori and Pucci, 2007) and BiRank (He et al., 2017a), use label propagation to directly propagate user preference scores over the graph, i.e., encouraging connected nodes to have similar labels. Recently emerged graph neural networks (GNNs) shine a light on modeling graph structure, especially high-hop neighbors, to guide the embedding learning (Kipf and Welling, 2017; Hamilton et al., 2017). Early studies define graph convolution in the spectral domain, such as via Laplacian eigen-decomposition (Bruna et al., 2014) and Chebyshev polynomials (Defferrard et al., 2016), which are computationally expensive. Later on, GraphSage (Hamilton et al., 2017) and GCN (Kipf and Welling, 2017) re-define graph convolution in the spatial domain, i.e., aggregating the embeddings of neighbors to refine the target node's embedding. Owing to its interpretability and efficiency, this formulation quickly became prevalent in GNNs and is widely used (Qiu et al., 2018; Feng et al., 2019). Motivated by the strength of graph convolution, recent efforts like NGCF (Wang et al., 2019c), GC-MC (van den Berg et al., 2018), and PinSage (Ying et al., 2018) adapt GCN to the user-item interaction graph, capturing CF signals in high-hop neighbors for recommendation.

It is worth mentioning that several recent efforts provide deep insights into GNNs (Li et al., 2018; Klicpera et al., 2019; Wu et al., 2019a), which inspired us to develop LightGCN. In particular, Wu et al. (Wu et al., 2019a) argue that much of GCN's complexity is unnecessary and develop a simplified GCN (SGCN) model by removing nonlinearities and collapsing multiple weight matrices into one. One main difference is that LightGCN and SGCN are developed for different tasks, so the rationality of model simplification is different. Specifically, SGCN is for node classification, performing simplification for model interpretability and efficiency. In contrast, LightGCN is for collaborative filtering (CF), where each node has an ID feature only. Thus, we do simplification for a stronger reason: nonlinearity and weight matrices are useless for CF, and even hurt model training. For node classification accuracy, SGCN is on par with (and sometimes weaker than) GCN. For CF accuracy, in contrast, LightGCN outperforms the GCN-based NGCF by a large margin (over 15% relative improvement).

6. Conclusion and Future Work

In this work, we argued that the design of GCNs for collaborative filtering is unnecessarily complicated, and performed empirical studies to justify this argument. We proposed LightGCN, which consists of two essential components — light graph convolution and layer combination. In light graph convolution, we discard feature transformation and nonlinear activation — two standard operations in GCNs that inevitably increase the training difficulty. In layer combination, we construct a node's final embedding as the weighted sum of its embeddings on all layers, which is proved to subsume the effect of self-connections and is helpful for controlling oversmoothing. We conducted experiments to demonstrate the strengths of LightGCN in being simple: it is easier to train, generalizes better, and is more effective.

We believe the insights of LightGCN are inspirational to future developments of recommender models. With the prevalence of linked graph data in real applications, graph-based models are becoming increasingly important in recommendation (Hamilton et al., 2017; Wang et al., 2018); by explicitly exploiting the relations among entities in the predictive model, they are advantageous over traditional supervised learning schemes like factorization machines (Rendle et al., 2011; He and Chua, 2017) that model the relations implicitly. For example, a recent trend is to exploit auxiliary information such as item knowledge graphs (Wang et al., 2019b, a), social networks (Wu et al., 2019b), and multimedia content (Yin et al., 2019) for recommendation, where GCNs have set up the new state-of-the-art. However, these models may suffer from issues similar to NGCF, since the user-item interaction graph is also modeled by the same neural operations that may be unnecessary. We plan to explore the idea of LightGCN in these models. Another future direction is to personalize the layer combination weights $\alpha_k$, so as to enable adaptive-order smoothing for different users (e.g., sparse users may require more signal from higher-order neighbors while active users require less). Lastly, we will further explore the strengths of LightGCN's simplicity, studying whether a closed-form solution exists for particular loss functions and streamlining it for online industrial scenarios.

Footnotes

  3. https://github.com/xiangwang1223/neural_graph_collaborative_filtering
  4. The weight matrix in SGCN can be absorbed into the 0-th layer embedding parameters, thus it is omitted in the analysis.
  5. https://github.com/dawenl/vae_cf
  6. Note that Gowalla shows the same trend with Amazon-Book, so its curves are not shown to better highlight the trend of Yelp2018 and Amazon-Book.

References

  1. Spectral networks and locally connected networks on graphs. In ICLR.
  2. Collaborative similarity embedding for recommender systems. In WWW, pp. 2637–2643.
  3. Attentive collaborative filtering: multimedia recommendation with item- and component-level attention. In SIGIR, pp. 335–344.
  4. LambdaOpt: learn to regularize recommender models in finer levels. In KDD, pp. 978–986.
  5. Deep neural networks for youtube recommendations. In RecSys, pp. 191–198.
  6. Convolutional neural networks on graphs with fast localized spectral filtering. In NeurIPS, pp. 3837–3845.
  7. Reinforced negative sampling for recommendation with exposure data. In IJCAI, pp. 2230–2236.
  8. Collaborative memory network for recommendation systems. In SIGIR, pp. 515–524.
  9. Temporal relational ranking for stock prediction. TOIS 37 (2), pp. 27:1–27:30.
  10. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, pp. 249–256.
  11. ItemRank: a random-walk based scoring algorithm for recommender engines. In IJCAI, pp. 2766–2771.
  12. Inductive representation learning on large graphs. In NeurIPS, pp. 1025–1035.
  13. Topic-sensitive pagerank. In WWW, pp. 517–526.
  14. Deep residual learning for image recognition. In CVPR, pp. 770–778.
  15. Neural factorization machines for sparse predictive analytics. In SIGIR, pp. 355–364.
  16. BiRank: towards ranking on bipartite graphs. TKDE 29 (1), pp. 57–71.
  17. NAIS: neural attentive item similarity model for recommendation. TKDE 30 (12), pp. 2354–2366.
  18. Neural collaborative filtering. In WWW, pp. 173–182.
  19. FISM: factored item similarity models for top-n recommender systems. In KDD, pp. 659–667.
  20. Adam: a method for stochastic optimization. In ICLR.
  21. Semi-supervised classification with graph convolutional networks. In ICLR.
  22. Predict then propagate: graph neural networks meet personalized pagerank. In ICLR.
  23. Matrix factorization techniques for recommender systems. IEEE Computer 42 (8), pp. 30–37.
  24. Factorization meets the neighborhood: a multifaceted collaborative filtering model. In KDD, pp. 426–434.
  25. Deeper insights into graph convolutional networks for semi-supervised learning. In AAAI, pp. 3538–3545.
  26. Variational autoencoders for collaborative filtering. In WWW, pp. 689–698.
  27. DeepInf: social influence prediction with deep learning. In KDD, pp. 2110–2119.
  28. Collaborative filtering with graph information: consistency and scalable methods. In NIPS, pp. 2107–2115.
  29. BPR: bayesian personalized ranking from implicit feedback. In UAI, pp. 452–461.
  30. Improving pairwise learning for item recommendation from implicit feedback. In WSDM, pp. 273–282.
  31. Fast context-aware recommendations with factorization machines. In SIGIR, pp. 635–644.
  32. Latent relational metric learning via memory-based attention for collaborative ranking. In WWW, pp. 729–739.
  33. Graph convolutional matrix completion. In KDD Workshop on Deep Learning Day.
  34. Graph attention networks. In ICLR.
  35. Knowledge graph convolutional networks for recommender systems. In WWW, pp. 3307–3313.
  36. Billion-scale commodity embedding for e-commerce recommendation in alibaba. In KDD, pp. 839–848.
  37. Unifying user-based and item-based collaborative filtering approaches by similarity fusion. In SIGIR, pp. 501–508.
  38. KGAT: knowledge graph attention network for recommendation. In KDD, pp. 950–958.
  39. Neural graph collaborative filtering. In SIGIR, pp. 165–174.
  40. Simplifying graph convolutional networks. In ICML, pp. 6861–6871.
  41. A neural influence diffusion model for social recommendation. In SIGIR, pp. 235–244.
  42. How powerful are graph neural networks? In ICLR.
  43. Deep item-based collaborative filtering for top-n recommendation. TOIS 37 (3), pp. 33:1–33:25.
  44. HOP-rec: high-order proximity for implicit recommendation. In RecSys, pp. 140–144.
  45. MMGCN: multimodal graph convolution network for personalized recommendation of micro-video. In MM.
  46. Graph convolutional neural networks for web-scale recommender systems. In KDD (Data Science track), pp. 974–983.