MONET: Debiasing Graph Embeddings via the Metadata-Orthogonal Training Unit


John Palowitch
Google Research
San Francisco, CA 94105
palowitch@google.com
Bryan Perozzi
Google Research
New York, NY 10011
bperozzi@acm.org
Abstract

Are Graph Neural Networks (GNNs) fair? In many real-world graphs, the formation of edges is related to certain node attributes (e.g. gender, community, reputation). In this case, standard GNNs using these edges will be biased by this information, as it is encoded in the structure of the adjacency matrix itself. In this paper, we show that when metadata is correlated with the formation of node neighborhoods, unsupervised node embedding dimensions learn this metadata. This bias implies an inability to control for important covariates in real-world applications, such as recommendation systems.

To solve these issues, we introduce the Metadata-Orthogonal Node Embedding Training (MONET) unit, a general model for debiasing embeddings of nodes in a graph. MONET achieves this by ensuring that the node embeddings are trained on a hyperplane orthogonal to that of the node metadata. This effectively organizes unstructured embedding dimensions into an interpretable topology-only, metadata-only division with no linear interactions. We illustrate the effectiveness of MONET through experiments on a variety of real-world graphs, which show that our method can learn and remove the effect of arbitrary covariates in tasks such as preventing the leakage of political party affiliation in a blog network, and thwarting the gaming of embedding-based recommendation systems.

1 Introduction

Graph embeddings – continuous, low-dimensional vector representations of nodes – have been eminently useful in network visualization, node classification, link prediction, and many other graph learning tasks [cui2018survey]. While graph embeddings can be estimated directly by unsupervised algorithms using the graph’s structure [e.g. perozzi2014deepwalk, tang2015line, grover2016node2vec, qiu2018network], there is often additional (non-relational) information available for each node in the graph. This information, frequently referred to as node attributes or node metadata, can contain information that is useful for prediction tasks including demographic, geo-spatial, and/or textual features.

The interplay between a node’s metadata and edges is a rich and active area of research. Interestingly, in a number of cases, this metadata can be measurably related to a graph’s structure [peel2017ground], and in some instances there may be a causal relationship (the node’s attributes influence the formation of edges). As such, metadata can enhance graph learning models [yang2015network, newman2016structure], and conversely, graphs can be used as regularizers in supervised and semi-supervised models of node features [yang2016revisiting, defferrard2016convolutional]. Furthermore, metadata are commonly used as evaluation data for graph embeddings [chen2018tutorial]. For example, node embeddings trained on a Flickr user graph were shown to predict user-specified Flickr “interests" [perozzi2014deepwalk]. This is presumably because users (as nodes) in the Flickr graph tend to follow users with similar interests, which illustrates a potential causal connection between node topology and node metadata.

However, despite the usefulness and prevalence of metadata in graph learning, there are instances where it is desirable to design a system that avoids the effects of a particular kind of sensitive data. For instance, the designers of a recommendation system may want to make recommendations independent of a user's demographic information or location.

At first glance, this may seem like an artificial dilemma: surely one could avoid the problem by simply not adding such sensitive attributes to the model. However, such an approach (ignoring a sensitive attribute) does not control for correlations that may exist between the sensitive metadata and the edges of a node. In other words, if the edges of the graph are correlated with sensitive metadata, then any algorithm which does not explicitly model and remove this correlation will be biased as a result of it. Surprisingly, almost all of the existing work in the area [yang2015network, zhang2016homophily] has ignored this important realization. (While preparing this manuscript, we became aware of a recent independent result [bose2019compositional] in this area. In contrast to that work, we use a substantially different methodology which offers guarantees about the debiasing process.)

In this work, we seek to refocus the discussion about graph learning with node metadata. To this end, we propose a novel, general technique for extending graph representations with metadata embedding dimensions while debiasing the remaining (topology) dimensions. Specifically, our contributions are the following:

  1. The Metadata-Orthogonal Node Embedding Training (MONET) unit, a novel GNN algorithm which jointly embeds graph topology and graph metadata while enforcing independence between the two embedding spaces.

  2. Analysis which proves that a naive approach (adding metadata embeddings without MONET) leaks metadata information into topology embeddings, and that the MONET unit does not.

  3. Experimental results on real-world graphs which show that MONET can successfully "debias" topology embeddings while relegating metadata information to separate metadata embeddings.

2 Preliminaries

Early graph embedding methods involved dimensionality reduction techniques like multidimensional scaling and singular value decomposition [chen2018tutorial]. In this paper, we use graph neural networks trained on random walks, similarly to DeepWalk [perozzi2014deepwalk]. DeepWalk and many subsequent methods first generate a sequence of random walks from the graph to create a "corpus" of node "sentences", which are then modeled via word embedding techniques (e.g. word2vec [mikolov2013distributed] or GloVe [pennington2014glove]) to learn low-dimensional representations that preserve the observed co-occurrence similarity.
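As a concrete reference for this pipeline, the following is a minimal Python sketch of the corpus-generation step. The function names, the adjacency-list input format, and the GloVe-style 1/distance co-occurrence weighting are our own illustrative assumptions, not an excerpt of the implementation used in this paper; the default walk counts and lengths mirror the settings of Section 4.1.

    import random
    from collections import defaultdict

    def random_walks(adj, num_walks=80, walk_len=40, seed=0):
        # adj: dict mapping node -> list of neighbor nodes
        rng = random.Random(seed)
        walks = []
        for _ in range(num_walks):
            for start in adj:
                walk = [start]
                while len(walk) < walk_len:
                    nbrs = adj[walk[-1]]
                    if not nbrs:
                        break
                    walk.append(rng.choice(nbrs))
                walks.append(walk)
        return walks

    def cooccurrence(walks, window=10):
        # Walk-distance-weighted co-occurrence counts X[(i, j)], weighted by 1/distance.
        X = defaultdict(float)
        for walk in walks:
            for pos, i in enumerate(walk):
                for offset in range(1, window + 1):
                    if pos + offset < len(walk):
                        j = walk[pos + offset]
                        X[(i, j)] += 1.0 / offset
                        X[(j, i)] += 1.0 / offset
        return X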

Let W ∈ R^{n×d} be a d-dimensional graph embedding matrix which aims to preserve the low-dimensional structure of a graph G with n nodes (d ≪ n). Rows of W correspond to nodes, and node pairs with large dot-products should be structurally or topologically close in the graph. As a concrete example, in this paper we consider the debiasing of a recently proposed graph embedding method based on the GloVe model [brochier2019global]. Its training objective is:

J = Σ_{i,j} f(X_ij) (w_i^⊤ w̃_j + b_i + b̃_j − log X_ij)²    (1)

where W, W̃ ∈ R^{n×d} are the "center" and "context" embeddings, b, b̃ ∈ R^n are the biases, X_ij is the walk-distance-weighted context co-occurrence count for nodes i and j, and f is the loss smoothing function [pennington2014glove]. We use the GloVe model throughout to demonstrate topology/metadata embeddings and metadata-orthogonal training, though the MONET unit we propose is broadly generalizable.
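For concreteness, a direct NumPy transcription of Eq. (1) might look as follows; the smoothing constants x_max = 100 and exponent 0.75 are the defaults suggested in [pennington2014glove], and the function name is ours.

    import numpy as np

    def glove_loss(W, W_tilde, b, b_tilde, pairs, x_max=100.0, exponent=0.75):
        # pairs: iterable of (i, j, x_ij) with x_ij > 0
        total = 0.0
        for i, j, x in pairs:
            f = min((x / x_max) ** exponent, 1.0)  # the smoothing function f
            err = W[i] @ W_tilde[j] + b[i] + b_tilde[j] - np.log(x)
            total += f * err ** 2
        return total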

Notation. In this paper, given a matrix A and an index i, a_i denotes the i-th row vector of A. Column indices will not be used. 0_{p×q} denotes the p×q zero matrix, and ‖·‖_F denotes the Frobenius norm.

3 Metadata Embeddings and Orthogonal Training

In this section we present MONET, our proposed method for separating and controlling the effects of metadata on topology embeddings. First, we begin by outlining the straightforward extension of traditional embedding models with metadata in Section 3.1. Next, in Section 3.2, we prove that such a simple model will leak information from the metadata into the topology (structural) embeddings. Then, in Section 3.3, we present MONET, our proposed approach for training embeddings of a graph's structure which are not correlated with metadata. Finally, we conclude with some analysis of MONET in Section 3.4.

3.1 Jointly Modeling Metadata & Topology

A natural first approach to modeling the effects of metadata on the graph is to explicitly include the node metadata as part of a node embedding model. For instance, to extend Eq. (1), in addition to W and W̃ (the "topology embeddings"), we can consider the node metadata directly (M ∈ R^{n×m}, where row vector m_i is the metadata for node i). We then can define metadata embeddings Z := MT, Z̃ := MT̃, where T, T̃ ∈ R^{m×d_m} are trainable transformations, and propose the concatenations [W, Z] and [W̃, Z̃] as full-graph representations. The GloVe loss with metadata embeddings (a model we denote GloVe_meta) is:

J_M = Σ_{i,j} f(X_ij) (w_i^⊤ w̃_j + z_i^⊤ z̃_j + b_i + b̃_j − log X_ij)²    (2)

While in this paper we demonstrate metadata embeddings within the GloVe model, they can be incorporated in any dot-product-based graph neural network. For instance, the well-known DeepWalk [perozzi2014deepwalk] loss, which is based on word2vec [mikolov2013distributed], would incorporate metadata embeddings as follows:

J_DW = − Σ_{(i,j)∈P} [ log σ(w_i^⊤ w̃_j + z_i^⊤ z̃_j) + Σ_{k∈N_i} log σ(−w_i^⊤ w̃_k − z_i^⊤ z̃_k) ]    (3)

Above, σ is the logistic sigmoid, P is the set of context pairs generated from the random walks, and N_i is a set of negative samples associated with node i. For GloVe, DeepWalk, and many other GNNs, this approach augments the overall graph representation by concatenating metadata-learned dimensions.
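The following sketch makes the concatenation construction concrete: the score used inside Eq. (2), and inside the sigmoid arguments of Eq. (3), decomposes additively into a topology term and a metadata term. All dimensions and names below are illustrative only.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, d, d_m = 100, 4, 16, 2  # nodes, raw metadata dims, topology dims, metadata embedding dims

    M = rng.random((n, m))                                             # raw node metadata
    W, W_tilde = rng.normal(size=(n, d)), rng.normal(size=(n, d))      # topology embeddings
    T, T_tilde = rng.normal(size=(m, d_m)), rng.normal(size=(m, d_m))  # trainable transformations

    Z, Z_tilde = M @ T, M @ T_tilde  # metadata embeddings

    # The dot product of the concatenations [w_i, z_i] and [w~_j, z~_j] splits additively:
    i, j = 3, 7
    score = W[i] @ W_tilde[j] + Z[i] @ Z_tilde[j]  # the score modeled in Eqs. (2) and (3)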

However, this naïve approach does not guarantee that the topology embeddings converge to be statistically independent of the metadata embeddings. Suppose that the metadata (like demographic information) are indeed associated with the formation of links in the graph. In this case, any algorithm which does not explicitly model and remove the association will be biased as a result of it. In the next section we formalize this concept, which we call metadata leakage.

3.2 Metadata Leakage in Graph Neural Networks

Here, we formally define metadata leakage for general topology and metadata embeddings, and show how it can occur even in embedding models with separate metadata embeddings. All proofs appear in the Appendix.

Definition 1.

The metadata leakage of metadata embeddings Z ∈ R^{n×d_m} into topology embeddings W ∈ R^{n×d} is defined MLeak(Z, W) := ‖Z^⊤ W‖_F. We say that there is no metadata leakage if and only if MLeak(Z, W) = 0.
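Definition 1 is directly computable, so a one-line check such as the sketch below (the function name is ours) can be used to monitor leakage during training:

    import numpy as np

    def metadata_leakage(Z, W):
        # Definition 1: MLeak(Z, W) = ||Z^T W||_F; zero iff every metadata
        # embedding dimension is orthogonal to every topology dimension.
        return np.linalg.norm(Z.T @ W, ord="fro")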

Without a more nuanced approach, metadata leakage can occur even in embedding models that explicitly include the metadata, like Eqs. (2) and (3). To demonstrate this, we consider for simplicity a reduced metadata-aware GloVe loss with a single topology embedding matrix W and a single metadata transformation parameter T. With Z := MT, the reduced loss is:

J_r = Σ_{i,j} f(X_ij) (w_i^⊤ w_j + z_i^⊤ z_j − log X_ij)²    (4)

We now show that under a random update of the model in Eq. (4), the expected metadata leakage is non-zero. Specifically, let (i, j) be a node pair sampled from the co-occurrences X, and define ΔW as the incurred Stochastic Gradient Descent update W ← W + ΔW. Suppose there is a "ground-truth" metadata transformation T*, and define ground-truth metadata embeddings Z* := MT*, which represent the "true" dimensions of the metadata effect on the co-occurrences X. Define f_ij := f(X_ij) and E_ij := log X_ij − w_i^⊤ w_j − z*_i^⊤ z*_j, the residual after accounting for the ground-truth metadata effect. With expectations taken with respect to the sampling of a pair (i, j) for Stochastic Gradient Descent, define Σ := E[f_ij (z*_i^⊤ z*_j − z_i^⊤ z_j) z_i w_j^⊤]. Define Σ̃ similarly, with the roles of i and j exchanged. Then our main Theorem is as follows:

Theorem 1.

Assume that the weighted residuals are conditionally centered, E[f_ij E_ij | w_i, w_j, m_i, m_j] = 0 for every pair (i, j), and that the SGD learning rate α is positive. Suppose for some fixed T we have Z = MT. Let (i, j) be a randomly sampled co-occurrence pair and ΔW the incurred update. Then if Σ + Σ̃ ≠ 0_{d_m×d}, we have

E[MLeak(Z, ΔW)] ≥ 2α ‖Σ + Σ̃‖_F > 0.    (5)

Importantly, the learning rate α and the smoothing function f are neural network hyperparameters, so we give a useful Corollary:

Corollary 1.

Under the assumptions of Theorem 1, E[MLeak(Z, ΔW)] remains bounded away from zero as long as the learning rate α is positive.

Note that under reasonable GNN initialization schemes, W and T are random perturbations. Thus, Corollary 1 implies the surprising result that incorporating feed-forward metadata embeddings is not sufficient to prevent metadata leakage in practical settings.

3.3 MONET: Metadata-Orthogonal Node Embedding Training

Here, we introduce the Metadata-Orthogonal Node Embedding Training (MONET) unit for training joint topology-metadata graph representations without metadata leakage. MONET explicitly prevents correlation between topology and metadata embeddings by using the Singular Value Decomposition (SVD) of the metadata embeddings Z to orthogonalize updates to the topology embeddings W during training.

MONET. The MONET unit is a two-step algorithm applied to the training of a topology embedding in a neural network, and is detailed in Algorithm 1. The input to a MONET unit is a metadata embedding Z and a target topology embedding W for debiasing. Let U be the matrix of left-singular vectors of Z, and define the projection P_Z := I_n − UU^⊤. In the forward pass procedure, debiased topology weights W_⊥ := P_Z W are obtained by applying the projection, and W_⊥ is used in place of W in subsequent GNN layers. In the backward pass, MONET also debiases the backpropagation update to the topology embedding, ΔW, using ΔW_⊥ := P_Z ΔW. Figure 1 illustrates a geometric interpretation of the MONET algorithm.

Procedure Forward Pass debiasing
Data: topology embeddings W, metadata embeddings Z
Compute left-singular vectors U of Z and projection P_Z := I_n − UU^⊤
Compute orthogonal topology embedding W_⊥ := P_Z W
Return debiased graph representation [W_⊥, Z]
Procedure Backward Pass debiasing
Data: topology embedding update ΔW
Compute orthogonal topology embedding update ΔW_⊥ := P_Z ΔW
Apply update W ← W + ΔW_⊥
Return debiased topology embedding W
Algorithm 1: MONET Unit Training Step
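A minimal NumPy sketch of Algorithm 1 is given below. It assumes dense embedding matrices, uses the thin SVD, and avoids forming P_Z explicitly (a point we return to in Section 3.4); the function names are ours.

    import numpy as np

    def monet_forward(W, Z):
        # Forward pass of Algorithm 1: project W onto the hyperplane orthogonal to Z.
        U, _, _ = np.linalg.svd(Z, full_matrices=False)  # left-singular vectors of Z
        W_perp = W - U @ (U.T @ W)                       # P_Z W, without forming P_Z = I - U U^T
        return W_perp, U

    def monet_backward(W, delta_W, U):
        # Backward pass of Algorithm 1: orthogonalize the proposed update as well.
        delta_W_perp = delta_W - U @ (U.T @ delta_W)
        return W + delta_W_perp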

Straightforward properties of the SVD show that MONET directly prevents metadata leakage:

Theorem 2.

Using Algorithm 1, MLeak(Z, W_⊥) = 0 and MLeak(Z, ΔW_⊥) = 0.

We note that in this work we have only considered linear metadata leakage; debiasing nonlinear topology/metadata associations is an area of future work.

Implementation (MONET_G). We demonstrate MONET in our experiments by applying Algorithm 1 to Eq. (2). We denote this model MONET_G, and create it as follows. We orthogonalize the input and output topology embeddings W and W̃ against the summed metadata embeddings Z + Z̃. By linearity, this implies (Z + Z̃)-orthogonal training of the summed topology representation W_⊥ + W̃_⊥. We note that working with the sums of center and context embeddings is the standard way to combine these matrices [pennington2014glove]. Figure 2 shows an illustration of MONET_G.
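A hedged TensorFlow 2 sketch of this construction is shown below. Treating the singular-vector basis as a constant via tf.stop_gradient (the dotted lines in Figure 2) is our reading of how the backward-pass debiasing of Algorithm 1 can be realized inside automatic differentiation, not a verbatim excerpt of our training code.

    import tensorflow as tf

    def monet_g_debias(W, W_tilde, Z_sum):
        # Z_sum = Z + Z~, the combined metadata representation of Figure 2.
        _, U, _ = tf.linalg.svd(Z_sum)  # tf.linalg.svd returns (s, u, v)
        U = tf.stop_gradient(U)         # dotted lines: no gradients flow through the basis
        W_perp = W - U @ (tf.transpose(U) @ W)
        W_tilde_perp = W_tilde - U @ (tf.transpose(U) @ W_tilde)
        return W_perp, W_tilde_perp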

3.4 Analysis

Here we offer some brief remarks on the algorithmic complexity of MONET and the interpretation of its parameters.

Algorithmic Complexity. The bottleneck of MONET occurs in the SVD computation and the orthogonalization. In our setting, the SVD of Z ∈ R^{n×d_m} is O(n·d_m²) [trefethen1997numerical]. The matrix P_Z need not be formed explicitly to perform the orthogonalization steps, as P_Z W = W − U(U^⊤W), and the right-hand quantity is O(n·d_m·d) to compute. Hence the general complexity of the MONET unit is O(n·d_m·(d_m + d)).

Figure 1: Geometric interpretation of MONET. Both prediction and training for W occur on a hyperplane orthogonal to Z. In the forward pass, W is projected onto the Z-orthogonal hyperplane. When an update ΔW is proposed, it too is projected, resulting in the best metadata-orthogonal update ΔW_⊥. This allows W_⊥ to explore the space of unknown latent structure without bias from Z.

Figure 2: Illustration of MONET_G. W and W̃ are topology embeddings. The MONET unit adds a feed-forward transformation of the metadata M, resulting in metadata embeddings Z = MT and Z̃ = MT̃. Z + Z̃ gives the combined metadata representation, used to debias W and W̃ via the projection of Algorithm 1. Dotted lines indicate stopped gradient flow during backpropagation.

Metadata Parameter Interpretation. The terms in the sums of the losses for GloVe models with metadata (GloVe_meta, i.e. Eq. (2), and MONET_G) involve the dot product z_i^⊤ z̃_j = m_i^⊤ (T T̃^⊤) m_j. That expansion shows that the matrix T T̃^⊤ contains all pairwise metadata dimension relationships. In other words, T T̃^⊤ gives the direction and magnitude of the raw metadata effect on the log co-occurrence, and is therefore a way to measure the extent to which the model has captured metadata information. We will refer to this interpretation in the experiments that follow. An important experiment will show that applying the MONET algorithm increases the magnitude of the entries of T T̃^⊤.
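This interpretation can be read off numerically, as in the short sketch below; the helper name and the transformation values are hypothetical.

    import numpy as np

    def metadata_effect(T, T_tilde):
        # Entry (a, b) of T @ T_tilde.T is the learned effect of raw metadata
        # dimension pair (a, b) on the modeled log co-occurrence, since
        # z_i . z~_j = m_i^T (T @ T_tilde.T) m_j.
        return T @ T_tilde.T

    # Hypothetical trained transformations for two raw metadata dimensions:
    T = np.array([[0.9, -0.1], [0.2, 0.8]])
    T_tilde = np.array([[1.1, 0.0], [-0.3, 0.7]])
    print(np.linalg.norm(metadata_effect(T, T_tilde), ord="fro"))  # captured metadata "mass"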

4 Metadata Debiasing Experiments

Here we empirically demonstrate Theorems 1 and 2 by confirming the following hypotheses:

  1. H1. The MONET unit can remove leakage of metadata information from topology embeddings, so that the topology embeddings cannot predict the metadata.

  2. H2. The MONET unit can make recommender systems more robust to abuse by removing malicious user directions from rating graphs.

For all embedding models, we use the sum of center and context topology embeddings as the graph representation for task evaluation. Note that some standard baselines (e.g. DeepWalk) do not incorporate metadata and therefore train only topology embeddings. All GloVe-based models were trained with TensorFlow [abadi2016tensorflow] using the AdaGrad optimizer [duchi2011adaptive] with initial learning rate 0.05; DeepWalk models were trained using the gensim software [rehurek_lrec].

4.1 Quantitative Experiment: Political Blogs Network

To address H1, illustrating Theorem 1 and the effect of MONET debiasing, we study the effect of political ideology on embeddings of a blogger network [adamic2005political]. The political blog network (available within the Graph-Tool software [peixoto_graph-tool_2014]) has 1,107 nodes corresponding to blog websites, 19,034 hyperlink edges between the blogs (after converting the graph to be undirected), and two clearly defined, equally sized communities of liberal and conservative bloggers.

Design and Methods. In this experiment, all graph neural network models were trained for 5 iterations over 80 random walks per node of length 40, with context window size 10. Topology embeddings had dimension 16, and metadata embeddings had dimension 2. We measure embedding bias by the Macro-F1 score of a linear SVM predicting political party from the embeddings, using a LIBLINEAR implementation [chang2011libsvm]. For each embedding set, we compute the mean and standard deviation of Macro-F1 over 10 independent classification repetitions, each trained using half of the node labels sampled at random. To assess metadata information leakage, we also track the metadata dimension importance matrix T T̃^⊤, recalling its interpretation from Section 3.
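The following scikit-learn sketch captures our reading of this bias-measurement protocol; scikit-learn's LinearSVC wraps LIBLINEAR, and the helper name and split logic are our own simplifications.

    import numpy as np
    from sklearn.metrics import f1_score
    from sklearn.svm import LinearSVC  # LIBLINEAR-backed linear SVM

    def embedding_bias(emb, labels, reps=10, seed=0):
        # Mean/std Macro-F1 over `reps` random half/half splits of the node labels.
        rng = np.random.default_rng(seed)
        n = len(labels)
        scores = []
        for _ in range(reps):
            train = rng.choice(n, size=n // 2, replace=False)
            test = np.setdiff1d(np.arange(n), train)
            clf = LinearSVC().fit(emb[train], labels[train])
            scores.append(f1_score(labels[test], clf.predict(emb[test]), average="macro"))
        return np.mean(scores), np.std(scores)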

Results. Table 1 shows that the baselines DeepWalk and GloVe are highly effective at predicting political party, and therefore biased. This is unsurprising, as these methods are trained without metadata information, and were originally intended to encode low-dimensional structure like that present in this data set. The bias in DeepWalk and GloVe embeddings is further seen in their metadata leakage values, computed using political party one-hot vectors as metadata embeddings.

Considering the embedding models with metadata embeddings, we find that, interestingly, GloVe_meta's topology embeddings are still able to predict political party with 88.3% Macro-F1. Also, as predicted by Corollary 1, GloVe_meta's metadata leakage remains non-zero. This shows that simply concatenating metadata embeddings is not sufficient to isolate the metadata effect. In contrast, MONET_G achieves near-chance Macro-F1 and no metadata leakage (under machine precision), demonstrating that on this data, the MONET unit is necessary to debias the blog embeddings from political party. This contrast is seen in two other ways. First, there is a noticeable increase in the magnitude of T T̃^⊤ when MONET is used, implying that GloVe_meta's metadata embeddings are not capturing all available metadata information. Second, as seen in Fig. 3, the 2-dimensional PCA plots of the GloVe_meta embeddings still show political party separation, whereas the MONET_G PCA dimensions reveal strong mixing.

Model | Macro-F1 (mean ± std) | ‖T T̃^⊤‖_F (mean ± std) | MLeak(Z, W)
DeepWalk | 95.59% | N/A | –
GloVe | 95.94% | N/A | –
GloVe_meta | 88.33% | – | –
MONET_G | 49.30% | – | ≈ 0 (machine precision)

Table 1: Macro-F1 scores from political blog network classifications using graph topology embeddings only. MONET is successful in removing all metadata information from the topology embeddings: the links in the graph are no longer an effective predictor of political party. Comparison of the metadata transformation product T T̃^⊤ between GloVe_meta and MONET_G shows that MONET allows for considerably more metadata information learning. Finally, only MONET removes metadata leakage to precision error (recall that MLeak is a Frobenius norm).
Figure 3: PCA of political blog graph embeddings. (a): Party separation is clearly visible in standard GloVe embeddings. (b): Party separation is reduced when GloVe_meta captures some metadata information. (c): Party separation disappears with MONET_G orthogonalized training.
Model | Manipulated Items in Top-20 (mean ± std dev) | Embedding Distance Correlation w/ GloVe
DeepWalk | – | –
GloVe | – | –
NLP Debiasing [schmidt2015rejecting, bolukbasi2016man] (sum) | – | –
NLP Debiasing [schmidt2015rejecting, bolukbasi2016man] (max) | – | –
MONET_G | 1.2 | 0.83

Table 2: Results from the shilling attack experiment. Attackers attempt to insert 10 items into the top-20 recommendations of a target video. The results show that MONET best mitigates the effect of an attack under incomplete information. We note that there is an implicit trade-off between debiasing and maintaining correlation with the original (biased) embeddings.

4.2 Experiment 2: Thwarting Attacks on Graph-based Recommendation Systems

In this experiment we address H2, investigating the effectiveness of MONET in defending against a shilling attack [Chirita:2005:PSA:1097047.1097061] on graph-embedding-based recommender systems [ying2018graph]. In a shilling attack, a number of users act together to artificially increase the likelihood that particular influenced items will be recommended alongside a particular target item.

Data. In a single repetition of this experiment, we inject an artificial shilling attack into the MovieLens 100k dataset (available at http://files.grouplens.org/datasets/movielens/ml-100k/). The raw data is represented as a bipartite graph with 943 users, 1,682 items, and a total of 100,000 ratings (edges). Each user has rated at least 20 items. At random, we sample 10 items into an influence set I, and a target item t to be attacked. We take a random sample of 5% of the existing users to be the set of attackers, A. We then create a new graph G′ which, in addition to all the existing ratings, contains new ratings from each attacker to each item in I as well as the target item t. (Note that this corresponds to several varieties of attacker behavior, including both incentivizing formerly-good users and account takeover.)
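A schematic of this attack injection is sketched below; the edge-list data structure and function name are our own simplifications of the procedure just described.

    import random

    def inject_shilling_attack(ratings, users, items, n_influence=10, attacker_frac=0.05, seed=0):
        # ratings: list of (user, item) edges in the bipartite rating graph
        rng = random.Random(seed)
        influence = rng.sample(items, n_influence)
        target = rng.choice([v for v in items if v not in influence])
        attackers = rng.sample(users, max(1, int(attacker_frac * len(users))))
        attack_edges = [(u, v) for u in attackers for v in influence + [target]]
        return ratings + attack_edges, influence, target, attackers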

Design and Methods. For each embedding method, we perform random walks through the new bipartite graph G′. As we wish to study item recommendation, we simply remove user nodes from the walks each time they are visited (so the walks contain only pairwise co-occurrence information over items). For any given network embedding, we measure its bias by the number of influence items from I in the top-20 embedding-nearest-neighbor list of t. As metadata, we allow MONET_G to know the per-movie attacker rating count for each attacked movie. However, to better demonstrate real-world performance, we only allow a randomly sampled 50% of the attackers from the original 5% sample to be "known" when constructing these metadata. As non-debiasing baselines, we compare against DeepWalk and GloVe. As debiasing baselines, we applied a generalized correlation-removal framework developed for removing word embedding bias [schmidt2015rejecting, bolukbasi2016man]. Specifically, we tried two approaches to "debias" the GloVe embedding of the MovieLens graph: as the analogue of the "gender" embedding direction, we tried both (a) the most-attacked movie vector and (b) the sum of attacked movie vectors. All methods use 128-dimensional topology embeddings and are trained on 100 random walks per node, each walk of length 5.
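The bias metric itself reduces to a nearest-neighbor count over item embeddings, as in the following sketch; cosine similarity is our illustrative choice of embedding distance.

    import numpy as np

    def attack_bias(emb, target, influence, k=20):
        # Count influence items among the top-k nearest neighbors of the target item.
        norms = np.linalg.norm(emb, axis=1) * np.linalg.norm(emb[target]) + 1e-12
        sims = (emb @ emb[target]) / norms  # cosine similarity to the target
        sims[target] = -np.inf              # exclude the target itself
        top_k = np.argsort(-sims)[:k]
        return len(set(top_k.tolist()) & set(influence))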

Results. As seen in Table 2, the topology embeddings from MONET_G are the least biased by a large margin, letting on average only 1.2 influence items into the top-20 neighbors of t. Interestingly, this behavior occurs even though the majority of observed co-occurrences for the algorithm had nothing to do with the attack in question, and only the known 50% of attackers were used to construct the metadata. In contrast, all baselines (including those that explicitly model the attacker metadata) left around half or more of the attacked items in the top-20 list. To measure the extent to which debiased embeddings retain the original recommendation signal, we compute the pairwise embedding distances for each method, and compute their Pearson correlation with the standard GloVe embedding distances. We find that MONET embedding distances achieve high correlation (0.83) with the original distances, showing that with MONET it is possible to nearly nullify a shilling attack while preserving most of the signal from the true, un-attacked ratings. We note that the max-attack-embedding baseline achieves a higher embedding distance correlation, but this method let many more attacked items (on average) into higher ranks. This reveals a trade-off between embedding debiasing and prediction efficacy which has also been observed in other contexts [ying2018graph].

5 Related Work

Though graph learning is an immense field, only a minority of unsupervised graph embedding techniques involve graph metadata. To our knowledge, none of these techniques involve either metadata orthogonalization or the capacity to learn arbitrary metadata transformations. [zhu2007combining] is a matrix factorization approach which uses a shared node embedding matrix to factor both the graph adjacencies and the raw metadata in a joint loss, with a tunable parameter to control the influence of the metadata loss. Similarly, [li2017ppne] pre-computes a metadata similarity matrix and trains shared center-context embedding matrices on the metadata similarities and random-walk similarities. In contrast, we learn the direction and effect of metadata as neural network parameters, and we separate those parameters into unique embedding dimensions. [yang2015network] and [zhang2016homophily] are matrix factorization approaches which factor an approximation of the co-occurrence matrix into equally-sized metadata and topology embeddings, and were built mainly for text metadata. Their approaches enforce metric space similarity and dimensional homogeneity between metadata and topology representations, restrictions that we do not rely on and that are ill-suited to settings with multiple types of arbitrarily-sized metadata. [guo2018enhancing] constructs random walks that traverse freely between the original graph and the metadata, an approach which runs counter to our goal of separating out the effects of metadata on graph adjacencies. [newman2016structure] introduces a version of the stochastic block model with metadata priors, and shows that the estimated posteriors yield insight into the influence of metadata on the graph. However, this model estimates a community partition and in/out-community probabilities; it does not yield embeddings of either the node topology or the node metadata. There has been work in Natural Language Processing on removing gender bias from word embeddings [e.g. bolukbasi2016man], but these methods operate on pre-computed embeddings and rely on identification of gendered terminology.

Additionally, there has been a wealth of work studying semi-supervised learning with graphs [e.g. yang2016revisiting] and graph convolutional networks [e.g. abuelhaija2019mixhop, kipf2016semi, defferrard2016convolutional], which use graph metadata as features. While most semi-supervised and supervised neural networks for graphs indirectly produce embeddings that in some cases can be identified with feature and topology dimensions, they are trained as part of prediction or label propagation tasks. Therefore, the topology embeddings are free to correlate with features to the extent that this serves the loss function; there is no explicit separation of topology and metadata dimensions. In this paper, we have studied the benefits of metadata orthogonalization in the unsupervised setting, and we leave the exploration of our techniques in semi-supervised and supervised settings to future work.

6 Conclusion

In this work, we have shown that unsupervised training of graph embeddings absorbs bias from important graph metadata. We proposed a novel solution to this problem: the Metadata-Orthogonal Node Embedding Training (MONET) unit. The MONET unit is the first graph learning technique for training-time debiasing of embeddings using orthogonalization. Our experimental results on real datasets showed that MONET is able to encode the effect of graph metadata in isolated embedding dimensions while simultaneously removing that effect from other dimensions. This has immediate practical applications, which we illustrate by mitigating a simulated shilling attack on a real dataset of movie ratings.

This work was meant to introduce the basic principles underlying the need for the MONET technique, and to show its utility in a shallow graph neural network (GloVe). While we used a shallow network for instructional purposes, we note that MONET is generalizable: MONET units can be used to debias any set of embeddings from another set during training. Subsequent research can explore the use of MONET in deeper networks, and potentially in semi-supervised models or graph convolutional networks. As MONET's SVD calculation can be expensive with large graphs and large embedding dimensions, future research could assess the effect of SVD approximations, or develop training algorithms that cache previous metadata embedding SVDs to speed up training.

References

Appendix A

Proposition 1.

Under the assumptions of Theorem 1, we have

E[Z^⊤ ΔW] = 2α (Σ + Σ̃).    (6)
Proof.

Derivatives of J_r yield that the i-th row of ΔW is δ_i := −2α f_ij r_ij w_j, where

r_ij := w_i^⊤ w_j + z_i^⊤ z_j − log X_ij = (z_i^⊤ z_j − z*_i^⊤ z*_j) − E_ij.    (7)

Similarly the j-th row is δ_j := −2α f_ij r_ij w_i, and all other rows are zero vectors. Hence

Z^⊤ ΔW = z_i δ_i^⊤ + z_j δ_j^⊤.    (8)

We derive the first term on the right-hand side of Equation 8; the second term follows by symmetry. Note first that E[f_ij E_ij z_i w_j^⊤] = 0_{d_m×d}, by the conditional centering assumption and the tower property. Second, substituting Equation 7 into δ_i gives

z_i δ_i^⊤ = 2α f_ij (z*_i^⊤ z*_j − z_i^⊤ z_j) z_i w_j^⊤ + 2α f_ij E_ij z_i w_j^⊤.

Taking expectations and combining these two displays, we have

E[z_i δ_i^⊤] = 2α E[f_ij (z*_i^⊤ z*_j − z_i^⊤ z_j) z_i w_j^⊤] = 2α Σ.    (9)

Applying symmetry to the second term in Equation 8 completes the proof. ∎

A.1 Proof of Theorem 1

Proof.

Proposition 1 gives E[Z^⊤ ΔW] = 2α (Σ + Σ̃). Second, note that MLeak(Z, ΔW) = ‖Z^⊤ ΔW‖_F, and thus E[MLeak(Z, ΔW)] = E‖Z^⊤ ΔW‖_F. Recalling that α > 0 and Σ + Σ̃ ≠ 0_{d_m×d}, we have

‖E[Z^⊤ ΔW]‖_F = 2α ‖Σ + Σ̃‖_F > 0.

Applying Jensen's Inequality, E‖Z^⊤ ΔW‖_F ≥ ‖E[Z^⊤ ΔW]‖_F, completes the proof. ∎

A.2 Proof of Theorem 2

Proof.

Consider metadata embeddings Z and, as in the MONET algorithm, define the projection P_Z := I_n − UU^⊤, where U is the matrix of left-singular vectors of Z. Writing the thin SVD Z = UDV^⊤, properties of the SVD give Z^⊤ UU^⊤ = VDU^⊤ UU^⊤ = VDU^⊤ = Z^⊤, and hence Z^⊤ P_Z = Z^⊤ − Z^⊤ UU^⊤ = 0_{d_m×n}. This means that Z^⊤ W_⊥ = Z^⊤ P_Z W = 0_{d_m×d} and Z^⊤ ΔW_⊥ = Z^⊤ P_Z ΔW = 0_{d_m×d}, which completes the proof by the definition of metadata leakage. ∎
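The proof can also be verified numerically: up to floating-point error, the projected embeddings exhibit zero leakage. The dimensions below mirror those of Section 4.1.

    import numpy as np

    rng = np.random.default_rng(0)
    Z = rng.normal(size=(1107, 2))    # metadata embeddings (e.g. political party directions)
    W = rng.normal(size=(1107, 16))   # topology embeddings
    U, _, _ = np.linalg.svd(Z, full_matrices=False)
    W_perp = W - U @ (U.T @ W)        # P_Z W
    print(np.linalg.norm(Z.T @ W_perp, ord="fro"))  # ~1e-13: no leakage up to machine precision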
