Convolutional 2D Knowledge Graph Embeddings
Abstract
In this work, we introduce a convolutional neural network model, ConvE, for the task of link prediction. ConvE applies 2D convolution directly on embeddings, thus inducing spatial structure in embedding space. To scale to large knowledge graphs and prevent overfitting due to over-parametrisation, previous work seeks to reduce the number of parameters by performing simple transformations in embedding space. We take inspiration from computer vision, where convolution is able to learn multiple layers of non-linear features while reducing the number of parameters through weight sharing. Applied naively, convolutional models for link prediction are computationally costly. However, by predicting all links simultaneously we improve test-time performance by more than 300x on FB15k. We report state-of-the-art results for numerous previously introduced link prediction benchmarks, including the well-established FB15k and WN18 datasets. Previous work noted that these two datasets contain many reversible triples, but the severity of this issue had not been quantified. To investigate this, we design a simple model that uses a single rule which reverses relations and achieves state-of-the-art results. We introduce WN18RR, a subset of WN18 constructed in the same way as the previously proposed FB15k-237, to alleviate this problem, and report results for our own and previously proposed models on all datasets. Analysis of our convolutional model suggests that it is particularly good at modelling nodes with high in-degree and high PageRank, and that 2D convolution applied on embeddings seems to induce contrasting pixel-level structures.
Convolutional 2D Knowledge Graph Embeddings
Tim Dettmers† (Università della Svizzera italiana, tim.dettmers@gmail.com), Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel (University College London, {p.minervini, p.stenetorp, s.riedel}@ucl.ac.uk)
† This work was conducted during a research visit to University College London.
1 Introduction
Knowledge graphs are databases of triples, where facts are represented as relationships (edges) between entities (nodes). While knowledge graphs have important applications in search, analytics, recommendation, and data integration, they suffer from incompleteness, that is, missing links in the graph. For example, in Freebase and DBpedia more than 66% of the person entries are missing a birthplace [6, 15]. Link prediction is the task of predicting such missing links.
One focus of previous link prediction work lies in the number of model parameters. Knowledge graphs can be large, and as such link predictors should scale manageably with respect to both the number of parameters and computational cost to be applicable in real-world scenarios. It has also been shown that more complex models suffer from over-parameterisation and are thus prone to overfitting [22]. Due to these limitations, link prediction models are often composed of simple operations, like dot products and matrix multiplications over an embedding space, and use a limited number of parameters.
In computer vision, convolutional layers are used to overcome these problems by using parameter sharing to learn multiple layers of nonlinear features which are increasingly abstract and as such more expressive than shallower models [14].
Due to a range of regularisation techniques [9, 28, 30], convolutional architectures have little or no problem with overfitting, even for networks with a depth close to a hundred layers [29, 8]. Convolutional layers are also highly parameter-efficient: for example, typical convolutional architectures have about 90% of their parameters in the fully-connected layers which compose features, and only 10% in the convolutional layers which extract features [14]. As such, convolutional layers constitute highly parameter-efficient feature extractors which can model rich non-linear interactions and generalise effectively.
In this work we propose a neural link predictor, ConvE, that uses 2D convolution over embeddings to predict new links in knowledge graphs. While the bulk of our model’s parameters is still in the relation and entity embeddings like in other link predictors, by using convolution as a weight sharing mechanism, we only use an additional 72 parameters to extract an extra layer of nonlinear features which are then projected back to embedding space for scoring. We thus have a highly parameter efficient, scalable architecture, which generalises well and uses few additional parameters compared to other commonly used link prediction models. We make the following contributions:

We introduce ConvE, a simple model that uses 2D convolutions over embeddings and achieves state-of-the-art performance on the WN18, FB15k, YAGO3-10 and Countries datasets.

Using a 1-N approach for link prediction, where we predict scores for all possible links simultaneously, we speed up evaluation by several orders of magnitude. While useful for our model to extract more predictions from each expensive convolution operation, this approach can also help speed up other link prediction models.

Previous work by Toutanova and Chen [31] noted that FB15k and WN18 contain many redundant, reversible relations, but they did not investigate the severity of this problem. We demonstrate the severity by designing a simple model based on a reversal rule which achieves state-of-the-art results on WN18 and FB15k, suggesting that successful models might learn this rule rather than modelling the knowledge graph itself. We propose a new version of WN18, which follows the construction procedure of FB15k-237, to alleviate this problem.

We show that the performance of our model compared to a shallow model, DistMult, increases when the network and test set involve nodes with high in-degree or high PageRank. In particular, we show that the performance of our model relative to DistMult increases with the mean PageRank of the nodes in the test set. This suggests that our model is better at modelling nodes with high in-degree.
2 Related Work
Several neural link prediction models have been proposed in the literature, such as the Translating Embeddings model (TransE) [1], the Bilinear Diagonal model (DistMult) [34] and its extension in the complex space (ComplEx) [33] – we refer to Nickel et al. [22] for a recent survey on such models. The neural link prediction model that is most closely related to this work is most likely the Holographic Embeddings model (HolE) [23], which uses crosscorrelation – the inverse of circular convolution – for matching entity embeddings; it is inspired by holographic models of associative memory. However, HolE does not learn multiple layers of nonlinear features and is thus theoretically less expressive than our model.
To the best of our knowledge, ours is the first model to use 2D convolutional layers for neural link prediction. Graph Convolutional Networks (GCNs) [7, 5, 13] are a related line of research, in which the convolution operator is generalised to exploit locality information in graphs. However, the GCN framework is limited to undirected graphs, while knowledge graphs are naturally directed, and it suffers from potentially prohibitive memory requirements [13]. Relational GCNs (R-GCNs) [26] are a generalisation of GCNs for dealing with highly multi-relational data such as knowledge graphs – we include them in our experimental evaluation.
Several CNN-based models have been proposed in natural language processing (NLP) for solving a variety of tasks, including semantic parsing [35], sentence classification [11], search query retrieval [27], sentence modelling [10], as well as traditional NLP tasks [4]. However, most work in NLP uses 1D convolutions, that is, convolutions which operate over a temporal sequence of embeddings, for example a sequence of words in embedding space. In this work, we use 2D convolutions which operate spatially, directly over the embeddings. As we show later in Section 7.2, this induces pixel-level spatial structure in our embeddings.
Using 2D convolutions has one major advantage for modelling interactions between embeddings. If we concatenate two 1D embeddings – that is, line them up end to end – a 1D convolution can only extract features of the interaction between the two embeddings around the single concatenation point. If we instead concatenate two 2D-reshaped embeddings – that is, stack them – a 2D convolution can extract interaction features along the entire concatenation line. Thus 2D convolution is able to extract more features of the interactions between two embeddings than 1D convolution.
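As a back-of-the-envelope illustration of this point (not from the paper; the dimensions below are hypothetical), one can count how many filter positions straddle the boundary between the two concatenated embeddings in each case:

```python
import numpy as np

# Hypothetical embedding dimension and 2D reshape (k = kw * kh).
k = 192
kw, kh = 12, 16
assert kw * kh == k

# 1D case: two embeddings concatenated into a line of length 2k.
# A length-3 filter crosses the single concatenation point in only 2 positions.
positions_1d = 2

# 2D case: two (kw x kh) reshaped embeddings stacked into a (2*kw x kh) image.
# A 3x3 filter straddles the horizontal concatenation line at 2 vertical
# offsets, times (kh - 2) horizontal positions ('valid' convolution).
positions_2d = 2 * (kh - 2)
```

Under these toy assumptions, the 2D stacking exposes 28 boundary-straddling filter positions versus 2 in the 1D case, in line with the argument above.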
3 Background
Table 1: Scoring functions ψ_r(e_s, e_o) of several link prediction models, their relation-dependent parameters, and space complexity; n_e and n_r denote the number of entities and relation types, respectively.

Model | Scoring function ψ_r(e_s, e_o) | Relation parameters | Space complexity
RESCAL [21] | e_s^⊤ W_r e_o | W_r ∈ R^{k×k} | O(n_e k + n_r k²)
SE [3] | ‖W_r^L e_s − W_r^R e_o‖_p | W_r^L, W_r^R ∈ R^{k×k} | O(n_e k + n_r k²)
TransE [1] | ‖e_s + r_r − e_o‖_p | r_r ∈ R^k | O(n_e k + n_r k)
DistMult [34] | ⟨e_s, r_r, e_o⟩ | r_r ∈ R^k | O(n_e k + n_r k)
ComplEx [33] | Re(⟨e_s, r_r, conj(e_o)⟩) | r_r ∈ C^k | O(n_e k + n_r k)
ConvE | f(vec(f([ē_s; r̄_r] ∗ ω)) W) e_o | r_r ∈ R^k | O(n_e k + n_r k)
A knowledge graph can be formalised as a set of triples (facts), each consisting of a relationship r ∈ R and two entities s, o ∈ E, referred to as the subject and object of the triple. Each triple (s, r, o) denotes a relationship of type r between the entities s and o.

The link prediction problem can be formalised as a pointwise learning-to-rank problem, where the objective is learning a scoring function ψ : E × R × E → R. Given an input triple x = (s, r, o), its score ψ(x) is proportional to the likelihood that the fact encoded by x is true.

In our case, the scoring function is defined via a deep convolutional network [16].
Neural Link Predictors
Neural link prediction models [22] can be seen as multi-layer neural networks composed of an encoding component and a scoring component. Given an input triple (s, r, o), the encoding component maps the entities s and o to their distributed embedding representations e_s and e_o. In the scoring component, the two entity embeddings are scored by a relation-specific function ψ_r, and the score of the triple is defined as ψ(s, r, o) = ψ_r(e_s, e_o).

In Table 1 we summarise the scoring functions of several link prediction models from the literature. The vectors e_s and e_o denote the subject and object embeddings, which are complex-valued in ComplEx and real-valued in all other models; ⟨·, ·, ·⟩ denotes the tri-linear dot product; ∗ denotes the convolution operator; f denotes a non-linear function.
4 Convolutional 2D Embeddings of Knowledge Graphs
In this work we propose a neural link prediction model where the interactions between input entities and relationships are modelled by fully-connected and convolutional layers. Our model's main feature is convolution over 2D-shaped embeddings. The architecture is summarised in Figure 1. Formally, the scoring function is defined as follows:
ψ_r(e_s, e_o) = f(vec(f([ē_s; r̄_r] ∗ ω)) W) e_o    (1)

where f denotes a non-linear function, and ē_s and r̄_r denote a 2D reshaping of e_s and r_r, respectively: if e_s, r_r ∈ R^k, then ē_s, r̄_r ∈ R^{k_w × k_h}, where k = k_w k_h.
In the feed-forward pass, the model performs a row-vector lookup operation in two embedding matrices, one for entities and one for relations. The model then concatenates ē_s and r̄_r into a single input, and feeds it to a 2D convolutional layer with filters ω. Such a layer returns a feature map tensor T, with one feature map per 2D filter, whose dimensions depend on the filter sizes. The tensor T is then reshaped into a vector vec(T), which is projected into a k-dimensional space by a linear transformation parametrised by the matrix W, and matched with the object embedding e_o via a dot product. The convolutional filters ω and the matrix W are shared parameters, independent of the input entities s and o and the relationship r.
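To make the feed-forward pass concrete, the following is a minimal numpy sketch of the architecture described above, with hypothetical dimensions and randomly initialised parameters; batch normalisation and dropout are omitted, and the naive convolution loop stands in for an optimised convolutional layer:

```python
import numpy as np

def conv2d_valid(img, filt):
    """Naive 'valid' 2D cross-correlation; stands in for a conv layer."""
    H, W = img.shape
    fh, fw = filt.shape
    out = np.empty((H - fh + 1, W - fw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + fh, j:j + fw] * filt)
    return out

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
k, kw, kh = 200, 10, 20          # embedding size and 2D reshape (k = kw * kh)
n_entities, n_filters = 50, 32   # toy sizes, not the paper's

E = rng.normal(size=(n_entities, k)) * 0.1   # entity embedding matrix
r = rng.normal(size=k) * 0.1                 # one relation embedding

s = 0                                        # subject entity index
# Stack the 2D-reshaped subject and relation embeddings into one "image".
stacked = np.concatenate([E[s].reshape(kw, kh),
                          r.reshape(kw, kh)], axis=0)    # shape (2*kw, kh)

filters = rng.normal(size=(n_filters, 3, 3)) * 0.1
fmaps = relu(np.stack([conv2d_valid(stacked, f) for f in filters]))

W = rng.normal(size=(fmaps.size, k)) * 0.01  # projection back to embedding space
hidden = relu(fmaps.reshape(-1) @ W)

scores = hidden @ E.T                        # 1-N: score (s, r) against all entities
probs = 1.0 / (1.0 + np.exp(-scores))        # logistic sigmoid
```

The final matrix product against E.T is the 1-N scoring step described in Section 4.1: one pass through the convolutional path yields scores for all candidate objects at once.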
For training the model parameters, we apply the logistic sigmoid function σ(·) to the scores, that is p = σ(ψ_r(e_s, e_o)), and minimise the following binary cross-entropy loss:
L(p, t) = −(1/N) Σ_i ( t_i · log(p_i) + (1 − t_i) · log(1 − p_i) )    (2)
where p is the vector of predicted probabilities over the N candidate objects and t is the corresponding label vector, with t_i = 1 for triples that exist and t_i = 0 otherwise.
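The loss in Equation 2 can be sketched in a few lines of numpy; the probabilities and labels below are illustrative values, not taken from any experiment:

```python
import numpy as np

def bce_loss(p, t, eps=1e-12):
    """Binary cross-entropy of Equation 2, averaged over the N candidates."""
    p = np.clip(p, eps, 1.0 - eps)           # guard against log(0)
    return -np.mean(t * np.log(p) + (1.0 - t) * np.log(1.0 - p))

p = np.array([0.9, 0.1, 0.8, 0.2])           # predicted probabilities for 4 candidates
t = np.array([1.0, 0.0, 1.0, 0.0])           # labels: 1 for true (s, r, o) triples
loss = bce_loss(p, t)
```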
We use rectified linear units as the non-linearity for faster training [14], and batch normalisation after each layer to stabilise, regularise and increase the rate of convergence [9]. We regularise our model with dropout [28] at several stages: on the embeddings, on the feature maps after the convolution operation, and on the hidden units after the fully-connected layer. We use Adam as the optimiser [12], and label smoothing to lessen overfitting due to saturation of the output non-linearities at the labels [30].
4.1 Fast Evaluation for Link Prediction Tasks
Unlike other link prediction models, which take an entity pair and a relation as a triple (s, r, o) and score it (1-1 approach), we take one (s, r) pair and score it against all entities simultaneously (1-N approach). In our architecture, convolution consumes about 75-90% of the total computation time of the model, so it is important to minimise the number of convolution operations. With a naive 1-1 approach, a training pass and an evaluation with a convolutional model on FB15k – one of the datasets used in our experiments – take 2.4 minutes and 3.34 hours, respectively, using a high-end GPU with a batch size of 128 and embedding size 128. With the 1-N approach, the respective numbers are 45 and 35 seconds – an improvement of over 300x in evaluation time. Usually, in a 1-1 approach, the batch size is increased to speed up evaluation [2], but this is not feasible for convolutional models since GPU memory requirements quickly blow up for large batch sizes when using convolution. Thus the 1-N evaluation is of practical importance, since our model would otherwise be computationally expensive to evaluate.
Note that any 1-1 model can make use of 1-N evaluation. This practical trick for speeding up evaluation is therefore applicable to all standard link prediction models, which usually operate using a 1-1 approach.
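A minimal sketch of why 1-N scoring is cheap: once the expensive convolutional path has produced a hidden representation for an (s, r) pair, scoring all candidate objects is a single matrix-vector product, and it yields exactly the same scores as a 1-1 loop over entities (toy dimensions below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n_entities, k = 1000, 200
E = rng.normal(size=(n_entities, k))   # entity embedding matrix
hidden = rng.normal(size=k)            # output of the (expensive) convolution
                                       # path for one (s, r) pair

# 1-1 approach: one dot product per candidate object entity.
scores_11 = np.array([hidden @ E[o] for o in range(n_entities)])

# 1-N approach: a single matrix-vector product reuses the convolution
# output for all candidate objects at once.
scores_1n = E @ hidden
```

The savings come from amortising one convolution over all N candidates rather than recomputing it N times.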
5 Experiments
5.1 Knowledge Graph Datasets
For evaluating our proposed model, we use a selection of common knowledge graphs from the literature. WN18 [1] is a subset of WordNet consisting of 18 relations and 40,943 entities. Most of its 151,442 triples involve hyponym and hypernym relations, and thus WN18 is marked by relations arranged in a strictly hierarchical structure. FB15k [1] is a subset of Freebase which contains about 15k entities and 1,345 different relations; a large fraction of its content deals with movies/actors/awards and sports/sport teams. YAGO3-10 [19] is a subset of YAGO3 consisting of entities which have a minimum of 10 relations each; it has 123,182 entities and 37 relations, and most of its triples deal with descriptive attributes of people (citizenship/gender/profession).
Countries is a benchmark dataset that is useful for evaluating a model's ability to learn long-range dependencies between entities and relations. It consists of three subtasks which increase in difficulty in a stepwise fashion. It contains three types of entities: countries (e.g. Germany, Italy), subregions (e.g. Western Europe, South America) and regions (e.g. Europe, Americas); and two relations: NeighborOf(country, country) and LocatedIn(country, subregion / region). The task is to predict the region of a given country, evaluated in terms of the area under the precision-recall curve (AUC-PR). The task has three levels of difficulty, S1, S2, S3, where each step increases the path length, or indirection, which one needs to traverse in the dataset graph in order to find a solution. In S1, all region links of test-case countries are removed from the training set (solution: country → subregion → region); in S2, the subregion links are also removed (solution: country → neighbour → region); in S3, the region links of all neighbours are also removed (solution: country → neighbour → subregion → region).
We also introduce a new dataset, WN18RR,^1 an alteration of WN18 which aims to address a shortcoming of WN18: it was first noted by Toutanova and Chen [31] that the test sets of WN18 and FB15k mostly contain reversed triples that are present in the training set; for example, the test set may contain a triple (s, r, o) while the training set contains its reverse (o, r', s). Toutanova and Chen [31] introduced FB15k-237, a subset of FB15k in which such reversing relations are removed, to create a dataset without this property. However, they did not explicitly demonstrate the severity of this problem, which might explain why further research kept using these datasets for evaluation without addressing the issue.

^1 Code and dataset available at: https://github.com/TimDettmers/ConvE
In Section 5.2 we introduce a simple reversal rule which demonstrates the severity of this bias by achieving state-of-the-art results on both WN18 and FB15k. One might argue that a good relational model should learn how to reverse relations in addition to more complex aspects of the dataset, but if this is our evaluation goal we should design more controlled datasets and experiments in which it is clear what a relational model learns. By creating WN18RR, we seek to reclaim WN18 as a dataset which tests a model's ability to model a general knowledge graph that cannot easily be completed using a single rule. As such, we do not recommend the use of FB15k and WN18 in further research; instead, we recommend FB15k-237, WN18RR, and YAGO3-10, which do not suffer from these issues.
5.2 Experimental Setup
We selected the hyperparameters of our ConvE model via grid search according to the mean reciprocal rank (MRR) on the validation set. The grid search covered embedding dropout, feature map dropout, projection layer dropout, embedding size, batch size, learning rate, and label smoothing. Besides the grid search, we also tried modifications of the 2D convolutional layer: we tried replacing it with fully-connected layers and with 1D convolution, but these performed consistently worse and we abandoned them. We also experimented with different filter sizes and found that we only obtain good results if the first convolutional layer uses small, that is 3x3, filters. We found that the following combination of parameters works well on WN18, YAGO3-10 and FB15k: embedding dropout 0.2, feature map dropout 0.2, projection layer dropout 0.3, embedding size 200, batch size 128, learning rate 0.001, and label smoothing 0.1. For the Countries dataset, we increase embedding dropout to 0.3, hidden dropout to 0.5, and set label smoothing to 0.
We use early stopping based on the mean reciprocal rank (WN18, FB15k, YAGO3-10) and AUC-PR (Countries) on the validation set, which we evaluate every three epochs. Unlike on the other datasets, results on Countries have high variance, so we average over 10 runs and report 95% confidence intervals.
Baseline – Reversal model:
WN18 and FB15k have been noted to contain many reversible relations [31]. For instance, a test triple (feline, hyponym, cat) can easily be mapped to a training triple (cat, hypernym, feline): knowing that hyponym is the inverse of hypernym allows one to easily predict the vast majority of test triples.
For such a reason, to test the redundancy of the datasets, we also investigate a simple rule-based baseline model, which we refer to as the reverse model. This model looks for relationships that are the reverse of each other, such as hypernym and hyponym. We extract these relationships automatically from the training set: given a pair of relations (r1, r2), we check whether the presence of (s, r1, o) implies the presence of (o, r2, s), and vice versa. If (s, r1, o) co-occurs with (o, r2, s) at least 99% of the time, we say that r1 implies the reverse of r2, and we use this reversal rule for predicting test triples. At test time we check if a test triple has reversal matches outside the test set; if matches are found we sample a permutation of the top ranks for these matches; if no match is found we select a random rank for the test triple.
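The rule-extraction step of this baseline can be sketched as follows; this is an illustrative reimplementation, not the authors' code, and the toy training set below is hypothetical:

```python
from collections import defaultdict

def extract_reversal_rules(train_triples, threshold=0.99):
    """Find relation pairs (r1, r2) such that (s, r1, o) implies (o, r2, s)
    at least `threshold` of the time in the training set."""
    by_relation = defaultdict(set)
    for s, r, o in train_triples:
        by_relation[r].add((s, o))
    rules = []
    for r1, pairs1 in by_relation.items():
        for r2, pairs2 in by_relation.items():
            if r1 == r2:
                continue
            # Fraction of (s, r1, o) triples whose reverse (o, r2, s) exists.
            reversed_hits = sum((o, s) in pairs2 for s, o in pairs1)
            if reversed_hits / len(pairs1) >= threshold:
                rules.append((r1, r2))   # r1 implies the reverse relation r2
    return rules

# Toy training set illustrating the hypernym/hyponym redundancy.
train = [("cat", "hypernym", "feline"), ("dog", "hypernym", "canine"),
         ("feline", "hyponym", "cat"), ("canine", "hyponym", "dog")]
rules = extract_reversal_rules(train)
```

At test time, a triple (s, r, o) would then be answered by looking up (o, r2, s) in the training set for any rule (r, r2).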
6 Results
Table 2: Link prediction results on WN18 and FB15k (MR / MRR / Hits@10 / Hits@3 / Hits@1).

Model | WN18: MR / MRR / @10 / @3 / @1 | FB15k: MR / MRR / @10 / @3 / @1
DistMult [34] | 902 / 0.822 / 0.936 / 0.914 / 0.728 | 97 / 0.654 / 0.824 / 0.733 / 0.546
ComplEx [33] | – / 0.941 / 0.947 / 0.936 / 0.936 | – / 0.692 / 0.840 / 0.759 / 0.599
Gaifman [24] | 352 / – / 0.939 / – / 0.761 | 75 / – / 0.842 / – / 0.692
ANALOGY [18] | – / 0.942 / 0.947 / 0.944 / 0.939 | – / 0.725 / 0.854 / 0.785 / 0.646
R-GCN [26] | – / 0.814 / 0.964 / 0.929 / 0.697 | – / 0.696 / 0.842 / 0.760 / 0.601
ConvE | 504 / 0.942 / 0.955 / 0.947 / 0.935 | 64 / 0.745 / 0.873 / 0.801 / 0.670
Reverse model | 602 / 0.857 / 0.969 / 0.958 / 0.757 | 1563 / 0.759 / 0.786 / 0.771 / 0.743
Table 3: Link prediction results on WN18RR and FB15k-237 (MR / MRR / Hits@10 / Hits@3 / Hits@1).

Model | WN18RR: MR / MRR / @10 / @3 / @1 | FB15k-237: MR / MRR / @10 / @3 / @1
DistMult [34] | 5110 / 0.425 / 0.491 / 0.439 / 0.389 | 254 / 0.241 / 0.419 / 0.263 / 0.155
ComplEx [33] | 5261 / 0.444 / 0.507 / 0.458 / 0.411 | 248 / 0.240 / 0.419 / 0.263 / 0.152
R-GCN [26] | – / – / – / – / – | – / 0.248 / 0.417 / 0.258 / 0.153
ConvE | 7323 / 0.342 / 0.411 / 0.360 / 0.306 | 330 / 0.301 / 0.458 / 0.330 / 0.220
Reverse model | 13417 / 0.360 / 0.360 / 0.360 / 0.360 | 7124 / 0.007 / 0.012 / 0.008 / 0.004
Table 4: Link prediction results on YAGO3-10 and AUC-PR results on Countries.

Model | YAGO3-10: MR / MRR / @10 / @3 / @1 | Countries AUC-PR: S1 / S2 / S3
DistMult [34] | 5926 / 0.337 / 0.540 / 0.379 / 0.237 | 1.000±0.000 / 0.721±0.122 / 0.516±0.070
ComplEx [33] | 6351 / 0.355 / 0.547 / 0.399 / 0.258 | 0.965±0.021 / 0.571±0.104 / 0.430±0.072
ConvE | 2792 / 0.523 / 0.658 / 0.564 / 0.448 | 1.000±0.000 / 0.985±0.013 / 0.856±0.051
Reverse model | 60251 / 0.015 / 0.022 / 0.017 / 0.010 | – / – / –
Similarly to [34, 33, 24], we report results in the filtered setting: that is, we rank a test triple only against candidate triples that are not known to be true, i.e. that do not appear in the training, validation, or test set. Our results on the standard benchmarks FB15k and WN18 are shown in Table 2; results on the datasets with reversing relations removed are shown in Table 3; results on YAGO3-10 and Countries are shown in Table 4.
Strikingly, the reverse model baseline achieves state-of-the-art results on many metrics on both FB15k and WN18. However, it fails to pick up on reversible relations on YAGO3-10 and FB15k-237. On WN18RR, our reverse model still achieves a reasonable score, due to self-reversing relationships like "similar to" which survive the construction procedure also used for the FB15k-237 dataset.
Our proposed model, ConvE, achieves state-of-the-art performance on all metrics on YAGO3-10 and on some metrics on FB15k, and it does well on WN18. On Countries, it solves the S1 and S2 tasks and does well on S3, scoring better than other models such as DistMult and ComplEx.
For FB15k-237, we could not replicate the baseline results from Toutanova et al. [32], where the models in general achieve better performance than we can reproduce. Compared to Schlichtkrull et al. [26], our results for the standard models are slightly better than theirs and on a par with their R-GCN model.
7 Analysis
7.1 Looking at Indegree and PageRank
Our main hypothesis for the good performance of our model on datasets like YAGO3-10 and FB15k-237, compared to WN18RR, is that these datasets contain nodes with very high relation-specific in-degree. For example, the node "United States" with incoming edges of type "was born in" has an in-degree of over 10,000. Many of these 10,000 neighbouring nodes will be very different from each other (actors, writers, politicians, business people), and our main hypothesis is that models which, like ours, learn multiple layers of non-linear features have an advantage over shallow models in capturing all the constraints imposed by such high in-degree nodes.
However, for simpler datasets like WN18, which mainly consists of hypernym/hyponym relations that often have an in-degree of one (there is often only one generalisation of a given concept), nodes have small relation-specific in-degrees, and thus a linear model might be sufficient, easier to optimise, and able to find a better local minimum.
In this section we compare DistMult, a model that uses a simple tri-linear dot product, with our model, ConvE, which learns multiple layers of non-linear features, to analyse the effect of these high in-degree nodes on performance.
We test our hypothesis in two ways. We remove triples containing nodes with a relation-specific in-degree of greater than two from FB15k, and we remove triples containing nodes with a relation-specific in-degree of less than two from WN18. On these filtered datasets we hypothesise that, compared to DistMult, (1) our model will perform worse on FB15k (relatively more nodes with low in-degree), and (2) our model will perform better on WN18 (relatively more nodes with high in-degree). Indeed, we find that both hypotheses hold: for (1), on FB15k we have ConvE at 0.586 Hits@10 vs. DistMult at 0.728 Hits@10; for (2), on WN18 we have ConvE at 0.952 Hits@10 vs. DistMult at 0.938 Hits@10. This suggests that our model does have an advantage when modelling nodes with high in-degree.
To investigate this hypothesis further, we look at PageRank, a measure of the centrality of a node. PageRank can also be seen as a measure of the recursive in-degree of a node: the PageRank value of a node is proportional to the in-degree of the node, of its neighbours, of its neighbours' neighbours, and so forth, scaled relative to all other nodes in the network.
In line with our argument above, we expect nodes with high PageRank to be more difficult to model, since one entity embedding needs to capture numerous constraints with other entity embeddings; additionally, many of its neighbours, which by the definition of high PageRank often have high in-degree themselves, need to capture numerous additional constraints, and so forth.

To test this hypothesis, we calculate the PageRank for each dataset as a measure of centrality. We find that the most central nodes in WN18 have a PageRank value more than one order of magnitude smaller than the most central nodes in YAGO3-10 and Countries, and about 4 times smaller than the most central nodes in FB15k; see Figure 3 in the Appendix for more statistics. When we look at the mean PageRank of the nodes contained in the test sets, we find that the difference in Hits@10 between DistMult and ConvE is roughly proportional to the mean test set PageRank: the higher the mean PageRank of the test set nodes, the better ConvE does compared to DistMult, and vice versa. See Table 5 for these statistics. The correlation between mean test set PageRank and the relative error reduction of ConvE compared to DistMult is strong. This gives additional evidence that our model has an advantage at modelling nodes with high (recursive) in-degree.
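For reference, PageRank as used in this analysis can be computed by plain power iteration; the following is a generic sketch (not the implementation used in the paper) over a tiny hypothetical graph:

```python
import numpy as np

def pagerank(adj, damping=0.85, iters=100):
    """Power-iteration PageRank over a dense adjacency matrix (sketch)."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1, keepdims=True)
    # Row-stochastic transition matrix; dangling nodes distribute uniformly.
    transition = np.where(out_deg > 0, adj / np.maximum(out_deg, 1), 1.0 / n)
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - damping) / n + damping * transition.T @ rank
    return rank

# Tiny graph: nodes 0 and 1 both link to node 2, which therefore has
# the highest (recursive) in-degree and should get the highest PageRank.
adj = np.array([[0, 0, 1],
                [0, 0, 1],
                [0, 0, 0]], dtype=float)
ranks = pagerank(adj)
```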
To verify whether this behaviour is caused by learning multiple layers of non-linear features, we remove the 2D convolutional layer from our model and replace it with either two fully-connected layers or a 1D convolutional layer, while keeping the rest of the architecture unchanged. We then train all three models on FB15k and compare the scores. If multiple layers of non-linear features were the decisive factor, we would expect similar scores for all three models. Instead, we find that the fully-connected architecture performs much worse than 1D or 2D convolution, with Hits@10 of 0.466 (MLP) vs. 0.821 (1D) vs. 0.873 (2D), respectively.
Experimentally, this shows that the multiple layers of nonlinear features are not the decisive factors, but rather the ability of a layer to extract useful features. This is similar to the performance of multilayer perceptrons compared to convolutional networks in computer vision — both models learn nonlinear features, but convolutional layers can extract more relevant features which lead to better generalisation. The differences between 1D and 2D convolution suggests that features learned from the interaction of two embeddings (2D) are more powerful than just modelling the embeddings with convolution (1D).
In conclusion, we believe that the increased performance of our model compared to a standard link predictor, DistMult, can partially be explained by our model's ability to model nodes with high in-degree with greater precision. We show that this behaviour might be due to the ability of 2D convolutional layers to learn features of interactions between entity and relation embeddings. This raises the question of what kind of spatial structure 2D convolution induces in the embedding space.
Table 5: Mean PageRank of test set nodes and relative error reduction of ConvE over DistMult.

Dataset | PageRank | Error reduction
WN18RR | 0.104 | 0.91
WN18 | 0.125 | 1.28
FB15k | 0.599 | 1.23
FB15k-237 | 0.733 | 1.17
YAGO3-10 | 0.988 | 1.91
Countries S3 | 1.415 | 3.36
Countries S1 | 1.711 | 0
Countries S2 | 1.796 | 18.6
7.2 Spatial Structure of 2D Embeddings
Here we test whether the 2D embeddings contain spatial structure. We use Moran's I test, which tests the null hypothesis that the global spatial autocorrelation is zero, or in other words, that the random variable does not have any global spatial structure [20]. We use the PySal package [25] to carry out the test.
We normalise each "image" to have zero mean and unit variance, and apply Moran's I test to up to 100 samples of entity and relation embeddings. We count the proportion of significant tests as evidence for whether these embeddings have structure in general. We find that Moran's I tests on entity embeddings are significant near chance level, with less than 10% of tests being statistically significant. For relation embeddings we find weak evidence of spatial structure on YAGO3-10, where 44% of relations are significant. For visualisations of these significant relations see Figure 4 in the Appendix. These results suggest that 2D entity embeddings generally do not have global spatial structure and that only some of the relation embeddings have global spatial structure.
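Moran's I itself is straightforward to compute; the following sketch (a simplified stand-in for the PySal routine) uses rook-contiguity weights on a 2D grid, and illustrates the sign of the statistic on two synthetic patterns:

```python
import numpy as np

def morans_i(grid):
    """Global Moran's I for a 2D array with rook (4-neighbour) weights."""
    x = grid.ravel().astype(float)
    H, W = grid.shape
    n = x.size
    z = x - x.mean()
    num, w_sum = 0.0, 0.0
    for i in range(H):
        for j in range(W):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < H and 0 <= nj < W:
                    num += z[i * W + j] * z[ni * W + nj]
                    w_sum += 1.0
    # I = (n / sum of weights) * spatial covariance / variance.
    return (n / w_sum) * num / np.sum(z ** 2)

# Smooth gradient -> strong positive spatial autocorrelation;
# checkerboard -> strong negative autocorrelation (neighbours always differ).
gradient = np.add.outer(np.arange(8), np.arange(8))
checker = np.indices((8, 8)).sum(axis=0) % 2
```

Contrasting pixel-level structure of the kind discussed below resembles the checkerboard rather than the gradient, which is consistent with the test failing to flag global structure in the embeddings.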
However, the visualisation of YAGO3-10 and WN18 convolution filters in Figure 2 suggests that these filters match contrasting "pixel" values, that is, patterns where one pixel or a small group of pixels stands out relative to its neighbourhood. These pixel-feature filters hint at why Moran's I test fails to pick up spatial patterns: a spatial pattern in the embeddings must be larger than a few contrasting pixels to be significantly different from noise. If the 2D convolutions did not pick up on any spatial features, we would expect them to lead to poor model performance; hence these filters must operate on pixel-level structures. This also explains why filters larger than 3x3 do not work well in the first convolutional layer: larger filters sum pixel-level features together with their surroundings, diluting the information contained in pixel-level structures.
8 Conclusion and Future Work
In this work we introduced ConvE, a link prediction model that uses 2D convolution over embeddings and multiple layers of non-linear features to model knowledge graphs. The model uses few parameters, is computationally efficient due to a 1-N approach to link prediction, and thus scales well with increasing knowledge graph size. Our model achieves state-of-the-art results on several existing knowledge graph datasets. However, building on previous work, we also show that a simple reverse model for relations can achieve state-of-the-art results on WN18 and FB15k, raising the question of whether models evaluated on these datasets actually learn general link prediction or merely this reversal rule. We introduce WN18RR to address this issue for WN18, and we recommend using FB15k-237 over FB15k in future research.
In our analysis we show that the performance of our model compared to a common link predictor, DistMult, can partially be explained by its ability to model nodes with high (recursive) indegree. Tests for spatial autocorrelation reveal that the entity and relation embeddings in general do not have any significant spatial structure, but that 2D convolutions on embeddings instead learn structures of contrasting pixels.
Our model is still shallow compared to the convolutional architectures found in computer vision, and future work might investigate convolutional models of increasing depth. Further work might also look at interpreting 2D convolution, or at how to enforce large-scale structure in embedding space so that convolutional filters learn feature extractors for large-scale structures in embedding interactions.
Acknowledgments
We would like to thank Johannes Welbl for his feedback and helpful discussions related to this work. This work was supported by a Marie Curie Career Integration Award, an Allen Distinguished Investigator Award, a Google Europe Scholarship for Students with Disabilities, and the H2020 project SUMMA.
References
 Bordes et al. [2013a] Antoine Bordes, Nicolas Usunier, Alberto García-Durán, Jason Weston, and Oksana Yakhnenko. Translating Embeddings for Modeling Multi-relational Data. In Christopher J. C. Burges et al., editors, Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5–8, 2013, Lake Tahoe, Nevada, United States, pages 2787–2795, 2013a.
 Bordes et al. [2013b] Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating Embeddings for Modeling Multi-relational Data. In Advances in neural information processing systems, pages 2787–2795, 2013b.
 Bordes et al. [2014] Antoine Bordes, Xavier Glorot, Jason Weston, and Yoshua Bengio. A semantic matching energy function for learning with multi-relational data - application to word-sense disambiguation. Machine Learning, 94(2):233–259, 2014.
 Collobert et al. [2011] Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel P. Kuksa. Natural Language Processing (Almost) from Scratch. Journal of Machine Learning Research, 12:2493–2537, 2011.
 Defferrard et al. [2016] Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering. In Lee et al. [17], pages 3837–3845.
 Dong et al. [2014] Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. Knowledge Vault: A Web-Scale Approach to Probabilistic Knowledge Fusion. In Sofus A. Macskassy et al., editors, The 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’14, New York, NY, USA, August 24–27, 2014, pages 601–610. ACM, 2014. ISBN 9781450329569.
 Duvenaud et al. [2015] David K. Duvenaud, Dougal Maclaurin, Jorge Aguilera-Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alán Aspuru-Guzik, and Ryan P. Adams. Convolutional Networks on Graphs for Learning Molecular Fingerprints. In Corinna Cortes et al., editors, Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7–12, 2015, Montreal, Quebec, Canada, pages 2224–2232, 2015.
 He et al. [2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
 Ioffe and Szegedy [2015] Sergey Ioffe and Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. arXiv preprint arXiv:1502.03167, 2015.
 Kalchbrenner et al. [2014] Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. A Convolutional Neural Network for Modelling Sentences. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, June 22–27, 2014, Baltimore, MD, USA, Volume 1: Long Papers, pages 655–665. The Association for Computer Linguistics, 2014. ISBN 9781937284725.
 Kim [2014] Yoon Kim. Convolutional Neural Networks for Sentence Classification. In Alessandro Moschitti et al., editors, Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25–29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1746–1751. ACL, 2014. ISBN 9781937284961.
 Kingma and Ba [2014] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
 Kipf and Welling [2016] Thomas N. Kipf and Max Welling. SemiSupervised Classification with Graph Convolutional Networks. In International Conference on Learning Representations (ICLR), 2016.
 Krizhevsky et al. [2012] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
 Krompaß et al. [2015] Denis Krompaß, Stephan Baier, and Volker Tresp. Type-Constrained Representation Learning in Knowledge Graphs. In Marcelo Arenas et al., editors, The Semantic Web - ISWC 2015 - 14th International Semantic Web Conference, Bethlehem, PA, USA, October 11–15, 2015, Proceedings, Part I, volume 9366 of Lecture Notes in Computer Science, pages 640–655. Springer, 2015. ISBN 9783319250069.
 LeCun et al. [1998] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-Based Learning Applied to Document Recognition. Proceedings of the IEEE, 86(11):2278–2324, November 1998.
 Lee et al. [2016] Daniel D. Lee et al., editors. Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5–10, 2016, Barcelona, Spain, 2016.
 Liu et al. [2017] H. Liu, Y. Wu, and Y. Yang. Analogical Inference for Multi-Relational Embeddings. ArXiv e-prints, May 2017.
 Mahdisoltani et al. [2015] Farzaneh Mahdisoltani, Joanna Biega, and Fabian M. Suchanek. YAGO3: A Knowledge Base from Multilingual Wikipedias. In CIDR 2015, Seventh Biennial Conference on Innovative Data Systems Research, Asilomar, CA, USA, January 4–7, 2015, Online Proceedings. www.cidrdb.org, 2015.
 Moran [1950] Patrick AP Moran. Notes on Continuous Stochastic Phenomena. Biometrika, 37(1/2):17–23, 1950.
 Nickel et al. [2011] Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. A Three-Way Model for Collective Learning on Multi-Relational Data. In Lise Getoor et al., editors, Proceedings of the 28th International Conference on Machine Learning, ICML 2011, Bellevue, Washington, USA, June 28 - July 2, 2011, pages 809–816. Omnipress, 2011.
 Nickel et al. [2016a] Maximilian Nickel, Kevin Murphy, Volker Tresp, and Evgeniy Gabrilovich. A review of relational machine learning for knowledge graphs. Proceedings of the IEEE, 104(1):11–33, 2016a.
 Nickel et al. [2016b] Maximilian Nickel, Lorenzo Rosasco, and Tomaso A. Poggio. Holographic Embeddings of Knowledge Graphs. In Dale Schuurmans et al., editors, Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12–17, 2016, Phoenix, Arizona, USA, pages 1955–1961. AAAI Press, 2016b. ISBN 9781577357605.
 Niepert [2016] Mathias Niepert. Discriminative Gaifman Models. In Lee et al. [17], pages 3405–3413.
 Rey and Anselin [2010] Sergio J Rey and Luc Anselin. PySAL: A Python Library of Spatial Analytical Methods. Handbook of applied spatial analysis, pages 175–193, 2010.
 Schlichtkrull et al. [2017] Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. Modeling Relational Data with Graph Convolutional Networks. arXiv preprint arXiv:1703.06103, 2017.
 Shen et al. [2014] Yelong Shen, Xiaodong He, Jianfeng Gao, Li Deng, and Grégoire Mesnil. Learning Semantic Representations Using Convolutional Neural Networks for Web Search. In Chin-Wan Chung et al., editors, 23rd International World Wide Web Conference, WWW ’14, Seoul, Republic of Korea, April 7–11, 2014, Companion Volume, pages 373–374. ACM, 2014. ISBN 9781450327459.
 Srivastava et al. [2014] Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.
 Srivastava et al. [2015] Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Highway networks. arXiv preprint arXiv:1505.00387, 2015.
 Szegedy et al. [2016] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818–2826, 2016.
 Toutanova and Chen [2015] Kristina Toutanova and Danqi Chen. Observed Versus Latent Features for Knowledge Base and Text Inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, pages 57–66, 2015.
 Toutanova et al. [2015] Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael Gamon. Representing Text for Joint Embedding of Text and Knowledge Bases. In EMNLP, volume 15, pages 1499–1509, 2015.
 Trouillon et al. [2016] Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. Complex Embeddings for Simple Link Prediction. In Maria-Florina Balcan et al., editors, Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19–24, 2016, volume 48 of JMLR Workshop and Conference Proceedings, pages 2071–2080. JMLR.org, 2016.
 Yang et al. [2015] Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. Embedding Entities and Relations for Learning and Inference in Knowledge Bases. In International Conference on Learning Representations (ICLR), 2015.
 Yih et al. [2011] Wen-tau Yih, Kristina Toutanova, John C. Platt, and Christopher Meek. Learning Discriminative Projections for Text Similarity Measures. In Sharon Goldwater et al., editors, Proceedings of the Fifteenth Conference on Computational Natural Language Learning, CoNLL 2011, Portland, Oregon, USA, June 23–24, 2011, pages 247–256. ACL, 2011. ISBN 9781932432923.