Knowledge Graph Entity Alignment with Graph Convolutional Networks: Lessons Learned
Abstract
In this work, we focus on the problem of entity alignment in Knowledge Graphs (KG) and we report on our experiences when applying a Graph Convolutional Network (GCN) based model for this task.
Variants of GCN are used in multiple state-of-the-art approaches and therefore it is important to understand the specifics and limitations of GCN-based models.
Despite serious efforts, we were not able to fully reproduce the results from the original paper, and after a thorough audit of the code provided by its authors, we concluded that their implementation differs from the architecture described in the paper.
In addition, several tricks are required to make the model work, and some of them are not very intuitive.
We provide an extensive ablation study to quantify the effects these tricks and changes of architecture have on the final performance.
Furthermore, we examine current evaluation approaches and systematize available benchmark datasets.
We believe that people interested in KG matching, as well as novices entering the field, can profit from our work.
1 Introduction
The success of information retrieval in a given task critically depends on the quality of the underlying data. Another issue is that in many domains knowledge bases are spread across various data sources [20], and it is crucial to be able to combine information from different sources. In this work, we focus on knowledge bases in the form of Knowledge Graphs (KGs), which are particularly suited for information retrieval [25]. Joining information from different KGs is non-trivial, as there is no unified schema or vocabulary. The goal of the entity alignment task is to overcome this problem by learning a matching between entities in different KGs. In the typical setting, some of the alignments are known in advance (seed alignments), and the task is therefore supervised. More formally, we are given two graphs $G_L = (V_L, E_L)$ and $G_R = (V_R, E_R)$ with a seed alignment $A \subseteq V_L \times V_R$. It is commonly assumed that an entity $l \in V_L$ can match at most one entity $r \in V_R$. Thus, the goal is to infer alignments for the remaining nodes only.
Graph Convolutional Networks (GCNs) [14, 12], which have recently become increasingly popular, are at the core of state-of-the-art methods for entity alignment in KGs [32, 5, 35, 38, 11]. In this paper, we thoroughly analyze one of the first GCN-based entity alignment methods, GCN-Align [32]. Since the other methods we are studying can be considered extensions of this first paper and have a similar architecture, our goal is to understand the importance of its individual components and architectural choices. In summary, our contributions are as follows:
- We investigate the reproducibility of the published results of a recent GCN-based method for entity alignment and uncover differences between the method's description in the paper and the authors' implementation.
- We perform an ablation study to demonstrate the individual components' contribution.
- We apply the method to numerous additional datasets of different sizes to investigate the consistency of results across datasets.
2 Related work
In this section, we review previous work on entity alignment for Knowledge Graphs and survey the available datasets and the current evaluation process. We believe this is useful for practitioners, since we discovered some pitfalls, especially when implementing evaluation scores and selecting datasets for comparison. An overview of methods, datasets, and metrics is provided in Table 1.
Method | Datasets | Metrics | Code |
---|---|---|---|
MTransE [8] | WK3l-15K, WK3l-120K, CN3l | H@10(, MR) | yes |
IPTransE [37] | DFB-{1,2,3} | H@{1,10}, MR | yes |
JAPE [26] | DBP15K(JAPE) | H@{1,10,50}, MR | yes |
KDCoE [7] | WK3l-60K | H@{1,10}, MR | yes |
BootEA [28] | DBP15K(JAPE), DWY100K | H@{1,10}, MRR | yes |
SEA [21] | WK3l-15K, WK3l-120K | H@{1,5,10}, MRR | yes |
MultiKE [36] | DWY100K | H@{1,10}, MR, MRR | yes |
AttrE [30] | DBP-LGD,DBP-GEO,DBP-YAGO | H@{1,10}, MR | yes |
RSN [13] | custom DBP15K, DWY100K | H@{1,10}, MRR | yes |
GCN-Align [32] | DBP15K(JAPE) | H@{1,10,50} | yes |
CL-GNN [34] | DBP15K(JAPE) | H@{1,10} | yes |
MuGNN [5] | DBP15K(JAPE), DWY100K | H@{1,10}, MRR | yes |
NAEA [38] | DBP15K(JAPE), DWY100K | H@{1,10}, MRR | no |
2.1 Methods
While the problem of entity alignment in Knowledge Graphs has historically been tackled by designing vocabularies which are as broad as possible and establishing them as standards, recent approaches take a more data-driven view.
Early methods use classical knowledge graph link prediction models such as TransE [3] to embed the entities of the individual knowledge graphs using an intra-KG link prediction loss, and differ in how they treat the aligned entities.
For instance, MTransE [8] learns a linear transformation between the embedding spaces of the individual graphs using an $L_2$ loss.
BootEA [27] adopts a bootstrapping approach and iteratively labels the most likely alignments, utilizing them for further training. In addition to the alignment loss, embeddings of aligned entities are swapped regularly to calibrate the embedding spaces against each other. SEA [22] learns a mapping between the embedding spaces in both directions and additionally adds a cycle-consistency loss: the distance between the original embedding of an entity and its representation after being translated to the other space and back again is penalized.
IPTransE [37] embeds both KGs into the same embedding space and uses a margin-based loss to enforce that the embeddings of aligned entities become similar.
RSN [13] generates sequences using different types of random walks, which can move between graphs when visiting aligned entities. The generated sequences are fed to an adapted recurrent model.
JAPE [26], KDCoE [7], MultiKE [36] and AttrE [30] utilize attributes available for some entities and additional information like names of entities and relationships.
Finally, a growing number of methods is based on Graph Neural Networks (GNNs) [32, 5, 35, 38, 11]. We describe the earliest of these, GCN-Align [32], in detail in Section 3.
2.2 Datasets
Dataset | Subset | Graph | Triples | Entities | Relations | Alignments
---|---|---|---|---|---|---
DBP15k (full) | fr-en | fr | 192,191 | 66,858 | 1,379 | 15,000
 | | en | 278,590 | 105,889 | 2,209 |
 | ja-en | ja | 164,373 | 65,744 | 2,043 | 15,000
 | | en | 233,319 | 95,680 | 2,096 |
 | zh-en | zh | 153,929 | 66,469 | 2,830 | 15,000
 | | en | 237,674 | 98,125 | 2,317 |
DBP15k (JAPE) | fr-en | fr | 105,998 | 19,661 | 903 | 15,000
 | | en | 115,722 | 19,993 | 1,208 |
 | ja-en | ja | 77,214 | 19,814 | 1,299 | 15,000
 | | en | 93,484 | 19,780 | 1,153 |
 | zh-en | zh | 70,414 | 19,388 | 1,701 | 15,000
 | | en | 95,142 | 19,572 | 1,323 |
WK3l-15k | en-de | en | 209,041 | 15,127 | 1,841 | 1,289 (10,383)
 | | de | 144,244 | 14,603 | 596 | 1,140 (10,383)
 | en-fr | en | 203,356 | 15,170 | 2,228 | 2,498 (8,024)
 | | fr | 169,329 | 15,393 | 2,422 | 3,812 (8,024)
WK3l-120k | en-de | en | 624,659 | 67,650 | 2,393 | 6,173 (50,280)
 | | de | 389,554 | 61,942 | 861 | 4,820 (50,280)
 | en-fr | en | 1,375,406 | 119,749 | 3,109 | 36,749 (87,836)
 | | fr | 760,497 | 118,592 | 2,336 | 36,013 (87,836)
DWY-100k | dbp-wd | dbp | 463,294 | 100,000 | 330 | 100,000
 | | wd | 448,774 | 100,000 | 220 |
 | dbp-yg | dbp | 428,952 | 100,000 | 302 | 100,000
 | | yg | 502,563 | 100,000 | 31 |
The datasets used by entity alignment methods generally derive from large-scale open-source data sources such as DBpedia [2], YAGO [19], or Wikidata [33]. While the DWY-100k dataset comprises 100k aligned entities across the three aforementioned individual knowledge graphs, most of the datasets, such as DBP15k or WK3l, derive from a single multi-lingual database. There, subsets are formed according to a specific language, and entities which occur in multiple languages and are linked accordingly are used as alignments.
As an interesting observation, we found that all papers which evaluate on DBP15k do not evaluate on the full DBP15k dataset, but rather on a smaller subset published in the repository of the JAPE authors [26] (cf. Table 2).
2.3 Scores
It is common practice to consider only the entities which are part of the test alignment as potential matching candidates. Although we argue that ignoring entities exclusive to a single graph as potential candidates does not reflect the use-case situation well, we follow this protocol to remain comparable with previously published results.
In the following description of evaluation measures, we focus only on the case of aligning a node $l \in V_L$ with ground-truth alignment $(l, r) \in A$. The right-to-left alignment is handled analogously. Let $C \subseteq V_R$ denote the set of matching candidates in the right graph. For a node $l$, the entity alignment model generates a score $s(l, c)$ for each matching candidate $c \in C$. Afterwards, the candidates are sorted according to their score, and the rank of $l$ is computed as the index of the ground truth match $r$ in this sorted list (1-based). The mean rank (MR) is simply the mean over the ranks of all alignments.
The mean reciprocal rank (MRR) is the mean over all reciprocal ranks.
It is naturally bounded between 0 and 1, where 1 corresponds to a perfect score. Moreover, its value is dominated by small ranks, and it is less sensitive to larger ones. The hits at $k$ (H@k) is the percentage of alignments where the rank is at most $k$, i.e. it is equivalent to the recall at $k$.
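To make the evaluation protocol concrete, the following minimal numpy sketch computes MR, MRR, and H@k from a matrix of matching scores. The function name and the higher-is-better score convention are our own choices for illustration, not prescribed by any of the reviewed papers.

```python
import numpy as np

def rank_based_metrics(scores, true_idx, ks=(1, 10)):
    # scores: (n, c) matrix with scores[i, j] = score of candidate j for
    #         left entity i (higher is better; negate distances first).
    # true_idx: (n,) index of the ground-truth match of each left entity.
    order = np.argsort(-scores, axis=1)                       # candidates by descending score
    ranks = (order == true_idx[:, None]).argmax(axis=1) + 1   # 1-based rank of the true match
    metrics = {"MR": ranks.mean(), "MRR": (1.0 / ranks).mean()}
    for k in ks:
        metrics[f"H@{k}"] = 100.0 * (ranks <= k).mean()       # in percent, as in Table 5
    return metrics
```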
3 Method
GCN-Align [32] is a GCN-based approach which embeds all entities from both graphs into a common embedding space. Each entity $e$ is associated with structural features $x_e$, which are initialized randomly and updated during training. The features of all entities in a single graph are combined into the feature matrix $X$. Subsequently, a two-layer GCN is applied. A single GCN layer is described by

$$H^{(i+1)} = \sigma\left(\hat{D}^{-1/2} \hat{A} \hat{D}^{-1/2} H^{(i)} W^{(i)}\right)$$

with $\hat{A} = A + I$, where $A$ is the adjacency matrix, and $\hat{D}_{ii} = \sum_j \hat{A}_{ij}$ is the diagonal node degree matrix. The input of the first layer is set to $H^{(0)} = X$, and $\sigma$ is a non-linear activation function. For the first layer, $\sigma = \text{ReLU}$, and the second layer uses the identity. The output of the second layer is considered as the structural embedding, denoted by $H^s$. Both graphs are equipped with their own node features, but the convolution weights $W^{(i)}$ are shared across the graphs.
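This propagation rule can be written compactly as a dense numpy sketch (identifiers are ours; a practical implementation would use sparse matrices):

```python
import numpy as np

def gcn_layer(A, H, W=None, activation=lambda x: np.maximum(x, 0.0)):
    # A: (n, n) adjacency matrix, H: (n, d) node representations,
    # W: optional (d, d') convolution weights; passing W=None drops the
    # weight matrix, mirroring the authors' implementation (Section 3.1).
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    d_hat = A_hat.sum(axis=1)                    # node degrees of A_hat
    D_inv_sqrt = np.diag(d_hat ** -0.5)          # \hat{D}^{-1/2}
    H_out = D_inv_sqrt @ A_hat @ D_inv_sqrt @ H  # symmetric normalisation
    if W is not None:
        H_out = H_out @ W
    return activation(H_out)

# Two-layer structure GCN: ReLU after the first layer, identity after the second.
# H_s = gcn_layer(A, gcn_layer(A, X), activation=lambda x: x)
```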
The adjacency matrix is derived from the knowledge graph by first computing a score, called functionality, for each relation $r$ as the ratio between the number of different entities which occur as head and the number of triples in which the relation occurs:

$$fun(r) = \frac{|\{h \mid (h, r, t) \in T\}|}{|\{(h, t) \mid (h, r, t) \in T\}|}$$

Analogously, the inverse functionality $ifun(r)$ is obtained by replacing the numerator with the number of different tail entities. The final adjacency matrix is obtained as

$$a_{ij} = \sum_{(e_i, r, e_j) \in T} ifun(r) + \sum_{(e_j, r, e_i) \in T} fun(r).$$
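In code, this weighted adjacency can be built as follows (a dense sketch of our reading of the above formulas; all identifiers are ours):

```python
from collections import defaultdict
import numpy as np

def gcn_align_adjacency(triples, num_entities):
    # triples: iterable of (head, relation, tail) integer id triples.
    heads, tails, count = defaultdict(set), defaultdict(set), defaultdict(int)
    for h, r, t in triples:
        heads[r].add(h)
        tails[r].add(t)
        count[r] += 1
    fun = {r: len(heads[r]) / count[r] for r in count}   # functionality
    ifun = {r: len(tails[r]) / count[r] for r in count}  # inverse functionality
    A = np.zeros((num_entities, num_entities))
    for h, r, t in triples:
        A[h, t] += ifun[r]  # edge along the triple direction
        A[t, h] += fun[r]   # reverse direction
    return A
```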
In addition, each entity is also equipped with attributes, which are combined into a graph attribute matrix. Analogously to the structural features, the attributes are processed by a GCN for each graph, with convolution weights shared across the graphs, resulting in the attribute embeddings $H^a$.
The attribute and structure GCNs are optimized separately using SGD. As loss function, a margin-based ranking loss is used, shown here exemplarily for the structure embeddings:

$$\mathcal{L} = \sum_{(l, r) \in A} \; \sum_{(l', r') \in N_{(l,r)}} \left[\, \lVert h^s_l - h^s_r \rVert_1 + \gamma - \lVert h^s_{l'} - h^s_{r'} \rVert_1 \,\right]_+$$

where $[x]_+ = \max\{0, x\}$, $h^s_e$ denotes the structure embedding of entity $e$, and the margin $\gamma$ is a hyperparameter chosen separately for structure and attribute embeddings. $N_{(l,r)}$ denotes a set of negative samples constructed by replacing either the left or the right entity with a random entity from the same graph.
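A numpy sketch of this loss with on-the-fly negative sampling follows; the margin and the number of negatives are example values, not the tuned settings from Table 4:

```python
import numpy as np

rng = np.random.default_rng(0)

def margin_rank_loss(h_l, h_r, pos, gamma=3.0, num_neg=5):
    # h_l: (n_l, d), h_r: (n_r, d) embeddings of the two graphs,
    # pos: (m, 2) array of aligned (left, right) index pairs (seed alignment).
    d_pos = np.abs(h_l[pos[:, 0]] - h_r[pos[:, 1]]).sum(axis=1)  # L1 distances
    loss = 0.0
    for _ in range(num_neg):
        neg = pos.copy()
        corrupt_left = rng.random(len(pos)) < 0.5  # corrupt either side at random
        neg[corrupt_left, 0] = rng.integers(len(h_l), size=corrupt_left.sum())
        neg[~corrupt_left, 1] = rng.integers(len(h_r), size=(~corrupt_left).sum())
        d_neg = np.abs(h_l[neg[:, 0]] - h_r[neg[:, 1]]).sum(axis=1)
        loss += np.maximum(d_pos + gamma - d_neg, 0.0).mean()  # hinge on the margin
    return loss / num_neg
```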
In order to compare two nodes from both graphs, the distance between their embeddings is used, with each term normalised by its dimensionality:

$$d(l, r) = \beta \cdot \frac{\lVert h^s_l - h^s_r \rVert_1}{d_s} + (1 - \beta) \cdot \frac{\lVert h^a_l - h^a_r \rVert_1}{d_a}$$

Here, $\beta$ is a hyperparameter for the trade-off between structural and attribute similarity, and $d_s$, $d_a$ are the dimensionalities of the structure and attribute embeddings.
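As a small illustration (the value of $\beta$ below is an arbitrary example, not a tuned setting):

```python
import numpy as np

def combined_distance(hs_l, hs_r, ha_l, ha_r, beta=0.9):
    # hs_*: structure embeddings, ha_*: attribute embeddings of two nodes.
    d_s = np.abs(hs_l - hs_r).sum(axis=-1) / hs_l.shape[-1]  # normalised by d_s
    d_a = np.abs(ha_l - ha_r).sum(axis=-1) / ha_l.shape[-1]  # normalised by d_a
    return beta * d_s + (1.0 - beta) * d_a
```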
3.1 Implementation Differences
The code published by the authors differs in several aspects from the architecture described in the paper. Most notably, the implementation omits the convolution weights $W^{(i)}$ in the GCN layers. Together with the initialization of the node embeddings, this turns out to have a substantial effect on performance, which we quantify in Section 4.
4 Experiments
In initial experiments, we were able to reproduce the results reported in the paper using the implementation provided by the authors. Moreover, we are able to reproduce the results using our own implementation, with settings adjusted to match the authors' code. In addition, we replaced the adjacency matrix based on functionality and inverse functionality by a simpler version, where $a_{ij} = 1$ if and only if the entities $e_i$ and $e_j$ are connected by at least one triple (in either direction). We additionally use the row normalisation $\hat{D}^{-1}\hat{A}$ instead of the symmetric normalisation. In total, we see no difference in performance between our simplified adjacency matrix and the authors' one. We identified two aspects which affect the model's performance: not using convolutional weights, and normalizing the variance when initializing node embeddings. We provide empirical evidence for this finding across numerous datasets.
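For reference, a numpy sketch of this simplified adjacency construction (our code, matching the description above):

```python
import numpy as np

def simple_adjacency(triples, num_entities):
    # a_ij = 1 iff e_i and e_j are connected by any triple, in either direction.
    A = np.zeros((num_entities, num_entities))
    for h, _, t in triples:
        A[h, t] = A[t, h] = 1.0
    A_hat = A + np.eye(num_entities)                 # add self-loops
    return A_hat / A_hat.sum(axis=1, keepdims=True)  # row normalisation D^-1 A
```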
hyperparameter | abbrev. | value range |
---|---|---|
optimizer | opt | {Adam, SGD} |
learning rate | lr | {0.1, 0.5, 1, 10, 20} |
number of layers | #layers | {1, 2, 3} |
number of negative samples | #neg | {5, 50, 100} |
number of epochs | #epochs | {10, 500, 2000, 3000} |
Weights | no | no | yes | yes
---|---|---|---|---
Variance Emb. Init. | 1 | $d^{-1/2}$ | 1 | $d^{-1/2}$
#epochs | 2,000 | 3,000 | 2,000 | 2,000
#neg | 50 | 100 | 50 | 50
#layers | 2 | 2 | 3 | 2
lr | 1 | 1 | 1 | 1
opt | adam | sgd | adam | adam
Weights | | no | no | yes | yes
---|---|---|---|---|---
Variance Emb. Init. | | 1 | $d^{-1/2}$ | 1 | $d^{-1/2}$
H@1 | | | | |
DBP15K (full) | fr-en | 31.51 ± 0.16 | 27.64 ± 0.22 | 21.82 ± 0.39 | 16.73 ± 0.59
 | ja-en | 33.26 ± 0.10 | 29.06 ± 0.23 | 26.21 ± 0.33 | 20.78 ± 0.16
 | zh-en | 31.15 ± 0.15 | 22.55 ± 0.27 | 24.96 ± 0.71 | 18.85 ± 0.99
DBP15K (JAPE) | fr-en | 45.37 ± 0.13 | 41.03 ± 0.13 | 35.36 ± 0.33 | 30.50 ± 0.38
 | ja-en | 45.53 ± 0.18 | 40.29 ± 0.09 | 35.81 ± 0.53 | 31.46 ± 0.15
 | zh-en | 43.30 ± 0.12 | 39.37 ± 0.20 | 33.61 ± 0.49 | 29.94 ± 0.35
DWY100K | wd | 58.50 ± 0.05 | 54.07 ± 0.05 | 50.13 ± 0.11 | 38.85 ± 0.31
 | yg | 72.82 ± 0.06 | 67.06 ± 0.03 | 67.36 ± 0.10 | 60.67 ± 0.30
WK3l-120K | en-de | 10.10 ± 0.03 | 9.17 ± 0.05 | 9.02 ± 0.17 | 6.75 ± 0.12
 | en-fr | 8.28 ± 0.03 | 7.38 ± 0.03 | 7.26 ± 0.11 | 5.07 ± 0.16
WK3l-15K | en-de | 16.57 ± 0.12 | 14.41 ± 0.23 | 17.43 ± 0.38 | 12.66 ± 0.30
 | en-fr | 17.07 ± 0.15 | 16.16 ± 0.16 | 15.98 ± 0.16 | 12.41 ± 0.18
H@10 | | | | |
DBP15K (full) | fr-en | 68.38 ± 0.32 | 63.41 ± 0.14 | 59.26 ± 0.55 | 48.55 ± 0.92
 | ja-en | 68.22 ± 0.09 | 61.95 ± 0.17 | 61.12 ± 0.51 | 49.56 ± 0.38
 | zh-en | 67.46 ± 0.11 | 56.03 ± 0.21 | 59.07 ± 1.10 | 50.32 ± 1.52
DBP15K (JAPE) | fr-en | 82.48 ± 0.08 | 79.11 ± 0.07 | 74.71 ± 0.27 | 69.72 ± 0.36
 | ja-en | 79.77 ± 0.14 | 75.13 ± 0.20 | 73.05 ± 0.52 | 67.18 ± 0.28
 | zh-en | 77.63 ± 0.05 | 73.66 ± 0.28 | 71.16 ± 0.17 | 66.22 ± 0.51
DWY100K | wd | 86.26 ± 0.05 | 81.30 ± 0.03 | 79.65 ± 0.20 | 69.73 ± 0.25
 | yg | 92.13 ± 0.04 | 87.57 ± 0.04 | 88.64 ± 0.09 | 83.76 ± 0.27
WK3l-120K | en-de | 27.13 ± 0.02 | 24.92 ± 0.03 | 25.49 ± 0.26 | 20.83 ± 0.29
 | en-fr | 23.73 ± 0.04 | 21.57 ± 0.05 | 22.16 ± 0.24 | 16.31 ± 0.35
WK3l-15K | en-de | 42.43 ± 0.13 | 38.63 ± 0.17 | 45.24 ± 0.47 | 37.03 ± 0.30
 | en-fr | 49.68 ± 0.15 | 48.18 ± 0.15 | 47.64 ± 0.53 | 41.87 ± 0.36
MR | | | | |
DBP15K (full) | fr-en | 203.90 ± 3.80 | 262.24 ± 3.23 | 123.09 ± 15.43 | 208.00 ± 12.04
 | ja-en | 206.17 ± 4.21 | 358.53 ± 3.65 | 138.80 ± 12.87 | 238.24 ± 24.09
 | zh-en | 168.80 ± 2.59 | 149.08 ± 2.70 | 279.49 ± 38.78 | 206.36 ± 17.60
DBP15K (JAPE) | fr-en | 109.64 ± 1.56 | 117.59 ± 2.91 | 130.75 ± 8.48 | 133.14 ± 7.09
 | ja-en | 144.81 ± 1.89 | 195.19 ± 3.44 | 146.42 ± 6.45 | 221.92 ± 12.22
 | zh-en | 181.37 ± 4.05 | 215.23 ± 4.53 | 172.05 ± 12.72 | 236.72 ± 2.84
DWY100K | wd | 277.08 ± 8.28 | 460.32 ± 9.17 | 500.61 ± 24.10 | 563.29 ± 28.92
 | yg | 49.32 ± 2.71 | 102.50 ± 3.69 | 105.52 ± 4.63 | 67.71 ± 3.82
WK3l-120K | en-de | 2753.75 ± 6.69 | 2280.31 ± 8.97 | 2843.96 ± 53.29 | 2289.02 ± 36.71
 | en-fr | 4438.81 ± 9.29 | 4110.23 ± 7.90 | 4551.39 ± 55.29 | 4007.91 ± 59.01
WK3l-15K | en-de | 247.74 ± 1.09 | 233.29 ± 2.66 | 263.16 ± 6.75 | 197.40 ± 5.39
 | en-fr | 196.16 ± 1.09 | 176.32 ± 1.03 | 249.77 ± 7.71 | 184.72 ± 3.32
MRR | | | | |
DBP15K (full) | fr-en | 43.59 ± 0.08 | 39.30 ± 0.18 | 33.83 ± 0.46 | 27.03 ± 0.72
 | ja-en | 44.68 ± 0.06 | 39.92 ± 0.20 | 37.59 ± 0.37 | 30.31 ± 0.19
 | zh-en | 43.09 ± 0.10 | 33.55 ± 0.19 | 36.15 ± 0.80 | 29.21 ± 1.16
DBP15K (JAPE) | fr-en | 57.95 ± 0.10 | 53.78 ± 0.05 | 48.31 ± 0.26 | 43.37 ± 0.33
 | ja-en | 57.14 ± 0.13 | 51.96 ± 0.07 | 48.03 ± 0.55 | 43.30 ± 0.15
 | zh-en | 54.89 ± 0.09 | 50.88 ± 0.15 | 45.97 ± 0.39 | 41.93 ± 0.40
DWY100K | wd | 68.33 ± 0.03 | 63.68 ± 0.04 | 60.50 ± 0.14 | 49.56 ± 0.30
 | yg | 79.74 ± 0.04 | 74.29 ± 0.03 | 74.93 ± 0.09 | 68.76 ± 0.28
WK3l-120K | en-de | 16.05 ± 0.03 | 14.73 ± 0.03 | 14.73 ± 0.21 | 11.70 ± 0.18
 | en-fr | 13.65 ± 0.02 | 12.34 ± 0.02 | 12.41 ± 0.16 | 9.04 ± 0.23
WK3l-15K | en-de | 25.40 ± 0.10 | 22.81 ± 0.16 | 26.94 ± 0.38 | 21.06 ± 0.25
 | en-fr | 27.98 ± 0.16 | 26.76 ± 0.13 | 26.55 ± 0.23 | 22.20 ± 0.21
We fix the usage of convolution weights and the variance of the normal distribution from which the embedding vectors are initialized, and optimize the other hyperparameters according to validation H@1 (80/20% train-validation split) on DBP15K (JAPE) zh-en in a large-scale hyperparameter search comprising 1,440 experiments. The hyperparameter grid is given in Table 3, and Table 4 shows the best parameters found for DBP15k (JAPE) zh-en for the four different settings. For each dataset, we subsequently perform a smaller hyperparameter search to fine-tune the learning rate, #epochs, and #layers (again with an 80/20 split). The optimal parameters are given in Table 6 in the appendix. We evaluate the best models on the official test set. Our results regarding Hits@1 (H@1), Hits@10 (H@10), mean rank (MR), and mean reciprocal rank (MRR) are summarised in Table 5.
Node Embedding Initialization
Comparing the columns of Table 5, we can observe the influence of the node embedding initialization. When not using weights, as in the authors' code, choosing a variance of $d^{-1/2}$ actually results in inferior performance in terms of H@1, as compared to using a standard normal distribution. These findings are consistent across datasets.
Convolution Weights
The first column of Table 5 corresponds to the weight usage and initialization settings used in the code for GCN-Align. We achieve slightly better results than published in [32], which we attribute to a more exhaustive parameter search. Interestingly, all but one of the best configurations use the Adam optimizer instead of SGD (cf. Table 4). Adding convolution weights degrades the performance across all datasets and subsets thereof but one, as witnessed by comparing the first two columns with the last two columns.
5 Conclusion
In this work, we reported our experiences when implementing the Knowledge Graph alignment method GCN-Align. We pointed out important differences between the model described in the paper and the actual implementation, and quantified their effects in an ablation study. For future work, we plan to include other entity alignment methods in our framework.
Acknowledgements
This work has been funded by the German Federal Ministry of Education and Research (BMBF) under Grant No. 01IS18036A and by the Bavarian Ministry for Economic Affairs, Infrastructure, Transport and Technology through the Center for Analytics-Data-Applications (ADA-Center) within the framework of "BAYERN DIGITAL II". The authors of this work take full responsibility for its content.
Appendix
Dataset Links
Code Links
Method | Link |
---|---|
MTransE [8] | https://github.com/muhaochen/MTransE |
IPTransE [37] | https://github.com/thunlp/IEAJKE |
JAPE [26] | https://github.com/nju-websoft/JAPE |
KDCoE [7] | https://github.com/muhaochen/MTransE-tf |
BootEA [28] | https://github.com/nju-websoft/BootEA |
SEA [21] | https://github.com/scpei/SEA |
MultiKE [36] | https://github.com/nju-websoft/MultiKE |
AttrE [30] | http://www.ruizhang.info/GKB/gkb.htm |
RSN [13] | https://github.com/nju-websoft/RSN |
GCN-Align [32] | https://github.com/1049451037/GCN-Align |
CL-GNN [34] | https://github.com/syxu828/Crosslingula-KG-Matching |
MuGNN [5] | https://github.com/thunlp/MuGNN |
NAEA [38] | - |
var. emb. init | weights | dataset | subset | #epochs | #layers | lr
---|---|---|---|---|---|---
1 | no | DBP15k (full) | fr-en | 2k | 2 | 1.0
 | | | ja-en | 2k | 3 | 1.0
 | | | zh-en | 2k | 4 | 1.0
 | | DBP15k (JAPE) | fr-en | 2k | 2 | 1.0
 | | | ja-en | 2k | 2 | 1.0
 | | | zh-en | 2k | 2 | 1.0
 | | DWY100k | wd | 2k | 2 | 1.0
 | | | yg | 2k | 2 | 1.0
 | | WK3l-120k | en-de | 2k | 2 | 1.0
 | | | en-fr | 2k | 2 | 1.0
 | | WK3l-15k | en-de | 2k | 2 | 1.0
 | | | en-fr | 2k | 2 | 10.0
 | yes | DBP15k (full) | fr-en | 2k | 4 | 1.0
 | | | ja-en | 2k | 4 | 1.0
 | | | zh-en | 2k | 3 | 1.0
 | | DBP15k (JAPE) | fr-en | 2k | 2 | 10.0
 | | | ja-en | 2k | 3 | 1.0
 | | | zh-en | 2k | 3 | 1.0
 | | DWY100k | wd | 2k | 2 | 1.0
 | | | yg | 2k | 2 | 1.0
 | | WK3l-120k | en-de | 2k | 2 | 1.0
 | | | en-fr | 2k | 2 | 1.0
 | | WK3l-15k | en-de | 2k | 2 | 1.0
 | | | en-fr | 2k | 2 | 1.0
$d^{-1/2}$ | no | DBP15k (full) | fr-en | 3k | 2 | 1.0
 | | | ja-en | 3k | 2 | 1.0
 | | | zh-en | 2k | 4 | 1.0
 | | DBP15k (JAPE) | fr-en | 3k | 2 | 1.0
 | | | ja-en | 2k | 2 | 1.0
 | | | zh-en | 3k | 2 | 1.0
 | | DWY100k | wd | 3k | 2 | 1.0
 | | | yg | 3k | 2 | 1.0
 | | WK3l-120k | en-de | 3k | 2 | 0.5
 | | | en-fr | 3k | 2 | 1.0
 | | WK3l-15k | en-de | 3k | 2 | 0.5
 | | | en-fr | 3k | 2 | 1.0
 | yes | DBP15k (full) | fr-en | 2k | 4 | 1.0
 | | | ja-en | 2k | 4 | 1.0
 | | | zh-en | 2k | 4 | 1.0
 | | DBP15k (JAPE) | fr-en | 2k | 2 | 1.0
 | | | ja-en | 2k | 2 | 1.0
 | | | zh-en | 2k | 2 | 1.0
 | | DWY100k | wd | 2k | 2 | 1.0
 | | | yg | 3k | 2 | 0.5
 | | WK3l-120k | en-de | 2k | 2 | 1.0
 | | | en-fr | 2k | 2 | 1.0
 | | WK3l-15k | en-de | 2k | 2 | 1.0
 | | | en-fr | 2k | 2 | 1.0
Footnotes
- Code: https://github.com/Valentyn1997/kg-alignment-lessons-learned.
- Please note that while [38] does not state explicitly that they use GNNs, their model is very similar to [31].
- Available at http://ws.nju.edu.cn/jape/
- https://github.com/nju-websoft/JAPE/blob/master/data/dbp15k.tar.gz
- In the typical scenario it is not known in advance which entities have a match and which do not. Therefore, the resulting score is too optimistic. We advocate investigating this shortcoming further in future work.
- https://github.com/1049451037/GCN-Align
References
- K. Aberer, K. Choi, N. F. Noy, D. Allemang, K. Lee, L. J. B. Nixon, J. Golbeck, P. Mika, D. Maynard, R. Mizoguchi, G. Schreiber and P. Cudré-Mauroux (Eds.) (2007) The semantic web, 6th international semantic web conference, 2nd asian semantic web conference, ISWC 2007 + ASWC 2007, busan, korea, november 11-15, 2007. Lecture Notes in Computer Science, Vol. 4825, Springer. External Links: Link, Document, ISBN 978-3-540-76297-3 Cited by: 2.
- (2007) DBpedia: A nucleus for a web of open data. See The semantic web, 6th international semantic web conference, 2nd asian semantic web conference, ISWC 2007 + ASWC 2007, busan, korea, november 11-15, 2007, Aberer et al., pp. 722–735. External Links: Link, Document Cited by: §2.2.
- (2013) Translating embeddings for modeling multi-relational data. See Advances in neural information processing systems 26: 27th annual conference on neural information processing systems 2013. proceedings of a meeting held december 5-8, 2013, lake tahoe, nevada, united states, Burges et al., pp. 2787–2795. External Links: Link Cited by: §2.1.
- C. J. C. Burges, L. Bottou, Z. Ghahramani and K. Q. Weinberger (Eds.) (2013) Advances in neural information processing systems 26: 27th annual conference on neural information processing systems 2013. proceedings of a meeting held december 5-8, 2013, lake tahoe, nevada, united states. External Links: Link Cited by: 3.
- (2019) Multi-channel graph neural network for entity alignment. See Proceedings of the 57th conference of the association for computational linguistics, ACL 2019, florence, italy, july 28- august 2, 2019, volume 1: long papers, Korhonen et al., pp. 1452–1461. External Links: Link Cited by: §1, §2.1, §2.2, Table 1, Code Links.
- K. Chaudhuri and R. Salakhutdinov (Eds.) (2019) Proceedings of the 36th international conference on machine learning, ICML 2019, 9-15 june 2019, long beach, california, USA. Proceedings of Machine Learning Research, Vol. 97, PMLR. External Links: Link Cited by: 13.
- (2018) Co-training embeddings of knowledge graphs and entity descriptions for cross-lingual entity alignment. See Proceedings of the twenty-seventh international joint conference on artificial intelligence, IJCAI 2018, july 13-19, 2018, stockholm, sweden, Lang, pp. 3998–4004. External Links: Link, Document Cited by: §2.1, Table 1, Code Links.
- (2017) Multilingual knowledge graph embeddings for cross-lingual knowledge alignment. See Proceedings of the twenty-sixth international joint conference on artificial intelligence, IJCAI 2017, melbourne, australia, august 19-25, 2017, Sierra, pp. 1511–1517. External Links: Link, Document Cited by: §2.1, Table 1, Code Links.
- (2015) CIDR 2015, seventh biennial conference on innovative data systems research, asilomar, ca, usa, january 4-7, 2015, online proceedings. www.cidrdb.org. External Links: Link Cited by: 19.
- C. d’Amato, M. Fernández, V. A. M. Tamma, F. Lécué, P. Cudré-Mauroux, J. F. Sequeda, C. Lange and J. Heflin (Eds.) (2017) The semantic web - ISWC 2017 - 16th international semantic web conference, vienna, austria, october 21-25, 2017, proceedings, part I. Lecture Notes in Computer Science, Vol. 10587, Springer. External Links: Link, Document, ISBN 978-3-319-68287-7 Cited by: 26.
- (2020) Deep graph matching consensus. In International Conference on Learning Representations, External Links: Link Cited by: §1, §2.1.
- (2017) Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1263–1272. Cited by: §1.
- (2019) Learning to exploit long-term relational dependencies in knowledge graphs. See Proceedings of the 36th international conference on machine learning, ICML 2019, 9-15 june 2019, long beach, california, USA, Chaudhuri and Salakhutdinov, pp. 2505–2514. External Links: Link Cited by: §2.1, Table 1, Code Links.
- (2016) Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Cited by: §1.
- A. Korhonen, D. R. Traum and L. Màrquez (Eds.) (2019) Proceedings of the 57th conference of the association for computational linguistics, ACL 2019, florence, italy, july 28- august 2, 2019, volume 1: long papers. Association for Computational Linguistics. External Links: Link, ISBN 978-1-950737-48-2 Cited by: 5, 34.
- S. Kraus (Ed.) (2019) Proceedings of the twenty-eighth international joint conference on artificial intelligence, IJCAI 2019, macao, china, august 10-16, 2019. ijcai.org. External Links: Link, Document Cited by: 36, 38.
- J. Lang (Ed.) (2018) Proceedings of the twenty-seventh international joint conference on artificial intelligence, IJCAI 2018, july 13-19, 2018, stockholm, sweden. ijcai.org. External Links: Link, ISBN 978-0-9992411-2-7 Cited by: 7, 28.
- L. Liu, R. W. White, A. Mantrach, F. Silvestri, J. J. McAuley, R. Baeza-Yates and L. Zia (Eds.) (2019) The world wide web conference, WWW 2019, san francisco, ca, usa, may 13-17, 2019. ACM. External Links: Link, Document, ISBN 978-1-4503-6674-8 Cited by: 21.
- (2015) YAGO3: A knowledge base from multilingual wikipedias. See 9, External Links: Link Cited by: §2.2.
- (2015) A review of relational machine learning for knowledge graphs. Proceedings of the IEEE 104 (1), pp. 11–33. Cited by: §1.
- (2019) Semi-supervised entity alignment via knowledge graph embedding with awareness of degree difference. See The world wide web conference, WWW 2019, san francisco, ca, usa, may 13-17, 2019, Liu et al., pp. 3130–3136. External Links: Link, Document Cited by: Table 1, Table 2, Code Links.
- (2019) Semi-supervised entity alignment via knowledge graph embedding with awareness of degree difference. In The World Wide Web Conference, pp. 3130–3136. Cited by: §2.1.
- E. Riloff, D. Chiang, J. Hockenmaier and J. Tsujii (Eds.) (2018) Proceedings of the 2018 conference on empirical methods in natural language processing, brussels, belgium, october 31 - november 4, 2018. Association for Computational Linguistics. External Links: Link, ISBN 978-1-948087-84-1 Cited by: 32.
- C. Sierra (Ed.) (2017) Proceedings of the twenty-sixth international joint conference on artificial intelligence, IJCAI 2017, melbourne, australia, august 19-25, 2017. ijcai.org. External Links: Link, ISBN 978-0-9992411-0-3 Cited by: 8, 37.
- (2012) Introducing the knowledge graph: things, not strings. Official google blog 5. Cited by: §1.
- (2017) Cross-lingual entity alignment via joint attribute-preserving embedding. See The semantic web - ISWC 2017 - 16th international semantic web conference, vienna, austria, october 21-25, 2017, proceedings, part I, d’Amato et al., pp. 628–644. External Links: Link, Document Cited by: §2.1, §2.2, Table 1, Code Links.
- (2018) Bootstrapping entity alignment with knowledge graph embedding.. In IJCAI, pp. 4396–4402. Cited by: §2.1, §2.2.
- (2018) Bootstrapping entity alignment with knowledge graph embedding. See Proceedings of the twenty-seventh international joint conference on artificial intelligence, IJCAI 2018, july 13-19, 2018, stockholm, sweden, Lang, pp. 4396–4402. External Links: Link, Document Cited by: Table 1, Code Links.
- (2019) The thirty-third AAAI conference on artificial intelligence, AAAI 2019, the thirty-first innovative applications of artificial intelligence conference, IAAI 2019, the ninth AAAI symposium on educational advances in artificial intelligence, EAAI 2019, honolulu, hawaii, usa, january 27 - february 1, 2019. AAAI Press. External Links: Link, ISBN 978-1-57735-809-1 Cited by: 30.
- (2019) Entity alignment between knowledge graphs using attribute embeddings. See 29, pp. 297–304. External Links: Link Cited by: §2.1, Table 1, Code Links.
- (2017) Graph attention networks. arXiv preprint arXiv:1710.10903. Cited by: footnote 2.
- (2018) Cross-lingual knowledge graph alignment via graph convolutional networks. See Proceedings of the 2018 conference on empirical methods in natural language processing, brussels, belgium, october 31 - november 4, 2018, Riloff et al., pp. 349–357. External Links: Link Cited by: §1, §2.1, §2.2, Table 1, §3, §4, Code Links.
- Wikidata. Note: https://www.wikidata.org/ Cited by: §2.2.
- (2019) Cross-lingual knowledge graph alignment via graph matching neural network. See Proceedings of the 57th conference of the association for computational linguistics, ACL 2019, florence, italy, july 28- august 2, 2019, volume 1: long papers, Korhonen et al., pp. 3156–3161. External Links: Link Cited by: Table 1, Code Links.
- (2019) Cross-lingual knowledge graph alignment via graph matching neural network. arXiv preprint arXiv:1905.11605. Cited by: §1, §2.1.
- (2019) Multi-view knowledge graph embedding for entity alignment. See Proceedings of the twenty-eighth international joint conference on artificial intelligence, IJCAI 2019, macao, china, august 10-16, 2019, Kraus, pp. 5429–5435. External Links: Link, Document Cited by: §2.1, Table 1, Code Links.
- (2017) Iterative entity alignment via joint knowledge embeddings. See Proceedings of the twenty-sixth international joint conference on artificial intelligence, IJCAI 2017, melbourne, australia, august 19-25, 2017, Sierra, pp. 4258–4264. External Links: Link, Document Cited by: §2.1, §2.2, Table 1, Code Links.
- (2019) Neighborhood-aware attentional representation for multilingual knowledge graphs. See Proceedings of the twenty-eighth international joint conference on artificial intelligence, IJCAI 2019, macao, china, august 10-16, 2019, Kraus, pp. 1943–1949. External Links: Link, Document Cited by: §1, §2.1, Table 1, Code Links, footnote 2.
