Knowledge Graph Entity Alignment with Graph Convolutional Networks: Lessons Learned

Abstract

In this work, we focus on the problem of entity alignment in Knowledge Graphs (KGs) and report on our experiences when applying a Graph Convolutional Network (GCN) based model to this task. Variants of GCN are used in multiple state-of-the-art approaches, and therefore it is important to understand the specifics and limitations of GCN-based models. Despite serious efforts, we were not able to fully reproduce the results from the original paper, and after a thorough audit of the code provided by its authors, we concluded that their implementation differs from the architecture described in the paper. In addition, several tricks are required to make the model work, and some of them are not very intuitive. We provide an extensive ablation study to quantify the effects these tricks and changes of architecture have on the final performance. Furthermore, we examine current evaluation approaches and systematize available benchmark datasets. We believe that people interested in KG matching might profit from our work, as well as novices entering the field.1

1 Introduction

The success of information retrieval in a given task critically depends on the quality of the underlying data. Moreover, in many domains knowledge bases are spread across various data sources [20], and it is crucial to be able to combine information from different sources. In this work, we focus on knowledge bases in the form of Knowledge Graphs (KGs), which are particularly suited for information retrieval [25]. Joining information from different KGs is non-trivial, as there is no unified schema or vocabulary. The goal of the entity alignment task is to overcome this problem by learning a matching between entities in different KGs. In the typical setting some of the alignments are known in advance (seed alignments), and the task is therefore supervised. More formally, we are given two graphs $G_L = (V_L, E_L)$ and $G_R = (V_R, E_R)$ together with a seed alignment $\mathcal{A} \subseteq V_L \times V_R$. It is commonly assumed that an entity $l \in V_L$ can match at most one entity $r \in V_R$. The goal is thus to infer alignments for the remaining nodes only.

Graph Convolutional Networks (GCNs) [14, 12], which have recently become increasingly popular, are at the core of state-of-the-art methods for entity alignment in KGs [32, 5, 35, 38, 11]. In this paper, we thoroughly analyze one of the first GCN-based entity alignment methods, GCN-Align [32]. Since the other methods we are studying can be considered extensions of this first paper and have a similar architecture, our goal is to understand the importance of its individual components and architecture choices. In summary, our contribution is as follows:

  1. We investigate the reproducibility of the published results of a recent GCN-based method for entity alignment and uncover differences between the method’s description in the paper and the authors’ implementation.

  2. We perform an ablation study to demonstrate the individual components’ contribution.

  3. We apply the method to numerous additional datasets of different sizes to investigate the consistency of results across datasets.

2 Related work

In this section we review previous work on entity alignment for Knowledge Graphs and survey the available datasets and the current evaluation process. We believe this is useful for practitioners, since we uncovered some pitfalls, especially in the implementation of evaluation scores and the selection of datasets for comparison. An overview of methods, datasets and metrics is provided in Table 1.

Method Datasets Metrics Code
MTransE [8] WK3l-15K, WK3l-120K, CN3l H@10(, MR) yes
IPTransE [37] DFB-{1,2,3} H@{1,10}, MR yes
JAPE [26] DBP15K(JAPE) H@{1,10,50}, MR yes
KDCoE [7] WK3l-60K H@{1,10}, MR yes
BootEA [28] DBP15K(JAPE), DWY100K H@{1,10}, MRR yes
SEA [21] WK3l-15K, WK3l-120K H@{1,5,10}, MRR yes
MultiKE [36] DWY100K H@{1,10}, MR, MRR yes
AttrE [30] DBP-LGD, DBP-GEO, DBP-YAGO H@{1,10}, MR yes
RSN [13] custom DBP15K, DWY100K H@{1,10}, MRR yes
GCN-Align [32] DBP15K(JAPE) H@{1,10,50} yes
CL-GNN [34] DBP15K(JAPE) H@{1,10} yes
MuGNN [5] DBP15K(JAPE), DWY100K H@{1,10}, MRR yes
NAEA [38] DBP15K(JAPE), DWY100K H@{1,10}, MRR no
Table 1: Overview of related work in the field of entity alignment for knowledge graphs with their used datasets and metrics.

2.1 Methods

While the problem of entity alignment in Knowledge Graphs has historically been tackled by devising vocabularies which are as broad as possible and establishing them as a standard, recent approaches take a more data-driven view. Early methods use classical knowledge graph link prediction models such as TransE [3] to embed the entities of the individual knowledge graphs using an intra-KG link prediction loss, and differ in what they do with the aligned entities. For instance, MTransE [8] learns a linear transformation between the embedding spaces of the individual graphs using an $L_2$ loss. BootEA [27] adopts a bootstrapping approach, iteratively labeling the most likely alignments and utilizing them for further training. In addition to the alignment loss, embeddings of aligned entities are swapped regularly to calibrate the embedding spaces against each other. SEA [22] learns a mapping between the embedding spaces in both directions and additionally adds a cycle-consistency loss: the distance between the original embedding of an entity and its representation after being translated to the other space and back is penalized. IPTransE [37] embeds both KGs into the same embedding space and uses a margin-based loss to enforce the embeddings of aligned entities to become similar. RSN [13] generates sequences using different types of random walks which can move between graphs when visiting aligned entities; the generated sequences are fed to an adapted recurrent model. JAPE [26], KDCoE [7], MultiKE [36] and AttrE [30] utilize attributes available for some entities and additional information such as the names of entities and relationships. Graph Neural Network (GNN) based models [32, 5, 35, 38, 11]2 have in common that they use a GNN to create node representations by aggregating node representations together with the representations of their neighbors. Most GNN approaches do not distinguish between different relations and either consider all neighbors equally [32, 35, 11] or use attention [5] to weight the representations of the neighbors for the aggregation.

2.2 Datasets

Dataset Subset Graph Triples Entities Relations Alignments
DBP15k (full) fr-en fr 192,191 66,858 1,379 15,000
en 278,590 105,889 2,209
ja-en ja 164,373 65,744 2,043 15,000
en 233,319 95,680 2,096
zh-en zh 153,929 66,469 2,830 15,000
en 237,674 98,125 2,317
DBP15k (JAPE) fr-en fr 105,998 19,661 903 15,000
en 115,722 19,993 1,208
ja-en ja 77,214 19,814 1,299 15,000
en 93,484 19,780 1,153
zh-en zh 70,414 19,388 1,701 15,000
en 95,142 19,572 1,323
WK3l-15k en-de en 209,041 15,127 1,841 1,289 (10,383)
de 144,244 14,603 596 1,140 (10,383)
en-fr en 203,356 15,170 2,228 2,498 (18,024)
fr 169,329 15,393 2,422 3,812 (18,024)
WK3l-120k en-de en 624,659 67,650 2,393 6,173 (50,280)
de 389,554 61,942 861 4,820 (50,280)
en-fr en 1,375,406 119,749 3,109 36,749 (87,836)
fr 760,497 118,592 2,336 36,013 (87,836)
DWY-100k dbp-wd dbp 463,294 100,000 330 100,000
wd 448,774 100,000 220
dbp-yg dbp 428,952 100,000 302 100,000
yg 502,563 100,000 31
Table 2: Overview of the used datasets with their sizes in number of triples (edges), entities (nodes), relations (distinct edge types) and alignments. For WK3l, the alignment is provided as a directed mapping on the entity level. However, there are additional triple alignments. Following common practice, e.g. [21], we can assume that an alignment should be symmetric and extract further entity alignments from the triple alignments. Doing so, we obtain the number of alignments given in brackets.

The datasets used by entity alignment methods generally derive from large-scale open data sources such as DBpedia [2], YAGO [19], or Wikidata [33]. While there is the DWY-100k dataset, which comprises 100k aligned entities across the three aforementioned individual knowledge graphs, most of the datasets, such as DBP15k or WK3l, derive from a single multi-lingual database. There, subsets are formed according to a specific language, and entities which occur in multiple languages and are linked accordingly are used as alignments.

As an interesting observation, we found that all papers which evaluate on DBP15k do not evaluate on the full DBP15k dataset3 (which we refer to as DBP15k (full)), but rather use a smaller subset provided by the authors of JAPE [26] in their GitHub repository4, which we call DBP15k (JAPE). The smaller subsets were created by selecting a portion of entities (around 20k of 100k) which are popular, i.e. appear in many triples as head or tail. The number of aligned entities stays the same (15k). As the JAPE paper only reports the dataset statistics of the larger dataset and does not mention the reduction, subsequent papers also report the statistics of the larger dataset, although their experiments use the smaller variant [26, 27, 32, 5, 37].

2.3 Scores

It is common practice to only consider the entities taking part in the test alignment as potential matching candidates. Although we argue that ignoring entities exclusive to a single graph as potential candidates does not reflect the real use case well5, we follow this evaluation scheme for our experiments to maintain comparability.

In the following description of evaluation measures, we focus only on the left-to-right case of aligning a node $l \in V_L$ with ground truth alignment $(l, r) \in \mathcal{A}$; the right-to-left alignment is handled analogously. Let $C_R \subseteq V_R$ denote the set of matching candidates in the right graph. For a node $l$, the entity alignment model generates a score for each matching candidate $c \in C_R$. Afterwards, the candidates are sorted according to their score, and the rank is computed as the (1-based) index of the ground truth match $r$ in this sorted list. The mean rank (MR) is simply the mean over the ranks of all alignments.

The mean reciprocal rank (MRR) is the mean over all reciprocal ranks.

It is naturally bounded between 0 and 1, where 1 corresponds to a perfect score. Moreover, its value is dominated by small ranks and less sensitive to larger ones. The hits at $k$ (H@k) is the percentage of alignments for which the rank is at most $k$, i.e. it is equivalent to the recall at $k$.
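To make these definitions concrete, the following sketch computes all three metrics from a score matrix; the names (`scores`, `true_cols`) and the assumption that higher scores are better are ours, not taken from any particular implementation. Note that the handling of ties (strictly greater vs. greater-or-equal than the true match's score) is exactly the kind of detail in which evaluation scripts tend to diverge.

```python
import numpy as np

def rank_metrics(scores: np.ndarray, true_cols: np.ndarray, ks=(1, 10, 50)):
    """Compute MR, MRR and H@k from a (num_left, num_candidates) score matrix.

    scores[i, j]  -- similarity of left test entity i to candidate j (higher is better)
    true_cols[i]  -- column index of the ground-truth match of entity i
    """
    # 1-based rank: one plus the number of candidates scored strictly higher
    # than the ground-truth match.
    true_scores = scores[np.arange(len(scores)), true_cols]
    ranks = 1 + (scores > true_scores[:, None]).sum(axis=1)

    metrics = {
        "MR": ranks.mean(),
        "MRR": (1.0 / ranks).mean(),
    }
    for k in ks:
        metrics[f"H@{k}"] = (ranks <= k).mean()
    return metrics
```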

3 Method

GCN-Align [32] is a GCN-based approach which embeds all entities from both graphs into a common embedding space. Each entity $i$ is associated with structural features $h_i \in \mathbb{R}^d$, which are initialized randomly and updated during training. The features of all entities in a single graph are combined into the feature matrix $H$. Subsequently, a two-layer GCN is applied. A single GCN layer is described by

$$H^{(i+1)} = \sigma\left(\hat{D}^{-1/2} \hat{A} \hat{D}^{-1/2} H^{(i)} W^{(i)}\right)$$

with $\hat{A} = A + I$, where $A$ is the adjacency matrix, and $\hat{D}$ is the diagonal node degree matrix of $\hat{A}$. The input of the first layer is set to $H^{(0)} = H$, and $\sigma$ is a non-linear activation function: for the first layer, $\sigma = \mathrm{ReLU}$; the second layer uses the identity. The output of the second layer is considered as the structural embedding, denoted by $H^s$. Both graphs are equipped with their own node features, but the convolution weights $W^{(i)}$ are shared across the graphs.

The adjacency matrix is derived from the knowledge graph by first computing a score, called functionality, for each relation $r$ as the ratio between the number of different entities which occur as head and the number of triples in which the relation occurs:

$$fun(r) = \frac{\#\{h \mid \exists t: (h, r, t) \in T\}}{\#\{(h, t) \mid (h, r, t) \in T\}}$$

Analogously, the inverse functionality $ifun(r)$ is obtained by replacing the numerator by the number of different tail entities. The final adjacency matrix is obtained as

$$a_{ij} = \sum_{(e_i, r, e_j) \in T} ifun(r) + \sum_{(e_j, r, e_i) \in T} fun(r)$$
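As a sketch, the construction of this weighted adjacency matrix could look as follows; the triple format `(head, relation, tail)` with consecutive integer IDs and the function name are our own assumptions:

```python
import numpy as np

def build_adjacency(triples, num_entities):
    """Build the functionality-weighted adjacency matrix described above.

    triples: list of (head, relation, tail) integer triples of one KG.
    """
    heads, tails, counts = {}, {}, {}
    for h, r, t in triples:
        heads.setdefault(r, set()).add(h)
        tails.setdefault(r, set()).add(t)
        counts[r] = counts.get(r, 0) + 1

    fun = {r: len(heads[r]) / counts[r] for r in counts}   # #distinct heads / #triples
    ifun = {r: len(tails[r]) / counts[r] for r in counts}  # #distinct tails / #triples

    a = np.zeros((num_entities, num_entities))
    for h, r, t in triples:
        a[h, t] += ifun[r]  # edge in triple direction, weighted by inverse functionality
        a[t, h] += fun[r]   # reverse direction, weighted by functionality
    return a
```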

In addition, each entity is also equipped with attributes, which are combined into a graph attribute matrix. Analogously, the attributes are processed by a GCN for each graph, with convolution weights shared across the graphs, resulting in the attribute embeddings $H^a$.

The attribute and structure GCNs are optimized separately using SGD. As loss function, a margin-based ranking loss is used, shown here exemplarily for the structure embeddings:

$$\mathcal{L} = \sum_{(l, r) \in \mathcal{A}} \; \sum_{(l', r') \in \mathcal{N}(l, r)} \left[ \lVert h^s_l - h^s_r \rVert_1 + \gamma - \lVert h^s_{l'} - h^s_{r'} \rVert_1 \right]_+$$

where $[x]_+ = \max(0, x)$, and the margin $\gamma$ is a hyperparameter chosen separately for structure and attribute embeddings. $\mathcal{N}(l, r)$ denotes a set of negative samples constructed by replacing either the left or the right entity with a random entity from the same graph.
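A forward-pass sketch of this loss follows; the array layout of the positive and negative pairs is our own choice, and a real implementation would compute this with autograd on mini-batches:

```python
import numpy as np

def margin_rank_loss(emb_l, emb_r, pos_pairs, neg_pairs, gamma=3.0):
    """Margin-based ranking loss over aligned pairs and corrupted pairs.

    pos_pairs: (P, 2) indices (l, r) of seed alignments
    neg_pairs: (P, K, 2) indices obtained by replacing l or r with a random
               entity from the same graph (K negatives per positive)
    """
    def l1(a, b):
        return np.abs(a - b).sum(axis=-1)

    pos = l1(emb_l[pos_pairs[:, 0]], emb_r[pos_pairs[:, 1]])      # (P,)
    neg = l1(emb_l[neg_pairs[..., 0]], emb_r[neg_pairs[..., 1]])  # (P, K)
    # hinge: positive distance plus margin should undercut negative distance
    return np.maximum(pos[:, None] + gamma - neg, 0.0).sum()
```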

In order to compare two nodes from the different graphs, the $L_1$ distance between their embeddings is used, normalized by the dimensionality:

$$D(l, r) = \beta \cdot \frac{\lVert h^s_l - h^s_r \rVert_1}{d_s} + (1 - \beta) \cdot \frac{\lVert h^a_l - h^a_r \rVert_1}{d_a}$$

Here, $\beta$ is a hyperparameter controlling the tradeoff between structural and attribute similarity.
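As a sketch, with `beta` playing the role of the tradeoff hyperparameter:

```python
import numpy as np

def combined_distance(hs_l, hs_r, ha_l, ha_r, beta=0.9):
    """Weighted combination of structural and attribute L1 distances,
    each normalized by its embedding dimensionality."""
    ds, da = hs_l.shape[-1], ha_l.shape[-1]
    d_struct = np.abs(hs_l - hs_r).sum(-1) / ds
    d_attr = np.abs(ha_l - ha_r).sum(-1) / da
    return beta * d_struct + (1.0 - beta) * d_attr
```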

3.1 Implementation Differences

The code6 provided by the authors differs in a few aspects from the method described in the paper. First, instead of using the full DBP15k dataset with the dataset sizes reported in the paper, a smaller version is used. Second, when computing the adjacency matrix, $fun(r)$ and $ifun(r)$ are clamped to at least 0.3. Third, the node features are always normalized to unit Euclidean length before being passed into the network. Finally, there are no convolution weights. This fact is particularly interesting, as it means that the whole GCN does not contain a single parameter, but is just a fixed function applied to the learned node embeddings.
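Under this reading, the "network" reduces to unit-normalization followed by repeated neighborhood averaging. A sketch of what the code effectively computes, with our own function name and the weight matrices dropped:

```python
import numpy as np

def weightless_gcn(node_emb, adj, num_layers=2):
    """Parameter-free two-layer 'GCN': no weight matrices, only
    normalization and neighborhood aggregation."""
    # unit-normalize the learned node embeddings before the network
    h = node_emb / np.linalg.norm(node_emb, axis=1, keepdims=True)

    a_hat = adj + np.eye(adj.shape[0])             # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    norm = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]  # D^-1/2 (A+I) D^-1/2

    for layer in range(num_layers):
        h = norm @ h
        if layer == 0:                              # ReLU on the first layer only
            h = np.maximum(h, 0.0)
    return h
```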

4 Experiments

In initial experiments we were able to reproduce the results reported in the paper using the implementation provided by the authors. Moreover, we were able to reproduce the results using our own implementation with settings adjusted to the authors' code. In addition, we replaced the adjacency matrix based on functionality and inverse functionality by a simpler version, where $a_{ij} = 1$ if and only if some triple connects $e_i$ and $e_j$ (in either direction). We additionally use the row-sum normalization $\hat{D}^{-1}\hat{A}$ instead of the symmetric normalization $\hat{D}^{-1/2}\hat{A}\hat{D}^{-1/2}$. In total, we see no difference in performance between our simplified adjacency matrix and the authors' one. We identified two aspects which affect the model's performance: not using convolution weights, and normalizing the variance when initializing node embeddings. We provide empirical evidence for this finding across numerous datasets.
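A sketch of this simplified variant (again with hypothetical names):

```python
import numpy as np

def simple_adjacency(triples, num_entities):
    """Binary adjacency with self-loops and row-sum normalization D^-1 (A + I)."""
    a = np.eye(num_entities)
    for h, _, t in triples:
        a[h, t] = a[t, h] = 1.0
    return a / a.sum(axis=1, keepdims=True)
```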

hyperparameter abbrev. value range
optimizer opt {Adam, SGD}
learning rate lr {0.1, 0.5, 1, 10, 20}
number of layers #layers {1, 2, 3}
number of negative samples #neg {5, 50, 100}
number of epochs #epochs {10, 500, 2000, 3000}
Table 3: Hyperparameter grid used for large-scale hyperparameter search on DBP15K (JAPE) zh-en.
Weights no no yes yes
Variance Emb. Init. 1 $d^{-1}$ 1 $d^{-1}$
#epochs 2,000 3,000 2,000 2,000
#neg 50 100 50 50
#layers 2 2 3 2
lr 1 1 1 1
opt adam sgd adam adam
Table 4: Optimal Hyperparameters found for DBP15k (JAPE), zh-en with and without convolution weights, and with two different embedding initialization variances.
Weights no no yes yes
Variance Emb. Init. 1 $d^{-1}$ 1 $d^{-1}$
H@1
DBP15K (full) fr-en 31.51 ± 0.16 27.64 ± 0.22 21.82 ± 0.39 16.73 ± 0.59
ja-en 33.26 ± 0.10 29.06 ± 0.23 26.21 ± 0.33 20.78 ± 0.16
zh-en 31.15 ± 0.15 22.55 ± 0.27 24.96 ± 0.71 18.85 ± 0.99
DBP15K (JAPE) fr-en 45.37 ± 0.13 41.03 ± 0.13 35.36 ± 0.33 30.50 ± 0.38
ja-en 45.53 ± 0.18 40.29 ± 0.09 35.81 ± 0.53 31.46 ± 0.15
zh-en 43.30 ± 0.12 39.37 ± 0.20 33.61 ± 0.49 29.94 ± 0.35
DWY100K wd 58.50 ± 0.05 54.07 ± 0.05 50.13 ± 0.11 38.85 ± 0.31
yg 72.82 ± 0.06 67.06 ± 0.03 67.36 ± 0.10 60.67 ± 0.30
WK3l-120K en-de 10.10 ± 0.03 9.17 ± 0.05 9.02 ± 0.17 6.75 ± 0.12
en-fr 8.28 ± 0.03 7.38 ± 0.03 7.26 ± 0.11 5.07 ± 0.16
WK3l-15K en-de 16.57 ± 0.12 14.41 ± 0.23 17.43 ± 0.38 12.66 ± 0.30
en-fr 17.07 ± 0.15 16.16 ± 0.16 15.98 ± 0.16 12.41 ± 0.18
H@10
DBP15K (full) fr-en 68.38 ± 0.32 63.41 ± 0.14 59.26 ± 0.55 48.55 ± 0.92
ja-en 68.22 ± 0.09 61.95 ± 0.17 61.12 ± 0.51 49.56 ± 0.38
zh-en 67.46 ± 0.11 56.03 ± 0.21 59.07 ± 1.10 50.32 ± 1.52
DBP15K (JAPE) fr-en 82.48 ± 0.08 79.11 ± 0.07 74.71 ± 0.27 69.72 ± 0.36
ja-en 79.77 ± 0.14 75.13 ± 0.20 73.05 ± 0.52 67.18 ± 0.28
zh-en 77.63 ± 0.05 73.66 ± 0.28 71.16 ± 0.17 66.22 ± 0.51
DWY100K wd 86.26 ± 0.05 81.30 ± 0.03 79.65 ± 0.20 69.73 ± 0.25
yg 92.13 ± 0.04 87.57 ± 0.04 88.64 ± 0.09 83.76 ± 0.27
WK3l-120K en-de 27.13 ± 0.02 24.92 ± 0.03 25.49 ± 0.26 20.83 ± 0.29
en-fr 23.73 ± 0.04 21.57 ± 0.05 22.16 ± 0.24 16.31 ± 0.35
WK3l-15K en-de 42.43 ± 0.13 38.63 ± 0.17 45.24 ± 0.47 37.03 ± 0.30
en-fr 49.68 ± 0.15 48.18 ± 0.15 47.64 ± 0.53 41.87 ± 0.36
MR
DBP15K (full) fr-en 203.90 ± 3.80 262.24 ± 3.23 123.09 ± 15.43 208.00 ± 12.04
ja-en 206.17 ± 4.21 358.53 ± 3.65 138.80 ± 12.87 238.24 ± 24.09
zh-en 168.80 ± 2.59 149.08 ± 2.70 279.49 ± 38.78 206.36 ± 17.60
DBP15K (JAPE) fr-en 109.64 ± 1.56 117.59 ± 2.91 130.75 ± 8.48 133.14 ± 7.09
ja-en 144.81 ± 1.89 195.19 ± 3.44 146.42 ± 6.45 221.92 ± 12.22
zh-en 181.37 ± 4.05 215.23 ± 4.53 172.05 ± 12.72 236.72 ± 2.84
DWY100K wd 277.08 ± 8.28 460.32 ± 9.17 500.61 ± 24.10 563.29 ± 28.92
yg 49.32 ± 2.71 102.50 ± 3.69 105.52 ± 4.63 67.71 ± 3.82
WK3l-120K en-de 2753.75 ± 6.69 2280.31 ± 8.97 2843.96 ± 53.29 2289.02 ± 36.71
en-fr 4438.81 ± 9.29 4110.23 ± 7.90 4551.39 ± 55.29 4007.91 ± 59.01
WK3l-15K en-de 247.74 ± 1.09 233.29 ± 2.66 263.16 ± 6.75 197.40 ± 5.39
en-fr 196.16 ± 1.09 176.32 ± 1.03 249.77 ± 7.71 184.72 ± 3.32
MRR
DBP15K (full) fr-en 43.59 ± 0.08 39.30 ± 0.18 33.83 ± 0.46 27.03 ± 0.72
ja-en 44.68 ± 0.06 39.92 ± 0.20 37.59 ± 0.37 30.31 ± 0.19
zh-en 43.09 ± 0.10 33.55 ± 0.19 36.15 ± 0.80 29.21 ± 1.16
DBP15K (JAPE) fr-en 57.95 ± 0.10 53.78 ± 0.05 48.31 ± 0.26 43.37 ± 0.33
ja-en 57.14 ± 0.13 51.96 ± 0.07 48.03 ± 0.55 43.30 ± 0.15
zh-en 54.89 ± 0.09 50.88 ± 0.15 45.97 ± 0.39 41.93 ± 0.40
DWY100K wd 68.33 ± 0.03 63.68 ± 0.04 60.50 ± 0.14 49.56 ± 0.30
yg 79.74 ± 0.04 74.29 ± 0.03 74.93 ± 0.09 68.76 ± 0.28
WK3l-120K en-de 16.05 ± 0.03 14.73 ± 0.03 14.73 ± 0.21 11.70 ± 0.18
en-fr 13.65 ± 0.02 12.34 ± 0.02 12.41 ± 0.16 9.04 ± 0.23
WK3l-15K en-de 25.40 ± 0.10 22.81 ± 0.16 26.94 ± 0.38 21.06 ± 0.25
en-fr 27.98 ± 0.16 26.76 ± 0.13 26.55 ± 0.23 22.20 ± 0.21
Table 5: Ablation study on using convolution weights and different embedding initialization (mean ± std). A detailed description can be found in the text.

We fix the usage of convolution weights and the variance of the normal distribution from which the embedding vectors are initialized, and optimize the remaining hyperparameters according to validation H@1 (80/20% train-validation split) on DBP15K (JAPE) zh-en in a large-scale hyperparameter search comprising 1,440 experiments. The hyperparameter grid is given in Table 3, and Table 4 shows the best parameters found for DBP15k (JAPE) zh-en for the four different settings. For each dataset, we then perform a smaller hyperparameter search to fine-tune the learning rate, #epochs and #layers (again with an 80/20 split); the optimal parameters are given in Table 6 in the appendix. We evaluate the best models on the official test set. Our results regarding Hits@1 (H@1), Hits@10 (H@10), mean rank (MR) and mean reciprocal rank (MRR) are summarized in Table 5.

Node Embedding Initialization

Comparing the columns of Table 5, we can observe the influence of the node embedding initialization: with the authors' setting of not using weights, choosing a variance of $d^{-1}$ actually results in inferior performance in terms of H@1, as compared to using a standard normal distribution. These findings are consistent across datasets.

Convolution Weights

The first column of Table 5 corresponds to the weight usage and initialization settings used in the code for GCN-Align. We achieve slightly better results than published in [32], which we attribute to a more exhaustive parameter search. Interestingly, all best configurations use the Adam optimizer instead of SGD. Adding convolution weights degrades the performance across all datasets and subsets thereof but one, as witnessed by comparing the first two columns with the last two columns.

5 Conclusion

In this work, we reported our experiences when implementing the Knowledge Graph alignment method GCN-Align. We pointed out important differences between the model described in the paper and the actual implementation, and quantified their effects in an ablation study. For future work, we plan to include other methods for entity alignment in our framework.

Acknowledgements

This work has been funded by the German Federal Ministry of Education and Research (BMBF) under Grant No. 01IS18036A and by the Bavarian Ministry for Economic Affairs, Infrastructure, Transport and Technology through the Center for Analytics-Data-Applications (ADA-Center) within the framework of “BAYERN DIGITAL II”. The authors of this work take full responsibilities for its content.

Appendix

Dataset Links

Code Links

var. emb. init weights dataset subset #epochs #layers lr
1 no DBP15k (full) fr-en 2k 2 1.0
ja-en 2k 3 1.0
zh-en 2k 4 1.0
DBP15k (JAPE) fr-en 2k 2 1.0
ja-en 2k 2 1.0
zh-en 2k 2 1.0
DWY100k wd 2k 2 1.0
yg 2k 2 1.0
WK3l-120k en-de 2k 2 1.0
en-fr 2k 2 1.0
WK3l-15k en-de 2k 2 1.0
en-fr 2k 2 10.0
yes DBP15k (full) fr-en 2k 4 1.0
ja-en 2k 4 1.0
zh-en 2k 3 1.0
DBP15k (JAPE) fr-en 2k 2 10.0
ja-en 2k 3 1.0
zh-en 2k 3 1.0
DWY100k wd 2k 2 1.0
yg 2k 2 1.0
WK3l-120k en-de 2k 2 1.0
en-fr 2k 2 1.0
WK3l-15k en-de 2k 2 1.0
en-fr 2k 2 1.0
$d^{-1}$ no DBP15k (full) fr-en 3k 2 1.0
ja-en 3k 2 1.0
zh-en 2k 4 1.0
DBP15k (JAPE) fr-en 3k 2 1.0
ja-en 2k 2 1.0
zh-en 3k 2 1.0
DWY100k wd 3k 2 1.0
yg 3k 2 1.0
WK3l-120k en-de 3k 2 0.5
en-fr 3k 2 1.0
WK3l-15k en-de 3k 2 0.5
en-fr 3k 2 1.0
yes DBP15k (full) fr-en 2k 4 1.0
ja-en 2k 4 1.0
zh-en 2k 4 1.0
DBP15k (JAPE) fr-en 2k 2 1.0
ja-en 2k 2 1.0
zh-en 2k 2 1.0
DWY100k wd 2k 2 1.0
yg 3k 2 0.5
WK3l-120k en-de 2k 2 1.0
en-fr 2k 2 1.0
WK3l-15k en-de 2k 2 1.0
en-fr 2k 2 1.0
Table 6: Optimal hyperparameters after fine-tuning the learning rate, number of epochs and number of layers for each individual dataset / subset combination. We only report differences to the ones found on DBP15k (JAPE) zh-en.

Footnotes

  1. Code: https://github.com/Valentyn1997/kg-alignment-lessons-learned.
  2. Please note that while [38] does not state explicitly that they use GNNs, their model is very similar to [31].
  3. Available at http://ws.nju.edu.cn/jape/
  4. https://github.com/nju-websoft/JAPE/blob/master/data/dbp15k.tar.gz
  5. In the typical scenario it is not known in advance which entities have a match and which do not; the resulting score is therefore too optimistic. We advocate investigating this shortcoming further in future work.
  6. https://github.com/1049451037/GCN-Align

References

  1. K. Aberer, K. Choi, N. F. Noy, D. Allemang, K. Lee, L. J. B. Nixon, J. Golbeck, P. Mika, D. Maynard, R. Mizoguchi, G. Schreiber and P. Cudré-Mauroux (Eds.) (2007) The Semantic Web: 6th International Semantic Web Conference, 2nd Asian Semantic Web Conference, ISWC 2007 + ASWC 2007, Busan, Korea. Lecture Notes in Computer Science, Vol. 4825, Springer.
  2. S. Auer, C. Bizer, G. Kobilarov, J. Lehmann, R. Cyganiak and Z. G. Ives (2007) DBpedia: a nucleus for a web of open data. In The Semantic Web (ISWC 2007 + ASWC 2007), pp. 722–735.
  3. A. Bordes, N. Usunier, A. García-Durán, J. Weston and O. Yakhnenko (2013) Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems 26 (NIPS 2013), pp. 2787–2795.
  4. C. J. C. Burges, L. Bottou, Z. Ghahramani and K. Q. Weinberger (Eds.) (2013) Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems, Lake Tahoe, Nevada, United States.
  5. Y. Cao, Z. Liu, C. Li, Z. Liu, J. Li and T. Chua (2019) Multi-channel graph neural network for entity alignment. In Proceedings of the 57th Conference of the Association for Computational Linguistics (ACL 2019), Volume 1: Long Papers, pp. 1452–1461.
  6. K. Chaudhuri and R. Salakhutdinov (Eds.) (2019) Proceedings of the 36th International Conference on Machine Learning (ICML 2019), Long Beach, California, USA. Proceedings of Machine Learning Research, Vol. 97, PMLR.
  7. M. Chen, Y. Tian, K. Chang, S. Skiena and C. Zaniolo (2018) Co-training embeddings of knowledge graphs and entity descriptions for cross-lingual entity alignment. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI 2018), pp. 3998–4004.
  8. M. Chen, Y. Tian, M. Yang and C. Zaniolo (2017) Multilingual knowledge graph embeddings for cross-lingual knowledge alignment. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI 2017), pp. 1511–1517.
  9. (2015) CIDR 2015: Seventh Biennial Conference on Innovative Data Systems Research, Asilomar, CA, USA, Online Proceedings. www.cidrdb.org.
  10. C. d’Amato, M. Fernández, V. A. M. Tamma, F. Lécué, P. Cudré-Mauroux, J. F. Sequeda, C. Lange and J. Heflin (Eds.) (2017) The Semantic Web – ISWC 2017: 16th International Semantic Web Conference, Vienna, Austria, Proceedings, Part I. Lecture Notes in Computer Science, Vol. 10587, Springer.
  11. M. Fey, J. E. Lenssen, C. Morris, J. Masci and N. M. Kriege (2020) Deep graph matching consensus. In International Conference on Learning Representations (ICLR 2020).
  12. J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals and G. E. Dahl (2017) Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning (ICML 2017), pp. 1263–1272.
  13. L. Guo, Z. Sun and W. Hu (2019) Learning to exploit long-term relational dependencies in knowledge graphs. In Proceedings of the 36th International Conference on Machine Learning (ICML 2019), pp. 2505–2514.
  14. T. N. Kipf and M. Welling (2016) Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.
  15. A. Korhonen, D. R. Traum and L. Màrquez (Eds.) (2019) Proceedings of the 57th Conference of the Association for Computational Linguistics (ACL 2019), Florence, Italy, Volume 1: Long Papers. Association for Computational Linguistics.
  16. S. Kraus (Ed.) (2019) Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI 2019), Macao, China. ijcai.org.
  17. J. Lang (Ed.) (2018) Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI 2018), Stockholm, Sweden. ijcai.org.
  18. L. Liu, R. W. White, A. Mantrach, F. Silvestri, J. J. McAuley, R. Baeza-Yates and L. Zia (Eds.) (2019) The World Wide Web Conference (WWW 2019), San Francisco, CA, USA. ACM.
  19. F. Mahdisoltani, J. Biega and F. M. Suchanek (2015) YAGO3: a knowledge base from multilingual Wikipedias. In CIDR 2015.
  20. M. Nickel, K. Murphy, V. Tresp and E. Gabrilovich (2015) A review of relational machine learning for knowledge graphs. Proceedings of the IEEE 104 (1), pp. 11–33.
  21. S. Pei, L. Yu, R. Hoehndorf and X. Zhang (2019) Semi-supervised entity alignment via knowledge graph embedding with awareness of degree difference. In The World Wide Web Conference (WWW 2019), pp. 3130–3136.
  22. S. Pei, L. Yu, R. Hoehndorf and X. Zhang (2019) Semi-supervised entity alignment via knowledge graph embedding with awareness of degree difference. In The World Wide Web Conference (WWW 2019), pp. 3130–3136.
  23. E. Riloff, D. Chiang, J. Hockenmaier and J. Tsujii (Eds.) (2018) Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018), Brussels, Belgium. Association for Computational Linguistics.
  24. C. Sierra (Ed.) (2017) Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI 2017), Melbourne, Australia. ijcai.org.
  25. A. Singhal (2012) Introducing the knowledge graph: things, not strings. Official Google Blog.
  26. Z. Sun, W. Hu and C. Li (2017) Cross-lingual entity alignment via joint attribute-preserving embedding. In The Semantic Web – ISWC 2017, pp. 628–644.
  27. Z. Sun, W. Hu, Q. Zhang and Y. Qu (2018) Bootstrapping entity alignment with knowledge graph embedding. In IJCAI 2018, pp. 4396–4402.
  28. Z. Sun, W. Hu, Q. Zhang and Y. Qu (2018) Bootstrapping entity alignment with knowledge graph embedding. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI 2018), pp. 4396–4402.
  29. (2019) The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI 2019), Honolulu, Hawaii, USA. AAAI Press.
  30. B. D. Trisedya, J. Qi and R. Zhang (2019) Entity alignment between knowledge graphs using attribute embeddings. In AAAI 2019, pp. 297–304.
  31. P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Lio and Y. Bengio (2017) Graph attention networks. arXiv preprint arXiv:1710.10903.
  32. Z. Wang, Q. Lv, X. Lan and Y. Zhang (2018) Cross-lingual knowledge graph alignment via graph convolutional networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018), pp. 349–357.
  33. Wikidata. https://www.wikidata.org/
  34. K. Xu, L. Wang, M. Yu, Y. Feng, Y. Song, Z. Wang and D. Yu (2019) Cross-lingual knowledge graph alignment via graph matching neural network. In Proceedings of the 57th Conference of the Association for Computational Linguistics (ACL 2019), Volume 1: Long Papers, pp. 3156–3161.
  35. K. Xu, L. Wang, M. Yu, Y. Feng, Y. Song, Z. Wang and D. Yu (2019) Cross-lingual knowledge graph alignment via graph matching neural network. arXiv preprint arXiv:1905.11605.
  36. Q. Zhang, Z. Sun, W. Hu, M. Chen, L. Guo and Y. Qu (2019) Multi-view knowledge graph embedding for entity alignment. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI 2019), pp. 5429–5435.
  37. H. Zhu, R. Xie, Z. Liu and M. Sun (2017) Iterative entity alignment via joint knowledge embeddings. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI 2017), pp. 4258–4264.
  38. Q. Zhu, X. Zhou, J. Wu, J. Tan and L. Guo (2019) Neighborhood-aware attentional representation for multilingual knowledge graphs. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI 2019), pp. 1943–1949.