Paper Title

Joint Multilingual Knowledge Graph Completion and Alignment

Paper Authors

Vinh Tong, Dat Quoc Nguyen, Trung Thanh Huynh, Tam Thanh Nguyen, Quoc Viet Hung Nguyen, Mathias Niepert

Paper Abstract

Knowledge graph (KG) alignment and completion are usually treated as two independent tasks. While recent work has leveraged entity and relation alignments from multiple KGs, such as alignments between multilingual KGs with common entities and relations, a deeper understanding of the ways in which multilingual KG completion (MKGC) can aid the creation of multilingual KG alignments (MKGA) is still limited. Motivated by the observation that structural inconsistencies -- the main challenge for MKGA models -- can be mitigated through KG completion methods, we propose a novel model for jointly completing and aligning knowledge graphs. The proposed model combines two components that jointly accomplish KG completion and alignment. These two components employ relation-aware graph neural networks that we propose to encode multi-hop neighborhood structures into entity and relation representations. Moreover, we also propose (i) a structural inconsistency reduction mechanism to incorporate information from the completion into the alignment component, and (ii) an alignment seed enlargement and triple transferring mechanism to enlarge alignment seeds and transfer triples during KG alignment. Extensive experiments on a public multilingual benchmark show that our proposed model outperforms existing competitive baselines, obtaining new state-of-the-art results on both MKGC and MKGA tasks. We publicly release the implementation of our model at https://github.com/vinhsuhi/JMAC.
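
The abstract refers to relation-aware graph neural networks that encode multi-hop neighborhood structures into entity and relation representations. The snippet below is a minimal, illustrative sketch of what one relation-aware message-passing layer could look like in PyTorch; it is an assumption for illustration only, not the JMAC implementation released at the GitHub link above (the class name RelationAwareGNNLayer and all shapes are hypothetical).

```python
# Minimal sketch (NOT the authors' JMAC code): each message is conditioned on the
# relation of the edge, so relation information flows into entity representations.
# Stacking L such layers aggregates L-hop neighborhood structure.
import torch
import torch.nn as nn


class RelationAwareGNNLayer(nn.Module):
    def __init__(self, dim: int, num_relations: int):
        super().__init__()
        self.rel_emb = nn.Embedding(num_relations, dim)  # relation representations
        self.msg = nn.Linear(2 * dim, dim)               # combines neighbor + relation
        self.self_loop = nn.Linear(dim, dim)

    def forward(self, ent_emb, edge_index, edge_type):
        # ent_emb:    (num_entities, dim)
        # edge_index: (2, num_edges) with rows (head, tail)
        # edge_type:  (num_edges,) relation id of each edge
        heads, tails = edge_index
        # message from each head entity, conditioned on the edge's relation
        m = self.msg(torch.cat([ent_emb[heads], self.rel_emb(edge_type)], dim=-1))
        # mean-aggregate incoming messages at each tail entity
        agg = torch.zeros_like(ent_emb).index_add_(0, tails, m)
        deg = torch.zeros(ent_emb.size(0), device=ent_emb.device).index_add_(
            0, tails, torch.ones_like(tails, dtype=ent_emb.dtype))
        agg = agg / deg.clamp(min=1).unsqueeze(-1)
        return torch.relu(self.self_loop(ent_emb) + agg)


if __name__ == "__main__":
    # toy usage: 4 entities, 2 relations, 3 triples
    ent_emb = torch.randn(4, 16)
    edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])  # heads, tails
    edge_type = torch.tensor([0, 1, 0])
    layer = RelationAwareGNNLayer(dim=16, num_relations=2)
    out = layer(ent_emb, edge_index, edge_type)        # one hop of neighborhood info
    print(out.shape)  # torch.Size([4, 16])
```

Stacking several such layers lets an entity representation absorb information from its multi-hop neighborhood, which is the property the abstract describes; the released JMAC code should be consulted for the actual architecture.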
