Paper Title

Reasoning over Multi-view Knowledge Graphs

Paper Authors

Zhaohan Xi, Ren Pang, Changjiang Li, Tianyu Du, Shouling Ji, Fenglong Ma, Ting Wang

Paper Abstract

Recently, knowledge representation learning (KRL) has emerged as the state-of-the-art approach for processing queries over knowledge graphs (KGs), wherein KG entities and the query are embedded into a latent space such that the entities that answer the query are embedded close to the query. Yet, despite intensive research on KRL, most existing studies either focus on homogeneous KGs or assume KG completion tasks (i.e., inferring missing facts), while answering complex logical queries over KGs with multiple aspects (multi-view KGs) remains an open challenge. To bridge this gap, in this paper we present ROMA, a novel KRL framework for answering logical queries over multi-view KGs. Compared with prior work, ROMA departs in major aspects: (i) it models a multi-view KG as a set of overlaying sub-KGs, each corresponding to one view, which subsumes many types of KGs studied in the literature (e.g., temporal KGs); (ii) it supports complex logical queries with varying relation and view constraints (e.g., queries with complex topology and/or spanning multiple views); (iii) it scales up to KGs of large sizes (e.g., millions of facts) and fine-grained views (e.g., dozens of views); (iv) it generalizes to query structures and KG views unobserved during training. Extensive empirical evaluation on real-world KGs shows that ROMA significantly outperforms alternative methods.
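The abstract describes the general query-embedding paradigm: entities and queries share one latent space, answers are the entities nearest to the query embedding, and a multi-view KG is modeled as overlaying per-view sub-KGs. Below is a minimal sketch of that general idea only, not ROMA's actual architecture; all names and choices (NUM_VIEWS, DIM, embed_query, the TransE-style translation, random embeddings in place of trained ones) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: entities/relations of the KG, number of views,
# and the dimension of the shared latent space.
NUM_ENTITIES, NUM_RELATIONS, NUM_VIEWS, DIM = 1000, 20, 5, 64

# One embedding table for entities; each view gets its own relation
# embeddings, reflecting the "overlaying per-view sub-KGs" modeling.
# Real systems learn these by training; random values stand in here.
entity_emb = rng.normal(size=(NUM_ENTITIES, DIM))
relation_emb = rng.normal(size=(NUM_VIEWS, NUM_RELATIONS, DIM))

def embed_query(anchor: int, relation: int, view: int) -> np.ndarray:
    """Embed a one-hop query (anchor, relation, ?) constrained to one
    view, via a TransE-style translation: q = e_anchor + r_view."""
    return entity_emb[anchor] + relation_emb[view, relation]

def answer(query_emb: np.ndarray, k: int = 5) -> np.ndarray:
    """Return the k entities whose embeddings lie closest to the query."""
    dists = np.linalg.norm(entity_emb - query_emb, axis=1)
    return np.argsort(dists)[:k]

q = embed_query(anchor=42, relation=3, view=1)
print(answer(q))  # indices of the top-5 candidate answer entities
```

A trained model would replace the random tables with learned embeddings and handle multi-hop, conjunctive, or multi-view queries by composing such per-step embeddings; the nearest-neighbor answering step stays the same.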
