Paper Title

NABU – Multilingual Graph-based Neural RDF Verbalizer

Paper Authors

Diego Moussallem, Dwaraknath Gnaneshwar, Thiago Castro Ferreira, Axel-Cyrille Ngonga Ngomo

Paper Abstract

The RDF-to-text task has recently gained substantial attention due to the continuous growth of Linked Data. In contrast to traditional pipeline models, recent studies have focused on neural models, which are now able to convert a set of RDF triples into text in an end-to-end fashion with promising results. However, English is the only widely targeted language. We address this research gap by presenting NABU, a multilingual graph-based neural model that verbalizes RDF data in German, Russian, and English. NABU is based on an encoder-decoder architecture, using an encoder inspired by Graph Attention Networks and a Transformer as the decoder. Our approach relies on the fact that knowledge graphs are language-agnostic and can hence be used to generate multilingual text. We evaluate NABU in monolingual and multilingual settings on the standard WebNLG benchmark datasets. Our results show that NABU outperforms state-of-the-art approaches on English with 66.21 BLEU and achieves consistent results across all languages in the multilingual scenario with 56.04 BLEU.
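To make the architecture described in the abstract concrete, the sketch below pairs a single graph-attention layer over RDF entity/relation nodes with a PyTorch Transformer decoder. This is not the authors' implementation: the module names, dimensions, the toy triple, and the fully connected adjacency are assumptions chosen purely for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphAttentionLayer(nn.Module):
    """Single-head graph attention: each node attends over its neighbours."""

    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim, bias=False)
        self.attn = nn.Linear(2 * dim, 1, bias=False)

    def forward(self, x, adj):
        # x: (num_nodes, dim), adj: (num_nodes, num_nodes) 0/1 adjacency matrix
        h = self.proj(x)
        n = h.size(0)
        pairs = torch.cat(
            [h.unsqueeze(1).expand(n, n, -1), h.unsqueeze(0).expand(n, n, -1)], dim=-1
        )
        scores = F.leaky_relu(self.attn(pairs).squeeze(-1))
        scores = scores.masked_fill(adj == 0, float("-inf"))
        return torch.softmax(scores, dim=-1) @ h


class GraphToTextModel(nn.Module):
    """GAT-style encoder over RDF nodes feeding a Transformer decoder."""

    def __init__(self, node_vocab, token_vocab, dim=256):
        super().__init__()
        self.node_emb = nn.Embedding(node_vocab, dim)
        self.encoder = GraphAttentionLayer(dim)
        self.tok_emb = nn.Embedding(token_vocab, dim)
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.out = nn.Linear(dim, token_vocab)

    def forward(self, node_ids, adj, target_ids):
        # Encode the (language-agnostic) graph, then decode target-language tokens.
        memory = self.encoder(self.node_emb(node_ids), adj).unsqueeze(0)
        tgt = self.tok_emb(target_ids).unsqueeze(0)
        length = target_ids.size(0)
        causal = torch.triu(torch.full((length, length), float("-inf")), diagonal=1)
        dec = self.decoder(tgt, memory, tgt_mask=causal)
        return self.out(dec)  # (1, length, token_vocab) next-token logits


# Toy usage: one RDF triple (subject, predicate, object) as a 3-node graph.
node_ids = torch.tensor([0, 1, 2])       # e.g. dbr:Berlin, dbo:country, dbr:Germany
adj = torch.ones(3, 3)                   # fully connected for this tiny example
target_ids = torch.tensor([4, 5, 6, 7])  # token ids of the target sentence
model = GraphToTextModel(node_vocab=10, token_vocab=20)
logits = model(node_ids, adj, target_ids)
print(logits.shape)  # torch.Size([1, 4, 20])
```

In multilingual setups of this kind, a common way to select the output language is to prepend a target-language token to the decoder input; whether NABU uses exactly this mechanism is not stated in the abstract, so it is left out of the sketch.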
