Paper Title
VisualSem: A High-quality Knowledge Graph for Vision and Language
Paper Authors
Paper Abstract
An exciting frontier in natural language understanding (NLU) and generation (NLG) calls for (vision-and-) language models that can efficiently access external structured knowledge repositories. However, many existing knowledge bases cover only limited domains or suffer from noisy data, and, most importantly, are typically hard to integrate into neural language pipelines. To fill this gap, we release VisualSem: a high-quality knowledge graph (KG) whose nodes carry multilingual glosses, multiple illustrative images, and visually relevant relations. We also release a neural multi-modal retrieval model that takes images or sentences as input and retrieves entities in the KG. This multi-modal retrieval model can be integrated into any (neural network) model pipeline. We encourage the research community to use VisualSem for data augmentation and/or as a source of grounding, among other possible uses. VisualSem and the multi-modal retrieval model are publicly available and can be downloaded at the following URL: https://github.com/iacercalixto/visualsem
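To make the retrieval idea concrete, the sketch below shows one way a sentence query could be matched against KG entities via embedding similarity over node glosses. This is a minimal illustration of the general technique, not the released VisualSem model or API: the `KGNode` structure, the node IDs, and the choice of the sentence-transformers encoder are all assumptions introduced here for the example.

```python
# Minimal sketch: embedding-based entity retrieval over a VisualSem-style KG.
# KGNode, the node IDs, and the encoder choice are illustrative assumptions,
# not the actual VisualSem tooling (see the repository for the released code).
from dataclasses import dataclass, field

import numpy as np
from sentence_transformers import SentenceTransformer


@dataclass
class KGNode:
    """A KG node with multilingual glosses and paths to illustrative images."""
    node_id: str
    glosses: dict[str, str]                       # language code -> gloss text
    image_paths: list[str] = field(default_factory=list)


def retrieve(query: str, nodes: list[KGNode],
             encoder: SentenceTransformer, k: int = 5) -> list[KGNode]:
    """Return the k nodes whose English gloss is most similar to the query."""
    gloss_texts = [n.glosses.get("en", "") for n in nodes]
    # With normalized embeddings, the dot product equals cosine similarity.
    node_emb = encoder.encode(gloss_texts, normalize_embeddings=True)
    query_emb = encoder.encode([query], normalize_embeddings=True)[0]
    scores = node_emb @ query_emb
    top_idx = np.argsort(-scores)[:k]
    return [nodes[i] for i in top_idx]


if __name__ == "__main__":
    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    nodes = [  # toy nodes with made-up IDs and single-language glosses
        KGNode("n001", {"en": "A domesticated carnivorous mammal kept as a pet."}),
        KGNode("n002", {"en": "A large natural elevation of the earth's surface."}),
    ]
    for node in retrieve("a photo of a dog", nodes, encoder, k=1):
        print(node.node_id, node.glosses["en"])
```

For image queries, the same nearest-neighbor scheme would apply with a joint image-text encoder in place of the sentence encoder, which is what makes the retrieval model multi-modal.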