Paper Title
Visual Question Answering on Image Sets
Paper Authors
Paper Abstract
We introduce the task of Image-Set Visual Question Answering (ISVQA), which generalizes the commonly studied single-image VQA problem to multi-image settings. Taking a natural language question and a set of images as input, the task is to answer the question based on the content of the images. The questions can be about objects and relationships in one or more images, or about the entire scene depicted by the image set. To enable research on this new topic, we introduce two ISVQA datasets: indoor and outdoor scenes. They simulate the real-world scenarios of indoor image collections and multiple car-mounted cameras, respectively. The indoor-scene dataset contains 91,479 human-annotated questions for 48,138 image sets, and the outdoor-scene dataset has 49,617 questions for 12,746 image sets. We analyze the properties of the two datasets, including question-and-answer distributions, types of questions, biases in the datasets, and question-image dependencies. We also build new baseline models to investigate the new research challenges in ISVQA.