Paper Title
Video Referring Expression Comprehension via Transformer with Content-aware Query
Paper Authors
Paper Abstract
Video Referring Expression Comprehension (REC) aims to localize the target object in video frames that is referred to by a natural language expression. Recently, Transformer-based methods have greatly pushed the performance limit. However, we argue that the current query design is suboptimal and suffers from two drawbacks: 1) a slow training convergence process; 2) a lack of fine-grained alignment. To alleviate this, we aim to couple the purely learnable queries with content information. Specifically, we set up a fixed number of learnable bounding boxes across the frame, and the aligned region features are employed to provide fruitful clues. Besides, we explicitly link certain phrases in the sentence to semantically relevant visual regions. To this end, we introduce two new datasets (i.e., VID-Entity and VidSTG-Entity) by augmenting the VIDSentence and VidSTG datasets with the explicitly referred words in the whole sentence, respectively. Benefiting from this, we conduct fine-grained cross-modal alignment at the region-phrase level, which ensures more detailed feature representations. Incorporating these two designs, our proposed model (dubbed ContFormer) achieves state-of-the-art performance on widely used benchmark datasets. For example, on the VID-Entity dataset, ContFormer achieves an 8.75% absolute improvement on [email protected] over the previous SOTA.
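
To illustrate the content-aware query idea described in the abstract, below is a minimal PyTorch sketch, not the authors' implementation: a fixed set of learnable boxes per frame is paired with DETR-style learnable query embeddings, and the RoI-aligned region features of those boxes supply the content information. All module names, dimensions, and the concatenation-based fusion are assumptions made for illustration.

```python
# Minimal illustrative sketch (not the paper's code) of "content-aware queries":
# learnable boxes per frame whose RoI-aligned region features are fused with
# purely learnable query embeddings. Names, dims, and the fusion choice are
# assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision.ops import roi_align


class ContentAwareQuery(nn.Module):
    def __init__(self, num_queries: int = 16, feat_dim: int = 256):
        super().__init__()
        # Learnable boxes in normalized (cx, cy, w, h) form, shared across frames.
        self.boxes = nn.Parameter(torch.rand(num_queries, 4))
        # Purely learnable query embeddings, as in DETR-style decoders.
        self.query_embed = nn.Embedding(num_queries, feat_dim)
        # Simple fusion of region content and learnable queries (assumed design).
        self.fuse = nn.Linear(2 * feat_dim, feat_dim)

    def forward(self, frame_feat: torch.Tensor) -> torch.Tensor:
        # frame_feat: (B, C, H, W) visual features of B frames, with C == feat_dim.
        b, c, h, w = frame_feat.shape
        # Convert normalized (cx, cy, w, h) to absolute (x1, y1, x2, y2) coordinates.
        cx, cy, bw, bh = self.boxes.sigmoid().unbind(-1)
        boxes_xyxy = torch.stack(
            [(cx - bw / 2) * w, (cy - bh / 2) * h,
             (cx + bw / 2) * w, (cy + bh / 2) * h], dim=-1)
        # RoI-align a region feature for every (frame, box) pair.
        region_feat = roi_align(frame_feat, [boxes_xyxy] * b, output_size=1)
        region_feat = region_feat.flatten(1).view(b, -1, c)              # (B, Q, C)
        # Couple the content (region features) with the learnable queries.
        queries = self.query_embed.weight.unsqueeze(0).expand(b, -1, -1)
        return self.fuse(torch.cat([region_feat, queries], dim=-1))      # (B, Q, C)


# Example: content-aware queries for 2 frames, to be fed into a Transformer decoder.
frame_feat = torch.randn(2, 256, 20, 20)
content_queries = ContentAwareQuery()(frame_feat)  # shape (2, 16, 256)
```

Concatenation followed by a linear layer is only one plausible way to couple region content with the learnable queries; addition or attention-based fusion would also match the description in the abstract.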