Title

Look Before You Match: Instance Understanding Matters in Video Object Segmentation

Authors

Junke Wang, Dongdong Chen, Zuxuan Wu, Chong Luo, Chuanxin Tang, Xiyang Dai, Yucheng Zhao, Yujia Xie, Lu Yuan, Yu-Gang Jiang

Abstract

Memory-based methods, which explore dense matching between the current frame and past frames for long-range context modeling, have recently demonstrated impressive results in video object segmentation (VOS). Nevertheless, due to the lack of instance understanding ability, these approaches are oftentimes brittle to large appearance variations or viewpoint changes resulting from the movement of objects and cameras. In this paper, we argue that instance understanding matters in VOS, and that integrating it with memory-based matching can enjoy their synergy, which is intuitively sensible given the definition of the VOS task, i.e., identifying and segmenting object instances within a video. Towards this goal, we present a two-branch network for VOS, where a query-based instance segmentation (IS) branch delves into the instance details of the current frame and a VOS branch performs spatial-temporal matching with the memory bank. We employ the well-learned object queries from the IS branch to inject instance-specific information into the query key, with which instance-augmented matching is then performed. In addition, we introduce a multi-path fusion block to effectively combine the memory readout with multi-scale features from the instance segmentation decoder, incorporating high-resolution instance-aware features to produce the final segmentation results. Our method achieves state-of-the-art performance on DAVIS 2016/2017 val (92.6% and 87.1%), DAVIS 2017 test-dev (82.8%), and YouTube-VOS 2018/2019 val (86.3% and 86.3%), outperforming alternative methods by clear margins.
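The core idea described above can be illustrated with a minimal NumPy sketch: object queries are cross-attended to the pixel-level query key to inject instance cues, and the augmented key is then densely matched against the memory bank to produce a readout. This is a simplified sketch only; the function name `instance_augmented_readout`, the additive injection, and the single-head softmax attention are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def instance_augmented_readout(query_key, object_queries, memory_key, memory_value):
    """Sketch of instance-augmented matching (shapes are per-frame, flattened).

    query_key:      (HW, C)   pixel features of the current frame
    object_queries: (Q, C)    instance queries from the IS branch
    memory_key:     (THW, C)  keys of past frames in the memory bank
    memory_value:   (THW, Cv) values of past frames in the memory bank
    """
    # Inject instance-specific information: cross-attend pixels to object queries.
    attn = softmax(query_key @ object_queries.T)      # (HW, Q)
    inst_key = query_key + attn @ object_queries      # (HW, C) augmented query key
    # Dense spatial-temporal matching of the augmented key against the memory.
    affinity = softmax(inst_key @ memory_key.T)       # (HW, THW)
    return affinity @ memory_value                    # (HW, Cv) memory readout
```

In a full model the readout would then be fused with multi-scale decoder features (the multi-path fusion block) before mask prediction; here it is simply returned.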
