Paper Title

Self-supervised Sparse to Dense Motion Segmentation

Paper Authors

Amirhossein Kardoost, Kalun Ho, Peter Ochs, Margret Keuper

Abstract


Observable motion in videos can give rise to the definition of objects moving with respect to the scene. The task of segmenting such moving objects is referred to as motion segmentation and is usually tackled either by aggregating motion information in long, sparse point trajectories, or by directly producing per-frame dense segmentations relying on large amounts of training data. In this paper, we propose a self-supervised method to learn the densification of sparse motion segmentations from single video frames. While previous approaches to motion segmentation build upon pre-training on large surrogate datasets and use dense motion information as an essential cue for pixelwise segmentation, our model does not require pre-training and operates at test time on single frames. It can be trained in a sequence-specific way to produce high-quality dense segmentations from sparse and noisy input. We evaluate our method on the well-known motion segmentation datasets FBMS59 and DAVIS16.
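To make the sparse-to-dense task in the abstract concrete: each frame comes with a handful of labeled trajectory points, and the goal is a dense per-pixel mask. The sketch below is only a naive nearest-neighbor baseline for that densification step, not the authors' self-supervised model; all function and variable names are illustrative.

```python
import numpy as np

def densify_nearest_neighbor(shape, points, labels):
    """Toy densification: assign each pixel the label of its nearest sparse point.

    shape  -- (H, W) of the frame
    points -- (N, 2) array of (row, col) coordinates of labeled trajectory points
    labels -- (N,) array of integer labels (e.g. 0 = background, 1 = moving object)
    """
    rows, cols = np.indices(shape)            # pixel grid, each (H, W)
    coords = np.stack([rows, cols], axis=-1)  # (H, W, 2)
    # Squared distance from every pixel to every sparse point: (H, W, N)
    d2 = ((coords[..., None, :] - points) ** 2).sum(axis=-1)
    return labels[d2.argmin(axis=-1)]         # (H, W) dense mask

# Four sparse labels on a 6x6 frame: object points on the left,
# background points on the right.
pts = np.array([[1, 1], [4, 1], [1, 4], [4, 4]])
lab = np.array([1, 1, 0, 0])
mask = densify_nearest_neighbor((6, 6), pts, lab)
print(mask)
```

The paper replaces this purely geometric propagation with a learned, sequence-specifically trained model, which is what makes the output robust to sparse and noisy input.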
