Paper Title

Object Segmentation Without Labels with Large-Scale Generative Models

Paper Authors

Andrey Voynov, Stanislav Morozov, Artem Babenko

Paper Abstract

The recent rise of unsupervised and self-supervised learning has dramatically reduced the dependency on labeled data, providing effective image representations for transfer to downstream vision tasks. Furthermore, recent works have employed these representations in a fully unsupervised setup for image classification, reducing the need for human labels at the fine-tuning stage as well. This work demonstrates that large-scale unsupervised models can also perform the more challenging task of object segmentation, requiring neither pixel-level nor image-level labels. Namely, we show that recent unsupervised GANs can differentiate between foreground and background pixels, providing high-quality saliency masks. Through extensive comparisons on standard benchmarks, we outperform existing unsupervised alternatives for object segmentation, achieving a new state of the art.
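
To make the core claim concrete, below is a minimal illustrative sketch (not the authors' exact pipeline) of how a pretrained GAN generator could yield foreground/background masks: assuming access to a generator `G` and a latent direction `d` that suppresses the foreground object, one can threshold the per-pixel change between the original and the shifted generations to obtain a rough saliency mask. Both `G` and `d`, as well as the shift magnitude and threshold, are placeholders introduced here for illustration.

```python
import torch

@torch.no_grad()
def synthesize_image_mask_pairs(G, d, num_samples=8, latent_dim=128,
                                shift=5.0, thr=0.25):
    """Generate synthetic (image, mask) pairs from a pretrained generator.

    G   -- hypothetical generator mapping latents (N, latent_dim) to
           images (N, 3, H, W) in [-1, 1]
    d   -- latent direction (latent_dim,) assumed to remove the foreground
    shift, thr -- illustrative values for the latent shift and mask threshold
    """
    z = torch.randn(num_samples, latent_dim)
    imgs = G(z)                        # original generations
    imgs_bg = G(z + shift * d)         # foreground-suppressed generations
    diff = (imgs - imgs_bg).abs().mean(dim=1, keepdim=True)  # per-pixel change
    masks = (diff > thr).float()       # 1 = foreground, 0 = background
    return imgs, masks
```

Synthetic pairs of this kind could then serve as training data for an ordinary segmentation network, which is one plausible way to turn a generative model's foreground/background separation into a standalone saliency predictor.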
