Paper Title
Similarity Contrastive Estimation for Image and Video Soft Contrastive Self-Supervised Learning
Paper Authors

Paper Abstract
Contrastive representation learning has proven to be an effective self-supervised learning method for images and videos. Most successful approaches are based on Noise Contrastive Estimation (NCE) and use different views of an instance as positives that should be contrasted with other instances, called negatives, which are treated as noise. However, several instances in a dataset are drawn from the same distribution and share underlying semantic information. A good data representation should capture the relations between instances, i.e. their semantic similarity and dissimilarity, which contrastive learning harms by considering all negatives as noise. To circumvent this issue, we propose a novel formulation of contrastive learning using semantic similarity between instances, called Similarity Contrastive Estimation (SCE). Our training objective is a soft contrastive one that brings the positives closer and estimates a continuous distribution to push or pull negative instances based on their learned similarities. We empirically validate our approach on both image and video representation learning. We show that SCE performs competitively with the state of the art on the ImageNet linear evaluation protocol with fewer pretraining epochs and that it generalizes to several downstream image tasks. We also show that SCE reaches state-of-the-art results for video representation pretraining and that the learned representation generalizes to video downstream tasks.
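The soft contrastive objective described above can be sketched as follows. This is a minimal, simplified illustration, not the authors' implementation: it assumes both views of a batch are embedded by an online and a target encoder, builds a soft target that mixes the one-hot positive with an inter-instance similarity distribution estimated on the target branch, and takes the cross-entropy against the online contrastive distribution. The weight `lam` and the temperatures `tau`/`tau_t` are illustrative hyperparameters.

```python
import torch
import torch.nn.functional as F

def sce_loss(z_online, z_target, lam=0.5, tau=0.1, tau_t=0.07):
    """Sketch of an SCE-style soft contrastive loss.

    z_online: (N, D) embeddings of view 1 from the online encoder.
    z_target: (N, D) embeddings of view 2 from the target encoder.
    lam trades off the hard one-hot positive target against the
    soft similarity distribution estimated from the target branch.
    """
    z_online = F.normalize(z_online, dim=1)
    z_target = F.normalize(z_target, dim=1)
    n = z_online.size(0)

    # Inter-instance similarities on the target branch estimate the
    # relations between instances; self-similarity is masked out so
    # the distribution covers only the other (negative) instances.
    sim_t = z_target @ z_target.t() / tau_t
    eye = torch.eye(n, dtype=torch.bool, device=sim_t.device)
    sim_t = sim_t.masked_fill(eye, float("-inf"))
    s = F.softmax(sim_t, dim=1)

    # Soft target: mix the one-hot positive with learned similarities.
    # Each row still sums to 1.
    w = lam * torch.eye(n, device=s.device) + (1.0 - lam) * s

    # Online contrastive distribution over the positive and negatives.
    p = F.softmax(z_online @ z_target.t() / tau, dim=1)

    # Soft cross-entropy: pulls the positive closer and pushes or
    # pulls each negative according to its estimated similarity.
    return -(w * torch.log(p)).sum(dim=1).mean()
```

With `lam=1.0` the soft target collapses to the one-hot positive and the loss reduces to a standard NCE-style contrastive cross-entropy; smaller values of `lam` let the learned similarities reweight the negatives.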