Paper Title
Beyond Instance Discrimination: Relation-aware Contrastive Self-supervised Learning
Paper Authors
Paper Abstract
Contrastive self-supervised learning (CSL) based on instance discrimination typically attracts positive samples while repelling negatives to learn representations with pre-defined binary self-supervision. However, vanilla CSL is inadequate for modeling sophisticated instance relations, which limits the ability of the learned model to retain fine semantic structure. On the one hand, samples with the same semantic category are inevitably pushed away as negatives. On the other hand, differences among samples cannot be captured. In this paper, we present relation-aware contrastive self-supervised learning (ReCo) to integrate instance relations, i.e., the global distribution relation and the local interpolation relation, into the CSL framework in a plug-and-play fashion. Specifically, we align the similarity distributions calculated between the positive anchor views and the negatives at the global level to exploit diverse similarity relations among instances. At the local level, interpolation consistency between the pixel space and the feature space is applied to quantitatively model the feature differences of samples with distinct apparent similarities. Through explicit instance relation modeling, our ReCo avoids irrationally pushing away semantically identical samples and carves out a well-structured feature space. Extensive experiments conducted on commonly used benchmarks demonstrate that our ReCo consistently achieves remarkable performance improvements.
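The abstract describes two plug-in objectives: aligning the similarity distributions that two views of an anchor produce over a set of negatives (global distribution relation), and enforcing consistency between pixel-space and feature-space interpolation (local interpolation relation). The following is a minimal PyTorch sketch of how such losses could look; it assumes a generic encoder, a memory queue of negatives, and hypothetical temperature and mixing parameters, and is not the authors' released implementation.

```python
# Illustrative sketch only: encoder, queue, t_q, t_k, and lam are assumed names,
# not taken from the paper's code.
import torch
import torch.nn.functional as F


def global_distribution_loss(q, k, queue, t_q=0.1, t_k=0.04):
    """Align the similarity distributions that two augmented views of the same
    anchor produce over a bank of negative features (the queue)."""
    q = F.normalize(q, dim=1)           # (B, D) features of view 1
    k = F.normalize(k, dim=1)           # (B, D) features of view 2 (e.g., momentum branch)
    queue = F.normalize(queue, dim=1)   # (K, D) negative features
    log_p_q = F.log_softmax(q @ queue.t() / t_q, dim=1)  # distribution from view 1
    p_k = F.softmax(k @ queue.t() / t_k, dim=1)          # sharper target from view 2
    return F.kl_div(log_p_q, p_k, reduction="batchmean")


def local_interpolation_loss(encoder, x1, x2, lam=None):
    """Require the feature of a pixel-space mixture to match the same mixture
    of the two original features (interpolation consistency)."""
    if lam is None:
        lam = torch.distributions.Beta(1.0, 1.0).sample().item()
    with torch.no_grad():
        z1, z2 = encoder(x1), encoder(x2)
    x_mix = lam * x1 + (1.0 - lam) * x2   # interpolate in pixel space
    z_mix = encoder(x_mix)                # feature of the mixed image
    target = lam * z1 + (1.0 - lam) * z2  # interpolate in feature space
    return 1.0 - F.cosine_similarity(z_mix, target, dim=1).mean()
```

In this reading, either loss can be added to a standard contrastive objective (e.g., InfoNCE) as a weighted regularizer, which is consistent with the plug-and-play framing in the abstract.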