Paper title
The Group Loss++: A deeper look into group loss for deep metric learning
Paper authors
Paper abstract
Deep metric learning has yielded impressive results in tasks such as clustering and image retrieval by leveraging neural networks to obtain highly discriminative feature embeddings, which can be used to group samples into different classes. Much research has been devoted to the design of smart loss functions or data mining strategies for training such networks. Most methods consider only pairs or triplets of samples within a mini-batch to compute the loss function, which is commonly based on the distance between embeddings. We propose the Group Loss, a loss function based on a differentiable label-propagation method that enforces embedding similarity across all samples of a group while promoting, at the same time, low-density regions amongst data points belonging to different groups. Guided by the smoothness assumption that "similar objects should belong to the same group", the proposed loss trains the neural network for a classification task, enforcing a consistent labelling amongst samples within a class. We design a set of inference strategies tailored towards our algorithm, named Group Loss++, which further improve the results of our model. We show state-of-the-art results on clustering and image retrieval on four retrieval datasets, and present competitive results on two person re-identification datasets, providing a unified framework for retrieval and re-identification.
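To make the label-propagation idea concrete, the following is a minimal sketch of one such scheme: multiplicative (replicator-style) updates of soft label assignments driven by a pairwise similarity matrix. This is an illustrative toy, not necessarily the exact formulation used in the paper; the matrix `W`, the labels, and the iteration count are all made up for the example.

```python
import numpy as np

def label_propagation(W, X, n_iters=10):
    """Propagate soft labels over a similarity graph.

    W: (n, n) nonnegative pairwise similarities between samples.
    X: (n, c) initial soft label assignments (rows sum to 1).
    Each iteration reinforces the labels that agree with a sample's
    most similar neighbours, then renormalises rows to distributions.
    """
    for _ in range(n_iters):
        support = W @ X                        # class support from neighbours
        X = X * support                        # multiplicative reinforcement
        X = X / X.sum(axis=1, keepdims=True)   # renormalise each row
    return X

# Toy example: two tight groups of two samples each, with one
# confidently labelled "anchor" sample per group.
W = np.array([[1.0, 0.9, 0.1, 0.1],
              [0.9, 1.0, 0.1, 0.1],
              [0.1, 0.1, 1.0, 0.9],
              [0.1, 0.1, 0.9, 1.0]])
X = np.full((4, 2), 0.5)      # start undecided
X[0] = [0.99, 0.01]           # sample 0 known to be class 0
X[2] = [0.01, 0.99]           # sample 2 known to be class 1

probs = label_propagation(W, X)
# The unlabelled samples (1 and 3) are pulled towards the class of
# their similar neighbours, consistent with the smoothness assumption.
```

Because every step is composed of differentiable operations (matrix products, elementwise products, normalisation), the final class probabilities can be fed into a standard classification loss and gradients flow back to the embeddings that produced `W`.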