Paper Title

Joint Discriminative and Metric Embedding Learning for Person Re-Identification

Authors

Sinan Sabri, Zaigham Randhawa, Gianfranco Doretto

Abstract


Person re-identification is a challenging task because of the high intra-class variance induced by the unrestricted nuisance factors of variations such as pose, illumination, viewpoint, background, and sensor noise. Recent approaches postulate that powerful architectures have the capacity to learn feature representations invariant to nuisance factors, by training them with losses that minimize intra-class variance and maximize inter-class separation, without modeling nuisance factors explicitly. The dominant approaches use either a discriminative loss with margin, like the softmax loss with the additive angular margin, or a metric learning loss, like the triplet loss with batch hard mining of triplets. Since the softmax imposes feature normalization, it limits the gradient flow supervising the feature embedding. We address this by joining the losses and leveraging the triplet loss as a proxy for the missing gradients. We further improve invariance to nuisance factors by adding the discriminative task of predicting attributes. Our extensive evaluation highlights that when only a holistic representation is learned, we consistently outperform the state-of-the-art on the three most challenging datasets. Such representations are easier to deploy in practical systems. Finally, we found that joining the losses removes the requirement for having a margin in the softmax loss while increasing performance.
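To make the joint objective described in the abstract concrete, below is a minimal sketch (not the authors' released code) of combining a discriminative loss (softmax with an additive angular margin, ArcFace-style) with a metric-learning loss (triplet loss with batch-hard mining) on the same embedding. The class names, the scale/margin values, and the loss weight are illustrative assumptions.

```python
# Minimal sketch, assuming a PyTorch training loop; hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AngularMarginSoftmax(nn.Module):
    """Softmax loss with an additive angular margin on L2-normalized features."""
    def __init__(self, feat_dim, num_classes, scale=30.0, margin=0.3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.scale, self.margin = scale, margin

    def forward(self, feats, labels):
        # Cosine similarity between normalized features and normalized class weights.
        cos = F.linear(F.normalize(feats), F.normalize(self.weight))
        cos = cos.clamp(-1 + 1e-7, 1 - 1e-7)
        theta = torch.acos(cos)
        # Add the angular margin only to the target-class logit.
        target = F.one_hot(labels, cos.size(1)).bool()
        logits = torch.where(target, torch.cos(theta + self.margin), cos)
        return F.cross_entropy(self.scale * logits, labels)

def batch_hard_triplet(feats, labels, margin=0.3):
    """Triplet loss with batch-hard mining on the (unnormalized) embeddings."""
    dist = torch.cdist(feats, feats)                         # pairwise Euclidean distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)        # positive-pair mask
    hardest_pos = (dist * same.float()).max(dim=1).values    # farthest positive per anchor
    inf = torch.full_like(dist, float("inf"))
    hardest_neg = torch.where(same, inf, dist).min(dim=1).values  # closest negative per anchor
    return F.relu(hardest_pos - hardest_neg + margin).mean()

def joint_loss(feats, labels, arc_head, triplet_weight=1.0):
    # The triplet term supervises the unnormalized embedding directly, serving as
    # a proxy for the gradients that the feature-normalized softmax branch lacks.
    return arc_head(feats, labels) + triplet_weight * batch_hard_triplet(feats, labels)
```

In this reading, the margin-softmax branch shapes class separation on the normalized hypersphere, while the batch-hard triplet term restores gradient signal on the raw embedding; the abstract further reports that, once joined, the softmax margin itself becomes unnecessary.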
