Paper Title
Body Part-Based Representation Learning for Occluded Person Re-Identification
Paper Authors
Paper Abstract
Occluded person re-identification (ReID) is a person retrieval task which aims at matching occluded person images with holistic ones. For addressing occluded ReID, part-based methods have been shown to be beneficial as they offer fine-grained information and are well suited to representing partially visible human bodies. However, training a part-based model is challenging for two reasons. Firstly, individual body part appearance is not as discriminative as global appearance (two distinct IDs might have the same local appearance), which means standard ReID training objectives using identity labels are not adapted to local feature learning. Secondly, ReID datasets do not provide human topographical annotations. In this work, we propose BPBreID, a body part-based ReID model for solving the above issues. We first design two modules for predicting body part attention maps and producing body part-based features of the ReID target. We then propose GiLt, a novel training scheme for learning part-based representations that is robust to occlusions and non-discriminative local appearance. Extensive experiments on popular holistic and occluded datasets show the effectiveness of our proposed method, which outperforms state-of-the-art methods by 0.7% mAP and 5.6% rank-1 accuracy on the challenging Occluded-Duke dataset. Our code is available at https://github.com/VlSomers/bpbreid.