Paper Title
Cross-modality Person re-identification with Shared-Specific Feature Transfer
Paper Authors
Abstract
Cross-modality person re-identification (cm-ReID) is a challenging but key technology for intelligent video analysis. Existing works mainly focus on learning common representations by embedding different modalities into the same feature space. However, learning only the common characteristics means great information loss, lowering the upper bound of feature distinctiveness. In this paper, we tackle the above limitation by proposing a novel cross-modality shared-specific feature transfer algorithm (termed cm-SSFT) to explore the potential of both the modality-shared information and the modality-specific characteristics to boost re-identification performance. We model the affinities of different modality samples according to the shared features and then transfer both shared and specific features among and across modalities. We also propose a complementary feature learning strategy, including modality adaptation, project adversarial learning and reconstruction enhancement, to learn discriminative and complementary shared and specific features of each modality, respectively. The entire cm-SSFT algorithm can be trained in an end-to-end manner. We conducted comprehensive experiments to validate the superiority of the overall algorithm and the effectiveness of each component. The proposed algorithm significantly outperforms state-of-the-art methods by 22.5% and 19.3% mAP on the two mainstream benchmark datasets SYSU-MM01 and RegDB, respectively.
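The core transfer step described in the abstract (model sample affinities from shared features, then propagate both shared and specific features along those affinities) can be sketched roughly as follows. This is a minimal NumPy illustration under assumed choices (cosine similarity, softmax normalization, simple graph propagation), not the paper's exact formulation:

```python
import numpy as np

def transfer_features(shared, specific):
    """Propagate shared + specific features along an affinity graph.

    shared, specific: (n_samples, d) feature matrices pooled from
    both modalities (e.g., RGB and infrared samples stacked together).
    """
    # Affinity matrix A from cosine similarity on the shared features.
    norm = shared / np.linalg.norm(shared, axis=1, keepdims=True)
    affinity = norm @ norm.T
    # Row-wise softmax so each sample aggregates from its neighbors
    # with weights summing to 1.
    exp = np.exp(affinity - affinity.max(axis=1, keepdims=True))
    weights = exp / exp.sum(axis=1, keepdims=True)
    # Transfer both shared and specific features across all samples,
    # so each sample absorbs complementary information from the others.
    features = np.concatenate([shared, specific], axis=1)
    return weights @ features

# Hypothetical toy batch: 4 RGB + 4 infrared samples, 8-dim features.
rng = np.random.default_rng(0)
shared = rng.random((8, 8))
specific = rng.random((8, 8))
out = transfer_features(shared, specific)
print(out.shape)  # (8, 16)
```

The key design point is that affinities are computed only from the modality-shared features (which are comparable across modalities), while the propagated payload includes the modality-specific features, letting specific information flow across the modality gap.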