Paper Title

VMRFANet: View-Specific Multi-Receptive Field Attention Network for Person Re-identification

Paper Authors

Honglong Cai, Yuedong Fang, Zhiguan Wang, Tingchun Yeh, Jinxing Cheng

Paper Abstract

Person re-identification (re-ID) aims to retrieve the same person across different cameras. In practice, it remains a challenging task due to background clutter, variations in body poses and view conditions, inaccurate bounding box detection, etc. To tackle these issues, in this paper, we propose a novel multi-receptive field attention (MRFA) module that utilizes filters of various sizes to help the network focus on informative pixels. In addition, we present a view-specific mechanism that guides the attention module to handle variations in view conditions. Moreover, we introduce a Gaussian horizontal random cropping/padding method which further improves the robustness of our proposed network. Comprehensive experiments demonstrate the effectiveness of each component. Our method achieves 95.5% / 88.1% in rank-1 / mAP on Market-1501, 88.9% / 80.0% on DukeMTMC-reID, 81.1% / 78.8% on the CUHK03 labeled dataset, and 78.9% / 75.3% on the CUHK03 detected dataset, outperforming current state-of-the-art methods.
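
The abstract only names the MRFA component, so the following is a minimal PyTorch sketch of the general idea it describes: parallel convolution branches with different kernel sizes produce a spatial attention map that re-weights the input feature map. The class name, kernel sizes, channel split, and sigmoid fusion are all assumptions made for illustration, not the authors' actual architecture.

```python
import torch
import torch.nn as nn


class MRFASketch(nn.Module):
    """Illustrative multi-receptive field spatial attention block.

    Based only on the abstract's description; all design choices here
    (branch count, kernel sizes, fusion) are assumptions, not the
    paper's actual MRFA module.
    """

    def __init__(self, in_channels, kernel_sizes=(1, 3, 5)):
        super().__init__()
        branch_channels = in_channels // len(kernel_sizes)
        # One branch per receptive field size; padding keeps spatial size.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_channels, branch_channels,
                      kernel_size=k, padding=k // 2)
            for k in kernel_sizes
        ])
        # Fuse the multi-scale responses into a single-channel attention map.
        self.fuse = nn.Conv2d(branch_channels * len(kernel_sizes), 1,
                              kernel_size=1)

    def forward(self, x):
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        attn = torch.sigmoid(self.fuse(multi_scale))   # (N, 1, H, W)
        return x * attn                                # emphasize informative pixels


if __name__ == "__main__":
    feats = torch.randn(2, 256, 24, 8)   # dummy re-ID feature map
    out = MRFASketch(256)(feats)
    print(out.shape)                      # torch.Size([2, 256, 24, 8])
```

The key point the sketch illustrates is that each branch sees a different receptive field, so the fused attention map can respond to both fine details and larger body regions before re-weighting the backbone features.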
