Paper Title
Parameter-Efficient Person Re-identification in the 3D Space
Paper Authors
Paper Abstract
People live in a 3D world. However, existing works on person re-identification (re-id) mostly consider the semantic representation learning in a 2D space, intrinsically limiting the understanding of people. In this work, we address this limitation by exploring the prior knowledge of the 3D body structure. Specifically, we project 2D images to a 3D space and introduce a novel parameter-efficient Omni-scale Graph Network (OG-Net) to learn the pedestrian representation directly from 3D point clouds. OG-Net effectively exploits the local information provided by sparse 3D points and takes advantage of the structure and appearance information in a coherent manner. With the help of 3D geometry information, we can learn a new type of deep re-id feature free from noisy variations, such as scale and viewpoint. To our knowledge, this work is among the first attempts to conduct person re-identification in the 3D space. We demonstrate through extensive experiments that the proposed method (1) eases the matching difficulty in the traditional 2D space, (2) exploits the complementary information of 2D appearance and 3D structure, (3) achieves competitive results with limited parameters on four large-scale person re-id datasets, and (4) has good scalability to unseen datasets. Our code, models and generated 3D human data are publicly available at https://github.com/layumi/person-reid-3d.
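The abstract's central step, lifting a 2D pedestrian image into an RGB-colored 3D point cloud that combines structure and appearance, can be illustrated with a minimal sketch. The example below assumes a per-pixel 3D coordinate map (e.g., from an off-the-shelf human mesh or depth estimator) and a foreground mask are already available; the function name image_to_colored_point_cloud and the fixed budget of 2048 points are illustrative assumptions, not the paper's exact pipeline.

import numpy as np

def image_to_colored_point_cloud(rgb, xyz, mask, num_points=2048, seed=0):
    """Lift a 2D pedestrian image to an RGB-colored 3D point cloud.

    rgb  : (H, W, 3) uint8 image.
    xyz  : (H, W, 3) per-pixel 3D coordinates, assumed to come from an
           off-the-shelf human mesh / depth estimator (hypothetical input).
    mask : (H, W) boolean foreground mask of the person.
    Returns a (num_points, 6) array of [x, y, z, r, g, b] per point.
    """
    points = xyz[mask]                               # (N, 3) 3D structure
    colors = rgb[mask].astype(np.float32) / 255.0    # (N, 3) 2D appearance
    cloud = np.concatenate([points, colors], axis=1)  # (N, 6) structure + appearance

    # Randomly subsample (or pad by repetition) to a fixed-size set, since
    # point-cloud networks typically expect a fixed number of input points.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(cloud), size=num_points, replace=len(cloud) < num_points)
    return cloud[idx]

if __name__ == "__main__":
    # Toy usage with random data standing in for a real image and its 3D lift.
    H, W = 128, 64
    rgb = np.random.randint(0, 256, (H, W, 3), dtype=np.uint8)
    xyz = np.random.rand(H, W, 3).astype(np.float32)
    mask = np.ones((H, W), dtype=bool)
    cloud = image_to_colored_point_cloud(rgb, xyz, mask)
    print(cloud.shape)  # (2048, 6)

A point-based network such as the proposed OG-Net would then consume such fixed-size sets of xyz-plus-RGB points, which matches the abstract's description of exploiting 3D structure and 2D appearance in a coherent manner.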