Paper Title


Unity Style Transfer for Person Re-Identification

Paper Authors

Chong Liu, Xiaojun Chang, Yi-Dong Shen

Paper Abstract


Style variation has been a major challenge in person re-identification (Re-ID), which aims to match the same pedestrian across different cameras. Existing works attempt to address this problem with camera-invariant descriptor subspace learning. However, when the disparity between images taken by different cameras grows larger, these methods produce more image artifacts. To solve this problem, we propose a UnityStyle adaptation method, which smooths style disparities both within the same camera and across different cameras. Specifically, we first create UnityGAN to learn the style changes between cameras, producing shape-stable, style-unified images for each camera, which we call UnityStyle images. We then use UnityStyle images to eliminate style differences between images, enabling better matching between query and gallery. Finally, we apply the proposed method to Re-ID models, expecting to obtain more style-robust deep features for querying. We conduct extensive experiments on widely used benchmark datasets to evaluate the performance of the proposed framework, and the results confirm the superiority of the proposed model.
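
The abstract outlines a two-stage pipeline: a UnityGAN generator maps each image to a style-unified (UnityStyle) version, and a Re-ID backbone then extracts features from the unified images for query-gallery matching. The PyTorch sketch below illustrates only this data flow; `UnityStyleTransfer`, the `backbone`, and `reid_rank` are hypothetical placeholders, since the abstract does not specify the actual architectures or losses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnityStyleTransfer(nn.Module):
    """Hypothetical stand-in for the paper's UnityGAN generator.

    The real model learns style changes between cameras and outputs a
    shape-stable, style-unified ("UnityStyle") version of the input
    image; here it is a toy encoder-decoder placeholder.
    """
    def __init__(self, channels=3, hidden=64):
        super().__init__()
        self.encode = nn.Conv2d(channels, hidden, kernel_size=3, padding=1)
        self.decode = nn.Conv2d(hidden, channels, kernel_size=3, padding=1)

    def forward(self, x):
        return self.decode(F.relu(self.encode(x)))

def reid_rank(query_feats, gallery_feats):
    """Rank gallery entries for each query by cosine similarity."""
    q = F.normalize(query_feats, dim=1)
    g = F.normalize(gallery_feats, dim=1)
    sim = q @ g.t()  # (num_query, num_gallery) similarity matrix
    return sim.argsort(dim=1, descending=True)

# Usage sketch: unify the style of both query and gallery images before
# feature extraction, so matching is less sensitive to per-camera style.
unity = UnityStyleTransfer()
backbone = nn.Sequential(  # placeholder Re-ID feature extractor
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
query = torch.randn(4, 3, 256, 128)     # fake query batch
gallery = torch.randn(10, 3, 256, 128)  # fake gallery batch
with torch.no_grad():
    ranks = reid_rank(backbone(unity(query)), backbone(unity(gallery)))
print(ranks.shape)  # torch.Size([4, 10]): gallery indices sorted per query
```

The design point mirrored here is that the same style-unification step is applied to both query and gallery images before feature extraction, so per-camera style shifts largely cancel out at matching time.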
