Paper Title
Style Interleaved Learning for Generalizable Person Re-identification
Paper Authors
Paper Abstract
Domain generalization (DG) for person re-identification (ReID) is a challenging problem, as access to target domain data is not permitted during the training process. Most existing DG ReID methods update the feature extractor and classifier parameters based on the same features. This common practice causes the model to overfit to existing feature styles in the source domain, resulting in sub-optimal generalization ability on target domains. To solve this problem, we propose a novel style interleaved learning (IL) framework. Unlike conventional learning strategies, IL incorporates two forward propagations and one backward propagation for each iteration. We employ the features of interleaved styles to update the feature extractor and classifiers using different forward propagations, which helps to prevent the model from overfitting to certain domain styles. To generate interleaved feature styles, we further propose a new feature stylization approach. It produces a wide range of meaningful styles that are both different from and independent of the original styles in the source domain, which caters to the IL methodology. Extensive experimental results show that our model not only consistently outperforms state-of-the-art methods on large-scale benchmarks for DG ReID, but also has clear advantages in computational efficiency. The code is available at https://github.com/WentaoTan/Interleaved-Learning.
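The general idea behind feature stylization can be sketched as follows: normalize features to strip their original statistics, then re-inject freshly sampled statistics to synthesize a new style. This is only a minimal illustration of that pattern, not the paper's actual method; the `stylize` helper and the Gaussian parameters used for sampling new statistics are hypothetical choices for this sketch.

```python
import numpy as np

def stylize(features: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Re-style a batch of features by replacing its statistics.

    features: (batch, channels) activations from a feature extractor.
    Returns features carrying newly sampled, source-independent statistics.
    """
    # Strip the original per-channel statistics (the "style").
    mu = features.mean(axis=0, keepdims=True)
    sigma = features.std(axis=0, keepdims=True) + 1e-6
    normalized = (features - mu) / sigma

    # Sample novel statistics independently of the originals
    # (hypothetical distributions; the paper's sampling scheme may differ).
    new_mu = rng.normal(loc=0.0, scale=1.0, size=mu.shape)
    new_sigma = np.abs(rng.normal(loc=1.0, scale=0.5, size=sigma.shape)) + 1e-6

    # Re-inject the sampled statistics to produce the interleaved style.
    return normalized * new_sigma + new_mu
```

In the IL framework, features of the original style and of such a synthesized style would then drive separate forward propagations, so that the classifier and the feature extractor are never updated from identical feature statistics.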