Paper Title

Positive Pair Distillation Considered Harmful: Continual Meta Metric Learning for Lifelong Object Re-Identification

Paper Authors

Kai Wang, Chenshen Wu, Andy Bagdanov, Xialei Liu, Shiqi Yang, Shangling Jui, Joost van de Weijer

Paper Abstract

Lifelong object re-identification incrementally learns from a stream of re-identification tasks. The objective is to learn a representation that can be applied to all tasks and that generalizes to previously unseen re-identification tasks. The main challenge is that at inference time the representation must generalize to previously unseen identities. To address this problem, we apply continual meta metric learning to lifelong object re-identification. To prevent forgetting of previous tasks, we use knowledge distillation and explore the roles of positive and negative pairs. Based on our observation that the distillation and metric losses are antagonistic, we propose to remove positive pairs from distillation to robustify model updates. Our method, called Distillation without Positive Pairs (DwoPP), is evaluated on extensive intra-domain experiments on person and vehicle re-identification datasets, as well as inter-domain experiments on the LReID benchmark. Our experiments demonstrate that DwoPP significantly outperforms the state-of-the-art. The code is here: https://github.com/wangkai930418/DwoPP_code
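To make the abstract's core idea concrete, the sketch below shows what "removing positive pairs from distillation" could look like: the new model distills pairwise similarities from the frozen previous-task model, but only over negative (different-identity) pairs, so distillation no longer fights the metric loss that pulls positive pairs together. This is a minimal PyTorch-style illustration under my own assumptions; the function name `dwopp_distill_loss` and the MSE-on-similarities formulation are hypothetical simplifications, not the paper's released implementation (see the linked repository for the official code).

```python
# Minimal sketch of the idea behind Distillation without Positive Pairs (DwoPP).
# Hypothetical simplification; see https://github.com/wangkai930418/DwoPP_code
# for the authors' actual implementation.
import torch
import torch.nn.functional as F


def dwopp_distill_loss(feats_new: torch.Tensor,
                       feats_old: torch.Tensor,
                       labels: torch.Tensor) -> torch.Tensor:
    """Distill pairwise similarities from the old (frozen) model into the
    new model, restricted to negative pairs only.

    feats_new: (B, D) embeddings from the current model.
    feats_old: (B, D) embeddings from the frozen previous-task model.
    labels:    (B,) identity labels for the batch.
    """
    # Cosine-similarity matrices of the two embedding spaces.
    sim_new = F.normalize(feats_new, dim=1) @ F.normalize(feats_new, dim=1).t()
    sim_old = F.normalize(feats_old, dim=1) @ F.normalize(feats_old, dim=1).t()

    # Mask selecting negative pairs: different identities (this also
    # excludes the diagonal, since each sample shares its own label).
    neg_mask = labels.unsqueeze(0) != labels.unsqueeze(1)

    # Positive pairs are excluded: the metric loss pulls them together,
    # while distilling them would pin them to the old model's geometry,
    # making the two objectives antagonistic on exactly those pairs.
    return F.mse_loss(sim_new[neg_mask], sim_old[neg_mask].detach())
```

In a lifelong setting, a term like this would typically be added with some weight to the meta metric learning loss on the current task, so negative-pair structure is preserved against forgetting while positive pairs remain free to contract under the metric loss.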
