Paper Title
Instance Relation Graph Guided Source-Free Domain Adaptive Object Detection
Paper Authors
Paper Abstract
Unsupervised Domain Adaptation (UDA) is an effective approach to tackle the issue of domain shift. Specifically, UDA methods try to align the source and target representations to improve generalization on the target domain. Further, UDA methods work under the assumption that the source data is accessible during the adaptation process. However, in real-world scenarios, the labelled source data is often restricted due to privacy regulations, data transmission constraints, or proprietary data concerns. The Source-Free Domain Adaptation (SFDA) setting aims to alleviate these concerns by adapting a source-trained model to the target domain without requiring access to the source data. In this paper, we explore the SFDA setting for the task of adaptive object detection. To this end, we propose a novel training strategy for adapting a source-trained object detector to the target domain without source data. More precisely, we design a novel contrastive loss to enhance the target representations by exploiting the object relations for a given target domain input. These object instance relations are modelled using an Instance Relation Graph (IRG) network, which is then used to guide contrastive representation learning. In addition, we utilize a student-teacher based knowledge distillation strategy to avoid overfitting to the noisy pseudo-labels generated by the source-trained model. Extensive experiments on multiple object detection benchmark datasets show that the proposed approach is able to efficiently adapt source-trained object detectors to the target domain, outperforming previous state-of-the-art domain adaptive detection methods. Code and models are available at \href{https://viudomain.github.io/irg-sfda-web/}{https://viudomain.github.io/irg-sfda-web/}.
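
The abstract names two ingredients: a contrastive loss over object instances guided by a relation graph, and student-teacher knowledge distillation on the target domain. The following is a minimal PyTorch sketch, under stated assumptions, of how such components could look; the function names, the precomputed affinity input (a stand-in for the IRG output), and the hyper-parameter values are illustrative assumptions, not the paper's implementation (see the linked code for that).

import torch
import torch.nn.functional as F


def ema_update(teacher, student, momentum=0.999):
    # Exponential-moving-average update of the teacher from the student,
    # a common realisation of student-teacher distillation; the momentum
    # value here is an illustrative assumption.
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(momentum).add_(s_p, alpha=1.0 - momentum)


def graph_guided_contrastive_loss(features, affinity, temperature=0.1):
    # features: (N, D) per-object (RoI) embeddings from one target image.
    # affinity: (N, N) relation weights in [0, 1], standing in for the
    #           output of an instance-relation graph; here it is a
    #           hypothetical precomputed input, not the IRG network itself.
    z = F.normalize(features, dim=1)
    sim = z @ z.t() / temperature                      # pairwise similarities
    off_diag = ~torch.eye(len(z), dtype=torch.bool, device=z.device)
    # Log-softmax over all other instances (the contrastive denominator).
    log_prob = sim - torch.logsumexp(
        sim.masked_fill(~off_diag, float('-inf')), dim=1, keepdim=True)
    # Pull together pairs the graph marks as related, push the rest apart.
    w = affinity * off_diag
    return -(w * log_prob).sum() / w.sum().clamp(min=1e-6)


# Toy usage with random tensors in place of detector RoI features.
feats = torch.randn(8, 256)
aff = torch.rand(8, 8)
print(graph_guided_contrastive_loss(feats, aff))

In this sketch the affinity matrix plays the role the abstract assigns to the IRG: it decides which instance pairs the contrastive loss treats as positives, rather than relying on fixed augmentation pairs or noisy pseudo-labels alone.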