Paper Title
Deeply Aligned Adaptation for Cross-domain Object Detection
Paper Authors
Paper Abstract
Cross-domain object detection has recently attracted increasing attention for real-world applications, since it helps build robust detectors that adapt well to new environments. In this work, we propose an end-to-end solution based on Faster R-CNN, where ground-truth annotations are available for source images (e.g., cartoon) but not for target ones (e.g., watercolor) during training. Motivated by the observation that different neural network layers differ in their transferability, we propose to apply a set of domain alignment strategies to different layers of Faster R-CNN, where the alignment strength is gradually reduced from lower to higher layers. Moreover, after obtaining region proposals in our network, we develop a foreground-background aware alignment module to further reduce the domain mismatch by separately aligning features of the foreground and background regions from the source and target domains. Extensive experiments on benchmark datasets demonstrate the effectiveness of our proposed approach.
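The abstract describes two components: level-wise domain alignment whose strength decreases from lower to higher backbone layers, and a foreground-background aware alignment applied to region proposals. The sketch below is a minimal, hypothetical PyTorch illustration of these two ideas, assuming gradient-reversal-based adversarial alignment; all names, channel sizes, and per-level strengths (e.g., DomainClassifier, level_lambdas, fg_aligner) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of multi-level and foreground-background aware domain
# alignment, assuming adversarial alignment via gradient reversal.
# Everything here (names, channels, strengths) is an illustrative assumption.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negated, scaled gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class DomainClassifier(nn.Module):
    """Small per-level domain classifier fed through gradient reversal."""
    def __init__(self, in_channels, lambd):
        super().__init__()
        self.lambd = lambd  # alignment strength for this level
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=1),
        )

    def forward(self, feat):
        feat = GradReverse.apply(feat, self.lambd)
        return self.net(feat)  # per-location domain logits


# Alignment strength is gradually reduced from lower to higher backbone levels,
# reflecting the observation that lower layers transfer more easily.
level_channels = [256, 512, 1024]   # hypothetical backbone stage channels
level_lambdas = [1.0, 0.5, 0.1]     # strong -> weak alignment
aligners = nn.ModuleList(
    [DomainClassifier(c, l) for c, l in zip(level_channels, level_lambdas)]
)
bce = nn.BCEWithLogitsLoss()


def multi_level_alignment_loss(feats, domain_label):
    """feats: backbone feature maps from low to high levels;
    domain_label: 0.0 for source images, 1.0 for target images."""
    loss = 0.0
    for feat, aligner in zip(feats, aligners):
        logits = aligner(feat)
        loss = loss + bce(logits, torch.full_like(logits, domain_label))
    return loss


# Separate classifiers for RoI-pooled proposal features; the channel count and
# strength are assumptions. For target images (no annotations), the fg/bg split
# would have to come from predicted proposal scores rather than ground truth.
fg_aligner = DomainClassifier(in_channels=1024, lambd=0.1)
bg_aligner = DomainClassifier(in_channels=1024, lambd=0.1)


def fg_bg_alignment_loss(roi_feats, is_foreground, domain_label):
    """Align pooled proposal features separately for foreground and background regions."""
    loss = 0.0
    fg, bg = roi_feats[is_foreground], roi_feats[~is_foreground]
    if fg.numel() > 0:
        logits = fg_aligner(fg)
        loss = loss + bce(logits, torch.full_like(logits, domain_label))
    if bg.numel() > 0:
        logits = bg_aligner(bg)
        loss = loss + bce(logits, torch.full_like(logits, domain_label))
    return loss


# Usage sketch with random tensors standing in for a source-domain batch.
feats = [torch.randn(2, c, 32 // (2 ** i), 32 // (2 ** i))
         for i, c in enumerate(level_channels)]
print(multi_level_alignment_loss(feats, domain_label=0.0))
```

In this sketch the per-level weights play the role of the gradually reduced alignment strength, and the separate foreground/background classifiers stand in for the foreground-background aware module; the exact losses and architectures used in the paper may differ.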