Paper Title
Multi-Head Distillation for Continual Unsupervised Domain Adaptation in Semantic Segmentation
Paper Authors
Paper Abstract
Unsupervised Domain Adaptation (UDA) is a transfer learning task that aims to train a model on an unlabeled target domain by leveraging a labeled source domain. Beyond the traditional scope of UDA with a single source domain and a single target domain, real-world perception systems must handle a variety of scenarios, from varying lighting conditions to the many cities around the world. In this context, UDA with several target domains adds the challenge of distribution shifts across the different target domains. This work focuses on a novel framework for learning UDA, continual UDA, in which models operate on multiple target domains discovered sequentially, without access to previous target domains. We propose MuHDi, for Multi-Head Distillation, a method that addresses the catastrophic forgetting problem inherent in continual learning tasks. MuHDi performs distillation at multiple levels from the previous model as well as from an auxiliary target-specialist segmentation head. We report extensive ablations and experiments on challenging multi-target UDA semantic segmentation benchmarks to validate the proposed learning scheme and architecture.
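To make the distillation idea concrete, below is a minimal, illustrative PyTorch sketch of one continual-UDA training step on a newly discovered target domain: a frozen copy of the previous model is distilled into the current model's main segmentation head (to preserve knowledge of earlier target domains), and an auxiliary target-specialist head is distilled into the same main head (to absorb knowledge of the new target). All names (SegNet, kd_loss, continual_step), the toy architecture, and the loss weighting are assumptions for illustration only, not the authors' implementation; the specialist head's own adaptation objective (e.g. adversarial alignment or self-training) is omitted.

```python
# Illustrative sketch only: the architecture, names, and loss composition are
# assumptions, not the MuHDi reference implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SegNet(nn.Module):
    """Tiny stand-in for a segmentation network: a shared encoder with a main
    (domain-agnostic) head and an auxiliary target-specialist head."""

    def __init__(self, num_classes: int = 19):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.main_head = nn.Conv2d(16, num_classes, 1)  # kept across all targets
        self.aux_head = nn.Conv2d(16, num_classes, 1)   # specialist for the new target

    def forward(self, x):
        feats = self.encoder(x)
        return self.main_head(feats), self.aux_head(feats)


def kd_loss(student_logits, teacher_logits, temperature: float = 1.0):
    """Pixel-wise KL distillation between soft segmentation predictions."""
    t = temperature
    p_teacher = F.softmax(teacher_logits / t, dim=1)
    log_p_student = F.log_softmax(student_logits / t, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)


def continual_step(model, prev_model, target_images, lambda_prev=1.0, lambda_aux=1.0):
    """One unlabeled step on a new target domain, with no access to old domains.

    - lambda_prev weights distillation from the frozen previous model into the
      current main head, mitigating forgetting of earlier target domains.
    - lambda_aux weights distillation from the auxiliary target-specialist head
      into the main head, transferring knowledge of the new target.
    """
    main_logits, aux_logits = model(target_images)
    with torch.no_grad():
        prev_logits, _ = prev_model(target_images)  # previous model stays frozen

    loss_prev = kd_loss(main_logits, prev_logits)          # retain old targets
    loss_aux = kd_loss(main_logits, aux_logits.detach())   # absorb new target
    return lambda_prev * loss_prev + lambda_aux * loss_aux


if __name__ == "__main__":
    model, prev_model = SegNet(), SegNet()
    prev_model.load_state_dict(model.state_dict())
    prev_model.eval()
    images = torch.randn(2, 3, 64, 64)  # unlabeled batch from the new target domain
    loss = continual_step(model, prev_model, images)
    loss.backward()
    print(f"toy distillation loss: {loss.item():.4f}")
```

In this sketch, "multiple levels" of distillation is reduced to output-level distillation for brevity; the same pattern could be applied to intermediate encoder features as well.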