Paper Title
Unsupervised Intra-domain Adaptation for Semantic Segmentation through Self-Supervision
Paper Authors
Paper Abstract
Convolutional neural network-based approaches have achieved remarkable progress in semantic segmentation. However, these approaches heavily rely on annotated data, which is labor-intensive to obtain. To cope with this limitation, automatically annotated data generated by graphics engines can be used to train segmentation models. However, models trained on synthetic data are difficult to transfer to real images. To tackle this issue, previous works have considered directly adapting models from the source data to the unlabeled target data (to reduce the inter-domain gap). Nonetheless, these techniques do not consider the large distribution gap within the target data itself (the intra-domain gap). In this work, we propose a two-step self-supervised domain adaptation approach that minimizes the inter-domain and intra-domain gaps together. First, we conduct the inter-domain adaptation of the model; based on this adaptation, we separate the target domain into an easy and a hard split using an entropy-based ranking function. Then, to decrease the intra-domain gap, we employ a self-supervised adaptation technique from the easy split to the hard split. Experimental results on numerous benchmark datasets highlight the effectiveness of our method against existing state-of-the-art approaches. The source code is available at https://github.com/feipan664/IntraDA.git.
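To make the entropy-based ranking step concrete, below is a minimal PyTorch sketch of how such an easy/hard split could be computed; it is an illustration under stated assumptions, not the authors' released implementation (see the linked repository for that). The function name `entropy_rank_split`, the `(id, image)` batch format of `target_loader`, and the split ratio `lam` are hypothetical choices for this example.

```python
import torch
import torch.nn.functional as F

def entropy_rank_split(model, target_loader, lam=0.5, device="cuda"):
    """Rank target images by mean prediction entropy and split them into
    an 'easy' (low-entropy) and a 'hard' (high-entropy) subset.

    Assumes target_loader yields (image_ids, image_tensor) batches;
    lam is the fraction of images assigned to the easy split (illustrative value).
    """
    model.eval()
    scores = []  # (image_id, mean normalized entropy) pairs
    with torch.no_grad():
        for image_ids, images in target_loader:
            logits = model(images.to(device))            # (N, C, H, W)
            probs = F.softmax(logits, dim=1)
            # Pixel-wise entropy, normalized by log(C) so scores lie in [0, 1].
            ent = -(probs * torch.log(probs + 1e-12)).sum(dim=1)
            ent = ent / torch.log(torch.tensor(float(probs.size(1))))
            for i, img_id in enumerate(image_ids):
                scores.append((img_id, ent[i].mean().item()))
    scores.sort(key=lambda s: s[1])                      # low entropy first
    cut = int(len(scores) * lam)
    easy = [img_id for img_id, _ in scores[:cut]]
    hard = [img_id for img_id, _ in scores[cut:]]
    return easy, hard
```

In the self-supervised step described in the abstract, predictions on the easy split would then serve as pseudo-labels for adapting the model to the hard split.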