Paper Title

Unsupervised Domain Adaptation for Segmentation with Black-box Source Model

Authors

Xiaofeng Liu, Chaehwa Yoo, Fangxu Xing, C.-C. Jay Kuo, Georges El Fakhri, Jonghye Woo

Abstract

Unsupervised domain adaptation (UDA) has been widely used to transfer knowledge from a labeled source domain to an unlabeled target domain, countering the difficulty of labeling in a new domain. Conventional solutions, however, usually require both source and target domain data during training. Privacy concerns over the large-scale, well-labeled source data and the trained model parameters can become a major obstacle to cross-center/cross-domain collaboration. In this work, to address this, we propose a practical solution to UDA for segmentation that uses only a black-box segmentation model trained in the source domain, rather than the original source data or a white-box source model. Specifically, we resort to a knowledge distillation scheme with exponential mixup decay (EMD) to gradually learn target-specific representations. In addition, unsupervised entropy minimization is applied to further regularize confidence in the target domain. We evaluated our framework on the BraTS 2018 database, achieving performance on par with white-box source model adaptation approaches.
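The two training ingredients named in the abstract can be sketched in a few lines: an exponentially decaying mixup weight that shifts the pseudo-label from the black-box source prediction toward the target model's own prediction, plus an entropy term that sharpens target-domain confidence. This is a minimal illustrative sketch, not the paper's implementation; the exact decay schedule and the helper names (`emd_weight`, `mix_pseudo_label`) are assumptions.

```python
import math

def emd_weight(step, total_steps, lam0=1.0, decay=5.0):
    """Exponential mixup decay: the weight on the black-box source
    prediction starts at lam0 and decays toward 0 over training.
    (The decay rate is an assumed hyperparameter.)"""
    return lam0 * math.exp(-decay * step / total_steps)

def mix_pseudo_label(p_source, p_target, lam):
    """Per-class convex combination of the black-box source model's
    prediction and the current target model's prediction."""
    return [lam * s + (1.0 - lam) * t for s, t in zip(p_source, p_target)]

def entropy(p, eps=1e-12):
    """Shannon entropy of a probability vector; minimizing it over
    target data regularizes (sharpens) prediction confidence."""
    return -sum(pi * math.log(pi + eps) for pi in p)
```

Early in training (`step = 0`) the pseudo-label equals the source model's output; as `emd_weight` decays, supervision gradually hands over to the target model's own, increasingly target-specific predictions.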
