Paper Title

Domain2Vec: Domain Embedding for Unsupervised Domain Adaptation

Paper Authors

Xingchao Peng, Yichen Li, Kate Saenko

Paper Abstract

Conventional unsupervised domain adaptation (UDA) studies the knowledge transfer between a limited number of domains. This neglects the more practical scenario where data are distributed in numerous different domains in the real world. The domain similarity between those domains is critical for domain adaptation performance. To describe and learn relations between different domains, we propose a novel Domain2Vec model to provide vectorial representations of visual domains based on joint learning of feature disentanglement and Gram matrix. To evaluate the effectiveness of our Domain2Vec model, we create two large-scale cross-domain benchmarks. The first one is TinyDA, which contains 54 domains and about one million MNIST-style images. The second benchmark is DomainBank, which is collected from 56 existing vision datasets. We demonstrate that our embedding is capable of predicting domain similarities that match our intuition about visual relations between different domains. Extensive experiments are conducted to demonstrate the power of our new datasets in benchmarking state-of-the-art multi-source domain adaptation methods, as well as the advantage of our proposed model.
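
The abstract's core idea, representing a visual domain by Gram-matrix statistics of deep features, can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the tiny backbone, the batch-averaged Gram embedding, and the cosine similarity measure are all assumptions for demonstration, and the joint feature-disentanglement training described in the paper is omitted.

# A minimal sketch (not the authors' code) of embedding a domain via
# Gram-matrix statistics of deep features. All names here
# (backbone, domain_embedding, domain_similarity) are illustrative.
import torch
import torch.nn as nn

# Hypothetical feature extractor; Domain2Vec additionally disentangles
# domain-specific from category-specific features, which this sketch omits.
backbone = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
)

def domain_embedding(images: torch.Tensor) -> torch.Tensor:
    """Embed a batch of images from one domain as a flattened Gram matrix.

    images: (N, 3, H, W) tensor sampled from a single domain.
    Returns a 1-D vector of length C*C, where C is the feature channel count.
    """
    feats = backbone(images)                        # (N, C, H', W')
    n, c, h, w = feats.shape
    feats = feats.reshape(n, c, h * w)              # flatten spatial dims
    gram = torch.bmm(feats, feats.transpose(1, 2))  # per-image Gram matrix (N, C, C)
    gram = gram / (c * h * w)                       # scale normalization
    return gram.mean(dim=0).flatten()               # average over the batch

def domain_similarity(emb_a: torch.Tensor, emb_b: torch.Tensor) -> float:
    # Cosine similarity between two domain embeddings (an assumption;
    # the paper's exact similarity measure may differ).
    return torch.nn.functional.cosine_similarity(emb_a, emb_b, dim=0).item()

if __name__ == "__main__":
    domain_a = torch.rand(8, 3, 32, 32)  # stand-ins for two image domains
    domain_b = torch.rand(8, 3, 32, 32)
    sim = domain_similarity(domain_embedding(domain_a), domain_embedding(domain_b))
    print(f"domain similarity: {sim:.4f}")

The Gram matrix captures second-order feature statistics rather than image content, which is why it is a natural fingerprint of a domain's style; averaging it over a batch yields a fixed-length vector that can be compared across domains, matching the abstract's goal of predicting domain similarities.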
