Paper Title

Learning Disentangled Semantic Representation for Domain Adaptation

Paper Authors

Ruichu Cai, Zijian Li, Pengfei Wei, Jie Qiao, Kun Zhang, Zhifeng Hao

Paper Abstract

Domain adaptation is an important but challenging task. Most existing domain adaptation methods struggle to extract a domain-invariant representation from a feature space in which domain information and semantic information are entangled. Different from previous efforts on the entangled feature space, we aim to extract the domain-invariant semantic information in a latent disentangled semantic representation (DSR) of the data. In DSR, we assume the data generation process is controlled by two independent sets of variables, i.e., the semantic latent variables and the domain latent variables. Under this assumption, we employ a variational auto-encoder to reconstruct the semantic and domain latent variables behind the data. We further devise a dual adversarial network to disentangle these two sets of reconstructed latent variables. The disentangled semantic latent variables are finally adapted across domains. Experimental studies show that our model yields state-of-the-art performance on several domain adaptation benchmark datasets.
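
The following is a minimal PyTorch sketch of the architecture the abstract describes: a variational auto-encoder whose latent space is split into a semantic block and a domain block, with a dual pair of adversarial heads that discourage each block from carrying the other factor's information. All module names, layer sizes, and head choices here are illustrative assumptions, not the authors' released implementation; in practice the adversarial heads are trained in a min-max fashion (e.g., gradient reversal or alternating updates), the usual VAE reconstruction and KL terms are added, and the semantic latents are further aligned between source and target domains.

```python
# Hypothetical sketch of a disentangled semantic representation (DSR) model.
# Layer sizes and names are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

class DSRSketch(nn.Module):
    def __init__(self, x_dim=256, z_dim=64, n_classes=10, n_domains=2):
        super().__init__()
        # Shared encoder; two independent latent blocks: semantic z_s, domain z_d.
        self.encoder = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        self.mu_s, self.logvar_s = nn.Linear(256, z_dim), nn.Linear(256, z_dim)
        self.mu_d, self.logvar_d = nn.Linear(256, z_dim), nn.Linear(256, z_dim)
        # VAE decoder reconstructs x from both latent blocks.
        self.decoder = nn.Sequential(nn.Linear(2 * z_dim, 256), nn.ReLU(),
                                     nn.Linear(256, x_dim))
        # Dual adversarial heads: each latent block should be uninformative
        # about the factor it must not encode (trained adversarially).
        self.domain_head_on_s = nn.Linear(z_dim, n_domains)  # adversary on z_s
        self.class_head_on_d = nn.Linear(z_dim, n_classes)   # adversary on z_d
        # Task classifier uses only the (domain-invariant) semantic latents.
        self.classifier = nn.Linear(z_dim, n_classes)

    @staticmethod
    def reparameterize(mu, logvar):
        # Standard VAE reparameterization trick.
        return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

    def forward(self, x):
        h = self.encoder(x)
        mu_s, logvar_s = self.mu_s(h), self.logvar_s(h)
        mu_d, logvar_d = self.mu_d(h), self.logvar_d(h)
        z_s = self.reparameterize(mu_s, logvar_s)
        z_d = self.reparameterize(mu_d, logvar_d)
        x_rec = self.decoder(torch.cat([z_s, z_d], dim=1))
        return {
            "x_rec": x_rec,                                # reconstruction target
            "y_pred": self.classifier(z_s),                # task prediction from z_s only
            "dom_from_s": self.domain_head_on_s(z_s),      # should stay near chance
            "cls_from_d": self.class_head_on_d(z_d),       # should stay near chance
            "kl_stats": (mu_s, logvar_s, mu_d, logvar_d),  # for the two KL terms
        }
```

Because the task classifier sees only z_s, a semantic block from which the domain cannot be predicted is exactly the kind of domain-invariant representation the abstract refers to.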
