Title
Batch Normalization Embeddings for Deep Domain Generalization
Authors
Abstract
Domain generalization aims at training machine learning models to perform robustly across different and unseen domains. Several recent methods use multiple datasets to train models to extract domain-invariant features, hoping to generalize to unseen domains. Instead, we first explicitly train domain-dependent representations by using ad-hoc batch normalization layers to collect independent domains' statistics. Then, we propose to use these statistics to map domains in a shared latent space, where membership to a domain can be measured by means of a distance function. At test time, we project samples from an unknown domain into the same space and infer properties of their domain as a linear combination of the known ones. We apply the same mapping strategy at training and test time, learning both a latent representation and a powerful but lightweight ensemble model. We show a significant increase in classification accuracy over current state-of-the-art techniques on popular domain generalization benchmarks: PACS, Office-31 and Office-Caltech.
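To make the idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of the mechanism the abstract describes: each domain is embedded by the first- and second-order statistics a batch normalization layer would accumulate over its data, and an unseen domain is expressed as a distance-based weighting over the known domain embeddings. The domain names, feature shapes, and the softmax-over-negative-distances weighting are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def bn_embedding(features):
    # Batch-norm-style statistics: per-channel mean and variance of a
    # feature batch, concatenated into a single embedding vector.
    return np.concatenate([features.mean(axis=0), features.var(axis=0)])

# Three hypothetical training domains, each a batch of features (N x C)
# drawn with a different mean to mimic a domain shift.
domains = {name: rng.normal(loc=mu, size=(128, 8))
           for name, mu in [("photo", 0.0), ("sketch", 1.0), ("cartoon", 2.0)]}
embeddings = {name: bn_embedding(f) for name, f in domains.items()}

def domain_weights(test_features):
    # Project the unseen domain into the same latent space and express it
    # as a combination of the known domains; the softmax over negative
    # Euclidean distances is one illustrative choice of weighting.
    e = bn_embedding(test_features)
    dists = np.array([np.linalg.norm(e - embeddings[k]) for k in embeddings])
    w = np.exp(-dists) / np.exp(-dists).sum()
    return dict(zip(embeddings, w))

# An unseen domain whose statistics sit closest to "sketch" should
# receive most of its weight from that domain.
w = domain_weights(rng.normal(loc=1.1, size=(128, 8)))
```

The weights `w` then play the role of mixing coefficients for a lightweight ensemble over the per-domain models.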