Paper Title

Dynamic Domain Generalization

Paper Authors

Zhishu Sun, Zhifeng Shen, Luojun Lin, Yuanlong Yu, Zhifeng Yang, Shicai Yang, Weijie Chen

Paper Abstract

Domain generalization (DG) is a fundamental yet challenging research topic in machine learning. Existing methods mainly focus on learning domain-invariant features from a limited set of source domains within a static model. Unfortunately, they lack a training-free mechanism to adjust the model when generalizing to agnostic target domains. To tackle this problem, we develop a brand-new DG variant, namely Dynamic Domain Generalization (DDG), in which the model learns to twist its network parameters to adapt to data from different domains. Specifically, we leverage a meta-adjuster to twist the network parameters of a static model with respect to data from different domains. In this way, the static model is optimized to learn domain-shared features, while the meta-adjuster is designed to learn domain-specific features. To enable this process, DomainMix is exploited to simulate data from diverse domains while teaching the meta-adjuster to adapt to upcoming agnostic target domains. This learning mechanism urges the model to generalize to different agnostic target domains by adjusting the model without further training. Extensive experiments demonstrate the effectiveness of the proposed method. Code is available at: https://github.com/MetaVisionLab/DDG
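The core idea in the abstract can be illustrated with a toy sketch: a static model holds domain-shared weights, while a "meta-adjuster" predicts an input-conditioned delta that twists those weights at inference time, with no retraining. Everything below is an illustrative assumption (a one-layer linear model and a hand-crafted rank-one adjuster), not the paper's actual architecture; in DDG the adjuster itself is learned via meta-learning with DomainMix-simulated domains.

```python
# Toy sketch of dynamic parameter adjustment (illustrative only; the real
# DDG meta-adjuster is a learned network, not this hand-crafted rule).

def linear(weights, x):
    """One linear layer: y[i] = sum_j weights[i][j] * x[j]."""
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in weights]

def meta_adjuster(x, scale=0.1):
    """Predict a per-sample weight delta from input statistics.
    Here the delta is simply proportional to the input mean — an
    assumption standing in for the learned domain-specific mapping."""
    mean = sum(x) / len(x)
    return [[scale * mean for _ in x] for _ in range(2)]

def dynamic_forward(static_w, x):
    """Effective weights = domain-shared static weights + input-specific twist."""
    delta = meta_adjuster(x)
    adjusted = [[w + d for w, d in zip(w_row, d_row)]
                for w_row, d_row in zip(static_w, delta)]
    return linear(adjusted, x)

# Domain-shared weights (2 outputs, 3 inputs), fixed after training.
static_w = [[0.5, -0.2, 0.1],
            [0.0,  0.3, -0.1]]

# Two inputs, e.g. from different domains, induce different effective
# weights — and hence different behavior — without any training step.
print(dynamic_forward(static_w, [1.0, 2.0, 3.0]))
print(dynamic_forward(static_w, [-1.0, 0.0, 1.0]))
```

The design point this captures is the split of responsibilities: `static_w` plays the role of the domain-shared feature extractor, while `meta_adjuster` supplies the domain-specific correction conditioned on the sample itself, which is what makes the adaptation training-free at test time.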
