Paper Title

Diverse Weight Averaging for Out-of-Distribution Generalization

Authors

Alexandre Ramé, Matthieu Kirchmeyer, Thibaud Rahier, Alain Rakotomamonjy, Patrick Gallinari, Matthieu Cord

Abstract

Standard neural networks struggle to generalize under distribution shifts in computer vision. Fortunately, combining multiple networks can consistently improve out-of-distribution generalization. In particular, weight averaging (WA) strategies were shown to perform best on the competitive DomainBed benchmark; they directly average the weights of multiple networks despite their nonlinearities. In this paper, we propose Diverse Weight Averaging (DiWA), a new WA strategy whose main motivation is to increase the functional diversity across averaged models. To this end, DiWA averages weights obtained from several independent training runs: indeed, models obtained from different runs are more diverse than those collected along a single run thanks to differences in hyperparameters and training procedures. We motivate the need for diversity by a new bias-variance-covariance-locality decomposition of the expected error, exploiting similarities between WA and standard functional ensembling. Moreover, this decomposition highlights that WA succeeds when the variance term dominates, which we show occurs when the marginal distribution changes at test time. Experimentally, DiWA consistently improves the state of the art on DomainBed without inference overhead.
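To make the weight-averaging mechanism concrete, below is a minimal sketch in PyTorch of uniformly averaging checkpoints obtained from independent training runs. It assumes all checkpoints share the same architecture and pretrained initialization; the helper name and checkpoint paths are illustrative, not the authors' released code.

```python
# Minimal sketch (not the authors' official implementation): uniformly average
# the state dicts of models fine-tuned in several independent runs that share
# the same architecture and pretrained initialization.
import copy
import torch


def average_weights(state_dicts):
    """Uniformly average a list of state dicts from independently trained runs."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        # Average floating-point parameters/buffers; keep integer buffers
        # (e.g. BatchNorm's num_batches_tracked) from the first checkpoint.
        if torch.is_floating_point(avg[key]):
            avg[key] = torch.mean(
                torch.stack([sd[key].float() for sd in state_dicts]), dim=0
            )
    return avg


# Usage (paths are illustrative): load M checkpoints from independent runs,
# average them, and evaluate the single resulting model -- unlike functional
# ensembling, this adds no inference overhead.
# state_dicts = [torch.load(f"run_{i}/best.ckpt") for i in range(M)]
# model.load_state_dict(average_weights(state_dicts))
```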
