Paper Title
SAE: Sequential Anchored Ensembles
Paper Authors
Paper Abstract
Computing the Bayesian posterior of a neural network is a challenging task due to the high dimensionality of the parameter space. Anchored ensembles approximate the posterior by training an ensemble of neural networks on anchored losses designed so that the optima follow the Bayesian posterior. Training an ensemble, however, becomes computationally expensive as its number of members grows, since the full training procedure is repeated for each member. In this note, we present Sequential Anchored Ensembles (SAE), a lightweight alternative to anchored ensembles. Instead of training each member of the ensemble from scratch, the members are trained sequentially on losses sampled with high auto-correlation, hence enabling fast convergence of the neural networks and efficient approximation of the Bayesian posterior. For a given computational budget, SAE outperform anchored ensembles on some benchmarks while showing comparable performance on the others, and achieved 2nd and 3rd place in the light and extended tracks of the NeurIPS 2021 Approximate Inference in Bayesian Deep Learning competition.
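
To make the idea concrete, below is a minimal Python sketch of the sequential anchored scheme, under assumptions not spelled out in the abstract: a Gaussian prior N(0, sigma_prior^2 I), a Gaussian likelihood for regression, and an autoregressive anchor chain as one plausible high-autocorrelation sampler (the paper's exact chain may differ). The names `make_model`, `sample_prior`, `next_anchor`, `anchored_loss`, and the parameter `rho` are illustrative, not from the paper.

```python
# Hypothetical sketch of Sequential Anchored Ensembles (SAE); not the
# authors' reference implementation.
import copy
import torch
import torch.nn as nn

def make_model():
    # Small regression network; stands in for any architecture.
    return nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

def sample_prior(model, sigma_prior):
    # One anchor: a draw from the Gaussian prior over the weights.
    return [sigma_prior * torch.randn_like(p) for p in model.parameters()]

def next_anchor(anchor, model, sigma_prior, rho=0.95):
    # High-autocorrelation anchor chain (assumed form): mixing the previous
    # anchor with a fresh prior draw preserves the N(0, sigma_prior^2 I)
    # marginal while keeping consecutive anchors close to each other.
    fresh = sample_prior(model, sigma_prior)
    return [rho * a + (1 - rho ** 2) ** 0.5 * f
            for a, f in zip(anchor, fresh)]

def anchored_loss(model, x, y, anchor, sigma_noise, sigma_prior):
    # Data fit plus a pull toward the anchor; minimizing this loss places
    # the optimum near a sample from the Bayesian posterior.
    mse = ((model(x) - y) ** 2).sum() / (2 * sigma_noise ** 2)
    reg = sum(((p - a) ** 2).sum()
              for p, a in zip(model.parameters(), anchor))
    return mse + reg / (2 * sigma_prior ** 2)

def sequential_anchored_ensemble(x, y, n_members=10, sigma_noise=0.1,
                                 sigma_prior=1.0,
                                 steps_first=2000, steps_next=200):
    model = make_model()
    anchor = sample_prior(model, sigma_prior)
    ensemble = []
    for k in range(n_members):
        # Full training for the first member, short warm-started runs
        # afterwards: consecutive anchored losses are similar, so the
        # previous member's optimum is a good starting point.
        steps = steps_first if k == 0 else steps_next
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(steps):
            opt.zero_grad()
            loss = anchored_loss(model, x, y, anchor,
                                 sigma_noise, sigma_prior)
            loss.backward()
            opt.step()
        ensemble.append(copy.deepcopy(model))
        anchor = next_anchor(anchor, model, sigma_prior)
    return ensemble
```

At prediction time, the ensemble is used as usual: averaging the members' outputs approximates the posterior predictive mean, and their spread gives an uncertainty estimate. The computational saving comes from `steps_next` being much smaller than `steps_first`, which is only sound because the correlated anchor chain keeps successive losses close.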