Paper Title

Predictive Complexity Priors

Authors

Eric Nalisnick, Jonathan Gordon, José Miguel Hernández-Lobato

Abstract

Specifying a Bayesian prior is notoriously difficult for complex models such as neural networks. Reasoning about parameters is made challenging by the high-dimensionality and over-parameterization of the space. Priors that seem benign and uninformative can have unintuitive and detrimental effects on a model's predictions. For this reason, we propose predictive complexity priors: a functional prior that is defined by comparing the model's predictions to those of a reference model. Although originally defined on the model outputs, we transfer the prior to the model parameters via a change of variables. The traditional Bayesian workflow can then proceed as usual. We apply our predictive complexity prior to high-dimensional regression, reasoning over neural network depth, and sharing of statistical strength for few-shot learning.
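The change of variables the abstract refers to can be sketched as follows. This is an illustrative simplification, not the paper's exact construction: it assumes an invertible, differentiable map $g$ from parameters $\theta$ to predictions $f$ (in practice, over-parameterized networks have non-invertible maps and the paper must handle that case differently):

```latex
% A functional prior p_F on outputs f = g(\theta) induces a prior on
% parameters via the standard change-of-variables formula
% (assuming g is invertible and differentiable):
p_\Theta(\theta)
  \;=\;
  p_F\big(g(\theta)\big)\,
  \left| \det \frac{\partial g(\theta)}{\partial \theta} \right|
```

The Jacobian determinant accounts for how $g$ stretches or compresses volume, so that probability mass placed on predictive behavior is correctly redistributed over parameter space, after which standard Bayesian inference over $\theta$ can proceed.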
