Paper Title

Amortized Bayesian model comparison with evidential deep learning

Authors

Stefan T. Radev, Marco D'Alessandro, Ulf K. Mertens, Andreas Voss, Ullrich Köthe, Paul-Christian Bürkner

Abstract

Comparing competing mathematical models of complex natural processes is a shared goal among many branches of science. The Bayesian probabilistic framework offers a principled way to perform model comparison and extract useful metrics for guiding decisions. However, many interesting models are intractable with standard Bayesian methods, as they lack a closed-form likelihood function or the likelihood is computationally too expensive to evaluate. With this work, we propose a novel method for performing Bayesian model comparison using specialized deep learning architectures. Our method is purely simulation-based and circumvents the step of explicitly fitting all alternative models under consideration to each observed dataset. Moreover, it requires no hand-crafted summary statistics of the data and is designed to amortize the cost of simulation over multiple models and observable datasets. This makes the method particularly effective in scenarios where model fit needs to be assessed for a large number of datasets, so that per-dataset inference is practically infeasible. Finally, we propose a novel way to measure epistemic uncertainty in model comparison problems. We demonstrate the utility of our method on toy examples and simulated data from non-trivial models from cognitive science and single-cell neuroscience. We show that our method achieves excellent results in terms of accuracy, calibration, and efficiency across the examples considered in this work. We argue that our framework can enhance and enrich model-based analysis and inference in many fields dealing with computational models of natural processes. We further argue that the proposed measure of epistemic uncertainty provides a unique proxy to quantify absolute evidence even in a framework which assumes that the true data-generating model is within a finite set of candidate models.
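The abstract describes a network that maps simulated datasets directly to evidence over candidate models, from which both posterior model probabilities and an epistemic-uncertainty proxy can be read off. Below is a minimal sketch of that general idea, assuming PyTorch, a simple mean-pooling summary network, a toy Gaussian simulator, a Dirichlet output head, and a plain cross-entropy loss on the Dirichlet mean; the names (EvidentialComparator, model_probabilities_and_uncertainty) and all architectural choices are illustrative assumptions, not the authors' reference implementation or exact loss.

```python
# Sketch only: an evidential-style classifier over candidate models,
# trained purely on simulated data (no hand-crafted summary statistics).
import torch
import torch.nn as nn


class EvidentialComparator(nn.Module):
    """Maps a dataset of i.i.d. observations to Dirichlet evidence over M models."""

    def __init__(self, data_dim: int, num_models: int, hidden: int = 64):
        super().__init__()
        # Per-observation encoder followed by mean pooling gives a
        # permutation-invariant, learned summary of the whole dataset.
        self.encoder = nn.Sequential(
            nn.Linear(data_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, num_models)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_observations, data_dim)
        summary = self.encoder(x).mean(dim=1)  # (batch, hidden)
        # Softplus keeps the evidence positive; +1 gives concentrations alpha >= 1.
        alpha = nn.functional.softplus(self.head(summary)) + 1.0
        return alpha


def model_probabilities_and_uncertainty(alpha: torch.Tensor):
    """Expected model probabilities and a simple epistemic-uncertainty proxy."""
    alpha0 = alpha.sum(dim=-1, keepdim=True)
    probs = alpha / alpha0                         # Dirichlet mean
    num_models = alpha.shape[-1]
    uncertainty = num_models / alpha0.squeeze(-1)  # ~1 with no evidence, -> 0 as evidence grows
    return probs, uncertainty


if __name__ == "__main__":
    # Amortized training loop: sample a model index, simulate a dataset from it,
    # and train the network to recover the model index from the raw data.
    net = EvidentialComparator(data_dim=2, num_models=3)
    optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
    for step in range(100):
        m = torch.randint(0, 3, (32,))                           # true model indices
        x = torch.randn(32, 50, 2) + m.float().view(-1, 1, 1)    # toy stand-in simulator
        alpha = net(x)
        probs, _ = model_probabilities_and_uncertainty(alpha)
        loss = nn.functional.nll_loss(torch.log(probs), m)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Once trained, the same network can be applied to any number of observed datasets at negligible cost, which is the amortization property the abstract emphasizes; the `uncertainty` value illustrates how a Dirichlet-based output can signal when the observed data resemble none of the simulated candidates.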
