Paper Title
Explainable Artificial Intelligence for Bayesian Neural Networks: Towards trustworthy predictions of ocean dynamics
Paper Authors
Paper Abstract
The trustworthiness of neural networks is often challenged because they lack the ability to express uncertainty and explain their skill. This can be problematic given the increasing use of neural networks in high-stakes decision-making, such as in climate change applications. We address both issues by successfully implementing a Bayesian Neural Network (BNN), where parameters are distributions rather than deterministic, and by applying novel implementations of explainable AI (XAI) techniques. The uncertainty analysis from the BNN provides a comprehensive overview of the prediction, better suited to practitioners' needs than predictions from a classical neural network. Using a BNN means we can calculate the entropy (i.e. uncertainty) of the predictions and determine whether the probability of an outcome is statistically significant. To enhance trustworthiness, we also spatially apply two XAI techniques: Layer-wise Relevance Propagation (LRP) and SHapley Additive exPlanation (SHAP) values. These XAI methods reveal the extent to which the BNN is suitable and/or trustworthy. Using two techniques gives a more holistic view of BNN skill and its uncertainty, as LRP considers neural network parameters, whereas SHAP considers changes to outputs. We verify these techniques by comparing them with intuition from physical theory. The differences in explanation identify potential areas where new studies guided by physical theory are needed.
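The abstract notes that a BNN lets one compute the entropy of predictions as an uncertainty measure. As an illustration only (the paper's actual implementation is not given here), a minimal sketch of predictive entropy from repeated stochastic forward passes of a BNN classifier, assuming the samples are available as a `(n_samples, n_classes)` array of class probabilities:

```python
import numpy as np

def predictive_entropy(prob_samples):
    """Entropy of the mean predictive distribution from BNN Monte Carlo samples.

    prob_samples: array of shape (n_samples, n_classes), where each row is the
    class-probability output of one stochastic forward pass through the BNN.
    (The function name and input layout are illustrative assumptions, not the
    paper's code.)
    """
    mean_probs = prob_samples.mean(axis=0)  # average over the MC samples
    eps = 1e-12                             # guard against log(0)
    return float(-np.sum(mean_probs * np.log(mean_probs + eps)))
```

A confident prediction (probability mass concentrated on one class) yields near-zero entropy, while a maximally uncertain prediction approaches `log(n_classes)`; in this spirit, high entropy flags grid points where a practitioner should trust the forecast less.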