Paper Title
A Safety Framework for Critical Systems Utilising Deep Neural Networks
Paper Authors
Paper Abstract
Increasingly sophisticated mathematical modelling processes from Machine Learning are being used to analyse complex data. However, the performance and explainability of these models within practical critical systems require rigorous and continuous verification of their safe utilisation. Working towards addressing this challenge, this paper presents a principled, novel safety argument framework for critical systems that utilise deep neural networks. The approach allows various forms of prediction, e.g., future reliability of passing some demands, or confidence in a required reliability level. It is supported by a Bayesian analysis using operational data and recent verification and validation techniques for deep learning. The prediction is conservative -- it starts with partial prior knowledge obtained from lifecycle activities and then determines the worst-case prediction. Open challenges are also identified.
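The abstract's idea of a conservative, worst-case Bayesian prediction from partial prior knowledge can be illustrated with a small numerical sketch. The scenario, function name, and numbers below are illustrative assumptions, not the paper's actual derivation: suppose lifecycle activities only justify the partial claim "P(pfd <= eps) >= theta" about the probability of failure per demand (pfd), and operation then yields n failure-free demands. Rather than fixing one complete prior, we search over a family of two-point priors consistent with the partial claim and keep the smallest (worst-case) posterior confidence in a target reliability level:

```python
import numpy as np

def worst_case_confidence(eps, theta, n, target, grid=200):
    """Toy sketch of conservative Bayesian reliability prediction.

    Partial prior knowledge: P(pfd <= eps) >= theta.
    Evidence: n failure-free demands in operation.
    We minimise the posterior P(pfd <= target | evidence) over
    two-point priors that satisfy the partial prior constraint:
    mass theta on an atom a <= eps, mass 1-theta on an atom b > target.
    """
    a_vals = np.linspace(0.0, eps, grid)            # candidate "good" atoms
    b_vals = np.linspace(target + 1e-9, 0.5, grid)  # candidate "bad" atoms
    worst = 1.0
    for a in a_vals:
        for b in b_vals:
            like_a = (1 - a) ** n   # likelihood of n failure-free demands
            like_b = (1 - b) ** n
            post = theta * like_a / (theta * like_a + (1 - theta) * like_b)
            worst = min(worst, post)  # keep the most pessimistic posterior
    return worst

# Hypothetical numbers: 90% prior confidence that pfd <= 1e-4,
# then 5000 failure-free demands; claim target is pfd <= 1e-3.
conf = worst_case_confidence(eps=1e-4, theta=0.9, n=5000, target=1e-3)
print(f"worst-case posterior confidence that pfd <= 1e-3: {conf:.3f}")
```

Even under the least favourable prior consistent with the partial knowledge, the failure-free operational evidence raises confidence in the target level above the initial theta, which is the sense in which such a prediction is conservative yet still useful.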