Paper Title
Finding Input Characterizations for Output Properties in ReLU Neural Networks
Paper Authors
Paper Abstract
Deep Neural Networks (DNNs) have emerged as a powerful mechanism and are increasingly deployed in real-world safety-critical domains. Despite their widespread success, their complex architecture makes proving formal guarantees about them difficult. Identifying how logical notions of high-level correctness relate to the complex low-level network architecture is a significant challenge. In this project, we extend the ideas presented in prior work and introduce a way to bridge the gap between the architecture and high-level specifications. Our key insight is that, instead of directly proving the required safety properties, we first prove properties that relate closely to the structure of the neural network and then use them to reason about the safety properties. We build theoretical foundations for our approach and empirically evaluate its performance through various experiments, achieving more promising results than the existing approach by identifying a larger region of the input space that guarantees a certain property on the output.
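Below is a minimal, hypothetical sketch (not the paper's implementation) of the structural fact this approach builds on: for a fixed ReLU activation pattern, the network restricted to the inputs realizing that pattern is a single affine map over a polyhedral input region, so an output property can be reasoned about exactly on that region of input space. The toy weights and the helper names (activation_pattern, affine_restriction) are assumptions chosen only for illustration.

import numpy as np

# Toy 2-layer ReLU network: f(x) = W2 @ relu(W1 @ x + b1) + b2 (weights are illustrative).
W1 = np.array([[1.0, -1.0], [0.5, 2.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([[1.0, 1.0]])
b2 = np.array([0.5])

def net(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def activation_pattern(x):
    """Which hidden ReLUs are active (pre-activation > 0) at input x."""
    return W1 @ x + b1 > 0

def affine_restriction(pattern):
    """Affine map (A, c) with f(x) = A @ x + c for every x realizing `pattern`,
    plus half-space constraints (G, h) with G @ x <= h describing that input region."""
    D = np.diag(pattern.astype(float))      # zero out the inactive hidden units
    A = W2 @ D @ W1
    c = W2 @ (D @ b1) + b2
    # Active units require W1 x + b1 >= 0, inactive units require W1 x + b1 <= 0;
    # flip the sign of the active rows so every constraint reads G x <= h.
    signs = np.where(pattern, -1.0, 1.0)
    G = signs[:, None] * W1
    h = -signs * b1
    return A, c, G, h

x0 = np.array([1.0, 0.2])
A, c, G, h = affine_restriction(activation_pattern(x0))

# Sanity check: on inputs sharing x0's activation pattern, the affine map equals the network,
# so any output property can be checked against A, c over the polytope {x : G x <= h}.
assert np.allclose(net(x0), A @ x0 + c)
print("input region (G x <= h):", G, h)
print("affine output on that region:", A, c)

In this sketch, the polytope {x : G x <= h} is an input characterization tied directly to the network's structure; checking a linear output property over it reduces to reasoning about one affine map (for example, via linear programming) rather than the full nonlinear network.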