Paper Title

On Optimizing Back-Substitution Methods for Neural Network Verification

Authors

Tom Zelazny, Haoze Wu, Clark Barrett, Guy Katz

Abstract

With the increasing application of deep learning in mission-critical systems, there is a growing need to obtain formal guarantees about the behaviors of neural networks. Indeed, many approaches for verifying neural networks have been recently proposed, but these generally struggle with limited scalability or insufficient accuracy. A key component in many state-of-the-art verification schemes is computing lower and upper bounds on the values that neurons in the network can obtain for a specific input domain -- and the tighter these bounds, the more likely the verification is to succeed. Many common algorithms for computing these bounds are variations of the symbolic-bound propagation method; and among these, approaches that utilize a process called back-substitution are particularly successful. In this paper, we present an approach for making back-substitution produce tighter bounds. To achieve this, we formulate and then minimize the imprecision errors incurred during back-substitution. Our technique is general, in the sense that it can be integrated into numerous existing symbolic-bound propagation techniques, with only minor modifications. We implement our approach as a proof-of-concept tool, and present favorable results compared to state-of-the-art verifiers that perform back-substitution.
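The core idea the abstract refers to can be illustrated on purely affine layers: naive interval propagation concretizes bounds after every layer and so loses the correlations between neurons, while back-substitution composes the layers symbolically and concretizes only once at the input box, yielding tighter bounds. The following is a minimal sketch of that contrast (function names and the toy network are illustrative, not from the paper; real verifiers such as those discussed here must additionally handle ReLU relaxations):

```python
import numpy as np

def concretize(A, c, l, u):
    """Tight box bounds of A x + c over the input box [l, u]."""
    Ap, An = np.maximum(A, 0), np.minimum(A, 0)
    return Ap @ l + An @ u + c, Ap @ u + An @ l + c

def interval_bounds(layers, l, u):
    """Naive propagation: concretize after every affine layer."""
    for W, b in layers:
        l, u = concretize(W, b, l, u)
    return l, u

def backsub_bounds(layers, l, u):
    """Back-substitution: compose all affine layers symbolically,
    then concretize only once against the input box."""
    A, c = layers[0]
    for W, b in layers[1:]:
        A, c = W @ A, W @ c + b
    return concretize(A, c, l, u)

# Toy network computing y = x - x via an intermediate layer z = (x, x).
layers = [(np.array([[1.0], [1.0]]), np.zeros(2)),
          (np.array([[1.0, -1.0]]), np.zeros(1))]
l0, u0 = np.array([0.0]), np.array([1.0])

print(interval_bounds(layers, l0, u0))  # loose bound: y in [-1, 1]
print(backsub_bounds(layers, l0, u0))   # exact bound: y in [0, 0]
```

On this example the symbolic composition cancels the two copies of x exactly, whereas layer-by-layer intervals treat them as independent; the paper's contribution concerns minimizing the imprecision that back-substitution itself still incurs when nonlinear activations force intermediate relaxations.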
