Paper Title

Scalable Polyhedral Verification of Recurrent Neural Networks

Authors

Wonryong Ryou, Jiayu Chen, Mislav Balunovic, Gagandeep Singh, Andrei Dan, Martin Vechev

Abstract

We present a scalable and precise verifier for recurrent neural networks, called Prover, based on two novel ideas: (i) a method to compute a set of polyhedral abstractions for the non-convex and nonlinear recurrent update functions by combining sampling, optimization, and Fermat's theorem, and (ii) a gradient-descent-based algorithm for abstraction refinement, guided by the certification problem, that combines multiple abstractions for each neuron. Using Prover, we present the first study of certifying a non-trivial use case of recurrent neural networks, namely speech classification. To achieve this, we additionally develop custom abstractions for the non-linear speech preprocessing pipeline. Our evaluation shows that Prover successfully verifies several challenging recurrent models in computer vision, speech, and motion sensor data classification beyond the reach of prior work.
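To give intuition for idea (i), the sketch below illustrates what a single linear (polyhedral) bound for a nonlinear recurrent update can look like. It is not Prover's actual algorithm: as a stand-in, we fit a lower-bound plane for the LSTM-style product f(x, y) = σ(x)·tanh(y) over a box by sampling the function, fitting a plane with least squares, and shifting it down by the worst-case violation on the sample grid. All function and variable names here are illustrative assumptions, not the paper's API.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def f(x, y):
    # LSTM-style nonlinear update component: sigmoid gate times tanh cell
    return sigmoid(x) * np.tanh(y)

def linear_lower_bound(lx, ux, ly, uy, n=50):
    """Fit a plane a*x + b*y + c lying below f on an n-by-n sample grid
    over the box [lx, ux] x [ly, uy]. Illustrative sketch only."""
    xs, ys = np.meshgrid(np.linspace(lx, ux, n), np.linspace(ly, uy, n))
    zs = f(xs, ys).ravel()
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    # Least-squares best-fit plane through the sampled points
    coef, *_ = np.linalg.lstsq(A, zs, rcond=None)
    # Shift the plane down by the largest overshoot so it is a sound
    # lower bound at every sampled point
    gap = (A @ coef) - zs
    coef[2] -= gap.max()
    return coef  # (a, b, c) with a*x + b*y + c <= f(x, y) on the grid

a, b, c = linear_lower_bound(-1.0, 1.0, -1.0, 1.0)
```

A verifier would use such planes (one upper and one lower per neuron) to propagate a convex polyhedral over-approximation through the recurrent steps; Prover additionally maintains several candidate abstractions per neuron and refines their combination by gradient descent on the certification objective.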
