Paper Title

AMITE: A Novel Polynomial Expansion for Analyzing Neural Network Nonlinearities

Paper Authors

Mauro J. Sanchirico III, Xun Jiao, and C. Nataraj

Paper Abstract

Polynomial expansions are important in the analysis of neural network nonlinearities. They have been applied to address well-known difficulties in verification, explainability, and security. Existing approaches span classical Taylor and Chebyshev methods, asymptotic methods, and many numerical approaches. We find that while these individually have useful properties, such as exact error formulas, an adjustable domain, and robustness to undefined derivatives, no approach provides a consistent method that yields an expansion with all of these properties. To address this, we develop an analytically modified integral transform expansion (AMITE), a novel expansion via integral transforms modified using derived criteria for convergence. We present the general expansion and then demonstrate its application to two popular activation functions, the hyperbolic tangent and the rectified linear unit. Compared with existing expansions (i.e., Chebyshev, Taylor, and numerical) employed to this end, AMITE is the first to provide six previously mutually exclusive desired expansion properties, such as exact formulas for the coefficients and exact expansion errors (Table II). We demonstrate the effectiveness of AMITE in two case studies. First, a multivariate polynomial form is efficiently extracted from a single-hidden-layer black-box multilayer perceptron (MLP) to facilitate equivalence testing from noisy stimulus-response pairs. Second, a variety of feedforward neural network (FFNN) architectures having between 3 and 7 layers are range-bounded using Taylor models improved by the AMITE polynomials and error formulas. AMITE presents a new dimension of expansion methods suitable for the analysis and approximation of nonlinearities in neural networks, opening new directions and opportunities for the theoretical analysis and systematic testing of neural networks.
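
The abstract contrasts Taylor and Chebyshev expansions: Taylor series give exact coefficient formulas but a fixed expansion point, while Chebyshev fits offer an adjustable domain but typically numerical coefficients. The following minimal NumPy sketch illustrates that trade-off for tanh; it demonstrates the baseline methods only, not the AMITE expansion itself.

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from numpy.polynomial import polynomial as P

# Degree-7 Taylor coefficients of tanh about 0, ascending powers:
# tanh(x) = x - x^3/3 + 2x^5/15 - 17x^7/315 + ...
taylor = [0, 1, 0, -1/3, 0, 2/15, 0, -17/315]

# Chebyshev least-squares fit of the same degree on an adjustable
# domain, here [-3, 3].
x = np.linspace(-3, 3, 2001)
cheb = C.Chebyshev.fit(x, np.tanh(x), deg=7)

err_taylor = np.max(np.abs(np.tanh(x) - P.polyval(x, taylor)))
err_cheb = np.max(np.abs(np.tanh(x) - cheb(x)))
print(f"max error on [-3, 3]: Taylor {err_taylor:.3f}, Chebyshev {err_cheb:.3f}")
```

On [-3, 3] the degree-7 Taylor polynomial's error is large (the Taylor series of tanh converges only for |x| < π/2), while the same-degree Chebyshev fit stays accurate across the chosen domain, at the cost of coefficients obtained numerically rather than from an exact formula.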

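The first case study extracts a polynomial form from a single-hidden-layer black-box MLP. The sketch below illustrates only the underlying composition idea, under simplifying assumptions the paper does not make: a scalar input, known (made-up) weights, and NumPy's least-squares polynomial fit standing in for an expansion with exact coefficient and error formulas. The paper's actual procedure works from noisy stimulus-response pairs of a black-box network.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def compose_affine(c, w, b):
    """Coefficients of p(w*x + b), where p has coefficients c
    (ascending powers), via Horner's scheme in polynomial arithmetic."""
    out = np.array([c[-1]])
    for ck in c[-2::-1]:
        out = P.polyadd(P.polymul(out, [b, w]), [ck])
    return out

# Made-up weights for a tiny 1-input, 3-hidden-unit, 1-output tanh MLP.
W1 = np.array([0.8, -1.2, 0.5]); b1 = np.array([0.1, 0.0, -0.3])
w2 = np.array([1.5, -0.7, 0.9]); b2 = 0.2

# Degree-5 least-squares surrogate for tanh on [-2, 2] (a stand-in
# for an expansion with exact coefficient and error formulas).
z = np.linspace(-2, 2, 1001)
c = P.polyfit(z, np.tanh(z), 5)

# Substitute the surrogate into each hidden unit, then apply the
# output layer: the whole network collapses to one polynomial in x.
net = np.array([b2])
for wj, bj, vj in zip(W1, b1, w2):
    net = P.polyadd(net, vj * compose_affine(c, wj, bj))

xs = np.array([-1.0, 0.0, 0.5, 1.5])
print(P.polyval(xs, net))                                  # polynomial form
print(w2 @ np.tanh(np.outer(W1, xs) + b1[:, None]) + b2)   # actual MLP
```

Because each hidden unit's pre-activation is affine in the input, substituting a degree-d surrogate for the activation yields a single degree-d polynomial for the whole network; with vector inputs the same substitution yields the multivariate polynomial form the abstract describes.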