Paper Title
TEAM: A Taylor Expansion-Based Method for Generating Adversarial Examples
Paper Authors
Paper Abstract
Although deep neural networks (DNNs) have been applied successfully in many fields, they are vulnerable to adversarial examples. Adversarial training is one of the most effective methods for improving the robustness of DNNs, and it is generally formulated as a saddle point problem: an inner maximization that seeks the worst-case perturbation, and an outer minimization of the resulting risk. Powerful adversarial examples can therefore closely approximate the inner perturbation maximization and help solve the saddle point problem. The method proposed in this paper approximates the output of a DNN in a neighborhood of the input by a Taylor expansion, and then maximizes the approximation under a perturbation constraint using the Lagrange multiplier method to generate adversarial examples. When these examples are used for adversarial training, the DNN is effectively regularized and its weaknesses are mitigated.
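As a concrete illustration of the idea in the abstract, the sketch below approximates the loss to first order with a Taylor expansion around the input and solves the resulting Lagrangian for an L2-bounded perturbation in closed form. This is a minimal sketch, not the authors' exact TEAM algorithm: the function name team_like_attack, the cross-entropy loss, the L2 constraint, and the eps value are illustrative assumptions, and the paper may use higher-order Taylor terms or a different norm.

```python
import torch
import torch.nn.functional as F

def team_like_attack(model, x, y, eps=0.03):
    # First-order Taylor approximation of the loss around x:
    #   L(x + delta) ~= L(x) + g^T delta,  where g = dL/dx.
    # Maximizing g^T delta subject to ||delta||_2 <= eps via the
    # Lagrangian  g^T delta - lam * (||delta||_2^2 - eps^2)
    # yields the closed-form solution  delta* = eps * g / ||g||_2.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    (grad,) = torch.autograd.grad(loss, x)
    g = grad.flatten(1)  # one gradient row per sample
    delta = eps * g / (g.norm(dim=1, keepdim=True) + 1e-12)
    return (x + delta.view_as(x)).detach()

# Illustrative usage with a toy classifier (shapes are assumptions):
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
x, y = torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,))
x_adv = team_like_attack(model, x, y)
```

Used for adversarial training, one would then minimize the loss on x_adv, optionally mixed with the clean loss, which corresponds to the outer minimization of the saddle point problem described above.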