Paper Title
Enhancing Resilience of Deep Learning Networks by Means of Transferable Adversaries
Paper Authors
Paper Abstract
Artificial neural networks in general and deep learning networks in particular have established themselves as popular and powerful machine learning algorithms. While the often tremendous size of these networks is beneficial when solving complex tasks, the large number of parameters also causes such networks to be vulnerable to malicious behavior such as adversarial perturbations. These perturbations can change a model's classification decision. Moreover, while single-step adversaries can easily be transferred from network to network, the transfer of more powerful multi-step adversaries has usually been rather difficult. In this work, we introduce a method for generating strong adversaries that can easily (and frequently) be transferred between different models. This method is then used to generate a large set of adversaries, based on which the effects of selected defense methods are experimentally assessed. Finally, we introduce a novel, simple, yet effective approach to enhance the resilience of neural networks against adversaries and benchmark it against established defense methods. In contrast to existing methods, our proposed defense approach is much more efficient, as it requires only a single additional forward pass to achieve comparable performance results.
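For context on the single-step adversaries and transferability mentioned in the abstract, below is a minimal illustrative sketch (not the paper's proposed method) of crafting a standard FGSM adversary on one PyTorch model and measuring how often it also fools a second model; the function names, models, and the epsilon value are assumptions for illustration only.

```python
# Illustrative sketch only: single-step (FGSM) adversaries and a simple
# transferability check between two models. Not the paper's method.
import torch
import torch.nn as nn


def fgsm_adversary(model, x, y, epsilon=0.03):
    """Craft single-step FGSM adversarial examples for inputs x with labels y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()


def transfer_success_rate(source_model, target_model, x, y, epsilon=0.03):
    """Fraction of adversaries crafted on source_model that also fool target_model."""
    x_adv = fgsm_adversary(source_model, x, y, epsilon)
    with torch.no_grad():
        preds = target_model(x_adv).argmax(dim=1)
    return (preds != y).float().mean().item()
```

A multi-step attack would repeat the gradient step with projection onto the epsilon ball; such adversaries are typically stronger on the source model but, as the abstract notes, have usually been harder to transfer.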