Paper Title
WaveTransform: Crafting Adversarial Examples via Input Decomposition
Paper Authors
Paper Abstract
The frequency spectrum plays a significant role in learning unique and discriminative features for object recognition. Both the low- and high-frequency information present in images have been extracted and learnt by a host of representation learning techniques, including deep learning. Inspired by this observation, we introduce a novel class of adversarial attacks, namely `WaveTransform', that creates adversarial noise corresponding to the low-frequency and high-frequency subbands, separately (or in combination). The frequency subbands are analyzed using wavelet decomposition; the subbands are corrupted and then used to construct an adversarial example. Experiments are performed using multiple databases and CNN models to establish the effectiveness of the proposed WaveTransform attack and to analyze the importance of each frequency component. The robustness of the proposed attack is also evaluated through its transferability and its resiliency against a recent adversarial defense algorithm. Experiments show that the proposed attack is effective against the defense algorithm and is also transferable across CNNs.
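The decompose–corrupt–reconstruct pipeline the abstract describes can be illustrated with a minimal sketch. This is not the authors' code: it uses a hand-rolled one-level 2-D Haar transform in place of a wavelet library, and injects random noise into the chosen subbands, whereas the actual attack optimizes the subband perturbation against the target CNN. The function names and the `eps`/`target` knobs are illustrative assumptions.

```python
# Illustrative sketch (not the paper's implementation): one-level Haar
# wavelet decomposition, perturbation of low- and/or high-frequency
# subbands, and reconstruction of the adversarial image.
import numpy as np


def haar_dwt2(img):
    """One-level 2-D Haar transform -> (LL, LH, HL, HH) subbands.

    `img` is a 2-D array with even height and width, values in [0, 1].
    """
    a = (img[0::2] + img[1::2]) / 2.0   # row averages (low-pass)
    d = (img[0::2] - img[1::2]) / 2.0   # row differences (high-pass)
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH


def haar_idwt2(LL, LH, HL, HH):
    """Exact inverse of haar_dwt2 (perfect reconstruction)."""
    h, w = LL.shape
    a = np.empty((h, 2 * w))
    d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    img = np.empty((2 * h, 2 * w))
    img[0::2], img[1::2] = a + d, a - d
    return img


def wavetransform_perturb(img, eps, target="high", seed=0):
    """Corrupt the selected frequency subbands and reconstruct.

    Random noise stands in for the learned adversarial perturbation;
    `target` selects "low" (LL), "high" (LH/HL/HH), or "both".
    """
    LL, LH, HL, HH = haar_dwt2(img)
    rng = np.random.default_rng(seed)
    noise = lambda s: eps * rng.standard_normal(s.shape)
    if target in ("low", "both"):
        LL = LL + noise(LL)
    if target in ("high", "both"):
        LH, HL, HH = LH + noise(LH), HL + noise(HL), HH + noise(HH)
    return np.clip(haar_idwt2(LL, LH, HL, HH), 0.0, 1.0)
```

Corrupting only `LL` concentrates the change in the low-frequency content, while corrupting `LH`/`HL`/`HH` targets edges and texture, which is the separation of components the attack exploits.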