Paper Title
Improving Robustness of Convolutional Neural Networks Using Element-Wise Activation Scaling
Paper Authors
Paper Abstract
Recent works reveal that re-calibrating the intermediate activations of adversarial examples can improve the adversarial robustness of a CNN model. The state-of-the-art methods [Bai et al., 2021] and [Yan et al., 2021] explore this feature at the channel level, i.e., the activation of a channel is uniformly scaled by a single factor. In this paper, we investigate intermediate activation manipulation at a more fine-grained level. Instead of uniformly scaling an activation, we individually adjust each element within it and thus propose Element-Wise Activation Scaling, dubbed EWAS, to improve CNNs' adversarial robustness. Experimental results on ResNet-18 and WideResNet with CIFAR10 and SVHN show that EWAS significantly improves robust accuracy. In particular, for ResNet-18 on CIFAR10, EWAS increases the adversarial accuracy by 37.65%, to 82.35%, against the C&W attack. EWAS is simple yet very effective in terms of improving robustness. The code is anonymously available at https://anonymous.4open.science/r/EWAS-DD64.
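To make the channel-wise versus element-wise distinction concrete, a minimal PyTorch sketch is given below. This is not the authors' implementation: the module names, the learned `scale` parameters, and the fixed feature-map size are illustrative assumptions. The only point it demonstrates is the granularity difference, one factor per channel versus one factor per activation element.

```python
import torch
import torch.nn as nn


class ChannelWiseScaling(nn.Module):
    """Channel-level re-calibration (as in prior channel-wise work):
    one learned factor per channel, broadcast over all spatial positions."""

    def __init__(self, num_channels: int):
        super().__init__()
        # shape (1, C, 1, 1): a single factor per channel
        self.scale = nn.Parameter(torch.ones(1, num_channels, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.scale


class ElementWiseScaling(nn.Module):
    """Element-level re-calibration in the spirit of EWAS (hypothetical
    sketch): an individual learned factor for every element of the
    activation map, so the feature-map size is fixed at construction."""

    def __init__(self, num_channels: int, height: int, width: int):
        super().__init__()
        # shape (1, C, H, W): one factor per activation element
        self.scale = nn.Parameter(torch.ones(1, num_channels, height, width))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.scale


if __name__ == "__main__":
    # Dummy intermediate activation, e.g. the output of an early
    # ResNet-18 stage on CIFAR10: (batch, 64 channels, 32 x 32).
    act = torch.randn(8, 64, 32, 32)
    print(ChannelWiseScaling(64)(act).shape)          # torch.Size([8, 64, 32, 32])
    print(ElementWiseScaling(64, 32, 32)(act).shape)  # torch.Size([8, 64, 32, 32])
```

In both cases the output shape is unchanged; the difference is purely in how many degrees of freedom the re-calibration has (C for channel-wise versus C x H x W for element-wise), which is the fine-grained control the abstract refers to.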