Paper Title
Color Channel Perturbation Attacks for Fooling Convolutional Neural Networks and A Defense Against Such Attacks
Paper Authors
Paper Abstract
Convolutional Neural Networks (CNNs) have emerged as a very powerful data-dependent hierarchical feature extraction method and are widely used in several computer vision problems. CNNs learn the important visual features from training samples automatically. It is observed that the network overfits the training samples very easily. Several regularization methods have been proposed to avoid this overfitting. In spite of this, the network remains sensitive to the color distribution within the images, which is ignored by the existing approaches. In this paper, we expose the color robustness problem of CNNs by proposing a Color Channel Perturbation (CCP) attack to fool them. In the CCP attack, new images are generated with new channels created by combining the original channels with stochastic weights. Experiments were carried out over the widely used CIFAR10, Caltech256, and TinyImageNet datasets in the image classification framework. The VGG, ResNet, and DenseNet models are used to test the impact of the proposed attack. It is observed that the performance of the CNNs degrades drastically under the proposed CCP attack. The results show the effect of the proposed simple CCP attack on the robustness of the trained CNN models. The results are also compared with existing CNN fooling approaches to evaluate the accuracy drop. We also propose a primary defense mechanism against this problem by augmenting the training dataset with the proposed CCP attack. State-of-the-art CNN robustness under the CCP attack is observed in the experiments when using the proposed solution. The code is made publicly available at \url{https://github.com/jayendrakantipudi/Color-Channel-Perturbation-Attack}.
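The abstract describes the attack only at a high level: each new channel is a stochastic weighted combination of the original R, G, B channels. A minimal NumPy sketch of such a channel-mixing transform is shown below; the choice of a uniform weight distribution and the row-wise normalization of the mixing matrix are assumptions for illustration, not the exact formulation from the paper.

```python
import numpy as np

def ccp_attack(image, rng=None):
    """CCP-style color channel perturbation (illustrative sketch).

    Each output channel is a random weighted combination of the
    original R, G, B channels. The uniform weights and row-wise
    normalization here are assumptions; the paper may use a
    different distribution or scaling.
    """
    rng = np.random.default_rng() if rng is None else rng
    img = image.astype(np.float64)                 # H x W x 3
    weights = rng.random((3, 3))                   # random mixing weights
    weights /= weights.sum(axis=1, keepdims=True)  # assumed normalization
    mixed = img @ weights.T                        # per-pixel channel mix
    return np.clip(mixed, 0, 255).astype(image.dtype)

# Example: perturb a random CIFAR10-sized (32x32) RGB image.
x = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
x_adv = ccp_attack(x, rng=np.random.default_rng(0))
```

Because the perturbation is applied with fresh random weights per image, the same function can be reused at training time to augment the dataset, which is the essence of the defense mechanism the abstract proposes.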