Paper Title

On Intrinsic Dataset Properties for Adversarial Machine Learning

Paper Authors

Jeffrey Z. Pan and Nicholas Zufelt

Paper Abstract

Deep neural networks (DNNs) have played a key role in a wide range of machine learning applications. However, DNN classifiers are vulnerable to human-imperceptible adversarial perturbations, which can cause them to misclassify inputs with high confidence. Thus, creating robust DNNs that can defend against malicious examples is critical in applications where security plays a major role. In this paper, we study the effect of intrinsic dataset properties on the performance of adversarial attack and defense methods, testing on five popular image classification datasets: MNIST, Fashion-MNIST, CIFAR10/CIFAR100, and ImageNet. We find that input size and image contrast play key roles in attack and defense success. Our discoveries highlight that dataset design and data preprocessing steps are important for boosting the adversarial robustness of DNNs. To the best of our knowledge, this is the first comprehensive work that studies the effect of intrinsic dataset properties on adversarial machine learning.
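The abstract refers to human-imperceptible adversarial perturbations that flip a classifier's prediction. A minimal sketch of one common attack of this kind is the fast gradient sign method (FGSM): nudge every input pixel by a small step epsilon in the sign of the loss gradient. The toy linear logistic "classifier", the random "image", and the epsilon value below are illustrative assumptions, not the models or settings used in the paper.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon=0.1):
    """FGSM-style perturbation: step each pixel by epsilon in the
    direction of the loss gradient's sign, then clip back to the
    valid pixel range [0, 1]."""
    x_adv = x + epsilon * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

# Toy binary logistic model: score = w . x, target label y in {0, 1}.
# (Hypothetical stand-in for a DNN, just to make the gradient concrete.)
rng = np.random.default_rng(0)
w = rng.normal(size=784)       # weights for a flattened 28x28 "image"
x = rng.uniform(size=784)      # fake input with pixels in [0, 1]
y = 1.0

# Gradient of the logistic loss w.r.t. the input is (sigmoid(w.x) - y) * w.
p = 1.0 / (1.0 + np.exp(-w @ x))
grad = (p - y) * w

x_adv = fgsm_perturb(x, grad, epsilon=0.1)
print(np.max(np.abs(x_adv - x)))  # per-pixel change is at most epsilon
```

Because each pixel moves by at most epsilon, the perturbation stays visually small while still being chosen to increase the loss, which is why dataset-level properties such as image contrast and input size can matter for how easily such a step changes the prediction.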
