Paper Title

Adversarial Color Projection: A Projector-based Physical Attack to DNNs

Authors

Chengyin Hu, Weiwen Shi, Ling Tian

Abstract

Recent research has demonstrated that deep neural networks (DNNs) are vulnerable to adversarial perturbations. Therefore, it is imperative to evaluate the resilience of advanced DNNs to adversarial attacks. However, traditional methods that use stickers as physical perturbations to deceive classifiers face challenges in achieving stealthiness and are susceptible to printing loss. Recently, advancements in physical attacks have utilized light beams, such as lasers, to perform attacks, where the optical patterns generated are artificial rather than natural. In this work, we propose a black-box projector-based physical attack, referred to as adversarial color projection (AdvCP), which manipulates the physical parameters of color projection to perform an adversarial attack. We evaluate our approach on three crucial criteria: effectiveness, stealthiness, and robustness. In the digital environment, we achieve an attack success rate of 97.60% on a subset of ImageNet, while in the physical environment, we attain an attack success rate of 100% in the indoor test and 82.14% in the outdoor test. The adversarial samples generated by AdvCP are compared with baseline samples to demonstrate the stealthiness of our approach. When attacking advanced DNNs, experimental results show that our method can achieve more than 85% attack success rate in all cases, which verifies the robustness of AdvCP. Finally, we consider the potential threats posed by AdvCP to future vision-based systems and applications and suggest some ideas for light-based physical attacks.
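The abstract states that AdvCP manipulates the physical parameters of a color projection in a black-box setting, but gives no formulation. The sketch below is an illustrative assumption of how such a perturbation might be simulated digitally and searched without gradients: a projected color is modeled as an additive light blend over the image, and a simple random search tunes the color and intensity. The function names, the blending model, and the random-search strategy are all hypothetical and are not the paper's actual algorithm.

```python
import numpy as np

def simulate_color_projection(image, color, intensity):
    """Blend a uniform projected color onto an image.

    image: H x W x 3 float array with values in [0, 1]
    color: length-3 RGB array in [0, 1] (the projected light's color)
    intensity: blending factor in [0, 1] (strength of the projection)

    This linear blend is an assumed stand-in for the physical
    image formation under a projector; the paper may use a
    different model.
    """
    image = np.asarray(image, dtype=np.float64)
    color = np.asarray(color, dtype=np.float64)
    perturbed = (1.0 - intensity) * image + intensity * color
    return np.clip(perturbed, 0.0, 1.0)

def random_search_attack(image, score_fn, steps=100, seed=0):
    """Minimal black-box loop: sample (color, intensity) pairs and
    keep the one that minimizes score_fn (e.g. the classifier's
    confidence in the true label -- lower means a stronger attack).
    """
    rng = np.random.default_rng(seed)
    best_params, best_score = None, float("inf")
    for _ in range(steps):
        color = rng.random(3)               # candidate projection color
        intensity = rng.uniform(0.1, 0.5)   # candidate projection strength
        adv = simulate_color_projection(image, color, intensity)
        score = score_fn(adv)
        if score < best_score:
            best_params, best_score = (color, intensity), score
    return best_params, best_score
```

In a real attack, `score_fn` would query the target DNN; here any scalar-valued function of the image suffices to exercise the search loop.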
