Paper Title

Suppress with a Patch: Revisiting Universal Adversarial Patch Attacks against Object Detection

Paper Authors

Svetlana Pavlitskaya, Jonas Hendl, Sebastian Kleim, Leopold Müller, Fabian Wylczoch, J. Marius Zöllner

Paper Abstract

Adversarial patch-based attacks aim to fool a neural network with intentionally generated noise concentrated in a particular region of an input image. In this work, we perform an in-depth analysis of different patch generation parameters, including initialization, patch size, and especially the positioning of the patch in the image during training. We focus on the object vanishing attack, running experiments in a white-box setting with YOLOv3 as the model under attack and images from the COCO dataset. Our experiments show that inserting the patch inside a window of increasing size during training significantly increases attack strength compared to a fixed position. The best results were obtained when the patch was positioned randomly during training, with its position additionally varied within each batch.
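The positioning strategy the abstract describes is easy to sketch. Below is a minimal, hypothetical PyTorch illustration, not the authors' released code: the function name `place_patch`, the linear window-growth schedule, and all parameter values are assumptions made for illustration. It shows how a patch could be pasted at an independent random position for each batch element, inside a window that widens from the image centre to the full image over training.

```python
import torch


def place_patch(images, patch, step, max_steps):
    """Paste an adversarial patch into each image of a batch.

    Hypothetical sketch (not the paper's code): the window in which the
    patch may land grows linearly from the patch footprint at the image
    centre to the full image over training, and every image in the batch
    receives its own random position -- the strategy the abstract
    reports as strongest.
    """
    B, _, H, W = images.shape
    _, ph, pw = patch.shape
    # Window side lengths grow from the patch size to the full image.
    frac = min(1.0, step / max_steps)
    win_h = ph + int(frac * (H - ph))
    win_w = pw + int(frac * (W - pw))
    # Top-left corner of the current window, kept centred in the image.
    wy0 = (H - win_h) // 2
    wx0 = (W - win_w) // 2
    patched = images.clone()
    for i in range(B):  # independent random position per batch element
        y = wy0 + int(torch.randint(0, win_h - ph + 1, (1,)))
        x = wx0 + int(torch.randint(0, win_w - pw + 1, (1,)))
        patched[i, :, y:y + ph, x:x + pw] = patch
    return patched


# Example: a batch of 416x416 images and a 100x100 patch, mid-training.
imgs = torch.rand(8, 3, 416, 416)
patch = torch.rand(3, 100, 100, requires_grad=True)
out = place_patch(imgs, patch, step=500, max_steps=1000)
```

Because the patch is written into the batch with a differentiable assignment, gradients from the detector's loss can flow back to the patch pixels, which is what allows the patch itself to be optimized during training.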
