Paper Title

Proximal Splitting Adversarial Attacks for Semantic Segmentation

Authors

Jérôme Rony, Jean-Christophe Pesquet, Ismail Ben Ayed

Abstract

Classification has been the focal point of research on adversarial attacks, but only a few works investigate methods suited to denser prediction tasks, such as semantic segmentation. The methods proposed in these works do not accurately solve the adversarial segmentation problem and, therefore, overestimate the size of the perturbations required to fool models. Here, we propose a white-box attack for these models based on a proximal splitting to produce adversarial perturbations with much smaller $\ell_\infty$ norms. Our attack can handle large numbers of constraints within a nonconvex minimization framework via an Augmented Lagrangian approach, coupled with adaptive constraint scaling and masking strategies. We demonstrate that our attack significantly outperforms previously proposed ones, as well as classification attacks that we adapted for segmentation, providing a first comprehensive benchmark for this dense task.
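The abstract's core optimization tool, the Augmented Lagrangian method, handles constrained minimization by adding a multiplier term and a quadratic penalty to the objective, alternating primal minimization with dual (multiplier) updates. The following is a minimal, self-contained sketch of that general technique on a hypothetical toy problem (minimize $x^2 + y^2$ subject to $x + y = 1$, whose solution is $x = y = 0.5$), not a reproduction of the paper's actual attack, which operates on segmentation models with large numbers of misclassification constraints:

```python
def augmented_lagrangian(outer=200, inner=100, lr=0.05, rho=1.0):
    """Toy Augmented Lagrangian solver (illustrative sketch only).

    Problem (assumed for illustration):
        minimize  f(x, y) = x^2 + y^2
        subject to  c(x, y) = x + y - 1 = 0
    Augmented Lagrangian:
        L(x, y, lam) = f + lam * c + (rho / 2) * c^2
    """
    x = y = 0.0   # primal variables
    lam = 0.0     # Lagrange multiplier (dual variable)
    for _ in range(outer):
        # Inner loop: approximately minimize L over the primal variables
        # by plain gradient descent.
        for _ in range(inner):
            c = x + y - 1.0
            gx = 2.0 * x + lam + rho * c   # dL/dx
            gy = 2.0 * y + lam + rho * c   # dL/dy
            x -= lr * gx
            y -= lr * gy
        # Outer loop: dual ascent on the multiplier, mild penalty growth.
        lam += rho * (x + y - 1.0)
        rho *= 1.01
    return x, y, lam

x, y, lam = augmented_lagrangian()
```

At the optimum the stationarity condition $\nabla f + \lambda \nabla c = 0$ gives $\lambda = -1$; the paper's contribution lies in scaling this kind of scheme to thousands of per-pixel constraints via adaptive constraint scaling and masking, which this toy sketch does not model.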
