Paper Title

Gradient Methods Provably Converge to Non-Robust Networks

Authors

Gal Vardi, Gilad Yehudai, Ohad Shamir

Abstract

Despite a great deal of research, it is still unclear why neural networks are so susceptible to adversarial examples. In this work, we identify natural settings where depth-$2$ ReLU networks trained with gradient flow are provably non-robust (susceptible to small adversarial $\ell_2$-perturbations), even when robust networks that classify the training dataset correctly exist. Perhaps surprisingly, we show that the well-known implicit bias towards margin maximization induces bias towards non-robust networks, by proving that every network which satisfies the KKT conditions of the max-margin problem is non-robust.
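For reference, the margin-maximization problem referred to in the abstract is, in the standard formulation for homogeneous networks from the implicit-bias literature (a sketch of the usual setup, not reproduced from this paper; for ReLU networks $\nabla_{\theta} N_{\theta}(x_i)$ denotes a Clarke subgradient):

$$\min_{\theta} \ \tfrac{1}{2}\|\theta\|_2^2 \quad \text{s.t.} \quad y_i \, N_{\theta}(x_i) \ge 1 \ \text{ for all } i \in [n].$$

A feasible point $\theta$ satisfies the KKT conditions of this problem if there exist multipliers $\lambda_1, \dots, \lambda_n \ge 0$ such that

$$\theta = \sum_{i=1}^{n} \lambda_i \, y_i \, \nabla_{\theta} N_{\theta}(x_i), \qquad \lambda_i \big( y_i \, N_{\theta}(x_i) - 1 \big) = 0 \ \text{ for all } i.$$

Gradient flow on logistic-type losses is known to converge in direction to such KKT points for homogeneous networks, which is why a statement about KKT points translates into a statement about trained networks.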
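To make the claim concrete, here is a hypothetical toy experiment in the spirit of the abstract: train a depth-$2$ ReLU network with plain gradient descent (a discrete stand-in for gradient flow) on a small random dataset, then search for a small $\ell_2$-perturbation that flips a training point. All sizes, step counts, and hyperparameters below are illustrative assumptions, not the paper's construction.

import numpy as np

# Toy sketch: depth-2 ReLU network N(x) = sum_j v_j * relu(w_j . x + b_j),
# trained with full-batch gradient descent on the exponential loss
# (a rough discrete stand-in for the gradient flow analyzed in the paper),
# then attacked with normalized gradient steps in l2.
rng = np.random.default_rng(0)

d, m, n = 10, 50, 20                      # input dim, width, #samples (assumed)
X = rng.normal(size=(n, d))
y = np.sign(rng.normal(size=n))           # random +-1 labels

W = rng.normal(size=(m, d)) / np.sqrt(d)  # hidden-layer weights
b = np.zeros(m)
v = rng.normal(size=m) / np.sqrt(m)       # output weights

def forward(x):
    z = W @ x + b                         # pre-activations
    return v @ np.maximum(z, 0.0), z

def grad_x(x):
    # Gradient of N(x) with respect to the input x.
    _, z = forward(x)
    return W.T @ (v * (z > 0))

lr = 0.05
for _ in range(2000):
    gW = np.zeros_like(W); gb = np.zeros_like(b); gv = np.zeros_like(v)
    for xi, yi in zip(X, y):
        out, z = forward(xi)
        coeff = -yi * np.exp(-yi * out)   # derivative of exp(-y * N(x))
        gv += coeff * np.maximum(z, 0.0)
        gW += np.outer(coeff * v * (z > 0), xi)
        gb += coeff * v * (z > 0)
    W -= lr * gW / n; b -= lr * gb / n; v -= lr * gv / n

# Attack one training point: step against the margin and report the
# l2 distance at which the predicted label flips.
x0, y0 = X[0], y[0]
x_adv = x0.copy()
for _ in range(100):
    out, _ = forward(x_adv)
    if np.sign(out) != y0:
        break
    g = grad_x(x_adv)
    x_adv -= 0.05 * y0 * g / (np.linalg.norm(g) + 1e-12)

print("clean margin:", y0 * forward(x0)[0])
print("flipped:", np.sign(forward(x_adv)[0]) != y0,
      "| l2 perturbation:", np.linalg.norm(x_adv - x0))

If the network fits the data, the script reports the $\ell_2$ distance at which the attacked point flips; the paper's theorems concern the settings in which this distance is provably small.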
