Paper Title

Consistency Regularization for Certified Robustness of Smoothed Classifiers

Authors

Jongheon Jeong, Jinwoo Shin

Abstract

A recent technique of randomized smoothing has shown that the worst-case (adversarial) $\ell_2$-robustness can be transformed into the average-case Gaussian-robustness by "smoothing" a classifier, i.e., by considering the averaged prediction over Gaussian noise. In this paradigm, one should rethink the notion of adversarial robustness in terms of generalization ability of a classifier under noisy observations. We found that the trade-off between accuracy and certified robustness of smoothed classifiers can be greatly controlled by simply regularizing the prediction consistency over noise. This relationship allows us to design a robust training objective without approximating a non-existing smoothed classifier, e.g., via soft smoothing. Our experiments under various deep neural network architectures and datasets show that the "certified" $\ell_2$-robustness can be dramatically improved with the proposed regularization, even achieving better or comparable results to the state-of-the-art approaches with significantly less training costs and hyperparameters.
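The abstract's key idea is to regularize the prediction consistency of a base classifier across Gaussian-noise copies of each input. As a rough illustration only (not the paper's exact objective, which also includes terms such as an entropy regularizer and tuned coefficients), the core penalty can be sketched as the average KL divergence between each noisy prediction and the mean ("smoothed") prediction:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def consistency_penalty(logits):
    """Sketch of a consistency regularizer.

    logits: array of shape (m, k) -- logits of the base classifier on
    m Gaussian-noise copies of a single input, over k classes.
    Returns the mean KL divergence KL(p_hat || p_i), where p_hat is the
    averaged prediction over the m noisy copies. The penalty is zero when
    all noisy predictions agree, and grows as they diverge.
    """
    p = softmax(logits)                       # (m, k) per-copy predictions
    p_hat = p.mean(axis=0, keepdims=True)     # averaged prediction
    kl = (p_hat * (np.log(p_hat) - np.log(p))).sum(axis=-1)  # KL per copy
    return kl.mean()
```

In training, a term like this would be weighted and added to the usual classification loss on the noisy inputs, encouraging the smoothed classifier to be confident and stable under noise.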
