Title

$β$-Variational Classifiers Under Attack

Authors

Marco Maggipinto, Matteo Terzi, Gian Antonio Susto

Abstract

Deep neural networks have gained a lot of attention in recent years thanks to the breakthroughs obtained in the field of Computer Vision. However, despite their popularity, it has been shown that they provide limited robustness in their predictions. In particular, it is possible to synthesise small adversarial perturbations that imperceptibly modify a correctly classified input, making the network confidently misclassify it. This has led to a plethora of methods that try to improve robustness or detect the presence of these perturbations. In this paper, we perform an analysis of $β$-Variational Classifiers, a particular class of methods that not only solve a specific classification task, but also provide a generative component able to generate new samples from the input distribution. In more detail, we study their robustness and detection capabilities, together with some novel insights on the generative part of the model.
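To make the two ingredients of the abstract concrete, below is a minimal PyTorch sketch of a β-variational classifier (an encoder producing a latent Gaussian, a classifier head on the latent code, and a decoder acting as the generative component, trained with a cross-entropy term plus a β-weighted ELBO) together with a standard FGSM attack (Goodfellow et al.) of the kind used to synthesise adversarial perturbations. The layer sizes, the β value, the MSE reconstruction term, and all names (`BetaVariationalClassifier`, `beta_vc_loss`, `fgsm`) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BetaVariationalClassifier(nn.Module):
    """Hypothetical beta-variational classifier: encoder -> latent Gaussian,
    with a classifier head (discriminative part) and a decoder that
    reconstructs the input (generative part)."""
    def __init__(self, in_dim=784, latent_dim=32, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, latent_dim)
        self.to_logvar = nn.Linear(256, latent_dim)
        self.classifier = nn.Linear(latent_dim, n_classes)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterisation trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.classifier(z), self.decoder(z), mu, logvar

def beta_vc_loss(model, x, y, beta=4.0):
    """Cross-entropy + reconstruction + beta-weighted KL to N(0, I)."""
    logits, x_hat, mu, logvar = model(x)
    ce = F.cross_entropy(logits, y)
    rec = F.mse_loss(x_hat, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return ce + rec + beta * kl

def fgsm(model, x, y, eps=0.1):
    """One-step FGSM: perturb x along the sign of the loss gradient so a
    visually similar input gets misclassified."""
    x_adv = x.clone().requires_grad_(True)
    logits, _, _, _ = model(x_adv)
    F.cross_entropy(logits, y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

# Usage sketch: train on (x, y) batches with beta_vc_loss, then probe
# robustness by classifying fgsm(model, x, y) instead of x.
```

The β coefficient controls the trade-off the paper studies: larger β enforces a more disentangled, heavily regularised latent space at the cost of reconstruction and classification accuracy.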
