Paper Title

Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations

Paper Authors

Saima Sharmin, Nitin Rathi, Priyadarshini Panda, Kaushik Roy

Paper Abstract

In the recent quest for trustworthy neural networks, we present the Spiking Neural Network (SNN) as a potential candidate for inherent robustness against adversarial attacks. In this work, we demonstrate that the adversarial accuracy of SNNs under gradient-based attacks is higher than that of their non-spiking counterparts on CIFAR datasets with deep VGG and ResNet architectures, particularly in the blackbox attack scenario. We attribute this robustness to two fundamental characteristics of SNNs and analyze their effects. First, we show that the input discretization introduced by the Poisson encoder improves adversarial robustness as the number of timesteps is reduced. Second, we quantify the gain in adversarial accuracy as the leak rate of Leaky-Integrate-Fire (LIF) neurons is increased. Our results suggest that SNNs trained with LIF neurons and a smaller number of timesteps are more robust than those with IF (Integrate-Fire) neurons and a larger number of timesteps. We also overcome the bottleneck of creating gradient-based adversarial inputs in the temporal domain by proposing a technique for crafting attacks from the SNN.
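To make the two mechanisms discussed in the abstract concrete, below is a minimal NumPy sketch of Poisson rate coding and a leaky membrane update. This is not the authors' code: `poisson_encode`, `lif_step`, and the `leak`/`v_th` values are illustrative assumptions. Note that `leak` here is the retention factor, so a stronger leak corresponds to a smaller value, and `leak = 1.0` recovers the plain IF neuron.

```python
import numpy as np

def poisson_encode(image, timesteps, rng=None):
    """Rate-code pixel intensities in [0, 1] into binary spike trains.

    Each pixel fires at a given timestep with probability equal to its
    intensity; fewer timesteps means a coarser, more discretized view
    of the input image.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Shape: (timesteps, *image.shape); entries are 0/1 spikes.
    return (rng.random((timesteps,) + image.shape) < image).astype(np.float32)

def lif_step(v_mem, input_current, leak=0.9, v_th=1.0):
    """One membrane update of a Leaky-Integrate-Fire (LIF) neuron.

    leak < 1 decays the membrane potential every step; setting
    leak = 1.0 recovers the plain Integrate-Fire (IF) neuron.
    """
    v_mem = leak * v_mem + input_current        # leaky integration
    spikes = (v_mem >= v_th).astype(np.float32)
    v_mem = v_mem * (1.0 - spikes)              # hard reset on spike
    return v_mem, spikes

# Example: encode a toy "image" and drive one layer of LIF neurons.
image = np.random.rand(8, 8).astype(np.float32)
spike_train = poisson_encode(image, timesteps=20)
v_mem = np.zeros_like(image)
for t in range(spike_train.shape[0]):
    v_mem, out_spikes = lif_step(v_mem, spike_train[t], leak=0.9)
```

Under this reading, reducing `timesteps` coarsens the input encoding, and strengthening the leak makes the membrane state decay faster; the abstract reports that both changes correlate with higher adversarial accuracy.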
