Paper Title

Towards Understanding the Effect of Leak in Spiking Neural Networks

Paper Authors

Sayeed Shafayet Chowdhury, Chankyu Lee, Kaushik Roy

Paper Abstract


Spiking Neural Networks (SNNs) are being explored to emulate the astounding capabilities of the human brain, which can learn and compute functions robustly and efficiently with noisy spiking activities. A variety of spiking neuron models have been proposed to resemble biological neuronal functionalities. With varying levels of bio-fidelity, these models often contain a leak path in their internal states, called membrane potentials. While the leaky models have been argued to be more bioplausible, a comparative analysis between models with and without leak from a purely computational point of view demands attention. In this paper, we investigate the justification for leak and the pros and cons of using leaky behavior. Our experimental results reveal that the leaky neuron model provides improved robustness and better generalization compared to models with no leak. However, contrary to the common notion, leak decreases the sparsity of computation. Through a frequency-domain analysis, we demonstrate the effect of leak in eliminating the high-frequency components from the input, thus enabling SNNs to be more robust against noisy spike inputs.
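To make the leak mechanism described in the abstract concrete, below is a minimal sketch (an illustrative assumption, not the paper's implementation) of a single discrete-time neuron whose membrane potential is scaled by a leak factor before integrating each input spike. Setting the leak factor to 1 recovers the non-leaky integrate-and-fire (IF) model, while values below 1 let the potential decay between inputs, damping the contribution of isolated, high-frequency noise spikes, in line with the low-pass filtering effect argued above. The leak factor, threshold, hard-reset rule, and random input used here are all hypothetical choices for illustration.

```python
import numpy as np

def simulate_neuron(spike_input, leak=1.0, threshold=1.0):
    """Discrete-time integrate-and-fire neuron with an optional leak.

    leak = 1.0 gives the non-leaky IF neuron; leak < 1.0 gives the leaky
    (LIF) variant, whose membrane potential decays at every time step.
    """
    u = 0.0                                # membrane potential
    out_spikes = np.zeros_like(spike_input)
    for t, x in enumerate(spike_input):
        u = leak * u + x                   # decay the potential, then integrate the input spike
        if u >= threshold:                 # fire once the threshold is reached
            out_spikes[t] = 1.0
            u = 0.0                        # hard reset after spiking (an assumed reset rule)
    return out_spikes

# Illustrative noisy input: sparse random spikes stand in for spike-encoded data.
rng = np.random.default_rng(0)
noisy_input = (rng.random(100) < 0.2).astype(float)
print("IF  spikes emitted:", int(simulate_neuron(noisy_input, leak=1.0).sum()))
print("LIF spikes emitted:", int(simulate_neuron(noisy_input, leak=0.6).sum()))
```

Running the sketch, the leaky neuron emits fewer output spikes on the same noisy input because isolated input spikes decay away before the potential reaches threshold, whereas the non-leaky neuron accumulates them indefinitely.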
