Title
Exploring Tradeoffs in Spiking Neural Networks
Authors
Abstract
Spiking Neural Networks (SNNs) have emerged as a promising alternative to traditional Deep Neural Networks for low-power computing. However, the effectiveness of SNNs is determined not only by their performance but also by their energy consumption, prediction speed, and robustness to noise. The recent method Fast & Deep, along with others, achieves fast and energy-efficient computation by constraining neurons to fire at most once. Known as Time-To-First-Spike (TTFS), this constraint, however, restricts the capabilities of SNNs in many respects. In this work, we explore the relationships between performance, energy consumption, speed, and stability under this constraint. More precisely, we highlight the existence of tradeoffs where performance and robustness are gained at the cost of sparsity and prediction latency. To improve these tradeoffs, we propose a relaxed version of Fast & Deep that allows multiple spikes per neuron. Our experiments show that relaxing the spike constraint yields higher performance while also providing faster convergence, similar sparsity, comparable prediction latency, and better robustness to noise than TTFS SNNs. By highlighting the limitations of TTFS and demonstrating the advantages of unconstrained SNNs, we provide valuable insights for the development of effective learning strategies for neuromorphic computing.
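The TTFS constraint described above is typically realized as a latency code: each neuron carries information in the timing of a single spike, with stronger inputs firing earlier. The following is a minimal sketch of that encoding, not the paper's actual implementation; the linear time mapping and the `t_max` window are illustrative assumptions.

```python
import numpy as np

def ttfs_encode(x, t_max=10.0):
    """Illustrative Time-To-First-Spike (latency) encoding.

    Maps each input intensity in [0, 1] to a single spike time in
    [0, t_max]: larger values spike earlier, an input of 1.0 spikes
    at t = 0, and an input of 0.0 spikes at t = t_max. Under the TTFS
    constraint, this one spike is all a neuron ever emits.
    """
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    return t_max * (1.0 - x)

# Example: three input intensities and their single spike times.
times = ttfs_encode([1.0, 0.5, 0.0], t_max=10.0)
print(times)      # strongest input fires first: [ 0.  5. 10.]
print(len(times)) # one spike per neuron -> spike count = neuron count
```

Relaxing the constraint, as the proposed variant does, means a neuron may emit several spikes within the same window, trading some of this extreme sparsity for representational capacity.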