Title
Supervised Learning with First-to-Spike Decoding in Multilayer Spiking Neural Networks
Authors
Abstract
Experimental studies support the notion of spike-based neuronal information processing in the brain, with neural circuits exhibiting a wide range of temporally-based coding strategies to rapidly and efficiently represent sensory stimuli. Accordingly, it would be desirable to apply spike-based computation to tackling real-world challenges, and in particular transferring such theory to neuromorphic systems for low-power embedded applications. Motivated by this, we propose a new supervised learning method that can train multilayer spiking neural networks to solve classification problems based on a rapid, first-to-spike decoding strategy. The proposed learning rule supports multiple spikes fired by stochastic hidden neurons, and yet is stable by relying on first-spike responses generated by a deterministic output layer. In addition to this, we also explore several distinct, spike-based encoding strategies in order to form compact representations of presented input data. We demonstrate the classification performance of the learning rule as applied to several benchmark datasets, including MNIST. The learning rule is capable of generalising from the data, and is successful even when used with constrained network architectures containing few input and hidden layer neurons. Furthermore, we highlight a novel encoding strategy, termed `scanline encoding', that can transform image data into compact spatiotemporal patterns for subsequent network processing. Designing constrained, but optimised, network structures and performing input dimensionality reduction has strong implications for neuromorphic applications.
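The abstract describes scanline encoding only at a high level; as one plausible reading, each scanline sweeps across the image and fires a spike whose latency equals the distance travelled before the line first crosses a sufficiently bright pixel. The sketch below is an illustrative assumption, not the paper's actual algorithm; the function name `scanline_encode`, the line parameterisation, and the threshold are all hypothetical.

```python
import numpy as np

def scanline_encode(image, lines, threshold=0.5):
    """Hypothetical sketch of a scanline encoder.

    Each line is (r0, c0, dr, dc): a start pixel and a unit step direction.
    The line is walked until it leaves the image or crosses a pixel whose
    intensity exceeds `threshold`; the number of steps taken becomes the
    spike latency for that scanline, and np.inf marks 'no spike'.
    """
    h, w = image.shape
    spike_times = []
    for (r0, c0, dr, dc) in lines:
        t = np.inf
        r, c, step = r0, c0, 0
        while 0 <= r < h and 0 <= c < w:
            if image[r, c] > threshold:
                t = float(step)  # latency = distance along the line
                break
            r, c, step = r + dr, c + dc, step + 1
        spike_times.append(t)
    return np.array(spike_times)

# Tiny example: one bright pixel at (1, 2) in a 4x4 image.
img = np.zeros((4, 4))
img[1, 2] = 1.0
lines = [(1, 0, 0, 1),  # sweep row 1 left-to-right
         (0, 0, 1, 0)]  # sweep column 0 top-to-bottom
times = scanline_encode(img, lines)
```

Under this reading, the number of scanlines (not the pixel count) fixes the input dimensionality, which matches the abstract's emphasis on compact spatiotemporal patterns and input dimensionality reduction.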