Paper Title
SpinAPS: A High-Performance Spintronic Accelerator for Probabilistic Spiking Neural Networks
Paper Authors
Paper Abstract
We discuss a high-performance, high-throughput hardware accelerator for probabilistic Spiking Neural Networks (SNNs) based on Generalized Linear Model (GLM) neurons, which uses binary STT-RAM devices as synapses and digital CMOS logic for neurons. The inference accelerator, termed "SpinAPS" for Spintronic Accelerator for Probabilistic SNNs, implements a principled direct learning rule for first-to-spike decoding without the need for conversion from pre-trained ANNs. The proposed solution is shown to achieve performance comparable to an equivalent ANN on handwritten digit and human activity recognition benchmarks. The inference engine, SpinAPS, is shown through software emulation tools to achieve a 4x performance improvement in terms of GSOPS/W/mm² when compared to an equivalent SRAM-based design. The architecture leverages probabilistic spiking neural networks that employ a first-to-spike decoding rule to make inference decisions at low latency, achieving 75% of the test performance in as few as 4 algorithmic time steps on the handwritten digit benchmark. The accelerator also exhibits competitive performance compared with other memristor-based DNN/SNN accelerators and state-of-the-art GPUs.
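To make the first-to-spike decoding idea concrete, the following is a minimal illustrative sketch (not the paper's implementation): GLM-style output neurons spike stochastically with a sigmoid firing probability of their membrane potential, and inference stops as soon as any output neuron spikes, which is what enables the low-latency decisions described in the abstract. All names, shapes, and the single-layer structure here are simplifying assumptions for illustration.

```python
import numpy as np

def first_to_spike_decode(x, W, b, T=16, rng=None):
    """Toy first-to-spike inference with probabilistic (GLM-style) output neurons.

    x : (T, n_in) binary input spike trains, one row per algorithmic time step
    W : (n_out, n_in) synaptic weights (in SpinAPS these would be stored in STT-RAM)
    b : (n_out,) biases
    Returns (predicted_class, decision_step).

    NOTE: this is a hypothetical single-layer sketch, not the accelerator's
    actual GLM neuron model, which also includes spike-history kernels.
    """
    rng = rng or np.random.default_rng(0)
    for t in range(T):
        u = W @ x[t] + b                       # membrane potentials at step t
        p = 1.0 / (1.0 + np.exp(-u))           # GLM sigmoid spike probabilities
        spikes = rng.random(p.shape) < p       # Bernoulli spike sampling
        if spikes.any():
            # Decision as soon as any output neuron fires: first-to-spike decoding.
            return int(np.argmax(spikes)), t
    # Fallback if no output spiked within T steps: pick the most probable neuron.
    return int(np.argmax(p)), T - 1
```

Because the loop returns at the first output spike, confident inputs are classified in very few time steps, consistent with the abstract's observation that most of the test accuracy is reached within a handful of algorithmic steps.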