Paper Title
Dopant Network Processing Units: Towards Efficient Neural-network Emulators with High-capacity Nanoelectronic Nodes
Paper Authors
Paper Abstract
The rapidly growing computational demands of deep neural networks require novel hardware designs. Recently, tunable nanoelectronic devices were developed based on hopping electrons through a network of dopant atoms in silicon. These "Dopant Network Processing Units" (DNPUs) are highly energy-efficient and have potentially very high throughput. By adapting the control voltages applied to its terminals, a single DNPU can solve a variety of linearly non-separable classification problems. However, using a single device has limitations due to the implicit single-node architecture. This paper presents a promising novel approach to neural information processing by introducing DNPUs as high-capacity neurons and moving from a single to a multi-neuron framework. By implementing and testing a small multi-DNPU classifier in hardware, we show that feed-forward DNPU networks improve the performance of a single DNPU from 77% to 94% test accuracy on a binary classification task with concentric classes on a plane. Furthermore, motivated by the integration of DNPUs with memristor arrays, we study the potential of using DNPUs in combination with linear layers. We show by simulation that a single-layer MNIST classifier with only 10 DNPUs achieves over 96% test accuracy. Our results pave the road towards hardware neural-network emulators that offer atomic-scale information processing with low latency and energy consumption.
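As a concrete, purely illustrative picture of the architecture the abstract describes, the sketch below emulates a small feed-forward DNPU classifier in PyTorch. It is not the authors' implementation: the frozen random surrogate standing in for the device, the numbers of data and control terminals, and the topology of two hidden DNPUs feeding one output DNPU are all assumptions made here for illustration. What it does convey is the central idea: each DNPU acts as a fixed, high-capacity nonlinear node whose behaviour is tuned only through its learnable control voltages.

```python
import torch
import torch.nn as nn


class DNPUNeuron(nn.Module):
    """One DNPU 'neuron': a fixed nonlinear map over data inputs and control voltages.

    The physical device is stood in for by a frozen random two-layer network;
    only the control voltages are learnable, mirroring how a real DNPU is
    programmed by tuning the voltages on its control terminals.
    """

    def __init__(self, n_data_inputs: int, n_controls: int = 5, hidden: int = 32):
        super().__init__()
        # Learnable control voltages (one set per device).
        self.controls = nn.Parameter(torch.zeros(n_controls))
        # Frozen stand-in for a trained surrogate model of the device.
        self.surrogate = nn.Sequential(
            nn.Linear(n_data_inputs + n_controls, hidden),
            nn.Tanh(),
            nn.Linear(hidden, 1),
        )
        for p in self.surrogate.parameters():
            p.requires_grad_(False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Broadcast the device's control voltages over the batch.
        c = self.controls.expand(x.shape[0], -1)
        return self.surrogate(torch.cat([x, c], dim=-1))


class DNPUNetwork(nn.Module):
    """Feed-forward DNPU network: two hidden DNPUs feeding one output DNPU."""

    def __init__(self):
        super().__init__()
        self.hidden = nn.ModuleList([DNPUNeuron(2), DNPUNeuron(2)])
        self.out = DNPUNeuron(2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.cat([neuron(x) for neuron in self.hidden], dim=-1)
        return self.out(h)


# Usage: one optimisation step on a toy concentric-classes batch.
net = DNPUNetwork()
x = torch.randn(8, 2)                           # 2-D input points
y = (x.norm(dim=-1) < 1.0).float()              # inner circle = class 1
trainable = [p for p in net.parameters() if p.requires_grad]
opt = torch.optim.Adam(trainable, lr=0.05)
loss = nn.BCEWithLogitsLoss()(net(x).squeeze(-1), y)
loss.backward()
opt.step()
```

Along the same lines, the MNIST setup mentioned in the abstract can be pictured by feeding the outputs of ten such DNPU neurons into a trainable linear read-out layer; the linear part is the component the abstract motivates by the integration of DNPUs with memristor arrays.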