Title
Deep Neural Network Training without Multiplications
Authors
Abstract
Is multiplication really necessary for deep neural networks? Here we propose replacing each floating-point multiplication instruction with a single integer-add instruction applied to the two IEEE 754 floating-point operands. We show that ResNet can be trained with this operation at competitive classification accuracy. Our proposal requires no special methods to counter the instability and accuracy degradation that are common in low-precision training, and in some settings it matches the accuracy of the FP32 baseline. This method makes it possible to eliminate multiplications from deep neural network training and inference.
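The operation the abstract describes can be illustrated in a few lines: reinterpret the two operands' IEEE 754 bit patterns as integers, add them, and subtract a fixed offset so that the exponent fields combine as in a true multiplication while the mantissa fields approximate it in the log domain. The following is a minimal Python sketch, not the paper's implementation: the offset 0x3F800000 (the bit pattern of 1.0) is the standard Mitchell-style choice, and the function name add_as_mul, the sign handling, and the underflow handling are illustrative assumptions; the paper may use a bias-corrected constant and treat zeros, denormals, and overflow differently.

import struct

def float_to_bits(x: float) -> int:
    # Reinterpret a 32-bit float's bytes as an unsigned integer.
    return struct.unpack("<I", struct.pack("<f", x))[0]

def bits_to_float(b: int) -> float:
    # Reinterpret an unsigned 32-bit integer's bytes as a float.
    return struct.unpack("<f", struct.pack("<I", b & 0xFFFFFFFF))[0]

def add_as_mul(a: float, b: float) -> float:
    """Approximate a * b with one integer addition on the bit patterns."""
    ia, ib = float_to_bits(a), float_to_bits(b)
    sign = (ia ^ ib) & 0x80000000  # sign of the product via XOR of sign bits
    # Add exponent+mantissa fields; subtract bits(1.0) so exponents add correctly.
    mag = (ia & 0x7FFFFFFF) + (ib & 0x7FFFFFFF) - 0x3F800000
    if mag <= 0:
        return 0.0  # crude underflow-to-zero; real use needs more care here
    return bits_to_float(sign | mag)

On 2.0 * 3.0 this returns exactly 6.0 (powers of two are exact), while on 1.5 * 1.5 it returns 2.0 instead of 2.25, illustrating the bounded relative error of the log-domain approximation that training is nonetheless reported to tolerate.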