Paper Title
A Novel Memory-Efficient Deep Learning Training Framework via Error-Bounded Lossy Compression
Paper Authors
Paper Abstract
Deep neural networks (DNNs) are becoming increasingly deeper, wider, and non-linear due to the growing demand for prediction accuracy and analysis quality. When training a DNN model, the intermediate activation data must be saved in memory during forward propagation and then restored for backward propagation. However, state-of-the-art accelerators such as GPUs are equipped with only very limited memory capacities due to hardware design constraints, which significantly limits the maximum batch size and hence the performance speedup when training large-scale DNNs. In this paper, we propose a novel memory-driven high-performance DNN training framework that leverages error-bounded lossy compression to significantly reduce the memory requirement of training, thereby allowing larger networks to be trained. Unlike state-of-the-art solutions that adopt image-based lossy compressors such as JPEG to compress the activation data, our framework employs purposely designed error-bounded lossy compression with a strict error-controlling mechanism. Specifically, we provide a theoretical analysis of how the compression error propagates from the altered activation data to the gradients, and then empirically investigate the impact of the altered gradients over the entire training process. Based on these analyses, we propose an improved lossy compressor and an adaptive scheme that dynamically configures the lossy compression error bound and adjusts the training batch size, further exploiting the saved memory space for additional speedup. We evaluate our design against state-of-the-art solutions with four popular DNNs and the ImageNet dataset. Results demonstrate that our proposed framework can reduce the training memory consumption by up to 13.5x and 1.8x over the baseline training and the state-of-the-art compression-based framework, respectively, with little or no accuracy loss.
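To illustrate the core idea of saving activations in error-bounded lossy form during forward propagation and decompressing them only for backward propagation, the following is a minimal PyTorch-style sketch. It is not the paper's actual compressor or API: a simple uniform quantizer stands in for the error-bounded compressor, only a single ReLU layer is wrapped, and the class name LossyReLU, the eb parameter, and the int16 encoding are assumptions made for this example.

    import torch

    class LossyReLU(torch.autograd.Function):
        # Minimal sketch (not the paper's compressor): a ReLU whose saved input
        # is kept in error-bounded lossy form between forward and backward.
        # Uniform quantization with bin width 2*eb guarantees an element-wise
        # absolute error of at most eb; storing int16 codes instead of float32
        # halves the saved-activation memory.

        @staticmethod
        def forward(ctx, x, eb):
            # Assumes |x| / (2*eb) fits in the int16 range; a real error-bounded
            # compressor (e.g., an SZ-style predictor plus quantizer) would also
            # entropy-code the quantization codes for a much higher ratio.
            codes = torch.round(x / (2.0 * eb)).to(torch.int16)
            ctx.save_for_backward(codes)
            ctx.eb = eb
            return torch.relu(x)  # the forward output itself is computed exactly

        @staticmethod
        def backward(ctx, grad_out):
            (codes,) = ctx.saved_tensors
            # Decompress only when backward needs the activation; the bounded
            # reconstruction error is what propagates into the gradients.
            x_approx = codes.to(grad_out.dtype) * (2.0 * ctx.eb)
            return grad_out * (x_approx > 0).to(grad_out.dtype), None

    # Usage: save the activation with an absolute error bound of 1e-2.
    x = torch.randn(64, 256, requires_grad=True)
    loss = LossyReLU.apply(x, 1e-2).sum()
    loss.backward()

The framework's adaptive scheme would then choose eb (and, with the memory saved, a larger batch size) dynamically during training; the fixed 1e-2 here is only for illustration.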