Paper Title
Accelerating Deep Learning Inference with Cross-Layer Data Reuse on GPUs
Paper Authors
Paper Abstract
Accelerating deep learning inference is very important for real-time applications. In this paper, we propose a novel method to fuse the layers of convolutional neural networks (CNNs) on Graphics Processing Units (GPUs), which applies data reuse analysis and access optimization at different levels of the memory hierarchy. To achieve a balance between computation and memory access, we explore the fusion opportunities in the CNN computation graph and propose three fusion modes for convolutional neural networks: straight, merge, and split. We then design an approach for generating efficient fused code, which exploits multi-level memory usage more deeply for cross-layer data reuse. The effectiveness of our method is evaluated on network layers from state-of-the-art CNNs on two different GPU platforms, NVIDIA TITAN Xp and Tesla P4. The experiments show an average speedup of 2.02x on representative structures of CNNs, and 1.57x on end-to-end inference of SqueezeNet.
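The core idea behind the "straight" fusion mode can be illustrated with a minimal NumPy sketch (all names and tile sizes here are illustrative assumptions, not from the paper): two stacked 1x1 convolutions are computed tile by tile, so the intermediate feature map lives only in a small per-tile buffer (standing in for registers/shared memory on a GPU) instead of being fully materialized in global memory.

```python
import numpy as np

def conv1x1(x, w):
    # Pointwise (1x1) convolution: x is (C_in, H, W), w is (C_out, C_in),
    # result is (C_out, H, W).
    return np.einsum('oc,chw->ohw', w, x)

def unfused(x, w1, w2):
    # Layer-by-layer execution: the intermediate tensor t is fully
    # materialized (on a GPU, written to and re-read from global memory).
    t = conv1x1(x, w1)
    return conv1x1(t, w2)

def fused(x, w1, w2, tile=4):
    # "Straight" cross-layer fusion (sketch): for each spatial tile, both
    # layers are computed back to back; the intermediate exists only as a
    # small tile-sized buffer.
    c_out, h, w = w2.shape[0], x.shape[1], x.shape[2]
    out = np.empty((c_out, h, w))
    for i in range(0, h, tile):
        for j in range(0, w, tile):
            x_tile = x[:, i:i + tile, j:j + tile]
            t_tile = conv1x1(x_tile, w1)  # stays "on chip"
            out[:, i:i + tile, j:j + tile] = conv1x1(t_tile, w2)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 8, 8))
w1 = rng.standard_normal((6, 3))
w2 = rng.standard_normal((4, 6))
assert np.allclose(unfused(x, w1, w2), fused(x, w1, w2))
```

For 1x1 convolutions the tiling is exact, so the fused and unfused results match; for convolutions with larger kernels, fusion additionally requires overlapping (haloed) input tiles, which is part of the data reuse analysis the paper addresses.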