Paper Title

Accelerate CNNs from Three Dimensions: A Comprehensive Pruning Framework

Authors

Wenxiao Wang, Minghao Chen, Shuai Zhao, Long Chen, Jinming Hu, Haifeng Liu, Deng Cai, Xiaofei He, Wei Liu

Abstract

Most neural network pruning methods, such as filter-level and layer-level pruning, prune the network model along only one dimension (depth, width, or resolution) to meet a computational budget. However, such a pruning policy often reduces that dimension excessively, inducing a large accuracy loss. To alleviate this issue, we argue that pruning should be conducted along all three dimensions comprehensively. To this end, our framework formulates pruning as an optimization problem. Specifically, it first casts the relationship between a model's accuracy and its depth/width/resolution into a polynomial regression, and then maximizes the polynomial to obtain the optimal values of the three dimensions. Finally, the model is pruned along the three optimal dimensions accordingly. In this framework, since collecting enough data to train the regression is very time-consuming, we propose two approaches to lower the cost: 1) specializing the polynomial form to ensure an accurate regression even with less training data; 2) employing iterative pruning and fine-tuning to collect the data faster. Extensive experiments show that our proposed algorithm surpasses state-of-the-art pruning algorithms and even neural architecture search-based algorithms.
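The core idea in the abstract can be sketched as a two-stage procedure: fit a polynomial regression from (depth, width, resolution) scaling factors to accuracy, then search for the factors that maximize the predicted accuracy under a computational budget. The sketch below is illustrative only: the second-order polynomial form, the synthetic samples, and the FLOPs proxy `d * w^2 * r^2` (depth scales FLOPs linearly; width and resolution roughly quadratically) are assumptions, not the paper's exact specialized polynomial or pruning procedure.

```python
# Hypothetical sketch of "pruning as optimization": regress accuracy on
# (depth, width, resolution) scaling factors, then maximize the fitted
# polynomial under a FLOPs budget. Data, polynomial form, and FLOPs
# model are illustrative assumptions.
import itertools
import numpy as np

def poly_features(d, w, r):
    """Second-order polynomial features of the three scaling factors."""
    return np.array([1.0, d, w, r, d * d, w * w, r * r, d * w, d * r, w * r])

def fit_accuracy_model(samples):
    """Least-squares fit; samples is a list of ((d, w, r), accuracy)."""
    X = np.stack([poly_features(*dims) for dims, _ in samples])
    y = np.array([acc for _, acc in samples])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def best_dims(coef, budget, step=0.05):
    """Grid search over scaling factors in (0, 1].

    FLOPs are assumed proportional to d * w^2 * r^2; the budget is the
    allowed fraction of the unpruned model's FLOPs.
    """
    grid = np.arange(step, 1.0 + 1e-9, step)
    best, best_acc = None, -np.inf
    for d, w, r in itertools.product(grid, repeat=3):
        if d * w * w * r * r > budget:   # computational budget constraint
            continue
        acc = float(poly_features(d, w, r) @ coef)
        if acc > best_acc:
            best, best_acc = (d, w, r), acc
    return best, best_acc
```

In the paper's setting, the samples would come from actually pruning and fine-tuning the model at a few (d, w, r) settings; the grid search here stands in for maximizing the polynomial, which could also be done analytically for a specialized low-order form.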
