Paper Title


DAIS: Automatic Channel Pruning via Differentiable Annealing Indicator Search

Authors

Yushuo Guan, Ning Liu, Pengyu Zhao, Zhengping Che, Kaigui Bian, Yanzhi Wang, Jian Tang

Abstract


Convolutional neural networks have achieved great success in computer vision tasks, but their large computation overhead hinders efficient deployment. Structured (channel) pruning is usually applied to reduce model redundancy while preserving the network structure, so that the pruned network can be easily deployed in practice. However, existing structured pruning methods require hand-crafted rules, which may lead to a tremendous pruning space. In this paper, we introduce Differentiable Annealing Indicator Search (DAIS), which leverages the strength of neural architecture search in channel pruning and automatically searches for an effective pruned model under given constraints on computation overhead. Specifically, DAIS relaxes the binarized channel indicators to be continuous and then jointly learns both the indicators and the model parameters via bi-level optimization. To bridge the non-negligible discrepancy between the continuous model and the target binarized model, DAIS proposes an annealing-based procedure that steers the indicator convergence towards binarized states. Moreover, DAIS designs various regularizations based on a priori structural knowledge to control the pruning sparsity and to improve model performance. Experimental results show that DAIS outperforms state-of-the-art pruning methods on CIFAR-10, CIFAR-100, and ImageNet.
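The core relaxation described in the abstract can be sketched in a few lines: a learnable score per channel is passed through a temperature-controlled sigmoid, and as the temperature anneals toward zero the soft indicator converges to a hard 0/1 gate. This is a minimal illustrative sketch, not the paper's implementation; the function name, the sigmoid form, and the annealing schedule shown here are assumptions.

```python
import math

def annealed_indicator(alpha: float, temperature: float) -> float:
    """Continuous relaxation of a binary channel indicator.

    A sigmoid with temperature: for large temperatures the output stays
    soft (differentiable w.r.t. alpha), and as temperature -> 0 it
    approaches a hard 0/1 gate, mirroring the annealing idea in DAIS.
    (Hypothetical sketch; the paper's exact form and schedule may differ.)
    """
    return 1.0 / (1.0 + math.exp(-alpha / temperature))

# A channel with positive score converges to "keep" (1),
# one with negative score converges to "prune" (0).
for t in [1.0, 0.5, 0.1, 0.01]:  # a simple decaying temperature schedule
    keep = annealed_indicator(0.8, t)
    prune = annealed_indicator(-0.8, t)
    print(f"T={t}: keep={keep:.4f}, prune={prune:.4f}")
```

At high temperature the gradient through the indicator remains informative for the bi-level optimization; near zero temperature the gate is effectively binary, so the searched model matches the pruned model that will actually be deployed.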
