Paper Title

Dynamic Group Convolution for Accelerating Convolutional Neural Networks

Paper Authors

Zhuo Su, Linpu Fang, Wenxiong Kang, Dewen Hu, Matti Pietikäinen, Li Liu

Paper Abstract

Replacing normal convolutions with group convolutions can significantly increase the computational efficiency of modern deep convolutional networks, and this strategy has been widely adopted in compact network architecture designs. However, existing group convolutions undermine the original network structure by permanently cutting off some connections, resulting in significant accuracy degradation. In this paper, we propose Dynamic Group Convolution (DGC), which adaptively selects which part of the input channels to connect within each group for each individual sample on the fly. Specifically, we equip each group with a small feature selector that automatically selects the most important input channels conditioned on the input image. Multiple groups can adaptively capture abundant and complementary visual/semantic features for each input image. DGC preserves the original network structure while achieving computational efficiency similar to that of conventional group convolution. Extensive experiments on multiple image classification benchmarks, including CIFAR-10, CIFAR-100 and ImageNet, demonstrate its superiority over existing group convolution techniques and dynamic execution methods. The code is available at https://github.com/zhuogege1943/dgc.
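To make the mechanism concrete, here is a minimal, framework-free sketch of the idea the abstract describes: each group carries a small saliency gate that scores the input channels per sample, keeps only the top fraction, and convolves over that subset. For clarity it uses a 1×1 convolution on a single feature vector (one scalar per channel); the shapes, the keep ratio, and the gate form (`gates[g][i] * |x[i]|`) are illustrative assumptions, not the authors' exact design — see the official repository for the real implementation.

```python
# Hedged sketch of dynamic group convolution (DGC-style channel selection).
# All dimensions and the saliency gate are hypothetical simplifications.
import random

random.seed(0)

def dynamic_group_conv(x, weights, gates, num_groups, keep_ratio):
    """x: list of C_in channel activations (scalars, i.e. 1x1 spatial).
    weights[g][o][i]: per-group kernel (outputs of group g over all C_in inputs).
    gates[g][i]: per-group saliency weights scoring each input channel.
    Each group keeps the top `keep_ratio` fraction of input channels,
    chosen per sample from the gate scores, then convolves over them."""
    c_in = len(x)
    k = max(1, int(c_in * keep_ratio))
    out = []
    for g in range(num_groups):
        # Saliency score for every input channel, conditioned on this sample.
        scores = [gates[g][i] * abs(x[i]) for i in range(c_in)]
        # Indices of the k most important channels for this group.
        keep = sorted(range(c_in), key=lambda i: scores[i], reverse=True)[:k]
        # 1x1 convolution restricted to the selected channels only.
        for out_w in weights[g]:
            out.append(sum(out_w[i] * x[i] for i in keep))
    return out

# Toy example: 8 input channels, 2 groups, 2 outputs per group, keep 50%.
C_IN, GROUPS, OUT_PER_GROUP = 8, 2, 2
x = [random.uniform(-1, 1) for _ in range(C_IN)]
weights = [[[random.uniform(-1, 1) for _ in range(C_IN)]
            for _ in range(OUT_PER_GROUP)] for _ in range(GROUPS)]
gates = [[random.uniform(0, 1) for _ in range(C_IN)] for _ in range(GROUPS)]
y = dynamic_group_conv(x, weights, gates, GROUPS, keep_ratio=0.5)
print(len(y))  # 4 output channels
```

Note how this differs from a standard group convolution: instead of a fixed, permanent partition of input channels, the set `keep` is recomputed for every input, so the connectivity pattern adapts per sample while the per-sample compute stays comparable to a conventional grouped layer.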
