Paper Title

GhostNetV2: Enhance Cheap Operation with Long-Range Attention

Authors

Yehui Tang, Kai Han, Jianyuan Guo, Chang Xu, Chao Xu, Yunhe Wang

Abstract

Light-weight convolutional neural networks (CNNs) are specially designed for applications on mobile devices with faster inference speed. The convolutional operation can only capture local information within a window region, which prevents performance from being further improved. Introducing self-attention into convolution can capture global information well, but it largely encumbers the actual speed. In this paper, we propose a hardware-friendly attention mechanism (dubbed DFC attention) and then present a new GhostNetV2 architecture for mobile applications. The proposed DFC attention is constructed from fully-connected layers, which can not only execute quickly on common hardware but also capture the dependence between long-range pixels. We further revisit the expressiveness bottleneck in the previous GhostNet and propose to enhance the expanded features produced by cheap operations with DFC attention, so that a GhostNetV2 block can aggregate local and long-range information simultaneously. Extensive experiments demonstrate the superiority of GhostNetV2 over existing architectures. For example, it achieves 75.3% top-1 accuracy on ImageNet with 167M FLOPs, significantly surpassing GhostNetV1 (74.5%) at a similar computational cost. The source code will be available at https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/ghostnetv2_pytorch and https://gitee.com/mindspore/models/tree/master/research/cv/ghostnetv2.
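The abstract's key idea is that full self-attention over an H×W feature map is too slow on mobile hardware, while two fully-connected mixing steps, one along the vertical axis and one along the horizontal axis, still let every output position depend on every input position. The following is a minimal NumPy sketch of that decoupled aggregation; the function name, weight shapes, and the plain matrix-multiply formulation are illustrative assumptions, not the paper's implementation (which uses downsampling and depthwise 1-D convolutions for efficiency):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dfc_attention(z, f_h, f_w):
    """Sketch of decoupled fully-connected (DFC) attention.

    z   : feature map, shape (H, W)
    f_h : vertical mixing weights, shape (H, H)  -- one FC layer along height
    f_w : horizontal mixing weights, shape (W, W) -- one FC layer along width

    Returns an attention map in (0, 1) with the same shape as z. After the
    two sequential mixes, each output position aggregates information from
    the full feature map, i.e. long-range dependence, at FC-layer cost.
    """
    a = f_h @ z       # aggregate along the vertical direction
    a = a @ f_w.T     # then aggregate along the horizontal direction
    return sigmoid(a)

# Toy usage: modulate a feature map with the long-range attention map.
H, W = 4, 6
rng = np.random.default_rng(0)
z = rng.standard_normal((H, W))
f_h = rng.standard_normal((H, H)) / H   # random weights stand in for learned ones
f_w = rng.standard_normal((W, W)) / W
attn = dfc_attention(z, f_h, f_w)
out = z * attn  # element-wise reweighting of the features
```

In GhostNetV2 this attention map multiplies the expanded features produced by the Ghost module's cheap operations, which is how the block combines local (convolutional) and long-range (DFC) information.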
