Paper Title
Long-Range Zero-Shot Generative Deep Network Quantization
Paper Authors
Paper Abstract
Quantization approximates a deep network model that uses floating-point numbers with one that uses low-bit-width numbers, in order to accelerate inference and reduce computation. When the original data are unavailable, the model can still be quantized: zero-shot quantization fits the real data distribution through data synthesis. However, zero-shot quantization achieves inferior performance compared to post-training quantization with real data. We find this is because: 1) an ordinary generator struggles to produce highly diverse synthetic data, since it lacks long-range information to allocate attention to global features; and 2) the synthetic images merely aim to match the statistics of real data, which leads to weak intra-class heterogeneity and limited feature richness. To overcome these problems, we propose a novel deep network quantizer, dubbed Long-Range Zero-Shot Generative Deep Network Quantization (LRQ). Technically, we propose a long-range generator that learns long-range information rather than only simple local features. So that the synthetic data contain more global features, long-range attention built on large-kernel convolution is incorporated into the generator. In addition, we present an Adversarial Margin Add (AMA) module that forces the intra-class angle between each feature vector and its class center to enlarge. Because AMA increases the difficulty of minimizing the loss function, opposing the training objective of the original loss, it forms an adversarial process. Furthermore, to transfer knowledge from the full-precision network, we also employ decoupled knowledge distillation. Extensive experiments demonstrate that LRQ outperforms competing methods.
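The abstract attributes the generator's improved diversity to long-range attention built on large-kernel convolution. The paper's exact architecture is not given here; the following is a minimal PyTorch sketch of such an attention block, assuming a depth-wise large-kernel convolution produces a map that re-weights local features with global context. The class name, kernel size, and channel count are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class LargeKernelAttention(nn.Module):
    """Hypothetical long-range attention block using large-kernel convolution."""
    def __init__(self, channels: int, kernel_size: int = 13):
        super().__init__()
        padding = kernel_size // 2
        # depth-wise convolution with a large kernel gathers long-range spatial context
        self.spatial = nn.Conv2d(channels, channels, kernel_size,
                                 padding=padding, groups=channels)
        # point-wise convolution mixes channels to form the attention map
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = self.pointwise(self.spatial(x))  # long-range attention map
        return x * attn                         # re-weight local features globally

# usage: drop the block into an intermediate stage of a synthetic-image generator
feats = torch.randn(4, 64, 32, 32)
out = LargeKernelAttention(64)(feats)
print(out.shape)  # torch.Size([4, 64, 32, 32])
```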
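The AMA module is described only at a high level: it enlarges the intra-class angle between a feature vector and its class center, making the original loss harder to minimize. A hedged sketch of one way to realize that idea is shown below, adding an angular margin to the ground-truth class before cross-entropy; the margin, scale, and the way class centers are obtained are assumptions rather than the paper's formulation.

```python
import torch
import torch.nn.functional as F

def ama_logits(features, class_centers, labels, margin=0.3, scale=16.0):
    """Add an adversarial angular margin to the ground-truth class (illustrative)."""
    # cosine similarity between L2-normalized features and class centers
    cos = F.normalize(features, dim=1) @ F.normalize(class_centers, dim=1).t()
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
    # enlarge the angle only for the ground-truth class, which raises the loss
    one_hot = F.one_hot(labels, class_centers.size(0)).bool()
    theta = torch.where(one_hot, theta + margin, theta)
    return scale * torch.cos(theta)

features = torch.randn(8, 128)
centers = torch.randn(10, 128)
labels = torch.randint(0, 10, (8,))
loss = F.cross_entropy(ama_logits(features, centers, labels), labels)
```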
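Finally, the abstract states that decoupled knowledge distillation is used to transfer knowledge from the full-precision network to the quantized one. Below is a sketch of the commonly cited decoupled formulation (separate target-class and non-target-class terms); the weights alpha and beta, the temperature, and the exact combination used in LRQ are assumptions.

```python
import torch
import torch.nn.functional as F

def dkd_loss(student_logits, teacher_logits, target, alpha=1.0, beta=8.0, T=4.0):
    """Illustrative decoupled KD: target-class term + non-target-class term."""
    gt_mask = F.one_hot(target, student_logits.size(1)).bool()
    s = F.softmax(student_logits / T, dim=1)
    t = F.softmax(teacher_logits / T, dim=1)
    # target-class term: KL over the binary (target vs. non-target) distribution
    s_bin = torch.stack([(s * gt_mask).sum(1), (s * ~gt_mask).sum(1)], dim=1)
    t_bin = torch.stack([(t * gt_mask).sum(1), (t * ~gt_mask).sum(1)], dim=1)
    tckd = F.kl_div(torch.log(s_bin + 1e-8), t_bin, reduction="batchmean")
    # non-target-class term: KL over the classes other than the ground truth
    s_nt = F.log_softmax(student_logits / T - 1000.0 * gt_mask, dim=1)
    t_nt = F.softmax(teacher_logits / T - 1000.0 * gt_mask, dim=1)
    nckd = F.kl_div(s_nt, t_nt, reduction="batchmean")
    return (alpha * tckd + beta * nckd) * T * T
```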