Paper Title

Efficient Training of Deep Convolutional Neural Networks by Augmentation in Embedding Space

Authors

Abrishami, Mohammad Saeed, Eshratifar, Amir Erfan, Eigen, David, Wang, Yanzhi, Nazarian, Shahin, Pedram, Massoud

Abstract

Recent advances in the field of artificial intelligence have been made possible by deep neural networks. In applications where data are scarce, transfer learning and data augmentation techniques are commonly used to improve the generalization of deep learning models. However, fine-tuning a transfer-learning model with data augmentation in the raw input space has a high computational cost, since the full network must be run for every augmented input. This is particularly critical when large models are deployed on embedded devices with limited computational and energy resources. In this work, we propose a method that replaces augmentation in the raw input space with an approximate augmentation that acts purely in the embedding space. Our experimental results show that the proposed method drastically reduces computation while compromising model accuracy only negligibly.
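To make the cost argument concrete, below is a minimal sketch of fine-tuning with embedding-space augmentation, assuming a frozen backbone, a small trainable classifier head, and additive Gaussian noise as a stand-in for the paper's embedding-space augmentation. All module names, shapes, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Illustrative stand-ins: a "backbone" that produces embeddings and a small head.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
head = nn.Linear(256, 10)

# Freeze the backbone; only the lightweight head is fine-tuned.
for p in backbone.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def finetune_step(images, labels, noise_std=0.1, num_aug=4):
    # Run the expensive backbone ONCE per raw input, not once per augmented input.
    with torch.no_grad():
        z = backbone(images)
    total_loss = 0.0
    for _ in range(num_aug):
        # Approximate input-space augmentation by perturbing the embedding
        # (Gaussian noise here is purely an illustrative choice).
        z_aug = z + noise_std * torch.randn_like(z)
        # Each augmented sample only costs a pass through the small head.
        logits = head(z_aug)
        loss = criterion(logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    return total_loss / num_aug

# Toy usage with random tensors standing in for a real dataset.
images = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 10, (8,))
print(finetune_step(images, labels))
```

The point of the sketch is the cost structure rather than the specific perturbation: the expensive backbone forward pass happens once per raw input, while each additional augmented sample costs only a cheap perturbation and a pass through the small head.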
