Title
YOLObile: Real-Time Object Detection on Mobile Devices via Compression-Compilation Co-Design
Authors
Abstract
The rapid development and wide utilization of object detection techniques have drawn attention to both the accuracy and the speed of object detectors. However, current state-of-the-art object detection works are either accuracy-oriented, using a large model at the cost of high latency, or speed-oriented, using a lightweight model at the cost of accuracy. In this work, we propose the YOLObile framework, which achieves real-time object detection on mobile devices via compression-compilation co-design. A novel block-punched pruning scheme is proposed that applies to any kernel size. To improve computational efficiency on mobile devices, a GPU-CPU collaborative scheme is adopted along with advanced compiler-assisted optimizations. Experimental results indicate that our pruning scheme achieves a 14$\times$ compression rate on YOLOv4 while retaining 49.0 mAP. Under our YOLObile framework, we achieve 17 FPS inference speed using the GPU on a Samsung Galaxy S20. By incorporating our proposed GPU-CPU collaborative scheme, the inference speed increases to 19.1 FPS, outperforming the original YOLOv4 with a 5$\times$ speedup. Source code is at: \url{https://github.com/nightsnack/YOLObile}.
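To make the named pruning idea concrete, here is a minimal NumPy sketch of block-punched-style pruning on a 2-D weight matrix (e.g., a convolutional layer reshaped to filters × channel-kernel positions). This is an illustrative assumption, not the paper's implementation: the function name, block sizes, and prune ratio are hypothetical, and the importance score (accumulated magnitude per in-block position) is a common magnitude-based heuristic. The key property shown is that, within each block, the same weight positions are pruned across all of the block's rows, which gives hardware-friendly regularity while remaining finer-grained than whole-block pruning.

```python
import numpy as np

def block_punched_prune(W, block_rows=4, block_cols=4, prune_ratio=0.5):
    """Illustrative block-punched-style pruning sketch (hypothetical helper).

    Tiles W into (block_rows x block_cols) blocks. Within each block, the
    in-block column positions with the smallest accumulated magnitude over
    the block's rows are zeroed in EVERY row of the block, so the pruned
    positions are shared across the block's filters.
    """
    W = W.copy()
    rows, cols = W.shape
    for r0 in range(0, rows, block_rows):
        for c0 in range(0, cols, block_cols):
            blk = W[r0:r0 + block_rows, c0:c0 + block_cols]  # view into W
            # Importance of each in-block position, summed over the rows.
            score = np.abs(blk).sum(axis=0)
            n_prune = int(blk.shape[1] * prune_ratio)
            if n_prune:
                kill = np.argsort(score)[:n_prune]
                blk[:, kill] = 0.0  # same positions pruned in all rows
    return W
```

Because the zeroed positions repeat across each block's rows, a compiler can pack the surviving weights compactly and reuse one index pattern per block, which is the kind of regularity the compression-compilation co-design exploits.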