Paper Title

Improving Monocular Visual Odometry Using Learned Depth

Paper Authors

Libo Sun, Wei Yin, Enze Xie, Zhengrong Li, Changming Sun, Chunhua Shen

Paper Abstract

Monocular visual odometry (VO) is an important task in robotics and computer vision. To date, how to build an accurate and robust monocular VO system that works well in diverse scenarios remains largely unsolved. In this paper, we propose a framework that exploits monocular depth estimation to improve VO. The core of our framework is a monocular depth estimation module with strong generalization capability across diverse scenes. It has two separate working modes to assist localization and mapping. Given a single monocular image, the depth estimation module predicts relative depth to help the localization module improve accuracy. Given a sparse depth map together with an RGB image, the depth estimation module generates accurate, scale-consistent depth for dense mapping. Compared with current learning-based VO methods, our method demonstrates stronger generalization to diverse scenes. More significantly, our framework can boost the performance of existing geometry-based VO methods by a large margin.
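
To make the two working modes described in the abstract concrete, below is a minimal, self-contained PyTorch sketch of such an interface. The class and method names, the tiny placeholder networks, and the median-based scale alignment are illustrative assumptions for exposition only, not the authors' implementation.

```python
# Minimal sketch (not the paper's code) of a depth module with two working modes:
# (1) relative depth from a single RGB image, to assist localization;
# (2) scale-consistent dense depth from RGB plus a sparse depth map, for mapping.
import torch
import torch.nn as nn


class TwoModeDepthModule(nn.Module):
    def __init__(self):
        super().__init__()
        # Tiny placeholder networks; a real system would use a deep model trained
        # on diverse data to obtain strong cross-scene generalization.
        self.rgb_net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Softplus(),
        )
        self.fuse_net = nn.Sequential(
            nn.Conv2d(5, 16, 3, padding=1), nn.ReLU(),  # RGB + sparse depth + validity mask
            nn.Conv2d(16, 1, 3, padding=1), nn.Softplus(),
        )

    def relative_depth(self, rgb: torch.Tensor) -> torch.Tensor:
        """Mode 1: scale-ambiguous depth from a monocular image (aids localization)."""
        return self.rgb_net(rgb)

    def scale_consistent_depth(self, rgb: torch.Tensor,
                               sparse_depth: torch.Tensor) -> torch.Tensor:
        """Mode 2: densify a sparse depth map (e.g. from VO landmarks) into a
        scale-consistent dense depth map for mapping."""
        mask = (sparse_depth > 0).float()  # validity mask of sparse samples
        pred = self.fuse_net(torch.cat([rgb, sparse_depth, mask], dim=1))
        # Simple global scale alignment to the sparse observations (assumed step).
        valid = mask.bool()
        scale = (sparse_depth[valid] / pred[valid].clamp(min=1e-6)).median()
        return pred * scale


if __name__ == "__main__":
    module = TwoModeDepthModule()
    rgb = torch.rand(1, 3, 64, 64)
    sparse = torch.zeros(1, 1, 64, 64)
    sparse[:, :, ::8, ::8] = torch.rand(1, 1, 8, 8) * 10.0   # fake sparse VO depths
    print(module.relative_depth(rgb).shape)                  # torch.Size([1, 1, 64, 64])
    print(module.scale_consistent_depth(rgb, sparse).shape)  # torch.Size([1, 1, 64, 64])
```

The split into two entry points mirrors the abstract's design: the image-only mode only needs to be correct up to scale, while the sparse-depth mode anchors the prediction to metric, scale-consistent values supplied by the geometric VO front end.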
