Paper Title

Scene-Aware Error Modeling of LiDAR/Visual Odometry for Fusion-based Vehicle Localization

Authors

Xiaoliang Ju, Donghao Xu, Huijing Zhao

Abstract

Localization is an essential technique in mobile robotics. In complex environments, it is necessary to fuse different localization modules to obtain more robust results, in which the error model plays a paramount role. However, exteroceptive sensor-based odometries (ESOs), such as LiDAR/visual odometry, often deliver results with scene-related errors that are difficult to model accurately. To address this problem, this research designs a scene-aware error model for ESO, based on which a multimodal localization fusion framework is developed. In addition, an end-to-end learning method is proposed to train this error model using sparse global poses, such as GPS/IMU results. The proposed method is realized for error modeling of LiDAR/visual odometry, and the results are fused with dead reckoning to examine vehicle localization performance. Experiments are conducted using both simulated and real-world data from experienced and unexperienced environments, and the results demonstrate that with the learned scene-aware error models, vehicle localization accuracy is largely improved and shows adaptability in unexperienced scenes.
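To make the fusion idea in the abstract concrete, below is a minimal sketch (not the authors' implementation) of how a scene-aware measurement covariance could weight an ESO pose against dead reckoning in a Kalman-style update. The function `fuse_pose_2d` and the input `R_scene` are hypothetical names for illustration; in the paper, the scene-aware error model would predict something playing the role of `R_scene`, while here it is simply passed in.

```python
import numpy as np

def fuse_pose_2d(x_dr, P_dr, z_eso, R_scene):
    """Fuse a dead-reckoning pose with an ESO pose measurement.

    x_dr    : (3,) dead-reckoning state [x, y, yaw]
    P_dr    : (3, 3) dead-reckoning covariance
    z_eso   : (3,) pose measured by LiDAR/visual odometry
    R_scene : (3, 3) scene-aware measurement covariance
              (hypothetically supplied by a learned error model)
    Returns the fused state and covariance via a linear Kalman
    update with an identity measurement matrix.
    """
    # Kalman gain: the ESO measurement gets more weight
    # when its scene-predicted covariance is small.
    K = P_dr @ np.linalg.inv(P_dr + R_scene)
    innovation = z_eso - x_dr
    # Wrap the yaw residual to [-pi, pi] so angles fuse correctly.
    innovation[2] = (innovation[2] + np.pi) % (2 * np.pi) - np.pi
    x_fused = x_dr + K @ innovation
    P_fused = (np.eye(3) - K) @ P_dr
    return x_fused, P_fused

# Toy usage: in a degraded scene (e.g., few LiDAR features) a large
# predicted covariance keeps the fused pose close to dead reckoning.
x_dr = np.array([10.0, 5.0, 0.1])
P_dr = np.diag([0.5, 0.5, 0.05])
z_eso = np.array([10.4, 5.2, 0.12])
R_good = np.diag([0.05, 0.05, 0.005])  # feature-rich scene
R_bad = np.diag([5.0, 5.0, 0.5])       # degraded scene
print(fuse_pose_2d(x_dr, P_dr, z_eso, R_good)[0])
print(fuse_pose_2d(x_dr, P_dr, z_eso, R_bad)[0])
```

The sketch illustrates why the error model matters: with a fixed covariance, a scene-degraded odometry result would be over-trusted, whereas a scene-aware `R_scene` lets the filter discount it automatically.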
