Paper Title
Any Way You Look At It: Semantic Crossview Localization and Mapping with LiDAR
Paper Authors
Paper Abstract
Currently, GPS is by far the most popular global localization method. However, it is not reliable or accurate in all environments. SLAM methods enable local state estimation but provide no means of registering the local map to a global one, which can be important for inter-robot collaboration or human interaction. In this work, we present a real-time method that uses semantics to globally localize a robot from only egocentric, semantically labelled 3D LiDAR and IMU data, together with top-down RGB images obtained from satellites or aerial robots. Additionally, as it runs, our method builds a globally registered semantic map of the environment. We validate our method on KITTI as well as our own challenging datasets, and demonstrate better than 10-meter accuracy, a high degree of robustness, and the ability to estimate the scale of a top-down map on the fly if it is initially unknown.