Title
SC-Explorer: Incremental 3D Scene Completion for Safe and Efficient Exploration Mapping and Planning
Authors
Abstract
Exploration of unknown environments is a fundamental problem in robotics and an essential component in numerous applications of autonomous systems. A major challenge in exploring unknown environments is that the robot has to plan with the limited information available at each time step. While most current approaches rely on heuristics and assumptions to plan paths based on these partial observations, we instead propose a novel way to integrate deep learning into exploration by leveraging 3D scene completion for informed, safe, and interpretable exploration mapping and planning. Our approach, SC-Explorer, combines scene completion using a novel incremental fusion mechanism and a newly proposed hierarchical multi-layer mapping approach, to guarantee safety and efficiency of the robot. We further present an informative path planning method, leveraging the capabilities of our mapping approach and a novel scene-completion-aware information gain. While our method is generally applicable, we evaluate it in the use case of a Micro Aerial Vehicle (MAV). We thoroughly study each component in high-fidelity simulation experiments using only mobile hardware, and show that our method can speed up coverage of an environment by 73% compared to the baselines with only minimal reduction in map accuracy. Even if scene completions are not included in the final map, we show that they can be used to guide the robot to choose more informative paths, speeding up the measurement of the scene with the robot's sensors by 35%. We validate our system on a fully autonomous MAV, showing rapid and reliable scene coverage even in a complex cluttered environment. We make our methods available as open-source.
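To make the idea of a scene-completion-aware information gain concrete, below is a minimal Python sketch, not the authors' implementation: candidate viewpoints are scored by how many currently unobserved voxels they would cover, with voxels that a scene-completion network predicts to be surface weighted more heavily than plain unknown space. All names (voxel_states, completion_confidence, visible_voxels) and the weights are hypothetical placeholders introduced only for this illustration.

```python
# Illustrative sketch of a scene-completion-aware viewpoint gain
# (assumed interface, not the paper's actual code).
import numpy as np

UNOBSERVED = 0  # voxel never measured by the depth sensor
OBSERVED = 1    # voxel already measured


def completion_aware_gain(voxel_states: np.ndarray,
                          completion_confidence: np.ndarray,
                          visible_voxels: np.ndarray,
                          w_completed: float = 2.0,
                          w_unknown: float = 1.0) -> float:
    """Score one candidate viewpoint.

    voxel_states:          (N,) int array, OBSERVED / UNOBSERVED per map voxel.
    completion_confidence: (N,) float array in [0, 1], confidence of the
                           scene-completion network that the voxel is surface.
    visible_voxels:        (N,) bool array, voxels inside the candidate view
                           frustum (e.g. obtained by ray casting).
    """
    unobserved = (voxel_states == UNOBSERVED) & visible_voxels
    # Voxels the network already predicted are attractive to verify with the
    # real sensor, so they receive a higher weight than plain unknown space.
    completed = unobserved & (completion_confidence > 0.5)
    unknown = unobserved & ~completed
    return w_completed * completed.sum() + w_unknown * unknown.sum()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 1000
    states = rng.integers(0, 2, n)          # random observed/unobserved map
    conf = rng.random(n)                    # random completion confidences
    view = rng.random(n) < 0.3              # random candidate view frustum
    print("viewpoint gain:", completion_aware_gain(states, conf, view))
```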