Paper Title


Autonomous Exploration Under Uncertainty via Deep Reinforcement Learning on Graphs

Authors

Fanfei Chen, John D. Martin, Yewei Huang, Jinkun Wang, Brendan Englot

Abstract


We consider an autonomous exploration problem in which a range-sensing mobile robot is tasked with accurately mapping the landmarks in an a priori unknown environment efficiently in real-time; it must choose sensing actions that both curb localization uncertainty and achieve information gain. For this problem, belief space planning methods that forward-simulate robot sensing and estimation may often fail in real-time implementation, scaling poorly with increasing size of the state, belief and action spaces. We propose a novel approach that uses graph neural networks (GNNs) in conjunction with deep reinforcement learning (DRL), enabling decision-making over graphs containing exploration information to predict a robot's optimal sensing action in belief space. The policy, which is trained in different random environments without human intervention, offers a real-time, scalable decision-making process whose high-performance exploratory sensing actions yield accurate maps and high rates of information gain.
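The core idea of the abstract — a GNN that operates on a graph of exploration information and scores candidate sensing actions for the robot — can be illustrated with a toy sketch. This is not the authors' architecture: the node features (expected info gain, travel cost, pose uncertainty), the single round of mean-aggregation message passing, and all weight shapes are illustrative assumptions; a trained DRL policy would learn these weights rather than draw them at random.

```python
import math
import random

def gnn_scores(feats, adj, w_self, w_neigh, w_out):
    """One round of mean-aggregation message passing over the graph,
    then a scalar score per node; the highest-scoring node would be
    the chosen sensing action in this toy setup."""
    n, d = len(feats), len(feats[0])
    h_dim = len(w_self[0])
    scores = []
    for i in range(n):
        neigh = [j for j in range(n) if adj[i][j]]
        # mean of neighbor features (zero vector if the node is isolated)
        mean = [sum(feats[j][k] for j in neigh) / len(neigh) if neigh
                else 0.0 for k in range(d)]
        # hidden embedding combining self and neighborhood information
        h = [math.tanh(sum(feats[i][k] * w_self[k][m] for k in range(d))
                       + sum(mean[k] * w_neigh[k][m] for k in range(d)))
             for m in range(h_dim)]
        scores.append(sum(h[m] * w_out[m] for m in range(h_dim)))
    return scores

# Tiny example: 4 candidate nodes on a chain graph. Each node carries
# three hypothetical features: [expected info gain, travel cost,
# localization uncertainty]. Weights are random stand-ins for a policy.
random.seed(0)
feats = [[0.9, 0.2, 0.1], [0.4, 0.5, 0.3],
         [0.7, 0.8, 0.6], [0.2, 0.1, 0.9]]
adj = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
w_self = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
w_neigh = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
w_out = [random.uniform(-1, 1) for _ in range(4)]

scores = gnn_scores(feats, adj, w_self, w_neigh, w_out)
best = max(range(len(scores)), key=scores.__getitem__)
```

Because the same message-passing weights are shared across all nodes, the scorer runs on graphs of any size, which is the scalability property the abstract emphasizes over forward-simulating belief space planners.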
