Paper Title
TASKOGRAPHY: Evaluating robot task planning over large 3D scene graphs
Authors
Abstract
3D scene graphs (3DSGs) are an emerging description unifying symbolic, topological, and metric scene representations. However, typical 3DSGs contain hundreds of objects and symbols even for small environments, rendering task planning on the full graph impractical. We construct TASKOGRAPHY, the first large-scale robotic task planning benchmark over 3DSGs. While most benchmarking efforts in this area focus on vision-based planning, we systematically study symbolic planning to decouple planning performance from visual representation learning. We observe that, among existing methods, neither classical nor learning-based planners are capable of real-time planning over full 3DSGs. Enabling real-time planning demands progress on both (a) sparsifying 3DSGs for tractable planning and (b) designing planners that better exploit 3DSG hierarchies. Towards the former goal, we propose SCRUB, a task-conditioned 3DSG sparsification method that enables classical planners to match, and in some cases surpass, state-of-the-art learning-based planners. Towards the latter goal, we propose SEEK, a procedure enabling learning-based planners to exploit 3DSG structure, reducing the number of replanning queries required by current best approaches by an order of magnitude. We will open-source all code and baselines to spur further research at the intersection of robot task planning, learning, and 3DSGs.
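To make the idea of task-conditioned sparsification concrete, the sketch below shows a toy hierarchical scene graph (rooms, places, objects) and a pruning pass that keeps only the nodes needed to reach symbols mentioned in the task goal. This is a minimal illustration under assumed class and function names (Node, sparsify, etc.), not the paper's SCRUB implementation or benchmark code.

```python
# Minimal illustrative sketch of task-conditioned scene-graph pruning.
# All names here are hypothetical and do NOT correspond to the paper's
# SCRUB implementation; the point is only that pruning goal-irrelevant
# subtrees leaves a far smaller graph for a symbolic planner to ground.
from dataclasses import dataclass, field


@dataclass
class Node:
    """A node in a toy 3D scene graph (e.g. a room, place, or object)."""
    name: str
    kind: str                      # "room", "place", or "object"
    children: list = field(default_factory=list)


def _relevant_nodes(root: Node, targets: set[str]) -> set[str]:
    """Names of all nodes on a path from the root to any goal symbol."""
    kept: set[str] = set()

    def visit(node: Node) -> bool:
        keep = node.name in targets
        for child in node.children:
            keep |= visit(child)
        if keep:
            kept.add(node.name)
        return keep

    visit(root)
    return kept


def sparsify(root: Node, goal_symbols: set[str]) -> Node:
    """Drop every subtree that contains no symbol mentioned in the goal."""
    keep = _relevant_nodes(root, goal_symbols)

    def copy_kept(node: Node) -> Node:
        return Node(node.name, node.kind,
                    [copy_kept(c) for c in node.children if c.name in keep])

    return copy_kept(root)


if __name__ == "__main__":
    # Toy scene: two rooms and a few objects; the task only mentions the mug.
    scene = Node("building", "room", [
        Node("kitchen", "room", [
            Node("counter", "place",
                 [Node("mug", "object"), Node("plate", "object")]),
        ]),
        Node("bedroom", "room", [
            Node("desk", "place", [Node("laptop", "object")]),
        ]),
    ])
    pruned = sparsify(scene, goal_symbols={"mug"})
    # The pruned graph retains only building -> kitchen -> counter -> mug,
    # so a downstream planner instantiates far fewer objects and symbols.
```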