Paper Title
SafePicking: Learning Safe Object Extraction via Object-Level Mapping
Paper Authors
Paper Abstract
Robots need object-level scene understanding to manipulate objects while reasoning about contact, support, and occlusion among them. Given a pile of objects, object recognition and reconstruction can identify the boundaries of object instances, giving important cues as to how the objects form and support the pile. In this work, we present a system, SafePicking, that integrates object-level mapping and learning-based motion planning to generate a motion that safely extracts occluded target objects from a pile. Planning is done by learning a deep Q-network that receives observations of predicted object poses and a depth-based heightmap and outputs a motion trajectory, trained to maximize a safety-metric reward. Our results show that fusing pose and depth-sensing observations improves both the performance and the robustness of the model. We evaluate our method using YCB objects in both simulation and the real world, achieving safe object extraction from piles.
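To make the observation-fusion idea concrete, below is a minimal PyTorch sketch, not the authors' implementation, of a Q-network that encodes the depth-based heightmap with a small CNN and the predicted object poses with an MLP, concatenates the two feature vectors, and outputs Q-values over a discrete set of motion primitives. All layer sizes, the action count, the number of objects, and the input shapes are illustrative assumptions.

```python
# Hypothetical sketch of the pose + heightmap fusion Q-network described
# in the abstract; architecture details are assumed, not taken from the paper.
import torch
import torch.nn as nn


class FusionQNetwork(nn.Module):
    def __init__(self, num_actions: int = 6, num_objects: int = 8):
        super().__init__()
        # Encoder for the depth-based heightmap (assumed 1 x 64 x 64 input).
        self.heightmap_encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),  # -> 64 * 8 * 8 = 4096 features
        )
        # Encoder for predicted object poses (xyz + quaternion per object).
        self.pose_encoder = nn.Sequential(
            nn.Linear(num_objects * 7, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )
        # Fused head producing one Q-value per candidate motion primitive.
        self.q_head = nn.Sequential(
            nn.Linear(64 * 8 * 8 + 128, 256), nn.ReLU(),
            nn.Linear(256, num_actions),
        )

    def forward(self, heightmap: torch.Tensor, poses: torch.Tensor) -> torch.Tensor:
        h = self.heightmap_encoder(heightmap)         # (B, 4096)
        p = self.pose_encoder(poses.flatten(1))       # (B, 128)
        return self.q_head(torch.cat([h, p], dim=1))  # (B, num_actions)


# Greedy action selection for one (dummy) observation:
net = FusionQNetwork()
heightmap = torch.rand(1, 1, 64, 64)  # depth-based heightmap
poses = torch.rand(1, 8, 7)           # predicted 6-DoF poses (xyz + quaternion)
action = net(heightmap, poses).argmax(dim=1)
```

In a DQN setup such a network would be trained with temporal-difference targets on transitions whose reward encodes the safety metric (e.g., penalizing disturbance of non-target objects); concatenating the two encoders' features is one simple way to realize the pose and depth fusion the abstract credits for the improved performance and robustness.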