Paper Title


Grasp Proposal Networks: An End-to-End Solution for Visual Learning of Robotic Grasps

Authors

Chaozheng Wu, Jian Chen, Qiaoyu Cao, Jianchi Zhang, Yunxin Tai, Lin Sun, Kui Jia

Abstract


Learning robotic grasps from visual observations is a promising yet challenging task. Recent research shows its great potential by preparing and learning from large-scale synthetic datasets. For the popular 6 degree-of-freedom (6-DOF) grasp setting of a parallel-jaw gripper, most existing methods take the strategy of heuristically sampling grasp candidates and then evaluating them using learned scoring functions. This strategy is limited by the conflict between sampling efficiency and coverage of optimal grasps. To this end, we propose in this work a novel, end-to-end \emph{Grasp Proposal Network (GPNet)} to predict a diverse set of 6-DOF grasps for an unseen object observed from a single, unknown camera view. GPNet builds on a key design of the grasp proposal module that defines \emph{anchors of grasp centers} at discrete but regular 3D grid corners, which is flexible enough to support either more precise or more diverse grasp predictions. To test GPNet, we contribute a synthetic dataset of 6-DOF object grasps; evaluation is conducted using rule-based criteria, simulation tests, and real tests. Comparative results show the advantage of our method over existing ones. Notably, GPNet gains better simulation results via the specified coverage, which helps achieve a ready translation in real tests. We will make our dataset publicly available.
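To make the idea of "anchors of grasp centers at discrete but regular 3D grid corners" concrete, here is a minimal sketch of how such anchors could be laid out over an object's bounding box. The function name, parameters, and grid resolution are hypothetical illustrations, not GPNet's actual implementation:

```python
import numpy as np

def make_grasp_anchors(bbox_min, bbox_max, resolution):
    """Place anchor points for grasp centers at the corners of a regular
    3D grid spanning an object's bounding box.

    Hypothetical helper for illustration only; GPNet's real proposal
    module and its parameters are described in the paper, not here.
    """
    # One evenly spaced coordinate axis per dimension (x, y, z).
    axes = [np.linspace(lo, hi, resolution)
            for lo, hi in zip(bbox_min, bbox_max)]
    # Cartesian product of the three axes -> regular 3D grid of corners.
    gx, gy, gz = np.meshgrid(*axes, indexing="ij")
    anchors = np.stack([gx.ravel(), gy.ravel(), gz.ravel()], axis=-1)
    return anchors  # shape: (resolution**3, 3)

# Example: a 4x4x4 grid over a 10 cm cube gives 64 candidate anchor centers.
anchors = make_grasp_anchors((0.0, 0.0, 0.0), (0.1, 0.1, 0.1), resolution=4)
print(anchors.shape)  # (64, 3)
```

A finer grid (larger `resolution`) trades compute for denser coverage, which mirrors the paper's point that a regular anchor layout can flexibly support either more precise or more diverse grasp predictions.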
