Paper Title

Learning Dexterous Grasping with Object-Centric Visual Affordances

Authors

Priyanka Mandikal, Kristen Grauman

Abstract

Dexterous robotic hands are appealing for their agility and human-like morphology, yet their high degree of freedom makes learning to manipulate challenging. We introduce an approach for learning dexterous grasping. Our key idea is to embed an object-centric visual affordance model within a deep reinforcement learning loop to learn grasping policies that favor the same object regions favored by people. Unlike traditional approaches that learn from human demonstration trajectories (e.g., hand joint sequences captured with a glove), the proposed prior is object-centric and image-based, allowing the agent to anticipate useful affordance regions for objects unseen during policy learning. We demonstrate our idea with a 30-DoF five-fingered robotic hand simulator on 40 objects from two datasets, where it successfully and efficiently learns policies for stable functional grasps. Our affordance-guided policies are significantly more effective, generalize better to novel objects, train 3× faster than the baselines, and are more robust to noisy sensor readings and actuation. Our work offers a step towards manipulation agents that learn by watching how people use objects, without requiring state and action information about the human body. Project website: http://vision.cs.utexas.edu/projects/graff-dexterous-affordance-grasp
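To make the key idea concrete, below is a minimal sketch (not the paper's actual reward) of how an object-centric affordance prior could shape a grasping reward inside an RL loop: fingertips are pulled toward predicted affordance points, and a sparse bonus rewards a stable grasp. The function name `affordance_shaped_reward`, its inputs, and the weights `w_aff` / `w_grasp` are hypothetical placeholders for illustration only.

```python
import numpy as np

def affordance_shaped_reward(fingertips, affordance_points, grasp_stable,
                             w_aff=1.0, w_grasp=10.0):
    """Hypothetical shaped reward: dense affordance-contact term + sparse grasp bonus.

    fingertips:        (F, 3) fingertip positions in world coordinates
    affordance_points: (N, 3) object points the affordance model predicts
                       people prefer to contact
    grasp_stable:      True if the object is currently held against gravity
    """
    fingertips = np.asarray(fingertips, dtype=float)
    affordance_points = np.asarray(affordance_points, dtype=float)

    # Distance from every fingertip to every affordance point,
    # then keep the nearest affordance point per fingertip.
    dists = np.linalg.norm(
        fingertips[:, None, :] - affordance_points[None, :, :], axis=-1
    )
    nearest = dists.min(axis=1)

    # Dense term pulls fingers toward the affordance region;
    # the bonus fires only once the grasp is stable.
    return -w_aff * nearest.mean() + (w_grasp if grasp_stable else 0.0)
```

In this sketch the affordance model is assumed to output a set of preferred contact points on the object; because the prior is object-centric and image-based rather than a human demonstration trajectory, the same shaping term can be applied to objects never seen during policy learning.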
