Paper Title
Learning Whole-Body Human-Robot Haptic Interaction in Social Contexts
Paper Authors
Paper Abstract
This paper presents a learning-from-demonstration (LfD) framework for teaching human-robot social interactions that involve whole-body haptic interaction, i.e. direct human-robot contact over the full robot body. The performance of existing LfD frameworks suffers in such interactions due to the high dimensionality and spatiotemporal sparsity of the demonstration data. We show that by leveraging this sparsity, we can reduce the data dimensionality without incurring a significant accuracy penalty, and introduce three strategies for doing so. By combining these techniques with an LfD framework for learning multimodal human-robot interactions, we can model the spatiotemporal relationship between the tactile and kinesthetic information during whole-body haptic interactions. Using a teleoperated bimanual robot equipped with 61 force sensors, we experimentally demonstrate that a model trained with 121 sample hugs from 4 participants generalizes well to unseen inputs and human partners.
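The abstract does not specify the three dimensionality-reduction strategies, so the following is only a minimal sketch of the general idea it describes: exploiting the spatiotemporal sparsity of whole-body tactile data to shrink its dimensionality before modeling. All function names, thresholds, and the choice of PCA here are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def reduce_tactile_dim(demos, force_eps=0.1, activity_frac=0.05, n_components=8):
    """Hypothetical sparsity-based reduction of tactile demonstrations.

    demos: array of shape (num_demos, T, num_sensors) holding raw force
    readings, e.g. (121, T, 61) for the setup described in the abstract.
    """
    # Spatial sparsity: keep only sensors that register contact in at least
    # `activity_frac` of all frames across all demonstrations.
    active = (np.abs(demos) > force_eps).mean(axis=(0, 1)) > activity_frac
    trimmed = demos[..., active]

    # Project the surviving channels onto a low-dimensional linear basis
    # via PCA, computed as an SVD of the mean-centered frames.
    flat = trimmed.reshape(-1, trimmed.shape[-1])
    mean = flat.mean(axis=0)
    _, _, vt = np.linalg.svd(flat - mean, full_matrices=False)
    basis = vt[:n_components]
    reduced = (flat - mean) @ basis.T
    return reduced.reshape(*trimmed.shape[:2], n_components), active, mean, basis

# Toy usage: 121 demonstrations, 200 timesteps, 61 sensors, most inactive.
demos = np.random.rand(121, 200, 61) * (np.random.rand(61) > 0.7)
low_dim, active, mean, basis = reduce_tactile_dim(demos)
print(low_dim.shape)  # e.g. (121, 200, 8)
```

Pruning inactive channels before the linear projection keeps the projection itself small and cheap, which is one plausible way sparsity could be leveraged to avoid a significant accuracy penalty; the paper's actual strategies may differ.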