Paper Title
Visual Navigation Among Humans with Optimal Control as a Supervisor
Paper Authors
Paper Abstract
Real-world visual navigation requires robots to operate in unfamiliar, human-occupied dynamic environments. Navigation around humans is especially difficult because it requires anticipating their future motion, which can be quite challenging. We propose an approach that combines learning-based perception with model-based optimal control to navigate among humans based only on monocular, first-person RGB images. Our approach is enabled by our novel data-generation tool, HumANav, which allows for photorealistic renderings of indoor environment scenes with humans in them; these renderings are then used to train the perception module entirely in simulation. Through simulations and experiments on a mobile robot, we demonstrate that the learned navigation policies can anticipate and react to humans without explicitly predicting future human motion, generalize to previously unseen environments and human behaviors, and transfer directly from simulation to reality. Videos describing our approach and experiments, as well as a demo of HumANav, are available on the project website.
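To make the high-level pipeline described in the abstract concrete, the following is a minimal illustrative sketch, not the authors' implementation: a learned perception module maps a monocular RGB image and a relative goal to an intermediate waypoint, and a model-based controller tracks that waypoint. All names here (PerceptionNet, track_waypoint, the specific gains and distances) are hypothetical placeholders.

```python
# Illustrative sketch only (not the paper's code): perception predicts a
# waypoint from an RGB image; a model-based controller tracks the waypoint.
import numpy as np


class PerceptionNet:
    """Stand-in for a CNN trained in simulation on HumANav-style renderings."""

    def predict_waypoint(self, rgb_image: np.ndarray, goal_xy: np.ndarray) -> np.ndarray:
        # A trained network would infer a collision-free waypoint from pixels;
        # here we simply head a fixed distance toward the goal as a placeholder.
        direction = goal_xy / (np.linalg.norm(goal_xy) + 1e-6)
        return 1.5 * direction  # waypoint in the robot frame (meters)


def track_waypoint(state_xytheta: np.ndarray, waypoint_xy: np.ndarray) -> np.ndarray:
    """Placeholder for the model-based controller that tracks the waypoint."""
    dx, dy = waypoint_xy
    heading_error = np.arctan2(dy, dx) - state_xytheta[2]
    v = 0.5 * np.hypot(dx, dy)  # linear velocity command
    w = 1.0 * np.arctan2(np.sin(heading_error), np.cos(heading_error))  # angular velocity command
    return np.array([v, w])


# Usage: one perception-control step on a dummy image and goal.
image = np.zeros((224, 224, 3), dtype=np.uint8)
goal_in_robot_frame = np.array([4.0, 1.0])
waypoint = PerceptionNet().predict_waypoint(image, goal_in_robot_frame)
command = track_waypoint(np.array([0.0, 0.0, 0.0]), waypoint)
print(waypoint, command)
```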