Paper Title
Towards self-attention based visual navigation in the real world
Paper Authors
Paper Abstract
Vision guided navigation requires processing complex visual information to inform task-oriented decisions. Applications include autonomous robots, self-driving cars, and assistive vision for humans. A key element is the extraction and selection of relevant features in pixel space upon which to base action choices, for which Machine Learning techniques are well suited. However, Deep Reinforcement Learning agents trained in simulation often exhibit unsatisfactory results when deployed in the real world due to perceptual differences known as the $\textit{reality gap}$. An approach that is yet to be explored to bridge this gap is self-attention. In this paper we (1) perform a systematic exploration of the hyperparameter space for self-attention based navigation of 3D environments and qualitatively appraise behaviour observed from different hyperparameter sets, including their ability to generalise; (2) present strategies to improve the agents' generalisation abilities and navigation behaviour; and (3) show how models trained in simulation are capable of processing real world images meaningfully in real time. To our knowledge, this is the first demonstration of a self-attention based agent successfully trained in navigating a 3D action space, using fewer than 4000 parameters.
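To make the core idea concrete, the sketch below shows one way a very small patch-based self-attention module can distil a camera frame into a handful of salient patch locations that a compact downstream controller could act on. This is a minimal illustration only, assuming single-head query/key attention over flattened image patches; the names `extract_patches` and `PatchSelfAttention`, the patch size, and the top-k selection rule are illustrative assumptions, not the authors' exact architecture.

```python
import numpy as np

def extract_patches(img, patch=7, stride=4):
    """Slice an (H, W, C) image into flattened patches and record their centres."""
    H, W, _ = img.shape
    patches, centres = [], []
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            patches.append(img[y:y + patch, x:x + patch].reshape(-1))
            centres.append((y + patch // 2, x + patch // 2))
    return np.stack(patches), np.array(centres, dtype=np.float32)

class PatchSelfAttention:
    """Single-head self-attention that votes for the k most salient patches."""
    def __init__(self, in_dim, d=4, top_k=10, seed=0):
        rng = np.random.default_rng(seed)
        # Only two small projection matrices, keeping the parameter count low.
        self.Wq = rng.normal(scale=0.1, size=(in_dim, d))
        self.Wk = rng.normal(scale=0.1, size=(in_dim, d))
        self.d = d
        self.top_k = top_k

    def __call__(self, patches):
        Q, K = patches @ self.Wq, patches @ self.Wk          # (N, d) queries and keys
        scores = Q @ K.T / np.sqrt(self.d)                   # (N, N) patch-to-patch affinities
        scores = np.exp(scores - scores.max(axis=1, keepdims=True))
        attn = scores / scores.sum(axis=1, keepdims=True)    # row-wise softmax
        votes = attn.sum(axis=0)                             # total attention each patch receives
        return np.argsort(votes)[::-1][:self.top_k]          # indices of the top-k patches

# Usage sketch: the normalised centres of the selected patches form a tiny
# feature vector for a small controller (e.g. an MLP choosing navigation actions).
img = np.random.rand(84, 84, 3).astype(np.float32)           # stand-in for a camera frame
patches, centres = extract_patches(img)
attend = PatchSelfAttention(in_dim=patches.shape[1])
top = attend(patches)
features = (centres[top] / 84.0).flatten()
```

With 7x7x3 patches and a projection width of 4, the attention module above uses roughly 1200 weights, which is consistent in spirit with the sub-4000-parameter budget the abstract claims, though the paper's actual layer sizes may differ.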