Paper Title

Collaborative Training of Heterogeneous Reinforcement Learning Agents in Environments with Sparse Rewards: What and When to Share?

Paper Authors

Alain Andres, Esther Villar-Rodriguez, Javier Del Ser

Abstract

In the early stages of human life, babies develop their skills by exploring different scenarios, motivated by their inherent satisfaction rather than by extrinsic rewards from the environment. This behavior, referred to as intrinsic motivation, has emerged as one solution to the exploration challenge posed by reinforcement learning environments with sparse rewards. Diverse exploration approaches have been proposed to accelerate the learning process in single- and multi-agent problems with homogeneous agents. However, few studies have elaborated on collaborative learning frameworks between heterogeneous agents deployed into the same environment, each interacting with a different instance of it without any prior knowledge. Beyond this heterogeneity, each agent's characteristics grant it access to only a subset of the full state space, which may conceal different exploration strategies and optimal solutions. In this work, we combine ideas from intrinsic motivation and transfer learning. Specifically, we focus on sharing parameters in actor-critic model architectures and on combining information obtained through intrinsic motivation, with the aim of achieving more efficient exploration and faster learning. We test our strategies through experiments on a modified version of ViZDoom's My Way Home scenario, which is more challenging than the original and allows evaluating the heterogeneity between agents. Our results reveal different ways in which a collaborative framework with little additional computational cost can outperform an independent learning process without knowledge sharing. Additionally, we highlight the need to correctly modulate the importance of the extrinsic and intrinsic rewards to avoid undesired agent behaviors.
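The abstract's final point, that the balance between extrinsic and intrinsic rewards must be modulated carefully, is commonly realized as a weighted sum of the two signals. The following is a minimal sketch of that idea, not the paper's exact formulation; the function name `combined_reward` and the coefficient `eta` (and its default value) are illustrative assumptions.

```python
def combined_reward(r_ext: float, r_int: float, eta: float = 0.01) -> float:
    """Return the reward used for the policy update.

    r_ext: sparse extrinsic reward from the environment (often zero
           until the goal is reached).
    r_int: intrinsic motivation bonus (e.g., a curiosity/novelty signal).
    eta:   weight of the intrinsic term (hypothetical default); too large
           a value can swamp the extrinsic signal and induce the
           undesired behaviors the abstract warns about.
    """
    return r_ext + eta * r_int
```

With a sparse task reward, `eta` effectively sets how strongly the agent pursues novelty while the extrinsic signal is absent.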
