Paper Title

Learning Reusable Options for Multi-Task Reinforcement Learning

Paper Authors

Garcia, Francisco M.; Nota, Chris; Thomas, Philip S.

Paper Abstract

Reinforcement learning (RL) has become an increasingly active area of research in recent years. Although there are many algorithms that allow an agent to solve tasks efficiently, they often ignore the possibility that prior experience related to the task at hand might be available. For many practical applications, it might be unfeasible for an agent to learn how to solve a task from scratch, given that it is generally a computationally expensive process; however, prior experience could be leveraged to make these problems tractable in practice. In this paper, we propose a framework for exploiting existing experience by learning reusable options. We show that after an agent learns policies for solving a small number of problems, we are able to use the trajectories generated from those policies to learn reusable options that allow an agent to quickly learn how to solve novel and related problems.
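
The "options" the abstract refers to are temporally extended actions in the sense of the standard options framework (Sutton, Precup, and Singh, 1999): each option pairs an initiation set, an intra-option policy, and a termination condition. The sketch below only illustrates that formalism so the abstract's terminology is concrete; it is not the paper's learning procedure, and the `Option` dataclass, `run_option` helper, and `step` environment function are hypothetical names chosen for this example.

```python
import random
from dataclasses import dataclass
from typing import Callable

State = int
Action = int

@dataclass
class Option:
    """A minimal option (I, pi, beta) in the sense of Sutton, Precup, and Singh (1999)."""
    can_initiate: Callable[[State], bool]   # membership test for the initiation set I
    policy: Callable[[State], Action]       # intra-option policy pi(s)
    termination: Callable[[State], float]   # beta(s): probability of terminating in s

def run_option(option: Option, state: State,
               step: Callable[[State, Action], State],
               max_steps: int = 100) -> State:
    """Execute an option's policy until its termination condition fires (or a step cap)."""
    for _ in range(max_steps):
        action = option.policy(state)
        state = step(state, action)
        if random.random() < option.termination(state):
            break
    return state

# Toy usage on a 1-D chain: an option that walks right until it reaches state 5.
if __name__ == "__main__":
    walk_right = Option(
        can_initiate=lambda s: s < 5,
        policy=lambda s: +1,
        termination=lambda s: 1.0 if s >= 5 else 0.0,
    )
    final_state = run_option(walk_right, state=0, step=lambda s, a: s + a)
    print(final_state)  # 5
```

In this view, the paper's contribution is how to obtain such options: after policies are learned on a small set of tasks, the trajectories they generate are used to learn reusable options that a new agent can invoke alongside primitive actions on related tasks.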
