Paper Title
What makes useful auxiliary tasks in reinforcement learning: investigating the effect of the target policy
Paper Authors
Paper Abstract
Auxiliary tasks have been argued to be useful for representation learning in reinforcement learning. Although many auxiliary tasks have been empirically shown to be effective at accelerating learning on the main task, it is not yet clear what makes an auxiliary task useful. Some of the most promising results are on the pixel control, reward prediction, and next state prediction auxiliary tasks; however, the empirical results are mixed, showing substantial improvements in some cases and marginal improvements in others. A careful investigation of how auxiliary tasks help the learning of the main task is necessary. In this paper, we take a step toward studying the effect of the target policy on the usefulness of auxiliary tasks formulated as general value functions. General value functions consist of three core elements: 1) the policy, 2) the cumulant, and 3) the continuation function. Our focus on the role of the target policy of the auxiliary tasks is motivated by the fact that the target policy determines both the behavior about which the agent wants to make a prediction and the state-action distribution that the agent is trained on, which further affects the main task learning. Our study provides insights into questions such as: Does a greedy policy result in bigger improvement gains compared to other policies? Is it best to set the auxiliary task policy to be the same as the main task policy? Does the choice of the target policy have a substantial effect on the achieved performance gain, or do simple strategies for setting the policy, such as using a uniformly random policy, work just as well? Our empirical results suggest that: 1) Auxiliary tasks with the greedy policy tend to be useful. 2) Most policies, including a uniformly random policy, tend to improve over the baseline. 3) Surprisingly, the main task policy tends to be less useful compared to other policies.
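To make the general value function (GVF) formulation above concrete, the following is a minimal, hedged sketch (not code from the paper) of a tabular off-policy TD(0) update for a single GVF. The three core elements named in the abstract appear as the target policy `pi`, the cumulant `cumulant`, and the continuation function `continuation`; the behavior policy `b`, the 3-state chain, and all function names are illustrative assumptions.

```python
# Hedged sketch of one off-policy TD(0) update for a general value function.
# pi, b, cumulant, continuation, and the toy environment are assumptions,
# not details taken from the paper.

def gvf_td_update(v, s, a, s_next, alpha, pi, b, cumulant, continuation):
    """One off-policy TD(0) step with an importance-sampling ratio rho."""
    rho = pi(s, a) / b(s, a)        # corrects for sampling from the behavior policy
    c = cumulant(s, a, s_next)      # GVF "reward" signal
    g = continuation(s_next)        # state-dependent discount (continuation)
    td_error = c + g * v[s_next] - v[s]
    v[s] += alpha * rho * td_error
    return v

# Tiny worked example: 3-state chain, two actions (0: stay, 1: right).
v = [0.0, 0.0, 0.0]
pi = lambda s, a: 1.0 if a == 1 else 0.0           # greedy-like target: always "right"
b = lambda s, a: 0.5                               # uniformly random behavior policy
cumulant = lambda s, a, s_next: 1.0 if s_next == 2 else 0.0
continuation = lambda s_next: 0.0 if s_next == 2 else 0.9

v = gvf_td_update(v, s=1, a=1, s_next=2, alpha=0.1, pi=pi, b=b,
                  cumulant=cumulant, continuation=continuation)
# v[1] moves toward the cumulant, scaled by alpha and rho = 1.0 / 0.5
```

Varying `pi` in this sketch — greedy, uniformly random, or equal to the main task policy — corresponds to the target-policy choices the paper's experiments compare.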