Paper Title

Joint Representation Training in Sequential Tasks with Shared Structure

Paper Authors

Aldo Pacchiano, Ofir Nachum, Nilesh Tripuraneni, Peter Bartlett

Paper Abstract

Classical theory in reinforcement learning (RL) predominantly focuses on the single-task setting, where an agent learns to solve a task through trial-and-error experience, given access to data only from that task. However, many recent empirical works have demonstrated the significant practical benefits of leveraging a joint representation trained across multiple, related tasks. In this work we theoretically analyze such a setting, formalizing the concept of task relatedness as a shared state-action representation that admits linear dynamics in all the tasks. We introduce the Shared-MatrixRL algorithm for the setting of multitask MatrixRL. In the presence of $P$ episodic tasks of dimension $d$ sharing a joint $r \ll d$ low-dimensional representation, we show the regret on the $P$ tasks can be improved from $O(PHd\sqrt{NH})$ to $O((Hd\sqrt{rP} + HP\sqrt{rd})\sqrt{NH})$ over $N$ episodes of horizon $H$. These gains coincide with those observed in other linear models in contextual bandits and RL. In contrast with prior work that has studied multi-task RL in other function approximation models, we show that in the presence of a bilinear optimization oracle and finite state-action spaces there exists a computationally efficient algorithm for multitask MatrixRL via a reduction to quadratic programming. We also develop a simple technique to shave off a $\sqrt{H}$ factor from the regret upper bounds of some episodic linear problems.
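To make the claimed improvement concrete, below is a minimal numeric sketch (not from the paper) that plugs illustrative values into the two regret bounds stated in the abstract, with constants suppressed. All parameter values here are hypothetical, chosen only to illustrate the regime $r \ll d$ with many tasks $P$, where the shared-representation bound is smaller.

```python
import math

# Regret bounds from the abstract (constants suppressed):
#   independent training: O(P * H * d * sqrt(N * H))
#   Shared-MatrixRL:      O((H * d * sqrt(r * P) + H * P * sqrt(r * d)) * sqrt(N * H))

def independent_bound(P: int, H: int, d: int, N: int) -> float:
    """Regret scaling when each of the P tasks is learned in isolation."""
    return P * H * d * math.sqrt(N * H)

def shared_bound(P: int, H: int, d: int, r: int, N: int) -> float:
    """Regret scaling when the P tasks share an r-dimensional representation."""
    return (H * d * math.sqrt(r * P) + H * P * math.sqrt(r * d)) * math.sqrt(N * H)

# Hypothetical values: 50 tasks, horizon 10, ambient dimension 100,
# shared rank 5 (so r << d), 1000 episodes.
P, H, d, r, N = 50, 10, 100, 5, 1000
print(f"independent: {independent_bound(P, H, d, N):.3e}")  # ~5.0e6
print(f"shared:      {shared_bound(P, H, d, r, N):.3e}")    # ~2.7e6
```

With these illustrative values the shared bound is roughly half the independent one, and the gap widens as $P$ grows and $r/d$ shrinks.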
