Paper Title

In Defense of the Unitary Scalarization for Deep Multi-Task Learning

Authors

Vitaly Kurin, Alessandro De Palma, Ilya Kostrikov, Shimon Whiteson, M. Pawan Kumar

Abstract

Recent multi-task learning research argues against unitary scalarization, where training simply minimizes the sum of the task losses. Several ad-hoc multi-task optimization algorithms have instead been proposed, inspired by various hypotheses about what makes multi-task settings difficult. The majority of these optimizers require per-task gradients, and introduce significant memory, runtime, and implementation overhead. We show that unitary scalarization, coupled with standard regularization and stabilization techniques from single-task learning, matches or improves upon the performance of complex multi-task optimizers in popular supervised and reinforcement learning settings. We then present an analysis suggesting that many specialized multi-task optimizers can be partly interpreted as forms of regularization, potentially explaining our surprising results. We believe our results call for a critical reevaluation of recent research in the area.
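To make the central claim concrete, below is a minimal sketch of unitary scalarization as the abstract defines it: a single backward pass on the plain sum of the task losses, combined with standard single-task regularization (here, weight decay). The model architecture, dimensions, and all names (MultiTaskNet, task_out_dims, etc.) are illustrative assumptions for this sketch, not details taken from the paper.

import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Assumed setup: a shared encoder with one linear head per task."""
    def __init__(self, in_dim: int, hidden: int, task_out_dims: list[int]):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(hidden, d) for d in task_out_dims)

    def forward(self, x):
        z = self.encoder(x)
        return [head(z) for head in self.heads]

model = MultiTaskNet(in_dim=16, hidden=64, task_out_dims=[3, 5])
# Standard single-task machinery: Adam with weight decay as regularization.
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
loss_fns = [nn.CrossEntropyLoss(), nn.CrossEntropyLoss()]

x = torch.randn(8, 16)  # dummy batch shared across tasks
ys = [torch.randint(0, 3, (8,)), torch.randint(0, 5, (8,))]

opt.zero_grad()
outputs = model(x)
# Unitary scalarization: minimize the sum of the task losses with one
# backward pass, so no per-task gradients are ever stored. Specialized
# multi-task optimizers would instead compute a separate gradient per task.
total_loss = sum(fn(out, y) for fn, out, y in zip(loss_fns, outputs, ys))
total_loss.backward()
opt.step()

This is where the memory and runtime advantage cited in the abstract comes from: per-task-gradient methods must hold one full gradient copy per task, while the summed loss needs only one.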
