Paper Title

The Advantage of Conditional Meta-Learning for Biased Regularization and Fine-Tuning

Authors

Denevi, Giulia, Pontil, Massimiliano, Ciliberto, Carlo

Abstract

Biased regularization and fine-tuning are two recent meta-learning approaches. They have been shown to be effective to tackle distributions of tasks, in which the tasks' target vectors are all close to a common meta-parameter vector. However, these methods may perform poorly on heterogeneous environments of tasks, where the complexity of the tasks' distribution cannot be captured by a single meta-parameter vector. We address this limitation by conditional meta-learning, inferring a conditioning function mapping task's side information into a meta-parameter vector that is appropriate for that task at hand. We characterize properties of the environment under which the conditional approach brings a substantial advantage over standard meta-learning and we highlight examples of environments, such as those with multiple clusters, satisfying these properties. We then propose a convex meta-algorithm providing a comparable advantage also in practice. Numerical experiments confirm our theoretical findings.
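The biased-regularization idea in the abstract (fit each task's weight vector while pulling it toward a meta-parameter vector) has a simple closed form for least-squares tasks. The sketch below illustrates it in NumPy; the conditioning step is shown as a hypothetical linear map `M` from side information to a task-specific bias, purely for illustration, not as the paper's actual meta-algorithm.

```python
import numpy as np

def biased_ridge(X, y, theta, lam=1.0):
    """Biased-regularization solver for one task:
        argmin_w ||Xw - y||^2 + lam * ||w - theta||^2
    Closed form: w = (X^T X + lam*I)^{-1} (X^T y + lam*theta).
    With theta = 0 this is plain ridge regression; a well-chosen
    theta (the meta-parameter) biases the solution toward it.
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * theta)

def conditional_bias(M, s):
    """Hypothetical conditioning function: maps task side information s
    to a task-specific meta-parameter via a linear map M (an assumption
    for illustration; the paper learns the conditioning function)."""
    return M @ s
```

Unconditional meta-learning uses one shared `theta` for every task; the conditional approach replaces it with `conditional_bias(M, s)`, so tasks in different clusters (different side information `s`) get different biases.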
