Paper Title

Incremental Meta-Learning via Indirect Discriminant Alignment

Paper Authors

Qing Liu, Orchid Majumder, Alessandro Achille, Avinash Ravichandran, Rahul Bhotika, Stefano Soatto

Paper Abstract

The majority of modern meta-learning methods for few-shot classification operate in two phases: a meta-training phase, in which the meta-learner learns a generic representation by solving many few-shot tasks sampled from a large dataset, and a testing phase, in which the meta-learner leverages its learned internal representation for a specific few-shot task involving classes that were not seen during meta-training. To the best of our knowledge, all such meta-learning methods sample tasks from a single base dataset during meta-training and do not adapt the algorithm afterwards. This strategy may not scale to real-world use cases, where the meta-learner may not have access to the full meta-training dataset from the very beginning and must instead be updated incrementally as additional training data becomes available. Through our experimental setup, we develop a notion of incremental learning during the meta-training phase of meta-learning and propose a method that can be used with multiple existing metric-based meta-learning algorithms. Experimental results on benchmark datasets show that our approach performs favorably at test time compared to training a model on the full meta-training set, and incurs a negligible amount of catastrophic forgetting.
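To make the two-phase setup concrete, below is a minimal PyTorch sketch of a metric-based (prototypical-network-style) episode loss together with a hypothetical alignment term for the incremental setting: new data is embedded by both the old and the updated backbone, scored against fixed class anchors, and the two resulting discriminants are pulled together with a KL penalty to limit forgetting. This is an illustrative reading of the abstract, not the paper's exact formulation; all names (`prototypes`, `alignment_loss`, `anchors`, the temperature `T`) are assumptions.

```python
import torch
import torch.nn.functional as F

def prototypes(embeddings, labels, n_classes):
    # Class means of the support embeddings: one prototype per class.
    return torch.stack([embeddings[labels == c].mean(0) for c in range(n_classes)])

def episode_loss(model, support_x, support_y, query_x, query_y, n_classes):
    # Standard metric-based episode: classify query points by negative
    # squared distance to the class prototypes built from the support set.
    z_support = model(support_x)
    z_query = model(query_x)
    protos = prototypes(z_support, support_y, n_classes)
    logits = -torch.cdist(z_query, protos) ** 2
    return F.cross_entropy(logits, query_y)

def alignment_loss(model_new, model_old, x, anchors, T=1.0):
    # Hypothetical alignment term for incremental meta-training: embed the
    # same inputs with the frozen old backbone and the new one, form
    # discriminants against fixed class anchors, and penalize their KL
    # divergence so the updated meta-learner stays consistent with the old.
    with torch.no_grad():
        p_old = F.softmax(-torch.cdist(model_old(x), anchors) ** 2 / T, dim=1)
    log_p_new = F.log_softmax(-torch.cdist(model_new(x), anchors) ** 2 / T, dim=1)
    return F.kl_div(log_p_new, p_old, reduction="batchmean")
```

In this sketch, the incremental update would minimize `episode_loss` on tasks sampled from the newly available data plus a weighted `alignment_loss`, so that the old model's behavior is preserved indirectly through its discriminants rather than by replaying the original meta-training set.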
