Paper Title
Toward Sustainable Continual Learning: Detection and Knowledge Repurposing of Similar Tasks
Paper Authors
Paper Abstract
Most existing works on continual learning (CL) focus on overcoming the catastrophic forgetting (CF) problem, with dynamic models and replay methods performing exceptionally well. However, since current works tend to assume exclusivity or dissimilarity among learning tasks, these methods require constantly accumulating task-specific knowledge in memory for each task. This results in the eventual prohibitive expansion of the knowledge repository if we consider learning from a long sequence of tasks. In this work, we introduce a paradigm where the continual learner receives a sequence of mixed similar and dissimilar tasks. We propose a new continual learning framework that uses a task similarity detection function requiring no additional learning, with which we analyze whether a task seen in the past is similar to the current task. We can then reuse previous task knowledge to slow down parameter expansion, ensuring that the CL system grows the knowledge repository sublinearly with the number of learned tasks. Our experiments show that the proposed framework performs competitively on widely used computer vision benchmarks such as CIFAR10, CIFAR100, and EMNIST.
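The core idea in the abstract — detect whether an incoming task resembles a stored one and, if so, reuse that knowledge instead of growing the model — can be illustrated with a minimal sketch. Note this is an assumption-laden illustration, not the authors' method: here a task "signature" is just the per-feature mean of the task's data, and similarity is a Euclidean-distance threshold; the paper's actual detector and knowledge-reuse mechanism are not specified in the abstract.

```python
import numpy as np

def task_signature(data: np.ndarray) -> np.ndarray:
    """Hypothetical learning-free task signature: per-feature mean
    of the task's data (stands in for the paper's detector)."""
    return data.mean(axis=0)

def find_similar_task(signature, repository, threshold):
    """Return the index of a stored signature within `threshold`
    Euclidean distance of the new one, or None if no match."""
    for idx, stored in enumerate(repository):
        if np.linalg.norm(signature - stored) < threshold:
            return idx
    return None

def continual_learn(task_stream, threshold=1.0):
    """Grow the knowledge repository only for genuinely new tasks.

    Returns the stored signatures (one per distinct task) and, for
    each incoming task, the index of the stored entry it reuses.
    """
    signatures = []   # knowledge repository: one entry per distinct task
    assignments = []  # which stored entry each incoming task maps to
    for data in task_stream:
        sig = task_signature(data)
        match = find_similar_task(sig, signatures, threshold)
        if match is None:              # dissimilar: expand the repository
            signatures.append(sig)
            match = len(signatures) - 1
        assignments.append(match)      # similar: reuse existing knowledge
    return signatures, assignments
```

With this toy detector, a stream of three tasks where the first and third are drawn from the same distribution stores only two repository entries, so the repository grows sublinearly with the number of tasks — the property the abstract targets:

```python
rng = np.random.default_rng(0)
task_a = rng.normal(0.0, 0.1, size=(100, 4))   # task 1
task_b = rng.normal(5.0, 0.1, size=(100, 4))   # dissimilar task 2
task_c = rng.normal(0.0, 0.1, size=(100, 4))   # task 3, similar to task 1
sigs, assign = continual_learn([task_a, task_b, task_c], threshold=1.0)
# Only two entries are stored; task 3 reuses task 1's knowledge.
```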