Paper Title

Semi-Discriminative Representation Loss for Online Continual Learning

Authors

Yu Chen, Tom Diethe, Peter Flach

Abstract

The use of episodic memory in continual learning has demonstrated effectiveness for alleviating catastrophic forgetting. In recent studies, gradient-based approaches have been developed to make more efficient use of compact episodic memory. Such approaches refine the gradients resulting from new samples using those from memorized samples, aiming to reduce the diversity of gradients across different tasks. In this paper, we clarify the relation between the diversity of gradients and the discriminativeness of representations, showing shared as well as conflicting interests between Deep Metric Learning and continual learning, and thus demonstrating the pros and cons of learning discriminative representations in continual learning. Based on these findings, we propose a simple method -- Semi-Discriminative Representation Loss (SDRL) -- for continual learning. In comparison with state-of-the-art methods, SDRL shows better performance with low computational cost on multiple benchmark tasks in the online continual learning setting.
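
To make the core idea more concrete, below is a minimal, hypothetical PyTorch sketch of a representation-level loss computed on a mini-batch that mixes new samples with samples drawn from a small episodic memory. The function `toy_discriminative_loss` and all tensor names are illustrative assumptions for this page only; the exact SDRL objective in the paper has a more refined form.

```python
import torch
import torch.nn.functional as F

def toy_discriminative_loss(reps: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Penalize cosine similarity between representations of different classes.

    Illustrative sketch only (not the paper's exact SDRL objective): it shows
    the flavor of a loss that acts on hidden representations of a batch mixing
    new-task samples with episodic-memory samples.
    """
    reps = F.normalize(reps, dim=1)          # unit-norm rows -> cosine similarity
    sim = reps @ reps.t()                    # (B, B) pairwise similarities
    diff_class = labels.unsqueeze(0) != labels.unsqueeze(1)  # (B, B) mask
    # Push apart representations whose labels differ; return 0 if none differ.
    return sim[diff_class].mean() if diff_class.any() else sim.new_zeros(())

# Hypothetical usage: concatenate the current batch with memory samples, then
# add this representation loss to the usual classification loss.
new_reps, new_labels = torch.randn(8, 32), torch.randint(0, 5, (8,))
mem_reps, mem_labels = torch.randn(8, 32), torch.randint(0, 5, (8,))
reps = torch.cat([new_reps, mem_reps])
labels = torch.cat([new_labels, mem_labels])
loss = toy_discriminative_loss(reps, labels)
```

Pushing apart representations of different classes within such a memory-mixed batch is one plausible way to reduce the diversity of gradients across tasks, which is the tension between Deep Metric Learning and continual learning that the abstract describes.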
