Paper Title

Continual Class Incremental Learning for CT Thoracic Segmentation

Authors

Abdelrahman Elskhawy, Aneta Lisowska, Matthias Keicher, Josep Henry, Paul Thomson, Nassir Navab

Abstract

Deep learning organ segmentation approaches require large amounts of annotated training data, which is limited in supply due to confidentiality concerns and the time required for expert manual annotation. It is therefore desirable to train models incrementally, without access to previously used data. A common form of sequential training is fine-tuning (FT). In this setting, a model learns a new task effectively but loses performance on previously learned tasks. The Learning without Forgetting (LwF) approach addresses this issue by replaying the model's own predictions for past tasks during training. In this work, we evaluate FT and LwF for class-incremental learning in multi-organ segmentation using the publicly available AAPM dataset. We show that LwF can successfully retain knowledge of previous segmentation tasks; however, its ability to learn a new class decreases with the addition of each class. To address this problem, we propose an adversarial continual learning segmentation approach (ACLSeg), which disentangles the feature space into task-specific and task-invariant features. This enables preservation of performance on past tasks and effective acquisition of new knowledge.
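The abstract names two mechanisms worth unpacking. In LwF, the frozen previous model's soft predictions are replayed as distillation targets for the old classes while the new class is learned with a standard supervised loss. The sketch below is a minimal PyTorch illustration of that idea, assuming one logit channel per class; the function name and channel bookkeeping are hypothetical and do not reproduce the authors' implementation.

```python
import torch
import torch.nn.functional as F

def lwf_segmentation_loss(student_logits, teacher_logits, new_target,
                          old_channels, new_channels, T=2.0, lam=1.0):
    """Hypothetical LwF-style loss for class-incremental segmentation.

    student_logits: (B, C_old + C_new, H, W) from the current model.
    teacher_logits: (B, C_old, H, W) predictions of the frozen previous
                    model on the same batch (the replayed pseudo-labels).
    new_target:     (B, H, W) ground truth for the newly added class(es).
    """
    # Supervised loss on the channels of the newly added class(es).
    ce = F.cross_entropy(student_logits[:, new_channels], new_target)

    # Distillation: match the old model's softened predictions on the
    # previously learned channels so earlier segmentations are retained.
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits[:, old_channels] / T, dim=1)
    distill = F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

    return ce + lam * distill
```

ACLSeg's disentanglement is described as adversarial: a task discriminator tries to recover task identity from the shared features, and the shared encoder is trained to fool it, pushing task-specific information into private per-task branches. A common way to implement this min-max game is a gradient-reversal layer; the snippet below is a generic sketch of that pattern under those assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in backward,
    so the encoder is pushed to maximize the discriminator's loss."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

class TaskDiscriminator(nn.Module):
    """Predicts which task produced a (pooled) shared feature vector.
    Training the shared encoder through GradReverse drives the shared
    features toward being task-invariant."""
    def __init__(self, feat_dim, n_tasks):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 128),
            nn.ReLU(),
            nn.Linear(128, n_tasks),
        )

    def forward(self, shared_features):
        return self.net(GradReverse.apply(shared_features))
```

In a segmentation setting, pixel-wise shared features would typically be pooled to a vector before entering the discriminator, while per-task private modules carry whatever task-specific information the adversarial objective squeezes out of the shared path.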
