Paper Title

Representation Compensation Networks for Continual Semantic Segmentation

Paper Authors

Chang-Bin Zhang, Jia-Wen Xiao, Xialei Liu, Ying-Cong Chen, Ming-Ming Cheng

Paper Abstract

In this work, we study the continual semantic segmentation problem, where deep neural networks are required to incorporate new classes continually without catastrophic forgetting. We propose a structural re-parameterization mechanism, named the representation compensation (RC) module, to decouple the representation learning of old and new knowledge. The RC module consists of two dynamically evolved branches, one frozen and one trainable. In addition, we design a pooled cube knowledge distillation strategy on both the spatial and channel dimensions to further enhance the plasticity and stability of the model. We conduct experiments on two challenging continual semantic segmentation scenarios: continual class segmentation and continual domain segmentation. Without any extra computational overhead or parameters during inference, our method outperforms the state of the art. The code is available at https://github.com/zhangchbin/RCIL.
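The two-branch design and its zero-cost inference property can be made concrete with a small sketch. The PyTorch snippet below is a minimal illustration of the mechanism as described in the abstract, not the authors' implementation: the class RCConv, the fuse_() schedule, and the pooled_distill_loss helper (with its pooling window k) are hypothetical names and simplifications; the actual code is in the linked repository.

```python
# Minimal sketch (assumed PyTorch) of the ideas stated in the abstract.
# RCConv and pooled_distill_loss are hypothetical; the real implementation
# lives at https://github.com/zhangchbin/RCIL.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RCConv(nn.Module):
    """Two parallel convolutions: a frozen branch that preserves old
    knowledge and a trainable branch that learns the new classes."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.frozen = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.trainable = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        for p in self.frozen.parameters():
            p.requires_grad = False  # old-knowledge branch is not updated

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.frozen(x) + self.trainable(x)

    @torch.no_grad()
    def fuse_(self) -> None:
        """Structural re-parameterization: two parallel convolutions of the
        same shape sum into a single convolution whose weights and biases
        are the element-wise sums, so inference after fusion needs no extra
        parameters or FLOPs."""
        self.frozen.weight += self.trainable.weight
        self.frozen.bias += self.trainable.bias
        nn.init.zeros_(self.trainable.weight)  # output is unchanged after fusion
        nn.init.zeros_(self.trainable.bias)


def pooled_distill_loss(student: torch.Tensor, teacher: torch.Tensor,
                        k: int = 4) -> torch.Tensor:
    """Toy stand-in for the pooled knowledge distillation: average-pool the
    feature maps before matching them, which relaxes the element-wise
    constraint of plain feature distillation (the window size k is an
    illustrative choice, not the paper's setting)."""
    return F.mse_loss(F.avg_pool2d(student, k), F.avg_pool2d(teacher, k))
```

Under this reading, at the end of each continual-learning step one would call fuse_() and re-initialize the trainable branch before the next step; the abstract's "dynamically evolved branches" suggests such a merge-and-continue schedule, and the fusion is exact because parallel convolutions of identical shape are additive in their weights.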
