Paper Title


vMFNet: Compositionality Meets Domain-generalised Segmentation

Paper Authors

Xiao Liu, Spyridon Thermos, Pedro Sanchez, Alison Q. O'Neil, Sotirios A. Tsaftaris

Paper Abstract


Training medical image segmentation models usually requires a large amount of labeled data. By contrast, humans can quickly learn to accurately recognise anatomy of interest from medical (e.g. MRI and CT) images with limited guidance. Such recognition ability easily generalises to new images from different clinical centres. This rapid and generalisable learning ability is largely due to the compositional structure of image patterns in the human brain, which is rarely incorporated into medical image segmentation. In this paper, we model the compositional components (i.e. patterns) of human anatomy as learnable von-Mises-Fisher (vMF) kernels, which are robust to images collected from different domains (e.g. clinical centres). The image features can be decomposed into (or composed from) the components via the composing operations, i.e. the vMF likelihoods. The vMF likelihoods express how likely each anatomical part is present at each position of the image. Hence, the segmentation mask can be predicted based on the vMF likelihoods. Moreover, with a reconstruction module, unlabeled data can also be used to learn the vMF kernels and likelihoods by recombining them to reconstruct the input image. Extensive experiments show that the proposed vMFNet achieves improved generalisation performance on two benchmarks, especially when annotations are limited. Code is publicly available at: https://github.com/vios-s/vMFNet.
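To make the composing operation concrete, the following is a minimal sketch of how vMF likelihoods over a feature map might be computed: each spatial feature vector is L2-normalised and scored against each kernel's mean direction, and a softmax over kernels turns the concentration-scaled cosine similarities into per-position component likelihoods. Function names, the `kappa` concentration value, and tensor shapes are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def vmf_likelihoods(features, kernels, kappa=30.0):
    """Per-position vMF component likelihoods (illustrative sketch).

    features: (H, W, D) array of feature vectors from an encoder.
    kernels:  (J, D) array of learnable vMF mean directions.
    Returns:  (H, W, J) array; each position sums to 1 over the J kernels.
    """
    # Normalise features and kernels to unit length (vMF lives on the sphere).
    f = features / np.linalg.norm(features, axis=-1, keepdims=True)
    mu = kernels / np.linalg.norm(kernels, axis=-1, keepdims=True)
    # Cosine similarity between every position and every kernel,
    # scaled by the concentration parameter kappa.
    logits = kappa * np.einsum('hwd,jd->hwj', f, mu)
    # Softmax over kernels: a distribution over components per position.
    z = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return z / z.sum(axis=-1, keepdims=True)
```

Under this view, the segmentation head consumes the (H, W, J) likelihood maps, and the reconstruction module recombines kernels weighted by the same likelihoods, which is what lets unlabeled images supervise the kernels.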
