Paper Title
Going to Extremes: Weakly Supervised Medical Image Segmentation
Paper Authors
Paper Abstract
Medical image annotation is a major hurdle for developing precise and robust machine learning models. Annotation is expensive, time-consuming, and often requires expert knowledge, particularly in the medical field. Here, we suggest using minimal user interaction in the form of extreme point clicks to train a segmentation model which, in effect, can be used to speed up medical image annotation. An initial segmentation is generated based on the extreme points utilizing the random walker algorithm. This initial segmentation is then used as a noisy supervision signal to train a fully convolutional network that can segment the organ of interest, based on the provided user clicks. Through experimentation on several medical imaging datasets, we show that the predictions of the network can be refined using several rounds of training with the prediction from the same weakly annotated data. Further improvements are shown utilizing the clicked points within a custom-designed loss and attention mechanism. Our approach has the potential to speed up the process of generating new training datasets for the development of new machine learning and deep learning-based models for, but not exclusively, medical image analysis.
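The first step of the pipeline described above turns the user's extreme point clicks into an initial segmentation with the random walker algorithm. The sketch below illustrates one way this could look for a single 2D grayscale slice, using scikit-image's random_walker; the helper name initial_mask_from_extreme_points, the seeding strategy, and the margin parameter are assumptions for illustration, not the authors' implementation (which operates on 3D volumes and includes further refinement steps).

# A minimal sketch (not the authors' code): build seed labels from extreme
# point clicks and run the random walker to get a noisy initial mask.
import numpy as np
from skimage.segmentation import random_walker

def initial_mask_from_extreme_points(image, extreme_points, margin=5):
    """image: 2D grayscale array; extreme_points: list of (row, col) clicks
    on the object boundary (e.g., top-, bottom-, left-, right-most points);
    margin: padding around the click bounding box treated as foreground zone."""
    labels = np.zeros(image.shape, dtype=np.int32)  # 0 = unlabeled

    # Foreground seeds: the clicked extreme points themselves.
    for r, c in extreme_points:
        labels[r, c] = 1

    # Background seeds: everything outside the padded bounding box of the clicks.
    rows, cols = zip(*extreme_points)
    r0 = max(min(rows) - margin, 0)
    r1 = min(max(rows) + margin, image.shape[0] - 1)
    c0 = max(min(cols) - margin, 0)
    c1 = min(max(cols) + margin, image.shape[1] - 1)
    background = np.ones(image.shape, dtype=bool)
    background[r0:r1 + 1, c0:c1 + 1] = False
    labels[background] = 2

    # Diffuse the seeds through the image; the label-1 region becomes the
    # noisy supervision signal for training the segmentation network.
    result = random_walker(image.astype(np.float64), labels, beta=130, mode='bf')
    return result == 1

In the described approach, masks produced this way would then supervise a fully convolutional network, whose own predictions replace the masks over several rounds of retraining on the same weakly annotated data.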