Paper Title
Learning To Pay Attention To Mistakes
Authors
Abstract
In convolutional neural network based medical image segmentation, the periphery of foreground regions representing malignant tissues may be disproportionately assigned to the background class of healthy tissues \cite{attenUnet}\cite{AttenUnet2018}\cite{InterSeg}\cite{UnetFrontNeuro}\cite{LearnActiveContour}. This leads to high false negative detection rates. In this paper, we propose a novel attention mechanism, called Paying Attention to Mistakes, that directly addresses such high false negative rates. Our attention mechanism steers the models towards false positive identification, countering the existing bias towards false negatives. The proposed mechanism has two complementary implementations: (a) "explicit" steering of the model to attend to a larger Effective Receptive Field on the foreground areas; (b) "implicit" steering towards false positives, by attending to a smaller Effective Receptive Field on the background areas. We validated our methods on three tasks: 1) binary dense prediction between vehicles and the background using CityScapes; 2) Enhanced Tumour Core segmentation with multi-modal MRI scans in BRATS2018; 3) segmenting stroke lesions using ultrasound images in ISLES2018. We compared our methods with state-of-the-art attention mechanisms in medical imaging, including self-attention, spatial attention, and mixed spatial-channel attention. Across all three tasks, our models consistently outperform the baseline models in Intersection over Union (IoU) and/or Hausdorff Distance (HD). For instance, in the second task, the "explicit" implementation of our mechanism reduces the HD of the best baseline by more than $26\%$, whilst improving the IoU by more than $3\%$. We believe our proposed attention mechanism can benefit a wide range of medical and computer vision tasks that suffer from over-detection of the background.
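The core idea of steering the Effective Receptive Field (ERF) can be sketched with a toy gating scheme: features are modulated by a sigmoid gate computed from a k x k context mean, where a large k mimics the "explicit" wide-ERF attention on foreground and a small k mimics the "implicit" narrow-ERF attention on background. This is a minimal illustrative sketch in pure Python; the function names (`local_mean`, `attend`) and the specific gating form are assumptions for exposition, not the paper's actual implementation.

```python
import math


def local_mean(x, k):
    """Mean over a zero-padded k x k neighbourhood of each cell.

    A larger k aggregates context from a wider area, i.e. a larger
    effective receptive field for the gate computed from it.
    """
    h, w = len(x), len(x[0])
    pad = k // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            s = 0.0
            for di in range(-pad, pad + 1):
                for dj in range(-pad, pad + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w:
                        s += x[ii][jj]
            out[i][j] = s / (k * k)  # zero padding outside the map
    return out


def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))


def attend(x, k):
    """Gate each feature by a sigmoid of its k x k context mean.

    Hypothetical stand-in for ERF steering: a large k on foreground
    regions ("explicit") or a small k on background regions ("implicit").
    """
    ctx = local_mean(x, k)
    h, w = len(x), len(x[0])
    return [[x[i][j] * sigmoid(ctx[i][j]) for j in range(w)] for i in range(h)]
```

In a real network the context aggregation would be learned (e.g. via convolutions of different dilation rates) rather than a fixed mean filter; the sketch only shows how the choice of neighbourhood size changes the context that drives the attention gate.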