Paper Title
A Dempster-Shafer approach to trustworthy AI with application to fetal brain MRI segmentation
Paper Authors
Paper Abstract
Deep learning models for medical image segmentation can fail unexpectedly and spectacularly for pathological cases and for images acquired at different centers than the training images, with labeling errors that violate expert knowledge. Such errors undermine the trustworthiness of deep learning models for medical image segmentation. Mechanisms for detecting and correcting such failures are essential for safely translating this technology into clinics and are likely to be a requirement of future regulations on artificial intelligence (AI). In this work, we propose a trustworthy AI theoretical framework and a practical system that can augment any backbone AI system using a fallback method and a fail-safe mechanism based on Dempster-Shafer theory. Our approach relies on an actionable definition of trustworthy AI. Our method automatically discards the voxel-level labeling predicted by the backbone AI that violates expert knowledge and relies on a fallback for those voxels. We demonstrate the effectiveness of the proposed trustworthy AI approach on the largest reported annotated dataset of fetal MRI, consisting of 540 manually annotated fetal brain 3D T2w MRIs from 13 centers. Our trustworthy AI method improves the robustness of a state-of-the-art backbone AI for fetal brain MRIs acquired across various centers and for fetuses with various brain abnormalities.
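To make the described mechanism concrete, below is a minimal Python sketch of Dempster's rule of combination together with a per-voxel fail-safe that discards the backbone's prediction and falls back when it conflicts with expert knowledge. This is an illustrative sketch only, not the paper's implementation: the function names (combine_masses, fail_safe_label), the conflict threshold, and the example labels are all hypothetical assumptions.

```python
# Illustrative sketch (hypothetical, not the paper's code): Dempster's rule
# of combination plus a per-voxel fail-safe with fallback.
from itertools import product

def combine_masses(m1, m2):
    """Dempster's rule: fuse two mass functions keyed by frozenset focal elements."""
    combined = {}
    conflict = 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: the two sources are incompatible")
    # Normalize by 1 - K, where K is the total conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}, conflict

CONFLICT_THRESHOLD = 0.5  # hypothetical cutoff for triggering the fallback

def fail_safe_label(backbone_mass, expert_mass, fallback_label):
    """Keep the backbone's label unless it conflicts with expert knowledge."""
    fused, conflict = combine_masses(backbone_mass, expert_mass)
    if conflict > CONFLICT_THRESHOLD:
        return fallback_label  # discard the backbone prediction for this voxel
    # Otherwise pick the singleton label with the highest fused mass.
    singletons = {next(iter(k)): v for k, v in fused.items() if len(k) == 1}
    return max(singletons, key=singletons.get) if singletons else fallback_label

# Example: the backbone strongly predicts "ventricle" for a voxel, but expert
# knowledge rules that label out there, so the fallback label is used instead.
backbone = {frozenset({"ventricle"}): 0.9, frozenset({"ventricle", "CSF"}): 0.1}
expert = {frozenset({"CSF", "cortex"}): 1.0}
print(fail_safe_label(backbone, expert, fallback_label="CSF"))  # -> "CSF"
```

In this sketch the degree of conflict K from Dempster's rule serves as the violation detector: a high K means the backbone's belief is largely incompatible with the expert-knowledge mass function, which is when the fail-safe hands the voxel over to the fallback.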