Paper Title
Deep Sinogram Completion with Image Prior for Metal Artifact Reduction in CT Images
Paper Authors
Paper Abstract
Computed tomography (CT) has been widely used for medical diagnosis, assessment, and therapy planning and guidance. In practice, CT images may be adversely affected by the presence of metallic objects, which can lead to severe metal artifacts and compromise clinical diagnosis or dose calculation in radiation therapy. In this paper, we propose a generalizable framework for metal artifact reduction (MAR) that simultaneously leverages the advantages of image-domain and sinogram-domain MAR techniques. We formulate our framework as a sinogram completion problem and train a neural network (SinoNet) to restore the metal-affected projections. To improve the continuity of the completed projections at the boundary of the metal trace, and thus alleviate new artifacts in the reconstructed CT images, we train another neural network (PriorNet) to generate a good prior image to guide sinogram learning, and further design a novel residual sinogram learning strategy to effectively utilize the prior image information for better sinogram completion. The two networks are jointly trained in an end-to-end fashion with a differentiable forward projection (FP) operation, so that the prior image generation and deep sinogram completion procedures can benefit from each other. Finally, the artifact-reduced CT images are reconstructed by applying filtered back projection (FBP) to the completed sinogram. Extensive experiments on simulated and real-artifact data demonstrate that our method produces superior artifact-reduced results while preserving anatomical structures, and outperforms other MAR methods.
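The residual sinogram learning strategy described in the abstract can be sketched as follows: the network predicts a residual on top of the forward-projected prior image, and the completed sinogram replaces only the projections inside the metal trace, leaving measured data untouched elsewhere. This is a minimal illustrative sketch, not the paper's implementation; the function name `complete_sinogram` and the stand-in arrays `sino_prior`, `residual`, and `trace_mask` are assumed names for the FP of the PriorNet output, the SinoNet prediction, and the binary metal-trace mask, respectively.

```python
import numpy as np

def complete_sinogram(sino_in, sino_prior, residual, trace_mask):
    """Residual sinogram completion (sketch).

    Inside the metal trace, substitute the corrupted projections with the
    forward-projected prior image plus the network-predicted residual;
    outside the trace, keep the measured projections exactly as acquired.
    """
    sino_est = sino_prior + residual      # residual is learned w.r.t. the prior sinogram
    return np.where(trace_mask, sino_est, sino_in)

# Toy demo with random stand-ins for the real quantities (illustration only):
rng = np.random.default_rng(0)
sino_in = rng.normal(size=(180, 256))         # metal-corrupted sinogram (angles x detectors)
sino_prior = rng.normal(size=(180, 256))      # FP of the prior image (assumed given)
residual = 0.1 * rng.normal(size=(180, 256))  # network-predicted residual (assumed given)
trace_mask = np.zeros((180, 256), dtype=bool)
trace_mask[:, 100:140] = True                 # hypothetical metal-trace region

sino_out = complete_sinogram(sino_in, sino_prior, residual, trace_mask)
```

In the full method this step would be followed by FBP reconstruction of `sino_out`; restricting the substitution to the metal trace is what preserves the exactly-measured projections outside it.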