Paper Title
AlignShift: Bridging the Gap of Imaging Thickness in 3D Anisotropic Volumes
Paper Authors
Paper Abstract
This paper addresses a fundamental challenge in 3D medical image processing: how to deal with imaging thickness. For anisotropic medical volumes, there is a significant performance gap between thin-slice (mostly 1mm) and thick-slice (mostly 5mm) volumes. Prior art tends to use 3D approaches for the thin-slice and 2D approaches for the thick-slice volumes, respectively. We aim at a unified approach for both thin- and thick-slice medical volumes. Inspired by recent advances in video analysis, we propose AlignShift, a novel parameter-free operator to convert theoretically any 2D pretrained network into a thickness-aware 3D network. Remarkably, the converted networks behave like 3D for the thin-slice, yet adaptively degenerate to 2D for the thick-slice. The unified thickness-aware representation learning is achieved by shifting and fusing aligned "virtual slices" as per the input imaging thickness. Extensive experiments on the public large-scale DeepLesion benchmark, consisting of 32K lesions for universal lesion detection, validate the effectiveness of our method, which outperforms the previous state of the art by considerable margins without bells and whistles. More importantly, to our knowledge, this is the first method that bridges the performance gap between thin- and thick-slice volumes by a unified framework. To improve research reproducibility, our code in PyTorch is open source at https://github.com/M3DV/AlignShift.
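To make the core idea concrete, below is a minimal, simplified PyTorch sketch of a parameter-free, thickness-aware shift in the spirit of AlignShift. It is not the authors' implementation: the function name `align_shift`, the `virtual_spacing_mm` and `shift_frac` parameters, and the integer (rather than fractionally interpolated) shift step are all illustrative assumptions. A fraction of channels is shifted along the depth (slice) axis by a step derived from the physical slice thickness, so thin-slice volumes receive cross-slice fusion (3D-like behavior) while thick-slice volumes shift by roughly zero slices and degenerate toward 2D.

```python
import torch

def align_shift(x, thickness_mm, virtual_spacing_mm=2.0, shift_frac=8):
    """Simplified sketch of an AlignShift-style parameter-free shift (not
    the official implementation; names and defaults are illustrative).

    x: tensor of shape (N, C, D, H, W), D = number of slices.
    The shift step is the number of physical slices spanning one "virtual
    slice" spacing; thick slices yield step 0, i.e. no cross-slice mixing.
    """
    n, c, d, h, w = x.shape
    # Slices per virtual-spacing step, as per the input imaging thickness.
    step = int(round(virtual_spacing_mm / thickness_mm))
    if step == 0 or step >= d:
        return x  # thick-slice case: identity shift, behaves like 2D
    fold = c // shift_frac
    out = torch.zeros_like(x)
    out[:, :fold, :-step] = x[:, :fold, step:]               # shift toward earlier slices
    out[:, fold:2 * fold, step:] = x[:, fold:2 * fold, :-step]  # shift toward later slices
    out[:, 2 * fold:] = x[:, 2 * fold:]                       # remaining channels unchanged
    return out
```

With a 2mm virtual spacing, a 1mm thin-slice volume shifts by two slices (3D-like fusion across neighboring slices), while a 5mm thick-slice volume rounds to a zero-slice shift and passes through unchanged, matching the abstract's description of adaptive 2D degeneration. The real operator additionally aligns fractional shifts by interpolating between slices.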