Paper Title

Differentiable Zooming for Multiple Instance Learning on Whole-Slide Images

Paper Authors

Kevin Thandiackal, Boqi Chen, Pushpak Pati, Guillaume Jaume, Drew F. K. Williamson, Maria Gabrani, Orcun Goksel

Abstract

Multiple Instance Learning (MIL) methods have become increasingly popular for classifying giga-pixel sized Whole-Slide Images (WSIs) in digital pathology. Most MIL methods operate at a single WSI magnification, by processing all the tissue patches. Such a formulation induces high computational requirements, and constrains the contextualization of the WSI-level representation to a single scale. A few MIL methods extend to multiple scales, but are computationally more demanding. In this paper, inspired by the pathological diagnostic process, we propose ZoomMIL, a method that learns to perform multi-level zooming in an end-to-end manner. ZoomMIL builds WSI representations by aggregating tissue-context information from multiple magnifications. The proposed method outperforms the state-of-the-art MIL methods in WSI classification on two large datasets, while significantly reducing the computational demands with regard to Floating-Point Operations (FLOPs) and processing time by up to 40x.
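To make the abstract's idea of multi-magnification zooming more concrete, below is a minimal conceptual sketch in PyTorch. It is not the authors' implementation: the class names, the assumption that each low-magnification patch has a fixed number of contiguously stored children at the next magnification, and the hard top-k patch selection are all simplifications introduced here for illustration. In particular, ZoomMIL's defining contribution is a differentiable zooming (patch-selection) step that enables end-to-end training, which this sketch replaces with a plain, non-differentiable top-k for brevity.

```python
import torch
import torch.nn as nn


class GatedAttentionPool(nn.Module):
    """Gated attention-based MIL pooling over a bag of patch features."""
    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.V = nn.Linear(dim, hidden)
        self.U = nn.Linear(dim, hidden)
        self.w = nn.Linear(hidden, 1)

    def forward(self, feats: torch.Tensor):
        # feats: (N, dim) patch features of one WSI at one magnification
        scores = self.w(torch.tanh(self.V(feats)) * torch.sigmoid(self.U(feats)))
        attn = torch.softmax(scores, dim=0)            # (N, 1) attention over patches
        slide_repr = (attn * feats).sum(dim=0)         # (dim,) magnification-level representation
        return slide_repr, attn.squeeze(-1)


class MultiScaleZoomMIL(nn.Module):
    """Conceptual multi-magnification MIL classifier (hypothetical sketch).

    Assumes patch i at magnification s has exactly `k_children` children at
    magnification s+1, stored contiguously at indices i*k_children .. i*k_children+k_children-1.
    The hard top-k below stands in for ZoomMIL's differentiable selection.
    """
    def __init__(self, dim=512, n_scales=3, k_keep=8, k_children=4, n_classes=2):
        super().__init__()
        self.pools = nn.ModuleList(GatedAttentionPool(dim) for _ in range(n_scales))
        self.k_keep = k_keep
        self.k_children = k_children
        self.classifier = nn.Linear(dim * n_scales, n_classes)

    def forward(self, feats_per_scale):
        # feats_per_scale: list of (N_s, dim) tensors, ordered low -> high magnification
        slide_reprs = []
        idx = torch.arange(feats_per_scale[0].shape[0])      # start with all low-mag patches
        for s, feats in enumerate(feats_per_scale):
            sub = feats[idx]
            slide_repr, attn = self.pools[s](sub)
            slide_reprs.append(slide_repr)
            if s + 1 < len(feats_per_scale):
                # "Zoom": keep the most attended patches and follow them to the next magnification.
                keep = idx[attn.topk(min(self.k_keep, attn.numel())).indices]
                idx = (keep.unsqueeze(1) * self.k_children
                       + torch.arange(self.k_children)).reshape(-1)
        # Aggregate tissue-context information from all magnifications into one WSI representation.
        return self.classifier(torch.cat(slide_reprs))
```

A hypothetical usage with three magnifications, where each patch has four children at the next level:

```python
feats = [torch.randn(16, 512), torch.randn(64, 512), torch.randn(256, 512)]
model = MultiScaleZoomMIL(dim=512, n_scales=3, n_classes=2)
logits = model(feats)  # (2,) class logits for the whole slide
```

The computational saving described in the abstract comes from this selection step: only the attended regions are processed at higher magnifications, rather than every tissue patch at full resolution.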
