Paper Title
Attention Model Enhanced Network for Classification of Breast Cancer Image
Paper Authors
Paper Abstract
Breast cancer classification remains a challenging task due to inter-class ambiguity and intra-class variability. Existing deep learning-based methods try to confront this challenge by utilizing complex nonlinear projections. However, these methods typically extract global features from entire images, neglecting the fact that subtle detail information can be crucial for extracting discriminative features. In this study, we propose a novel method named Attention Model Enhanced Network (AMEN), which is formulated in a multi-branch fashion with a pixel-wise attention model and a classification submodule. Specifically, the feature learning part of AMEN generates a pixel-wise attention map, while the classification submodule is used to classify the samples. To focus more on subtle detail information, the sample image is enhanced by the pixel-wise attention map generated by the preceding branch. Furthermore, a boosting strategy is adopted to fuse the classification results from different branches for better performance. Experiments conducted on three benchmark datasets demonstrate the superiority of the proposed method under various scenarios.
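To make the multi-branch idea in the abstract concrete, the following is a minimal PyTorch-style sketch, not the authors' AMEN implementation: the backbone layers, branch count, and the simple average fusion (standing in for the paper's boosting strategy) are all illustrative assumptions. Each branch produces class logits plus a pixel-wise attention map, and the map re-weights the input image before it is passed to the next branch.

```python
# Illustrative sketch only: layer sizes, the tiny backbone, and average fusion
# are assumptions, not details taken from the AMEN paper.
import torch
import torch.nn as nn


class Branch(nn.Module):
    """One branch: a small feature extractor yielding class logits and a
    pixel-wise attention map over the input image."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # 1x1 conv + sigmoid gives one attention value in [0, 1] per pixel.
        self.attention = nn.Sequential(nn.Conv2d(32, 1, 1), nn.Sigmoid())
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x):
        f = self.features(x)
        return self.classifier(f), self.attention(f)


class MultiBranchAttentionNet(nn.Module):
    """Chains branches: each branch enhances the original image with the
    attention map from the previous branch; per-branch logits are fused."""

    def __init__(self, num_branches: int = 3, num_classes: int = 2):
        super().__init__()
        self.branches = nn.ModuleList(
            Branch(num_classes) for _ in range(num_branches)
        )

    def forward(self, image):
        x, logits_list = image, []
        for branch in self.branches:
            logits, attn = branch(x)
            logits_list.append(logits)
            # Enhance the original image with the pixel-wise attention map
            # before feeding it to the next branch.
            x = image * attn
        # Simple averaging stands in for the boosting-style fusion.
        return torch.stack(logits_list).mean(dim=0)


if __name__ == "__main__":
    net = MultiBranchAttentionNet()
    out = net(torch.randn(2, 3, 224, 224))
    print(out.shape)  # torch.Size([2, 2])
```

In this sketch the attention map has the same spatial size as the input (the convolutions preserve resolution), so multiplying it with the image simply down-weights pixels the previous branch considered uninformative; the paper's actual enhancement and fusion rules may differ.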