Paper Title

Dual-stream Maximum Self-attention Multi-instance Learning

Paper Authors

Bin Li, Kevin W. Eliceiri

Paper Abstract

Multi-instance learning (MIL) is a form of weakly supervised learning where a single class label is assigned to a bag of instances while the instance-level labels are not available. Training classifiers to accurately determine both the bag label and the instance labels is a challenging but critical task in many practical scenarios, such as computational histopathology. Recently, MIL models fully parameterized by neural networks have become popular due to their high flexibility and superior performance. Most of these models rely on attention mechanisms that assign attention scores across the instance embeddings in a bag and produce the bag embedding using an aggregation operator. In this paper, we propose a dual-stream maximum self-attention MIL model (DSMIL) parameterized by neural networks. The first stream deploys simple MIL max-pooling to determine the top-activated instance embedding, which is then used to obtain self-attention scores across the instance embeddings in the second stream. Unlike most previous methods, the proposed model jointly learns an instance classifier and a bag classifier based on the same instance embeddings. The experimental results show that our method outperforms the best existing MIL methods and achieves state-of-the-art performance on benchmark MIL datasets.
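The dual-stream forward pass described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact architecture: the weight matrices (`w_inst`, `w_q`, `w_v`, `w_bag`) and the averaging of the two stream scores are assumptions for the sake of a self-contained example.

```python
import numpy as np

def dsmil_forward(instances, w_inst, w_q, w_v, w_bag):
    """Sketch of a DSMIL-style dual-stream forward pass for one bag.

    instances: (N, D) array of instance embeddings.
    w_inst, w_q, w_v, w_bag: hypothetical weight matrices standing in
    for the learned instance classifier, query/value projections, and
    bag classifier.
    """
    # Stream 1: instance classifier followed by MIL max-pooling.
    inst_scores = instances @ w_inst        # (N,) per-instance logits
    m = int(np.argmax(inst_scores))         # index of the critical (top-activated) instance
    stream1_score = inst_scores[m]          # max-pooled bag score

    # Stream 2: self-attention scores measured against the critical instance.
    q = instances @ w_q                     # (N, Dq) query projections
    v = instances @ w_v                     # (N, Dv) value projections
    sim = q @ q[m]                          # similarity of each query to the critical query
    attn = np.exp(sim - sim.max())
    attn /= attn.sum()                      # softmax attention weights, sum to 1
    bag_embedding = attn @ v                # (Dv,) attention-weighted bag embedding
    stream2_score = bag_embedding @ w_bag   # bag-level logit

    # Combine the two streams into the final bag score (simple average here).
    return 0.5 * (stream1_score + stream2_score), attn
```

Because both streams operate on the same instance embeddings, the instance classifier (stream 1) and the bag classifier (stream 2) are trained jointly, as the abstract describes.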
