Paper Title


Acknowledging the Unknown for Multi-label Learning with Single Positive Labels

Paper Authors

Donghao Zhou, Pengfei Chen, Qiong Wang, Guangyong Chen, Pheng-Ann Heng

Paper Abstract


Due to the difficulty of collecting exhaustive multi-label annotations, multi-label datasets often contain partial labels. We consider an extreme of this weakly supervised learning problem, called single positive multi-label learning (SPML), where each multi-label training image has only one positive label. Traditionally, all unannotated labels are assumed to be negative in SPML, which introduces false negative labels and causes model training to be dominated by the assumed negative labels. In this work, we choose to treat all unannotated labels from an alternative perspective, i.e., acknowledging that they are unknown. Hence, we propose an entropy-maximization (EM) loss to attain a special gradient regime that provides proper supervision signals. Moreover, we propose asymmetric pseudo-labeling (APL), which adopts asymmetric-tolerance strategies and a self-paced procedure, to cooperate with the EM loss and provide more precise supervision. Experiments show that our method significantly improves performance and achieves state-of-the-art results on all four benchmarks. Code is available at https://github.com/Correr-Zhou/SPML-AckTheUnknown.
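
To make the first idea concrete, below is a minimal PyTorch sketch of an entropy-maximization-style loss for SPML. It is an illustration under stated assumptions, not the authors' released implementation: the function name em_loss, the entropy weight alpha, and the normalization are all hypothetical choices for this sketch; the exact formulation and the APL component are in the paper and the linked repository.

import torch

def em_loss(logits, targets, alpha=0.1, eps=1e-7):
    # logits:  (batch, num_classes) raw model outputs.
    # targets: (batch, num_classes), 1 = the single annotated positive
    #          label, 0 = unannotated (treated as unknown, not negative).
    # alpha:   weight on the entropy term (assumed default for this sketch).
    probs = torch.sigmoid(logits).clamp(eps, 1 - eps)

    # Standard positive BCE term for the annotated positive label.
    pos_term = -targets * torch.log(probs)

    # Binary entropy of each unannotated prediction. Contributing
    # -alpha * H(p) to the loss means that minimizing the loss
    # maximizes entropy: the model is rewarded for staying uncertain
    # about unknown labels rather than pushed toward predicting zero.
    entropy = -(probs * torch.log(probs) + (1 - probs) * torch.log(1 - probs))
    unk_term = -(1 - targets) * alpha * entropy

    # Averaging over classes and batch is an assumption; the paper's
    # exact weighting may differ.
    return (pos_term + unk_term).mean()

# Example usage with a random batch of 4 images and 20 classes,
# each image carrying exactly one positive label:
logits = torch.randn(4, 20)
targets = torch.zeros(4, 20)
targets[torch.arange(4), torch.randint(0, 20, (4,))] = 1.0
loss = em_loss(logits, targets)

The key design choice is the sign of the entropy term: under the assumed-negative convention, unannotated labels would be driven toward zero, whereas here high-entropy predictions on them are rewarded. The APL component, which layers asymmetric-tolerance pseudo-labels on top of this loss with a self-paced schedule, is omitted from this sketch.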
