Paper Title
Task-Adaptive Clustering for Semi-Supervised Few-Shot Classification
Paper Authors
Paper Abstract
Few-shot learning aims to handle previously unseen tasks using only a small amount of new training data. In preparing (or meta-training) a few-shot learner, however, massive labeled data are necessary. In the real world, unfortunately, labeled data are expensive and/or scarce. In this work, we propose a few-shot learner that can work well under the semi-supervised setting where a large portion of training data is unlabeled. Our method employs explicit task-conditioning in which unlabeled sample clustering for the current task takes place in a new projection space different from the embedding feature space. The conditioned clustering space is linearly constructed so as to quickly close the gap between the class centroids for the current task and the independent per-class reference vectors meta-trained across tasks. In a more general setting, our method introduces a concept of controlling the degree of task-conditioning for meta-learning: the amount of task-conditioning varies with the number of repetitive updates for the clustering space. Extensive simulation results based on the miniImageNet and tieredImageNet datasets show state-of-the-art semi-supervised few-shot classification performance of the proposed method. Simulation results also indicate that the proposed task-adaptive clustering shows graceful degradation with a growing number of distractor samples, i.e., unlabeled sample images coming from outside the candidate classes.
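The core idea described above can be sketched in a few lines: construct a linear projection that maps the current task's class centroids close to the meta-trained per-class reference vectors, cluster the unlabeled samples in that projected space, refine the centroids, and repeat. The sketch below is a minimal illustration under assumed interfaces; the function name, the least-squares construction of the projection, and the nearest-reference assignment rule are simplifications, not the paper's exact formulation.

```python
import numpy as np

def task_adaptive_clustering(support, support_labels, unlabeled, refs, n_iters=3):
    """Illustrative sketch of task-adaptive clustering (not the paper's exact method).

    support: (n_s, d) embedded support (labeled) samples
    support_labels: (n_s,) class indices in [0, k)
    unlabeled: (n_u, d) embedded unlabeled samples
    refs: (k, d) per-class reference vectors meta-trained across tasks
    n_iters: number of repeated projection/cluster updates; in the paper's terms,
             this controls the degree of task-conditioning
    """
    k, d = refs.shape
    # Initial class centroids from the labeled support samples only.
    centroids = np.stack([support[support_labels == c].mean(axis=0)
                          for c in range(k)])
    for _ in range(n_iters):
        # Linear projection W chosen to quickly close the gap between the
        # current centroids and the references: min_W ||centroids @ W - refs||^2.
        W, *_ = np.linalg.lstsq(centroids, refs, rcond=None)
        # Cluster unlabeled samples in the projected space by nearest reference.
        z = unlabeled @ W
        dists = ((z[:, None, :] - refs[None, :, :]) ** 2).sum(axis=-1)
        assign = dists.argmin(axis=1)
        # Refine centroids using support samples plus newly assigned unlabeled ones.
        for c in range(k):
            members = np.concatenate([support[support_labels == c],
                                      unlabeled[assign == c]], axis=0)
            centroids[c] = members.mean(axis=0)
    return centroids, W
```

With `n_iters=0` the method would fall back to plain support-set centroids (no task-conditioning); larger values condition the clustering space more strongly on the current task.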