Paper Title
GMM-IL: Image Classification using Incrementally Learnt, Independent Probabilistic Models for Small Sample Sizes
Paper Authors
Paper Abstract
Current deep learning classifiers carry out supervised learning and store class-discriminatory information in a set of shared network weights. These weights cannot easily be altered to incrementally learn additional classes: all classification weights must be retrained to prevent old class information from being lost, and the previous training data must still be available. We present a novel two-stage architecture that couples visual feature learning with probabilistic models, representing each class as a Gaussian Mixture Model. By using these independent class representations within our classifier, we outperform a benchmark of an equivalent network with a Softmax head, obtaining increased accuracy for sample sizes smaller than 12 and an increased weighted F1 score for 3 imbalanced class profiles in that sample range. When learning new classes, our classifier exhibits no catastrophic forgetting and requires only the new classes' training images. This enables a database of classes that grows over time and can be visually indexed and reasoned over.
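The core classification idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes feature embeddings are already available (here replaced by toy 2-D points), fits one independent `GaussianMixture` per class with scikit-learn, and classifies a sample by the class whose model yields the highest log-likelihood. All names, data, and parameters are hypothetical.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy stand-ins for visual feature embeddings: two well-separated
# classes in a 2-D feature space, 12 samples each (a "small sample size").
class_embeddings = {
    "cat": rng.normal(loc=[0.0, 0.0], scale=0.5, size=(12, 2)),
    "dog": rng.normal(loc=[3.0, 3.0], scale=0.5, size=(12, 2)),
}

# Stage two of the sketch: fit one GMM per class, each trained
# independently. Adding a new class later means fitting one more GMM;
# the existing models are never touched, so nothing is forgotten.
models = {
    label: GaussianMixture(n_components=1, covariance_type="full").fit(X)
    for label, X in class_embeddings.items()
}

def classify(x):
    """Return the class whose GMM assigns the highest log-likelihood to x."""
    scores = {label: gmm.score_samples(x[None, :])[0]
              for label, gmm in models.items()}
    return max(scores, key=scores.get)

print(classify(np.array([2.9, 3.1])))  # a point near the "dog" cluster
```

Because each class model is fit in isolation, incrementally learning a new class reduces to fitting one additional GMM on that class's embeddings alone, which is the property the abstract highlights.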