Paper Title
It's DONE: Direct ONE-shot learning with quantile weight imprinting
Paper Authors
Paper Abstract
Learning a new concept from one example is a superior function of the human brain, and it is drawing attention in the field of machine learning as the one-shot learning task. In this paper, we propose one of the simplest methods for this task, based on nonparametric weight imprinting, named Direct ONE-shot learning (DONE). DONE adds new classes to a pretrained deep neural network (DNN) classifier with neither training optimization nor modification of the pretrained DNN. DONE is inspired by Hebbian theory: it directly uses the input neural activity of the final dense layer, obtained from data belonging to the new class, as the synaptic weights of a newly provided output neuron for that class, transforming the statistical properties of the neural activity into those of the existing synaptic weights by quantile normalization. DONE requires just one inference to learn a new concept, and its procedure is simple and deterministic, requiring neither parameter tuning nor hyperparameters. DONE overcomes a severe problem of existing weight-imprinting methods, which interfere, in a DNN-dependent manner, with the classification of original-class images. The performance of DONE depends entirely on the pretrained DNN used as the backbone model, and we confirmed that DONE with current well-trained backbone models performs with decent accuracy.
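To make the imprinting step in the abstract concrete, below is a minimal sketch of one way it could be implemented, assuming a torchvision ResNet-50 as the backbone (the paper may use different backbones) and a particular choice of reference distribution (the average sorted weight row); the helper names `penultimate_activity` and `quantile_imprint` and the bias handling are illustrative assumptions, not the authors' code.

```python
# Sketch: add one new class to a pretrained classifier by quantile weight imprinting.
import torch
import torchvision.models as models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

# Existing final-dense-layer weights: shape (num_classes, num_features).
W = model.fc.weight.data
old_bias = model.fc.bias.data

def penultimate_activity(x):
    """One inference up to (but not including) the final dense layer."""
    with torch.no_grad():
        feats = torch.nn.Sequential(*list(model.children())[:-1])(x)  # drop fc
    return feats.flatten(1).squeeze(0)  # shape (num_features,)

def quantile_imprint(activity, W):
    """Quantile normalization: give the activity vector the same value
    distribution as a reference built from the existing weight rows."""
    reference = W.sort(dim=1).values.mean(dim=0)  # template, ascending order
    ranks = activity.argsort().argsort()          # rank of each feature value
    return reference[ranks]                       # imprinted weight row

# x would be one preprocessed image of the new class; a random tensor stands in here.
x = torch.randn(1, 3, 224, 224)
new_row = quantile_imprint(penultimate_activity(x), W)

# Append the imprinted row as the weight of a newly provided output neuron.
new_fc = torch.nn.Linear(W.shape[1], W.shape[0] + 1)
new_fc.weight.data = torch.cat([W, new_row.unsqueeze(0)], dim=0)
# Bias for the new class is set to the mean of existing biases (an assumption).
new_fc.bias.data = torch.cat([old_bias, old_bias.mean().unsqueeze(0)])
model.fc = new_fc  # the model now scores num_classes + 1 classes
```

This keeps the original weights untouched and deterministic, which mirrors the abstract's claims of no training optimization, no backbone modification, and no hyperparameters; only the reference-distribution and bias choices above are design decisions not specified in the abstract.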