Paper Title
Learning to Generate Synthetic Training Data using Gradient Matching and Implicit Differentiation
Paper Authors
Paper Abstract
Using huge training datasets can be costly and inconvenient. This paper explores data distillation techniques that reduce the amount of data required to successfully train deep networks. Inspired by recent ideas, we propose new data distillation techniques based on generative teaching networks, gradient matching, and the Implicit Function Theorem. Experiments on the MNIST image classification problem show that the new methods are computationally more efficient than previous ones and improve the performance of models trained on the distilled data.
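Since the abstract only names the techniques, a minimal sketch of the gradient-matching idea may help fix intuitions: learnable synthetic images are updated so that the gradient they induce on a model matches the gradient induced by real data. The sketch below assumes PyTorch; all names (`model`, `syn_x`, `flat_grads`, the cosine objective) are illustrative choices, not the authors' implementation.

```python
# Minimal sketch of gradient-matching data distillation, assuming PyTorch.
# Names and design choices here are illustrative, not the paper's exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny classifier for MNIST-style 1x28x28 inputs.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

# Learnable synthetic dataset: one image per class, optimized directly.
syn_x = torch.randn(10, 1, 28, 28, requires_grad=True)
syn_y = torch.arange(10)
opt_syn = torch.optim.Adam([syn_x], lr=0.1)

def flat_grads(loss, create_graph=False):
    # Gradient of `loss` w.r.t. model parameters, flattened into one vector.
    grads = torch.autograd.grad(loss, model.parameters(), create_graph=create_graph)
    return torch.cat([g.reshape(-1) for g in grads])

# One distillation step; random tensors stand in for a real MNIST batch.
real_x, real_y = torch.randn(64, 1, 28, 28), torch.randint(0, 10, (64,))

g_real = flat_grads(F.cross_entropy(model(real_x), real_y))  # treated as a constant target
g_syn = flat_grads(F.cross_entropy(model(syn_x), syn_y), create_graph=True)

# Update the synthetic images so their induced gradient matches the real one;
# a simple cosine-distance matching objective is used here for illustration.
match_loss = 1.0 - F.cosine_similarity(g_syn, g_real, dim=0)
opt_syn.zero_grad()
match_loss.backward()
opt_syn.step()
```

In a full distillation loop this step would be repeated over real minibatches and across freshly initialized models, so that the synthetic set generalizes rather than overfitting one network state.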