Title
Which Model to Transfer? Finding the Needle in the Growing Haystack
Authors
Abstract
Transfer learning has recently been popularized as a data-efficient alternative to training models from scratch, in particular for computer vision tasks, where it provides a remarkably solid baseline. The emergence of rich model repositories, such as TensorFlow Hub, enables practitioners and researchers to unleash the potential of these models across a wide range of downstream tasks. As these repositories keep growing exponentially, efficiently selecting a good model for the task at hand becomes paramount. We formalize this problem through a familiar notion of regret and introduce the predominant strategies, namely task-agnostic (e.g., ranking models by their ImageNet performance) and task-aware search strategies (such as linear or kNN evaluation). We conduct a large-scale empirical study and show that both task-agnostic and task-aware methods can yield high regret. We then propose a simple and computationally efficient hybrid search strategy that outperforms the existing approaches. We highlight the practical benefits of the proposed solution on a set of 19 diverse vision tasks.
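To make the abstract's setup concrete, here is a minimal sketch (not the paper's exact protocol) of a task-aware selection strategy and the regret it incurs: each candidate model is scored by kNN evaluation (1-nearest-neighbour accuracy on its frozen features for the downstream task), the top-scoring model is selected, and regret is the gap between the best achievable downstream accuracy and that of the selected model. All function names and the toy data below are hypothetical illustrations.

```python
import numpy as np

def knn_proxy_score(train_x, train_y, val_x, val_y):
    # 1-nearest-neighbour accuracy on frozen features: a cheap
    # task-aware proxy for how well a pretrained model transfers.
    dists = np.linalg.norm(val_x[:, None, :] - train_x[None, :, :], axis=-1)
    preds = train_y[dists.argmin(axis=1)]
    return float((preds == val_y).mean())

def select_model(feature_sets, train_y, val_y):
    # Rank candidate models by their kNN proxy score; pick the best.
    # feature_sets maps model name -> (train features, val features),
    # i.e. the downstream data embedded by each frozen model.
    scores = {name: knn_proxy_score(tr, train_y, va, val_y)
              for name, (tr, va) in feature_sets.items()}
    return max(scores, key=scores.get), scores

def regret(fine_tune_acc, picked):
    # Regret of a search strategy: best achievable fine-tuned accuracy
    # minus the fine-tuned accuracy of the model the strategy picked.
    return max(fine_tune_acc.values()) - fine_tune_acc[picked]
```

A task-agnostic strategy would instead ignore `feature_sets` entirely and rank models by a fixed upstream score (e.g., ImageNet accuracy); the paper's point is that either family alone can pick badly, motivating a hybrid of the two.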