Paper Title

A Flexible Selection Scheme for Minimum-Effort Transfer Learning

Paper Authors

Amelie Royer, Christoph H. Lampert

Paper Abstract

Fine-tuning is a popular way of exploiting knowledge contained in a pre-trained convolutional network for a new visual recognition task. However, the orthogonal setting of transferring knowledge from a pre-trained network to a visually different yet semantically close source is rarely considered: this commonly happens with real-life data, which is not necessarily as clean as the training source (noise, geometric transformations, different modalities, etc.). To tackle such scenarios, we introduce a new, generalized form of fine-tuning, called flex-tuning, in which any individual unit (e.g. layer) of a network can be tuned, and the most promising one is chosen automatically. In order to make the method appealing for practical use, we propose two lightweight and faster selection procedures that prove to be good approximations in practice. We study these selection criteria empirically across a variety of domain shifts and data scarcity scenarios, and show that fine-tuning individual units, despite its simplicity, yields very good results as an adaptation technique. As it turns out, in contrast to common practice, rather than the last fully-connected unit it is best to tune an intermediate or early one in many domain-shift scenarios, which is accurately detected by flex-tuning.
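The abstract only states the selection principle, so the following is a minimal PyTorch sketch of the exhaustive variant it describes (fine-tune every candidate unit in isolation, keep the one that scores best on held-out target data). This is an illustration under stated assumptions, not the authors' implementation; the names flex_tune and accuracy, and the train/validation loaders, are hypothetical.

```python
import copy
import torch

def flex_tune(model, unit_names, train_loader, val_loader, loss_fn,
              epochs=1, lr=1e-3):
    """Exhaustive flex-tuning sketch: fine-tune one unit at a time and
    keep the variant that performs best on held-out target data."""
    best_name, best_model, best_acc = None, None, -1.0
    for name in unit_names:
        candidate = copy.deepcopy(model)
        # Freeze all weights, then unfreeze only the selected unit.
        for p in candidate.parameters():
            p.requires_grad = False
        unit = dict(candidate.named_children())[name]
        params = list(unit.parameters())
        if not params:
            continue  # skip parameter-free units such as activations
        for p in params:
            p.requires_grad = True
        opt = torch.optim.Adam(params, lr=lr)
        candidate.train()
        for _ in range(epochs):
            for x, y in train_loader:
                opt.zero_grad()
                loss_fn(candidate(x), y).backward()
                opt.step()
        acc = accuracy(candidate, val_loader)
        if acc > best_acc:
            best_name, best_model, best_acc = name, candidate, acc
    return best_name, best_model

def accuracy(model, loader):
    """Fraction of correctly classified held-out samples."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / total
```

Treating each top-level child module as a "unit", one would call it as flex_tune(net, [n for n, _ in net.named_children()], ...). The two lightweight selection procedures mentioned in the abstract are cheaper approximations of exactly this brute-force loop.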
