Paper Title

Deeper Insights into Weight Sharing in Neural Architecture Search

Authors

Yuge Zhang, Zejun Lin, Junyang Jiang, Quanlu Zhang, Yujing Wang, Hui Xue, Chen Zhang, Yaming Yang

Abstract

With the success of deep neural networks, Neural Architecture Search (NAS) as a way of automatic model design has attracted wide attention. As training every child model from scratch is very time-consuming, recent works leverage weight sharing to speed up the model evaluation procedure. These approaches greatly reduce computation by maintaining a single copy of weights on the super-net and sharing them among all child models. However, weight sharing has no theoretical guarantee and its impact has not been well studied before. In this paper, we conduct comprehensive experiments to reveal the impact of weight sharing: (1) the best-performing models from different runs, or even from consecutive epochs within the same run, show significant variance; (2) even with high variance, we can extract valuable information from training the super-net with shared weights; (3) interference between child models is a major factor that induces high variance; (4) properly reducing the degree of weight sharing can effectively reduce variance and improve performance.
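To make the weight-sharing mechanism described in the abstract concrete, here is a minimal, hypothetical PyTorch sketch (not the paper's code; all class and variable names such as `MixedLayer` and `SuperNet` are illustrative): the super-net keeps a single copy of weights for each candidate operation per layer, a child model is simply a per-layer choice of operations, and training randomly sampled children repeatedly updates the same shared parameters.

```python
# Minimal sketch of weight sharing in NAS (illustrative, not the authors' implementation).
import random
import torch
import torch.nn as nn


class MixedLayer(nn.Module):
    """One super-net layer: several candidate ops whose weights are shared by all children."""

    def __init__(self, channels):
        super().__init__()
        self.candidates = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),  # candidate 0: 3x3 conv
            nn.Conv2d(channels, channels, 5, padding=2),  # candidate 1: 5x5 conv
            nn.Identity(),                                # candidate 2: skip connection
        ])

    def forward(self, x, choice):
        # A child model activates exactly one candidate per layer;
        # the candidate's parameters live in the super-net and are reused by every child.
        return self.candidates[choice](x)


class SuperNet(nn.Module):
    def __init__(self, channels=16, num_layers=4, num_classes=10):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, padding=1)
        self.layers = nn.ModuleList(MixedLayer(channels) for _ in range(num_layers))
        self.head = nn.Linear(channels, num_classes)

    def forward(self, x, arch):
        # `arch` is a list of per-layer choices that defines one child model.
        x = self.stem(x)
        for layer, choice in zip(self.layers, arch):
            x = torch.relu(layer(x, choice))
        x = x.mean(dim=(2, 3))  # global average pooling
        return self.head(x)


supernet = SuperNet()
optimizer = torch.optim.SGD(supernet.parameters(), lr=0.01)

# One-shot-style training: at each step a random child is sampled and trained,
# so every child's gradients update the same shared weights -- the source of the
# interference between child models noted in point (3) of the abstract.
for step in range(3):
    arch = [random.randrange(3) for _ in supernet.layers]
    images = torch.randn(8, 3, 32, 32)   # dummy batch for illustration
    labels = torch.randint(0, 10, (8,))
    loss = nn.functional.cross_entropy(supernet(images, arch), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: child {arch} loss {loss.item():.3f}")
```

In this sketch, evaluating a child only requires a forward pass through the shared super-net rather than training from scratch, which is the speed-up the paper studies; it also shows why children interfere, since their updates overwrite one another's shared parameters.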
