Paper Title
Interlocking Backpropagation: Improving depthwise model-parallelism
Paper Authors
Paper Abstract
The number of parameters in state-of-the-art neural networks has drastically increased in recent years. This surge of interest in large-scale neural networks has motivated the development of new distributed training strategies enabling the training of such models. One such strategy is model-parallel distributed training. Unfortunately, model-parallelism can suffer from poor resource utilisation, which leads to wasted resources. In this work, we improve upon recent developments in an idealised model-parallel optimisation setting: local learning. Motivated by poor resource utilisation in the global setting and poor task performance in the local setting, we introduce a class of intermediary strategies between local and global learning, referred to as interlocking backpropagation. These strategies preserve many of the compute-efficiency advantages of local optimisation while recovering much of the task performance achieved by global optimisation. We assess our strategies on both image classification ResNets and Transformer language models, finding that our strategies consistently out-perform local learning in terms of task performance and out-perform global learning in terms of training efficiency.
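The abstract describes interlocking backpropagation only at a high level. The sketch below is a minimal, illustrative simplification (not the authors' implementation): it assumes a toy network split into `K` groups, each with a hypothetical auxiliary head, and a hypothetical `reach` parameter controlling how many preceding groups each auxiliary loss back-propagates into. With `reach=0` the update behaves like purely local learning; as `reach` grows, it approaches global end-to-end backpropagation.

```python
# Illustrative sketch only, under the assumptions stated above.
import torch
import torch.nn as nn
import torch.nn.functional as F

K, DIM, N_CLASSES = 3, 64, 10

# K groups of layers (in real model-parallel training, one group per device),
# each followed by a small auxiliary classification head.
groups = nn.ModuleList([nn.Sequential(nn.Linear(DIM, DIM), nn.ReLU()) for _ in range(K)])
heads = nn.ModuleList([nn.Linear(DIM, N_CLASSES) for _ in range(K)])
opt = torch.optim.SGD(list(groups.parameters()) + list(heads.parameters()), lr=1e-2)


def train_step(x, y, reach):
    """One optimisation step; `reach` limits how far gradients travel backwards."""
    # Cache the activation entering each group (without autograd), so short
    # segments of the forward pass can be replayed per auxiliary loss.
    # Recomputation keeps the example simple; an efficient implementation
    # would avoid it.
    inputs = [x]
    with torch.no_grad():
        for g in groups:
            inputs.append(g(inputs[-1]))

    total_loss = 0.0
    for i in range(K):
        start = max(0, i - reach)       # earliest group this loss reaches
        h = inputs[start].detach()      # gradients stop at this boundary
        for j in range(start, i + 1):   # replay groups start..i with autograd
            h = groups[j](h)
        total_loss = total_loss + F.cross_entropy(heads[i](h), y)

    opt.zero_grad()
    total_loss.backward()
    opt.step()
    return total_loss.item()


x = torch.randn(32, DIM)
y = torch.randint(0, N_CLASSES, (32,))
print(train_step(x, y, reach=0))        # local-style updates
print(train_step(x, y, reach=K - 1))    # closer to global backpropagation
```

The point of the sketch is the trade-off the abstract names: a small `reach` keeps each group's update nearly independent (good resource utilisation, weaker task performance), while a larger `reach` lets later losses shape earlier groups, recovering more of the behaviour of global optimisation.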