Paper Title

Fractional moment-preserving initialization schemes for training deep neural networks

Paper Authors

Mert Gurbuzbalaban, Yuanhan Hu

Paper Abstract

A traditional approach to initialization in deep neural networks (DNNs) is to sample the network weights randomly so as to preserve the variance of the pre-activations. On the other hand, several studies show that during the training process the distribution of stochastic gradients can be heavy-tailed, especially for small batch sizes. In this case, weights, and therefore pre-activations, can be modeled with a heavy-tailed distribution that has infinite variance but a finite (non-integer) fractional moment of order $s$ with $s<2$. Motivated by this fact, we develop initialization schemes for fully connected feed-forward networks that provably preserve any given moment of order $s \in (0, 2]$ over the layers for a class of activations including ReLU, Leaky ReLU, Randomized Leaky ReLU, and linear activations. These generalized schemes recover traditional initialization schemes in the limit $s \to 2$ and serve as part of a principled theory of initialization. For all of these schemes, we show that the network output admits a finite almost sure limit as the number of layers grows, and that the limit is heavy-tailed in some settings. This sheds further light on the origins of heavy tails during signal propagation in DNNs. We prove that the logarithm of the norm of the network output, if properly scaled, converges to a Gaussian distribution with an explicit mean and variance that we can compute depending on the activation used, the value of $s$ chosen, and the network width. We also prove that our initialization scheme avoids small network output values more frequently than traditional approaches. Furthermore, the proposed initialization strategy does not incur any extra cost during training. We show through numerical experiments that our initialization can improve training and test performance.
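To make the moment-preserving idea concrete, the sketch below numerically calibrates a per-layer Gaussian weight scale so that the empirical $s$-th absolute moment of a layer's pre-activations matches that of the previous layer's pre-activations (assumed standard normal for simplicity). This is only an illustration of the general idea under these simplifying assumptions, not the paper's closed-form scheme, and it does not cover the heavy-tailed weight distributions the paper also analyzes; the helper name `calibrate_sigma` and the Monte Carlo calibration are our own. As a sanity check, at $s = 2$ with ReLU it approximately recovers the familiar He scale $\sqrt{2/\text{fan\_in}}$.

```python
import numpy as np

def calibrate_sigma(fan_in, s, activation=lambda z: np.maximum(z, 0.0),
                    n_samples=20_000, seed=0):
    """Monte Carlo sketch: find a Gaussian weight std `sigma` such that the
    empirical s-th absolute moment of a layer's pre-activations matches the
    s-th absolute moment of the (assumed standard-normal) pre-activations of
    the previous layer. Illustration only, not the paper's closed-form scheme."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_samples, fan_in))   # previous layer's pre-activations
    a = activation(x)                              # post-activations feeding this layer
    w = rng.standard_normal((n_samples, fan_in))   # unit-variance weights, one draw per sample
    z_unit = np.einsum("ij,ij->i", a, w)           # pre-activation of one unit with sigma = 1
    target = np.mean(np.abs(x) ** s)               # s-th moment we want to preserve
    # Scaling the weights by sigma scales |z|^s by sigma^s, so solve for sigma directly.
    return (target / np.mean(np.abs(z_unit) ** s)) ** (1.0 / s)

if __name__ == "__main__":
    fan_in = 256
    for s in (0.5, 1.0, 1.5, 2.0):
        sigma = calibrate_sigma(fan_in, s)
        print(f"s = {s:3.1f}: calibrated std = {sigma:.4f} "
              f"(He init std = {np.sqrt(2.0 / fan_in):.4f})")
```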
