Paper title
Infinite-width limit of deep linear neural networks
Paper authors
Paper abstract
This paper studies the infinite-width limit of deep linear neural networks initialized with random parameters. We show that, as the number of neurons diverges, the training dynamics converge (in a precise sense) to the dynamics of gradient descent on an infinitely wide deterministic linear neural network. Moreover, even though the weights remain random, we obtain their exact law along the training dynamics and prove a quantitative convergence result for the linear predictor in terms of the number of neurons. Finally, we study the continuous-time limit obtained for infinitely wide linear neural networks and show that the linear predictor of the network converges at an exponential rate to the minimal $\ell_2$-norm minimizer of the risk.
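The setting described in the abstract can be made concrete with a small numerical sketch (not taken from the paper): a depth-3 linear network $f(x) = W_3 W_2 W_1 x$ with random Gaussian initialization, trained by full-batch gradient descent on the squared loss. All dimensions, the step size, and the iteration count below are illustrative choices; because this toy problem is overdetermined, the risk minimizer is unique and therefore coincides with the minimal $\ell_2$-norm minimizer referred to in the abstract.

```python
# Illustrative sketch only (not code from the paper): a depth-3 *linear*
# network f(x) = W3 @ W2 @ W1 @ x, trained by full-batch gradient descent
# on the squared loss. Dimensions, step size, and iteration count are
# arbitrary choices for this toy example.
import numpy as np

rng = np.random.default_rng(0)
n, d, width, depth = 30, 5, 32, 3            # samples, input dim, hidden width, layers
X = rng.standard_normal((n, d))
beta_true = rng.standard_normal(d)
y = X @ beta_true                            # noiseless linear targets

# Random Gaussian initialization, scaled by 1/sqrt(fan_in)
dims = [d] + [width] * (depth - 1) + [1]
Ws = [rng.standard_normal((dims[i + 1], dims[i])) / np.sqrt(dims[i])
      for i in range(depth)]

def end_to_end(Ws):
    """End-to-end linear predictor W_L ... W_1 (here a 1 x d row vector)."""
    P = np.eye(d)
    for W in Ws:
        P = W @ P
    return P

lr, steps = 0.01, 5000
for _ in range(steps):
    # Forward pass, storing activations for the backward pass
    acts = [X.T]                             # layer inputs, one column per sample
    for W in Ws:
        acts.append(W @ acts[-1])
    resid = acts[-1] - y[None, :]            # 1 x n residuals
    grad = resid / n                         # gradient of (1/2n) * sum of squared residuals
    # Backward pass: gradient with respect to each weight matrix
    grads = []
    for i in reversed(range(depth)):
        grads.append(grad @ acts[i].T)
        grad = Ws[i].T @ grad
    for W, g in zip(Ws, reversed(grads)):
        W -= lr * g

beta_net = end_to_end(Ws).ravel()
beta_ls = np.linalg.pinv(X) @ y              # least-squares solution (unique here, since n > d)
print("distance to least-squares solution:", np.linalg.norm(beta_net - beta_ls))
```

Running this sketch, the distance printed at the end should be small, illustrating (at finite width and in discrete time) the convergence of the network's linear predictor toward the risk minimizer that the abstract establishes rigorously in the infinite-width, continuous-time limit.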