Paper Title
Volumization as a Natural Generalization of Weight Decay
Paper Authors
Abstract
We propose a novel regularization method, called \textit{volumization}, for neural networks. Inspired by physics, we define a physical volume for the weight parameters in neural networks, and we show that this method is an effective way of regularizing neural networks. Intuitively, this method interpolates between $L_2$ and $L_\infty$ regularization. Therefore, weight decay and weight clipping become special cases of the proposed algorithm. On a toy example, we prove that the essence of this method is a regularization technique that controls the bias-variance tradeoff. The method performs well in the settings where standard weight decay is known to work well, including improving the generalization of networks and preventing memorization. Moreover, we show that volumization may lead to a simple method for training a neural network whose weights are binary or ternary.
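To make the interpolation idea concrete, below is a minimal illustrative sketch, not the paper's actual update rule: it blends a pure weight-decay step ($L_2$ shrinkage toward zero) with a pure weight-clipping step (projection onto an $L_\infty$ ball of radius $V$) via a hypothetical mixing parameter `alpha`, so that the two named special cases are recovered at the endpoints.

```python
import numpy as np

def interpolated_step(w, decay=0.01, V=1.0, alpha=0.5):
    """Illustrative blend of weight decay and weight clipping.

    alpha=1 recovers pure weight decay (L2-style shrinkage);
    alpha=0 recovers pure weight clipping (L_inf projection).
    This is a sketch under assumed names (`V`, `alpha`), not the
    paper's volumization algorithm.
    """
    decayed = w * (1.0 - decay)        # weight-decay step
    clipped = np.clip(w, -V, V)        # weight-clipping step
    return alpha * decayed + (1.0 - alpha) * clipped

w = np.array([2.0, -0.5, 1.5])
print(interpolated_step(w, alpha=1.0))  # pure decay:    [ 1.98  -0.495  1.485]
print(interpolated_step(w, alpha=0.0))  # pure clipping: [ 1.   -0.5   1.  ]
```

Endpoint behavior is the point of the sketch: at `alpha=1.0` every weight shrinks multiplicatively, while at `alpha=0.0` weights are simply projected into the cube $[-V, V]^n$.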