Paper Title

Normalized Convolutional Neural Network

Authors

Dongsuk Kim, Geonhee Lee, Myungjae Lee, Shin Uk Kang, Dongmin Kim

Abstract

We introduce a Normalized Convolutional Neural Layer, a novel approach to normalization in convolutional networks. Unlike conventional methods, this layer normalizes the rows of the im2col matrix during convolution, making it inherently adaptive to sliced inputs and better aligned with kernel structures. This distinctive approach differentiates it from standard normalization techniques and prevents direct integration into existing deep learning frameworks optimized for traditional convolution operations. Our method has a universal property, making it applicable to any deep learning task involving convolutional layers. By inherently normalizing within the convolution process, it serves as a convolutional adaptation of Self-Normalizing Networks, maintaining their core principles without requiring additional normalization layers. Notably, in micro-batch training scenarios, it consistently outperforms other batch-independent normalization methods. This performance boost arises from standardizing the rows of the im2col matrix, which theoretically leads to a smoother loss gradient and improved training stability.
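To make the mechanism described in the abstract concrete, below is a minimal sketch of a convolution that standardizes each im2col patch before the kernel multiplication, written in PyTorch. This is not the authors' implementation: the class name NormalizedConv2d, the eps constant, the weight initialization, and the convention that a "row" of the im2col matrix is one flattened sliding-window patch are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class NormalizedConv2d(nn.Module):
    """Sketch: convolution with per-patch standardization of the im2col matrix.

    Each unfolded patch (one im2col row under the patch-per-row convention) is
    standardized to zero mean and unit variance before being multiplied by the
    flattened kernel. Names and hyperparameters here are assumptions, not the
    paper's reference code.
    """

    def __init__(self, in_channels, out_channels, kernel_size,
                 stride=1, padding=0, eps=1e-5):
        super().__init__()
        self.kernel_size = kernel_size
        self.stride = stride
        self.padding = padding
        self.eps = eps
        # Kernel stored as (out_channels, in_channels * k * k) so it can be
        # applied as a matrix product against the unfolded patches.
        self.weight = nn.Parameter(
            torch.randn(out_channels, in_channels * kernel_size * kernel_size) * 0.01
        )
        self.bias = nn.Parameter(torch.zeros(out_channels))

    def forward(self, x):
        n, c, h, w = x.shape
        # im2col via unfold: shape (N, C*k*k, L), L = number of sliding positions.
        cols = F.unfold(x, self.kernel_size, stride=self.stride, padding=self.padding)
        # Standardize each patch vector (length C*k*k) independently.
        mean = cols.mean(dim=1, keepdim=True)
        std = cols.std(dim=1, keepdim=True, unbiased=False)
        cols = (cols - mean) / (std + self.eps)
        # Convolution as a matrix product with the flattened kernel, plus bias.
        out = self.weight @ cols + self.bias.view(1, -1, 1)
        # Reshape the L positions back to a spatial grid.
        h_out = (h + 2 * self.padding - self.kernel_size) // self.stride + 1
        w_out = (w + 2 * self.padding - self.kernel_size) // self.stride + 1
        return out.view(n, -1, h_out, w_out)


if __name__ == "__main__":
    # Quick shape check under the assumptions above.
    layer = NormalizedConv2d(3, 16, kernel_size=3, padding=1)
    y = layer(torch.randn(8, 3, 32, 32))
    print(y.shape)  # torch.Size([8, 16, 32, 32])
```

Because the normalization happens inside the unfold-matmul pipeline rather than as a separate layer, this sketch also illustrates why the approach does not drop directly into frameworks whose convolution kernels fuse im2col and the matrix product.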
