Paper Title

On the Number of Linear Regions of Convolutional Neural Networks

Paper Authors

Xiong, H., Huang, L., Yu, M., Liu, L., Zhu, F., Shao, L.

Abstract

One fundamental problem in deep learning is understanding the outstanding performance of deep Neural Networks (NNs) in practice. One explanation for the superiority of NNs is that they can realize a large class of complicated functions, i.e., they have powerful expressivity. The expressivity of a ReLU NN can be quantified by the maximal number of linear regions it can separate its input space into. In this paper, we provide several mathematical results needed for studying the linear regions of CNNs, and use them to derive the maximal and average numbers of linear regions for one-layer ReLU CNNs. Furthermore, we obtain upper and lower bounds for the number of linear regions of multi-layer ReLU CNNs. Our results suggest that deeper CNNs have more powerful expressivity than their shallow counterparts, while CNNs have more expressivity than fully-connected NNs per parameter.
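
As a quick illustration of the linear-region notion (not taken from the paper, and using a one-layer fully-connected ReLU network rather than a CNN for simplicity), the sketch below counts the distinct ReLU activation patterns of random inputs on a dense 2-D grid; each pattern corresponds to one linear region of the piecewise-linear function the network computes. All names and parameter choices here are illustrative.

```python
# Minimal sketch: empirically estimate the number of linear regions of a
# random one-layer ReLU network by counting distinct activation patterns
# over a dense grid of 2-D inputs. Illustrative only, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 2, 5                  # input dimension, number of ReLU units
W = rng.standard_normal((n_hidden, n_in))
b = rng.standard_normal(n_hidden)

# Sample a dense grid over [-3, 3]^2.
xs = np.linspace(-3, 3, 500)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, n_in)

# The activation pattern (which units are positive) determines which linear
# region an input lies in; count the distinct patterns observed on the grid.
patterns = (grid @ W.T + b > 0)
n_regions = len(np.unique(patterns, axis=0))

# For 5 hyperplanes in general position in R^2, Zaslavsky's theorem gives the
# maximum: sum_{j=0}^{2} C(5, j) = 1 + 5 + 10 = 16 regions.
print(f"observed regions: {n_regions} (theoretical max: 16)")
```

The observed count can fall below the theoretical maximum because some regions may lie outside the sampled box; the paper's contribution is to derive exact maximal/average counts and bounds of this kind for convolutional architectures rather than fully-connected ones.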
