Paper title
No one-hidden-layer neural network can represent multivariable functions
Paper authors
Paper abstract
In a function approximation with a neural network, an input dataset is mapped to an output index by optimizing the parameters of each hidden-layer unit. For a unary function, we present constraints on the parameters and on the function's second derivative by constructing a continuum version of a one-hidden-layer neural network with the rectified linear unit (ReLU) activation function. The network is accurately implemented because the constraints decrease the degrees of freedom of the parameters. We also explain the existence of a smooth binary function that cannot be precisely represented by any such neural network.
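
As a concrete illustration (not taken from the paper), the network family the abstract refers to can be written as f(x) = Σ_i v_i ReLU(w_i x + b_i) + c. The sketch below fits a finite-width instance of this family to a unary target; the target sin(πx), the hidden width of 50, and the least-squares fit of the output layer are assumptions made for illustration, and a finite network only approximates what the paper's continuum version represents exactly.

```python
# A minimal sketch (illustrative assumptions, not the paper's construction) of a
# one-hidden-layer ReLU network f(x) = sum_i v_i * relu(w_i * x + b_i) + c
# fitted to a unary target function.
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def network(x, w, b, v, c):
    # x: (n,) inputs; hidden activations have shape (n, units)
    return relu(np.outer(x, w) + b) @ v + c

# Illustrative unary target on [-1, 1] (assumed for this sketch)
x = np.linspace(-1.0, 1.0, 200)
y = np.sin(np.pi * x)

units = 50                           # assumed hidden width
w = rng.normal(size=units)           # input weights
b = rng.normal(size=units)           # biases

# With w and b fixed, the network output is linear in (v, c), so a
# least-squares solve gives the best output layer for this hidden layer.
H = np.column_stack([relu(np.outer(x, w) + b), np.ones_like(x)])
coef, *_ = np.linalg.lstsq(H, y, rcond=None)
v, c = coef[:-1], coef[-1]

err = np.max(np.abs(network(x, w, b, v, c) - y))
print(f"max |f(x) - sin(pi x)| on the grid: {err:.2e}")
```

For a smooth unary target such as this one, the residual shrinks as the hidden layer widens, consistent with the abstract's unary case; the paper's negative result concerns binary (two-variable) inputs, for which it shows that some smooth functions admit no exact representation even by the continuum network.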