Paper Title

Layer Sparsity in Neural Networks

Authors

Mohamed Hebiri, Johannes Lederer

Abstract

Sparsity has become popular in machine learning, because it can save computational resources, facilitate interpretations, and prevent overfitting. In this paper, we discuss sparsity in the framework of neural networks. In particular, we formulate a new notion of sparsity that concerns the networks' layers and, therefore, aligns particularly well with the current trend toward deep networks. We call this notion layer sparsity. We then introduce corresponding regularization and refitting schemes that can complement standard deep-learning pipelines to generate more compact and accurate networks.
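To make the notion concrete, the regularization the abstract alludes to can be sketched as a group-lasso-style penalty in which each layer's weight matrix forms one group, so the penalty can drive entire layers to zero rather than individual weights. This is a minimal NumPy illustration of that idea, not the paper's exact scheme; the function name and the choice of the Frobenius norm per group are assumptions for illustration.

```python
import numpy as np

def layer_sparsity_penalty(layer_weights, lam=0.1):
    # Group-lasso-style penalty with one group per layer:
    # lam * sum_l ||W_l||_F. Because the norm is not squared,
    # the optimizer can set a whole layer's weights exactly to
    # zero, effectively removing (sparsifying) that layer.
    # (Illustrative sketch, not the paper's exact formulation.)
    return lam * sum(np.linalg.norm(W) for W in layer_weights)

rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 4)) for _ in range(3)]
print(layer_sparsity_penalty(weights))
```

Adding such a term to the training loss penalizes the number of active layers; a subsequent refitting step would then retrain only the surviving layers without the penalty to recover accuracy.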