Paper Title
Neural Networks Reduction via Lumping
Paper Authors
Paper Abstract
The increasing size of recently proposed Neural Networks makes it hard to implement them on embedded devices, where memory, battery and computational power are a non-trivial bottleneck. For this reason, over the last few years the network compression literature has been thriving, and a large number of solutions have been published to reduce both the number of operations and the number of parameters of the models. Unfortunately, most of these reduction techniques are in fact heuristics and usually require at least one re-training step to recover accuracy. The need for model reduction procedures is also well known in the fields of Verification and Performance Evaluation, where large efforts have been devoted to the definition of quotients that preserve the observable underlying behaviour. In this paper we try to bridge the gap between the most popular and highly effective network reduction strategies and formal notions, such as lumpability, introduced for the verification and evaluation of Markov Chains. Elaborating on lumpability, we propose a pruning approach that reduces the number of neurons in a network without using any data or fine-tuning, while exactly preserving its behaviour. By relaxing the constraints of the exact quotienting method, we can also give a formal explanation of some of the most common reduction techniques.
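To make the idea of exact, data-free lumping concrete, the sketch below shows one simple, well-known instance of it for a two-layer ReLU network (this is an illustration under our own assumptions, not necessarily the authors' algorithm, and the names `lump_relu_layer` and `tol` are hypothetical): if the incoming weights and bias of a hidden neuron are a positive multiple of another's, its activation is always that same multiple of the other's, so it can be removed by folding its outgoing weights into its twin, without ever evaluating the network on data.

```python
# Illustrative sketch only: one simple instance of exact, data-free lumping.
import numpy as np

def lump_relu_layer(W1, b1, W2, tol=1e-8):
    """Exactly lump the hidden layer of  y = W2 @ relu(W1 @ x + b1).

    If the incoming weights and bias of neuron j are a positive multiple of
    those of neuron i (row_j == c * row_i, c > 0), then by positive
    homogeneity of ReLU its activation satisfies h_j == c * h_i for every
    input x. Neuron j can therefore be dropped after folding c * W2[:, j]
    into W2[:, i]; the reduced network computes exactly the same function.
    """
    rows = np.hstack([W1, b1[:, None]])       # incoming weights + bias, one row per neuron
    n = rows.shape[0]
    merged = np.zeros(n, dtype=bool)
    W2 = W2.copy()
    keep = []
    for i in range(n):
        if merged[i]:
            continue
        keep.append(i)
        norm_i = rows[i] @ rows[i]
        if norm_i < tol:                      # all-zero neuron: nothing to merge into it
            continue
        for j in range(i + 1, n):
            if merged[j]:
                continue
            c = (rows[j] @ rows[i]) / norm_i  # candidate proportionality factor
            if c > 0 and np.allclose(rows[j], c * rows[i], atol=tol):
                W2[:, i] += c * W2[:, j]      # fold j's outgoing weights into i
                merged[j] = True
    return W1[keep], b1[keep], W2[:, keep]

# Quick check: duplicate a neuron (up to a positive factor) and verify
# that the lumped network is pointwise identical to the original one.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3)); W1[3] = 2.0 * W1[1]
b1 = rng.normal(size=4);      b1[3] = 2.0 * b1[1]
W2 = rng.normal(size=(2, 4))
V1, c1, V2 = lump_relu_layer(W1, b1, W2)      # hidden layer shrinks from 4 to 3
x = rng.normal(size=3)
assert np.allclose(W2 @ np.maximum(W1 @ x + b1, 0.0),
                   V2 @ np.maximum(V1 @ x + c1, 0.0))
```

Relaxing the exact proportionality test to an approximate one (for instance, merging nearly proportional neurons) is the kind of step the abstract alludes to when it relates the quotienting method to common heuristic pruning techniques, at the cost of no longer preserving the behaviour exactly.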