Paper Title


The least-control principle for local learning at equilibrium

Authors

Alexander Meulemans, Nicolas Zucchet, Seijin Kobayashi, Johannes von Oswald, João Sacramento

Abstract


Equilibrium systems are a powerful way to express neural computations. As special cases, they include models of great current interest in both neuroscience and machine learning, such as deep neural networks, equilibrium recurrent neural networks, deep equilibrium models, or meta-learning. Here, we present a new principle for learning such systems with a temporally- and spatially-local rule. Our principle casts learning as a least-control problem, where we first introduce an optimal controller to lead the system towards a solution state, and then define learning as reducing the amount of control needed to reach such a state. We show that incorporating learning signals within a dynamics as an optimal control enables transmitting activity-dependent credit assignment information, avoids storing intermediate states in memory, and does not rely on infinitesimal learning signals. In practice, our principle leads to strong performance matching that of leading gradient-based learning methods when applied to an array of problems involving recurrent neural networks and meta-learning. Our results shed light on how the brain might learn and offer new ways of approaching a broad class of machine learning problems.
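The least-control recipe in the abstract can be illustrated with a minimal toy sketch. This is not the paper's implementation: it uses a scalar equilibrium network s = tanh(w·s + x), a simple proportional feedback controller standing in for the optimal controller, and a local update that shrinks the control still needed at the controlled equilibrium. All function names, constants, and the target value below are illustrative assumptions.

```python
import math

def free_equilibrium(w, x, n_iter=200):
    """Fixed point of the uncontrolled dynamics: s = tanh(w*s + x)."""
    s = 0.0
    for _ in range(n_iter):
        s = math.tanh(w * s + x)
    return s

def controlled_equilibrium(w, x, target, k=5.0, dt=0.1, n_iter=500):
    """Relax the controlled dynamics ds/dt = -s + tanh(w*s + x) + u,
    with a proportional controller u = k*(target - s) standing in for
    the paper's optimal controller. Returns the settled state and the
    residual control still needed to hold the system at the solution."""
    s = 0.0
    for _ in range(n_iter):
        u = k * (target - s)
        s = s + dt * (-s + math.tanh(w * s + x) + u)
    return s, k * (target - s)

# Toy task (illustrative numbers): drive the equilibrium output to 0.8.
w, x, target, lr = 0.2, 0.5, 0.8, 0.5
for _ in range(200):
    s, u = controlled_equilibrium(w, x, target)
    # Local least-control-style update: control signal u times the local
    # sensitivity d tanh(w*s + x)/dw, both evaluated at the controlled
    # equilibrium. It moves w so that less control is needed next time.
    w += lr * u * (1.0 - math.tanh(w * s + x) ** 2) * s

s_free = free_equilibrium(w, x)
_, u_final = controlled_equilibrium(w, x, target)
```

After training, the free (uncontrolled) equilibrium sits near the target and the residual control is close to zero, which is the sense in which learning "reduces the amount of control needed to reach such a state". Note the update uses only quantities available at the controlled equilibrium, i.e. it is local in time and space, and no intermediate states are stored.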
