Paper Title
Thalamus: a brain-inspired algorithm for biologically-plausible continual learning and disentangled representations
Paper Authors
Paper Abstract
Animals thrive in a constantly changing environment and leverage its temporal structure to learn well-factorized causal representations. In contrast, traditional neural networks suffer from forgetting in changing environments, and many methods have been proposed to limit forgetting, each with different trade-offs. Inspired by the brain's thalamocortical circuit, we introduce a simple algorithm that uses optimization at inference time to dynamically generate internal representations of the current task. The algorithm alternates between updating the model weights and a latent task embedding, allowing the agent to parse the stream of temporal experience into discrete events and organize learning about them. On a continual learning benchmark, it achieves a competitive final average accuracy by mitigating forgetting; more importantly, by requiring the model to adapt through latent updates, it organizes knowledge into flexible structures with a cognitive interface to control them. Tasks later in the sequence can be solved through knowledge transfer as they become reachable within the well-factorized latent space. The algorithm meets many of the desiderata of an ideal continually learning agent in open-ended environments, and its simplicity suggests fundamental computations in circuits with abundant feedback control loops, such as the thalamocortical circuits in the brain.
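To make the alternation concrete, below is a minimal PyTorch sketch of the idea as described in the abstract, not the paper's reference implementation. The same task loss drives either the latent task embedding (optimized at inference time) or the network weights, and a loss spike is treated as a task-switch signal that triggers latent updates. All names here (`TaskModel`, `switch_threshold`, the chosen dimensions and learning rates) are illustrative assumptions.

```python
# Sketch of Thalamus-style alternating optimization (assumed details,
# not the authors' code): gradients flow either into a latent task
# embedding z or into the model weights, depending on the loss level.
import torch
import torch.nn as nn

class TaskModel(nn.Module):
    """Network conditioned on a latent task embedding z (hypothetical architecture)."""
    def __init__(self, in_dim=32, z_dim=8, hidden=64, out_dim=10):
        super().__init__()
        self.z = nn.Parameter(torch.zeros(z_dim))  # latent task embedding
        self.net = nn.Sequential(
            nn.Linear(in_dim + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        z = self.z.expand(x.shape[0], -1)  # broadcast embedding over the batch
        return self.net(torch.cat([x, z], dim=-1))

def train_step(model, x, y, loss_fn, w_opt, z_opt, switch_threshold=2.0):
    """One alternation step: a loss spike is read as a task switch, so only
    the latent z is optimized (inference-time updates); otherwise the
    weights are updated as usual. The threshold value is an assumption."""
    loss = loss_fn(model(x), y)
    model.zero_grad()
    loss.backward()
    (z_opt if loss.item() > switch_threshold else w_opt).step()
    return loss.item()

model = TaskModel()
loss_fn = nn.CrossEntropyLoss()
weights = [p for n, p in model.named_parameters() if n != "z"]
w_opt = torch.optim.Adam(weights, lr=1e-3)
z_opt = torch.optim.Adam([model.z], lr=1e-1)  # faster latent dynamics

x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
print(train_step(model, x, y, loss_fn, w_opt, z_opt))
```

One design point this sketch illustrates: because only the low-dimensional embedding moves at a task boundary, previously learned weights are shielded from interference, and a new task can sometimes be solved purely by finding a suitable point in the latent space, which is the knowledge-transfer behavior the abstract describes.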