Paper Title

Learning Neural Hamiltonian Dynamics: A Methodological Overview

Authors

Zhijie Chen, Mingquan Feng, Junchi Yan, Hongyuan Zha

Abstract

The past few years have witnessed increased interest in learning Hamiltonian dynamics within deep learning frameworks. As an inductive bias grounded in physical laws, Hamiltonian dynamics endow neural networks with accurate long-term prediction, interpretability, and data-efficient learning. However, Hamiltonian dynamics also impose energy-conservation or dissipation assumptions on the input data and introduce additional computational overhead. In this paper, we systematically survey recently proposed Hamiltonian neural network models, with a special emphasis on methodology. We discuss the major contributions of these models and compare them along four overlapping directions: 1) generalized Hamiltonian systems; 2) symplectic integration; 3) generalized input forms; and 4) extended problem settings. We also provide an outlook on the fundamental challenges and emerging opportunities in this area.
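For readers new to the area, the sketch below illustrates the basic recipe that the surveyed models build on: a neural network parameterizes the Hamiltonian H(q, p), Hamilton's equations recover the time derivatives via automatic differentiation, and a symplectic integrator rolls the state forward. This is a minimal illustration under a PyTorch setting, not any specific model from the survey; the names HamiltonianNet, hamiltonian_vector_field, and symplectic_euler_step are ours.

```python
# Minimal sketch (not from the survey itself): learn a scalar Hamiltonian H(q, p)
# with a neural network and recover the dynamics from its symplectic gradient,
#   dq/dt = dH/dp,   dp/dt = -dH/dq.
import torch
import torch.nn as nn


class HamiltonianNet(nn.Module):
    """MLP mapping the state (q, p) to a scalar Hamiltonian H(q, p)."""

    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, q: torch.Tensor, p: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([q, p], dim=-1)).squeeze(-1)


def hamiltonian_vector_field(model: HamiltonianNet, q, p):
    """Time derivatives (dq/dt, dp/dt) from the learned H via autograd."""
    q = q.detach().requires_grad_(True)
    p = p.detach().requires_grad_(True)
    H = model(q, p).sum()
    dH_dq, dH_dp = torch.autograd.grad(H, (q, p), create_graph=True)
    return dH_dp, -dH_dq  # Hamilton's equations


def symplectic_euler_step(model, q, p, dt: float = 0.01):
    """One semi-implicit Euler step (exactly symplectic for separable H),
    which preserves the phase-space structure far better than explicit
    Euler over long rollouts."""
    _, dp_dt = hamiltonian_vector_field(model, q, p)
    p_next = p + dt * dp_dt                      # update momentum first
    dq_dt, _ = hamiltonian_vector_field(model, q, p_next)
    q_next = q + dt * dq_dt                      # then position with the new momentum
    return q_next, p_next
```

In a typical training setup, the predicted (dq/dt, dp/dt) would be regressed against observed or finite-difference derivatives from trajectory data; the conservation behavior and data efficiency mentioned in the abstract stem from hard-wiring Hamilton's equations into this prediction step rather than learning the vector field freely.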
