Paper Title

Learning interaction kernels in stochastic systems of interacting particles from multiple trajectories

Paper Authors

Fei Lu, Mauro Maggioni, Sui Tang

Paper Abstract

We consider stochastic systems of interacting particles or agents, with dynamics determined by an interaction kernel which depends only on pairwise distances. We study the problem of inferring this interaction kernel from observations of the positions of the particles, in either continuous or discrete time, along multiple independent trajectories. We introduce a nonparametric inference approach to this inverse problem, based on a regularized maximum likelihood estimator constrained to suitable hypothesis spaces adaptive to the data. We show that a coercivity condition enables us to control the condition number of the problem and prove the consistency of our estimator, and that in fact it converges at a near-optimal learning rate, equal to the minimax rate of $1$-dimensional nonparametric regression. In particular, this rate is independent of the dimension of the state space, which is typically very high. We also analyze the discretization error in the case of discrete-time observations, showing that it is of order $1/2$ in the time gap between observations. This term, when large, dominates the sampling error and the approximation error, preventing convergence of the estimator. Finally, we exhibit an efficient parallel algorithm for constructing the estimator from data, and we demonstrate its effectiveness with numerical tests on prototype systems, including stochastic opinion dynamics and a Lennard-Jones model.
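To make the setting concrete, below is a minimal simulation-and-estimation sketch in Python; it is not the authors' code or algorithm. It assumes a commonly used first-order model dX_i = (1/N) sum_j phi(|X_j - X_i|)(X_j - X_i) dt + sigma dB_i, an Euler-Maruyama discretization, and a piecewise-constant hypothesis space for phi; the function names (simulate, fit_kernel), the test kernel, and all parameter values are illustrative assumptions. With constant diffusion, maximizing the discretized likelihood over a linear hypothesis space is a quadratic problem in the basis coefficients, which is what fit_kernel solves via regularized normal equations.

import numpy as np

# Minimal sketch (not the authors' code). Assumed model:
#   dX_i = (1/N) * sum_j phi(|X_j - X_i|) (X_j - X_i) dt + sigma dB_i,
# simulated by Euler-Maruyama; phi is estimated on a piecewise-constant
# hypothesis space by regularized least squares.

def simulate(phi, N=8, d=2, M=50, L=100, dt=0.01, sigma=0.1, seed=None):
    """Simulate M independent trajectories of N particles in R^d over L steps."""
    rng = np.random.default_rng(seed)
    X = np.empty((M, L + 1, N, d))
    X[:, 0] = rng.uniform(-1.0, 1.0, size=(M, N, d))
    for l in range(L):
        x = X[:, l]                                    # (M, N, d)
        diff = x[:, None, :, :] - x[:, :, None, :]     # diff[m, i, j] = X_j - X_i
        r = np.linalg.norm(diff, axis=-1)              # pairwise distances (M, N, N)
        drift = (phi(r)[..., None] * diff).mean(axis=2)
        X[:, l + 1] = x + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
    return X

def fit_kernel(X, dt, n_bins=20, r_max=3.0, reg=1e-6):
    """Estimate phi at bin centers from trajectories X of shape (M, L+1, N, d)."""
    M, Lp1, N, d = X.shape
    edges = np.linspace(0.0, r_max, n_bins + 1)
    A = np.zeros((n_bins, n_bins))
    b = np.zeros(n_bins)
    for l in range(Lp1 - 1):
        x = X[:, l]
        dx = (X[:, l + 1] - x) / dt                    # finite-difference drift observations
        diff = x[:, None, :, :] - x[:, :, None, :]
        r = np.linalg.norm(diff, axis=-1)
        idx = np.clip(np.digitize(r, edges) - 1, 0, n_bins - 1)
        G = np.zeros((M, N, d, n_bins))                # design: drift contribution of each basis fn
        for k in range(n_bins):
            mask = (idx == k).astype(float)[..., None]
            G[..., k] = (mask * diff).mean(axis=2)
        Gf = G.reshape(-1, n_bins)
        A += Gf.T @ Gf
        b += Gf.T @ dx.reshape(-1)
    coef = np.linalg.solve(A + reg * np.eye(n_bins), b)  # Tikhonov-regularized normal equations
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, coef

if __name__ == "__main__":
    true_phi = lambda r: 1.0 / (1.0 + r ** 2)          # illustrative test kernel, not from the paper
    X = simulate(true_phi, seed=0)
    centers, est = fit_kernel(X, dt=0.01)
    for c, e in zip(centers, est):
        print(f"r = {c:.2f}: estimated phi {e:+.3f}, true phi {true_phi(c):+.3f}")

In this sketch the number of independent trajectories M plays the role of the sample size; the abstract's learning-rate and discretization-error statements concern how the estimation error behaves as M grows and as the observation time gap shrinks, with the hypothesis space chosen adaptively to the data rather than fixed as it is here.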
