Paper Title
A Dynamical System View of Langevin-Based Non-Convex Sampling
Paper Authors
Paper Abstract
Non-convex sampling is a key challenge in machine learning, central to non-convex optimization in deep learning as well as to approximate probabilistic inference. Despite its significance, many important theoretical challenges remain: existing guarantees (1) typically hold only for the averaged iterates rather than the more desirable last iterates, (2) lack convergence metrics that capture the scales of the variables, such as Wasserstein distances, and (3) mainly apply to elementary schemes such as stochastic gradient Langevin dynamics. In this paper, we develop a new framework that addresses the above issues by harnessing several tools from the theory of dynamical systems. Our key result is that, for a large class of state-of-the-art sampling schemes, their last-iterate convergence in Wasserstein distances can be reduced to the study of their continuous-time counterparts, which are much better understood. Coupled with standard assumptions of MCMC sampling, our theory immediately yields last-iterate Wasserstein convergence for many advanced sampling schemes, such as proximal, randomized mid-point, and Runge-Kutta integrators. Beyond existing methods, our framework also motivates more efficient schemes that enjoy the same rigorous guarantees.
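For concreteness, the "elementary scheme" the abstract refers to, stochastic gradient Langevin dynamics (SGLD), is the Euler-Maruyama discretization of the Langevin diffusion dX_t = ∇log π(X_t) dt + √2 dW_t, whose stationary law is the target density π. Below is a minimal sketch of one SGLD step and a toy usage example; the names (sgld_step, grad_log_pi) and the Gaussian target are our own illustrative choices, not code or notation from the paper.

import numpy as np

def sgld_step(theta, grad_log_pi, step_size, rng):
    """One Euler-Maruyama step of (stochastic gradient) Langevin dynamics.

    theta        : current iterate (np.ndarray)
    grad_log_pi  : callable returning the gradient (or an unbiased stochastic
                   estimate of it) of log pi at theta, pi being the target
    step_size    : discretization step gamma > 0
    rng          : np.random.Generator supplying the Gaussian noise
    """
    noise = rng.standard_normal(theta.shape)
    return theta + step_size * grad_log_pi(theta) + np.sqrt(2.0 * step_size) * noise

# Toy usage (our assumption): sample from a standard Gaussian target,
# for which log pi(x) = -||x||^2 / 2 and hence grad log pi(x) = -x.
rng = np.random.default_rng(0)
theta = np.zeros(2)
for _ in range(10_000):
    theta = sgld_step(theta, lambda x: -x, 1e-2, rng)
print(theta)  # for small step sizes, the law of the last iterate is close to N(0, I)

The schemes the paper covers (proximal, randomized mid-point, Runge-Kutta integrators) replace this simple one-step update with more accurate discretizations of the same diffusion, which is what makes a reduction to the continuous-time dynamics natural.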