Paper Title
Consensus-Based Optimization on the Sphere: Convergence to Global Minimizers and Machine Learning
Paper Authors
Paper Abstract
We investigate the implementation of a new stochastic Kuramoto-Vicsek-type model for the global optimization of nonconvex functions on the sphere. This model belongs to the class of Consensus-Based Optimization methods. In fact, particles move on the sphere driven by a drift towards an instantaneous consensus point, which is computed as a convex combination of particle locations, weighted by the cost function according to Laplace's principle, and represents an approximation to a global minimizer. The dynamics is further perturbed by a random vector field to favor exploration, whose variance is a function of the distance of the particles to the consensus point. In particular, as soon as consensus is reached, the stochastic component vanishes. The main result of this paper is the proof of convergence of the numerical scheme to global minimizers, provided well-preparation conditions on the initial datum. The proof combines previous results on the mean-field limit with a novel asymptotic analysis and classical convergence results for numerical methods for SDEs. We present several numerical experiments, which show that the algorithm proposed in this paper scales well with the dimension and is extremely versatile. To quantify the performance of the new approach, we show that the algorithm performs essentially as well as ad hoc state-of-the-art methods on challenging problems in signal processing and machine learning, namely phase retrieval and robust subspace detection.
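To make the dynamics described in the abstract concrete, the following Python sketch implements a generic consensus-based optimization loop on the unit sphere: the consensus point is a Laplace-principle weighted average of the particle positions, a drift pulls each particle towards it, and the noise amplitude scales with the distance to the consensus point so that the stochastic component vanishes once consensus is reached. This is only a minimal illustrative sketch under those assumptions, not the paper's exact Kuramoto-Vicsek-type scheme; the function name cbo_sphere and all parameter values (alpha, lam, sigma, dt, etc.) are hypothetical.

```python
import numpy as np

def cbo_sphere(cost, d, n_particles=200, n_steps=1000, dt=0.05,
               alpha=50.0, lam=1.0, sigma=0.3, seed=0):
    """Illustrative consensus-based optimization sketch on the sphere S^{d-1}.

    Not the paper's exact discretization: the drift, diffusion, and
    projection terms of the stochastic Kuramoto-Vicsek-type model differ.
    """
    rng = np.random.default_rng(seed)

    # Initialize particles uniformly on the unit sphere.
    X = rng.standard_normal((n_particles, d))
    X /= np.linalg.norm(X, axis=1, keepdims=True)

    def consensus(points):
        # Consensus point: convex combination of particle locations,
        # weighted by exp(-alpha * cost) following Laplace's principle.
        costs = np.apply_along_axis(cost, 1, points)
        w = np.exp(-alpha * (costs - costs.min()))  # shift for stability
        return (w[:, None] * points).sum(axis=0) / w.sum()

    for _ in range(n_steps):
        v = consensus(X)

        # Drift towards the consensus point plus exploration noise whose
        # magnitude is proportional to the distance to the consensus point,
        # so the noise vanishes as consensus is approached.
        diff = v - X
        noise = (sigma * np.linalg.norm(diff, axis=1, keepdims=True)
                 * rng.standard_normal((n_particles, d)) * np.sqrt(dt))
        X = X + lam * dt * diff + noise

        # Re-project onto the sphere (the constraint set).
        X /= np.linalg.norm(X, axis=1, keepdims=True)

    # Return the final consensus point, projected back onto the sphere.
    v = consensus(X)
    return v / np.linalg.norm(v)

if __name__ == "__main__":
    # Toy usage: E(x) = -x[0] on S^{19} has its global minimizer at e_1.
    minimizer = cbo_sphere(lambda x: -x[0], d=20)
    print(minimizer[:5])
```

The exponential weighting concentrates the consensus point near the currently best particles as alpha grows, which is the sense in which it approximates a global minimizer; the distance-dependent noise is what lets the method keep exploring until consensus forms.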