Paper Title
Faster One-Sample Stochastic Conditional Gradient Method for Composite Convex Minimization
Paper Authors
Abstract
We propose a stochastic conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms. Existing CGM variants for this template either suffer from slow convergence rates or require carefully increasing the batch size over the course of the algorithm's execution, which leads to computing full gradients. In contrast, the proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration. Nevertheless, it guarantees fast convergence rates on par with more sophisticated variance-reduction techniques. In applications, we put special emphasis on problems with a large number of separable constraints. Such problems are prevalent among semidefinite programming (SDP) formulations arising in machine learning and theoretical computer science. We provide numerical experiments on matrix completion, unsupervised clustering, and sparsest-cut SDPs.
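To illustrate the core idea of pairing a conditional gradient (Frank-Wolfe) step with a one-sample SAG estimator, here is a minimal sketch. It is not the paper's exact algorithm: the least-squares finite sum, the l1-ball constraint, the `radius`, and the standard `2/(t+2)` step size are all illustrative assumptions.

```python
import numpy as np

# Illustrative problem: min_x (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2
# over the l1 ball (assumed constraint set, not from the paper).
rng = np.random.default_rng(0)
n, d = 50, 10                       # number of samples, dimension
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
radius = 1.0                        # l1-ball radius (assumed)

def grad_i(x, i):
    """Gradient of the i-th loss f_i(x) = 0.5 * (a_i^T x - b_i)^2."""
    return A[i] * (A[i] @ x - b[i])

def lmo_l1(g, r):
    """Linear minimization oracle over the l1 ball of radius r."""
    s = np.zeros_like(g)
    j = np.argmax(np.abs(g))
    s[j] = -r * np.sign(g[j])
    return s

x = np.zeros(d)
table = np.array([grad_i(x, i) for i in range(n)])  # stored per-sample gradients
g_bar = table.mean(axis=0)                          # SAG estimator

for t in range(2000):
    i = rng.integers(n)                  # one sample per iteration
    new_gi = grad_i(x, i)
    g_bar += (new_gi - table[i]) / n     # O(d) running-average update
    table[i] = new_gi
    s = lmo_l1(g_bar, radius)            # Frank-Wolfe direction
    gamma = 2.0 / (t + 2)                # standard CGM step size
    x = x + gamma * (s - x)
```

Note the contrast with increasing-batch-size CGM variants: each iteration here touches a single sample and updates the averaged gradient in O(d) time, at the memory cost of storing one gradient per sample.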