Paper Title
SAPD+: An Accelerated Stochastic Method for Nonconvex-Concave Minimax Problems
Paper Authors
Paper Abstract
We propose a new stochastic method SAPD+ for solving nonconvex-concave minimax problems of the form $\min_x\max_y \mathcal{L}(x,y)=f(x)+\Phi(x,y)-g(y)$, where $f,g$ are closed convex and $\Phi(x,y)$ is a smooth function that is weakly convex in $x$ and (strongly) concave in $y$. Let $\delta^2$ denote the variance bound for the unbiased stochastic oracle used within SAPD+ to estimate $\nabla\Phi$. When $\delta>0$, for both the strongly concave and the merely concave settings, SAPD+ achieves the best known oracle complexities: $\mathcal{O}\Big(\kappa_y\max\Big\{1,\frac{\delta^2}{\epsilon^2}\Big\}\frac{L\mathcal{G}_0}{\epsilon^{2}}\Big)$ for the strongly concave case without assuming compactness of the problem domain, and $\mathcal{O}\Big(\frac{L^3\mathcal{D}_y^2\mathcal{G}_0}{\epsilon^{4}}\Big(1+\frac{\delta^2}{\epsilon^2}\Big)\Big)$ for the merely concave case, where $\kappa_y\geq 1$ is the condition number, $L$ is the Lipschitz constant of $\nabla\Phi$, $\mathcal{G}_0$ is the primal-dual gap at the initial point, and $\mathcal{D}_y=\sup\{\|y\|:\ y\in\mathbf{dom}\, g\}$. We also propose SAPD+ with variance reduction, which enjoys an $\mathcal{O}\Big(\max\Big\{\kappa_y,\sqrt{\frac{\delta}{\epsilon}}\Big\}\cdot\Big(1+\kappa_y\frac{\delta}{\epsilon}\Big)\frac{L\mathcal{G}_0}{\epsilon^2}\Big)$ oracle complexity for the weakly convex-strongly concave setting; this is the best known upper complexity bound in the literature for this setting, and our paper establishes it for the first time. We demonstrate the efficiency of SAPD+ on a distributionally robust learning problem with a nonconvex regularizer and also on a multi-class classification problem in deep learning.
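To make the problem template concrete, the following self-contained Python sketch instantiates $\min_x\max_y f(x)+\Phi(x,y)-g(y)$ on a toy composite problem and runs plain stochastic proximal gradient descent-ascent against an unbiased oracle with noise level $\delta$. This illustrates only the problem structure and oracle model from the abstract; it is not the SAPD+ iteration (whose accelerated updates are not given here), and all instance data, step sizes, and constants are illustrative assumptions.

```python
import numpy as np

# Toy instance of  min_x max_y  L(x, y) = f(x) + Phi(x, y) - g(y)  with
#   f(x)     = lam * ||x||_1                     (closed convex, prox-friendly)
#   Phi(x,y) = <y, A x - b> - (mu/2) * ||y||^2   (convex in x, mu-strongly concave in y)
#   g(y)     = 0
# The loop below is plain stochastic proximal gradient descent-ascent with an
# unbiased noisy oracle of variance ~ delta^2 -- NOT the accelerated SAPD+
# iteration, which the abstract does not specify. All constants are illustrative.

rng = np.random.default_rng(0)
n, m = 20, 10
A = rng.standard_normal((m, n)) / np.sqrt(n)   # scaled so ||A|| = O(1)
b = rng.standard_normal(m)
lam, mu, delta = 0.1, 1.0, 0.05                # delta = oracle noise std

def noisy_grads(x, y):
    """Unbiased stochastic estimates of grad_x Phi and grad_y Phi."""
    gx = A.T @ y + delta * rng.standard_normal(n)
    gy = (A @ x - b) - mu * y + delta * rng.standard_normal(m)
    return gx, gy

def prox_l1(v, t):
    """prox_{t * lam * ||.||_1}(v): soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

x, y = np.zeros(n), np.zeros(m)
tau, sigma = 0.05, 0.1                         # step sizes; must be small vs. L and 1/mu
for _ in range(2000):
    gx, gy = noisy_grads(x, y)
    x = prox_l1(x - tau * gx, tau)             # proximal descent step in x
    y = y + sigma * gy                         # ascent step in y (g = 0, no dual prox)

# Stationarity proxies (evaluated with exact gradients); both settle near the
# noise floor dictated by delta rather than at exactly zero.
res_x = np.linalg.norm(x - prox_l1(x - tau * (A.T @ y), tau)) / tau
res_y = np.linalg.norm((A @ x - b) - mu * y)
print(f"prox-gradient residual in x: {res_x:.3e}, gradient norm in y: {res_y:.3e}")
```

The proximal step handles the nonsmooth $f$ exactly, matching the composite structure in the abstract; replacing these plain descent-ascent updates with momentum-accelerated ones is where SAPD+ obtains its improved oracle complexity.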