Title
Provable Constrained Stochastic Convex Optimization with XOR-Projected Gradient Descent
Authors
Abstract
Provably solving stochastic convex optimization problems with constraints is essential for a variety of problems in science, business, and statistics. The recently proposed XOR Stochastic Gradient Descent (XOR-SGD) provides a convergence rate guarantee for the constraint-free version of the problem by leveraging XOR-Sampling. However, the task becomes harder when additional equality and inequality constraints must be satisfied. Here we propose XOR-PGD, a novel algorithm based on Projected Gradient Descent (PGD) coupled with the XOR sampler, which is guaranteed to solve the constrained stochastic convex optimization problem at a linear convergence rate given a properly chosen step size. On both synthetic stochastic inventory management and real-world road network design problems, we show that the constraint satisfaction rate of solutions optimized by XOR-PGD is more than $10\%$ higher than that of competing approaches in a very large search space. XOR-PGD is demonstrated to be more accurate and efficient than both XOR-SGD and SGD coupled with MCMC-based samplers, and experiments with large dimensions show that it also scales better with respect to the number of samples and processor cores.
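To make the projected-gradient-descent backbone concrete, here is a minimal generic sketch of projected stochastic gradient descent: take a stochastic gradient step, then project the iterate back onto the feasible set. This is only an illustration of the PGD skeleton the abstract refers to; the paper's XOR sampler is not reproduced (a plain Monte Carlo gradient estimate stands in for it), and the function names `projected_sgd`, `grad_sample`, and `project` are hypothetical.

```python
import numpy as np

def projected_sgd(grad_sample, project, x0, step=0.01, iters=1000, seed=0):
    """Generic projected SGD: stochastic gradient step, then projection
    onto the feasible set. Stand-in for the sampler-driven variant."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad_sample(x, rng)       # one-sample stochastic gradient estimate
        x = project(x - step * g)     # gradient step, then project back
    return x

# Toy instance: minimize E[(x - z)^2] with z ~ N(1, 0.1), subject to x >= 2.
# The unconstrained minimizer is ~1, so the constraint is active and the
# projected iterates settle on the boundary x = 2.
grad = lambda x, rng: 2.0 * (x - rng.normal(1.0, 0.1))
proj = lambda x: np.maximum(x, 2.0)   # Euclidean projection onto {x >= 2}
x_star = projected_sgd(grad, proj, x0=np.array([5.0]))
```

In this toy run `x_star` lands on the constraint boundary at 2, illustrating why the step size matters: the convergence guarantee in the abstract hinges on choosing it appropriately.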