Paper Title

Communication-efficient Variance-reduced Stochastic Gradient Descent

Paper Authors

Hossein S. Ghadikolaei, Sindri Magnusson

Paper Abstract

We consider the problem of communication-efficient distributed optimization, where multiple nodes exchange important algorithm information in every iteration to solve large problems. In particular, we focus on the stochastic variance-reduced gradient and propose a novel approach to make it communication-efficient. That is, we compress the communicated information to a few bits while preserving the linear convergence rate of the original uncompressed algorithm. Comprehensive theoretical and numerical analyses on real datasets reveal that our algorithm can significantly reduce the communication complexity, by as much as 95%, with almost no noticeable penalty. Moreover, it is much more robust to quantization (in terms of maintaining the true minimizer and the convergence rate) than the state-of-the-art algorithms for solving distributed optimization problems. Our results have important implications for using machine learning over internet-of-things and mobile networks.
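The abstract describes quantizing the information exchanged in a stochastic variance-reduced gradient (SVRG) method to a few bits while keeping linear convergence. The snippet below is a minimal sketch of that general idea on a synthetic least-squares problem; the uniform few-bit quantizer, step size, and problem data are illustrative assumptions and not the paper's exact scheme.

```python
# Minimal sketch: SVRG-style updates where the "communicated" gradient
# information is quantized to a few bits per entry. Illustrative only;
# the quantizer, step size, and data are assumptions, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem: minimize f(x) = (1/2n) * ||A x - b||^2
n, d = 200, 10
A = rng.standard_normal((n, d))
x_star = rng.standard_normal(d)
b = A @ x_star

def full_grad(x):
    return A.T @ (A @ x - b) / n

def sample_grad(x, i):
    return A[i] * (A[i] @ x - b[i])

def quantize(v, bits=4):
    """Uniform few-bit quantizer: send one float (the scale) plus `bits` bits per entry."""
    levels = 2 ** bits - 1
    scale = np.max(np.abs(v))
    if scale == 0.0:
        return v
    return np.round((v / scale) * levels) / levels * scale

# SVRG outer/inner loops where every communicated direction is quantized.
x = np.zeros(d)
step = 0.01
for epoch in range(25):
    mu = quantize(full_grad(x))          # quantized full gradient, sent once per epoch
    x_inner = x.copy()
    for _ in range(n):
        i = rng.integers(n)
        # variance-reduced direction, quantized before it is "sent"
        g = quantize(sample_grad(x_inner, i) - sample_grad(x, i)) + mu
        x_inner -= step * g
    x = x_inner
    print(f"epoch {epoch:2d}  ||x - x*|| = {np.linalg.norm(x - x_star):.3e}")
```

Because the quantization error of this relative quantizer shrinks with the magnitude of the transmitted vectors, the iterates keep contracting toward the true minimizer, which is the qualitative behavior (linear convergence under compressed communication) that the abstract claims.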
