Paper Title
Distributed Stochastic Optimization With Unbounded Subgradients Over Randomly Time-Varying Networks
Paper Authors
Paper Abstract
Motivated by distributed statistical learning over uncertain communication networks, we study distributed stochastic optimization by networked nodes to cooperatively minimize a sum of convex cost functions. The network is modeled by a sequence of time-varying random digraphs, with each node representing a local optimizer and each edge representing a communication link. We consider the distributed subgradient optimization algorithm with noisy measurements of the local cost functions' subgradients, and with additive and multiplicative noises in the information exchanged between each pair of nodes. By the stochastic Lyapunov method, convex analysis, algebraic graph theory, and martingale convergence theory, we prove that if the local subgradient functions grow linearly and the sequence of digraphs is conditionally balanced and uniformly conditionally jointly connected, then proper algorithm step sizes can be designed so that all nodes' states converge to the global optimal solution almost surely.
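The abstract does not spell out the update rule. A common form of such a consensus-plus-subgradient iteration, which the following minimal Python sketch simulates, is x_i(k+1) = avg_j(received states) - alpha_k * (noisy subgradient of f_i at x_i(k)). The quadratic local costs, equal mixing weights, 1/k step sizes, edge probability, and noise levels below are illustrative assumptions, not the paper's exact setting.

import numpy as np

# Illustrative sketch (not the paper's exact algorithm): each of n nodes holds a
# local convex cost f_i(x) = 0.5 * (x - b_i)^2, whose subgradient grows linearly
# in x, matching the linear-growth assumption in the abstract. At every step the
# network is a fresh random digraph; communicated states carry multiplicative and
# additive noise, and subgradient measurements are noisy.

rng = np.random.default_rng(0)
n = 5                                  # number of nodes
b = rng.normal(size=n)                 # local cost parameters; global optimum is mean(b)
x = rng.normal(size=n)                 # initial local states
x_star = b.mean()                      # minimizer of sum_i f_i

for k in range(1, 20001):
    alpha = 1.0 / k                    # assumed diminishing step size
    # random digraph: each directed edge (j -> i), i != j, appears with probability 0.5
    A = (rng.random((n, n)) < 0.5).astype(float)
    np.fill_diagonal(A, 0.0)
    x_new = np.empty(n)
    for i in range(n):
        neighbors = np.flatnonzero(A[i])
        # received states corrupted by multiplicative and additive communication noise
        recv = [(1.0 + 0.1 * rng.normal()) * x[j] + 0.1 * rng.normal()
                for j in neighbors]
        # consensus step: equal weights over in-neighbors and self (an assumption)
        avg = (x[i] + sum(recv)) / (len(recv) + 1)
        # noisy measurement of the subgradient of f_i at x_i
        g = (x[i] - b[i]) + 0.1 * rng.normal()
        x_new[i] = avg - alpha * g
    x = x_new

print("final states:", np.round(x, 3), " global optimum:", round(x_star, 3))

Running the sketch, all local states cluster around the global minimizer, illustrating the almost-sure consensus-and-convergence behavior the abstract describes under its balance and joint-connectivity conditions.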