Paper Title

Scalable Plug-and-Play ADMM with Convergence Guarantees

Authors

Yu Sun, Zihui Wu, Xiaojian Xu, Brendt Wohlberg, Ulugbek S. Kamilov

Abstract

Plug-and-play priors (PnP) is a broadly applicable methodology for solving inverse problems by exploiting statistical priors specified as denoisers. Recent work has reported the state-of-the-art performance of PnP algorithms using pre-trained deep neural nets as denoisers in a number of imaging applications. However, current PnP algorithms are impractical in large-scale settings due to their heavy computational and memory requirements. This work addresses this issue by proposing an incremental variant of the widely used PnP-ADMM algorithm, making it scalable to large-scale datasets. We theoretically analyze the convergence of the algorithm under a set of explicit assumptions, extending recent theoretical results in the area. Additionally, we show the effectiveness of our algorithm with nonsmooth data-fidelity terms and deep neural net priors, its fast convergence compared to existing PnP algorithms, and its scalability in terms of speed and memory.
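
For intuition, below is a minimal sketch of what an incremental PnP-ADMM loop can look like. It assumes the data-fidelity term splits into a sum of components, each with a tractable proximal operator, and that the prior is supplied as a generic denoiser. The names `prox_blocks` and `denoise`, the uniform block sampling, and the fixed penalty parameter are illustrative placeholders, not the paper's exact algorithm.

```python
import numpy as np

def incremental_pnp_admm(prox_blocks, denoise, x0, gamma=1.0, num_iters=100, rng=None):
    """Sketch of an incremental PnP-ADMM iteration (illustrative, not the paper's exact method).

    prox_blocks : list of callables; prox_blocks[i](v, gamma) evaluates the proximal
                  operator of the i-th data-fidelity component at v.
    denoise     : callable implementing the plug-in denoiser prior.
    x0          : initial estimate (NumPy array).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = x0.copy()
    z = x0.copy()
    u = np.zeros_like(x0)
    for _ in range(num_iters):
        i = rng.integers(len(prox_blocks))   # process one data block per iteration
        x = prox_blocks[i](z - u, gamma)     # partial data-fidelity proximal step
        z = denoise(x + u)                   # denoiser replaces the prior's proximal map
        u = u + x - z                        # scaled dual (Lagrange multiplier) update
    return x
```

Because each iteration touches only one data block, the per-iteration cost and memory footprint stay bounded as the dataset grows, which is the source of the scalability the abstract refers to; the convergence guarantees in the paper depend on explicit assumptions on the data-fidelity components and the denoiser that this sketch does not enforce.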
