Paper Title
Deep Generative Adversarial Residual Convolutional Networks for Real-World Super-Resolution
Paper Authors
Paper Abstract
Most current deep learning based single image super-resolution (SISR) methods focus on designing deeper/wider models to learn the non-linear mapping between low-resolution (LR) inputs and high-resolution (HR) outputs from a large number of paired (LR/HR) training data. They usually assume that the LR image is a bicubically down-sampled version of the HR image. However, such a degradation process does not hold in real-world settings, where inherent sensor noise, stochastic noise, compression artifacts, and possible mismatches between the image degradation process and the camera device come into play. These real-world image corruptions significantly reduce the performance of current SISR methods. To address these problems, we propose a deep Super-Resolution Residual Convolutional Generative Adversarial Network (SRResCGAN) that follows the real-world degradation settings by adversarially training the model with pixel-wise supervision in the HR domain from its generated LR counterpart. The proposed network exploits residual learning by minimizing an energy-based objective function with powerful image regularization and convex optimization techniques. We demonstrate in quantitative and qualitative experiments that our proposed approach generalizes robustly to real inputs and is easily deployable for other down-scaling operators and mobile/embedded devices.
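The abstract describes adversarial training with pixel-wise supervision in the HR domain combined with residual learning in the generator. The sketch below illustrates that kind of training objective in PyTorch; the `TinySRGenerator`, `TinyDiscriminator`, loss weights, and the x4 pixel-shuffle upscaler are illustrative assumptions for exposition, not the paper's actual SRResCGAN architecture or its energy-based objective with image regularization.

```python
# Minimal sketch of GAN-based SR training with pixel-wise HR supervision.
# All architectures and hyper-parameters here are assumptions, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySRGenerator(nn.Module):
    """Stand-in residual SR generator (x4 upscaling via pixel shuffle)."""
    def __init__(self, channels=64):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.body = nn.Sequential(*[nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1)) for _ in range(4)])
        self.tail = nn.Sequential(
            nn.Conv2d(channels, 3 * 16, 3, padding=1), nn.PixelShuffle(4))

    def forward(self, lr):
        feat = self.head(lr)
        feat = feat + self.body(feat)  # residual learning in feature space
        return self.tail(feat)

class TinyDiscriminator(nn.Module):
    """Stand-in patch discriminator for the adversarial loss."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 3, padding=1))

    def forward(self, x):
        return self.net(x)

def training_step(gen, disc, opt_g, opt_d, lr_img, hr_img, adv_weight=1e-3):
    """One generator/discriminator update: pixel-wise L1 + adversarial loss."""
    # Discriminator update: real HR vs. generated SR (detached).
    sr = gen(lr_img).detach()
    real_logits, fake_logits = disc(hr_img), disc(sr)
    d_loss = (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
              + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: pixel-wise supervision in the HR domain + adversarial term.
    sr = gen(lr_img)
    pixel_loss = F.l1_loss(sr, hr_img)
    adv_logits = disc(sr)
    adv_loss = F.binary_cross_entropy_with_logits(adv_logits, torch.ones_like(adv_logits))
    g_loss = pixel_loss + adv_weight * adv_loss
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return g_loss.item(), d_loss.item()
```

In this sketch the LR inputs would come from the method's learned degradation (the "generated LR counterpart" mentioned above) rather than bicubic down-sampling; how those LR images are produced is outside the scope of this toy example.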