Paper Title

Dual Perceptual Loss for Single Image Super-Resolution Using ESRGAN

Paper Authors

Jie Song, Huawei Yi, Wenqian Xu, Xiaohui Li, Bo Li, Yuanyuan Liu

Paper Abstract

The introduction of perceptual loss addresses the problem that per-pixel difference loss functions cause reconstructed images to be overly smooth, which has brought significant progress to the field of single image super-resolution reconstruction. Furthermore, generative adversarial networks (GANs) have been applied to the super-resolution field, effectively improving the visual quality of reconstructed images. However, at high upscaling factors, excessive abnormal inference by the network produces distorted structures, so that there is a certain deviation between the reconstructed image and the ground-truth image. To fundamentally improve the quality of reconstructed images, this paper proposes an effective method called Dual Perceptual Loss (DP Loss), which replaces the original perceptual loss in single image super-resolution reconstruction. Owing to the complementary properties of VGG features and ResNet features, the proposed DP Loss learns from both feature types simultaneously, which significantly improves the reconstruction quality of images. Qualitative and quantitative analyses on benchmark datasets demonstrate the superiority of our proposed method over state-of-the-art super-resolution methods.
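
To make the idea concrete, here is a minimal PyTorch sketch of a dual perceptual loss that combines feature-space distances from a pretrained VGG-19 and a pretrained ResNet-50. The backbone choices, the truncation points (VGG-19 conv5_4 before its activation, ResNet-50 up to layer4), the L1 distance, and the weighting factors lambda_vgg and lambda_res are illustrative assumptions, not the exact configuration used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class DualPerceptualLoss(nn.Module):
    """Sketch of a dual perceptual loss: VGG feature loss + ResNet feature loss.

    Layer choices and loss weights are assumptions for illustration; the
    paper's exact settings may differ.
    """

    def __init__(self, lambda_vgg=1.0, lambda_res=1.0):
        super().__init__()
        # VGG-19 feature extractor, truncated after conv5_4 and before its
        # activation (the convention popularized by ESRGAN).
        vgg = torchvision.models.vgg19(weights="DEFAULT").features[:35]
        # ResNet-50 feature extractor: everything up to (but excluding) the
        # global average pooling and the classification head.
        resnet = torchvision.models.resnet50(weights="DEFAULT")
        res_features = nn.Sequential(*list(resnet.children())[:-2])
        # Both extractors are frozen; only the SR generator is trained.
        for extractor in (vgg, res_features):
            extractor.eval()
            for p in extractor.parameters():
                p.requires_grad = False
        self.vgg = vgg
        self.resnet = res_features
        self.lambda_vgg = lambda_vgg
        self.lambda_res = lambda_res

    def forward(self, sr, hr):
        # sr, hr: (N, 3, H, W) RGB batches, assumed normalized the way the
        # pretrained backbones expect (ImageNet mean/std).
        loss_vgg = F.l1_loss(self.vgg(sr), self.vgg(hr))
        loss_res = F.l1_loss(self.resnet(sr), self.resnet(hr))
        return self.lambda_vgg * loss_vgg + self.lambda_res * loss_res


if __name__ == "__main__":
    criterion = DualPerceptualLoss()
    sr = torch.rand(2, 3, 128, 128)  # dummy super-resolved batch
    hr = torch.rand(2, 3, 128, 128)  # dummy ground-truth batch
    print(criterion(sr, hr).item())
```

In an ESRGAN-style training loop, a term like this would be combined with the generator's adversarial and pixel-wise losses, with the relative weights tuned on a validation set.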
