Paper Title

Unpaired Quad-Path Cycle Consistent Adversarial Networks for Single Image Defogging

Authors

Wei Liu, Cheng Chen, Rui Jiang, Tao Lu, Zixiang Xiong

Abstract

Adversarial learning-based image defogging methods have been extensively studied in computer vision due to their remarkable performance. However, most existing methods have limited defogging capabilities for real cases because they are trained on paired clear and synthesized foggy images of the same scenes. In addition, they have limitations in preserving vivid colors and rich texture details when defogging. To address these issues, we develop a novel generative adversarial network, called the quad-path cycle consistent adversarial network (QPC-Net), for single image defogging. QPC-Net consists of a Fog2Fogfree block and a Fogfree2Fog block. Each block contains three learning-based modules, namely fog removal, color-texture recovery, and fog synthesis, which sequentially compose dual paths that constrain each other to generate high-quality images. Specifically, the color-texture recovery module is designed to exploit the self-similarity of texture and structure information by learning the holistic channel-spatial feature correlations between the foggy image and its several derived images. Moreover, in the fog synthesis module, we utilize the atmospheric scattering model as guidance to improve generative quality, focusing on atmospheric light optimization with a novel sky segmentation network. Extensive experiments on both synthetic and real-world datasets show that QPC-Net outperforms state-of-the-art defogging methods in terms of quantitative accuracy and subjective visual quality.
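To make the quad-path layout in the abstract more concrete, the sketch below arranges the three named modules (fog removal, color-texture recovery, fog synthesis) into a Fog2Fogfree block and a Fogfree2Fog block whose outputs constrain each other through cycle-consistency terms. This is a minimal PyTorch sketch under stated assumptions: the class and function names (ConvStage, QuadPathCycle, cycle_losses) and the placeholder sub-networks are illustrative, not the authors' released implementation, and the fog-synthesis step in the paper is additionally guided by the standard atmospheric scattering model I(x) = J(x)·t(x) + A·(1 − t(x)), which is not modeled here.

```python
# Illustrative structural sketch only; module internals and names are assumptions.
import torch
import torch.nn as nn


class ConvStage(nn.Module):
    """Placeholder sub-network; the paper's actual generators are more elaborate."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)


class QuadPathCycle(nn.Module):
    """Quad-path layout implied by the abstract: two blocks, two paths each."""
    def __init__(self):
        super().__init__()
        # The three learning-based modules named in the abstract.
        self.fog_removal = ConvStage()
        self.color_texture_recovery = ConvStage()
        self.fog_synthesis = ConvStage()

    def fog2fogfree(self, foggy):
        # Path 1: foggy -> defogged (with color-texture recovery);
        # Path 2: defogged -> re-synthesized foggy image.
        defogged = self.color_texture_recovery(self.fog_removal(foggy))
        refogged = self.fog_synthesis(defogged)
        return defogged, refogged

    def fogfree2fog(self, clear):
        # Path 3: clear -> synthesized foggy image;
        # Path 4: synthesized foggy -> recovered clear image.
        synthetic_fog = self.fog_synthesis(clear)
        recovered = self.color_texture_recovery(self.fog_removal(synthetic_fog))
        return synthetic_fog, recovered


def cycle_losses(model: QuadPathCycle, foggy, clear, l1=nn.L1Loss()):
    """Cycle-consistency terms through which the dual paths constrain each other."""
    _, refogged = model.fog2fogfree(foggy)
    _, recovered = model.fogfree2fog(clear)
    return l1(refogged, foggy) + l1(recovered, clear)


if __name__ == "__main__":
    # Smoke test with unpaired random tensors standing in for foggy / clear images.
    loss = cycle_losses(QuadPathCycle(), torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
    print(loss.item())
```

In the full method these reconstruction terms would be combined with adversarial losses on each path; the sketch only shows how the two blocks close the cycle over unpaired data.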
