Paper Title
HDRfeat: A Feature-Rich Network for High Dynamic Range Image Reconstruction
Paper Authors
Abstract
A major challenge in reconstructing high dynamic range (HDR) images from multi-exposure low dynamic range (LDR) images, especially of dynamic scenes, is extracting and merging relevant contextual features so as to suppress ghosting and blurring artifacts from moving objects. To tackle this, in this work we propose a novel network for HDR reconstruction with deep and rich feature extraction layers, including residual attention blocks with sequential channel and spatial attention. To compress the rich features into the HDR domain, an architecture based on residual feature distillation blocks (RFDB) is adopted. In contrast to earlier deep-learning methods for HDR, these contributions shift the focus from merging/compression to feature extraction, the added value of which we demonstrate with ablation experiments. We present qualitative and quantitative comparisons on a public benchmark dataset, showing that our proposed method outperforms the state of the art.
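The abstract names residual attention blocks that apply channel attention followed by spatial attention. A minimal NumPy sketch of that sequential ordering is given below; it is an illustration only, not the paper's implementation. The MLP weights `w1`, `w2` are hypothetical placeholders for learned parameters, and the spatial branch uses a simple unlearned average of mean/max pooled maps in place of a learned convolution.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Squeeze-and-excite style channel attention on feat of shape (C, H, W).

    w1, w2 are hypothetical learned MLP weights of shapes (r, C) and (C, r).
    """
    squeezed = feat.mean(axis=(1, 2))          # global average pool -> (C,)
    hidden = np.maximum(0.0, w1 @ squeezed)    # bottleneck MLP with ReLU
    scale = sigmoid(w2 @ hidden)               # per-channel weights in (0, 1)
    return feat * scale[:, None, None]

def spatial_attention(feat):
    """Spatial attention from channel-pooled maps (simplified, unlearned)."""
    pooled = np.stack([feat.mean(axis=0), feat.max(axis=0)])  # (2, H, W)
    attn = sigmoid(pooled.mean(axis=0))        # (H, W) attention map
    return feat * attn[None, :, :]

def residual_attention_block(feat, w1, w2):
    """Channel attention, then spatial attention, with a residual skip."""
    out = channel_attention(feat, w1, w2)
    out = spatial_attention(out)
    return feat + out
```

Applying the block to a feature map leaves the shape unchanged, so it can be stacked freely inside a deeper feature-extraction stage, consistent with the abstract's emphasis on rich feature extraction before merging.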