Paper Title
Learning Light Field Angular Super-Resolution via a Geometry-Aware Network
Paper Authors
Paper Abstract
The acquisition of light field images with high angular resolution is costly. Although many methods have been proposed to improve the angular resolution of a sparsely-sampled light field, they always focus on light fields with a small baseline, as captured by consumer light field cameras. By making full use of the intrinsic \textit{geometry} information of light fields, in this paper we propose an end-to-end learning-based approach aimed at angularly super-resolving a sparsely-sampled light field with a large baseline. Our model consists of two learnable modules and a physically-based module. Specifically, it includes a depth estimation module for explicitly modeling the scene geometry, a physically-based warping module for novel view synthesis, and a light field blending module specifically designed for light field reconstruction. Moreover, we introduce a novel loss function to promote the preservation of the light field parallax structure. Experimental results on various light field datasets, including large-baseline light field images, demonstrate the significant superiority of our method over state-of-the-art ones: our method improves the PSNR over the second-best method by up to 2 dB on average, while reducing the execution time by 48$\times$. In addition, our method better preserves the light field parallax structure.
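The physically-based warping step mentioned in the abstract can be illustrated with a minimal sketch: given one sub-aperture view and an estimated per-pixel disparity map, a novel view at an angular offset is synthesized by shifting each pixel in proportion to its disparity. The function name, nearest-neighbour sampling, and boundary clamping below are our simplifying assumptions, not the paper's actual implementation (which would typically use differentiable bilinear sampling inside the network).

```python
import numpy as np

def warp_view(src, disparity, du, dv):
    """Backward-warp a source sub-aperture view (h, w) to a novel
    angular position offset by (du, dv) view units, using a per-pixel
    disparity map. A scene point at disparity d shifts by d pixels per
    unit of angular offset. Nearest-neighbour sampling; out-of-bounds
    coordinates are clamped to the image border. Hypothetical sketch,
    not the paper's module."""
    h, w = disparity.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # For each target pixel, look up the source pixel it came from.
    src_x = np.clip(np.rint(xs + du * disparity).astype(int), 0, w - 1)
    src_y = np.clip(np.rint(ys + dv * disparity).astype(int), 0, h - 1)
    return src[src_y, src_x]
```

With a constant disparity of 1 and a horizontal angular offset of 2, the warped view is the source shifted left by 2 pixels, which matches the expected parallax of a fronto-parallel plane. In the full pipeline, several input views would be warped this way and then fused by the learned blending module.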