Paper Title

Deep Spatial-angular Regularization for Compressive Light Field Reconstruction over Coded Apertures

Authors

Mantang Guo, Junhui Hou, Jing Jin, Jie Chen, Lap-Pui Chau

Abstract

Coded aperture is a promising approach for capturing the 4-D light field (LF), in which the 4-D data are compressively modulated into 2-D coded measurements that are further decoded by reconstruction algorithms. The bottleneck lies in the reconstruction algorithms, resulting in rather limited reconstruction quality. To tackle this challenge, we propose a novel learning-based framework for the reconstruction of high-quality LFs from acquisitions via learned coded apertures. The proposed method incorporates the measurement observation into the deep learning framework elegantly to avoid relying entirely on data-driven priors for LF reconstruction. Specifically, we first formulate the compressive LF reconstruction as an inverse problem with an implicit regularization term. Then, we construct the regularization term with an efficient deep spatial-angular convolutional sub-network to comprehensively explore the signal distribution free from the limited representation ability and inefficiency of deterministic mathematical modeling. Experimental results show that the reconstructed LFs not only achieve much higher PSNR/SSIM but also preserve the LF parallax structure better, compared with state-of-the-art methods on both real and synthetic LF benchmarks. In addition, experiments show that our method is efficient and robust to noise, which is an essential advantage for a real camera system. The code is publicly available at \url{https://github.com/angmt2008/LFCA}.
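To make the measurement model in the abstract concrete, below is a minimal sketch of how a coded aperture compressively modulates a 4-D light field into a 2-D measurement: each angular view is weighted by the aperture's transmittance for that view, and the weighted views are summed on the sensor. All names, shapes, and the uniform-random aperture here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def coded_measurement(lf, aperture):
    """Simulate one coded-aperture acquisition.

    lf       : (N, H, W) array — N angular views of the light field
    aperture : (N,) array — per-view transmittance of one coded aperture

    Returns a single (H, W) coded 2-D measurement: the transmittance-weighted
    sum of all angular views, as formed optically on the sensor.
    """
    return np.tensordot(aperture, lf, axes=1)

rng = np.random.default_rng(0)
lf = rng.random((9, 4, 4))   # toy example: 3x3 angular views, 4x4 pixels each
aperture = rng.random(9)     # stand-in for one learned aperture pattern
y = coded_measurement(lf, aperture)
print(y.shape)               # (4, 4): 4-D data compressed to a 2-D measurement
```

Reconstruction then amounts to inverting this many-to-one mapping, which is why the paper poses it as an inverse problem with a learned regularization term rather than solving it directly.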
