Paper Title
Learning sRGB-to-Raw-RGB De-rendering with Content-Aware Metadata
Authors
Abstract
Most camera images are rendered and saved in the standard RGB (sRGB) format by the camera's hardware. Due to the in-camera photo-finishing routines, nonlinear sRGB images are undesirable for computer vision tasks that assume a direct relationship between pixel values and scene radiance. For such applications, linear raw-RGB sensor images are preferred. Saving images in their raw-RGB format is still uncommon, however, due to the large storage requirement and the lack of support from many imaging applications. Several "raw reconstruction" methods have been proposed that utilize specialized metadata sampled from the raw-RGB image at capture time and embedded in the sRGB image. This metadata is used to parameterize a mapping function that de-renders the sRGB image back to its original raw-RGB format when needed. Existing raw reconstruction methods rely on simple sampling strategies and a global mapping to perform the de-rendering. This paper shows how to improve the de-rendering results by jointly learning the sampling and the reconstruction. Our experiments show that our learned sampling can adapt to the image content to produce better raw reconstructions than existing methods. We also describe an online fine-tuning strategy for the reconstruction network that further improves results.
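To make the metadata-based de-rendering pipeline described in the abstract concrete, the sketch below illustrates the *baseline* approach the paper improves upon: uniformly sample a small set of (sRGB, raw-RGB) pixel pairs at capture time as metadata, then later fit a global per-channel polynomial mapping from those samples and apply it to de-render the full sRGB image. This is a minimal illustration under assumed conventions (NumPy arrays of pixels in [0, 1], a cubic polynomial per channel); it is not the authors' learned sampling or reconstruction network, whose whole point is to replace this uniform sampling and global mapping with learned, content-aware counterparts.

```python
import numpy as np

def sample_metadata(srgb, raw, n=64, seed=0):
    # Capture-time step: uniformly sample n pixel pairs as metadata.
    # (Baseline strategy; the paper instead *learns* where to sample.)
    rng = np.random.default_rng(seed)
    idx = rng.choice(srgb.shape[0], size=n, replace=False)
    return srgb[idx], raw[idx]

def fit_global_mapping(srgb_samples, raw_samples, degree=3):
    # De-render-time step: fit one polynomial per color channel that maps
    # sampled sRGB values to their raw-RGB counterparts (a global mapping).
    coeffs = []
    for c in range(3):
        A = np.vander(srgb_samples[:, c], degree + 1)  # [x^3, x^2, x, 1]
        sol, *_ = np.linalg.lstsq(A, raw_samples[:, c], rcond=None)
        coeffs.append(sol)
    return coeffs

def de_render(srgb, coeffs):
    # Apply the fitted global mapping to every pixel to approximate raw-RGB.
    out = np.stack(
        [np.polyval(coeffs[c], srgb[:, c]) for c in range(3)], axis=1
    )
    return np.clip(out, 0.0, 1.0)
```

Because a single global polynomial cannot capture spatially varying render operations (e.g., local tone mapping), a fixed uniform sample can waste its budget on redundant pixels; this is the gap that content-aware, jointly learned sampling targets.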