Paper Title
PIP: Positional-encoding Image Prior
Paper Authors
Paper Abstract
In Deep Image Prior (DIP), a Convolutional Neural Network (CNN) is fitted to map a latent space to a degraded (e.g., noisy) image, but in the process it learns to reconstruct the clean image. This phenomenon is attributed to the CNN's internal image prior. We revisit the DIP framework, examining it from the perspective of a neural implicit representation. Motivated by this perspective, we replace the random or learned latent with Fourier features (positional encoding). We show that, thanks to the properties of Fourier features, we can replace the convolution layers with simple pixel-level MLPs. We name this scheme "Positional Encoding Image Prior" (PIP) and show that it performs very similarly to DIP on various image-reconstruction tasks while requiring far fewer parameters. Additionally, we demonstrate that PIP can be easily extended to videos, where 3D-DIP struggles and suffers from instability. Code and additional examples for all tasks, including videos, are available on the project page https://nimrodshabtay.github.io/PIP/
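The abstract describes fitting a pixel-wise MLP, fed with a fixed Fourier-feature (positional-encoding) grid instead of a random latent, directly to the degraded image in DIP fashion. The following is a minimal PyTorch sketch of that idea only; the number of encoding frequencies, the MLP width and depth, the learning rate, and the iteration count are illustrative assumptions, not the authors' configuration.

```python
import math
import torch
import torch.nn as nn

def fourier_features(h, w, num_freqs=8):
    """Fourier-feature positional encoding of an (h, w) pixel-coordinate grid."""
    ys = torch.linspace(-1.0, 1.0, h)
    xs = torch.linspace(-1.0, 1.0, w)
    grid = torch.stack(torch.meshgrid(ys, xs, indexing="ij"), dim=-1)   # (h, w, 2)
    freqs = (2.0 ** torch.arange(num_freqs)) * math.pi                  # (num_freqs,)
    angles = grid[..., None] * freqs                                    # (h, w, 2, num_freqs)
    feats = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)   # (h, w, 2, 2*num_freqs)
    return feats.reshape(h, w, -1)                                      # (h, w, 4*num_freqs)

class PixelMLP(nn.Module):
    """Pixel-level MLP: each pixel's encoding is mapped to an RGB value independently."""
    def __init__(self, in_dim, hidden=256, out_dim=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

# DIP-style fitting: optimize the MLP to reproduce the degraded observation;
# stopping the optimization early keeps the reconstruction close to the clean image.
h, w = 128, 128
noisy = torch.rand(h, w, 3)                 # stand-in for a noisy image in [0, 1]
enc = fourier_features(h, w)                # fixed positional-encoding input
model = PixelMLP(enc.shape[-1])
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):                    # iteration count chosen for illustration
    opt.zero_grad()
    pred = model(enc.reshape(-1, enc.shape[-1])).reshape(h, w, 3)
    loss = ((pred - noisy) ** 2).mean()
    loss.backward()
    opt.step()
```

Because the encoding is computed per pixel and the MLP has no spatial receptive field, the implicit bias comes from the chosen Fourier frequencies rather than from convolutions, which is the substitution the abstract highlights.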