Paper Title

NeRFEditor: Differentiable Style Decomposition for Full 3D Scene Editing

Paper Authors

Chunyi Sun, Yanbin Liu, Junlin Han, Stephen Gould

Paper Abstract

We present NeRFEditor, an efficient learning framework for 3D scene editing, which takes a video captured over 360° as input and outputs a high-quality, identity-preserving stylized 3D scene. Our method supports diverse types of editing, such as editing guided by reference images, text prompts, and user interactions. We achieve this by encouraging a pre-trained StyleGAN model and a NeRF model to learn from each other. Specifically, we use the NeRF model to generate numerous image-angle pairs to train an adjustor, which adjusts the StyleGAN latent code to generate high-fidelity stylized images for any given angle. To extrapolate the editing to GAN out-of-domain views, we devise another module that is trained in a self-supervised manner. This module maps novel-view images to the hidden space of StyleGAN, which allows StyleGAN to generate stylized images on novel views. Together, these two modules produce guided images across 360° of views, which are used to fine-tune the NeRF and realize the stylization effect; a stable fine-tuning strategy is proposed to achieve this. Experiments show that NeRFEditor outperforms prior work on benchmark and real-world scenes with better editability, fidelity, and identity preservation.
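
The abstract describes a two-module pipeline: an adjustor that shifts a StyleGAN latent code conditioned on the viewing angle, and a self-supervised mapper that projects GAN out-of-domain novel views into StyleGAN's hidden space. Below is a minimal PyTorch-style sketch of that structure only; the module names (Adjustor, NovelViewMapper), dimensions, and architectures are illustrative assumptions, not the paper's actual networks or training losses.

```python
# Illustrative sketch of the two modules described in the abstract.
# All names, dimensions, and architectures here are assumptions.
import torch
import torch.nn as nn

LATENT_DIM = 512   # assumed StyleGAN latent (w-space) dimensionality
ANGLE_DIM = 2      # assumed camera-angle encoding, e.g. (azimuth, elevation)

class Adjustor(nn.Module):
    """Adjusts a StyleGAN latent code conditioned on the viewing angle.
    Trained on NeRF-rendered (image, angle) pairs (hypothetical design)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + ANGLE_DIM, 512), nn.ReLU(),
            nn.Linear(512, LATENT_DIM),
        )

    def forward(self, w, angle):
        # Predict a residual shift so identity is preserved when the shift is small.
        return w + self.net(torch.cat([w, angle], dim=-1))

class NovelViewMapper(nn.Module):
    """Maps a GAN out-of-domain novel-view image into StyleGAN's hidden space.
    Placeholder encoder standing in for the paper's self-supervised module."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, LATENT_DIM),
        )

    def forward(self, image):
        return self.encoder(image)

if __name__ == "__main__":
    adjustor, mapper = Adjustor(), NovelViewMapper()
    w = torch.randn(4, LATENT_DIM)            # latents from a GAN-inversion step
    angle = torch.randn(4, ANGLE_DIM)         # camera angles of NeRF renders
    novel_view = torch.randn(4, 3, 256, 256)  # a render outside the GAN's domain
    w_edit = adjustor(w, angle)               # angle-conditioned stylized latent
    w_novel = mapper(novel_view)              # latent for an out-of-domain view
    print(w_edit.shape, w_novel.shape)        # both: torch.Size([4, 512])
```

In the full method, latents from both modules would be decoded by the pre-trained StyleGAN generator, and the resulting stylized images across 360° of views would then guide the stable fine-tuning of the NeRF described in the abstract.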
