Paper Title

DRWR: A Differentiable Renderer without Rendering for Unsupervised 3D Structure Learning from Silhouette Images

Authors

Zhizhong Han, Chao Chen, Yu-Shen Liu, Matthias Zwicker

Abstract

Differentiable renderers have been used successfully for unsupervised 3D structure learning from 2D images because they can bridge the gap between 3D and 2D. To optimize 3D shape parameters, current renderers rely on pixel-wise losses between rendered images of 3D reconstructions and ground truth images from corresponding viewpoints. Hence they require interpolation of the recovered 3D structure at each pixel, visibility handling, and optionally evaluating a shading model. In contrast, here we propose a Differentiable Renderer Without Rendering (DRWR) that omits these steps. DRWR only relies on a simple but effective loss that evaluates how well the projections of reconstructed 3D point clouds cover the ground truth object silhouette. Specifically, DRWR employs a smooth silhouette loss to pull the projection of each individual 3D point inside the object silhouette, and a structure-aware repulsion loss to push each pair of projections that fall inside the silhouette far away from each other. Although we omit surface interpolation, visibility handling, and shading, our results demonstrate that DRWR achieves state-of-the-art accuracies under widely used benchmarks, outperforming previous methods both qualitatively and quantitatively. In addition, our training times are significantly lower due to the simplicity of DRWR.
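The abstract describes two loss terms: a smooth silhouette loss that pulls the 2D projection of each reconstructed 3D point inside the ground-truth silhouette, and a structure-aware repulsion loss that pushes pairs of projections inside the silhouette apart so the points spread out to cover it. The sketch below is a minimal, hypothetical PyTorch illustration of how such terms could be written. The function names, the use of a precomputed distance transform of the silhouette mask, and the nearest-neighbour form of the repulsion are assumptions made for illustration; they do not reproduce the paper's exact formulation.

```python
# Hypothetical sketch of the two loss terms described in the abstract (not the
# authors' implementation). Assumes point projections are already computed and
# that a distance transform of the binary silhouette mask is precomputed.
import torch
import torch.nn.functional as F


def silhouette_pull_loss(proj_xy, dist_map):
    """Pull each projected point toward the inside of the silhouette.

    proj_xy:  (N, 2) projected point coordinates, normalised to [-1, 1]
              (the grid_sample convention).
    dist_map: (1, 1, H, W) distance transform of the ground-truth silhouette,
              zero inside the object and increasing with distance outside.
    """
    grid = proj_xy.view(1, 1, -1, 2)                        # (1, 1, N, 2)
    d = F.grid_sample(dist_map, grid, align_corners=True)   # bilinear lookup per point
    return d.mean()                                          # zero once every projection is inside


def repulsion_loss(proj_xy, inside_mask, eps=1e-4):
    """Push apart pairs of projections that fall inside the silhouette,
    encouraging the points to spread out and cover the whole region.

    proj_xy:     (N, 2) projected point coordinates.
    inside_mask: (N,) boolean mask of points whose projection lies inside
                 the silhouette.
    """
    p = proj_xy[inside_mask]
    if p.shape[0] < 2:
        return proj_xy.sum() * 0.0                           # keep the graph differentiable
    d = torch.cdist(p, p)                                    # (M, M) pairwise distances
    d = d + torch.eye(d.shape[0], device=d.device) * 1e6     # mask out self-pairs
    nearest, _ = d.min(dim=1)                                # distance to nearest neighbour
    return (1.0 / (nearest + eps)).mean()                    # large when projections clump together
```

In a training loop, the two terms would typically be combined as a weighted sum, e.g. `loss = silhouette_pull_loss(...) + lam * repulsion_loss(...)`, with the balance weight `lam` chosen empirically; the paper defines the actual loss and its weighting.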
