Paper Title
MulayCap: Multi-layer Human Performance Capture Using A Monocular Video Camera
Paper Authors
Paper Abstract
We introduce MulayCap, a novel human performance capture method that uses a monocular video camera and requires no pre-scanning. The method uses "multi-layer" representations for geometry reconstruction and texture rendering, respectively. For geometry reconstruction, we decompose the clothed human into multiple geometry layers, namely a body mesh layer and a garment piece layer. The key technique behind this is a Garment-from-Video (GfV) method for optimizing the garment shape and reconstructing the dynamic cloth to fit the input video sequence, based on a cloth simulation model that is effectively solved with gradient descent. For texture rendering, we decompose each input image frame into a shading layer and an albedo layer, and propose a method for fusing a fixed albedo map and solving for detailed garment geometry using the shading layer. Compared with existing single-view human performance capture systems, our "multi-layer" approach bypasses the tedious and time-consuming scanning step needed to obtain a human-specific mesh template. Experimental results demonstrate that MulayCap produces realistic renderings of dynamically changing details that have not been achieved by any previous monocular video camera system. Benefiting from its fully semantic modeling, MulayCap can be applied to various important editing applications, such as cloth editing, re-targeting, relighting, and AR applications.
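The abstract describes fitting garment shape to a video by running a cloth simulation inside a gradient-descent loop. Below is a minimal, hypothetical sketch of what such a loop can look like. All names here (simulate_cloth, silhouette_loss, fit_garment, the toy linear "simulation") are illustrative assumptions for exposition, not the authors' actual GfV pipeline, which optimizes real garment sizing parameters against observed image evidence.

```python
# Toy sketch of gradient-descent garment fitting, in the spirit of the
# Garment-from-Video idea described in the abstract. The "simulation" and
# "loss" are stand-ins so the example runs end to end; they are assumptions,
# not the paper's method.
import numpy as np

def simulate_cloth(garment_params, body_pose):
    """Placeholder cloth simulation: maps garment parameters (e.g. sizing
    values) and a body pose to draped garment vertex positions. A toy
    linear model stands in for a real simulator here."""
    basis = np.ones((100, garment_params.size))
    return basis @ garment_params + 0.1 * body_pose.mean()

def silhouette_loss(vertices, observed_silhouette):
    """Toy image-fitting term: squared distance between the simulated
    garment's mean extent and an observed silhouette measurement."""
    return np.sum((vertices.mean() - observed_silhouette) ** 2)

def fit_garment(observed_silhouette, body_pose, lr=1e-2, steps=200):
    """Optimize garment parameters by gradient descent on the fitting loss,
    using finite differences since the toy simulator is a black box."""
    params = np.zeros(4)            # e.g. garment sizing parameters
    eps = 1e-4                      # finite-difference step
    for _ in range(steps):
        base = silhouette_loss(simulate_cloth(params, body_pose),
                               observed_silhouette)
        grad = np.zeros_like(params)
        for i in range(params.size):
            p = params.copy()
            p[i] += eps             # perturb one parameter at a time
            grad[i] = (silhouette_loss(simulate_cloth(p, body_pose),
                                       observed_silhouette) - base) / eps
        params -= lr * grad         # gradient-descent update
    return params

if __name__ == "__main__":
    fitted = fit_garment(observed_silhouette=2.5, body_pose=np.zeros(3))
    print("fitted garment parameters:", fitted)
```

In the actual system, the per-frame loss would compare the rendered, simulated garment against the input video (and, per the abstract, the shading layer would further refine fine geometric detail), but the structure, a differentiable or numerically differentiated simulate-render-compare loop driven by gradient descent, is the same.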