Paper Title
Hand Avatar: Free-Pose Hand Animation and Rendering from Monocular Video
Paper Authors
Paper Abstract
We present HandAvatar, a novel representation for hand animation and rendering that produces smoothly compositional geometry and self-occlusion-aware texture. Specifically, we first develop MANO-HD, a high-resolution mesh topology, to fit personalized hand shapes. We then decompose the hand geometry into per-bone rigid parts and re-compose paired geometry encodings to derive an occupancy field that is consistent across parts. For texture modeling, we propose a self-occlusion-aware shading field (SelF). In SelF, drivable anchors are paved on the MANO-HD surface to record albedo information under a wide variety of hand poses. Moreover, a directed soft occupancy is designed to describe the ray-to-surface relation, which is leveraged to generate an illumination field that disentangles pose-independent albedo from pose-dependent illumination. Trained on monocular video data, our HandAvatar can perform free-pose hand animation and rendering while achieving superior appearance fidelity. We also demonstrate that HandAvatar provides a route to hand appearance editing. Project website: https://seanchenxy.github.io/HandAvatarWeb.
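The core idea of the shading decomposition can be sketched as follows. This is a minimal illustration, not the paper's implementation: all function and parameter names here (`soft_occupancy`, `shade`, `sharpness`, `base_light`) are assumptions. It shows how a soft ray-to-surface occupancy can attenuate a pose-dependent illumination term that multiplies a pose-independent albedo:

```python
import numpy as np

def soft_occupancy(distances, sharpness=10.0):
    """Map signed ray-to-surface distances to soft occupancy in [0, 1].

    Hypothetical sigmoid falloff: points behind the surface (negative
    distance) approach occupancy 1, points far in front approach 0.
    """
    return 1.0 / (1.0 + np.exp(sharpness * distances))

def shade(albedo, ray_distances, base_light=1.0):
    """Combine pose-independent albedo with occlusion-aware illumination.

    albedo: RGB albedo of the surface point (pose-independent).
    ray_distances: signed distances from samples along the light/view ray
        to the nearest hand surface (pose-dependent).
    """
    occ = soft_occupancy(ray_distances)        # per-sample soft occupancy
    transmittance = np.prod(1.0 - occ)         # light fraction reaching the point
    illumination = base_light * transmittance  # pose-dependent shading term
    return albedo * illumination               # disentangled product

# Example: a skin-toned albedo partially shadowed by a nearby finger
color = shade(np.array([0.9, 0.6, 0.5]), ray_distances=np.array([0.5, -0.1]))
```

With no nearby geometry (large positive distances), transmittance approaches 1 and the rendered color recovers the bare albedo, which is exactly the separation that makes albedo reusable across poses.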