Paper Title

Controllable 3D Generative Adversarial Face Model via Disentangling Shape and Appearance

Paper Authors

Fariborz Taherkhani, Aashish Rai, Quankai Gao, Shaunak Srivastava, Xuanbai Chen, Fernando de la Torre, Steven Song, Aayush Prakash, Daeil Kim

Paper Abstract

3D face modeling has been an active area of research in computer vision and computer graphics, fueling applications ranging from facial expression transfer in virtual avatars to synthetic data generation. Existing 3D deep learning generative models (e.g., VAEs, GANs) allow generating compact face representations (both shape and texture) that can model non-linearities in the shape and appearance space (e.g., scatter effects, specularities, etc.). However, they lack the capability to control the generation of subtle expressions. This paper proposes a new 3D face generative model that can decouple identity and expression and provides granular control over expressions. In particular, we propose using a pair of supervised auto-encoder and generative adversarial networks to produce high-quality 3D faces, both in terms of appearance and shape. Experimental results on the generation of 3D faces learned with holistic expression labels, or Action Unit labels, show how we can decouple identity and expression, gaining fine-grained control over expressions while preserving identity.
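To make the disentanglement idea in the abstract concrete, below is a minimal, illustrative sketch (not the authors' code) of a supervised autoencoder over 3D face shape whose latent code splits into an identity part and an expression part, with the expression part supervised by expression (or Action Unit) labels. All module names, dimensions, and the loss composition here are hypothetical assumptions for illustration only.

```python
# A minimal sketch of identity/expression disentanglement in an autoencoder.
# Vertex count, latent sizes, class count, and architecture are assumptions.
import torch
import torch.nn as nn

N_VERTS = 5023          # assumed vertex count of the face mesh template
Z_ID, Z_EXP = 128, 32   # assumed identity / expression latent sizes
N_EXP = 7               # assumed number of holistic expression classes


class DisentangledFaceAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(N_VERTS * 3, 512), nn.ReLU(),
            nn.Linear(512, Z_ID + Z_EXP),
        )
        self.decoder = nn.Sequential(
            nn.Linear(Z_ID + Z_EXP, 512), nn.ReLU(),
            nn.Linear(512, N_VERTS * 3),
        )
        # Classifier head that pushes expression information into z_exp.
        self.exp_head = nn.Linear(Z_EXP, N_EXP)

    def forward(self, x):
        z = self.encoder(x)
        z_id, z_exp = z[:, :Z_ID], z[:, Z_ID:]
        recon = self.decoder(torch.cat([z_id, z_exp], dim=1))
        exp_logits = self.exp_head(z_exp)
        return recon, exp_logits, z_id, z_exp


model = DisentangledFaceAE()
x = torch.randn(4, N_VERTS * 3)       # a batch of flattened face meshes
y = torch.randint(0, N_EXP, (4,))     # their expression labels
recon, logits, z_id, z_exp = model(x)
# Reconstruction keeps identity; the supervised head disentangles expression.
loss = nn.functional.mse_loss(recon, x) + nn.functional.cross_entropy(logits, y)
loss.backward()
# Expression transfer at inference: decode z_id of one face with z_exp of another.
```

In the paper's full pipeline this autoencoder would be paired with a GAN so that decoded shape and appearance are additionally judged by a discriminator for realism; that adversarial component is omitted from this sketch.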
