Paper Title

DeepCloth: Neural Garment Representation for Shape and Style Editing

Paper Authors

Zhaoqi Su, Tao Yu, Yangang Wang, Yebin Liu

Paper Abstract

Garment representation, editing and animation are challenging topics in the area of computer vision and graphics. It remains difficult for existing garment representations to achieve smooth and plausible transitions between different shapes and topologies. In this work, we introduce DeepCloth, a unified framework for garment representation, reconstruction, animation and editing. Our unified framework contains three components: First, we represent the garment geometry with a "topology-aware UV-position map", which allows for the unified description of various garments with different shapes and topologies by introducing an additional topology-aware UV-mask for the UV-position map. Second, to further enable garment reconstruction and editing, we contribute a method to embed the UV-based representations into a continuous feature space, which enables garment shape reconstruction and editing by optimization and control in the latent space, respectively. Finally, we propose a garment animation method by unifying our neural garment representation with body shape and pose, which achieves plausible garment animation results leveraging the dynamic information encoded by our shape and style representation, even under drastic garment editing operations. To conclude, with DeepCloth, we move a step forward in establishing a more flexible and general 3D garment digitization framework. Experiments demonstrate that our method can achieve state-of-the-art garment representation performance compared with previous methods.
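The central data structure in the abstract is the "topology-aware UV-position map": a fixed-size UV-space position map paired with a binary UV-mask, so garments of different shapes and topologies share one tensor layout. The paper's code is not given here; the sketch below is a minimal illustrative example in Python/NumPy, where the class name `TopologyAwareUVMap`, the 256×256 resolution, and the helper methods are all my own assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch only: names, resolution, and fields are assumptions,
# not the authors' implementation.
UV_RES = 256  # hypothetical UV resolution


class TopologyAwareUVMap:
    """Garment geometry as a UV-position map plus a topology-aware UV-mask.

    position_map: (H, W, 3) float array; each valid texel stores a 3D surface point.
    mask:         (H, W) binary array; 1 where the garment surface exists in UV space.
    Garments with different topologies (e.g. sleeveless vs. long-sleeved tops)
    differ only in the mask, so they share one fixed-size representation that a
    CNN encoder/decoder can consume.
    """

    def __init__(self, position_map: np.ndarray, mask: np.ndarray):
        assert position_map.shape == (UV_RES, UV_RES, 3)
        assert mask.shape == (UV_RES, UV_RES)
        self.position_map = position_map.astype(np.float32)
        self.mask = mask.astype(bool)

    def to_point_cloud(self) -> np.ndarray:
        """Return the (N, 3) points of texels covered by the garment."""
        return self.position_map[self.mask]

    def masked_map(self) -> np.ndarray:
        """Zero out texels outside the garment, e.g. before feeding an encoder."""
        return self.position_map * self.mask[..., None]


# Toy usage: a flat square patch occupying the upper-left quarter of UV space.
uv = np.stack(np.meshgrid(np.linspace(0, 1, UV_RES),
                          np.linspace(0, 1, UV_RES), indexing="ij"), axis=-1)
positions = np.concatenate([uv, np.zeros((UV_RES, UV_RES, 1))], axis=-1)
mask = np.zeros((UV_RES, UV_RES))
mask[: UV_RES // 2, : UV_RES // 2] = 1
patch = TopologyAwareUVMap(positions, mask)
print(patch.to_point_cloud().shape)  # (16384, 3)
```

Under this reading, reconstruction and editing as described in the abstract amount to optimizing or manipulating a latent code whose decoded UV-position map and UV-mask match the target garment, rather than operating on a mesh with fixed connectivity.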
