Paper Title
TemporalUV: Capturing Loose Clothing with Temporally Coherent UV Coordinates
Paper Authors
Paper Abstract
We propose a novel approach to generate temporally coherent UV coordinates for loose clothing. Our method is not constrained by human body outlines and can capture loose garments and hair. We implement a differentiable pipeline to learn UV mapping between a sequence of RGB inputs and textures via UV coordinates. Instead of treating the UV coordinates of each frame separately, our data generation approach connects all UV coordinates via feature matching for temporal stability. Subsequently, a generative model is trained to balance spatial quality and temporal stability. It is driven by supervised and unsupervised losses in both UV and image space. Our experiments show that the trained model outputs high-quality UV coordinates and generalizes to new poses. Once a sequence of UV coordinates has been inferred by our model, it can be used to flexibly synthesize new looks and modified visual styles. Compared to existing methods, our approach reduces the computational workload for animating new outfits by several orders of magnitude.
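The core operation the abstract refers to, rendering an image by looking up a texture at predicted per-pixel UV coordinates, can be sketched with bilinear texture sampling. This is a minimal NumPy illustration of the general technique, not the paper's implementation; the function name and array layout are assumptions. Because the bilinear blend is a smooth function of the UV values, the same lookup is differentiable with respect to the UV coordinates when expressed in an autodiff framework, which is what makes an end-to-end pipeline from RGB input to texture trainable.

```python
import numpy as np

def sample_texture(texture, uv):
    """Sample a texture at continuous UV coordinates via bilinear interpolation.

    texture: (H, W, C) array of texel values.
    uv: (..., 2) array with values in [0, 1]; uv[..., 0] is the horizontal
        (u) axis and uv[..., 1] the vertical (v) axis.
    Returns an array of shape (..., C) with the interpolated colors.
    """
    H, W, _ = texture.shape
    # Map normalized UVs to continuous pixel coordinates.
    x = uv[..., 0] * (W - 1)
    y = uv[..., 1] * (H - 1)
    # Integer corner indices, clipped so the +1 neighbor stays in bounds.
    x0 = np.clip(np.floor(x).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, H - 2)
    # Fractional offsets inside the texel cell (broadcast over channels).
    fx = (x - x0)[..., None]
    fy = (y - y0)[..., None]
    # Bilinear blend of the four neighboring texels; smooth in uv.
    top = texture[y0, x0] * (1 - fx) + texture[y0, x0 + 1] * fx
    bot = texture[y0 + 1, x0] * (1 - fx) + texture[y0 + 1, x0 + 1] * fx
    return top * (1 - fy) + bot * fy

# Usage: a 2x2 single-channel texture sampled at its center blends all texels.
tex = np.arange(4, dtype=float).reshape(2, 2, 1)  # texels 0, 1, 2, 3
center = sample_texture(tex, np.array([0.5, 0.5]))  # → [1.5]
```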