Paper Title
Style-Based Global Appearance Flow for Virtual Try-On
Paper Authors
Abstract
Image-based virtual try-on aims to fit an in-shop garment onto a clothed person image. To achieve this, a key step is garment warping, which spatially aligns the target garment with the corresponding body parts in the person image. Prior methods typically adopt a local appearance flow estimation model. They are thus intrinsically susceptible to difficult body poses/occlusions and large misalignments between person and garment images (see Fig.~\ref{fig:fig1}). To overcome this limitation, a novel global appearance flow estimation model is proposed in this work. For the first time, a StyleGAN-based architecture is adopted for appearance flow estimation. This enables us to take advantage of a global style vector to encode a whole-image context to cope with the aforementioned challenges. To guide the StyleGAN flow generator to pay more attention to local garment deformation, a flow refinement module is introduced to add local context. Experimental results on a popular virtual try-on benchmark show that our method achieves new state-of-the-art performance. It is particularly effective in an `in-the-wild' application scenario where the reference image is full-body, resulting in a large misalignment with the garment image (Fig.~\ref{fig:fig1}, top). Code is available at: \url{https://github.com/SenHe/Flow-Style-VTON}.
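The abstract's core idea, a global style vector that modulates flow prediction, followed by a local refinement step and flow-based warping, can be sketched in PyTorch. This is a minimal illustration under assumed shapes and layer sizes; all class and function names (`ModulatedConv`, `GlobalFlowEstimator`, `warp`) are hypothetical and do not correspond to the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModulatedConv(nn.Module):
    """Conv whose input channels are rescaled by a style vector (StyleGAN-like modulation)."""
    def __init__(self, in_ch, out_ch, style_dim):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.affine = nn.Linear(style_dim, in_ch)

    def forward(self, x, style):
        scale = self.affine(style).unsqueeze(-1).unsqueeze(-1)  # (B, in_ch, 1, 1)
        return self.conv(x * (1 + scale))


class GlobalFlowEstimator(nn.Module):
    """Coarse flow from a global style vector + residual flow from local context."""
    def __init__(self, style_dim=64):
        super().__init__()
        # shared encoder for person and garment images (downsamples by 4)
        self.enc = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        # global style vector pooled from both images: whole-image context
        self.to_style = nn.Linear(64, style_dim)
        self.mod = ModulatedConv(32, 32, style_dim)
        self.coarse_head = nn.Conv2d(32, 2, 3, padding=1)  # coarse 2D flow
        # refinement module: adds local context to the coarse flow
        self.refine = nn.Sequential(
            nn.Conv2d(32 + 2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1))

    def forward(self, person, garment):
        fp, fg = self.enc(person), self.enc(garment)
        pooled = torch.cat([fp.mean(dim=(2, 3)), fg.mean(dim=(2, 3))], dim=1)
        style = self.to_style(pooled)                  # global style vector
        coarse = self.coarse_head(self.mod(fg, style))
        flow = coarse + self.refine(torch.cat([fg, coarse], dim=1))
        return flow                                    # (B, 2, H/4, W/4)


def warp(garment, flow):
    """Warp the garment with the predicted appearance flow via grid_sample."""
    b, _, h, w = flow.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    grid = base + flow.permute(0, 2, 3, 1)             # offset the identity grid
    small = F.interpolate(garment, size=(h, w), mode="bilinear", align_corners=True)
    return F.grid_sample(small, grid, align_corners=True)
```

Predicting the coarse flow from a pooled, whole-image style vector is what gives the model its robustness to large misalignments; the residual refinement branch then recovers local garment deformation that global pooling discards.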