Paper Title


Learning Temporally and Semantically Consistent Unpaired Video-to-video Translation Through Pseudo-Supervision From Synthetic Optical Flow

Authors

Kaihong Wang, Kumar Akash, Teruhisa Misu

Abstract


Unpaired video-to-video translation aims to translate videos between a source and a target domain without the need for paired training data, making it more feasible for real applications. Unfortunately, the translated videos generally suffer from temporal and semantic inconsistency. To address this, many existing works adopt spatiotemporal consistency constraints incorporating temporal information based on motion estimation. However, the inaccuracies in the estimation of motion deteriorate the quality of the guidance towards spatiotemporal consistency, which leads to unstable translation. In this work, we propose a novel paradigm that regularizes the spatiotemporal consistency by synthesizing motions in input videos with the generated optical flow instead of estimating them. Therefore, the synthetic motion can be applied in the regularization paradigm to keep motions consistent across domains without the risk of errors in motion estimation. Thereafter, we utilize our unsupervised recycle and unsupervised spatial loss, guided by the pseudo-supervision provided by the synthetic optical flow, to accurately enforce spatiotemporal consistency in both domains. Experiments show that our method is versatile in various scenarios and achieves state-of-the-art performance in generating temporally and semantically consistent videos. Code is available at: https://github.com/wangkaihong/Unsup_Recycle_GAN/.
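The core idea in the abstract — synthesizing optical flow as pseudo-supervision rather than estimating it, and penalizing disagreement between translate-then-warp and warp-then-translate — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (see their repository for that): the constant translational flow, the nearest-neighbor warp, and all function names here are simplifying assumptions made only to show the structure of an unsupervised recycle-style consistency loss.

```python
import numpy as np

def synthesize_flow(h, w, max_disp=2.0, seed=0):
    # Assumption: a single random translational flow field stands in for the
    # paper's synthetic optical flow; since it is generated, not estimated,
    # it is exact by construction (no motion-estimation error).
    rng = np.random.default_rng(seed)
    return np.tile(rng.uniform(-max_disp, max_disp, size=2), (h, w, 1))

def warp(img, flow):
    # Backward-warp img by flow using nearest-neighbor sampling,
    # clamping source coordinates at the image border.
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    return img[src_y, src_x]

def unsupervised_recycle_loss(G, frame, flow):
    # The synthetic flow provides pseudo-supervision: translating the
    # warped frame should match warping the translated frame, i.e. the
    # generator G should commute with the (known) synthetic motion.
    return np.abs(G(warp(frame, flow)) - warp(G(frame), flow)).mean()
```

Because the flow is synthesized, both sides of the loss use the exact same motion, so the penalty isolates the generator's temporal inconsistency; a pixel-wise generator (e.g. a pure tone mapping) incurs zero loss, while one that hallucinates content under motion does not.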
