Paper Title
V3GAN: Decomposing Background, Foreground and Motion for Video Generation
Paper Authors
Paper Abstract
Video generation is a challenging task that requires modeling plausible spatial and temporal dynamics in a video. Inspired by how humans perceive a video by grouping a scene into moving and stationary components, we propose a method that decomposes the task of video generation into the synthesis of foreground, background and motion. Foreground and background together describe the appearance, whereas motion specifies how the foreground moves in a video over time. We propose V3GAN, a novel three-branch generative adversarial network where two branches model foreground and background information, while the third branch models the temporal information without any supervision. The foreground branch is augmented with our novel feature-level masking layer that aids in learning an accurate mask for foreground and background separation. To encourage motion consistency, we further propose a shuffling loss for the video discriminator. Extensive quantitative and qualitative analysis on synthetic as well as real-world benchmark datasets demonstrates that V3GAN outperforms the state-of-the-art methods by a significant margin.
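A minimal PyTorch sketch of the two ideas the abstract names: composing a video from foreground, background, and a learned soft mask, and a shuffling loss that asks the video discriminator to reject temporally shuffled clips. This is an illustrative reading, not the authors' implementation; all function and variable names here are hypothetical, and the actual V3GAN formulation may differ.

```python
import torch
import torch.nn.functional as F

def compose_video(fg, bg, mask):
    """Blend per-frame foreground and background with a soft mask.

    fg, bg: (B, T, C, H, W) foreground / background streams.
    mask:   (B, T, 1, H, W) values in [0, 1], broadcast over channels.
    """
    # Foreground where the mask is ~1, background where it is ~0.
    return mask * fg + (1.0 - mask) * bg

def shuffling_loss(discriminator, video):
    """One plausible shuffling loss: a clip whose frame order is
    randomly permuted should be scored as fake by the video
    discriminator, which pushes it to attend to temporal coherence."""
    b, t = video.shape[:2]
    perm = torch.randperm(t, device=video.device)
    shuffled = video[:, perm]            # break the temporal order
    logits = discriminator(shuffled)     # (B,) real/fake logits
    # Binary cross-entropy toward the "fake" label (zeros).
    return F.binary_cross_entropy_with_logits(
        logits, torch.zeros_like(logits))
```

The soft mask makes the foreground/background split differentiable, so both branches and the mask can be trained jointly from the adversarial signal alone, consistent with the abstract's claim of separation without supervision.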