Paper Title

Unifying conditional and unconditional semantic image synthesis with OCO-GAN

Authors

Marlène Careil, Stéphane Lathuilière, Camille Couprie, Jakob Verbeek

Abstract

Generative image models have been extensively studied in recent years. In the unconditional setting, they model the marginal distribution of unlabelled images. To allow for more control, image synthesis can be conditioned on semantic segmentation maps that indicate to the generator where objects should appear in the image. While these two tasks are intimately related, they are generally studied in isolation. We propose OCO-GAN, for Optionally COnditioned GAN, which addresses both tasks in a unified manner, with a shared image synthesis network that can be conditioned either on semantic maps or directly on latents. Trained adversarially in an end-to-end approach with a shared discriminator, we are able to leverage the synergy between both tasks. We experiment with the Cityscapes, COCO-Stuff, and ADE20K datasets in limited-data, semi-supervised, and full-data regimes and obtain excellent performance, improving in all settings over existing hybrid models that can generate both with and without conditioning. Moreover, our results are competitive with or better than state-of-the-art specialised unconditional and conditional models.
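To make the "optionally conditioned" idea concrete, the following is a minimal numpy sketch of a generator with one shared synthesis trunk fed by either of two input pathways: a latent code (unconditional) or a per-pixel semantic map (conditional). All layer sizes, class names, and the single-linear-layer "trunk" are illustrative assumptions, not the architecture from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class OCOGeneratorSketch:
    """Hypothetical sketch: two input pathways, one shared trunk.
    Dimensions and structure are illustrative, not from the paper."""

    def __init__(self, latent_dim=64, n_classes=8, map_hw=16, feat_dim=128):
        self.map_hw = map_hw
        # Unconditional pathway: project a latent code to trunk features.
        self.w_latent = rng.normal(0, 0.02, (latent_dim, feat_dim))
        # Conditional pathway: one feature embedding per semantic class.
        self.w_seg = rng.normal(0, 0.02, (n_classes, feat_dim))
        # Shared synthesis trunk: features -> RGB, used by both pathways.
        self.w_trunk = rng.normal(0, 0.02, (feat_dim, 3))

    def __call__(self, z=None, seg=None):
        h = w = self.map_hw
        if seg is not None:
            # Conditional branch: look up a class embedding per pixel.
            feats = self.w_seg[seg]                      # (h, w, feat_dim)
        else:
            # Unconditional branch: broadcast latent features spatially.
            feats = np.broadcast_to(z @ self.w_latent,
                                    (h, w, self.w_latent.shape[1]))
        # Both branches end in the same shared trunk.
        return np.tanh(feats @ self.w_trunk)             # (h, w, 3) in [-1, 1]

gen = OCOGeneratorSketch()
img_uncond = gen(z=rng.normal(size=64))                  # from latent only
img_cond = gen(seg=rng.integers(0, 8, size=(16, 16)))    # from semantic map
```

The point of the sketch is that the trunk weights are shared between both calls, which is what lets a single network (and, in the paper, a shared discriminator) exploit the synergy between the conditional and unconditional tasks.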
