Paper Title

Semantic Editing on Segmentation Map via Multi-Expansion Loss

Authors

Jianfeng He, Xuchao Zhang, Shuo Lei, Shuhui Wang, Qingming Huang, Chang-Tien Lu, Bei Xiao

Abstract

Semantic editing on segmentation maps has been proposed as an intermediate interface for image generation, because it provides flexible and strong assistance in various image generation tasks. This paper aims to improve the quality of edited segmentation maps conditioned on semantic inputs. Even though recent studies apply global and local adversarial losses extensively to generate images of higher quality, we find that they suffer from misalignment at the boundary of the mask area. To address this, we propose MExGAN for semantic editing on segmentation maps, which uses a novel Multi-Expansion (MEx) loss implemented by adversarial losses on MEx areas. Each MEx area has the mask area of the generation as the majority and the boundary of the original context as the minority. To boost the convenience and stability of the MEx loss, we further propose an Approximated MEx (A-MEx) loss. Moreover, in contrast to previous models, which build training data for semantic editing on segmentation maps from only part of the whole image and thereby degrade model performance, MExGAN uses the whole image to build the training data. Extensive experiments on semantic editing on segmentation maps and natural image inpainting show competitive results on four datasets.
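The abstract describes each MEx area as containing the generated mask area as the majority plus a thin ring of original context at its boundary, but gives no implementation details. A minimal sketch of one plausible construction, assuming the MEx areas are produced by repeatedly dilating the binary mask (the function names, `step` parameter, and 4-connected dilation are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def dilate(mask, iterations=1):
    """4-connected binary dilation implemented with array shifts."""
    out = mask.copy()
    for _ in range(iterations):
        up = np.zeros_like(out); up[:-1] = out[1:]
        down = np.zeros_like(out); down[1:] = out[:-1]
        left = np.zeros_like(out); left[:, :-1] = out[:, 1:]
        right = np.zeros_like(out); right[:, 1:] = out[:, :-1]
        out = out | up | down | left | right
    return out

def multi_expansion_areas(mask, num_expansions=3, step=1):
    """Hypothetical MEx areas: each is the mask area (majority) plus a
    progressively wider ring of surrounding original context (minority),
    obtained by growing the mask via dilation."""
    return [dilate(mask, iterations=k * step)
            for k in range(1, num_expansions + 1)]

# Toy example: a 9x9 map with a 3x3 masked square in the center.
mask = np.zeros((9, 9), dtype=bool)
mask[3:6, 3:6] = True
areas = multi_expansion_areas(mask, num_expansions=2, step=1)
```

Under this reading, an adversarial loss would be applied to the image content cropped by each `areas[k]`, so the discriminator always sees the generated region together with a band of real context around the mask boundary, which is what penalizes boundary misalignment.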
