Paper Title

Compositional Visual Generation with Composable Diffusion Models

Paper Authors

Nan Liu, Shuang Li, Yilun Du, Antonio Torralba, Joshua B. Tenenbaum

Paper Abstract

Large text-guided diffusion models, such as DALLE-2, are able to generate stunning photorealistic images given natural language descriptions. While such models are highly flexible, they struggle to understand the composition of certain concepts, such as confusing the attributes of different objects or relations between objects. In this paper, we propose an alternative structured approach for compositional generation using diffusion models. An image is generated by composing a set of diffusion models, with each of them modeling a certain component of the image. To do this, we interpret diffusion models as energy-based models in which the data distributions defined by the energy functions may be explicitly combined. The proposed method can generate scenes at test time that are substantially more complex than those seen in training, composing sentence descriptions, object relations, human facial attributes, and even generalizing to new combinations that are rarely seen in the real world. We further illustrate how our approach may be used to compose pre-trained text-guided diffusion models and generate photorealistic images containing all the details described in the input descriptions, including the binding of certain object attributes that have been shown difficult for DALLE-2. These results point to the effectiveness of the proposed method in promoting structured generalization for visual generation. Project page: https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/
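
As a concrete illustration of the composition described in the abstract, below is a minimal sketch of conjunction-style ("AND") sampling: each concept's noise prediction is treated as the score of an energy-based model, and the composed prediction is the unconditional prediction plus a weighted sum of each concept's deviation from it. The names `model`, `scheduler`, and their call signatures are placeholders for a generic epsilon-prediction diffusion model and its noise scheduler, not a specific library API.

```python
import torch


def composed_noise(eps_uncond, eps_conds, weights):
    """Combine per-concept noise predictions (conjunction / AND).

    Interpreting each diffusion model as an energy-based model, the
    composed prediction is the unconditional prediction plus weighted
    deviations contributed by each concept c_i:

        eps = eps_uncond + sum_i w_i * (eps_cond_i - eps_uncond)

    eps_uncond : noise predicted with no conditioning
    eps_conds  : list of noise predictions, one per concept c_i
    weights    : guidance weight w_i for each concept
    """
    eps = eps_uncond.clone()
    for w, eps_c in zip(weights, eps_conds):
        eps = eps + w * (eps_c - eps_uncond)
    return eps


# Hypothetical sampling loop: `model(x, t, cond=...)` and
# `scheduler.step(...)` are assumed interfaces for illustration only.
@torch.no_grad()
def sample(model, scheduler, concepts, weights, shape):
    x = torch.randn(shape)
    for t in scheduler.timesteps:
        eps_uncond = model(x, t, cond=None)              # unconditional
        eps_conds = [model(x, t, cond=c) for c in concepts]
        eps = composed_noise(eps_uncond, eps_conds, weights)
        x = scheduler.step(eps, t, x).prev_sample        # denoise one step
    return x
```

Because each concept contributes its own noise term, a prompt such as "a red car AND a snowy mountain" can be split into two conditions with separate weights, which is how the method composes descriptions more complex than any single training caption.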
