Paper Title

Generative Model without Prior Distribution Matching

Paper Authors

Cong Geng, Jia Wang, Li Chen, Zhiyong Gao

Abstract

Variational Autoencoders (VAEs) and their variants are classic generative models that learn a low-dimensional latent representation constrained to match some prior distribution (e.g., a Gaussian). Their advantage over GANs is that they can simultaneously generate high-dimensional data and learn latent representations to reconstruct the inputs. However, it has been observed that a trade-off exists between reconstruction and generation, since matching the prior distribution may destroy the geometric structure of the data manifold. To mitigate this problem, we propose to let the prior match the embedding distribution rather than forcing the latent variables to fit the prior. The embedding distribution is trained using a simple regularized autoencoder architecture, which preserves the geometric structure as much as possible. An adversarial strategy is then employed to learn a latent mapping. We provide both theoretical and experimental support for the effectiveness of our method, which alleviates the conflict between preserving the topological properties of the data manifold and matching distributions in the latent space.
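The abstract describes a two-stage scheme: first train a regularized autoencoder with no prior imposed on the latents, then adversarially train a mapping that pushes prior samples onto the learned embedding distribution. Below is a minimal PyTorch sketch of that idea; the module shapes, the simple L2 latent regularizer, and all hyperparameters are illustrative assumptions, not the authors' reference implementation.

```python
# Sketch of: (1) regularized AE that preserves geometry, (2) latent GAN that
# maps the prior onto the embedding distribution. Assumed names throughout.
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 784

encoder = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim))
# Latent mapping G: transforms prior noise toward the learned embedding distribution.
generator = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
# Critic distinguishes real embeddings E(x) from mapped prior samples G(eps).
critic = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 1))

opt_ae = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(critic.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def ae_step(x, reg_weight=1e-3):
    """Stage 1: regularized autoencoder; no prior is imposed on z."""
    z = encoder(x)
    recon = decoder(z)
    # A plain L2 penalty on the embeddings stands in for the paper's
    # regularizer: it keeps z bounded without forcing a Gaussian shape.
    loss = nn.functional.mse_loss(recon, x) + reg_weight * z.pow(2).mean()
    opt_ae.zero_grad(); loss.backward(); opt_ae.step()
    return loss.item()

def latent_gan_step(x):
    """Stage 2: adversarially map prior noise onto the embedding distribution."""
    with torch.no_grad():
        z_real = encoder(x)                   # samples from the embedding distribution
    eps = torch.randn(x.size(0), latent_dim)  # samples from the prior
    # Train the critic on real vs. mapped latents.
    d_loss = (bce(critic(z_real), torch.ones(x.size(0), 1)) +
              bce(critic(generator(eps).detach()), torch.zeros(x.size(0), 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Train the latent generator to fool the critic.
    g_loss = bce(critic(generator(eps)), torch.ones(x.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Under this setup, generation amounts to `decoder(generator(eps))` for prior noise `eps`: the prior is adapted to the embeddings, so the decoder is never asked to cover latent regions that the autoencoder's geometry-preserving training did not produce.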
