Paper Title

A Unified f-divergence Framework Generalizing VAE and GAN

Authors

Jaime Roquero Gimenez, James Zou

Abstract

Developing deep generative models that flexibly incorporate diverse measures of probability distance is an important area of research. Here we develop a unified mathematical framework for the f-divergence generative model, f-GM, which incorporates both VAE and f-GAN and enables tractable learning with general f-divergences. f-GM allows the experimenter to flexibly design the f-divergence function without changing the structure of the networks or the learning procedure. f-GM jointly models three components: a generator, an inference network, and a density estimator. It therefore simultaneously enables sampling, posterior inference of the latent variable, and evaluation of the likelihood of an arbitrary datum. f-GM belongs to the class of encoder-decoder GANs: our density estimator can be interpreted as playing the role of a discriminator between samples in the joint space of latent code and observed space. We prove that f-GM naturally simplifies to the standard VAE and to f-GAN as special cases, and we illustrate the connections between different encoder-decoder GAN architectures. f-GM is compatible with general network architectures and optimizers. We leverage it to experimentally explore the effects of different choices of f-divergence, e.g. on mode collapse and image sharpness.
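
For readers less familiar with the f-divergence family referenced above, the following is standard background rather than an excerpt from the paper. An f-divergence between distributions P and Q with densities p and q is defined, for a convex generator f satisfying f(1) = 0, as

D_f(P \,\|\, Q) \;=\; \int q(x)\, f\!\left(\frac{p(x)}{q(x)}\right)\, dx .

The choice f(t) = t \log t recovers the Kullback-Leibler divergence (the divergence underlying the VAE objective), f(t) = -\log t recovers the reverse KL divergence, and, up to a constant factor, f(t) = t \log t - (t+1)\log\frac{t+1}{2} recovers the Jensen-Shannon divergence that the original GAN objective minimizes at the optimal discriminator. Choosing among such generators is exactly the design freedom the abstract attributes to f-GM.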
