Paper Title

Variational Laplace Autoencoders

Authors

Yookoon Park, Chris Dongjoo Kim, Gunhee Kim

Abstract

Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables. However, such amortized variational inference faces two challenges: (1) the limited posterior expressiveness of the fully-factorized Gaussian assumption and (2) the amortization error of the inference model. We present a novel approach that addresses both challenges. First, we focus on ReLU networks with Gaussian output and illustrate their connection to probabilistic PCA. Building on this observation, we derive an iterative algorithm that finds the mode of the posterior and apply a full-covariance Gaussian posterior approximation centered on the mode. Subsequently, we present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models. Based on the Laplace approximation of the latent variable posterior, VLAEs enhance the expressiveness of the posterior while reducing the amortization error. Empirical results on MNIST, Omniglot, Fashion-MNIST, SVHN and CIFAR10 show that the proposed approach significantly outperforms other recent amortized or iterative methods on ReLU networks.
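The key observation behind the iterative mode-finding step is that a ReLU decoder with Gaussian output is exactly linear within each activation region, so the posterior restricted to that region has the closed form of probabilistic PCA. The sketch below illustrates this idea on a hypothetical toy decoder (all sizes, weights, and the variable names `laplace_posterior`, `local_linearization` are illustrative assumptions, not the paper's implementation): linearize the decoder at the current estimate of z, solve the resulting pPCA posterior in closed form, and repeat.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy decoder: one hidden ReLU layer, Gaussian output
# p(x|z) = N(f(z), sigma^2 I), standard Gaussian prior p(z) = N(0, I).
D_x, D_z, D_h = 8, 2, 16
W1 = rng.normal(size=(D_h, D_z)); b1 = rng.normal(size=D_h)
W2 = rng.normal(size=(D_x, D_h)); b2 = rng.normal(size=D_x)
sigma2 = 0.1

def decode(z):
    h = np.maximum(W1 @ z + b1, 0.0)  # ReLU hidden layer
    return W2 @ h + b2

def local_linearization(z):
    # Within the ReLU activation region containing z, the decoder is
    # exactly affine: f(z') = W z' + b for every z' in that region.
    mask = (W1 @ z + b1 > 0).astype(float)
    W = W2 @ (mask[:, None] * W1)
    b = decode(z) - W @ z
    return W, b

def laplace_posterior(x, n_iters=10):
    # Iteratively refine the posterior mode: linearize at the current z,
    # then apply the closed-form probabilistic-PCA posterior update.
    z = np.zeros(D_z)
    for _ in range(n_iters):
        W, b = local_linearization(z)
        Sigma = np.linalg.inv(np.eye(D_z) + W.T @ W / sigma2)
        z = Sigma @ W.T @ (x - b) / sigma2
    # Full-covariance Gaussian approximation q(z|x) = N(z, Sigma),
    # centered on the (approximate) mode z.
    return z, Sigma

x = decode(rng.normal(size=D_z)) + 0.01 * rng.normal(size=D_x)
mu, Sigma = laplace_posterior(x)
```

Note that, unlike the diagonal Gaussian of a standard VAE encoder, `Sigma` here is a full covariance matrix obtained from the local curvature, which is what gives the Laplace approximation its extra posterior expressiveness.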
