Paper Title
Fisher Auto-Encoders
Paper Authors
Paper Abstract
It has been conjectured that the Fisher divergence is more robust to model uncertainty than the conventional Kullback-Leibler (KL) divergence. This motivates the design of a new class of robust generative auto-encoders (AEs) referred to as Fisher auto-encoders. Our approach is to design Fisher AEs by minimizing the Fisher divergence between the intractable joint distribution of observed data and latent variables and the postulated/modeled joint distribution. In contrast to KL-based variational AEs (VAEs), the Fisher AE can exactly quantify the distance between the true and the model-based posterior distributions. Qualitative and quantitative results are provided on both the MNIST and CelebA datasets, demonstrating the competitive performance of Fisher AEs in terms of robustness compared to other AEs such as VAEs and Wasserstein AEs.
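For reference, the Fisher divergence between two densities $p$ and $q$ is commonly defined as the expected squared distance between their score functions (the gradients of the log-densities). This is the standard textbook definition, not necessarily the exact objective used in the paper:

```latex
F(p \,\|\, q) \;=\; \mathbb{E}_{x \sim p}\!\left[ \big\| \nabla_x \log p(x) - \nabla_x \log q(x) \big\|_2^2 \right]
```

In the auto-encoder setting described in the abstract, $p$ and $q$ would presumably play the roles of the true and postulated/modeled joint distributions over observed data and latent variables. Because the score $\nabla_x \log q(x)$ does not depend on the normalizing constant of $q$, this divergence can be evaluated for unnormalized models, which is one reason it is attractive when the true joint distribution is intractable.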