Paper Title

Exploring and Exploiting Hubness Priors for High-Quality GAN Latent Sampling

Authors

Yuanbang Liang, Jing Wu, Yu-Kun Lai, Yipeng Qin

Abstract

Despite the extensive studies on Generative Adversarial Networks (GANs), how to reliably sample high-quality images from their latent spaces remains an under-explored topic. In this paper, we propose a novel GAN latent sampling method by exploring and exploiting the hubness priors of GAN latent distributions. Our key insight is that the high dimensionality of the GAN latent space will inevitably lead to the emergence of hub latents that usually have much larger sampling densities than other latents in the latent space. As a result, these hub latents are better trained and thus contribute more to the synthesis of high-quality images. Unlike a posteriori "cherry-picking", our method is highly efficient as it is an a priori method that identifies high-quality latents before the synthesis of images. Furthermore, we show that the well-known but purely empirical truncation trick is a naive approximation to the central clustering effect of hub latents, which not only uncovers the rationale of the truncation trick, but also indicates the superiority and fundamentality of our method. Extensive experimental results demonstrate the effectiveness of the proposed method.
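To make the hubness idea above concrete, below is a minimal, illustrative sketch, not the authors' implementation: it scores sampled latents by their k-occurrence (how often a latent appears among the k nearest neighbours of other latents), a standard hubness measure, and keeps the highest-scoring ones as "hub latents" before any image is synthesized. The neighbourhood size k, the top-20% cutoff, the 512-D Gaussian prior, and the generator call `G(...)` are all assumed placeholders.

```python
# Illustrative hubness-based latent filtering (a sketch, not the paper's exact method).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def k_occurrence(latents: np.ndarray, k: int = 10) -> np.ndarray:
    """Count how many times each latent appears among the k nearest
    neighbours of the other latents (its k-occurrence / hubness score)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(latents)
    _, idx = nn.kneighbors(latents)        # column 0 is each point itself
    counts = np.zeros(len(latents), dtype=int)
    for neighbours in idx[:, 1:]:          # skip the self-match
        counts[neighbours] += 1
    return counts

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    z = rng.standard_normal((10_000, 512))        # latent codes from the Gaussian prior (512-D assumed)
    scores = k_occurrence(z, k=10)                # hubness score per latent
    hub_idx = np.argsort(scores)[-len(z) // 5:]   # keep top 20% (arbitrary illustrative cutoff)
    hub_latents = z[hub_idx]
    # images = G(hub_latents)                     # hypothetical generator call on hub latents only
```

Because the filtering happens on the latent codes alone, it is an a priori selection: no images need to be generated and ranked afterwards, which is the efficiency argument made in the abstract.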
