Paper Title

Private Post-GAN Boosting

Authors

Marcel Neunhoeffer, Zhiwei Steven Wu, Cynthia Dwork

Abstract

Differentially private GANs have proven to be a promising approach for generating realistic synthetic data without compromising the privacy of individuals. Due to the privacy-protective noise introduced in the training, the convergence of GANs becomes even more elusive, which often leads to poor utility in the output generator at the end of training. We propose Private post-GAN boosting (Private PGB), a differentially private method that combines samples produced by the sequence of generators obtained during GAN training to create a high-quality synthetic dataset. To that end, our method leverages the Private Multiplicative Weights method (Hardt and Rothblum, 2010) to reweight generated samples. We evaluate Private PGB on two dimensional toy data, MNIST images, US Census data and a standard machine learning prediction task. Our experiments show that Private PGB improves upon a standard private GAN approach across a collection of quality measures. We also provide a non-private variant of PGB that improves the data quality of standard GAN training.
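The core post-processing idea in the abstract — pooling samples from the sequence of generators saved during GAN training and reweighting them with a multiplicative-weights update against a family of statistical queries — can be illustrated with a minimal non-private sketch. This is an assumption-laden toy, not the paper's algorithm: the function name `mw_reweight`, the query representation (functions returning 0/1 indicator vectors), and the step size `eta` are all illustrative choices, and the private variant would additionally select the worst query via a differentially private mechanism rather than exactly.

```python
import numpy as np

def mw_reweight(synthetic, real, queries, rounds=30, eta=0.3):
    """Non-private multiplicative-weights reweighting sketch.

    synthetic: (n, d) array of samples pooled from all saved generators
    real:      (m, d) array of real data
    queries:   list of functions mapping an (k, d) array to a (k,) 0/1 vector
    Returns a weight vector (a distribution) over the synthetic samples.
    """
    n = synthetic.shape[0]
    w = np.full(n, 1.0 / n)  # start from the uniform distribution
    for _ in range(rounds):
        # gap between each query's answer on real data and on the
        # current weighted synthetic distribution
        gaps = np.array([q(real).mean() - w @ q(synthetic) for q in queries])
        i = int(np.abs(gaps).argmax())  # worst-answered query
        # upweight samples that move the answer toward the real value
        w *= np.exp(eta * np.sign(gaps[i]) * queries[i](synthetic))
        w /= w.sum()  # renormalize to a distribution
    return w
```

The reweighted distribution can then be sampled from (or used directly) to produce the boosted synthetic dataset; the key design point is that reweighting only post-processes already-generated samples, so in the private setting it spends privacy budget on query selection rather than on further generator training.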
