Paper Title
T-GD: Transferable GAN-generated Images Detection Framework
Paper Authors
Paper Abstract
Recent advancements in Generative Adversarial Networks (GANs) enable the generation of highly realistic images, raising concerns about their misuse for malicious purposes. Detecting these GAN-generated images (GAN-images) becomes increasingly challenging due to the significant reduction of underlying artifacts and specific patterns. The absence of such traces can hinder detection algorithms from identifying GAN-images and from transferring knowledge to identify other types of GAN-images as well. In this work, we present T-GD, a robust transferable framework for effective detection of GAN-images. T-GD is composed of a teacher and a student model that iteratively teach and evaluate each other to improve detection performance. First, we train the teacher model on the source dataset and use it as a starting point for learning the target dataset. To train the student model, we inject noise by mixing up the source and target datasets, while constraining the weight variation to preserve the starting point. Our approach is a self-training method, but it distinguishes itself from prior approaches by focusing on improving the transferability of GAN-image detection. T-GD achieves high performance on the source dataset by overcoming catastrophic forgetting, and it effectively detects state-of-the-art GAN-images using only a small volume of data and no metadata information.
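The two training ingredients the abstract names for the student model — noise injection by mixing up source and target samples, and a constraint on weight variation toward the teacher's starting point — can be sketched as follows. This is a minimal illustrative sketch in NumPy, not the paper's implementation: the function names (`mixup`, `anchored_loss`) and the L2 form of the weight constraint are assumptions for illustration.

```python
import numpy as np

def mixup(x_src, x_tgt, alpha=0.2, rng=None):
    """Hypothetical mixup-style noise injection: return a convex combination
    of a source-dataset sample and a target-dataset sample, weighted by a
    coefficient drawn from a Beta(alpha, alpha) distribution."""
    rng = rng if rng is not None else np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    return lam * x_src + (1 - lam) * x_tgt, lam

def anchored_loss(w_student, w_teacher, task_loss, lam_w=0.01):
    """Task loss plus an (assumed) L2 penalty that constrains how far the
    student weights drift from the teacher 'starting point', which is one
    way to mitigate catastrophic forgetting of the source dataset."""
    return task_loss + lam_w * float(np.sum((w_student - w_teacher) ** 2))
```

A mixed sample interpolates between the two domains, so the student sees inputs "between" source and target; the anchor term is zero when the student equals the teacher and grows as the student drifts away from it.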