Paper Title

Generative Poisoning Using Random Discriminators

Paper Authors

Dirren van Vlijmen, Alex Kolmus, Zhuoran Liu, Zhengyu Zhao, Martha Larson

Paper Abstract

We introduce ShortcutGen, a new data poisoning attack that generates sample-dependent, error-minimizing perturbations by learning a generator. The key novelty of ShortcutGen is the use of a randomly-initialized discriminator, which provides spurious shortcuts needed for generating poisons. Different from recent, iterative methods, our ShortcutGen can generate perturbations with only one forward pass in a label-free manner, and compared to the only existing generative method, DeepConfuse, our ShortcutGen is faster and simpler to train while remaining competitive. We also demonstrate that integrating a simple augmentation strategy can further boost the robustness of ShortcutGen against early stopping, and combining augmentation and non-augmentation leads to new state-of-the-art results in terms of final validation accuracy, especially in the challenging, transfer scenario. Lastly, we speculate, through uncovering its working mechanism, that learning a more general representation space could allow ShortcutGen to work for unseen data.
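
To make the structure described in the abstract concrete, here is a minimal, hypothetical PyTorch sketch of the two ingredients it names: a generator that produces a bounded, sample-dependent perturbation in a single forward pass, and a frozen, randomly-initialized discriminator that is never trained. The network architectures, the perturbation budget EPS, and the entropy-style surrogate loss below are illustrative assumptions, not the paper's actual objective or implementation.

```python
# Minimal sketch (assumptions, not the paper's exact method): a one-pass
# perturbation generator trained against a frozen, randomly-initialized
# discriminator, in a label-free manner.
import torch
import torch.nn as nn

EPS = 8 / 255  # assumed L_inf perturbation budget


class PerturbationGenerator(nn.Module):
    """Maps a clean image to a bounded perturbation in a single forward pass."""

    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x):
        # tanh keeps the perturbation within the L_inf budget.
        return EPS * torch.tanh(self.net(x))


# Randomly-initialized discriminator: frozen for the entire training run,
# it only supplies a fixed random feature space in which shortcuts are planted.
discriminator = nn.Sequential(
    nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 10),
)
for p in discriminator.parameters():
    p.requires_grad_(False)

generator = PerturbationGenerator()
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)


def train_step(x: torch.Tensor) -> float:
    """One label-free generator update against the frozen discriminator.

    The surrogate loss here (minimizing the entropy of the discriminator's
    output on the poisoned image, i.e. an error-minimizing-style objective)
    is an assumption made for illustration only.
    """
    delta = generator(x)
    logits = discriminator((x + delta).clamp(0, 1))
    probs = logits.softmax(dim=1)
    loss = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


# After training, poisoning a dataset needs only one generator pass per image:
# x_poisoned = (x + generator(x)).clamp(0, 1)
```

Because the discriminator stays fixed, only the lightweight generator is optimized, and poisoning at deployment time reduces to a single forward pass per image; this is the structural reason a generative approach can be faster and simpler to train than iterative per-sample optimization.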
