Paper Title
Perturbations in the Wild: Leveraging Human-Written Text Perturbations for Realistic Adversarial Attack and Defense
Paper Authors
Abstract
We propose a novel algorithm, ANTHRO, that inductively extracts over 600K human-written text perturbations in the wild and leverages them for realistic adversarial attacks. Unlike existing character-based attacks, which often deductively hypothesize a set of manipulation strategies, our work is grounded in actual observations from real-world texts. We find that adversarial texts generated by ANTHRO achieve the best trade-off between (1) attack success rate, (2) semantic preservation of the original text, and (3) stealthiness, i.e., being indistinguishable from human writing and therefore harder to flag as suspicious. Specifically, our attacks achieved attack success rates of around 83% and 91% on BERT and RoBERTa, respectively. Moreover, ANTHRO outperformed the TextBugger baseline by 50% and 40% in semantic preservation and stealthiness, respectively, when evaluated by both lay and professional human workers. Via adversarial training, ANTHRO can further enhance a BERT classifier's performance in understanding different variations of human-written toxic texts when compared to the Perspective API.
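To illustrate the core idea of leveraging mined human-written perturbations (as opposed to deductively defined character edits), here is a minimal sketch. The dictionary entries and function names below are hypothetical illustrations, not the actual ANTHRO resource or API; the real system mines over 600K perturbations from real-world text.

```python
import random

# Hypothetical miniature "perturbation dictionary": maps a canonical token to
# human-written variants as might be observed in the wild. Entries here are
# invented for illustration only.
PERTURBATIONS = {
    "idiot": ["id1ot", "idi0t", "i.diot"],
    "stupid": ["stup1d", "st*pid", "stoopid"],
}

def perturb(text: str, seed: int = 0) -> str:
    """Replace each token that has known human-written variants with one
    of those variants; leave all other tokens unchanged."""
    rng = random.Random(seed)
    out = []
    for token in text.split():
        variants = PERTURBATIONS.get(token.lower())
        out.append(rng.choice(variants) if variants else token)
    return " ".join(out)
```

Because the substitutions come from variants humans actually wrote, the resulting adversarial text tends to stay readable and plausible, which is what the abstract refers to as stealthiness.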