Paper Title
Enhancing Quantum Adversarial Robustness by Randomized Encodings
Authors
Abstract
The interplay between quantum physics and machine learning gives rise to the emergent frontier of quantum machine learning, where advanced quantum learning models may outperform their classical counterparts in solving certain challenging problems. However, quantum learning systems are vulnerable to adversarial attacks: adding tiny, carefully crafted perturbations to legitimate input samples can cause misclassifications. To address this issue, we propose a general scheme to protect quantum learning systems from adversarial attacks by randomly encoding the legitimate data samples through unitary or quantum error correction encoders. In particular, we rigorously prove that both global and local random unitary encoders lead to exponentially vanishing gradients (i.e., barren plateaus) for any variational quantum circuit that aims to add adversarial perturbations, independent of the input data and the inner structures of the adversarial circuits and quantum classifiers. In addition, we prove a rigorous bound on the vulnerability of quantum classifiers under local unitary adversarial attacks. We show that random black-box quantum error correction encoders can protect quantum classifiers against local adversarial noise, and that their robustness increases as we concatenate error correction codes. To quantify the robustness enhancement, we adapt quantum differential privacy as a measure of the prediction stability of quantum classifiers. Our results establish versatile defense strategies for quantum classifiers against adversarial perturbations, providing valuable guidance for enhancing the reliability and security of both near-term and future quantum learning technologies.
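The barren-plateau mechanism claimed in the abstract can be illustrated with a minimal numerical sketch, which is not the paper's construction: when the input state is scrambled by a Haar-random unitary encoder, the gradient of an adversarial loss concentrates around zero, with variance shrinking roughly as 1/2^n in the number of qubits n. The helper names (`haar_state`, `gradient_sample`) and the single-parameter adversarial rotation are illustrative assumptions; the paper's proof covers arbitrary variational adversarial circuits.

```python
import numpy as np

def haar_state(dim, rng):
    """Sample a Haar-random pure state vector of the given dimension
    (equivalent to applying a Haar-random unitary encoder to a fixed input)."""
    v = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def op_on_qubit0(single, n):
    """Embed a single-qubit operator on qubit 0 of an n-qubit register."""
    return np.kron(single, np.eye(2 ** (n - 1)))

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def gradient_sample(n, rng):
    """Gradient of the adversarial loss at theta = 0 for one random encoding.

    Loss(theta) = <psi| e^{+i theta G} O e^{-i theta G} |psi>, so
    dLoss/dtheta at theta = 0 equals i <psi| [G, O] |psi>.
    Here G = X on qubit 0 generates the adversarial perturbation and
    O = Z on qubit 0 is the measured observable (illustrative choices).
    """
    psi = haar_state(2 ** n, rng)
    G = op_on_qubit0(X, n)
    O = op_on_qubit0(Z, n)
    comm = G @ O - O @ G
    return float(np.real(1j * (psi.conj() @ comm @ psi)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    for n in (2, 4, 6):
        grads = [gradient_sample(n, rng) for _ in range(2000)]
        # Gradient variance decays roughly as 1/2^n: a barren plateau in miniature.
        print(n, np.var(grads))
```

For a traceless observable the variance of such a Haar-averaged gradient scales as Tr(A^2)/(d(d+1)) with d = 2^n, so doubling the qubit count suppresses the adversary's gradient signal by orders of magnitude, in line with the abstract's claim.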