Paper Title
Security and Machine Learning in the Real World
Paper Authors
Paper Abstract
Machine learning (ML) models deployed in many safety- and business-critical systems are vulnerable to exploitation through adversarial examples. A large body of academic research has thoroughly explored the causes of these blind spots, developed sophisticated algorithms for finding them, and proposed a few promising defenses. The vast majority of these works, however, study standalone neural network models. In this work, we build on our experience evaluating the security of a machine learning software product deployed at large scale to broaden the conversation to include a systems security view of these vulnerabilities. We describe novel challenges to implementing systems security best practices in software with ML components. In addition, we propose a list of short-term mitigations that practitioners deploying machine learning modules can use to secure their systems. Finally, we outline directions for new research into machine learning attacks and defenses that can advance the state of ML systems security.