Paper Title
System Safety Engineering for Social and Ethical ML Risks: A Case Study
Paper Authors
Paper Abstract
Governments, industry, and academia have undertaken efforts to identify and mitigate harms in ML-driven systems, with a particular focus on the social and ethical risks of ML components in complex sociotechnical systems. However, existing approaches are largely disjointed, ad hoc, and of unknown effectiveness. Systems safety engineering is a well-established discipline with a track record of identifying and managing risks in many complex sociotechnical domains. We adopt the natural hypothesis that tools from this domain can enhance risk analyses of ML in its context of use. To test this hypothesis, we apply a "best of breed" systems safety analysis, Systems Theoretic Process Analysis (STPA), to a specific high-consequence system with an important ML-driven component, namely the Prescription Drug Monitoring Programs (PDMPs) operated by many US states, several of which rely on an ML-derived risk score. We focus in particular on how this analysis can extend to identifying social and ethical risks and to developing concrete design-level controls to mitigate them.