Title

Assurance Cases as Foundation Stone for Auditing AI-enabled and Autonomous Systems: Workshop Results and Political Recommendations for Action from the ExamAI Project

Authors

Rasmus Adler, Michael Klaes

Abstract

The European Machinery Directive and related harmonized standards do consider that software is used to generate safety-relevant behavior of the machinery, but they do not consider all kinds of software. In particular, software based on machine learning (ML) is not considered for the realization of safety-relevant behavior. This limits the introduction of suitable safety concepts for autonomous mobile robots and other autonomous machinery, which commonly depend on ML-based functions. We investigated this issue and the way safety standards define safety measures to be implemented against software faults. Functional safety standards use Safety Integrity Levels (SILs) to define which safety measures shall be implemented. They provide rules for determining the SIL and rules for selecting safety measures depending on the SIL. In this paper, we argue that this approach can hardly be adopted with respect to ML and other kinds of Artificial Intelligence (AI). Instead of simple rules for determining an SIL and applying related measures against faults, we propose the use of assurance cases to argue that the individually selected and applied measures are sufficient in the given case. To get a first rating regarding the feasibility and usefulness of our proposal, we presented and discussed it in a workshop with experts from industry, German statutory accident insurance companies, work safety and standardization commissions, and representatives from various national, European, and international working groups dealing with safety and AI. In this paper, we summarize the proposal and the workshop discussion. Moreover, we check to which extent our proposal is in line with the European AI Act proposal and current safety standardization initiatives addressing AI and Autonomous Systems.
