Paper Title
Dislocated Accountabilities in the AI Supply Chain: Modularity and Developers' Notions of Responsibility
Paper Authors
Paper Abstract
Responsible artificial intelligence guidelines ask engineers to consider how their systems might harm. However, contemporary artificial intelligence systems are built by composing many preexisting software modules that pass through many hands before becoming a finished product or service. How does this shape responsible artificial intelligence practice? In interviews with 27 artificial intelligence engineers across industry, open source, and academia, our participants often did not see the questions posed in responsible artificial intelligence guidelines to be within their agency, capability, or responsibility to address. We use Suchman's "located accountability" to show how responsible artificial intelligence labor is currently organized and to explore how it could be done differently. We identify cross-cutting social logics, like modularizability, scale, reputation, and customer orientation, that organize which responsible artificial intelligence actions do take place and which are relegated to low-status staff or believed to be the work of the next or previous person in the imagined "supply chain." We argue that current responsible artificial intelligence interventions, like ethics checklists and guidelines that assume panoptical knowledge and control over systems, could be improved by taking a located accountability approach, recognizing where relations and obligations might intertwine inside and outside of this supply chain.