Paper Title

Measure Utility, Gain Trust: Practical Advice for XAI Researchers

Paper Authors

Brittany Davis, Maria Glenski, William Sealy, Dustin Arendt

Paper Abstract

Research into the explanation of machine learning models, i.e., explainable AI (XAI), has seen a commensurate exponential growth alongside deep artificial neural networks throughout the past decade. For historical reasons, explanation and trust have been intertwined. However, the focus on trust is too narrow, and has led the research community astray from tried and true empirical methods that produced more defensible scientific knowledge about people and explanations. To address this, we contribute a practical path forward for researchers in the XAI field. We recommend researchers focus on the utility of machine learning explanations instead of trust. We outline five broad use cases where explanations are useful and, for each, we describe pseudo-experiments that rely on objective empirical measurements and falsifiable hypotheses. We believe that this experimental rigor is necessary to contribute to scientific knowledge in the field of XAI.
