Paper Title
Should Machine Learning Models Report to Us When They Are Clueless?
Paper Authors
Paper Abstract
The right to AI explainability has consolidated into a consensus in the research community and in policy-making. However, a key component of explainability has been missing: extrapolation, which describes the extent to which AI models can be clueless when they encounter unfamiliar samples (i.e., samples outside the convex hull of their training sets, as we will explain). We report that AI models frequently extrapolate outside the range of their familiar data, without notifying users and stakeholders. Knowing whether a model has extrapolated is a fundamental insight that should be included in explanations of AI models, in the interest of transparency and accountability. Instead of dwelling on the negatives, we offer ways to clear the roadblocks to promoting AI transparency. Our analysis and commentary are accompanied by practical clauses useful for inclusion in AI regulations such as the National AI Initiative Act in the US and the AI Act by the European Commission.
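The abstract defines extrapolation as a query sample falling outside the convex hull of the training set. As a minimal sketch of what such a check could look like (not the paper's own implementation), the snippet below tests hull membership by solving a small feasibility linear program with SciPy; the function name `in_convex_hull` and the synthetic data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog


def in_convex_hull(x, X_train):
    """Return True if query point x lies inside the convex hull of X_train.

    Feasibility LP: find lambda >= 0 with sum(lambda) == 1 and
    X_train.T @ lambda == x. If no such lambda exists, a model queried
    at x would be extrapolating beyond its training data.
    """
    n = X_train.shape[0]
    c = np.zeros(n)                                  # only feasibility matters
    A_eq = np.vstack([X_train.T, np.ones((1, n))])   # convex-combination constraints
    b_eq = np.concatenate([x, [1.0]])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.success


# Illustrative usage: flag extrapolation before trusting a prediction.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 5))    # hypothetical training features
query = rng.normal(size=5) * 3.0       # likely outside the training hull
if not in_convex_hull(query, X_train):
    print("Warning: the model is extrapolating for this sample.")
```

This kind of membership test is one straightforward way a model could report, per sample, whether its prediction is an interpolation or an extrapolation.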