Paper Title

Evaluation Metrics for Symbolic Knowledge Extracted from Machine Learning Black Boxes: A Discussion Paper

Authors

Federico Sabbatini, Roberta Calegari

Abstract

As opaque decision systems are increasingly adopted in almost every application field, their lack of transparency and human readability is a concrete concern for end-users. Amongst existing proposals to associate human-interpretable knowledge with the accurate predictions provided by opaque models are rule extraction techniques, which are capable of extracting symbolic knowledge from an opaque model. However, how to quantitatively assess the readability of the extracted knowledge is still an open issue. Finding such a metric would be key, for instance, to enabling automatic comparison between a set of different knowledge representations, paving the way for the development of parameter autotuning algorithms for knowledge extractors. In this paper, we discuss the need for such a metric as well as the criticalities of readability assessment and evaluation, taking into account the most common knowledge representations while highlighting the most puzzling issues.
