Paper Title
Just in Time: Personal Temporal Insights for Altering Model Decisions
Paper Authors
Paper Abstract
The interpretability of complex Machine Learning models is becoming a critical social concern, as they are increasingly used in human-related decision-making processes such as resume filtering or loan applications. Individuals receiving an undesired classification are likely to call for an explanation -- preferably one that specifies what they should do in order to alter that decision when they reapply in the future. Existing work focuses on a single ML model and a single point in time, whereas in practice both models and data evolve over time: an explanation for an application rejection in 2018 may be irrelevant in 2019, since in the meantime both the model and the applicant's data may have changed. To address this, we propose a novel framework that provides users with insights and plans for changing their classification at particular future time points. The solution combines state-of-the-art algorithms for (single-)model explanations with algorithms for predicting future models and with database-style querying of the obtained explanations. We propose to demonstrate the usefulness of our solution in the context of loan applications, and to interactively engage the audience in computing and viewing suggestions tailored to applicants based on their unique characteristics.
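To make the described pipeline concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of the three ingredients the abstract names: toy per-year models standing in for the predicted future models, a simple greedy counterfactual-style explanation computed against each one, and a database-style query over the resulting plans (here, the year whose plan requires the smallest total change). The feature names, the stand-in models, and the greedy search are all illustrative assumptions.

```python
# Sketch only: stand-in "predicted future models" (one per year), a greedy
# counterfactual-style explanation per model, and a query over the plans.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["income", "debt", "credit_history_years"]  # hypothetical loan features
rng = np.random.default_rng(0)

def train_stand_in_model(drift):
    """Train a toy model; `drift` mimics how the predicted future model gets stricter."""
    X = rng.normal(size=(500, 3))
    y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] > drift).astype(int)  # 1 = loan approved
    return LogisticRegression().fit(X, y)

def greedy_counterfactual(model, x, step=0.25, max_iters=40):
    """Greedily nudge the most influential feature until the model approves x."""
    x = x.copy()
    for _ in range(max_iters):
        if model.predict([x])[0] == 1:
            break
        j = int(np.argmax(np.abs(model.coef_[0])))
        x[j] += step * np.sign(model.coef_[0][j])
    return x

applicant = np.array([-0.5, 0.8, 0.1])  # currently rejected
future_models = {2024 + t: train_stand_in_model(drift=0.2 * t) for t in range(3)}

# One explanation ("plan") per future time point: the required feature changes.
plans = {
    year: greedy_counterfactual(m, applicant) - applicant
    for year, m in future_models.items()
}

# Database-style query over the explanations: the year needing the smallest change.
best_year, best_plan = min(plans.items(), key=lambda kv: np.abs(kv[1]).sum())
for feat, delta in zip(FEATURES, best_plan):
    if abs(delta) > 1e-9:
        print(f"{best_year}: change {feat} by {delta:+.2f}")
```

In the demonstrated system this querying step is where per-applicant tailoring would happen, e.g., restricting plans to features the applicant can realistically change before a given reapplication date.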