Paper Title
Soliciting Human-in-the-Loop User Feedback for Interactive Machine Learning Reduces User Trust and Impressions of Model Accuracy
Paper Authors
Paper Abstract
Mixed-initiative systems allow users to interactively provide feedback to potentially improve system performance. Human feedback can correct model errors and update model parameters to dynamically adapt to changing data. Additionally, many users desire greater control over, and the ability to fix perceived flaws in, the systems they rely on. However, how the ability to provide feedback to autonomous systems influences user trust is a largely unexplored area of research. Our research investigates how the act of providing feedback can affect user understanding of an intelligent system and its accuracy. We present a controlled experiment using a simulated object detection system with image data to study the effects of interactive feedback collection on user impressions. The results show that providing human-in-the-loop feedback lowered both participants' trust in the system and their perception of system accuracy, regardless of whether the system accuracy improved in response to their feedback. These results highlight the importance of considering the effects of allowing end-user feedback on user trust when designing intelligent systems.
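To illustrate the kind of interactive feedback loop the abstract refers to, the following is a minimal Python sketch of human-in-the-loop label correction for an image model. It is not the authors' actual system; the names model, predict, update, and ask_user are hypothetical placeholders standing in for whatever model and interface a given system provides.

    # Minimal human-in-the-loop sketch (hypothetical names throughout):
    # the model predicts, a person confirms or corrects each prediction,
    # and the corrected examples are fed back to update the model.
    def collect_feedback(model, images, ask_user):
        corrections = []
        for image in images:
            predicted = model.predict(image)        # model's current guess
            corrected = ask_user(image, predicted)  # user accepts or overrides it
            if corrected != predicted:
                corrections.append((image, corrected))
        if corrections:
            model.update(corrections)               # adapt the model to the feedback
        return corrections

In the study described above, what matters is not the update mechanism itself but the act of soliciting the confirmation step, which the results suggest can by itself lower users' trust and their impression of the system's accuracy.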