Paper Title
An Assessment of the Usability of Machine Learning Based Tools for the Security Operations Center
Paper Authors
Abstract
Gartner, a large research and advisory company, anticipates that by 2024, 80% of security operations centers (SOCs) will use machine learning (ML) based solutions to enhance their operations. In light of such widespread adoption, it is vital for the research community to identify and address usability concerns. This work presents the results of the first in situ usability assessment of ML-based tools. With the support of the US Navy, we leveraged the National Cyber Range, a large, air-gapped cyber testbed equipped with state-of-the-art network and user emulation capabilities, to study six US Naval SOC analysts' usage of two tools. Our analysis identified several serious usability issues, including multiple violations of established usability heuristics for user interface design. We also discovered that analysts lacked a clear mental model of how these tools generate scores, resulting in mistrust and/or misuse of the tools themselves. Surprisingly, we found no correlation between analysts' level of education or years of experience and their performance with either tool, suggesting that other factors, such as prior background knowledge or personality, play a significant role in ML-based tool usage. Our findings demonstrate that ML-based security tool vendors must put a renewed focus on working with analysts, both experienced and inexperienced, to ensure that their systems are usable and useful in real-world security operations settings.