Title

Robust Benchmarking for Machine Learning of Clinical Entity Extraction

Authors

Monica Agrawal, Chloe O'Connell, Yasmin Fatemi, Ariel Levy, David Sontag

Abstract

Clinical studies often require understanding elements of a patient's narrative that exist only in free text clinical notes. To transform notes into structured data for downstream use, these elements are commonly extracted and normalized to medical vocabularies. In this work, we audit the performance of and indicate areas of improvement for state-of-the-art systems. We find that high task accuracies for clinical entity normalization systems on the 2019 n2c2 Shared Task are misleading, and underlying performance is still brittle. Normalization accuracy is high for common concepts (95.3%), but much lower for concepts unseen in training data (69.3%). We demonstrate that current approaches are hindered in part by inconsistencies in medical vocabularies, limitations of existing labeling schemas, and narrow evaluation techniques. We reformulate the annotation framework for clinical entity extraction to factor in these issues to allow for robust end-to-end system benchmarking. We evaluate concordance of annotations from our new framework between two annotators and achieve a Jaccard similarity of 0.73 for entity recognition and an agreement of 0.83 for entity normalization. We propose a path forward to address the demonstrated need for the creation of a reference standard to spur method development in entity recognition and normalization.
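As a rough illustration of the agreement metric cited above, the sketch below computes Jaccard similarity over two annotators' sets of entity spans. The (note_id, start, end) span representation and the example annotations are illustrative assumptions, not the paper's annotation schema.

```python
# Minimal sketch of the Jaccard similarity used for inter-annotator agreement
# on entity recognition. The (note_id, start, end) span format and the example
# annotations are illustrative assumptions, not the paper's schema.

def jaccard_similarity(spans_a: set, spans_b: set) -> float:
    """Return |A ∩ B| / |A ∪ B| over two sets of annotated entity spans."""
    if not spans_a and not spans_b:
        return 1.0  # both annotators marked nothing: treat as full agreement
    return len(spans_a & spans_b) / len(spans_a | spans_b)

# Hypothetical spans marked by two annotators in the same clinical note.
annotator_a = {("note_1", 10, 24), ("note_1", 40, 52), ("note_1", 88, 95)}
annotator_b = {("note_1", 10, 24), ("note_1", 40, 52), ("note_1", 60, 70)}

print(f"Jaccard similarity: {jaccard_similarity(annotator_a, annotator_b):.2f}")
# -> Jaccard similarity: 0.50  (2 shared spans out of 4 distinct spans)
```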
