Paper Title

A Few-shot Approach to Resume Information Extraction via Prompts

Paper Authors

Chengguang Gan, Tatsunori Mori

Paper Abstract

The fine-tuning performance of prompt learning on text classification tasks has attracted the attention of the NLP community. This paper applies prompt learning to resume information extraction and improves on existing methods for this task. We create manual templates and verbalizers tailored to resume texts and compare the performance of Masked Language Model (MLM) and Seq2Seq PLMs. In addition, we enhance the verbalizer design of Knowledgeable Prompt-tuning, contributing to prompt template design across NLP tasks. We present the Manual Knowledgeable Verbalizer (MKV), a set of rules for constructing verbalizers for specific applications. Our experiments show that templates and verbalizers built with the MKV rules are more effective and robust than those of existing methods. Our MKV approach also resolves the sample imbalance problem and surpasses current automatic prompt methods. This study underscores the value of tailored prompt learning for resume information extraction and stresses the importance of custom-designed templates and verbalizers.
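To make the template-and-verbalizer idea concrete, below is a minimal sketch (not the authors' released code) of prompt-based resume-sentence classification with a masked language model. The model name, template wording, resume categories, and label words are illustrative assumptions; the point is how a manual template exposes a [MASK] slot and how a hand-built, multi-word verbalizer maps the MLM's predictions at that slot back to resume classes.

# Minimal sketch of manual-template + manual-verbalizer classification with an
# MLM (illustrative only; model, template, classes, and label words are assumed).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Manual template: wraps the resume sentence and exposes one [MASK] slot.
TEMPLATE = "Resume sentence: {text} This sentence is about my [MASK]."

# Hand-built verbalizer: each resume class is backed by several label words,
# in the spirit of a "knowledgeable" verbalizer.
VERBALIZER = {
    "education":  ["education", "degree", "university"],
    "experience": ["experience", "job", "career"],
    "skill":      ["skill", "ability", "programming"],
}

def classify(text: str) -> str:
    """Score each class by summing MLM probabilities of its label words."""
    prompt = TEMPLATE.format(text=text)
    scores = {}
    for label, words in VERBALIZER.items():
        # Restrict the fill-mask predictions to this class's label words.
        preds = fill_mask(prompt, targets=words)
        scores[label] = sum(p["score"] for p in preds)
    return max(scores, key=scores.get)

print(classify("Obtained a master's degree from Yokohama National University."))

Summing the probabilities of several label words per class lets minority classes draw on related words rather than a single token, which is, at a high level, the role the abstract attributes to MKV in addressing sample imbalance.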
