Paper Title

Learning a Better Initialization for Soft Prompts via Meta-Learning

Paper Authors

Yukun Huang, Kun Qian, Zhou Yu

Paper Abstract

Prompt tuning (PT) is an effective approach to adapting pre-trained language models to downstream tasks. Without a good initialization, however, prompt tuning does not perform well under few-shot settings. Pre-trained prompt tuning (PPT) was therefore proposed to initialize prompts by leveraging pre-training data. We propose MetaPT (Meta-learned Prompt Tuning) to further improve PPT's initialization by considering latent structure within the pre-training data. Specifically, we introduce the structure by first clustering pre-training data into different auxiliary tasks with unsupervised methods. Then we use these tasks to pre-train prompts with a meta-learning algorithm. Such a process can make prompts learn a better initialization by discovering commonalities among these auxiliary tasks. We evaluate our method on seven downstream tasks. Our MetaPT achieves better and more stable performance than the state-of-the-art method.
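
As a rough illustration of the two-step procedure described in the abstract (not the authors' released implementation), the sketch below clusters placeholder pre-training examples into auxiliary tasks with k-means and then meta-learns a soft-prompt initialization with a first-order Reptile-style update. The synthetic features, the tiny linear classifier standing in for a frozen language model, and the specific meta-learning rule are all illustrative assumptions.

```python
# A minimal sketch of the MetaPT pre-training idea under illustrative assumptions:
# random feature vectors stand in for embedded pre-training examples, k-means forms
# the unsupervised auxiliary tasks, a small frozen linear layer stands in for the
# frozen language model, and a first-order Reptile-style update stands in for the
# meta-learning algorithm. This is not the paper's implementation.
import torch
from sklearn.cluster import KMeans

torch.manual_seed(0)

# Placeholder "pre-training data": 512 examples, 64-dim features, binary labels.
features = torch.randn(512, 64)
labels = torch.randint(0, 2, (512,))

# Step 1: cluster the pre-training data into auxiliary tasks (unsupervised).
num_tasks = 8
cluster_ids = torch.as_tensor(
    KMeans(n_clusters=num_tasks, n_init=10, random_state=0).fit_predict(features.numpy())
)
tasks = [(features[cluster_ids == k], labels[cluster_ids == k]) for k in range(num_tasks)]

# Soft prompt: 4 prompt "tokens" of dimension 64; the classifier stays frozen.
prompt = torch.nn.Parameter(torch.randn(4, 64) * 0.02)
classifier = torch.nn.Linear(64, 2)
loss_fn = torch.nn.CrossEntropyLoss()

def task_loss(prompt_param, x, y):
    # Toy forward pass: mix the prompt into the input representation
    # (a stand-in for prepending soft prompts to a language model's input).
    pooled = x + prompt_param.mean(dim=0)
    return loss_fn(classifier(pooled), y)

# Step 2: meta-learn the prompt initialization over the auxiliary tasks.
inner_lr, meta_lr, inner_steps = 0.05, 0.1, 3
for epoch in range(20):
    for x, y in tasks:
        # Inner loop: adapt a copy of the prompt to one auxiliary task.
        fast_prompt = prompt.detach().clone().requires_grad_(True)
        for _ in range(inner_steps):
            grad = torch.autograd.grad(task_loss(fast_prompt, x, y), fast_prompt)[0]
            fast_prompt = (fast_prompt - inner_lr * grad).detach().requires_grad_(True)
        # Outer loop: Reptile-style move of the shared prompt toward the adapted one.
        with torch.no_grad():
            prompt += meta_lr * (fast_prompt - prompt)

# `prompt` is now the meta-learned initialization for downstream prompt tuning.
print(prompt.shape)
```

Downstream prompt tuning would then start from this meta-learned prompt instead of a random initialization, which is the property the paper evaluates under few-shot settings.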
