Paper Title

Word2Vec: Optimal Hyper-Parameters and Their Impact on NLP Downstream Tasks

Authors

Adewumi, Tosin P., Liwicki, Foteini, Liwicki, Marcus

Abstract

Word2Vec is a prominent model for natural language processing (NLP) tasks. Similar inspiration is found in the distributed embeddings of new state-of-the-art (SotA) deep neural networks. However, the wrong combination of hyper-parameters can produce poor-quality vectors. The objective of this work is to show empirically that an optimal combination of hyper-parameters exists and to evaluate various combinations. We compare them with the released, pre-trained original word2vec model. Both intrinsic and extrinsic (downstream) evaluations, including named entity recognition (NER) and sentiment analysis (SA), were carried out. The downstream tasks reveal that the best model is usually task-specific, that high analogy scores do not necessarily correlate positively with F1 scores, and that the same applies to focusing on data alone. Increasing the vector dimension size beyond a certain point leads to poor quality or performance. If ethical considerations of saving time, energy, and the environment are taken into account, then reasonably smaller corpora may do just as well or even better in some cases. Moreover, using a small corpus, we obtain better human-assigned WordSim scores, corresponding Spearman correlations, and better downstream performance (with significance tests) compared to the original model, which was trained on a 100-billion-word corpus.
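
To make the abstract's hyper-parameter discussion concrete, the sketch below (an illustration assuming the gensim 4.x Word2Vec API, not the authors' released code) shows where the knobs in question (vector dimension, architecture, window, sampling strategy) are set, and how the intrinsic evaluations mentioned above (analogy accuracy and WordSim scores with Spearman correlation) are typically invoked. All values shown are illustrative placeholders, not the paper's reported optimal settings.

```python
# Minimal sketch, assuming gensim 4.x. Hyper-parameter values are
# illustrative placeholders, NOT the optimal settings reported by the authors.
from gensim.models import Word2Vec
from gensim.test.utils import common_texts, datapath

# Toy corpus bundled with gensim; swap in a real corpus. The paper argues a
# reasonably small corpus can match or beat one of 100 billion words.
model = Word2Vec(
    sentences=common_texts,
    vector_size=300,  # embedding dimension; too large a value hurts quality
    window=5,         # context window size
    sg=1,             # 1 = skip-gram, 0 = CBOW
    hs=0,             # 0 = negative sampling instead of hierarchical softmax
    negative=5,       # number of negative samples
    min_count=1,      # keep all words (toy corpus only)
    epochs=5,
    workers=4,
)

# Intrinsic evaluation: analogy accuracy and WordSim-353 with Spearman
# correlation. dummy4unknown=True scores out-of-vocabulary items as zero so
# the calls run even on this toy model; real results need a real corpus.
analogy_score, _ = model.wv.evaluate_word_analogies(
    datapath("questions-words.txt"), dummy4unknown=True)
pearson, spearman, oov = model.wv.evaluate_word_pairs(
    datapath("wordsim353.tsv"), dummy4unknown=True)
print(f"analogy accuracy: {analogy_score:.3f}")
print(f"WordSim Spearman: {spearman[0]:.3f} (OOV: {oov:.1f}%)")
```

In a sweep such as the one the paper describes, this training-and-evaluation step would be repeated over a grid of `vector_size`, `window`, `sg`, and sampling settings, with the downstream NER and SA tasks used as the extrinsic check.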
