Paper Title

Balancing Training for Multilingual Neural Machine Translation

Authors

Xinyi Wang, Yulia Tsvetkov, Graham Neubig

Abstract

When training multilingual machine translation (MT) models that can translate to/from multiple languages, we are faced with imbalanced training sets: some languages have much more training data than others. Standard practice is to up-sample less resourced languages to increase representation, and the degree of up-sampling has a large effect on the overall performance. In this paper, we propose a method that instead automatically learns how to weight training data through a data scorer that is optimized to maximize performance on all test languages. Experiments on two sets of languages under both one-to-many and many-to-one MT settings show our method not only consistently outperforms heuristic baselines in terms of average performance, but also offers flexible control over the performance of which languages are optimized.
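
As context for the "standard practice" the abstract mentions, the common heuristic is temperature-based up-sampling: language i is sampled with probability proportional to (n_i / Σ_j n_j)^(1/τ), so τ = 1 recovers proportional sampling while larger τ flattens the distribution toward uniform and up-samples low-resource languages. Below is a minimal sketch of this baseline; the function name and corpus sizes are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def temperature_sampling_probs(dataset_sizes, temperature=5.0):
    """Heuristic baseline: temperature-based up-sampling.

    p_i is proportional to (n_i / sum_j n_j) ** (1 / temperature).
    temperature = 1 gives proportional sampling; larger values
    flatten the distribution toward uniform, boosting languages
    with little data. (Illustrative sketch, not the paper's code.)
    """
    sizes = np.asarray(dataset_sizes, dtype=np.float64)
    probs = (sizes / sizes.sum()) ** (1.0 / temperature)
    return probs / probs.sum()

# Hypothetical sentence-pair counts for four languages.
sizes = [4_500_000, 600_000, 150_000, 20_000]
for tau in (1.0, 5.0, 100.0):
    print(f"tau={tau:>5}:", temperature_sampling_probs(sizes, tau).round(3))
```

The paper's method replaces this fixed τ with learned per-language weights, produced by a data scorer that is optimized so that training on the reweighted data maximizes performance on all test languages.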
