Paper Title

Multilingual Transformer Language Model for Speech Recognition in Low-resource Languages

Paper Authors

Li Miao, Jian Wu, Piyush Behre, Shuangyu Chang, Sarangarajan Parthasarathy

Paper Abstract

It is challenging to train and deploy Transformer LMs for hybrid speech recognition second-pass re-ranking in low-resource languages due to (1) data scarcity in low-resource languages, (2) expensive computing costs for training and refreshing 100+ monolingual models, and (3) hosting inefficiency considering sparse traffic. In this study, we present a new way to group multiple low-resource locales together and optimize the performance of Multilingual Transformer LMs in ASR. Our Locale-group Multilingual Transformer LMs outperform traditional multilingual LMs while reducing maintenance costs and operating expenses. Further, for low-resource but high-traffic locales where deploying monolingual models is feasible, we show that fine-tuning our locale-group multilingual LMs produces better monolingual LM candidates than baseline monolingual LMs.
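The paper itself does not ship code, but the second-pass re-ranking setting the abstract describes follows a standard recipe: score each first-pass n-best hypothesis with the (here, locale-group multilingual) Transformer LM and interpolate that score with the first-pass score. Below is a minimal Python sketch of that recipe; all names (`Hypothesis`, `rerank_nbest`, `lm_log_prob`, `lm_weight`) and the toy LM are illustrative assumptions, not from the paper, and in the paper's setting `lm_log_prob` would be computed by the trained multilingual Transformer LM.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Hypothesis:
    text: str
    first_pass_score: float  # log-domain score from the first pass (acoustic + first-pass LM)


def rerank_nbest(
    nbest: List[Hypothesis],
    lm_log_prob: Callable[[str], float],  # log P(text) under the second-pass Transformer LM
    lm_weight: float = 0.3,  # interpolation weight; a tunable hyperparameter, not from the paper
) -> List[Hypothesis]:
    """Second-pass re-ranking: sort hypotheses by an interpolation of the
    first-pass score and the Transformer LM log-probability."""
    def combined(h: Hypothesis) -> float:
        return (1.0 - lm_weight) * h.first_pass_score + lm_weight * lm_log_prob(h.text)

    return sorted(nbest, key=combined, reverse=True)


if __name__ == "__main__":
    # Toy stand-in for the multilingual Transformer LM: penalizes longer strings.
    # A real system would run the trained model to get log-probabilities.
    def toy_lm(text: str) -> float:
        return -0.5 * len(text.split())

    nbest = [
        Hypothesis("recognize speech", first_pass_score=-3.2),
        Hypothesis("wreck a nice beach", first_pass_score=-3.0),
    ]
    for h in rerank_nbest(nbest, toy_lm):
        print(h.text)
```

With these toy numbers, "recognize speech" overtakes the first-pass winner once the LM score is interpolated in, which is exactly the effect a second-pass LM is meant to have on acoustically confusable hypotheses.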
