Paper Title


Regotron: Regularizing the Tacotron2 architecture via monotonic alignment loss

Paper Authors

Efthymios Georgiou, Kosmas Kritsis, Georgios Paraskevopoulos, Athanasios Katsamanis, Vassilis Katsouros, Alexandros Potamianos

Paper Abstract


Recent deep learning Text-to-Speech (TTS) systems have achieved impressive performance by generating speech close to human parity. However, they suffer from training stability issues as well as incorrect alignment of the intermediate acoustic representation with the input text sequence. In this work, we introduce Regotron, a regularized version of Tacotron2 which aims to alleviate the training issues and at the same time produce monotonic alignments. Our method augments the vanilla Tacotron2 objective function with an additional term, which penalizes non-monotonic alignments in the location-sensitive attention mechanism. By properly adjusting this regularization term we show that the loss curves become smoother, and at the same time Regotron consistently produces monotonic alignments on unseen examples even at an early stage (13% of the total number of epochs) of its training process, whereas the fully converged Tacotron2 fails to do so. Moreover, our proposed regularization method has no additional computational overhead, while reducing common TTS mistakes and achieving slightly improved speech naturalness according to subjective mean opinion scores (MOS) collected from 50 evaluators.
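The abstract describes a regularizer that penalizes non-monotonic attention alignments. A common way to realize such a term is to track the expected (centroid) input position attended at each decoder step and penalize backward movement. The sketch below illustrates this idea in plain NumPy; the function name, the hinge-squared form, and the `margin` parameter are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def monotonic_alignment_penalty(attn, margin=0.0):
    """Illustrative monotonicity penalty on an attention matrix.

    attn: array of shape (T_dec, T_enc); each row is an attention
          distribution over encoder (input text) positions.
    margin: hypothetical slack allowing small backward drift.

    Returns a scalar penalty that is zero when the attention centroid
    moves forward (or stays put) at every decoder step.
    """
    positions = np.arange(attn.shape[1])
    # Expected input position attended at each decoder step.
    centers = attn @ positions
    # Step-to-step movement of the attention centroid.
    diffs = centers[1:] - centers[:-1]
    # Penalize steps where the centroid moves backward beyond the margin.
    return float(np.sum(np.maximum(0.0, margin - diffs) ** 2))
```

In training, such a term would be computed on the attention weights of the location-sensitive attention mechanism and added to the Tacotron2 loss with a tuning weight, which is the "proper adjustment" the abstract refers to.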
