Paper Title

Self-supervised Context-aware Style Representation for Expressive Speech Synthesis

Authors

Yihan Wu, Xi Wang, Shaofei Zhang, Lei He, Ruihua Song, Jian-Yun Nie

Abstract

Expressive speech synthesis, such as audiobook synthesis, remains challenging in terms of style representation learning and prediction. Deriving style from reference audio or predicting style tags from text requires a huge amount of labeled data, which is costly to acquire and difficult to define and annotate accurately. In this paper, we propose a novel framework that learns style representations from abundant plain text in a self-supervised manner. It leverages an emotion lexicon and uses contrastive learning and deep clustering. We further integrate the style representation as a conditioning embedding in a multi-style Transformer TTS. Compared with a multi-style TTS that predicts style tags, trained on the same dataset but with human annotations, our method achieves improved results in subjective evaluations on both in-domain and out-of-domain audiobook test sets. Moreover, with the implicit context-aware style representation, emotion transitions in long synthesized paragraphs sound more natural. Audio samples are available on the demo webpage.
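The abstract only names the two self-supervised training signals; the sketch below is a minimal, hypothetical PyTorch illustration of how a contrastive (InfoNCE) objective and a deep-clustering objective could be combined to learn text-based style embeddings. All names and choices here (StyleEncoder, info_nce, the 16 style clusters, lexicon-guided augmentation as the positive pair) are assumptions for illustration, not the authors' actual implementation.

```python
# Hypothetical sketch of the two self-supervised objectives described in
# the abstract: contrastive learning plus deep clustering. Illustrative
# only; not the paper's released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StyleEncoder(nn.Module):
    """Maps token-id sequences to a fixed-size, L2-normalized style embedding."""
    def __init__(self, vocab_size=10000, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, token_ids):
        h, _ = self.gru(self.embed(token_ids))
        z = self.proj(h.mean(dim=1))      # mean-pool over time, then project
        return F.normalize(z, dim=-1)

def info_nce(anchor, positive, temperature=0.1):
    """Contrastive (InfoNCE) loss: each anchor should match its own
    positive view against all other positives in the batch."""
    logits = anchor @ positive.t() / temperature
    targets = torch.arange(anchor.size(0))
    return F.cross_entropy(logits, targets)

def clustering_loss(z, centroids, temperature=0.1):
    """Simplified deep-clustering term: pull each embedding toward its
    nearest style centroid, using argmax assignments as pseudo-labels."""
    logits = z @ F.normalize(centroids, dim=-1).t() / temperature
    pseudo = logits.argmax(dim=-1).detach()
    return F.cross_entropy(logits, pseudo)

# Toy usage: positives could be lexicon-guided augmentations of the same
# sentence (e.g. substituting synonymous emotion words from the lexicon).
encoder = StyleEncoder()
centroids = nn.Parameter(torch.randn(16, 256))    # 16 assumed style clusters
anchor_ids = torch.randint(0, 10000, (8, 32))     # batch of 8 sentences
positive_ids = torch.randint(0, 10000, (8, 32))   # their augmented views

z_a, z_p = encoder(anchor_ids), encoder(positive_ids)
loss = info_nce(z_a, z_p) + clustering_loss(z_a, centroids)
loss.backward()
```

Under this reading, the resulting style embedding would then be fed to the Transformer TTS as a conditioning vector alongside the phoneme sequence, which is what lets the model pick up context-dependent emotion without explicit style tags.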
