Paper Title


Fine-grained Emotion Strength Transfer, Control and Prediction for Emotional Speech Synthesis

Paper Authors

Yi Lei, Shan Yang, Lei Xie

Paper Abstract


This paper proposes a unified model to conduct emotion transfer, control and prediction for sequence-to-sequence based fine-grained emotional speech synthesis. Conventional emotional speech synthesis often needs manual labels or reference audio to determine the emotional expressions of synthesized speech. Such coarse labels cannot control the details of speech emotion, often resulting in an averaged emotion expression delivery, and it is also hard to choose suitable reference audio during inference. To conduct fine-grained emotion expression generation, we introduce phoneme-level emotion strength representations through a learned ranking function to describe the local emotion details, and the sentence-level emotion category is adopted to render the global emotions of synthesized speech. With the global render and local descriptors of emotions, we can obtain fine-grained emotion expressions from reference audio via its emotion descriptors (for transfer) or directly from phoneme-level manual labels (for control). As for the emotional speech synthesis with arbitrary text inputs, the proposed model can also predict phoneme-level emotion expressions from texts, which does not require any reference audio or manual label.
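The core idea above can be illustrated with a minimal sketch: a sentence-level emotion category provides a global embedding, and phoneme-level strength scalars (which the paper obtains from a learned ranking function on reference audio, from manual labels, or predicted from text) modulate that embedding locally before it conditions a seq2seq acoustic model. All names, dimensions, and the scaling scheme below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical conditioning sketch: global emotion embedding scaled
# per phoneme by local strength values. Dimensions and the emotion
# set are assumptions for illustration only.
import numpy as np

EMOTIONS = ["neutral", "happy", "sad", "angry"]  # assumed category set
EMB_DIM = 8                                      # assumed embedding size

rng = np.random.default_rng(0)
# In the real model this table would be learned; random here.
emotion_table = rng.normal(size=(len(EMOTIONS), EMB_DIM))

def condition(emotion: str, strengths: np.ndarray) -> np.ndarray:
    """Broadcast the sentence-level emotion embedding over phonemes,
    scaling it by each phoneme's strength in [0, 1]."""
    emb = emotion_table[EMOTIONS.index(emotion)]   # (EMB_DIM,)
    return strengths[:, None] * emb[None, :]       # (n_phonemes, EMB_DIM)

# Strengths could come from reference audio (transfer), manual
# phoneme-level labels (control), or a text-based predictor (prediction).
strengths = np.array([0.2, 0.9, 0.5])  # e.g. three phonemes
cond = condition("happy", strengths)
print(cond.shape)  # (3, 8)
```

The same `condition` call covers all three usage modes in the abstract; only the source of `strengths` changes.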
