Paper Title
SUSing: SU-net for Singing Voice Synthesis
Paper Authors
Paper Abstract
Singing voice synthesis is a generative task that involves multi-dimensional control of the singing model, including the lyrics, pitch, and duration, as well as the singer's timbre and singing techniques such as vibrato. In this paper, we propose SU-net for singing voice synthesis, named SUSing. Synthesizing the singing voice is treated as a translation task from the lyrics and music score to the spectrum. The lyrics and music score information are encoded into a two-dimensional feature representation through convolution layers. The two-dimensional features and the frequency spectrum are then mapped to the target spectrum in an autoregressive manner through the SU-net network. Within SU-net, stripe pooling replaces conventional global pooling to learn the vertical frequency relationships in the spectrum and the changes of frequency in the time domain. Experimental results on the public Kiritan dataset show that the proposed method can synthesize more natural singing voices.
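The abstract names stripe pooling as the replacement for global pooling inside SU-net: instead of collapsing a spectrogram to a single value, pooling is done along one axis at a time, so each cell retains context from its whole frequency column and its whole time row. Below is a minimal numpy sketch of that idea only; the actual strip pooling module (and whatever SUSing uses) additionally applies learned 1-D convolutions and gating, which are omitted here, and all names are hypothetical.

```python
import numpy as np

def stripe_pool(spec: np.ndarray) -> np.ndarray:
    """Fuse a frequency-axis strip and a time-axis strip of a spectrogram.

    spec: 2-D array of shape (F, T) -- frequency bins x time frames.
    Returns an (F, T) map combining both directional contexts.
    """
    # Average over time: one value per frequency bin, shape (F, 1).
    freq_strip = spec.mean(axis=1, keepdims=True)
    # Average over frequency: one value per time frame, shape (1, T).
    time_strip = spec.mean(axis=0, keepdims=True)
    # Broadcasting expands both strips back to (F, T) and sums them,
    # so each cell mixes row-wide (temporal) and column-wide
    # (harmonic) statistics, unlike a single global average.
    return freq_strip + time_strip

spec = np.array([[1.0, 2.0],
                 [3.0, 4.0]])
fused = stripe_pool(spec)  # [[3.5, 4.5], [5.5, 6.5]]
```

A plain global average pool would map the whole 2-D spectrum to one scalar, discarding exactly the vertical (harmonic) and horizontal (temporal) structure the abstract says the model needs to track.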