Paper Title
Rapformer: Conditional Rap Lyrics Generation with Denoising Autoencoders
Paper Authors
Paper Abstract
The ability to combine symbols to generate language is a defining characteristic of human intelligence, particularly in the context of artistic story-telling through lyrics. We develop a method for synthesizing a rap verse based on the content of any text (e.g., a news article), or for augmenting pre-existing rap lyrics. Our method, called Rapformer, is based on training a Transformer-based denoising autoencoder to reconstruct rap lyrics from content words extracted from the lyrics, trying to preserve the essential meaning, while matching the target style. Rapformer features a novel BERT-based paraphrasing scheme for rhyme enhancement which increases the average rhyme density of output lyrics by 10%. Experimental results on three diverse input domains show that Rapformer is capable of generating technically fluent verses that offer a good trade-off between content preservation and style transfer. Furthermore, a Turing-test-like experiment reveals that Rapformer fools human lyrics experts 25% of the time.
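The abstract describes training a Transformer-based denoising autoencoder to reconstruct verses from their extracted content words. The paper itself is not reproduced here, so the following is only a minimal sketch of that setup, assuming a standard Hugging Face seq2seq model (facebook/bart-base); the crude stopword filter standing in for content-word extraction, the example verse, and all hyperparameters are illustrative assumptions rather than the authors' actual pipeline.

    # Sketch: train a seq2seq Transformer to reconstruct a verse from its
    # shuffled content words (the "denoising" objective in the abstract).
    import random
    from transformers import BartTokenizer, BartForConditionalGeneration

    # Very rough stand-in for content-word extraction (assumption, not the
    # authors' method): drop common function words and discard word order.
    STOPWORDS = {"a", "an", "the", "i", "you", "he", "she", "it", "we", "they",
                 "in", "on", "at", "to", "of", "and", "or", "but", "was", "is",
                 "are", "were", "be", "been", "where", "that", "this"}

    def extract_content_words(verse: str, shuffle: bool = True) -> str:
        """Keep only content words; shuffling removes word order as extra noise."""
        content = [w for w in verse.lower().split() if w not in STOPWORDS]
        if shuffle:
            random.shuffle(content)
        return " ".join(content)

    tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
    model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

    verse = "I was raised in the city where the lights never sleep"
    src = extract_content_words(verse)  # e.g. "never lights raised sleep city"

    # One reconstruction (denoising) training step: content words -> full verse.
    inputs = tokenizer(src, return_tensors="pt")
    labels = tokenizer(verse, return_tensors="pt").input_ids
    loss = model(**inputs, labels=labels).loss
    loss.backward()

    # At inference time, content words from any text (e.g. a news article)
    # could be fed in to generate a verse in the learned lyrical style.
    generated = model.generate(**inputs, max_length=32, num_beams=4)
    print(tokenizer.decode(generated[0], skip_special_tokens=True))

This sketch covers only the reconstruction objective; the BERT-based paraphrasing scheme for rhyme enhancement mentioned in the abstract is a separate component and is not illustrated here.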