Paper Title
The Power of Fragmentation: A Hierarchical Transformer Model for Structural Segmentation in Symbolic Music Generation
Paper Authors
Paper Abstract
Symbolic music generation relies on the contextual representation capabilities of the generative model, where the most prevalent approach is the Transformer-based model. The learning of musical context is also related to the structural elements in music, i.e., intro, verse, and chorus, which are currently overlooked by the research community. In this paper, we propose a hierarchical Transformer model to learn multi-scale contexts in music. In the encoding phase, we first design a Fragment Scope Localization layer to syncopate the music into chords and sections. Then, we use a multi-scale attention mechanism to learn note-, chord-, and section-level contexts. In the decoding phase, we propose a hierarchical Transformer model that uses fine decoders to generate sections in parallel and a coarse decoder to decode the combined music. We also design a Music Style Normalization layer to achieve a consistent music style across the generated sections. Our model is evaluated on two open MIDI datasets, and experiments show that it outperforms the best contemporary music generation models. More excitingly, visual evaluation shows that our model is superior in melody reuse, resulting in more realistic music.
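The multi-scale idea in the abstract (note-level tokens pooled into chord- and section-level representations, with attention computed at each scale) can be illustrated with a minimal sketch. This is not the paper's implementation: the single-head dot-product attention, mean-pooling, fixed group sizes, and the toy one-hot note embeddings below are all hypothetical simplifications chosen only to show how per-note contexts at three scales might be formed and concatenated.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(query, keys, values):
    # scaled dot-product attention for a single query vector
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(d)]

def pool(vectors):
    # mean-pool a group of same-scale vectors into one coarser-scale vector
    d = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(d)]

# toy note embeddings: 8 notes, 4 dims (hypothetical one-hot pattern)
notes = [[float(i == j % 4) for i in range(4)] for j in range(8)]

# fixed group boundaries stand in for the Fragment Scope Localization layer
chords = [pool(notes[i:i + 2]) for i in range(0, 8, 2)]    # 2 notes per chord
sections = [pool(chords[i:i + 2]) for i in range(0, 4, 2)]  # 2 chords per section

# each note attends over all three scales; the contexts are concatenated
multi_scale = [
    attend(n, notes, notes) + attend(n, chords, chords) + attend(n, sections, sections)
    for n in notes
]
```

In a trained model the group boundaries would come from the learned localization layer and the attention would be multi-head with learned projections; the sketch only mirrors the note-, chord-, and section-level hierarchy described above.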