Paper Title
Learning Multiscale Transformer Models for Sequence Generation
Paper Authors
Paper Abstract
Multiscale feature hierarchies have seen great success in the computer vision area. This success further motivates researchers to design multiscale Transformers for natural language processing, mostly based on the self-attention mechanism, for example, by restricting the receptive field across heads or extracting local fine-grained features via convolutions. However, most existing works model local features directly but ignore word-boundary information, which results in redundant and ambiguous attention distributions that lack interpretability. In this work, we define scales in terms of different linguistic units, including sub-words, words, and phrases. We build a multiscale Transformer model by establishing relationships among scales based on word-boundary information and phrase-level prior knowledge. The proposed \textbf{U}niversal \textbf{M}ulti\textbf{S}cale \textbf{T}ransformer, namely \textsc{Umst}, is evaluated on two sequence generation tasks. Notably, it yields consistent performance gains over strong baselines on several test sets without sacrificing efficiency.
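The abstract itself gives no implementation details, but the core idea of using word-boundary information to relate the sub-word and word scales can be sketched roughly. The snippet below is an illustrative assumption on our part, not the paper's actual method: it assumes BPE-style `@@` continuation markers, maps sub-word tokens to word indices, and builds a boolean mask that confines attention to sub-words of the same word (the word scale).

```python
def word_ids_from_bpe(tokens):
    """Map BPE sub-word tokens to word indices using '@@' continuation markers.

    Example: ["fu@@", "sion", "model"] -> [0, 0, 1]
    """
    ids, word = [], 0
    prev_continues = False  # did the previous token end with '@@'?
    for tok in tokens:
        if ids and not prev_continues:
            word += 1  # previous token ended a word, start a new one
        ids.append(word)
        prev_continues = tok.endswith("@@")
    return ids


def word_boundary_mask(word_ids):
    """Boolean attention mask allowing each position to attend only within its word."""
    n = len(word_ids)
    return [[word_ids[i] == word_ids[j] for j in range(n)] for i in range(n)]


tokens = ["fu@@", "sion", "model"]
ids = word_ids_from_bpe(tokens)      # -> [0, 0, 1]
mask = word_boundary_mask(ids)       # mask[0][1] is True, mask[0][2] is False
```

In a real multiscale Transformer, such a mask (or a grouping derived from it) would gate the self-attention scores at the word scale; phrase-level grouping could be built analogously from phrase-boundary annotations.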