Paper Title
Conditioned Natural Language Generation using only Unconditioned Language Model: An Exploration
Paper Authors
Paper Abstract
Transformer-based language models have proven to be very powerful for natural language generation (NLG). However, generating text conditioned on user inputs, such as topics or attributes, is non-trivial. Past approaches rely on modifying the original LM architecture, re-training the LM on corpora with attribute labels, or training separate `guidance models' to steer text generation during decoding. We argue that none of the above is necessary, and that the original unconditioned LM is sufficient for conditioned NLG. We evaluate our approach on the fluency and diversity of the generated samples, with both automated and human evaluation.
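The abstract does not spell out how an unconditioned LM can be steered at decoding time. One common family of techniques (not necessarily the one this paper uses) is weighted decoding: leave the LM untouched and add a bias to the logits of topic-related tokens before sampling. A minimal, self-contained sketch with a toy vocabulary — the token names, bias value, and logits below are all hypothetical:

```python
import math
import random

def softmax(logits):
    """Convert a dict of logits into a probability distribution."""
    m = max(logits.values())  # subtract max for numerical stability
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

def biased_next_token(base_logits, topic_words, bias=3.0, rng=None):
    """Weighted decoding: boost the logits of topic-related tokens,
    re-normalize, then sample the next token. The LM itself (which
    produced base_logits) is never modified or re-trained."""
    rng = rng or random.Random(0)
    logits = {t: v + (bias if t in topic_words else 0.0)
              for t, v in base_logits.items()}
    probs = softmax(logits)
    r, acc = rng.random(), 0.0
    for tok, p in sorted(probs.items()):
        acc += p
        if r <= acc:
            return tok
    return tok  # fallback for floating-point rounding

# Toy "unconditional LM" logits for the next token (hypothetical values)
base = {"the": 2.0, "cat": 1.0, "science": 0.5, "quantum": 0.2}
# Conditioning on a "science" topic raises topic-word probability
probs = softmax({t: v + (3.0 if t in {"science", "quantum"} else 0.0)
                 for t, v in base.items()})
```

In a real system, `base_logits` would come from a frozen pretrained LM (e.g. GPT-2) at each decoding step, and `topic_words` from a bag-of-words associated with the desired attribute; only the decoding loop changes, which is what makes the unconditioned LM sufficient.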