Paper Title

Select, Extract and Generate: Neural Keyphrase Generation with Layer-wise Coverage Attention

Paper Authors

Wasi Uddin Ahmad, Xiao Bai, Soomin Lee, Kai-Wei Chang

Paper Abstract

Natural language processing techniques have demonstrated promising results in keyphrase generation. However, one of the major challenges in neural keyphrase generation is processing long documents using deep neural networks. Generally, documents are truncated before being given as input to neural networks. Consequently, the models may miss essential points conveyed in the target document. To overcome this limitation, we propose SEG-Net, a neural keyphrase generation model that is composed of two major components: (1) a selector that selects the salient sentences in a document, and (2) an extractor-generator that jointly extracts and generates keyphrases from the selected sentences. SEG-Net uses Transformer, a self-attentive architecture, as the basic building block with a novel layer-wise coverage attention to summarize most of the points discussed in the document. Experimental results on seven keyphrase generation benchmarks from scientific and web documents demonstrate that SEG-Net outperforms the state-of-the-art neural generative methods by a large margin.
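The abstract names a layer-wise coverage attention but does not give its formulation. The following is a minimal sketch of the general coverage-attention idea (accumulating past attention mass and down-weighting already-attended source positions), kept separately per layer. The function names, the additive penalty `scores - w_cov * coverage`, and the per-layer bookkeeping are illustrative assumptions, not SEG-Net's actual mechanism.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def coverage_attention(query, keys, coverage, w_cov=1.0):
    """One attention step with a coverage penalty: source positions that
    have already received a lot of attention get lower scores.
    query: (d,), keys: (src_len, d), coverage: (src_len,)."""
    scores = keys @ query                 # raw attention scores, (src_len,)
    scores = scores - w_cov * coverage    # penalize already-covered positions
    attn = softmax(scores)
    return attn, coverage + attn          # return weights and updated coverage

def decode_step(queries_per_layer, keys, coverages):
    """Apply coverage attention in every layer, each with its own
    accumulated coverage vector (a guess at what 'layer-wise' could mean)."""
    attns = []
    for layer, (q, cov) in enumerate(zip(queries_per_layer, coverages)):
        attn, coverages[layer] = coverage_attention(q, keys, cov)
        attns.append(attn)
    return attns, coverages
```

As a usage example, initializing `coverages` to a list of zero vectors (one per layer) and calling `decode_step` once per decoding step spreads attention across the source over time, which is the behavior the coverage mechanism is meant to encourage.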
