Paper Title


Distilling neural networks into skipgram-level decision lists

Paper Authors

Sushil, Madhumita, Šuster, Simon, Daelemans, Walter

Paper Abstract

Several previous studies on explanation for recurrent neural networks focus on approaches that find the most important input segments for a network as its explanations. In that case, the manner in which these input segments combine with each other to form an explanatory pattern remains unknown. To overcome this, some previous work tries to find patterns (called rules) in the data that explain neural outputs. However, their explanations are often insensitive to model parameters, which limits the scalability of text explanations. To overcome these limitations, we propose a pipeline to explain RNNs by means of decision lists (also called rules) over skipgrams. To evaluate the explanations, we create a synthetic sepsis-identification dataset, and we additionally apply our technique to clinical and sentiment-analysis datasets. We find that our technique consistently achieves high explanation fidelity and qualitatively interpretable rules.
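To make the terminology concrete, the following is a minimal sketch of what "decision lists over skipgrams" means: skipgrams are n-grams that may skip a bounded number of intervening tokens, and a decision list is an ordered set of if-then rules where the first matching rule fires. This is an illustration only, not the paper's pipeline; the function names, the rule patterns, and the sepsis-themed tokens are all hypothetical.

```python
from itertools import combinations

def skipgrams(tokens, n=2, max_skip=1):
    """Enumerate n-grams that may skip up to `max_skip` tokens in between."""
    grams = set()
    for i in range(len(tokens)):
        window = tokens[i : i + n + max_skip]
        # Fix the first token of the window; choose the remaining n-1
        # positions from the rest of the window (allowing gaps).
        for combo in combinations(range(1, len(window)), n - 1):
            grams.add((window[0],) + tuple(window[j] for j in combo))
    return grams

# A toy decision list: ordered (pattern, label) rules; the empty pattern
# acts as the default. These rules are invented for illustration.
RULES = [
    (("fever", "hypotension"), "sepsis"),
    (("fever", "cough"), "infection"),
    ((), "negative"),
]

def apply_rules(grams, rules=RULES):
    """Return the label of the first rule whose skipgram occurs in the input."""
    for pattern, label in rules:
        if not pattern or pattern in grams:
            return label
```

For example, for the token sequence `patient has fever and hypotension`, the 1-skip bigram `("fever", "hypotension")` is extracted (skipping `and`), so the first rule fires and the list predicts `sepsis`.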
