Paper Title
On the Interpretability of Attention Networks
Paper Authors
Paper Abstract
Attention mechanisms form a core component of several successful deep learning architectures, and are based on one key idea: "The output depends only on a small (but unknown) segment of the input." In several practical applications, such as image captioning and language translation, this is mostly true. In trained models with an attention mechanism, the output of an intermediate module that encodes the segment of the input responsible for the output is often used as a way to peek into the "reasoning" of the network. We make this notion precise for a variant of the classification problem, which we term selective dependence classification (SDC), when used with attention model architectures. Under such a setting, we demonstrate various error modes in which an attention model can be accurate yet fail to be interpretable, and show that such models do arise as a result of training. We illustrate various situations that can accentuate or mitigate this behaviour. Finally, we use our objective definition of interpretability for SDC tasks to evaluate a few attention model learning algorithms designed to encourage sparsity, and demonstrate that these algorithms help improve interpretability.
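To make the setting described in the abstract concrete, the sketch below shows a minimal attention-based classifier whose attention weights over input segments can be read off as a candidate "explanation", i.e., the kind of intermediate output the abstract says is used to peek into the network's reasoning. This is an illustrative sketch in PyTorch, not the architecture or SDC setup from the paper; the class name AttentionClassifier and all dimensions are hypothetical choices.

```python
import torch
import torch.nn as nn

class AttentionClassifier(nn.Module):
    """Toy attention-based classifier (illustrative only).

    The prediction is a softmax-attention-weighted pool of per-segment
    features; the attention weights are returned alongside the logits so
    they can be inspected as a putative explanation of the prediction.
    """
    def __init__(self, input_dim, hidden_dim, num_classes):
        super().__init__()
        self.encoder = nn.Linear(input_dim, hidden_dim)   # per-segment encoder
        self.scorer = nn.Linear(hidden_dim, 1)            # attention scorer
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        # x: (batch, num_segments, input_dim)
        h = torch.tanh(self.encoder(x))                   # (batch, segments, hidden)
        scores = self.scorer(h).squeeze(-1)               # (batch, segments)
        attn = torch.softmax(scores, dim=-1)              # attention over segments
        context = torch.einsum('bs,bsh->bh', attn, h)     # attention-weighted pooling
        logits = self.classifier(context)
        return logits, attn                               # expose weights for inspection

# Usage: inspect which segment each prediction "attends" to.
model = AttentionClassifier(input_dim=16, hidden_dim=32, num_classes=3)
x = torch.randn(2, 5, 16)              # 2 examples, 5 segments each
logits, attn = model(x)
print(attn.argmax(dim=-1))             # most-attended segment per example
```

In the paper's terminology, interpretability would ask whether the highly weighted segment actually coincides with the segment the label depends on; an accurate model can still place its attention elsewhere, which is the failure mode the abstract highlights.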