Paper Title

Breaking Batch Normalization for better explainability of Deep Neural Networks through Layer-wise Relevance Propagation

Paper Authors

Mathilde Guillemot, Catherine Heusele, Rodolphe Korichi, Sylvianne Schnebert, Liming Chen

Paper Abstract

The lack of transparency of neural networks remains a major obstacle to their use. The Layer-wise Relevance Propagation technique builds heat-maps representing the relevance of each input to the model's decision. The relevance is propagated backward from the last layer to the first layer of the Deep Neural Network. Layer-wise Relevance Propagation does not handle normalization layers; in this work we propose a method to include them. Specifically, we build an equivalent network by fusing normalization layers with convolutional or fully connected layers. Heat-maps obtained with our method on the MNIST and CIFAR-10 datasets are more accurate for convolutional layers. Our study also warns against using Layer-wise Relevance Propagation with networks that combine fully connected layers and normalization layers.
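
The fusion described in the abstract corresponds to the standard batch-norm folding identity: for a layer with weights W and bias b followed by batch normalization with learned scale γ, shift β, running mean μ, running variance σ², and stabilizer ε, the equivalent layer has per-output-channel weights W' = γ·W/√(σ² + ε) and bias b' = γ·(b − μ)/√(σ² + ε) + β. Below is a minimal sketch of this folding in PyTorch; the helper name fuse_conv_bn is ours (not from the paper), and the code assumes inference-time behavior, where batch normalization uses its running statistics.

import torch
import torch.nn as nn

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    # Build a convolution with the same geometry; a bias is always needed
    # because the BN shift folds into it.
    fused = nn.Conv2d(conv.in_channels, conv.out_channels,
                      kernel_size=conv.kernel_size, stride=conv.stride,
                      padding=conv.padding, bias=True)
    with torch.no_grad():
        # Per-channel scale applied by BN at inference: gamma / sqrt(var + eps).
        scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
        # W' = scale * W, broadcast over the output-channel axis.
        fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))
        # b' = scale * (b - mean) + beta.
        bias = conv.bias if conv.bias is not None else torch.zeros(conv.out_channels)
        fused.bias.copy_((bias - bn.running_mean) * scale + bn.bias)
    return fused

# Sanity check: the fused layer matches conv followed by BN in eval mode.
conv, bn = nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8).eval()
x = torch.randn(1, 3, 16, 16)
assert torch.allclose(bn(conv(x)), fuse_conv_bn(conv, bn)(x), atol=1e-5)

Once the network is rewritten this way, standard LRP rules can be applied to the fused layers directly, since no separate normalization layer remains in the backward relevance pass.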
