Paper Title
Locality Guided Neural Networks for Explainable Artificial Intelligence
Paper Authors
Paper Abstract
In current deep network architectures, deeper layers tend to contain hundreds of independent neurons, which makes it hard for humans to understand how they interact with each other. By organizing the neurons by correlation, humans can observe how clusters of neighbouring neurons interact with each other. In this paper, we propose a novel backpropagation training algorithm, called Locality Guided Neural Network (LGNN), that preserves locality between neighbouring neurons within each layer of a deep network. Heavily motivated by the Self-Organizing Map (SOM), the goal is to enforce a local topology on each layer of a deep network such that neighbouring neurons are highly correlated with each other. This method contributes to the domain of Explainable Artificial Intelligence (XAI), which aims to alleviate the black-box nature of current AI methods and make them understandable by humans. Our method aims to achieve XAI in deep learning without changing the structure of current models or requiring any post-processing. This paper focuses on Convolutional Neural Networks (CNNs), but the method can in theory be applied to any type of deep learning architecture. In our experiments, we train various VGG and Wide ResNet (WRN) networks for image classification on CIFAR100. In-depth analyses presenting both qualitative and quantitative results demonstrate that our method is capable of enforcing a topology on each layer while achieving a small increase in classification accuracy.
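
To make the idea concrete, the sketch below shows one plausible way such a locality constraint could be combined with standard CNN training: a SOM-style penalty that pulls each convolutional filter toward its immediate neighbours, added to the task loss before backpropagation. The abstract does not specify LGNN's actual loss or update rule, so the locality_loss function, the neighbourhood window, and the weighting lambda_loc here are illustrative assumptions, not the authors' formulation.

    import torch
    import torch.nn as nn

    def locality_loss(conv_weight, window=2):
        # SOM-inspired penalty: encourage neighbouring output filters of a
        # conv layer to be similar. conv_weight: (out_ch, in_ch, kH, kW).
        flat = conv_weight.flatten(start_dim=1)   # one row per filter
        loss = flat.new_zeros(())
        for offset in range(1, window + 1):
            # Difference between each filter and its neighbour `offset`
            # positions away, down-weighted as the distance grows.
            diff = flat[offset:] - flat[:-offset]
            loss = loss + diff.pow(2).sum(dim=1).mean() / offset
        return loss

    # Usage in an ordinary training step; lambda_loc is a hypothetical knob.
    model = nn.Sequential(
        nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 100),
    )
    criterion = nn.CrossEntropyLoss()
    lambda_loc = 1e-3

    def training_step(x, y):
        loss = criterion(model(x), y)
        for m in model.modules():
            if isinstance(m, nn.Conv2d):
                loss = loss + lambda_loc * locality_loss(m.weight)
        return loss   # backpropagate as usual: loss.backward()

Minimizing a penalty of this kind alongside the task loss pushes adjacent filters to respond to related features, which is the layer-wise topology the paper sets out to enforce and visualize; note that it leaves the model architecture itself unchanged, consistent with the abstract's claim of requiring no structural changes or post-processing.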