Paper Title
TIME: A Transparent, Interpretable, Model-Adaptive and Explainable Neural Network for Dynamic Physical Processes
Authors
Abstract
Partial Differential Equations are infinite-dimensional encoded representations of physical processes. However, assimilating multiple observation datasets into a coupled representation presents significant challenges. We present a fully convolutional architecture that captures the invariant structure of the domain to reconstruct the observable system. The proposed architecture has significantly fewer weights than other networks for such problems. Our intent is to learn coupled dynamic processes, interpreted as deviations from the true kernels of isolated processes, for model adaptivity. Experimental analysis shows that our architecture is robust and transparent in capturing process kernels and system anomalies. We also show that a high-weight representation is not only redundant but also impairs network interpretability. Our design is guided by domain knowledge, with isolated process representations serving as ground truths for verification. These allow us to identify redundant kernels and their manifestations in activation maps, guiding better designs that are both interpretable and explainable, unlike traditional deep nets.
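The central idea of "coupled processes as deviations from true kernels" can be illustrated with a minimal sketch. This is not the paper's implementation; the kernel values, step ratio `alpha`, and the advection-like deviation term are hypothetical, chosen only to show how a known finite-difference stencil of an isolated process serves as a verifiable ground truth against which a learned kernel's deviation is read off directly.

```python
import numpy as np

# True kernel of an isolated diffusion process: the discrete 5-point
# Laplacian scaled by alpha = D*dt/dx^2 (hypothetical value for illustration).
alpha = 0.1
true_kernel = alpha * np.array([[0.0,  1.0, 0.0],
                                [1.0, -4.0, 1.0],
                                [0.0,  1.0, 0.0]])

def conv2d(u, k):
    """Valid (unpadded) 2D correlation of field u with a 3x3 kernel k."""
    h, w = u.shape[0] - 2, u.shape[1] - 2
    out = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * u[i:i + h, j:j + w]
    return out

# A coupled process is modelled as the true kernel plus a small learned
# deviation; interpretability comes from inspecting that deviation directly.
deviation = np.zeros((3, 3))
deviation[1, 2] = 0.02           # e.g. a weak advection term picked up from data
coupled_kernel = true_kernel + deviation

# One explicit update step of the interior of the field with the coupled kernel.
u = np.random.default_rng(0).random((8, 8))
u_next = u[1:-1, 1:-1] + conv2d(u, coupled_kernel)
```

Because the isolated-process stencil is known analytically, the deviation `coupled_kernel - true_kernel` is itself the explanation of the coupling, rather than an opaque set of weights.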