Paper Title

Design of Explainability Module with Experts in the Loop for Visualization and Dynamic Adjustment of Continual Learning

Paper Authors

Yujiang He, Zhixin Huang, Bernhard Sick

Paper Abstract

Continual learning can enable neural networks to evolve by learning new tasks sequentially in task-changing scenarios. However, two general and related challenges should be overcome in further research before this technique is applied to real-world applications. Firstly, novelties newly collected from the data stream in applications could contain anomalies that are meaningless for continual learning. Instead of treating them as a new task for updating, we have to filter out such anomalies so that extremely high-entropy data does not disturb convergence. Secondly, little effort has been devoted to research on the explainability of continual learning, which leaves the updated neural networks lacking transparency and credibility. Detailed explanations of the process and results of continual learning can help experts judge and make decisions. Therefore, we propose the conceptual design of an explainability module with experts in the loop, based on techniques such as dimension reduction, visualization, and evaluation strategies. This work aims to overcome the challenges above by sufficiently explaining and visualizing the identified anomalies and the updated neural network. With the help of this module, experts can be more confident in decision-making regarding anomaly filtering, dynamic adjustment of hyperparameters, data backup, etc.
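
The abstract only sketches the filtering and visualization steps conceptually. As a rough illustration, the following is a minimal, hypothetical Python sketch of how such a pipeline could look: predictive entropy stands in for the anomaly score on newly collected novelties, and PCA stands in for the dimension-reduction step that prepares samples for expert inspection. The function names (`predictive_entropy`, `triage_novelties`, `project_for_review`), the quantile threshold, and the random stand-in data are all assumptions for illustration, not part of the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    """Shannon entropy of per-sample class probabilities, shape (n, k)."""
    eps = 1e-12  # guard against log(0)
    return -np.sum(probs * np.log(probs + eps), axis=1)

def triage_novelties(probs: np.ndarray, quantile: float = 0.8):
    """Split novel samples: low-entropy ones are kept as a candidate new
    task; high-entropy ones are flagged for expert review. The quantile
    threshold is an arbitrary illustrative choice."""
    h = predictive_entropy(probs)
    flagged = h > np.quantile(h, quantile)
    return ~flagged, flagged, h

def project_for_review(features: np.ndarray) -> np.ndarray:
    """Reduce flagged samples to 2D for visual inspection by an expert
    (PCA stands in for any dimension-reduction technique)."""
    return PCA(n_components=2).fit_transform(features)

# Usage with random stand-in data (no real model or data stream here).
rng = np.random.default_rng(0)
logits = rng.normal(size=(100, 5))             # fake model outputs
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
features = rng.normal(size=(100, 16))          # fake sample features
keep, flagged, h = triage_novelties(probs)
coords = project_for_review(features[flagged])  # 2D points to visualize
print(f"{flagged.sum()} of {len(h)} novelties flagged for expert review")
```

In the experts-in-the-loop design the abstract describes, the expert would then inspect the visualized projection and decide per sample whether to discard it as an anomaly or keep it for the next continual-learning update.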
