Paper Title

Interpreting Neural Networks as Gradual Argumentation Frameworks (Including Proof Appendix)

Authors

Potyka, Nico

Abstract

We show that an interesting class of feed-forward neural networks can be understood as quantitative argumentation frameworks. This connection creates a bridge between research in Formal Argumentation and Machine Learning. We generalize the semantics of feed-forward neural networks to acyclic graphs and study the resulting computational and semantical properties in argumentation graphs. As it turns out, these semantics give stronger guarantees than existing semantics that have been tailor-made for the argumentation setting. From a machine-learning perspective, the connection does not seem immediately helpful. While it gives intuitive meaning to some feed-forward neural networks, they remain difficult to understand due to their size and density. However, the connection does seem helpful for combining background knowledge in the form of sparse argumentation networks with dense neural networks that have been trained for complementary purposes, and for learning the parameters of quantitative argumentation frameworks in an end-to-end fashion from data.
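The correspondence the abstract describes can be illustrated with a minimal sketch (our own illustration, not code from the paper): argument strengths in an acyclic quantitative bipolar framework are computed like neuron activations in a feed-forward network, with a logistic aggregation of weighted parent strengths. All names here (`evaluate`, `base_scores`, the specific weights) are hypothetical, and the bias term is chosen so that an argument without attackers or supporters keeps its base score.

```python
import math

def logistic(x):
    """Standard logistic (sigmoid) activation, as in an MLP."""
    return 1.0 / (1.0 + math.exp(-x))

def evaluate(base_scores, edges, order):
    """Compute argument strengths over an acyclic graph.

    base_scores: dict mapping argument -> base score in (0, 1)
    edges: dict mapping argument -> list of (parent, weight);
           negative weights act as attacks, positive as supports
    order: a topological order of the arguments
    """
    strength = {}
    for a in order:
        # Bias = logit of the base score, so an isolated argument's
        # strength equals its base score after the logistic is applied.
        z = math.log(base_scores[a] / (1.0 - base_scores[a]))
        for parent, w in edges.get(a, []):
            z += w * strength[parent]
        strength[a] = logistic(z)
    return strength

# Tiny hypothetical example: c is supported by a and attacked by b.
base = {"a": 0.8, "b": 0.6, "c": 0.5}
edges = {"c": [("a", 2.0), ("b", -2.0)]}
s = evaluate(base, edges, ["a", "b", "c"])
```

Because `a` is stronger than `b` and the edge weights are symmetric, the support outweighs the attack and `c` ends up slightly above its base score of 0.5. The same forward pass, read the other way, interprets each hidden neuron of a feed-forward network as an argument whose incoming weights encode attacks and supports.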
