Paper Title

A Scalable, Interpretable, Verifiable & Differentiable Logic Gate Convolutional Neural Network Architecture From Truth Tables

Paper Authors

Adrien Benamira, Tristan Guérand, Thomas Peyrin, Trevor Yap, Bryan Hooi

Paper Abstract

We propose $\mathcal{T}$ruth $\mathcal{T}$able net ($\mathcal{TT}$net), a novel Convolutional Neural Network (CNN) architecture that addresses, by design, the open challenges of interpretability, formal verification, and logic gate conversion. $\mathcal{TT}$net is built using CNNs' filters that are equivalent to tractable truth tables and that we call Learning Truth Table (LTT) blocks. The dual form of LTT blocks allows the truth tables to be easily trained with gradient descent and makes these CNNs easy to interpret, verify and infer. Specifically, $\mathcal{TT}$net is a deep CNN model that can be automatically represented, after post-training transformation, as a sum of Boolean decision trees, or as a sum of Disjunctive/Conjunctive Normal Form (DNF/CNF) formulas, or as a compact Boolean logic circuit. We demonstrate the effectiveness and scalability of $\mathcal{TT}$net on multiple datasets, showing comparable interpretability to decision trees, fast complete/sound formal verification, and scalable logic gate representation, all compared to state-of-the-art methods. We believe this work represents a step towards making CNNs more transparent and trustworthy for real-world critical applications.
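
To make the core claim concrete, the sketch below illustrates why a filter with a small, binarized receptive field is equivalent to a tractable truth table: with $n$ binary inputs there are only $2^n$ possible input patterns, so the filter's thresholded response can be enumerated exhaustively and read off as a DNF formula. This is a minimal illustration of the idea, not the authors' implementation; the function names `filter_to_truth_table` and `truth_table_to_dnf`, the 2x2 filter size, and the simple thresholding rule are assumptions made for the example, whereas the actual $\mathcal{TT}$net trains LTT blocks in their differentiable dual form and performs this kind of conversion post-training.

```python
# Minimal sketch (not the authors' code): a CNN filter whose inputs are
# binary and whose receptive field is small (here 4 inputs, i.e. a 2x2
# patch) can be converted into a truth table by exhaustive enumeration,
# and the truth table read off as a (non-minimized) DNF formula.
from itertools import product

import numpy as np


def filter_to_truth_table(weights, bias, threshold=0.0):
    """Enumerate all binary inputs of a small filter and binarize its output.

    weights: 1-D array of filter weights (receptive field flattened).
    Returns a dict mapping each binary input tuple to a Boolean output.
    """
    n = len(weights)
    assert n <= 16, "enumeration is only tractable for small receptive fields"
    table = {}
    for bits in product([0, 1], repeat=n):
        pre_activation = float(np.dot(weights, bits) + bias)
        table[bits] = pre_activation > threshold  # step-style binarization
    return table


def truth_table_to_dnf(table, names=None):
    """Read the truth table off as a DNF string: one conjunction per True row."""
    n = len(next(iter(table)))
    names = names or [f"x{i}" for i in range(n)]
    clauses = []
    for bits, out in table.items():
        if out:
            lits = [names[i] if b else f"~{names[i]}" for i, b in enumerate(bits)]
            clauses.append("(" + " & ".join(lits) + ")")
    return " | ".join(clauses) if clauses else "False"


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=4)   # toy 2x2 filter, flattened
    b = float(rng.normal())
    tt = filter_to_truth_table(w, b)
    print(f"{sum(tt.values())}/{len(tt)} input patterns activate the filter")
    print(truth_table_to_dnf(tt))
```

In a real LTT block the enumeration stays tractable because the receptive field is kept deliberately small; the resulting per-filter Boolean formulas are then composed into the decision-tree, DNF/CNF, and logic-circuit representations mentioned in the abstract.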
