Paper Title

Interpretable Deep Causal Learning for Moderation Effects

Authors

Alberto Caron, Gianluca Baio, Ioanna Manolopoulou

Abstract

In this extended abstract paper, we address the problem of interpretability and targeted regularization in causal machine learning models. In particular, we focus on the problem of estimating individual causal/treatment effects under observed confounders, which can be controlled for and can moderate the effect of the treatment on the outcome of interest. Black-box ML models adjusted for the causal setting generally perform well in this task, but they lack interpretable output identifying the main drivers of treatment heterogeneity and their functional relationship. We propose a novel deep counterfactual learning architecture for estimating individual treatment effects that can simultaneously: i) convey targeted regularization on, and quantify uncertainty around, the quantity of interest (i.e., the Conditional Average Treatment Effect); ii) disentangle baseline prognostic and moderating effects of the covariates, and output interpretable score functions describing their relationship with the outcome. Finally, we demonstrate the use of the method via a simple simulated experiment.
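The core decomposition the abstract describes, separating a baseline prognostic term from a moderating (treatment-effect) term, can be illustrated with a minimal numpy sketch. This is not the paper's deep architecture: it replaces the neural network with a linear interaction model on simulated data (all functional forms, coefficients, and variable names below are illustrative assumptions), but it shows how disentangling the two components lets one read off an interpretable CATE function.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 2000, 3
X = rng.normal(size=(n, p))

# Hypothetical simulation: outcome = prognostic part + t * moderating part.
baseline = 1.0 + X @ np.array([0.5, -0.3, 0.2])   # prognostic effect m(x)
tau = 2.0 + 1.5 * X[:, 0]                         # true CATE tau(x), driven by x1
t = rng.binomial(1, 0.5, size=n)                  # randomized treatment for simplicity
y = baseline + t * tau + rng.normal(scale=0.5, size=n)

# Disentangled linear model: design [1, X, t, t*X].
# Coefficients on the t-block directly parameterize tau(x),
# giving an interpretable score function for treatment moderation.
D = np.column_stack([np.ones(n), X, t, t[:, None] * X])
beta, *_ = np.linalg.lstsq(D, y, rcond=None)

tau_hat = beta[p + 1] + X @ beta[p + 2:]          # estimated CATE per individual
print(np.corrcoef(tau, tau_hat)[0, 1])            # near 1 on this simulation
```

In the paper's setting the two blocks would instead be flexible network heads with targeted regularization and uncertainty quantification on the CATE head, but the disentanglement principle (and its interpretability payoff) is the same.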
