Paper Title


Performative Prediction

Paper Authors

Perdomo, Juan C., Zrnic, Tijana, Mendler-Dünner, Celestine, Hardt, Moritz

Paper Abstract


When predictions support decisions they may influence the outcome they aim to predict. We call such predictions performative; the prediction influences the target. Performativity is a well-studied phenomenon in policy-making that has so far been neglected in supervised learning. When ignored, performativity surfaces as undesirable distribution shift, routinely addressed with retraining. We develop a risk minimization framework for performative prediction bringing together concepts from statistics, game theory, and causality. A conceptual novelty is an equilibrium notion we call performative stability. Performative stability implies that the predictions are calibrated not against past outcomes, but against the future outcomes that manifest from acting on the prediction. Our main results are necessary and sufficient conditions for the convergence of retraining to a performatively stable point of nearly minimal loss. In full generality, performative prediction strictly subsumes the setting known as strategic classification. We thus also give the first sufficient conditions for retraining to overcome strategic feedback effects.
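As a rough illustration of the retraining dynamic the abstract describes, here is a minimal sketch, not taken from the paper's code: a toy setting in which the data distribution D(theta) shifts with the deployed model theta, and "retraining" re-fits the model on the data its own deployment induced. The Gaussian distribution, squared loss, and the shift-strength parameter eps are all illustrative assumptions.

```python
import numpy as np

# Toy performative setting (illustrative only): deploying a scalar model theta
# changes the data distribution to D(theta) = N(mu + eps * theta, 1).
# With squared loss, retraining on data from D(theta) just re-fits the sample mean.

rng = np.random.default_rng(0)
mu, eps = 1.0, 0.5          # eps controls how strongly the prediction shifts the data
theta = 0.0                 # initially deployed model

for t in range(20):
    # Observe data under the currently deployed model: Z ~ D(theta)
    z = rng.normal(mu + eps * theta, 1.0, size=100_000)
    # Retraining = empirical risk minimization on the observed data (here, the mean)
    theta = z.mean()

# For eps < 1 the iterates approach the fixed point mu / (1 - eps), a model that is
# optimal for the very distribution its own deployment induces.
print(f"retrained theta ~= {theta:.3f}, fixed point = {mu / (1 - eps):.3f}")
```

In this toy model each retraining step maps theta to roughly mu + eps * theta, so the iterates converge geometrically whenever eps < 1; the limit is stable against the distribution it induces, which is the flavor of the performative-stability notion and retraining-convergence result summarized in the abstract.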
