Paper Title

Fair When Trained, Unfair When Deployed: Observable Fairness Measures are Unstable in Performative Prediction Settings

Paper Authors

Alan Mishler, Niccolò Dalmasso

Paper Abstract

Many popular algorithmic fairness measures depend on the joint distribution of predictions, outcomes, and a sensitive feature like race or gender. These measures are sensitive to distribution shift: a predictor which is trained to satisfy one of these fairness definitions may become unfair if the distribution changes. In performative prediction settings, however, predictors are precisely intended to induce distribution shift. For example, in many applications in criminal justice, healthcare, and consumer finance, the purpose of building a predictor is to reduce the rate of adverse outcomes such as recidivism, hospitalization, or default on a loan. We formalize the effect of such predictors as a type of concept shift (a particular variety of distribution shift) and show both theoretically and via simulated examples how this causes predictors which are fair when they are trained to become unfair when they are deployed. We further show how many of these issues can be avoided by using fairness definitions that depend on counterfactual rather than observable outcomes.
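To make the mechanism concrete, here is a minimal simulation sketch in the spirit of the abstract (not the paper's own experiment; the group-specific score distributions, the 20% false-positive-rate target, and the "intervention halves the adverse-outcome probability" effect are all illustrative assumptions). A predictor is thresholded per group to equalize false positive rates on training data; at deployment, flagged individuals receive an intervention that lowers their outcome probability, a concept shift in P(Y | X). The observable FPR gap then reopens, while an FPR computed against counterfactual (untreated) outcomes remains at parity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Sensitive feature A and a risk score X whose distribution differs by group
# (illustrative assumption: group 1's scores are shifted up by 0.5).
a = rng.integers(0, 2, size=n)
x = rng.normal(loc=np.where(a == 1, 0.5, 0.0), scale=1.0, size=n)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training-time outcomes: adverse outcome Y drawn from the original P(Y | X).
p_train = sigmoid(x)
y_train = rng.binomial(1, p_train)

# "Fair when trained": choose group-specific thresholds so the false positive
# rate P(Yhat = 1 | Y = 0, A = g) equals 20% in both groups on training data.
target_fpr = 0.20
thresholds = {
    g: np.quantile(x[(a == g) & (y_train == 0)], 1.0 - target_fpr)
    for g in (0, 1)
}
yhat = (x > np.where(a == 1, thresholds[1], thresholds[0])).astype(int)

def fpr_by_group(yhat, y, a):
    """False positive rate within each group."""
    return {g: yhat[(a == g) & (y == 0)].mean() for g in (0, 1)}

print("FPR at training time:  ", fpr_by_group(yhat, y_train, a))

# Performative deployment: flagged individuals receive an intervention that
# halves their probability of the adverse outcome. This changes P(Y | X)
# -- a concept shift -- while X and the predictions stay fixed.
p_deploy = np.where(yhat == 1, 0.5 * p_train, p_train)
y_deploy = rng.binomial(1, p_deploy)

# Evaluated against post-deployment outcomes, the same predictor no longer
# has equal false positive rates across groups.
print("FPR at deployment time:", fpr_by_group(yhat, y_deploy, a))

# A counterfactual measure -- FPR with respect to the outcome each individual
# would have had *without* the intervention -- stays (approximately) at parity.
y_counterfactual = rng.binomial(1, p_train)
print("Counterfactual FPR:    ", fpr_by_group(yhat, y_counterfactual, a))
```

The sketch only illustrates the qualitative point of the abstract: observable-outcome fairness criteria are evaluated against a distribution that the deployed predictor itself perturbs, whereas a counterfactual criterion is defined against the untreated outcome distribution and is therefore unaffected by this particular shift.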
