Paper title
De-biasing "bias" measurement
Paper authors
Paper abstract
When a model's performance differs across socially or culturally relevant groups--like race, gender, or the intersections of many such groups--it is often called "biased." While much of the work in algorithmic fairness over the last several years has focused on developing various definitions of model fairness (the absence of group-wise model performance disparities) and on eliminating such "bias," much less work has gone into rigorously measuring it. In practice, it is important to have high-quality, human-digestible measures of model performance disparities, together with uncertainty quantification for them, that can serve as inputs into multi-faceted decision-making processes. In this paper, we show both mathematically and through simulation that many of the metrics used to measure group-wise model performance disparities are themselves statistically biased estimators of the underlying quantities they purport to represent. We argue that this can lead to misleading conclusions about the relative group-wise model performance disparities along different dimensions, especially when some sensitive variables consist of categories with few members. We propose the "double-corrected" variance estimator, which provides unbiased estimates of, and uncertainty quantification for, the variance of model performance across groups. It is conceptually simple and easily implementable without statistical software packages or numerical optimization. We demonstrate the utility of this approach through simulation and show on a real dataset that, while statistically biased estimators of group-wise model performance disparities indicate statistically significant between-group differences, those differences are no longer statistically significant once the estimator's statistical bias is accounted for.
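The abstract does not spell out the estimator's formula, so the following is only a minimal illustrative sketch of the general idea it describes: the naive (plug-in) variance of per-group accuracies is biased upward because each group accuracy is itself a noisy estimate, and subtracting an estimate of that sampling noise removes the bias. The function name, variable names, and the Bernoulli-accuracy assumption are illustrative, not taken from the paper.

```python
import numpy as np

def group_accuracy_variance(correct, groups):
    """Sketch: naive vs. bias-corrected variance of accuracy across groups.

    `correct` is a 0/1 array of per-example correctness and `groups` an array
    of group labels (each group assumed to have at least 2 examples).
    Returns (naive, corrected) estimates of the across-group variance.
    """
    labels = np.unique(groups)
    acc = np.array([correct[groups == g].mean() for g in labels])   # per-group accuracy
    n = np.array([(groups == g).sum() for g in labels])             # per-group sample size

    # Naive plug-in estimator: sample variance of the estimated group accuracies.
    naive = acc.var(ddof=1)

    # Each group accuracy is a mean of 0/1 outcomes, so its sampling variance is
    # roughly p(1 - p) / n; p_hat(1 - p_hat) / (n - 1) is an unbiased estimate of it.
    # Subtracting the average estimated sampling noise removes the upward
    # statistical bias of the plug-in estimator (it may come out slightly
    # negative in finite samples when the true across-group variance is near 0).
    sampling_var = acc * (1 - acc) / (n - 1)
    corrected = naive - sampling_var.mean()

    return naive, corrected
```

With few members in some groups, `sampling_var` dominates and the naive and corrected estimates can diverge sharply, which is consistent with the abstract's warning about sensitive variables containing small categories.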