Paper Title
Fair Bayes-Optimal Classifiers Under Predictive Parity
Paper Authors
Paper Abstract
Increasing concerns about disparate effects of AI have motivated a great deal of work on fair machine learning. Existing works mainly focus on independence- and separation-based measures (e.g., demographic parity, equality of opportunity, equalized odds), while sufficiency-based measures such as predictive parity are much less studied. This paper considers predictive parity, which requires equalizing the probability of success given a positive prediction among different protected groups. We prove that, if the overall performances of different groups vary only moderately, all fair Bayes-optimal classifiers under predictive parity are group-wise thresholding rules. Perhaps surprisingly, this may not hold if group performance levels vary widely; in this case we find that predictive parity among protected groups may lead to within-group unfairness. We then propose an algorithm we call FairBayes-DPP, aiming to ensure predictive parity when our condition is satisfied. FairBayes-DPP is an adaptive thresholding algorithm that aims to achieve predictive parity, while also seeking to maximize test accuracy. We provide supporting experiments conducted on synthetic and empirical data.
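To make the fairness criterion concrete, here is a minimal sketch of the two notions the abstract relies on: predictive parity (equal probability of success given a positive prediction, i.e., equal positive predictive value across protected groups) and a group-wise thresholding rule. The function names and the NumPy-based interface are illustrative assumptions, not the authors' FairBayes-DPP implementation.

```python
import numpy as np

def predictive_parity_gap(y_true, y_pred, groups):
    """Largest gap in P(Y=1 | Yhat=1, A=a) across protected groups.

    Predictive parity holds exactly when this gap is zero.
    (Names and interface are illustrative, not from the paper.)
    """
    ppvs = []
    for g in np.unique(groups):
        positives = (groups == g) & (y_pred == 1)
        if positives.sum() > 0:
            # Positive predictive value for group g
            ppvs.append(y_true[positives].mean())
    return max(ppvs) - min(ppvs)

def group_threshold_classifier(scores, groups, thresholds):
    """Group-wise thresholding rule: predict 1 when the score for x
    exceeds the threshold assigned to x's protected group."""
    cutoffs = np.array([thresholds[g] for g in groups])
    return (scores >= cutoffs).astype(int)
```

An adaptive-thresholding algorithm in the spirit of FairBayes-DPP would search over the group-specific `thresholds` to drive `predictive_parity_gap` toward zero while keeping accuracy as high as possible.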