Paper Title

Navigating Ensemble Configurations for Algorithmic Fairness

Paper Authors

Michael Feffer, Martin Hirzel, Samuel C. Hoffman, Kiran Kate, Parikshit Ram, Avraham Shinnar

Paper Abstract

Bias mitigators can improve algorithmic fairness in machine learning models, but their effect on fairness is often not stable across data splits. A popular approach to train more stable models is ensemble learning, but unfortunately, it is unclear how to combine ensembles with mitigators to best navigate trade-offs between fairness and predictive performance. To that end, we built an open-source library enabling the modular composition of 8 mitigators, 4 ensembles, and their corresponding hyperparameters, and we empirically explored the space of configurations on 13 datasets. We distilled our insights from this exploration in the form of a guidance diagram for practitioners that we demonstrate is robust and reproducible.

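The paper's open-source library is not reproduced here. As a rough illustration of the idea of composing a bias mitigator with an ensemble, the following sketch uses only scikit-learn and NumPy: it trains a bagging ensemble whose base trees receive Kamiran-Calders reweighing sample weights (a standard pre-estimator mitigation technique), then reports accuracy alongside disparate impact. The synthetic data, the reweighing_weights and disparate_impact helpers, and all parameter choices are assumptions made for this example, not the paper's configuration.

# Minimal illustrative sketch (not the paper's library): a reweighing-style
# mitigator combined with a bagging ensemble, plain scikit-learn and NumPy.
# Synthetic data and helper names are assumptions of this example.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 4000
group = rng.integers(0, 2, size=n)            # protected attribute (0 = unprivileged)
x = rng.normal(size=(n, 5)) + group[:, None]  # features correlated with the group
y = (x.sum(axis=1) + rng.normal(size=n) > 2).astype(int)
X = np.column_stack([x, group])

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0)

def reweighing_weights(groups, labels):
    # Kamiran-Calders reweighing: w(g, l) = P(g) * P(l) / P(g, l).
    w = np.ones(len(labels), dtype=float)
    for g in (0, 1):
        for l in (0, 1):
            mask = (groups == g) & (labels == l)
            observed = mask.mean()
            if observed > 0:
                w[mask] = (groups == g).mean() * (labels == l).mean() / observed
    return w

def disparate_impact(y_pred, groups):
    # Ratio of favorable-outcome rates, unprivileged over privileged (1.0 is ideal).
    return y_pred[groups == 0].mean() / y_pred[groups == 1].mean()

# Mitigator inside an ensemble: every bagged tree is fit with the
# fairness-motivated sample weights.
ensemble = BaggingClassifier(DecisionTreeClassifier(max_depth=5),
                             n_estimators=25, random_state=0)
ensemble.fit(X_tr, y_tr, sample_weight=reweighing_weights(g_tr, y_tr))

pred = ensemble.predict(X_te)
print("accuracy:", (pred == y_te).mean())
print("disparate impact:", disparate_impact(pred, g_te))

Swapping the mitigation step (for example, a post-estimator mitigator instead of reweighing), the ensemble type, or their hyperparameters shifts the fairness-versus-accuracy trade-off; that space of configurations is what the paper explores empirically across 13 datasets.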