Paper Title
FairALM: Augmented Lagrangian Method for Training Fair Models with Little Regret
Paper Authors
Paper Abstract
Algorithmic decision making based on computer vision and machine learning technologies continues to permeate our lives. But issues related to the biases of these models, and the extent to which they treat certain segments of the population unfairly, have led to concern in the general public. It is now accepted that, because of biases in the datasets we present to the models, fairness-oblivious training will lead to unfair models. An interesting topic is the study of mechanisms by which the de novo design or training of the model can be informed by fairness measures. Here, we study mechanisms that impose fairness concurrently while training the model. While existing fairness-based approaches in vision have largely relied on training adversarial modules together with the primary classification/regression task, in an effort to remove the influence of the protected attribute or variable, we show how ideas based on well-known optimization concepts can provide a simpler alternative. In our proposed scheme, imposing fairness just requires specifying the protected attribute and utilizing our optimization routine. We provide a detailed technical analysis and present experiments demonstrating that various fairness measures from the literature can be reliably imposed on a number of training tasks in vision in an interpretable manner.
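To make the general idea concrete, the following is a minimal sketch, not the authors' implementation, of imposing a fairness constraint through an augmented Lagrangian during training. The logistic-regression model, the demographic-parity style constraint, the function name train_fair, and all hyperparameters are illustrative assumptions.

# Minimal sketch (not FairALM itself): train a logistic-regression model
# while softly enforcing an equality constraint c(w) = 0 via an augmented
# Lagrangian: loss + lam * c + (rho / 2) * c**2, with dual ascent on lam.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_fair(X, y, group, n_iters=500, lr=0.1, rho=1.0):
    """X: features, y: binary labels, group: protected attribute (0/1)."""
    n, d = X.shape
    w = np.zeros(d)
    lam = 0.0                      # Lagrange multiplier (dual variable)
    a, b = group == 0, group == 1  # masks for the two protected groups
    for _ in range(n_iters):
        p = sigmoid(X @ w)
        # Constraint value: gap in mean predicted scores between groups.
        c = p[a].mean() - p[b].mean()
        dp = p * (1.0 - p)         # derivative of the sigmoid
        grad_c = (X[a] * dp[a, None]).mean(0) - (X[b] * dp[b, None]).mean(0)
        # Gradient of the augmented Lagrangian with respect to w.
        grad_loss = X.T @ (p - y) / n
        grad = grad_loss + (lam + rho * c) * grad_c
        w -= lr * grad             # primal descent step
        lam += rho * c             # dual ascent step on the multiplier
    return w

The dual ascent step on the multiplier is what stands in for an adversarial module in this sketch: the penalty on the constraint violation grows the longer the violation persists, so fairness is enforced by the optimization routine itself rather than by a second learned network.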