Paper Title
Mitigating Gender Bias in Face Recognition Using the von Mises-Fisher Mixture Model
Paper Authors
Paper Abstract
In spite of the high performance and reliability of deep learning algorithms in a wide range of everyday applications, many investigations tend to show that numerous models exhibit biases, discriminating against specific subgroups of the population (e.g. gender, ethnicity). This urges practitioners to develop fair systems with uniform or comparable performance across sensitive groups. In this work, we investigate the gender bias of deep Face Recognition networks. In order to measure this bias, we introduce two new metrics, $\mathrm{BFAR}$ and $\mathrm{BFRR}$, which better reflect the inherent deployment needs of Face Recognition systems. Motivated by geometric considerations, we mitigate gender bias through a new post-processing methodology that transforms the deep embeddings of a pre-trained model to give more representation power to discriminated subgroups. It consists of training a shallow neural network by minimizing a Fair von Mises-Fisher loss whose hyperparameters account for the intra-class variance of each gender. Interestingly, we empirically observe that these hyperparameters are correlated with our fairness metrics. Indeed, extensive numerical experiments on a variety of datasets show that a careful selection of these hyperparameters significantly reduces gender bias. The code used for the experiments can be found at https://github.com/JRConti/EthicalModule_vMF.
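To make the method concrete, the following is a minimal sketch (not the authors' implementation) of a von Mises-Fisher classification loss over unit-normalized embeddings: each class $j$ has a mean direction $\mu_j$ and a concentration $\kappa_j$, and the loss is the negative log-likelihood of the correct class under the resulting vMF mixture. In the paper's setting, $\kappa_j$ would be set per gender to account for intra-class variance; the function names and test values below are illustrative assumptions.

```python
import numpy as np
from scipy.special import ive  # exponentially scaled modified Bessel function I_nu


def log_vmf_const(kappa, d):
    """log C_d(kappa), the log normalizing constant of a vMF density on S^{d-1}.

    C_d(kappa) = kappa^{d/2-1} / ((2*pi)^{d/2} * I_{d/2-1}(kappa)).
    Uses ive (= I_nu(kappa) * exp(-kappa)) for numerical stability.
    """
    nu = d / 2.0 - 1.0
    log_bessel = np.log(ive(nu, kappa)) + kappa  # log I_{d/2-1}(kappa)
    return nu * np.log(kappa) - (d / 2.0) * np.log(2 * np.pi) - log_bessel


def fair_vmf_loss(z, mus, kappas, y):
    """Negative log-likelihood of class y for one embedding z.

    z      : (d,) embedding, normalized onto the unit hypersphere below
    mus    : (K, d) unit-norm class mean directions
    kappas : (K,) per-class concentrations (in the paper, chosen per gender)
    y      : index of the ground-truth class
    """
    z = z / np.linalg.norm(z)
    d = z.shape[0]
    # Class logit = kappa_j * <mu_j, z> + log C_d(kappa_j)
    logits = kappas * (mus @ z) + log_vmf_const(kappas, d)
    logits = logits - logits.max()  # stabilize the softmax
    return -(logits[y] - np.log(np.exp(logits).sum()))
```

An embedding aligned with its class mean direction should incur a much smaller loss than one assigned to a different class, and raising a class's $\kappa$ sharpens its density around $\mu_j$, which is the knob the method tunes per gender.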