Paper Title
Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning
Paper Authors
Paper Abstract
Deep neural networks (DNNs) are increasingly used in real-world applications (e.g., facial recognition). This has resulted in concerns about the fairness of decisions made by these models. Various notions and measures of fairness have been proposed to ensure that a decision-making system does not disproportionately harm (or benefit) particular subgroups of the population. In this paper, we argue that traditional notions of fairness based only on a model's outputs are not sufficient when the model is vulnerable to adversarial attacks. We argue that in some cases it may be easier for an attacker to target a particular subgroup, resulting in a form of \textit{robustness bias}. We show that measuring robustness bias is a challenging task for DNNs and propose two methods to measure this form of bias. We then conduct an empirical study of state-of-the-art neural networks on commonly used real-world datasets such as CIFAR-10, CIFAR-100, Adience, and UTKFace, and show that in almost all cases there are subgroups (in some cases defined by sensitive attributes such as race and gender) that are less robust and are thus at a disadvantage. We argue that this kind of bias arises from both the data distribution and the highly complex nature of the learned decision boundary in the case of DNNs, making mitigation of such biases a non-trivial task. Our results show that robustness bias is an important criterion to consider while auditing real-world systems that rely on DNNs for decision making. Code to reproduce all our results can be found here: \url{https://github.com/nvedant07/Fairness-Through-Robustness}
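To make the notion of robustness bias concrete, here is a minimal sketch of how one might compare subgroup robustness empirically. This is not the paper's implementation (see the linked repository for that): it uses a simple FGSM-style grid search for the smallest prediction-flipping perturbation as a crude proxy for each example's distance to the decision boundary, and all names here (`minimal_fgsm_eps`, `robustness_by_group`, `eps_grid`, `groups`) are illustrative assumptions.

```python
# Illustrative sketch, not the paper's method: estimate per-subgroup
# robustness as the average size of the smallest FGSM perturbation that
# flips the model's prediction, then compare averages across subgroups.
import torch
import torch.nn.functional as F

def minimal_fgsm_eps(model, x, y, eps_grid):
    """For each example, return the smallest eps in eps_grid whose FGSM
    perturbation changes the model's prediction (inf if none does)."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad_sign = torch.autograd.grad(loss, x)[0].sign()
    found = torch.full((x.size(0),), float("inf"), device=x.device)
    for eps in sorted(eps_grid):  # try smallest perturbations first
        with torch.no_grad():
            # Assumes inputs live in [0, 1]; adjust the clamp otherwise.
            pred = model((x + eps * grad_sign).clamp(0, 1)).argmax(dim=1)
        flipped = (pred != y) & torch.isinf(found)
        found[flipped] = eps
    return found

def robustness_by_group(model, x, y, groups, eps_grid):
    """Average minimal perturbation per subgroup (hypothetical `groups`
    holds one subgroup id per example). A markedly smaller average for a
    subgroup suggests robustness bias against it. Averages of inf mean
    no perturbation in the grid flipped any of that subgroup's examples."""
    eps_min = minimal_fgsm_eps(model, x, y, eps_grid)
    return {int(g): eps_min[(groups == g).to(eps_min.device)].mean().item()
            for g in groups.unique()}
```

In practice one would replace the single-step FGSM search with a stronger attack or a certified bound, since the flipping epsilon along the gradient-sign direction only loosely upper-bounds the true distance to the decision boundary; the per-subgroup comparison itself stays the same.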