Title
Fair Contrastive Learning for Facial Attribute Classification
Authors
Abstract
Learning high-quality visual representations is essential for image classification. Recently, a series of contrastive representation learning methods have achieved remarkable success. In particular, SupCon outperformed the dominant cross-entropy-based methods in representation learning. However, we notice that there could be potential ethical risks in supervised contrastive learning. In this paper, we analyze, for the first time, the unfairness caused by supervised contrastive learning and propose a new Fair Supervised Contrastive Loss (FSCL) for fair visual representation learning. Inheriting the philosophy of supervised contrastive learning, it encourages representations of the same class to be closer to each other than those of different classes, while ensuring fairness by penalizing the inclusion of sensitive attribute information in the representation. In addition, we introduce a group-wise normalization to diminish the disparities in intra-group compactness and inter-class separability between demographic groups that give rise to unfair classification. Through extensive experiments on CelebA and UTKFace, we validate that the proposed method significantly outperforms SupCon and existing state-of-the-art methods in terms of the trade-off between top-1 accuracy and fairness. Moreover, our method is robust to the intensity of data bias and works effectively under incomplete supervision. Our code is available at https://github.com/sungho-CoolG/FSCL.
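To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of a fairness-aware supervised contrastive loss in PyTorch. It follows the SupCon form but, as an illustrative simplification of the fairness penalty described above, restricts each anchor's positives to same-class samples drawn from a different demographic group, which discourages the representation from encoding the sensitive attribute. The function name, arguments, and the exact positive-set rule are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def fair_supcon_loss(z, y, s, temperature=0.1):
    """Illustrative sketch of a fairness-aware supervised contrastive loss.

    z: (N, D) L2-normalized embeddings
    y: (N,) target class labels
    s: (N,) sensitive attribute (demographic group) labels

    Positives are same-class samples from a *different* demographic group;
    this is a simplified stand-in for FSCL's fairness penalty.
    """
    sim = z @ z.T / temperature                        # pairwise similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim.masked_fill_(self_mask, float('-inf'))         # exclude self-pairs

    # log-softmax over each anchor's similarity row (SupCon denominator)
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)

    same_class = y.unsqueeze(0) == y.unsqueeze(1)
    diff_group = s.unsqueeze(0) != s.unsqueeze(1)
    pos_mask = same_class & diff_group & ~self_mask    # fairness-aware positives

    n_pos = pos_mask.sum(dim=1).clamp(min=1)           # avoid division by zero
    per_anchor = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / n_pos
    return per_anchor[pos_mask.any(dim=1)].mean()      # anchors with >=1 positive

# Example usage with random embeddings
torch.manual_seed(0)
z = F.normalize(torch.randn(8, 16), dim=1)
y = torch.tensor([0, 0, 1, 1, 0, 1, 0, 1])             # target classes
s = torch.tensor([0, 1, 0, 1, 1, 0, 0, 1])             # demographic groups
loss = fair_supcon_loss(z, y, s)
```

Minimizing this loss pulls together same-class pairs that span demographic groups, so the embedding cannot rely on group-specific cues to achieve class compactness. The paper's full method additionally applies a group-wise normalization, omitted here.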