Paper Title
Differentially Private and Fair Classification via Calibrated Functional Mechanism
Paper Authors
Paper Abstract
Machine learning is increasingly becoming a powerful tool for decision making in a wide variety of applications, such as medical diagnosis and autonomous driving. Privacy concerns related to the training data, as well as unfair behaviors of some decisions with regard to certain attributes (e.g., sex, race), are becoming more critical. Thus, constructing a fair machine learning model while simultaneously providing privacy protection becomes a challenging problem. In this paper, we focus on the design of a classification model with fairness and differential privacy guarantees by jointly combining the functional mechanism and decision boundary fairness. To enforce $\epsilon$-differential privacy and fairness, we leverage the functional mechanism to add different amounts of Laplace noise, calibrated per attribute, to the polynomial coefficients of the objective function, which incorporates the fairness constraint. We further propose a utility-enhancement scheme, called the relaxed functional mechanism, which adds Gaussian noise instead of Laplace noise and hence achieves $(\epsilon,\delta)$-differential privacy. Based on the relaxed functional mechanism, we design an $(\epsilon,\delta)$-differentially private and fair classification model. Moreover, our theoretical analysis and empirical results demonstrate that both approaches achieve fairness and differential privacy while preserving good utility, and that they outperform state-of-the-art algorithms.
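To make the underlying mechanism concrete, below is a minimal Python sketch of the generic functional mechanism for logistic regression, in both its Laplace ($\epsilon$-DP) form and a Gaussian ($(\epsilon,\delta)$-DP) form corresponding to the "relaxed" variant. This is not the authors' implementation: the paper's actual contributions (per-attribute calibrated noise and the decision-boundary fairness constraint in the objective) are not reproduced here, the sensitivity bound is the standard functional-mechanism bound for logistic regression under the usual feature normalization, and the ridge term and all function names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def functional_mechanism_logreg(X, y, epsilon, delta=None, rng=None):
    """Sketch of the functional mechanism for logistic regression.

    Approximates the logistic loss by its second-order Taylor expansion,
    perturbs the data-dependent polynomial coefficients with Laplace noise
    (epsilon-DP) or Gaussian noise ((epsilon, delta)-DP), then minimizes
    the noisy objective. Assumes each row of X is scaled so that
    ||x||_2 <= 1 and that labels y are in {0, 1}.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape

    # Data-dependent polynomial coefficients of the Taylor expansion:
    # loss(w) ~ const + lam1^T w + w^T Lam2 w
    lam1 = X.T @ (0.5 - y)   # degree-1 coefficients, shape (d,)
    Lam2 = (X.T @ X) / 8.0   # degree-2 coefficients, shape (d, d)

    # Standard functional-mechanism sensitivity bound for logistic
    # regression under the normalization above (an assumption here).
    sensitivity = d ** 2 / 4.0 + d

    if delta is None:
        # epsilon-DP: Laplace noise on every polynomial coefficient.
        scale = sensitivity / epsilon
        lam1 = lam1 + rng.laplace(0.0, scale, size=lam1.shape)
        Lam2 = Lam2 + rng.laplace(0.0, scale, size=Lam2.shape)
    else:
        # (epsilon, delta)-DP: Gaussian noise instead (the relaxed
        # variant), reusing the same sensitivity bound conservatively.
        sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
        lam1 = lam1 + rng.normal(0.0, sigma, size=lam1.shape)
        Lam2 = Lam2 + rng.normal(0.0, sigma, size=Lam2.shape)

    # Symmetrize so the quadratic form is well defined after noise.
    Lam2 = (Lam2 + Lam2.T) / 2.0

    def noisy_objective(w):
        # Small ridge term keeps the perturbed (possibly non-convex)
        # objective bounded below -- a common practical fix.
        return lam1 @ w + w @ Lam2 @ w + 1e-3 * (w @ w)

    res = minimize(noisy_objective, np.zeros(d), method="L-BFGS-B")
    return res.x

# Hypothetical usage on synthetic data:
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))  # ||x|| <= 1
y = (X @ np.ones(5) > 0).astype(float)
w_laplace = functional_mechanism_logreg(X, y, epsilon=1.0)            # eps-DP
w_relaxed = functional_mechanism_logreg(X, y, epsilon=1.0, delta=1e-5)  # (eps, delta)-DP
```

Because noise injection can make the perturbed quadratic lose convexity, published functional-mechanism implementations typically add regularization or spectrally truncate the perturbed coefficient matrix; the ridge term above is a simple stand-in for such a step.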