Paper Title


Getting Fairness Right: Towards a Toolbox for Practitioners

Authors

Boris Ruf, Chaouki Boutharouite, Marcin Detyniecki

Abstract


The potential risk of AI systems unintentionally embedding and reproducing bias has attracted the attention of machine learning practitioners and society at large. As policy makers move to set standards for algorithms and AI techniques, the question of how to refine existing regulation so that decisions made by automated systems are fair and non-discriminatory has again become critical. Meanwhile, researchers have demonstrated that the various existing fairness metrics are statistically mutually exclusive, and that the right choice depends largely on the use case and the definition of fairness. Recognizing that implementing fair AI is not a purely mathematical problem but requires the commitment of stakeholders to define the desired nature of fairness, this paper proposes to draft a toolbox that helps practitioners ensure fair AI practices. Based on the nature of the application and the available training data, but also on legal requirements and ethical, philosophical, and cultural dimensions, the toolbox aims to identify the most appropriate fairness objective. This approach attempts to structure the complex landscape of fairness metrics and thereby makes the different available options more accessible to non-technical people. Given the proven absence of a silver-bullet solution for fair AI, the toolbox intends to produce the fairest AI systems possible with respect to their local context.
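The abstract's claim that fairness metrics are mutually exclusive can be made concrete with a small worked example. The sketch below is not code from the paper; the toy data and helper functions are hypothetical and illustrate only the standard definitions of two common metrics, demographic parity (equal positive-prediction rates across groups) and equal opportunity (equal true-positive rates across groups).

```python
import numpy as np

# Hypothetical toy data: two groups, true labels, and one classifier's
# predictions. Chosen so the groups have different base rates (0.50 vs 0.25).
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_true = np.array([1, 1, 0, 0, 1, 0, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 0])

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates between groups."""
    tpr0 = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr1 = y_pred[(group == 1) & (y_true == 1)].mean()
    return abs(tpr0 - tpr1)

print("demographic parity gap:", demographic_parity_gap(y_pred, group))   # 0.0
print("equal opportunity gap:", equal_opportunity_gap(y_true, y_pred, group))  # 0.5
```

Here the classifier satisfies demographic parity exactly (both groups receive positive predictions at rate 0.5) yet violates equal opportunity (true-positive rates of 0.5 vs 1.0). The differing base rates between the groups are what force the two criteria apart, which is precisely the kind of trade-off the proposed toolbox is meant to surface for stakeholders.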
