Paper Title

Non-portability of Algorithmic Fairness in India

Authors

Nithya Sambasivan, Erin Arnesen, Ben Hutchinson, Vinodkumar Prabhakaran

Abstract

Conventional algorithmic fairness is Western in its sub-groups, values, and optimizations. In this paper, we ask how portable the assumptions of this largely Western take on algorithmic fairness are to a different geo-cultural context such as India. Based on 36 expert interviews with Indian scholars, and an analysis of emerging algorithmic deployments in India, we identify three clusters of challenges that engulf the large distance between machine learning models and oppressed communities in India. We argue that a mere translation of technical fairness work to Indian subgroups may serve only as window dressing, and instead call for a collective re-imagining of Fair-ML by re-contextualising data and models, empowering oppressed communities, and, more importantly, enabling ecosystems.
