Paper Title
Bias and unfairness in machine learning models: a systematic literature review
Paper Authors
Paper Abstract
One of the difficulties of artificial intelligence is ensuring that model decisions are fair and free of bias. Researchers apply datasets, metrics, techniques, and tools to detect and mitigate algorithmic unfairness and bias. This study aims to examine existing knowledge on bias and unfairness in machine learning models, identifying mitigation methods, fairness metrics, and supporting tools. A systematic literature review found 40 eligible articles published between 2017 and 2022 in the Scopus, IEEE Xplore, Web of Science, and Google Scholar knowledge bases. The results show numerous approaches for detecting and mitigating bias and unfairness in ML technologies, along with a wide range of clearly defined fairness metrics in the literature. We recommend further research to define which techniques and metrics should be employed in each case, so that practice can be standardized, the impartiality of machine learning models ensured, and the most appropriate metric applied to detect bias and unfairness in a given context.
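As a concrete illustration of the kind of fairness metric surveyed in this literature (an illustrative sketch, not code from the reviewed paper), the Python snippet below computes the demographic parity difference: the absolute gap in positive-prediction rates between two protected groups, where 0 indicates parity. The function name and toy data are hypothetical.

import numpy as np

def demographic_parity_difference(y_pred, group):
    # Absolute difference in positive-prediction rates between two groups.
    # y_pred: array of 0/1 model predictions; group: 0/1 group membership.
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive-prediction rate, group 0
    rate_1 = y_pred[group == 1].mean()  # positive-prediction rate, group 1
    return abs(rate_0 - rate_1)

# Toy example: predictions for 8 individuals, 4 per group.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # 0.5 -> strong disparity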