Paper Title
Hiring Fairly in the Age of Algorithms
Paper Authors
Paper Abstract
Widespread developments in automation have reduced the need for human input. However, despite the increased power of machine learning, these programs make problematic decisions in many contexts. Biases within data and opaque models have amplified human prejudices, giving rise to tools such as Amazon's (now defunct) experimental hiring algorithm, which was found to consistently downgrade resumes when the word "women's" was added before an activity. This article critically surveys the existing legal and technological landscape surrounding algorithmic hiring. We argue that the negative impact of hiring algorithms can be mitigated by greater transparency from employers to the public, which would enable civil advocacy groups to hold employers accountable and allow the U.S. Department of Justice to litigate. Our main contribution is a framework for automated hiring transparency: algorithmic transparency reports, which employers using automated hiring software would be required by law to publish. We also explain how the Equal Employment Opportunity Commission and Congress can extend existing regulations in employment and trade secret law to accommodate these reports.
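As a hedged illustration of the kind of disclosure such a transparency report might contain, the Python sketch below computes per-group selection rates and the impact ratio used in the EEOC's four-fifths guideline for adverse impact. The function names, data structure, and figures are hypothetical and not drawn from the paper; they only show the style of aggregate statistic an employer could publish without revealing a proprietary model.

# Hypothetical sketch: an adverse-impact statistic an algorithmic
# transparency report might disclose. Group names and counts are illustrative.

from typing import Dict

def selection_rates(outcomes: Dict[str, Dict[str, int]]) -> Dict[str, float]:
    """Selection rate per group: hired / total applicants."""
    return {
        group: counts["hired"] / counts["applicants"]
        for group, counts in outcomes.items()
    }

def impact_ratio(rates: Dict[str, float]) -> float:
    """Ratio of the lowest selection rate to the highest.

    Under the EEOC's four-fifths guideline, a ratio below 0.8 is
    commonly treated as evidence of adverse impact.
    """
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Illustrative numbers only.
    outcomes = {
        "women": {"applicants": 400, "hired": 48},
        "men": {"applicants": 600, "hired": 120},
    }
    rates = selection_rates(outcomes)
    print(f"Selection rates: {rates}")
    print(f"Impact ratio: {impact_ratio(rates):.2f} (four-fifths threshold: 0.80)")

Run on these illustrative counts, the selection rates are 0.12 and 0.20, giving an impact ratio of 0.60, below the 0.80 threshold, which is the sort of red flag a published report would let advocacy groups and regulators see.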