🔬 Research Summary by Anna Lena Hunkenschroer and Christoph Luetge. Anna is a business consultant and PhD student at the Technical University of Munich, specializing in the use of AI in hiring. Christoph is the director of the Institute for Ethics in Artificial Intelligence at the Technical University of Munich.
[Original paper by Anna Lena Hunkenschroer; Christoph Luetge]
Overview: While companies increasingly deploy artificial intelligence (AI) technologies in their personnel recruiting and selection process, the subject is still an emerging topic in academic literature. As these new technologies significantly impact people’s lives and careers, but also trigger ethical concerns, the ethicality of these AI applications needs to be comprehensively understood. This paper reviews extant literature on AI recruiting and maps the ethical opportunities, risks, and ambiguities, as well as the proposed ways to mitigate ethical risks in practice.
Introduction
“Amazon scraps AI recruiting tool that shows bias against women” – do you remember this headline from 2018? The corresponding article reported that Amazon abandoned its tested hiring algorithm, which had turned out to be biased and discriminatory against women. This case nicely illustrates that AI applications used in the recruiting context may generate serious conflicts with what society typically considers ethical. Although research on AI recruiting has increased substantially in recent years, a comprehensive ethical understanding of recruiting as an expanding application context of AI is lacking. Nevertheless, there are various ethical concerns related to AI recruiting, such as algorithmic bias, data privacy, transparency, and accountability, which are worth discussing. To establish a common foundation for future research in the field, our paper synthesizes and discusses extant theoretical and empirical research on the topic to assess the ethicality of AI-powered recruiting.
Key Insights
AI Applications in the Hiring Process
AI can be applied across all four stages of the hiring process: outreach, screening, assessment, and facilitation. In the outreach stage, AI can, for example, be leveraged for targeted communication across online platforms and social media, or for de-biasing the wording of job ads to make them gender neutral and attract a diverse pool of applicants. In the screening stage, algorithms are used to screen applicants’ CVs and derive a shortlist of the most promising candidates. In the assessment stage, facial recognition software is used to analyze video interviews, evaluate applicants’ responses, and provide insights on certain personality traits and competencies. Notably, target variables do not need to be predefined by the company; instead, ML algorithms can analyze data on a company’s current top performers and infer which applicant characteristics and skills have been associated with better job performance. Lastly, AI can also be leveraged to facilitate the selection process, for example by automating scheduling activities.
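To make the assessment-stage approach more tangible, the sketch below shows, in highly simplified form, how a model might be trained on a company’s current employees (labeled by performance) and then used to score applicants. It is a minimal illustration under assumed data and column names (skill_test_score, years_experience, structured_interview_score), not a description of any specific vendor’s tool.

```python
# Minimal, hypothetical sketch (not any vendor's actual system): train a model
# on current employees' attributes and performance labels, then score applicants.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical historical data: attributes of current employees plus a label
# marking whether each one is considered a top performer.
employees = pd.DataFrame({
    "skill_test_score": [72, 88, 65, 91, 80, 55, 94, 70],
    "years_experience": [3, 7, 2, 9, 5, 1, 10, 4],
    "structured_interview_score": [3.5, 4.2, 3.0, 4.8, 4.0, 2.5, 4.9, 3.6],
    "top_performer": [0, 1, 0, 1, 1, 0, 1, 0],
})
features = ["skill_test_score", "years_experience", "structured_interview_score"]

# The model infers which attributes were associated with performance, rather
# than the company predefining target criteria. (In practice the model would
# be validated on held-out data; that step is omitted here for brevity.)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(employees[features], employees["top_performer"])

# Score new applicants with the learned pattern (again, hypothetical data).
applicants = pd.DataFrame({
    "skill_test_score": [85, 60],
    "years_experience": [6, 2],
    "structured_interview_score": [4.1, 3.2],
})
applicants["predicted_fit"] = model.predict_proba(applicants[features])[:, 1]
print(applicants.sort_values("predicted_fit", ascending=False))
```

Note that if the historical performance labels themselves reflect past bias, a model like this will learn and reproduce it, which is precisely the risk discussed below.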
Mapping of Ethical Considerations
The rise of these new AI recruiting tools and practices, however, confronts organizations and society with new ethical quandaries. AI recruiting brings ethical opportunities and ethical risks, as well as issues that remain controversial in current research.
Ethical opportunities encompass the following:
· Reduction of human bias in the selection process, as human assessment, even when accurate, ultimately rests on subjective intuition
· Process consistency as AI-based practices allow firms to put all applicants through the same experience
· Timely feedback for applicants, who could be given data-driven insights on their strengths and development needs
· Efficiency gains for organizations as AI tools make hiring more cost- and time-efficient
· Job enhancement for recruiters as AI takes over repetitive tasks, such as screening resumes, scheduling interviews, and conducting routine candidate conversations
Ethical risks include the following:
· Introduction of algorithmic bias, e.g., due to bias in the training data, which could even magnify existing discrimination
· Privacy loss for applicants and increased power asymmetry between applicants and employers due to new ways (e.g., facial recognition) of discerning applicants’ private information
· Lack of transparency and explainability as the predictive and decision-making processes of algorithms are often opaque, even for the programmers themselves
· Obfuscation of accountability, as the AI itself cannot be held accountable for a decision or recommendation it makes
· Potential loss of human oversight, which raises the question of whether it is ethical to base hiring decisions solely on algorithms, without human intervention
Ethical ambiguities are the following:
· Effect on workforce diversity: While a reduction in human bias could lead to the diversification of a company’s workforce, a systematic algorithmic bias could result in more homogeneity in organizations
· Use of personal data: Drawing on more personal data may invade applicants’ privacy, and informed consent may not always be given, but it may also enable more accurate predictions and assessments
· Impact on assessment validity and accuracy: AI may outperform human assessments in accuracy because it can process far more behavioral signals; however, AI tools are often not scientifically derived and validated
· Perceived fairness: Understanding of how people perceive AI recruiting is still limited, and existing findings are contradictory
Practical Approaches to Mitigate Ethical Risks
As shown, AI technologies pose new ethical challenges to governments and organizations, especially as they are applied in recruiting. Because governmental regulation currently leaves room for unethical corporate behavior, firms often need to act beyond regulation and establish organizational standards to ensure the ethical use of AI recruiting tools. These might include compliance with privacy laws, transparency about AI usage, and human oversight of the AI in place. In addition, organizational compliance mechanisms, such as AI ethics boards or a code of ethics, could help ensure the ethical use of AI within firms. Moreover, technical approaches can support ethical implementation; these may encompass strengthening the data literacy of programmers and hiring managers who use the AI solution, referring to professional test standards, or proactively auditing the AI in place (a simple example of such an audit is sketched below). Lastly, ethics competencies could be anchored at the team and individual levels within organizations, e.g., by building diverse data scientist teams. Given that manifold ethical questions may arise in the development of algorithms, diverse voices and people who are aware of the potential shortcomings of recruiting algorithms can help check implicit assumptions and foster inclusion and equity.
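To make “proactive auditing” a bit more concrete, the sketch below shows one simple check an organization might run: comparing the rates at which a screening tool shortlists applicants from different demographic groups and flagging large disparities against the four-fifths benchmark used in US employment-selection guidance. This is only one of many possible checks, the data and group labels are hypothetical, and the paper itself does not prescribe any specific auditing method.

```python
# Illustrative sketch of one proactive audit step: comparing the rate at which
# an AI screening tool shortlists applicants across demographic groups.
# The group labels and outcomes below are hypothetical.
import pandas as pd

# Hypothetical audit log: one row per applicant, with the group recorded for
# monitoring purposes and whether the tool shortlisted the applicant.
audit_log = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B", "B"],
    "shortlisted": [1, 1, 0, 1, 1, 0, 0, 0, 1],
})

selection_rates = audit_log.groupby("group")["shortlisted"].mean()
impact_ratio = selection_rates.min() / selection_rates.max()

print(selection_rates)
print(f"Adverse impact ratio: {impact_ratio:.2f}")

# A ratio below 0.8 (the "four-fifths" benchmark used in US employment-selection
# guidance) is a common trigger for closer review; it is a screening heuristic,
# not a verdict on the tool.
if impact_ratio < 0.8:
    print("Selection rates differ substantially across groups -> review the tool.")
```

Such a check does not settle whether a tool is fair, but it illustrates how regular, automated monitoring can surface disparities early enough for human reviewers to intervene.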
Between the lines
Overall, we observe contrasting views in the literature on the ethicality of AI recruiting. Even if we cannot offer a conclusive evaluation of whether the ethical opportunities outweigh the risks, managers need to understand the ethical concerns AI technologies might create and recognize that algorithmic decisions might contradict what they aim to achieve with their workforce. Thus, they must consider approaches to address those ethical concerns, covering organizational standards as well as awareness among employees. Only by proactively tackling the ethical concerns, both in implementation and in external communication, can practitioners create new forms of AI recruiting practices that are both efficient and effective, and that also have the potential to yield a competitive advantage and financial payoff.