🔬 Research summary by Connor Wright, our Partnerships Manager.
[Original paper by V. Uday Kumar, A. Mohan, B. Srinivasa S P Kumar, Ramesh Ponnala, B. Sateesh, P. Dundy Sai Maruthi]
Overview: AI is fast becoming a household name in the hiring process of many businesses. While its involvement in the process varies from case to case, the attention required to tackle the problem of bias does not.
Introduction
AI being deployed in the hiring process is a common theme in today’s recruitment story. Nevertheless, the depth of this deployment varies from application to application. At times, the AI is used simply to schedule interviews, but it has also been used to screen candidates. Whatever the variation involved, the common thread that holds these applications together is the problem of AI bias. To explore this further, it helps to give some context on what AI in the hiring process looks like.
Key Insights
AI in the hiring process
Given AI’s widely-touted ability to streamline business resources, it has seen heavy use in the hiring process. Its involvement can range from screening candidates to scheduling interviews and even helping out in the interview itself. The main draw, however, is the AI lending a hand in filtering the sheer volume of applications a job posting receives.
How this filtering is done varies with the application used. The paper details the following programmes, which utilise AI to different depths:
- XOR interacts with a candidate through a chatbot.
- Paradox involves engaging with the candidate through a machine learning algorithm.
- Hiretual and AmazingHiring draw on databases that they use to match candidates to a particular profile.
- Pymetrics and Eightfold focus on cutting employee time spent on reviewing applications.
- HireVue, Seekout and MyInterview use the cloud for various tasks, including conducting interviews, filtering and sourcing candidates.
- Humanly uses automated candidate screening.
- Fetcher and Loxo contact the candidate through emails and SMS.
- Textio helps write job descriptions, which then appeal to some candidates more than others as a form of filtering.
Despite the rich variation between the applications themselves, a common thread connects them all: the problem of bias.
The problem of bias
The authors explore five different types of bias, all worth considering when deploying AI in the hiring process:
- Historical bias – hiring algorithms could contribute to concretizing past tendencies in a company. The company continues to look for what it already knows instead of prioritizing diversity.
- Representation bias – the dataset offered to the hiring algorithm must represent all the different types of candidates. For example, collecting data only about people who went to university would ignore those who are also qualified for the job through other means, like internships (see the sketch after this list).
- Measurement bias – candidate data is erroneously collected, such as being taken from a date outside the specified window.
- Aggregation bias – wrongly assuming the trends observed in the data apply to all individual data points. For example, assuming that all candidates from a particular area did not go to university based on a high school drop-out rate.
- Evaluation bias – giving more weight to specific character traits as opposed to others.
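To make the representation bias point concrete, here is a minimal Python sketch of the kind of audit a team might run before handing a dataset to a hiring algorithm: it flags candidate groups whose share of the data falls below a chosen floor. The records, the `education` attribute, and the 20% threshold are my own hypothetical illustrations, not details from the paper.

```python
from collections import Counter

# Hypothetical candidate records; the attribute name and values are
# illustrative assumptions, not taken from the paper.
candidates = [
    {"id": 1, "education": "university"},
    {"id": 2, "education": "university"},
    {"id": 3, "education": "internship"},
    {"id": 4, "education": "university"},
    {"id": 5, "education": "university"},
    {"id": 6, "education": "university"},
    {"id": 7, "education": "university"},
    {"id": 8, "education": "university"},
    {"id": 9, "education": "university"},
    {"id": 10, "education": "university"},
]

def underrepresented_groups(records, attribute, min_share=0.2):
    """Return each group whose share of the dataset falls below min_share."""
    counts = Counter(record[attribute] for record in records)
    total = sum(counts.values())
    return {
        group: count / total
        for group, count in counts.items()
        if count / total < min_share
    }

print(underrepresented_groups(candidates, "education"))
# {'internship': 0.1} -> candidates qualified via internships make up only
# 10% of the data, so a model trained on it may overlook them.
```

A check like this will not remove bias on its own, but it makes a skewed dataset visible before the algorithm concretizes it.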
Between the lines
While the problem of bias in AI is well-documented, I believe our attitude to confronting the phenomenon is equally important. The paper’s analysis shows that the accuracy of the algorithms varies significantly, at times involving 30% inaccuracy and at others 10%. From there, the authors point to various surveys showing that employers are not too worried about a 10% inaccuracy in these tools. For me, adopting an attitude where we do care about that 10% will be essential in the fight against bias, allowing us to take full advantage of the deserved attention the AI Ethics field receives.