
Analysis and Issues of Artificial Intelligence Ethics in the Process of Recruitment

January 18, 2022

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by V. Uday Kumar, A. Mohan, B. Srinivasa S P Kumar, Ramesh Ponnala, B. Sateesh, P. Dundy Sai Maruthi]


Overview: AI is becoming a household name in the hiring process for many businesses. While its involvement in the process varies from case to case, the attention required to tackle the problem of bias does not.


Introduction

AI deployment is a common theme in today’s recruitment story, although the depth of that deployment varies from application to application. At times the AI simply schedules interviews; at others it screens candidates outright. Whatever the variation involved, the common thread holding these applications together is the problem of AI bias. To explore this further, it helps to start with some context on what AI in the hiring process looks like.

Key Insights

AI in the hiring process

Given AI’s widely touted ability to streamline business resources, it has seen heavy use in the hiring process. Its involvement can range from screening candidates to scheduling interviews and even assisting during the interview itself. The main draw, however, lies in AI’s capacity to filter the sheer volume of applications a job posting receives.

How this filtering is done depends on the application used. The paper details the following programmes, which utilise AI to different depths (a toy sketch of such screening follows the list):

  1. XOR interacts with a candidate through a chatbot.
  2. Paradox engages with the candidate through a machine learning algorithm.
  3. Hiretual and AmazingHiring maintain databases that they use to match candidates to a particular profile.
  4. Pymetrics and Eightfold focus on cutting employee time spent on reviewing applications.
  5. HireVue, Seekout and MyInterview use the cloud for various tasks, including conducting interviews, filtering and outsourcing candidates.
  6. Humanly uses automated candidate screening.
  7. Fetcher and Loxo contact the candidate through emails and SMS.
  8. Textio helps write job descriptions, which then appeal to some candidates more than others as a form of filtering.
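
To make this filtering step concrete, here is a minimal, hypothetical sketch of keyword-based resume screening in Python. It is not drawn from the paper or from any of the products above; the keyword list, threshold and applicant data are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Application:
    name: str
    resume_text: str

# Hypothetical keyword list; real products use far richer signals.
REQUIRED_KEYWORDS = {"python", "sql", "machine learning"}

def screen(applications, min_hits=2):
    """Shortlist applications whose resumes mention enough required keywords.

    Even this trivial filter embeds choices (which keywords, what threshold)
    that systematically favour candidates who phrase things a certain way.
    """
    shortlisted = []
    for app in applications:
        text = app.resume_text.lower()
        hits = sum(1 for kw in REQUIRED_KEYWORDS if kw in text)
        if hits >= min_hits:
            shortlisted.append(app)
    return shortlisted

pool = [
    Application("A", "Built machine learning pipelines in Python and SQL."),
    Application("B", "Self-taught data analyst with strong spreadsheet skills."),
]
print([a.name for a in screen(pool)])  # prints ['A']; B never reaches a human
```

Even a filter this simple shows where bias can creep in: the choice of keywords and the cut-off threshold silently decide which candidates a human recruiter ever sees.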

Despite the rich variation between the applications themselves, a common thread connects them all: the problem of bias.

The problem of bias

The authors explore five types of bias, all worth considering when deploying AI in the hiring process (a minimal check for one of them is sketched after the list):

  1. Historical bias – hiring algorithms can entrench a company’s past tendencies: the company keeps looking for what it already knows instead of prioritizing diversity.
  2. Representation bias – the dataset offered to the hiring algorithm must represent all different types of candidates. For example, collecting data only about people who went to university would ignore those who are also qualified for the job through other means, like internships.
  3. Measurement bias – candidate data is erroneously collected, such as being taken from a date outside the specified window.
  4. Aggregation bias – wrongly assuming the trends observed in the data apply to all individual data points. For example, assuming that all candidates from a particular area did not go to university based on a high school drop-out rate.
  5. Evaluation bias – giving more weight to some character traits than to others.
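
As a minimal illustration of how one of these, representation bias, might be checked in practice, the sketch below compares each group’s share of the training data against its share of the wider applicant population. This is my illustration rather than the authors’ method, and the group labels, shares and tolerance are hypothetical.

```python
from collections import Counter

def representation_gap(train_groups, population_shares, tol=0.05):
    """Flag groups whose share of the training data deviates from their
    share of the wider applicant population by more than `tol`.

    A large gap is one symptom of representation bias: the model rarely
    sees some candidate types, so it cannot judge them fairly.
    """
    counts = Counter(train_groups)
    total = sum(counts.values())
    flags = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tol:
            flags[group] = {"in_training_data": observed, "in_population": expected}
    return flags

# Hypothetical example: education route of candidates in the training set.
train = ["university"] * 90 + ["apprenticeship"] * 8 + ["self-taught"] * 2
population = {"university": 0.60, "apprenticeship": 0.25, "self-taught": 0.15}
print(representation_gap(train, population))
# university is over-represented (0.90 vs 0.60); the other routes are
# under-represented, echoing the paper's university-only data example.
```

Here the training data echoes the paper’s university-only example: candidates qualified through other routes are barely represented, so the algorithm has little basis for evaluating them fairly.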

Between the lines

While the problem of bias in AI is well documented, I believe our attitude towards confronting it is equally important. The paper’s analysis shows that the accuracy of these algorithms varies significantly, with error rates of 30% in some cases and 10% in others. The authors then point to surveys showing that employers are not too worried about a 10% error rate. Yet on a pool of 1,000 applicants, that rate still means 100 candidates mis-assessed. For me, adopting an attitude that does care about that 10% will be essential in the fight against bias, allowing us to take full advantage of the deserved attention the AI ethics field now receives.

