Montreal AI Ethics Institute

Democratizing AI ethics literacy

Facial Recognition – Can It Evolve From a “Source of Bias” to a “Tool Against Bias”?

February 11, 2022

🔬 Original article by Azfar Adib, who is currently pursuing his PhD in Electrical and Computer Engineering at Concordia University in Montreal. He is a Senior Member of the Institute of Electrical and Electronics Engineers (IEEE).


Meta’s recent announcement that it is shutting down Facebook’s face recognition system sparked worldwide attention. For many Facebook users, long accustomed to automatic people recognition in Facebook photos and videos, it marks a new reality.

The face has always been humans’ most common identifier. Today, facial recognition is a dominant technology used in numerous applications, and its benefits have been remarkable. At the same time, it remains one of the most debated technologies, particularly from an ethical perspective.

The drawbacks of facial recognition are often cited as examples of misusing artificial intelligence in general. Such examples are numerous: violating people’s basic privacy, failing to identify racially marginalized people, or being used in some countries to maliciously track political opponents. Facial recognition technology has even been called “one of the biggest threats to our privacy.” In its service-closing statement, Meta (Facebook) also cited societal concerns and the lack of clear regulation around this technology. However, it did note the considerable promise of facial recognition for other applications.

Beyond its basic purpose of identity verification, facial analysis remains a powerful tool for various innovative solutions. Consider an example from the medical domain: detecting pain when patients cannot express it. Infants, in particular, may not always cry in pain, and untreated pain in newborns can lead to lasting neurological and behavioral issues. To address this challenge, a group of researchers at the University of South Florida developed a facial recognition system termed the “Neonatal Convolutional Neural Network” (N-CNN). It contains an algorithm for analyzing the facial expressions of newborns in postoperative pain, and it also analyzes visual and vocal cues and vital signs to sense and predict pain.

The scheme aims to avoid missing a signal of pain regardless of factors like the infant’s position or sedation. Facial analysis remains the key focus, as one of the researchers noted: “Our experiments in procedural pain showed that facial expression is the most important indicator of pain in most cases.” The group is now working on monitoring and predicting pain in infants after surgery.
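To make the multimodal idea concrete, here is a minimal sketch of how facial, vocal, and vital-sign scores might be fused into a single pain estimate, with facial expression weighted most heavily. The weights, threshold, and function name are illustrative assumptions for this sketch, not the N-CNN’s actual design.

```python
def fuse_pain_scores(facial, vocal, vitals,
                     weights=(0.6, 0.2, 0.2), threshold=0.5):
    """Combine three cue scores in [0, 1] into one pain estimate.

    Facial expression gets the largest weight, reflecting the
    researchers' finding that it is the strongest pain indicator.
    Returns (fused_score, alert_flag). Weights/threshold are
    illustrative, not the actual N-CNN parameters.
    """
    if not all(0.0 <= s <= 1.0 for s in (facial, vocal, vitals)):
        raise ValueError("scores must be in [0, 1]")
    fused = (weights[0] * facial
             + weights[1] * vocal
             + weights[2] * vitals)
    return fused, fused >= threshold

# A sedated infant may show a strong facial signal without crying,
# yet the fused score still crosses the alert threshold:
score, alert = fuse_pain_scores(facial=0.9, vocal=0.1, vitals=0.4)
```

Because no single cue is required to dominate, a fusion of this shape can still raise an alert when, say, vocal cues are absent due to sedation, which is the robustness property the researchers describe.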

Such technology can ensure better treatment by automatically alerting caregivers to pain in infants. It is worth noting that other factors, such as culture and gender, can also affect pain assessment by humans, so this facial recognition scheme also helps to overcome human observers’ bias. That is an interesting aspect: it demonstrates that facial analysis can help counter bias as well.

Another good example of using facial recognition for clinical purposes is a smartphone app called Face2Gene, developed by a Boston-based digital health company. The app uses neural networks to classify distinctive facial features in order to diagnose congenital and neurodevelopmental disorders that are quite difficult to identify with the naked eye. Since its launch in 2018–19, the app has significantly assisted clinicians, in fact outperforming them in speed and accuracy of diagnosis.

These are inspiring examples of using facial recognition for life-saving purposes. It is worth emphasizing that adequate diversity in training data is crucial to ensuring that these applications, too, perform without bias. In general, bias in AI algorithms (particularly in sensitive applications like facial recognition) can be successfully overcome only by training them on sufficiently diverse data. That is much easier said than done, and various practical challenges stand in the way.

For instance, most of the publicly available datasets used by researchers contain data only from developed parts of the world; a significant portion of the global population is still not represented there. The reasons are clear: in many countries, there is a lack of contextual need, proper procedures, and supportive regulation for collecting, storing, and sharing such data. This gap needs to be gradually closed through effective collaboration, trust, and technical support among the stakeholders involved.
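A first practical step toward the diverse-data goal above is simply auditing who is represented in a dataset before training. The following sketch computes each subgroup’s share and flags under-represented groups; the 10% threshold, function name, and region labels are illustrative assumptions, not a formal fairness standard.

```python
from collections import Counter

def representation_report(labels, min_share=0.10):
    """Return each subgroup's share of the dataset, flagging groups
    whose share falls below min_share.

    min_share is an illustrative heuristic threshold; real audits
    should set it against the deployment population, not a fixed value.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {"share": round(share, 3),
                         "under_represented": share < min_share}
    return report

# Toy region labels for images in a hypothetical face dataset,
# skewed toward developed regions as the article describes:
regions = (["north_america"] * 70 + ["europe"] * 22
           + ["south_asia"] * 5 + ["africa"] * 3)
report = representation_report(regions)
```

Running an audit like this before training makes the representation gap visible early, when it can still be addressed by collecting more data rather than by patching a biased model afterward.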

Regulatory challenges exist in developed regions as well. For example, the Illinois Biometric Information Privacy Act, the oldest biometric regulation in the United States, contains clauses that are not favorable for health-based applications: “Biometric identifiers do not include information captured from a patient in a health care setting or information collected, used, or stored for health care treatment…”

This scenario is improving, however. Last year, the U.S. Food and Drug Administration released its first Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan, considered a significant regulatory step toward facilitating AI in the health industry.

Despite numerous drawbacks (most notably the issue of bias), technologies like facial recognition will continue to expand across our lives, and their refinement will continue to draw on cutting-edge algorithms and better datasets. In certain applications, facial recognition can even act as a useful tool against human bias. Moving ahead, it will be interesting to observe how far facial recognition technologies can evolve from a “source of bias” into a “tool against bias.”

References

1. https://about.fb.com/news/2021/11/update-on-use-of-face-recognition/

2. https://www.marketwatch.com/story/facial-recognition-technology-is-one-of-the-biggest-threats-to-our-privacy-11640623526

3. https://www.embs.org/pulse/articles/detecting-faces-saving-lives/

4. https://www.nature.com/articles/d41586-019-00027-x

5. https://www.ilga.gov/legislation/ilcs/ilcs3.asp?ActID=3004&ChapterID=57

6. https://www.fda.gov/news-events/press-announcements/fda-releases-artificial-intelligencemachine-learning-action-plan


© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.