Montreal AI Ethics Institute

Democratizing AI ethics literacy

Facial Recognition – Can It Evolve From a “Source of Bias” to a “Tool Against Bias”?

February 11, 2022

🔬 Original article by Azfar Adib, who is currently pursuing his PhD in Electrical and Computer Engineering at Concordia University in Montreal. He is a Senior Member of the Institute of Electrical and Electronics Engineers (IEEE).


A recent announcement by Meta that it is shutting down Facebook’s facial recognition system drew worldwide attention. For many Facebook users, long accustomed to the automatic recognition of people in photos and videos, this marks a new reality.

The face has been the most common human identifier since the beginning of our species. Facial recognition is now a dominant technology used in numerous applications, and its benefits have been remarkable. At the same time, it has been one of the most debated technologies, particularly from an ethical perspective.

The drawbacks of facial recognition are often cited as examples of misused artificial intelligence in general. Such examples are numerous: violating people’s basic privacy, failing to identify racially marginalized people, or being used in some countries to maliciously detect political opponents. Facial recognition technology has even been called “one of the biggest threats to our privacy.” In its service-closing statement, Meta (Facebook) likewise cited societal concerns and the lack of clear regulation around the technology, while still noting its broad prospects in other applications.

Beyond its basic purpose of identity verification, facial analysis remains a powerful tool for various innovative solutions. Consider an example from the medical domain: detecting pain when patients cannot express it. Infants, in particular, may not always cry in pain, and untreated pain in newborns can lead to further neurological and behavioral issues. To address this challenge, a group of researchers at the University of South Florida developed facial recognition software called the “Neonatal Convolutional Neural Network” (N-CNN). Its algorithm analyzes the facial expressions of newborns experiencing postoperative pain, and it also incorporates visual and vocal cues as well as vital signs to sense and predict pain.
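The N-CNN’s internals are not spelled out here, but the core pipeline of any convolutional pain classifier — convolution, a nonlinearity, pooling, and a final score — can be sketched in a few lines. Everything below (the kernel, the weights, the single convolutional layer) is hypothetical and purely illustrative; the real N-CNN is a far deeper network trained on annotated neonatal data.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D convolution over a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def pain_score(face, kernel, weight, bias):
    """Conv layer -> ReLU -> global average pool -> sigmoid score in (0, 1)."""
    feat = np.maximum(conv2d(face, kernel), 0.0)  # ReLU nonlinearity
    pooled = feat.mean()                          # global average pooling
    logit = weight * pooled + bias                # tiny linear "head"
    return 1.0 / (1.0 + np.exp(-logit))           # probability-like score

# Hypothetical usage: a random 8x8 "face" patch and a 3x3 edge-like kernel.
rng = np.random.default_rng(0)
face = rng.random((8, 8))
kernel = np.array([[1.0, 0.0, -1.0]] * 3)
score = pain_score(face, kernel, weight=2.0, bias=-0.5)
```

In a real system, the kernel and head weights would be learned from labeled examples rather than fixed by hand, and many such layers would be stacked.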

The scheme aims to ensure that no signal of pain is missed, regardless of factors such as the infant’s position or sedation. Facial analysis remains the key focus, as one of the researchers notes: “Our experiments in procedural pain showed that facial expression is the most important indicator of pain in most cases.” The group is now working on monitoring and predicting pain in infants after surgery.

Such technology can improve treatment by automatically alerting caregivers to pain in infants. It is worth noting that other factors, such as culture and gender, may also affect pain assessments made by humans, so this facial recognition scheme also helps overcome the bias of human observers. That is an interesting aspect: it demonstrates that facial analysis can itself help counter bias.

Another good example of facial recognition used for clinical purposes is a smartphone app called Face2Gene, developed by a Boston-based digital health company. The app uses neural networks to classify distinctive facial features and diagnose congenital and neurodevelopmental disorders that are quite difficult to identify with the naked eye. Since its launch in 2018–19, the app has significantly assisted clinicians, in some cases outperforming them in the speed and accuracy of diagnosis.

These are inspiring examples of facial recognition used for life-saving purposes. Adequate diversity in training data is crucial to ensuring that these applications, too, perform without bias. In general, bias in AI algorithms — particularly in sensitive applications like facial recognition — can be overcome only by training them on sufficiently diverse data. That is much easier said than done, and various practical challenges stand in the way.

For instance, most publicly available datasets used by researchers contain data only from developed parts of the world; a significant portion of the global population is still not represented in them. The reasons are clear: many countries lack the contextual need, proper procedures, and supportive regulation to collect, store, and share such data. This gap needs to be closed gradually through effective collaboration, trust, and technical support among the parties concerned.
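One concrete way to surface such representation gaps is to audit a dataset before and after training: measure how large a share of the data each demographic group contributes, and compare per-group accuracy on held-out examples. The sketch below is a minimal illustration — the group labels, sample data, and the “largest gap” metric are assumptions for demonstration, not any standard audit procedure.

```python
from collections import Counter

def representation(groups):
    """Fraction of samples belonging to each demographic group."""
    counts = Counter(groups)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def accuracy_gap(groups, y_true, y_pred):
    """Per-group accuracy, plus the largest gap between any two groups."""
    acc = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        acc[g] = sum(y_true[i] == y_pred[i] for i in idx) / len(idx)
    return acc, max(acc.values()) - min(acc.values())

# Hypothetical audit of a tiny labeled sample.
groups = ["group_a", "group_a", "group_b", "group_b"]
y_true = [1, 1, 1, 0]
y_pred = [1, 1, 0, 0]
shares = representation(groups)
per_group, gap = accuracy_gap(groups, y_true, y_pred)
```

A large gap between the best- and worst-served groups is exactly the kind of signal that motivates collecting more diverse training data.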

Regulatory challenges exist in developed regions as well. For example, the Illinois Biometric Information Privacy Act, the oldest biometric regulation in the United States, contains the following clause, which is not particularly favorable for health-based applications: “Biometric identifiers do not include information captured from a patient in a health care setting or information collected, used, or stored for health care treatment…”

This scenario is improving. Last year, the U.S. Food and Drug Administration released its first Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan, considered a significant regulatory step toward facilitating AI in the health industry.

Despite numerous drawbacks, most notably the issue of bias, technologies like facial recognition will continue to expand across our lives, improving through cutting-edge algorithms and richer datasets. In certain applications, facial recognition can even act as a useful tool against human bias. Moving ahead, it will be interesting to observe how far facial recognition technologies can evolve from a “source of bias” to a “tool against bias.”

References

1. https://about.fb.com/news/2021/11/update-on-use-of-face-recognition/

2. https://www.marketwatch.com/story/facial-recognition-technology-is-one-of-the-biggest-threats-to-our-privacy-11640623526

3. https://www.embs.org/pulse/articles/detecting-faces-saving-lives/

4. https://www.nature.com/articles/d41586-019-00027-x

5. https://www.ilga.gov/legislation/ilcs/ilcs3.asp?ActID=3004&ChapterID=57

6. https://www.fda.gov/news-events/press-announcements/fda-releases-artificial-intelligencemachine-learning-action-plan


© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.