
Research summary: Snapshot Series: Facial Recognition Technology

September 13, 2020

Summary contributed by Connor Wright, a third-year Philosophy student at the University of Exeter.

Link to full paper + authors listed at the bottom.


Mini-summary: Offering an interactive and well-worded overarching summary of facial recognition technology (FRT), this paper presents the current situation in the UK. It draws important distinctions between different FRT systems, explains how bias enters these systems, and offers topical case studies of those trying to implement FRT. The paper covers the benefits and risks in one of its six sections, and also explains how the technology actually works. Both fascinating and engaging, the paper earns a spot as much-needed background reading for any FRT debate.


Full summary:

The Snapshot series provides a very readable, intuitive, and engaging overview of the facial recognition scene in the UK. The paper is split into six sections, covering what facial recognition technology (FRT) is, how it works, what its risks and benefits are, and what the future looks like. I will now summarise some of my highlights from the paper, including the difference between facial verification and facial identification systems, and how bias can be woven into these systems.

Being from the UK, what first struck me was the report's note that the South Wales Police's (SWP) use of FRT systems had been ruled unlawful by the Court of Appeal. Here, the SWP's use of the technology was challenged in court by the civil liberties group Liberty, and was initially ruled lawful by the High Court, having been deemed to follow all the necessary regulation. However, Liberty then won on appeal, with the SWP ruled to have breached the Human Rights Act, the Data Protection Act, and the Equality Act. Not only does this show the precarious legal footing of FRT, but also that the technology is not immune to civil challenge. Such technology is often surrounded by a false sense that authorities are obliged to commit to its use and that civil society has no say in the matter. Yet Liberty has shown that no such obligation exists, and that civilians can feel empowered to call out governments and big corporations on their use of seemingly untouchable technology.

There is nonetheless an important distinction to draw in the use of FRT. Facial verification systems use a template of an already scanned face (such as on iPhones) and scan the face being presented to see whether it matches that template. Facial identification systems, on the other hand, are not looking for one face in particular. Instead, they operate on a one-to-many matching basis, whereby a face template is used to sift through millions of images to reveal which faces match. Furthermore, facial verification is more likely to be automated, with a match alone enough to warrant an action (such as unlocking your phone), whereas facial identification is more likely to be augmentative, overseen by a human before a decision is made. Facial identification is then further split into live and retroactive recognition. Debates about FRT and its problems tend to centre on live facial identification, rather than on retroactive systems or the verification system on your phone.
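To make the one-to-one versus one-to-many distinction concrete, here is a minimal Python sketch. Everything in it is an assumption for illustration: the random embeddings, the cosine-similarity measure, and the 0.8 threshold are hypothetical stand-ins for whatever a real FRT pipeline uses, and none of it comes from the paper.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Similarity between two face embeddings (hypothetical representation).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

THRESHOLD = 0.8  # illustrative match threshold, not taken from the paper

def verify(probe: np.ndarray, template: np.ndarray) -> bool:
    # One-to-one (facial verification): does the presented face match
    # the single enrolled template, e.g. to unlock a phone?
    return cosine_similarity(probe, template) >= THRESHOLD

def identify(probe: np.ndarray, gallery: dict) -> list:
    # One-to-many (facial identification): sift through a gallery of
    # templates and return every identity that matches the probe.
    return [name for name, tmpl in gallery.items()
            if cosine_similarity(probe, tmpl) >= THRESHOLD]

rng = np.random.default_rng(0)
template = rng.normal(size=128)
probe = template + rng.normal(scale=0.1, size=128)   # same face, noisy capture

print(verify(probe, template))                        # True -> phone unlocks
print(identify(probe, {"alice": template,
                       "bob": rng.normal(size=128)})) # ['alice']
```

Note how `verify` answers a yes/no question against one stored template, while `identify` returns a (possibly empty, possibly multi-hit) list drawn from a large gallery; that difference is part of why identification is the mode more often paired with human oversight.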

One problem faced by live FRT in particular is bias, and one of the key ways it is woven into a system is through a non-representative data set. What gave me a lot of food for thought was the paper's point that accuracy alone does not guarantee the elimination of bias. Even an accurate data set of over 10 million images will not eliminate bias if the data set is homogeneous. Such bias can only be exacerbated by private data collection efforts, which will be tailored to the company's interests.
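A toy calculation (my own illustration with invented numbers, not from the paper) shows how a headline accuracy figure can hide exactly this problem when one group dominates the data:

```python
# Hypothetical match counts: 99% of the evaluation data comes from group_a.
correct = {"group_a": 9_750, "group_b": 80}
total   = {"group_a": 9_900, "group_b": 100}

overall = sum(correct.values()) / sum(total.values())
print(f"overall accuracy: {overall:.1%}")   # 98.3% -- looks excellent

for group in total:
    rate = correct[group] / total[group]
    print(f"{group}: {rate:.1%}")           # group_a 98.5%, group_b 80.0%
```

The overall figure is dragged up by the over-represented group, so a homogeneous data set lets a system report high accuracy while performing markedly worse on everyone else.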

Despite all this doom and gloom, the paper does shed some positive light on the use of FRT. The technology can scale security infrastructure by improving efficiency, as well as aid the already swamped police forces around the world. Not only this, but multiple laws govern the use of the technology within the UK: the GDPR spans both private and public uses of FRT, alongside the laws the SWP was found to have breached, mentioned above.

These reassurances certainly provide a welcome respite in the FRT debate, and the paper offers a positive image of the ongoing conversations. Nevertheless, the paper is careful to emphasise that this is unfortunately not the whole picture. FRT can be of great benefit to society, but efforts to eliminate bias and the newly proposed legislation still have a long way to go to guide FRT to that destination.


Original paper by Centre for Data Ethics and Innovation: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/905267/Facial_Recognition_Technology_Snapshot_UPDATED.pdf
