🔬 Original article by Nils Aoun, Chloé Currie, Dane Malenfant, and Cella Wardrop from Encode Justice Canada
This is a part of our Recess series in which university students from across Canada briefly explain key concepts in AI that young people should know about: specifically, what AI does, how it works, and what it means for you. The writers are members of Encode Justice Canada, a student-led advocacy organization dedicated to including Canadian youth in the essential conversations about the future of AI.
Facial recognition (FR) is the confirmation of an individual’s identity through images of their face, and it has become an increasingly popular form of identification based on biometric data. Biometric measurement is not a new technology; rather, its uses have multiplied over time and entered public consciousness. FR, first developed during the Cold War, has been used for a variety of identification and surveillance purposes. Often presented as an innovation in computer science and machine learning, FR is in fact an evolution of the late-nineteenth-century practice of measuring, mapping, and analyzing biometric information. This long history shows how biometric data has permeated modern society, with artificial intelligence becoming a regular part of life rather than a fictional anomaly. The use of FR remains controversial because of ongoing debates over privacy, bias, abuse of power, and an uneven balance of costs and benefits. The unknowns surrounding its uses, and the possible threats it poses on a national and international scale, make the deployment of FR systems a subject in urgent need of regulation. Many of these controversies stem from police use of the technology to identify suspects.
What is facial recognition and how does it work?
FR technology has spread quickly and is used by a wide range of actors, from Big Tech companies to governments and law enforcement. Depending on the application, the technology and the results it yields may or may not benefit the people it is deployed on. The ability to automatically sort the pictures in your camera roll by the people in them sounds like a great idea; however, this assumes the FR software is perfect and impartial, which is not the case. This brief will dive into what FR is, how it works, and some of the social impacts of the bias built into its software.
How does FR work?
FR systems identify or confirm someone’s identity using their face, i.e., their biometric features. FR can identify people in pictures, in videos, or in real time. Interest in the technology is growing in other areas, though it is still mostly used for security and law enforcement. While FR falls under the category of biometric security, which also includes voice, fingerprint, and iris recognition, it has use cases in a wider range of areas, notably entertainment; think of Instagram or Snapchat filters as recent smartphone examples. With this in mind, let’s see how the technology works.
The first thing FR software needs is a way to detect faces. A camera detects and locates a face, which can appear in different positions, and captures an image of it. The software then analyzes the picture’s geometry, measuring key biometric features such as the distance between the eyes, the depth of the eye sockets, and the distance from forehead to chin. The next step is to convert the image into data, turning these measurements into a mathematical representation. The result is a faceprint – the equivalent of a thumbprint, for your face – which can be matched against other faces in a database.
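The matching step above can be sketched in a few lines of code. This is a toy illustration only: each faceprint is treated as a short list of measurements, and the closest enrolled faceprint within a distance threshold counts as a match. Real systems use deep-network embeddings with hundreds of dimensions, and the names and numbers below are hypothetical.

```python
import math

def distance(a, b):
    """Euclidean distance between two faceprint vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_match(probe, database, threshold=1.0):
    """Return the closest enrolled identity, or None if no one is close enough."""
    name, dist = min(
        ((n, distance(probe, fp)) for n, fp in database.items()),
        key=lambda pair: pair[1],
    )
    return name if dist <= threshold else None

# Hypothetical enrolled faceprints: (eye distance, eye-socket depth, forehead-to-chin)
database = {"alice": [0.62, 0.30, 1.10], "bob": [0.70, 0.25, 1.25]}

# A new photo is reduced to the same measurements, then compared to the database.
print(best_match([0.61, 0.31, 1.08], database))  # closest to alice's faceprint
```

Note that the threshold matters: set it too loosely and strangers are "recognized" as someone in the database; set it too strictly and real matches are missed. This trade-off is at the heart of the error-rate debates discussed later.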
Where is FR used?
Facebook offers a familiar example: any photo tagged with a person’s name becomes part of Facebook’s database, which can be used for FR. Through this method, Facebook gained access to up to 650 million photos. This let Facebook offer features like automatically created albums of specific people, allowing users to easily find pictures of their friends. However, Facebook’s use of FR has been controversial, leading the company to announce that it would stop using the technology.
Facebook may have been inspired by recent controversies surrounding the use of FR by law enforcement. China, for example, does not hide that it uses FR for racial profiling and tracking Uighur Muslims. Russia has video cameras that scan for “people of interest” and is considering equipping police officers with glasses that work similarly. Furthermore, state-of-the-art FR systems have been found to frequently misidentify darker-skinned people, particularly Black women. This discrepancy, combined with the overrepresentation of Black people in Canadian prisons, may create a feedback loop: “racist policing strategies lead to disproportionate arrests of Black people, who are then subject to future surveillance.”
Overall, FR has many use cases for both the private and the public sector, ranging from entertainment to more serious cases, like law enforcement. Just like any technology, it has a lot of potential for good. However, it is important not to let this progress come too fast, at the cost of marginalized communities being left behind. Technology should work for all of us, and we must make sure everyone can benefit from it – and that no one gets hurt by it.
FR as a human rights issue
FR technology has repeatedly been shown to be biased and inaccurate, raising concerns about its uses and its potential negative impacts on society. A 2019 study by the U.S. Department of Commerce’s National Institute of Standards and Technology found that the accuracy of FR software varied with the age, race, and ethnicity of the faces analyzed, as well as with the location where the software was developed. Additionally, the Gender Shades project examined the accuracy of gender classification by FR software from IBM, Microsoft, and Face++, finding that the software identified men more accurately than women, and that accuracy decreased for darker-skinned subjects. Based on this and much other research, a key concern is that FR technology tends to further discriminate against those already discriminated against in society and can therefore worsen existing inequalities. Depending on how FR software is used, such discrimination could shape individuals’ lives: for example, affecting their chances of being hired or of being arrested. Governments and intelligence agencies use FR technology in high-stakes situations, like flagging suspicious people in airports or labelling people in the street as ‘criminals.’
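The key move behind studies like Gender Shades is *disaggregated* evaluation: instead of reporting one overall accuracy number, accuracy is computed separately for each demographic group. A minimal sketch, using made-up example records rather than any real dataset:

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: list of (group, predicted_label, true_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    # Per-group accuracy instead of a single overall number.
    return {group: correct[group] / total[group] for group in total}

# Hypothetical classifier outputs: overall accuracy is 75%,
# but it hides a 100% vs. 50% gap between the two groups.
records = [
    ("lighter_male", "male", "male"), ("lighter_male", "male", "male"),
    ("darker_female", "male", "female"), ("darker_female", "female", "female"),
]
print(accuracy_by_group(records))
```

A single headline accuracy figure can look impressive while masking exactly the disparities the research above documents; breaking results down by group makes them visible.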
The United Nations
Understandably, the use of FR technology worries human rights groups because of its potential for discrimination and rights infringement. The United Nations’ Universal Declaration of Human Rights outlines every individual’s right to equality, privacy, safety, and freedom of thought and expression. As a member of the UN, Canada is committed to these goals, as well as to those outlined in the Canadian Human Rights Act of 1977 and the Canadian Charter of Rights and Freedoms of 1982. In September 2021, Michelle Bachelet, the United Nations High Commissioner for Human Rights, called for the regulation of AI in light of its potential for discriminatory practices and its “ability to feed human rights violations at an enormous scale.” Bachelet cited the potential for governments to use FR to further control and discriminate against people, and raised the concern that FR technology can be used to track individuals, violating their privacy and data rights. She called for a temporary ban on the use of biometric recognition software until appropriate regulations are in place.
To align with its values and those of the UN, Canada is exploring regulation of biometric technology, including FR software. The Clearview AI scandal of early 2021, in which the company was found to have provided illegal FR services to law enforcement across Europe and North America, brought Canada’s regulation of FR to the forefront of political and human rights discussions. As of January 2022, the Canadian government had not announced any specific policies regulating the use of FR.
Biometric tech: Clearview AI and the RCMP
In the past decade, advances in biometric scanning software have led to unique inventions in fingerprint scanning, voice detection, and genealogy. The biometric tech industry in the US saw revenue grow by an average of 8% per year over the last decade but is now projected to decline by an average of 0.7% per year over the next five years. This decline has been attributed not only to increased public scrutiny of how biometric data is collected and used but also to new regulations and privacy laws.
While representing only 8% of the biometric scanning software industry’s total revenue, FR technology is a controversial subset of it. Governments have used FR since the 1990s, and in 2014 the FBI developed and deployed an FR system to aid law enforcement officials. However, low accuracy rates and persistent racial bias in false positives and false negatives have opened the door to legal challenges and civil rights questions about the use of FR systems. An area’s demographics – the racial and gender makeup of its population – shape how an FR system performs there: an algorithm trained and tested on one population distribution may not perform equally well on another. Limiting the technology’s inherent biases therefore requires measuring and comparing error rates across the population distributions it will actually be applied to.
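The two error types mentioned above are usually reported as a false match rate (FMR, strangers wrongly accepted) and a false non-match rate (FNMR, genuine matches wrongly rejected). The sketch below, with hypothetical similarity scores, shows how a single decision threshold can produce very different error rates for different groups – which is exactly the disparity the audits discussed earlier measure.

```python
def error_rates(trials, threshold):
    """trials: list of (similarity_score, is_genuine_pair) tuples.
    Returns (false match rate, false non-match rate) at the given threshold."""
    impostor_scores = [s for s, genuine in trials if not genuine]
    genuine_scores = [s for s, genuine in trials if genuine]
    fmr = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    fnmr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return fmr, fnmr

# Hypothetical scores: the system separates group A cleanly,
# but group B's scores overlap, so the same threshold fails it.
group_a = [(0.9, True), (0.8, True), (0.2, False), (0.1, False)]
group_b = [(0.9, True), (0.4, True), (0.6, False), (0.1, False)]

for name, trials in [("group_a", group_a), ("group_b", group_b)]:
    print(name, error_rates(trials, threshold=0.5))
```

Here group A gets zero errors while group B suffers both false matches and false non-matches at the same threshold – the kind of disparity a single aggregate accuracy figure would hide.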
Nonetheless, governments are already using private FR systems to identify citizens. Named one of TIME magazine’s top 100 companies of 2021, Clearview AI has access to 3 billion images scraped from social media platforms, was recently used to identify the US Capitol rioters of January 6, 2021, and has one of the best accuracy rates of any FR software currently on the market, according to a US federal report on false-negative identification rates. After a data breach, Clearview AI’s technology was found to also be in use by various organizations, including Macy’s, Walmart, the NBA, and, notably, the NYPD, which had policies meant to inhibit “creating an unsupervised repository of photos that facial recognition systems can reference, and restrict the use of facial recognition technology to a specific team,” and which had previously denied working with Clearview AI.
Clearview AI has developed substantial connections with international law enforcement organizations, including the Royal Canadian Mounted Police (RCMP). The partnership between Clearview AI and the RCMP prompted national scrutiny and a federal investigation, because Clearview AI built its database through a technique called ‘scraping’: automatically collecting publicly available photos of faces from social media and other websites. After the Office of the Privacy Commissioner concluded that Clearview AI’s data collection method violated the law, and that use of its technology was thus illegal, the RCMP launched a National Technology Onboarding Program unit to review how it adopts new technologies, and it has committed to implementing all of the privacy commissioner’s recommendations. FR technology nonetheless remains available and increasingly used. According to the Canadian Civil Liberties Association (CCLA), FR is more like “‘facial fingerprinting’ rather than recognition because it gives a more accurate impression of what we’re talking about: an identifier inextricably linked with our body” and will culminate in the ability “to identify your name, address, place of work, friend group, or many other private factors simply by taking a picture of you in public.”
Biometric technologies such as FR are increasingly used in both the public and private sectors. FR identifies individuals through their facial characteristics and has been employed by a diverse group of organizations, from Facebook to the RCMP. Its recent popularization has bred public hesitancy over possible abuses of the system. The use of FR by the RCMP and other law enforcement organizations worldwide underscores the racial and gender biases within the technology, pointing to a larger systemic issue, and the Clearview AI scandal offered only a glimpse of the dangers of FR in the hands of law enforcement. To combat these issues, governments must introduce policies regulating the use of FR in both the public and private sectors, to prevent the misuse of personal data and to ensure that FR is practiced safely.
 Nikki Gladstone, “How Facial Technology Permeated Everyday Life,” Centre for International Governance Innovation, published September 19, 2018: https://www.cigionline.org/articles/how-facial-recognition-technology-permeated-everyday-life/.
 Thorin Klosowski, “Facial Recognition Is Everywhere. Here’s What We Can Do About It,” New York Times: Wirecutter, published July 15, 2020: https://www.nytimes.com/wirecutter/blog/how-facial-recognition-works/.
 Kaspersky, “What is Facial Recognition – Definition and Explanation,” undated: https://www.kaspersky.com/resource-center/definitions/what-is-facial-recognition.
 Jerome Pesenti, “An Update On Our Use of Facial Recognition,” Meta Newsroom, published November 2, 2021: https://about.fb.com/news/2021/11/update-on-use-of-face-recognition/
 Ian Sample, “What is facial recognition – and how sinister is it?,” The Guardian, July 29, 2019: https://www.theguardian.com/technology/2019/jul/29/what-is-facial-recognition-and-how-sinister-is-it.
 Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” Proceedings of Machine Learning Research 81 (2018): 1–15.
 Alex Najibi, “Racial Discrimination in Face Recognition Technology,” Harvard University: Science in the News, published October 24, 2020: https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/. And Akwasi Owusu-Bempah, Maria Jung, Firdaous Sbaï, Andrew S. Wilton, and Fiona Kouyoumdjian, “Race and Incarceration: The Representation and Characteristics of Black People in Provincial Correctional Facilities in Ontario, Canada,” Race and Justice (April 2021): https://doi.org/10.1177/21533687211006461.
 Patrick Grother et al., “Face Recognition Vendor Test (FRVT): Part 3: Demographic Effects.” U.S. Department of Commerce, National Institute of Standards and Technology, 8280 (December 2019): 1-82.
 Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” Proceedings of Machine Learning Research 81 (2018): 1–15.
 Harrison Rudolph et al., “Not Ready for Takeoff: Face Scans at Airport Departure Gates.” Georgetown Law Center on Privacy & Technology. Published December 21, 2017: https://www.airportfacescans.com/.
 United Nations. Universal Declaration of Human Rights. Published 1948: https://www.un.org/sites/un2.un.org/files/udhr.pdf.
 United Nations. (2021). “Urgent action needed over artificial intelligence risks to human rights.” United Nations, UN News: Global perspective Human stories. Published 2021: https://news.un.org/en/story/2021/09/1099972.
 Office of the Privacy Commissioner of Canada. Police use of Facial recognition Technology in Canada and the way forward: Special report to Parliament on the OPC’s investigation into the RCMP’s use of Clearview AI and draft joint guidance for law enforcement agencies considering the use of facial recognition technology. Published June 10, 2021: https://www.priv.gc.ca/en/opc-actions-and-decisions/ar_index/202021/sr_rcmp/.
 Jack Curren, “Biometric Scan Software.” IBISWorld Industry. Report no. OD4530 (2021): https://www.ibisworld.com/united-states/market-research-reports/biometrics-scan-software-industry/
 Tate Ryan-Mosley, “The new lawsuit that shows facial recognition is officially a civil rights issue,” MIT Technology Review, published April 14, 2021: https://bit.ly/3vTo3EC
 Jacqueline G. Cavazos et al., “Accuracy comparison across face recognition algorithms: Where are we on measuring race bias?” IEEE Transactions on Biometrics, Behavior, and Identity Science 3, no. 1 (2021): 101–111.
 FRVT 1:N Identification, False Negative Identification Rates. U.S. Department of Commerce, National Institute of Standards and Technology (2021): https://pages.nist.gov/frvt/html/frvt1N.html#_FRVT_Ongoing_.
 Tate Ryan-Mosley, “The NYPD used a controversial facial recognition tool. here’s what you need to know,” MIT Technology Review, published April 9, 2021: https://www.technologyreview.com/2021/04/09/1022240/clearview-ai-nypd-emails/.
 Office of the Privacy Commissioner of Canada, Police use of Facial recognition Technology in Canada and the way forward. Canadian Civil Liberties Association, “Facial Surveillance: Protecting Your Privacy Rights from AI,” Undated: https://ccla.org/our-work/privacy/surveillance-technology/facial-recognition/.