🔬 Research Summary by Giuliana Luz Grabina, a philosophy undergraduate student at McGill University, with an interest in AI/technology policy regulation from a gendered perspective.
[Original paper by Jose-Miguel Bello y Villarino and Ramona Vijeyarasa]
Overview: In both regional and global International Human Rights law, it is firmly established that states have a dual obligation: they must not only refrain from committing acts in violation of human rights, but also ensure that the ‘…essential rights of the persons under their jurisdiction are not harmed.’ This paper argues that the lack of adequate regulation in the AI sector may itself violate International Human Rights norms, reflecting a situation in which governments have failed to uphold their obligations to protect, fulfill, and remedy.
Introduction
Until recently, states’ dual obligation under International Human Rights (IHR) law entailed protecting individuals from possible human rights violations derived from the actions and inactions of private persons and other public authorities. The increasing use of Artificial Intelligence presents a new challenge to this traditional IHR framework, as harm can originate in systems that escape human control, or, as the OECD group of experts put it, in systems with ‘…varying levels of autonomy.’ In this sense, the authors warn, AI operates outside the scope of the traditional IHR framework, because the state would need to exert its authority over an entity that private or public persons may not even control.
In this article, Jose-Miguel Bello y Villarino and Ramona Vijeyarasa use three examples of AI’s implications for women’s rights to highlight competing human rights concerns, and they argue that addressing these (and other) questions is the only way to correctly assess what obligations states have concerning AI and whether states are prepared to meet those obligations.
Key Insights
The Artificial Intelligence Challenge for Human Rights: Are States Prepared?
The main challenge for states as the primary entities responsible for human rights protection is not substantive but rather procedural. States must consider whether they are prepared to protect against possible rights violations derived from the misapplication of AI systems and whether they are equipped to remedy cases of violations. In the authors’ view, IHR law mandates states to engage in that consideration. As Article 2 of the International Covenant on Civil and Political Rights establishes:
“[w]here not already provided for by existing legislative or other measures, each state party to the present Covenant undertakes to take the necessary steps … to adopt such laws or other measures as may be necessary to give effect to the rights recognized in the present Covenant.”
In what follows, the authors present three examples to illustrate the thorny questions that states must consider when assessing if their human rights legal framework is ready to address the many challenges that AI systems will involve.
Competing Human Rights Concerns: A Zero-Sum Game
The first example, Nadia, demonstrates how efforts to promote the rights of individuals with disabilities may inadvertently jeopardize women’s rights. The second example, VioGen, demonstrates how efforts to protect women from gender-based violence risk impinging on other individuals’ fundamental rights. Lastly, System Y exemplifies how corporations and individuals may deliberately use private AI technology to undermine women’s rights.
- Nadia
Nadia is an omnichannel digital employee platform meant to ‘help the NDIA communicate with the hundreds of thousands of national disability insurance scheme participants’, giving ‘spoken or written answers in 32 languages to thousands of NDIS queries’ and learning from those interactions. Marie Johnson, Nadia’s designer, stated in her submission to a parliamentary committee that Nadia’s creation aligned directly with the UN Convention on the Rights of Persons with Disabilities, specifically the Convention’s call to promote communication for people with disabilities, including through ‘human-reader and augmentative and alternative modes, means and formats of communication, including accessible information and communication technology.’
However, according to the authors, Nadia falls short as an example of “human-rights-by-design technology.” As they emphasize, using a female voice for this voice-activated personal assistant reflects and promotes gender stereotypes of women’s subservience. Nadia, the authors suggest, exemplifies a core human rights challenge in regulating AI: harming one group of rights-holders in an attempt to advance the interests of another.
- VioGen
VioGen is a computer-based system that monitors the experiences of victims of gender-based violence and proposes possible protection and assistance measures to public authorities. This risk assessment mechanism is a crucial component of the relative success of the Spanish model of combating gender-based violence in reducing the absolute number of deaths and repeat offenses. The VioGen system has undergone three iterations: (i) a human-driven system; (ii) a system supported by traditional statistics; and (iii) an AI-driven system. Currently, the operational version of VioGen remains in phase (ii): it relies on traditional statistical methods to estimate risk by comparing a case with the historical data in the VioGen database. According to the authors, the AI-driven system, when tested, outperformed both the human-driven system and the statistical system in terms of accuracy. As the authors put it, “[out] of the 600,000 women in the Spanish VioGen database, 60,000 to 90,000 could have had a better [risk-level assessment] with the new [AI] system.” (A simplified sketch of the contrast between the two approaches appears at the end of this example.)
However, the system presents risks. First, as with any data-driven system, it can be manipulated for malicious purposes if someone tampers with the information fed into it; this could affect the victim’s level of protection and infringe upon the alleged perpetrator’s rights to family life and freedom of movement. Second, in cases of system failure, the responsibility for that failure is unclear. For instance, in 2020, the Spanish Audiencia Nacional found that the state had to compensate the family of Stefany González Escarramán, murdered by a former partner against whom she had been denied a restraining order based on a VioGen-generated risk assessment. A system designed to protect women from gender-based violence had failed. Would Ms. González Escarramán have been one of the thousands whose risk of re-victimization would have been better assessed had the AI-driven system already been in place?
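To make the difference between the deployed phase (ii) and the tested phase (iii) more concrete, here is a minimal, purely illustrative Python sketch. The indicator names, weights, thresholds, and synthetic data below are invented for the illustration and are not taken from the paper or from the actual VioGen protocol; the point is only the structural contrast between a hand-weighted checklist score and a model whose weighting is learned from historical cases.

```python
# Hypothetical sketch only: indicators, weights, thresholds, and data are
# invented and do NOT reproduce the actual VioGen protocol or the paper's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

INDICATORS = ["prior_violence", "escalation", "weapon_threats", "victim_fears_lethal_harm"]
WEIGHTS = np.array([3.0, 2.0, 4.0, 3.0])  # hand-set weights, phase (ii) style
BANDS = [(0, "unappreciated"), (3, "low"), (6, "medium"), (9, "high"), (12, "extreme")]

def checklist_risk(case: np.ndarray) -> str:
    """Phase (ii)-style assessment: a weighted sum of yes/no answers mapped to a risk band."""
    score = float(WEIGHTS @ case)
    label = BANDS[0][1]
    for threshold, band in BANDS:
        if score >= threshold:
            label = band
    return label

# Phase (iii)-style assessment: learn the weighting from historical cases and
# observed outcomes instead of fixing it by hand (synthetic data for the sketch).
rng = np.random.default_rng(seed=0)
past_cases = rng.integers(0, 2, size=(500, len(INDICATORS)))
revictimized = (past_cases @ WEIGHTS + rng.normal(0, 2, 500) > 6).astype(int)
model = LogisticRegression().fit(past_cases, revictimized)

new_case = np.array([1, 0, 1, 1])  # answers to the four yes/no indicators
print("checklist band:", checklist_risk(new_case))
print("learned re-victimization probability:", round(model.predict_proba([new_case])[0, 1], 2))
```

The question the authors raise is precisely whether, and under what conditions, a state should move from the first style of assessment to the second, given the rights and risks at stake on both sides.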
- System Y
System Y (left unidentified to avoid promoting it) lets users turn anyone into a ‘porn star’ by uploading that person’s photo to a website, which uses deepfake technology to swap the person’s face into an adult video. According to the authors, one particularity of deepfake technology is its gendered dimension. As they note, “pornography-related attacks on non-media-relevant figures are more likely to target women than men, partly because it is an AI tool most often developed with women’s bodies in mind, and partly because the ratio of pornographic (still and animated) images involving women rather than men is unbalanced.”
Developing a human rights-oriented framework to address AI usage when it involves deliberate assaults on the human rights of specific groups remains a significant challenge. The concern is that private parties can use AI to undermine the rights of a segment of (or all) the population.
The Questions
The obligation to protect requires adequate legislation and extends beyond policy implementation. Therefore, the authors stress that AI-ready legislation is a state obligation under the IHR framework. In this subsection, the authors present eight questions that warrant consideration when assessing states’ legal frameworks concerning the problems highlighted in the previous three examples.
- Is the state prepared to balance the risks posed to different human rights when AI is deployed?
As seen in the Nadia case, the assistant may be very helpful for people with disabilities, yet it evidently reflects and perpetuates harmful gender stereotypes. A human rights-based regulation of AI may intentionally or unintentionally favor one group of rights-holders over another. Governments must be prepared to decide whom and what to favor, whom or what to sacrifice, and on what parameters to base those decisions.
- How can the state determine when to replace a human-performed system with an AI-based one to advance the interests of some rights-holders when that decision may affect or undermine the achievement of other rights or carry new risks?
This question, while connected to the first, differs in that it concerns the decision to replace human-performed systems with AI-based ones. As in the cases of Nadia and the latest iteration of VioGen, states must identify the factors they will consider when adopting an AI system: user readiness, general welfare, the protection of marginalized groups, cost savings, or a combination thereof.
- How can states ensure in their regulation that AI systems consider intersectional perspectives when applied at large scales?
As the authors emphasize, AI systems may fail to adequately protect the human rights of those who belong to several vulnerable groups at once. Indeed, while a human interviewer may readily recognize the importance of addressing racial inequalities, an AI-driven system may lack this awareness if such information is absent from its training data. A legal mandate for intersectionality in AI systems would help ensure this issue is not neglected during the human-to-machine transition.
- Is the domestic legal system prepared to identify and respond to human rights violations derived from the use of AI systems that may be minor in individual gravity but severe in societal terms when those systems are applied at a large scale?
Many of us have become so accustomed to virtual assistants in our households that it is difficult to acknowledge the gender stereotypes they may perpetuate about women’s roles. However, the continuous impact of voice-activated assistants like Nadia, interacting with millions of individuals, could have enduring effects, such as reinforcing stereotypical, restrictive, and unequal roles and responsibilities for women, particularly in caregiving.
- How can legal systems be flexible and responsive enough to address new types of human rights violations that only emerge as a result of these new technologies?
AI will be used in ways that we cannot currently foresee, and some of those uses may openly violate human rights. States must demonstrate that they are prepared to respond to these challenges.
- Whose responsibility is it to monitor these systems, and according to what parameters?
Nadia may improve the lives of people with disabilities but potentially cause harm to other rights-holders. Similarly, VioGen may achieve its objective of reducing recidivism, but potentially at the expense of the rights of alleged perpetrators, through an increasing number of recommended restraining orders. A human evaluator might consider the consequences of recidivism significant enough to justify this encroachment, but should the state be obligated to monitor such activities to protect the rights of potential perpetrators of gender-based violence? If so, the initial question remains: how can the state adjust the parameters to strike a balance between rights?
- Does the state need to wait for harm to occur and then offer a remedy, or should it create rules that prohibit, ex ante, those uses of AI that are highly likely to entail human rights violations?
In other words, given that technology advances faster than legislation, a proactive human rights-based mandate could minimize risk without gravely interfering with scientific progress.
- Are states ready to cooperate to effectively address uses of AI that are designed to undermine human rights and that cannot be regulated or policed domestically?
As instances like System Y reveal, individual states are limited in their capacity to prevent human rights violations. At some point, we will need to move beyond domestic regulation toward international cooperation.
Between the lines
Influential scholars like Donahoe and Metzger stress that ‘perhaps the darkest concerns [for HR] relate to misuse of AI by authoritarian regimes.’ While these may be the darkest concerns, this paper argues they are not necessarily the most significant. Instead, the main concern lies in the everyday normalization of AI, whose implications will likely impact different groups of rights-holders in distinct ways. As we navigate the challenges that AI’s emergence poses to human rights, states must consider, exercising their discretion, if and when AI-related risks are sufficiently understood to require a regulatory response.