🔬 Research summary contributed by Muriam Fancy, our Network Engagement Manager.
[Original paper by Petra Molnar]
Overview: The deployment of technologies such as AI on migrant communities is largely unregulated and undocumented, effectively turning these communities into testing grounds. This paper examines how the lack of regulation protecting migrant communities from the deployment of these technologies enables human rights violations and allows States to exercise control over these communities and their ability to migrate.
Introduction
The central concern is the lack of regulation and governance of these technologies by States. Dr. Molnar finds that States have intentionally allowed the line between citizens and non-citizens to blur in order to exercise power over migrant communities. A key stakeholder implication is that States permit private sector actors to deploy these technologies.
As a result, it is difficult and complicated for migrant communities to seek accountability. The absence of accountability frameworks to protect migrant communities creates what Dr. Molnar calls “legal black holes.” She also notes that technology with such consequences is not developed in a silo; it reflects existing power hierarchies.
Areas of Concern
In Dr. Molnar’s paper, AI technologies are defined as automated decision-making technologies, including automated data mining. A considerable concern with these technologies is how they are designed. Bias in automated decision-making, whether inherited from existing prejudice or introduced through the incorrect classification of variables, can have detrimental consequences, just as it can with human decision-makers. The four significant areas of concern are data collection, biometrics and consent, criminalization and securitization, and automated decision-making in immigration and refugee decisions.
- Data collection: data collection becomes a form of data colonization, as Western actors collect large amounts of data to identify patterns and predict migration. Collecting data from marginalized populations is especially concerning given the lack of accountability and regulation around these practices; historically, data collection has been weaponized against marginalized communities.
- Biometrics and consent: building on the previous concern, data sharing poses its own problems, since access to data is not distributed equally. The question of who has access to data is significant, and it ties into Dr. Molnar’s concerns about informed consent and the ability to opt out.
- Criminalization and securitization: automated decision-making technology is also being deployed at the border. One example is FRONTEX (the European Border and Coast Guard Agency) using “unpiloted” military drones in the Mediterranean to surveil migrant vessels as they attempt to reach land to apply for asylum. Deploying this technology before or at the border serves to bar these communities from applying for asylum; Dr. Molnar notes that this barrier moves towards criminalization by creating a “threat environment” at the border.
- Automated decision-making in immigration and refugee decisions: using automated decision-making technology to adjudicate immigration and refugee applications poses significant human rights risks. As Dr. Molnar notes in the case of the Canadian government, there are no legal means of protecting individuals who fall victim to inaccurate decisions made by these technologies.
“The monopolies of knowledge which are being created function to consolidate power and authority over technological development…” From the inequities created by deploying automated decision-making technology to the ways this technology is designed, the human rights and legal ramifications of these systems need to be addressed.
Dr. Molnar then examines the lack of international regulatory frameworks that effectively address and protect the rights of migrant communities. Presently, there is no single “integrated regulatory global governance framework” for the use of automated technology, and there are no regulations specific to migration management. Despite this gap, States are still bound by customary international law and, depending on their region, by regional instruments; in the EU, for example, countries must comply with the GDPR.
Impacts and the Accountability Gap
Dr. Molnar outlines some of the legal ramifications of experimenting with these technologies on migrants. The rights at stake include the right to life and liberty (recognized in the Convention Relating to the Status of Refugees); equality rights and freedom from discrimination (protected by the International Covenant on Economic, Social and Cultural Rights, the ICCPR, CERD, CEDAW, CRPD, CRC, and the Refugee Convention); and privacy rights (protected by Article 17 of the ICCPR; the UN High Commissioner for Human Rights has also noted the importance of privacy and freedom of expression).
This section concludes with an analysis grounded in administrative law and the principles of natural justice. Drawing the line between machine-made and human-made decisions is difficult, and the law offers no clear distinction either. Reviewing decisions made by automated decision-making technology is not presently possible because there is no rubric for humans to follow in conducting such a review. As Dr. Molnar argues throughout her paper, human rights must be central to the analysis, development, and deployment of technology.
The author concludes her analysis by discussing why States are able to deploy these technological experiments on migrant communities. The answer is that migrants are not afforded the same rights as citizens and therefore cannot exercise the same powers and mechanisms to protect themselves. Testing automated decision-making technology on migrant communities perpetuates the discourse that this community needs to be tracked and monitored, thus classifying it as a threat to state sovereignty and power.
Regulating these technologies is particularly important when questioning how States are able to continue such experimentation. The private sector’s role allows public sector bodies to walk away from their responsibilities by maintaining that they are not directly involved. Which stakeholder is responsible also shapes how data is collected, processed, and used by these technologies. Essentially, an actor’s role determines how it maintains control over these communities through automated decision-making technology. The accountability gap needs to be addressed to ensure the safety of migrant communities.
Between the lines
The “AI divide,” as Dr. Molnar notes, dictates who is involved in designing and deploying AI. The issues raised above demonstrate that migrant communities have no part in the end-to-end development and deployment of AI, which further perpetuates systemic power imbalances and inequality. The lack of governance and regulation from both the private and public sectors needs to be addressed in order to build a holistic and inclusive legal framework that protects migrant communities. This gap also demonstrates how techno-solutionism remains a popular approach to addressing systemic issues.
In short, Dr. Molnar calls for a more inclusive and rigorous governance framework to address the public-private accountability gap, along with clearer regulatory guidelines for the private sector. A suggested follow-up reading to the discussions and analysis in this paper is a more recent report by Dr. Molnar titled “Technological Testing Grounds: Migration Management Experiments from the Ground Up,” which builds on these concerns and on the international legal analysis of the governance and accountability gaps in deploying AI at the border.