🔬 Research Summary by Sofia Woo, a recent graduate from McGill University who studied History and Political Science, emphasizing scientific history and political economics.
[Original paper by Andrea Guillén and Emma Teodoro]
Overview: Current AI ethical frameworks from the UN and EU, while well-intentioned, are broad and fail to account for specific contexts. Ethical frameworks must be embedded into every step of the technology’s design and development process to weave responsible principles into AI predictive tools for migration management.
Immigration and refugee policies are some of the most divisive and controversial political topics. Lawmakers and humanitarian organizations often try to balance conflicting views while maintaining that ethical actions are being administered. Recent AI predictive tools for migration management can significantly help organizations with decision-making in these complex situations. By generating recommendations from large swaths of data, these tools can help governments and organizations better anticipate and plan for migrant influxes. This paper’s methodology is based on the EU’s H2020-funded project ITFLOWS and the EUMigraTool but goes beyond traditional AI ethics frameworks by accounting for migration management’s specificities. The authors first identify AI ethics principles and then translate these principles into practical requirements with a humanitarian context in mind. Lastly, by determining eight AI ethical requirements and actionable measures to fulfill them, the authors intend their findings to serve as a practical guide for designing and developing responsible AI predictive tools.
AI Predictive Tools in Migration Management: A Double-Edged Sword
While AI predictive tools can significantly help humanitarian actors in decision-making processes to ensure migrants are as safe as possible, there are several risks when implementing the technology. Under the umbrella of “surveillance humanitarianism,” there are dangers such as “techno-solutionism” and “techno-colonialism.” In the former, technologies are used as simple blanket solutions for complex problems—thus neglecting the small but critical details in humanitarian actions such as migration management. In the latter, digital innovation, while usually well-intentioned, can have the unwanted consequences of perpetuating colonial relationships of dependency and inequality.
Although AI ethics principles concerning humanitarian work do exist, they are often quite broad and fail to consider the specificities of certain situations. Principles outlined by the UN, Nesta, and the EU are not effectively woven into designing, developing, and deploying AI tools.
From Ideas to Reality: Steps on Translating Ethical Principles Into Actionable Measures
With the above issues outlined, the authors present four steps for transforming AI ethical principles into action. These steps are based on the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG) and the Institute of Electrical and Electronics Engineers’ (IEEE) guidelines for ethically aligned design. First, identify the four AI ethical principles: human autonomy, harm prevention, fairness, and transparency/explicability. Second, use these principles to inform requirements for assessing risks. Examples of these requirements include human agency and oversight, technical safety, privacy, transparency, diversity, environmental and societal well-being, and accountability. Third, tailor these general requirements to the technology’s context and purpose. Fourth, execute the contextualized requirements through concrete technical and organizational measures.
Eight requirements are proposed to achieve specific ethical principles; these correspond to the second step above, translating principles into actionable measures. There are a few key takeaways from this. The first is that it is critical to have stakeholders from various fields engage in discussions to provide developers with the contextual knowledge needed to create better tools. Technical teams, NGOs, migration scholars, legal and ethics experts, and other relevant figures should all contribute to these discussions. This contextual knowledge raises awareness about political and cultural sensitivities and, by extension, promotes fairness by ensuring that even underrepresented groups are accounted for.
Another critical point is that these technologies are only helpful if the humanitarian actors using them trust them. This is why it is crucial that the data being fed to AI predictive tools is accurate, representative, and diverse. Not only are regular data quality checks encouraged, but transparency about AI systems’ limitations is also essential. Identifying the technology’s shortcomings (which enables explainability) is key to ensuring accurate, evidence-based decision-making processes.
Between the lines
AI’s potential to improve humanitarian work is an often overlooked facet in broader conversations about technology. The authors do a thorough job of taking general AI ethical principles and refining them to fit the context of migration management. However, the paper stops short of considering tensions that may arise. Given just how politically charged migration and refugee issues are, how should humanitarian actors deal with governments who may use or manipulate the recommendations generated by AI predictive tools to prevent migration into their respective countries?
Additionally, while the authors advocate for a diverse set of stakeholders to be engaged in every part of the design and development process, they do not specify how this panel of stakeholders should be determined. Specifically, in the case of migration management, there are many groups to keep in mind (such as gender, age, religion, sexual orientation, etc.). While it is crucial that different backgrounds are represented, there will naturally be conflicting viewpoints, and some groups will hold biases towards others based on their own cultures and circumstances. How developers should reconcile these inevitable conflicting opinions and demands while keeping in line with AI ethical principles is a critical question that needs to be further explored for AI predictive tools to successfully anticipate and prepare for major migration events.