Montreal AI Ethics Institute


Embedding Ethical Principles into AI Predictive Tools for Migration Management in Humanitarian Action

December 7, 2023

🔬 Research Summary by Sofia Woo, a recent graduate of McGill University, where she studied History and Political Science with an emphasis on scientific history and political economy.

[Original paper by Andrea Guillén and Emma Teodoro]


Overview: Current AI ethical frameworks from the UN and EU, while well-intentioned, are broad and do not account for specific contexts. To weave responsible principles into AI predictive tools for migration management, ethical frameworks must be embedded into every step of the technology’s design and development process.


Introduction

Immigration and refugee policies are some of the most divisive and controversial political topics. Lawmakers and humanitarian organizations often try to balance conflicting views while maintaining that ethical actions are being taken. Recent AI predictive tools for migration management can significantly help organizations with decision-making in these complex situations. By generating recommendations from large volumes of data, such tools can help governments and organizations better anticipate and plan for migrant influxes. The paper’s methodology is based on the EU’s H2020-funded project ITFLOWS and the EUMigraTool but goes beyond traditional AI ethics frameworks by accounting for migration management’s specificities. The authors first identify AI ethics principles and then translate these principles into practical requirements with a humanitarian context in mind. Lastly, by defining eight AI ethical requirements and corresponding actionable measures, the authors intend their findings to serve as a practical guide for designing and developing responsible AI predictive tools.

Key Insights

AI Predictive Tools in Migration Management: A Double-Edged Sword 

While AI predictive tools can significantly help humanitarian actors in decision-making processes to ensure migrants are as safe as possible, there are several risks when implementing the technology. Under the umbrella of “surveillance humanitarianism,” there are dangers such as “techno-solutionism” and “techno-colonialism.” In the former, technologies are used as simple blanket solutions for complex problems—thus neglecting the small but critical details in humanitarian actions such as migration management. In the latter, digital innovation, while usually well-intentioned, can have the unwanted consequences of perpetuating colonial relationships of dependency and inequality. 

Although AI ethics principles concerning humanitarian work do exist, they are often quite broad and fail to consider the specificities of particular situations. Principles outlined by the UN, Nesta, and the EU are not effectively woven into the design, development, and deployment of AI tools.

From Ideas to Reality: Steps on Translating Ethical Principles Into Actionable Measures

With the above issues outlined, the authors present four steps for transforming AI ethical principles into action. These steps are based on the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG) and the Institute of Electrical and Electronics Engineers’ (IEEE) guidelines for ethically aligned design. First, identify the four AI ethical principles: human autonomy, harm prevention, fairness, and transparency/explicability. Second, use these principles to inform requirements for assessing risks; examples include human agency and oversight, technical safety, privacy, transparency, diversity, environmental and societal well-being, and accountability. Third, tailor these general requirements to the technology’s context and purpose. Fourth, implement the ethical principles through technical and organizational measures.

The authors propose eight requirements for realizing these ethical principles, corresponding to those listed in the second step above. A few key takeaways emerge. The first is that it is critical for stakeholders from various fields to engage in discussions that give developers the contextual knowledge needed to create better tools. Technical teams, NGOs, migration scholars, legal and ethics experts, and other relevant parties should work together to supply this knowledge, which raises awareness of political and cultural sensitivities and, by extension, promotes fairness by ensuring that even underrepresented groups are accounted for.

Another critical point is that these technologies are only helpful if they are trusted by the humanitarian actors who use them. This is why it is crucial that the data fed to AI predictive tools is accurate, representative, and diverse. Not only are regular data quality checks encouraged, but transparency about AI systems’ limitations is also essential. Identifying the technology’s shortcomings (which enables explainability) is key to ensuring accurate, evidence-based decision-making.

Between the lines

AI’s potential to improve humanitarian work is an often-overlooked facet of broader conversations about technology. The authors do a thorough job of taking general AI ethical principles and refining them to fit the context of migration management. However, the paper stops short of considering tensions that may arise. Given just how politically charged migration and refugee issues are, how should humanitarian actors deal with governments that may use or manipulate the recommendations generated by AI predictive tools to prevent migration into their respective countries?

Additionally, while the authors advocate for a diverse set of stakeholders to be engaged in every part of the design and development process, they do not explain how this panel of stakeholders should be determined. In the case of migration management specifically, there are many groups to keep in mind (such as those defined by gender, age, religion, and sexual orientation). While it is crucial that different backgrounds are represented, there will naturally be conflicting viewpoints, and some groups may hold biases towards others based on their own cultures and circumstances. How developers should reconcile these inevitable conflicts while keeping in line with AI ethical principles is a critical question that needs further exploration if AI predictive tools are to succeed in anticipating and preparing for major migration events.


