Montreal AI Ethics Institute

Democratizing AI ethics literacy


Embedding Ethical Principles into AI Predictive Tools for Migration Management in Humanitarian Action

December 7, 2023

🔬 Research Summary by Sofia Woo, a recent graduate of McGill University, where she studied History and Political Science with an emphasis on the history of science and political economy.

[Original paper by Andrea Guillén and Emma Teodoro]


Overview: Current AI ethical frameworks from the UN and EU, while well-intentioned, are broad and fail to account for specific contexts. To weave responsible principles into AI predictive tools for migration management, ethical frameworks must be embedded into every step of the technology’s design and development process.


Introduction

Immigration and refugee policy is among the most divisive and controversial political topics. Lawmakers and humanitarian organizations must balance conflicting views while ensuring that their actions remain ethical. AI predictive tools for migration management can significantly aid decision-making in these complex situations: by generating recommendations from large volumes of data, they help governments and organizations better anticipate and plan for migrant influxes. The paper’s methodology is grounded in the EU’s H2020-funded ITFLOWS project and its EUMigraTool, but it goes beyond traditional AI ethics frameworks by accounting for the specificities of migration management. The authors first identify core AI ethics principles and then translate them into practical requirements tailored to the humanitarian context. Finally, by deriving eight ethical requirements and corresponding actionable measures, they aim to provide a practical guide for designing and developing responsible AI predictive tools.

Key Insights

AI Predictive Tools in Migration Management: A Double-Edged Sword 

While AI predictive tools can significantly help humanitarian actors make decisions that keep migrants as safe as possible, implementing the technology carries several risks. Under the umbrella of “surveillance humanitarianism” fall dangers such as “techno-solutionism” and “techno-colonialism.” In the former, technologies are treated as blanket solutions to complex problems, neglecting the small but critical details of humanitarian action such as migration management. In the latter, digital innovation, though usually well-intentioned, can have the unwanted consequence of perpetuating colonial relationships of dependency and inequality.

Although AI ethics principles for humanitarian work do exist, they are often quite broad and fail to consider the specificities of particular situations. Principles outlined by the UN, Nesta, and the EU are not effectively woven into the design, development, and deployment of AI tools.

From Ideas to Reality: Steps on Translating Ethical Principles Into Actionable Measures

With these issues outlined, the authors present four steps for translating AI ethical principles into action, based on the guidelines of the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG) and the Institute of Electrical and Electronics Engineers’ (IEEE) work on ethically aligned design. Step one identifies four AI ethical principles: respect for human autonomy, prevention of harm, fairness, and transparency/explicability. Step two translates these principles into requirements for assessing risk, including human agency and oversight, technical robustness and safety, privacy, transparency, diversity, environmental and societal well-being, and accountability. Step three contextualizes these general requirements to the technology’s specific purpose and setting. Step four operationalizes them through concrete technical and organizational measures.
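As a rough illustration (not part of the paper), the four-step cascade from principles to requirements to measures can be sketched as a simple lookup structure. All of the concrete measures below are hypothetical examples, not the authors’ recommendations:

```python
# Hypothetical sketch of the principle -> requirement -> measure cascade
# described above. Requirement names follow the AI HLEG vocabulary; the
# concrete measures are illustrative assumptions, not from the paper.

PRINCIPLES_TO_REQUIREMENTS = {
    "human autonomy": ["human agency and oversight"],
    "harm prevention": ["technical robustness and safety", "privacy"],
    "fairness": ["diversity", "environmental and societal well-being"],
    "transparency/explicability": ["transparency", "accountability"],
}

# Steps three and four: contextualized measures attached to each requirement.
CONTEXTUAL_MEASURES = {
    "human agency and oversight": [
        "keep a human decision-maker in the loop for all migration forecasts",
    ],
    "transparency": [
        "document the predictive model's known limitations for end users",
    ],
}

def measures_for(principle: str) -> list[str]:
    """Collect the concrete measures that operationalize a given principle."""
    measures = []
    for requirement in PRINCIPLES_TO_REQUIREMENTS.get(principle, []):
        measures.extend(CONTEXTUAL_MEASURES.get(requirement, []))
    return measures
```

The point of the structure is that measures attach to requirements, not directly to principles, mirroring the paper’s insistence that contextualization happens at the requirement level.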

Eight requirements, those listed under step two above, are proposed to realize the ethical principles. A few key takeaways follow. First, it is critical that stakeholders from various fields engage in discussion to give developers the contextual knowledge needed to create better tools: technical teams, NGOs, migration scholars, legal and ethics experts, and other relevant actors should work together. This contextual knowledge raises awareness of political and cultural sensitivities and, by extension, promotes fairness by ensuring that even underrepresented groups are accounted for.

Another critical point is that these technologies are only helpful if the humanitarian actors using them trust them. This is why it is crucial that the data fed to AI predictive tools be accurate, representative, and diverse. Regular data quality checks are encouraged, and transparency about an AI system’s limitations is equally essential: identifying a technology’s shortcomings (which enables explainability) is key to accurate, evidence-based decision-making.
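As a loose illustration of what “regular data quality checks” might look like in practice (a hypothetical sketch under assumed field names and thresholds, not the authors’ implementation):

```python
# Hypothetical sketch of a routine data quality check for a migration dataset.
# Field names and the representativeness threshold are illustrative assumptions.

def quality_report(records, required_fields, group_field, min_group_share=0.05):
    """Flag missing required fields and underrepresented groups in the data."""
    missing = {f: 0 for f in required_fields}
    group_counts = {}
    for rec in records:
        for f in required_fields:
            if rec.get(f) in (None, ""):
                missing[f] += 1
        group = rec.get(group_field, "unknown")
        group_counts[group] = group_counts.get(group, 0) + 1

    total = len(records) or 1  # avoid division by zero on empty input
    underrepresented = [
        g for g, n in group_counts.items() if n / total < min_group_share
    ]
    return {"missing_counts": missing, "underrepresented_groups": underrepresented}
```

A check like this would run on every data refresh, and its output (rather than being silently logged) would feed the transparency reporting the authors call for, so that users of the tool see where the data falls short.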

Between the lines

AI’s potential to improve humanitarian work is an often overlooked facet of broader conversations about technology. The authors do a thorough job of taking general AI ethical principles and refining them to fit the context of migration management. However, the paper stops short of considering tensions that may arise. Given just how politically charged migration and refugee issues are, how should humanitarian actors deal with governments that may use or manipulate the recommendations generated by AI predictive tools to prevent migration into their countries?

Additionally, while the authors advocate for a diverse set of stakeholders to be engaged in every part of the design and development process, they do not explain how this panel of stakeholders should be determined. In migration management specifically, there are many groups to keep in mind (gender, age, religion, sexual orientation, and so on). While it is crucial that different backgrounds be represented, there will naturally be conflicting viewpoints, and some groups will hold biases toward others rooted in their own cultures and circumstances. How developers should reconcile these inevitable conflicts while keeping in line with AI ethical principles is a critical question that needs further exploration if AI predictive tools are to help anticipate and prepare for major migration events.





  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.
  • Learn more about our open access policy here.