Montreal AI Ethics Institute
Top 5 takeaways from our conversation with I2AI on AI in different national contexts

July 28, 2021

🔬 Event summary by Connor Wright, our Partnerships Manager.


Overview: Can the world unite under a global AI regulatory framework? Are different cultural interpretations of key terms a sticking point? These questions and more formed the basis of our top 5 takeaways from our meetup with I2AI. With such a variety of nations present, the conversation showed that while countries hold different views on these issues, that diversity is not a bad thing at all.


Introduction

Can the world unite under a global AI regulatory framework? Can shared problems with AI bring nations together in a common cause? These questions form the basis of the top 5 takeaways from our meetup with I2AI. Spanning topics from centralisation to the importance of localised AI regulations, our meetup showed how AI governance must be seen as a context-dependent phenomenon, starting with power relations.

There are power relations at play

Any talk about enacting localised regulations on AI must consider how uneven the playing field is in terms of decision-making and economic power. The extent to which local governments can enact local laws depends heavily on the resources available to each country. How this is conducted is then shaped by power relations in the international arena. For example, the attitude a nation adopts towards privacy laws could depend on its relationship with either China or the USA, two countries with opposing views on privacy.

Centralisation may be a moot point

Given such global diversity, it may be difficult for all countries to follow one system, especially when interpretations differ even within regions. In the Caribbean, Jamaica has dismissed digital ID cards as unconstitutional, while Barbados is still trying to implement them. Furthermore, how company data storage laws are interpreted in India has a lot to do with cultural understandings. As a result, fragmentation of AI regulation may be inevitable, but is this a bad thing?

Fragmentation isn’t inherently undesirable

Fragmentation doesn’t mean incoherence. Peaceful coexistence between the fragments of AI regulation can be moulded, primarily through a common thread: setting a global target for all to reach can help direct the different approaches towards the same problem. Sure, there will be some inconsistencies along the way, but arriving at the same point through different pathways is undoubtedly a viable option.

The importance of local regulations

To arrive at the same destination, local regulations and interpretations of the issues in AI are very important. They serve to define what is meant by terms such as ‘fair’ and ‘representative’, as well as providing the most accurate expression of a country’s views on issues within AI. If these were not in place, individual countries’ values and concerns would be lost in large-scale legislation conceived elsewhere. Without localised efforts, someone else ends up designing your AI for you.

The language we speak and the language we use

The importance of these regulations is most clearly seen in their relationship with language. Even reading a law in one language (say, German) can produce a completely different interpretation than reading it in English. With our meetup spanning from South America to Europe, we found that some participants harboured different interpretations of the same legislation depending on the language used. The subtle meanings and context of each word change from language to language, emphasising the vital role of localisation even more.

It is not just the language in which we speak about AI that matters, but also how we talk about it. At times, AI vernacular anthropomorphises the technology, as in “the AI decided” or “the AI is thinking”. Such ways of speaking also leave countries like Brazil (with barely any AI initiatives of their own) exposed to the buzzword effect that AI generates: reading phrases like “the AI determined its course of action” and immediately picturing Terminators.

Between the lines

Technology is an excellent way of demonstrating how differently countries treat their citizens and how difficult it is to find a common thread. Global convergence could indeed help us overcome problems such as gaps in datasets. With enough data in all the right places, researchers would no longer need to construct ‘representative AI’ from whatever data is most available, but could instead select the most relevant data. However, I believe that fragmentation of AI regulation is guaranteed, and the importance of local regulatory efforts is an essential consequence of that.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.

© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.