
Who Is Governing AI Matters Just as Much as How It’s Designed

March 15, 2021

✍️ Column by Muriam Fancy (@muriamfancy), our Network Engagement Manager.


Technology is evolving at a pace that is as exciting as it is alarming. We have seen numerous cases displaying the implications of deploying technology without foundational governance mechanisms to mould these technologies’ behaviour and consequences. Just as importantly, we lack adequate mechanisms to protect communities, especially marginalized communities, from the harms that arise when AI goes ungoverned and unregulated.

The question of who is governing AI is significant. One of the primary reasons is that depending on which stakeholder sets the governance framework, that framework may sit outside of state legislation and policy. The discussion of private vs. public stakeholder governance is essential, since the normative responsibilities and desired outcomes of deploying the technology vary greatly between the two. Baseline design variables such as accuracy, fairness, and explainability continue to evolve, change, and vary depending on who the AI application is serving. Creating pan-sector definitions that do not work jointly together will inevitably cause harm at scale.

As Mittelstadt notes, these stakeholders’ fiduciary responsibilities vary greatly, creating a divide and ultimately a gap in regulation, allowing communities to fall through the cracks and face these technologies’ harms. Most often, AI is developed by the private sector to be deployed by the public sector, and the two sectors do not align on user groups, definitions of AI, or how data is gathered.

The motivations of these two sectors also vary greatly. Private sector companies look to increase profit to meet their bottom line, while governments, often lacking AI literacy, deploy technologies without conducting adequate risk assessments. Schiff et al. note that the ethical frameworks of these stakeholders also vary, and the processes for reviewing these technologies differ significantly. Hence, the variables of concern are unlikely to be similar, which makes it difficult to deploy responsible AI.

In what way does the issue of who is governing AI have real-world implications? The case brought forward by Dr. Petra Molnar in her study of AI deployed at the border demonstrates the diverging motivations of the private and public sectors, both steeped in racist discourse against migrant communities. She referenced cases of private companies using AI to track the migration and separation of families in order to enforce deportation and detention practices in the US. By politicizing migration, the state government made it possible for AI to be deployed without any governance or regulation to protect migrant communities.

Dr. Molnar found a similar issue in her study in Greece, where automated technology was deployed without consent, used on minors, and overall operated without governance from private and public stakeholders. In the case of Clearview AI in Canada, the company scraped the social media data of millions of Canadians to label gender and age in a machine learning algorithm, and police forces also used the product. The gross human rights violations that came from deploying this product exposed a significant gap in Canadian law, much of which does not apply to Clearview’s actions, although the Privacy Commissioner of Canada did find several of the company’s activities in violation of PIPEDA.

What is the way forward for effective governance of these technologies? The first and foremost mechanism should be inclusive participation from public citizens and those with varying statuses (e.g., refugee, immigrant, stateless). I would propose this step because, ultimately, public citizens and those residing in the state will face the consequences of the technology.

As Dr. Molnar notes in her paper, governance models by private and public sectors reflect the power structures within our society. Thus, when these stakeholders fail to include the voices and concerns of affected communities, they reproduce these bodies’ systemically racist and discriminatory norms, widening the divide between public and private stakeholders on one side and marginalized and affected communities on the other. Like Dr. Molnar, this column calls for an inclusive, participatory framework to ensure that public citizens and affected communities are equitably included in the conversations that shape design and deployment mechanisms.

The second mechanism is better dialogue between public and private sector actors. The foundational principles that dictate how these technologies can exist in the state or internationally are understood and implemented differently by each actor. Joint conversations between these two sectors, together with the inclusive participation of public citizens and affected communities, will help change the discourse and build holistic governance frameworks.

As Medeiros notes in her paper, jointly developed mechanisms and discussion are needed because the two sectors differ in their understanding of what regulation is and how it interacts with policy and law. Private stakeholders need to adopt state policies, which should aim to protect public citizens and be thought through together with private companies. If such a joint effort occurs, we can establish a baseline framework that continues to evolve as technology evolves, remaining inclusive, human-centric, and focused on human rights at its core.

