Montreal AI Ethics Institute

Democratizing AI ethics literacy
Scoping AI Governance: A Smarter Tool Kit for Beneficial Applications

October 16, 2022

🔬 Research Summary by Grace Wright, Business Development Manager at a technology start-up, who has worked in research roles focused on the responsible and ethical development and use of AI and other emerging technologies.

[Original paper by Maroussia Lévesque]


Overview: The public sector can play an important role in governing artificial intelligence, acting directly or indirectly to help shape AI systems that benefit society. This paper examines the tools and approaches policymakers can leverage to generate AI governance strategies that promote fairness and transparency. By exploring the benefits and drawbacks of these policy tools, the author aims to spark conversation among policymakers about which approaches are best suited to creating strong frameworks for AI governance.


Introduction

Many private sector actors are sceptical or outwardly critical about the ability of the public sector to effectively govern emerging technologies. Artificial intelligence is one technology in particular that continues to be a topic of debate in this regard, especially concerning the roles of private and public sector actors in ensuring AI systems remain fair and beneficial to society. 

This paper explores the role of the public sector and how policymakers can effectively influence AI governance, examining both policy tools that rely on more direct, traditional policy interventions and others that alter the role of the public sector and place greater emphasis on procedural safeguards.

To provide these insights, the author examined multiple policy instruments in these two categories and assessed their utility for enhancing the fairness of AI systems. The options explored in this paper suggest that each tool has benefits and drawbacks, and given their varied forms and impact on generating more fair and transparent AI systems, multiple tools should be leveraged to create a robust approach to AI governance.

Key Insights

The Importance of Governing AI Systems

The author, Maroussia Lévesque, argues that fair AI systems can ultimately contribute to the public good. However, AI development is primarily industry-driven, and commercial interests are not always aligned with public interests. In this respect, policymakers have a unique opportunity to shape AI policies that guide the development of fair AI systems that benefit society.

Lévesque notes that concerns over the accuracy and fairness of AI systems have been ongoing and persist in systems used today. Beyond the challenge of false negatives and false positives, racial and gender biases are also significant issues of concern. For example, AI systems that predict recidivism rates (i.e., how likely someone is to repeat a criminal offense) have been criticized for being racially biased, predicting higher levels of recidivism amongst Black individuals. This raises significant concerns about AI fairness and transparency, how these systems generate their outcomes, and how those results impact society. Examples like these underscore the urgency for policymakers to be involved in ensuring these systems are developed in a way that does not result in potentially discriminatory practices.
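The fairness concern raised above can be made concrete with a small, purely illustrative audit: comparing how often a risk-prediction model falsely flags members of different groups. Everything below is a hypothetical sketch with synthetic data, not the methodology of the paper or of any real recidivism tool.

```python
# Hypothetical fairness audit: compare false positive rates (FPR) of a
# binary risk-prediction model across two demographic groups.
# All names and data are synthetic, for illustration only.

def false_positive_rate(predictions, actuals):
    """Fraction of truly negative cases (actual == 0) flagged as positive."""
    negatives = [(p, a) for p, a in zip(predictions, actuals) if a == 0]
    if not negatives:
        return 0.0
    return sum(1 for p, _ in negatives if p == 1) / len(negatives)

# 1 = predicted/actual reoffence, 0 = none (synthetic outcomes).
group_a = {"pred": [1, 1, 0, 1, 0, 1], "actual": [0, 1, 0, 0, 0, 1]}
group_b = {"pred": [0, 1, 0, 0, 0, 1], "actual": [0, 1, 0, 0, 0, 1]}

fpr_a = false_positive_rate(group_a["pred"], group_a["actual"])
fpr_b = false_positive_rate(group_b["pred"], group_b["actual"])

# A large gap between the two rates is one signal of disparate impact:
# group A is wrongly flagged far more often than group B here.
print(f"FPR group A: {fpr_a:.2f}")  # 0.50
print(f"FPR group B: {fpr_b:.2f}")  # 0.00
```

Equalizing false positive rates is only one of several competing fairness criteria; which one a regulator should mandate is precisely the kind of definitional question the paper discusses.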

Options for Policy Intervention

Lévesque outlines multiple avenues that can be used to develop more robust frameworks for AI governance. In particular, the author explores options for redress using traditional, direct intervention, as well as options that are more adaptive and "reinvent" the role of the public sector, emphasizing flexible procedural safeguards that can keep pace with technological change. Each of these options and their related policy tools are outlined below:

Redress 

  • Rights and liabilities: Strengthening protections to deter harmful behavior from the private sector and uphold individual rights to equality and non-discrimination. This may also include requiring greater transparency from the private sector in cases of suspected discrimination, which could help address the opacity of AI systems and dissuade harmful behavior.
  • Command and control: Imposing penalties against companies for not following certain safeguards or engaging in harmful practices. 
  • Administrative oversight: Designating specialized agencies to provide oversight of AI systems concerning their uncertainty, complexity, transparency, and impact. 
  • Incentives: Providing tax credits for certification, debiasing training, and other practices based on advancing fairer and more transparent systems. 
  • Market-harnessing controls: Stimulating AI research and development driven by non-economic goals.
  • Public infrastructure: Building public AI infrastructure to inform its values and development from inception.  
  • Mandatory disclosures: Compelling private sector actors to disclose performance-related metrics of their AI systems while preserving proprietary information.
  • Public compensation: Having companies dedicate a portion of revenues to compensation for harms caused by AI systems. 

Adapt – Reinventing the role of public actors 

  • Checks and balances to counter industry dominance: Drawing on the principles of constitutionalism to have the AI innovation agenda driven by multiple interests rather than primarily the private sector.
  • Co-regulation: Drafting standards and regulations jointly between the public and private sectors, including through negotiated rule-making and alignment with industry standards. This includes implementing approaches similar to the EU AI Act or developing technical standards that reflect best practices.

While direct policy interventions can help redress bias, they are limited in their ability to define and regulate fairness effectively. The author therefore suggests that fairness determinations should be left to those implementing AI systems, with some level of oversight from the public sector. More adaptive procedural safeguards, on the other hand, aim to cultivate accountability and integrity and are more favorable because they can better keep pace with a complex and rapidly evolving technology space.

Lévesque notes that no single policy option is perfect: each has its drawbacks and should be viewed as part of a broader toolbox for policymakers to draw from. The harms of AI are varied, and so too should be the policy instruments used to address them if effective change is to be made.

Between the lines

The paper draws out some crucial points of consideration for regulating AI and emerging technologies more broadly. Firstly, regulation must be addressed from multiple angles with multiple policy instruments; effective policy requires using the full range of tools at the public sector's disposal, especially given the complex and evolving challenge of regulating emerging technologies. Secondly, frameworks emphasizing adaptable, principle-based approaches appear better suited to rapidly changing policy spaces because they provide the flexibility and collaboration needed to solve complex challenges.

While the author does make some strong arguments in favor of public sector involvement in AI governance, this paper raises some thought-provoking areas for further research and discussion. For example, the public sector is often criticized for being lethargic, providing reactionary responses to challenges that it may not understand well. Given the quickly evolving nature of technology and the concentration of technical expertise in the private sector, how can the public sector be better equipped to develop robust governance frameworks? Are there more effective avenues for public and private sector collaboration on these issues that have yet to be explored, and if so, what are some practical ways of moving forward to test and adopt those approaches?
