Montreal AI Ethics Institute

Democratizing AI ethics literacy

Corporate Governance of Artificial Intelligence in the Public Interest

August 10, 2021

🔬 Research summary by Jonas Schuett, Policy Research Intern at DeepMind | Research Fellow at the Legal Priorities Project | PhD Candidate in Law at Goethe University Frankfurt

[Original paper by Peter Cihon, Jonas Schuett, Seth D. Baum]


Overview:

How can different actors improve the corporate governance of AI in the public interest? This paper offers a broad introduction to the topic, surveying opportunities for nine types of actors inside and outside the corporation. In many cases, the best results accrue when multiple types of actors work together.


Introduction

Private industry is at the forefront of AI research and development. AI is a major focus of the technology industry, which includes some of the largest corporations in the world. As AI research and development has an increasingly outsized impact on the world, it is essential to ensure that the governance of the field’s leading companies supports the public interest.

Key Insights

Opportunities to improve the corporate governance of AI

The opportunities to improve AI corporate governance are diverse. The paper surveys opportunities for nine different types of actors:

  • Management can establish policies, translate policies into practice, and create structures such as oversight committees.
  • Workers can directly affect the design and use of AI systems, and can have indirect effects by influencing management.
  • Investors can voice concerns to management, vote in shareholder resolutions, replace a corporation’s board of directors, sell off their investments to signal disapproval, and file lawsuits against the corporation.
  • Corporate partners can use their business-to-business market power and relations to influence companies.
  • Corporate competitors can push each other in pursuit of market share and reputation.
  • Industry consortia can identify and promote best practices, formalize best practices as standards, and pool resources to advance industry interests, such as by lobbying governments.
  • Nonprofit organizations can conduct research, advocate for change, organize coalitions, and raise awareness.
  • The public can select which corporate AI products and services to use, and also support specific AI public policies.
  • The media can research, document, analyze, and generate attention to corporate governance activities and related matters.

Coordination and collaboration

In many cases, the best results will accrue when multiple types of actors work together. The paper shows this via extended discussion of three running examples:

  • First, workers and the media collaborated to press Google’s management to withdraw from Project Maven, a drone video classification project of the US Department of Defense. Workers initially leaked information about Maven to the media, then signed an open letter against the project following media reports.
  • Second, nonprofit research and advocacy on law enforcement use of facial recognition technology fueled worker and investor activism and public pressure (especially the 2020 protests against racism and police brutality) that ultimately pushed multiple competing AI corporations to change their practices.
  • Third, workers, management, and industry consortia have interacted to develop and promote best practices concerning the publication of potentially harmful research.

Between the lines

The paper will be of use to researchers looking for an overview of corporate governance at leading AI companies, levers of influence in corporate AI development, and opportunities to improve corporate governance with an eye towards long-term AI development.



© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.