Montreal AI Ethics Institute

Democratizing AI ethics literacy
Research summary: Decision Points in AI Governance

August 3, 2020

Summary contributed by Connor Wright, a third-year Philosophy student at the University of Exeter.

Link to full paper + authors listed at the bottom.


Mini-summary: Newman embarks on the lonely and brave journey of investigating how to put AI governance principles into action. To do this, she considers three case studies, covering ethics committees, publication norms, and intergovernmental agreement. While all three approaches have their benefits, none is perfect, and Newman eloquently explains why. The challenges presented are numerous, but the way forward is visible, and that way is called practicality.


Full summary:

The inspiration behind Newman’s paper lies in her observation that AI governance discussions focus too much on the what, and not enough on the how. Her paper therefore sets out concrete examples that suggest how best to operationalise the AI principles under discussion. To do this, she presents three case studies, which I will now take in turn.

Case study 1: Can an AI Ethics advisory committee help advance responsible AI (Microsoft AETHER committee)?

A very current debate is whether a company’s ethics boards can actually influence the work done by its engineers. Newman points to Microsoft’s AETHER committee as an attempt to do just that.

As a big company, Microsoft’s moves in the AI world have a significantly larger impact than those of smaller businesses, putting even more emphasis on making this known to key stakeholders. To act on this, Microsoft organised its principles around the engineering processes involved, including guidance on privacy and accountability. The committee (comprising seven working groups, with about 23 members from each major department) would then write reports on AI concerns raised by employees through the Ask-AETHER phone line. This channel was made available to all departments within Microsoft, allowing the compiled reports to represent each concern raised. These reports would then be sent to senior management for review, keeping those at the top connected with what goes on elsewhere.

Qualms were nonetheless raised about the committee’s impact after Microsoft won the $10 billion contract in 2019 to restructure the Department of Defense’s cloud system. Microsoft’s response was that nothing in the company’s AI principles objected to working with the military, so long as the system was safe, reliable, and accountable. No official objection was ever published by AETHER, though the committee apparently did raise a policy concern at an executive retreat that same year.

Newman’s takeaways accordingly centre on the welcome moves of establishing the AETHER line and involving the executives at the top. For the principles to be truly representative, all concerns must be taken into account and interdisciplinary departments involved. Microsoft did exactly that, but AETHER’s true impact remains to be seen.

Case study 2: Does shifting publication norms of AI reduce its risk?

Here, Newman considers the staged-release publication process for AI systems, in complete contrast to the field’s norm of an all-at-once release. The staged process has been examined as a possible way to prevent the use of AI software by malicious actors, as well as to give time to the policymakers and human actors involved. Such a process gives policymakers time to consider how best to approach the software and its societal effects, while human actors have time to reflect on their own usage of the product.

However, the process has been criticised for potentially stifling the speed and growth of the AI field through the added delay. Admittedly, such a process can prevent potential harms, but it can also delay potential benefits. Here, Newman uses OpenAI’s GPT-2 language model as an example. With OpenAI committed to releasing it in stages, other models with more parameters were released before GPT-2 had been made fully available. Furthermore, once it was released, a doctor from Imperial College London repurposed GPT-2 to write accurate scientific abstracts in just 24 hours, something which could have occurred much earlier had the model been fully released.

Newman believes that open source AI information is key to the field progressing, whether released in stages or fully. Releasing in stages can help prevent certain harms, but can also make it harder for independent researchers to properly evaluate the model without its full release. Altering publication norms can potentially help prevent malicious usage of the product, but can also prevent its proper evaluation in the first place.

Case study 3: Can a global focus point provide for international coordination on AI policy and implementation?

Newman takes the landmark OECD principles as her example of one of the only points of international agreement on AI principles. On May 22nd, 2019, 42 countries spanning Asia, South America, Europe, and Africa signed up to the OECD’s intergovernmental principles on AI. The language in the principles that stands out to me includes words such as stewardship, plain easy-to-understand information, human-centeredness, and underrepresented. Such strong and pointed language, agreed upon by 42 countries, was never anticipated, and it proved an extremely positive step in the right direction.

Unfortunately, Newman acknowledges that the implementation of these principles will differ from country to country. Cultural considerations, existing infrastructure, and economic circumstances will all affect which principles can be adopted and in what way. Bodies such as the OECD’s AI Policy Observatory have been established to try to link practical instantiations of the principles with their desired goals, but how each country develops its AI strategy remains to be seen.

Newman’s paper has provided us with real-life examples of how AI principles are being put into practice. Involving leaders at large corporations, as AETHER has done, can help move towards a greater cognizance of the implications of decisions made about AI. Such cognizance can then help shift publication norms to prevent malicious use of AI products, and help international governments do the same. While there are many challenges ahead, turning talk into action is certainly the way to overcome them.


Original paper by Jessica Cussins Newman: https://cltc.berkeley.edu/wp-content/uploads/2020/05/Decision_Points_AI_Governance.pdf


© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.