Montreal AI Ethics Institute
Democratizing AI ethics literacy


Research summary: AI Governance in 2019, A Year in Review: Observations of 50 Global Experts

June 17, 2020

Summary contributed by Camylle Lanteigne (@CamLante), who’s currently pursuing a Master’s in Public Policy at Concordia University and whose work on social robots and empathy has been featured on Vox.

*Authors of original paper & link at the bottom


2019 saw a sharp rise in interest surrounding AI Governance. This is a welcome development amid the lasting buzz surrounding AI and AI Ethics, especially if we are to collectively build AI that enriches people’s lives.

The AI Governance in 2019 report presents 44 short articles written by 50 international experts in the fields of AI, AI Ethics, and AI Policy. Each article highlights, from its author’s or authors’ point of view, the salient events in the field of AI Governance in 2019. Apart from the thought-provoking insights it contains, this report also offers a great way for individuals to familiarize themselves with the experts contributing to AI governance internationally, as well as with the numerous research centers, think tanks, and organizations involved.

Throughout the report, many experts mention the large number of AI Ethics principles published in the past few years by organizations and governments attempting to frame how AI should be developed for good. Experts also highlight how, in 2019, governments slowly began moving from these previously established ethical principles towards firmer policy measures. This transition, of course, is far from complete. Currently, many governments are holding consultations and partnering with organizations like MAIEI to help them develop their AI strategies. Authors of the articles featured in this report also suggest considerations they deem necessary for getting AI governance right. For one, Steve Hoffman (pp. 51-52) suggests policymakers take advantage of market forces in regulating AI. FU Ying (pp. 81-82) stresses the importance of a China-US partnership on AI, which will require better relations between the two governments.

On another note, many authors cite the staged release of progressively larger versions of OpenAI’s GPT-2 language model, and the risks surrounding its publication, as a salient event of 2019. For many, this brought up issues surrounding responsible publishing in AI, as well as more general concerns around how AI may be used to do harm. The report even features an article written by four members of OpenAI discussing the event and its impact on the discussion concerning publishing norms in AI (pp. 43-44).

One expert, Prof. YANG Qiang, also mentions new advances like federated learning, differential privacy, and homomorphic encryption, and their importance in ensuring that AI is used for the benefit of humanity (pp. 11-12). In his article, Prof. Colin Allen highlights a crucial but oft-forgotten element of good AI governance: strong AI journalism (pp. 29-30). He writes: “The most important progress related to AI governance during the year 2019 has been the result of increased attention by journalists to the issues surrounding AI” (p. 29). It is necessary for policymakers, politicians, business leaders, and the general public to have a proper understanding of the technical aspects of AI, and journalists play a large role in building public competence in this area.

It’s interesting to note that the report was released by the Shanghai Institute for Science of Science. Its editor-in-chief (Prof. SHI Qian) and one of its executive editors (Prof. Li Hui) are affiliated with this Institute, and the report features numerous Chinese AI experts. In light of this, it is particularly refreshing to see such a collaboration not only between Chinese and American or British experts, but also with other scholars from around the world. Efforts in AI governance can easily become siloed due to politics and national allegiances. This report, thankfully, does away with these divisions in favour of an international and collaborative approach. In addition, twenty of the fifty experts featured are women, and many of them are at the beginning of their careers. This is commendable, considering the field of AI tends to be male-dominated. However, none of the fifty experts featured in the report are Black. This is unacceptable. There are numerous Black individuals doing innovative and crucial work in AI, and their voices are central to developing beneficial AI. I encourage our readers to engage with the work of Black AI experts. For a start, listen to this playlist of interviews from the TWIML podcast, which features Black AI experts talking about their work. If a similar report on AI governance is put together next year, it must include the perspectives of Black AI experts.


Original paper by SHI Qian (Editor-in-Chief), Li Hui (Executive Editor), Brian Tse (Executive Editor): https://www.aigovernancereview.com/static/AI-Governance-in-2019-7795369fd451da49ae4471ce9d648a45.pdf

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
