
Research summary: AI Governance in 2019, A Year in Review: Observations of 50 Global Experts

June 17, 2020

Summary contributed by Camylle Lanteigne (@CamLante), who’s currently pursuing a Master’s in Public Policy at Concordia University and whose work on social robots and empathy has been featured on Vox.

*Authors of original paper & link at the bottom


2019 saw a sharp rise in interest in AI Governance. This is a welcome addition to the lasting buzz surrounding AI and AI Ethics, especially if we are to collectively build AI that enriches people’s lives.

The AI Governance in 2019 report presents 44 short articles written by 50 international experts in the fields of AI, AI Ethics, and AI Policy. Each article highlights, from its author’s or authors’ point of view, the salient events in the field of AI Governance in 2019. Apart from the thought-provoking insights it contains, this report also offers a great way for individuals to familiarize themselves with the experts contributing to AI governance internationally, as well as with the numerous research centers, think tanks, and organizations involved.

Throughout the report, many experts mention the large number of AI Ethics principles published in the past few years by organizations and governments attempting to frame how AI should be developed for good. Experts also highlight how, in 2019, governments slowly began moving from these previously established ethical principles towards firmer policy measures. This, of course, is far from accomplished. Currently, many governments are holding consultations and partnering with organizations like MAIEI to help them develop their AI strategy. Authors of the articles featured in this report also suggest considerations they deem necessary for getting AI governance right. For one, Steve Hoffman (pp. 51-52) suggests policymakers take advantage of market forces in regulating AI. FU Ying (pp. 81-82) stresses the importance of a China-US partnership on AI, for which better relations between the two governments are necessary.

On another note, many authors point to the staged release of progressively larger versions of OpenAI’s GPT-2 language model, and the risks surrounding its publication, as a salient event of 2019. For many, this brought up issues surrounding responsible publishing in AI, as well as more general concerns about how AI may be used to do harm. The report even features an article written by four members of OpenAI discussing the event and its impact on the discussion concerning publishing norms in AI (pp. 43-44).

One expert, Prof. YANG Qiang, also points to new advances like federated learning, differential privacy, and homomorphic encryption, and their importance in ensuring that AI is used to the benefit of humanity (pp. 11-12). In his article, Prof. Colin Allen highlights a crucial but oft-forgotten element of good AI governance: strong AI journalism (pp. 29-30). He writes: “The most important progress related to AI governance during the year 2019 has been the result of increased attention by journalists to the issues surrounding AI” (p. 29). It is necessary for policymakers, politicians, business leaders, and the general public to have a proper understanding of the technical aspects of AI, and journalists play a large role in building public competence in this area.

It’s interesting to note that the report was released by the Shanghai Institute for Science of Science. Its editor-in-chief (Prof. SHI Qian) and one of its executive editors (Prof. Li Hui) are affiliated with this Institute, and the report features numerous Chinese AI experts. In light of this, it is particularly refreshing to see such a collaboration not only between Chinese and American or British experts, but also with other scholars from around the world. Efforts in AI governance can easily become siloed due to politics and national allegiances. This report, thankfully, does away with these divides in favour of an international and collaborative approach. In addition, twenty of the fifty experts featured are women, and many of them are at the beginning of their careers. This is commendable, considering the field of AI tends to be male-dominated. However, none of the fifty experts featured in the report are Black. This is unacceptable. There are numerous Black individuals doing innovative and crucial work in AI, and their voices are central to developing beneficial AI. I encourage our readers to engage with the work of Black AI experts. For one, start by listening to this playlist of interviews from the TWIML podcast, which features Black AI experts talking about their work. If a similar report on AI governance is put together next year, it must include the perspectives of Black AI experts.


Original paper by SHI Qian (Editor-in-Chief), Li Hui (Executive Editor), Brian Tse (Executive Editor): https://www.aigovernancereview.com/static/AI-Governance-in-2019-7795369fd451da49ae4471ce9d648a45.pdf

