Montreal AI Ethics Institute

Democratizing AI ethics literacy


Research summary: AI Governance in 2019, A Year in Review: Observations of 50 Global Experts

June 17, 2020

Summary contributed by Camylle Lanteigne (@CamLante), who’s currently pursuing a Master’s in Public Policy at Concordia University and whose work on social robots and empathy has been featured on Vox.

*Authors of original paper & link at the bottom


2019 saw a sharp rise in interest in AI Governance. This is a welcome addition to the lasting buzz surrounding AI and AI Ethics, especially if we are to collectively build AI that enriches people’s lives.

The AI Governance in 2019 report presents 44 short articles written by 50 international experts in the fields of AI, AI Ethics, and AI Policy. Each article highlights, from its author’s or authors’ point of view, the salient events in the field of AI Governance in 2019. Apart from the thought-provoking insights it contains, this report also offers a great way for individuals to familiarize themselves with the experts contributing to AI governance internationally, as well as with the numerous research centers, think tanks, and organizations involved.

Throughout the report, many experts mention the large number of AI Ethics principles published in the past few years by organizations and governments attempting to frame how AI should be developed for good. Experts also highlight how, in 2019, governments slowly moved from these previously established ethical principles towards more rigid policy measures. This, of course, is far from accomplished. Currently, many governments are holding consultations and partnering with organizations like MAIEI to help them develop their AI strategies. The authors of the articles featured in this report also suggest considerations they deem necessary for getting AI governance right. For one, Steve Hoffman (pp. 51-52) suggests policymakers take advantage of market forces in regulating AI. FU Ying (pp. 81-82) stresses the importance of a China-US partnership regarding AI, for which better relations between the two governments are necessary.

On another note, many authors cite the release of progressively larger versions of OpenAI’s GPT-2 language model, and the risks around its publication, as a salient event of 2019. For many, this brought up issues surrounding responsible publishing in AI, as well as more general concerns around how AI may be used to do harm. The report even features an article written by four members of OpenAI discussing the event and its impact on the discussion of publishing norms in AI (pp. 43-44).

One expert, Prof. YANG Qiang, also mentions new advances like federated learning, differential privacy, and homomorphic encryption, and their importance in ensuring that AI is used to the benefit of humanity (pp. 11-12). In his article, Prof. Colin Allen highlights a crucial but oft-forgotten element of good AI governance: strong AI journalism (pp. 29-30). He writes: “The most important progress related to AI governance during the year 2019 has been the result of increased attention by journalists to the issues surrounding AI” (p. 29). It is necessary for policymakers, politicians, business leaders, and the general public to have a proper understanding of the technical aspects of AI, and journalists play a large role in building public competence in this area.

It’s interesting to note that the report was released by the Shanghai Institute for Science of Science. Its editor-in-chief (Prof. SHI Qian) and one of its executive editors (Prof. Li Hui) are affiliated with this Institute, and the report features numerous Chinese AI experts. In light of this, it is particularly refreshing to see such a collaboration not only between Chinese and American or British experts, but also with other scholars from around the world. Efforts in AI governance can easily become siloed due to politics and national allegiances. This report, thankfully, does away with these silos in favour of an international and collaborative approach.

In addition, twenty of the fifty experts featured are women, and many of them are at the beginning of their careers. This is commendable, considering the field of AI tends to be male-dominated. However, none of the fifty experts featured in the report are Black. This is unacceptable. There are numerous Black individuals doing innovative and crucial work in AI, and their voices are central to developing beneficial AI. I encourage our readers to engage with the work of Black AI experts. For one, start by listening to this playlist of interviews from the TWIML podcast, which features Black AI experts talking about their work. If a similar report on AI governance is put together next year, it must include the perspectives of Black AI experts.


Original paper by SHI Qian (Editor-in-Chief), Li Hui (Executive Editor), Brian Tse (Executive Editor): https://www.aigovernancereview.com/static/AI-Governance-in-2019-7795369fd451da49ae4471ce9d648a45.pdf

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.


© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.