
AI Certification: Advancing Ethical Practice by Reducing Information Asymmetries

August 18, 2021

šŸ”¬ Research summary by Jonas Schuett, Policy Research Intern at DeepMind | Research Fellow at the Legal Priorities Project | PhD Candidate in Law at Goethe University Frankfurt

[Original paper by Peter Cihon, Moritz J. Kleinaltenkamp, Jonas Schuett, Seth D. Baum]


Overview: How can we incentivize the adoption of AI ethics principles? This paper explores the role of certification. Based on a review of the management literature on certification, it shows how AI certification can reduce information asymmetries and incentivize change. It also surveys the current landscape of AI certification schemes and briefly discusses implications for the future of AI research and development.


Introduction

Certification is widely used to convey that an entity has met a performance standard. It covers everything from the certificate a person receives for completing a university degree to certificates for energy efficiency in consumer appliances and quality management in organizations. As AI technology becomes increasingly impactful across society, certification can play a role in improving AI governance. This paper presents an overview of AI certification, applying insights from prior research on, and experience with, certification in other domains to the relatively new field of AI.

Key Insights

Certification can reduce information asymmetries

A primary role of certification is to reduce information asymmetries. These asymmetries are acute for AI systems, which are often complex and opaque, while users typically lack the data and expertise needed to understand them. For example, it is difficult or impossible to evaluate from the outside how biased or explainable a model is, or whether it was developed according to certain ethics principles.

Certification can incentivize change

By reducing the information asymmetry between insiders and outsiders, certification can also incentivize good behavior by insiders. For example, corporations may be more motivated to meet ethics standards if they can use certification to demonstrate their achievements to customers who value them.

The current landscape of AI certification

The paper surveys the landscape of AI certification as of 2020, identifying seven active and proposed programs:

  • the European Commission White Paper on Artificial Intelligence (this is outdated, see the proposed Artificial Intelligence Act),
  • the IEEE Ethics Certification Program for Autonomous and Intelligent Systems,
  • the Malta AI Innovative Technology Arrangement,
  • the Turing Certification proposed by Australia’s Chief Scientist,
  • the Queen’s University executive education program Principles of AI Implementation,
  • the Finnish civics course Elements of AI, and
  • a Danish labeling program for IT security and responsible data use, currently in development.

These programs demonstrate the variety of forms AI certification can take: public and private schemes, certification of individuals as well as groups, and coverage of a range of AI-related activities.

The value of certification for future AI research and development

Finally, the paper addresses the potential value of certification for future AI technology. Some aspects of certification will likely remain relevant even as the technology changes. For example, the respective roles of corporations, their employees and management, governments, and other actors tend to stay the same. Likewise, certification programs can remain relevant over time by emphasizing human and institutional factors, and they can build in mechanisms to update their criteria as AI technology evolves. Looking further into the future, certification may play a constructive role in the governance of the processes that lead to the development of advanced AI systems. It could be especially valuable for building trust among rival AI development groups and for ensuring that advanced AI systems are built to high standards of safety and ethics.

Between the lines

In summary, certification can be a valuable tool for AI governance. It is not a panacea for ensuring ethical AI, but it can help, especially by reducing information asymmetries and incentivizing ethical AI development and use. The paper presents the first-ever research study of AI certification and thereby establishes essential fundamentals of the topic, including key terms and concepts.

