Montreal AI Ethics Institute

Levels of AGI: Operationalizing Progress on the Path to AGI

January 24, 2024

🔬 Research Summary by Meredith Ringel Morris, Director of Human-AI Interaction Research at Google DeepMind. She is also an Affiliate Professor at the University of Washington, an ACM Fellow, and a member of the ACM SIGCHI Academy.

[Original paper by Meredith Ringel Morris, Jascha Sohl-Dickstein, Noah Fiedel, Tris Warkentin, Allan Dafoe, Aleksandra Faust, Clement Farabet, and Shane Legg (all affiliated with Google DeepMind)]


Overview: This paper proposes a theoretical framework for classifying the capabilities and behavior of AGI (Artificial General Intelligence) systems. The proposed ontology introduces a level-based classification of AGI performance, generality, and autonomy. The paper includes discussions of how this framing relates to prior conceptualizations of AGI, how more nuanced terminology around AGI systems can support discussions of risk and policy options, and the need for future work in developing ecologically valid, living benchmarks to assess systems against this framework.


Introduction

Artificial General Intelligence (AGI) is an important and sometimes controversial concept in computing research, used to describe an AI system that is at least as capable as a human at most tasks. Given the rapid advancement of Machine Learning (ML) models, the concept of AGI has passed from being the subject of philosophical debate to one with near-term practical relevance. Some experts believe that “sparks” of AGI (Bubeck et al., 2023) are already present in the latest generation of large language models (LLMs); some predict AI will broadly outperform humans within about a decade (Bengio et al., 2023); some even assert that current LLMs are AGIs (Agüera y Arcas and Norvig, 2023). However, if you were to ask 100 AI experts to define what they mean by “AGI,” you would likely get 100 related but different definitions. We argue that it is critical for the AI research community to explicitly reflect on what we mean by “AGI” and aspire to quantify attributes like the performance, generality, and autonomy of AI systems. Shared, operationalizable definitions for these concepts will support comparisons between models; risk assessments and mitigation strategies; clear criteria from policymakers and regulators; the identification of goals, predictions, and risks for research and development; and the ability to understand and communicate where we are along the path to AGI.

Key Insights

The paper presents nine case studies of prior prominent formulations of the concept of AGI and analyzes these to develop six principles for a clear, operationalizable definition of AGI. These six principles are:

  1. Focus on Capabilities, not Processes
  2. Focus on Generality and Performance
  3. Focus on Cognitive and Metacognitive Tasks
  4. Focus on Potential, not Deployment
  5. Focus on Ecological Validity
  6. Focus on the Path to AGI, not a Single Endpoint

These six principles are used to formulate a matrixed ontology, the “Levels of AGI,” considering a combination of performance and generality. The six levels of Performance are: 

  0. No AI
  1. Emerging (equal to or somewhat better than an unskilled human)
  2. Competent (at least 50th percentile of skilled adults)
  3. Expert (at least 90th percentile of skilled adults)
  4. Virtuoso (at least 99th percentile of skilled adults)
  5. Superhuman (outperforms 100% of humans)

Table 1 in the paper attempts to classify classic and SOTA AI systems according to their level of performance and generality (narrow vs. general) and discusses the need to create an ecologically valid, living benchmark to determine the performance of novel systems more precisely. The paper theorizes that today’s generative language models would be considered “Emerging AGI.” However, they display Competent or even Expert level performance for some narrow tasks, highlighting the unevenness of capability development that is likely to be characteristic of AGI systems. 
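
To make the matrixed ontology concrete, here is a minimal sketch of how the Performance and Generality dimensions could be encoded as data types, with a few placements paraphrasing the paper’s Table 1 discussion. The class names, dataclass structure, and example labels are assumptions made for this summary, not code or an API from the paper.

```python
from dataclasses import dataclass
from enum import Enum


class Performance(Enum):
    """Depth of capability, measured against skilled/unskilled humans."""
    NO_AI = 0        # No AI
    EMERGING = 1     # equal to or somewhat better than an unskilled human
    COMPETENT = 2    # at least 50th percentile of skilled adults
    EXPERT = 3       # at least 90th percentile of skilled adults
    VIRTUOSO = 4     # at least 99th percentile of skilled adults
    SUPERHUMAN = 5   # outperforms 100% of humans


class Generality(Enum):
    """Breadth of capability: a single task or domain vs. a wide range of tasks."""
    NARROW = "narrow"
    GENERAL = "general"


@dataclass(frozen=True)
class AGIClassification:
    """One cell of the performance x generality matrix for a given system."""
    system: str
    performance: Performance
    generality: Generality


# Illustrative placements only (this summary's shorthand for the paper's
# Table 1 discussion, not the table reproduced verbatim).
examples = [
    AGIClassification("calculator software", Performance.NO_AI, Generality.NARROW),
    AGIClassification("AlphaFold", Performance.SUPERHUMAN, Generality.NARROW),
    AGIClassification("frontier LLM (overall)", Performance.EMERGING, Generality.GENERAL),
    AGIClassification("frontier LLM (some narrow tasks)", Performance.COMPETENT, Generality.NARROW),
]

for entry in examples:
    print(f"{entry.system}: {entry.generality.value}, "
          f"level {entry.performance.value} ({entry.performance.name})")
```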

Table 2 in the paper introduces an autonomy dimension to the ontology, observing that the human-AI interaction paradigm, jointly with system capabilities, determines risk. While higher Levels of AGI unlock novel human-AI interaction paradigms (up to and including fully autonomous operation), they do not determine them; the choice of an appropriate interaction paradigm depends on many contextual considerations, including AI safety. The levels of Autonomy are:

  0. No AI (human does everything)
  1. AI as a Tool (human fully controls tasks and uses AI to automate mundane sub-tasks)
  2. AI as a Consultant (AI takes on a substantive role, but only when invoked by a human)
  3. AI as a Collaborator (co-equal human-AI collaboration; interactive coordination of goals & tasks)
  4. AI as an Expert (AI drives interaction; human provides guidance & feedback or performs subtasks)
  5. AI as an Agent (fully autonomous AI)
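
Continuing the illustrative sketch above (it reuses the Performance enum defined there), the snippet below encodes the Levels of Autonomy and a deliberately made-up rule for which paradigms a given capability level might unlock. The threshold logic is an assumption for illustration only; the paper’s Table 2 and its discussion of contextual, safety-aware paradigm choice are the authoritative treatment.

```python
from enum import Enum
from typing import List


class Autonomy(Enum):
    """Levels of Autonomy: the human-AI interaction paradigm in use."""
    NO_AI = 0         # human does everything
    TOOL = 1          # human fully controls the task; AI automates mundane sub-tasks
    CONSULTANT = 2    # AI takes on a substantive role, but only when invoked by a human
    COLLABORATOR = 3  # co-equal human-AI collaboration; coordination of goals & tasks
    EXPERT = 4        # AI drives the interaction; human gives guidance & feedback
    AGENT = 5         # fully autonomous AI


def unlocked_paradigms(capability: Performance) -> List[Autonomy]:
    """Hypothetical rule of thumb: higher capability *unlocks* more autonomous
    paradigms, but which paradigm is actually deployed remains a contextual
    choice (e.g., AI safety). The numeric threshold below is a made-up
    stand-in, not the mapping in the paper's Table 2."""
    if capability is Performance.NO_AI:
        return [Autonomy.NO_AI]
    return [a for a in Autonomy if a.value <= capability.value]


# An Expert-level system unlocks several paradigms; a deployer might still
# restrict it to a Consultant role in a safety-critical setting.
print([a.name for a in unlocked_paradigms(Performance.EXPERT)])
# -> ['NO_AI', 'TOOL', 'CONSULTANT', 'COLLABORATOR']
```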

Between the lines

This paper introduces a nuanced framework for classifying AI systems’ performance, generality, and autonomy. Shared terminology and frameworks can help researchers, policymakers, and other stakeholders communicate more clearly about progress toward (and risks from) powerful AI systems. The paper also highlights the need for future research on developing an AGI benchmark (and discusses the challenges of doing so), and it emphasizes the importance of investing in human-AI interaction research in tandem with model improvements, since autonomy paradigms interact with model capabilities to determine a system’s risk profile.

