Levels of AGI: Operationalizing Progress on the Path to AGI

January 24, 2024

🔬 Research Summary by Meredith Ringel Morris, Director of Human-AI Interaction Research at Google DeepMind. She is also an Affiliate Professor at the University of Washington, an ACM Fellow, and a member of the ACM SIGCHI Academy.

[Original paper by Meredith Ringel Morris, Jascha Sohl-Dickstein, Noah Fiedel, Tris Warkentin, Allan Dafoe, Aleksandra Faust, Clement Farabet, and Shane Legg (all affiliated with Google DeepMind)]


Overview: This paper proposes a theoretical framework for classifying the capabilities and behavior of AGI (Artificial General Intelligence) systems. The proposed ontology introduces a level-based classification of AGI performance, generality, and autonomy. The paper includes discussions of how this framing relates to prior conceptualizations of AGI, how more nuanced terminology around AGI systems can support discussions of risk and policy options, and the need for future work in developing ecologically valid, living benchmarks to assess systems against this framework.


Introduction

Artificial General Intelligence (AGI) is an important and sometimes controversial concept in computing research, used to describe an AI system that is at least as capable as a human at most tasks. Given the rapid advancement of Machine Learning (ML) models, the concept of AGI has passed from being the subject of philosophical debate to one with near-term practical relevance. Some experts believe that “sparks” of AGI (Bubeck et al., 2023) are already present in the latest generation of large language models (LLMs); some predict AI will broadly outperform humans within about a decade (Bengio et al., 2023); some even assert that current LLMs are AGIs (Agüera y Arcas and Norvig, 2023). However, if you were to ask 100 AI experts to define what they mean by “AGI,” you would likely get 100 related but different definitions. We argue that it is critical for the AI research community to explicitly reflect on what we mean by “AGI” and to aspire to quantify attributes like the performance, generality, and autonomy of AI systems. Shared, operationalizable definitions of these concepts would support comparisons between models; risk assessments and mitigation strategies; clear criteria from policymakers and regulators; the identification of goals, predictions, and risks for research and development; and the ability to understand and communicate where we are along the path to AGI.

Key Insights

The paper presents nine case studies of prominent prior formulations of the concept of AGI and analyzes them to develop six principles for a clear, operationalizable definition of AGI. These six principles are:

  1. Focus on Capabilities, not Processes
  2. Focus on Generality and Performance
  3. Focus on Cognitive and Metacognitive Tasks
  4. Focus on Potential, not Deployment
  5. Focus on Ecological Validity
  6. Focus on the Path to AGI, not a Single Endpoint

These six principles are used to formulate a matrixed ontology, the “Levels of AGI,” which classifies systems by a combination of performance and generality. The six levels of Performance are:

0. No AI

1. Emerging (equal to or somewhat better than an unskilled human)

2. Competent (at least 50th percentile of skilled adults)

3. Expert (at least 90th percentile of skilled adults)

4. Virtuoso (at least 99th percentile of skilled adults)

5. Superhuman (outperforms 100% of humans)

Table 1 in the paper attempts to classify classic and SOTA AI systems according to their level of performance and generality (narrow vs. general), and the paper discusses the need for an ecologically valid, living benchmark to determine the performance of novel systems more precisely. The paper theorizes that today’s generative language models would be considered “Emerging AGI.” However, they display Competent or even Expert-level performance on some narrow tasks, highlighting the unevenness of capability development that is likely to be characteristic of AGI systems.
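To make the matrix concrete, here is a minimal Python sketch, my illustration rather than anything from the paper, encoding the performance and generality axes as enums. The class names, field names, and example classifications are assumptions for illustration, loosely echoing the Table 1 discussion above.

```python
from dataclasses import dataclass
from enum import Enum, IntEnum


class Performance(IntEnum):
    NO_AI = 0       # Level 0: no AI
    EMERGING = 1    # equal to or somewhat better than an unskilled human
    COMPETENT = 2   # at least 50th percentile of skilled adults
    EXPERT = 3      # at least 90th percentile of skilled adults
    VIRTUOSO = 4    # at least 99th percentile of skilled adults
    SUPERHUMAN = 5  # outperforms 100% of humans


class Generality(Enum):
    NARROW = "narrow"    # a clearly scoped task or set of tasks
    GENERAL = "general"  # a wide range of non-physical tasks


@dataclass(frozen=True)
class Classification:
    system: str
    performance: Performance
    generality: Generality


# Illustrative entries (an assumption, not the paper's table): a frontier
# LLM rates as Emerging on the general axis while reaching Competent or
# higher on some narrow tasks.
examples = [
    Classification("frontier LLM, general tasks", Performance.EMERGING, Generality.GENERAL),
    Classification("frontier LLM, some narrow tasks", Performance.COMPETENT, Generality.NARROW),
]

for c in examples:
    print(f"{c.system}: Level {c.performance.value} ({c.performance.name}), {c.generality.value}")
```

Representing Performance as an IntEnum keeps the levels ordered and comparable, which mirrors the framework’s point that a single system can sit at different levels depending on the generality axis.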

Table 2 in the paper introduces an autonomy dimension to the ontology, observing that the paradigm of human-AI interaction, together with system capabilities, determines risk. While higher Levels of AGI unlock novel human-AI interaction paradigms (up to and including fully autonomous operation), they do not determine them: the choice of an appropriate interaction paradigm depends on many contextual considerations, including AI safety. The levels of Autonomy (encoded in a second sketch after the list) are:

0. No AI (human does everything)

1. AI as a Tool (human fully controls tasks and uses AI to automate mundane sub-tasks)

2. AI as a Consultant (AI takes on a substantive role, but only when invoked by a human)

3. AI as a Collaborator (co-equal human-AI collaboration; interactive coordination of goals & tasks)

4. AI as an Expert (AI drives interaction; human provides guidance & feedback or performs subtasks)

5. AI as an Agent (fully autonomous AI)
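A companion sketch for the autonomy axis, in the same illustrative spirit. The mapping from capability level to available paradigms in `unlocked_paradigms` is my assumption for illustration only; the paper stresses that capability unlocks paradigms but does not pick one.

```python
from enum import IntEnum


class Autonomy(IntEnum):
    NO_AI = 0         # human does everything
    TOOL = 1          # human fully controls; AI automates mundane sub-tasks
    CONSULTANT = 2    # substantive role, but only when invoked by a human
    COLLABORATOR = 3  # co-equal collaboration; interactive coordination
    EXPERT = 4        # AI drives interaction; human guides or does subtasks
    AGENT = 5         # fully autonomous AI


def unlocked_paradigms(performance_level: int) -> list[Autonomy]:
    """Return the interaction paradigms *available* at a capability level.

    Assumption for illustration: each Performance level unlocks autonomy
    levels up to its own index. Which paradigm is *appropriate* remains a
    separate, context-dependent choice (including AI safety considerations).
    """
    return [a for a in Autonomy if a.value <= performance_level]


# A Competent (Level 2) system would have Tool and Consultant paradigms
# available, yet designers might still deploy it only as a Tool.
print([a.name for a in unlocked_paradigms(2)])  # ['NO_AI', 'TOOL', 'CONSULTANT']
```

Separating the two enums makes the framework’s central observation explicit in code: risk is a function of the capability level and the chosen interaction paradigm jointly, not of capability alone.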

Between the lines

This paper introduces a nuanced framework for classifying AI systems’ performance, generality, and autonomy. Shared terminology and frameworks can help researchers, policymakers, and other stakeholders communicate more clearly about progress toward (and risks from) powerful AI systems. The paper also highlights the need for future research on an AGI benchmark (and the challenges of building one) and stresses the importance of investing in human-AI interaction research in tandem with model improvements, given the insight that autonomy paradigms interact with model capabilities to determine a system’s risk profile.
