
Research summary: Aligning Superhuman AI with Human Behavior: Chess as a Model System

July 12, 2020

Summary contributed by Brooke Criswell (@Brooke_Criswell). She is pursuing a PhD in media psychology and has extensive experience in marketing & communications.

*Reference at the bottom


Artificial intelligence (AI) is becoming more capable every day, in some cases achieving or surpassing superhuman performance. Yet AI systems typically approach problems and decision-making differently than people do (McIlroy-Young, Sen, Kleinberg, & Anderson, 2020). The researchers in this study created a model that captures human chess players’ behavior at a move-by-move level, and developed chess algorithms that match that move-level behavior. In other words, current systems for playing chess online are designed to win the game, not to play the way people do.

In this research study, however, the authors found a way to align an AI chess player so that it makes its move-by-move decisions in a more human-like manner. They found that applying existing chess engines to their data did not predict human moves very well. Their system, called “Maia,” is a customized version of AlphaZero trained on human chess games, and it predicts human moves with higher accuracy than existing chess engines. It also achieves maximum accuracy when predicting the decisions of players at specific skill levels. The design takes a dual approach: instead of asking “what move should be played?”, it asks “what move will a human play?”
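
To make the “what move will a human play?” framing concrete, here is a minimal sketch of a move-matching accuracy metric: the fraction of positions in which a model’s top predicted move equals the move the human actually played. The `predict_move` function and the data format are hypothetical placeholders, not the authors’ code or data.

```python
def move_matching_accuracy(predict_move, positions_and_moves):
    """Fraction of positions where the model's top move matches the human's move.

    positions_and_moves: iterable of (position, human_move) pairs.
    predict_move: any callable mapping a position to a single predicted move.
    """
    matches = 0
    total = 0
    for position, human_move in positions_and_moves:
        if predict_move(position) == human_move:
            matches += 1
        total += 1
    return matches / total if total else 0.0
```

Evaluated on test games grouped by player rating, a model targeted at a particular skill level would, in the spirit of the paper, show its accuracy peak on the rating bucket it was trained for.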

The researchers did this by repurposing the AlphaZero deep neural network framework to predict human actions rather than the most likely winning move. Instead of training the model on self-play games, they trained it on datasets of recorded human games so it would learn how humans actually play chess. The next step was building the policy network responsible for the prediction. From this, Maia was built, and it has a “natural parametrization under which it can be targeted to predict human moves at a particular skill level” (McIlroy-Young, Sen, Kleinberg, & Anderson, 2020).
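
The following is a minimal PyTorch sketch of this idea: a convolutional policy network trained with cross-entropy against the move a human actually played, rather than against a winning move. It is not the authors’ Maia implementation; the 12-plane board encoding, channel counts, number of residual blocks, and 1858-move output vocabulary are illustrative assumptions.

```python
# Sketch only: a policy network that answers "what move will a human play?"
# Board encoding (12 planes of 8x8) and move vocabulary size are assumptions.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + x)  # skip connection

class MovePredictionPolicy(nn.Module):
    """Maps an encoded board position to logits over candidate moves."""
    def __init__(self, in_planes=12, channels=64, n_moves=1858, n_blocks=6):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(in_planes, channels, 3, padding=1), nn.ReLU())
        self.tower = nn.Sequential(*[ResidualBlock(channels) for _ in range(n_blocks)])
        self.head = nn.Linear(channels * 8 * 8, n_moves)

    def forward(self, board):                 # board: (batch, 12, 8, 8)
        x = self.tower(self.stem(board))
        return self.head(x.flatten(1))        # logits over the move vocabulary

# Training target is the move a human played in that position, so the loss is
# ordinary cross-entropy against that move's index (dummy tensors below).
model = MovePredictionPolicy()
boards = torch.randn(32, 12, 8, 8)            # batch of encoded positions
human_moves = torch.randint(0, 1858, (32,))   # indices of moves humans played
loss = nn.CrossEntropyLoss()(model(boards), human_moves)
loss.backward()
```

Targeting a particular skill level would, under this sketch, amount to training (or conditioning) the same architecture on games drawn from a specific rating range.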

The second task the researchers developed for Maia was predicting when and whether a human player will make a significant mistake, called a “blunder,” on their next move.

For this task, they designed a custom deep residual neural network and trained it on the same data. They found that this network outperforms competitive baselines at predicting whether humans will make a mistake (McIlroy-Young, Sen, Kleinberg, & Anderson, 2020).
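
As an illustration only, here is a sketch of a residual network cast as a binary classifier over board positions, reusing the same assumed 12-plane encoding as the sketch above. It is not the authors’ architecture or feature set, and the labeling rule (e.g., flagging a move as a blunder when it loses more than some engine-evaluation threshold) is an assumption about how “blunder” might be operationalized.

```python
# Sketch only: a residual network that outputs the probability that the
# player will blunder on the next move. Encoding and labels are assumptions.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        out = torch.relu(self.conv1(x))
        return torch.relu(self.conv2(out) + x)  # skip connection

class BlunderClassifier(nn.Module):
    """Single-logit head: will the human blunder on the next move?"""
    def __init__(self, in_planes=12, channels=64, n_blocks=4):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(in_planes, channels, 3, padding=1), nn.ReLU())
        self.tower = nn.Sequential(*[ResBlock(channels) for _ in range(n_blocks)])
        self.head = nn.Linear(channels * 8 * 8, 1)

    def forward(self, board):                      # board: (batch, 12, 8, 8)
        x = self.tower(self.stem(board))
        return self.head(x.flatten(1)).squeeze(-1)  # one logit per position

# Dummy training step: labels are 1 if the played move was a blunder, else 0.
clf = BlunderClassifier()
boards = torch.randn(32, 12, 8, 8)
labels = torch.randint(0, 2, (32,)).float()
loss = nn.BCEWithLogitsLoss()(clf(boards), labels)
loss.backward()
```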

By designing AI with human collaboration in mind, one can accurately model granular human decision-making. The choices developers make in the design lead to this type of performance, and they can also help in understanding and predicting human error (McIlroy-Young, Sen, Kleinberg, & Anderson, 2020).


Reid McIlroy-Young, Siddhartha Sen, Jon Kleinberg, and Ashton Anderson. 2020. Aligning Superhuman AI with Human Behavior: Chess as a Model System. In Proceedings of the 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD ’20), August 23–27, 2020, Virtual Event, CA, USA. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3394486.3403219

