Research summary: Aligning Super Human AI with Human Behavior: Chess as a Model System

July 12, 2020

Summary contributed by Brooke Criswell (@Brooke_Criswell). She is pursuing a PhD in media psychology and has extensive experience in marketing and communications.

*Reference at the bottom


Artificial Intelligence (AI) is becoming more capable every day, and in some cases it is reaching superhuman performance. AI systems typically approach problems and decisions differently than people do (McIlroy-Young, Sen, Kleinberg, and Anderson, 2020). The researchers in this study created a new model that captures human chess players’ behavior at a move-by-move level, and developed chess algorithms that match that move-level behavior. In other words, current systems for playing chess online are designed to play the game to win.

However, in this research study, the authors found a way to align an AI chess player so that it chooses its next move in a more human-like manner. They found that applying existing chess engines to their data did not predict human moves very well. Their system, called “Maia,” is therefore a customized version of AlphaZero trained on human chess games, and it predicts human moves with higher accuracy than existing chess engines. It also achieves maximum accuracy when predicting the decisions made by players at specific skill levels. The design inverts the usual question: instead of asking “What move should be played?”, they ask “What move will a human play?”
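Conceptually, the skill-level targeting amounts to grouping the training games by player rating so that each model is fit to games from one rating band. Below is a minimal sketch of that idea in Python, with an assumed data layout and bin width rather than the authors’ actual pipeline:

```python
# Illustrative sketch (assumed data layout, not the authors' pipeline):
# group recorded games into rating bins so that a separate move-prediction
# model can be trained for each skill level.
def rating_bin(elo: int, bin_size: int = 100) -> int:
    """Map a player rating to a bin label, e.g. 1545 -> 1500."""
    return (elo // bin_size) * bin_size

games = [
    {"white_elo": 1180, "moves": ["e4", "e5", "Nf3"]},   # stand-in records
    {"white_elo": 1545, "moves": ["d4", "d5", "c4"]},
    {"white_elo": 1930, "moves": ["c4", "e5", "Nc3"]},
]

by_skill = {}
for game in games:
    by_skill.setdefault(rating_bin(game["white_elo"]), []).append(game)

# One model would then be trained per bin, so its predictions are targeted
# at players of that particular strength.
for bin_label, bin_games in sorted(by_skill.items()):
    print(bin_label, len(bin_games))
```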

The researchers did this by repurposing the AlphaZero deep neural network framework to predict human actions rather than the most likely winning move. Instead of training the algorithm on self-play games, they trained it on recorded human games to learn how humans actually play chess. The next step was building the policy network responsible for the prediction. From this, “Maia” was built, and it has a “natural parametrization under which it can be targeted to predict human moves at a particular skill level” (McIlroy-Young et al., 2020).
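To make the repurposing concrete, here is a minimal sketch in PyTorch. It is not the authors’ Maia code or architecture: the board encoding, move vocabulary size, and layer sizes are all assumptions for illustration. The key point is that the training target is the move the human actually played in a recorded game, not an engine’s preferred move.

```python
# Minimal sketch (not the authors' code): a policy network trained to
# predict the move a human actually played, rather than the best move.
# Assumptions: boards encoded as 12x8x8 piece planes; moves indexed into
# a fixed vocabulary of move labels.
import torch
import torch.nn as nn

NUM_MOVE_LABELS = 1968  # hypothetical size of the move vocabulary

class HumanMovePolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(12, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.policy_head = nn.Linear(64 * 8 * 8, NUM_MOVE_LABELS)

    def forward(self, board_planes):
        return self.policy_head(self.body(board_planes))  # move logits

model = HumanMovePolicy()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a hypothetical batch of recorded human positions:
# the label is the move the human played in that position.
boards = torch.randn(32, 12, 8, 8)               # stand-in for encoded positions
human_moves = torch.randint(0, NUM_MOVE_LABELS, (32,))
optimizer.zero_grad()
loss = loss_fn(model(boards), human_moves)
loss.backward()
optimizer.step()
```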

The second task they developed for “Maia” was to predict whether and when a human player would make a significant mistake, known as a “blunder,” on their next move.

For this task, they designed a custom deep residual neural network and trained it on the same data. They found that this network outperforms competitive baselines at predicting whether humans will make a mistake (McIlroy-Young et al., 2020).
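The following is a minimal sketch of how such a blunder predictor could look, again using the same toy board encoding and assumed layer sizes rather than the authors’ residual architecture: a small residual network that outputs the probability that the human to move will blunder.

```python
# Minimal sketch (assumptions, not the authors' architecture): a small
# residual network that takes an encoded position and predicts the
# probability that the human to move will blunder.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + x)  # skip connection

class BlunderPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.stem = nn.Conv2d(12, 64, 3, padding=1)
        self.blocks = nn.Sequential(ResidualBlock(), ResidualBlock())
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64 * 8 * 8, 1))

    def forward(self, board_planes):
        return self.head(self.blocks(self.stem(board_planes)))  # blunder logit

model = BlunderPredictor()
loss_fn = nn.BCEWithLogitsLoss()

# Hypothetical batch: label is 1 if the human's recorded next move was a blunder.
boards = torch.randn(32, 12, 8, 8)
blundered = torch.randint(0, 2, (32, 1)).float()
loss = loss_fn(model(boards), blundered)
```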

By designing AI with human collaboration in mind, one can accurately model granular human decision-making. The design choices developers make can enable this kind of performance, and the same approach can help with understanding and predicting human error (McIlroy-Young et al., 2020).


Reid McIlroy-Young, Siddhartha Sen, Jon Kleinberg, and Ashton Anderson. 2020. Aligning Superhuman AI with Human Behavior: Chess as a Model System. In Proceedings of the 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD ’20), August 23–27, 2020, Virtual Event, CA, USA. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3394486.3403219

