Montreal AI Ethics Institute


Research summary: Aligning Super Human AI with Human Behavior: Chess as a Model System

July 12, 2020

Summary contributed by Brooke Criswell (@Brooke_Criswell). She is pursuing a PhD in media psychology and has extensive experience in marketing and communications.

*Reference at the bottom


Artificial intelligence (AI) is achieving, and in some cases surpassing, human-level performance, yet AI systems typically approach problems and decisions differently than people do (McIlroy-Young, Sen, Kleinberg, & Anderson, 2020). The researchers in this study examined human chess players' behavior at a move-by-move level and developed chess algorithms that match that move-level behavior. Existing systems for playing chess online, by contrast, are designed simply to win the game.

In this study, the researchers found a way to make an AI chess player choose its next move in a more human-like manner. They found that applying existing chess engines to their data did not predict human moves very well. Their system, "Maia," is a customized version of AlphaZero trained on human chess games, and it predicts human moves with higher accuracy than existing engines. It also achieves maximum accuracy when predicting the decisions of players at a specific skill level. The design takes a dual approach: instead of asking, "What move should be played?" it asks, "What move will a human play?"

The researchers did this by repurposing the AlphaZero deep neural network framework to predict human actions rather than the most likely winning move. Instead of training the algorithm on self-play games, they trained it on recorded human games so it could learn how humans actually play chess. The next step was building the policy network responsible for the prediction. From this, "Maia" was built; it has a "natural parametrization under which it can be targeted to predict human moves at a particular skill level" (McIlroy-Young, Sen, Kleinberg, & Anderson, 2020).
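The supervised setup described above, learning "what a human plays here" from recorded games rather than "what wins", can be illustrated with a toy sketch. This is not Maia's actual architecture (which is a deep policy network); it stands in a simple frequency-based predictor, and the position names and data are invented for illustration.

```python
from collections import Counter, defaultdict

def train_move_predictor(games):
    """Learn, for each position, the distribution of moves humans played there.

    `games` is a list of (position, move) pairs harvested from recorded human
    games; plain strings stand in for real board encodings in this sketch.
    """
    counts = defaultdict(Counter)
    for position, move in games:
        counts[position][move] += 1
    return counts

def predict_human_move(counts, position):
    """Return the move humans most often played from this position, if seen."""
    if position not in counts:
        return None
    return counts[position].most_common(1)[0][0]

# Hypothetical data: in this position most players in our sample chose a6.
games = [
    ("ruy_lopez", "a6"), ("ruy_lopez", "a6"), ("ruy_lopez", "Nf6"),
]
model = train_move_predictor(games)
print(predict_human_move(model, "ruy_lopez"))  # prints: a6
```

Filtering the training games by player rating before counting is the analogue of Maia's skill-level targeting: each skill band yields its own predictor.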

The second task the researchers developed was predicting when and whether a human player will make a significant mistake, called a "blunder," on their next move.

For this task, they designed a custom deep residual neural network and trained it on the same data. They found that this network outperforms competitive baselines at predicting whether humans will make a mistake (McIlroy-Young, Sen, Kleinberg, & Anderson, 2020).
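Blunder prediction is, at its core, binary classification: given features of the current situation, estimate the probability that the human's next move is a mistake. The sketch below uses a tiny logistic-regression model in place of the paper's deep residual network, and the features (position sharpness, time pressure) and data are invented for illustration.

```python
import math

def train_blunder_classifier(samples, lr=0.5, epochs=2000):
    """Fit a small logistic-regression model of P(blunder | features) via SGD.

    Each sample is (features, label), where label is 1 if the human's
    next move in that situation was a blunder.
    """
    dim = len(samples[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in samples:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))      # predicted blunder probability
            g = p - y                            # gradient of log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict_blunder(model, x):
    w, b = model
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical features: [position_sharpness, time_pressure] -> blundered?
data = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.1, 0.2], 0), ([0.2, 0.1], 0)]
model = train_blunder_classifier(data)
print(predict_blunder(model, [0.9, 0.9]))  # sharp position, low time: high probability
print(predict_blunder(model, [0.1, 0.1]))  # quiet position, ample time: low probability
```

The paper's residual network plays the same role as this classifier but learns its features directly from the board position instead of hand-picked inputs.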

By designing AI with human collaboration in mind, one can accurately model granular human decision-making. The design choices developers make enable this type of performance, and they can also help in understanding and predicting human error (McIlroy-Young, Sen, Kleinberg, & Anderson, 2020).


Reid McIlroy-Young, Siddhartha Sen, Jon Kleinberg, and Ashton Anderson. 2020. Aligning Superhuman AI with Human Behavior: Chess as a Model System. In Proceedings of the 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD ’20), August 23–27, 2020, Virtual Event, CA, USA. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3394486.3403219
