Visions of Artificial Intelligence and Robots in Science Fiction: a computational analysis

October 30, 2022

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Hirotaka Osawa, Dohjin Miyamoto, Satoshi Hase, Reina Saijo, Kentaro Fukuchi, Yoichiro Miyake]


Overview: How is AI portrayed in science fiction? While science fiction is a powerful tool, it is sometimes read as science non-fiction, separating us from the realities of the technology.


Introduction

How is AI presented in science fiction? In collaboration with science fiction experts, the authors studied the patterns and stereotypes attributed to technology in a science fiction context. To explore these, I’ll cover how science fiction bears influence in both industry and academia, as well as the danger of letting the narrative drift from reality. I’ll then dig into the four types of AI representation the authors identify, before exploring how embodiment contributes to our perception of intelligence. Finally, I’ll conclude by reflecting on my own encounters with the danger of taking science fiction as science non-fiction.

Key Insights

Science fiction bears influence

Science fiction shapes academics’ work as well as industry attitudes towards technology. With pieces such as Frankenstein, science fiction plays on common fears, such as that of a creation rising up against its maker. The effect is amplified by the fact that some stories have seemingly foretold the future, such as Sakyo Komatsu’s “Virus”, whose pandemic scenario closely resembles COVID-19 (p. 3).

Nevertheless, at times, the context in which authors write influences them more than the realities of the technology. For example, Jennifer Robertson has pointed out how the Japanese Government’s vision of the AI future drew on sexist interpretations of technology in some science fiction works (p. 2). Hence, even when these stories accurately foretell the future, this doesn’t mean they automatically possess scientific rigour.

The narrative

While those involved in science fiction are good storytellers, they’re not necessarily scientists. Consequently, their narratives can end up in a space removed from the realities of the technology. Representations of AI are generally more diverse and closer to reality than those of robots, but science fiction stories must still be taken into account with caution, especially given the controversial themes mentioned above.

Results

The authors identified four types of AI representation in science fiction:

Machine-type AI: this category captures AI in science fiction that is automated and generally does not learn from its environment. Such AIs exhibit qualities like low consciousness and a low relation to humanity, and their inability to adapt often damages the human cause at play.

Human-type AI: this was the most common type. These AIs are highly human-like in appearance, with high language skills and moderate learning skills. They are usually treated as a metaphor for humans, assuming independent roles in society and learning from their environment.

Buddy-type AI: analogous to augmentative AI, these AIs are usually represented as technology that helps with human tasks or extends human cognition. They have a low human appearance and are generally not physical in form, instead being used for automated tasks such as weaponry.

Infrastructure-type AI: this type combines a low human appearance with high connectivity, directed at tasks such as facility management. It is mainly concerned with facilitating cooperation between human and machine decision-making.
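
Taken together, the four types can be read as a small taxonomy of attributes (appearance, language, learning, connectivity, narrative role). The Python sketch below records each archetype with attribute labels paraphrased from this summary; it is illustrative only and does not reproduce the paper’s actual coding scheme or scores.

```python
# Illustrative sketch only: attribute labels are paraphrased from this
# summary's prose, not taken from the paper's coding scheme or data.

SF_AI_ARCHETYPES = {
    "Machine-type": {
        "learning": "none (does not learn from its environment)",
        "consciousness": "low",
        "relation_to_humanity": "low",
        "narrative_role": "rigid automation whose inability to adapt harms humans",
    },
    "Human-type": {
        "human_appearance": "high",
        "language_ability": "high",
        "learning": "moderate",
        "narrative_role": "independent member of society; metaphor for humans",
    },
    "Buddy-type": {
        "human_appearance": "low",
        "physical_form": "often none",
        "narrative_role": "augments human tasks and cognition (e.g. automated weaponry)",
    },
    "Infrastructure-type": {
        "human_appearance": "low",
        "connectivity": "high",
        "narrative_role": "coordinates human and machine decisions (e.g. facility management)",
    },
}

# Example: list which archetypes this summary describes as low in human appearance.
low_appearance = [name for name, attrs in SF_AI_ARCHETYPES.items()
                  if attrs.get("human_appearance") == "low"]
print(low_appearance)  # ['Buddy-type', 'Infrastructure-type']
```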

Across these four types, embodiment emerges as a crucial component of humans viewing an AI as intelligent. A human-like figure contributes to a machine appearing intelligent, more than a merely physical appearance does. Language ability, consciousness and generality also contributed positively to perceptions of intelligence.

However, the authors suggest that buddy-type and infrastructure-type AI may become the main templates for portraying the AI future. Human-likeness in machine design is a fine line to walk, and it can negatively affect how humans perceive a technology (as in Terminator). Hence, a non-human form and an augmentative approach seem the surest route to a positive perception of technology such as AI.

Between the lines

I love a science fiction movie or novel. However, given science fiction’s prominence in the Western context I find myself in, it has shaped how some people I have come across perceive AI. Often, in talks I give, I start by steering the definition of AI away from killer robots and Skynet and instead focus on how common AI already is in our lives (recommender algorithms, for example). So this gets me wondering: can we tell a compelling story that stays true to the current capabilities of the technology? It may not be the most gripping story, but it could form part of an educational approach that utilises science fiction to understand this prevalent technology better.

