
Visions of Artificial Intelligence and Robots in Science Fiction: a computational analysis

October 30, 2022

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Hirotaka Osawa, Dohjin Miyamoto, Satoshi Hase, Reina Saijo, Kentaro Fukuchi, Yoichiro Miyake]


Overview: How is AI portrayed in science fiction? While science fiction is a powerful tool, it sometimes gets interpreted as science non-fiction, separating us from the reality of the technology.


Introduction

How is AI presented in science fiction? In collaboration with science fiction experts, the authors studied the patterns and stereotypes attributed to the technology in a science fiction context, identifying four types of AI representation. To explore these, I’ll cover how science fiction bears influence in both industry and academia, as well as the danger of letting the narrative drift from reality. I’ll then dig into the four types before exploring how embodiment contributes to our perception of intelligence. I’ll conclude by reflecting on my own encounters with the danger of taking science fiction as science non-fiction.

Key Insights

Science fiction bears influence

Science fiction impacts academics’ work as well as industry attitudes towards technology. With pieces such as Frankenstein, science fiction plays on fears of a creation rising up against its maker. The effect is amplified by the fact that some stories have accurately foretold the future, such as Sakyo Komatsu’s “Virus”, which closely resembles the COVID pandemic (p. 3).

Nevertheless, at times the context in which an author writes influences them more than the realities of the technology. For example, Jennifer Robertson pointed to how the Japanese government’s vision of the AI future drew on sexist interpretations of technology found in some science fiction works (p. 2). Hence, while these stories may accurately foretell the future, this doesn’t mean they automatically possess scientific rigour.

The narrative

While those involved in science fiction are good storytellers, they’re not necessarily scientists. Consequently, their narratives can end up in a space removed from the realities of the technology. Representations of AI are generally more diverse and closer to reality than representations of robots, but science fiction stories must still be taken into account with caution, especially given the controversial themes mentioned above.

Results

The authors identified four types of AI representation in science fiction:

Machine-type AI: this type captures AI in science fiction that is automated and generally does not learn from its environment. It exhibits qualities such as low consciousness and little relation to humanity, and its inability to adapt often damages the human cause at play.

Human-type AI: this was the most common type. These AIs are highly human-like in appearance, with strong language skills and moderate learning ability. They are usually treated as a metaphor for humans, assuming independent roles in society and learning from their environment.

Buddy-type AI: analogous to augmentative AI, these AIs are usually represented as technology that helps with human tasks or extends human cognition. They have a low human-like appearance and are generally not physical in form, instead being used for automated tasks such as weaponry.

Infrastructure-type AI: this type has a low human-like appearance but high connectivity, directed at tasks such as facility management. It is mainly concerned with facilitating cooperation between human and machine decision-making.

Across these four types, embodiment is a crucial component of humans viewing an AI as intelligent. A human-like figure, rather than a merely physical one, contributes to the machine appearing intelligent. Language ability, consciousness, and generality also contributed positively to perceptions of intelligence.

However, the authors found that buddy-type and infrastructure-type AI may become the main port of call for portraying the AI future. The human aspect of machine design is a fine line to walk, and it can negatively affect how humans perceive a technology (as in Terminator). Hence, a non-human form and an augmentative approach seem to be the catch-all winner for a positive perception of technology such as AI.

Between the lines

I love a science fiction movie or novel. However, given its prominence in the Western context I find myself in, science fiction has shaped the perceptions of some people I have come across. Often, in the talks I give, I start by steering the definition of AI away from killer robots and Skynet and instead focus on how common AI already is in our lives (like recommender algorithms). So this gets me wondering: can we tell a compelling story that stays true to the current capabilities of the technology? It may not be the most compelling story, but it could form part of an educational approach that uses science fiction to better understand this prevalent technology.
