
GenAI Against Humanity: Nefarious Applications of Generative Artificial Intelligence and Large Language Models

December 6, 2023

🔬 Research Summary by Emilio Ferrara, a professor in the Thomas Lord Department of Computer Science at the University of Southern California.

[Original paper by Emilio Ferrara]


Overview: This paper delves into the dual nature of Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs), highlighting their potential for both groundbreaking advances and malicious misuse. By examining a range of applications, the paper underscores the ethical and societal challenges these technologies pose and emphasizes the need for responsible deployment.


Introduction

In 2019, a UK-based energy company was deceived by a synthetic voice impersonating the CEO of its parent firm into making a money transfer, at a significant financial loss. Alarming as it is, this incident offers only a glimpse of the broader implications of GenAI and LLMs. My research explores the many ways these technologies can be harnessed for both beneficial and nefarious purposes. Through a deep dive into various applications, the paper seeks to answer two questions: How can GenAI and LLMs reshape society, and what risks does their misuse carry?

Key Insights

GenAI’s Transformative Potential

Generative AI has shown promise across sectors, from restoring historical artifacts to personalized content generation. Its ability to produce content, simulate voices, and even recreate experiences has opened up a wealth of opportunities, promising innovations that could redefine entire industries.

The Dark Side of GenAI

However, the same capabilities that make GenAI revolutionary also make it ripe for misuse. My research highlights concerns about targeted surveillance, where GenAI-enhanced capabilities enable invasive monitoring at scale. Content moderation, while essential for removing harmful material, can be weaponized into extreme censorship. Adversarial attacks powered by GenAI can deceive even experts, opening the door to security breaches. And the capacity to manipulate public sentiment can cascade through socio-technical systems, from moving stock markets to swaying election outcomes.
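
To make the adversarial-attack point concrete, here is a minimal sketch in Python/NumPy of the underlying mechanism: a small, gradient-aligned perturbation flips a classifier's decision even though the input barely changes. This toy uses a fixed linear model rather than a GenAI system, and every name in it is illustrative; it is not code from the paper.

```python
import numpy as np

# Toy illustration (not from the paper): an FGSM-style perturbation
# flips a linear classifier's decision with a tiny per-coordinate change.

rng = np.random.default_rng(seed=0)

w = rng.normal(size=20)  # stand-in for a model's learned weights
b = 0.0

def predict(x: np.ndarray) -> int:
    """Return 1 if the linear score w.x + b is positive, else 0."""
    return int(w @ x + b > 0)

# An input the classifier confidently labels as class 0.
x_clean = -0.5 * w / np.linalg.norm(w)

# Nudge every coordinate by epsilon in the direction that raises the
# class-1 score; for a linear model, the gradient sign is just sign(w).
epsilon = 0.25
x_adv = x_clean + epsilon * np.sign(w)

print("clean prediction:      ", predict(x_clean))                 # 0
print("adversarial prediction:", predict(x_adv))                   # 1
print("max coordinate change: ", np.max(np.abs(x_adv - x_clean)))  # 0.25
```

The same principle, searching for inputs that sit just across a model's decision boundary, is what allows generative systems to mass-produce content that slips past automated detectors and human reviewers alike.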

Planetary Implications

The paper emphasizes the planetary implications of GenAI, especially when deployed at scale. Its influence extends beyond technological advancements, impacting socio-technical systems, including the economy, democracy, and infrastructure.

Between the lines

The findings of this research are both enlightening and alarming. While GenAI and LLMs hold immense potential, their unchecked proliferation could create unprecedented challenges. The paper serves as a timely reminder of the ethical considerations that must accompany technological advancement. One gap is that it stops short of exploring mitigation strategies for the concerns it highlights; future research could develop robust defense mechanisms and ethical guidelines to ensure the responsible deployment of GenAI and LLMs.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
