Montreal AI Ethics Institute

Democratizing AI ethics literacy


GenAI Against Humanity: Nefarious Applications of Generative Artificial Intelligence and Large Language Models

December 6, 2023

🔬 Research Summary by Emilio Ferrara, a professor in the Thomas Lord Department of Computer Science at the University of Southern California.

[Original paper by Emilio Ferrara]


Overview: This paper delves into the dual nature of Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs), highlighting their potential for both groundbreaking advancements and malicious misuse. By examining various applications, the paper underscores the ethical and societal challenges these technologies pose, emphasizing the need for responsible deployment.


Introduction

In 2019, a UK-based energy company was deceived by a synthetic voice impersonating the parent firm’s CEO, requesting a money transfer, leading to a significant financial loss. This incident, while alarming, is a mere glimpse into the vast implications of GenAI and LLMs. My research aims to explore the myriad ways in which these technologies can be harnessed for both beneficial and nefarious purposes. Through a deep dive into various applications, the paper seeks to answer the following questions: How can GenAI and LLMs reshape society, and what are the potential risks associated with their misuse?

Key Insights

GenAI’s Transformative Potential

Generative AI has shown promise across various sectors, from restoring historical artifacts to enhancing personalized content generation. Its ability to generate content, simulate voices, and even recreate experiences has opened up a plethora of opportunities, promising innovations that could redefine industries.

The Dark Side of GenAI

However, the same capabilities that make GenAI revolutionary also make it susceptible to misuse. My research highlights concerns related to targeted surveillance, where enhanced generative capabilities can enable invasive monitoring. Content moderation, while essential for removing harmful content, can be weaponized for extreme censorship. Adversarial attacks, powered by GenAI, can deceive even experts, leading to potential security breaches. Furthermore, the ability to manipulate public sentiment at scale can have cascading effects on socio-technical systems, from influencing stock markets to swaying election outcomes.
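To make the adversarial-evasion concern concrete, here is a toy sketch (not from the paper): a naive keyword-based content filter of the kind moderation pipelines once relied on, evaded by a simple Unicode homoglyph substitution. The blocklist, function names, and example message are all hypothetical; real moderation systems are far more sophisticated, but GenAI makes generating such perturbations at scale trivial.

```python
# Toy illustration of adversarial evasion of a naive content filter.
# BLOCKLIST and all names are hypothetical, for demonstration only.

BLOCKLIST = {"scam", "fraud"}

def naive_filter(text: str) -> bool:
    """Flag the text if any blocklisted keyword appears as a word."""
    words = text.lower().split()
    return any(word.strip(".,!?") in BLOCKLIST for word in words)

def homoglyph_perturb(text: str) -> str:
    """Swap the Latin 'a' for the visually identical Cyrillic 'а' (U+0430)."""
    return text.replace("a", "\u0430")

message = "This is a scam, send money now"
print(naive_filter(message))                     # True: flagged
print(naive_filter(homoglyph_perturb(message)))  # False: evades the filter
```

The perturbed message looks identical to a human reader but no longer matches the blocklist, which is why robust moderation cannot rely on surface-level string matching alone.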

Planetary Implications

The paper emphasizes the planetary implications of GenAI, especially when deployed at scale. Its influence extends beyond technological advancements, impacting socio-technical systems, including the economy, democracy, and infrastructure.

Between the lines

The findings of this research are both enlightening and alarming. While GenAI and LLMs hold immense potential, their unchecked proliferation can lead to unprecedented challenges. The paper serves as a timely reminder of the ethical considerations that must accompany technological advancements. One gap in the research is the exploration of potential mitigation strategies to address the highlighted concerns. Future research could delve deeper into developing robust defense mechanisms and ethical guidelines to ensure the responsible deployment of GenAI and LLMs.



© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.