Montreal AI Ethics Institute

Democratizing AI ethics literacy


GenAI Against Humanity: Nefarious Applications of Generative Artificial Intelligence and Large Language Models

December 6, 2023

🔬 Research Summary by Emilio Ferrara, a professor in the Thomas Lord Department of Computer Science at the University of Southern California.

[Original paper by Emilio Ferrara]


Overview: This paper delves into the dual nature of Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs), highlighting their potential for both groundbreaking advancements and malicious misuse. By examining a range of applications, the paper underscores the ethical and societal challenges these technologies pose, emphasizing the need for responsible deployment.


Introduction

In 2019, a UK-based energy company was deceived by a synthetic voice impersonating the parent firm's CEO into making a money transfer, leading to a significant financial loss. This incident, while alarming, is a mere glimpse into the vast implications of GenAI and LLMs. My research aims to explore the myriad ways in which these technologies can be harnessed for both beneficial and nefarious purposes. Through a deep dive into various applications, the paper seeks to answer the following questions: How can GenAI and LLMs reshape society, and what are the potential risks associated with their misuse?

Key Insights

GenAI’s Transformative Potential

Generative AI has shown promise across various sectors, from restoring historical artifacts to enhancing personalized content generation. Its ability to generate content, simulate voices, and even recreate experiences has opened up a plethora of opportunities, promising innovations that could redefine industries.

The Dark Side of GenAI

However, the same capabilities that make GenAI revolutionary also make it susceptible to misuse. My research highlights concerns related to targeted surveillance, where enhanced capabilities can enable invasive monitoring. Content moderation, while essential for removing harmful content, can be weaponized for extreme censorship. Adversarial attacks powered by GenAI can deceive even experts, leading to potential security breaches. Furthermore, the ability to manipulate public sentiment can have cascading effects on socio-technical systems, from influencing stock markets to swaying election outcomes.

Planetary Implications

The paper emphasizes the planetary implications of GenAI, especially when deployed at scale. Its influence extends beyond technological advancements, impacting socio-technical systems, including the economy, democracy, and infrastructure.

Between the lines

The findings of this research are both enlightening and alarming. While GenAI and LLMs hold immense potential, their unchecked proliferation can lead to unprecedented challenges. The paper serves as a timely reminder of the ethical considerations that must accompany technological advancements. One gap in the research is the lack of exploration of potential mitigation strategies to address the highlighted concerns. Future research could delve deeper into developing robust defense mechanisms and ethical guidelines to ensure the responsible deployment of GenAI and LLMs.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.


  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.