
Mapping the Ethics of Generative AI: A Comprehensive Scoping Review

January 5, 2025

🔬 Research Summary by Francesca Carlon. Francesca works for the Research Group “Ethics of Generative AI” at the University of Stuttgart. Her interests cover Ethics & AI, NLP, Machine Learning and Linguistics.

[Original paper by Thilo Hagendorff]


Overview: This review synthesizes recent discussions on the ethical implications of generative AI, especially large language models and text-to-image models, using a scoping review methodology to analyze the existing literature. It develops a detailed taxonomy of ethical issues, identifying 378 distinct codes across various categories and highlighting both the complexity of the field and the potential harms from misaligned AI systems. The research fills a gap by providing a structured overview of these ethical considerations, calls for a balanced assessment of risks and benefits, and serves as a resource for scholars, practitioners, and policymakers, guiding future research and technology governance.


Introduction

This research explores the ethical implications of rapid advances in generative artificial intelligence (AI) technologies, such as large language models (LLMs). It aims to define key terms and provide a structured overview of ethical discussions on generative AI, highlighting potential harms from misaligned AI systems. It constructs a comprehensive taxonomy of ethical issues, organized into 378 distinct codes, reflecting the complexity of ethical considerations in this domain; the full taxonomy is available online. The codes are synthesized into 19 clusters of ethical issues, such as fairness, safety, interaction risks, harmful content, hallucinations, and alignment, making it possible to map the normative concepts in the discourse. The findings underscore the complexity of the ethical landscape surrounding generative AI. Moreover, the study points out critical gaps in the literature and calls for a balanced consideration of both the risks and benefits of generative AI.
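
To make the shape of this taxonomy concrete, the minimal sketch below models it as a mapping from clusters to codes. The cluster names are taken from the summary above; the individual codes shown are invented placeholders for illustration, not entries from the actual taxonomy.

```python
# Minimal sketch of the taxonomy's structure: clusters map to lists of codes.
# Cluster names follow the summary; the codes are illustrative placeholders.
taxonomy: dict[str, list[str]] = {
    "fairness": ["bias in training data", "stereotype reproduction"],
    "safety": ["existential risk", "capability control"],
    "harmful content": ["toxic language", "disinformation"],
    "hallucinations": ["fabricated citations", "factual errors"],
    "alignment": ["goal misspecification", "value learning"],
    # ... 14 further clusters in the full taxonomy, 19 in total
}

total_codes = sum(len(codes) for codes in taxonomy.values())
print(f"{len(taxonomy)} clusters, {total_codes} codes shown (378 in the full taxonomy)")
```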

Methodology and Scope

The study employed a scoping review methodology, adhering to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) protocol to ensure a thorough examination of the existing literature. Initially, exploratory readings identified 29 keywords, which drove a comprehensive search across Google Scholar, arXiv, PhilPapers, and Elicit, yielding 1,674 results. The search focused on papers published from 2021 onwards, reflecting the rise of generative AI tools like DALL-E and ChatGPT. After checking the inclusion criteria, 162 papers remained for full-text analysis; citation chaining and ongoing literature monitoring added 17 more, for a total of 179 documents. Content analysis was performed in NVivo using a bottom-up, inductive coding approach that captured arguments with a normative dimension and excluded non-ethical content. Multiple coding cycles ensured consistency and produced 378 distinct codes, which were then synthesized into high-level categories.
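
As a rough illustration of this screening pipeline, the sketch below mirrors the reported counts with hypothetical record objects. The field names and the `screen` helper are assumptions made for illustration, not the authors' tooling; in the actual review, inclusion judgments and coding were done manually (the latter in NVivo).

```python
from dataclasses import dataclass

@dataclass
class Record:
    title: str
    year: int
    meets_criteria: bool  # judged manually against the inclusion criteria

def screen(records: list[Record]) -> list[Record]:
    """Keep papers from 2021 onwards that pass the inclusion check."""
    return [r for r in records if r.year >= 2021 and r.meets_criteria]

# The lists below would hold real records; the counts in comments mirror the review.
search_results: list[Record] = []   # 1,674 results from the four databases
included = screen(search_results)   # 162 papers after screening
chained: list[Record] = []          # 17 papers from citation chaining and monitoring
corpus = included + chained         # 179 documents for full-text analysis
print(f"{len(corpus)} documents selected for analysis")
```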

Ethical Issues in Generative AI

The development of generative AI technologies has significantly shifted the ethical discourse compared to earlier debates on traditional discriminative machine learning, introducing new ethical issues and changing the focus of existing ones. Previous meta-studies identified core ethical principles like transparency, fairness, security, safety, accountability, privacy, and beneficence. The advent of generative AI, however, has brought new issues to the forefront, such as jailbreaking, hallucination, alignment, harmful content, copyright issues, data leaks from models, and impacts on human creativity.

Fairness and bias remain critical, with discussions around the perpetuation of discriminatory societal patterns, biases in training data, and the centralization of AI development power. Safety emerges as another paramount concern, centered among other things on the risks associated with superhuman AI models, including existential threats and the necessity of stringent safety measures. Other significant topics include the generation of harmful content, privacy risks, the challenges of human-AI interaction, security vulnerabilities (e.g., jailbreaking or prompt hacking), and the impact on education and learning. The literature also explores the implications for copyright and authorship, the economic impacts of AI, and the importance of transparency and AI governance.

Discussion on AI and Ethical Issues

In general, the literature on the ethics of generative AI focuses strongly on negative aspects and risks, overshadowing potential benefits and opportunities. Furthermore, many ethical concerns in the discourse are amplified by claims lacking empirical support, leading to an exaggerated perception of risk; the fear that language models could assist in creating pathogens, for instance, is found to rest on minimal or contradictory empirical evidence.

Moreover, the literature largely neglects non-anthropocentric perspectives, overlooking the effects of generative AI on non-human animals. It also focuses mainly on LLMs and text-to-image models, rarely addressing the ethical implications of emerging multi-modal models, agents, or tool use. When discussions do extend to more speculative areas like artificial general intelligence (AGI), they often delve into philosophical debates about potential existential risks, which may distract from present and realistic concerns. This critique suggests a need for a more balanced, empirically grounded discourse that adequately weighs benefits against risks and expands ethical consideration to a broader spectrum of impacts and technologies.

Critical Gaps and Forthcoming Research

While the review maps out a comprehensive taxonomy of ethical issues, it also identifies gaps in the literature, such as the underrepresentation of certain ethical concerns and the need for more empirical research to support normative claims and risk assessments.

However, as a static review, the study cannot capture the dynamic nature of ethical debates, including how normative arguments and positions evolve over time. Moreover, while it identified conflicts between positions, resolving them was outside the study’s scope.

Between the lines

The findings of this scoping review are crucial for understanding and evaluating the ethical landscape of generative AI. The literature’s emphasis on negative aspects, and its limited consideration of positive impacts, point to a challenge in achieving a balanced perspective in ethical discussions and suggest a bias against recognizing benefits alongside risks.

In sum, ethics research, despite its limitations, plays a critical role in shaping the development and deployment of generative AI technologies. Ethical guidance remains crucial for ensuring that these technologies are developed and used responsibly.

By offering a detailed taxonomy of ethical issues, the review can serve as a foundation for future policy-making.

