Montreal AI Ethics Institute

Democratizing AI ethics literacy

Blog

Is ChatGPT for everyone? Seeing beyond the hype toward responsible use in education

January 3, 2023

✍️ Column by Marianna Ganapini, Pamela Lirio, and Andrea Pedeferri. Dr. Marianna Ganapini is our Faculty Director and Assistant Professor in Philosophy at Union College. Dr. Pamela Lirio is an Associate …

Can LLMs Enhance the Conversational AI Experience?

December 6, 2022

🔬 Column by Julia Anderson, a writer and conversational UX designer exploring how technology can make us better humans. Part of the ongoing Like Talking to a Person series. During conversations, sometimes …

System Safety and Artificial Intelligence

December 6, 2022

🔬 Research Summary by Roel Dobbe, an Assistant Professor working at the intersection of engineering, design, and governance of data-driven and algorithmic control and decision-making systems. [Original paper by …]

A Prompt Array Keeps the Bias Away: Debiasing Vision-Language Models with Adversarial Learning

December 6, 2022

🔬 Research Summary by Siobhan Mackenzie Hall, a PhD student in the Oxford Neural Interfacing group at the University of Oxford. Siobhan is also a member of the Oxford Artificial Intelligence Society, along with the …

A Hazard Analysis Framework for Code Synthesis Large Language Models

December 6, 2022

🔬 Research Summary by Heidy Khlaaf, an Engineering Director at Trail of Bits specializing in the evaluation, specification, and verification of complex or autonomous software implementations in safety-critical systems, …


About Us


Founded in 2018, the Montreal AI Ethics Institute (MAIEI) is an international non-profit organization equipping citizens concerned about artificial intelligence and its impact on society to take action.

  • © 2024 Montreal AI Ethics Institute. All rights reserved.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.