Montreal AI Ethics Institute

Democratizing AI ethics literacy

Blog

How Culturally Aligned are Large Language Models?

January 27, 2024

🔬 Research Summary by Reem Ibrahim Masoud, a Ph.D. student at University College London (UCL) specializing in the cultural alignment of large language models. [Original paper by Reem I. Masoud, Ziquan Liu, Martin …

How Helpful do Novice Programmers Find the Feedback of an Automated Repair Tool?

January 27, 2024

🔬 Research Summary by Oka Kurniawan, a Computer Science faculty member at the Singapore University of Technology and Design, Singapore. [Original paper by Oka Kurniawan, Christopher M. Poskitt, Ismam Al Hoque, Norman …

On Prediction-Modelers and Decision-Makers: Why Fairness Requires More Than a Fair Prediction Model

January 27, 2024

🔬 Research Summary by Teresa Scantamburlo, an Assistant Professor at Ca’ Foscari University of Venice (Italy). She works at the intersection of computer science and applied ethics, focusing on AI governance and human …

The importance of audit in AI governance

January 27, 2024

🔬 Research Summary by Diptish Dey, Ph.D., and Debarati Bhaumik, Ph.D. Diptish Dey teaches and conducts research in responsible AI at the Faculty of Business & Economics of the Amsterdam University of Applied …

Exploiting Large Language Models (LLMs) through Deception Techniques and Persuasion Principles

January 25, 2024

🔬 Research Summary by Sonali Singh, a Ph.D. student at Texas Tech University working on large language models (LLMs). [Original paper by Sonali Singh, Faranak Abri, and Akbar Siami Namin] Overview: This paper …


