Montreal AI Ethics Institute

Democratizing AI ethics literacy

Safety and Security

AI Policy Corner: Frontier AI Safety Commitments, AI Seoul Summit 2024

April 28, 2025

✍️ By Alexander Wilhelm. Alexander is a PhD student in Political Science and a Graduate Affiliate at the Governance and Responsible AI Lab (GRAIL), Purdue University. 📌 Editor’s Note: This article is part of …

Digital Sex Crime, Online Misogyny, and Digital Feminism in South Korea

January 13, 2025

🔬 Research Summary by Giuliana Luz Grabina, a McGill University philosophy alumna with an interest in AI/technology policy regulation from a gendered perspective. [Original paper by Minyoung Moon] Overview: …

Responsible Generative AI: A Reference Architecture for Designing Foundation Model-based Agents

February 14, 2024

🔬 Research Summary by Dr. Qinghua Lu, team leader of the Responsible AI science team at CSIRO's Data61 and winner of the 2023 APAC Women in AI Trailblazer Award. [Original paper by Qinghua Lu, Liming …

Beyond Empirical Windowing: An Attention-Based Approach for Trust Prediction in Autonomous Vehicles

February 5, 2024

🔬 Research Summary by Zhaobo Zheng, a scientist at Honda Research Institute USA, Inc. [Original paper by Minxue Niu, Zhaobo Zheng, Kumar Akash, and Teruhisa Misu] Overview: Trust in autonomous driving is …

Exploiting Large Language Models (LLMs) through Deception Techniques and Persuasion Principles

January 25, 2024

🔬 Research Summary by Sonali Singh, a Ph.D. student at Texas Tech University working on large language models (LLMs). [Original paper by Sonali Singh, Faranak Abri, and Akbar Siami Namin] Overview: This paper …



Archive

  • © MONTREAL AI ETHICS INSTITUTE. All rights reserved 2024.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.
  • Learn more about our open access policy here.
  • Creative Commons License

    Save hours of work and stay on top of Responsible AI research and reporting with our bi-weekly email newsletter.