Montreal AI Ethics Institute

Democratizing AI ethics literacy

Transparency

Representation Engineering: A Top-Down Approach to AI Transparency

January 25, 2024

🔬 Research Summary by Andy Zou, a Ph.D. student at CMU, advised by Zico Kolter and Matt Fredrikson. He also co-founded the Center for AI Safety (safe.ai). [Original paper by Andy Zou, Long Phan, Sarah Chen, James …]

Who to Trust, How and Why: Untangling AI Ethics Principles, Trustworthiness and Trust

January 21, 2024

🔬 Research Summary by Andreas Duenser and David M. Douglas. Andreas Duenser is a Principal Research Scientist at CSIRO - Data61, Hobart, Australia, and is interested in the convergence of psychology and emerging …

Science Communications for Explainable Artificial Intelligence

December 14, 2023

🔬 Research Summary by Simon Hudson, a writer and researcher investigating subjects in AI governance, human-machine collaboration, and Science Communications, who is currently co-leading the core team behind Botto, a …

Towards an Understanding of Developers’ Perceptions of Transparency in Software Development: A Preliminary Study

December 3, 2023

🔬 Research Summary by Humphrey O. Obie, an Adjunct Research Fellow with the HumaniSE Lab at Monash University; his research is at the intersection of human values and software and AI systems. [Original paper by …]

On the Challenges of Using Black-Box APIs for Toxicity Evaluation in Research

September 6, 2023

🔬 Research Summary by Luiza Pozzobon, a Research Scholar at Cohere For AI, where she currently researches model safety. She’s also a master’s student at the University of Campinas, Brazil. [Original paper by Luiza …]
