Montreal AI Ethics Institute

Democratizing AI ethics literacy

Bias Mitigation

The Bias of Harmful Label Associations in Vision-Language Models

February 3, 2025

🔬 Research Summary by Caner Hazirbas, Research Scientist at Meta and Ph.D. graduate in Computer Vision from the Technical University of Munich. [Original paper by Caner Hazirbas, Alicia Sun, Yonathan Efroni, Mark …]

Careless Whisper: Speech-to-text Hallucination Harms

January 5, 2025

🔬 Research Summary by Allison Koenecke, an Assistant Professor of Information Science at Cornell University. Her research focuses on algorithmic fairness in online services. Overview: OpenAI’s speech-to-text …

FairQueue: Rethinking Prompt Learning for Fair Text-to-Image Generation (NeurIPS 2024)

December 10, 2024

🔬 Research Summary by Christopher Teo, PhD, Singapore University of Technology and Design (SUTD). [Original paper by Christopher T.H. Teo, Milad Abdollahzadeh, Xinda Ma, Ngai-Man Cheung] Note: This paper, …

On Measuring Fairness in Generative Modelling (NeurIPS 2023)

December 10, 2024

🔬 Research Summary by Christopher Teo, PhD, Singapore University of Technology and Design (SUTD). [Original paper by Christopher T.H. Teo, Milad Abdollahzadeh, and Ngai-Man Cheung] Note: This paper, On …

Generative AI in Writing Research Papers: A New Type of Algorithmic Bias and Uncertainty in Scholarly Work

February 6, 2024

🔬 Research Summary by Rishab Jain, a neuroscience & AI researcher at Massachusetts General Hospital and a student at Harvard College. [Original paper by Rishab Jain and Aditya Jain] Overview: The …




About Us


Founded in 2018, the Montreal AI Ethics Institute (MAIEI) is an international non-profit organization equipping citizens concerned about artificial intelligence and its impact on society to take action.

