Montreal AI Ethics Institute


Democratizing AI ethics literacy


Does diversity really go well with Large Language Models?

December 9, 2023

✍️ Column by Sun Gyoo Kang, Lawyer. Disclaimer: The views expressed in this article are solely my own and do not reflect my employer's opinions, beliefs, or positions. Any opinions or information in this article … [Read more...]

Measuring Value Understanding in Language Models through Discriminator-Critique Gap

December 9, 2023

🔬 Research Summary by Zhaowei Zhang, a Ph.D. student at Peking University, researching Intent Alignment and Multi-Agent Systems for building a trustworthy and social AI system. [Original paper by Zhaowei Zhang, … [Read more...]

Open and Linked Data Model for Carbon Footprint Scenarios

December 7, 2023

🔬 Research Summary by Boris Ruf, an AI researcher at AXA, focusing on algorithmic fairness and digital sustainability. [Original paper by Boris Ruf and Marcin Detyniecki] Overview: Measuring the carbon … [Read more...]

A Sequentially Fair Mechanism for Multiple Sensitive Attributes

December 7, 2023

🔬 Research Summary by François Hu & Philipp Ratz. François Hu is a postdoctoral researcher in statistical learning at UdeM in Montreal. Philipp Ratz is a PhD student at UQAM in Montreal. [Original … [Read more...]

Towards User-Guided Actionable Recourse

December 7, 2023

🔬 Research Summary by Jayanth Yetukuri, a final-year Ph.D. student at UCSC advised by Professor Yang Liu; his research focuses on improving the trustworthiness of Machine Learning models. [Original paper … [Read more...]



About Us


Founded in 2018, the Montreal AI Ethics Institute (MAIEI) is an international non-profit organization equipping citizens concerned about artificial intelligence and its impact on society to take action.



  • © 2024 Montreal AI Ethics Institute. All rights reserved.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.
  • Learn more about our open access policy here.
