Montreal AI Ethics Institute

Democratizing AI ethics literacy

  • Articles
    • Public Policy
    • Privacy & Security
    • Human Rights
      • Ethics
      • JEDI (Justice, Equity, Diversity, Inclusion)
    • Climate
    • Design
      • Emerging Technology
    • Application & Adoption
      • Health
      • Education
      • Government
        • Military
        • Public Works
      • Labour
    • Arts & Culture
      • Film & TV
      • Music
      • Pop Culture
      • Digital Art
  • Columns
    • AI Policy Corner
    • Recess
  • The AI Ethics Brief
  • AI Literacy
    • Research Summaries
    • AI Ethics Living Dictionary
    • Learning Community
  • The State of AI Ethics Report
    • Volume 7 (November 2025)
    • Volume 6 (February 2022)
    • Volume 5 (July 2021)
    • Volume 4 (April 2021)
    • Volume 3 (January 2021)
    • Volume 2 (October 2020)
    • Volume 1 (June 2020)
  • About
    • Our Contributions Policy
    • Our Open Access Policy
    • Contact
    • Donate

Articles

From Sparse to Dense: GPT-4 Summarization with Chain of Density Prompting

December 14, 2023

🔬 Research Summary by Griffin Adams, a final year NLP PhD student at Columbia University under Noémie Elhadad and Kathleen McKeown, who will be starting as the Head of Clinical NLP for Stability AI in …

Does diversity really go well with Large Language Models?

December 9, 2023

✍️ Column by Sun Gyoo Kang, Lawyer. Disclaimer: The views expressed in this article are solely my own and do not reflect my employer's opinions, beliefs, or positions. Any opinions or information in this article …

Measuring Value Understanding in Language Models through Discriminator-Critique Gap

December 9, 2023

🔬 Research Summary by Zhaowei Zhang, a Ph.D. student at Peking University, researching Intent Alignment and Multi-Agent Systems for building a trustworthy and social AI system. [Original paper by Zhaowei Zhang, …

Open and Linked Data Model for Carbon Footprint Scenarios

December 7, 2023

🔬 Research Summary by Boris Ruf, an AI researcher at AXA, focusing on algorithmic fairness and digital sustainability. [Original paper by Boris Ruf and Marcin Detyniecki] Overview: Measuring the carbon …

A Sequentially Fair Mechanism for Multiple Sensitive Attributes

December 7, 2023

🔬 Research Summary by François Hu & Philipp Ratz. François Hu is a postdoctoral researcher in statistical learning at UdeM in Montreal. Philipp Ratz is a PhD student at UQAM in Montreal. [Original …


Spotlight

[Image: students in graduation gowns with bright blue hoods walk toward a large classical building whose upper façade is overlaid with text from classic literature (the humanities) and whose lower half is embossed with mathematical formulas (the sciences), its pixelated middle pulled apart by two stone hands while scholars in formal attire gather on the entrance steps.]

Tech Futures: Co-opting Research and Education

Agentic AI systems and algorithmic accountability: a new era of e-commerce

ALL IN Conference 2025: Four Key Takeaways from Montreal

Beyond Dependency: The Hidden Risk of Social Comparison in Chatbot Companionship

AI Policy Corner: Restriction vs. Regulation: Comparing State Approaches to AI Mental Health Legislation

Partners

  • U.S. Artificial Intelligence Safety Institute Consortium (AISIC) at NIST

  • Partnership on AI

  • The LF AI & Data Foundation

  • The AI Alliance

About Us


Founded in 2018, the Montreal AI Ethics Institute (MAIEI) is an international non-profit organization equipping citizens concerned about artificial intelligence and its impact on society to take action.

Contact

Donate


  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.
  • Learn more about our open access policy here.
