Montreal AI Ethics Institute

Democratizing AI ethics literacy

Blog

From Sparse to Dense: GPT-4 Summarization with Chain of Density Prompting

December 14, 2023

🔬 Research Summary by Griffin Adams, a final-year NLP PhD student at Columbia University advised by Noémie Elhadad and Kathleen McKeown, who will be starting as the Head of Clinical NLP for Stability AI in …

Does diversity really go well with Large Language Models?

December 9, 2023

✍️ Column by Sun Gyoo Kang, lawyer. Disclaimer: The views expressed in this article are solely my own and do not reflect my employer's opinions, beliefs, or positions. Any opinions or information in this article …

Measuring Value Understanding in Language Models through Discriminator-Critique Gap

December 9, 2023

🔬 Research Summary by Zhaowei Zhang, a Ph.D. student at Peking University researching Intent Alignment and Multi-Agent Systems for building trustworthy and social AI systems. [Original paper by Zhaowei Zhang, …]

Open and Linked Data Model for Carbon Footprint Scenarios

December 7, 2023

🔬 Research Summary by Boris Ruf, an AI researcher at AXA focusing on algorithmic fairness and digital sustainability. [Original paper by Boris Ruf and Marcin Detyniecki] Overview: Measuring the carbon …

A Sequentially Fair Mechanism for Multiple Sensitive Attributes

December 7, 2023

🔬 Research Summary by François Hu & Philipp Ratz. François Hu is a postdoctoral researcher in statistical learning at UdeM in Montreal. Philipp Ratz is a PhD student at UQAM in Montreal. [Original paper by …]





About Us


Founded in 2018, the Montreal AI Ethics Institute (MAIEI) is an international non-profit organization equipping citizens concerned about artificial intelligence and its impact on society to take action.



  • © MONTREAL AI ETHICS INSTITUTE. All rights reserved 2024.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.
