Montreal AI Ethics Institute

Democratizing AI ethics literacy

Core Principles of Responsible AI

LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI’s ChatGPT Plugins

December 7, 2023

🔬 Research Summary by Umar Iqbal, an assistant professor at Washington University in St. Louis researching computer security and privacy. [Original paper by Umar Iqbal (Washington University in St. Louis), … [Read more...] about LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI’s ChatGPT Plugins

Robust Distortion-free Watermarks for Language Models

December 6, 2023

🔬 Research Summary by Rohith Kuditipudi, a third-year Ph.D. student at Stanford University advised by John Duchi and Percy Liang. [Original paper by Rohith Kuditipudi, John Thickstun, Tatsunori Hashimoto, and … [Read more...] about Robust Distortion-free Watermarks for Language Models

Bias Propagation in Federated Learning

December 6, 2023

🔬 Research Summary by Hongyan Chang, a sixth-year Ph.D. student at the National University of Singapore who focuses on algorithmic fairness and privacy, particularly their intersection, and is also invested in advancing … [Read more...] about Bias Propagation in Federated Learning

GenAI Against Humanity: Nefarious Applications of Generative Artificial Intelligence and Large Language Models

December 6, 2023

🔬 Research Summary by Emilio Ferrara, a professor in the Thomas Lord Department of Computer Science at the University of Southern California. [Original paper by Emilio Ferrara] Overview: This paper delves … [Read more...] about GenAI Against Humanity: Nefarious Applications of Generative Artificial Intelligence and Large Language Models

A Case for AI Safety via Law

December 6, 2023

🔬 Research Summary by Jeff Johnston, an independent researcher working on envisioning positive futures, AI safety and alignment via law, and Piaget-inspired constructivist approaches to artificial general … [Read more...] about A Case for AI Safety via Law
