Montreal AI Ethics Institute

Democratizing AI ethics literacy

Research Summaries

Towards Responsible AI in the Era of ChatGPT: A Reference Architecture for Designing Foundation Model based AI Systems

May 20, 2023

🔬 Research Summary by Dr. Qinghua Lu, the team leader of the responsible AI science team at CSIRO's Data61. [Original paper by Qinghua Lu, Liming Zhu, Xiwei Xu, Zhenchang Xing, Jon Whittle] Overview: … [Read more...] about Towards Responsible AI in the Era of ChatGPT: A Reference Architecture for Designing Foundation Model based AI Systems

Democratising AI: Multiple Meanings, Goals, and Methods

May 9, 2023

🔬 Research Summary by Elizabeth Seger, PhD, a researcher at the Centre for the Governance of AI (GovAI) in Oxford, UK, investigating beneficial AI model-sharing norms and practices. [Original paper by Elizabeth … [Read more...] about Democratising AI: Multiple Meanings, Goals, and Methods

System Safety and Artificial Intelligence

December 6, 2022

🔬 Research Summary by Roel Dobbe, an Assistant Professor working at the intersection of engineering, design and governance of data-driven and algorithmic control and decision-making systems. [Original paper by … [Read more...] about System Safety and Artificial Intelligence

A Prompt Array Keeps the Bias Away: Debiasing Vision-Language Models with Adversarial Learning

December 6, 2022

🔬 Research Summary by Siobhan Mackenzie Hall, PhD student in the Oxford Neural Interfacing group at the University of Oxford. Siobhan is also a member of the Oxford Artificial Intelligence Society, along with the … [Read more...] about A Prompt Array Keeps the Bias Away: Debiasing Vision-Language Models with Adversarial Learning

A Hazard Analysis Framework for Code Synthesis Large Language Models

December 6, 2022

🔬 Research Summary by Heidy Khlaaf, an Engineering Director at Trail of Bits specializing in the evaluation, specification, and verification of complex or autonomous software implementations in safety-critical systems, … [Read more...] about A Hazard Analysis Framework for Code Synthesis Large Language Models




About Us


Founded in 2018, the Montreal AI Ethics Institute (MAIEI) is an international non-profit organization equipping citizens concerned about artificial intelligence and its impact on society to take action.



  • © MONTREAL AI ETHICS INSTITUTE. All rights reserved 2024.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.
  • Learn more about our open access policy here.
