Montreal AI Ethics Institute

Democratizing AI ethics literacy

  • Articles
    • Public Policy
    • Privacy & Security
    • Human Rights
      • Ethics
      • JEDI (Justice, Equity, Diversity, Inclusion)
    • Climate
    • Design
      • Emerging Technology
    • Application & Adoption
      • Health
      • Education
      • Government
        • Military
        • Public Works
      • Labour
    • Arts & Culture
      • Film & TV
      • Music
      • Pop Culture
      • Digital Art
  • Columns
    • AI Policy Corner
    • Recess
  • The AI Ethics Brief
  • AI Literacy
    • Research Summaries
    • AI Ethics Living Dictionary
    • Learning Community
  • The State of AI Ethics Report
    • Volume 6 (February 2022)
    • Volume 5 (July 2021)
    • Volume 4 (April 2021)
    • Volume 3 (Jan 2021)
    • Volume 2 (Oct 2020)
    • Volume 1 (June 2020)
  • About
    • Our Contributions Policy
    • Our Open Access Policy
    • Contact
    • Donate

Research Summaries

“It doesn’t tell me anything about how my data is used”: User Perceptions of Data Collection Purposes

February 5, 2024

🔬 Research Summary by Lin Kyi, a Computer Science Ph.D. student at the Max Planck Institute for Security and Privacy focusing on online consent and the ethical collection of data. [Original paper by Abraham … [Read more...]

How Prevalent is Gender Bias in ChatGPT? – Exploring German and English ChatGPT Responses

February 1, 2024

🔬 Research Summary by Stefanie Urchs, a Computer Science Ph.D. student at the Hochschule München University of Applied Sciences, deeply interested in interdisciplinary approaches to natural language … [Read more...]

Can Large Language Models Provide Security & Privacy Advice? Measuring the Ability of LLMs to Refute Misconceptions

January 18, 2024

🔬 Research Summary by Arjun Arunasalam, a 4th-year Computer Science Ph.D. student at Purdue University researching security, privacy, and trust on online platforms from a human-centered lens. [Original paper by … [Read more...]

LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI’s ChatGPT Plugins

December 7, 2023

🔬 Research Summary by Umar Iqbal, an Assistant Professor at Washington University in St. Louis, researching computer security and privacy. [Original paper by Umar Iqbal (Washington University in St. Louis), … [Read more...]

Towards an Understanding of Developers’ Perceptions of Transparency in Software Development: A Preliminary Study

December 3, 2023

🔬 Research Summary by Humphrey O. Obie, an Adjunct Research Fellow with the HumaniSE Lab at Monash University; his research is at the intersection of human values and software and AI systems. [Original paper by … [Read more...]



Spotlight

ALL IN Conference 2025: Four Key Takeaways from Montreal

Beyond Dependency: The Hidden Risk of Social Comparison in Chatbot Companionship

AI Policy Corner: Restriction vs. Regulation: Comparing State Approaches to AI Mental Health Legislation

Beyond Consultation: Building Inclusive AI Governance for Canada’s Democratic Future

AI Policy Corner: U.S. Executive Order on Advancing AI Education for American Youth

Partners

  • U.S. Artificial Intelligence Safety Institute Consortium (AISIC) at NIST

  • Partnership on AI

  • The LF AI & Data Foundation

  • The AI Alliance


About Us


Founded in 2018, the Montreal AI Ethics Institute (MAIEI) is an international non-profit organization equipping citizens concerned about artificial intelligence and its impact on society to take action.



  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.
  • Learn more about our open access policy here.

    Save hours of work and stay on top of Responsible AI research and reporting with our bi-weekly email newsletter.