Montreal AI Ethics Institute

Democratizing AI ethics literacy

  • Articles
    • Public Policy
    • Privacy & Security
    • Human Rights
      • Ethics
      • JEDI (Justice, Equity, Diversity, Inclusion)
    • Climate
    • Design
      • Emerging Technology
    • Application & Adoption
      • Health
      • Education
      • Government
        • Military
        • Public Works
      • Labour
    • Arts & Culture
      • Film & TV
      • Music
      • Pop Culture
      • Digital Art
  • Columns
    • AI Policy Corner
    • Recess
    • Tech Futures
  • The AI Ethics Brief
  • AI Literacy
    • Research Summaries
    • AI Ethics Living Dictionary
    • Learning Community
  • The State of AI Ethics Report
    • Volume 7 (November 2025)
    • Volume 6 (February 2022)
    • Volume 5 (July 2021)
    • Volume 4 (April 2021)
    • Volume 3 (January 2021)
    • Volume 2 (October 2020)
    • Volume 1 (June 2020)
  • About
    • Our Contributions Policy
    • Our Open Access Policy
    • Contact
    • Donate

Research Summaries

“It doesn’t tell me anything about how my data is used”: User Perceptions of Data Collection Purposes

February 5, 2024

🔬 Research Summary by Lin Kyi, a Computer Science Ph.D. student at the Max Planck Institute for Security and Privacy focusing on online consent and the ethical collection of data. [Original paper by Abraham … [Read more...] about “It doesn’t tell me anything about how my data is used”: User Perceptions of Data Collection Purposes

How Prevalent is Gender Bias in ChatGPT? – Exploring German and English ChatGPT Responses

February 1, 2024

🔬 Research Summary by Stefanie Urchs, a Computer Science Ph.D. student at the Hochschule München University of Applied Sciences, deeply interested in interdisciplinary approaches to natural language … [Read more...] about How Prevalent is Gender Bias in ChatGPT? – Exploring German and English ChatGPT Responses

Can Large Language Models Provide Security & Privacy Advice? Measuring the Ability of LLMs to Refute Misconceptions

January 18, 2024

🔬 Research Summary by Arjun Arunasalam, a 4th-year Computer Science Ph.D. student at Purdue University researching security, privacy, and trust on online platforms from a human-centered lens. [Original paper by … [Read more...] about Can Large Language Models Provide Security & Privacy Advice? Measuring the Ability of LLMs to Refute Misconceptions

LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI’s ChatGPT Plugins

December 7, 2023

🔬 Research Summary by Umar Iqbal, an Assistant Professor at Washington University in St. Louis, researching computer security and privacy. [Original paper by Umar Iqbal (Washington University in St. Louis), … [Read more...] about LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI’s ChatGPT Plugins

Towards an Understanding of Developers’ Perceptions of Transparency in Software Development: A Preliminary Study

December 3, 2023

🔬 Research Summary by Humphrey O. Obie, an Adjunct Research Fellow with the HumaniSE Lab at Monash University; his research is at the intersection of human values and software and AI systems. [Original paper by … [Read more...] about Towards an Understanding of Developers’ Perceptions of Transparency in Software Development: A Preliminary Study

« Previous Page
Next Page »

Primary Sidebar


Spotlight

[Illustration of a coral reef ecosystem]

Tech Futures: Diversity of Thought and Experience: The UN’s Scientific Panel on AI

[Illustration: a large white classical building, faint text from classic literature overlaying its upper half (the humanities) and mathematical formulas embossed on its lower half (the sciences), with a heavily pixelated middle layer. Two stone, statue-like hands stretch the building apart from the left while scholars in formal attire gather at the entrance; in the foreground, eight students in graduation gowns with bright blue hoods, seen from behind, walk toward the building.]

Tech Futures: Co-opting Research and Education

Agentic AI systems and algorithmic accountability: a new era of e-commerce

ALL IN Conference 2025: Four Key Takeaways from Montreal

Beyond Dependency: The Hidden Risk of Social Comparison in Chatbot Companionship

Partners

  • U.S. Artificial Intelligence Safety Institute Consortium (AISIC) at NIST

  • Partnership on AI

  • The LF AI & Data Foundation

  • The AI Alliance

Footer


Articles

Columns

AI Literacy

The State of AI Ethics Report


 

About Us


Founded in 2018, the Montreal AI Ethics Institute (MAIEI) is an international non-profit organization equipping citizens concerned about artificial intelligence and its impact on society to take action.

Contact

Donate


  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.
  • Learn more about our open access policy here.

    Save hours of work and stay on top of Responsible AI research and reporting with our bi-weekly email newsletter.