Montreal AI Ethics Institute

Democratizing AI ethics literacy

Publications

Here are the major long-form works that we’ve published as an institute, from most to least recent.

The State of AI Ethics Report (Volume 6)

The State of AI Ethics Report (Volume 6) is our most comprehensive report yet, spanning nearly 300 pages across nine chapters: (1) What We’re Thinking, (2) Analysis of the AI Ecosystem, (3) Privacy, (4) Bias, (5) Social Media and Problematic Information, (6) AI Design and Governance, (7) Laws and Regulations, (8) Trends, and (9) Outside the Boxes. Our goal with these chapters is to provide an in-depth (though by no means exhaustive, given the richness of each subdomain) analysis of each area, along with breadth of coverage for those looking to save hundreds of hours in parsing the latest research and reporting in the domain. We also have special contributions from Idoia Salazar, Michael Klenk, and Kathy Baxter, who provide chapter introductions. We welcomed contributions from our network of collaborators as well, and our thanks go out to
Ramya Srinivasan, Jonas Schuett, Jimmy Huang, Robert de Neufville, Natalie Klym, Andrea Pedeferri, Andrea Owe, Nga Than, Khoa Lam, Angshuman Kaushik, Avantika Bhandari, Sarah P. Grant, Anne Boily, Philippe Dambly, Axel Beelen, Laird Gallaghar, Ravit Dotan, Sean McGregor, and Azfar Adib.

The MAIEI Learning Community Report (September 2021)

This report is the work of the Learning Community cohort that MAIEI convened in Winter 2021 to work through and discuss important research issues in the field of AI ethics from a multidisciplinary lens. Supported by facilitators from the MAIEI staff, the community came together to vigorously debate and explore the nuances of issues like bias, privacy, disinformation, and accountability, examining them in particular from the perspectives of industry, civil society, academia, and government. The chapters titled “Design and Techno-isolationism”, “Facebook and the Digital Divide: Perspectives from Myanmar, Mexico, and India”, “Future of Work”, and “Media & Communications & Ethical Foresight” will hopefully provide you with novel lenses for exploring this field beyond the usual tropes covered in AI ethics.

The State of AI Ethics Report (Volume 5)

The State of AI Ethics Report (Volume 5) captures the most relevant developments in AI ethics since the first quarter of 2021. We’ve distilled the research & reporting around 3 key themes: (1) Creativity and AI, (2) Environment and AI, and (3) Geopolitics and AI. We also have our evergreen section, Outside the Boxes, which captures insights across an eclectic mix of topic areas for those looking for a broad horizon of domains where AI has had a societal impact, and we bring back the community spotlights to showcase meaningful work being done by scholars and activists from around the world. This edition opens with a section that has been much requested by you, our community, titled “What We’re Thinking”, which offers insights into emergent trends and gaps we’ve noticed in existing coverage of AI ethics. We also have a special contribution titled “The Critical Race Quantum Computer: A Tool for Liberation” by Michael Lipset, Jessica Brown, Michael Crawford, Kwaku Aning, & Katlyn Turner, with an intriguing framing of how we think about race and technology.

The State of AI Ethics Report (Volume 4)

We’ve distilled the research & reporting around 4 key themes: Ethical AI, Fairness & Justice, Humans & Tech, and Privacy. This edition opens with AI and the Face: A Historian’s View — a long-form piece by Edward Higgs (Professor of History at the University of Essex) about the unscientific history of facial analysis, and how AI might be repeating some of those mistakes at scale.

The State of AI Ethics Report (January 2021)

The report includes exclusive content written by world-class AI Ethics experts. This edition opens with The Abuse and Misogynoir Playbook — a 20-page joint piece by a group of MIT professors & research scientists (Danielle Wood, Katlyn Turner, Catherine D’Ignazio) about the mistreatment of Dr. Timnit Gebru by Google and the broader historical significance around this event.

The State of AI Ethics Report (October 2020)

This report captures the most relevant developments in AI Ethics since July of 2020. Includes exclusive content written by: Danit Gal (Tech Advisor, United Nations), Amba Kak (Director of Global Policy & Programs, NYU’s AI Now Institute), Rumman Chowdhury (Global Lead for Responsible AI, Accenture), and Katya Klinova (AI & Economy Program Lead, Partnership on AI).

Publication Norms for Responsible AI


To ensure that the science and technology of AI are developed in a humane manner, we must develop research publication norms informed by our growing understanding of AI’s potential threats and use cases. To examine this challenge and find solutions, the Montreal AI Ethics Institute (MAIEI) collaborated with the Partnership on AI in May 2020 to host two public consultation meetups. These meetups examined potential publication norms for responsible AI, with the goal of creating a clear set of recommendations and ways forward for publishers.

The Unnoticed Cognitive Bias Secretly Shaping the AI Agenda


This explainer was originally written in response to colleagues’ requests to know more about temporal bias, especially as it relates to AI ethics. It explains how humans understand time, time preferences, present-day preference, confidence changes, planning fallacies, and hindsight bias.

The Short Anthropological Guide to the Study of Ethical AI


To encourage social scientists, in particular anthropologists, to play a part in orienting the future of AI, we created the Short Anthropological Guide to Ethical AI. This guide serves as an introduction to the field of AI ethics and offers new avenues for research by social science practitioners. By looking beyond the algorithm and turning to the humans behind it, we can start to critically examine the broader social, economic and political forces at play and ensure that innovation does not come at the cost of harming lives.

Submission to the World Intellectual Property Organization (WIPO) Conversation on Intellectual Property (IP) and Artificial Intelligence (AI)


This submission focuses on IP protection for AI-generated and AI-assisted works. It is based on insights from the Montreal AI Ethics Institute (MAIEI) staff and supplemented by workshop contributions from the AI Ethics community convened by MAIEI on July 5, 2020.

Green Lighting ML: Confidentiality, Integrity, and Availability of Machine Learning Systems in Deployment


Automated systems for validating the privacy and security of models need to be developed. These would help lower the burden of implementing hand-offs from those building a model to those deploying it, and increase the ubiquity of their adoption.

SECure: A Social and Environmental Certificate for AI Systems


This work proposes an ESG-inspired framework combining socio-technical measures to build eco-socially responsible AI systems. The framework has four pillars: compute-efficient machine learning, federated learning, data sovereignty, and a LEED-esque certificate.

Report prepared for the Santa Clara Principles for Content Moderation


The Electronic Frontier Foundation publicly called for comments on expanding the Santa Clara Principles on Transparency and Accountability (SCP). The Montreal AI Ethics Institute (MAIEI) responded to this call by drafting a set of recommendations based on insights and analysis by the MAIEI staff, supplemented by workshop contributions from the AI Ethics community.

The State of AI Ethics Report (June 2020)


This pulse-check on the state of discourse, research, and development is geared towards researchers and practitioners alike who are making decisions on behalf of their organizations as they consider the societal impacts of AI-enabled solutions. We cover a wide set of areas in this report, spanning Agency and Responsibility, Security and Risk, Disinformation, Jobs and Labor, the Future of AI Ethics, and more.

Response to the European Commission’s white paper on AI (2020)


In February 2020, the European Commission (EC) published a white paper outlining the EC’s policy options for the promotion and adoption of artificial intelligence (AI) in the European Union. We reviewed this paper and published a response covering the safety and liability implications of AI, the internet of things (IoT), and robotics. Our analysis was supplemented by insights from two public workshops we hosted on this topic, on May 27 and June 3.

Response to Mila’s Proposal for a Contact Tracing App


This article provides a critical response to Mila’s COVI White Paper. COVI is a proposal for a contact tracing app to help fight COVID-19 in Canada. Specifically, the article discusses: the extent to which diversity has been considered in the design of the app, assumptions surrounding users’ interaction with the app and the app’s utility, and unanswered questions surrounding transparency, accountability, and security.

Response to Scotland’s AI Strategy


Based on insights and analysis by the Montreal AI Ethics Institute (MAIEI) staff on the Scottish Government’s policy document, supplemented by workshop contributions from the AI Ethics community convened by MAIEI on May 4, 2020.

Response to Office of the Privacy Commissioner of Canada Consultation Proposals pertaining to amendments to PIPEDA relative to AI


In February 2020, the Montreal AI Ethics Institute (MAIEI) was invited by the Office of the Privacy Commissioner of Canada (OPCC) to provide comments, both at a closed roundtable and in writing, on the OPCC’s consultation proposal for amendments relative to artificial intelligence (AI) to Canada’s privacy legislation, the Personal Information Protection and Electronic Documents Act (PIPEDA).

Response to the AHRC and WEF regarding Responsible Innovation in AI


Our response to the white paper on Responsible Innovation in AI that the Australian Human Rights Commission published in partnership with the World Economic Forum. In the context of creating multi-stakeholder dialogue, we recommend that public consultation and engagement be a key component, because it helps to surface interdisciplinary solutions, often leveraging first-hand, lived experiences that lead to more practical outcomes.
