The Montreal AI Ethics Institute is an international, non-profit research institute dedicated to defining humanity’s place in a world increasingly characterized and driven by algorithms. We do this by creating tangible, applied technical and policy research on the ethical, safe, and inclusive development of AI.
Our goal is to build public competence and understanding of the societal impacts of AI and to equip and empower diverse stakeholders to actively engage in the shaping of technical and policy measures in the development and deployment of AI systems.
We are a digital-first civil society organization that brings together a diversity of individuals from different disciplines, areas of expertise, and geographic regions. We maintain an open access policy: all our content is licensed under a Creative Commons Attribution 4.0 International License unless otherwise noted. Learn more about our open access policy here.

Our Values
Expertise and Experience
We bridge the gap between technical and policy expertise and real-world impact.
Biased Towards Action
While numerous organizations have been working on sets of principles, frameworks, and other theoretical guidelines, the missing piece now coming into view is bridging the gap between proposed technical and policy measures and operationalizing them. We are firmly biased towards action, and our work with partner organizations has created change that is not only positive but also sustainable and transformative.
Frameworks Into Practice
Our team combines deep technical, policy, and design expertise with years of experience working with organizations from across the world to put these frameworks into practice. With our global network of interdisciplinary researchers and practitioners, coupled with an in-depth, all-encompassing view of the cutting-edge responsible AI landscape, we are uniquely positioned to translate research into applied measures quickly.
Responsible AI as the Norm
Organizational change starts with people, and people require knowledge presented in bite-sized, accessible chunks. Our experience delivering content that meets these criteria has a proven track record of success. We leverage our combined expertise to create bespoke experiences that equip and empower individuals with the skills to confidently lead their organizations into a future where responsible AI becomes the norm rather than the exception.
Empowering Local Champions
We are creating local champions in the form of informed and engaged citizens who are able to take this knowledge of applied AI ethics to their communities and organizations, thus scaling the impact that we have as a single organization. As an example, a former research intern who worked with MAIEI in 2019 is now the Head of AI Ethics Policy for the Joint Artificial Intelligence Center, Department of Defense, U.S. Government.
Truly Inclusive, Global Participation
Our programs are truly inclusive and eliminate barriers for people from all parts of the world, including the Global South, who are typically unable to access similar programs because of financial constraints, visa troubles, family commitments, and so on. By being digital-first, we are able to bring together perspectives that are otherwise inaccessible in settings where the emphasis is often on formal credentials and traditional backgrounds.
Open Source and Open Access
Open source and open access models are embedded into everything we do. This includes deeply researched content for governments and other public entities, made available to all researchers and practitioners so that they can build on our work rather than having to reinvent the wheel.

An inclusive, award-winning AI Ethics community
Since July 2017, we have grown our community to more than 4,500 members and have hosted over 70 meetups. These AI Ethics Meetups enable civic engagement to enhance policy development on the ethical, safe, and inclusive development of AI.
Our members come from diverse backgrounds such as computer science, law, sociology, business, and government policy. We meet in Montreal every 2-3 weeks, hosted by many different organizations and community partners, and we keep the conversation alive between meetups on our public Slack channel.
Our partners, spanning academia, government, corporations, and community organizations, are essential to making the work of the institute a success. They generously share their space for our community-building and civic engagement activities while also feeding the virtuous cycle by drawing their own members deeper into the discussions.
Our AI Ethics Meetups, held online via Zoom, allow our global community to provide feedback and recommendations on public documents like the Montreal Declaration for Responsible AI. They also provide insights for the active research projects we are undertaking with academic collaborators around the world.

Projects
Our global community provides insights and a broad multicultural understanding of the societal impacts of AI, contributing to the following research projects and initiatives:
Public Policy Consultations (in person and online, national and global in scale)
- Australian Human Rights Commission
- European Commission
- G7 Multi Stakeholder Conference on Artificial Intelligence
- Government of Scotland
- Office of the Privacy Commissioner of Canada (OPCC)
- Partnership on AI
- Prime Minister’s Office of New Zealand
- U.S. Department of State
- World Economic Forum
Learning Communities (via Slack and Zoom)
- Complex Systems Theory
- Disinformation
- Labor Impacts of AI
- Machine Learning Security
- Privacy
Research Projects
- Comprehensiveness of Archives: A modern AI-enabled approach to building comprehensive shared cultural heritage
- Exploring the uncanny valley of climate change misinformation
- Folding IN the margins: Building inclusive AI systems using indigenous data
- SECure – Social and Environmental Certificate for AI systems
- Participatory Design as a mechanism for building trustworthy AI
- Participatory Design to build better contact- and proximity-tracing apps
- Trust: the critical pillar of society
and others.

Issues we’re interested in
Social inclusion in AI: Technical and policy approaches to increase social inclusion across the entire AI pipeline – from design and conception, through data collection and use, to the end-of-life management of a project.
Mission-driven AI: How non-profits, social enterprises and NGOs can leverage machine learning solutions to help them stretch their donation dollars further by scaling the work that they do and increasing efficiency of their operations.
AI Ethics in Medicine: Informed consent and its ethical implications in the field of medicine, especially when AI-enabled solutions are used in diagnosis and clinical trials.
Algorithms in Politics: A look at how propaganda is automated in the political sphere and what measures and practices we can put in place to prevent some of the issues that arise as society is pushed towards greater divisiveness.
AI and Business: How AI is changing the way we do business.
AI and Law: Legal and ethical implications with the increasing use of AI in the context of emerging privacy and data security laws like the GDPR.
The Malicious Use of AI: Outlining the landscape of potential security threats from malicious uses of AI technologies, and proposing ways to better forecast, prevent, and mitigate these threats.
Impact of China on AI: China’s phenomenal rise in developing and deploying AI is quickly becoming both an inspirational model for how a national strategy can be effectively developed to dominate a field and a cause for concern in terms of ethics.
Embedding values into machines: How best to embed values into machines, and what the implications of doing so are.
Algorithmic Discrimination: Impacts of using data-driven approaches and algorithms in the workplace drawing upon economic and social science theory.
Algorithmic Impact Assessments: A way to evaluate how algorithmic systems and society interact, and what kind of contract could be set up between the two parties to allow for a more beneficial interaction.
Data Privacy and Access Controls: What is the definition of control regarding personal data? How can we redefine data access to honor the individual?
Key Areas of Focus
- Privacy – bridging the gap between the legal and technical domains to advise on privacy legislation amendments and assessing privacy implications of new technological solutions like contact- and proximity-tracing applications.
- Disinformation – working on technical and design interventions to combat the spread of misinformation and disinformation related to climate change for the most susceptible Canadians. Surfacing language- and text-based signals to the UX and studying perceptual changes in users.
- Labor Impacts of AI – studying the on-the-ground impacts of automation on the lives of blue- and white-collar workers. Designing iconic demonstrations to galvanize both workers and policymakers to make informed decisions that empower workers to transition into the future of work.
- Machine Learning Security – increasing awareness and education for both cybersecurity and machine learning practitioners on the emergent concerns in machine learning security to protect the incoming and future generations of technological solutions using AI.
- Environmental Impacts of AI – given that AI-enabled solutions are compute- and data-heavy, we are consolidating fragmented reporting standards to help developers and users make informed choices in picking products and services that square with their environmental norms and values.
- Indigenous Data Rights – respecting Traditional Knowledge and data sovereignty requirements, designing a framework to meaningfully incorporate indigenous data into AI solutions that are truly inclusive.
Our partners and collaborators

Press (see full list here)
