Here are the major long-form works that we’ve published as an institute, from most recent to least recent.
Ramya Srinivasan, Jonas Schuett, Jimmy Huang, Robert de Neufville, Natalie Klym, Andrea Pedeferri, Andrea Owe, Nga Than, Khoa Lam, Angshuman Kaushik, Avantika Bhandari, Sarah P. Grant, Anne Boily, Philippe Dambly, Axel Beelen, Laird Gallaghar, Ravit Dotan, Sean McGregor, and Azfar Adib.
The Learning Community cohort was convened by MAIEI in Winter 2021 to work through and discuss important research issues in the field of AI ethics through a multidisciplinary lens. Supported by facilitators from the MAIEI staff, the community came together to vigorously debate and explore the nuances of issues like bias, privacy, disinformation, and accountability, examining them in particular from the perspectives of industry, civil society, academia, and government. The chapters titled “Design and Techno-isolationism”, “Facebook and the Digital Divide: Perspectives from Myanmar, Mexico, and India”, “Future of Work”, and “Media & Communications & Ethical Foresight” will hopefully provide you with novel lenses for exploring this domain beyond the usual tropes covered in AI ethics.
To ensure that the science and technology of AI is developed in a humane manner, we must develop research publication norms that are informed by our growing understanding of AI’s potential threats and use cases. To examine this challenge and find solutions, the Montreal AI Ethics Institute (MAIEI) collaborated with the Partnership on AI in May 2020 to host two public consultation meetups. These meetups examined potential publication norms for responsible AI, with the goal of creating a clear set of recommendations and ways forward for publishers.
This explainer was originally written in response to colleagues’ requests to know more about temporal bias, especially as it relates to AI ethics. It covers how humans understand time, as well as time preferences, present-day preference, confidence changes, planning fallacies, and hindsight bias.
To encourage social scientists, in particular anthropologists, to play a part in orienting the future of AI, we created the Short Anthropological Guide to Ethical AI. This guide serves as an introduction to the field of AI ethics and offers new avenues for research by social science practitioners. By looking beyond the algorithm and turning to the humans behind it, we can start to critically examine the broader social, economic, and political forces at play and ensure that innovation does not come at the cost of harming lives.
IP Protection for AI-Generated and AI-Assisted Works.
Based on insights from the Montreal AI Ethics Institute (MAIEI) staff and supplemented by workshop contributions from the AI Ethics community convened by MAIEI on July 5, 2020.
Automated systems for validating the privacy and security of models need to be developed. These would help lower the burden of implementing hand-offs from those building a model to those deploying it, and increase the ubiquity of their adoption.
This work proposes an ESG-inspired framework combining socio-technical measures to build eco-socially responsible AI systems. The framework has four pillars: compute-efficient machine learning, federated learning, data sovereignty, and a LEED-esque certificate.
The Electronic Frontier Foundation publicly called for comments on expanding the Santa Clara Principles on Transparency and Accountability (SCP). The Montreal AI Ethics Institute (MAIEI) responded to this call by drafting a set of recommendations based on insights and analysis by the MAIEI staff, supplemented by workshop contributions from the AI Ethics community.
This pulse-check for the state of discourse, research, and development is geared towards researchers and practitioners alike who are making decisions on behalf of their organizations as they consider the societal impacts of AI-enabled solutions. We cover a wide set of areas in this report, spanning Agency and Responsibility, Security and Risk, Disinformation, Jobs and Labor, the Future of AI Ethics, and more.
In February 2020, the European Commission (EC) published a white paper outlining the EC’s policy options for the promotion and adoption of artificial intelligence (AI) in the European Union. We reviewed this paper and published a response addressing the safety and liability implications of AI, the internet of things (IoT), and robotics. Our analysis was supplemented by insights gained from two public workshops we hosted on this topic, on May 27 and June 3.
This article provides a critical response to Mila’s COVI White Paper. COVI is a proposal for a contact tracing app to help fight COVID-19 in Canada. Specifically, the article discusses: the extent to which diversity has been considered in the design of the app; assumptions surrounding users’ interaction with the app and the app’s utility; and unanswered questions surrounding transparency, accountability, and security.
Based on insights and analysis by the Montreal AI Ethics Institute (MAIEI) staff on the policy document from the Scottish Government, supplemented by workshop contributions from the AI Ethics community convened by MAIEI on May 4, 2020.
In February 2020, the Montreal AI Ethics Institute (MAIEI) was invited by the Office of the Privacy Commissioner of Canada (OPCC) to provide comments, both at a closed roundtable and in writing, on the OPCC’s consultation proposal for amendments relating to artificial intelligence (AI) to Canada’s privacy legislation, the Personal Information Protection and Electronic Documents Act (PIPEDA).
Our response to the white paper on Responsible Innovation in AI published by the Australian Human Rights Commission in partnership with the World Economic Forum. To foster multi-stakeholder dialogue, we recommend that public consultation and engagement be a key component, as they help surface interdisciplinary approaches, often leveraging the first-hand, lived experiences that lead to more practical solutions.