Montreal AI Ethics Institute

Democratizing AI ethics literacy


Montreal AI Ethics Institute Hosts a TechAIDE Café Session

July 12, 2020


On July 7th, the Montreal AI Ethics Institute had the privilege of hosting a TechAIDE Café session. The aim of the café is to raise funds for Centraide Montréal, a philanthropic organization that raises money to help fight poverty, homelessness, and social exclusion. Funds are raised when participants pledge, either on Twitter or by email to Centraide, that they will “buy a coffee,” with the size of the coffee corresponding to the amount donated. Centraide then reaches out to those who made a pledge and invites them to donate that amount.

As baristas, members of our team were there to answer questions from the public and discuss issues surrounding the ethics of AI. Our first topic of discussion was the recent announcements made by tech companies that they would stop researching and developing facial recognition (in the case of IBM), or that they would enforce a moratorium on the use of their facial recognition software (in the case of Amazon). We had a nuanced discussion, highlighting some interests and motivations behind these decisions: genuine concern for the harms caused by facial recognition, virtue-signalling, and even seizing the opportunity to abandon unprofitable efforts in facial recognition research and development while enhancing the company’s reputation. 

We then moved on to the very timely topic of contact-tracing applications for COVID-19, addressing concerns surrounding data privacy and the accuracy of tracking technology. We discussed what a government-sanctioned contact-tracing app could mean for future government actions: we must remain vigilant to ensure that future actions taken by our governments are sound and justified, especially when they ask individuals to provide data, as is the case for contact-tracing applications. If you’re interested in an in-depth analysis of contact-tracing apps, take a look at the Montreal AI Ethics Institute’s response to the COVI contact-tracing application.

Regarding this matter, one participant brought up Canada’s obligations under international legal agreements, noting that Canada’s solution must both address the problem at hand and be proportional to it. This requirement seems important, yet it is rarely mentioned in the context of COVID-19 and contact-tracing applications.

Participants also expressed concerns about governments’ ability to keep our data safe from malicious individuals or groups, as governments do not tend to have top-tier cybersecurity infrastructure. This, combined with the copious amounts of data they hold, makes governments a prime target for data theft.

Our discussion later pivoted to the lack of informed consent around consumer goods like mobile phones and smart speakers, and the data these technologies collect. While the makers of these products do disclose the types of data they collect, they do so in long, difficult-to-read Terms of Service agreements. This undermines informed consent: the expectation that each person can read and understand such documents, and has the time to read every one of them, is unrealistic.

We concluded our time with the participants of this edition of TechAIDE Café by circling back to the topic of facial recognition technology. One participant wondered about the privacy and security implications of using this technology to improve students’ experience in the classroom. The MAIEI team highlighted concerns that facial recognition technology would provide only minimal benefits in this context when measured against the risks, especially considering the notoriously weak cybersecurity systems of schools and the fact that students are most often minors. As a group, we ultimately concluded that there are more effective ways of improving students’ experience in the classroom than turning to facial recognition technology, stressing the importance of avoiding the trap of believing that technology is the best fix for every problem.

The Montreal AI Ethics Institute would like to thank TechAIDE Café, Nabil Beitinjaneh, and the participants who joined us for this session. You can find out more about TechAIDE Café here.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
