
Montreal AI Ethics Institute Hosts a TechAIDE Café Session

July 12, 2020


On July 7th, the Montreal AI Ethics Institute had the privilege of hosting a TechAIDE Café session. The aim of the café is to raise funds for Centraide Montréal, a philanthropic organization that fights poverty, homelessness, and social exclusion. Funds are raised when participants pledge, either on Twitter or by email to Centraide, that they will “buy a coffee,” with the size of the coffee corresponding to the amount they will donate. Centraide then reaches out to the individuals who made the pledge and invites them to donate that amount.

As baristas, members of our team were there to answer questions from the public and discuss issues surrounding the ethics of AI. Our first topic of discussion was the recent announcements made by tech companies that they would stop researching and developing facial recognition (in the case of IBM), or that they would enforce a moratorium on the use of their facial recognition software (in the case of Amazon). We had a nuanced discussion, highlighting some interests and motivations behind these decisions: genuine concern for the harms caused by facial recognition, virtue-signalling, and even seizing the opportunity to abandon unprofitable efforts in facial recognition research and development while enhancing the company’s reputation. 

We then moved on to the very timely topic of contact-tracing applications for Covid-19, addressing concerns surrounding data privacy and the accuracy of tracking technology. We discussed what a government-sanctioned contact-tracing app could mean for future government actions: we must remain vigilant to ensure that future actions taken by our governments are sound and justified, especially when they ask individuals to provide data, as is the case for contact-tracing applications. If you’re interested in an in-depth analysis of contact-tracing apps, take a look at the Montreal AI Ethics Institute’s response to the COVI contact-tracing application.

On this matter, one participant brought up Canada’s obligations under international legal agreements, noting that any solution Canada adopts must both address the problem at hand and be proportional to it. This point seems important, yet it is rarely raised in discussions of Covid-19 and contact-tracing applications.

Participants also expressed concerns about governments’ ability to keep our data safe from malicious individuals or groups, as governments tend not to have top-tier cybersecurity infrastructure. This, combined with the copious amounts of data they hold, makes governments a prime target for data theft.

Our discussion later pivoted towards the lack of informed consent around consumer goods like mobile phones and smart speakers, and the data these technologies collect. While the makers of these products do disclose the types of data they collect, they do so in long, difficult-to-read Terms of Service agreements. This undermines informed consent: it is unrealistic to expect each person to have the time to read every such document, let alone the ability to fully understand it.

We concluded our time with the participants of this edition of the TechAIDE café by circling back to the topic of facial recognition technology. One participant wondered about the privacy and security implications of using this technology to improve students’ experience in the classroom. The MAIEI team highlighted that facial recognition would provide only minimal benefits in this context when measured against the risks, especially considering the notoriously weak cybersecurity systems of schools and the fact that students are most often minors. As a group, we ultimately concluded that there are more effective ways of improving students’ experience in the classroom than turning to facial recognition, and stressed the importance of avoiding the trap of believing that technology is always the best fix for every problem.

The Montreal AI Ethics Institute would like to thank TechAIDE café, Nabil Beitinjaneh, and the participants who joined us for this session. You can find out more about TechAIDE café here.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
