Montreal AI Ethics Institute

Democratizing AI ethics literacy

Risk and Trust Perceptions of the Public of Artificial Intelligence Applications

November 30, 2021

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Keeley Crockett, Matt Garratt, Annabel Latham, Edwin Colyer, Sean Goltz]


Overview: Does the general public trust AI more than students on a higher education computer science programme? The paper aims to answer this very question, emphasising the importance of civic competence in AI.


Introduction

Is the general public’s opinion of AI different from that of students studying computer science in higher education? Using a survey titled “You, me and “AI”: What’s the risk in giving AI more control?”, the paper compares the levels of trust and risk that members of the general public and higher education computer science students assign to AI. What is certain is that civic competency in AI is crucial to creating representative technology, something we hold dear here at MAIEI.

Key Insights

Civic competency in AI

One of the central themes of this paper, of my TEDxYouth talk, and of what we do at MAIEI is the importance of civic competency in the AI field. By improving the public’s understanding of AI, we better equip people to fight misinformation on the subject. One way to do this is to develop online courses, following in the footsteps of the University of Helsinki. By allowing non-experts to become involved in the debate, we enrich the AI space and make it more representative.

Nevertheless, the paper points out that some may feel intimidated by courses offered by universities because they feel they lack the right qualifications. Hence, a future focus could be creating courses designed specifically for the layperson.

One of my core beliefs is that everyone can bring something to the AI table, no matter their level of expertise. Such value is clearly demonstrated in the data collated in the paper’s surveys.

The results

One of the main driving forces behind the survey is that previous studies of the general public show varying degrees of knowledge about AI, but all lack a robust definition of the general public. Hence, the paper takes the general public to be those who have no specific knowledge of AI.

The participants were split into Group 1 (the general public) and Group 2 (students on a higher education computer science programme). The groups were then asked questions on three different themes: trust, risk, and agreement with statements on a scale of 0–10. A bird’s-eye view of the results is as follows:

Trust

  • The groups were found to agree on questions such as not trusting an automated message from their boss, but differed on whether to trust a driverless car that had passed a “digital MOT” (p. 4).
  • In this case, university students were more trusting of the AI involved.

Risk

  • The students consistently associated the same, if not more, risk with the different AI applications, especially when it came to following instructions from a recognisable digitised voice.

On a scale of 0–10

  • There was general parity between the two groups on statements such as “I believe the minority of AI systems are biased”. The only difference was that students placed less emphasis on AI system decisions being explainable.

Between the lines

While the general public is defined as lacking deep knowledge of the field, it is crucial that they are deemed a key stakeholder. As such, their interactions with AI systems must be considered when evaluating an AI model’s performance. As the paper rightly mentions, risk can occur at different points in the AI lifecycle, making system monitoring a vital aspect of a successful AI system. I hold that we cannot assume AI systems generalise across the whole population, which makes such monitoring critical to ensuring that a system accurately does what it was designed to do and to catching the problems it could bring.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
