Montreal AI Ethics Institute

Democratizing AI ethics literacy


Do Less Teaching, Do More Coaching: Toward Critical Thinking for Ethical Applications of Artificial Intelligence

June 17, 2022

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Claire Su-Yeon Park, Haejoong Kim, Sangmin Lee]


Overview: With new online educational platforms, a trend in pedagogy is to coach rather than teach. Without developing a critically evaluative attitude, we risk falling into blind and unwarranted faith in AI systems. For sectors such as healthcare, this could prove fatal. 


Introduction

With the proliferation of online education opportunities, a new tendency in higher education is to focus more on coaching rather than teaching when it comes to AI. Enabling students to critically evaluate information, rather than simply being spoon-fed exam material, will prove essential to guiding an appropriate AI future. This is especially pertinent in healthcare, where we ought to treat technology as a tool rather than a replacement. To explore this, I’ll first touch upon the new trend emerging in pedagogy before examining the dangers of uncritical uniformity, later applying them to healthcare. To end, I’ll comment on how the better our critical skills are, the better this will be reflected in AI systems themselves.

Key Insights

A new trend in pedagogy

Given the rise of new online education opportunities after the pandemic, higher education appears to be in a transition phase, becoming more proactive rather than reactive, especially regarding AI. The shift moves away from simply disseminating knowledge towards guiding students’ critical thinking journeys.

Recommender algorithms already allow students to personalise the content they view and discover new information. Hence, we are seeing a stronger trend towards independent learning alongside any course material, placing students in a more active role than before the pandemic. This transition thus points towards equipping students with the tools to evaluate information, not merely to reproduce it on an exam paper.

Uncritical uniformity

Without being critical, the users of recommender algorithms could end up reproducing the interests of the developers. If they are not careful, the subtle shifts towards the “specific bias” (p. 98) a system could contain will go undetected. For example, newspaper websites recommend news items associated with readers’ political leanings. Here, if we don’t question what we are shown, we could fall prey to isolating ourselves in our own views.

Building on this, being uncritical leads to unwarranted blind faith in AI systems. Computers cannot think critically for us, but they can reflect our biases. For example, Lee Luda, a chatbot designed by the Korean company Scatter Lab, reproduced homophobic and transphobic comments when asked specific questions (Figure 1). Adopting an uncritical attitude to this technology would’ve proved detrimental, allowing it to continue operating without consequence. Nevertheless, the company was ultimately fined KRW 13.3 million.

Virtual education and healthcare

Wrapped up in these considerations is how, without a critical attitude, it may become difficult to tell the difference between what a student believes and what the online learning system wants them to think. When users are constantly fed particular articles, it eventually becomes hard to say whether their views reflect the user or the system itself. Without critically evaluating such information, we become paralysed in our efforts to discern and assert what we believe in.

This proves especially pertinent within the healthcare system. Without developing a critically evaluative outlook, we risk being unable to intervene in AI-driven decisions that could endanger patient welfare. For example, for patients who arrive at a hospital with flank pain, the STONE algorithm uses its “origin/race” factor to predict the likelihood of kidney stones. It adds three points (out of 13) for non-white patients, which could lead to non-white patients not being referred to a specialist even when they need treatment. Hence, while this technology is beneficial, it should be treated as a tool that augments rather than replaces human intelligence.

Between the lines

AI is omnipresent in our lives, from YouTube’s recommender algorithms to the facial recognition that unlocks our phones. Consequently, equipping ordinary citizens and students with the dexterity to evaluate AI systems is a must. The better our skills, the better these will be reflected in AI systems, their outcomes, and how we deal with any negative consequences that arise.



© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.