Montreal AI Ethics Institute

Do Less Teaching, Do More Coaching: Toward Critical Thinking for Ethical Applications of Artificial Intelligence

June 17, 2022

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Claire Su-Yeon Park, Haejoong Kim, Sangmin Lee]


Overview: With the rise of new online educational platforms, pedagogy is shifting from teaching toward coaching. Without developing a critically evaluative attitude, we risk falling into blind, unwarranted faith in AI systems. In sectors such as healthcare, this could prove fatal.


Introduction

With the proliferation of online education opportunities, a new tendency in higher education is to focus more on coaching than on teaching when it comes to AI. Enabling students to critically evaluate information, rather than simply spoon-feeding them exam material, will prove essential to guiding an appropriate AI future. This is especially pertinent to the healthcare field, where we ought to treat technology as a tool rather than a replacement. To explore this, I’ll first touch upon the new trend emerging in pedagogy before examining the dangers of uncritical uniformity and applying them to healthcare. To end, I’ll comment on how the better our critical skills are, the better this will be reflected in AI systems themselves.

Key Insights

A new trend in pedagogy

Given the rise of new online education opportunities after the pandemic, higher education appears to be in a transition phase: it is becoming more proactive rather than reactive, especially regarding AI. The shift swings away from simply disseminating knowledge toward guiding students’ critical thinking journey.

Recommender algorithms already allow students to personalise the content they view and discover new information. Hence, we are seeing a stronger trend toward independent learning alongside any course material, placing students in a more active role than before the pandemic. This transition thus points towards equipping students with the tools to evaluate information, not just reproduce it on an exam paper.

Uncritical uniformity

Without being critical, the users of recommender algorithms could fall into reproducing the interests of the developers. If they are not careful, the subtle shifts towards the “specific bias” (p.98) a system could contain will go undetected—for example, engaging with newspaper websites that recommend news items associated with their political leanings. Here, if we don’t question, we could fall prey to isolating ourselves in our own views.

Building on this, being uncritical leads to unwarranted blind faith in AI systems. Computers cannot think critically for us, but they can reflect our biases. For example, the chatbot Lee Luda, designed by the Korean company Scatter Lab, reproduced homophobic and transphobic comments when asked specific questions (Figure 1). Adopting an uncritical attitude to this technology would’ve proved detrimental and allowed it to continue operating without consequence. Instead, the company was sanctioned with a KRW 13.3 million fine.

Virtual education and healthcare

A related consideration is that, without a critical attitude, it may become difficult to tell the difference between what a student believes and what the online learning system wants them to think. When users are constantly fed particular articles, it eventually becomes difficult to say whether the feed reflects the user or the system itself. Without critically evaluating such information, we become paralysed in our efforts to discern and assert what we believe.

This proves especially poignant within the healthcare system. Without developing a critically evaluative outlook, we risk being unable to intervene in AI-driven decisions that could endanger patient welfare. For example, for patients who arrive at a hospital with flank pain, the STONE algorithm uses its “origin/race” factor to predict the likelihood of kidney stones. It adds three points (out of 13) for non-white patients, which could mean they are not referred to a specialist even when they need treatment. Hence, while this technology is beneficial, it should be treated as a tool that augments rather than replaces human intelligence.
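To make concrete how a fixed point adjustment can tip a clinical decision, here is a minimal sketch of an additive score in the style of STONE. Only the “origin/race” weight (three of thirteen points) comes from the summary above; the other factor names and weights are illustrative assumptions, not the published score.

```python
def stone_like_score(male, pain_onset_under_6h, non_white, vomiting, hematuria):
    """Toy additive risk score (max 13) in the style of the STONE algorithm.

    Only the 'origin/race' weight (3 of 13 points) comes from the summary;
    every other factor name and weight here is an illustrative assumption.
    """
    score = 0
    if male:
        score += 2                 # sex factor (illustrative weight)
    if pain_onset_under_6h:
        score += 3                 # timing factor (illustrative weight)
    if non_white:
        score += 3                 # 'origin/race' factor, per the summary
    if vomiting:
        score += 2                 # nausea/vomiting factor (illustrative weight)
    if hematuria:
        score += 3                 # erythrocytes factor (illustrative weight)
    return score

# Two otherwise-identical patients differ by exactly the race weight:
a = stone_like_score(male=True, pain_onset_under_6h=False, non_white=True,
                     vomiting=True, hematuria=False)
b = stone_like_score(male=True, pain_onset_under_6h=False, non_white=False,
                     vomiting=True, hematuria=False)
assert a - b == 3
```

The point is not the specific weights but the mechanism: a fixed race-based offset shifts every non-white patient’s score by the same amount, so any threshold-based referral rule silently inherits that shift.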

Between the lines

AI is omnipresent in our lives, from YouTube’s recommender algorithms to the facial recognition that unlocks our phones. Consequently, equipping ordinary citizens and students with the dexterity to evaluate AI systems is a must. The better our skills, the more they will be reflected in the AI, in its outcomes, and in how we deal with any negative consequences that arise.
