Montreal AI Ethics Institute


Do Less Teaching, Do More Coaching: Toward Critical Thinking for Ethical Applications of Artificial Intelligence

June 17, 2022

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Claire Su-Yeon Park, Haejoong Kim, Sangmin Lee]


Overview: With the rise of new online educational platforms, the trend in pedagogy is to coach rather than teach. Without developing a critically evaluative attitude, students risk falling into blind, unwarranted faith in AI systems. In sectors such as healthcare, this could prove fatal.


Introduction

With the proliferation of online education opportunities, a new tendency in higher education is to focus more on coaching than teaching when it comes to AI. Enabling students to critically evaluate information, rather than simply being spoon-fed exam material, will prove essential to guiding an appropriate AI future. This is particularly pertinent to the healthcare field, where we ought to treat technology as a tool rather than a replacement. To explore this, I’ll first touch upon the new trend emerging in pedagogy before examining the dangers of uncritical uniformity, later applying it to healthcare. To end, I’ll comment on how the better our critical skills are, the better this will be reflected in AI systems themselves.

Key Insights

A new trend in pedagogy

Given the rise of new online education opportunities after the pandemic, higher education appears to be in a transition phase. Higher education is becoming more proactive rather than reactive, especially regarding AI. The shift swings away from simply disseminating knowledge to guiding students’ critical thinking journey.

Recommender algorithms already allow students to personalise the content they view and discover new information. Hence, we are seeing a stronger trend towards independent learning alongside course material, placing students in a more active role than before the pandemic. This transition points towards equipping students with the tools to evaluate information, not just reproduce it on an exam paper.

Uncritical uniformity

Without being critical, users of recommender algorithms could end up reproducing the interests of the developers. If they are not careful, subtle shifts towards the “specific bias” (p.98) a system could contain will go undetected. For example, newspaper websites recommend news items associated with readers’ political leanings. If we don’t question such recommendations, we could fall prey to isolating ourselves in our own views.
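The self-reinforcing drift described above can be made concrete with a toy simulation. The snippet below is a minimal sketch, not any real platform’s algorithm: the topic names, the click-count weighting, and the small exploration constant are all invented for illustration. It shows how a single early click can snowball into a feed dominated by one topic.

```python
import random
from collections import Counter

random.seed(0)  # deterministic for illustration

TOPICS = ["left politics", "right politics", "sports", "science", "culture"]

def recommend(history, n=5):
    """Toy recommender: weight each topic by how often the user has
    clicked it, plus a small exploration constant (an assumption made
    purely for this sketch)."""
    clicks = Counter(history)
    weights = [clicks[t] + 0.1 for t in TOPICS]
    return random.choices(TOPICS, weights=weights, k=n)

# A user who starts with one political click drifts toward a narrow feed:
# each round, the system recommends five items and the user clicks one.
history = ["left politics"]
for _ in range(50):
    batch = recommend(history)
    history.append(random.choice(batch))

print(Counter(history).most_common())
```

Even with the exploration constant, the rich-get-richer feedback loop means the first topic clicked comes to dominate the history almost entirely, which is the “uncritical uniformity” the authors warn about.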

Building on this, being uncritical leads to unwarranted blind faith in AI systems. Computers cannot think critically for us, but they can reflect our biases. For example, the chatbot Lee Luda, designed by the Korean company Scatter Lab, reproduced homophobic and transphobic comments when asked specific questions (Figure 1). Adopting an uncritical attitude towards this technology would have proved detrimental and allowed it to continue operating unpunished. In the event, the company was fined KRW 13.3 million.

Virtual education and healthcare

These considerations raise a further worry: without a critical attitude, it may become difficult to tell the difference between what a student believes and what the online learning system wants them to think. When users are constantly fed particular articles, it eventually becomes difficult to tell whether their views reflect themselves or the system. Without critically evaluating such information, we become paralysed in our efforts to discern and assert what we believe in.

This proves poignant within the healthcare system. Without developing a critically evaluative outlook, we risk being unable to intervene in AI-driven decisions that could endanger patient welfare. For example, for patients who arrive at a hospital with flank pain, the STONE algorithm uses an “origin/race” factor in predicting the likelihood of kidney stones. It adds three points (out of 13) to non-white patients, which could lead to non-white patients not being referred to a specialist even when they need treatment. Hence, while this technology is beneficial, it should be treated as a tool that augments rather than replaces human intelligence.
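The weight a single demographic factor carries in such a score can be seen in a short sketch. The function below is modelled loosely on the published STONE criteria (Sex, Timing, Origin, Nausea, Erythrocytes); the exact point values and categories here are illustrative assumptions, not a clinical implementation, and should not be used for any medical purpose.

```python
def stone_score(male, non_white, duration_hours, nausea_vomiting, hematuria):
    """Illustrative sketch of a STONE-style clinical score (max 13 points).
    Point values approximate the published criteria but are for
    illustration only, not clinical guidance."""
    score = 0
    score += 2 if male else 0              # Sex
    if duration_hours < 6:                 # Timing of pain onset
        score += 3
    elif duration_hours <= 24:
        score += 1
    score += 3 if non_white else 0         # Origin/race: the contested factor
    score += {"none": 0, "nausea": 1, "vomiting": 2}[nausea_vomiting]  # Nausea
    score += 3 if hematuria else 0         # Erythrocytes (blood in urine)
    return score

# Two clinically identical presentations, differing only in race,
# end up three points apart in predicted kidney-stone likelihood.
a = stone_score(male=True, non_white=True, duration_hours=4,
                nausea_vomiting="vomiting", hematuria=True)
b = stone_score(male=True, non_white=False, duration_hours=4,
                nausea_vomiting="vomiting", hematuria=True)
print(a, b)  # → 13 10
```

The point of the sketch is the gap, not the numbers: a hard-coded demographic term moves the prediction regardless of symptoms, which is exactly why the authors argue clinicians must critically evaluate, and be able to override, such tools.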

Between the lines

AI is omnipresent in our lives, from YouTube recommender algorithms to the facial recognition that unlocks our phones. Consequently, equipping citizens and students alike with the skills to evaluate AI systems is a must. The better our skills, the more they will be reflected in AI, its outcomes, and how we deal with any negative consequences that arise.

