Montreal AI Ethics Institute

Democratizing AI ethics literacy

Exploring the under-explored areas in teaching tech ethics today

July 21, 2021

✍️ Column by Dr. Marianna Ganapini, our Faculty Director. This is part 5 of her Office Hours series. The interviews in this piece were edited for clarity and length.


Join us again for new and exciting ideas on how to shape curriculum design in the ethics of tech space. This month, Chris McClean shares his experience as the global lead for digital ethics at Avanade, and we are excited to learn more about how he trains tech and business professionals to recognize the most pressing ethical challenges. As always, please get in touch if you want to share your opinions and insights on this fast-developing field.

What is your background? What courses do (or did) you teach connected to Tech Ethics, and who’s your audience (e.g., undergrads, professionals)?

I am the Global Lead for Digital Ethics at Avanade, a 40,000-employee technology consulting and advisory firm. A substantial part of my role includes training our tech and business employees worldwide about how best to recognize and address ethical issues that arise in the technology we design, develop, deploy, and operate. I also offer Digital Ethics training, assessments, and program design for our clients (technology and business executives) as part of a broad advisory practice.

What kind of content do you teach? What topics do you cover? What types of readings do you usually assign?

I teach general concepts and trends in Digital Ethics, which covers a wide range of ways technology impacts individuals (such as privacy, accessibility, financial health and opportunity, mental well-being, personal dignity, and legal status), society (such as health care, education, the economy, criminal justice, and law enforcement), and the environment (such as energy use, material use, waste, pollution, and impact on biodiversity). I also cover a wide range of ethical controls, such as values alignment, ethical testing, security, resilience, monitoring, oversight, recourse, and accountability. I usually distill academic research for my audience given the amount of time such reading might take, and I rely heavily on real-world cases of ethics done well or done poorly.

What are some teaching techniques you have employed that have worked particularly well? For Tech Ethics, what kind of approach to teaching do you recommend?

I’ve found it particularly helpful to run audiences through scenario analysis, especially when we can use real case examples. I’ve also run workshops that include a detailed assessment of a technical product or project using our Digital Ethics Assessment Framework.

In your opinion, what are some of the things missing in the way Tech Ethics is currently taught? For instance, are there topics that are not covered enough (or at all)? What could be done to improve this field?

It’s hard to say, as I don’t have much visibility into all the different ways people are teaching these topics. However, given what we’re seeing in the industry, it seems like we’re spending a good deal of time on data ethics/privacy and responsible AI (which are critically important) but not enough time on the mental health, personal dignity, and environmental impacts of technology. I also don’t see enough emphasis on how to incorporate ethical practices into various professional disciplines, like design, engineering, marketing, or audit.

How do you see the Tech Ethics curriculum landscape evolving in the next 5 years? What are the changes you see happening?

I’m encouraged to see how much more often Tech Ethics is taught as part of general computer science and data science curricula. I’m hopeful that this trend will carry into business curricula as well, just as we’ve seen topics like sustainability and corporate responsibility become more popular. Ideally, our ethics-related education should also include perspectives from economics, sociology, and even marketing to show that taking ethics seriously can positively impact business and social performance.

Is there anything else you’d like to add?

We should look carefully at the value of having stand-alone ethics training versus embedding ethics considerations into other aspects of training. As a separate subject, it’s very easy to compartmentalize ethics as something that’s done occasionally, possibly by other people. But if it’s incorporated as a standard element of other courses, it’s easier to see that considering and addressing ethics is everyone’s job throughout the entire tech lifecycle.


Bio of interviewee:

As the global lead for digital ethics at Avanade, Chris McClean is responsible for driving the company’s digital ethics fluency and internal change and advising clients on their digital ethics journey. Prior to Avanade, Chris spent 12 years at Forrester Research, leading the company’s analysis and advisory for risk management, compliance, corporate values, and ethics. Chris earned his MS in Business Ethics and Compliance in 2010 and BS in Business with a Marketing emphasis in 2001.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.


Founded in 2018, the Montreal AI Ethics Institute (MAIEI) is an international non-profit organization equipping citizens concerned about artificial intelligence and its impact on society to take action.
  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.