✍️ Column by Dr. Marianna Ganapini, our Faculty Director. This is the introductory piece to her Office Hours column.
Introduction
A key goal of ours at the Montreal AI Ethics Institute (MAIEI) is to build civic competence and understanding of the societal impacts of AI, as epitomized in our mission of “Democratizing AI Ethics Literacy”. An important challenge to this objective is finding effective ways and best practices to equip, empower, and engage diverse stakeholders, giving them the tools to become better digital citizens and agents who can raise pertinent issues in a well-informed manner.
Organizations working in this space have started to develop and implement a variety of AI Ethics curricula to meet this goal. At MAIEI we always strive to put education at the top of our agenda, and we think it is now time to start a conversation on how to develop an effective Ethics of AI (or Ethics of Tech) curriculum. This stems from our analysis of disparate efforts across the world: many are experimenting with various approaches to impart this knowledge, yet they stumble on the same common blocks again and again without being able to share that knowledge effectively with each other and move past those barriers.
At MAIEI we are leaders in following, analyzing, and reporting on the development of the Ethics of AI, and our “State of AI Ethics Reports” are recognized as the benchmark in the sector. The development of our Ethics of AI Curriculum aims at the same result.
Though our goal is not to be exhaustive in exploring these issues, here we’d like to offer and reflect on the best practices that shape the world of education and the Ethics of Tech (and AI Ethics). We will do this by discussing higher-level ideas and topics alongside the technical and practical issues connected to the actual delivery of the curriculum and its pedagogy. For example, defining a target audience and level, a preferred delivery methodology, the types of content, and so on are all topics that should go hand in hand with a discussion of the broader aims of the curriculum (such as being inclusive, accessible, and comprehensive).
Hence, this column will include our own reflections as well as thoughts and input from industry and sector leaders on the challenges and opportunities related to the development of an Ethics of AI/Ethics of Tech curriculum.
Why Should We Teach Ethics of Tech to Young Students?
My plan for this column is to initially share some of the lessons I learned while teaching Ethics of Tech at the college level (I am an assistant professor of Philosophy at Union College). An important disclaimer: my experience is limited, as I have only taught this topic to undergraduates in private colleges in the US. Therefore, I particularly welcome suggestions and ideas from those who have experience with a more diverse body of students!
Today I will begin by asking a motivating question: why should we teach the Ethics of Tech? Technology is moving at an incredibly fast pace and is shaping our lives in profound ways. In the next few years, technology is bound to have existential effects on our identity and our nature as human beings. Because of how it is evolving, technology challenges us to understand who we are, as a society, and what we want to become. At the same time, technology (and AI in particular) is also showing us our weaknesses and blind spots, while giving us tremendous opportunities to overcome those weaknesses and shape our values for the near future. The question “what kind of technology do we want for the future?” is inextricably linked to the questions “what kind of persons do we want to become, and what kind of lives do we want to live?”
Young students in the US are usually deeply immersed in technology: not only do they often understand and handle tech with ease, but for many of them digital technologies are simply a given in their lives. At the same time, studies in psychology seem to suggest that digital technologies could represent a potential threat to teens’ mental well-being and happiness. So I was pleased to notice that my students were quick to grasp both the worries and the incredible opportunities that technology and AI raise. They clearly saw the challenges we all face in trying to shape the future of autonomous systems in a way that respects our values.
I believe that we have a duty to allow students, at both the university and high school levels, to understand and reflect on the challenges of technology. At MAIEI we advocate for the importance of empowering stakeholders and promoting civic competence. Because young students will be affected by our current choices, they should be part of the conversation and should be empowered to have their voices heard. To raise awareness and promote civic engagement, ethics of technology courses should be offered at the college level and possibly even earlier!
How do We Teach Ethics of Technology to Young Students?
The second question of the day is: how should we teach the Ethics of Technology? What are the best practices we should adopt? There is of course a lot to say here, and I will only scratch the surface today. The first point is about content: it is important to teach the ethics of technology broadly construed, and not just limit ourselves to AI Ethics. There are ethical issues that pertain to technology in general. For instance, we should ask students to reflect on the use of digital technologies (e.g. smartphones, social media) and their effects on their lives and well-being. Students in my classes, for instance, were fascinated by the ethics of social media. They wrote papers on the role of social media in promoting misinformation and in allowing hate speech to proliferate. These topics may not fall strictly within what is considered the “Ethics of AI”, but they are still extremely important.
The second point is broadly methodological and much more tentative. I usually teach the ethics of tech or AI by starting with some key moral theories (e.g. consequentialism, deontology) and their theoretical normative frameworks. Only afterwards do I present the practical challenges raised by technology and AI, offering case studies and practical examples. The problem I noticed is that it is quite difficult for students to connect theory to practice and to apply high-level ethical concepts (e.g. rights, duties) to practical cases. They enjoy and understand the case studies, but when asked to justify their ethical assessments (e.g. “facial recognition is wrong!”), they tend to struggle and fall back on pre-theoretical moral intuitions. Why do they face this difficulty? I think there are two possible reasons.
According to some researchers, it is in fact quite difficult to apply high-level principles and use them to make moral decisions, especially when it comes to technology (Mittelstadt, 2019). So one reason students struggle is that what we are asking them to do is simply very hard to achieve. A second possibility is that the teaching methodology we often adopt is backward: one should first present practical cases and tangible examples, and only then connect them to high-level concepts (such as justice and fairness) and their theoretical frameworks. That might make it easier for students to bridge theory and practice.
Get in touch!
Have you encountered similar problems? Do you have solutions for addressing these issues in your teaching? I would love to hear your thoughts, ideas, and experiences about this and, more broadly, about teaching the ethics of tech, AI ethics, digital ethics, and the like. Please get in touch with me at [email protected].