✍️ Column by Dr. Marianna Ganapini, our Faculty Director. This is part 2 of her Office Hours series. The interviews in this piece were edited for clarity and length.
Last time we asked you about best practices in teaching Tech Ethics. Now we bring you the ideas, experiences, and suggestions of three thought leaders with long track records in developing Tech Ethics curricula:
- Karina Alexanyan (15 years of experience at the intersection of social science/tech/media/education)
- Philip Walsh (Teaches philosophy at Fordham University)
- Daniel Castaño (Professor of Law & Founding Director at the Center for Digital Ethics at Universidad Externado de Colombia)
They will tell us about their teaching philosophies, their course content, and why they think teaching Tech Ethics is so important!
What is your background? What courses do (or did) you teach connected to Tech Ethics and who’s your audience?
Karina: I have over 15 years of experience researching and working at the intersection of technology and society. My academic research explores how our values manifest in the tools we create, and how those tools shape us in turn. My professional work helps academic and social impact organizations apply these insights for social benefit. My background includes PhD research in global media and society, work with Harvard’s Berkman Klein Center on issues pertaining to New Media and Democracy, work with Stanford University on Industry & Academic research partnerships addressing key issues in human sciences and technology, and work with the Partnership on AI on responsible AI development. I have taught courses in Communication and New Media at NYU and with Stanford Continuing Studies, and advise two education start-ups.
Philip: I have a PhD in philosophy. My published work focuses on phenomenology and philosophy of mind. I teach two classes related to Tech Ethics: Philosophy of Technology, and AI, Sci-Fi, and Human Values. These are both undergraduate philosophy elective courses at Fordham University in New York City.
Daniel: I’m a Colombian lawyer. I got my LL.B at Universidad Externado de Colombia, and an LL.M. and JSD at the University of California – Berkeley. My scholarship focuses on the regulation of complex issues like technology, environmental protection, and public health under uncertainty and in light of different legal, political, and institutional arrangements. To that end, it maps the architecture of decision-making to develop a methodology for transforming enlightened principles, rule-of-law values, ethics, and morality into legal norms, private compliance protocols, and tech products.
I’ve been a law professor at Universidad Externado de Colombia Law School since August 2010. I’ve focused mostly on teaching courses about the legal and ethical challenges of radical technologies like AI, Blockchain, IoT, and AR/VR. It’s been quite a journey because many of these topics and discussions may appear foreign to many people in Colombia. I also started the Center for Digital Ethics at Universidad Externado, where I have been advising the Provost and leading an initiative that includes the creation of a new tech & data science department and undergraduate and graduate degrees. We will launch the new data science undergraduate degree next fall.
Why do you think it is important to teach Tech Ethics courses? And to whom?
Karina: My current professional focus is on the nascent realm of ethical, responsible, and respectful technology. I believe that a diverse workforce is an essential aspect of ensuring that technical innovations are aligned with the public interest. The people building our tools should mirror those who will be affected by these tools. All of society will benefit if a wide range of experiences, perspectives, and expertise is brought to technology development. That is why I believe it is important to educate young people about the social and technical implications of AI – so that the widest range of people sees in themselves the possibility of contributing to building the tools that will shape our future. To help advance this vision, I am working with The AI Education Project and other industry and education partners on bringing their content to high schools and community colleges in California.
Philip: I began answering this question by saying that it is especially important for business and computer science students to study Tech Ethics, but actually I think that’s too narrow. I think everyone should study it. I know that’s a bland answer, but I believe it. Of course, business and computer science students are the likely future decision-makers at tech companies, but Tech Ethics is bigger than that. Students from other majors should also study Tech Ethics so they understand that there is an important role for non-business and non-technical people in shaping the future of technology. For example, a recent student of mine was a biology major. Taking my philosophy of technology course opened her eyes to the intersection of Tech Ethics and the healthcare industry. Another student was an English major and is now pursuing journalism focused on the tech industry. The list goes on. Everyone should study Tech Ethics because it ends up intersecting with everything.
Daniel: I think that we are living in what Luciano Floridi calls the “infosphere”, that analog-digital world where our lives unfold. I believe it is critical for everyone to understand the promises and perils of living in the “infosphere”, regardless of their career or background. It is also very important to teach how to address those promises and perils from a cross-functional perspective.
What kind of content do you teach? What topics do you cover? What kinds of readings do you usually assign?
Karina: I am collaborating with the AI Education Project, an education startup that has developed an engaging and accessible curriculum to help all students thrive in the age of artificial intelligence. We believe that AI literacy should be a basic component of every student’s education. Our online lessons and evidence-based activities were developed with students, teachers, and parents in mind, designed to help students understand the social, political, and economic implications of ‘the new electricity.’ Our multidisciplinary curriculum addresses the skills and knowledge necessary to thrive in and positively contribute to a society where AI and automation are increasingly a part of every industry. The five key modules touch on AI Foundations, Data & Models, Societal Impacts, AI in Action, and AI in Your Life.
Philip: How much time do you have? For the comprehensive answer, here is the link to my course syllabi: https://philipjwalsh.com/teaching
For Philosophy of Technology I start with some classic work in the field focusing on whether we can think of technology as having a single, unifying “essence” (Heidegger and Borgmann). I then move through some contemporary work on the possibility of artificial general intelligence, human enhancement technology, algorithms and big data, and then privacy and surveillance. For AI, Sci-Fi, and Human Values I used Brian Cantwell Smith’s recent The Promise of Artificial Intelligence as our primary text, paired with classic and contemporary science fiction to frame our discussion of the issues. This was a seriously fun class to teach. We read/watched: Frankenstein, R.U.R. (Rossum’s Universal Robots), Westworld, Black Mirror, Prometheus, Ex Machina, Her, and of course: 2001: A Space Odyssey.
These classes aren’t framed as “Tech Ethics” per se. Rather, you might say we cover a lot of meta-ethics of technology. We delve into the epistemic, metaphysical, and existential issues brought on by technology. The human condition is now a technological condition, and that’s what I want my students to understand. For example, in addition to ethical value I also emphasize cognitive and epistemic value. I think most debates about AI neglect this topic. One “reading” that I use for both classes and have found to be very successful with students is an episode of Barry Lam’s podcast Hi Phi Nation called the “Pre-Crime Unit,” which is about algorithmic policing, along with a corresponding paper by Renee Bolinger.
Daniel: I teach product-oriented workshops and undergraduate and graduate courses aimed at discussing the regulatory and ethical questions raised by AI/ML algorithms, Blockchain, IoT, and AR/VR. I usually cover the history and theoretical foundations of tech ethics, from the works of Norbert Wiener to the modern approaches we find today. I also cover questions about the nature and scope of tech ethics, the difference between ethics and law, ethical principles, ethics by design, and “enforcement” methods.
What are some teaching techniques you have employed that have worked particularly well? For Tech Ethics, what kind of approach to teaching do you recommend?
Karina: The AIedu curriculum is designed to be accessible to young people of all ages – from middle school through undergrad – although I think adults could benefit as well. Most importantly, it’s designed to be easily integrated into coursework by teachers who have no experience with AI. It’s a relatively self-explanatory curriculum that takes students on a guided journey from “what is AI” through examples of how AI will affect future work and careers, and potential ethical concerns. The curriculum combines hands-on activities with videos, self-assessments, and small exercises. At the end, there’s a final project that challenges students to come up with an AI solution to a problem in a future career.
The curriculum is designed to give learners the tools they need to make their own decisions about tech ethics. Rather than direct students to “right” or “wrong” answers, we teach them how to think critically about difficult subjects, and how to relate questions about tech ethics to their everyday lives. Learners engage with topics like AI in mental health, data and privacy, and their own social media usage. The course encourages robust debates between students so they feel they have a rightful place in the larger conversation. If we want a diverse set of perspectives in tech ethics, we have to give students from all backgrounds the confidence to amplify their voices in a field that can initially seem intimidating.
The content is also intentionally diverse – the explanatory videos feature women and narrators of color, and the content includes examples of AI in unexpected places – like creative industries. The content is also energetic, delivered with a playful and friendly tone that makes the technical material accessible to students from all backgrounds.
Philip: My strength as an instructor is lecturing. I like lecturing and (dare I say) I’m pretty good at it. I’m naturally extroverted and get very excited about things I’m interested in, so the students seem to respond well to that and we always have very engaging discussions in class. One general piece of advice about teaching that I have is to not underestimate your students. Assign difficult material. Even if you think it will be over their heads, this gives you a challenge: break it down for them. Explain how the puzzle pieces fit together. It will force you to get really clear on the material for yourself and lead to very engaged discussion in class. If the students see that you are working through the material for yourself, it feels like a collaborative enterprise.
Relatedly, I’ve always had more success assigning material that actually makes a claim. A lot of stuff on AI ethics and Tech Ethics doesn’t claim anything. It just lays out a “landscape” of issues or summarizes a bunch of ethical principles that are commonly found in Tech Ethics “frameworks.” That’s all well and good, but honestly gets pretty boring.
Finally, I recommend letting students develop their own multi-stage research projects. This has been one of the most rewarding aspects of teaching these courses. I basically end up managing 20 research projects every semester, on all kinds of issues in technology. I learn so much from my students. Once again, this gives the class a very collaborative feel and the students respond very positively to that.
Daniel: I lecture on the basic theoretical framework and then assign case studies or product review workshops where students analyze the legal and ethical challenges raised by a tech product. For me, product-review and case studies have proven to be an effective teaching method to promote a cross-functional dialogue and bring students as close as possible to real-world scenarios.
What challenges have you encountered, what are the things that have not worked for you and why?
Karina: I reached out to the staff of AIEDU to see what kind of feedback they’ve gotten from the many instructors who have taught their course in high schools across the US. Here’s what they said:
“The biggest challenge with teaching tech ethics, especially as it relates to AI, is that many students think learning about anything technology-related is only meant for computer scientists, mathematicians, and so on. We know that AI will touch everyone’s lives regardless of their interests or career goals but we have to find a way to convey that to students earlier in their school careers, otherwise they self-select out of courses like ours. As educators, we should constantly interrogate our own lesson planning, teaching strategies, and messaging about tech and tech ethics if we want to attract a broad, diverse student audience to the subject. We all have to do a better job of integrating technology, humanities, and the arts so there is an entry point for every student. Another challenge that we’ve come across is finding ways for students to continue talking about tech ethics outside of the course. We know from AIEDU’s student responses that learners who go through the material report high levels of engagement and interest in learning more about topics like AI ethics, but they often don’t know where to turn. We tried implementing a project where students explained some of what they learned to family members or friends and hoped it would help facilitate an ongoing conversation about AI. Unfortunately, the students found it difficult to find loved ones that they could engage with on these topics. Now AIEDU is building a detailed resource of free programming that students can complete after the course if they are interested. We know we can spark students’ interest in AI ethics but we also have to take responsibility for fanning that spark by finding creative ways for students to apply their learning outside of the classroom.”
Philip: Sorry but I can’t think of much to say here. I’ve been teaching these courses for a couple years and they have been uniformly great. As I mentioned above, I think it is best to assign difficult material that makes interesting claims. I’ve assigned overviews of ethical frameworks before and they just aren’t that interesting. That’s not to say they aren’t valuable, but I find they are better suited as a supplement that students can consult and incorporate into their independent research projects.
Daniel: It is very hard to assign readings since most of the literature on digital or tech ethics is in English. Maybe it is time to publish a comprehensive textbook on digital ethics in Spanish? I’m convinced that ethics and technology need to speak more Spanish. If anyone is interested in making this happen, please feel free to reach out!
If you want to learn more about teaching Ethics of Tech, check out these useful resources for developing a great course:
- http://aiethics.site/Trento/Syllabus
- https://cmci.colorado.edu/idlab/assets/bibliography/pdf/Raji-pedagogy2021.pdf
- http://z-inspection.org/education/
- https://dataresponsibly.github.io/courses/
- https://www.linkedin.com/feed/update/urn:li:activity:6773634049740165121/
Full bios of interviewees:
Karina Alexanyan has over 15 years of experience directing research & program initiatives at the intersection of social science, information technology, media, and education. She has worked with Stanford, Harvard, and Columbia University, as well as organizations such as Partnership on AI, All Tech is Human, and the m2B Alliance (me2BA.org), and currently advises two educational start-ups.
Philip Walsh received his Ph.D. in philosophy from UC Irvine in 2014. He currently teaches at Fordham University. His research and teaching focus on phenomenology, philosophy of mind, philosophy of technology, and Chinese philosophy. He blogs about philosophy of technology at thinkingtech.co.
Daniel Castaño is a Professor of Law & Founding Director at the Center for Digital Ethics at Universidad Externado de Colombia. LL.B – Universidad Externado, LLM & JSD – University of California at Berkeley. Consultant in new technologies, regulation, and digital ethics.