✍️ Column by Dr. Marianna Ganapini, our Faculty Director. This is part 3 of her Office Hours series. The interviews in this piece were edited for clarity and length.
How do you see the Tech Ethics Curriculum landscape evolving in the next 5 years? In this column, three experts tackle this and other important questions about the development of an effective, inclusive, and comprehensive Tech Ethics curriculum for the future. They tell us about their teaching experience and about the gaps in this field. First, meet the experts:
- Merve Hickok — Founder of AIethicist.org, a global repository of reference & research material for anyone interested in the current discussions on AI ethics and the impact of AI on individuals and society.
- Dorothea Baur — Principal and Owner at Baur Consulting AG, which advises companies, non-profit organizations and foundations on matters related to ethics, responsibility and sustainability with a distinctive but not exclusive focus on finance and technology.
- Ryan Carrier — Executive Director at ForHumanity, a non-profit founded to examine the specific and existential downside risks associated with AI.
Here are some highlights from the interviews:
- While teaching Tech Ethics, we need to acknowledge that a discussion around ‘power’ is key to explaining and understanding how AI and disruptive tech are changing our socio-economic landscape.
- Over the next 5 years, there will be more empowerment and integration of Ethics Committees and Ethics Officers, and increased value placed on philosophical/moral thinking with regard to sustainable profitability in corporations. Our Tech Ethics curriculum needs to address these changes.
- In the near future, there will be more universities and colleges developing multidisciplinary programs geared towards Tech Ethics.
- Tech Ethics should become a standard part of each degree in Tech.
Full interviews below.
What is your background? What courses do (or did) you teach connected to Tech Ethics and who’s your audience (e.g. undergrads, professionals)?
Merve: I have BAs in International Relations and Political Science, and I am a certified privacy professional. I provide both consulting and tailored training to organizations and individuals. I am a lecturer at the University of Michigan School of Information, where I teach the Data Ethics course in the Master’s of Applied Data Ethics program. I also have an online self-paced course that I created (offered through RMDS Lab) for professionals of any background who are interested in the fundamentals of AI ethics, bias, and ethical decision-making.
Dorothea: My background is a Ph.D. in business ethics, several years of postdoctoral research, and 5 years of independent ethics consulting with a focus on tech and finance. I teach classes on “AI and ethics” at various universities of applied sciences. My class is part of degree programs in “AI Management”, “Digital Ethics”, “Disruptive Technologies”, “Digital Finance”, etc. The audience always consists of heterogeneous groups of professionals across different industries. They all have previous degrees and several years of work experience.
Ryan: I founded ForHumanity after a 25-year career in finance, and I now focus on Independent Audit of AI Systems as one means to mitigate the risk associated with artificial intelligence. As for my teaching experience, I have had the opportunity to teach general ethics as a part of a course introducing the ‘Independent Audit of AI Systems’.
What kind of content do you teach? What topics do you cover? What kinds of readings do you usually assign?
Merve: I provide details of multiple approaches that are currently highlighted in AI ethics discussions. I cover AI and tech ethics from a fundamental-rights approach, a principles-based approach, and a human-centric, values-based approach, and discuss the pros and cons of each. It is important for decision-makers, developers, implementers, and policy-makers to understand what each of these approaches means, what it implies for business, agencies, and society, and the harms and burdens that can manifest themselves.
I also provide context on bias: where bias can creep into a system during its lifecycle, and some of the practices to mitigate it. A lot of my focus is on demonstrating the real-world applications (recruitment, predictive policing, healthcare, workplace) of these high-level approaches – in other words, bringing theory into action: what they look like in practice, in what situations they might work, and how to ask the right questions and decide. Every organization and every person involved is at a different maturity level with regard to their understanding of impact and consequences. They also come from very different industries and backgrounds. So it is important to provide the fundamentals and tools to be able to dissect AI systems and rhetoric yourself. As for topics, I cover the following: AI bias (especially in recognition and prediction systems), AI governance, policy, regulation, ethical frameworks, power, harms/burdens, inclusive design, exploitation/extraction, surveillance, manipulation, disinformation, social justice, techno-solutionism, diversity, and data colonialism.
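To make the bias-mitigation practices Merve mentions more concrete, here is a minimal, hypothetical sketch in plain Python of one common first step: auditing a system’s binary decisions (e.g. in recruitment) by comparing per-group selection rates. The data, group labels, and the 0.8 threshold (the “four-fifths rule” used in US employment contexts) are illustrative assumptions on our part, not material from the interview.

```python
# Illustrative sketch (not from the interview): a minimal fairness check
# that compares per-group selection rates for binary decisions.
# The data and the 0.8 threshold ("four-fifths rule") are assumptions.
from collections import defaultdict

def selection_rates(decisions, groups):
    """Return the fraction of positive decisions for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(decisions, groups):
    """Ratio of the lowest to the highest group selection rate."""
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical hiring decisions (1 = selected) for two applicant groups.
decisions = [1, 0, 1, 1, 1, 0, 1, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
ratio, rates = disparate_impact(decisions, groups)
print(f"Selection rates: {rates}")             # {'A': 0.8, 'B': 0.2}
print(f"Disparate-impact ratio: {ratio:.2f}")  # 0.25
if ratio < 0.8:
    print("Below the four-fifths threshold: review this system for bias.")
```

A check like this is only a starting point: it flags a disparity but says nothing about its cause, which is why courses like the ones described here pair such measurements with questions about data provenance, proxies, and power.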
Dorothea: I usually teach only one block of 4 or 8 hours in each programme. I start with a very brief introduction to ethics, because the students usually do not have any prior knowledge of ethics. I then move on to show where you can ‘anchor’ ethics – i.e. at the state level (through legislation), at the industry or corporate level (through corporate AI ethics initiatives), or at the level of individuals (through awareness training etc.). I spend the bulk of my time highlighting ethical challenges based on use cases – e.g. algorithmic credit lending, facial recognition, emotion recognition, algorithms in education or in court, etc. In these contexts, we discuss issues like bias, discrimination, privacy, accountability, etc. In this type of continuing education, you don’t usually assign readings, but I always recommend some literature and websites in case people want to know more.
Ryan: At ForHumanity we are deeply committed to establishing best practices in curriculum design for AI Ethics. At the moment we are working on building clear learning objectives for new Algorithm Ethics classes. As for topics, our main focus is better developing the idea of ‘Ethical Choice’ and building a curriculum that could empower the Ethics Committee by using our system of accountability, governance and oversight. This is because we see the Ethics Committee as a key tool for managing AI and Autonomous System risk. For more on this, please see “the rise of the Ethics Committee”.
What are some teaching techniques you have employed that have worked particularly well? For Tech Ethics, what kind of approach to teaching do you recommend?
Merve: I use lots of case studies, ranging from corporate applications to those used by public entities to the ones we use in our private daily lives. I try to raise awareness of the fact that AI systems are ubiquitous in all facets of our lives. I want the participants in my courses to know that the focus should not only be on the systems they are asked to develop at work; it goes beyond that: discriminatory systems and scoring applications have an impact at both the individual and the societal level. I assign the readings curated on AIethicist.org. I also show impactful videos, movies, and documentaries. If I am conducting training for a specific company or organization, I tailor it to the needs of the team and help them incorporate it into their processes and work. In the Master’s-level program and my online self-paced training, the students choose to take the class to advance their understanding of the issues and their impact. Regardless of the situation, though, it is often hard to get trainees/students to really question these systems and their deep-seated ideas.
Dorothea: The class size is usually quite small – not more than twenty students. Given that these are all professional adults, they are usually intrinsically motivated and keen to engage in discussion without me using a lot of specific teaching techniques. However, I also usually let them do group work, e.g. for case studies, where I assign different cases to the groups and let them explain what they have learned to their peers in plenary. In my context, the best recommendation is to keep tech ethics as applied as possible and to inspire people to reflect on what it means for their own work experience.
Ryan: In teaching, I have used a number of different techniques, such as crowdsourcing and lecturing, in both synchronous and asynchronous settings.
In your opinion, what are some of the things missing in the way Tech Ethics is currently taught? For instance, are there topics that are not covered enough (or at all)? What could be done to improve this field?
Merve: These are some of the things that are often not highlighted enough in this space:
- A focus on critical-thinking skills and the ability to break apart concepts and resist false dichotomies that benefit certain groups more than others. Critical thinking is also key to separating prediction from causality, and pseudoscience from real science and research
- A discussion about power, which is crucial to understanding how AI and disruptive tech are changing the landscape. This also entails understanding the history of, for example, civil rights, segregation, colonialism, public vs. private entities, international organizations, etc.
- A serious reflection on the negative impacts of AI across all domains of our lives (not just the professional one), and an understanding of corporate power and the emergence of legal persons and corporate liability
- The willingness to develop and share tools to resist unethical practices (collective action is important here)
- A need for governance and accountability mechanisms in general
Dorothea: I don’t have a good overview of the state of Tech Ethics across universities because I am focused on this very specific continuing education setting. I cannot say anything about Tech Ethics at BA or MA level, or engineering degrees, etc.
Ryan: What is missing is a real focus on Independent Audit and the subsequent infrastructure of trust, specifically in the governance, oversight, and accountability of third-party audits. I see dramatic demand for these skills, due in part to their being required in the process of Independent Audit of AI Systems. That means we need courses that can train people to fill that gap.
How do you see the Tech Ethics Curriculum landscape evolving in the next 5 years? What changes do you see happening?
Merve: In the near future, I see more universities developing multidisciplinary programs and curricula geared towards Tech Ethics. At the moment, interdisciplinary education and professional work are still woefully lacking, and there is a real need for a shared language, and for mutual respect and understanding, between the humanities and CS/engineering sides. In the Master’s course I’m involved in, more than 90% of the students work in different fields and come from different backgrounds. As a result, the conversations among students are very rich, even though their understandings of Tech Ethics tend to be very different. I think we need more of that kind of interdisciplinary work and education.
Dorothea: I hope that Tech Ethics becomes a standard part of each degree in Tech – be it at undergraduate, graduate, or continuing education level. Everyone studying tech should be exposed to ethical questions in the classroom.
Ryan: For the next 5 years, I see more empowerment and integration of Ethics Committees and Ethics Officers, and increased value placed on philosophical/moral thinking with regard to sustainable profitability in corporations. I would argue that with the rise of soft law and duty-of-care legal terminology in instruments like the GDPR, the Children’s Code, and now the proposed EU regulations on high-risk AI, the demand at the corporate level for skilled practitioners (ethics officers) trained in instances of Ethical Choice throughout the design and development of algorithmic systems will rise. The idea is that private entities will see these changes as a risk to the sustainability of their profits unless they learn how to properly risk-manage these issues. My prediction is that this will also transform the landscape of Tech Ethics curriculum design, and new courses will be taught to fill these gaps and address these needs.
Full bios of interviewees:
Merve Hickok is the founder of www.AIethicist.org. She is an independent consultant, lecturer, and speaker on AI ethics and bias and their implications. She aims to create awareness, build capacity, and advocate for ethical and responsible AI. She collaborates with several national and international organizations building AI governance methods and has been recognized by a number of organizations for her work – most recently as one of the 100 Brilliant Women in AI Ethics™ – 2021. Merve has over 15 years of senior-level global experience in Fortune 100 companies, with a particular focus on HR technologies, recruitment, and diversity. She is a Senior Researcher at the Center for AI & Digital Policy, a Data Ethics lecturer at the University of Michigan School of Information, and an instructor at RMDS Lab providing training on AI ethics and responsible AI development and implementation. Merve is a Fellow at ForHumanity Center, working to draft a framework for the independent audit of AI systems; a founding editorial board member of the Springer Nature AI & Ethics journal; and a regional lead and mentor at the Women in AI Ethics Collective, where she works to empower women in the field.
Dorothea Baur has over 15 years of experience in applied ethics, starting with research and moving on to teaching and consulting. She began with a PhD in business ethics and has more recently focused her consulting and teaching on AI ethics. She has taught at leading European business schools such as ESADE University Barcelona, the University of St. Gallen, and Nottingham University Business School. Aside from running her own consulting company, she is currently very active as a lecturer on AI ethics in continuing education at various universities in Switzerland.
Ryan Carrier founded ForHumanity after a 25-year career in finance. His global business experience, risk-management expertise, and unique perspective on how to manage risk led him to personally launch the non-profit entity ForHumanity. Ryan focused on Independent Audit of AI Systems as one means to mitigate the risk associated with artificial intelligence, and began to build the business model associated with a first-of-its-kind process for auditing corporate AIs, using a global, open-source, crowd-sourced process to determine “best practices”. Ryan serves as ForHumanity’s Executive Director and Chairman of the Board of Directors; in these roles, he is responsible for the day-to-day function of ForHumanity and the overall process of Independent Audit. Prior to founding ForHumanity, Ryan owned and operated Nautical Capital, a quantitative hedge fund that employed artificial-intelligence algorithms. He was also responsible for Macquarie’s Investor Products business in the late 2000s. He worked at Standard & Poor’s in the Index business and for the International Finance Corporation’s Emerging Markets Database. Ryan has conducted business in over 55 countries and was a frequent speaker at industry conferences around the world. He is a graduate of the University of Michigan.