Montreal AI Ethics Institute

Democratizing AI ethics literacy

The “Stanislavsky projects” approach to teaching technology ethics

February 21, 2022

✍️ Column by Dr. Marianna Ganapini, our Faculty Director. This is part 6 of her Office Hours series. The interviews in this piece were edited for clarity and length.


Join us again for exciting new ideas on how to shape curriculum design in the tech ethics space. This month, Enrico Panai shares his experience as a Data & AI Ethicist and a Human Information Interaction Specialist. Following his studies in philosophy and multi-year experience as a consultant in Italy, he taught for seven years as an adjunct professor of Digital Humanities in the Department of Philosophy at the University of Sassari. As always, please get in touch if you want to share your opinions and insights on this fast-developing field.

What is your background? What courses do (or did) you teach connected to Tech Ethics, and who’s your audience (e.g., undergrads, professionals)?

I have taught many of the disciplines related to information processing: from simple text editors to programming languages, from spreadsheets to database design and implementation, from open data to the basics of cybersecurity. I might sound like just a computer scientist, but I actually come from philosophy. I started programming after reading Aristotle’s Metaphysics. I use technology to learn about the world. I believe that in order to know you have to create, to be a “maker”, and so I look at communication and information technologies as tools for navigating this new world.

By pure chance, I started giving ICT courses for the telecommunications and aviation industries very early on, in my first years of university; then I was a lecturer in Digital Humanities at the University of Sassari for seven years. To date I have lectured for more than 5,000 hours, across professional training and academic teaching.

What kind of content do you teach? What topics do you cover? What types of readings do you usually assign?

Apart from a few specific courses on information ethics and open data, and a few seminars on the ethical relevance of cyber wars, I have always taught information-related disciplines (text editors, spreadsheets, databases, programming, information architecture and digital humanities). Of course, I taught these subjects as a philosopher, or rather as an information philosopher, paying more attention to the logic, semantics and ethics of data and its technology-mediated transformation than to technical operations. We could call it “semantics of technology” or “ethics of information”. In fact, ethics has been present in every fragment of my courses. Ethics is present when a student chooses to use a mean instead of a median, a pie chart instead of a histogram, the scale of a line graph, the position of a button on a web page, the structure of a form, the purchase funnel of an e-commerce site, the evaluation test on a learning site, which pieces of data to collect, how to transform the data, which type of machine learning to use for a prediction, and so on.

My approach is to recommend books that provide tools for understanding the world, rather than books about a particular piece of software or a coding language. In other words, these books help us reflect ethically on data and on the interaction between information and humans. Not all of them talk about ethics directly, but they help us understand how data interacts within organizations and with people.

Here are some examples:

1) Information: A Very Short Introduction by Luciano Floridi (Oxford University Press, 2010) provides the basics of data, information and knowledge.

2) How To Lie With Charts by Gerald Everett Jones (2006), Storytelling with Data by Cole Nussbaumer Knaflic (2015) or The Truthful Art by Alberto Cairo (2016), all on communicating data honestly.

3) The Neurotic Organization: Diagnosing and Changing Counterproductive Styles of Management by Manfred F. R. Kets de Vries and Danny Miller (1984) is very helpful for understanding why a wonderful information flow-chart does not actually work in reality.

For those interested in ethics, I recommend more philosophical texts: a novel (Lila by Robert Pirsig), a scientific paper (“What is Data Ethics?” by Floridi and Taddeo) and an essay (The Ethics of Information by Luciano Floridi).

What are some teaching techniques you have employed that have worked particularly well? For Tech Ethics, what kind of approach to teaching do you recommend?

It depends on the context, but what I prefer are ‘Stanislavsky projects’. I do not assign exercises, because they don’t work when applied to reality. Nor do I find high-level case studies useful, because they are unrealistic. Stanislavsky projects, as I call them, are information-related projects in which, during the design phase, the participants have to immerse themselves in the role of the final stakeholder. In practice, I ask each person to explain a project they have done using data and information. I select the ones I think are most suitable for the training/learning objectives, divide the class into teams of 3-4 people, and let them work on the technical solution. I approach them with questions that challenge their certainties, using all my rhetoric. I ask about the ethical consequences of the technology, but I avoid using the word ‘ethics’. As they are drawing their diagrams, they have to stop and put themselves in the role of the final users (or other stakeholders) of the system. Putting themselves in the shoes of others forces them to move from the center of the information system to the periphery. Once the exercise is over, we hold a debriefing, and I bring into the discussion the philosophical words that describe their choices. As you can see, my pedagogical goal is not to teach an introduction to digital ethics, but to make people think ethically. After all, even a child learns to speak without knowing linguistics. Practice is what matters.

In your opinion, what are some of the things missing in the way Tech Ethics is currently taught? For instance, are there topics that are not covered enough (or at all)? What could be done to improve this field?

What is missing is a fine level of granularity. One can (and should) discuss the use of facial recognition in society, freedom of decision-making, transparency, and so on. However, ethical choices can (and should) be made at every level. Most people I know do not have access to high-level decisions, but they do have to decide on a pie chart, a button, a form, a process. Therefore, ethical choices can (and must) be made at a finer level of granularity.

In the landmark book The Design of Everyday Things, Donald Norman started from the observation that many of us make mistakes when opening a door, because we confuse pushing with pulling, even when it is explicitly marked by a sticker. Today, many years later, people still think the mistake is their fault, an error of distraction. In fact, Norman showed back in 1988 that the problem lay in the design of the door handle. All these years I have been inspired by that idea. I care less about huge problems that I could never solve; I am really interested in the small everyday annoyances. We work with data, store it, process it, analyze it, communicate it, turn it into information, and make it travel across different interfaces, and each time we can fit the right handle or the wrong one. So ethicists must focus their attention on making sure we use the right handle: we need granularity, we need to pay attention to the details. Because even if everything in an information system apparently works well, at the end of the day we will sprain our wrists if the handles are poorly designed!

How do you see the Tech Ethics Curriculum landscape evolve in the next 5 years? What are the changes you see happening?

We will be ethical when we no longer need to use the term ‘ethics’. When Germany was divided into two blocs, the East (where freedoms were curtailed) was called the German Democratic Republic. Even today, North Korea (far from being an example of individual freedoms) is called the Democratic People’s Republic. In short, when a principle is invoked in a name, it usually means that principle is not in fact realized. Today the word ethics is overused, exploited even by people with no philosophical competence. I don’t have a crystal ball and I’m very bad at predicting the future, so I can only share my hope with you. I dream that in the next five years, ethical reasoning (and not the history of ethical philosophy) can be integrated into any training course, both in schools and in professional settings. In particular, I hope that information ethics (which I consider to be the appropriate philosophy for our time) can establish itself as the right tool for ethical reasoning in the information and communication technology industry.


Bio of interviewee:

Enrico Panai is a Data & AI Ethicist and a Human Information Interaction Specialist. Following his studies in philosophy and multi-year experience as a consultant in Italy, he taught for seven years as an adjunct professor of Digital Humanities in the Department of Philosophy at the University of Sassari. Since his move to France in 2007, he has been working as a consultant for large corporations. He is the founder of the consultancy BeEthical.be, a member of the French Standardisation Committee for AI, and a senior fellow of ForHumanity, a non-profit association dedicated to the development of independent audits for artificial intelligence.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
