
A Lesson From AI: Ethics Is Not an Imitation Game

June 5, 2022

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Gonzalo Génova, Valentín Moreno Pelayo and M. Rosario González Martín]


Overview: While the Turing test significantly influenced thinking about machine intelligence, it gave little room to ethical considerations. With this in mind, we must be careful not to treat ethics as a neatly packaged set of rules we can feed into a machine.


Introduction

The Turing test was hugely influential on thinking about machine intelligence and on the subsequent emergence of “programmed ethics” (p. 75). Yet, given the test’s lack of emphasis on ethical issues, “learned ethics” (p. 75) and considerations of explainability and bias also arose. Here, experiments such as MIT’s Moral Machine emerged. While the Moral Machine allowed for public engagement with the moral issues raised by AI, ethics must not consist in simply replicating majority behaviour. Bearing this in mind will help us distinguish between what is merely a personal preference and what is a societal value. But first, let us consider some initial thoughts on machine intelligence.

Key Insights

Initial considerations on machine intelligence

Initially, the Turing test aimed to demonstrate machine intelligence by solving ‘closed’ problems, such as playing chess or imitating a human. In other words, machine intelligence was measured by how well a machine could find the known solution to a well-defined problem. While the experiment itself gave little consideration to its ethical implications, Turing himself certainly cared about ethics. Nevertheless, drawing inspiration from the Turing test, “programmed” and “learned” ethics (p. 75) emerged.

“Programmed” vs “learned” ethics

Programmed ethics breaks the solution to a problem down into sequential steps for the machine to follow. In the Turing test, the machine’s end goal of deceiving a human was broken down into steps such as ‘greeting the human’ or ‘asking questions’, amongst others. However, this approach requires the problem to have a clear desired outcome or solution in the first place, which is certainly not the case with complex ethical issues.
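To make the contrast concrete, here is a minimal sketch of what “programmed ethics” amounts to. The rule set, scenario fields and function names are invented for illustration and are not from the paper; the point is only that the designer fixes the desired outcome in advance.

```python
# A minimal sketch of "programmed ethics": the designer fixes the
# desired outcome in advance and hard-codes the steps to reach it.
# The rules and scenario fields are illustrative, not from the paper.

RULES = [
    # (condition, verdict) pairs, checked in sequence
    (lambda s: s["harms_human"], "forbidden"),
    (lambda s: s["breaks_law"], "forbidden"),
    (lambda s: s["benefits_user"], "permitted"),
]

def programmed_verdict(scenario: dict) -> str:
    """Walk the rule list in order and return the first verdict that fires."""
    for condition, verdict in RULES:
        if condition(scenario):
            return verdict
    # The scheme breaks down exactly here: any situation the designer
    # did not anticipate has no principled answer.
    return "undefined"

print(programmed_verdict(
    {"harms_human": False, "breaks_law": False, "benefits_user": True}
))  # -> "permitted"
```

The brittleness is visible in the fallback branch: the approach only works where a clear, pre-specified solution exists.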

Consequently, the practice of learned ethics arose. Treating ethics in a boxed-up, rigid and sequential fashion fails to account for the nature of ethical problems, which constantly evolve and whose outcomes are largely unknown. This shift tied in nicely with advances in the AI space, namely machine learning and deep learning.

MIT’s Moral Machine

As a result of these developments, moral reflection on autonomous technologies has become more frequent. For example, MIT’s Moral Machine asks users to choose the lesser of two evils on behalf of an autonomous car. The user picks between two options for the vehicle, such as running over a man or running over a woman, relying on their moral faculties to make what they think is the most appropriate decision. These responses are recorded and can then be reviewed by those running the experiment.

Experts then try to learn from the aggregate responses of the population that took part in the experiment. However, programming a machine’s ethics accordingly risks a tyranny of the majority, where whatever the majority says is treated as right. The majority’s opinion is not always the same as society’s values. For example, we hold that bias within AI is wrong even when a biased algorithm harms only a minority of the population.
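To see why majority aggregation is a poor proxy for societal values, consider this hypothetical sketch (the response data is invented, not taken from the Moral Machine): a simple majority vote over recorded answers erases any minority position entirely.

```python
from collections import Counter

# Hypothetical recorded responses to one Moral Machine-style dilemma:
# 60 of 100 respondents chose option A, 40 chose option B.
responses = ["A"] * 60 + ["B"] * 40

def majority_rule(responses: list[str]) -> str:
    """Return the single most common answer, discarding all dissent."""
    return Counter(responses).most_common(1)[0][0]

print(majority_rule(responses))  # -> "A"
# If a machine's ethics were programmed from this output alone, the 40%
# minority view would carry zero weight: the tyranny of the majority, encoded.
```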

For this reason, attending to explainability and bias is becoming a best practice within AI ethics. If we cannot explain the decision an AI algorithm makes in a highly divisive ethical situation, we cannot appropriately mitigate any adverse outcomes. Explainability also helps minimise bias, for instance by making it possible to analyse whether a dataset is genuinely representative.
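One concrete way such analysis can surface bias is by comparing a dataset’s demographic make-up against the population it is meant to represent. A toy sketch, with all proportions invented for illustration:

```python
# Toy representativeness check: compare the share of each group in the
# training data against its share in the target population.
# All figures are invented for illustration.

population_share = {"group_x": 0.50, "group_y": 0.30, "group_z": 0.20}
dataset_share    = {"group_x": 0.72, "group_y": 0.25, "group_z": 0.03}

for group, expected in population_share.items():
    observed = dataset_share[group]
    gap = observed - expected
    flag = ("UNDER-represented" if gap < -0.05
            else "over-represented" if gap > 0.05
            else "ok")
    print(f"{group}: dataset {observed:.0%} vs population {expected:.0%} ({flag})")

# group_z appears at 3% instead of 20%: a model trained on this data can
# look accurate overall while failing the minority it barely saw.
```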

In the spirit of this point, the Moral Machine was opened to a relatively diverse audience: answers were collected in 10 different languages across 233 countries and territories (p. 77). Yet such a variety of responses perhaps simply reflects cultural practices rather than ethical maxims. On the one hand, ‘ethics’ is used in a normative sense, prescribing what is right and wrong; on the other hand, ‘moral’ takes on a descriptive sense, detailing social customs. Hence, MIT’s Moral Machine certainly helps to reflect cultural traditions. However, for these to become normative laws, we must consider what such cultural practices imply about how we ought to act. This will help us distinguish between what is simply a personal preference and what should be a societal value.

Between the lines

In my opinion, the phrase “programmed ethics” (p. 75) almost sounds like an oxymoron. It presents ethics as a neatly packaged set of rules that we know will work at all times and can simply slot into an AI. Given that this is not the case, machines still rely heavily on human input for guidance. For example, despite the talk about artificial general intelligence, such technology is still modelled on human activity: to teach an algorithm to recognise faces, we first ask humans to do it. Hence, while the integration of AI and ethics rightly deserves attention, the human role in this process must not be underestimated.

