Montreal AI Ethics Institute


Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade

June 29, 2021

šŸ”¬ Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Lee Rainie, Janna Anderson and Emily A. Vogels]


Overview: How would you answer the following question: ā€œBy 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public good?ā€ An overwhelming majority of the experts surveyed (68%) say no, and their reasons extend well beyond ethics itself.


Introduction

How would you answer the following question: ā€œBy 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public good?ā€ A resounding 68% of the experts involved in this research answered no. Positives are few and far between in the findings, despite some clear examples, so let’s look into why that is the case.

Ethics is both vague and subjective

One prevalent theme throughout this piece is the frustratingly vague and subjective nature of ethics. There is no consensus over what ethical AI looks like, nor any agreement over what counts as a moral outcome. In this sense, it could rightly be said that our ethical frameworks are only ā€˜half-written books’, missing the crucial pages and chapters that would guide us. As a result, ethics turns out to be an iterative rather than a dogmatic process, requiring us to be comfortable with not knowing the potential outcomes and answers of a situation in advance. Unfortunately, this does not bode well for attempts to encode ethical systems into AI.

What I mean by this is that real-life situations can be too situational to programme into an ethical AI framework: genuine ethical dilemmas do not possess a single correct answer. Views of what is ethical also differ worldwide; countries such as China, for example, value social stability more highly than, say, Western countries do. Thus, when AI is applied in contexts such as warfare, it is unlikely that both sides of a conflict would employ the same ethical framework. Finding a common ethical thread could help fuse a potentially fractured approach to AI regulation, and I believe that thread lies in identifying the human in the AI process.

Identifying the human in the process

Here, the paper rightly points out the false claim that technological solutions are better than human solutions because they are based on ā€˜cold computing’ rather than ā€˜emotive human responses’. Instead, when we talk about AI ethics, we should perhaps be referring to human ethics mediated through AI. By this, I mean that there are no inherently good or evil mathematical functions; it is the human presence that determines the ethical propensity of an AI application. The obligation to be moral lies in the hands of corporations and system designers rather than in what the AI does.

As a result, the role the human plays in ā€˜feeding and nurturing’ their AI must be acknowledged. Supplying the system with adequate training data and proper privacy protections are two ways in which this role can be carried out meaningfully. Without such measures in place, AI has the potential to become the medium through which human bias, and our lack of understanding of it, is expressed. One environment in which this has become all too apparent is AI innovation.

Ethics doesn’t drive AI innovation

Effective AI has been prioritised over ethical AI. Looking at facial recognition systems such as Amazon’s Rekognition and IBM’s equivalent, it becomes clear that companies are prioritising an ā€˜E’ word, just not the one that should be emphasised. Techno-power, rather than ethical consideration, has become the main driver behind the pursuit of AI. As a consequence, the few at the helm of AI innovation have spread a techno-solutionist mindset throughout the practice, allowing AI to become the latest mask for the business interests and biases of the institutions and people involved. In this sense, AI has become the digital representation of the collective corporate mindset, meaning that, as some experts in the paper observed, so long as AI is owned, those who have access to it will benefit and those who do not will suffer the consequences.

In this sense, it is perhaps worth stepping back to see the wood for the trees and observe what AI is at its core.

Taking AI as it really is

One of the lures of AI is that it almost creates its own separate reality, filled with the promise of a different world apart from our current one. However, this distracts from what AI is in essence. AI applications in sectors such as law enforcement do what they are told to do; they possess neither a moral compass nor social awareness. In this sense, AI lacks contextual understanding as it sets out to achieve its goal. To illustrate, the paper notes how an AI tasked with keeping you dry would not be fussed about stealing an umbrella from an old lady in the street when it starts to rain. Recognising AI as a tool, or even going as far as calling it an extension of previous statistical techniques and innovations, could help cut away the confusing mist surrounding the technology. Viewing it as a tool can then help shape its future applications, including the incentives for action it brings with it.

The problem of incentives

One way to correct the corporate prioritisation of efficiency mentioned above could be to look into what incentivises businesses to act this way. The experts involved in the paper observe how, in its current state, the corporate world gains no benefit from coordinating on ethical AI, with businesses tending to prioritise efficiency, scale and automation rather than augmentation, inclusion and local context. If those incentives can be realigned, there certainly is a bright side to AI.

The positives

AI has been showing promise in education and health, helping to prioritise accessible and necessary digital skills in education programmes and improving the accuracy of certain diagnoses. The paper also observes that the more we develop AI, the more we appreciate the unique traits and special qualities of humans that are so hard to code. Qualities such as compassion, contextual understanding and decision-making are common across the human world, meaning AI could also prove the medium through which we bridge the conversation between countries. While these positives are few in the paper, they are worth keeping in mind nonetheless.

Between the lines

From my perspective, the kind of humans we want to be should be reflected in how we design our AI systems. That means refusing cheap and subversive techniques that sidestep complicated issues like justice, and prioritising the social good and social infrastructure over innovation for its own sake or the interests of governments. For me, this comes through acknowledging the human in the process, both as the protagonist of the AI process and as the eventual recipient of its positives and its negatives.

