Montreal AI Ethics Institute
Democratizing AI ethics literacy

Public Strategies for Artificial Intelligence: Which Value Drivers?

October 8, 2021

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Gianluigi Viscusi, Anca Rusu, and Marie-Valentine Florin]


Overview: Different nations are now recognizing the need for national AI strategies to secure their futures. However, what really drives AI, and whether those drivers align with the fundamental values at the heart of different nations, is a different question.


Introduction

With the release of the UK Government’s AI strategy, the question of what is driving its design comes to mind. AI has long been touted as a sure way to improve public service delivery and administration. However, the question of what kind of values currently drive government initiatives has received little attention. Are the private and public spheres motivated by different values? Does humanity itself enter into the AI conversation? Is AI being treated in too instrumental a manner? I will now explore these three questions in turn.

Key Insights

Do the private and public sectors differ?

With different end goals and different audiences, the private and public sectors can differ on many aspects of governance. Nevertheless, “accountability, expertise, reliability, efficiency, and effectiveness” (p. 2) were found to be held in common between the two spheres, as were “professionalism”, “efficiency”, “openness” and “inclusion” (p. 2). What I believe can differentiate the two is how these shared values are interpreted. What counts as efficiency will differ from business to business, especially in terms of the threshold deemed acceptable. Likewise, “inclusion” (p. 2), and who it extends to, can vary widely in extent, depth and what inclusion entails: being included in AI governance could range from participating in a system’s design to merely being informed occasionally of changes made to an AI. Most of the time, a lack of inclusion is witnessed.

A lack of focus on humanity

Principles such as “transparency”, “privacy” and “responsibility” appear in AI governance strategies more often than “human dignity” (p. 4). AI is often touted in the media as the golden ticket to greater ‘prosperity’ for humanity, but humanity’s role in this prosperity is often left untouched. To illustrate, each country’s AI strategy identified fewer risks and challenges than it did values. So, what are these values contributing towards if governments do not articulate the problem? From my reading of the report, AI is far more accepted as a tool than as a socially designed and socially sensitive technology.

Instrumental normativity over social normativity

Terms such as privacy, efficiency and transparency are often preferred by governments over words such as democracy. One way to express this substitution is as a tension between improving administrative features and simultaneously focusing on societal issues. As a result, the values held at the core of constitutions and manifestos are often sidelined when thinking about AI, as the lack of reference to democracy shows. Questions then arise as to whether values serve as sound guiding principles for AI at all, or whether they are another example of a tokenistic gesture.

Between the lines

What is crucial for me to consider is how much what is valued varies across nations. As explained in one of our event summaries on AI in different national contexts, the cultural interpretation of different values can vary widely. However, fragmentation is not always a bad thing if each country adheres appropriately to its own interpretation.

The next topic of debate stems from how the fundamental values expounded by the entities in the report did not align with the values shown in the AI strategies. Committing to AI values beyond simply writing them down is a known struggle within the space. For me, writing down each value alongside the problem it, if championed correctly, helps to prevent is one way of contextualising and focusing efforts on the issue at hand. The more context and grounding in what AI is being deployed to do, the more clearly the equally valuable social values we hold can appear.

