Montreal AI Ethics Institute

Democratizing AI ethics literacy

Public Strategies for Artificial Intelligence: Which Value Drivers?

October 8, 2021

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Gianluigi Viscusi, Anca Rusu, and Marie-Valentine Florin]


Overview: Different nations are now catching on to the need for national AI strategies to secure their futures. However, what really drives AI, and whether those drivers align with the fundamental values at the heart of each nation, is a different question.


Introduction

With the release of the UK Government’s AI strategy, the question of what is driving its design comes to mind. AI has long been touted as a sure way to improve public service delivery and administration. However, the question of what kind of values currently drive government initiatives has received little attention. Are the private and public spheres motivated by different values? Does humanity itself enter into the AI conversation? Is AI being treated in too instrumental a manner? I will now explore these three questions in turn.

Key Insights

Do the private and public sectors differ?

With different end goals and different audiences, the private and public sectors can differ on many aspects of governance. However, “accountability, expertise, reliability, efficiency, and effectiveness” (p. 2) were found to be held in common between the two spheres, as were “professionalism”, “efficiency”, “openness” and “inclusion” (p. 2). I believe what can differentiate the two are their interpretations of these values. What counts as efficiency will differ from business to business, especially in terms of where the threshold is set. Moreover, “inclusion” (p. 2) can vary widely in extent, depth and what it entails: being included in AI governance could range from participating in its design to just occasionally being informed of changes being made to an AI. Most of the time, a lack of inclusion is witnessed.

A lack of focus on humanity

Principles such as “transparency”, “privacy” and “responsibility” are mentioned in AI governance strategies more often than “human dignity” (p. 4). AI is often touted in the media as the golden ticket to greater ‘prosperity’ for humanity, but humanity’s role in this prosperity is often left untouched. To illustrate, each country’s AI strategy identified fewer risks and challenges than it did values. So, what are these values contributing towards if governments do not articulate the problem? From my reading of the report, AI is far more readily accepted as a tool than as a socially designed and socially sensitive technology.

Instrumental normativity over social normativity

Terms such as privacy, efficiency and transparency are often preferred by governments over words such as democracy. One way to explain this substitution is the tension between improving administrative features and simultaneously addressing societal issues. As a result, the values held at the core of constitutions and manifestos are often sidelined when thinking about AI, as the lack of reference to democracy shows. Questions then arise as to whether values serve as sound guiding principles for AI at all, or whether they are another example of a tokenistic gesture.

Between the lines

What is crucial for me to consider is how what is valued varies across different nations. As explained in one of our event summaries on AI in different national contexts, the cultural interpretation of a given value can vary widely. However, fragmentation is not always a bad thing if each country adheres appropriately to its own interpretation.

The next topic of debate comes from how the fundamental values expounded by the entities in the report did not align with the values shown in the AI strategies. Committing to AI values beyond simply writing them down is a known struggle within the space. For me, writing down a value alongside the problem it helps to prevent, if championed correctly, is one way of contextualising and focusing efforts on the issue at hand. The more context and grounding there is in what AI is being deployed to do, the more the equally valuable social values we hold can come to the fore.


  • © MONTREAL AI ETHICS INSTITUTE. All rights reserved 2024.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.