Montreal AI Ethics Institute

Democratizing AI ethics literacy

Public Strategies for Artificial Intelligence: Which Value Drivers?

October 8, 2021

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Gianluigi Viscusi, Anca Rusu, and Marie-Valentine Florin]


Overview: Different nations are now recognizing the need for national AI strategies for the good of their futures. However, what really drives AI, and whether those drivers align with the fundamental values at the heart of different nations, is another question.


Introduction

With the release of the UK Government’s AI strategy, the question of what is driving its design comes to mind. AI has long been touted as a sure way to improve public service delivery and administration. However, the question of what values are currently driving government initiatives has received little attention. Are the private and public spheres motivated by different values? Does humanity itself enter into the AI conversation? Is AI being treated in too instrumental a manner? I will explore these three questions in turn.

Key Insights

Do the private and public sectors differ?

With different end goals and different audiences, the private and public sectors can differ on many aspects of governance. However, “accountability, expertise, reliability, efficiency, and effectiveness” (p. 2) were found to be held in common between the two spheres, as were “professionalism”, “efficiency”, “openness” and “inclusion” (p. 2). What I believe differentiates the two is how these shared values are interpreted. What counts as efficient, and where the threshold lies, will differ from business to business. Likewise, “inclusion” (p. 2) can vary widely in extent, depth and what it entails: being included in AI governance could range from participating in a system’s design to merely being informed occasionally of changes made to an AI. Most of the time, a lack of inclusion is witnessed.

A lack of focus on humanity

Principles such as “transparency”, “privacy” and “responsibility” are mentioned in AI governance strategies more often than “human dignity” (p. 4). AI is often touted in the media as the golden ticket to greater ‘prosperity’ for humanity, but humanity’s role in that prosperity is usually left unexamined. To illustrate, each country’s AI strategy identified fewer risks and challenges than it did values. So what are these values contributing towards if governments do not articulate the problem? From my reading of the report, AI is far more readily accepted as a tool than as a socially designed and socially sensitive technology.

Instrumental normativity over social normativity

Governments often prefer terms such as privacy, efficiency and transparency over words such as democracy. One way to explain this substitution is the tension between improving administrative capabilities and simultaneously addressing societal issues. As a result, the values held at the core of constitutions and manifestos are often sidelined when thinking about AI, as the absence of references to democracy shows. Questions then arise as to whether values serve as sound guiding principles for AI at all, or whether they are another example of a tokenistic gesture.

Between the lines

What is crucial for me to consider is how much what is valued varies across nations. As explained in one of our event summaries on AI in different national contexts, the cultural interpretation of the same values can vary widely. However, fragmentation is not always a bad thing if each country adheres faithfully to its own interpretation.

The next topic of debate stems from how the fundamental values that the entities in the report espoused did not align with the values shown in their AI strategies. Committing to AI values beyond simply writing them down is a known struggle within the space. For me, writing down each value alongside the problem it helps to prevent, if championed correctly, is one way of contextualising and focusing efforts on the issue at hand. The more context and grounding there is in what AI is being deployed to do, the more the equally important social values we hold can come to the fore.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
