Montreal AI Ethics Institute


Investing in AI for Social Good: An Analysis of European National Strategies

July 6, 2023

🔬 Research Summary by Arun Teja Polcumpally, Non-Resident Fellow at the Center of Excellence – AI for Human Security and doctoral fellow at the Jindal School of International Affairs, India.

[Original paper by Francesca Foffano, Teresa Scantamburlo, and Atia Cortés]


Overview: This research paper examines the AI strategies of EU member states, and this summary reads it in light of the recently approved EU Artificial Intelligence Act (AIA). The paper explores the countries’ plans for responsible AI development and for human-centric, democratic approaches to AI. It categorizes the focus areas of 11 countries’ AI policies and highlights the potential negative impacts of AI systems designed for social outcomes. The analysis offers valuable insights for policymakers and the public as the EU progresses with AI regulation.


Introduction

On 14 June 2023, the European Parliament approved the EU Artificial Intelligence Act (AIA) proposal, which is now under review by the European Council and has a high likelihood of approval by the end of 2023. This legislative approach to AI regulation is a significant step for the EU, as no other democratic country or region has comparable AI regulation and ethics requirements in place. Understanding the AI strategies of the EU and its member states becomes crucial in this context. The research paper “Investing in AI for Social Good: An Analysis of European National Strategies” offers an overview of the AI strategies of selected EU member states and examines their plans for responsible AI development, human-centric approaches, and inclusivity. The paper categorizes the focus areas of 11 countries’ AI policies, explores the potential negative impacts of AI systems designed with social outcomes in mind, and analyzes the member states’ stances on using AI for societal development.

Key Insights

The research paper analyzes the AI policy papers of 11 countries to identify their focus areas and evaluate their approaches to AI for social good. The countries were selected because they had officially released national AI strategies published in English: Austria, Belgium, Denmark, Finland, France, Germany, Lithuania, Malta, Spain, Sweden, and the Netherlands. The policy documents are examined through thematic analysis: themes are extracted and grouped into first-order observations, and second-order observations are then derived through axial coding, yielding peripheral themes.

The paper acknowledges the potential negative impacts of AI systems designed with social outcomes as the end goal: deploying AI to address a social problem risks producing unexpected inconvenience and harm. After laying out the difficulty of objectively defining ‘AI for social good,’ the paper addresses three key questions: What are member states planning for responsible AI development? How are they translating the human-centric vision into targeted measures? And what plans do they have to make AI development more democratic and open to society?

The authors highlight the importance of AI policies as they provide a roadmap for investors, developers, and regulators based on societal values. The EU has three defining pillars for its AI strategy: boosting technological and industrial capacity, preparing for socio-economic changes driven by AI, and establishing an ethical and legal framework for trustworthy and accountable AI. Based on these pillars, the major takeaways from the analysis are building AI for public governance, educating and skilling society for an AI-driven world, establishing governance mechanisms for AI regulation, and creating a pan-European data and technological infrastructure.

Tracing the development of EU AI policy, the authors find that the EU has released four documents guiding member states in developing their AI strategies, emphasizing the need to consider changes in the labor market, improve industrial capacity, develop national AI strategies, and align with the General Data Protection Regulation (GDPR). Notably, the “Policy and Investment Recommendations for Trustworthy AI” document, released in June 2019, emphasizes incorporating social science research into AI programs to promote inclusivity and minimize biases. This commendable step will help develop inclusive AI free of gender, racial, and social biases. After these four documents, the EU released a white paper outlining an AI regulatory framework, which culminated in the EU Artificial Intelligence Act (2023).

Findings

Among the 11 analyzed countries, five explicitly mention investments in AI for public governance, while others report investments in AI-related academic research. However, explicit mentions of investments in AI for social good were not found, likely due to the wide spectrum of social problems that cannot be addressed individually. To mitigate the unknown impacts of AI on society, the paper recommends public participation in the AI development lifecycle.

Five countries express interest in establishing ethics committees to oversee AI use and development, focusing on collaboration and knowledge exchange. Examples include the Netherlands’ investment in collaboration among public entities and Denmark’s openness to national and international cooperation in building trustworthy AI.

The analysis also observes imbalances in the national strategies: AI ethics is prioritized in the formulated strategies and policies, but actions do not consistently reflect this prioritization. Investment data on AI for social good is not clearly presented across the AI strategies.

Between the lines

The findings of this research are significant as they shed light on the AI strategies of EU member states and their efforts to ensure responsible, inclusive, and ethical AI development. However, discrepancies between the stated priorities and actual actions indicate a need for more cohesive implementation of AI policies across the member states. Further research could explore bridging this gap and identify ways to effectively invest in AI for social good while considering the potential negative impacts on society.

Two significant findings from the analysis are the recommendation of incorporating social science research into AI and the inclusion of public participation in the AI lifecycle. The emphasis on public participation aligns with anticipatory governance research, where the public is educated about the technology, consulted for their opinions on its usage and development, and involved in a feedback loop with AI developers. Notably, explicit mentions of such public participation are absent in the AI policy papers of the United States, China, and India. Encouraging social science research in the AI sector is also important to understand the societal impacts of AI and develop region-specific ethical principles.


  • © MONTREAL AI ETHICS INSTITUTE. All rights reserved 2024.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.