🔬 Research Summary by Arun Teja Polcumpally, a Non-Resident Fellow, Center of Excellence – AI for Human Security, Doctoral fellow at Jindal School of International Affairs, India.
[Original paper by Francesca Foffano, Teresa Scantamburlo, and Atia Cortés]
Overview: This research paper examines the AI strategies of EU member states, and this summary explores it in light of the recently approved EU Artificial Intelligence Act (AIA). The paper examines the countries’ plans for responsible AI development and their human-centric and democratic approaches to AI. It categorizes the focus areas of 11 countries’ AI policies and highlights the potential negative impacts of AI systems designed for social outcomes. The analysis provides valuable insights for policymakers and the public as the EU progresses with AI regulation.
Introduction
On 14 June 2023, the European Parliament approved the EU Artificial Intelligence Act (AIA) proposal, which is now under review by the European Council, with a high likelihood of approval by the end of 2023. This legislative approach to AI regulation is a significant step for the EU, as no other democratic country or region has comparable regulation and ethical safeguards in place. Understanding the AI strategies of the EU and its member states becomes crucial in this context. The research paper “Investing in AI for Social Good: An Analysis of European National Strategies” offers an overview of the AI strategies of selected EU member states and examines their plans for responsible AI development, human-centric approaches, and inclusivity. The paper aims to categorize the focus areas of 11 countries’ AI policies, explore the potential negative impacts of AI systems designed with social outcomes in mind, and analyze the EU member states’ AI strategies and their stance on using AI for societal development.
Key Insights
The research paper analyzes the AI policy papers of 11 countries to identify their focus areas and evaluate their approaches to AI for social good. The countries were selected because they had officially released national AI strategies published in English: Austria, Belgium, Denmark, Finland, France, Germany, Lithuania, Malta, Spain, Sweden, and the Netherlands. The policy documents are examined using thematic analysis: themes are extracted, and common themes are grouped to form first-order observations. Second-order observations are then derived through an axial coding methodology, yielding peripheral themes.
The paper acknowledges the potential negative impacts of AI systems designed with social outcomes as the end goal. This is because deploying AI to address a social problem risks producing unexpected inconvenience and harm. After laying out the difficulty of objectively defining ‘AI for social good,’ the paper addresses three key questions: What are member states planning for responsible AI development? How are they translating the human-centric vision into targeted measures? And what plans do they have to make AI development more democratic and open to society?
The authors highlight the importance of AI policies as they provide a roadmap for investors, developers, and regulators based on societal values. The EU has three defining pillars for its AI strategy: boosting technological and industrial capacity, preparing for socio-economic changes driven by AI, and establishing an ethical and legal framework for trustworthy and accountable AI. Based on these pillars, the major takeaways from the analysis are building AI for public governance, educating and skilling society for an AI-driven world, establishing governance mechanisms for AI regulation, and creating a pan-European data and technological infrastructure.
Tracing the development of EU AI policy, the authors find that the EU has released four documents guiding member states in developing their AI strategies, emphasizing the need to consider changes in the labor market, improve industrial capacity, develop national AI strategies, and align with the General Data Protection Regulation (GDPR). Notably, the “Policy and Investment Recommendations of AI” document released in June 2019 emphasizes incorporating social science research into AI programs to promote inclusivity and minimize biases. This commendable step helps develop inclusive AI free of gender, racial, and social biases. After these four documents, the EU released a white paper outlining an AI regulatory framework, which culminated in the European Union Artificial Intelligence Act (2023).
Findings
Among the 11 analyzed countries, five explicitly mention investments in AI for public governance, while others report investments in AI-related academic research. However, explicit mentions of investments in AI for social good were not found, likely due to the wide spectrum of social problems that cannot be addressed individually. To mitigate the unknown impacts of AI on society, the paper recommends public participation in the AI development lifecycle.
Five countries express interest in establishing ethics committees to oversee AI use and development, focusing on collaboration and knowledge exchange. Examples include the Netherlands’ investment in collaboration among public entities and Denmark’s openness to national and international cooperation in building trustworthy AI.
Imbalances are observed in the national strategies: AI ethics is prioritized in the formulated strategies and policies, but concrete actions do not consistently reflect this prioritization. Investment data on AI for social good is not clearly presented across the AI strategies.
Between the lines
The findings of this research are significant as they shed light on the AI strategies of EU member states and their efforts to ensure responsible, inclusive, and ethical AI development. However, discrepancies between stated priorities and actual actions indicate a need for more cohesive implementation of AI policies across the member states. Further research could explore how to bridge this gap and identify ways to invest effectively in AI for social good while accounting for its potential negative impacts on society.
Two significant findings from the analysis are the recommendation of incorporating social science research into AI and the inclusion of public participation in the AI lifecycle. The emphasis on public participation aligns with anticipatory governance research, where the public is educated about the technology, consulted for their opinions on its usage and development, and involved in a feedback loop with AI developers. Notably, explicit mentions of such public participation are absent in the AI policy papers of the United States, China, and India. Encouraging social science research in the AI sector is also important to understand the societal impacts of AI and develop region-specific ethical principles.