Achieving a ‘Good AI Society’: Comparing the Aims and Progress of the EU and the US

September 21, 2021

🔬 Research summary by Dr. Andrea Pedeferri, instructional designer and leader in higher education (Faculty at Union College), and founder at Logica, where he helps learners become more efficient thinkers.

[Original paper by Huw Roberts, Josh Cowls, Emmie Hine, Francesca Mazzi, Andreas Tsamados, Mariarosaria Taddeo, and Luciano Floridi]


Overview: Governments around the world are formulating strategies to manage the risks and benefits of AI technologies. These strategies reflect the normative commitments articulated in high-level documents such as those of the EU High-Level Expert Group on AI and the IEEE, among others. The paper “Achieving a ‘Good AI Society’: Comparing the Aims and Progress of the EU and the US” compares the strategies adopted, and the progress made, in the EU and the US, and concludes by highlighting areas where improvement is still needed to reach a “Good AI Society”.


Introduction

Recently in these summaries, we have focused on how designers should implement values in AI systems and how design choices can be made more ethical. It is now time to turn to the role of policymakers and governments in shaping strategies and regulations that address the risks and benefits of AI technologies. The paper “Achieving a ‘Good AI Society’: Comparing the Aims and Progress of the EU and the US” compares the strategies adopted, and the progress made, in the EU and the US, and concludes by highlighting areas where improvement is still needed.

Key Insights

At MAIEI, we have looked at some recent research on AI governance in China. The current paper gives us the chance to look at how AI governance is shaping up in the US and the EU. Why only the EU and the US? As the authors explain, “We chose to focus on the EU and US in particular because of their global influence over AI governance, which far exceeds other countries (excluding China). More substantively, the EU and the US make for an interesting comparative case study because of their often-touted political alignment over guiding values, such as representative democracy, the rule of law and freedom.” The goal of the paper, then, is to analyze these governments’ “visions for the role of AI in society”, and in particular how they intend to develop a ‘Good AI Society’.

When making a comparative analysis of ethics-related issues, it is crucial to keep in mind that different societies and cultures may subscribe to different values and have different understandings of what developing a ‘Good AI Society’ actually means. At the same time, the authors rightly point out that “to consider no values as inherently ‘good’ is a form of extreme metaethical relativism (Brandt, 2001), according to which nothing of substance can ever be said justifiably about the respective merits of different visions.” The authors’ view is that we should instead adopt a form of “ethical pluralism”: as they explain, there are “many different valid visions of a ‘Good AI Society’, but […] each one needs to be underpinned by a set of values that are viewed at national and international levels as desirable. Such values are likely to include democracy, justice, privacy, the protection of human rights, and a commitment to environmental protection.” Thus, while they want to avoid ethical absolutism, the authors also voice the need to avoid the trap of ethical relativism.

  1. AI Governance in the European Union 

Since 2016 in particular, European countries have worked hard to find ways to regulate AI. They have put forward high-level requirements for trustworthy AI (e.g., robustness and transparency). Most recently, the EU released the draft Artificial Intelligence Act, “which proposes a risk-based approach to regulating AI.” As the authors explain, “the EU’s long-term vision for a ‘Good AI Society’, including the mechanisms for achieving it, appears coherent. The vision for governing AI is underpinned by fundamental European values, including human dignity, privacy and democracy. […] The risk-based approach, which combines hard and soft law, aims to ensure that harms to people are minimised, while allowing for the business and societal benefits of these technologies.”

However, this vision has some notable gaps: 

  • No reference is made to the “contribution of training AI models to increased greenhouse gas emissions.”
  • Not enough is done to “support collective interests and social values” (e.g., there is no right to group privacy).
  • Not enough emphasis is placed on “how to address systemic risk”. The draft focuses on “the risk to individuals from specific systems” but does not really consider “the potential of AI to cause wider societal disruptions.”
  • No clear position is taken on “the use of AI in the military domain.”
  • “The aim of boosting the EU’s industrial capacity is hamstrung by the current funding of the EU AI ecosystem, which has been criticised as being inadequate when compared to the US’s and China’s.”
  • No clear path exists to tackle disparities among European countries: “Some Member States, typically in Western Europe, have developed AI strategies, yet this is mostly not the case in Eastern and Southern Europe.”
  • The language around risk and risk assessment in the draft is vague and non-committal: “As a result, effective protection from high-risk systems will be largely reliant on interpretations by standards bodies and effective internal compliance by companies, which could lead to ineffective or unethical outcomes in practice.”
  2. The US Approach to AI

In 2016, two broad US reports on AI were released: ‘Preparing for the Future of Artificial Intelligence’ and the ‘National Artificial Intelligence Research and Development Strategic Plan’. These and other documents released in the last few years focus mostly on preserving US leadership in AI while limiting regulatory overreach. When it comes to ensuring a ‘Good AI Society’, the documents invoke ethical principles such as privacy, fairness and transparency. These principles, however, do not translate into a real AI governance strategy, and the tendency is to emphasize self-regulation by industry (see, for instance, IBM’s recent initiatives to ensure the trustworthy design and use of AI). The problem is that, as the authors point out, “the lack of specific regulatory measures and oversight can lead to practices such as ethics washing (introducing superficial measures), ethics shopping (choosing ethical frameworks that justify actions a posteriori) and ethics lobbying (exploiting digital ethics to delay regulatory measures).”

The US strategy is much more hands-on when it comes to international relations that concern the use and development of AI. For instance, the authors explain that “The American AI Initiative states the need to promote an international environment that opens markets for American AI industries, protects the US’s technological advantage and ensures that international cooperation is consistent with ‘American values’.” This has translated into a clear effort to frame AI as “a defence capability that is essential for maintaining technological, and therefore operational, superiority over the adversary.” However, the overall assessment is that the “US has not gone far enough in protecting its AI capacities, including its data sets and stopping the illicit transfer of technologies” (e.g., surveillance technology).

Between the lines

The paper concludes that when it comes to AI governance, “the EU’s approach is ethically superior”, as it strives to protect its citizens by implementing regulatory mechanisms, whereas the US has mainly focused on placing “the governance of AI” “in the hands of the private sector”. What we have not seen discussed in the paper, though, is the role that independent auditing of AI systems could play in both the US and the EU. It would be important to see how and whether independent auditing of AI could be applied in the US’s and/or the EU’s regulatory systems, and what the advantages and disadvantages of doing so might be (for instance, see here for an analysis of this issue).
