
Response to the European Commission’s white paper on AI (2020)

June 17, 2020

Full paper available in PDF form (download).

Authors: Abhishek Gupta, Camylle Lanteigne

In February 2020, the European Commission (EC) published a white paper entitled On Artificial Intelligence – A European approach to excellence and trust. This paper outlines the EC’s policy options for the promotion and adoption of artificial intelligence (AI) in the European Union. We reviewed this paper and published a response addressing the EC’s plans to build an “ecosystem of excellence” and an “ecosystem of trust,” as well as the safety and liability implications of AI, the internet of things (IoT), and robotics.

Special thanks to the AI ethics community, which contributed its insights during our public consultations on this topic, held on May 27 and June 3, 2020.

Overview of our recommendations

  1. Focus efforts on the research and innovation community, member states, and the private sector, as these are the actors that should come first in Europe’s AI strategy.
  2. Create alignment between major trading partners’ policies and EU policies governing the development and use of AI.
  3. Analyze the gaps in the current ecosystem between theoretical frameworks and approaches to building trustworthy AI systems to create more actionable guidance that helps organizations implement these principles in practice.
  4. Focus on coordination and policy alignment, particularly in two areas: increasing financing for AI start-ups, and developing skills and adapting current training programs.
  5. Focus on mechanisms that promote the private and secure sharing of data in building up the European data space, leveraging technical advances such as federated learning, differential privacy, federated analytics, and homomorphic encryption (a minimal sketch of one of these techniques follows this list).
  6. Create a network of existing AI research excellence centres to strengthen the research and innovation community, with a focus on producing quality scholarship that takes into account a diverse array of values and ethical perspectives.
  7. Promote knowledge transfer and develop AI expertise for SMEs, as well as support partnerships between SMEs and other stakeholders through Digital Innovation Hubs.
  8. Add nuance to the discussion regarding the opacity of AI systems, so that these systems are governed through a graduated approach that specifies where, and to what degree, explainability and transparency are required.
  9. Create a process for individuals to appeal an AI system’s decision or output, such as a ‘right to negotiate,’ which is similar to the ‘right to object’ detailed in the General Data Protection Regulation (GDPR).
  10. Implement new rules and strengthen existing regulations to better address the concerns regarding AI systems.
  11. Ban the use of facial recognition technology, which could significantly lower the risks of discriminatory outcomes and breaches of fundamental rights.
  12. Hold all AI systems (i.e., low-, medium-, and high-risk applications) to similar standards and compulsory requirements.
  13. Ensure that if biometric identification systems are used, they fulfill the purpose for which they are implemented and are the most appropriate means of accomplishing that task.
  14. Implement a voluntary labelling system for systems that are not considered high-risk, which should be further supported by strong economic incentives.
  15. Appoint individuals to the human oversight process who understand the AI systems well and can communicate any potential risks effectively to a variety of stakeholders, so that those stakeholders can take appropriate action.
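
To make the technical methods named in recommendation 5 more concrete, below is a minimal sketch of one of them: the Laplace mechanism commonly used to provide differential privacy. It shows how a data holder could release an approximate aggregate (here, a count) without exposing any individual record. The dataset, query, and epsilon value are illustrative assumptions and are not part of the original response.

```python
# Minimal sketch of the Laplace mechanism for differential privacy.
# The records, query, and epsilon below are hypothetical examples.
import numpy as np

def laplace_count(data, epsilon):
    """Return a differentially private count of the records in `data`.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon yields epsilon-differential privacy for this query.
    """
    true_count = len(data)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: release an approximate count of a hypothetical user dataset
# without revealing whether any particular individual is included.
records = ["user_{}".format(i) for i in range(1042)]
print(laplace_count(records, epsilon=0.5))
```

Smaller epsilon values add more noise and give stronger privacy guarantees at the cost of accuracy; the mechanism is one building block that could support the kind of private, secure data sharing envisioned for the European data space.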
