Montreal AI Ethics Institute

Democratizing AI ethics literacy


Analysis of the “Artificial Intelligence governance principles: towards ethical and trustworthy artificial intelligence in the European insurance sector”

November 11, 2021

🔬 Analysis by Philippe Dambly (Senior Lecturer at University of Liège) and Axel Beelen (Legal Consultant specialized in data protection and AI)

[Original document by EIOPA’s Consultative Expert Group on Digital Ethics in insurance]


Overview: Following the European Commission’s 2020 White Paper on Artificial Intelligence and its Proposal for a regulation on AI of 21 April 2021, the European Insurance and Occupational Pensions Authority (“EIOPA”) published, on 18 June 2021, a report towards ethical and trustworthy artificial intelligence in the European insurance sector — the first EU-level guidance on AI specific to insurance. The report is the result of the intensive work of EIOPA’s Consultative Expert Group on Digital Ethics in insurance. It aims in particular to help insurance companies implement AI applications and systems. The measures it proposes are risk-based and cover the entire lifecycle of an AI application.


Objectives of the report

The report begins by identifying the legal framework currently applicable to AI in the EU insurance sector. Existing legislation should indeed form the basis of any AI governance framework, but the different pieces of legislation need to be applied in a systematic manner and require unpacking to help organisations understand what they mean in the context of AI. Furthermore, an ethical use of data and digital technologies implies a more extensive approach than mere compliance with legal provisions: it must also take into consideration the provision of public goods to society as part of firms’ corporate social responsibility. The existing framework includes, in particular, the 2009 Solvency II Directive, the 2016 Insurance Distribution Directive (“IDD”), the General Data Protection Regulation (“GDPR”) and the 2002 ePrivacy Directive. Notably, the EIOPA report uses the definition of AI included in the Proposal for a regulation recently published by the European Commission.

Key Insights

Six Key Principles

The six key principles identified by the report are:

1. the principle of proportionality;

2. the principle of fairness and non-discrimination;

3. the principle of transparency and explainability;

4. the principle of human oversight;

5. the principle of data governance and record keeping; and

6. the principle of robustness and performance.

The high-level principles are accompanied by additional guidance for insurance firms on how to implement them in practice throughout the AI system’s lifecycle. For example, to implement the principle of proportionality, the report develops an AI use case impact assessment that could help insurance firms understand the potential impact of an AI use case and subsequently determine, in a proportionate manner, the “mix” of governance measures necessary to implement ethical and trustworthy AI systems within their organisations.

With regards to the use of AI in insurance pricing and underwriting, the report includes guidance on how to assess the appropriateness and necessity of rating factors, noting that correlation does not imply causation. From a transparency and explainability perspective, consumers should be provided with counterfactual explanations, i.e. they should be informed about the main rating factors that affect their premium to promote trust and enable them to adopt informed decisions.
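The idea of surfacing the main rating factors behind a premium can be sketched with a toy, fully transparent linear pricing model. This is not from the EIOPA report: the factor names, weights, and base premium below are invented purely for illustration of the kind of explanation a consumer could receive.

```python
# Toy sketch (not from the EIOPA report): a transparent linear pricing
# model whose per-factor contributions can be reported to the consumer.
# All factor names and weights below are invented for illustration.

BASE_PREMIUM = 300.0  # hypothetical base premium in EUR

WEIGHTS = {  # hypothetical per-unit surcharges
    "driver_age_under_25": 220.0,
    "annual_mileage_per_10k_km": 35.0,
    "past_claims": 150.0,
}

def explain_premium(profile):
    """Return the total premium and each factor's contribution, largest first."""
    contributions = {f: WEIGHTS[f] * v for f, v in profile.items()}
    total = BASE_PREMIUM + sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return total, ranked

premium, factors = explain_premium(
    {"driver_age_under_25": 1, "annual_mileage_per_10k_km": 2, "past_claims": 2}
)
print(f"Premium: EUR {premium:.2f}")  # EUR 890.00
for name, amount in factors:
    print(f"  {name}: {amount:+.2f}")
# Counterfactual-style statement: the premium absent the largest factor
print(f"Without past claims: EUR {premium - dict(factors)['past_claims']:.2f}")  # EUR 590.00
```

Real insurance pricing models are of course far more complex; the point is only that ranking factor contributions and stating what the premium would be under a changed input is the kind of counterfactual information the report envisages consumers receiving.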

Each principle is analyzed in the light of ethics, a transversal principle in AI. The report focuses on private insurance (life, health and non-life); the issues raised by big data and AI in social insurance would indeed need to be analysed separately. The experts’ analysis of the six principles is very rich and is complemented by multiple graphs and summary tables. The report considers each principle first in its generality and then deepens it through two or three specific insurance applications (such as pricing and underwriting, claims management and fraud detection).

Against this background, several initiatives have proliferated in recent years at the international, European and national levels aiming to promote ethical and trustworthy AI in our society. EIOPA also recognizes that AI is an evolving technology, with an ever-growing number of applications and continuous, in-depth research — particularly in the areas of transparency and explainability and of active fairness and non-discrimination. As these areas of application and research evolve, EIOPA warns, the recommendations included in the report may need to be revised in the future.

Legal value of the report?

But what is the legal value of this report? Do European insurance companies that want to introduce AI processes into their systems (whether opaque or not) already have to take its many recommendations into account, on the basis of the well-known “comply or explain” principle? Do they have to justify themselves if they wish to derogate from it?

The report states, on its page 2, that it was written by members of EIOPA’s Consultative Expert Group on Digital Ethics in insurance.

The European regulator created this group of experts in 2019 to support its work, but their views are purely advisory. It will therefore be necessary to wait for EIOPA’s governing bodies to take a position on the report to know whether its content will become mandatory for insurance companies.

However, given the quality of the drafting and the importance of the subject, there is little doubt that EIOPA will soon endorse this document more formally. It is therefore of the utmost importance that insurance companies (and others) take note of it and begin implementing the six principles summarized above.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.


© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.