
Analysis of the “Artificial Intelligence governance principles: towards ethical and trustworthy artificial intelligence in the European insurance sector”

November 11, 2021

🔬 Analysis by Philippe Dambly (Senior Lecturer at University of Liège) and Axel Beelen (Legal Consultant specialized in data protection and AI)

[Original document by EIOPA’s Consultative Expert Group on Digital Ethics in insurance]


Overview: Following the European Commission’s 2020 White Paper on Artificial Intelligence and its Proposal for a Regulation on AI of 21 April 2021, the European Insurance and Occupational Pensions Authority (“EIOPA”) published, on 18 June 2021, a report on ethical and trustworthy artificial intelligence in the European insurance sector. It is the first EU-level document dedicated to AI governance in insurance. The report is the result of the intensive work of EIOPA’s Consultative Expert Group on Digital Ethics in insurance. It aims in particular to help insurance companies implement AI applications and systems. The measures it proposes are risk-based and cover the entire lifecycle of an AI application.


Objectives of the report

The report begins by identifying the legal framework that currently applies to AI in the insurance sector in the EU. Existing legislation should indeed form the basis of any AI governance framework, but the different pieces of legislation need to be applied in a systematic manner and unpacked to help organisations understand what they mean in the context of AI. Furthermore, an ethical use of data and digital technologies implies a more extensive approach than merely complying with legal provisions: it also needs to take into consideration the provision of public goods to society as part of firms’ corporate social responsibility. The existing framework includes, in particular, the 2009 Solvency II Directive, the 2016 Insurance Distribution Directive (IDD), the General Data Protection Regulation (“GDPR”) and the 2002 ePrivacy Directive. It is worth noting that the EIOPA report uses the definition of AI included in the Commission’s recently proposed AI Regulation.

Key Insights

Six Key Principles

The six key principles identified by the report are:

1. the principle of proportionality;

2. the principle of fairness and non-discrimination;

3. the principle of transparency and explainability;

4. the principle of human oversight;

5. the principle of data governance and record keeping; and

6. the principle of robustness and performance.

The high-level principles are accompanied by additional guidance for insurance firms on how to implement them in practice throughout the AI system’s lifecycle. For example, to implement the principle of proportionality, the report develops an AI use case impact assessment that helps insurance firms understand the potential outcome of an AI use case and subsequently determine, in a proportionate manner, the “mix” of governance measures necessary to implement ethical and trustworthy AI systems within their organisations.
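As a rough illustration of how such a proportionality assessment might be operationalised, the following minimal sketch scores a use case on a few impact dimensions and maps the result to a governance tier. The dimensions, scores and tiers below are assumptions made for the sake of the example, not the report’s actual methodology.

```python
# Hypothetical sketch of an AI use case impact assessment.
# The dimensions, scores and governance tiers are illustrative assumptions;
# the EIOPA report defines its own assessment framework.

IMPACT_DIMENSIONS = [
    "impact_on_consumer_outcomes",   # e.g. pricing, claims acceptance
    "scale_of_deployment",
    "degree_of_autonomy",            # human-in-the-loop vs. fully automated
    "complexity_and_opacity",        # e.g. opaque ML models
]

def assess_use_case(scores: dict) -> str:
    """Map per-dimension scores (1 = low, 3 = high) to a governance tier."""
    total = sum(scores.get(dim, 1) for dim in IMPACT_DIMENSIONS)
    if total >= 10:
        return ("high impact: full governance mix (fairness testing, "
                "explainability, human oversight, audit trail)")
    if total >= 7:
        return "medium impact: targeted controls and periodic review"
    return "low impact: baseline controls and documentation"

# Example: an opaque pricing model applied to the whole portfolio.
print(assess_use_case({
    "impact_on_consumer_outcomes": 3,
    "scale_of_deployment": 3,
    "degree_of_autonomy": 2,
    "complexity_and_opacity": 3,
}))
```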

With regard to the use of AI in insurance pricing and underwriting, the report includes guidance on how to assess the appropriateness and necessity of rating factors, noting that correlation does not imply causation. From a transparency and explainability perspective, consumers should be provided with counterfactual explanations, i.e. they should be informed about the main rating factors that affect their premium, in order to promote trust and enable them to make informed decisions.
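To make the idea of a counterfactual premium explanation concrete, here is a minimal sketch (not taken from the report) of how an insurer might surface the main rating factors behind a quote. The rating factors, weights and baseline premium are purely illustrative assumptions.

```python
# Hypothetical illustration of a counterfactual premium explanation.
# Rating factors, weights and the baseline premium are invented for this
# sketch; they do not come from the EIOPA report.

BASE_PREMIUM = 400.0  # assumed baseline annual premium in EUR

# Assumed per-factor surcharges relative to a baseline policyholder profile.
FACTOR_WEIGHTS = {
    "driver_age_under_25": 250.0,
    "urban_postcode": 120.0,
    "claims_last_3_years": 90.0,   # per claim
    "annual_mileage_over_20k": 60.0,
}

def explain_premium(profile: dict) -> None:
    """Print the quoted premium and the main rating factors driving it."""
    contributions = []
    premium = BASE_PREMIUM
    for factor, weight in FACTOR_WEIGHTS.items():
        value = profile.get(factor, 0)
        contribution = weight * value
        if contribution:
            contributions.append((factor, contribution))
            premium += contribution

    print(f"Quoted premium: EUR {premium:.2f}")
    print("Main rating factors (counterfactual view):")
    for factor, contribution in sorted(contributions, key=lambda c: -c[1]):
        print(f"  - without '{factor}', the premium would be "
              f"EUR {contribution:.2f} lower")

explain_premium({
    "driver_age_under_25": 1,
    "urban_postcode": 1,
    "claims_last_3_years": 2,
})
```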

Each of the principles is analysed in the light of ethics, a transversal principle in AI. The report focuses on private insurance (life, health and non-life insurance); the issues that big data and AI may raise in social insurance would indeed need to be analysed separately. The experts’ analysis of the six principles is very rich and is complemented by multiple graphs and summary tables. Each principle is first considered in its generality and then deepened through two or three specific applications in the insurance sector (such as pricing and underwriting, claims management and fraud detection).

Against this background, several initiatives have proliferated in recent years at international, European and national level aiming to promote ethical and trustworthy AI in our society. EIOPA also recognizes that AI is an evolving technology with an ever-growing number of applications and continuous, in-depth research. This is particularly the case in the areas of transparency and explainability, as well as fairness and non-discrimination. As these areas of application and research evolve, EIOPA notes that the recommendations included in the report may need to be revised in the future.

Legal value of the report?

But what is the legal value of this report? Do European insurance companies that want to introduce AI processes into their systems (whether opaque or not) already have to take its many recommendations into account on the basis of the well-known “comply or explain” principle? Do they have to justify themselves if they want to derogate from them?

The report states, on page 2, that it was written by members of EIOPA’s Consultative Expert Group on Digital Ethics in insurance.

The European regulator created this expert group in 2019 to support its work, but the group’s views are purely advisory. It will therefore be necessary to wait for EIOPA’s governing bodies to take a position on the report before knowing whether its content could become mandatory for insurance companies.

However, given the quality of the drafting and the importance of the subject, there is no doubt that EIOPA will soon endorse this document more formally. It is therefore of the utmost importance that insurance companies (and others) take note of it and start to implement the six principles we have just summarized.

