Montreal AI Ethics Institute


Responsible Use of Technology in Credit Reporting: White Paper

July 26, 2023

🔬 Research Summary by Dr. Talha Ocal, a financial sector consultant in international development. His work is focused on financial regulatory & supervisory frameworks, AML/CFT, financial technologies, credit risk management, and financial inclusion.

[Original paper by World Bank]


Overview: Technology is at the core of credit reporting systems, which have evolved significantly over the past decade by adopting new technologies and business models. As disruptive technologies have been increasingly adopted around the globe, concerns have arisen over possible misuse or unethical use of these new technologies. These concerns inspired international institutions and national authorities to issue high-level principles and guidance documents on responsible technology use. While adopting new technologies benefits the credit reporting industry, unintended negative outcomes of these technologies from ethics and human rights perspectives must also be considered. Therefore, the white paper focuses on the responsible use of technology in credit reporting and, in particular, the ethical concerns with respect to the use of AI.


Introduction

Over the past decade, technological advancements, including advanced computing, artificial intelligence (AI), and machine learning (ML), have exploded, reshaping the credit reporting industry. Advanced computing and analytics, for example, enable regulators to access and use broader data sets for policymaking and supervision. Yet the spread of new technology and disruptive changes in the credit reporting ecosystem raise concerns about possible unintended negative consequences. The use of AI/ML and big data analytics has raised questions about the transparency of the processes, the privacy of the data being accessed, and potential biases internalized in the algorithms and models. Some technologies also raise privacy and security concerns, including data ownership and confidentiality issues.

While adopting new technologies benefits the credit reporting industry, their ethical and human rights implications must also be considered. International institutions, national regulatory agencies, and industry associations have issued guidance and directives on the responsible use of technology, but the effort remains in its infancy, and little of that guidance applies directly to credit reporting. Against this background, the white paper presents, for consideration by credit reporting industry stakeholders, a framework that combines ethics- and rights-based approaches to responsible technology use. The paper begins by reviewing the use of new technologies in credit reporting, then evaluates the rights and ethical frameworks that apply to such use, and proposes principles for responsible technology use in credit reporting. Applying the proposed principles as appropriate will help the industry make the best, most responsible use of disruptive technologies to the benefit of all stakeholders.

Key Insights

Technology is at the core of credit reporting systems, which have evolved significantly over the past decade by adopting new technologies and business models. Disruptive technologies such as advanced computing, AI, ML, big data analytics, and digital payments are reshaping the industry, giving credit reporting service providers (CRSPs) greater access to data, broader data-sharing capabilities, and improved analytics. While these technologies benefit the credit reporting industry, their unintended negative outcomes, viewed from ethics and human rights perspectives, must also be considered.

Against this background, the International Committee on Credit Reporting (ICCR) offers this white paper as a framework for the responsible use of technology in credit reporting activities. After a brief introduction, Section 2 discusses technology use in credit reporting, focusing on the key disruptive technologies the industry is increasingly adopting and on the alternative credit reporting service providers that technological advancement has brought into the industry. Section 3 describes the scope, development, and high-level principles of several key technology frameworks, selected for their global applicability, relevance to the credit reporting industry, and suitability from a responsible-use perspective. Section 4 introduces ten principles to guide the responsible use of technology in credit reporting activities. The principles are technology agnostic, so they apply to all technologies used in credit reporting, and they allow the industry to make the most responsible use of disruptive technologies while benefiting all stakeholders. Participants in credit reporting systems are expected to apply them proportionately, according to their technology use; the principles are not mutually exclusive, and each entity using technology-supported credit reporting systems should apply them in their totality.

The principles are as follows:

1. Fairness. Credit reporting systems should ensure the fair use of technologies deployed in their operations. Technology-driven credit reporting products should always protect individuals’ fundamental rights and not discriminate against individuals, groups of consumers, or SMEs.

2. Ethics. Credit reporting system participants should ensure that any technology they adopt and use complies with their corporate values, codes of conduct, and highest ethical standards. Technology-driven decisions should be held to the same ethical standards as human-driven decisions.

3. Accountability. Credit reporting system participants are accountable for using internally developed and externally resourced technologies. Appropriate governance mechanisms should be in place to oversee the processes of technology-driven credit reporting products.

4. Transparency. Credit reporting system participants should ensure that the techniques and methods used in their technology-driven decisions are explainable, assessable, and understandable by relevant stakeholders.

5. Security and Robustness. An appropriate data security framework should govern credit reporting systems to ensure the confidentiality, integrity, and availability of information at all times. The robustness of technologies should be ensured to avoid unintentional harm to individuals.

6. Lawfulness. Credit reporting system participants should ensure that the use of data and technologies is lawful and complies with relevant regulations and professional standards.

7. Privacy. Credit reporting system participants should protect the privacy of data subjects while accessing, collecting, analyzing, processing, and distributing their data for credit reporting.

8. Sustainability and Well-Being. Technologies employed in credit reporting systems should support human well-being and be sustainable in all human, social, cultural, economic, and environmental aspects.

9. Inclusivity. The adoption and use of technological innovations in credit reporting systems must not result in or accentuate the exclusion of any individual or group.

10. Trust. Technologies employed in credit reporting systems should be considered trustworthy by stakeholders, including data subjects and financial institutions.
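Several of these principles can be monitored quantitatively. The white paper does not prescribe a specific metric, but one common illustrative check for Principle 1 (Fairness) is the demographic parity difference: the gap in approval rates between two groups of applicants. The sketch below uses entirely hypothetical data and an arbitrary threshold; real monitoring would use production decisions and legally relevant protected attributes.

```python
# Illustrative fairness check for a credit-approval model: demographic
# parity difference (gap in approval rates between two groups).
# All data below are hypothetical.

def approval_rate(decisions):
    """Fraction of applicants approved (decisions are 0/1)."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(decisions_a, decisions_b):
    """Absolute gap in approval rates between group A and group B."""
    return abs(approval_rate(decisions_a) - approval_rate(decisions_b))

# Hypothetical model decisions for two applicant groups (1 = approved).
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # approval rate 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # approval rate 0.375

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.3f}")  # 0.375

# A simple governance rule might flag gaps above a chosen threshold
# (0.1 here is an arbitrary illustration, not a regulatory value).
if gap > 0.1:
    print("Fairness review triggered")
```

Such a check is only a starting point: demographic parity is one of several competing fairness definitions, and which metric is appropriate depends on the legal and ethical framework a credit reporting participant operates under.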

Finally, Section 5 discusses considerations for applying the principles. It discusses how to assess technology for possible use, highlights the need for capacity building, and provides additional technology-specific recommendations to guide adopters. The section concludes with use cases illustrating the principles in action.

Between the lines

Artificial intelligence grows more important every year, and finance and credit systems are central to these developments. The relationship between artificial intelligence and ethics has become one of the most closely watched topics of recent times; even the United Nations has underscored its importance with a recent report. Although the legal and ethical problems created by artificial intelligence and other technological developments manifest in every sector, the consequences may be most severe in finance, because banking and credit systems rest on principles such as reliability, legality, and transparency. As technology develops rapidly, the associated ethical and legal risks will grow with each passing day. This white paper focuses on the key risks that technological developments create in the credit reporting industry, and it seeks solutions to potential problems based on fundamental principles. It offers guidance to those in the banking sector, creditors, and financial technology companies.

