
AI in Finance: 8 Frequently Asked Questions

September 22, 2019

By Abhishek Gupta (Founder of the Montreal AI Ethics Institute, and Machine Learning Engineer at Microsoft, where he sits on the AI Ethics Review Board for Commercial Software Engineering)


1: What are the ethical challenges of applying AI to Finance?

Finance has a significant impact on people's lives and therefore carries many ethical challenges. The primary one is discrimination based on sensitive attributes, often hidden behind the complex decision-making of deep learning systems and a lack of transparency about whether such systems are even being used to evaluate whether you should be granted a loan or receive some other financial decision.

Other challenges include the automatic, disproportionate distribution of financial opportunities such as lower interest rates. This differs from active discrimination because these are benefits you don't necessarily apply for: the credit limit on your credit card, for example, may be offered to you automatically based on undisclosed metrics. That was true before automation as well, but at least when you inquired in person there was a human who had made the decision and could explain it. Now you won't be able to ask for an explanation unless the institution uses a simple regression model, or one in a category of models that can be explained with techniques like LIME (which some institutions use to meet their audit and regulatory requirements).
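The core idea behind local explanation techniques like LIME can be sketched in a few lines: perturb the inputs around one applicant, query the black-box model, and fit a proximity-weighted linear model whose coefficients serve as local feature weights. Everything below (the scoring function, feature names, kernel width) is invented for illustration, not a real institution's model.

```python
import numpy as np

# Hypothetical black-box credit scorer: approval probability from a
# nonlinear combination of normalized (income, debt_ratio).
def black_box_score(X):
    income, debt = X[:, 0], X[:, 1]
    return 1.0 / (1.0 + np.exp(-(0.8 * income - 1.5 * debt * debt)))

def local_surrogate(x, predict, n_samples=2000, width=0.5, seed=0):
    """Fit a proximity-weighted linear model around x (the idea behind LIME)."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=width, size=(n_samples, x.size))  # perturbations
    y = predict(Z)
    # Weight samples by closeness to x (Gaussian kernel).
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * width ** 2))
    # Weighted least squares: solve (A^T W A) beta = A^T W y.
    A = np.hstack([np.ones((n_samples, 1)), Z])
    Aw = A * w[:, None]
    beta, *_ = np.linalg.lstsq(Aw.T @ A, Aw.T @ y, rcond=None)
    return beta[1:]  # local feature weights (intercept dropped)

x0 = np.array([1.0, 1.0])  # one applicant
weights = local_surrogate(x0, black_box_score)
print(dict(zip(["income", "debt_ratio"], weights.round(3))))
```

For this applicant the surrogate attributes a positive local weight to income and a negative one to the debt ratio, which is the kind of human-readable summary an auditor or regulator could review.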

2: Should consumers have the right to obtain confirmation that their personal data has been used in automated decision-making?

Absolutely! It's important to know that you've been subjected to automated decision-making, especially given recent legislation like the GDPR, which requires that people be informed when they are subject to automated decision-making and gives them recourse to have a human review the decision. Ultimately, this comes down to consumer trust and transparency in the process, which matters for retaining business as more and more consumers become aware of such rights and begin to demand them.

3: How is AI impacting customers' privacy in Finance?

When external datasets (potentially sourced from data brokers) are leveraged via the mosaic effect to create a "richer" profile of the consumer, privacy takes a big hit; this is often the behind-the-curtains magic that drives financial decisions about someone without their consent. Where possible, it is in a business's best interest, for the sake of consumer trust, to communicate transparently about how it computes metrics about consumers and how it determines which offers to extend. So far, the privacy intrusions have been invasive, and consumers are none the wiser.
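The mosaic effect is easy to demonstrate: two datasets that look anonymous on their own can re-identify someone once joined on shared quasi-identifiers. The records and field names below are invented for illustration.

```python
# Two "anonymous" datasets: each is harmless alone, but joining them on
# quasi-identifiers (zip + birth year) ties a name to a financial profile.
bank_records = [
    {"zip": "10001", "birth_year": 1985, "credit_limit": 12000},
    {"zip": "94107", "birth_year": 1990, "credit_limit": 3000},
]
broker_records = [
    {"zip": "10001", "birth_year": 1985, "name": "J. Doe"},
]

def mosaic_join(left, right, keys=("zip", "birth_year")):
    """Inner-join two record lists on quasi-identifier keys."""
    merged = []
    for a in left:
        for b in right:
            if all(a[k] == b[k] for k in keys):
                merged.append({**a, **b})
    return merged

profiles = mosaic_join(bank_records, broker_records)
print(profiles)
```

With only two shared fields, the first customer is re-identified and their credit limit is now linked to a name, without any single dataset ever containing both.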

4: How do financial systems become unethical and how can we avoid this?

These systems become unethical when they stray beyond their declared purposes and utilize sources of data beyond what the consumer expects. The simplest way to avoid this is to declare a priori what the purpose of the system is, what the data sources will be, how the system will be used, and what decisions it will make about the user; then, most importantly, to stick to those declarations and publicly issue statements of compliance (SoC), ideally evaluated by an independent third party.
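One lightweight way to make such a declaration auditable is to publish it in machine-readable form and check observed behavior against it. The schema, system name, and data sources below are a hypothetical sketch, not an established standard.

```python
# Hypothetical machine-readable purpose declaration, published up front so
# that an auditor can compare the system's observed behavior against it.
declaration = {
    "system": "loan-eligibility-scorer",
    "purpose": "assess eligibility for personal loans",
    "data_sources": ["application form", "internal repayment history"],
    "automated_decisions": ["approve", "refer to human reviewer"],
}

def check_compliance(declaration, observed_sources):
    """Return any data sources the system used but never declared."""
    undeclared = set(observed_sources) - set(declaration["data_sources"])
    return sorted(undeclared)

# An audit finds the system also pulled a broker feed: that gets flagged.
violations = check_compliance(declaration, ["application form", "data broker feed"])
print(violations)
```

A statement of compliance could then simply attest that this check (and others like it) passed, with the third-party auditor re-running it independently.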

5: Do you think systems need to be explainable and why?

To build customer trust and comply with some of the requirements above, it will be important for these systems to be explainable. Explainability also allows people to judge whether the outputs are fair and whether the system behaves as declared by its creators and maintainers.

6: How can AI transform FinTech for good?

There are many ways. Primarily, expanding access to services and offering them at lower cost through automation will allow more people, especially those who were previously "unbanked," to participate in formal financial markets. This creates more opportunities for empowerment and can potentially lift people out of poverty by giving them access to funds for activities that improve their financial health.

7: What regulations are in place to avoid bias and where are the gaps?

There are currently very few measures, if any, in place to challenge bias in these systems. The biggest gap is a failure to acknowledge that the problem exists, owing to the still-popular notion, sometimes called math-washing, that numerical systems are inherently less biased than their human counterparts.

Additionally, even where institutions do recognize the problem, they lack the tools to fix it. The most important thing is to recognize where biases exist, act swiftly to fix them with appropriate tools, and verify with standardized tests that they really have been fixed.
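A standardized bias check can be as simple as comparing outcome rates across groups. The sketch below computes a disparate-impact ratio from invented loan decisions and flags it against the commonly cited four-fifths threshold; this is an illustrative screening heuristic, not a legal test.

```python
# Hypothetical audit data: (group, approved) pairs for loan applications.
approvals = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 0), ("B", 1), ("B", 0), ("B", 0), ("B", 1),
]

def approval_rate(group):
    decisions = [ok for g, ok in approvals if g == group]
    return sum(decisions) / len(decisions)

rate_a, rate_b = approval_rate("A"), approval_rate("B")
# Disparate-impact ratio: the disadvantaged group's rate over the other's.
disparate_impact = min(rate_a, rate_b) / max(rate_a, rate_b)
flagged = disparate_impact < 0.8  # "four-fifths rule" screening threshold
print(f"A={rate_a:.2f} B={rate_b:.2f} ratio={disparate_impact:.2f} flagged={flagged}")
```

Running the same check before and after a mitigation (reweighting, threshold adjustment, etc.) is one concrete way to verify that a bias really has been reduced rather than just asserted.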

8: How can we use AI to improve customers' digital rights and privacy?

Tools like federated learning and differential privacy can help enhance customers' digital rights and privacy. These are emerging technologies, and more awareness is needed for developers to start integrating them into their sensitive use cases.
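As a taste of differential privacy, the Laplace mechanism releases an aggregate statistic with noise scaled to the query's sensitivity divided by the privacy budget ε, so no single customer's record can be confidently inferred from the output. The query and data below are invented, and a real deployment should use a vetted library rather than this sketch.

```python
import numpy as np

def dp_count(flags, epsilon, sensitivity=1.0, seed=None):
    """Laplace mechanism: a count plus Laplace(0, sensitivity/epsilon) noise.
    (Minimal illustration only; use a vetted DP library in production.)"""
    rng = np.random.default_rng(seed)
    return sum(flags) + rng.laplace(0.0, sensitivity / epsilon)

# Hypothetical query: how many customers opted in to a credit offer?
opted_in = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
noisy_count = dp_count(opted_in, epsilon=0.5, seed=42)
print(f"true={sum(opted_in)} released={noisy_count:.2f}")
```

Smaller ε means more noise and stronger privacy; the analyst sees a useful aggregate while any individual's opt-in status stays plausibly deniable.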

