Montreal AI Ethics Institute

Democratizing AI ethics literacy


AI in Finance: 8 Frequently Asked Questions

September 22, 2019

By Abhishek Gupta (Founder of the Montreal AI Ethics Institute, and Machine Learning Engineer at Microsoft, where he sits on the AI Ethics Review Board for Commercial Software Engineering)


1: What are the ethical challenges of applying AI to Finance?

Finance has significant impacts on a person's life and hence carries many ethical challenges with it. The primary one is discrimination based on sensitive attributes, which is often hidden because of the complex decision-making rendered by deep learning systems and a lack of transparency about whether such systems are being used at all to evaluate whether you are suitable for a loan or some other financial decision.

Other challenges include the automatic, disproportionate distribution of financial opportunities, such as lower interest rates. This differs from active discrimination because these are things you don't necessarily apply for; for example, the credit limit on your credit card can be raised automatically based on undisclosed metrics. That was true before AI as well, but at least if you went in and inquired, there was a person who had made the decision. Now you won't be able to ask for an explanation unless the system is a simple regression model, or falls into a category of models that can be explained with tools like LIME (which some institutions use to meet their audit and regulatory requirements).
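To illustrate why a simple regression model is explainable in a way deep networks are not, consider a toy logistic credit-scoring model. The coefficients and features below are entirely hypothetical, not drawn from any real lender; the point is that a linear model's decision decomposes into per-feature contributions you can read off directly.

```python
import math

# Hypothetical coefficients for a toy logistic credit-scoring model.
# Nothing here comes from a real lender; it only illustrates why a
# simple regression model is explainable: each feature's contribution
# to the decision can be read off directly.
COEFFS = {"income_k": 0.04, "debt_ratio": -2.5, "years_employed": 0.15}
INTERCEPT = -1.0

def approve_probability(applicant):
    """Return P(approve) and each feature's contribution to the logit."""
    contributions = {f: COEFFS[f] * applicant[f] for f in COEFFS}
    logit = INTERCEPT + sum(contributions.values())
    prob = 1 / (1 + math.exp(-logit))
    return prob, contributions

prob, why = approve_probability(
    {"income_k": 60, "debt_ratio": 0.3, "years_employed": 4}
)
# `why` shows income contributed +2.4 to the logit, the debt ratio
# -0.75, and tenure +0.6 -- an explanation that needs no extra tooling.
```

For a deep learning system, no such decomposition exists, which is where post-hoc tools like LIME come in.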

2: Should consumers have the right to obtain confirmation that their personal data has been used in automated decision-making?

Absolutely! It’s important to know that you’ve been subjected to automated decision-making, especially given recent legislation like the GDPR, which requires that people be informed when they have been subjected to automated decision-making and have recourse to having a human make the decision about them. Ultimately, this boils down to consumer trust and transparency in the process, which is important for retaining business as more and more consumers become aware of such rights and begin to demand them.

3: How is AI impacting customers’ privacy in Finance?

When external datasets (potentially sourced from data brokers) leverage the mosaic effect to create a “richer” profile of the consumer, privacy takes a big hit; this is often the behind-the-curtains magic that drives financial decisions about someone without their consent. To build consumer trust, businesses have it in their best interest to communicate transparently with their consumers about how they calculate different things about the consumer and how they determine which offers to proffer. So far, the intrusions on privacy have been invasive, and consumers are none the wiser.

4: How do financial systems become unethical and how can we avoid this?

These systems become unethical when they stray beyond their declared purposes and utilize sources of data beyond what the consumer expects. The simplest way to avoid this is to declare a priori what the purpose of the system is, what its data sources will be, how it will be used, and what decisions it will make about the user; most importantly, the operators must stick to those declarations and issue statements of compliance (SoC) publicly, perhaps evaluated by an independent third party.

5: Do you think systems need to be explainable and why?

To build customer trust and comply with some of the requirements above, it will be important for these systems to be explainable. Explainability is also important in allowing people to judge whether the outputs are fair and whether the system is behaving as declared by its creators and maintainers.

6: How can AI transform FinTech for good?

There are many ways, primarily by expanding access to services and offering them at lower cost through automation. This will allow more people, especially those who were previously “unbanked,” to participate in formal financial markets, creating more opportunities for empowerment and potentially lifting people out of poverty by giving them access to funds that enable activities that improve their financial health.

7: What regulations are in place to avoid bias and where are the gaps?

There are currently very few measures, if any, in place to challenge bias in these systems. The biggest gap is the lack of acknowledgment that such a problem exists, owing to the still-popular notion, sometimes called math-washing, that numerical systems are inherently less biased than their human counterparts.

Additionally, even in places where organizations do recognize that this is a problem, there is a lack of tools that can help them fix the issues. The most important thing is to be able to recognize where biases exist, then act swiftly to fix them using appropriate tools and check that they really have been fixed using standardized tests.
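One possible shape for such a standardized test is a comparison of approval rates across groups (demographic parity). The groups, decisions, and the 80% threshold below (borrowed from the four-fifths rule in US employment law) are illustrative assumptions, not a prescribed standard for financial AI systems:

```python
# A sketch of one possible "standardized test" for bias: comparing
# loan approval rates across two groups (demographic parity). The
# data and the 80% threshold (borrowed from the four-fifths rule in
# US employment law) are illustrative assumptions only.
def approval_rate(decisions):
    return sum(decisions) / len(decisions)

def passes_four_fifths(group_a, group_b, threshold=0.8):
    """True if the lower approval rate is at least `threshold` times
    the higher one."""
    ra, rb = approval_rate(group_a), approval_rate(group_b)
    low, high = min(ra, rb), max(ra, rb)
    return high == 0 or (low / high) >= threshold

# 1 = loan approved, 0 = denied
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 75% approval rate
group_b = [1, 0, 1, 0, 0, 1, 0, 1]  # 50% approval rate
biased = not passes_four_fifths(group_a, group_b)  # 0.50/0.75 < 0.8
```

Running a check like this before and after a mitigation step is one way to verify that a bias has actually been reduced rather than merely declared fixed.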

8: How can we use AI to improve customers’ digital rights and privacy?

Tools like federated learning and differential privacy can help enhance customers’ digital rights and privacy. These are emerging technologies, and more awareness is needed for developers to start integrating them into their sensitive use cases.
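As a minimal sketch of the second of these, differential privacy works by adding calibrated noise to aggregate queries so that any single person's record has a bounded influence on the released answer. The query, dataset, and epsilon below are hypothetical:

```python
import random

def dp_count(values, predicate, epsilon=1.0):
    """Release a count with differential privacy via the Laplace
    mechanism. A count query has sensitivity 1, so Laplace noise with
    scale 1/epsilon suffices; smaller epsilon means stronger privacy
    and a noisier answer. The difference of two independent
    exponential draws with rate epsilon is a Laplace(0, 1/epsilon)
    sample."""
    true_count = sum(1 for v in values if predicate(v))
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Hypothetical query: how many customers carry a balance over $5,000,
# released without exposing any individual customer's record.
balances = [1200, 7400, 300, 9100, 5600, 250]
noisy = dp_count(balances, lambda b: b > 5000, epsilon=0.5)
```

The released figure is useful in aggregate, but an observer cannot tell from it whether any one customer crossed the $5,000 line.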

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.

  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.
  • Learn more about our open access policy here.