By Abhishek Gupta (Founder of the Montreal AI Ethics Institute, and Machine Learning Engineer at Microsoft, where he sits on the AI Ethics Review Board for Commercial Software Engineering)
1: What are the ethical challenges of applying AI to Finance?
Finance is a field with significant impacts on people’s lives, and it therefore carries many ethical challenges. The primary one is discrimination based on sensitive attributes, which is often hidden because of the complex decision-making of deep learning systems and a lack of transparency about whether such systems are even being used to evaluate whether you should be granted a loan or some other financial decision.
Other challenges include the disproportionate, automatic distribution of financial opportunities, such as lower interest rates. This differs from active discrimination because these are things you don’t necessarily apply for; take, for example, the credit limit on your credit card, which can be offered to you automatically based on undisclosed metrics. This happened before as well, but at least if you went in and inquired, there was a person who had made the decision. Now you won’t be able to get an explanation unless the system is a simple regression model or falls into a category of models that can be explained with tools like LIME (which some institutions use to meet their audit and regulatory requirements).
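To make that concrete, here is a minimal, hypothetical sketch of how a lender might use LIME to surface the features behind a single credit decision. The model, feature names, and synthetic data are illustrative assumptions, not any institution’s actual system.

```python
# Hypothetical sketch: explaining one credit decision with LIME.
# Assumes scikit-learn and the `lime` package are installed; the features
# and data below are synthetic placeholders for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "credit_history_len", "num_accounts"]
X = rng.normal(size=(500, len(feature_names)))
# Toy "approve" label driven by income and debt ratio plus noise
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["deny", "approve"],
    mode="classification",
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")  # per-feature contribution to this one decision
```

A report like this is the kind of artifact an institution could attach to an individual decision to satisfy an audit or a customer’s request for an explanation.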
2: Should consumers have the right to obtain confirmation that their personal data has been used in automated decision-making?
Absolutely! It’s important to know when you’ve been subjected to automated decision-making, especially given recent legislation like the GDPR, which requires that people be informed of this and have recourse to a human decision-maker. Ultimately, this boils down to consumer trust and transparency in the process, which is important in retaining business as more and more consumers become aware of such rights and begin to demand them.
3: How is AI impacting customers’ privacy in Finance?
When external datasets (potentially sourced from data brokers) are leveraged via the mosaic effect to create a “richer” profile of the consumer, privacy takes a big hit, and this is often the behind-the-curtains magic that drives financial decisions about someone without their consent. To build consumer trust, businesses have it in their best interest to communicate transparently with consumers about how they calculate different things about them and how they determine which offers to proffer. The privacy intrusions so far have been invasive, and consumers are none the wiser for it.
4: How do financial systems become unethical and how can we avoid this?
These systems become unethical when they stray beyond their declared purposes and utilize sources of data beyond what the consumer expects. Avoiding this can be done, in the simplest terms, by declaring a priori what the purpose of the system is, what the data sources will be, how the system will be used, and what decisions it will make about the user, and then, most importantly, sticking to those declarations and issuing statements of compliance (SoC) publicly, perhaps evaluated by an independent third party.
5: Do you think systems need to be explainable and why?
To build customer trust and comply with some of the requirements above, it will be important for these systems to be explainable. Explainability is also important in allowing people to judge whether the outputs are fair and whether the system is behaving as declared by its creators and maintainers.
6: How can AI transform FinTech for good?
There are many ways, primarily by expanding access to services and offering them at lower cost thanks to automation, which will allow more people, especially those who were previously “unbanked,” to participate in formal financial markets. This is important in creating more opportunities for empowerment, potentially uplifting people out of poverty by giving them access to funds that enable activities that improve their financial health.
7: What regulations are in place to avoid bias and where are the gaps?
There are currently very few measures, if any, in place to challenge bias in these systems. The biggest gap is the lack of acknowledgement that such a problem exists, owing to the still-popular notion of “mathwashing,” where people believe that numerical systems are inherently less biased than their human counterparts.
Additionally, even where organizations do recognize that this is a problem, there is a lack of tools to help them fix the issues. The most important thing is to be able to recognize where biases exist, act swiftly to fix them using appropriate tools, and verify that they really have been fixed using standardized tests.
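As a rough illustration of one such standardized test, the sketch below computes a demographic parity gap: the difference in positive-decision rates between two groups. The decisions and group labels are made-up placeholders, and real audits would of course use several complementary metrics.

```python
# Minimal sketch of a bias check: the demographic parity difference,
# i.e., the gap in approval rates between groups. Data is illustrative.
import numpy as np

def demographic_parity_difference(decisions, groups):
    """Absolute gap in positive-decision rates across the groups present."""
    decisions, groups = np.asarray(decisions), np.asarray(groups)
    rates = [decisions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0])          # 1 = loan approved
groups    = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(demographic_parity_difference(decisions, groups))  # 0.75 - 0.25 = 0.5
```

Running a check like this before and after a mitigation step is one way to confirm that a bias really has been reduced rather than merely assumed away.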
8: How can we use AI to improve customers’ digital rights and privacy?
Tools like federated learning and differential privacy can help enhance customers’ digital rights and privacy. These are emerging technologies, and more awareness is needed so that developers start integrating them into their sensitive use cases.
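As one small, hedged illustration (covering differential privacy only), the sketch below applies the Laplace mechanism to a simple count query so that an aggregate statistic can be released without exposing any individual record. The epsilon value and income figures are illustrative assumptions.

```python
# Hedged sketch of the Laplace mechanism from differential privacy:
# calibrated noise is added to an aggregate (here, a count) so that any
# single customer's record has limited influence on the released value.
import numpy as np

def dp_count(values, predicate, epsilon, sensitivity=1.0, rng=None):
    """Differentially private count of records satisfying `predicate`."""
    rng = rng or np.random.default_rng()
    true_count = sum(predicate(v) for v in values)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

incomes = [32_000, 58_000, 71_000, 45_000, 120_000, 39_000]
# How many customers earn under 50k? Released with epsilon = 1.0.
print(dp_count(incomes, lambda x: x < 50_000, epsilon=1.0))
```

Federated learning complements this by keeping raw customer data on-device and sharing only model updates, though a faithful example of that would require a full training setup beyond the scope of this sketch.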