XAI in Banking: Challenges and Impact

How can the challenges created by AI in banking be solved with explainable AI techniques, helping AI make inroads into such a critical sector more easily?

Image credit: DALL·E 2 image generator

As AI research progresses, the algorithms become more complex and efficient, but harder to understand and explain. This is known as the “black-box” conundrum and poses risks for enterprises deploying AI models.

Model explainability matters because, without it, it is difficult to understand how a model arrived at its predictions or decisions. That makes it hard to identify potential biases or errors in the model and to ensure that the model is fair and ethical.

Additionally, a lack of explainability can make it hard for organizations, such as banks, to understand how to use the model effectively, or how to interpret the results. It can also make it difficult for regulators to ensure that the model is compliant with laws and regulations.

In general, explainability is essential for ensuring that AI models are safe, reliable, and trustworthy, and for building accountability and trust between a model’s developers, its users, and regulators.

Why does model explainability matter in the banking sector?

Banks face a similar problem with their own artificial intelligence (AI) strategies. They may create a model that predicts creditworthiness more accurately than any other algorithm, but they cannot explain how it arrived at its predictions or identify which factors had the biggest influence. This lack of explainability also keeps banks from taking advantage of cutting-edge AI applications, such as underwriting models, facial-recognition software, or AI models for administrative tasks.

Model risk managers at several large banks are reportedly divided on whether every machine learning model should be explainable by design, whether banks should trade explainability for model accuracy, or whether the answer should depend on the context and regulatory expectations.

Introducing Explainable AI

XAI (Explainable Artificial Intelligence) is a growing field that aims to make AI models more understandable, intuitive, and transparent to human users while maintaining performance and prediction accuracy. This is becoming a crucial concern for banks as regulators want to ensure that AI processes and outcomes are easily understood by bank employees.

Additionally, consumer advocacy groups, counterparties, and internal stakeholders at financial institutions are showing a growing interest in XAI as it can help address transparency and trust issues and provide greater clarity on AI governance.

To improve their use of AI, many banks are focusing on XAI research and working with academic and scientific communities to create new ways to use explainability techniques. They are also establishing innovation labs to create machine learning models that are more explainable and align with their business objectives. XAI can help banks overcome obstacles in implementing AI models by making them more transparent.

In addition to addressing transparency and trust issues, implementing XAI can bring other advantages, such as uncovering different kinds of information about a model, discovering connections between variables, diagnosing poor performance, and detecting potential information leaks. These efforts are crucial for maintaining customer protection and unbiased lending, building trust and credibility in the models, and preventing regulatory issues.

How does Explainable AI function?

There are different types of XAI techniques, such as global and local. Global XAI techniques provide a general overview of the model’s decision-making process and the factors that influenced it. This can include feature importance analysis, which highlights the variables that had the most impact on the model’s predictions, and model interpretability methods, which provide a high-level view of the model’s decision-making process.
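To make this concrete, below is a minimal sketch of a global explanation using permutation feature importance from scikit-learn. Everything here is an assumption for illustration: the data is synthetic and the feature names (income, credit history length, and so on) are hypothetical stand-ins for a real credit dataset.

```python
# Minimal sketch of a *global* explanation: permutation feature importance.
# Assumption: a hypothetical credit-risk classifier trained on synthetic data;
# feature names are illustrative, not a real bank's schema.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["income", "credit_history_length", "debt_to_income", "num_open_accounts"]
X = rng.normal(size=(1000, len(feature_names)))           # synthetic applicant features
y = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2] > 0).astype(int)   # synthetic approve/deny labels

X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure how much the validation score drops:
# the bigger the drop, the more the model relies on that feature overall.
result = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")
```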

Local XAI techniques, on the other hand, focus on providing explanations for specific predictions or decisions made by the model. This can include counterfactual explanations, which show the changes that would need to be made to a specific input to change the model’s prediction, and saliency maps, which highlight the parts of the input that most influenced the model’s prediction.
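A counterfactual explanation can be sketched with nothing more than a small search for the change that flips the model’s output. The snippet below reuses the hypothetical model and data from the previous sketch and adjusts a single illustrative feature (income); a production counterfactual generator would have to respect realistic constraints across many features at once.

```python
# Minimal sketch of a *local* counterfactual explanation for one applicant.
# Assumption: reuses `model`, `feature_names`, and `X_val` from the sketch above;
# the "raise income in small steps" search is purely illustrative.
applicant = X_val[0].copy()
income_idx = feature_names.index("income")

if model.predict(applicant.reshape(1, -1))[0] == 0:        # applicant currently denied
    candidate = applicant.copy()
    for _ in range(100):                                    # cap the search
        candidate[income_idx] += 0.1                        # small increase in (scaled) income
        if model.predict(candidate.reshape(1, -1))[0] == 1:
            delta = candidate[income_idx] - applicant[income_idx]
            print(f"Would be approved if income rose by {delta:.2f} (scaled units)")
            break
    else:
        print("No counterfactual found by adjusting income alone")
else:
    print("Applicant already approved; no counterfactual needed")
```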

Both global and local XAI techniques have their own advantages and disadvantages, and the choice of which technique to use will depend on the specific use case and the level of transparency and interpretability required.

There are several different techniques used within XAI, including:

  • LIME (Local Interpretable Model-Agnostic Explanations) — a technique that explains the predictions of any classifier by approximating it locally with an interpretable model (a minimal usage sketch follows this list).
  • SHAP (SHapley Additive exPlanations) — a technique that assigns a contribution value to each feature of an input dataset, which can be used to explain the output of any model.
  • Saliency Maps — a technique that visualizes the regions of an image that most influenced a model’s prediction.
  • Counterfactual Analysis — a technique that generates explanations by describing the changes that would have to be made to the input data in order to change the model’s prediction.
  • Rule-based Systems — a technique that uses a set of rules to explain the decision-making process of a model.
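As an illustration of the first item above, here is a minimal LIME sketch that produces a local explanation for a single prediction. It reuses the hypothetical model and synthetic data from the earlier sketches and assumes the `lime` package is installed; nothing here reflects a real bank’s model or schema.

```python
# Minimal LIME sketch: explain one prediction of the hypothetical credit model.
# Assumption: `model`, `X_train`, `X_val`, and `feature_names` come from the
# earlier sketches, and the `lime` package is available.
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["deny", "approve"],
    mode="classification",
)
explanation = explainer.explain_instance(
    X_val[0], model.predict_proba, num_features=4
)

# Each pair is (human-readable condition, weight pushing toward the predicted class).
for condition, weight in explanation.as_list():
    print(f"{condition}: {weight:+.3f}")
```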

These techniques are applied in many different scenarios across industries, giving regulators and everyday users explanations at a level they can understand and helping them use AI models safely. They also help to build trust and credibility in the models among stakeholders, and assist in improving model performance by identifying relationships among variables, diagnosing poor performance, and detecting potential information leaks.

Case Study: How XAI helped John, a person of color, understand why the model decided he should not get a loan, and how it helped the bank protect its reputation while also helping the applicant

Background: A person of color, John, applied for a loan at a large financial institution. His loan application was denied, and he was not given a clear explanation as to why. This caused frustration and mistrust for John and damaged the bank’s reputation in the community.

Problem: The bank needed to find a way to improve transparency and fairness in their loan decision process, while also addressing the concerns of John and the community.

Solution: The bank implemented Explainable AI (XAI) techniques to increase transparency and fairness in their loan decision process. These techniques included:

1. Local interpretable model-agnostic explanations (LIME) to provide explanations for individual decisions made by the model.

2. Counterfactual analysis to show what factors would need to change for John’s loan application to be approved.

3. Fairness metrics to measure and ensure that the model was not biased against certain groups, including people of color (a minimal sketch of one such metric follows below).
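As a simple illustration of the third point, the sketch below computes one common fairness metric, the demographic parity difference: the gap in approval rates between two groups. The data and group labels are synthetic and hypothetical; a real audit would use production decisions and properly recorded protected attributes.

```python
# Minimal sketch of one fairness metric: demographic parity difference.
# Assumption: synthetic decisions and hypothetical group labels, for illustration only.
import numpy as np

rng = np.random.default_rng(1)
y_pred = rng.integers(0, 2, size=1000)                    # model decisions: 1 = approve
group = rng.choice(["group_a", "group_b"], size=1000)     # protected attribute

rate_a = y_pred[group == "group_a"].mean()
rate_b = y_pred[group == "group_b"].mean()
print(f"Approval rate A: {rate_a:.2%}  B: {rate_b:.2%}")
print(f"Demographic parity difference: {abs(rate_a - rate_b):.2%}")
```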

Results: By using XAI techniques, the bank was able to provide John with a clear explanation of why his loan application was denied. They were able to show that his income and credit history were the main factors that led to the decision.

Additionally, they were able to demonstrate that the model was not biased against people of color. This helped to rebuild trust with John and the community and improve the bank’s reputation. The bank also helped John to understand how to improve his credit history and income, which will help him to be approved for a loan in the future.

Conclusion: The use of XAI techniques helped the bank to improve transparency and fairness in its loan decision process. It allowed the bank to address the concerns of John and the community and rebuild trust. It also helped the applicant understand why his request was rejected and how to improve his situation, which allowed the bank to retain the customer while genuinely helping him.

Limitations of Explainability

Despite recent advancements in XAI research, banks still face many technical challenges when implementing explainability into the AI pipeline. The limitations include:

  1. The shortage of experts in XAI, which is a more specialized field than machine learning and data science, poses a challenge for banks. To overcome this, they can consider hiring professionals from outside the organization or training engineering and computer science graduates in explainable AI internally.
  2. XAI techniques that are effective for credit risk models often only provide binary decisions, such as “approve” or “deny,” but these explanations may not take into account the full range of options for modifications and may not consider the preferences of consumers for different loan terms.
  3. The validity of explanations may become obsolete over time due to changes in external factors or the need to update the model for better performance.
  4. As models become more complex and precise, XAI teams may run into resource constraints, since explainability techniques take longer to compute on such models.
  5. Some organizations worry that making their machine learning models more explainable could give competitors an opportunity to decipher the models’ inner workings, or make it easier for external parties to manipulate or attack them.
  6. Providing counterfactual explanations that show customers why their loan applications were denied and offering guidance on how to improve their financial standing can be challenging for banks, as it requires considering the specific context of each borrower.

Concluding Points

In order to address the challenges of explainability in AI, banks should focus on developing transparent deep learning applications that do not require post-hoc explainability. They can do this by partnering with think tanks, universities and research institutions working on standard guidelines for the use of such technology. Banks should also actively participate in conferences and workshops on XAI topics and collaborate on research that can drive the field forward.

Additionally, they should push vendors to make prepackaged models more explainable. It is also important for banks to work with regulators and government agencies to produce guidelines that enable AI research and development while also protecting customers’ interests.


Written by Revca - Helping Companies achieve AI completeness

A US-based AI startup empowering businesses to become AI-first, faster. Check out our products: ANAI (anai.io), Apture (apture.ai), AquaML (aquaml.io)
