

Financial organizations and their customers want to understand how AI works, often citing the “black box” of AI as a complete mystery: they cannot see how decisions are reached or for what reasons. Explainable AI, or “white box” models, removes the enigma by providing transparent reasons for the decisions that affect customers’ lives.

In a heavily regulated industry like banking, dynamic regulatory and compliance pressures are a major concern and still a barrier to AI adoption. International financial institutions (FIs) deal with state and federal governance, having to comply with diverse regulations from country to country. Explainable AI and machine learning (ML) models are now mandatory in several jurisdictions, including the U.S., the U.K., Singapore and the European Union. While hesitancy is understandable, the confidence to deploy AI and ML is growing.

In this article, we’ll look at how this legislation creates opportunities for FIs to establish model governance and ethics standards that build explainability, bias mitigation and transparency into their corporate brands.

Explaining the unexplainable: the black box of AI

It’s late at night and your customer has just left the ER with their sick child, prescription in hand. They visit the 24-hour pharmacy, wait for the prescription to be filled and use their credit card to pay. The transaction is declined. They know their payments are up to date, and the clerk says they’ll have to call their bank in the morning. They go home with a crying child and no medication. You get a call the next morning, asking “What went wrong?”

It could be any of several things: suspected fraud, delayed payment processing or simply a false decline. Some issues are traceable, and others are not. This mysterious process is difficult to trust and even more difficult to explain to a customer. Customers often leave their banks without a clear answer, and some vow to change FIs.

“‘Insufficient funds’ is one of the most common reasons for declined transactions, even when that may not be the case,” says Amyn Dhala, Brighterion Chief Product Officer and Global Head of Product for AI Express at Mastercard.

In the development world, a black box is a solution where users know the inputs and the final output but have no visibility into the process that produces the decision. It isn’t until a mistake is made that problems start to surface.

A closed environment leaves room for error

When an AI model is developed, it is trained with historical data, so it learns to predict events and score transactions based on past events. Once the model goes into production, it receives millions of data points that then interact in billions of ways, processing decisions faster than any team of humans could achieve.

The problem is that the machine learning model is making these decisions in a closed environment, understood only by the team that built the model. This challenge was cited as the second highest concern by 32 percent of financial executives responding to the 2021 LendIt annual survey on AI, after regulation and compliance.

How explainable AI/white box models work 

Explainable AI, sometimes known as a “white box” model, lifts the veil off machine learning models by assigning reason codes to decisions and making them visible to users. Users can review these codes to both explain decisions and verify outcomes. For example, if an account manager or fraud investigator suspects several decisions exhibit similar bias, developers can alter the model to remove that inequity.
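As a rough illustration of the idea, the sketch below (in Python, using hypothetical feature names, thresholds and reason codes rather than Brighterion’s actual model) shows how a score can be returned together with the reason codes that explain it, so a reviewer can trace a decision back to its drivers.

```python
# Minimal sketch (hypothetical features, thresholds and reason codes):
# a scored decision carries its reasons so reviewers can see why it was made.
from dataclasses import dataclass

@dataclass
class ScoredDecision:
    score: int                 # model score, e.g. 0-999
    approved: bool             # final decision
    reason_codes: list[str]    # human-readable reasons behind the score

def score_transaction(features: dict) -> ScoredDecision:
    """Toy scoring logic standing in for a trained model."""
    score, reasons = 0, []
    if features.get("amount", 0) > 5_000:
        score += 400
        reasons.append("R01: amount far above account's typical spend")
    if features.get("merchant_risk", 0.0) > 0.8:
        score += 350
        reasons.append("R02: merchant category historically high-risk")
    if features.get("velocity_1h", 0) > 5:
        score += 200
        reasons.append("R03: unusually many transactions in the last hour")
    return ScoredDecision(score=min(score, 999),
                          approved=score < 700,
                          reason_codes=reasons or ["R00: no risk indicators triggered"])

print(score_transaction({"amount": 6200, "merchant_risk": 0.9, "velocity_1h": 2}))
```

Because the reasons are attached to every score, an investigator who sees the same code recurring across many declines has a concrete starting point for reviewing the model for bias.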

“Good explainable AI is simple to understand yet highly personalized for each given event,” Dhala says. “It must operate in a highly scalable environment processing potentially billions of events while satisfying the needs of the model custodians (developers) and the customers who are impacted. At the same time, the model must comply with regulatory and privacy requirements as per the use case and country.”

To ensure the integrity of the process, an essential component of building the model is privacy by design. Rather than reviewing a model after development, systems engineers consider and incorporate privacy at each stage of design and development. So, while the reasons behind outcomes are highly personalized, customers’ privacy is proactively protected, embedded and set as the system default.

AI transparency prevents bias in banking

Dhala says the way to AI transparency is good model governance. This overarching umbrella defines how an organization regulates access, puts policies in place and tracks the activities and outputs of its AI models. Good model governance reduces risk in the event of a compliance audit and creates the framework for ethical, transparent AI in banking that eliminates bias.

“It’s important that you don’t cause or make decisions based on discriminatory factors,” he says. “Do significant reviews and follow guidelines to ensure no sensitive data is used in the model, such as zip code, gender, or age.”
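A minimal sketch of that guideline, assuming the raw records arrive as a pandas DataFrame and using purely illustrative column names: sensitive attributes are dropped before the feature set ever reaches training.

```python
# Minimal sketch (hypothetical column names): exclude sensitive attributes
# so the model never sees them as inputs.
import pandas as pd

SENSITIVE_COLUMNS = {"zip_code", "gender", "age", "date_of_birth"}

def build_feature_frame(raw: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of the raw data with sensitive columns removed."""
    dropped = SENSITIVE_COLUMNS & set(raw.columns)
    return raw.drop(columns=list(dropped))

raw = pd.DataFrame({
    "amount": [120.0, 5400.0],
    "merchant_risk": [0.2, 0.9],
    "zip_code": ["94105", "10001"],   # sensitive: removed before training
    "gender": ["F", "M"],             # sensitive: removed before training
})
features = build_feature_frame(raw)
print(list(features.columns))  # ['amount', 'merchant_risk']
```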

For example, Mastercard developed a strategy, the Five pillars of AI, to provide a framework for its technology operation. Ethical AI and AI for the good of others are important facets of AI development at Mastercard and form the structure on which its governance model is built.

“When AI algorithms are implemented, we need to implement certain governance to ensure compliance,” Rohit Chauhan, Executive Vice President, Artificial Intelligence at Mastercard says in the Five pillars of AI. “Responsible AI algorithms minimize bias and are understandable, so people feel comfortable that AI is deployed responsibly, and that they understand it.”

Meeting the demands of AI regulations

Model custodians must be watchful to ensure the model continues to perform in compliance with the Equal Credit Opportunity Act (ECOA) in the U.S. and the Coordinated Plan on Artificial Intelligence in the EU, for example.

As AI adoption grows, model explainability will become increasingly important and result in new laws and regulations. Lenders will need to ensure two major forms of explainability:

  • Global explainability, which identifies the variables that contribute most across all predictions made by the model. Having global explainability makes it easier to conclude the model is sound.
  • Local explainability, which identifies the variables that contribute the most to an individual prediction. For example, if someone applies for a loan and gets rejected, what were the most important factors for rejecting the applicant? This also helps to identify opportunities for alternative data sets to increase approvals while minimizing risk. (A small sketch of both follows this list.)
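To make the distinction concrete, here is a minimal sketch that assumes a simple linear scoring model with illustrative feature names and weights: a local explanation is one applicant’s per-feature contributions, while a global explanation aggregates the absolute contributions across all applicants.

```python
# Minimal sketch: for a linear credit model, a local explanation is each
# feature's contribution (coefficient x value) to one applicant's score; a
# global explanation is the mean absolute contribution over all applicants.
# Feature names and coefficients are illustrative, not a real credit model.
import numpy as np

features = ["utilization", "missed_payments", "income_ratio"]
coef = np.array([-2.0, -3.5, 1.5])          # illustrative model weights
X = np.array([[0.8, 2.0, 0.3],              # one row per applicant
              [0.2, 0.0, 0.6],
              [0.5, 1.0, 0.4]])

contributions = X * coef                     # per-applicant, per-feature

# Local explainability: why was applicant 0 scored the way they were?
local = dict(zip(features, contributions[0]))
print("local:", local)

# Global explainability: which variables drive predictions overall?
global_importance = dict(zip(features, np.abs(contributions).mean(axis=0)))
print("global:", global_importance)
```

More complex models need more sophisticated attribution techniques, but the two questions being answered stay the same.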

Real-world challenges need transparent explanations

Credit risk decisions: explaining “no” and making the case for “yes”

FIs have discovered AI is a powerful tool to manage credit risk across the customer lifecycle. “Know your customer” and originations, delinquency prevention and portfolio management all contribute to a positive customer experience and increased revenue for FIs.

The decisions made by AI optimize these areas of lending. While powerful and insightful, these outcomes need to make sense to customers and lenders. Lenders need to be able to answer, “How do I know if this customer is likely to become delinquent?” or “Can I lend this person more money?”

Payments fraud: understanding why a transaction is flagged

When declining a transaction, it’s important to be certain – false positives are embarrassing for both the customer and the bank. The reason code must be easy to understand for an acquirer’s risk management team and for their customers, both merchants and cardholders.

If a transaction is scored at a certain fraud level (e.g., 900 on a scale of 0-999), the AI model will provide reasons for that score. It could conclude that the transaction was made in a high-risk category, that it shows an anomaly that could indicate fraud, or that it is a routine transaction that should be approved.
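As an illustration only (the thresholds, actions and codes below are assumptions, not Brighterion’s), a score in that range might be routed like this, with the reason codes travelling alongside it so the acquirer’s risk team and the merchant see the same explanation:

```python
# Minimal sketch (illustrative thresholds and codes): turn a fraud score and
# its reason codes into an action a risk team and a merchant can both read.
def route_transaction(score: int, reason_codes: list[str]) -> dict:
    """Map a 0-999 fraud score to an action, carrying its reasons with it."""
    if score >= 850:
        action = "decline"
    elif score >= 600:
        action = "review"   # e.g. manual review or step-up authentication
    else:
        action = "approve"
    return {"score": score, "action": action, "reasons": reason_codes}

print(route_transaction(900, ["R02: merchant category historically high-risk",
                              "R03: unusually many transactions in the last hour"]))
```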

Building explainability into the AI model

For FIs, it’s important to choose an AI partner with extensive experience in meeting local and global compliance in the financial sector. This expertise is fundamental to building an explainable AI model that meets regulatory requirements at scale. Paired with each customer’s unique business challenge, this know-how ensures models are built effectively using a variety of AI/ML tools.

Today, models can be built in a matter of weeks and the process to implementation is highly efficient. Brighterion has a proven process, AI Express, that helps its customers develop, test and prepare for deployment in under two months.

This highly collaborative process begins with Brighterion’s team working with customers to understand their business goals and challenges, and determine desired outcomes and how the data will be collected.

The development team then begins to build a model framework that supports the customers’ model governance and conforms to regulatory requirements, including explainability embedded throughout the model’s algorithms. The team omits data elements that could lead to problematic results or bias, and associates reason codes with all scores. Scoring over 100 billion events annually for over 2,000 organizations worldwide, Brighterion operates within a very secure model governance framework.

“We try to make it as simple as possible so users can look up reasons for decisions,” Dhala explains. “The model must be accurate and explainable, so it must achieve both objectives. These two components balance the model.”

Dhala adds that these processes are very easy to replicate from one model to the next. “We are very experienced across sectors of financial services and what banks need across their spectrum of customers,” Dhala says. “We have a good feel for what banks need.”

Responsible AI in banking: thoughtful, transparent and explainable

AI’s increasing role in financial services has left some banks and their customers concerned about opaque “black box” decision making. The remedy is transparent AI that protects privacy.

Not only does explainable AI provide users with predictive scores, but it also helps them understand the reasons behind those predictions. This allows credit risk managers to manage portfolios on a one-to-one level and develop more personalized strategies to improve their borrowers’ experiences. Merchant acquirers can understand trends before they cause problems, from fraud attacks to borrower instability. Customers understand decisions that affect their financial well-being.

By partnering with Brighterion, you can deploy a transparent AI model with clearly explainable decision making that benefits both your bank and your customers.

You can learn more about ethical, transparent and explainable AI, and how Brighterion is helping to make it a reality, in Mastercard’s report on its five pillars approach to thoughtful, strategic implementation of AI.

 


