How ‘Explainable’ AI is Boosting Trust Within the Financial Sector
John J. Thomas, Distinguished Engineer & Director, Data & AI, IBM
AI is certainly proving its value in the financial services industry, with applications ranging from identifying fraud and fighting financial crime to supporting innovative new digital experiences for customers. However, the shift from traditional rule-based models to machine learning for decision-making is creating a new wrinkle for financial institutions.
Without proper steps to ensure trust in decision-making with machine learning models, many organizations are unknowingly exposing themselves to reputational and financial risk. The use of "black box" AI without explainability and transparency leaves them unable to understand when things go wrong. It is equally important that AI is fair and unbiased, and that it does not provide a systematic advantage to one group over others, especially in the context of sensitive attributes like age, gender, and ethnicity.
Financial institutions are at a crossroads today. A new study from IBM and Morning Consult found that 44% of organizations in the financial sector cited limited expertise and skills as the biggest challenge to successfully deploying AI technologies. Throughout the pandemic, pressure has mounted to adopt new technologies that drive operational efficiencies and differentiate financial institutions from their competitors. As organizations adopt AI, it is important to ensure fair outcomes, instill trust in AI decision-making, and operationalize AI to optimize business operations.
How can the financial industry advance trust in artificial intelligence?
First and foremost, before any financial institution even considers integrating AI into its business operations, it needs to understand that ethical, trustworthy AI starts with defining policies and guardrails upfront. Financial services businesses are aware of this: 85% of those surveyed in IBM's Global AI Adoption Index 2021 said being able to explain how their AI arrived at a decision is important to their business.
These organizations should be able to clearly define what fairness really means in their industry and how that fairness will be monitored. Similarly, organizations should be clear on what they stand for as a corporate entity today and which policies map back to that stance.
With that guidance in mind, financial institutions can then begin looking at specific use-cases that employ AI models. For example, consider how an AI model might behave in various credit risk scenarios. What parameters are informing its decision-making? Is it unfairly correlating risk with demographics?
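One common way to make the fairness question concrete is to compare approval rates across demographic groups. The sketch below is a minimal, hypothetical illustration of that idea, using a "four-fifths rule" threshold of 0.8 as an assumed policy boundary; the group data, metric choice, and cutoff are illustrative assumptions, not any specific institution's tooling.

```python
# Hypothetical sketch: checking a credit-approval model for disparate
# impact across a sensitive attribute. The groups, decisions, and the
# 0.8 "four-fifths rule" cutoff below are illustrative assumptions.

def approval_rate(decisions):
    """Fraction of applicants approved in one demographic group."""
    return sum(decisions) / len(decisions)

def disparate_impact(privileged, unprivileged):
    """Ratio of unprivileged to privileged approval rates.
    Values below ~0.8 are a common red flag for bias review."""
    return approval_rate(unprivileged) / approval_rate(privileged)

# Toy model outputs: 1 = approved, 0 = denied
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # privileged group: 75% approved
group_b = [1, 0, 0, 1, 0, 1, 0, 0]  # unprivileged group: 37.5% approved

ratio = disparate_impact(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("potential bias: review how demographics influence the model")
```

In practice such a check would run continuously against live model decisions, with the threshold set by the organization's own fairness policy rather than hard-coded.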
All of these elements are important to think through and need to be kept in mind throughout the entire lifecycle of working with AI – from building and validating the models, to deploying and consuming them. Organizations today have access to platforms that help guide this process, ensuring models are fair and unbiased (within the boundaries of fairness dictated by policy), with the capabilities to visualize and explain outcomes for regulators. While those tools exist, 63% of financial services businesses surveyed cited AI governance and management tools that do not work across all data environments as a barrier to developing trustworthy AI.
With greater confidence in their AI, financial institutions can spend less time on laborious tasks verifying its decisions and focus their attention on higher-value work. For example, fraud detection is a common use-case for AI in financial services today, but there is still a high rate of false positives. If an AI system can explain why it flags a case as fraudulent, and more importantly, show that it is not systematically favoring one group over another, human employees can spend less time verifying results and more time delivering higher-value work.
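What "explaining why it thinks a case is fraudulent" can look like in the simplest case is attributing a score to individual inputs. The sketch below assumes a basic linear (logistic) fraud score so that each feature's contribution can be read off directly; the feature names and weights are invented for illustration and real systems typically use more complex models with dedicated explanation techniques.

```python
# Hypothetical sketch: attributing one fraud-risk score to its inputs,
# assuming a simple linear scoring model. Feature names and weights
# below are made-up assumptions for illustration only.
import math

WEIGHTS = {
    "amount_vs_avg": 1.8,    # transaction amount relative to customer average
    "new_merchant": 0.9,     # first purchase at this merchant
    "foreign_country": 1.2,  # transaction outside home country
    "night_hours": 0.4,      # transaction between midnight and 5am
}
BIAS = -3.0

def fraud_score(features):
    """Logistic score in (0, 1) from weighted feature values."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

def explain(features):
    """Per-feature contributions to the raw score, largest first."""
    contribs = {k: WEIGHTS[k] * v for k, v in features.items()}
    return sorted(contribs.items(), key=lambda kv: -abs(kv[1]))

txn = {"amount_vs_avg": 2.5, "new_merchant": 1,
       "foreign_country": 1, "night_hours": 0}
print(f"fraud probability: {fraud_score(txn):.2f}")
for name, contribution in explain(txn):
    print(f"  {name}: {contribution:+.2f}")
```

A reviewer seeing "unusually large amount" as the dominant contribution can confirm or dismiss the alert far faster than with an unexplained score alone.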
Do start-ups need to take a different approach than legacy financial institutions?
Ultimately, whether you are a legacy financial institution or a budding start-up, you need to care equally about ensuring fair, ethical, transparent AI.
The immediate difference is that legacy financial institutions will already have an existing model risk management practice, usually one built around traditional rule-based models. Because those techniques and processes are already entrenched, it can often be more challenging to change approaches. It is important to consider how the existing model risk management practices can expand to support AI/ML models regardless of which development and deployment tools are being used.
Many fintech start-ups may not have an existing investment in this technology to consider, affording them more liberty to pick best-of-breed development, deployment and monitoring platforms with these capabilities baked in.
What comes next for AI in the finance industry?
The pandemic acted as a catalyst for organizations still considering investments in AI to finally "take the plunge," recognizing the benefits for driving efficiencies, reducing the strain of remote workforces, and more. Currently, 28% of companies within the financial sector report they have actively deployed AI as part of business operations. While AI adoption has happened quickly and at scale, 44% indicate that they are still in the preliminary phase of exploring AI solutions, and 22% are not currently using or exploring AI solutions at all. That means at present, the majority of financial companies are still developing proofs of concept or analyzing their data for future use.
As the world returns to some sense of normalcy this year, organizations will need to be more vigilant than ever to ensure their technology is operating responsibly, rather than contributing to systemic inequities. Upcoming regulations from governments around the world will continue to place a spotlight on how organizations, particularly in the finance industry, are using this technology responsibly.
Ultimately, there is no quick and easy path towards widespread trust in AI decision-making, but taking ongoing, thoughtful steps towards setting guardrails, addressing bias and improving explainability is the best place to start.
John J. Thomas is a Distinguished Engineer & Chief Data Scientist in IBM’s Data & AI business. He currently leads IBM Expert Labs offerings that help clients operationalize Data Science & AI. These offerings include advisory services to establish an AI CoE/Factory, and agile sprints that address various stages of the Data Science & AI lifecycle. Previously he was the technical executive for IBM’s Data Science Elite team that has kickstarted AI for over 100 clients. A lifelong learner, his 25+ years of experience spans a spectrum including Systems, Cloud, Analytics, Data Science, and AI.