
How ‘Explainable’ AI is Boosting Trust Within the Financial Sector


John J. Thomas, Distinguished Engineer & Director, Data & AI, IBM

AI is certainly proving its value in the financial services industry, with applications ranging from identifying fraud and fighting financial crime to supporting innovative new digital experiences for customers. However, the shift from traditional rule-based models to machine learning for decision-making is creating a new wrinkle for financial institutions.

Without proper steps to ensure trust in decision-making with machine learning models, many organizations are unknowingly exposing themselves to reputational and financial risk. The use of “black box” AI without explainability and transparency leaves them without the ability to understand when things go wrong. It is equally important that AI is fair and unbiased, that it is not providing a systematic advantage to one group over others, especially in the context of sensitive attributes like age, gender, and ethnicity.
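
To make the fairness point concrete, below is a minimal sketch of one common check, the disparate impact ratio, which compares favorable-outcome rates between an unprivileged and a privileged group. The data, the sensitive attribute, and the 0.8 threshold are illustrative assumptions for this example, not prescriptions from the article.

```python
# Minimal sketch of a disparate impact check on loan-approval outcomes.
# All data below is synthetic and purely illustrative (an assumption, not real figures).
import numpy as np

def disparate_impact(outcomes: np.ndarray, group: np.ndarray) -> float:
    """Ratio of favorable-outcome rates: unprivileged (group == 0) over privileged (group == 1)."""
    return outcomes[group == 0].mean() / outcomes[group == 1].mean()

approved = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 1])  # 1 = loan approved
gender   = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # hypothetical sensitive attribute

ratio = disparate_impact(approved, gender)
# The "80% rule" is a widely used, though not universal, guardrail for flagging concern.
print(f"Disparate impact ratio: {ratio:.2f} -> {'review needed' if ratio < 0.8 else 'within guardrail'}")
```

In practice, an organization's policy would define which attributes are monitored and which thresholds trigger review, which is exactly the kind of guardrail described above.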

Financial institutions are at a crossroads today. A new study from IBM and Morning Consult found that 44% of organizations in the financial sector cite limited expertise and skill sets as the biggest challenge to successfully deploying AI technologies. Throughout the pandemic, pressure has mounted to adopt new technologies that drive operational efficiencies and differentiate financial institutions from their competitors. As organizations adopt AI, it is important to ensure fair outcomes, instill trust in AI decision-making, and operationalize AI to optimize their business operations.

How can the financial industry advance trust in artificial intelligence? 

First and foremost, before any financial institution even considers integrating AI into its business operations, it needs to understand that ethical, trustworthy AI starts with defining policies and guardrails upfront. Financial services businesses are aware of this: 85% of those surveyed in IBM’s Global AI Adoption Index 2021 said that being able to explain how their AI arrived at a decision is important to their business.

These organizations should be able to clearly define what fairness really means in their industry and how that fairness will be monitored. Similarly, organizations should be clear on what they stand for as a corporate entity today and which policies map back to that stance.

With that guidance in mind, financial institutions can then begin looking at specific use-cases that employ AI models. For example, consider how an AI model might behave in various credit risk scenarios. What parameters are informing its decision-making? Is it unfairly correlating risk with demographics?
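
As a hedged sketch of what that kind of inspection might look like in practice (not IBM’s tooling or the author’s method), the example below uses scikit-learn’s permutation importance to surface which inputs a hypothetical credit-risk model relies on; the feature names and synthetic data are assumptions for illustration only.

```python
# Sketch: which features is a (hypothetical) credit-risk model leaning on?
# Feature names and data are invented for illustration; they are not a real scorecard.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

feature_names = ["income", "debt_ratio", "credit_history_len", "age", "postcode_index"]
X, y = make_classification(n_samples=2000, n_features=5, n_informative=3, random_state=0)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# How much does shuffling each feature hurt accuracy on held-out data?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for name, mean_imp in sorted(zip(feature_names, result.importances_mean),
                             key=lambda p: p[1], reverse=True):
    print(f"{name:20s} {mean_imp:+.3f}")
# A large importance for a protected attribute, or an obvious proxy such as postcode,
# would be a prompt to investigate whether risk is being correlated with demographics.
```

Permutation importance is only one of many explanation techniques; the point is that the question “what is driving this decision?” should have an inspectable answer before a model reaches production.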

All of these elements are important to think through and need to be kept in mind throughout the entire lifecycle of working with AI – from building and validating the models, to deploying and consuming them. Organizations today have access to platforms that help guide this process, ensuring models are fair and unbiased (within the boundaries of fairness dictated by policy), with the capabilities to visualize and explain outcomes for regulators. Yet while those tools exist, 63% of financial services businesses surveyed said that AI governance and management tools that do not work across all data environments are a barrier to developing trustworthy AI.

With greater confidence in their AI, financial institutions can spend less time on the laborious work of verifying its trustworthiness and focus their attention on higher-value tasks. For example, fraud detection is a common use case for AI in financial services today, but there is still a high rate of false positives. If an AI system can explain why it thinks a case is fraudulent and, more importantly, show that it is not systematically favoring one group over another, human employees can spend less time verifying results and more time delivering higher-value work.
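
As a small, hypothetical illustration of such a per-case explanation (one of many possible techniques, and not necessarily what any given institution uses), the sketch below trains a simple linear fraud model and breaks a single transaction’s score into per-feature contributions; the features, data, and model are invented for the example.

```python
# Sketch: explain one transaction's fraud score from a linear model,
# where each feature's contribution is its coefficient times its standardized value.
# Feature names and data are illustrative assumptions, not a real fraud system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
feature_names = ["amount", "hour_of_day", "merchant_risk", "txn_velocity"]
X = rng.normal(size=(5000, 4))
y = (X @ np.array([1.5, 0.2, 2.0, 1.0]) + rng.normal(size=5000) > 2).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def explain_case(x_raw: np.ndarray) -> None:
    """Print each feature's contribution to this one transaction's fraud score."""
    x = scaler.transform(x_raw.reshape(1, -1))[0]
    contributions = model.coef_[0] * x
    for name, c in sorted(zip(feature_names, contributions), key=lambda p: -abs(p[1])):
        print(f"{name:15s} {c:+.2f}")
    print(f"fraud probability: {model.predict_proba(x.reshape(1, -1))[0, 1]:.2f}")

explain_case(X[0])
```

A linear model makes the contribution arithmetic trivial; for more complex models, the same idea is typically delivered through attribution methods such as SHAP or LIME.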

Do start-ups need to take a different approach than legacy financial institutions?

Ultimately, whether you are a legacy financial institution or a budding start-up, you need to care equally about ensuring fair, ethical, transparent AI.

The immediate difference is that legacy financial institutions will already have an existing model risk management practice, usually one built around traditional rule-based models. Because those techniques and processes are already entrenched, it can often be more challenging to change approaches. It is important to consider how the existing model risk management practices can expand to support AI/ML models, regardless of which development and deployment tools are being used.

Many fintech start-ups, by contrast, may not have an existing investment in this technology to consider, affording them more liberty to pick best-of-breed development, deployment and monitoring platforms with these governance and explainability capabilities baked in.

What comes next for AI in the finance industry?

The pandemic acted as a catalyst for organizations still considering investments in AI to finally “take the plunge,” recognizing the benefits for driving efficiencies, reducing the strain on remote workforces, and more. Currently, 28% of companies within the financial sector report that they have actively deployed AI as part of business operations. While adoption of AI technology has happened quickly and at scale, 44% indicate that they are still in the preliminary phase of exploring AI solutions, and 22% are not currently using or exploring the use of AI solutions. That means that, at present, the majority of financial companies are still developing proofs of concept or analyzing their data for future use.

As the world returns to some sense of normalcy this year, organizations will need to be more vigilant than ever to ensure their technology is operating responsibly, rather than contributing to systemic inequities. Upcoming regulations from governments around the world will continue to place a spotlight on whether organizations, particularly in the finance industry, are using this technology responsibly.

Ultimately, there is no quick and easy path towards widespread trust in AI decision-making, but taking ongoing, thoughtful steps towards setting guardrails, addressing bias and improving explainability is the best place to start.


John J. Thomas, Distinguished Engineer & Director, Data & AI, IBM

John J. Thomas is a Distinguished Engineer & Chief Data Scientist in IBM’s Data & AI business. He currently leads IBM Expert Labs offerings that help clients operationalize Data Science & AI. These offerings include advisory services to establish an AI CoE/Factory, and agile sprints that address various stages of the Data Science & AI lifecycle. Previously he was the technical executive for IBM’s Data Science Elite team that has kickstarted AI for over 100 clients. A lifelong learner, his 25+ years of experience spans a spectrum including Systems, Cloud, Analytics, Data Science, and AI.
