By Alexei Markovits, AI Team Manager, Element AI
The world around us is constantly changing due to ground-breaking advances in artificial intelligence (AI). AI systems are being used to buy and sell millions of financial instruments, assess insurance claims, assign credit scores and optimise investment portfolios. Along with these advancements, we also need a framework for understanding how AI arrives at its findings and suggestions, in order to build the trust needed to use them to their full potential.
The processes behind how modern AI works aren’t always obvious. Many of today’s advanced machine learning algorithms that power AI systems are inspired by the processes of the human brain, but, unlike humans, they often cannot explain their actions or reasoning.
With this in mind, an entire research field is now working towards describing the rationale behind AI decision-making: Explainable AI (XAI). While modern AI systems demonstrate performance and capabilities far beyond previous technologies, practicality and legal compliance present hurdles to successful implementation.
For organisations looking to utilise AI effectively, XAI can be a key deciding factor due to its ability to help foster innovation, enable compliance with regulations, optimise model performance, and enhance competitive advantage.
The value of explainable AI in financial services
Explainability techniques are becoming especially valuable in financial services. Many service providers and consultants are already aware of the low signal-to-noise ratio typical of financial data, which in turn demands a strong feedback loop between user and machine.
AI solutions that are designed without human feedback capabilities run the risk of never being adopted, passed over in favour of traditional approaches that rely on domain expertise and years of accumulated experience. AI-powered products that are not auditable will simply struggle to enter the market, as they’ll face regulatory issues.
Market forecasting and investment management
Time series forecasting methods have grown in prominence across financial services. They are useful for predicting asset returns, econometric data, market volatility and bid-ask spreads—but are limited by their dependence on historical values. Because they can miss timely, disparate information beyond past observations, using time series to predict the most likely value of a stock or of market volatility is very challenging.
By complementing such models with explainability methods, users can understand the key signals the model uses in its prediction, and interpret the output based on their own complementary view of the market. This then enables synergy between finance specialists’ domain expertise and the big data crunching abilities of modern AI.
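One common, model-agnostic way to surface the key signals a forecaster relies on is permutation importance: shuffle one input feature at a time and measure how much the prediction error grows. The sketch below is purely illustrative—the feature names, the synthetic data and the stand-in linear “model” are all assumptions, not a real trading system.

```python
# Minimal sketch of permutation importance for a forecasting model.
# Data, feature names and the toy linear model are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy features: a lagged return, a volume signal, and a pure-noise column.
X = rng.normal(size=(500, 3))
true_weights = np.array([0.8, 0.3, 0.0])  # the noise column carries no signal
y = X @ true_weights + rng.normal(scale=0.1, size=500)

def model(X):
    """Stand-in for a trained forecaster (here, the true linear map)."""
    return X @ true_weights

def permutation_importance(model, X, y, n_repeats=10):
    base_mse = np.mean((model(X) - y) ** 2)
    importances = []
    for j in range(X.shape[1]):
        losses = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break this feature's link to y
            losses.append(np.mean((model(Xp) - y) ** 2))
        # Importance = how much error grows when the feature is scrambled.
        importances.append(np.mean(losses) - base_mse)
    return np.array(importances)

scores = permutation_importance(model, X, y)
for name, s in zip(["lagged_return", "volume", "noise"], scores):
    print(f"{name}: {s:.3f}")
```

A finance specialist can then sanity-check the ranked signals against their own view of the market, which is exactly the human-AI feedback loop described above.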
Explainability techniques also enable human-in-the-loop AI solutions for portfolio selection. An investor might find that they choose not to pick the suggested portfolio with the highest reward if the associated risk appears too prominent. On the other hand, a system that provides a detailed explanation of the risks, such as how they could be uncorrelated with the market, is a powerful addition to investment planning tools.
Assigning or denying credit to an applicant is a consequential decision that is highly regulated to ensure fairness. The success of AI applications in this field is dependent on the ability to provide a detailed explanation of final recommendations.
Beyond compliance, the value of XAI is seen for both the client and financial institution in different ways. Clients can receive explanations that give them the information they need to improve their credit profile, while service providers can better understand predicted client churn and adapt their services.
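For a linear credit model, one simple route to such explanations is to decompose the score into per-feature contributions and rank the factors that most hurt the application. The sketch below is a toy illustration—the feature names, weights and applicant values are invented, not drawn from any real scoring system.

```python
# Hedged sketch: per-feature contributions of a toy logistic credit model.
# Feature names, weights and the applicant profile are invented for illustration.
import math

weights = {"income": 0.9, "debt_ratio": -1.4, "late_payments": -0.8}
bias = 0.2

# Applicant's standardised feature values.
applicant = {"income": 0.5, "debt_ratio": 1.2, "late_payments": 1.0}

# Each feature's contribution to the log-odds of approval.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = bias + sum(contributions.values())
prob_approval = 1 / (1 + math.exp(-score))

# Rank the factors that most hurt the application -- the raw material
# for a "reason code" an applicant could act on.
reasons = sorted(contributions.items(), key=lambda kv: kv[1])
for feature, c in reasons:
    print(f"{feature}: {c:+.2f}")
print(f"approval probability: {prob_approval:.2f}")
```

In this example the high debt ratio is the largest negative contribution, which translates directly into actionable feedback of the kind described above: reducing that ratio would do the most to improve the applicant’s profile.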
Through use of XAI, credit scoring can also help with reducing risk. For example, an XAI model might provide an explanation of why a pool of assets has the best distribution to minimise the risk of a covered bond.
Designing for explainability
As AI solutions evolve past proof-of-concept to deployment at scale, it is pivotal to prioritise explainability, both to power human-AI collaboration and to satisfy audit, regulatory and adoption needs. A user-centric approach, combined with the need for transparency across AI systems, naturally requires explainability to be part of the whole cycle—from the initial steps of building a solution through to system integration and everyday use.