
How to manage your AI risk better

By Richard Watson-Bruhn, financial services expert at PA Consulting

Apple and Goldman Sachs received some headlines they didn’t want in early November when customers, including Apple co-founder Steve Wozniak, suggested that their credit card lending algorithm had a sexist bias. This led New York’s Department of Financial Services (DFS) to open an investigation. The problem of bias in algorithms isn’t just Apple’s, however; other recent examples include a study suggesting UnitedHealth’s algorithm restricted the care given to black patients compared to white patients, and the legal case brought against the Home Office over bias in the UK visa application algorithm.

This underlines the risks machine learning presents to firms, but its proven ability to reduce costs and increase profitability through improved decision-making means we should expect its use to increase. The difficulty is that modern decision-making algorithms are deeply technical and fast-moving, which makes it hard for firms to understand and manage the risks, especially those that may take time to be uncovered. You may not even realise that you are already using AI within your organisation: AI is often embedded invisibly, filtering the information you receive and subtly steering decisions. Just think how much Google’s search algorithm can affect your choices and those of your customers.

The type of AI we have today is still fundamentally limited; it is really just statistics on steroids, better than a human in specific analysis but prone to missing context when the situation changes or being blind to all but specific details. This means humans are still vitally important in the development and use of AI, and best placed to spot bias and context that an algorithm may miss.
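A toy sketch can make the "statistics on steroids" point concrete. The data, threshold rule, and shift below are all hypothetical, invented for illustration: a simple model fitted to one regime keeps making confident predictions after the situation changes, with no way to notice that its training assumptions no longer hold.

```python
# Toy illustration (hypothetical data): a threshold 'model' fitted to
# one regime keeps applying its learned rule after the context shifts.
def fit_threshold(xs, labels):
    """Pick the midpoint cut that separates the training data;
    values above the threshold are classed as positive."""
    pos = [x for x, y in zip(xs, labels) if y]
    neg = [x for x, y in zip(xs, labels) if not y]
    return (min(pos) + max(neg)) / 2

# Training regime: high values mean 'positive' (say, 'approve').
train_x = [1, 2, 3, 10, 11, 12]
train_y = [False, False, False, True, True, True]
cut = fit_threshold(train_x, train_y)
print(cut)  # 6.5

# The context changes: the whole distribution moves up. Every case,
# including the former negatives, now clears the old threshold, but
# the model cannot see that its assumptions have broken.
shifted = [x + 6 for x in train_x]
print([x > cut for x in shifted])  # [True, True, True, True, True, True]
```

A human reviewer would immediately ask why the approval rate jumped from half to everyone; the model, blind to context, would not.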

It is clear that AI is not something firms can avoid using if they want to stay competitive, but there are steps they can take to manage the risks better and deploy human intervention effectively to ask the right questions, consider the broader context and provide oversight.

Create a simple framework to govern the use of AI, not detailed technical guidance

The volume of data being used and the variety of applications to which machine learning algorithms are being applied make this a complex field, and the risk of unintended consequences is high. The best response is to ask simple questions: have we built in security, have we checked we won’t get biased results, are we controlling the development process? This can often take the form of a specific policy or internal guidance that sets out how to govern and embed controls in the development and use of algorithms. It is also important to consider how a change would be rolled back if its impact is found to be negative; doing this before release is always far easier than afterwards.
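The simple questions above can be made operational as a release gate. The sketch below is illustrative only, with hypothetical check names; the idea is simply that a change does not ship until every governance question, including the rollback plan, has been signed off.

```python
from dataclasses import dataclass, fields

# Hypothetical sketch of a pre-release governance gate: release is
# blocked until every check in the policy has been signed off.
@dataclass
class GovernanceChecklist:
    security_reviewed: bool = False       # have we built in security?
    bias_tested: bool = False             # have we checked for biased results?
    development_controlled: bool = False  # are we controlling the development process?
    rollback_plan: bool = False           # can we undo the change after release?

    def outstanding(self):
        """Names of checks not yet signed off."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

    def ready_for_release(self):
        return not self.outstanding()

checklist = GovernanceChecklist(security_reviewed=True, bias_tested=True)
print(checklist.ready_for_release())  # False
print(checklist.outstanding())        # ['development_controlled', 'rollback_plan']
```

Keeping the gate this simple, a short list of yes/no questions rather than detailed technical guidance, is the point: it is something a non-specialist reviewer can apply consistently.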

Get the customer perspective into the discussion early

Algorithm developers are often far removed from the algorithm’s application, and they can get lost in the detail. They will be focused on the outcome of their work, but that means they can miss things. Involving other teams at the beginning, and testing on extreme use cases or non-traditional customer groups, helps spot potential biases and brings a broader perspective that gives customers a better experience. For example, given what we know about persistent debt, people who take out loans are likely to want larger loans. Left to itself, an AI algorithm would therefore suggest marketing more loans to customers already in debt; by taking a broader view, the company would instead implement controls on this type of activity to protect customers and its long-term reputation.
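Testing across customer groups can start very simply. The sketch below uses invented data and an assumed 80% threshold (loosely modelled on the "four-fifths" convention used in fairness testing, not a rule from this article): compare an algorithm's approval rates per group and flag any group treated markedly worse than the best-treated one.

```python
# Illustrative sketch (hypothetical data and threshold): compare an
# algorithm's approval rates across customer groups before release.
def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparity(rates, threshold=0.8):
    """Flag groups whose approval rate falls below `threshold` times
    the best-treated group's rate (a four-fifths-style check)."""
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]

decisions = [("A", True)] * 8 + [("A", False)] * 2 + \
            [("B", True)] * 5 + [("B", False)] * 5
rates = approval_rates(decisions)
print(rates)                  # {'A': 0.8, 'B': 0.5}
print(flag_disparity(rates))  # ['B']
```

A flagged group is not proof of bias, only a prompt for the human review and broader context the article argues for.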

Support compliance teams to understand the risks and how to manage them

Those working on compliance need the same help with understanding machine learning as they get from the business and from experts when dealing with complex financial products. That can ensure compliance without heavy-handed restrictions or risks caused by a lack of awareness of the results machine learning can produce. When a tool could affect all your customers, and the impact of a change could be hugely costly or beneficial, an informed wider team will enable better decisions about its use.

There are also more technical steps firms can take, but the best and most robust approach is often the simplest and, at this stage, AI risks are still ultimately risks created by people through their development or use of algorithms. What firms need to recognise is that they can’t afford not to use AI but, as Apple’s experience showed, the costs of not managing its risks carefully are high.
