Posted on December 18, 2019
By Richard Watson-Bruhn, financial services expert at PA Consulting
Apple and Goldman Sachs received some headlines they didn’t want in early November when customers, including Apple co-founder Steve Wozniak, suggested that their credit card lending algorithms had a sexist bias. This prompted New York’s Department of Financial Services (DFS) to open an investigation. The problem of bias in algorithms isn’t just Apple’s, though: other recent examples include a study suggesting UnitedHealth’s algorithm restricted the care given to black patients compared to white patients, and the legal case brought against the Home Office over bias in the UK visa application algorithm.
This underlines the risks machine learning presents to firms, but its proven ability to reduce costs and increase profitability through improved decision-making means we should expect its use to increase. The difficulty is that modern decision-making algorithms are deeply technical and fast-moving, which makes it hard for firms to understand and manage the risks, especially those that may take time to surface. You may not even realise that you are already using AI within your organisation: it is often embedded and integrated invisibly, filtering the information you receive but also subtly steering decisions. Just think how much Google’s search algorithm can affect your choices and those of your customers.
The type of AI we have today is still fundamentally limited; it is really just statistics on steroids, better than a human at specific analyses but prone to missing context when the situation changes, and blind to all but the details it was built to see. This means humans are still vitally important in the development and use of AI, and best placed to spot the bias and context an algorithm may miss.
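To make the “statistics on steroids” point concrete, here is a toy sketch using entirely synthetic, assumed data (not drawn from any real lender or model) of how a model that fits one regime well can fail quietly when conditions change:

```python
# Toy illustration of distribution shift: a model fitted on one regime
# keeps producing confident answers after the situation changes.
import numpy as np

rng = np.random.default_rng(0)

# Training regime: spending grows roughly linearly with income.
income_train = rng.uniform(20_000, 60_000, 500)
spend_train = 0.3 * income_train + rng.normal(0, 1_000, 500)

# Fit a simple linear model (ordinary least squares via polyfit).
slope, intercept = np.polyfit(income_train, spend_train, 1)

# New regime: higher incomes, where spending actually plateaus.
income_new = rng.uniform(100_000, 150_000, 500)
spend_new = np.full(500, 25_000.0) + rng.normal(0, 1_000, 500)

pred_train = slope * income_train + intercept
pred_new = slope * income_new + intercept
print(f"Mean error in training regime: {np.mean(np.abs(pred_train - spend_train)):,.0f}")
print(f"Mean error in new regime:      {np.mean(np.abs(pred_new - spend_new)):,.0f}")
```

The model’s error jumps by an order of magnitude in the new regime, yet nothing in its output signals that anything is wrong; a human reviewing the context is what catches it.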
It is clear that AI is not something firms can avoid using if they want to stay competitive, but there are steps they can take to manage the risks better and to deploy human intervention effectively: asking the right questions, considering the broader context and providing oversight.
Create a simple framework to govern the use of AI, not detailed technical guidance
The volume of data being used and the variety of applications to which machine learning algorithms are being applied make this a complex field, and the risk of unintended consequences is high. The best response is to ask simple questions: have we built in security, have we checked we won’t get biased results, are we controlling the development process? This can often take the form of a specific policy or internal guidance that sets out how to govern and embed controls into the development and use of algorithms. It is also important to consider how a change would be rolled back if its impact is found to be negative; planning this before release is always far easier than improvising afterwards.
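As an illustration of what “have we checked we won’t get biased results” can look like in practice, here is a minimal sketch of a pre-release disparity check in Python. The data fields, the function name and the 0.8 threshold (the common “four-fifths” rule of thumb) are illustrative assumptions, not a prescribed standard:

```python
import pandas as pd

def disparate_impact_check(df: pd.DataFrame,
                           group_col: str,
                           outcome_col: str,
                           threshold: float = 0.8) -> pd.DataFrame:
    """Compare favourable-outcome rates across groups before release.

    Flags any group whose rate falls below `threshold` times the
    best-performing group's rate. The right threshold is a policy
    decision, not a universal constant.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    report = pd.DataFrame({"approval_rate": rates,
                           "ratio_to_best": rates / rates.max()})
    report["flagged"] = report["ratio_to_best"] < threshold
    return report

# Illustrative usage with hypothetical credit-decision data:
decisions = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "F", "M", "F", "M"],
    "approved": [0,    1,   0,   1,   1,   1,   0,   1],
})
print(disparate_impact_check(decisions, "gender", "approved"))
```

A check like this is deliberately simple; its value is that it forces the bias question to be asked, and answered, before release rather than after.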
Get the customer perspective into the discussion early
Algorithm developers are often far removed from the application of their work, and they can get lost in the detail. They will be focused on the outcome they have been asked to optimise, which means they can miss wider issues. Involving other teams at the beginning, and testing on extreme use cases or non-traditional customer groups, helps spot potential biases and brings a broader perspective that will give customers a better experience. For example, given what we know about persistent debt, people who take out loans are likely to want larger loans. Left to itself, an AI algorithm would suggest increasing the marketing of loans to customers already in debt; a company taking a broader view would instead implement controls on this type of activity to protect customers and its long-term reputation.
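To show what testing on extreme use cases might look like, here is a minimal sketch in Python. The score_loan_offer model, the profile fields and the 50% affordability cap are all hypothetical stand-ins for whatever model, schema and policy a firm actually uses:

```python
def score_loan_offer(profile: dict) -> float:
    """Stand-in for the real model: returns a suggested loan amount.

    Naive illustrative logic: offers scale with income and existing debt,
    mimicking the "people in debt want larger loans" pattern above.
    """
    return 0.2 * profile["income"] + 0.5 * profile["existing_debt"]

# Hypothetical edge-case customer profiles a broader team might propose.
EDGE_CASE_PROFILES = [
    {"name": "persistent_debt",       "income": 18_000,  "existing_debt": 25_000},
    {"name": "no_credit_history",     "income": 30_000,  "existing_debt": 0},
    {"name": "high_income_high_debt", "income": 120_000, "existing_debt": 90_000},
]

MAX_OFFER_RATIO = 0.5  # illustrative policy cap: offer at most 50% of income

def test_edge_cases() -> None:
    """Flag any profile where the model's offer breaches the policy cap."""
    for profile in EDGE_CASE_PROFILES:
        offer = score_loan_offer(profile)
        cap = MAX_OFFER_RATIO * profile["income"]
        status = "OK" if offer <= cap else "CONTROL NEEDED"
        print(f"{profile['name']}: offer={offer:,.0f}, cap={cap:,.0f} -> {status}")

test_edge_cases()
```

Run against these profiles, the naive model breaches the cap precisely for the indebted segments, surfacing before release the very pattern described above.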
Support compliance teams to understand the risks and how to manage them
Those working on compliance need the same help with understanding machine learning as they get from the business and its experts when dealing with complex financial products. That can ensure compliance without heavy-handed restrictions, or risks caused by a lack of awareness of the results machine learning can produce. When a tool could affect all your customers, and the impact of a change could be hugely costly or beneficial, having an informed wider team will enable better decisions about its use.
There are also more technical steps firms can take, but the best and most robust approach is often the simplest and, at this stage, AI risks are still ultimately risks created by people through their development or use of algorithms. What firms need to recognise is that they cannot afford not to use AI, but, as Apple’s experience showed, the costs of not managing its risks carefully are high.