By Sharon Lee, Research and Innovation Manager, Cambridge Innovation Centre, OneSpan
The disruption caused by the pandemic has driven a major shift to online and digital platforms. Our world was already becoming increasingly interconnected, but over the past year this shift has accelerated dramatically, generating vast amounts of data around our daily banking habits. Artificial Intelligence (AI) and Machine Learning (ML) have given banks and financial institutions critical capabilities to act instantaneously in today’s fast-paced digital world. Not so long ago, it would have seemed mind-boggling to many people that these digital tools would be able to detect suspicious, potentially fraudulent activity in enormous amounts of data in real time.
However, while these tools excel at pattern recognition in big data and at automating decisions, such as detecting fraudulent activity and triggering additional security measures, other applications of the technology have raised concerns. For example, in 2019 Apple’s credit card was labelled ‘sexist’ when it emerged that the algorithms used to determine credit limits were biased against women. In one case, Steve Wozniak, co-founder of Apple, was offered a credit limit almost ten times higher than his wife’s.
This demonstrates that, while AI and ML tools can be cost-effective and greatly increase operational efficiency, there is still work to be done to ensure the technology does not discriminate against particular groups, such as women. To begin to understand how we can improve the use of AI and ML in finance, we must learn from past examples in the industry as well as wider society.
Instances of biased technology in society
AI and ML technology already play major roles in business as well as wider society, and their usage is predicted to continue growing over the coming decade. As adoption increases, there needs to be a greater focus on minimising the risk of discrimination and bias as effectively as possible.
The problem right now is that AI and ML models are fed massive datasets that may be biased to begin with, either because the data is not a representative sample of the real world or because it simply captures existing biases in our society. ML models recognise patterns in the data, including those biases, and can therefore end up discriminating against a specific group of people because of their gender. The same applies to factors such as age, economic background and race. For example, universities using AI for candidate screening may adversely select prospective students based on personal information such as race, hometown or household income. Similar algorithmic biases may also be present in the processing of job applications.
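To make this concrete, here is a minimal sketch of the mechanism described above. The data is entirely simulated under a stated assumption: historical lending decisions penalised one group independently of income. A naive model that simply learns historical approval rates will faithfully reproduce that penalty.

```python
import random

random.seed(0)

# Simulated historical lending data: (income_band, group, approved).
# The labels encode a past human bias: group "B" applicants were approved
# less often than group "A" applicants with the same income band.
def make_record():
    income = random.choice([0, 1, 2])        # 0 = low, 1 = mid, 2 = high
    group = random.choice(["A", "B"])
    base_rate = [0.3, 0.6, 0.9][income]      # approval driven by income...
    if group == "B":
        base_rate -= 0.2                     # ...plus a discriminatory penalty
    approved = random.random() < base_rate
    return income, group, approved

data = [make_record() for _ in range(20000)]

# A naive "model" that memorises historical approval rates per
# (income, group) cell reproduces the bias exactly as described.
def approval_rate(income, group):
    cell = [a for i, g, a in data if i == income and g == group]
    return sum(cell) / len(cell)

# Same income band, different group -> systematically different prediction.
gap = approval_rate(2, "A") - approval_rate(2, "B")
```

The point is that nothing in the code mentions discrimination: the bias enters purely through the training data, which is why representative sampling and bias audits matter more than the learning algorithm itself.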
AI and ML algorithms don’t understand what discrimination is, so organisations need to be aware of potential AI biases and develop a strategy to detect and mitigate them, or at least to better understand why their models make certain decisions based on the data. Gaining this understanding will allow banks and FIs to create better models that are more capable of eliminating biases in AI and ML technologies.
Improving AI technologies to shape an equal future
To begin to create a fair and equal future using AI and ML technologies, we need to establish an understanding of why an AI tool makes a certain decision. Why did Apple’s credit card algorithm offer a woman less credit than a man, even when both of their assets and credit history were the same? Organisations automating their operations using these technologies require a transparent and accountable way of ensuring that cases of AI biases which could potentially lead to discrimination are swiftly identified and dealt with.
Work has begun on explainable AI (XAI) models to shed light on the decision-making processes of such algorithms, and some organisations are already working towards this type of technology. In finance, gaining this understanding will allow banks and FIs to identify potential causes of discrimination, such as gender bias. AI and ML technologies can certainly cut costs and enhance operational efficiency, but there still needs to be a human element in these processes to ensure that no one is at a disadvantage because of their gender, or any other identifiable characteristic.
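As a small illustration of what explainability can look like in practice, consider a linear scoring model, where the per-feature contribution to a decision can be decomposed exactly. The feature names and weights below are purely hypothetical, not any real institution's model.

```python
# Hypothetical linear credit-scoring model: weights are illustrative
# assumptions chosen to show how a biased term becomes visible.
weights = {"income": 0.8, "credit_history_years": 0.5, "gender": -0.6}

def score(applicant):
    """Overall score: weighted sum of (normalised) feature values."""
    return sum(weights[f] * v for f, v in applicant.items())

def explain(applicant):
    """Per-feature contribution to the score. For a linear model this
    decomposition is exact, which is what makes it explainable."""
    return {f: weights[f] * v for f, v in applicant.items()}

applicant = {"income": 1.0, "credit_history_years": 1.0, "gender": 1.0}
contributions = explain(applicant)

# Ranking contributions surfaces which feature pulled the score down most;
# here the gender term stands out immediately as a red flag for review.
worst_feature = min(contributions, key=contributions.get)
```

Real XAI techniques (such as attribution methods for non-linear models) are far more involved, but the goal is the same: make it possible to ask "why did the model offer this applicant less credit?" and get an answer a human reviewer can act on.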
Regulators tend to lag behind when issuing legislation for such innovative technologies, but it is quickly becoming apparent that a legal framework is needed to guide the real-life application of AI and ML. Since these technologies are already widely used across different sectors, this is likely to be something that governments and industry bodies address in the foreseeable future. In the meantime, it will be important for the industry to continue collaborating with both private and public sector organisations to create a transparent market and a fair society for all.