


How do we debias AI?


By Nigel Cannings, CTO at Intelligent Voice

Artificial Intelligence (AI) has quickly become integrated into wide-ranging day-to-day processes, from customer services and recruitment profiling to financial services, and even medical practices. It’s an incredibly useful tool and it has streamlined and enhanced many areas of life and business. But many people forget that AI is only as good as the information you put into it. So, what if that information is biased?

It’s a well-established fact that all systems driven by AI have the potential to be exposed to and influenced by the biases of the people who created them or who collected the data. In the past, for example, recruitment algorithms based on existing hiring practices favoured young, white men and excluded women and people of colour due to the CVs and recruiting decisions given to the AI. This type of unintended side-effect is produced when a system’s training data contains the pre-existing biases of the creator. And the larger data sets and models become, the harder they are to debias. But it doesn’t have to be this way. So, how do we debias AI now?

What is the problem with bias in AI?

There are multiple reasons why business leaders need to be aware of the potential for bias in AI.

From a business perspective, the impact of bias can be damaging: it can skew a data set, reducing the performance and appeal of a product or marketing campaign. It can undermine customer service, using a customer’s profile and previous habits to determine their needs rather than listening to their wants. And it can alienate customers altogether, because who wants to buy from a brand known to be biased or discriminatory?

And then there are ethical concerns. Notably, in 2019, the Babylon healthcare app was accused of bias when its Symptom Checker said the most probable cause of a 59-year-old woman’s central chest pain was a panic attack, while suggesting that a man with the same symptoms and characteristics might be at risk of a heart attack. Babylon later explained the mitigating circumstances of the case and that the system works on the basis of probability, but it still stands as a case in point about the dangers of bias in AI. Would the app have been blameless if the woman had ignored her symptoms on the basis of its diagnosis and died of heart failure?

So, we know why bias in AI needs to be addressed. But how do we set about the process? It starts by understanding how AI works.

Why do AI networks make the decisions they do?

Despite some of the hyperbole in the press, we are a long way from AGI (“Artificial General Intelligence”), where computers can start to “think” like humans, i.e. make intuitive leaps outside the areas they have been trained on.

So at the moment, an AI network makes a decision or prediction based on all of the data it has been trained on, and while some of the results are truly stunning, they are ultimately predictions based on the information provided by its human creators. A machine might be able to spot a cancer in an X-ray that defeats even the most assiduous human, but only because a human labelled thousands of X-rays as having come from people who did or did not have cancer.

Asked to fill in a missing word, even the largest, most powerful AI model in the world will always say “cat” when asked what sat on the mat. Why? Because its training data says that is how the world works. Only a human could ever decide, unprompted, that perhaps one day it was a rat sitting quietly in the sun.
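To see this concretely, here is a minimal sketch of asking a masked language model to fill in the blank. It is an illustration only: it assumes the Hugging Face transformers library and uses the public bert-base-uncased checkpoint, but any masked model would behave similarly.

```python
# A minimal sketch using the Hugging Face "fill-mask" pipeline
# (assumes the transformers library is installed; bert-base-uncased
# is used purely as an illustrative public checkpoint).
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

# The model ranks candidate words purely by probabilities learned
# from its training corpus; it is prediction, not understanding.
for prediction in unmasker("The [MASK] sat on the mat."):
    print(prediction["token_str"], round(prediction["score"], 3))
```

Whichever word tops the list is not insight; it is simply the word that most often filled that slot in the training data.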

So, how can understanding these processes help us to prevent AI bias now and in the future?

How can we debias AI?

Data is at the heart of the problem. We know that faulty data leads to faulty conclusions. We start our journey into AI training with what is called “labelled data”, i.e. a human being has classified a piece of training data as belonging to a particular category. It might be a label on a picture, an identification of an emotion, or simply an outcome, e.g. this is the CV of a successful candidate. Some networks rely on very sparse labelling, perhaps only the language a text is written in, but require vast amounts of data to make sense of the world, learning, say, how sequences of words are used. A particular area of concern is so-called Large Language Models (LLMs) such as GPT-3 and Microsoft’s Turing NLG, where even tracking the data they are trained on is a tricky proposition in itself. The training data for these models is “self-supervised”, which is a way of labelling vast quantities of data using the data itself. This is further complicated for text-based LLMs by the fact that much of the text carries no associated metadata, such as demographics or the author’s gender. In AI parlance, this means the data is not “balanced” to fairly represent every kind of demographic, and additional steps need to be taken to ensure the construction of these datasets is unbiased.
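To make the idea of self-supervised labelling concrete, the sketch below (plain Python, with all names and the high masking rate chosen purely for illustration) manufactures training pairs from raw text alone. No human annotator touches the data, which is precisely why no demographic metadata comes attached.

```python
import random

def make_masked_examples(sentence, mask_token="[MASK]", mask_prob=0.15):
    """Turn raw text into (input, label) training pairs by hiding words.

    This is the essence of self-supervision: the "label" comes from
    the data itself, so nothing records who wrote the text or which
    demographics it represents.
    """
    tokens = sentence.split()
    examples = []
    for i, token in enumerate(tokens):
        if random.random() < mask_prob:
            masked = tokens.copy()
            masked[i] = mask_token
            examples.append((" ".join(masked), token))
    return examples

random.seed(0)
# A high masking rate is used here so the short demo produces output.
for inputs, label in make_masked_examples("the cat sat on the mat", mask_prob=0.5):
    print(inputs, "->", label)
```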

“Bias” is a charged word in itself. But there are, hopefully, some areas that all commentators can agree need to be addressed. Going back to the earlier example of hiring, it is important that we look at why certain groups are rejected. To do this, we can “ask” the network why it reached the decision it did. This is a growing area of AI research called eXplainable AI (XAI), in which we examine AI models and infer what they base their predictions on. Even for so-called black-box models, we can at least infer which part of the input data has the most impact on the network’s classification (the decision). In some architectures, such as tree-based methods, categorised as white-box methods, we can go further and understand the decision by inspecting the model’s internal components.
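As a toy illustration of the white-box case (a sketch assuming scikit-learn, with invented feature names and synthetic data rather than anyone’s real hiring records), a tree-based model directly exposes which inputs drive its decisions, which is how a demographic feature doing too much work can be spotted.

```python
# A toy white-box explainability sketch using scikit-learn;
# the feature names and data are synthetic and purely illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
feature_names = ["years_experience", "test_score", "gender_encoded"]

# Synthetic "hiring" data in which the historical outcome secretly
# correlates with the demographic column.
X = rng.random((500, 3))
y = (0.2 * X[:, 0] + 0.1 * X[:, 1] + 0.7 * X[:, 2] > 0.5).astype(int)

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# feature_importances_ reveals what the tree actually keys on;
# heavy weight on the demographic column is a red flag.
for name, importance in zip(feature_names, model.feature_importances_):
    print(f"{name}: {importance:.2f}")
```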

Such capability is at the heart of one of the principal tenets of the EU’s data protection regulation, GDPR, which states that people have a right to have automated decisions explained to them. These forward-thinking regulations are having a big impact on the development of public-facing AI systems, and the knock-on effect is a drive for explainability. Explainability is needed not just for transparency of the AI, but also for communicating the scope of an AI system with regard to the data it uses. Recently, the Department for Work and Pensions (DWP) has attracted considerable negative press over its proposed AI to assess whether people should receive Universal Credit. While the system may or may not be biased, the lack of transparency is problematic: the DWP has not disclosed the details of the system, what data it uses, or whether it involves profiling, despite Freedom of Information Act requests. One would hope that this will be addressed before the system is brought online.

Why do we need to worry about debiasing AI now?

AI is an incredibly important tool. And it holds the potential to be even more valuable as it evolves: it was recently discovered that AI might even be able to identify a patient’s race from X-rays, something that medical professionals cannot do when observing the same data. But while this is phenomenally exciting, it also raises concerns about what it might mean for certain demographics, and how AI, if left without bias checks, could entrench discrimination. In America, where some healthcare systems use AI to expedite decision-making, a study found that a widely adopted algorithm discriminated against Black people because it equated required care with cost. And because white patients spend more money on healthcare, the algorithm wrongly concluded that Black patients were generally healthier, and so required less care. That is an extremely worrying scenario.

If we don’t take steps to control AI bias now, and to understand the distribution of gender, age and other demographics in the data we use to train AI models, then it’s not unrealistic to imagine a bleak future of discrimination, intentional or otherwise. When we bake personal bias into large-scale decision-making processes – whether in healthcare or insurance, hospitality or retail – we set in motion a potentially dangerous series of events. And while a few lost customers may be easy for a business to recover from, it is harder for individuals to recover from unnecessarily inflated prices, poor service, or reduced care. No one wants to be in a position where lives are lost through the implementation of flawed AI.
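A practical first step is simply to measure outcomes per demographic group before a system ships. Here is a minimal sketch (pure Python, with hypothetical decisions) of the kind of demographic-parity check such an audit might start from.

```python
from collections import defaultdict

def selection_rates(records):
    """Compare the rate of positive decisions across groups.

    `records` is a list of (group, decision) pairs; a large gap
    between groups is a signal to investigate before deployment.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += int(decision)
    return {group: positives[group] / totals[group] for group in totals}

# Hypothetical model outputs: (demographic group, approved?)
decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
print(selection_rates(decisions))  # roughly {'A': 0.67, 'B': 0.33}
```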
