


The Good, The Bad and The Ugly of AI Chatbots



The implications of AI chatbot technology – in both its glory and its denunciation – are becoming increasingly evident. While Klarna’s customer service story was heralded as a success, there are converse situations (e.g. Air Canada) where incorrect information has led to fines and reputational damage. Slowly but surely, it is not only the transformative nature of AI that is being recognised, but also the importance of the stewardship that must sit alongside it.

Before rushing headlong into deploying AI, especially in customer-facing roles, companies need to invest in appropriate AI usage policies, frameworks, and architecture—as well as train people to better understand how AI is changing the way they work.

As Akber Datoo, CEO and Founder of D2 Legal Technology (D2LT) explains, to maximise the opportunities and mitigate the risks associated with AI, it is vitally important to build the skills and knowledge to use AI legally, ethically, and effectively while maintaining data confidentiality.

Introduction

Since OpenAI’s ChatGPT entered the stage in November 2022, AI has taken the world by storm. Within a matter of months, the potential and promise of generative AI was seemingly within reach of every individual and every organisation. Yet as with many things adopted at such a phenomenal rate, there is an almost wilful misunderstanding – or blissful ignorance – of the associated (and considerable) risks.

The sheer ease of use of these tools further undermines risk perception. Do organisations understand enough about how these tools operate to use them safely, with appropriate processes cognisant of their limitations and risks? How many have assessed the implications for regulatory compliance, including data privacy (e.g. GDPR, CCPA), Know Your Customer (KYC) and Anti-Money Laundering (AML)? Or recognised the vital importance of well-governed, appropriate-quality data for delivering effective, accurate and trustworthy output?

These issues are just the start when it comes to creating a robust corporate AI strategy. Organisations are rushing headfirst to deploy AI chatbots – not only internally but in customer-facing roles – without considering that they may have no right to use the AI output due to IP ownership issues, or without assessing the different risk postures of developing an in-house tool versus using a commercial option, not least the implications for data confidentiality and the associated risk of a compliance breach. Where is the legal understanding to mitigate these very significant risks?

Mixed Messages

The temptation to accelerate AI adoption is understandable. There is no doubt that AI has the potential to deliver substantial operational benefits, as Klarna’s AI assistant has evidenced.

However, for every good-news AI story there are multiple instances of AI providing incorrect or inconsistent information. TurboTax and H&R Block have faced recent criticism for deploying chatbots that give out bad tax-prep advice, while New York City has been compelled to defend its AI chatbot after it gave incorrect legal advice to small businesses. Even more high profile was the case in which Air Canada’s chatbot gave a traveller incorrect advice – advice upheld by the British Columbia Civil Resolution Tribunal, which ordered the airline to pay damages and tribunal fees. The case has also raised a vital discussion about where liability rests when a company enlists a chatbot as its agent.

Endemic AI Misperceptions

The big question, therefore, is why so many organisations are rushing ahead in deploying AI chatbots without either understanding the technology or undertaking a robust risk assessment. Without an in-depth understanding of how the technology works, it is impossible for organisations to determine how and where to deploy AI in a way that adds value and appropriately mitigates risk.

Increasingly, examples highlight AI’s tendency to generate hallucinations – but do organisations understand why AI is prone to such behaviour? Generative AI is non-deterministic: ask the same question twice and the answer may well differ. Models also exhibit drift – they change constantly as their training information grows and as the AI learns on the job.

In a legal context, for example, AI is bad at finding citations and tends to invent fictitious ones when justifying an answer. There is no truth or falsehood embedded within the parameter weighting; the underpinning large language models (LLMs) are fundamentally disadvantaged when it comes to factual information. The AI does not understand the content it is generating, in the same way a calculator does not know it is producing numbers.

Understanding Business Implications

The implications of this disadvantage for business problem solving were highlighted in a recent study by the BCG Henderson Institute. The study revealed that when using generative AI (OpenAI’s GPT-4) for creative product innovation, around 90% of participants improved their performance. Further, they converged on a level of performance 40% higher than that of those working on the same task without GPT-4.

In contrast, when using the technology for business problem solving, participants performed 23% worse than those doing the task without GPT-4. Worse still, even when participants were warned during a short training session about the possibility of wrong answers, the tool’s output went unchallenged – underlining the misperception and false sense of security created by the apparent simplicity of such tools. Organisations need to invest in robust training that gives individuals an in-depth understanding of AI and, critically, keeps that knowledge current in a fast-changing environment.

These findings underline the need to put a human in the loop. There is no traceability with AI, and no explainability as to how it operates or how output has been generated. Nominating an individual to be responsible for ensuring that nothing inappropriate, inaccurate or incorrect is provided is a fundamental aspect of any AI development.
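A human-in-the-loop control can be as simple as a review gate between the model and the customer. The sketch below is a minimal, illustrative Python example – the field names, the reviewer identifier and the approval policy are assumptions, not a description of any particular vendor’s workflow – showing the core idea: no chatbot draft is released until a named individual has approved it.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Draft:
    """A chatbot-drafted answer awaiting human review (illustrative schema)."""
    question: str
    answer: str
    status: str = "pending"          # pending -> approved / rejected
    reviewer: Optional[str] = None   # named individual, for accountability

@dataclass
class ReviewQueue:
    drafts: List[Draft] = field(default_factory=list)

    def submit(self, question: str, answer: str) -> Draft:
        # Every model draft enters the queue; nothing goes straight to the customer.
        draft = Draft(question, answer)
        self.drafts.append(draft)
        return draft

    def review(self, draft: Draft, reviewer: str, approve: bool) -> None:
        # A named human signs off (or blocks) each answer.
        draft.status = "approved" if approve else "rejected"
        draft.reviewer = reviewer

    def releasable(self) -> List[Draft]:
        # Only human-approved answers may ever be shown to a customer.
        return [d for d in self.drafts if d.status == "approved"]

queue = ReviewQueue()
draft = queue.submit("Is a bereavement fare refundable after travel?",
                     "Yes, within 90 days.")          # hallucinated answer
queue.review(draft, reviewer="compliance.officer", approve=False)
print(len(queue.releasable()))  # the rejected draft is never released
```

The point of the design is accountability: because each decision records a reviewer, the organisation can always answer the question the Air Canada case raised – who was responsible for what the chatbot told the customer.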

Techniques to Mitigate AI Chatbot Risks

That said, a number of approaches can, if used correctly, enhance the accuracy and performance of chatbots – particularly when combined with a human in the loop. These include:

  • Fine-Tuning: By adapting a pre-trained language model to a specific domain or task, fine-tuning customises a chatbot’s behaviour and responses, making it more suitable for specific use cases.
  • Retrieval Augmented Generation (RAG): This approach enhances large language models (LLMs) by incorporating a human-verified knowledge base into the response generation process. RAG dynamically pulls information from specified data sources, leading to more accurate and relevant chatbot interactions.
  • Function Calling: This refers to the ability of a language model to interact with and utilise external tools or APIs (Application Programming Interfaces) to perform specific tasks. Complementing RAG with function calling enables precise queries to external databases, further optimising response accuracy and relevance.
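The RAG approach above can be sketched in a few lines. The following toy Python example – the knowledge base, the word-overlap scoring and the prompt wording are all illustrative assumptions; production systems typically use vector embeddings and a real LLM call – shows the essential pattern: retrieve human-verified text first, then instruct the model to answer only from that text.

```python
# Human-verified knowledge base (illustrative entries, not real policy text).
KNOWLEDGE_BASE = {
    "refund policy": "Refund requests must be filed within 30 days of purchase.",
    "bereavement fares": "Bereavement fares must be requested before travel.",
    "card fees": "No annual fee applies to the standard card.",
}

def retrieve(question: str) -> str:
    """Return the entry whose key shares the most words with the question.
    (A toy stand-in for embedding-based similarity search.)"""
    words = set(question.lower().replace("?", "").split())
    best_key = max(KNOWLEDGE_BASE, key=lambda k: len(words & set(k.split())))
    return KNOWLEDGE_BASE[best_key]

def build_prompt(question: str) -> str:
    """Ground the model in retrieved, verified text rather than its own recall."""
    context = retrieve(question)
    return (
        "Answer using ONLY the context below. If the context does not "
        "cover the question, say you do not know.\n"
        f"Context: {context}\n"
        f"Question: {question}"
    )

print(build_prompt("What is your refund policy?"))
```

Because the model is told to answer only from the retrieved context, a well-built RAG pipeline turns the hallucination problem into a retrieval problem – one the organisation can govern through the quality of its knowledge base.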
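Function calling follows a similar grounding principle: instead of letting the model guess a fact, it emits a structured request that the application executes against an authoritative source. The sketch below is a minimal Python illustration – the tool name, the JSON shape of the model’s output and the stand-in database are assumptions, not any specific vendor’s API.

```python
import json

def get_account_balance(account_id: str) -> str:
    """Stand-in for a precise query against an external database or API."""
    balances = {"ACC-1001": "1,250.00 GBP"}  # illustrative data
    return balances.get(account_id, "unknown account")

# Registry of tools the model is allowed to call.
TOOLS = {"get_account_balance": get_account_balance}

def dispatch(raw_model_output: str) -> str:
    """Execute the structured call the model emitted instead of free text."""
    call = json.loads(raw_model_output)
    fn = TOOLS[call["tool"]]      # fail loudly on unknown tool names
    return fn(**call["arguments"])

# A hypothetical structured call emitted by the model.
model_output = '{"tool": "get_account_balance", "arguments": {"account_id": "ACC-1001"}}'
print(dispatch(model_output))  # → 1,250.00 GBP
```

The answer the customer sees comes from the database, not from the model’s parameter weights – which is precisely why combining function calling with RAG improves factual accuracy.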

Conclusion

Growing numbers of organisations are warning about the danger of unmanaged AI chatbots. The Consumer Financial Protection Bureau has warned that the increased use of chatbots in the banking sector raises risks such as non-compliance with federal consumer financial protection laws, diminished customer service and trust, and potential harm to consumers.

The onus is therefore on organisations to take a far more robust approach to understanding the technology, the evolving legal debates and risk perception. To truly unlock AI’s value, it is imperative to understand the different iterations of AI technology, determine appropriate use cases, identify robust data sources and assess the correct risk postures. Critically, individuals at every level of the business need to truly understand the difference between how AI could and how it should be used within a regulated industry.

Global Banking & Finance Review
