Global Banking & Finance Review
    Technology

    The Good, The Bad and The Ugly of AI Chatbots

    Published by Jessica Weisman-Pitts

    Posted on May 23, 2024

    The implications of AI chatbot technology – both celebrated and condemned – are becoming increasingly evident. While Klarna’s customer service story was heralded as a success, there are converse situations (e.g. Air Canada) where incorrect information has led to fines and reputational damage. Slowly but surely, it is not only the transformative nature of AI that is being recognised, but also the importance of the stewardship that must sit alongside it.

    Before rushing headlong into deploying AI, especially in customer-facing roles, companies need to invest in appropriate AI usage policies, frameworks, and architecture—as well as train people to better understand how AI is changing the way they work.

    As Akber Datoo, CEO and Founder of D2 Legal Technology (D2LT) explains, to maximise the opportunities and mitigate the risks associated with AI, it is vitally important to build the skills and knowledge to use AI legally, ethically, and effectively while maintaining data confidentiality.

    Introduction

    Since OpenAI’s ChatGPT entered the stage in November 2022, AI has taken the world by storm. Within a matter of months, the potential and promise of generative AI were seemingly within reach of every individual and every organisation. Yet as with many things adopted at such a phenomenal rate, there is an almost wilful misunderstanding – or blissful ignorance – of the associated (and considerable) risks.

    It seems that the sheer ease of use of these tools is further undermining the risk perception. Do organisations understand enough about how these tools operate to be able to adequately utilise them in a safe manner, with appropriate processes cognisant of their limitations and risks? How many have assessed the implications for regulatory compliance, including data privacy (e.g. GDPR, CCPA), Know Your Customer (KYC) and Anti-Money Laundering (AML)? Or recognised the vital importance of well-governed and appropriate-quality data to deliver effective, accurate and trustworthy output?

    These issues are just the start when it comes to creating a robust corporate AI strategy. Organisations are rushing headfirst to deploy AI chatbots – not only internally but in customer-facing roles – without even considering the fact that they may have no right to use the AI output due to IP ownership issues. Or assessing the different risk postures associated with developing an in-house tool versus using a commercial option, not least the implications for data confidentiality and the associated risk of a compliance breach. Where is the legal understanding to mitigate these very significant risks?

    Mixed Messages

    The temptation to accelerate AI adoption is understandable. There is no doubt that AI has the potential to deliver substantial operational benefits, as Klarna’s AI assistant has evidenced.

    However, for every good news AI story, there are multiple instances of AI providing incorrect or inconsistent information. TurboTax and H&R Block have faced recent criticism for deploying chatbots that give out bad tax-prep advice, while New York City has been compelled to defend the use of an AI chatbot amid criticism and legal missteps following the provision of incorrect legal advice to small businesses. Even more high profile was the case where Air Canada’s chatbot gave a traveller incorrect advice – advice which was upheld by the British Columbia Civil Resolution Tribunal, which then insisted the airline pay damages and tribunal fees. Moreover, this has raised vital discussion around where liability rests when a company enlists a chatbot to be its agent.

    Endemic AI Misperceptions

    The big question, therefore, is why so many organisations are rushing ahead in deploying AI chatbots without either understanding the technology or undertaking a robust risk assessment. Without an in-depth understanding of how the AI technology works, it will be impossible for organisations to determine how and where to deploy AI in a way that adds value and mitigates risk appropriately.

    Increasingly, examples highlight the issue of AI generating hallucinations – but do organisations understand why AI is prone to such behaviour? Generative AI is non-deterministic: ask the same question twice in succession and the answer could well be different. Models also exhibit drift: not only do they change constantly as their training data grows, but the AI is also learning on the job.
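    The non-determinism is easy to illustrate. Generative models sample the next token from a probability distribution, with a "temperature" parameter controlling how spread out that distribution is. The sketch below uses toy token scores (not a real model, and purely illustrative) to show how identical queries can yield different answers:

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw scores into a probability distribution.
    Higher temperature flattens it; lower temperature sharpens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(tokens, logits, temperature=1.0):
    """Pick one token at random, weighted by the softmax distribution."""
    probs = softmax(logits, temperature)
    return random.choices(tokens, weights=probs, k=1)[0]

# Toy vocabulary and scores standing in for a model's output layer.
tokens = ["approved", "denied", "pending"]
logits = [2.0, 1.5, 0.5]

# At temperature 1.0, repeated identical queries can produce different answers:
answers = {sample_token(tokens, logits, temperature=1.0) for _ in range(50)}

# As temperature approaches zero, sampling collapses to the most likely token:
greedy = sample_token(tokens, logits, temperature=1e-6)
```

    A chatbot answering customer queries this way is rolling weighted dice on every response – which is why the same policy question can receive inconsistent answers on different days.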

    In a legal context, for example, AI is bad at finding citations and tends to invent fictitious ones when trying to justify the answer to a question. There is no truth or falsehood embedded within the parameter weighting; the simple fact is that the underpinning large language models (LLMs) are at a fundamental disadvantage when it comes to factual information. The AI does not understand the content it is generating, in the same way that a calculator does not know that it is producing numbers.

    Understanding Business Implications

    The implications of this disadvantage when it comes to business problem solving were highlighted in a recent study undertaken by BCG Henderson Institute. The study revealed that when using generative AI (OpenAI’s GPT-4) for creative product innovation, around 90% of the participants improved their performance. Further, they converged on a level of performance that was 40% higher than that of those working on the same task without GPT-4.

    In contrast, when using the technology for business problem solving, participants performed 23% worse than those doing the task without GPT-4. Worse still, even when participants were warned during a short training session that the tool could give wrong answers, they did not challenge its output – underlining the misperception and false sense of security created by the apparent simplicity of such tools. Organisations need to invest in robust training that ensures individuals have an in-depth understanding of AI and, critically, continue to update their knowledge in a fast-changing environment.

    These findings underline the need to put a human in the loop. There is no traceability with AI, and no explainability as to how it operates or how output has been generated. Nominating an individual to be responsible for ensuring that nothing inappropriate, inaccurate or incorrect is provided is a fundamental aspect of any AI development.

    Techniques to Mitigate AI Chatbot Risks

    That said, a number of approaches can, if used correctly, enhance the accuracy and performance of chatbots – particularly when combined with a human in the loop. These include:

    • Fine-Tuning: By adapting a pre-trained language model to a specific domain or task, fine-tuning customises a chatbot’s behaviour and responses, making it more suitable for specific use cases.
    • Retrieval Augmented Generation (RAG): This approach enhances large language models (LLMs) by incorporating a human-verified knowledge base into the response generation process. RAG dynamically pulls information from specified data sources, leading to more accurate and relevant chatbot interactions.
    • Function Calling: This refers to the ability of a language model to interact with and utilise external tools or APIs (Application Programming Interfaces) to perform specific tasks. Complementing RAG with function calling enables precise queries to external databases, further optimising response accuracy and relevance.
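    The RAG pattern described above can be sketched in a few lines. Everything here is an illustrative stand-in – the tiny knowledge base, the crude word-overlap relevance score, and the prompt template are hypothetical, not a production retriever – but it shows the shape of the technique: retrieve human-verified entries first, then instruct the model to answer only from them.

```python
import string

# A hand-curated, human-verified knowledge base (illustrative entries).
KNOWLEDGE_BASE = [
    "Refunds for cancelled flights must be requested within 90 days.",
    "Bereavement fares are not applied retroactively after booking.",
    "Checked baggage allowance is one 23kg bag on economy fares.",
]

def tokenize(text):
    """Lowercase and split, stripping punctuation from each word."""
    return {w.strip(string.punctuation) for w in text.lower().split()}

def score(question, document):
    """Crude relevance score: number of shared words."""
    return len(tokenize(question) & tokenize(document))

def retrieve(question, kb, top_k=2):
    """Return the top_k most relevant knowledge-base entries."""
    ranked = sorted(kb, key=lambda doc: score(question, doc), reverse=True)
    return ranked[:top_k]

def build_prompt(question, kb):
    """Ground the chatbot: the model is told to answer ONLY from the
    retrieved, verified context, and to admit when it cannot."""
    context = "\n".join(f"- {doc}" for doc in retrieve(question, kb))
    return (
        "Answer using ONLY the verified context below. If the answer "
        "is not in the context, say you do not know.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )

prompt = build_prompt("Are bereavement fares applied retroactively?", KNOWLEDGE_BASE)
```

    In production, the keyword overlap would typically be replaced by embedding-based semantic search, and function calling would follow a similar shape: the model emits a structured tool request that the application executes against a real database or API before the final response is generated.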

    Conclusion

    Growing numbers of organisations are warning about the danger of unmanaged AI chatbots. The Consumer Financial Protection Bureau has warned that the increased use of chatbots in the banking sector raises risks such as non-compliance with federal consumer financial protection laws, diminished customer service and trust, and potential harm to consumers.

    The onus is on organisations, therefore, to take a far more robust approach to understanding the technology, the evolving legal debates and risk perception. To truly unlock AI’s value, it is imperative to understand the different iterations of AI technology, determine appropriate use cases, identify robust data sources and assess the appropriate risk postures. Critically, individuals at every level of the business need to truly understand the difference between how AI could and should be used within a regulated industry.
