    Original content: Global Banking and Finance Review - https://www.globalbankingandfinance.com


    Business

    Chatbots Say Plenty About New Threats to Data

    Published by Gbaf News

    Posted on August 21, 2018

    8 min read

    Last updated: January 21, 2026


    Tags: Artificial Intelligence, Customer interaction, cybercriminals, payment card

    By Amina Bashir and Mike Mimoso, Flashpoint

    Chatbots are becoming a useful customer interaction and support tool for businesses.

    These bots are powered by artificial intelligence that lets customers ask simple questions, pay bills, or resolve disputes over transactions; they’re cheaper than hiring more call centre personnel, and they’re popping up everywhere.

    As with most other innovations, threat actors have found a use for them too.

    A number of recent security incidents have involved the abuse of a chatbot to steal personal or payment card information from customers, or to post offensive messages in a business’s channel, threatening its reputation. There is potential for worse. Attackers may find inroads through chatbots by exploiting vulnerabilities in the code to sit in a man-in-the-middle position and steal data from an interaction as it traverses the wire, or by sending the user links to exploits that access a backend database where information is stored. Attackers may also mimic chatbots, impersonating an existing business’s messaging to interact with customers directly and steal personal information that way.

    It’s an array of risks and threats, hidden in an innocuous communication channel and challenging to mitigate.

    Flashpoint analysts believe that as businesses integrate chatbots into their platforms, threat actors will continue to leverage them in malicious campaigns targeting individuals and businesses across multiple industries. Moreover, threat actors will likely evolve their methods as businesses move to enhance chatbot security.

    Few Chatbot Attacks Made Public

    Further complicating matters, many attacks go unreported. Those that are made public provide valuable insight into how attackers leverage chatbots.

    In June 2018, Ticketmaster UK disclosed a breach of personal and payment card data belonging to 40,000 international customers. The threat actor group, identified as Magecart, targeted JavaScript built by service provider Inbenta for Ticketmaster UK’s chatbot. Inbenta said in a statement that a piece of custom JavaScript designed to collect personal information and payment card data for the Ticketmaster chatbot was exploited; the code had been supplied more than nine months earlier. It was disabled immediately upon disclosure.

    Microsoft and Tinder have also experienced issues with chatbots. In Microsoft’s case, its AI chatbot Tay, released in 2016, was reportedly commandeered by threat actors who led it to spout anti-Semitic and racist abuse, an attack methodology classified as “pollution in communication channels.”

    On the popular dating app Tinder, cybercriminals used a chatbot to conduct fraud, impersonating a woman who asked victims to enter their payment card information to become “verified” on the platform.

    Mitigations and Assessment

    Awareness of the risks related to chatbots is not yet high. For their part, attackers likely did not set out to exploit chatbot vulnerabilities; rather, in targeting the supply chain or scanning for bugs in code, they found an available and relatively new attack vector with direct access to users and their information. In addition to man-in-the-middle attacks in which chatbots are mimicked, attackers can use chatbots in phishing and other social engineering scams. They can also use chatbots to send users links that redirect to malicious domains, steal information, or access protected networks.
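    One low-cost control against the link-redirection abuse described above is to allowlist the outbound links a chatbot is permitted to send. A minimal sketch in Python follows; the domain names and function names are illustrative, not taken from any specific chatbot platform:

```python
from urllib.parse import urlparse

# Hypothetical allowlist: only domains the business actually controls.
ALLOWED_DOMAINS = {"example-bank.com", "help.example-bank.com"}

def is_safe_link(url):
    """Accept only HTTPS links whose host is exactly an allowlisted domain."""
    parts = urlparse(url)
    return parts.scheme == "https" and parts.hostname in ALLOWED_DOMAINS

def filter_links(links):
    """Drop any outbound link the chatbot tries to send that is not allowlisted."""
    return [u for u in links if is_safe_link(u)]
```

    The exact-match host check matters: a substring test would wave through lookalike hosts such as `evil-example-bank.com`, which contain the legitimate domain name without being it.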

    Since most of these attacks are essentially attacks against software, tried-and-tested security hygiene goes a long way as mitigation. It starts with requiring multi-factor authentication to verify a user’s identity before any personal or payment card data is exchanged through a chatbot.
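    As a concrete illustration of that first step, the second factor can be a time-based one-time password (TOTP, RFC 6238) checked before the bot discloses anything sensitive. Below is a minimal sketch using only the Python standard library; the session handling and reply strings are hypothetical placeholders for a real chatbot framework:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """RFC 6238 time-based one-time password (HMAC-SHA1 over the time-step counter)."""
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def gate_sensitive_reply(session, submitted_code, secret, now=None):
    """Release account details only after the user passes the TOTP check."""
    if hmac.compare_digest(submitted_code, totp(secret, for_time=now)):
        session["verified"] = True
        return "Identity confirmed. Here are your recent transactions."
    return "Verification failed. Please re-enter the code from your authenticator app."
```

    The RFC 6238 test vectors (e.g. secret `12345678901234567890` at time 59 yields `94287082` with 8 digits) can be used to sanity-check the implementation. A real deployment would also rate-limit attempts and accept a small window of adjacent time steps to tolerate clock drift.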

    Monitoring for and deploying software updates and security patches regularly is imperative. Organisations should also encrypt conversations between the user and the chatbot, which is essential to warding off the loss of personal data.

    Companies may also consider breaking messages into smaller chunks and encrypting those chunks individually rather than encrypting the whole message at once. This approach makes offline decryption much more difficult for an attacker in the event of a memory-leak attack. Additionally, appropriately storing and securing the data collected by chatbots is crucial: companies can encrypt any stored data and set rules governing how long the chatbot retains it.
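    The chunk-level approach can be sketched as follows. This is an illustration of the idea only, built from Python’s standard library (HMAC-SHA256 in counter mode as the keystream, plus a per-chunk integrity tag); a production system should use a vetted AEAD cipher such as AES-GCM instead:

```python
import hashlib
import hmac
import secrets

def _keystream(key, nonce, length):
    """Derive `length` keystream bytes: HMAC-SHA256 of nonce||counter (CTR-style PRF)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt_chunks(key, message, chunk_size=32):
    """Split `message` into chunks and encrypt each under its own random nonce,
    so recovering one chunk's keystream does not expose the others."""
    encrypted = []
    for i in range(0, len(message), chunk_size):
        chunk = message[i:i + chunk_size]
        nonce = secrets.token_bytes(16)
        ct = bytes(a ^ b for a, b in zip(chunk, _keystream(key, nonce, len(chunk))))
        tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()  # per-chunk integrity tag
        encrypted.append((nonce, ct, tag))
    return encrypted

def decrypt_chunks(key, encrypted):
    """Verify each chunk's tag, then reverse the XOR to recover the plaintext."""
    plain = b""
    for nonce, ct, tag in encrypted:
        expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("chunk failed integrity check")
        plain += bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))
    return plain
```

    Because each chunk carries its own nonce and tag, an attacker who extracts a fragment of keystream or plaintext from memory learns nothing about neighbouring chunks, and any tampering is caught before decryption.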

    Finally, the rise in chatbot-related attacks should also reinforce the need for continuous end-user education to counter social engineering.
