

Chatbots Say Plenty About New Threats to Data


By Amina Bashir and Mike Mimoso, Flashpoint

Chatbots are becoming a useful customer interaction and support tool for businesses.

These bots are powered by artificial intelligence that lets customers ask simple questions, pay bills, or resolve disputes over transactions; they’re cheaper than hiring more call centre personnel, and they’re popping up everywhere.

As with most other innovations, threat actors have found a use for them too.

A number of recent security incidents have involved the abuse of a chatbot to steal personal or payment card information from customers, or to post offensive messages in a business’s channel, threatening its reputation. Worse is possible should attackers find deeper inroads through chatbots: exploiting vulnerabilities in the code to sit in a man-in-the-middle position and steal data as it traverses the wire, or sending users links to exploits that open access to a backend database where information is stored. Attackers may also mimic chatbots, impersonating an existing business’s messaging to interact with customers directly and steal personal information that way.

It’s an array of risks and threats that can hide in an innocuous communication channel and is challenging to mitigate.

Flashpoint analysts believe that as businesses integrate chatbots into their platforms, threat actors will continue to leverage them in malicious campaigns targeting individuals and businesses across multiple industries. Moreover, threat actors will likely evolve their methods as businesses move to enhance chatbot security.

Few Chatbot Attacks Made Public

Further complicating matters, many attacks go unreported. The attacks that are made public, however, provide useful insight into how attackers are leveraging chatbots.

In June, Ticketmaster UK disclosed a breach of personal and payment card data belonging to 40,000 international customers. The threat actor group, identified as Magecart, targeted JavaScript built by service provider Inbenta for Ticketmaster UK’s chatbot. Inbenta said in a statement that a piece of custom JavaScript it had supplied more than nine months earlier, designed to collect personal information and payment card data for the Ticketmaster chatbot, was exploited; the code was disabled immediately upon disclosure.

Microsoft and Tinder have also experienced issues with chatbots. In Microsoft’s case, its AI chatbot Tay, released in 2016, was reportedly commandeered by threat actors who led it to spout anti-Semitic and racist abuse, an attack methodology classified as “pollution in communication channels.”

On the popular dating app Tinder, cybercriminals used a chatbot to conduct fraud, impersonating a woman who asked victims to enter their payment card information in order to become “verified” on the platform.

Mitigations and Assessment

Awareness of the risks related to chatbots isn’t high. For their part, attackers likely hadn’t set out to exploit chatbot vulnerabilities; rather, in targeting the supply chain or scanning for bugs in code, they found an available and relatively new attack vector with direct access to users and their information. Beyond man-in-the-middle attacks in which chatbots are mimicked, attackers can use chatbots in phishing and other social engineering scams, or to send users links that redirect to malicious domains, steal information, or access protected networks.

Since most of these attacks are essentially attacks against software, tried-and-tested security hygiene goes a long way as a mitigation. That starts with requiring multi-factor authentication to verify a user’s identity before any personal or payment card data is exchanged through a chatbot.
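For illustration, here is a minimal Python sketch of that gate, assuming TOTP-based multi-factor authentication via the pyotp library. The User model and the fetch_account_summary backend call are hypothetical stand-ins, not part of any particular chatbot framework.

```python
# Minimal sketch: gate any sensitive chatbot exchange behind TOTP-based MFA.
# Assumes the pyotp library; User and fetch_account_summary are hypothetical.
from dataclasses import dataclass

import pyotp


@dataclass
class User:
    name: str
    totp_secret: str  # per-user base32 secret, provisioned at enrolment


def fetch_account_summary(user: User) -> str:
    # Hypothetical backend call; stands in for any data the bot would release.
    return f"Account summary for {user.name}"


def handle_sensitive_request(user: User, otp_code: str) -> str:
    """Release personal or payment data only after the second factor checks out."""
    totp = pyotp.TOTP(user.totp_secret)
    if not totp.verify(otp_code, valid_window=1):  # tolerate one 30s step of clock drift
        return "We couldn't verify your identity, so we can't share that here."
    return fetch_account_summary(user)


user = User(name="A. Customer", totp_secret=pyotp.random_base32())
code = pyotp.TOTP(user.totp_secret).now()  # in reality, typed in by the user
print(handle_sensitive_request(user, code))
```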

Monitoring for and deploying regular software updates and security patches is imperative. Organisations should also encrypt conversations between the user and the chatbot, which is essential to preventing the loss of personal data in transit.
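As one illustration, a WebSocket-based chatbot client might refuse anything but verified TLS. This sketch uses the third-party websockets package, and the bot.example.com endpoint is a hypothetical placeholder.

```python
# Minimal sketch: enforce encrypted, certificate-verified transport for a
# WebSocket-based chatbot conversation. Requires the third-party `websockets`
# package; the endpoint URL is a hypothetical placeholder.
import asyncio
import ssl

import websockets


async def chat() -> None:
    ctx = ssl.create_default_context()  # verifies the server's certificate chain
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    # wss:// (not ws://) plus the context keeps the conversation encrypted in transit.
    async with websockets.connect("wss://bot.example.com/chat", ssl=ctx) as ws:
        await ws.send("When is my next payment due?")
        print(await ws.recv())


asyncio.run(chat())
```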

Companies may also consider breaking messages into smaller chunks and encrypting those chunks individually rather than the whole message; this makes offline decryption following a memory-leak attack far more difficult for an attacker (see the sketch below). Appropriately storing and securing the data collected by chatbots is just as crucial: companies can encrypt any stored data and set rules for how long the chatbot retains it.
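Here is a minimal sketch of that chunk-and-encrypt idea, using the Fernet recipe from the cryptography package. The chunk size and the one-hour retention window are illustrative assumptions, and Fernet’s built-in ttl check on decryption is used here as a simple way to enforce the retention rule.

```python
# Minimal sketch: split a chatbot message into fixed-size chunks and encrypt
# each chunk independently, so a partial leak exposes less plaintext. Uses the
# `cryptography` package's Fernet recipe; sizes and windows are assumptions.
from cryptography.fernet import Fernet, InvalidToken

CHUNK_SIZE = 64           # bytes per chunk; an illustrative choice
RETENTION_SECONDS = 3600  # treat chatbot transcripts older than an hour as expired

key = Fernet.generate_key()  # in practice, load from a key management service
fernet = Fernet(key)


def encrypt_message(message: str) -> list[bytes]:
    """Chunk the message and encrypt each chunk as its own Fernet token."""
    raw = message.encode("utf-8")
    chunks = [raw[i:i + CHUNK_SIZE] for i in range(0, len(raw), CHUNK_SIZE)]
    return [fernet.encrypt(chunk) for chunk in chunks]


def decrypt_message(tokens: list[bytes]) -> str:
    """Reassemble the message; Fernet's ttl check enforces the retention rule."""
    try:
        parts = [fernet.decrypt(t, ttl=RETENTION_SECONDS) for t in tokens]
    except InvalidToken:
        raise ValueError("chunk expired or tampered with")
    return b"".join(parts).decode("utf-8")


tokens = encrypt_message("Card ending 1234, billing question about March statement")
print(decrypt_message(tokens))
```

Because each chunk carries its own authentication tag and timestamp, an attacker who recovers part of a transcript gets at most one chunk’s plaintext per broken token, and expired transcripts simply fail to decrypt.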

Finally, the rise in chatbot-related attacks should also reinforce the need for continuous end-user education to counter social engineering.

Global Banking & Finance Review

 

Why waste money on news and opinions when you can access them for free?

Take advantage of our newsletter subscription and stay informed on the go!


By submitting this form, you are consenting to receive marketing emails from: Global Banking & Finance Review │ Banking │ Finance │ Technology. You can revoke your consent to receive emails at any time by using the SafeUnsubscribe® link, found at the bottom of every email. Emails are serviced by Constant Contact

Recent Post