

The Human-AI Partnership Must Lead The Fightback Against Financial Crime


By Martin Rehak, founder and CEO, Resistant.AI

More than six months on from Brexit, as the UK looks to take advantage of its new-found regulatory freedom, the domestic financial services industry is also seeing rising levels of organised fraud.

A recent report from Cifas, the UK’s fraud prevention community, found that during the first six months of 2021 there was an 11% increase in incidents of identity fraud relative to the same period in 2020. The same report also pointed to the rise in cybercrime as a service – phishing kits, fraud tool kits and hacking services – all posing serious risks across all sectors.

There are some particularly worrying areas. These include account takeover, a form of identity theft and fraud in which a malicious third party successfully gains access to a user’s account credentials. Next is synthetic or sign-up fraud, where a criminal combines stolen data with false information to create a new identity. There’s also the challenge presented by ‘money mules’ – individuals who transfer illegally acquired money on behalf of others, whether in person, through a courier service, or electronically.

With these and many other issues in mind, regulators are looking to minimise the risks. It’s no coincidence that the FCA has started to take action against banks over issues such as money laundering. NatWest became the first British bank to face criminal prosecution under UK anti-money laundering rules, at a cost of £265m, for failing to prevent the laundering of nearly £400m.

An Intelligent Response

Adding to the impact of these problems is that sophisticated fraudsters constantly adapt their methods, leaving traditional banks and fintechs open to attacks that they can’t necessarily keep up with. As a result, the sector is under increasing pressure to proactively spot fraud patterns, no matter how often they evolve.

But, what has become clear is that traditional approaches to fraud prevention – those that primarily rely on human intervention – are losing out to criminals that can operate at huge scale. It’s well known, for instance, how overwhelmed many fraud (and cyber) analyst teams are with an ever-increasing number of alerts – many of them false.

Instead, the combination of AI, automation and the human brain offers the strongest form of defence when it comes to fighting cybercrime. Indeed, without greater investment in AI technologies it will become virtually impossible to defeat fraudsters who are themselves using AI to optimise their approach.

Sophisticated AI is able to predict, detect and deter financial crime. Continual assessment of transactions and of customer behaviour, both within and across sessions, can alert teams to fraudulent activity taking place in the moment, or highlight anomalies which could point to a crime that took place in recent weeks, months or even years. Experienced human analysts can then infer the attacker’s motives, confirm the finding and react appropriately.

These anomalies can be behavioural, or relate to device characteristics, internet or financial service providers, contact information, geo-location, spikes of related activity, or unusual switching between accounts. The list goes on, but efficient and effective anomaly detection identifies behaviours that deviate from the expected and which might be symptomatic of criminal activity. AI can block attacks, and push criminals to the limit of their efficiency.
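To make the idea concrete, here is a minimal, illustrative sketch of one such detection technique: flagging transaction amounts that deviate sharply from an account’s typical behaviour using a robust modified z-score. This is a deliberately simple stand-in for the far richer behavioural, device and geo-location signals described above, not any vendor’s actual model.

```python
from statistics import median

def flag_anomalies(amounts, threshold=3.5):
    """Return the indices of amounts far from the account's norm.

    Uses a modified z-score based on the median absolute deviation
    (MAD), which -- unlike the mean and standard deviation -- is not
    itself distorted by the very outliers we want to catch.
    """
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:  # no variation at all: nothing stands out
        return []
    return [i for i, a in enumerate(amounts)
            if 0.6745 * abs(a - med) / mad > threshold]

# A typical account history with one outsized transfer at the end:
history = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0, 4900.0]
print(flag_anomalies(history))  # -> [6]
```

In practice, production systems score many signals at once and learn what “expected” looks like per customer, but the principle is the same: quantify deviation from an established baseline and surface the outliers to analysts.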

Consider the processes involved in onboarding new customers, for example. Organisations across the financial sector are competing to make their approach as seamless as possible, but in reality, this has left businesses and their customers exposed to fraudulent behaviour, some of which is unlikely to get spotted for weeks or months after the attack has taken place. In addition, false positives, where a legitimate transaction is flagged as suspicious, can be extremely frustrating for customers and can result in them not returning.

Utilising AI to strengthen the validation, verification and transaction processes ensures security is enhanced – but not at the expense of the customer experience. Not only will this create a safer and more trusted customer journey, but it will also play a key part in improving company reputation and attracting new customers.

Similarly, automation is key to narrowing the focus of investigations. With the priority alerts identified automatically, fraud analysts receive a complete view of the transactions in question, drawing on historical data and real-time analytics when assessing risk.
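The prioritisation step above can be sketched in a few lines. The signals, field names and weights here are purely illustrative assumptions made for this example; a real triage model would be learned from historical outcomes rather than hand-weighted.

```python
def risk_score(alert):
    """Combine a few simple risk signals into a single score."""
    score = 2.0 * alert.get("anomaly_score", 0.0)          # behavioural deviation
    score += 1.5 if alert.get("new_device") else 0.0       # unfamiliar device
    score += 1.0 if alert.get("prior_alerts", 0) > 2 else 0.0  # repeat pattern
    return score

def triage(alerts, top_n=2):
    """Return the top_n alerts, highest risk first, for analyst review."""
    return sorted(alerts, key=risk_score, reverse=True)[:top_n]

alerts = [
    {"id": "A1", "anomaly_score": 0.2, "new_device": False, "prior_alerts": 0},
    {"id": "A2", "anomaly_score": 0.9, "new_device": True,  "prior_alerts": 4},
    {"id": "A3", "anomaly_score": 0.6, "new_device": False, "prior_alerts": 1},
]
print([a["id"] for a in triage(alerts)])  # -> ['A2', 'A3']
```

The point is the workflow, not the weights: automation ranks the queue so that scarce analyst time goes to the alerts most likely to be genuine, rather than being spread evenly across every false positive.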

Financial crime will always be with us, and realism suggests that the industry must work harder to mitigate risks as its perpetrators shift methods and change targets. Traditional, rules-based countermeasures are simply not up to the task and the industry needs the tools to detect both known criminal practices and recognise never-seen-before emerging patterns of financial crime. Blended with human experience and expertise, AI makes it possible to spot different types of threats or attack vectors and respond to them before it’s too late.
