
Combatting the rise of deepfakes

By Sanjay Gupta, Global VP at Mitek

Deepfake videos are proliferating across the internet. These AI-generated videos are increasingly being commodified and used to impersonate politicians, business magnates and A-listers. Just recently at the World Economic Forum in Davos, a deepfake installation was used to demonstrate exactly how advanced this technology is becoming.


As attendees tried out the interactive display at the event, superimposing images of Leonardo DiCaprio onto their own faces in real time, the message was loud and clear: the potential for deepfakes to distort and manipulate reality could be limitless.

This advanced technology started appearing on our screens following the rapid growth of ‘fake news’, blurring the line between fact and fiction. As that line becomes harder to distinguish, we can expect to see more examples of deepfakes facilitating fabricated stories and serving as convincing ‘evidence’ that someone said or did something. What is striking is how little is needed to produce a highly believable video – only a couple of hundred images.

But as deepfake technology matures, a much more concerning use of the phenomenon is emerging. Deepfake technologies are being used to produce nearly flawless falsified digital identities and ID documents – and the range and quality of the technologies available to fraudsters is the underlying driver. Technology once available only to a few industry mavericks – using it for the right reasons – has now gone ‘mainstream’.

Identity fraud is an age-old problem. Yet, tech innovation has taken it to the next level. Banks and financial institutions already employ thousands of people to stamp out identity theft, but to fight the ever-growing threat of these new forms of fraud, businesses must look to technology to spot what the human eye can’t always see.

With the annual cost of punitive ‘Know Your Customer’ (KYC) and Anti-Money Laundering (AML4/5) non-compliance fines for a typical bank rising to an eye-watering €3.5 million, technology can and should be part of the solution. By digitising customer identity verification alone, a typical bank could save €10 million a year, and the best technology now verifies customers’ identity documents digitally via an ID scan and a selfie in near real time. Technology that can verify identities – of customers, partners, employees or suppliers – is not new. But it has become a must-have, especially in regulation-heavy industries such as banking.
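The ID-scan-plus-selfie flow described above boils down to comparing two face embeddings against a match threshold. The sketch below is purely illustrative: `embed` is a toy stand-in for the neural face-embedding model a real deployment would use, and the threshold value is an assumption, not a vendor figure.

```python
import math

def embed(image):
    """Toy stand-in for a face-embedding model. In production this would
    be a trained neural network; here we simply L2-normalise the pixel
    vector so the matching logic below can run end to end."""
    norm = math.sqrt(sum(p * p for p in image))
    return [p / norm for p in image]

def match_score(a, b):
    """Cosine similarity of the two embeddings (1.0 = identical)."""
    return sum(x * y for x, y in zip(embed(a), embed(b)))

MATCH_THRESHOLD = 0.99  # illustrative; tuned per deployment in practice

def verify(id_photo, selfie):
    """Accept the onboarding attempt only if the selfie matches the
    photo extracted from the scanned ID document."""
    return match_score(id_photo, selfie) >= MATCH_THRESHOLD

# Tiny mock "images" (pixel lists) standing in for real captures
id_photo = [10, 40, 80, 120, 200]
same_person = [12, 38, 82, 118, 198]   # small capture differences
someone_else = [200, 120, 80, 40, 10]  # very different face

print(verify(id_photo, same_person))   # True: embeddings nearly identical
print(verify(id_photo, someone_else))  # False: similarity well below threshold
```

The design point is that the decision reduces to a single thresholded similarity score, which is why liveness checks matter: a deepfaked selfie that reproduces the ID photo closely enough would pass this comparison on its own.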

The good news is that the ability to identify deepfakes will only improve with time, as researchers experiment with AI trained to spot even the deepest of fakes using facial recognition and, more recently, behavioural biometrics. By creating a record of how a person types and talks, the websites they visit, and even how they hold their phone, researchers can build a unique digital ‘fingerprint’ to verify a user’s identity and prevent unauthorised access to devices or documents. Using this technique, researchers aim to crack even the most flawless deepfakes by pitting one AI engine against another, in the hope that the cost of ‘superior AI’ will be prohibitive for cybercriminals.
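A behavioural ‘fingerprint’ of the kind described above can be pictured as a feature vector summarising a user’s habits, compared against an enrolment profile. The example below is a minimal sketch under simplifying assumptions – it uses only keystroke timing and hand-picked features, whereas real systems combine many signals and learned models.

```python
import math

def typing_profile(intervals):
    """Summarise keystroke timing (seconds between key presses) as a
    simple feature vector: mean interval, variance, and pause rate.
    Real behavioural-biometric systems use far richer features."""
    n = len(intervals)
    mean = sum(intervals) / n
    var = sum((x - mean) ** 2 for x in intervals) / n
    pause_rate = sum(1 for x in intervals if x > 0.5) / n
    return [mean, var, pause_rate]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Enrolment sample vs a later session from the same user, plus an
# impostor with a very different typing rhythm (all timings invented)
enrolled = typing_profile([0.21, 0.18, 0.25, 0.9, 0.2, 0.22])
session  = typing_profile([0.2, 0.19, 0.27, 0.85, 0.21, 0.24])
impostor = typing_profile([0.05, 0.04, 0.06, 0.05, 0.04, 0.05])

print(cosine_similarity(enrolled, session) > 0.95)   # True: same rhythm
print(cosine_similarity(enrolled, impostor) > 0.95)  # False: rhythm differs
```

The appeal of such signals for deepfake defence is that they are continuous and passive: a fraudster who forges a face still has to reproduce the victim’s habits over an entire session.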

Meanwhile, the tech industry is testing other safeguards. Apple has expanded Near Field Communication (NFC) support on its devices so they can unlock and read data stored in the security chips of passports. Once unlocked, the chip gives the device access to biometric data and a high-resolution photo of the document owner. The capability is now being used by the UK Home Office to let European citizens apply for Settled Status through both iPhone and Android apps.

For the deepfakes antidote to work as intended, biometrics will need to keep pace with cutting-edge innovation. Biometric technologies will always be a preventative measure, because they are the only real indicator of who you really are. Overall, the adoption of biometrics as an option for digital identity verification, or for account and device sign-in, is a reliable measure of security.

New websites like thispersondoesnotexist.com, however, can generate incredibly lifelike images, showing how easily fraudsters can pose as realistic actors within the digital ecosystem. This means that fake digital identities could affect nearly every organisation that operates a digital onboarding solution – which, in the age of the instant, real-time experience, is any company that wants to win digitally savvy new customers.

As social media platforms and governments step up efforts to curb the use of deepfakes, many businesses are turning to AI and machine learning to help verify customer identities during onboarding. This minimises potential threats even before new regulations are introduced. These technology providers should also stay up to date on the most pressing risks and on the techniques circulating among malicious threat actors.

While deepfakes continue to proliferate, regulations surrounding the technology remain largely open to interpretation. It is now a critical time for businesses to ensure that the biometric information they collect is protected at all costs and never compromised.

The true consequences of deepfakes are yet to unfold, but the potential for this technology to erode trust in today’s society is clear. Technology has enabled new opportunities for fraud to emerge, but technology can also be the cure. Ensuring customer identities are verified every step of the way will certainly be the best first line of defence – and might be the only way to stop this new breed of cybercriminals in their tracks.