Combatting the rise of deepfakes

By Sanjay Gupta, Global VP at Mitek

Deepfake videos are now populating the internet. These AI-driven videos are increasingly being commodified and used to impersonate politicians, business magnates and A-listers. Just recently, at the World Economic Forum in Davos, a deepfake installation demonstrated exactly how advanced the technology is becoming.

As attendees tried out the interactive display at the event, superimposing images of Leonardo DiCaprio onto their own faces in real time, the message was loud and clear: the potential for deepfakes to distort and manipulate reality could be limitless.

This advanced technology started appearing on our screens following the rapid growth of ‘fake news’, blurring the line between fact and fiction. As that line becomes harder to draw, we can expect to see more examples of deepfakes being used to fabricate false stories and to serve as convincing ‘evidence’ that someone said or did something. What is striking is how little is needed to produce such a believable video – only a couple of hundred images.

But as deepfake technology matures, a much more concerning use of the phenomenon is emerging. Deepfake technologies are being used to produce nearly flawless falsified digital identities and ID documents – and the range and quality of the tools available to fraudsters are the underlying drivers. Technology that was once available only to a few industry mavericks – using it for the right reasons – has now gone ‘mainstream’.

Identity fraud is an age-old problem. Yet, tech innovation has taken it to the next level. Banks and financial institutions already employ thousands of people to stamp out identity theft, but to fight the ever-growing threat of these new forms of fraud, businesses must look to technology to spot what the human eye can’t always see.

As the annual cost of punitive ‘Know Your Customer’ (KYC) and Anti-Money Laundering (AML4/5) non-compliance fines rises to an eye-watering €3.5 million for a typical bank, technology can and should be part of the solution. By digitising customer identity verification alone, a typical bank could save €10m a year, and the best of this technology digitally verifies a customer’s identity document via an ID scan and a selfie in near-real time. Technology that can verify identities – of customers, partners, employees or suppliers – is not new, but it has become a must-have, especially in regulation-heavy industries like banking.

The good news is that the ability to identify deepfakes will only improve with time, as researchers experiment with AI trained to spot even the deepest of fakes, using facial recognition and, more recently, behavioural biometrics. By building a record of how a person types and talks, the websites they visit, and even how they hold their phone, researchers can create a unique digital ‘fingerprint’ to verify a user’s identity and prevent unauthorised access to devices or documents. Using this technique, and by pitting one AI engine against another, they aim to crack even the most flawless deepfakes – and to make the cost of ‘superior AI’ prohibitive for cybercriminals.
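To make the idea of a behavioural ‘fingerprint’ concrete, the short Python sketch below scores how far a new session’s behaviour drifts from a user’s enrolled profile and flags sessions that drift too far. The feature names, tolerances and threshold are invented for illustration; they are not a description of any real product’s model.

import math

# Illustrative behavioural features; real systems use far richer signals
# (keystroke dynamics, swipe curves, gyroscope traces, navigation habits).
ENROLLED_PROFILE = {          # hypothetical averages learned at enrolment
    "keystroke_interval_ms": 182.0,
    "touch_pressure": 0.47,
    "device_tilt_deg": 12.5,
}
FEATURE_TOLERANCE = {         # hypothetical per-feature spread (std. deviation)
    "keystroke_interval_ms": 25.0,
    "touch_pressure": 0.08,
    "device_tilt_deg": 4.0,
}

def session_risk_score(session: dict) -> float:
    """Root-mean-square of per-feature deviations from the enrolled profile."""
    squared = 0.0
    for feature, baseline in ENROLLED_PROFILE.items():
        deviation = (session[feature] - baseline) / FEATURE_TOLERANCE[feature]
        squared += deviation ** 2
    return math.sqrt(squared / len(ENROLLED_PROFILE))

def is_suspicious(session: dict, threshold: float = 2.0) -> bool:
    """Flag the session for step-up authentication if behaviour drifts too far."""
    return session_risk_score(session) > threshold

# Example: a session whose behaviour differs sharply from the enrolled user
incoming = {"keystroke_interval_ms": 95.0, "touch_pressure": 0.82, "device_tilt_deg": 3.0}
print(session_risk_score(incoming), is_suspicious(incoming))   # high score, flagged

In practice the profile would be learned continuously and combined with other signals, but the principle is the same: the imposter has to mimic not just a face, but a lifetime of habits.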

Meanwhile, the tech industry is testing other safeguards. Apple has expanded Near Field Communication (NFC) support on its devices so they can unlock and read the data stored in the security chips of passports. Once unlocked, the device can read the biometric data and the high-resolution photo of the document holder. This capability is now being used by the UK Home Office to let European citizens apply for Settled Status through both iPhone and Android apps.

For the deepfake antidote to work as intended, biometrics will need to keep pace with cutting-edge innovation. Biometric technologies will always be a preventative measure – the only real indicator of who you really are. Overall, adopting biometrics for digital identity verification, or for account and device sign-in, is a reliable security measure.

New websites like thispersondoesnotexist.com, however, can generate incredibly lifelike images of people who have never existed, showing how easily fraudsters can assemble convincing synthetic identities within the digital ecosystem. Fake digital identities could therefore affect nearly every organisation that operates a digital onboarding solution – which, in the age of the instant, real-time experience, is any company that wants to win new digitally savvy customers.

As social media platforms and governments step up efforts to curb the use of deepfakes, many businesses are turning to AI and machine learning to verify customer identities during onboarding, minimising the threat even before new regulations are introduced. These technology providers should also stay up to date on the most pressing risks and on the techniques circulating among malicious threat actors.
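As a rough illustration of what such an automated onboarding check involves, the Python sketch below compares a face embedding taken from the scanned ID document photo against one taken from the live selfie, and accepts the match only above a similarity threshold. The toy embeddings, the cosine-similarity measure and the 0.6 threshold are illustrative assumptions rather than any particular vendor’s method; a production system would also add liveness detection and document-authenticity checks.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def faces_match(id_embedding: np.ndarray, selfie_embedding: np.ndarray,
                threshold: float = 0.6) -> bool:
    """Accept the applicant only if the selfie embedding is close enough to the
    embedding extracted from the ID document photo. The 0.6 threshold is an
    illustrative assumption, not a recommended production setting."""
    return cosine_similarity(id_embedding, selfie_embedding) >= threshold

# Toy embeddings standing in for the output of a face-recognition model run on
# the scanned ID photo and the live selfie respectively.
id_vec = np.array([0.12, 0.80, 0.31, 0.50])
selfie_vec = np.array([0.10, 0.78, 0.35, 0.48])
print(faces_match(id_vec, selfie_vec))   # True: the two faces are similar enough

The design choice that matters here is the threshold: set it too low and fraudsters slip through, set it too high and genuine customers are rejected, which is why providers tune it against measured false-accept and false-reject rates rather than picking a fixed number.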

While deepfakes continue to proliferate, regulation of the technology remains largely open to interpretation. It is now critical for businesses to ensure that the biometric information they collect is protected at all costs and never compromised.

The true consequences of deepfakes are yet to unfold, but the potential for this technology to erode trust in today’s society is clear. Technology has enabled new opportunities for fraud to emerge, but technology can also be the cure. Ensuring customer identities are verified every step of the way will certainly be the best first line of defence – and might be the only way to stop this new breed of cybercriminals in their tracks.
