By Yaroslav Rosomakho, Global Solutions Architect for Netskope.
When the Coronavirus pandemic hit our shores last year, businesses in all industries scrambled to keep the show on the road. Remote working was the order of the day, as hundreds of thousands of UK employees swapped the office for the kitchen table, and set up makeshift workspaces at home.
But while a speedy shift to remote working allowed companies to keep doing business, the change was a major headache for IT teams. For some, the rush to set up home working practices meant that cyber security was an afterthought. Even in the security-aware and highly regulated banking and finance markets, many companies relied on short-term fixes that left users with painful experiences, bound up in ill-suited security architectures.
As offices slowly open up, it’s likely that many companies will maintain some form of remote working. HSBC and JP Morgan have both announced permanent work-from-home policies for thousands of employees, and Deutsche Bank is planning to allow staff up to three days a week at home post-pandemic. This all means that now is a crucial opportunity for financial institutions to assess how successful their remote security strategies have been over the last year, and whether they’re fit for purpose for the long term.
What risks do businesses now face?
It’s an unfortunate fact that fraudsters tend to prey on unexpected events or challenges. When normality becomes disrupted, they see an opportunity to exploit. It shouldn’t be a surprise, then, that the COVID-19 pandemic brought with it an increase in fraudulent activity.
In the rush to facilitate home working 12 months ago, many businesses speedily rolled out or expanded their use of cloud applications. Netskope’s Cloud and Threat Report 2021 tracks a constant increase in the use of cloud apps in the enterprise, with the average number in use increasing by 20% in 2020. But efforts to make them secure often involved shortcuts and ‘make do’ solutions. Routing cloud services through VPNs caused unworkable network delays and removed many of the intrinsic benefits of cloud. In addition, with global supply chains frozen, organisations looking to acquire more VPN appliances to expand their remote access infrastructure simply couldn’t source them. For both these reasons, the decision was often made to let traffic to certain cloud services bypass key security infrastructure – relying on insufficient native security functionality within the services themselves.
Even when appliances were available and investment was made to increase bandwidth, many organisations made the unnerving discovery that traditional security solutions are simply not sophisticated enough to police the cloud. The growth in cloud app usage in 2020 comes mainly from services such as Microsoft OneDrive, Box and Gmail – services for which employees often have both corporate and personal instances.
Legacy security appliances cannot cope with the necessary nuances of corporate and personal instances and accounts, different access allowances or different data types. They struggle to see and manage cloud traffic because cloud apps literally speak a different language: they communicate through APIs, exchanging JSON, rather than the HTML of the traditional web. To be effective at securing data within the cloud, security tools need to be able to interrogate API calls and JSON payloads, and to make sense of both content and context. This is particularly important for data protection policies covering PII.
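To illustrate the point, here is a minimal, hypothetical sketch (not any vendor’s actual implementation) of why inspecting the JSON payload matters: two uploads to the same cloud storage service look identical at the URL level, and only the API body reveals whether the corporate or the personal instance is in use. All field names and the `acme.com` domain are illustrative assumptions.

```python
import json

def classify_instance(api_event: str, corporate_domain: str = "acme.com") -> str:
    """Inspect the JSON body of a cloud API call to tell corporate and
    personal instances of the same app apart. A URL-based legacy filter
    cannot make this distinction, because both events target the same domain."""
    event = json.loads(api_event)
    user_email = event.get("user", {}).get("email", "")
    return "corporate" if user_email.endswith("@" + corporate_domain) else "personal"

# Two hypothetical upload events for the same app and the same file name.
corporate_upload = json.dumps({
    "app": "OneDrive",
    "activity": "upload",
    "user": {"email": "jane@acme.com"},
    "file": {"name": "q3-results.xlsx", "labels": ["PII"]},
})
personal_upload = json.dumps({
    "app": "OneDrive",
    "activity": "upload",
    "user": {"email": "jane@gmail.com"},
    "file": {"name": "q3-results.xlsx", "labels": ["PII"]},
})

print(classify_instance(corporate_upload))  # corporate
print(classify_instance(personal_upload))   # personal
```

The same PII-labelled file is allowed into one instance and could be blocked from the other – a decision that is only possible once the tool can read the API conversation itself.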
So just as malicious actors were focusing on cloud services as an appealing target, enterprises were whitelisting them, and exempting them from the usual level of forensic data protection efforts they would normally enforce. And a lack of nuance in the ability to manage policies between different instances of the same service has meant that the risk of data policy contravention and accidental data leakage has increased.
So, what are the steps businesses can take to get their remote working architectures properly secure, and ready for a more regular hybrid and remote working future?
The answer – Secure Access Service Edge (SASE)
Moving from an on-premises data centre model to a cloud approach is a big architectural shift, and when done in normal times (without the impossible time pressure of a pandemic), it is easy to recognise that security infrastructure needs to change in similar ways. Ultimately, the corporate data centre is no longer sitting at the hub of a network around which a secure perimeter can be erected. Users, devices, applications and data all flow in and out of corporate-owned territories, and security needs to be able to follow the data – applying nuanced data protection and threat protection that keeps an organisation safe and compliant without impacting productivity.
This approach is called Secure Access Service Edge, or SASE. It’s a way of delivering security in-line, embracing the logical data flows that optimise user experience and network efficiencies. It removes the need to hairpin traffic from its logical path and allows for direct connectivity to cloud services. Essentially it places security into the cloud, at the heart of where the action is. SASE speaks the cloud’s native language, which gives these architectures deep visibility into what is happening and allows granular controls to be enforced, based on user, device, location, data type and activity.
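The granular, context-based controls described above can be sketched in a few lines. This is a hypothetical illustration of the idea, assuming a simplified request object and made-up rules, not a description of how any particular SASE product evaluates policy.

```python
from dataclasses import dataclass

@dataclass
class Request:
    """Context signals a SASE policy engine might combine (illustrative fields)."""
    user_group: str       # e.g. "finance"
    device_managed: bool  # is this a corporate-managed device?
    instance: str         # "corporate" or "personal" instance of the app
    activity: str         # "view", "download" or "upload"
    data_label: str       # e.g. "PII" or "public"

def decide(req: Request) -> str:
    """Return "allow" or "block" based on the combination of signals,
    rather than a blanket verdict on the app as a whole."""
    # PII must never move into a personal instance, whoever the user is.
    if req.data_label == "PII" and req.instance == "personal":
        return "block"
    # Unmanaged devices may view PII but not download it.
    if not req.device_managed and req.activity == "download" and req.data_label == "PII":
        return "block"
    return "allow"

print(decide(Request("finance", True, "personal", "upload", "PII")))   # block
print(decide(Request("finance", True, "corporate", "upload", "PII")))  # allow
```

The point is that the same app, and even the same user, can get different verdicts depending on instance, device and data type – exactly the nuance that a perimeter appliance making a single allow/deny decision per domain cannot express.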
SASE enhances both security protection and user experience, but early adopters say that the benefits don’t stop there. The model reduces costs through appliance consolidation, which cuts the management, software updates and patching required across the infrastructure and in turn increases efficiency. It also avoids the need for expensive private bandwidth, and helps financial institutions navigate the detailed data protection regulations within the industry.