By Barry Shteiman, Director of Threat Research at Exabeam
Security has always been paramount in the banking and finance industry, but these days the most potent threats aren't the kind that come through the bank's front door wearing a mask; they are the virtual ones hiding on the IT network or buried inside fraudulent emails. For the security analysts tasked with protecting their institutions against these threats, it's a daunting prospect.
It seems like only yesterday that the biggest cyber threats facing many financial institutions were viruses and SQL injections. However, the last decade has seen the cyber landscape change dramatically for the worse when it comes to the volume and variety of threats faced. While the attacks mentioned above had the power to cause significant damage to any organisation, they were also quite easy for any seasoned security analyst to spot. By contrast, many of the most dangerous threats today are specifically designed to fly under the radar, remaining undetected for months or even years, while infecting numerous machines and accounts in the victim’s network.
Understanding ‘normal’ user behaviour holds the key to modern threat detection
Many organisations build profiles of normal user behaviour to help them identify potential cyber threats. They do this through the creation of Incident Response (IR) units, where security analysts trawl through large amounts of data in order to understand events that have taken place and make judgements based on the behaviour of those involved. This process typically encompasses detailed analysis of the IP addresses used before, during and after an incident, the account details and workstations involved, and so on. As a result, it can take days or weeks to manually analyse each incident and reach a final decision.
To alleviate the manual workload, certain processes are automated. Automation can take various forms, but typical examples are scripts that automate data collection and signatures that detect known types of attack. In more recent years, there has also been a rise in the use of event correlation to help uncover well-defined, network-based attacks. An example could be an employee logging on from home over the organisation's VPN while their security badge is used to enter company property around the same time. Event correlation technology can flag that the same person appears to be in two places at once, a strong sign that a security incident is taking place.
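The VPN-versus-badge scenario above can be sketched as a simple correlation rule. The event records, field names and 30-minute window below are illustrative assumptions, not a real product's schema:

```python
from datetime import datetime, timedelta

# Hypothetical event log: (timestamp, user, event_type), where event_type is
# "vpn_login" (user is remote) or "badge_in" (user is physically on site).
events = [
    (datetime(2024, 5, 1, 9, 2), "alice", "vpn_login"),
    (datetime(2024, 5, 1, 9, 10), "alice", "badge_in"),
    (datetime(2024, 5, 1, 9, 15), "bob", "badge_in"),
]

WINDOW = timedelta(minutes=30)  # assumed: how close in time two events must be to conflict

def impossible_presence(events):
    """Flag users who log in remotely and badge into the building within
    the same short window -- one of the two activities can't be genuine."""
    alerts = []
    for i, (t1, user1, kind1) in enumerate(events):
        for t2, user2, kind2 in events[i + 1:]:
            if (user1 == user2
                    and {kind1, kind2} == {"vpn_login", "badge_in"}
                    and abs(t2 - t1) <= WINDOW):
                alerts.append((user1, min(t1, t2)))
    return alerts

print(impossible_presence(events))
# "alice" is flagged (remote login and badge entry 8 minutes apart); "bob" is not.
```

Real correlation engines express rules like this declaratively and evaluate them over streaming data, but the underlying logic is the same pairwise match within a time window.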
Big data presents big challenges
Unfortunately, it has been apparent for some time that existing security and intelligence practices are struggling to keep up with the fast-changing cyber security landscape. Without a doubt, data volume is the main driver behind this negative trend. In the modern banking environment, it's not uncommon for a large financial institution to collect more than 300 terabytes of data per day as a result of larger, more sophisticated data collection activities. To cope with such high volumes, often only 30 days' worth of data is kept at any time, on the basis that retaining any more would overwhelm reporting systems. As a result, however, effective security investigations become very difficult to conduct over any longer period.
At the same time, the volume of data coming in makes it much harder for IR analysts to quickly identify important trends and correlate them against normal behaviour baselines. The only real way to combat this is to hire more personnel, but even if there were a surplus of security experts out there (which there isn’t), the reality is few institutions have the finances to keep hiring indefinitely. In short, IR teams are simply too overwhelmed to understand where the next threat might be coming from.
Machines can play a major role in threat detection (but they aren’t a silver bullet)
While the threat landscape has become more challenging, machines have also become a lot smarter. Recent developments in AI and machine learning have been met with significant hype within the security industry. Unfortunately, many technology vendors haven’t helped themselves with the way they’ve positioned new products and services, resulting in confusion in the market. When customers hear a vendor urging them to “pour data” into their machine learning based analytics engine, they expect magical results. In reality, it simply doesn’t work like that.
However, that’s not to say these new technologies don’t have a significant role to play. Understanding normal behaviour is one area where artificial intelligence and machine learning can be extremely effective. For example, there are now algorithms that can create context by connecting events into coherent user sessions. Combining these algorithms with statistical analysis can answer a huge range of questions incredibly quickly, such as: ‘is this person an admin?’, ‘is this a real user or a service account?’, or ‘does this activity deviate from this user’s peer group’s activity?’.
Finding a happy medium
When faced with the double threat of growing data volumes and more complex online threats, the best solution is to use machine intelligence to augment human intelligence, not replace it. An effective machine-based analytics system can ingest new data, identify irregularities in user activity and stitch together timelines in minutes, saving security analysts weeks at a time. Analysts can then use the machine-generated data to quickly spot any deviations from a user's normal behaviour. Machines can also automatically assign risk points to anomalous user behaviour based on each user's baseline of 'normal' activity, helping to greatly reduce false positives and alert fatigue.
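The risk-point idea can be sketched as follows. The rule names, point values and alert threshold are invented for illustration; a real system would derive weights from each user's learned baseline rather than a static table:

```python
# Hypothetical point values for anomalies observed in a user session.
RULES = {
    "first_login_from_country": 20,
    "unusual_logon_hour": 10,
    "new_workstation": 15,
    "privilege_escalation": 40,
}

ALERT_THRESHOLD = 60  # assumed: only sessions scoring above this reach an analyst

def score_session(anomalies):
    """Sum the points for every anomaly seen in one user session."""
    return sum(RULES.get(a, 0) for a in anomalies)

def needs_review(anomalies):
    """A session is escalated only when its cumulative score crosses
    the threshold -- a single quirk stays quiet, a cluster does not."""
    return score_session(anomalies) >= ALERT_THRESHOLD

print(needs_review(["unusual_logon_hour"]))                    # one quirk: no alert
print(needs_review(["first_login_from_country",
                    "new_workstation",
                    "privilege_escalation"]))                  # cluster: alert
```

This is how scoring reduces alert fatigue: instead of firing on every individual anomaly, the system escalates only sessions whose accumulated deviation from the baseline is significant.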
Advances in machine learning don’t spell the end for the traditional security team. Far from it. Rather, they exist to make the job of threat detection and data security easier, but only if a happy medium can be found between man and machine.