Technology
New threats require a new breed of solutions to keep organisations secure
By Brooks Wallace, VP EMEA at Deep Instinct
Threat actors are constantly developing new threats and more creative ways to breach an organisation’s network. In the past two years we have seen an onslaught of ransomware attacks and cyber threats targeting organisations and industries around the world, with no sign of a slowdown. As attackers continue to advance their skillsets, 2022 will undoubtedly bring more attacks and, ultimately, more sophisticated threats. This includes the use of Adversarial AI.
AI is one of today’s most advanced and fastest-moving technologies. Most of the use cases we hear about involve organisations using machine learning, a subset of AI, to defend themselves against threat actors. However, AI is now being used by those same threat actors to target organisations, launch cyberattacks and spread malware, a practice referred to as Adversarial AI.
Machine learning solutions are trained to identify patterns and links. This is achieved by manually feeding labelled datasets to the machine learning algorithm so that it can learn which attributes are considered ‘good’ and which ‘bad’, eventually allowing it to distinguish a genuine threat from a benign file. However, this model is based on known threats; when faced with an unknown threat, the algorithm is unable to detect it, which not only reduces accuracy but puts the business at risk.
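The limitation described above can be sketched in a few lines. This is a toy illustration only, with invented feature values, not any vendor’s detector: a model trained on labelled examples recognises samples that resemble its training data, but a genuinely novel threat falls outside everything it has seen.

```python
# Toy sketch (hypothetical features, not a production detector): a supervised
# model only recognises patterns resembling its labelled training data.

# Labelled training data: feature vectors -> 'benign' / 'malicious'.
# The features are invented for illustration (e.g. entropy, imports, packing).
TRAINING = {
    (0.2, 0.1, 0.0): "benign",
    (0.3, 0.2, 0.0): "benign",
    (0.9, 0.8, 1.0): "malicious",
    (0.8, 0.9, 1.0): "malicious",
}

def classify(sample, threshold=0.35):
    """Nearest-neighbour over the training set; 'unknown' if nothing is close."""
    best_label, best_dist = None, float("inf")
    for features, label in TRAINING.items():
        dist = sum((a - b) ** 2 for a, b in zip(features, sample)) ** 0.5
        if dist < best_dist:
            best_label, best_dist = label, dist
    # A novel threat far from every training point falls outside the model.
    return best_label if best_dist <= threshold else "unknown"

print(classify((0.85, 0.85, 1.0)))  # resembles known malware -> 'malicious'
print(classify((0.1, 0.9, 0.0)))    # novel pattern -> 'unknown', i.e. missed
```

The second call is the failure mode the article describes: the sample is malicious, but because nothing like it appears in the labelled training set, the model cannot flag it.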
Adversarial AI manipulates the analytic and decision-making powers of AI and machine learning to build cyberattacks in ways that were previously impossible, using machine learning tools to attack other machine learning tools. Threat actors can trick a machine learning model into classifying malicious data as harmless, granting cyber threats free access and movement that goes virtually undetected.
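A minimal evasion sketch makes the idea concrete. Everything here is hypothetical, the detector is a stand-in weighted sum, not a real product: an attacker who can observe a model’s verdicts gradually perturbs a malicious sample’s observable features until the model scores it as benign.

```python
# Toy evasion sketch (illustrative only): an attacker who can query a model's
# score nudges a malicious sample's features until it is classed as benign.

def score(features):
    """Stand-in detector: a simple weighted sum; >= 0.5 means 'malicious'."""
    weights = (0.5, 0.3, 0.2)  # hypothetical importance of each feature
    return sum(w * f for w, f in zip(weights, features))

def evade(features, step=0.05, limit=100):
    """Greedily lower the highest-weighted feature until the score drops."""
    feats = list(features)
    for _ in range(limit):
        if score(feats) < 0.5:
            return tuple(feats)  # now misclassified as benign
        feats[0] = max(0.0, feats[0] - step)  # perturb one observable feature
    return tuple(feats)

malicious = (0.9, 0.8, 1.0)
adversarial = evade(malicious)
print(score(malicious) >= 0.5)    # True: the original sample is flagged
print(score(adversarial) < 0.5)   # True: the perturbed copy slips through
```

The perturbed sample is still malicious; only its appearance to the model has changed, which is why such attacks can move through a network undetected.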
It is a highly sophisticated attack method, and one cyber criminals are undoubtedly already using stealthily to target organisations. Because of the attack’s complexity, by the time the SOC team has identified a potential issue, it is often already too late. The more dwell time this attack gives threat actors, the more opportunity they have to move laterally through the network, gaining persistence and inflicting more damage as they go.
Adversarial AI will only increase in the years to come, and organisations must not be naïve about the genuine threat these attacks pose to their business. Remember, threat actors are constantly upping their game with new threats and techniques, so organisations must do the same with their security.
For too long there has been a focus on what to do once your business has been hit. Wouldn’t it be better to be able to predict and prevent attacks before they enter and inflict damage on the network?
The ability to stop a hacker before they have had a chance to wreak havoc is no longer a dream and can be achieved today using deep learning techniques. Deep learning, the most advanced subset of AI, involves creating neural networks that mimic the human brain and are trained on raw data samples from millions of files.
Because deep learning trains itself on this raw data, it can process a vast number of characteristics and autonomously determine whether a file is malicious or benign. This greater accuracy dramatically reduces the number of false positives SOC teams must deal with, giving them the ability to predict and prevent cyberattacks.
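The raw-data idea can be sketched as a pipeline: a file’s bytes go straight into a small network with no hand-engineered features in between. The weights below are fixed, invented values so the sketch is deterministic; a real deep learning model learns them from millions of samples.

```python
# Minimal sketch of the raw-data pipeline (weights invented for illustration,
# not a trained model): raw bytes in, a maliciousness score out, with no
# hand-engineered feature extraction in between.
import math

def forward(raw_bytes, hidden=4):
    """One hidden layer over normalised byte values -> score in (0, 1)."""
    x = [b / 255.0 for b in raw_bytes]  # raw bytes, scaled to [0, 1]
    # Fixed pseudo-weights keep the sketch deterministic; a real model learns these.
    h = [math.tanh(sum(x[j] * math.sin(i + j) for j in range(len(x))))
         for i in range(hidden)]
    out = sum(h) / hidden               # crude single output neuron
    return 1 / (1 + math.exp(-out))     # sigmoid -> probability-like score

sample = b"MZ\x90\x00\x03\x00\x00\x00"  # first bytes of a Windows PE header
print(0.0 < forward(sample) < 1.0)      # True: a probability-like score
```

The point of the sketch is the interface, not the arithmetic: there is no labelled feature table for an attacker to study or game, only the file itself.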
Because deep learning operates fully autonomously, cyberattacks can be dealt with far faster than with machine learning. Deep learning delivers a sub-20-millisecond response time, stopping a cyberattack pre-execution, before it can take hold, and giving SOC teams the power to prevent cyberattacks.
Because deep learning is trained only on raw data, it is all but impossible for datasets to be tampered with before they are fed into the system. This makes Adversarial AI attacks much harder to pull off, as malicious actors cannot manipulate deep learning in the way they manipulate machine learning.
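The tampering risk being avoided here is easy to demonstrate. In this hypothetical sketch (the counting ‘model’ is a deliberate oversimplification), a pipeline that depends on human-assigned labels can be poisoned by flipping just a few labels before training, whereas a raw-sample pipeline offers no label field to tamper with.

```python
# Hypothetical poisoning sketch: flipping a few labels in a labelled training
# set skews what the model learns about a feature.

def train_counts(labelled):
    """Trivial 'model': count how often each feature value is labelled benign."""
    benign = {}
    for feature, label in labelled:
        if label == "benign":
            benign[feature] = benign.get(feature, 0) + 1
    return benign

clean = [("packed", "malicious")] * 5 + [("plain", "benign")] * 5
poisoned = [("packed", "benign")] * 2 + clean  # attacker flips two labels

print(train_counts(clean).get("packed", 0))     # 0: 'packed' never seen benign
print(train_counts(poisoned).get("packed", 0))  # 2: 'packed' now looks benign
```

Two tampered records are enough to teach this toy model that a malicious trait is benign; a pipeline that ingests only raw samples removes that avenue of manipulation.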
Organisations remain exposed to cyberattacks that could cause long-term damage to their customers and employees. By integrating deep learning and taking a proactive approach to cyberattacks, SOC teams will see immediate results rather than feeling like sitting ducks, waiting for an attack to happen. Cyber criminals are always innovating to bypass systems, and organisations must do the same to protect themselves.
With threats as sophisticated as Adversarial AI, we need to make 2022 a year of cyber change. The only way organisations can do this is by looking toward genuinely innovative solutions that don’t simply focus on mitigation, detection, and response. We all need to level up and not only meet but surpass the techniques being used by our cyber adversaries.