Daryl Cornelius, Director at Spirent Communications, suggests that recent advances in fuzz testing and Big Data analytics could help restore public confidence in financial systems
It wasn’t so long ago that the PIN and personal password were your guarantee of secure Internet banking. Then along came digital signatures and personalized images or phrases to ensure that the website is genuine; the addition of single-use Transaction Authentication Numbers (TANs); and two-factor authentication, where the TAN is generated by an individual security token or transmitted independently by e-mail or SMS. Chip TAN generators added transaction data to outwit man-in-the-middle attacks; and now there are calls for a further layer of biometric identification for added security.
Does all this mean that, year on year, the public is growing ever more confident of the safety and security of Internet banking? Probably not – any more than a house surrounded by a high wall with razor wire, electric fencing, motion detectors, security cameras and armed response warnings makes you feel confident that this must be a safe neighborhood to live in.
Adding many layers of security is the obvious bit – the criminal may have discovered my PIN code and got a bank statement from the refuse bin, but still not be sure about my birth date and mother’s maiden name.
When there is a certain amount of human interaction, as in telephone banking, you can even allow a bit of leeway on getting these answers exactly right. Sometimes the call center asks for more details than I can provide: I have remembered to take my debit card and PIN, reminded myself of all my security answers – and then they ask for the amount of a monthly standing order and I simply cannot remember. But does that mean they will slam the phone down on me? No, they go on asking other questions and see how I manage. Even though I failed one security test, I get another chance because a human operator has time and social skills to judge how I react to being told I have failed a test, how I explain or justify my failure, and how I respond to further questioning. A human operator has a human brain that can make very many more subtle decisions based on further layers of information. It can also be wrong.
If, however, the whole transaction takes place via a keypad, there is vastly less corroborating data and greater reliance on mechanical answers. If the PIN or keyword is wrong, it is wrong, and it would be unwise to allow too many further attempts – because we might be under attack from a system using an algorithm to generate a series of likely PINs.
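The attempt-limiting logic just described can be sketched in a few lines. This is purely illustrative – the function name `check_pin` and the three-attempt limit are assumptions for the sketch, not any real bank’s policy:

```python
MAX_ATTEMPTS = 3  # assumed policy: lock the account after three wrong PINs


def check_pin(entered: str, correct: str, failures_so_far: int):
    """Evaluate one PIN attempt.

    Returns (accepted, failure_count, locked).
    """
    if entered == correct:
        return True, 0, False  # success resets the failure count
    failures = failures_so_far + 1
    locked = failures >= MAX_ATTEMPTS  # refuse further guesses at the limit
    return False, failures, locked
```

Locking early frustrates a PIN-generating attacker, but at the cost of occasionally locking out a forgetful genuine customer – exactly the trade-off the mechanical system cannot judge its way around.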
But what if the keypad entry system was so sophisticated that it could, like the call center staff, make judgments about such mistakes – whether, for example, the entry process was a mechanized attack, or behaving like an absent-minded but genuine customer, or like a hacker trying out a series of likely guesses? Google searches, for example, are pretty good at guessing what was really meant when terms are misspelled – they don’t just shut down on you. Similar intelligence might help decide whether a mistaken password was a slip or fraud and, like a human operator, it might actually identify, raise an alarm and help nail the attacker instead of simply blocking them to try again later.
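One simple way to approximate that slip-or-fraud judgment is string similarity: a mistyped password usually differs from the real one by a character or two, while a guess is usually unrelated. A minimal sketch, in which the 0.8 threshold and the function name are illustrative assumptions:

```python
from difflib import SequenceMatcher


def classify_mismatch(entered: str, expected: str) -> str:
    """Crude heuristic: a near miss looks like a typo,
    a distant string looks like a guess."""
    similarity = SequenceMatcher(None, entered, expected).ratio()
    if similarity >= 0.8:
        return "possible slip"    # e.g. one adjacent-key or O-for-zero error
    return "possible attack"      # unrelated to the real credential
```

A real system stores only password hashes, not plaintext, so a comparison like this could not be bolted on as-is – it is a sketch of the judgment, not of a deployable check.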
We’re talking futures here – artificial intelligence may be sufficiently advanced to provide some interesting screening attempts, but not yet enough to be trusted with anything as sensitive and precious as real-world customers who are paying for the bank’s services.
There are, however, recent developments that could bring that future closer.
A fuzzy approach
So what can be done right now to increase trust in banking systems?
Today’s most advanced automated security tests throw every known attack at the system under every likely operating condition and – being cloud-based – the tests are kept up to date with new attacks as soon as they are recognized. This is a powerful solution for reassuring the bank’s management that their systems are indeed secure and trustworthy, but it is hard to explain this to the customer in a way that builds their trust. They might even wonder why – if the system was properly designed in the first place – it now needs so much additional testing?
The human factor in telephone banking raises the question of whether better trust might be built around a more organic test approach – one that builds up layers of testing that are not so rigidly defined. You could describe these test criteria as being “fuzzy”, meaning that the correct responses are not so sharply delineated around the edges. The point is that today’s sophisticated test procedures do include a form of “fuzz testing” as a way of addressing unknown security threats.
Fuzz testing bombards the system – anywhere applications and devices receive input – with semi-random data instead of known attack profiles. This is one way to find out whether any irregular input can crash or hang an application, bring down a website or put a device in a compromised state – the sort of thing that might happen when someone enters the letter ‘O’ where a zero was expected, or accidentally hits an adjacent key.
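As a rough illustration of the idea, a mutation-based fuzzer starts from a valid input and makes small semi-random corruptions – a replaced character, inserted junk, truncation – then records every variant that crashes the target. All names here are assumptions for the sketch:

```python
import random
import string


def mutate(seed: str, rng: random.Random) -> str:
    """Make one semi-random corruption of a valid input:
    replace a character, insert junk, or truncate."""
    chars = list(seed)
    op = rng.choice(["replace", "insert", "truncate"])
    if op == "replace":
        chars[rng.randrange(len(chars))] = rng.choice(string.printable)
    elif op == "insert":
        chars.insert(rng.randrange(len(chars) + 1), rng.choice(string.printable))
    else:
        chars = chars[: rng.randrange(len(chars) + 1)]
    return "".join(chars)


def fuzz(target, seed: str, iterations: int = 1000) -> list:
    """Throw mutated inputs at `target`; collect every input that raises."""
    rng = random.Random(0)  # fixed seed so any failure is reproducible
    crashing = []
    for _ in range(iterations):
        case = mutate(seed, rng)
        try:
            target(case)
        except Exception:
            crashing.append(case)
    return crashing


# A deliberately naive target: int() chokes on 'O' for zero, stray
# punctuation, empty strings - exactly the slips fuzzing is meant to find.
crashes = fuzz(int, "1000")
```

Even this toy run shows how many near-miss strings a parser must tolerate; production fuzzers apply the same loop, millions of times, across every protocol field.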
Another goal of fuzz testing is to anticipate “zero-day” attacks – i.e. those that hit you before they hit the news. Hackers assume that you have thoroughly tested your system with traditional functional testing, but there are so many permutations of invalid random input that many may not have been tested. As David Newman, President of benchmarking consultancy Network Test, explains: “Attackers have long exploited the fact that even subtle variations in protocols can cause compromise or failure of networked devices. Fuzzing technology helps level the playing field, giving implementers a chance to subject their systems to millions of variations in traffic patterns before the bad guys get a chance to”.
All it might take is one random string of input to cause a crash or hang, and so hackers use automated software to keep throwing random input at your network in the hope of striking lucky. “It takes a thief to catch a thief”, so fuzz testing does the same thing, but under controlled conditions. Again, such testing relies heavily on automation to get sufficient coverage. Today’s fuzz test tools generate millions of permutations – not only making the network much more secure, but also saving manual work and keeping the testing fast and efficient.
The immediate benefit of fuzz testing is that it increases the bank’s trust in its own system security. But does that help the customer to build trust?
I suggest that it does, for the following reasons. One of the things that supports trust in Google is the way it handles silly mistakes: if a user misspells a search term, Google comes up with intelligent suggestions, and that gives the feel of a well-designed system. By analogy, if a customer makes a small slip when logging in to the bank, and the system responds stupidly or even crashes, it suggests that the system is fragile, and that does not build customer confidence.
So the greater resilience to error resulting from repeated fuzz testing does make the system seem less fragile – and that is the first step in building confidence.
What lies ahead?
Today’s functional test systems can do a lot to reassure network managers that their systems are defended as well as possible against attacks and faults, but then the task is to pass on that confidence to the customer without over-explaining and sounding “defensive” in the negative sense.
Fuzz tests go further along the same lines by adding confidence against unknown and unexpected threats, but I suggest that their application could also make the system begin to feel more solid and trustworthy to the customer.
Can we go further? Can we build into a mechanized entry system the equivalent of human intelligence that can assess the personality of the applicant and make good decisions about the credibility of their responses, and what further questions to ask? Instead of just dumbly closing down, can the system flag a danger signal and then escalate authentication with further security checks? To the customer, such an intelligent response would suggest that the system really is alert to danger and “knows what it is doing” – as scary, and yet as comforting, as a community police officer with good local knowledge and experience.
We still have a long way to go before computers can match those skills, but recent advances in real-time Big Data analysis could help clarify understanding of human behavior patterns, and suggest more subtle tests to identify fraudulent behavior. Couple that with fuzzing techniques that extend response testing to embrace the infinite variety of possible near misses, and this could point the way ahead.
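By way of illustration, even a very simple statistical screen captures the flavor of what such analytics might do: score each new event against the customer’s own history and flag the outliers. The z-score threshold of 3 and the function names here are assumptions for the sketch, not a production rule:

```python
from statistics import mean, stdev


def anomaly_score(history: list, observation: float) -> float:
    """How many standard deviations `observation` sits from the
    customer's historical mean (e.g. of transfer amounts)."""
    mu, sigma = mean(history), stdev(history)
    return abs(observation - mu) / sigma if sigma else 0.0


def is_suspicious(history: list, observation: float, threshold: float = 3.0) -> bool:
    """Flag the observation for escalated checks rather than a hard block."""
    return anomaly_score(history, observation) > threshold
```

A flag here would trigger the escalating authentication described above – a further security question, a confirmation SMS – rather than dumbly closing the session.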
Because the real challenge is two-fold: both to make the system resilient to attack and, at the same time, to build the customers’ trust that it truly is resilient.