ForeScout Technologies, Inc., a pioneer in agentless cyber security, has announced the findings of its new “European Perceptions, Preparedness and Strategies for IoT Security” survey. The research revealed that while the majority of respondents acknowledge the business opportunity presented by the Internet of Things (“IoT”), and the growing number of IoT devices connected to their enterprise networks, their organisations lack understanding of how to properly secure them.
“The staggering growth of IoT is creating both value and risks for enterprise organisations,” said Jan Hof, International Marketing Director, ForeScout Technologies. “While IoT is recognised by many as an opportunity to improve and streamline business processes, there are associated security risks that need to be addressed – first and foremost through visibility of devices as soon as they connect to the network. You cannot secure what you cannot see.”
Commissioned by ForeScout and conducted by a non-affiliated third party, Quocirca, the survey of 201 senior IT decision makers in the UK and German speaking regions of Germany, Austria and Switzerland (‘DACH’) assessed their organisations’ IoT security practices. The research covered a range of industry sectors and businesses, including the finance industry. Key findings from the survey include:
- Increased size and diversity of attack surface: The average business expects to be dealing with 7,000 IoT devices over the next 18 months. Even smaller businesses expect the numbers to be in the hundreds or thousands – far more than they are used to securing when it comes to traditional user endpoints.
- Healthcare lagging in IoT readiness: One third of respondents say the IoT is already having a major impact on their organisation and a further third expect it to do so soon. IT and telecoms are the most advanced industries in terms of IoT readiness, with healthcare – which many think stands to benefit significantly from the IoT – lagging behind.
- Uncertainty over identification and control: 65% of respondents have ‘quite’, ‘little’ or ‘no’ confidence in terms of being able to identify and control all IoT devices on their network. This uncertainty is substantiated by the fact that many IoT operating systems are open source and can therefore be adapted by device manufacturers, leading to many variants.
- Agentless approach is the only way: Being able to discover and classify IoT devices without the use of agents (most of which will only support popular operating systems such as Windows, Android, iOS and OS X) was perceived by 64% of respondents as ‘extremely important’ or ‘quite important’, with this figure increasing to 73% within the healthcare sector, which has the most unusual range of devices including CT scanners, diabetic pumps and heart monitors.
- Biggest IoT security challenge? IT functions working together: Getting the various IT functions (networking, security, DevOps, etc.) at an organisation to work together was perceived by 83% of respondents as one of the top IoT security challenges. A minority of survey participants considered lack of personnel to be a problem, but well over half worry about budgets and the availability of appropriate products.
Bob Tarzey, Analyst and Director at Quocirca (who conducted the survey), said, “IoT deployments already involve millions of devices in businesses across Europe. Many will have limited processing power and require low power usage. Others will have unusual operating systems and, in certain cases, the Things involved will be unknown to IT security teams when they first request network access. All of this requires tools that can manage and understand the security status of all network attached devices, without the need to install agents.”
ForeScout commissioned Quocirca to conduct the “European Perceptions, Preparedness and Strategies for IoT Security” survey from August – September 2016. The survey of 201 senior IT decision makers in the UK and German speaking regions of Germany, Austria and Switzerland (‘DACH’) analysed and assessed respondents’ views on their organisation’s IoT devices, security policies, approaches and tools. The research covered a range of industry sectors and businesses with as few as 10 employees, up to large enterprises with more than 10,000 employees. The research follows on from an earlier survey carried out in the U.S. by Webtorials in March-April 2016.
To download the full European report, please go here: http://resources.forescout.com/rs/124-WUR-613/images/ForeScout_IoT_report-Quocirca.pdf
Using AI to identify public sector fraud
When it comes to audits in the public sector, both accountability and transparency are essential. Not only is the public sector under increasing scrutiny to provide assurance that finances are being managed appropriately, but it is also vital to be able to give early warnings of financial pressures or failures. Right now, given the huge value of funds flowing from the public purse into the hands of individuals and companies due to COVID measures, renewed focus on audit is essential to ensure that these funds are used for the purposes intended by parliament.
As Rachel Kirkham, former Head of Data Analytics Research at the UK National Audit Office and now Director of AI Solutions at MindBridge, discusses, introducing AI to identify and rectify potential problems before they become an issue is a key way for public sector organisations and bodies to ensure public funds are being administered efficiently, effectively and economically.
The National Crime Agency has warned repeatedly that criminals are seeking to capitalise on the Covid crisis and the latest warnings suggest that coronavirus-related fraud could end up costing the taxpayer £4bn. From the rise in company registrations associated with Bounce Back loan fraud, to job retention scheme (furlough) misuse, what plans are in place for government departments to identify the scale of fraud and error and then recoup lost funds?
There is no doubt that the speed with which these schemes were deployed, when the public sector was also dealing with a fundamental shift in service delivery, created both opportunities for fraud and risk of systematic error. But six months on, while the pandemic is still creating economic challenges, the peak of the financial crisis has passed. Ongoing financial support for businesses and individuals remains important and it is now essential to learn lessons in order to both target fraudulent activity and, critically, minimise the potential loss of public funds in the future.
Timing is everything. Government has an opportunity to review the last 6 months’ performance and strengthen internal controls to ensure that further use of public funds is appropriate. Technology should play a critical role in detecting and preventing future fraud and error.
If the public sector is to move beyond the current estimates of fraudulent activity and gain real insight into both the true level of fraud and the primary areas to address, an intelligent, data-led approach will be critical. The use of Artificial Intelligence (AI) in public sector IT systems can be used to detect errors, fraud or mismanagement of funds, and enable the process changes required to prevent further issues.
HMRC is leading the way, using its extensive experience in identifying and tackling tax fraud to address the misuse of furlough – an approach that has led to many companies making use of the amnesty to repay erroneous claims. Other public sector bodies, especially smaller local authorities, are less likely to have the skills or resources in place to undertake the required analysis. If public money is to be both recouped and safeguarded in the future, it is likely that a central government initiative will be required.
Data resources are key; the government holds a vast amount of data that could be used, although this will require cross-government collaboration and co-operation. It is possible that the delivery speed of COVID-19 responses will have led to data collection gaps – an issue that will need rapid exploration and resolution. It should be a priority to take stock of existing data holdings to identify any gaps and, at the same time, use Machine Learning to identify anomalies that could reveal either fraud or systematic error.
In addition to identifying fraud, this insight can also feed back into claims processes, providing public sector bodies with a chance to move away from retrospective review towards the use of predictive analytics to improve control. With an understanding of the key indicators of fraud, the application process can automatically raise an alert when a claim looks unusual, minimising the risk of such claims being processed.
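The shift from retrospective review to automated alerting can be illustrated with a minimal sketch. The single "claim amount" feature, the median-based scoring and the threshold below are illustrative assumptions for this article, not a description of any department's actual system; real deployments would combine many features and learned models:

```python
import statistics

def flag_unusual_claims(claims, threshold=3.5):
    """Flag claims whose amount is far from the population median.

    Uses the median absolute deviation (MAD), which stays robust even
    when the data contains the very outliers it is trying to detect.
    `claims` is a list of (claim_id, amount) pairs.
    """
    amounts = [amount for _, amount in claims]
    median = statistics.median(amounts)
    mad = statistics.median(abs(a - median) for a in amounts)
    if mad == 0:
        return []  # no variation in the data, nothing stands out
    # 0.6745 scales the MAD to be comparable with a standard deviation.
    return [cid for cid, amount in claims
            if 0.6745 * abs(amount - median) / mad > threshold]

# Routine claims plus one outlier that should raise an alert.
claims = [("C1", 1200), ("C2", 1350), ("C3", 1100), ("C4", 1280), ("C5", 95000)]
print(flag_unusual_claims(claims))  # → ['C5']
```

In practice the flagged claims would be routed to a human reviewer rather than rejected automatically, keeping the control proportionate.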
While many public sector bodies may still feel overwhelmed, it is essential to take these steps quickly. Even at a time of crisis, good processes are important – failing to learn from the mistakes of the past few months will simply compound the problem and lead to greater misuse of public funds. The public sector, businesses, and individuals need to learn how to operate in this environment, and that requires the right people to spend time looking at the data, identifying problems and putting in place new controls. With an AI-led approach, these individuals will learn lessons about what worked and what didn’t work in this unprecedented release of public funds. And they will gain invaluable insight into the identification of fraud – something that will provide on-going benefit for all public sector bodies.
Why dependency on SMS OTPs should not be the universal solution
By Chris Stephens, Head of Banking Solutions at Callsign
In our day-to-day lives, SMS one-time passwords, also known as OTPs, have unintentionally become the default authentication factor when carrying out high risk and confidential transactions online. Banks, telcos, and businesses are opting for this method as SMS OTPs are relatively quick and simple to put in place. In our digital age, this solution works for the majority of users, who more often than not possess a mobile phone and are familiar with the user experience. As a result, companies are using them to securely authenticate both their customers and employees.
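The mechanics behind an SMS OTP are simple, which is part of the appeal. A minimal server-side sketch is shown below; the six-digit length and two-minute expiry window are illustrative assumptions, not any particular provider's parameters:

```python
import secrets
import time

OTP_TTL_SECONDS = 120  # illustrative expiry window

def issue_otp(n_digits=6):
    """Generate a random numeric one-time password and its expiry time.

    The code would then be sent to the user's phone via an SMS gateway.
    """
    code = "".join(secrets.choice("0123456789") for _ in range(n_digits))
    return code, time.time() + OTP_TTL_SECONDS

def verify_otp(submitted, issued_code, expires_at):
    """Accept only an exact, unexpired match, compared in constant time."""
    if time.time() > expires_at:
        return False
    return secrets.compare_digest(submitted, issued_code)

code, expires_at = issue_otp()
print(verify_otp(code, code, expires_at))  # → True
```

Note that nothing in this flow verifies *who* receives the SMS – a point that matters for the interception attacks discussed later.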
When looking into SMS OTPs, businesses should consider the bigger picture and assess how time- and cost-efficient the solution is as a whole, taking into account key elements that may have been neglected in the past, such as hidden fees and security vulnerabilities. There are also other options better suited to different business needs – the European Banking Authority (EBA) has already recognised alternative factors, such as the secure binding of a device as a possession factor and behavioural biometrics as an inherence factor. Earlier this year Google officially began moving away from SMS OTP-based authentication, while in the UK both the Financial Conduct Authority (FCA) and UK Finance have recommended that banks reduce their dependence on SMS OTPs in the longer term. In the past, financial institutions chose this solution largely because it offered a quick route to compliance with the PSD2 Strong Customer Authentication (SCA) regulation.
It is common knowledge that SMS OTPs are not without their flaws, and with the extended deadline for SCA for e-commerce less than a year away (September 2021) – is now the best time for the industry to look elsewhere for more intelligent approaches to authentication?
SMS as the go-to solution
Fraudsters are sophisticated criminals who attack the weakest points in the system. They have observed that banks and businesses rely heavily on SMS OTPs for 2FA (two-factor authentication) transactions, which is why they continue to abuse and weaken existing systems and exploit these solutions for their own benefit. Fraudsters commonly practise SIM-swapping, stealing personal information about the victim and then contacting the target’s mobile operator, pretending that the phone has been lost or stolen. With lockdown rules constantly changing, not all customers are able to easily visit stores right now; as a result, operators depend on mobile channels, which are more susceptible to this type of manipulation, to service their customers.
SIM-swap fraud is easy to carry out. As soon as the fraudster has duped the mobile operator, a number transfer is authorised and activated on a new SIM card, granting the cybercriminal access to the victim’s number and, consequently, all one-time passwords and authentication codes sent to it. In March 2020, Europol warned that SIM-swap scams are a growing problem across Europe, following an investigation that resulted in the arrest of 12 suspects associated with the theft of more than €3 million ($3.3 million).
However, consumers and businesses need to be aware that SIM-swap fraud is not the only method cybercriminals are deploying to intercept OTPs from their victims during the pandemic and beyond.
Spotting a scam
SIM-swap attacks are not the only method scammers are using: a growing number of cases take advantage of malware and remote access applications to steal SMS OTPs. Scammers socially engineer individuals into downloading remote access or hidden surveillance apps, granting access to the victim’s device without ever coming into physical contact with it. The cybercriminals can then directly read the victim’s messages, or secretly record all their texts and phone calls to another device. As with SIM-swap attacks, the unknowing victim’s personal messages, including OTPs, are intercepted by the fraudster – but this time the fraudster also has direct access to the target’s device.
Several different parties are involved in the delivery of OTPs, and at each stage of the process there is an opportunity for fraudsters to capture messages. There is also the potential for mass compromise as a result of hidden vulnerabilities in the SS7 network, and the wider attack surface to consider. With all this in mind, banks need a good overview of all data sub-processors to allow them to adopt the most suitable security controls, such as multi-factor authentication (MFA), audit logs, and dashboards.
Watch out for hidden costs
It comes as no surprise that intercepted OTPs result in fraud losses, which quickly increase as hidden fees go unnoticed over time. Beyond the upfront costs of SMS OTPs, such as cost per text, there are also several hidden costs that are difficult to budget for and avoid. They are typically the result of the domino effect of the aforementioned issues – forcing businesses into a reactive mode that is tricky to handle.
As an example, where drop-offs take place in an authentication journey, including when SMS texts are not received, financial institutions need to be ready to manage an influx of calls to their customer service helplines and the associated fees. Worse still for the bank, the customer may decide to use another card to make the payment: customers are likely to abandon a card when they are fed up with a customer journey that involves too much unnecessary friction. These abandonments lead to a decrease in interchange fees for banks and could even reduce the customer base for merchants.
Evaluating the user experience
Whilst most consumers possess a mobile phone, SMS is not a reliable solution for everybody. For instance, SMS OTPs are not accessible to those living in remote or low-service locations, who may struggle to receive SMS alerts. This overall experience is also cumbersome as it takes roughly 30 seconds of transaction time for the text to be delivered, compared with the almost instantaneous transactions experienced by alternative authentication approaches, such as biometrics.
In this digital age, businesses are constantly adapting to accommodate different generations including Gen Z who are digital natives – so mobile use is only going to increase and, along with it, the volume of transactions taking place on these devices will also grow. This goes hand in hand with the ever-changing needs and expectations of customers as they look for hyper-personalised online experiences as the new norm. Yes, SMS OTPs are mobile-first, but they do still require the user to switch to another app to view the SMS so they can complete the transaction, which can be annoying for the customer as it interrupts the e-commerce user journey. After a friction-filled experience, it would be unsurprising if the user then decides to abandon the transaction. With this and other existing security implications in mind, the EBA recommends banks adopt other options.
Benefits of behavioural biometrics
Every person has their own unique behaviour and habits when swiping across the screen, which can be tracked through the analysis of the data signals captured from hardware sensors as the user engages with their device. These signals are used to derive user features such as finger movement, hand orientation, and wrist strength. Together, artificial intelligence and machine learning provide the capability to analyse this information and build a personalised profile of that user’s swipe behaviour, which takes only milliseconds to confirm whether the customer is who they say they are. This immediately allows the bank to seamlessly carry out appropriate security actions and stop fraudsters in their path before they can even begin using a target’s device.
Behavioural biometrics is ideal for positively identifying an individual and also effectively identifies bad actors, including when cybercriminals use technologies such as bots or remote access Trojan (RAT) software to control transactional flows without the user being aware. This approach to biometrics works on both high- and low-end devices and helps to protect potential victims against both blind attacks (where the fraudster has never observed how the user swipes their phone) and over-the-shoulder attacks (where the fraudster has been able to observe the victim’s swipe movements). Both forms of attack can be detected by unique algorithms with an accuracy rate of 98%; by layering in device intelligence and locational habits, it is the most accurate and robust identification method currently available on the market. By preventing criminal access, even when the attacker has observed the user’s behaviour, it offers an added level of security to businesses and banks that other traditional methods, such as a PIN or password, cannot.
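The underlying idea can be sketched very simply. Production systems use far richer sensor signals and learned models; the three normalised features, the enrolment scheme and the distance threshold below are illustrative assumptions only:

```python
import math

def enroll(swipe_samples):
    """Build a per-user profile: the mean of each swipe feature
    (e.g. speed, pressure, touch area) over the enrolment samples."""
    n = len(swipe_samples)
    return [sum(feature) / n for feature in zip(*swipe_samples)]

def matches(profile, swipe, tolerance=0.25):
    """Accept the swipe if its Euclidean distance from the stored
    profile is within `tolerance`; otherwise challenge the user."""
    return math.dist(profile, swipe) <= tolerance

# Enrolment: a few swipes from the genuine user (normalised features).
profile = enroll([[0.62, 0.80, 0.33], [0.58, 0.77, 0.35], [0.60, 0.82, 0.31]])
print(matches(profile, [0.61, 0.79, 0.34]))  # genuine-looking swipe → True
print(matches(profile, [0.10, 0.30, 0.90]))  # bot-like swipe → False
```

In a real system the decision would feed a risk engine alongside device and location signals, rather than act as a hard accept/reject on its own.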
In order for organisations to maintain a competitive edge and successfully navigate through the pandemic, they will need to deliver hyper-personalised journeys to meet consumers’ expectations. Consumers are increasingly looking to bank with, or sign up to, services that offer a secure and bespoke experience that meets their daily needs during and beyond the pandemic.
Therefore, a holistic approach to security empowers businesses to take back control of their fraud and authentication management. Unfortunately, single point solutions, like SMS OTPs, do not allow businesses to scale or provide enough flexibility to meet these requirements. By adopting a strategic, and intelligence-based, approach financial institutions and organisations will be able to upgrade security measures and enhance the user experience – whilst keeping IT spend low.
The rise of AI in compliance management
Martin Ellingham, director of product management compliance at Aptean, looks at the increasing role of AI in compliance management and just what we can expect for the future
Artificial Intelligence (or AI as it’s now more commonly known) has been around in some shape or form since the 1950s. Although now into its eighth decade, as a technology it’s still in its relative infancy, with the nirvana of general AI still just the stuff of Hollywood. That’s not to say that AI hasn’t developed over the decades – of course it has – and it now presents itself not as a standalone technology but as a distinct and effective set of tools that, although not a panacea for all business ills, certainly brings with it a whole host of benefits for the business world.
As with all new and emerging technologies, wider understanding takes time to take hold, and this is proving especially true of AI, where a lack of understanding has led to a cautious, hesitant approach. Nowhere is this more evident than when it comes to compliance, particularly within the financial services sector. Very much playing catch-up with the industry it regulates, until very recently the UK’s Financial Conduct Authority (FCA) had hunkered down with its policy of demanding maximum transparency from banks in their use of AI and machine learning algorithms, mandating that banks justify the use of all kinds of automated decision making – almost, but not quite, shutting down the use of AI in any kind of front-line customer interaction.
But as regulators learn and understand more about the potential benefits of AI, seeing first-hand how businesses are implementing AI tools not only to increase business efficiencies but to add a further layer of customer protection to their processes, they are gradually peeling back the tight regulations to make more room for it. The FCA’s recent announcement of the Financial Services AI Public Private Forum (AIPPF), in conjunction with the Bank of England, is testament to this increasing acceptance of the use of AI. The AIPPF is set to explore the safe adoption of AI technologies within financial services, and while the FCA is not pulling back on its demand that AI technology be applied intelligently, it signals a clear move forward in its approach, recognising how financial services firms are already making good use of certain AI tools to tighten up compliance.
Complexity and bias
So what are the issues standing in the way of wider adoption of AI? To start with, there is the inherently complex nature of AI itself. If firms are to deploy AI, in any guise, they need to ensure they have a solid understanding not only of the technology but of the governance surrounding it. The main problem here is the worldwide shortage of programmers. With the list of businesses wanting to recruit programmers no longer limited to software companies, and now including any type of organisation that recognises the competitive advantage to be gained by developing its own AI systems, the shortage is getting more acute. And even if businesses are able to recruit AI programmers, if it takes an experienced programmer to understand AI, what hope does a compliance expert have?
For the moment, there is still a nervousness among regulators about how they can possibly implement robust regulation when there is still so much to learn about AI, particularly when there is currently no standard way of using AI in compliance. With time this will obviously change, as AI becomes more commonplace and general understanding increases, and instead of the digital natives that are spoken about today, businesses and regulators will be led by AI-natives, well-versed in all things AI and capable of implementing AI solutions and the accompanying regulatory frameworks.
As well as a lack of understanding, there is also the issue of bias. While businesses have checks and balances in place to prevent human bias coming into play for lending decisions for example, they might be mistaken in thinking that implementing AI technologies will eradicate any risk of bias emerging. AI technologies are programmed by humans and are therefore fallible, with unintended bias a well-documented outcome of many AI trials leading certain academics to argue that bias-free machine learning doesn’t exist. This presents a double quandary for regulators. Should they be encouraging the use of a technology where bias is seemingly inherent and if they do pave the way for the wider use of AI, do they understand enough about the technology to pinpoint where any bias has occurred, should the need arise? With questions such as this, it’s not difficult to see why regulators are taking their time to understand how AI fits with compliance.
So, bearing all this in mind, where are we seeing real benefits from AI with regard to compliance, now and in the near future? AI is very good at dealing with tasks on a large scale and in super-quick time. It’s not that AI is more intelligent than the human brain; it’s that it can work at much faster speeds and on a much bigger scale, making it the perfect fit for the data-heavy world in which we all live and work. For compliance purposes, this makes it an ideal solution for double-checking work and an accurate detector of systemic faults – one of the major challenges that regulators, in the financial sector in particular, have faced in recent years.
In this respect, rather than a replacement for humans in the compliance arena, AI is adding another layer of protection for businesses and consumers alike. When it comes to double-checking work, AI can pinpoint patterns or trends in employee activity and customer interactions much quicker than any human, enabling remedial action to be taken to ensure adherence to regulations. Similarly, by analysing the data from case management solutions across multiple users, departments and locations, AI can readily identify systemic issues before they take hold, enabling the business to take the necessary steps to rectify practices to guarantee compliance before they adversely affect customers and before the business itself contravenes regulatory compliance.
Similarly, when it comes to complaint management for example, AI can play a vital role in determining the nature of an initial phone call, directing the call to the right team or department without the need for any human intervention and fast-tracking more urgent cases quickly and effectively. Again, it’s not a case of replacing humans but complementing existing processes and procedures to not only improve outcomes for customers, but to increase compliance, too.
At its most basic level, AI can minimise the time taken to complete tasks and reduce errors, which, in theory, makes it the ideal solution for businesses of all shapes, sizes and sectors. For highly regulated industries, where compliance is mandatory, it’s not so clear cut. While there are clearly benefits to be had from implementing AI solutions, for the moment, they should be regarded as complementary technologies, protecting both consumers and businesses by adding an extra guarantee of compliant processes. While knowledge and understanding of the intricacies of AI are still growing, it would be a mistake to implement AI technologies across the board, particularly when a well-considered human response to the nuances of customer behaviours and reactions play such an important role in staying compliant. That’s not to say that we should be frightened of AI, and nor should the regulators. As the technology develops, so will our wider understanding. It’s up to businesses and regulators alike to do better, being totally transparent about the uses of AI and putting in place a robust, reliable framework to monitor the ongoing behaviour of their AI systems.