Technology

The 8 Personality Traits You Need to Succeed in Cybersecurity

Cybersecurity threats are on the rise. With the rapid increase of security breaches, company hacks and data leaks, cybercrime has become one of the most significant threats to global business. Skilled cybersecurity professionals are key for the safety of companies and governments, but there is an anticipated skills shortage of 1.8 million workers by 2022. The demand for talent in this space is at an all-time high, and there are some unique personality traits that recruiters and companies need to look out for.

Hogan Assessments, a leading provider of personality assessments, has helped some of the world’s top IT and cybersecurity firms recruit the right individuals. Hogan’s science-based assessments and 30 years of validated research found that there are eight personality characteristics best suited to a successful career in cybersecurity.

  1. Modest. Those who tend to excel in cybersecurity typically prefer to avoid the spotlight. A successful cybersecurity agent is not egotistical or fame-hungry, and instead favours a more low-key lifestyle. After all, most of the well-known names in cybersecurity are notorious cyber-criminals.
  2. Altruistic. Cybersecurity professionals should want to help people. While they work all day with systems and programming, protecting and helping people is at the core of this profession. They should work well with others and avoid isolating themselves. Fighting threats requires cooperation and trust between colleagues as they strive together towards the same security goals.
  3. Composed. The enterprise systems they protect are always under threat. Cybersecurity agents naturally need a sense of urgency, but it is crucial that they stay composed when handling cyberthreats. Unnecessary outbursts when the pressure rises can be counterproductive and shift their attention away from what is at stake.
  4. Scientific. The ideal cybersecurity professional wants to solve problems using data and analytical skills. Cyber criminals are increasingly sophisticated in their attacks, which calls for individuals who are highly technical and value evidence-based decision making.
  5. Inquisitive. The world of cybersecurity is ever changing. When threats are prevented, new ones emerge that can require a completely different set of skills from the ones needed previously. A successful cybersecurity candidate is imaginative, curious and creative. They need to figure things out quickly, show motivation to learn and be open to new ideas.
  6. Skeptical. ‘Trust no one’ would be a useful motto for a cybersecurity worker. Getting ahead of the game and preventing attacks sometimes means having to think like a hacker. This means maintaining suspicion about what’s going on around you, because in a world of constant threats, naivety can be a dangerous thing.
  7. Responsive. In cybersecurity, things can go wrong quickly, and you might be blamed for breaches that weren’t your fault. If someone in the company opens a phishing email and exposes sensitive information, you might be held accountable. It is thus very important for a cybersecurity worker to be open and responsive to criticism and to avoid being passive-aggressive.
  8. Diligent. In a pressured environment with a firm’s security at stake, a successful candidate needs to be detail-oriented and constantly push projects to completion. One small oversight could lead to attacks, so cybersecurity specialists need to scrutinise every detail. They also need to value achievement and making an impact.

Dr. Ryne Sherman, Chief Science Officer at Hogan Assessments, adds: “Traditional recruiting practices often overlook personality and focus on education, experience and a set of hard skills. While these are important, it is crucial to remember that personality characteristics play a huge role. A candidate with the suitable personality can be easily trained into the right role. This is especially true in the cybersecurity world where companies struggle to find the experienced individuals they need. To recruit top talent, companies should direct their attention to the power of personality.”

Technology

Using AI to identify public sector fraud


When it comes to audits in the public sector, both accountability and transparency are essential. Not only is the public sector under increasing scrutiny to provide assurance that finances are being managed appropriately, but it is also vital to be able to give early warnings of financial pressures or failures. Right now, given the huge value of funds flowing from the public purse into the hands of individuals and companies due to COVID measures, renewed focus on audit is essential to ensure that these funds are used for the purposes intended by parliament.

As Rachel Kirkham, former Head of Data Analytics Research at the UK National Audit Office and now Director of AI Solutions at MindBridge, discusses, introducing AI to identify and rectify potential problems before they become an issue is a key way for public sector organisations and bodies to ensure public funds are being administered efficiently, effectively and economically.

Crime Wave

The National Crime Agency has warned repeatedly that criminals are seeking to capitalise on the Covid crisis and the latest warnings suggest that coronavirus-related fraud could end up costing the taxpayer £4bn. From the rise in company registrations associated with Bounce Back loan fraud, to job retention scheme (furlough) misuse, what plans are in place for government departments to identify the scale of fraud and error and then recoup lost funds?

There is no doubt that the speed with which these schemes were deployed, when the public sector was also dealing with a fundamental shift in service delivery, created both opportunities for fraud and risk of systematic error. But six months on, while the pandemic is still creating economic challenges, the peak of the financial crisis has passed. Ongoing financial support for businesses and individuals remains important and it is now essential to learn lessons in order to both target fraudulent activity and, critically, minimise the potential loss of public funds in the future.

Timing is everything. Government has an opportunity to review the last six months’ performance and strengthen internal controls to ensure that further use of public funds is appropriate. Technology should play a critical role in detecting and preventing future fraud and error.

Intelligence-Led Audit

If the public sector is to move beyond the current estimates of fraudulent activity and gain real insight into both the true level of fraud and the primary areas to address, an intelligent, data-led approach will be critical. Artificial Intelligence (AI) applied to public sector IT systems can detect errors, fraud or mismanagement of funds, and enable the process changes required to prevent further issues.

HMRC is leading the way, using its extensive experience in identifying and tackling tax fraud to address the misuse of furlough – an approach that has led to many companies making use of the amnesty to repay erroneous claims. Other public sector bodies, especially smaller local authorities, are less likely to have the skills or resources in place to undertake the required analysis. If public money is to be both recouped and safeguarded in the future, it is likely that a central government initiative will be required.

Data resources are key; the government holds a vast amount of data that could be used, although this will require cross-government collaboration and co-operation. It is possible that the delivery speed of COVID-19 responses will have led to data collection gaps – an issue that will need rapid exploration and resolution. It should be a priority to take stock of existing data holdings to identify any gaps and, at the same time, use Machine Learning to identify anomalies that could reveal either fraud or systematic error.
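As a purely illustrative sketch of the kind of anomaly detection described above (the figures and threshold are invented, not drawn from any government dataset), a robust median-based outlier test can surface claims that deviate sharply from their peers:

```python
# Hypothetical sketch: flag anomalous claim amounts using the modified
# z-score (median absolute deviation), which stays robust even when the
# outlier itself would distort a mean/stdev-based test.
from statistics import median

def flag_anomalies(amounts, threshold=3.5):
    """Return indices of amounts whose MAD-based modified z-score is large."""
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:                       # all values (near-)identical: nothing to flag
        return []
    return [i for i, a in enumerate(amounts)
            if 0.6745 * abs(a - med) / mad > threshold]

claims = [1200, 1150, 1300, 1250, 1180, 48000]   # one suspicious outlier
print(flag_anomalies(claims))                    # → [5]
```

A production system would of course combine many features (claimant history, cross-government data matches) rather than a single amount column, but the principle of automatically surfacing cases for human review is the same.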

Taking Control

In addition to identifying fraud, this insight can also feed back into claims processes, providing public sector bodies with a chance to move away from retrospective review towards the use of predictive analytics to improve control. With an understanding of the key indicators of fraud, the application process can automatically raise an alert when a claim looks unusual, minimising the risk of such claims being processed.
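The alert-raising step described above can be sketched as a simple indicator-scoring check at submission time; the field names and weights below are invented for illustration and are not any department’s real rules:

```python
# Hypothetical sketch: score a claim against known fraud indicators at the
# point of submission and raise an alert before payment, instead of relying
# on retrospective review.
INDICATORS = {
    "company_registered_after_scheme_launch": 0.4,   # illustrative weight
    "claim_exceeds_reported_payroll": 0.5,
    "bank_account_changed_recently": 0.3,
}

def review_claim(claim, alert_threshold=0.5):
    """claim: dict of indicator name -> bool. Returns a score and alert flag."""
    score = sum(w for flag, w in INDICATORS.items() if claim.get(flag))
    return {"score": round(score, 2), "alert": score >= alert_threshold}

claim = {"company_registered_after_scheme_launch": True,
         "bank_account_changed_recently": True}
print(review_claim(claim))   # → {'score': 0.7, 'alert': True}
```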

While many public sector bodies may still feel overwhelmed, it is essential to take these steps quickly. Even at a time of crisis, good processes are important – failing to learn from the mistakes of the past few months will simply compound the problem and lead to greater misuse of public funds. The public sector, businesses, and individuals need to learn how to operate in this environment, and that requires the right people to spend time looking at the data, identifying problems and putting in place new controls. With an AI-led approach, these individuals will learn lessons about what worked and what didn’t work in this unprecedented release of public funds. And they will gain invaluable insight into the identification of fraud – something that will provide on-going benefit for all public sector bodies.


Technology

Why dependency on SMS OTPs should not be the universal solution


By Chris Stephens, Head of Banking Solutions at Callsign

In our day-to-day lives, SMS one-time passwords, also known as OTPs, have unintentionally become the default authentication factor when carrying out high risk and confidential transactions online. Banks, telcos, and businesses are opting for this method as SMS OTPs are relatively quick and simple to put in place. In our digital age, this solution works for the majority of users, who more often than not possess a mobile phone and are familiar with the user experience. As a result, companies are using them to securely authenticate both their customers and employees.

When looking into SMS OTPs, businesses should consider the bigger picture and how time- and cost-efficient these solutions are as a whole, taking into account key elements that may have been neglected in the past, such as hidden fees and security vulnerabilities. There are also other options better suited to different business needs: the European Banking Authority (EBA) has already recognised alternative factors, such as the secure binding of a device to achieve possession and the use of behavioural biometrics as an inherence factor. Earlier this year, Google officially began moving away from SMS OTP-based authentication, while in the UK both the Financial Conduct Authority (FCA) and UK Finance have recommended that banks reduce their dependence on SMS OTPs in the longer term. In the past, financial institutions chose this solution because it enabled them to save time in becoming compliant with the PSD2 Strong Customer Authentication (SCA) regulation.

It is common knowledge that SMS OTPs are not without their flaws, and with the extended deadline for SCA for e-commerce less than a year away (September 2021) – is now the best time for the industry to look elsewhere for more intelligent approaches to authentication?

SMS as the go-to solution

Fraudsters are sophisticated criminals, who attack the weakest points in the system – they have observed that banks and businesses heavily rely on SMS OTPs for 2FA (two-factor authentication) transactions, which is why they continue to abuse and weaken existing systems and exploit these solutions for their own benefit. Fraudsters commonly practise SIM-swap – where they steal personal information about the victim and then contact the target’s mobile operator pretending that their phone has been lost or stolen. With lockdown rules constantly changing, not all customers are able to easily visit stores right now, therefore operators are dependent on mobile-authentication channels that are more susceptible to this type of manipulation to service their customers.

SIM-swap fraud can easily be done. As soon as the fraudster has duped the mobile operator, a number transfer is authorised and then activated on a new SIM card – it works by granting cybercriminals access to the victim’s number and consequently all one-time passwords and authentication codes that are sent to that number. In March 2020, Europol warned that SIM-swap scams are a growing problem across Europe, following an investigation that resulted in the arrest of 12 suspects associated with the theft of more than €3 million ($3.3 million).

However, consumers and businesses need to be aware that SIM-swap fraud is not the only method cybercriminals are deploying to intercept OTPs from their victims during the pandemic and beyond.

Spotting a scam

SIM-swap attacks are not the only method scammers are using: there is also a growing number of cases that take advantage of malware and remote access applications to steal SMS OTPs. Fraudsters socially engineer individuals into downloading remote access apps or hidden surveillance apps that grant access to the victim’s device, without the fraudster ever coming into physical contact with it. The cybercriminals can then directly read the victim’s messages, or secretly relay all their texts and phone calls to another device. As with SIM-swap attacks, the unknowing victim’s personal messages, including OTPs, are tapped into by the fraudster – but this time the fraudster also has direct access to the target’s device.

Several different parties are involved in the delivery of OTPs, and at each stage of the process there is an opportunity for fraudsters to capture messages. There is also the potential for mass compromise as a result of hidden vulnerabilities in the SS7 signalling network, and the wider attack surface to consider. With all this in mind, banks need a good overview of all data sub-processors so they can adopt the most suitable security controls, such as multi-factor authentication (MFA), audit logs, and dashboards.

Watch out for hidden costs

It comes as no surprise that intercepted OTPs result in fraud losses, which quickly increase as hidden fees go unnoticed over time. Beyond the upfront costs of SMS OTPs, such as cost per text, there are also several hidden costs that are difficult to budget for and avoid. They are typically the result of the domino effect of the aforementioned issues – forcing businesses into a reactive mode that is tricky to handle.

As an example, where drop-offs take place in an authentication journey, including when SMS texts are not received, financial institutions need to be ready to manage an influx of calls to their customer service helplines, and the associated fees. Alternatively, the customer may decide to use another card to make the payment, which is worse for the bank: customers are likely to abandon a card when they are fed up with a customer journey that involves too much unnecessary friction. These abandonments lead to a decrease in interchange fees for banks and could even reduce the customer base for merchants.

Evaluating the user experience

Whilst most consumers possess a mobile phone, SMS is not a reliable solution for everybody. For instance, SMS OTPs are not accessible to those living in remote or low-service locations, who may struggle to receive SMS alerts. This overall experience is also cumbersome as it takes roughly 30 seconds of transaction time for the text to be delivered, compared with the almost instantaneous transactions experienced by alternative authentication approaches, such as biometrics.

In this digital age, businesses are constantly adapting to accommodate different generations including Gen Z who are digital natives – so mobile use is only going to increase and, along with it, the volume of transactions taking place on these devices will also grow. This goes hand in hand with the ever-changing needs and expectations of customers as they look for hyper-personalised online experiences as the new norm. Yes, SMS OTPs are mobile-first, but they do still require the user to switch to another app to view the SMS so they can complete the transaction, which can be annoying for the customer as it interrupts the e-commerce user journey. After a friction-filled experience, it would be unsurprising if the user then decides to abandon the transaction. With this and other existing security implications in mind, the EBA recommends banks adopt other options.
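One of the device-bound alternatives alluded to above is the time-based one-time password (TOTP, RFC 6238), generated on the user’s device from a shared secret, so that no code ever travels over the SMS network for a SIM-swapper to intercept. A minimal, stdlib-only sketch:

```python
# RFC 6238 time-based OTP using HMAC-SHA1 (the RFC's default algorithm).
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestep: int = 30, digits: int = 6, now=None) -> str:
    counter = int((now if now is not None else time.time()) // timestep)
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and device derive the same code from the same secret and clock,
# so nothing is sent in transit. Using the RFC 6238 test secret and time:
secret = b"12345678901234567890"
print(totp(secret, now=59))   # → "287082" (RFC test vector, truncated to 6 digits)
```

In practice the secret is provisioned once at enrolment (typically via a QR code) and the server tolerates a small clock-drift window when validating.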


Benefits of behavioural biometrics

Every person has their own unique behaviour and habits when swiping across a screen, which can be tracked by analysing the data signals captured from hardware sensors as the user engages with their device. These signals are crucial to deriving user features such as finger movement, hand orientation, and wrist strength. Together, artificial intelligence and machine learning provide the capability to analyse this information and develop a personalised profile of that user’s swipe behaviour, which takes only milliseconds to confirm whether the customer is who they say they are. This immediately allows the bank to seamlessly carry out appropriate security actions and stop fraudsters in their path before they can even begin using a target’s device.
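A heavily simplified sketch of the matching step is shown below; the feature names, enrolment statistics and tolerance are invented for illustration, and real behavioural-biometric systems rely on far richer signals and learned models:

```python
# Hypothetical sketch: compare a new swipe's feature vector against a user's
# enrolled profile (per-feature mean and spread) and accept only if every
# feature falls within a tolerance band.
def within_profile(sample, profile, tolerance=2.0):
    """sample: {feature: value}; profile: {feature: (mean, stdev)}."""
    return all(abs(sample[f] - mu) <= tolerance * sd
               for f, (mu, sd) in profile.items())

# Illustrative profile learned from a user's enrolment swipes:
profile = {"swipe_speed_px_ms": (1.8, 0.3),
           "pressure": (0.62, 0.05),
           "swipe_angle_deg": (78.0, 6.0)}

genuine  = {"swipe_speed_px_ms": 1.7, "pressure": 0.60, "swipe_angle_deg": 81.0}
imposter = {"swipe_speed_px_ms": 3.1, "pressure": 0.40, "swipe_angle_deg": 55.0}
print(within_profile(genuine, profile), within_profile(imposter, profile))
# → True False
```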

Behavioural biometrics is ideal for positively identifying an individual and also effectively identifies bad actors, including when cybercriminals use technologies such as bots or remote access Trojan (RAT) software to control transactional flows without the user being aware. This approach to biometrics works on both high- and low-end devices and helps to protect potential victims against both blind attacks (where the fraudster has never observed how the user swipes their phone) and over-the-shoulder attacks (where the fraudster has been able to observe the victim’s swipe movements). Both forms of attack can be detected by unique algorithms, with an accuracy rate of 98%; by layering in device intelligence and locational habits, it is the most accurate and robust identification method currently available on the market. By preventing criminal access even when the attacker has observed the user’s behaviour, it offers a level of security to businesses and banks that traditional methods, such as a PIN or password, cannot.

In order for organisations to maintain a competitive edge and successfully navigate the pandemic, they will need to deliver hyper-personalised journeys that meet consumers’ expectations. Consumers are increasingly looking to bank with, or sign up to, services that offer a secure and bespoke experience meeting their daily needs during and beyond the pandemic.

Therefore, a holistic approach to security empowers businesses to take back control of their fraud and authentication management. Unfortunately, single point solutions, like SMS OTPs, do not allow businesses to scale or provide enough flexibility to meet these requirements. By adopting a strategic, and intelligence-based, approach financial institutions and organisations will be able to upgrade security measures and enhance the user experience – whilst keeping IT spend low.


Technology

The rise of AI in compliance management


Martin Ellingham, director of product management compliance at Aptean, looks at the increasing role of AI in compliance management and just what we can expect in the future.

Artificial Intelligence (or AI, as it’s now more commonly known) has been around in some shape or form since the 1960s. Although now into its seventh decade, as a technology it’s still in its relative infancy, with the nirvana of general AI still just the stuff of Hollywood. That’s not to say that AI hasn’t developed over the decades, of course it has, and it now presents itself not as a standalone technology but as a distinct and effective set of tools that, although not a panacea for all business ills, certainly brings with it a whole host of benefits for the business world.

As with all new and emerging technologies, wider understanding takes time to take hold, and this is proving especially true of AI, where a lack of understanding has led to a cautious, hesitant approach. Nowhere is this more evident than when it comes to compliance, particularly within the financial services sector. Very much playing catch-up with the industry it regulates, until very recently the UK’s Financial Conduct Authority (FCA) had hunkered down with its policy of demanding maximum transparency from banks in their use of AI and machine learning algorithms, mandating that banks justify the use of all kinds of automated decision making, almost but not quite shutting down the use of AI in any kind of front-line customer interactions.

But as regulators learn and understand more about the potential benefits of AI, and see first-hand how businesses are implementing AI tools not only to increase business efficiencies but to add a further layer of customer protection to their processes, they are gradually peeling back the tight regulations to make more room for AI. The FCA’s recent announcement of the Financial Services AI Public Private Forum (AIPPF), in conjunction with the Bank of England, is testament to this increasing acceptance of AI. The AIPPF is set to explore the safe adoption of AI technologies within financial services, and while the regulator is not pulling back on its demand that AI technology be applied intelligently, the forum signals a clear move forward in its approach to AI, recognising how financial services firms are already making good use of certain AI tools to tighten up compliance.

Complexity and bias

So what are the issues standing in the way of wider adoption of AI? To start with, there is the inherently complex nature of AI. If firms are to deploy AI, in any guise, they need to ensure they have a solid understanding not only of the technology itself but of the governance surrounding it. The main problem here is the worldwide shortage of programmers. With the list of businesses wanting to recruit programmers no longer limited to software companies, and now including any type of organisation that recognises the potential competitive advantage to be gained from developing its own AI systems, the shortage is becoming more acute. And even if businesses are able to recruit AI programmers, if it takes an experienced programmer to understand AI, what hope does a compliance expert have?

For the moment, there is still a nervousness among regulators about how they can possibly implement robust regulation when there is still so much to learn about AI, particularly when there is currently no standard way of using AI in compliance. With time this will obviously change, as AI becomes more commonplace and general understanding increases, and instead of the digital natives that are spoken about today, businesses and regulators will be led by AI-natives, well-versed in all things AI and capable of implementing AI solutions and the accompanying regulatory frameworks.

As well as a lack of understanding, there is also the issue of bias. While businesses have checks and balances in place to prevent human bias coming into play, for lending decisions for example, they might be mistaken in thinking that implementing AI technologies will eradicate any risk of bias emerging. AI technologies are programmed by humans and are therefore fallible, with unintended bias a well-documented outcome of many AI trials, leading certain academics to argue that bias-free machine learning doesn’t exist. This presents a double quandary for regulators. Should they be encouraging the use of a technology where bias is seemingly inherent? And if they do pave the way for the wider use of AI, do they understand enough about the technology to pinpoint where any bias has occurred, should the need arise? With questions such as these, it’s not difficult to see why regulators are taking their time to understand how AI fits with compliance.

Complementary AI

So, bearing all this in mind, where are we seeing real benefits from AI with regards to compliance, if not right now but in the near future? AI is very good at dealing with tasks on a large scale and in super-quick time. It’s not that AI is more intelligent than the human brain, it’s just that it can work at much faster speeds and on a much bigger scale, making it the perfect fit for the data-heavy world in which we all live and work. For compliance purposes, this makes it an ideal solution for double-checking work and an accurate detector of systemic faults, one of the major challenges that regulators in the financial sector in particular have faced in recent years.

In this respect, rather than a replacement for humans in the compliance arena, AI is adding another layer of protection for businesses and consumers alike. When it comes to double-checking work, AI can pinpoint patterns or trends in employee activity and customer interactions much quicker than any human, enabling remedial action to be taken to ensure adherence to regulations. Similarly, by analysing the data from case management solutions across multiple users, departments and locations, AI can readily identify systemic issues before they take hold, enabling the business to take the necessary steps to rectify practices to guarantee compliance before they adversely affect customers and before the business itself contravenes regulatory compliance.

Similarly, when it comes to complaint management for example, AI can play a vital role in determining the nature of an initial phone call, directing the call to the right team or department without the need for any human intervention and fast-tracking more urgent cases quickly and effectively. Again, it’s not a case of replacing humans but complementing existing processes and procedures to not only improve outcomes for customers, but to increase compliance, too.
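The triage step described above can be reduced to a toy sketch based on keyword scoring; the team names and keyword lists are invented, and a production system would use a trained language model rather than word matching:

```python
# Hypothetical sketch: score an initial call transcript against per-team
# keyword sets and route it to the best-matching team without human
# intervention, falling back to a general queue when nothing matches.
ROUTING_KEYWORDS = {
    "fraud":     {"unauthorised", "stolen", "scam", "fraud"},
    "payments":  {"transfer", "payment", "standing", "order"},
    "mortgages": {"mortgage", "repayment", "valuation"},
}

def route_call(transcript: str, default: str = "general") -> str:
    words = set(transcript.lower().split())
    scores = {team: len(words & kws) for team, kws in ROUTING_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else default

print(route_call("There is an unauthorised payment on my account, I think it's a scam"))
# → fraud
```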

At its most basic level, AI can minimise the time taken to complete tasks and reduce errors, which, in theory, makes it the ideal solution for businesses of all shapes, sizes and sectors. For highly regulated industries, where compliance is mandatory, it’s not so clear cut. While there are clearly benefits to be had from implementing AI solutions, for the moment they should be regarded as complementary technologies, protecting both consumers and businesses by adding an extra guarantee of compliant processes. While knowledge and understanding of the intricacies of AI are still growing, it would be a mistake to implement AI technologies across the board, particularly when a well-considered human response to the nuances of customer behaviours and reactions plays such an important role in staying compliant. That’s not to say that we should be frightened of AI, and nor should the regulators. As the technology develops, so will our wider understanding. It’s up to businesses and regulators alike to do better, being totally transparent about the uses of AI and putting in place a robust, reliable framework to monitor the ongoing behaviour of their AI systems.
