The Role of No-Code in Ensuring Software and Quality Compliance

By Sune Engsig, vice president of product development, Leapwork
Software plays a vital role in almost every aspect of our daily lives. It is therefore unsurprising that a growing number of high-profile cases have demonstrated the serious impact of software outages, from damaged reputations to significant financial loss.
In June 2021, Fastly ‘broke the internet’ when a valid software configuration change by one of its customers triggered a previously undiscovered bug introduced during a May software deployment.
In October 2021, Meta saw Facebook, WhatsApp and Instagram down for seven hours. The outage cost Meta an estimated $100 million in lost online advertising sales, while its shares fell 5 per cent, wiping about $40 billion from its market value.
While the potential impact of software failure is clear, recent research from Leapwork suggests that business leaders continue to bury their heads in the sand about the consequences for themselves, their organisations and their customers. More than two-thirds (71%) of UK CEOs say they are concerned about losing their jobs in the wake of a software failure. Yet a similar proportion (70%) of UK testers in banking and financial services firms think it is acceptable to release software that hasn’t been properly tested, so long as it is patch tested later. This is despite consumers’ increasing reliance on banking apps and the huge implications of software failure in such a highly regulated sector.
The risks of inadequate software testing
Identifying, managing and resolving software quality issues is much easier during pre-launch testing, especially when testing runs concurrently with development. Bug fixes become progressively more difficult and expensive the further along the development process a defect travels and the more deeply it becomes embedded; the cost peaks once the bug makes its way into production. Additional costs can include longer working hours, lost productivity and lost revenue during downtime, as well as the organisation-wide effort of managing the fallout from the outage itself.
Many defects remain undetected until it is far too late because organisations cannot achieve full test coverage with their existing (manual) testing. Inadequate pre-launch testing then forces teams to scramble post-launch to pick up the pieces of faulty software applications, with renewed urgency and the added pressure of managing the potential loss of revenue and brand damage the defect has caused.
When faulty software reaches end users, dissatisfied customers become a problem with far further-reaching effects, as users pass their negative experiences on to others. That negative feedback can also deter potential new customers from ever trying the software in the first place. Not only is a customer less likely to use a product after a negative experience, they are also more likely to view the whole brand in a poor light from then on, regardless of how positively they rate the brand’s other products. But the biggest problem of all, and one that banks and other financial institutions face every day, is the risk of breaching regulatory compliance rules.
Why software isn’t tested properly
Changing customer behaviours in the financial services sector, together with increased competition from digital-native fintech startups, have led many organisations to invest heavily in digital transformation in recent years. With companies under more pressure than ever to respond to market demands and user experience trends through increasingly frequent software releases, the sheer volume of software needing testing has skyrocketed, placing a further burden on resources already stretched to breaking point.
When CEOs were asked why their software wasn’t tested properly before release, 40% cited ‘reliance on manual testing’ as the main reason. Underinvestment in automation and a lack of time also play a part. Of those whose company uses or develops in-house software, nearly four in ten (39%) testers say ‘underinvestment in test automation’ is the main reason sufficient testing does not occur, and only half of testers (50%) say they use some element of automation (an automation tool, or a combination of manual and automated testing). Just over a third of testers (35%) cite ‘lack of time’, and 28% say they are ‘unable to test all software due to increased frequency of development’.
Relying on skilled developers to implement code-heavy automation solutions only creates more bottlenecks. Just over a third of CEOs (34%) and of testers (36%) cite a lack of available skilled staff as a main cause of poorly tested software, indicating that the digital skills gap remains a big issue for companies that depend on developers to manage the test automation effort.
The solution lies in automating the quality effort
The automation of everyday business processes has helped organisations overcome a plethora of challenges, increasing efficiency, improving ROI and reducing errors. The same is true of software testing. However, as more companies transition from manual testing to automation to meet the testing requirements of increasingly complex software, they face yet another hurdle. A reliance on code-heavy, or even low-code, solutions that require at least a basic, and often an in-depth, understanding of code leaves them dependent on developers and other professionals skilled in coding. Given the massive shortage of skilled developers, this makes it difficult for firms to scale automation. Combined with the pressure of meeting digital transformation goals, companies quickly find automation in an ever-changing QA context a bigger burden than the manual approach.
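To see why even a ‘simple’ automated check demands developer skills, consider the kind of coded browser test that no-code tools set out to replace. The sketch below uses Python with Selenium; the URL, element IDs and credentials are purely illustrative assumptions, and this is not a representation of any particular vendor’s tooling. Every locator, wait condition and cleanup step here is code that someone must write, review and maintain as the application changes.

# A minimal sketch of a coded UI login test, assuming Python with
# Selenium installed. The page URL and element IDs are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    # Open the (hypothetical) login page of the application under test.
    driver.get("https://example-bank.test/login")

    # Locate the form fields by element ID and submit test credentials.
    driver.find_element(By.ID, "username").send_keys("test-user")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "submit").click()

    # Wait up to 10 seconds for the dashboard element to appear,
    # which we treat as confirmation of a successful login.
    WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.ID, "dashboard"))
    )
    print("Login test passed")
finally:
    # Always close the browser, even if an assertion fails.
    driver.quit()

Multiply a script like this across hundreds of user journeys and every UI change, and the maintenance burden on scarce developer time becomes clear.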
While code-based and low-code tools require users to do at least some coding, no-code tools democratise automation by giving non-technical users the opportunity to contribute meaningfully to the automation effort. These individuals, such as QA experts and testers, have an in-depth understanding of the business functions and requirements of the software applications within their organisation, but are not trained to code. With a visual, no-code approach to test automation, everyday business users and subject matter experts can quickly and easily build, run and maintain automated tests for the software their business depends on.
Conclusion
As recent high-profile outages have shown, a failure to maintain software quality can result in major financial and reputational damage. Within the financial services sector, stringent regulatory and compliance measures raise the stakes even higher: there is the risk of major fines, and customers are unlikely to give their banking and insurance providers a second chance if something goes awry. No-code test automation has an important role to play in improving quality, and thus significantly mitigating risk, by making it easier for banks and other financial organisations to implement and scale their testing efforts quickly. Financial services firms that don’t consider no-code test automation not only risk falling behind the competition on speed to market, but also leave themselves open to significant and lasting damage from the increased likelihood of a major outage caused by human error.
