By Douglas Greenwell, head of commercial strategy, Duco
Throughout 2020, businesses had to pivot quickly to new ways of working. With the physical location of employees shifting from offices to homes, enterprises were forced to assess how information was stored, managed and distributed. For the financial services sector in particular this was challenging, because its data is confidential, highly regulated and subject to tight timescales.
The unforeseen and rapidly unfolding events in the wake of the Covid-19 pandemic meant that digitisation was adopted on a scale never seen before, with operational resilience high on the list of firms’ priorities. While digitisation was a welcome shift in some respects, for example supporting the mass shift to remote and flexible working, the move also showed where the cracks lay. It exposed weaknesses in existing manual processes and legacy data management systems used by the majority of financial services firms.
One area in particular that has been forced under the spotlight by the recent shift to digitisation is data integrity. Caused in part by outdated technology and manual practices, poor-quality data is a serious issue. But why is this the case? And why does it affect the financial sector so severely?
The crucial role of data in finance
Financial services and data are inextricably linked. Data pervades every facet of the industry, from its role in staple tasks such as reconciling invoices with payments to more complex tasks such as risk analysis and long-term business projections.
Yet, despite this crucial reliance on data, there are still significant challenges within financial circles when it comes to measuring and prioritising its integrity and accuracy. Some of this is due to legacy systems, which can be difficult and costly to replace or update. Another interlinked issue is the reliance on manual processes. Accounting teams typically spend time waiting for complex calculations to run across tens of worksheets, often using inherited processes built over years by multiple owners.
A survey from the Financial Technologies Forum (FTF) and Duco found that nearly a third (28%) of financial services organisations say mistakes from manual processes are their biggest data reconciliation pain point. These error-riddled processes affect the integrity of the data and consequently the entire banking ecosystem.
The scale of the issue
As such, data quality is far from a trivial concern. A 2018 Gartner study revealed that ‘bad’ data cost the average firm $15 million in 2017. This was supported by additional findings from MIT Sloan, showing that data integrity failings were costing the typical business between 15–25% of its revenue.
One of the issues affecting the finance sector when it comes to data is the sheer multitude of sources and datasets involved in every process. In many cases, where the data is complex or in multiple formats, the act of comparing, reconciling and finalising these datasets is carried out manually, meaning human error creeps in. Manual reconciliation also means longer timeframes, which in turn reduce the overall productivity and efficiency of the departments involved.
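To see why automating this step removes a whole class of human error, consider a minimal sketch of automated reconciliation between two transaction datasets. The field names, record values and break categories here are hypothetical, purely for illustration; real reconciliation platforms handle far messier formats and matching rules.

```python
# Minimal sketch: reconcile an internal ledger against an external statement.
# Records are matched by transaction ID; amount mismatches and one-sided
# records are reported as "breaks". All names and data are hypothetical.

def reconcile(internal, external):
    """Return a list of (break_type, transaction_id) tuples."""
    ext_by_id = {rec["id"]: rec for rec in external}
    breaks = []
    for rec in internal:
        match = ext_by_id.pop(rec["id"], None)
        if match is None:
            breaks.append(("missing_in_external", rec["id"]))
        elif match["amount"] != rec["amount"]:
            breaks.append(("amount_mismatch", rec["id"]))
    # Anything left in ext_by_id was never matched from the internal side.
    breaks.extend(("missing_in_internal", rec["id"]) for rec in ext_by_id.values())
    return breaks

internal = [{"id": "T1", "amount": 100.00}, {"id": "T2", "amount": 250.50}]
external = [{"id": "T1", "amount": 100.00}, {"id": "T2", "amount": 250.00},
            {"id": "T3", "amount": 75.00}]

print(reconcile(internal, external))
# → [('amount_mismatch', 'T2'), ('missing_in_internal', 'T3')]
```

Even this toy version makes the point: the matching logic runs identically every time, so errors surface as explicit, reviewable breaks rather than silently slipping through a spreadsheet.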
Why is good data integrity particularly crucial in financial services?
There are a number of reasons why data integrity is integral to the success of FS firms.
Firstly, the data sets in this sector are vast. With billions of transactions being made every day, data is produced for each one, all of which is vital to the running of each organisation and the FS system as a whole.
Consequently, any errors in processing or transferring this data can have a significant and material impact on the profit and loss levels of each organisation, as well as their reputation.
Finally, regulation in this sector is very strict. While practically every industry in the world deals with data, very few verticals are held to the same regulatory standards around how that data is collected, stored and treated as the financial services sector.
Of course, now that the UK has left the EU, there are new questions about how to apply the same degree of scrutiny to UK data practices, and about how to guarantee the safe and lawful transfer of data between the UK and Europe. Essentially, post-Brexit, any such transfers are now technically cross-border transactions and must be subject to additional stringent safeguarding measures.
Low risk starts with high-quality data
Yet with digitisation trends only set to accelerate, firms are beginning to address this, starting with the automation of manual processes.
Two thirds (66%) of financial services organisations expect new solutions that automate manual processes to be one of their top three greatest investment areas in the next three years — while 68% expect to have fully automated their reconciliation function within the next five years.
The next step is the introduction of what we call Intelligent Data Automation, enabling companies to approach data management in a holistic way. This means using an ecosystem of no-code, cloud-based tools to automate and control all financial, operational and commercial data across an organisation. With its use of fully customisable, low-cost solutions that can sit alongside or on top of legacy systems, an IDA approach is the key to not only successfully managing data, but to unlocking the full benefits of that data for the business.
This approach gives firms agility and speed of reporting, reducing manual labour and its associated costs, while also letting individual users see and fix data issues more easily. The path to progress is clear for businesses, with automation, cloud-based solutions and the removal of manual labour all key to enabling them to work faster and more efficiently, and still survive the scrutiny of regulation.
Now is the time to move proactively and grab these tools and solutions with both hands: as data increases in quantity and complexity, automation becomes essential.