BANKING COMPLIANCE AND BIG DATA: IS IT WORKING?

By Sean O’Dowd, Global Financial Services Director at MapR

Over the past few years, banks and financial services institutions have increasingly looked to the opportunities brought by big data. There is no shortage of frameworks and data lake reference architectures describing how big data can help with regulations and compliance.

But how much is actually getting done and is it working?


Big data for compliance

Failures in customer reporting are the single most expensive compliance issue, costing the world’s top investment banks $43bn in fines over the past seven years. These fines, coupled with the cost of compliance, are drastically impacting banks – and will continue to do so, especially while the average efficiency ratios of major banks remain far too high, at well over 50 per cent in many cases.

And while much has been done to decrease costs, there are still areas where new data platforms can improve speed and automation, while driving operational costs down. Beyond costs, when correctly applied, big data can help banks reduce regulatory compliance risks and avoid potential problems in real time.

But how? Let’s just look at some basics that modern data platforms should be able to handle for compliance purposes:

  • Speed: institutional banking clients – as well as auditors – are demanding to see risk and possible exposure scenarios at increasingly regular intervals. Real-time analysis is critical here, coupled with the scalability to handle high volumes of data in production, so big data platforms must be able to move large amounts of data fast.
  • Archive: regulators simply want financial services institutions to hold more data, and for longer. Banks must also use analytics to understand the integrity of that data – including voice data. And remember, if half a phone call is missing come audit time, the firm does not fully comply.
  • Complexity: the increasingly complex, global assets being traded and held in portfolios contain not just more data – such as payments, fixed income, and derivatives – but also a greater amount of unstructured data, which big data platforms can parse and structure.
  • Cost: with such high legacy costs and more personnel devoted to regulatory and financial reconciliation, firms have to comply at a lower total cost of ownership (TCO). These regulations and the market environment have greatly hampered banks’ ability to simply throw money at the problem. A high degree of automation can be brought to bear here, with simple machine learning rules as one example.
  • Agility: as regulations continue to morph, so do markets, and the data platform must be able to adapt more rapidly.
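To make the automation point concrete, here is a minimal sketch of the kind of simple rules that can pre-screen activity before anyone intervenes manually. The `Transaction` fields, rule names, and the `LARGE_TX_LIMIT` threshold are all hypothetical illustrations, not any specific bank’s or vendor’s policy:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    tx_id: str
    amount: float
    counterparty: str
    has_voice_record: bool  # e.g. the matching call recording is archived in full

# Hypothetical threshold -- real limits come from the firm's compliance policy.
LARGE_TX_LIMIT = 10_000.0

def compliance_flags(tx: Transaction) -> list[str]:
    """Return the names of any rules a transaction trips."""
    flags = []
    if tx.amount >= LARGE_TX_LIMIT:
        flags.append("large-transaction")
    if not tx.has_voice_record:
        flags.append("incomplete-archive")  # half a call missing = non-compliant
    return flags

# Clean transactions pass straight through; flagged ones are queued for
# review automatically rather than waiting on manual spot checks.
txs = [
    Transaction("t1", 2_500.0, "acme", True),
    Transaction("t2", 50_000.0, "globex", False),
]
review_queue = {t.tx_id: compliance_flags(t) for t in txs if compliance_flags(t)}
print(review_queue)  # {'t2': ['large-transaction', 'incomplete-archive']}
```

In practice these thresholds would be one input among many to a learned model, but even plain rules like this reduce the manual workload the Cost point describes.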

With these requirements of big data platforms – and many more besides – it is important to use a platform that can easily ingest data from both new and legacy sources.

And as banks look to maximise the value of their data by feeding it into a variety of applications and analytical models, they should draw on both internal and external data sources, and consider those models from a regulator’s perspective. Too often, the fines hitting these banks stem from failures in investor protection, transparency, and transaction accuracy – including misleading communications or fraudulent agent activity.

Limitations of current deployments

In many organisations, big data is not yet being fully utilised for compliance and regulation. Greater trust in data quality, lineage, and metadata management solutions is needed to reap the potential of big data platforms.
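That trust in lineage can start at ingest time. The sketch below shows one minimal, hypothetical approach: stamping each record with its source, an ingest timestamp, and a checksum, so a later audit can trace where the data came from and confirm it has not changed. The field names and `legacy-oms` source label are illustrative only:

```python
import hashlib
import json
from datetime import datetime, timezone

def ingest_with_lineage(record: dict, source: str) -> dict:
    """Attach minimal lineage metadata to a record at ingest time."""
    payload = json.dumps(record, sort_keys=True).encode()
    return {
        "data": record,
        "lineage": {
            "source": source,  # which upstream system supplied the record
            "ingested_at": datetime.now(timezone.utc).isoformat(),
            "checksum": hashlib.sha256(payload).hexdigest(),  # tamper-evidence
        },
    }

wrapped = ingest_with_lineage({"trade_id": "T-1", "amount": 100.0}, "legacy-oms")

# An auditor can recompute the checksum later to verify the record is intact.
recomputed = hashlib.sha256(
    json.dumps(wrapped["data"], sort_keys=True).encode()
).hexdigest()
assert recomputed == wrapped["lineage"]["checksum"]
```

Dedicated metadata management products do far more than this, but the principle – capture provenance at the moment of ingest, not after the fact – is the same.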

Moving from big data 1.0 to 2.0 solutions that are hardened for the enterprise is one way businesses can achieve this. While many early-to-market providers and projects were good enough for simple offload and testing, real business applications demand different standards.

This is why, for example, you see adoption of Apache Spark rapidly pulling ahead of MapReduce. The market has realised the value, and since those early solutions, different projects and products have come to market to meet the enterprise challenge.

But beyond the technology deployed, one great challenge is that big data is not currently widely being used for broad-based, multi-asset compliance across multiple portfolios. Instead, it is happening in silos – either on an asset base or line of business level. And, on the occasions where it is being used across multiple assets, it is rarely deployed for reporting, which ultimately is what’s necessary to enable compliance.

Where is it going wrong?

This siloed mentality largely stems from process and political hurdles. There is great hesitancy around the move from the testing to the operational phase on a large scale for more critical applications. And this is where utility-grade solutions are paramount.

But instead of moving towards the heavier computation and real-time capabilities of these enterprise-grade data platforms, we’re continuing to see banks fall short in using such platforms to enable greater usage across assets, and to manage finance and risk.

With the information stored in silos and not deployed for larger analysis and reporting, banks remain forced to rely on manual intervention for timely reporting. And this ultimately leads to greater risk and exposure to human error.

While there is still considerable opportunity to implement new platforms that help firms keep pace with compliance demands, many banks are already making great strides here. And with new platform solutions now offering improved compliance data management, stress-testing analytics, and historical record keeping, the incentive to use big data is greater than ever.

For banks to successfully digitise, however, they need investment to clear risk hurdles and better coordination across all lines of business – even those that are not client-facing. Ultimately, this will enable them to deploy the big data being collected more effectively to reduce regulatory compliance risks.

The resistance is not unlike what was seen with cloud solutions as their adoption moved through the maturity curve. Senior data and IT executives need confidence that these platforms can deliver. And rightly so, as their careers and the resiliency of the banks’ systems are riding on them.

Some of the keys to moving forward:

  • Banks need to adopt open source frameworks that allow more rapid adoption and testing of new solutions.
  • Look for providers whose new big data platforms are being used for client-facing applications, which are some of the most demanding and offer operational proof points.
  • Data platforms should look to resolve legacy and silo pain points, not repeat them.
  • There are vastly improved data governance solutions for data lakes that can adopt data standards (ontologies), leverage existing data models, rapidly bring structure to new incoming sources, and provide the provenance required for compliance.
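The last point – mapping incoming sources onto a shared ontology while preserving provenance – can be sketched very simply. Everything here is hypothetical: the legacy field names, the `ONTOLOGY_MAP`, and the `legacy-fi` feed label stand in for what a governed registry would supply in a real deployment:

```python
# Hypothetical mapping from a legacy feed's field names onto shared
# ontology terms; real deployments would load this from a governed registry.
ONTOLOGY_MAP = {
    "instr": "instrument_id",
    "qty": "quantity",
    "px": "price",
}

def to_standard(record: dict, feed: str) -> dict:
    """Rename legacy fields to ontology terms, keeping per-field provenance."""
    out = {}
    for legacy_field, value in record.items():
        standard = ONTOLOGY_MAP.get(legacy_field)
        if standard is None:
            continue  # unmapped fields are dropped (or quarantined) here
        out[standard] = {"value": value, "provenance": f"{feed}.{legacy_field}"}
    return out

standardised = to_standard({"instr": "XS123", "qty": 10, "px": 99.5}, "legacy-fi")
# Each standardised value records exactly which feed and field it came from,
# which is the provenance trail compliance reporting depends on.
```

Recording provenance per field, rather than per file, is what lets reporting span silos: two feeds can disagree and an auditor can still see which source said what.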

Ultimately, it is a matter of gaining confidence that the new breed of data platforms is prepared to manage the workload. It’s also a matter of working with these providers to dial in the requirements and criteria that will be needed, and to fit those into the existing infrastructure.
