By Wael Elrifai, Director of Enterprise Solutions, Pentaho
These days, stepping forward to lead any ambitious data transformation in a bank is a bit like being a Premier League football manager. You can be doing all the right things for the long-term health of the club yet still regret those moments you decided to glance at your Twitter feed. Footie fans and banking professionals alike are impatient for fast results and naturally want to avoid costly, high-profile defeats. However, when you are working to heal deep-rooted systemic problems, quick miracle cures are in short supply.
Readers working in banking IT know all too well the problems of integrating, prepping and governing siloed data, particularly when the data sits in multiple locations across the organisation (and often the world) and is stored on different platforms – common in M&A situations. When you start talking big data and Hadoop, that ‘single source of truth’ starts to seem as mythical a possibility as, say, Leicester City winning the League.
And yet we’re truly at the tipping point where the risks of inaction outweigh the risks of action. Headlines about customer rage, massive non-compliance penalties and high-profile security breaches issue forth from news and social media channels almost every day. Even more concerning for established banks is the slew of nimble, young “challenger banks” unencumbered by legacy systems. Newcomer Atom Bank is a notable example of one investing heavily up front in the latest mobile, biometrics, apps and machine learning technology, in order to really stand out competitively. These banks will appeal to a new generation of consumers living in very different circumstances to their parents and seeking the kinds of experiences they get with services like Uber, Airbnb and even Tinder.
Established banks don’t continue to silo off their data just to make life difficult for themselves and customers. It comes down to the insurmountable task of defining metadata structures when you have hundreds or even thousands of data sources in various formats. To solve this manually, fast enough to be useful, and in compliance would require so many people, strict processes and quality checks as to be essentially impossible. You say you want to blend and analyse those different data sources? As my friends in Brooklyn like to say: “Fuggeddaboutit!”
Game-changing tools and approach
So now for some good news. After years of prioritising the relatively easier task of making data look pretty, analytics and integration vendors have started releasing tools that automate and de-risk the hard, time and resource-intensive (and mind-numbingly boring) processes associated with integrating, prepping and governing data.
The real game-changer falls in an area called data onboarding. Some new tools reduce the complexities of filling your data lake using an approach we call ‘metadata injection’. They allow you to safely automate hundreds of data onboarding and preparation processes using just a few transformations by scanning your data sources, analysing the data structures and automatically building your data flows on-the-fly. When your data sources change, the metadata structure automatically accommodates the changes. This process gives you the foundation you need to design safe, reliable, compliant data blends using different data sources, analyse that data and really start to gain meaningful, profitable insights.
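To make the metadata injection idea above concrete, here is a minimal sketch in plain Python. It is not Pentaho's implementation – the function names and the toy CSV source are assumptions for illustration – but it shows the core pattern: scan a source to infer its structure, then generate the ingest flow from that metadata rather than hand-coding one flow per source.

```python
import csv
import io

def scan_metadata(sample_text):
    """Scan a sample of the source and infer field names and types."""
    reader = csv.reader(io.StringIO(sample_text))
    header = next(reader)
    first_row = next(reader)
    # Crude type inference for the sketch: integer-looking values become ints.
    types = ["int" if v.lstrip("-").isdigit() else "str" for v in first_row]
    return list(zip(header, types))

def build_flow(metadata):
    """Generate an ingest function on-the-fly from the scanned metadata."""
    def ingest(row):
        record = {}
        for (name, ftype), value in zip(metadata, row):
            record[name] = int(value) if ftype == "int" else value
        return record
    return ingest

# One generic template handles any source shape; if the source changes,
# re-scanning the metadata rebuilds the flow with no new hand-written code.
sample = "account_id,balance,branch\n1001,2500,Leeds\n1002,-40,York\n"
metadata = scan_metadata(sample)
flow = build_flow(metadata)
rows = list(csv.reader(io.StringIO(sample)))[1:]
records = [flow(r) for r in rows]
```

The design point is that the transformation logic lives in one template and the per-source specifics live in metadata, which is what lets a handful of transformations cover hundreds of onboarding processes.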
Data onboarding makes Hadoop less hard…
Let’s not mince words: working with Hadoop is hard. Gartner predicts, “Through 2018, 70% of Hadoop deployments will fail to meet cost savings and revenue generation objectives due to skills and integration challenges.” Moving data into your lake in a simple, automated way is particularly hard, and it is here that the new onboarding tools really come into their own. Some teams use Python or another language to code their way through these processes. However, when you have thousands of disparate data sources, coding scripts for each source is, again, prohibitively impractical and expensive. Data onboarding tools let you manage a changing array of data sources, establishing repeatable processes at scale and maintaining control and governance along the way.
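The scaling argument above can be sketched in a few lines of Python. The source catalogue, field names and lineage tag here are hypothetical, but the shape is the point: one governed driver loop walks a catalogue of source descriptors, so adding a source means adding a descriptor rather than writing and maintaining yet another bespoke script.

```python
# Hypothetical catalogue of source descriptors; in practice this would be
# generated by scanning the sources, not maintained by hand.
SOURCE_CATALOGUE = [
    {"name": "uk_accounts", "format": "csv",  "required": ["account_id"]},
    {"name": "de_accounts", "format": "json", "required": ["iban"]},
    {"name": "card_swipes", "format": "csv",  "required": ["card_id"]},
]

def onboard(source, rows):
    """Apply the same governed steps to any source: validate, tag lineage."""
    loaded, rejected = [], []
    for row in rows:
        if all(field in row for field in source["required"]):
            row["_lineage"] = source["name"]  # governance: record data origin
            loaded.append(row)
        else:
            rejected.append(row)  # quarantined for review, not silently dropped
    return loaded, rejected

# One loop covers every source in the catalogue, however many there are.
demo_rows = {"uk_accounts": [{"account_id": 1}, {"sort_code": "12-34-56"}]}
for source in SOURCE_CATALOGUE:
    loaded, rejected = onboard(source, demo_rows.get(source["name"], []))
```

With thousands of sources, the per-script approach multiplies code to write, test and govern; the catalogue approach keeps one auditable process and moves the variation into data.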
…and IoT possible
Free from the shackles of having to manually manage metadata, you can dare to think about moving into the terrain of IoT. Although banking may not seem as obvious as, say, automotive manufacturing when it comes to IoT, there are actually quite a few compelling use cases. Our customer Edo, for example, helps banks use IoT technology to offer location-based rewards and discounts in real time. It uses geographical data to find and activate offers and deals when customers swipe their debit or credit cards at nearby merchants. This helps Edo’s bank customers differentiate by being able to offer highly personalised, location-based services. A recent white paper from Deloitte University Press identified several more applications for IoT in banking from insurance underwriting to claims assessment to improving trading systems.
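A location-based rewards flow of the kind described above can be sketched simply. This is not Edo's system – the merchant list, radius and function names are assumptions for illustration – but it shows the mechanic: when a card swipe arrives with a location, activate only the offers from merchants within range.

```python
import math

# Hypothetical offer catalogue keyed by merchant location.
OFFERS = [
    {"merchant": "CoffeeCo", "lat": 51.5074, "lon": -0.1278, "deal": "10% off"},
    {"merchant": "BookNook", "lat": 53.4808, "lon": -2.2426, "deal": "2-for-1"},
]

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points via the haversine formula."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def offers_near(swipe_lat, swipe_lon, radius_km=1.0):
    """Activate offers from merchants within radius_km of the card swipe."""
    return [o for o in OFFERS
            if distance_km(swipe_lat, swipe_lon, o["lat"], o["lon"]) <= radius_km]

# A swipe in central London activates only the nearby CoffeeCo deal.
active = offers_near(51.5080, -0.1280)
```

In production the matching would run against streaming swipe events and a much larger merchant index, but the personalisation logic reduces to exactly this geographic join.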
We currently work with a number of big, established banks that not only use data onboarding tools but have been actively involved in defining their functionality. These banks are on track to save billions, improve compliance, detect fraud and offer new, competitive services. All this means that IT leaders can step forward and run ambitious data initiatives confident that they have the tools to deliver the quick wins business managers and customers want, while putting in place the foundation for long-term competitive strength. This just might have your IT team lifting the trophy!