Jane Tweddle, financial services industry principal at SAP UKI, looks at the complexity of big data for insurers.
According to IDC, by 2020 there will be a 50-fold increase in data compared to 2010. This data is coming from more and more sources in a variety of formats, and, with customers' ever-increasing demands, the speed at which businesses need to react to it is changing.
For the insurance industry in particular, which is governed by regulation and where fraud and growing competition remain constant concerns, the pressure is on insurers to make use of all the data available to them.
So what is big data? At SAP we use the three V's to define it:
Volume – The sheer amount of information is exploding
Velocity – The speed with which insurers are expected to react to the data
Variety – Big data comes in a variety of formats – structured and unstructured
There is also a fourth V, Variability: over time the type of data varies, and new data types and fields need to be stored.
With that in mind, to compete effectively the intelligent insurer will collect and harness all available insight in order to differentiate itself in an increasingly competitive, low-margin market, providing more customised and flexible products and pricing as well as exceptional customer service.
By implementing a real-time data platform and unwiring the business – i.e. making information available to users not only in real time but also across any device – these intelligent insurers will be able to:
- Better understand customers and provide improved services based on individual needs
- Better manage risk and address compliance and regulatory disclosure requirements
- Generate a state of the art experience for all customers and internal users
- Run their business better by being able to make the right decision at the right time
- Reduce time to market for new products and services
However, challenges to the effective use of big data are well known and reflect many of the challenges insurers have in using data generally.
Insurers often store data simply because they can, on the basis that it might one day come in handy. At departmental and divisional level, data is held in silos as each part of the organisation aims to be self-sufficient rather than share data across lines of business. Cheaper data storage arguably exacerbates this behaviour; ironically, reduced IT cost becomes part of the problem as well as part of the solution.
Apart from this siloed approach, data is coming from many different sources and in many different formats.
Common semantic layer
Being able to analyse big data through a common semantic layer – although important – is not easy and, as such, poses a challenge to insurers. Without this single source of data across the business, it is difficult to gain insight and have the right information at the right time for effective decision making, and ultimately data cannot be analysed accurately in real time.
Reduce IT stack and access time
Reducing both the IT stack and the time taken to access data is challenging for many insurers, but necessary in order to provide more up-to-date data that can be acted on in real time. For insurers operating in a high-risk industry, it is important to be able to act on data as it looks now, rather than how it looked last week or last month. Acting on current data also gives insurers a platform for obtaining insights and making predictions about market trends, potential risks and profitability.
Insurers that struggle to get to grips with data are likely to become uncompetitive. Getting control of data across the organisation is no longer a nice-to-have: regulation is very much driving this agenda through the integration of risk and finance data, while customers, whose expectations keep rising, can make or break a brand through their use of social media.
The highest potential return for big data in insurance
Establishing a clear approach to harnessing and using data in real time is crucial. Significant benefits can be gained, especially in the areas of fraud, risk management, financial analysis, fast and actionable management information (MI), and cross-selling.
The technology is available today for insurers to manage and use big data to enable the operational change that is required to embrace the opportunities for better differentiation, business growth and reduced costs. Only then will insurers be in a position to achieve profitability in what are still difficult economic times.
Changing the Game
Germany’s largest health insurer, AOK, is one such example of an insurer putting big data into action to provide faster analysis, decision making and predictive insights. AOK implemented SAP’s in-memory HANA technology to conduct extensive evaluations of medical mass data, so that it can identify health risks early on and test different prevention models.
Implementation of SAP's HANA technology has provided a fast analysis tool to support AOK's requirements – not only allowing multiple sources of data with complex joins to be brought together, but also enabling analysis of this data in 2.5 minutes, compared to 150 hours using traditional data warehouse and analysis technologies. While the faster analysis times and the ability to run complex predictive models are impressive, more important is AOK's ability to achieve its objective of business transformation. AOK transitioned from a claims management company to one that provides preventive healthcare plans and treatments, resulting in healthier, more satisfied customers and considerable cost reductions.
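To make the idea concrete, the kind of multi-source, join-then-aggregate analysis described above can be sketched in a few lines. This is a minimal, hypothetical illustration only – the member IDs, costs and treatment names are invented, and pandas stands in here for an in-memory analysis engine; it is not AOK's actual data model or a HANA implementation.

```python
import pandas as pd

# Hypothetical sample data standing in for two separate sources:
# a claims feed and a treatment-records feed, linked by member_id.
claims = pd.DataFrame({
    "member_id": [1, 1, 2, 3],
    "claim_cost": [200.0, 150.0, 900.0, 50.0],
})
treatments = pd.DataFrame({
    "member_id": [1, 2, 2, 3],
    "treatment": ["physio", "surgery", "physio", "checkup"],
})

# Join the two sources in memory, then aggregate cost per treatment –
# the same join-and-analyse pattern the article describes, in miniature.
merged = claims.merge(treatments, on="member_id", how="inner")
summary = (merged.groupby("treatment")["claim_cost"]
                 .sum()
                 .sort_values(ascending=False))
print(summary)
```

Holding both sources in memory and joining them on demand, rather than pre-building aggregates in a warehouse, is what allows this style of analysis to answer new questions without a lengthy batch cycle.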
So we have a definition of big data and we know it’s definitely here, but are insurers still struggling with small data?
For some this may well be the case, but others, like AOK, are using innovative technology to leapfrog other financial services institutions, moving rapidly from struggling with small data to benefiting significantly from getting to grips with big data.
2014 is the time for the industry as a whole to follow suit.