By Tony Tarquini, European Insurance Director, Pegasystems
When the new FCA chairman Charles Randell voiced his opinion about big data and artificial intelligence (AI) in July this year, it certainly upset the apple cart for a number of individuals across the insurance sector.
Mr Randell highlighted his concerns and proposed future regulations for insurers and other financial services firms using the aforementioned technologies.
As digital transformation takes hold of every industry, the exploration of big data and AI, and their integration into insurance services and products, has accelerated. Consequently, players in the sector should be prepared to reconsider the AI basics and clarify the ground rules to ensure their customers are not left exposed to the mismanagement of personal data or a security breach. It is important that insurance companies take this challenge seriously so they themselves can avoid another scandal such as that experienced with Cambridge Analytica.
There is widespread agreement that since the advent of AI and big data, the insurance landscape has seen a degree of change. By having the ability to turn data into insights, brokers can now grow profits considerably. Right now, the latest technologies are permitting insurers to delve into their past and ongoing data records, as well as analyse sources in the public domain, for example social media platforms, so they can build up a more complete picture of their customers’ profiles and offer the most appropriate products and services to them.
Headlines splashed across newspapers and magazines citing the dangers of AI and machine learning in the financial sector should be taken with a pinch of salt. In fact, businesses should educate their audiences about how important and beneficial it is for financial services companies to use data in a positive way, for example using telematics to reduce insurance premiums. Yet, implementing AI is not straightforward. Throughout its incorporation into systems there are a few critical factors that require careful consideration to ensure smooth processes. Infrastructure and relevant skill sets are crucial success factors for AI deployment, as AI requires unique skills and computing environments. But, first and foremost, data quality should be the highest priority because the more accurate the data, the better AI is able to perform. Although small data sets can be a good starting point and are capable of delivering quite astounding results, in general, the larger the data set, the better the algorithms function.
The most important requirement for insurers implementing AI is that they employ a particular type of the technology, called transparent AI. This type of AI can explain its outputs and results, as well as how it arrived at them. Bearing in mind that financial services is a highly regulated market, insurance brokers have to be able to substantiate and describe how they produced a particular outcome. When making predictions or decisions, employees must be able to explain their reasoning, to guarantee it is correct and complies with regulatory requirements. Regrettably, some companies are ignoring the warning signs and have started to rely on the opposite type of AI, namely opaque AI, which makes being held to account an almost impossible task.
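To make the distinction concrete, here is a minimal sketch of what "transparent" decisioning means in practice. The weights, factor names, and scoring logic below are entirely hypothetical (not real actuarial data or any vendor's method); the point is simply that the model exposes each factor's contribution, so a broker can explain exactly why a quote came out the way it did.

```python
# Hypothetical, hand-weighted risk score for a motor policy.
# A transparent model returns not just a score, but a per-factor
# breakdown that a broker can show to a customer or a regulator.

RISK_WEIGHTS = {                      # illustrative weights only
    "driver_age_under_25": 0.40,
    "previous_claims": 0.25,
    "annual_mileage_high": 0.15,
}

def explain_risk_score(applicant: dict) -> tuple[float, dict]:
    """Return the total risk score and the contribution of each factor."""
    contributions = {
        factor: weight * applicant.get(factor, 0)
        for factor, weight in RISK_WEIGHTS.items()
    }
    return sum(contributions.values()), contributions

score, breakdown = explain_risk_score(
    {"driver_age_under_25": 1, "previous_claims": 2, "annual_mileage_high": 0}
)
print(f"total risk score: {score:.2f}")
for factor, value in breakdown.items():
    print(f"  {factor}: {value:+.2f}")
```

An opaque model would return only the final score; here, every pound of premium can be traced to a named factor, which is what makes the outcome defensible to a regulator.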
Nevertheless, the future of the sector undeniably lies in the use of big data, as long as it is used ethically, and its results can be used to improve relationships with customers. Some companies have accepted responsibility and adopted transparent AI with open arms. Yet, unavoidably, there are others that scrimp on costs and cut corners, or simply lack the relevant knowledge or skills for using this technology. We have seen the disastrous results of AI not going to plan, and these should serve as a warning to insurers and spur them to act.
The Microsoft Tay chatbot that rapidly turned into a racist, sex-crazed neo-Nazi, incorrect orders placed through Amazon Alexa, and Google Home devices developing abusive behaviour all demonstrate the possible dangers of opaque, closed-loop AI technology gone wrong, and raise the question of the ethics of its use. The insurance industry shouldn't kid itself that it will avoid succumbing to a similar fate. Therefore, regulators need to outline the ethical and moral standards they would require from AI roll-outs in insurance, for example, in the refusal of health or motor insurance policies. From both an ethical and regulatory viewpoint, it is crucial that employees understand what AI is and how it operates within the context of the insurance industry, to avoid any regulatory mishaps.
Finally, greater insight into and control of decision-making processes can be achieved through the use of AI technologies, but it is imperative that any AI that is deployed can be thoroughly adapted and inspected. This will help mitigate the risk of AI going rogue and ensure that brokers can keep improving their risk algorithms which, in turn, will result in cheaper and more tailored products for their clients.