By Adrian Harvey, CEO of Elephants don’t forget
“Goodbye” is the next word out of Anne Robinson’s mouth, as she dispatches the contestant who has either performed the worst or been perceived to perform the worst.
The harsh reality is that human employees struggle to compete in a head-to-(robotic)-head comparison when it comes to the accurate, timely and low-cost processing of certain activities. It therefore makes common sense for all firms, banks included, to embrace the deployment of AI in their enterprise – particularly in areas that play to AI’s strengths.
According to PwC: “AI startups have raised more than US$2 billion in venture capital funding this year. This is clearly seen as one of the more promising technologies, with a bright future.”
I suspect some ardent AI evangelists are reading this, proclaiming that “there is nothing that AI cannot do and it is only a matter of time…” Technically, at least right now, that simply is not true. In time, perhaps, the development of AI will mean that there is nothing a human can do that AI cannot – but not yet, and even then it may never happen.
Why so? Because paying customers may decide, and continue to decide, that they would prefer to interact with a human being rather than a robot. Heaven forbid! I am certain it is perfectly possible, right now, for my family and me to fly to our holiday destination without a pilot; however, I believe it would be quite an empty plane. I know this because at a recent event where I spoke, I posed that very question to the audience, and only a very small fraction said they would be prepared to fly in a plane with no pilots.
The conclusion to be drawn from this is that while the technology, in this instance, may be there, paying customers are not willing to embrace it and thus it is not being deployed. I am human and I want human-on-human interaction – and perhaps, as ridiculous as that may sound, I want humans to back up the robots just in case somebody made an error in the coding.
As a consumer I am not fussed about how AI is deployed ‘behind the scenes’ in the ‘back office’, as so many firms refer to it. If my insurer is deploying AI to spot and combat fraud more accurately, I applaud them, as it is likely to reduce my premiums. If the law firm down the road is using AI to research past cases to increase the accuracy of their win/lose forecasting in court, then that is beneficial, right?
But when AI is deployed in the parts of the customer journey that directly impact the customer, more caution is needed. Recently, I read an article telling readers how advantageous it was going to be when a particular firm deployed AI to help customers with their investment decisions. From personal experience and discussions, the general consensus seems to be that customers do not mind AI running the numbers, doing the forecasting and analysing the options; however, interacting solely with a robot makes them more hesitant.
Woe betide the overzealous banks that deploy AI in these customer touch points, as switching has never been easier and consumer choice has never been greater. Forcing AI onto the public because it saves the enterprise money, while in fact downgrading the customer experience, is likely to end in tears for the bank that does so.
So AI is a great thing when it is deployed behind the scenes and in some basic interactions with the customer, but there are plenty of other aspects that require human intervention because AI either cannot (yet) do it or the consumer will not stand for a human-on-robot interaction.
This means that despite the phenomenal ability of AI to ‘get it right’, and to do so faster and cheaper than the human equivalent, many tasks and processes will still be carried out by the far more error-prone human equivalent – “The Weakest Link” in Anne Robinson’s parlance.
To compound matters, in the old days employees were only ever really compared with their peers and fellow humans; now, inevitably, AI introduces a far more challenging yardstick. Employers now have, and will become increasingly accustomed to, ever-higher standards of processing accuracy, which will undoubtedly highlight the gap between the accuracy and capability of the employee and that of AI.
But what can be done to close this accuracy, capability and risk gap and why are humans quite so prone to error?
While some may dispute that employees are really so far off the mark, and that the ‘problem’ is as great as inferred here, we have evidence to support it. In 2016 Elephants don’t forget conducted more than 20 million individual employee knowledge and competency checks across numerous firms, the majority of which are regulated. Measured against client-specific learning requirements, this substantial data pool evidenced that, on average, employees learn and retain only half of the training material that is provided for them to optimally perform their role.
Employers train employees to ensure competence and compliance, and knowing half of what is required is hardly a recipe for an error-free and compliant environment.
In 2017 Elephants don’t forget is forecast to complete more than 50 million employee competency interventions, and so far the data point is consistent. Each new deployment reinforces the fact that employees continue to learn only half of what employers expect of them. This inevitably makes employees more prone to error than AI and thus represents a far greater compliance risk.
Interestingly, this has little to do with training and everything to do with learning and knowledge retention. The situation isn’t new and has blighted the learning and development functions of every bank and organisation since training began.
Exam testing to establish competence and knowledge doesn’t necessarily work, as employees cram for the exam and promptly forget most of what they remembered (which is different from what they learned). Moreover, it is hugely unpopular not only with the workforce but also with management and unions, and at best it only ever provides a questionably accurate snapshot at a single point in time.
Whilst the problem appears systemic within the enterprise, it is in fact an entirely individualised affair. What is learned and retained from any workplace training is a highly personalised experience that differs greatly from one colleague to another. Thus, knowledge (and the lack thereof) operates at an individual level, and solving the problem needs to be a similarly individualised affair.
And even if firms had been able to accurately identify individual employee knowledge profiles – which they have not – addressing the gaps in individual knowledge would have been a Herculean administrative task, given that gaps vary by individual and many firms have workforces that run into the hundreds, if not thousands.
That was true until the arrival of AI that can process millions of data points accurately and subsequently make decisions that drive automated interactions. At Elephants don’t forget, AI is used to continually assess individual employee knowledge levels, identify areas of weakness (knowledge and competency gaps at an individual employee level) and then tailor and execute automated remedial interventions.
Sounds complicated and time-consuming? Well, doing this manually with the same accuracy as our AI would be impossible, and it would certainly be so cost-prohibitive as to be a non-starter. Perhaps the cleverest thing about this use of our AI is how little time it takes to solve the problem.
Just 1 minute 47 seconds per day per employee is our global average, and this reduces in mature, professional/clerical deployments. One could argue that mathematically this amounts to roughly 6.5 hours a year of ‘lost productivity’. However, these interactions have proven to have no negative impact on productivity; instead, evidence has shown that they drive considerable, sustained productivity and compliance improvement in employees. It stands to reason that, all things being equal, an employee who knows what they have been trained will outperform an employee who knows only half of what they have been trained.
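For readers who want to check the back-of-the-envelope figure, a minimal sketch of the arithmetic is below. The article does not state the number of working days assumed, so the 220-day working year is my assumption, chosen because it reproduces the quoted 6.5 hours:

```python
# Rough check of the "6.5 hours a year" figure.
# ASSUMPTION: ~220 working days per year (not stated in the article).
SECONDS_PER_DAY = 1 * 60 + 47   # 1 minute 47 seconds of daily interaction
WORKING_DAYS = 220              # assumed working days per year

annual_hours = SECONDS_PER_DAY * WORKING_DAYS / 3600
print(round(annual_hours, 1))   # prints 6.5
```

Any working-year assumption between roughly 215 and 225 days yields the same rounded result.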
Ironically, AI has come to the rescue of the weakest link in your customer journey – employees. AI has enabled Elephants don’t forget to guarantee that every employee genuinely learns and retains what employers train them for, namely, optimal compliance and productivity. Moreover, AI has enabled this to happen in a way that is completely acceptable (even enjoyable) to harassed and time poor employees and line management and has provided a standard of proof way beyond what the regulatory community dreamed possible. Our prediction is that AI will form an integral part of the human/employee support framework of every major employer within the next decade.
Indeed, Elephants don’t forget is already using AI to ensure the smooth and consistent on-boarding of employees around the globe for Tier 1 employers with small HR footprints that understand the cost of mis-hires and the value of a thorough and comprehensive on-boarding experience. The fact of the matter is that managers are human, and prone to forget and make mistakes. The AI enables personalisation that would otherwise be impossible and brings a level of consistency and certainty that would not otherwise exist. Moreover, 83% of the c.3,000 employees who experienced an AI-managed on-boarding process in 2016/17 gave it positive reviews.
To conclude, perhaps we should view AI as less of a threat to employment and more as an opportunity to ensure the weakest link isn’t quite so weak and that the productivity and risk gap between AI and employee is reduced.