How to deliver trusted, safe, and responsible AI
Published by Jessica Weisman-Pitts
Posted on December 27, 2023

By Shakeel Khan, CEO, Validate AI and David Hand, Emeritus Professor of Mathematics and Senior Research Investigator at Imperial College
A working definition of Artificial Intelligence (AI) is the ability of a machine to perform tasks regarded as requiring intelligence. Although not new, it is an idea which has suddenly become part of everyday conversation. This is largely because the power of modern computers, coupled with the size of the databases now available, has led to remarkable achievements, with even more remarkable breakthroughs promised. These span all aspects of human enterprise: beating chess and Go champions, making scientific and medical breakthroughs, real-time language translation, facial recognition, driverless cars, and so on. Most recently, the media have become enthralled by the potential of chatbots and large language models, such as ChatGPT, Claude 2, and PaLM. These appear to have the capacity to carry out a sensible conversation and even to write documents at a level adequate to pass university examinations.
But two things about these developments are striking. One is the rate of progress. Every week we appear to read about an even more dramatic advance: whereas GPT-3.5 outperformed only 10 percent of human candidates on the Uniform Bar Examination, the improved GPT-4 beat 90 percent. The other is that the systems sometimes make silly mistakes – like an early version of ChatGPT confidently asserting that 47 was larger than 64 (and then attempting to count from 47 to 64, before giving up).
Put these two things together, and alarm bells might start ringing. Will AI take over jobs? Will it aggravate social inequality? Will it lead to disastrous mistakes? Who bears responsibility when things go wrong? What about autonomous weapons? Is what an AI system is trying to do really aligned with what we want; that is, is it solving the right problem? And even if it is, is it doing so in an ethical way? After all, hospital waiting lists are easily reduced by putting fewer patients on the list.
In short, can we trust AI, is it safe, and how can we ensure that such systems are used to benefit humanity?
Validate AI has been at the forefront of developing strategies to mitigate these risks since its formation in 2019. Its risk mitigation strategy rests on six key pillars that businesses can follow as an outline for driving assurance:
In short, we are moving into a new world. It is a world of huge potential for benefiting humankind. However, as the recent gathering of leaders from around the world at the UK AI Safety Summit illustrated, AI, like any other advanced technology such as nuclear power or biotechnology, carries risks. For its vast promise to be fulfilled, we need to tread carefully. The risk mitigation strategy embodied in the pillars above forms the basis of a checklist that will give us confidence that the future is bright, and that safe AI can be delivered.
About the authors:
Shakeel Khan is CEO of Validate AI, a community interest company championing innovation in how we deliver Trusted, Safe and Responsible AI, working with experts from government, academia, and industry. Over 28 years in the banking and government sectors, he has led the development of a comprehensive practitioner-centric AI assurance toolkit, which has been adopted for projects by government departments and fiscal authorities globally. He also chairs an AI committee at the OR Society that partners with Validate AI to deliver community events and learning opportunities.
David J. Hand is emeritus professor of mathematics and senior research investigator at Imperial College London and chair of Validate AI CIC. He is a past president of the Royal Statistical Society and a fellow of the British Academy. His books include Dark Data, The Improbability Principle, Information Generation, Intelligent Data Analysis, Artificial Intelligence and Psychiatry, and Principles of Data Mining.