Ethical AI: a vital investment for businesses

By Martin Benson, Head of AI Consulting, Jaywing

Artificial intelligence is already reshaping businesses. Corporate leaders are recognising its benefits: automating supply chain management, liberating workers from mundane, repetitive tasks and boosting efficiency across the board. While AI is rightly discussed in largely optimistic terms within business, there are complexities to overcome. The impact on jobs is an obvious and important talking point, although these fears are almost certainly overblown: it is not at all clear that AI will cause an aggregate reduction in jobs, and similar concerns have accompanied most major technological advances throughout history without ever materialising. There are also other potential issues, which are now being recognised at a governmental level.

Last month’s Budget announcement included an interesting development: a pledge to establish a new Centre for Data Ethics and Innovation. The project will focus on ensuring that society keeps pace with the change driven by artificial intelligence, assessing the implications for public services, the future of employment and businesses. Crucially, the Centre will also explore “ethical issues” relating to AI, an element of the debate that deserves greater attention.

But what exactly do we mean by ethics in AI?

AI: the potential ethical dilemmas



As AI becomes a fundamental tool in the delivery of public services, as well as healthcare and business decisions, there are potential ethical issues to consider. A model predicting which customers are most likely to buy a product may not need to be understood in depth. However, a life-changing decision on mortgage eligibility requires careful assessment by the lender, and a degree of transparency in how decisions are arrived at.

In addition, we could start to see AI used to predict criminal recidivism, potentially informing decisions on probation, or to decide whether a patient should receive a drug, and in what quantity. For decisions like these, it is vital that the behaviour of the AI is properly understood and that potential mistakes are avoided.

As Lord Timothy-Jones, chair of a House of Lords Select Committee, recently pointed out, “there could be circumstances where the decision that is made with the aid of an algorithm is so important there may be circumstances where you may insist on that level of explainability or intelligibility from the outset”. The question, therefore, is whether there is a way of restraining AI’s potential to make incorrect decisions. Is it possible to build rules into the technology at the outset to prevent poor decision-making later?

Building ethics into AI 

For businesses, the answer is to build AI technologies that people can feel comfortable with, and that address the concerns troubling them. It is paramount not to lose sight of the need for control, or of the role of human emotion in making business decisions. When choosing solutions to implement, leaders should ask whether the technology allows human insight to be built in, paving the way for common sense outcomes.

Fortunately, revolutionary products are emerging which allow users to specify upfront rules about the behaviour of predictive models, so that common sense outcomes are guaranteed. One such solution is Archetype, a patent-pending technology that builds predictive models using AI but allows users to insist that certain rules are adhered to.

For instance, if the business needs to be able to explain to a customer that it treats credit risk as increasing as salary levels decrease, this can be enforced at the outset. Archetype only produces models in which the specified constraints are adhered to, which is key to being able to deploy a predictive model confidently, without fear of generating undesirable outputs.
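Archetype itself is proprietary, so the sketch below is not its method; it simply illustrates the general principle of constraining a fitted model so that a specified relationship always holds. It uses the well-known pool-adjacent-violators algorithm to produce risk estimates that are guaranteed never to increase with salary, however noisy the raw scores are. The function name and the data are hypothetical.

```python
def fit_monotone_decreasing(salaries, risks):
    """Fit risk estimates guaranteed never to increase with salary,
    using the pool-adjacent-violators algorithm (PAVA)."""
    # Sort observations by salary, ascending.
    pairs = sorted(zip(salaries, risks))
    # Negate risks so we can fit a standard non-decreasing sequence.
    ys = [-risk for _, risk in pairs]

    # PAVA: keep blocks of (sum, count); merge while block means violate order.
    blocks = []
    for y in ys:
        blocks.append([y, 1])
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c

    # Expand the block means back out and undo the negation.
    fitted = []
    for s, c in blocks:
        fitted.extend([-(s / c)] * c)
    return [salary for salary, _ in pairs], fitted


# Hypothetical raw risk scores that "wobble" upward in places.
salaries = [20_000, 30_000, 40_000, 50_000, 60_000, 70_000]
raw_risks = [0.90, 0.85, 0.90, 0.50, 0.60, 0.30]

xs, constrained = fit_monotone_decreasing(salaries, raw_risks)
# The constrained fit is non-increasing everywhere, by construction.
assert all(a >= b for a, b in zip(constrained, constrained[1:]))
```

The point is that the monotonicity guarantee is structural, not checked after the fact: no input data can produce a fit in which risk rises with salary, which is the property that makes such a model explainable to a customer.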

It’s not just about the societal impact. Innovation in ethical AI is also about ensuring the UK remains a global leader in digital innovation. A recent House of Lords report identified ethical AI as a developmental area and growth opportunity for the British economy. By focussing on the ethical side of AI, there is a real opportunity for Britain to join the US and China as an AI leader, driving progress that will benefit everyone.

There can sometimes be a perception that AI is an uncontrolled force, set to wreak unrestrained revolutionary change on businesses. While the technology is certainly transformative, the development of Archetype illustrates how AI can be controlled and engineered to produce common sense outcomes. The Government’s commitment to understanding more about AI is a promising development and I’m looking forward to hearing more about the Centre for Data Ethics and Innovation’s progress.