
Ethical AI: a vital investment for businesses

By Martin Benson, Head of AI Consulting, Jaywing

Artificial intelligence is already reshaping businesses. Corporate leaders are recognising its benefits: automating supply chain management, liberating workers from mundane, repetitive tasks and boosting efficiency across the board. While AI is rightly being talked about in largely optimistic terms within business, there are complexities to overcome. The impact on jobs is an obvious and important talking point, although those fears are almost certainly overblown: it is not at all clear that AI will cause an aggregate reduction in jobs, and similar concerns have accompanied most major technological advances throughout history without ever materialising. There are, however, other potential issues, and they are now being recognised at a governmental level.

Last month’s Budget announcement included an interesting development: a pledge to establish a new Centre for Data Ethics and Innovation. The project will focus on ensuring that society is able to keep up with the pace of change driven by artificial intelligence, assessing the implications for public services, the future of employment and businesses. Crucially, the Centre will also explore the “ethical issues” relating to AI, an element of the debate that deserves far greater attention.

But what exactly do we mean by ethics in AI?

AI: the potential ethical dilemmas


As AI becomes a fundamental tool in the delivery of public services, as well as in healthcare and business decisions, there are potential ethical issues to consider. A model predicting whether one customer is more likely than another to buy a product may not need to be understood in depth. However, a life-changing decision on mortgage eligibility requires careful assessment by the lender, and a degree of transparency in how the decision was arrived at.
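To make that notion of transparency concrete, the sketch below is a purely illustrative additive scorecard; the feature names, coefficients and applicant values are assumptions, not drawn from any real lending model. Its point is that a simple additive model lets a single decision be broken down into per-feature contributions that can be explained back to a customer.

```python
# Illustrative sketch of "transparency in how a decision was arrived at":
# an additive scorecard whose output decomposes into per-feature contributions.
# Feature names, coefficients and values are hypothetical.
import numpy as np

feature_names = ["salary", "existing_debt", "years_at_address"]
coefficients = np.array([0.00003, -0.00005, 0.08])   # assumed weights
intercept = -1.5                                      # assumed baseline

applicant = np.array([28_000.0, 12_000.0, 4.0])       # one hypothetical applicant

# Each contribution shows how much a feature pushed the score up or down:
# the kind of explanation a lender can relay back to a customer.
contributions = coefficients * applicant
score = intercept + contributions.sum()
probability_of_approval = 1 / (1 + np.exp(-score))

for name, contribution in zip(feature_names, contributions):
    print(f"{name:>20}: {contribution:+.3f}")
print(f"{'approval probability':>20}: {probability_of_approval:.1%}")
```

A more complex model may predict more accurately, but it cannot be unpacked this simply, and that is where the tension arises.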

In addition, we could start to see AI used in making predictions on criminal recidivism, potentially informing decisions on probation, as well as deciding whether a patient should receive a drug, and in what quantity. When it comes to decisions like these, it’s vital that the behaviour of the AI is properly understood and potential mistakes are avoided (for obvious reasons).

As Lord Clement-Jones, chair of the House of Lords Select Committee on Artificial Intelligence, recently pointed out, “there could be circumstances where the decision that is made with the aid of an algorithm is so important there may be circumstances where you may insist on that level of explainability or intelligibility from the outset”. The question, therefore, is whether there is a way of restraining AI’s potential to make incorrect decisions. Is it possible to build rules into the technology at the outset to prevent poor decision-making later?

Building ethics into AI 

For businesses, the answer is to construct AI technologies that make people feel comfortable and address the matters troubling them. It is paramount not to lose sight of the need for control, or of the role human emotion plays in business decisions. When choosing solutions to implement, leaders should ask whether the technology allows human insight to be built in, paving the way for common sense outcomes.

Fortunately, revolutionary products are emerging which allow users to specify upfront rules about the behaviour of predictive models, so that common sense outcomes are guaranteed. One such solution is Archetype, a patent-pending technology that builds predictive models using AI but allows users to insist that certain rules are adhered to.

For instance, if the business needs to be able to explain to a customer that it treats credit risk as increasing when salary decreases, that relationship can be enforced at the outset. Archetype only produces models that adhere to the specified constraints, which is key to deploying a predictive model with confidence, without fear of generating undesirable outputs.
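Archetype itself is proprietary, but the underlying idea of fixing directional rules before training can be illustrated with an open-source analogue. The sketch below is not Archetype’s method: it uses scikit-learn’s monotonic constraints, synthetic data and made-up feature names to force a model to treat the risk of default as non-increasing in salary and non-decreasing in existing debt.

```python
# A minimal sketch (not Archetype) of constraining a predictive model's
# behaviour up front, using scikit-learn's monotonic constraints.
# Data and feature names are synthetic assumptions.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier

rng = np.random.default_rng(0)
n = 5_000

salary = rng.uniform(15_000, 120_000, n)     # hypothetical applicant salaries
existing_debt = rng.uniform(0, 50_000, n)    # hypothetical outstanding debt
X = np.column_stack([salary, existing_debt])

# Synthetic target: default becomes less likely as salary rises
# and more likely as existing debt rises.
p_default = 1 / (1 + np.exp(0.00008 * salary - 0.00006 * existing_debt))
y = rng.binomial(1, p_default)

# monotonic_cst: -1 means the predicted default probability must not
# increase with that feature, +1 means it must not decrease, 0 is unconstrained.
model = HistGradientBoostingClassifier(
    monotonic_cst=[-1, +1],   # risk falls with salary, rises with debt
    random_state=0,
)
model.fit(X, y)

# By construction the fitted model respects these directional rules,
# whatever noise or quirks appear in the training data.
```

The guarantee is the same in spirit as the one described above: the rule is part of the model-building process itself, not a check applied after the fact.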

It’s not just about the societal impact. Innovation in ethical AI is about ensuring the UK is a digital innovation leader globally. A recent House of Lords report identified ethical AI as a developmental area and growth opportunity for the British economy. By focussing on the ethical side of AI, there is a real opportunity for Britain to join the US and China as an AI leader, driving progress that will benefit everyone.

There can sometimes be a perception that AI is an uncontrolled force, set to wreak unrestrained revolutionary change on businesses. While the technology is certainly transformative, the development of Archetype illustrates how AI can be controlled and engineered to produce common sense outcomes. The Government’s commitment to understanding more about AI is a promising development and I’m looking forward to hearing more about the Centre for Data Ethics and Innovation’s progress.