
AI: Do the Right Thing

By Alix Melchy, Jumio VP of AI

The application of emerging technologies such as AI, cloud, blockchain and IoT in financial services has altered the traditional operating models of financial institutions, the competitive dynamics of the industry, the role of people in those institutions and the landscape of the financial system as a whole. In fact, AI is positioned as an essential investment, with the World Economic Forum arguing that it is set to become central to the fabric of financial institutions.

While the adoption of AI in financial services may be in its infancy, the use cases are ever growing. From recommending loan and credit offerings to detecting fraud, 94% of financial services firms in European and Middle Eastern markets believe that AI will disrupt their business. The direction of travel and the awareness of AI are clear, but companies must invest carefully: if adoption is rushed, the process is marred by pitfalls.

Despite the transformative promise of AI and machine learning algorithms, we have seen their application come under scrutiny in other industries. Take the UK A-Level exam grading debacle that dominated headlines back in August. Exam grades of students living in certain UK postcodes were disproportionately and negatively impacted, while other students saw their results inflated. This was down to an algorithm, implemented by Ofqual, designed to predict grades from historical data, including grades obtained in exams in previous years.

The incident raises the question of what would happen if such an algorithm were applied to a financial decision. The same biases could negatively impact the way millions of consumers and businesses borrow, save and manage their money.

It is therefore imperative that financial institutions learn from this scenario, ensuring that when implemented in financial decision-making, AI is nothing short of a success.

AI is no fairy godmother

While many tout the game-changing effects of the looming AI revolution, it's fundamentally important to understand that AI is not magic. Instead, we need to set reasonable expectations of AI so as not to paint an unrealistic picture of its power.

In order to start out on the right track, businesses must first define and align on the task they want the algorithm to perform before it can be developed and implemented. Articulating the problem to be solved is the prerequisite for a solid framework of development and evaluation of your algorithms.

Removing bias in AI

AI is the tool, not the hand that wields it or the eye that guides it. It is a learning system that requires data, training, integration and course correction. Just as we would train a young engineer to use a tool correctly, we train AI systems to become expert learning systems through data, process and people.

Therefore, in order to solve a problem using AI, the task must be expressed in a form which a machine can understand and the machine must be supplied with the necessary data to perform or otherwise learn to generate predictions that enable it to accomplish its objective. Without strong and relevant data underpinning an AI model, it will never be able to produce strong and relevant results.

To design a fair algorithm, the key is to collect enough data that the algorithm can be trained to represent an entire community. While it is possible to buy datasets to speed up the process, it is essential that the data meets your required criteria rather than simply being large. For the financial services sector, representative data enables employees to treat customers fairly. Combined with appropriate modelling and processes, it also allows them to maintain transparency and accountability in their decision-making, helping to avoid legal claims or regulatory fines that can cause deep reputational damage.
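As an illustration of the data-collection point above, one simple representativity check is to compare the make-up of a training set against a reference population (for example, census figures) and flag under-represented groups. The function, field names and tolerance below are purely illustrative, not part of any particular toolkit:

```python
from collections import Counter

def representation_gaps(samples, reference_shares, attribute, tolerance=0.05):
    """Flag groups whose share of the training set falls short of a
    reference population share by more than `tolerance`.

    `samples` is a list of dicts; `reference_shares` maps a group name
    to its expected share of the population. All names are illustrative.
    """
    counts = Counter(s[attribute] for s in samples)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if observed < expected - tolerance:
            gaps[group] = {"expected": expected, "observed": round(observed, 3)}
    return gaps
```

A check like this is only a first pass: it catches missing groups, not subtler labelling or sampling biases, but it is cheap to run before any training begins.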

Building back better

As the Ofqual issue revealed, a preliminary, small-scale test of an algorithm is an essential step before applying it to a real-world scenario. A pilot phase helps a business refine the design, identify unnecessary costs and time expenditures, and better understand the data. Because this was not done sufficiently in the Ofqual case, the algorithm simply did not answer the problem it was trying to solve.
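A pilot of this kind can be as simple as scoring the model's accuracy per demographic group on a small hold-out set and flagging large disparities before rollout. The sketch below assumes illustrative record fields ("predicted", "actual" and a caller-supplied group key) and an arbitrary disparity threshold:

```python
def pilot_report(records, group_key, max_gap=0.05):
    """Summarise per-group accuracy on a small pilot set and flag
    disparities before a full rollout. Record fields are illustrative:
    each record needs `group_key`, "predicted" and "actual" entries.
    """
    by_group = {}
    for r in records:
        hits, n = by_group.get(r[group_key], (0, 0))
        # a correct prediction counts as a hit for that record's group
        by_group[r[group_key]] = (hits + (r["predicted"] == r["actual"]), n + 1)
    accuracy = {g: hits / n for g, (hits, n) in by_group.items()}
    gap = max(accuracy.values()) - min(accuracy.values())
    return {"accuracy": accuracy, "gap": gap, "flagged": gap > max_gap}
```

If the report is flagged, the design, not just the model weights, should be revisited before any wider deployment.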

Championing ethical AI

More than ever, companies are realising one simple truth: failing to operationalise data and AI ethics is a threat to the bottom line. Missing the mark can expose companies to reputational, regulatory and legal risks. Here are some key areas that businesses should consider when leveraging AI models:

  • Usage consent: make sure that all the data you are using has been acquired with the proper consent
  • Diversity and representativity: AI practitioners should consider how diverse their programming teams are and whether those teams undertake relevant anti-bias and discrimination training. Drawing on the perspectives of individuals of different genders, backgrounds and faiths increases the likelihood that decisions made on purchasing and operating AI solutions are inclusive and not biased
  • Transparency and trust building: accurate and robust record keeping is important to ensure that those impacted by a model can understand how it arrived at its decisions
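The record-keeping point above can be sketched as a minimal audit log for each model decision. The fields here are illustrative rather than a prescribed schema; real deployments would also need secure storage and retention policies:

```python
import datetime
import json

def log_decision(model_version, inputs_summary, decision, reasons):
    """Serialise a minimal, human-reviewable audit record for one model
    decision. Field names are illustrative, not a standard schema.
    """
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,   # which model produced the decision
        "inputs": inputs_summary,         # a summary, not raw personal data
        "decision": decision,
        "reasons": reasons,               # factors that drove the outcome
    }
    return json.dumps(record)
```

Keeping a record per decision is what later allows an institution to explain an individual outcome to a customer or a regulator.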

The ways AI can be utilised in the financial services industry are growing. One example is document-centric identity proofing, whereby an identification document, such as a passport, is matched with a selfie of the user to confirm that real and virtual identities correspond. This will be an essential area of focus for financial services companies as they look to confirm that users are who they claim to be while the physical branch diminishes. When analysing whether a person matches the picture on their documentation, for example, a biased AI model can completely undermine the decision made.
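At its core, the selfie-to-document comparison reduces to measuring the similarity of two face embeddings against a threshold. The sketch below assumes the embeddings have already been produced by some face-recognition model (out of scope here); the threshold value is illustrative and in practice must be tuned, and its error rates audited, per demographic group:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def verify_identity(doc_embedding, selfie_embedding, threshold=0.8):
    """Compare a document-photo embedding with a selfie embedding.

    The 0.8 threshold is illustrative: a biased model, or a threshold
    tuned on an unrepresentative population, can yield very different
    false-match rates across demographic groups.
    """
    score = cosine_similarity(doc_embedding, selfie_embedding)
    return {"score": score, "match": score >= threshold}
```

This framing makes the bias risk concrete: the decision is only as fair as the embedding model and the threshold-setting data behind it.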

However, it’s reassuring to see that the 2020 Gartner Market Guide for Identity Proofing & Affirmation predicts that by 2022, 95% of RFPs will have introduced clear requirements around minimising demographic bias. This demonstrates how organisations are now becoming more aware of the detrimental impacts that demographic bias in the performance of identity-proofing processes could have on their brand as well as being clear on the legal consequences they risk facing.

In turn, there is a real opportunity to leverage AI solutions to provide the best service, but financial institutions must ensure that they are doing so in an ethical, accurate, and representative way.

