

Firm foundations for responsible AI

By Dr. Scott Zoldi, Chief Analytics Officer at FICO

As the use of Artificial Intelligence grows across all markets and industries, so too does scrutiny of the reasoning behind its decision-making algorithms. Dr. Scott Zoldi, Chief Analytics Officer at FICO, discusses how data scientists and organisations can ensure they use AI responsibly and ethically.

Increasing volumes of digitally generated data coupled with automated decisioning have led to faster applications, form-filling, insurance claims handling and so much more. Artificial Intelligence (AI) has sped up our lives, increased convenience and fed our expectation for instant access, instant decisions, instant gratification. However, it has also brought a new set of challenges for businesses and governments alike.

Advocacy groups have started questioning the increasing use of AI to make decisions about the lives of people and the pushback is not always unfounded. With algorithms taking the lead and human understanding and empathy removed from the process, decision-making can seem callous, perhaps even careless. As a result of these concerns, regulations have been introduced to protect consumer rights and keep a close watch on AI developments.


Milestones on the journey to responsible AI

Building responsible AI models takes time and painstaking work. Systems must be built upon strong foundations, then continuously monitored, tweaked and upgraded to ensure their use remains responsible in production. As dependence on AI grows by the day, organisations must act now to enforce responsible AI.

To do this, standards need to be established in the three pillars of responsible AI: explainability, accountability and ethics. With these in place, organisations of all types can be confident they are making sound digital decisions.

 

EXPLAINABILITY: AI decision systems should be based on an algorithmic construct that reports the reasons associated with each decision, allowing a business to explain why the model made the decision it did – for example, why it flagged a transaction as fraud. This explanation can then be used by human analysts to further investigate the implications and accuracy of the decision; it also enables a clear explanation to be provided to customers. A detailed explanation of the risk indicators ensures the decision is understandable, plausible and palatable. In addition, if an error has been made, whether by the customer providing data or by the AI system itself, it can be rectified and the decision reassessed, potentially resulting in a different outcome.
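As a minimal sketch of what such reason reporting can look like in practice, the Python snippet below uses the open-source shap library to pull the top score-driving features for a single flagged transaction. The model, feature names and data are illustrative stand-ins, not FICO's production approach.

```python
# Minimal reason-code sketch: report which features pushed a score up,
# assuming a tree-based model and the open-source `shap` library.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
feature_names = ["amount", "merchant_risk", "velocity_1h", "geo_distance"]
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + X[:, 1] > 1).astype(int)  # synthetic "fraud" label

model = GradientBoostingClassifier().fit(X, y)
explainer = shap.TreeExplainer(model)

def reason_codes(x, top_n=3):
    """Return the features that pushed this score highest, as reasons."""
    contrib = explainer.shap_values(x.reshape(1, -1))[0]
    order = np.argsort(-contrib)  # most score-increasing features first
    return [(feature_names[i], float(contrib[i])) for i in order[:top_n]]

flagged_transaction = X[0]
print(reason_codes(flagged_transaction))
```

Those top contributors are the raw material for the human-readable explanation given to analysts and customers.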

ACCOUNTABILITY: The importance of thoughtful model development should not be underestimated. Algorithms must be chosen carefully, with their limitations taken into account, to create reliable machine learning models. Technology must be transparent and compliant. Accountable development of models ensures the decisions make sense as inputs change; for example, scores adapt appropriately with increasing risk.
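One concrete way to make scores adapt appropriately with increasing risk is to constrain the model so its output can only move in one direction as a known risk indicator rises. A minimal sketch, assuming the open-source XGBoost library and synthetic data:

```python
# Enforce a business-sensible relationship: the score may never fall
# as the known risk indicator (column 0) increases.
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 3))           # [risk_indicator, f1, f2]
y = ((X[:, 0] + 0.3 * rng.normal(size=2000)) > 0.5).astype(int)

model = xgb.XGBClassifier(
    monotone_constraints="(1,0,0)",      # +1: score non-decreasing in col 0
    n_estimators=100,
)
model.fit(X, y)

# Sanity check: raising the risk indicator should never lower the score.
probe = np.tile(X[:1], (50, 1))
probe[:, 0] = np.linspace(-3, 3, 50)
scores = model.predict_proba(probe)[:, 1]
assert np.all(np.diff(scores) >= -1e-9)
```

Constraints like this make model behaviour defensible when inputs shift, rather than hoping the training data happened to teach the right shape.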

Beyond explainable AI, there is the concept of humble AI — ensuring that the model is used only on data examples and scenarios similar to those on which it was trained. Where that is not the case, the model may not be trustworthy, and an organisation should downgrade to an alternate algorithm.
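A simple way to sketch this gating idea, assuming scikit-learn's IsolationForest as the out-of-distribution check and a logistic regression as the better-understood fallback (both illustrative choices, not a prescribed design):

```python
# Humble AI gate: trust the primary model only inside the region of
# data it was trained on; otherwise downgrade to a simpler model.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X_train = rng.normal(size=(1000, 4))
y_train = (X_train[:, 0] > 0).astype(int)

primary = RandomForestClassifier().fit(X_train, y_train)
fallback = LogisticRegression().fit(X_train, y_train)    # simpler, better understood
envelope = IsolationForest(random_state=0).fit(X_train)  # learns the training region

def humble_score(x):
    """Score with the primary model only on familiar-looking inputs."""
    x = x.reshape(1, -1)
    if envelope.predict(x)[0] == 1:       # 1 = resembles training data
        return primary.predict_proba(x)[0, 1], "primary"
    return fallback.predict_proba(x)[0, 1], "fallback"

print(humble_score(rng.normal(size=4)))   # in-distribution input
print(humble_score(np.full(4, 10.0)))     # far outside training data
```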

ETHICS: Having been built by humans and trained using societal data, which is often implicitly full of bias, AI can be far more discriminatory than many would expect from a machine. Explainable machine learning architectures allow extraction of the specific machine-learned relationships between features that can lead to biased decision-making. Ethical models ensure that bias and discrimination are explicitly tested and removed.
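An explicit bias test can be as simple as comparing outcome rates across a protected group. The sketch below computes a disparate impact ratio against the common "four-fifths" rule of thumb; the threshold and data are illustrative, not a statement of FICO's methodology:

```python
# Explicit bias test: compare approval rates between a protected group
# and everyone else, flagging gaps below the four-fifths rule of thumb.
import numpy as np

def disparate_impact_ratio(approved: np.ndarray, group: np.ndarray) -> float:
    """Ratio of approval rates: protected group vs. the rest."""
    rate_protected = approved[group == 1].mean()
    rate_other = approved[group == 0].mean()
    return rate_protected / rate_other

rng = np.random.default_rng(3)
group = rng.integers(0, 2, size=5000)
# Synthetic decisions with a built-in gap, to exercise the test.
approved = (rng.random(5000) < np.where(group == 1, 0.45, 0.60)).astype(int)

ratio = disparate_impact_ratio(approved, group)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("fails the four-fifths test: investigate the features driving the gap")
```

When the test fails, explainable architectures make it possible to trace which learned feature relationships are driving the gap and remove them.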

Enforcing responsible AI

Having completed the careful, painstaking work of building a responsible AI model that is explainable, accountable and ethical, data scientists need to enlist external forces to ensure the model delivers, and continues to deliver, responsible AI. These forces include regulation, audit and advocacy.

REGULATION: Without external regulation, there would be no restrictions on, or control over, how organisations could use data and AI. Regulations are vital for setting the standard of conduct and rule of law for use of algorithms, to ensure decisions are fair.

AUDIT: To demonstrate compliance with regulation, data scientists and organisations require a framework for creating auditable models and modelling processes. Audits must ensure essential steps such as explainability, bias detection and accountability tests are performed ahead of model release, with explicit approvals recorded. This creates an audit trail for accountability, attribution and forensic analysis. Furthermore, as data changes, these same concepts of ethical AI must be retested and verified.
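As an illustration of what one audit-trail entry might record, the sketch below fingerprints a released model artifact and logs the approvals and tests attached to it. All field names, and the choice of a pickled artifact with a JSON record, are hypothetical:

```python
# Audit-trail sketch: fingerprint the released model and record which
# responsible-AI checks were passed, by whom, and when.
import hashlib
import json
import pickle
from datetime import datetime, timezone

def audit_record(model, checks: dict, approver: str) -> dict:
    """Build an immutable-style release record for a model artifact."""
    artifact = pickle.dumps(model)
    return {
        "model_sha256": hashlib.sha256(artifact).hexdigest(),
        "released_at": datetime.now(timezone.utc).isoformat(),
        "approver": approver,
        "checks": checks,  # e.g. explainability, bias, stability results
    }

record = audit_record(
    model={"type": "demo"},  # stands in for a trained model object
    checks={"bias_test": "pass", "explainability_review": "pass"},
    approver="lead.data.scientist@example.com",
)
print(json.dumps(record, indent=2))
```

Hashing the artifact ties every recorded approval to one exact model version, which is what makes later forensic analysis possible.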

ADVOCACY: As mentioned earlier, there are many groups concerned about the wrongs done by AI, some of them willing to pursue legal proceedings. Growing public awareness of how algorithms are making very serious, life-changing decisions has led to organised advocacy efforts in many regions. Clearly there is a burgeoning need for collaboration between these advocates and machine learning experts, for the greater good of both humans and AI.

‘Doing your best’ won’t be good enough

As the use of AI continues to grow across industries, borders and lives, ‘doing your best’ as a data scientist and as an organisation won’t be good enough. Responsible AI will soon be the expectation and the standard. Organisations must enforce responsible AI now, setting and strengthening their standards of AI explainability, accountability and ethics to ensure they are making digital decisions responsibly.
