


How to Keep Your AI Ethical


By Dave Trier, VP of Product at ModelOp

Model monitoring technology is a foundation, but it is not enough

Artificial intelligence (AI) brings many proven and sustainable benefits to finance and other business operations, and it offers companies that master it a strong opportunity to gain a competitive advantage. But AI also introduces new risks to compliance and brand reputation that finance professionals must learn to recognize and manage. Keeping AI use ethical and fair requires a skillful blend of management and data science. Even then, risk is constant, because business conditions, regulations, public sentiment and AI performance are all continually changing. These changes create the possibility for bias to develop, which is an ongoing risk to ethical AI and a threat to compliance, especially given the myriad of inconsistent regulations around the world. Gartner predicts that 15% of application leaders will face board-level investigations into AI failures by 2022 (Top 5 Priorities for Managing AI Risk Within Gartner's MOST Framework, January 2021).

This article gives readers guidance on what they can do to preserve ethical AI use in their organizations. It identifies the human and technical components needed to apply artificial intelligence ethically, and presents a recommended holistic approach that incorporates AI policy, governance, model monitoring and orchestrated remediation to keep AI unbiased and ethical for as long as models are in use.

Overview

Data scientists build AI models to put the enterprise's artificial intelligence initiatives into practice. Enterprises have on average about 300 models in production, according to the State of ModelOps 2021 Report. Once data scientists build a model, they or the IT staff put it into production. At this stage the AI model is similar to other software assets in that it must be monitored and managed on an ongoing basis to keep it performing as designed. However, the skills, management software, policies and governance processes needed to effectively manage AI models all differ from those needed for other enterprise software. If these differences are not understood and addressed, the enterprise is putting itself at risk.

Risk is constant because AI model operations are not static. It is a certainty that the underlying data consumed by the model will change, typically due to shifts in demographics, consumer behavior or business conditions. Such change can introduce unintentional bias, which raises additional ethical considerations and creates risk, liability and compliance exposure. The cumulative effect of these biases and other slight deviations between the model's actual performance and its intended design can easily rise to problem levels if not detected and corrected.
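To make the drift problem concrete, here is a minimal Python sketch of one widely used drift measure, the Population Stability Index (PSI), which compares a feature's training-time distribution against what the model currently sees in production. The data, threshold and scenario are invented for illustration; this describes a general technique, not any particular vendor's monitoring product.

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Measure how far a feature's live distribution has drifted
    from the distribution the model was trained on."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Clip empty bins so the log term is always defined
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Hypothetical example: applicant income has shifted upward since training.
rng = np.random.default_rng(0)
train_income = rng.normal(50_000, 10_000, 10_000)  # training-time sample
live_income = rng.normal(58_000, 12_000, 10_000)   # production sample
psi = population_stability_index(train_income, live_income)
if psi > 0.2:  # a common rule-of-thumb threshold for significant drift
    print(f"Significant drift detected (PSI = {psi:.2f}); review for bias.")
```

A drift score alone does not prove bias, but crossing a threshold like this is exactly the kind of signal that should trigger the governance and remediation steps discussed below.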

Some such problems have already occurred and attracted a great deal of unwanted attention for the enterprises that were unable to maintain ethical AI use. These failures, which have occurred at some of the world's largest and most tech-savvy organizations, should not be considered isolated incidents. In 2021, eighty percent of executives said that difficulty managing risk and ensuring compliance was a barrier to AI adoption in their enterprise.

Keeping ethical bias out of artificial intelligence models remains a largely unsolved problem; however, there are steps organizations can take and resources they can apply to prevent bias from developing and to mitigate its effects. It starts with setting thoughtful and executable policies to guide AI use in the organization. After models are promoted to production, they need to be monitored, tested and periodically retrained, regularly and repeatedly throughout the model's entire life cycle.

Problems Can Develop at Any Time

Data scientists do not knowingly put biased models into production. Most AI ethical problems develop over time after a bias-free start to the program. Depending on what the model is used for, even slight deviations from baseline standards, assumptions and intended use can create ethical bias and compliance risk. Change doesn’t necessarily lead to bias, but can open the door to it.

One reason ethical AI is a persistent problem is that there is no single, complete way to solve for it. Organizations can set appropriate policies for ethical AI, but that doesn’t ensure policies will be put into practice. There is an emerging set of code and tooling available to build fairness into models during model development, but that may not provide sufficient protection as conditions change during the model’s life cycle. Governance processes are intended to track such changes, ensure KPIs are being met and that policies are being followed, but governance requires visibility into model operations. Model monitoring solutions provide visibility, but they are not arbiters of ethical fairness. Thus, there is mutual dependency among policy, governance and monitoring efforts to keep AI use ethical.

These interdependencies are why it is important to take a holistic approach to AI management: neither policy, software code, model monitoring nor enterprise governance alone is enough to prevent models from being compromised by ethical bias. The following sections provide an overview of the anti-bias and ethical fairness protections that organizations can put in place at every stage of the AI life cycle.

Use People and Policies to Set an Ethical Foundation

Making sure AI use is ethical is not just a data science challenge; it is an organizational challenge. For example, a Harvard Business Review article identifies seven requirements for ethical AI and a McKinsey report presents six. Of the 13 suggested actions and requirements, perhaps two can be accomplished with software; most involve the human aspects of the model development and management processes. The research, best practices and thought leadership around managing AI are too extensive to cover adequately here. McKinsey Global Institute's Notes from the AI frontier: Tackling bias in AI (and in humans) lists multiple resources and is a good report in itself.

The anti-bias effort needs to start even before the model is built, on a foundation of organizational guidelines and governance for ethical AI use. Before organizations create any specific AI use cases and models, they should develop policies to guide AI use across the organization, and appropriate governance to carry those policies out. An ad-hoc approach to AI use cases, model development and deployment can leave gaps. AI policies should be debated and developed by teams with broad representation across business roles and team-member backgrounds that extend beyond data science.

Enforce Policies throughout the Model’s Life Cycle

For policies to be useful, they must be continuously and consistently enforced. This can be difficult in today's heterogeneous IT enterprise, where models are developed by multiple teams using a myriad of technologies and data sources, and run in many different on-premises and cloud environments. The problem becomes even more challenging as enterprises scale from hundreds to thousands of AI models.

Enforcement should include defined ethical fairness checkpoints, gates and monitors throughout the model's life cycle. Modern ModelOps software solutions can encode ethical fairness policies directly into a system that manages the model's life cycle, ensuring that the appropriate rules are followed, tests are run, independent validations are conducted and approvals are obtained. Moreover, policy adherence needs to be tracked for each version of a model for auditability.
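As an illustration only, the sketch below shows how fairness checkpoints and approvals might be encoded as gates in a model's promotion life cycle, with results recorded per version for auditability. All class, function and metric names are hypothetical assumptions; this is not ModelOp Center's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ModelVersion:
    model_id: str
    version: str
    metrics: dict                      # results of tests and validations
    audit_log: list = field(default_factory=list)

@dataclass
class LifecycleGate:
    """A named checkpoint a model version must pass before promotion."""
    name: str
    check: Callable[[ModelVersion], bool]

def fairness_ok(mv: ModelVersion) -> bool:
    # Four-fifths rule of thumb: group selection-rate ratio >= 0.8
    return mv.metrics.get("disparate_impact_ratio", 0.0) >= 0.8

def validated(mv: ModelVersion) -> bool:
    return mv.metrics.get("independent_validation", False)

GATES = [LifecycleGate("ethical_fairness", fairness_ok),
         LifecycleGate("independent_validation", validated)]

def promote(mv: ModelVersion) -> bool:
    """Run every gate; record each result per version for auditability."""
    for gate in GATES:
        passed = gate.check(mv)
        mv.audit_log.append((mv.version, gate.name, passed))
        if not passed:
            return False               # block promotion until the gate passes
    return True
```

A real ModelOps platform adds far more (approval workflows, scheduled tests, environment integration), but the principle is the same: policies become executable checks, and every result is logged against the model version.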

Keep Bias Out of the Model Build

Bias can be inadvertently built into models from the start, for example through the features a model is designed to use or the input data sources that are selected. Human policy and decision making at the model development stage are fundamental to preventing such inherent bias. This effort is now getting a big boost from an emerging category of software tooling that helps identify data sources and model designs that would lead to unintended bias. The most prominent resource is the Aequitas toolkit, maintained by the Center for Data Science and Public Policy. Similar and complementary resources are also available.
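Rather than guess at any one toolkit's exact API, the sketch below shows the basic group-disparity check that tools such as Aequitas automate: compare the model's selection rate across a protected attribute and flag groups that fall below the "four-fifths" rule of thumb. The dataset and column names are invented for illustration.

```python
import pandas as pd

# Hypothetical scored data: the model's binary decision per applicant,
# plus a protected attribute to audit.
df = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 0, 1, 0],
    "group":    ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
})

# Selection rate per group, and each group's rate relative to the
# most-favored group (the disparate impact ratio).
rates = df.groupby("group")["approved"].mean()
ratios = rates / rates.max()

# Four-fifths rule of thumb: flag groups below 80% of the top rate.
flagged = ratios[ratios < 0.8]
if not flagged.empty:
    print(f"Potential adverse impact against group(s): {list(flagged.index)}")
```

Running a check like this over candidate feature sets and data sources during development catches inherent bias before the model ever reaches production.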

Continually Test & Monitor the Model

Putting policy and model design best practices in place does not ensure that models will continue to execute properly. That requires actionable model monitoring, which is fundamental to maintaining ethical and overall model performance. For a model to remain ethical, it needs to be observable and fixable; model monitoring provides both.

There is no single key performance indicator or other metric that shows whether a model is biased. Multiple performance metrics must be monitored and assessed for how they relate to fairness, bias and ethical AI. Here are some general ways model monitoring solutions can help prevent bias and other performance problems from developing; the specifics and their effectiveness depend on the individual solution, and a simplified sketch of such a monitor follows the list.

  • Model monitoring can check whether model output stays within the thresholds set during model design and build, including by integrating with ethical AI frameworks such as Aequitas. Solutions can also monitor for a wide range of input and output quality deviations.
  • Detection isn’t enough; organizations need to act on what they find. Proactive solutions can launch and orchestrate testing and remediation processes immediately, as needed. Automated, orchestrated remediation might involve retraining models, examining them further for emerging imbalances, or even shutting them down if extreme problems are found.
  • Model monitoring solutions can automate and conduct regular ethical fairness testing, and archive the results to ensure auditability.
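As promised above, here is a simplified, hypothetical sketch of that monitor-and-remediate loop. The thresholds, metric names and commented-out orchestration hooks are illustrative assumptions, not any vendor's actual interface.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model-monitor")

# Hypothetical acceptable ranges fixed during model design and build.
THRESHOLDS = {
    "disparate_impact_ratio": (0.80, 1.25),
    "accuracy": (0.90, 1.00),
}

def evaluate(metrics: dict) -> list:
    """Return the names of metrics that breached their design thresholds."""
    return [name for name, (lo, hi) in THRESHOLDS.items()
            if not lo <= metrics.get(name, float("nan")) <= hi]

def remediate(model_id: str, breaches: list) -> None:
    """Detection isn't enough: trigger an orchestrated response."""
    if "disparate_impact_ratio" in breaches:
        log.warning("%s: fairness breach; queueing retraining", model_id)
        # enqueue_retraining(model_id)   # hypothetical orchestration hook
    if len(breaches) == len(THRESHOLDS):
        log.error("%s: multiple breaches; taking model offline", model_id)
        # disable_endpoint(model_id)     # hypothetical orchestration hook

# Example run with invented production metrics for a fictional model.
breaches = evaluate({"disparate_impact_ratio": 0.62, "accuracy": 0.93})
if breaches:
    remediate("credit-risk-v7", breaches)
```

Archiving each evaluation result alongside the model version provides the audit trail mentioned in the last bullet.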

If the model cannot be monitored, and if the results cannot be proactively acted on, it will be very difficult to prevent ethical bias from developing.

Conclusion

Business conditions, demographics, consumer behavior and inherent data shifts all have the potential to change over a model's life cycle. So will model performance, because it depends on those conditions. For AI to remain ethical, its models must be built on an ethical foundation and be observable and fixable at every stage of their life cycle. To achieve the needed visibility and control over model operations, organizations need a coordinated effort that involves AI policy and governance leaders, line-of-business leaders, data science teams, IT organizations, and the multiple software and infrastructure systems involved in running AI, supplying its data, and producing and distributing its output. Neither policy, people nor technology alone is enough to prevent bias and ensure ethical AI performance. A holistic approach is needed, and it is greatly aided when monitoring and remediation tasks can be automated at each stage of the model life cycle.

Author Bio:

Dave Trier is VP of Product at ModelOp, where he leads the ModelOp Center product. Dave has over 15 years of experience helping enterprises implement transformational business strategies using innovative technologies, from AI and big data to cloud and IoT solutions. He is charged with defining and executing the product and solutions portfolio that helps companies overcome their ModelOps challenges and realize their AI transformation.


This is a Sponsored Feature.
