The Art & Science of Getting the Most Value from AI

For years, many business activities have been managed under the principle “If you can’t measure it, you can’t improve it.” Yet that principle is often not applied to enterprise artificial intelligence model operations: two-thirds of organizations do not monitor their AI models at all. That could be why the value of some AI programs has plateaued. Model performance naturally degrades over time, and for financial institutions, even small degradations can have major consequences for the entire business.

We know from experience – and have the data to prove it – that continuous, actionable monitoring across the entire lifecycle is needed for an AI model to maintain optimal performance and to enforce risk and compliance controls. Monitoring AI models isn’t just good data science; it’s good business. This article will show you why, by highlighting how actionable monitoring optimizes the business value of a model, increases data science team productivity, and reduces risk.

AI performance and value are often discussed in terms of data science elements such as accuracy and data drift. These metrics are important for measuring model performance, but we need to look beyond them to understand a model’s value to the business. For example, what revenue impact would a slight (0.5%) performance degradation in an AI model used to detect fraud have? How would your revenue be affected by a 10% improvement in the performance of a model used to recommend lending decisions? These are the types of questions that actionable and continuous model monitoring can answer.
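To make questions like these concrete, here is a back-of-the-envelope sketch of how a 0.5% drop in a fraud model’s detection rate translates into dollars. All figures are hypothetical, chosen only for illustration; plug in your own volumes and rates.

```python
# All figures below are hypothetical, for illustration only.
def fraud_loss_impact(annual_volume, fraud_rate, avg_fraud_amount,
                      recall_before, recall_after):
    """Extra fraud losses per year caused by a drop in detection rate."""
    fraud_dollars = annual_volume * fraud_rate * avg_fraud_amount
    missed_before = fraud_dollars * (1 - recall_before)
    missed_after = fraud_dollars * (1 - recall_after)
    return missed_after - missed_before

# 10M transactions/year, 0.1% fraudulent, $500 average fraud amount,
# detection rate slips from 95.0% to 94.5% (a 0.5% degradation).
impact = fraud_loss_impact(10_000_000, 0.001, 500, 0.950, 0.945)
print(f"Additional annual fraud losses: ${impact:,.0f}")
```

Under these assumptions, the “slight” half-point slip costs $25,000 a year on this one model alone – and the loss compounds across every model in production.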

More accuracy leads to more profitability

Data science and metrics aside, you can begin to appreciate the value of monitoring when you understand that model performance changes over time. That isn’t a reflection of the AI team or how the models were built; it is a fundamental characteristic of models, especially AI models. The predictions or inferences that a model creates are intrinsically tied to the data used to build the model and the data that feeds it. But this data changes over time. Shifts in underlying business conditions, consumer behavior, or even catastrophic events cause models to produce unintended results and, likely, declines in business value. Without proactive monitoring, model performance typically decays at a rate of 10% annually, and as much as 50% for some models, directly impacting the business value a model returns.

The good news: this decay can be mitigated through proactive model monitoring and remediation. Setting monitors that comprehensively analyze a model’s performance against desired statistical, technical, and business thresholds gives real-time visibility into the current state of the model. The real value of proactive monitoring, however, comes from ensuring that steps are taken to address any deviation in the model’s expected performance before it becomes a real problem – a top- or bottom-line problem. Plus, if models are continually monitored and remediation is automated, there may be little or no need to take the models offline for testing and retraining. Improving uptime and keeping models in production longer improves the return on investment in model development.
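As a minimal sketch of what threshold-based monitoring can look like, the snippet below checks a model’s accuracy against a floor and measures input drift with the population stability index (PSI), a commonly used drift statistic. The metric choices and threshold values are assumptions for illustration, not any particular vendor’s defaults.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI: a common measure of drift between training and live data."""
    cuts = np.percentile(expected, np.linspace(0, 100, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf            # cover the full range
    e = np.histogram(expected, cuts)[0] / len(expected)
    a = np.histogram(actual, cuts)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

def check_model(accuracy, scores_train, scores_live,
                min_accuracy=0.92, max_psi=0.2):
    """Return a list of alerts that should trigger remediation."""
    alerts = []
    if accuracy < min_accuracy:
        alerts.append(f"accuracy {accuracy:.3f} below {min_accuracy}")
    psi = population_stability_index(scores_train, scores_live)
    if psi > max_psi:
        alerts.append(f"PSI {psi:.3f} above {max_psi}: input drift")
    return alerts
```

A PSI above roughly 0.2 is a common rule of thumb for significant drift. In a production ModelOps setup, each alert would feed an automated remediation workflow – retraining, a champion/challenger swap, or escalation to human review – rather than a manual inspection.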

Productivity provides value

Data science and AI positions are among the best-paid and hardest-to-fill positions in business today, and the talent shortage is holding back AI expansion at many enterprises. It is clearly in enterprises’ interests to keep these high-value employees engaged on high-value tasks, such as building new AI models, rather than on routine monitoring, maintenance, and troubleshooting that could be automated. Unfortunately, most organizations do not currently have a robust model operations (ModelOps) capability that uses automation to free data science teams from these routine tasks. Half of organizations say their ability to monitor models to detect and fix issues in a timely way is ineffective or not very effective (2021 State of ModelOps Report). Perhaps it is no coincidence that Gartner found that “approximately half of all AI models never make it into production due to lack of ModelOps.”

Automating model monitoring and remediation throughout the model lifecycle is the key to scaling AI. According to a Forrester study: “A top complaint of data science, application development & delivery (AD&D) teams, and, increasingly, line-of-business leaders is the challenge in deploying, monitoring, and governing machine learning models in production. Manual handoffs, frantic monitoring, and loose governance prevent organizations from deploying more AI use cases.”

The more AI models that enterprises can put into production for business decisioning—and the longer they can sustain optimal business value from them—the greater return they will realize from their investments in data science staff, big data, AI development tools and the supporting IT infrastructure.

Compliance, cost avoidance and automated risk controls

More potential business value, in the form of cost avoidance, is available when automated remediation is combined with model monitoring. As I detailed in a previous article, compliance, bias and other AI risks are not only constant, they are constantly evolving because of changes in demographics, consumer behaviors and attitudes, business conditions and the model performance itself. These changes make it difficult to enforce risk and compliance controls in AI models unless the models are continuously monitored. 

The results of monitoring can be documented through some of the AI metrics referenced earlier. The business impact can be viewed in terms of compliance violations, liability and reputational harm that are avoided. While it is hard to calculate the value of these benefits, consider that two of the eight largest regulatory fines issued in 2020 – $400 million assessed against a finance company by the U.S. Office of the Comptroller of the Currency (OCC), and a separate $85 million penalty the OCC levied on a bank – could be attributed to the failure to implement effective risk management controls. Advanced model monitoring solutions can help organizations avoid these costs, improve compliance and maintain the ethical use of AI by automating and operationalizing AI risk controls. 

So far this article has focused on showing how businesses benefit from monitoring their AI models, but it hasn’t focused on how to actually monitor them. That subject gets technical and there are many competing approaches. 

Without getting too deep into the data science, here are two fundamentals:

1) Monitor four key components

  • Operations: is the AI system meeting SLAs for the business applications or processes that use the models?
  • Quality: are model decisions and outcomes optimized? Are models being tested against known conditions to measure their performance?
  • Risk: are models operating within the predefined thresholds and risk controls? Is bias developing in the models?
  • Processes: are models progressing efficiently through life cycle stages, including model risk validation and eventual production enablement? Are governance processes being followed?
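The four components above can be rolled up into a single health report per model. The sketch below is illustrative only – the metric names and threshold values are assumptions, not a standard API – but it shows how operations, quality, risk, and process checks combine into one list of findings for remediation.

```python
from dataclasses import dataclass, field

# Hypothetical thresholds for illustration; real values come from your
# SLAs, model risk policies, and governance requirements.
@dataclass
class ModelHealthReport:
    latency_p99_ms: float        # Operations: is the SLA being met?
    accuracy: float              # Quality: are outcomes optimized?
    bias_disparity: float        # Risk: e.g., demographic-parity gap
    lifecycle_stage: str         # Processes: where is the model today?
    findings: list = field(default_factory=list)

    def evaluate(self):
        if self.latency_p99_ms > 200:
            self.findings.append("Operations: p99 latency exceeds 200 ms SLA")
        if self.accuracy < 0.90:
            self.findings.append("Quality: accuracy below 0.90 threshold")
        if self.bias_disparity > 0.05:
            self.findings.append("Risk: bias disparity above 0.05 control")
        if self.lifecycle_stage not in ("validated", "production"):
            self.findings.append("Processes: model not through risk validation")
        return self.findings
```

An empty findings list means the model is healthy on all four dimensions; anything else is a candidate for automated remediation or governance review.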

2) Monitor across the entire model life cycle: It is not enough to test a model before putting it into production, and then quarterly or yearly thereafter. For both technical and business reasons, testing and monitoring need to be continuous, from the time models are developed to the time they are retired, after which data and records still must be retained for auditability. That can be automated too.

Unsure whether your model monitoring and AI governance are holding you back from getting full value from your AI investments? Here are some questions to help organizations determine whether their model monitoring process needs an upgrade:

  • How many models are in production?
  • Where are the models running?
  • Are model predictions or decisions being made in a timely manner?
  • Are model results reliable and accurate?
  • Are compliance and regulatory requirements being satisfied?
  • Are models performing within established business, operational and risk controls and thresholds?
  • How is model performance changing over time?

If your processes and systems can’t answer these questions, you probably need to update them. The effort is worth the investment, because of the improvements in model performance, staff productivity, risk exposure and compliance that optimized, properly managed models provide. Here is one more question to consider: If AI models are used to help the business, wouldn’t they help more if they were running as intended and at peak accuracy at all times?

Author: Dave Trier, VP of Product at ModelOp, maker of the ModelOp Center product. Dave has over 15 years of experience helping enterprises implement transformational business strategies using innovative technologies, from AI, big data, and cloud to IoT solutions. As VP of Product, he is charged with defining and executing the product and solutions portfolio to help companies overcome their ModelOps challenges and realize their AI transformation.

[1] Corinium Intelligence, 2020.
[2] ModelOp, “2021 State of ModelOps Report.”
[3] Gartner, “Innovation Insights for ModelOps,” August 6, 2020.
[4] Forrester, “Introducing ModelOps To Operationalize AI,” August 13, 2020.

This is a Sponsored Feature
