By Stu Bailey, co-founder and chief enterprise AI architect at ModelOp
Recently I had the pleasure of opening a panel discussion on “Governance and Risk Management for AI and ML Models” that featured three very experienced, senior executives from leading financial institutions:
Jacob Kosoff, Head of Model Risk Management and Validation at Regions Bank
Dr. Menglin Cao, SVP, Head of AI NLP Model Development at Wells Fargo
Andreza Barbosa, Global Head of Model Governance at Goldman Sachs
The discussion centered on the challenges wrought by the rapidly growing use of artificial intelligence (AI) across their enterprises. They shared some very interesting anecdotes and offered insights that speak to a core theme in any enterprise AI journey: Balancing the pressure to maximize value by rapidly deploying AI models with the necessity to limit the organization’s risk exposure. For those who may not have the time to view the entire discussion – available here – I’d like to summarize a few highlights that I think are especially important.
Before I start – If you’re thinking that the panelists’ observations are only relevant for financial services companies, hold on a bit. They made it clear from the outset of the discussion that much if not most of their risk management practices are driven by internal controls and business practices rather than by government or industry regulations. As put succinctly by Jacob Kosoff from Regions Bank, “I would say if there was no regulation, our work would be the same.” So even if you’re not working in financial services or a regulated industry, read on as I think you’ll find value.
A key theme echoed by all panelists was the inherent tension between the ‘need for speed’ to wring the most value from the models produced by their data scientists and the need for rigorous and deliberate controls to protect against excess risk. Dr. Cao from Wells Fargo put it nicely: “When you ask people to describe AI they would say, ‘Oh, AI is smart and it’s got to be very fast.’ And then when you ask people to describe risk management, they would say, ‘It’s comprehensive and it takes time.’” This dichotomy lies at the core of any mature, enterprise AI program, and addressing it effectively may be the most important factor in determining an organization’s success with AI and corresponding success as an enterprise.
Balancing speed and governance with AI is still an emerging issue for businesses of all sizes. The fact that mature and typically conservative enterprises are spending considerable resources on addressing these challenges is a clear indication that we’ve moved past the early adoption phase for AI and have entered the scaling phase. But as the panelists explained, the initial exuberance around the transformative potential of AI needs to be tempered. As Jacob Kosoff told me, “I think executives in the banking space have a mindset view of, ‘I only want to use AI responsibly. I only want to drive a car if there’s brakes. I only drive a car if I can lift the hood. If there’s no brakes or I can’t lift the hood, I’m just not going to buy that car.’ So transparency in how AI is used and how it benefits consumers and businesses alike is crucial.”
To address these priorities, the panelists’ organizations have implemented robust model operations capabilities with risk management as a key driver of the process. While each program is unique to each company, they share a number of common elements:
- A comprehensive model inventory is used to collect and track all models and their artifacts.
- Each model is managed according to a lifecycle tailored to its unique needs, impacts and risks.
- The risk management team signs off on model releases for models with the highest risk classification, both those developed internally and those provided by third parties. In some cases, the risk team must approve the acquisition or deployment of any analytic asset.
- Automation is used extensively to eliminate bottlenecks, prevent missed steps, provide transparency and auditability, and ensure that their processes accelerate rather than hinder the ability to extract value from their models.
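To make these common elements concrete, here is a minimal Python sketch of how an inventory with a risk-gated lifecycle might be modeled. This is purely illustrative: the class and method names (`ModelInventory`, `risk_team_signoff`, and so on) are my own assumptions, not drawn from any panelist's actual system or from ModelOp's product.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

class Stage(Enum):
    REGISTERED = "registered"
    VALIDATED = "validated"
    APPROVED = "approved"
    DEPLOYED = "deployed"

@dataclass
class ModelRecord:
    """One entry in the comprehensive model inventory."""
    name: str
    owner: str
    risk_tier: RiskTier
    third_party: bool = False          # externally sourced models are tracked too
    stage: Stage = Stage.REGISTERED
    artifacts: list = field(default_factory=list)
    audit_log: list = field(default_factory=list)  # supports transparency/auditability

class ModelInventory:
    """Central inventory: every model and its lifecycle state lives here."""

    def __init__(self):
        self._models = {}

    def register(self, record: ModelRecord):
        self._models[record.name] = record
        record.audit_log.append("registered")

    def validate(self, name: str):
        record = self._models[name]
        record.stage = Stage.VALIDATED
        record.audit_log.append("validated")

    def approve(self, name: str, risk_team_signoff: bool = False):
        record = self._models[name]
        # High-risk models, internal or third-party, require risk-team sign-off.
        if record.risk_tier is RiskTier.HIGH and not risk_team_signoff:
            raise PermissionError(f"{name}: high-risk model needs risk-team sign-off")
        record.stage = Stage.APPROVED
        record.audit_log.append("approved")

    def deploy(self, name: str):
        record = self._models[name]
        if record.stage is not Stage.APPROVED:
            raise RuntimeError(f"{name}: cannot deploy before approval")
        record.stage = Stage.DEPLOYED
        record.audit_log.append("deployed")
```

In this sketch, attempting to approve a high-risk model without `risk_team_signoff=True` raises an error, and deployment is blocked until approval lands, so the enforcement that the panelists describe as process becomes a hard gate in code rather than a step someone can skip.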
Another key theme echoed by the panelists was the value of executive sponsorship for model governance. As noted by Andreza Barbosa, “We really drive this initiative in terms of ensuring that the development of AI and machine learning models follow our best practices, our policies, and procedures. And there’s also a very strong governance framework in the sense that we report to a number of very senior committees of the firm. So the tone really comes from the top around the relevance of adopting these risk management principles.”
The snippets above are just a few of the many valuable insights that were shared during the discussion, and I again encourage you to check out the full conversation. In the meantime, here are a few questions you may want to consider:
- Do you have visibility to all of the models in production in your company?
- Do you have a comprehensive inventory of all models and their artifacts?
- Is each model operationalized and governed according to a model lifecycle appropriate to its value and risk?
- Does the governance organization have the necessary visibility and controls across model lifecycles?
- Is automation in place to streamline processes and avoid errors and delays?
- How long would it take to respond to an internal or external audit?
- Are lines of communication among all stakeholders clear and operating?
- Do senior executives have visibility to and oversight of model governance?
The answers to these questions will be critical as you chart a course for your enterprise with AI that is both fast and responsible.
Stu Bailey is co-founder and Chief Enterprise AI Architect at ModelOp. He is a technologist and entrepreneur who has been helping large enterprises to effectively consume AI, leveraging data-intensive and AI-focused technology for over two decades. Stu has received several patents and awards and has helped drive emerging standards in analytics and distributed systems control.
This is a Sponsored Feature