
A Framework for Analytics Operational Risk Management


By H. P. Bunaes, founder of AI Powered Banking.

As descriptive and predictive analytics are embedded in business processes in every nook and cranny of your organization, managing the associated operational risk is critical. A failure of your data analytics may, at best, impact operational efficiency; at worst, it could result in reputational damage or monetary loss. What makes this tricky is that analytics can appear to be working normally while erroneous results are being produced and sent to unsuspecting internal or external recipients downstream.

When there were only a handful of models in use, and they were developed by one group who controlled them from end to end, operational risk was manageable. But analytics is becoming pervasive, fragmented across many functions and lines of business, and operational risk is rising as a result. Many analytics groups have a long backlog of requests and resources are stretched thin. Monitoring of models in production may be low on the priority list. And it is the rare organization indeed that knows where all the analytics in operation are and how they are being used.

Some recent examples:

● A chief analytics officer at a large US bank described how a model for approving overdrafts was found deeply embedded in the deposit system. No one remembered it was there, never mind knew how it worked.
● Another described the “what the hell” moment when data critical to credit models one day simply disappeared from the data stream.
● And a consumer banking analytics head at another bank described how models used to predict delinquencies suddenly stopped working as the pandemic hit since data used to build them was simply no longer relevant.

WHY NOW?

The topic of model risk management has been well thought through, and in some sectors, such as banking, regulatory guidance is clear. But the focus of model risk management has been on model validation and testing: all the important things that need to happen prior to implementation.

But as one head of analytics told me recently “it’s what happens after the fact that is of greatest concern [now]”. A new head of Model Risk Management at a top 10 US bank told me that “operational risk management is top of mind”. And a recently retired chief analytics officer added that unfortunately “[data scientists] just don’t get operational risk.”

In many organizations, the full extent of their deployed analytics is not known. There is no consolidated inventory of analytics, so no one knows where it all is and what it does. One large US bank last year surveyed all of its predictive models in operation and found “thousands of models” that had not been through any formal approval, validation, or testing process, according to several people I spoke with.

TOOLS AND PLATFORMS

There are tools and platforms coming on the market for managing analytics op risk (often referred to, somewhat narrowly, as “MLOps”, for machine learning operations). I’ve counted 10 of them: Verta.ai, Algorithmia, quickpath, fiddler, Domino, ModelOp, superwise.ai, DataKitchen, cnvrg.io, and DataRobot (their standalone MLOps product formerly known as ParallelM). Each vendor takes a somewhat different approach to managing analytics ops risk. Oversimplifying a bit, most focus either on model monitoring or on model management; only a few try to do both. Algorithmia is strong in model management, quickpath in model monitoring. ModelOp and Verta.ai try to do both.

But none of them has a prescribed operational risk management (ORM) framework. And without an effective framework for managing analytics in use, no tool will solve the problem.

In this article I will describe what an effective ORM framework for analytics should include, at a minimum.

Comprehensive Operational Risk Management Framework for Analytics

MODEL MANAGEMENT

The keystone of any ORM framework is a comprehensive model inventory: a database of models including all documentation, metadata (e.g. input data used with its source and lineage, results produced and where consumed), and operational results and metrics. Knowing what and where all of your analytics are, and where and how they are being used, is a prerequisite for good ORM. You can’t manage what you don’t know about.

Requiring that all data about each model be captured and stored centrally prior to implementation and use is the first bit of policy I’d recommend. All of the model validation and testing done in an effective Model Risk Management process needs to be captured in the model inventory/database. And all model inputs and model outputs, their sources and their destinations, need to be cataloged.
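As a concrete illustration, here is a minimal sketch of what one entry in such a model inventory might hold. The schema and field names are my own assumptions for the example, not a prescription; a real inventory would live in a governed database rather than application code.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ModelRecord:
    """One entry in a central model inventory (illustrative schema)."""
    model_id: str                    # unique identifier
    version: str                     # version currently in production
    owner: str                       # accountable team or individual
    business_use: str                # what decision the model supports
    inputs: list = field(default_factory=list)           # upstream data sources and lineage
    outputs: list = field(default_factory=list)          # downstream consumers of results
    validation_docs: list = field(default_factory=list)  # links to validation/testing evidence
    approved_on: Optional[date] = None                   # None = not yet formally approved

    def is_approved(self) -> bool:
        return self.approved_on is not None

record = ModelRecord(
    model_id="overdraft_model",
    version="2.3.1",
    owner="consumer-deposits-analytics",
    business_use="overdraft approval",
    inputs=["deposits.balance_daily", "bureau.credit_score"],
    outputs=["deposit_system.overdraft_decision"],
)
print(record.is_approved())  # False: under the policy, this should block implementation
```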

The second bit of policy is that any use of a model must be captured centrally – who is using the model, why, and to do what? The framework falls apart if there are unknown users of models. As described in a great paper on hidden technical debt in machine learning systems, a system of models can grow over time such that a change to one model can affect many downstream models. “Changing anything changes everything.”

The second critical piece to analytics operational risk management is good change management: data change management, IT change management, and model change management. Nothing ever stays the same. The environment changes, client and competitor behavior changes, upstream data sources come and go, and the IT environment is in a constant state of change. From my experience, and confirmed through many conversations with industry practitioners, the primary reason that models fail in operation is poor change management. Even subtle changes, with no obvious impact to downstream models, can have dramatic and unpredictable effects.

Changes to data need to go through a process for identifying, triaging, and remediating downstream impacts. A database of models can be used to quickly identify which models could be impacted by a change in the data. The data changes then need to be tested prior to implementation, at least for models exceeding some risk threshold. Changes to models themselves need to be tested as well when those results, even if more accurate for one purpose, are consumed by multiple applications or as inputs to other models downstream. And, of course, changes to the IT environment need to be tested to be sure that there isn’t an impact to models such as latency or performance under load.
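To illustrate the triage step, the sketch below walks a hypothetical dependency map (the kind of input/output catalog the model inventory should hold) to find every model downstream of a changed data element. The names and structure are invented for the example.

```python
from collections import deque

# Hypothetical catalog: each data element or model maps to its direct consumers.
# In practice this would be derived from the model inventory's input/output records.
dependencies = {
    "bureau.credit_score": ["pd_model", "limit_model"],
    "pd_model": ["provisioning_model"],
    "deposits.balance_daily": ["overdraft_model"],
    "overdraft_model": ["fee_forecast_model"],
}

def impacted_models(changed: str) -> set:
    """Breadth-first walk to find everything downstream of a changed input."""
    seen, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for consumer in dependencies.get(node, []):
            if consumer not in seen:
                seen.add(consumer)
                queue.append(consumer)
    return seen

print(impacted_models("bureau.credit_score"))
# {'pd_model', 'limit_model', 'provisioning_model'} -> test these before the change ships
```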

People tend to dislike a change management process viewed as slow or bureaucratic, so change management has to be time and cost efficient: higher priority changes go through first, for example, while routine changes follow at a lower priority. If the change management process is slow and burdensome, people will inevitably try to go around it, degrading its effectiveness.

MODEL MONITORING

Model monitoring means actively watching models for signs of degradation or of increasing risk of failure (prior to any measurable degradation). An analytics head at a top 10 US bank confided that “modelers just don’t think monitoring is important”. Monitoring must include watching the incoming data for drift, data quality problems, anomalies, or combinations of data never seen before. Even subtle changes in the incoming data can have dramatic downstream effects. There must be operational metrics and logs capturing all incoming data and outgoing results, performance relative to SLAs, volumes over time, and a record of all control issues or process failures.
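One common way to watch incoming data for drift is the population stability index (PSI), which compares the distribution of a feature today against a baseline sample. A minimal sketch, with the usual industry rule-of-thumb thresholds noted in the comments (a convention, not a standard):

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline sample ('expected') and current data ('actual').
    Rule of thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 investigate."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # Widen the outer edges so every 'actual' value lands in a bin.
    edges[0] = min(edges[0], actual.min()) - 1e-9
    edges[-1] = max(edges[-1], actual.max()) + 1e-9
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)  # avoid log(0)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0, 1, 10_000)
today = rng.normal(0.3, 1, 10_000)   # a subtle shift in the incoming data
print(round(population_stability_index(baseline, today), 3))
```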


Operational data on models must be captured and logged to provide an audit trail, for diagnostics, and for reporting purposes. Logs should include all incoming data used in the model and all resulting predictions output, as well as volume and latency metrics for tracking performance against SLAs. Traceability, explainability, and reproducibility will all be necessary for third-line-of-defense auditors and regulators.

Traceability means the full data lineage from raw source data through all data preparation and manipulation steps prior to model input. Explainability means being able to show how models arrived at their predictions, including which feature values were most important to the predicted outcomes. Model reproducibility requires keeping a log not only of incoming data, but of the model version, so that results can be replicated in the future after multiple generations of changes to the data and/or the model itself.
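A minimal sketch of what such logging might look like at the point of scoring. The wrapper, field names, and JSON line format are assumptions for illustration; the point is that the inputs, the output, the model version, and latency are captured on every call:

```python
import hashlib, json, logging, time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("model_audit")

def logged_predict(model, model_version: str, features: dict):
    """Score one record and emit an audit log line: inputs, output,
    model version (for reproducibility), and latency (for SLA tracking)."""
    start = time.perf_counter()
    prediction = model(features)                  # stand-in for any scoring call
    latency_ms = (time.perf_counter() - start) * 1000
    log.info(json.dumps({
        "ts": time.time(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()).hexdigest(),
        "features": features,                     # full inputs, per the audit requirement
        "prediction": prediction,
        "latency_ms": round(latency_ms, 2),
    }))
    return prediction

# Toy usage with a trivial scoring function
score = lambda f: 1 if f["utilization"] > 0.8 else 0
logged_predict(score, "v2.3.1", {"utilization": 0.92, "tenure_months": 14})
```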

Issue logs must be continuously updated to describe any process failures (e.g. unanticipated incoming data changes), control failures (e.g. data quality problems), or outages causing models to go offline temporarily. Auditors and regulators will want to see a triage and escalation process demonstrating that the big issues are identified and get the right level of attention quickly.

ETHICS AND MODEL BIAS

Models must be tested for bias and independently reviewed for fairness and appropriateness of data use. Reputational risk assessments should be completed, including a review of the use of any sensitive personal data. Models should be tested for bias across multiple demographics (gender, age, ethnicity, and location). Models used for decisioning, such as credit approval, must especially be independently reviewed for fairness. A record of declines, for example, should be reviewed to ensure that the model is not systematically declining any one demographic unfairly. It is an unavoidable consequence of building predictive models that any model trained on biased data will itself be biased. It may therefore be necessary to mask sensitive data from the model that could result in unintentional model bias.
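As a simple illustration of the decline review described above, the sketch below computes approval rates by demographic group and a disparate impact ratio on invented data. The four-fifths threshold is a common heuristic borrowed from US fair-lending and employment practice, not something the framework mandates:

```python
import pandas as pd

# Hypothetical decision log: one row per credit application
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,    1,   0,   0,   0,   1,   0,   1],
})

rates = decisions.groupby("group")["approved"].mean()
print(rates)  # approval rate per group

# Disparate impact ratio: lowest group approval rate vs. highest.
# The "four-fifths rule" heuristic flags ratios below 0.8 for review.
ratio = rates.min() / rates.max()
print(f"disparate impact ratio = {ratio:.2f}",
      "-> review" if ratio < 0.8 else "-> ok")
```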

REPORTING

Lastly, it is not enough to have an effective model management and monitoring process. One must be able to prove to auditors and examiners that it works. For that you need good reporting (a sketch of how one such report might be assembled follows the list), which includes:

● An inventory of all models in operation
● A log of all model changes in a specified time period (this quarter to date, last full quarter, year to date, etc.): new models implemented, model upgrades, and models retrained on new data
● A log of data changes: new data introduced, new features engineered, or changes in data definitions or usage
● For changes to existing models, performance metrics on out-of-sample test data before and after the enhancement
● For each model in production, the ability to generate a detailed report of model operation, including a log of data in/results out, model accuracy metrics (where ground truth can be known after the fact), and operational metrics (number of predictions made, latency, and performance under load for operationally critical models)
● Issue log: issue description, priority, date logged and aging, remediation status, escalation status, actions to be taken, and the individual responsible for closure, plus new and closed issues in a given period
● Operational alert history: for a given period, for each model, a report of all incoming data alerts (missing data, data errors, anomalies in the data)
● Data change management logs showing what data changed and when, and which models were identified as potentially affected and tested
● IT change management logs showing changes to the infrastructure affecting models
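As a sketch of how one of these reports might be assembled from the central issue log, here is an issue-aging summary by priority on hypothetical data:

```python
import pandas as pd

# Hypothetical issue log pulled from the model inventory database
issues = pd.DataFrame({
    "issue_id": ["I-101", "I-102", "I-103"],
    "model_id": ["pd_model", "overdraft_model", "pd_model"],
    "priority": ["high", "low", "high"],
    "opened":   pd.to_datetime(["2020-07-01", "2020-08-15", "2020-09-01"]),
    "status":   ["open", "closed", "open"],
})

as_of = pd.Timestamp("2020-09-30")
open_issues = issues[issues["status"] == "open"].copy()
open_issues["age_days"] = (as_of - open_issues["opened"]).dt.days

# Aging summary by priority: the kind of view an examiner asks for
print(open_issues.groupby("priority")["age_days"].agg(["count", "max", "mean"]))
```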

In my experience, auditors and examiners presented with a comprehensive report package for review can be satisfied that you have an effective process in place and are likely to stop there. If no such evidence is available, they will look much deeper into your organization’s use of models, which will be disruptive to operations and likely result in a long list of issues for management attention.

ORGANIZATIONAL MODEL

There are multiple ways to create the right organizational partnerships for effective analytics ORM. The brute force method would be to create a new organizational unit for “analytics operations”. One could argue in favor of this approach that this new organizational unit could be built with all the right skills and expertise and could build or select the right tools and platforms to support their mission.

But a better approach might be to create a virtual organization comprised of all the key players: data scientists, data engineers (the CDO’s organization, typically), the business unit, model risk management (typically in Corporate Risk Management, but sometimes found in Finance or embedded in multiple business units), traditional IT, and audit.

Orchestrating this partnership requires clear roles and responsibilities, and well-articulated, documented policies and procedures explaining the rules of the road and who is responsible for every aspect of analytics ORM.

The latter is harder to pull off and requires more upfront thought and investment, but it may yield a better and more efficient result in the long run, as everyone has a stake in the success of the process and existing resources can be both leveraged and focused on the aspects of the framework they are best suited to support.

CONCLUSION

As organizations increasingly become analytics driven, a process for managing analytics operational risk will safeguard the company from unpleasant surprises and ensure that analytics continue to operate effectively. Some might argue that the process outlined here will be costly to build and operate. I would argue (a) that organizations are already spending more than they think on model operations, management, and maintenance; (b) that unexpected failures which cascade through the data environment are harder and more costly to fix than proactive prevention; and (c) that a centrally managed process will free up expensive resources to do more of the high-value-add work the business needs. Companies that want to scale up analytics will find that an effective ORM framework creates additional capacity, speeds the process, and eliminates nasty surprises.

Author Bio:

H.P. Bunaes has 30 years of experience in banking, with broad banking domain knowledge and deep expertise in data and analytics. After retiring from banking, H.P. led the financial services industry vertical at DataRobot, designing the go-to-market strategy for banking and fintech and advising hundreds of banks and fintechs on data and analytics strategy. H.P. recently founded AI Powered Banking (https://aipoweredbanking.net) with a mission of helping banks and fintechs leverage their data and advanced analytics, and helping technology firms craft their GTM strategy for the financial services sector. H.P. is a graduate of M.I.T., where he earned an M.S. in Information Technology.

 

This is a Sponsored Feature.


Deloitte: Middle East organizations need to rethink their workforce in the wake of COVID-19


Organizations in the Middle East have had to take immediate actions in reaction to the COVID-19 pandemic, such as shifting to remote and virtual work, implementing new ways of working and redirecting the workforce on critical activities. According to Deloitte’s 10th annual 2020 Middle East Human Capital Trends report, “The social enterprise at work: Paradox as a path forward,” organizations now need to think about how to sustain these actions by embedding them into their organizational culture.

“COVID-19 has created a clarifying moment for work and the workforce. Organizations that expand their focus on worker well-being, from programs adjacent to work to designing well-being into the work itself, will help their workers not only feel their best but perform at their best. Doing so will strengthen the tie between well-being and organizational outcomes, drive meaningful work, and foster a greater sense of belonging overall,” said Ghassan Turqieh, Consulting Partner, Human Capital, Deloitte Middle East.

According to the Deloitte report, many organizations in the Middle East made quick arrangements to engage with employees in the wake of the pandemic through frequent communications, multiple webinars where senior leaders addressed employee concerns, virtual employee events, manager check-ins, periodic calls and other targeted interactions with the workforce.

The report also discussed how the UAE and KSA governments have reexamined work policies and practices, amended regulations and introduced COVID-19 initiatives to support companies and the workforce in the public and private sectors. Flexible and remote working, team-building and engagement activities, wellness programs, recognition awards and modern workspaces are among the many things that are now adding to the employee experience.

Key findings from the Deloitte global report include:

  • Only 17% of respondents are making significant investments in reskilling to support their AI strategy with only 12% using AI primarily to replace workers;
  • 27% of respondents have clear policies and practices to manage the ethical challenges resulting from the future of work despite 85% of respondents saying the future of work raises ethical challenges;
  • Three-quarters of leaders are expecting to source new skills and capabilities through reskilling, but only 45% are rewarding workers for the development of new skills; and
  • Only 45% of respondents are prepared or very prepared to take advantage of the alternative workforce to access key capabilities despite gig workers being likely to comprise 43% of the U.S. workforce this year according to the Bureau of Labor Statistics.

“Worker well-being is a top priority today, and similarly to the rest of the world, companies in the Middle East are focusing their efforts to redesign work around well-being by understanding workforce well-being needs,” said Rania Abu Shukur, Director, Human Capital, Consulting, Deloitte Middle East.


One in five insurance customers saw an improvement in customer service over lockdown, research shows


SAS research reveals that insurers improved their customer experience during lockdown

One in five insurance customers noted an improvement in their customer experience over lockdown, according to research conducted by SAS, the leader in analytics. This far outweighed the 11% of customers who felt it had deteriorated over the same period.

This is positive news for insurers during such challenging times, with 59% of customers also saying that they would pay more to buy or use products and services from any company that provided them with a good customer experience over lockdown.

The improvement in customer experience also coincides with a rise in the number of digital customers. Since the pandemic started, the number of insurance customers using a digital service or app has grown by 10%. Three-fifths (60%) of new users plan to continue using these digital services moving forward.

However, while the number of digital users grew over lockdown, half of the insurance customer base has not yet chosen to move to digital insurance apps or services.

Paul Ridge, Head of Insurance at SAS UK & Ireland, said:

“It’s impressive that there was a net improvement in customer experience during lockdown, despite the challenges the industry was facing with a transition to remote working and increased claims for things like cancelled holidays. While many were forced to wait on customer help lines for long periods, part of the improvement may be explained by even a small (10%) increase in the number of digital users.

“However, it’s clear that a huge number of customers are still yet to make the move online. It’s vital that insurers provide the most accurate, timely and relevant offerings to customers, and this is best achieved by having additional insight into online customer journeys so they can understand them better. Using analytics and AI, insurers can seize this opportunity to digitalise their customer experience and offer a more personalised approach.”

Meanwhile, for insurers that fail to offer a consistently satisfactory customer experience, the price could be severe. A third (33%) of customers claimed that they would ditch a company after just one poor experience, and 90% would do so after between one and five poor examples of customer service.

For more insight into how other industries across EMEA performed during lockdown, download the full report: Experience 2030: Has COVID-19 created a new kind of customer? 


The power of superstar firms amid the pandemic: should regulators intervene?


By Professor Anton Korinek, Darden School of Business and Research Associate at the Oxford Future of Humanity Institute, and Gosia Glinska, Associate Director of Research Impact, Batten Institute for Entrepreneurship and Innovation, Darden School of Business.

Recent news that Apple hit a market cap of USD2 trillion highlights an extraordinary success story: A once struggling computer-maker on the verge of bankruptcy innovates its way to becoming the most valuable publicly traded company in the United States.

Apple’s 13-figure valuation is indicative of a larger trend that is not entirely benign — the rise of a handful of superstar firms that dominate the economy. Over the past three decades, advances in information technology, mainly the Internet, have supercharged the superstar phenomenon, allowing a small number of entrepreneurs and firms to serve a large market and reap outsize rewards. And COVID-19 has greatly accelerated the phenomenon by pushing us all into a more virtual world.

Apple — along with Amazon, Facebook, Google, Microsoft and Netflix — is a case in point. The combined market value of those six companies exceeds USD7 trillion, which accounts for more than a quarter of the entire S&P 500 index. Even amid the pandemic’s economic wreckage, these megacompanies continue to prosper. The combined share price for Apple and its five peers was up more than 43 percent this year, while the rest of the companies in the S&P 500 collectively lost about 4 percent.[1]

Superstar firms can be found in almost every sector of the economy, including tech, management, finance, sports and the music industry. They command increasing market power, which has consequences for technological, social and economic progress. It is, therefore, critical to understand how their advantages arose in the first place.

THE FORCES BEHIND THE SUPERSTAR PHENOMENON

The “economics of superstars” was first studied by the late University of Chicago economist Sherwin Rosen. Forty years ago, Rosen argued that certain new technologies would significantly enhance the productivity of talented workers, enabling superstars in any industry to greatly expand the scope of their market while reducing market opportunities for everyone else.[2] Digital innovations, including advances in the collection, processing and transmission of information, are what Rosen envisioned would lead to the superstar phenomenon.

Digital technologies are information goods, which differ from the traditional, physical goods in the economy, meaning that fundamentally different economic considerations apply. Unlike physical goods — a loaf of bread or a car — information goods have two key properties: they are non-rival and excludable. Non-rival means that something can be used without being used up. Excludable means that the owner of a digital innovation can prevent others from using it, by protecting it with patents, for example. These two fundamental properties of information goods are what give rise to the superstar phenomenon.

In a working paper I co-authored with Professor Ding Xuan Ng at Johns Hopkins University,[3] we described superstars as arising from digital innovations that require an upfront fixed cost but reduce the marginal cost of serving additional customers.[4] For example, once an online travel agency has programmed its website at a fixed cost, it can easily displace thousands of traditional travel agents without much additional effort, scaling at near-zero cost.

Because a firm can exclude others from using its digital innovation, it automatically gains market power. The innovator then uses that power to charge a mark-up and earn a monopoly rent — basically, a price superstars charge in excess of what it costs them to provide the good — which we call the ‘superstar profit share’.
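In stylized terms (my notation for illustration, not necessarily the paper’s): if developing a digital innovation costs a fixed amount $F$ upfront and each additional customer costs a marginal amount $c$ to serve, then a firm serving $q$ customers at a price $p > c$ earns

```latex
\underbrace{(p - c)\,q - F}_{\text{monopoly rent}},
\qquad
\text{average cost per customer} = \frac{F}{q} + c \longrightarrow c
\quad \text{as } q \to \infty .
```

Because exclusion keeps the mark-up $p - c$ from being competed away, the rent grows with scale even as the average cost of serving each customer falls toward $c$. That is why the online travel agency in the example above can displace thousands of traditional agents at near-zero additional cost.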

THE POLICYMAKER’S DILEMMA

In a vibrant free market economy, businesses compete for customers by innovating and improving their offerings while keeping prices low; otherwise, they are displaced by more innovative rivals entering the market. Unfortunately, the increasing monopolization of the economy by technology superstars is weakening the competitive environment around the world.

Monopoly power is the main inefficiency from the emergence of superstar firms, because superstars can exclude others from using the innovation that they have developed.

So, what policy measures can be employed to mitigate the inefficiencies arising from the superstar phenomenon?

We do have antitrust policies designed to promote competition and hence economic efficiency. Authorities could take a drastic measure and break up monopolies. Or they could tax all those excess profits megacompanies make.

Another policy to consider involves giving consumers control rights over their data. Right now, only companies have that data, and they are selling it. If you free it up and don’t allow them to sell it anymore, it reduces their monopoly profits. And if you give consumers more freedom over their data, they could, for example, share it with the latest start-up and create a more competitive landscape.

However, such policy remedies can be a double-edged sword. On the one hand, they reduce monopoly rents. On the other hand, they can also reduce innovation.

Innovation requires investments in R&D, which represent a significant sunk cost that only large firms can afford. Government regulations can easily backfire, discouraging large firms from making long-term R&D investments.

What, then, is the best policy intervention? Professor Ding Xuan Ng and I believe that basic research should be public. Digital innovations should be financed by public investments and should be provided as free public goods to all. This would make the superstar phenomenon disappear, and the effects of digital innovation would simply show up as productivity increases.[5]

We live in a brave new world that is increasingly based on information. Because the information economy is different from the traditional economy, antitrust policy should be revamped to reflect that. Instead of worrying about the economy being eaten up by these gigantic monopolies, policymakers need to focus on the question ‘What specific actions can we pursue to make the economy more competitive and efficient?’
