Martijn Groot, VP of Product Strategy at Asset Control
The Fundamental Review of the Trading Book (FRTB) is designed to provide institutions and regulators with an accurate risk measure of the potential impact of worst-case scenarios. But as financial institutions begin to wrestle with this upcoming regulatory demand, the challenges associated with collecting and validating ten years of history across every single instrument in order to identify the worst 2.5% of performance should not be underestimated.
FRTB is just one component of a reinvigorated focus on historical data – but it is one that, should an organisation fail to meet it, will have significant implications for capital requirements. Furthermore, this is not just a minor tweak to existing requirements: FRTB is both deep and wide-ranging and demands a substantial data infrastructure overhaul.
From identifying gaps in history, to flagging history that doesn’t qualify for use due to inaccuracy and adding external data sources and proxies, Martijn Groot, VP of Product Strategy at Asset Control insists that this renewed regulatory focus on historical time series data demands a strong information management architecture.
Value at Risk
One of the many outcomes of the 2007-2008 banking crisis was a devaluing of the Value at Risk (VaR) measure, in use since the mid-1990s to evaluate an institution’s day-to-day risk. When unprecedented and unpredicted events occurred, a risk probability calculated on a ‘business-as-usual’ basis simply did not stand up.
While the most recent regulatory focus has been on stress testing, the complete market risk evaluation is now being revisited – and with far more stringent information demands. These demands are designed to overcome the deficiencies of the VaR metric and, critically, estimate the size of potential losses in the event of unusual events, rather than putting an upper bound on losses in a business-as-usual day.
The Fundamental Review of the Trading Book (FRTB) should provide both institutions and regulators with a more accurate and trusted risk measure. However, achieving this objective will require far more than simply extending the original VaR calculations. Indeed, FRTB has thrown the VaR metric out completely – rather than looking to determine the maximum possible loss on a normal day, institutions must now calculate the Expected Shortfall (ES) on an abnormal day. Essentially, the demand now is to create a daily metric that estimates potential losses on the worst 2.5% of days for any given institution.
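By way of illustration only – and not as a regulatory-grade implementation – the core of the ES calculation can be sketched as the average of the worst 2.5% of observations in a return series:

```python
def expected_shortfall(returns, confidence=0.975):
    """Average loss in the worst (1 - confidence) tail of a return series.

    A simplified historical-simulation sketch: sort returns from worst to
    best and average the tail. The full FRTB ES is computed over a stressed
    period with liquidity-horizon adjustments, which are omitted here.
    """
    if not returns:
        raise ValueError("need at least one observation")
    ordered = sorted(returns)                   # worst (most negative) first
    tail_size = max(1, int(len(ordered) * (1 - confidence)))
    tail = ordered[:tail_size]
    return -sum(tail) / tail_size               # report loss as a positive number
```

With 2,500 daily observations (roughly ten years), the tail comprises the 62 worst days; the sketch above simply averages them.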
With regulators becoming more prescriptive and demanding less unwarranted variation between firms, FRTB is, of course, not the only legislative change refocusing activities towards historical data. However, the challenges associated with such wide-ranging data requirements may take some organisations by surprise. The shift in emphasis may sound straightforward, but drill down into the detail and the information management requirements associated with FRTB are significant and multi-layered. One of the intrinsic differences between the ES and VaR calculations is that the former is based on ten years of history, as opposed to the much shorter history required previously. It is paramount that this data is of sufficient quality and frequency to ensure there are no gaps and no inconsistencies – and it needs to be validated and audited.
In addition, FRTB distinguishes between modellable and non-modellable risk factors. A risk factor qualifies as modellable only if sufficient observable traded prices are available; otherwise it is non-modellable. The point here is that an organisation needs different sets of market data for the identification and the calibration of these risk factors: to determine whether a risk factor is modellable, only transaction prices can be used, whereas a wider permitted set of market data applies to its calibration.
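As a rough sketch of the observability test – the thresholds below follow the January 2016 FRTB text (at least 24 real price observations over the look-back year, with no gap longer than roughly a month), and firms should check the rule set in force:

```python
from datetime import date, timedelta

def is_modellable(observation_dates, min_obs=24, max_gap_days=31):
    """Sketch of the FRTB real-price observability test: at least
    `min_obs` observed transaction prices, with no gap between
    consecutive observations longer than `max_gap_days`. Thresholds
    are illustrative, per the January 2016 FRTB text.
    """
    if len(observation_dates) < min_obs:
        return False
    dates = sorted(observation_dates)
    return all((later - earlier).days <= max_gap_days
               for earlier, later in zip(dates, dates[1:]))
```

A risk factor observed weekly would pass this test; one observed only every two months would fail on the gap criterion despite a high observation count.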
So what happens if an institution doesn’t have ten years of history for each and every aspect of the trade portfolio? If it does not have real prices? Or continuous, gap-free observations? Obviously estimates can be used – but such calculations need to be audited and they must meet very specific validation requirements.
The right information management infrastructure to collect and retain this data is clearly important – but institutions will also need a way to identify gaps in the history, flag any history that doesn’t qualify for use within the risk management calculation and support the use of third party sources where required to build the complete picture.
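Gap identification itself is mechanical but must be systematic. A minimal sketch, assuming a simple date-to-price mapping and ignoring exchange holiday calendars:

```python
from datetime import date, timedelta

def find_gaps(observations, start, end):
    """Return business days in [start, end] with no price observation.

    `observations` maps date -> price. Weekends are skipped; a full
    implementation would also consult an exchange holiday calendar
    per market.
    """
    have = set(observations)
    gaps = []
    day = start
    while day <= end:
        if day.weekday() < 5 and day not in have:   # Monday-Friday only
            gaps.append(day)
        day += timedelta(days=1)
    return gaps
```

Each flagged date then becomes a candidate for sourcing from a third party or for an audited proxy.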
There is no doubt that the majority of institutions will have to turn to external third party data providers, including brokers and pricing providers, to fill the gaps in their time series data. However, poor validation is one of the biggest potential issues facing organisations creating the ES metric. Without strong screening mechanisms, there is a profound risk that erroneous data could be included in the calculations. For example, if a screen has not refreshed its quotes the result will be a flat graph showing the same price for some time – something that would clearly disqualify the history from the calculations.
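The stale-quote symptom described above – a flat run of identical prices – is straightforward to screen for automatically. A sketch, with an illustrative threshold that a real screening policy would calibrate per asset class:

```python
def flag_stale_runs(prices, max_flat_days=5):
    """Flag (start, end) index ranges where the price is unchanged for
    more than `max_flat_days` consecutive observations -- a typical
    symptom of a feed that stopped refreshing. The threshold is
    illustrative only.
    """
    flagged = []
    run_start = 0
    for i in range(1, len(prices) + 1):
        # A run ends at the last price or when the value changes.
        if i == len(prices) or prices[i] != prices[run_start]:
            if i - run_start > max_flat_days:
                flagged.append((run_start, i - 1))
            run_start = i
    return flagged
```

Flagged runs would then be excluded from the history, or replaced from an alternative source, before the ES calculation.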
Another challenge will be the use of proxies. If an institution opts to use another instrument to approximate or plug a gap in the history, it will be important to record on what basis the proxy has been used, and why, in order to prove to regulators the proxy’s validity as a comparable security.
Clearly, to achieve a reliable picture of the market, it will be important to combine multiple data sources – for example, adding quotes from several market makers and taking the average. The ability to combine multiple inputs, identify outliers and validate data sources is required to build up and verify this ten-year history.
It is only once the complete history of returns for each instrument is in place that organisations can begin to create the ES metric. At this point, the institution must rank returns, sorting them from worst to best, zoom in on the tail – the worst 2.5% of cases – and determine the average loss across them. In addition to the challenge of scanning these histories for the most stressful period across potentially thousands of risk factors, institutions will need to take into account different periods, ranging from ten to 120 days, depending on the liquidity horizon of the instrument.
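The scan for the most stressful period can be sketched as a rolling-window search over each risk factor’s return history – here summing daily returns over the window as a simple aggregation, which is an approximation rather than the full regulatory methodology:

```python
def worst_window(returns, horizon):
    """Locate the most stressful contiguous period of `horizon` daily
    returns: the window with the lowest cumulative (summed) return.
    Returns (start_index, cumulative_return). Summing daily returns
    approximates log-return aggregation over the window.
    """
    if len(returns) < horizon:
        raise ValueError("history shorter than horizon")
    window = sum(returns[:horizon])
    worst, worst_start = window, 0
    for i in range(horizon, len(returns)):
        window += returns[i] - returns[i - horizon]   # slide by one day
        if window < worst:
            worst, worst_start = window, i - horizon + 1
    return worst_start, worst
```

Run once per risk factor and per applicable liquidity horizon, this is the kind of daily scan the text describes – which is why the speed of the underlying data infrastructure matters.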
Speed is essential. The ability to focus quickly on the worst 2.5% of returns for each risk factor is key – and demands that an institution can scan and identify these thousands of risk factors on a daily basis. The immediate challenge, however, is to get that data infrastructure in place. Collecting the data, identifying gaps, introducing new sources, validating ten years of history across every single risk factor and quickly identifying the most turbulent periods for each time series will be a major project.
While FRTB will soon become a regulatory requirement, there are benefits to organisations over and above compliance. Firstly, this in-depth, accurate and validated data source can become the base upon which market shock/stress tests are applied – delivering another aspect of the regulatory demands in the US and Europe.
However, over and above any regulatory compliance, there are significant financial implications associated with FRTB. Any bank that fails to achieve the ES calculation with a validated risk and information infrastructure will lose the right to use its own internal models to measure risk and will be forced to use the far coarser standardised models prescribed by regulators. The result will be a demand for far higher capital holdings – as much as five times higher, according to some estimates. Essentially, get FRTB right and banks will be in a position to make more efficient use of their capital, which is a clear commercial benefit beyond regulatory compliance.
This is a major shift in both mind-set and technology – and the sooner organisations embrace a new, robust data management architecture, the better.