As the FRTB implementation race begins, it is crucial that banks now gain a thorough understanding of the structure of their trading desks, refine front-office operating models and transform the market risk infrastructure effectively in order to meet the so-called 'FRTB standard'. With less than 18 months to go, banks are under pressure to run high-quality, consistent projects that will minimise the risk of higher capital charges and ensure integrity across the front office, risk and finance departments. The biggest challenges are yet to be addressed: the application of the P&L attribution test and the definition of non-modellable risk factors will be key topics.

Ahead of the 4th Edition Impact of the Fundamental Review of the Trading Book Conference, we spoke with Dr Jeremy Penn, Head of Market Risk Data Science at Credit Suisse, about the pros and cons of an Internal Model Approach versus a Standardised Approach and the most crucial steps in preparing trading desks for validation.

What are the pros and cons of an Internal Model Approach versus a Standardised Approach?

Jeremy Penn

Historically, capital calculation has emphasised conservatism of risk estimation. Under the new regime the Standardised Approach remains conservative, particularly with regard to aggregation. The Internal Model Approach, on the other hand, very much requires accuracy of risk estimation rather than conservatism. This shift, reinforced by the P&L Attribution test, is an interesting new development.

It is important for the industry to appreciate that, given the aggregation approach, a mixture of the Internal Model Approach and the Standardised Approach could lead to a higher capital requirement than the Standardised Approach alone. Furthermore, there will be a regulatory expectation for sophisticated institutions to have models of appropriate complexity for risk management purposes even if they use the Standardised Approach for capitalisation. This expectation is entirely reasonable, as risk models are as much about risk management as they are about capitalisation.

What are the most crucial steps of preparing trading desks for validation? 

One of Credit Suisse’s culture principles is for colleagues to develop, challenge and support each other. The subtle balance of collaboration with and challenge of Front Office by the central Risk function has always been key to effective Risk Management.

The finance function has a critical role in any bank's activities, and that function has specific professional requirements for the accounting procedures and adjustments that it uses. A clear understanding of these adjustments within the central Risk function will be invaluable. It will also be necessary to have good transparency on Front Office marking processes. Most institutions will want to carry out a desk-by-desk analysis of the risk factors in their capital model and the quality of risk capture.

What are the factors that could optimise P&L attribution testing? 

There are no real shortcuts to the P&L Attribution tests. To perform well in the test an institution will need two things. The first is comprehensive capture of material risks, including in many cases material basis risks.

The second key requirement is tight alignment of market data between Front Office and Risk. Clearly Front Office and Risk have subtly different goals for their market data. Front Office seeks on each day to value a position as accurately as possible. Risk, on the other hand, is seeking to construct a time series whose historical distribution is most representative of future risks. Although there is a requirement for a clear relationship between the market data used for modellability and that used for the capital calculation, there is undoubtedly allowance for them not to be identical.
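To make the second requirement concrete, the quantitative side of the P&L Attribution test can be sketched as follows. This is a minimal illustration of the two ratio-based metrics proposed in the January 2016 consultative FRTB text (BCBS d352), comparing hypothetical P&L from Front Office systems against risk-theoretical P&L from the Risk model; the threshold values and function names here are illustrative, and final regulatory formulations may differ.

```python
import statistics

def pla_test(hypothetical_pnl, risk_theoretical_pnl,
             mean_limit=0.10, variance_limit=0.20):
    """Illustrative P&L attribution metrics per the 2016 consultative
    FRTB text (BCBS d352). Thresholds are the consultative values;
    function and argument names are this sketch's own."""
    # Unexplained P&L: the gap between Front Office and Risk valuations
    unexplained = [h - r for h, r in zip(hypothetical_pnl,
                                         risk_theoretical_pnl)]
    # Ratio of mean unexplained P&L to the standard deviation
    # of hypothetical P&L (breach if outside +/- mean_limit)
    mean_ratio = (statistics.mean(unexplained)
                  / statistics.stdev(hypothetical_pnl))
    # Ratio of the variances of unexplained and hypothetical P&L
    # (breach if above variance_limit)
    var_ratio = (statistics.variance(unexplained)
                 / statistics.variance(hypothetical_pnl))
    passed = abs(mean_ratio) <= mean_limit and var_ratio <= variance_limit
    return mean_ratio, var_ratio, passed
```

The sketch makes the interview's point visible: any systematic misalignment of market data between the two functions shows up directly in the unexplained P&L series and pushes both ratios towards a breach.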

Similarly, what measures can be taken to avoid an increase in capital cost? 

The steps above for optimising P&L Attribution testing will also be important for managing possible increases in regulatory capital. Many financial institutions will also want to take steps such as evaluating the calculation eligibility metrics on a more frequent basis. Complex institutions may want to do this on a daily basis.

At present there are often procedural differences between the processes of Finance functions across an institution, between regions and between asset classes. Institutions are likely to want to normalise these procedures as much as is possible.

Avoidance of unnecessary use of Non-Modellable Risk Factor (NMRF) charges will also avoid capital increases. Institutions will need sources of good modellability data to achieve this.
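The modellability criterion behind the NMRF designation can be sketched as a simple data check. Under the 2016 FRTB text, a risk factor is modellable if it has at least 24 real price observations over the past 12 months with no more than one month between consecutive observations. The code below is an illustrative check under those assumptions; the 31-day reading of "one month" and the 365-day window are this sketch's simplifications.

```python
from datetime import date, timedelta

def is_modellable(observation_dates, as_of):
    """Illustrative check of the FRTB real-price criterion: at least
    24 observations in the trailing 12 months, with no gap between
    consecutive observations longer than one month (approximated here
    as 31 days -- an assumption of this sketch)."""
    window_start = as_of - timedelta(days=365)
    # Keep only observations inside the 12-month lookback window
    obs = sorted(d for d in observation_dates if window_start <= d <= as_of)
    if len(obs) < 24:
        return False
    # Every gap between consecutive observations must be within a month
    gaps = ((later - earlier).days for earlier, later in zip(obs, obs[1:]))
    return all(g <= 31 for g in gaps)
```

This is exactly where good sources of modellability data pay off: a risk factor with, say, weekly committed quotes clears the criterion comfortably, while one observed only sporadically falls into the NMRF charge.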

What would you like to achieve by attending the 4th Edition Impact of the Fundamental Review of the Trading Book Conference? 

I’ve participated in a number of conferences on this topic and I see an encouraging trend towards the practical discussion of tangible implementation measures. I’m looking forward to the opportunity to discuss pragmatically with industry peers the most suitable interpretation of regulatory texts and likely local regulator interpretations.

There is huge variation in the foundational system architecture on top of which different financial institutions will be building their machinery to comply with the new regulations. I will be keen to compare and contrast the implementation challenges that different institutions are facing and to identify the predominant themes in those challenges.
