A year after Basel’s new Minimum Capital Requirements for Market Risk (commonly referred to as FRTB) were published, the P&L attribution tests remain a major issue for banks implementing the rules – both in the lack of clarity around how the tests work and the potential difficulty of passing them. The Frequently Asked Questions document released in January addresses a couple of issues – valuation adjustments and data timing differences – but does not clear up inconsistencies in earlier publications about how risk-theoretical P&L should be calculated.
The current industry consensus seems to be that the “harder” approach, using Risk models rather than Front Office models, will be favoured. This is backed up by the draft EU regulation, which differentiates between the institution’s “risk measurement model”, used for risk-theoretical P&L, and its “pricing model”, used for hypothetical P&L. It does not reproduce the technical definitions of the attribution tests – these will be included in future “regulatory technical standards” – which may reflect an expectation that the tests will need to change to be practical. There is some optimism amongst banks on this point, especially after the appointment of a new chair of the Basel group that drafted the rules, but any significant change at this stage seems unlikely. The danger is that banks that “wait and see” merely delay their implementation without winning any relaxation of the rules.
The P&L Attribution Tests
The tests as they are currently defined are based on two metrics, calculated monthly:
- The mean of the unexplained P&L divided by the standard deviation of HPL, which must be between -10% and +10%.
- The variance of unexplained P&L divided by the variance of HPL, which must be less than 20%.
- Hypothetical P&L (HPL) is that produced by Front Office models using all risk factors.
- Risk-theoretical P&L (RTPL) is that produced by risk models using a (possibly) smaller number of risk factors.
- Unexplained P&L is the difference between risk-theoretical and hypothetical P&L.
If a desk experiences four or more breaches of either test in a 12-month period, it loses internal model approval. With a few simplifying assumptions, the second test can be shown to require a correlation of at least 90% between HPL and RTPL to pass (for example, see Chandrakant Maheshwari’s blog). This test is particularly difficult for well-hedged books: positions that largely offset produce an HPL with very low variance. Work by the market risk team at Intesa Sanpaolo has gone even further, showing that a correlation closer to 97% is required to stay within the four-breach threshold, as reported in the Risk.net article “Basel group shake-up has banks hoping for FRTB changes”.
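The 90% figure follows from the simplifying assumptions: if HPL and RTPL are jointly normal with equal variances and correlation ρ, then Var(RTPL − HPL)/Var(HPL) = 2(1 − ρ), so the 20% threshold requires ρ ≥ 0.9. A minimal sketch of both test metrics under these (hypothetical) assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def attribution_metrics(hpl, rtpl):
    """Compute the two monthly P&L attribution test metrics."""
    unexplained = rtpl - hpl
    mean_ratio = unexplained.mean() / hpl.std(ddof=1)
    var_ratio = unexplained.var(ddof=1) / hpl.var(ddof=1)
    return mean_ratio, var_ratio

def simulate(rho, n_days=250):
    """Simulate correlated daily HPL/RTPL series with equal unit variance."""
    cov = [[1.0, rho], [rho, 1.0]]
    hpl, rtpl = rng.multivariate_normal([0.0, 0.0], cov, size=n_days).T
    return hpl, rtpl

# With equal variances, Var(RTPL - HPL)/Var(HPL) = 2*(1 - rho),
# so the 20% variance-test threshold implies rho >= 0.90.
for rho in (0.85, 0.90, 0.97):
    hpl, rtpl = simulate(rho)
    mean_ratio, var_ratio = attribution_metrics(hpl, rtpl)
    print(f"rho={rho:.2f}: mean ratio {mean_ratio:+.3f}, "
          f"variance ratio {var_ratio:.3f}")
```

Running this shows the variance ratio hovering around 2(1 − ρ): comfortably failing at ρ = 0.85 and only clearing the 20% bar reliably as ρ approaches the high-90s, consistent with the Intesa Sanpaolo finding.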
Whatever form the final P&L attribution tests take, they are likely (by design) to require greater alignment between Risk and Front Office (FO). This comes on top of closer coupling necessitated by the new rules as a whole, where considerations of risk capital, especially internal model approval, will affect desk structure and even business models. This alignment can be done in a number of different areas:
The source and timing of data often differs between Risk and FO. The source is driven by different requirements on granularity, the need for historical series or simply different systems and project teams. The timing for Risk is typically a single global daily snapshot, whereas FO will calculate its daily P&L at close of business in each region. The FAQs document has confirmed that the snapshot times used for risk-theoretical P&L can be aligned to those used for hypothetical (FO) P&L, which should be fairly straightforward for a flexible risk system. Aligning the data pipelines is more of an integration and coordination challenge, but having a more consistent data set across Risk and FO will prove invaluable under the new regime and help with the P&L attribution test.
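As a toy illustration of the timing point (region names and close-of-business hours are entirely hypothetical), aligning the risk-theoretical snapshot to each region’s FO close rather than a single global cut might look like:

```python
from datetime import datetime, timezone

# Hypothetical regional close-of-business hours (UTC), replacing a single
# global daily snapshot for risk-theoretical P&L.
REGIONAL_COB_UTC = {"APAC": 8, "EMEA": 17, "AMER": 22}

def rtpl_snapshot_time(region, business_date):
    """Snapshot time for risk-theoretical P&L, aligned to the regional
    close of business used for hypothetical (FO) P&L."""
    hour = REGIONAL_COB_UTC[region]
    return datetime(business_date.year, business_date.month,
                    business_date.day, hour, tzinfo=timezone.utc)

for region in REGIONAL_COB_UTC:
    print(region, rtpl_snapshot_time(region, datetime(2017, 3, 1).date()))
```

The real integration work is in sourcing market data at those regional times consistently across Risk and FO pipelines, but the principle is simply keying the risk snapshot off the same cut as the FO P&L.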
The key component in producing comparable P&L results is the pricing model. Risk and FO models typically have different trade-offs between speed and accuracy, with risk models using approximations to do the far greater number of revaluations required for measures such as historical Value-at-Risk (VaR) and Expected Shortfall (ES). This pressure on calculation time has only grown with FRTB’s Global Expected Shortfall, but with the availability of elastic grid computing and cheap commodity hardware, using FO models for risk calculations in a distributed pricing framework is a cost-effective solution. With the new demands on accuracy from the P&L attribution tests to get IMA approval, it may be the only viable solution. Reusing FO models for risk has other benefits:
- It aids in aligning data, as the model data requirements are the same (and terms data no longer needs to leave the trading system).
- Separate approval for risk models is no longer required – once a pricing model is available for trading, it is available for risk.
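A sketch of the distributed-pricing idea, with a stand-in `fo_price` function where a real implementation would call the trading system’s full-revaluation model (the thread pool here is a placeholder for an elastic compute grid):

```python
from concurrent.futures import ThreadPoolExecutor  # stand-in for a compute grid

def fo_price(trade, scenario):
    """Hypothetical Front Office pricer: values one trade under one
    market scenario. A linear toy model for illustration only."""
    return trade["notional"] * scenario["shock"]

def revalue_portfolio(trades, scenarios, max_workers=8):
    """Full revaluation of every trade under every historical scenario,
    fanned out across workers, then summed to a P&L per scenario."""
    tasks = [(t, s) for s in scenarios for t in trades]
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        values = list(pool.map(lambda ts: fo_price(*ts), tasks))
    n = len(trades)
    return [sum(values[i * n:(i + 1) * n]) for i in range(len(scenarios))]

trades = [{"notional": 100.0}, {"notional": -95.0}]     # a well-hedged book
scenarios = [{"shock": s} for s in (-0.02, 0.0, 0.01)]  # historical shocks
print(revalue_portfolio(trades, scenarios))
```

Because both RTPL and HPL would then come from the same pricer, the unexplained P&L reduces to the effect of any risk factors dropped from the risk model, which is exactly what the attribution tests are meant to measure.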
The complexity of aggregation in the new regulation makes combining results across risk systems much more involved. One tempting solution is consolidation onto a single system that handles both risk and Front Office needs. This in theory should ensure the consistency of data and models, but comes at a high cost:
- It is hard for a single system to meet the needs of traders (trade execution, real-time risk) and regulatory risk management (end of day reporting).
- The company is tied in to that provider and limited to the products it supports, and the regulatory programme becomes dependent on a potentially lengthy consolidation exercise.
Many institutions have recognised that full consolidation might be too costly a step and are moving to leverage their existing investments in FO systems and pricing models. They are looking to rearchitect their infrastructure so that FO models can be reused easily by other areas of the firm, using shared data and APIs, moving towards a more connected and aligned Front Office and Risk. While this alignment is likely to be driven by regulatory compliance and capital requirements, it can bring more general risk management benefits, giving risk managers and traders a more consistent view of their portfolios and more agility for future requirements.