Editor’s note: In Optimal Granularity (TH!NK, Nov. 2011), Yaacov Mutnikas and Michael Zerbs suggested that curve fitting, combined with CCR (counterparty credit risk) data at a specific level of granularity, could provide unprecedented insights into CCR dynamics at firms.
This follow-up article summarizes the results of empirically testing different curve fitting techniques on CCR exposures.
Many financial institutions have invested in robust and sophisticated CCR systems to be compliant with IMM (internal model method) and to address supervisors’ growing interest in assessing CCR more effectively. Replicating these systems at a transactional level may be ideal in theory, but institutions and supervisors are aware that acquiring this level of detail is impractical.
Curve fitting is an approach that could be adopted to measure CCR and assess the impact of stress tests without the need to replicate transaction level data. One aspect that makes curve fitting such an attractive option is the flexibility to add new and confidential scenarios rapidly, without approaching firms first, since scenarios can be introduced after the information has been collected.
The purpose of curve fitting CCR is not to replicate exposures that could be generated with the combination of Monte Carlo scenarios and full valuation or to provide an equivalent level of accuracy. Rather, curve fitting is proposed to give financial institutions and supervisors a critical insight about major exposures to changing scenarios, which they would otherwise be unable to obtain. Curve fitting can also be useful for firms to get a quick assessment of CCR without having to perform full revaluation of their positions. If adopted widely, curve fitting could be an effective means of developing a more meaningful yet parsimonious dialogue on major risk exposures between firms and supervisors.
Grading the options
In this article we are exploring whether the promise of curve fitting holds up for a realistic data set and which curve fitting approach works best. We performed an exercise together with the FSA on real world counterparty data sets to evaluate curve fitting options. The approaches explored include: linear and nonparametric curve fitting, higher order polynomials and risk factor reduction via Principal Component Analysis (PCA) and Portfolio PCA. Portfolio PCA considers the size of the exposure to each risk factor when determining the most relevant principal components.
Our empirical results suggest that fitting CCR with a second order polynomial, using linear curve fitting combined with Portfolio PCA, provides the most useful insights. Further, our empirical results show that curve fitting can succeed in capturing the stochastic nature of exposure for different counterparties at various netting set levels. The fitted equations can then be used to perform accurate, forward-looking stress testing without the need for full revaluation or specific knowledge of the transaction level data.
Setting the parameters
Assume that the supervisors provide the firms with around 200 multi-step scenarios. These scenarios would cover the longest maturity of the instruments present in a counterparty portfolio or the relevant period of interest (e.g. one year) by using a non-homogenous time grid. For example, daily time steps for the first week, weekly time steps up to the first month, monthly time steps up to five years, and so on until the desired horizon has been reached. We have previously suggested that the stress scenarios use the risk factors that the European Central Bank has defined for the 2011 European Banking Authority stress testing exercise and use Portfolio PCA, as described in the Appendix, to focus on the risk factors most relevant to each firm’s exposure. The 200 scenarios could be a combination of Monte Carlo scenarios drawn from a distribution and stress scenarios created by the supervisors. This idea is very similar to the stressed EEPE approach in the sense that stressed scenarios should be used in order to capture non-normal and higher volatility events.
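The non-homogeneous time grid described above can be sketched as follows; the exact step sizes and horizon used here are illustrative assumptions, not a prescribed grid.

```python
import numpy as np

def build_time_grid(horizon_days=5 * 365):
    """Non-homogeneous time grid (in days), as sketched in the text:
    daily steps for the first week, weekly steps up to one month,
    then monthly steps out to the horizon. Step sizes are illustrative."""
    grid = list(range(1, 8))                        # daily: days 1..7
    grid += list(range(14, 31, 7))                  # weekly: days 14, 21, 28
    grid += list(range(60, horizon_days + 1, 30))   # roughly monthly thereafter
    return np.array(grid)

# A five-year grid of this shape needs only ~70 time steps,
# versus ~1,825 for a uniform daily grid.
grid = build_time_grid()
```

The coarse later steps keep the number of valuations per scenario manageable while still resolving the near-term exposure profile in detail.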
Given the options, we suggest using Monte Carlo scenarios calibrated to recent data plus stress test scenarios of interest. Since curve fitting is a statistical approach, it is indispensable to have enough observations for the calibration of its parameters. Monte Carlo scenarios would give scenarios that are consistent with the normal state of the world while stress scenarios would be projections of possible crisis scenarios. The stress scenarios could be a combination of what-if scenarios, or can be taken from different periods of crisis and significant market volatility.
By combining “normal” Monte Carlo scenarios and stress scenarios we can fit polynomials that allow for breaks in the original correlation assumptions. It is important to mention that the stress scenarios should be numerous enough to have an effect on the distribution of the scenarios. For example, if we choose 200 scenarios for curve fitting we would expect at least 50 to be stress scenarios. If fewer than 50 stress scenarios are used, the distribution of the Monte Carlo scenarios will dominate the overall distribution in the curve fitting and we will essentially be fitting polynomials to normal conditions, which is not the objective of this particular exercise.
The stress scenarios need to be rich and relevant in the sense that they need to cover a wide range of possible outcomes and boundary scenarios. If the polynomial is fitted using these boundary scenarios then the scenarios within that region will be feasible and will have some probability. However, if the stress scenarios are not relevant there will be some cases that are unachievable.
To the firm and back
Once the scenarios have been created, firms will use them to do a full valuation of their portfolios. After the valuation is complete, firms will provide the MtM (mark-to-market) values, positive and negative, by time step, scenario and by netting set together with the counterparty hierarchies and first order portfolio risk sensitivities to the most relevant risk factors. This should not pose any difficulties to firms with an already approved IMM system in place. Since CCR tends to be clustered, the firm can select those largest counterparties that together drive a large part – for example, 80% – of the firm’s overall exposure and will provide the information explained above on those counterparties to the supervisors. For the portfolio risk sensitivities the counterparty will select the risk factors that contribute more to the exposure and will generate first order portfolio sensitivities by netting set.
With this dataset, we have enough counterparty and exposure information to construct polynomials at the different netting set levels without the need to replicate information at a more granular level. Specifically, we can curve fit the values using a second order polynomial and assess the goodness of fit with statistics such as r-squared and standard error. Values are turned into exposures given the counterparty hierarchies provided by the firms.
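As a sketch of this fitting step, the following calibrates a second order polynomial to netting-set MtM values as a function of a small number of (reduced) risk factors and reports the R-squared; the data, factor count, and coefficients are illustrative assumptions.

```python
import numpy as np

def quadratic_design(X):
    """Design matrix with intercept, linear, squared and pairwise cross terms."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i, k)]
    return np.column_stack(cols)

def fit_values(X, v):
    """Least-squares fit of netting-set MtM values to a second order
    polynomial in the risk factors; returns coefficients and R-squared."""
    A = quadratic_design(X)
    beta, *_ = np.linalg.lstsq(A, v, rcond=None)
    resid = v - A @ beta
    r2 = 1.0 - resid.var() / v.var()
    return beta, r2

# Illustrative data: 200 scenarios of 3 reduced risk factors,
# with a value function containing linear, squared and cross terms.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
true_v = 2.0 + X[:, 0] - 0.5 * X[:, 1] ** 2 + 0.3 * X[:, 0] * X[:, 2]
v = true_v + 0.01 * rng.standard_normal(200)
beta, r2 = fit_values(X, v)
```

In practice one such fit would be performed per netting set and time step, with R-squared and the standard error used to judge the quality of each fit.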
Crucially, we can assess the impact of additional stress test on the CCR exposures by applying the fitted polynomials to new scenarios without going back to the firms for more information. What-if scenarios can be constructed interactively and on-demand.
The data set that we suggest firms provide is to some extent an extension of the data template for credit exposures proposed by the FSB (Financial Stability Board). In particular, the FSB requires principal amounts, gross MtM exposures, collateral, net MtM exposures, and potential future exposures. In our approach, having the MtM values at netting set levels by scenario and applying curve fitting allows us to derive PFEs, and in the case where the scenarios include stress scenarios, we are able to obtain some “stressed PFE” figures as well. Furthermore, we believe that first order risk factor sensitivities are also relevant pieces of information that firms should provide as exposures tend to be driven and have different sensitivity to specific risk factors. Our empirical results also show that, in conformity with the FSB objectives, better data can help contingency planning for stress events.
Empirical results and findings
The data set used for the curve fitting analysis consisted of 18 counterparties. Each counterparty had a netting structure containing two subsidiaries, and each subsidiary had netting sets, no-netting sets, and a CSA in place. The instrument with the longest maturity in the data set had around five years left before expiry; however, curve fitting to a one year exposure horizon was also analyzed.
The data set contained approximately 20,000 instruments of the following asset types: Interest Rate Swaps, Cross-Currency Swaps, Swaptions, Equity Swaps, European Equity and Index options, repos, Forward Rate Agreements, FX options (American and Asian), and FX forwards. All the collateral was modeled as cash. There were 275 risk factors affecting the exposure of these instruments: interest rates, exchange rates, interest rate and equity volatilities, and market indices, with positions covering 15 of the major currencies. For some particular analyses we considered specific counterparties with 33 or 76 risk factors. Using Portfolio PCA and a 98% to 99% variance criterion, we were able to reduce the space of 275 risk factors to between 33 and 36 components.
For netting sets containing linear instruments, such as vanilla or cross-currency interest rate swaps, a linear curve fitting approach was sufficient to achieve a good fit of the exposure. However, netting sets containing instruments such as options required second order polynomials to capture the non-linearity. As we started fitting netting sets with more risk factors, dimensionality and multicollinearity issues arose, leading us to use Principal Component Analysis. We were able to explain most of the variability with a few components.
For example, we were able to reduce the risk factor space from 275×275 to 33×33 dimensions and maintain successful curve fitting results. In some cases where normal PCA was applied, the optimal number of PCs seemed to explain approximately 92.5%-95% of the variability. However, curve fitting was not successful in these cases and did not improve until extra components that materially impacted CCR exposures were included, even though they explained almost no risk factor variability. This led us to use Portfolio PCA, where the risk factors in the curve fitting can be weighted according to their impact on exposure.
The subsequent performance of our curve fitting with Portfolio PCA improved dramatically. However, note that, as opposed to normal PCA, where components are selected from the VCV (variance-covariance matrix) of the selected universe of risk factors, in Portfolio PCA the ranking is specific to each netting set. As a result, we need risk sensitivities by key risk factor for each netting set in the counterparty hierarchy, and the principal components will differ at each level. In our empirical test, the best performing approach was linear curve fitting with a second order polynomial combined with Portfolio PCA. A more detailed description of the curve fitting techniques implemented in this exercise can be found in the appendix.
Further up the curve
Through curve fitting, supervisors and firms alike can incorporate confidential and ad-hoc scenarios into their analysis. We tested the effectiveness of curve fitting on additional scenarios using Out-of-Sample analysis, which consists of revaluing the fitted equation under scenarios other than the ones used to calibrate its parameters. The possibility of adding confidential scenarios is an important benefit, as it provides greater insight on demand compared with recent initiatives such as the European Banking Authority stress tests, which can only be re-run periodically and require significant modeling effort by each firm each time they are applied. Figure 1 shows that curve fitting approximates exposures well even under out-of-sample scenarios.
Our empirical results have shown that using curve fitting combined with CCR data at the proposed optimal level of granularity allows us to capture the dynamics of exposure and use the fitted equation for further forward-looking stress test analysis. We believe that enterprise risk managers and supervisors can benefit from curve fitting techniques to better understand CCR dynamics within individual firms, as well as across firms, without the need to disclose scenarios. The flexibility to incorporate confidential scenarios and assess possible outcomes without revealing the specific concern is of great value to risk managers and supervisors, who need not worry that disclosing a scenario could propagate the very effect it is intended to prevent.
We are certainly at an early stage of using curve fitting techniques for CCR, and we expect further studies and more advanced curve fitting and statistical approaches to follow, increasing the accuracy of CCR assessment at an optimal level of granularity.
Fitting to values or exposures?
Given that the firms provide values, positive and negative, the supervisors will fit the polynomials directly to the values and then compute the exposure as the positive part of the fitted value:

Exposure(t, s) = max(V(t, s), 0)

where V(t, s) is the fitted netting set value at time step t under scenario s.
Fitting to values would allow the assessment of CCR from two perspectives: the firm's and the counterparty's. Curve fitting to exposure directly becomes complicated due to the non-linear nature of the exposure function itself. For example, if we try to fit positive exposures directly, both extreme and non-extreme scenarios can generate exactly zero exposure; the overall result could be a smoothing of the observations in which risk is under- or overestimated. As a result, we suggest that curve fitting be applied to values as opposed to credit exposures.
It is also important to remove settled cash flows from the portfolio/netting set values. Settled cash does not generate exposure, and since the dataset does not include information at the transaction level, it would be impossible to know the settlement dates of the individual transactions. If settled cash is mistakenly taken into account, the curve fitting to values could still work correctly, but the exposure profile would be wrong, since it would grow monotonically as cash settles through time.
Fitting to netting sets
Even though curve fitting would allow us to fit CCR at a higher level than the transaction level, it cannot be applied at the highest level of the counterparty hierarchy. The reason is that different netting rules and restrictions may apply to different asset classes according to different ISDA Master Agreements, in a unilateral or bilateral way. The same applies to collateral: trading with some counterparties may require a CSA to be in place, while some sets with no netting agreement would not require collateral to be held or exchanged. Hence, curve fitting has to be applied at the netting set level, with exposures then aggregated all the way up to the counterparty level.
Curve fitting sets where netting of exposure between transactions is allowed is quite straightforward: one polynomial equation approximates the values of the set, and the credit exposure is then computed from it. Curve fitting sets where netting is not allowed is a little more complicated, in the sense that two curve fitting equations are needed to model the dynamics of the complete set. Scenarios that generate negative values must be split from scenarios that generate positive values; the first equation fits the positive values and the second the negative values, and the exposure of the netting set for a particular scenario and time step is the sum of the exposures resulting from these two equations.
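The two cases above can be sketched as follows; the value vectors are illustrative placeholders for the per-scenario outputs of the fitted equations.

```python
import numpy as np

def netted_exposure(values):
    """Netting allowed: one fitted value per scenario;
    exposure is the positive part of the netted value."""
    return np.maximum(values, 0.0)

def non_netted_exposure(pos_values, neg_values):
    """Netting not allowed: positive and negative values are fitted by two
    separate equations; per scenario, the set's exposure is the sum of the
    exposures implied by each equation (negative values contribute zero)."""
    return np.maximum(pos_values, 0.0) + np.maximum(neg_values, 0.0)

# Illustrative fitted values for three scenarios at one time step.
v_net = np.array([-2.0, 1.5, 3.0])
e_net = netted_exposure(v_net)

v_pos = np.array([4.0, 2.0, 0.0])     # from the positive-value equation
v_neg = np.array([-1.0, -3.0, -0.5])  # from the negative-value equation
e_set = non_netted_exposure(v_pos, v_neg)
```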
Curve fitting techniques
Two different curve fitting techniques were considered during this exercise: linear curve fitting and nonparametric curve fitting. The former consists of performing a regression on a linear combination of explanatory variables, with the objective of choosing, or calibrating, the coefficients of the selected polynomial so as to minimize the weighted sum of squared residuals. The latter performs a non-parametric regression with a local polynomial estimator based on a normal kernel, with a bandwidth chosen from the mean absolute difference between observations.
In both approaches, three important steps are followed to ensure the soundness of the fit:
While minimizing the objective function to solve the curve fitting problem, goodness of fit measures are used to describe how well the equation fits the set of observations. Typical measures are R-squared and the standard error of the estimate. Statistical hypothesis tests such as the Kolmogorov-Smirnov and Chi-squared tests are also commonly used to test the normality of the residuals.
In-Sample analysis consists of valuing the curve fitted equation under the scenarios that were used to calibrate it. Once comfortable with the goodness of fit statistics, we proceed to the In-Sample analysis. A poor in-sample fit is the first clue of poor forecasting performance out-of-sample. In the context of CCR, good In-Sample results mean that we can use the fitted equation to compute measures such as Expected Exposure (EE) and Potential Future Exposure (PFE).
Out-of-Sample analysis consists of revaluing the curve fitted equation under scenarios (e.g. stress tests) other than the ones used to calibrate it. A good In-Sample fit does not necessarily mean that the model will perform well Out-of-Sample. This is a very common issue with nonparametric approaches: for example, when using a normal kernel with global bandwidth calibration, the fitted local polynomial curve will have, by construction, very good In-Sample fit but very poor Out-of-Sample performance. A supervisor will be more interested in Out-of-Sample performance, since the supervisor wants to assess what the exposure would be under different stress test cases; Expected Exposure and PFE are measures that a firm's IMM model should already be able to compute with reliable accuracy.
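A minimal sketch of the In-Sample versus Out-of-Sample check, assuming a hypothetical one-factor netting set whose value is quadratic in the risk factor; the calibration and stress scenario sets are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def r_squared(y, y_hat):
    """Goodness of fit of predictions y_hat against observed values y."""
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)

# Calibrate a second order polynomial on "normal" scenarios (In-Sample).
x_in = rng.standard_normal(150)
v_in = 1.0 + 0.8 * x_in - 0.4 * x_in ** 2 + 0.02 * rng.standard_normal(150)
coeffs = np.polyfit(x_in, v_in, deg=2)

# Revalue the fitted equation on shifted "stress" scenarios (Out-of-Sample).
x_out = 3.0 + rng.standard_normal(50)
v_out = 1.0 + 0.8 * x_out - 0.4 * x_out ** 2 + 0.02 * rng.standard_normal(50)

r2_in = r_squared(v_in, np.polyval(coeffs, x_in))
r2_out = r_squared(v_out, np.polyval(coeffs, x_out))
```

Here the global polynomial extrapolates well because the true value function is itself quadratic; a locally fitted nonparametric curve would pass the in-sample check but degrade sharply on the shifted scenarios.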
When the exposure to a counterparty is a function of a few risk factors, the curve fitting exercise is fairly easy. However, as the number of risk factors grows, the curve fitting problem becomes more complicated. Firstly, the dimensionality is high and it is difficult to construct functions of many risk factors, which means very long equations that are hard to handle. Secondly, and more importantly, a multicollinearity problem usually arises: in mathematical terms, linear curve fitting reduces to solving a system of simultaneous linear equations, and solving it becomes complicated when the risk factors are highly correlated.
A way to overcome these issues is to use risk reduction techniques such as Principal Component Analysis (PCA) or Portfolio PCA. PCA is applied to the variance-covariance matrix (VCV) of the risk factor log-returns; the idea is first to convert the correlated risk factors into abstract uncorrelated components and then to reduce dimensionality by selecting the components that explain most of the variability. Curve fitting can then be performed only on the relevant principal components. Both PCA and Portfolio PCA use the same methodology to compute the principal components; the difference lies in the way they rank the components. In normal PCA all risk factors carry the same weight, while in Portfolio PCA the components are selected based on the portfolio sensitivity to each factor, i.e. its contribution to portfolio risk. It is important to mention that the exposure to different factors varies across time as the portfolio matures, which is why we suggest that one of the requirements on firms is to provide the supervisor with first order approximations to risk sensitivities.
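A possible sketch of Portfolio PCA along these lines follows. The ranking rule used here, weighting each component's variance by the squared portfolio loading on it, is our assumption of how the sensitivity weighting could work; the appendix of the original article is the authoritative description.

```python
import numpy as np

def portfolio_pca(returns, sensitivities, coverage=0.99):
    """Sketch of Portfolio PCA (details assumed). Standard PCA extracts
    eigenvectors of the risk factor VCV and ranks them by eigenvalue;
    here components are ranked by their contribution to *portfolio*
    variance, (sens . w_i)^2 * lambda_i, and kept up to a coverage level."""
    vcv = np.cov(returns, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(vcv)
    loadings = eigvecs.T @ sensitivities        # portfolio loading on each component
    contrib = (loadings ** 2) * eigvals         # contribution to portfolio variance
    order = np.argsort(contrib)[::-1]
    cum = np.cumsum(contrib[order]) / contrib.sum()
    n_keep = int(np.searchsorted(cum, coverage) + 1)
    keep = order[:n_keep]
    return eigvecs[:, keep], contrib[keep]

# Illustrative data: five independent factors with different volatilities,
# but the portfolio is only sensitive to the third (modest-volatility) one.
rng = np.random.default_rng(7)
scales = np.array([4.0, 2.0, 1.0, 0.5, 0.25])
returns = rng.standard_normal((2000, 5)) * scales
sens = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
comps, contrib = portfolio_pca(returns, sens)
```

Note that the top Portfolio PCA component here aligns with the factor the portfolio is actually exposed to, even though that factor explains far less risk factor variability than the dominant one, which is precisely the behavior that plain PCA missed in our tests.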
Yaacov Mutnikas at the time of writing was the Head of Risk Architecture in the Risk Specialist Division, Financial Services Authority. The views and opinions expressed by the authors are theirs alone, and do not necessarily reflect the views and opinions of any organization or professional affiliation.
“Optimal Granularity: A Supervisory Deep Dive on Counterparty Credit Risk,” TH!NK Magazine, November 2011.
“Principal Component Analysis in Quasi Monte Carlo Simulation,” Alexander Kreinin, Leonid Merkoulovitch, Dan Rosen and Michael Zerbs.
“Modelling Stochastic Counterparty Credit Exposure for Derivatives Portfolios,” Ben De Prisco and Dan Rosen.
“Understanding Financial Linkages: A Common Data Template for Global Systemically Important Banks,” Financial Stability Board.