The conventional judgement-based method of fixing the risk tolerance level in the Value-at-Risk (VaR) model may be suboptimal, because the procedure introduces the possibility of bias in risk measurement. A superior risk management practice might be one in which input parameters are determined by a quantitative process that is "non-subjective to the risk modeller's preferences". Based on this insight, we improve on the VaR model. Our model allows time variation in the risk tolerance level and is therefore suitable for scenario-wise risk analysis.

A class of risk measures, commonly referred to in the economic literature as "tail-related risk measures", rests on fixing a risk tolerance level ex ante. Value-at-Risk is a common example of this class. Risk tolerance is the level of risk that an investor is willing to take, but gauging risk appetite accurately can be a tricky task. In practice, the risk tolerance level is generally decided by the judgement or perception of a risk manager, a risk management committee or, in certain cases, an external regulatory body. For this purpose, it has been common practice to follow the recommendations of the Basel Committee on Banking Supervision. At present, the Basel guidelines prescribe 99% and 99.9% confidence levels for Value-at-Risk (VaR) and a 97.5% confidence level for Expected Shortfall (ES) [

As an alternative, the present paper proposes that the risk tolerance level ought not to be pre-assigned but may instead be determined by the model itself. In this framework, the parameter varies with the shape of the loss distribution. One way to determine it is to use the Pickands-Balkema-de Haan theorem, which essentially says that, for a wide class of distributions, losses exceeding a sufficiently high threshold follow the generalized Pareto distribution (GPD) [

Suppose $x_1, x_2, \cdots, x_n$ are $n$ independent realizations of a random variable $X$ representing the loss, with distribution function $F_X(x)$ and a finite or infinite right endpoint $x_0$. We are interested in the behaviour of this distribution beyond a high threshold $u$. In the line of Hogg and Klugman [

$$F_{Y_1^u}(x) = P\left[Y_1^u \leq x\right] = P\left[X \leq x \mid X > u\right] = \begin{cases} 0 & \text{if } x \leq u \\[4pt] \dfrac{F_X(x) - F_X(u)}{1 - F_X(u)} & \text{if } x > u \end{cases}$$

Based on $F_{Y_1^u}$, we can define the distribution function of the excess over a high threshold $u$:

$$F_{Y^u}(x) = P\left[X - u \leq x \mid X > u\right] = \frac{F_X(x+u) - F_X(u)}{1 - F_X(u)} \tag{1}$$

for $0 \leq x < x_0 - u$.
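As a quick numerical check of Equation (1), consider a loss with a known distribution function. The sketch below, in Python, uses a unit exponential law (an illustrative assumption, not part of the paper): by memorylessness, the excesses over any threshold are again unit exponential, and the formula reproduces this.

```python
# Numerical check of Eq. (1) for a unit exponential loss (illustrative choice).
from scipy.stats import expon

u, x = 1.0, 0.7
F = expon.cdf
lhs = (F(x + u) - F(u)) / (1.0 - F(u))  # right-hand side of Eq. (1)
rhs = expon.cdf(x)                      # memorylessness: excesses ~ Exp(1)
print(lhs, rhs)                         # both ~= 0.5034
```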

Balkema and de Haan [

$$\lim_{u \to x_0} \; \sup_{0 \leq x < x_0 - u} \left| F_{Y^u}(x) - G_{\xi,\sigma(u)}(x) \right| = 0 \tag{2}$$

where the distribution function of the two-parameter generalised Pareto distribution with shape parameter $\xi$ and scale parameter $\sigma(u)$ has the following representation:

$$G_{\xi,\sigma(u)}(x) = \begin{cases} 1 - \left(1 + \xi x / \sigma(u)\right)^{-1/\xi} & \text{if } \xi \neq 0 \\[4pt] 1 - \exp\left(-x/\sigma(u)\right) & \text{if } \xi = 0 \end{cases}$$

where $\sigma > 0$, with $x \geq 0$ when $\xi \geq 0$ and $0 \leq x \leq -\sigma/\xi$ when $\xi < 0$. Equation (2) holds if and only if $F$ belongs to the maximum domain of attraction of the generalised extreme value (GEV) distribution ($H$) [
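Since the GPD distribution function is used repeatedly below, a minimal sketch of it in Python may be helpful. The hand-written version is checked against `scipy.stats.genpareto`, whose shape argument `c` corresponds to $\xi$; the parameter values are illustrative.

```python
import numpy as np
from scipy.stats import genpareto

def gpd_cdf(x, xi, sigma):
    """Two-parameter GPD CDF: 1 - (1 + xi*x/sigma)^(-1/xi) for xi != 0,
    with the exponential limit 1 - exp(-x/sigma) at xi == 0."""
    x = np.asarray(x, dtype=float)
    if xi == 0.0:
        return 1.0 - np.exp(-x / sigma)
    z = 1.0 + xi * x / sigma
    zc = np.clip(z, np.finfo(float).tiny, None)   # guard against z <= 0
    # For xi < 0 the support ends at -sigma/xi; beyond it the CDF is 1.
    return np.where(z > 0, 1.0 - zc ** (-1.0 / xi), 1.0)

x = np.linspace(0.0, 5.0, 6)
print(gpd_cdf(x, xi=0.2, sigma=1.0))
print(genpareto.cdf(x, c=0.2, scale=1.0))  # should agree with the above
```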

Equivalently, the three-parameter GPD with location parameter $u$ can be written as:

$$G_{\xi,u,\sigma}(x) = \begin{cases} 1 - \left(1 + \xi(x-u)/\sigma\right)^{-1/\xi} & \text{if } \xi \neq 0 \\[4pt] 1 - \exp\left(-(x-u)/\sigma\right) & \text{if } \xi = 0 \end{cases}$$

where $\sigma > 0$, with $x - u \geq 0$ when $\xi \geq 0$ and $0 \leq x - u \leq -\sigma/\xi$ when $\xi < 0$.

This representation provides theoretical grounds to claim that there exists a threshold above which the data exhibit generalised Pareto behaviour.

Equations (1) and (2) suggest that, for a sufficiently high threshold, we can write:

$$F_X(x+u) \approx F_X(u) + G_{\xi,\sigma(u)}(x)\left(1 - F_X(u)\right)$$

Setting $y = x + u$,

$$F_X(y) \approx F_X(u) + G_{\xi,\sigma(u)}(y-u)\left(1 - F_X(u)\right) \tag{3}$$

The right-hand side of Equation (3) can be rewritten as the distribution function of a GPD:

$$F_X(y) \approx G_{\xi,\tilde{\sigma}}(y - \tilde{\mu}) \tag{4}$$

where $\tilde{\sigma} = \sigma\left(1 - F_X(u)\right)^{\xi}$ and $\tilde{\mu} = u - \tilde{\sigma}\left(\left(1 - F_X(u)\right)^{-\xi} - 1\right)/\xi$.
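This reparameterisation is what makes the spliced tail estimate computable in practice. Below is a minimal sketch, assuming a heavy-tailed Student-t toy sample and a trial threshold of 1.0 (both illustrative assumptions): a GPD is fitted to the excesses over $u$ and combined with the empirical $F_X(u)$ as in Equation (3).

```python
import numpy as np
from scipy.stats import genpareto, t

rng = np.random.default_rng(0)
losses = t.rvs(df=4, size=4000, random_state=rng)  # heavy-tailed toy losses

u = 1.0                                            # trial threshold (assumed)
excesses = losses[losses > u] - u
Fu = np.mean(losses <= u)                          # empirical F_X(u)

# Fit (xi, sigma) to the excesses; the location is pinned at 0 because
# the threshold has already been subtracted.
xi, _, sigma = genpareto.fit(excesses, floc=0)

def tail_cdf(y):
    """F_X(y) ~ F_X(u) + G_{xi,sigma(u)}(y - u) * (1 - F_X(u)), Eq. (3)."""
    return Fu + genpareto.cdf(y - u, c=xi, scale=sigma) * (1.0 - Fu)

for y in (1.5, 2.5, 4.0):
    print(y, tail_cdf(y), np.mean(losses <= y))    # spliced tail vs. empirical
```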

Hence, if we can fit the GPD to the conditional distribution of the excess above a high threshold, it can also be fitted to the tail of the original distribution above a certain threshold [

When $u$ is fixed at $\hat{u}$, let $\hat{y}$ be the minimum value of $y$ for which Equation (4) holds. The deviation of $F_X(y)$ from $G_{\xi,\tilde{\sigma}}(y - \tilde{\mu})$ would therefore be non-zero for $y < \hat{y}$ and is expected to be zero for all $y \geq \hat{y}$. We may consider an indicator, viz. the cumulative squared deviation for $y < y_0$,

$$D(y_0) = \sum_{y < y_0} \left[ F_X(y) - G_{\xi,\tilde{\sigma}}(y - \tilde{\mu}) \right]^2,$$

which might be useful for identifying $\hat{y}$. By its nature, $D(y_0)$ is an increasing function of $y_0$ for $y_0 < \hat{y}$ and is nearly flat for $y_0 \geq \hat{y}$. Therefore, the slope of $D(y_0)$ is positive for $y_0 < \hat{y}$ and almost zero for $y_0 \geq \hat{y}$. We can identify the cut-off point $\hat{y}$ as the point after which the slope of $D(y_0)$ becomes statistically insignificant [
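One way to operationalise this cut-off search is sketched below. The rolling-window flatness rule standing in for a formal significance test of the slope, as well as the `window` and `tol` parameters, are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def find_cutoff(y_grid, F_emp, F_gpd, window=10, tol=0.05):
    """Locate y_hat: the first grid point after which the slope of the
    cumulative squared deviation D(y0) stays 'flat' (below tol * max slope
    for `window` consecutive points)."""
    D = np.cumsum((F_emp - F_gpd) ** 2)   # D(y0): increasing, then nearly flat
    slope = np.gradient(D, y_grid)
    flat = slope < tol * slope.max()
    for i in range(len(y_grid) - window):
        if flat[i:i + window].all():
            return y_grid[i]
    return y_grid[-1]                     # no flat region found on the grid
```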

Therefore, we can bifurcate the underlying distribution into two parts: $X \geq \hat{y}$ is the risky region of the distribution, in the sense that this region can be approximated by the tail of an equivalent GPD; all large unforeseen losses belong to this part. Conversely, $X < \hat{y}$ is the region of the distribution that does not cause severe tail risk.

For a small tail probability of order $p$, $p = 1 - F_X(\hat{y})$, we can write

$$p \approx \left(1 - F_X(\hat{u})\right)\left(1 - G_{\xi,\sigma(\hat{u})}(\hat{y} - \hat{u})\right) \tag{5}$$

VaR represents, in probabilistic terms, a quantile of the loss distribution function $F_X$ [

$$\mathrm{VaR}_p = \hat{y} \tag{6}$$

Equations (5) and (6) lead to interesting inferences: when the distributional form of the underlying distribution $F_X(\cdot)$ is known, $p$ and $\mathrm{VaR}_p$ can be estimated simultaneously. Majumder [
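Given the fitted tail and the two cut-offs, Equations (5) and (6) reduce to a few lines of code. The sketch below reuses the quantities from the tail-fitting sketch above (`Fu_hat`, `xi`, `sigma`), which are assumptions carried over from that example.

```python
from scipy.stats import genpareto

def non_subjective_var(u_hat, y_hat, Fu_hat, xi, sigma):
    """Eq. (5)-(6): p ~ (1 - F_X(u_hat)) * (1 - G_{xi,sigma}(y_hat - u_hat)),
    and VaR_p is simply y_hat."""
    p = (1.0 - Fu_hat) * (1.0 - genpareto.cdf(y_hat - u_hat, c=xi, scale=sigma))
    return p, y_hat
```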

When the form of the underlying loss distribution $F_X(\cdot)$ is known, we can develop a procedure for estimating the threshold $\hat{u}$ by a simulation study. Recall from the preceding section that there is a sufficiently high threshold $u$ above which the distribution function of the excesses, $F_{Y^u}(x)$, can be approximated by the distribution function of a generalised Pareto distribution, $G_{\xi,\sigma(u)}(x)$. Initially, we fix $u$ at some $u'$ and generate 100 samples, each of size 4000, from the underlying distribution $F_X$. If $u'$ is the true threshold, then the

deviation of $F_{Y^{u'}}(x)$ from $G_{\xi,\sigma(u')}(x)$ is expected to be zero for all $x \geq u'$ in the $j$th sample, $j = 1, 2, \cdots, 100$. We may consider an indicator, viz. the cumulative squared deviation for $x \geq u'$,

$$D_2(u') = \sum_{x \geq u'} \left[ F_{Y^{u'}}(x) - G_{\xi,\sigma(u')}(x) \right]^2,$$

which might be useful for identifying the threshold. If $u'$ is the true threshold, $D_2(u')$ would be zero for each sample. Based on this indicator, we can form a Mean Squared Error (MSE):

$$\mathrm{MSE}(u') = \frac{1}{100} \sum_{i=1}^{100} \frac{\left\{ D_2(u') \right\}_i}{n_i}$$

where $n_i$ is the number of observations in the $i$th sample exceeding $u'$. $\mathrm{MSE}(u)$ can be computed for various values of $u$ starting from 0. The best estimate of $u$ (say $\hat{u}$) is the one for which $\mathrm{MSE}(u)$ is minimum.
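A minimal sketch of this recipe follows. The standard normal loss law and the coarse grid of trial thresholds are illustrative assumptions (the paper does not fix them here); the 100 samples of 4000 observations follow the design above, which makes the loop slow but faithful.

```python
import numpy as np
from scipy.stats import genpareto, norm

rng = np.random.default_rng(42)

def mse_of_threshold(u, n_samples=100, n_obs=4000):
    """MSE(u) = (1/100) * sum_i {D_2(u)}_i / n_i over simulated samples."""
    total = 0.0
    for _ in range(n_samples):
        x = norm.rvs(size=n_obs, random_state=rng)  # assumed loss law F_X
        exc = np.sort(x[x > u] - u)
        if exc.size < 30:                 # too few exceedances to fit a GPD
            return np.inf
        xi, _, sigma = genpareto.fit(exc, floc=0)
        # Empirical CDF of the excesses vs. the fitted GPD, summed above u.
        F_emp = np.arange(1, exc.size + 1) / exc.size
        d2 = np.sum((F_emp - genpareto.cdf(exc, c=xi, scale=sigma)) ** 2)
        total += d2 / exc.size            # {D_2(u)}_i / n_i
    return total / n_samples

u_grid = np.arange(0.0, 2.01, 0.1)
mses = [mse_of_threshold(u) for u in u_grid]
print("estimated threshold u_hat:", u_grid[int(np.argmin(mses))])
```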

VaR and VaR^{N-S} based on daily returns on the S&P 500 Composite Index over the 30-year period from 18th February 1985 to 17th February 2015, computed using five risk models separately for the full sample and the simulated stress scenario, are reported in the table below. For each risk model, in the normal as well as the turbulent period, the equilibrium probability level in VaR^{N-S} lies between 0.05 and 0.1, and the estimate of VaR^{N-S} lies between VaR_{0.1} and VaR_{0.05}. Furthermore, as with the conventional model, the estimate of VaR^{N-S} in the stress scenario is greater than the estimate for the full sample, indicating that the new risk measure correctly captures the riskiness of markets. Hence, estimates of VaR^{N-S} are not arbitrary numbers and may be accepted as a risk measure. Interestingly, the standard error of the probability level is low (highest value: 0.024, for the unconditional Normal model). This indicates that the additional volatility in VaR due to the introduction of time variation in the probability level would be limited.

The recurring criticism of the existing framework of market risk management runs in two leading directions: 1) it is often not possible to find a risk model that accurately predicts the data generating process, and 2) input parameters are judgement-based, which makes the risk measure subjective. Precision in predicting the data generating process, however, depends on the skill and expertise of the risk modeller, and so it is more of an art than a science. Non-subjectivity in the selection of input parameters, on the other hand, can be attained. This is achieved if the risk tolerance level and the threshold are simultaneously determined by the risk model. Based on this insight, we have improved on the VaR model by allowing time variation in the risk tolerance level.

| Scenario | Risk model | VaR_{0.01} | VaR_{0.05} | VaR_{0.1} | Threshold ($\hat{u}$) | Probability level (p) | VaR^{N-S} |
|---|---|---|---|---|---|---|---|
| Unconditional | Historical Simulation | 3.05 (0.145) | 1.68 (0.053) | 1.12 (0.040) | 0.1 | 0.0643 (0.014) | 1.51 (0.203) |
| Unconditional | Normal | 2.60 (0.066) | 1.82 (0.038) | 1.41 (0.030) | 0.8 | 0.0609 (0.024) | 1.75 (0.242) |
| Unconditional | Student's t | 3.21 (0.215) | 1.55 (0.056) | 1.04 (0.034) | 0.4 | 0.0724 (0.018) | 1.33 (0.349) |
| Unconditional | GARCH-normal | 2.66 (0.066) | 1.89 (0.038) | 1.47 (0.029) | 0.9 | 0.0638 (0.023) | 1.80 (0.280) |
| Unconditional | GARCH-t | 3.28 (0.211) | 1.61 (0.056) | 1.10 (0.032) | 0.8 | 0.0727 (0.017) | 1.37 (0.251) |
| Simulated stress scenario | Historical Simulation | 4.68 (0.240) | 2.59 (0.060) | 2.02 (0.046) | 0.8 | 0.0727 (0.005) | 2.30 (0.085) |
| Simulated stress scenario | Normal | 4.41 (0.111) | 3.12 (0.623) | 2.43 (0.054) | 1.5 | 0.0643 (0.022) | 2.95 (0.420) |
| Simulated stress scenario | Student's t | 4.53 (0.144) | 3.01 (0.069) | 2.28 (0.052) | 1.8 | 0.0626 (0.021) | 2.86 (0.480) |
| Simulated stress scenario | GARCH-normal | 4.48 (0.111) | 3.19 (0.064) | 2.50 (0.050) | 1.5 | 0.0634 (0.022) | 3.02 (0.409) |
| Simulated stress scenario | GARCH-t | 4.57 (0.138) | 3.06 (0.067) | 2.34 (0.050) | 1.5 | 0.0670 (0.020) | 2.82 (0.427) |

Note: VaR and VaR^{N-S} are averages over 50 estimates; standard errors are given in parentheses. VaR_{0.01}, VaR_{0.05} and VaR_{0.1} are conventional VaR; the threshold ($\hat{u}$), probability level (p) and VaR^{N-S} together constitute the non-subjective VaR. Data source: Datastream.

Our empirical study based on the S&P 500 Composite Index reveals that the tail risk of the loss distribution is well captured by the new risk measure in normal as well as stress scenarios. The significance of the research is twofold: a) it reduces bias by minimising the scope of human intervention in risk measurement, which is of practical as well as social significance, and b) it gauges risk appetite methodically, which is of academic significance. The approach may widen the applicability of tail-related risk models in institutional and regulatory policymaking. At this stage, however, it is not possible to provide a method for backtesting the new VaR model; this might be a topic for future research.

The author is grateful to Prof. Romar Correa, former Professor of Economics, University of Mumbai, and Prof. Raghuram Rajan, Katherine Dusak Miller Distinguished Service Professor of Finance at the University of Chicago Booth School of Business and former Governor of the Reserve Bank of India, for their insightful comments and suggestions. He is also thankful to Dr. Chitro Majumdar, Chief Science Officer, RsRL (R-square Risk Lab), for his contribution and inspiration at the initial stage of this study.

Majumder, D. (2018) Value-at-Risk Based on Time-Varying Risk Tolerance Level. Theoretical Economics Letters, 8, 111-118. https://doi.org/10.4236/tel.2018.81007