Using Conditional Extreme Value Theory to Estimate Value-at-Risk for Daily Currency Exchange Rates

This paper implements different approaches to computing the one-day Value-at-Risk (VaR) forecast for a portfolio of four currency exchange rates. The concepts and techniques of the conventional methods considered in the study are first reviewed. These approaches have shortcomings and therefore fail to capture the stylized characteristics of financial return series, such as non-normality, the phenomenon of volatility clustering and the fat tails exhibited by the return distribution. The GARCH model and its extensions have been widely used in financial econometrics to model the conditional volatility dynamics of financial returns. The paper utilizes a conditional extreme value theory (EVT) based model that combines the GJR-GARCH model, which takes into account the asymmetric shocks in time-varying volatility observed in financial return series, with EVT, which focuses on modeling the tail of the distribution, to estimate extreme currency tail risk. The relative out-of-sample forecasting performance of the conditional EVT model compared to the conventional models in estimating extreme risk is evaluated using dynamic backtesting procedures. Comparing each of the methods based on the backtesting results, the conditional EVT-based model overwhelmingly outperforms all the conventional models. The overall results demonstrate that the conditional EVT-based model provides more accurate out-of-sample VaR forecasts in estimating the currency tail risk and captures the stylized facts of financial returns.


Introduction
In the recent past, financial markets worldwide have experienced exponential growth coupled with significant extreme price changes such as the recent global financial crisis, currency crises, and extreme default losses. The increasing financial uncertainties have challenged financial market participants to develop and improve the existing methodologies used in measuring risk. Value-at-Risk (VaR) is a measure commonly used by regulators and practitioners to quantify market risk for purposes of internal financial risk management and regulatory economic capital allocations. For a given asset or portfolio of financial assets, probability and time horizon, VaR is defined as the worst expected loss due to a change in value of the asset or portfolio at a given confidence level over a specific time horizon (typically a day or 10 days), under the assumption of normal market conditions and no transactions in the assets. For example, a financial institution may declare that its one-day portfolio VaR is KES 1 million at the 95 percent confidence level. This implies that the daily losses will exceed KES 1 million only 5 percent of the time, given that normal market conditions prevail. Statistically, estimating VaR involves estimating a specific quantile of the distribution of returns over a specified time horizon. The main complexity in modeling VaR lies in making the appropriate assumption about the distribution of financial returns, which typically exhibit well-known stylized characteristics such as non-normality, volatility clustering, fat tails, leptokurtosis and asymmetric conditional volatility. Engle and Manganelli [1] noted that the main difference among VaR models is how they deal with the difficulty of reliably describing the tail distribution of returns of an asset or portfolio. The main challenge is choosing an appropriate distribution of returns to capture the time-varying conditional volatility of future return series. However, the
popularity of VaR as a risk measure can be attributed to its theoretical and computational simplicity, flexibility and its ability to summarize into a single value several components of risk at the firm level that can be easily communicated to the management for decision making.
The existing conventional approaches for estimating VaR in practice can be classified into three main categories [2]. First, non-parametric approaches often rely on the empirical distribution of returns to compute the tail quantiles without making any limiting assumptions concerning the distribution of returns. Second, fully parametric approaches are based on an econometric model for the volatility dynamics and an underlying assumption about the statistical distribution describing the entire distribution of returns (losses), including possible volatility dynamics. Finally, the semi-parametric approach utilizes both the flexible modeling framework of parametric approaches and the benefits of non-parametric approaches. Historical Simulation (HS) is the most commonly used non-parametric method, introduced by Boudoukh et al. [3]. HS assumes that recent past observations will approximate well the expected near-future observations; however, the inherent lack of observations in the tails of the distribution makes the method rather uncertain in estimating tail risk. Parametric and semi-parametric methods such as the Generalized Autoregressive Conditional Heteroskedasticity (GARCH) models and Filtered Historical Simulation (FHS) are well known to either overestimate or underestimate tail risk, since the residuals often exhibit heavier-tailed distributions than the normal or Student t-distributions that are frequently assumed.
The conventional methods usually consider the entire return distribution and often fail to give an accurate risk measure during periods of extreme price fluctuations. According to Artzner et al. [4], the conventional VaR estimation methods have been criticized for various theoretical deficiencies and for failing to fulfill the subadditivity property of a coherent measure of market risk. Many models for estimating VaR try to integrate one or more of the stylized characteristics of financial time series data.
EVT-based approaches have in the recent past been considered in finance to address the shortcomings of the conventional techniques and to improve the estimation of VaR. EVT focuses on modeling the tail behaviour of the distribution instead of the entire distribution of observations. Modeling extreme values has become popular in financial risk management since it targets the extreme events that happen rarely but have catastrophic effects, such as market crashes, currency crises, and extreme default losses. EVT provides a robust framework for modeling tail distributions: it does for the maxima of independently and identically distributed (i.i.d.) random variables what the central limit theorem (CLT) does for the summation of random variables, and both theories give the asymptotic limiting distributions as the sample size increases. In extreme value theory, there are two statistical approaches for analyzing extreme values: the block maxima (BM) method and the peaks-over-threshold (POT) method. The block maxima approach consists of splitting the observation period into non-overlapping periods of equal size and only considers the maximum observation in each period. The set of extreme observations selected this way approximately follows the generalized extreme value (GEV) distribution. The peaks-over-threshold (POT) approach selects extreme observations that exceed a certain high threshold. The probability distribution of the exceedances over a given high threshold approximately follows a generalized Pareto distribution (GPD). The POT method is considered more data efficient since it makes better use of all the available information, and it is therefore mostly used in practical applications.
EVT is well established in many different fields of practice including engineering, applied science, insurance and finance among many others [5] [6]. Embrechts et al. [7] provide an overview of the empirical application of EVT in modeling extreme risks in finance and specifically in estimating VaR. Recent applications include Gilli and Këllezi [10], Bhattacharyya et al. [11], and Ghorbel and Trabelsi [12]. Bali [13] demonstrates that the EVT method yields better estimates than the skewed Student-t and normal distributions using the daily index of the DJIA stock market.
EVT normally assumes that the extreme observations under study are independently and identically distributed (i.i.d.); however, such an assumption is unlikely to be appropriate for the estimation of extreme tail probabilities of financial returns.
Parametric volatility models and EVT have been combined to capture the impact of serial dependence and heteroscedastic dynamics on the tail behaviour of financial return series. McNeil and Frey [14] proposed a dynamic two-step approach built around the standard GARCH model, with innovations allowed to follow a heavy-tailed distribution. First, the GARCH model is fitted to the financial return series to filter the serial autocorrelation and obtain close-to-i.i.d. standardized residuals. Subsequently, the standardized residuals are fitted using the POT-EVT framework.
The conditional or dynamic method integrates the time-varying volatility using a GARCH model and the heavy-tailed distribution using EVT to estimate conditional VaR. Bali and Neftci [9] estimate VaR using the GARCH-GPD model, which yields more accurate results than those obtained from a GARCH Student t-distributed model. Similarly, Byström [15] and Fernandez [16] conclude that the GARCH-GPD model performs much better than the GARCH models in estimating VaR. A number of researchers have applied the McNeil and Frey [14] approach in estimating market risk, including Chan and Gray [17], Ghorbel and Trabelsi [12], Marimoutou et al. [2], Singh et al. [18] and Ghorbel and Trabelsi [19], and have demonstrated that methods for estimating VaR based on modeling extremes measure financial risk more accurately than the conventional approaches based on the normal distribution.
This paper focuses on the implementation of several different approaches to computing the one-day-ahead VaR forecast and the comparative performance of the models when applied to a portfolio of four currency exchange rates. The motivation is to compare the performance of the conditional EVT model, which captures the time-varying volatility and extreme losses, with nine other conventional models in forecasting VaR. The contribution to the literature is as follows. First, the study reviews the concepts of the conventional techniques and proposes the conditional EVT model that accounts for the time-varying volatility, asymmetric effects, and heavy tails in the return distribution, combining the GJR-GARCH model [20] with the POT-EVT framework. Finally, the out-of-sample predictive performance of the competing models is assessed through dynamic backtesting using Kupiec's [21] unconditional coverage test and Christoffersen's [22] likelihood ratio tests. The overall performance rating of the competing models is determined by ranking the top two models in terms of the violation ratios and passing both statistical backtesting tests.
The outline of the rest of the paper is as follows. Section 2 describes the different methods for estimating VaR considered in this paper. Section 3 presents the dynamic backtesting procedures. Section 4 reports the empirical analysis and the dynamic backtesting results of the performance of the VaR estimation methods. Finally, Section 5 concludes the study.

Methods of Estimating Value-at-Risk
Value-at-Risk (VaR) measures the maximum possible loss in market value over a specified time horizon under typical market conditions at a given level of significance [23]. From a risk manager's perspective, hedging against loss is important and, as a result, this paper focuses on the negative return (loss) distribution, such that high VaR estimates correspond to high levels of risk. Suppose p_t is the price of an asset at time t; then r_t = -ln(p_t / p_{t-1}) is the daily negative continuously compounded return. For a given confidence level α ∈ (0, 1), VaR is the α-quantile of the loss distribution F,

VaR_α = F^{-1}(α) = inf{ x ∈ ℝ : F(x) ≥ α },

where F^{-1} is the inverse of the distribution function F, i.e. the quantile function.
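As a minimal illustration of this definition, the following sketch computes negative log returns and an unconditional empirical VaR from a short hypothetical price series. All numbers here are purely illustrative and are not taken from the paper's data:

```python
import numpy as np

# Hypothetical daily closing prices (illustrative values only).
prices = np.array([100.0, 101.0, 99.5, 100.2, 98.0,
                   99.0, 97.5, 98.2, 96.0, 97.0])

# Daily negative continuously compounded returns: r_t = -ln(p_t / p_{t-1}),
# so large positive values correspond to large losses.
losses = -np.log(prices[1:] / prices[:-1])

# Unconditional VaR at confidence level alpha is the alpha-quantile of
# the loss distribution, VaR_alpha = F^{-1}(alpha).
alpha = 0.95
var_95 = np.quantile(losses, alpha)
```

In practice the quantile would of course be taken over a long sample of returns; this fragment only fixes the sign convention (losses are positive) and the quantile definition used throughout the paper.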

Filtered Historical Simulation
Filtered Historical Simulation (FHS) attempts to combine the power and flexibility of parametric volatility models (like GARCH or EGARCH) with the benefits of non-parametric Historical Simulation into a semi-parametric model that accommodates the volatility dynamics of financial returns. FHS is superior to Historical Simulation since it captures the volatility dynamics and other factors that can have an asymmetric effect on the volatility of the empirical distribution. Given a sequence of past return observations {r_t}, a volatility model is fitted and the standardized residuals z_t = (r_t − μ̂_t)/σ̂_t are extracted. By utilizing the standardized residuals, the hypothetical future returns distribution is estimated and, with the conditional mean and conditional standard deviation forecasts from the volatility model, the one-period-ahead VaR forecast is computed as

VaR_{t+1} = μ̂_{t+1} + σ̂_{t+1} · q_α(z),

where μ̂_{t+1} is the conditional mean forecast, σ̂_{t+1} is the conditional standard deviation forecast from the volatility model, and q_α(z) is the empirical α-quantile of the standardized residuals.
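The FHS procedure can be sketched as follows. Everything here is an assumption for illustration: the returns are simulated, and the GARCH(1,1)-style recursion with fixed parameter values stands in for a volatility model fitted by maximum likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a toy return series with volatility clustering.
T = 1000
sigma = np.empty(T)
r = np.empty(T)
sigma[0] = 0.01
r[0] = sigma[0] * rng.standard_normal()
for t in range(1, T):
    # GARCH(1,1)-style recursion acting as the volatility "filter".
    sigma[t] = np.sqrt(1e-6 + 0.10 * r[t - 1] ** 2 + 0.85 * sigma[t - 1] ** 2)
    r[t] = sigma[t] * rng.standard_normal()

# Step 1: standardize the returns by the conditional volatility
# (here the simulated sigma stands in for the fitted GARCH estimate).
z = r / sigma

# Step 2: rescale the empirical quantile of the standardized residuals
# by the one-day-ahead volatility forecast.
alpha = 0.99
sigma_next = np.sqrt(1e-6 + 0.10 * r[-1] ** 2 + 0.85 * sigma[-1] ** 2)
mu_next = 0.0  # zero conditional mean assumed for simplicity
var_fhs = mu_next + sigma_next * np.quantile(z, alpha)
```

The key design point is that the quantile is taken over the filtered residuals rather than the raw returns, so the VaR forecast inherits the empirical tail shape while scaling with current volatility.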

GARCH Models
Under the assumption of constant volatility over time, the volatility dynamics of financial assets are not taken into account and the estimated VaR fails to incorporate the observed volatility clustering in financial returns; hence, such models may fail to generate adequate VaR estimates. Conditional heteroscedastic models take into account the conditional volatility dynamics in financial returns when estimating Value-at-Risk. In practice, there are many generalized conditional heteroscedastic models and extensions that have been proposed in the econometrics literature. The autoregressive conditional heteroscedastic (ARCH) model first introduced by Engle [24] and the subsequent generalized autoregressive conditional heteroscedastic (GARCH) model by Bollerslev [25] are the most commonly used conditional volatility models in financial econometrics. In this paper, the focus is on the standard GARCH, Exponential GARCH (EGARCH) and Glosten-Jagannathan-Runkle GARCH (GJR-GARCH) models.
The GARCH model specification has two main components: the conditional mean component that captures the dynamics of the return series as a function of past returns, and the conditional variance component that formulates the evolution of return volatility over time as a function of past errors. The conditional mean of the daily return series can be assumed to follow a first-order autoregressive process,

r_t = φ_0 + φ_1 r_{t-1} + ε_t,

where r_{t-1} is the lagged return, φ_0 and φ_1 are constants to be determined and ε_t is the innovation term.
The dynamic conditional variance equation of the GARCH(p, q) model can be characterized by

σ²_t = α_0 + Σ_{i=1}^{q} α_i ε²_{t-i} + Σ_{j=1}^{p} β_j σ²_{t-j},

where α_0 > 0, α_i ≥ 0 and β_j ≥ 0 are parameters with the necessary restrictions to ensure a finite conditional variance as well as covariance stationarity. The GARCH models have been extensively used in modeling the conditional volatility in financial time series data, and the standard GARCH model assumes that positive and negative shocks have the same effect on future conditional volatility, since it only depends on the squared past residuals. However, a number of empirical studies have observed that negative shocks (bad news) like market crashes, currency crises, and economic crises have a greater impact on volatility than positive shocks (good news) such as positive financial performance of markets or positive economic growth of the country. Such a phenomenon leads to the concept of the leverage effect [26]. The asymmetric GARCH models are designed to capture leverage effects in financial return series.
To account for the occurrence of asymmetric effects between financial returns and volatility changes, Nelson [27] proposed the asymmetric exponential GARCH (EGARCH) model, which can capture the magnitude as well as the sign effects of a shock. The conditional variance equation of the EGARCH model is given by

ln(σ²_t) = α_0 + Σ_{i=1}^{q} [ α_i |z_{t-i}| + γ_i z_{t-i} ] + Σ_{j=1}^{p} β_j ln(σ²_{t-j}),

where z_{t-i} = ε_{t-i}/σ_{t-i} and γ_i represents the leverage effect of positive or negative shocks. When the market experiences positive (good) news, the impact on the conditional volatility is (α_i + γ_i)|z_{t-i}|. On the contrary, when there is a negative (bad) shock, the impact on volatility is equal to (α_i − γ_i)|z_{t-i}|. The existence of leverage effects can be tested under the hypothesis that γ_i is expected to be negative.
Another GARCH extension that accounts for the asymmetric effect is the GJR-GARCH model introduced by [20]. The conditional variance equation of the GJR-GARCH(p, q) model is given by

σ²_t = α_0 + Σ_{i=1}^{q} (α_i + γ_i S⁻_{t-i}) ε²_{t-i} + Σ_{j=1}^{p} β_j σ²_{t-j},

where S⁻_{t-i} is an indicator variable which takes the value one if ε_{t-i} is negative and zero otherwise. When γ_i > 0, bad news (negative shocks) has a bigger impact than good news (positive shocks). The GJR-GARCH model reduces to the standard GARCH model when all the leverage coefficients are equal to zero.
The one-step-ahead forecasts of the conditional variance for the GARCH, EGARCH and GJR-GARCH models are, respectively,

σ̂²_{t+1} = α_0 + Σ_{i=1}^{q} α_i ε²_{t+1-i} + Σ_{j=1}^{p} β_j σ²_{t+1-j},

ln(σ̂²_{t+1}) = α_0 + Σ_{i=1}^{q} [ α_i |z_{t+1-i}| + γ_i z_{t+1-i} ] + Σ_{j=1}^{p} β_j ln(σ²_{t+1-j}),

σ̂²_{t+1} = α_0 + Σ_{i=1}^{q} (α_i + γ_i S⁻_{t+1-i}) ε²_{t+1-i} + Σ_{j=1}^{p} β_j σ²_{t+1-j}.

For the GARCH model under the assumption of normally distributed innovations, the parameters Θ = (φ_0, φ_1, α_0, α_i, β_j) can be estimated through the maximization of the log-likelihood function

L(Θ) = −(1/2) Σ_{t=1}^{T} [ ln(2π) + ln(σ²_t) + ε²_t/σ²_t ].
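The one-step-ahead variance recursions for the standard GARCH and GJR-GARCH cases can be illustrated numerically. The parameter values below are hypothetical, standing in for maximum likelihood estimates:

```python
import numpy as np

# Hypothetical GARCH(1,1) / GJR-GARCH(1,1) parameter values.
alpha0, alpha1, beta1 = 1e-6, 0.08, 0.90
gamma1 = 0.05  # GJR leverage coefficient

# Suppose today's shock and conditional variance are:
eps_t = -0.02      # a negative shock (bad news)
sigma2_t = 1.5e-4

# GARCH(1,1): sigma2_{t+1} = alpha0 + alpha1*eps_t^2 + beta1*sigma2_t
sigma2_garch = alpha0 + alpha1 * eps_t ** 2 + beta1 * sigma2_t

# GJR-GARCH(1,1): the indicator S = 1 activates the leverage term for
# negative shocks, so bad news raises the variance forecast by more.
S = 1.0 if eps_t < 0 else 0.0
sigma2_gjr = alpha0 + (alpha1 + gamma1 * S) * eps_t ** 2 + beta1 * sigma2_t

# One-day-ahead VaR under normal innovations, using z_{0.99} ~ 2.326.
var_garch_99 = 2.326 * np.sqrt(sigma2_garch)
```

With a negative shock the GJR forecast exceeds the symmetric GARCH forecast by exactly γ₁ε²_t, which is the leverage effect the model is designed to capture.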

Extreme Value Theory
Extreme value theory (EVT) deals with events that rarely happen but whose impact is possibly catastrophic. EVT focuses on modeling the limiting distributions generated by the tails of the distribution of extreme values and on the estimation of extreme risk. In financial risk management, the peaks-over-threshold (POT) method is commonly used, and this paper also focuses on the POT method.
EVT assumes that extreme data are usually independently and identically distributed (i.i.d.). However, this assumption is unlikely to be appropriate for the estimation of extreme tail probabilities of financial returns, which are known to exhibit serial correlation and volatility clustering.
The POT method considers the distribution of exceedances over a given high threshold u, defined by

F_u(y) = P(X − u ≤ y | X > u) = (F(y + u) − F(u)) / (1 − F(u)), for 0 ≤ y ≤ x_F − u,

where x_F ≤ ∞ is the right endpoint of F.
From the Gnedenko-Pickands-Balkema-de Haan theorem [29] [30], for a sufficiently large class of underlying distribution functions F, the conditional excess distribution function F_u(y), for an increasing threshold u, can be approximated by the Generalized Pareto Distribution (GPD), given by

G_{ξ,σ}(y) = 1 − (1 + ξy/σ)^{−1/ξ} if ξ ≠ 0, and G_{ξ,σ}(y) = 1 − exp(−y/σ) if ξ = 0,

where ξ is the shape parameter and σ the scale parameter of the GPD. Consequently, if ξ > 0 the GPD is a heavy-tailed Pareto distribution, if ξ = 0 the GPD is a light-tailed exponential distribution, and if ξ < 0 the GPD is a short-tailed Pareto type II distribution. Gilli and Këllezi [10] indicate that, in general, financial losses have heavy-tailed distributions and therefore only the family of distributions with ξ > 0 is suitable in financial analysis.
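The two branches of the GPD distribution function translate directly into code; the parameter values used below are illustrative only:

```python
import numpy as np

def gpd_cdf(y, xi, sigma):
    """Generalized Pareto distribution function G_{xi,sigma}(y), y >= 0."""
    y = np.asarray(y, dtype=float)
    if xi == 0.0:
        # Light-tailed exponential case.
        return 1.0 - np.exp(-y / sigma)
    # Heavy-tailed (xi > 0) or short-tailed (xi < 0) case.
    return 1.0 - (1.0 + xi * y / sigma) ** (-1.0 / xi)

# A positive shape parameter gives a heavier tail: the survival
# probability 1 - G(y) decays more slowly than in the exponential case.
tail_heavy = 1.0 - gpd_cdf(5.0, 0.3, 1.0)
tail_exp = 1.0 - gpd_cdf(5.0, 0.0, 1.0)
```

Comparing the two survival probabilities at the same point y = 5 makes the heavy-tail statement concrete: the ξ > 0 tail assigns markedly more probability to large losses.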
The application of the POT method requires an appropriate threshold value u to be determined. The selection of the threshold value is usually a compromise between bias and variance. Ideally, the threshold u should be set sufficiently high to guarantee that the exceedances have a limiting distribution within the domain of attraction of the generalized Pareto distribution. Conversely, if u is set extremely high, there are likely to be too few exceedances to reliably estimate the parameters of the GPD. The common practice is to choose a threshold value that is as low as possible provided it gives a reliable asymptotic approximation of the limiting distribution [31]. In this paper, the threshold u is determined using two popular techniques: the Hill estimator [32] and the mean excess function (MEF).
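The empirical mean excess function used for threshold selection can be computed as below. For a GPD tail with ξ > 0 the MEF plot is upward-sloping and approximately linear above a suitable u, while for an exponential tail it is flat; the simulated data here are illustrative:

```python
import numpy as np

def mean_excess(losses, thresholds):
    """Empirical mean excess function e(u) = mean(X - u | X > u)."""
    losses = np.asarray(losses, dtype=float)
    return np.array([(losses[losses > u] - u).mean() for u in thresholds])

# For exponentially distributed losses the mean excess is flat at the
# scale parameter, which is the xi = 0 benchmark of the GPD family.
rng = np.random.default_rng(3)
sample = rng.exponential(scale=2.0, size=100_000)
me = mean_excess(sample, thresholds=[1.0, 2.0, 3.0])
```

In practice one plots e(u) over a grid of thresholds and picks the lowest u above which the plot is roughly linear.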
By setting an appropriate threshold u, the parameters ξ and σ of the GPD can be estimated by maximizing the log-likelihood function

ln L(ξ, σ) = −N_u ln σ − (1 + 1/ξ) Σ_{i=1}^{N_u} ln(1 + ξ y_i / σ),

where y_i are the exceedances over u and N_u is the number of exceedances. Using the fitted distribution of exceedances, the tail of the distribution can be estimated by substituting F(u) with its empirical estimator (n − N_u)/n, for x > u. Thus, the cumulative distribution function for the tail of the distribution is

F̂(x) = 1 − (N_u / n) (1 + ξ̂ (x − u) / σ̂)^{−1/ξ̂}.

Therefore, for a given probability α > F(u), VaR can be estimated as

VaR_α = u + (σ̂ / ξ̂) [ ((n / N_u)(1 − α))^{−ξ̂} − 1 ].

Many empirical studies on the performance of EVT-based methods of estimating VaR have revealed that unconditional models generate VaR forecasts that respond slowly to changing market conditions. Conversely, extreme price movements in financial markets due to unpredictable events like financial crises, currency crises or stock market crashes cannot be fully captured by volatility models like GARCH [33].
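The tail estimator and the resulting VaR formula can be sketched as follows. The threshold and GPD parameters passed in are assumed values; in practice they would come from the MEF plot and the maximum likelihood fit described above:

```python
import numpy as np

def pot_var(losses, u, xi, sigma, alpha):
    """Unconditional EVT VaR from the GPD tail estimator:
    VaR_alpha = u + (sigma/xi) * (((n/N_u) * (1 - alpha)) ** (-xi) - 1)."""
    losses = np.asarray(losses, dtype=float)
    n = losses.size
    n_u = int(np.sum(losses > u))  # number of exceedances over u
    return u + (sigma / xi) * (((n / n_u) * (1.0 - alpha)) ** (-xi) - 1.0)

# Heavy-tailed toy losses; threshold and parameters are illustrative.
rng = np.random.default_rng(42)
sample = rng.standard_t(df=4, size=5000)
var_99 = pot_var(sample, u=2.0, xi=0.25, sigma=0.6, alpha=0.99)
var_995 = pot_var(sample, u=2.0, xi=0.25, sigma=0.6, alpha=0.995)
```

Note that the estimate is only meaningful when the target probability lies beyond the threshold, i.e. when (n/N_u)(1 − α) < 1, so the formula extrapolates outward from u rather than interpolating within the body of the data.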

The GARCH-GPD Method
As noted in Section 2.4, financial return series exhibit stochastic volatility that results in the phenomenon of volatility clustering, non-normality resulting in heavy tails of the return distribution, and autocorrelation, all of which violate the independent and identically distributed (i.i.d.) assumption of EVT. Therefore, in order to address these deficiencies of financial return series, we adopt the conditional extreme value theory approach introduced by McNeil and Frey [14].
The conditional EVT model first uses a GARCH model to filter the financial return series such that the residuals obtained come relatively close to satisfying the i.i.d. assumption. In the next step, the POT-based method is applied to model the tail behavior of the standardized residuals obtained from the GARCH model. Consequently, the conditional EVT approach handles both the dynamic volatility and the heavy tails exhibited by the return distribution. McNeil and Frey's [14] two-step approach, denoted by GARCH-GPD, can be stated as follows: Step 1: A GARCH-type model, assuming the error term follows a Student t-distribution, is fitted to the currency exchange return series by the maximum likelihood estimation method.
Step 2: EVT is applied to the standardized residuals obtained in Step 1 to estimate the tail distribution. The POT method is used to select the exceedances of the standardized residuals beyond a high threshold.
From the fitted GARCH model the realized standardized residuals z_t are computed as

z_t = (r_t − μ̂_t) / σ̂_t.

The standardized innovation sequence (z_1, z_2, …, z_n) is assumed to consist of i.i.d. observations. Let N_u denote the number of observations exceeding a high threshold u. Assuming that the excess residuals follow the GPD, the tail estimator F̂_z(z) is given by

F̂_z(z) = 1 − (N_u / n) (1 + ξ̂ (z − u) / σ̂)^{−1/ξ̂}, for z > u.

Inverting this expression for a given probability q > F_z(u) yields the residual tail quantile

ẑ_q = u + (σ̂ / ξ̂) [ ((n / N_u)(1 − q))^{−ξ̂} − 1 ].

Therefore, the one-day-ahead conditional EVT VaR for the return is given by

VaR_{t+1,q} = μ̂_{t+1} + σ̂_{t+1} ẑ_q.

The conditional heteroscedastic models and the semi-parametric VaR estimation approaches described in this section, the conditional-GPD model and FHS, are used to estimate the currency risk. The non-parametric HS and the parametric approaches, unconditional EVT, variance-covariance and RiskMetrics, the latter two of which assume a normal distribution of the return series, are also implemented. In order to validate the forecasting accuracy and measure the comparative performance of the conditional-GPD approach against the conventional procedures of estimating VaR forecasts, the statistical backtesting procedures introduced in Section 3 are performed.
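The full two-step GARCH-GPD computation can be sketched end to end. Everything here is illustrative: the residuals are simulated rather than extracted from a fitted GJR-GARCH model, the forecasts mu_next and sigma_next are assumed numbers, and simple moment-based GPD estimates stand in for the maximum likelihood fit used in the paper:

```python
import numpy as np

# Step 1 (assumed done): standardized residuals z_t from a GARCH fit,
# plus one-day-ahead mean and volatility forecasts.
rng = np.random.default_rng(1)
z = rng.standard_t(df=5, size=2000) / np.sqrt(5.0 / 3.0)  # unit-variance toys
mu_next, sigma_next = 0.0002, 0.012                        # assumed forecasts

# Step 2: POT on the residuals -- threshold at the 90th percentile.
u = np.quantile(z, 0.90)
exceed = z[z > u] - u
n, n_u = z.size, exceed.size

# Moment-style GPD estimates (a simple stand-in for maximum likelihood):
m, v = exceed.mean(), exceed.var()
xi_hat = 0.5 * (1.0 - m * m / v)
sigma_hat = 0.5 * m * (m * m / v + 1.0)

# Residual tail quantile and conditional EVT VaR for day t+1.
q = 0.99
z_q = u + (sigma_hat / xi_hat) * (((n / n_u) * (1.0 - q)) ** (-xi_hat) - 1.0)
var_cond = mu_next + sigma_next * z_q
```

The conditional VaR therefore rescales a fixed residual tail quantile by tomorrow's volatility forecast, which is what makes the forecasts react quickly to changing market conditions.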

Backtesting Value-at-Risk
Backtesting is a statistical procedure designed to compare the realized trading losses with the VaR model's predicted losses in order to evaluate the accuracy of the VaR model, and it is an important component of VaR estimation. The Basel Committee on Banking Supervision (BCBS) framework requires banks and other financial institutions using internal VaR risk models to routinely validate the accuracy and consistency of their models through backtesting [34]. In financial econometrics, backtesting means assessing the forecasting performance of the financial risk model by using historical data for risk forecasting and comparing the forecasts with the realized rates of return [35]. In order to determine the reliability and accuracy of a VaR model, backtesting is used to determine whether the number of exceptions generated is statistically compatible with the number expected under the model. In this study, the unconditional coverage test proposed by Kupiec [21] and the conditional coverage tests of Christoffersen [22] are used to perform the comparative assessment of the VaR models.
In order to implement the backtesting tests, an indicator function of VaR exceptions, sometimes referred to as the "hit sequence" [35], is defined. Let I_{t+1} be an indicator function of VaR violations:

I_{t+1} = 1 if r_{t+1} > VaR_{t+1}, and I_{t+1} = 0 otherwise,

where p = 1 − α is the specified probability of occurrence of a violation corresponding to the confidence level α of the VaR model. The observed failure rate p̂ is equal to the observed number of exceedances (N) divided by the sample size (T), p̂ = N/T.

The Kupiec unconditional coverage likelihood ratio test statistic is defined as

LR_uc = −2 ln[ (1 − p)^{T−N} p^{N} ] + 2 ln[ (1 − p̂)^{T−N} p̂^{N} ],

which is asymptotically distributed as a chi-square distribution with one degree of freedom. The Christoffersen conditional coverage test additionally examines the independence of violations using the transition counts n_{ij}, the number of observations of the hit sequence with a j following an i, for i, j = 0, 1. The resulting test statistic is asymptotically distributed as a chi-square distribution with two degrees of freedom.
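Both statistics can be sketched as follows, assuming a 0/1 hit sequence has already been recorded. P-values would then come from the chi-square distribution with one and two degrees of freedom respectively; the independence component shown here is the piece that, combined with the unconditional statistic, forms the conditional coverage test:

```python
import numpy as np

def kupiec_lr(hits, p):
    """Kupiec unconditional coverage statistic:
    LR_uc = -2 ln[(1-p)^(T-N) p^N] + 2 ln[(1-p_hat)^(T-N) p_hat^N]."""
    hits = np.asarray(hits, dtype=int)
    T, N = hits.size, int(hits.sum())
    p_hat = N / T

    def ll(q):
        # Binomial log-likelihood with the convention 0 * log(0) = 0.
        out = 0.0
        if T - N > 0:
            out += (T - N) * np.log(1.0 - q)
        if N > 0:
            out += N * np.log(q)
        return out

    return -2.0 * ll(p) + 2.0 * ll(p_hat)

def christoffersen_ind_lr(hits):
    """Christoffersen independence statistic from transition counts n_ij
    (number of observations with state j following state i)."""
    hits = np.asarray(hits, dtype=int)
    n = np.zeros((2, 2))
    for i, j in zip(hits[:-1], hits[1:]):
        n[i, j] += 1
    pi01 = n[0, 1] / (n[0, 0] + n[0, 1])
    row1 = n[1, 0] + n[1, 1]
    pi11 = n[1, 1] / row1 if row1 > 0 else 0.0
    pi = (n[0, 1] + n[1, 1]) / n.sum()

    def ll(p01, p11):
        out = 0.0
        for (i, prob) in ((0, p01), (1, p11)):
            if n[i, 0] > 0:
                out += n[i, 0] * np.log(1.0 - prob)
            if n[i, 1] > 0:
                out += n[i, 1] * np.log(prob)
        return out

    return -2.0 * ll(pi, pi) + 2.0 * ll(pi01, pi11)

# Example: 5 violations in 250 days tested against an expected p = 0.01.
hits = np.zeros(250, dtype=int)
hits[::50] = 1
lr_uc = kupiec_lr(hits, p=0.01)
lr_ind = christoffersen_ind_lr(hits)
```

The unconditional statistic is zero when the observed failure rate exactly matches p, and grows as the two diverge; the independence statistic is driven by clustering of violations in the hit sequence.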

Empirical Results
The empirical analysis is based on daily average currency exchange rates for the four currencies considered. Table 1 presents the basic summary statistics of the daily negative returns of the currency exchange rates. For all the exchange rates, the mean and median are close to zero, which is one of the stylized facts of daily financial time series data.
The standard deviations for all the series are reasonably high, which confirms the high volatility displayed in the return plots. The skewness and excess kurtosis of most of the series exhibit an important characteristic of financial time series data, namely leptokurtosis. Moreover, the Jarque-Bera (JB) normality hypothesis is rejected for each return series, also confirming that all daily return series are far from being normally distributed. The Augmented Dickey-Fuller (ADF) test is used to verify the stationarity of the return series. The test indicates that the return series can be assumed to be stationary, since the unit root null hypothesis is rejected at all levels of significance. This is usually expected, as continuously compounded returns are considered and taking logarithms amounts to a variance-stabilizing transformation. Finally, to test for the presence of conditional heteroscedasticity in the data, the autocorrelation of the squared residuals was tested. The Ljung-Box Q statistic for all currencies is significantly above the critical value of 31.404; thus the null hypothesis of no serial autocorrelation is rejected, confirming the strong presence of conditional heteroscedasticity in the data. This indicates that a conditional heteroscedastic model should be considered in modeling the currency exchange returns. Asymmetric GARCH models may also prove practical because of the leverage effect. Finally, a distribution other than the normal (e.g. the Student t-distribution) may be considered for the errors, since the currency returns demonstrate excess kurtosis and heavy-tailed distributions.
Table 2 presents the parameter estimates of the AR(1)-GJR-GARCH(1, 1) model fitted over the in-sample period. The parameters of the fitted models are estimated using the maximum likelihood method. The parameters of the mean equation are not statistically significant for all the return series except for the USD/KES. The parameters of the conditional variance equation (α_0, α_1, β) are highly significant for all currency series, and the estimated asymmetry coefficient (γ) is negative for two of the series, suggesting the presence of leverage effects, as negative returns are more likely than positive returns. The shape parameter estimates, associated with the degrees-of-freedom parameter that determines the kurtosis of the probability density function, are statistically significant for all the series. The p-values of the specification tests carried out after estimating the model parameters, including the Jarque-Bera (JB) test for normality, the Ljung-Box tests of autocorrelation in the standardized and squared standardized residuals with 20 lags, Q(20) and Q²(20), and the Lagrange Multiplier (LM) test for ARCH effects, are also presented in Table 2. For all the currency series, the null hypothesis of normality is rejected at the 5% significance level, given that the JB test statistic is significantly high, confirming that the residuals are not normally distributed. The LM test for all the currency series fails to detect any remaining serial correlation or ARCH effects. The null hypothesis of no serial autocorrelation is not rejected at the 5% level, indicating that neither long-memory dependence nor non-linear dependence is found in the residual series.
Therefore, the AR(1)-GJR-GARCH(1, 1) specification sufficiently filters the serial autocorrelation and volatility dynamics present in the conditional mean and variance of the currency return series, effectively producing residuals that are close to i.i.d., on which extreme value theory can be applied. The fitted model is also applied as a risk measurement method to be compared with the other conventional risk measurement methods, and the extracted standardized residuals are used in the filtered historical simulation approach.
In this paper, the threshold value required to fit the GPD model is obtained using the graphical approach based on the Mean Excess Function (MEF) plot of the return series. Maximum likelihood estimation is then applied to the exceedances over the threshold. Table 3 reports the estimated parameters of the generalized Pareto distribution (GPD) as well as their estimated asymptotic standard errors, obtained by applying the POT approach to the filtered standardized innovations. For all the return series, the shape parameter is found to be positive and significantly different from zero, indicating heavy-tailed distributions of the innovation process, characterized by the Fréchet distribution.

Out-of-Sample VaR Backtesting
Backtesting is a statistical technique used to measure the accuracy of the computed VaR forecasts against the actual losses realized at the end of the specified time horizon. This is key in financial risk management practice in order to evaluate model performance on a sample period that is not used to estimate the model parameters [37]. Thus, the sample period used to estimate the parameters of the model is the in-sample period, and the period retained for the forecast evaluation of the model is the out-of-sample period.
In this paper, dynamic backtesting is used to evaluate the relative performance of the conventional methods and the conditional EVT approach in forecasting VaR, implemented at five different confidence levels, namely 95%, 97.5%, 99%, 99.5% and 99.9%, for the out-of-sample currency returns. The worst-performing models fail the two statistical backtesting measures 100% of the time and almost always take the last two positions of the violation ratio rankings. The dynamic backtesting evidence demonstrates that the conditional extreme value theory (conditional EVT) model outperforms all the competing conventional models at the lower confidence levels (from 95% up to 97.5%). At the higher levels, from the 99% level and above, the dominance of the extreme value technique stands out, since it is the only method for which most VaR forecasts are statistically significant.

Conclusion
In financial risk management, Value-at-Risk (VaR) has been widely used to measure risk. The objective of this paper is to provide a comparative analysis of the out-of-sample forecasting performance of the competing VaR models. The conditional EVT-based GARCH-GPD model and the GJR-GARCH Student-t model capture the volatility dynamics and are ranked as the top two models in most cases. Such models produce VaR estimates that react to the volatility dynamics and provide a significant improvement over the widely used conventional VaR models. Predominantly, the non-parametric HS model and its modification, Filtered Historical Simulation (FHS), the unconditional EVT model and the models with the normality assumption fail to provide statistically accurate VaR estimates, especially at higher confidence levels, and tend to underestimate risk. For purposes of further research, we recommend modeling the dependence structure of a portfolio of assets using copula functions, which play a critical role in portfolio construction and risk management.

C. O. Omari et al., DOI: 10.4236/jmf.2017.74045, Journal of Mathematical Finance

Such assumptions about the tails of the distribution make the conventional methods rather uncertain in estimating tail risk.

The conditional variance equation of the GARCH(1,1) model is

\sigma_t^2 = \omega + \alpha \varepsilon_{t-1}^2 + \beta \sigma_{t-1}^2,

where \omega > 0, \alpha \ge 0 and \beta \ge 0 are positive parameters with the necessary restriction \alpha + \beta < 1 to ensure a finite conditional variance as well as covariance stationarity. Empirical studies within the financial econometrics literature have demonstrated that the standard GARCH(1,1) model works well in estimating and producing accurate volatility forecasts. The parameters of the conditional variance equation of the GARCH(1,1) model under the assumption of normally distributed innovations can be estimated by maximizing the log-likelihood function

\ell(\Theta) = -\frac{1}{2} \sum_{t=1}^{T} \left[ \ln(2\pi) + \ln(\sigma_t^2) + \frac{\varepsilon_t^2}{\sigma_t^2} \right],

where \Theta = (\omega, \alpha, \beta) are the parameters of the model. The GARCH models have been extensively used in modeling the conditional volatility dynamics of financial returns.
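The maximization described above can be sketched in Python. This is a simplified illustration under the Gaussian-innovation assumption; initializing the variance recursion at the sample variance is our own choice, and practical work would typically use a dedicated package such as `arch` rather than this hand-rolled optimizer:

```python
import numpy as np
from scipy.optimize import minimize

def garch11_neg_loglik(params, r):
    """Negative Gaussian log-likelihood of a zero-mean GARCH(1,1):
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}."""
    omega, alpha, beta = params
    n = len(r)
    sigma2 = np.empty(n)
    sigma2[0] = np.var(r)  # initialize the recursion at the sample variance
    for t in range(1, n):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi) + np.log(sigma2) + r ** 2 / sigma2)

def fit_garch11(r):
    """Maximize the log-likelihood subject to positivity restrictions
    on omega, alpha and beta."""
    res = minimize(garch11_neg_loglik,
                   x0=[0.1 * np.var(r), 0.05, 0.90], args=(r,),
                   method="L-BFGS-B",
                   bounds=[(1e-8, None), (1e-8, 1 - 1e-6), (1e-8, 1 - 1e-6)])
    return res.x
```

The stationarity restriction \alpha + \beta < 1 is not imposed directly here; one would check it on the fitted values.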

Figure 1 is a plot of the daily currency exchange prices and daily returns for the four currency exchange rates. Each plot of the currency prices illustrates features exhibited by financial time series data, namely extreme movements and time-varying volatility clustering, i.e. upward movements have a tendency to be followed by further upward movements, and downward movements by further downward movements. In financial econometric modeling, a conditional heteroscedastic model can be proposed to capture the volatility clustering in the daily currency returns, so that the measures of risk coverage are conditional on the current volatility dynamics.
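Volatility clustering of the kind visible in such plots is commonly diagnosed through the autocorrelation of squared or absolute returns; the following helper (our own, not from the paper) computes the sample autocorrelation:

```python
import numpy as np

def autocorr(x, lag):
    """Sample autocorrelation of a series at the given lag."""
    x = np.asarray(x, float)
    d = x - x.mean()
    return np.sum(d[lag:] * d[:-lag]) / np.sum(d ** 2)
```

For daily returns, autocorrelations of the raw series are typically negligible, while those of squared or absolute returns are significantly positive, which is the statistical signature of volatility clustering.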
The backtesting results have also been evaluated using a rolling window of 1000 daily returns (about 4 years) for all the VaR models. The rolling window procedure has a two-fold advantage: it assesses the accuracy of the VaR forecasts as well as the stability of the model over time. The stability of the model is evaluated in terms of whether the coefficients of the fitted model are time-invariant. When dealing with long time periods, it is impracticable to re-evaluate the fitted model and select a new exceedance threshold on a daily basis, so the GARCH-GPD specification used in tail estimation is assumed to remain adequate within each rolling window. Table 4 presents the comparative performance of the VaR forecasting models in terms of the violation ratio for the left tail (losses) and the right tail (gains) at different levels of significance. The rankings (shown in parentheses) indicate the absolute departure of the estimated VaR forecasts from the expected violation ratios. Based on the proximity of the actual violation ratio to the expected violation ratio, the rankings demonstrate that the GARCH-GPD model generates the best performance for all the currency exchange rates, passing the violation test 75% of the time in forecasting VaR over the specified backtesting period. The GJR-GARCH Student-t model is ranked second with a success rate of 20%. The Filtered Historical Simulation, the standard GARCH model and the other conventional models perform relatively poorly compared to the conditional EVT model. The unconditional EVT, unconditional normal, RiskMetrics and Historical Simulation models perform the worst, realizing violation ratios that are far from the expected failure rate. The consequence of overestimating risk is an unnecessary increase in capital allocation, leading to an inflated cost of doing business.
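A rolling-window backtest of the kind described above can be sketched as follows; `hs_var` is a hypothetical plug-in forecaster (historical simulation via the empirical quantile), and any of the models compared in the paper could be substituted for it:

```python
import numpy as np

def hs_var(sample, alpha):
    """Hypothetical plug-in forecaster: historical simulation, i.e. the
    empirical alpha-quantile of the estimation window (a negative
    number representing the loss threshold)."""
    return np.quantile(sample, alpha)

def rolling_var_backtest(returns, window, alpha, var_forecaster):
    """Slide a fixed-size estimation window through the sample: at each
    step the forecaster sees only the most recent `window` returns and
    issues a one-day-ahead VaR, which is compared with the realized
    return.  Returns the realized violation ratio."""
    returns = np.asarray(returns, float)
    violations = 0
    for t in range(window, len(returns)):
        var_t = var_forecaster(returns[t - window:t], alpha)
        if returns[t] < var_t:  # realized loss beyond the forecast
            violations += 1
    return violations / (len(returns) - window)
```

A well-calibrated 95% VaR model should realize a violation ratio close to 5% out of sample; large departures in either direction indicate underestimation or overestimation of risk.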
The conditional extreme value theory (conditional EVT) model is based on the peaks-over-threshold approach and is compared to conventional VaR estimation techniques such as Historical Simulation, Filtered Historical Simulation, the variance-covariance model, the unconditional EVT model, the RiskMetrics model, and the GARCH and GJR-GARCH models under normal and Student-t distributions. The conditional EVT approach is based on the two-step procedure suggested by McNeil and Frey [14] for modeling the tails of distributions and for estimating and forecasting VaR. Empirical backtesting results demonstrate that the conditional EVT and conditional GJR-GARCH Student-t models are the most appropriate techniques for measuring and forecasting risk, since they outperform the competing conventional methods (non-parametric and parametric).

Under the normality assumption, the Value-at-Risk is computed as \mathrm{VaR}_t^p = \mu_t + \sigma_t z_p, where z_p is the p-th quantile of the standard normal distribution. Bollerslev [28] proposed using the standardized Student's t-distribution to model the innovation component of the GARCH models, since it captures the non-normality characteristics of excess kurtosis and the heavy-tailed distribution of financial returns. Under this assumption, the Value-at-Risk is computed as \mathrm{VaR}_t^p = \mu_t + \sigma_t t_{\nu,p}, where t_{\nu,p} is the p-th quantile of the standardized Student's t-distribution with \nu degrees of freedom [36]. Kupiec's unconditional test only gives a necessary condition to categorize a VaR model as satisfactory; it does not account for the possibility of clustering of violations, which can result from the volatility of the return series. Thus, backtesting a VaR model should not rely exclusively on unconditional coverage tests [36]. VaR model validation is also reliant on a test of the randomness of the VaR violations, to avoid clustered violations.
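Kupiec's proportion-of-failures (PoF) test mentioned above compares the observed number of VaR violations with the number expected under the chosen significance level; a minimal sketch, assuming the standard likelihood-ratio form of the test:

```python
import numpy as np
from scipy.stats import chi2

def kupiec_pof(n_violations, n_obs, alpha):
    """Kupiec proportion-of-failures likelihood-ratio test.  Under the
    null hypothesis that the true violation probability equals alpha,
    the statistic is asymptotically chi-square with one degree of
    freedom; a small p-value rejects the VaR model."""
    x, n, p = n_violations, n_obs, alpha
    if x == 0:       # no violations observed
        lr = -2.0 * n * np.log(1.0 - p)
    elif x == n:     # violations on every day
        lr = -2.0 * n * np.log(p)
    else:
        phat = x / n  # observed violation frequency
        lr = -2.0 * ((n - x) * np.log((1.0 - p) / (1.0 - phat))
                     + x * np.log(p / phat))
    return lr, 1.0 - chi2.cdf(lr, df=1)
```

As noted in the text, the PoF test should be complemented by an independence test (e.g. of the Christoffersen type) to detect clustered violations.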

Table 1.
Summary statistics of currency exchange returns.
*Significant at 5% level. The skewness is positive, except for the EURO/KES exchange rate, for which it is negative. Kurtosis for all currencies is high, with the lowest kurtosis estimate of 4.075 and the highest estimate of 22.29, giving a strong indication of the presence of fat tails and thus exhibiting non-normality.
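Statistics of the kind reported in Table 1 can be reproduced with standard routines; a minimal sketch (the kurtosis is reported in its raw, non-excess form, so the Gaussian benchmark is 3):

```python
import numpy as np
from scipy import stats

def summary_stats(returns):
    """Moments typically reported for daily currency returns."""
    r = np.asarray(returns, float)
    return {
        "mean": r.mean(),
        "std": r.std(ddof=1),
        "min": r.min(),
        "max": r.max(),
        "skewness": stats.skew(r),
        "kurtosis": stats.kurtosis(r, fisher=False),  # raw kurtosis; Normal -> 3
    }
```

Kurtosis estimates well above 3, as in Table 1, point to fat tails in the return distribution.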

Table 3.
Maximum likelihood estimates of the fitted GARCH-GPD distribution.
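The GPD step of the GARCH-GPD model fits a generalized Pareto distribution to the exceedances over a high threshold and inverts the standard peaks-over-threshold tail formula VaR_q = u + (\beta/\xi)[((n/N_u)(1-q))^{-\xi} - 1]. A minimal sketch using `scipy.stats.genpareto`; note that applying it to raw losses rather than the standardized GARCH residuals, and the threshold choice itself, are simplifications relative to the two-step procedure used in the paper:

```python
import numpy as np
from scipy.stats import genpareto

def pot_var(losses, threshold, q):
    """Peaks-over-threshold quantile estimate: fit a GPD (shape xi,
    scale beta) to the exceedances over `threshold` and invert the
    tail formula VaR_q = u + (beta/xi) * (((n/N_u)*(1-q))**(-xi) - 1)."""
    losses = np.asarray(losses, float)
    exceedances = losses[losses > threshold] - threshold
    n, n_u = len(losses), len(exceedances)
    # scipy's `c` parameter is the GPD shape xi; loc is fixed at zero
    xi, _, beta = genpareto.fit(exceedances, floc=0)
    return threshold + (beta / xi) * (((n / n_u) * (1.0 - q)) ** (-xi) - 1.0)
```

The threshold is commonly set at a high empirical quantile (e.g. the 95th percentile of the losses), trading off bias against variance in the tail fit.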

Table 5 and Table 6 present the p-values of the unconditional coverage and conditional coverage backtests, respectively.

Table 7 lists the models that satisfy both the unconditional PoF test and the conditional test and that are also ranked among the top two in terms of violation ratios, out of the 40 possible cases (four currency exchange rates, two tails (left and right) and five levels of significance).

Table 4.
Out-of-sample one-day VaR violation ratios of currency returns (in %) and model ranking.