Expected Shortfall Semi-Scale T-Distribution M-Estimator

R. Douglas Martin^{1}, Shengyu Zhang^{2}

^{1}Department of Applied Mathematics and Statistics, University of Washington, Seattle, USA.

^{2}BECU, Tukwila, Washington, USA.

**DOI:** 10.4236/jmf.2023.134029

The influence function of parametric t-distribution expected shortfall (ES) estimators has an approximately symmetric shape, for which large positive returns indicate large losses. We avoid this risk estimator’s unacceptable feature by introducing an ES semi-scale M-estimator for t-distributions, for which the usual t-distribution scale parameter is replaced by a semi-scale parameter. We derive the influence function of the ES semi-scale M-estimator, and show that its influence function has large values only for large negative returns as one expects, and only very small typically negative values for positive returns. The computation of an ES semi-scale M-estimator is shown to be a simple modification of a parametric t-distribution ES maximum-likelihood estimator (MLE), in which the scale MLE is replaced by a semi-scale estimator. We also derive the asymptotic variance expression for the ES semi-scale M-estimator, and show that its standard error is not very much larger than that of the t-distribution ES maximum-likelihood estimator.

Keywords

Risk, Expected Shortfall, Semi-Scale, MLE, M-Estimator, Influence Function

Cite as:

Martin, R. and Zhang, S. (2023) Expected Shortfall Semi-Scale T-Distribution M-Estimator. *Journal of Mathematical Finance*, **13**, 483-500. doi: 10.4236/jmf.2023.134029.

1. Introduction

The early development of portfolio risk management focused on the Value-at-Risk (VaR) measure of risk, whose popularity owes much to the JP Morgan RiskMetrics document (1996) [1]. Subsequently, a set of risk measure coherence axioms was introduced by Artzner *et al.* (1999) [2], who showed that VaR does not satisfy these axioms, and that the expected shortfall (ES) measure of the average loss beyond the VaR is a coherent risk measure. See also Sections 2.3 and 8.1 of McNeil *et al.* (2015) [3]. Over the years, ES has become an increasingly popular risk measure of choice, in part due to its properties for portfolio optimization established by Rockafellar and Uryasev (2000) [4], who used the term CVaR in place of ES, and in part due to its support by Basel regulatory guidelines for use by banks.

There exist two basic variants of ES estimators, nonparametric ES estimators which do not depend on returns distribution assumptions, and parametric ES estimators which depend on an assumed returns distribution. Nonparametric ES estimators are the average of a small fraction of the smallest ordered portfolio returns, and the ES risk measure that gives rise to such an estimator is known to satisfy the risk coherence axioms. The parametric normal distribution ES risk measure is a linear combination of the distribution mean and the distribution standard deviation, and this linear combination fails to be a coherent risk measure because it fails to satisfy the coherence monotonicity axiom, which states that if random return *R*_{1} is greater or equal to random return *R*_{2}, then the risk for *R*_{1} is less or equal to the risk for *R*_{2}. In terms of the ES risk estimator obtained by replacing the distribution mean and standard deviation with sample estimates, this failure manifests itself in the fact that one large return can give rise to an increase in ES risk, which is quite non-intuitive. For a reasonable risk measure, increasing returns should intuitively result in decreasing risk. Fischer (2003) [5] showed that a simple fix to the lack of coherence of parametric normal distribution ES is to replace the standard deviation by the lower standard semi-deviation, “semi-deviation” for short.

The fact that returns are often non-normal led to a literature focus on parametric ES for fat-tailed non-normal distributions, the simplest of which are t-distributions. Furthermore, parametric ES estimators based on maximum-likelihood parameter estimators (MLE’s) have the attractive feature that they achieve the minimum large-sample variance when returns in fact have the assumed non-normal distribution. With this in mind, Martin and Zhang (2019) [6] , henceforth MZ2019 [6] , studied the comparative behavior of both parametric normal and parametric t-distribution ES risk measures and estimators, in terms of the behavior of their *influence functions* which are extensively treated in the robust statistics literature. It turns out that the influence functions of both parametric normal and parametric t-distribution based on parameter MLE’s are approximately symmetric functions of a return, which reflects the failure of these risk measures to satisfy the coherence monotonicity axiom.

In MZ2019 [6] it is shown that the monotonicity of a risk measure influence function is a sufficient condition to ensure that the risk measure satisfies the coherence monotonicity axiom. Thus, our goal was to modify the parametric t-distribution ES in such a way that its influence function is monotonic. Motivated by the Fischer (2003) [5] result that replacement of the standard deviation by semi-deviation results in a coherent mean semi-deviation risk measure, we will show that replacement of the t-distribution MLE scale estimator by a semi-scale estimator results in ES influence functions that are quite close to being monotonic, the more so for t-distribution tail probabilities that are larger than commonly recommended values.

The remainder of the paper is organized as follows. Section 2 briefly reviews nonparametric and normal distribution expected shortfall, and their influence functions. Section 3 introduces general parametric risk measures and influence functions, and discusses the t-distribution special case. Section 4 introduces a new t-distribution ES Semi-Scale M-estimator, derives its influence function formula, and displays its typical influence function shape. Section 5 describes a practical implementation of the ES Semi-Scale M-estimator. Section 6 derives the asymptotic variance formula for the ES Semi-Scale M-estimator and shows that, as a function of the ES tail probability, its asymptotic standard errors are not very much greater than those of a t-distribution ES maximum likelihood estimator. Section 7 contains concluding comments. The detailed derivation of some mathematical results may be found in the Appendix of an early draft version of this paper, which is available at SSRN (https://ssrn.com/abstract=4605604). An early version of MZ2019 [6] , which includes its Appendices, may be found at SSRN (https://ssrn.com/abstract=2747179).

2. Nonparametric and Normal Distribution Parametric Expected Shortfall and Influence Functions

This section reviews some basic material about nonparametric and normal distribution parametric shortfall, and their influence functions, which is discussed in more detail in MZ2019 [6] .

2.1. Nonparametric and Normal Distribution Parametric Expected Shortfall Formulas

The integral form of expected shortfall with risk as a positive quantity is

$E{S}_{\gamma}\left(F\right)=-\frac{1}{\gamma}{\displaystyle {\int}_{r\le {q}_{\gamma}\left(F\right)}r\cdot \text{d}F\left(r\right)}$ (2.1)

where *F* is an unrestricted returns distribution function and
${q}_{\gamma}\left(F\right)$ is the tail probability
$\gamma $ quantile functional. A nonparametric estimator of ES is obtained by replacing the unknown distribution function *F* in the
$E{S}_{\gamma}\left(F\right)$ formula by the empirical distribution function
${F}_{n}\left(r\right)$ which has a jump of size
${n}^{-1}$ at each of the return values
${r}_{1},{r}_{2},\cdots ,{r}_{n}$ . For purposes of providing a formula for the ES estimator, we let
${r}_{\left(1\right)}\le {r}_{\left(2\right)}\le \cdots \le {r}_{\left(n\right)}$ be the ordered values of the observed returns, and let
$\lceil x\rceil $ be the smallest integer greater or equal to *x*. Then the nonparametric estimator
$\stackrel{^}{E{S}_{\gamma}}$ formula is:

$\stackrel{^}{E{S}_{\gamma}}=-\frac{1}{\lceil n\gamma \rceil}{\displaystyle {\sum}_{i=1}^{\lceil n\gamma \rceil}{r}_{\left(i\right)}}$ (2.2)

In typical risk management applications, the choice of
$\gamma $ will be 0.01, 0.025 or 0.05, *i.e.*, 1%, 2.5% or 5%. For such applications, the ordered returns in the above summation are that small fraction of all returns which have the most negative single period returns, thereby resulting in a positive ES estimator. The greater such losses are, the greater will be the associated positive ES estimator.
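As a concrete illustration (not part of the original paper), the estimator (2.2) can be computed in a few lines. The following is a minimal Python sketch; the function name `es_nonparametric` is ours:

```python
import math

def es_nonparametric(returns, gamma=0.05):
    """Nonparametric ES estimator (2.2): the negative average of the
    ceil(n * gamma) smallest ordered returns."""
    r = sorted(returns)
    k = math.ceil(len(r) * gamma)  # number of lower-tail observations used
    return -sum(r[:k]) / k
```

For example, with *n* = 20 returns and tail probability 5%, only the single most negative return enters the average.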

A parametric ES risk measure is obtained by replacing *F* with a parametric distribution function
${F}_{\theta}$ where
$\theta $ is a vector of the distribution parameters. In the case of a normal distribution, the parameters are *µ* and *σ*, and straight-forward evaluation of the integral (2.1) results in the *normal distribution parametric* ES formula

$E{S}_{\gamma}\left(\mu ,\sigma \right)=-\mu +\frac{\varphi \left({z}_{\gamma}\right)}{\gamma}\cdot \sigma $ (2.3)

where $\varphi $ is the standard normal density function, and ${z}_{\gamma}$ is the standard normal distribution $\gamma $ quantile^{1}. In this case the parametric ES estimator is:

$E{S}_{\gamma}\left(\stackrel{^}{\mu},\stackrel{^}{\sigma}\right)=-\stackrel{^}{\mu}+\frac{\varphi \left({z}_{\gamma}\right)}{\gamma}\cdot \stackrel{^}{\sigma}$ (2.4)

where $\stackrel{^}{\mu}$ and $\stackrel{^}{\sigma}$ are the normal distribution sample mean and sample standard deviation maximum likelihood estimators.
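A direct Python sketch of the normal ES formula (2.3)-(2.4), assuming SciPy is available (the function name `es_normal` is ours):

```python
from scipy.stats import norm

def es_normal(mu, sigma, gamma=0.05):
    """Parametric normal distribution ES (2.3): -mu + phi(z_gamma)/gamma * sigma."""
    z = norm.ppf(gamma)                      # standard normal gamma-quantile z_gamma
    return -mu + norm.pdf(z) / gamma * sigma
```

The estimator (2.4) is obtained by passing in the sample mean and sample standard deviation for `mu` and `sigma`.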

Likewise, for a t-distribution with parameters $\theta =\left(\mu ,s,\nu \right)$ , evaluation of the integral ( 2.1 ) results in the parametric t-distribution ES formula:

$E{S}_{\gamma}\left(\mu ,s,\nu \right)=-\mu +\frac{{g}_{\gamma ,\nu}}{\gamma}\cdot s$ (2.5)

where

${g}_{\gamma ,\nu}\triangleq \sqrt{\frac{\nu}{\nu -2}}\cdot {f}_{\nu -2}\left(\sqrt{\frac{\nu -2}{\nu}}\cdot {q}_{\gamma ,\nu}\right)$ (2.6)

^{1}See for example MZ2019 [6] or Jorion (2007) [7].

^{2}Zhang (2016) [9] shows that the expression (2.5) is equivalent to the one in McNeil *et al.* (2015) [3].

with ${f}_{\nu -2}$ the standard t-density with $\nu -2$ degrees of freedom, and ${q}_{\gamma ,\nu}$ the tail probability $\gamma $ quantile for the standard t-distribution with $\nu $ degrees of freedom^{2}. The parametric t-distribution ES estimator is obtained by replacing the parameters $\mu ,s,\nu $ with their t-distribution maximum-likelihood estimates.
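Formulas (2.5)-(2.6) translate directly to code (note that (2.6) requires $\nu >2$). The following sketch assumes SciPy; the name `es_t` is ours. The test below checks the equivalence, noted in footnote 2, with the McNeil *et al.* form $-\mu +s\cdot {f}_{\nu}\left({q}_{\gamma ,\nu}\right)\left(\nu +{q}_{\gamma ,\nu}^{2}\right)/\left(\left(\nu -1\right)\gamma \right)$:

```python
import math
from scipy.stats import t

def es_t(mu, s, nu, gamma=0.05):
    """Parametric t-distribution ES (2.5), with g_{gamma,nu} from (2.6)."""
    q = t.ppf(gamma, nu)  # q_{gamma,nu}: standard-t gamma-quantile
    g = math.sqrt(nu / (nu - 2)) * t.pdf(math.sqrt((nu - 2) / nu) * q, nu - 2)
    return -mu + g / gamma * s
```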

The above nonparametric ES estimator is fundamentally different in character than the above parametric ES estimators, in that the former depends only on a small fraction of the most negative ordered returns, whereas the latter depends on all the returns for their parameter estimates. Furthermore, nonparametric ES is known to be a coherent risk measure, whereas parametric ES is not a coherent risk measure. In particular, parametric ES fails to satisfy the coherence monotonicity axiom. In data-oriented estimator terms, monotonicity means that if any return data value is decreased the risk should always increase or remain constant, and conversely. For the nonparametric risk estimator ( 2.2 ), this condition clearly holds since the decrease of any return among the lowest $\gamma $ percent of the returns results in an increase in risk, and conversely, the decrease or increase of any return in the largest $1-\gamma $ percent of the returns results in no change in the risk. On the other hand, monotonicity does not hold for the parametric normal distribution ES because while the decrease of a return value below the mean will decrease both terms in Equation (2.3), a decrease of a large return above the mean can decrease the second term while increasing the first term, in such a way that the risk decreases.

2.2. Nonparametric and Parametric Normal Distribution ES Influence Functions

The influence function was introduced by Hampel (1974) [8] as a basic tool for the study of robust parameter estimators^{3}. As we shall see, the shape of a risk measure influence function provides an immediate intuitive understanding of the impact that gains and losses, positive and negative returns respectively, have on a parametric risk estimator. The influence function is defined using a mixture distribution of the form

${F}_{p}\left(x\right)=\left(1-p\right)F\left(x\right)+p\cdot {\delta}_{r}\left(x\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}0\le p<1$

where
${\delta}_{r}\left(x\right)$ is a point mass probability distribution located at *r*. The influence function of nonparametric
$E{S}_{\gamma}\left(F\right)$ is defined as the following directional (Gateaux) derivative of the ES functional at
$F\left(x\right)$ in the direction of the point mass distribution
${\delta}_{r}\left(x\right)$ :

$I{F}_{E{S}_{\gamma}}\left(r;F\right)=\underset{p\to 0}{\mathrm{lim}}\frac{E{S}_{\gamma}\left({F}_{p}\right)-E{S}_{\gamma}\left(F\right)}{p}={\frac{\text{d}}{\text{d}p}E{S}_{\gamma}\left({F}_{p}\right)|}_{p=0}$ .

As such, it has the intuitive interpretation as the asymptotic form of the difference quotient for an ES estimator evaluated at a fixed set of *n* returns
${r}_{1},{r}_{2},\cdots ,{r}_{n}$ and at an augmented set of
$n+1$ returns
${r}_{1},{r}_{2},\cdots ,{r}_{n},r$ with *r* variable, divided by
${n}^{-1}$ .

Using the integral representation of $E{S}_{\gamma}\left(F\right)$ in equation (2.1) it is shown in MZ2019 [6] that the nonparametric influence function of ES is

$I{F}_{E{S}_{\gamma}}\left(r;F\right)=-{q}_{\gamma}\left(F\right)-E{S}_{\gamma}\left(F\right)-\frac{{I}_{\left(-\infty ,{q}_{\gamma}\left(F\right)\right]}\left(r\right)}{\gamma}\left(r-{q}_{\gamma}\left(F\right)\right)$ (2.7)

where
${I}_{\left(-\infty ,{q}_{\gamma}\left(F\right)\right]}\left(r\right)$ is the indicator function of the interval
$\left(-\infty ,{q}_{\gamma}\left(F\right)\right]$ . The above influence function expression can be evaluated for any distribution *F*, and the left-hand plot in Figure 1 displays the nonparametric ES influence function for the case where
$\gamma =5\%$ , with
$F\left(x\right)$ a normal distribution with mean 0.12 and standard deviation 0.24. The resulting influence function has a striking piece-wise linear monotonic decreasing shape.
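Expression (2.7) is easy to evaluate for a given *F*. The following sketch (function name ours, SciPy assumed) uses the normal *F* of Figure 1, and exhibits the piecewise linear, monotone decreasing shape described above:

```python
from scipy.stats import norm

def if_es_nonparametric(r, mu=0.12, sigma=0.24, gamma=0.05):
    """Influence function (2.7) of nonparametric ES, evaluated at a
    normal F with the stated mean and standard deviation."""
    z = norm.ppf(gamma)
    q = mu + z * sigma                          # q_gamma(F)
    es = -mu + norm.pdf(z) / gamma * sigma      # ES_gamma(F), normal case
    out = -q - es
    if r <= q:                                  # indicator term active below q
        out -= (r - q) / gamma
    return out
```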

3See Hampel *et al.* (1986) [10] for detailed discussions of influence functions, and their applications.

As for the influence function of the parametric normal distribution ES given by Equation (2.3), we first note that this ES functional representation $E{S}_{\gamma}\left(\mu \left(F\right),\sigma \left(F\right)\right)$ is the following linear combination of the mean functional $\mu \left(F\right)$ and the standard deviation functional $\sigma \left(F\right)$ :

$E{S}_{\gamma}\left(\mu \left(F\right),\sigma \left(F\right)\right)=-\mu \left(F\right)+\frac{\varphi \left({z}_{\gamma}\right)}{\gamma}\cdot \sigma \left(F\right)$ .

Figure 1. Nonparametric and parametric tail probability $\gamma =5\%$ ES influence function for a normal distribution with mean 0.12 and standard deviation 0.24.

It follows that the influence function of parametric normal distribution ES is the corresponding linear combination of the influence functions of $\mu \left(F\right)$ and $\sigma \left(F\right)$ . It is well-known in the robust statistics literature that influence function of the mean is

$I{F}_{\mu \left(F\right)}\left(r\right)=r-\mu $

and the influence function of the standard deviation is^{4}:

$I{F}_{\sigma \left(F\right)}\left(r\right)=\frac{{\left(r-\mu \right)}^{2}-{\sigma}^{2}}{2\sigma}$ .

Thus, the influence function of parametric normal distribution ES is

$I{F}_{ES,\gamma}\left(r;\mu ,\sigma \right)=-\left(r-\mu \right)+\frac{\varphi \left({z}_{\gamma}\right)}{\gamma}\cdot \frac{{\left(r-\mu \right)}^{2}-{\sigma}^{2}}{2\sigma}$ (2.8)

and it is displayed in the right-hand plot of Figure 1, with the same mean and standard deviation values as for the nonparametric ES influence function in the left-hand plot.

^{4}See for example Section 3 of Zhang, Martin and Christidis (2021) [11].

The shape of the parametric ES influence function is symmetric about the value $\mu +\gamma \sigma /\varphi \left({z}_{\gamma}\right)=0.24$ , and in this regard is strikingly different from the desirable monotonic character of the nonparametric ES influence function, which is either constant, or strictly increasing for decreasing return values, beyond the tail probability $\gamma =0.05$ quantile. On the other hand, the parametric normal distribution influence function is quite non-monotonic, with increasing return values much larger than 0.24 indicating increased risk, which is quite unnatural.
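The closed form (2.8) can be checked directly against the Gateaux-derivative definition of the influence function by evaluating the parametric normal ES at the mixture ${F}_{p}$ for small *p*, since the mean and variance functionals of ${F}_{p}$ are available in closed form. A minimal sketch (function names ours, SciPy assumed):

```python
from scipy.stats import norm

C = norm.pdf(norm.ppf(0.05)) / 0.05        # phi(z_gamma)/gamma for gamma = 5%

def if_es_normal(r, mu=0.12, sigma=0.24):
    """Closed-form influence function (2.8) of parametric normal ES."""
    return -(r - mu) + C * ((r - mu) ** 2 - sigma ** 2) / (2 * sigma)

def if_es_normal_numeric(r, mu=0.12, sigma=0.24, p=1e-7):
    """Difference quotient (ES(F_p) - ES(F)) / p at the mixture
    F_p = (1 - p) F + p * delta_r, using the mean and variance of F_p."""
    mu_p = (1 - p) * mu + p * r
    var_p = (1 - p) * (sigma ** 2 + mu ** 2) + p * r ** 2 - mu_p ** 2
    return ((-mu_p + C * var_p ** 0.5) - (-mu + C * sigma)) / p
```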

3. Parametric Risk Measures and Influence Functions

This section first discusses general parametric risk measures and their general influence function formula, as well as a specific influence function formula for the case where maximum-likelihood parameter estimators are used. Then it presents the MZ2019 [6] influence function formula for parametric t-distribution expected shortfall.

3.1. General Parametric Risk Measures and Influence Functions

Let
$\rho \left(\theta \right)=\rho \left({F}_{\theta}\right)$ be a parametric risk measure defined by a fixed parametric family of univariate distribution functions
${F}_{\theta}$ where
$\theta =\left({\theta}_{1},{\theta}_{2},\cdots ,{\theta}_{K}\right)$ . It will suffice for our applications in this paper to assume that
${F}_{\theta}\left(r\right)$ is continuous and strictly increasing with quantile function
${q}_{\gamma}\left({F}_{\theta}\right)={F}_{\theta}^{-1}\left(\gamma \right)$ . One obtains a parametric risk measure estimator by first choosing a parameter estimator
$\theta ={\stackrel{^}{\theta}}_{n}$ of
$\theta $ and then using it to obtain a risk estimator
${\stackrel{^}{\rho}}_{n}=\rho \left(\stackrel{^}{\theta}\right)$ , where we have suppressed the subscript *n* on
${\stackrel{^}{\theta}}_{n}$ for notational convenience. We assume that
$\theta ={\stackrel{^}{\theta}}_{n}$ is based on independent and identically distributed (i.i.d.) returns
${r}_{1},{r}_{2},\cdots ,{r}_{n}$ , with a general distribution function that will ultimately be equal to a parametric family
${F}_{\theta}$ of t-distributions for our parametric ES maximum-likelihood estimator.

An estimator $\theta ={\stackrel{^}{\theta}}_{n}$ is assumed to be obtained from a functional $\theta \left(F\right)$ by the plug-in-rule

${\stackrel{^}{\theta}}_{n}=\theta \left({F}_{n}\right)$ (3.1)

where ${F}_{n}$ is the empirical distribution function of a set of returns ${r}_{1},{r}_{2},\cdots ,{r}_{n}$ . Correspondingly, we represent the risk measure functional as

$\rho \left(\theta \right)=\rho \left(\theta \left(F\right)\right)$ (3.2)

and the risk measure estimator is

${\stackrel{^}{\rho}}_{n}=\rho \left(\theta \left({F}_{n}\right)\right)$ . (3.3)

We tacitly assume that the returns are independent and identically distributed (i.i.d.) with a parametric distribution ${F}_{\theta}$ . Then when the usual Fisher consistency condition $\theta \left({F}_{\theta}\right)=\theta $ holds, the estimator ${\stackrel{^}{\theta}}_{n}$ will under reasonable conditions converge in probability to the true parametric distribution parameter $\theta $ . Correspondingly, the risk measure estimator ${\stackrel{^}{\rho}}_{n}=\rho \left(\theta \left({F}_{n}\right)\right)$ will converge in probability to $\rho \left(\theta \right)$ . Note however that, when the returns come from $F\ne {F}_{\theta}$ then $\theta \left({F}_{n}\right)$ will converge in probability to $\theta \left(F\right)$ , but typically $\theta \left(F\right)\ne \theta $ .

For deriving parametric risk measure influence functions, the distribution function *F* is replaced by the parametric distribution function
${F}_{\theta}$ , which results in the parametric mixture distribution

${F}_{\theta ,p}=\left(1-p\right){F}_{\theta}+p\cdot {\delta}_{r},\text{\hspace{0.17em}}\text{\hspace{0.17em}}0\le p<1$ . (3.4)

Then, referring to the risk measure loosely by its estimator $\stackrel{^}{\rho}$ , the influence function of $\stackrel{^}{\rho}$ is defined as

$I{F}_{\stackrel{^}{\rho}}\left(r;{F}_{\theta}\right)=\underset{p\downarrow 0}{\mathrm{lim}}\frac{\rho \left(\theta \left({F}_{\theta ,p}\right)\right)-\rho \left(\theta \left({F}_{\theta}\right)\right)}{p}={\frac{\text{d}\rho \left(\theta \left({F}_{\theta ,p}\right)\right)}{\text{d}p}|}_{p=0}.$ (3.5)

The parameter estimator vector influence function $I{F}_{\stackrel{^}{\theta}}\left(r\right)=I{F}_{\stackrel{^}{\theta}}\left(r;F\right)$ has the components

$I{F}_{{\stackrel{^}{\theta}}_{k}}\left(r;{F}_{\theta}\right)={\left[\partial {\theta}_{k}\left({F}_{\theta ,p}\right)/\partial p\right]}_{p=0}$ (3.6)

With $\nabla \rho \left(\theta \right)={\left({\rho}_{1}\left(\theta \right),\cdots ,{\rho}_{K}\left(\theta \right)\right)}^{\prime}$ the gradient of $\rho \left(\theta \right)$ , where ${\rho}_{k}\left(\theta \right)=\partial \rho \left(\theta \right)/\partial {\theta}_{k}$ , the chain rule gives:

$\begin{array}{c}I{F}_{\stackrel{^}{\rho}}\left(r;F\right)={\left[\nabla \rho {\left(\theta \left({F}_{p}\right)\right)}^{\prime}\cdot \frac{\partial \theta \left({F}_{p}\right)}{\partial p}\right]}_{p=0}\\ =\nabla \rho {\left(\theta \left(F\right)\right)}^{\prime}\cdot I{F}_{\stackrel{^}{\theta}}\left(r\right)\\ ={\displaystyle {\sum}_{k=1}^{K}{\rho}_{k}\left(\theta \right)\cdot I{F}_{{\stackrel{^}{\theta}}_{k}}\left(r\right)}\end{array}$ (3.7)

The above is a general expression valid for any consistent parameter estimator $\stackrel{^}{\theta}$ having an influence function $I{F}_{\stackrel{^}{\theta}}\left(r\right)$ . For a given parametric risk measure the above expression can be used for a variety of choices of parameter estimators, and for a given parameter estimator the expression can be used for a variety of parametric risk measures.

In the frequently occurring case where the parameter estimator is an MLE, it was shown in Hampel *et al.* (1986) [10] that
$I{F}_{\stackrel{^}{\theta}}\left(r\right)$ has the special form

$I{F}_{\stackrel{^}{\theta}}\left(r;\theta \right)={I}_{\theta}^{-1}\cdot \psi \left(r;\theta \right)$ (3.8)

where ${I}_{\theta}$ is the information matrix, and $\psi \left(r;\theta \right)$ is the score function vector^{5}. In this case the IF formula (3.7) has the general form:

$I{F}_{\stackrel{^}{\rho}}\left(r;F\right)=\nabla \rho {\left(\theta \right)}^{\prime}\cdot {I}_{\theta}^{-1}\psi \left(r;\theta \right).$ (3.9)

3.2. Parametric t-Distribution Expected Shortfall Influence Function

For a parametric t-distribution ES based on maximum-likelihood parameter estimates, the formula (3.9) becomes

$I{F}_{ES}\left(r;\mu ,s,\nu \right)=\nabla {\rho}_{ES}\left(\mu ,s,\nu \right)\cdot {I}_{\mu ,s,\nu}^{-1}\cdot {\psi}_{ES}\left(r;\mu ,s,\nu \right)$ (3.10)

^{5}With
$l\left(\theta \right)=\mathrm{ln}f\left(r;\theta \right)$ the log-likelihood for a single observation, the vector score function
$\psi \left(r;\theta \right)$ has components
${\psi}_{{\theta}_{k}}\left(r\right)=\partial l\left(\theta \right)/\partial {\theta}_{k},k=1,2,\cdots ,K$ , and the *K* × *K* information matrix
${I}_{\theta}={E}_{F}\left[\psi \left(r;\theta \right){\psi}^{\prime}\left(r;\theta \right)\right]$ has elements
${I}_{k,j}={E}_{F}\left[{\psi}_{{\theta}_{k}}\left(r\right)\cdot {\psi}_{{\theta}_{j}}\left(r\right)\right]$ .

where $\nabla {\rho}_{ES}\left(\mu ,s,\nu \right)$ is the gradient vector of ${\rho}_{ES}\left(\mu ,s,\nu \right)=E{S}_{\gamma}\left(\mu ,s,\nu \right)$ given by Equations (2.5) and (2.6), ${I}_{\mu ,s,\nu}$ is the Fisher information matrix, and ${\psi}_{ES}\left(r;\mu ,s,\nu \right)$ is the t-distribution score function. The risk measure gradient vector is given by

$\nabla {\rho}_{ES}\left(\mu ,s,\nu \right)={\left(-1,\frac{{g}_{\gamma ,\nu}}{\gamma},\frac{s}{\gamma}\cdot \frac{\partial {g}_{\gamma ,\nu}}{\partial \nu}\right)}^{\prime}$

with the partial derivative approximated by a finite difference quotient. The ${\psi}_{ES}\left(r;\theta \right)$ score vector is

$\left[\begin{array}{c}{\psi}_{\mu}\left(r\right)\\ {\psi}_{s}\left(r\right)\\ {\psi}_{\nu}\left(r\right)\end{array}\right]=\left[\begin{array}{c}\frac{\left(\nu +1\right)\left(r-\mu \right)}{\nu {s}^{2}+{\left(r-\mu \right)}^{2}}\\ {\psi}_{s}\left(r\right)\\ \frac{1}{2}\left(\Omega \left(\frac{\nu +1}{2}\right)-\Omega \left(\frac{\nu}{2}\right)\right)-\frac{1}{2}\mathrm{log}\left(1+\frac{{\left(r-\mu \right)}^{2}}{\nu {s}^{2}}\right)+\frac{s}{2\nu}{\psi}_{s}\left(r\right)\end{array}\right]$ (3.11)

where the formula of the *scale score function*
${\psi}_{s}\left(r\right)$ is

${\psi}_{s}\left(r\right)=\frac{\nu}{s}\cdot \frac{{\left(r-\mu \right)}^{2}-{s}^{2}}{\nu {s}^{2}+{\left(r-\mu \right)}^{2}}$

and $\Omega \left(\nu \right)=\frac{\text{d}\mathrm{log}\Gamma \left(\nu \right)}{\text{d}\nu}$ is the digamma function. The formula for the information matrix ${I}_{\mu ,s,\nu}$ was derived by Lucas (1997) [12] and may be found in MZ2019 [6].
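The score vector (3.11) is simple to evaluate, and its defining Fisher consistency property, that each component has zero expectation under the t-distribution, can be verified by numerical quadrature. A sketch assuming SciPy (function names ours):

```python
import math
from scipy.special import digamma
from scipy.stats import t
from scipy.integrate import quad

def score_t(r, mu, s, nu):
    """t-distribution MLE score vector (psi_mu, psi_s, psi_nu) of (3.11)."""
    d2 = (r - mu) ** 2
    psi_mu = (nu + 1) * (r - mu) / (nu * s ** 2 + d2)
    psi_s = (nu / s) * (d2 - s ** 2) / (nu * s ** 2 + d2)
    psi_nu = (0.5 * (digamma((nu + 1) / 2) - digamma(nu / 2))
              - 0.5 * math.log1p(d2 / (nu * s ** 2))
              + s / (2 * nu) * psi_s)
    return psi_mu, psi_s, psi_nu

def expected_score(mu, s, nu, k):
    """E[psi_k] under the t(mu, s, nu) density, by quadrature."""
    dens = lambda r: t.pdf((r - mu) / s, nu) / s
    return quad(lambda r: score_t(r, mu, s, nu)[k] * dens(r),
                -math.inf, math.inf)[0]
```

Each component integrates to (numerically) zero; this is the zero-expectation property exploited in the semi-scale construction of Section 4.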

Note that
$I{F}_{ES}\left(r;\mu ,s,\nu \right)$ depends on *r* solely through its appearance in the three components
${\psi}_{\mu}\left(r\right)$ ,
${\psi}_{s}\left(r\right)$ ,
${\psi}_{\nu}\left(r\right)$ of the score function vector, and for large *r* this dependence is symmetric about *μ*. In this regard, the behavior of
$I{F}_{ES}\left(r;\mu ,s,\nu \right)$ has a quite undesirable non-monotonic approximately symmetric behavior similar to that of the parametric normal distribution influence function in Equation (2.8) and the right-hand plot of Figure 1. This behavior is displayed in the left-hand plot of Figure 2, which is discussed further in the next section in comparison with an alternative parametric t-distribution ES M-estimator with semi-scale, and its influence function.

4. Parametric T-Distribution Semi-Scale Expected Shortfall M-Estimator and Influence Function

In this Section we derive a semi-scale modified version of the parametric t-distribution expected shortfall estimator, whose influence function does not suffer from the approximately symmetric behavior of the parametric t-distribution expected shortfall exhibited in the previous Section. Since the semi-scale parameter estimator is not an MLE, the MLE results of the previous Section no longer apply. However, the following closely related *M-estimator*, as discussed in Hampel *et al.* (1986) [10] can be used in place of an MLE.

^{6}The above equation becomes an MLE estimating equation for the special choice ${\psi}_{MLE}\left(r;\theta \right)=-\text{d}\mathrm{ln}f\left(r;\theta \right)/\text{d}\theta $ , but here we move beyond the MLE.

A general M-estimator functional $\theta \left(F\right)$ is defined as the solution for $\theta $ of the vector equation

${E}_{F}\left[\psi \left(r;\theta \right)\right]={\displaystyle \int \psi \left(r;\theta \right)\text{d}F\left(r\right)}=0$ (4.1)

where
$\psi \left(r;\theta \right)$ is a vector-valued general “score function”, which depends on a vector parameter
$\theta $ and a scalar return variable *r*^{6}. An M-estimator
$\stackrel{^}{\theta}={\stackrel{^}{\theta}}_{n}$ is obtained from a set of returns
${r}_{1},{r}_{2},\cdots ,{r}_{n}$ by replacing *F* in (4.1) with the empirical distribution
${F}_{n}$ :

Figure 2. t-Distribution parametric symmetric and semi-scale ES influence function with tail probability $\gamma =5\%$ for a t-distribution with $\nu =10$ degrees of freedom, mean 0.12 and standard deviation 0.24.

${\displaystyle {\sum}_{i=1}^{n}\psi \left({r}_{i};\stackrel{^}{\theta}\right)}=0$ . (4.2)

For our application, where the distribution of the returns is in a parametric family ${F}_{\theta},\theta \in \Theta $ , we assume the Fisher consistency condition

${E}_{{F}_{\theta}}\left[\psi \left(r;\theta \right)\right]={\displaystyle \int \psi \left(r;\theta \right)\text{d}{F}_{\theta}\left(r\right)}=0,\text{\hspace{0.17em}}\theta \in \Theta $ (4.3)

In other words, the expected value of the score function at the true parameter value is zero. Correspondingly, an M-estimator defined by ( 4.2 ) converges in probability to a solution of the asymptotic estimating Equation (4.3).
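In practice the finite-sample estimating Equation (4.2) is solved numerically. The following sketch (not from the paper; SciPy assumed, function name ours) solves it for $\left(\mu ,s\right)$ with $\nu $ held fixed, using the t-distribution MLE location and scale scores of (3.11):

```python
import numpy as np
from scipy.optimize import root

def m_estimate_t(returns, nu=10):
    """Solve sum_i psi(r_i; theta) = 0, Equation (4.2), for (mu, s)
    with the degrees of freedom nu held fixed."""
    r = np.asarray(returns, dtype=float)

    def eqns(theta):
        mu, s = theta
        d2 = (r - mu) ** 2
        denom = nu * s ** 2 + d2
        return [((nu + 1) * (r - mu) / denom).sum(),       # sum of psi_mu
                ((nu / s) * (d2 - s ** 2) / denom).sum()]  # sum of psi_s

    sol = root(eqns, x0=[np.median(r), np.std(r)])
    return sol.x  # (mu_hat, s_hat)
```

With t-distributed data the solution converges to the true $\left(\mu ,s\right)$ as the sample size grows, as the Fisher consistency argument above indicates.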

For many estimation problems $\psi \left(r;\theta \right)$ may be defined as the derivative $\psi \left(r;\theta \right)=\text{d}\rho \left(r;\theta \right)/\text{d}\theta $ of a loss function $\rho \left(r;\theta \right)$ ; in particular, the negative log-likelihood is one such choice.

It is shown in Hampel *et al.* (1986) [10] that the influence function of an M-estimator is given by

$IF\left(r;\theta \left(F\right)\right)={M}^{-1}\left(\psi ,F\right)\psi \left(r;\theta \left(F\right)\right)$ (4.4)

where the M-matrix
$M\left(\psi ,F\right)$ is the *p*×*p* matrix

$M\left(\psi ,F\right)=-{{\displaystyle \int}}^{\text{}}{\left[\frac{\partial}{\partial \theta}\psi \left(r;\theta \right)\right]}_{\theta =\theta \left(F\right)}\text{d}F\left(r\right)$ . (4.5)

Combining (3.7) and (4.4) we have the following general expression for the influence function of a parametric risk measure based on a general M-estimator of the unknown parameters:

$\begin{array}{c}I{F}_{\stackrel{^}{\rho}}\left(r;F\right)=\nabla \rho {\left(\theta \left(F\right)\right)}^{\prime}\cdot I{F}_{\stackrel{^}{\theta}}\left(r\right)\\ =\nabla \rho {\left(\theta \left(F\right)\right)}^{\prime}\cdot {M}^{-1}\left(\psi ,F\right)\psi \left(r;\theta \left(F\right)\right)\end{array}$ (4.6)

Note that the gradient vector in the above expression depends only on the risk measure chosen and the M-estimator functional that represents the asymptotic value of the M-estimator. For parametric distributions and consistent estimators, this gradient only depends on the parametric risk measure and the distribution parameters. However, for any given distribution the influence function part of the above expression depends only on the choice of M-score function $\psi $ .

We seek to modify the t-distribution MLE score function of the previous Section so that the scale score function is a semi-scale score function, and the expected value of the resulting M-score function is still zero. One way to do this is as follows. Define the semi-scale score function by

$\begin{array}{c}\psi \left(r\right)=\left[\begin{array}{c}{\stackrel{\u02dc}{\psi}}_{\mu}\left(r\right)\\ {\stackrel{\u02dc}{\psi}}_{s}\left(r\right)\\ {\stackrel{\u02dc}{\psi}}_{\nu}\left(r\right)\end{array}\right]\\ =\left[\begin{array}{c}{\psi}_{\mu}\left(r\right)\\ {\stackrel{\u02dc}{\psi}}_{s}\left(r\right)\\ \frac{1}{2}\left(\Omega \left(\frac{\nu +1}{2}\right)-\Omega \left(\frac{\nu}{2}\right)\right)-\frac{1}{2}\mathrm{log}\left(1+\frac{{\left(r-\mu \right)}^{2}\cdot {I}_{\left(-\infty ,\mu \right]}\left(r\right)}{\nu {s}^{2}}\right)-c+\frac{s}{2\nu}{\stackrel{\u02dc}{\psi}}_{s}\left(r\right)\end{array}\right]\end{array}$ (4.7)

where the location score function ${\psi}_{\mu}\left(r\right)$ is the same as for a t-distribution MLE as given by (3.11), and ${\stackrel{\u02dc}{\psi}}_{s}\left(r\right)={\stackrel{\u02dc}{\psi}}_{s}\left(r;\mu ,s,\nu \right)$ is the semi-scale score function

${\stackrel{\u02dc}{\psi}}_{s}\left(r\right)=\frac{\left(\nu +1\right){\left(r-\mu \right)}^{2}\cdot {I}_{\left(-\infty ,\mu \right]}\left(r\right)}{\nu {s}^{3}+s\cdot {\left(r-\mu \right)}^{2}}-\frac{1}{2s}$ (4.8)

where the constant *c* remains to be determined.

In order to ensure consistency of the M-estimator defined by the above M-score function, the latter must satisfy the zero-expectation condition (4.3) under a t-distribution for returns. This is already the case for ${\psi}_{\mu}\left(r\right)$ , and we now show this is also the case for ${\stackrel{\u02dc}{\psi}}_{s}\left(r\right)$ .

In order to see that the expected value of ${\stackrel{\u02dc}{\psi}}_{s}\left(r\right)$ is zero, note that from the t-distribution MLE score function (3.11) we have

$\begin{array}{c}s\cdot {\psi}_{s}\left(r\right)=\frac{\left(\nu +1\right){\left(r-\mu \right)}^{2}}{\nu {s}^{2}+{\left(r-\mu \right)}^{2}}-1\\ =\frac{\left(\nu +1\right){\left(r-\mu \right)}^{2}}{\nu {s}^{2}+{\left(r-\mu \right)}^{2}}\cdot {I}_{\left(-\infty ,\mu \right]}\left(r\right)+\frac{\left(\nu +1\right){\left(r-\mu \right)}^{2}}{\nu {s}^{2}+{\left(r-\mu \right)}^{2}}\cdot {I}_{\left(\mu ,\infty \right)}\left(r\right)-1\text{\hspace{0.17em}}\text{.}\end{array}$ (4.9)

Since the expected value of ${\psi}_{s}\left(r\right)$ is zero, the sum of the expected values of the first two terms on the right is one, and from the symmetry of the t-distribution about *μ* these two expected values must be equal, and hence each equal to one-half. Thus

$\begin{array}{c}E\left[{\stackrel{\u02dc}{\psi}}_{s}\left(r\right)\right]=\frac{1}{s}E\left[\frac{\left(\nu +1\right){\left(r-\mu \right)}^{2}\cdot {I}_{\left(-\infty ,\mu \right]}\left(r\right)}{v{s}^{2}+{\left(r-\mu \right)}^{2}}-\frac{1}{2}\right]\\ =\frac{1}{s}E\left[\frac{1}{2}-\frac{1}{2}\right]\\ =0\text{\hspace{0.17em}}\text{.}\end{array}$
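The zero-expectation property derived above can be checked numerically; a sketch using scipy quadrature (the parameter values are arbitrary illustrations):

```python
import numpy as np
from scipy import integrate, stats

def psi_s_semi(r, mu, s, nu):
    """Semi-scale score function (4.8) at a single return r."""
    below = 1.0 if r <= mu else 0.0
    return ((nu + 1) * (r - mu) ** 2 * below
            / (nu * s ** 3 + s * (r - mu) ** 2) - 1 / (2 * s))

mu, s, nu = 0.12, 0.24, 10.0
# density of the location-scale t-distribution for returns
f = lambda r: stats.t.pdf((r - mu) / s, nu) / s
# split the integral at mu, where the indicator switches
val = (integrate.quad(lambda r: psi_s_semi(r, mu, s, nu) * f(r), -np.inf, mu)[0]
       + integrate.quad(lambda r: psi_s_semi(r, mu, s, nu) * f(r), mu, np.inf)[0])
print(val)  # ~0 up to quadrature error
```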

Now we just need to choose *c* so that the M-score ${\psi}_{\nu}\left(r\right)$ for the degrees of freedom has zero expected value. We require that

$\begin{array}{l}E\left[{\psi}_{\nu}\left(r\right)\right]\\ =\frac{1}{2}\left(\Omega \left(\frac{v+1}{2}\right)-\Omega \left(\frac{v}{2}\right)\right)-E\left[\frac{1}{2}\mathrm{log}\left(1+\frac{{\left(r-\mu \right)}^{2}\cdot {I}_{\left(-\infty ,\mu \right]}\left(r\right)}{v{s}^{2}}\right)\right]-c+\frac{s}{\nu}E\left[{\psi}_{s}\left(r\right)\right]\\ =\frac{1}{2}\left(\Omega \left(\frac{v+1}{2}\right)-\Omega \left(\frac{v}{2}\right)\right)-E\left[\frac{1}{2}\mathrm{log}\left(1+\frac{{\left(r-\mu \right)}^{2}\cdot {I}_{\left(-\infty ,\mu \right]}\left(r\right)}{v{s}^{2}}\right)\right]-c\\ =0\end{array}$

which means that

$c=\frac{1}{2}\left(\Omega \left(\frac{v+1}{2}\right)-\Omega \left(\frac{v}{2}\right)\right)-\frac{1}{2}E\left[\mathrm{log}\left(1+\frac{{\left(r-\mu \right)}^{2}\cdot {I}_{\left(-\infty ,\mu \right)}\left(r\right)}{v{s}^{2}}\right)\right]$ . (4.10)

Noting that

$\mathrm{log}\left(1+\frac{{\left(r-\mu \right)}^{2}}{v{s}^{2}}\right)=\mathrm{log}\left(1+\frac{{\left(r-\mu \right)}^{2}\cdot {I}_{\left(-\infty ,\mu \right)}\left(r\right)}{v{s}^{2}}\right)+\mathrm{log}\left(1+\frac{{\left(r-\mu \right)}^{2}\cdot {I}_{\left(\mu ,+\infty \right)}\left(r\right)}{v{s}^{2}}\right)$ (4.11)

along with the symmetry of the t-distribution about *μ* shows that the expected values of the two terms on the right-hand side of (4.11) must be equal, and thus

$E\left[\mathrm{log}\left(1+\frac{{\left(r-\mu \right)}^{2}\cdot {I}_{\left(-\infty ,\mu \right)}\left(r\right)}{v{s}^{2}}\right)\right]=\frac{1}{2}\cdot E\left[\mathrm{log}\left(1+\frac{{\left(r-\mu \right)}^{2}}{v{s}^{2}}\right)\right]$ (4.12)

Plugging (4.12) into (4.10) gives

$\begin{array}{c}c=\frac{1}{2}\left(\Omega \left(\frac{v+1}{2}\right)-\Omega \left(\frac{v}{2}\right)\right)-\frac{1}{4}\left(\Omega \left(\frac{v+1}{2}\right)-\Omega \left(\frac{v}{2}\right)\right)\\ =\frac{1}{4}\left(\Omega \left(\frac{v+1}{2}\right)-\Omega \left(\frac{v}{2}\right)\right).\end{array}$
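The simplification from (4.12) rests on the identity $E\left[\mathrm{log}\left(1+{T}^{2}/\nu \right)\right]=\Omega \left(\frac{\nu +1}{2}\right)-\Omega \left(\frac{\nu}{2}\right)$ for *T* a standard t-variable with *ν* degrees of freedom, where Ω is the digamma function. This can be checked numerically, for example:

```python
import numpy as np
from scipy import integrate, stats
from scipy.special import digamma

nu = 10.0
# E[log(1 + T^2/nu)] for T ~ standard t with nu degrees of freedom
lhs = integrate.quad(lambda t: np.log(1 + t ** 2 / nu) * stats.t.pdf(t, nu),
                     -np.inf, np.inf)[0]
rhs = digamma((nu + 1) / 2) - digamma(nu / 2)
c = 0.25 * rhs   # the constant c derived above
print(lhs, rhs)  # the two values agree to quadrature accuracy
```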

We note that this choice of M-score function is not unique. Another valid choice would be ${\stackrel{\u02dc}{\psi}}_{2s}\left(r\right)=\frac{\nu}{s}\left(\frac{{\left(r-\mu \right)}^{2}-{s}^{2}}{v{s}^{2}+{\left(r-\mu \right)}^{2}}\right)\cdot {I}_{\left(-\infty ,\mu \right]}\left(r\right)$ . However, it is easily verified that this alternative scale score function is discontinuous at $r=\mu $ , and a basic principle in robust statistics is that discontinuous influence functions are to be avoided.

Thus, we have

$\left[\begin{array}{c}{\stackrel{\u02dc}{\psi}}_{\mu}\left(r\right)\\ {\stackrel{\u02dc}{\psi}}_{s}\left(r\right)\\ {\stackrel{\u02dc}{\psi}}_{\nu}\left(r\right)\end{array}\right]=\left[\begin{array}{c}{\psi}_{\mu}\left(r\right)\\ {\stackrel{\u02dc}{\psi}}_{s}\left(r\right)\\ \frac{1}{4}\left(\Omega \left(\frac{v+1}{2}\right)-\Omega \left(\frac{v}{2}\right)\right)-\frac{1}{2}\mathrm{log}\left(1+\frac{{\left(r-\mu \right)}^{2}\cdot {I}_{\left(-\infty ,\mu \right)}\left(r\right)}{v{s}^{2}}\right)+\frac{s}{2\nu}{\stackrel{\u02dc}{\psi}}_{s}\left(r\right)\end{array}\right]$ . (4.13)

Now to get the expression for the parameter estimator score function in (4.4), we just need to evaluate the M-matrix

$M\left(\psi ,F\right)=M\left(\mu ,s,\nu \right)=\left(\begin{array}{ccc}E\left[\frac{\partial {\psi}_{\mu}}{\partial \mu}\right]& E\left[\frac{\partial {\psi}_{\mu}}{\partial s}\right]& E\left[\frac{\partial {\psi}_{\mu}}{\partial \nu}\right]\\ E\left[\frac{\partial {\stackrel{\u02dc}{\psi}}_{s}}{\partial \mu}\right]& E\left[\frac{\partial {\stackrel{\u02dc}{\psi}}_{s}}{\partial s}\right]& E\left[\frac{\partial {\stackrel{\u02dc}{\psi}}_{s}}{\partial \nu}\right]\\ E\left[\frac{\partial {\stackrel{\u02dc}{\psi}}_{\nu}}{\partial \mu}\right]& E\left[\frac{\partial {\stackrel{\u02dc}{\psi}}_{\nu}}{\partial s}\right]& E\left[\frac{\partial {\stackrel{\u02dc}{\psi}}_{\nu}}{\partial \nu}\right]\end{array}\right)$ (4.14)

where the expectations are taken with $F={F}_{\theta}$ . Straightforward but tedious calculations (see Appendix A in an early draft version of this paper available at SSRN, https://ssrn.com/abstract=4605604) show that

$M=\left(\begin{array}{ccc}\frac{1}{{s}^{2}}\left[1-\frac{2}{v+3}\right]& 0& 0\\ \frac{\Gamma \left(\left(\nu +1\right)/2\right)}{\Gamma \left(\nu /2\right)\cdot \sqrt{\nu \pi}}\cdot \frac{-2\left(v+1\right)}{\left(\nu +3\right)\cdot {s}^{2}}& \frac{1}{{s}^{2}}\cdot \frac{v}{v+3}& \frac{1}{2s}\left[\frac{1}{v+3}-\frac{1}{v+1}\right]\\ \frac{\Gamma \left(\left(\nu +1\right)/2\right)}{\Gamma \left(\nu /2\right)\cdot \sqrt{\nu \pi}}\cdot \frac{\nu -1}{\left(\nu +3\right)\cdot \left(\nu +1\right)\cdot v\cdot s}& \frac{1}{2s}\left(\frac{1}{v+3}-\frac{1}{v+1}\right)& \frac{1}{8}\left[{\Omega}^{\prime}\left(\frac{v}{2}\right)-{\Omega}^{\prime}\left(\frac{v+1}{2}\right)\right]-\frac{v+5}{4v\left(v+1\right)\left(v+3\right)}\end{array}\right)$

which has the form

$M=\left(\begin{array}{ccc}A& 0& 0\\ B& C& D\\ E& D& F\end{array}\right)$ (4.15)

with inverse

${M}^{-1}=\frac{1}{A\left(CF-{D}^{2}\right)}\left(\begin{array}{ccc}CF-{D}^{2}& 0& 0\\ ED-BF& AF& -AD\\ BD-CE& -AD& AC\end{array}\right).$ (4.16)
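The closed-form inverse (4.16) can be sanity-checked numerically against a generic matrix of the form (4.15); the entry values below are arbitrary:

```python
import numpy as np

# arbitrary entries for a matrix of the form (4.15)
A, B, C, D, E, F = 2.0, 0.3, 1.5, 0.4, -0.2, 0.9
M = np.array([[A, 0, 0],
              [B, C, D],
              [E, D, F]])
det = A * (C * F - D ** 2)
# the closed-form inverse (4.16)
Minv = (1 / det) * np.array([[C * F - D ** 2, 0,      0],
                             [E * D - B * F,  A * F, -A * D],
                             [B * D - C * E, -A * D,  A * C]])
assert np.allclose(Minv, np.linalg.inv(M))
```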

The left-hand plot in Figure 2 displays the parametric t-distribution ES estimator influence function for tail probability $\gamma =5\%$ , with degrees of freedom 10, mean 0.12 and standard deviation 0.24. Except for the curious negative bump for small positive values of the return *r*, this influence function is essentially symmetric about *μ* = 0.12, as we pointed out in the last paragraph of Section 3. Thus, the parametric t-distribution ES influence function suffers from a lack of monotonicity in a manner similar to that of the parametric normal distribution ES estimator influence function in the right-hand plot of Figure 1.

On the other hand, the right-hand plot in Figure 2 displays the parametric t-distribution semi-scale M-estimator influence function, with the same $\mu ,s,\nu $ parameters as for the influence function in the left-hand plot. But now, except for the curious negative bump for small negative values of the return *r*, this influence function has the desirable monotonic decreasing character similar to that of the nonparametric ES influence function in the left-hand plot of Figure 1.

Figure 3. Influence functions of ES semi-scale M-estimators with monthly $\mu =1\%,s=7\%$ , and annual SR = 0.5.

Figure 3 gives a good feeling for how the shape of the parametric t-distribution ES M-estimator influence function changes with the t-distribution degrees of freedom and the ES tail probability, for the four degrees of freedom parameter $\nu $ values 20, 10, 6, 3, and the three tail probability parameter $\gamma $ values 1%, 2.5%, 5%. These influence functions all have the desirable shape that there is essentially zero influence for positive values of return *r*, and positive influence that increases rapidly for decreasing negative return values. As one expects, the positive values of the influence function increase for each fixed negative *r* as the tail probability $\gamma $ decreases. The behavior of the shape of the influence functions for negative returns close to 0 as the degrees of freedom decrease from 20 to 3 is more subtle, with the shape being slightly convex for $\nu =20$ and slightly concave for $\nu =3$ . The latter is related to the fact, demonstrated in MZ2019 [6] , that the t-distribution maximum-likelihood ES influence function is logarithmically unbounded.

5. Implementation of the Parametric Semi-Scale ES Estimator

Now we propose to construct an ES semi-scale M-estimator for risk monitoring, with the following straightforward steps.

(1) First compute the t-distribution MLE estimates $\left(\stackrel{^}{v},{\stackrel{^}{s}}_{0},\stackrel{^}{\mu}\right)$ , for example using the Azzalini SN package [13] available on CRAN (https://cran.r-project.org/web/packages/sn/index.html).

(2) Compute the semi-scale parameter estimate as follows. Plug the $\stackrel{^}{\mu}$ and $\stackrel{^}{v}$ MLE’s from step one into the semi-scale score function ${\stackrel{\u02dc}{\psi}}_{s}\left(r;\mu ,s,\nu \right)$ . Then the semi-scale parameter estimate ${\stackrel{^}{s}}_{semi}$ can be computed by solving the equation:

$\underset{i=1}{\overset{n}{\sum}}{\stackrel{\u02dc}{\psi}}_{s}\left({r}_{i};\stackrel{^}{\mu},s,\stackrel{^}{v}\right)=0.$

Note that the above summation has the form

$\begin{array}{c}{\displaystyle \underset{i=1}{\overset{n}{\sum}}{\stackrel{\u02dc}{\psi}}_{s}\left({r}_{i};\stackrel{^}{\mu},s,\stackrel{^}{v}\right)}={\displaystyle \underset{i=1}{\overset{n}{\sum}}\left[\frac{\left(\stackrel{^}{v}+1\right){\left({r}_{i}-\stackrel{^}{\mu}\right)}^{2}\cdot {I}_{\left(-\infty ,\stackrel{^}{\mu}\right]}\left({r}_{i}\right)}{\stackrel{^}{v}{s}^{3}+s\cdot {\left({r}_{i}-\stackrel{^}{\mu}\right)}^{2}}-\frac{1}{2s}\right]}\\ =\frac{1}{s}\cdot {\displaystyle \underset{i=1}{\overset{n}{\sum}}\left[\frac{\left(\stackrel{^}{v}+1\right){\left({r}_{i}-\stackrel{^}{\mu}\right)}^{2}\cdot {I}_{\left(-\infty ,\stackrel{^}{\mu}\right]}\left({r}_{i}\right)}{\stackrel{^}{v}{s}^{2}+{\left({r}_{i}-\stackrel{^}{\mu}\right)}^{2}}-\frac{1}{2}\right]}\\ =\frac{\stackrel{^}{v}+1}{s}\cdot \left[{\displaystyle \underset{i=1}{\overset{n}{\sum}}\frac{{\left({r}_{i}-\stackrel{^}{\mu}\right)}^{2}\cdot {I}_{\left(-\infty ,\stackrel{^}{\mu}\right]}\left({r}_{i}\right)}{\stackrel{^}{v}{s}^{2}+{\left({r}_{i}-\stackrel{^}{\mu}\right)}^{2}}}-\frac{n}{2\left(\stackrel{^}{v}+1\right)}\right]\end{array}$

and to compute
${\stackrel{^}{s}}_{semi}$ , one just needs to solve the following equation whose left-hand side is strictly monotonic in *s*:

$\underset{i=1}{\overset{n}{{\displaystyle \sum}}}\frac{{\left({r}_{i}-\stackrel{^}{\mu}\right)}^{2}\cdot {I}_{\left(-\infty ,\stackrel{^}{\mu}\right]}\left({r}_{i}\right)}{\stackrel{^}{v}{s}^{2}+{\left({r}_{i}-\stackrel{^}{\mu}\right)}^{2}}-\frac{n}{2\left(\stackrel{^}{v}+1\right)}=0$ .

Any simple search algorithm will suffice, for example the Newton-Raphson based rootSolve package available on CRAN (https://CRAN.R-project.org/package=rootSolve).
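Steps (1)-(2) can be sketched in Python, substituting scipy's Brent root finder for rootSolve; the function name, bracketing strategy, and simulated returns are our own illustrations, with $\stackrel{^}{\mu}$ and $\stackrel{^}{v}$ taken as given:

```python
import numpy as np
from scipy.optimize import brentq

def solve_s_semi(r, mu_hat, nu_hat):
    """Solve the monotone semi-scale estimating equation for s:
    sum_i d_i^2 I(r_i <= mu) / (nu s^2 + d_i^2) = n / (2 (nu + 1)),
    whose left-hand side is strictly decreasing in s."""
    r = np.asarray(r, dtype=float)
    d2 = (r - mu_hat) ** 2
    below = (r <= mu_hat).astype(float)
    n = len(r)

    def g(s):
        return np.sum(d2 * below / (nu_hat * s ** 2 + d2)) - n / (2 * (nu_hat + 1))

    lo, hi = 1e-8, 1.0
    while g(hi) > 0:      # expand the bracket until g changes sign
        hi *= 2.0
    return brentq(g, lo, hi)

# hypothetical example: t-distributed returns with true mu = 1%, s = 5%, nu = 6
rng = np.random.default_rng(0)
r = 0.01 + 0.05 * rng.standard_t(6, size=2000)
s_semi_hat = solve_s_semi(r, 0.01, 6.0)   # should land near the true scale 0.05
```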

(3) Finally, plug $\left(\stackrel{^}{v},{\stackrel{^}{s}}_{semi},\stackrel{^}{\mu}\right)$ into the parametric t-distribution ES expression (2.5) to obtain the ES semi-scale M-estimator:

$E{S}_{\gamma}\left(\stackrel{^}{\mu},{\stackrel{^}{s}}_{semi},\stackrel{^}{v}\right)=-\stackrel{^}{\mu}+\frac{{g}_{\gamma ,\stackrel{^}{v}}}{\gamma}\cdot {\stackrel{^}{s}}_{semi}.$
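Step (3) can be sketched in Python, assuming the standard location-scale t-distribution ES formula for (2.5), in which ${g}_{\gamma ,\nu}/\gamma ={f}_{\nu}\left({q}_{\gamma}\right)\left(\nu +{q}_{\gamma}^{2}\right)/\left(\left(\nu -1\right)\gamma \right)$ with ${q}_{\gamma}$ the γ-quantile of the standard t-distribution; this closed form is not reproduced from the paper and should be checked against (2.5):

```python
from scipy import stats

def es_t(mu_hat, s_hat, nu_hat, gamma=0.05):
    """Location-scale t-distribution ES, reported as a positive loss:
    ES_gamma = -mu + s * f_nu(q) * (nu + q^2) / ((nu - 1) * gamma),
    where q = t.ppf(gamma, nu) is the standard t gamma-quantile."""
    q = stats.t.ppf(gamma, nu_hat)
    g_over_gamma = stats.t.pdf(q, nu_hat) * (nu_hat + q ** 2) / ((nu_hat - 1) * gamma)
    return -mu_hat + g_over_gamma * s_hat

es = es_t(0.01, 0.05, 6.0, gamma=0.05)
```

This closed form agrees with the defining tail integral $E{S}_{\gamma}=-\frac{1}{\gamma}{\int}_{-\infty}^{VaR}r\text{\hspace{0.17em}}f\left(r\right)\text{d}r$ , which provides a numerical check.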

It remains to carry out some empirical studies of the performance of this risk estimator.

6. ES Semi-Scale M-Estimator Asymptotic Variance

The asymptotic variance of a consistent M-estimator ${\stackrel{^}{\theta}}_{n}=\theta \left({F}_{n}\right)$ of $\theta $ has the form

$V\left(\theta \left(F\right),F\right)={M}^{-1}\left(\psi ,F\right)\cdot Q\left(\psi ,F\right)\cdot {M}^{-1}{\left(\psi ,F\right)}^{\prime}$ (6.1)

where

$\begin{array}{c}Q\left(\psi ,F\right)={\displaystyle \int \psi \left(r;\theta \left(F\right)\right){\psi}^{\prime}\left(r;\theta \left(F\right)\right)\text{d}F\left(r\right)}\\ =\left(\begin{array}{ccc}E\left[{\stackrel{\u02dc}{\psi}}_{\mu}{\stackrel{\u02dc}{\psi}}_{\mu}\right]& E\left[{\stackrel{\u02dc}{\psi}}_{\mu}{\stackrel{\u02dc}{\psi}}_{s}\right]& E\left[{\stackrel{\u02dc}{\psi}}_{\mu}{\stackrel{\u02dc}{\psi}}_{v}\right]\\ E\left[{\stackrel{\u02dc}{\psi}}_{s}{\stackrel{\u02dc}{\psi}}_{\mu}\right]& E\left[{\stackrel{\u02dc}{\psi}}_{s}{\stackrel{\u02dc}{\psi}}_{s}\right]& E\left[{\stackrel{\u02dc}{\psi}}_{s}{\stackrel{\u02dc}{\psi}}_{v}\right]\\ E\left[{\stackrel{\u02dc}{\psi}}_{\nu}{\stackrel{\u02dc}{\psi}}_{\mu}\right]& E\left[{\stackrel{\u02dc}{\psi}}_{\nu}{\stackrel{\u02dc}{\psi}}_{s}\right]& E\left[{\stackrel{\u02dc}{\psi}}_{\nu}{\stackrel{\u02dc}{\psi}}_{\nu}\right]\end{array}\right)\end{array}$ (6.2)

See for example Hampel *et al.* (1986) [10] . In the special case of MLE estimators both
$M$ and
$Q$ reduce to the information matrix
$I\left(\theta \right)$ and the expression (6.1) reduces to
$I{\left(\theta \right)}^{-1}$ as expected.
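Given numerical values of M and Q, evaluating the sandwich formula (6.1) is immediate; the sketch below uses a placeholder information matrix rather than the closed-form entries of this section:

```python
import numpy as np

def asymptotic_variance(M, Q):
    """Sandwich asymptotic covariance V = M^{-1} Q (M^{-1})' of (6.1)."""
    Minv = np.linalg.inv(M)
    return Minv @ Q @ Minv.T

# MLE special case: M = Q = I(theta), so (6.1) reduces to I(theta)^{-1}
I_theta = np.array([[2.0, 0.3],
                    [0.3, 1.0]])   # placeholder information matrix
V = asymptotic_variance(I_theta, I_theta)
assert np.allclose(V, np.linalg.inv(I_theta))
```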

Straightforward but tedious derivations (see Appendix B in an early draft version of this paper available at SSRN, https://ssrn.com/abstract=4605604) give the following expressions:

$\begin{array}{c}E\left[{\psi}_{\nu}{\psi}_{\nu}\right]=\frac{1}{16}{\left[\Omega \left(\frac{v+1}{2}\right)-\Omega \left(\frac{v}{2}\right)-\frac{1}{v}\right]}^{2}+\frac{1}{8}\left[{\Omega}^{\prime}\left(\frac{v}{2}\right)-{\Omega}^{\prime}\left(\frac{v+1}{2}\right)\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}-\frac{1}{2v}\left[\frac{1}{v+1}-\frac{1}{2\left(v+3\right)}\right]\end{array}$ (6.3)

$E\left[{\psi}_{s}{\psi}_{s}\right]=\frac{1}{{s}^{2}}\left[\frac{\nu}{\nu +3}+\frac{1}{4}\right]$ (6.4)

$E\left[{\psi}_{\mu}{\psi}_{\mu}\right]=\frac{1}{{s}^{2}}\left[1-\frac{2}{v+3}\right]$ (6.5)

$E\left[{\psi}_{\nu}{\psi}_{s}\right]=\frac{1}{2s}\left[\frac{1}{\nu +3}-\frac{1}{\nu +1}\right]-\frac{1}{8s}\left[\Omega \left(\frac{v+1}{2}\right)-\Omega \left(\frac{v}{2}\right)-\frac{1}{\nu}\right]$ (6.6)

$E\left[{\psi}_{\nu}{\psi}_{\mu}\right]=\frac{\Gamma \left(\left(\nu +1\right)/2\right)}{\Gamma \left(\nu /2\right)\cdot \sqrt{\nu \pi}\cdot s}\cdot \frac{v-1}{\left(v+3\right)\left(v+1\right)v}$ (6.7)

$E\left[{\psi}_{s}{\psi}_{\mu}\right]=\frac{\Gamma \left(\left(\nu +1\right)/2\right)}{\Gamma \left(\nu /2\right)\cdot \sqrt{\nu \pi}\cdot {s}^{2}}\cdot \frac{-2\left(\nu +1\right)}{\nu +3}$ (6.8)

Since our t-distribution ES semi-scale M-estimator is a small modification of the t-distribution ES MLE, one expects that the increase in the asymptotic variance of this estimator, relative to a t-distribution MLE, will not be very great. Figure 4 and Figure 5, which are based on standard errors (SE’s) obtained as the square root of the asymptotic variances of the ES semi-scale estimator using 5 degrees of freedom, confirm that the increase is indeed modest.

Figure 4. Asymptotic standard errors of the t-distribution ES semi-scale M-estimator and of the ES MLE, for a t-distribution with $\mu =0,s=1,\nu =5$ .

Figure 5. Ratio of standard errors of t-distribution ES estimators: semi-scale M-estimator versus maximum-likelihood ES estimator, for 5 degrees of freedom.

7. Concluding Comments

We have introduced a new ES semi-scale M-estimator by replacing the scale component of a t-distribution joint MLE of location, scale, and degrees of freedom with a semi-scale estimator, and we derived the new estimator’s influence function and asymptotic variance formula. The mathematical form of the new estimator’s estimating equation and influence function shows that the estimator avoids the unsatisfactory behavior of the t-distribution ES MLE, for which (large) positive returns indicate (large) risk.

Since an ES semi-scale M-estimator influence function is not exactly monotonic decreasing as returns increase, we cannot assert that this ES is a coherent risk measure, as is a mean semi-deviation estimator. However, the fact that such influence functions are nearly monotonic decreasing suggests that ES semi-scale M-estimators will have good properties for risk reporting. What is needed now is an in-depth empirical study of the relative performance of the ES semi-scale M-estimators and the mean semi-deviation coherent risk measures, using both simulated and real returns whose distributions range from approximately normal to moderately fat-tailed (e.g., t-distributions with 8 to 12 degrees of freedom), and to very fat-tailed distributions (e.g., t-distributions with 3 to 6 degrees of freedom).

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] JPMorgan/Reuters (1996) RiskMetrics—Technical Document. 4th Edition. https://www.msci.com/documents/10199/5915b101-4206-4ba0-aee2-3449d5c7e95a

[2] Artzner, P., Delbaen, F., Eber, J.M. and Heath, D. (1999) Coherent Measures of Risk. Mathematical Finance, 9, 203-228. https://doi.org/10.1111/1467-9965.00068

[3] McNeil, A.J., Frey, R. and Embrechts, P. (2015) Quantitative Risk Management. Princeton University Press, Princeton.

[4] Rockafellar, R.T. and Uryasev, S. (2000) Optimization of Conditional Value-at-Risk. Journal of Risk, 2, 21-41. https://doi.org/10.21314/JOR.2000.038

[5] Fischer, T. (2003) Risk Capital Allocation by Coherent Risk Measures Based on One-Sided Moments. Insurance: Mathematics and Economics, 32, 135-146. https://doi.org/10.1016/S0167-6687(02)00209-3

[6] Martin, R.D. and Zhang, S. (2019) Non-Parametric versus Parametric Expected Shortfall. Journal of Risk, 21, 1-41. https://doi.org/10.21314/JOR.2019.416

[7] Jorion, P. (2007) Value-at-Risk. 3rd Edition, McGraw-Hill, New York.

[8] Hampel, F.R. (1974) The Influence Curve and Its Role in Robust Estimation. Journal of the American Statistical Association, 69, 383-393. https://doi.org/10.1080/01621459.1974.10482962

[9] Zhang, S. (2016) Two Equivalent Parametric Expected Shortfall Formulas for T-Distributions. https://ssrn.com/abstract=2883935

[10] Hampel, F.R., Ronchetti, E.M., Rousseeuw, P.J. and Stahel, W.A. (1986) Robust Statistics: The Approach Based on Influence Functions. John Wiley & Sons, New York.

[11] Zhang, S., Martin, R.D. and Christidis, A.A. (2021) Influence Functions for Risk and Performance Estimators. Journal of Mathematical Finance, 11, 1-33. https://doi.org/10.4236/jmf.2021.111002

[12] Lucas, A. (1997) Robustness of the Student-t Based M-Estimator. Communications in Statistics - Theory and Methods, 26, 1165-1182. https://doi.org/10.1080/03610929708831974

[13] Azzalini, A. (2023) The R Package sn. http://azzalini.stat.unipd.it/SN/


Copyright © 2024 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.