An Analytical Portfolio Credit Risk Model Based on the Extended Binomial Distribution
1. Introduction
Many projects are composed of several subprojects. In general, the probabilities of success of the individual subprojects are not uniform. Furthermore, the subprojects carry different weights, in other words different values for the overall project. The aim of this study is to describe the distribution of the value of the overall project.
Abstracting from the specific values of the subprojects, the problem can be modeled by the general binomial distribution (Fisz, 1981), also called the Poisson binomial distribution, or by the Bernoulli mixture distribution (McNeil et al., 2005). Under the further abstraction of identical success probabilities, the experimental arrangement can be described by the binomial distribution. The Panjer (1981) recursion also makes it possible to determine the probability mass function and the cumulative distribution function of the binomial distribution for arrangements with a large number of subprojects. The calculation of the Poisson binomial distribution, however, is very complex and feasible only for small experimental arrangements.
Since the turn of the millennium, the task has been taken up again in connection with the modeling of loss estimates of loan portfolios. It is undisputed that the loss distribution of a heterogeneous loan portfolio can be described by the idea of the binomial distribution. However, until now no way has been found to tackle the task directly. For its solution, mainly simulation techniques have been used (KMV, 1997; J.P. Morgan, 1997; Wilson, 1998; McKinsey and Company, 1998). The only alternative, a direct calculation using Poisson approximations, was presented by CSFB (1997) with CreditRisk+.
This study describes for the first time a method to calculate the weighted extended binomial distribution directly. The key feature of this method is the exploitation of a bijection between the elementary events of the binomial distribution and the digit sequences of binary numbers. The description of a numerical implementation for the analytically exact calculation of the distribution function is unique for genuinely qualitatively and quantitatively heterogeneous Bernoulli processes (see Section 2). In particular, the rigorous calculation of the range of the distribution is of practical value for risk management in the quantification of tail risks and the assessment of risk concentrations (see Sections 5 and 6).
Nevertheless, the use of the model is subject to numerical limitations. In general, the number of sub-trials is limited. For approaches with a limited range of weights, or with uniform weights, these restrictions can be significantly relaxed (see Section 4).
Notations used in this paper follow the general mathematical literature (see Appendix).
2. Extended Binomial Distribution
The simplest discrete distribution is the Bernoulli distribution. It only records whether a particular event X was successful (X = 1) or has failed (X = 0). The probability of X = 1 is
$P\left(X=1\right)=p$ and that of the complementary event X = 0 is
$P\left(X=0\right)=\text{1}-p$. In comparison to the Bernoulli distribution, the binomial distribution is the hierarchically higher-order distribution. It describes random variables based on the so-called Bernoulli trial scheme: n identical, independent trials of a Bernoulli distributed random variable are performed. The number of trials in which the event X = 1 occurs describes one outcome of the binomially distributed random variable.
In this paper, an extension of the binomial distribution is described that drops the restriction of identical success probabilities p across the trials. In order to additionally be able to weight the trials differently when required, they are equipped with specific weighting parameters.
Bernoulli distributed random variables have only two outcomes and can therefore be represented in binary code. This makes it possible to map the Bernoulli trial scheme for n trials one-to-one onto a matrix with n columns (the number of trials) and 2^{n} rows (the number of possible combinations of outcomes) with elements zero and one. This matrix is called the scenario matrix. The scenario matrix plays the central role in the description of the following distribution.
Definition 1:
Let
${X}_{j},j=1,\cdots ,n$ be independent Bernoulli distributed random variables with probabilities p_{j}. Here, X_{j} = 1 denotes the occurrence of the event and X_{j} = 0 the complementary event, with individual probabilities
${P}_{j}\left({X}_{j}=1\right)={p}_{j}$ and
${P}_{j}\left({X}_{j}=0\right)=1-{p}_{j}$. The random variables X_{j} have finite weights w_{j}: |w_{j}| < ∞ for all
$j=1,\cdots ,n$. The possible combinations of events of the
$j=1,\cdots ,n$ random variables X_{j} are represented by the scenario matrix
$S\in {R}^{{2}^{n}\times n}$ with components
${s}_{ij}\in \left\{0,1\right\}$. The 2^{n} rows of the scenario matrix are the digit sequences of the binary numbers from 0 to 2^{n} − 1.
Then
${f}_{i}={\displaystyle \underset{j=1}{\overset{n}{\prod}}\left({p}_{j}^{{s}_{ij}}\cdot {\left(1-{p}_{j}\right)}^{1-{s}_{ij}}\right)}$,
$i=1,\cdots ,{2}^{n}$ (1)
are the individual probabilities and
${d}_{i}={\displaystyle \underset{j=1}{\overset{n}{\sum}}{w}_{j}\cdot {s}_{ij}}$ (2)
their quantitative expressions. The distribution described by the function
${F}_{X}\left(t\right)=P\left(X<t\right)={\displaystyle \underset{{d}_{i}<t}{\sum}{f}_{i}}={\displaystyle \underset{{d}_{i}<t}{\sum}{\displaystyle \underset{j=1}{\overset{n}{\prod}}\left({p}_{j}^{{s}_{ij}}\cdot {\left(1-{p}_{j}\right)}^{1-{s}_{ij}}\right)}}$ (3)
is called the extended binomial distribution.
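The definition above can be sketched directly in code. The following is a minimal illustration (function and variable names are our own, not from the paper): the 2^n rows of the scenario matrix are enumerated as binary digit sequences, and Equations (1)-(3) are evaluated verbatim. The exponential cost restricts this to small n, as discussed in Section 4.

```python
# Minimal sketch of Definition 1: enumerate the scenario matrix row by row,
# compute the individual probabilities f_i (Eq. 1) and the quantitative
# expressions d_i (Eq. 2), then evaluate F_X(t) = P(X < t) (Eq. 3).
from itertools import product

def extended_binomial_cdf(p, w, t):
    """CDF F_X(t) = P(X < t) for n independent Bernoulli trials with
    success probabilities p[j] and weights w[j]."""
    total = 0.0
    # each tuple s is one row of the scenario matrix (a binary number's digits)
    for s in product((0, 1), repeat=len(p)):
        f_i = 1.0
        d_i = 0.0
        for p_j, w_j, s_ij in zip(p, w, s):
            f_i *= p_j if s_ij else (1.0 - p_j)
            d_i += w_j * s_ij
        if d_i < t:          # strict inequality: left-continuous CDF
            total += f_i
    return total

# example: three trials with different probabilities and weights
p = [0.1, 0.3, 0.5]
w = [100.0, 50.0, 20.0]
print(extended_binomial_cdf(p, w, 1e9))  # whole support -> 1.0 (up to rounding)
```

Evaluating the CDF just above the smallest positive outcome isolates the all-zero scenario, e.g. `extended_binomial_cdf(p, w, 1.0)` returns (1 − 0.1)(1 − 0.3)(1 − 0.5) = 0.315.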
The definition is meaningful only if function (3) is a distribution function. To show this, the criteria of the following theorem should be checked.
Theorem: (Fisz, 1981; Gnedenko, 1987)
A real-valued function F(x) is a distribution function if and only if
1) The two conditions F(−∞) = 0 and F(+∞) = 1 are satisfied,
2) It is monotonically non-decreasing and
3) It is left-continuous.
The distribution is described by a finite number of finite weights w_{j}. Since
${s}_{ij}\in \left\{0,1\right\}$, the sum of the quantitative characteristics (2) is bounded below by the sum of all negative weights
$B={\displaystyle \underset{\begin{array}{c}j=1\\ {w}_{j}<0\end{array}}{\overset{n}{\sum}}{w}_{j}}$. Thus summations over f_{i} in
Equation (3) for all t < B are summations over the empty set. That means F(t) = 0 for all t < B and in particular F(−∞) = 0. Moreover, if all weights are non-negative, F(t) is identically zero for all t < 0 and the condition F(−∞) = 0 is also satisfied.
The proof of F(+∞) = 1 is carried out by complete induction. To this end, all individual probabilities
${f}_{i},\text{}i=1,\cdots ,{2}^{n}$ in Equation (3) have to be summed. First, it must be verified that the statement holds for the base case n = 1:
$F\left(\infty \right)={p}_{1}^{{s}_{11}}\cdot {\left(1-{p}_{1}\right)}^{1-{s}_{11}}+{p}_{1}^{{s}_{21}}\cdot {\left(1-{p}_{1}\right)}^{1-{s}_{21}}$.
Without loss of generality, let s_{11} = 1 and s_{21} = 0. This implies
$F\left(\infty \right)={p}_{1}^{1}\cdot {\left(1-{p}_{1}\right)}^{0}+{p}_{1}^{0}\cdot {\left(1-{p}_{1}\right)}^{1}={p}_{1}+\left(1-{p}_{1}\right)=1$.
The induction hypothesis is that F(+∞) = 1 holds for n = k:
$F\left(\infty \right)={\displaystyle \underset{i=1}{\overset{{2}^{k}}{\sum}}{\displaystyle \underset{j=1}{\overset{k}{\prod}}\left({p}_{j}^{{s}_{ij}}\cdot {\left(1-{p}_{j}\right)}^{1-{s}_{ij}}\right)}}=1$.
In the inductive step it must be shown that the statement holds for n = k + 1:
$F\left(\infty \right)={\displaystyle \underset{i=1}{\overset{{2}^{k+1}}{\sum}}{\displaystyle \underset{j=1}{\overset{k+1}{\prod}}\left({p}_{j}^{{s}_{ij}}\cdot {\left(1-{p}_{j}\right)}^{1-{s}_{ij}}\right)}}$.
To this end, the products under the summation sign are decomposed:
$F\left(\infty \right)={\displaystyle \underset{i=1}{\overset{{2}^{k+1}}{\sum}}\left(\left({p}_{k+1}^{{s}_{ik+1}}\cdot {\left(1-{p}_{k+1}\right)}^{1-{s}_{ik+1}}\right)\cdot {\displaystyle \underset{j=1}{\overset{k}{\prod}}\left({p}_{j}^{{s}_{ij}}\cdot {\left(1-{p}_{j}\right)}^{1-{s}_{ij}}\right)}\right)}$.
The coefficients
${s}_{ik+1}$ comprise 2^{k} coefficients equal to zero and 2^{k} coefficients equal to one, each paired with an identical factor under the product sign. The sum is split according to the values of the coefficients
${s}_{ik+1}$:
$\begin{array}{c}F\left(\infty \right)={\displaystyle \underset{\begin{array}{c}i=1\\ {s}_{ik+1}=1\end{array}}{\overset{{2}^{k}}{\sum}}\left(\left({p}_{k+1}^{{s}_{ik+1}}\cdot {\left(1-{p}_{k+1}\right)}^{1-{s}_{ik+1}}\right)\cdot {\displaystyle \underset{j=1}{\overset{k}{\prod}}\left({p}_{j}^{{s}_{ij}}\cdot {\left(1-{p}_{j}\right)}^{1-{s}_{ij}}\right)}\right)}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+{\displaystyle \underset{\begin{array}{c}i=1\\ {s}_{ik+1}=0\end{array}}{\overset{{2}^{k}}{\sum}}\left(\left({p}_{k+1}^{{s}_{ik+1}}\cdot {\left(1-{p}_{k+1}\right)}^{1-{s}_{ik+1}}\right)\cdot {\displaystyle \underset{j=1}{\overset{k}{\prod}}\left({p}_{j}^{{s}_{ij}}\cdot {\left(1-{p}_{j}\right)}^{1-{s}_{ij}}\right)}\right)}\end{array}$
The coefficients
${s}_{ik+1}$ are replaced by their concrete values one and zero. It then follows that
$\begin{array}{c}F\left(\infty \right)={\displaystyle \underset{i=1}{\overset{{2}^{k}}{\sum}}\left({p}_{k+1}\cdot {\displaystyle \underset{j=1}{\overset{k}{\prod}}\left({p}_{j}^{{s}_{ij}}\cdot {\left(1-{p}_{j}\right)}^{1-{s}_{ij}}\right)}\right)}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+{\displaystyle \underset{i=1}{\overset{{2}^{k}}{\sum}}\left(\left(1-{p}_{k+1}\right)\cdot {\displaystyle \underset{j=1}{\overset{k}{\prod}}\left({p}_{j}^{{s}_{ij}}\cdot {\left(1-{p}_{j}\right)}^{1-{s}_{ij}}\right)}\right)}\\ ={\displaystyle \underset{i=1}{\overset{{2}^{k}}{\sum}}\left(\left({p}_{k+1}+1-{p}_{k+1}\right)\cdot {\displaystyle \underset{j=1}{\overset{k}{\prod}}\left({p}_{j}^{{s}_{ij}}\cdot {\left(1-{p}_{j}\right)}^{1-{s}_{ij}}\right)}\right)}\end{array}$
From the induction hypothesis it follows that F(∞) = 1 for n = k + 1. Thus condition (1) of the theorem is satisfied.
As t increases, the number of summands in function (3) increases. To show monotonicity, it must be shown that all summands are non-negative. The summands are themselves products, whose factors have the structure
${p}_{j}^{{s}_{ij}}\cdot {\left(1-{p}_{j}\right)}^{1-{s}_{ij}}$. Distinction has to be made between two cases:
s_{ij} = 0: the factor reduces to (1 − p_{j}), which is non-negative since
$0\le {p}_{j}\le 1$.
s_{ij} = 1: only the factor p_{j} remains, which is non-negative.
So function (3) also satisfies condition (2) of the theorem.
Function (3) is a step function and, because of the strict inequality in
${F}_{X}\left(t\right)=P\left(X<t\right)$, it is left-continuous. So function (3) also satisfies condition (3) of the theorem, and it is therefore a distribution function.
To illustrate Definition 1, the probability mass function and the cumulative distribution function are considered for the following example in Table 1 and Figure 1.
Figure 1. Probability mass function and cumulative distribution function.
3. Moments and Characteristics of the Extended Binomial Distribution
The extended binomially distributed random variable X of Definition 1 is a linear combination of independent Bernoulli distributed random variables
${X}_{j},j=1,\cdots ,n$ with probabilities p_{j} and with the weights w_{j} as linear coefficients. Based on this, the moments of the extended binomial distribution are determined in the following.
The expected value of a random variable is generated by the expected value operator, which is linear (Fisz, 1981; Rényi, 1971). That means, for a sum of a finite number of random variables
${X}_{j},j=1,\cdots ,n$ and real numbers
${\alpha}_{j},j=1,\cdots ,n$, it holds that
$E\left({\alpha}_{1}\cdot {X}_{1}+\cdots +{\alpha}_{n}\cdot {X}_{n}\right)={\alpha}_{1}\cdot E\left({X}_{1}\right)+\cdots +{\alpha}_{n}\cdot E\left({X}_{n}\right)$. Bernoulli distributed random variables X_{j} have expected values
$E\left({X}_{j}\right)={p}_{j}$. For the expected value
of the extended binomial distributed random variable
$X={\displaystyle \underset{j=1}{\overset{n}{\sum}}{w}_{j}\cdot {X}_{j}}$ follows
$E\left(X\right)={\displaystyle \underset{j=1}{\overset{n}{\sum}}{w}_{j}\cdot {p}_{j}}$.
The variance of a sum of independent random variables X_{j} and real numbers
${\alpha}_{j},j=1,\cdots ,n$ is given by
${D}^{2}\left({\alpha}_{1}\cdot {X}_{1}+\cdots +{\alpha}_{n}\cdot {X}_{n}\right)={\alpha}_{1}^{2}\cdot {D}^{2}\left({X}_{1}\right)+\cdots +{\alpha}_{n}^{2}\cdot {D}^{2}\left({X}_{n}\right)$ (Rényi, 1971). Bernoulli distributed random variables X_{j} have the variances
${D}^{2}\left({X}_{j}\right)={p}_{j}\cdot \left(1-{p}_{j}\right)$. For the variance of the extended binomial distributed random variable
$X={\displaystyle \underset{j=1}{\overset{n}{\sum}}{w}_{j}\cdot {X}_{j}}$ follows
${D}^{2}\left(X\right)={\displaystyle \underset{j=1}{\overset{n}{\sum}}{w}_{j}^{2}\cdot {p}_{j}}\cdot \left(1-{p}_{j}\right)$.
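As a sanity check (the helper names below are illustrative, not from the paper), the closed-form moments can be compared against the moments obtained from the full scenario enumeration of Definition 1:

```python
# Verify that E(X) = sum w_j p_j and D^2(X) = sum w_j^2 p_j (1 - p_j)
# agree with the moments computed from the 2^n scenarios of Definition 1.
from itertools import product

def moments_closed_form(p, w):
    mean = sum(w_j * p_j for w_j, p_j in zip(w, p))
    var = sum(w_j**2 * p_j * (1 - p_j) for w_j, p_j in zip(w, p))
    return mean, var

def moments_by_enumeration(p, w):
    mean = second = 0.0
    for s in product((0, 1), repeat=len(p)):
        f_i = 1.0
        d_i = 0.0
        for p_j, w_j, s_ij in zip(p, w, s):
            f_i *= p_j if s_ij else (1.0 - p_j)
            d_i += w_j * s_ij
        mean += f_i * d_i          # E(X) = sum f_i * d_i
        second += f_i * d_i**2     # E(X^2)
    return mean, second - mean**2  # variance = E(X^2) - E(X)^2

p = [0.1, 0.3, 0.5]
w = [100.0, 50.0, 20.0]
print(moments_closed_form(p, w))     # ~ (35.0, 1525.0)
print(moments_by_enumeration(p, w))  # same values up to rounding
```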
The probability generating function G_{X}(z) of a discrete random variable X is defined by
${G}_{X}\left(z\right)={\displaystyle \underset{k=0}{\overset{\infty}{\sum}}P\left(X=k\right)\cdot {z}^{k}}={\displaystyle \underset{k=0}{\overset{\infty}{\sum}}{p}_{k}\cdot {z}^{k}}$. The probability generating
function of an α-multiple of a random variable X is
${G}_{\alpha \cdot X}\left(z\right)={G}_{X}\left({z}^{\alpha}\right)$ (Gribakin, 2002) . Furthermore, the probability generating function of the sum of independent random variables
${X}_{j},j=1,\cdots ,n$ equals the product of their probability generating functions
${G}_{{X}_{1}+\cdots +{X}_{n}}\left(z\right)={G}_{{X}_{1}}\left(z\right)\cdots {G}_{{X}_{n}}\left(z\right)$. From both it follows that the probability generating function of a linear combination of independent random variables
${\alpha}_{1}\cdot {X}_{1}+\cdots +{\alpha}_{n}\cdot {X}_{n}$ is
${G}_{{\alpha}_{1}\cdot {X}_{1}+\cdots +{\alpha}_{n}\cdot {X}_{n}}\left(z\right)={G}_{{X}_{1}}\left({z}^{{\alpha}_{1}}\right)\cdots {G}_{{X}_{n}}\left({z}^{{\alpha}_{n}}\right)$ (Gribakin, 2002) . The probability generating function of a Bernoulli distributed random variable X_{j} is given by
${G}_{{X}_{j}}\left(z\right)=1-{p}_{j}+{p}_{j}\cdot z$ (Rényi, 1971) . For the probability generating function
of the extended binomial distributed random variable
$X={\displaystyle \underset{j=1}{\overset{n}{\sum}}{w}_{j}\cdot {X}_{j}}$ follows
${G}_{X}\left(z\right)={\displaystyle \underset{j=1}{\overset{n}{\prod}}\left(1-{p}_{j}+{p}_{j}\cdot {z}^{{w}_{j}}\right)}$.
The characteristic function φ_{X}(t) of a random variable X is defined by the expected value
${\phi}_{X}\left(t\right)=E\left({\text{e}}^{itX}\right)$. The characteristic function of the linear expression
$\alpha \cdot X+\beta $ is
${\phi}_{\alpha \cdot X+\beta}\left(t\right)={\text{e}}^{it\beta}\cdot {\phi}_{X}\left(\alpha \cdot t\right)$. Furthermore, the characteristic function of the sum of independent random variables
${X}_{j},j=1,\cdots ,n$ equals the product of their characteristic functions
${\phi}_{{X}_{1}+\cdots +{X}_{n}}\left(t\right)={\phi}_{{X}_{1}}\left(t\right)\cdots {\phi}_{{X}_{n}}\left(t\right)$. From both it follows that the characteristic function of a linear combination of independent random variables
${\alpha}_{1}\cdot {X}_{1}+\cdots +{\alpha}_{n}\cdot {X}_{n}$ is
${\phi}_{{\alpha}_{1}\cdot {X}_{1}+\cdots +{\alpha}_{n}\cdot {X}_{n}}\left(t\right)={\phi}_{{X}_{1}}\left({\alpha}_{1}\cdot t\right)\cdots {\phi}_{{X}_{n}}\left({\alpha}_{n}\cdot t\right)$ (Lukacs, 1960) . The characteristic function of Bernoulli distributed random variables X_{j} is given by
${\phi}_{X{}_{j}}\left(t\right)=1-{p}_{j}+{p}_{j}\cdot {\text{e}}^{it}$. For the characteristic function of the extended binomial distributed random variable
$X={\displaystyle \underset{j=1}{\overset{n}{\sum}}{w}_{j}\cdot {X}_{j}}$ follows
${\phi}_{X}\left(t\right)={\displaystyle \underset{j=1}{\overset{n}{\prod}}\left(1-{p}_{j}+{p}_{j}\cdot {\text{e}}^{i\cdot {w}_{j}\cdot t}\right)}$.
4. A Numerical Approach to Applying the Extended Binomial Distribution to a Higher Number of Independent Trials
The computational effort for calculating the extended binomial distribution doubles with each additional trial, so the limits of computational feasibility are reached quickly. Under the additional condition that the weights are approximately of the same order of magnitude, the computational effort can be reduced significantly. The extended binomial distribution then becomes numerically applicable, approximately, to problems with a large number of trials.
In Definition 1 it is assumed that the trials X_{j} are independent of each other. Under this assumption, the extended binomial distribution can be calculated for problems with a large number of trials by:
• Splitting the overall trial into partial trials,
• Determining the distribution functions and probabilities for the partial trials separately, and
• Finally, aggregating the distributions of the partial trials successively into the distribution of the complete problem.
For aggregation the following calculus is used:
Definition 2 (Smirnow & Dunin-Barkowski, 1969) :
Let D ⊆ Z be a discrete subset of the integers and P_{1} and P_{2} be two functions with P_{i}: D → R for i = 1, 2. Then
$\left({P}_{1}\ast {P}_{2}\right)\left(X=n\right)={\displaystyle \underset{k\in D}{\sum}{P}_{1}\left(X=k\right)\cdot {P}_{2}\left(X=n-k\right)}$ is the discrete convolution of P_{1} and P_{2}.
The computational implementation of the convolution of the probability functions of extended binomially distributed random variables poses a practical problem. It results from the potentially large number of different quantitative manifestations (2). Moreover, Definition 1 does not require the weights w_{j} to be integers. To apply the calculus of convolution computationally efficiently, the probability functions are approximated before aggregation. For this purpose the quantitative manifestations of the extended binomial distributions are projected onto reference points (Figure 2). The projection is done by rounding the quantitative manifestations to integer multiples of a given discretization unit U:
${\stackrel{\u02dc}{d}}_{i}=round\left({d}_{i};U\right)=\left[{d}_{i}/U\right]\cdot U=\left[\frac{1}{U}\cdot {\displaystyle \underset{j=1}{\overset{n}{\sum}}{w}_{j}\cdot {s}_{ij}}\right]\cdot U$
Figure 2. Real and projected extended binomial distribution.
Because of the projection, the discretized probability functions take positive values only at equidistant points with spacing U.
Now, the probability functions of two partial trials at a time are aggregated by convolution. The resulting probability functions again take positive values only on multiples of U. With the resulting probability functions the aggregation is continued successively until the probability function of the complete problem has been calculated.
The approximation error caused by the projection is low if the weights w_{j} are approximately of the same order of magnitude. In this way the model of the extended binomial distribution becomes applicable to problems with a large number of trials. Up to this point, the meaning of "approximately the same order of magnitude" was not specified. The key role is played by the discretization unit U.
On the one hand, the discretization unit U should not be larger than the smallest weight, since otherwise the impact of the trials with smaller weights is neutralized in the approximation. On the other hand, the discretization unit U should not be smaller than a fraction of the greatest weight, since this would degrade the performance of the convolution. Experience shows that there are no significant performance impairments if one percent of the largest weight is chosen as the discretization unit. From both restrictions it can be derived that, in the context above, the weights are of approximately the same order of magnitude when the greatest weight does not exceed roughly one hundred times the smallest.
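The disassembling-and-aggregation scheme of this section can be sketched as follows (a simplified illustration with invented names; the rounding rule follows the projection onto integer multiples of U, and the convolution follows Definition 2):

```python
# Sketch of Section 4: discretize sub-trial loss PMFs to a grid of unit U,
# then aggregate them by discrete convolution (Definition 2).
from collections import defaultdict
from itertools import product

def pmf_on_grid(p, w, U):
    """PMF of a small sub-trial, with outcomes d_i rounded to multiples
    of U and stored by their integer grid index."""
    pmf = defaultdict(float)
    for s in product((0, 1), repeat=len(p)):
        f_i = 1.0
        d_i = 0.0
        for p_j, w_j, s_ij in zip(p, w, s):
            f_i *= p_j if s_ij else (1.0 - p_j)
            d_i += w_j * s_ij
        pmf[round(d_i / U)] += f_i   # projection onto the grid
    return pmf

def convolve(pmf1, pmf2):
    """Discrete convolution (P1 * P2)(n) = sum_k P1(k) * P2(n - k)."""
    out = defaultdict(float)
    for k1, q1 in pmf1.items():
        for k2, q2 in pmf2.items():
            out[k1 + k2] += q1 * q2
    return out

U = 1.0  # discretization unit, e.g. ~1% of the largest weight
part1 = pmf_on_grid([0.1, 0.2], [100.0, 80.0], U)
part2 = pmf_on_grid([0.3, 0.05], [60.0, 90.0], U)
total = convolve(part1, part2)
print(sum(total.values()))  # probabilities still sum to 1 (up to rounding)
```

Successive calls to `convolve` extend this from two partial trials to an arbitrary number, which is exactly the aggregation strategy described above.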
5. Application of the Extended Binomial Distribution
A project in this sense is the investment in a loan portfolio. A loan portfolio consists of a certain number of loans. Each loan has a specific exposure and its own probability of default. An estimation of the expected portfolio loss and the loss distribution is required for the management of the portfolio.
To illustrate the problem, a portfolio of four loans is considered. Usually a tree structure is used to represent the elementary events. The characteristic feature of the extended binomial distribution, the bijection between the tree structure and the scenario matrix, is shown in Figure 3.
Hereinafter concrete values are used in the example (Table 2).
Figure 3. Bijection between the tree structure and the scenario matrix.
Table 2. Example for a small portfolio.
The loss distribution is determined according to Equation (3) with
${F}_{X}\left(t\right)=P\left(X<t\right)={\displaystyle \underset{{d}_{i}<t}{\sum}{f}_{i}}$.
For portfolios with a larger number of loans, the portfolio is divided into partial portfolios as described in Section 4. The loss distributions are computed for the sub-portfolios; in doing so, the losses are rounded to integer multiples of a discretization unit U. Next, the loss distributions of the partial portfolios are successively aggregated by convolution until the loss distribution of the complete portfolio is determined, see Figure 4.
Figure 4. Strategy disassembling—calculation—aggregation.
By disassembling the problem and aggregating the partial results as described above, it is possible to determine the loss distribution for portfolios consisting of a few hundred loans. In this way, the numerical restrictions are relaxed, but not completely eliminated. What does this mean in practice?
A partial portfolio of the largest loans is taken from the complete portfolio. For this sub-portfolio the loss distribution is determined by the extended binomial distribution. What should be done with the remaining portfolio?
In the remaining portfolio, the largest loan accounts for only a few per mille of the complete portfolio. If the remaining portfolio consists of only a few loans, its influence on the loss distribution of the complete portfolio is marginal. This is not the case if the remaining portfolio consists of many loans. Then, because of the large number of small loans, the remaining portfolio is in general well diversified and heterogeneous. The loss distribution of such a portfolio can be well approximated by a Gaussian normal distribution. The parameters μ and σ of the normal approximation are (Fischer, 2012)
$\mu ={\displaystyle \underset{j=1}{\overset{n}{\sum}}{w}_{j}\cdot {\mu}_{j}}={\displaystyle \underset{j=1}{\overset{n}{\sum}}{w}_{j}\cdot {p}_{j}}$
which corresponds to the expected loss, and
${\sigma}^{2}={\displaystyle \underset{j=1}{\overset{n}{\sum}}{w}_{j}^{2}\cdot {\sigma}_{j}^{2}}={\displaystyle \underset{j=1}{\overset{n}{\sum}}{w}_{j}^{2}\cdot {p}_{j}\cdot \left(1-{p}_{j}\right)}$.
Here the w_{j} are the exposures, the p_{j} the probabilities of default, and n the number of loans of the remaining portfolio.
Finally, the loss distribution determined by the extended binomial distribution and the loss distribution of the remaining portfolio approximated by the Gaussian normal distribution are aggregated by convolution into the loss distribution of the complete portfolio.
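The normal approximation of the remaining portfolio can be discretized onto the same grid of unit U and then convolved with the exact sub-portfolio distribution. A sketch of the discretization step, with illustrative names and invented example data:

```python
# Discretize the normal approximation N(mu, sigma^2) of the remaining
# portfolio onto grid points k*U via CDF differences, so it can be
# convolved with the exact sub-portfolio loss distribution.
import math

def normal_pmf_on_grid(mu, sigma, U, k_min, k_max):
    """PMF on grid indices k: mass of N(mu, sigma^2) in ((k-0.5)U, (k+0.5)U]."""
    def cdf(x):
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
    return {k: cdf((k + 0.5) * U) - cdf((k - 0.5) * U)
            for k in range(k_min, k_max + 1)}

# remaining portfolio (toy data): exposures w_j, default probabilities p_j
w = [1.0, 0.8, 1.2, 0.9, 1.1]
p = [0.02, 0.01, 0.03, 0.02, 0.01]
mu = sum(w_j * p_j for w_j, p_j in zip(w, p))
sigma = math.sqrt(sum(w_j**2 * p_j * (1 - p_j) for w_j, p_j in zip(w, p)))

U = 0.05
grid = normal_pmf_on_grid(mu, sigma, U, -100, 200)
print(round(sum(grid.values()), 6))  # ~1.0 if the grid covers the mass
```

The resulting `grid` has the same form as the discretized sub-portfolio PMFs of Section 4, so the same convolution routine can aggregate both.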
6. Peripheral Modeling of Dependencies
In Definition 1 it is assumed that the trials X_{j} are independent of each other. In reality, however, the default behaviors of the loans in a portfolio are not independent. There are common dependencies on external risk factors, for example the economic situation. Because these dependencies cannot be mapped directly in the extended binomial distribution, the modeling of the risk-factor dependencies is shifted to the preprocessing of the input data.
The common dependencies on external risk factors are modeled on the basis of the construction of Gordy (2002). First, the probabilities of default p_{j} are decomposed as follows:
${p}_{j}=\left[\left(1-{s}_{j}\right)+{s}_{j}\right]\cdot {p}_{j}$,
$0\le {s}_{j}\le 1$ (4)
The parameters s_{j} are specific sensitivity factors. Subsequently, a risk factor M, for example mapping the macroeconomic situation, is integrated into formula (4). The decomposition of the probabilities of default is indicated by the index (M):
${p}_{j}^{\left(M\right)}=\left[\left(1-{s}_{j}\right)+M\cdot {s}_{j}\right]\cdot {p}_{j}$,
$0\le M\le \frac{1+{p}_{j}\cdot \left({s}_{j}-1\right)}{{p}_{j}\cdot {s}_{j}}$ (5)
Under normal conditions the parameter M has the value 1 (or 100%). That means that under normal conditions
${p}_{j}^{\left(M\right)}={p}_{j}$ holds for all j. The effects of systematic changes are implemented in the model by modifying the parameter M, in particular by M > 1 for periods of economic downturn and by M < 1 for phases of economic upturn.
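Equation (5) can be illustrated with a short sketch; the function name and the clipping safeguard are our additions, not part of the paper's formula:

```python
# Illustration of Eq. (5): adjusting a default probability p_j for a
# systematic factor M with an obligor-specific sensitivity s_j.
def stressed_pd(p_j, s_j, M):
    """p_j^(M) = [(1 - s_j) + M * s_j] * p_j, clipped to [0, 1].
    The bound on M in (5) already keeps the result a valid probability;
    the clipping is only a defensive safeguard."""
    pd = ((1.0 - s_j) + M * s_j) * p_j
    return min(max(pd, 0.0), 1.0)

# under normal conditions (M = 1) the a priori PD is recovered
print(stressed_pd(0.03, 1.5, 1.0))   # 0.03
print(stressed_pd(0.03, 1.5, 1.2))   # downturn: PD rises to ~0.039
```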
Systematic changes affect individual debtors differently (Basel Committee on Banking Supervision, 2004, Article 415, Sentence 3). The parameter s_{j} controls the individual sensitivity. If the creditworthiness of a debtor depends on systematic influences to an above-average degree, then s_{j} is greater than 1 (or 100%). In the opposite case, if the dependence is below average, s_{j} is less than 1 (or 100%). In principle, the sensitivities can be assigned to the debtors individually. For practical reasons of data availability, and from a calibration point of view, a sectoral assignment will often be more practical, for example by industry sector.
Adjusted decomposed probabilities of default for debtors that react with different sensitivity to changes in the systematic factor are shown schematically in Table 3 for different scenarios of the systematic factor M. An a priori probability of default of 0.03 is assumed.
Table 3. Adjusted probabilities of default.
Similar to the one-factor model, the approach of decomposed probabilities of default can be further extended to several risk factors
${R}_{i},i=1,\cdots ,k$ (Gordy, 2002). Through the synchronization of the probabilities of default, implied correlations and dependencies arise of the kind observable in the real world.
Not yet considered is the calibration of the parameter M. The parameter M maps clusters of relative changes in the economic situation. The economy itself is not measurable; as a measurable proxy for changes in the economy, changes in insolvency frequencies are used, see Figure 5. For risk considerations, this substitution should be acceptable.
For the individual clusters c (from
${M}_{1}=\left(1-0.50\right)=0.50$ to
${M}_{a}=\left(1+0.20\right)=1.20$ ), the loss distributions P_{c} are determined using the extended binomial distribution. Finally, the loss distributions are combined, weighted by the frequencies h_{c} of the clusters c:
$P\left(X=x\right)={\displaystyle \underset{c=1}{\overset{a}{\sum}}{h}_{c}\cdot {P}_{c}\left(X=x\right)}$.
From the aggregated distribution function and the aggregated probability mass function of the loss of the complete loan portfolio, the known risk measures value at risk and expected shortfall are determined (Albrecht, 2004).
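The cluster mixture and the derivation of a risk measure can be sketched as follows. The toy PMFs and all names are invented for illustration; value at risk is taken here as the smallest loss level whose cumulative probability reaches the confidence level:

```python
# Mix per-cluster loss PMFs P_c weighted by cluster frequencies h_c,
# then read off a quantile-based risk measure from the mixture.
from collections import defaultdict

def mix_pmfs(pmfs, weights):
    """P(X = x) = sum_c h_c * P_c(X = x); the weights h_c must sum to 1."""
    out = defaultdict(float)
    for pmf, h_c in zip(pmfs, weights):
        for x, q in pmf.items():
            out[x] += h_c * q
    return out

def value_at_risk(pmf, alpha):
    """Smallest loss level x with P(X <= x) >= alpha."""
    acc = 0.0
    for x in sorted(pmf):
        acc += pmf[x]
        if acc >= alpha:
            return x
    return max(pmf)

p_down = {0: 0.90, 10: 0.07, 50: 0.03}   # downturn cluster (M > 1)
p_up   = {0: 0.97, 10: 0.02, 50: 0.01}   # upturn cluster (M < 1)
mixed = mix_pmfs([p_down, p_up], [0.4, 0.6])
print(value_at_risk(mixed, 0.99))  # 50
```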
7. Conclusion
A technique was developed to determine the probability mass function and the cumulative distribution function of the extended binomial distribution, that is, for trials consisting of independent, heterogeneous Bernoulli distributed single trials. Additionally, a numerical approach was described for approximate solutions of tasks with a larger number of trials.
The extended binomial distribution provides the foundation for a new analytical portfolio credit risk model. The new model expands the set of analytical portfolio credit risk models, which was previously essentially represented only by the family of CreditRisk+ models. The analytical approach enables exact reproducibility of results. This in turn allows separate analyses with regard to individual risk factors or risk positions.
The approach of the extended binomial distribution allows the approximation error to be reduced. This is of practical benefit in particular when determining the tails of the loss distribution. Hence the model is well suited for the identification of tail phenomena and for the management of risk concentrations.
Appendix
Notations
X random variable
P(X) probability mass function
F_{X}(t) cumulative distribution function
E(X) expected value of the random variable X
D^{2}(X) variance of the random variable X
G_{X}(z) probability generating function of the random variable X
φ_{X}(t) characteristic function of the random variable X
(P*R) convolution of the functions P and R
R set of all real numbers
Z set of all integers
R^{n} n-dimensional euclidean space
R^{n×m} set of all real n-by-m matrices
∑ sum sign for the summation operator
∏ product sign for the product operator
∞ sign for infinity