A Multi-Name Extension of the CreditGrades Model and an Efficient Monte Carlo Method

In this paper, we present a multi-name incomplete-information structural model which possesses a contagion mechanism, together with an efficient Monte Carlo algorithm based on an Interacting Particle System. In line with CreditGrades, an industrially used single-name credit model, we suppose that investors can observe firm values and defaults but are not informed of the threshold level at which a firm is deemed to default. Additionally, in order to model the possibility of crisis normalization, we introduce the concept of a memory period after default. During the memory period after a default, public investors remember when the previous default occurred and directly reflect that information in updating their beliefs. When the memory period after a default finishes, investors forget about that default and shift their interest to more recent defaults, if any exist. A variance reduction technique relying upon the Interacting Particle System is combined with the standard Monte Carlo method to address the rare but critical events represented by the tail of the loss distribution of a portfolio.


Introduction
The interaction of default events plays a central role in systemic risk measurement as well as in credit risk management and portfolio credit derivative valuation. The recent financial crisis has revealed the necessity of a quantitative methodology to analyze default contagion effects, which are observed in several financial markets. Default contagion is a phenomenon where a default by one firm has a direct impact on the health of other surviving firms. Since contagion effects heavily influence the correlations of defaults, capturing them in quantitative models is crucial. Existing dynamic credit risk models which deal with default contagion include, among others, Davis and Lo [1], Fan Yu [2], Frey and Backhaus [3], Frey and Runggaldier [4], Giesecke [5], Giesecke and Goldberg [6], Giesecke et al. [7], Schönbucher [8] and Takada and Sumita [9]; comprehensive surveys can be found in Chapter 9 of McNeil, Frey and Embrechts [10]. Generally, credit risk modeling methodologies are categorized into either the reduced-form approach or the structural approach. In the reduced-form approach, by introducing interacting intensities, default contagion can be captured by a jump up of the default intensity immediately after a default, as in Davis and Lo [1], Fan Yu [2], Frey and Backhaus [3], Giesecke et al. [7] and Takada and Sumita [9]. However, it is not easy to incorporate a mechanism by which the crisis mode resolves after some period. Information-based default contagion, described in Chapter 9 of McNeil, Frey and Embrechts [10] and in Frey and Runggaldier [4], is a promising method that allows one to represent the normalization of a crisis via belief updating; however, an explicit formulation of normalization and its effects on future defaults has not been thoroughly studied. On the other hand, Giesecke [5] and Giesecke and Goldberg [6] have studied multi-name structural models under incomplete information and proposed a simulation method for sequential defaults without covering an explicit formulation of normalization. Unfortunately, since a closed formula for the joint distribution of the first-passage times of correlated multivariate Brownian motion is unknown, the algorithms proposed therein are not directly applicable to the case of correlated firm values.
In this paper, we present a multi-name incomplete-information structural model which possesses a default contagion mechanism in the sense that sudden changes of default probabilities arise from investors revising their perspectives on unobserved factors which characterize the joint density of the default thresholds.
Here, in our model, default thresholds are assumed to be unobservable by public investors, and a firm is deemed to default when its firm value touches this threshold level for the first time. This formulation is a slight generalization of Giesecke and Goldberg [6]. Also, to analyze the contagion effects under general settings, we consider the dependence structure of the firm value dynamics as well as the joint distribution of the default thresholds. Additionally, in order to model the possibility of crisis normalization, we introduce the concept of a memory period after default. A preliminary version of this study was reported at the RIMS Workshop on Financial Modeling and Analysis (FMA2013) by Takada [11], which introduced the concept of the memory period first. As Takada [11] pointed out, the model is designed so as to confine investors' attention to recent defaults. During the memory period after a default, public investors remember when the previous default occurred and directly reflect that information in updating their beliefs. When the memory period after a default terminates, investors attach little importance to that default and shift their interest to more recent defaults, if any exist. When all the existing memory periods have terminated, we can regard the situation as a complete return to normal economic conditions. In order to evaluate credit risk in the presence of default contagion and the possibility of normalization, Monte Carlo simulation is the most reasonable method because of the non-Markovian environment. However, the previous study, relying on the standard Monte Carlo method, suffered from slow convergence. We show that the Interacting Particle System (IPS) is a powerful tool to overcome this slow convergence. Intuitively, the Interacting Particle System works by the following mechanism. On a discrete time grid, IPS evolves a collection of particles representing the states of our interest, including firm values. At each time step, particles are randomly selected by sampling with replacement, placing more weight on particles that experienced an increase in default probability in the previous period. The new generation of selected particles is then evolved over the next period according to the standard transition laws, and at the end of the period a selection takes place again. The selection procedure adaptively forces the process into the regime of interest and therefore reduces variance.
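The weighted selection step just described can be sketched in a few lines. The following is an illustrative sketch, not the paper's implementation; the particle labels and potential values are hypothetical placeholders standing in for firm-value states and their potentials.

```python
import numpy as np

def select(particles, potentials, rng):
    """One IPS selection step: resample N particles with replacement,
    with probabilities proportional to their potential values."""
    w = np.asarray(potentials, dtype=float)
    p = w / w.sum()                      # normalized selection weights
    idx = rng.choice(len(particles), size=len(particles), p=p)
    return [particles[i] for i in idx]

# toy example: particles whose "default likelihood" rose get higher weight
rng = np.random.default_rng(0)
particles = ["calm", "stressed", "very_stressed"]
potentials = [0.5, 1.0, 4.0]             # larger = closer to default
new_gen = select(particles * 100, potentials * 100, rng)
```

After the selection, the new generation is dominated by the high-potential states, which is precisely the mechanism that pushes the simulated paths toward the regime of interest.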
The rest of this paper is organized as follows. Section 2 introduces our model and deduces an expression for the conditional joint distribution of the default thresholds. Section 3 develops a standard Monte Carlo simulation algorithm. Section 4 gives an overview of the Feynman-Kac path measures, which play a central role in the Interacting Particle System, and shows how the algorithm can be applied to the model. Section 5 provides numerical examples and Section 6 concludes.

Incomplete Information Credit Risk Model
Uncertainty is modeled by a probability space (Ω, F, P) equipped with a filtration (F_t)_{t≥0} that describes the information flow over time. We impose two additional technical conditions, often called the usual conditions. The first is that (F_t) is right-continuous, and the second is that F_0 contains all P-null sets, meaning that one can always identify a sure event. Without mentioning it again, these conditions will be imposed on every filtration that we introduce in the sequel. The probability measure P serves as the statistical real-world measure in risk management applications, while in derivatives pricing applications P is a risk-neutral pricing measure. On the financial market, investors can trade credit risky securities such as bonds and loans issued by several firms indexed by i = 1, 2, …, n. In the following, we extend the CreditGrades model in the sense that we consider more than two firms in the portfolio and their asset correlation as well as the dependence structure of the default barriers. Furthermore, we give a slight modification of the CreditGrades model reflecting the fact that a surviving firm's default barrier is lower than the historical path of its asset value.

Model Setting
Let V_i(t) represent the time-t asset value of firm i on a per-share basis, which solves the stochastic differential equation dV_i(t) = δ_i V_i(t) dW_i(t), where δ_i is the asset volatility and v_i is the firm value at time 0, at which we stand. We assume that the asset value processes are correlated, i.e., d⟨W_i, W_j⟩_t = ρ_ij dt, where ⟨⋅,⋅⟩ is the quadratic covariation. Filtrations generated by the observed asset values are denoted by (F_t^V)_{t≥0}. There is a random default threshold L_i D_i such that firm i defaults as soon as its asset value falls to the level L_i D_i, where L_i denotes the recovery rate at default and D_i is a positive constant representing debt per share, which may be given by accounting reports. Then the default time of firm i is the random variable τ_i = inf{ t > 0 : V_i(t) ≤ L_i D_i }, with values in (0, ∞]. Here the random variables L_i are unobservable. More complicated stochastic processes for V_i(t), such as stochastic volatility, would be possible; however, we focus on the multi-name setting and on modeling the so-called default contagion.
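As a concrete illustration, the correlated driftless dynamics dV_i = δ_i V_i dW_i can be discretized with an Euler scheme on log V_i, using a Cholesky factor of the correlation matrix to correlate the Brownian shocks. The sketch below assumes driftless geometric Brownian motion and uses illustrative parameter values, not the paper's calibration.

```python
import numpy as np

def simulate_firm_values(v0, sigma, corr, T, n_steps, rng):
    """Simulate correlated driftless geometric Brownian firm values:
    dV_i = sigma_i V_i dW_i with d<W_i, W_j> = rho_ij dt.
    Returns an array of shape (n_steps + 1, n_firms)."""
    v0, sigma = np.asarray(v0, float), np.asarray(sigma, float)
    n = len(v0)
    dt = T / n_steps
    L = np.linalg.cholesky(np.asarray(corr, float))   # correlates the shocks
    logV = np.empty((n_steps + 1, n))
    logV[0] = np.log(v0)
    for k in range(n_steps):
        z = L @ rng.standard_normal(n)                # correlated N(0, I) draw
        logV[k + 1] = logV[k] - 0.5 * sigma**2 * dt + sigma * np.sqrt(dt) * z
    return np.exp(logV)

rng = np.random.default_rng(1)
corr = [[1.0, 0.4], [0.4, 1.0]]
paths = simulate_firm_values([100.0, 80.0], [0.25, 0.30], corr, T=1.0,
                             n_steps=250, rng=rng)
```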

Incomplete Information
With a view to analyzing how the memory periods of past defaults affect succeeding defaults, we consider the incomplete-information framework, which is known to generate default contagion. In order to describe the incomplete-information structure more concretely, in addition to the assumption of randomness of the default thresholds, we postulate the following assumptions.
Assumption 1 Public investors can observe firm values and default events, although they cannot directly observe the firms' default thresholds L_i D_i.
Define S_t as the set of surviving firms and D_t as the set of defaulted firms at time t. We write r_t = #S_t, the number of elements in the set S_t. Since investors know that each surviving firm's default barrier is lower than the running minimum of its firm value process, it is natural to make the next assumption.
Assumption 2 At time t = 0, we assume every firm in the portfolio is surviving, i.e., r_0 = n, and for each surviving firm i the inequality L_i D_i < min_{s ≤ t} V_i(s) holds.

Remark 1
The definition of the mean vector is given along the lines of the original CreditGrades model. Finger, Finkelstein, Pan, Lardy and Ta [12] proposed that the random recovery rate L_i be modeled as a lognormal random variable.
Assumption 3 There is a consensus on the prior joint distribution of the firms' default thresholds among the public investors. More concretely, investors' uncertainty about the default thresholds is described through an n-dimensional normal distribution N_n for the logarithms of the thresholds.
Assumption 4 At each default time τ_i, public investors update their belief on the joint distribution function of the surviving firms' default thresholds based on Assumption 3 and the newly arrived information, i.e., the realized recovery rate L_i.

Remark 2 Since public investors observe the whole history of the firm values, they know that the unobservable threshold must be located below the running minimum of the firm value. Despite this knowledge, we assume that public investors treat the logarithm of the recovery rate as a normally distributed random variable.
Assumption 1, Assumption 3 and Assumption 4 provide the default contagion mechanism: the default of a firm reveals information about the default thresholds, and public investors then update their beliefs on the joint distribution of the surviving firms' thresholds. From the public investors' perspective, this naturally causes a sudden change in the default probabilities of the surviving firms, which is just what we wanted to model. The situation of contagious defaults can be interpreted as a recession; however, it will not continue forever. In our model, we further assume that public investors expect the crisis to return to normal conditions after some finite time interval.
Assumption 5 The covariance parameter γ_ij jumps at the default time τ_i. This can be captured by introducing time-dependent covariance parameters and then assuming that the elements of the variance-covariance matrix Γ are given by (0.4). We call s_i the memory period of firm i after τ_i.
Thus the mean vector μ_t and the variance-covariance matrix Γ_t at time t can be defined accordingly. Rearrange the order of the firm identity numbers in such a way that the elements of the defaulted set come after the elements of the surviving set, and the elements whose memory periods have expired are located last; let Γ_t be the correspondingly partitioned variance-covariance matrix. Assumption 6 implies that during the memory period, public investors remember the firm values at which the defaults occurred. By virtue of Assumption 3, we can deduce the conditional joint distribution of the default thresholds as follows. Here we do not eliminate the possibility of simultaneous defaults, i.e., we do not need to assume that no two default times coincide.

Proof 1 From the continuity of the asset process V_i(t) and Equation (0.5), public investors know the realized values of the defaulted firms' thresholds. Here, whenever defaults occur, let the order of the firms be rearranged in such a way that the elements of the defaulted set come after the elements of the surviving set, and define the set of admissible threshold values, which in the special case of a single default coincides with the possible range of the recovery rate vector. As in the proof of Lemma 4.1 of [5], the claim then follows from Bayes' theorem. The last equality holds because L is independent of F_t^V. Hence the joint distribution of the surviving firms' logarithmic recovery rates is given by the conditional distribution of a multivariate normal vector.


Conditional distributions of the multivariate normal distribution are well known; see, for example, [13] for details. However, from Assumption 1, public investors already know that the thresholds lie below the running minima of the firm values, so the conditional distribution should be truncated above the bound c_t given by (0.7).
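Sampling a threshold from a normal distribution truncated above such a bound can be done by inverse-CDF sampling on the truncated region. The one-dimensional sketch below uses hypothetical parameter values (the paper's actual update is multivariate and conditioned on the realized recoveries); only standard-library tools are used.

```python
from statistics import NormalDist
import random

def sample_truncated_above(mu, sigma, upper, rng):
    """Draw X ~ N(mu, sigma^2) conditioned on X <= upper,
    by inverse-CDF sampling restricted to the truncated region."""
    nd = NormalDist(mu, sigma)
    p_upper = nd.cdf(upper)          # probability mass below the truncation point
    u = rng.random() * p_upper       # uniform on (0, P(X <= upper))
    return nd.inv_cdf(u)

rng = random.Random(42)
# hypothetical numbers: log-threshold prior N(-0.6, 0.3^2), truncated at the
# log of the firm's running-minimum-to-debt ratio, here taken to be -0.2
draws = [sample_truncated_above(-0.6, 0.3, -0.2, rng) for _ in range(1000)]
```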

Remark 3
In the case ρ_ij = 0 for all i ≠ j, the problem becomes quite easy because the first-passage time of a one-dimensional geometric Brownian motion is well known. In fact, in such a case, Giesecke and Goldberg [6] showed that the counting process of defaults admits an intensity process, and they proposed a simulation method based on the total hazard construction applied to the ordered default times τ_(1), τ_(2), …, τ_(n).
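For reference, the first-passage probability of a single driftless geometric Brownian motion through a constant barrier b < v follows from the reflection-principle formula for Brownian motion with drift applied to log V. The sketch below uses illustrative inputs; the function name is ours.

```python
from math import log, sqrt, exp
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal CDF

def first_passage_prob(v, sigma, b, t):
    """P(min_{s<=t} V(s) <= b) for driftless GBM dV = sigma V dW, V(0)=v,
    barrier b < v.  Apply the known distribution of the minimum of a
    Brownian motion with drift mu = -sigma^2/2 to X = log V."""
    a = log(b / v)                   # log-distance to the barrier (negative)
    mu = -0.5 * sigma**2
    s = sigma * sqrt(t)
    return Phi((a - mu * t) / s) + exp(2 * mu * a / sigma**2) * Phi((a + mu * t) / s)

p = first_passage_prob(v=100.0, sigma=0.3, b=60.0, t=1.0)
```

Such closed forms are exactly what is unavailable in the correlated multi-name case, which is why the paper resorts to simulation.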


Figure 1 illustrates an example of a sequence of defaults with n = 5 and the corresponding memory periods. At time 0, all the firms are active and the unconditional joint density of the thresholds applies. However, at the third default time, investors' interest has shifted from the first default to the second default completely, i.e., the memory period of firm 5 after τ_5 has finished. Therefore the updated default thresholds should be sampled under the condition reflecting only those defaults still within their memory periods.

Default Contagion
In this subsection we see how the conditional distribution of the default thresholds changes at a default time. Suppose that the first default occurs at time τ_(1), and let g_i(x) denote the unconditional density of L_i D_i; then the default probabilities of the surviving firms change discontinuously at τ_(1).
Remark 4 Giesecke [5] showed that τ_i does not admit a default intensity. Define the supermartingale given by the conditional survival probability, where G_i is the cumulative distribution function of L_i D_i, and define the associated nondecreasing compensator process; this compensator fails to be absolutely continuous. This is the reason why we argue in terms of default probabilities instead of default intensities in our model.

Monte Carlo Method
This section develops a numerical method to compute the distribution of the number of defaults via Monte Carlo simulation. Complicating matters is the fact that new information from defaults changes the mean and covariance of the joint distribution of the thresholds. At each moment, the covariance matrix must be recalculated depending on whether the relevant memory periods have terminated or not. Therefore, the simulation depends on the path, i.e., on the order of occurrence of the sequential defaults.
The time interval [0, T] is partitioned into sub-intervals of equal length Δ, and the firm value processes evolve along the discretized time steps kΔ, k = 1, 2, …, n_Δ, where n_Δ = T/Δ. With the discretization of the time variable, we redefine the default time as the first grid point at which the firm value lies at or below the barrier, and perform the following.
Algorithm 3.1
Step 0. Draw the random barriers L_i D_i for all firms in the portfolio and fix them until the first default occurs. Set k = 1.
Step 1. Generate the firm values at time kΔ.
Step 2. Determine whether any default occurred at time kΔ and renew the set of surviving firms. If a firm's value has fallen to its barrier, that firm defaults at time kΔ; collect the firms defaulting at time kΔ and go to Step 3. Otherwise, go to Step 5.
Step 3. Update the mean vector and the covariance matrix, and draw new random barriers L_i D_i for all surviving firms, fixed until the next default occurs. Sampling is based on the conditional distribution truncated above the bound c_t given by (0.7).
Step 4. Calculate the realized barriers for the defaulted firms and store them.
Step 5. Set k = k + 1 and go to Step 1.
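A stripped-down sketch of this simulation loop is given below. It deliberately omits the belief updating and the memory periods (thresholds are drawn once and kept fixed, and firms are independent), so it is a simplification for illustration, not the full path-dependent Algorithm 3.1; all parameter values are hypothetical.

```python
import numpy as np

def mc_default_distribution(n_firms, v0, sigma, barrier_mu, barrier_s,
                            T, n_steps, n_trials, rng):
    """Simplified Monte Carlo default counter: draw a lognormal threshold
    per firm, evolve independent driftless GBM paths on a grid, and count
    the firms whose running minimum reaches the threshold by T.
    Returns the empirical distribution of the number of defaults."""
    dt = T / n_steps
    counts = np.zeros(n_firms + 1)
    for _ in range(n_trials):
        # thresholds L_i * D_i, lognormal as in CreditGrades
        thr = np.exp(rng.normal(barrier_mu, barrier_s, n_firms))
        logv = np.full(n_firms, np.log(v0))
        run_min = logv.copy()
        for _ in range(n_steps):
            logv += -0.5 * sigma**2 * dt \
                    + sigma * np.sqrt(dt) * rng.standard_normal(n_firms)
            run_min = np.minimum(run_min, logv)
        k = int(np.sum(run_min <= np.log(thr)))   # number of defaults
        counts[k] += 1
    return counts / n_trials

rng = np.random.default_rng(7)
dist = mc_default_distribution(n_firms=5, v0=100.0, sigma=0.25,
                               barrier_mu=np.log(50.0), barrier_s=0.2,
                               T=1.0, n_steps=50, n_trials=2000, rng=rng)
```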

Interacting Particle System
In credit risk management, it is important to measure how often rare but crucial events occur. However, the standard Monte Carlo algorithm of Section 3 may be inefficient in some situations, for example when the portfolio constituents have small default probabilities. This is because, in order to estimate the probability of rare events accurately, a large number of trials may be required with Algorithm 3.1.
In an effort to estimate accurate probabilities within a reasonable computational time, we embed the IPS into the standard Monte Carlo simulation algorithm. In the following two subsections, we provide a quick overview of the IPS, inspired by the pioneering work of Del Moral and Garnier [15] and the subsequent papers of Carmona, Fouque and Vestal [14], Carmona and Crépey [16] and Giesecke et al. [7].

Feynman-Kac Path Measures
Let T > 0 be the time horizon. The Interacting Particle System consists of a set of N path-particles evolving from time p = 0 to p = m. The initial generation at time p = 0 is a set of independent copies of X_0, and the system evolves as in natural selection: fit particles produce many offspring, while the rest die.
For each p ≥ 0, we consider non-negative measurable functions G_p defined on the path space equipped with the product σ-algebra, and we call these functions potential functions. We associate to the pair of potentials and transitions (G_p, K_p) the Feynman-Kac path measure γ_p, defined for any bounded measurable test function f as the expectation of f along the path weighted by the product of the potentials, and we introduce the corresponding normalized measure η_p = γ_p / γ_p(1). This relationship has the merit of relating the un-normalized expectations on one side to normalized expectations on the other. Furthermore, for any path (x_0, …, x_p), we define the weighted indicator function of the event of interest.
This formula can be applied to the situation in which the events of interest are rare, and it can then be computed via the normalized measures. It is known (see Del Moral and Garnier [15]) that the sequence of normalized measures satisfies a non-linear recursive equation η_p = Φ_p(η_{p-1}) starting from η_0, which is approximated by substituting the empirical measure into the integrand of the right-hand side of (0.15).
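Since the displayed equations were lost in this version, we record a standard form of the measures just described, following Del Moral and Garnier [15]; the exact indexing convention is our reconstruction.

```latex
% Feynman--Kac path measures (standard form, after Del Moral--Garnier):
\gamma_p(f) \;=\; \mathbb{E}\Big[f(X_0,\dots,X_p)\prod_{q=0}^{p-1} G_q(X_0,\dots,X_q)\Big],
\qquad
\eta_p(f) \;=\; \frac{\gamma_p(f)}{\gamma_p(\mathbf{1})}.

% The normalizing constant factorizes into one-step normalizations,
\gamma_p(\mathbf{1}) \;=\; \prod_{q=0}^{p-1} \eta_q(G_q),
\quad\text{so that}\quad
\gamma_p(f) \;=\; \eta_p(f)\,\prod_{q=0}^{p-1}\eta_q(G_q),
```

which is the relation that lets un-normalized (rare event) expectations be computed from the normalized measures.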

Interacting Particle Interpretation
For the purpose of numerical computation of rare events of the form (0.14), we introduce an interacting particle system. We choose a large integer N, which we shall interpret as the number of particles. We construct a Markov chain whose state at time p can be interpreted as a set of N samples of particles with respect to the measure η_p. We start with an initial configuration that consists of N independent and identically distributed random samples from the initial distribution, and denote by η_p^N the empirical measure of the N particles at time p. By the definition of Φ_p, one sees that (0.16) is the superposition of two identifiable transitions, a selection followed by a mutation, as shown below.
The selection stage is performed by choosing randomly and independently N particles, with replacement, according to the potential-weighted empirical distribution. During the mutation stage, each selected particle ξ̂_{p-1}^j is extended in time by an elementary K_p-transition.
A result of [17], reproduced in [15], states that for each fixed time p, the empirical measure converges in distribution, as N → ∞, toward the normalized Feynman-Kac measure η_p; then, by (0.14), we obtain a particle approximation of the rare event probabilities. More precisely, the product of the empirical one-step normalizing constants with the empirical measure of the terminal weighted indicator is an unbiased estimator of the rare event probability. We refer to Del Moral [17] and Del Moral and Garnier [15] for the details of the convergence results. Their complete proofs rely on a precise propagation-of-chaos type analysis and can be found in Section 7.4 on Pages 239-241 and Theorem 7.4.1 on Page 232 of Del Moral [17].

IPS Algorithm and Potential Functions
We introduce the special notations X̂ and Ŵ to indicate inputs to the selection stage, while the notations X̃ and W̃ indicate inputs to the mutation stage of the IPS algorithm. Here, as we will describe later, Ŵ_p indicates the parent of X_p. According to Del Moral and Garnier [15], one of the recommended potential functions is of the form exp(-α(V(X_p) - V(X_{p-1}))) for some α > 0 and a suitable function V. The form of the distribution (0.17) shows that, in order to have more simulation paths realizing a great many defaults, it is important to choose a potential function that becomes larger as the likelihood of default increases. To meet our purpose, we choose the function V as follows.
Then our potential function is given by (0.21). The first term of (0.21) reflects the fact that high weights are assigned to particles which renewed their running minima during the period. When L_i is not random, i.e., when the default barrier is observable, it is known from Carmona, Fouque and Vestal [14] that the IPS simulates the counting process with reasonable accuracy; we borrowed the form of the potential function from Carmona, Fouque and Vestal [14]. The detailed IPS algorithm is summarized as follows.
Algorithm 4.1 Assume that we have a set of N particles at time p. To generate an estimate of the rare event probability, initialize the particle states X, the weights Ŵ and the indicators H_0, and choose Δt, the discretized time step for the firm value processes, to be some small value. We start with a set of N i.i.d. initial conditions (initial values of X_0) and thereby form the initial set of N particles.
Step 1. For each step p = 1, …, m, repeat the following stages.
• Selection. Resample N particles with replacement according to the potential-weighted empirical distribution, and compute the normalizing constant, i.e., the empirical mean of the potentials.
• Mutation. Evolve each selected particle over the next period according to the transition law of the firm value processes.
The normalizing constants can easily be accumulated within the IPS algorithm, and this provides a criterion for choosing the parameter q < 0 at a suitable level.
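To make the selection/mutation loop and the unbiasing weights concrete, the following sketch estimates a rare first-passage probability for a single driftless geometric Brownian motion, using a Carmona-Fouque-Vestal-style potential exp(-α(M_p - M_{p-1})) with M_p the running minimum of log V. All parameter values are illustrative and the function name is ours; the paper's actual algorithm runs on the full multi-name state with random thresholds.

```python
import numpy as np

def ips_tail_prob(v0, sigma, barrier, T, m, n_sub, N, alpha, rng):
    """IPS sketch for P(min_{t<=T} V(t) <= barrier) of one driftless GBM
    on a time grid.  Selection weights G_p = exp(-alpha*(M_p - M_{p-1}))
    favour particles whose running minimum dropped; the terminal estimator
    removes the bias via the telescoped inverse weight
    exp(+alpha*(M_{m-1} - M_0)) times the product of normalizing constants."""
    dt = T / (m * n_sub)
    logv = np.full(N, np.log(v0))
    M = logv.copy()                   # running minima M_p per particle
    M0 = M.copy()                     # M_0 per particle (all log v0 here)
    M_prev = M.copy()
    norm = 1.0                        # product of eta_p(G_p) estimates
    for p in range(1, m + 1):
        M_prev = M.copy()             # M_{p-1} along each surviving lineage
        for _ in range(n_sub):        # mutation: evolve over one period
            logv += -0.5 * sigma**2 * dt \
                    + sigma * np.sqrt(dt) * rng.standard_normal(N)
            np.minimum(M, logv, out=M)
        if p < m:                     # selection after every period but the last
            G = np.exp(-alpha * (M - M_prev))
            norm *= G.mean()          # empirical normalizing constant
            idx = rng.choice(N, size=N, p=G / G.sum())
            logv, M, M0 = logv[idx], M[idx], M0[idx]
    w = np.exp(alpha * (M_prev - M0))  # unwinds the accumulated selection bias
    return norm * np.mean((M <= np.log(barrier)) * w)

rng = np.random.default_rng(3)
p_rare = ips_tail_prob(100.0, 0.25, 40.0, 1.0, m=10, n_sub=20,
                       N=5000, alpha=5.0, rng=rng)
```

With a mild potential (small α) the scheme reduces to plain Monte Carlo; larger α pushes particles toward the default region at the cost of larger unbiasing weights, which is the trade-off behind tuning the parameter q < 0 in the text.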

Numerical Examples
This section demonstrates the performance of the IPS algorithm through numerical examples with a sample portfolio consisting of 25 firms. We consider a portfolio consisting of high-credit-quality names with high correlations ρ_ij of their firm value processes, as well as high correlations γ_ij of the default thresholds. The parameters of the model are summarized as follows.
These parameters are set with the intention of observing rare default events. As Carmona, Fouque and Vestal [14] report, the number of selections/mutations, which is equal to m in Algorithm 4.1, does not have a significant impact on the numerical results, so we set m = 5 per year. Here we set Δt = 0.005. First we compare the results of the IPS algorithm to the results obtained by the standard Monte Carlo algorithm in the case of T = 3 years. For the standard Monte Carlo, we run 10,000 trials and, in addition, 500,000 trials, which can be expected to achieve reasonably accurate values. For the IPS algorithm, we set m = 15 and q = -0.3 and take N = 10,000 particles. Figure 4 illustrates the estimated probabilities of the number of defaults by the time horizon, and Figure 5 plots these three cases of probabilities on a log scale for comparison. One can see that the standard Monte Carlo with 10,000 trials produces oscillating results for rare events, whereas the IPS results show a shape similar to that of the 500,000-trial Monte Carlo. For this numerical calculation, the 500,000 trials took about 8000 seconds, whereas the IPS algorithm took about 275 seconds on a 3.4 GHz Intel Core i7 processor with 4 GB of RAM. These numerical results show that the standard Monte Carlo with 10,000 trials cannot capture the occurrence of rare default events such as more than 20 defaults; however, the 500,000 trials, indicated by the solid blue line in Figure 5, reveal that such events have very small but positive probabilities. As expected, the IPS algorithm can capture these rare event probabilities, which are important for credit risk management.
Next, we investigate how the variance is reduced by the IPS in the following two cases:
• Case 1: T = 1 and s_i = 1 for all i,
• Case 2: T = 3 and s_i = 1 for all i,
so as to see the effect of the time horizon under the same memory period. The preliminary version of this paper, Takada [11], illustrated how the default distributions change in response to the memory period s_i based on the standard Monte Carlo. One sees that the first default occurs with the same probability for different γ_ij, but the second default occurs with different probabilities because the contagion effects differ in response to the memory period s: the larger the memory periods, the fatter the tail becomes.
In contrast, the current study focuses on how the variance is reduced by the IPS algorithm compared to the standard Monte Carlo. Due mainly to the sampling with replacement according to the distribution (0.23) in the selection stage, the IPS algorithm generally requires more time than the standard Monte Carlo. Although this obviously depends on the input parameters, with the above parameter set of 25 names and in the case T = 1, s_i = 1, the IPS calculation took approximately 1.03 times longer than the standard Monte Carlo. Thus, in the rest of the paper, for comparison of accuracy, we take the number of trials in the Monte Carlo equal to the number of particles in the IPS. In order to see the effectiveness of the IPS, we run both the Monte Carlo with 10,000 trials and the IPS with N = 10,000 particles 1000 times each, and then compare the sample standard deviations of the 1000 outcomes of the estimated probabilities. A remarkable feature is that the IPS algorithm reduces the variance for the rare events, i.e., more than 10 defaults in our example, while it shows weaker performance for k ≤ 9. Therefore, whether to choose the IPS depends on the objective, and the dividing line might depend on the portfolio and the parameters. Thus, although several trial runs are needed the first time with a given portfolio, once suitable control parameters such as q < 0 are obtained, reliable results can be expected.
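The variance comparison procedure (repeat each estimator many times and compare the sample standard deviations of the outcomes) can be sketched generically. The two estimators below are toy stand-ins targeting the same probability, not the paper's Monte Carlo and IPS; they only illustrate the comparison itself.

```python
import numpy as np

def compare_std(estimator_a, estimator_b, n_runs, rng):
    """Run two estimators of the same probability n_runs times each and
    return the sample standard deviations of their outcomes, mirroring
    the STD_k comparison between standard Monte Carlo and IPS."""
    a = np.array([estimator_a(rng) for _ in range(n_runs)])
    b = np.array([estimator_b(rng) for _ in range(n_runs)])
    return a.std(ddof=1), b.std(ddof=1)

rng = np.random.default_rng(11)
# toy stand-ins targeting the same probability 0.1 (NOT the paper's MC/IPS):
crude_mc = lambda rng: rng.binomial(1000, 0.1) / 1000   # plain MC estimator
lower_var = lambda rng: 0.1 + rng.normal(0.0, 0.002)    # mimics a reduced-variance one
std_mc, std_ips_like = compare_std(crude_mc, lower_var, n_runs=200, rng=rng)
```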

Conclusion
This paper proposed an incomplete-information multi-name structural model and an efficient Monte Carlo algorithm for it based on the Interacting Particle System. We naturally extended the CreditGrades model in the sense that we consider more than two firms in the portfolio and their asset correlation as well as the dependence structure of the default thresholds. For this purpose, we introduced a prior joint distribution of the default thresholds among the public investors, described by a truncated normal distribution. Numerical experiments demonstrated that the IPS algorithm can generate rare default events which would normally require numerous trials if one relied upon a simple Monte Carlo simulation. Finally, we verified that the IPS algorithm reduces the variance for rare events.

Figure 1. Sequence of defaults and the corresponding memory periods.

Figure 2 and Figure 3 show the conditional distribution of L_i D_i immediately before and after the default time τ_j.

Figure 4 illustrates the probabilities of the numbers of defaults.


Calculate the sample standard deviation STD_k^IPS of the outcomes obtained by the IPS. Finally, compare the two values STD_k^MC and STD_k^IPS for each k to see which algorithm achieves the lower standard deviation.

Figure 6 and Figure 7 illustrate the differences between STD_k^MC and STD_k^IPS in Case 1, and Figure 8 and Figure 9 illustrate the differences between STD_k^MC and STD_k^IPS in Case 2.
In analogy with its continuous-time version (0.3), and as mentioned in Carmona, Fouque and Vestal [14], we do not have to correct for the bias introduced by the discretization of a continuous-time boundary-crossing problem.