(0.18)

We refer to Del Moral [17] and Del Moral and Garnier [15] for the details of the convergence results. Their complete proofs rely on a precise propagation-of-chaos type analysis and can be found in Section 7.4 (pp. 239-241) and Theorem 7.4.1 (p. 232) of Del Moral [17].

4.3. IPS Algorithm and Potential Functions

We introduce special notation for the particles that form the input to the selection stage, and a second notation for the particles that form the input to the mutation stage of the IPS algorithm. As we will describe later, the latter notation also identifies the parent of each particle.

According to Del Moral and Garnier [15], one recommended class of potential functions is of the form

(0.19)

for some constant and a suitable function chosen so as to satisfy

This regularity condition ensures that the normalizing constants and the measure are bounded and positive. Thanks to the form of this potential function, we need only keep track of the two most recent particle states, and the selection can then be implemented with those two particles. The form of the distribution (0.17) shows that, in order for more simulation paths to realize a great many defaults, it is important to choose a potential function that becomes larger as the likelihood of default increases. To meet this purpose, we choose the function as follows.

(0.20)

Then our potential function is given by

(0.21)

The first term of (0.21) reflects the fact that high weights are assigned to particles that have renewed the running minimum during the period. When the default barrier is not random, i.e., it is observable, Carmona, Fouque and Vestal [14] showed that the IPS simulates the counting process with reasonable accuracy; we borrow the form of the potential function from their work. The detailed IPS algorithm is summarized as follows.
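Because the inline symbols of (0.20)-(0.21) were lost in extraction, the following is only a hypothetical sketch of the kind of weighting described above, in the style of Carmona, Fouque and Vestal [14]; the name `potential_weight` and the tilt parameter `alpha` are illustrative, not the paper's notation. A particle that renews (lowers) its running minimum receives a weight greater than one and is therefore favoured at the selection stage:

```python
import math

def potential_weight(old_running_min, new_running_min, alpha=1.5):
    """Hypothetical Carmona-Fouque-Vestal-style potential: a particle whose
    running minimum dropped (moved closer to the default barrier) gets a
    weight > 1; one that did not renew its minimum gets weight exp(0) = 1."""
    return math.exp(-alpha * (new_running_min - old_running_min))

# a particle that renewed its minimum is favoured in the selection stage
assert potential_weight(-1.0, -1.4) > potential_weight(-1.0, -1.0) == 1.0
```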

Algorithm 2 Assume that we have a set of particles at time denoted by. We define the Markov process

where, and define the discrete-time Markov process.

To generate an estimate of for, perform the following:

Step 0. Initialize the particles and indicators. Choose a small value for the discretization time step of the firm value processes. We start with a set of i.i.d. initial conditions chosen according to the initial distribution.

Set the initial values for all particles and then form the initial set of particles.

Step 1. For each step, repeat the following steps.

• Selection.

Compute the normalizing constant

(0.22)

and choose particles independently according to the empirical distribution

(0.23)

The new particles are denoted by.

• Mutation.

For every particle, using Algorithm 3.1, the particle is independently transformed by

(0.24)

and set.

Step 2. The estimator of the probability is given by

(0.25)

It is known that this estimator is unbiased and satisfies a central limit theorem (see [15] [17]).
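As an illustration of Steps 0-2, the following is a self-contained toy sketch of the selection/mutation loop for a single Brownian firm-value path, estimating the rare probability that its running minimum falls below a fixed barrier. All names and parameter values (`alpha`, `barrier`, the Euler step) are assumptions for the sketch; the paper's multi-name model, the Algorithm 3.1 mutation, and the exact potential (0.21) are not reproduced. The product of inverse potentials that makes an estimator of the form (0.25) unbiased telescopes along each genealogical path to `exp(alpha * (M_n - M_0))`.

```python
import math
import random

def ips_default_prob(n_particles=5000, n_steps=50, dt=0.02,
                     x0=0.0, sigma=1.0, barrier=-3.0,
                     alpha=1.5, seed=0):
    """Toy IPS estimate of P(min_p X_p <= barrier) for a discretely
    monitored Brownian path, with potential G_p = exp(-alpha*(M_p - M_{p-1}))
    on the running minimum M (Carmona-Fouque-Vestal style)."""
    rng = random.Random(seed)
    particles = [(x0, x0)] * n_particles   # (current value, running minimum)
    log_norm = 0.0                         # log of the product of eta_hat's
    for _ in range(n_steps):
        # mutation: one Euler step, then update the running minimum
        mutated = []
        for x, m in particles:
            x_new = x + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            mutated.append((x_new, min(m, x_new)))
        # selection: weight particles that lowered their running minimum
        weights = [math.exp(-alpha * (new[1] - old[1]))
                   for new, old in zip(mutated, particles)]
        eta_hat = sum(weights) / n_particles   # normalizing constant, cf. (0.22)
        log_norm += math.log(eta_hat)
        # sample with replacement from the weighted empirical distribution (0.23)
        particles = rng.choices(mutated, weights=weights, k=n_particles)
    # unbiased estimator, cf. (0.25): the product of inverse potentials
    # telescopes along each genealogical path to exp(alpha * (M_n - M_0))
    payoff = sum(math.exp(alpha * (m - x0)) for _, m in particles if m <= barrier)
    return math.exp(log_norm) * payoff / n_particles

print(ips_default_prob())   # a small probability, order 1e-3 for these inputs
```

In this sketch the tilt `alpha` plays the role of the control parameter discussed below: larger values push more particles toward the barrier, reducing variance for rare events at the cost of accuracy for common ones.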

Instead of explicit calculation of the asymptotic variance, we notice that the approximate variance

defined by

(0.26)

can be easily calculated within the above IPS algorithm. This provides a criterion for choosing the parameter at a suitable level.

5. Numerical Examples

This section demonstrates the performance of the IPS algorithm through numerical examples with a sample portfolio consisting of 25 firms. We consider a portfolio of high-credit-quality names with high correlations among their firm value processes, as well as among the default thresholds. The parameters of the model are summarized as follows.

• for all
• for and, for and.

These parameters are set with the intention of observing rare default events. As Carmona, Fouque and Vestal [14] report, the number of selection/mutation steps in Algorithm 4.1 does not have a significant impact on the numerical results, so we fix it per one year. Here we set.

First we compare the results of the IPS algorithm with those obtained by the standard Monte Carlo algorithm in the case of years. For the standard Monte Carlo, we run 10,000 trials and, in addition, 500,000 trials, which are expected to achieve reasonably accurate values. For the IPS algorithm, we set the control parameters and take particles. Figure 4 illustrates the probability of defaults

for and for, with for all. Thus the market participants memorize all defaults by the time horizon. Figure 5 plots these three probabilities on a log scale for comparison. One can see that the standard Monte Carlo with 10,000 trials gives oscillating results for rare events, whereas the IPS results show a shape similar to that of the 500,000-trial Monte Carlo. For this calculation, the 500,000 trials took about 8000 seconds, whereas the IPS algorithm took about 275 seconds on a 3.4 GHz Intel Core i7 processor with 4 GB of RAM.

Figure 4. Default probabilities.

Figure 5. Default probabilities in log-scale.

These numerical results show that the standard Monte Carlo with 10,000 trials cannot capture the occurrence of rare default events such as more than 20 defaults; however, the 500,000-trial run reveals that such events have very small positive probabilities, indicated by the solid blue line in Figure 5. As expected, the IPS algorithm captures these rare-event probabilities, which are important for credit risk management.

Next, we investigate how the variance is reduced by the IPS in the following two cases:

• Case 1: and for all
• Case 2: and for all
to see the difference with respect to the time horizon under the same memory period. A preliminary version of this paper, Takada [11], illustrated how the default distributions change in response to the memory period based on the standard Monte Carlo: the first default occurs with the same probability for different memory periods, but the second default occurs with different probabilities because the contagion effects differ with the memory period; the larger the memory period, the fatter the tail becomes.

In contrast, the current study focuses on how the variance is reduced by the IPS algorithm compared to the standard Monte Carlo. Mainly because of the cost of sampling with replacement according to the distribution (0.23) in the selection stage, the IPS algorithm generally requires more time than the standard Monte Carlo. Although this obviously depends on the input parameters, with the above parameter set of 25 names, the IPS calculation took approximately 1.03 times longer than the standard Monte Carlo. Thus, in the rest of the paper, for a fair comparison of accuracy, we take the number of Monte Carlo trials equal to the number of particles in the IPS. To assess the effectiveness of the IPS, we run both the Monte Carlo and the IPS 1000 times each, compute the sample standard deviation of the 1000 outcomes of the probabilities from each algorithm, and compare the two values for each number of defaults to see which algorithm achieves the lower standard deviation. Figure 6 and Figure 7 illustrate the differences in Case 1.

Figure 8 and Figure 9 illustrate the differences in Case 2.

A remarkable feature is that the IPS algorithm reduces the variance for rare events, i.e., more than 10 defaults in our example, while it shows weaker performance for common events. Therefore, whether to choose the IPS depends on the objective, and the dividing line may depend on the portfolio and the parameters. Thus, although several trial runs are needed the first time for a given portfolio, once suitable control parameters are found, reliable results can be obtained.

Figure 6. Case 1: 1 ≤ k ≤ 7.

Figure 7. Case 1: 8 ≤ k ≤ 25.

Figure 8. Case 2: 1 ≤ k ≤ 7.

Figure 9. Case 2: 8 ≤ k ≤ 25.

6. Conclusion

This paper proposed an incomplete-information multi-name structural model and an efficient Monte Carlo algorithm for it based on the interacting particle system. We naturally extend the CreditGrades model in the sense that we consider more than two firms in the portfolio, their asset correlation, and the dependence structure of the default thresholds. For this purpose, we introduced a prior joint distribution of the default thresholds among public investors, described by a truncated normal distribution. Numerical experiments demonstrated that the IPS algorithm can generate rare default events that would normally require numerous trials with a plain Monte Carlo simulation. Finally, we verified that the IPS algorithm reduces the variance for rare events.

Acknowledgements

The author is grateful to participants of the RIMS Workshop on Financial Modeling and Analysis (FMA2013) at Kyoto and participants of the Quantitative Methods in Finance 2013 Conference (QMF 2013) at Sydney for valuable comments which helped to improve this paper.

References

- Davis, M. and Lo, V. (2001) Infectious Defaults. Quantitative Finance, 1, 382-387.
- Yu, F. (2007) Correlated Defaults in Intensity Based Models. Mathematical Finance, 17, 155-173.
- Frey, R. and Backhaus, J. (2010) Dynamic Hedging of Synthetic CDO Tranches with Spread and Contagion Risk. Journal of Economic Dynamics and Control, 34, 710-724.
- Frey, R. and Runggaldier, W. (2010) Credit Risk and Incomplete Information: A Nonlinear-Filtering Approach. Finance and Stochastics, 14, 495-526. http://dx.doi.org/10.1007/s00780-010-0129-5
- Giesecke, K. (2004) Correlated Default with Incomplete Information. Journal of Banking and Finance, 28, 1521-1545.
- Giesecke, K. and Goldberg, L. (2004) Sequential Defaults and Incomplete Information. Journal of Risk, 7, 1-26.
- Giesecke, K., Kakavand, H., Mousavi, M. and Takada, H. (2010) Exact and Efficient Simulation of Correlated Defaults. SIAM Journal on Financial Mathematics, 1, 868-896.
- Schönbucher, P. (2004) Information Driven Default Contagion. ETH, Zurich.
- Takada, H. and Sumita, U. (2011) Credit Risk Model with Contagious Default Dependencies Affected by MacroEconomic Condition. European Journal of Operational Research, 214, 365-379.
- McNeil, A., Frey, R. and Embrechts, P. (2005) Quantitative Risk Management: Concepts, Techniques and Tools. Princeton University Press, Princeton.
- Takada, H. (2014) Structural Model Based Analysis on the Period of Past Default Memories. Research Institute for Mathematical Sciences (RIMS), Kyoto, 82-94.
- Finger, C.C., Finkelstein, V., Pan, G., Lardy, J., Ta, T. and Tierney, J. (2002) Credit Grades. Technical Document, Riskmetrics Group, New York.
- Fang, K., Kotz, S. and Ng, K.W. (1987) Symmetric Multivariate and Related Distributions. CRC Monographs on Statistics & Applied Probability, Chapman & Hall.
- Carmona, R., Fouque, J. and Vestal, D. (2009) Interacting Particle Systems for the Computation of Rare Credit Portfolio Losses. Finance and Stochastics, 13, 613-633. http://dx.doi.org/10.1007/s00780-009-0098-8
- Del Moral, P. and Garnier, J. (2005) Genealogical Particle Analysis of Rare Events. The Annals of Applied Probability, 15, 2496-2534. http://dx.doi.org/10.1214/105051605000000566
- Carmona, R. and Crepey, S. (2010) Particle Methods for the Estimation of Credit Portfolio Loss Distributions. International Journal of Theoretical and Applied Finance, 13, 577. http://dx.doi.org/10.1142/S0219024910005905

NOTES

^{*}Corresponding author.