Low Default Portfolios—A Proposed Rule to Identify Differences between Imprudence, Conservatism, and Exaggeration

Internal models may be used by banks to calculate their regulatory capital for credit risk. There are a variety of methodologies for estimating default probabilities, which leads to major differences in credit provisions and capital requirements. Default probabilities can be computed with either a classical or a Bayesian technique. Reduced form models are one choice; however, they may be unsuitable for quantifying economic capital because they assume independence among default events. Since real-world defaults are not solely due to exogenous causes, banks are compelled to employ structural models. Because of diversification effects between the credit losses of different obligors in each bank's portfolio, total unexpected losses do not equal the sum of individual unexpected losses. Both types of models, reduced form and structural, are presented in either analytical or numerical form. This paper covers both the classical and Bayesian techniques, the latter employing a broad set of prior functions that yield considerably different probabilities. In the context of low default portfolios with scarce data, distinguishing between imprudence, conservatism, and exaggeration can be difficult. A practical rule is proposed for finding minimum and maximum bounds, and therefore assessing the required margin of conservatism, by comparing classical and Bayesian probabilities.


Introduction
Under the Basel Accords issued by the Basel Committee on Banking Supervision, banks may use internal models to assess their credit risk exposure within an internal ratings-based framework, according to capital requirements rules adopted by banking institutions and transposed into the legal systems across the European Union 1. The default probability is a common input to the different internal rating systems, whether foundation internal ratings-based or advanced internal ratings-based (the nomenclature used in those accords).
Loans with no defaults over a lengthy period of time are a utopian concept.
Indeed, during an economic downturn, it is projected that unemployment will grow and, as a result, higher default rates will emerge, projections that are heightened by the contagion risk. Accurate and realistic estimates of default probabilities cannot be generated if correlation among individual exposures within portfolios is ignored.
Therefore, the zero-default assumption for all risk classes will not be presented here. In any case, if the observed default number is nil, the formulas in this paper can be immediately converted to the zero-default assumption.
Independence of default events, as foreseen in reduced form models, would likewise be utopian. This independence will nonetheless be studied in order to compare the results to those obtained from structural models (which account for the existence of a dependence structure among borrowers in the same risk class) and to assess the significance of the asset correlation component.
The methodology used in this document is explained in Chapter 2, which is divided into four sections. Section 2.1 addresses general considerations (specifically, asset correlation) that are necessary for each axis of credit risk models applied to low default portfolios (reduced form models and structural models) and for each axis of statistical approaches used in credit risk assessment (classical and Bayesian approaches). The classical technique is covered in Section 2.2, which has two subsections: one for reduced form models and the other for structural models. The Bayesian technique is covered in Section 2.3, which is broken into two subsections: prior distributions and posterior distributions, the latter of which encompasses both reduced form models and structural models.
In Section 2.4, a reasonable criterion is used to connect the classical and Bayesian approaches, allowing one to distinguish between imprudence, conservatism, and exaggeration in terms of default probability.
The outputs of such models and approaches are provided in Chapter 3. In the last section of this chapter, the main conclusions drawn from the comparison of those models and approaches are presented.
Finally, some closing remarks are made. They address subjects like uncertainty, mixtures of prior functions, and other open issues that need to be investigated further.

The computation of default probabilities relies on an integrating algorithm, which requires a stochastic treatment or a simulation procedure under the classical (or frequentist) approach. The algorithm takes into account all possible values "y" of a standard normally distributed random variable "Y" that represents the realization range of the systematic risk. Under the Bayesian (or subjective) approach, the stochastic treatment or simulation procedure must be doubled: one for "y" and the other for "λ", the default probability.
The frequentist approach and the subjective approach are used to discuss both reduced form models and structural models. The trapezoidal rule for numerical integration approximation is used to obtain outputs when an analytical solution is not attainable.
For each risk class, the binomial and Poisson distributions are used to model the number of defaults. Nonetheless, unlike the binomial distribution, using the Poisson distribution to represent the default count is not entirely accurate, because the size of the risk class, "n", is not fixed in this type of distribution.
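To make the closeness of the two counting distributions concrete for small default probabilities, the sketch below (with illustrative values of "n" and "λ", not taken from this paper's tables) compares the binomial probability mass function with the Poisson one of intensity nλ:

```python
import math

def binom_pmf(k, n, lam):
    """P(K = k) for a binomial default count: n obligors, default probability lam."""
    return math.comb(n, k) * lam**k * (1 - lam)**(n - k)

def poisson_pmf(k, n, lam):
    """Poisson approximation of the default count, with intensity n * lam."""
    mu = n * lam
    return math.exp(-mu) * mu**k / math.factorial(k)

# Illustrative: 1000 obligors, PD = 0.4%; the two pmfs nearly coincide
n, lam = 1000, 0.004
for k in range(4):
    b, p = binom_pmf(k, n, lam), poisson_pmf(k, n, lam)
    print(f"k={k}: binomial={b:.6f}  poisson={p:.6f}")
```

Both distributions share the mean nλ; only the variance differs slightly (nλ(1 − λ) versus nλ), which is why the paper's tables report nearly identical figures for the two.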
The posterior distribution, according to Bayesian inference, corresponds to the conditional distribution of the default probability random variable, "Λ", given a set number of borrowers, "n", and a fixed number of defaults, "k", as well as the prior distribution of the default probability. The posterior density of the default probability is obtained by combining the likelihood function and the prior function.
The Bayesian approach provides another conceptual distinction. In the classical approach there are frequentist confidence intervals, while in the Bayesian approach there are posterior credible intervals. Although the latter are commonly conceived of as a Bayesian variant of the confidence intervals used in classical probability, they have different meanings. For a unimodal posterior density 2, the highest density interval corresponds to the shortest possible interval containing a given probability mass, determined by numerical calculation. Because "n", "k", and "ρ" are regarded as constants, there is a (100 − δ)% probability that the true value of the unidimensional parameter "λ" lies within that interval.

2 If the posterior density is a multimodal distribution, "the highest density region" should be used instead of "the highest density interval".
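The numerical search for a highest density interval can be sketched as a scan for the narrowest interval holding the required mass over a discretized unimodal density. The Beta(3, 200) shape below is purely illustrative, not one of the paper's posteriors:

```python
def hdi_from_grid(xs, pdf_vals, mass=0.90):
    """Shortest interval containing `mass` of a discretized unimodal density."""
    step = xs[1] - xs[0]
    total = sum(pdf_vals) * step            # normalizing constant on the grid
    probs = [v * step / total for v in pdf_vals]
    cum = [0.0]
    for p in probs:                          # cumulative probabilities
        cum.append(cum[-1] + p)
    best = (xs[0], xs[-1])
    for i in range(len(xs)):
        for j in range(i, len(xs)):          # smallest j covering `mass` from i
            if cum[j + 1] - cum[i] >= mass:
                if xs[j] - xs[i] < best[1] - best[0]:
                    best = (xs[i], xs[j])
                break
    return best

# Illustrative unimodal posterior: unnormalized Beta(3, 200) density on (0, 0.1)
xs = [i / 10000 for i in range(1, 1000)]
pdf = [x**2 * (1 - x)**199 for x in xs]
lo, hi = hdi_from_grid(xs, pdf, 0.90)
print(f"90% HDI: [{lo:.4f}, {hi:.4f}]")
```

For a skewed posterior such as this one, the shortest interval sits closer to the mode than an equal-tailed credible interval would, which is exactly why the paper reports highest density intervals.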
If a standard Poisson distribution is used to represent "K", the probability is:

$$P(K = k) = \frac{e^{-n\lambda}\,(n\lambda)^{k}}{k!}$$

When default events are completely independent (ρ = 0%), "upper confidence bounds" 4 for both binomial and Poisson distributions can be computed analytically, through beta and gamma approximations, or numerically. They provide the same outputs because the binomial distribution is proportional to the beta distribution and the Poisson distribution is proportional to the gamma distribution: for a confidence level γ, the upper confidence bound is the γ quantile of a Beta(k + 1, n − k) distribution in the binomial case, and 1/n times the γ quantile of a Gamma(k + 1) distribution in the Poisson case.
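As a numerical check of the binomial bound (with illustrative "n", "k", and confidence level), the sketch below solves the upper-confidence-bound condition by bisection; for k = 0 it reduces to the closed form 1 − (1 − γ)^(1/n):

```python
import math

def binom_cdf(k, n, lam):
    """P(K <= k) for a binomial count with n obligors and PD lam."""
    return sum(math.comb(n, i) * lam**i * (1 - lam)**(n - i) for i in range(k + 1))

def upper_confidence_bound(n, k, conf, tol=1e-10):
    """Smallest lam with P(K <= k | lam) <= 1 - conf, found by bisection.
    Equals the `conf` quantile of a Beta(k + 1, n - k) distribution."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if binom_cdf(k, n, mid) > 1 - conf:   # cdf too high -> lam too small
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Illustrative: 250 obligors, zero observed defaults, 90% confidence
print(f"{upper_confidence_bound(250, 0, 0.90):.5f}")
```

With k = 0 the condition collapses to (1 − λ)^n = 1 − γ, so the result can be verified directly against 1 − 0.1^(1/250).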

Classical Approach with Structural Models
Positive asset correlation values, "ρ", drastically change the theoretical environment. Instead of simple and unrealistic reduced form models that rely on default independence, complex and adequate structural models should be used.

J(λ, y, ρ) represents the probability function of the sample data resulting from the binomial function of "λ" or the Poisson function of "λ", φ(y) represents the standard normal probability density function of "Y", Φ(·) represents the standard normal cumulative distribution function, and Φ⁻¹(λ) represents the inverse standard normal cumulative distribution function for "λ".

3 The true value of "λ" has a (100 − δ)% probability of not exceeding the upper confidence bound.
4 This term is tied to Katja Pluto and Dirk Tasche's "most prudent estimation" concept, which they established in 2005 and applied to the classical default probability: each risk class contains not only "n" and "k" from that particular class, but also "n" and "k" from other classes with lower rating grades.

Therefore, the probability of having no more than "k" defaults inside a risk class with "n" obligors is provided by:

$$P(K \le k) = \int_{-\infty}^{+\infty} \left[\sum_{i=0}^{k} \binom{n}{i}\, G(\lambda, y, \rho)^{i}\, \big(1 - G(\lambda, y, \rho)\big)^{n-i}\right] \phi(y)\, dy$$

if the probability function of the sample data follows a binomial distribution (or the analogous expression with Poisson intensity n·G(λ, y, ρ), respectively). The function G(λ, y, ρ) distinguishes the formulae of the reduced form models from the structural models:

$$G(\lambda, y, \rho) = \Phi\!\left(\frac{\Phi^{-1}(\lambda) - \sqrt{\rho}\, y}{\sqrt{1 - \rho}}\right) \tag{8}$$

A brief explanation of that crucial function and its origin can be found in Appendix A.
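The structural computation can be sketched numerically, assuming the usual one-factor Gaussian form of G(λ, y, ρ) described in Appendix A, with the integral over "y" approximated by the trapezoidal rule (parameters illustrative):

```python
import math

def Phi(x):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Phi_inv(p, tol=1e-12):
    """Inverse standard normal cdf by bisection."""
    lo, hi = -10.0, 10.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def cond_pd(lam, y, rho):
    """Default probability conditional on the systematic factor y (one-factor model)."""
    return Phi((Phi_inv(lam) - math.sqrt(rho) * y) / math.sqrt(1.0 - rho))

def prob_at_most_k(n, k, lam, rho, grid=2001, span=8.0):
    """P(K <= k): trapezoidal integration of the conditional binomial cdf times phi(y)."""
    step = 2 * span / (grid - 1)
    total = 0.0
    for i in range(grid):
        y = -span + i * step
        g = cond_pd(lam, y, rho) if rho > 0 else lam
        cdf = sum(math.comb(n, j) * g**j * (1 - g)**(n - j) for j in range(k + 1))
        phi = math.exp(-0.5 * y * y) / math.sqrt(2 * math.pi)
        w = 0.5 if i in (0, grid - 1) else 1.0
        total += w * cdf * phi * step
    return total

# With rho = 0 the structural result collapses to the basic binomial cdf
p_indep = prob_at_most_k(100, 1, 0.02, 0.0)
p_corr = prob_at_most_k(100, 1, 0.02, 0.12)
print(f"rho=0: {p_indep:.4f}   rho=0.12: {p_corr:.4f}")
```

With ρ = 0%, the conditional probability equals "λ" for every "y", so the integral reproduces the independent binomial case; with ρ = 12% more mass shifts toward low default counts, illustrating why correlated and independent bounds diverge.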

Prior Distributions
The posterior distribution of the default probability is derived from combining the likelihood and prior densities, as previously indicated. Multiple prior functions, ranging from less prudent to overly conservative, are identified in this subsection.
They reflect various elements or beliefs about the effective default probability.
When there is no understanding of the behavior of posterior default probability, it is common to use a non-informative prior. The most non-informative prior is a flat prior, concretely a uniform distribution between 0 and 1. However, prior functions are useful in most circumstances since they represent the default risk profile, in which case distributions need to be parametrized.
Because of its versatility in expressing the uncertainty of the default probability, the beta distribution is a widely used parametrized prior distribution. Let "Λ" be the random variable of the default probability "λ". Assuming "Λ" follows a beta distribution, it is simple to adapt this distribution to subjective information about the mean (or average) and variance of the default probability using the hyperparameters "α" and "β". The mean and variance of a beta distribution are, respectively:

$$\mu = \frac{\alpha}{\alpha + \beta} \tag{11}$$

$$\sigma^2 = \frac{\alpha \beta}{(\alpha + \beta)^2 (\alpha + \beta + 1)} \tag{12}$$

When no (objective or subjective) information about the default probability is available, a beta distribution with α = β = 1 can be taken, because it represents a uniform distribution between 0 and 1, making it a non-informative prior.
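Inverting the mean and variance expressions (Equation (11) and Equation (12)) for the hyperparameters is simple algebra; the sketch below applies it to the moments quoted for the paper's prior 6.2 (small differences from the paper's α = 6.67 and β = 175.31 would come from rounding of the quoted moments):

```python
def beta_params_from_moments(mu, var):
    """Solve mu = a/(a+b) and var = a*b/((a+b)^2 (a+b+1)) for (alpha, beta)."""
    nu = mu * (1 - mu) / var - 1.0       # nu = alpha + beta
    if nu <= 0:
        raise ValueError("variance too large for a beta distribution")
    return mu * nu, (1 - mu) * nu

def beta_moments(a, b):
    """Mean and variance of Beta(a, b)."""
    mu = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mu, var

# Moments quoted for the expert-based prior: mean 3.667%, variance 0.00019
a, b = beta_params_from_moments(0.03667, 0.00019)
print(f"alpha={a:.2f}, beta={b:.2f}")
```

The round trip through `beta_moments` recovers the input mean and variance exactly, which is a convenient sanity check when calibrating a prior to expert opinion.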
The set of prior functions, ( ) f λ , addressed in this document is listed below.
1) Uniform distribution

$$f(\lambda) = \frac{1}{u}, \qquad 0 \le \lambda \le u$$

where "u" is the upper limit of "λ". Four possibilities of "u" are tested: 1, 0.25, 0.1, and 0.01, the same values evaluated by Dirk Tasche in his work 5. These values are also utilized in two other types of priors, linear growth and linear decrease, as shown below. Strictly speaking, when u = 1 the condition λ < u must be met, not λ ≤ u. This note is also applicable to the prior functions with linear growth and linear decrease.
2) Linear growth

3) Linear decrease

6.1) Expert judgment (base scenario), defined by "m", the minimum of "λ", "Mo", the mode of "λ", and "M", the maximum of "λ", as assumed by the expert who defines the prior distribution. The values for "m", "Mo", and "M" were initially set to be conservative: m = 0.01, Mo = 0.025, and M = 0.075 7.
6.2) Beta distribution as a proxy

For the beta distribution, α = 6.67 and β = 175.31 were used. These two parameters were set so that the beta distribution's mean and variance matched those of the base scenario decided by expert opinion (prior 6.1), resulting in μ = 0.03667 and σ² = 0.00019, respectively.

6.3) Normal distribution as a proxy

To ensure that the normal distribution's mean equals the mean of prior 6.1 and its variance equals 0.03667/1.645, one used μ = 0.03667 and σ² = 0.02229 for the normal distribution 8.

5 Tasche (2012).
6 Term used by Tasche (2012).
7 In the second stage, another hypothetical scenario with m = 0.003, Mo = 0.01, and M = 0.02 was found considerably more suitable to the real-world issue of low default portfolios.

7) Beta distribution based on empirical default rate
The prior's mean is assumed to be the observed default rate for each combination of "n" and "k", and the prior's variance is set equal to the ratio between that rate (and thus the mean) and the number 1.645. This mean and variance, together with Equation (11) and Equation (12), are used to compute the beta distribution's parameters 9. This prior is only used in Section 3.4 to compare the classical and Bayesian approaches.

Posterior Distributions

1) Bayesian Approach with Reduced Form Models
Let us again use "K" to represent the random variable of the number of defaults. For any potential default probability "λ", each value "y" of a standard normally distributed random variable "Y", and a specified asset correlation "ρ", the posterior probability, based on Bayes' theorem 10, of observing exactly "k" defaults (as in Equation (1)) or no more than "k" defaults (as in Equation (2)) is obtained if the likelihood function follows a binomial distribution or a Poisson distribution, respectively.
It is worth noting that "u" is the upper limit of "λ" 11, f(λ) is the prior probability density function of "λ" as described in 2.3.1, and φ(y) is the standard normal probability density function of "Y". The H(λ, y) denominator, also known as the prior predictive distribution, normalizes the posterior distribution function.

8 The 1.645 denominator corresponds to a 90% confidence level.
9 When there are no defaults, k = 0.000000001 is used instead of k = 0 to ensure that the analytical solution of the beta distribution is valid.
10 For events "X" and "Z", with "s" the number of disjoint events, the conditional probability of "X_i" given the occurrence of "Z" is computed as follows:
$$P(X_i \mid Z) = \frac{P(Z \mid X_i)\, P(X_i)}{\sum_{j=1}^{s} P(Z \mid X_j)\, P(X_j)}$$
When the probability of "Z" given "X" is derived from a statistical model L(z | x) that describes the likelihood function and the probability P(X) is derived from a prior function f(x), the posterior density function is obtained by:
$$f(x \mid z) = \frac{L(z \mid x)\, f(x)}{\int L(z \mid x)\, f(x)\, dx}$$
11 It should be remembered that four different values of "u" were tested: 1, 0.25, 0.1, and 0.01.

Analytical outputs can be generated if no correlation of defaults among borrowers is assumed (reduced form models), as mentioned in Subsection 2.2.1. Only a few circumstances in Bayesian estimation have an explicit analytical solution: when there is independence among borrowers and, simultaneously, when joining the prior function with the likelihood function yields a standard distribution.
Concretely, analytical forms occur in the following special cases: when the likelihood is a binomial distribution and the prior is a beta distribution, on the one hand, and when the likelihood is a Poisson distribution and the prior is a gamma distribution, on the other hand. The beta-binomial and gamma-Poisson distributions derive from those combinations.
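The beta-binomial conjugacy gives the posterior in closed form, Beta(α + k, β + n − k), in the independence case. The sketch below (illustrative parameters) checks the analytic posterior mean against brute-force numerical integration of the unnormalized density:

```python
def posterior_beta_params(alpha, beta, n, k):
    """Beta prior + binomial likelihood -> Beta(alpha + k, beta + n - k) posterior."""
    return alpha + k, beta + n - k

def posterior_mean_numeric(alpha, beta, n, k, grid=100001):
    """Posterior mean by a Riemann sum over (0, 1); endpoint densities vanish here."""
    step = 1.0 / (grid - 1)
    num = den = 0.0
    for i in range(1, grid - 1):
        lam = i * step
        dens = lam**(alpha - 1) * (1 - lam)**(beta - 1) * lam**k * (1 - lam)**(n - k)
        num += lam * dens
        den += dens
    return num / den                      # normalizing step lengths cancel

# Uniform prior (alpha = beta = 1), 100 obligors, 2 defaults
a_post, b_post = posterior_beta_params(1.0, 1.0, 100, 2)
analytic = a_post / (a_post + b_post)     # mean of Beta(3, 99)
numeric = posterior_mean_numeric(1.0, 1.0, 100, 2)
print(f"analytic={analytic:.6f}  numeric={numeric:.6f}")
```

The agreement of the two values illustrates why the independent conjugate cases need no simulation, while everything else in the paper relies on numerical integration.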
2) Bayesian Approach with Structural Models

The existence of correlation requires numerical outputs from stochastic treatments or simulation procedures. With ρ > 0, the formulae stated in the preceding subsection must be changed, whether the likelihood function is represented by a binomial or a Poisson distribution, with G(λ, y, ρ) having the meaning expressed in Equation (8). L(λ, y, ρ) denotes the probability of the sample data generated from the binomial or Poisson likelihood functions of "λ", for a risk class with "n" counterparties and "k" defaults. The H(λ, y, ρ) denominator once again ensures that the posterior distribution function is normalized.
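The doubled stochastic treatment (integration over both "y" and "λ") can be sketched on a grid, assuming the one-factor Gaussian form for G(λ, y, ρ) and, for illustration, a uniform prior; the parameters are not taken from the paper's tables:

```python
import math

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Phi_inv(p, tol=1e-12):
    lo, hi = -10.0, 10.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def posterior_mean_structural(n, k, rho, prior, lam_grid=2000, y_grid=81, span=6.0):
    """Posterior mean of lambda: midpoint rule in lambda, trapezoid in y.
    The common grid step cancels in the numerator/denominator ratio."""
    c = math.comb(n, k)
    dy = 2.0 * span / (y_grid - 1)
    dl = 1.0 / lam_grid
    num = den = 0.0
    for i in range(lam_grid):
        lam = (i + 0.5) * dl              # midpoints avoid the 0 and 1 endpoints
        f = prior(lam)
        if f == 0.0:
            continue
        z = Phi_inv(lam)
        like = 0.0
        for j in range(y_grid):
            y = -span + j * dy
            g = Phi((z - math.sqrt(rho) * y) / math.sqrt(1.0 - rho)) if rho > 0 else lam
            pmf = c * g**k * (1.0 - g)**(n - k)
            phi = math.exp(-0.5 * y * y) / math.sqrt(2.0 * math.pi)
            w = 0.5 if j in (0, y_grid - 1) else 1.0
            like += w * pmf * phi * dy
        num += lam * f * like
        den += f * like
    return num / den

uniform = lambda lam: 1.0                  # non-informative prior on (0, 1)
m_indep = posterior_mean_structural(100, 2, 0.0, uniform)
m_corr = posterior_mean_structural(100, 2, 0.12, uniform)
print(f"rho=0: {m_indep:.5f}   rho=0.12: {m_corr:.5f}")
```

With ρ = 0 and the uniform prior, the grid result reproduces the conjugate Beta(3, 99) mean; with ρ = 12% the fatter right tail of the correlated likelihood pushes the posterior mean upward, consistent with the paper's observation that correlated means are much greater.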

Conservative Zone
The posterior default probabilities differ significantly depending on the priors used. Some of those probabilities are imprudent, while others are exaggerated. The key goal is to establish limits that will allow for the identification of a conservative zone.
Drift and volatility should be included in every estimate of a random variable to explain the uncertainty. The historical default data for the drift and the standard deviation of the likelihood distribution for the volatility are adopted to ensure an acceptable margin of conservatism, avoiding unwanted levels of imprudence and exaggeration. The coefficient of skewness is also included in the rule because default distributions are heavily skewed when asset correlations are positive.
On the one hand, if at least one default occurrence is recorded, the following rule is applied to create imprudent, conservative, and exaggerated zones:

λ < λ_{97.5%}: Imprudent zone;
λ_{97.5%} ≤ λ ≤ λ_{99.9%}: Conservative zone;
λ > λ_{99.9%}: Exaggerated zone.

The default probability λ_c is determined as follows for a given confidence level "c":

$$\lambda_c = \frac{k}{n} + \sigma_L \left[\Phi^{-1}(c) + \frac{\gamma_L}{6}\Big(\big(\Phi^{-1}(c)\big)^2 - 1\Big)\right] \tag{24}$$

The ratio k/n represents the empirical default experience, σ_L represents the standard deviation of the correlated likelihood function, γ_L represents the coefficient of skewness of that function, and Φ⁻¹(c) represents the inverse standard normal cumulative distribution function for "c". The likelihood function is used to indicate the level of conservatism since that function depicts the group's intrinsic risk and so eliminates the need for any prior risk data. Furthermore, to improve the required conservatism, the volatility term is computed using the standard deviation of the likelihood function at the 97.5% and 99.9% confidence levels.
The expression inside square brackets in Equation (24) is the result of a binomial test of significance adaptation 12 . It should be noted that the conventional binomial test assumes mutual independence of events, which is an incorrect assumption in the default probability models. Aside from the standard deviation of the correlated binomial distribution-i.e., the correlated likelihood function-, the conservative margin should also account for the asymmetry of the same distribution.
On the other hand, when no default event is identified, λ = μ_L is assumed, with λ constrained to the bounds of the conservative zone. One keeps in mind that when ρ > 0 the posterior distribution's mean is much greater than when independence of default events is assumed. As a result, when no past defaults have been recorded, the mean will be an overly cautious estimator of the default probability.
It is important to note that the suggested practical rule should not be used to calculate default probabilities. Its advantage is that it provides for a more impartial comparison between an expected default probability and a conservative threshold based on the likelihood distribution.
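The rule can be sketched in code. Since Equation (24) is not reproduced in full here, the version below is a reconstruction: it adds a Cornish-Fisher-type skewness term to the binomial-test bound, and the inputs for σ_L and γ_L are purely illustrative:

```python
import math

def Phi_inv(p, tol=1e-12):
    """Inverse standard normal cdf by bisection on the error function."""
    lo, hi = -10.0, 10.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def lambda_c(k, n, sigma_L, gamma_L, c):
    """Threshold: empirical rate plus a skewness-adjusted quantile of the
    correlated likelihood (Cornish-Fisher-type correction, an assumption here)."""
    z = Phi_inv(c)
    return k / n + sigma_L * (z + gamma_L / 6.0 * (z * z - 1.0))

def classify(lam, k, n, sigma_L, gamma_L):
    """Imprudent / conservative / exaggerated zones at 97.5% and 99.9% levels."""
    lo = lambda_c(k, n, sigma_L, gamma_L, 0.975)
    hi = lambda_c(k, n, sigma_L, gamma_L, 0.999)
    if lam < lo:
        return "imprudent"
    if lam <= hi:
        return "conservative"
    return "exaggerated"

# Illustrative inputs (not taken from the paper's tables)
k, n, sigma_L, gamma_L = 2, 350, 0.012, 1.8
for lam in (0.02, 0.05, 0.10):
    print(f"lambda = {lam:.2f}: {classify(lam, k, n, sigma_L, gamma_L)}")
```

The two thresholds bracket the conservative zone: any candidate default probability below the 97.5% bound is flagged as imprudent, and anything above the 99.9% bound as exaggerated.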

Risk Classes and Asset Correlation
The figures in the tables of this paper were produced for the three risk classes described in Table 1 13.
The results are provided from two perspectives: an individual approach by credit risk class, in which each class is seen as a separate group; and an integrated approach combining two or more classes 14 , in which the upper confidence bound concept 15 is used assuming that the classes combined have the same rating category. When a unique rating grade is assigned to homogenous classes, it is presumed that default risks for all counterparties within the integrated group are exposed to the same default probability (and the same asset correlation).
As a result of regulatory regimes, banks are frequently subjected to a constant "ρ". The Basel Committee on Banking Supervision recognizes that major corporations are more dependent on systematic risk than small firms and retail counterparties because they are more exposed to overall economic conditions. Small firms and retail counterparties are less affected by economic cycles, therefore their defaults are more idiosyncratic rather than systematic.
The asset correlation will always be set to ρ = 12%, as this is one of the standard values available to banking regulators.
Four significance levels are used to calculate the classical confidence intervals and the Bayesian credible intervals: 50%, 25%, 10%, and 5% (corresponding to 50%, 75%, 90%, and 95% confidence levels).

Classical Approach
The binomial and Poisson distributions describe the counting of defaults. Table 2 and Table 3 show that those distributions produce effectively identical outcomes. In fact, it is expected that the means of both distributions are identical, λ = k/n, and the variances are likewise quite similar 16. In the tables, the terms "basic binomial" and "basic Poisson" are used to describe the corresponding distributions where default occurrences are assumed to be completely independent. It is also worth noting that the default probability increases as the confidence level rises. The observed default rates are presented in the tables so that the results from those distributions can be quickly compared with these rates.
Assuming ρ = 0%, upper confidence bound computations can be done analytically (through beta and gamma approximations) or numerically, as described in Subsection 2.2.1. For the five scenarios assumed, the largest discrepancy between the numerical simulation method and the analytical alternative is 0.0003%.

13 Similar to Pluto and Tasche (2005).

Table 2. Classical default probability for basic binomial (asset correlation = 0%).
When the independence assumption is replaced with the correlation assumption, and standard distributions are thus turned into correlated distributions, the greater the confidence level, the greater the difference between the results of reduced form and structural models. Table 4 and Table 5 show that, at a 50% confidence level, moving from ρ = 0% to ρ = 12%, default probabilities for the binomial distribution range from 0.34% - 0.76% to 0.53% - 1.11% (depending on "n" and "k"). At a 95% confidence level, they rise from 0.77% - 1.98% to 2.89% - 5.16%. Therefore, at a 50% confidence level, default probabilities increase by 46% - 59% 17, and at a 95% confidence level, they increase by 138% - 274% 18. At 75% and 90% confidence levels, the growth ranges are 81% - 130% 19 and 115% - 214% 20, respectively. There are significant capital savings when an integrated approach is used rather than an individual approach 21. The greater the confidence level, the greater the spread of savings: from 27% 22 at a 50% confidence level to 45% at a 95% confidence level when three risk classes are aggregated as one homogeneous group, and from 16% at a 50% confidence level to 30% at a 95% confidence level when risk classes B and C are aggregated as one homogeneous group. These savings were obtained with ρ = 0%.
When reduced form models are replaced with structural models and asset correlation is assumed to be uniform, with ρ = 12%, the savings are lower: from 23% at a 50% confidence level to 28% at a 95% confidence level, and from 13% at a 50% confidence level to 16% at a 95% confidence level, respectively for three and two aggregated classes.

17 46% = 1.11% (Table 4)/0.76% (Table 2) − 1, for (350, 2), and 59% = 0.58% (Table 4)/0.37% (Table 2) − 1.

Table 6 and Table 7, on the one hand, and Table 8 and Table 9, on the other hand, show that there are no significant differences between the binomial distribution and the Poisson distribution used as the likelihood function of the Bayesian default probability, with the outcome being roughly the same for both the basic and correlated techniques (similar to the classical approach). Nonetheless, the Poisson distribution's means are marginally higher than the binomial distribution's, because the Poisson distribution is slightly more skewed than the binomial distribution 23. Hence the matching percentiles associated with the mean in the binomial distribution are immaterially higher than the equivalent percentiles in the Poisson distribution. The largest deviation in the mean of the default probability (0.014%) occurs with n = 150, k = 0, and ρ = 12%. When comparing Table 6 and Table 8 (or Table 7 and Table 9), it is clear that, regardless of "n" and "k", the larger "ρ", the greater the likelihood function's mean and standard deviation. When asset correlation is introduced, the likelihood function becomes considerably more skewed than when no correlation is employed. Furthermore, one verifies the rule that the larger the risk group, the higher the coefficient of skewness and the smaller the mean and the standard deviation 24.

Likelihood Functions
There are capital savings with the integration of risk categories, as seen in the classical approach. Savings in the Bayesian context, assuming ρ = 12%, amount to 9% or 18% depending on whether two risk classes (B + C) or three risk classes (A + B + C) are aggregated. For ρ = 0%, the corresponding savings increase to 20% or 33% (34%, rather than 33%, when the likelihood function follows a Poisson distribution) if two or three risk classes are aggregated 25.

24 This rule may also be shown by comparing the values of the posterior probabilities in the last two columns of Table 10 and Table 11, which will be presented later. They are both tied to the same number of defaults, k = 3.
The lowering effect on the standard deviation when "n" grows is explained by the fact that the percentage increase in "n" is smaller than the modulus of the percentage decrease in "λ".

Posterior Distributions
As stated in Section 2.1, a binomial distribution is better suited than a Poisson distribution for depicting the number of defaults within a risk class containing a fixed number of obligors. As a result, for the sake of simplicity, only outcomes based on the binomial distribution are reported from now on.
The prior functions discussed in 2.3.1 are used to find a group of statistical values for the posterior distributions: mean, median, mode, standard deviation, and coefficient of skewness, as well as four highest density intervals. Table 10 displays the outcomes with a 12% asset correlation, while Table 11 in Appendix B shows the results with no correlation 26. These tables indicate that different prior functions and asset correlations have a big impact on posterior probabilities 27.

25 When capital savings derived from the classical approach are compared to capital savings derived from the Bayesian approach, the former are higher at a 50% confidence level when ρ = 12% (13% > 9% and 23% > 18%) and lower at the same confidence level when ρ = 0% (16% < 20% and 27% < 33%).
26 Although both priors 6.2 and 7 are connected to the beta distribution, only the first one is included in Table 10 and Table 11.
27 With ρ = 12%, the means for the likelihood distribution (or the posterior with the uniform distribution with u = 1 as a prior) are three or four times bigger than those with ρ = 0%, depending on the pair "n" and "k" considered.

It is worth noting that, according to the concept of expected value, the mean of the posterior probability of "λ", μ_λ, is computed by:

$$\mu_\lambda = \frac{\int_0^1 \int_{-\infty}^{+\infty} \lambda\, f(\lambda)\, L(\lambda, y, \rho)\, \phi(y)\, dy\, d\lambda}{\int_0^1 \int_{-\infty}^{+\infty} f(\lambda)\, L(\lambda, y, \rho)\, \phi(y)\, dy\, d\lambda}$$

with f(λ), L(λ, y, ρ), and φ(y) having the same meaning as before. The effect described in the last sentence of the penultimate paragraph of 3.3.1 about the growth of "n" is validated for all priors used: the larger the risk group, the smaller the mean and the standard deviation, and the greater the coefficient of skewness.
When the prior function is a uniform distribution spanning from 0 to 1 (i.e., covering the same range of values that the default probability can assume), the values for posterior probabilities are obviously the same as those provided by the likelihood function. This can be seen by comparing Table 8 and Table 10, and likewise Table 6 and Table 11. By definition, default probabilities are tiny; empirical rates for the five scenarios range from 0% to 0.57%. As a result, a uniform distribution with values between 0 and 0.25 is expected to yield identical results (regarding the likelihood function and the posterior with the uniform distribution as a prior). Even if the upper limit of the uniform distribution is set to 0.1, there are no discernible differences. Table 10 shows that the posterior with an immoderate prior (the polar opposite of the conservative prior) produces the lowest default probability, with a mean 4.3 to 11.9 times lower than that of the posterior resulting from linear growth as a prior (with "u" between 0 and 1) and between 2.4 and 5 times lower than that of the posterior resulting from the uniform distribution as a prior (with "u" between 0 and 1 too). Therefore, the most cautious or conservative prior is the linear growth function (the theoretical polar opposite of the linear decrease prior), not the conservative prior itself.
The uniform distribution, linear decrease (both with "u" between 0 and 1), and conservative priors generate comparable posterior default probabilities for all the pairs of "n" and "k" studied. Those three sorts of functions appear to be the most appropriate and beneficial priors, in contrast to the immoderate and linear growth priors. Comments on expert knowledge priors (the base scenario as well as the beta and normal distributions as proxies) will be addressed later.
One returns to capital savings through risk group integration. Using the linear growth function with "u" between 0 and 1 as a prior and ρ = 12%, savings are substantially equivalent to savings at a 50% confidence level under the classical approach: 13% for n = 850 and 23% for n = 1000. Furthermore, a 95% confidence level comparison is required when ρ = 0%. This is because, at a 95% confidence level, savings obtained using the linear growth function as a prior are similar to those obtained using the classical approach: 28% for n = 850 and 44% for n = 1000, which are close to the 30% and 45% mentioned in 3.2, respectively.
Because the uniform distribution is a non-informative prior, posterior distribution outputs match the likelihood function outputs, as aforementioned; the corresponding capital savings have already been presented (in the last paragraph of 3.3.1). As the means of the posterior probabilities are identical for the uniform distribution, linear decrease, and conservative priors (ranging from 0 to 1), the savings are also equivalent. The immoderate prior yields the smallest savings. Table 10 and Table 11 show a simple rule: the stronger the asset correlation, the wider the right tail of the distribution (or the higher the coefficient of skewness) 28. However, this rule does not hold in two instances: when u = 0.01 (and, in some cases, when u = 0.1 for the linear growth prior), and when expert information is the base of the prior function. The first exception is self-evident, as "λ" has a smaller upper limit (0.01, or 0.1 in some cases) than the broad range of values that default probabilities can take (based on the binomial distribution).
The second exception is explained by the nature of the prior subjected to expert judgment. The previous point concerning the 0.01 threshold also applies to priors based on that judgment. These judgments typically specify a much narrower range of values than the likelihood function's available values 29.
It is important to remember that priors that provide information have a stronger influence on the posterior than priors that do not. Furthermore, as previously stated, a prior based on expert opinion is excessively rigid because the posterior probability density is constrained by the range of values of the expert knowledge used as a prior. Table 12 compares the cumulative densities of prior and posterior distributions linked to expert information thresholds with the densities of different kinds of priors and their corresponding posterior distributions. The mean is utilized as a reference point for prior and posterior distributions. In the case of employing the normal distribution as a proxy for the base scenario of expert knowledge, μ = 0.03912 rather than 0.03667, because only positive values less than 1 were used 30.

28 The second paragraph of 3.3.1 came to a similar conclusion.
29 This is clear in the base scenario, since the full mass of probability density is fixed between 0.01 and 0.075, the minimum and maximum values of the expert prior function. However, it can be seen in any other priors that are used as a proxy for the base scenario.

In other words, when the size of the risk class, "n", grows, the ratio between the posterior probability with a prior based on expert opinion and the likelihood probability climbs significantly. The likelihood probabilities decrease greatly when "n" rises, which is not the case with the posterior based on expert information as a prior. By contrast, when the posterior probability is calculated from any other prior, the ratio remains roughly constant, because both posterior and likelihood probabilities are sensitive to "n" in almost equal proportions. Similarly, the ratio of the empirical default rate to the likelihood probability is quite stable across "n".
When "k" and "ρ" are fixed, the figure shows that posterior default probabilities tend to the empirical rates as the number of obligors grows, as expected by the law of large numbers. There is no such coherence when posterior probabilities are generated from a prior that relies on expert judgment. From n = 50 to n = 500, with k = 2 and ρ = 12%, the empirical default rate falls by 360 basis points (bp), from 4% to 0.4%. The likelihood decreases by 770 bp, from 9.77% to 2.07%, the same drop as the posterior with a uniform distribution between 0 and 1 as a prior. For posteriors with linear growth, conservative, linear decrease, and immoderate functions, the respective declines are 1,072, 825, 721, and 441 bp. For posteriors built from the hypothetical expert scenarios 1%/2.5%/7.5% (EJ-BS) and 0.3%/1%/2% (EJ-ABS), corresponding to the minimum/mode/maximum values of the prior default probability, the decreases are only 101 and 16 bp, respectively.
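The convergence described above can be sketched with the conjugate beta-binomial update. This is a minimal illustration assuming independent defaults (ρ = 0) and a uniform Beta(1, 1) prior, so the numbers differ from the correlated (ρ = 12%) figures discussed in the text; the function name is illustrative.

```python
def posterior_mean(k: int, n: int, a: float = 1.0, b: float = 1.0) -> float:
    """Posterior mean of the default probability for a binomial likelihood
    (k defaults among n obligors) with a conjugate Beta(a, b) prior.
    a = b = 1 is the uniform prior on [0, 1]."""
    return (k + a) / (n + a + b)

# With k = 2 defaults fixed, the posterior mean approaches the
# empirical default rate k/n as the risk class grows.
for n in (50, 150, 500):
    print(f"n={n:4d}  empirical={2 / n:.4%}  posterior mean={posterior_mean(2, n):.4%}")
```

With an informative prior (large a + b relative to n), the same formula shows why the posterior resists the data: the prior pseudo-counts dominate the observed counts.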
Finally, Figure 1 shows that the concavity generally verified for posterior default probability functions does not exist when expert prior knowledge is provided, regardless of the values taken by the base scenario as a prior. To demonstrate the (in)flexibility of the expert distributions, note that the coefficients of variation, computed from the 10 means (one for each value of n 31 ), are vastly smaller in the two aforesaid expert scenarios: 0.1 for EJ-BS and 0.04 for EJ-ABS. The coefficients of variation for posteriors with other priors are substantially greater, ranging from 0.48 (linear growth) to 0.7 (immoderate).
The prior density of the default probability for EJ-BS has a mean of 3.67% and a standard deviation of 1.39%, whereas EJ-ABS as a prior has a mean of 1.1% and a standard deviation of 0.35%. Although these two scenarios are quite different, they both display the same inflexibility: large risk class sizes are insufficient to ensure a substantial convergence of the posterior distribution to the likelihood function, and thus the prior is stronger than the likelihood. In light of the foregoing concerns, selecting a prior based on expert information should be approached with caution.
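A beta density with the stated moments can be obtained by method-of-moments fitting. This is a hypothetical sketch (the paper's actual proxy construction may differ); the helper below recovers a Beta(a, b) whose mean and standard deviation match the EJ-BS values of 3.67% and 1.39%.

```python
def beta_from_moments(mean: float, sd: float) -> tuple[float, float]:
    """Method-of-moments Beta(a, b) parameters for a given mean and
    standard deviation (requires sd**2 < mean * (1 - mean))."""
    var = sd ** 2
    common = mean * (1.0 - mean) / var - 1.0
    return mean * common, (1.0 - mean) * common

a, b = beta_from_moments(0.0367, 0.0139)
# Recover the moments to verify the fit.
fitted_mean = a / (a + b)
fitted_var = a * b / ((a + b) ** 2 * (a + b + 1.0))
```

The large implied value of a + b, relative to any realistic risk class size n, is another way of seeing why such an expert prior overwhelms the likelihood.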
The 0.01 and 0.1 thresholds are now revisited. The first two of the four thresholds of "u" used for the uniform, linear growth, and linear decrease priors (1, 0.25, 0.1, and 0.01) are sufficient because they reflect the default probability profile. The other two, particularly the lowest (0.01), raise some difficulties due to differences in probability profiles. These difficulties are briefly summarized below.
A number of inconsistencies arise when the upper threshold assumed for "λ" is low. As previously mentioned (after the analysis of Table 10 and Table 11), the stronger the asset correlation, the higher the coefficient of skewness. However, this rule fails with u = 0.01 and, for the linear growth prior, with u = 0.1.
The default distribution is right-skewed regardless of the asset correlation assumed. Nevertheless, the coefficient of skewness may become negative when u = 0.01 (whether ρ = 0% or ρ = 12%).
The higher the asset correlation, the broader the range of the highest density interval; that is, as correlation increases, the distance between the upper and lower bounds of the shortest credible interval grows. This rule is sometimes not verified when u = 0.01. Furthermore, in some circumstances with u = 0.01, that range is wider with ρ = 0% than with ρ = 12%.
Table 13 compares the classical and Bayesian approaches. For each value of the mean linked to the posterior default probability of Table 10 and Figure 1, the corresponding matching percentile was found using the classical binomial approach, assuming a correlation of 12%. That table also provides the matching percentile associated with the beta prior for the empirical default rate, as well as the matching percentile related to this empirical rate. It also provides minimum and maximum default probabilities, which are derived using the practical rule described in 2.4.
31 Multiples of 50, from n = 50 to n = 500.
D. J. C. Dinis
Because the data in Table 13 show a wide range of percentiles, it is necessary to distinguish between imprudence, conservatism, and exaggeration. Applying the aforementioned practical rule to the figures in the table, one may deduce that the 65th percentile can be used to separate imprudence from conservatism, and the 70th percentile to separate conservatism from exaggeration. Other caps may be considered, in part because evaluating probabilities in low default portfolios requires a great deal of personal judgment. Given this, a 70th minimum percentile would be dangerous for some analysts, whereas a 75th maximum percentile would not be excessive. Over and above percentiles, the most significant consideration is the need to objectively discern different levels of safety or prudence, which demands the use of a practical rule like the one stated above.
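The matching-percentile mechanics can be sketched under the simplifying assumption of independent defaults (ρ = 0): the percentile of a candidate default probability is the confidence level at which it would be the classical binomial upper bound, given k observed defaults among n obligors. Because the paper assumes ρ = 12% in Table 13, the percentiles below come out higher than the correlated ones; the function name is illustrative.

```python
from math import comb

def matching_percentile(pd: float, k: int, n: int) -> float:
    """Percentile at which a candidate default probability `pd` sits in the
    classical binomial approach: the confidence level gamma such that `pd`
    is the classical upper confidence bound given k defaults among n
    obligors, i.e. gamma = 1 - P(K <= k | n, pd)."""
    tail = sum(comb(n, i) * pd ** i * (1.0 - pd) ** (n - i) for i in range(k + 1))
    return 1.0 - tail

# Zero-default example: where a 1.87% candidate PD sits for n = 150, k = 0.
gamma = matching_percentile(0.0187, k=0, n=150)
print(f"matching percentile (rho = 0): {gamma:.2%}")
```

Under ρ = 0 this reduces, for k = 0, to gamma = 1 - (1 - pd)**n; introducing asset correlation fattens the right tail of the default-count distribution and lowers the matching percentile of the same pd.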
Figure 2 shows the percentiles for all posterior distributions established in this paper (also assuming 12% for asset correlation).

Comparison of the Results
When there are one or more defaults, the 65th and 70th percentiles (dashed vertical lines) correspond to the lower and upper limits of conservatism. Using the practical rule as a reference to choose the model for estimating the default probability, one notes that only the prior related to the alternative base scenario of expert judgement may be deemed acceptable for establishing a prudent estimate. In that scenario, it is reliable to combine empirical data (integrated into the likelihood function) with expert knowledge (integrated into the prior function). The other priors do not deliver such reliable outputs because they produce either imprudent or exaggerated default probabilities.
When there is no default event (thus only for n = 150), the reference corresponds to the 77.91st percentile (solid vertical line). The priors 1.1 and 1.2 (uniform distribution), 3.1 (linear decrease), and 4 (conservative) are suitable since they correspond to the intended conservatism 32 .
32 The probability indicated by the prior 6.3 might also be adequate. It is, nevertheless, connected to an exaggerated base scenario of expert knowledge (i.e., the prior 6.1).
Figure 2. Percentiles of the classical binomial approach corresponding to the mean of several Bayesian default distributions (asset correlation = 12%).
Because default distributions are heavily skewed to the right, using the mean to distinguish between conservatism and exaggeration in defaulted portfolios appears unnecessarily cautious. On the other hand, if there are no defaults in the portfolio, using the mean of the default distribution is a good strategy. When n = 150 and k = 0, the mean default probability is 1.87%, which corresponds to the 77.91st percentile in a classical approach (see the prior 1.1 in Table 13) 33 . The corresponding upper limits of the highest density intervals for 75%, 90%, and 95% are 2.43%, 4.51%, and 6.17%, respectively (as shown in Table 10), which are clearly unrealistic default probabilities for a low default portfolio. [...] to the projected number of defaults using the practical rule. Even with the 65th and 70th percentiles, which might appear insufficiently cautious at first glance, default estimates are significantly higher than the empirical data, a condition that any risk management tool for capital buffer measurement must take into account.
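For intuition on these interval widths, note that with k = 0 and a uniform prior, the posterior under independent defaults is Beta(1, n + 1), whose density decreases monotonically, so the highest density interval runs from zero up to the corresponding quantile, available in closed form. This sketch ignores asset correlation, which is why its bounds are narrower than those in Table 10; correlation widens the credible interval.

```python
def hdi_upper_zero_defaults(n: int, level: float) -> float:
    """Upper bound of the highest density interval for the default
    probability when k = 0 with a uniform prior. The posterior is
    Beta(1, n + 1), whose density decreases monotonically, so the HDI is
    [0, q] with q the `level` quantile: q = 1 - (1 - level)**(1/(n + 1))."""
    return 1.0 - (1.0 - level) ** (1.0 / (n + 1))

for level in (0.75, 0.90, 0.95):
    print(f"{level:.0%} HDI upper bound (n=150): {hdi_upper_zero_defaults(150, level):.3%}")
```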

Final Thoughts
It is easier to apply a non-informative prior if there is no idea about the default probability. Nonetheless, it is beneficial to make efforts to obtain information about that probability. Although the prior function is just one of many assumptions in the entire complex model, it is desirable that it reflect knowledge or beliefs about the default probability. In low default portfolios, with a lack of loss observations, additional information and expertise become particularly crucial.
The more informative the prior function, the worse the convergence of the posterior distribution to the likelihood function. This resistance holds true for both the base scenarios (regardless of how conservative they are) and the theoretical distributions generated by these scenarios, particularly the beta distribution.
The figures also show that the prior distribution can be chosen with considerable freedom. Thus, it is possible to use a combination of priors, i.e., an average prior rather than a single one, as a way of attempting to reflect the degree of uncertainty about the default probability. However, differentiations between imprudence, conservatism, and exaggeration must first be made.
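One way to implement such an average prior is a weighted mixture of beta densities: by conjugacy, the posterior is again a beta mixture with updated components and reweighted mixture weights. The sketch below assumes this mixture-of-betas construction (not taken from the paper) with illustrative weights and parameters.

```python
from math import lgamma, log, exp

def log_beta(a: float, b: float) -> float:
    """Logarithm of the Beta function B(a, b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def mixture_posterior(weights, components, k, n):
    """Posterior of the default probability under a mixture-of-Beta prior
    sum_i w_i * Beta(a_i, b_i), after observing k defaults among n obligors.
    Conjugacy gives components Beta(a_i + k, b_i + n - k) with new weights
    proportional to w_i * B(a_i + k, b_i + n - k) / B(a_i, b_i)."""
    post_comp = [(a + k, b + n - k) for a, b in components]
    log_w = [log(w) + log_beta(a2, b2) - log_beta(a, b)
             for w, (a, b), (a2, b2) in zip(weights, components, post_comp)]
    m = max(log_w)                      # log-sum-exp for numerical stability
    raw = [exp(x - m) for x in log_w]
    post_w = [x / sum(raw) for x in raw]
    mean = sum(wi * a2 / (a2 + b2) for wi, (a2, b2) in zip(post_w, post_comp))
    return post_w, post_comp, mean

# Equal-weight average of a uniform prior and a more informative prior.
pw, pc, pm = mixture_posterior([0.5, 0.5], [(1.0, 1.0), (2.0, 50.0)], k=2, n=150)
```

The data themselves reweight the components: priors that explain the observed default count better receive more posterior weight, which is one way to let the evidence arbitrate among candidate priors.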
Subjectivity is present in both "the Bayesian (or subjective) approach" and "the classical (or frequentist) approach", terminology stated in the second paragraph of 2.1. Indeed, the classical approach requires the subjective selection of confidence levels (as well as upper confidence bounds), whereas the Bayesian approach requires the selection of prior functions. A practical rule was proposed to deal with such a wide variety of arbitrary choices. The rule has the benefit of helping validate default models in general and Bayesian options in particular. As a result, unduly optimistic or overly pessimistic default probabilities are eliminated, allowing banks' pricing to reflect adequate and consistent levels of provisioning and economic capital.
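The subjective choice on the classical side can be illustrated with the zero-default upper confidence bound under independent defaults (in the spirit of most prudent estimation): the resulting probability rises directly with the chosen confidence level. Names and parameter values below are illustrative, and the independence assumption means the figures understate the correlated case.

```python
def upper_bound_zero_defaults(n: int, gamma: float) -> float:
    """Classical upper confidence bound for the default probability with
    zero observed defaults among n independent obligors: the largest pd
    with P(K = 0 | pd) >= 1 - gamma, i.e. pd = 1 - (1 - gamma)**(1/n)."""
    return 1.0 - (1.0 - gamma) ** (1.0 / n)

# The bound grows with the (subjectively chosen) confidence level.
for gamma in (0.65, 0.70, 0.7791):
    print(f"gamma={gamma:.2%}  upper bound (n=150): {upper_bound_zero_defaults(150, gamma):.3%}")
```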
Practitioners, academics, and wary regulators have yet to agree on methods and strategies for estimating default probabilities in portfolios with scarce historical data. Some topics were addressed in this paper; others, such as model calibration and the use of multi-period estimation methods, remain difficult open issues that require further exploration.

Conflicts of Interest
The author declares no conflicts of interest regarding the publication of this paper.
"S" and "Z_j" are standardized normal random variables that are mutually independent; consequently, "X" is also a standardized normal random variable, and the "X" variables are equicorrelated across obligors.
the main theoretical axis in structural default models 34 .
34 For instance, an economic index could be a random variable "S" that reflects the portfolio's exposure to a common factor.