Computation of Reinsurance Premiums by Incorporating a Composite Lognormal Model in a Risk-Adjusted Premium Principle ()

Gilbert Chambashi^{1*}, Wamulume Mushala^{2}, Clement Mwaanga^{2}, Chilayi Mayondi^{2}, Bupe Kolosa^{3}, Levy K. Matindih^{3}, Edwin Moyo^{3}

^{1}Unicaf University, School of Business Studies, Lusaka, Zambia.

^{2}Mulungushi University, Department of Business Studies, Kabwe, Zambia.

^{3}Mulungushi University, Department of Mathematics and Statistics, Kabwe, Zambia.

**DOI:** 10.4236/jmf.2023.131001

In this paper, a formula for calculating a reinsurance premium is presented. The formula was derived by incorporating a lognormal-burr probability distribution model into the PH-transform principle, one of the risk-adjusted premium calculation principles. The lognormal-burr model was fitted, classified and validated as the best-fitting of the eight candidate composite lognormal probability distribution models for GAM's 2016 automobile insurance claims data. The formula was then applied to calculate reinsurance premiums for an automobile insurance branch under an excess of loss non-proportional reinsurance treaty.

Keywords

Risk-Adjusted Premium Calculating Principle, PH-Transform Premium Principle, Composite Lognormal Models, Reinsurance Premium Formula, Excess of Loss Non-Proportional Reinsurance Treaty

Share and Cite:

Chambashi, G., Mushala, W., Mwaanga, C., Mayondi, C., Kolosa, B., Matindih, L. and Moyo, E. (2023) Computation of Reinsurance Premiums by Incorporating a Composite Lognormal Model in a Risk-Adjusted Premium Principle. *Journal of Mathematical Finance*, **13**, 1-16. doi: 10.4236/jmf.2023.131001.

1. Introduction

The risks that insurance companies assume in insuring goods and services are enormous. Claims may arrive in such volume that the insurer is unable to pay them. Consider, for example, an insurance company that insures ten (10) houses in the same locality: the premium paid by each insured is usually far less than the value of the property, yet the insurer promises to pay up to that value in case of damage. If all ten houses are completely destroyed by, say, an earthquake in the locality, the insurer will certainly be unable to settle all the resulting claims. It is for such cases that insurance companies seek reinsurance, so that when claims exceed their capacity to pay, the reinsurance company steps in to help. This is called the transfer of risk [1], and for it the insurance company pays a certain sum, called a reinsurance premium, to the reinsurance company.

The challenge, then, is how to determine or calculate a reinsurance premium. Some reinsurance companies simply take the product of a fixed rate and the total premiums collected for the branch being reinsured as the reinsurance premium they charge the insurer seeking reinsurance. This, however, ignores the random nature of the claims being made. For this reason, actuaries have developed probabilistic models to handle this randomness, calculating reinsurance premiums with premium calculation principles that incorporate a probability distribution. Under this approach, the data are first fitted to a set of candidate probability distributions; the best-fitting candidate is then incorporated into an appropriate risk measure (premium calculation principle) that requires it.

As mentioned in the second paragraph of this introduction, not all methods of determining reinsurance premiums require risk measures with incorporated probability distributions. In this paper, however, we use such a risk measure to handle the randomness in the data, which, if ignored, could push the estimated reinsurance premium far from the true one. Well-known risk measures (also called premium calculation principles) requiring a fitted probability distribution include the pure premium principle, the expected value principle, the variance principle, the standard deviation principle, the exponential principle, the Esscher principle and the risk-adjusted premium principles [2].

Although there are several desirable properties that a premium calculation principle should satisfy, [2] lists the basic ones: non-negative loading, additivity, scale invariance, consistency and no rip-off. The risk-adjusted premium principle was shown to satisfy all of these except additivity [2], which makes it one of the most attractive principles for calculating either insurance or reinsurance premiums.

There remains the issue of which probability distribution should be incorporated into a risk measure (premium calculating principle); determining this requires considerable work. Insurance claim distributions are typically positively skewed, with a thick upper tail, indicating a mixture of moderate and large claims [3]. In other words, they consist of what are called a head and a tail, corresponding to moderate and large claims respectively [4]. Depending on whether moderate or large claims dominate the data, [3] explain that standard probability distributions such as the Pareto, gamma, Weibull, lognormal and inverse Gaussian have often been used. The difficulty with standard distributions arises when both moderate and large claims are clearly present in the data; to overcome it, several papers have proposed composite probability models [3]. These models consist of two distribution pieces separated at a certain threshold [5]. For the head of the distribution, modelling moderate claims, some papers have proposed the Weibull distribution [4] and others the lognormal [6]. Although the Pareto has often been used to model the large claims in the tail, some papers such as [4] showed that the Burr distribution fitted better when they worked with the Danish fire insurance data, and [4] proposed the Burr, loglogistic, paralogistic, generalised Pareto, Pareto, inverse Burr, inverse Pareto and inverse paralogistic as tail candidates.

When the Weibull distribution is used for the head, the composite model is called a composite Weibull model [4]; when the lognormal is used instead, it is a composite lognormal model [6]. The naming of a composite model thus depends on the distribution modelling the moderate claims at the head.

The significance of this paper is that it gives the insurance company an alternative approach (a formula) for calculating the reinsurance premium, so that it has a voice when renegotiating the premium charged by the reinsuring company. The approach is flexible: at different risk aversion indexes, several premiums can be calculated for the same branch being reinsured, and this allows the insurance company to choose the premium best suited to its financial situation. The reinsuring company can likewise use the approach to produce several candidate premiums for a branch under consideration; its choice will then depend on whether the risk to be reinsured is great or small, a greater risk attracting a larger premium and vice versa. Also, unlike [7], who calculated reinsurance premiums in risk measures using standard probability distributions that were neither classified nor statistically tested for best fit, this paper uses composite probability distributions (composite lognormal, specifically) in a risk measure, and these were classified and statistically tested for best fit to the insurance claims data. Because insurance claims are a mixture of moderate and large ones, standard distributions fit poorly; composite distributions such as the composite lognormal fit much better, catering for moderate claims at the head of the distribution and large claims at the tail.
Furthermore, many reinsurance premium calculations done in practice are deterministic; this paper instead presents a probabilistic approach that handles the random nature of the insurance claims data, so that the calculated reinsurance premium is as accurate as possible.

In summary, Section 2 presents the type of risk-adjusted premium principle (the PH-transform principle) used for the reinsurance premium calculation; the general structure, in terms of the probability density function (5) and cumulative distribution function (9), of the two-piece composite models, which take the lognormal for the head and seven other probability distributions for the tail; the characteristics of the excess of loss reinsurance treaty essential for this study; the two reinsurance premium formulae; and the methods used for selecting and validating the best of the candidate composite models. Section 3 presents the fitting of the candidate models to the data, the selection and validation of the best-fitting models, the incorporation of the best valid composite model into the reinsurance premium formulae (12) and (13), and the computation of reinsurance premiums at diverse treaty priorities (retentions) for three values of the risk aversion index. Section 4 evaluates the results obtained in Section 3, Section 5 discusses some limitations of the mathematical methods used, and Section 6 summarises the tools and techniques used to arrive at our results.

2. Theoretical Framework

2.1. Risk-Adjusted Premium Calculation Principle

Given that *X* is a non-negative random variable representing insurance claims, its survival function will be given by
${S}_{X}\left(x\right)=P\left(X>x\right)=1-F\left(x\right)$ [2] [6].

By using transforms to distort *X*’s survival function as
${S}_{Z}\left(x\right)=g\left({S}_{X}\left(x\right)\right)$, [8] proposed a general class of premium calculation principles given by

${\text{\Pi}}_{X}={\displaystyle {\int}_{0}^{+\infty}g\left({S}_{X}\left(x\right)\right)\text{d}x}$ (1)

where the function $g:\left[0,1\right]\to \left[0,1\right]$ is increasing, continuous and concave, with
$g\left(0\right)=0$ and
$g\left(1\right)=1$ [7].

The premium calculation principles we get from the above general class are called *risk-adjusted premium principles*. When the survival function is distorted by having
$g\left(x\right)={x}^{1/r}$, we get a risk-adjusted premium principle called the *proportional hazard transform* (PH-transform) *principle* given by

${\text{\Pi}}_{r}\left(X\right)={\displaystyle {\int}_{0}^{+\infty}{\left({S}_{X}\left(x\right)\right)}^{1/r}\text{d}x}$ (2)

where
$r\ge 1$, and *r* is referred to as a *risk aversion index* [7].
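The PH-transform premium (2) rarely has a closed form, so it is typically evaluated numerically. As a minimal sketch (in Python here, although the paper's own computations were done in R), the following approximates (2) by the trapezoidal rule for a toy exponential survival function; the exponential claim model and the mean of 100 are assumptions for illustration only:

```python
import math

def ph_transform_premium(survival, r, upper, steps=100_000):
    """Approximate Pi_r(X) = integral_0^inf S_X(x)^(1/r) dx by the
    trapezoidal rule on [0, upper], where S is negligible beyond upper."""
    h = upper / steps
    total = 0.5 * (survival(0.0) ** (1 / r) + survival(upper) ** (1 / r))
    total += sum(survival(i * h) ** (1 / r) for i in range(1, steps))
    return total * h

# Toy example: exponential claims with mean 100, S(x) = exp(-x/100).
S = lambda x: math.exp(-x / 100.0)
print(ph_transform_premium(S, r=1.0, upper=5000.0))  # ~100, the pure premium E[X]
print(ph_transform_premium(S, r=2.0, upper=5000.0))  # ~200, the risk-loaded premium
```

For this exponential example the integral is exactly $r\lambda$, so raising the risk aversion index from 1 to 2 doubles the premium, illustrating how $r$ loads the pure premium.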

2.2. Two-Piece Composite Models Structure in Terms of PDF and CDF

Knowing that the survival function ${S}_{X}\left(x\right)$, in Equation (2), equals $1-F\left(x\right)$, it is evident that the challenge is in determining the better fitting $F\left(x\right)$ and this paper proposes the use of composite lognormal Models which were proposed by [5].

[5] presented a general two-probability density function of composite models in the form

$f\left(x\right)=\{\begin{array}{l}{f}_{Comp1}\left(x\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{if}\text{\hspace{0.17em}}-\infty <x\le \theta \\ {f}_{Comp2}\left(x\right),\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{if}\text{\hspace{0.17em}}\theta <x<+\infty \end{array}$ (3)

getting equated to

$f\left(x\right)=\{\begin{array}{l}{a}_{1}{f}_{1}^{*}\left(x\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{if}\text{\hspace{0.17em}}-\infty <x\le \theta \\ {a}_{2}{f}_{2}^{*}\left(x\right),\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{if}\text{\hspace{0.17em}}\theta <x<+\infty \end{array}$ (4)

where ${f}_{Comp1}\left(x\right)$ stands for the head part of the distribution, modelling moderate claims and taken here as a lognormal probability density function, and ${f}_{Comp2}\left(x\right)$ stands for the tail part of the distribution, modelling large claims, which can be taken from diverse probability density functions such as the Pareto, inverse Pareto, Burr, inverse Burr, paralogistic, inverse paralogistic and log-logistic.

From Equation (4), *θ* represents the threshold at which the distribution modelling moderate claims separates from the distribution modelling large claims. Also, we have
${f}_{1}^{*}\left(x\right)=\frac{{f}_{1}\left(x\right)}{{F}_{1}\left(\theta \right)}$ and
${f}_{2}^{*}\left(x\right)=\frac{{f}_{2}\left(x\right)}{\left\{1-{F}_{2}\left(\theta \right)\right\}}$. The non-negative weights
${a}_{1}$ and
${a}_{2}$ are factors of normalisation given by
${a}_{1}=\frac{1}{1+\varphi}$ and
${a}_{2}=\frac{\varphi}{1+\varphi}$ such that, for
$\varphi =\frac{{f}_{1}\left(\theta \right)\left[1-{F}_{2}\left(\theta \right)\right]}{{f}_{2}\left(\theta \right){F}_{1}\left(\theta \right)}>0$ we have
${a}_{1}+{a}_{2}=1$.

The probability density function (4) must be continuous and differentiable at the threshold; to ensure these properties always hold, [6] imposed

${a}_{1}{f}_{1}^{*}\left(\theta \right)={a}_{2}{f}_{2}^{*}\left(\theta \right)\quad \text{and}\quad {a}_{1}\frac{\text{d}{f}_{1}^{*}\left(\theta \right)}{\text{d}\theta}={a}_{2}\frac{\text{d}{f}_{2}^{*}\left(\theta \right)}{\text{d}\theta}.$

Substituting the above expressions for ${f}_{1}^{*}\left(x\right)$, ${f}_{2}^{*}\left(x\right)$, $\varphi $, ${a}_{1}$ and ${a}_{2}$ into Equation (4), we obtain [4]

$f\left(x\right)=\{\begin{array}{ll}\frac{1}{1+\varphi}\frac{{f}_{1}\left(x\right)}{{F}_{1}\left(\theta \right)},\hfill & if\text{\hspace{0.17em}}-\infty <x\le \theta \hfill \\ \frac{\varphi}{1+\varphi}\frac{{f}_{2}\left(x\right)}{\left\{1-{F}_{2}\left(\theta \right)\right\}},\hfill & if\text{\hspace{0.17em}}\theta <x<+\infty \hfill \end{array}$ (5)

The cumulative distribution function (cdf) $F\left(x\right)$ will be obtained by integrating the pdf (5) to have an expression of the form

$F\left(x\right)=\{\begin{array}{l}{F}_{Comp1}\left(x\right),\text{}if\text{\hspace{0.17em}}-\infty <x\le \theta \\ {F}_{Comp2}\left(x\right),\text{}if\text{\hspace{0.17em}}\theta <x<+\infty \end{array}$ (6)

such that $F\left(x\right)$ satisfies the following properties [9]:

· $0\le F\left(x\right)\le 1$ ;

· *F* is non-decreasing, that is to say, if
$x<y$, then
$F\left(x\right)\le F\left(y\right)$ ;

· ${\mathrm{lim}}_{x\to +\infty}F\left(x\right)=1$ and ${\mathrm{lim}}_{x\to -\infty}F\left(x\right)=0$ ;

· *F* is right-continuous, that is to say
${\mathrm{lim}}_{y\downarrow x}F\left(y\right)=F\left(x\right)$

By integrating each piece of (5) separately, we have

$\begin{array}{c}{F}_{Comp1}\left(x\right)={\displaystyle {\int}_{-\infty}^{x}{f}_{Comp1}\left(t\right)\text{d}t}=\frac{1}{1+\varphi}\frac{1}{{F}_{1}\left(\theta \right)}{\displaystyle {\int}_{-\infty}^{x}{f}_{1}\left(t\right)\text{d}t}\\ =\frac{1}{1+\varphi}\frac{1}{{F}_{1}\left(\theta \right)}\left[{F}_{1}\left(x\right)-{\mathrm{lim}}_{t\to -\infty}{F}_{1}\left(t\right)\right]\\ =\frac{1}{1+\varphi}\frac{1}{{F}_{1}\left(\theta \right)}\left[{F}_{1}\left(x\right)-0\right]=\frac{1}{1+\varphi}\frac{{F}_{1}\left(x\right)}{{F}_{1}\left(\theta \right)}\end{array}$ (7)

and

$\begin{array}{c}{F}_{Comp2}\left(x\right)={F}_{Comp1}\left(\theta \right)+{\displaystyle {\int}_{\theta}^{x}{f}_{Comp2}\left(t\right)\text{d}t}=\frac{1}{1+\varphi}\frac{{F}_{1}\left(\theta \right)}{{F}_{1}\left(\theta \right)}+\frac{\varphi}{1+\varphi}\frac{{F}_{2}\left(x\right)-{F}_{2}\left(\theta \right)}{\left\{1-{F}_{2}\left(\theta \right)\right\}}\\ =\frac{1}{1+\varphi}+\frac{\varphi}{1+\varphi}\frac{{F}_{2}\left(x\right)-{F}_{2}\left(\theta \right)}{\left\{1-{F}_{2}\left(\theta \right)\right\}}=\frac{1}{1+\varphi}\left[1+\varphi \frac{{F}_{2}\left(x\right)-{F}_{2}\left(\theta \right)}{\left\{1-{F}_{2}\left(\theta \right)\right\}}\right]\end{array}$ (8)

Thereby giving the following cumulative distribution function

$F\left(x\right)=\{\begin{array}{l}\frac{1}{1+\varphi}\frac{{F}_{1}\left(x\right)}{{F}_{1}\left(\theta \right)},\text{}if\text{\hspace{0.17em}}-\infty <x\le \theta \hfill \\ \frac{1}{1+\varphi}\left[1+\varphi \frac{{F}_{2}\left(x\right)-{F}_{2}\left(\theta \right)}{1-{F}_{2}\left(\theta \right)}\right],\text{}if\text{\hspace{0.17em}}\theta <x<+\infty \hfill \end{array}$ (9)
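The pdf (5) and cdf (9) can be sketched in code. The snippet below (an illustrative Python sketch, not the paper's implementation) builds a composite model from a lognormal head and, as an assumed tail example, a Pareto type II (Lomax) distribution; all parameter values used in the check are hypothetical:

```python
import math

def ln_pdf(x, mu, sg):   # lognormal density f1
    return math.exp(-(math.log(x) - mu) ** 2 / (2 * sg ** 2)) / (x * sg * math.sqrt(2 * math.pi))

def ln_cdf(x, mu, sg):   # lognormal cdf F1, via the error function
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sg * math.sqrt(2.0))))

def pa_pdf(x, a, s):     # Pareto type II (Lomax) density f2
    return (a / s) * (1 + x / s) ** (-a - 1)

def pa_cdf(x, a, s):     # Pareto type II cdf F2
    return 1 - (1 + x / s) ** (-a)

def composite(x, theta, mu, sg, a, s):
    """pdf and cdf of the two-piece composite (5)/(9): lognormal head, Pareto tail."""
    f1t, F1t = ln_pdf(theta, mu, sg), ln_cdf(theta, mu, sg)
    f2t, F2t = pa_pdf(theta, a, s), pa_cdf(theta, a, s)
    phi = f1t * (1 - F2t) / (f2t * F1t)       # mixing constant (phi > 0)
    a1, a2 = 1 / (1 + phi), phi / (1 + phi)   # normalising weights, a1 + a2 = 1
    if x <= theta:
        return a1 * ln_pdf(x, mu, sg) / F1t, a1 * ln_cdf(x, mu, sg) / F1t
    pdf = a2 * pa_pdf(x, a, s) / (1 - F2t)
    cdf = a1 + a2 * (pa_cdf(x, a, s) - F2t) / (1 - F2t)
    return pdf, cdf
```

With the weights defined as above, the pdf is automatically continuous at the threshold *θ* for any parameter choice, and the cdf rises continuously through $F\left(\theta \right)=1/\left(1+\varphi \right)$ to 1.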

2.3. Excess of Loss Non-Proportional Reinsurance Treaty

This treaty has characteristics such as treaty priority, treaty guarantee and treaty ceiling [10].

2.3.1. The Treaty Priority

The priority, also called the retention *R*, is the claim amount agreed in the treaty at which the Reinsurer intervenes, provided the claim (or the total claims of an event) equals or exceeds *R* [11].

2.3.2. The Treaty Guarantee

The treaty guarantee, or the limit *h*, is the agreed amount in excess of *R* beyond which the Reinsurer does not intervene in paying the claim or claims of an event. This means the Reinsurer pays the part of a claim (or of an event's claims) in excess of *R*, and this payment cannot exceed *h* [10]. However, note that some excess of loss non-proportional reinsurance treaties have a treaty guarantee without limit, *i.e.*
$h=+\infty $ [6].

2.3.3. The Treaty Ceiling

The treaty ceiling
$R+h$ is the claim amount (or event's total claim amount) beyond which the Reinsurer does not intervene [11]. This means the Reinsured itself is responsible for paying the part of a claim above
$R+h$. Note also that, in the case of a limitless treaty guarantee, the Reinsurer is responsible for paying any amount exceeding the retention *R*.

2.3.4. Reinsurer’s Responsibility in Limitless Treaty Guarantee Case

In this case, where *X* is a random variable for a claim or claims of event amount, the amount
${L}_{h\to +\infty}$ to be paid by the Reinsurer is presented as

${L}_{h\to +\infty}\left(X\right)=\{\begin{array}{l}0,\text{if}\text{\hspace{0.17em}}0\le X<R\hfill \\ X-R,\text{if}\text{\hspace{0.17em}}R\le X\hfill \end{array}$ (10)

2.3.5. Reinsurer’s Responsibility in Limited Treaty Guarantee Case

In this case, *X* being a random variable for claims, the amount to be paid or contributed towards payment of a claim or claims of an event amount is given by [7]

${L}_{h}\left(X\right)=\{\begin{array}{l}0,\text{if}\text{\hspace{0.17em}}0\le X<R\hfill \\ X-R,\text{if}\text{\hspace{0.17em}}R\le X<R+h\hfill \\ h,\text{}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{if}\text{\hspace{0.17em}}X\ge R+h\hfill \end{array}$ (11)
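The two payout rules (10) and (11) can be expressed as a single function. The following is a small Python sketch (the amounts in the examples are hypothetical, not treaty figures from the paper):

```python
def layer_payout(x, R, h=None):
    """Reinsurer's share of a claim x under an excess-of-loss layer with
    priority R and guarantee h; h=None models a limitless guarantee (10),
    a finite h models the limited case (11)."""
    if x < R:
        return 0.0           # claim below the priority: Reinsurer pays nothing
    if h is None:
        return x - R         # limitless guarantee: full excess over R
    return min(x - R, h)     # limited guarantee: excess over R, capped at h

# Examples with hypothetical priority R = 10 and guarantee h = 5:
print(layer_payout(8, 10, 5))    # 0.0 — below the priority
print(layer_payout(12, 10, 5))   # 2   — the excess over the priority
print(layer_payout(20, 10, 5))   # 5   — capped at the guarantee
```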

2.4. Reinsurance Premium

The premium calculation principle (2) would be a perfect example for calculating a reinsurance premium if it were assumed that the Reinsurer pays any claimed amount and the excess of loss non-proportional reinsurance treaty has no treaty priority, treaty guarantee or treaty ceiling: this can be seen from the integration being carried out from 0 to $+\infty $. In the presence of a treaty priority, treaty guarantee and treaty ceiling, however, the reinsurance premium is calculated as follows [7]:

2.4.1. Reinsurance Premium in Limitless Treaty Guarantee Case

${\text{\Pi}}_{r}\left(R\right)={\displaystyle {\int}_{R}^{+\infty}{\left({S}_{X}\left(x\right)\right)}^{1/r}\text{d}x}$ (12)

2.4.2. Reinsurance Premium in Limited Treaty Guarantee Case

${\text{\Pi}}_{r}\left(R\right)={\displaystyle {\int}_{R}^{R+h}{\left({S}_{X}\left(x\right)\right)}^{1/r}\text{d}x}$ (13)
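Formulas (12) and (13) differ only in the upper limit of integration. A minimal numerical sketch of both, in Python (the paper's computations were in R) and with an assumed toy exponential survival function:

```python
import math

def reinsurance_premium(survival, r, R, h=None, steps=100_000, tail=1e5):
    """Formulas (12)/(13): integrate S_X(x)^(1/r) from R to R+h by the
    trapezoidal rule; when h is None, a large cut-off `tail` stands in
    for +infinity (the limitless-guarantee case)."""
    upper = tail if h is None else R + h
    dx = (upper - R) / steps
    g = lambda x: survival(x) ** (1 / r)
    return dx * (0.5 * (g(R) + g(upper)) + sum(g(R + i * dx) for i in range(1, steps)))

# Toy exponential survival with mean 100 (an assumption for illustration):
S = lambda x: math.exp(-x / 100.0)
p_unlim = reinsurance_premium(S, r=2.0, R=50)         # formula (12)
p_lim = reinsurance_premium(S, r=2.0, R=50, h=100)    # formula (13)
```

As expected, the limited-guarantee premium is the smaller of the two, since its integration stops at the treaty ceiling.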

2.4.3. Reinsurance Premium Where F(x) Is Composite Lognormal Distribution

This was determined after having selected a best-fitting composite lognormal distribution model to data in Section 3. Before presenting Section 3, we present tools for selecting a best-fitting probability distribution to data in Section 2.5.

2.5. Tools for Selecting a Best-Fitting Probability Distribution Model

2.5.1. Estimation of Parameters and Classification of Candidate Models

*Maximum Likelihood Estimator *(*MLE*): By the maximum likelihood method, we estimate the parameters of a given probability distribution by differentiating the log-likelihood function
$\mathrm{log}L\left(\theta ;{x}_{1},{x}_{2},{x}_{3},\cdots ,{x}_{n}\right)$ with respect to the parameters, represented by
$\theta $. The derivatives are then set to zero and solved for the parameters [12]. In practice this is often lengthy or analytically intractable, so numerical methods are applied to arrive at the estimates; see [4] [6]. Please take note that

·
$\left({x}_{1},{x}_{2},{x}_{3},\cdots ,{x}_{n}\right)$ is a sample of *n* observed random variables which are independent and of the same probability distribution.

· $L\left(\theta ;{x}_{1},{x}_{2},{x}_{3},\cdots ,{x}_{n}\right)$ is a joint probability distribution function: a joint probability mass function if $\left({x}_{1},{x}_{2},{x}_{3},\cdots ,{x}_{n}\right)$ is discrete, or a joint probability density function if $\left({x}_{1},{x}_{2},{x}_{3},\cdots ,{x}_{n}\right)$ is continuous. It is called the likelihood function and is considered a function of $\theta $ only.

· $\theta $ is a set of all parameters of a given probability distribution.

After estimation, the estimated parameters come with a value called the log-likelihood value.
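For the lognormal head of the composite models, the MLE step above has a closed form: differentiating the log-likelihood and setting the derivatives to zero yields the sample mean and (biased) variance of the log-data. A self-contained Python sketch (illustrative only; the paper estimated the full composite models numerically):

```python
import math, statistics

def lognormal_mle(xs):
    """Closed-form maximum-likelihood estimates for a lognormal sample,
    together with the negative log-likelihood (NLL) at the optimum."""
    logs = [math.log(x) for x in xs]
    n = len(xs)
    mu = statistics.fmean(logs)                          # MLE of mu
    sigma2 = sum((v - mu) ** 2 for v in logs) / n        # MLE of sigma^2
    # NLL at the MLE: sum(log x_i) + (n/2) log(2*pi*sigma^2) + n/2
    nll = sum(logs) + (n / 2) * math.log(2 * math.pi * sigma2) + n / 2
    return mu, math.sqrt(sigma2), nll
```

The returned NLL is the quantity that feeds directly into the AIC classification described next.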

*Akaike Information Criterion *(*AIC*): This criterion measures the quality of a model while penalising it for its number of parameters. It is most suitable for ranking (classifying) candidate distributions rather than for final decision-making [13]. The model with the smallest AIC is classified as the best. The AIC is given by

$\text{AIC}=2\text{NLL}+2k$

where *k* is the number of parameters to estimate for the model and NLL is a negative log-likelihood value.
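The classification step is then a one-liner over the fitted candidates. In the sketch below, the (NLL, *k*) pairs are hypothetical values chosen purely to illustrate the ranking, not the paper's fitted results:

```python
def aic(nll, k):
    """Akaike Information Criterion from a negative log-likelihood NLL
    and the number of estimated parameters k."""
    return 2 * nll + 2 * k

# Hypothetical (NLL, k) pairs for two candidate composite models:
fits = {"lognormal-burr": (7420.3, 5), "lognormal-pareto": (7431.8, 4)}
best = min(fits, key=lambda name: aic(*fits[name]))
print(best)  # the candidate with the smallest AIC is classified first
```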

2.5.2. Goodness of Fit Tests

We used the goodness-of-fit tests as used by [3]. They defined goodness-of-fit measures as test statistics that quantify the “distance” between the empirical distribution function (EDF) constructed from the data and the cumulative distribution function (cdf) of the fitted models. Based on the work of [3], they suggested the use of

· Kolmogorov-Smirnov (KS) test statistics given by $D=\mathrm{max}\left({D}^{+},{D}^{-}\right)$, where ${D}^{+}={\mathrm{max}}_{1\le j\le N}\left\{\frac{j}{N}-\stackrel{^}{F}\left({x}_{\left(j\right)}\right)\right\}$ and ${D}^{-}={\mathrm{max}}_{1\le j\le N}\left\{\stackrel{^}{F}\left({x}_{\left(j\right)}\right)-\frac{j-1}{N}\right\}$

· Cramer-von Mises (CvM) test statistic given by

${W}^{2}={\displaystyle {\sum}_{j=1}^{N}{\left[\stackrel{^}{F}\left({x}_{\left(j\right)}\right)-\frac{2j-1}{2N}\right]}^{2}}+\frac{1}{12N}$

· Anderson-Darling (AD) test statistic given by

${A}^{2}=-N-\frac{1}{N}{\displaystyle {\sum}_{j=1}^{N}\left[\left(2j-1\right)\mathrm{log}\left(\stackrel{^}{F}\left({x}_{\left(j\right)}\right)\right)+\left(2N+1-2j\right)\mathrm{log}\left(1-\stackrel{^}{F}\left({x}_{\left(j\right)}\right)\right)\right]}$

where,

$\stackrel{^}{F}$ is the cdf of the fitted model, ${x}_{1},{x}_{2},\cdots ,{x}_{N}$ is the original data and ${x}_{\left(1\right)},{x}_{\left(2\right)},\cdots ,{x}_{\left(N\right)}$ is an increasing ordered data from the original data. And the smaller the values of KS, CvM and AD are, the better the model fits the data [3].
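The three statistics are straightforward to compute from the ordered data. A minimal Python sketch (0-based indices `j` replace the 1-based `j` of the formulas above; the uniform check at the end is an assumed toy example):

```python
import math

def gof_statistics(data, cdf):
    """KS, CvM and AD statistics comparing the EDF of the data with a
    fitted cdf."""
    xs = sorted(data)
    N = len(xs)
    F = [cdf(x) for x in xs]
    d_plus = max((j + 1) / N - F[j] for j in range(N))
    d_minus = max(F[j] - j / N for j in range(N))
    ks = max(d_plus, d_minus)
    cvm = sum((F[j] - (2 * j + 1) / (2 * N)) ** 2 for j in range(N)) + 1 / (12 * N)
    ad = -N - (1 / N) * sum((2 * j + 1) * math.log(F[j])
                            + (2 * N - 1 - 2 * j) * math.log(1 - F[j])
                            for j in range(N))
    return ks, cvm, ad

# Sanity check on toy uniform data against the uniform cdf F(x) = x:
ks, cvm, ad = gof_statistics([0.1 * k for k in range(1, 10)], lambda x: x)
```

Smaller values of all three statistics indicate a better fit, in line with [3].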

2.5.3. Model Validating

To prove the validity of the model selected by the indication of the goodness-of-fit tests as best, we will carry out a hypothesis test with the following hypothesis

Null hypothesis (*H*_{0}): the best model is valid if its p-value is greater than the level of significance *α*.

Alternative hypothesis (*H _{a}*): the best model is not valid if its p-value is less than the level of significance *α*.

In this paper, the standard level of significance $\alpha =0.05$ was used.

We proceeded with the approach to determining the p-value using the bootstrap procedure proposed by [3] in the following order:

· For the model selected as a better fit to the data using methods in section 2.5.1, calculate the goodness-of-fit test statistics ${t}_{KS}$, ${t}_{CvM}$ and ${t}_{AD}$,

· By using the model providing a better fit for data ${x}_{1},{x}_{2},\cdots ,{x}_{N}$,

○ generate *M* sets of resampled data and denote them as
${\stackrel{^}{x}}_{1}^{\left(i\right)},{\stackrel{^}{x}}_{2}^{\left(i\right)},\cdots ,{\stackrel{^}{x}}_{N}^{\left(i\right)}$ for
$i=1,\cdots ,M$.

○ refit it to each set of the resampled data and then compute the test statistics ${t}_{KS}^{\left(i\right)}$, ${t}_{CvM}^{\left(i\right)}$ and ${t}_{AD}^{\left(i\right)}$ for $i=1,\cdots ,M$.

· Then, finally, determine the p-values by $\frac{\#\left\{i:{t}_{KS}^{\left(i\right)}\ge {t}_{KS}\right\}}{M}$, $\frac{\#\left\{i:{t}_{AD}^{\left(i\right)}\ge {t}_{AD}\right\}}{M}$ and $\frac{\#\left\{i:{t}_{CvM}^{\left(i\right)}\ge {t}_{CvM}\right\}}{M}$.

In section 3, the resampling was done by taking $M=1000$.
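The bootstrap steps above can be sketched as follows. This is an illustrative Python version using an assumed exponential model and the KS statistic (the paper applied the procedure to the composite lognormal models in R, with $M=1000$; a smaller $M$ is used here for speed):

```python
import math, random

def bootstrap_pvalue(data, fit, sample, statistic, M=1000, seed=1):
    """Parametric-bootstrap p-value (Section 2.5.3): fit the model, generate
    M resampled data sets from the fit, refit to each, and count bootstrap
    statistics at least as large as the observed one."""
    random.seed(seed)
    N = len(data)
    params = fit(data)
    t_obs = statistic(data, params)
    exceed = 0
    for _ in range(M):
        boot = [sample(params) for _ in range(N)]
        if statistic(boot, fit(boot)) >= t_obs:   # refit to the resample
            exceed += 1
    return exceed / M

def ks_stat(xs, mean):
    """KS distance between the data and an exponential cdf with given mean."""
    xs = sorted(xs)
    N = len(xs)
    F = [1 - math.exp(-x / mean) for x in xs]
    return max(max((j + 1) / N - F[j], F[j] - j / N) for j in range(N))

random.seed(0)
data = [random.expovariate(0.01) for _ in range(200)]   # toy claims, mean 100
p = bootstrap_pvalue(data, fit=lambda xs: sum(xs) / len(xs),
                     sample=lambda m: random.expovariate(1 / m),
                     statistic=ks_stat, M=200)
```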

3. Theoretical Applications to Insurance Claims Data

The data used for the applications comprise all automobile insurance claims made in 2016 at the GAM insurance company. Claims were of two types: corporal claims, made for damage caused directly to persons' bodies, and material claims, made for damage caused to vehicles. The data contained a total of 6499 claims, of which 0.4% were corporal claims and 99.6% were material claims.

The company had entered into a 2017 excess of loss non-proportional reinsurance treaty with some of the characteristics being as follows:

· Treaty priority = 10,000,000 DZD per claim amount (or event’s total claim amount)

· Treaty ceiling:

○ unlimited for corporal damages.

○ limited to 150,000,000 DZD for material damages.

3.1. Composite Lognormal Model Fitting to Data

Seven candidate composite lognormal models were considered for fitting to the data: lognormal-Pareto, lognormal-Burr, lognormal-inverse paralogistic, lognormal-paralogistic, lognormal-loglogistic, lognormal-inverse Pareto and lognormal-inverse Burr. When, for example, the tail of the distribution (modelling large claims) was modelled by the Pareto distribution, the composite lognormal model became the composite lognormal-Pareto (LPC) probability distribution model.

The fitting produced the estimated parameters (whose exponentials are the values shown in Table 1) and the negative log-likelihood (NLL), which was used to calculate the Akaike Information Criterion (AIC) [2]. We then calculated the KS, CvM and AD test statistics to aid the selection of a better-fitting model, since we could not rely entirely on the AIC.

Table 1. Fitting composite lognormal probability distribution models to insurance claims.

The lognormal-burr (*LBC*) model produced the smallest AIC and is hence classified first. The AD test statistic also supports it as the model providing a better fit to the insurance claims data, even though the KS and CvM test statistics favoured the lognormal-paralogistic (*LPaC*).

3.2. Testing the Validity of LBC and LPaC Models

The validity of each model will be tested based on the hypotheses

$\begin{array}{l}{H}_{0}:\text{the model is valid if p-value}\ge \alpha =0.05\\ {H}_{a}:\text{the model is not valid if p-value}<\alpha =0.05\end{array}$

where $\alpha =0.05$ is the level of significance, meaning that if the hypothesis ${H}_{0}$ is accepted there is a chance of $1-\alpha =0.95$ that this decision is correct and a chance of $\alpha =0.05$ that it is incorrect.

As described in Sections 2.5.2 and 2.5.3, the p-values are determined in terms of the KS, CvM and AD test statistics using the bootstrap method with $M=1000$.

All the p-values were above 0.05, which signifies that both models are accepted as valid, as shown in Table 2 in Section 3.3.

3.3. Incorporating LBC into the PH-Transform Principle

Although both the lognormal-burr and the lognormal-paralogistic qualify as models giving a better fit to the data, we use only the lognormal-burr in the reinsurance premium formulae (12) and (13), since using both would take considerably more space. Further reasons for preferring the lognormal-burr over the lognormal-paralogistic are given in Section 4.

Using the cumulative distribution function (9), we take ${F}_{1}\left(x\right)=\Phi \left(\frac{\mathrm{ln}x-\mu}{\sigma}\right)$, where $\mu ,\sigma >0$, as a cumulative distribution function for lognormal and we take ${F}_{2}\left(x\right)=1-{\left(1+{\left(x/s\right)}^{\beta}\right)}^{-\alpha}$, where $\alpha ,\beta ,s>0$, as a cumulative distribution function for burr. Consequently, we have a cumulative distribution function for the composite lognormal-burr probability distribution model given by

Table 2. p-values of best fitting composite lognormal models.

$F\left(x\right)=\{\begin{array}{ll}\frac{1}{\left(1+\varphi \right)}\frac{\Phi \left(\left(\mathrm{ln}x-\mu \right)/\sigma \right)}{\Phi \left(\left(\mathrm{ln}\theta -\mu \right)/\sigma \right)},\hfill & \text{if}\text{\hspace{0.17em}}0<x\le \theta \hfill \\ 1-\frac{\varphi}{1+\varphi}{\left(\frac{1+{\left(\frac{\theta}{s}\right)}^{\beta}}{1+{\left(\frac{x}{s}\right)}^{\beta}}\right)}^{\alpha},\hfill & \text{if}\text{\hspace{0.17em}}\theta <x<+\infty \hfill \end{array}$ (14)

Using the PH-transform principle, the reinsurance premium formula is determined by

$\begin{array}{l}{\text{\Pi}}_{r}\left(R\right)={\displaystyle {\int}_{R}^{+\infty}{\left(S\left(x\right)\right)}^{1/r}\text{d}x}\\ ={\left(\frac{\varphi}{1+\varphi}\right)}^{1/r}\ast {\left(1+{\left(\frac{\theta}{s}\right)}^{\beta}\right)}^{\alpha /r}\ast {\displaystyle {\int}_{R}^{+\infty}\frac{1}{{\left(1+{\left(\frac{x}{s}\right)}^{\beta}\right)}^{\alpha /r}}\text{d}x},\text{for}\theta <R\end{array}$ (15)

provided, of course, that $S\left(x\right)\text{\hspace{0.17em}}=1\text{\hspace{0.17em}}-\text{\hspace{0.17em}}F\left(x\right)=\frac{\varphi}{1\text{\hspace{0.17em}}+\text{\hspace{0.17em}}\varphi}{\left(\frac{1\text{\hspace{0.17em}}+\text{\hspace{0.17em}}{\left(\frac{\theta}{s}\right)}^{\beta}}{1\text{\hspace{0.17em}}+\text{\hspace{0.17em}}{\left(\frac{x}{s}\right)}^{\beta}}\right)}^{\alpha}$, where $\theta \text{\hspace{0.17em}}<\text{\hspace{0.17em}}x\text{\hspace{0.17em}}<+\infty $.

The integral in formula (15) exists if and only if
$1\le r<\alpha \ast \beta $. [5] shows that the parameters *μ* and *ϕ* can be calculated from the other estimated parameters by

$\mu =\mathrm{ln}\theta +{\sigma}^{2}+\theta {\sigma}^{2}\frac{{{f}^{\prime}}_{2}\left(\theta \right)}{{f}_{2}\left(\theta \right)}\text{and}\varphi =\frac{\left({\theta}^{\beta}+{s}^{\beta}\right)\psi \left[\left(\mathrm{ln}\theta -\mu \right)/\sigma \right]}{\sigma \alpha \beta {\theta}^{\beta}\Phi \left[\left(\mathrm{ln}\theta -\mu \right)/\sigma \right]}$.

Formula (15) is suitable for limitless guarantee cases and in the case where the guarantee is limited, the reinsurance premium will be given by

$\begin{array}{l}{\text{\Pi}}_{r}\left(R\right)={\displaystyle {\int}_{R}^{R+h}{\left(S\left(x\right)\right)}^{1/r}\text{d}x}\\ ={\left(\frac{\varphi}{1+\varphi}\right)}^{1/r}\ast {\left(1+{\left(\frac{\theta}{s}\right)}^{\beta}\right)}^{\alpha /r}\ast {\displaystyle {\int}_{R}^{R+h}\frac{1}{{\left(1+{\left(\frac{x}{s}\right)}^{\beta}\right)}^{\alpha /r}}\text{d}x},\\ \text{for}\text{\hspace{0.17em}}r\ge 1\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}\theta <R\end{array}$ (16)

where *R* is the treaty priority and *h* is the treaty guarantee [9].

Since bodily-injury (corporal) claims under an unlimited guarantee account for only 0.4% of all claims, while material claims under a limited guarantee account for 99.6%, we treated the reinsurance cover conditions for bodily-injury claims as negligible relative to those for material claims. As a result, formula (16), rather than formula (15), was used for the premium calculations. The reinsurance premium computations using formula (16) were performed numerically in the R statistical software [15], which is one reason formula (16) was left in integral form rather than integrated in closed form.
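The numerical evaluation of formula (16) can be sketched as follows. This is a Python illustration using `scipy.integrate.quad` (the authors used R's `integrate`); all parameter values in the usage comment are hypothetical placeholders, not the fitted estimates:

```python
from scipy.integrate import quad

def ph_premium_limited(R, h, r, theta, s, alpha, beta, phi):
    """PH-transform reinsurance premium under a limited guarantee, Eq. (16).
    Valid for r >= 1 and theta < R; the constant in front of the integral
    is the factored survival-function term of the lognormal-burr tail."""
    const = (phi / (1.0 + phi)) ** (1.0 / r) \
        * (1.0 + (theta / s) ** beta) ** (alpha / r)
    integral, _ = quad(lambda x: (1.0 + (x / s) ** beta) ** (-alpha / r),
                       R, R + h)
    return const * integral

# Hypothetical values for illustration only:
# ph_premium_limited(R=2.0, h=10.0, r=2.0, theta=1.0, s=2.0,
#                    alpha=3.0, beta=2.0, phi=0.5)
```

Consistent with the discussion in Section 4, the premium computed this way decreases as the retention *R* increases and increases as the risk aversion index *r* increases.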

3.4. Reinsurance Premium Computations

The reinsurance premiums have been computed at diverse values of the retention *R* and the risk aversion index *r*; the results are presented in Table 3.

4. Evaluation

Among the risk-adjusted premium principles of [8], we chose the PH-transform principle because it accommodates the treaty priority and treaty ceiling in the calculation of the reinsurance premium under the excess of loss non-proportional reinsurance treaty [7], as can be seen in Table 3. The PH-transform principle also allows the risk aversion index to be adjusted according to whether the reinsurer anticipates high or low risk on damage claims, since the premium increases as the risk aversion index increases and vice versa, as shown in Figure 1 and Table 3. Figure 1 also shows that the higher the treaty priority, the lower the calculated PH-transform reinsurance premium [7]. Table 1 shows that the composite lognormal-burr model is the best of the candidate models, since it has the smallest values of the AIC and AD test statistics. Going by the KS and CvM test statistics, however, the composite lognormal-paralogistic model appears to provide the best fit to the data. Yet the KS test statistics for the lognormal-burr and lognormal-paralogistic models differ only minimally, which suggests that the KS test might have favoured the lognormal-burr were it not insensitive to the behaviour of the model at the tail [7]. That the lognormal-paralogistic model is not supported as the best of the candidate models does not make it an invalid or poorly fitting model, as the p-values in Table 2 show; either could serve as the best-fitting model. However, because the AIC and AD test statistics favour the lognormal-burr model and the KS statistic is not reliable at capturing tail behaviour, we opted to use the lognormal-burr model in our reinsurance premium formulae (15) and (16). Also, because the corresponding integrals may fail to exist for some composite models, we left the reinsurance premium formulae (15) and (16) in integral form, to be evaluated numerically.

Table 3. Reinsurance premiums at diverse values of retention (R) and risk aversion index (r).

Figure 1. Plots of retentions and reinsurance premiums for *r* = 6.8 and *r* = 7.

5. Limitations

Although the PH-transform principle is very desirable for premium adjustment because of its risk aversion index, it cannot be used to model and compute reinsurance premiums for all reinsurance treaties. Treaties such as the surplus proportional reinsurance treaty [11], and others that do not involve a treaty priority, would require research into other premium calculation principles suited to them. Also, even though the composite lognormal models produced the best-fitting models among the candidates considered, other composite models, such as those using a Weibull head instead of a lognormal one [4], might still fit the data better. The composite lognormal-burr model emerging as best in this paper therefore does not imply that it is the best of all models that could be fitted to the given data. Only space in this paper and time have kept us from presenting other possible candidate models.

6. General Tools and Techniques

For all the computations in this paper, we used the R statistical software, version 4.1.0 [15]. The parameters were estimated with the optim function together with the dcomplnorm function of the CompLognormal package [6]. The negative log-likelihood (NLL) was part of the output of the parameter estimation process, and we calculated the AIC manually by substituting the NLL into the AIC formula [4]. The seven probability distributions used to model the tail belong to the family of transformed beta distributions and were taken from the actuar package [4] [14]. The KS, CvM and AD test statistics were computed with functions we programmed for the purpose, and their associated p-values were obtained by programming related functions that use, among others, the sample function and the test-statistic functions already created. The numerical integration of the reinsurance premium formula (16) was performed with the integrate function. Finally, the retention-versus-premium figure was created with the plot, lines and legend functions, after programming a function that returns the reinsurance premiums for several retentions.
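The paper's goodness-of-fit code is not reproduced here, but the parametric-bootstrap p-value computation it describes for the KS statistic can be sketched as follows. This is a Python illustration under assumed function names (the authors worked in R); `cdf` is the fitted model's distribution function and `simulate` draws samples from the fitted model:

```python
import numpy as np

def ks_statistic(sample, cdf):
    """Two-sided Kolmogorov-Smirnov distance between the empirical
    CDF of `sample` and a fitted CDF."""
    x = np.sort(np.asarray(sample))
    n = len(x)
    F = cdf(x)
    i = np.arange(1, n + 1)
    return max(np.max(i / n - F), np.max(F - (i - 1) / n))

def ks_bootstrap_pvalue(data, cdf, simulate, n_boot=500, seed=0):
    """Parametric bootstrap: simulate samples from the fitted model and
    count how often the simulated KS statistic exceeds the observed one."""
    rng = np.random.default_rng(seed)
    d_obs = ks_statistic(data, cdf)
    exceed = sum(ks_statistic(simulate(len(data), rng), cdf) >= d_obs
                 for _ in range(n_boot))
    return exceed / n_boot
```

The same resampling scheme applies to the CvM and AD statistics by swapping in their statistic functions; a fully rigorous bootstrap would also refit the model to each simulated sample.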

Acknowledgements

Special thanks go to the anonymous reviewers for their helpful comments on this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

[1] Olivieri, A. and Pitacco, E. (2015) Introduction to Insurance Mathematics: Technical and Financial Features of Risk Transfers. Springer, Cham.

[2] Dickson, D.C. (2016) Insurance Risk and Ruin. Cambridge University Press, Cambridge. https://doi.org/10.1017/9781316650776

[3] Calderín-Ojeda, E. and Kwok, C.F. (2016) Modeling Claims Data with Composite Stoppa Models. Scandinavian Actuarial Journal, 2016, 817-836. https://doi.org/10.1080/03461238.2015.1034763

[4] Bakar, S.A.A., Hamzah, N.A., Maghsoudi, M. and Nadarajah, S. (2015) Modeling Loss Data Using Composite Models. Insurance: Mathematics and Economics, 61, 146-154. https://doi.org/10.1016/j.insmatheco.2014.08.008

[5] Nadarajah, S. and Bakar, S.A.A. (2014) New Composite Models for the Danish Fire Insurance Data. Scandinavian Actuarial Journal, 2014, 180-187. https://doi.org/10.1080/03461238.2012.695748

[6] Nadarajah, S. and Bakar, S.A.A. (2013) CompLognormal: An R Package for Composite Lognormal Distributions. The R Journal, 5, 97-103. https://doi.org/10.32614/RJ-2013-030

[7] e Silva, J.M.A. and de Lourdes Centeno, M. (1998) Comparing Risk Adjusted Premiums from the Reinsurance Point of View. ASTIN Bulletin: The Journal of the IAA, 28, 221-239. https://doi.org/10.2143/AST.28.2.519067

[8] Wang, S. (1996) Premium Calculation by Transforming the Layer Premium Density. ASTIN Bulletin: The Journal of the IAA, 26, 71-92. https://doi.org/10.2143/AST.26.1.563234

[9] Klugman, S.A., Panjer, H.H. and Willmot, G.E. (2012) Loss Models: From Data to Decisions. 4th Edition, Wiley, New York.

[10] Deelstra, G. and Plantin, G. (2014) Risk Theory and Reinsurance. Springer, London, 78. https://doi.org/10.1007/978-1-4471-5568-3

[11] Noussia, K. (2013) Reinsurance Arbitrations. Springer, Berlin. https://doi.org/10.1007/978-3-642-45146-1

[12] Dress, F. (2007) Les probabilités et la statistique de A à Z: 500 définitions, formules et tests d'hypothèse. Dunod.

[13] Parodi, P. (2014) Pricing in General Insurance. Chapman and Hall/CRC, New York. https://doi.org/10.1201/b17525

[14] Dutang, C., Goulet, V. and Pigeon, M. (2008) actuar: An R Package for Actuarial Science. Journal of Statistical Software, 25, 1-37. https://doi.org/10.18637/jss.v025.i07

[15] R Core Team (2021) R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna.


Copyright © 2023 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.