1. Introduction
Recent decades have seen rapid growth of decentralized schemes in insurance supported by internet technology, with practices such as online peer-to-peer insurance, mutual aid and micro-insurance. Decentralized schemes arose as a solution to the low insurance penetration rate in Africa, which averages 2%, well below the world average of 7%; [1] attributed this to low financial literacy, unaffordable premiums and the infancy of the insurance industry in Africa. Decentralization disrupted traditional business with the goal of broadening penetration by complementing insurance operations, through the revival of centuries-old community loss-sharing traditions that can be traced back to the Roman Empire, as per [2]. However, decentralized insurance schemes have been plagued by high failure rates within short periods of time. [3] provided the example of the giant Chinese online mutual aid platform Xianghubao, which collapsed after three years of operations despite amassing more than 100 million policyholders within its first year, alongside other defunct peer-to-peer schemes.
One theory put forward to explain the high failure rate concerns the nature of the policyholders: while traditional insurance collects large numbers of homogeneous people so that the Central Limit Theorem applies, [4] discerned that members of decentralized schemes have close relations, such as co-workers and neighbours. This presents two problems: individual risks are not approximately independent, and schemes have relatively small numbers of members. Reinsurance was introduced to the schemes as a control strategy and has shown success in stabilizing decentralized finance, as discussed in [5] and [6]. The superiority of stop-loss reinsurance in minimizing the variance of insurer loss is well documented in the actuarial literature, such as [7]. The reinsurance threshold, known as the retention limit, is the cut-off point between the two parties and a point of particular interest, because it determines how much a scheme has to pay as reinsurance premium. The threshold can be derived from the viewpoint of the insurer, the reinsurer, or both (a combined approach acceptable to both sides).
This paper proposes a combined approach to constructing the stop-loss retention threshold for a pool of dependent risks. The methodology is divided into two main stages. The first stage leverages the definition of reinsurance to build separate functions representing the costs of the insurer and reinsurer. This is achieved by using stop-loss reinsurance to reconstruct two aggregate random variables for the insurer and reinsurer, which, combined with the premium, form cost functions. The second stage minimizes the aggregate cost of the whole scheme. This is accomplished using a convex combination of the cost functions measured at quantile measures to produce a linear function, which is minimized to arrive at an optimal retention threshold. Gamma-distributed losses were used to demonstrate the viability of the solution. Results are tested through Monte Carlo simulation, by ensuring that the optimized loss values tend to zero with each iteration, and a comparison is performed with a variance-covariance method. The approach was selected because convex-combination optimization increases the speed at which the algorithm converges to the solution, and the results can be extended to a system of linear equations.
The suggested approach found that the optimal threshold is defined within the survival function of the aggregate loss, which is consistent with the definition of the quantile measure for non-negative random variables. The survival function is monotone decreasing, thus providing natural upper and lower boundaries to the optimized loss function and yielding consistent threshold estimates for both small and large pools. An assumption of comonotonicity was applied to construct the aggregate loss function; since the comonotonic vector carries the highest risk of any combination of the individual risks, the resulting threshold exaggerates the actual risk, providing a safety loading for adverse experience.
Results showed that, for stop-loss reinsurance to be effective in a portfolio with high dependence, the reinsurer has to bear a larger portion of the aggregate risk than the insurer. By extension, the dependence structure influences the optimal share of the aggregate loss between insurer and reinsurer. As reinsurance incurs a cost to the insurer in the form of the premium, there is an incentive for a higher threshold in order to attract a lower reinsurance premium. With comonotonic risks, findings showed that the optimal threshold was obtained when more weight was on the reinsurer than on the insurer, with little variation with respect to the parameter controlling the cost functions. This means that effective risk management of dependent risks calls for a lower reinsurance threshold than for independent risks.
The implication of this work is to challenge the assumption of independence in decentralized schemes, even though it is acceptable in conventional insurance, which has larger group sizes with either homogeneous (independent and identically distributed) or heterogeneous (independent but not identically distributed) risks. For industry regulators, reinsurance of decentralized insurance has to involve a different set of margins owing to the nature of the risks it undertakes. Such an outcome is consistent with the hypothesis that traditional reinsurance may fail to stabilize decentralized insurance schemes because it applies traditional margins and does not consider the dependency structure. This regulatory approach has shown success with decentralized finance in developing countries, as investigated by [8] using vehicles such as Savings and Credit Cooperative Societies (SaCCoS) and mobile money. [9] discusses similarities and differences between decentralization in finance versus insurance, and concludes that both are good candidates for inclusive financial practices.
This paper relates to optimal reinsurance problems, a class of infinite-dimensional (constrained) optimization problems whose solution searches for an optimal function in lieu of a parameter value. The solution is guided by Pareto optimality, pioneered by [10] [11] [12] [13]. Determination of the retention threshold by cost minimization is centred on measures of risk beyond classical variance/standard-deviation methods, with the goal of complementing the risk measurement applied in other financial institutions, such as the banking sector's Basel Accords. The method has been applied by several authors for single ([14] [15] [16]) and multiple risks ([17] [18] [19]). In addition, this paper makes a number of contributions to the literature. First, it adds to dependency analysis for multiple risks by establishing a connection between the risk measures in [17] and the cost analysis for non-independent risks in [20]. Second, it contributes to the quantitative analysis of decentralized insurance in [21] by relaxing the assumption of independence, which understates claims and leads to liquidity problems. Finally, it derives a simple threshold easily understood by practitioners, which is an important area of inclusive finance that encourages hybrid products between banking and insurance services; a merger suggested in [22].
The rest of the paper is organized as follows: Section 2 provides background on sums of random variables, and Section 3 discusses the definition of stop-loss reinsurance. The quantile risk measure is revisited in Section 4, before the optimal retention threshold is derived in Section 5. Section 6 presents numerical applications, and Section 7 concludes.
2. Sum of Random Variables
Aggregate risk is made up of several individual risks; as such, the sum of random variables is of special interest in risk aggregation. By extension, the characteristics of S in (1) are determined by the relationship between the individual random variables, and the joint cumulative distribution function carries all information on the characteristics of the random vector.
Definition 1 (Aggregate loss). Consider n individuals, each facing a risk $X_i$ represented by a distribution function $F_{X_i}(x) = \Pr(X_i \le x)$ and a survival function $\bar{F}_{X_i}(x) = 1 - F_{X_i}(x)$; the aggregate loss of the pool is a random variable S defined as:

$S = \sum_{i=1}^{n} X_i$ (1)
Assuming that the risks $X_1, \ldots, X_n$ are independent, the distribution of S is determined by well-known convolution methods with the aid of algebraic manipulation, where the joint distribution factorizes into the product of the marginal distributions shown in (2), with statistical properties similar to the individual risks. The assumption is made to simplify calculations for a reasonably large pool, whose sum tends towards normality, since in the real world risks are never completely independent. Note that, in cases where risks are significantly dependent, the independence assumption may understate or overstate the aggregate risk.

$F_{X_1, \ldots, X_n}(x_1, \ldots, x_n) = \prod_{i=1}^{n} F_{X_i}(x_i)$ (2)
Positive dependence means that the individual $X_i$'s move in the same direction; under perfect positive dependence, the distribution of S is said to be comonotonic, with joint cumulative distribution defined in (3). The comonotonic sum yields the riskiest portfolio among all combinations of the risks $X_i$, and has been studied by many authors such as [23] [24] [25] and [26],

$F^{c}_{X_1, \ldots, X_n}(x_1, \ldots, x_n) = \min\{F_{X_1}(x_1), \ldots, F_{X_n}(x_n)\}$ (3)
Negative dependence results when the random variables $X_i$ move in opposite directions, effectively compensating each other's risks and therefore decreasing the aggregate loss S. When individual risks exhibit perfect negative dependence, the distribution of S is said to be countermonotonic, as defined in (4). The countermonotonic sum forms the lowest-risk portfolio and, by extension, an internal hedging mechanism, which is an interesting problem in risk management.

$F^{-}_{X_1, X_2}(x_1, x_2) = \max\{F_{X_1}(x_1) + F_{X_2}(x_2) - 1, 0\}$ (4)
However, research on countermonotonicity beyond two dimensions ($n > 2$) is limited by the absence of a universal mathematical definition; proposed extensions include pairwise countermonotonicity, d-countermonotonicity, joint mixability, complete mixability and $\Sigma$-countermonotonicity, as investigated by [27] [28] [29] [30] and [31].
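As a quick numerical illustration of these dependence concepts, the sketch below couples two exponential risks (an illustrative assumption; the paper's numerical example uses gamma losses) through a common uniform variable and its reflection, showing that the comonotonic sum is the riskiest and the countermonotonic sum the safest:

```python
import numpy as np

rng = np.random.default_rng(0)
u = np.clip(rng.random(200_000), 1e-12, 1 - 1e-12)

# Inverse-cdf (quantile) sampling of two exponential risks with rates 1 and 2.
def exp_ppf(p, rate):
    return -np.log1p(-p) / rate

x1 = exp_ppf(u, 1.0)
x2_como = exp_ppf(u, 2.0)                           # same U      -> comonotonic
x2_counter = exp_ppf(1.0 - u, 2.0)                  # reflected U -> countermonotonic
x2_indep = exp_ppf(rng.random(u.size), 2.0)         # fresh U     -> independent

var_como = np.var(x1 + x2_como)
var_indep = np.var(x1 + x2_indep)
var_counter = np.var(x1 + x2_counter)

# Comonotonic coupling gives the riskiest sum, countermonotonic the safest.
assert var_counter < var_indep < var_como
```

The marginal distributions are identical in all three cases; only the coupling changes, which is exactly the point of equations (3) and (4).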
3. Stop-Loss Reinsurance
Reinsurance divides an individual risk $X_i$ between an insurer and a reinsurer, either proportionally or non-proportionally; the reinsurance types are further discussed in [32]. The Broker Model illustrated in Figure 1 was coined by [33] for a decentralized insurance setting, and has a similar structure to reinsurance in traditional insurance.
Under the Broker Model, the aggregate loss S is divided amongst individuals using a Pareto-efficient and financially fair rule called Conditional Mean Risk Sharing [26]. Under this rule, participant i contributes the expected value of the loss $X_i$ brought into the pool, conditional on the total loss S experienced by all members; i.e. the contribution of each participant is the average part of the total loss attributable to the risk that participant added to the pool.
Definition 2 (Conditional Mean Risk Sharing). Let $x_1, \ldots, x_n$ be realizations of the losses $X_1, \ldots, X_n$ and s be the realization of S. There exist measurable functions $h_1, \ldots, h_n$ such that:

$h_i(s) = E[X_i \mid S = s], \quad i = 1, \ldots, n$ (5)
Conditional Mean Risk Sharing is Pareto efficient because the whole loss is allocated, as shown in (6), and financially fair because the mean of each individual contribution equals the expected value of the loss that individual brings to the pool, as in (7).

$\sum_{i=1}^{n} h_i(S) = \sum_{i=1}^{n} E[X_i \mid S] = S$ (6)

$E[h_i(S)] = E\big[E[X_i \mid S]\big] = E[X_i]$ (7)
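A minimal simulation sketch of these two properties, with two hypothetical dependent risks (a shared-shock construction of this example, not taken from the paper) and the conditional means $h_i$ approximated by binning on S:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sim = 100_000
# Two hypothetical dependent risks driven by a common shock (illustrative only).
shock = rng.exponential(1.0, n_sim)
x1 = shock + rng.exponential(0.5, n_sim)
x2 = shock + rng.exponential(1.5, n_sim)
s = x1 + x2

# Approximate h_i(s) = E[X_i | S = s] by averaging within quantile bins of S.
bins = np.quantile(s, np.linspace(0, 1, 51))
idx = np.clip(np.digitize(s, bins) - 1, 0, 49)

h1 = np.array([x1[idx == k].mean() for k in range(50)])
h2 = np.array([x2[idx == k].mean() for k in range(50)])
s_bin = np.array([s[idx == k].mean() for k in range(50)])

# Pareto efficiency (6): the allocations add up to the pooled loss in every bin.
assert np.allclose(h1 + h2, s_bin)

# Financial fairness (7): E[h_i(S)] = E[X_i], exact under this binned estimator.
weights = np.bincount(idx, minlength=50) / n_sim
assert abs(np.sum(weights * h1) - x1.mean()) < 1e-9
```

The binned estimator makes (6) hold exactly within each bin, because the within-bin averages of $X_1$ and $X_2$ necessarily sum to the within-bin average of S.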
Now, to define the stop-loss random variables, let the insurer's portion be $X_i^{I}$ and the reinsurer's portion be $X_i^{R}$; then each individual loss is a combination of the component random variables detailed in (8).

$X_i = X_i^{I} + X_i^{R}$ (8)
Definition 3 (Stop-loss random variables). Let the individual risk threshold be $d_i$, such that the insurer covers the loss up to $d_i$ and retains $X_i \wedge d_i$, while the reinsurer covers the excess $(X_i - d_i)_{+}$; the stop-loss random variables are defined in (9):

$X_i^{I} = \min(X_i, d_i) = X_i \wedge d_i, \qquad X_i^{R} = \max(X_i - d_i, 0) = (X_i - d_i)_{+}$ (9)
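The layered decomposition in (8) and (9) can be sketched numerically; the gamma parameters below (shape 0.5, scale 2, matching the mean-1, variance-2 losses of the numerical example in Section 6) and the retention value are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.gamma(shape=0.5, scale=2.0, size=100_000)  # illustrative losses
d = 1.5                                            # hypothetical retention

x_insurer = np.minimum(x, d)        # X ^ d: insurer retains losses up to d
x_reinsurer = np.maximum(x - d, 0)  # (X - d)_+: reinsurer covers the excess

# The two layers always reassemble the original loss, as in (8).
assert np.allclose(x_insurer + x_reinsurer, x)
# The insurer's layer is capped at the retention.
assert x_insurer.max() <= d
```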
Assumptions

1) Losses resulting from $X_i$ obey zero-augmented probability distributions, i.e. $\Pr(X_i = 0) > 0$ for each i.
Figure 1. Illustration of the Broker Model: (a) model visualization; (b) model working.
2) The mean and variance of the individual losses $X_i$ are finite and non-negative, that is, $0 \le E[X_i] < \infty$ and $0 \le \mathrm{Var}(X_i) < \infty$ respectively.
3) For the pool S, the retention threshold $d$ is the sum of the individual retentions, that is, there exist $d_i \ge 0$ such that $d = \sum_{i=1}^{n} d_i$.
From the definition of $X_i^{R}$, its distribution function can be derived as $F_{X_i^{R}}(x) = F_{X_i}(x + d_i)$ for $x \ge 0$. For the insurer, the distribution function of $X_i^{I}$ is derived in (10).

$F_{X_i^{I}}(x) = \begin{cases} F_{X_i}(x), & x < d_i \\ 1, & x \ge d_i \end{cases}$ (10)
When a reinsurer has no knowledge of the underlying claim distribution, the retention threshold is derived from the conditional random variable $X_i - d_i \mid X_i > d_i$. Setting $y = x - d_i$ simplifies the expression. The distribution of the excess over the threshold is given by:

$F_{X_i - d_i \mid X_i > d_i}(y) = \dfrac{F_{X_i}(y + d_i) - F_{X_i}(d_i)}{1 - F_{X_i}(d_i)}, \quad y \ge 0$ (11)
Estimation of the threshold from the reinsurer's point of view applies Extreme Value Theory (EVT) for right-tailed distributions, since EVT models extreme events using statistical tools. For instance, the eyeball inspection approach (EIA) uses the mean excess plot to determine an appropriate threshold; its disadvantage is that it tests individual risk thresholds in isolation, thereby neglecting the diversification effect in the pool. Other methods incorporate the aggregate loss function for better approximations.
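The mean excess plot underlying the eyeball inspection approach can be computed directly from a loss sample, as sketched below; the exponential losses are an illustrative assumption, chosen because their mean excess function is flat by memorylessness:

```python
import numpy as np

def mean_excess(losses, thresholds):
    """Empirical mean excess e(d) = E[X - d | X > d] for each threshold d."""
    losses = np.asarray(losses)
    return np.array([
        losses[losses > d].mean() - d if np.any(losses > d) else np.nan
        for d in thresholds
    ])

rng = np.random.default_rng(3)
x = rng.exponential(2.0, 100_000)  # memoryless: e(d) is flat at the mean, 2
e = mean_excess(x, thresholds=[0.5, 1.0, 2.0, 4.0])
```

In practice one plots `e` against the thresholds and looks for the region where the curve becomes approximately linear, which is the visual cue the EIA relies on.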
4. Quantile Risk Measure
The α-quantile risk measure is also known as value-at-risk (VaR) in financial models, and is defined mathematically in (12).

Definition 4 (Value-at-Risk). The α-quantile measure for a random variable X, where $\alpha \in (0, 1)$, is defined as:

$\mathrm{VaR}_{\alpha}(X) = \inf\{x \in \mathbb{R} : F_X(x) \ge \alpha\}$ (12)

i.e. a solution to the equation

$F_X(x) = \alpha$ (13)
4.1. Properties

Value-at-risk satisfies the following properties for a risk measure $\rho = \mathrm{VaR}_{\alpha}$:

Normalization: the risk of nothing is zero, hence $\mathrm{VaR}_{\alpha}(0) = 0$.

Positive homogeneity: the risk of a portfolio is proportional to its size, such that for any positive constant c the equation $\mathrm{VaR}_{\alpha}(cX) = c\,\mathrm{VaR}_{\alpha}(X)$ applies. By extension, let X and g(X) be real-valued random variables; if g is continuous and non-decreasing then $\mathrm{VaR}_{\alpha}(g(X)) = g(\mathrm{VaR}_{\alpha}(X))$.

Monotonicity: a random variable dominated by another carries the lower risk of the two, i.e. for two risks X and Y, if $X \le Y$ almost surely then $\mathrm{VaR}_{\alpha}(X) \le \mathrm{VaR}_{\alpha}(Y)$.

Translation invariance: the addition of a constant (or risk-free asset) to a portfolio changes the total risk by the same amount, that is, for any positive constant c, $\mathrm{VaR}_{\alpha}(X + c) = \mathrm{VaR}_{\alpha}(X) + c$.

VaR is not a coherent risk measure because it does not satisfy subadditivity in general. Under some special cases, such as elliptical distributions, VaR satisfies the property:

Subadditivity: the risk-reducing property, also known as the diversification property, where for two risks X and Y, $\mathrm{VaR}_{\alpha}(X + Y) \le \mathrm{VaR}_{\alpha}(X) + \mathrm{VaR}_{\alpha}(Y)$.
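These properties can be checked numerically with an empirical quantile estimator; the helper name and the gamma sample below are assumptions of this sketch:

```python
import numpy as np

def var_alpha(x, alpha):
    """Empirical alpha-quantile (value-at-risk) of a loss sample."""
    return np.quantile(x, alpha)

rng = np.random.default_rng(4)
x = rng.gamma(0.5, 2.0, 200_000)
alpha = 0.95

# Positive homogeneity: VaR(cX) = c * VaR(X) for c > 0.
assert np.isclose(var_alpha(3.0 * x, alpha), 3.0 * var_alpha(x, alpha))
# Translation invariance: VaR(X + c) = VaR(X) + c.
assert np.isclose(var_alpha(x + 1.0, alpha), var_alpha(x, alpha) + 1.0)
# Monotone transform: VaR(g(X)) = g(VaR(X)) for continuous non-decreasing g.
assert np.isclose(var_alpha(np.sqrt(x), alpha), np.sqrt(var_alpha(x, alpha)))
```

The monotone-transform identity holds exactly for the theoretical quantile; with 200,000 samples the empirical estimator agrees to within floating-point interpolation error.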
4.2. VaR for Comonotonic Risks
In place of the independence assumption, when the sum in (1) has a complicated dependency structure that would be too tedious to calculate, it is accepted practice to replace it with a more conservative sum under a prudential assumption. The comonotonic counterpart of S, denoted $S^{c}$, belongs to the same Fréchet class and has similar properties ([26]), but $S^{c}$ has heavier tails and hence a larger variance, resulting in higher aggregate risk.
Theorem 1 (VaR for comonotonic risks). Assume the risks $X_i$ are comonotonic and form a random vector $(X_1^{c}, \ldots, X_n^{c})$ whose sum $S^{c} = \sum_{i=1}^{n} X_i^{c}$ is also comonotonic; then:

$\mathrm{VaR}_{\alpha}(S^{c}) = \sum_{i=1}^{n} \mathrm{VaR}_{\alpha}(X_i)$ (14)

Proof. The sum is the addition of individual comonotonic variables; let U be a uniform random variable on the interval $(0, 1)$ such that:

$(X_1^{c}, \ldots, X_n^{c}) \stackrel{d}{=} \big(F_{X_1}^{-1}(U), \ldots, F_{X_n}^{-1}(U)\big)$

Now let g be a non-decreasing, left-continuous function with $g(U) = \sum_{i=1}^{n} F_{X_i}^{-1}(U) = S^{c}$; then

$\mathrm{VaR}_{\alpha}(g(U)) = g(\mathrm{VaR}_{\alpha}(U)) = g(\alpha)$

It follows, from the definitions, that:

$\mathrm{VaR}_{\alpha}(S^{c}) = \sum_{i=1}^{n} F_{X_i}^{-1}(\alpha) = \sum_{i=1}^{n} \mathrm{VaR}_{\alpha}(X_i)$ □
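A simulation sketch of Theorem 1, using exponential risks (an assumption of this example) whose quantiles are available in closed form:

```python
import numpy as np

rng = np.random.default_rng(5)
rates = np.array([1.0, 0.5, 2.0])  # hypothetical exponential risks
alpha = 0.95

# Closed-form quantiles: VaR_alpha(X_i) = -ln(1 - alpha) / rate_i.
var_individual = -np.log(1.0 - alpha) / rates

# Comonotonic coupling: every risk is driven by the same uniform U.
u = rng.random(500_000)
s_como = sum(-np.log1p(-u) / r for r in rates)

# Theorem 1 / (14): VaR of the comonotonic sum = sum of individual VaRs.
lhs = np.quantile(s_como, alpha)
rhs = var_individual.sum()
assert abs(lhs - rhs) / rhs < 0.01
```

Note that the additivity in (14) fails for independent risks, where diversification makes the VaR of the sum strictly smaller than the sum of VaRs at high confidence levels.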
5. Retention Threshold Using Quantile Measure
For a comonotonic portfolio of risks $(X_1, \ldots, X_n)$ satisfying Assumption 3, the stop-loss aggregate random variables $S^{I}$ and $S^{R}$ for the insurer and reinsurer respectively are defined as sums of the individual random variables in (9), such that:

$S^{I} = \sum_{i=1}^{n} (X_i \wedge d_i) = S \wedge d, \qquad S^{R} = \sum_{i=1}^{n} (X_i - d_i)_{+} = (S - d)_{+}$ (15)

where the second equality in each case follows from comonotonicity together with Assumption 3, with individual retentions set at a common probability level.
Reinsurance requires a fee, which in this paper is defined as the pure premium P without a loading, as shown in (16):

$P = E[S^{R}] = E[(S - d)_{+}]$ (16)

Note that P is a decreasing function of the threshold $d$: a higher threshold attracts a lower premium, and a lower threshold carries a higher premium margin.
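The pure premium in (16) can be estimated by Monte Carlo and, for a tractable stand-in aggregate loss, checked against a closed form; the exponential choice here is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(6)
rate = 0.5
s = rng.exponential(1.0 / rate, 1_000_000)  # stand-in aggregate loss S

def pure_premium(sample, d):
    """Monte Carlo estimate of P = E[(S - d)_+], the pure stop-loss premium."""
    return np.maximum(sample - d, 0.0).mean()

# For exponential S, E[(S - d)_+] = exp(-rate * d) / rate in closed form.
for d in (1.0, 2.0, 4.0):
    assert abs(pure_premium(s, d) - np.exp(-rate * d) / rate) < 0.02

# P is decreasing in d: a higher retention buys cheaper reinsurance.
assert pure_premium(s, 4.0) < pure_premium(s, 1.0)
```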
5.1. Cost Functions
Decentralized insurance has no capital requirements, therefore the total cost is the aggregate loss random variable, and any division into component variables should sum to the aggregate loss in order to satisfy Pareto optimality. Let the total cost of insurance be a random variable T; define the component costs of the insurer and reinsurer as random variables $T^{I}$ and $T^{R}$ respectively, as in (17):

$T^{I} = S^{I} + P, \qquad T^{R} = S^{R} - P, \qquad T = T^{I} + T^{R} = S$ (17)
Next, a cost model is constructed as a convex combination of the cost functions in (17), measured at their respective values-at-risk:

Theorem 2 (loss function). Let $\alpha$ and $\beta$ be the portfolio VaR levels for the insurer and reinsurer respectively, with the random variables denoting the costs of the insurer and reinsurer defined in (17); then the aggregate loss function is a convex combination of the random variables, defined in (18):

$L(d) = \theta\, \mathrm{VaR}_{\alpha}(T^{I}) + (1 - \theta)\, \mathrm{VaR}_{\beta}(T^{R}), \qquad \theta \in [0, 1]$ (18)

The parameter $\theta$ determines the division of the total loss between the insurer and reinsurer: if $\theta = 0$ then the reinsurance company carries all costs, while if $\theta = 1$ then no reinsurance arrangement applies; hence it sets the optimal aggregate cost of the pool subject to the set value-at-risk levels of the players.
5.2. Optimal Threshold
The objective of the cost approach is minimization of the total cost subject to the quantile risk measure, in order to obtain an optimal retention level $d^{*}$. Breaking (18) down into components:

$L(d) = \theta\,[\mathrm{VaR}_{\alpha}(S^{I}) + P] + (1 - \theta)\,[\mathrm{VaR}_{\beta}(S^{R}) - P]$ (19)

where the premium P is defined in (16). Define $a = \mathrm{VaR}_{\alpha}(S)$ and $b = \mathrm{VaR}_{\beta}(S)$ and rewrite the objective function (19) such that:

$L(d) = \theta\,(a \wedge d) + (1 - \theta)\,(b - d)_{+} + (2\theta - 1)P$ (20)

The optimization problem becomes:

$d^{*} = \arg\min_{d \ge 0} L(d)$ (21)
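A grid-search sketch of the optimization problem (21). The sign convention inside the cost functions (the insurer pays the pure premium P on top of its retained loss, the reinsurer receives P against the ceded loss) and all parameter values below are assumptions of this sketch, not prescriptions from the derivation:

```python
import numpy as np

rng = np.random.default_rng(7)
s = rng.gamma(shape=15.0, scale=2.0, size=100_000)  # stand-in aggregate loss S

theta, alpha, beta = 0.4, 0.90, 0.99  # convex weight and VaR levels (assumed)

def total_cost(d):
    """Convex combination of insurer and reinsurer VaR-costs.

    Assumes T_I = S ^ d + P and T_R = (S - d)_+ - P, with the pure premium
    P = E[(S - d)_+]; this sign convention is an assumption of this sketch.
    """
    p = np.maximum(s - d, 0.0).mean()
    cost_insurer = np.quantile(np.minimum(s, d), alpha) + p
    cost_reinsurer = np.quantile(np.maximum(s - d, 0.0), beta) - p
    return theta * cost_insurer + (1.0 - theta) * cost_reinsurer

grid = np.linspace(0.0, np.quantile(s, 0.999), 200)
costs = np.array([total_cost(d) for d in grid])
d_star = grid[np.argmin(costs)]  # optimal retention on the grid
```

A grid search is deliberately simple here; because the objective is piecewise linear in d between quantiles, any one-dimensional minimizer would also do.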
Naturally, since the problem aims to align the interests of both reinsurer and insurer, four cases arise from the position of $d^{*}$ relative to the insurer's and reinsurer's portfolio values-at-risk $a = \mathrm{VaR}_{\alpha}(S)$ and $b = \mathrm{VaR}_{\beta}(S)$: $d^{*} \le a$, $d^{*} > a$, $d^{*} \le b$ and $d^{*} > b$. If the optimal $d^{*} \le a$ then the pool loss is within the insurer's safety margin, while $d^{*} > a$ indicates that the optimal level might be unacceptable to the insurer, and $d^{*} \le b$ shows that the reinsurer requires (re-)insurance to manage its costs. The cases combine into three distinct ones: $d^{*} \le a \wedge b$, $a \wedge b < d^{*} \le a \vee b$ and $d^{*} > a \vee b$.
Theorem 3 (Retention Threshold). The optimal retention level $d^{*}$ is the piecewise function (22):

(22)

where the parameter of the piecewise function is defined as:

(23)

Proof. The derivative of the function (20), with the premium P as in (16), is given as:

(24)

Setting $L'(d) = 0$ leads to the equations:

(25)

For the sufficient condition for minimization, $L''(d) > 0$, recall that the derivative of the survival function is governed by the hazard function, which is characterized by its non-decreasing nature; this confirms that the threshold is a minimum. □
A special case of the model in (22) arises when the two cost functions carry equal weight, $\theta = 1/2$, so that the premium term in (20) vanishes and the objective depends on the values-at-risk $a$ and $b$ alone:

$L(d) = \tfrac{1}{2}\big[(a \wedge d) + (b - d)_{+}\big]$ (26)

Here, two cases arise, according to the ordering of $a$ and $b$: when

$a \le b$ (27)

which means that

(28)

and

(29)

and when

$a > b$ (30)

which means that

(31)

and

(32)

Therefore, the aggregate function reduces to a dependence on the values-at-risk only, as shown in (26), and the threshold depends solely on those values. In this case, the retention has an indirect relationship with the VaR of the insurer, $a$, but a direct relationship with that of the reinsurer, $b$.
6. Numerical Example
The gamma distribution was used to demonstrate the application because it possesses many useful and tractable mathematical properties; in particular, a sum of independent gamma random variables with a common rate parameter is again gamma distributed. This means that the comonotonic random variables in (8) are gamma as well, with density function in (33).

$f(x) = \dfrac{\lambda^{k}}{\Gamma(k)}\, x^{k-1} e^{-\lambda x}, \qquad x > 0$ (33)
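The moment conditions used in Section 6.1 (mean 1, variance 2) pin down the gamma parameters under the shape-scale convention; a small sketch of the conversion:

```python
import numpy as np

def gamma_params(mean, variance):
    """Shape/scale from moments: mean = k * theta, variance = k * theta**2."""
    scale = variance / mean
    shape = mean / scale
    return shape, scale

shape, scale = gamma_params(mean=1.0, variance=2.0)
assert (shape, scale) == (0.5, 2.0)

# Sanity check by simulation.
rng = np.random.default_rng(8)
x = rng.gamma(shape, scale, 1_000_000)
assert abs(x.mean() - 1.0) < 0.01 and abs(x.var() - 2.0) < 0.05
```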
6.1. Comparison
The model is a quantile-measure extension of [21], which applied variance measures for threshold determination using two approaches: a minimum-variance threshold, derived by maximizing the covariance between insurer and reinsurer in its Proposition 3.1, such that:

(34)

where the premium P is defined in (16); this was compared with results obtained by the similar approach of the correlation coefficient, a standardized measure, in its Proposition 3.3, such that:

(35)
The comparison applied a gamma distribution with mean 1, variance 2 and skewness $2\sqrt{2} \approx 2.83$, where a set of values-at-risk was chosen for the insurer and reinsurer, with pool sizes from as few as 30 individuals up to 100,000 members; Monte Carlo simulation was applied to model the retention thresholds. The simulated value-at-risk parameters started at the upper quartile (0.75) and increased in steps of 0.05. Using analytical methods, [21] obtained optimal thresholds under the covariance measure and under the correlation coefficient at the corresponding parameter values.
Results were compared with this paper's model using numerical approximation; it was noted that [21] fails to perform for small samples (fewer than 500 members), while our model provides optimal thresholds even for smaller groups, as shown in Table 1. In the special case of equal weights, the retention threshold does not depend on the survival function (it is constant across combinations of parameters), as summarized in Table 2.
6.2. Discussion
It is noted that the optimal retention took values suggesting that optimality is achieved by ceding more risk to the reinsurer. This result confirms the findings of [34] and [9] that decentralised insurance benefits from stop-loss reinsurance of any form, be it on losses below the insurer's deductible or on selected risks in the portfolio, both of which result in relatively smaller losses.
The loss function decreases with subsequent iterations (Figure 2), which suggests convergence to a true minimum; convergence also occurs within relatively few iterations as the number of risks grows, a fact consistent with the Central Limit Theorem. The variance-covariance approximation of [21] fails for small samples (fewer than 500), whereas the proposed model still produces optimal thresholds for small groups.

Table 1. Threshold determined by the two models.

Table 2. Optimal retention parameters in the special case.
Results of the model show that the threshold relates closely to the reinsurer's value-at-risk. Combined with the behaviour of larger samples discussed above, this suggests that decentralised insurance depends on efficient modelling of the reinsurer's value-at-risk. The stability offered by the risk sharing between the two parties is only beneficial if the insurer quotes the correct risk profile to the model, including the dependency structure of its portfolio.
7. Conclusions
This paper presents a model for selecting the retention threshold for dependent risks using value-at-risk, demonstrated through a numerical example. In comparison with variance measures, value-at-risk produced a better threshold for pools of all sizes; in particular, it provides reliable estimation of the retention threshold for small groups. This is an important result for decentralized insurance, which is characterized by small, closely-related groups. The fact that the model is based on an arbitrarily selected parameter rather than the full distribution function simplifies calculations and provides flexibility in the application of the model.
Possible future research may consider information asymmetry between reinsurer and insurer, where the two parties hold different beliefs about the type and/or behaviour of the aggregate loss, using a parametric, semi-parametric or non-parametric approach. That is, the distribution of loss may present different characteristics to a reinsurer, such that an optimization criterion combining the Extreme Value Theory approach provides improved approximation for dependent risks.
Acknowledgements
The authors are grateful for the comments of two reviewers who preferred to remain anonymous, whose valuable comments provided a different angle on the work. We also thank an anonymous reviewer from the University of Illinois Urbana-Champaign who helped restructure the paper for the better. This research work is supported by the Pan African University through its scholarship program.