Solvency II premium risk modeling under the
direct compensation CARD system
Giorgio Alfredo Spedicato
Via Firenze, 11
20037 Paderno Dugnano, Italy
E-mail address: spedicato [email protected]
1991 Mathematics Subject Classification. Primary 62P05; Secondary 62J12, 91B30
Professor Nino Savelli, PhD tutor
Gloria Leonardi and Stella Garnier, my actuarial supervisors at Axa Assicurazioni & Investimenti.
Abstract.
This dissertation will discuss the most relevant issues and challenges for Italian TPML practice arising from the introduction of the CARD direct reimbursement scheme regulation. Special attention will be devoted to the development of an internal model for the non-life underwriting premium risk capital charge, coherent with the new business practices that the CARD system has introduced.
The PhD thesis is divided into two parts: a description of the external environment and a detailed account of the internal model development process. The final appendix presents statistical remarks and references with respect to the special techniques used in the thesis, together with the bibliographic references.
The external environment description summarizes the historical evolution of TPML underwriting and actuarial practices during the last 20 years. A detailed description of the CARD direct reimbursement structure is then developed. The nature of the composite claim structure, a specific description of each component of claims and the regulatory context changes are discussed, with special attention to the actuarial implications for the pricing of third party motor liability coverage. The previous actuarial practice is contrasted with the new challenges introduced by the CARD scheme.
Some statistical tables coming from the official industry bureau (ANIA) and government bodies (ISVAP) will be reported to show the market-wide frequency and severity figures for TPML. These figures have been detailed by year, sub-line and component of claims where data were available.
The external environment description continues with a description, in general terms, of the Solvency II framework. Special attention has been given to the non-life premium risk module of the underwriting risk. The QIS5 and QIS4 standard formulas will be presented, even if the internal model disclosed further on has been developed with respect to the QIS4 framework. A brief theoretical description of risk adjusted measures of performance is finally reported.
The internal model development is articulated in two parts: a description of the provided data sources, along with univariate and bivariate analyses, and the development of the internal model itself. The initial data source analysis shows aggregate figures of exposure, frequency and severity by component of claims for the provided portfolio. The analysis has been detailed by accident year, sub-line of business and the most relevant provided ratemaking variables. These analyses have been introductory to the development of the internal model.
In order to develop the internal model, the premium risk capital charge has been defined using a Solvency II coherent VaR-like measure, S̃_{99.5%} − E[S̃]. A collective risk theory approach has been used to develop the total loss distribution of S̃.
Original features of the internal model are: the consideration of risk heterogeneity by clustering risks, the coherence with the direct reimbursement CARD structure, the use of GAMLSS in capital modeling, and the assumption of a 2009 steady state in terms of claim price levels and total losses.
The heterogeneity of risk, usually considered by TPML pricing actuaries when developing loss predictive models but usually not taken into account in capital modeling, has been considered by dividing the portfolio into clusters defined by the levels of a few relevant ratemaking factors. A total loss distribution has been developed on each cluster, and the outcomes of the cluster-specific distributions have been aggregated.
An underwriting risk model coherent with the CARD scheme has been implemented through two alternative approaches. In the first approach, a specific loss distribution for each component of claims has been developed within each cluster. Their outcomes have been summed up assuming conditional dependence by component of claims. Compensating forfeit amounts for suffered CARD claims distributions, and forfeit costs for caused CARD claims distributions, have been modeled by resampling. In the second approach the number of total payments has been simulated for each cluster of insureds, regardless of the specific component of claims or whether the claim was caused or suffered. The payment amount distributions have been obtained by resampling from the 2009 claims distribution.
Finally, this is the first work (to the best of my knowledge, as of December 2010) that shows the use of GAMLSS in P&C capital modeling. GAMLSS are an extension of generalized linear models that provide a mathematical structure to model the dependency of distribution parameters other than the usual mean (the only one modeled in standard GLMs) on candidate predictors. GAMLSS have been used to model the frequency of each component of claims and the severity of the components of claims directly handled by the company.
The two modeling frameworks have then been extended by testing the use of the GPD distribution to model large claims.
A total of four internal models have been developed and the corresponding capital charges have been estimated. Results show that the capital charges are comparable with standard formula results (in some cases higher and in some cases lower). The main limitations of the developed models are the difficulty of considering the yearly changes of forfeits and the treatment of claims development until ultimate (IBNR & IBNER charge).
Contents

Preface

Part 1. The CARD direct reimbursement scheme for TPML insurance

Chapter 1. Historical development of TPML actuarial practice in Italy
1. The milestones
2. "Commissione Filippi" tariff
3. Changes since 1994
4. General remarks about TPML pricing
5. Overview of TPML ratemaking actuarial practice

Chapter 2. The DR scheme in Italy
1. The new regulatory environment
2. Current actuarial literature
3. Official risk statistics about Italian TPML

Chapter 3. The Solvency II framework
1. Synthetic overview of the Solvency II framework
2. SCR non-life underwriting risk
3. Literature on underwriting risk internal models

Chapter 4. Capital allocation and profit measures
1. Overview
2. Market consistent value and capital allocation for a P&C insurer

Part 2. A CARD-coherent premium risk internal model for a TPML portfolio

Chapter 5. Preliminary data analysis
1. Description of data sources
2. Components of claim distribution
3. Basic univariate statistics

Chapter 6. CARD portfolio standard formula & internal model exemplified
1. Overview
2. The standard formula
3. Internal models
4. Discussion

Chapter 7. Conclusions
1. Final remarks
2. Disclaimer

Part 3. Appendix

Appendix A. Review of statistics and predictive modeling
1. Predictive modeling
2. Individual and collective risk theory
3. Peak over threshold extreme value theory approach

Appendix B. One way data analysis

Appendix C. GAMLSS first model relativities

Bibliography
Preface
The purpose of this PhD thesis is to introduce the actuarial issues regarding the new DR scheme in force in the Italian TPML market since 2007, the CARD system. The CARD system has been introduced in Italy along with the Bersani II law. The Bersani II law also affected the TPML insurance market by modifying the provisions of law regarding policyholders' claim history recording and the bonus-malus structure. The Bersani II provisions halted insurers' ability to classify policyholders according to previous claims experience.
From an actuarial point of view, the most important effect of CARD is that both responsible and non-responsible claims have to be considered in ratemaking, since handled non-responsible claims have an impact on the total claims cost.
A historical digression on the TPML insurance market in Italy will be outlined in chapter 1. In the same chapter the design of a TPML coverage used in Italy will be presented. The structure of the CARD direct reimbursement system will be analyzed in detail in chapter 2. Moreover, some statistical figures regarding frequency and severity under the CARD scheme will be reported in that chapter. The Solvency II framework will be presented and discussed in chapter 3, with a specific focus on non-life premium risk. Risk adjusted performance measures and capital allocation will be presented in chapter 4.
The second part of the thesis will define and develop a framework to build internal models to assess the Solvency II premium risk capital charge on a TPML portfolio under the Italian CARD direct reimbursement scheme. The internal model has been calibrated on real TPML insurance data. The employed data sets come from a medium-size P&C insurer operating nationwide in Italy. Chapter 5 will present the dataset and descriptive risk statistics. Chapter 6 will report the standard formula and the internal model. Conclusions are drawn in chapter 7.
In the appendix, appendix A reports basic remarks on the most relevant statistical techniques used in this thesis. Appendix B reports one way analyses on the applied example data set, while appendix C reports the GAMLSS outputs for the first model.
The R software [R Development Core Team, 2010] has been used for all calculations. While the R scripts can be distributed upon request, the data cannot be shared, as they are protected by a confidentiality agreement with my employer.
Part 1
The CARD direct reimbursement scheme for TPML insurance

CHAPTER 1
Historical development of TPML actuarial practice in Italy

1. The milestones
Fundamental milestones in Italian TPML actuarial practice have been:
(1) 1969. TPML insurance became compulsory for vehicle owners in December 1969, and since then it has always been kept under strict supervision.
(2) 1994. Since this date TPML insurance tariffs have been liberalized, and their rates are no longer subject to filing and prior approval by governmental bodies. Before 1994 the tariff level was set by a governmental council, the "Commissione Filippi". The Commissione Filippi team (henceforth CF) aggregated data (exposures, premiums and losses) provided by most insurance companies operating in the Italian TPML market. Since 1994 each company has been granted almost complete freedom to set the tariff level and structure. The tariff is only subject to an adequacy audit conducted by an appointed actuary, chosen by the insurer. The most relevant results of the 1994 changes have been:
• An increase in the number of ratemaking variables used, allowing deeper risk profiling.
• A general increase of the average premium, as politically based price cap restrictions operating on insurance tariffs were removed. Before 1994 the TPML line was usually operated at an underwriting loss, as political pressures kept the average tariff at a lower than adequate level.
(3) 2006. In this year DR was introduced and traditional experience rating was halted by the Bersani laws. Specific DR guidelines have been defined by rules collected under the "Convenzione Assicuratori sul Risarcimento Diretto" (henceforth the CARD scheme). Moreover, the rules of the Bersani laws affecting bonus-malus dynamics have been enacted since this date. Nevertheless, the Bersani law effect on TPML actuarial practice will not be discussed in this work. See [Conterno, 2007, Gazzetta Ufficiale, 2007] for details.
2. "Commissione Filippi" tariff
The complete analysis carried out by the CF is described in a long document which, for obvious political reasons, had a very restricted circulation and is now almost impossible to find ([Clarke and Salvatori, 1991]). Therefore we outline the main characteristics of the CF tariff according to a synthetic source, [Clarke and Salvatori, 1991].
Before 1994, every year (normally around April) the Ministry of Trade and Industry (the body in charge of regulating this sector), through the CIP (Prices Inter-Ministerial Committee) and supported by a special committee of technicians called the "Commissione Filippi", determined the new premium rates to be effective from the date stated by the CIP (1 May 1991 for the latest tariff) ([Clarke and Salvatori, 1991]).
The CF tariff was designed by aggregating data coming from 99 different insurance companies. In fact, since 1971 every company writing RCA was compelled to coinsure 2% of every risk with a body, the "Conto Consortile". The main purpose of this compulsory cession was to provide a database from which to obtain statistics for the whole market: each company, in fact, had to provide a magnetic tape containing all their exposure and claims details. This way the Conto Consortile ended up having the same details available to the original insurer.
The purpose of the CF report ([Filippi, 1993, Clarke and Salvatori, 1991]) was to estimate a reference premium for a TPML coverage. The tariff was set by the government and administered by the companies, which had to collect and file data regarding insureds' risk characteristics and loss transactions. The tariff structure was a BM structure without deductible and applied to around 98% of vehicles ([Clarke and Salvatori, 1991]).
The estimate of the pure premium relied on the best possible estimate of the frequency and the severity of TPML. Special attention was given to the severity estimate. In fact the insurers' case reserve estimates were considered unreliable, and a structured algorithm had been developed to obtain a final estimate of the claim cost.
2.1. The source of data. The tariff for year T + 2 was set using experience data earned until year T and evaluated as at mid-year T + 1. Available data were:
• earned exposures (car / years) in year T;
• claims reported in year T by AY;
• paid losses in year T by AY and case reserves by AY;
• earned premiums.
With respect to the consulted document, [Filippi, 1993], the sample used in the analysis process consisted of 30.2 million earned exposures (car / years) for the 1991 calendar year (henceforth CY). The sample was considered to be fully representative of the Italian situation. The most relevant 1991 four-wheel risk statistics were:
• claim frequency (gross of IBNR, reopens, closed without payment, etc.): 13.67%
• disposal rate for AY 1991 at 12 months of maturity: 58.04%
• severity: 2,061,000 L. (paid + case reserves)
• burning cost: 353,000 L.
• loss ratio: 79.75%
For rating purposes, all motor vehicles were subdivided into nine groups:
(1) private motor vehicles
(2) taxis
(3) buses
(4) lorries and trucks
(5) mopeds and motorcycles
(6) vehicles for special uses
(7) farm vehicles
(8) boats
(9) public boats
The BM structure was applied to sectors I and II only. For ratemaking purposes, all risks were classified according to: vehicle power, geographical zone, BM class and cover limit. Limits of liability were set on a split-limit basis: BI per person / PD per event / total claim per event. The rating structure was set on a multiplicative basis. The BM structure had been set on 18 levels since May 1991, with a reduction of 50% and an increase of 200% with respect to the base level respectively. According to the revised BM system, the proposed bonus-malus class BM_{t+1} was set according to the rule in equation 2.1, where k is the number of claims in year t and (·) denotes an indicator:

(2.1)  BM_{t+1} = (k = 0) · max(BM_t − 1, 1) + (k > 0) · min(BM_t + 3k − 1, 18)
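As an illustration, the transition rule can be transcribed directly in R; this is a minimal sketch, and the function name is mine, not part of the original tariff documentation:

  # Bonus-malus class transition under the revised CF system (equation 2.1).
  # bm: current class (1-18); k: number of claims reported in the year.
  next_bm_class <- function(bm, k) {
    if (k == 0) max(bm - 1, 1) else min(bm + 3 * k - 1, 18)
  }

  next_bm_class(10, 0)  # claim-free year: class improves to 9
  next_bm_class(10, 2)  # two claims: class worsens to 15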
The algorithm set to obtain the proposed tariff was articulated in the following points:
(1) calculating the average net pure premium;
(2) calculating the average tariff premium;
(3) calculating the rate of increase to be applied to the tariff in force;
(4) calculating the premium factors corresponding to the various rating variables considered;
(5) calculating the premium factors corresponding to the various Bonus-Malus classes.
2.2. Calculating the average net pure premium. The average net pure premium is a synonym for the risk premium, that is P = f · C. The determination of the average net pure premium relied on producing the best estimates of f and C. The formula found in ([Clarke and Salvatori, 1991]) to properly estimate the claim severity is (2.2):

(2.2)  C = Σ_{k=0}^{S} C_{D,D−k} · q_{D,D−k} · B_{D,D−k} · (G · I_{D+2+k,D} + (1 − G) · I_{D+2+k,D+1}) · (1 + i_k)^{−t_k}
where:
• C_{D,D−k} is the average settlement cost for claims occurred in year D − k and settled in year D;
• q_{D,D−k} is the average disposal rate for claims with a maturity of k years;
• B_{D,D−k} is an adjustment factor to take into account structural changes in the portfolio composition;
• I is the CPI projection adjustment;
• G represents the share of claims occurred in the year when the reference policy was underwritten;
• (1 + i_k)^{−t_k} represents the allowance for discounting, based on the income earned on reserves.
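For concreteness, the severity estimate can be sketched in R as a plain weighted sum over settlement maturities. All the inputs below are hypothetical vectors indexed by k = 0, ..., S, not the original CF figures:

  # CF claim severity estimate (equation 2.2).
  # C, q, B: settlement cost, disposal rate and portfolio adjustment by maturity k;
  # I_uw, I_next: inflation projections for claims occurred in the underwriting
  # year and in the following one; G: share occurred in the underwriting year;
  # i, t: discount rates and times by maturity.
  cf_severity <- function(C, q, B, I_uw, I_next, G, i, t) {
    sum(C * q * B * (G * I_uw + (1 - G) * I_next) * (1 + i)^(-t))
  }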
The average net pure premium then became P^N = f (1 + A) C, where A represents a statutory loading for some DCC expenses ("spese di resistenza"). According to [Clarke and Salvatori, 1991] the loading g was set to contain an implicit profit & contingency margin of 3%, and in the late years of the CF tariff it could be set by the company within a range of 25.5% - 29%.
2.3. Calculation of the average tariff premium. The average tariff premium was set according to formula 2.3:

(2.3)  P^T = P^N / (1 − b(1 − a) − (g + d))

where a is the loading for administration costs, b is the statutory UM coverage loading applied to the tariff premium net of statutory costs, g is the comprehensive expense loading and d is the loading for SOFIGEA expenses.
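As a simple numerical illustration (the loading values below are made up, not the historical CF ones):

  # Average tariff premium from the net pure premium (formula 2.3).
  tariff_from_pure <- function(PN, a, b, g, d) {
    PN / (1 - b * (1 - a) - (g + d))
  }

  tariff_from_pure(PN = 350, a = 0.05, b = 0.025, g = 0.27, d = 0.01)  # about 502.7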
2.4. Calculation of the percentage increase / decrease to be applied to tariff D + 1. The determination of the tariff P^T for year D + 2 requires determining the size of the changes to be applied to the D + 1 tariff. The average P_{D+1} tariff was determined by 2.4:

(2.4)  P_{D+1} = η (1 + j) [p (1 + h) + (1 − p)] P_D

where:
• p represents the proportion of premiums earned in D but based on the D − 1 tariff;
• h is the percentage change in the tariff between D − 1 and D;
• j is the percentage change between tariff D + 1 and D;
• η represents the bonus-malus adjustment to preserve rate adequacy. This factor is calculated according to a Markovian system and used to determine the normalized premium level according to the bonus-malus transition rules. It needs to be taken into account when determining the overall premium adjustment, as the bonus-malus system evolution leads to a disequilibrium between premium inflows and claim outflows if not properly adjusted. See [Conterno, 2007, Clarke and Salvatori, 1991] for further details.
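A direct R transcription of the roll-forward, with purely illustrative inputs:

  # Average tariff premium roll-forward (formula 2.4).
  next_avg_premium <- function(P_D, p, h, j, eta) {
    eta * (1 + j) * (p * (1 + h) + (1 - p)) * P_D
  }

  next_avg_premium(P_D = 500, p = 0.4, h = 0.03, j = 0.05, eta = 0.98)  # about 520.7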
2.5. Calculation of the relativities for the ratemaking factors. Relativities for territory, policy limit and power class were determined using a chi-square minimization approach, constraining the relativities applied on a reference premium to produce the same level of tariff premium.
3. Changes since 1994
The CF scheme represented a very complex and articulated scheme to determine the premium charged in the TPML business. This scheme could not be applied without structural changes in the current situation, for the following reasons:
• CF was implemented in a monopolistic context. It is no longer possible to pool all insurers' experience to determine a tariff level, as data exchange has been prohibited by the regulator in general terms. Each company has to carry out actuarial analyses on its own statistics, and therefore rate adequacy shall exist not only at nation-wide level, but also at specific portfolio level.
• The number of classification variables used in the current ratemaking process is continuously increasing, as insurers wish to investigate portfolio profiles to identify and attract the most profitable insureds.
• The portfolio dynamics with respect to insured claim experience (BM, claim history count and similar variables) are much more complex than in the posterior ratemaking practice of the CF era. See [Conterno, 2007, Gazzetta Ufficiale, 2007] for further details.
Since 1994 the TPML insurance market has been liberalized. The most significant change has been the removal of the required filing and prior regulatory approval of TPML rates. Moreover, the number of ratemaking variables has significantly increased. The Italian TPML actuarial market is now widely using GLM modeling [de Jong and Heller, 2008].
3.1. 2007 laws. In 2007 the Italian Parliament approved two important laws:
(1) DPR 254 [Gazzetta Ufficiale, 2006], which introduced DR in Italy by means of the CARD scheme. This PhD thesis deepens the analysis of the impact of the DR scheme on TPML practice.
(2) The "legge Bersani II" [Gazzetta Ufficiale, 2007], which altered the way the Italian bonus-malus scheme worked. The Bersani II law has allowed an insured buying or registering a car for a second time to inherit the best BM class available in his family. Moreover, the claim penalty trigger has been switched from the occurrence year to the year when the first payment is performed.
4. General remarks about TPML pricing
5. Overview of TPML ratemaking actuarial practice
Italian TPML tariffs are subject to an actuarial audit conducted by the company's appointed TPML actuary. This legal compliance provision has been enacted due to the relevant social importance of the TPML coverage. The main duty of the appointed actuary is the supervision of overall rate adequacy and compliance with other provisions of law. The overall rate adequacy analysis, in Italy named "analisi del fabbisogno tariffario", is implemented in different ways, as a consensus method has never been developed in Italian TPML actuarial practice.
TPML rate pricing and analysis may be articulated in the following tasks:
(1) Determine the average indicated rate for future new business.
(2) Determine indicated relativities for tariff variables using suitable predictive modeling techniques (almost always GLMs). Effective final relativities usually differ from indicated ones, but the total projected premium inflow shall be equal to the overall required premium amount.
(3) Simulate the future premium volume considering new business, renewals and lapses. With respect to renewals, the effect of the natural premium decrease (BM renewal business transition, ageing of insureds and vehicles) shall be considered. It must be verified whether the premium inflow is sufficient to pay the ultimate loss amount on new business and renewals, plus underwriting and operating expense costs.
5.1. The indicated rate. The indicated rate is determined by estimating the most reasonable claim frequency and claim severity for the future periods in which rates will be effective. The underlying framework is comparable to standard ratemaking as described in [Geoff Werner and Claudine Modlin, 2009].
According to [Geoff Werner and Claudine Modlin, 2009], the following steps should be performed within a classical ratemaking exercise:
(1) collect premium, exposure, loss and expense data for an adequate experience period;
(2) develop losses, premiums and exposures to ultimate, and trend the experience period values to the level they would be expected to reach when the future rates will be in force. The latter adjustment may involve applying adjustment coefficients to take into account inflationary and legal environment forces;
(3) obtain an overall rate level indication;
(4) determine appropriate relativities, constraining the experience period business mix to produce the same level of premium underlying the overall rate level adequacy.
One or two years of forward projection may be necessary, as the future policy year overlaps two calendar years and the average accident date is therefore much delayed with respect to the period in which rates are estimated. Ultimate value pure premiums take into account IBNR and IBNER effects and discounting for investment income, see [Feldblum, 1989, Geoff Werner and Claudine Modlin, 2009].
The tariff premium Π in formula 5.1 is determined by adding to the ultimate burning cost B_c charges taking into account:
• government uninsured motorist coverage (f, about 2.5%);
• expenses (general UW expenses and commissions), a variable charge (s, about 25%);
• a profit and contingency factor (q, depending on company policy).
When the ultimate burning cost is calculated, reserve discounting and profitability considerations may be included, in addition to shock losses and loss development adjustment charges.

(5.1)  P = B_c + f · Π
       Π = P + s · Π + q · Π

Solving the system gives Π = B_c / (1 − f − s − q).
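The closed form above can be checked numerically; the figures below are only illustrative (q in particular is assumed, as it depends on company policy):

  # Tariff premium from the ultimate burning cost (formula 5.1):
  # P = Bc + f*Pi and Pi = P + s*Pi + q*Pi imply Pi = Bc / (1 - f - s - q).
  tariff_premium <- function(Bc, f = 0.025, s = 0.25, q = 0.05) {
    Bc / (1 - f - s - q)
  }

  tariff_premium(400)  # a 400 euro burning cost gives a tariff premium of about 592.6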
5.2. Classification analysis. The indicated rate ensures that, on average over the entire portfolio, the charged premium adequately covers the cost of risk transfer. Classification analysis, usually performed by multivariate GLMs, ensures that the rates charged to specific profiles cover the individual cost of risk as closely as possible. With respect to territory analysis, most pricing software employs spatial statistics techniques to cluster zones with respect to the level of a properly identified spatial risk factor.
In brief, the usual steps performed in classification analysis are the following (a minimal R sketch follows the list):
(1) build a predictive model for claim frequency. The standard choice is an overdispersed Poisson GLM with log link, using exposure as offset. For each risk i an expected frequency f_i is therefore obtained.
(2) build a predictive model for claim severity. The standard model is a Gamma GLM on average cost, using the number of claims as weight. An estimate of severity c_i is therefore obtained for each risk i.
(3) obtain an estimate of the pure premium for each risk i, p_i = f_i c_i.
(4) fit a final multiplicative model on p_i using a Gamma GLM with log link. Restrictions on coefficient values can be set in this step in order to follow marketing and regulatory considerations, as explained by [Jun Yan et al., ].
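The sketch below illustrates the frequency / severity workflow just described; the data frame and column names (policies, nclaims, exposure, loss, zone, power) are hypothetical placeholders, not the thesis' data:

  # Step 1: overdispersed Poisson frequency GLM with log link and exposure offset.
  freq_fit <- glm(nclaims ~ zone + power + offset(log(exposure)),
                  family = quasipoisson(link = "log"), data = policies)

  # Step 2: Gamma severity GLM on average cost, weighted by claim counts.
  claims <- subset(policies, nclaims > 0)
  claims$avgcost <- claims$loss / claims$nclaims
  sev_fit <- glm(avgcost ~ zone + power, family = Gamma(link = "log"),
                 weights = nclaims, data = claims)

  # Step 3: pure premium per risk, p_i = f_i * c_i.
  policies$pure_premium <-
    predict(freq_fit, policies, type = "response") / policies$exposure *
    predict(sev_fit, policies, type = "response")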
A proper discussion of classification analysis goes beyond the scope of this work.
Good references for GLM analysis in P&C insurance can be found in [de Jong and Heller, 2008,
Denuit et al., 2007] and [pre, 2010].
5.3. Adequacy of current and proposed rates. The actual and projected portfolios have to be simulated to check whether the proposed new rates and the current in-force portfolio produce an aggregate premium amount able to cover prospective losses, the associated fixed and variable expenses, and a profit and contingency allowance. In the simulation stage the following key points shall be considered:
• The in-force premium evolution due to claim history (no claim discount & bonus-malus analysis), the so called "analisi del fabbisogno". See the works of Loredana Conterno and Francesco Maggina for a discussion of rate adequacy after the introduction of the Bersani II law [Conterno, 2007, Maggina, 2008].
• New business and lapse effects on the aggregate portfolio.
The most recent advances in multivariate classification are expressed in price optimization schemes. The traditional use of GLM modeling by actuaries was pure premium modeling. Nevertheless, other models can be built:
• Retention models: to simulate lapse probability with respect to insured characteristics and the commercial environment (difference between the paid premium and the proposed renewal premium, relative competitiveness of the current tariff).
• Conversion models: to simulate the probability that a quote for a certain profile will be accepted.
The models for losses, retention and conversion can be integrated to develop new business and renewal strategies that jointly consider the riskiness and the price elasticity of existing and potential customers.
CHAPTER 2
The DR scheme in Italy
1. The new regulatory environment
1.1. The introduction of the "CARD" system and its predecessors. I took inspiration from an internal technical report [AXA Actuarial department, 2009] in order to describe the CARD system to an international audience. The DR regulation has been effective on all claims occurred since 1st February 2007, by law DPR 254 2006 [Gazzetta Ufficiale, 2006]. DPR 254 2006 has strongly modified claims management in Italy by reversing the indemnification handling between responsible and non-responsible insurers for the majority of incurred claims.
The new system, named under the acronym "CARD" ("Convenzione Assicuratori Risarcimento Diretto", Convention for Insurers' Direct Reimbursement), replaces the old optional mechanisms of CID ("Convenzione Indennizzo Diretto", Agreement on Direct Reimbursement) and CTT ("Convenzione Terzi Trasportati", Agreement on Third Parties Transported). These agreements regulated the facultative DR scheme for damages suffered by non-responsible drivers and by vehicle passengers respectively.
The old CID scheme (introduced in 1978 and modified in 2004) was in fact a non-compulsory agreement between insurers. The old CID allowed insurers to indemnify directly their non-responsible insureds provided some conditions were met:
• no more than two vehicles involved, and no mopeds or agricultural machines;
• full agreement of the involved parties regarding the accident dynamics;
• property damage claims of whatever amount and (since 2004) bodily injuries to the driver up to 15,000 euro.
The most important differences between the new CARD and the old CTT scheme therefore lie in the full refund of the suffered claim cost, paid in advance by the responsible party's insurer, and in the voluntary status of the old agreement. The old CTT scheme, introduced in 2006, regulated the reimbursement of injured passengers of the non-responsible vehicle in case of a two insured vehicle accident (excluding mopeds).
The most important differences between the old and the new system are:
• In the new system, the amount received by the non-responsible party's insurer is set on a forfeit basis, instead of a full indemnification.
• Mopeds have been included in CTT.
The CARD system is quite similar to the IDA mechanism prevailing in France, but is far more extended. There is no limitation of costs for material damage (6,000 euro in France) and the invalidity level threshold for bodily injuries is 9%, compared with the 5% threshold in France. Moreover, in case the percentage of permanent invalidity is higher, damages to the vehicle can still be managed under the direct compensation system, whereas the injury of the driver is managed as a classic TPL claim.
1.2. How the CARD system works.
1.2.1. The forfeit structure. The CARD system is applied according to the following guidelines:
• No more than two vehicles are involved;
• The claim occurred in Italy;
• Both vehicles have Italian license plates;
• There was a collision between the vehicles;
• There is no necessity of an "agreed statement of facts" (modulo blu) signed by the two drivers (one signature is enough);
• A valid insurance cover is in force for both cars involved.
Each company shall promptly indemnify its clients even in case of partial responsibility. Legal limits range from 30 days, in case of PD only and no disagreement between the parties, to 90 days in case of slight BI. An industry-wide information system is used to verify the validity of insurance covers, inform the other company of the incurred claim, and assign partial responsibility according to predefined schemes in case of disagreement.
The handling company (whose client bears no responsibility) receives a predetermined sum (forfeit) from the responsible party's company. If the suffered party keeps some responsibility in the claim occurrence, the received forfeit is halved. Payments are managed through a clearing house (CONSAP) that settles inter-company positions each month, limiting the inter-insurer cash flows to the net forfeit balance.
The forfeits are reviewed yearly by a specific committee upon aggregated data gathered by CONSAP on claims costs. Forfeit revisions may consist in updates of the previous year's forfeit figures or in revisions of the main forfeit structure. Claim cost analyses are performed by territorial zone cluster and vehicle type.
For example [statistiche e studi attuariali, 2009], the 2010 forfeits were established by projecting the cost of CARD claims paid and occurred in accident years 2007 - 2008 to 30 June 2010 using the CPI. The cost adjustment has not been applied to case reserves, as they already contain a prediction of the final settlement cost. Average costs were distinguished by type of claim (property damage / bodily injury) and type of vehicle (two wheels / other). The frequencies of property damage and bodily injury within each class of vehicle have been calculated. That is:
• two wheels: property damage severity 1,875 (96.2% incidence), bodily injury severity 4,948 (39.7% incidence). That leads to a mean nationwide forfeit equal to 3,771 euro.
• other vehicles: property damage severity 1,496 (99.0% incidence), bodily injury severity 3,225 (12.1% incidence). That leads to a mean nationwide forfeit equal to 1,871 euro.
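The mean nationwide forfeit is just the incidence-weighted sum of the two severities, as the following R check shows (using the figures quoted above; small differences with the published values are due to rounding):

  # Mean nationwide forfeit as an incidence-weighted severity sum.
  mean_forfeit <- function(pd_sev, pd_inc, bi_sev, bi_inc) {
    pd_sev * pd_inc + bi_sev * bi_inc
  }

  mean_forfeit(1875, 0.962, 4948, 0.397)  # two wheels: about 3768
  mean_forfeit(1496, 0.990, 3225, 0.121)  # other vehicles: about 1871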
Provinces have been divided into three groups according to their relative severity level with respect to the national average:
(1) group 1: specific severity more than 10% above the national average severity;
(2) group 2: specific severity between -10% and +10% of the national average severity;
(3) group 3: specific severity more than 10% below the national average severity.
A single claim occurrence may comprise one or more components of claims ("partite di danno"): damages to the passengers, to the vehicle, to a pedestrian, and so on. Each component of claim is managed in a different way, and may receive a different forfeit. The possible components of a single claim ("partite di danno") are:
(1) CIDG, material damages to the vehicle or bodily injuries to the driver under 9% of invalidity, suffered by the non-responsible policyholder. For each CIDG, the insurer of the non-responsible policyholder handles the claim and receives a predetermined forfeit (CIDGF).
(2) CIDD, material damages to the vehicle or bodily injuries to the driver under 9% of invalidity, caused by the responsible policyholder to the non-responsible vehicle. The responsible party's insurer pays a predetermined forfeit (any CIDD paid corresponds to a CIDGF for the handling company).
(3) CTTG, damages suffered by the passengers of the non-responsible policyholder. The non-responsible policyholder's insurer compensates the passengers and then receives as many forfeits as passengers injured (CTTGF). The level of each forfeit depends on the amount compensated to the passenger.
(4) CTTD, damages caused by the responsible policyholder to the passengers of the other, non-responsible vehicle. In this case, the responsible policyholder's insurer pays as many forfeits as passengers injured to the company of the non-responsible driver. The level of each forfeit depends on the amount compensated to the injured passenger.
(5) NO CARD: all other components, for instance: damages to pedestrians, damages to the passengers of the responsible policyholder, bodily injuries to the driver over 9% of invalidity, claims with more than two vehicles involved, etc.
When responsible claims handled within the CARD system occur, the responsible party's company pays the predetermined forfeit. Nevertheless, by provision of law, it is not allowed to know the effective claim cost paid by the non-responsible party's company.
Moreover, when the injuries of the driver exceed 9% of invalidity, bodily injuries are treated under NO CARD rules, while property damages to the vehicle are treated under CARD rules (CID). Therefore a single claim may lead to different handling procedures.
The effective claim costs of CTTG and CIDG are usually different from the received forfeit. Therefore, a gain or loss on non-responsible claims may be realized. In particular, when the received forfeit is greater than the effective compensated cost, the total claim cost is negative.
Another characteristic of the CARD system is the "handling fee", calculated for each company as 15% of the median zone forfeit times the difference between the number of CARD claims handled and the number of CARD claims caused. This amount is calculated yearly by the clearinghouse CONSAP.

[Figure 1. Forfeit 2007 by province map]
1.3. Changes of CARD implementation through 2007 - 2010. Forfeit assignment rules and amounts have changed yearly since 2007. A unique forfeit was applied to both property damage and bodily injury claims in 2007. For each CIDG claim, the responsible driver's insurer compensated the non-responsible party's insurer with a forfeit depending on the territorial zone, as shown in figure 1. Specific CID forfeit amounts by territorial zone are reported in table 1, while specific CTT forfeits are reported in table 2.
The following rules determine the compensating forfeit for CTTG claims, summarized in formula 1.1:
• if the claim amount is lower than 500 euro, nothing is paid;
• else, if the claim amount is higher than 500 euro and lower than 5,000, a fixed amount of 3,250 less a deductible of 500 (i.e. 2,750 euro) is paid;
• else, if the claim is greater than 5,000, a fixed amount of 3,250 is paid, plus the difference between the total amount and 5,000, less a proportional deductible of 10% of the claim amount (with a deductible upper limit of 20,000 euro).
(1.1)  F = 0                                                              if X̃ ≤ 500
       F = 3250 + max(0; X̃ − 5000) − max(500; min(0.1 · X̃; 20000))       if X̃ > 500
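Formula 1.1 can be transcribed directly in R; the spot checks below reproduce the three bands described above:

  # CTTG compensating forfeit (formula 1.1); x is the compensated amount in euro.
  ctt_forfeit <- function(x) {
    if (x <= 500) return(0)
    3250 + max(0, x - 5000) - max(500, min(0.1 * x, 20000))
  }

  ctt_forfeit(300)    # below the 500 euro threshold: 0
  ctt_forfeit(3000)   # middle band: 3250 - 500 = 2750
  ctt_forfeit(10000)  # 3250 + 5000 - 1000 = 7250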
From 2008 to 2009, the forfeit for CIDG claims differs depending on whether the claim involves a bodily injury to the driver and/or a property damage. CIDG bodily injury forfeits follow the same formula as the 2007 CTTG forfeits. When a claim is composed of both damages to the vehicle and a small injury to the driver, the company of the responsible driver used to pay the other company a unique forfeit in 2007; in 2008 it paid both a "property damage" forfeit and a "bodily injury" forfeit for such a claim. Finally, the forfeit for damages to passengers was increased by 50 euro in 2008 (to 3,300 euro).
In 2009, the CIDG property damage forfeit for occurrence year 2009 was slightly changed and some provinces changed territorial group. Nevertheless, there has not been any change in the forfeits covering TPL bodily injury to the driver and passengers (same as in 2008, see above). Moreover, since January 1st 2009, claims between policyholders insured with the same company are included within the DR system (even though the forfeit is not calculated and the claim is not considered by CONSAP).
1.3.1. 2010 and 2011 changes. Forfeit rules changed again in 2010. The suffered CID compensating forfeit now depends on the class of vehicle (and the territorial zone). Moreover, property damage and bodily injuries are no longer distinguished, contrary to what happened in 2008-2009. Vehicles have been split in two groups: two wheels vehicles and others. The forfeit zone subdivision also differs between two wheels vehicles and others, and the suffered CTT compensating forfeit amount is different between two wheels and other vehicles. The two wheels compensating forfeit map by province is reported in figure 3, while the four wheels compensating forfeit map is reported in figure 2. Minor changes both in figures and in zoning occurred for the 2011 AY. Table 1 summarizes the CID forfeit structure implemented in the 2007 - 2011 AYs, while the CTT forfeit structure by AY is reported in table 2.
AY     cluster 1     cluster 2     cluster 3     split
2007   2300          2000          1800          none
2008   1670          1373          1175          BI and PD
2009   1658          1419          1162          BI and PD
2010   (4077) 2152   (3789) 1871   (3410) 1589   (two wheels) all other
2011   (4040) 2183   (3741) 1883   (3367) 1627   (two wheels) all other
Table 1. Synoptic CID forfeit structure by AY and territorial cluster (for 2010-2011, two wheels figures in parentheses, all other vehicles outside)

AY            two wheels   all other
2007 - 2009   3250         3250
2010          4011         3150
2011          3959         3143
Table 2. Synoptic CTT forfeit structure by AY and class of vehicle
Finally, a recent Italian Supreme Court ruling stated that the DR claim handling scheme cannot be made compulsory by law. Even if all Italian insurers are still applying the DR scheme, the Supreme Court ruling may lead to a revision of the insurance regulation with unpredictable final outcomes. It may even be possible that the direct reimbursement scheme will definitively be frozen.
[Figure 2. Forfeit 2010 map for vehicles other than two wheels]

[Figure 3. Forfeit 2010 map for two wheels vehicles]
2. Current actuarial literature
The current literature on the CARD system is very scarce, even three years after the introduction of the DR scheme. The existing literature still mainly comes from technical meetings. This section will outline the most relevant points stemming from the current literature (as of the beginning of 2010).
(2.1)  pp^CARD = pp^NoCard + pp^CidG + pp^CttG + pp^CidD + pp^CttD + HF
       pp^NoCard = f^NoCard · s^NoCard
       pp^CidG = f^CidG · s^CidG
       pp^CttG = f^CttG · s^CttG
       pp^CidD = f^CidD · s^CidD
       pp^CttD = f^CttD · s^CttD
       HF = 0.15 (f^CidG − f^CidD) F̄
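A sketch of the decomposition in R follows. The frequency and severity figures are placeholders for illustration, not market values, and the component severities are intended as defined in [Grasso, 2007]:

  # CARD pure premium decomposition (equation 2.1).
  # f, s: named vectors of frequencies and severities by component of claim;
  # f_bar: the mean forfeit entering the handling fee term.
  pp_card <- function(f, s, f_bar) {
    comps <- c("NoCard", "CidG", "CttG", "CidD", "CttD")
    hf <- 0.15 * (f[["CidG"]] - f[["CidD"]]) * f_bar
    sum(f[comps] * s[comps]) + hf
  }

  f <- c(NoCard = 0.020, CidG = 0.050, CttG = 0.004, CidD = 0.050, CttD = 0.004)
  s <- c(NoCard = 6800, CidG = 2300, CttG = 5000, CidD = 2000, CttD = 3250)
  pp_card(f, s, f_bar = 2000)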
A theoretical introduction to the DR scheme may be found in the papers by Galli and Savino [Galli and Savino, 2007] and Fabio Grasso [Grasso, 2007]. Galli and Savino [Galli and Savino, 2007] wrote the first academic paper about the direct reimbursement scheme. It outlined the similarities between a forfeit-based (FB) scheme and a cost-based (CB) scheme, both theoretically and using a simulation approach. They concluded that a CB scheme would lead to results similar to the FB scheme, and that the switch to a DR scheme would not drastically modify the cost repartition among insureds. The paper was written in April 2006, before the specific Italian DR scheme was put in force.
Fabio Grasso in [Grasso, 2007] wrote the first actuarial analysis of the CARD system from a pure premium analysis standpoint. The derivation of the pure premium equation as the sum of the different claim components is described in depth in [Grasso, 2007]. The CARD pure premium equation is reported in 2.1. Luigi Vannucci in [Vannucci, 2007] analyzed the CARD system's profitability and mutuality. In Vannucci's paper a simplified model of CARD suffered and caused claims was presented. Both the CID and CTT agreements were analyzed according to the 2007 regulatory environment. The purpose of the paper was to analyze the difference in financial result and portfolio volatility before and after the introduction of the DR scheme.
Sergio De Santis in [Desantis, 2006] wrote about the implications of the new DR scheme on prior ratemaking. In De Santis' paper the following approaches are suggested to handle the Italian DR scheme when performing classification analysis through GLMs:
(1) create a model for each of the different paid blocks of damage and aggregate them in a multiplicative way. It is not specified how to handle negative claim costs.
(2) use a classical frequency / severity modeling, but model only the cost of suffered claims. This approach is an approximation, as we model the expression of the burning cost

pr_j = f_j^c · c_j^s + (F̄ − c_j^s)(f_j^c − f_j^s) = f_j^c · c_j^s + ε_j

assuming ε_j negligible.
(3) model the pure premium directly, in a simple multiplicative manner, after having preprocessed negative claims by some practitioner trick (e.g. by substituting negative amounts with a small ε).
In another working paper [Cucinella, 2008], Cucinella suggests creating three separate pure premium models: NoCard, caused CARD and suffered CARD. For suffered CARD he suggests handling the forfeit as an offset. The drawback of this algorithm is that it does not lead to a multiplicative tariff.
In [Spedicato, 2009] the most problematic features of Italian DR actuarial practice were discussed, with respect to the mid-2009 business environment. The following points were underlined:
• The pure premium under the new CARD system changed dramatically for two wheels vehicles, probably due to the higher frequency and high severity of non-guilty claims.
• A Monte Carlo simulation analysis showed that, on average, the forfeits for bodily CID injuries and CTT injuries estimated by equation 1.1 underestimate the effective claim cost by 15-20%.
• A double multiplicative model may be used to produce coherent relativities. The different components of claims have to be separately modeled, added and subtracted as usual, and then a final multiplicative model has to be refit.
3. Official risk statistics about Italian TPML
The main sources of TPML insurance statistics are ANIA and ISVAP. ANIA is the official association of insurance companies, while ISVAP is the Italian insurance market regulation authority.
The statistics published by ISVAP regard average premium and average cost. In February 2010, ISVAP and ANIA published the most comprehensive analysis of the CARD components of claims handled directly by the insurer, by class of vehicle. Frequencies and severities are reported for all three accident years and evaluated at different maturities. See [Desantis, 2010a] for further details. ANIA statistics have been the main source for the introductory paper on CARD, [Spedicato, 2009].
Tables 3 - 11 come from ANIA publications, see for example [ANIA, 2010].
A time series of TPML frequency and severity is reported in tables 3 and 4, coming from ANIA publications. Figures until 2006 refer to responsible claims only; figures since 2007 refer to suffered claims (due to the DR introduction). A general decrease of claim frequency and a strong increase of severity appear to have occurred, even if year to year changes are not perfectly comparable (due to the change in the claim frequency definition: since 2007 the frequency of claims handled by the company is reported, and not the frequency of caused claims). Figure 4 shows the frequency and severity trends in the Italian market as collected in ANIA publications.
Tables 5, 7 and 9 report frequencies and severities of the components of claims handled directly by the insurer for AY 2007: NoCard, CARD caused (forfeits paid in compensation to the non-responsible party's insurer) and CARD suffered (amount of suffered CID and CTT claims before the compensating forfeit). Corresponding figures for the 2008 AY are reported in tables 6, 8 and 10. These statistics do not provide enough information to derive a pure premium (even at overall level) as:
• no corresponding forfeit amounts are reported;
• no IBNR correction is reported;
• figures reflect the current year's specific CARD regulatory environment.
Moreover, the CARD components (CID and CTT) are not distinguished in tables 7, 8, 9 and 10. The reported figures show that the suffered CARD severity is usually lower for most classes of vehicle. The most notable exception is two wheels (+105%).
The share of claims handled under the CARD scheme was 79.4% in 2009, up from 72.0% in 2007, according to the most recent publication [Sergio Desantis and Gianni Giuli, 2010]. The CARD introduction has moreover determined a relevant change in the LOB frequency definition. Until 2007 the published LOB frequency was intended as the frequency of caused claims occurred in the AY. Since 2007 the published TPML LOB frequency is intended as the frequency of claims handled directly by the company: NoCard claims and suffered CARD claims. Therefore non-responsible claims also enter the frequency.
A multivariate pure premium analysis for caused claims and suffered claims has been carried out by De Santis in [Desantis, 2006], [Sergio Desantis and Gianni Giuli, 2009], [Desantis, 2010b]. The reported synthetic effect plots show that the implied relativities of suffered and caused claims move very closely. A tune up of pre-CARD predictive models is strongly suggested, as the risk clustering has been altered by the CARD system.
The effect of the CARD system on P&C reserving is discussed in [Mieli, 2010]. Both the lack of sufficient development years' experience and the regulatory environment changes are discussed. The employment of different valuation approaches, between 2007 and more recent years on one side and 2006 and prior on the other, is stressed. Moreover, it is suggested to split the TPML reserving evaluation between CARD and NoCard claims. Assuming a full development of CARD claims after three years is considered reasonable.
3.1. Official statistical tables. These statistics derive from ANIA publications. The most recent and relevant publication of this series is [Sergio Desantis and Gianni Giuli, 2010]. Claim frequency and severity statistics for the years before CARD enforcement are reported in tables 3 and 4. They come from [statistiche e studi attuariali ANIA, 2006]. The original figures cover the years 1998 through 2006; the 2007 figures have been derived as described in [Spedicato, 2009, Hyndman and Khandakar, 2008].
The first block of statistics comes from [ANIA, 2008], itself derived from ISVAP filings. Tables 5, 7 and 9 report basic risk statistics by class of vehicle. Severity by type of claim is reported in table 11, whose data are taken from [Servizio statistiche e studi attuariali, 2008, Sergio Desantis and Gianni Giuli, 2009].
[Figure 4. Italian market frequency and severity averages 1998-2008]

YEAR   CAR    TAXI   BUS    TRUCKS   MOTORCYCLES   WORKING MACHINES   BOATS
1998   0.11   0.34   0.52   0.27     0.06          0.11               0.01
1999   0.11   0.36   0.54   0.27     0.06          0.11               0.01
2000   0.11   0.35   0.65   0.26     0.05          0.10               0.01
2001   0.09   0.30   0.59   0.24     0.04          0.10               0.01
2002   0.08   0.27   0.60   0.22     0.04          0.09               0.01
2003   0.08   0.26   0.59   0.21     0.04          0.09               0.01
2004   0.08   0.25   0.70   0.20     0.04          0.09               0.01
2005   0.07   0.22   0.46   0.16     0.03          0.08               0.01
2006   0.07   0.21   0.50   0.15     0.03          0.08               0.01
2007   0.06   0.19   0.57   0.14     0.03          0.08               0.01
Table 3. TPML claim frequency time series
YEAR   CAR    TAXI   BUS    TRUCKS   MOTORCYCLES   WORKING MACHINES   BOATS
1998   2553   2505   1641   1700     1645          1260               2881
1999   2628   2582   2195   1799     1886          1911               4320
2000   2714   2799   2099   1922     2357          2357               3665
2001   3247   3203   2384   2288     2642          1907               3637
2002   3524   3409   2285   2577     2887          1854               3834
2003   3771   3710   2583   2680     3145          2049               3249
2004   3801   4195   2615   2805     3441          2101               4181
2005   3977   3842   2951   3219     3688          2180               4556
2006   3968   4125   3917   3276     3747          2601               5099
2007   4397   4507   3780   3546     4090          2504               4837
Table 4. TPML claim severity time series
Class of vehicle     Exposure     Claim frequency   Severity
Cars                 18045630     0.024             6785
Taxis                16618        0.080             6264
Buses                58023        0.261             6277
Trucks               2797888      0.053             5321
Two wheels           3540027      0.020             5664
Digger and affines   111692       0.045             2679
Farm vehicles        932149       0.025             2575
Boats                302633       0.004             5310
Total                25804660     0.027             6173
Table 5. NoCard components of claim statistics, AY 2007
Class of vehicle     Exposure     Frequency   Severity
Cars                 23585405     0.018       7878
Taxis                19526        0.054       9600
Buses                71676        0.231       6285
Trucks               3638089      0.041       6038
Two wheels           4440928      0.016       6254
Digger and affines   142745       0.039       3280
Farm vehicles        1085926      0.027       2476
Boats                378741       0.005       7298
Total                33363036     0.021       7029
Table 6. NoCard components of claim statistics, AY 2008
Class of vehicle     Exposure     Claim frequency   Severity
Cars                 18045630     0.049             2393
Taxis                16618        0.136             2245
Buses                58023        0.299             2277
Trucks               2797888      0.096             2404
Two wheels           3540027      0.015             2167
Digger and affines   111692       0.022             2217
Farm vehicles        932149       0.002             2390
Boats                302633       0.000             3223
Total                25804660     0.048             2384
Table 7. CARD caused components of claim statistics, AY 2007
Class of vehicle     Exposure     Frequency   Severity
Cars                 23585405     0.058       2028
Taxis                19526        0.166       1856
Buses                71676        0.384       1546
Trucks               3638089      0.109       1857
Two wheels           4440928      0.020       1661
Digger and affines   142745       0.034       1520
Farm vehicles        1085926      0.001       2394
Boats                378741       0.000       2367
Total                33363036     0.057       1967
Table 8. CARD caused components of claim statistics, AY 2008
Class of vehicle     Exposure     Claim frequency   Severity
Cars                 18045630     0.060             2221
Taxis                16618        0.289             2070
Buses                58023        0.200             2111
Trucks               2797888      0.039             2189
Two wheels           3540027      0.023             4442
Digger and affines   111692       0.002             1797
Farm vehicles        932149       0.000             1939
Boats                302633       0.000             3159
Total                25804660     0.050             2354
Table 9. CARD suffered components of claim statistics, AY 2007
Class of vehicle     Exposure     Frequency   Severity
Cars                 23585405     0.067       2279
Taxis                19526        0.326       2065
Buses                71676        0.222       1804
Trucks               3638089      0.043       2130
Two wheels           4440928      0.028       4308
Digger and affines   142745       0.003       2043
Farm vehicles        1085926      0.000       2369
Boats                378741       0.000       6964
Total                33363036     0.056       2396
Table 10. CARD suffered components of claim statistics, AY 2008
Class of vehicle     Share of total CIDs   Suffered CID severity   PD CID severity   BI CID severity   CTT severity
Cars                 0.840                 1892                    1495              4115              5013
Taxis                0.004                 1912                    1731              3879              5920
Buses                0.011                 1743                    1690              4612              4915
Trucks               0.086                 1984                    1705              4584              6565
Motorcycles          0.051                 3890                    1811              5896              8900
Mopeds               0.005                 3272                    1258              4767              7168
Digger and affines   0.000                 1635                    1475              6168              8424
Total                1.000                 2007                    1531              4450              5425
Table 11. Severity of CID claim components by type of claim (PD = property damage, BI = bodily injury), AY 2007
CHAPTER 3
The Solvency II framework
1. Synthetic overview of the Solvency II framework
1.1. What is Solvency II. Solvency II is the conventional name of the European Commission directive regarding the solvency regulation framework for insurance companies. Its purpose is to make regulatory capital requirements uniform across the EU insurance market. Solvency rules will define the minimum amounts of financial resources that insurers and reinsurers must hold in order to cover the risks to which they are exposed.
Solvency II will introduce economic risk-based solvency requirements across the whole EU, more sensitive and sophisticated than in the past. A "total balance sheet approach" will be employed, that is, liabilities will be evaluated at the so-called fair value (market value, if available). A sophisticated risk sensitive approach will determine the amount of capital needed to support the portfolio of the company.
The proposed Solvency II framework has three main areas (pillars):
(1) Pillar 1 consists of the quantitative requirements (for example, the amount of capital an insurer should hold).
(2) Pillar 2 sets out requirements for the governance and risk management of insurers, as well as for the effective supervision of insurers. Moreover, Pillar 2 gives the possibility to base the Solvency Capital Requirement, partially or fully, on the results produced by an internal model.
(3) Pillar 3 focuses on disclosure and transparency requirements.
Solvency II will provide standard formulas to establish the Solvency Capital Requirement (SCR) and the absolute Minimum Capital Requirement (MCR), but insurers are invited to assess the riskiness of the underwritten portfolio on their own and to develop internal models.
The Solvency II framework has been under development since 2001. The most important issues and the general framework were studied until 2003. Since then, more technical studies have been conducted, aimed at developing the tools to quantify risk, the so-called Quantitative Impact Studies (QIS). The purpose of the QIS is to collect data and to test the proposed risk metrics on the European insurance market. Such studies are calibrated on a representative sample of insurance companies of the EU market. QIS5 [CEIOPS, 2010] is the last finalized version of the proposed risk metric modules, drafted 5th July 2010.
Solvency II is expected to be in force from 2012.
All considerations presented in this work used QIS4 [CEIOPS, 2007] as reference, as they had mostly been elaborated before the introduction of QIS5 [CEIOPS, 2010].
1.2. SCR and MCR. Insurers will have to establish technical provisions to
cover expected future claims from policyholders. The technical provisions under
the new framework should be equivalent to the amount another insurer would be
expected to pay in order to take over and meet the insurer’s obligations to policyholders. In addition, insurers must have available resources sufficient to cover
both a Minimum Capital Requirement (MCR) and a Solvency Capital Requirement
(SCR).
The SCR is based on a Value-at-Risk measure calibrated to a 99.5% confidence level over a 1-year time horizon. The distribution at risk is that of the Net Asset Value (NAV), defined as the difference between assets and liabilities under a fair value approach. The SCR covers all risks that an insurer faces (e.g. insurance, market, credit and operational risk) and will take full account of any risk mitigation techniques applied by the insurer (e.g. reinsurance and securitisation). The SCR may be calculated using either the new European standard formula or an internal model validated by the supervisory authorities.
If an insurer's available resources fall below the SCR, supervisors are required to take action with the aim of restoring the insurer's finances to the level of the SCR as soon as possible. If, despite supervisory intervention, the available resources of the insurer fall below the MCR, the insurer's liabilities will be transferred to another insurer, the license of the insurer will be withdrawn and its in-force business will be liquidated.
The QIS5 standard formula of the SCR for a generic insurer is shown in (1.1). The SCR is the sum of an operational risk charge, the (reducing) adjustment for the loss-absorbing capacity of technical provisions and deferred taxes, and the basic solvency capital requirement (BSCR). The BSCR is calculated by aggregating the sub-modules' specific capital charges (figure 1) through a correlation matrix (whose off-diagonal entries are mostly 0.25).
(1.1)
SCR = SCR_{Basic} - Adj + SCR_{OpRisk}, \qquad SCR_{Basic} = \sqrt{\sum_{i,j} \rho_{ij}\, SCR_i\, SCR_j}
QIS5 adds to the SCR standard formula an adjustment term (Adj) to provide an allowance for the risk-mitigating effect of taxes and technical provisions. Moreover, QIS5 added to the BSCR a solvency capital requirement for intangible asset risk, SCR_{int}.
The full overview of the risks considered and the relative charges is reported in figure 1.
The minimum capital requirement (MCR) represents the amount of NAV taken as the minimum threshold of reference. If the NAV falls below the MCR, the insurer's stakeholders are considered to be subject to an unbearable risk level, mandating the regulatory body to take control of the insurer. QIS5 calculates the MCR according to a VaR confidence level of 85%.
Different key figures shall be considered when the MCR is calculated. They are:
Figure 1. Solvency II SCR framework
• the absolute minimum (AMCR), that is 2.2 Mln euros for P&C-only companies, 3.2 Mln euros for L&H and 5.5 Mln euros for companies licensed for both life and P&C insurance lines.
• a relative minimum (0.25 SCR) and a relative maximum (0.45 SCR).
• a linear MCR, MCR_{linear} = MCR_{NL} + MCR_{LS}, whose addends are calculated using a factor-based formula; e.g. for the NL addend,
MCR_{NL} = \sum_j \max\left( \alpha_j\, TP_j ;\; \beta_j\, P_j \right).
The corresponding alpha and beta for the TPML line are 12% (reserves net of reinsurance recoveries) and 13% (net written premiums).
The MCR is therefore calculated as (1.2).
(1.2)
MCR = \max\left( MCR_{combined} ;\; AMCR \right)
MCR_{combined} = \min\left( \max\left( MCR_{linear} ;\; 0.25 \cdot SCR \right) ;\; 0.45 \cdot SCR \right)
The risks for which specific capital charge modules have been developed are:
(1) Market risk, the risk derived from adverse volatility of the market value of admitted assets.
(2) Credit / counterparty risk, the risk arising from credit events, e.g. the default of a counterparty. Key counterparties for this risk are reinsurers, insureds and producers.
(3) Underwriting risk, the risk arising from insurance obligations, due to the perils covered and the processes used in the conduct of business [CEIOPS, 2010]. Life, non-life and health specific underwriting risk modules have been set within the QIS5 framework. Within P&C business, underwriting risk comprises three types of insurance risk [Goldfarb, 2006], that is:
(a) loss reserve risk, arising from adverse development on prior years;
(b) underwriting risk from the current policy year;
(c) property and catastrophe risk, due to the modeling of catastrophe risk arising from some LOB-specific exposures.
It is worth remembering that the framework set by Solvency II resembles the US RBC one, where the total capital requirement is set according to formula 1.3. R_0 represents investments in insurance affiliates, while R_1 through R_5 are respectively investments in fixed income securities, equity investments, credit risk, written premium risk and reserve risk.
(1.3)
RBC = R_0 + \sqrt{ R_1^2 + R_2^2 + R_3^2 + R_4^2 + R_5^2 }
2. SCR non-life underwriting risk
2.1. Description of non-life underwriting risk modules. This paragraph will deepen the NL premium risk capital charge, the main subject of the thesis. QIS5 will be considered to present the theoretical framework, whilst the thesis calculations have been carried out with reference to the QIS4 standard formula.
NL underwriting risk is defined as the risk arising from non-life insurance obligations, generated by the perils covered and the processes used in the conduct of business.
Underwriting risk is determined by three components:
(1) the non-life premium and reserve risk submodule;
(2) the non-life catastrophe risk submodule;
(3) the lapse risk submodule.
The QIS5 premium risk definition is:
Premium risk results from fluctuations in the timing, frequency and severity of insured events. Premium risk relates to policies to be written (including renewals) during the period, and to unexpired risks on existing contracts. Premium risk includes the risk that premium provisions prove to be insufficient to compensate claims or need to be increased.
Premium risk also includes expense risk.
Reserve risk results from fluctuations in the timing and amount of claim settlements.
Lapse risk assesses the effect of embedded policyholder options possibly included in the non-life contract: early termination and renewal at pre-existing conditions. It is beyond the scope of this thesis. The standard formula assesses lapse risk as the maximum of three scenarios: permanent increase (50% more than expected), permanent decrease (50% less than expected) and mass lapse (30% of policies affected).
The catastrophe risk module assesses the risks arising from low frequency and high severity events not properly considered by the premium and reserve risk module. The QIS5 standard formula capital charge for catastrophe risk considers both scenario-based and factor-based calculations.
SCR_{nl} is calculated by integrating the charges for premium and reserve risk, catastrophe risk and lapse risk, assuming a 0.25 correlation between premium and reserve risk and catastrophe risk. The calculation is reported in formula (2.1).
(2.1)
SCR_{nl} = \sqrt{ SCR^2_{pr\,res} + SCR^2_{nl\,cat} + SCR^2_{lapse} + 2 \cdot 0.25 \cdot SCR_{pr\,res}\, SCR_{nl\,cat} }
Both a market-wide and an undertaking-specific approach may be considered. The market-wide approach applies coefficients determined from market-wide estimates, while the undertaking-specific approach applies coefficients estimated on the specific entity's data.
2.2. Premium risk analysis.
2.2.1. QIS 5 approach. QIS5 [CEIOPS, 2010] integrates premium risk and reserve risk into a comprehensive formula for the capital charge determination, while QIS4 [CEIOPS, 2007] separated the capital charges for premium risk and reserve risk.
The capital charge is of the form expressed in formula 2.2, where σ and V represent combined measures of premium and reserve standard deviations (2.4) and volume measures (2.3). The premium volume measure also considers the present value of expected premiums booked for subsequent years, P^{PP}_{lob}. The combined measure of volume allows for diversification (DIV_{pr,lob}), calculated by means of a premium Herfindahl index. Geographic diversification is not allowed when aggregating standard deviations. The correlations used in the aggregated standard deviation calculation are always positive and comprised between 0.25 and 0.5.
(2.2)
NL_{pr} = \rho(\sigma) \cdot V, \qquad \rho(\sigma) = \frac{\exp\left(z_{0.995}\sqrt{\ln(\sigma^2+1)}\right)}{\sqrt{\sigma^2+1}} - 1
(2.3)
V_{res,lob} = PCO_{lob}
V_{prem,lob} = \max\left(P_{lob}^{t,wri};\; P_{lob}^{t,ern};\; P_{lob}^{t-1,wri}\right) + P_{lob}^{PP}
V_{lob} = \left(V_{prem,lob} + V_{res,lob}\right)\left(0.75 + 0.25\, DIV_{pr,lob}\right)
(2.4)
\sigma_{lob} = \frac{\sqrt{(\sigma_{pr} V_{pr})^2 + 2 \cdot 0.5 \cdot \sigma_{pr}\sigma_{res} V_{pr} V_{res} + (\sigma_{res} V_{res})^2}}{V_{pr} + V_{res}}
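A hedged R sketch of the mechanics of formulas (2.2) - (2.4) follows: the volatilities are the TPML market-wide figures quoted below (10% premium, 9.5% reserve), while the volume measures are invented and the Herfindahl diversification factor is ignored for brevity.

# rho(sigma): the QIS5 lognormal VaR factor of formula (2.2)
rho <- function(sigma, alpha = 0.995) {
  z <- qnorm(alpha)
  exp(z * sqrt(log(sigma^2 + 1))) / sqrt(sigma^2 + 1) - 1
}
# combined premium / reserve volatility of formula (2.4)
combined_sigma <- function(s_pr, s_res, v_pr, v_res) {
  sqrt((s_pr * v_pr)^2 + 2 * 0.5 * s_pr * s_res * v_pr * v_res +
       (s_res * v_res)^2) / (v_pr + v_res)
}
s <- combined_sigma(0.10, 0.095, v_pr = 100, v_res = 80)  # invented volumes
charge <- rho(s) * (100 + 80)    # capital charge on the combined volume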
The QIS5 approach corrects the tabulated volatility coefficients by a factor that accounts for the effect of non-proportional reinsurance. The standard deviations for premium and reserve risk for TPML are respectively 10% and 9.5% in the market-wide approach. The QIS5 undertaking-specific approach considers a credibility formula that allows the use of internally calculated volatilities, as simplified in (2.5). The TPML credibility weights for the internal estimate are reported in [CEIOPS, 2010]; e.g. the 7-year weight is 51%.
(2.5)
U_{Y,lob} = V_{Y,lob}\, \mu_{lob} + \sqrt{V_{Y,lob}}\, \beta_{lob}\, \varepsilon_{Y,lob}
\hat{\mu}_{lob} = \frac{\sum_j U_{j,lob}}{\sum_j V_{j,lob}}
\hat{\beta}_{lob} = \sqrt{\frac{1}{N_{lob}-1} \sum_j \frac{\left(U_{j,lob} - V_{j,lob}\hat{\mu}_{lob}\right)^2}{V_{j,lob}}}
\hat{\sigma}_{lob} = \frac{\hat{\beta}_{lob}}{\sqrt{V_{Y,lob}}}
Other details are reported in [CEIOPS, 2010].
2.2.2. QIS 4 approach. Premium and reserve risk charges are calculated according to formula 2.6. The credibility coefficient is 0.72 (seven-year time series length) and the market-wide standard deviation coefficient is 0.09 (MTPL).
(2.6)
V_{prem,lob} = \max\left( P^t_{lob,wri} ,\; P^t_{lob,earned} ,\; 1.05\, P^{t-1}_{lob,wri} \right)
\sigma_{U,prem,lob} = \sqrt{ \frac{1}{(n_{lob}-1)\, V_{prem,lob}} \sum_j V_{prem,lob,j} \left( LR_{lob,j} - \mu_{lob} \right)^2 }
\sigma_{prem,lob} = \sqrt{ c\, \sigma^2_{U,prem,lob} + (1-c)\, \sigma^2_{M,prem,lob} }
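A minimal R sketch of the undertaking-specific calculation of formula 2.6, assuming a hypothetical loss ratio series and a normalized volume measure; the credibility coefficient and the market-wide volatility follow the values quoted above.

lr <- c(0.78, 0.82, 0.75, 0.80, 0.84, 0.77, 0.81)  # hypothetical loss ratios
v_prem <- 1                          # volume measure, normalized for the sketch
n <- length(lr)
mu <- mean(lr)                       # all-years average loss ratio
sigma_U <- sqrt(sum(v_prem * (lr - mu)^2) / ((n - 1) * v_prem))
cred <- 0.72                         # credibility for a seven-year series
sigma_M <- 0.09                      # MTPL market-wide factor
sigma_prem <- sqrt(cred * sigma_U^2 + (1 - cred) * sigma_M^2)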
3. Literature on underwriting risk internal models
3.1. A premium risk frequency - severity approach internal model.
In [Savelli and Clemente, 2008] a premium risk internal model framework is discussed. The QIS3 framework was used, which presents some differences with the [CEIOPS, 2007] framework (e.g. the perimeter of use of the credibility formula for the loss ratio standard deviation).
A multi-line insurer has been modeled in order to take into account the LOB diversification effect. For each line, the parameters of the frequency and severity distributions have been estimated from market-wide statistics. Moreover, loadings for profit and contingencies have been taken into account.
The paper aims to show an algorithm to determine the solvency capital requirement (SCR) both at single line level and at aggregate level. The SCR is defined as in (3.1),
(3.1)
SCR^{\alpha}_i = CC^{\alpha}_i - \lambda_i P_i = \left( VaR^{\alpha}_i - P_i \right) - \lambda_i P_i = VaR^{\alpha}_i - P_i \left( 1 + \lambda_i \right)
being P the pure premium. Gross premiums are defined by B_t = P_{t-1}\left[(1+g)(1+i)\right](1+\lambda) + cB_t, allowing for an inflation factor (i) and real growth (g). The VaR^{\alpha} is the one-year percentile of level α, usually 99.5%. RBC ratios are defined as SCR^{\alpha}/B.
The internal model employed to determine the total loss distribution for each LOB is based on a compound negative binomial - lognormal distribution. The compound distribution is applied separately to each LOB and then aggregated. Aggregation can be done either under a multivariate normality assumption (sole use of the correlation matrix, as in [CEIOPS, 2007]) or by copulas.
Dependency has also been modeled with elliptical copulas (using the QIS correlation matrix), as Archimedean copulas are not optimal. Simulation analysis has shown that, especially with the employment of Gaussian copulas, the capital charges are lower than those obtained with the standard model.
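To fix ideas, the following R sketch reproduces the structure just described on two invented LOBs: per-LOB compound negative binomial - lognormal totals, whose claim counts are coupled through a Gaussian copula built from correlated normal draws. All parameters are illustrative and do not come from the cited papers.

set.seed(1)
n_sim <- 10000
rho12 <- 0.25                                    # QIS-like LOB correlation
z <- matrix(rnorm(2 * n_sim), ncol = 2) %*%
  chol(matrix(c(1, rho12, rho12, 1), 2, 2))      # correlated standard normals
u <- pnorm(z)                                    # Gaussian copula uniforms
compound_total <- function(u_lob, size, mu_n, meanlog, sdlog) {
  n <- qnbinom(u_lob, size = size, mu = mu_n)    # claim counts via the copula
  vapply(n, function(k) sum(rlnorm(k, meanlog, sdlog)), numeric(1))
}
s1 <- compound_total(u[, 1], size = 50, mu_n = 200, meanlog = 7.5, sdlog = 1.2)
s2 <- compound_total(u[, 2], size = 30, mu_n = 80, meanlog = 8.0, sdlog = 1.5)
s_tot <- s1 + s2                                 # aggregate total loss draws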
Calibrations to take into account the skewness of the multivariate joint distribution have been presented in [Savelli and Clemente, 2009]. The first is a Cornish-Fisher expansion, the second is based on an empirical multiplier. Those formulas express the C^2 term of (3.1) using a ξ correction factor
C^2 = \sum_{i}^{L} \sum_{j}^{L} C_i C_j \rho_{ij} \xi_i \xi_j
that takes skewness into account. The Cornish-Fisher correction factor is
\xi_i^{CF} = \frac{6 z_{\alpha} + \gamma_{SCR}\left(z^2_{\alpha}-1\right)}{6 z_{\alpha} + \gamma_i\left(z^2_{\alpha}-1\right)}
while the empirical multiplier approach is
\xi_i^{EM} = \frac{k^{ind}_{\alpha}}{k^i_{\alpha}}, \qquad k^{ind}_{\alpha} = \frac{RBC^{ind}_{\alpha} + \sum_i^L \lambda_i P_i}{\sqrt{\sum_i^L (\sigma_i)^2}}, \qquad k^i_{\alpha} = \frac{RBC^{\alpha}_i + \lambda_i P_i}{\sigma_i}
In [Savelli and Clemente, 2008, Savelli and Clemente, 2009] it has been shown that such a model leads to a lower capital requirement than the standard QIS model, especially for medium-size insurers.
Another underwriting risk driver within premium risk is represented by the uncertainty of earned premium volumes. The average premium, new business and lapse rates may be modeled within a more complex framework. Modeling premium volumes may employ time series models; these may be further expanded by taking into account new business, lapse rates and price elasticity.
CHAPTER 4
Capital allocation and profit measures
1. Overview
Capital allocation issues, discussed in paragraph 2, represent a relevant task in P&C business, as they allow the insurer to achieve its business objectives (solvency and operational continuity) with an adequate level of confidence. The determination of the profit loading is moreover linked to capital allocation, as only an adequate profit loading allows the insurance business to be economically feasible. These techniques may lead the insurer to achieve the purpose of maximizing the value of the firm. The majority of techniques used to determine the capital allocation and the profit loading are based on a measure of the volatility of the business. Therefore an internal model framework that permits assessing the volatility of economic results may provide the input parameters for these techniques.
2. Market consistent value and capital allocation for a P&C insurer
The value of a generic business may be expressed as the sum of the run-off value plus the client value.
The run-off value represents the value of the company if it were put in run-off. It would be estimated as the difference between the market value of assets and the market value of liabilities. The market value of liabilities is represented by the best estimate of liabilities (BEL) plus a market value margin (MVM). The MVM is the additional amount that an investor would require to take on the BEL and the associated risk. Within the Solvency II framework, the MVM is based on the capital held year by year until the complete run-off of the business (SCR) in order to cover the associated risk of the liabilities (see below). It is calculated as the cost of capital of the yearly insurance risk SCR discounted at the risk-free rate.
The client value is the value generated by the future business, both the value of pure renewals and the value of new business and its renewals. A proper risk-free discount rate is used.
Profit measures in the industry are of the form ROE = return / capital, but vary according to the specific definitions of numerator and denominator. In the P&C industry some lines are riskier than others. Capital allocation across LOBs is therefore important both to ensure solvency requirements and to assess financial performance correctly. Risk-adjusted performance measures are important to properly quantify profit and contingencies in the P&C industry.
RAROC (2.1) and EVA (2.2) are two relevant risk-adjusted performance measures. RAROC represents a risk-adjusted version of ROE. A positive EVA instead represents a benchmark to assess whether a project adds value to the company: a positive EVA occurs when the project earns an after-tax income greater than the required cost of capital r_{required} \cdot C^{risk-adj}.
It is worth remembering that both income measures and capital allocation choices must be disclosed. Income measures may vary between GAAP, statutory net income, IASB (which differs from GAAP regarding reserve discounting) and economic income.
(2.1)
RAROC = \frac{income}{capital^{risk-adj}}
(2.2)
EVA = income - r_{required} \cdot C^{risk-adj}
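The two measures translate directly into R helpers; the figures in the usage lines are invented.

raroc <- function(income, capital_risk_adj) income / capital_risk_adj
eva <- function(income, r_required, capital_risk_adj)
  income - r_required * capital_risk_adj
raroc(12, 100)      # a 12% risk-adjusted return
eva(12, 0.10, 100)  # positive EVA: the project clears its cost of capital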
A complete overview can be found in [Goldfarb, 2006] and [Cummins, 2000]. A target RAROC may be the basis to evaluate a proper risk margin π_i for the analyzed LOB (see [Goldfarb, 2006]) on the basis of equation (2.3). In [Cummins, 2000] the CAPM is cited as a way to find a required rate of return. The betas for a generic line i are derived from (2.4) using liability and premium leverage ratios. The required rate of return for a generic line i is then obtained by applying the line beta to the line premium leverage ratio and reducing it by the risk-free rate applied to the funds supplied by policyholder losses.
(2.3)
RAROC_i = \frac{\left[P_i + \pi_i - E_i\right]\left(1 + i_{investment}\right) - L_i}{C_i}
(2.4)
r_E = r_F + \beta_E\left(r_M - r_F\right)
\beta_E = \beta_A\left(1 + \sum_j \frac{L_j}{E}\right) + \sum_j \beta_j \frac{P_j}{E}
r_i = -r_F \frac{L_i}{E} + \beta_i \frac{P_i}{E}
Non-risk-adjusted capital measures may be actual committed capital (book value measures of policyholder-provided capital) or the market value of equity. Risk-adjusted capital measures may be: regulatory required capital, the minimum amount of capital required by the regulator, or economic capital (henceforth EC). A broad definition of EC is:
the capital required to ensure a specified probability (level of
confidence) that the firm can achieve a specified objective over
a given time horizon.
The objective may be solvency (do not go bankrupt) or capital adequacy (support current growth objectives and continue to pay dividends). A good definition of risk capital is:
Risk capital is defined as the amount of capital that must be
contributed by the shareholders of the firm in order to absorb
the risk that liabilities will exceed the funds already provided for
in either the loss reserves or in the policyholder premiums.
Famous risk capital measures are:
• Ruin probability: the probability that a default event occurs.
• Value at Risk (VaR), with confidence level X% and time span T. It is the (1 − X)% quantile of the portfolio net asset variation between time 0 and T.
• The conditional tail expectation is closely related to VaR and is defined as E(X | X < VaR).
• Expected policyholder deficit (EPD), usually expressed as a target ratio of losses. It is defined as the ratio of the shortfall between assets and liabilities to liabilities, when assets are lower than liabilities, as reported in formula 2.5.
(2.5)
EPD_{ratio} = \frac{E\left[L - A \mid A < L\right]}{L}
The EPD represents the put option the insurer holds to give up the assets and be released from its obligations when liabilities are greater than assets. The value of the policyholder claim is therefore L e^{-rT} - P(A, L, t, r, \sigma). Another way to express the EPD ratio is P(A, L, t, r, \sigma)/L, and capital allocation manages the asset-to-liability ratio to set the EPD to a predefined target.
A specific figure for the EPD ratio or the VaR level shall be selected. The standard default rate of an AA bond has been proposed as a solvency probability reference, but this choice has some drawbacks. Other, more naive references may be found in management's preferences [Goldfarb, 2006]. The Solvency II reference VaR level is 99.5%.
A challenging aspect of the capital modeling phase is to take into account joint dependencies across risk categories. The challenge derives from the fact that the true underlying dependency structure of such extreme events is very difficult to know. According to [Goldfarb, 2006], once the single sources of risk have been modeled, three approaches can be used: copula-based methods (imposing a multivariate law on the joint distribution of the marginal variables' quantiles), the Iman-Conover method (preserving rank correlation), or aggregating risk measures by correlation matrices (see the NAIC formula).
Once the sources of capital and the total capital needed under a reference probability of ruin / VaR, say C^P, are defined, the capital may be allocated:
• according to Regulatory Risk Based Capital; but this method ignores some sources of risk, lacks theoretical foundations and ignores correlations between firm business characteristics.
• by proportional allocation: measures of risk are evaluated on single sources of risk and used as relative weights in the allocation. Each risk's stand-alone capital charge C_j is calculated; the final allocation C'_j is then set by formula 2.6.
• Incremental Allocation. This method determines the impact that each
risk source has on the aggregate risk measure and allocates the total risk
capital in proportion to these incremental amounts.
• Marginal Allocation - This method determines the impact of a small
change in the risk exposure for each risk source (e.g. amount of assets,
amount of reserves, premium volume) and allocates the total risk capital
in proportion to these marginal amounts.
• Co-measures: firm-wide risk measures are calculated, then specific LOB / source-of-risk measures are calculated subject to the firm-wide condition.
(2.6)
C'_j = C^P\, \frac{C_j}{\sum_J C_j}
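A small R sketch of the proportional rule of formula 2.6, with invented stand-alone charges:

c_stand_alone <- c(car = 21, trucks = 24, two_wheels = 35)  # stand-alone C_j
c_total <- 60                              # diversified total capital C^P
c_alloc <- c_total * c_stand_alone / sum(c_stand_alone)
sum(c_alloc)                               # equals c_total: 100% is allocated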
The capital allocation through business lines is discussed in [Cummins, 2000]
and [Goldfarb, 2006]. Marginal capital methods, Merton - Perold (MP) and
Myers - Read (MR), recognize the benefits of diversification. When businesses are
not perfectly correlated the sum of stand alone capital required is lower than the
sum of overall capital required. Both methods require to choose a generic risk
measure and to calculate the overall risk capital.
The MP method of capital allocation works as follows:
(1) calculate the capital required by combining all lines minus one, for each line in turn;
(2) the capital allocated to line i is obtained by subtracting from the overall risk capital the capital of the business comprising all lines except line i.
One problem with the MP approach is that it does not allocate 100% of capital to the business lines. The MR method allocates capital to business lines by determining the effect of very small changes in the loss liability of each line of business. [Cummins, 2000] reports a formula for the surplus-to-liability ratio of the firm when the objective is to equalize the marginal default value across LOBs (2.7). σ represents the overall volatility, while ∂p/∂s and ∂p/∂σ are the derivatives of the EPD with respect to the overall allocated capital and the company's overall volatility. The most relevant advantages of this method are the complete allocation of capital across business lines and the coherence with business operations, which are marginal in the sense of involving a continuum of small changes. Relevant disadvantages lie in the method not being simple to understand and in the estimation of the parameters.
(2.7)
s_i = s - \frac{\partial p / \partial \sigma}{\partial p / \partial s} \cdot \frac{\left(\sigma_{iL} - \sigma^2_L\right) - \left(\sigma_{iV} - \sigma_{LV}\right)}{\sigma}
Part 2
A CARD-coherent premium risk internal model for a TPML portfolio
CHAPTER 5
Preliminary data analysis
1. Description of data sources
The data sets used in the analysis come from the TPML portfolio of a mid-sized Italian insurer. They represent a sample without replacement of a relevant share of the exposures earned during the 2007 - 2009 calendar years. The analysis has been restricted to the three most important classes of vehicles for the Italian market:
(1) four wheels, Sector 1
(2) trucks, Sector 4
(3) two wheels (mopeds & motorcycles), Sector 5
It is worth remembering that four wheels, trucks and two wheels represent 71.2%, 10.8% and 13.0% of market-wide earned exposures in 2009 [Sergio Desantis and Gianni Giuli, 2010]. The remaining 5% of exposures pertains to other classes of vehicles (e.g. boats, buses, ...).
For each CY / AY the following data sets were used:
• a policies data set;
• an unaggregated data set of claims.
Exposures for the 2007 - 2009 calendar years were collected along with the corresponding accident years' occurred claims. Policies and claims were aggregated by CY (earned exposures) / AY (claims occurred in AY YYYY and valuated at 31st December YYYY).
2. Components of claim distribution
Pictures 1 to 3 report average loss distribution of all possible components of
claim split by accident years. A kernel smoother has been added to the distribution
plot in each panel. The claim cost distributions come from four wheels portfolio and
shall be considered as example. Classical positively skewed continuous distributions
characterize only NoCard, CidG and CttG as they represent amounts paid in full
by the insurer.CidD, CttD, CidGF and CttGF show peaks of non - null probability
corresponding to forfeit for material damages. Therefore they cannot be treated
with usual inferential statistics and predictive modeling techniques.
Figure 1. Responsible components of claim distribution, AY 2007-09
Figure 2. Non responsible CID component of claim (and corresponding forfeit) distribution, AY 2007-09
Figure 3. Non responsible CTT component of claim (and corresponding forfeit) distribution, AY 2007-09
3. Basic univariate statistics
3.1. General remarks. Data available from the insurer data set were:
• policy id data;
• amounts and numbers of claims by component of claims;
• earned exposure in terms of car years in the reference calendar years;
• ratemaking variables, distinguished by class of vehicle.
The following ratemaking variables were available for cars:
– age crossed by sex (AGESEX);
– POWER (hp equivalent);
– FEED;
– ZONE (2010 forfeit zone).
The following ratemaking variables were available for two wheels:
– AGE;
– ENGINE VOLUME (in cm3);
– ZONE (2010 forfeit zone).
The following ratemaking variables were available for trucks:
– weight crossed by use (WEIGHTUSE);
– AGE;
– ZONE.
The provided variables represent only a small subset of the classification factors currently available on TPML coverages. Relevant exceptions were:
• variables regarding past claim history: standard BM class and claims within previous calendar years;
• detailed risk localization (e.g. ZIP code);
• refined make and vehicle symbol variables;
• deductible, policy limit and payment frequency.
The reasons for these exceptions were:
• the thesis aim is not to create a commercial TPML tariff;
• acknowledging claim history requires additional modeling efforts;
• the algorithm of the internal models requires a number of simulations that increases multiplicatively with the number of levels of each additional variable. The computation time is already relevant even with a small number of clusters.
The chosen variables are well known strong predictors of frequency or severity, as actuaries specialized in TPML pricing know [Gigante and Picech, 2004], [Desantis, 2006]. While all of them contribute significantly to the burning cost, not necessarily each of them has been found significant in each component of claim. Chapter C will report model result logs along with type III p-values testing the significance of the analyzed variables (smaller p-values indicate higher explanatory power).
Appendix chapter B shows one-way analyses for the most relevant variables by class of vehicle. Frequency, severity and burning cost of each component of claims are derived. Four wheels statistics are reported in tables 1 - 25, trucks risk statistics in tables 26 - 45, and two wheels risk statistics in tables 46 - 65.
For each ratemaking variable, earned exposures, frequency, severity (where applicable split between suffered amount and compensating forfeit) and burning cost are reported. The sum of NoCard, CidG, CttG, CidD and CttD less the compensating forfeits would result in a burning cost that, corrected by loss development factors, expenses and profit charges, would lead to a pure premium. We have to stress that only multivariate analysis would lead to meaningful relativities, but the reported risk statistics are useful for preliminary analysis and exploratory purposes.
In general we see that the NoCard frequency decreases between 2007 and 2009 for each class of vehicle.
Four wheels risk statistics analysis leads to the following considerations:
• YEAR: the pure premium of the suffered components of claims is usually less than 10 euros. The frequencies of the CARD components of claims tend to increase between 2007 and 2009. This consideration is valid for all components of claims.
• AGE and POWER are relevant predictors of risk propensity. The pure premium of all components of claims decreases as the insured's age increases, and it is positively dependent on POWER, more or less for all components of claims.
• FEED: gasoline cars are less risky than other fuels, more or less for all components of claims.
• ZONE: zone 1 is the most risky, zone 3 the least risky. This consideration is valid for all components of claims.
Trucks risk statistics analysis leads to the following considerations:
• No definite trend in pure premium appears, except for the NoCard component (due only to a severity increase).
• The burning cost is higher for corporations than for persons (AGESEX).
• The burning cost increases both with weight and for "conto terzi" use (WEIGHTUSE).
• Zone 1 is the most risky, zone 3 the least risky (ZONE).
Two wheels risk statistics analysis leads to the following considerations:
• The CidG burning cost decreases strongly in 2008 (YEAR).
• The CidD burning cost is always lower than the CidG one (two wheels non-responsible claims' cost is always higher than the compensating forfeit).
• Engine volume is a strong predictor, but the dependency is not always homogeneous (ENGINEVOL).
Figures 4 - 6 show the one-way pure premium for some relevant variables and randomly chosen components of claims, as an example.
Figure 4. Four wheels one-way pure premium for the NoCard component of claims by AGESEX
Figure 5. Trucks one-way pure premium for the CidD component of claims by WEIGHTUSE
Figure 6. Two wheels one-way pure premium for CidG by ENGINE VOLUME
CHAPTER 6
CARD portfolio standard formula & internal model exemplified
1. Overview
In the current chapter we discuss the premium risk capital charge for a mono-line insurer operating in the Italian TPML market. The experience period used to calibrate the model consisted of AY/CY data from 2007 to 2009. The capital charge in the proposed internal model has been estimated assuming a 2009 steady state. The premium risk capital charge has also been determined with the QIS4 standard formula for comparison purposes.
The modeling approach used in developing the internal model framework consists in defining an appropriate distribution for the portfolio total losses, S̃. The premium risk capital charge has therefore been estimated using a VaR-like approach, as in formula 1.1. Portfolio total loss figures are generated by Monte Carlo sampling for a relevant number of iterations (between 10,000 and 50,000, depending on model complexity). Each simulation represents the total loss amount of a portfolio covering around one million vehicle-years. Allowance for ULAE, IBNR and IBNERs has been made using non-stochastic correction coefficients.
(1.1)
NL_{prRisk} = \tilde{S}_{99.5\%} - E\left[\tilde{S}\right]
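Given a vector of simulated portfolio totals, the charge in (1.1) is a one-liner in R; s stands for the Monte Carlo draws of S̃, e.g. as produced by the cluster-level sketch after formulas 1.3 - 1.4 below.

nl_pr_risk <- function(s, alpha = 0.995) quantile(s, alpha) - mean(s)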
The proposed internal model takes into account portfolio risk heterogeneity when modeling the total loss distribution. Insureds have been assigned to K subgroups defined upon the levels of a few relevant ratemaking variables. A specific total loss distribution has been defined within each cluster and then summed up to determine the overall portfolio total loss, as in formula 1.2.
(1.2)
\tilde{S} = \sum_{j=1}^{K} \tilde{s}_j
s̃j = s̃N oCard + s̃CidG + s̃CidD + s̃CttG + s̃CttdD + 0.15 ∗ F̄ ñCidG − ñCidD
oCard
ñN
j P


N oCard


s̃
=
C̃jN oCard



j=0



ñCidG

jP


CidG
 s̃
=
C̃ CidG − F̃ CidG C̃ CidG



j=0



ñCttG
jP s̃CttG =
C̃ CttG − F̃ CttG C̃ CttG


j=0



ñCidD

jP


 s̃CidD =
F̃ CidD



j=0



ñCttD

jP


CttD

s̃
=
F̃ CttD

j=0
(1.3)
j
=
P
Ci + 0.15F̄ (ñCidG − ñCidD
i=1...Ñ
(1.4)
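The following R sketch illustrates one possible Monte Carlo reading of formula 1.3 for a single cluster. In the actual model the frequencies and severities come from the fitted GAMLSS models; here they are replaced by invented negative binomial and gamma draws, and the forfeit is treated as a flat amount, a simplification of the real forfeit structure.

set.seed(2)
f_bar <- 2000                        # flat compensating forfeit (invented)
draw_cluster <- function(exposure = 10000) {
  freq <- c(NoCard = 0.02, CidG = 0.05, CttG = 0.002,
            CidD = 0.045, CttD = 0.002)          # invented frequencies
  n <- sapply(freq * exposure, function(m) rnbinom(1, size = 5, mu = m))
  s_nocard <- sum(rgamma(n["NoCard"], shape = 2, scale = 2500))
  s_cidg <- sum(rgamma(n["CidG"], shape = 2, scale = 1000) - f_bar)
  s_cttg <- sum(rgamma(n["CttG"], shape = 2, scale = 3000) - f_bar)
  s_cidd <- n["CidD"] * f_bar        # debtor side: forfeits paid out
  s_cttd <- n["CttD"] * f_bar
  handling <- 0.15 * f_bar * (n["CidG"] - n["CidD"])   # handling fee term
  s_nocard + s_cidg + s_cttg + s_cidd + s_cttd + handling
}
s_j <- replicate(10000, draw_cluster())   # total loss draws for cluster j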
Two classes of internal models were defined:
(1) The first internal model class develops the total loss distribution for each component of claims, as in formula 1.3. These distributions are thereafter combined to determine the total loss distribution. The main advantage of this model consists in explicitly modeling the risk sources of each specific component of claims. The main disadvantage is that within each cluster the components of claims have been assumed independent.
(2) The second internal model class develops the total loss distribution by convolution of the frequency and severity of total payments, as in formula 1.4. Total payments are defined as the sum of all components of claims costs and compensating forfeits. Relevant advantages with respect to the first class are that:
(a) there is no need to deepen the modeling stage to the component of claims level;
(b) modeling the total payment amount overcomes the assessment of the dependency between components of claims.
The most relevant disadvantage lies in obtaining the total payments distribution by resampling from the 2009 total payment samples, even if stratified by class of vehicle and forfeit zone. Therefore it is not possible to assess how the cost of claims varies by risk characteristics, and the model would be biased if the business mix changed relevantly within variables not taken into account in the sampling stratification. As an alternative to resampling, regression models with an underlying marginal distribution defined on the whole real domain could have been used; the skew normal or skew-t distributions would be examples [Genton, 2003]. Even if the theoretical approach is appealing, this choice has been abandoned, as skew-t regression with the GAMLSS package [Rigby and Stasinopoulos, 2005] was slow and definitively failed to converge. Another choice would have been a special case of the Tweedie distribution, but software for its estimation is still unavailable (see appendix A for details).
Both classes of models allow for risk heterogeneity, at least at the frequency level. The following ratemaking variables were selected for the different classes of vehicles:
• four wheels: territory, feed, engine power, sex crossed with age;
• trucks: territory, age, weight crossed with vehicle use;
• two wheels: engine volume, territory and age.
Continuous variables have been split by quartiles to determine the initial levels on which to evaluate GAMLSS relativities.
The first class of internal models determines for each cluster a total loss distribution modeling the CARD components explicitly. That is, for each i-th element in cluster j, a distribution for NoCard, CidG, CttG, CidD and CttD is defined; the handling fee is finally added. Moreover, the central tendency and dispersion indexes of each component of claim's frequency and severity are determined by the use of GAMLSS [Stasinopoulos et al., 2008]; an illustrative call is sketched below. GAMLSS models extend the GLM framework by allowing the joint modeling of location and shape parameters. Therefore both mean and dispersion may be assessed by choosing a marginal distribution and building a predictive model using the ratemaking factors as independent variables. The risk heterogeneity is modeled as the distribution of frequency and cost of claims changes between clusters as a function of the levels of the ratemaking factors underlying the analyzed clusters. This approach moreover has the advantage of dealing with a portfolio with a varying business mix, as the risk model does not change when the exposure of the portfolio clusters is rebalanced.
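A minimal example of the kind of gamlss() call described above: a negative binomial frequency model with the dispersion (sigma) driven by a single predictor. The data set and response names are hypothetical; the factors are those used for cars.

library(gamlss)
freq_fit <- gamlss(n_claims ~ ZONECARD + AGESEX_REC + POWER + FEED +
                     offset(log(exposure)),       # earned exposure as offset
                   sigma.formula = ~ ZONECARD,    # single dispersion predictor
                   family = NBI, data = cars_nocard)
AIC(freq_fit)   # the index used in the stepwise selection described below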
Models have been calibrated with respect to the AIC goodness of fit index, as described in the statistical appendix A. During the first phase, we started modeling the mean of the risk factor (frequency and severity of the component of claim) by adding all available variables for the chosen class of vehicle and testing whether the removal of each one lowered the AIC index. After having selected the most parsimonious model, we completed it by testing which variable among the ones used in the mean model would further decrease the AIC when inserted in the dispersion model. Therefore the dispersion parameter model has only a single independent predictor, as done in [de Joung et al., 2007]. This choice has been employed to avoid excessive complexity and a reduction in the models' interpretability.
The conditional distributions for frequency and cost of claims were the negative binomial and the gamma, both distributions using a logarithmic link. Even if more non-standard distributions could have been chosen (e.g. ZIP, inverse Gaussian and Weibull), the choice of negative binomial and gamma has been led by the need to avoid numerical convergence problems and to maintain a link with classical ratemaking practice, where log-link distributions are used.
Moreover, AY has always been inserted as a factor (even if statistically not significant) to absorb year-to-year inflation and year-specific legal context effects, as suggested in [pre, 2010].
Figures 1 - 24 show the parameter relativities with 95% confidence bands. During a standard ratemaking process, levels of each ratemaking factor with statistically equal coefficients are merged together until the most parsimonious model is built.
GAMLSS modeling has been performed on all frequency models and all non-forfeit cost of claims models. Forfeit costs of claims, and total payments in the second group of models, have been modeled through resampling from the empirical distribution stratified by class of vehicle and territorial forfeit zone.
Finally, both models come in two variations: one version does not split claims between attritional and large claims, the other does so and models large claims by use of the GPD.
2. The standard formula
2.1. The NL premium risk capital charge.
2.1.1. Review of capital charge calculation. Premium risk is determined according to the QIS4 technical report instructions [CEIOPS, 2007]. Neither reserve risk nor geographic diversification is taken into account in the proposed model.
The data sources described in chapter 5 allow the input parameters of the SCR non-life underwriting risk module to be determined. The provided data allowed the calculation of the loss ratio according to the QIS4 definition:
The year y loss ratio is defined as the ratio for year y of incurred claims in a given LoB over earned premiums, determined at the end of year y. The earned premiums should exclude prior year adjustments, and incurred claims should exclude the run-off result; that is, they should be the total for losses occurring in year y of the claims paid (including claims expenses) during the year and the provisions established at the end of the year.
The historical data length, eight years, allows the use of the credibility formula (2.1), where the corresponding standard formula credibility coefficient is c = 0.67 [CEIOPS, 2007]. The historical all-years average loss ratio is not reported for confidentiality reasons, but the historical LR standard deviation figures at 6.06%.
(2.1)
\sigma_{cred} = \sqrt{ c\, \sigma_{int}^2 + (1-c)\, \sigma_{ext}^2 }
2.1.2. Capital charge amount for the portfolio. Seven years of loss ratios (2002 - 2009) were available to estimate the premium risk capital charge according to the QIS4 undertaking-specific formula, as in 2.2.

(2.2)
V_{prem,lob} = \max\left( P^t_{lob,wri} ,\; P^t_{lob,earned} ,\; 1.05\, P^{t-1}_{lob,wri} \right)
\sigma_{U,prem,lob} = \sqrt{ \frac{1}{(n_{lob}-1)\, V_{prem,lob}} \sum_j V_{prem,lob,j} \left( LR_{lob,j} - \mu_{lob} \right)^2 }
\sigma_{prem,lob} = \sqrt{ c\, \sigma^2_{U,prem,lob} + (1-c)\, \sigma^2_{M,prem,lob} }
The loss ratio figures, along with earned premiums, are not reported for confidentiality reasons. The historical LR standard deviation figures at 6.06% according to formula 2.2. Therefore:
• the NL premium risk capital charge is 27.81% of the most recent year's earned premiums using the market-wide approach;
• the NL premium risk capital charge is 18.16% of the most recent year's earned premiums using the undertaking-specific approach (as the credibility-weighted LR standard deviation is 3.87%).
3. Internal models
3.1. Modeling details of first class, first version internal model.
3.1.1. Overview. A portfolio total loss figure is determined by applying the GAMLSS frequency and severity models to the prospectively defined portfolio. The loss distribution of the MTPL portfolio is modeled by convolution of a frequency distribution and a severity distribution. The collective risk theory approach is applied within each specific cluster.
Modeling the TPML loss distribution under the CARD system means developing specific compound distribution analyses for each component of claims [Grasso, 2007]. Moreover, the approach we propose takes into account the heterogeneity of the TPML portfolio explicitly. TPML risks pay premiums that differ by insured characteristics, determined by appropriate relativities estimated by GLM. An extension of GLM, GAMLSS [Stasinopoulos et al., 2008], will be used, which allows the joint modeling of location and shape parameters for mean and dispersion. The different relativities of the used variables determine a cluster, on which the mean and dispersion parameters for frequency and severity are estimated. This approach moreover has the advantage of appropriately modeling a portfolio with a varying business mix.
The typical use of GLM in personal lines ratemaking is to directly model the claim severity weighted by the number of claims using a gamma log-link GLM. [Tim Carter et al., 1994] and [Geoff Werner and Claudine Modlin, 2009] suggest some adjustments to be applied to the raw claim amounts before their use in the regression analysis. A high percentile is chosen, and the claim amounts capped at that limit and in excess of it are calculated. The severity model is performed on the capped claim amounts multiplied by the ratio of the sum of ground-up losses to the sum of capped losses. Capping is used as the estimated relativities are strongly influenced by the effect of shock losses.
While directly modeling the average severity leads to an unbiased estimate of the risk premium, it is not adequate if the purpose is the estimation of the total cost distribution, as the assessment of moments higher than the first would be incorrect. Therefore we used a gamma conditional distribution on the cost of each claim instead of the average cost of claims. We have moreover decided to split claims between attritional claims and shock claims. The economic rationale is to separate the modeling of common (attritional) claims from the modeling of high amount (large) claims. Attritional claims modeling takes into account risk heterogeneity. Large claims modeling considers a general frequency rate and a loss severity analysis performed under the extreme value theory (EVT) framework. A variation of the framework will also be reported, where attritional and large losses receive no different treatment.
3.1.2. Attritional claims modeling. Figures 1 - 24 in appendix chapter C show the effect plots of the GAMLSS relativities for all components of claims modeled with GAMLSS. The effect of the classification variables on frequency and severity by component of claims can thus be evidenced. The reported relativities generally show effects consistent in direction and magnitude with what can reasonably be expected by an underwriting / actuarial professional. Examples of GAMLSS relativity output are shown in figures 1 and 2.
Figure 1. NoCard frequency model for cars
Figure 2. NoCard severity model for 2Ws
3.1.3. Large claims modeling under the CARD scheme. The CARD DR scheme makes it necessary to conduct separate modeling for the claims handled directly by the insurer. Therefore separate modeling of NoCard, CidG and CttG large claims is needed, together with assumptions about the forfeits for large claims.
The consistency of parameter estimation increases with the amount of data used in the fitting process. On the other hand, if the threshold is set excessively low in order to obtain a sufficient number of data points, the asymptotic distributional convergence to the GPD may not hold.
A way to increase the available data is to pool claims from different AYs. Accident years 2007, 2008 and 2009 CidG, CttG and NoCard claims are available; nevertheless, only years 2008 and 2009 share the same external environment conditions allowing them to be pooled together.
Claim inflation has been taken into account by the methodology found in [Brickman et al., 2005]. In that article, ratios of central percentiles by year of occurrence are suggested to model severity inflation. According to [Brickman et al., 2005], when positively skewed distributions are analyzed, the use of central percentiles instead of means is preferable, as their estimators are relatively more efficient than the mean estimator and their use allows differentiating inflation by claim size. The claim inflation distribution is reported in figure 3.
In fact the relative efficiency ratio of the variance of the percentile estimator with respect to the variance of the sample mean is
r = \frac{p(1-p) / \left( n f^2(x_{(p)}) \right)}{\sigma^2 / n}
E.g., with respect to the 50th percentile, under the normal distribution the relative efficiency ratio is greater than one (0.5 π), while under the lognormal distribution this ratio is lower than one: r = \frac{\pi \sigma^2}{2 e^{\sigma^2} \left( e^{\sigma^2} - 1 \right)}. Therefore the estimate of the median is more efficient than the estimate of the mean. Claim inflation may vary by percentile level. Usually the relative efficiency decreases the higher the percentile estimated. According to simulations on our data set, for CidG, NoCard and CttG the percentile estimates are more efficient than the mean estimates up to the 90th percentile.
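The relative efficiency claim is easy to check by simulation in R; the lognormal parameters below are arbitrary.

set.seed(3)
est <- replicate(5000, {
  x <- rlnorm(1000, meanlog = 7, sdlog = 1)
  c(median = median(x), mean = mean(x))
})
var(est["median", ]) / var(est["mean", ])  # well below 1 for the lognormal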
The following subparagraphs report the details of extraordinary losses modeling, while table 4 reports the final estimates of the GPD parameters.
Table 1. NoCard single losses statistics by AY
year    N        Mu      Sd       Skewness   Median   q90     q99
2007    25397    4206    25922    26         1077     4215    34284
2008    20520    5401    29495    18         1396     4334    50773
2009    17862    5828    34287    19         1354     4535    52527
Table 2. CidG single losses statistics by AY
year    N        Mu      Sd      Skewness   Median   q90     q99
2007    40240    1839    2643    9          1077     4215    12196
2008    45892    1920    2611    5          1220     4334    11944
2009    49767    1958    2710    7          1226     4535    12074
Table 3. CttG single losses statistics by AY
year    N       Mu      Sd       Skewness   Median   q90     q99
2007    2054    6285    30307    26         3800     9221    24658
2008    2301    5811    24991    21         3500     8775    25000
2009    2509    5089    16330    26         4000     9000    20886
Table 4. GPD parameter estimates for component of claim tail distributions
              noCard        cidG         cttG
proportion    0.01          0.00         0.01
threshold     112000.00     60000.00     110000.00
scale         286802.78     26420.92     563148.02
shape         0.04          0.04         0.07
Tables 1 - 3 show dispersion and location statistics for the suffered claim components by accident year. To increase the efficiency of the GPD parameters, losses of different years need to be pooled together. Combining different years of losses requires putting claim costs on level: all claims have been put on level with respect to the 2009 AY. The measure of inflation considered is the ratio of the 2009 to 2008 90th percentiles. The reasons for this choice are the following:
• There is empirical evidence that the inflation rate is not constant by amount of loss. It is reasonable that the year-to-year variation of the 90th percentile may be an inflation measure more suitable for GPD-modeled claims than the yearly variation of the mean / median of the distribution.
• Higher percentiles are not suitable candidates as references to model inflation, as their relative efficiency decreases dramatically. The relative efficiency of the percentile estimator variance to the mean estimator variance is always lower than one for percentiles below the 95th on the CidG, CttG and NoCard claims of 2008.
With respect to NoCard losses, data analysis suggests not to pool 2009 losses with 2008 losses. The number of claims is nevertheless large enough to properly model the loss distribution tail. The tcplot and the mean residual life plot (figure 4) show that a threshold of 112,000 euro may be reasonable. Figure 5 shows the loss distribution p-p plot fit, indicating that the fitted distribution may be a reasonable model for 2009 losses in excess of 112,000 euro.
With respect to CidG losses, data analysis suggests pooling 2009 losses with 2008 losses, but choosing the 75th percentile as the reference to model inflation. The tcplot and the mean residual life plot in figure 6 show that a threshold of 22,250 euro may be reasonable. Figure 7 shows the loss distribution p-p plot fit, indicating that the fitted distribution may be a reasonable model for the 2009 22,250 euro excess.
Finally, a GPD extraordinary losses fit is not available for CttG due to the very low number of claims beyond the candidate threshold; the corresponding graphs are reported for completeness in figure 8.
The resulting total loss distribution is reported in figure 9.
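As an illustration, exceedances can be drawn from the fitted GPD by inverting its distribution function; the parameters are the NoCard estimates of table 4, while the function itself is our own sketch, not the thesis code.

rgpd_excess <- function(n, threshold, scale, shape) {
  u <- runif(n)
  threshold + scale * ((1 - u)^(-shape) - 1) / shape  # GPD quantile function
}
large_nocard <- rgpd_excess(100, threshold = 112000,
                            scale = 286802.78, shape = 0.04)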
3.2. The other models.
3.2.1. First class, second version. The second version of the first class of models does not split claims between attritional and large losses. All claims have been modeled by fitting GAMLSS frequency and severity predictive models on all components of claims. Figure 10 shows the total loss distribution for this model.
3.2.2. Second class, first version. The second class of models applies collective risk theory by modeling the frequency and cost of claims of total payments, defined as the sum of the insurer's payouts including forfeits. The structure of these models allows a higher number of simulations (50,000) to run in a time frame comparable with the other models. Figure 11 shows the total loss distribution for this model.
3.2.3. Second class, second version. The second version of the second class of models differs from the first one in that total losses have been split between attritional total payments and large total payments. Whilst attritional total payments are modeled through GAMLSS on total payment frequencies and resampling from 2009 empirical subsamples, large losses are sampled from a GPD. The large claim threshold has been set equal to 130,000 euro. Figure 12 shows the total loss distribution for this model.
Figure 3. NoCard GPD fit
Figure 4. NoCard tcplot and mean residual life plot
Figure 5. NoCard GPD fit
Figure 6. CidG tcplot and mean residual life plot
Figure 7. CidG GPD fit
Figure 8. CttG tcplot and mean residual life plot
Figure 9. Portfolio aggregate claim amount distribution
Figure 10. Portfolio aggregate claim amount distribution
Figure 11. Portfolio aggregate claim amount distribution
Figure 12. Portfolio aggregate claim amount distribution
Table 5. Premium risk capital charges (on earned premiums and on expected losses) and CV
model                   on EP     on EL     CV
SF market wide          27.8%     n.a.      n.a.
SF undert. spec.        18.6%     n.a.      n.a.
first class, GPD        16.0%     21.2%     7.9%
first class, no GPD     20.8%     26.8%     9.7%
second class, GPD       20.5%     24.9%     8.9%
second class, no GPD    23.6%     30.5%     9.7%
4. Discussion
4.1. Results overview. Table 5 shows the capital charge estimates from the standard formulas and the internal models, while tables 6 - 9 report the capital charges (scaled on an earned premium basis) assuming each class of vehicle represents a different LOB, both on a stand-alone basis and considering the diversification advantage.
The stand-alone capital charge, say C_j, by class of vehicle is in fact calculated as in formula 1.1. The diversification advantage arises as the total portfolio capital, C^P, is allocated by class-of-vehicle sub-line of business according to formula 2.6 of chapter 4.
It is worthwhile to remark that the first class deepens the risk analysis to the specific components of claims and builds full predictive models for the frequency and cost of claims of each component. Nevertheless it fails to guarantee that the dependencies between components of claims are properly taken into account. The second class of models avoids the conditional independence assumption between components of claims; the cost is a reduced modeling depth for the cost of total claims.
When claims are split between attritional and large claims, a gain of 4 - 5 percentage points seems to arise. The reason might lie in the fact that TPML claims show a lighter tail behavior than the gamma distribution.
Table 6. Risk proportional capital charge EP ratios: first class, first model
lob           properCaptlChg    allocatedCaptlChg
car           0.21              0.15
trucks        0.24              0.17
two wheels    0.35              0.25
Table 7. Risk proportional capital charge EP ratios: first class, second model
lob           properCaptlChg    allocatedCaptlChg
car           0.27              0.21
trucks        0.26              0.20
two wheels    0.35              0.26
Table 8. Risk proportional capital charge EP ratios: second class, first model
lob           properCaptlChg    allocatedCaptlChg
car           0.31              0.23
trucks        0.29              0.22
two wheels    0.54              0.40
Table 9. Risk proportional capital charge EP ratios: second class, second model
lob           properCaptlChg    allocatedCaptlChg
car           0.27              0.20
trucks        0.26              0.19
two wheels    0.48              0.35
4.2. Analysis of model limitations. The proposed model relies on a number of assumptions and has some limitations. A non-exhaustive list follows:
(1) the stationarity of the environment with respect to inflation and the evolution of the forfeit structure;
(2) the appropriateness of the chosen model with respect to the ratemaking factors and the dependency structure between the frequency and severity of claim types;
(3) the analysis of the claim development process.
4.2.1. Environment stationarity. We have chosen to assume zero inflation for the components of claims handled directly by the insurer (i.e. NoCard, CttG and CidG).
Moreover, we supposed that the forfeit structure does not change abruptly between the experience period used in the analysis and the period the capital charge is estimated for. It is worthwhile to remark that the analysis used the 2008-2009 forfeit structure to estimate the 2010 premium risk capital charge. The forfeit structure changed abruptly in 2010, but notice of the new forfeit structure was given to insurers only one week before the year end. In order to take into account the impact of the changing structure when simulating the hypothetical forfeits of the forthcoming regulation, one should know:
• the province of both parties involved in the accident;
• the amount of the claim. While the amount of claims suffered by one's own insureds is well known, the current regulation does not allow the responsible party's insurer to know the full cost of the claims it caused. This point seriously limits the ability to directly simulate the cost of the CidD and CttD components of claims. An alternative would consist in simulating the new compensating forfeits on suffered claims data and applying the percentage change to caused claims; nevertheless there is no guarantee of the unbiasedness of such an estimate.
4.2.2. Model risk. We have chosen to model the frequency and severity of each component of claim distribution by means of GAMLSS. Moreover, not all variables known to have a significant relationship with either the frequency or the severity have been included.
During the empirical data analysis it has been noticed that while the frequency of claims is correctly modeled by means of GAMLSS, the severity of claims is not. In fact, the behavior of the GAMLSS standardized quantile residuals is very good for the frequency models (e.g. see figure 1) while it is not for the severity models (e.g. see figure 2).
Moreover, the capital charge figures may be sensitive to the threshold selection of the GPD models.
4.2.3. IBNR & IBNER considerations. We used a fixed age-to-ultimate charge to obtain the figures for the ultimate burning cost, that is
c^{ultimate}_{ijk} = c^{12\,month}_{ijk} \cdot k,
being c_{ijk} the sum of all component-of-claim specific burning costs evaluated at 12 months of maturity. We are aware that this approach has relevant drawbacks, even if their solution is difficult:
• the factors should vary at least by component of claims, and maybe by business line;
• the factors are not stationary, as the CARD scheme changes year by year and the disposal rate is increasing at earlier maturities [Sergio Desantis and Gianni Giuli, 2010, Mieli, 2010];
• a fixed charge does not allow considering the stochastic nature of claims emergence and settlement.
CHAPTER 7
Conclusions
1. Final remarks
An internal model framework to evaluate the premium risk capital charge for a TPML portfolio operating in the Italian DR environment has been developed in this PhD thesis.
CARD DR TPML portfolios are characterized by mixed-type claims, the structural presence of negative claim amounts and heterogeneity of risks. Moreover, the length of the experience period available for analysis is relatively short with respect to other LOBs. Finally, TPML relevant regulation changes more frequently than in other LOBs.
It has been shown that the proposed framework is able to represent a consistent approach to modeling the premium risk capital charge for a CARD portfolio. Collective risk theory has been applied on a clustered portfolio. The frequency and severity of each component of claims have been modeled in a manner consistent with the type of loss. Both GAMLSS and resampling techniques have been used.
Two alternative variations of the modeling framework have been proposed:
(1) the first alternative models each component of claims separately, then calculates the total loss as the sum of the components. Conditional independence between the components of claims, net of cluster characteristics, is assumed;
(2) the second alternative models only the frequency of payments (regardless of component of claims and responsibility). The cost of claims is modeled through resampling.
Both variations have been developed with and without GPD modeling for losses over a specified threshold. The resulting capital charges seem reasonable and comparable with the Solvency II QIS4 standard formula.
The modeling stage leads to two considerations:
(1) Modeling total payments leads to a higher capital charge. This may be due to residual correlation between components of claims that is not captured when the components are modeled separately.
(2) Splitting the claim cost modeling between attritional and large losses leads to a lower capital charge. This may be due either to a wrong selection of the threshold and parameters (model risk) or to the short-tailed nature (with respect to the exponential distribution) of the high-severity losses.
Relevant limitations of the proposed framework are:
(1) The short time available to update the premium risk models in a way consistent with the forthcoming forfeit structure. This drawback is due to the short delay between ANIA's release of the forfeit updates and the beginning of the period in which the new forfeits apply. The limitation becomes substantial when the forfeit structure changes markedly, e.g. between 2007 and 2008, or between 2009 and 2010, when the distinction between bodily injuries and property damages was introduced and then removed two years later, as some information needed to simulate the compensating forfeits may not be available in the previous year's data set.
(2) The poor fit of the theoretical distributions used to model the component-of-claims costs of attritional claims. This drawback is mainly due to the discretization of values applied by claim adjusters when setting case reserves. Nevertheless this is a known failure of the general theoretical severity-fitting approach when applied to real data. The regularity of the loss cost distribution usually improves as claims approach settlement, but the short experience period and the difficulty of finding correct age-to-ultimate factors make the use of accident years at different maturities impossible.
(3) IBNR and IBNER development are treated by applying a constant coefficient to all components of claims. A more detailed analysis would have developed specific coefficients by component of claims; age-to-ultimate factors are known to vary systematically by AY.
Relevant advantages of the proposed framework are:
(1) It models consistently the loss structure of a CARD-type MTPL portfolio.
(2) The use of GAMLSS is consistent with the standard GLM approach used in P&C ratemaking, and it allows the dispersion parameter to vary by AY.
(3) It permits modeling the underlying risk characteristics of heterogeneous portfolios by identifying clusters of insureds.
(4) It permits modeling the total loss of a portfolio with a varying business mix.
2. Disclaimer
This PhD thesis would not have been developed without the support of AXA Assicurazioni. I am extremely grateful to my AXA Assicurazioni actuarial supervisors, Stella Garnier and Gloria Leonardi, who supported me by allowing me to use a sampled data set and by providing valuable suggestions.
Nevertheless, the considerations appearing in this thesis are the responsibility of the author alone. In publishing these contents AXA Assicurazioni takes no position on the opinions expressed and disclaims all responsibility for any opinion, incorrect information or legal error found therein.
Part 3
Appendix
APPENDIX A
Review of statistics and predictive modeling
The most relevant statistical techniques used in this thesis are briefly presented along with the relevant bibliography:
• Predictive modeling techniques: generalized linear models (GLM) and extensions (e.g. GAMLSS).
• Collective risk theory.
• The peaks-over-threshold approach of extreme value theory.
• Monte Carlo simulation.
1. Predictive modeling
1.1. Generalized linear models. Generalized linear models have enjoyed widespread diffusion among P&C actuaries in the last two decades. They are now considered the standard model for determining the relativities used to price heterogeneous portfolios. See [de Jong and Heller, 2008] for a valid introduction to GLMs in the insurance context. Classical models for the number of claims are the (over-dispersed) Poisson and Negative Binomial GLM regressions; classical models for claim severity are the Gamma and Inverse Gaussian GLM regressions. Traditionally frequency and severity have been modeled separately: an initial estimate of the pure premium is determined as the product of the average frequency and severity estimates for each insured in the data set, and this estimate becomes the dependent variable of a final GLM regression (usually a Gamma regression with log link) that yields the indicated relativities, as described in [Geoff Werner and Claudine Modlin, 2009]. Taking different exposures into account and setting restrictions on parameters are issues of great importance in insurance applications; an appropriate use of offsets handles both. See [Jun Yan et al., ] for further details.
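A minimal sketch in R of this frequency/severity split with an exposure offset is reported below; the data frame `policies`, its columns and the rating factors are hypothetical names, not the thesis data set:

    ## frequency model: Poisson counts with log(exposure) offset
    freq_fit <- glm(n_claims ~ agesex + power + zone + offset(log(exposure)),
                    family = poisson(link = "log"), data = policies)
    ## severity model: Gamma on average cost, weighted by claim counts
    sev_fit  <- glm(avg_cost ~ agesex + power + zone,
                    family = Gamma(link = "log"), weights = n_claims,
                    data = subset(policies, n_claims > 0))
    ## pure premium: fitted frequency (per unit of exposure) times fitted severity
    pp <- predict(freq_fit, policies, type = "response") / policies$exposure *
          predict(sev_fit,  policies, type = "response")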
(1.1)
$$f(y;\mu,\phi) = a(y,\phi)\,\exp\!\left\{\frac{1}{\phi}\big[y\,\theta(\mu) - \kappa(\theta(\mu))\big]\right\},\qquad
\theta(\mu) = \begin{cases}\dfrac{\mu^{1-p}-1}{1-p}, & p \neq 1\\[4pt] \log\mu, & p = 1\end{cases}\qquad
\kappa(\theta(\mu)) = \begin{cases}\dfrac{\mu^{2-p}-1}{2-p}, & p \neq 2\\[4pt] \log\mu, & p = 2\end{cases}$$
Recently the Tweedie regression model has been acquiring widespread interest. The Tweedie distribution belongs to the exponential family and comprises many distributions used in actuarial applications as special cases: special formulations of (1.1) yield the Normal, Inverse Gaussian, Gamma and Poisson distributions.
The Tweedie distribution would have been an interesting candidate to deal with the DR scheme since, for p < 0, the data y are supported on the whole real line while, interestingly, µ > 0. Unfortunately all available software bounds Tweedie parameter estimation to the p > 0 case only [Dunn, 2010, Dunn and Smyth, 2005].
Tweedie regression regresses the total claim amount directly as the dependent variable. See [Meyers, 2009] for details.
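A sketch of a Tweedie fit in R follows, assuming the `tweedie` and `statmod` packages; the data set `policies`, its columns and the profile grid for the power p are hypothetical:

    library(tweedie)   # tweedie.profile(): profile likelihood for the power p
    library(statmod)   # tweedie(): GLM family object
    ## estimate p over an illustrative grid, then fit the log-link Tweedie GLM
    prof <- tweedie.profile(total_loss ~ agesex + power + zone, data = policies,
                            p.vec = seq(1.1, 1.9, by = 0.1), link.power = 0)
    fit  <- glm(total_loss ~ agesex + power + zone, data = policies,
                family = tweedie(var.power = prof$p.max, link.power = 0))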
Within GLMs the dependency between the response variable and the predictors is determined through the link function, with $E(y_i) = \mu_i = \kappa'(\theta_i)$. In actuarial P&C practice both the frequency and the severity link functions are mostly chosen in logarithmic form, to easily obtain premium relativities. Moreover, non-parametric smoothers of the input variables (such as polynomials or cubic splines) are used by practitioners ([pre, 2010]) to assess the non-linearity of marginal predictors. The mean can thus be expressed by (1.2):
(1.2)
$$g(\mu_i) = x_i^{T}\beta + h(v) + \ln(e_i)$$
where $h(v)$ and $\ln(e_i)$ account for an optional smoother on a suitable vector of explanatory variables and for an offset for exposures. The classical GLM conditional distribution belongs to the exponential family, and it can be shown that, setting the variance function as $\kappa''(\theta) = V(\mu)$, the variance of the generic $y_i$ is $\operatorname{var}(y_i) = \phi\, V(\mu_i)$.
The analysis of residuals is very important to check whether the assumptions regarding the marginal distribution and the relationship between the predictors and the response variable have been met.
Two general classes of residuals have been defined, Pearson residuals (1.3) and deviance residuals (1.4). Both converge to normality when φ is small, even if convergence is faster for deviance residuals. Neither deviance nor Pearson residuals can be assumed normally distributed when φ and µ are large, except for particular distributions.
[Dunn and Smyth, 1996] introduces randomized quantile residuals (henceforth RQS). Writing $F(y_i;\hat\mu_i,\hat\phi)$ for the cumulative distribution function, probability theory tells us that RQS are distributed as a standard normal if $\hat\mu_i$ and $\hat\phi$ are correctly estimated. RQS can easily be extended to discrete marginal distributions.
(1.3)
$$r_{p,i} = \frac{y_i - \hat\mu_i}{V(\hat\mu_i)^{1/2}}$$
(1.4)
$$r_{d,i} = \operatorname{sign}(y_i - \hat\mu_i)\, d(y_i,\hat\mu_i)^{1/2}$$
(1.5)
$$r_{q,i} = \Phi^{-1}\!\left(F(y_i;\hat\mu_i,\hat\phi)\right)$$
Within the internal model building process we have found that the RQS of the claim number models are very close to normality, whilst the RQS of the suffered claim cost models are definitely not (ill-behaved residuals). See for example figures 1 and 2.
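A sketch of how such diagnostics can be reproduced with the gamlss package is given below; the model formula and the `policies` data frame are hypothetical:

    library(gamlss)
    freq_fit <- gamlss(n_claims ~ agesex + power + zone + offset(log(exposure)),
                       family = NBI(), data = policies)
    rq <- resid(freq_fit)   # normalized (randomized) quantile residuals
    plot(freq_fit)          # residuals vs fitted values and index, density, normal Q-Q
    qqnorm(rq); qqline(rq)  # normality should hold if mu and phi are well estimated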
The deviance plays a critical role in GLM goodness-of-fit assessment. The deviance statistic is minimal in the saturated model (one parameter per observation) and maximal in the independence model (only one parameter).
The log-likelihood ratio compares the likelihood of the saturated model with that of the actual model, and it can be expressed as the deviance of the model divided by the dispersion parameter, $2\ln\left(L_{sat}/L_{act}\right) = D(y,\hat\theta)/\phi$. The log-likelihood ratio is the basis for the most relevant tests regarding the inclusion or exclusion of candidate predictors.
A drawback of the GLM framework with respect to OLS is that there usually exists no goodness-of-fit index like the OLS R². Nevertheless the Gini index has been proposed to evaluate the lift curve in order to provide a ranking of predictive power between competing models. See [Meyers, 2010] for details.
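As an illustration, a Gini-type lift statistic can be computed as below; this is a minimal sketch in the spirit of [Meyers, 2010], not the exact statistic used there, and the inputs are plain numeric vectors:

    gini_lift <- function(predicted, actual) {
      o <- order(predicted)                         # sort risks by predicted premium
      lorenz <- cumsum(actual[o]) / sum(actual)     # cumulative share of actual losses
      share  <- seq_along(actual) / length(actual)  # cumulative share of risks
      2 * mean(share - lorenz)                      # twice the area between the curves
    }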
[Figure 1. NoCard frequency residuals analysis for Cars. Four panels of quantile-residual diagnostics: residuals against fitted values, residuals against index, a density estimate, and a normal Q-Q plot; the residuals are close to standard normal.]
[Figure 2. CIDG cost of claims residuals analysis for Cars. The same four quantile-residual diagnostic panels for the severity model; the residuals depart markedly from normality.]
1.2. GLM extensions and GAMLSS. Double generalized linear models extend the framework presented in formula (1.2), modeling the mean and the dispersion by a pair of equations:
$$g(\mu_i) = x_i^{T}\beta,\qquad h(\phi_i) = z_i^{T}\gamma$$
see [de Jong and Heller, 2008] for further details.
Generalized Additive Models for Location, Scale and Shape (henceforth GAMLSS) have been introduced by [Stasinopoulos and Rigby, 2007]. As described on the GAMLSS web site:

Generalized Additive Models for Location, Scale and Shape (GAMLSS) are (semi) parametric regression type models. They are parametric, in that they require a parametric distribution assumption for the response variable, and "semi" in the sense that the modeling of the parameters of the distribution, as functions of explanatory variables, may involve using non-parametric smoothing functions. GAMLSS were introduced by Rigby and Stasinopoulos (2001, 2005) and Akantziliotou et al. (2002) as a way of overcoming some of the limitations associated with the popular Generalized Linear Models (GLM) and Generalized Additive Models (GAM), Nelder and Wedderburn (1972) and Hastie and Tibshirani (1990) respectively.
In GAMLSS the exponential family distribution assumption for the response variable, y, is relaxed and replaced by a general distribution family, including highly skew and/or kurtotic continuous and discrete distributions. The systematic part of the model is expanded to allow modeling not only the mean (or location) but all the parameters of the distribution of y as linear and/or nonlinear parametric and/or additive non-parametric functions of explanatory variables and/or random effects.
Hence GAMLSS is especially suited to modeling a response variable which does not follow an exponential family distribution (e.g. leptokurtic or platykurtic and/or positively or negatively skewed response data, or over-dispersed counts) or which exhibits heterogeneity (e.g. where the scale or shape of the distribution of the response variable changes with explanatory variable(s)).

Models fitted within the GAMLSS framework may be compared using classical inferential tools (i.e. AIC, BIC), and tools to assess model adequacy are provided, based on the analysis of the quantile residuals. The AIC index is defined as $-2(L-k)$ and represents a compromise between the goodness of fit, measured by the log-likelihood L, and the number of parameters k.
Tens of continuous and discrete distributions may be used to model the frequency and the severity of claims. We restricted our analysis to distributions with a logarithmic link, in order to preserve continuity with the P&C actuarial tradition. Candidate distributions for the number of claims (besides the traditional Poisson distribution) are the following (a fitting sketch in R follows the list):
• Negative binomial distribution:
$$P(Y=y \mid \mu,\sigma) = \frac{\Gamma\!\left(y+\tfrac{1}{\sigma}\right)}{\Gamma\!\left(\tfrac{1}{\sigma}\right)\Gamma(y+1)} \left(\frac{\sigma\mu}{1+\sigma\mu}\right)^{y} \left(\frac{1}{1+\sigma\mu}\right)^{1/\sigma},\qquad y\in\mathbb{N},\ \mu>0,\ \sigma>0$$
$$E[Y]=\mu,\qquad \operatorname{var}[Y]=\mu+\sigma\mu^{2}$$
• Poisson Inverse Gaussian:
$$P(Y=y \mid \mu,\sigma) = \left(\frac{2\alpha}{\pi}\right)^{1/2} \frac{\mu^{y}\, e^{1/\sigma}\, K_{y-1/2}(\alpha)}{(\alpha\sigma)^{y}\, y!},\qquad y\in\mathbb{N},\ \mu>0,\ \sigma>0$$
where $\alpha^{2} = \sigma^{-2} + 2\mu\sigma^{-1}$ and $K_{\lambda}(\cdot)$ denotes the modified Bessel function of the third kind.
• Zero-inflated Poisson:
$$P(Y=y \mid \mu,\sigma) = \begin{cases}\sigma + (1-\sigma)\, e^{-\mu}, & y = 0\\[4pt] (1-\sigma)\, \dfrac{e^{-\mu}\mu^{y}}{y!}, & y = 1,2,\ldots\end{cases}\qquad \mu>0,\ 0<\sigma<1$$
$$E[Y] = (1-\sigma)\,\mu,\qquad \operatorname{var}[Y] = (1-\sigma)\,\mu\,(1+\sigma\mu)$$
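A minimal sketch of comparing these candidate count distributions by AIC within gamlss follows; NBI (negative binomial), PIG and ZIP ship with the package, while the formula and the `policies` data frame are hypothetical:

    library(gamlss)
    f <- n_claims ~ agesex + power + offset(log(exposure))
    fits <- list(NBI = gamlss(f, family = NBI(), data = policies),
                 PIG = gamlss(f, family = PIG(), data = policies),
                 ZIP = gamlss(f, family = ZIP(), data = policies))
    sapply(fits, AIC)   # retain the family with the lowest AIC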
Candidate distributions for the (positive) cost of claims are the following (a fitting sketch follows the list):
• Gamma distribution:
$$f_Y(y \mid \mu,\sigma) = \frac{y^{1/\sigma^{2}-1}\, e^{-y/(\sigma^{2}\mu)}}{(\sigma^{2}\mu)^{1/\sigma^{2}}\, \Gamma(1/\sigma^{2})},\qquad y>0,\ \mu>0,\ \sigma>0$$
$$E(Y)=\mu,\qquad \operatorname{var}(Y)=\sigma^{2}\mu^{2}$$
• Inverse Gaussian:
$$f_Y(y \mid \mu,\sigma) = \frac{1}{\sqrt{2\pi\sigma^{2} y^{3}}}\, \exp\!\left(-\frac{(y-\mu)^{2}}{2\mu^{2}\sigma^{2} y}\right),\qquad y>0$$
$$E(Y)=\mu,\qquad \operatorname{var}(Y)=\sigma^{2}\mu^{3}$$
If negative values were also allowed, a suitable distribution available within the GAMLSS software would be the skew-t (1.6):
(1.6)
$$f_Y(y \mid \mu,\sigma,\nu,\tau) = \frac{2}{\sigma}\, f_{Z_1}(z)\, F_{Z_1}(\nu z),\qquad z = \frac{y-\mu}{\sigma},\qquad Z_1 \sim TF(0,1,\tau)$$
with $y,\mu,\nu \in (-\infty,\infty)$, $\sigma>0$ and $\tau>0$.
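A sketch of the corresponding severity fits follows: Gamma versus Inverse Gaussian on positive claim costs, plus one of the gamlss skew-t families (here ST3, an assumption) for data that include negative amounts; the data frames are hypothetical:

    library(gamlss)
    ga <- gamlss(claim_cost ~ agesex + power, family = GA(),  data = claims_pos)
    ig <- gamlss(claim_cost ~ agesex + power, family = IG(),  data = claims_pos)
    st <- gamlss(claim_amt  ~ agesex + power, family = ST3(), data = claims_all)
    AIC(ga, ig)   # compare the positive-support candidates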
A comprehensive actuarial application of GAMLSS has been presented in [de Jong et al., 2007], where GAMLSS have been used to model the losses of a standard TPML portfolio of a Dutch company. The article discusses traditional frequency and severity modeling and its pitfalls when traditional exponential family distributions are fitted assuming a constant dispersion index across all levels of the explanatory factors used. A very recent application of the GAMLSS methodology to mortality projection can be found in [Venter, 2011].
The solution proposed in [de Jong et al., 2007] to model the total loss of single policies is based on the separate modeling of frequency and severity by GAMLSS. The available predictors were: age group, car value, gender, area of residence, vehicle body type, vehicle make and vehicle age.
Models for the frequency were chosen among the Zero Inflated Poisson (ZIP), the Negative Binomial (NB) and the Poisson; models for the severity were chosen between the Gamma (GA) and the Inverse Gaussian (IG). The joint fit of the frequency and severity models led to the choice of an NB-IG model due to its lower AIC. The dispersion of the frequency was modeled through the vehicle value; the dispersion of the severity was modeled through the area.
1.3. The model fitting process. The approach used to find a GAMLSS model is based on the minimization of the AIC index, constrained to:
• a Gamma conditional distribution model for the severity;
• a Negative Binomial distribution for the frequency;
• only one variable used in the dispersion parameter model, where necessary.
Even if more distributions would have been suitable, the NB and Gamma distributions were chosen to avoid numerical issues. Minimizing the AIC means maximizing the log-likelihood penalized by the number of parameters, in order to avoid unnecessary model complexity. No interactions have been considered in this phase.
An initial model with all available predictors on the mean parameter is fitted. Predictors are then removed from the model using a chi-square test on the deviance of nested models; the backward recursion stops when the AIC is minimized. Finally, a model for the dispersion parameter is built by choosing a ratemaking factor, if any, that reduces the overall model AIC.
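A minimal sketch of this selection loop using gamlss::stepGAIC is shown below; the formula terms, the `policies` data frame and the dispersion term retained are all illustrative:

    library(gamlss)
    full <- gamlss(n_claims ~ agesex + power + zone + feed +
                     offset(log(exposure)), family = NBI(), data = policies)
    sel  <- stepGAIC(full, direction = "backward")  # drop terms while AIC falls
    ## then test one rating factor on the dispersion parameter
    disp <- gamlss(n_claims ~ agesex + power + offset(log(exposure)),
                   sigma.formula = ~ zone, family = NBI(), data = policies)
    AIC(sel, disp)  # keep the sigma term only if the overall AIC decreases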
2. Individual and collective risk theory
A main topic of risk theory is the determination of the portfolio's total loss distribution, formalized in formula (2.1):
(2.1)
$$\sum_{i=1}^{P} \tilde{s}_i = \tilde{S} = \sum_{i=1}^{\tilde{N}} \tilde{X}_i$$
The LHS represents the individual risk model, while the RHS represents the collective risk model. The individual risk theory approach derives the portfolio total loss distribution as the sum of the total losses of the policyholders. This approach requires the calculation of a convolution integral, which is almost always analytically unfeasible even with simple univariate density functions.
The collective risk theory approach hypothesizes that claims are i.i.d., the total loss being a compound distribution determined by a frequency distribution and a claim cost distribution. According to the mathematical properties of compound distributions, the characteristics of $\tilde{S}$ may be determined from the cumulants of the frequency and claim cost distributions. If the cost distribution is discrete-valued, then the Panjer recursion formula may be employed to derive the pdf of $\tilde{S}$ analytically. The collective risk theory approach presents a valuable computational advantage with respect to the individual risk theory approach; its main drawback is the requirement of identically distributed claim costs. See [Kaas et al., 2008] for further details.
Within the scope of this thesis the collective risk theory approach has been used within each cluster of insureds, applied to each component of claims. Within each cluster, each component of claims has a specified distribution of the frequency and of the cost of claims, defined by the GAMLSS model fitted for the analyzed component of claims.
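The following R fragment is a minimal sketch of this per-cluster compound simulation under assumed GAMLSS parameters; all numbers (exposure, NBI and GA parameters, number of simulations) are hypothetical:

    library(gamlss.dist)
    set.seed(1)
    n_sim <- 10000; exposure <- 5000   # simulations and policy-years in the cluster
    mu_f <- 0.065; sigma_f <- 0.4      # cluster-level frequency (NBI) parameters
    mu_s <- 1800;  sigma_s <- 1.1      # cluster-level severity (GA) parameters
    S <- replicate(n_sim, {
      n <- sum(rNBI(exposure, mu = mu_f, sigma = sigma_f))  # total claim count
      if (n == 0) 0 else sum(rGA(n, mu = mu_s, sigma = sigma_s))
    })
    quantile(S, 0.995) - mean(S)       # VaR-style premium risk capital charge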
3. Peak over threshold extreme value theory approach
3.1. EVT mathematics review. Extreme value theory has become a valuable toolkit for actuaries modeling loss distribution data. Insurance loss distributions often show a fat tail that makes classical probability distributions inadequate. Since a few large claims can significantly impact an insurance portfolio, statistical methods that deal with extreme losses have become worth knowing for actuaries.
The Pareto distribution is very important within this framework, since the tails of most of the loss distributions in common use may easily be approximated by a Pareto-type distribution, equation (3.1); the Pareto-type approximation has been shown to be reasonable for many lines of insurance.
(3.1)
$$1 - F_X(x) = l(x)\, x^{-\alpha},\qquad \alpha>0,\qquad \lim_{x\to\infty}\frac{l(tx)}{l(x)} = 1\ \ \forall t>0$$
Therefore an actuary may assume that the tail of the loss distribution, where extreme losses occur, can be approximated by a Pareto-type function without making specific assumptions on the global density. A proper definition of the tail distribution allows estimating quantities of interest related to extreme losses.
A distribution for the TPML tail is necessary within our approach to the NL premium risk model, owing to the solution we chose to assess the total loss distribution. In fact we chose to model frequency and severity within a generalized linear model (GAMLSS) framework. As suggested in [pre, 2010, Tim Carter et al., 1994], losses above a certain threshold should be removed from the raw data when estimating relativities, as they can distort the results. The correction coefficients employed consider the contribution of shock and catastrophic losses to the total loss over a wide range of years. This approach leads to an unbiased estimate of the expected value, but it introduces a bias in the assessment of the distribution's characteristics. We therefore decided to separate claims between attritional and extreme losses. To perform that analysis the following issues have to be addressed:
(1) the definition of a suitable threshold;
(2) the estimation of a loss distribution above the threshold.
Statistical theory has provided many possible estimators of the tail index; recently, estimators for grouped data have also been proposed [John B. Henry, III and Ping-Hung Hsieh, 2010]. EVT shows that equation (3.2) appears as the limiting distribution of the excesses $X_i - u$ as the threshold u becomes large: a positive function $\beta(u)$ can be found [Embrecht et al., 2003] such that the excess distribution follows equation (3.2).
(3.2)
$$F_u(x) = \Pr[X-u \le x \mid X > u] \approx G_{\xi,\beta(u)}(x)$$
$$G_{\xi,\beta}(x) = \begin{cases} 1-\left(1+\dfrac{\xi x}{\beta}\right)^{-1/\xi}, & \xi \neq 0\\[4pt] 1-e^{-x/\beta}, & \xi = 0 \end{cases}\qquad \beta>0;\quad x\ge 0 \text{ if } \xi\ge 0,\quad 0\le x\le -\beta/\xi \text{ if } \xi<0$$
A heavy tail, the most interesting case in the actuarial field, occurs when the shape parameter ξ is greater than zero. Assuming that the peaks over a threshold follow a GPD and combining this with the historical simulation of $N_u$, the number of excesses over the threshold, we arrive at an estimator of $F(x)$, $x > u$, based on formula (3.2). The probability distribution function may then be expressed as
$$F(z) = \Pr(Z \le u) + \big(1 - \Pr(Z \le u)\big)\, G_{\xi,\sigma}(z-u).$$
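A sketch of this spliced estimate in R, using the POT package of [Ribatet, 2009], is given below; the vector `losses` and the 95% threshold choice are hypothetical, and the exposure of the fitted parameters as `fitted.values` is an assumption about the package interface:

    library(POT)
    u     <- quantile(losses, 0.95)               # illustrative threshold
    gpd   <- fitgpd(losses, threshold = u, est = "mle")
    F_emp <- ecdf(losses)                         # empirical body below u
    F_hat <- function(z) {
      p_u <- F_emp(u)
      ifelse(z <= u, F_emp(z),
             p_u + (1 - p_u) * pgpd(z - u, loc = 0,
                                    scale = gpd$fitted.values["scale"],
                                    shape = gpd$fitted.values["shape"]))
    }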
Another approach followed by EVT is to model the distribution of maxima directly. The approach consists in finding appropriate constants $a_n$ and $b_n$ such that
$$\Pr\!\left(\frac{M_n - b_n}{a_n} \le z\right) = F^{n}(a_n z + b_n) \longrightarrow G(z),$$
where the limit G does not depend on n. The distribution of maxima may converge to the GEV family (3.3):
(3.3)
$$G(z) = \exp\!\left\{-\left[1+\xi\left(\frac{z-\mu}{\sigma}\right)\right]^{-1/\xi}\right\}$$
3.2. Practical issues in applying EVT to actuarial data. According to [Embrecht et al., 2003] the following hypotheses have to be considered when applying EVT:
• i.i.d. losses: the volatility of losses should not change, and losses should be somewhat serially independent. We know that TPML losses are not stationary (e.g. inflationary pressures), but clustering does not appear to be a problem.
• non-repetitiveness: this is not a problem for TPML claims.
• number of excesses over the threshold: in [Embrecht et al., 2003], VaR estimation on log-normally distributed losses at a 99% confidence level is said to need at least 25 observations.
Relevant actuarial literature papers are [Embrecht et al., 2003] and [Gigante et al., 2002]; in the latter, EVT has been used along with GLMs and Bühlmann-Straub credibility theory in order to determine a tariff for a TPML portfolio.
3.2.1. Threshold choice. The most challenging choice in GPD modeling is the choice of the threshold. A very high threshold gives inconsistent estimates of the distribution parameters, while too low a threshold leads to biased estimates, as the GPD convergence theorem may not hold.
We may write the GPD distribution as (3.4), with (µ, σ, ξ) the location, scale and shape parameters. Changing the threshold means fitting a different GPD, but the new GPD parameters are linked to the parameters of another GPD attached to a lower threshold by the following relationship:
$$\sigma_1 = \sigma_0 + \xi_0(\mu_1 - \mu_0),\qquad \xi_1 = \xi_0.$$
So, as the modified scale parameter $\sigma^{*} = \sigma_1 - \xi_1\mu_1$ is independent of the threshold, we can plot the pairs $(\mu_i, \sigma^{*})$ and $(\mu_i, \xi_i)$: a reasonable choice of the location parameter is the point from which the modified scale and shape parameters appear stable. Such a technique is implemented in the tcplot function ([Ribatet, 2009]).
Another way to select the threshold is to use the mean residual life plot,
$$E[X - \mu_1 \mid X > \mu_1] = \frac{\sigma_{\mu_0} + \xi\,\mu_1}{1-\xi},$$
from which we see that the approximation is good from the point where the mean residual life plot becomes linear in $\mu_1$.
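Both diagnostics are available in the POT package of [Ribatet, 2009]; a minimal sketch follows, with `losses` a hypothetical vector and the threshold range purely illustrative:

    library(POT)
    ## parameter stability plot: modified scale and shape vs threshold
    tcplot(losses, u.range = as.numeric(quantile(losses, c(0.90, 0.999))))
    ## mean residual life plot: choose u where the plot becomes linear
    mrlplot(losses)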
(3.4)
$$\Pr(X \le y \mid X > \mu) \to H(y),\qquad H(y) = 1 - \left(1 + \xi\,\frac{y-\mu}{\sigma}\right)^{-1/\xi}$$
APPENDIX B
One way data analysis
0.3. Four wheels basic statistics.
0.3.1. YEAR.

ANNO  exposure   frequency  severity  pure premium
2007  262173.00  3.20       4423.00   142.00
2008  271357.00  2.39       6366.00   152.00
2009  279790.00  2.00       6509.00   130.00
Table 1. Cars, NoCard aggregates by YEAR

ANNO  exposure   frequency  handled severity  compensating forfait  pure premium
2007  262173.00  6.49       1760.00           1901.00               -9.00
2008  271357.00  7.09       1834.00           1692.00               10.00
2009  279790.00  7.43       1885.00           1737.00               11.00
Table 2. Cars, CidG aggregates by YEAR

ANNO  exposure   frequency  handled severity  compensating forfait  pure premium
2007  262173.00  0.34       6026.00           4962.00               4.00
2008  271357.00  0.37       5726.00           4505.00               4.00
2009  279790.00  0.40       4965.00           4338.00               3.00
Table 3. Cars, CttG aggregates by YEAR

ANNO  exposure   frequency  compensating forfait  pure premium
2007  262172.83  5.17       1982.00               102.00
2008  271357.43  5.82       1703.00               99.00
2009  279789.83  6.08       1767.00               107.00
Table 4. Cars, CidD aggregates by YEAR

ANNO  exposure   frequency  compensating forfait  pure premium
2007  262173.00  0.30       5758.11               17.20
2008  271357.00  0.41       4344.31               17.63
2009  279790.00  0.35       5750.96               20.35
Table 5. Cars, CttD aggregates by YEAR
0.3.2. AGE SEX.

ETASEX REC  exposure   frequency  severity  pure premium
F 18-34     62868.00   2.78       5750.00   160.00
F 35-41     59072.00   2.47       4326.00   107.00
F 42-49     58114.00   2.44       5404.00   132.00
F 50-59     53233.00   2.35       6720.00   158.00
F 60+       49342.00   2.06       3732.00   77.00
M 18-34     69933.00   3.30       6801.00   225.00
M 35-41     76067.00   2.70       4864.00   131.00
M 42-49     83202.00   2.47       5652.00   140.00
M 50-59     95534.00   2.45       5561.00   136.00
M 60+       137054.00  2.07       5986.00   124.00
S           68901.00   2.86       5679.00   163.00
Table 6. Cars, NoCard aggregates by AGESEX

ETASEX REC  exposure   frequency  handled severity  compensating forfait  pure premium
F 18-34     62868.00   8.14       2001.00           1917.00               7.00
F 35-41     59072.00   7.32       1851.00           1831.00               1.00
F 42-49     58114.00   7.43       1857.00           1812.00               3.00
F 50-59     53233.00   6.79       1820.00           1771.00               3.00
F 60+       49342.00   5.42       1715.00           1760.00               -2.00
M 18-34     69933.00   9.09       1970.00           1802.00               15.00
M 35-41     76067.00   7.50       1803.00           1770.00               3.00
M 42-49     83202.00   7.09       1811.00           1734.00               5.00
M 50-59     95534.00   6.87       1783.00           1744.00               3.00
M 60+       137054.00  5.38       1632.00           1692.00               -3.00
S           68901.00   7.44       1912.00           1698.00               16.00
Table 7. Cars, CidG aggregates by AGESEX

ETASEX REC  exposure   frequency  handled severity  compensating forfait  pure premium
F 18-34     62868.00   0.51       7097.00           6470.00               3.00
F 35-41     59072.00   0.40       4927.00           4424.00               2.00
F 42-49     58114.00   0.41       4595.00           4034.00               2.00
F 50-59     53233.00   0.32       4307.00           3706.00               2.00
F 60+       49342.00   0.25       5077.00           4370.00               2.00
M 18-34     69933.00   0.66       6184.00           5042.00               8.00
M 35-41     76067.00   0.43       4805.00           4036.00               3.00
M 42-49     83202.00   0.40       4411.00           3656.00               3.00
M 50-59     95534.00   0.35       6783.00           4392.00               8.00
M 60+       137054.00  0.23       5864.00           5090.00               2.00
S           68901.00   0.24       5080.00           4075.00               2.00
Table 8. Cars, CttG aggregates by AGESEX

ETASEX REC  exposure   frequency  compensating forfait  pure premium
F 18-34     62867.78   5.94       1793.00               107.00
F 35-41     59071.94   5.15       1779.00               92.00
F 42-49     58114.00   6.00       1858.00               111.00
F 50-59     53233.13   5.58       1786.00               100.00
F 60+       49341.93   5.35       1744.00               93.00
M 18-34     69933.43   6.36       1870.00               119.00
M 35-41     76067.16   5.39       1883.00               102.00
M 42-49     83202.05   5.65       1796.00               101.00
M 50-59     95533.73   5.34       1836.00               98.00
M 60+       137053.60  5.10       1775.00               91.00
S           68901.32   7.44       1761.00               131.00
Table 9. Cars, CidD aggregates by AGESEX

ETASEX REC  exposure   frequency  compensating forfait  pure premium
F 18-34     62868.00   0.44       4770.59               20.79
F 35-41     59072.00   0.31       4644.33               14.31
F 42-49     58114.00   0.34       4289.59               14.39
F 50-59     53233.00   0.35       4813.00               17.00
F 60+       49342.00   0.26       4956.19               12.96
M 18-34     69933.00   0.55       6219.00               34.24
M 35-41     76067.00   0.43       4786.23               20.39
M 42-49     83202.00   0.34       7606.75               26.15
M 50-59     95534.00   0.32       4691.67               15.08
M 60+       137054.00  0.26       4208.81               11.09
S           68901.00   0.35       5645.10               19.91
Table 10. Cars, CttD aggregates by AGESEX
0.3.3. POWER.

CAVALLI REC  exposure   frequency  severity  pure premium
0-13         154059.00  2.21       4637.00   102.00
14           121264.00  2.33       5352.00   124.00
15-17        223413.00  2.46       5735.00   141.00
18-19        152341.00  2.80       6009.00   168.00
20+          162243.00  2.77       5969.00   165.00
Table 11. Cars, NoCard aggregates by POWER

CAVALLI REC  exposure   frequency  handled severity  compensating forfait  pure premium
0-13         154059.00  4.78       1700.00           1821.00               -6.00
14           121264.00  6.50       1739.00           1784.00               -3.00
15-17        223413.00  7.14       1814.00           1782.00               2.00
18-19        152341.00  7.92       1838.00           1787.00               4.00
20+          162243.00  8.50       1965.00           1710.00               22.00
Table 12. Cars, CidG aggregates by POWER

CAVALLI REC  exposure   frequency  handled severity  compensating forfait  pure premium
0-13         154059.00  0.32       6223.00           4285.00               6.00
14           121264.00  0.33       6537.00           6003.00               2.00
15-17        223413.00  0.37       5310.00           4553.00               3.00
18-19        152341.00  0.50       5347.00           4444.00               4.00
20+          162243.00  0.34       4763.00           4025.00               3.00
Table 13. Cars, CttG aggregates by POWER

CAVALLI REC  exposure   frequency  compensating forfait  pure premium
0-13         154058.54  5.12       1807.00               92.00
14           121264.45  5.49       1798.00               99.00
15-17        223412.91  5.49       1816.00               100.00
18-19        152341.33  5.84       1820.00               106.00
20+          162242.85  6.56       1795.00               118.00
Table 14. Cars, CidD aggregates by POWER

CAVALLI REC  exposure   frequency  compensating forfait  pure premium
0-13         154059.00  0.34       4599.78               15.85
14           121264.00  0.34       4906.33               16.59
15-17        223413.00  0.33       4664.72               15.51
18-19        152341.00  0.40       6135.61               24.49
20+          162243.00  0.36       5729.76               20.55
Table 15. Cars, CttD aggregates by POWER
0.3.4. FEED.

ALIM REC  exposure   frequency  severity  pure premium
B         465024.00  2.26       5200.00   118.00
D         188745.00  2.64       6732.00   178.00
G         159551.00  3.11       5346.00   166.00
Table 16. Cars, NoCard aggregates by ALIM

ALIM REC  exposure   frequency  handled severity  compensating forfait  pure premium
B         465024.00  5.79       1741.00           1780.00               -2.00
D         188745.00  8.83       1882.00           1733.00               13.00
G         159551.00  8.45       1944.00           1799.00               12.00
Table 17. Cars, CidG aggregates by ALIM

ALIM REC  exposure   frequency  handled severity  compensating forfait  pure premium
B         465024.00  0.31       5823.00           4754.00               3.00
D         188745.00  0.37       4613.00           3924.00               3.00
G         159551.00  0.55       5778.00           4809.00               5.00
Table 18. Cars, CttG aggregates by ALIM

ALIM REC  exposure   frequency  compensating forfait  pure premium
B         465024.08  5.28       1803.00               95.00
D         188744.85  6.18       1774.00               110.00
G         159551.15  6.36       1857.00               118.00
Table 19. Cars, CidD aggregates by ALIM

ALIM REC  exposure   frequency  compensating forfait  pure premium
B         465024.00  0.32       4852.71               15.74
D         188745.00  0.32       5825.18               18.80
G         159551.00  0.47       5442.16               25.82
Table 20. Cars, CttD aggregates by ALIM
0.3.5. ZONE.

zonaForfeit  exposure   frequency  severity  pure premium
1            134422.00  2.82       5664.00   160.00
2            555885.00  2.41       5815.00   140.00
3            123013.00  2.64       4684.00   124.00
Table 21. Cars, NoCard aggregates by ZONE

zonaForfeit  exposure   frequency  handled severity  compensating forfait  pure premium
1            134422.00  7.20       2159.00           1930.00               16.00
2            555885.00  6.87       1824.00           1781.00               3.00
3            123013.00  7.46       1508.00           1561.00               -4.00
Table 22. Cars, CidG aggregates by ZONE

zonaForfeit  exposure   frequency  handled severity  compensating forfait  pure premium
1            134422.00  0.35       5610.00           4584.00               4.00
2            555885.00  0.35       5552.00           4783.00               3.00
3            123013.00  0.49       5411.00           3916.00               7.00
Table 23. Cars, CttG aggregates by ZONE

zonaForfeit  exposure   frequency  compensating forfait  pure premium
1            134421.52  5.71       1903.00               109.00
2            555885.46  5.61       1823.00               102.00
3            123013.10  6.07       1648.00               100.00
Table 24. Cars, CidD aggregates by ZONE

zonaForfeit  exposure   frequency  compensating forfait  pure premium
1            134422.00  0.35       4491.55               15.67
2            555885.00  0.33       5581.63               18.46
3            123013.00  0.46       4620.05               21.30
Table 25. Cars, CttD aggregates by ZONE
0.4. Trucks basic statistics.
0.4.1. YEAR.

ANNO  exposure  frequency  severity  pure premium
2007  41035.00  7.99       3998.00   319.00
2008  44005.00  6.79       5109.00   347.00
2009  45855.00  5.89       4095.00   241.00
Table 26. Trucks, NoCard aggregates by YEAR

ANNO  exposure  frequency  handled severity  compensating forfait  pure premium
2007  41035.00  5.05       1683.00           1833.00               -8.00
2008  44005.00  5.61       1799.00           1474.00               18.00
2009  45855.00  5.87       1727.00           1495.00               14.00
Table 27. Trucks, CidG aggregates by YEAR

ANNO  exposure  frequency  handled severity  compensating forfait  pure premium
2007  41035.00  0.12       5366.00           4099.00               2.00
2008  44005.00  0.11       4036.00           3507.00               1.00
2009  45855.00  0.12       3704.00           3215.00               1.00
Table 28. Trucks, CttG aggregates by YEAR

ANNO  exposure  frequency  compensating forfait  pure premium
2007  41035.20  12.32      1991.00               245.00
2008  44004.54  13.57      1605.00               218.00
2009  45854.92  14.39      1664.00               240.00
Table 29. Trucks, CidD aggregates by YEAR

ANNO  exposure  frequency  compensating forfait  pure premium
2007  41035.00  0.45       4942.20               22.40
2008  44005.00  0.65       5658.62               36.65
2009  45855.00  0.48       4795.75               23.11
Table 30. Trucks, CttD aggregates by YEAR
0.4.2. AGE.

AGE REC  exposure  frequency  severity  pure premium
18-40    25778.00  4.94       4358.00   215.00
41+      18550.00  4.06       4987.00   203.00
S        86567.00  8.01       4340.00   348.00
Table 31. Trucks, NoCard aggregates by AGE

AGE REC  exposure  frequency  handled severity  compensating forfait  pure premium
18-40    25778.00  5.58       1842.00           1681.00               9.00
41+      18550.00  3.96       1782.00           1690.00               4.00
S        86567.00  5.85       1703.00           1542.00               9.00
Table 32. Trucks, CidG aggregates by AGE

AGE REC  exposure  frequency  handled severity  compensating forfait  pure premium
18-40    25778.00  0.22       3925.00           3250.00               1.00
41+      18550.00  0.13       4084.00           3166.00               1.00
S        86567.00  0.08       4776.00           4010.00               1.00
Table 33. Trucks, CttG aggregates by AGE

AGE REC  exposure  frequency  compensating forfait  pure premium
18-40    25778.14  11.18      1771.00               198.00
41+      18549.77  9.18       1769.00               162.00
S        86566.75  15.07      1726.00               260.00
Table 34. Trucks, CidD aggregates by AGE

AGE REC  exposure  frequency  compensating forfait  pure premium
18-40    25778.00  0.58       5183.38               29.96
41+      18550.00  0.32       9413.30               30.45
S        86567.00  0.56       4668.11               26.05
Table 35. Trucks, CttD aggregates by AGE
0.4.3. WEIGHT - USE.

WEIGHTUSE  exposure  frequency  severity  pure premium
P 24-      39572.00  3.33       4566.00   152.00
P 25-35    53885.00  5.16       3833.00   198.00
P 36+      21991.00  9.29       3917.00   364.00
T 115-     5367.00   13.10      4040.00   529.00
T 115-440  8709.00   20.85      5612.00   1170.00
T 441+     1371.00   22.25      5621.00   1251.00
Table 36. Trucks, NoCard aggregates by WEIGHT USE

WEIGHTUSE  exposure  frequency  handled severity  compensating forfait  pure premium
P 24-      39572.00  5.66       1762.00           1754.00               1.00
P 25-35    53885.00  4.85       1673.00           1584.00               4.00
P 36+      21991.00  4.38       1766.00           1470.00               13.00
T 115-     5367.00   7.94       1663.00           1523.00               11.00
T 115-440  8709.00   9.84       1810.00           1350.00               45.00
T 441+     1371.00   9.63       2227.00           1287.00               91.00
Table 37. Trucks, CidG aggregates by WEIGHT USE

WEIGHTUSE  exposure  frequency  handled severity  compensating forfait  pure premium
P 24-      39572.00  0.19       3886.00           3442.00               1.00
P 25-35    53885.00  0.12       5025.00           3912.00               1.00
P 36+      21991.00  0.04       4424.00           3350.00               0.00
T 115-     5367.00   0.06       1446.00           1321.00               0.00
T 115-440  8709.00   0.00       -                 -                     -
T 441+     1371.00   0.00       -                 -                     -
Table 38. Trucks, CttG aggregates by WEIGHT USE

WEIGHTUSE  exposure  frequency  compensating forfait  pure premium
P 24-      39572.15  8.99       1800.00               162.00
P 25-35    53884.86  13.20      1724.00               227.00
P 36+      21990.61  13.76      1759.00               242.00
T 115-     5366.96   28.58      1707.00               488.00
T 115-440  8709.31   23.84      1684.00               401.00
T 441+     1370.76   23.56      1666.00               393.00
Table 39. Trucks, CidD aggregates by WEIGHT USE

WEIGHTUSE  exposure  frequency  compensating forfait  pure premium
P 24-      39572.00  0.41       4703.94               19.14
P 25-35    53885.00  0.48       5814.24               27.62
P 36+      21991.00  0.53       5257.15               27.97
T 115-     5367.00   1.19       4773.33               56.92
T 115-440  8709.00   0.96       4526.48               43.66
T 441+     1371.00   0.73       4523.30               33.00
Table 40. Trucks, CttD aggregates by WEIGHT USE
0.4.4. ZONE.

zonaForfeit  exposure  frequency  severity  pure premium
1            20354.00  6.94       4034.00   280.00
2            91595.00  6.83       4351.00   297.00
3            18946.00  6.85       5017.00   343.00
Table 41. Trucks, NoCard aggregates by ZONE

zonaForfeit  exposure  frequency  handled severity  compensating forfait  pure premium
1            20354.00  5.23       2107.00           1764.00               18.00
2            91595.00  5.60       1700.00           1595.00               6.00
3            18946.00  5.48       1554.00           1350.00               11.00
Table 42. Trucks, CidG aggregates by ZONE

zonaForfeit  exposure  frequency  handled severity  compensating forfait  pure premium
1            20354.00  0.13       5958.00           4108.00               2.00
2            91595.00  0.10       4002.00           3470.00               1.00
3            18946.00  0.18       4074.00           3545.00               1.00
Table 43. Trucks, CttG aggregates by ZONE

zonaForfeit  exposure  frequency  compensating forfait  pure premium
1            20353.69  12.78      1884.00               241.00
2            91595.25  13.43      1727.00               232.00
3            18945.71  14.35      1648.00               237.00
Table 44. Trucks, CidD aggregates by ZONE

zonaForfeit  exposure  frequency  compensating forfait  pure premium
1            20354.00  0.52       4743.98               24.47
2            91595.00  0.48       5324.18               25.69
3            18946.00  0.77       5106.29               39.08
Table 45. Trucks, CttD aggregates by ZONE

0.5. Two wheels basic statistics.
0.5.1. YEAR.
ANNO  exposure  frequency  severity  pure premium
2007  42486.00  2.19       2874.00   63.00
2008  44068.00  1.70       4130.00   70.00
2009  45052.00  1.41       4827.00   68.00
Table 46. TwoWheels, NoCard aggregates by YEAR

ANNO  exposure  frequency  handled severity  compensating forfait  pure premium
2007  42486.00  2.12       3397.00           1810.00               34.00
2008  44068.00  2.45       3325.00           2807.00               13.00
2009  45052.00  2.81       3788.00           3105.00               19.00
Table 47. TwoWheels, CidG aggregates by YEAR

ANNO  exposure  frequency  handled severity  compensating forfait  pure premium
2007  42486.00  0.12       4740.00           3460.00               1.00
2008  44068.00  0.14       9939.00           8542.00               2.00
2009  45052.00  0.22       8503.00           6267.00               5.00
Table 48. TwoWheels, CttG aggregates by YEAR

ANNO  exposure  frequency  compensating forfait  pure premium
2007  42485.88  1.25       1848.00               23.00
2008  44068.39  1.57       1458.00               23.00
2009  45052.48  1.80       1518.00               27.00
Table 49. TwoWheels, CidD aggregates by YEAR

ANNO  exposure  frequency  compensating forfait  pure premium
2007  42486.00  0.03       5335.42               1.51
2008  44068.00  0.06       3672.11               2.25
2009  45052.00  0.04       4600.00               1.94
Table 50. TwoWheels, CttD aggregates by YEAR
0.5.2. ENGINE.

ENGINEVOL  exposure  frequency  severity  pure premium
C 050-     46302.00  2.70       1967.00   53.00
M 051-150  21187.00  1.38       4121.00   57.00
M 151-385  22908.00  1.29       4340.00   56.00
M 386-450  2647.00   1.70       10906.00  185.00
M 451-650  18263.00  1.19       7162.00   85.00
M 651+     20300.00  1.05       8573.00   90.00
Table 51. TwoWheels, NoCard aggregates by ENGINE

ENGINEVOL  exposure  frequency  handled severity  compensating forfait  pure premium
C 050-     46302.00  1.10       3162.00           2678.00               5.00
M 051-150  21187.00  2.91       3341.00           2803.00               16.00
M 151-385  22908.00  3.57       2938.00           2585.00               13.00
M 386-450  2647.00   4.95       3158.00           2243.00               45.00
M 451-650  18263.00  3.19       3740.00           2490.00               40.00
M 651+     20300.00  2.92       4714.00           2787.00               56.00
Table 52. TwoWheels, CidG aggregates by ENGINE

ENGINEVOL  exposure  frequency  handled severity  compensating forfait  pure premium
C 050-     46302.00  0.07       14283.00          12273.00              1.00
M 051-150  21187.00  0.20       3695.00           3009.00               1.00
M 151-385  22908.00  0.28       4919.00           3861.00               3.00
M 386-450  2647.00   0.23       4505.00           3083.00               3.00
M 451-650  18263.00  0.18       5238.00           4036.00               2.00
M 651+     20300.00  0.16       16915.00          11940.00              8.00
Table 53. TwoWheels, CttG aggregates by ENGINE

ENGINEVOL  exposure  frequency  compensating forfait  pure premium
C 050-     46302.42  1.07       1527.00               16.00
M 051-150  21186.59  2.20       1632.00               36.00
M 151-385  22907.82  1.73       1534.00               27.00
M 386-450  2647.08   2.23       1677.00               37.00
M 451-650  18263.28  1.72       1614.00               28.00
M 651+     20299.55  1.48       1620.00               24.00
Table 54. TwoWheels, CidD aggregates by ENGINE

ENGINEVOL  exposure  frequency  compensating forfait  pure premium
C 050-     46302.00  0.03       4643.58               1.20
M 051-150  21187.00  0.07       3711.86               2.45
M 151-385  22908.00  0.03       4309.38               1.50
M 386-450  2647.00   0.08       4125.00               3.12
M 451-650  18263.00  0.03       3476.67               1.14
M 651+     20300.00  0.08       4956.12               3.91
Table 55. TwoWheels, CttD aggregates by ENGINE
0.5.3. AGE SEX.

AGESEX REC  exposure  frequency  severity  pure premium
F 18-39     7901.00   2.11       3430.00   72.00
F 40-49     9477.00   2.50       2320.00   58.00
F 50+       5162.00   1.78       2421.00   43.00
M 18-39     35943.00  1.73       4821.00   83.00
M 40-49     35618.00  1.85       4274.00   79.00
M 50+       33886.00  1.39       3244.00   45.00
S           3618.00   1.91       2287.00   44.00
Table 56. TwoWheels, NoCard aggregates by AGESEX

AGESEX REC  exposure  frequency  handled severity  compensating forfait  pure premium
F 18-39     7901.00   3.10       3117.00           2425.00               21.00
F 40-49     9477.00   2.65       3602.00           3005.00               16.00
F 50+       5162.00   1.72       3272.00           2725.00               9.00
M 18-39     35943.00  3.25       3761.00           2634.00               37.00
M 40-49     35618.00  2.39       3459.00           2618.00               20.00
M 50+       33886.00  1.56       3357.00           2718.00               10.00
S           3618.00   3.18       3313.00           2301.00               32.00
Table 57. TwoWheels, CidG aggregates by AGESEX

AGESEX REC  exposure  frequency  handled severity  compensating forfait  pure premium
F 18-39     7901.00   0.28       2735.00           2073.00               2.00
F 40-49     9477.00   0.17       4523.00           3517.00               2.00
F 50+       5162.00   0.08       7738.00           6132.00               1.00
M 18-39     35943.00  0.23       13986.00          10903.00              7.00
M 40-49     35618.00  0.14       4446.00           3414.00               1.00
M 50+       33886.00  0.09       4787.00           3967.00               1.00
S           3618.00   0.17       3932.00           2828.00               2.00
Table 58. TwoWheels, CttG aggregates by AGESEX

AGESEX REC  exposure  frequency  compensating forfait  pure premium
F 18-39     7901.49   1.95       1557.00               30.00
F 40-49     9476.59   1.80       1617.00               29.00
F 50+       5162.35   1.22       1655.00               20.00
M 18-39     35943.40  1.87       1629.00               30.00
M 40-49     35618.33  1.62       1574.00               25.00
M 50+       33886.22  0.94       1485.00               14.00
S           3618.34   2.21       1606.00               36.00
Table 59. TwoWheels, CidD aggregates by AGESEX

AGESEX REC  exposure  frequency  compensating forfait  pure premium
F 18-39     7901.00   0.05       4437.50               2.25
F 40-49     9477.00   0.04       3321.25               1.40
F 50+       5162.00   0.02       4050.00               0.78
M 18-39     35943.00  0.08       4526.78               3.40
M 40-49     35618.00  0.03       4935.00               1.66
M 50+       33886.00  0.03       3630.11               0.96
S           3618.00   0.03       1373.00               0.38
Table 60. TwoWheels, CttD aggregates by AGESEX
0.5.4. ZONE.

zonaForfeit  exposure  frequency  severity  pure premium
1            42999.00  1.57       4505.00   71.00
2            77408.00  1.76       3646.00   64.00
3            11200.00  2.52       3007.00   76.00
Table 61. TwoWheels, NoCard aggregates by ZONE

zonaForfeit  exposure  frequency  handled severity  compensating forfait  pure premium
1            42999.00  2.04       3938.00           2915.00               21.00
2            77408.00  2.59       3433.00           2583.00               22.00
3            11200.00  3.25       3046.00           2353.00               23.00
Table 62. TwoWheels, CidG aggregates by ZONE

zonaForfeit  exposure  frequency  handled severity  compensating forfait  pure premium
1            42999.00  0.15       10388.00          7467.00               4.00
2            77408.00  0.15       7602.00           6297.00               2.00
3            11200.00  0.23       4248.00           3266.00               2.00
Table 63. TwoWheels, CttG aggregates by ZONE

zonaForfeit  exposure  frequency  compensating forfait  pure premium
1            42999.31  1.28       1601.00               20.00
2            77407.52  1.62       1604.00               26.00
3            11199.91  2.07       1437.00               30.00
Table 64. TwoWheels, CidD aggregates by ZONE

zonaForfeit  exposure  frequency  compensating forfait  pure premium
1            42999.00  0.04       4543.58               2.01
2            77408.00  0.04       4226.29               1.86
3            11200.00  0.04       4110.00               1.83
Table 65. TwoWheels, CttD aggregates by ZONE
APPENDIX C
GAMLSS first model relativities
0.6. Models for Cars portfolio.
0.6.1. GAMLSS relativities plot. Only the captions of the relativity plots are recoverable from the transcription; each figure shows the partial relativities ("Partial for factor(...)") of the rating factors retained in the corresponding GAMLSS model (ZONECARD, POWER, FEED, YEAR and AGESEX_REC for Cars; WEIGHTUSE, AGE_REC, YEAR and ZONECARD for Trucks; ENGINEVOL, AGESEX_REC, YEAR and ZONECARD for two wheels).
[Figure 1. NoCard frequency model for Cars]
[Figure 2. NoCard severity model for Cars]
[Figure 3. CIDG frequency model for Cars]
[Figure 4. CIDG severity model for Cars]
[Figure 5. CTTG frequency model for Cars]
[Figure 6. CTTG severity model for Cars]
[Figure 7. CIDD frequency model for Cars]
[Figure 8. CTTD frequency model for Cars]
0.7. Models for Trucks portfolio.
0.7.1. GAMLSS relativities plot.
[Figure 9. NoCard frequency model for Trucks]
[Figure 10. NoCard severity model for Trucks]
[Figure 11. CIDG frequency model for Trucks]
[Figure 12. CIDG severity model for Trucks]
[Figure 13. CTTG frequency model for Trucks]
[Figure 14. CTTG severity model for Trucks]
[Figure 15. CIDD frequency model for Trucks]
[Figure 16. CTTD frequency model for Trucks]
0.8. Models for two wheels portfolio.
0.8.1. GAMLSS relativities plot.
[Figure 17. NoCard frequency model for 2Ws]
[Figure 18. NoCard severity model for 2Ws]
[Figure 19. CIDG frequency model for 2Ws]
[Figure 20. CIDG severity model for 2Ws]
[Figure 21. CTTG frequency model for 2Ws]
[Figure 22. CTTG severity model for 2Ws]
[Figure 23. CIDD frequency model for 2Ws]
[Figure 24. CTTD frequency model for 2Ws]
Bibliography
[pre, 2010] (2010). Pretium manual. Tower Watson, 3.1 edition.
[ANIA, 2008] ANIA (2008). Riepilogo gestione sinistri. statistica annuale rca 2007. Technical report, ANIA.
[ANIA, 2010] ANIA (2010). Riepilogo gestione sinistri. statistica annuale rca 2009. Technical report, ANIA.
[AXA Actuarial department, 2009] AXA Actuarial department (2009). The italian direct reimbursment system. Technical report, AXA Assicurazioni ed Investimenti.
[Brickman et al., 2005] Brickman, S., Forster, W., and Sheaf, S. (2005). Claim inflation - use and
abuses. In GIRO working party, pages 1–51.
[CEIOPS, 2007] CEIOPS (2007). QIS4 technical specifications.
[CEIOPS, 2010] CEIOPS (2010). QIS5 technical specifications.
[Clarke and Salvatori, 1991] Clarke, T. G. and Salvatori, L. (1991). Auto insurance in Italy. In Casualty Actuarial Society discussion paper program, pages 253–304.
[Conterno, 2007] Conterno, L. (2007). Modelli di tariffazione RCA ed effetto del decreto Bersani sul sistema bonus malus. Master's thesis, Università Cattolica del Sacro Cuore, Milano, Italy.
[Cucinella, 2008] Cucinella, A. (2008). Ripercussioni operative e tecniche dell'indennizzo diretto sui criteri di tariffazione. In Corso di formazione CISA: L'indennizzo diretto nell'assicurazione RC Auto in Italia. Modelli ed esperienze, pages 1–46.
[Cummins, 2000] Cummins, J. D. (2000). Allocation of capital in the insurance industry. Risk Management and Insurance Review, 3(1).
[de Jong and Heller, 2008] de Jong, P. and Heller, G. (2008). Generalized linear models for insurance data. Cambridge University Press, New York, first edition.
[de Jong et al., 2007] de Jong, P., Stasinopoulos, M., Rigby, R., and Heller, G. (2007). Mean and dispersion modelling for policy claims costs. Scandinavian Actuarial Journal.
[Denuit et al., 2007] Denuit, M., Maréchal, X., Pitrebois, S., and Walhin, J.-F. (2007). Actuarial Modelling of Claim Counts: Risk Classification, Credibility and Bonus-Malus Systems. Wiley.
[Desantis, 2006] Desantis, S. (2006). Risarcimento diretto nell'RCA: effetti sulla riservazione e la valutazione della riserva sinistri. Technical report, ANIA.
[Desantis, 2010a] Desantis, S. (2010a). Rilevazione statistica dei sinistri CARD per tipologia di veicolo. Technical report, ANIA and ISVAP.
[Desantis, 2010b] Desantis, S. (2010b). Risarcimento diretto: uno sguardo di insieme ai risultati
che ha generato dal suo avvio. Technical report, Ordine Nazionale degli Attuari.
[Dunn and Smyth, 2005] Dunn, P. and Smyth, G. (2005). Series evaluation of Tweedie exponential dispersion model densities. Statistics and Computing, 15:267–280. doi:10.1007/s11222-005-4070-y.
[Dunn and Smyth, 1996] Dunn, P. and Smyth, G. K. (1996). Randomized quantile residuals. Journal of Computational and Graphical Statistics, 5:236–244.
[Dunn, 2010] Dunn, P. K. (2010). tweedie: Tweedie exponential family models. R package version
2.0.7.
[Embrechts et al., 2003] Embrechts, P., Furrer, H., and Kaufmann, R. (2003). Quantifying regulatory capital for operational risk. ETH Zurich.
[Feldblum, 1989] Feldblum, S. (1989). Asset liability matching for property/casualty insurers. In
Valuation Issues, CAS Special Interest Seminar, pages 117–154.
[Filippi, 1993] Filippi, E. (1993). Relazione della Commissione Filippi sulla tariffa RCA in vigore per l'anno 1993. ANIA.
[Galli and Savino, 2007] Galli, G. and Savino, C. (2007). Direct reimbursement in motor liability
insurance. Giornale dell’Istituto Italiano degli Attuari.
[Gazzetta Ufficiale, 2006] Gazzetta Ufficiale (2006). DPR 254, 18-7-2006. Gazzetta Ufficiale.
[Gazzetta Ufficiale, 2007] Gazzetta Ufficiale (2007). L. 40/2007. Gazzetta Ufficiale.
[Genton, 2003] Arellano-Valle, R. B. and Genton, M. G. (2003). On fundamental skew distributions.
[Werner and Modlin, 2009] Werner, G. and Modlin, C. (2009). Basic Ratemaking. Casualty Actuarial Society.
[Gigante and Picech, 2004] Gigante, P. and Picech, L. (2004). La scelta delle variabili tariffarie e la personalizzazione. Aspetti generali e metodologici. In Quaderni del Dipartimento Matematica Applicata. Università di Trieste.
[Gigante et al., 2002] Gigante, P., Sigalotti, L., and Picech, L. (2002). Rate making and large claims. In Proceedings of ICA Annual conference. ICA.
[Goldfarb, 2006] Goldfarb, R. (2006). Risk-adjusted performance measures for P&C insurers. Technical report, Casualty Actuarial Society.
[Grasso, 2007] Grasso, F. (2007). L'indennizzo diretto nell'assicurazione RCA in Italia. Technical report, Dipartimento di Scienze Attuariali e Finanziarie, Università La Sapienza, Rome (Italy).
[Hyndman and Khandakar, 2008] Hyndman, R. J. and Khandakar, Y. (2008). Automatic time series forecasting: The forecast package for R. Journal of Statistical Software, 27(3):1–22.
[Henry and Hsieh, 2010] Henry, J. B., III and Hsieh, P.-H. (2010). Extreme value theory for partitioned losses. Variance.
[Yan et al., ] Yan, J., Guszcza, J., Flynn, M., and Wu, C.-S. P. Applications of the offset in property-casualty predictive modeling. Technical report.
[Kaas et al., 2008] Kaas, R., Goovaerts, M., Dhaene, J., and Denuit, M. (2008). Modern Actuarial Risk Theory: Using R. Springer.
[Maggina, 2008] Maggina, F. (2008). Misure di efficienza per i sistemi BM. Approcci per la valutazione e impatto della recente normativa italiana. PhD thesis, La Sapienza, Università di Roma.
[Meyers, 2009] Meyers, G. (2009). Pure premium regression with the Tweedie model. The Actuarial Review.
[Meyers, 2010] Meyers, G. (2010). Summarizing insurance scores with the Gini curve. The Actuarial Review, (4).
[Mieli, 2010] Mieli, M. (2010). Problematiche sulla valutazione delle riserve sinistri RCA. Technical report, Ordine Nazionale degli Attuari.
[R Development Core Team, 2010] R Development Core Team (2010). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria.
ISBN 3-900051-07-0.
[Ribatet, 2009] Ribatet, M. (2009). POT: Generalized Pareto Distribution and Peaks Over
Threshold. R package version 1.1-0.
[Rigby and Stasinopoulos, 2005] Rigby, R. and Stasinopoulos, M. (2005). Generalized additive models for location, scale and shape (with discussion). Applied Statistics, 54:507–554.
[Savelli and Clemente, 2008] Savelli, N. and Clemente, G. (2008). Modelling aggregate non-life
underwriting risk: standard formula vs internal models. In AFIR Colloquium, pages 1–30.
[Savelli and Clemente, 2009] Savelli, N. and Clemente, G. (2009). Hierarchical structures in the
aggregation of premium risk for insurance underwriting. In ASTIN 2009.
[Desantis and Giuli, 2009] Desantis, S. and Giuli, G. (2009). Statistica annuale R.C. Auto. Esercizio 2007. Technical report, ANIA.
[Desantis and Giuli, 2010] Desantis, S. and Giuli, G. (2010). Statistica annuale RC Auto. Esercizio 2009 e rilevazione ad hoc dati CARD. Technical report, ANIA.
[Servizio statistiche e studi attuariali, 2008] Servizio statistiche e studi attuariali (2008). Analisi tecnica per la fissazione dei forfait della convenzione CARD per l'anno 2009. Technical report, ANIA.
[Spedicato, 2009] Spedicato, G. A. (2009). Recent legislative changes in Italian TPML insurance: actuarial considerations. In Atti convegno Amases 2009, pages 1–32.
[Stasinopoulos and Rigby, 2007] Stasinopoulos, M. and Rigby, R. (2007). Generalized additive models for location scale and shape (GAMLSS) in R. Journal of Statistical Software, 23(7):1–46.
[Stasinopoulos et al., 2008] Stasinopoulos, M., Rigby, R., and Akantziliotou, C. (2008). Instructions on how to use the gamlss package in R. Second Edition.
[Servizio statistiche e studi attuariali, 2009] Servizio statistiche e studi attuariali (2009). Analisi tecnica per la fissazione dei forfait della convenzione CARD per l'anno 2010. Technical report, ANIA.
[statistiche e studi attuariali ANIA, 2006] Ufficio statistiche e studi attuariali ANIA (1998–2006). Banca dati auto aaaa. Technical report, ANIA.
[Carter et al., 1994] Carter, T. et al. (1994). Statistical premium rating working party. In 1994 General Insurance Convention, pages 1–21.
[Vannucci, 2007] Vannucci, L. (2007). Profittabilità, mutualità e indennizzo diretto nell'RC auto: un modello per le valutazioni attuariali. In Convegno CISA 2007, pages 1–29.
[Venter, 2011] Venter, G. (2011). Mortality trend models. Casualty Actuarial Society Forum.