Managing Credit Risk In Credit Risk Domain

Transcription

Managing Model Risks in the Credit Risk Domain
Syed Sarosh, Scotiabank
Jorge Sobehart, Citibank
Alex Shenkar, SunTrust Banks Inc.
June 3, 2015
Managing Model Risks in the Credit Risk Domain
Model Risk Management Framework for Credit Risk
Syed Sarosh
Scotiabank
June 3, 2015
Disclaimer
The views expressed in this presentation are solely those of
the presenter, and should not be construed as representing
those of the presenter’s employer
Syed Sarosh, Managing Model Risks in the Credit Risk Domain, Toronto, June 3, 2015
Agenda
• Sources of model risk in the model lifecycle
• Nature and types of model risk controls
• Model risk organization and governance
Typical Sources of Model Risk in the Model Life Cycle
Identification of Purpose
• Incomplete or inaccurate identification of business needs
• Inappropriate theory & assumptions

Development & Validation
• Limited or incorrect data
• Erroneous data processing
• Wrongly applied theory
• Inaccurate analyses

Approval & Deployment
• Implementation errors
• Issues with IT controls

Production
• Incorrect data input
• Incorrect use
• Misapplication of model results

Maintenance
• Uncontrolled changes
• Infrequent parameter updates
• Inappropriate or inadequate ongoing review of “fitness for use”
Sources of Model Risk – Purpose
“To be able to ask a question clearly is two-thirds the way to getting it answered”
John Ruskin
• Front-end and back-end uses:
─ Credit risk assessment models for front-end use (e.g., origination, account management, rating)
─ Credit risk quantification models for (typically) back-end use (e.g., PD/LGD/EAD parameters)
• Models intended for regulatory purposes typically have defined requirements
they should meet, making it somewhat easier to identify issues
• Models for business use require more upfront discussion and clarity around
the intended purpose, scope and objectives
• If available at this stage, clarity on oversight and control expectations around
model usage can be valuable for model developers and validators
Sources of Model Risk – Data
“Torture the data, and it will confess to anything”
Ronald Coase
• The data landscape for credit risk modelling:
─ Retail
─ Small business
─ Mid-market
─ Large corporate, financial and sovereign
• Data is the outcome of historical processes (including definitions,
interpretations and controls) and related systems
• Issues around data depth and breadth, and their interaction with the
adopted model development methodology, are fairly well understood
• Less widely understood is the plausible impact of variations in historical data
experience across institutions on modelling outputs (i.e., output differences
may be attributed more to methodology than data)
Sources of Model Risk – Data (continued)
“Data that is loved tends to survive”
Kurt Bollacker
• Increasing expectations around assessment and validation of data
quality, especially inputs that are important to the modelling process
• Data elements used in modelling may require more detailed quality
assessments than typical portfolio balance reconciliations or
representativeness analyses
• Increasing interaction between model validators and data
governance on quality assessments, including identifying limitations
that may necessitate special modelling treatment
Sources of Model Risk – Model Development
“The purpose of models is not to fit the data but to sharpen the questions”
Samuel Karlin
• Traditional interplay between role of statistical methodologies and fundamental credit insights across the credit risk spectrum needs to be carefully managed, with the latter increasing in importance for non-retail sectors
• For sectors with limited default/loss data, statistical insights can still be used with credit fundamentals in a rigorous manner to inform model development and model stakeholders
• The two stages of a typical credit risk model, scoring and calibration, have fairly established development protocols
[Figure: positioning of these model types on a matrix of Statistical Importance (Low–Medium–High) versus Qualitative Importance (Low–Medium–High)]
Sources of Model Risk – Model Development
Using the example of traditional credit risk scorecards (used for rank-ordering likelihood of default), we can see how model design/architecture and the model usage environment are typically interconnected, and how they pose important considerations for applying judgment to model outputs to mitigate model risk.
(Columns: Retail | Middle Market | Large Corporate | Financial Sector | Sovereigns/Public Sector)

MODEL DESIGN / ARCHITECTURE
• Commonly Used Model Type: Statistically derived PD/scoring models | Statistically derived PD/scoring models | Scorecards derived based on analytics and/or judgment | Judgmentally derived rating models | Judgmentally derived rating models
• Degree of Model Specialization: LOW (general models) | LOW (general models) | HIGH (by industry) | HIGH (by sub-sectors) | HIGH (by sub-sectors)
• Data Used: Very large default datasets | Relatively large default datasets | Some default data, or available sample of credit ratings | Available sample of credit ratings | Available sample of credit ratings
• Basis of Calibration: Default data | Default data | Default data / credit ratings | Credit ratings | Credit ratings
• Complexity of Model Inputs: LOW (account/payment history, utilization, credit requests, etc.) | MEDIUM (financial & some basic qualitative indicators) | HIGH (specialized ratios & complex qualitative factors) | HIGH | HIGH
• Importance of Qualitative Factors: NONE | LOW-MEDIUM | MEDIUM-HIGH | HIGH | HIGH
• Complexity of Model Algorithm: HIGH (black-box) | MEDIUM-HIGH (black-box) | LOW (transparent weighted approach) | LOW (transparent weighted approach) | LOW (transparent weighted approach)

MODEL USAGE ENVIRONMENT
• User Specialization & Sophistication: LOW | MEDIUM | HIGH | HIGH | HIGH
• Time Spent Per Rating: LOW | MEDIUM | HIGH | HIGH | HIGH
• Impact of Each Rating Decision: LOW | MEDIUM | HIGH | HIGH | HIGH
• Focus on External Information/Benchmarks: N.A. | LOW | HIGH | HIGH | HIGH
• Expected Frequency of Judgmental Overrides: NONE | LOW-MEDIUM | HIGH | HIGH | HIGH

IMPLICATIONS FOR MODEL RISK MITIGATION VIA APPLICATION OF JUDGMENT
• Expected Nature of Judgmental Overrides: N.A. | Some identifiable patterns; remainder idiosyncratic | Likely to be idiosyncratic and complex (all three non-retail segments)
Sources of Model Risk – Model Validation
“Absolutely … asking the right question is one of the most important skills”
Attributed to Shane (web pseudonym)
• A key determination relates to the scope of validation (from model purpose to data to methodology). Important aspects not covered by a quantitative group may need to be assessed elsewhere
• Fairly extensive array of techniques available for quantitative validation
methods (e.g., related to credit risk rank-ordering and calibration tests)
• Increasingly, tight timelines for model deployment necessitate adoption of
milestone-based or parallel-validation approaches, instead of the
traditional sequential process from development to validation
• Challenges with perception of model “fail rates” (or identification of
material issues) as evidence of sufficiently robust effective challenge
Sources of Model Risk – Model Approval,
Implementation and Ongoing Maintenance
• Broad range of approaches to role of validation opinion in
approval (from none without validation concurrence to ones
that permit such exceptions but control/monitor them)
• Increasing expectations on role of validation in model
implementation to ensure that model is implemented as
validated and approved
• Ongoing review and maintenance cycle important to identify
continued model appropriateness/deficiencies or usage issues,
including monitoring of overrides or adjustments to model
outputs (where applicable)
Nature and Types of Model Risk Controls
• Depending on the control environment in which a credit risk model
will be used, the primary and secondary mitigants to model risk can
be clearly identified
• Regular model performance monitoring (e.g., accuracy, calibration
testing, etc.) in an ongoing cycle of model review, validation and
refinement often required as primary mitigant for models which are
used without direct human oversight (e.g., high-volume use in the
retail space)
• In the non-retail context, direct application of judgment to model
outputs can act as primary mitigant given larger exposure sizes and
role of traditional credit analysis/judgment, with model
review/validation cycle acting as a secondary mitigant
Model Risk Organization and Governance
• Location (i.e., risk management versus business line) and degree of
centralization of model development are important factors in
determining effectiveness of the development-validation interface
• Organization of model teams:
─ Model Development: Usually by business/portfolio type (e.g., retail versus non-retail)
─ Model Validation: By business/portfolio or by upstream/downstream application (e.g., upfront credit risk assessment versus back-end credit risk quantification models)
• Model risk governance covers all aspects of the model life cycle,
including processes such as model inventory management, use of
development & validation guidelines, issue tracking & resolution,
model use and ongoing refinement
Wholesale Credit Risk Models, Stress Testing
and Risk Capital Calculation:
Quantitative and Implementation Challenges
Jorge Sobehart
Managing Director
Credit and Operational Risk Analytics
Citi Franchise Risk Architecture
International Model Risk Management Conference
Toronto, June 3, 2015
The analysis and conclusions set forth are those of the authors only. Citigroup is not responsible for
any statement or conclusion herein, and no opinions or theories presented herein necessarily reflect
the position of the institution.
Quantitative and Implementation Challenges
 Product coverage and key risk factors for credit risk models for wholesale portfolios
 Creating consistency in the understanding of credit stress testing across global businesses
 Modeling challenges – Advanced credit risk models for PD/LGD/EAD parameters, capital calculation and credit risk stress testing
 Data challenges – How use of the best modeling approach can be restricted by data availability
 Implementation challenges – Governance and sustainability: measuring credit risk and stress losses and managing model risk
 Validation challenges – Framework soundness, documentation and testing
Model Development and Model Risk
Model Development
 Step 1. Identification of a real life problem (Reality)
 Step 2. Interpretation of information, identification of causal
factors, and generalizations
 Step 3. Simplifications and framework selection
 Fundamental assumptions to frame the problem
 Technical assumptions to make the problem tractable
 Step 4. Model building (Abstraction)
Model Risk Identification and Mitigation
Model Risk is the failure to identify the limitations and consequences of
the simplifications, assumptions and potential errors introduced when
building, calibrating and implementing a model. Model risk cannot be
avoided but can be mitigated.
Model risk mitigation by model component (columns: Developmental Evidence | Benchmarking | Back-testing | Sensitivity Analysis, Stress Testing and Limit Cases):

• Inputs: Data relevance, data quality | Alternative sources of data | Identification of outliers | Extreme input values
• Assumptions: Historical evidence, contextual information | Alternative assumptions | — | Conditions that can invalidate the model assumptions
• Model Specification: Sound/proven theory, technical derivations | Alternative frameworks | — | Conditions that can invalidate technical assumptions
• Model Implementation: Technical implementation, algorithms, systems | Working examples, alternative implementation | — | Extreme input values or corner cases that can result in model failure
• Outputs: Alignment to expectations | Outputs from alternative models | Historical comparisons | Sensitivity of outputs to assumptions
Modeling and Data Challenges
► Scenarios – Scenario severity is usually tied to an
institution’s risk and vulnerability to stress events.
► Analytics – Credit loss forecasting in the industry is based
on a wide range of models driven by systemic stress risk
factors and obligor and product characteristics.
► E.g., market-implied models vs. actuarial models
► Data Challenges – There is a trade-off between model complexity and accuracy, limited by data availability for building, testing and using models. This applies both to models developed in-house and to vendor solutions.
Understanding Models and Assumptions
[Figure: the PD, Corr, LGD and EAD “puzzle pieces” – reordering the pieces creates a gap. How can this be true? Review the fundamental and technical assumptions.]
Model uncertainty: for meaningful estimates, include the uncertainty about data, parameters and structural relationships.
Credit Risk Assessment Tools - Examples
Market-Implied PD and LGD Models
• Models – Idealized Merton-style options pricing models
• Coverage – Global. Obligors with public equity or debt
• Drivers – Accounting and equity market information
• Calibration – Publicly available defaults or losses (global coverage), indirect link to economic variables and stress test drivers
Actuarial-Statistical PD, Risk Migration and Loss Models
• Models - Multi-year, global, industry or product specific
• Coverage - Global. Public and private obligors
• Drivers - Accounting information, economic and industry variables,
historical defaults and expert judgment
• Calibration - Usually mapped to risk ratings, default rates and losses,
direct link to economic variables and stress test drivers
Market-Implied PD Models
[Figure: Merton framework diagram – risk components feeding a function F(x1, x2, …, xn) that maps to the Probability of Default]
Included in the model:
• Capital Structure – Leverage, Debt Term Structure
• Market Information – Stock Volatility, Equity Price (driving the Distance From Default)
Variables not explicitly included in the model:
• Profitability – Return on Assets, Return on Equity
• Liquidity – Available Capital, Access to Credit
• Market Presence – Firm Size, Level of Competition
• Management Quality – Earnings Restatements, Bad Press
• Credit Rating, Interest Rates
Market-Implied PD Models
[Figure: asset price distribution over time relative to the default point (liabilities), with the distance from the default point measured on historical or simulated prices – the market-implied view of credit risk]
Question 1. What is the distribution of fundamental values?
Question 2. How do assets react to stress drivers?
Market-Implied PD Models – Equity As An Option
Market Equity = Present Value (Residual Value of the Firm)
Stock Volatility = Leveraged Volatility of Assets
1. Calculate the effective value of the firm’s obligations: D0 (STD, LTD)
2. Use equity information to estimate:
 Market value of the firm’s assets: A
(Random Walk)
 Volatility of assets: VA
3. Estimate the firm’s Distance-From-Default: DD = (A – D0)/(VA x A)
4. Estimate the Probability of Default: PD
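Steps 3 and 4 can be sketched in a few lines. This is a simplified illustration under the idealized random-walk assumption; the input values are hypothetical, and a real implementation would infer A and VA iteratively from equity prices rather than take them as given.

```python
from statistics import NormalDist

def merton_dd_pd(assets, asset_vol, default_point):
    """Distance-from-default and PD under the idealized Merton setup.

    Uses the slide's linearized form DD = (A - D0) / (VA * A), and maps
    DD to a PD via the standard normal tail probability.
    """
    dd = (assets - default_point) / (asset_vol * assets)
    pd = NormalDist().cdf(-dd)
    return dd, pd

# Hypothetical firm: assets $120, asset volatility 25%, obligations D0 = $80
dd, pd = merton_dd_pd(120.0, 0.25, 80.0)
```

The same tail-probability step is where the framework's Gaussian assumption enters, which is relevant to the limitations discussed on the following slides.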
Limitations of Market-Implied PD Models
• Assets are unobservable and must be inferred from model
• Credit cycle effects are not included in the framework or calibration
• Market under and over-reaction to price trends
• Prone to false positive signals in volatile markets
• Investor behavior and fat-tail effects may impact the market-implied estimates of credit quality
[Figure: frequency distribution of 2008–09 S&P 500 minute returns (normalized) compared with a rational Gaussian model and a market under- and over-reaction model]
Default Probability Estimation in Practice
• Long-term model performance tests show that models with additional financial information can outperform market-implied models (based on idealized market assumptions)
• The performance gap is greater for low credit quality obligors (exposed to higher levels of uncertainty), and during market downturns
[Figure: cumulative accuracy profiles, HPD vs. Merton-style models, 2000–2011 (53,233 observations, 844 defaults) – realized defaults vs. population sorted from high to low risk, showing better identification of defaulters (reduction of false positive signals) by the HPD model]
C&I Model Performance – Downturn Effects
[Three chart-only slides]
S&P 500 High Frequency Returns – 2008-2013
[Figure: distribution of S&P 500 minute returns for different periods, 2008–2013 (symbols), fat-tail model (solid line) and normal distribution / random-walk theory (dashed line), ~465,000 samples, plotted against normalized return Z on a log(frequency) scale]

S&P 500 Daily Returns
[Figure: distribution of S&P 500 daily returns (symbols), fat-tail model (solid line) and normal distribution (dashed line)]

S&P 500 Minute to Monthly Returns
[Figure: distribution of S&P 500 minute to monthly equity returns (symbols), fat-tail model (solid line)]
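The gap between the Gaussian and fat-tail curves can be made concrete with a small computation. This sketch contrasts a normal distribution with a Student-t with 3 degrees of freedom, used only as a generic fat-tail stand-in (the presenter's specific fat-tail model is not reproduced here).

```python
from math import sqrt, atan, pi
from statistics import NormalDist

# Two-sided probability of a |Z| >= 5 move under a normal distribution
normal_tail = 2 * NormalDist().cdf(-5)

def t3_survival(x):
    # Closed-form survival function of a Student-t with 3 degrees of freedom
    return 0.5 - (1 / pi) * (atan(x / sqrt(3)) + sqrt(3) * x / (x * x + 3))

# Same move under the generic fat-tailed model
fat_tail = 2 * t3_survival(5.0)
ratio = fat_tail / normal_tail  # fat-tail model assigns vastly more mass
```

Even this crude stand-in assigns a 5-sigma move several orders of magnitude more probability than the Gaussian, which is why a "rational Gaussian" model badly understates the minute-return tails shown above.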
Statistical PD and Rating Migration Models
[Diagram: obligor outcome tree – default with probability PD_R; no default with probability 1 − PD_R; conditional on no default, a rating review with probability p_RR′ leads to a rating revision (new rating), while no rating review (probability p_RR) leaves the rating unchanged]
(*) Here R is the obligor’s risk rating
Probability of Default and Rating Migration
Use empirical relationships to construct rating migration models that can be used at different points in the credit cycle
[Figure: credit quality transition matrix with elements TM_jk = p_jk(X1, …, Xn), linking a rating migration model (transition probabilities p_jk) and a PD model (PD_1, PD_2, …, PD_M); log(Odds of default) by S&P and Moody's rating for 1- to 10-year horizons (1987–2007); BBB/Baa average transition rates (1987–2007) for upgrades, downgrades and probability of default, model (lines) vs. agency data (symbols), log scale]
Historical Structural Relationships for PD
log( PD_R / (1 − PD_R) ) = (1/a) (R − b)

Average log(Odds of Default) by agency risk rating for different time horizons (1 to 10 years)
[Figure: average 1- to 10-year PDs derived from cumulative default rates (1983–2009), by rating from AAA to CCC, shown alongside 1-year PDs for 1920, 1925, 1930 and 1935 – model consistency between recent history and the 1920–1935 period]
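The structural log-odds relationship on this slide can be fitted by ordinary least squares; a minimal sketch using hypothetical 1-year PDs per numeric rating notch (illustrative values, not agency data):

```python
import math

# Hypothetical 1-year PDs by numeric rating notch (illustrative only)
pds = {1: 0.0001, 4: 0.0005, 7: 0.002, 10: 0.008, 13: 0.03, 16: 0.12, 19: 0.35}

ys = [math.log(p / (1 - p)) for p in pds.values()]   # log-odds of default
xs = list(pds)                                       # rating notches R
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)

# Match the slide's form: log(PD/(1-PD)) = (1/a)(R - b)
a = 1 / slope
b = mx - my * a

def pd_from_rating(r):
    z = (r - b) / a
    return 1 / (1 + math.exp(-z))   # invert the log-odds mapping
```

Fitted this way, PDs for unobserved notches or sparse horizons can be interpolated along the structural line rather than read from noisy individual cells.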
Now, let’s assume that analysts perceive transition risk as risk severity
Historical Relationships for Rating Migration

log(Odds) = log( p_jk / (1 − p_jk) ) = (1/a_d,u) ( (R_j − R_k) − b_d,u )

Here p_jk is the probability of a downgrade (upgrade) from rating R_j to rating R_k during period T
[Figure: BBB 1-year transition rates for upgrades and downgrades – average 1983–2009 (symbols) vs. estimate based on the rating transition model (line), log scale]
Can we use these structural relationships to extrapolate rating transition probabilities?
Historical Relationships for Rating Migration
[Figure: average 1-year transition rates, 1983–2009, for AA, Aa3, A, A2, BBB and Baa2 ratings from different rating agency data (symbols), and transition model (solid lines), log scale]
Creating PD and Rating Migration Scenarios
Step 1. Construct a structural model from historical risk rating data – find relationships between rating upgrades and downgrades
[Figure: BBB 1-year transition rates, model (line) vs. average 1983–2009 (symbols)]

Step 2. Find structural parameters from historical data and link them to economic variables (e.g., GDP, unemployment, industry indices, etc.)
[Figure: economic variables X1 and X2, 1980–2005]

Step 3. Use economic forecasts to estimate probability of default and risk rating migration scenarios
[Figure: simulated PDs vs. historical default rates – 1-year default rates for S&P CCC-C and Moody's Caa-C (symbols) against model mean and mean ± 1 SD band (solid lines), 1980–2015]
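In skeleton form, Steps 1–3 reduce to rolling a portfolio's rating mix through a one-year transition matrix. In this sketch the matrix entries are hypothetical constants, whereas in practice each p_jk would be a function of the Step 2 economic variables.

```python
# Hypothetical one-year transition matrix; states: [A, BBB, BB, Default].
# Each row sums to 1; the default state is absorbing.
tm = [
    [0.92, 0.06, 0.015, 0.005],
    [0.04, 0.88, 0.06,  0.02],
    [0.01, 0.05, 0.86,  0.08],
    [0.0,  0.0,  0.0,   1.0],
]

def step(dist, tm):
    """One Markov step: redistribute the rating mix through the matrix."""
    n = len(dist)
    return [sum(dist[j] * tm[j][k] for j in range(n)) for k in range(n)]

dist = [0.3, 0.5, 0.2, 0.0]      # starting rating mix of the portfolio
for _ in range(3):               # three-year horizon
    dist = step(dist, tm)
cum_default = dist[-1]           # cumulative default share after 3 years
```

A scenario engine would replace the constant entries with p_jk(X1, …, Xn) evaluated on each forecast path, producing the default-rate bands shown in the Step 3 figure.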
Questions
Validation of Retail Credit Risk
Models for Use in CCAR: Qualitative
and Quantitative Approaches.
Alex Shenkar, Head of Credit Risk Model Validation
SunTrust Banks, Inc.
06/03/2015
Disclaimer
 The views expressed in this presentation are those of the presenter and do not represent the views of SunTrust Banks, Inc.
 SunTrust Banks, Inc. assumes no liability in connection with any use of this information and makes no warranty or guarantee that the information presented here is current, accurate, or complete.
 The content is owned by the presenter and intended for information purposes only.
 The presenter expressly disclaims any obligation to update the information presented.
 SunTrust Banks, Inc., with total assets of $190 billion as of March 31, 2015, is one of the nation's largest
and strongest financial holding companies. Through its banking subsidiaries, the company provides
deposit, credit, trust, and investment services to a broad range of retail, business, and institutional clients.
Other subsidiaries provide mortgage banking, brokerage, investment management, equipment leasing, and
capital market services.
 Atlanta-based SunTrust enjoys leading market positions in some of the highest growth markets in the
United States and also serves clients in selected markets nationally. The company operates approximately
1,450 retail branches and 2,200 ATMs in Alabama, Arkansas, Florida, Georgia, Maryland, Mississippi, North
Carolina, South Carolina, Tennessee, Virginia, West Virginia, and the District of Columbia. SunTrust Banks,
Inc. was founded in 1891 and is headquartered in Atlanta, Georgia, USA.
Alex Shenkar, Validation of Retail Credit Risk Models for Use in CCAR, Toronto, 06/03/15
Introduction
 Comprehensive Capital Analysis and Review (CCAR) is an annual regulatory requirement that includes proposed capital actions such as changes in dividends, share repurchases, and capital raises. CCAR measures the impact of supervisory scenarios on the regulatory capital ratios by projecting the balance sheet, risk-weighted assets and net income over nine quarters.
 The CCAR framework requires estimation of credit losses, including: loan losses and changes in the allowance for loan and lease losses; losses on loans held for sale and measured under the fair value method; and other-than-temporary impairment losses on investment securities.
 As a part of CCAR submission, regulators expect model validation documentation
demonstrating end-to-end effectiveness of loss-estimation methodologies on the
following elements: conceptual soundness, assumptions, model robustness, risks
and limitations, sensitivities, use of qualitative adjustments or other expert
judgment, exception reports, and outcomes analysis.
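The nine-quarter projection described above can be sketched, with entirely hypothetical figures, as a simple roll-forward of capital and risk-weighted assets:

```python
# Minimal sketch: regulatory capital ratio = capital / risk-weighted assets,
# rolled forward with projected net income, stressed credit losses and
# planned capital actions. All numbers are hypothetical ($bn).
capital, rwa = 18.0, 150.0
net_income = [0.5] * 9                     # projected quarterly net income
credit_losses = [0.9, 1.2, 1.4, 1.3, 1.1, 0.9, 0.8, 0.7, 0.6]  # stressed path
dividends = [0.2] * 9                      # proposed capital actions
rwa_growth = 0.005                         # quarterly RWA growth assumption

ratios = []
for q in range(9):
    capital += net_income[q] - credit_losses[q] - dividends[q]
    rwa *= 1 + rwa_growth
    ratios.append(capital / rwa)
min_ratio = min(ratios)                    # binding constraint under stress
```

The loss-estimation models discussed in the rest of this presentation feed the `credit_losses` path; the minimum projected ratio is what the supervisory scenarios effectively test.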
Evaluation of Conceptual Soundness
 Evaluation of conceptual soundness is a foundational component of any model validation, in particular for a model designed to be used in CCAR.
 It involves assessing the quality of the modeling methodology, data, development history, empirical evidence, testing, implementation, and supporting documentation. Moreover, this is not a once-in-a-lifetime exercise: evaluation of conceptual soundness should be periodically repeated as long as the model is in use.
 Validators should ensure that judgment exercised during model development is
well informed, carefully considered, documented, and consistent with published
research and regulatory guidance. Model development documentation should
clearly convey an understanding of model limitations and assumptions.
 For CCAR models, not all validation activities can be executed before each model is
used. For example, certain types of outcomes analysis will be incomplete such as
comparison of realized outcomes against projections generated under CCAR
scenarios.
 A robust evaluation of conceptual soundness, focusing on available artifacts, should be conducted prior to the model’s first use.
Assumptions
DILBERT © 2012 Scott Adams. Used By permission of UNIVERSAL UCLICK. All rights reserved
 By nature of CCAR process, almost no model can be built without making significant
assumptions.
 While assumptions are unavoidable, they should be conservative and well justified.
 Unclear or frivolous assumptions could provide a one-sided benefit to the bank and are not acceptable.
Assumptions - Definition
 To prevent frivolous assumptions, Model Risk Management organization should
provide clear definitions to developers.
 For example, an assumption can be defined as a guess or belief about model development data, input variables, methodology, outputs, use or settings that is taken to be true or valid without any proof or possibility of testing at the point of model development.
 Clarity of definition helps to identify true assumptions, which represent a source of incremental model risk and uncertainty.
Assumption Evaluation: Is it testable now?
– Yes: not an assumption; document it as a test
– No: it is an assumption; document it as an assumption
Assumptions – Source of Risk
 The ultimate issue with assumptions is potential circularity: using assumptions as a foundational "input" to a model design, then proceeding to "prove" that the model's "output" supports the validity of those assumptions.
 Models based on assumptions that are not possible to test can give a false sense of
precision, and that could be misleading, driving significant estimation errors.
 Each assumption should be identified as a source of risk and require a mitigation plan
until such assumption can be tested (if ever). Post-development assumption testing should be part of ongoing model monitoring and include sensitivity analysis. The testing should consider the possibility that an assumption will become invalid.
 Model development documentation should capture the rationale and any empirical
evidence supporting validity of assumptions and consistency with CCAR scenario
conditions.
Identify Assumption → Identify Sources of Risk → Develop Mitigation Plans
Example – Model Fit Validation
[Figure: in-sample model fit and scenario forecasts]
 The figure above represents both in-sample fit results and sample scenario forecasts under Baseline, Adverse and Severely Adverse conditions.
 Both quantitative and qualitative validations are equally important.
Example – Model Fit Validation Criteria
 Scenario forecasts are not expected to perfectly match historical stress experience, given that the underlying macroeconomic scenarios do not follow the exact same path as during the last crisis; however, the comparison is a useful common-sense check in support of the conceptual soundness of the model.
 Qualitative validation should be based on a set of consistently reusable criteria, for
example:
 Rank ordering of scenarios. The model should produce higher severely adverse and adverse forecasts compared to the baseline.
 Reasonable response to scenarios. The model should respond to stress scenarios with an amplitude comparable, or at least rationally related, to historical losses during the last financial crisis.
 Non-trending baseline. The model should not produce an upward- or downward-trending baseline forecast.
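These qualitative criteria lend themselves to simple automated checks; a sketch with hypothetical nine-quarter loss-rate paths (function names and thresholds are illustrative, not a prescribed standard):

```python
# Hypothetical quarterly loss-rate forecasts under the three scenarios
baseline = [0.010, 0.011, 0.010, 0.011, 0.010, 0.011, 0.010, 0.011, 0.010]
adverse  = [0.012, 0.015, 0.018, 0.020, 0.019, 0.018, 0.016, 0.015, 0.014]
severe   = [0.014, 0.020, 0.027, 0.032, 0.030, 0.027, 0.024, 0.021, 0.019]
crisis_peak = 0.035          # observed peak loss rate in the last crisis

def rank_ordering(base, adv, sev):
    # Severely adverse >= adverse >= baseline in every quarter
    return all(s >= a >= b for b, a, s in zip(base, adv, sev))

def reasonable_response(sev, peak, tol=2.0):
    # Peak stressed losses within a rational multiple of crisis experience
    return max(sev) <= tol * peak

def non_trending(base, drift_tol=0.002):
    # Baseline should not drift materially over the nine quarters
    return abs(base[-1] - base[0]) <= drift_tol
```

Codifying the criteria this way makes them consistently reusable across models and validation cycles, which is the point made above.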
Evaluation of Overlays
 It is common for CCAR model users to apply some form of expert judgment or management adjustment overlay to modeled outputs to compensate for model limitations.

Overlays should be introduced in response to a particular risk or to compensate
for a known limitation.

Overlay should not be considered as a permanent solution. Extensive use of
management overlays should trigger a discussion about a need for new or
improved modeling approaches. Overuse of overlays could suggest high levels of
model uncertainty and call into question a model's conceptual soundness.
 Most importantly, overlays should be grounded in specific model weaknesses or identified issues, and not used as a general “catch-all” adjustment to influence aggregate modeled losses in the interest of conservatism.
 Overlays built into the model are always included within the scope of the overall model validation process. Post-validation overlays to account for risks not captured by the model should receive an adequate level of independent review, comparable to independent model validation.
Alex Shenkar, Validation of Retail Credit Risk Models for Use in CCAR, Toronto, 06/03/15
49
Final Thoughts
 Quantitative and qualitative validations are equally important. Use common sense and economic theory; common sense is not all that common. An approximate answer to the right question is far more valuable than a precise answer to the wrong question.
 Know the context. Don’t try to validate a model without understanding the non-statistical aspects of the real-life business practices you are trying to subject to statistical analysis.
 Be prepared to compromise. There are no standard solutions to non-standard problems.
 Statistical significance alone is not enough. Use economic reasoning, historical perspective, and a wide range of evidence from different sources.
 Discourage use of multi-purpose models. A good operational probability of default model may not be just as good for use in CCAR.
 Pursue end-to-end transparency in loss forecasting methodology.
 Expect a healthy dose of bi-directional criticism, which is not just normal but essential to good model development and validation practices.
References
1. Board of Governors of the Federal Reserve System. “Comprehensive Capital Analysis
and Review 2015: Summary Instructions and Guidance.” October 17, 2014.
2. Kennedy, Peter E., Sinning in the Basement: What are the Rules? The Ten
Commandments of Applied Econometrics. Journal of Economic Surveys, Vol. 16, pp.
569-589, 2002.