Effectively Measuring, Managing and Monitoring Model Risk

Transcription
Abhinav Anand

The opinions expressed in this article are the personal opinions of the presenter. Discover Financial Services is not responsible for the accuracy, completeness, suitability, or validity of any information in this article. The information, facts, or opinions appearing in the article do not reflect the views of Discover Financial Services, and Discover Financial Services does not assume any responsibility or liability for the same.

In Search of the "One" Best Model

"All models are wrong but some are useful." - George Box, co-developer of the Box-Jenkins method

Model Risk Management Across the Model Lifecycle

Model risk is defined as the potential for adverse consequences from decisions based on incorrect or misused model outputs and reports. This risk exposes the institution to the possibility of:
• Financial loss
• Incorrect business decisions
• Misstatement of financial disclosures
• Damage to the company's reputation

The model lifecycle runs from model design, to model testing (by the developer), to validation/risk assessment (by MRM), to implementation and usage, to ongoing monitoring, governance, and control.

The three primary drivers of risk are:
1. Model development and implementation errors
   - Technical errors in model development or model implementation, such as errors in the data or computer code underlying a model
   - Errors in judgment by model developers, such as an inappropriate model methodology or approach
2. Model misuse
   - Misapplication of models, or model results, by model users
   - Use of poorly performing or unsupported models
3. Inherent model limitations and uncertainty
   - Models are, at best, an approximation of reality

Supervisory guidance expects BHCs to manage model risk like any other risk.

Model Risk Management Framework

Key considerations:
• Clear roles and responsibilities
• Governance and escalation
• Controls across the entire model lifecycle
• Staffing and costs

Identify
• Identify model risks emerging from limitations in model development, implementation, and performance monitoring
• Review model documentation for internally built and vendor-supplied models
• Create model dependency maps to evaluate aggregated model risk (see the inventory sketch after the next slide)

Measure
• Assign a risk rating tier to all models
• Challenge model outcomes through benchmarking methods
• Quantify the impact of limitations identified in the validation process

Monitor
• Monitor model performance against thresholds of operability
• Track new and incremental model usage
• Manage the annual review of models for impact due to changes in the economic environment and/or business model

Manage
• Manage action plans against identified risks for timely mitigation
• Manage the exception process for model usage

Report
• Report individual and aggregated model risk via appropriate KRIs
• Escalate action plans through the governance structure
• Provide reporting to risk committees and the Board

Model Risk Measurement and Monitoring

• Inherent model risks can be measured based on breadth of usage, complexity, potential impact, etc., and mitigated using controls through the three lines of defense
• Residual risks that affect forecast/prediction uncertainty should be monitored through measures such as model performance and outstanding action plans

Inherent risk
• Weaknesses: assumptions, data, process
• Limitations: choice of methodology; uncertainty of estimation due to factors outside the modeler's control

Controls
1. 1st line of defense: development and implementation testing
2. 2nd line of defense (model risk office): initial, change, and annual model validation
3. 3rd line of defense: internal audit

Residual risk
• Model Stability Index: model performance, forecast/prediction stability, model structure stability, model diagnostic issues
• Outstanding action plans
• Models in policy exception
• Compensating controls in the form of a model risk buffer or overlay
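The identify and measure activities above lend themselves to a centralized model inventory. Below is a minimal, hypothetical Python sketch of such a record, with risk-tier assignment and a one-hop dependency map; every field name and model name is an illustrative assumption rather than a prescribed schema.

```python
# A minimal, hypothetical sketch of a centralized model-inventory record,
# supporting the "identify" and "measure" activities above: risk-tier
# assignment and a dependency map for evaluating aggregated model risk.
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    model_id: str
    name: str
    tier: int                                  # 1 = highest-risk tier
    business_use: str
    upstream_ids: list = field(default_factory=list)  # models feeding this one

def dependent_models(inventory, model_id):
    """Return IDs of models that consume the output of `model_id` (one hop)."""
    return [m.model_id for m in inventory if model_id in m.upstream_ids]

inventory = [
    ModelRecord("M1", "PD scorecard", tier=1, business_use="credit decisioning"),
    ModelRecord("M2", "Loss forecast", tier=1, business_use="CCAR",
                upstream_ids=["M1"]),
    ModelRecord("M3", "Pricing model", tier=2, business_use="marketing",
                upstream_ids=["M1"]),
]

# An error in M1 propagates to every dependent model, so M1's aggregated
# risk exceeds its standalone rating.
print(dependent_models(inventory, "M1"))  # -> ['M2', 'M3']
```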
Setting Guardrails for Monitoring Model Risk

• Establishing a model risk appetite sets the risk boundaries; like other operational risk categories, this could be a qualitative measurement
• Key risk indicators are generally quantitative measurements:
  - Governance related
  - Model performance related
• There are numerous challenges in setting key risk indicators:
  - Determining thresholds
  - Aligning monitoring activities to the risk tiering of models
  - Aggregating results at the product and business level

Framework for Ongoing Model Performance Monitoring

An ongoing performance monitoring framework could include the following three components:
• Establishing objective operating thresholds: a top-down risk appetite from the board and senior management should be considered so that thresholds are objective and consistent across model families and business units
• Model-level performance metrics: appropriate performance metrics should be established to monitor the performance of each model, depending on the type (or family) of model
• Aggregated reporting of model risk: an aggregated report of ongoing model performance should be generated by model family, business line, and portfolio to facilitate reporting to the board and senior management

Model Monitoring Categories

• Model performance
  - Rationale: model forecasts should adequately capture the trend of actual or realized numbers
  - Measurement: Mean Percentage Error (MPE) and Mean Absolute Percentage Error (MAPE)
• Model forecast stability
  - Rationale: model forecasts should be consistent and stable from quarter to quarter under a similar economic outlook
  - Measurement: absolute deviation, via Mean Absolute Percentage Deviation (MAPD); directionality, via correlation
• Model structure stability
  - Rationale: model coefficients should be stable and remain significant across quarterly forecasts; this is especially important for time series models
  - Measurement: stability, via quarter-over-quarter percent change in coefficients; significance, via p-value
• Model diagnostic issues
  - Rationale: diagnostic tests should not indicate severe violations of the underlying statistical assumptions when the model is refit for new periods
  - Categories: stationarity, autocorrelation, normality, heteroskedasticity

Code sketches of these measurements, the diagnostic tests, and the MSI roll-up follow the next two slides.

Threshold by Model Performance Category

Model performance
• MPE: Low ≤ 10%; Medium 10-20%; High > 20%
• MAPE: Low ≤ 10%; Medium 10-20%; High > 20%

Forecast stability
• MAPD: Low ≤ 20%; Medium 20-50%; High > 50%
• Correlation: Low ≥ 70%; Medium 50-70%; High < 50%

Model structure stability
• Coefficient stability (QoQ % change): Low ≤ 50%; Medium 50-75%; High > 75%
• p-value: Low ≤ 0.05; Medium 0.05-0.10; High > 0.10

Model diagnostic issues
• Goodness-of-fit (R2): Low ≥ 75%; Medium 50-75%; High < 50%
• Stationarity (test p-value): Low ≤ 0.05; Medium 0.05-0.10; High > 0.10
• Autocorrelation (DW): Low 1.8-2.2; Medium 1.6-1.8 or 2.2-2.5; High < 1.6 or > 2.5
• Normality (test p-value): Low > 0.10; Medium 0.05-0.10; High ≤ 0.05
• Heteroskedasticity (test p-value): Low > 0.10; Medium 0.05-0.10; High ≤ 0.05

Example Performance Metrics Scoring

• Three-point scoring scheme
• Score aggregation to arrive at a Model Stability Index (MSI)

Major component weights:
• Model Performance: 50%
• Forecast Stability: 25%
• Model Structure Stability: 15%
• Model Diagnostic Issues: 10%

Measurement weights within each component:
• Model Performance (50%): MPE 50%, MAPE 50%
• Forecast Stability (25%): MAPD 50%, Correlation 50%
• Model Structure Stability (15%): Coefficient Stability 50%, p-value 50%
• Model Diagnostic Issues (10%): Goodness-of-Fit (R2) 35%, Stationarity 35%, Autocorrelation (DW) 20%, Normality 5%, Heteroskedasticity 5%
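To make the performance and forecast-stability measurements concrete, here is a minimal Python sketch. The deck does not spell out the formulas, so the definitions below are assumptions: MPE and MAPE compare a forecast against actuals, while MAPD and correlation compare the current forecast vintage against the prior quarter's vintage over the overlapping horizon.

```python
# A minimal sketch of the four performance/stability measurements named on
# the monitoring-categories slide. The exact formulas are assumptions; the
# deck names the metrics but does not define them.
import numpy as np

def mpe(actual, forecast):
    """Mean Percentage Error: signed bias of the forecast."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean((forecast - actual) / actual)

def mape(actual, forecast):
    """Mean Absolute Percentage Error: magnitude of forecast error."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs((forecast - actual) / actual))

def mapd(prior_vintage, current_vintage):
    """Mean Absolute Percentage Deviation between successive forecast vintages."""
    p, c = np.asarray(prior_vintage, float), np.asarray(current_vintage, float)
    return np.mean(np.abs((c - p) / p))

def directionality(prior_vintage, current_vintage):
    """Pearson correlation between vintages: do they move together?"""
    return np.corrcoef(prior_vintage, current_vintage)[0, 1]

actual  = [100, 110, 121, 130]
q4_fcst = [102, 112, 118, 128]   # current-quarter forecast (toy numbers)
q3_fcst = [101, 113, 120, 131]   # prior-quarter forecast, same horizon

print(f"MPE  = {mpe(actual, q4_fcst):+.2%}")
print(f"MAPE = {mape(actual, q4_fcst):.2%}")
print(f"MAPD = {mapd(q3_fcst, q4_fcst):.2%}")
print(f"corr = {directionality(q3_fcst, q4_fcst):.3f}")
```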
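For the diagnostic categories, one plausible mapping onto standard tests can be sketched with statsmodels: the Augmented Dickey-Fuller test for stationarity, the Durbin-Watson statistic for autocorrelation, Jarque-Bera for normality, and Breusch-Pagan for heteroskedasticity. The deck names only the four categories, so this particular choice of tests and the toy data are assumptions.

```python
# A minimal sketch of the four diagnostic categories, run on the residuals
# of a refit model. The specific tests chosen here are an assumption; the
# 0.05 / 0.10 p-value break points echo the threshold table above.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller
from statsmodels.stats.stattools import durbin_watson, jarque_bera
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(0)
x = sm.add_constant(rng.normal(size=(120, 2)))          # toy regressors
y = x @ np.array([1.0, 0.5, -0.3]) + rng.normal(scale=0.5, size=120)
resid = sm.OLS(y, x).fit().resid

adf_p = adfuller(resid)[1]            # stationarity: ADF test p-value
dw    = durbin_watson(resid)          # autocorrelation: ~2.0 is clean
jb_p  = jarque_bera(resid)[1]         # normality: Jarque-Bera p-value
bp_p  = het_breuschpagan(resid, x)[1] # heteroskedasticity: LM p-value

print(f"ADF p={adf_p:.3f}  DW={dw:.2f}  JB p={jb_p:.3f}  BP p={bp_p:.3f}")
```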
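The scoring scheme and weights above imply a straightforward weighted roll-up. In the sketch below, only the weights come from the slide; mapping Low/Medium/High to 1/2/3 points and the Stable/Warning/Unstable cut-offs are illustrative assumptions.

```python
# A hedged sketch of the three-point scoring scheme and the MSI roll-up.
# Weights mirror the table above; the point mapping and status cut-offs
# are illustrative assumptions.
POINTS = {"Low": 1, "Medium": 2, "High": 3}

# (component weight, measurement weight within component)
WEIGHTS = {
    "MPE":                   (0.50, 0.50),
    "MAPE":                  (0.50, 0.50),
    "MAPD":                  (0.25, 0.50),
    "Correlation":           (0.25, 0.50),
    "Coefficient Stability": (0.15, 0.50),
    "p-value":               (0.15, 0.50),
    "Goodness-of-Fit (R2)":  (0.10, 0.35),
    "Stationarity":          (0.10, 0.35),
    "Autocorrelation (DW)":  (0.10, 0.20),
    "Normality":             (0.10, 0.05),
    "Heteroskedasticity":    (0.10, 0.05),
}

def model_stability_index(ratings):
    """Weighted average of metric scores; 1.0 = all Low, 3.0 = all High."""
    return sum(cw * mw * POINTS[ratings[m]] for m, (cw, mw) in WEIGHTS.items())

def status(msi, warn=1.5, unstable=2.25):   # illustrative cut-offs
    return "Stable" if msi < warn else "Warning" if msi < unstable else "Unstable"

ratings = {m: "Low" for m in WEIGHTS}
ratings.update({"MAPE": "Medium", "MAPD": "High", "Correlation": "Medium"})
msi = model_stability_index(ratings)
print(f"MSI = {msi:.2f} -> {status(msi)}")    # MSI = 1.62 -> Warning
```

With these conventions an MSI of 1.0 means every measurement rates Low and 3.0 means every measurement rates High; in practice the cut-offs would be calibrated to the institution's risk appetite.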
Example Model Performance Aggregation and Reporting Process

• Identify criteria for aggregating risk due to model performance across the organization
  - Selection of criteria should be aligned to model tiering and to model risk appetite setting
  - Determine which criteria should inform the "weight" of the model and which should inform the "risk index" of the model

Top-down criteria for managing model risk - model tier assignment:
• Complexity
• Materiality
• Business use

Ongoing model performance monitoring - translating the top-down criteria into measurable items at the model level:
• Performance
• Forecast stability
• Structure stability
• Diagnostic issues

Materiality:
• Portfolio size
• Impact to financials
• Impact to capital ratios

Model risk reporting:
• Model stability report: count of models by Stable, Warning, and Unstable status
• Aggregated model risk index and reporting at the model family, portfolio, business unit, and/or institution level

Aggregated Model Stability Index (MSI) Reporting Examples

• Different levels of reporting structure for targeted audiences

Example of a "common currency" for aggregate reporting, the Model Risk Index (MRI):
• The MSI is a model-level monitoring metric used to trigger action
• The MSI would be converted to a "Stable", "Warning", or "Unstable" status based on thresholds
• The MSI could serve as a trigger for model change or remediation by the model owner
• The Model Risk Index (MRI) could be created at the model family, business unit, and/or portfolio level
• The MRI captures the materiality needed to facilitate appropriate aggregation across models and could be linked to risk appetite
• The MRI could be an escalation/prioritization metric consumed by senior management and risk committees

Model Stability Index and Weight Assigned to Model (%)

Model Stability Index components:
• Tier 1 and Tier 2 models: model performance, forecast/prediction stability, model structure stability
• Tier 3 models: model performance

Weight assigned to model (materiality):
• CCAR models - impact to CCAR results, measured by metrics such as:
  1. Impact to capital ratios
  2. Baseline forecast or actuals of the impacted line item
• Non-CCAR models:
  1. Number of accounts affected
  2. Portfolio size
  3. Expected benefit

(A sketch of this materiality-weighted MSI-to-MRI roll-up appears at the end of this transcription.)

Model Risk Index (MRI) Aggregation and Reporting Hierarchy

• The MRI could be aggregated at the functional/model family level to establish accountability as well as to provide a linkage to risk appetite
• The granularity of aggregation depends on the audience of the risk report; e.g., the first-level report is consumed by the board risk committee and the second-level report by functional heads

Example hierarchy:
• Level 0 report - model universe
• Level 1 report - model family: Card Provision, PPNR, Financial Planning Models, ...
• Level 2 report - business unit/product:
  - Non-secured lending products: Credit Card, Student Loan, Personal Loan
  - Secured lending: Home Loan, Auto Loan, Home Equity
  - Others: ...

A Few Words on the Infrastructure Needed for Sustainability

• Infrastructure should ensure that risk-related information for models is centralized, for efficiency in monitoring and control

Thank You
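As referenced on the aggregation slides above, here is a hedged sketch of the materiality-weighted roll-up from model-level MSIs to a family-level Model Risk Index. The family names, MSI values, and portfolio-size weights are illustrative assumptions, and normalizing weights within each reporting node is one plausible reading of the slides, not the presenter's stated method.

```python
# A hedged sketch of rolling model-level MSIs up to a Model Risk Index (MRI).
# Materiality weights (e.g., portfolio size or capital-ratio impact) are
# normalized within each reporting node; all data here is illustrative.
from collections import defaultdict

# (model family, model-level MSI, materiality weight, e.g., portfolio $bn)
models = [
    ("Card Provision", 1.40, 60.0),
    ("Card Provision", 2.10, 25.0),
    ("PPNR",           1.20, 40.0),
    ("PPNR",           2.60,  5.0),
]

def model_risk_index(rows):
    """Materiality-weighted average MSI per model family."""
    by_family = defaultdict(list)
    for family, msi, weight in rows:
        by_family[family].append((msi, weight))
    return {
        fam: sum(m * w for m, w in vals) / sum(w for _, w in vals)
        for fam, vals in by_family.items()
    }

for family, mri in model_risk_index(models).items():
    print(f"{family:15s} MRI = {mri:.2f}")
# A large, stable model outweighs a small, unstable one; that is exactly the
# materiality behavior the MRI is meant to capture for escalation purposes.
```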