Model risk management (SR 11-7) adaptation to AI involves applying the principles and expectations outlined in the SR 11-7 supervisory guidance to artificial intelligence models. This includes robust governance, validation, and documentation tailored to AI’s complexity, such as explainability, data quality, and ongoing monitoring. The adaptation ensures that AI-driven models are transparent, reliable, and compliant, addressing unique risks like algorithmic bias and model drift within established risk management frameworks.
What is SR 11-7 and how does it apply to AI risk management?
SR 11-7 is the Federal Reserve's Supervisory Guidance on Model Risk Management, issued jointly with the OCC (as OCC Bulletin 2011-12). Adapting it to AI means applying its governance, validation, and documentation standards to AI models, accounting for their added complexity and heavy reliance on data.
What does governance look like for AI under SR 11-7 adaptation?
Governance includes clear roles and responsibilities, board or committee oversight, and policies covering development, deployment, monitoring, change control, and risk escalation.
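A risk-escalation policy like the one above can be made operational in monitoring code. The sketch below maps an observed drop in a performance metric to an escalation tier; the thresholds and tier names are illustrative assumptions, not values taken from SR 11-7.

```python
# Hypothetical sketch of automated risk escalation under a governance policy.
# Thresholds and tier names are assumptions for illustration only.
def escalation_level(metric_drop: float) -> str:
    """Map a drop in a monitored performance metric to an escalation tier."""
    if metric_drop >= 0.10:       # severe degradation: committee review
        return "committee-review"
    if metric_drop >= 0.05:       # moderate degradation: notify model owner
        return "owner-notification"
    return "routine-monitoring"   # within tolerance
```

In practice the thresholds themselves would be documented, approved, and subject to change control like any other policy parameter.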
What should AI model validation include?
Assessment of performance, robustness to data shifts, fairness/bias checks, explainability, data quality, and reproducible testing with documented findings.
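One common way to test robustness to data shifts is a distributional comparison between the validation data and live inputs. The sketch below computes a population stability index (PSI) over binned scores in [0, 1); the bin count and the rule of thumb that a PSI above 0.25 flags material drift are conventional assumptions, not requirements of the guidance.

```python
import math
from collections import Counter

def population_stability_index(expected, actual, bins=10):
    """PSI between two samples of scores in [0, 1); values above ~0.25
    are conventionally read as a sign of material population drift."""
    def bucket_shares(values):
        counts = Counter(min(int(v * bins), bins - 1) for v in values)
        n = len(values)
        # Floor each share at a tiny value to avoid log(0) on empty bins.
        return [max(counts.get(b, 0) / n, 1e-6) for b in range(bins)]
    e, a = bucket_shares(expected), bucket_shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

A validation report would record the PSI alongside the binning choice and threshold, so the test is reproducible as the answer above requires.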
What kind of documentation is required for AI models?
Model inventory, intended use, data sources and quality, data lineage, versioning, validation reports, performance metrics, limitations, approvals, and monitoring plans.
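A model inventory entry can be represented as a structured record so that required fields cannot silently go missing. The field names below are illustrative assumptions about what such a record might capture, not a schema defined by SR 11-7.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    """One hypothetical entry in a model inventory (fields are illustrative)."""
    model_id: str
    intended_use: str
    data_sources: list        # provenance of training/validation data
    version: str              # ties the record to a specific model artifact
    limitations: list = field(default_factory=list)
    approved: bool = False    # flipped only after documented sign-off
```

Keeping `approved` false by default mirrors the expectation that a model enters the inventory before, not after, it receives sign-off.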
Why is explainability important in AI risk management?
Explainability helps users understand predictions, supports trust and accountability, and aids regulatory compliance; documentation should describe the explainability methods used and their limitations.