Understanding SS1/23

Supervisory Statement 1/23 (SS1/23) is a supervisory statement from the U.K.'s Prudential Regulation Authority (PRA) on Model Risk Management (MRM) for banks, taking effect on May 17th, 2024. While SS1/23 covers the full range of how to govern all models within a firm and defines what a model is quite broadly, it also explicitly calls out AI within the sub-principles of the statement on the Bank of England's website.

This marks a major milestone in the governance of models as well as the use of AI. The European Artificial Intelligence Office was likewise established within the European Commission in February 2024 to ensure the safety and trustworthiness of AI as it is used within the EU. The key components of SS1/23 include:

  • Model Identification and Risk Classification: Having a clear definition of what constitutes a model and maintaining an up-to-date inventory to manage model risk.
  • Controls and Governance: Creating strong structures of governance to support MRM, with board-level responsibilities and Senior Management Function (SMF) accountability.
  • Model Development and Implementation: Setting standards for the design, implementation, and testing of models to ensure that they meet specific testing and performance criteria.
  • Independent Validation and Ongoing Monitoring: Having documented practices in place for continuous validation and performance assessments to ensure models remain fit for purpose and adhere to regulatory standards.

Now, more than ever, it is important to proactively implement proper governance for the proliferating models within your organization.

Why is this regulation important?

A survey from The Economist shows that 77% of bankers believe that AI will be a key differentiator for banks, yet according to Gartner, 85% of AI projects will deliver erroneous results. Managing the proliferation not just of models, but of different model types and the many departments that may be generating them (GenAI chatbots, fraud detection, credit reporting, report generation software, IT operations, candidate prospecting, etc.), will be a critical element of success for banks.

The Regulatory Landscape

SS1/23 joins a chorus of regulatory requirements and recommendations around the world on AI and Model Risk Management. This includes:

  • CP 6/22: This consultation paper, also from the PRA, was published on June 21st, 2022, and serves as an earlier outline of the expectations for identifying and addressing model risk within banks.
  • The E.U. AI Act: This legislation passed by the E.U. aims to set a global standard by explicitly banning AI applications deemed to pose an unacceptable risk, such as certain uses of facial recognition, and imposing strict requirements on high-risk applications. This legislation is less directly related to banks and model risk management, but is worth keeping an eye on.
  • The AI Risk Management Framework (U.S.): Released by NIST of the U.S. Department of Commerce on January 26, 2023, this framework guides organizations on how to govern, map, measure, and manage AI risk to the organization.
  • SR 11-7 (U.S.): This supervisory guidance on model risk management was jointly developed by the Federal Reserve System and the OCC and has been in effect since 2011.
  • The Artificial Intelligence and Data Act (Canada): This proposed legislation sets expectations for the use of AI within Canada in order to protect the interests of the public, requiring that appropriate measures be put in place to identify, assess, and mitigate risks of harm or biased output.


Complying with SS1/23

SS1/23 provides a comprehensive approach to managing the risk from models and AI, one that can be greatly enhanced by having the right technology, policies, and tools in place to automate the process and reduce reliance on manual effort. Best practices to avoid costly errors and get on the path to compliance include:

  • Automated Model Identification: Take a model-agnostic approach to identifying and risk-assessing EUCs such as Excel files within an organization, as well as models created in Python or R, and even 3rd party executables (a minimal scanning sketch follows this list). This is especially important as these could all be considered models under the SS1/23 definition of a model: “A model is a quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into output.”
  • Self-Organizing Model Inventory: Run regularly scheduled scans to uncover hidden areas of risk and automatically keep the model inventory up to date. Firms can also easily maintain inventories that are firm-wide as well as department-specific. This directly aligns with SS1/23 Principle 1.3, Model tiering: Firms should implement a consistent, firm-wide model tiering approach that assigns a risk-based materiality and complexity rating to each of their models.
  • Powerful, Yet Flexible Risk Assessment: Define custom, nuanced risk assessment and testing treatment groups based on model type: deep learning models, classification models, regression models, and 3rd party executables (see the tiering sketch after this list). This makes it seamless to test different classes of models differently and to generate the corresponding documentation and reports.
  • Interdependency Map: A model’s level of risk is highly dependent on the models and data sources that serve as inputs to that model. Models that feed into high-impact models can pose a major source of hidden risk. With an interdependency map, you can easily visualize these relationships and adjust a model’s risk assessment score based on its relationships to other models and data (see the risk propagation sketch after this list). SS1/23 Principle 1.2, Model inventory: Firms should maintain a firm-wide model inventory which would help to identify all direct and indirect model interdependencies in order to get a better understanding of aggregate model risk.
  • AI Model Testing & Documentation Generation: Maintain a comprehensive testing suite for models. Tests to consider include: data drift, validity and reliability, fairness, interpretability, privacy, the use of GenAI, security vulnerabilities, and code quality, among many others (a data drift sketch follows this list). The results of these tests should be consistently documented in a standardized and easy-to-follow way. SS1/23 Principle 1.3, Model tiering: Complexity assessment may also consider risk factors related to measures of a model's interpretability, explainability, transparency, and the potential for designer or data bias to be present.
  • Comprehensive Documentation Generation & Management: Qualitative model information such as purpose, owner, and impact, as well as the most recent quantitative risk score and documentation of model testing results, are all conveniently kept up to date in one place across the firm. SS1/23 Principle 3.5, Model development documentation: Firms should have comprehensive and up-to-date documentation on the design, theory, and logic underlying the development of their models.
  • 3rd Party Risk Management: Firms are expected to test 3rd party applications and models to the same extent as internally developed models. Identifying the use of AI within 3rd party applications, identifying and risk-assessing 3rd party models, analyzing 3rd party libraries for security vulnerabilities, and testing 3rd party data sources and models are therefore key. SS1/23 Principle 2.6, Use of externally developed models and third-party vendor products: Firms should (i) satisfy themselves that the vendor models have been validated to the same standards as their own internal MRM expectations.
  • Proper Controls and Accountability: Firms should restrict and track who makes changes to which models, and when. This adds security and accountability, as outlined in SS1/23 Principle 2.3, Policies and procedures: At a minimum, the policies and procedures should cover: [...] processes for restricting, prohibiting, or limiting a model’s use. It also allows firms to measure how frequently and extensively models are used, which is likewise outlined in SS1/23 Principle 1.3, Model tiering: The assessment of a model's complexity should consider the risk factors that impact a model’s inherent risk, eg [...] frequency and/or extensiveness of use of the model.
  • Approval Workflows: Firms should also create approval workflows that automatically send alerts and notifications to the proper approval authorities and track the stages of model approval (a minimal workflow sketch follows this list). This helps identify bottlenecks within the organization and areas for process improvement. SS1/23 Principle 2.3, Policies and procedures: At a minimum, the policies and procedures should cover: the model approval process and model change, including clear roles and responsibilities of dedicated model approval authorities.
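
As a minimal sketch of the first two practices, the Python snippet below walks a directory tree and flags files that may fall under the SS1/23 definition of a model. The extension-to-type mapping and the ModelRecord structure are illustrative assumptions; a production discovery tool would also inspect file contents, formulas, and macros rather than relying on file names.

```python
import os
from dataclasses import dataclass
from datetime import datetime

# Illustrative mapping of file extensions to candidate model types;
# a real scanner would inspect file contents, not just names.
CANDIDATE_EXTENSIONS = {
    ".xlsx": "EUC (Excel)",
    ".xlsm": "EUC (Excel, macro-enabled)",
    ".py": "Code model (Python)",
    ".r": "Code model (R)",
    ".exe": "3rd party executable",
}

@dataclass
class ModelRecord:  # hypothetical inventory entry
    path: str
    model_type: str
    discovered: str

def scan_for_models(root: str) -> list[ModelRecord]:
    """Walk a directory tree and flag files that may fall under
    the SS1/23 definition of a model."""
    inventory = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            ext = os.path.splitext(name)[1].lower()
            if ext in CANDIDATE_EXTENSIONS:
                inventory.append(ModelRecord(
                    path=os.path.join(dirpath, name),
                    model_type=CANDIDATE_EXTENSIONS[ext],
                    discovered=datetime.now().isoformat(),
                ))
    return inventory

# A regularly scheduled run (e.g. nightly via cron) keeps the
# inventory up to date, per the self-organizing inventory practice:
# inventory = scan_for_models("/shared/finance")
```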
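
For the risk assessment practice, here is a sketch of a tiering rule in the spirit of Principle 1.3, combining a complexity rating by model type with a materiality rating. The ratings and thresholds are purely illustrative assumptions, not values prescribed by SS1/23.

```python
# Illustrative complexity ratings by model type; SS1/23 does not
# prescribe these values.
COMPLEXITY_BY_TYPE = {
    "deep_learning": 3,
    "classification": 2,
    "regression": 1,
    "third_party_executable": 2,
}

def assign_tier(model_type: str, materiality_gbp: float) -> str:
    """Combine a complexity rating with a materiality rating into a
    risk-based tier, in the spirit of SS1/23 Principle 1.3."""
    complexity = COMPLEXITY_BY_TYPE.get(model_type, 2)
    materiality = 3 if materiality_gbp > 1e8 else (2 if materiality_gbp > 1e6 else 1)
    score = complexity + materiality
    return "Tier 1" if score >= 5 else ("Tier 2" if score >= 4 else "Tier 3")

print(assign_tier("deep_learning", 5e8))  # Tier 1: complex and highly material
```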
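
For the interdependency map, a small sketch of risk propagation through a dependency graph: a model or data source inherits the highest risk rating of anything downstream of it, which surfaces low-rated inputs that feed high-impact models. The graph, ratings, and model names here are hypothetical.

```python
# Hypothetical dependency graph: upstream node -> downstream models it feeds.
feeds_into = {
    "market_data_feed": ["var_model"],
    "pd_model": ["capital_model"],
    "var_model": ["capital_model"],
}

# Standalone risk ratings (1 = low, 3 = high), again illustrative.
base_risk = {"market_data_feed": 1, "pd_model": 2, "var_model": 2, "capital_model": 3}

def effective_risk(node: str) -> int:
    """A node's effective risk is the highest rating among itself and
    every model downstream of it, exposing hidden aggregate risk."""
    downstream = feeds_into.get(node, [])
    return max([base_risk[node]] + [effective_risk(d) for d in downstream])

print(effective_risk("market_data_feed"))  # 3: it ultimately feeds the capital model
```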
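
For the testing suite, one of the listed tests, data drift, can be sketched with a standard two-sample Kolmogorov-Smirnov test from SciPy. The feature values and the 0.05 significance threshold are illustrative; a production suite would typically run such checks per feature on a schedule and write the results into the model's documentation.

```python
import numpy as np
from scipy import stats

def has_drifted(reference: np.ndarray, live: np.ndarray, alpha: float = 0.05) -> bool:
    """Two-sample Kolmogorov-Smirnov test: True when the live feature
    distribution differs significantly from the development sample."""
    _, p_value = stats.ks_2samp(reference, live)
    return p_value < alpha

rng = np.random.default_rng(0)
dev_sample = rng.normal(0.0, 1.0, 5_000)   # distribution at development time
live_sample = rng.normal(0.4, 1.0, 5_000)  # shifted production data
print(has_drifted(dev_sample, live_sample))  # True: drift detected
```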
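
Finally, for controls and approval workflows, a minimal sketch of a staged approval record with an audit trail. The stage names, record fields, and notification hook are all hypothetical; a real system would add authentication, persistence, and role checks.

```python
from dataclasses import dataclass, field
from datetime import datetime

STAGES = ["submitted", "validation_review", "approved"]  # illustrative stages

@dataclass
class ApprovalRecord:
    model_id: str
    stage: str = "submitted"
    audit_log: list[tuple] = field(default_factory=list)

    def advance(self, approver: str) -> None:
        """Move to the next approval stage, recording who acted and when."""
        next_stage = STAGES[STAGES.index(self.stage) + 1]
        self.audit_log.append((datetime.now().isoformat(), approver, self.stage, next_stage))
        self.stage = next_stage
        # This is where an alert would go to the next approval authority
        # (e.g. the SMF accountable for MRM); the timestamped audit log
        # also makes workflow bottlenecks measurable.

record = ApprovalRecord("credit_scoring_v2")
record.advance("model.validator@bank.example")
print(record.stage)  # validation_review
```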


About CIMCON Software

CIMCON Software has been at the forefront of managing AI, EUC, and Model Risk for over 25 years, trusted by over 800 customers worldwide. We are ISO 27001 Certified for Information Security and have offices in the USA, London, and Asia Pacific. Our risk management platform directly supports the automation of best practices and policy, including an EUC & Model Inventory, Risk Assessment, identification of Cybersecurity & Privacy Vulnerabilities, and an EUC Map showing the relationships between EUCs and Models. We also offer an AIValidator tool that automates the testing and documentation generation of models and 3rd party applications, and can be leveraged as a no-code tool or a Python package.

Getting MRM Right

Overall, Model Risk Management can be a tedious process that is difficult to get right. This is especially true as the number and diversity of models within a firm proliferate to include EUCs, models, AI, and 3rd party applications. However, it is an incredibly important process to get right, as errors and regulatory penalties for a lack of proper controls can be costly to firms. With the right tools and experience, this risk becomes manageable, and risk policies can be implemented that not only reduce errors but also reduce effort and help future-proof your organization against the risks of model errors.

We also offer a template AI Policy that can be modified to suit your organization. Request the AI Policy today.
