The Office of the Superintendent of Financial Institutions (OSFI) recently released an updated draft of Guideline E-23: Model Risk Management (Draft Guideline), setting out expectations for enterprise-wide artificial intelligence (AI) model risk management. The Draft Guideline is the first overhaul of the framework since it was originally issued in September 2017 (2017 Guideline). Interested parties may provide comments until March 22, 2024, with a final guideline expected to take effect in July 2025.
The Draft Guideline recognizes that organizations are increasingly relying on models to drive decision-making, which can expose them to financial loss from flawed decisions, operational losses, and reputational damage. To capture the risks posed by AI, the Draft Guideline modernizes the 2017 Guideline's definition of "model" to explicitly include AI and machine-learning methods. The Draft Guideline also broadens the scope of the 2017 Guideline to capture all federally regulated financial institutions (FRFIs) supervised by OSFI, including federally regulated private pension plans, deposit-taking institutions, and federally regulated insurance companies.
OSFI will require organizations to mitigate model risk by adopting a robust model risk management (MRM) framework that spans the entire model lifecycle and reflects the organization's size, complexity, and model usage. A static compliance program will not satisfy the Draft Guideline; rather, organizations must demonstrate a commitment to ongoing testing, monitoring, and review.
The MRM framework encompasses the governance procedures, oversight, and risk assessment mechanisms that organizations must implement at the enterprise level for compliant model use. The Draft Guideline specifies that an organization's data management policies will form a key part of the MRM framework and should establish a consistent approach to managing modifications, vulnerabilities, and challenges related to the data used in models, such as bias, fairness, and privacy.
Organizations retain ultimate accountability for outsourced activities, including models and data acquired from external sources such as third-party vendors or foreign offices. Organizations must apply to external models an MRM framework consistent with the one governing internally developed models. In practice, this will require organizations to obtain adequate documentation to understand a model's design and underlying data, including any proprietary elements.
Furthermore, OSFI is coordinating with the Department of Innovation, Science and Economic Development to ensure its guidance and controls align with the proposed Artificial Intelligence and Data Act (AIDA) in Bill C-27. If the AIDA becomes law, FRFIs that internally develop AI or use AI systems created by third parties will be required, in addition to meeting any obligations imposed by OSFI, to implement several compliance, monitoring, and record-keeping measures. These include, for example, providing a plain-language description of the system's intended use, the type of content it is intended to generate or the decisions it is intended to make, the mitigation measures established, and any other information that may be prescribed by future regulation.
Organizations operating in the financial services sector should be prepared to adapt existing governance mechanisms and compliance programs to address AI use and development. They will also need to reassess how they manage third-party relationships, improving disclosure and data management practices to mitigate AI-related risk.
Blakes and Blakes Business Class communications are intended for informational purposes only and do not constitute legal advice or an opinion on any issue. We would be pleased to provide additional details or advice about specific situations if desired.
© 2024 Blake, Cassels & Graydon LLP