OSFI and FCAC Report on AI Use and Risk at Federally Regulated Financial Institutions

November 20, 2024

In late September, the Office of the Superintendent of Financial Institutions (OSFI) and the Financial Consumer Agency of Canada (FCAC) jointly released a report describing the evolving risk landscape related to the use of artificial intelligence (AI) by federally regulated financial institutions (FRFIs) and suggesting best practices for responsible AI adoption (Report). The Report is largely based on responses received from FRFIs to a voluntary 2023 questionnaire on AI and quantum computing preparedness (Questionnaire).

Based on responses to the Questionnaire, OSFI and the FCAC found that AI use among FRFIs increased rapidly, from 30% in 2019 to 50% in 2023, and is projected to continue increasing steadily. According to the Report, this continuing growth is evidenced by significant increases in FRFIs' investments in AI and their use of AI models. In Questionnaire responses, FRFIs reported that operational efficiency, customer engagement, document creation and fraud detection are the primary use cases driving increased AI use. Overall, however, the Questionnaire revealed that FRFIs are increasingly using AI across multiple core functions, with AI affecting the way FRFIs conduct their business, run their operations and manage risk.

The Report aims to highlight the continually evolving AI risk environment, thereby underscoring the need for responsible AI adoption and advocating for robust AI risk management strategies to counterbalance increased AI use among FRFIs.

Artificial Intelligence Risks

The Report classifies AI risks into two main groups: internal risks and external risks. According to the Report, internal risks are those that affect the FRFI and its products and services, whereas external risks include those that make FRFIs more susceptible to fraud and make smaller FRFIs more attractive targets for threat actors, thereby giving rise to systemic risk issues.

The Report cautions that the use of AI may amplify existing risks faced by FRFIs related to data governance, modelling, operations and cybersecurity. It also predicts that third-party risks will increase as FRFIs rely on external vendors to develop AI models or as those vendors integrate AI into their service offerings. FRFIs that do not implement appropriate safeguards or AI risk management oversight may also be exposed to new legal and reputational risks arising from the consumer impacts of using this technology.

Among the AI-related risks identified, the Report emphasizes the following risks to FRFIs:

  1. Data governance risks: Respondents to the Questionnaire viewed the data-related risks of AI use as a top concern. Factors that contribute to this risk include differing regulatory requirements, fragmented data ownership and the use of third-party arrangements. The Report notes that addressing AI data governance is crucial, whether through general data governance frameworks, AI-specific data governance frameworks, or model risk management frameworks.
  2. Model risk and explainability: AI models pose increased risk compared to the existing models employed by FRFIs because AI models are complex and their results may be difficult to explain. To comply with regulatory requirements, the Report suggests that FRFIs need to put in place governance mechanisms and procedures to ensure that AI models are explainable and to provide both internal model users and customers with meaningful information about how these models make decisions.
  3. Legal, ethical, and reputational risks: The Report cautions against narrow adherence to AI-related legal requirements. Instead, the Report stresses the need for a comprehensive approach to AI use that considers legal, ethical, operational and reputational risks. This approach should prioritize consumer privacy and consent and be attentive to risks of bias and unfairness. For example, FRFIs should proactively assess bias by continuously monitoring AI models that impact customers and ensuring data representativeness.
  4. Third-party risks: The Report found that most FRFIs rely on third-party providers for AI models and systems, or work with suppliers that employ AI. However, the Report stresses that FRFIs remain accountable for the results of third-party AI systems. FRFIs are required to implement mechanisms to ensure third-party activities and services are performed in a safe manner, in compliance with applicable legislative and regulatory requirements, and in accordance with internal policies, standards and processes.
  5. Operational and cybersecurity risks: As FRFIs integrate AI into their processes, procedures and controls, their operational risk exposure also increases. This is due to the interconnected nature of systems and data, which may result in unforeseen issues or outages. Additionally, without proper security measures in place, the use of AI could increase the risk of cyberattacks. Cyber risks can stem from using AI tools internally, making FRFIs more vulnerable to data poisoning or the disclosure of protected information. In particular, threat actors are increasingly leveraging generative AI to create code that exploits vulnerabilities and to perpetrate fraud through convincing deepfakes and phishing materials. The Report emphasizes the importance of applying robust safeguards around AI systems to ensure resiliency and avoid unnecessary financial risk, noting that the cost of severe cyberattacks on FRFIs has quadrupled in recent years.

Risk Management and AI Governance Considerations

The Report highlights eight potential AI risk management pitfalls and recommends several measures FRFIs can adopt to mitigate some of their AI risks and comply with their regulatory obligations. The following considerations may be useful to FRFIs in navigating these pitfalls:

  1. Timely implementation of risk management: Uneven or delayed adoption of risk management and AI governance mechanisms will not keep pace with the rapidly changing risk profile of AI technology. Risk management programs should be agile, and FRFIs should remain vigilant.
  2. Enterprise-wide risk management: Risk management gaps could be exposed if a FRFI only addresses AI risks in the context of individual risk frameworks, such as cyber, privacy or model risk. AI risk management requires a comprehensive approach that enables collaboration and communication across relevant teams within a FRFI.
  3. Controls across the AI model lifecycle: Performing an initial risk assessment but failing to implement controls across an AI model's lifecycle will not account for new or unanticipated risks. AI models should be periodically reassessed and approved for continued use.
  4. Adequate contingency actions and safeguards: The explainability challenges of many AI models increase the likelihood that they will not behave as expected. FRFIs should implement appropriate controls and safeguards, including performance monitoring, backup systems, alerts and limitations on the use of personal information.
  5. Accounting for generative AI risks: Generative AI models can amplify existing AI risks such as explainability and bias. These risks warrant specific generative AI controls, such as an acceptable use policy, enhanced monitoring, employee education on appropriate use and preventing the input of confidential data into prompts.
  6. Employee training: More training at all levels of an organization is important to raise awareness of AI risks, provide guidance on appropriate AI use, and ensure users are aware of applicable limitations and controls. Enhanced training may be necessary for employees who directly use AI technology and senior decision-makers within the organization.
  7. Strategic AI adoption: Decisions regarding AI adoption should be the outcome of an informed strategic evaluation that balances the benefits and risks in light of each FRFI's circumstances.
  8. Protecting against external AI risks: Whether or not a FRFI uses AI, it still faces risk exposure from unsanctioned use of AI by external vendors and employees. Updating risk frameworks, conducting internal communication and training, and modifying third-party supplier policies can mitigate some external AI risk.

Next Steps

The Report notes that many FRFI respondents to the Questionnaire indicated that they are waiting for the final OSFI E-23 – Enterprise-Wide Model Risk Management Guideline and the passage of the Artificial Intelligence and Data Act in Bill C-27 before fully implementing an AI risk management program.

The Report cautions FRFIs to be vigilant and proactive and to maintain comprehensive but adaptable risk and control frameworks to address both internal and external AI risks. Specifically, the Report underscores that FRFIs must continue to comply with existing regulatory obligations that are affected by AI use, including consumer protection laws, privacy and data security requirements, and OSFI's guidelines on model risk management, third-party risk management, cybersecurity, and operational resilience.

For an overview of OSFI Guideline E-23 and other regulatory considerations related to AI use by financial institutions, see our Blakes Bulletin Modernizing Financial Risk Management: OSFI's Draft Guideline on AI Model Risk Management.

If you have any further questions, please do not hesitate to reach out to the authors or any other member of the Financial Services Regulatory or Technology group.
