The Canadian Securities Administrators (CSA) has released CSA Staff Notice and Consultation 11-348 – Applicability of Canadian Securities Laws and the Use of Artificial Intelligence Systems in Capital Markets (Notice), providing guidance on how securities legislation applies to the use of artificial intelligence (AI) systems by market participants including, among others, registrants, marketplaces and marketplace participants, clearing agencies and non-investment fund reporting issuers (Issuers).
The guidance for Issuers set out in the Notice focuses primarily on Issuers’ periodic (e.g., Management’s Discussion and Analysis and the Annual Information Form) and timely (e.g., press releases) disclosure requirements, but also applies to securities offering documents (e.g., prospectuses and marketing materials).
What Is AI?
For purposes of the Notice, the CSA has adopted the definition of “AI system” used by the Organisation for Economic Co-operation and Development (OECD), being a machine-based system that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.
However, the CSA acknowledges that AI is not static; the guidance contained in the Notice therefore relates to AI in its current state and the way in which AI systems are currently being deployed in capital markets. Accordingly, as the technology underlying AI systems evolves, so too may the CSA’s views regarding the applicability of securities laws and its approach to how AI systems should be regulated.
General Guidance
The Notice provides that disclosure of an Issuer’s use of, or plans to develop and deploy, AI systems, as well as the associated risks, provides greater transparency and allows investors to make informed decisions. However, the CSA recognizes that not all Issuers will use AI systems in the same way or to the same extent, so disclosure should be commensurate with materiality to the Issuer and focused on material information. There is also no “one size fits all” model for disclosure: it must be tailored to the Issuer rather than boilerplate, and must be factual and balanced in order to avoid false, misleading or embellished claims, a practice commonly referred to as “AI washing.” See our November 2024 Blakes Bulletin: Stay Clean, Don’t Wash: CSA Comments on Artificial Intelligence and Climate Disclosures.
AI Systems Business Use
It is the CSA’s view that disclosure of the use or development of AI systems should be tailored to provide investors with an entity-specific level of insight to understand the operational impact, financial impact and risk profile related to the use of such AI systems. Examples of specific disclosure provided in the Notice include information relating to:
- How AI is defined by the Issuer
- The nature of the products or services being developed or delivered
- How AI systems are being used or applied, their benefits and associated risks
- The current or anticipated impact that the use or development of AI systems will likely have on the Issuer’s business and financial condition
- Any material contracts relating to the use of AI systems
- Any events or conditions that have influenced the general development of the Issuer, including any material investment in AI systems
- How the adoption of AI systems will impact the Issuer’s competitive position in its primary markets
Further to the above, the Notice also provides that Issuers should consider disclosing the source and providers of the data that the AI system uses to perform its functions and whether the AI system used by the Issuer is being developed by the Issuer or supplied by a third party.
Risk Factors
In relation to the preparation of AI-related risk disclosure, the CSA states that Issuers should avoid the use of boilerplate language and should include relevant, clear and understandable entity-specific disclosure, as well as context about how the board and management assess and manage AI-related risks. Examples of AI-related risk factors provided in the Notice include the following:
- Operational risks, including the impact of disruptions, unintended consequences, misinformation, inaccuracies and errors, bias and technological challenges to the Issuer’s business, operations, financial condition and reputation; data considerations (ownership, source, gathering and updating of data); risks tied to the development, access and protection of AI systems
- Third-party risks, such as risks associated with reliance on AI systems offered by third-party service providers
- Ethical risks, including social and ethical issues arising from the use of AI systems (e.g., conflicts of interest, human rights, privacy, employment) that may have potentially adverse impacts on, for example, reputation, liability and costs
- Regulatory risks, such as compliance and legal risks and challenges associated with new and evolving AI regulation, laws and other standards relating to AI systems
- Competitive risks, such as the adverse impact of rapidly evolving products, services and industry standards involving AI systems on the Issuer’s business, operations, financial condition and reputation
- Cybersecurity risks associated with AI systems
The Notice also contemplates that Issuers should consider, and disclose where material, the potential consequences of such risks, the adequacy of preventative measures, and any prior material incidents in which the use of AI systems raised regulatory, ethical or legal concerns, as well as the effects of those incidents on the Issuer.
Promotional Statements
The CSA expects disclosure by Issuers discussing the prospects of the development or use of AI systems to be fair, balanced and not misleading, and to avoid exaggerated claims. Issuers should have a reasonable basis for statements about their use of AI systems; otherwise, the CSA may view such disclosure as overly promotional. Applicable securities laws also contain general prohibitions against false or misleading statements that would reasonably be expected to have a significant effect on the price or value of an Issuer’s securities.
The Notice provides, as an example, that if an Issuer claims it uses AI systems extensively in one of its service offerings, CSA staff would expect the Issuer to define what it means by “AI system,” disclose how it is using AI systems and be able to fairly and accurately substantiate the claim that it does so extensively. Issuers that fail to provide sufficient detail to support such claims risk the CSA viewing the disclosure as vague, misleading and promotional.
Forward-Looking Information (FLI)
In the Notice, the CSA reminds Issuers to consider whether making statements about the prospective or future use of AI systems constitutes material FLI, noting that Issuers must:
- Have a reasonable basis for achievement of the FLI
- Clearly identify the information as forward-looking
- Caution that actual results may vary from the FLI
- Disclose the material factors and assumptions used to develop the FLI
- Identify material risk factors that could cause actual results to differ materially from the FLI
Conclusion
The Notice includes consultation questions regarding the use of AI systems in capital markets and seeks feedback on, among other things, use cases for AI systems that may require new or amended rules or targeted exemptions from current rules. The public comment period is set to end on March 31, 2025.
For more information, contact the authors of this bulletin or any other member of our Capital Markets group.
Blakes and Blakes Business Class communications are intended for informational purposes only and do not constitute legal advice or an opinion on any issue. We would be pleased to provide additional details or advice about specific situations if desired.
For permission to republish this content, please contact the Blakes Client Relations & Marketing Department at [email protected].
© 2025 Blake, Cassels & Graydon LLP