On August 16, 2023, the Government of Canada announced a consultation on a proposed Code of Practice (Code) for generative artificial intelligence (AI). The Code is intended to help developers, deployers and operators of generative AI systems comply with the forthcoming regulatory regime set out in the Artificial Intelligence and Data Act (AIDA), which was tabled as part of Bill C-27 in June 2022. (For more information on Bill C-27, see our previous Blakes Bulletin: Federal Government Proposes New Law to Regulate Artificial Intelligence Systems.)
Generative AI systems such as ChatGPT, DALL-E 2 and Midjourney are trained on various text and image datasets. They have increasingly garnered international attention given their ability to generate novel content in various forms and contexts. In recognition of the potential for generative AI systems to be used for malicious or inappropriate purposes, and given their distinct and wide risk profile, the government introduced AIDA to provide a legal foundation for regulating generative AI systems in Canada.
The Standing Committee on Industry and Technology is currently reviewing Bill C-27. While progress on the legislation is delayed, the government is demonstrating its commitment to regulating AI by consulting stakeholders on the development of a Code that Canadian firms may voluntarily opt into and implement.
The government is currently seeking comment on the following potential elements of the Code:
1. Safety
The proposed Code emphasizes that safety must be viewed holistically to properly assess potential impacts and misuse. Given the wide range of uses for such systems, safety risks must be assessed broadly in the generative AI context. Developers and deployers of generative AI systems would be encouraged to identify ways the system may attract malicious use, such as impersonating real individuals or conducting spearphishing attacks, and take steps to prevent such uses. Developers, deployers and operators of generative AI systems would further be required to identify ways the system may attract harmful or inappropriate use and take steps to prevent such uses, for example by clearly identifying the capabilities and limitations of the system to end users.
2. Fairness and Equity
The proposed Code further emphasizes the role of generative AI systems in relation to societal fairness and equity, given the scale of deployment and broad training datasets. The Code highlights the need for models to be trained on representative and appropriate data to produce accurate, relevant and unbiased outputs. As such, the Code would recommend that developers of such systems evaluate and curate training datasets to avoid non-representative and low-quality data. Additionally, under the Code, developers, deployers and operators of generative AI systems would be encouraged to implement measures to mitigate the risk of biased outputs.
3. Transparency
The proposed Code notes that generative AI systems pose a particular challenge for transparency, given that training data and source code may not be readily available and the output of such systems may be difficult to explain. The proposed Code therefore emphasizes that individuals must be made aware of when they are interacting with AI systems or AI-generated content. To this end, developers and deployers of generative AI systems would be encouraged to provide a method, such as watermarking, to reliably and freely detect content generated by the AI system. Developers and deployers would also be encouraged to meaningfully explain the processes used to develop the AI system, including the measures adopted to identify risks. Operators of such systems should clearly identify AI systems that could be mistaken for humans.
4. Human Oversight and Monitoring
Human oversight and monitoring are critical to ensuring the safety of generative AI systems. The proposed Code emphasizes the need for developers, deployers and operators to exercise a sufficient level of human oversight of these systems. It also encourages them to implement mechanisms that promptly identify and report adverse impacts (by maintaining an incident dataset, for example) and to commit to routinely updating models through fine-tuning processes.
5. Validity and Robustness
The proposed Code emphasizes the need for AI systems to remain resilient across various contexts to build trust with end users. Although the flexibility of generative AI systems remains a vital advantage of the technology, the proposed Code notes the importance of implementing rigorous testing measures to prevent misuse. Developers would be encouraged to use a broad array of testing methods across various contexts, including adversarial testing (e.g., red-teaming), to measure performance and identify potential vulnerabilities. Developers, deployers and operators would also be encouraged to leverage appropriate cybersecurity mechanisms to prevent and identify adversarial attacks like data poisoning.
6. Accountability
Given the broad risk profile of generative AI systems, the proposed Code emphasizes the need to supplement internal governance mechanisms with a comprehensive and multifaceted risk management process in which employees across the AI value chain recognize their role. Developers, deployers and operators of generative AI systems would be encouraged to use multiple lines of defence to safeguard the system, in addition to performing internal and external audits both before and after the system is installed and put into operation. These measures include developing appropriate policies, procedures and training to ensure that roles and responsibilities are clearly delegated and that staff are familiar with their duties in the context of the organization's broader risk management practice.
The government is currently reviewing the proposed elements of the Code to assess whether these commitments are effective ways to ensure the trustworthy and practical implementation of generative AI systems, and it is actively seeking stakeholder feedback on how the Code of Practice could be improved.
Blakes and Blakes Business Class communications are intended for informational purposes only and do not constitute legal advice or an opinion on any issue. We would be pleased to provide additional details or advice about specific situations if desired.