Introduction
The regulation of AI systems in capital markets focuses primarily on the activities conducted rather than on the technology itself, reflecting the technology-neutral stance of Canadian securities laws [1]. The integration of AI presents both opportunities and challenges, necessitating a responsible approach to its use [1] [3] [4]. This document outlines regulators' expectations for market participants, including investment fund managers and non-investment fund reporting issuers, regarding governance, risk management, and disclosure practices related to AI systems [1] [2] [3] [4].
Description
Consistent with this activity-based approach, the Canadian Securities Administrators (CSA) urge market participants, including investment fund managers and non-investment fund reporting issuers (Non-IF Issuers), to establish strong governance and risk management practices when using AI systems [1] [3] [4]. This includes assessing how AI usage affects obligations under securities laws, particularly where AI is integral to investment strategies, risk management, or marketing [2]. Organizations must review the accuracy of generative AI outputs and assess the materiality of AI usage [3], and must disclose the sources and providers of data used by their AI systems, as well as whether those systems are developed internally or by third parties [4].
Tailored disclosures are necessary to enhance investor understanding of AI system usage, avoiding generic statements in favor of specific insight into operational, financial, and risk implications [1]. Disclosures should detail the nature and impact of AI applications, associated benefits and risks, material contracts, and competitive positioning [1] [4]. Additionally, a strategic shift involving AI may require investor approval as a fundamental change, or updated disclosures and material change reports [2].
Material AI-related risks must be clearly articulated in prospectus and continuous disclosure documents, using clear, entity-specific explanations rather than boilerplate language [1] [4]. Effective risk disclosure should explain how AI-related risks are assessed and managed by the board and management, giving investors a comprehensive understanding of their implications [1] [4]. Non-IF Issuers are encouraged to adopt robust governance practices for AI, accounting for operational, third-party, ethical, regulatory, competitive, and cybersecurity risks [1] [2] [4]. Marketing claims regarding AI capabilities must be accurate and consistent with the fund's disclosure record to prevent misleading representations [2].
Disclosures regarding AI systems must be fair, balanced, and not misleading, and any claims must be substantiated [1] [4]. Non-IF Issuers should disclose unfavorable news as promptly as favorable news, and should maintain consistent, high-quality disclosure practices across all platforms to meet their reporting obligations [1] [4]. Statements about AI use that may constitute forward-looking information (FLI) must be clearly identified and have a reasonable basis, including cautionary statements about potential variances in actual results [1] [4]. Issuers must disclose the material factors and assumptions behind any FLI and outline the risk factors that could cause actual results to differ significantly from it [4].
As AI evolves, the CSA aims to promote its responsible use in capital markets by clarifying how existing securities laws apply and by gathering stakeholder feedback [1]. Following the close of its consultation period on March 31, 2025, the CSA is reviewing that feedback to determine whether updates to the regulatory framework are needed to address AI-related risks and opportunities, with potential new guidance or rule proposals anticipated in late 2025 or 2026 [2].
A practice notice from the Trademarks Opposition Board (TMOB) governs the use of generative AI in documents for proceedings, addressing issues such as AI "hallucinations" [1]. Declarations are not required for objective, criteria-based queries, but failure to provide a required declaration may lead to costs against non-compliant parties [1]. Canadian courts have likewise provided guidance on identifying AI use in submissions, emphasizing human control to ensure the authenticity of cited authorities, with some jurisdictions requiring declarations regarding AI usage [1].

Marketplaces and infrastructure providers are expected to implement strong internal controls over AI systems, including testing, validation, risk mitigation, and incident response protocols [1] [2]. Compliance with obligations related to fair access, system integrity, and record-keeping is essential, along with ensuring that regulators have access to relevant system data for oversight [2]. Rating organizations and benchmark administrators are required to publicly disclose significant AI usage, including the models, assumptions, and methodologies employed, to enhance transparency and stakeholders' understanding of how outputs are generated [1] [2] [4].
Conclusion
The integration of AI into capital markets necessitates a balanced approach that weighs both its opportunities and its challenges. Effective governance, risk management, and tailored disclosures are crucial for maintaining investor trust and ensuring compliance with securities laws [1] [2] [3] [4]. As AI technology continues to evolve, ongoing stakeholder engagement and potential regulatory updates will be essential to address emerging risks and opportunities, ensuring that AI's integration into capital markets is both responsible and beneficial.
References
[1] https://www.jdsupra.com/legalnews/requirements-and-guidelines-from-3337380/
[2] https://www.goodmans.ca/insights/article/applying-securities-laws-to-ai–key-takeaways-from-csa-guidance-for-market-participants
[3] https://www.bennettjones.com/Blogs-Section/Requirements-and-Guidelines-from-Canadian-Regulators
[4] https://www.lexology.com/library/detail.aspx?g=e7612fe4-8e9e-4a90-b7fb-85478924d1ae