Introduction
The Hong Kong Securities and Futures Commission (SFC) has issued guidelines for licensed corporations (LCs), including private equity firms, asset managers, and hedge funds [1], on the responsible use of generative AI language models (AI LMs) [3] [4]. The guidelines address the associated cybersecurity risks and promote a risk-based approach to compliance, with heightened requirements for high-risk applications such as providing investment recommendations [3].
Description
The SFC's guidelines require licensed corporations, including private equity firms, asset managers, and hedge funds [1], to use generative AI language models responsibly while managing the associated cybersecurity risks [3] [4]. They advocate a risk-based approach, allowing LCs to calibrate their compliance efforts to the specific risks of each AI LM application; high-risk use cases, such as providing investment recommendations or advice [1], call for enhanced oversight and continuous monitoring [3].
Senior management within LCs is responsible for implementing effective policies and internal controls throughout the AI LM lifecycle, which spans design, implementation, training, testing, management, validation, approval, use, and decommissioning [1] [2] [3] [4]. This responsibility remains with the licensed corporation even when functions are delegated to group companies. The SFC emphasizes the importance of human oversight, cautioning against excessive reliance on AI in workflows where that oversight may be lacking [1]. High-risk applications call for additional safeguards, such as human-in-the-loop reviews and ongoing client disclosures [3].
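The human-in-the-loop requirement can be pictured as a simple approval gate: AI LM output for a high-risk use case is held until a responsible person signs off and the client disclosure is attached. The sketch below is purely illustrative; the class and field names (Recommendation, ReviewQueue, disclosure) are assumptions and do not come from the SFC circular.

# Minimal illustrative sketch of a human-in-the-loop approval gate for
# high-risk AI LM output (e.g. draft investment recommendations).
# All names here are hypothetical, not taken from the SFC circular.
from dataclasses import dataclass
from enum import Enum


class ReviewStatus(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class Recommendation:
    client_id: str
    draft_text: str                 # AI LM output; never released to the client directly
    status: ReviewStatus = ReviewStatus.PENDING
    reviewer: str | None = None
    disclosure: str = "This recommendation was drafted with the assistance of an AI language model."


class ReviewQueue:
    """Holds AI-generated drafts until a responsible person signs off."""

    def __init__(self) -> None:
        self._pending: list[Recommendation] = []

    def submit(self, rec: Recommendation) -> None:
        self._pending.append(rec)

    def approve(self, rec: Recommendation, reviewer: str) -> str:
        # Only an approved draft, with the client disclosure attached, leaves the firm.
        rec.status, rec.reviewer = ReviewStatus.APPROVED, reviewer
        return f"{rec.draft_text}\n\n{rec.disclosure}"

    def reject(self, rec: Recommendation, reviewer: str) -> None:
        rec.status, rec.reviewer = ReviewStatus.REJECTED, reviewer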
To mitigate risks, LCs must develop a comprehensive AI model risk management framework covering every phase of model development and management, with particular attention to the risk of AI LM hallucinations [3]. Segregated validation processes are required to test regularly for issues such as biases, drift, and deception [1]. LCs must also strengthen their cybersecurity protocols against emerging threats, including conducting adversarial testing on AI models, managing vulnerabilities related to browser extensions and sensitive data handling [2], and ensuring the encryption of sensitive data [3].
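As a rough illustration of what a segregated validation check might look like, the sketch below scores a model against a curated set of known answers and flags drift when accuracy falls below a baseline. The model_answer callable, the ground-truth set, and the 0.95 baseline are assumptions for illustration, not requirements from the circular.

# Illustrative drift/hallucination check run by a team independent of model development.
from typing import Callable

# Curated question/answer pairs (illustrative content only).
GROUND_TRUTH = {
    "Which SFC licence type covers asset management?": "Type 9",
    "Which SFC licence type covers advising on securities?": "Type 4",
}


def validation_run(model_answer: Callable[[str], str],
                   baseline_accuracy: float = 0.95) -> dict:
    """Score the model on held-out facts and flag possible drift or hallucination."""
    correct = sum(
        1 for question, expected in GROUND_TRUTH.items()
        if expected.lower() in model_answer(question).lower()
    )
    accuracy = correct / len(GROUND_TRUTH)
    return {"accuracy": accuracy, "drift_flag": accuracy < baseline_accuracy}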
The guidelines stress the importance of managing risks from third-party AI LM providers [2], requiring extensive due diligence [2], continuous monitoring [2] [3] [4], and clear allocation of cyber risk responsibilities [2]. LCs are encouraged to evaluate their use of AI LMs [4], enhance their risk management frameworks [3] [4], and review contracts with third-party providers [3] [4]. They should also consider their notification obligations under relevant regulations when adopting AI LMs for high-risk applications and engage with the SFC early in the planning process [4].
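One way to operationalize ongoing monitoring of a third-party provider is a recurring review record that flags gaps for escalation. The field names and review areas below are hypothetical, chosen only to illustrate the idea.

# Illustrative recurring review record for a third-party AI LM provider.
from dataclasses import dataclass
from datetime import date


@dataclass
class ProviderReview:
    provider: str
    review_date: date
    certifications_current: bool         # independent security attestations on file
    incident_history_reviewed: bool      # provider breach/incident reports examined
    contract_allocates_cyber_risk: bool  # indemnities and responsibilities documented

    @property
    def follow_up_required(self) -> bool:
        """Escalate if any monitored area is deficient."""
        return not (self.certifications_current
                    and self.incident_history_reviewed
                    and self.contract_allocates_cyber_risk)


review = ProviderReview("ExampleLM Ltd", date(2025, 3, 1), True, True, False)
assert review.follow_up_required  # a missing contractual risk allocation triggers escalation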
In response to the evolving landscape of cyber threats, LCs must adopt stringent internal controls [4], including avoiding the input of sensitive information into AI LMs and implementing robust cybersecurity measures to ensure operational resilience. Continuous monitoring of third-party AI LM providers is essential [4], along with evaluating their cybersecurity readiness and ensuring appropriate indemnities and risk distribution among stakeholders [2].
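A concrete example of such a control is an outbound filter that redacts obvious identifiers before a prompt reaches a third-party AI LM. The patterns below are illustrative only; an actual control would follow the firm's own data classification rules.

# Illustrative outbound prompt filter: redact identifiers before they leave the firm.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "HKID": re.compile(r"\b[A-Z]{1,2}\d{6}\(?[0-9A]\)?"),  # Hong Kong ID card format
    "ACCOUNT": re.compile(r"\b\d{8,16}\b"),                 # crude account-number guard
}


def redact(prompt: str) -> str:
    """Replace matches with placeholders so sensitive data is not sent to the AI LM."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt


print(redact("Client A123456(7) at chan@example.com holds account 12345678."))
# -> Client [HKID REDACTED] at [EMAIL REDACTED] holds account [ACCOUNT REDACTED].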
Economically, these guidelines may increase operational costs for LCs, but they also have the potential to drive innovation in cybersecurity solutions [2]. Socially, the measures aim to strengthen consumer trust in financial AI applications by ensuring the security and responsible management of user data [2]. Politically, Hong Kong's proactive regulatory stance may influence other jurisdictions to adopt similar measures, promoting global standardization in AI governance [2].
The emphasis on accountability and human oversight underscores the need for workforce training to keep pace with the evolving demands of AI governance and cybersecurity [2]. As generative AI becomes more prevalent, the potential for cyber threats increases, necessitating both preventative and responsive measures [2]. Future directions in AI cybersecurity may include AI-powered real-time threat detection, international cooperation on cybersecurity standards, and specialized workforce development to address AI-specific challenges [2]. This evolving landscape presents both challenges and opportunities, with the potential for AI to significantly strengthen cybersecurity strategies globally [2].
Conclusion
The SFC’s guidelines for the responsible use of AI LMs by licensed corporations have significant implications. Economically, they may increase operational costs but also drive innovation in cybersecurity [2]. Socially, they aim to build consumer trust in financial AI applications [2]. Politically, Hong Kong’s proactive stance could influence global AI governance standards. The guidelines highlight the need for workforce training and adaptation to evolving AI and cybersecurity demands, presenting both challenges and opportunities for enhancing global cybersecurity strategies [2].
References
[1] https://techinsights.linklaters.com/post/102jp7z/key-implications-of-hong-kongs-new-sfc-circular-on-genai-language-models
[2] https://opentools.ai/news/hong-kong-regulators-tighten-the-screws-on-ai-security-risks-in-finance
[3] https://www.aoshearman.com/en/insights/ao-shearman-on-data/hong-kong-sfc-issues-circular-on-the-use-of-generative-ai-language-models
[4] https://www.jdsupra.com/legalnews/hong-kong-sfc-issues-circular-on-the-7831973/




