Introduction

On December 5, 2024 [1] [3] [4] [8], the US Commodity Futures Trading Commission (CFTC) issued a staff advisory concerning the integration of artificial intelligence (AI) in regulated markets [1] [3] [4] [7] [8]. This advisory outlines the compliance obligations for CFTC-regulated entities as they adopt AI technologies, emphasizing the technology-neutral nature of compliance requirements and the need for updates to policies and systems to accommodate AI use.

Description

On December 5, 2024 [1] [3] [4] [8], the US Commodity Futures Trading Commission (CFTC) issued a staff advisory, jointly developed by several of the agency’s divisions, addressing the use of artificial intelligence (AI) in regulated markets [1] [2] [3] [7] [8]. The advisory emphasizes the compliance obligations of CFTC-regulated entities as they implement AI technologies [2], noting that AI may affect nearly every stage of the derivatives trading lifecycle [1] [8]. It underscores the responsibility of managers to ensure that financial information and risk disclosures generated by AI tools comply with applicable statutory and regulatory requirements under the Commodity Exchange Act (CEA) and CFTC regulations. The guidance reinforces the principle that compliance requirements are technology-neutral, necessitating updates to policies [5], procedures [4] [5] [8], controls [4] [5] [8], and systems to accommodate AI use [8].

CFTC-regulated entities are advised to assess the risks associated with AI and update their compliance programs accordingly [3]. This includes conducting regulatory compliance reviews and ensuring that any significant changes to systems or processes resulting from AI integration are reviewed for compliance. Entities must give the CFTC timely advance notice of any material changes to automated systems that could affect their reliability [3], security [3] [4] [6] [8], or capacity [3]. AI can be applied across functions including order processing [4] [5] [8], trade matching [4] [5] [8], and market surveillance [4] [5] [6], improving efficiency and resource allocation while supporting competitive and open markets [5].

Designated Contract Markets (DCMs) and Swap Execution Facilities (SEFs) are encouraged to leverage AI for identifying abusive trading practices and ensuring compliance with relevant Core Principles. In market surveillance [4] [5] [6], AI plays a crucial role in detecting anomalies and abusive trading practices, requiring these entities to maintain adequate compliance resources for effective monitoring [5]. Thorough risk management processes are essential for AI systems [6], regardless of whether these solutions are developed internally or sourced from third parties [6]. Robust system safeguards are necessary to mitigate vulnerabilities that could lead to cybersecurity risks [6], algorithmic errors [6], and market disruptions stemming from automated decision-making [6].
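To illustrate the kind of surveillance analytics the advisory contemplates (not a method prescribed by the CFTC), a minimal statistical anomaly-detection sketch over trade volumes might look like the following; the function name and threshold value are hypothetical:

```python
from statistics import mean, stdev

def flag_anomalies(volumes, threshold=3.0):
    """Return the indices of trade volumes whose z-score exceeds the threshold.

    A simple illustrative baseline: real surveillance systems layer many
    such signals (price, order flow, cross-market activity) before alerting.
    """
    if len(volumes) < 2:
        return []
    mu = mean(volumes)
    sigma = stdev(volumes)
    if sigma == 0:
        return []  # no variation, nothing stands out
    return [i for i, v in enumerate(volumes)
            if abs(v - mu) / sigma > threshold]
```

For example, `flag_anomalies([100, 102, 98, 101, 99, 5000], threshold=2.0)` flags the last entry, since its volume sits far outside the distribution of the preceding trades.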

Entities are urged to implement appropriate system safeguards [5], adhering to best practices for the development and operation of AI systems [5] [8]. This includes maintaining transparency, fairness [6], and accountability [6], which entails comprehensive documentation [6], testing [2] [6] [7] [8], and regular audits to ensure adherence to legal and ethical standards [6]. Derivatives Clearing Organizations (DCOs) intending to implement AI must comply with all regulatory requirements under the CEA and CFTC regulations [4], including various Core Principles related to system safeguards [4], risk management [2] [4] [5] [6] [8], and settlement processes [4] [5]. DCOs should leverage AI for cyber intrusion detection and system resilience [5], ensuring timely notification of significant changes to automated systems [5].

AI also assists DCOs in assessing compliance among clearing members and facilitating communication [5], while supporting settlement processes and managing risks associated with settlement banks [5]. In risk management [4] [5] [6] [8], AI is employed for margin calculations [5], focusing on ensuring adequate performance for effective risk management [5]. Furthermore, AI supports compliance and recordkeeping, particularly in customer protection activities [5], where Futures Commission Merchants (FCMs) must adhere to regulations regarding segregated funds.

The CFTC emphasizes the importance of responsible innovation while cautioning against “AI washing,” the overstating of AI capabilities, and against misinterpretation of compliance requirements [7]. All AI applications in financial markets must align with the CEA and other relevant CFTC regulations [7]. The advisory provides specific guidance on the requirements for underlying systems governed by CFTC regulations [6], detailing potential use cases and associated risks [6]. The CFTC’s oversight activities will increasingly focus on AI, which may lead to investigations or enforcement actions as necessary.

CFTC Chairman Rostin Behnam has highlighted the agency’s commitment to balancing market integrity with innovation through a technology-neutral approach. Additionally, CFTC Commissioner Kristin N. Johnson has called for the establishment of an AI Fraud Task Force and enhanced regulatory measures to address potential risks associated with AI [2], particularly fraudulent activities targeting investors through advanced technologies such as deepfakes [2].

AI governance requires a dynamic approach that keeps pace with rapid technological change and the complexities of automated decision-making [6], including data integrity and algorithmic bias [6]. Organizations must develop sophisticated bias-testing methodologies and ensure that these methods do not themselves introduce or perpetuate bias [6]. Red teaming, the practice of simulating adversarial attacks to identify and address potential vulnerabilities in AI systems [6], has emerged as a standard practice for ensuring the trustworthiness [6], safety [3] [6], and legal compliance of AI technologies [6], particularly those based on generative AI and large language models [6].
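As a simple illustration of one bias-testing metric, a selection-rate gap across groups (sometimes called demographic parity difference), the sketch below is an assumption-laden example, not guidance from the advisory; the function name is hypothetical:

```python
from collections import defaultdict

def selection_rate_gap(outcomes, groups):
    """Max minus min positive-outcome rate across groups.

    A gap near 0 suggests similar outcome rates across groups;
    a large gap flags the model's output for closer review.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for outcome, group in zip(outcomes, groups):
        counts[group][0] += outcome
        counts[group][1] += 1
    rates = [pos / total for pos, total in counts.values()]
    return max(rates) - min(rates)
```

A single aggregate metric like this is only a starting point; as the passage above notes, the test design itself must be scrutinized so that it does not mask or introduce bias.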

The CFTC advocates for enhanced surveillance, monitoring [2] [4] [5] [7] [8], and penalties for violations [7], including increased civil penalties as a deterrent against the misuse of AI in fraudulent activities and market manipulation [7]. This approach aims to ensure the responsible use of AI in compliance with existing laws [7]. Ongoing monitoring of AI’s benefits and risks will inform future guidance and regulatory recommendations [5], ensuring that the CFTC remains vigilant in addressing AI-enabled misconduct and mitigating potential fraud in global derivatives markets.

Conclusion

The CFTC’s advisory on AI integration in regulated markets underscores the importance of maintaining compliance with existing laws while embracing technological advancements. By emphasizing a technology-neutral approach [5], the CFTC aims to ensure that AI adoption enhances market efficiency and integrity without compromising regulatory standards. The agency’s focus on responsible innovation, risk management [2] [4] [5] [6] [8], and robust system safeguards highlights the need for continuous monitoring and adaptation to address the evolving challenges posed by AI in financial markets.

References

[1] https://www.akingump.com/en/insights/alerts/cftcs-year-end-ai-placeholder-guidance
[2] https://www.pymnts.com/news/regulation/2024/cftc-to-monitor-use-of-ai-in-derivatives-markets/
[3] https://complianceconcourse.willkie.com/articles/cftc-staff-advisory-on-the-use-of-artificial-intelligence/
[4] https://www.lowenstein.com/news-insights/publications/client-alerts/key-considerations-when-adopting-artificial-intelligence-as-a-cftc-regulated-firm-cfd
[5] https://www.jdsupra.com/legalnews/cftc-staff-issues-advisory-on-the-use-1982417/
[6] https://www.jdsupra.com/legalnews/cftc-issues-advisory-on-use-of-ai-in-7751848/
[7] https://www.cftc.gov/PressRoom/SpeechesTestimony/johnsonstatement120524
[8] https://www.lexology.com/library/detail.aspx?g=ee88047a-31d5-43f7-95de-cb4bebd94a86