Introduction
In 2024, the regulation of artificial intelligence (AI) became a central focus for governments and policymakers worldwide [1] [2] [3]. Significant legislative and regulatory developments occurred across jurisdictions, reflecting a growing commitment to managing the implications of AI technologies.
Description
In the United States, a fragmented regulatory landscape began to take shape, reminiscent of the patchwork of state data privacy laws, as both state and federal initiatives were introduced [1] [3]. Three states enacted comprehensive AI legislation [1] [2], while others, including California, introduced laws aimed at enhancing AI transparency that are set to take effect in 2026 [1] [2] [3]. The Colorado Artificial Intelligence Act required developers and deployers of high-risk AI systems to implement risk management programs and ensure consumer transparency [3], while Utah’s Artificial Intelligence Policy Act established similar transparency requirements for AI interactions [2].
Key regulatory bodies intensified enforcement actions under existing laws affecting AI [2]. The Securities and Exchange Commission (SEC) settled charges against Delphia (USA) Inc. and Global Predictions Inc. for making misleading claims about their AI usage, resulting in fines of $225,000 and $175,000, respectively [1] [2]. SEC Chair Gary Gensler cautioned against “AI washing,” emphasizing the need for truthful representations of AI capabilities and for disclosure of material risks associated with AI use [2]. The Federal Trade Commission (FTC) echoed these concerns, launching “Operation AI Comply” to address potentially misleading claims about AI products and warning against false advertising [3]. The FTC’s efforts included enforcement actions against companies for deceptive practices and warnings about the need to prevent biased outcomes from AI systems.
At the federal level, the National Institute of Standards and Technology (NIST) released nonbinding guidance to improve the safety and trustworthiness of AI systems, following an Executive Order on AI [3]. The guidance outlined best practices for risk mitigation and secure software testing [3]. The Department of Justice (DOJ) updated its guidance on corporate compliance programs, focusing on the effectiveness of compliance measures and encouraging employee reporting of misconduct [2]. The DOJ also proposed new rules governing the transfer of sensitive personal data to certain countries, defining categories of sensitive data and restricting foreign transactions that could compromise such information [3].
State lawmakers have been active in regulating AI since the rise of generative AI technologies [1]. Tennessee’s ELVIS Act aimed to protect artists from unauthorized deepfakes by expanding individual property rights to include voice and likeness and allowing private legal action against unauthorized deepfake distribution [2]. In October 2024, the New York Department of Financial Services (DFS) issued guidance on cybersecurity risks associated with AI, particularly deepfake technology and its potential to exacerbate cyber threats [1] [2].
Internationally, the European Commission signed CETS No. 225, the first legally binding international AI treaty [1] [2], while the United Nations General Assembly adopted a resolution promoting safe AI systems [1] [2]. The EU AI Act entered its enforcement phase, establishing a comprehensive legal framework that categorizes AI systems by risk and imposes requirements that vary with that classification [2] [3]. The Act prohibits systems deemed to pose unacceptable risk and sets extensive obligations for high-risk systems, including specific rules for generative AI models considered to pose systemic risk [3]. The Canadian government also launched the Canadian Artificial Intelligence Safety Institute to study AI risks [1], while Japan and Israel introduced guidelines and draft policies for responsible AI development [1].
Conclusion
The global focus on AI regulation in 2024 underscores the urgency and complexity of managing AI technologies. These legislative and regulatory efforts aim to ensure transparency, accountability, and safety in AI applications, addressing both domestic and international concerns. As AI continues to evolve, ongoing collaboration and adaptation of regulatory frameworks will be essential to mitigate risks and harness the benefits of AI innovations.
References
[1] https://www.jdsupra.com/legalnews/artificial-intelligence-2024-year-in-8498801/
[2] https://www.kramerlevin.com/en/perspectives-search/artificial-intelligence-2024-year-in-review.html
[3] https://www.wilmerhale.com/en/insights/blogs/wilmerhale-privacy-and-cybersecurity-law/20250115-year-in-review-the-top-ten-us-data-privacy-developments-from-2024