Introduction

The rapid advancement and integration of generative machine learning and artificial intelligence (AI) technologies in the United States have necessitated the development of comprehensive regulatory frameworks. These frameworks aim to govern the creation, use, and disclosure of AI technologies [1] [3] [4], ensuring ethical practices and safeguarding societal interests.

Description

The rapid growth and adoption of generative machine learning and artificial intelligence technologies in the US have created an urgent need for comprehensive regulations governing their creation, use, and disclosure [1] [3] [4]. Currently, federal regulation is limited to guidance from the National Institute of Standards and Technology (NIST), executive orders from the Biden administration [3] [4], and the White House's Blueprint for an AI Bill of Rights, which outlines principles for the ethical design, use, and deployment of automated systems [1] [3] [4]. The Blueprint emphasizes transparency, accountability, and protection against algorithmic discrimination [1] [2] [3] [4], reflecting the growing recognition of AI's societal impact [1].

Senate Bill 2892, known as the Algorithmic Accountability Act (AAA), was introduced by Oregon Senator Ron Wyden on September 21, 2023 [3] [4]. The bill directs the Federal Trade Commission (FTC) to issue regulations requiring certain entities to conduct impact assessments for automated decision systems and augmented critical decision processes. An “automated decision system” is any computational system whose output influences decision-making, while an “augmented critical decision process” is a process in which such a system supports a critical decision [2]. Covered entities must maintain documentation, declare their coverage under the AAA [3] [4], and submit annual reports to the FTC detailing their findings.

Each report must include the entity’s name, website, and contact information; a description of the critical decision and its intended purpose; testing documentation; and any identified negative impacts on consumers [2]. The FTC is tasked with providing compliance guidance, including templates and consultation resources, and will establish a publicly accessible repository of information on automated decision systems [2]. Enforcement will align with the Federal Trade Commission Act, and the Act will not preempt existing state or local laws [2]. The bill builds on earlier legislation proposed in 2019 and is currently under consideration by the Senate Committee on Commerce, Science, and Transportation [3] [4].
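To make the reporting requirement concrete, the sketch below models the report fields the bill enumerates as a simple data structure. This is an illustrative assumption only: neither the AAA nor the FTC defines a machine-readable schema, so every name and type here is hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an AAA annual-report record, assuming the fields
# enumerated in S. 2892; the bill prescribes no machine-readable schema,
# so all names and types here are illustrative only.
@dataclass
class AnnualImpactReport:
    entity_name: str            # name of the covered entity
    website: str                # entity's public website
    contact_information: str    # point of contact for the FTC
    critical_decision: str      # description of the critical decision
    intended_purpose: str       # stated purpose of the automated system
    testing_documentation: str  # summary of, or pointer to, test records
    negative_impacts: list[str] = field(default_factory=list)  # identified consumer harms

# Example filing with placeholder values.
report = AnnualImpactReport(
    entity_name="Example Corp",
    website="https://example.com",
    contact_information="compliance@example.com",
    critical_decision="Automated screening of rental applications",
    intended_purpose="Rank applicants by predicted payment reliability",
    testing_documentation="Internal bias audit, Q1 (archived)",
    negative_impacts=["Higher false-rejection rate for one applicant subgroup"],
)
print(f"{report.entity_name}: {len(report.negative_impacts)} identified impact(s)")
```

In practice, a covered entity would populate such a record from its impact-assessment documentation before filing; the structure simply mirrors the list of required report contents above.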

The Senate is holding regular hearings on the implications of AI technologies, featuring experts in technology, ethics, and law to discuss regulatory needs [1]. Key areas of focus include ensuring transparency in AI decision-making processes, establishing accountability mechanisms for developers and users, and mitigating bias in AI algorithms, particularly in sensitive sectors like hiring and law enforcement [1]. Data privacy is another critical concern, with the Senate exploring regulations to protect individual privacy rights in AI data handling [1]. US policymakers are also informed by international developments, particularly the EU’s AI Act, which categorizes AI applications by risk level and serves as a potential model for US regulation [1].

In addition to federal initiatives, various state laws are emerging, contributing to a fragmented regulatory landscape [3] [4]. California’s law requires state agencies to inform users when they are interacting with AI and promotes investment in AI education [4]. Colorado’s law, effective February 1, 2026, requires developers of high-risk AI systems to take reasonable care to protect consumers from algorithmic discrimination, and sets out steps by which developers can establish a rebuttable presumption of reasonable care [4]. Illinois has amended its Human Rights Act to protect residents from discriminatory AI decisions in employment contexts [4]. Utah’s law imposes disclosure requirements on entities using AI tools, limits their liability for generative AI statements that violate consumer protection laws, and establishes an Office of Artificial Intelligence Policy to oversee state AI initiatives [4]. As states enact their own AI regulations, technology companies face a growing compliance burden and must carefully navigate each state’s specific requirements [3].
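To illustrate the compliance-tracking problem this fragmentation creates, the sketch below encodes the state obligations summarized above as a small internal registry. It is purely illustrative: the entries paraphrase this section rather than the statutes, and effective dates the section does not give are left empty.

```python
from datetime import date

# Illustrative, non-authoritative registry of the state AI laws summarized
# above; entries paraphrase this article, not the statutes themselves.
STATE_AI_REQUIREMENTS = {
    "California": {
        "effective": None,  # effective date not given in this summary
        "obligations": [
            "State agencies must disclose when users are interacting with AI",
            "Promote investment in AI education",
        ],
    },
    "Colorado": {
        "effective": date(2026, 2, 1),
        "obligations": [
            "Developers of high-risk AI systems must take reasonable care "
            "to protect consumers from algorithmic discrimination",
            "Defined steps support a rebuttable presumption of reasonable care",
        ],
    },
    "Illinois": {
        "effective": None,
        "obligations": [
            "Human Rights Act amendment bars discriminatory AI decisions "
            "in employment contexts",
        ],
    },
    "Utah": {
        "effective": None,
        "obligations": [
            "Disclosure requirements for entities using AI tools",
            "Limited liability for generative AI statements that violate "
            "consumer protection laws",
            "Office of Artificial Intelligence Policy oversees state AI initiatives",
        ],
    },
}

# Example query: which states impose a disclosure-style obligation?
for state, info in STATE_AI_REQUIREMENTS.items():
    if any("disclos" in ob.lower() for ob in info["obligations"]):
        print(state)
```

A real compliance program would track far more detail per state (scope, definitions, enforcement mechanisms), but even this toy registry shows why per-state variation multiplies the review work for technology companies.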

The ongoing discussions and developments in both federal and state regulation aim to create a comprehensive regulatory environment that fosters innovation while safeguarding the rights of citizens. Both the US and the EU ground their AI governance in ethical principles, with the US emphasizing ethical considerations in its Blueprint for an AI Bill of Rights and the EU prioritizing legal compliance and safety [1]. A coordinated governance approach, supported by federal and state initiatives, is essential for aligning AI development with societal values and keeping ethical considerations central to innovation [1].

Conclusion

The evolving regulatory landscape for AI in the US underscores the critical need for a balanced approach that promotes innovation while protecting societal interests. The interplay between federal and state regulations, alongside international influences, highlights the complexity of establishing a cohesive framework. As AI technologies continue to advance, ongoing dialogue and collaboration among stakeholders will be vital in shaping policies that align with ethical standards and societal values.

References

[1] https://www.restack.io/p/ai-regulation-answer-senate-meeting-cat-ai
[2] https://www.govtrack.us/congress/bills/118/s2892/text/is
[3] https://kjk.com/2024/10/24/the-current-and-evolving-landscape-of-ai-in-the-united-states-whats-next/
[4] https://www.jdsupra.com/legalnews/the-current-and-evolving-landscape-of-4080755/