Introduction
The emergence of AI-enabled scams and related harms has become a bipartisan concern among lawmakers, prompting a shift in responsibility for preventing misuse from consumers to the AI companies that build these technologies [3]. This has led to increased legislative efforts and regulatory frameworks aimed at the challenges posed by AI, with a focus on accountability and transparency [3] [4] [5] [7] [8] as well as civil rights considerations [6].
Description
AI-enabled scams and related harms have emerged as a bipartisan concern among lawmakers, with growing pressure to shift responsibility for preventing misuse from consumers to AI companies [3]. Because Congress has been unable to pass comprehensive AI legislation over the past four years, the Biden Administration has relied on agency-level frameworks and guidelines for AI regulation [7]. During a recent Senate hearing, Subcommittee Chair Hickenlooper emphasized the benefits of AI while underscoring the urgent need for federal legislation to combat AI-enabled fraud, and he discussed five AI bills that have garnered bipartisan support, expressing a commitment to pass them into law promptly [2]. Witnesses highlighted the importance of accountability frameworks for AI developers; Dr. Farid argued that AI companies should be held responsible for the misuse of their technologies [2]. There are concerns, however, that proposed legislation may prioritize industry interests while minimizing civil rights considerations, particularly under a Republican-controlled Congress, which could lead to regulatory rollbacks benefiting AI companies [6].
Proposed legislation includes the AI Research, Innovation, and Accountability Act (AIRIA), which aims to establish transparency requirements for AI-generated content and to enhance research and development into content authenticity [2] [3] [4] [8]. The act is designed to ensure accountability among AI developers and to promote research on content provenance [2]. The Future of AI Innovation Act seeks to create partnerships among government, the private sector, and academia to foster responsible AI innovation, and to strengthen the role of the National Institute of Standards and Technology (NIST) in guiding the responsible advancement of AI technology [1] [8]. Additionally, the Validation and Evaluation for Trustworthy AI Act establishes a voluntary framework for third-party audits of AI systems, ensuring accountability for developers [8]. While there is some bipartisan agreement on creating an AI Safety Institute to establish voluntary safety standards, experts argue that mandatory compliance is necessary for effective protection [6].
The Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act) targets the rise of deepfakes by directing federal agencies to enhance research and development in synthetic content detection, establishing detection standards, and enforcing rules against the manipulation of content labels [2] [3] [8]. It also includes disclosure requirements for AI-generated content and prohibits unauthorized use of copyrighted material for AI training [3]. The TAKE IT DOWN Act would criminalize the creation and distribution of non-consensual intimate imagery, including certain AI-generated deepfakes, and would require social media platforms to remove such content [2] [3] [4] [7] [8]. Other related legislative efforts include the bipartisan NO FAKES Act, which aims to protect human likenesses, and the AI CONSENT Act, which would require consent for using personal data in AI training [5].
Legislative efforts are concentrated in two main categories, “mitigating harms” and “government use of AI,” which together account for over 65 percent of the introduced bills [7]. The “mitigating harms” category, representing 49 percent of the tracked legislation, includes proposals to restrict AI-generated content in elections, enhance transparency through labeling or watermarking, and address civil rights and intellectual property protections [7]. Recent momentum is evident: the Senate Commerce Committee advanced four AI-focused bills, including the bipartisan Future of AI Innovation Act and the TAKE IT DOWN Act [7]. The Software and Information Industry Association (SIIA) has urged Congress to take swift action on AI legislation before the end of the 118th Congress, emphasizing the need for bipartisan support for key bills such as the CREATE AI Act, which would expand researchers’ access to computing power for analyzing AI models and assessing associated risks [1].
The likelihood of advancing AI legislation during the current Congress remains uncertain, as it competes with other legislative priorities [2] [3]. While the discussed AI bills enjoy bipartisan support [3] [4], some lawmakers may prefer to address AI issues in the next Congress, when they will hold a majority [3]. Ongoing monitoring of AI legislative developments is anticipated [4], particularly as rising legal tensions over the use of creative works in AI have led to recent copyright-infringement lawsuits against AI companies, highlighting artists’ concerns about unauthorized use of their creations [5]. If the TRAIN Act, which aims to enhance transparency regarding the use of copyrighted works in AI training, does not pass in the current session, it is expected to be reintroduced next year [5].
In addition to these legislative efforts, the Federal Trade Commission (FTC) has implemented rules to prevent impersonation of government entities and businesses using AI, and is considering extending those protections to individuals targeted by deepfakes [8]. Various states have begun enacting their own legislation concerning deepfake media, producing a fragmented legal landscape that underscores the need for a consistent federal approach to protect citizens [8]. The potential for AI to affect areas such as employment, housing, and immigration raises alarms about the risks of discrimination and wrongful actions against individuals based on AI decision-making [5] [6] [7] [8]. Private-sector collaboration is also essential for promoting responsible AI practices: companies including Anthropic, Google, Microsoft, and OpenAI have committed to transparency measures such as identifying AI-generated content [8]. Experts in artificial intelligence and AI-generated media are being consulted on current developments and on the actions needed to mitigate risks while fostering innovation in this rapidly evolving field.
Conclusion
The legislative and regulatory efforts surrounding AI reflect a growing recognition of the technology’s potential risks and benefits. While there is bipartisan support for various AI-related bills [3] [4], the path to comprehensive legislation remains uncertain due to competing priorities and political dynamics. The ongoing development of AI regulations and the involvement of both public and private sectors are crucial in ensuring that AI technologies are used responsibly, with adequate protections for civil rights and intellectual property. As AI continues to evolve, maintaining a balance between innovation and regulation will be essential to harness its potential while safeguarding against misuse and harm.
References
[1] https://www.siia.net/siia-joins-letter-supporting-the-create-ai-act-and-future-of-ai-innovation-act/
[2] https://www.mintz.com/insights-center/viewpoints/54731/2024-11-22-senators-hold-hearing-ai-fraud-and-scams-vow-pass-ai
[3] https://www.jdsupra.com/legalnews/senators-hold-hearing-on-ai-fraud-and-9260726/
[4] https://natlawreview.com/article/senators-hold-hearing-ai-fraud-and-scams-vow-pass-ai-bills-coming-weeks-ai
[5] https://www.nbcnews.com/tech/senate-bill-transparency-ai-developers-rcna181724
[6] https://rollcall.com/2024/11/13/schumers-ai-road-map-might-take-gop-detour/
[7] https://www.americanactionforum.org/insight/primer-a-look-at-biden-administrations-approach-to-ai-regulation/
[8] https://www.hickenlooper.senate.gov/press_releases/video-hickenlooper-chairs-senate-hearing-on-protecting-consumers-from-ai-deepfakes/