Introduction

Utah has emerged as a pioneer in the regulation of artificial intelligence (AI) with the enactment of the Artificial Intelligence Policy Act (AIPA) in 2024. This legislation [4], along with subsequent amendments, establishes comprehensive guidelines for the use of AI, particularly in regulated professions such as healthcare and law [3]. The AIPA and its amendments aim to ensure transparency, protect consumer rights, and address the ethical implications of AI technologies.

Description

Utah has solidified its position as a leader in AI regulation and legal innovation with the enactment of the Artificial Intelligence Policy Act (AIPA) in 2024, which mandates disclosures for consumer interactions with generative AI, particularly in regulated professions such as healthcare and law [3]. The AIPA establishes an “AI policy lab” and implements protections for users and consumers of AI, including requirements for healthcare providers to disclose the use of generative AI in patient treatment [1]. In 2025, Utah enacted three significant laws (HB 452, SB 226, and SB 332) that amend and expand the AIPA [3] [5], establishing compliance requirements for licensed professionals, including attorneys, who use AI systems in their practice [1] [2].

HB 452 focuses on mental health chatbots [3], requiring providers to inform users that they are interacting with AI at multiple points, including before first access and after a period of inactivity [4]. Mental health chatbots must clearly disclose any advertisements and are restricted from using user input for targeted ads [5], except for referrals to licensed professionals [4] [5]. Providers are also prohibited from selling or sharing identifiable health information without user consent [1] [3] [4] [5], except as necessary for functionality under specific contractual agreements [1]. Compliance with HIPAA privacy and security provisions is required [1], and non-compliance may lead to fines of up to $2,500 and state court actions for injunctive relief, although the law creates no private right of action [4].

SB 226 revises the AIPA’s disclosure requirements, limiting them to instances where individuals explicitly ask whether they are interacting with AI [3]. Disclosure is also mandated in high-risk interactions involving sensitive personal information or significant decisions, with verbal disclosure required at the beginning of conversations and written disclosure required before any written interactions [3]. If the generative AI clearly identifies itself as non-human throughout the interaction, the supplier is exempt from enforcement actions for failing to disclose [3] [5]. Violations may incur fines of up to $2,500 [3].

SB 332 extends the AIPA’s duration until July 1, 2027 [5], ensuring continued regulatory oversight during this period [3].

Additionally, SB 271 expands prohibitions against abuses of personal identity to encompass AI-generated content, addressing risks associated with deepfakes [5]. The law now covers nonconsensual use of personal identity in contexts beyond advertising, including fundraising and product sales [4] [5]. It revises the definition of personal identity to include AI-generated likenesses and prohibits the unauthorized creation or modification of content featuring an individual’s personal identity for commercial purposes [5]. The law also preserves the private right of action for individuals whose personal identity is used or published without authorization, allowing for injunctive relief, damages, and attorneys’ fees [4] [5].

Under the AIPA [5], generative AI is specifically defined to include technology that simulates human conversation [5], and consumers must be informed when interacting with generative AI instead of a human [5], particularly if requested or if the provider is in a regulated profession [5]. Advertising restrictions prevent suppliers from using mental health chatbots to promote specific products or services without clear identification of such advertisements [1], although chatbots can recommend users seek assistance from licensed professionals [1]. As AI continues to permeate various professional fields [2], it is anticipated that other states may adopt similar measures to ensure the responsible and safe use of this technology [2].

Conclusion

The enactment of the AIPA and its subsequent amendments positions Utah at the forefront of AI regulation, setting a precedent for other states. By mandating transparency and protecting consumer rights, these laws address the ethical and privacy concerns associated with AI technologies. As AI becomes increasingly integrated into professional practices, the measures implemented by Utah may serve as a model for ensuring the responsible and safe use of AI across the United States.

References

[1] https://www.jdsupra.com/legalnews/utah-law-aims-to-regulate-ai-mental-1971027/
[2] https://news.bloomberglaw.com/bloomberg-law-analysis/analysis-utahs-regulation-of-lawyers-ai-use-what-to-know
[3] https://natlawreview.com/article/utah-enacts-ai-amendments-targeted-mental-health-chatbots-and-generative-ai
[4] https://perkinscoie.com/insights/update/new-utah-ai-laws-change-disclosure-requirements-and-identity-protections-target
[5] https://www.jdsupra.com/legalnews/new-utah-ai-laws-change-disclosure-4962331/