Introduction
The evolving regulatory landscape for Automated Decision-Making Technologies (ADMT) is reshaping the obligations of businesses, particularly in the context of privacy and artificial intelligence. Recent updates to the California Privacy Protection Agency (CPPA) Regulations and various state-level initiatives in the United States reflect an increasing focus on governing these technologies.
Description
The CPPA Regulations impose specific obligations, including consumer opt-out rights, on businesses that use Automated Decision-Making Technologies (ADMT), which encompass artificial intelligence systems, to make significant decisions about consumers [2]. The definition of ADMT has been narrowed to technologies that process personal information and “replace or substantially replace” human decision-making, excluding those that merely assist human decision-makers [2]. A “significant decision” is now defined as one affecting financial services, housing, education, employment, or healthcare; the previous, broader scope has been removed [2].
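To make the narrowed definition concrete, the sketch below encodes the two tests described above as a hypothetical screening helper; the class, field names, and category strings are illustrative assumptions, not regulatory language.

```python
from dataclasses import dataclass

# The five "significant decision" areas named in the updated regulations [2].
SIGNIFICANT_DECISION_AREAS = {
    "financial services", "housing", "education", "employment", "healthcare",
}

@dataclass
class SystemProfile:
    """Hypothetical description of a decision-making system (illustrative only)."""
    processes_personal_information: bool
    replaces_human_decision_making: bool  # systems that merely assist humans are excluded [2]
    decision_area: str

def triggers_admt_obligations(system: SystemProfile) -> bool:
    """One illustrative reading of the narrowed ADMT definition in [2]."""
    return (
        system.processes_personal_information
        and system.replaces_human_decision_making
        and system.decision_area in SIGNIFICANT_DECISION_AREAS
    )

# Example: a system that fully automates loan approvals would be in scope.
assert triggers_admt_obligations(SystemProfile(True, True, "financial services"))
```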
Businesses engaged in profiling for employment or educational purposes, or using personal information to train ADMT, are no longer subject to the ADMT obligations but must still conduct risk assessments [2]. Risk assessments for profiling are otherwise required only when the profiling is based on sensitive location data, and profiling for behavioral advertising does not trigger compliance requirements [2]. Companies are also no longer required to submit risk assessments to the California Privacy Protection Agency (CPPA); instead, they must provide an attestation and designate a point of contact [1]. Documentation for risk assessments conducted in 2026 and 2027 is due by April 1, 2028, and each subsequent assessment must be submitted by April 1 of the year following its completion [1].
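The filing timetable described above reduces to a simple rule; the following sketch is a reading aid only, assuming the deadlines reported in [1] (assessments conducted in 2026 or 2027 share an April 1, 2028 deadline, and later assessments are due April 1 of the year after completion).

```python
from datetime import date

def risk_assessment_deadline(assessment_year: int) -> date:
    """Illustrative encoding of the submission schedule reported in [1]."""
    if assessment_year in (2026, 2027):
        # Assessments conducted in 2026 and 2027 share a single deadline [1].
        return date(2028, 4, 1)
    if assessment_year > 2027:
        # Later assessments are due April 1 of the year after completion [1].
        return date(assessment_year + 1, 4, 1)
    raise ValueError("the schedule in [1] covers assessments from 2026 onward")

# Example: an assessment completed in 2029 would be due April 1, 2030.
assert risk_assessment_deadline(2029) == date(2030, 4, 1)
```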
The regulations clarify that pre-use notices for ADMT may be included in existing notices provided at the time of data collection [2]. Businesses are no longer required to share abridged risk assessments with the CPPA, but they must submit risk assessments for every year in which assessments were conducted, on the schedule described above [2].
In California, the California Privacy Protection Agency has raised concerns that a proposed federal moratorium on state AI laws could undermine existing privacy rights established by voters in 2020, particularly regarding automated decision-making technology and transparency in how personal information is used [3]. Legal experts warn that this legislation may impede the agency’s ability to regulate automated decision-making and enforce protections against deepfakes [3]. State Sen. Josh Becker has criticized the bill for undermining California’s leadership in AI regulation, emphasizing his efforts to introduce legislation that would require AI developers to provide consumers with tools to identify the use of generative AI [3]. California has been proactive in AI regulation, having passed 22 AI-related bills last year, more than any other state [3].
In addition to California’s initiatives, new AI regulations in Arkansas require public entities to develop comprehensive AI policies and establish ownership rights over content generated by generative AI, effective August 3, 2025 [1] [2]. Kentucky has enacted a law directing the creation of AI policy standards, while Maryland has formed a working group to study private-sector AI use and recommend regulations [2]. Montana’s law requires risk management policies for AI-controlled critical infrastructure and limits government restrictions on computational resources [2]. Utah has introduced several regulations, including disclosure requirements for consumer-facing generative AI services and specific rules for AI-supported mental health chatbots [2]. West Virginia has established a task force to identify economic opportunities related to AI and develop best practices for public sector use [2].
If the CPPA finalizes the regulations after August 31, 2025, the updates will take effect on January 1, 2026 [1].
Conclusion
The regulatory landscape for Automated Decision-Making Technologies is becoming increasingly complex, with significant implications for businesses and consumers alike. The CPPA Regulations and various state-level initiatives in the United States underscore the need for businesses to adapt to new compliance requirements and the growing emphasis on privacy and transparency. These developments highlight the critical role of regulation in shaping the future of artificial intelligence and its integration into society.
References
[1] https://natlawreview.com/article/california-privacy-protection-agency-releases-updated-regulations-whats-next
[2] https://www.jdsupra.com/legalnews/from-california-to-kentucky-tracking-6446347/
[3] https://calmatters.org/economy/technology/2025/05/state-ai-regulation-ban/